The idea behind this is to determine, perhaps from the generated ECL graphs or earlier (from the AST, or during the resourcing of the graphs themselves), certain measures of good ECL versus bad ECL.
Such measures could include:
- Detectable use of ECL that appears more procedural in nature than declarative.
- Use of certain ECL functionality, such as PERSIST, SEQUENTIAL, CRON, and WHEN, that also begins to converge on more procedural thinking than desired.
- Walking the graphs to determine 'pinch points' and other bottlenecks that prevent useful optimizations: hoisting, simplification, factoring, and possible parallelization of the desired implementation.
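To illustrate the first two measures, the sketch below scans ECL source text for the workflow services named above (PERSIST, SEQUENTIAL, CRON, WHEN) that tend to signal procedural thinking. A real detector would work from the AST or the generated graphs rather than raw text; the function name, keyword list, and sample snippet here are illustrative assumptions, not part of any existing tool.

```python
import re

# Workflow services that tend to indicate procedural rather than
# declarative style. A crude lexical proxy for the AST/graph analysis
# described above; a sketch only, not an actual compiler check.
PROCEDURAL_KEYWORDS = ("PERSIST", "SEQUENTIAL", "CRON", "WHEN")

def procedural_score(ecl_source: str) -> dict:
    """Count each procedural-leaning keyword in the (hypothetical) source."""
    upper = ecl_source.upper()
    return {kw: len(re.findall(r"\b" + kw + r"\b", upper))
            for kw in PROCEDURAL_KEYWORDS}

# Hypothetical ECL fragment: PERSIST pins an intermediate and SEQUENTIAL
# imposes an ordering the compiler cannot rearrange.
sample = """
filtered := ds(amount > 100) : PERSIST('~demo::filtered');
SEQUENTIAL(OUTPUT(filtered), OUTPUT(COUNT(filtered)));
"""
print(procedural_score(sample))
# → {'PERSIST': 1, 'SEQUENTIAL': 1, 'CRON': 0, 'WHEN': 0}
```

A score like this could be reported per attribute or per workunit, flagging code that leans on explicit ordering where a declarative formulation would leave the compiler free to choose.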
One of the general points is that any compartmentalization within the code, whether functional, modular, or attribute-based in nature, that creates a boundary to the above refactoring and simplification of the ECL by the compiler should be spotted, and such boundaries dissolved.
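The graph walk for 'pinch points' might be sketched as follows: in a dataflow DAG, a pinch point is an interior node that every source-to-sink path must pass through, so nothing upstream of it can run in parallel with anything downstream. The graph representation, node names, and function names below are hypothetical, not the actual ECL graph structures.

```python
def reachable(graph, start, skip=None):
    """Nodes reachable from start, pretending 'skip' has been removed."""
    seen, stack = set(), [start]
    while stack:
        n = stack.pop()
        if n in seen or n == skip:
            continue
        seen.add(n)
        stack.extend(graph.get(n, ()))
    return seen

def pinch_points(graph, source, sink):
    """Interior nodes whose removal disconnects source from sink,
    i.e. nodes that lie on every source-to-sink path."""
    nodes = set(graph) | {m for vs in graph.values() for m in vs}
    return sorted(n for n in nodes - {source, sink}
                  if sink not in reachable(graph, source, skip=n))

# Example DAG: two parallel branches that re-converge at 'join',
# making 'join' the single pinch point between source and sink.
g = {"src": ["a", "b"], "a": ["join"], "b": ["join"], "join": ["sink"]}
print(pinch_points(g, "src", "sink"))
# → ['join']
```

The remove-and-retest approach here is quadratic and purely illustrative; a production analysis would more likely compute dominators over the dataflow graph in a single pass.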
A further point: one of the prime directives of using ECL is the ability to state what you want, not how you want it done; it is then the compiler (and engines) that determine the 'how'. As ECL code becomes more and more complex, with one attribute built upon another, it becomes harder for the compiler to determine the best 'how', and easier for the user to inadvertently contribute to those difficulties.