The current absolute limit of 10,000 (or 50,000 for tail recursion) on custom function calls severely cripples their usefulness:
- The absolute limit ruins the potential power of divide-and-conquer recursion, and
- makes it infeasible to call custom functions from within custom functions, since each nested call further shrinks the number of values that can actually be processed.
The developer should have the option to go beyond those limits:
- Instead of an ABSOLUTE CALL LIMIT...
- ...there should be a CALL-DEPTH LIMIT
- Maybe this should be an OPTION that can be applied to individual custom functions
What value the CALL-DEPTH LIMIT should have is up to FMI - but even if it were only 40 (yes, forty), a divide-and-conquer custom function could iterate over 2^40 values, that is, a little over a million million* values. (* let's avoid the word billion, eh?) That should be enough for most purposes, I think.
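To illustrate why a call-depth limit is so much more generous than an absolute call limit, here is a minimal sketch (in Python, not FileMaker calculation syntax - the function name `dc_sum` and the depth bookkeeping are purely illustrative): a divide-and-conquer "sum over N values" whose recursion depth grows with log2(N) rather than with N.

```python
# Illustrative sketch: divide-and-conquer recursion needs only
# logarithmic call depth, so a depth limit of 40 covers on the
# order of 2^40 values.
def dc_sum(lo, hi, value_at, depth=1):
    """Sum value_at(i) for lo <= i <= hi; also report the deepest call."""
    if lo == hi:
        return value_at(lo), depth
    mid = (lo + hi) // 2                      # split the range in half
    left, dl = dc_sum(lo, mid, value_at, depth + 1)
    right, dr = dc_sum(mid + 1, hi, value_at, depth + 1)
    return left + right, max(dl, dr)

# Summing 2^16 = 65,536 values reaches a call depth of only 17.
total, max_depth = dc_sum(1, 2**16, lambda i: i)
```

Under an absolute limit of 50,000 calls this computation would already be impossible in one pass, yet its call *depth* never exceeds 17; the same pattern with a depth limit of 40 leaves room for roughly 2^40 values.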
(While we are at it, maybe the execution of custom functions can be improved + tuned to take advantage of modern (64-bit) multi-core processors and spread the load across multiple cores.)
I am aware of the potential problems that can occur with badly programmed CFs (resources, memory, time, '?'), and it may be necessary to discuss and develop further safety measures (e.g. ESC = abort => '?'); however, in my experience, the results of a finely programmed custom function can often be well worth waiting for!
- Powerful custom functions to process found sets considerably larger than 50,000 records
- Scalability: the '?'-death that occurs when a database outgrows its custom functions can be avoided
- Better encapsulation + maintenance
- CFs will be more maintainable, because CFs can call other CFs without fear of premature-'?'-death
- Thus logic does not have to be duplicated across several CFs
- Lookup / binary search over found set or related records using GetNthRecord
- Create data-visualizations by generating SVG image text
- Data Mining
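The binary-search use case above can be sketched as follows (again in Python rather than FileMaker syntax; the `get_nth` callback is a stand-in for `GetNthRecord ( field ; n )` over a sorted found set, and the names are illustrative only):

```python
# Illustrative sketch: binary search over a sorted found set,
# probing records by position the way GetNthRecord would.
def binary_search(get_nth, count, target):
    """Return the 1-based record number holding target, or 0 if absent."""
    lo, hi = 1, count
    while lo <= hi:
        mid = (lo + hi) // 2
        value = get_nth(mid)                 # one GetNthRecord-style probe
        if value == target:
            return mid
        elif value < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return 0

records = [3, 8, 15, 15, 42, 99]             # sorted found set
pos = binary_search(lambda n: records[n - 1], len(records), 42)
```

Because each probe halves the search range, even a found set of 50,000 records needs at most about 16 probes (2^16 > 50,000) - exactly the kind of workload where a call-depth limit is more than sufficient.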