I do not understand why you are using that function.
For related records, there is the standard List function: List ( Related_Table::Field ).
For a found set, a simple loop is faster and not limited by the number of records (you can create a generic script and pass it the name of the field to be listed).
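Such a generic loop script might look like the following sketch (the script name, variable names, and the convention of passing the fully qualified field name as the script parameter are all illustrative, not a standard):

```
# Script: "List Field In Found Set" (hypothetical name)
# Script parameter: a fully qualified field name, e.g. "Contacts::Email"
Set Variable [ $fieldName ; Value: Get ( ScriptParameter ) ]
Set Variable [ $list ; Value: "" ]
Go to Record/Request/Page [ First ]
Loop
    Set Variable [ $list ; Value: List ( $list ; GetField ( $fieldName ) ) ]
    Go to Record/Request/Page [ Next ; Exit after last: On ]
End Loop
Exit Script [ Text Result: $list ]
```

Because GetField() resolves the field name at runtime, the same script works for any table or field in the found set.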
Moreover, if your criteria for finding the records can be used to create a dynamic relationship, you are back in the first case.
When you call a recursive function, each pending iteration is kept open in memory until the recursion ends. So the more iterations you have, the more memory you use.
Thanks a lot for your quick answer. The List function doesn't really apply to my situation, as I want the list of the current found set, not the related records.
I agree: I would definitely use a loop over a recursive function if I were using a script. But the reason I would want a recursive function instead of a loop is that in some cases I want the list to update itself without having to call a script. A recursive function can be used in a calculation field.
When you say the more iterations, the more memory it uses: is that the client machine's RAM? Is it the cache memory set in the FileMaker preferences, or is it a server-side limitation?
As far as I know, all calculations are done on the client machine, so it should use local memory.
A recursive function uses memory for each iteration. Only when it reaches the last iteration does it start to release that memory.
Let's take an example with 3 iterations:
First) The function is called and stays in memory until the second iteration has ended.
Second) The function is called again and stays in memory until the third iteration has ended.
Third) The function is called; it is the last iteration, so the memory can finally be released.
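To make that memory growth concrete, here is a minimal hypothetical custom function that lists a field across the found set (Table::Field is a placeholder). Because the outer List() can only evaluate after the recursive call returns, every call must stay on the stack until the deepest one finishes:

```
// Custom function: ListFoundSet ( counter )
// First call: ListFoundSet ( 1 )
If ( counter > Get ( FoundCount ) ;
    "" ;
    List (
        GetNthRecord ( Table::Field ; counter ) ;
        ListFoundSet ( counter + 1 )   // must finish before List() can evaluate
    )
)
```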
In my programs, I never use a recursive function to evaluate a set of records unless I'm sure that the number of records is limited and all machines have the needed amount of memory.
Have you thought of using a self-join relationship?
Exactly what I was looking for! Thanks!!
I've tried the solution mentioned in the video to raise the limit to 50,000 iterations (adding a result parameter to the function), and it seems to work fine now. But I guess I'll always have to make sure that the found count isn't too high.
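For reference, the result-parameter version described above might look like this sketch (names are illustrative). Because the recursive call is the last thing evaluated and carries the partial list with it, FileMaker treats it as tail recursion, which raises the limit from the default 10,000 calls to 50,000:

```
// Custom function: ListFoundSet ( counter ; result )
// First call: ListFoundSet ( 1 ; "" )
If ( counter > Get ( FoundCount ) ;
    result ;
    ListFoundSet (
        counter + 1 ;
        List ( result ; GetNthRecord ( Table::Field ; counter ) )   // partial list travels with the call
    )
)
```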
I've thought about the self-join relationship, but I don't really know how to show only the found set through the relationship in order to list it. I'd be really interested if you have a solution that works using a self-join.
It depends on what your search criteria are.
For instance, I have used two global date fields related to the same date field to get the related records between two dates.
So my script sets the two global fields to the interval dates, and through the relationship I see just the records whose date field falls between the two dates.
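As a sketch, the relationship for that date-range technique could be defined with two predicates in the relationships graph (the table occurrence and field names here are hypothetical):

```
Contacts::gStartDate  ≤  Contacts_byDate::EventDate
AND
Contacts::gEndDate    ≥  Contacts_byDate::EventDate
```

gStartDate and gEndDate are global fields set by the script; only records whose EventDate falls inside the interval then appear as related records, so List ( Contacts_byDate::EventID ) returns exactly that subset.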
OK, so that would work in cases where I could replace a search with a specific filter. But I haven't found a solution for a completely custom found set.
Another option for generating the list of records in the found set is the HyperList ModularFM module.
It has the advantage of not relying on the custom function engine, and thus avoids the recursion limit.
Thanks for this!
Listing by batch seems to be an interesting and fast approach! I'll do some testing over a WAN to check whether it really speeds things up!
I just wanted to update this thread: since FileMaker 13 now has a "List of" option for summary fields, I no longer use recursive functions to list values from the current found set.
Even with a big found set, it's much faster than the recursive function, and it doesn't seem to have any limitation on the number of records that can be listed.
It is a good new feature.
One "limitation" may be that the resulting text length is capped below 10,000,000 characters.
(I wonder whether this was decreased from the 1 billion of FM 12.)
It takes about 12 seconds for 120,000 records on my Win7 local file, which is about the same time as an old script such as:
Go to Layout [layout with one field]
Copy All Records
Go to Layout [layout with a field to paste into]
There is one other limitation: it doesn't omit duplicates. That won't matter if it's a true primary key, but it will matter if you're collecting, say, a list of foreign keys.
But yes, it's a very good option.
+1 to HyperList