Have you tried running your loop script on a blank layout?
Just a shot in the dark... You might try tinkering with the FM Memory Cache size...
Just created a blank layout. Re-ran the script and it completed way more quickly.
Created 1,000,000 blank records in 2m34s (Mac, 2.8 GHz Intel Core i7).
Seems like this is absolutely part of the solution.
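For anyone following along, the loop is essentially the standard New Record loop, just run on a layout with no fields on it (a rough sketch - the layout and variable names are illustrative):

```
# Run on a layout with no fields, so nothing has to be drawn per record
Go to Layout [ "Blank" ]
Freeze Window
Set Variable [ $i ; Value: 0 ]
Loop
   Exit Loop If [ $i ≥ 1000000 ]
   New Record/Request
   Set Variable [ $i ; Value: $i + 1 ]
End Loop
Commit Records/Requests [ With dialog: Off ]
Go to Layout [ original layout ]
```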
Curiously, the import still works faster. I exported the 1 million blank records and then reimported them, creating new records in the process. The import completed in 37 seconds - 2 minutes faster than the script.
37 seconds is better than 2 minutes and 34 seconds! So any suggestions on how to bring that 2 minutes 34 seconds down are appreciated.
Was already running a cache of 128MB.
Bumped it up to the max of 256MB, then ran the record creation loop again (though this time I used the blank layout).
There was no change in the time taken compared to the 128MB cache.
We are working on a solution that does some simulation in the medical field.
The simulation element occasionally requires the creation of a large number of records. Sometimes half a million.
Our challenge is to do this quickly.
1. You could have a FileMaker table which contains 500,000 empty records and import them.
2. Have a "tabula rasa" with 500,000 records which slurps in data using Bruce Robertson's Virtual List.
The fastest way to create new records is to import from a "bank" of blank records using repeating fields - with the import set to split repeating fields into individual records. Or at least it was the fastest several versions back - it might need re-checking. The credit goes to Ugo DiLuca.
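Roughly, the technique looks like this as script steps (a sketch only - the table and field names are made up, and the splitting option is part of the saved import mapping, set up once via the Import Records dialog):

```
# Bank: utility table holding a repeating field ("filler")
# Data: target table that needs the new records
# Each non-empty repetition becomes one new record in Data
Go to Layout [ "Bank" ]
Show All Records
Import Records [ With dialog: Off ; Source: current file, table "Bank" ;
                 Target: "Data" ; Add records ;
                 Repeating fields: "Splitting them into separate records" ]
```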
I seem to recall Ray from Nightwing using a technique where you could create a relationship from a global in one table to another table (making sure that "Allow creation of related records" is checked). The global field then gets populated via Set Field in a loop. The records get created in the other table only once the record containing the global is committed.
Not sure if this would work, not sure if it's any faster, but an interesting approach.
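As best I understand the description, it would look something like this (untested, and all names are illustrative - the relationship matches Interface::gKey to Target::Key with "Allow creation of records via this relationship" enabled on the Target side):

```
# Each Set Field through the allow-creation relationship spawns a
# related record; the commit is what actually writes it to Target.
Set Variable [ $i ; Value: 0 ]
Loop
   Exit Loop If [ $i ≥ $count ]
   Set Field [ Interface::gKey ; $i ]     # repoint the relationship
   Set Field [ Target::Key ; $i ]         # allow-creation makes a new record
   Commit Records/Requests [ With dialog: Off ]
   Set Variable [ $i ; Value: $i + 1 ]
End Loop
```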
Just another point: I know of someone else who had scripts creating multiple records and hit the same slowdown. It turned out that a "Clear Cache" within the loop made a massive difference.
Found this general approach to be the fastest.
Slightly fiddly to set up the 'bank' of blank records, but worth the effort once it's working.
Lots of possible ways of implementing the 'bank', so we will experiment to find the best one for us.
BTW - tested this and Ray Cologon's approach takes much longer for bulk records.
TX to all for the advice.
Have tested all the suggested approaches and the winner by a long shot is to import from a 'bank' of null records.
Keep in mind that a repeating field can have up to 32k repetitions. Using base-2, you could import any number of records up to 32,767 from a bank of 15 records only.
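To make the arithmetic concrete (the bank sizes here are just the base-2 idea spelled out):

```
# Base-2 bank: 15 records whose repeating field has
# 1, 2, 4, 8, ..., 16384 non-empty repetitions (2^0 through 2^14).
# Any count up to 2^15 - 1 = 32,767 is a sum of some subset of these.
# e.g. to create 25,000 records, find the bank records matching the
# set bits of 25,000:
#   25,000 = 16,384 + 8,192 + 256 + 128 + 32 + 8
# then import just that found set with the repeating fields split
# into separate records.
```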
We like the 'bank' idea and it is certainly the fastest in our trials.
But we haven't quite figured out how to capitalise on your idea of importing the repeating records.
We understand that the import can be configured to split repeating records up.
But what is the best way to create the 'bank'? Is it a single record with a repeating field of 32k repetitions?
What did you have in mind?
If you have a repeating field of 32k non-empty repetitions, then each bank record, when imported, will generate 32k records in the target table. My idea is to use a repeating calculation field to control the number of non-empty repetitions in each record - so that you can quickly assemble a found set where the total number of non-empty repetitions equals the number of records you wish to create.
In testing 11 vs 12, here are a few comparisons in creating 160,000 new records using the two fastest techniques (AFAIK). I ran each test 7 times and reopened the file between each test. Test files available for anyone if requested.
I used a global number field (num) and a global calc (number) with 32,000 repetitions, with 5 records producing 160,000 new records. The calculation used was:
Case( Get ( CalculationRepetitionNumber ) ≤ Extend ( num ) ; Get ( CalculationRepetitionNumber ) )
Importing into data table which contained uniqueID and num (number) field (not on layout) in form view, splitting reps.
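To spell out how that calculation controls the record count (values here match the test above):

```
# The repeating calc fills exactly `num` repetitions per record:
#   num = 32,000 →
#     rep 1:       1 ≤ 32,000 → 1         (non-empty)
#     ...
#     rep 32,000:  32,000 ≤ 32,000 → 32,000 (non-empty)
# Repetitions beyond num return nothing, so they are empty and
# produce no records on import.
# 5 bank records × 32,000 non-empty reps = 160,000 imported records.
```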
I created 160,000 temp records in a new table with only a num field with auto-enter 1.
Results are close, but Test#1 wins in both versions for adding and deleting. :^)
Oops - it didn't keep my copy/paste. Here is a PNG of it.
Additionally, not taken into account with Test#2: the script would first need to constrain the found set to the required number of records, and it also has a greater footprint.