FMPA v17.02, not FMPS.
How does one determine the ideal FMPA cache setting?
When should the cache be flushed manually?
Are there problems with FMP caching, as reported elsewhere?
My platform: iMac i7, 32GB RAM, external 1TB Samsung EVO850 boot disk running High Sierra on APFS.
I import and manipulate large datasets in FMPA, then output the results to a separate FMPS file for end-user access.
"Large" currently means several million wide records containing alphanumeric and GPS-coordinate data.
Among other things, the routine retrieves values from many unrelated tables and writes the results into the current table. Previously this was done exclusively with a simple Replace Field Contents() step for each field, each using a calculated ExecuteSQL() SELECT statement. But that took a lot of time.
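For concreteness, the old approach looked roughly like this (table and field names are illustrative, not my actual schema):

```
# One Replace Field Contents per target field; the calculation runs
# an ExecuteSQL SELECT for every record in the found set.
Replace Field Contents [ With dialog: Off ; Targets::Latitude ;
  ExecuteSQL (
    "SELECT Latitude FROM Coordinates WHERE SiteID = ?" ;
    "" ; "" ; Targets::SiteID )
]
# ...repeated for each field being populated.
```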
Now, where a single SELECT statement can yield multiple relevant field values for the same record, the ExecuteSQL() result is fed into a loop that distributes the values across the relevant fields using Set Field(). In testing on smaller data sets, this combination of Replace Field Contents() and Set Field() improved performance markedly.
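The new pattern, again with illustrative names: one SELECT returns several columns at once, and the result string is split and distributed with Set Field() instead of issuing one query per field:

```
# One query per record instead of one per field; "|" is an
# arbitrary field separator chosen to avoid clashing with the data.
Set Variable [ $row ; Value:
  ExecuteSQL (
    "SELECT Latitude, Longitude, Elevation FROM Coordinates WHERE SiteID = ?" ;
    "|" ; "" ; Targets::SiteID ) ]
Set Variable [ $row ; Value: Substitute ( $row ; "|" ; ¶ ) ]
Set Field [ Targets::Latitude  ; GetValue ( $row ; 1 ) ]
Set Field [ Targets::Longitude ; GetValue ( $row ; 2 ) ]
Set Field [ Targets::Elevation ; GetValue ( $row ; 3 ) ]
```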
But running the entire routine from scratch, on the full data set, is painfully slow: 36 hours so far, and still running.
I am wondering whether I have missed some subtle FMP trick.
For example, the FMPA cache is set to 2048 MB. Yet I have no idea how FMPA manages its cache: whether that size is too large, and whether the cache should be flushed within the looping routines.
Could managing a large cache, in these circumstances, itself hurt performance? Would a smaller cache do better?
Presumably, I could farm the process out to FMPS and, where feasible, run the routines concurrently using Perform Script on Server (PSoS).
Here I assume FMPS could then take better advantage of concurrency and the underlying multicore hardware.
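What I have in mind is something like the following sketch, where the found set is split into batches and each batch is pushed to the server without waiting, so the batches run in parallel server-side sessions ("Process Batch" and $batchID are hypothetical):

```
# Launch one server-side session per batch; with Wait for completion
# off, the client continues immediately and the batches run concurrently.
Loop
  Exit Loop If [ $batchID > $batchCount ]
  Perform Script on Server [ Wait for completion: Off ;
    "Process Batch" ; Parameter: $batchID ]
  Set Variable [ $batchID ; Value: $batchID + 1 ]
End Loop
```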
The alternative is to rewrite this particular database in MySQL or PostgreSQL, with FMP still acting as the front end.