FileMaker 15, and even more so 16, introduced persistent caching to speed up solution launching.
As always with FMI: good idea, terrible execution. Now, thanks to it, my solution takes 70 s to start, instead of 18.9 s without a cache (cache trashed). True, with a brand-new cache it boots faster, in 5.8 s, but it won't take long before it gets slower again (the 3rd try already added 1 s).
The cause is obvious, and it even shows in the top calls log as "counting difference". If you have a big dataset, the cache gets big (2 GB in my case), and FileMaker just takes a long time comparing the cache to the online version. So if you have 350K records that you happened to view once in the solution, then at each start-up it will compare its 350K cached records to the 350K actual records on the server, even if you set your solution to go to only 1 record at startup.
It's an obvious, colossal waste of time.
Obviously, the cache has to be throttled in size and its resolution time tested. If that time exceeds the first no-cache startup time, the cache should be trashed.
That's an obvious point that should have been considered on day one of the feature's implementation. But we're on the second version of it (16 introduced more caching), and it still hasn't been addressed.
So the idea is this: when the cache file doesn't exist yet, measure the time it takes to start the solution.
Then, on each subsequent startup, measure the startup time again. If that new startup time is greater than the no-cache time, ditch the cache. Of course, measure only the database opening time (not script time), and log it by connection kind (WAN, LAN).
That way, the feature would itself ensure that the cache's goal, faster startup, is always met, instead of doing exactly the opposite of what it was built for.
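For illustration only, here is a minimal sketch of that self-checking heuristic in Python. FileMaker would of course have to implement this internally; the file names, the `open_fn` callback, and the per-connection baseline store are all hypothetical, made up for the sketch.

```python
import json
import os
import time

CACHE_FILE = "solution.cache"         # hypothetical persistent cache file
BASELINE_FILE = "startup_times.json"  # hypothetical no-cache baselines, keyed by connection kind


def load_baselines():
    """Return the recorded no-cache open times per connection kind (WAN, LAN)."""
    if os.path.exists(BASELINE_FILE):
        with open(BASELINE_FILE) as f:
            return json.load(f)
    return {}


def open_database(connection_kind, open_fn):
    """Open the database via open_fn, timing only the open (not startup scripts).

    First open with no cache records the baseline for this connection kind.
    Later, if a cached open is slower than that baseline, the cache is trashed,
    so the next start falls back to a plain (faster) no-cache open.
    """
    baselines = load_baselines()
    had_cache = os.path.exists(CACHE_FILE)

    start = time.monotonic()
    open_fn()  # the actual database open
    elapsed = time.monotonic() - start

    if not had_cache:
        # No cache existed: this is (or refreshes) the no-cache baseline.
        baselines[connection_kind] = elapsed
        with open(BASELINE_FILE, "w") as f:
            json.dump(baselines, f)
    elif connection_kind in baselines and elapsed > baselines[connection_kind]:
        # The cache made startup slower than no cache at all: ditch it.
        os.remove(CACHE_FILE)

    return elapsed
```

The key design point is that the baseline is kept per connection kind, since a WAN open is legitimately much slower than a LAN one and comparing across the two would trash perfectly good caches.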