Over time my solution became very slow to start; it now takes 1 min 10 s to launch, even though there were no significant changes to its logic or size.
So I deleted the FileMaker cache:
First start, no cache: 18.9 s
Second start, with a brand-new cache: 5.8 s
Third start: 6.8 s
I had already felt a while ago that the cache was slowing down startup, while playing with a test database with millions of records: it soon took 2 minutes to launch (without any scripts, relationships, etc. — I reported this informally in another topic's thread). So today I took the time to actually measure it.
The cause is obvious, and it even shows in the top-calls log as "counting difference". If you have a big dataset, the cache gets big (2 GB in my case), and FileMaker takes a long time to compare the cache to the online version. So if you have 350K records that you happened to view once in the solution, then at every start-up it will compare its 350K cached records against the 350K actual records on the server, even if your solution is set to go to only 1 record at startup.
It's an obvious colossal waste of time.
Obviously, the cache has to be throttled in size, and its reconciliation time has to be tested. If that time exceeds the first no-cache startup time, the cache should simply be trashed.
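The decision rule I'm proposing could be sketched like this (the function and parameter names are hypothetical — FileMaker exposes no such API; this only illustrates the heuristic):

```python
# Hypothetical sketch of the proposed cache-management heuristic.
# Nothing here is a real FileMaker API; it only illustrates the rule:
# cap the cache size, and drop the cache whenever reconciling it with
# the server is slower than just starting cold without it.

MAX_CACHE_BYTES = 512 * 1024 * 1024  # example throttle: cap cache at 512 MB

def should_discard_cache(cache_size_bytes: int,
                         reconciliation_seconds: float,
                         cold_start_seconds: float) -> bool:
    """Return True if the cache is over the size cap, or if comparing it
    against the server takes longer than a no-cache startup."""
    if cache_size_bytes > MAX_CACHE_BYTES:
        return True
    return reconciliation_seconds > cold_start_seconds

# With the numbers measured above: a 2 GB cache whose reconciliation
# pushed startup to 70 s, versus an 18.9 s no-cache start, should go.
print(should_discard_cache(2 * 1024**3, 70.0, 18.9))  # True
```

A small healthy cache (say 100 MB reconciled in 5 s) would be kept under the same rule, so fast subsequent starts like the 5.8 s one above are preserved.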
This is so obvious that it should have been considered on day one of the feature's implementation. Yet we're on the second version of it (16 introduced more caching), and it still hasn't been addressed.
I'm on Mac 10.2.6, both client and server, with the latest FileMaker version everywhere, on a LAN, and using Apple's built-in fast SSDs.