Could someone from Filemaker answer this please?
A cache change of a few MB is not going to make any noticeable difference to performance. If you were changing it by gigabytes you might see a difference in some cases.
Rule of thumb:
The cache has already been set to the minimum recommended value. Setting it lower than that may reduce performance and reliability. Don't reduce the cache below its default setting.
Is performance your only criterion?
The cache is used to store new and old data in RAM so the hard drive isn't accessed as much. The more cache, the less disk access and the better the performance, up to the point where managing the cache costs more than is saved by not accessing the drive. A lot of unused cache is what you are trying to avoid.
Cache is impacted as much by adding new data as by requesting existing data from the hard drive. This makes looking at the stats problematic if you don't know what's happening. You may see the cache hits drop without knowing why: a large import can use a lot of cache, thereby dropping cache hits when existing data is requested.
Cache flushing is performed during idle time. If there is no idle time, then the cache isn't flushed unless it's forced to be by filling up. But, except in the case of a long import or Replace command, there is always idle time.
With FMS 14 I use Progressive backup to flush cache. Backups require the cache be flushed to the drive so the backup file is complete.
When I set up an FMS 14 I use cache of 1200 MB and Progressive backup every 1 min.
I could use a cache of 2400 and it wouldn't make a difference in performance but might make a difference in how long it takes to do the progressive backup.
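For reference, the cache size can also be read and set from the server's command line (a sketch assuming the FMS 13/14 `fmsadmin` CLI; verify the exact option names with `fmsadmin help` on your install, and note the account name and password below are placeholders):

```shell
# Read the current database server cache size (reported in MB).
fmsadmin get serverconfig cachesize -u admin -p pass

# Set the cache to 1200 MB.
fmsadmin set serverconfig cachesize=1200 -u admin -p pass
```

Progressive backup itself (folder and save interval) is configured in the Admin Console rather than through these commands.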
If you are looking to speed up reports, then use faster drives; SSDs would be best. The reason is that a report usually needs a lot of records (data) to compose it, but it is unlikely the data will already be in cache, so it must be retrieved from disk. Cache size in this case is not the limiting factor.
Maximizing performance means giving the server more cache so information is stored in RAM instead of on the much slower storage. The risk is that if the server crashes, there is more unflushed data to be lost. So increasing cache increases performance and also increases the potential to lose data if the server crashes or loses power. Ideally you don't want the cache to be at 100%; you really want it to vary around 96-99%. If it drops lower, you need to increase cache. If you stay at 100% all the time, you can drop the cache because it isn't needed.
ch0c0halic has some good advice for you. Cache will matter more the slower your storage is. For example, Mac Mini servers with 5400 rpm hard drives really can use a lot of cache since the drives are so slow. With a really fast RAID, SSD, or flash storage, the cache isn't as important, since the fast storage increases performance overall.
Thanks for your reply. I don't do much FMP db development. I primarily just administer the server.
The document I referred to suggests lowering the server cache number if the stats show that it's constantly at 100%, which one of my servers is. I tried lowering the cache and that didn't change the stats (I didn't reboot the server), so I'm trying to understand the linked recommendation.
Speed seems ok but I suppose it could likely be improved. I was just reading the document to see what server recommendations I could apply. So if the default setting is the lowest recommended setting and it's constantly at 100% then what do you recommend, from the server config side of things? Sounds like I should leave the cache at 512.
I currently don't do progressive backups, only nightly full backups.
I am currently using an SSD RAID for the databases, and the cache pretty much sits at 98-100% most of the time. Lowering it to the minimum does not change the stats much, but the cache hit % does drop a little. I currently have a cache that is 50% of the database size. I also have a Refresh:Flush Cache to Disk script step that is run as part of a few very common scripts in the solution to keep up with storing the cache data. The minimum is 200MB, and some solutions that have only 200MB of data run just fine 100% in the cache. I am not sure it is much different as the files get larger.
Use progressive backups.
For performance, SSD storage is the number one advantage I have seen. My development server is the same as my production server except the dev server has a 5400RPM drive. Slow backups, slow cache flush, slow server side scripts, and other things are just slow. People using native PCIe storage on the server have reported it is even faster.
You should really look at the stats from your server about how much network data is moving in and out and also the disk read/write stats. Cache alone is only part of the puzzle. Cache Unsaved % is important too.
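One quick way to eyeball those numbers over time is to average columns out of the server's stats log (a hedged sketch assuming a tab-delimited log with a header row; the actual column names in your Stats.log may differ, so check its header first):

```python
import csv

def column_average(path, column):
    """Average a numeric column from a tab-delimited stats log."""
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f, delimiter="\t"))
    values = [float(r[column]) for r in rows if r.get(column, "") != ""]
    return sum(values) / len(values) if values else None

# Example (these column names are assumptions -- use whatever your log's header shows):
# print(column_average("Stats.log", "Cache Hit %"))
# print(column_average("Stats.log", "Cache Unsaved %"))
```

Watching the averages alongside disk read/write and network I/O gives a fuller picture than the live cache gauge alone.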
How many GB total of databases are you hosting?
The Distribute Cache Flush setting is not in the admin console. The document you read is old, but most of it still holds true. I am not sure if this setting can be changed via the command line; maybe somebody knows.
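If you want to check from the command line, dumping the whole server config shows every setting the CLI exposes (hedged: I don't know whether a cache-flush distribution key is among them on FMS 14; if it doesn't appear in this output, it likely isn't settable this way, and the credentials below are placeholders):

```shell
# List all server-level settings the fmsadmin CLI knows about.
fmsadmin get serverconfig -u admin -p pass
```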