
Server cache optimization - correct approach?

Question asked by wxtyrs on Oct 17, 2018
Latest reply on Oct 20, 2018 by wimdecorte

Is there something I am missing here? Are there problems or downsides to my cache-optimization approach?

 

I know others have concerns about large FMS cache allocations.

 

Any comments are welcome ...

 

I am using FMS as a "single-user" app to process large tables concurrently.  This entails calculations, lookups, ExecuteSQL() queries, and a whole lot else.  My FMS cache size is now 8GB, having worked my way up from 512MB.
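To give a flavor of what one pass looks like, here is a rough stand-in in Python/sqlite3 rather than my actual FileMaker code (every table and field name below is made up): a lookup joined to a calculation over a big table, which is the shape of work ExecuteSQL() is doing here.

import sqlite3

# A made-up stand-in for one transform pass: join a large "readings"
# table to a "sites" lookup table and derive a new value per row,
# the way an ExecuteSQL() query feeds a calculation in FileMaker.
# All table and field names here are hypothetical.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE readings (id INTEGER PRIMARY KEY, site_id INTEGER, ts TEXT, value REAL);
    CREATE TABLE sites (site_id INTEGER PRIMARY KEY, baseline REAL);
    INSERT INTO readings VALUES (1, 10, '2018-01-01', 42.0), (2, 10, '2018-02-01', 44.5);
    INSERT INTO sites VALUES (10, 40.0);
""")

# Lookup plus calculation in one pass: each reading's deviation from
# its site's baseline, i.e. a change-over-time style derivation.
for row in con.execute("""
        SELECT r.id, r.ts, r.value - s.baseline AS deviation
        FROM readings AS r
        JOIN sites AS s ON s.site_id = r.site_id
        ORDER BY r.ts"""):
    print(row)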

 

This is not a typical transactional database serving many users.  Rather, it transforms data for analysis, dealing with issues such as changes in the data over time and across different contexts, for use elsewhere.

 

By "large" I mean roughly 1-6 million rows each, and 25-140 variables wide.

 

The performance (time) benefit of using FMS as opposed to FMPA is like night and day.

 

The whole process still takes more than 12 hours, with the code roughly 75% optimized, so there is more still to do.

 

In use, the cache hit rate remains a constant 100%, and Cache Unsaved % holds steady at 0% most of the time.  The latter very occasionally rises as high as 40% during an especially intense burst of SSD reads (30,420 KB/sec).

 

Less than 1% of the time, the cache hit rate has fallen into the 80-99% range; even then, the median figure is more like 99%.

 

I raised the cache allocation step by step while monitoring performance via Stats.log, guided by the Cache Unsaved % figure and targeting 0% at least 99% of the time.  At the default setting, the unsaved figure was consistently above 0%.
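For anyone who wants to reproduce the monitoring step, a small script along these lines summarizes the two columns I watched.  A sketch only: the log path and the exact column header names are assumptions, so check them against the Stats.log your FMS version writes (tab-delimited, with a header row).

import csv
import statistics

LOG_PATH = "Stats.log"  # hypothetical path; point this at your FMS Logs folder

hit, unsaved = [], []
with open(LOG_PATH, newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f, delimiter="\t"):
        try:
            # Column names are assumptions; match them to your log's header.
            hit.append(float(row["Cache Hit %"]))
            unsaved.append(float(row["Cache Unsaved %"]))
        except (KeyError, TypeError, ValueError):
            continue  # skip malformed rows or repeated header lines

if hit:
    at_full_hit = 100 * sum(1 for h in hit if h >= 100) / len(hit)
    at_zero_unsaved = 100 * sum(1 for u in unsaved if u == 0) / len(unsaved)
    print(f"samples:                    {len(hit)}")
    print(f"median cache hit %:         {statistics.median(hit):.1f}")
    print(f"% of samples at 100% hit:   {at_full_hit:.1f}")
    print(f"% of samples at 0% unsaved: {at_zero_unsaved:.1f}")
    print(f"max unsaved %:              {max(unsaved):.1f}")

The last two numbers map directly onto the 0%-unsaved target I was tuning toward.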

 

Platform: iMac 17 @ 4GHz, 32GB RAM, and a 1TB SSD.  FM client and server are on the same machine, for now.
