crw030

FM12:ESS:Problem computing Grand Summary of ESS records (SQL Server)

Discussion created by crw030 on Aug 19, 2013
Latest reply on Sep 3, 2013 by LSNOVER

All,

 

At my wits' end: I've been working on this for days, trying various things in both the FM client/scripts and even trying to tackle it from the SQL Server side of things.

 

My environment is:

- FileMaker Server Advanced 12.0v3
- FileMaker Pro clients
- SQL Server 2008 R2 hosting two very large revenue tables:
  - table #1 averages 300k rows/year, 10 years of data
  - table #2 averages 600k rows/year, 10 years of data

 

Problem Statement: Users are accustomed to having a grand summary of the found set within either raw table. This worked great when all the data was in FMS (about 1-2 years' worth), but disk usage was climbing rapidly and application responsiveness declined every month as more records were loaded.

 

Observed Performance under different Trial Scenarios:

Being a "SQL Guy" I moved both large tables to a SQL Server 2008R2 backend. Most everything is working great, except SUMMARY's of any type on the found set data are painfully slow. At a couple hundred its probably only 100x slower than when the data was on FMS - but at 30,000-100,000 its essentially unuseable.

 

If I add a FileMaker summary field, layouts take minutes to load and the summary seems to recalculate at random (even when the summary field is absent from the layout).

If I use a SQL trick to calculate the summary on SQL Server, it doesn't stay in balance as users adjust the found set with Omit Record, Omit Multiple, or Constrain Found Set.
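
To give an idea of the SQL side (table and column names below are just placeholders for my real schema), the trick is basically a pre-aggregated query like this, which SQL Server answers in seconds but which knows nothing about the user's current found set:

    -- Placeholder names (dbo.Revenue, FiscalYear, Revenue) standing in for my real schema.
    -- SQL Server computes this in seconds, but it sums whatever the WHERE/GROUP BY says,
    -- not whatever records happen to be in the user's found set at that moment.
    SELECT FiscalYear, SUM(Revenue) AS TotalRevenue
    FROM dbo.Revenue
    GROUP BY FiscalYear;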

 

If I attempt to compute the summary manually (say, with a script looping over GetNthRecord()), I can see FileMaker querying 25 rows at a time by primary key, so the application waits on thousands of queries the first time through, while a subsequent recalculation takes only moments. However, every new Find request appears to clear that cache and the whole process starts over again.
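
Roughly speaking (paraphrasing what I see in a trace, not the literal statement), each of those fetches has this shape:

    -- Approximate shape only; the real statement lists every column in the shadow table
    -- and uses parameter markers, pulling about 25 primary-key values per round trip.
    SELECT "RevenueID", "FiscalYear", "Revenue" /* ...every other column... */
    FROM "Revenue"
    WHERE "RevenueID" IN (?, ?, ? /* ...25 values... */);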

 

If I perform an EXPORT to local disk, FileMaker appears to work in larger batches (it looks like a couple hundred rows at a time), but it seems to pull every column, not just the ones I'm exporting, so bandwidth usage is excessive. Once the export completes, a local query against the text output is virtually instantaneous, but I'm concerned about how kludgy that would be to set up on every client.

 

My question: Has anyone run into a similar problem and can recommend an approach to improve this? I've fed SQL Server as many as 60,000 row IDs and asked for SUM(Revenue); the SQL portion takes seconds, but generating that ID list from the current found set takes MINUTES.
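
For reference (names are again placeholders), the statement I hand to SQL Server is just a sum over an IN list built from the found set's primary keys:

    -- Placeholder names; the ID list is concatenated by a FileMaker script that walks
    -- the found set, which is the slow part. The SUM itself returns in seconds.
    SELECT SUM(Revenue) AS FoundSetTotal
    FROM dbo.Revenue
    WHERE RevenueID IN (1001, 1002, 1003 /* ...up to ~60,000 IDs from the found set */);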

 

If I could change the batch size to 1,000 and restrict the fields retrieved for each row to the one or two numeric values I need during the summary phase, that would probably be ideal. However, I have been unable to find anything that explains the 25-row batch size the client uses (or any way to change it).

 

In cross-db h3ll.

crw030
