One thing to consider is to use regular number fields and update the field (and all of its dependents) as part of the scripted workflow. For instance: if an item is added to an invoice, the invoice total field can be a regular number field that gets set by the script that adds the item.
There is some extra work to be done here to make sure nobody has a lock on any record that needs to be updated, and you'll probably want to run a nightly scheduled server-side script to make sure all totals are still accurate.
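As a rough sketch of that add-item script in FileMaker script steps (all table and field names here — Invoices, InvoiceLines, Total, Amount — are invented for illustration):

```
# Runs after a new line item is committed.
# Assumes a plain stored number field Invoices::Total and
# line items related to the invoice via InvoiceLines.
Go to Related Record [ From table: "Invoices" ; Using layout: "Invoices" ]
Open Record/Request                  # try to take the record lock
If [ Get ( LastError ) ≠ 0 ]         # another user holds the lock
    Exit Script [ Text Result: "locked" ]
End If
Set Field [ Invoices::Total ; Sum ( InvoiceLines::Amount ) ]
Commit Records/Requests [ With dialog: Off ]
```

The `Open Record/Request` / `Get ( LastError )` pair is what handles the locking concern mentioned above: if the lock can't be taken, the script bails out instead of silently failing to update the total.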
Yeah ... get rid of the unstored calculations.
What you're discovering is two "under the hood" behaviors of FileMaker. One is the over-the-network client-server record-fetching model, and the other is the dependency chain.
First, record-fetching. When a client requests a record from the server, it gets all of it: every field, with a few exceptions (like global fields, container fields that haven't been displayed, and unstored calculations that aren't needed). The records are fetched based on how many the client needs (25 in Form View, as many as are needed for display in List and Table View). However, if you have an aggregate calculation, that changes. A summary field will need the entire found set; an aggregate calculation (using Sum or Count) will need the entire set on which it operates (such as a related set). The larger the set, the more fields per record, and the slower the network, the longer the fetch time.
The second issue is the dependency chain. When calculation A "depends" on the results of calculation B, you've created a dependency. It means FileMaker has to evaluate calculation B before it can start evaluating calculation A. It can't evaluate them simultaneously, which slows it down.
In your case, you have (based on your description) at least two levels of dependency, which themselves are based on unstored calculations. Depending on the exact nature of the calculation, the width of the tables (i.e., how many fields they have), the network speed, and how the calculations themselves have been built, you can easily run into a performance problem.
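To make the chain concrete, here's a hypothetical two-level setup (field names invented) of the kind described above:

```
# Level B — unstored calculation in InvoiceLines:
cLineTotal = Qty * UnitPrice * ( 1 - Discount )

# Level A — unstored calculation in Invoices.
# Cannot be evaluated until every related cLineTotal
# has been evaluated first:
cInvoiceTotal = Sum ( InvoiceLines::cLineTotal )
```

Displaying `cInvoiceTotal` forces the client to fetch every related line-item record and evaluate every `cLineTotal` before it can show a single number — which is exactly where the fetch-time and dependency costs compound.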
So, how to fix it? First, evaluate the width of your tables. Is there a way to reduce the number of fields? Can you move some of the fields out of the schema (the database table) and into the user interface, using tools like Conditional Formatting? Or can you eliminate duplicated fields altogether by referencing the field in the parent table directly?
Second, you might look at running a script that turns your unstored calculation into a stored result when a user performs an operation that would cause it to update. I don't know the exact nature of the application, but often, converting simple data updates into a scripted operation and then forcing the update to be a stored result is feasible. This can do away with the unstored calculation and convert it into an indexed field, which will be much faster.
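A minimal sketch of that "stamp the result" approach (again with hypothetical names — `TotalStored` is a plain, indexable number field, not a calculation):

```
# Run whenever a line item changes — e.g., from an
# OnObjectSave script trigger or at the end of the
# script that edits line items:
Set Field [ Invoices::TotalStored ; Sum ( InvoiceLines::Amount ) ]
Commit Records/Requests [ With dialog: Off ]
```

The `Sum ()` is still evaluated once, at write time, but from then on the server can index and serve `TotalStored` like any other number field instead of re-evaluating it on every fetch.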
Third, since you mentioned this is a web app, can you use the PHP engine to calculate some of what you need at runtime, directly in code?
Other ideas you can look at:
- Use ExecuteSQL to remove table occurrences from your graph that are only needed for specific display options (too many table occurrences can slow down a solution)
- Run a batch job at night to update data that don't have to be real-time
- Convert the unstored calculation that sums or counts another unstored calculation to one that goes directly to the source data (take one level out of the dependency tree)
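For the ExecuteSQL idea, something along these lines (table and field names are placeholders) lets you aggregate without adding a table occurrence to the relationships graph:

```
# Sum line items for the current invoice, no extra
# table occurrence required:
ExecuteSQL (
    "SELECT SUM ( Amount ) FROM InvoiceLines WHERE InvoiceID = ?" ;
    "" ; "" ;        # field and row separators (empty = defaults)
    Invoices::ID
)
```

This also speaks to the last bullet: the query goes straight to the source data in one step, rather than summing an intermediate unstored calculation.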
You might also take a look at this related thread: https://fmdev.filemaker.com/message/125841#125841