You are caught up in a classic "catch-22".
Your calculation uses Get ( CurrentDate ) to get today's date to use in the calculation. If you make this a stored calculation, it won't update to show the correct value as time passes.
But as you have experienced, as an unstored calculation, certain database operations become much slower because FileMaker has to stop and build a "temporary index" before it can complete that action. The added delay can range from barely noticeable to hours long, as it is a nonlinear function of the number of records involved.
There are a number of possible alternative approaches, from a script that updates this field for you to a modified script that uses a different method for achieving the same result as your current script, but whether any of those are possible or practical would require a more complete picture of how you are using this calculation field and what your script has to do.
Unfortunately, not storing the results in this instance seems to slow down the find A LOT. :/
Basically I have a script which runs several different scripts back to back, saving results in variables and then pasting them into a layout. In some scripts, such as this one, which include calculation fields, it seems that if I don't select indexing, they take extremely long to run. This one in particular is trying to find out how long someone has waited for an assessment, from the date they were accepted to either the current date or an assessment date if available.
I'm wondering based on what you said, if I create a script that updates this field, and run it as the first script in the parent one that runs them back to back if that might work. I'm just not sure how I go about writing that script to update a field.
I really can't offer any more specific advice than I have already without a much more detailed understanding of your specific issue. I just don't know how you are using this field and thus any advice I give could do more harm than good.
If you want more assistance, you'll need to describe the design of your database and your script in a lot more detail.
Ok....I have attached a view of one of the scripts I use. This script tells me how long a person is waiting for an assessment. But I need to break it down by the person's age and how long they have been waiting, so there would be several more like this (16 in total), but the ClientAgeYear would look for a different range, as would the AxWaitingTimeDays. Those are the two calculation fields that are causing the issue with the time to process the find if I don't store the info.
There are global start date and end date fields the person completes, then they select a button which runs a script which for this purpose I'll call Master Script. The Master Script sets variables for the start and end dates and runs each of the scripts like the one in the example attached. It then goes to a designated layout and pastes the variable results into the appropriate fields. The reason I use $$EndDate is that if we are looking up a date that has passed, I want what their age and waiting time were at the given date.
The ClientAgeYear calculation is:
Let ( [ B = ClientDOB ;
        T = $$EndDate ] ;
  Year ( T ) - Year ( B ) - ( T < Date ( Month ( B ) ; Day ( B ) ; Year ( T ) ) )
)
The AxWaitingTimeDays calculation is:
If ( IsEmpty ( Date Received ) and IsEmpty ( Date Accepted ) ; "Please Enter Dates" ;
  Let ( [ D2 = If ( IsEmpty ( Date Accepted ) ; Date Received ; Date Accepted ) ;
          My = If ( IsEmpty ( Ax Date ) ; $$EndDate ; Ax Date ) ] ;
    My - D2
  )
)
Does this help any more??
I think a script to update stored fields in place of your unstored fields might take more time than it saves, but it is an option: you might run such a script once when the file opens, and then your data searches will be fast throughout the day. This sticks you with one, possibly massive, delay the first time you open the file (go get a coffee or something...), but then you have good performance from that point onwards. (But if you stay up late and work past midnight, your data goes one day out of date until you run another script.)

Such a script would typically find all records (though it might be able to find just a subset of all records in some cases), and then Replace Field Contents can use the calculation currently defined for your field to update the value in a field that can now be stored/indexed. (Note that in very large databases, say with millions of records, this might take a truly unacceptable amount of time to complete.)
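As a rough sketch only, such an update script might look like the following script steps. The layout and field names here (the "Clients" layout, the stored field Clients::AxWaitingDaysStored) are hypothetical; substitute your own, and paste your current unstored calculation expression into the Replace Field Contents dialog:

# Run at file open (or on demand) to refresh the stored copy of the calc
Go to Layout [ "Clients" ]   # any layout based on the target table
Show All Records
Replace Field Contents [ No dialog ; Clients::AxWaitingDaysStored ;
    /* the same expression currently defined for your unstored field */ ]
Go to Layout [ original layout ]

You could call this from a script set to run via the OnFirstWindowOpen script trigger in File Options, or make it the first step of your Master Script.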
There is, however, a Find strategy that might work to greatly reduce the time required to run your script with these unstored calculation fields.
Do the find in two stages. In stage one, you perform a find specifying criteria only in stored, indexed fields. Then your script returns to Find mode, specifies criteria in your unstored fields, and then, instead of Perform Find, uses Constrain Found Set to narrow the found set produced by the first stage of your script.
This works if your stage 1 find consistently produces a found set much smaller than the total number of records in your table, as the constrain step will only apply your specified criteria against this smaller group of records instead of every record in your table.
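As a sketch of the idea (the field names and criteria here are made up; substitute your own indexed and unstored fields), the two-stage find might look like:

Enter Find Mode [ Pause: Off ]
Set Field [ Clients::Date Accepted ; $StartDate & "..." & $EndDate ]   # indexed field only
Perform Find [ ]
Enter Find Mode [ Pause: Off ]
Set Field [ Clients::AxWaitingTimeDays ; ">30" ]                       # unstored calc field
Constrain Found Set [ ]

The key point is that the expensive evaluation of the unstored calculation only happens in the Constrain Found Set step, against the small found set from the first Perform Find.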
Thanks, I might try that find in two stages; it may be the fix, as there would be tens of thousands of records in the table, but only a few hundred in the found set.
Thank you so much for the advice!
It still seems to be taking forever using the constrained method, but I'm not sure why it would be. I've attached a couple of screenshots to show the master script running all the individual scripts, as well as a couple of screenshots of the two types of scripts it's running. Can you see anything I'm doing wrong that might be making this painfully slow?
There are three different types of scripts that are mostly the same aside from the AgeAtReportingMonth and the AxWaitingDays fields (these are the unstored fields). I've edited the scripts to use the constrain method you had suggested. I've attached a copy of two of them; the third is very similar to the second except it doesn't look for a particular AxWaitingDays, so I didn't bother to screenshot it.
Are you sure that stage 1 is searching only with criteria entered into indexed fields?
This method isn't going to save you time if one of the fields being searched in stage 1 is not indexed. It also will not save you much time if the number of records found in stage 1 is very large.
Your overall script is performing not just one find but many different finds so that is definitely going to be a long running script if your table has a lot of records no matter how you structure your scripted finds.
Have you considered using a summary report (see the "Creating FileMaker Pro summary reports" tutorial) for this data instead of performing so many finds just to copy the data back to a single record on your original layout? I think that would be a much faster way to get these record counts.
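As a rough outline of that approach (field and value names here are hypothetical), you would group records into bands and let sub-summary parts do the counting, replacing the 16 separate finds with one find and one sort:

1. Define a band field, e.g. a calculation such as:
   Case ( ClientAgeYear < 18 ; "Under 18" ; ClientAgeYear < 65 ; "18-64" ; "65+" )
2. Define a summary field, e.g. sCount = Count of ClientID.
3. Create a list layout with sub-summary parts sorted by the age band (and a similar waiting-time band), placing sCount in each sub-summary part.

Perform a single find for the date range, sort by the band fields, and each sub-summary part then shows its own count.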
Yes, all the original finds use indexed fields; the fields in the constrain step are the only ones that aren't indexed. But it turns out the painful speed was not actually due to the finds/scripts themselves but an overall slowdown on the network due to some issue at the time. It has since been resolved and is working quite well now. It takes about a minute or two to run the full script with the multiple finds, but that's not a problem given that they would spend an hour or more looking up the info manually!! Thank you very much for your advice, the constrain step made a big difference!