There are several options (and the fact that you have multiple choices probably adds to your confusion).
A long-standing method is to define a new relationship between Table2 and a new table occurrence of Table1 (see Tutorial: What are Table Occurrences?) where the added criterion of "less than 30 days" is built into the relationship.
It would appear that your current relationship is:
Table2::ID = Table1::ID AND
Table2::Year = Table1::year AND
Table2::Forecast Indicator = Table1::Forecast Indicator
But if you duplicate the Table1 table occurrence (and rename it), you can then add this relationship:
Table2::ID = Table1|30days::ID AND
Table2::Year = Table1|30days::year AND
Table2::Forecast Indicator = Table1|30days::Forecast Indicator AND
Table2::const30 > Table1|30days::Date1MinusDate2
const30 would be a calculation field with a number result type defined to always return the value 30.
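Since a FileMaker relationship graph can't be run here, the following is a minimal Python sketch of what the Table1|30days match criteria do. The field names follow the thread's notation; the sample data and the `related_30days` helper are invented for illustration.

```python
# Hypothetical rows standing in for Table1 records; field names follow the
# thread's notation, but the data itself is made up for illustration.
table1 = [
    {"ID": 1, "Year": 2023, "Forecast Indicator": "F", "Date1MinusDate2": 12},
    {"ID": 1, "Year": 2023, "Forecast Indicator": "F", "Date1MinusDate2": 45},
    {"ID": 2, "Year": 2023, "Forecast Indicator": "A", "Date1MinusDate2": 5},
]

def related_30days(t2_row, rows, const30=30):
    """Model of the Table2 -> Table1|30days relationship: the three equality
    matches plus the 'elapsed time under 30 days' criterion."""
    return [r for r in rows
            if r["ID"] == t2_row["ID"]
            and r["Year"] == t2_row["Year"]
            and r["Forecast Indicator"] == t2_row["Forecast Indicator"]
            and const30 > r["Date1MinusDate2"]]

t2 = {"ID": 1, "Year": 2023, "Forecast Indicator": "F"}
matches = related_30days(t2, table1)
print(len(matches))  # only the 12-day record survives all four criteria
```

Any aggregate (Min, Max, Average, ...) defined through the Table1|30days occurrence then only "sees" the rows this filter would return.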
For an explanation of the notation that I am using, see the first post of: Common Forum Relationship and Field Notations Explained
And now your aggregate functions can refer to values in the Table1|30days table occurrence to compute values for related records where the elapsed time is less than 30 days.
An alternative approach is to use ExecuteSQL to compute this same value. This avoids the need to set up an added table occurrence, but requires working with SQL to produce the needed values.
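The kind of query ExecuteSQL would run can be sketched with Python's sqlite3 module. In FileMaker itself the SELECT string would be wrapped in ExecuteSQL ( query ; fieldSeparator ; rowSeparator ); the table and field names below follow the thread, but the `Amount` field and all data are invented for the example.

```python
import sqlite3

# Illustrative in-memory table; names follow the thread, data is made up.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE Table1
               (ID INTEGER, Year INTEGER, ForecastIndicator TEXT,
                Date1MinusDate2 INTEGER, Amount REAL)""")
con.executemany("INSERT INTO Table1 VALUES (?,?,?,?,?)", [
    (1, 2023, 'F', 12, 100.0),
    (1, 2023, 'F', 45, 200.0),   # outside the 30-day window
    (1, 2023, 'F', 3,  50.0),
])

# Aggregate only the related records whose elapsed time is under 30 days,
# i.e. the same set the filtered relationship would match.
row = con.execute("""SELECT AVG(Amount) FROM Table1
                     WHERE ID = ? AND Year = ? AND ForecastIndicator = ?
                       AND Date1MinusDate2 < 30""", (1, 2023, 'F')).fetchone()
print(row[0])  # 75.0 -> average of the two in-window records
```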
Thanks a lot, Phil.
I created three copies of Table1 in order to match the time periods I want to filter. This worked fine, but I noticed that Table2 performs find queries every time I scroll. The data in Table2 are not stored permanently because they are calculated each time from the related Table1 and Table1's occurrences. This might explain why exporting Table2 is so slow.
I am trying to export Table2 as CSV in order to use it in MATLAB, and it seems that it will take about 20 days to complete. Note that Table1 has 7 million records with about 20 fields (including calculation fields), and Table2 has 5 million records with about 20 fields which summarize and calculate data from Table1.
Is there something that we can do to increase performance especially for exporting the data?
I don't know MATLAB, but might it be possible to export the data without any calcs or summaries and have them calculated in the destination application? Whether exporting or not, any time seven million records need to be calculated and summarized all together (which is unlikely during normal day-to-day usage), the time needed can be significant. Another approach could be to denormalize before export.
How can I denormalize the data of Table2 (which are calculated from the related Table1 and Table1's occurrences)? Shall I write a script to do the summarizing calculations I need (min, max, average, etc.) and hard-write the data from Table1 into Table2 instead of calculating them relationally?
If the calculations which now reside in Table2 referencing Table1 could instead be performed in Table1 and stored as a number result (static), your speed would increase dramatically. However, I'm only speculating, as I don't know your solution.
Phil, do you think that I should do this project in MySQL? Is there something that I can do in FileMaker to increase the performance of exporting the data?
Like Rick, I would look at denormalizing the data. This is a big word for storing the unstored calculation result in a number field and then exporting from the number field instead of having to calculate the value for each record as part of the export.
This doesn't really save time, the total time can even be more, but it can be a way to distribute the time needed out in small "chunks" when the delay (due to calculating the value for just a few records at a time) is not noticeable.
Whether this works for you or not will depend on the design of your database and in how you use it, but if you can use a script trigger to perform a script to calculate this value for just one record when a specific value is changed, this might work. But the fact that we are looking at dates that "age" may not make that approach practical. Another option can be to "batch" the updates by running a script once a night "after hours" that finds and updates all records that have now crossed the 30 day threshold.
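The denormalizing pass described above can be modeled in SQL terms with sqlite3: compute the aggregate once and write it into a plain stored number field of Table2, so the export only reads numbers instead of evaluating an unstored calculation per record. Table and field names follow the thread, but the `Amount`/`AvgAmount` fields, schema, and data are assumptions for the sketch.

```python
import sqlite3

# Illustrative schema: Table2.AvgAmount is the stored (denormalized) copy of
# what was previously an unstored aggregate calculation over Table1.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Table1 (ID INTEGER, Year INTEGER, Amount REAL);
    CREATE TABLE Table2 (ID INTEGER, Year INTEGER, AvgAmount REAL);
    INSERT INTO Table1 VALUES (1, 2023, 100.0), (1, 2023, 50.0);
    INSERT INTO Table2 (ID, Year) VALUES (1, 2023);
""")

# The batch pass (the "after hours" script): write the summary into Table2's
# stored field in one go, so a later export just reads plain numbers.
con.execute("""UPDATE Table2 SET AvgAmount =
                 (SELECT AVG(Amount) FROM Table1
                   WHERE Table1.ID = Table2.ID AND Table1.Year = Table2.Year)""")
stored = con.execute("SELECT AvgAmount FROM Table2").fetchone()[0]
print(stored)  # 75.0
```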
And you might see some improvement in performance if you got rid of the > operator in the relationship. Date1MinusDate2 could be redefined to produce the value 1 if the difference is less than 30 days, and then you could change const30 to constOne and match with the = operator instead of >.
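The redefined calculation field amounts to a simple flag, sketched here in Python (the function name is illustrative, not a FileMaker name):

```python
# Model of the redefined calculation field: instead of returning the raw
# day difference, it returns 1 when the elapsed days are under 30, so the
# relationship can use constOne = flag (equality) rather than const30 > diff.
def under_30_flag(elapsed_days: int) -> int:
    return 1 if elapsed_days < 30 else 0

print(under_30_flag(12), under_30_flag(45))  # 1 0
```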
I am trying to find information on how to export the records batch by batch. All I found is this script help page about FileMaker Server. How can I export the result of a find request (like SELECT in SQL)?
"You can set the export order before adding this script step, or perform the step with a dialog box so a user can set new criteria. Export Records exports all the records currently being browsed. If you want to limit the set of records you're exporting, use a find request before using Export Records."
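The find-then-export pattern the documentation describes can be modeled in Python: narrow the record set first (the find request), then append each found set to the CSV in chunks, so no single pass has to touch all five million rows at once. In FileMaker this would be a scripted loop of Perform Find followed by Export Records; all names and data below are illustrative.

```python
import csv
import sqlite3

# Illustrative stand-in for Table2 with a handful of rows.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Table2 (ID INTEGER, Year INTEGER, AvgAmount REAL)")
con.executemany("INSERT INTO Table2 VALUES (?,?,?)",
                [(i, 2023, float(i)) for i in range(10)])

with open("table2_export.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["ID", "Year", "AvgAmount"])
    # The "find request": a WHERE clause limits the set being exported.
    cur = con.execute("SELECT ID, Year, AvgAmount FROM Table2 WHERE Year = ?",
                      (2023,))
    # Export the found set in small batches rather than one huge pass.
    while True:
        batch = cur.fetchmany(4)   # tiny chunk size, just for the demo
        if not batch:
            break
        writer.writerows(batch)

with open("table2_export.csv") as f:
    line_count = sum(1 for _ in f)
print(line_count)  # header + 10 data rows
```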