2 Replies Latest reply on May 22, 2015 7:25 AM by Stu412

    A question about speed, stored and unstored results





      My DB is, on the whole, pretty snappy.  I have a massive table which will only get bigger; it currently holds around 50,000 records with around 128 fields.  This data is distributed across customer layouts through sorting and filtering and displayed as I want it.

      I need summaries on related (foreign) tables, so I have used GetSummary extensively.  I also need more advanced calculations on other related tables; for these I have used ExecuteSQL alongside multiple TOs which filter the results, and this is where things really slow down.

      If I flick between related records on the screens which use GetSummary to display results, there's a slight lag (I notice it; users may not).  If I flick between related records on the ExecuteSQL / multiple-TO screens, there's a MASSIVE lag.

      Initially, the ExecuteSQL / multiple-TO screens were very quick, but as I've added more data to the main table they have become super slow.

      I need to know, in people's experience, which is best: GetSummary, SQL or TOs?  I have a mix of all three at the moment.

      I also need help on storing / not storing results.  For the most part the data won't change, so I think I can store it and thereby make it quicker.  But if it does change, what will trigger a recalculation?  I may have this completely misunderstood.

      Thanks in advance


        • 1. Re: A question about speed, stored and unstored results

          Your best bet for tables with a lot of records is to store calculated values whenever possible. If your calculation references a summary field or a field from a related table, this can sometimes only be done by using an auto-entered calculation.

          Here's a way to trigger a stored ExecuteSQL expression to update:

          ExecuteSQL ("SELECT .... " ; "" ; "" ; YourTable::TriggerField )

          If you change the value of YourTable::TriggerField (even if you just set it to itself in a Set Field step), it will cause the query to re-evaluate. If the query already uses an optional parameter, you won't need to add another; you can use the existing one to trigger re-evaluation.
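          For example, a short script could bump the trigger field like this (a minimal sketch; the field and table names are placeholders, not from the original post):

          ```
          # Force the stored ExecuteSQL calculation to re-evaluate.
          # Setting the field to itself is enough to mark it as modified.
          Set Field [ YourTable::TriggerField ; YourTable::TriggerField ]
          Commit Records/Requests [ With dialog: Off ]
          ```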

          Auto-enter calculations, if the "do not replace existing value..." check box is cleared, will re-evaluate only if a field in the same record that is referenced by your calculation is modified. Changes in the values of other records (such as a field that affects the value of a summary field) will not cause an update. So the same "trigger" trick is sometimes used, in this manner:

          Let ( Trigger = TriggerField ; Calculation Goes here )
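          As a concrete (hypothetical) example, a stored auto-enter calculation that totals a related table could reference the trigger field, so that bumping TriggerField forces the stored result to refresh; RelatedTable::Amount is a placeholder name:

          ```
          // Stored auto-enter calculation. The Let wrapper makes the result
          // depend on TriggerField, so modifying that field (even setting it
          // to itself) forces this calculation to re-evaluate.
          Let ( trigger = TriggerField ;
               Sum ( RelatedTable::Amount )
          )
          ```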

          It can also be possible to set up a "transactional update" method where you have just simple data fields in your table, but use scripts, often performed by script triggers, that calculate a new value and update the appropriate field or fields each time the user changes a value that requires such a recalculation. This last method can result in very fast layouts, but it also requires very meticulous interface design to prevent any "loopholes" where the user is able to modify data and not get the required update.
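          A minimal sketch of that approach, assuming a hypothetical OnObjectSave trigger attached to the Amount field and a plain stored number field called StoredTotal (all names here are illustrative, not from the original post):

          ```
          # Script: Update Stored Total
          # Attached as an OnObjectSave trigger to the Amount field.
          # Recomputes the stored total whenever the user edits Amount.
          Commit Records/Requests [ With dialog: Off ]
          Set Field [ YourTable::StoredTotal ; Sum ( RelatedTable::Amount ) ]
          Commit Records/Requests [ With dialog: Off ]
          ```

          The stored field then sorts, filters and displays quickly, because nothing is evaluated at display time; the cost is that every edit path into Amount must run this script.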

          I've also found recently, in a table of 3 million+ records, that turning off indexing on a frequently modified field (one that did not require indexing) resulted in a small but noticeable improvement in performance.

          • 2. Re: A question about speed, stored and unstored results

            This forum software is bugged! I'll try replying for a third time!!

            Thanks for the refresh tips, I've got a couple of areas they'll come in useful.


            The link above looks good as well, as a replacement for a SELECT DISTINCT count function I have which recently killed my DB!