
    Inherited database...  Improving performance

    ryyno10


      I am attempting to improve the performance of an inherited database (FMP v11/FS11) that is used to track hospital patients. The previous DBA constructed a multi-file system that contains some “data only” files plus a primary file that holds both interface and data elements (the auxiliary files contain insurance info, pictures, etc.).

      There are several large tables in the main file that contain normalized aspects of patients (consults, implanted patients, etc.) with other entities branching from them (such as a mailing reminder system and several tables designed to handle research studies). 

      It seems that the previous DBA started branching additional tables from the original table in the main file (essentially a spider web) to accommodate departmental needs. Then, it appears that he figured out how to use table occurrence groups for temporary data tables (e.g., for new studies). However, as the web expanded, many TOs were added just to provide context to these branches.

      It is my understanding that “archived” data sitting in the file system but not being used should not have any effect on performance (so should I assume that relic TOGs will not affect performance if they are not being called by scripts?).

      However, several massive tables are constantly used by secretaries, transcriptionists, and nurses to peruse patient data for everyday tasks (five or six tables contain over 10,000 records apiece, all within the main file that also contains all the interfaces).

      Here are my approaches to improving performance (of course nested with some questions):

      1. Create a clone and transfer data:  I have read that this will reset the indexes used by FileMaker.  Is this true?  Also, what precautions should I take before transferring the data and launching the new file on the server?  This seems useful considering that the db was not serviced (at all) for the 6 months before I inherited it.

       

      2. Reduce unstored calculations:  The previous DBA cluttered the db with many unstored calculations.  I am systematically replacing them with scripts that place the value in a field that can be indexed (where indexing is needed).  Is this a better alternative?  Are there other options?

      3. “Heavy users”:  Several employees must conduct MANY searches, on a daily basis, within the data tables that contain 10,000+ patient records.  Should I try to isolate them in a separate file?

      4. Interface files:  Could an interface file for each user improve overall performance?  After all, these files would still be calling back to the same massive tables that already exist.

      Sorry for the long post but I would love to hear some input before I make a decision on a course of action…

        • 1. Re: Inherited database...  Improving performance
          philmodjunk

          1) No real issues here, as long as you are careful to import all of the data into the clone. You can also use the advanced recovery options in FileMaker 11 to rebuild all of your field indexes; this doesn't require importing data, though it does set an internal flag on the file to show that it was recovered. It is unlikely to have much effect on performance, however.

          2) Careful with number 2. Unstored calculations update automatically when the data they reference changes. Your indexed data fields will not, and you'll need to make sure that every possible change to the data is handled by an update script. I wouldn't do this unless I was absolutely sure that the calculation was responsible for slowing down the system. An unstored calculation can slow the system down when:

          1) it is used in finds

          2) it is used in sorts

          3) it is located on a layout whose found set contains a large number of records

          4) it summarizes large quantities of data from a related table.

          If none of these scenarios apply, I'd leave the calculation field as is and not replace it with a script-updated data field.
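
          To make scenario 4 concrete, here is a rough sketch using hypothetical table occurrence and field names (Patients, Consults), not names from your file. An unstored calculation that summarizes related data, say

              cConsultCount ( unstored calculation, number result ) = Count ( Consults::ConsultID )

          has to re-evaluate every time it is drawn on a layout, used in a find, or sorted on. The script-updated alternative described in item 2 of your post would write that same value into a plain, indexable number field with a looping script along these lines:

              #  Refresh stored consult counts (sketch; run from a Patients layout)
              Go to Record/Request/Page [ First ]
              Loop
                 Set Field [ Patients::ConsultCountStored ; Count ( Consults::ConsultID ) ]
                 Go to Record/Request/Page [ Next ; Exit after last ]
              End Loop

          The stored field only changes when that script runs, so every workflow that adds or removes a consult has to trigger it (or the script has to run on a schedule). That maintenance burden is why I'd only bother when one of the four scenarios above actually applies.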

          For the case of item 1, you can sometimes use a two-stage find that works for unstored calculations. Stage 1: perform a find specifying only criteria in stored/indexed fields. Stage 2: return to find mode, specify criteria in the unstored fields, and constrain the found set. Since the constrain is applied to the found set produced by stage 1, which is very often far smaller than the entire table, this often executes much faster on large tables than a single-stage find.
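
          As a sketch of that two-stage pattern, with hypothetical Patients fields (LastName stored and indexed, cLengthOfStay an unstored calculation), the script might look like:

              Set Error Capture [ On ]
              Enter Find Mode []
              #  Stage 1: criteria in stored/indexed fields only
              Set Field [ Patients::LastName ; "Smith" ]
              Perform Find []
              Enter Find Mode []
              #  Stage 2: criteria in the unstored field, applied only to the smaller found set
              Set Field [ Patients::cLengthOfStay ; ">14" ]
              Constrain Found Set []

          Stage 1 can use the index, and only the records that survive it have to evaluate the unstored calculation in stage 2.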

          Also be aware that layouts can be redesigned so that they update faster. If you have lots of conditional formats, for example, they can slow down the rate at which the screen updates, since every such conditional format has to evaluate. If a format includes a summary field over a large found set, or an aggregate function (Sum, Count, Max, etc.) that accesses large quantities of related data, the layout can get pretty slow to update.
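
          For example (hypothetical names again), a conditional format formula such as

              Sum ( Charges::Amount ) > 10000

          has to total the related Charges records for every row drawn on the screen, while a formula that only tests stored fields in the current record, such as

              Patients::HighBalanceFlag = 1

          costs almost nothing to evaluate.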

          Large numbers of filtered portals on your layout can also produce a performance "hit" on your system.

          3) I doubt the separate file will make much difference. If the data they are searching is fairly static, you might set them up with their own copy of the database data, maybe one that is updated on a nightly basis, and that copy can be optimized for the fastest possible searching.

          4) Try it and see. Put the interface file on one client machine, update the external data source references if necessary, and then test the file to see whether you get any change in performance.

          • 2. Re: Inherited database...  Improving performance
            ryyno10

            Thanks for your response Phil.

             

            Also be aware that layouts can be redesigned so that they update faster. If you have lots of conditional formats, for example, they can slow down the rate at which the screen updates, since every such conditional format has to evaluate. If a format includes a summary field over a large found set, or an aggregate function (Sum, Count, Max, etc.) that accesses large quantities of related data, the layout can get pretty slow to update.

             

            Thanks for your suggestion. I found text with conditional formatting that was using 7 variables pulling from 6 different tables. Long story short, I was able to replace the formula with much simpler formatting. I've changed quite a few similar instances, with a small but noticeable improvement.

             

            When it comes to placing interface files on desktops, I decided against this simply because our workers are constantly re-imaging their computers (viruses, user error, etc.). If there is not much of a benefit to this approach then I'm not going to waste time.


             

            We also have four identical NICs on our server, two of which are teamed and set up as 802.3ad with fault tolerance, which (I found out) is not currently activated. Enabling that would allow FileMaker traffic to run at 2 Gbps, which may improve performance.

             

            In regard to the FileMaker Server statistics, though, Elapsed Time/call and Wait Time/call sometimes average around 10,000, accompanied by high RAM usage.  Do you know of any other common culprits that contribute to such a performance hit?

             

            • 3. Re: Inherited database...  Improving performance
              philmodjunk

              That last question sounds like one to post in the server forum. Be warned, though, that it doesn't get a lot of activity, so you may have to wait and "bump" it back into recent items a few times before you get a response.