
    Best speed optimization strategy

    mikem

      Hi Forum,

       

      I am working with a complex inherited database that I am trying to optimize for speed, especially over remote connections. It's 20 .fp7 files hosted on FMS 11. There are a few areas I've identified for improvement, and I thought I'd seek forum feedback from FM veterans on the best prioritization of actions. Disk space is not an issue. Actions I am planning:

       

      - Purge redundant External Data Sources. I have some files that, under External Data Sources, list the same File data source 2 or more times. Is there any reason to do so? Planning to purge the redundant ones.

       

      - Store all unstored calculation fields (majority are unstored, and there are many)

       

      - Purge redundant Relationships Graph. I have some very complex/redundant relationship graphs. I think the modus operandi by past developers (there may have been many over the years) may have been, "if you need a relationship, just create it, don't worry about what is already there." So I plan to identify redundancies and purge.

       

      - Index all fields that can be indexed (most already are)

       

      - Purge unused fields (I think there are a few)

       

      - Purge unused layouts

       

      - I don't think there is any conditional formatting

       

      So, I have actually listed these actions in the priority order I think will provide the best bang for the buck (that is, per hour spent). Does this look about right?

      FYI, I have already moved the files on the server to a local SSD, which did help some.

       

      Much thanks for any guidance,

       

      Mike

        • 1. Re: Best speed optimization strategy
          Mike_Mitchell

          Mike -

           

          I think you're largely on the right track. Look out for External Data Sources with wildcards (this sounds like it may be a pre-version 7 system, with 20 separate files); these will slow things down to a crawl.

           

          I would be careful, though, about storing *all* unstored calculations. Unstored calculations refresh only when they're needed, and they do it at the client. If it's an aggregate function, it'll be a big performance hit, especially across large data sets, and even more so across the network (since the client has to grab the entire found / related set in order to perform the calculation). However, because stored values consume bandwidth every time a record travels over the wire, while unstored calculations cost nothing until they're evaluated, you may end up actually hurting yourself by storing rarely-used values.
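
          For example (table and field names here are hypothetical), an unstored aggregate like this on an Invoices table:

              // Unstored calculation field: Invoices::cInvoiceTotal
              // Evaluated on the client each time it's displayed; the client
              // must fetch every related InvoiceLines record over the network first
              Sum ( InvoiceLines::Amount )

          forces each client to pull the whole related set just to show one number. Converting it to a stored number field only pays off if the value is viewed more often than its source data changes.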

           

          Other things to look for:

           

          • Old systems tended to paste graphics onto the layout. Make sure these are "small" - low resolution, limited colors.
          • You've already mentioned removing unused fields. I would definitely do that, but also look to see if you can move fields out of the data layer and into the interface layer (Conditional Formatting, for instance). This cuts back on bandwidth demand.
          • Look at the efficiency of any calculations being performed. Take advantage of the Let ( ) function to pull related values only once, and put the most likely occurrence first in your Case ( ) statements. That sort of thing. (There's a sketch after this list.)
          • Review the database normalization. I know that may represent some major surgery, but redundant data will slow the system down.
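
          As a rough sketch of the Let ( ) / Case ( ) point above (field names are hypothetical), compare a calculation that reaches across the relationship on every test with a version that pulls each related value once and tests the most likely outcome first:

              // Before: Customers::Status is fetched for every test
              Case (
                  Customers::Status = "Active" and Customers::Balance > 0 ; "Bill" ;
                  Customers::Status = "Active" ; "OK" ;
                  "Inactive"
              )

              // After: each related value is pulled once; most likely match first
              Let ( [
                  status  = Customers::Status ;
                  balance = Customers::Balance
              ] ;
                  Case (
                      status = "Active" and balance > 0 ; "Bill" ;
                      status = "Active" ; "OK" ;
                      "Inactive"
                  )
              )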

           

          That's all I can think of off the top of my head. Good luck!

           

          Mike

          • 2. Re: Best speed optimization strategy
            taylorsharpe

            Sometimes it is really hard to optimize a database without losing features.  One possible solution is to use MirrorSync from 360Works so that you sync the data, and all of the data resides locally.  It is amazing how fast your iPad or remote PC can feel if the data is local and just synced.  Check out the video 360Works has on MirrorSync and I bet you'll be impressed:  http://360works.com/filemaker-sync/

            • 3. Re: Best speed optimization strategy
              mikem

              Thank you Mike for the detailed reply, and additional pointers. Glad to hear I'm basically on the right track. One question: I'm a little unclear on moving fields "out of the data layer and into the interface layer"—would one example of that be to change a calculation field in a table to a non-calculation field, and use conditional formatting instead to indicate whatever needs to be indicated? Are there other examples besides using conditional formatting? Maybe using button scripts in lieu of calculation fields?

               

              Thanks again,

               

              Mike

              • 4. Re: Best speed optimization strategy
                mikem

                Thanks Taylor, I will definitely investigate MirrorSync.

                • 5. Re: Best speed optimization strategy
                  Stephen Huston

                  • Purging external data sources has the caveat that you need to first determine which TOs in the relationship graph use which data source (they won't repoint themselves), and be sure each TO pointing to a redundant data source is repointed first so it doesn't just fail. You can get that info from a DDR.
                  • Store calc results where you can safely, but be sure that those which need to update during data edits, or are based on related table data, are set/updated via scripting when the source values change to keep them current. (A sketch of this follows the list.)
                  • Indexing stored data which isn't used for sorts or finds, nor as the basis of calcs, probably won't matter, but it will add to file size.
                  • Purging unused fields is good, but be sure they are unused everywhere in the system, including references in other files in the system.
                  • Removing unused layouts will reduce file size, but probably won't affect network speed in any significant way.
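
                  As a sketch of the scripted-update approach in the second bullet (table and field names are hypothetical), a plain stored number field can replace an unstored total and be refreshed whenever line items are committed:

                      # Script "Update Invoice Total", run from an OnRecordCommit
                      # (or OnObjectSave) trigger on the line-item layout
                      Set Field [ Invoices::TotalStored ; Sum ( InvoiceLines::Amount ) ]
                      Commit Records/Requests [ No dialog ]

                  Because Invoices::TotalStored is a regular stored field, it can be indexed, found, and sorted normally, and clients never have to pull the related records just to display it.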

                  But the number 1 item:

                  1. Be sure you have a full backup of everything before the purging begins. Also,
                  2. Try to set your files (in an unhosted environment) to close on a layout based on a table with only one record, such as a table of global values, so that when the file is opened hosted, that's where it will open and do its initial cache of records, reducing network traffic for clients until they are taken to wherever they need to land. (A sketch of this follows.)
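
                  A minimal sketch of item 2, assuming a one-record Globals table with a layout of the same name; assign the script in File Options as the file's closing script while the file is unhosted:

                      # Script "Park on Globals before closing"
                      Go to Layout [ "Globals" ]   # layout based on the one-record table
                      Commit Records/Requests [ No dialog ]

                  When the file is then hosted, it opens on that layout, so each connecting client's initial record cache is a single record rather than a big table's worth.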
                  • 6. Re: Best speed optimization strategy
                    Mike_Mitchell

                    Hello, Mike.

                     

                    Yes, Conditional Formatting is one example (perhaps the most obvious) of moving information out of the data layer and into the interface layer. Other examples might include:

                     

                    1) Tooltips. (If you have an "information" field on your layout, it can, in many cases, be replaced with a tooltip.)

                    2) Merge variables. (Use Script Triggers to update them as appropriate; see the sketch after this list.)

                    3) Web Viewers. (A huge variety of effects and displays can be created with these, in combination with variables and / or fields.)
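
                    As an example of item 2 (all names hypothetical), a merge variable placed on the layout as <<$$BALANCE_FLAG>> can stand in for a display-only calculation field, refreshed by a trigger:

                        # Script "Set Balance Flag", attached to the layout's
                        # OnRecordLoad trigger
                        Set Variable [ $$BALANCE_FLAG ; Value: Case ( Invoices::Balance > 0 ; "UNPAID" ; "Paid in full" ) ]
                        Refresh Window []

                    The value is computed on the client only when a record loads, instead of living in the data layer as one more field to download.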

                     

                    In all cases, what you're doing is moving the display calculation from the server down to the client. In some cases, this will improve performance (sometimes dramatically) because the information doesn't have to be downloaded over the wire. It's calculated when it's needed, and only then. Of course, you may wind up with a performance slowdown if you do too much of it, so be judicious and experiment.

                     

                    HTH

                     

                    Mike

                    • 7. Re: Best speed optimization strategy
                      taylorsharpe

                      Indexing is an interesting topic when it comes to optimizing.  Fields that are indexed can be searched much faster, sorted quickly, and can be used for relationships and value lists.  Those are the benefits of indexing.  The drawbacks are that the files become bigger, and every edit, import, or delete involving an indexed field means the index needs to be updated, which can slow performance dramatically.  As any DBA of a large SQL database will tell you, a lot of consideration goes into which fields to index and which ones not to.  There isn't a black-and-white answer; you will have to see how the database is used to decide which fields should be indexed and which should not.

                       

                      The same type of argument applies to stored versus unstored calculations.  In most SQL solutions, calculations are stored and re-performed as needed.  Because unstored calculations are so easy in the FileMaker world, we too often use them to the detriment of database performance.  There are times when they are the only way to get what we need, but we need to clearly understand their performance penalties.

                       

                      You said you moved your files over to an SSD.  I recommend a very fast RAID, preferably configured as RAID 10, since databases perform best in that format.  You can even use SSDs in a RAID.  In many cases, a faster drive system is more important than the CPU.  FMS 12 is 64-bit and can address a lot more RAM, so increase your FMS cache in the Admin Console.  Minimize or eliminate other services on the server so that it can focus on the FileMaker Server service.  And make sure you have a good network, both LAN and WAN.  Remember, low latency can be as important to a database as bandwidth.

                       

                      I already mentioned MirrorSync above, and that is a good solution for improving remote connections over the WAN.