16 Replies - Latest reply on Oct 28, 2015 3:18 PM by bigtom

    Best find and sort efficiency in a server environment

    Stu412

      Hi there

       

      I'm rolling out my solution to clients, with the main FM database hosted on FileMaker Server 13.  A network with 10Gb switches will provide the link between the two.  The server is a virtualised machine, set up to deliver more than is needed, and it's not used for anything else besides FM.

       

      I need to ensure that the user experience is snappy, particularly when moving from one layout of data to another for the same customer.  I have a main data table which has around 70,000 rows and requires lots of summaries and Case calculations.  Much of the data is warehoused overnight to a number-only table to make page loads quicker, and many reports are based from there.  However, I still need to load some summary data from the main live table, and this is what slows things down and where I need some help.

       

      In one example, of the 70,000 rows I only actually need to load around 18.  I limit my searching first by customer (to get around 400 rows), then by row type to get to the 18 or so I need.  This is done in a script with an initial find, followed by a constrain to narrow to the row type.

       

      Is this the most efficient way to get to the 18 rows I need?  It feels right, i.e. minimising the found set before sorting, but I may have missed something fundamental as I haven't got much experience with the FM server environment.

       

      Thanks in advance, gurus and wise ones.

        • 1. Re: Best find and sort efficiency in a server environment
          siplus

          Have you tried ExecuteSQL ( "... WHERE CustomerID = ? AND RowType = ? FETCH FIRST 18 ROWS ONLY" ... )?
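
          Something along these lines, as a rough sketch - the table and field names here are placeholders for your own:

              ExecuteSQL (
                  "SELECT ID, Amount
                   FROM MainData
                   WHERE CustomerID = ? AND RowType = ?
                   FETCH FIRST 18 ROWS ONLY" ;
                  "" ; "" ;                      // default field and row separators
                  $customerID ; $rowType         // bound to the two ? placeholders
              )

          ExecuteSQL ignores the found set entirely and hands back just the delimited text result, so nothing gets sorted or fetched beyond the rows you ask for.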

          • 2. Re: Best find and sort efficiency in a server environment
            Mike_Mitchell

            Stu412 wrote:

             

             

            Is this the most efficient way to get to the 18 rows I need?  It feels right, i.e. minimising the found set before sorting...

             

            In general, the answer is yes. If you sort first, the client will have to pull down the entire found set - which, with 70,000 records, will slow things down to a crawl.

             

            Something else you should probably look at: Prior to doing your Find, navigate to a layout with no fields. Do your find, do your sort, then go to your final layout. Doing it this way postpones fetching the records from the server until you have what you want. Why? Because if you go to a layout that has even one field, FileMaker will fetch the first 25 records (Form view) or as many records as will fit in the window (List or Table view). That will likely slow you down.
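
            In script-step form, the pattern is something like this (layout and field names are placeholders; the blank layout must be based on the same table occurrence):

                Go to Layout [ "Blank" ]                      # no fields on it - nothing to fetch
                Enter Find Mode [ Pause: Off ]
                Set Field [ MainData::CustomerID ; $customerID ]
                Perform Find []
                Enter Find Mode [ Pause: Off ]
                Set Field [ MainData::RowType ; $rowType ]
                Constrain Found Set []
                Sort Records [ Restore ; With dialog: Off ]
                Go to Layout [ "Customer Detail" ]            # records are fetched only here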

             

            HTH


            Mike

            • 3. Re: Best find and sort efficiency in a server environment
              bigtom

              Mike_Mitchell has some good advice. When you say it is too slow, how slow is it? Keeping the data from moving to the client is important.

               

              Do you have SSD or HDD storage for the database files? This made a big difference for me. Keeping the database cache size as large as you can also makes things faster.

               

              You can try ExecuteSQL and see if it is faster in the same scenario. You can also add ORDER BY to the query. If you just need the data this is ok, but if you need an actual found set the find might still be the better way. You could also run this on the server with PSOS (likely much faster) and return a list if you need data. Running the query from a layout of an unrelated table is also faster (wimdecorte has explained this a few times). If you need a found set it is the same problem.
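
              For example, a sketch along the same lines as siplus' query (placeholder names again):

                  ExecuteSQL (
                      "SELECT RowType, Amount
                       FROM MainData
                       WHERE CustomerID = ?
                       ORDER BY RowType" ;
                      "" ; "" ; $customerID )

              You get back delimited text already sorted by the query, rather than a found set the client has to sort itself.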

              • 4. Re: Best find and sort efficiency in a server environment
                Stu412

                Hi all

                 

                Some very useful advice all around and some new stuff I didn't know about.  Thanks, everyone.

                 

                I've managed to optimise my find routine in the script and also to work on a blank layout first, before rendering the results on the final layout.  This has resulted in an increase in speed.

                 

                I want to use PSoS next, but I'm struggling to get anything to happen and have probably missed something basic.

                 

                The 'reduce records' script runs from a button, so instead I've set that button to run 'Reduce Records' via Perform Script on Server, with the grand result that I'm not taken to the expected layout, nor are any searches undertaken.  I've tried debugging, and things seem to stop immediately after the Perform Script on Server step.

                 

                I get the feeling I can't just lift a client script straight into PSoS, so I'm not sure how to finish this off.

                 

                Thanks in advance

                • 5. Re: Best find and sort efficiency in a server environment
                  Mike_Mitchell

                  PSoS starts a completely new session. You can think of the server as an independent client, who logs into the database, executes the OnFirstWindowOpen script, and starts fresh. Since every client gets its own found set, globals, and variables, any found set you execute on PSoS will not carry back to the client once the PSoS session terminates.

                   

                  There are some methods for passing the found set back to the client, but in your scenario, there's very little point in doing so. Those methods generally entail saving a set of record IDs and passing it back to the client, which then executes a Find for that set of IDs. Largely pointless, since you still end up executing a Find anyway.
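
                  For completeness, that pattern is usually shaped something like this (field and script details invented for illustration). Server side, at the end of the PSoS script:

                      # build a return-delimited list of primary keys from the server's found set
                      Set Variable [ $ids ; Value: "" ]
                      Go to Record/Request/Page [ First ]
                      Loop
                          Set Variable [ $ids ; Value: $ids & MainData::ID & ¶ ]
                          Go to Record/Request/Page [ Next ; Exit after last: On ]
                      End Loop
                      Exit Script [ Result: $ids ]

                  Client side, after Perform Script on Server [ Wait for completion: On ]:

                      Set Variable [ $ids ; Value: Get ( ScriptResult ) ]
                      # ...then perform a Find on those IDs - the very step you were trying to avoid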

                  • 6. Re: Best find and sort efficiency in a server environment
                    bigtom

                    Glad to hear you got it running faster already.

                     

                    You have to remember that running PSOS is like having a colleague sitting at the server with a client open, running the script for you. When it is done, he can send you back a result, but he can't send back a found set or control your client layout.

                     

                    For example, if you get back a list of IDs for the records you want, you need to turn that into a found set yourself. If you only need summary data, you can have it sent back, store it in a global variable, and then use a merge variable on a layout to view the data.
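
                    A rough sketch of that summary route (script, layout, and field names are made up). On the client:

                        Perform Script on Server [ "Customer Summary (PSoS)" ; Parameter: $customerID ; Wait for completion: On ]
                        Set Variable [ $$SUMMARY ; Value: Get ( ScriptResult ) ]
                        # a layout object displays <<$$SUMMARY>> as a merge variable

                    And on the server, "Customer Summary (PSoS)":

                        # find the customer's rows, then hand back only the total
                        Go to Layout [ "Blank" ]                 # based on the data table
                        Enter Find Mode [ Pause: Off ]
                        Set Field [ MainData::CustomerID ; Get ( ScriptParameter ) ]
                        Perform Find []
                        Exit Script [ Result: GetSummary ( MainData::sTotalAmount ; MainData::sTotalAmount ) ]

                    Only the result text crosses the wire, not the records.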

                     

                    If you need something back, you cannot just reuse a client script with PSOS unchanged. I am sure it is working fine on the server - you just never see it.

                    • 7. Re: Best find and sort efficiency in a server environment
                      Stu412

                      Mike, Tom

                       

                      Thanks for explaining that.  I have a couple of other things I can look into trying, but now I can move on from PSoS for this case.

                      • 8. Re: Best find and sort efficiency in a server environment
                        bigtom

                        If you are able to get some good results, it would be great if you kept us updated on what worked best for you.

                        • 9. Re: Best find and sort efficiency in a server environment
                          Stu412

                          Hi Tom

                           

                          It's quite a heavy solution I'm putting together, with 10 columns of information cross-tabbed from the 70,000-row table.  Once I have the columns on screen, there's a whole load of row subtotals and calculations that need to take place as well, so it has all been very slow.

                           

                          The main solution is to warehouse the data for the columns that will never change - the old history stuff - and store just values in static fields.  This is updated each night by copying data from the live table to the warehouse when nobody is about.

                           

                          The majority of reports run from this warehouse where speed would otherwise be an issue.

                           

                          The second part is to use summaries on the live, current data, but with a minimised found set arrived at using the techniques described earlier in this thread - thanks!

                           

                          The third part is a small twist - the warehouse history data can sometimes change per customer if a manager wants to amend things, so I've scripted an 'update warehouse for this customer only' routine which does just that.  It rebuilds the warehouse for one customer based on the updated values in the live 70,000-row table.  This takes a couple of seconds and is user initiated.
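
                          For anyone curious, the per-customer rebuild is roughly this shape (names changed and simplified):

                              # "Update Warehouse For This Customer" - parameter: customer ID
                              Set Variable [ $custID ; Value: Get ( ScriptParameter ) ]
                              # clear the customer's existing warehouse rows
                              Go to Layout [ "Warehouse" ]
                              Enter Find Mode [ Pause: Off ]
                              Set Field [ Warehouse::CustomerID ; $custID ]
                              Perform Find []
                              Delete All Records [ With dialog: Off ]    # deletes the found set only
                              # rebuild from the live table, evaluating each calculation once
                              Go to Layout [ "LiveData" ]
                              Enter Find Mode [ Pause: Off ]
                              Set Field [ LiveData::CustomerID ; $custID ]
                              Perform Find []
                              Go to Record/Request/Page [ First ]
                              Loop
                                  Set Variable [ $year ; Value: LiveData::Year ]
                                  Set Variable [ $amount ; Value: LiveData::cAmount ]   # unstored calc, evaluated here once
                                  Go to Layout [ "Warehouse" ]
                                  New Record/Request
                                  Set Field [ Warehouse::CustomerID ; $custID ]
                                  Set Field [ Warehouse::Year ; $year ]
                                  Set Field [ Warehouse::Amount ; $amount ]
                                  Go to Layout [ "LiveData" ]
                                  Go to Record/Request/Page [ Next ; Exit after last: On ]
                              End Loop

                          The overnight job is the same idea, looped over every customer.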

                           

                          The end result is (hopefully) a hybrid of live and static data on major cross-tab reports, which has already shown speed increases.

                           

                          It's a case of trying to squeeze as much performance from the solution now so as to keep the users happy and engaged.

                          • 10. Re: Best find and sort efficiency in a server environment
                            siplus

                            Stu412 wrote:

                             

                            It's quite a heavy solution I'm putting together, with 10 columns of information cross-tabbed from the 70,000-row table.

                             

                            This is from one solution of ours. Figures are not photoshopped:

                             

                            [attached screenshot: 52C74CAD-D497-4A72-A8DF-535AA49F6716.png]

                            • 11. Re: Best find and sort efficiency in a server environment
                              Stu412

                              Hi Siplus

                               

                              I'm led to believe that if it were Photoshopped it'd somehow be thinner!  In all seriousness, it's interesting to see the quantity of records a table can reach.  I have many calculation fields, due mainly to the cross-tab requirements, splitting across years etc., so my tables tend to be wide, although a recent rebuild halved the number of fields involved from my first FM attempt.

                               

                              I'm always wondering if there's a better way to do things though...

                              • 12. Re: Best find and sort efficiency in a server environment
                                Mike_Mitchell

                                One thing you might want to look at is eliminating some of the calculations / summaries, using batch jobs or Script Triggers to store the results rather than relying on real-time calculations. For data that are static (such as you find in many warehouse-type builds), you can thereby pay the calculation penalty only once and dramatically improve the speed of the solution.
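
                                For instance, instead of an unstored calculation field that totals related records on every screen draw, you can use a plain number field and set it when the data changes - via an OnRecordCommit script trigger or a nightly batch (names invented for illustration):

                                    # script trigger: OnRecordCommit on the data-entry layout
                                    Set Field [ Orders::StoredTotal ; Sum ( OrderLines::Amount ) ]
                                    # reports then read StoredTotal - a stored, indexable number field

                                You pay for the Sum() once at commit time rather than every time the report is drawn.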

                                 

                                I don't know the particulars of your situation, but this is often something worth examining.

                                • 13. Re: Best find and sort efficiency in a server environment
                                  wimdecorte

                                  Stu412 wrote:

                                   

                                  I have many calculation fields due mainly to the cross tab requirements, splitting across years etc

                                   

                                  Echoing what Mike says: crosstabbing is a report function that should not make your tables wide - crosstab fields and their data should go in their own report tables.  Splitting across years sounds like static data that you should calculate once and store, instead of relying on calculation fields....

                                  • 14. Re: Best find and sort efficiency in a server environment
                                    Stu412

                                    Wimdecorte

                                     

                                    Thanks for the input - I think this is the part I've been missing, which has led to super-fat tables, or rather, one super-fat table.

                                     

                                    You're correct in that the annual data is stored, as much as possible.  I have a single static table for this which is updated for every customer overnight, or per a single customer on demand (which is sometimes needed contrary to the truest definition of warehousing).

                                     

                                    The interesting part of your post is having separate tables for each report.  This is something I've not looked into until now because I felt it might be over-simplifying things, especially with a warehouse alongside.

                                     

                                    Many of my reports come from the one warehouse, but are you saying it can also be efficient to have smaller warehouses per report, as well as smaller calculation tables for cross-tab work per report?
