2 Replies. Latest reply on Apr 15, 2011 12:36 PM by RestaurantCharlie

    Data Separation/Server Speed Advice

    RestaurantCharlie


      I've been developing a solution for several months. It is my first FM experience.

      The solution is hosted on FileMaker Server 11 in our main office. We have several locations, so some users are on the LAN and others on the WAN. The main location's fastest available connection is 5 Mbps down / 1 Mbps up. Our branches have the same upload speed, with 2-4 Mbps down.

      As the solution grows more powerful, performance is really starting to become a problem. There are a lot of scripts that control business logic and automate complex tasks that used to take a person hours to calculate manually.

      I need advice on two things:

      1) Server speed. How much can the server's internet speed be affecting my performance? There is no faster internet where we are located, so my alternative would be to get a cloud server or host with one of the companies that host FileMaker solutions. Our solution has more than 75 tables; more than 10 of them have over 50,000 records, one table has close to a million records, and a couple more have half a million.

      Aggregate calculations and unstored calculations seem to take quite some time. For example, I have one table of 1,506 records with the fields Part, Week, and Location, and another table with the same fields plus date and quantity, which comes in at about 13,000 records. A script concentrates the quantity by week into the first table, then calculates the percentage of sales for that part by relating the week and location to a sales-by-location-by-date table. I have opted for Replace Field Contents instead of unstored calculations, because unstored calcs create a lot of lag when scrolling. The replace (four fields) takes about two minutes, which seems pretty slow to me. I've been using SQL as well, and it seems SQL could do that same operation in less than 10 seconds.
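
      For reference, here is roughly what that weekly concentration looks like as scripted steps. The table and field names (WeeklySummary, DailyDetail, LocationSales, WeekQty, PctOfSales) are hypothetical stand-ins for the ones described above, and the relationships are assumed to match on Part, Week, and Location:

          Go to Layout ["WeeklySummary" (WeeklySummary)]
          Show All Records
          # Sum daily quantities into the weekly table through a Part/Week/Location relationship
          Replace Field Contents [No dialog; WeeklySummary::WeekQty; Sum ( DailyDetail::Quantity )]
          # Percentage of the location's weekly sales, via a sales-by-location relationship
          Replace Field Contents [No dialog; WeeklySummary::PctOfSales; WeeklySummary::WeekQty / Sum ( LocationSales::Quantity )]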

      2) Data separation. Even if I host my solution on a server with blazing internet speeds, I will still have really slow speeds at our branches, so I've decided to create an interface file to be stored locally on client computers. That way the only thing travelling over the network is raw data, not layouts and hundreds of script steps.

      2a) My first issue is that right now I have people on both the LAN and the WAN. I develop in our main office, so I'm on the LAN and the IP for the main solution is an internal one; at our branches it is a public IP address. Is there a way to change a FileMaker file's location in one step, or do I have to manually change the source from the public IP for each table occurrence?

      2b) Because the interface file is going to be on each computer, I can't keep users up to date on it. Any suggestions? Should the file open automatically under a generic login account, go straight to a login layout, and allow no navigation from there until a username and password are entered? Do I then have to store user passwords, and how do I keep them safe (encryption)?

      Privilege sets don't really worry me; I have just two: full access for me, and one generic set for everybody else. All rights are controlled via scripts. I have an internal rights table that is associated with each user. This is cool because I can assign rights by group, then assign a user to one or more groups, and his/her rights are figured out from the groups he/she belongs to (if a user is a member of A and B, and A can do X but B can't, that user CAN, because at least one of his groups can). These privileges are compiled into a return-separated list, which is set into a global field on startup. The status bar is always hidden and custom menus are in use, so all actions are controlled by scripts, and the first steps are always If [ MiddleValues ( global_user_priv ; NUserRight ; 1 ) = 1 ], with an Else branch that just does Exit Script.

      So basically my problem is: how do I have users log in from the local interface file to the main file, so that I can get their user ID and privileges and the solution works properly?
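
      One way this could work, sketched below under assumptions: if matching accounts exist in both files, FileMaker will try to open the hosted data file with the same account name and password the user entered in the interface file, so a single login serves both. The Globals::gUserPriv field, the Rights table (related to Globals by Get ( AccountName )), and the layout names are hypothetical stand-ins:

          # Startup script in the local interface file (a sketch, not a drop-in)
          Go to Layout ["Login" (Globals)]
          # Force a fresh login so Get ( AccountName ) reflects the actual user
          Re-Login []
          # Pull this account's compiled privilege list from the hosted Rights table
          Set Field [Globals::gUserPriv; Rights::PrivilegeList]
          Go to Layout ["Main Menu" (Globals)]

          # Top of every controlled script, as described above
          If [ MiddleValues ( Globals::gUserPriv ; NUserRight ; 1 ) = 1 ]
              # ... protected steps ...
          Else
              Exit Script []
          End If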

      Thanks in advance for your help!

        • 1. Re: Data Separation/Server Speed Advice
          philmodjunk

          1) Database design can be a major factor in how fast your system feels to your users. Any steps you can take to eliminate searches and sorts on unstored or unindexed fields will save major time for large data sets. We have tables here with record counts exceeding a million records, and we have made design changes to improve response times. Here are a few of the things we've done:

          a) Denormalize data carefully and strategically to reduce the need for unstored calculations and for searches/sorts on them. We have an invoicing system here, so when a sale is final and the cashier prints the invoice, the script computes invoice totals from the line items and stores them in number fields; a search for an invoice by total value can then be performed on this stored field instead of on the unstored calculation field that totals the same value from the related line item records. At the end of the day, a "summarizing" script pulls up all line item records for today's transactions and creates a small number of records in a summary table, where each record represents one type of item sold today and includes the total quantity and value. When we need a report summarizing sales by type of item spanning large periods of time, such as a month, a quarter, a year, or even a 5-year comparison chart, we reference the data in this summary table instead of the original line items. In our business, over a thousand line items for the day get "summarized" into less than a dozen summary records, so the reduction in computation needed for our reports is huge.
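
          The stored-total part of that might look something like the fragment below; the Invoices and LineItems table names and the StoredTotal field are assumptions, not the actual system:

              # Run when the cashier finalizes and prints the invoice
              Set Field [Invoices::StoredTotal; Sum ( LineItems::ExtendedPrice )]
              Commit Records/Requests []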

          b) Consider whether you can move older records into an archive file to keep overall file size down. This was a bigger savings when systems were slower, but since it works for us, we haven't changed it. Every night after the summarizing is done, scripts kick in to move the day's invoice and line item records into tables of an archive file. After comparing record counts in the archive file to make sure the records were moved, the records are deleted from the file we use each day to record new sales transactions.
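
          A sketch of that nightly move, assuming an archive file named "Archive" with its own import script; the layout and field names are made up. "//" in a date find request stands for today's date:

              # In the main file: isolate today's line items
              Go to Layout ["LineItems" (LineItems)]
              Enter Find Mode []
              Set Field [LineItems::Date; "//"]
              Perform Find []
              # Hand off to a script in the archive file that imports this found set
              Perform Script ["Import Todays LineItems" from file: "Archive"]
              # Delete from the daily file only if the archive confirms the count
              If [ Get ( ScriptResult ) = Get ( FoundCount ) ]
                  Delete All Records [No dialog]
              End If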

          c) When we still have to search with criteria in an unstored field, we first search on all the criteria we can by entering data in stored/indexed fields, then return to Find mode, specify criteria in the unstored field(s), and use Constrain Found Set to further reduce the records in the current found set. This can be hundreds of times faster than entering all the criteria in a single find request.
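
          A sketch of that two-pass find, with made-up field names; Date is stored and indexed, while cOutstandingBalance is an unstored calculation:

              # Pass 1: find on stored, indexed criteria only
              Enter Find Mode []
              Set Field [Invoices::Date; "1/1/2011...3/31/2011"]
              Perform Find []
              # Pass 2: constrain the much smaller found set on the unstored field
              Enter Find Mode []
              Set Field [Invoices::cOutstandingBalance; ">0"]
              Constrain Found Set []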

          d) Simplify your layouts to make them more "plain vanilla". This isn't what we like to do, but all those cool conditional formats, web viewers, summary fields, and unstored calcs demand processor speed and bandwidth. If you can reduce the use of these features, your layouts will display and update much more smoothly. In one of my files, I put the web viewer inside a tab control just so that users don't have to wait for it to load a different web page every time they change records. They click the tab only when they need to check the viewer's content.

          e) Keep your found sets as small as possible when working with summary fields and the like. In one of our files, I took the file down off the network, opened it with FileMaker, pulled up the summary report layout, and used Show All Records followed by Show Omitted Only to create a found set of zero records. I then put the file back up on the server. Now, when users first access the layout, they get a blank layout (but I pop up a search window for their report criteria in front of it) and no longer have to wait for summary fields to compute totals before they pull up the report they want to see.
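
          That reset boils down to just a few steps, run once with the file opened locally before re-hosting it (the layout name here is hypothetical):

              Go to Layout ["Sales Summary" (Invoices)]
              Show All Records
              # Inverts the found set: every record found becomes zero records found
              Show Omitted Only
              # Close the file and re-host it; users now land on an empty report layout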

          2a) Open Manage External Data Sources. You'll find that you only have to update the reference once for each file you reference there, not once per table occurrence. If you use relative path references, no updating is needed at all; you can just move the files.
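
          For the LAN/WAN split you describe, note that a single data source can list more than one path; FileMaker tries them top to bottom until one succeeds. So the path list might look like the example below, with the internal address first and the public one second (both addresses here are made up):

              fmnet:/192.168.1.10/MainData
              fmnet:/203.0.113.25/MainData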

          2b) I haven't used them myself, but there are utilities you can use to "push" updated files out to your networked users. In your database, you can add a field to a table in your data file where you record the current release number of your front-end file, and add the same field to a table in the front-end file. You can then include script steps in the script that runs when the front-end file opens to compare these two version numbers and pop up a warning message if they do not match.
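
          A minimal sketch of that opening-script check; the FrontEndSettings and DataSettings tables and the VersionNumber fields are assumptions:

              # Fragment of the front-end file's opening script
              # FrontEndSettings lives in this file; DataSettings lives in the hosted data file
              If [ FrontEndSettings::VersionNumber ≠ DataSettings::VersionNumber ]
                  Show Custom Dialog ["Update Needed"; "This copy of the interface file is out of date. Please install the current release before continuing."]
                  Close File [Current File]
              End If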

          • 2. Re: Data Separation/Server Speed Advice
            RestaurantCharlie

            Powerful response! Thank you so much! I'm going to go over each of these things in practice and come back with the results!