
    Server performance

    ryyno10


      Server: HP
      CPU: Intel Xeon X5550 @ 2.67 GHz (x2)
      RAM: 12 GB
      OS: 64-bit
      C drive: 136 GB
      D drive: 136 GB (backup drive)

      I've been dealing with high Elapsed Time/call (~12,000) and Wait Time/call (~10,000) statistics, as well as a persistent memory leak (~11 GB out of ~12 GB).

      In an effort to increase performance, we recently teamed two NIC cards on the server (Windows Server 2008 R2) to alleviate a possible bottleneck.  However, this has produced little improvement, if not a slight decrease in performance.

      Currently the cache is set at about 600 MB and the cache flush interval at 3 minutes.  I have backups that run at 9, 11, 1, 3, and 5, as well as nightly backups.  The backups are not set for consistency checks.
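
      For reference, a one-off backup can also be triggered from the command line with the fmsadmin tool that ships with FileMaker Server. A minimal sketch (the destination path and admin credentials below are placeholders for my setup):

          # Hedged sketch: run an on-demand backup of all hosted files to the D: backup drive.
          # Replace the filewin: path and the credentials with real values.
          fmsadmin backup -d "filewin:/D:/FMS_Backups/" -u admin -p yourpassword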

      There are 31 databases, roughly 9 GB total, that 35 people use on a daily basis.

      Aside from database design (which isn't great; I inherited a bad design), can anyone provide suggestions to improve performance (in relation to the server or server settings)?

       

        • 1. Re: Server performance
          HOnzaKoudelka

          Is your primary issue consistently slow performance, or is the performance much worse for a noticeably long time during backups?

          Although I don't have this confirmed by other developers yet, my experience is that using a slower drive for backups than for the live database decreases the performance impact of the backup itself. My theory is that when the backup drive is as fast as or faster than the live data drive, the live drive's bandwidth gets fully consumed by the backup and is unavailable for hosting data, while when the backup drive is slower, part of the primary drive's bandwidth remains available for hosting.

          Of course, a slower backup drive will make the backup take longer, so I only recommend this approach if a single backup operation takes a significant amount of time (more than 10 seconds).
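
          To check whether you're in that territory, you can time a single on-demand backup pass. A rough PowerShell sketch, assuming fmsadmin is on the PATH; the destination path and credentials are placeholders:

              # Hedged sketch: measure how long one backup pass takes end to end.
              Measure-Command {
                  fmsadmin backup -d "filewin:/D:/FMS_Backups/" -u admin -p yourpassword
              } | Select-Object TotalSeconds

          If TotalSeconds comes out well above 10, the backup itself is worth tuning before anything else.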

          If you are trying to solve a persistent performance issue, try to find the real bottleneck in the inherited design. It's usually not necessary to redesign the whole solution; finding one bottleneck and optimizing it can often make the whole solution twice as fast with minimal effort. 24U FM Bench can help you with that.

          • 2. Re: Server performance
            ryyno10

            BUMP

            All the drives are identical SAS drives.

            My Windows log is also littered with Event 661 errors (something to do with external authentication).  Apparently this is a common issue on Citrix networks (which our server is on), but I don't think it is entirely related to the performance issues.
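
            Since the warnings already show up in the Windows Application log, I can at least count how frequent they are. A quick PowerShell sketch (the 24-hour window is arbitrary):

                # Hedged sketch: count Event ID 661 entries from the last 24 hours.
                Get-EventLog -LogName Application -After (Get-Date).AddDays(-1) |
                    Where-Object { $_.EventID -eq 661 } |
                    Measure-Object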

            I'm running FMS 11.0.95, but I don't yet have a testing environment for the updates (I cannot risk a production loss).  Is there a performance issue with early versions of FMS?

             

            • 3. Re: Server performance
              HOnzaKoudelka

              I don't recall any significant performance difference between 11v1 and 11v3 which I am currently running.

              You haven't answered whether your primary issue is consistently slow performance or performance that is much worse for a noticeably long time during backups, so it's hard to guess what to focus on. I suggest you first find out whether the backups or the design are the main problem, and then focus on that.

              The 661 warnings may be caused by using the separation model. If you have an interface file separate from the data file and use a different username and/or password for each of them, you'll get at least one 661 warning for every user login. It shouldn't affect speed, but it can be tedious if you have e-mail notifications turned on for warnings.

              • 4. Re: Server performance
                ryyno10

                I suppose it's not so much a noticeable performance hit (though loading does take considerable time) as the alarming statistics and RAM usage indicated by the server (~90% RAM usage).

                I ran a consistency check on a backup file and an error was found:

                WARNING: problems were detected while recovering the database.  The recovered file should NOT be used going forward; copy only the most recent work from it into a backup copy of the original file.
                2012-03-09 11:43:46.598           _Hospital.fp7 0 File blocks: scanned and rebuilt 281896 block(s), dropped 0 invalid data block(s)
                2012-03-09 11:43:46.598 -0500 _Hospital.fp7 0 Schema: scanned fields and tables; some problems were found...
                2012-03-09 11:43:46.614 -0500 _Hospital.fp7 0   fields created to match record data: 0
                2012-03-09 11:43:46.614 -0500 _Hospital.fp7 0   field values deleted due to invalid ID or repetition: 0
                2012-03-09 11:43:46.614 -0500 _Hospital.fp7 0   records deleted due to invalid ID: 0
                2012-03-09 11:43:46.614 -0500 _Hospital.fp7 0 Structure: scanned; 1 item(s) modified
                2012-03-09 11:43:46.614 -0500 _Hospital.fp7 0 File size after recovery is 1176285184 bytes
                2012-03-09 11:43:46.614 -0500 _Hospital.fp7 0 *** Completed recovery to '_Hospital

                What type of inference can be made about the structure modification?  This is kind of vague...

                I have also found information suggesting that Windows Server 2008 R2 allocates a high percentage of RAM to SQL programs...  Perhaps this is true for FileMaker as well?
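
                One way to narrow that down is to check which process actually holds the memory. A hedged PowerShell sketch (fmserver is the usual FileMaker Server process name on Windows, but that's an assumption worth verifying):

                    # Hedged sketch: list the five largest processes by working set.
                    # If fmserver tops the list, FileMaker itself holds the RAM; if no
                    # process accounts for it, the Windows file system cache is the
                    # likely consumer.
                    Get-Process | Sort-Object WorkingSet64 -Descending |
                        Select-Object -First 5 Name, @{ Name = 'WorkingSetMB'; Expression = { [math]::Round($_.WorkingSet64 / 1MB) } }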

                Overall, I suspect the design of the DB is the culprit; however, I want to rule out all other possibilities before moving forward.

                • 5. Re: Server performance
                  philmodjunk

                  You should first use Recover to check your file for damage.

                  Things to keep in mind about Recover:

                  While Recover almost always detects and fully corrects any problems with your file, there are important exceptions:

                  1. Recover does not detect all problems.
                  2. Recover doesn't always fix all problems correctly.
                  3. Best practice is to never put a recovered copy back into regular use or development. Instead, replace the damaged file with an undamaged backup copy if at all possible. You may have to save a clone of the backup copy and import all data from your recovered copy to get a working copy with the most up-to-date information possible.
                  • 6. Re: Server performance
                    ryyno10

                    Phil,

                    I mistyped above; a Recover did produce the dialog quoted in my previous post.  Should I run Recover on older backups to see if they are damaged as well?  I'm not sure if I have a viable backup.

                    • 7. Re: Server performance
                      HOnzaKoudelka

                      If you can find a backup that has the same structure but no corruption, it's a really good idea to take that backup and import data into it from the latest version. If you keep using a database with hidden corruption inside, you may reach a point where it becomes so badly corrupted that you have no choice but to rebuild it from scratch.

                      • 8. Re: Server performance
                        philmodjunk

                        Yes, run a Recover on the backups, working back in time until you get one that does not report this issue. Then you can save a clone of that file and import all the data from a recovered copy of your most recent file. If all else fails, use the recovered copy, provided Recover reports it OK to use; an undamaged backup is safer, but chances are the recovered copy will be fine unless Recover tells you not to use it.