
    What is tokenisation and can it help performance?

    NickLightbody

      I have mentioned tokenisation in the past in relation to my observations on server performance here

       

      What is happening when FileMaker Server becomes overloaded (and how to avoid it):

      https://community.filemaker.com/thread/79075

       

      http://filemakerhacks.com/2014/12/17/what-is-happening-when-filemaker-server-becomes-overloaded-and-how-to-avoid-it

       

      I have been asked to explain what I mean.

       

      The term comes from the railways: before taking an engine onto a single-track section worked in both directions, the driver had to pick up a physical token, otherwise s/he was not permitted to use the track. If there was only one token then two trains could not collide, since only one could ever be on that section at any one time.

       

      So it is a method of controlling the flow of competing calls on a finite resource.

       

      In this case, when I observed how fms choked, I saw that one call was a little slower than the one before, and that delay impeded the next call; continuing calls kept arriving even though performance was flagging, each slowed more and more as the delay accumulated, and the whole thing became choked. I reasoned that if I could monitor performance I could feed that information back to control when the next call was permitted.

       

      Since creating new records seemed the most processor intensive operation it seemed best to start there.

       

      Records in our framework are all created server side since otherwise WAN + mobile performance is poor.

       

      So the call to create a new record is first put into a loop that seeks a free token, vacated by a preceding call. When it finds one it exits that loop and enters a second loop, whose exit is controlled by the average speed of completion of the last few operations. The flow of calls to the cpu is slowed by these two techniques - which is why you see the difference in operation shown in the charts with and without tokens.
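
      As a rough illustration only - written as a Python sketch rather than a FileMaker script, and with the token pool size, polling interval and moving-average window chosen purely for the example - the two loops look something like this (the 3 second "busy" dialogue mentioned further down is included as well):

import threading
import time
from collections import deque

TOKENS = threading.Semaphore(2)   # finite pool of tokens (pool size is an assumption)
RECENT = deque(maxlen=5)          # durations of the last few completed operations
LOCK = threading.Lock()

def create_record_with_token(create_record, warn_after=3.0):
    waiting_since = time.monotonic()
    # Loop 1: wait until a preceding call vacates a token.
    while not TOKENS.acquire(timeout=0.1):
        if time.monotonic() - waiting_since > warn_after:
            print("The server is busy - please wait a few moments")
            waiting_since = time.monotonic()
    try:
        # Loop 2: pace the call by the average duration of recent operations,
        # so the slower the server has been, the longer this call waits.
        with LOCK:
            pause = sum(RECENT) / len(RECENT) if RECENT else 0.0
        time.sleep(pause)
        started = time.monotonic()
        result = create_record()            # the server-side record creation
        with LOCK:
            RECENT.append(time.monotonic() - started)
        return result
    finally:
        TOKENS.release()                    # vacate the token for the next call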

       

      The primary challenge was to make the token process use less resource itself than it was attempting to save.

       

      Originally it didn't work efficiently enough, but I managed to simplify it sufficiently to give, I think, a benefit. I shall only be completely confident in it once lots of other people see a benefit, but I do think that it works, based on about 3 months' observation on various systems.

       

      The cost is that, some of the time, a user gets a second or two of delay before a new record is created. If the delay reaches 3 seconds they get a dialogue, and generally by the time they have responded their call is free.

       

      I think that by adding the Dequeue (delay) time into the chart we should be able to observe it and fine-tune the variable that controls the delay.

       

      In an ideal world the Token system would prevent fms ever becoming overburdened; currently it reduces and controls the pressure to an extent but does not entirely protect the server.

       

      Several folk to whom I have shown and explained this are first surprised it works - that includes me of course - and second wonder why fms doesn't include this already. I would suspect that it must already have something like this and that this technique is merely giving it a little more space in which to operate.

       

      The attached pdf illustrates the same test run twice on the same machine first without and then with token control.

       

      Cheers, Nick

        • 1. Re: What is tokenisation and can it help performance?
          NickLightbody

          Here is an important caveat to this technique discovered from further research, testing and getting some informed advice.

           

          If the core structure of a solution is simple and the points at which there is most load can be clearly identified then this Token Control technique can provide an advantage by regulating access to the bottleneck and preventing or reducing the increasing contention that leads to a choke.

           

          There are two types of contention between many clients we have to consider:

           

          1. Resource Contention: competition between clients for the server's resources, and
          2. Solution Contention: competition for access to solution tables and records

           

          However, such a simple solution does by its nature create greater Solution Contention around key elements of the solution. If there is, for example, just one data table, then whenever a new record is created that table is locked until the record creation is committed; this delays other calls / threads getting access to the table, hence greater contention. So if you have, say, 30 clients all accessing the same core table, you will get substantially worse performance than if you have 10 clients accessing each of 3 identical files on the same server, since the solution contention is divided by 3. I have demonstrated this recently in testing.
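
          To make the arithmetic concrete, here is a toy simulation - in Python, with invented timings rather than my real test figures - in which each record commit holds its table's lock briefly, so one shared table serialises all 30 writers while three separate tables or files let three writers proceed at once:

import threading
import time

HOLD = 0.005   # seconds a table stays locked per record commit (invented figure)

def writer(table_lock, records=20):
    for _ in range(records):
        with table_lock:          # table locked until the record creation is committed
            time.sleep(HOLD)      # stand-in for the work of creating the record

def run(num_tables, clients=30):
    locks = [threading.Lock() for _ in range(num_tables)]
    threads = [threading.Thread(target=writer, args=(locks[i % num_tables],))
               for i in range(clients)]
    start = time.monotonic()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.monotonic() - start

print("30 clients, 1 shared table   :", round(run(1), 2), "s")
print("10 clients on each of 3 files:", round(run(3), 2), "s")   # roughly a third of the time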

           

          I think, though I have not done the testing yet, that a similar thing happens with multiple core tables in a solution, but I suspect that the improvement in performance is not as great.

           

          Building a Token Control System around a limited number of points of maximum Solution Contention is possible; however, if you increase the number of files (or tables) in order to reduce the Solution Contention, then building an effective Token Control System becomes far less practicable.

           

          The best option would be for FileMaker Server to provide more help in detecting declining performance and feeding that back to the user:

          1. perhaps along the lines of "The server is busy - please wait a few moments", or
          2. generate an error code which the solution could use, or
          3. maybe FMS could accept a query from a client requesting the current Wait Time (µsec/call) - which the server statistics already gather? Then the developer could simply choose to delay a process until the Wait Time drops to a specific level, as sketched below.
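
          Purely as a hypothetical sketch of option 3 - FMS does not currently expose Wait Time to clients, so get_wait_time() below simply stands in for whatever query such a feature might offer, and the threshold and polling interval are assumptions - the developer's side could be as simple as:

import time

def wait_until_server_quiet(get_wait_time, threshold_usec=500, poll=0.5, give_up=10.0):
    # Delay until the reported Wait Time (µsec/call) drops below the threshold,
    # or give up after a while and let the solution decide what to do next.
    deadline = time.monotonic() + give_up
    while get_wait_time() > threshold_usec:
        if time.monotonic() > deadline:
            return False          # server still busy
        time.sleep(poll)
    return True

# Usage: defer a record-creation step until the server calms down, e.g.
# if wait_until_server_quiet(get_wait_time):
#     create_record()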

           

          This general point about performance also relates to a recent dialogue I had with wimdecorte about breaking a solution into multiple files. I said I didn't think that multiple files as such - all other things being equal - necessarily performed better than a single file, and that the increased performance resulted from reduced caching where inefficient relationship graphs were split and reduced in size. However, on reflection, and now considering Solution Contention - I was wrong Wim, you were correct!

           

          On the basis of reducing Solution Contention multiple files must enable the server to support more clients.

           

          However, that assumes that these multiple files are as efficient as a solution with a simpler core structure - which is I think a different question?

           

          Cheers, Nick

          • 2. Re: What is tokenisation and can it help performance?
            gdurniak

            Yes, this was demonstrated at DevCon, a few years back

             

            Creating a separate file, with a simple Graph, to do a single task, can help

             

            greg

             

            > increased performance resulted from reduced caching where inefficient relationship graphs were split and reduced in size

            • 3. Re: What is tokenisation and can it help performance?
              NickLightbody

              Hi Greg,

              yes, thanks

              Cheers, Nick