
What is tokenisation and can it help performance?

Discussion created by NickLightbody on Feb 13, 2015
Latest reply on Feb 24, 2015 by NickLightbody

I have mentioned tokenisation in the past in relation to my observations on server performance here:

 

What is happening when FileMaker Server becomes overloaded (and how to avoid it):

https://community.filemaker.com/thread/79075

 

http://filemakerhacks.com/2014/12/17/what-is-happening-when-filemaker-server-becomes-overloaded-and-how-to-avoid-it

 

I have been asked to explain what I mean.

 

The term comes from the railways: for an engine to pass through a section of single track used in both directions, the driver had to pick up a physical token, otherwise they were not permitted to enter that section. If there was only one token, two trains could not collide, since only one could ever be on the track at any one time.

 

So it is a method of controlling the flow of competing calls on a finite resource.
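Purely as an illustration, here is a minimal Python sketch of the token idea; the semaphore, the function names and the single-token pool are stand-ins for the concept, not anything from FileMaker or from our framework.

```python
import threading

# One shared token: only the caller holding it may use the protected
# resource, exactly like the single-track railway token.
token = threading.Semaphore(1)

def use_track(train):
    token.acquire()           # pick up the token, or wait until it is free
    try:
        print(f"{train} is on the single-track section")
        # ... work that must never overlap with another caller ...
    finally:
        token.release()       # hand the token back for the next train
```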

 

In this case I observed how fms choked: one call would be a little slower than the previous one, that delay impeded the next call, and the whole thing became choked by continuing calls as each slowed more and more while the delay accumulated, even though performance was already flagging. I reasoned that if I could monitor performance, I could feed that information back to control when the next call was permitted.
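One way to picture that feedback signal, sketched in Python with invented names and an arbitrary window of five operations, is a rolling average of the last few completion times:

```python
from collections import deque
import time

# Rolling average of the last few operation durations; this is the
# "how fast are we going right now" signal used to pace the next call.
recent = deque(maxlen=5)

def record_duration(start):
    recent.append(time.monotonic() - start)

def average_duration():
    return sum(recent) / len(recent) if recent else 0.0
```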

 

Since creating new records seemed to be the most processor-intensive operation, it seemed best to start there.

 

Records in our framework are all created server side, since otherwise WAN and mobile performance is poor.

 

So the call to create a new record is first put into a loop that seeks a free token vacated by a preceding call. When it finds one it exits that loop and enters a second loop, from which the exit is controlled by the average speed of completion of the last few operations. The flow of calls to the CPU is slowed by these two techniques, which is why you see the difference in operation shown in the charts with and without tokens.
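As a rough sketch of those two stages (again in Python rather than FileMaker, with an invented pool size and pacing factor), the first stage blocks on a small pool of tokens where the script would sit in a polling loop, and the second stage holds the call back in proportion to the recent average completion time:

```python
import threading
import time

TOKENS = threading.BoundedSemaphore(2)   # stage 1: wait for a token vacated by a preceding call
PACING_FACTOR = 0.5                      # stage 2: hold back in proportion to recent slowness

def create_record_gated(create_record, average_duration):
    TOKENS.acquire()                     # blocks until a token is free (the first loop)
    try:
        # The slower the last few operations were, the longer this call
        # waits before being sent to the server (the second loop).
        time.sleep(average_duration() * PACING_FACTOR)
        start = time.monotonic()
        result = create_record()         # the actual server-side record creation
        return result, time.monotonic() - start
    finally:
        TOKENS.release()                 # vacate the token for the next queued call
```

Blocking on a semaphore is simply the Python equivalent of the script looping until a free token appears.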

 

The primary challenge was to make the token process use less resource itself than it was attempting to save.

 

Originally it didn't work efficiently enough, but I managed to simplify it sufficiently to give, I think, a benefit. Only when lots of other people see a benefit shall I be completely confident in it, but I do think that it works, based on about three months' observation on various systems.

 

The cost is that, some of the time, a user gets a second or two of delay before a new record is created. If the delay reaches three seconds they get a dialogue, and generally by the time they have responded their call is free to proceed.

 

I think that by adding the Dequeue (delay) time into the chart we should be able to observe it and fine-tune the variable that controls the delay.
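Measuring that Dequeue time is straightforward; a sketch (the logging destination is just a stand-in) is to timestamp a call when it joins the queue and again when it is released:

```python
import time

def timed_wait(acquire):
    queued_at = time.monotonic()
    acquire()                                        # e.g. TOKENS.acquire() from the sketch above
    dequeue_delay = time.monotonic() - queued_at
    print(f"dequeue delay: {dequeue_delay:.3f}s")    # chart this value to tune the pacing variable
    return dequeue_delay
```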

 

In an ideal world the token system would prevent fms from ever becoming overburdened; currently it reduces and controls the pressure to an extent but does not entirely protect it.

 

Several folk to whom I have shown and explained this are first surprised that it works (that includes me, of course) and second wonder why fms doesn't include this already. I suspect that it must already have something like this, and that this technique is merely giving it a little more space in which to operate.

 

The attached pdf illustrates the same test run twice on the same machine, first without and then with token control.

 

Cheers, Nick
