At DevCon, one of the speakers explained that FM12 has the server do the initial portal filter, delivering only the necessary portal records to the client. In FM11 and before, the entire portal relationship was delivered and the client would filter out what it needed. This was particularly problematic if the relationship yielded a lot of records, and could slow things down considerably. So this is one nice behind-the-scenes improvement in version 12.
What I heard from I-don't-remember-which-FMI-employee during I-don't-remember-which-webinar is that the server only takes over the calculation for certain very simple filter calculations that 1) only reference fields in the table occurrence being filtered (i.e. the TO of the portal), and 2) only compare those fields to constant values. (So comparing BaseTable::field > PortalTable::field would not get the optimized treatment.) There was some uncertainty as to whether the server-side optimization would work if the comparison referenced $$variables (and some embarrassing ignorance from the FMI rep, who wasn't sure whether it was even valid to use $$variables in a portal filter calculation!). I could be misremembering some significant details.
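To make those two conditions concrete, here's a rough Python sketch of the classification rule as I understand it. This is only a mental model with invented names (the table occurrence, the regex-based check, the function itself), not FileMaker's actual parser:

```python
# Hypothetical mental model (not FileMaker's implementation): a filter
# expression qualifies for server-side evaluation only if every field it
# references belongs to the portal's table occurrence AND every comparison
# is against a literal constant (no $variables or $$variables).
import re

PORTAL_TO = "Zips"  # assumed table occurrence shown in the portal

def is_server_optimizable(expr: str) -> bool:
    """Return True if, under the rules described above, the host could
    evaluate this portal filter before sending records to the client."""
    # Any field reference outside the portal's TO disqualifies the filter.
    referenced_tos = re.findall(r"(\w+)::\w+", expr)
    if any(to != PORTAL_TO for to in referenced_tos):
        return False
    # Any variable reference ($var or $$var) disqualifies it as well.
    if re.search(r"\${1,2}\w+", expr):
        return False
    return True
```

Under this model, `Zips::City = "San Antonio"` would qualify, while `Zips::City = $$city` or `BaseTable::field > Zips::field` would not.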
Thanks for trying to help. Yet again we have a problem with FileMaker documentation. I am hoping to find more detail and more certainty.
I just spoke with Matt O'Dell, and he confirmed what "jbante" posted. That is, the only change in Portal Filter Optimization between FileMaker Server 11 and FileMaker Server 12 is when comparing a field in the referenced portal to a constant value, as this is executed on the Server before sending the results to the client. All other cases, including those referencing global variables, are evaluated on the client side.
I agree that FileMaker often lacks documentation on how things work under the hood, and this has been consistent for a long time. This is why we developers often have to do a lot of testing to find new ways to optimize things through trial and error. But I do like it when people post things here in TechNet that they discover significantly improve performance, so that we can all benefit from making more nimble solutions.
Thanks, jbante, for sharing what you heard!
I wonder if you can ask Matt another question. Can multiple operations be performed and do they still benefit from this optimization?
Example of something I am testing to evaluate this:
I have a table of states that contains only unique values for states, 60 records.
I have another table of zip codes and cities, 42,382 records.
I set up a Cartesian relationship between the two.
On a States based layout, I display a portal to Zips.
It displays the 42,382 related records, with the expected performance hit.
Now I modify the portal, and add a filter calc.
Zips::City = "San Antonio"
The displayed result in the portal is correct; and the performance appears to be faster.
If I set up a different calc:
Zips::City = $$city
and set $$city to "San Antonio"
Again the portal displays the correct result - but according to Matt, performance will suffer because the filter evaluates only on the client, across the entire record set.
That appears to be the case.
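As a mental model of why the constant filter feels faster, here's a toy sketch of the transfer cost being described. The record counts mirror the Zips example above, but the data, function, and API are all invented for illustration; FileMaker obviously doesn't work in Python:

```python
# Toy model: with a constant filter (FM 12 optimization) the host returns
# only matching rows; with a $$variable filter it must return every related
# row and let the client do the filtering.
zips = [{"City": "San Antonio", "Zip": "78201"}] * 700 + \
       [{"City": "Austin", "Zip": "73301"}] * 41682  # 42,382 rows total

def fetch_portal_rows(rows, predicate, server_side):
    """Return (rows_sent_over_the_wire, rows_displayed_in_portal)."""
    if server_side:
        sent = [r for r in rows if predicate(r)]  # host filters first
        return len(sent), sent
    shown = [r for r in rows if predicate(r)]     # client filters after full download
    return len(rows), shown

pred = lambda r: r["City"] == "San Antonio"
wire_fast, shown_fast = fetch_portal_rows(zips, pred, server_side=True)
wire_slow, shown_slow = fetch_portal_rows(zips, pred, server_side=False)
# Both paths display the same rows; only the wire cost differs.
```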
Next post: my question.
Can we "chain" these operations, and keep the performance benefit if the first part of the statement deals with a constant?
Zips::City = "San Antonio" and
Left ( Zips::Zip ; 1 ) = $$N
It appears to me that this works.
That is, it does work - the portal displays the correct results.
But - was only the subset evaluated on the client?
Or did Server say "Oh wait there's a variable in that expression! Client - here's the whole darn table, you look at it."
It appears to me that the performance benefit is retained, is that what we should expect?
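Whether FileMaker actually splits a chained filter this way is exactly the open question, but here's a sketch (all names invented) of what "retaining the benefit" would mean - the host applies the constant-comparison conjunct, and the client applies only the $$variable conjunct to the much smaller subset it receives:

```python
# Hypothetical split of a chained portal filter: NOT confirmed FileMaker
# behavior, just an illustration of the question being asked.
def chained_fetch(rows, server_pred, client_pred):
    """Host evaluates the constant comparison; client evaluates the
    $$variable comparison against only the rows it was sent."""
    sent = [r for r in rows if server_pred(r)]    # done on the host
    shown = [r for r in sent if client_pred(r)]   # done on the client
    return len(sent), shown

rows = [{"City": "San Antonio", "Zip": "78201"},
        {"City": "San Antonio", "Zip": "68201"},
        {"City": "Austin", "Zip": "73301"}]
n_var = "7"  # stands in for $$N
sent_count, shown = chained_fetch(
    rows,
    lambda r: r["City"] == "San Antonio",   # Zips::City = "San Antonio"
    lambda r: r["Zip"][0] == n_var)         # Left ( Zips::Zip ; 1 ) = $$N
```

If the optimization is retained, only the two San Antonio rows cross the wire, and the client narrows them to one.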
> It appears to me that the performance benefit is retained, is that what we should expect?
Just to clarify a point re testing -- did you close out your session and go to a new parent record to be sure you were starting a new cache from the server each time? If not, you may have been benefiting from client-side processing of pre-cached records.
Thanks for bringing up an important (and under-documented) behavior change in FileMaker 12. I just came across this in Jon Thatcher's DevCon 2012 session on "Server and WAN Performance Under the Hood," which is posted here (https://fmdev.filemaker.com/docs/DOC-3295). At the 56:25 mark, he dives into this very issue.
As posted above (with the caveats mentioned by jbante and TSGal), the client adds portal filter calcs to the join query that is sent up to the host, with the result that fewer records are sent back across the WAN.
But, and here's where it gets even more exciting, it sounds like there was another, closely related portal-behavior change: only the first 25 records (enough to fill the portal) are initially sent back, the rest presumably being fetched on demand. (He gives an example where an FM 11 portal generated nearly 500 remote calls to fetch portal records; this was decreased to 2 remote calls in FM 12!)
Currently, filter calcs that use a Get function or comparison to a constant are optimized for this, but not (as mentioned) comparison to a global (nor to a global variable, per TSGal).
Check out Jon's presentation for his complete explanation.
What wasn't clear from Jon's presentation (maybe we can get a clarification from TSGal on this, as well) was whether the change to downloading only the first 25 records also applies in the absence of a portal-level filter. One would presume it does. Clearly, in the presence of a portal filter, this change was predicated on letting the server process the filter calc (when it can), so that it knows which initial 25 records to send to the client. But the change would seem to be broadly applicable and beneficial for non-filtered portals as well.
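The on-demand behavior Jon describes can be sketched like this. The 25-row page size comes from his talk; the class, its methods, and the call tallies are an invented illustration, not FileMaker internals:

```python
# Toy model of paged portal fetching: only enough rows to fill the portal
# cross the WAN initially; scrolling triggers further remote calls.
class PortalFetcher:
    PAGE = 25  # enough rows to fill the visible portal

    def __init__(self, related_rows):
        self.rows = related_rows
        self.remote_calls = 0
        self.fetched = []

    def fetch_next_page(self):
        """Simulate one remote call pulling the next page of records."""
        self.remote_calls += 1
        start = len(self.fetched)
        self.fetched.extend(self.rows[start:start + self.PAGE])
        return self.fetched

portal = PortalFetcher(list(range(500)))     # ~500 related records
first_view = portal.fetch_next_page()        # only 25 rows sent initially
```

Under this model, displaying the portal costs one small remote call instead of downloading all ~500 related records up front.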
Thanks, that is an excellent reference. It mostly seems to answer my question; but does not address the case statement method.
I'm going to declare the question answered, but I really think we need some examples or experimental results.
So I hope TSGal can get Matt or somebody to continue looking into this.
Mark Scott and BruceRobertson:
So as not to leave either of you hanging, inquiries have been sent out. I will post again when I receive replies.
And thanks, Bruce, for my first "Correct Answer." Had I anticipated this moment, I would have bought you a beer last week in San Diego, but will instead drink one myself tonight in celebration!
How about we just welcome you to TechNet, TSGal!!
Although the limitations of the portal filter are significant, there are actually a lot of cool things you CAN do with the existing functionality.
Since all (?) of the Get functions can be used, you can do things like filter the portal by Get ( UserName ), or by Get ( CurrentDate ), Get ( WindowName ), Get ( ScriptName ), Get ( ScriptParameter ), Get ( ActiveFieldContents ), etc.
In my States/Zip example I can name windows with the state name and set the portal to only display Zips for that state.
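As a toy model of that window-name trick (all names invented; this only shows the filtering logic, not where FileMaker evaluates it):

```python
# The "window name" acts as a piece of session context the filter can
# compare against, so each window shows only its own state's zips.
def filter_zips_by_window(zips, window_name):
    """Show only the zips whose State matches the current window's name."""
    return [z for z in zips if z["State"] == window_name]

zips = [{"State": "TX", "Zip": "78201"},
        {"State": "CA", "Zip": "90210"}]
visible = filter_zips_by_window(zips, "TX")  # as if Get ( WindowName ) = "TX"
```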