
ExecuteSQL Speed Improvements

Discussion created by CICT on Oct 26, 2013
Latest reply on Nov 5, 2013 by Oliver_Reid

We've loved the opportunities that ExecuteSQL has opened up, but on many occasions we've had to abandon it due to slow calculation speeds. However, we've been persevering and would welcome comments from people with far more experience than us on the following:

 

We've carried out some controlled tests, which included quitting FileMaker before each run, as a second run of a query can often be much faster than the first. Equally, all fields referenced have been indexed.

 

Our tests were carried out on a horse insurance document management system that required the consolidation of data from many tables, but to keep things more generic, I've substituted generic 'Orders' and 'LineItems' names for the actual tables and fields.

 

The 'Orders' table consists of 5k records and 'LineItems' 13.5k records. A script was used to display a dialogue box containing the time it took to run the ExecuteSQL calculation. The SQL statement is built up within a Let statement, with the result being the ExecuteSQL calculation, so here goes:

 

The following query:

SELECT l.description, l.qty, l.unit, l.total
FROM LineItems l
INNER JOIN Orders o ON l.kIDOrd = o.kID
WHERE o.kID = ? AND l.group = 'red'

 

where ? is the Orders::kID value passed as the ExecuteSQL parameter, and kID and kIDOrd are the primary and foreign keys.

 

We varied the joins, fields and parameters, but this consistently took between 10 and 11 minutes to run. Operationally, that is completely unacceptable.

 

The problem appears to be that the above has to run the inner join across all 5k Orders records before isolating the unique order kID, hence the time taken. If the number of Orders records is reduced, the calculation runs more quickly. However, we don't know of many databases where the number of records gets smaller!

 

In the above example we are only pulling records from a single sub-table, so in this case we can simplify the SQL to:

SELECT l.description, l.qty, l.unit, l.total
FROM LineItems l
WHERE l.kIDOrd = ? AND l.group = 'red'

 

Across the 5k records this is almost instantaneous, but it only works when selecting from one table.
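To sanity-check that the two statements really return the same rows, here is a small sketch using Python's sqlite3 module as a stand-in (the table and field names mirror the generic example above; SQLite says nothing about FileMaker's timings, and note that group is a reserved word in SQL, so it has to be double-quoted here):

```python
import sqlite3

# Toy in-memory stand-in for the generic Orders / LineItems example.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Orders (kID INTEGER PRIMARY KEY);
    CREATE TABLE LineItems (kIDOrd INTEGER, description TEXT,
                            qty INTEGER, unit REAL, total REAL,
                            "group" TEXT);
    CREATE INDEX idx_li ON LineItems (kIDOrd, "group");
    INSERT INTO Orders VALUES (1), (2), (3);
    INSERT INTO LineItems VALUES
        (1, 'saddle',  1, 100.0, 100.0, 'red'),
        (1, 'bridle',  2,  40.0,  80.0, 'blue'),
        (2, 'blanket', 1,  25.0,  25.0, 'red');
""")

join_sql = """
    SELECT l.description, l.qty, l.unit, l.total
    FROM LineItems l
    INNER JOIN Orders o ON l.kIDOrd = o.kID
    WHERE o.kID = ? AND l."group" = 'red'
"""
direct_sql = """
    SELECT l.description, l.qty, l.unit, l.total
    FROM LineItems l
    WHERE l.kIDOrd = ? AND l."group" = 'red'
"""

# Both statements return the same rows for any given order ID.
for order_id in (1, 2, 3):
    assert con.execute(join_sql, (order_id,)).fetchall() == \
           con.execute(direct_sql, (order_id,)).fetchall()
```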

 

All the documentation we've used recommends JOIN, so we haven't tried joining the tables through the WHERE clause instead.

 

Although we've been looking at nested options, the simplest solution to our speed problems has been to introduce something like the following:

 

Let ( [
// ordID is concatenated directly into the SQL text
// (a text key would also need single quotes around it)
ordID = Orders::kID ;
sql =
"SELECT l.description, l.qty, l.unit, l.total
FROM LineItems l
INNER JOIN Orders o ON l.kIDOrd = " & ordID & "
WHERE o.kID = ? AND l.group = 'red'"
] ;
ExecuteSQL ( sql ; "" ; "" ; Orders::kID )
)

 

which can be further simplified to (though this has had no noticeable effect on speed):

Let ( [
ordID = Orders::kID ;
sql =
"SELECT l.description, l.qty, l.unit, l.total
FROM LineItems l
INNER JOIN Orders o ON l.kIDOrd = " & ordID & "
WHERE l.group = 'red'"
] ;
ExecuteSQL ( sql ; "" ; "" )
)

 

By linking the inner join directly to the Let variable (Orders::kID), the 10 to 11 minutes has dropped to almost instantaneous results, as only the single required record of the 5k is assessed. We would have thought the WHERE clause and the ExecuteSQL parameter would achieve this, but they appear to be applied after all 5k records have been assessed by the standard INNER JOIN. Using the Let variable, the restriction is applied as each record is visited.
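Outside FileMaker, the workaround amounts to splicing the key value into the SQL text as a literal before ExecuteSQL ever sees it. A minimal Python/SQLite sketch of the same pattern, mirroring the simplified Let block above (the function name is ours, not FileMaker's):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE LineItems (kIDOrd INTEGER, description TEXT, "group" TEXT);
    INSERT INTO LineItems VALUES (1, 'saddle', 'red'), (2, 'blanket', 'red');
""")

def line_items_for(order_id):
    # Mirrors the Let-variable trick: the key is embedded in the SQL text
    # as a literal, so no bound parameter is needed. int() keeps this safe;
    # a text key would need quoting and escaping (SQL-injection risk).
    sql = f"""
        SELECT l.description
        FROM LineItems l
        WHERE l.kIDOrd = {int(order_id)} AND l."group" = 'red'
    """
    return con.execute(sql).fetchall()

assert line_items_for(1) == [('saddle',)]
```

The bound-parameter form is still the safer default; splicing literals is worth it here only because of the speed difference observed above.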

 

We can see additional benefits of this approach with IN, where a subset of IDs could be reported against, and we also have the option of combining data from many sub-tables.
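The IN idea can be sketched the same way: a list of order IDs (for instance a FileMaker found set gathered into a Let variable) becomes a comma-separated list spliced into the IN clause. Again a hedged SQLite stand-in, not FileMaker itself:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE LineItems (kIDOrd INTEGER, description TEXT, "group" TEXT);
    INSERT INTO LineItems VALUES
        (1, 'saddle', 'red'), (2, 'blanket', 'red'), (3, 'bridle', 'red');
""")

# Build the comma-separated ID list, coercing each value to int so
# only numeric literals can reach the SQL text.
order_ids = [1, 3]
id_list = ", ".join(str(int(i)) for i in order_ids)
sql = f"""
    SELECT l.kIDOrd, l.description
    FROM LineItems l
    WHERE l.kIDOrd IN ({id_list}) AND l."group" = 'red'
    ORDER BY l.kIDOrd
"""
rows = con.execute(sql).fetchall()
assert rows == [(1, 'saddle'), (3, 'bridle')]
```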

 

I fully appreciate that this isn't suitable for wider reporting, but so much of what we do is combining information from various tables for a single record, where SQL appears to be the best tool for the job and reduces the TOs required in the Relationships Graph.

 

So, we were wondering whether anyone has a more established way of achieving the above? Can anyone see any pitfalls in it? Does anyone have other techniques they use to overcome the apparent slowness of ExecuteSQL in FileMaker?

 

I look forward to your comments.

 

Andy
