If you mean FETCH FIRST:
the columns returned are part of the SELECT list:
SELECT f1, f2, f3
Get the SQL reference PDF (it's more complete) and the example file(s)!
Your question is confusing. Yes, you're correct that FETCH FIRST only controls entire rows; that's its purpose. If you want specific fields, you specify them in the SELECT statement.
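For illustration, here is roughly what that looks like in an ExecuteSQL call (table and field names here are made up; check the FileMaker SQL reference for the exact FETCH FIRST grammar your version supports):

```
ExecuteSQL (
  "SELECT Price
   FROM Data
   WHERE ProductID = ?
   ORDER BY Price DESC
   FETCH FIRST 25 PERCENT ROWS ONLY" ;
  "" ; ¶ ; Data::ProductID
)
```

The SELECT list controls which fields come back; FETCH FIRST only trims how many of the sorted rows are returned.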
FileMaker can access records outside the found set in a number of ways:
1) Create a new window and perform a separate Find.
2) Use a global field to create a temporary relationship.
3) Use existing relationships and summary or calculation fields.
What exactly are you trying to accomplish here? What is the "top 25% of results"? We would need more detail before we can help you.
Thanks for the input everybody, to add some clarity as an example:
I have sales records for 200 customers
They each sell 100 products, each with its own ID and with prices stored in my table.
The data table therefore holds 20,000 records.
Each customer has a sub summary report, line by line of their price per product
This is calculated simply using Summary commands
In another column, the average price of the same product for all customers is displayed
Example report for customer 1 might look like:
Product ID   Their price   Total average   Top 25% average
10           50            55
20           45            47.50
30           60            58
1000         50            57.20
'Their price' is taken solely from the calculations derived from the customer's own found set.
'Total average' is taken from ExecuteSQL looking at the table as a whole for IDs 10, 20, 30, etc.
As well as the above, I now need a further calculation which:
1) Looks at the entire range for a specific product ID
2) Sorts it into descending order
3) Selects the top 25% of that range
4) Averages that sub-range
5) Displays the result on the relevant row
This is why I was looking at the FETCH FIRST ... PERCENT clause, to produce a formula that's practical for 100 rows in a sub-summary report.
Any advances on that greatly appreciated
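The steps being described (sort descending, take the top 25%, average that slice) can be sketched in plain Python, assuming the per-product prices have already been fetched into a list:

```python
import math

def top_quarter_average(prices):
    """Average of the top 25% of prices (rounded up to at least one value)."""
    ranked = sorted(prices, reverse=True)   # descending order
    take = math.ceil(len(ranked) / 4)       # top 25%, minimum one row
    top = ranked[:take]
    return sum(top) / len(top)

# e.g. 8 prices -> the top 2 (70 and 65) are averaged
print(top_quarter_average([50, 45, 60, 30, 55, 40, 70, 65]))  # -> 67.5
```

The same arithmetic is what any FileMaker-side solution (SQL or relationship-based) has to reproduce per product ID.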
Instead of trying to do this with in-table calculation fields, I suggest you script it instead. Create a completely separate table that has simple number fields for the values you want, and populate it via scripting. If you try to do this with in-table calculation fields, you’re going to suffer performance issues as the data set grows (especially using ExecuteSQL).
If this is intended to be an on-demand report, you might consider assembling the data in variables and using a Virtual List instead.
Thanks for the ideas, I agree with both your points about a scratch table and the virtual lists.
The reason I've not gone down that route so far is, ironically, speed. I may be wrong here, but as I understand it:
If I run a scratch table with simple numbers, the report itself will run very quickly on demand because the calculations are limited, but the table will take time to generate, since the same 'Total average' calculations still have to run at some point. The issue is that it's a live, always-changing system, so 'Total average' is a moving figure. It comes down to choosing when to generate the 'Total average' figures.
The other side, on virtual lists, is that they'd initially be based on ExecuteSQL to generate the global variable. Given that in this example I have 100 rows of products to get overall average prices for, surely that would require 100 ExecuteSQL statements, albeit in a loop.
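For what it's worth, the 100 looped per-product calls can often be collapsed into a single grouped query that returns every product's average in one result set. Sketched here with Python's sqlite3 standing in for ExecuteSQL (table and field names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (product_id INTEGER, price REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [(10, 50), (10, 60), (20, 45), (20, 50), (30, 60), (30, 56)],
)

# One query, one round trip: average price per product across the whole table.
rows = conn.execute(
    "SELECT product_id, AVG(price) FROM sales"
    " GROUP BY product_id ORDER BY product_id"
).fetchall()
print(rows)  # -> [(10, 55.0), (20, 47.5), (30, 58.0)]
```

One GROUP BY query instead of 100 separate statements is usually the bigger win, whatever tool runs the SQL.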
Virtual Lists do NOT require ExecuteSQL. That's one way, but not the only way. A script can assemble the list without SQL.
Edit: And even if you do use ExecuteSQL for your VL, you can control the commit state of the records inside a script. You can't do that when it's defined as an unstored calculation field in a table. If there are open records, the server will send all the records of all the tables in the query down to the client - which can be a horrible performance hit. If all records are committed, on the other hand, the server will process the query and just return the results. Much, much faster.
Make the relationship on Product ID sorted by price, descending, then use:

Let ( [
all = List ( data::price ) ;
onefourth = LeftValues ( all ; Ceiling ( ValueCount ( all ) / 4 ) )
] ;
Case ( ValueCount ( onefourth ) < 2 ; GetAsNumber ( onefourth ) ;
Evaluate (
"Average ( " &
Substitute ( Left ( onefourth ; Length ( onefourth ) - 1 ) ; ¶ ; ";" ) &
" )" ) ) )
This will work only if each product has fewer than about 4,000 related records, since the built expression must stay within the calculation text limit.
If you have more records, add a calculation field:

Let ( all = List ( data::pk ) ; LeftValues ( all ; Ceiling ( ValueCount ( all ) / 4 ) ) )

and make another relationship using this field = data::pk. You then get the result with:

Average ( dataOnNewRelation::price )

pk is the primary key, a unique non-null field. If one doesn't exist, Get(RecordID) can be used.
Thanks for the inspiration. I've done a bit of a rewrite of the report involved and now this references a static number warehouse rather than the live data. The warehouse is updated each time a user imports data. Results are amazingly quick and the report now renders on screen in seconds rather than minutes.
This has also had a positive effect on the printing and PDFing of the report which was always a 15 minute bottleneck. I've discovered that using script steps:
Enter preview mode
Save as PDF
Enter browse mode
is so much quicker than printing to any sort of PDF.