4 Replies Latest reply on Apr 23, 2014 1:05 PM by BenDouglas

    Optimize Sub-Summary Text Aggregation OR Unique Value filter for List()

           tl;dr version: I've built a really round-about way of putting an aggregated text list of record codes in the header of a sub-summary, but it's really slow. There's probably an easy optimization I'm missing - is there a faster way?

           Long version:

           Ok, here's the background: my boss wants a summary layout where the header of each sub-summary section of jobs shows a list of all the job codes in that section. Unfortunately, Summary fields don't seem to work with text - ideally there would be a summary option that aggregates a list of values for the sub-summary group (feature request?).

           So to work around this, I made a self-join relationship from the table back to itself, matching on the break fields used in the summary - see Exhibit A.

           Then I created an unstored Calculation field that uses the List() function across that relationship to gather all the Job Codes - see Exhibit B.
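           For anyone who can't see the screenshots, the setup is roughly the following (the table, field, and relationship names here are placeholders, not the real ones from the file):

           ```
           /* Exhibit A (sketch): self-join matching on the summary break fields, e.g.
                Jobs::Department = Jobs_SameBreak::Department
                Jobs::Week       = Jobs_SameBreak::Week   */

           /* Exhibit B (sketch): unstored calculation, result type Text */
           List ( Jobs_SameBreak::JobCode )
           ```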

           BUT since the job codes are not a unique key (and I'm really not familiar with this particular database), the calculation field picks up a bundle of duplicate entries. So you may get something like the following:


                25055, 25055, 25055, 25056, 25056
           When all I want is


                25055, 25056

           I couldn't figure out how to make only the unique set show up through the relationship, nor could I figure out how to modify the Calculation field so it returns only unique values.

           So instead, I had to build a manual unique-value script loop, which looks like Exhibit C. The script works by looping through each value of the calculated list, checking whether the value is already in a new variable, appending it to that variable if not, and finally copying the variable into another text field (also shown in Exhibit B).
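           In case Exhibit C is hard to read, the loop is roughly this (again, field and variable names are placeholders for the real ones):

           ```
           Set Variable [ $list ; Value: Jobs::cJobCodeList ]
           Set Variable [ $i ; Value: 1 ]
           Loop
              Exit Loop If [ $i > ValueCount ( $list ) ]
              Set Variable [ $value ; Value: GetValue ( $list ; $i ) ]
              If [ IsEmpty ( FilterValues ( $unique ; $value ) ) ]
                 Set Variable [ $unique ; Value: List ( $unique ; $value ) ]
              End If
              Set Variable [ $i ; Value: $i + 1 ]
           End Loop
           Set Field [ Jobs::JobCodesUnique ; $unique ]
           ```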

           The problem is that I'm working with a HUGE number of job records, with many codes in each list, so this script takes 3-5 minutes to run - definitely not an ideal solution.

      What would be PERFECT is a built-in function (I'm not using FMP Advanced, so no custom functions) that filters the unique values out of a list - then I could just wrap it around the List() call in the calculation field from Exhibit B.

           But if no such function exists, is there a more optimized way of running the script in Exhibit C?