Not positive this will help, but you never know. This is mostly about minimizing UI updates and layout swapping.
0. Sort data as you are doing.
1. Create a join to the summary table using the claim# as the key and allow creation of related records.
2. Perhaps a very sparsely populated layout for your medical claims data, with just the fields you need. Or maybe no fields at all on the layout.
3. Freeze Window.
4. Commit every "n" records.
5. Perform your script on the server. (See the sketch after this list for how steps 1 through 5 might fit together.)
6. Or maybe you could export your data summarized by claim#, exporting just the claim# field, and then import the results into your summary table.
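A rough sketch of how steps 1 through 4 might combine as script steps. Everything named here is an assumption for illustration: a bare layout called "Claims_Bare" based on the claims table, a relationship Claims::ClaimNumber = Summary::ClaimNumber with "Allow creation of records via this relationship" enabled on the Summary side, and a commit interval of 500. Step options are paraphrased.

```
# Sketch only. Assumes: bare layout "Claims_Bare" (step 2), a relationship
# Claims::ClaimNumber = Summary::ClaimNumber with "Allow creation of records
# via this relationship" enabled on the Summary side (step 1), and records
# already sorted by claim# (step 0).
Freeze Window
Go to Layout [ "Claims_Bare" ]
Go to Record/Request/Page [ First ]
Set Variable [ $i ; Value: 0 ]
Loop
    # Setting the match field through the relationship creates the related
    # summary record when none exists yet; otherwise it is a harmless no-op.
    Set Field [ Summary::ClaimNumber ; Claims::ClaimNumber ]
    Set Variable [ $i ; Value: $i + 1 ]
    # Step 4: commit every "n" (here 500) records.
    If [ Mod ( $i ; 500 ) = 0 ]
        Commit Records/Requests [ With dialog: Off ]
    End If
    Go to Record/Request/Page [ Next ; Exit after last: On ]
End Loop
Commit Records/Requests [ With dialog: Off ]
```

For step 5, the same script could be called with Perform Script on Server so the looping happens on the host instead of pulling every record across the network.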
"Research is what I'm doing when I don't know what I'm doing." ~Wernher von Braun
You can take a similar approach but change it to simply omitting records with duplicate claim numbers and then importing all of the remaining records into the summary table. This method lets you run it as two separate steps and gives you a chance to check that the duplicate-omit step worked as expected. In my experience you can easily import 50,000 records in around a minute, depending on your system. Looping through each record will take some time, but all of it could be offloaded to the server to make it quicker.
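As an illustration only, here is one way the omit pass might look as script steps, assuming the found set is already sorted by a field Claims::ClaimNumber (the field name is my guess) and is not empty. Step options are paraphrased.

```
# Sketch of the duplicate-omit pass. Assumes the found set is sorted by
# Claims::ClaimNumber and contains at least one record.
Go to Record/Request/Page [ First ]
Set Variable [ $previous ; Value: "" ]
Loop
    If [ Claims::ClaimNumber = $previous ]
        # Remember whether this was the last record BEFORE omitting it,
        # because Omit Record shrinks the found set under us.
        Set Variable [ $wasLast ; Value: Get ( RecordNumber ) = Get ( FoundCount ) ]
        Omit Record
        Exit Loop If [ $wasLast ]
        # Omit Record leaves us on the next record, so no "Go to Next" here.
    Else
        Set Variable [ $previous ; Value: Claims::ClaimNumber ]
        Go to Record/Request/Page [ Next ; Exit after last: On ]
    End If
End Loop
```

After the pass, the found set holds one record per claim number, which you can eyeball before running the import as the second step.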
Thank you both so much for the assist. I found a super fast way to do it directly from FileMaker. More or less: create a self-join using the field that has the repeats, add a serial field to the table, and add a calculation field that uses the self-join to identify the FIRST of any of the duplicates. Then create a script that finds all the "Unique" records and copies them over to the summary table. Worked like a charm and ran in seconds.
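For anyone who lands here later, this sounds like the classic first-record self-join flag, so here is my guess at the pieces (field and relationship names are made up): a self-join SameClaim matching Claims::ClaimNumber to itself, a serial number field SerialID, and a calculation field along these lines:

```
// Calculation field "FirstFlag" (text result). An unsorted relationship
// always resolves to the FIRST matching record in creation order, so only
// the first record with a given claim# sees its own serial number here.
If ( SerialID = SameClaim::SerialID ; "Unique" ; "Duplicate" )
```

The script would then be roughly (step options paraphrased):

```
# Find the flagged records and move them to the summary table.
Enter Find Mode [ Pause: Off ]
Set Field [ Claims::FirstFlag ; "Unique" ]
Perform Find []
# Importing from the current file on the summary layout brings across
# only the source table's found set, i.e. one record per claim#.
Go to Layout [ "Summary" ]
Import Records [ With dialog: Off ]
```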