
    Keep the duplicates instead

    diegoin619

      Hello,

       

      I've found many great resources on how to mark and delete duplicate records, but this time I need to remove records that do not have a duplicate.

       

      I get a report in Excel from my client; it contains a list of customers and the proprietary equipment in their possession. Some customers have more than one piece of equipment, and the report handles this by adding a row for each piece. So you can have two, or sometimes up to twelve, rows with the same account number, customer name, and address information, but unique equipment identifiers (serial numbers).

      Currently, I open the report in Excel and apply conditional formatting to the account number column so that all duplicate numbers are highlighted in red. I then sort by color so the accounts that have only one piece of equipment end up at the bottom, where I can remove them from the report.

      I then copy the tab and remove all duplicate account numbers, leaving one row per account. Both tabs are then imported into my FileMaker db.
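
      (To illustrate the preparation I'm describing, here is roughly what that whole step would look like as a pandas sketch. The file name, sheet names, and the "Account Number" column name are only placeholders for whatever the report actually uses:

          import pandas as pd

          # Read the client's report; the file name is a placeholder.
          df = pd.read_excel("client_report.xlsx")

          # keep=False flags every row in a duplicate group, including the
          # first occurrence, so single-equipment accounts drop out entirely.
          dupes = df[df.duplicated(subset="Account Number", keep=False)]

          # One row per account, equivalent to the deduplicated second tab.
          work_order = dupes.drop_duplicates(subset="Account Number")

          # Write both tabs to a new workbook, ready for import.
          with pd.ExcelWriter("prepared_report.xlsx") as writer:
              dupes.to_excel(writer, sheet_name="Bounty Report", index=False)
              work_order.to_excel(writer, sheet_name="Work Order", index=False)

      The two sheets in the output correspond to the two tabs I import today.)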

       

      The first tab is imported to a table called "Bounty Report".  The second tab is imported to a table named "Work Order". The Work Order layout lists unique accounts and all the equipment associated with each (from the bounty report table).

       

      I'd like to skip the Excel formatting, sorting, and deleting, because sorting by color can take a very long time on my computer. I've been looking for a script that will identify accounts with more than one piece of equipment and create a list of those account numbers, names, and addresses that I can import separately into the 'Work Order' table. Bonus if the script can write this list to the 'Work Order' table as well.
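
      (For example, a pandas sketch of the list I have in mind, again assuming the report's columns are named "Account Number", "Customer Name", and "Address", which are guesses at the real headers:

          import pandas as pd

          df = pd.read_excel("client_report.xlsx")  # same placeholder file as above

          # Accounts whose number appears on more than one row.
          counts = df["Account Number"].value_counts()
          multi = counts[counts > 1].index

          # One row per qualifying account, limited to the import columns.
          work_list = (
              df[df["Account Number"].isin(multi)]
              [["Account Number", "Customer Name", "Address"]]
              .drop_duplicates(subset="Account Number")
          )
          work_list.to_csv("work_order_import.csv", index=False)

      The CSV could then be imported straight into the 'Work Order' table.)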

       

      I've tried using a script to mark duplicates so that I can do a find on that field, but I can't find one that marks the duplicates AND the first occurrence of each. Everything I've found is geared toward eliminating the duplicates.
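
      (The difference is just which occurrences get flagged. In pandas terms, the mark I'm after is keep=False rather than the keep="first" that the usual dedupe recipes effectively use:

          import pandas as pd

          df = pd.read_excel("client_report.xlsx")  # placeholder file name

          # keep=False marks every member of a duplicate group, first
          # occurrence included; keep="first" would skip the first row of
          # each group, which is why dedupe scripts leave it unmarked.
          df["HasDuplicate"] = df.duplicated(subset="Account Number", keep=False)

      A find on a mark field with that behavior would return exactly the records I want to keep.)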

       

      Thanks