I have a found set that has multiple duplicates.
Is there an easy way to take the found set, keep the first record of each duplicate group, and remove the rest?
I can script it, mark the 1st one with an x.... but is there an easier way?
Depends on what you need to do. A fairly standard approach is a script that sorts the records to group them by the duplicate value, loops through them marking all but the first record of each group, and then finds and deletes the marked records after the loop completes. There's a knowledge base article that describes this method.
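As a sketch, the loop might look like the following script steps. The field names are assumptions for illustration (a text field Customers::Email being checked for duplicates and a text field Customers::Mark used as the flag); substitute your own table and fields:

```
# Assumes the records are sorted by Customers::Email so duplicates are adjacent
Sort Records [ Restore ; With dialog: Off ]
Go to Record/Request/Page [ First ]
Set Variable [ $prev ; Value: "" ]
Loop
  If [ Customers::Email = $prev ]
    # Same value as the previous record: this is a duplicate, so flag it
    Set Field [ Customers::Mark ; "x" ]
  Else
    # First record of a new group: remember its value, leave it unmarked
    Set Variable [ $prev ; Value: Customers::Email ]
  End If
  Go to Record/Request/Page [ Next ; Exit after last ]
End Loop
# After the loop, find the flagged records and delete them in one batch
Enter Find Mode [ Pause: Off ]
Set Field [ Customers::Mark ; "x" ]
Perform Find [ ]
Delete All Records [ With dialog: Off ]
```

Deleting in one batch at the end, rather than inside the loop, sidesteps the current-record bookkeeping discussed below.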
You could, in theory, delete each duplicate at the point where the above script "marks" it, but then you have to be careful about which record is current after the delete, and about when to step to the next record and when not to--which I assume is why the "mark the record" approach doesn't do that.
An option that I have used to "batch clean" an entire table of duplicates is to import the records into a table where the field that needs to be unique has "Unique" and "Validate always" specified in its field options. Subsequent records with duplicate values in that field are automatically omitted during the import. This is simple and needs no script at all, but it leaves your cleaned records in a different table, so you then have to delete (or truncate) all records in the original table and import the cleaned records back. That can create additional complications, and it may be slower than the looping script if you have a lot of records with only a few duplicates.