I'm a little lost here. I'm trying to remove duplicates in a file and every time I have a match the IF statement skips it.
Your variable is $$Sdupes (with an S in front of the word "dupes")
and your if statement checks against $$dupes (without the S)
You might want to try changing that.
Your Go to Record/Request/Page [Next; Exit after last] step is in the wrong location; it should come after the End If and right before End Loop.
I've tried that too, but the If statement still skips to the Else even when there is a match. To explain the logic: I'm going through a file, and prior to this I have a lot of string parsing and record deletion, and everything works perfectly until these last steps to delete the dupes.
So even after moving the Go to Record step, the If statement still won't delete the record when there is a match. I'm stumped.
Duh, I never saw that coming. I've been sitting here too long and I'm blind. Thanks for clearing that up; it works now. I also saw that I need to step back one record after a deletion so that I can check the next record for the same value. The script looks like this now. Your help is much appreciated.
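For readers following along without the poster's screenshot: here is a sketch of what a dedupe loop like the one described might look like. The table and field names (YourTable::Location) and the $dupes variable are illustrative assumptions, and this variant exits explicitly after deleting the last record instead of stepping back, to avoid re-checking an already-kept record.

```
# Sketch only — field/table names are assumptions, not the poster's actual script.
Sort Records [ Restore ; No dialog ]                  # sort by Location so dupes are adjacent
Go to Record/Request/Page [ First ]
Set Variable [ $dupes ; Value: YourTable::Location ]
Go to Record/Request/Page [ Next ; Exit after last: On ]
Loop
    If [ YourTable::Location = $dupes ]
        Set Variable [ $wasLast ; Value: Get ( RecordNumber ) = Get ( FoundCount ) ]
        Delete Record/Request [ No dialog ]           # the next record slides into this position
        Exit Loop If [ $wasLast ]                     # nothing left to check after the old last record
    Else
        Set Variable [ $dupes ; Value: YourTable::Location ]
        Go to Record/Request/Page [ Next ; Exit after last: On ]
    End If
End Loop
```

Note that after Delete Record/Request the cursor already sits on the record that slid in, so the loop only advances in the Else branch.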
Instead of immediately deleting the record, you could also set a value like "duplicate" in a separate field. That way you have a way to verify what is marked as a duplicate. Then you could perform a find for all the records with "duplicate" in that field and delete them.
But that's only a suggestion. It might not be necessary in your case.
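A sketch of this flag-and-review variant might look like the following. It assumes a found set sorted by the field being deduplicated and a spare text field (DupeFlag here; the name is illustrative, as are the other identifiers).

```
# Sketch — YourTable::DupeFlag is an assumed spare text field.
Sort Records [ Restore ; No dialog ]                  # sort by Location so dupes are adjacent
Go to Record/Request/Page [ First ]
Set Variable [ $dupes ; Value: YourTable::Location ]
Go to Record/Request/Page [ Next ; Exit after last: On ]
Loop
    If [ YourTable::Location = $dupes ]
        Set Field [ YourTable::DupeFlag ; "duplicate" ]   # mark instead of delete
    Else
        Set Variable [ $dupes ; Value: YourTable::Location ]
    End If
    Go to Record/Request/Page [ Next ; Exit after last: On ]
End Loop

# Later, after reviewing the marked records:
Enter Find Mode [ Pause: Off ]
Set Field [ YourTable::DupeFlag ; "duplicate" ]
Perform Find []
Delete All Records [ No dialog ]
```

Because records are only marked, not deleted, there is no record-slide to compensate for, which keeps the loop navigation simple.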
Thanks for the suggestion. We get a file every day that we need to parse and split into different regions, which are then output to separate Excel files for follow-ups. At this point I need to get rid of all the dupes, since there is no need for them. Today this is done manually.
Another way to remove duplicates is with Import Records. If "we get a file every day" means that you have to import this data every day, you can set a Unique value, Validate always validation option on the location field, and then the duplicates are automatically excluded during the import.
I've not found a way to control the Excel import. We get an Excel file every morning, so I simply import it and do all the other parsing before I remove the dupes. The result is written to two separate Excel files, one for each region, and then emailed to the support team who will handle the cases. I'm not displaying any windows, just the file-open dialog to select the Excel file to import.
Anyway, where can I find more information on reading an Excel file row by row?
I provided all that you need to know in my last post. This works with imports from all file types, even FileMaker files.
Open Manage | Database | Fields. Find your location field and double-click it to open the field options.
On the validation tab, select Unique values and Validate Always.
During import, enable the "Perform auto-enter options while importing" checkbox in the last dialog that appears before the import starts.
If you can't keep such a field option in place all the time, make a copy of your data source table, set the options on the copy, and do a two-stage import: import first into the copy to filter out the duplicates, then import from it into your final table.
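The two-stage import could be scripted roughly as follows. The staging table name (ImportStaging), the file name, and the layout names are all illustrative assumptions; the staging table is assumed to be a copy of the target table with Unique value / Validate always set on its location field.

```
# Sketch — table, layout, and file names are assumptions.
Go to Layout [ "ImportStaging" ]
Show All Records
Delete All Records [ No dialog ]              # start each day with an empty staging table
Import Records [ No dialog ; "daily.xlsx" ]   # rows whose location duplicates an earlier
                                              # row fail validation and are skipped
Show All Records
Import Records [ No dialog ]                  # second stage: ImportStaging -> final table
Go to Layout [ original layout ]
```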
You can actually set up a number of auto-enter calculations that process your data during import this way. LaRetta even showed me a method where you map the import field to a global field and use an auto-enter calculation on the target field that takes the data from the global field, manipulates it, and stores the result in the target field.
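That global-field technique might be set up like this. The field names (gLocationImport, Location) and the particular manipulation (trimming and upper-casing) are illustrative assumptions, not part of the method as originally described.

```
# Field definitions (illustrative):
#   gLocationImport — global text field; map the Excel column to this field
#                     in the import field-mapping dialog
#   Location        — target field with an auto-enter calculated value,
#                     "Do not replace existing value" unchecked:
Trim ( Upper ( YourTable::gLocationImport ) )
# With "Perform auto-enter options while importing" enabled, each imported
# row's value passes through this calculation before landing in Location.
```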