Change your calculation field from type calculation to type number or type text (match the "result type" setting for your calculation).
Then open the field options for the field and select the calculation option. You'll find that your original calculation expression appears automatically.
Now click the validation tab and specify unique values.
Note: I can't tell from the info you have posted whether the "replace existing value" option should be selected.
I want the new info I'm importing to replace any different info that might exist if the unique IDs match.
Does that mean I should use the replace existing value option?
Why would a unique ID ever change?
By "importing" are you using Import records or some other method to get new data into the file?
If you are using Import Records, there's an option that matches records in the imported data to records in the existing table and updates them by replacing the record's data with the imported data. Is that what you need to do here?
The data I am importing will also have a unique ID for each record, so if the unique IDs match, the record should be updated; if they don't match, it is a new record to be added to the database. The unique ID for each record does not change.
I am using Import Records and then choosing the file to import, so yes, that is what I need to do here (as well as adding the new records that are not in my database yet).
Then you would not use "replace existing value".
The basic steps are:
1) Go to a layout based on the table into which you want to import data.
2) Show All Records.
3) Select Import Records, select your file of data to import, and select the appropriate matching records import option.
4) Map the fields and designate the match field or fields, such as your unique ID field.
5) Import the records.
This can also be scripted.
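Conceptually, the matching-records import behaves like an "upsert" keyed on the unique ID: matching records are updated in place and the rest are added as new records. A minimal Python sketch of that logic (the field names here are made up for illustration, not taken from your file):

```python
# Sketch of an "update matching records, add remaining as new" import,
# expressed as a plain-Python upsert keyed on a unique ID field.
# Field names ("unique_id", "phone") are hypothetical.

def import_records(existing, incoming, key="unique_id"):
    """Update records whose key matches; append the rest as new records."""
    by_key = {rec[key]: rec for rec in existing}
    for rec in incoming:
        if rec[key] in by_key:
            by_key[rec[key]].update(rec)   # matching ID: replace field data
        else:
            existing.append(rec)           # no match: add as a new record
    return existing

db = [{"unique_id": "A1", "phone": "555-0100"}]
new = [{"unique_id": "A1", "phone": "555-0199"},   # matches -> updated
       {"unique_id": "B2", "phone": "555-0200"}]   # no match -> added
import_records(db, new)
```

The key point is that the match field (your unique ID) decides update-versus-add, which is exactly what the designated match field does in the import dialog.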
Before starting to import more data as we discussed, I looked over the remaining data and have a question about the original script. I'm still having issues with the script to remove the already existing duplicates. For example, I have a few copies of the record with unique ID Yunjung8666048222 still left after running the script. Here is the script after I made the changes you suggested. The ID field "First Phone" is a calculation field set up as a text type combining First Name & Work Phone, as we discussed.
Find your records with Yunjung8666048222 and then click into each field. Put the cursor as far to the right as it will go in the field of each record. Perhaps there is an extra, invisible character such as a space, return or tab that results in two records that look like they have the same text in this field but do not.
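To illustrate why an invisible character defeats the match, here's a quick sketch (shown in Python; the principle is the same in any system):

```python
# Two IDs that look identical on screen can differ by an invisible
# trailing character, so they are NOT treated as duplicates.
a = "Yunjung8666048222"
b = "Yunjung8666048222 "        # trailing space, invisible in a field

print(a == b)                    # False: the stray space breaks the match
print(a == b.strip())            # True once the whitespace is removed
```

This is why putting the cursor at the far right of the field is a useful test: the cursor will stop one position further out than the visible text if a stray character is present.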
As I look through, I see that the reason multiple copies remain is that the script only puts an "X" in the "Mark" field for the first instance of duplication. If there are 6 duplicates, it only marks 1 with an "X".
How do I fix this?
If I export the whole database and then import it into a new database with validation in place, will that help?
That's what the script is supposed to do. Then you are supposed to perform a find for the records marked with "X" and then use delete found set to delete them. This "mark" then "delete" approach is set up so that you can confirm that your script worked before you actually delete any records.
But if it is leaving several duplicates unmarked, do I need to run it several times?
Is the export-then-import idea possible?
The script is just not working right, so I'm looking for other ideas.
Will this work:
FileMaker Pro 6, FileMaker Pro 4.x, FileMaker Pro 3.x
This article describes the simplest way to identify duplicate records in your database.
For this example we will assume that you have a file containing the fields Name and PhoneNumber, and you want to find all records that have a duplicate phone number.
1. Define a self-join relationship with 'PhoneNumber' as the key field for both the main file and the related file. A self-join relationship is one in which the main file and related file are the same file; thus, you would simply choose the same file when you are prompted to specify the related file. Name your relationship SelfJoin.
2. Define two fields:
Counter (Text, Auto-enter serial number)
CheckDuplicates (Calculation, Text result) =
If(Counter = SelfJoin::Counter, "Unique", "Duplicate")
3. Find All records, click into the new Counter field, and perform a Replace, replacing with a Serial number. This will assign a serial number value to all existing records in your database; records entered in the future will automatically have serial numbers entered.
4. The first record in any series of duplicates will now hold the value "Unique" in the CheckDuplicates field, and all duplicate records within the same series will be marked "Duplicate". At this point, all duplicate records can be found and omitted or deleted if desired. Note that records with nothing in the key field (PhoneNumber) will also be flagged as duplicates.
Once set up as above, this system will mark duplicate records automatically as they are created; simply perform a Find for "Duplicate" in the CheckDuplicates field.
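The trick behind that calculation is that the self-join always resolves to the first record sharing the key, so only the first record in a duplicate series compares equal to its own serial number. A sketch of that logic (field names are illustrative):

```python
# Sketch of the self-join duplicate check: SelfJoin::Counter resolves to
# the serial of the FIRST record with that key value, so only the first
# record in each series sees its own serial and gets "Unique".
records = [
    {"serial": 1, "phone": "555-0100"},
    {"serial": 2, "phone": "555-0100"},   # duplicate of serial 1
    {"serial": 3, "phone": "555-0200"},
]

first_serial = {}   # key value -> serial of the first record with that key
for rec in records:
    first_serial.setdefault(rec["phone"], rec["serial"])

for rec in records:
    rec["check"] = ("Unique" if rec["serial"] == first_serial[rec["phone"]]
                    else "Duplicate")

print([r["check"] for r in records])   # ['Unique', 'Duplicate', 'Unique']
```

Notice that every record after the first in a series is marked "Duplicate", which is why finding and deleting the "Duplicate" records keeps exactly one copy of each.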
I just took a closer look at your script. There appears to be an error in it: the Set Field step is in the wrong place. It should be after the If and before the Else.
The other method should also work, but I prefer a third option:
Save a clone (empty copy) of the file.
In this table in the cloned file, open Manage | Database | Fields and select the Unique Values and Validate Always validation options for the phone number field.
Now use Import Records to import the data from your existing table into the new, empty table. The duplicates will be automatically excluded during the import. If your file is a single-table file, you can simply replace the old file with the new one. If you have other tables with data you need in your file, delete all records from this table in your original file (watch out for relationship-based cascading deletes), and import the data from the cloned file back into the original.
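The effect of importing into a clone whose key field validates as unique can be sketched as a filter: the first record with each key passes validation, and later records with the same key are rejected during the import. A minimal illustration (field name is hypothetical):

```python
# Sketch of importing into a clone with "Unique Values, Validate Always"
# on the key field: the first record with each key is accepted, and any
# later record with a duplicate key fails validation and is skipped.

def import_with_unique_validation(incoming, key="phone"):
    accepted, seen = [], set()
    for rec in incoming:
        if rec[key] in seen:
            continue                 # validation fails: record is skipped
        seen.add(rec[key])
        accepted.append(rec)
    return accepted

rows = [{"phone": "555-0100"},
        {"phone": "555-0100"},       # duplicate key -> rejected
        {"phone": "555-0200"}]
clean = import_with_unique_validation(rows)
print(len(clean))                    # 2
```

Unlike the mark-and-delete script, this approach gives you no chance to review which copy of each duplicate survives (it is simply the first one imported), so check the import order if that matters.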
When I import new files into this database (after it's fixed as described above), what settings do I use to prevent duplicates from being imported while updating matching records with new information?