
    Duplicate Records

    leviileka

      Hello guys,

       

      I have a problem with my timesheet layout: it contains duplicate records. I want to get rid of them but cannot find a way to do this. This is what it looks like. I would like the records to end at the first Wed, 31.08.16, but the list continues. Can I get some advice on how to tackle this, please?

       

      [Screenshot of the timesheet layout showing the duplicated records]

        • 1. Re: Duplicate Records
          coherentkris

          Finding duplicate values

           

          The help system is a great source of information, as is Google.

          • 2. Re: Duplicate Records
            dtcgnet

            Go into Find mode. Put an exclamation point in the date field. Perform find. Sort by date. Manually delete the duplicate records. Then try to figure out how the duplicates got created and prevent that.
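
            For reference, a scripted version of those steps might look roughly like this. It is only a sketch: "Timesheet" and "Date" are assumed names, so swap in your own table occurrence and field.

            # Find every record whose date value occurs more than once, then sort them together
            Enter Find Mode [ Pause: Off ]
            Set Field [ Timesheet::Date ; "!" ]            # "!" is FileMaker's find-duplicates operator
            Perform Find []
            Sort Records [ Restore ; With dialog: Off ]    # sort by Timesheet::Date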

            • 3. Re: Duplicate Records
              SteveMartino

              Equally important is figuring out why you have duplicates in the first place, and how to prevent that from happening again.

              • 4. Re: Duplicate Records
                philmodjunk

                The previous respondents have logically assumed that you want to delete the duplicates. But if you just want to omit the duplicates from your report, or to combine multiple records into a single row of data (both of which leave the duplicates in the table), that is also possible.

                 

                And one of my preferred methods for eliminating duplicate records is to import the records into a table where "unique values/validate always" has been specified on the relevant fields. Duplicates are then not imported and you get just one record with each value.

                • 5. Re: Duplicate Records
                  fmpdude

                  Pretty cool. Never thought of that.

                  • 6. Re: Duplicate Records
                    fmpdude

                    Using SQL, you can always find the duplicates in a table like this:

                     

                    SELECT <field_name> FROM <table_name> GROUP BY <field_name> HAVING COUNT(*) > 1

                     

                    That will also give you a list of the duplicated values (or the PKs, or whatever field you substitute). You could write a script using that SQL to remove just the dups so you're left with unique records.
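
                    Inside FileMaker itself that query can be run with the ExecuteSQL function (SELECT only, so the deleting still has to happen in a script or via ODBC). A rough sketch, with the Timesheet table and "Date" field as assumed names:

                    Set Variable [ $dupDates ; Value:
                        ExecuteSQL (
                            "SELECT \"Date\" FROM Timesheet GROUP BY \"Date\" HAVING COUNT(*) > 1" ;
                            "" ; ""   /* empty separators = the defaults, comma and return */
                        ) ]
                    # $dupDates now holds one duplicated date per line (in SQL date format);
                    # a script can walk that list with GetValue and clean up each group.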

                     

                    Or do what philmodjunk suggested. His suggestion is very cool.

                     

                    HOPE THIS HELPS.

                    • 7. Re: Duplicate Records
                      JackRodges

                      Create a TO on the date field.

                      Add a calculate field that evaluates to 1.

                      Add a calculated field (Duplicates) Sum(TOdupetable:Onefield)

                       

                      You now have a number for singles (1) and duplicates (2+). Search for Duplicates > 1.

                       

                      Now you know which records are duplicated and how you deal with them is up to you.

                       

                      This is a typical FileMaker method and requires no plug-in or use of extraneous script methodologies.

                       

                      It has the added benefit of allowing you to use the 1 or >1 to colorize a field or line if it is duplicated, or to do other things in a script.
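
                      Spelled out a little more concretely, the setup might look like this; every name below is a placeholder, assuming the data sits in a Timesheet table with a Date field:

                      // 1) In the relationships graph, add a second table occurrence of Timesheet
                      //    (e.g. Timesheet_byDate) and relate  Timesheet::Date = Timesheet_byDate::Date
                      // 2) Calculation field "One" (number result):
                      1
                      // 3) Unstored calculation field "Duplicates" (number result):
                      Sum ( Timesheet_byDate::One )
                      // 4) In Find mode, search the Duplicates field for  >1  to pull up every record
                      //    whose date appears on more than one record.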

                      • 8. Re: Duplicate Records
                        fmpdude

                        A nice tip. I wish there were a "FileMaker Recipes" book out there with stuff just like this.

                         

                        Excellent.

                        • 9. Re: Duplicate Records
                          taylorsharpe

                          gofmp's idea was creative with TO relationships.  But personally I would just sort them and run a script looping through comparing to the previous record and deleting duplicate records.  You can use the GetNthRecord for comparison. 
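
                          A sketch of that kind of script, assuming a Timesheet table occurrence compared on Timesheet::Date only (add more fields to the If test if a "duplicate" means more than just the same date), and tested on a copy first:

                          Show All Records
                          Sort Records [ Restore ; With dialog: Off ]          # sort by Timesheet::Date
                          Go to Record/Request/Page [ First ]
                          Loop
                              Exit Loop If [ Get ( RecordNumber ) >= Get ( FoundCount ) ]
                              Go to Record/Request/Page [ Next ]
                              If [ Timesheet::Date = GetNthRecord ( Timesheet::Date ; Get ( RecordNumber ) - 1 ) ]
                                  Delete Record/Request [ With dialog: Off ]   # the following record becomes current
                                  Go to Record/Request/Page [ Previous ]       # step back so that record is checked next
                              End If
                          End Loop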

                          • 10. Re: Duplicate Records
                            richardthebeard

                            Sorry, new to this forum, what is a "TO on the date field" please?

                            • 11. Re: Duplicate Records
                              erolst

                              richardthebeard wrote:

                              "TO on the date field" please?

                              This is the extremely compact form of "create a new table occurrence of the table where the date field resides, and use that new TO to create a self-join relationship matching date on itself".

                              • 12. Re: Duplicate Records
                                erolst

                                gofmp wrote:

                                Create a TO on the date field.

                                Add a calculate field that evaluates to 1.

                                Add a calculated field (Duplicates) Sum(TOdupetable:Onefield) [...]

                                As straightforward as that is, it can be even simpler.

                                 

                                There is no need to "add a calculate field that evaluates to 1", because you can simply

                                 

                                Count ( DupeCheck_byDate::theDateField )

                                • 13. Re: Duplicate Records
                                  erolst

                                  Taylor Sharpe wrote:

                                  [...] just sort them and run a script looping through comparing to the previous record and deleting duplicate records.[...]

                                  Why Taylor! Are you advocating "extraneous script methodologies"?

                                  • 14. Re: Duplicate Records
                                    JackRodges

                                    One interesting idea I had about my suggestion: first find the records where Duplicates > 1, then run a mini loop.

                                     

                                    Test on a copy...

                                    This should work on any selection of records, and they need not be sorted, but a find and sort first reduces the number of records to process.

                                    Loop
                                        Exit Loop If [ Get ( FoundCount ) < 1 ]
                                        If [ Duplicates > 1 ]
                                            Delete Record/Request [ With dialog: Off ]
                                        Else
                                            Omit Record
                                        End If
                                    End Loop

                                     

                                    In my mind's eye it duplicates the action of manually deleting a record, with the next record becoming the active record.

                                     

                                    If this works it is the simplest method of deleting duplicates I have ever scripted. Be sure to leave user abort on, just in case.
