Thanks to all the help from the forum, I have been able to work my way through a few use cases that involve retrieving data from APIs.
Here is the issue:
Many of my scripts use a search/search-suggest query to return data, and those results are added to a FileMaker table.
An example of this is a Facebook Marketing API interest search.
As a result, the same item often comes back for several different search terms, which leads to duplicate records in the database.
What I have been doing so far is letting the results accumulate in the table and then periodically running a script to delete the duplicates.
I would like to know if this is the best way of handling it.
For example, would it be better to have a script step check whether the record is already in the table before adding it? Or would that add unnecessary time and complexity?
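To make the "check before adding" idea concrete, here is a minimal sketch in Python (not FileMaker script steps), assuming each API result carries a stable unique identifier, as Facebook interest results do with their `id` field. In FileMaker terms, the equivalent would be performing a find on the id field before creating a new record; the names below are purely illustrative.

```python
def insert_unique(results, table, seen_ids):
    """Append only results whose id is not already present.

    table    -- list standing in for the FileMaker table
    seen_ids -- set of ids already stored (fast membership check)
    """
    for result in results:
        rid = result["id"]
        if rid in seen_ids:  # duplicate from an earlier search; skip it
            continue
        seen_ids.add(rid)
        table.append(result)
    return table

# Two overlapping searches returning some of the same interests
search_a = [{"id": "6003", "name": "Golf"},
            {"id": "6004", "name": "Tennis"}]
search_b = [{"id": "6004", "name": "Tennis"},
            {"id": "6005", "name": "Running"}]

table, seen = [], set()
insert_unique(search_a, table, seen)
insert_unique(search_b, table, seen)
# table now holds three unique records: Golf, Tennis, Running
```

The trade-off is roughly this: checking at insert time costs one lookup per incoming result but keeps the table clean at all times, while the delete-later approach keeps inserts simple but leaves duplicates visible until the cleanup script runs.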
Any feedback would be appreciated.