3 Replies Latest reply on Feb 10, 2017 9:24 AM by Johan Hedman

    How to add new records to a table from an ODBC import

    ryan@onf.coop

      Hopefully that question is complete; I'm not exactly sure how to ask it simply. 

       

      I imagine this is a fairly fundamental part of FileMaker, and I'm just now cutting my teeth on writing scripts.  Here's what I'm trying to accomplish. 

       

      I have an ODBC-driven table (read-only), and a secondary table that reads data from it and updates itself accordingly.  I've tested using the ODBC table directly as my main table, but the inability to create fields within it restricts what I can do.  Also, since it's constantly talking to the reference DB, many things I've tried with it are very slow; I feel like I need a system whereby I keep a local table updated (as external records are updated) rather than using the ODBC-updated table directly.  If there's a lean way to use that table correctly, and to run (for example) dynamic search scripts on it (I found one online that requires defining fields in the table it's running on), I'd love to hear it. 

       

      Anyway, I'm trying to make sure the secondary table stays relatively up to date, both for existing data and for new records.  We're always adding and deleting data in the database the ODBC table reads from, and I want the data in the secondary table to remain mostly synchronized with the external DB.  That means, as records are changed or deleted in the external DB, I would like those changes to be reflected in the secondary table. 

       

      The secondary table currently lacks over 10,000 records from the ODBC table.  I first want to populate the secondary table with all of the relevant data currently in the ODBC table (I use only a handful of the over 160 fields the external DB exposes).  Second, I want to set up a recurring script that moves all detected changes in the external DB to the secondary table, on a reasonable schedule: multiple times daily would be great, but if the process is too slow, I could just do overnights.  That includes changes to existing records and the addition of new records whose keys are not found in the secondary table.  (I am using two keys, UPC and something called a price level; every item has two price levels.)  All of this should run with minimal processing time.  I've already figured out some methods (though they're probably clunky, given my poor scripting skills) for having a given record check the external DB and grab the latest data, but I want to make sure that when someone's browsing the secondary table in list form, they see the most up-to-date data I can possibly provide without sacrificing performance. 
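      (For reference, here's roughly the sync logic I'm after, sketched outside FileMaker in Python.  This is only an illustration of the algorithm, not anything FileMaker-specific: sqlite3 stands in for the read-only ODBC source, and the table and field names — `items`, `items_local`, `upc`, `price_level`, `description`, `price` — are all made up for the example.)

```python
import sqlite3

# Sketch only: sqlite3 stands in for the read-only ODBC source, and all
# table/column names here are hypothetical -- the real external DB
# exposes 160+ fields, of which only a few are copied.
def make_tables(source, local):
    source.execute(
        "CREATE TABLE items (upc TEXT, price_level INTEGER, "
        "description TEXT, price REAL)")
    local.execute(
        "CREATE TABLE items_local (upc TEXT, price_level INTEGER, "
        "description TEXT, price REAL, UNIQUE (upc, price_level))")

def sync(source, local):
    """Mirror the source into the local table, keyed on the composite
    key (upc, price_level): new keys are inserted, changed rows are
    overwritten, and keys that vanished from the source are deleted."""
    rows = source.execute(
        "SELECT upc, price_level, description, price FROM items").fetchall()
    with local:  # one transaction for the whole pass
        # INSERT OR REPLACE relies on the UNIQUE (upc, price_level)
        # constraint: it adds the record if the keys are not found,
        # otherwise it overwrites the existing record.
        local.executemany(
            "INSERT OR REPLACE INTO items_local VALUES (?, ?, ?, ?)", rows)
        # Delete local records whose keys no longer exist in the source.
        live_keys = {(r[0], r[1]) for r in rows}
        for upc, level in local.execute(
                "SELECT upc, price_level FROM items_local").fetchall():
            if (upc, level) not in live_keys:
                local.execute(
                    "DELETE FROM items_local WHERE upc = ? AND price_level = ?",
                    (upc, level))
```

      (Run on a schedule, this would cover both the multiple-times-daily and overnight cases.  One caveat: INSERT OR REPLACE rewrites unchanged rows too, so a smarter version would compare a modification timestamp first and only touch rows that actually changed.)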

       

      I hope that's a relatively complete picture.  Please let me know what other information you need to understand the challenge.  I could show you what I've tried to make of my script so far, but I think it's bad enough that it's not worth even trying to parse. 

       

      Thanks so much for any help you can provide!

       

      ~Ryan

        • 1. Re: How to add new records to a table from an ODBC import
          Johan Hedman

          The best way to handle external ODBC data sources is to set up ESS (External SQL Sources):

          https://www.filemaker.com/downloads/documentation/techbrief_intro_ess.pdf

           

          Then you can do all sorts of things with the data that comes from that data source. As you wrote, you can add some fields, but not all. For those you can't, I usually create my own table and relate it to the ODBC table in a one-to-one relationship, so that I can make layouts based on the ODBC table with related-table data on the same layout.

          • 2. Re: How to add new records to a table from an ODBC import
            ryan@onf.coop

            Hi Johan,

             

            Thanks so much for the information!  I didn't know what to call the kind of relationship I put in place, but it seems like I already made an ESS connection to the external DB.  I guess that means my main issue is using it effectively. 

             

            I read through the whole document you linked, and it did give me a few ideas.  However, pulling them off is another thing entirely.  For example, it seems like only the shadow table will automatically handle record additions and deletions; if I want my secondary table to do this too, I'll need to set up some kind of script, I think?  Could you possibly provide some guidelines on how such a script would be designed? 

             

            Also, since my users access the secondary table directly, and since my search scripts perform keystroke-by-keystroke searches on fairly large datasets (~40,000 records; the script provides "Google-like" search functionality), I feel it is integral to make sure every single field is properly indexed.  As the ESS document you provided mentions, it's sometimes necessary to make sure your dataset (especially if you'll be performing searches on it) is properly indexed on the SQL side.  However, we don't have the ability to modify the external database (or, well, I can, but it's potentially a really bad, or costly, idea).  And even if we could, designing this within SQL, rather than keeping all of the logic within FM, might add significant complexity to the overall solution.  I think I can circumvent all of this by not directly using any of the fields from the shadow table in the user-facing table, and instead moving data from it to the secondary table(s).  I just need to figure out how to keep that secondary table as up-to-the-minute synchronized with the shadow table as possible. 

             

            As a smaller-scale example of what I'm referring to, I have a current script that does this sort of thing on demand, record by record.  It takes the UPC of the item (I haven't integrated price level yet), goes to the layout of the shadow table, performs a find on that UPC, refreshes the window (clearing the external cache, to grab the latest data from the external DB record), returns to the original layout, and updates all of the relevant data (I'm currently using lookups tied to a relationship between the UPC fields in the secondary table and the shadow table).  I doubt this is the most efficient approach (it's the only one I could figure out how to cobble together), but the performance is good so far.  However, I can't figure out how to make this (or something similar) work across all records, and/or work efficiently.
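            (To make that concrete, the per-record refresh boils down to a single lookup by key plus a field copy.  Here's a sketch in Python, purely to show the logic: sqlite3 stands in for the read-only external source, and the table and field names — `items`, `items_local`, `upc`, `price_level`, `description`, `price` — are hypothetical.)

```python
import sqlite3

# Sketch only: sqlite3 stands in for the read-only ODBC source; all
# table and column names here are hypothetical.
def refresh_record(source, local, upc, price_level):
    """On-demand refresh of one local record: find the current source
    row by its composite key (the find-on-UPC step), then copy the
    relevant fields across (the lookup step).  Returns True if a
    matching source row was found, False otherwise."""
    row = source.execute(
        "SELECT description, price FROM items "
        "WHERE upc = ? AND price_level = ?",
        (upc, price_level)).fetchone()
    if row is None:
        return False
    with local:  # commit the field copy as one transaction
        local.execute(
            "UPDATE items_local SET description = ?, price = ? "
            "WHERE upc = ? AND price_level = ?",
            (row[0], row[1], upc, price_level))
    return True
```

            (Looping this over all 40,000 records one at a time is exactly the slow part; a batch pull of all source rows in one query, then one pass of updates, avoids the per-record round trips.)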

             

            Maybe some of the basic logic in my initial scripts is faulty and needs an overhaul?  That certainly wouldn't surprise me. 

             

            Thanks again!

            • 3. Re: How to add new records to a table from an ODBC import
              Johan Hedman

              If you want fast letter-by-letter search, you need to create script triggers to handle that. If your fields are not indexed on the shadow table, that will never be fast over many records.

               

              You can find good ways to handle script triggers in the example files that come with FileMaker Pro.

               

              For your situation, I would use a third-party product called MirrorSync from 360Works, which syncs data from your ESS table to your own tables, where you can then index fields.