8 Replies Latest reply on Aug 3, 2016 12:06 PM by beverly

    Formatting Data for Import


      Hi guys,

      I have data that is exported from another program that basically comes out looking like this:


      [attached image: Record Import.jpg]

      This sample represents 2 records with data and links to images. The goal is to import the data into specific fields, and then link the given images into a table connected to a portal. When accessing this data from the original program, I scan the file until I find a given keyword like "Data3 : " and then read the rest of the line beyond it, or use the keyword "Images" in a loop to build an array of the image files. Looking at the FileMaker Pro 15 Help, it seems I would not be able to import the data as-is because of how the file is organized. Can a script be written to better control how FileMaker imports this data, or do I need to reformat how this text file is written? And if so, what's the best way to put the image files into their own table?


      Thanks for the help in advance!


        • 1. Re: Formatting Data for Import

          The 'format' is _variable_. By that I mean there can be a different number of images, a different number of thumbnails, and quite possibly a different number of "Data" lines, although this sample shows 5 of each.


          FileMaker is pretty powerful in 'parsing' (extracting) text using the Text functions:

          * https://www.filemaker.com/help/15/fmp/en/index.html#page/FMP_Help/text-functions.html


          Since there are "patterns" in your example, they can be leveraged with

          PatternCount() along with Position() to get at distinct "lines" and parts of each line.


          I'm assuming that you have carriage returns between the "lines", so the 'value' functions can also help us here to read each line, test for patterns and branch for different fields to be populated.


          Let me start a script and see if this makes sense:

          1. import the document (as text) into one field (probably a global)

          2. from there:

            Set Variable ( $line_count ; ValueCount ( myTest_g ) )  // how many returns/lines?

            Set Variable ( $summary_count ; PatternCount ( myTest_g ; "summary" ) )  // how many summary 'sections'?

            Set Variable ( $custom_count ; PatternCount ( myTest_g ; "custom" ) )  // how many custom sections?

            # at this point there may or may not be the same count of 'summary' and 'custom', but likely there is

            # for brevity, we'll use the $summary_count to mean a New Record in the table for each summary


            Set Variable ( $counter ; 1 )  // to start


            # read the lines in the global

            Loop

            Set Variable ( $line ; GetValue ( myTest_g ; $counter ) )  // grab a line



            # .... parse here ...



            # increment the line

            Set Variable ( $counter ; $counter + 1 )

            Exit Loop If ( $counter > $line_count )  // stop if we have no more lines

            End Loop
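            Purely to illustrate the loop logic above (this is Python, not FileMaker; the field and keyword names come from the sample), the same "count the lines, count the 'summary' triggers, walk line by line" idea looks like this:

            ```python
            # Hypothetical sample text standing in for the imported global field.
            text = (
                "summary\n"
                "Identifier : ABC-001\n"
                "Revision : 2\n"
                "summary\n"
                "Identifier : ABC-002\n"
                "Revision : 1\n"
            )

            lines = text.split("\n")               # one entry per return, like a value list
            line_count = len(lines)                # ValueCount()
            summary_count = text.count("summary")  # PatternCount()

            records = []
            counter = 0
            for line in lines:                     # the Loop / Exit Loop If pair
                counter += 1
                if "summary" in line:              # trigger: start a new record
                    records.append({})
                # ... parse here ...

            print(summary_count, len(records))
            ```

            Each "summary" line starts a new record, so the two counts should agree.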


          3. I left out the "... parse here...", right? You'll test each $line as you have it read in and branch:

            If ( PatternCount ( $line ; "summary" ) )

            New Record

            # this is our "trigger" to start a new record

            End If


            If ( PatternCount ( $line ; "Identifier" ) )

            Set Variable ( $content ;

            Let (

            [ _start = Position ( $line ; ":" ; 1 ; 1 ) // where is the separator in this line?

            ; _end = Length ( $line ) // the end is the last character of the line

            ; _content = Middle ( $line ; _start + 1 ; _end - _start ) // grab just the part _after_ the ":"

            ] ; Trim ( _content ) // remove extra spacing, if any, before and after

            ) // we have extracted using a Let() statement to do several things in one step

            )

            Set Field ( Identifier ; $content )  // push the extracted text to the field

            Set Variable ( $content ; "" )  // I like to clear after each one

            End If


            ... repeat, using "Revision" to set a field, and then "Title" and the other fields
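            For comparison, here is the same Let() extraction sketched in Python (an illustration only; the helper name is made up): find the ":" separator, take everything after it, and trim the whitespace.

            ```python
            def extract_after_separator(line):
                """Return the trimmed text after the first ':' in line, or '' if none."""
                start = line.find(":")        # Position ( $line ; ":" ; 1 ; 1 )
                if start == -1:               # no separator in this line
                    return ""
                return line[start + 1:].strip()  # Middle(...) then Trim(...)

            print(extract_after_separator("Identifier : ABC-001"))  # ABC-001
            ```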



          4. Now it gets trickier with the Images and Thumbnails, as they may have a variable number of lines following.

            If you can follow the logic of the above, you can set all these into a single field and then parse to a separate table later.
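            A hypothetical sketch of that "collect now, parse to a related table later" idea, again in Python for illustration (the keywords and paths are assumptions based on the sample): once a section keyword is seen, gather the following lines until the next keyword.

            ```python
            # Assumed sample lines: an "Images" section followed by a "Thumbnails" section.
            lines = [
                "Images",
                "/path/img1.jpg",
                "/path/img2.jpg",
                "Thumbnails",
                "/path/thumb1.jpg",
            ]

            images, thumbs = [], []
            current = None                 # which section we are collecting into
            for line in lines:
                if line == "Images":
                    current = images       # start collecting image paths
                elif line == "Thumbnails":
                    current = thumbs       # switch to collecting thumbnail paths
                elif current is not None:
                    current.append(line)   # a path line belonging to the current section

            print(images)   # ['/path/img1.jpg', '/path/img2.jpg']
            print(thumbs)   # ['/path/thumb1.jpg']
            ```

            Each collected list would then become one related record per path in the separate table.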


          5. If you can extract as XML, JSON, or CSV, then you may have an easier time of 'parsing'.





          P.S. EDITED as post-by-email munged a bit of my code. I hope it all looks OK now!

          • 2. Re: Formatting Data for Import

            The Trim function might be useful as a way to discard trailing and leading spaces.


            You might also want to research custom function sites for parsing custom functions that can make this simpler for you to work out.

            • 3. Re: Formatting Data for Import
              David Moyer


              Scripting the import is the way to go.  Beverly's generous outline should give you a good idea of how to go about it.


              To add my opinion, you'll likely want to create a separate table for the image/thumb paths.


              Also, if it were me, I'd create a temporary table with a single text field, then import your text file into that table.  Each line should occupy one record.  Then I'd clean up the data with various finds, replaces and deletes.

              Now, I'd write a script to run through the remaining records and act somewhat as described in Beverly's script, hopping between layouts and populating fields.  (If you have both a data table and an image path table, you'll need to switch between layouts in your script.)

              • 4. Re: Formatting Data for Import

                Good suggestions, David. I've used that method (import into a single field, many records). It can be easy to lose context if a record happens to get out of order. I recommend an auto-enter serial to ensure these can be re-sorted to import order should that happen. Also, sometimes the data is "multi-line", with a start and stop "pattern" that could contain return characters between the start and stop. So consider the data.


                I, too, would put the images into related fields. An interim import (as list into multi-line field) may help. These can be parsed out as needed.


                So, there are more options! And that's why we use FM: so many ways to do the same thing. The data may determine what to do with it.



                • 5. Re: Formatting Data for Import

                  Thank you, everyone! I really appreciate the detailed responses. I'll dig into these suggestions and see if I can get it working.




                  • 6. Re: Formatting Data for Import

                    Come back and let us know what you did, or if you need more help. You may post your script if you need us to 'debug'.



                    • 7. Re: Formatting Data for Import

                      Here's my solution to the sorting problem. When I imported the file into FileMaker, it read the tabs and moved entries to different fields. That really wasn't any big deal, ultimately. The solution below works, though I'm sure it could be simplified. The next step will be figuring out how to import an updated txt file after entries have been deleted, added, or modified, without losing/duplicating ID numbers or messing up other tables that may be linked to the records. Lots of testing to do, I guess. Thanks for your help, guys and gal!



                      • 8. Re: Formatting Data for Import

                        you said:

                        it read the tabs and moved entries to different fields


                        And that is one of the reasons not to let it import directly into records: any tab characters in the text will be treated as field separators.
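                        To illustrate why (a Python sketch with made-up data, not FileMaker), a tab-delimited import splits on every tab, so one logical line can land in several fields:

                        ```python
                        # A single logical line that happens to contain a stray tab character.
                        line = "Identifier : ABC-001\twith a stray tab"

                        fields = line.split("\t")  # how a tab-delimited import sees the line
                        print(len(fields))          # the one line became two fields
                        print(fields[0])
                        ```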



                        1 of 1 people found this helpful