9 Replies Latest reply on May 16, 2011 10:34 AM by aammondd

    indirect go to field

    doncarp_1


      I am relatively new to FMPro script writing. While I think the "go to next field" and "go to previous field" steps are a great help for my script, it is driving me nuts that these two script steps exist WITHOUT one to set the field focus by calculation. Can someone show me explicitly the script step(s) to set the field focus to a field whose name is contained, as text, in the variable $FldName?

      For reference, I have a table of 1400 field records which contain multiple parameters for each record and need to be parsed out to a related table. It took me some time to discover that I had to copy any field I wanted to transfer into a variable and then create the new record in a related table by setting its field with the variable. Since I know all the respective field names, I could have written a book-length script to do this, but silly me, I thought it would be easier to just use some indirect addressing in a generic script and set up a list of the fields I wanted to transfer to use as indirect values inside the loop. The go to next field step made that loop elegantly simple, but GAWD HELP ME, I then spent the next week squeezing every FileMaker resource I could find trying to figure out how to set the field focus indirectly. No doubt there is a simple, easy, well documented solution to this obvious script step need, especially since the FMP programmers saw the need to give me the next field and previous field steps.

      Don

      in Houston

        • 1. Re: indirect go to field
          LaRetta_1

          "For reference, I have a table of 1400 field records which contain multiple parameters for each record and need to be parsed out to a related table"

          Hi Don,

          I am afraid that I cannot get perspective on what you are attempting and, without understanding the purpose or context, I cannot assist.

          What are '1400 field records'? What does 'contain multiple parameters for each record' mean, and why do they need to be parsed out to a related table?  Is this for a data migration or part of a standard solution you are designing?

          All I can share at this point is that, when I see something this strange and complex, 99% of the time it points to incorrect table structure and poor (or missing) relationships.

          Can you drop down to basics and explain what the data represents, how you are structured, what you want to accomplish and why?  Thank you. :^)

          • 2. Re: indirect go to field
            doncarp_1

            LaRetta-

            I agree with your observations; however, this data is coming from a third party and is a commercial configuration that serves several specially coded applications. I cannot change the data format and have to deal with it. A file is comprised of between 100 and 200 records of 1400 fields each. Each record is a serial compilation of one-to-many data that you would normally expect to be delivered in a separate table, but alas it is not. I built an application that attempted to duplicate what the others are doing, but it is so excruciatingly complex that it is terribly bug prone and nearly impossible to adapt easily. This is the reason I am trying to break it down into multiple tables (parse it). The data has never been used in the way I am interpreting it, and since it can be tested for accuracy of prediction, there is a great opportunity to optimize the solution. It is impossible to easily or sanely adapt an application that peels off sequential strips of data and tries to treat them as separate records even though, in the original configuration, they are all part of a single record.

            As I said above, I know all the field names on both sides of the parsing translation, from ugly monster strand to reasonable multi-table database. I could code it all out one by one, but the result would itself be a monster, which is part of why I want to do it in indirectly addressed loops, where the sequential names of the before and after fields are fed to the script loop as two lists. Since it is not possible, so far as I can tell, to do a set field in one table directly from a get in another table, I have to get the before field into a variable and then set the after field from the variable.

            I can use an indirect get for the before fields and step through them in a loop, getting them into variables, but I can find no way to set a field by name when the name is indirect. I'm thinking there is a go-to-field of some kind that I could use after having created a new record, so as to set the focus pointer to that field; then I would set the field without specifying the name. I don't know if that is possible, but if it is, that could be a solution.

            One way of looking at it: each record is a carton of eggs, where each egg has inside it forty or fifty fields specific to it. A carton of a dozen such eggs would give you roughly a 600-field record; my egg-carton records hold 35 eggs or so, each with its forty or fifty fields. I know it's nutty, but it's the source data I have to work with. I have no influence on the source, as I do not yet represent any revenue to them. However, if we get this down to the iPad level it could be a fair bit of revenue for them. They have been trying to solve the larger application problem themselves, but are stuck with conventional analysis and those monster applications that digest this stuff, which we are throwing away in favor of more intuitive graphical analysis suitable for mobile devices.

            Before record: f1, f2, f3, ... f1400.  f45 is the key field.

            First after record in the new related table:  key = f45;  name = f121;  address = f133;  city = f145, etc., for as many fields as are needed in this table's records.

            Next after record:  key = f45;  name = f122;  address = f134;  city = f146, etc., for as many fields as are needed in this table's records.

            Add records to make 12 new ones like the two above, because that is the number of one-to-many sets collapsed into the original before record,

            then loop to the next before record and do it again.
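
            So, assuming the offsets stay consistent across all 12 sets (which, as far as I can tell, they do), the n-th set would be pulled from the before record roughly as:

               key     = f45
               name    = f(120 + n)
               address = f(132 + n)
               city    = f(144 + n)        (n = 1 to 12, and likewise for the remaining fields in each set)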

            • 3. Re: indirect go to field
              LaRetta_1

              Is f45 always the key field?  Can you import f45 into FileMaker, relate the two, and import, mapping as:  f121 = name, f133 = address, etc.?

              It appears that each record 'drops' down one?  It is VERY late here and my eyes are crossing.  And I didn't really understand your last paragraph.  Are you saying that every record will map differently from f## to name, address, etc., and no record is consistent?

              Anyway, I must sleep.  I'll take another look in the morning if I get time; otherwise others will probably offer suggestions.  And I wasn't implying there was something wrong with your attempt ... only that NORMALLY forum posts come from newer people who are structuring improperly and don't yet know the terminology, so of course that was my direction of interpretation.

              We DO have a Set Field By Name now.  As you loop through this, you could use a global and input the field name (probably using a pop-up of all field names in the table).  But I just hit the wall, night night. :^)

              • 4. Re: indirect go to field
                doncarp_1

                Yes, f45 is always the main key field. Import mapping may solve the problem easily, but I've not been able to find any docs for it that are more than one or two sentences. Can a single input field be mapped to multiple tables, and if so, how? I'm a retired executive whose time in the software business was spent managing projects that had a way of being 90 percent done for long periods of time.  :^)    Earlier I did some coding, but it was simplistic linear coding, the syntax and function are quite different these days, and I've been out of that business for a long time. This is an effort to produce a product I've toyed with doing for a while. I have a sharp young iPad guy, but he has no insight into the application (and lots of other work to keep him busy). So I'm chugging along, picking up the things I need to learn to get it to something he can visualize.

                • 5. Re: indirect go to field
                  philmodjunk

                  I'm just tossing out a tidbit you may find useful:

                  Set Field By Name is a script step that enables you to indirectly modify a field. Unlike Set Field, this step uses a text expression to compute the table::fieldName of the field to be modified, so if you pass a list of field names to a script as a parameter, your script can loop through the list and use this step to set each one in turn.
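
                  A rough sketch of such a loop, purely for illustration (the Source and Target table occurrence names, the field names in the lists, and the assumption that a valid relationship or context connects them are all placeholders, not your actual setup):

                     # Build two parallel, return-delimited lists of field names
                     Set Variable [ $sourceList ; Value: List ( "f121" ; "f133" ; "f145" ) ]
                     Set Variable [ $targetList ; Value: List ( "name" ; "address" ; "city" ) ]
                     Set Variable [ $i ; Value: 1 ]
                     Loop
                        Exit Loop If [ $i > ValueCount ( $targetList ) ]
                        # Read the source field indirectly, then write the target field indirectly
                        Set Variable [ $value ; Value: GetField ( "Source::" & GetValue ( $sourceList ; $i ) ) ]
                        Set Field By Name [ "Target::" & GetValue ( $targetList ; $i ) ; $value ]
                        Set Variable [ $i ; Value: $i + 1 ]
                     End Loop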

                   

                  • 6. Re: indirect go to field
                    aammondd

                    What you seem to be trying to describe as a source is a multi-type record file.

                    The number of fields (and possibly data types) in a record does not remain constant, and there are a large number of record types in the single file. (This is true of a lot of legacy applications; I happen to be working on one now.)

                     

                    Yes, what you want to do can be done with Set Field By Name.

                    I did this once by creating a table that was entirely global fields. I would parse my string into the global fields as needed. I created reference tables and a series of relationships between the global fields and these reference tables, which would tell me the layout/table/field that I would need to populate, then used Set Field By Name to process the fields and/or records.
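
                    Stripped down to its core, and with made-up table and field names (Globals for the all-global work table, FieldMap for the reference table, Target for the destination), one pass looked roughly like:

                       # Sketch only: set the global match key so the FieldMap relationship finds the right mapping row
                       Set Field [ Globals::g_currentKey ; $key ]
                       # FieldMap::targetFieldName holds text such as "Target::name"
                       Set Field By Name [ FieldMap::targetFieldName ; Globals::g_parsedValue ]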

                    The script logic can still be complex and rather lengthy, especially if you have to do significant looping and different processes based on the keys.

                    This let me add, delete, and update records based on the values in the import file.

                    This only really works if you have a key to process the string on (apparently you do).

                    I also added a couple of processing fields at the end of my input string record to help me evaluate any records that had not processed completely (basically, if there were no values in the reference table for the keys, or some field wasn't formatted correctly). That way I could add new input records to the table but only process those records I needed to.
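
                    In script terms that was just a couple of extra Set Field steps at the end of each pass, along these lines (the status and note field names here are invented):

                       If [ IsEmpty ( FieldMap::targetFieldName ) ]
                          # No mapping found for this key; flag the input record instead of aborting
                          Set Field [ Input::processStatus ; "failed" ]
                          Set Field [ Input::processNote ; "no mapping for key " & $key ]
                       Else
                          Set Field [ Input::processStatus ; "processed" ]
                       End If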

                    I also took advantage of nesting the scripts in such a way as to be able to restart the process to deal with failed records.

                     

                    • 7. Re: indirect go to field
                      doncarp_1

                      RE: Set field by name

                      Thanks for the past two responses, and the previous ones as well. They will give me some direction to try -- in addition to digging deeper into the input mapping possibility as it relates to one input key field being mapped to multiple related tables. 

                      I did try Set Field By Name and it did work; however, it was a calculation load, and it resided within three nested loops, so the performance was far too slow. Of course, in my inexperience I may have inserted a step that got caught, in some circumstances, in a timeout loop. So I see this not so much as trying to get an "ah ha" moment as building competency with the syntax, dictionary, and relevant application, so that what results can actually be described by one of you as "pretty neat". :^)

                      I'm open to any other suggestions or embellishments, but for now will try to wring these suggestions out and measure my success.  

                      Thanks again. (to all)

                      • 8. Re: indirect go to field
                        philmodjunk

                        "in addition to digging deeper into the input mapping possibility as it relates to one input key field being mapped to multiple related tables"

                        With Import Records, you can import into only one table at a time, and a given field can map to only one field at a time during import. You can, however, perform multiple imports from the same file, specifying a different table each time, so that may be a useful option.

                        The other option is to import a single time into a table of the same structure as your import file, and then use scripts to "parse" the data into the appropriate related tables.

                        However you slice it, this is going to be a challenge to set up, and your best bet is to set it up as a scripted "batch" operation that can take place overnight or on a weekend when, hopefully, other users are not trying to access the database. That can reduce the concern over performance issues a bit, as you can set this up to run overnight and then check back on it in the morning. (Assuming that it finishes in time, of course...)
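
                        The batch driver itself can stay simple, something along these lines (the layout name and the sub-script name are placeholders for whatever you build):

                           Go to Layout [ "Staging" ]
                           Show All Records
                           Go to Record/Request/Page [ First ]
                           Loop
                              # Parse the current staging record into the related tables
                              Perform Script [ "Parse One Staging Record" ]
                              Go to Record/Request/Page [ Next ; Exit after last ]
                           End Loop
                           Go to Layout [ original layout ]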

                        • 9. Re: indirect go to field
                          aammondd

                          Yes, importing the string into a single field in an input processing table (with a couple of extra empty fields), then parsing it into a global work string, will really help. Using relationships to reference tables will really cut down the processing time.

                          For example, I have 5 record types, each with different sets of fields and data.

                          I would use a case statement to evaluate the record type from within the string and call an appropriate parsing script.

                          I would have 5 different parsing scripts to populate my global record appropriately (so numbers, dates, and text all end up in the right fields). My record would have something like 25 text fields, 10 date fields, 20 number fields, etc., enough to hold the maximum number of each type you could encounter (and a couple extra just in case).
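
                          In script terms the dispatch is just an If / Else If ladder keyed off the record type, roughly like this (the type codes and script names are invented for illustration):

                             # $recordType has already been parsed out of the raw input string
                             If [ $recordType = "A" ]
                                Perform Script [ "Parse Type A Into Globals" ]
                             Else If [ $recordType = "B" ]
                                Perform Script [ "Parse Type B Into Globals" ]
                             Else
                                Set Field [ Input::processStatus ; "unknown record type" ]
                             End If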

                           

                          Each parsing script would then call a population script based on a case statement.

                          That population script would use the appropriate linked relationships to loop through all the tables and fields you want to process the information into.

                          After each stage of the process completes, I'd update my input string table with a process step ID and a timestamp.

                          After reaching the end, I'd delete all the completely processed records from the input processing table and then evaluate any records that may not have processed completely (i.e., a loop ended up finding no records).

                          I would do this rather than abort the script on an error, so you can determine which key might be causing the problem and alter the input data or reference tables if needed.

                          I agree with Phil, though: you would probably want to run this as a batch during non-peak processing time.