8 Replies Latest reply on Aug 29, 2013 9:46 PM by thumper

    How to de-serialize in filemaker?

    thumper


Hey, I have this in a text field:  a:2:{i:0;a:3:{s:2:"id";s:1:"1";s:4:"name";s:6:"Albino";s:4:"type";s:9:"RECESSIVE";}i:1;a:3:{s:2:"id";s:2:"33";s:4:"name";s:12:"Black Pastel";s:4:"type";s:11:"CO-DOMINANT";}}

            

I need it deserialized into another field, but I'm unsure whether there is a custom function to do this. Any help is much appreciated. :)

        • 1. Re: How to de-serialize in filemaker?
          philmodjunk

               What do you mean by "deserialize"?

          • 2. Re: How to de-serialize in filemaker?
            thumper

Well, I need to display it in a new field as an array, like this...

                  

                  

            • 3. Re: How to de-serialize in filemaker?
              philmodjunk

                   So you want the text after "type" followed by the text after "name" followed by a blank line, then the text after the second type followed by the text after the second name.

                   And how typical is your example? Always two pairs of type and name text or could there be multiple pairs or just one pair in other cases?

                   You may want to look into a custom function called "parse" that I think you can find on the Brian Dunning site for custom functions. You may be able to use it or adapt it to work here.

              • 4. Re: How to de-serialize in filemaker?
                thumper

                     Hey Phil,

Yes, there can be one or multiple variations, and I have thousands of these that I need to "de-serialize" back into an array...

I have that custom function; however, I'm unsure how I would use it in this case, since the serialized data is always different...

                • 5. Re: How to de-serialize in filemaker?
                  philmodjunk

Yes, the variations make a difference and complicate the ultimate solution, but am I correct about the rest of my analysis?

Are you sure you don't want to put this data into a table of related records instead of a return-separated list of values in one field?

The solution that I have in mind would use a loop, either recursion in a custom function or a looping script, to work through your text, parsing out each value; that should handle the varying numbers of these type|name pairs.
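Outside FileMaker, the looping/parsing approach described here can be sketched in a few lines. Below is a minimal Python sketch (the function name and regex are mine, not from the thread) that assumes only the `s:<len>:"value"` layout visible in the posted samples, so it handles one pair or many:

```python
import re

# Hypothetical sample, copied from the serialized text posted above.
SAMPLE = ('a:2:{i:0;a:3:{s:2:"id";s:1:"1";s:4:"name";s:6:"Albino";s:4:"type";'
          's:9:"RECESSIVE";}i:1;a:3:{s:2:"id";s:2:"33";s:4:"name";'
          's:12:"Black Pastel";s:4:"type";s:11:"CO-DOMINANT";}}')

def name_type_pairs(serialized):
    """Pull every (name, type) pair out of a PHP-serialized genetics string.

    Relies only on the s:<len>:"value" layout shown in the samples,
    so it copes with any number of pairs per entry.
    """
    pattern = r's:\d+:"name";s:\d+:"([^"]*)";s:\d+:"type";s:\d+:"([^"]*)"'
    return re.findall(pattern, serialized)
```

Because `findall` returns one tuple per pair, a single-pair entry and a four-pair entry come back the same way, which is the varying-count behavior the thread needs.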

                  • 6. Re: How to de-serialize in filemaker?
                    thumper

Yeah, the rest of your analysis is correct. I would like to keep it all in the same table; the data is only for display purposes. I am parsing an XML file from online and getting the serialized data from the <genetics></genetics> tags; there is one for each entry, and there are over 2,800 entries. Could you show me an example of how you would parse out data from the serialized text? Keep in mind that the serialized data differs with each entry, like this...

                          

                    a:2:{i:0;a:3:{s:2:"id";s:1:"1";s:4:"name";s:6:"Albino";s:4:"type";s:9:"RECESSIVE";}i:1;a:3:{s:2:"id";s:2:"33";s:4:"name";s:12:"Black Pastel";s:4:"type";s:11:"CO-DOMINANT";}}

                          

                    a:1:{i:0;a:3:{s:2:"id";s:1:"3";s:4:"name";s:22:"Albino - High contrast";s:4:"type";s:9:"RECESSIVE";}}

                          

                    a:2:{i:0;a:3:{s:2:"id";s:1:"1";s:4:"name";s:6:"Albino";s:4:"type";s:9:"RECESSIVE";}i:1;a:3:{s:2:"id";s:2:"42";s:4:"name";s:5:"Clown";s:4:"type";s:9:"RECESSIVE";}}

                          

                    a:2:{i:0;a:3:{s:2:"id";s:2:"88";s:4:"name";s:5:"Sable";s:4:"type";s:11:"CO-DOMINANT";}i:1;a:3:{s:2:"id";s:4:"1100";s:4:"name";s:8:"Spotnose";s:4:"type";s:11:"CO-DOMINANT";}}

                          

                          

                         etc....

                    • 7. Re: How to de-serialize in filemaker?
                      philmodjunk

                           the data is only for display purposes

                           Which doesn't mean that putting all the data in one text field is the best option. It still may be a better approach to load up a set of related records with this info. But that's your call. The basic algorithm is the same.

The first step would be to filter out as many of the extra characters as possible:

Let ( [
    T = YourTextFieldHere ;
    T1 = Substitute ( T ; [ "a:" ; "" ] ; [ "s:" ; "" ] ; [ "}" ; "¶" ] ) ;
    t = Substitute ( Filter ( T1 ; "abcdefghijklmnopqrstuvwxyz" & Upper ( "abcdefghijklmnopqrstuvwxyz" ) & "-¶" ) ;
                     [ "name" ; "¶name¶" ] ; [ "type" ; "¶type¶" ] )
  ] ;
  t
)

                           Using the last line of your examples as test data, that produces:

                           iid
                           name
                           Sable
                           type
                           CO-DOMINANT
                           iid
                           name
                           Spotnose
                           type
                           CO-DOMINANT
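For comparison, here is a small Python model of that calculation (my own translation, not part of the thread), with `"\n"` standing in for FileMaker's paragraph mark ¶; it reproduces the same list from the same test data:

```python
import string

# The last serialized example posted above (Sable / Spotnose).
SAMPLE = ('a:2:{i:0;a:3:{s:2:"id";s:2:"88";s:4:"name";s:5:"Sable";s:4:"type";'
          's:11:"CO-DOMINANT";}i:1;a:3:{s:2:"id";s:4:"1100";s:4:"name";'
          's:8:"Spotnose";s:4:"type";s:11:"CO-DOMINANT";}}')

def filter_genetics(t):
    """Python model of the Let() calculation: strip markers, keep letters,
    then put name/type labels on their own lines."""
    # Substitute ( T ; ["a:";""] ; ["s:";""] ; ["}";"¶"] )
    t1 = t.replace("a:", "").replace("s:", "").replace("}", "\n")
    # Filter ( T1 ; lowercase & uppercase letters & "-¶" )
    keep = string.ascii_lowercase + string.ascii_uppercase + "-\n"
    t1 = "".join(c for c in t1 if c in keep)
    # Substitute ( ... ; ["name";"¶name¶"] ; ["type";"¶type¶"] )
    return t1.replace("name", "\nname\n").replace("type", "\ntype\n")
```

Running it on the sample yields the lines shown above (iid, name, Sable, type, CO-DOMINANT, and so on), confirming the calculation's behavior step by step.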
                            

                           Looping through that text to extract names and types, re-ordering them to put the type before the name, would then be fairly simple.
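That re-ordering pass can also be sketched concretely. This hypothetical Python version (function name mine; `"\n"` stands in for ¶) walks the filtered list and emits one "TYPE Name" line per pair, with a blank line between pairs:

```python
def type_then_name(filtered):
    """Walk the filtered, return-separated list and emit 'TYPE Name' pairs.

    Expects the layout produced by the Let() filter above:
    ... name / <Name> / type / <Type> ... repeated per pair.
    """
    tokens = filtered.split("\n")
    out = []
    i = 0
    while i < len(tokens):
        # A pair looks like: "name", <Name>, "type", <Type>
        if tokens[i] == "name" and i + 3 < len(tokens) and tokens[i + 2] == "type":
            out.append(tokens[i + 3] + " " + tokens[i + 1])
            i += 4
        else:
            i += 1  # skip leftover tokens such as "iid"
    return "\n\n".join(out)
```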

But why not import the XML data into a file? An XML import should be able to filter out the extra characters and leave you with just the data.

                      • 8. Re: How to de-serialize in filemaker?
                        thumper

Well, I can't import by XML because I can't figure out the whole XML style sheet, and also the data that is pulled into a global field for text parsing comes from a PHP file hosted on our server. The PHP file consists of entries that look like this:

                              

                        <entry1><wobid>2736</wobid><record#>1</record#><name>Sterling Bee Yellow Belly</name><aka>Super Pastel Spider Cinnamon Yellow Belly</aka><description></description><updated>0000-00-00</updated><created>0000-00-00</created><codominant>0</codominant><dominant>0</dominant><recessive>0</recessive><genetics>a:4:{i:0;a:3:{s:2:"id";s:2:"20";s:4:"name";s:12:"Yellow Belly";s:4:"type";s:11:"CO-DOMINANT";}i:1;a:3:{s:2:"id";s:2:"40";s:4:"name";s:8:"Cinnamon";s:4:"type";s:11:"CO-DOMINANT";}i:2;a:3:{s:2:"id";s:2:"91";s:4:"name";s:6:"Spider";s:4:"type";s:8:"DOMINANT";}i:3;a:3:{s:2:"id";s:3:"112";s:4:"name";s:12:"Super Pastel";s:4:"type";s:5:"SUPER";}}</genetics><first_breeder>a:1:{i:0;a:3:{s:4:"name";s:11:"Snakings.nl";s:3:"url";s:55:"http://www.worldofballpythons.com/breeders/snakings-nl/";s:4:"year";i:2013;}}</first_breeder><url>http://www.worldofballpythons.com/morphs/sterling-bee-yellow-belly/</url><importwave>7</importwave><date>12/31/1969</date></entry1>

                              

That's just one of nearly 3,000 entries, and each entry needs to be added into my table as a record. I use a ParseData function to grab everything in between the tags, like <name>albino</name>, and so on for all the tags. The issue I'm facing now is how slowly my loop script runs when going through nearly 3,000 of these entries, and there are about 1-5 new entries added to the list daily.
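The between-tags extraction that ParseData is described as doing can be modeled like this. The Python function below is my own stand-in, not the actual custom function; returning an empty string once the counter runs past the last occurrence gives a loop a natural exit test:

```python
import re

def parse_tag(data, tag, n):
    """Grab the text between the n-th <tag>...</tag> pair (1-based),
    mimicking the described behavior of the ParseData custom function.

    Returns "" when the n-th occurrence does not exist, which a loop
    can use as its exit condition.
    """
    matches = re.findall("<{0}>(.*?)</{0}>".format(re.escape(tag)), data, re.S)
    return matches[n - 1] if 0 < n <= len(matches) else ""
```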

My script is just a simple loop, like:

Set Variable [ $Counter ; Value: 1 ]

Loop

New Record/Request

Set Field [ table::name field ; ParseData ( table::z_DataHolder ; "<name>" ; "</name>" ; $Counter ) ]

Set Field [ table::description field ; ParseData ( table::z_DataHolder ; "<description>" ; "</description>" ; $Counter ) ]

(more Set Field steps for the other tags follow here)

Set Variable [ $Counter ; Value: $Counter + 1 ]

Exit Loop If [ IsEmpty ( ParseData ( table::z_DataHolder ; "<name>" ; "</name>" ; $Counter ) ) ]

End Loop

                              

I thought that if, after creating a record and parsing each entry's data into its fields, I removed that entry's text from the z_DataHolder field, the field would shrink and the text parsing would run faster. But I'm unsure how to remove the text that has already been parsed, and also unsure whether it would make any difference in speed.
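The slowdown likely comes from re-scanning the whole field from the start for every occurrence number, which grows quadratically with entry count. Rather than shrinking the field, a single forward pass that remembers where it left off avoids the re-scan entirely. A hypothetical Python sketch of that idea (names mine):

```python
def iter_tag_values(data, tag):
    """Walk the text once, yielding each <tag>...</tag> value in order.

    Tracking the current position ('pos') means each value is found by
    searching forward from the previous match, instead of re-scanning
    from the start of the field for every occurrence number.
    """
    open_t, close_t = "<%s>" % tag, "</%s>" % tag
    pos = 0
    while True:
        start = data.find(open_t, pos)
        if start == -1:
            return
        start += len(open_t)
        end = data.find(close_t, start)
        if end == -1:
            return
        yield data[start:end]
        pos = end + len(close_t)  # resume after this entry
```

In FileMaker terms, the analogous trick would be keeping a character-position variable in the script and using Position()/Middle() from that offset, rather than passing an ever-increasing occurrence counter to ParseData.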

                              

The Let function works great for filtering the genetics. :) Thanks so much!