Why not just write the script properly as a standard FileMaker script?
Hello! Take a look at the first example I posted. I see nothing "improper". Remember it's running on the server, so there are no "windows" issues.
Sent from my iPhone
You say you're running FMS 12, so you don't have Perform Script on Server; unless you're running this as a scheduled server-side script, you'll see bottlenecks.
If nothing else, a Freeze Window would likely speed things up. If this is running on a client, you're still transferring all the data back and forth across the network. Something I learned at DevCon is that when performing SQL actions (especially with ExecuteSQL; I can't speak to third-party plugins), any open records will slow down the execution considerably.
I'd agree with Bruce that there is probably a lot of optimization that can be done to your original script to speed things up.
Since you're doing a search and then looping through everything, there are a number of alternate methods that might work faster. Bruce is the king of the virtual list technique, so I'd defer to him. Perhaps stuffing all your source data into a virtual list, or even repeating variables, and then populating the destination table would be much faster. If nothing else, you'd only change layouts once instead of on every repetition of the loop.
If Bruce says there are places to optimize, I'd believe him.
Your original script changed context on each iteration of the loop. That is very easy to avoid and will save a lot of time right there: collect all the data in a variable, change context once, then loop through the variable and create the new records.
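In script-step terms, the "collect first, switch context once" pattern looks something like the sketch below. This is only an illustration; the layout, table, and field names (AuditLog, Log, entry) are made up, not taken from the original script.

```
# Sketch only – hypothetical table/field names
Go to Layout [ "AuditLog" ]            # context 1: read everything into a variable
Loop
  Set Variable [ $data ; Value: $data & AuditLog::entry & ¶ ]
  Go to Record/Request/Page [ Next ; Exit after last: On ]
End Loop
Go to Layout [ "Log" ]                 # context 2: the ONLY layout change for writing
Loop
  Set Variable [ $i ; Value: $i + 1 ]
  Exit Loop If [ $i > ValueCount ( $data ) ]
  New Record/Request
  Set Field [ Log::entry ; Value: GetValue ( $data ; $i ) ]
End Loop
Commit Records/Requests [ With dialog: Off ]
```

The point is that both loops run in a single context each; the per-iteration Go to Layout calls in the original are what cost the time.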
There may be other things going on that we're not seeing (record load or layout triggers? stored calcs in the log table?) that would slow this down.
Hi folks! Thanks for your input. I DO use virtual lists a lot as well. For this particular script I just didn't want to load so much into RAM; I need to save each record as it's processed.
The table has no calcs or triggers. It's just a simple table.
What is interesting to me about the SQL approach is how fast it was to implement and how fast the records are written. It's just an experience I wanted to share.
We all know that with FM there are an infinite number of roads to Rome.
PS: it runs every 4 hours, server side.
There is a good chance it has nothing to do with SQL.
It's about your script design and continuous context switching.
Maybe you are right. That's why I use virtual lists a lot. But for this example I needed to process and save one record at a time.
Thanks for your feedback!
Sent from Samsung tablet
Curious. What is the reason you need to save each record at a time?
I tried it with no Go to Layout, just adding (inserting) the record, and ExecuteSQL with the plugin was twice as slow.
> I boosted performance when creating New Record
The script parses an audit log and pushes that information to another table. Due to government requirements, the method for parsing that sort of information has to be "per record".
Otherwise, I'd use a multidimensional variable and parse it all at once.
I will add a little bit more information to the paper so everyone can get the context of why I am doing it this way.
Thanks for all your input and comments.
I concur with Wim above: the main lesson learned is that avoiding context changes in FileMaker can save a lot of time, so avoid changing layouts if you don't have to. That is an important lesson for FileMaker developers to know, and your experiment shows how important it is.
If you're going to use a plugin so you can use SQL's INSERT, why not skip the whole find-and-loop in FileMaker? Just use a WHERE clause for what you want to find and INSERT it all at once into the table instead of one record at a time. That might improve things and make the script a one-step script.
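Concretely, the one-step version might look like the sketch below. The plugin function name is a placeholder (vendors differ), and the table/field names (Log, AuditLog, processed, etc.) are invented for illustration; whether INSERT INTO ... SELECT is accepted depends on the SQL dialect the plugin exposes, so treat this as a sketch to test, not a guaranteed recipe.

```
# Sketch only – hypothetical plugin function and table/field names
Set Variable [ $result ; Value:
  YourPlugin_ExecuteSQL (
    "INSERT INTO Log ( userName, changeDate, fieldName, oldValue, newValue )
     SELECT userName, changeDate, fieldName, oldValue, newValue
     FROM AuditLog
     WHERE processed = 0"
  )
]
```

If it works in your plugin's dialect, the find, the loop, and both layout changes all disappear into a single statement executed inside the database engine.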
If you move up to 13, you can support iOS devices by using a Perform Script on Server step with the plugin installed on the server. The Perform Script on Server step really is a game changer for optimizing FileMaker databases.
That's what I figured. Have you considered a transactional approach, adding the records via a relationship? You parse, set the related fields, clear a global field, set the next set of values, clear the global, set the next set of values, etc., then commit.
See this for an example of what I mean:
It will likely be a ton faster than changing context, and it still satisfies the per-record processing requirement. It will also be a little easier to manage when things go wonky, because you can revert the changes and avoid partial success/problems.
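The steps described above could be sketched as follows. All names here are hypothetical: it assumes a relationship from a global match field (Interface::gLogKey) to Log::id with "Allow creation of records via this relationship" enabled, and it assumes the parse step fills the $user, $field, $old, and $new variables.

```
# Sketch only – hypothetical relationship Interface::gLogKey = Log::id,
# "Allow creation of records via this relationship" enabled on Log
Freeze Window
Loop
  # parse the next audit entry into $user, $field, $old, $new ...
  Set Field [ Log::userName  ; Value: $user ]   # first Set Field creates a new related record
  Set Field [ Log::fieldName ; Value: $field ]
  Set Field [ Log::oldValue  ; Value: $old ]
  Set Field [ Log::newValue  ; Value: $new ]
  Set Field [ Interface::gLogKey ; Value: "" ]  # break the match so the next pass creates a fresh record
  Exit Loop If [ /* no more entries to parse */ ]
End Loop
Commit Records/Requests [ With dialog: Off ]    # all rows commit together; Revert undoes them all
```

Because nothing is committed until the final step, each record is still built one at a time, but a failure partway through can be reverted cleanly instead of leaving a half-written log.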
I'll take a look at it! Thanks for all your feedback.
I also note that neither script (original or revised) shows where the variables $date, $time, $user, $field, $old, and $new are declared.