The first thing I would do is count how many times you call Get(ApplicationVersion). If it's more than once, set it to a variable at the beginning of the script and reference the variable wherever you need it.
Same goes for window desktop height/width, Get(UserName), and Get(AccountName).
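As a rough sketch of that pattern (the variable names here are just illustrative, not from your file):

```
# At the top of the startup script: call each Get() function once and cache it
Set Variable [ $$Device_Mode ; Value: Get ( ApplicationVersion ) ]
Set Variable [ $$User_Name ; Value: Get ( UserName ) ]
Set Variable [ $$Account_Name ; Value: Get ( AccountName ) ]
Set Variable [ $$Desktop_Height ; Value: Get ( WindowDesktopHeight ) ]
Set Variable [ $$Desktop_Width ; Value: Get ( WindowDesktopWidth ) ]

# Later in the script, test the cached variable instead of calling the function again
If [ PatternCount ( $$Device_Mode ; "Server" ) = 0 ]
    # ... client-only steps ...
End If
```

One function call at startup is cheap; thirty scattered calls add up, especially inside loops or conditionals that run on every record.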
This won't answer the question about the CHANGE in speed. When you restored from backup, did the speed return to what you expected?
If yes, then run a Recover on the file to see what shows up. Corruption may be rearing its ugly head.
Thanks! I hadn't noticed the Get(ApplicationVersion) calls. An InspectorPro search turned up 30 instances of it. In prior versions I had put that into a variable, $$Device_Mode. I will check your other suggestions as well; I hadn't even thought about the window desktop height/width.
Backing the file up and saving it as a compacted copy had no effect on the speed. I ran Recover, which created a new copy and reported that it detected no problems and the file was safe to use. But as you said, the speed remained the same. Despite the Recover message, I am still concerned about corruption.
Again thanks so much for your help.
There are a lot of subscripts in here. I suggest you drill down and add additional time tracking before and after each subscript. Then you can identify the bottleneck.
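A minimal way to bracket a subscript call with timing looks something like this (the subscript name is taken from later in the thread; the dialog is just one way to surface the number, you could also write it to a log field):

```
# Capture time in milliseconds before and after the subscript
Set Variable [ $start ; Value: Get ( CurrentTimeUTCMilliseconds ) ]
Perform Script [ "0027 PSOS Diagnostic" ]
Set Variable [ $elapsed ; Value: Get ( CurrentTimeUTCMilliseconds ) - $start ]
Show Custom Dialog [ "Timing" ; "0027 PSOS Diagnostic took " & $elapsed & " ms" ]
```

Repeat the same wrapper around each subscript and compare the numbers; the bottleneck usually jumps out quickly.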
Lots of variables can contribute to a change in performance: server load, data load, network conditions, cache contents, even whether or not the current record is open or committed. I'm not sure I would jump directly to file corruption as the cause.
The only reason I even went down the road of corruption is that the OP said he restored from backup. He didn't say until a later post that restoring from backup did NOT restore the speed. Because he didn't provide all the info needed to diagnose, I was guessing.
First of all, your script does not start by going to a specific layout; some lines down there's a Show All Records, which can have an impact depending on how many records there are, how many fields per record you have, whether there are unstored calcs on the layout, etc.
Also, we don't know whether any of the layouts you go to have triggers on them.
The simple rule is: call a function once and store the value if you need to reuse it in a script, and pass it to a subscript if a sub needs it.
Things to check:
1. At the beginning, anyone whose application version isn't "Server" performs script 0696a, "Write Generic Audit Trail Entry". Has it changed? Take a look at it.
2. Near the beginning, and we don't know (nor do you) which layout you're doing it from, you do a search on "T17 STAFF:FileMaker_Account". If that find is performed from a layout that doesn't use the STAFF TO as its basis, then you are doing a find on a related field, and that will slow things down a bit. Add a Freeze Window and go to a layout based on T17 STAFF (or better yet, a blank layout that uses T17 STAFF as its base table) before you do that find. It will speed the find up.
3. You have a "Show All Records" after the find in #2, then immediately go to L0_Opening Window. Remove that Show All Records. It's causing data to be sent to you that you may never need, and if that layout contains summary fields, fields from related tables, or unstored calcs, you'll be slowing things down.
4. Look carefully at script "0027 PSOS Diagnostic", which is another subscript that gets called.
5. You go to L275_INVOICES_Data_Entry and then set 2 globals. Then you go to L310_STAFF_Data_Entry and set a different global. Then you go to L300_PROJECTS_Data_Entry and set another. You can set global fields from anywhere, so you don't need to go to any of those layouts. Doing so causes data you may not even need to be sent to your computer.
Let's just say you set those global fields right near the beginning and remove 3 or 4 Go to Layout steps. Your startup script will automatically speed up because it won't need to receive needless data from INVOICES, STAFF, and PROJECTS.
Those are some things to look at that might help with speed.
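Putting points 2, 3, and 5 together, the reworked opening might look roughly like this. The layout name "Blank_T17_STAFF" and the global field names are hypothetical placeholders; substitute your own:

```
Freeze Window

# Point 5: set globals up front; no Go to Layout needed for global fields
Set Field [ INVOICES::gInvoiceGlobal1 ; /* value */ ]   # hypothetical field names
Set Field [ STAFF::gStaffGlobal ; /* value */ ]
Set Field [ PROJECTS::gProjectGlobal ; /* value */ ]

# Point 2: do the find from a blank layout based on T17 STAFF
Go to Layout [ "Blank_T17_STAFF" ]                       # hypothetical layout name
Enter Find Mode [ Pause: Off ]
Set Field [ T17 STAFF::FileMaker_Account ; Get ( AccountName ) ]
Perform Find []

# Point 3: no Show All Records here; go straight to the opening window
Go to Layout [ "L0_Opening Window" ]
```

This removes the related-field find, the unnecessary Show All Records, and three or four layout hops, each of which was pulling record data across the network.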
Those are all really great points. After working through them, so far I've decreased the execution time in the upgrade file by 0.574 seconds, to 1.69 seconds. On my older data file, which is currently in use by my customer, I have also decreased execution time from about 0.464 seconds (slightly less than half a second) to about 0.185 seconds.
Thanks to everyone for your help.