Set iterations to at least 50,000.
Try this alternative, too.
What I'm trying to demonstrate is that,
for the - I hope - few of us who absolutely must run time-dependent / time-critical scripts on a large set of records,
• it's better to avoid comments in the script (keep a twin, commented script with "doc only" in the name)
• if you do want comments, it's better to use a disabled Let() than a real comment step...
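To make this concrete, here's roughly what I mean - a sketch; the attached file has the real scripts. A real comment step inside the hot loop:

   Loop
      Exit Loop If [ $i ≥ $iterations ]
      # explain what this iteration does
      Set Variable [ $whoCares ; Value: Random * 10000 ]
      Set Variable [ $i ; Value: $i + 1 ]
   End Loop

Versus the documentation parked in a Set Variable with a Let(), with the step disabled so it is skipped at runtime:

   Loop
      Exit Loop If [ $i ≥ $iterations ]
      Set Variable [ $doc ; Value: Let ( note = "explain what this iteration does" ; "" ) ]   <-- step disabled
      Set Variable [ $whoCares ; Value: Random * 10000 ]
      Set Variable [ $i ; Value: $i + 1 ]
   End Loop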
As you set it up, I got 2742, 3498, 3071.
HOWEVER, you are using the Random function, which I think might also have subtle nuances in timing.
I removed the random part in a copy (just set $whoCares to 10000 instead of Random * 10000) and got 2669, 2533, 2405.
Also, I added a third option that leaves the random factor in place but adds a refresh that clears cached external data before each script executes. With that refresh, I now get 3050, 3334, 2795.
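The refresh is just the standard step with the flush options turned on, performed ahead of each test run - a sketch, with an invented script name:

   Refresh Window [ Flush cached join results ; Flush cached external data ]
   Perform Script [ "Loop test - with comments" ]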
A lot could depend on your system itself; I'd imagine the RAM and processor allocation between FM, the OS, and other apps varies wildly during any given runtime. Just for kicks, here's your original script run 5 more times:
If I quit all apps except FM and Chrome, I get:
etc... so it appears that Mac OS X and FM are pretty decent at handling processor/RAM affinity.
Attaching my modifications.
Personally, running 5 * 50,000 for the comments with only a variation of <1 second over no comments doesn't seem like a big deal to me.
That's 250,000 comment line reads, and only <1 second of variance. Meaning that in an extra minute, you could get 15,000,000 comment line reads (assuming no time-lapse degradation, which I HAVE experienced in FM for long-running scripts).
My final thought is that if your script is long enough, or run with enough frequency that it makes a difference, by all means separate it into a production only copy and optimize the hell out of it.
Otherwise, for 99.9% of my practical scripting purposes, comments assist more than they detract.
Here's a good read as well:
For instance, if you have a script you run once daily, and optimize it to one second faster, then you save 30 minutes over the course of five years. Maybe it's not as big of a deal?
PS - if it IS a big deal, man, what the heck are you doing with FileMaker? Most people going off into that territory / need are working with much faster DB platforms.
There is a silent minority out there which stretches FileMaker's functionality and competence into new territories on a daily basis. I think I'm part of it - we placed our bets on FM many, many years ago and we like to think that anything and everything can be done with this wonderful piece of software. As much as we love it, we do explore its boundaries, limits and possible improvements - this discussion is an example. Personally I base my living on FileMaker, on delivering and covering what our clients are asking for. In a world that wants everything immediately, we have to fit and optimize our solutions; if some ideas and valid points crystallize and gain momentum, catching FileMaker's own architects' interest, everybody has something to gain - that's the whole point.
And by the way, Mike, what's your take on this ?
(Hint: hit RUN more than once)
No special take on this... A better test would be to run these 3 things separately, quitting out of FM after each one to get rid of any cached data.
What it demonstrates is what has been discussed many times, including at DevCon: ExecuteSQL() is not always faster than other native FM mechanisms. What was your expectation?
Interesting test. I'd never really thought about the possible impact of comments or leaving disabled lines in place. As Mike points out, not a lot in the grand scheme of things, but nonetheless worth exploring a little. To speed things up a bit I found 10,000 loops was plenty to show a discernible difference. My first observation is that the scripts never execute in exactly the same time. I tried shuffling the order, but that was not a significant factor. Then I tried making all three scripts identical and there is still an execution time difference. Why would that be?
Anyway, for what it's worth, here's a series of tests I ran, with the time result for the altered script only:
Test 1 ($whoCares variable disabled, #comments in place): time—486
Test 2 ($whoCares variable disabled, #comments in place, add SetField () step after each comment): time—811
Test 3 ($whoCares variable enabled, #comments in place): time—941
Test 4 ($whoCares variable disabled, #comments disabled): time—518
Test 5 ($whoCares variable disabled, #comments removed): time—453
Test 6 ($whoCares variable removed, #comments removed): time—435
Test 7 ($whoCares variable removed, #comments increased from 5 lines to 20): time—752
(1) a disabled line of script has an impact on execution time, albeit a small one, so while we disable code during script development, it's best not to leave disabled code in place once you are sure it's not needed.
(2) comments also have an impact on execution time (interesting that they have an even bigger impact if disabled!), so use them to the extent that they are needed, but be aware that they slow script execution - infinitesimally per line, but collectively it becomes significant.
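For reference, the altered script looked roughly like this - a simplified sketch, not the exact steps from the file:

   Set Variable [ $start ; Value: Get ( CurrentTimeUTCMilliseconds ) ]
   Loop
      Exit Loop If [ $i ≥ 10000 ]
      # comment line 1
      # comment line 2 (five comment lines in most tests, twenty in Test 7)
      Set Variable [ $whoCares ; Value: Random * 10000 ]   <-- disabled or removed per test
      Set Variable [ $i ; Value: $i + 1 ]
   End Loop
   Set Variable [ $time ; Value: Get ( CurrentTimeUTCMilliseconds ) - $start ]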
Optimizing applications and pushing FileMaker's limits are both worthy causes, but there's value in keeping the bigger picture of any optimization in mind before making recommendations.
A sophomoric approach might demand that we start by identifying the bottlenecks in our current solutions so we can focus our efforts on improving the most problematic spots, and only turn our attention to optimizing non-bottlenecks when we've exhausted our imaginations for alternative solutions to the bigger problems — optimize our optimization efforts. You must have done a lot of optimization work if comments and disabled script steps are the most fruitful place you have left to search for speed improvements! I imagine your results from some of that other work might have broader appeal, and I'd love to see it on the forums.
A not-much-wiser approach would suggest that we start by understanding and balancing the priorities of a solution — figure out what needs to be optimized. You appear to have found an example of the infrequent scenario where improving execution speed and improving clarity for maintenance developers might be at odds with each other. We can't make general recommendations based on your findings without understanding the priorities of the solution that recommendation will be applied in.
I don't mean to discourage the effort here. This is fascinating and useful stuff. Being strict about either approach I mention here can easily lead to pathologies similar to statistical over-fitting or over-financialization of business management. Cost/benefit analyses are most helpful when we have good reason to believe that the costs and benefits of our actions and strategies are predictable, which is often not the case in software development. (However, analyzing performance bottlenecks does show us the ceilings on how much benefit might be possible from which optimization.) So, let a thousand experiments bloom!
keywords wrote: (2) comments also have an impact on execution time (interesting that they have an even bigger impact if disabled!), so use them to the extent that they are needed, but be aware that they slow script execution - infinitesimally per line, but collectively it becomes significant.
Ok, time to fight this because it seems to be building up to a recommendation to avoid commenting code.
Collectively they DO NOT become significant, because you cannot measure the impact of code commenting purely by its execution time.
For 99.9% of the solutions out there, avoiding comments or disabled code is going to fall in the "Premature Optimization" category (http://c2.com/cgi/wiki?PrematureOptimization)
As jbante mentioned, you would have to have exhausted all other optimizations before you would turn to avoiding comments and disabled code.
Without that perspective, this thread may lead developers to avoid comments, which leads to obscure code, wasted development time, and a greater potential for errors due to obfuscation during code maintenance.
I am not against commenting code AT ALL! In fact, quite the opposite. And I firmly believe that there is far more to it than execution time. Commenting scripts, calculations and field definitions adds far more value than it costs. It's just interesting to note that it does actually have a cost, albeit a minuscule one, that's all.
I base my living on FM as well, as do some of the heavy hitters who have commented below. But I have also coded previously, and still dabble, in other platforms enough not to be naive about what FM's limits are. There are many great uses for FileMaker, but performance-driven applications on massive data sets with copious amounts of calculations while expecting great speed is not necessarily one of them. I am very much a "right tool for the right job" developer. I also know that client relationships will eventually deteriorate if you sell someone FM when it's not the right solution for them.
Take, for instance, another technology I've played around with a bit: Hadoop. Hadoop is a crazy awesome platform, and Hive computing delivers insanely fast results (like Google as-you-type search results). The ability to rapidly scale from one to one hundred, or one thousand, machines to distribute and speed up the workload is nothing short of a miracle of modern computing. I can take terabytes of data and run calculations and queries against it pretty much in realtime. But I don't see that sort of application as EVER being done in FileMaker. Yes, FM has a record limit of 64 quadrillion records, but that doesn't mean you should go index the text of every book in the Library of Congress to make a new catalog. It's just not the right tool.
Everyone explores the boundaries of FileMaker in their own way. I don't want to sound like a jerk, but you're not in a minority of developers out there. There isn't a dev I've met that says "my solutions are fast enough" or "I think I've learned everything I need to know". Part of the appeal of FM is that even novice developers can build powerful solutions in no time. A junior-level developer I trained at my last job is now using techniques and strategies that some 20-year veterans I've met wouldn't be able to wrap their heads around, and he's only two years into ANY programming platform.
As Jeremy suggests, most developers here may not even consider comments an issue. As Wim suggests, trying to introduce a coding standard that says you should either forgo commenting or keep double copies of scripts for performance might also be dangerous (I'd actually bump Wim's number up to 99.999%). As I demonstrated on your original test, slight modifications can wildly change the performance (i.e., removing the random part made the second test WITH the comments FASTER than the first one without).
If you still want to continue, you really need to think thoroughly about how you can truly isolate and test for performance to see where the net gains would be. As Wim stated, execution time may not even be a valid measure of true performance. I also want to echo Jeremy's thought of not trying to discourage this, even though my long comment here may touch on ranting a bit. If you're trying to introduce such a drastic measure to eke out the tiniest bit of performance, while trying to gain community support for a coding practice AND the attention of the FM architects, then you need a much more thorough test, with thousands of iterations and hundreds of testers, to prove your point. Not just a casual file that was thrown together and introduced on TechNet.
Thanks for the tip.
If I ever need to shave a few seconds off a huge loop, I will try it.
However, I would be more interested to know WHY this might happen, e.g. whether scripts are interpreted or compiled just in time.
This would make a good Under the Hood Topic at DevCon
> What I'm trying to demonstrate is that
And by the way, I'm getting a nice speed boost by enclosing subscript calls between Allow User Abort (off) / Allow User Abort (on) steps.
As much as 14%.
Of course you must be sure about the validity of your enclosed subscripts.
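The pattern is simply this ("Validator" standing in for your own subscript):

   Allow User Abort [ Off ]
   Perform Script [ "Validator" ]
   Allow User Abort [ On ]

Presumably the gain comes from FileMaker not having to watch for a user cancel (Esc / Cmd-period) between steps while abort is off - that's my guess, not something documented.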
I'm just surprised to see so many naysayers.
If you come to FileMaker after doing game programming, this thread does make sense
It seems Set Error Capture ON also makes a difference.
The "Comment" time difference does get larger, with more loops, in FileMaker 11, but seems fixed in 12
The "Disabled Line" time difference does seem to get larger, with more loops, in FileMaker 12
Either way, my little test script runs faster in FileMaker 11 than 12
> and by the way, I'm getting a nice speed boost by enclosing subscript calls between Allow user abort (off) / Allow user abort (on) steps
Please stop these nonsense arguments on commenting scripts and also calculations.
Comments have absolutely NO effect on performance speed. Please take this as a true fact!
This is why: Every script and every calculation exists in two versions in a file.
Version 1 is the text representation that you enter or edit.
As soon as a script (or calculation) is saved, it is copied and translated into version 2, which solely contains the executable part of it, i.e. no comments, blanks, or formatting are left. Only version 2 is ever performed, whereas version 1 is kept as the human interface.
Greg, if you ever figure out how to do bit-packing in FileMaker, let me know.
Also, what game programmer would ever want to go to FM? It is in no way, shape or form remotely close to any coding language for gaming, not to mention a lot less cool.
Thanks a lot, Winfried, for this awesome information.
Maybe your knowledge of FMP's internals could offer some other performance-related facts?
Thanks a lot.
Vincent, if you haven't met or heard of Winfried before, I'd suggest you check out the stuff he's already written about best practice, troubleshooting and performance.
Here's a good start to his already massive contributions outside of this thread:
I doubt he's going to comment further, as the original purpose of this thread has been exhausted.
as the original purpose of this thread has been exhausted.
I thought that such conclusions belong to the community, not to individuals.
I was wrong, apparently.
I'm not trying to insult you, but apparently you think so.
Please stop trolling and bumping your thread. Take people's opinions for what they are, and stop trying to start controversy that doesn't exist.
I personally thank you for the question you raised that led to a definite answer from Winfried. It's a question I asked myself a lot, and I'm sure others have probably thought about it. The outcome of this thread is that now the community knows the answer, and that's a big plus in FMP knowledge that your concern triggered.
So many thanks !
P.S.: I just wish FMI would publish low-level data like this; it would help the community to better understand how FMP works.
So you want an example of speed-critical processing... here comes one:
Imagine a billing system based upon more than 4000 standardised, different entries. Call them "performances".
Put them together in chapters, blocks and groups.
Imagine hundreds of rules - always with, never with, only if, max of, and so on - rules that state things like
- this performance is not billable if it has been billed more than 30 times in the last 6 months.
- this performance can only be billed if performance Y is also billed.
- this performance can be billed only if no performance in block A is present.
- this performance is billable only if patient is less than 6 years old.
- this performance can be billed a maximum of 3 times per week.
- Performances in group A are not billable together with performances in group B.
- Performance X has 4 different weights, depending on week day and time of the day.
Any bill you compose must go through a Validator test, which checks all the rules and answers "OK" if you didn't screw up:
Every bill of every physician must be validated. Example: a simple 15-minute visit is composed of 3 performances
00.0010 First 5 minutes
00.0020 1 x 5 minutes
00.0030 Last 5 minutes
and there are at least 15 rules to check out, rules that confirm this minimalistic bill being a valid one.
Every bill line must be compared with all the other n lines in the same bill (the combinations to check grow like n!, the factorial of n) and with bill lines from the last year on the same patient.
No bill can go out without being validated.
We've FileMakered it: I've built this Validator in FMPro. It's a HUGE script and, believe me, both perceived and real speed matter:
when tens of doctors input hundreds of performances per hour and produce tons of bills to be validated,
and when your billing unit is 5 minutes, you can't wait more than 3 seconds for a validation to happen.
This is not a nightmare, it's real. (A real nightmare)... it's the Swiss TarMed system - google it if you have doubts.
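To give a flavor of what a single rule costs: the "not billable more than 30 times in the last 6 months" check alone boils down to something like this (a sketch - the table and field names are invented, and the real Validator is far bigger):

   GetAsNumber ( ExecuteSQL (
      "SELECT COUNT(*) FROM BillLine
       WHERE PatientID = ? AND PerformanceCode = ? AND BillDate >= ?" ;
      "" ; "" ;
      Bill::PatientID ; $performanceCode ; Get ( CurrentDate ) - 182
   ) ) ≤ 30

Now multiply that by hundreds of rules, on every line of every bill.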
You can improve the perception of speed in two ways: by actually using methods that are quicker, or by spreading the time around so the processing doesn't happen when the user is looking.
Jeremy's point about optimizing above is a good one. Based on your description, there's a half dozen techniques you could use that will have a greater effect on speed than comments.
Is the file hosted locally or across the internet? Can you host locally if it's remote? Can you split the file into smaller files to run faster?
Are you running validations on the final invoice or as items are added? Can you perform the calc using Perform Script on Server?
Are you using Case() and If in ways that reduce calculation times... by having the stuff that evaluates locally come first?
Instead of checking the bill items against the items from the last year (or 6 months) on the same patient, can you store that collated information in the patient record instead, with a server script running nightly to update?
Those are a few things that come to mind.
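On the Case() point, for example: Case() stops evaluating at the first true test, so put the cheap, local tests first and leave expensive queries for last. A sketch, with invented names:

   Case (
      IsEmpty ( Bill::PatientID ) ; "missing patient" ;   // local field test: cheap, evaluated first
      Bill::LineCount > 200 ; "too many lines" ;          // still local
      GetAsNumber ( ExecuteSQL ( "SELECT COUNT(*) FROM BillLine WHERE BillID = ?" ;
         "" ; "" ; Bill::ID ) ) > 30 ; "over limit" ;     // the expensive test only runs if reached
      "OK"
   )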
Yes, comments seem to have little effect, but what about disabled lines?
There does seem to be a delay, somehow.
> Comments have absolutely NO effect on performance speed
Interesting stuff for sure. I do think comments have an effect (as does Allow User Abort[Off]), but there are lots of things in your test that could be affecting performance. Try switching the order of the two subscripts. It looks like that has an impact.
Also, does a flat number versus a Pi() call make a difference? What about setting variables to store the time and then the global fields? Is declaring the $i counter in the Let() faster than as a variable?
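By "setting variables to store the time" I mean something like this (a sketch; the global field names are invented), so the field writes happen outside the timed section:

   Set Variable [ $t0 ; Value: Get ( CurrentTimeUTCMilliseconds ) ]
   #  ... the loop being timed ...
   Set Variable [ $t1 ; Value: Get ( CurrentTimeUTCMilliseconds ) ]
   Set Field [ Test::gStart ; $t0 ]
   Set Field [ Test::gElapsed ; $t1 - $t0 ]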
If there is an effect, and there might be, keep in mind:
The effect is fractional. We're talking about 400-500 milliseconds on a 10,000 $i loop. With 50 comments, that's .001 milliseconds (that's .000001 seconds!) per comment.
There are other ways of optimizing that have less impact on development. I wouldn't leave out Allow User Abort[Off] just to improve performance. The disaster of a user aborting mid-script is too terrible. Just like poorly commented scripts introduce the potential for more mistakes.
David, I edited the post - threw out the Pi calc in the attachment, please re-download it.
Also try setting the iterations to 5000 and hitting the TEST button repeatedly, watching the "xx times" values.
I started this thread explicitly stating that 99% of FileMaker programmers and users can disregard it, because the impact of my findings on their installed solutions is negligible.
It's a thread aimed at the ones who have millions of records to process and/or people like me who try to optimise scripts that run a factorial of 20 times (a mean value). It's a niche thread, worth nothing for normal solutions but maybe interesting for a few. Nothing more, nothing less.
In a world in which we facepalm ourselves often (sometimes twice a day) when reading the posts on this forum, I thought I might bring in some positive curiosity and a hmmm factor, something that might wake up the dormant. Looks like I failed - please excuse me, all.
Still, while my rebel nature does not allow me to take bullshit from any self-proclaimed or elected forum guru who surfs my posts unloading absolute "deep knowledge" on them, I do appreciate forum peace, and that's why I only say
Just wanted to say, Radu, that I appreciated reading this thread. You raised some interesting questions. Yours and others' results informed us that: 1) there can be a small performance impact if dataset size and looping structure are extremely large, and 2) most of us can comment and disable without performance concerns. There are few things more important to us developers than performance; anything that proves or disproves a technique's impact on performance adds wisdom to the conversation.
Odd that so many people are dismissing something that is new information to most of us. An adverse effect is an adverse effect, and any assumption that this is the only thing the original poster is looking at for speed is absurd. It is quite possible that this tidbit of seemingly innocuous information will rear its ugly head under different circumstances and cause a noticeable slowdown.
I could indicate a few FMP oddities that would not adversely affect most database tables, but then show you that there is a cutoff point at which the effect becomes compounded. Some of you would scoff and tell me that the milliseconds don't matter, then scratch your heads a month later when your record set hits some seemingly arbitrary threshold and things start to crawl.
Want a daily calendar/dashboard with 45 rows x 5 columns x 3 fields = 675 references to be fast for remote users? Then be sure to either delete the oldest records in the referenced table or turn off the calculation to the foreign key (set it to null) after you reach 24,000 records or so. Had I tested with fewer than 20,000 records, I'd have been hammered with "don't be ridiculous, you're saving milliseconds. Look elsewhere." when my point was to make people aware of a looming issue. Aside: the problem was solved, with a 10-20x increase in speed, by using variables, while no streamlining of relationships ever fixed it.
Ever tried changing the formula of an unstored calculation, then waited an hour for it to complete some unknown cycle? These are supposed to be immediate changes, right? I have waited an hour for some calculations and not others, and it's completely reproducible for some formulas when you reach 50 million records, give or take 10 million, while the delay never happens for others. Large formulas on large tables is all I can tell you right now, but it most certainly happens.
It's possible that the information from siplus saves me hours of wasted time in the future. I'll keep this news in mind and maybe it will pay off later, so thanks for the info.
How do you know about a delay? You may experience a few milliseconds of delay the first time you use the changed script, but this is the same as for a new script.
It is repeatable, and obvious even in seconds. Just Time End - Time Start. Something is going on.
The delay gets longer with more loops.
> How do you know about a delay? You may experience some milliseconds delay the first time you use the changed script, but this is the same as for a new script
I don't know what you're doing. Please provide a sample file and a description!
Here is (another) sample file.
There are 3 looping scripts: one plain, one with comment lines, and one with disabled lines.
Disabled Lines runs consistently slower.
You can enter the number of loops.
> Please provide a sample file and a description!
Using your sample file, I changed the time recording to use the new UTC milliseconds to make it very accurate.
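Concretely, the timing wrapper looks like this (a sketch - presumably Get ( CurrentTimeUTCMilliseconds ) is the "UTC milliseconds" in question, and "Normal script" is one of the three scripts in the file):

   Set Variable [ $start ; Value: Get ( CurrentTimeUTCMilliseconds ) ]
   Perform Script [ "Normal script" ]
   Show Custom Dialog [ "Elapsed (ms)" ; Get ( CurrentTimeUTCMilliseconds ) - $start ]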
I ran each script 5 times:
The times in milliseconds:
Normal script   Commented lines   Disabled lines
2635            2470              2326
2535            2534              2309
2529            2443              2315
2586            2428              2329
2585            2931              2310
So I'm not seeing any slowdowns with comments or disabled lines
If I close the file before running each script, I am getting fairly consistent timings. Other machine activity (an incoming mail check, etc.) does have a measurable effect; comments and disabled lines do not, at least not in a way that is accurately measurable. The effort to try and determine whether there is one (I doubt it) is not worth the trouble: you'd have to make sure absolutely nothing else runs on the machine, which is not valid in the real world anyway. I've seen the routine take half a second longer when mail is checked. Other than that, there are variations of between 50 and 100 ms in running the exact same script.
I'm as sorry as can be, but the three scripts run at exactly the same speed here.
Funnily, the script with the disabled lines runs one tick faster.
Test with 200,000 repetitions on FileMaker 13
Processor 2 x 2.26 GHz Quad-Core Intel Xeon
Memory 16 GB 1066 MHz DDR3 ECC
Software OS X 10.8.5 (12F45)
Big Loop no extra 14
with comments 14
disabled lines 13
What I did was add a Get ( ScriptName ) to the final custom dialog. What are your numbers?
"Disabled Lines run consistently slower" is no exact science.
Getting accurate speed metrics in FileMaker is very, very tricky. I have spent significant effort trying to work out a methodology for testing. Historically, I have thought I had improved something, only to discover that restarting the server or client resets the gain, implying a cache is responsible more than any improvement of mine.
A function similar to SQL's EXPLAIN would be nice...
The milliseconds SAVED by the use of comments and disabled scripts/steps, because I could clearly see what I was doing later:
I'd probably say at least a week's worth of milliseconds in any given year...
This is what I get on an SSD MacBook Pro, with no applications except FileMaker 13 running, with the whole Extensions folder moved out of the FileMaker folder, while hitting the TEST button for the 10th time. (The other 9 results are consistent with this one anyway.)
If you get the same values for all 4 scripts, I'd really like to know what system / FileMaker version / extensions you're using.
I'm with Beverly. Comments are much more valuable in terms of saving My Personal thinking time than anything cost in computer processing time.
Nobody wants to ban comments from scripts.
I'm only trying to underline the fact that if you have mission-critical scripts - scripts that run a thousand or more times per hour, optimized to the bone - you might get another boost by duplicating them: keep one version with comments, disabled lines and Allow User Abort on, and a running version with no comments, no disabled lines and Allow User Abort off.
I'm surprised at how difficult it is to get this simple idea across; AFAIK we all want to squeeze out better performance, and this is a provable way to do it.
I think we all understand. You've been given tests showing that the effort to remove them does not make enough of a difference to those of us who rely on the comments, etc. I do attempt to optimize the code, but comments are not, IMHO, the "problem". I've taken others' code from 50 lines to three. That seems to speed things up! Also, I script rather than calculate much, and that shows marked improvement in speed. I also think I'm saying: if you feel better by removing comments and disabled script lines - more power to ya!
I hear what you're getting at, and you are probably right about these steps having an impact, but I am not convinced, from the tests on various machines reported earlier in the thread, that the impact would be significant where regular users have a ton of other stuff running anyway.
I've also engaged in script rewriting in old systems to trim hundreds of lines to a few dozen, knowing that shorter is better. However, while doing so, I also heavily comment -- still coming out way ahead, faster cleaner scripts with explanations.
The kind of performance improvement that matters to me is when users see a speed improvement in a report I rewrote, or in a process they run when closing a job or an invoice process -- things where they might notice having to wait. I've never written a script which executed 1,000x per minute except for testing. If I needed such a system, and users complained of its slowness, I might well reconsider this topic.
Meanwhile, I saved your test file -- just in case. Thanks!
I agree with the comments of others that commenting in scripts is worth its weight in gold, as the saying goes. I simply add that what this discussion has shown is that comments do actually - and surprisingly - have a "weight" (in terms of processing time), albeit minuscule for the most part.
Like most developers, I am not going to cease commenting scripts and calcs in order to save a millisecond or two, but I do thank siplus for raising it and thus adding to my store of knowledge.
Let's take another swing at this; I've used your latest file. The slowest script is this one:
Clearly, to optimize, you'd remove all but one comment. If you do, the script performs just as fast as the one that has no comments. But let's take a more real-life example.
I've modified your script to actually do something in-between those comments:
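The shape of it is roughly this (a sketch, not the exact steps - the real script has 16 comments interleaved with the work, and the field name here is invented):

   Loop
      Exit Loop If [ $i > 100000 ]
      # comment 1 of 16 ...
      Set Variable [ $whoCares ; Value: Random * 10000 ]
      # comment 2 of 16 ...
      Set Field [ Test::Result ; $whoCares ]
      # ... and so on ...
      Set Variable [ $i ; Value: $i + 1 ]
   End Loop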
And I have the same script without the comments. When I run them the script with comments takes 66 to 68 seconds. The one without comments takes 64 to 65 seconds. The smallest difference I have seen is 1.5 seconds, the longest 2.8 seconds.
That's for 100,000 iterations. So clearly the script with comments takes longer. Breaking it down, and using the biggest difference between the scripts, it comes down to roughly 0.00175 milliseconds per comment per iteration (2,869 milliseconds difference over 100,000 iterations and 16 comments).
Clearly I am not going to worry about adding something to my script that takes 0.00175 milliseconds to execute. If I have a straightforward script (not a loop) and I add 1,000 comments, the impact is going to be less than 2 milliseconds.
So for a single comment to make a noticeable impact (let's say noticeable = 1 second), it would take about 600,000 loop iterations (1,200,000 iterations in my most favorable scenario).
So based on this you could say: "let's take out the comments in that script" - I'll get better speed and shave 3 seconds off this process.
But what if I do this: I keep the 100,000 iterations, I still set the same variable to a random number, and I keep all my comments.
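The change is in the work between the comments, not in the comments. As a sketch of the sort of thing that buys that kind of time (I'm not claiming this is the exact change in my file; the field name is invented): accumulate in a variable inside the loop and write the field once at the end, instead of a Set Field on every iteration:

   Loop
      Exit Loop If [ $i > 100000 ]
      # all 16 comments still in place ...
      Set Variable [ $whoCares ; Value: Random * 10000 ]
      Set Variable [ $total ; Value: $total + $whoCares ]
      Set Variable [ $i ; Value: $i + 1 ]
   End Loop
   Set Field [ Test::Result ; $total ]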
Now my script runs in 31 seconds. I just saved 30,000 milliseconds, not 3,000.
What's the point? I'll bet the farm there are other optimizations that can be done before you would even have to consider taking out comments...
In the example I used, I am not going to care too much whether the process takes 64 seconds or 68 seconds, especially since the exact runtime varies so much even for one variation of the script. If I am looking for speed, I will be looking for something that cuts that execution time dramatically, even if it means taking the logic out of FM and having it done by something else that I then integrate with.
In my mind, removing comments is never going to be a way to improve performance. The speed impact is negligible and the impact of having no comments is way too great. And no: maintaining a second script with comments is not workable in my book. They will get out of sync at some point - too much of a risk for errors.
I simply add that what this discussion has shown is that comments do actually - and surprisingly - have a "weight" (in terms of processing time), albeit minuscule for the most part.
Unless they don’t. I just ran five tests.
In three of the tests the difference between the speed of each script is similar to the variability of speeds between test runs. In other words, from run to run, the differences in speed, running exactly the same script, show the same amount of variation as the differences in speed between the different versions of the script. What does that tell us? If it is consistent over a large number of tests it tells us that the variations in speed between the different scripts equals the normal variation in execution speed for any script.
In one of the test runs the disabled line script takes four times longer than the quickest. It's clearly not four times longer in other runs, so this is a signal that other factors can have a big impact on execution speed.
In two of the test runs the script with no comments is slower than the script with disabled lines. Should we add disabled code lines to our scripts?
Sorry for the confusion. That was a bad example.
It turns out you need a LOT of comments to make a difference (a few dozen).
It also pays to use a slower machine. My dual-core Mac Mini shows a much greater difference than my quad-core Xeon PC.
The question is: how can there be a difference at all?
What you say should be true, that FileMaker scripts are compiled
However, they appear to be interpreted
Thanks for trying, even though it proved nothing
it did make for an interesting discussion
> I'm as sorry as can be, but the three scripts run at exactly the same speed here.
gdurniak wrote: It turns out you need a LOT of comments to make a difference (a few dozen)
The script steps that execute the things your comments explain have a much bigger impact on the execution time of the script; that's what we seem to be forgetting here. As I pointed out in my speed test earlier, the extra effect of a comment (or even a lot of comments in a busy script) is negligible.
Winfried is spot on for calculations but AFAIK scripts are interpreted.
What we seem to be forgetting here is that siplus was correct.
> that's what we seem to be forgetting here
> AFAIK scripts are interpreted
I don't think anyone is forgetting. There are choices and we make them based on the needs and available data. // I USE COMMENTS!!