I don't have much experience with Server 12, but I have lots of experience with Server 11 server-side scripts. The server is much, much slower than a client.
12 does have some optimisations, so it runs faster than 11, but I can't give you any details on what should and shouldn't work; I believe 12 is a fairly different beast under the hood.
The FMS hardware (cache, processor, free hard drive space), the software load, the number of users, and what those users are doing (running reports, imports, and how much data they touch) are all involved in the equation. On a local computer, you are the only one doing this work; on FMS, multiple users are all doing varying amounts of work. If the FMS hardware isn't up to the workload, then FMS scripts can't run well.
Another issue to consider is client caching. Whenever a record is updated, every connected client must receive that update, which arrives as an update to the client's cache. Caches are refreshed when needed, or during idle times, and the server has to handle that load. When you're batch-updating tables on the back end, you're directly affecting the caches of all connected users.
Further, the more fields there are in a table, the longer it takes to update that cache, because all fields must be downloaded to each client for each record being updated. Records are fetched in batches: 25-record batches for Form View, or as many records as can be displayed in the current List or Table View. If you're replacing the contents of every record in a table, then every client will have to be updated with those changes, and this will put a strain on the network as well.
So yes, I would fully expect this operation to be slower when processed by Server than by a client. That's why normal practice would be to schedule it for a low-activity time, when few or no users are connected.
Thanks, good to hear that in general it is to be expected.
I shall test with no users connected to see how that goes.
You will find that even with only one client connected, it is still around 5 times slower than a local DB on the same server. I have done extensive benchmarking, and this persists even on some serious hardware. There are other things to consider too, such as FileMaker Server's native protocol parsing, which is an overhead, as well as event logging (which can't be turned off). A FileMaker support person here in Australia helped me develop a benchmark DB, which we ran on several systems, and he agreed that the performance difference was significant. He queried this with engineers in the USA and the response was 'what's your point?'. We left it there, concluded that it is what it is, and have not pursued it any further.
Many people run server-side scripts in the middle of the night, when the server usually has few or no connections and time is not so important. I know it is slower running on the server, because the server has a lot of overhead, but I'm really surprised to hear numbers like "5 times slower"; it has never seemed that much slower to me. If time really is important, I've used tricks when updating such as looking only for records that have changed, reimporting just those, and leaving untouched records alone, which can improve speed a lot.
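That changed-records trick generalises beyond FileMaker. As a rough sketch in Python (with invented field names and an assumed primary-key field 'id'), you can fingerprint each incoming row and skip any row whose fingerprint matches what was last imported:

```python
import hashlib

def row_hash(row):
    """Stable fingerprint of one record's field values."""
    joined = "\x1f".join(str(v) for v in row.values())
    return hashlib.sha256(joined.encode("utf-8")).hexdigest()

def changed_rows(incoming, existing_hashes):
    """Keep only rows whose content differs from the last import,
    keyed on an assumed primary-key field 'id'."""
    return [row for row in incoming
            if existing_hashes.get(row["id"]) != row_hash(row)]

# Demo: record 2 was edited since the last import, record 1 was not.
previous = [{"id": 1, "name": "a"}, {"id": 2, "name": "b"}]
hashes = {r["id"]: row_hash(r) for r in previous}
latest = [{"id": 1, "name": "a"}, {"id": 2, "name": "B"}]
print([r["id"] for r in changed_rows(latest, hashes)])  # [2]
```

In a FileMaker context the equivalent would be storing a modification timestamp or hash per record and importing only rows newer than the last run; the sketch above just shows the filtering idea.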
Are you scheduling any server-side scripts to run simultaneously? If you are, stop. A FileMaker system engineer told me the SASE process is single-threaded: despite the ability to run several scheduled scripts at the same time, those simultaneous scripts will be constrained to the resources of a single thread.
Thanks for all the additional comments.
Tried it when the server had no connections (or anything else going on) and it didn't seem to change the speed. The biggest import actually took 45 minutes this time, which is too long.
Not doing any scripts concurrently.
I will break it down into smaller, more regular imports (it is constantly updated data) and see how well it handles that. I feel like it must be the calculation/summary/indexed fields; are they handled differently on the server?
Will run some tests without those fields (and at night) and see what happens.
Something is not quite right here. 45 minutes to import 10,000 records is abnormally long, even server-side. I routinely pull 15,000 - 20,000 records in just a few minutes. One thing you can try is to turn off indexing on fields that don't need it in the receiving table. Updating indexes takes a long time, especially if you're dealing with a delete-and-replace operation (I don't know that you are, but I thought I'd throw that out there).
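The indexing point applies to databases generally, not just FileMaker. As an illustration of the principle (using SQLite as a stand-in, with invented table and column names), the same bulk insert can be run once with the index live and once with the index dropped and rebuilt after the load, which is typically the faster pattern for big imports:

```python
import sqlite3
import time

def bulk_insert(drop_index_first, n=50_000):
    """Load n rows into an indexed table; optionally drop the index
    for the duration of the load and rebuild it afterwards."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE t (id INTEGER, val TEXT)")
    con.execute("CREATE INDEX idx_val ON t (val)")
    rows = [(i, f"value-{i}") for i in range(n)]

    start = time.perf_counter()
    if drop_index_first:
        con.execute("DROP INDEX idx_val")
    con.executemany("INSERT INTO t VALUES (?, ?)", rows)
    if drop_index_first:
        con.execute("CREATE INDEX idx_val ON t (val)")
    con.commit()
    elapsed = time.perf_counter() - start

    count = con.execute("SELECT COUNT(*) FROM t").fetchone()[0]
    con.close()
    return elapsed, count

live, _ = bulk_insert(drop_index_first=False)
rebuilt, _ = bulk_insert(drop_index_first=True)
print(f"index live during load: {live:.3f}s; rebuilt after: {rebuilt:.3f}s")
```

The FileMaker analogue is what's suggested above: set indexing to None in the storage options of fields in the receiving table that don't need it, so the import isn't maintaining indexes row by row.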
Another thing to look at is the width of the table. I mentioned this before, but if you have a lot of fields in the table, it can slow things down. Eliminate what you don't need. You mentioned calculation and summary fields. Summary fields don't update until they're needed (they're inherently unstored), but calculations (stored ones, anyway) will. Also, watch out for a dependency chain, where calculation A depends on the result of calculation B, which depends on the result of calculation C. That forces FileMaker to wait for C to be evaluated before evaluating B, and then again before evaluating A. Rework your calculations to avoid this situation (if you can't get rid of them).
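To make the dependency-chain point concrete, here is a tiny Python sketch (the price/tax/discount fields are invented for illustration): the chained version forces C, then B, then A to be evaluated in strict sequence, while the flattened version produces the same result in one expression.

```python
# Chained: evaluating A forces B, which forces C, in strict sequence.
def calc_c(price):
    return price * 1.10          # hypothetical tax step

def calc_b(price):
    return calc_c(price) - 5     # hypothetical discount step

def calc_a(price):
    return round(calc_b(price), 2)

# Flattened: the same result in one expression, with no cascade of
# intermediate stored calculations to re-evaluate on every import.
def calc_flat(price):
    return round(price * 1.10 - 5, 2)

print(calc_a(100) == calc_flat(100))  # True
```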
And again, check your server specs.
I will go through the table and see what I have there, try without any of the indexes/stored calcs, and also try the import into a fresh table.
As for the server specs, the server is faster than the client with which this comparison started: a Mac mini server with a 2.53 GHz Intel Core 2 Duo and 4 GB of RAM. The database is on the local drive. Can't imagine that is not fast enough?
Seamus Phillips MEng
Yes, a Mac Mini server with a Core 2 Duo and 4 GB meets the minimum specs, but it clearly raises a red flag as a bare minimum for a server. Even when I use a Mac Mini, I use external RAIDs like the Pegasus and very fast drives, and in the last year I have started putting 16 GB of RAM in them and assigning 8 GB of it to FMS. I would consider 8 GB the minimum business-level RAM.
If you're serious about your data, you'll want better hardware for a server. I assume you are also not on a recent version of the OS. All of my Mac Minis run either Lion or Mountain Lion, and we'll be testing Mavericks soon. I never let my servers get more than one OS generation old (granted, I also don't jump on the newest version right away).
Mac Minis have VERY slow drives. Do anything but use a slow 2.5" 5400 rpm Mac Mini drive. Get an SSD and/or a fast external RAID and performance will increase a lot. You would be amazed at how much faster a new Mac Mini Server with an SSD runs compared to what you have now.
Best of luck!
If you are up for trying a simple test, take your 10,000 line csv file and:
1) Create a new FileMaker file with just the bare columns of the CSV file.
2) Upload the empty database to FMServer
3) Import the records and check the time.
4) Re-import the records with matching and mark the time.
That gives you the baseline for measuring performance. If you have calculated fields in the 'live' system that are bogging things down, then you'll be able to easily see it.
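The four steps above can be mocked up outside FileMaker to see what the harness looks like. This Python sketch (using SQLite and an in-memory CSV as stand-ins, with invented column names) times a plain import into a bare table and then a matching re-import that updates existing records:

```python
import csv
import io
import sqlite3
import time

def timed(label, fn):
    """Run fn and print how long it took."""
    start = time.perf_counter()
    fn()
    print(f"{label}: {time.perf_counter() - start:.3f}s")

# Step 1: a bare table with just the CSV's columns (names invented).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")

# A stand-in for the 10,000-line CSV file.
csv_text = "id,name\n" + "\n".join(f"{i},item-{i}" for i in range(10_000))
rows = list(csv.DictReader(io.StringIO(csv_text)))

# Step 3: plain import into the empty table.
def plain_import():
    con.executemany("INSERT INTO items VALUES (:id, :name)", rows)
    con.commit()
timed("plain import", plain_import)

# Step 4: re-import with matching (update existing records, add new ones).
def matching_import():
    con.executemany(
        "INSERT INTO items VALUES (:id, :name) "
        "ON CONFLICT(id) DO UPDATE SET name = excluded.name", rows)
    con.commit()
timed("re-import with matching", matching_import)

print(con.execute("SELECT COUNT(*) FROM items").fetchone()[0])  # 10000
```

The second timing is the interesting one for this thread, since the slow operation being discussed is an import with matching into a live table.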
Also, if you are doing a lot of record deletions, it can help to save a new, compressed copy of the file to remove the blank space and clear out index values for deleted records.
-- Drew Tenenholz
Thanks for everyone's help. Just to report back:
I reduced the CSV file to 1,000 records and it imported in under a minute.
That is working for me now so have left it at that.
I will try some more tests when I have a moment, as I am not convinced it is as simple as that, because the maths doesn't add up. It may well be something else I changed without realising it had an effect.
Now have a problem where the FMSE process seems to crash every now and then... but that's a different problem!
Seamus Phillips MEng