Oh, I wish it were that simple. I've seen files that weren't even that big, but were so complicated that 10 users bogged them down. There are other databases that can literally have 1,000 concurrent users on the same database and work fine. FileMaker used to cap the limit at 250 users, but removed that limit on Advanced Server with the caveat that it really depends on the hardware, network, and database design. Personally, when I see more than even 100 concurrent users, I like to see if it is possible to split the database and put part of it on another FileMaker Server. When you get to 250 concurrent users, that means lots of optimization and testing. But if I am really looking at 250 active concurrent users on a FileMaker database and know the numbers will grow, I would probably start looking for a SQL solution. If it is a simple database, you can go with larger numbers, but my personal feeling is that somewhere in the 250+ range I would start looking elsewhere. Then again, I'm hoping FileMaker 12 will be even more robust, maybe 64-bit, and better able to handle larger numbers of concurrent users. FileMaker can handle almost any size of database well... it is only when you get large numbers of concurrent users that it weakens compared to some of the enterprise SQL iron. And there is some additional admin functionality in the big SQL databases that would be nice to have in FileMaker.
FileMaker 11's sweet spot is in the 10-200 concurrent user range in my opinion.
When I asked this question, I fully expected to get the "it depends" answer, and I realize it actually is the correct answer. But it's not a useful answer. I don't have experience with large FileMaker databases, and I was hoping someone with experience would give me their opinion. You did just that, and I thank you. 10...200 users as the sweet spot is useful and aligns with my thoughts.
My other thoughts are:
(assuming a typical FileMaker/database structure)
I believe most FileMaker databases are for companies of 40 or fewer users.
File size: 5 GB for any single file.
Beyond that, importing becomes too slow.
Table count: 25 per file
Relationship graph gets too large and/or complex (I use Squid/Anchor-Buoy). (See below for my naming conventions.)
Layouts: Doesn't matter
They can be grouped and searched on.
Scripts: Doesn't matter
They can be grouped and searched on.
Server power: Doesn't seem to matter
In the past, I've wanted more speed, so I added a more powerful server. The users experienced no noticeable speed increase. I could more easily speed up the slow functions of the database through optimization, or by creating a workaround that avoids standard FileMaker practices. Nowadays FileMaker and basic computers are fast enough that almost any server works for the typical 20...40 user database.
Those are my opinions.
Relationship naming convention:
_Accounts | Contacts | Phone Numbers_Primary
All tables have a squid head in the form of "_NameOfTable". (This puts them at the top of the pull-down list.)
TOs are named after their tables, and table names are human-readable.
A pipe separates the TOs. (This keeps everything grouped and aligned in the pull-down list.)
The relationship is assumed. (The Contact is related to the Account by the Account Number.)
If the relationship is unusual, it is appended to the TO name with an underscore.
Field names are human-readable. They are usually in the form Group_FieldName. I rarely find the need to embed other information in a field name (e.g., type).
Layout names are similar to field names.
I name my scripts with a human readable description (sometimes quite a long one) and keep them in a folder named after the table they generally affect.
I have found no need to make it more complex than that.
Most of my clients have between 10 and 30 users, but I don't know if that is typical. I think most larger installations are likely to use in-house developers and have less need of a small developer like myself. I know that in my scientific databases I use hundreds of tables and don't seem to have a problem. Most of these have lots of tables with lots of records, but few users, and that seems to work fairly well if you have it on a good server machine.
I love FileMaker scripting; it gives non-programmers the ability to do many things that would usually need a programmer. The drawback is that FileMaker scripts are not compiled and often don't run all that fast.
Imports can be really slow because FileMaker goes through and indexes everything, performs stored calculations, and gets it all ready for use before giving control back. I usually just assign these slow tasks to a server-side schedule to import in the background. You can also set up a system where users submit batch jobs to a table, and a server schedule checks for any scripts that need to run and starts them automatically, without the end user having to know how to use the Admin Console.
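The batch-table idea described above is essentially a polling job queue. As a rough illustration of the shape of that pattern (this is a hypothetical Python sketch against a plain SQLite table, not FileMaker script syntax; all names are invented):

```python
import sqlite3

# Hypothetical "batch" table standing in for a FileMaker job table.
# End users just insert rows; a scheduled server task polls and runs them.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE jobs (
    id INTEGER PRIMARY KEY,
    script TEXT,
    status TEXT DEFAULT 'pending')""")

def submit(script_name):
    """What the end user's layout does: simply add a row to the queue."""
    conn.execute("INSERT INTO jobs (script) VALUES (?)", (script_name,))

def run_pending_jobs():
    """What the server-side schedule does on each poll."""
    rows = conn.execute(
        "SELECT id, script FROM jobs WHERE status = 'pending'").fetchall()
    for job_id, script in rows:
        # Placeholder for the real work (e.g. a slow background import).
        print(f"running {script}")
        conn.execute("UPDATE jobs SET status = 'done' WHERE id = ?", (job_id,))

submit("Nightly Import")
run_pending_jobs()  # the schedule would call this every few minutes
```

The point of the design is that the slow work never runs on the user's client: the user's only cost is one quick record creation, and the server picks the job up on its own clock.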
I went from spiders to Anchor-Buoy and am kind of going back now. If I have a layout with a performance issue, I go back to Anchor-Buoy, but I'm not doing it so much anymore.
It is interesting that you don't seem to see much difference in server performance. Could it be that your limiting factor is the network, and that is why you do not see improved performance with a better server? Also, individual scripts won't improve much with multi-core processors, because no single script can thread across multiple cores. But if you have a bunch of users, each user's process can go to another core, so a big server with lots of RAM and many cores will handle more users much better. You will only see this type of improvement, though, when you have a lot of users.
If you want to get some improvements, you can sometimes do so by making SQL calls to FileMaker through plugins like MMQuery. I've been testing a Mac Mini Server with an i7 processor and Pegasus Thunderbolt R4 drives that are proving about 20% faster than a Mac Pro with an Apple RAID card.
Organizing and commenting scripts is always important if you are a good developer!
Hi Kyle and Tailor,
I read through your little discussion here. We are an independent FileMaker developer house with 8 developers. Our solutions range from <10-user solutions to >100-user solutions. But architecture is much more important than the number of users.
First a little explanation:
If you want to display very complex information, or to perform complex data manipulation in a very deep structure with, let's say, a MySQL solution, you will need to write a routine that gathers the information and then delivers the presentation.
Going to a layout and a record in FileMaker is actually equivalent to the above-mentioned process.
When something in FileMaker turns out to be too slow (the user has to wait, or the procedure makes FileMaker slow for all the users), it is because we as developers have not used the right method to handle the specific task.
Often that is because it worked fine for the developer while building the solution, with only limited amounts of data and only very few users. And testing with the customer usually involves 1-5 concurrent testers, not 130-170 users working hard at the same time.
After defining the requirements of your customer's business, the business rules, and the work processes, you will set up the data architecture of the solution: not only the ERD, but also descriptions of the routines/functionality. Now you should try to isolate potential performance problems and come up with solutions.
The developer should keep evaluating the solution and the individual layouts and processes (filemaker scripts).
It is our experience that in larger solutions used by 50+ concurrent users, there will always be some performance problems showing up after deployment. You need to understand the solution and be able to isolate and optimize the relevant views and scripts.
Why not solve everything before deployment?
It is our experience that many places where we expect performance problems turn out to work very fast. FileMaker Pro and FileMaker Server 11 do a very good job of "choosing" where and how to perform procedures quickly and efficiently. That is why we do not optimize every tiny bit of our solutions. In most cases it is OK to do the job in the simplest and most transparent way, using the FileMaker relational model and calculation engine in the basic and direct way.
It is better to concentrate on solving and optimizing the few issues that turn out to cause problems.
But still: do build your solution simple and easy to work with. As it stands now, stick to a method like Anchor-Buoy and only use spider diagrams for the fundamental calculation graph used in your data file*.
Keep it Simple, Sxxxxx (forgive me please), and do not tempt yourself to keep adding TOs, or to use graphs more complex than needed by adding and adding to the same TOG.
The devil is hiding behind the words stored/unstored. When searching, sorting, and displaying stored data, FileMaker is very, very fast. And it is not FileMaker's fault if you decide to leave your primary data unstored for use in your procedures.
Some data is by nature unstored, if it is calculated through relationships or in other ways that force an update and recalculation on the fly. The same, of course, goes for some summary fields based on calculated/related values.
But it can always be solved. Here are two ways of turning unstored values in fields into stored ones:
- Let the server set stored values in extra fields during the night.
- Let a script trigger set the stored value when any of the values behind it is changed by a user.
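Both approaches amount to caching a derived value in a stored (and therefore indexable) field. A hypothetical Python sketch of the script-trigger variant, with all names invented for illustration (this models the idea, not FileMaker's trigger mechanism):

```python
# Idea: whenever a source value changes, immediately recompute and
# store the derived total, so searches and sorts never have to
# evaluate an unstored calculation across all records.

invoice = {"line_items": [120.0, 35.5], "total_stored": None}

def recompute_total(record):
    """The 'trigger' body: write the derived value into a stored field."""
    record["total_stored"] = sum(record["line_items"])

def set_line_item(record, index, value):
    """Simulates a user editing a field; the trigger fires afterwards."""
    record["line_items"][index] = value
    recompute_total(record)  # like a script trigger on field modification

recompute_total(invoice)          # initial fill (the "nightly" job would do this in bulk)
set_line_item(invoice, 0, 150.0)  # user edit keeps the cache current
print(invoice["total_stored"])    # 185.5
```

The trade-off is classic cache maintenance: a little extra write work on every edit (or a nightly batch) in exchange for fast, index-backed reads.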
Problems with users getting bad performance when using the solution over WAN/Internet can sometimes be solved by improving your FileMaker solution. But in many cases you will need a solution like Terminal Server. This is not a FileMaker problem; it is a general issue.
And the answer is:
Done the right way, you can let more than 250 users into a very large solution at the same time.
*Yes, I do assume that you are dividing your solution into at least one UI file and one Data file (or more if relevant).
Going through my answer once again, I must admit that I find what I wrote to be very fluffy. But if you can come up with more specific scenarios, then I and others will probably be able to give you better and more precise answers.
I have no specific issue that I am trying to solve. I just wanted to hear the experiences and opinions of others.
Your statement about stored vs. unstored is probably the root cause of most speed issues. My rule of thumb is: "If it CAN be indexed, it won't be a problem."
That statement brings up a question I always wanted to test but never have: does indexing various field types add significantly to the file size? A number field, for example. Or maybe a text field that is populated by a value list of only 5 different single-word values.
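One way to reason about that question without testing it in FileMaker: a value index only has to store each distinct key once, plus the record IDs filed under it. A hypothetical Python sketch of that model (a generic inverted index, not FileMaker's actual on-disk format):

```python
from collections import defaultdict

def build_index(values):
    """Generic model of a value index: distinct key -> list of record IDs."""
    index = defaultdict(list)
    for record_id, value in enumerate(values):
        index[value].append(record_id)
    return index

# A text field fed by a value list of only 5 single-word values:
# even with 100,000 records, the index holds just 5 distinct keys,
# so the growth is dominated by the record-ID lists, not the keys.
statuses = ["open", "closed", "pending", "hold", "void"] * 20_000
idx = build_index(statuses)
print(len(idx))                           # 5 distinct keys
print(sum(len(v) for v in idx.values()))  # 100000 record-ID entries
```

Under this model, a low-cardinality field's index cost is roughly one record-ID entry per record regardless of field type, while a high-cardinality text field also pays for every distinct key, which is where file size would grow fastest. Whether FileMaker's real numbers match would still need the test you describe.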
Carsten... no worries, I thought your comments were good. Basically, it can be summed up as: bad database architecture makes for a poorly performing database, no matter how many users. Still, FileMaker's performance with simultaneous transactions is limited compared to what some of the big SQL engines can do. I've had clients hire me not to change any functionality, but simply to improve performance. Those are always fun challenges, and rewarding when success yields significant speed improvements. I find lots of in-house developers just throw things together for a quick result with little consideration for performance, and when things grow, performance becomes a big issue. So I get hired to come do the cleanup work. I have to admit, cleanup can often take more time than doing it right the first time. Oh well, it is good for my business <grin>.
I of course agree with you, Kyle, when you write that "stored vs. unstored is probably the root cause of most speed issues" and that your rule of thumb is "If it CAN be indexed, it won't be a problem."
And while this is crucial, it is also usually possible to make the data indexed (stored).
The way to do it must be decided case by case.
And still, also remember that in most cases "unstored" (and thereby unindexed) data is OK and works just as fast as we need.
Oh ho ho ho, FileMaker can handle it!