There are no hard and fast rules. Here are some considerations:
1) Bandwidth matters. Is the system to be delivered over an intranet (i.e. Ethernet / WiFi speed) or over the cell network (3G / 4G)? Or is it a local file (runtime or similar)?
2) How is the system to be designed? How "wide" are your tables (i.e., fields per table)? How many records per table? Do you have a lot of unstored calculations? Summary fields? Other aggregate functions? These considerations will affect system performance and may require design attention.
3) What about mobile (iOS)? That represents a far more stringent file size limit than a PC.
4) Are you connecting to external data stores of some sort, using ESS? How about container data? (Storing container data inside the database is a huge contributor to file bloat; external storage is your friend.)
Typically, your database design and the physical environment (network, iOS) represent far larger constraints than the physical limitations of FileMaker. In my experience, you'll likely run into a performance elbow long before you run into a file size issue. I've never personally run into a problem with the physical limitations; I have run into performance issues that required tuning.
One thing you may want to consider is the separation model (one or more files for the data tables, one or more files for interface), which will make design, updates, and recovery far easier.
Now that I’ve just set up the database, it works fine, but I’m afraid I’ll run into issues when the tables start getting populated…
Attached is a screenshot of the database.
The PRODUCT layout in particular is considerably slower to load, even though only a few example records are present.
1. Running on the intranet and WiFi (though the slow loading of PRODUCT is visible even on the host machine); with occasional 3G.
2. Unstored calculations: since there is no data yet, the effects of these can’t be assessed.
3. Mobile: access via FileMaker Go is what I’m thinking of (IWP); the file will never be copied to a mobile device, if that’s what you mean.
4. I agree on external storage: I have set it up that way.
Could you give me some more info (links / referrals) on:
“One thing you may want to consider is the separation model (one or more files for the data tables, one or more files for interface), which will make design, updates, and recovery far easier.”
Especially for updates later on this would be helpful… I had already wondered how to do that…
Thanks a million, Mike !!!
A pity there is no “FileMaker network” in Belgium… I’d surely attend… I’m already hooked on FileMaker, and I’ve just started exploring its possibilities :)
From: Mike_Mitchell firstname.lastname@example.org
Sent: Tuesday, February 26, 2013, 15:42
Subject: Re: setup a database with separate files or with tables under one file : breakpoint?
Your Product table has 428 fields. That's a lot. It suggests that you have a lot of calculation fields, perhaps, in that table? A general rule of thumb is that if you have more than, say, three dozen or so fields in a table, you might want to look at your data model; something is possibly wrong with the way you have your tables arranged.
Here's a brief explanation of what will happen when the Products table goes to load. FileMaker is a client-server architecture, which means the host (the server) delivers data to individual workstations (the clients). FileMaker's model is what's called "record-centric", meaning it's modeled around the record. When a client requests a record from the host, the host will deliver it. All of it - every field (with a few notable exceptions). (In Find Mode, the client will get the records in 25-record blocks. In List or Table view, it'll get enough to display, loading more as the user scrolls.)
What that means, in practice, is a big chunk of memory and processing gets used when those 428 fields load. It also means, when you move to a network arrangement, all 428 fields will have to be shoved down the pipe to the client - and your performance will likely be awful. (If you think it's slow now, wait until you have to contend with the network!) And let's make matters worse, just for fun: If you're using any aggregate functions (Sum, Count, or summary fields), then the client has to load all the fields on every record in the found set! If you can, remove as many of these fields from your database schema (table) as you can; replace them with scripting, Conditional Formatting, Script Triggers, etc.
I can't say, not having seen the field definitions, but you may also be seeing issues related to dependencies. That's what happens when one calculation "depends" on the results of another. For example:
A = B + C
D = A - E
F = D + A - G
If this is what's happening, FileMaker has to calculate the results of A before it can determine D, which it has to have before it can figure out F. This will kill performance. Try to avoid this if you can by writing calculations that limit dependency.
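To make the cost of that dependency chain concrete, here's a rough sketch in Python rather than FileMaker calculation syntax (the function names simply mirror the A/D/F example above). Each calculation is a function that calls the ones it depends on; note that without any caching, A ends up being evaluated twice just to produce F:

```python
# Sketch (not FileMaker syntax): the chained calculations
# A = B + C, D = A - E, F = D + A - G as Python functions.
# Evaluating F forces D and A to be evaluated first, and A is
# evaluated twice unless its result is cached somewhere.

calls = {"A": 0, "D": 0}  # count how often each calc runs

def A(b, c):
    calls["A"] += 1
    return b + c

def D(b, c, e):
    calls["D"] += 1
    return A(b, c) - e  # D depends on A

def F(b, c, e, g):
    # F depends on both D and A, so A runs again inside D.
    return D(b, c, e) + A(b, c) - g

print(F(2, 3, 1, 4))   # (5 - 1) + 5 - 4 = 5
print(calls)           # {'A': 2, 'D': 1} -- A evaluated twice
```

The same multiplication of work happens inside FileMaker when stacked unstored calculations reference each other, which is why flattening the dependency chain helps.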
There are other things you can do in your calculations to help with performance. For example, use the Case statement smartly. It evaluates until it hits a true condition, so put your most likely condition first; that will speed it up.
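Here's a rough Python illustration of that ordering effect (the status values and their frequencies are invented for the example). Like FileMaker's Case, an if/elif-style scan stops at the first true condition, so putting the most common case first cuts the average number of tests per record:

```python
# Sketch: count how many equality tests a Case-style chain performs
# over a batch of records, depending on the order of the conditions.

def comparisons(order, statuses):
    """Total equality tests a first-match scan performs for each record."""
    total = 0
    for s in statuses:
        for candidate in order:
            total += 1
            if s == candidate:
                break
    return total

# Invented distribution: 90% "active", 7% "on_hold", 3% "cancelled".
data = ["active"] * 90 + ["on_hold"] * 7 + ["cancelled"] * 3

print(comparisons(["cancelled", "on_hold", "active"], data))  # 287 tests
print(comparisons(["active", "on_hold", "cancelled"], data))  # 113 tests
```

Same result either way, but the likely-case-first ordering does less than half the work.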
Also, you seem to have a lot of table occurrences. That may be perfectly OK, but be aware that, at a certain point (and that will vary based on the database), you'll hit a performance "elbow". This is due to the caching behavior FileMaker uses when it first opens a system. It evaluates every join on the graph and caches the results; this will slow you down and consume memory if your graph is too crowded. Consider using tools like ExecuteSQL and scripted Finds to remove TOs you don't strictly need. (Another reason for the separation model; you can remove some of the caching burden.)
Speaking of which, the Migration Guide from some time back (http://www.filemaker.com/downloads/pdf/techbrief_fm8_migrtn_found.pdf) has a section on the Separation Model that might help you.
I agree with Mike: the Products layout in production will no doubt bring the DB to a crawl if it is slow with only a few records inserted for testing. Could you attach a PDF of the Products table showing the fields? Maybe that table can be further normalized, spreading the heavy lifting across more than one table.
In my experience the real speed issues are not big things but the sum total of many little things: storing a calc that was unstored and using a script trigger to update it, and so on.
Is it not the case that FM is also ‘found-set’ centric and ‘layout’ centric?
Opening a database to a layout with a summary field on it while viewing 10,000 records will take a lot longer than opening to a layout without that summary field, showing only, say, 10 of the 10,000 records.
It's as much about what your layout designs allow the user to see (ideally, data-entry-only screens) as about the number of fields and calcs in your tables?
Just a thought.
You're correct that the found set will greatly affect things; I said so above ("If you're using any aggregate functions (Sum, Count, or summary fields), then the client has to load all the fields on every record in the found set!"). Of course, if you don't do anything to prevent it, the user can just hit Ctrl-J (or Cmd-J) and find all records on any layout. He will then engage in creative thumb-twiddling for a while.
But how do you prevent the user from "seeing" report layouts? Won't you have to use those to print reports - even if you don't preview them? And won't those have summary fields on them - and won't the records in the table still have the same number of fields in them? Your reports will still crawl if your schema is inefficient, regardless of whether you let the user see the fields on the data entry layout.
You're right; putting a summary field on a data entry layout is something you should do with caution (if at all). However, the presence of the summary field is not exactly a "layout" feature, but rather a schema-level feature. Its speed will depend on what it's summarizing and on how many fields FileMaker has to download on each record.
Fair points all!
Attached is the PDF.
Thanks for your thoughts
From: timwhisenant email@example.com
Sent: Tuesday, February 26, 2013, 23:08
Subject: Re: setup a database with separate files or with tables under one file : breakpoint?
product.pdf 89.2 K
Looks like Tim was right; you have a definite data modeling issue. You have a lot of different field "arrays" - multiple fields like Cost_DocPoc1 - Cost_DocPoc4, Cost_OtherPart1 - Cost_OtherPart10, OtherPart_Description1 - OtherPart_Description10, and so forth. Whenever I see things like this, it indicates a data model that hasn't been fully normalized. Worse, many of these are unstored calculations that are further being added up in other unstored calculations - that dependency issue I mentioned before.
You'll need to split these items out into other tables and use portals to speed this up. Go back and look at your actual data entities. It appears that you have line items associated with your products; each line item should be a separate record in a related table. You can then use a Sum function in the Products table to add up all related values, which will be considerably faster.
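As a rough illustration of that restructuring, here is a sketch in SQL via Python's sqlite3 (the table and field names are invented, not your actual FileMaker schema). Each part becomes its own row in a related line-items table keyed by a product ID, and the total is a simple aggregate over the related rows - the SQL analogue of FileMaker's Sum across a relationship:

```python
# Sketch: repeating-field "arrays" (Cost_OtherPart1..10, etc.) replaced
# by one related line-items table; the product total becomes an aggregate.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE product (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("""CREATE TABLE part_line_item (
                 product_id INTEGER,   -- foreign key to product.id
                 description TEXT,
                 cost REAL)""")

con.execute("INSERT INTO product VALUES (1, 'Widget')")
con.executemany(
    "INSERT INTO part_line_item VALUES (?, ?, ?)",
    [(1, "Bracket", 2.50), (1, "Screw set", 0.75), (1, "Housing", 6.00)],
)

# Equivalent of Sum ( related::cost ) over the related records.
total = con.execute(
    "SELECT SUM(cost) FROM part_line_item WHERE product_id = ?", (1,)
).fetchone()[0]
print(total)  # 9.25
```

Adding an eleventh part is then just another row, not a schema change - which is the practical payoff of normalizing those arrays.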
Mike is right: you have WAY too many fields in your Products table, and you need to break it up into different tables and redo your data model to speed things up. Tables in FM solutions are "cheap," so to speak, but unstored calcs are very "expensive" and will kill your performance. The more tables you have in a solution, the longer it takes to update when changes are made, but other than that it doesn't really slow things down. Unless you have a truly massive solution, 100 tables or fewer is usually sufficient and won't slow you down too much. I have one solution with almost 200 tables across 5 files, and that is the biggest I have ever needed. In general, unless you are going to use data separation, you can put everything into one file without too much trouble.
I always avoided porting my 6-era solutions and just re-created them instead; it seems faster in the end. Good luck!