Any database's performance will suffer as the number of fields increases. This is because FileMaker's record fetching and caching behavior is record-centric, meaning whenever the client requests a record from the server, it gets all fields, whether they're on the layout or not. This also applies to cache refresh, as the server keeps the client updated with changes other users have made to records it's accessed.
There is no "magic number" of fields you can get away with. Performance depends on many factors: network latency (huge), bandwidth, server hardware, client hardware, available memory, etc. But I will tell you this: if you have more than about three dozen fields in a single table, that's usually a red flag that something is broken in your data model.
The iPhone 5 is an older model, and its hardware and available memory are "behind the times", as it were. Using a commercial hosting service (which I often recommend) means dealing with the open WAN of the Internet, which potentially means significant network latency. So you're already "up against it", so to speak. My suggestion to you would be not to think in terms of what you can get away with, but rather in terms of how you can maximally optimize the system for the best user experience.
A quite small 'other' database solution (meaning a database system other than FileMaker) often has 100+ tables (not counting all the utility tables), and those tables have just a couple of fields each. With FileMaker, we often find just a few tables with hundreds of fields. Recently we were asked to consult on an FM solution where one table had more than 4,500 fields. That thing runs quite fine on a local network, but it is awful to maintain and really ugly when the network isn't good. We had to create utility tables for data entry just to keep users able to add data.
I created iOS/FM Go databases/solutions (some of them didn't even have fields), and after iOS 9 came out, they were not fast, really not fast.
Then the iPhone 6s became available here and I bought one. It has an A9 processor (I don't know what else changed), and those devices are fast, really fast, as far as I can compare with FileMaker Go 14. In other words, the device and OS matter as well (it's not only the number of fields; on the older hardware, the interface was much slower, sometimes too slow!). Of course, the way layouts are built also matters.
iOS often means a slower network; it could even be a 3G connection (here, the network changes automatically from location to location, and one has no control. We're often on trains, and the network will change within minutes). Wi-Fi can be slow as well, and FileMaker needs a fast network (i.e., small numbers when doing a ping).
Therefore, wherever one can optimize, it will be necessary when it comes to FM Go!
Mike's advice (about 3 dozen fields) is correct IMHO, depending on the kind of fields as well: formula, unstored calculation, and summary fields will slow things down even more.
Any database's performance will suffer as the number of fields increases. This is because FileMaker's record fetching and caching behavior is record-centric, meaning whenever the client requests a record from the server, it gets all fields, whether they're on the layout or not.
These circumstances, accurately described by Mike, should also be taken into consideration when it comes to database models. Anchor-Buoy might be the standard today for common PC applications.
The Selector-Connector model might have been born out of the need to reduce the data transmitted through the WLAN bottleneck with smart devices such as the iPhone and iPad.
Performance comparisons between fetching data via relationships versus SQL, and, if FileMaker Server is used, which scripts run faster when performed on the server, are some more aspects that come to mind.
Also remember the screen on an iPhone is much smaller, and it would be hard to place even 20 fields on it. Fields must be large enough for the user to read and to enter data. I would hate to enter data into 500 fields from an iPhone.
Very helpful responses!! So I guess I am back to taking a second look at using several small tables instead of the one big one, each of these small tables having 10 to 20 fields. They will 'need' a common date index among them, at least for reporting. That 'need' may bring difficulties.
Can you describe your workflow a bit? There may be another solution.
That is a lot of fields. Any chance of grouping them and moving them into separate tables by group, all linked to a parent table? This would help reduce the network load and processor load.
Today I tested a solution on my good old iPhone 5. It took about 60 sec for a task that is done in 15 sec on an iPad Air 2 and in less than 1 sec on a MacBook. It does the job, but be prepared for some waiting time when dealing with larger quantities of data.
You seem to realise that having a large number of fields in a table will have performance impacts. This is quite true. These performance impacts really show up on the WAN.
You can do the maths on the amount of overhead that you are putting on your system.
If the found set is greater than about 25 records, FileMaker does not load the whole set; it loads the current record plus about 25 records. Assuming about one kilobyte per field, in a table with 1000 fields each record represents one megabyte, so a found set of 25 records represents 25MB. That's a fairly chunky download on 3G, especially if you only want to see a small part of the data.
If you reduce the number of fields in your table so that it contains only mandatory fields, or only the 40 most frequently used fields, and create one-to-one relationships to ten tables each containing ~100 fields, the load is very different. Assuming 40 mandatory fields, each record represents 40KB, and a found set of 25 records represents about one megabyte.
In this situation, when a user moves to a tab that displays data from one of the one-to-one related tables, the only extra data required is a single record. With ten related tables each containing 100 fields, the extra load to retrieve a record from one of these tables is 100KB.
The advantages at the user end are huge. If you're on a poor connection and you mistakenly obtain a large found set you do not have a long wait for the data to load. And, as your users drill down into the data, each extra chunk of data is only 100KB. These chunks of data are only loaded as required and they load very quickly.
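The back-of-the-envelope arithmetic above can be sketched in a few lines. This is only a rough model under the stated assumptions (roughly 1 KB per field on average and a ~25-record fetch window); actual per-field sizes vary widely in practice.

```python
# Rough payload estimate for FileMaker record fetches, using the
# assumptions from the discussion above.

KB_PER_FIELD = 1       # rough average; text/container fields vary widely
RECORDS_FETCHED = 25   # approximate size of FileMaker's initial fetch window

def fetch_size_kb(fields_per_record: int, records: int = RECORDS_FETCHED) -> int:
    """Estimated download size in KB for a record fetch."""
    return fields_per_record * KB_PER_FIELD * records

# One wide table with 1000 fields: ~25 MB for the initial fetch.
wide = fetch_size_kb(1000)              # 25000 KB, i.e. ~25 MB

# Narrow main table with 40 fields: ~1 MB for the same found set.
narrow = fetch_size_kb(40)              # 1000 KB, i.e. ~1 MB

# Drilling into one related table of 100 fields costs a single record.
related = fetch_size_kb(100, records=1) # 100 KB

print(wide, narrow, related)            # 25000 1000 100
```

The point of the sketch is the ratio, not the absolute numbers: narrowing the main table cuts the initial download by a factor of 25, and each related-table drill-down is a small incremental fetch.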
We make several different types of observations in various orchards. In the past I used a db schema on my desktop which had a different table for each type of observation. This worked very well, but people have advised against this approach in FM, since I need to report from these different observation tables in a common report indexed by a 'universal' date field.
If I need small tables, maybe I am back to my desktop schema. I would have one parent table which would have the 'universal' date field for indexing by date all the observation types. All the observation tables would also have as a parent table an orchard table which defines the area in which the observation took place.
Thanks for your interest.
I might add that the parent 'date' table would have a one-to-one relationship with any particular observation table. But every observation table would have the standard many-to-one relationship with the parent orchard table.
It would largely depend on exactly what you’re tracking on these observations. The fact that they’re different types of observations is irrelevant; what matters is the data you’re tracking.
For example, when you make an observation, you’ll likely always record the location, regardless of the type of observation. So it’s counterproductive to have that replicated across many different tables. Similarly, you would have a date / time, person making the observation, etc. These are always the same. What differs is perhaps the specific data being gathered. You might want to record fruit size on one observation, tree height on another, number of lesions from insects, etc.
So what you wind up with is a parent table that records the information common to each observational visit, and then individual line items that record every observation. In other words, for observation #1, I record tree height, fruit size, and lesions. For observation #2, I only record tree height. And so forth. It’s largely similar to the relationship between invoices and line items. Each invoice is a specific item; each line item on the invoice is its own, separate, unique entity, tied to the parent entity.
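As a rough illustration of that parent/line-item structure, here is a minimal sketch in SQL terms (via Python's sqlite3, since FileMaker schemas can't be shown as plain code). All table and field names here are hypothetical, not taken from the poster's solution.

```python
import sqlite3

# Hypothetical sketch of the parent/line-item model described above:
# one observation (visit) record holds the data common to every visit,
# and each measured value becomes its own row, like an invoice line item.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE observation (
    id          INTEGER PRIMARY KEY,
    orchard     TEXT,   -- location, always recorded
    visit_date  TEXT,   -- date/time of the visit
    observer    TEXT    -- person making the observation
);
CREATE TABLE observation_item (
    id             INTEGER PRIMARY KEY,
    observation_id INTEGER REFERENCES observation(id),
    measure        TEXT,  -- e.g. 'tree height', 'fruit size', 'lesions'
    value          REAL
);
""")

# Observation #1 records three measures; another visit might record one.
con.execute("INSERT INTO observation VALUES (1, 'North Block', '2016-05-01', 'Allan')")
con.executemany(
    "INSERT INTO observation_item (observation_id, measure, value) VALUES (?, ?, ?)",
    [(1, 'tree height', 3.2), (1, 'fruit size', 61.0), (1, 'lesions', 4.0)],
)

rows = con.execute(
    "SELECT measure, value FROM observation_item WHERE observation_id = 1"
).fetchall()
print(rows)  # [('tree height', 3.2), ('fruit size', 61.0), ('lesions', 4.0)]
```

The design choice this illustrates: adding a new kind of measurement means adding rows, not adding fields, which is exactly what keeps the tables narrow.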
Different “observation types” is the wrong way to think about it. Think instead of what “entities” you’re tracking. What are the unique “things” about which I’m recording data, and how do they relate to each other?
This goes back to basic normalization techniques. Have you gone through the FileMaker Training Series, especially the chapter on Data Modeling?
Thanks for the insights, Mike. I think the invoice analogy doesn't work here, because the line items on an invoice can be handled by standard invoice 'columns' (fields) such as item#, description, etc., while these observations (that term is misleading, sorry) need fields that vary too much between my 'types'. For example, leaf analysis results need 10 different number fields to hold the values found by the lab, while a leaf observation needs fields to hold various descriptions, and an insect observation needs fields to hold insect counts. I realize that a field could hold one 'value' for one record type and another 'value' for another record type, but there are many cases where I need to use fields in record-specific calculations, and it is just much easier to have at hand a field name which is sufficiently specific. Besides, the tables with over 30 or 50 fields need not be in FM memory all the time; more common are the tables with only 10 or 12 fields, which don't hit performance so heavily.
I have been through the training series, though I'm far from well educated in FM. Thanks for the advice on entities and attributes; I will re-study that. Allan
Sometimes you can set up a fairly general data collection table. I've attached an example.
In this example I've only used one number field and one text field. Typically you need more of each.
The trick for this sort of thing is to provide support for data entry through the user interface.
If you inspect the portals you'll see that they all have object names. The portals filter records based on portal name, to present different sets of records. The buttons pass a parameter to a script to pre-populate a field. The value list filters values based on the category.
View a report and you can see that you can report on these different things.
observations.fmp12.zip 114.6 K
Looks terrific!! I would have to add an Orchard table since all 'observations' (inspections, leaf analysis, harvests, etc) happen within a particular block and one of my critical reports needs to list all the 'observations' that took place in each orchard, sorted by date/time of the 'observation'. Many thanks.
PS: Perhaps 'observations' would be better named 'events'. In any case, the orchard table is a parent table of an event or observation table(s).