Your graph should not be a mess because of data normalization...
In general, normalization should be the norm, and you de-normalize when you need to for performance reasons.
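A minimal SQL sketch of that principle (the customer/invoice schema is hypothetical, not from this thread):

    -- Normalized: the customer name lives in exactly one place.
    CREATE TABLE customer (
        customer_id INTEGER PRIMARY KEY,
        name        TEXT NOT NULL
    );
    CREATE TABLE invoice (
        invoice_id    INTEGER PRIMARY KEY,
        customer_id   INTEGER NOT NULL REFERENCES customer(customer_id),
        -- De-normalized copy, added only if finds/reports prove too slow;
        -- it must be refreshed whenever the customer is renamed.
        customer_name TEXT
    );

The copied customer_name makes list views and finds faster at the cost of extra update logic, which is exactly the trade-off being described.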
The answer is simple: Yes, No, Sometimes.
A lot depends on what sort of speed you are after. Higher levels of normalisation, if applied correctly, will lead to savings in terms of data management, but finds across relationships may be slower. De-normalising and indexing may increase write/update times and file size, but will speed up finds.
As a slight aside, I am under the impression indexes do not help sort performance.
FileMaker slowness is often the result of summary fields and complex unstored calcs. These can often be sped up by de-normalising the data or changing the way information is stored. For example, a 'live' stock system may add up all the ins, take away all the outs, and always be correct. You might use this in an environment where the number of ins and outs is small. If the number of ins and outs is large, this will be slow.
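A rough SQL illustration of that stock example (the schema is hypothetical):

    CREATE TABLE stock_movement (
        item_id   INTEGER NOT NULL,
        direction TEXT    NOT NULL CHECK (direction IN ('in', 'out')),
        quantity  INTEGER NOT NULL
    );

    -- 'Live' stock level: always correct, but it re-reads every movement,
    -- so it slows down as the number of ins and outs grows.
    SELECT SUM(CASE WHEN direction = 'in' THEN quantity ELSE -quantity END)
      FROM stock_movement
     WHERE item_id = 42;

    -- De-normalized alternative: keep a stored balance and adjust it as
    -- each movement is written (faster to read, more logic to maintain).
    CREATE TABLE item (
        item_id INTEGER PRIMARY KEY,
        on_hand INTEGER NOT NULL DEFAULT 0
    );
    UPDATE item SET on_hand = on_hand + 5 WHERE item_id = 42;  -- an 'in' of 5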
Regarding your relationship graph, the recommendation on page 16, "Use Table Occurrence Groups to keep context and scope contained", should help you find your way.
I listened to the recorded presentation from Mark Richman and have now read the document. Very interesting!
Regarding data normalization, the document goes one step further on performance, suggesting breaking a table into two so that large fields are isolated in the second, related table.
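A sketch of that idea in SQL terms (the split and the names are illustrative only):

    -- Narrow table: just what list views actually need.
    CREATE TABLE contact (
        contact_id INTEGER PRIMARY KEY,
        name       TEXT,
        email      TEXT
    );
    -- Related table holding the large, rarely-shown fields.
    CREATE TABLE contact_detail (
        contact_id INTEGER PRIMARY KEY REFERENCES contact(contact_id),
        photo      BLOB,
        long_notes TEXT
    );

Because FileMaker downloads the entire record even when only one field is needed, moving the big fields into the related table keeps the main records small and fast to fetch; the heavy data is pulled only when it is actually displayed.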
Hi Scott, there are good answers here! Keep in mind that we have the ability (in FM) to:
search case-insensitively, with begins-with as the default ('joe' will find 'JOE', 'Joella', etc.), and use a greater variety of wildcard symbols
make use of "multi-line" keys (use your favorite search engine - and this forum - to learn about that!) that certainly are NOT 'normalized'.
have basic relationships (on the graph) and use Portal Filtering to further refine the related results and/or make the portal results variable. While these may compare to SQL 'views' (perhaps), they are very different.
allow for non-standard, non-normalized naming of fields/columns, tables, scripts, value lists, layouts, etc. These would surely break in the SQL world.
sometimes it makes more sense to have a "multi-use" table that is very un-normalized, for example a people-attributes table. While an EAV-like structure may serve the same purpose, it's not always as easily used with FM functions (see the sketch after this list).
often, less is more, and FM seems to allow everything you can't "normally" do within SQL.
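Since the "multi-use" point above is easiest to see side by side, here is a rough SQL sketch of the two shapes; all names are made up for illustration:

    -- EAV-like: one row per person per attribute, fully normalized.
    CREATE TABLE person_attribute (
        person_id INTEGER NOT NULL,
        attribute TEXT    NOT NULL,  -- e.g. 'eye_color', 'shoe_size'
        value     TEXT
    );

    -- "Multi-use" flat table: one column per attribute, un-normalized
    -- but much easier to point FM functions, layouts, and finds at.
    CREATE TABLE person_flat (
        person_id INTEGER PRIMARY KEY,
        eye_color TEXT,
        shoe_size TEXT
    );

The EAV form is tidier relationally, but every read becomes a filtered relationship or self-join, which is the "not as easily used" part.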
WHAT DOES optimize FM? (my observations):
limit calculation fields (SQL computes these only in the query, right?) by using auto-enter or scripted entry of data (see the sketch after this list)
limit summary fields (their results may also be achieved by other means)
if these types of fields are absolutely necessary, limit their presence on layouts (to speed up redraws)
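To make the first point above concrete, here is the contrast in SQL terms; the invoice_line table and its fields are hypothetical, made up for illustration:

    CREATE TABLE invoice_line (
        line_id    INTEGER PRIMARY KEY,
        quantity   INTEGER NOT NULL,
        unit_price NUMERIC NOT NULL,
        line_total NUMERIC  -- stored result, the analogue of an FM auto-enter calc
    );

    -- SQL-style: evaluate the calculation only inside the query.
    SELECT quantity * unit_price AS line_total FROM invoice_line;

    -- Auto-enter-style: store the result at write time, so layouts
    -- and finds never have to evaluate the calculation again.
    UPDATE invoice_line SET line_total = quantity * unit_price;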
I work in the SQL world, too, so I do tend to 'normalize' more than many. But due to the ability to do the above, I definitely have some out-of-norm structures.
> As a slight aside, I am under the impression indexes do not help sort performance.
Indexes are not used for sorting!
> I don't recall any strong suggestion toward data normalization.
Not a strong suggestion, but it is on page 13.
Interesting that they chose not to use the word "normalization".
Since FileMaker 7, they do advise against "wide" tables, which makes sense:
FileMaker downloads the entire record to get one field.