7 Replies. Latest reply on Oct 28, 2016 12:31 AM by FrankvanderMost

    Filemaker Capacity for Non-Normalized Data

    jeremyb

      My database's main table is updated weekly with an additional 10,000 records, each containing 113 fields.  I then have calculation, summary, and text fields built on that table and on other joined tables.  Currently that main table has about 1.5 million records.  While I would like to normalize my database and generally reconstruct it better, should I be concerned that this volume will soon overwhelm FileMaker?  While no two databases are exactly alike, does anyone have experience with data that would dwarf my database, thereby allowing me to feel more comfortable for the time being?

      Thanks,

      Jeremy

        • 1. Re: Filemaker Capacity for Non-Normalized Data
          gdurniak

          Please provide some more detail

           

          Which version of FileMaker?  What is the file size in GB?

           

          1.5 million records (and 520K more per year) is not unusual

           

          With 113 fields, your table is also not that "wide"

           

          Normalizing is nice, but not always practical (for performance reasons); e.g., storing a related value may give you faster reporting
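
          For example, here is a minimal sketch of that idea, assuming a hypothetical MainTable related to a Customers table (the table and field names are placeholders, not Jeremy's actual schema).  A stored field in MainTable can auto-enter the related value once, so reports find, sort, and summarize on a local, indexable field instead of resolving the relationship for every record:

              // Auto-enter calculation for the stored field MainTable::CustomerRegion
              // Copies the related value when the record is created or the key changes,
              // instead of evaluating the relationship at report time
              Lookup ( Customers::Region )

          The trade-off is the usual denormalization one: if Customers::Region changes later, the stored copies need a relookup, but finds, sorts, and sub-summary reports on that field stay fast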

           

          greg

           

          > Currently that main table has about 1.5 million records

          • 2. Re: Filemaker Capacity for Non-Normalized Data
            jeremyb

             Slightly over 4 GB FileMaker file size, stored on FileMaker Server.  My coworkers and I use FileMaker Pro 15

            • 3. Re: Filemaker Capacity for Non-Normalized Data
              philmodjunk

               Any time you get that many records in a table, you have to treat it with respect. Operations that are reasonably fast with smaller record counts, especially on layouts with summary fields, can bring your system to its knees as it tries to summarize or index massive quantities of data.

               

              But with the appropriate precautions taken, you can manage tables with this much data and much more. You just need to avoid things like showing all records on a layout based on this table if the layout also has one or more summary fields on it.
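
               One common precaution is to script the report so the found set is narrowed before the user ever lands on a layout that carries summary fields.  A rough sketch, using made-up table, field, and layout names rather than Jeremy's actual schema:

                   # Report script sketch: build the found set on a plain data-entry layout,
                   # then switch to the summary-field report layout only after the find
                   Go to Layout [ "Data Entry" (MainTable) ]
                   Enter Find Mode [ Pause: Off ]
                   Set Field [ MainTable::ImportDate ; "10/1/2016...10/31/2016" ]
                   Perform Find []
                   Sort Records [ Restore ; With dialog: Off ]
                   Go to Layout [ "Weekly Report" (MainTable) ]

               That way the summary fields only have to aggregate the few thousand records in the found set instead of all 1.5 million.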

               

               Filtered portals to this table using a Cartesian join are also out of the question.

              • 4. Re: Filemaker Capacity for Non-Normalized Data
                RickWhitelaw

                I would say a table with 113 fields is VERY wide.

                • 5. Re: Filemaker Capacity for Non-Normalized Data
                  gdurniak

                   Just for comparison, we have a file with:

                   265 fields (about half are unstored calcs and summary fields)

                   7.5 million records

                   8.7 GB file size, on FileMaker Server 13

                   

                  It runs quite well

                   

                  greg

                  • 6. Re: Filemaker Capacity for Non-Normalized Data
                    alangodfrey

                     I've heard this before, Rick, and we have many tables wider than that.  Damned if I can see a way to make them any narrower, though.  Every time I read that we should be optimising to narrow tables, I go red and slink to the back of the room.  But we already use as many child tables as I can see being useful: each entity in some of our tables genuinely has (or, to me, 'seems to have') maybe 150+ unique characteristics.  I feel like I have somehow 'Failed' the Database Masonic.

                    • 7. Re: Filemaker Capacity for Non-Normalized Data
                      FrankvanderMost

                       I'm not sure what they mean when they say to narrow your tables (is a wide table seen as a sign of bad normalisation, or is it a capacity issue?), and perhaps you've thought of this yourself, but just in case: if it means fewer stored fields, you could put multiple characteristics in one field separated by some divider and read them back out separately with calculation fields or custom functions. Storing and editing is cumbersome, and when you need indexes on the separate characteristics you're back at square one, of course, because that requires storing them.
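
                       To make that concrete, here is a small sketch (the field names are made up for illustration).  If a text field holds several characteristics as return-separated values, unstored calculation fields can read each one back out:

                           // Stored text field Item::PackedAttributes holds, e.g.:
                           //   "Red¶Large¶Cotton"   (one characteristic per line)

                           // Unstored calculation field Item::Colour:
                           GetValue ( Item::PackedAttributes ; 1 )

                           // Unstored calculation field Item::Size:
                           GetValue ( Item::PackedAttributes ; 2 )

                       But as said, the moment you need to search or index one of those characteristics on its own, you need a stored field for it again, so this only makes the table narrower on paper.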