4 Replies Latest reply on Jan 23, 2017 10:06 AM by greglane

    maximum records in file

    tmlutas

      I have a file with 4 index fields and 2 data fields to document government features.
      The data fields hold the name of the feature and text describing the feature itself.

      The four indexes cover which government has the feature, a feature UID, the type of feature, and the source.


      I anticipate hitting the upper limit of FileMaker (64 quadrillion records) and needing to move this table to PostgreSQL or MySQL within a few years. What I am not sure of is what is meant by records over the lifetime of the file and how much that is likely to shave off the theoretical limit. I'm assuming that if you add a record, delete it, and then add it again, you have used up two of your limit.


      Would saving an empty clone and importing the table into it reset the lifetime record count so it equals the records currently in the table?


      At what point would it be wise for a table like the one described above to shift out of FileMaker and into a heftier database engine, with FileMaker moving to the role of a front end for accessing that data?
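
      For concreteness, here is a minimal PostgreSQL sketch of the table described above. Every identifier is an assumed placeholder for illustration; none of the names come from the thread.

        -- Four indexed fields plus two data fields, as described above.
        CREATE TABLE gov_feature (
            government_id  text NOT NULL,   -- which government has the feature
            feature_uid    text NOT NULL,   -- feature UID
            feature_type   text NOT NULL,   -- type of feature
            source         text NOT NULL,   -- source
            feature_name   text,            -- data: name of the feature
            feature_desc   text             -- data: text describing the feature
        );
        CREATE INDEX ON gov_feature (government_id);
        CREATE UNIQUE INDEX ON gov_feature (feature_uid);  -- assumes the UID is unique per feature
        CREATE INDEX ON gov_feature (feature_type);
        CREATE INDEX ON gov_feature (source);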

        • 1. Re: maximum records in file
          RickWhitelaw

          How could you possibly reach that limit on a DB of government features?

          • 2. Re: maximum records in file
            taylorsharpe

            What you do with it in FileMaker will determine your performance pain point.  I have seen a database with only 10,000 records and some incredibly slow unstored calculations bring FileMaker to its knees.  I've also used a FileMaker database with 83 million records that had no unstored fields, and searching through it was reasonably fast.  At least your file is narrow, with only 6 fields.  I assume your database will grow by importing data; expect FileMaker imports to be really slow, because FileMaker updates every index for each record it imports.  It does not do the entire import and then re-index like some big database engines do (see the sketch at the end of this reply). 


            Note that FileMaker Server 15, through Actual Technologies, supports External SQL Sources (ESS) for PostgreSQL, which is new to 15.  MySQL has been supported with ESS for years.  You can keep the table in FileMaker for now; if you move to PostgreSQL or MySQL, set up ESS and the table occurrence can be changed to point to the external database.  Very easy to do. 


            FileMaker works well as a front-end UI for ESS databases.  It is not always the fastest and may need some tweaking to perform well, but it sure is easy to set up. 


            Obviously you will want to have a server with lots of RAM and very fast storage access (SSD / RAID). 
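
            To illustrate the load-then-index pattern mentioned above, here is a hedged PostgreSQL sketch. The table, index, and file names carry over from the placeholder schema sketched earlier and are assumptions, not anything from the thread.

              -- Bulk-load pattern: drop indexes, load the data, then re-index once.
              -- (Repeat the DROP/CREATE pair for each of the four indexes.)
              DROP INDEX IF EXISTS gov_feature_government_id_idx;
              -- COPY reads a file on the database server; from psql, \copy
              -- does the same thing client-side.
              COPY gov_feature FROM '/tmp/features.csv' WITH (FORMAT csv, HEADER true);
              CREATE INDEX gov_feature_government_id_idx ON gov_feature (government_id);

            Building an index once over the full table is typically far cheaper than maintaining it row by row during the import, which is the contrast with FileMaker drawn above.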

            • 3. Re: maximum records in file
              greatgrey

              Why does it need to be one single file? When file A gets full, start file B and relate them, unless there's a limit on how many records FM can see.
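
              In a SQL engine, the closest analogue to this file A / file B idea is table partitioning. A minimal PostgreSQL (11+) sketch, reusing the assumed placeholder names from earlier:

                -- Each partition plays the role of a separate "file",
                -- but queries see one logical table.
                CREATE TABLE gov_feature_parts (
                    government_id text NOT NULL,
                    feature_uid   text NOT NULL,
                    feature_type  text NOT NULL,
                    source        text NOT NULL,
                    feature_name  text,
                    feature_desc  text
                ) PARTITION BY HASH (feature_uid);

                CREATE TABLE gov_feature_p0 PARTITION OF gov_feature_parts
                    FOR VALUES WITH (MODULUS 2, REMAINDER 0);
                CREATE TABLE gov_feature_p1 PARTITION OF gov_feature_parts
                    FOR VALUES WITH (MODULUS 2, REMAINDER 1);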

              • 4. Re: maximum records in file
                greglane

                Unless you’re constantly deleting and importing large sets of records, I’d expect you’ll run into the 8 TB per file limit long before you’ll hit the 64 quadrillion record limit.


                If you expect 100 million+ records soon and continued rapid growth, I'd strongly consider segmenting the database as greatgrey suggested (although that comes with UI challenges) or moving to a different database engine sooner rather than later.
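
                A quick back-of-envelope check of the 8 TB point, runnable in PostgreSQL. The ~200 bytes per stored record is an assumption (six text fields plus four index entries); the real figure depends on field sizes.

                  -- Records that fit in an 8 TB file at ~200 bytes each,
                  -- versus the 64 quadrillion lifetime record limit.
                  SELECT (8 * 1024::numeric ^ 4) / 200  AS records_at_8tb,   -- ~4.4e10
                         64 * 1000::numeric ^ 5         AS lifetime_limit;   -- 6.4e16

                Under that assumption, the file-size ceiling arrives roughly a million times sooner than the lifetime record limit.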

                1 of 1 people found this helpful