
    How many records is too many for FM?


      I have an intermediate import table that was not cleared out for a couple of months, and I discovered it had about 2 million records hanging out. Just for fun, I did a couple of sorts on the table, and they only took about 30 seconds. Not bad, I think. Once the table was loaded, a regular find was just as fast as ExecuteSQL, and ExecuteSQL was pretty quick to start with. I was impressed.
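      For reference, the kind of query I was timing looks something like this (just a sketch; the "ImportStaging" table and "Status" field names here are made up):

          // FileMaker calculation; table and field names are hypothetical
          ExecuteSQL (
              "SELECT COUNT(*) FROM ImportStaging WHERE Status = ?" ;
              "" ; "" ;        // default field and row separators
              "pending"        // bound to the ? placeholder
          )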


      I checked the limits, and FileMaker lists 64 quadrillion total records per table over the lifetime of the file. Is this just a record ID limit? I still have a long way to go before I hit 64,000,000,000,000,000 on the record IDs. At my current rate of roughly a million new records a month, 64 quadrillion IDs would last something like 5 billion years, so I think I will be OK until then. And if there were ever an issue, you would just need a new table, right?
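      Here is the back-of-the-envelope math as a FileMaker calculation (the 12-million-per-year rate is just my table's current pace extrapolated):

          Let ( [
              ~idLimit = 64000000000000000 ;   // 64 quadrillion record IDs per table, per file lifetime
              ~perYear = 12000000              // roughly 1 million new records a month
          ] ;
              ~idLimit / ~perYear              // ≈ 5.3 billion years before the IDs run out
          )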


      In practice, when do you start thinking about archiving old data and pulling it out of the current file? I know it depends on the application, but I am sure there is some experience floating around out there.


      I have been thinking of archiving data more than a year old twice a year and deleting those records from the working file, while still keeping the archive file accessible for audits or other reference if needed, and maybe maintaining one large master archive of everything. A rough sketch of the working-file side follows below. How do other people do this sort of thing?
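      Written out as script steps, the idea is something like this (just a sketch; it assumes an Orders table with an OrderDate field, and that the archive file has already imported this found set before the delete runs):

          # Runs twice a year, e.g. as a FileMaker Server schedule
          Go to Layout [ "Orders" (Orders) ]
          Enter Find Mode [ Pause: Off ]
          Set Field [ Orders::OrderDate ; "<" & GetAsText ( Get ( CurrentDate ) - 365 ) ]
          Perform Find []
          If [ Get ( FoundCount ) > 0 ]
              # The archive file pulls this found set over first (Import Records on its side),
              # then the working file clears it out
              Delete All Records [ With dialog: Off ]
          End If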

        • 1. Re: How many records is too many for FM?

          There are a lot of other things that have to be asked first. How many fields? Are you using unstored calculations? Do you have to do many searches via relationships?


          My biggest table had about 20 fields, all stored, a combination of number, text, and date fields. It had about 83 million records, and finds only took about 3-5 seconds over the LAN.


          But I've brought a database to its knees with only about 1,000 records when it had complicated unstored calcs and complicated relationships.
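          For context, the classic offender is an unstored calculation that reaches through a relationship, something like this (hypothetical table and field names), because the client has to fetch every related record and re-evaluate the calc for each row it displays, sorts, or finds on:

              // Unstored calculation field on Invoices (hypothetical names)
              Sum ( InvoiceLines::Amount )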