
How many records is too many for FM?

Question asked by bigtom on Nov 2, 2015
Latest reply on Nov 2, 2015 by taylorsharpe

I have an intermediate import table that was not cleared out for a couple of months. I discovered that it has about 2 million records hanging out. Just for fun I did a couple of sorts on the table, and they only took about 30 seconds. Not bad, I think. Once the table was loaded, a regular find was just as fast as ExecuteSQL. ExecuteSQL was pretty quick to start with. I was impressed.
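
For reference, the kind of query I was timing looked roughly like this (ImportStaging and Status are made-up placeholder names, not my actual schema):

    // Count of pending rows in the staging table
    ExecuteSQL ( "SELECT COUNT(*) FROM ImportStaging WHERE Status = ?" ; "" ; "" ; "pending" )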


I checked the limits and it is listed as 64 quadrillion total records per table over the lifetime of the file. Is this just a record ID limit? I still have a long way to go before I hit 64,000,000,000,000,000 on the record IDs. At the current rate I might hit it in about 26,000 years or more, so I think I will be okay until then. And if there were ever an issue, you would just need a new table, right?
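
As a sanity check on that "26,000 years or more" guess, here is the back-of-envelope math as a FileMaker calculation (the 12 million per year rate is just my extrapolation from 2 million records piling up over a couple of months):

    // Years to burn through 64 quadrillion record IDs at ~12 million new records per year
    Round ( 64000000000000000 / 12000000 ; 0 )  // ≈ 5.3 billion years

So if anything, 26,000 years is very conservative.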


In practice, when do you start thinking about archiving old data and pulling it out of the current file? I know it depends on the application, but I am sure there is some experience floating around out there.


I have been thinking of archiving data over one year old twice a year and deleting those records from the working file, while still keeping access to the archive file for audits or other reference if needed. Maybe also keeping a large master archive file of everything. How do other people do this sort of thing?
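
For what it is worth, the rough shape of the script I have in mind is below. This is a sketch only: Invoices, CreatedDate, Archive.fmp12, and the "Import From Working File" script are all placeholder names, and the archive-side import itself is only hinted at.

    # Sketch: move records more than a year old into the archive file
    Go to Layout [ "Invoices" ]
    Enter Find Mode [ Pause: Off ]
    # Find everything created more than 365 days ago
    Set Field [ Invoices::CreatedDate ; "<" & Get ( CurrentDate ) - 365 ]
    Perform Find []
    If [ Get ( FoundCount ) > 0 ]
        # A script in the archive file would use Import Records to pull this found set across
        Perform Script [ "Import From Working File" from file: "Archive.fmp12" ]
        # Delete All Records acts on the current found set only, not the whole table
        Delete All Records [ With dialog: Off ]
    End If

The handy part is that Delete All Records works on the found set, so the same find drives both the import and the cleanup.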
