I would recommend not compacting your database at all. If you have set up your database correctly, this should not be a problem. If the database is getting too big for your hard drive, you just need to buy a bigger one.
Thanks for your input, Johan. What I'm trying to do, rather than gather recommendations, is establish a factual best practice based on the wider knowledge available here. With the size of storage available now, it is very unlikely a FileMaker file will become so large that it cannot be stored.
However, I haven't seen much discussion about compacting, which packs as much data into each block as possible and rebuilds the file structure. Perhaps FileMaker could interject and advise whether this is good practice or not (Steve?). Are there tangible benefits to running it, and if not, why is it still offered as an option? We have clients' systems that have been updated and upgraded for 20 years; I can't believe best practice is simply to leave the files alone forever (the ostrich approach)?
Taking this a little further into what is and isn't good housekeeping practice, there is a good argument for taking a copy of the files and running them through the Recover command to check for hidden problems. If a problem is reported, a decision can then be made as to whether to correct it manually or use the recovered versions of the files.
For instance, when we start a new project based on an existing framework, we always run the cloned files through Recover twice. If any problems are found and fixed on the first pass, the second run should confirm that the database is good to use. If Recover can't be relied on for that, then again, why have the feature?
I appreciate there are those who would recommend 'if there is a problem, rewrite it', but on base systems that have taken 10,000 hours plus to create, this is not practical.
I'd welcome you following up your reply with some of the reasons behind your recommendation not to compact.
Good comments, Andy! I use FMS to verify my databases every night. This gives me a good chance of catching problems before users start working on the solution in the morning, provided I get an error email from FMS in the middle of the night.
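For anyone who prefers scripting this outside the Admin Console's built-in schedules, here is a minimal sketch using FileMaker Server's `fmsadmin` command-line tool. The file names and credentials are hypothetical placeholders, and the script only prints the commands (a dry run) rather than executing them, so you can review them before wiring the script into a nightly cron job.

```shell
#!/bin/sh
# Dry-run sketch of a nightly consistency check with FileMaker Server's
# fmsadmin CLI. File names and credential values are placeholders.
FMS_USER="admin"                           # Admin Console account (placeholder)
FMS_PASS="secret"                          # placeholder -- use a secrets store in practice
DATABASES="Clients.fmp12 Invoices.fmp12"   # hypothetical hosted files

verify_cmds() {
    for db in $DATABASES; do
        # 'fmsadmin verify' closes the file, runs a consistency check, and
        # reopens it if the check passes; -y suppresses the confirm prompt.
        echo "fmsadmin verify \"$db\" -y -u $FMS_USER -p $FMS_PASS"
    done
}
verify_cmds
```

Dropping the `echo` would run the commands for real; scheduling the script from cron in the small hours means the check finishes before users arrive, much like the nightly FMS verification described above.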
Soltant wrote an article about the Recovery tool a little while ago where you can find some info.
There is not much info about proactive use of compaction because it's not a common practice.
I have always used compaction, and recovery as well, as a reaction to signs and symptoms of corruption.
Otherwise I leave my files alone.
We typically do not use compact unless a problem surfaces. The main reason is that we update systems fairly frequently, and when we do, we put a fresh clone up on the server, so compaction is usually not needed.
Thanks to coherentkris and Mike.
Mike, you've raised an interesting point. Virtually all our solutions are separated, so frequent updates can be applied by replacing the UI file, but that leaves the data files untouched other than being closed and reopened.
The files for the healthcare system we installed a couple of years ago have grown to about 1 GB in total, and our customer is expanding rapidly. That system has already outgrown another, unrelated system that has been in use for nearly 20 years and still runs on the pre-FileMaker 7 multi-file structure (because it still does the job required of it).
Again, if Steve or anyone from FMI views this, we'd really appreciate FileMaker's recommendations as to whether files of a certain age benefit from housekeeping or not.