Inherited database... Improving performance
I am attempting to improve the performance of an inherited database (FMP v11/FS11) used to track hospital patients. The previous DBA built a multi-file system containing several "data only" files plus a primary file that holds both interface and data elements (the auxiliary files contain insurance info, pictures, etc.).
There are several large tables in the main file containing normalized aspects of patient care (consults, implanted patients, etc.), with other entities branching from them (such as a mailing-reminder system and several tables built to handle research studies).
It seems the previous DBA started branching additional tables off the original table in the main file (essentially a spider web) to accommodate departmental needs. He then apparently discovered table occurrence groups and used them for temporary data tables (e.g., for new studies). However, as the web expanded, many TOs were added just to provide context for these branches.
It is my understanding that "archived" data sitting unused within the file system should have no effect on performance. So can I assume that relic TOGs will not affect performance as long as no scripts reference them?
However, several massive tables are in constant use by secretaries, transcriptionists, and nurses who browse patient data for everyday tasks (five or six tables hold over 10,000 records apiece, all in the main file that also contains all the interfaces).
Here are my approaches to improving performance (each nested with some questions, of course):
1. Create a clone and transfer data: I have read that importing the data into a clone will reset the indexes FileMaker uses. Is this true? Also, what precautions should I take before transferring the data and launching the new file on the server? This seems worthwhile, given that the database had not been serviced at all for the six months before I inherited it.
2. Reduce unstored calculations: The previous DBA cluttered the database with many unstored calculations. I am systematically replacing them with scripts that write a value into a regular field, which can then be indexed (where indexing is needed). Is this a better alternative? Are there other options?
3. "Heavy users": Several employees must conduct MANY searches on a daily basis within the data tables that contain 10,000+ patients. Should I try to isolate them in a separate file?
4. Interface files: Could an interface file for each user improve overall performance? After all, these files would still be calling back to the several massive tables that already exist.
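For what it's worth, the principle behind point 2 can be illustrated outside FileMaker with a rough SQLite analogy (hypothetical schema, not the actual patient tables): an unstored calculation behaves like a WHERE clause that must compute an expression for every row at search time, while a script-set stored field behaves like a precomputed column that can carry an index.

```python
import sqlite3

# Hypothetical, simplified analogy in SQLite (not FileMaker itself).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (id INTEGER PRIMARY KEY, last TEXT, first TEXT)")
conn.executemany(
    "INSERT INTO patients (last, first) VALUES (?, ?)",
    [("Smith", "Ann"), ("Jones", "Bo")] * 5000,
)

# "Unstored calculation" analogue: the combined value is computed for
# every row on every search, so no index can be used.
slow = conn.execute(
    "SELECT COUNT(*) FROM patients WHERE last || ', ' || first = 'Smith, Ann'"
).fetchone()[0]

# "Script-set stored field" analogue: compute once, store, and index.
conn.execute("ALTER TABLE patients ADD COLUMN full_name TEXT")
conn.execute("UPDATE patients SET full_name = last || ', ' || first")
conn.execute("CREATE INDEX idx_full ON patients(full_name)")
fast = conn.execute(
    "SELECT COUNT(*) FROM patients WHERE full_name = 'Smith, Ann'"
).fetchone()[0]

# Both searches find the same records; only the indexed one scales well.
assert slow == fast == 5000
```

The trade-off is the same in either system: the stored value must be refreshed (by the script, or a trigger) whenever its source fields change, in exchange for indexed searches.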
Sorry for the long post, but I would love to hear some input before I decide on a course of action…