Another title for this would be "data integrity issue"
After working with FM for many years I've yet to have a problem with data loss that I have not caused myself. Having a good backup system in place is a good first step while you determine how your data is being "lost". Cascading deletes come to mind right away, but I don't pretend to understand your solution. Perhaps you could narrow down the circumstances that lead to this data loss. I can pretty much guarantee that FileMaker has no "data integrity issue". I understand how frustrating this can be, but there's a reason it's happening, and people here will be happy to help you.
It is possible to repair the indexes in a FileMaker file, though as Rick points out, this is not the most likely reason for records to go "missing". Indexing issues can keep records from sorting in the correct order or linking correctly to records in other tables, so records can appear to disappear because they no longer show up in a portal, or do not appear where expected in a list or table view.
But if that's all that's gone wrong, you can usually find them if you go to a layout based on that table and do a "show all records".
To rebuild the index of a single field:
- Open Manage | Database | Fields and double click the field
- Use either the storage tab or the storage options button to turn off indexing.
- Exit Manage | Database, then return and turn indexing back on.
You can also rebuild all your file's indexes by importing all the data into an empty copy (clone) of your file.
If you have FileMaker 11 or newer, you can use Advanced Recovery options to rebuild your file's indexes:
- With the file closed, select Recover from the File Menu.
- Select "Use advanced options".
- Select only: "Copy File Blocks as-is" and "Rebuild Field Indexes Now".
- The recovered copy of the file will be identical to the original copy except that it has completely rebuilt indexes.
Thanks for both your responses. Can you clarify "cascading deletes"? My assumption is that you are talking about deleting, say, a record in a portal that has a one-to-many relationship with other records?
I was also interested to know that you would recommend using Recover, Phil, because I have seen you refer to Recover as a rather doubtful process, recommending backups instead. So, I have never even attempted Recover.
I would much prefer to know some "rules of the road." What causes whole chunks of data to go missing? Like 43 records which are all prints of the same original (sharing the same Image_no), or 430 records that are in the same "volume" (bound book of prints). That's why I was asking for a "white paper". The documentation for FileMaker doesn't even mention losing records as a problem.
I can set up an automatic backup of the database, but it won't make my client happy to have to research what data entry has been done since the last backup and redo those transactions.
If I am doing something wrong, I'd like to know how to prevent it from happening.
Another issue this brings up: when I pull up a report and change a sort, how does that impact the original sort I specified when I created the layout? What if I need to change the original sort after creating the report, and I want the new sort to come up every time instead of the one I entered, because I want to track what was going on with a particular field? How do I designate a default sort, one that "abides" with the report layout? Must I script it every time I call a report layout from the program?
Oh, and, what about suggestions for importing records that use "pseudo-logical" fields (numeric Yes/No)? This data is what causes me to have to take so much time to restore records.
Cascading Delete refers to when, if a relationship between Parent and Child is set to delete records in the child, deleting a Parent record deletes all related records in the Child table. This can be useful if used correctly, but disastrous if set in error.
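FileMaker sets this up as a checkbox on the relationship rather than in code, but the behavior is the same idea as SQL's ON DELETE CASCADE. Here is a rough analogy in Python using SQLite; the table and field names are invented for illustration, not taken from your solution:

```python
import sqlite3

# Hypothetical parent/child tables standing in for an Images (parent)
# and Prints (child) relationship. Analogy only -- FileMaker configures
# this via the "delete related records" checkbox, not SQL.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs per-connection
conn.execute("CREATE TABLE images (image_no INTEGER PRIMARY KEY)")
conn.execute("""CREATE TABLE prints (
    print_id INTEGER PRIMARY KEY,
    image_no INTEGER REFERENCES images(image_no) ON DELETE CASCADE)""")
conn.execute("INSERT INTO images VALUES (1)")
conn.executemany("INSERT INTO prints VALUES (?, 1)",
                 [(i,) for i in range(43)])

# Deleting the single parent silently removes all 43 child prints.
conn.execute("DELETE FROM images WHERE image_no = 1")
remaining = conn.execute("SELECT COUNT(*) FROM prints").fetchone()[0]
print(remaining)  # 0
```

The point carries over directly: deleting one parent record silently takes every related child record with it, with no warning beyond the normal delete confirmation.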
I was also interested to know that you would recommend using Recover, Phil
Recover is a very useful tool, but one must understand its strengths and weaknesses in order to use it effectively. Please note that this special use of Recover does not attempt to repair anything but the field indexes. It leaves the rest of your file's design unchanged and is thus a pretty safe use of Recover.
I think Rick would agree with me that "chunks of data" very rarely "go missing" in a FileMaker database. If records truly are missing from the file, the most likely causes are (in order from most likely to least likely):
a) A user or script action edited the data to the point that it is no longer recognizable.
b) A user or script action deleted the record directly
c) A user or script action deleted the record indirectly through a cascading delete. This is when you select the "delete" option in a relationship between two tables such that deleting the parent record in table A automatically deletes all related records in Table B. Access does indeed have the same feature.
d) The file is damaged in some way. A damaged file behaves unpredictably and rendering blocks of data inaccessible is definitely a possible result of file damage.
But records can appear to go missing for other reasons:
a) records are not accessible in current found set or a portal's related set of records due to a find or relationship omitting them from that set of records.
b) A portal filter may omit them from a portal in an unexpected manner
c) other layout design issues might conceal or radically change how data from a given record is displayed
d) a damaged index can keep Finds from finding, sorts from sorting and relationships from matching records as expected.
What do you think of this article?
Steven Blackwell is one of THE premier authorities on all things FileMaker. Anything that he writes about FileMaker and database design should be given careful attention by serious developers. But: 1) this is an old article, and FileMaker and regulatory environments continually change; 2) much of this article has little to do with "data going missing" as you describe it. The key point he makes that is worth considering is this: if your database has multiple users whose access privileges enable them to greatly modify and/or delete records, some kind of tracking system that records who modified or deleted a record, and why, becomes a very valuable addition to your database design. There are a number of third-party tools and examples of such that you can research.
One such change is to use FileMaker Advanced to remove the option to delete records and replace it with a menu option that "marks" the record for deletion. Scripts, portal filters and relationships are then set up to automatically omit such "deleted" records from view. Then, when a user comes to you hat in hand and admits to deleting a record that should not have been deleted, you can smile and "restore" the record by finding it and "unmarking" it.
I think the problem may be cascading deletes. I shall have to go through all the delete-related-records settings. I am the only one operating this system presently, and I am darned sure I don't have any scripts that "delete all" or "delete while" (were that possible), and that I haven't deleted entire found sets maliciously. Nor could I have made a mistake as frequently as I have been missing records. But I have been testing new code, and I have been removing records that were obviously messed up. One area that particularly concerns me is that there is a connection between the large table that I store print records in and another table that I store images in. There are only about 800 images for 5600 prints. So it would seem that if I delete a record with an image, I might possibly delete all the records in prints that constitute that one-to-many relationship? Will this hold true even if I am not working with the records through a portal?
The idea of marking records for deletion but not actually deleting them sounds like a smart option too -- depending on how much of a workaround it is. Do you mean creating a field that would track a deletion? is it a feature? How does it work? Do I have to go into all my layouts and force these records not to show in reports, etc.? If this is a feature, the records marked for deletion would probably give me a decent clue about where the problem is coming from also.
Cascading deletes will take place if you have set that option up in Manage | Database | Relationships. You'll get the same effect whether you used delete portal row to delete the record in a portal or used delete record to delete the record in a table based on the portal's table.
How do I shut down cascading deletes other than through manage database, modifying the relationship? And how to I set up the database so that it merely marks records for deletion rather than deleting them?
Why would you need to use a different method to shut it down? It's simply a matter of clearing a check box option there to eliminate cascading deletes.
If you want to keep the delete option specified but delete a parent record without deleting the related child records, that calls into question whether the option should be set up in the first place. But if you modify the match field values so that the records are no longer related, you can then delete the parent record without also deleting the related child records.
Marking a record as "deleted" is simply a matter of setting a field to a specified value. You might have a number field named "Deleted" that you set to 1 to mark the record as deleted. The trick is to then hide the record from the user by making sure that finds, relationship matches, portal filters and the like always omit records with a 1 in that field from the current found set or portal.
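If it helps to see the soft-delete idea outside of FileMaker, here is a minimal sketch in Python with SQLite; the table and field names are assumptions for illustration:

```python
import sqlite3

# Minimal soft-delete sketch: a numeric "deleted" flag field.
# Table and field names are invented for this example.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE prints (
    print_id INTEGER PRIMARY KEY,
    deleted INTEGER DEFAULT 0)""")
conn.executemany("INSERT INTO prints (print_id) VALUES (?)",
                 [(i,) for i in range(5)])

# "Delete" a record by flagging it instead of removing the row.
conn.execute("UPDATE prints SET deleted = 1 WHERE print_id = 2")

# Every find/report filters on the flag, so the record vanishes from view...
visible = conn.execute(
    "SELECT COUNT(*) FROM prints WHERE deleted = 0").fetchone()[0]
print(visible)  # 4

# ...but can be "restored" later simply by clearing the flag.
conn.execute("UPDATE prints SET deleted = 0 WHERE print_id = 2")
```

In FileMaker the filtering step is the work: each find, relationship or portal filter has to omit records where the flag is set, which is why the approach is usually paired with scripted finds rather than manual ones.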
I am sorry, I thought both of these options might be global. Sounds like the idea of marking records is impractical. By my reckoning I would then have to go through each and every layout and add something so records marked as deleted would hide, and how the hell can I set the field equal to, say, 1 for "Deleted" if I am deleting records without intending to?
Ah well. Even more reason to have some sort of warning that FileMaker will drop records unexpectedly if you delete a parent in a parent-child relationship with deletion allowed. Now, with the vast majority of the database built, I have to "make like a porcupine!" In case you don't follow the reference, I'm referring to the question of how porcupines get it on. The answer: "very carefully."
But at least I have a clue. So, tell me: if I am modifying a join between parent and child, and I want to be able to delete records on one side but not on the other, and I check the box that allows deleting a parent record but do not allow deletions on the child, how does that work? Generally the records I am deleting in this "cleanup" are "prints" (child records), but I might on occasion want to dump a parent record as well. And I really don't want any of this going on once I turn the database over to the client. If I have an instance of a table view without fields from another table, must I go into the relationships and separate the child before I can delete a parent?
Also, do you have any ideas on how to import faux-logicals?
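For the faux-logical import question, one common approach is to normalize the Yes/No text to numbers before (or during) import, so the field lands as a clean 1/0 numeric. A minimal Python sketch, with a made-up field name standing in for whatever your source data uses:

```python
import csv
import io

# Hypothetical CSV export with a Yes/No text field named "framed".
# The real field and file names would come from your own data.
raw = "print_id,framed\n1,Yes\n2,No\n3,YES\n"

rows = []
for row in csv.DictReader(io.StringIO(raw)):
    # Normalize any casing/whitespace of "Yes" to 1, everything else to 0.
    row["framed"] = 1 if row["framed"].strip().lower() == "yes" else 0
    rows.append(row)

print(rows[0]["framed"], rows[1]["framed"])  # 1 0
```

The same mapping can also be done inside FileMaker with an auto-enter calculation or a post-import Replace Field Contents, but cleaning the source file first keeps the import itself simple.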