"Our file is pretty big, and restoring a backup that was made 7 days ago is not an option.
I recommend that backups be made at least once a day and much more often for mission critical systems.
That would take too much time. Our file has around 150-200 TOs."
I'm probably nitpicking here, but TOs really don't have anything to do with how long it will take to import your data into a clone of your backup (and with some data you may not need to start with a clone). You need to import from each data source table--the tables listed in Manage | Database | Tables--not the possibly much more numerous table occurrences shown on the Relationships tab.
The number of data source tables and the number of records stored in them are the largest factors in how much time it takes. (How many fields are indexed is also a large factor here.) Please note that you can script the import process so that you can import from table to table until all data has been imported and all serial number fields have been updated--all with a single mouse click. You can leave such an import script running all night, if necessary, and then install the new copy in the morning.
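Since FileMaker scripts aren't plain text, here is a conceptual sketch in Python of the loop such an import script performs: copy each data source table into the empty clone, then reset the next serial value past the highest imported id. The table names and record shapes are made up for illustration.

```python
# Conceptual sketch of a scripted restore-into-clone. Each iteration models one
# "Import Records" step followed by one "Set Next Serial Value" step.
# Table names and record shapes here are hypothetical.

def restore_into_clone(source_tables, clone_tables, serials):
    """Import every table, then bump each serial counter past the highest id."""
    for name, records in source_tables.items():
        clone_tables[name] = list(records)           # models "Import Records"
        max_id = max((r["id"] for r in records), default=0)
        serials[name] = max_id + 1                   # models "Set Next Serial Value"
    return clone_tables, serials

source = {
    "Invoices": [{"id": 1}, {"id": 2}, {"id": 7}],
    "Customers": [{"id": 3}],
}
clone, serials = restore_into_clone(source, {}, {})
print(serials["Invoices"])  # -> 8, so new invoices continue after the imported ids
```

The serial-number reset is the part that is easy to forget when importing by hand; scripting it per table is what makes the whole restore a one-click operation.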
You have indeed identified one of the advantages of a multi-file system, but it also complicates the design of your system. This is one of those areas where developers don't have any clear-cut consensus as to whether a multi-file system is better than a single-file or split-file (one data file, one interface file) system. I personally prefer a split-file approach, as this makes updating or restoring the interface file simply a matter of replacing one file copy with another.
I use a multi-file approach in the solution that helps run my business. It has pros and cons. One "not so good thing" to keep in mind is the multiple relationship graphs: if you have five files, you have five sets of relationships. Many of these TOs and relationships are duplicated from file to file. This can get complicated to maintain, as data is in effect used across all the files. It's also too easy to create duplicate data scenarios between files without realizing it. I'm often moving tables from one file to another after seeing that it can be simpler one way over the other.

Then there's the upside . . . having modules that can be replaced, but this can be very complicated as well. I'm not sure, "if I had known then what I know now," that I would have chosen the multi-file approach. I believe these days that the Data Separation Model (two files) is likely the best way to go.
We do backups every day. But the error was only discovered 7 days later.
Of course, the import concerns the tables only, not the TOs. But I am not sure if I can guarantee that the import will work flawlessly. I haven't designed the whole solution, and I am afraid that the wrong order of import might mess things up without me noticing it. We do have 71 tables :/
On the other hand, I do not see a problem with importing all tables, as long as I take care of the import order. Right now I'd import all tables with "Auto perform" disabled.
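One way to take the guesswork out of the import order is to write down which tables reference which (via lookups, relationships used in auto-enter calcs, etc.) and topologically sort them, so every table is imported after the tables it depends on. A minimal sketch in Python, with entirely hypothetical table names and dependencies:

```python
# Derive a safe import order from table dependencies using a topological sort.
# The dependency map below is invented for illustration only.
from graphlib import TopologicalSorter

# table -> set of tables it references and should be imported after
depends_on = {
    "LineItems": {"Invoices", "Products"},
    "Invoices": {"Customers"},
    "Products": set(),
    "Customers": set(),
}

order = list(TopologicalSorter(depends_on).static_order())
print(order)  # dependencies always appear before the tables that use them
```

With 71 tables, even a rough dependency map like this is easier to audit than a hand-maintained ordered list, and `TopologicalSorter` will raise an error if the dependencies are circular, which is itself useful to know before importing.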
Is it risky to import from corrupted files? I'd say yes.
A member of the fmforum wrote that with a multi-file solution an error might hit all connected files. But what are the chances of that? It is just a data link.
I do not quite understand. When I want to use an external file, I create a TO in the relationship graph. I repeat this when needed. What do you mean exactly by duplicates? I do not see how this complicates the maintenance.
I really like the idea of having the data separated. Just like when I do some SQL stuff. I think I'll do that. Most of the time, I make changes to the layouts/functions/scripts. So corruption will more likely be in the interface file, rather than in my data files.
"Is it risky to import from corrupted files?"
There is some risk, but if you see trouble like you are reporting, the corruption is unlikely to exist in the actual data stored in your tables. Your imported data will actually be a touch cleaner as the indexes associated with them are rebuilt from scratch during the import process and this sometimes corrects problems. You also have the option to recover your file and then import from the recovered file. That, in fact, was the original purpose for Recover--to enable users to extract their data from a damaged file and import it into a backup clone.
Can all your multi files get corrupted by the same problem? I have seen this happen. It depends on what glitch damaged the file in the first place. Some problems can damage more than one file--if you have a problem with your disk drive, for example, or you have a system crash or force quit--which could affect any file you have open at the time.
Thank you for your advice. I am thankful. And I have another question :)
I discussed the separated model idea with my boss, and he likes it very much. Likely because he will take the beating from the big boss if a crash throws us back a week. So I will do the following:
1. Create a copy of our original database
2. Inside the copy (the new interface file), replace the source of each TO with the original source file.
3. Delete all tables inside the interface file.
I plan to put the interface file on each client machine. Therefore I can use the power of each machine to do all calculations. But do you know if this is really worth the effort?
What calculations does the server perform that can also be performed on a client machine? Example: let's say I am sorting 100,000 records. My interface file resides on the client machine. Who is doing the sorting now? The server or the client?
bye and thank you
Yes, those steps are correct. You may want to rethink some of the relationships in both files. Some TOs which were used for relational calculations in the DATA file may not be needed in the Interface file. But usually almost all are, for navigation if nothing else.
Conversely, at least some of the TOs will no longer be needed in the DATA file, especially ones which were ONLY for filtering value lists. Though, similarly, you may want to keep ones for navigation. I mean, you may not really "need" them, but they're already there, and may be convenient to keep. Be careful not to remove any that are used in DATA calculation fields.
Whether you put the Interface file on the client machines sort of depends on how many users you have. I never have, as I've mostly used Separation of Data on solutions with lots of users. But some people feel it speeds up layouts, especially those with heavy graphic interfaces (which I don't use).
I think (and I'm not an "expert" on this) that FileMaker decides whether the local machine does calculations and operations (stored fields, stored Finds, etc.), while FileMaker Server handles most unstored operations. I'd love to see an extensive list, but likely there are some gray areas. So, since ALL the data is on the server, I don't think the location of the Interface file has much to do with it.
"You may want to rethink some of the relationships"
Well, that would have been my next question. I thought about it, and I'd say that all field calculations (auto-enter and formulas) have to be done in the data file, simply because you can only maintain those fields in the source file. And the source file (data file) only shows tables and TOs from itself, not from the interface file. The rest of my work will be done in the interface file.
So, if I am not mistaken, I cannot do anything wrong if I simply go through my steps without removing any relationships. The solution should work instantly. Right? That's important when I try to sell this idea to my boss.
And the speed issue... well, I can test this after I have taken care of the data corruption scenarios.
Btw, our database got corrupted while I was playing around with functions/scripts/layouts. So if I had had a separate interface file, this would not have happened (unless the corruption spreads, like you mentioned). I could simply replace the file.
This is an annoying issue :/ I think I am going to program some import/export scripts first, before I decide.
Thank you very much.