It means the file is corrupted, possibly due to copying an open file from one location to another.
Exactly how are you "copying the file from the server"?
What method are you using to back up these files?
FileMaker files should be closed before you copy them to another location. The one exception is making a copy of an open file with either a scheduled backup in FileMaker Server or "Save a Copy As" for files not hosted by Server.
The backups are created by FileMaker Server 10 Advanced, set to back up every hour and then once a week. It has been working great for 3 years.
I'm now afraid to close the database on the server; if it won't reopen, I'm in trouble.
All of this month's backed-up files have this problem. I can open the last backup file created in August (last month).
Are your backups verified during the backup?
You may need to open the last good copy and use "Save a Copy As" with the clone option to produce an empty file, then import all records from your current file into the backup clone, taking care to also update the next serial value settings on any auto-entered serial number fields. You can import data directly from the hosted file if you have access permissions to export data.
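If you want to script the clone step, a minimal sketch (the file names here are placeholders, and "Save a Copy as" has to be run on a local copy, not a hosted file):

Open File [ "GoodBackup.fp7" ]
Save a Copy as [ "GoodBackup Clone.fp7" ; clone (no records) ] //produces an empty file with all tables, layouts and scripts intact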
I have a last good copy, 8/29/11
Is there any way I can fix the file running now? It seems to be working fine; I just can't copy it, and I'm afraid to stop it.
The box is checked (Verify backup integrity).
I wouldn't trust it. You can try using the scheduler to make a backup copy and run a recover on it, but there's no way to be sure that the recover fixed all problems correctly, or that it found all problems to fix. It's much safer to import the data into the backup clone.
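If you want to script that recover pass on the local copy, there is a Recover File script step; a minimal sketch (the file name is a placeholder):

Recover File [ No dialog ; "HourlyBackup.fp7" ] //writes a "Recovered" copy next to the original; check the recover log before trusting the result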
The database is huge. Can you advise the best way to transfer data from the corrupt database to the clone?
Can I transfer from table to table somehow?
It contains 209 tables and over 200K records.
I hear you. One of my archive files is now well over two Gigs with millions of records.
Much depends on the type of data in your file, how frequently older records are edited by your users, and whether or not you use modification date fields to track such edits.
The basic method is to set up a script that imports data from each table into the matching table in the clone file. The same script can check the maximum value of any serial number field and use that value plus 1 to update the next serial value as well. Since it's scripted, you can put this on a dedicated machine with the clone located on its hard drive and let it import all night.
If your tables contain very "static" data that isn't subject to revision long after it is created (such as sales invoices), you may be able to do the import in stages. Stage one can take several days even for large files with lots of indexed fields. Stage two is a follow-up import that brings in just the records added or modified since the last import, and should take much less time.
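Stage two might go something like this (a sketch; the layout, field, and file names are placeholders, and it assumes each table has an auto-entered modification timestamp field). First, in the source file, isolate the records changed since the last import:

Go to Layout [ "Invoices" (Invoices) ]
Enter Find Mode [ Pause: Off ]
Set Field [ Invoices::ModifiedTimestamp ; "≥ " & $$LastImportTimestamp ]
Perform Find []

Then, in the clone, import just that found set, updating any records that already exist by matching on the serial number field:

Import Records [ No dialog ; "CurrentFile.fp7" ; Update matching ; match on Invoices::InvoiceID ]

When the source file is open, Import Records pulls from its current found set, so only the changed records come across.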
Can you give me a sample of a script you would use to do this?
Thank you ... I really appreciate your help with this.
I guess there is no way to transfer any relationships and scripts I've added this week, huh?
You can import scripts, but the relationships would need to be recreated by hand in the backup. It would be a good idea to try saving a backup of your open file and then running Recover on it before importing things from the recovered file. You might also want to use some utilities to check out your server's hard drive to make sure that it is healthy.
The basic script for importing the data goes something like this:
Import Records [ No dialog ] //specify your current file or the recovered backup copy, select matching tables, and specify matching field names
#if the table has a serial number field, add these steps:
Go to Layout [ specify the layout for the table of newly imported records ]
Sort Records [ Restore ; No dialog ] //sort on the serial number field in descending order
Go to Record/Request/Page [ First ]
Set Next Serial Value [ YourTable::SerialNumberField ; YourTable::SerialNumberField + 1 ]
// now add additional script steps to do the same for every table
Before running the above script, go to a layout for every table and do a Show All Records. This can also be scripted.
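A scripted version of that prep pass might look like this (the layout names are placeholders; run it in the source file so the import sees every record):

Go to Layout [ "Invoices" (Invoices) ]
Show All Records
Go to Layout [ "LineItems" (LineItems) ]
Show All Records
//...and so on for each of the remaining tables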
You can also extract your existing data by exporting it into text files, which can then be imported into a good clone of your backup file. This is even more tedious, but may be your only option here.
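An export pass might go something like this for each table (a sketch; the layout and file names are placeholders):

Go to Layout [ "Invoices" (Invoices) ]
Show All Records
Export Records [ No dialog ; "Invoices.tab" ] //tab-separated text; then import each such file into the matching table in the clone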
Have you checked to make sure a hard drive issue is not corrupting your files?
Looking at the hard drive, it needs to be defragged badly. Could that cause the corruption?
It generally can slow your disk access speeds down, but shouldn't corrupt a file unless something more serious is wrong with your disk.
I took the last good copy and updated it, then transferred the data table by table, just in the nick of time before the server forced a restart.
Thank you for all the information, it helped a great deal.