Does this problem seem specific to a particular file?
If so, try running Recover on that file, then put the recovered file up for testing purposes and see whether this fixes the issue.
Things to keep in mind about Recover:
While Recover almost always detects and fully corrects any problems with your file, keep the following in mind:
- The recovered copy may behave differently even if Recover reports "no problems found".
- Recover does not detect all problems.
- Recover doesn't always fix all problems correctly.
- Best practice is to never put a recovered copy back into regular use or development. Instead, replace the damaged file with an undamaged backup copy if at all possible. You may have to save a clone of the backup copy and import all data from your recovered copy to get a working copy with the most up-to-date information possible.
And here's a knowledgebase article that you may find useful: What to do when your file is corrupt (KB5421).
Were you able to solve your problem? Two months ago we moved our application from FM11 Server on a Mac mini to FM12 Server on an IBM server with a Xeon E5, running a virtualized Windows Server 2008 x64 (6.0.6002, Service Pack 2) with 2 cores and 16 GB RAM. Since then we have had a crash about every two weeks, losing up to an hour of work each time (10 to 20 clients).
In such cases, the server becomes very slow. Clients who close or force quit FileMaker and connect again have two entries in the clients list on the console. Disconnecting them from the console normally works, but closing the databases usually never finishes anymore. Today we tried to stop the server and it seemed OK, but after a restart it still had to verify the files.
After the restart we found a new server crash dump (*.dmp) from half an hour earlier. What is this file good for? Who can read and analyze it?
Thanks for any suggestions!
No, we have not solved it and still suffer from it on a regular basis. Our server is handling Pro, Go and IWP clients, and crashes often come at busy times but can also occur when only one or two users are connected.
Our databases utilise container fields quite heavily for uploading images via FMGo and viewing those images via IWP, I am not sure if this is a factor or not.
I would be interested to hear what types of clients you have (Pro, Go, IWP, CWP etc) and if container usage is a significant part of your solution to compare.
Right now I have a REALLY ugly Windows Task Scheduler task which runs a script every minute that looks for a *.dmp file in the FileMaker Server\Logs\ folder. If it finds one, it moves it to a subfolder, kills fmserver.exe, then does a net stop and net start on FileMaker Server. This gets the database back up and running again in about a minute. I can post it if it would be useful to you.
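A minimal sketch of that watchdog logic in Python (the log path, service name "FileMaker Server", and process name fmserver.exe are assumptions; adjust them for your install):

```python
import shutil
import subprocess
from pathlib import Path

# Assumed paths and names -- adjust for your installation.
LOG_DIR = Path(r"C:\Program Files\FileMaker\FileMaker Server\Logs")
DUMP_DIR = LOG_DIR / "old_dumps"
SERVICE = "FileMaker Server"  # assumed Windows service name


def check_and_restart(log_dir=LOG_DIR, dump_dir=DUMP_DIR, run=subprocess.run):
    """If any *.dmp file is found, archive it and restart the server.

    Returns the list of dump files that were found (empty if none).
    """
    dumps = sorted(log_dir.glob("*.dmp"))
    if not dumps:
        return []
    # Move the dump(s) aside so the next run doesn't trigger again.
    dump_dir.mkdir(exist_ok=True)
    for dmp in dumps:
        shutil.move(str(dmp), str(dump_dir / dmp.name))
    # Kill the hung process, then cycle the service.
    run(["taskkill", "/F", "/IM", "fmserver.exe"])
    run(["net", "stop", SERVICE])
    run(["net", "start", SERVICE])
    return dumps


if __name__ == "__main__":
    check_and_restart()
```

Scheduled every minute via Task Scheduler, this mirrors the batch-file approach described above; the `run` parameter exists only so the restart commands can be stubbed out while testing.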
Like I said though, it is really ugly, and the fact that you had the crash in the first place means your files could be damaged. So, as a matter of course, we check them with the Recover command regularly and, if any errors are found, revert to an earlier version (and import records from the damaged file). Generally the recoveries are clean, but it still makes me uneasy.
Regarding the content of the DMP files: they are standard Windows memory dump files created at the point of crash. You can read them with standard Windows debugging tools, but I think only FileMaker support would be able to draw any conclusions from the content as to the cause of the issue.
Thanks a lot for your answer. For our solution we have only Pro clients and no container fields at all.
Did you ever contact support to have the *.dmp file analyzed?
After a restart, FM doesn't verify the files? That's what takes most of the time here. Either verifying the files or restoring them from the backup takes at least 10 minutes, and I once had a case where verifying a single file took more than two hours. So I started always going back to the backups. With the new system I have now noticed that it takes much less time.
What our clients experience when a crash occurs is very high response times for a while. As far as I have seen so far, server-side scripts seem to run normally at the same time. I do not understand when the *.dmp file is written: in the log file there are many entries after the timestamp of the dump. Does that mean that some parts of the server gave up?
It's quite difficult to make headway with this problem. I would be very happy if you could post the script as you offered. At least for the time being it might be the best solution - I still hope it will not be forever.
Thanks a lot in advance.