This also applies to other versions, as I have been doing this for a number of years. When a search returns no matching records, and we know the record is right in front of us but FileMaker can't find it, the field's indexing is always set to "All". Simply remove "All", reset it to Minimal, and the offending record will be found.
I get this a lot when developing in FMStudio, and, like the original poster, I wonder why we can't re-index the whole database easily, or have a script step we could run overnight from the server.
Version 13 maybe?
Sorry A2analytics - don't have an answer - just the same question.
I think you can use the Advanced Options in Recover to do this. Actually, I think you don't even need to use the Advanced Options. Just recover the file. If there are no issues (i.e. corruption) with it, then I believe you are fine, AND you just re-indexed the whole thing.
Certified for Filemaker 7, 10, 11
I would SUGGEST that a find where FileMaker does not return the appropriate records indicates corruption, and that any workaround - Save a Copy As... compacted, Recover, or turning indexing off and on to get the find working again - should be considered a temporary fix.
This has been covered many, many times in this forum (but probably before you joined).
1) DO NOT Recover the file.
I can't emphasize this enough. Recover, while better than back in the .fp3 days, is still an attempt to rescue data and the overall file structure and accepts that if something in the file is unacceptable, it can be tossed out to preserve the ability to open the file and export the data. If the file is still working in every other respect, then you don't need/want this tool.
DO NOT Recover the file.
2) coherentkris is on the right track with the idea of 'Save A Copy As...', but instead of trying to trick FMPro with compacting the index (which might change the index), save a CLONE of the file. When you do this, all of the indexes are removed, since there is going to be no data in the file. When you import the existing data into the new clone, all indexed fields must have their indexes rebuilt. Just make sure that any field you know you want indexed is defined as such, and you'll get exactly that.
3) Turning indexing on/off for a single field also works, but you asked about all fields, and a clone is the quickest and least destructive way to do this.
4) Please take a look at the extensive and clearly documented work by Winfried Huslik. http://www.fmdiff.com/fm/recordindex.html. He's the real expert on this, and has covered this ground many times before.
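The clone-and-reimport routine in point 2 can be sketched as FileMaker script steps. This is only a sketch, not a definitive implementation: the file names, table name, and field assumptions below are all hypothetical.

```
# Rebuild-all-indexes sketch (FileMaker script steps; file/table names assumed)
Save a Copy as [ "Rebuilt.fmp12" ; clone (no records) ]
# Open the clone, then repeat for each table:
Import Records [ With dialog: Off ; "Original.fmp12" ; Table: Contacts ;
                 Matching names ]
# On import, every field defined as indexed is re-indexed from scratch.
# Remember to reset auto-enter serials afterwards, e.g. with
# Set Next Serial Value, so new records don't collide with imported ones.
```

Since the clone contains structure but no data, the import forces FileMaker to rebuild every index it writes to, which is the effect being described above.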
-- Drew Tenenholz
Christopher is correct, and, well, slightly off base. :-)
The Advanced Options in Recover that allow you just to rebuild the indexes will indeed recover the indexes and get you going again. However, I would not - ever - put a fully recovered database (one that uses all the Recover options) back into production unless I had no other choice. Why? Because Recover literally goes into the database and rips out any bad blocks it finds. Although theoretically, if it reports nothing was done, then nothing was done, it is still not recommended to put a recovered database back into production. Instead, do as Kris suggests and save a compacted copy; this recovers the indexes as well as repairs other minor issues. (As a maintenance issue, this should be done on a periodic basis anyway.) Next step up is to clone the database and import your data. Use Recover as a last resort to get your data out before reverting to a backup.
The question is, though, why are you seeing index corruption? When I see this, it's normally for one of a few reasons:
1) A table with lots of turnover - say, mass delete / import.
2) A file being shared from a network drive.
3) A file that was abnormally closed, especially with one or more records open at the time.
Any of these apply?
My point was that if you have to force-rebuild the indexes to get Find to function correctly, the file is corrupted.
Steps to fix:
1. Do what you have to do to restore the file(s) to production status based on business needs.
2. Perform root-cause analysis on why the corruption occurred.
3. Generate, install, and verify corrective actions for the root cause.
4. Rebuild the file or files from scratch, or restore from a known uncorrupted copy.
5. Restore the data.
6. Make the rebuilt/restored file the live file.
7. Delete the corrupt file(s).
If you end up cloning the file as Mike suggests, you could use Goya's RefreshFM tool to import the data into all tables.
I know the rule about not recovering, and have done both the turning-on-and-off-of-the-indices, and the clone-re-import thing, many times.
My (admittedly somewhat experimental) thinking was: if indexing is the only issue, and there is no structural corruption (i.e., the tables, fields, relationships, layouts, etc. are all good), then the Recover advanced option to rebuild the indices would probably get you to the same place as turning the indices off and on in all the fields. But then, if the corruption is only in the indices, just doing Recover (without the advanced options) would essentially just rebuild the indices, since the rest of the file would not be rebuilt in any way (or not in any harmful way).
But I defer to the official doctrine that recover should be avoided, except to create a data source for a clean un-recovered copy of the file.
Certified for Filemaker 7, 10, 11
> The question is, though, why are you seeing index corruption? When I see this, it's normally for one of a few reasons:
> 1) A table with lots of turnover - say, mass delete / import.
> 2) A file being shared from a network drive.
> 3) A file that was abnormally closed, especially with one or more records open at the time.
Another thing I noticed can create corruption: Replace Field Contents script step, in certain situations.
Years ago, I built a system that, when people logged in, would run some Replaces across the database. Bad idea: everyone logged in at roughly the same time in the morning, hit the same set of records with their Replaces, and the result was instant data corruption.
Since then my rule has been: only use Replace Field Contents (in a script) if you're certain that the set of records being operated on is available only to the user running the script.
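That rule can be sketched in FileMaker script steps. The table and field names here (Tasks, AssignedTo, Status) are invented for illustration:

```
# Isolate a user-specific found set before running Replace (names assumed)
Enter Find Mode [ Pause: Off ]
Set Field [ Tasks::AssignedTo ; Get ( AccountName ) ]
Perform Find [ ]
If [ Get ( FoundCount ) > 0 ]
    # Only this user's records are in the found set, so simultaneous
    # logins never hit the same records with overlapping Replaces.
    Replace Field Contents [ With dialog: Off ; Tasks::Status ; "Seen" ]
End If
```

The find constrains the found set to records owned by the current account before the Replace touches anything, which is what keeps concurrent users off each other's records.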
Certified for Filemaker 7, 10, 11
Agreed. Also, monkeying with anything in Manage Database (especially calculations) or Manage Security with users logged in is often an invitation to trouble.
And trouble is very obliging. It usually accepts invitations.
Yes, I was too tired to search the archives for this topic before.
This is a personal file of some depth. It crashes frequently due to other application activity. It has been around. The suggested preventative maintenance routines above are just not going to happen.
This file has 19 tables and there is as much risk of corrupting the data in moving it to a clone file and checking all the serial entries as there is in just recovering the file and continuing. Plus, the file is constantly in flux, with script modifications and layout/report changes. A backup clone is just not feasible. I do have a scripted startup routine that saves a copy for severe emergencies (like a mistaken data delete), but in one afternoon session I can make so many design changes in the file that the clone is no longer valid. I may not even remember all the changes I made.
And hello FileMaker: these various recommendations about protecting the file design are just a fantasy. I have been hearing them for 10+ years, and they are in complete conflict with the target market of this product (less technical users and low cost rapid development). Most customers are ignorant of them or ignore them intentionally. A very small number of professional developers follow them. It just supports the sales argument that FileMaker is not suitable for mission critical databases.
It should have been addressed with the file format change in FMP 12.
Information on the issues that cause corruption, and on recovery and mitigation strategies, is available if you look.
Following those recommendations is up to each individual developer.
Just know that if the indexes are corrupt and you're making business decisions on the data, you may be making decisions on bad data.
It's all about your own tolerance for pain.
Agreed. If the application is indeed mission critical, I should think perhaps hosting it on Server and keeping regular clones would be wise. The question is one of risk tolerance versus recovery time and the importance of the data. That is a case-by-case decision ... but it's really not a question of the technology. It's a question of risk management.
All database applications carry inherent risk of data loss - just like any other computer application. When we craft risk management plans in IT, we look at things like how long the application can be down, how much data (in terms of work) you can afford to lose, what the likelihood of loss will be, and plan accordingly. Sometimes, it means redundancy in servers with failover; sometimes, it means more frequent backup; sometimes, it means requiring the developers to keep independent backups of all their changes in addition to using the source control system. And then sometimes, I get calls from panicky people in the field who had only one copy of a database that's now crashed and they don't know what to do.
It's a lot like someone telling you not to touch a hot stove. If you choose to touch the stove, you might get away with it for a while, but ... eventually, you're gonna get burned.
At the risk of being totally discredited in the Filemaker world, I admit that I've been running 3 companies on 'Recovered Files' for the past 10 years with NO troubles.
I agree with you that the clones become outdated as fast as our computers. Working on databases, making updates and changes renders the clones unusable in a blink.
I have a hard time tracking all the clones I've made!
I understand the reasons for not using recovered files, but to be totally honest I was too overwhelmed by the thought of having to rebuild the files from scratch, or of importing gigs of data. The pressure to keep the wheels rolling, ya know! I wasn't aware of Goya's RefreshFM tool.
I think FileMaker has become increasingly less tolerant of bad design in its releases from 6 to 12.
So there's my two cents,.....gone down the drain!