Sounds like index corruption. Try shutting down the database (if hosted on Server), save a compacted copy, and see if that fixes it.
How do I save a compacted copy?
I know when I used to administer Access DBs, it had a compact and repair feature. Does FileMaker have something similar?
In the File menu, choose Save a Copy As. Then choose "Compacted copy" in the Save dialog. You won't be able to do this with a hosted database (hence the instruction to shut it down first).
And yes, this is a rough analog to the Access "compact and repair" feature. FileMaker has two additional features further up the chain, in case compacting doesn't work. You can save a clone (same menu item, just choose "Clone") and import the records back in; this gives a fuller rebuild than a compacted copy. And, failing all else, you can use the Recover command - but the standard recommendation is not to put a recovered database back into service, because the recovery process can leave the file in an unstable condition (it has a tendency to rip bad blocks out of the structure, which may or may not leave the database safe to use going forward). Except in desperate circumstances, Recover should only be used to grab data and push it into a known-good backup.
Great. This made the ?'s go away, but it still makes me nervous - does this mean I could randomly lose data like that? How can I find out what caused it? I can't have data disappearing like this, especially if it's something we don't notice. We do nightly backups (and weekly backups of those), so I'm not worried about recovery unless we get beyond a couple of weeks.
Is there preventative maintenance I need to be doing as the DB admin? And are there things I should be looking at as I continue to work on/build the database to avoid these sorts of issues?
My original thought was that this was caused by an error in transferring the database from the FileMaker Go app in the field back to the main database, but I have no way to prove that.
How is your database used in daily operation? Are you using peer-to-peer sharing (i.e., one user hosts and the other users attach)? This might be the source of the problem; I've seen corrupted databases come up quite frequently from this practice.
Other possibilities involve large imports (especially while other users are in the database), copying the database while someone has it open, or network interruptions.
How are you transferring the data from Go?
Probably need a bit more information in order to come up with some likely culprits.
We are serving it to our LAN and WAN (Go) via FileMaker Server 11. Each user has their own username, and it isn't used for mass data import. It's basically a customer database, with notes, a few reports, and several tables. A total of maybe 10 people use it, with 5 of them also using it on two iPads and three iPhones.
Could it have something to do with the 3G? We have a public IP through which they connect to the database back at the office and edit data that way.
Sorry; didn't catch the Server piece. Okay, so no peer-to-peer.
I'd say your hunch is on the right track; it's probably due to network interruptions. Would be the first place I'd look.
Indexes are updated when records are committed to the database. Have your Go users reported any dropped connections?
Do you have any operations that use, for example, Replace Field Contents or other global operations? If these types of operations are interrupted by a network drop, it can corrupt the index.
What kinds of fields came up with bad indexes? Calc fields? Fields that users update?
Another place to look (and I may get some disagreement from the other forum members on this) would be development during use. While you can technically perform database schema updates (that's the Manage Database dialog) while users are in the database, I've experienced issues similar to what you describe from doing it. This can be especially true if you've changed the field definition on something like a calculation field that's indexed.
I didn't answer your original question, but yes, in situations where you're likely to see index corruption, it's a good idea to shut the database down on a regular schedule (once a month, every couple of weeks, something like that) and save a compacted copy. It keeps the indexes healthy.
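If you want to take some of the manual drudgery out of that routine, the close/re-host steps can be scripted against FileMaker Server's `fmsadmin` command line. The compaction itself still has to happen in FileMaker Pro (Save a Copy As > compacted copy), so the sketch below just brackets that manual step. The file name, username, and password here are placeholders, and it defaults to a dry run that only prints the commands:

```python
# Sketch of a periodic maintenance helper for a FileMaker Server host.
# Assumption: fmsadmin is on the PATH, and "Customers.fp7" plus the
# credentials are placeholders -- none of these names come from the thread.
import subprocess

DRY_RUN = True  # set to False to actually execute the fmsadmin commands

def run(cmd):
    """Print the command; execute it only when DRY_RUN is off."""
    print(" ".join(cmd))
    if not DRY_RUN:
        subprocess.run(cmd, check=True)

def maintenance(filename="Customers.fp7", user="admin", password="secret"):
    auth = ["-u", user, "-p", password]
    # 1. Close the hosted file (-y suppresses the confirmation prompt)
    #    so it can be opened directly in FileMaker Pro.
    run(["fmsadmin", "close", filename, "-y"] + auth)
    # 2. Manual step: open the closed file in FileMaker Pro, choose
    #    File > Save a Copy As... > "compacted copy", then swap the
    #    compacted copy into the Server databases folder.
    print("# Now compact the file in FileMaker Pro and replace it.")
    # 3. Re-host the file once the compacted copy is in place.
    run(["fmsadmin", "open", filename] + auth)

maintenance()
```

Running it with `DRY_RUN = True` first is a cheap way to sanity-check the command sequence before pointing it at a live server.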
Anyway, see if the users are reporting network drops. That would be the first thing I would look for in the situation you're describing.
Thanks, Winfried. Good detail.
This helps clear up a few things in my head for sure. I still have a couple of lingering questions that may or may not have been addressed in that write-up.
Is there a way to prevent this from happening? Do I just need to create a compacted copy every so often and shut the DB down occasionally? Our network, to my knowledge, has been solid - though with 3G, "solid" is a very flexible term outside the building.
Could this issue be reason enough NOT to use 3G at all? I hope not, as that would cripple what we've built up to now, and we'd have to find another solution.
The fields that come up bad are fields that users update, which in turn makes the calculations come up with the ?'s.
Well, like I said, the first thing I would do is query the users about dropped connections.
The other thing to look at is making sure the users don't leave a record open and then put their devices to sleep. Server will attempt to disconnect them after a certain time if they do that, but their changes won't be committed, and that could (again, possibly) introduce a problem. Just training them to commit their changes before putting their devices to sleep is a good practice (just tap outside a field, change layouts, or close the database).
Other than that, yes, periodic maintenance is a good idea. You'll have to shut the DB down to make a copy.
Winfried may have other suggestions; he's kind of the guru of file corruption around here. :-)
These are great thoughts.
Winfried, I'd appreciate any other detail you can add.