You shouldn't be making changes to field definitions in a hosted database.
That can damage a file, and it locks everyone out while you commit your changes. This can be more than an inconvenience, because scripts are locked out too and can thus fail to execute correctly.
You can also create a module that lets you run a script on their machine from another machine, either via a plugin like Remote Scripter or via an off-screen window that runs a trigger when a record is deleted.
Access to Define Fields in a multi-user solution was purposely added in FileMaker Pro 5.0v3 and has not been removed since. While some may argue that changing schema remotely is risky, I contend it is just as risky locally, with corruption coming from improperly closed database files after crashes and power outages. While I don't recommend developing a solution from scratch on a server, accessing a FileMaker solution remotely over a good internet connection to make a few changes is not blocked by the FileMaker application, and so must be a valid method for updating solutions.
My advice comes from earlier conversations with FileMaker tech support personnel and at least one senior engineer at FileMaker. While they have taken steps to reduce the chance of damage from a "network glitch", it's still possible, and such disconnects are more likely than a local crash while developing. Plus, you know when your local machine has crashed; you might not know about a network glitch.
And that's all beside the main issue--if you are modifying field definitions while the system is in active use, you lock users out of the entire table. In some cases this can have catastrophic consequences for your data, because it blocks scripts from correctly updating data in the table.
Editing a field in a live database is a bad idea - and I do it all the time. When some smurf has camped a table, I go to the server and send them a message; by default, it kicks them off in two minutes.
"Editing a field in a live database is a bad idea - and I do it all the time." Me too, but wherever possible think about philmodjunk's excellent advice, to check whether it's a system that does have the kind of scripting that could be catastrophically affected.
That said, the single, 'missing' user can be a pain. If so, you might like to vote for my product suggestion: Lazy User Filtering
A DB is hosted. A user sticks the cursor in a field and leaves it there for hours, while doing other things.
A small point, but important to the total understanding and potential fixes: putting the cursor in a field does not lock the record. The record only gets locked when the user actually starts editing it.
So force-committing at fairly random on-idle intervals *can* lead to bad data (incomplete records), because it assumes that it is OK to commit no matter the state of the user's current edit. That is a huge assumption to make.
IMHO a better solution to this problem is to create the data entry/edit UX so that the user feels compelled to finish it when they start.
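The locking and commit behavior described above can be sketched as a toy model. This is not FileMaker's actual engine, just a minimal Python illustration of the two points made here: parking the cursor locks nothing, the first real edit takes the lock, and a blind "commit on idle" can persist a half-finished record.

```python
# Toy model of the record-locking semantics discussed above (an
# illustration, not FileMaker's real implementation).

class Record:
    def __init__(self, fields):
        self.saved = dict(fields)   # what other users see
        self.draft = None           # uncommitted edits, if any
        self.locked = False

    def click_into_field(self):
        # Merely parking the cursor does NOT lock the record.
        pass

    def edit(self, field, value):
        # The first actual change acquires the lock and opens a draft.
        if self.draft is None:
            self.locked = True
            self.draft = dict(self.saved)
        self.draft[field] = value

    def commit(self):
        # A forced commit saves whatever is in the draft -- complete or not.
        if self.draft is not None:
            self.saved = self.draft
            self.draft = None
        self.locked = False


rec = Record({"first": "", "last": ""})
rec.click_into_field()
print(rec.locked)            # False: cursor alone locks nothing

rec.edit("first", "Ada")     # user starts a two-field entry...
print(rec.locked)            # True: now the record is locked

rec.commit()                 # ...an on-idle script force-commits early
print(rec.saved)             # {'first': 'Ada', 'last': ''} -- incomplete record
```

The last line is the danger: the committed record is visible to everyone with one of its two fields still blank, which is exactly the "bad data" risk of committing without knowing the user's edit intent.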
All good points made here by phil, mardi, data wolf and win!
Here's a good Knowledge Base article on the subject:
It lists six potential reasons for file corruption, of which network latency is one. I think it is important to note that the ability to define schema on a live database exists for a reason. There are times when taking down the entire system to add a single field, or modifying a backup and importing the data, is just too time consuming. I'm not suggesting that an entire solution be created on a live system; it's just a tool in your arsenal that you need to weigh against the other options you have available.
Literally dealing with that right now.
A "live server" version of an existing mobile file was needed. It would be used by the public to enter data to an isolated table, so I disabled Disconnect User from Server. I'm working on refinements and can't do anything because someone is still in a field and left her desk over an hour ago.
There are times when taking down the entire system to add a single field or modifying a backup and importing the data is just too time consuming.
But you've missed another option that is safe and completely avoids imports. It doesn't even require moving files, nor does it require data separation.
We have a 100+ file system used by over 300 people almost 24/7, and making "data level mods" is where the "almost" comes in. We have a regularly scheduled day of the week when the system is taken down for 30 minutes to a bit over an hour, at 10 pm or so. Users are notified in advance by email, a warning message pops up when the main file is opened on the day of the downtime, and I use the admin console to broadcast warnings at 30 and 15 minutes out from the downtime.
I then close all files, use FileMaker Advanced to open them right where they are in the server folder, and make the needed changes. We have a table in the system where developers put in the data mods that they need for that week. I try to make sure that all is in readiness in that file so that the needed changes can either be copy-pasted from the mod table or brought in from a development copy of the file (say, to add a new table).
Once changes are made, the files are re-opened, externally authenticated accounts are allowed back in, and an email is broadcast to the users that they can get back in if they need to burn the midnight oil that badly.
In cases where a change has been needed on an emergency basis, I've been able to take down a single file, update it and put it back up in less than 15 minutes. (Though I still warn users in advance, both with the time of the downtime and what subsystem will be non-functional.)
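The warn/close/update/re-open sequence described above can be scripted around FileMaker Server's `fmsadmin` command-line tool. `send`, `close`, and `open` are real `fmsadmin` subcommands, but the exact flags vary between server versions, so treat this Python sketch as an assumption-laden outline and check your release's documentation; `dry_run=True` just returns the commands instead of executing them.

```python
# Sketch of a scheduled-downtime routine wrapped around fmsadmin.
# Flag names are assumptions based on common fmsadmin usage; verify
# against your FileMaker Server version before running for real.

import subprocess

def maintenance_window(warn_minutes=(30, 15), grace_seconds=120, dry_run=True):
    commands = [
        # Broadcast countdown warnings to all connected clients.
        *[["fmsadmin", "send", "-m",
           f"Scheduled maintenance begins in {m} minutes. Please save your work."]
          for m in warn_minutes],
        # Close all hosted files, giving clients a grace period to finish.
        ["fmsadmin", "close", "-m", "Maintenance in progress",
         "-t", str(grace_seconds)],
        # (Schema changes happen here, with the files opened locally
        # in FileMaker Advanced, as described above.)
        # Re-open the files so users can get back in.
        ["fmsadmin", "open"],
    ]
    if dry_run:
        return commands
    for cmd in commands:
        subprocess.run(cmd, check=True)
    return commands

for cmd in maintenance_window():
    print(" ".join(cmd))
```

The point of the dry-run default is the same caution the whole thread urges: you want to see exactly what will be sent to the server, and in what order, before anything touches a live system.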
Yes, I agree wholeheartedly with your decision on this complex system, phil. Thanks for sharing. On less complex systems with fewer people using them, live schema modification can be considered. It all depends on the scenario. The only issue I have is with "always" calling live schema manipulation a bad idea. If you just need to add a single calculation field to create a new report, I'm not sure this lengthy process is necessary.
With a smaller system and fewer users, this technique becomes easier and less intrusive on users, not harder.
We may just have to agree to disagree here, but I'll never unnecessarily risk damaging a file--I've seen too many cases where the damage goes undiscovered for extended periods of time--when there's an alternative. I'd much rather inconvenience myself by staying up a little bit late once a week rather than take such a risk.
We feel the same way. We have a large, complex system that is being hammered on by ~100 users all day. Live development isn't just an option; it's a necessity for a lot of items. We simply don't have the cycles most of the year to build in a dev version and either move it or do imports. There's simply way too much data across too many tables.
Most of our development, though, doesn't require us to touch the data schema much; when it does, we plan it out, but it's rare. The system has run for 10 years with no issues and no evidence of corruption (and we do check often).
We almost always develop on a host file. And almost NEVER on a local file. If the local file crashes, we have found it to do more damage than if it's just FMPA crashing. Almost never run into network issues here.
We almost always develop on a host file. And almost NEVER on a local file. If the local file crashes, we have found it to do more damage than if it's just FMPA crashing.
To repeat a comment that I made earlier: yes, a local crash is far more likely to damage a file, BUT you know that it crashed and thus can revert to backups. You do not always know when a network glitch has introduced latent corruption into a file.