Confused - why can't you just keep developing locally, then when you have a new release - upload it to the FM server?
Somewhat related, I am looking at the best way for multiple developers to be working in one fmp12 file at the same time (just different screens...)
Andrew, I guess I assume that once on the server the solution will begin accumulating data. If I upload the updated solution, I will overwrite that data. I don't see a way to separate the data and design. Is there a way to easily export the data from the live site and import it into the new deployment? What if the data structure has changed?
This is an ongoing issue.
A couple of things. First of all, you won't really find anything in the way of version control, merging, that sort of thing.
There are three general approaches.
1. Leave the live database live, develop locally, take notes as you go, and then replicate your finished work into the live database, without changing data.
2. Develop locally, and when you're ready, swap the local file onto the server and then import the data from the old file to the new file. People have come up with various ways of doing all this. RefreshFM might figure into your solution.
3. Do it all live. Set Server to take regular backups to fall back on. Do your new work in separate layouts, then when you're ready to go live, update navigation etc to point to the new layouts.
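To make approach 2 concrete, here is a minimal sketch of the data-carry-over step, written in Python purely for illustration (FileMaker itself would do this with Import Records scripts). It assumes both files can round-trip records as CSV-style text, and shows the usual schema-change handling: fields dropped from the new file are ignored, fields added to it get a default. The function name and defaults are made up for the example.

```python
import csv
import io

def migrate_records(old_csv, new_fields, defaults=None):
    """Copy records exported from the old live file into the new file's
    field layout. Fields that no longer exist in the new schema are
    dropped; fields that are new to it are filled with a default."""
    defaults = defaults or {}
    reader = csv.DictReader(io.StringIO(old_csv))
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=new_fields)
    writer.writeheader()
    for row in reader:
        # Keep only the fields the new schema knows about.
        writer.writerow({f: row.get(f, defaults.get(f, "")) for f in new_fields})
    return out.getvalue()

# Example: the new version of the solution added a "Status" field.
old_export = "ID,Name\n1,Acme\n2,Widgets Inc\n"
migrated = migrate_records(old_export, ["ID", "Name", "Status"],
                           {"Status": "active"})
```

The same idea applies inside FileMaker: an update script that imports table by table, matching on field names, and sets defaults on any newly added fields before the swapped file goes live.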
Johnny, Thank you for the insight!
Is there a best practice for creating a cross-reference table for swapping in new layouts? Sort of a macro substitution for layout changes (e.g., a table with a field named 'customer detail' whose contents are 'customer detail v2'?)
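There's no built-in macro substitution, but the cross-reference-table idea can work: navigation scripts look up a "logical" layout name in a utility table and go to whatever physical layout it currently points at, so cutting over is a one-row edit instead of changes scattered through every script. Here is that lookup sketched in Python for clarity; the table contents and names are invented for the example.

```python
# Hypothetical layout cross-reference table. Scripts resolve the logical
# name through this map rather than hard-coding the physical layout.
LAYOUT_MAP = {
    "customer detail": "customer detail v2",  # new version ready to go live
    "invoice list": "invoice list",           # unchanged
}

def resolve_layout(logical_name):
    """Return the physical layout a navigation script should use.
    Unknown names fall through unchanged."""
    return LAYOUT_MAP.get(logical_name, logical_name)
```

In FileMaker terms this would be a one-row-per-layout utility table read by your navigation script, combined with the Go to Layout script step's by-calculation option so the target name can come from data rather than a hard-coded reference.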
I would just develop live on server.
But that's just me.
You might want to investigate the data separation model.
It does not solve all of the problems inherent in continuous development but may help your specific case.
Even if it is not a good choice for you the knowledge of how to do development with the separation model is invaluable.
The issues you will need to solve are how to get the data from production to development and how to deal with prod<>dev schema differences. Export as .mer from production and import into dev is the best choice, I think. I don't think there are any identified best practices in these areas. If I am wrong, I'm interested in hearing more.
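One way to catch prod<>dev schema differences before they bite is to compare the field names in the production export against what the dev file expects. A .mer (FileMaker Merge) export is essentially comma-separated text whose first record holds the field names, so the check is cheap. This is an illustrative sketch, not an established practice; the function and keys are made up.

```python
import csv
import io

def schema_diff(prod_header_row, dev_fields):
    """Compare the field names in a production .mer export (first row)
    with the fields the dev file expects, so an import doesn't silently
    drop or misalign data."""
    prod_fields = next(csv.reader(io.StringIO(prod_header_row)))
    return {
        "missing_in_dev": [f for f in prod_fields if f not in dev_fields],
        "new_in_dev": [f for f in dev_fields if f not in prod_fields],
    }

# Example: dev dropped "Fax" and added "Status".
diff = schema_diff("ID,Name,Fax", ["ID", "Name", "Status"])
```

Anything in `missing_in_dev` is production data with nowhere to land; anything in `new_in_dev` needs a default or a back-fill step after the import.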
There have been several discussions over the years on this topic. You can review these related threads:
For what it's worth, I recommend against doing much live updating. I have personally damaged a database quite badly doing that. Yes, it's possible, and yes, I still do it in a limited fashion. However, there are several problems with using extensive live updates:
1) You're working in your production copy. That means what you're doing directly affects users who are doing their daily work. Making a mistake has immediate negative consequences for your customers. Bad.
2) You're working in your production copy. Making a mistake you can't easily back out means you have to revert to a backup, which means you have to do the data import anyway, destroying any alleged time savings from avoiding it. Bad.
3) You never tested the complete fix. Bad.
4) You're either relying on some sort of written procedure (Bad; extra time that could have been better spent) or on your memory to track the changes that need to be made (Very Bad).
5) Modifying schema while users are working carries the risk of corrupting the database. While debate rages on how much of a risk it is, damaged production databases mean at the very least lost production time, and possibly lost data. Very Bad.
i agree, but ...
if the functionality exists, I can use it, right? ...and it should work properly.
It's a question of risk management. It's not intended for extensive development on a live database. It's intended for minor adjustments.
You can drive a nail with a crescent wrench, but that doesn't mean it's a good idea.
Risk management is how I would put it, full stop. To torture the metaphor, if you need to drive a nail and you don't have a hammer, you adapt the tools available.
It is a frustrating situation. With tools for version management, branching, and merging, FMP would become a continuous-integration platform. Without them, we either take the risks and work live, or we take the risks and roll our own deployment process prone to inconsistency and error.
As bad and very bad as all the options can be, all you can do is evaluate your environment and choose the path that minimizes your exposure.
A solo developer on a project used by a handful of people who can tolerate a few hours of downtime can and perhaps should simply work live and keep frequent backups. A handful of developers on a project used all day by dozens of people should probably take that approach much more cautiously and begin working out a versioned deployment procedure. A developer of a commercial product with service agreements and dozens or hundreds of customers should have a deployment and rollback procedure well in place before launching.
So you'll have to think about your environment and users and explore things piecemeal. In answer to your question, there is no one answer.
In general, I agree. I just don't like the "They made it possible to do live changes, so they just ought to make it work 100% reliably so we can do whatever we want" notion that seems to crop up in this discussion.
And yes, I have driven a nail with a crescent wrench on occasion.
Mike, Thank you for the links. I will review those discussions to see what insights I can get. Being brand new to FileMaker, I'm still trying to figure out the right questions to ask. I really appreciate your time to send me the info!
JohnnyB, Thank you for the advice. I'm going to also look into RefreshFM. I glanced at the site last night, and it seems to provide a structured update solution.
Although I am a single developer working on a solution used by a few people, I'm very risk-intolerant when it comes to losing data.
Mike pretty much nailed it, and the discussions he linked to have a lot of information.
My take - beyond building in an import all/update process - is that there is no easy answer to this issue. Deciding how to update live solutions depends on too many variables to come up with one simple answer. Making these decisions properly is a key skill set for a FileMaker developer.
We do some live changes on almost every solution; "safe" things like layout formatting.
We have some small clients with small budgets. We're much more aggressive about live changes with them. The cost/benefit of making a quick change to the live solution is tremendous. Other clients with heavy user counts and mission critical solutions have to be handled differently.
One technique I use often is to add new elements to the live files but keep them completely isolated from current processes. For example dupe off a script and modify the dupe. Then pull a copy and test it aggressively off-line before "flipping the switch." On complex problems there will often be many iterations of this process. This requires a certain level of care and experience, however, and can lead to a lot of orphan elements in the solution.
It goes without saying - which is why I'm saying it - that we only do live development on clients with Server and current backups/clones. Often we'll make changes when the system isn't in use - late in the day or early in the morning. This way we can trigger a backup before we start, and immediately implement that backup if something goes wrong. Combined with the above paragraph this would go: 1) Make a backup. 2) Make the change. 3) Make another backup and test the new process offline.
We're also much more aggressive about changes to the solutions we've written and are intimately familiar with. When working on an inherited system you have to be much more careful: even changing the tab order can sometimes screw things up if the scripts use "Go to next field."