#2 isn't much different from #1.
What you do depends on the situation.
If it's a low-traffic system, or you can access it during low-traffic times (e.g., nights), working on it live might be fine.
You can ease the pain of #1/#2 by using multiple files. The most common way is to have an interface file and a data file. Any layout or script changes in the interface file can be made and tested, and then you swap out the interface files. If you make data changes (tables, fields, relationships), you'll still need to do one of the other options. If all you're doing is adding completely new functionality, not changing existing tables, you could add a new data file.
#3 can be done with third-party tools or with your own scripts. Once you've got a system in place, it's not so bad.
David and I don't fully agree on #1, but we don't need to rehash it here as we've had that argument elsewhere. I contend that certain data-level changes should not be made to the live system while it is hosted, as you can damage the file and not know it until, possibly a great deal of time later, you discover unexpected results or behavior in your file. He points out that there are also major risks inherent to developing in a single-user file, and that using Server allows you to automatically generate and keep backup copies. That's my best attempt at summarizing a long argument elsewhere, with apologies to David if I've misstated his position.
"Since you can not import or copy and paste layouts and relationships"
While technically true, it's not as bad as that sounds. You CAN, for example, select all layout objects on a layout in your development copy and paste them into a layout in the live file. There are some picky details to be careful of, but assuming that the layout's TO is the same in both cases, this can work without too much trouble.
And if your naming conventions for TOs include a summary of the TOG structure, as mine usually do, you can copy and paste several TO names into a comment box in the development file, then copy the resulting block of text into a comment box in the new file to refer to while building the needed relationships. I sometimes keep a screenshot of the relevant relationships open in another application as a reference guide as well. It's still a tedious job if you are working with more than just a few new TOs/relationships, but it reduces the work by a few percentage points at least. If there are a lot of TO/table changes, I'd definitely consider importing the data into a clone of the new version (#3 in David's post).
And for #3, as David indicates, you can script the import process. Just be sure to take the file down off the server, or at least block user access to it while you do this, so that some "night owl" user doesn't try to edit data in the middle of the scripted import. Primary keys generated with Get ( UUID ) make the import process a few steps simpler (though it's not all that hard to include script steps that update the next-serial-number settings if the file still uses serial numbers for PKs). You can even test the scripted update on backup copies of the file before you run it for real on the live copy. Once you have such a script in place, you can make small updates to it as needed for future scripted import tasks.
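As a rough sketch of what such an update script might look like (file, table, and field names here are made up for illustration, and the exact step options will vary with your solution):

```
# Hypothetical update script, run in the NEW file (a clone of the
# development copy) with the old live file taken off the server.
Show Custom Dialog [ "Confirm" ; "Import all data from the old file?" ]
If [ Get ( LastMessageChoice ) ≠ 1 ]
    Exit Script [ ]
End If

# One Import Records step per table, matching on field names.
Go to Layout [ "Customers" ]
Import Records [ With dialog: Off ; "OldLive.fmp12" ; Add; matching field names ]
Go to Layout [ "Invoices" ]
Import Records [ With dialog: Off ; "OldLive.fmp12" ; Add; matching field names ]

Show Custom Dialog [ "Done" ; "Data migration complete." ]
```

Run it first on a backup copy and spot-check record counts and related data before doing it against the real live file.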
I assume you have a development copy (containing random data) hosted on your development server, on which you implement new stuff, be it new tables/layouts, new fields, or new scripts?
Unless we know how you work, it's hard to do more than guess.
And btw, "random data" for me means real data randomized at the individual level. We work with patient data, which for us means keeping the figures but altering names, addresses, and so on with relookups on data retrieved from http://www.fakenamegenerator.com/, so that the data still makes sense but is tied to fake profiles.
Most of the methods discussed here work whether or not the development copy is hosted.
We use both approaches, and we don't use random data; we use backup copies, so that we are testing and developing against real, actual data. Of course, we do take precautions, including encryption at rest, to protect sensitive data present in those development files.
If you need to do frequent updates, build a script that moves from table to table and finds, then imports, all records into a blank clone of the original.
If it is a one-off, make the changes on a copy. At night, or for five minutes during the day, take the solution down, copy the data changes from each table and paste them into the solution, then bring it back online. After that I update the layouts as needed (copying and pasting the entire layout) if there are lots of changes.
If I know what the data/table changes are, I also sometimes make a duplicate of the layout and work on that in the live database. When done, I select the entire layout, copy it, and paste it onto the original layout (after first deleting all its objects).
There's a product that helps you migrate data in an automated way: www.refreshfm.com. It was built for exactly this situation.
Another trick when making a small change to a layout in a live system is to set a "Hide object when" expression on the new feature:
Get ( AccountPrivilegeSetName ) <> "[Full Access]"
I often keep a new button or other small feature invisible while installing and testing, then remove the expression when it's time to go live. I've used it so often now that I created a CF just to save a bit of typing.
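For what it's worth, such a custom function can be as simple as a wrapper around that one-line calculation (the function name here is made up; pick whatever fits your conventions):

```
// Custom function: HideFromUsers  (hypothetical name, no parameters)
// Returns True for everyone except [Full Access] accounts, so any
// object with this in its "Hide object when" expression stays
// invisible to end users while you install and test.
Get ( AccountPrivilegeSetName ) <> "[Full Access]"
```

Then the hide expression on each in-progress object is just `HideFromUsers`, and going live means deleting that expression from the finished objects.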
I did build a complete series of imports so I can clone my development copy and then import all the live data with one click.
But I sometimes run into issues. On import you only get one choice: Update Serial Number and Lookup together, or neither.
In some tables I have lookups that I do not want updated, but I always need the serial numbers updated.
That leaves me having to go table by table and adjust the serial numbers.
Anyway, I appreciate all the feedback and suggestions, and I will definitely check into www.refreshfm.com.
You absolutely, positively should NOT change the serial numbers for existing records during an import. You should only change the NEXT serial number setting and this is done with a script step specifically designed for that purpose.
If you do need to batch-update specific fields as part of the update, you can use Replace Field Contents.
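The step designed for that purpose is Set Next Serial Value. A common pattern (table and field names are hypothetical) is to find the current highest serial in the freshly imported table and point the auto-enter setting just past it:

```
# After importing, reset each table's next serial number.
# Assumes Customers::ID is a number field with serial auto-entry;
# all names here are made up for illustration.
Go to Layout [ "Customers" ]
Show All Records
Sort Records [ With dialog: Off ; Customers::ID ascending ]
Go to Record/Request/Page [ Last ]
Set Next Serial Value [ Customers::ID ; Customers::ID + 1 ]
```

Repeat the same four steps for each table that still uses serial-number primary keys; tables keyed with Get ( UUID ) don't need it.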
When I import with the Update Serial Numbers option on, it does NOT change the serial numbers on imported records; it only updates the next-serial-number setting in the table so that the NEXT serial number is correct.
Use the separation model.
"When I import with the Update Serial Numbers option on, it does NOT change the serial numbers on imported records; it only updates the next-serial-number setting in the table so that the NEXT serial number is correct."
For updating a solution by replacing an older file with a newer one and then importing the data from the older file into the newer, you do not change the serial numbers in the records that you are importing! Doing so will scramble your data. You want the data exactly as it was in the original copy of the file. If the serial number is the primary key of your table, changing its value will disconnect any related records in other tables that link by those serial numbers.
Thus, what you describe is exactly what should happen when doing this kind of solution update.
I'd use option #3 and create an import script so I can automate the data migration process.