Most projects I've worked on with the deployment model you're describing — you work on a local development copy, then import the real application data before hosting it, replacing the previous version — also have a data migration file developed in parallel. The migration file is a standalone file with references to both the old and the new versions of the application. The migration file contains scripted imports (and resets serial numbers if you aren't using UUIDs) so that when it's time to migrate, you don't have to manually import data for each table (and so you don't forget to do anything important).
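To make that concrete, here is a rough sketch of what one step of such a migration script can look like in FileMaker script steps. The file, table, and field names are placeholders, the example assumes numeric serial IDs (skip the reset if you use UUIDs), and ExecuteSQL is just one way to find the highest existing value:

# Runs in the new, empty clone; "OldData_Customers.fmp12" is a placeholder path
Go to Layout [ "Customers" ]
Import Records [ With dialog: Off ; "OldData_Customers.fmp12" ; Add ; Matching names ]
# Reset the auto-enter serial so new records don't collide with imported ones
Set Next Serial Value [ Customers::ID ; ExecuteSQL ( "SELECT MAX ( ID ) FROM Customers" ; "" ; "" ) + 1 ]
# ...repeat an Import/Reset pair for every user-data table

One pair of steps like this per table, kept in a single standalone migration script, is what lets you rerun the whole migration in one pass instead of importing table by table.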
Many folks have claimed some success with the separation model for this purpose, but I've never personally seen it work out as advertised. Hypothetically, a developer should be able to swap out just the interface file with your new updates; but it's only that simple if your updates don't include any data schema changes. In my experience, most updates include some kind of supporting data schema change. The separation model can still be a useful organizational tool, especially in larger enterprise systems where it makes sense for different users using different applications accessing the same data to have separate interface files for the distinct applications.
Thanks for the reply. That makes sense on the separation model, especially in the current state of my project, because I am still changing tables and relationships fairly regularly.
Any writeups you know of that describe the import scripts and setup you talk about along with the migration file? That sounds like what I need to set up, but I am still trying to understand exactly what is involved in creating that system.
Virtually all of my solutions include an Export/Import process like jbante mentioned. The key is planning: when you create a solution, it is important to set up an export process, a "save a copy as" process, or both. You also need a method to reference a path where the backup and restore data will live. There are things to consider when planning an approach. One of them is the platform: a purely iOS solution has no .fmp12 export, so you will need to use the save-a-copy method; on the other hand, FileMaker Server hosted files can only use the export/import option. I prefer to set my solutions up for both. Also make sure you add any newly created stored fields in user data tables to the export script.
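A minimal sketch of that platform branch in FileMaker script steps (the script name, path variable, and structure are my own assumptions, not from the posts above):

# Back up by whichever method the platform supports
If [ Get ( SystemPlatform ) = 3 ]
  # 3 = iOS, where exporting records to .fmp12 is unavailable
  Save a Copy as [ "$backupPath/Backup.fmp12" ]
Else
  # Hosted files cannot use Save a Copy as, so export each table instead
  Perform Script [ "Export All Tables" ]
End If

Setting the solution up for both paths, as described above, means the same backup script works whether the file is local on an iPad or hosted on FileMaker Server.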
The "Separation Model" will not really solve your problem. Based on my own experience and that of other developers I have known, the changes you make will often (as much as 50% or more of the time) require changes to the data file(s) as well. While there are some situations where the Separation Model makes sense, in this case it will not be a complete solution to your problem. The Separation Model does have the advantage of allowing the user to be updated while still running the solution, but only if the interface file alone is being updated.
The way I approach the export is to export all the tables that contain data the user will add, delete, or modify as separate .fmp12 files. I only export the stored fields; no calculations or globals get exported.
There are a lot of things that go into refining the process, additionally making it so the user can do it themselves (if necessary) in an easy process can also take some planning. Another thing to keep in mind is the time to run the import. With smaller volume clients it will typically run only a few minutes, but with clients who have many millions of records in multiple tables, it could take hours or even an entire night.
Laying out my complete process would take far too much time and make for an enormous thread. What I can tell you is that it can absolutely be done with a single-file solution. I have had this approach working seamlessly in different solutions for close to 15 years, and the nice part is that the updates are generally done by the user, not me.
Thanks again for all the replies. The RefreshFM solution seems solid; I am going to demo it and see how it goes.
Once the update is done and I import all the data into the new file, I assume I then need to re-upload it to FileMaker Server. If I name it exactly the same, will I be able to seamlessly replace my existing file without any major impact on the end users connecting to it?
Only if you close the file first.
Snarkiness aside, no, you have to take an outage, close the database (which necessarily means disconnecting all users), replace the file, and then re-host it.
EDIT - scratch that. I didn't realize that context was taken into consideration when exporting. I have since updated my found set and context before exporting, and it's working correctly. Simple oversight.
I wasn't too impressed with RefreshFM (pretty glitchy), so I am going to attempt to build my own export/import scripts. So far I have set up three of my tables for testing, exporting each table to its own FileMaker file.
I am having some strange behavior. When I run the script, it first exports all my data from the Customers table, no problem. Then it moves on to the Alerts table, and it seems to export the entire Alerts table for every customer record. I have 69 customer records, and it exports all my alerts TIMES 69! Same with my calendar events table: instead of 718 calendar events, I end up with 49,542. I must be missing a script step that clears my export context, or I am just going about this completely the wrong way.
Looking at the script, you need to:
(1) use the Go to Layout script step to move to an appropriate layout, based on the table (table occurrence) you want;
(2) perform a find to find the records you want in that table, or show all records;
(3) use your existing Set Variable and Export Records script steps.
Go to Layout "Customers List"
Show All Records
Set Variable $filename
Export Records (check the exported fields are from the same table occurrence as the layout)
Go to Layout "Assets List"
Show All Records
Set Variable $filename
Export Records (again, check the exported fields are from the same table occurrence as the layout)
Hope this helps.
Bye for now.
What Michael said.
There's one interesting FileMaker record resource, seldom mentioned, that can be used to validate a data file as a check for fraud, files replaced after a crash, etc.: the record ID, which is incremented with every new record. This is separate from any serialized field. Serial numbers can be restarted at any time or replaced for all records, so deleted records can be hidden.
The Record ID runs all the time and deleting records does not affect the next value. It shows a total of all new records created.
I haven't tested this, but I wonder if this number can be confused by importing records into an offsite development copy. The developer may have created only 100 test records while the employees entered a thousand or more new records. After importing the records and resetting the serial ID (if not using a UUID), this record ID will be out of sync; its value will no longer be a reliable indication of the number of records created in a table.
This record ID cannot be changed by any talented hacker who might adjust the next serial number to hide a deleted invoice paid in cash and pocketed. When faced with an angry client who insisted that their database had never lost a record or had records deleted, I was able to show large gaps in the record sequences and prove that I was not to blame.
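For reference, the internal ID the posts above appear to be describing is the value returned by the Get ( RecordID ) function. A minimal way to surface it is an unstored calculation field (the field name here is my own):

# Unstored calculation field in each table, result type Number:
cRecordID = Get ( RecordID )
# Sorting on cRecordID and scanning for gaps reveals where records were
# deleted; comparing the highest cRecordID against Get ( TotalRecordCount )
# gives a rough count of how many records a table has lost.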
If such a historical record is needed, then the multi-file technique is essential, so that the data file is never imported into a clone, which would alter the true record ID total.
The only problem with such a scheme is that you're forced to make any schema changes on the live database. This is risky and can (and often does) contribute to corrupted database structure. In addition, it deprives you of the ability to perform offline testing of major database upgrades unless you're VERY good at replicating your changes in schema.
Should you need historical records of deletions, perhaps a better solution would be to modify the deletion to shuttle the records off to an archive table instead of actually deleting them. This can be accomplished with a roll-your-own solution and custom menus or through plugins.
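A minimal sketch of that roll-your-own approach, attached to a custom Delete menu item via Custom Menus (the Archive table and its fields are hypothetical):

# Script bound to the custom "Delete Record" menu command
Set Variable [ $data ; Value: List ( Invoices::ID ; Invoices::Date ; Invoices::Amount ) ]
Go to Layout [ "Archive" ]
New Record/Request
Set Field [ Archive::SourceData ; $data ]
Set Field [ Archive::DeletedBy ; Get ( AccountName ) ]
Set Field [ Archive::DeletedAt ; Get ( CurrentTimestamp ) ]
Go to Layout [ original layout ]
Delete Record/Request [ With dialog: Off ]

Because the archive record captures who deleted what and when, the audit trail survives without relying on gaps in the internal record ID.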
Another IMPORTANT consideration (you may already know this): when you import the data into a newly updated file, the auto-enter serial for each primary key ("ID") has to be reset to the next highest value. If not, the minute the user starts creating new records there will likely be ID conflicts. As I always use alphanumeric keys (text) for my primary keys (except in a virtual list), I also use a numeric calc key for sorting, which can be used to get the highest value (after sorting on it) for a serial reset, for example GetAsNumber ( ID ). The reason for this is that the sort order can be very different for an alphanumeric value than for a numeric value. Every table that runs an import needs that auto-enter serial reset.
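The reset described above might look like this for one table, as a sketch (table, layout, and field names are placeholders; zID_num stands in for the numeric calc key, e.g. GetAsNumber ( ID )):

# Run once per imported table, after its import completes
Go to Layout [ "Customers" ]
Show All Records
Sort Records [ Restore ; by Customers::zID_num ; descending ]
Go to Record/Request/Page [ First ]
# If the serial carries a text prefix, rebuild the prefix around the
# incremented number before setting it
Set Next Serial Value [ Customers::ID ; Customers::zID_num + 1 ]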
Thanks for the heads up - is that what the "Auto Insert Options while importing" does? (see pic)
I was curious whether I needed to check that, but I thought it might actually reset my data. I don't want to reset my modification dates if possible, as that will mess with my sync script.
No. DANGER: you do not want to use that...