I think that behind the scenes, FileMaker couldn't care less what the name of an object is; it uses its own internal index numbers. [FileMakerMagazine.com has examples of a custom function, "Object.id", to get the internal ID and use that instead of table/layout names.] So if file A contains table ONE and you make a copy of file A, open the copy, and modify table ONE, then when you import (copy/paste) the modified table ONE back into file A, it is given a brand-new index number. Any scripts referencing fields in the original table ONE will not reference the revised table ONE, despite the name being the same. This is one reason some people like to use a data separation model. When making updates to my solutions, I have written scripts to delete all data from the new version of the tables and then import the data into the revised version. That way I can make schema changes, add layouts, etc. in the new version, and when I'm ready to install it, I run the script that moves the current data into the revised system. It's a bit of a down-and-dirty pain, but it works.
Scripts are usually not a problem: when you import scripts, the matching is done by name, not by internal ID. But pretty much everything else is matched by ID.
You can't avoid what is happening in your scenario though because you are keeping the old Table_1. A better approach would be to duplicate Table_1 and name that one Table_1_OLD and make the necessary changes to the original Table_1.
Note that the internal FM metatables "FileMaker_Tables" and "FileMaker_Fields", which you can query with ExecuteSQL(), will tell you the internal IDs, so you can use those to help you avoid the mismatches.
For instance, when you add fields to your dev copy of the solution, make sure you know what the IDs are for the fields in the production copy, so that you can create dummy fields if necessary to keep the IDs in line. Same with tables.
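As a sketch of that metatable query (column names assume the FileMaker_Fields system table as documented in recent FileMaker versions; "Table_1" is just an illustrative table name, and availability of the FieldId column may vary by version):

```
// List each field's name and internal ID for one table,
// one "name,id" pair per line.
ExecuteSQL (
    "SELECT FieldName, FieldId
     FROM FileMaker_Fields
     WHERE TableName = ?" ;
    "," ; "¶" ;
    "Table_1"
)
```

Running the same calculation in the dev and production files lets you diff the two lists and spot where the IDs have drifted apart.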
Updates at clients: from our experience, the best thing to do, although lengthy, is to install the new set of databases (call it the development version) with no data and then painstakingly import the old data into it. "New" values in globals (like containers) need special attention.
There really are only two choices:
- redo all the changes and run into the issues that planteg ran into
- or import all the data into the new "golden master", but that carries its own risks. The typical ones are:
-- making sure the full record set gets imported
-- making sure the serial #s are reset to match the ones from the production copy
-- running any data migration scripts
-- if the solution depends on globals being set (without setting them explicitly in an onOpen script) then making sure that those are properly populated
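The serial-number risk above is usually handled right after each import. A minimal sketch as FileMaker script steps (the layout and field names are illustrative, not from the original posts):

```
# After importing a table, bump its next serial value past the
# highest imported ID so new records don't collide.
Go to Layout [ "Table_1" ]
Show All Records
Sort Records [ Restore ; With dialog: Off ]   # sorted by ID, descending
Go to Record/Request/Page [ First ]
Set Next Serial Value [ Table_1::ID ; Table_1::ID + 1 ]
```

The same pattern is repeated for every table that uses an auto-enter serial number.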
The main factor for me in deciding between the two approaches is time and risk. If importing and babysitting that import is going to take longer than redoing the changes, then I will go ahead and redo the changes. I don't favor one approach over the other; I typically consider both for any given deployment and figure out which is going to be faster with the least risk.
In my case, the data was not a concern because the table was a virtual table that gets filled by script. The difference was in a bunch of calculated fields whose calculations were different. I could have copied and pasted the calculations from one database to the other, but that takes time and could lead to errors.
So as far as I understand, there is no easy way, even through scripts, in base FileMaker. Whichever way you go, it takes time and is prone to errors.
I wonder if there is a plug-in that could update fields from a reference table to a destination table? That could be worth the $$ if it saves you time. If that plug-in doesn't exist, could one be developed? In other words, does the SDK provide functions to manipulate tables?
RefreshFM from Goya??
Rather than copy a table, I copy the fields and paste them into the equivalent table on the other side.
I have an order that I do changes in.
1) Custom Functions
4) Value lists
By following that order I usually don't run into trouble, except where a layout needs certain scripts; in that case layouts appear again at #7 and I just re-copy them from the dev version.
I work from a list of changes written while changing the development version of the database, which usually has notes like:
add x fields to table y
change x calculations to table y
add x relationships
add / edit x layouts
add / edit x scripts.
The only thing that needs to be managed is making changes to existing elements that are already in use, if the database is being used live while the changes are made.
To improve your process:
5) Layouts <-- create the layout, leave it blank
7) Layout objects <-- add your layout content
If it's about new fields or other types of calculations...
How about adding the new file as an External Data Source and pointing the old TO to the new one?
Then importing the new table and pointing the old TO to the imported table.
Some cleanup has to happen afterwards, and everything needs to be well planned if tables rely on new fields from other relationships. The only thing is, you'd need to shut down the database for the process.
Depends on the scenario, and there's no one easy way. The most automated would be a scripted data migration.
Then swap the file.
Have you tried this at all or just brainstorming?
Every time you repoint a TO, FM will try to match up fields by ID, so if the field IDs are not in line, you would actually break a lot of things by doing what you propose.
Yes, I have actually. Comes from the separation model technique by Todd Geist. Here's the tube link.
I'm by far no expert, but it looks like all FileMaker cares about is the TO ID, not the field ID or where the data comes from. As long as the field names match, they will be properly displayed on the layout.
EDIT: I stand corrected, Wim, some ID matching is going on. It works only on tables whose modified/added fields stem from the original table. And apology where due.
When you split a file for the data separation model you'll be fine. But not for anything else. FM does care about the field IDs and not the field names.
When you split a file in two, then obviously the field IDs will match. But when porting changes between a dev solution and a production system, that assumption will not hold up.
Bummer. Not even a brainstorm, more a brain fart.. Face-palm.
I find this process works fine for minor changes.
I should have noted that if I'm doing major changes, then I have an "Import" script in the Dev database that:
1) Deletes all Dev data
2) Imports live data and
3) Performs whatever data manipulations necessary in the process.
I run that a few times and test it; then one night I shut the live database down, import the data, and load the Dev database onto the production server.
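The three steps of that "Import" script could be sketched like this in FileMaker script steps (all layout, file, and table names are illustrative assumptions, not from the actual solution):

```
# Step 1: delete all Dev data for this table.
Go to Layout [ "Table_1" ]
Show All Records
Delete All Records [ With dialog: Off ]

# Step 2: import the live data, matching fields by name.
Import Records [ With dialog: Off ; "Live.fmp12" ; Table: "Table_1" ;
                 Matching names ; Add remaining data as new records ]

# Step 3: perform whatever data manipulations the new schema needs,
# e.g. populate a newly added field.
Show All Records
Replace Field Contents [ With dialog: Off ; Table_1::NewField ; "default" ]
```

Repeating the block per table keeps the whole migration repeatable, which is what makes it safe to rehearse a few times before the real cut-over night.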