There are only two solutions to this I'm familiar with that work consistently:
1) On your production machine, switch the container storage from external to internal prior to copying the file down locally for migration. Move the new version up to the server, then switch the storage back to external.
2) Copy the file and the entire RC directory down locally from the production server. Rename the old version to "[databasename]_old". Rename the folder to "[databasename]_old" and repoint the old database to that. Duplicate the RC directory and remove the "_old" tag. Import records from the old database to the new one. Push the entire shebang (database and folder) back up to production.
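The rename/duplicate steps of method 2 can be sketched from the filesystem side. This is a hypothetical local sketch only: "MyDatabase" and the RC folder layout are placeholder assumptions, and the copy down from / push back up to the server (plus the repoint and record import, which happen inside FileMaker itself) are represented only as comments.

```shell
set -e
DB="MyDatabase"

# Pretend the file and its RC (remote container) directory have
# already been copied down locally from the production server:
mkdir -p "${DB}_RC"
touch "${DB}.fmp12" "${DB}_RC/container1.pdf"

# Rename the old version and its folder with the "_old" tag:
mv "${DB}.fmp12" "${DB}_old.fmp12"
mv "${DB}_RC" "${DB}_old_RC"
# (In FileMaker, repoint ${DB}_old.fmp12 at the _old folder.)

# Duplicate the RC directory and remove the "_old" tag for the new build:
cp -r "${DB}_old_RC" "${DB}_RC"
# (Import records from the old database into the new one, then push
#  the new .fmp12 and its RC folder back up to production.)
```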
Obviously, you'll need server directory access for method 2 to work. Method 1 is time-consuming and a little ugly, since you're monkeying with database schema on a hosted file. I strongly recommend performing it during an off-use period (i.e., when users are not in the database) to reduce the risk of corruption.
You can potentially use method 1 without the "while hosted" risk if you can remote into the server and run the file using a copy of Pro or Advanced that's installed directly on the server. Then, you can stop the hosting and prevent users from being able to monkey with it while you're updating the storage method.
The easiest way to find out where to put the files is to create a new record when the file is hosted and insert something into that container field. Then check where that file ended up. You can then move the other RC files into the same structure.
As to exporting/importing data as part of a migration: that can be tricky. An emerging best practice is to keep a separate "documents" file that holds the container fields, with a one-to-one relationship to the original table and record that hold the rest of the data. You then never replace that "documents" file, so the internal IDs never change.
+1 on the separate documents file.
What's weird is that putting the files in place manually worked on one server, but not the other. I think it's a permissions issue, but I'm not sure I know how to correct that. (I've already tried to unlock and give full permissions to the folders and all of the subfolders and files.) Anybody know why the names of the files appear in red even after changing the permissions?
It is my understanding that hosted external storage requires that the FM Server Admin Console be used to upload the external files when the FM file(s) is uploaded. Placing the externally stored files manually can result in FileMaker treating the externally stored container content as having been "tampered" with, so the data is no longer considered reliable.
There are no good reasons for using "open" external storage in a served/hosted system, and a lot of potential problems from people touching the external storage area instead of leaving it entirely to FMServer and FMPro to handle it.
On the topic of the separate container documents file -- the trick of offloading specific fields to a separate table -- and, with container fields, even a separate file -- is one that has been a great boon for several years. I think the idea first made sense to me after an Under the Hood session on server performance and record caching.
If you use a separate documents file, are there still benefits to storing the container data externally in the separate documents file?
If the container data does not change frequently, then internal (FMP-file-level) storage still has the benefit of speeding up incremental backups; but if that file changes between backups, then you do get better backup times from external storage.
The separate documents file makes the upload/download process from the server much simpler than external storage, but depending on how often the file is updated, backups can take a hit. One can also set a less frequent backup time on the larger file, taking it out of the backup schedule used for the other files. That will reduce its impact on the other backups.
Yes. You can greatly reduce the chances of file corruption by storing container data externally. You also reduce the size of the file, which helps with record caching (and, in certain circumstances, performance).
Stephen Huston wrote:
The separate documents file makes the upload/download process from the server much simpler than external storage, but depending on how often the file is updated, backups can take a hit.
Not sure that we are talking about the same thing, Stephen.
Even with external storage, it makes sense to keep a separate documents file. That file would hold all the container fields for each table that exists in the other files. The sole purpose of this file is that its structure and records (and thus the file itself) never get changed during version upgrades and the data migration that comes with them. That keeps the link between the record & field and its externally stored data intact. That link can otherwise get broken, orphaning the remote container data.
also +1 re separate documents file
I am not sure that I have the recommended structure straight in my mind for a documents file. Can someone comment on the below to see if I have it correct?
- ContactsTable::ID (primary key)
- You want to store a picture associated with the ContactsTable record in a container field
- CorrespondenceTable::ID (primary key)
- You want to store a PDF associated with a CorrespondenceTable record in a container field
In the RG of the Regular file, you would have TO's with relationships between them like:
1) ContactsTable::ID = Documents::ContactsPicture_fk
2) CorrespondenceTable::ID = Documents::CorrespondencePDF_fk
To show the container fields, I would just place the related fields from the Documents File on my layouts in the Regular file.
It would probably be better to have two tables (one for each parent table that will be storing container data). Represent it as a one-to-one relationship; the tables just live in separate files (in your example). Otherwise, it gets a little confusing over which container goes with which table. (Nothing says you HAVE to do it that way; I just find it cleaner.)
In other cases, it's a one-to-many relationship. But the tables still live in separate files.
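The two-separate-tables structure described above can be sketched in relational terms. This is illustrative only: FileMaker doesn't use SQL DDL, but the relationships are analogous. Table and match-field names follow the example in the thread; the BLOB columns stand in for the container fields, and SQLite is used purely as a stand-in.

```python
import sqlite3

# In-memory sketch of the "documents file" structure: one container
# table per parent table, each joined one-to-one on the parent's key.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Contacts (ID INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE Correspondence (ID INTEGER PRIMARY KEY, subject TEXT);

-- Separate table per parent; UNIQUE on the fk makes it one-to-one:
CREATE TABLE ContactPictures (
    ContactsPicture_fk INTEGER UNIQUE REFERENCES Contacts(ID),
    picture BLOB  -- stands in for the container field
);
CREATE TABLE CorrespondencePDFs (
    CorrespondencePDF_fk INTEGER UNIQUE REFERENCES Correspondence(ID),
    pdf BLOB
);
""")

con.execute("INSERT INTO Contacts VALUES (1, 'Ann')")
con.execute("INSERT INTO ContactPictures VALUES (1, x'89504E47')")

# Placing the related container on a layout is effectively this join:
row = con.execute("""
    SELECT c.name, p.picture FROM Contacts c
    JOIN ContactPictures p ON p.ContactsPicture_fk = c.ID
""").fetchone()
print(row[0])  # -> Ann
```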
I like the idea of using a separate file for container data, whether stored internally or externally. My experience has been to store it internally to simplify the downloading/uploading of the container contents from/to the server.
However, your plan of using the separate file AND external storage clearly has some different advantages: it better protects the links on the server by not needing to touch the container data file or its links when getting copies of the main data file off the server. Takes it a step further...
Images::image_id ( pk)
Documents::document_id ( pk)
I use this (images and docs tables in a media file) routinely. Where images (or docs) are stored from multiple tables and you want some extra visual comfort about where they originate, you can add a source_type_code text field and stamp it with "psn" (contact) or whatever (if your primary keys are numeric only). Or, if you use a text prefix for all primary keys, then the FK values in Images/Docs are obvious anyway.
If you are storing images from multiple source tables in the one Images table in the media file, and you use purely numeric primary keys (i.e., no alpha prefix), then the source_type_code is mandatory, so it can be used as a relationship predicate in the regular file (or as a portal filter).
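The source_type_code idea can be sketched as follows. This is a hypothetical illustration: one Images table in the media file holds rows for several source tables, and with purely numeric primary keys the foreign key alone is ambiguous, so the match must pair (source_type_code, fk), the same two predicates a multi-predicate relationship or portal filter would use. The sample codes and filenames are invented.

```python
# One shared Images table; "psn" rows come from Contacts, "inv" rows
# from Invoices (codes are placeholder assumptions).
images = [
    {"image_id": 1, "source_type_code": "psn", "fk": 10, "file": "ann.jpg"},
    {"image_id": 2, "source_type_code": "inv", "fk": 10, "file": "inv10.png"},
    {"image_id": 3, "source_type_code": "psn", "fk": 11, "file": "bob.jpg"},
]

def related_images(source_type_code, fk):
    """Both predicates are required: fk == 10 alone matches two rows
    from two different source tables."""
    return [r for r in images
            if r["source_type_code"] == source_type_code and r["fk"] == fk]

print([r["file"] for r in related_images("psn", 10)])  # -> ['ann.jpg']
```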