
Questions: Nuances of Managed Storage Container Fields [Long]

Question asked by rickenbacker360 on Mar 20, 2016

First, some background: I intend to use managed storage with the External (Open) method at the default [Hosted Location] on FileMaker Server. I have written code requiring the user to be a guest of FM Server in order to Insert File, Open, Email, and Delete the document management record containing the inserted file. The Open and Email functions use Get(TemporaryPath), and there is liberal messaging telling users that they are opening a COPY of the file, and that if they make changes they should save a copy of the file to a PERMANENT location and, from there, re-insert the newly saved file into the container field so the new file is again on FM Server. I am trying to make the solution as bullet-proof as practical, but am also mindful of the 90/10 rule.
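To illustrate what I mean by “opening a COPY,” here is a minimal sketch of the kind of Open script I am describing (the table and field names are placeholders, not my actual schema):

    Set Variable [ $path ; Value: Get ( TemporaryPath ) & GetContainerAttribute ( Docs::File ; "filename" ) ]
    Export Field Contents [ Docs::File ; “$path” ; Automatically open ]

The Email function is the same idea, using Export Field Contents with its “Create email with file as attachment” option instead of “Automatically open.”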

 

Questions:

  1. When the solution file is hosted using FM Server, I notice the “Save A Copy As…” command is grayed out. This is good. However, if FMS is brought down (or the files are closed) and the user opens the files using FM Pro, he/she will be able to use the “self-contained copy (single file)” option, thus altering the container field’s options so that managed storage is no longer used. I understand this method of saving literally deselects the “Store container data externally” parameter on the field definition’s “Storage” tab, and the overall process embeds the externally (FMS) stored data in the self-contained copy. Assuming the user properly renames the file (as part of a multi-file solution) and replaces the originating file with this self-contained copy in the served directory back on FMS:
    1. Does this self-contained copy process remove the formerly externally stored files from the “RC_Data_FMS” folder hierarchy at the moment the file is saved? Or are the files still in place on FM Server but no longer "attached" to the solution file?
    2. Is there a script step which can turn on (reselect) the “Store container data externally” parameter, and of course the “Open Storage > [Hosted Location]” option, so the self-contained copy returns to its intended behavior? (Remember, the user does not have full access to the file.) As an analogy, we do have the related “Set Next Serial Value” script step, which lets us reset serial numbers in fields when we do not have full access to field definitions.
    3. Failing that, it would seem that FMI breaks my solution. (I will forgo any editorial on whether that was a good idea.) Is my fallback method to use Custom Menus somehow to disable the “Save A Copy As…” command entirely? (I do already have a script to save a compressed copy for rebuilding indexes and making the file smaller.)
    4. Failing that, is my only option to force the user to import all data into an intact (unbroken) copy and redeploy same on FMS?
    5. Is there a “get” function to detect whether a file is now a self-contained copy? At the very least I could alert the user that the document management feature no longer works correctly, lock them out of the feature, and provide instructions on what to do (see the first sketch after this list). At this point I only see huge file size increases and failure.
  2. If User 1 attaches a file using Insert File from Computer 1, and User 2 on Computer 2 inserts a different file into a different document record, FMS now has both files in the “RC_Data_FMS” folder hierarchy. This is good; all guest machines have access to the files without regard to their originating source location(s) on Computers 1 and 2. In my solution, the same file can be attached to a found set of parents by duplicating the desired child record and populating the foreign key with the parent’s primary key. “FileName A” can thus be in 50 document records with only a single iteration present in the “RC_Data_FMS” folder. Let’s say the first iteration is attached to a child document management record of the parent with serial number 23. If my directory structure on FMS includes the subdirectory “/DOCS/DocContainer/23”, FileName A resides in it. When the user duplicates 50 iterations, the single FileName A remains, but hard links to it are made inside the 23 subdirectory. Now, in a beautiful design, FMI allows the user to delete the child document record for parent 23 without deleting the contents of folder 23 (because other child records have the same FileName A attached). In fact, the user can delete 49 of 50 child records and FileName A remains in subdirectory 23. Only when the user deletes the last document record having that iteration of “FileName A” are subdirectory 23 and FileName A deleted on FMS.
    1. Is there a way to store the originating directory from which FileName A was attached by User 1 on Computer 1 and FileName B was attached by User 2 on Computer 2? Keep in mind that two different computers might have the same directory structure (Macintosh HD/Users/Name/Documents/FileName A, for example).
    2. Is there a way to inform the user that they are about to delete the last iteration of a document record having FileName A attached/inserted (see the second sketch after this list)? Right now, I provide general instructions to first open FileName A and save a copy to a permanent location for safekeeping, the presumption being that users who attach files will believe everything is now being taken care of by FM Server and may delete FileName A at the originating source (where it resided before being inserted/attached).
    3. Is the serial number subdirectory value [ Example: (23) ] of any worth? Originally, I thought it would help the user locate the file on FM Server, but I’ve since learned that is extremely bad practice. As stated, I now use Get(TemporaryPath) to open or email a file. And, given my scenario, a user would likely be unable to locate the file anyway, for the several reasons outlined.
  3. My solution has two deployment versions: the FMS-hosted “mothership” version and the “light” version used by external field personnel to gather information such as courthouse document scans, etc. The mothership version is what I’m focused on here. However, I have a method of importing from the light version that works like this: the light version has dedicated “XP” files with complex structure (layouts, scripts, etc.) which import from the corresponding live data files. These XP files are sent to the mothership site, where FM Server closes the normally hosted files. While the files are closed, the XP files are placed into the live data (normally served) directory, replacing files of the same name, and the solution is opened on the FM Server machine using FileMaker Pro (not FileMaker Server), where the contents of the XP files are then imported into the mothership.
    1. Is there a way for the light version users to insert files into a self-contained copy, import them into a self-contained set of XP files, and then have that set of XP files import into the managed-storage mothership files, where the desired outcome is that all container contents are automatically placed into the RC_Data_FMS directory?
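Regarding question 1.5 above: one thing I have been wondering about (and would love confirmation on) is whether GetContainerAttribute with the “storageType” attribute would do the job; my understanding is that it returns values such as “Embedded”, “External (Open)”, and “External (Secure)” for a container field. Something along these lines, where the field name is again just a placeholder:

    If [ GetContainerAttribute ( Docs::File ; "storageType" ) ≠ "External (Open)" ]
        Show Custom Dialog [ "Document Management" ; "This copy of the solution is no longer using managed (external) container storage. Document management is disabled until the hosted copy is restored." ]
        # Lock the user out of the document management feature here
    End If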
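And on question 2.2: the direction I am leaning is to count, just before the delete runs, how many document records still reference the same file name and warn the user when only one remains. A rough sketch using ExecuteSQL (again, “Docs” and “FileName” are placeholder names for my document table and file-name field):

    Set Variable [ $remaining ; Value: ExecuteSQL ( "SELECT COUNT(*) FROM Docs WHERE FileName = ?" ; "" ; "" ; Docs::FileName ) ]
    If [ GetAsNumber ( $remaining ) = 1 ]
        Show Custom Dialog [ "Last Copy" ; "This is the last document record containing this file. Deleting it will also remove the file from the server, so save a copy to a permanent location first if you still need it." ]
    End If

If anyone knows a cleaner way to get at the reference count FileMaker itself keeps for the externally stored file, I am all ears.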

 

Again, sorry for the length. I tried to be concise but give enough background for thoughtful and informed answers.

 

Thanks for any help.
