You have three basic options:
1) Access the hosted database and make changes while it is hosted from the server--this is best for making small changes when no other clients are logged in.
2) Take a copy of the database and modify it as needed, then save a clone of it and import all your current data into it just before you bring it up live on the server again. You can write a script that imports records directly from the original copy, one table at a time, and that also resets serial-number fields to the correct next value. This is a script you can schedule to run overnight if there is a lot of data to import.
3) You can split your database into at least two parts. Part one is an interface file that has no data tables--just references to the second file, which holds the data. Since the large majority of database changes are changes to the interface or a script, those updates can be made to a copy of the interface file, and then the live file can simply be replaced with the new copy.
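Option 2's import-and-reset idea can be sketched in plain Python (FileMaker's actual Import Records script step has no runnable text form, so the table and field names here are made up for illustration): loop over the tables, copy records into the clone, and set each table's next serial to one past the highest imported ID.

```python
# Hypothetical model of option 2: both files are represented as
# dicts mapping table name -> list of record dicts with an "id" key.

def import_into_clone(original, clone):
    """Import every table from the original file into the clone,
    then compute the next serial value for each table."""
    next_serials = {}
    for table, records in original.items():
        # One import step per table, mirroring a scripted import loop.
        clone.setdefault(table, []).extend(records)
        # Reset the serial so new records continue after the imported ones.
        max_id = max((r["id"] for r in clone[table]), default=0)
        next_serials[table] = max_id + 1
    return next_serials

original = {"invoices": [{"id": 1}, {"id": 2}, {"id": 7}]}
clone = {"invoices": []}
print(import_into_clone(original, clone))  # {'invoices': 8}
```

In a real FileMaker script the same loop would be a chain of Import Records steps plus a serial-reset step per table, run after hours as described above.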
If you don't have access to remove and upload files, do you have access to someone that does? If so, your best bet is what Phil said: close the live file, import the data to a clone of your new file, and upload the new file. (No export needed, just import.)
You didn't say whether anyone else modifies the database. If so, their changes would be lost. However, unless you've kept very good track of your changes, what other choice is there?
If you have kept good track, it may be easier than you think to move your updates to the live system. You can copy all the objects on your new layout, go to the old layout, delete all, and paste. As Phil said, it's best not to do that while other users are logged in as it can be disruptive. And layouts would be the last step, after making changes to fields, relationships, and scripts.
The issue with the copy approach is that I would first have to have all the table structures in place.
I'm sure eventually I'll be importing into a clone, but the business owner here is a bit nervous because they have had some problems with their standard import processes.
Just a question: does anyone know of a development-environment solution that may have already incorporated ways to automate some of these things?
A long time ago I seem to remember being able to access more "meta-data"/"system data" than currently appears available. Is there some trick, or a set of "custom functions" that have been developed?
Like I've said before, I've been away from FM since v5.
With a major rewrite of a database in production, I think you would, at some point, want to use all 3 of the methods Phil proposed. The 2nd is the one you would need to start with. You absolutely need a method to import all the data, from all the tables, into a clone with the newer structure. It is also critical that it properly resets any serial IDs required (and that you're aware of the pros and cons of the 2 methods for doing so; read on).
In your situation, I would have 2 such scripted routines in place. The 1st would be a scripted method to bring the existing data, from the old structure, into the new structure. This may be a 1-time thing, but it is a long and complex 1-time thing. Setting up a scripted Import takes about as much effort as a manual one, but it is much better in many ways: you can adjust as needed, you can chain multiple imports, etc.
You'd be able to use it right away, to populate and test your new structure, so it would not have to be a 1-time thing.
One thing I have to interject here. If you have absolutely no access to open the database files locally, then you have no way to modify the existing structure, which may need a few tweaks--field changes, data movement, etc.--in order to even fit into the newer (more correct) structure. You should NOT do this remotely over a network, especially a WAN, and especially when users are logged in. If the file crashes while you're changing the structure, that file will no longer be safe to use.
The most basic access to work on the existing files would be to use screen sharing into the host machine, with the files stopped on FileMaker Server (using the Admin Console). You can then safely open them with a local copy of FileMaker Pro Advanced. Make the tweaks, quit FileMaker, reopen the files with FileMaker Server. It can be done after hours. It is dangerous if you forget to stop the files and try to open them locally while they are still hosted; but FileMaker will warn you.
It is however safe to open the hosted files using Open Remote with that local copy of FileMaker, as you are in that case just another client, even though on the same machine. The local copy of FileMaker should have Sharing Off. But you should not update structure this way (unless no one is using the files, in which case you can just stop them, so do that).
You'd also need a regular scripted routine for your new structure to update itself. This is what you'd use after your new structure is in place. Yes, it is kind of a PITA to create, and it needs to be kept updated for new fields. Do NOT delete fields after you build this, unless you update and save the Import for that table (yes, there will be multiple Import steps, one per table). Just mark fields as obsolete and reuse them.
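The "mark fields obsolete, never delete" rule can be sketched as an explicit field map per table, again in illustrative Python with made-up table and field names (a real FileMaker solution would carry this mapping in each saved Import order instead): retired fields are routed to a parked slot rather than removed, so the saved import never breaks.

```python
# Hypothetical per-table field maps: {source field: target field}.
# "fax" has been retired, so it is parked in an obsolete slot
# instead of being deleted from the map.
FIELD_MAPS = {
    "customers": {"name": "name", "phone": "phone", "fax": "zz_obsolete_1"},
}

def migrate_record(table, record):
    """Copy only mapped fields; data routed to an obsolete slot is
    kept but parked, never silently dropped."""
    mapping = FIELD_MAPS[table]
    return {target: record.get(source) for source, target in mapping.items()}

print(migrate_record("customers",
                     {"name": "Ada", "phone": "555-0100", "fax": "555-0101"}))
# {'name': 'Ada', 'phone': '555-0100', 'zz_obsolete_1': '555-0101'}
```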
It also must update the serial numbers. A few things to consider:
1. Use UIDs instead of serial IDs for those little tables at the end of the relational lines (which may still require a primary ID, but not always). UIDs do not require updating. Yes, they are a PITA to read; but these tables generally have no further relationships; it's the end of the line. I use Ray Cologon's uID_Create Custom Function, via auto-enter. I use regular serial IDs on tables where either users or I need to see and troubleshoot IDs, use them in relationships a lot, etc., i.e., most tables.
2. There are 2 basic methods. A) Use FileMaker's built-in support for serial IDs (the GetNextSerialValue function and the Set Next Serial Value script step). Or, B) go to the last record and use its ID + 1. I use method A. I also create a subscript test that goes thru again and compares the next ID to the original file, so I don't have to worry.
3. Once this is all in place, you have a reliable method to update the files. It may take some time. It is not only needed when you update a file; it is also needed every time the files on the FileMaker Server crash and burn, requiring a Recover. They can continue for a short time with a Recovered file, but not for long. You can use your "update" routine to restore all their data into a clean file in a matter of hours; worth its weight in gold.
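Method B and the verification subscript above can be modeled in a few lines of illustrative Python (the real versions would be FileMaker script steps; the table names here are invented):

```python
def next_serial_from_last(records):
    """Method B: take the last record's ID and add 1.
    An empty table starts its serial at 1."""
    return (records[-1]["id"] if records else 0) + 1

def verify_serials(old_next, new_next):
    """The 'subscript test': compare each table's next serial in the
    new file against the original, so a bad reset is caught before
    go-live. Maps table name -> True/False."""
    return {t: old_next[t] == new_next.get(t) for t in old_next}

old_next = {"invoices": 8, "payments": 3}
new_next = {"invoices": 8, "payments": 2}   # payments was reset wrong
print(verify_serials(old_next, new_next))   # {'invoices': True, 'payments': False}
```

Note the trap in method B: "last record" only equals "highest ID" if the records are sorted by ID and none have been deleted out of order, which is one reason to prefer method A plus a verification pass.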
You would also, in the long run, want to do a Separation of Data, with separate Interface and Data files. That way you can make some fairly drastic changes to layouts, scripts and relationships in your offline development INT file, then just swap it in for the existing one in a matter of minutes (after the files are stopped).
Thanks--they also have another product listed on the site that sounds like what I might be looking for too.
I have a couple of applications I'm working on personally and don't really want to reinvent the wheel, so to speak.
Thanks for all the input.
I have full access to open the file locally. There are only a couple of users currently, so getting people in and out isn't too much of an issue.
There are only a few serial IDs, and that's good to know about. (All the little things you forget about when you've been away from developing for years.)
I tend to use a lot of global fields and relationships based on calculations. Doesn't separating the UI and the data present problems in this area? At least it did some time ago. (The last version I worked on was 5.)
Your UI file would have layouts that are based on table occurrences (TOs) pointing to the data file. So no, the calculations would not be a problem.