There is no Create Field script step. You can import records and have the import create a new table, though I haven't tried that in a script. The imports I usually script bring data into an existing table and fields.
Why do you want to create a field in this manner?
First, thank you for your reply, Bruce!
We are thinking of using the separation model for a project, so every time we want to update the program we send them the "app" file only. If there's a new field added to an existing table, we want to create that field in the user's "data" file using a script (so we don't have to get their file and add the field to it ourselves).
I'd just like to see if there's an easier way of doing that!
Aside from the fact that there is no script step to create a field, there is a much bigger issue to keep in mind if you are planning to use the so-called "separation model"... FileMaker handles much of its internal field referencing by field ID, not by field name. The danger is that you could do a lot of work on an offline version, and then, when you replace only the interface file, many relationships, scripts, calculations, etc. can reference the wrong fields.
There are many ways this can occur...
1. Adding fields in a different order in the online version than the development version.
2. Creating a field, then deleting that field, in only one of the versions will also throw the sequence off.
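A rough sketch in Python of why this drift is dangerous. The IDs and mechanics here are purely illustrative (not FileMaker's actual file format); the point is that sequentially assigned internal IDs stop lining up when two copies of a file add fields in different orders:

```python
# Hypothetical sketch: two copies of a "schema" assign internal field IDs
# in creation order. If fields are added in a different order in each copy,
# the same ID points at different fields.

def build_schema(field_names):
    """Assign sequential internal IDs in creation order."""
    return {field_id: name for field_id, name in enumerate(field_names, start=1)}

# Development copy: "Discount" was added before "Tax".
dev = build_schema(["Name", "Price", "Discount", "Tax"])

# Deployed data file: same four fields, but added in the opposite order.
live = build_schema(["Name", "Price", "Tax", "Discount"])

# A calculation in the swapped-in interface file references field ID 3.
field_ref = 3
print(dev[field_ref])   # Discount  -- what the developer intended
print(live[field_ref])  # Tax       -- what actually gets referenced
```

The names match in both copies, so nothing looks wrong on a layout; only the ID-based references silently diverge.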
Hope this helps.
One way to "plan ahead" in such a situation is to add a few extra fields to your data file(s) of each type (text, number, date, etc.) and just give them generic names (spareText01, spareNumber01, whatever). Then, if you find a given upgrade only requires you to add one field to a table, you can just use the "extra" field temporarily, until you have a major upgrade, at which time you can give the field a more proper name when you do the data migration.
Separation file architectures can have practical benefits, but being able to swap out an interface file and not having to migrate data out of old data files is not one of them, in my experience. New or changed features usually mean more fields, sometimes new tables. You can mitigate this by following Mike's suggestion of adding extra fields to be renamed in a later, larger update, but I wouldn't want the responsibility of remembering to rename those fields in the future. You could also mitigate the issue by implementing an EAV model for sections of the system you anticipate are likely to change, but that has the potential to make your solution more complicated under the hood than you might otherwise like. Any project I do involving updates to an existing system usually includes migrating data out of an old copy into a newer copy using a standalone migrator file that automates the task. Unless the system is being completely restructured, it's pretty quick and straightforward.
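For anyone unfamiliar with the EAV (entity-attribute-value) pattern mentioned above, here is a minimal sketch using SQLite from Python. The table and column names are illustrative only; a FileMaker implementation (like Heather's) differs in the details, but the principle is the same: new "fields" become rows, not schema changes.

```python
# Minimal EAV sketch: attributes are stored as rows, so adding a new
# "field" never requires a schema change.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE eav (
        entity_id INTEGER,
        attribute TEXT,
        value     TEXT
    )
""")

# "Color" is a field added after initial deployment -- no ALTER TABLE needed,
# which is the appeal of EAV for schema that is expected to change.
rows = [
    (1, "Name", "Widget"),
    (1, "Price", "9.99"),
    (1, "Color", "Red"),
]
conn.executemany("INSERT INTO eav VALUES (?, ?, ?)", rows)

# Reading a record back means pivoting attribute/value rows into one record.
record = dict(conn.execute(
    "SELECT attribute, value FROM eav WHERE entity_id = ?", (1,)
))
print(record)  # {'Name': 'Widget', 'Price': '9.99', 'Color': 'Red'}
```

The trade-off is visible in that last query: every read is a pivot, every value is untyped text, and validation moves from the schema into your code, which is the "more complicated under the hood" cost mentioned above.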
Jeremy, can you also point to some good examples of FMP EAV? I know that Heather McCue covered it in a 2010 Devcon presentation.
Actually it appears it was a November 2010 Dallas FMPUG presentation according to the copy of her Keynote file that I have.
in my experience. New or changed features usually mean more fields, sometimes new tables.
In addition, calculations often need to be changed or added, and those can only be handled correctly in the data tables. While there may be "some" situations where only a script or layout needs altering, I believe a solution that works "some" of the time "is NOT a solution at all"...
I do understand that there are applications where the "Separation Model" can be used effectively (particularly in environments that can "NEVER" be taken offline)... But for more of a "Boxed Product Developer" (which I consider myself) this method is completely useless. There is no way I am going into thousands of different systems and making all the "data file" changes in each one of them.

From my point of view there is only one way to handle updates: export user data (or Save File As), then import the user data into the new version. If you set the process up correctly it works simply and reliably, and it can easily be done by the user at the click of a button. In most cases it will take from 2 to 30 minutes, depending on the client's volume... Cases where clients have many millions of records could extend into the 2-6 hour range, but they can let this type of thing run overnight, unattended... I have used this method for well over a decade and it has always served me well.
I wonder if we could collectively build an example.
Attached is a file that contains the tables defined in Heather's document, set up according to her graph.
In this starting point I have declared different tables for each of the table occurrences on Heather's graph.
I suspect that isn't correct - that some of these occurrences were really intended to be different graph instances of the same base table.
Perhaps you can comment or modify the file?
EAVBasics.fmp12.zip 13.8 K
It's rare that I would see things differently from you, but I do.
Using the separation model saves me a lot of work and it is safer (in my opinion). Most times, after the initial build, the remaining changes are to UI/developer tables and fields, or are layout- or script-based, and I rarely need to visit the Data file's field definitions. I was FORCED to use the separation model the first time by one of the top developers in this business when we collaborated on a major project. I am so happy he made me use separation, and that he refused to allow calculations until all other options were explored. If one is forced to think it through, there are usually better options than creating a calculation, which adds to field clutter and increases the time to serve up records.
Every change should be documented anyway in a Change Control file (whether collaborating on a team or as a lone developer), so if something goes buggy, you will know where to look. Therefore any changes to Data would be a simple replication of the items in Change Control marked for the DATA file. That usually takes only a few minutes to replicate; compare that with how long it takes to migrate the data. If you don't use Change Control, you can easily see changes made to Data with a tool such as http://www.fmdiff.com
I've designed with single file (5 years) and separation (7 years). Simply put, replacing the UI and replicating a few changes in Data is simpler than maintaining import maps and running a migration. And the risk ... whenever you 'move' data from one table to another (or one file to another), you increase the risk that something will go wrong: forgetting to change an import map, forgetting to set the serial (if not using UUIDs), etc. And data migrations can take a long time ... the longer it takes, the more chance of a connection issue.
I go into all my reasoning here on post #12: http://fmforums.com/forum/topic/82917-creating-a-separation-model-from-an-existing-solution/?hl=separation#entry384360
It fascinates me to see such a division of opinion on separation ... and from top developers that I greatly respect (ending on different sides)!!
It's interesting you should say that a "boxed product" would be a poor environment for Separation Model. I remember a DevCon presentation from Matt Navarre a few years back making exactly the opposite case, since many changes can be incorporated into the UI file(s) and distributed without affecting the data at all.
But I'm going to chime in here and say that, since the advent of the varied tools we've been given to move business and application logic out of the data layer and into the interface layer (Conditional Formatting, filtered portals, Bruce's excellent Virtual List technique, Merge Variables, etc.), the Separation Model has only gained in strength. It's certainly true (and I've had to do it myself) that a major upgrade will sometimes, if not often, require changes to the schema, in which case a migration strategy will be needed. Nevertheless, I have had considerable success with changes that don't require a migration with separation.
Everyone has different experiences, of course, and I would never presume to tell someone of Jeremy's caliber that he's wrong about his. But I do think Separation Model is worthwhile.
Just my $0.02.
There is no way I am going into thousands of different systems and making all the "data file" changes in each one of them.
I am not sure I understand ... why would opening Server Admin and changing out the UI, THEN making changes to its Data file, be any more strenuous? Can you explain what you mean by "thousands of different systems"? Do you have thousands of servers? Even so, you still need to change out the UI in Admin, so I don't see the issue since you'll be there anyway. Do I misunderstand your statement?
I agree that separation file architectures are worthwhile, and I personally prefer them, just not for any benefit of minimizing work during an update. It's more of an organization strategy for me, especially on systems broad enough to have multiple interface files.
Regarding maintaining a change control record vs. keeping a migration mapping up-to-date: six of one, half-dozen of the other. Unless and until better version control features become available, a record of changes needs labor to maintain, same as with a migrator tool. One method risks mistranscriptions of changes, another of neglecting to reset an import map. We'll each argue that the plausible errors of our ways of doing things are less probable or less consequential or easier to discover and faster to fix, and we'll never get anywhere for lack of evidence beyond our respective personal anecdotes.
What I do like about the migrator, however, is that I can test the migration long before the go-live date, and because the migrator file automates the whole thing, I'm certain that what worked correctly during my test will be exactly what happens when it really counts. I can practice manually copying the data file changes till my fingers bleed, and I can write myself checklists of everything to remember, but manually copying things into the existing data file will never be as reliably repeatable, because I'm a feeble human. It may be possible to automate a transfer of the features documented in a change control file with tools like Keyboard Maestro, but I trust that less (the fewer technologies involved, the better, via Occam's Razor), and it isn't a viable option if my client has a policy that their data can't leave their hardware.
LaRetta, I think you are misunderstanding what I mean by a Boxed Product Developer. As a boxed-product type of developer, the solutions I build are generally sold by companies as retail products or as part of a business solution. Basically, the developer will NEVER be in the user's system or Server Admin. All updates need to be done by the user. The developer would also NOT be involved in any part of any hosting process either.
I think it is also important to clarify a few points so people can understand where I am coming from.
1. Since FM7 I have become a firm believer in building single-file solutions. I feel it is a cleaner, less redundant and vastly more efficient method of development. The separation model (a multi-file concept) still requires some redundant relational structure in the data tables to perform calculations. Only under the threat of extreme physical torture would I ever build another multi-file solution.
2. I am the only developer that works on my solutions. While I value greatly the ideas and suggestions from other developers, I do not want other developers working in my solutions, nor do I wish to work in other developers solutions.
3. No customized solutions for any given user. The solutions have to be built so the users or resellers can customize them by modifying user data only. But there will only be ONE version of the solution; NEVER would any user have a modified version of any kind.
4. I have no interest in making detailed change logs of every item that gets changed. The way I type, it would take me 10 times longer to document, than it would to make the changes.
I respect the opinions of all the fine people here, and I know many will disagree, but I stand by my position on the separation model in a boxed product environment.