I would really like to hear ideas, suggestions, and methods for multiple
people to work on a single-file FileMaker solution. I am in that
situation now and we are struggling to find a workable method.
Thanks in advance.
You could use a FileMaker Hosting service, such as ours:
This would allow you to place your files on the remote server, then log
in with multiple FileMaker Pro clients from disparate locations to work.
Thanks for the idea, but we already have the database online. It is a question of the best method for developing the solution simultaneously. There are a variety of methods when we write programs in C++, etc., but we are at a loss with FileMaker.
Only one person can work on a layout at a time. Making schema changes can also be problematic if one person has open records on the table that is changing.
The "best" solution is to divide the work as much as possible so that there is only 1 person working on a portion of the database at a time. Multiple developers, but all working in different areas.
Alternatively, everybody works on their own copy of the file and you then have to merge them (lots of copying and pasting). I wouldn't recommend this method (been there, hated that).
FM does not provide any kind of version control system. You end up with a lot of waiting for other people to finish their tasks before committing yours.
The separation model of development is also something to look into. A single interface file with multiple, external data files.
Thanks for the tip.
Any ideas, tips, or links about separation-model development would be welcome. Like many great ideas with FileMaker, trying to separate the pieces has been very problematic for us.
One thing to be aware of: only one person can edit a script at a time. So possibly meet at the start of the day and split up the work, so that each person is responsible for a specific section, hopefully without overlap when it comes to layouts, scripts, and schema. If you can split the work up cleanly, everyone should be able to make progress.
If staff are on a layout that is being edited and then saved, they will see their screen update and jump to the default tab, if there is one.
I am new to FM Pro and have the same problem. I want to port our company's ERP system from PowerBuilder + Oracle to FileMaker Pro 12. I find that there is no separation of the data from the executable file. I worry very much that after our new FM ERP system is live with 100+ users online, how can we (5 developers) update/amend the system without the risk of messing up the live data when we change the data schema, layouts, and scripts? We normally test our new versions thoroughly before deploying (a new exe) to users. Does FM Pro 12 have a very different approach?
Should we just use FM Pro 12 as a frontend for our Oracle database? If we do so, do we forgo a lot of FM Pro 12 functionalities like quick find? What will be the trade-off?
Investigate the separation method, where the data is in one FM12 (or 7-11) file and the GUI is in another FM12 (or 7-11) file. It makes for easier updates, rather than exporting and re-importing every time there is a fix.
The separation model offers you a lot of scope. The most common example is one user interface file with a data file that contains all the tables. You may assign large data tables to be one table per file, simply to assist housekeeping.
The UI may be composed of several files too. Many FileMaker apps are omnibus applications that allow you to do everything. You can separate the UI into functional modules. It makes it very easy to control access to data and to special functionality when you break the app into smaller pieces.
Also, remember that having separation means that you are using External Data Sources. The link between the UI component and the data is kept in the external data source, which is simply a text string. You can develop locally, with or without FileMaker Server, using small sets of dummy data. To do real-world testing, or when you are ready to go live, you can edit the external data source, changing the reference from the local file to the live data files.
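As a rough sketch of what that text string looks like (the file name and host here are hypothetical, not from the posts above), an entry in Manage External Data Sources is just a path; a local file uses the file: form and a hosted file uses the fmnet: form, and switching between development and live data is a matter of editing which path the data source points at:

```
file:DataFile_Dev              # local copy with dummy data, for development
fmnet:/fms.example.com/DataFile   # same tables hosted on the live server
```

A data source can also list several paths, which FileMaker tries in order, so some developers keep both forms in one data source and rely on which file is reachable.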
The separation model is often promoted as a way to provide functionality updates to a live solution by swapping out the UI files without swapping out the data files. In my experience, though, most functionality updates involve handling additional or different data, i.e., schema changes, so it's rarely possible to deploy new functionality more substantial than cosmetic changes without importing data from older files into newer files as part of the process. The separation model can still make that easier by splitting your systems into smaller chunks that can be updated independently without having to migrate existing data into tables that didn't change as part of any given update.
I find that the separation model really shines in making work easier on large systems as a way to keep developers from stepping on each other's toes. If there are multiple groups of users who need to use a system, I encourage you to consider building a separate UI file for each job. Consider a system for fulfilling orders: customer service reps enter orders and update customers on the status of one order at a time, pickers pull the items for each order together, shippers box everything up and hand it off to UPS, and managers need to see the big picture. Those could be 4 separate UI files accessing the same order data, and developers assigned to work on one of them at a time won't be locking up each other's scripts, layouts, or relationship graphs. You could do something similar on the data side, like splitting order and inventory data into separate files so they can be worked on and updated independently of each other. I don't have as nice a heuristic for when this might be a good idea, but think about what parts of the data schema are likely to be worked on by developers who are not in close communication with each other, and do what you have to do to keep them from schema-locking each other out of doing their jobs.
This also splits very large systems into more tractable chunks, which as an organizational tool makes it much easier to find everything than running the whole organization on one monolithic UI file.
Quoting the reply above: "In my experience, though, most functionality updates involve handling additional or different data, i.e., schema changes, so it's rarely possible to deploy new functionality more substantial than cosmetic changes without importing data from older files into newer files as part of the process."
In the book "FileMaker 12 In Depth" by Jesse Feiler, it states that "...one of the best (and often unsung) features of FileMaker is that database schema changes can be made while the database is live, on a server, as other users are in the system. This capability is an extraordinary boon for FileMaker developers and will make a real difference in all of our lives."
Do you agree? Do you think this capability is real and reliable? If it is, then exporting and re-importing data from old to new tables will probably not be necessary in most cases. Agree? It seems too good to be true. I wonder if anyone has hands-on experience of changing the schema while the database is live.
I imagine most FileMaker developers have hands-on experience changing schema when a database is in production use. It's a real ability, but there are important caveats due to "schema locking." Only one developer at a time can have "Manage Database" open (i.e., be editing the schema) for any given file. Once a developer starts modifying schema for a particular table, users of the live system will not be able to do certain routine operations using that table. In many organizations, this kind of random-seeming unannounced downtime is not acceptable. For many other organizations, there are strict procedures where any changes to software must be built and tested in separate environments before being deployed at times scheduled and announced in advance to ensure software quality and to avoid schema locking on live systems. In short, FileMaker can edit schema on a running database, and that's great for making development faster and easier, but it's usually not a good idea while users are running sensitive applications.
Yes, you can make schema changes to a live file. There are caveats though.
If you attempt to save changes to a table that has open records, you'll get an error message preventing you from saving.
If, while you are saving changes, a user attempts to open a record, they'll get an error (this includes scripts).
If, after making edits to the schema, your network connection goes wonky or your machine crashes, you can corrupt your database rendering the current file unusable.
In general, I think you can safely make schema changes if you have fewer than 5 simultaneous users. With more than that, it's too easy to step on users' toes. You need a rock-solid connection (the recommended approach is to develop from a client on the same machine running Server), you should make changes quickly, do them when traffic is low, and make backups frequently.
It depends on the nature of the system you're developing. Will you ALWAYS be on-site? You don't want to test changes on live data unless it isn't
I'm not always on-site and keep a local copy of the DBs to test on. Every project has changes and requests that come up down the road. I'm able to do most scripting changes on my local copy, and if I need to add one or two fields, I can do it the next time I'm on-site or logged in remotely via
For a separate commercial FM-based solution, sending out updates with a new
GUI file has been tremendous.
I agree with what David's saying above, except that I have successfully made live schema changes with more than 5 users (maybe 30-40). But that's only something I can do very quickly. For schema changes that will take time in Manage Database, I wait until after hours (do you have an after-hours option, or is the facility 24/7?). David's advice to make changes from a client on the server and to make frequent backups is particularly important.
I recently saw something somewhere (sorry, don't have the reference handy) where someone documented what can go wrong when you're in Manage Database with live users. The key finding was that the New Record command, whether run via script, menu, or button, will fail if you're in Manage Database and have opened an Options dialog. This can be particularly troublesome if someone runs a script that includes the New Record step: that step will fail, subsequent steps will proceed, and if you do some Set Field steps after the New Record, you'll now be changing data in the wrong record. Not good! The person found, however, that if you don't open the Options dialog, or if you're just making changes to the Relationship Graph, the New Record step does NOT fail.
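One defensive pattern against exactly this failure mode (my own sketch in FileMaker script-step form, not something from the post above; the table and field names are hypothetical) is to check Get ( LastError ) immediately after New Record/Request, so that later Set Field steps can never land on the wrong record:

```
# Hypothetical guarded record creation
Set Error Capture [ On ]          # suppress FileMaker's own error dialog
New Record/Request
If [ Get ( LastError ) ≠ 0 ]      # nonzero means the new record was not created
    Show Custom Dialog [ "Error" ; "Could not create a new record. Please try again." ]
    Halt Script                   # stop before any Set Field can touch the wrong record
End If
Set Field [ Orders::Status ; "New" ]   # only runs once the new record exists
```

The key design point is that Set Error Capture [On] keeps the script in control of the failure, and Halt Script guarantees no subsequent step runs against whatever record happened to be active.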
There are people in this forum who will tell you to absolutely, positively never ever make schema changes on a live database. Instead, I think it wise to do a risk-benefit analysis. There absolutely are risks to schema changes in a live database. Some of the risks are minor: for example, if someone has an open record when you try to exit Manage Database, you'll be blocked from doing so and will have to try again later. Some are major, like the example above where data gets changed in the wrong record. I've definitely been bitten by these issues over the years, but it's been very infrequent, and the consequences, especially considering that I run a robust backup strategy on all my clients' servers, have been minimal.
On the flip side, the advantages to my clients and me to develop live are huge. Doing everything after hours likely won't appeal to your team of programmers. Doing everything in backups then transferring schema changes over means more development time and cost. The separation model can be useful, but it requires more development time and it has, as referenced above, its limitations.
One of my favorite things to do in a live system is create a new script and any layouts it requires. I'll put the button that triggers the script on screen, but the first script step calls a subscript that checks whether the user has [Full Access] privileges; if not, it halts with a message that the button is in testing. Then I run a backup of the live database, copy the backup out of the Backups folder, rename it with today's date (this assumes a single-file, multi-table database), and test the script in the backup. I prefer this to developing in a backup and then moving code over, because it takes less time and you don't have to worry about forgetting to move something you developed in the backup.
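The guard subscript described above might look something like this (a sketch only; the script name and dialog text are my assumptions, not from the post):

```
# Hypothetical subscript: "Dev Only Guard"
# Called as the first step of any script that is still in testing.
If [ Get ( AccountPrivilegeSetName ) ≠ "[Full Access]" ]
    Show Custom Dialog [ "In testing" ; "This button is in testing and not yet available." ]
    Halt Script                   # non-developers never reach the rest of the script
End If
```

Because Halt Script stops the entire script chain, placing this call at the top of a parent script is enough; none of the steps after it will run for a non-developer account.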
Ann Arbor, MI