FileMaker 11 does not offer any improvements to the Import Records process.
Nor does FileMaker have any native capability to resume an interrupted import. If the records in your import file are sorted by a primary key, though, and you have a way to remove a selected range of data, you may be able to use the primary key value of the last imported record to find and remove the already-imported range from the import file before importing from that file again.
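That trimming step can be done outside FileMaker. As a rough sketch (assuming a CSV import file sorted by a numeric `id` primary key; the field name and helper function are hypothetical, not anything FileMaker provides), a small script could keep only the rows that come after the last successfully imported key:

```python
import csv
import io

def rows_after_key(csv_text, key_field, last_imported_key):
    """Yield only the rows whose primary key comes after the last
    imported key, assuming the file is sorted by that key."""
    reader = csv.DictReader(io.StringIO(csv_text))
    for row in reader:
        # Keys compared numerically; adjust if your keys are strings.
        if int(row[key_field]) > int(last_imported_key):
            yield row

# Example: the import was interrupted after the record with id 2.
data = "id,name\n1,Alice\n2,Bob\n3,Carol\n"
remaining = list(rows_after_key(data, "id", 2))
# remaining now holds only the row with id 3
```

You would then write the remaining rows back out to a new file and point the import at that.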
Things that can increase the time per record of an import:
- Lots of fields with indexing turned on
- Lots of stored, indexed calculation fields
- Importing with auto-entry options enabled on a table whose fields have lots of auto-enter settings
- Lots of fields with "validate always" validation rules
If you can remove or reduce those features for your import target table, you can increase the rate at which the data is imported. Of course, the design requirements of your database may keep such changes from being a practical option.
Here's an idea; I can't confirm it's 100% safe, though I have run both at once. If you have FileMaker 10 and FileMaker 11 and the file is hosted by FileMaker Server, then you could open the files remotely in each version. You could NOT have FileMaker sharing turned on in either client, as there would be a port conflict; both use port 5003 to get the data.
If all of this is on just your computer and you do not have FileMaker Server, you can get a user-limited copy by joining FileMaker's TechNet discussion group ($99/year).
Alternatively, if you can put the files to import somewhere FileMaker Server can access (its own Documents folder), then you could set up a scheduled import (theoretically; I haven't tried it). Or, if the source is a FileMaker file, just a regular scheduled script. That would be the least demanding on your resources, since FileMaker Server has its own. Then you wouldn't need two versions of FileMaker running.
But really, at this point, it would be easier to just use another machine.
Phil - thanks for getting back to me. To keep things as stripped down as possible in this dbase, I do all formatting, calcs, etc. in a separate dbase. The main dbase has no auto-enter calculations, no validation rules, and only two fields with minimal indexing. BUT, I am importing several million records at a time, and my experience has been that once you get above about 5 million records, even minimal indexing really slows FMP down.
I wish FMP offered a way to create a multi-GB RAM cache just for indexing. That alone might save significant amounts of time. It seems that once FMP has done the indexing the performance is usually fine. But the indexing times can be just brutal.
Fenton - Thanks for the info and suggestions. Currently I am just running FMP 10 locally on my computer. No server. But I will look into some of the suggestions you made. And I think you may be right - at some point this may simply require a dedicated machine.
Much as I like FileMaker, it isn't the best software for every situation. This may be a case where you want to use other database software for your back end and perhaps keep FileMaker as your front end. That's one of the things you can do with an ODBC link between FileMaker and other database applications.
I have a question. I am a beginner using FileMaker 11 on Windows XP. If I want a FileMaker database to update instantaneously with info from another database, would I need to create an ODBC relationship?
I lack sufficient experience using ODBC with FileMaker to answer your question. It's a capability I'm aware of, but I haven't needed it in any of my projects.
Phil - Actually I was thinking the same thing, so I started to build the dbase in MySQL. I figured that I would use FMP as a front end, giving me the ease of use of FMP with the power of MySQL. But after weeks of learning how to set it up (MySQL is many things, but intuitive is not one of them) I was very much surprised and disappointed to find that MySQL also takes a really long time to build indexes for large data sets.
MySQL offers a host of configuration options for memory caches, etc. that FMP does not. So it's entirely possible that there is some secret recipe that I didn't figure out. And I think that it's possible that MySQL was in fact marginally faster at building the indexes. But all told, the difference was not enough to justify battling that learning curve.
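For anyone who ends up down the same road: the memory settings that most affect index builds live in MySQL's configuration file (my.cnf on Unix, my.ini on Windows). A sketch of the relevant knobs follows; the sizes are placeholders only, to be tuned to your machine's RAM, and this is not a claimed "secret recipe":

```ini
# my.cnf -- example values only; size these to your available RAM
[mysqld]
# InnoDB: main cache for data and indexes
innodb_buffer_pool_size = 4G
# MyISAM: index-block cache
key_buffer_size = 1G
# MyISAM: buffer used when rebuilding indexes (REPAIR/ALTER/LOAD)
myisam_sort_buffer_size = 1G
# Per-session sort buffer
sort_buffer_size = 64M
```

Which of these matters depends on whether the tables are InnoDB or MyISAM; for multi-million-row index builds the buffer pool (InnoDB) or the sort buffer (MyISAM) is usually the one doing the work.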
Actually I remain somewhat shocked by this. I know that MySQL is used by many organizations for really large, complex dbases. And I know that MySQL has a reputation for being really fast with large data sets. For queries this may be true. But for building indexes I didn't see the speed. And because of the nature of my dbase, it's only the indexes that are an issue for me.
So I'm back in FMP. I'm fighting the indexing speed, but at least I have some sense of how to actually use the program.
Rafael - I have very limited experience with this as well, so your best bet is to start a new thread. I do know that if you use FMP as a front end for another dbase, then you can set up the other dbase so that it shows as another table within a FMP dbase. As far as I know, when you conduct searches and other functions using FMP, the commands are sent to the external dbase program which actually then runs the query and sends back the results to FMP.
Is it "instantaneous" like you said? Well, no, but then nothing is. But if you have made changes in the external dbase, FMP doesn't necessarily need to be "updated", because it just sends information back and forth to the external dbase.
That having been said, there may in fact be situations in which data from the external dbase is imported into FMP, either to update FMP data or for other reasons. That would most definitely not be instantaneous, but with small enough data sets it shouldn't be too bad. Again, though, I have no experience with that, so you should start a separate thread.
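The round trip described above can be pictured with any SQL backend. Here is a minimal sketch using Python's built-in sqlite3 as a stand-in for the external dbase (the table and data are made up for illustration; FMP's actual external-source setup is configured in its GUI, not in code):

```python
import sqlite3

# The "external dbase": it holds the data and runs the queries.
backend = sqlite3.connect(":memory:")
backend.execute("CREATE TABLE contacts (id INTEGER PRIMARY KEY, name TEXT)")
backend.executemany(
    "INSERT INTO contacts (name) VALUES (?)",
    [("Alice",), ("Bob",), ("Carol",)],
)
backend.commit()

# The "front end": it sends a query and just displays whatever
# comes back -- nothing is copied or re-imported locally.
rows = backend.execute(
    "SELECT name FROM contacts WHERE name LIKE 'B%'"
).fetchall()
print(rows)  # [('Bob',)]
```

The point is only that the query runs where the data lives; the front end stays thin.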
I don't know if this would work, but have you tried importing your data while logged in to your computer as a different user, running FM as your actual user?
Thanks for the advice Michael =)