Short answer: Yes. Sort of.
In order to do a direct ODBC import, you need the DSN on the client. Using the DSN on the server works only for ESS connections.
However, this opens up another possibility: you can do the import from an ESS table occurrence into the FileMaker table. Select the FileMaker file itself as both the source and destination file, then use the ESS table as the source table and the FileMaker table as the destination. Performance may be questionable, so you'll have to experiment.
Thanks Mike. That is exactly what I wanted to do: import into the solution on the server, not on each individual client. I'll give it a shot and see how slow it is.
That's not going to work, because FileMaker Server cannot import data from another FileMaker file (c'mon FileMaker, it's about time you allowed this).
To get this to work, you'll have to create the DSN on your client and give it exactly the same name as the DSN on the FileMaker server. Build your import script in your FileMaker Pro client, and once you've tested it, open your Admin Console and create a schedule that fires the import script on whatever schedule is required.
We do this on a nightly basis with dozens of tables and millions of records and it works very well.
Missed the part about wanting to run the script on the server. So PowerSlave is correct about that part.
But you don't have to install the DSN on the robot if you use an ESS occurrence. So you can do what I suggested, or you can install the DSN on the client robot. Pick your poison.
Thanks PowerSlave. I found the same shortly after reading Mike's post.
I understand your suggestion: create the DSN on my machine, write and test the script, then upload and schedule it. I don't have the DSN details, but I'll try to get those and test. The group that owns the DB is a super-secret society. Blood in, blood out kinda thing. Stupid, because I can see the data online!
Mike - not sure I follow you. Are you saying that in the Import Records step I should choose the file and then use the FM file as both source and destination? Is one way more efficient than the other?
Yes, exactly: have the file import into itself, with the ESS table as the source and the FileMaker table as the destination.
As for which one is more efficient, it depends on how the Oracle views are built, network speed, indexing, etc. There are a lot of variables. But I've found a direct ODBC import is often faster.
Your results may vary.
I'm trying to set up the ESS import, and in the Import Records step I'm at the dialog where you match up the fields. It keeps popping up a Find, as if it's running a find on every field. Below is what I keep seeing. Could this be because no primary key has been created in the Oracle view? When adding the table to my relationship graph, I had to select the field I wanted to use. It took several minutes to get to the point where I could match up the fields.
Forget using ESS - with some DBs, ESS won't use the index. Instead, use the method I described above: import via ODBC and use your SQL query to pull only the data you require. It's much easier and more efficient.
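For illustration, the query you supply to the ODBC import might look something like this - the view, column names, and date filter here are hypothetical, so substitute your own:

```sql
-- Pull only the columns (and, if you like, only the rows) the
-- report actually needs, instead of letting ESS fetch everything.
SELECT ticket_id,
       status,
       problem_description,
       customer_symptom
FROM   ticket_view
WHERE  last_modified >= TRUNC(SYSDATE) - 1  -- e.g., only the last day's changes for a nightly run
```

Restricting the column list and row set this way is exactly what avoids the fetch-every-field-of-every-record behavior that makes wide tables slow.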
This may not be an indexing issue. FileMaker is doing what it normally does: fetching the records in batches of 25. (I've seen this behavior on large Oracle databases even where indexes were in use.) The catch is that it's fetching every field in every record, so if your back-end Oracle table is large, it will take quite a while because of network bandwidth.
You might be able to speed this up if you go into Manage Database and remove any fields you're not using from the shadow table. I'm not sure if that would help or not, but it might reduce the bandwidth required to fetch the records.
Of course, if you run the ODBC import as a server-side script, as PowerSlave suggests, then you might be able to dispense with the network bandwidth issue altogether - which will make everything much faster and prevent your having to install the DSN on the client. Can you remote into the server, by any chance, to get the script working that way? (Probably not, if your server group are the Illuminati, but I thought I'd ask anyway.)
Thanks Mike, still waiting on the details to create the DSN on my machine. They won't allow access to the actual server.
When adding the view to my RG, I found another view that has a primary key. I added it to my solution and pulled up all the records on a layout. There are 30 fields and 2 million records, and this view pulls up immediately. Any idea why this would be so different from the view they built for my needs?
Well, it depends.
How many fields are in the view you're having trouble with?
Are you having trouble with doing the same thing - putting the fields on a layout and just looking at it - with that view too?
FileMaker will cache the first 25 records immediately, and will pull down additional records as it needs them. 30 fields is pretty much nothing, so you'll see an update immediately if you pull 25 records of 30 fields each. However, if you try to pull 2 million records of 30 fields each, it'll take a while - and that's what happens if you're trying to run an import. If the other view has, say, 100 fields, then that will compound the problem, especially if there's a significant amount of data in those fields.
Fast table: 30 fields, 2 million records. 5 seconds to load all fields in table view.
Slow table: 110 fields, 15 removed leaving 95, with <32,400 records. 1:52 to load the 95 fields in table view.
The data is similar but a few of the fields from the slow table have more data in them. Things like a problem description or customer symptom.
I'm hoping setting up the ODBC import like you guys recommended is a game changer!
The problem is the width of the table. 95 fields is a lot. You need all 95?
Table view only loads as many records as will fit on the screen. It'll load more as they scroll into view. So the problem is most likely coming from the number of fields.
Still and all, if you want to run this server-side, the best option will be an ODBC import. Or get rid of the extra fields if you don't need them.
I do. My solution is a reporting tool for trouble tickets.
I also pinged my FM sales expert, and her response was:
Fundamentally, the slow table is slow because there is no indexed unique row identifier (single field or combination of fields) which is what FileMaker relies on for ESS. The amount of data in each row will have a small impact on performance, but not to the degree you have observed.
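If the DBA group is willing, one way to address that on the Oracle side - a sketch only, with hypothetical table and column names - is to make sure the view exposes the base table's indexed primary key, so ESS has a unique row identifier to work from:

```sql
-- Expose the base table's indexed primary key (ticket_id here)
-- in the view; FileMaker can then use it as the unique row
-- identifier that ESS relies on.
CREATE OR REPLACE VIEW ticket_report_v AS
SELECT t.ticket_id,          -- backed by the table's primary-key index
       t.status,
       t.problem_description,
       t.customer_symptom
FROM   tickets t;
```

Whether this helps depends on how the existing view is built, but it is the kind of change you'd be asking the owning group for.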
I'll update you guys once I can get the ODBC import in place.