xDBC listener "stalls" (JDBC, FMS 11 on Mac)
We have to integrate a very large FileMaker legacy solution (record counts in the seven figures, database sizes in the gigabytes, around 60 files) into a MySQL-based system. We are now trying to set up an FM -> MySQL replication process.
The JDBC client driver works quite well feature-wise, but we are facing severe performance degradation over time. We are using the new driver from 21.12. I cannot say that it is leaking memory; the memory usage reported by Activity Monitor remains quite stable. The problem is that FileMaker stops executing queries after some time, while the listener still sits at 100% CPU. We can fix this easily by restarting the JDBC listener, but that is not an option: the client won't get through all records before it hangs, and for integrity reasons we need this process to run to completion. Is there a workaround for this behaviour? Is it possible to modify JVM parameters for the JDBC listener, assuming it is Java-based? (The obfuscation of the system doesn't make integration work easier, by the way.)
The query schema is as follows:
First select all primary keys in the table.
Then, ensure that a specific "last modified" field is set on the source table by issuing an UPDATE ... SET query.
Next, query the records with SELECT ... WHERE primarykey IN (...), batching the queries to at most 100 records each. We found that querying all records at once loads all of them at query execution time. This is probably a severe bug in the driver, as it is supposed to load the actual data only when the result set's cursor is moved; the JDBC API specifies setFetchSize for exactly this, but the driver seems to ignore it. FileMaker also doesn't support LIMIT and OFFSET queries. We know that the SELECT ... WHERE pk IN (...) approach is not optimal, but we didn't find another way to load our data: if all records are immediately loaded and held in memory, the heap explodes and we ultimately get an OutOfMemoryError, even with a 2 GB heap. Is there another way to do batched querying?
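For reference, the chunked IN-clause fetch looks roughly like this (a sketch, not our production code; table and column names such as "People" and "id" are placeholders):

```java
import java.sql.*;
import java.util.*;

public class BatchedFetch {
    // Split the full primary-key list into chunks of at most batchSize keys.
    static <T> List<List<T>> chunks(List<T> keys, int batchSize) {
        List<List<T>> out = new ArrayList<>();
        for (int i = 0; i < keys.size(); i += batchSize) {
            out.add(keys.subList(i, Math.min(i + batchSize, keys.size())));
        }
        return out;
    }

    // Build a parameterized IN clause: SELECT * FROM t WHERE pk IN (?,?,...,?).
    static String inQuery(String table, String pkColumn, int n) {
        StringBuilder sb = new StringBuilder(
                "SELECT * FROM " + table + " WHERE " + pkColumn + " IN (");
        for (int i = 0; i < n; i++) sb.append(i == 0 ? "?" : ",?");
        return sb.append(")").toString();
    }

    // Fetch one chunk from the FileMaker connection (error handling omitted).
    static void fetchChunk(Connection con, String table, String pk,
                           List<Object> chunk) throws SQLException {
        try (PreparedStatement ps = con.prepareStatement(inQuery(table, pk, chunk.size()))) {
            for (int i = 0; i < chunk.size(); i++) ps.setObject(i + 1, chunk.get(i));
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // map the row and hand it off to the MySQL writer
                }
            }
        }
    }
}
```

Keeping each IN list at 100 keys bounds how much the driver can materialize per query, which is the only lever we found given that setFetchSize is ignored.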
We then insert the records into MySQL - which is still lightning fast compared to FileMaker.
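On the MySQL side we write each chunk with JDBC statement batching, roughly as below (a sketch; "target_table" and its columns are placeholders):

```java
import java.sql.*;
import java.util.List;

public class MysqlWriter {
    // Build "INSERT INTO t (a, b) VALUES (?, ?)" for the given columns.
    static String insertSql(String table, String... cols) {
        StringBuilder names = new StringBuilder("INSERT INTO " + table + " (");
        StringBuilder vals = new StringBuilder(") VALUES (");
        for (int i = 0; i < cols.length; i++) {
            if (i > 0) { names.append(", "); vals.append(", "); }
            names.append(cols[i]);
            vals.append("?");
        }
        return names.append(vals).append(")").toString();
    }

    // Insert one chunk of rows into MySQL using JDBC batching.
    static void writeBatch(Connection con, List<Object[]> rows) throws SQLException {
        String sql = insertSql("target_table", "pk", "payload", "last_modified");
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            for (Object[] row : rows) {
                for (int i = 0; i < row.length; i++) ps.setObject(i + 1, row[i]);
                ps.addBatch();
            }
            ps.executeBatch();
        }
    }
}
```

With rewriteBatchedStatements=true in the Connector/J URL, the driver rewrites such batches into multi-row INSERTs, which speeds up the MySQL side further.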
Last, we set a lastSyncAt field in a special metadata table that exists per FileMaker table, noting the last successful sync. We do this by querying the metadata table to check whether a metadata record already exists for the record we just copied from the source table; if so, we UPDATE it, otherwise we INSERT a new one. This enables us to do incremental updates later.
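If the metadata table lives on the MySQL side, the SELECT-then-UPDATE-or-INSERT round trip can be collapsed into a single MySQL upsert. A sketch under that assumption ("sync_meta", "source_pk", and "last_sync_at" are made-up names; a unique key on source_pk is required):

```java
import java.sql.*;

public class SyncMetadata {
    // One statement replaces the check-then-write round trip on MySQL:
    // INSERT a new metadata row, or refresh last_sync_at if one exists.
    static String upsertSql(String metaTable) {
        return "INSERT INTO " + metaTable + " (source_pk, last_sync_at) VALUES (?, NOW())"
             + " ON DUPLICATE KEY UPDATE last_sync_at = NOW()";
    }

    static void markSynced(Connection con, String metaTable, Object pk) throws SQLException {
        try (PreparedStatement ps = con.prepareStatement(upsertSql(metaTable))) {
            ps.setObject(1, pk);
            ps.executeUpdate();
        }
    }
}
```

Besides saving a round trip per record, the single statement avoids the race between the existence check and the write.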
Is there an update for the listener available? We would also be willing to use a prerelease version, if one exists.
Thanks for your time and support,
pme Familienservice GmbH