I’ve been working on a FileMaker–Mailchimp integration, though this question is mostly about JSON handling in FileMaker 16, which I have no prior experience with.
Using the Mailchimp REST API, I’ve worked out how to retrieve batches of subscriber records that have been added or changed within a given time frame, e.g. the last day. Once the JSON is dumped into a FileMaker field by the Insert from URL script step, I parse it into a holding table, then run a comparison script between the holding table and our local records and apply any necessary changes (add new subscribers, remove unsubscribers, update subscribers with new details).
I notice that parsing small JSON chunks into the FileMaker table is pretty quick (e.g. 4 subscriber records per second on my laptop with the test file hosted locally), but as the number of records grows, the total time rises much faster than linearly, particularly once you go past 500 Mailchimp subscribers in a batch:
| Subscriber records downloaded | Download time via API (s) | Parse time into FM table (s) | Parse time per subscriber (s) |
|---|---|---|---|
| 400 | 31 | 390 (6.5 min) | 0.975 |
| 500 | 42 | 600 (10 min) | 1.2 |
| 1000 | 185 | 2820 (47 min) | 2.82 |
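One likely cause of the superlinear growth (an assumption on my part, since I can't see the screenshot): if every field of every record is read with `JSONGetElement` against the full downloaded text, FileMaker has to re-parse the entire JSON document on each call, so per-record cost grows with total batch size. Extracting each record's object once into a variable, then reading fields from that small chunk, keeps each subsequent parse proportional to the record size. A minimal sketch (the `$json`, `$i`, and `Subscribers::` names are hypothetical):

```
# $json holds the full Mailchimp response; $i is the current record index (0-based).
# One parse of the full document per record...
Set Variable [ $member ; Value: JSONGetElement ( $json ; "members[" & $i & "]" ) ]
# ...then every field read only parses the small per-record chunk:
Set Field [ Subscribers::email ; JSONGetElement ( $member ; "email_address" ) ]
Set Field [ Subscribers::status ; JSONGetElement ( $member ; "status" ) ]
```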
Can someone please review the code snippet in the following screenshot and tell me whether I’m approaching this parsing process correctly, or whether there is a more efficient way to achieve the same result?
Secondly, is my way of counting the number of records in the retrieved JSON (line 1 of the screenshot) the best approach?
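For reference, the counting approach I'm asking about amounts to this (a sketch, assuming the array key in the response is `members`). Note that Mailchimp's response also carries a top-level `total_items`, but that reports the total matching the query across all pages, not just the records in the current batch:

```
# Count of objects actually returned in this batch:
Set Variable [ $count ; Value: ValueCount ( JSONListKeys ( $json ; "members" ) ) ]
# Total matching records on Mailchimp's side (may exceed $count):
Set Variable [ $total ; Value: JSONGetElement ( $json ; "total_items" ) ]
```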
I could retrieve smaller chunks of JSON from Mailchimp (using the API's count and offset pagination parameters) and parse them in FileMaker one after the other; however, each call to Mailchimp carries quite an overhead, and the retrieval times vary quite a bit, as the table above shows. Put another way, pulling a large batch from Mailchimp is only slightly slower than pulling a small one.
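If I did go the paginated route, the loop would look something like this (a sketch only; the `<dc>` and `<list_id>` placeholders, variable names, and chunk size are assumptions, and authentication via cURL options is omitted):

```
Set Variable [ $offset ; Value: 0 ]
Set Variable [ $count ; Value: 200 ]
Loop
  # Fetch one page of subscribers changed since $since:
  Insert from URL [ $json ;
    "https://<dc>.api.mailchimp.com/3.0/lists/<list_id>/members" &
    "?count=" & $count & "&offset=" & $offset &
    "&since_last_changed=" & $since ]
  # ...parse this chunk into the holding table here...
  Set Variable [ $returned ; Value: ValueCount ( JSONListKeys ( $json ; "members" ) ) ]
  # A short page means we've reached the end:
  Exit Loop If [ $returned < $count ]
  Set Variable [ $offset ; Value: $offset + $count ]
End Loop
```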
Thanks for any help you can offer!