There are a number of plugins that can parse JSON in FileMaker.
But besides that, you also have a few FileMaker-native modules that you can use.
Here you can read a very good article about parsing JSON into FileMaker.
I'm not sure this is the best way (maybe not).
But if you think that storing each date/value pair in its own FileMaker record is not a good idea,
how about this?
/* In JS, convert the date/value pairs into two parallel arrays */
var dateArray = [date1,date2,....];
var valueArray = [value1,value2,....];
/* add additional info if you want */
/* Save this to FileMaker, using FMPURL or some other technique*/
Attachment: JSON_SVG_Chart.fmp12.zip (78.8 K)
I think storing every datapoint in a separate record is precisely the way it should be done. Datapoints are the basic elements (i.e. records in a database) in this problem, and using them as records is the logical approach ...
I use a similar approach to process a directory listing from a webserver, to get a list of files, dates and sizes, and (crucially) the full path name. The directory listing is a little over 355,000 lines (about 20MB of data), and I repeat this process for every update of the website (from 1 to maybe 6 times a day) ... On a MacBook Pro (2.2GHz quad core i7) this takes a little over 30 minutes ... using the database even with 355,000 records is no problem whatsoever ... (I even have FileMaker databases with more than 5,000,000 records ...)
The most difficult part is extending the listing from filenames grouped by folder to full pathnames for every file. You have a similar problem to extend the "header information" to be included in every record (tags, devices, ...) with the datapoints. If you structure the database this way records are in fact very simple and short, and the solution can be extended to include more tags and devices without the need to restructure the database or the processing ...
You can just select whatever devices/tags are of interest for a specific reporting ...
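The "fill-down" step described above (carrying the current folder header, or any other header info, into every line that follows it) can be sketched in Python, which the original poster mentioned using. The listing format here is a made-up assumption, with folder headers ending in a colon; the real server's format will differ:

```python
# Expand a directory listing grouped by folder into full pathnames.
# Assumed (hypothetical) format: folder headers end with ":",
# file lines are "name date size" separated by whitespace.

def expand_listing(lines):
    current_folder = ""
    records = []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        if line.endswith(":"):            # folder header: remember the context
            current_folder = line[:-1]
        else:                             # file line: attach the folder context
            name, date, size = line.split()
            records.append((current_folder + "/" + name, date, size))
    return records

listing = [
    "/site/images:",
    "logo.png 2016-01-02 4096",
    "/site/docs:",
    "index.html 2016-01-03 1024",
]
for rec in expand_listing(listing):
    print(rec)
```

The same pattern works for propagating tags or device names into every datapoint record.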
Contrary to the directory listing, which is essentially a return-delimited list of records, your file is one long string. This has to be preprocessed before importing into FileMaker or any other database.
Basically you add returns to split the string into a list of lines, one for every "datapoint string", and then remove non-data text elements like "date", "value", brackets, etc ...
Depending on locale/number-format standards, you also have to decide whether to change values from decimal point to decimal comma and import them as numbers, or leave them as is and process the (string) values after importing.
The timestamp data has to be "cleaned up" as well, again either in the preprocessing or after import, to match the internal representation FileMaker uses ...
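The preprocessing steps above can be sketched in Python. The input format is an assumption (a JSON array of {"date": ..., "value": ...} objects); the timestamp and decimal-comma handling shown are examples to adapt to your locale and to FileMaker's expected formats:

```python
import json

# Assumed raw input: a JSON array of {"date": ..., "value": ...} objects.
raw = ('[{"date":"2016-05-01T10:00:00Z","value":1.5},'
       '{"date":"2016-05-01T10:10:00Z","value":2.25}]')

lines = []
for point in json.loads(raw):
    # Strip the ISO "T" and trailing "Z" so the timestamp matches a
    # plain "YYYY-MM-DD HH:MM:SS" form for import.
    ts = point["date"].replace("T", " ").rstrip("Z")
    # Swap decimal point for decimal comma -- only if your locale needs it.
    val = str(point["value"]).replace(".", ",")
    lines.append(ts + "\t" + val)

# One return-delimited line per datapoint, ready for a tab-separated import.
print("\n".join(lines))
```

After this, each line maps cleanly onto one FileMaker record with a timestamp field and a value field.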
Hope this gives you an idea to go forward.
I would be very interested in hearing how you go forward and the resulting solution
Thanks for the replies. I think the biggest challenge for me is figuring out the best method to store, find, and retrieve the data. I'm not too concerned with the conversion of the JSON. I'm using some Python to bring in the JSON, parse it, and make it available to FileMaker Server. I've actually tried out the ModularFilemaker plugin and read that Soliant Consulting blog (and others they've posted about handling JSON; super helpful btw!). Really good, helpful suggestions.
At the moment, the biggest challenge I have is deciding whether to put each value pair into its own record or to group them by time (i.e. taking all the values in a 10-minute period, return-delimiting them, and putting them into a single record). I'll definitely be doing the finds requested by the user via FileMaker Server, especially since this solution is going to be accessed on mobile devices. But even then I'm not really sure that's the best (i.e. fastest) method for retrieving the data and displaying it in a trend chart/multiple trend charts.
The questions I have in that case are:
1. What's the best method for the user to find the data? In other words, what could they search on? I could timestamp when the record was created and put the time or date range of the data in another FM field, I suppose. But I really want the user to be able to select data in no smaller than 10-minute chunks.
2. How does the visualization come into play? I need some kind of trend visualization. Would I be able to use FileMaker's charts in the case you're suggesting? Or would I have to pass the contents to a third-party tool through a web viewer?
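For question 1, one simple searchable field is the start of each 10-minute window. A minimal Python sketch of flooring a timestamp to its window (a hypothetical helper; in practice the field would be computed during the Python import or as a FileMaker calculation):

```python
from datetime import datetime

def ten_minute_bucket(ts):
    """Floor a timestamp to the start of its 10-minute window."""
    return ts.replace(minute=ts.minute - ts.minute % 10,
                      second=0, microsecond=0)

ts = datetime(2016, 5, 1, 10, 37, 42)
print(ten_minute_bucket(ts))   # 2016-05-01 10:30:00
```

Storing that bucket value in an indexed field lets a user pick a 10-minute chunk directly, with no range math at find time.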
I have a question: where do you get the JSON data from?
Via a web viewer using Ajax?
After the visualization, in the saving phase, transform the JSON data so that you can easily find it afterwards.
E.g. divide the data by time (duration), and label it.
Labelling means creating a new key (e.g. "startTime") with the time data. (Insert this into your modified JSON.)
Then save this to FileMaker.
In FileMaker, using this label, you can find the data easily.
For reporting, pass the found data to the web viewer, rebuild the data object, and visualize it.
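The divide-and-label idea above can be sketched in Python. The input format ((timestamp, value) pairs) and the 10-minute duration are assumptions for illustration:

```python
import json
from datetime import datetime, timedelta

# Divide datapoints into fixed-duration chunks and insert a "startTime"
# label into each chunk before saving it to FileMaker.
points = [
    ("2016-05-01 10:03:00", 1.1),
    ("2016-05-01 10:07:00", 1.4),
    ("2016-05-01 10:12:00", 0.9),
]

duration = timedelta(minutes=10)
chunks = {}
for ts_text, value in points:
    ts = datetime.strptime(ts_text, "%Y-%m-%d %H:%M:%S")
    # Floor to the start of the chunk's window; this becomes the label.
    start = ts - (ts - datetime.min) % duration
    key = start.strftime("%Y-%m-%d %H:%M:%S")
    chunks.setdefault(key, {"startTime": key, "values": []})["values"].append(value)

# Each chunk is one JSON object you could store in one FileMaker record
# and find again by its startTime label.
for chunk in chunks.values():
    print(json.dumps(chunk))
```

Finding a chunk is then a plain indexed find on the startTime field, and the web viewer gets one pre-grouped JSON object per window to rebuild and chart.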
I hope this will help.
Really interesting suggestion. I'm definitely willing to try it. However, I ran the numbers yesterday and it looks like I would arrive at the 5M-record point within ~140 days, assuming 35k data points (aka records) a day. I know that the maximum number of records in a table is WAY higher than 5M, but at some point server performance with finds becomes a serious issue (and I don't actually know where that performance threshold is).
Like you pointed out, the good thing is that the records are short (one timestamp field and one small float value). And those values aren't changing at all, just being stored, so they can be indexed to drastically improve performance. And you're also right about the flexibility that makes additions or edits easier in the future.
Good points about needing to parse the data. Not too worried about that at all. As other people have pointed out on this thread, there are a ton of good resources out there that will parse through JSON to make sure that I can get what I need to do an import into FM and create the records.
I'll be interested to know how I go forward too! =P
In any analysis of this kind of high-density stream data, the conventional practice is to periodically compute summaries over some time window and archive or discard the raw data. Which summary to calculate, over which time window, will depend on what kind of analysis you want to do.
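The rollup practice described above can be sketched in Python. The window size and the min/max/mean summary are illustrative choices, and (minute, value) pairs stand in for real timestamped records:

```python
from statistics import mean

# Summarize raw datapoints over a fixed window; keep the summaries,
# archive or discard the raw rows afterwards.
raw = [(0, 1.0), (3, 2.0), (7, 3.0), (12, 4.0), (15, 6.0)]

window = 10  # minutes
summaries = {}
for minute, value in raw:
    # Group each datapoint under the start of its window.
    summaries.setdefault(minute // window * window, []).append(value)

for start, values in sorted(summaries.items()):
    print(start, min(values), max(values), round(mean(values), 3))
```

At 35k datapoints a day, a 10-minute rollup cuts the stored row count by two orders of magnitude while keeping enough resolution for trend charts.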