
    Hosted file metadata and performance monitoring - scripts or relationships?

    mchancevet@gmail.com

      Hello,

      I'm considering a proposed hosted, multi-file solution built on the data separation model. I'm concerned about performance, and I don't really know what to expect or how to anticipate how the solution will perform.

      To get a handle on it over time, and to identify limiting functions and bottlenecks, I propose to include a suite of internal monitoring functions - file and performance metadata.

      I anticipate 'start' and 'end' time fields, populated by Set Field [ Get ( CurrentTime ) ] steps, so I can see how long scripted processes take. I can include these performance monitoring fields on affected records to get averages for processes over time. (To give some context: some processes will be external to FM (AppleScript) and will act on PNG files of varying sizes in differing ways.)
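      Roughly what I have in mind is the sketch below - the PerformanceLog table and the field and script names are just placeholders of my own invention, and I'm assuming Get ( CurrentTimeUTCMilliseconds ) (FileMaker 13+) for sub-second resolution, which Get ( CurrentTime ) doesn't give:

          # Hypothetical timing wrapper; table, field and script names are illustrative
          Set Variable [ $start ; Value: Get ( CurrentTimeUTCMilliseconds ) ]
          Perform Script [ "Process PNG" ]
          Set Variable [ $end ; Value: Get ( CurrentTimeUTCMilliseconds ) ]
          # Write one row to the log so averages can be computed later
          Go to Layout [ "PerformanceLog" ]
          New Record/Request
          Set Field [ PerformanceLog::ScriptName ; "Process PNG" ]
          Set Field [ PerformanceLog::DurationMs ; $end - $start ]
          Commit Records/Requests [ With dialog: Off ]
          Go to Layout [ original layout ]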

      I would also have fields giving metadata, such as file size, for container fields containing PNGs, and probably some global fields for the whole FM file (Get ( FileSize ), etc.).
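      For the container metadata, I'm assuming calculations along these lines (field names are placeholders; GetContainerAttribute needs FileMaker 13 or later):

          // Size in bytes of the PNG held in a container field
          GetContainerAttribute ( Assets::PNG ; "fileSize" )

          // Size in bytes of the whole FM file, for a global summary field
          Get ( FileSize )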

      So over time I hope to be able to summarise things like:

       

      • How long does script "A" take, on average, per record?
      • How long does script "B" take per MB of PNG file, on average?
      • How long does Photoshop action "X" take to process a PNG, on average? (One AppleScript calls a dynamically determined 'action' in Photoshop on a PNG file held in a container in the FM file.)
      • How is script "C" affected by the number of records in table "Y"?
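      If the durations end up in a log table like the one above, I imagine these summaries could be pulled with ExecuteSQL (FileMaker 12+) rather than with extra relationships - a sketch, again using my placeholder names:

          /* Average logged duration of script "A" */
          ExecuteSQL (
              "SELECT AVG ( DurationMs ) FROM PerformanceLog WHERE ScriptName = ?" ;
              "" ; "" ; "A" )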

       

       

      I hope that makes sense.

       

      Now, my question:

       

      I could conceivably populate, access, and process the performance and metadata summary data, to some extent, using relationships and calculation or summary fields. I expect this would create more 'overhead' (is that the correct term?) and would actually generate more work for the processor almost all the time - i.e. a bigger, more complex file with a bigger, more complex relationship graph. In this way, monitoring performance could itself affect performance, or so I imagine.
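      As a concrete example of what I mean by the relationship route: an unstored calculation like the one below (the relationship name is a placeholder) re-evaluates every time it is displayed, so the monitoring itself adds load:

          // Unstored calculation, aggregating related log records through a relationship
          Average ( PerformanceLog_SameScript::DurationMs )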

       

      Alternatively, I could generate and access my metadata summaries through scripts. This would probably be slower and more cumbersome at the moment I want to see the data, but it would only affect performance while I was interrogating the file about its performance, not during its other functions.
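      By 'scripted' I mean something like this sketch - the summary is computed only when I run the script, and parked in a global field (names are placeholders again):

          # On-demand summary: the cost is paid only when this script runs
          Set Variable [ $summary ; Value:
              ExecuteSQL (
                  "SELECT ScriptName, AVG ( DurationMs )
                   FROM PerformanceLog GROUP BY ScriptName" ; ": " ; ¶ ) ]
          Set Field [ Globals::gPerformanceSummary ; $summary ]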

       

      I think I'm leaning towards a scripted solution.
      I hope to elicit commentary on my thinking here, on my performance monitoring plan generally, or any other thoughts on performance that might help.

       

      Thanks,

      Morgan