So each event for a given item might log the change in status--off to on or on to off?
Presumably there is a field that identifies the item whose status is changing?
Such a field would let you perform a find for all records for a given item, and it can also be used in relationships to isolate the event records referring to a given item.
The difference between the timestamps logged in successive event records can be used to compute elapsed time; GetNthRecord or a relationship can be used to access the timestamp of the other record. Getting a total for each status value, however, will be a challenge.
I think this might be possible if you can figure out a self-join relationship that matches each event record to the preceding event record for the same device/status change. Then it would be possible to sort your records to put all the "on" records in one group to compute the total time the device was "off", while the group of all the "off" records would compute the time the device was "on".
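To make the pairing logic concrete, here's a minimal sketch in Python of the same idea: each event is matched with the preceding event for the same device, and the interval between them is credited to the status that just ended (so an "off" event adds to the "on" total, and vice versa). The field names and sample data are assumptions, not your actual schema:

```python
from datetime import datetime

# Hypothetical sample data: each event logs a device, its new status,
# and the time the change occurred (field names are assumed).
events = [
    {"device": "A", "status": "on",  "ts": datetime(2024, 1, 1, 8, 0)},
    {"device": "A", "status": "off", "ts": datetime(2024, 1, 1, 10, 30)},
    {"device": "A", "status": "on",  "ts": datetime(2024, 1, 1, 11, 0)},
]

def totals_by_status(events):
    """Pair each event with the preceding event for the same device and
    credit the elapsed interval to the status that was just ended."""
    totals = {"on": 0.0, "off": 0.0}  # seconds spent in each status
    last_event = {}                   # most recent event seen, per device
    for ev in sorted(events, key=lambda e: e["ts"]):
        prev = last_event.get(ev["device"])
        if prev is not None:
            # The time between prev and ev was spent in prev's status.
            totals[prev["status"]] += (ev["ts"] - prev["ts"]).total_seconds()
        last_event[ev["device"]] = ev
    return totals

print(totals_by_status(events))  # -> {'on': 9000.0, 'off': 1800.0}
```

This is the batch/script equivalent of the self-join idea; in FileMaker the same pairing would come from the sorted relationship described later in the thread.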
Would this perhaps be easier on a chart?
Maybe it wouldn't count the hours, but we could run one each day and 'see' what is happening. So I'm thinking that if we used a line chart, it would be straight if everything was normal, but if that value were to change it would jump up, and this could alert us that there is a problem.
Although - I haven't had much luck with using the charts yet.
I'm also wondering - is there a way, when a field gets a certain value, to have the database flag it, send an email, or raise some other notification? I'm trying to come up with a way for the database to notify us, or visually show us, when there is a problem.
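In FileMaker terms this is usually a scheduled script plus the Send Mail script step, but the core check is simple enough to sketch outside the product. A minimal Python sketch of the threshold-alert idea, where the field names and the limit of 100 are purely illustrative assumptions:

```python
def check_readings(readings, threshold):
    """Return alert messages for any reading above the (assumed) threshold."""
    alerts = []
    for device, value in readings.items():
        if value > threshold:
            alerts.append(f"ALERT: {device} reported {value} (limit {threshold})")
    return alerts

# Hypothetical readings; in practice these would be pulled from the database
# by a script running on a schedule.
alerts = check_readings({"windmill-1": 72, "windmill-2": 105}, threshold=100)
for msg in alerts:
    print(msg)  # hand the message off to email/SMS here
```

The same pattern works for flagging records visually: instead of sending mail, the script could set a flag field that conditional formatting then highlights on a layout.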
Millions of records? There are reports that FileMaker (depending on the design) can run into problems once record counts reach the hundreds of thousands.
I would assume you are interested in recent activity and can archive or delete records from more than 3 months ago.
Title: How many millions can FileMaker handle?
Obscene File Size or Record Count
Although it may seem like a huge file size (more than a few GB) or record count (more than a few million) are performance problems that can be improved by deleting/archiving old records, I've found this is rarely the case.
Yes, almost all operations on a file will be faster with 10 records than with 10 million. But design is far more important than record count. I once was given a file with six (!) records that was so slow it was completely unusable. It had hundreds of fields, nearly all of which were calculation fields, based on other calculation fields. Relationships twisted everywhere. Everything was, necessarily, unstored. It was an intractable mess.
In one of my databases, one table has 1.3 million records and another has 4.8 million. Such massive numbers of records must be treated with respect--I can get some really long delays if I get careless with my finds and sorts, but I haven't had any situations where FileMaker simply refuses to work correctly due to the number of records. David's comments are quite on point here, though. In fact, the tables where I have these massive record counts are our "archive" tables: they aren't used for active data entry, but serve as a source of data for a number of reports that span large time intervals.
Here's the relationship based method that I dreamed up, but it may not fit the actual structure of your data. And you will need to test to see if large numbers of records make it unacceptably slow:
Let's say your Events table has these fields:
WindmillID --> foreign key to the Windmill table identifying the specific windmill that logged this event.
SubSystemID --> identifies the specific subsystem of the windmill unit responsible for this event (what got turned on or off).
Status --> the new status of the subsystem: on or off.
TimeStamp --> a timestamp field recording the date and time this event occurred.
The following self join relationship would match to the preceding event record for the same Windmill and Subsystem:
Events::WindmillID = PrecedingEvent::WindMillID AND
Events::SubSystemID = PrecedingEvent::SubSystemID AND
Events::TimeStamp > PrecedingEvent::TimeStamp
And in this relationship, PrecedingEvent would be sorted by TimeStamp in descending order, so the first related record is the most recent earlier event.
PrecedingEvent would be a second occurrence of the events table.
This would enable you to define this calculation to compute elapsed time:
Events::TimeStamp - PrecedingEvent::TimeStamp
And I would definitely consider using this approach only with relatively small subsets of your total records. For faster reporting, you may want to use a script that pulls up this data and generates new records in a different table, recording the elapsed time in a simple number field. Depending on your reporting needs, this might be one record for each unique combination of windmill, subsystem, and status for each day, or for smaller intervals of time. A script could generate this data from your Events table at regular intervals.
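The condensing step described above can be sketched as a simple group-and-sum. This is a Python outline only; the field names (windmill, subsystem, status, elapsed) are assumptions, and in practice a FileMaker script would write the summary rows into a separate reporting table:

```python
from collections import defaultdict
from datetime import datetime

def summarize_daily(intervals):
    """Condense per-event elapsed times into one total per
    (day, windmill, subsystem, status) combination."""
    daily = defaultdict(float)
    for iv in intervals:
        key = (iv["ts"].date(), iv["windmill"], iv["subsystem"], iv["status"])
        daily[key] += iv["elapsed"]  # seconds spent in this status
    return dict(daily)

# Two hypothetical elapsed-time records for the same windmill/subsystem/day...
intervals = [
    {"ts": datetime(2024, 1, 1, 9),  "windmill": "W1", "subsystem": "pump",
     "status": "on", "elapsed": 3600.0},
    {"ts": datetime(2024, 1, 1, 15), "windmill": "W1", "subsystem": "pump",
     "status": "on", "elapsed": 1800.0},
]
# ...condense to a single summary row totaling 5400 seconds.
print(summarize_daily(intervals))
```

Reports then run against the small summary table instead of the full Events table, which is what makes the nightly-condensing approach fast.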
I use a similar method in one of my databases to summarize data from a line items table once a night. This condenses a set of nearly 1,000 line items down to fewer than 20 records, and this makes reports that might span as much as 5 years compute and display many times faster than when I generate the same report directly from the line items table. (But I have kept the ability to report directly from the line items as a cross-check, to make sure that my summarized data is an accurate summary of the original data.)