So what you want is an audit trail, so you can have a history of the changes that occurred to all of the field contents for a given record?
Yep, that's it! Do you know how to do that?
Here's a way to do this...
You have Table1 with all of the fields you want to track in it. You have to first create a mimic of that table (along with some new fields that will help you track info as changes are made to the records in Table1).
Inside Manage Database..., copy and paste Table1 into a new table and call it Table2. (Or you can create the secondary table manually and create all the fields you want to track by hand -- but since we will eventually be writing Set Field steps using the two tables as source and target, you want the field names to match up between the two as much as possible. This part is up to you.)
If you do not have it, create a primary key field in Table1. Call the field _pKey and using Options set it as Auto-Enter Serial, Can't Modify Auto.
Set up a mod_name field in Table1, then using Options set it to Modification Account Name.
In Table2 create a foreign key field, call it _fKey and leave it as a text field. Also, create a mod_name field, and also leave it as text.
In the Relationship graph, create a relationship between these two tables by dragging from _pKey in Table1 to _fKey in Table2.
Double-click the "=" and, under Table2, check Allow creation of records... Also check Delete related records... if you want Table2 records removed if/when you delete a record from Table1. However, you may want to keep those Table2 records as history even if the parent record is removed from the system -- if this second approach is good for you, leave that second check box UNchecked. Then click OK.
Additionally, so that you can track when the records in Table1 were edited by a user, add a field to Table2 called mod_timestamp. Make it a Calculation field, and set its calculation to Get ( CurrentTimeStamp ).
Now, time to write the script that will make use of all of the above...
OVERVIEW: This script creates a new record in Table2 and then copies your data, field for field, from Table1 into Table2 using a series of Set Field steps. It finishes by stamping the Table2 record with a timestamp of when the tracking record was created, and by adding the account name of the user who made the changes. (All lines preceded by "#" are Comment lines -- you may leave these out if you like, though I think it is always a good thing to comment your scripting.)
SCRIPT: Modification Tracker
#Let's not allow the user to stop this process!
Allow User Abort [Off]
#Freeze the window to stop any flashing in the UI (sometimes happens when switching between layouts via scripting, especially if a system is served):
Freeze Window
#Set a local variable -- call it $RelateID -- to the _pKey of the record you are in on Table1:
Set Variable [$RelateID; Value:Table1::_pKey]
#Switch over to a Table2 layout:
Go to Layout ["Table2" (Table2)]
#Create a new record (currently UNrelated) on the Table2 layout:
New Record/Request
#Relate the two tables by setting the _pKey into the _fKey field of Table2:
Set Field [Table2::_fKey; $RelateID]
#Set all relevant fields using Set Field:
Set Field [Table2::field; Table1::field]
Set Field [Table2::field2; Table1::field2]
NOTE: I called the matching fields the same thing in each of the tables -- field and field; field2 and field2, etc. Do all of the Set Fields that you need to do to copy ALL of your fields from Table1 over into the new record in Table2.
#Finish your Set Fields with a final set for the mod_name from Table1:
Set Field [Table2::mod_name; Table1::mod_name]
#Go back to the Table1 layout:
Go to Layout [original layout]
And that's it for the script!
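For anyone who thinks better in code, here's a rough Python sketch of what the script above is doing. Everything here is purely illustrative -- the dicts, the audit_table list, and the function name are all made up; in FileMaker this all happens through layouts and Set Field steps:

```python
from datetime import datetime

def audit_copy(record, user, audit_table):
    """Append a snapshot of `record` (a dict of field values) to
    `audit_table` (a list standing in for Table2), stamped with the
    user and a timestamp -- mirroring the Set Field steps above."""
    snapshot = dict(record)                       # copy every tracked field
    snapshot["_fKey"] = record["_pKey"]           # relate back to the Table1 record
    snapshot["mod_name"] = user                   # who made the change
    snapshot["mod_timestamp"] = datetime.now().isoformat()  # when it was logged
    audit_table.append(snapshot)                  # the New Record/Request step
    return snapshot
```

Every commit adds one more snapshot to the list, so the full history of the record accumulates over time, just as Table2 does.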
Now, on your Table1 layout, go into Layout Mode and click the Layout Setup icon in the Layout Toolbar, and click the Script Triggers tab in there.
Check the box adjacent to OnRecordCommit and then select your script to run. It will now run every time a record commits -- that is, when all changes have been made and the user moves on to doing something else. The commit will happen before they can go elsewhere, and the script runs, writing a copy of the altered Table1 record over into Table2. Additionally, the new record in Table2 is timestamped and includes the account name of the person making the changes.
I'm sure there are other ways to do this, but this is one way. Good luck!
I have a couple of questions to dive a little further in:
1) Regarding the behavior of OnRecordCommit: if you go into the record without modifying it -- or let's say field1 has "ABC" in it and you Backspace the "C" and retype it, so the record didn't actually change -- would it create a record of the changes in Table2?
2) How does this change if you have Table1 as a parent and Table1a as line items to that parent? Would you need two tables to record changes, or do you add a couple of fields to Table2 and put all changes there?
Thanks for the good words. I appreciate it!
>> 1) Regarding the behavior of OnRecordCommit: if you go into the record without modifying it -- or let's say field1 has "ABC" in it and you Backspace the "C" and retype it, so the record didn't actually change -- would it create a record of the changes in Table2?
Backspacing over the "C" and then retyping the "C" is in fact a change, and a new record will get created in Table2 as a result. Of course, if you really wanted to keep things clean, you could add steps to the script so that if a matching record is found in Table2, the new one is deleted, keeping Table2 'cleaner.' But I don't think it hurts to have that record in there even if the info is the same -- it does show that someone was mucking about with the record, even if no changes to the data were made by the time they finished...
>> 2) How does this change if you have Table1 as a parent and Table1a as line items to that parent? Would you need two tables to record changes, or do you add a couple of fields to Table2 and put all changes there?
Not entirely sure. Your Table1a is related to Table1 but not to Table2 (at least as written right now), so I'm not sure how to effect the needed changes without testing this out some more with additional relationships and Set Field steps. My solution above was written for just the two tables.
I've been playing around with a method that uses script triggers and global variables to develop an audit trail. The results seem promising, and I don't need a mimic table to log the changes -- though such a table can still be useful for stepping back through them without yet applying a change to the actual record. I haven't worked out all the details to my complete satisfaction yet, as I'm still evaluating some options, but the idea works like this:
Each time you enter a field, the value is captured in the same global variable. Whenever an OnObjectSave event happens on a field, a script uses that global variable to store the prior value, as well as the field name, table name, etc., in lists maintained in global variables. Whenever the record is committed, another script saves the lists of prior values, etc., to a single change log record and clears the global list variables. Changes can then be rolled back by a script that restores the prior values from the info in the change log record.
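If it helps, the trigger flow described above can be mocked up in Python like this. Everything here is hypothetical -- the class and method names just stand in for the OnObjectEnter/OnObjectSave/OnRecordCommit triggers and the global variables:

```python
class ChangeLogger:
    """Minimal sketch of the trigger-driven approach described above.
    on_enter() plays the role of capturing the value in a global
    variable, on_save() the OnObjectSave trigger, and commit() the
    OnRecordCommit script that writes one change-log entry."""

    def __init__(self):
        self._prior = None    # the single "global variable"
        self._pending = []    # the lists built up before commit
        self.log = []         # committed change-log records

    def on_enter(self, table, field, value):
        # Capture the value as it was when the user entered the field.
        self._prior = (table, field, value)

    def on_save(self, table, field, new_value):
        # Log a pending change only if the value actually changed.
        if self._prior is None:
            return
        t, f, prior = self._prior
        if (t, f) == (table, field) and prior != new_value:
            self._pending.append({"table": t, "field": f, "prior": prior})

    def commit(self):
        # Write one change-log record per commit, then clear the lists.
        if self._pending:
            self.log.append(list(self._pending))
        self._pending = []

    def rollback(self, record, entry_index=-1):
        # Restore prior values -- the Set Field By Name analogue.
        for change in self.log[entry_index]:
            record[change["field"]] = change["prior"]
        return record
```

Note that a save where the value ends up unchanged produces no pending entry, so an "empty" commit writes nothing to the log.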
I've been doing something similar with global variables and script triggers.
I have only used it in one situation in my solution: a work scheduling, time tracking, and payroll module. I have one record per work day per employee, so their check-in and check-out times are registered there and compared to scheduled work time. Whenever a change is made to a record, it makes an entry in that record. Depending on the script parameter, it also enters a brief description of the change, for example:
Set Field [ modifications; modifications & "*checkIN change from 9:00 am to 8:30 am by user Manager on may/1/2011 10:30 am" ]
I use the * for several reasons, one of them being that I can count the number of modifications; I can also use it for formatting by substituting it with a ¶, etc...
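To illustrate the two uses of the * mentioned above, here's a tiny Python sketch (using \n in place of FileMaker's ¶; the function names are made up):

```python
def modification_count(log_field):
    # Each entry in the modifications field starts with "*",
    # so counting "*" counts the modifications.
    return log_field.count("*")

def formatted(log_field):
    # Substitute each "*" with a return (FileMaker's Substitute with ¶)
    # to get one modification per line, dropping the leading blank line.
    return log_field.replace("*", "\n").lstrip("\n")
```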
I can use relationships to join all of the week's modifications into that employee's payroll record for that week.
I've also been thinking about it a lot and have thought of the following:
Create a single change log table for the entire solution, with fields for which table the change occurred on, which field it occurred on, when the change happened, and what the previous value was.
I can then set a gField in each table with a tableID to relate it to that change log table. The field can be used to track changes on a particular field, or it can be ignored, or used for sorting, etc...
Do you think this is an efficient/clean way to solve this problem? I can't find enough pros in keeping a duplicate record for each commit when maybe just one field was changed. Or can you go into more detail on your change log record and how you write to it, restore from it, and display a history list?
Yes, my little incomplete experiment logs only the prior values of fields that were changed. It also logs the record's primary key and table name, so that I can use a relationship matching by layout table name (an unstored calculation field) and primary key to link to the correct set of change records. That lets a single change log table support edits on multiple tables.
My first take listed all the changes stored from a single record commit in one large text field, and my script looped through them, using GetValue and a loop variable to extract the field names and prior values. Inside the loop, I used Set Field By Name to roll back the value for each recorded prior value.
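Here's roughly what that parse-and-roll-back loop looks like, sketched in Python. The two-lines-per-change layout of the text field is my assumption; get_value is a rough stand-in for FileMaker's GetValue, and the dict assignment stands in for Set Field By Name:

```python
def get_value(text, n):
    """Rough analogue of FileMaker's GetValue: 1-based index into a
    return-separated list (FileMaker's return is \r)."""
    values = text.split("\r")
    return values[n - 1] if 1 <= n <= len(values) else ""

def roll_back(record, change_blob):
    """Walk a change log stored as alternating lines of
    field name / prior value (an assumed layout), restoring
    each prior value by name -- the Set Field By Name analogue."""
    values = change_blob.split("\r")
    for i in range(0, len(values) - 1, 2):
        record[values[i]] = values[i + 1]
    return record
```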
I'm now playing with making multiple change log records for a single commit, but with a common version number stored with the edited record, so that my scripts can loop through records rather than having to parse values out of a large text field.
I've found that I can also use a mimic table, not for storing prior copies of every modified record, but to display temporary copies of the same record after each commit. This can allow the user to move forward and backward over each record update before deciding how or whether to roll back the data edits. It's even possible to then selectively roll back changes only on selected fields, and all of these changes can also be logged back to the change log once they are applied to the original record. (This could even be done without the mimic table by creating multiple temporary copies of the record in the original table, but I'm leaning toward the separate table as a safer way to do all this.)
@PhilModJunk I think your third paragraph is in sync with what I described: one record per modified field. Only I left out the very important detail of the version number.
In my personal experience, what is most useful about change logs isn't necessarily having the record automatically change back to a previous version, but rather knowing when something was changed, what it was changed to, and WHO changed it. It is the only way to get to the bottom of things... and to have people trust a solution.
I'll keep you posted on how I put this into practice and hope you'll do the same as your ideas progress.
One thing I want to "benchmark" is whether rolling back or logging changes for a complex record with lots of fields and a portal or two can be done more quickly by loading the contents of a single field into a variable and parsing it or by looping through a set of records where each record logs the prior value of one edited field.
Another use for such a change log might be to more intelligently synch copies of a distributed database back together--something that occurred to me after I started playing with this idea when a client asked me about different options for synchronizing two copies of his database.
I make return-separated lists and store them in a text field a lot. I then parse them when needed with the MiddleValues function, and for me it seems to handle things rather quickly.
Quickbooks uses version numbers for their tables (both current and change logs) so when integrating with third party applications, it compares the version number of the record to see if it is current.
Your syncing scenario sounds complex. Are you talking about having a solution built to work offline (let's say on a Wi-Fi iPad) and then push created records to the main solution when connected to the internet with a button/script? If so, you can add a "posted" field, so the solution will go through the different tables searching for records to be posted and then post them to the hosted solution. You can also create a POST table in which you have a Posted field (0 or 1), a type-to-post field (for scripting purposes), and a list of field values. This would work as follows:
I create a new contact on my iPad. The solution creates a record in the POST table with 0 in the posted field, 1 in type to post (1 being New Contact Created), and then a return-separated list of values. The list must contain a value (even if blank) for each field, so you can't use the List function, because it skips blank fields. So this would look like "Restaurant¶Charlie¶¶firstname.lastname@example.org¶555-555-5555".
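The pack/parse idea can be sketched like this in Python (\r standing in for FileMaker's ¶; the function names are made up):

```python
def pack_fields(values):
    # Join with returns, keeping empty strings so each value stays
    # at a fixed position -- unlike FileMaker's List(), which
    # skips blank fields and would shift everything after a blank.
    return "\r".join(values)

def unpack_field(packed, n):
    # 1-based extraction, like GetValue/MiddleValues.
    return packed.split("\r")[n - 1]
```

Because blanks are preserved, field 3 (middle name, say) reliably stays field 3 even when it's empty.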
Then you can do one of two things. Have the remote solution put that info into the main solution, or you can have the host process the posting.
In the first option, open the file and parse each record to post with a loop and If statements.
In the second option, just copy the Post Table Records with 0 in the posted field and have the host run a scheduled script that would go through and enter each record accordingly using
If [ type = 1 ]
    Go to Layout ["Contacts"]
    Set Field [ first name; MiddleValues ( listofvalues; 1; 1 ) ]
    Set Field [ middle name; MiddleValues ( listofvalues; 2; 1 ) ]
Else If [ type = 2 ]
    ...
The second option would make it a lot faster for the remote user.
I have used this technique, not for this, but for having the server process things it can do in 2 seconds that would take the remote user 2 minutes.
I have a table called RunScript that remote users create a record in when they want something done on the server. The server is in an infinite loop waiting for something to be added to that table; when it is done, it changes a field called open to 0. The remote client loops, waiting for that 0; as soon as it is there, it deletes the record (to keep the table lean and clean) and the script finishes on the client's end, normally switching to a layout and a found set.
You might look into using GetValue instead of MiddleValues. It's just a little bit cleaner when working with lists of values, IMO. I almost never use MiddleValues any more unless I need to extract more than one value at a time.
Right now, it's purely a theoretical discussion much like this one. When working through the options for the client, I conceived of this hypothetical scenario:
- Sales rep pulls new copy of database onto lap top and hits the road.
- On Monday, home office user notes that the street address has a misspelling and changes "Rood" to "Road" for client 123.
- On Tuesday, Sales rep meets with client 123 and is told "here's my new cell phone number, I don't have the old one anymore." and Sales Rep changes the clients phone number accordingly.
The challenge to an effective synch system is to successfully merge the Monday and Tuesday edits of the same record without overwriting the Monday correction with the Tuesday copy of the data and thus changing "Road" back to "Rood".
What I realized was that it might be possible for the script to use a change log to intelligently determine that the Monday edit was to a different field than the Tuesday edit, and thus keep both changes successfully. This would eliminate about 90% or more of the synch issues that can make this such a major headache. You'd still need to decide how to handle the situation where both edits are to the same field of the same record, but at least that's a much smaller subset of the total changes made between synchs.
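The merge idea might look something like this in Python. This is purely a sketch under the assumption that each change set is a dict of field -> new value; the conflict rule is "same field, different values":

```python
def merge_changes(base, office_changes, rep_changes):
    """Apply two sets of field-level changes to `base` (a dict).
    Different-field edits merge cleanly; same-field edits with
    different values are flagged as conflicts for a human decision
    (here, the later change set simply wins in `merged`)."""
    merged = dict(base)
    conflicts = []
    touched = {}
    for changes in (office_changes, rep_changes):
        for field, new_value in changes.items():
            if field in touched and touched[field] != new_value:
                conflicts.append(field)    # same field edited twice
            merged[field] = new_value
            touched[field] = new_value
    return merged, conflicts
```

In the Rood/Road scenario above, the office's address fix and the rep's phone update touch different fields, so both survive the merge with no conflict.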
You are completely right about the GetValue function. That's what happens when you teach yourself -- you inevitably miss something.
For what you are describing, it occurs to me to have a field on the client side that is a timestamp of the time the last update was completed.
Then, on the host side, have a table as described before that has all changes in it, with a field for the table and a field for which field in that table was changed.
You can do a search from the client for all changes that have taken place after his timestamp and update those accordingly.
Then, when he reaches the last record he updates, mark that record's timestamp as his new timestamp from which to start looking for updates.
If you want it to be a two-way street, you probably want to put the remote user's changes on the server first, just in that change log table. Then the server can evaluate whether there has been a more recent change to that field, using a self-join sorted by record-change timestamp, descending. My logic says to only implement the most recent change, while keeping in the change log that the remote user had changed it to something else.
This will obviously involve some serious scripting. Do you have any thoughts on how to make this process efficient? With my knowledge, I can only think of having a Set Field script step for each field that is a candidate to be updated. Is there a way to streamline the process?
That's pretty close to what I had in mind. The one thing you may have missed is that you can use Set Field By Name and GetField instead of Set Field, and only examine/compare/change fields whose names and prior values are listed in the change log for changes made since the last synch.
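A rough Python sketch of that loop follows. The change log entry layout (timestamp, field, value) is my assumption, and the dict assignment stands in for Set Field By Name:

```python
def apply_changes(record, change_log, last_sync):
    """Walk the change log in timestamp order and, for each entry
    newer than the client's last sync, set the named field directly --
    the Set Field By Name idea, instead of one scripted Set Field
    step per candidate field."""
    for entry in sorted(change_log, key=lambda e: e["timestamp"]):
        if entry["timestamp"] > last_sync:
            record[entry["field"]] = entry["value"]
    return record
```

Because fields are addressed by name out of the log, the same loop works no matter which or how many fields changed, so the script doesn't need a hard-coded step per field.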