I have not used these myself, but I have tried to get some clients to buy one so I could install it. Maybe soon.
From my investigation, there are two solutions. The older one is fmDataGuard from WorldSync and the newer one is CNS Audit from CNS. Both seem to do the same job, though there are a number of differences between them. I wish I could tell you which one slows the database down the least, but I don't have any such info. My personal preference is for fmDataGuard due to their licensing, but for no other reason. Both seem to be good solutions. Best of luck!
We have users who use the Audit functions in our MBS Filemaker Plugin.
If you have questions about those functions, please email me.
We deployed fmDataGuard and have a couple of feedback items.
1) You have to add some structure to your tables and database files. It is the same set of table-level changes in every table, and the same set of file-level changes in every file. In our case we abstracted the fmDataGuard functions into a set of custom functions that also went into every file. We have a solution of 65 files and 250+ tables, so I wrote an AppleScript that went through all the files and all the tables, implementing identical changes everywhere.
2) It works as advertised, but there is a bit of a learning curve, especially when learning to troubleshoot it. I placed the call to the plugin inside a custom function that is in turn called by the special "trigger field" you put in each of your tables. I have the custom function set a global variable for every operation it does. (I also have a flag $$variable, and when it is set, the function uses SimpleDialog to display the error and the description from the documentation that matches that error.) This makes troubleshooting much simpler.
3) The plugin is deployed to clients, which means you can't audit from Filemaker Go, since plugins can't be deployed to Filemaker Go. You can still allow editing etc., but you can't audit. Web Publishing and server-side processes CAN use the plugin and operate correctly.
A workaround for point 3 would be to use WorldSync instead (same developer as fmDataGuard) and have it run the auditing on the server side only. This would mean no plugin deployment to clients, and the server would monitor changes and build an audit log. There is a catch though: for server-side-only operation, record deletions would not actually be performed by clients; records would instead be "flagged" for deletion by the server. To make them invisible to the client after they are flagged, you use a record-level permission setting. Since record-level permissions are evaluated on the server, in order to view every single record on your layout, the client has to ask the server separately whether you are allowed to see each record. The performance hit is nothing for a single record, but it builds up when you are dealing with portals of records or big lists, especially over a slow network.
Before we purchased fmDataGuard, I also built my own auditing solution using the qFMSQL plugin, and it would be transportable to any of the free SQL plugins if you didn't want to pay for it. The catch with doing it yourself is that you have to put a function call in the validation calculation of every stored field in every table. (AppleScript to the rescue again....) Using an SQL plugin to write to an AuditLog table is the best way to do it yourself, as there is no context issue and it is REALLY fast.
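For readers who haven't built one of these before, the write-to-an-AuditLog-table idea can be sketched outside FileMaker. Below is a minimal illustration using Python's sqlite3 module standing in for the SQL plugin; the AuditLog schema, the log_change helper, and the column names are invented for this sketch, not taken from qFMSQL or fmDataGuard.

```python
import sqlite3
from datetime import datetime, timezone

# Stand-in audit table; this schema is my own invention for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE AuditLog (
        id         INTEGER PRIMARY KEY,
        tableName  TEXT,
        fieldName  TEXT,
        oldValue   TEXT,
        newValue   TEXT,
        userName   TEXT,
        changedAt  TEXT
    )
""")

def log_change(table, field, old, new, user):
    """Roughly what the validation-calc custom function does: build one
    INSERT per field change and hand it to the SQL plugin to execute."""
    conn.execute(
        "INSERT INTO AuditLog "
        "(tableName, fieldName, oldValue, newValue, userName, changedAt) "
        "VALUES (?, ?, ?, ?, ?, ?)",
        (table, field, old, new, user,
         datetime.now(timezone.utc).isoformat()),
    )
    conn.commit()

log_change("Invoices", "Status", "Draft", "Posted", "peter")
rows = conn.execute(
    "SELECT tableName, fieldName, oldValue, newValue FROM AuditLog"
).fetchall()
print(rows)  # one change record, stored independently of the audited table
```

The point of the design is visible even in this toy: the change record lives in its own table, so deleting the audited record later does not delete its history.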
There may be one other possible way to implement an AuditLog yourself that would operate server side only, and that would be to maintain a "truth database" that is a shadow of all your stored fields in all your tables. This would be trivially easy using ODBC and SQL (I've already built this much as part of my research). The catch would be having it regularly compare the "truth database" against the live system, building "change records" as it notices each change. That would be a major amount of work. (Anybody got ideas how to implement this in SQL?)
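For what it's worth, the compare step can be expressed in plain SQL. Here is a hedged sketch using Python's sqlite3 (the table and field names are invented): keep a snapshot copy of each audited table, diff it against the live copy to emit change records, then refresh the snapshot. In practice the "live" table would be reached over ODBC rather than living in the same database.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- "live" stands in for the hosted FileMaker table (via ODBC in practice);
    -- "shadow" is the truth-database copy taken at the last pass.
    CREATE TABLE live    (id INTEGER PRIMARY KEY, status TEXT);
    CREATE TABLE shadow  (id INTEGER PRIMARY KEY, status TEXT);
    CREATE TABLE changes (id INTEGER, oldStatus TEXT, newStatus TEXT);

    INSERT INTO live   VALUES (1, 'Draft'), (2, 'Posted');
    INSERT INTO shadow VALUES (1, 'Draft'), (2, 'Draft');
""")

def audit_pass():
    # Emit one change record per row whose value differs from the shadow.
    # (IS NOT is SQLite's null-safe inequality, so NULLs compare sanely.)
    conn.execute("""
        INSERT INTO changes (id, oldStatus, newStatus)
        SELECT l.id, s.status, l.status
        FROM live l JOIN shadow s ON s.id = l.id
        WHERE s.status IS NOT l.status
    """)
    # Refresh the shadow so the next pass only sees new edits.
    conn.executescript(
        "DELETE FROM shadow; INSERT INTO shadow SELECT * FROM live;"
    )
    conn.commit()

audit_pass()
print(conn.execute("SELECT * FROM changes").fetchall())
# record 2 changed Draft -> Posted; record 1 is unchanged
```

Note this toy only detects edits to existing rows; catching insertions and deletions needs two extra outer-join (or EXCEPT) queries in the same pass, and changes made between passes to the same field collapse into one change record, which is exactly the granularity limit of any polling approach.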
For those readers who wonder whether AppleScript has access to the structure of Filemaker database files: it has read-only access. The AppleScripts I wrote drove the user interface directly using interface scripting. Anybody wanting to do this should purchase http://pfiddlesoft.com/uibrowser/ ; it's more than worth the price if you do any regular AppleScript work with the user interface. Using interface scripting you can do almost anything a user can do, including creating tables, adding table occurrences, adding relationships, etc., and it's very good where you have to make the same changes repeatedly in many places.
In summary, I would most certainly recommend fmDataGuard, if you know that Filemaker Go isn't in your target customer installed base, or your customers don't mind that you can't audit changes made by Filemaker Go clients. fmDataGuard does do what it says it does, and since we implemented it, it has proved very reliable.
If you have to audit changes made by Filemaker Go, then the ONLY solutions I am aware of would be WorldSync, or rolling your own server-side solution that tracks changes to records. I'd be very happy to be proved wrong about that.
I have hopes that Filemaker themselves may eventually give us a native auditing solution, because Server 12 implements "progressive backups", in which changes are mirrored as they are made. This implies that internally in Filemaker Server 12, there is some element of "change tracking" going on. Here's hoping.....
Peter, I sure agree it would be nice if FileMaker extended the progressive backup feature so that auditing could be a native FileMaker feature on the server side. I'll have to put that in as a product suggestion myself, because I think it would be a really great way to address future needs as mobile computing becomes more important to FileMaker.
Thanks, Peter. That's very helpful, because fmDataGuard is one of the products we've been considering. The Go issue is something of which we'll need to be mindful.
What criteria did you use when deciding which audit solution to adopt? Did you consider CNS Audit?
Progressive backups would complement this, but auditing really needs to let you see, in one spot, a chronology of changes at the field level across the database. What I mean is a chronology of changes and the ability to see a change that was made 4 changes back and roll the value back or forward. At the field level, you should be able to track when the change was made, who made it, what the original value was, and what it was changed to.
I have implemented the CNS Audit plug-in at one site. It worked, but as time went by it began to slow the solution down. I had discussions with the CNS guys and was not able to get this corrected. It could have been the network setup at the client, could be this or that, but we just could not find a solution.
What I did come to realize was that I didn't need to audit all the fields in my solutions. There are fields that don't require this kind of auditing, and backups etc. would do the trick for them. However, there was a set of fields where the auditing was needed. For example, there are fields relating to performance measurement indicators where this chronological view of changes was critical, and it was important to see who had entered data and when. With this more limited need for auditing, I built a solution using on-enter and on-exit script triggers to provide this tracking. It works across multiple tables and multiple fields, giving a chronology for each record in each table as well as a solution-wide view.
Hope this helps,
Background: I worked through several levels of building auditing solutions myself.
The first variant used an unstored calc for what the changes were, and a stored history field that looked up the unstored calc whenever any of the fields that "triggered" it were modified. This was implemented in Filemaker 6 initially by my boss, from an article he'd read in Filemaker Advisor. It worked reasonably well in Filemaker 6. It proved very bad when migrated to Filemaker 7, because Server 7 was initially very, very slow at lookups, and a lookup happened whenever an audited field was modified.
The second variant, like the first, used a couple of fields in the same table to store the changes. Instead of a self lookup, it used a couple of custom functions, and was triggered at record commit time. Each field that had been modified (and was listed in my field list) then got its new value recorded, along with who was making the change and when. When auditing a dozen fields or more, record commits slowed down. I got the idea for this, along with some of the functions, from articles I read online, and I'd love to give attribution because some of it was very clever, but I can't remember for sure where I saw it. Might have been Excelisys, but I am not sure.
Both of those solutions stored the changes with the record, so if the record was lost, the change history was too. In addition, once they had been operating for a year or two, the text field containing all the changes started to get very long and began to cause performance loss. We ended up making an AuditLog file for these solutions, and once a year the system administrator would run a routine in that file; it would go through all the stored AuditHistory fields in all the tables, copy them into the AuditLog file, and empty the fields in the source tables.
I also came across a solution for recording audits at the record level by Ray Cologon, using auto-enter calcs and custom functions. I never actually implemented it, as I was determined to find a way to record the audits separately, in a separate database, so that deleting a record did not delete its change history.
My third variant used a validation calc to call a script via a plugin; the script it called was in a separate AuditLog file, and all the values it needed were passed as parameters. I initially used myFMButler_DoScript for this, but any of the plugins that can run a script via a calc trigger could do it. A couple of catches: you have to put the validation calc into all the fields you want to audit, and from Filemaker 10 onwards, the script run to create the audit record would get "appended to the script queue", meaning it didn't actually run until all other scripts had completed. So if you did a bulk operation, such as a finance posting, then after the bulk operation ended, thousands of individual audit-record-creation script calls would execute, one after the other. In normal operation I got it up to 25 or so audit records per second, but this was far too slow and inconvenient for general use. There seemed to be no limit to the script execution queue, though: in testing I ran it up to over 80,000 audit record script calls, and it performed flawlessly, just slowly.
My fourth variant worked the best, in that instead of calling a script to create the record in the AuditLog, I made an SQL call via the qFMSQL plugin to create the AuditLog record. Basically I made a custom function that took the various things I wanted to record, constructed the SQL statement, and executed it. I had to put the custom function call into the validation calcs of the fields I wanted to audit. This was REALLY fast, 240+ records per second, and was much more usable. In normal use, the end users couldn't tell the auditing was taking place, not even when running big batch operations.
I was prompted to look for a better solution when Denbigh (the company I work for) took on a huge non-education project. One of the requirements for that project was both auditing and "roll forward" capability to restore from backup. The company involved had a second office set up but unmanned, as a business contingency. If something happened at their normal office, at one side of the city, they could decamp en masse to the second office 60 kilometers away and log in there on a shadow system that was being kept up to date with all the changes that had been made at the primary site up to that point. This just sang out for either WorldSync or fmDataGuard, and we chose fmDataGuard at that time, in part because tracking what each user did was so easy, and in part due to the ease of control of the "roll forward" capability. For instance, we could do a subset of the roll forward if we wanted to.
I did experiment with CNS Audit, but for some reason I never got it to work properly. When I told it to "start audit", client performance slowed dramatically for every editing operation. I did contact them about it at the time, but I couldn't get it to work well. Maybe it was just me: I wasn't giving it an explicit field list, instead I gave it a wildcard, and that may have slowed it down. Maybe it was the initial version of their plugin and it's better now. I certainly advise trying it and testing. It doesn't take long to build a small test database and implement it for testing.
We have implemented WorldSync as well; we have a couple of customers each with three sites syncing data in near-real-time, and one of them maintaining an AuditLog. Because WorldSync runs as a Java process on the server with a web interface for administering it, I personally find it a bit opaque when it comes to troubleshooting. I also wonder: if two users change a record between "WorldSync passes", does it record both changes or only the later change? I haven't tested this.
I do really hope that Filemaker implements an auditing solution, because they are really pushing mobile solutions, and so many organisations require some form of auditing and want mobile solutions for their work. In the meantime, however, I personally think the fmDataGuard & WorldSync solutions are the best to implement.
PeterGort makes a brief reference to it, but I suggest you visit Ray Cologon's website, NightWing Enterprises, and check out his UltraLog demo at: http://www.nightwing.com.au/FileMaker/demosX/demoX01.html
It incorporates a facility to archive log details to address the log-field overload issue Peter refers to. From my reading of the posts so far, I suspect it also involves far less coding overhead than some of the methods discussed, and because it's native it should also still work in FileMaker Go.
I remember looking at SuperLog!, but I hadn't seen UltraLog!. Very clean and elegant. It has the disadvantage that you are storing the change history with the record, but as "keywords" says, you can always extract it and put it in a separate AuditLog database as another (perhaps nightly?) process. It certainly is extremely easy to implement.
One thing to watch out for is the commit time for the user if you have a long field list. However, Ray's implementation is very clean and fast, so it is definitely worth testing out.
Another note about UltraLog: Ray included the functionality to save the changes to another table and prime the log field with the current data.
One thing you may need to test with that one (because I can't remember if it was a problem with SuperLog or UltraLog) is changes made from a related table. I believe SuperLog missed some of the data, while UltraLog still picked it up.
The nice thing about any of the SQL-based plug-in audit solutions is that they can save the changes into a separate table from the start.
In playing around with UltraLog!, I noticed that the developer has to manually build the list of fields to be audited. I thought, "If I want to audit stored fields, wouldn't it be cool if it could calculate the field list itself?" Well, in FMP12 it can, using the built-in ExecuteSQL function and the built-in "FileMaker_Fields" system table. I made a custom function, Get Normal FieldName List For Table ( tableOccurrenceName ), along these lines (the original post's ExecuteSQL call was cut off; I've used "¶" as the row separator, which gives a return-separated list):

Let (
	sqlQuery = "SELECT FieldName FROM FileMaker_Fields WHERE FieldClass = 'Normal' AND TableName = '" & tableOccurrenceName & "'" ;
	ExecuteSQL ( sqlQuery ; "," ; "¶" )
)
The FieldClass = 'Normal' clause causes it to omit calcs and summaries etc.
There's a catch though : the Filemaker_Fields table doesn't seem to have a column that says whether or not the field is a global. Filtering down to non-globals is an exercise for the reader ;-)
Your adaptation to collect a field list based on the SQL property 'FieldClass' is very interesting. Do you know of a SQL keyword that can be used to look through the FileMaker 'field comment' property in Define Database? I could see this as another way to specify the fields to be included in the audit trail.
To get a list of stored field names:

Let ( [
	tabchar = Char ( 9 ) ;
	returnchar = Char ( 13 ) ;
	ltn = Get ( LayoutTableName ) ;
	sqlQuery = "SELECT FieldName FROM FileMaker_Fields ff
		WHERE ff.TableName = ? AND ff.FieldClass = 'Normal' AND ff.FieldType NOT LIKE 'global%'
		ORDER BY ff.FieldName"
] ;
	ExecuteSQL ( sqlQuery ; tabchar ; returnchar ; ltn )
)
For anybody who references this thread: it was talking about FileMaker < 13. From v13 onwards there is a function, Get ( ModifiedFields ), that can be used at commit time to get the list of fields that were modified. Also, I've implemented Ray Cologon's UltraLog 2 across a system of 81 files and 250+ tables, and it is both quick and easy to implement. I also managed to put the archiving process into a separate dedicated database file, and use a server-side process to retrieve the log entries from all the tables.