Control record deletion and log when it occurs.
Then, during your sync process, add a step to process the log of deleted records.
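The deletion-log idea can be sketched in any language; here is a minimal, hypothetical Python illustration (the dictionaries stand in for the FileMaker tables, and all names are my own invention, not from this thread):

```python
# Minimal sketch of a deletion ("tombstone") log: every delete records
# the UUID, and a later sync step replays those deletes on the other side.

records = {"A1": {"name": "Pump"}, "B2": {"name": "Valve"}}
deletion_log = []  # UUIDs of deleted records, processed at sync time

def delete_record(uuid):
    """Delete a record and log the deletion so sync can replay it."""
    if uuid in records:
        del records[uuid]
        deletion_log.append(uuid)

def apply_deletions(other_side, log):
    """Sync step: remove every logged UUID from the other side's table."""
    for uuid in log:
        other_side.pop(uuid, None)

delete_record("B2")
remote = {"A1": {"name": "Pump"}, "B2": {"name": "Valve"}}
apply_deletions(remote, deletion_log)
print(sorted(remote))  # -> ['A1']
```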
360Works has MirrorSync for syncing only the data that has changed from your FM Go solution to FMS. Works great, and each sync takes a minimal amount of time since it only needs to sync data that has changed, been deleted, or been created.
When a record is deleted, unless you log it as lamar suggested, you won't have any way of knowing whether the record on the missing side was deleted or the still-present record was added. So you must (in my experience) have a separate file that retains the ID of each record after syncing. If a record is in this log and missing on one side, then it's been deleted. If it's missing from the log, then it's never been synced, which means it's been added.
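The rule above is easy to sketch: a log keeps the UUID of every record that has ever been synced, so a record missing on one side is either a deletion (it IS in the log) or an addition (it is NOT). A hypothetical Python illustration, with made-up UUIDs:

```python
# A record missing on one side is classified using the synced-ID log:
# previously synced -> it was deleted; never synced -> it was added.

def classify_missing(uuid, synced_log):
    """Return 'deleted' if the record was synced before, else 'added'."""
    return "deleted" if uuid in synced_log else "added"

synced_log = {"A1", "B2"}          # UUIDs recorded by earlier syncs
server_ids = {"A1", "B2", "C3"}    # C3 exists only on the server
ipad_ids = {"A1"}                  # B2 is missing on the iPad

for uuid in sorted(server_ids - ipad_ids):
    print(uuid, classify_missing(uuid, synced_log))
# -> B2 deleted
# -> C3 added
```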
I have never attempted this before, but if you are using a UUID as the record ID, couldn't you have a relationship between the Server table and the iPad table, and use it to narrow down the missing records?
Curious... as I said, I have never done this before, I'm just thinking out loud... (this would be option 3 in your list: find the records and delete the non-matching ones). I'd imagine finding matching records is simple; if the relationship is there, then you can create a calc to test whether there is a related record or not. The only drama I see is that if there is a relationship to the server file and the iPad does NOT have a connection to the server, then you might be annoyed by it attempting a connection, so perhaps an SQL call to the server table might be better?
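The "find the non-matching records via SQL" idea boils down to one query shape. A sketch using Python's sqlite3 standing in for an SQL call against the hosted file (table and column names are invented for illustration):

```python
# Find records on the server with no matching record on the iPad,
# matching on UUID. sqlite3 stands in for the real hosted-file SQL call.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE server (uuid TEXT PRIMARY KEY)")
con.execute("CREATE TABLE ipad (uuid TEXT PRIMARY KEY)")
con.executemany("INSERT INTO server VALUES (?)", [("A1",), ("B2",), ("C3",)])
con.executemany("INSERT INTO ipad VALUES (?)", [("A1",), ("B2",)])

# UUIDs present on the server but absent from the iPad:
missing = con.execute(
    "SELECT uuid FROM server WHERE uuid NOT IN (SELECT uuid FROM ipad)"
).fetchall()
print(missing)  # -> [('C3',)]
```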
You typically don't use a direct relationship between the local file and the hosted file, for exactly the reason you point out: Every time you open the local file, it'll want to open the hosted file. Not only is this unnecessary, it's annoying to the user.
Instead, you use a connector file between the two. This is a local file deployed via a container field in the local database, then deleted after use.
But yes, you would use a relationship from the connector file that joins the local and hosted files together based on UUID.
Yeah, makes sense... but SQL direct from the mobile device to the server would not be so annoying, right? As long as there is a file reference and a TOC in the relationship graph, would that work? I'm experimenting with this thought as we speak...
I honestly don't care for SQL so much through a connection. It's a great tool for grabbing data from multiple tables to do statistics, but it's slow. I do use UUIDs on everything (learned that a while ago). Finding non-matching records is a bit difficult when handling data through the connector file; I'd have to pass each ID through a global in the connector or make a self-relationship.

After playing with it today, I'm almost leaning toward erasing everything from the iPad and having the server update it from time to time, re-downloading the records. More for simplicity and an exact copy of the server. It's not that much data, though down the road it could be a problem. The files associated with the equipment or projects are only downloaded if they have been assigned to that user. I'm more afraid of having duplicate records between local and deployed.
PeterWindle, I can give you what I have to help you on your way with syncing data through a connector.
I'm experimenting with the SQL method now, and it works a treat. How fast or how well it would work in the real world is unknown. We're dealing with unstored calcs that only ever check one record in one table against the same record in another table, so I would imagine it would not be too bad. It works both ways, so from the server to the iPad it would be quick. But deleting a record from the server and then checking that it's gone from the iPad... might be a different story.
I have no real time to try a fully loaded sync test at the moment, but I have given it some thought in the past, and yeah, I figured I would need an exported FileMaker file going through a container from one table to another. That way it would have a way of comparing changes in records between one table and another.
The big questions that entered my mind here were...
Who or what has the highest right to claim the correct and up-to-date data? Is it based on who updated the data last, or something else entirely?
Within a multi-user (and therefore multi-sync) environment, would you need to develop a hierarchy table defining which users have higher rights to the data than others? Then: is this at record level or field level?
I'm trying to imagine situations where one person creates customer info, someone in the field updates an address, someone else updates the address again, and then someone in the office updates the contact name because it was typed in incorrectly... how do you consolidate all that? Sure, there will be separate tables for addresses and contact info... but you get my point? Who is right, who is wrong?
I've done de-duplication techniques along these lines, and there is no hard and fast rule, no right and wrong... the user spends a lot of time deciding which bits are right and which bits are not... lots of data weeding is required. Similar situations can occur with sync.
Even now, within the Contacts and music areas of my iPhone, I'm continually finding stuff I am sure I've deleted and that's just a simple delete. Sigh... about the only consistent thing is email via IMAP. Lessons to be learnt there, I'm sure.
Am I overthinking it?
No, you're not overthinking it at all. Sync is complicated!
You're absolutely correct that you need to be able to track when the modifications happened. A lot depends on the business rules. For example, if the server should take priority, then all the record changes on the server win, or vice versa. However, if it's a true two-way sync, then you have to determine which is the latest change and use that instead.
In the case of the solution I developed, I use a set of fields in each table to track field level modifications. Then while the sync process is happening, each field's timestamp is compared against the timestamp for the same field in the other file. (This is one of the things that makes import a poor solution once you get outside the basics: no way to do field-level syncing.)
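The per-field timestamp comparison described above can be sketched like this; the record structure and field names are illustrative, not the actual schema from this thread:

```python
# Sketch of field-level sync: each field carries its own modification
# timestamp as a (value, timestamp) pair, and for every field the side
# with the newer timestamp wins.

def merge_fields(local, remote):
    """Return a merged record where, per field, the latest timestamp wins."""
    merged = {}
    for field in local.keys() | remote.keys():
        lv = local.get(field, (None, 0))   # (value, timestamp)
        rv = remote.get(field, (None, 0))
        merged[field] = lv if lv[1] >= rv[1] else rv
    return merged

local = {"address": ("12 Main St", 100), "phone": ("555-1234", 300)}
remote = {"address": ("99 Oak Ave", 200), "phone": ("555-0000", 250)}
merged = merge_fields(local, remote)
print(merged["address"][0], merged["phone"][0])  # -> 99 Oak Ave 555-1234
```

Note that the two sides can each "win" different fields of the same record, which is exactly what a record-level import cannot express.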
You're also correct about a multi-user situation. That introduces a lot of complications. (Without that complication, for instance, you wouldn't need to log the changes outside of the main files.) However, I don't worry about which user changed the record; the latest timestamp wins. Here's the reason: a sync solution is supposed to synchronize across all instances of the solution, and all users have essentially the same priority. But I suppose if your business rules called for it, you could set up a table of users.
Thanks, Mike, for the information. One further question: how do you handle timestamps across multiple time zones? Or what's the best way? That's the latest bug I've found in my solution when traveling abroad: local time on the iPad is different from the server time, so a sync doesn't occur.
My idea would be, at the very beginning of the sync process in the connector file, to get the local time, get the server time, and set a field in the local file with the time zone conversion. All timestamps would then need to be adjusted by that offset. However, you then lose the ability to use the auto-enter modification timestamp in the field options. So you'd have to still use that, but with another calc field that converts the timestamp to server time once the local timestamp updates.
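The offset idea sketched in language-agnostic terms (a Python illustration with invented clock values, not FileMaker code): capture local and server time once at the start of the sync, store the difference, and normalize every local timestamp with it.

```python
# Compute a clock offset once at sync start, then apply it to every
# local timestamp so all comparisons happen in server time.
from datetime import datetime

def compute_offset(local_now, server_now):
    """Offset to add to local timestamps to express them in server time."""
    return server_now - local_now

def to_server_time(local_ts, offset):
    return local_ts + offset

local_now = datetime(2014, 6, 1, 9, 0, 0)    # iPad clock at sync start
server_now = datetime(2014, 6, 1, 17, 0, 0)  # server clock, 8 hours ahead
offset = compute_offset(local_now, server_now)
print(to_server_time(datetime(2014, 6, 1, 10, 30), offset))
# -> 2014-06-01 18:30:00
```

One nice side effect of computing the offset from the two live clocks, rather than from named time zones, is that it also absorbs plain clock drift between the devices.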
Sounds like a fun project to do across all of my tables........ What are your thoughts on this?
Thanks again for your help.
Good question! I haven't had to deal with multiple time zones, but the approach I would probably take is to convert all the timestamps to GMT and standardize on that. There's a calculation here that should get you started:
I think you're on the right track by setting the time zone offset at the beginning, but I would do it every time you open the database. That would retain your ability to use a calculated modification timestamp based on GMT.
I have changed all the timestamps to UTC using an additional calc field based on the auto-enter modification timestamp field. I have noticed something interesting, though, during the sync process. Fields are calculating correctly on both local and server. However, when it syncs, there is something built into FileMaker that automatically adjusts the time zone during the import from server to local. There is an hour difference between me and the server, and each time I sync, the local ends up an hour later. However, if I edit the record locally, the timestamp updates to an hour earlier, which then defeats my sync...
The trick is to keep the time the same across all platforms for the sync to work. I'll have to have a think about this. I think that with the timestamp field options, FileMaker has some built-in time zone conversion based on the operating system. Maybe by using just a number field I can get away from the time zone conversion. Thinking out loud.
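The "just a number" idea is the standard fix in other environments too: store modification times as seconds since an epoch, in UTC. A bare number carries no time zone, so nothing in the import path can silently shift it. A hypothetical Python sketch of the principle:

```python
# Store modification times as UTC epoch seconds (a plain number).
# The same instant recorded in two different zones yields the same number.
from datetime import datetime, timedelta, timezone

def utc_epoch_seconds(dt):
    """Convert an aware datetime to UTC epoch seconds (an int)."""
    return int(dt.astimezone(timezone.utc).timestamp())

# 12:00 at UTC+2 and 10:00 UTC are the same instant:
paris = datetime(2014, 6, 1, 12, 0, tzinfo=timezone(timedelta(hours=2)))
utc = datetime(2014, 6, 1, 10, 0, tzinfo=timezone.utc)
print(utc_epoch_seconds(paris) == utc_epoch_seconds(utc))  # -> True
```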
I don't use importing. I use direct record creation through a relationship. Importing creates other issues as well, like dealing with error trapping.
Have you seen the sync document from NightWing? It's in the resources section of this site. Might want to give it a look.