I have an offline sync setup with a middle-man file. This file contains references to both the local (iOS device) file and the hosted remote file. (See attached Graph.) The sync file sits in the middle and has one table whose global fields act as conduits between the deployed file and the hosted file, e.g. Deployed_Orders::OrderID <=> Sync::OrderID_g <=> Hosted_Orders::OrderID. There is one such global field for each table being synced.
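In text form, the conduit for the Orders table looks like this (a sketch of the attached Graph; the field names are from my file):

```
Deployed_Orders              Sync                  Hosted_Orders
---------------       -----------------           -------------
OrderID  ==========   OrderID_g (global)  ======  OrderID
```

Setting Sync::OrderID_g should, in principle, point both relationships at the same record on each side.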
I thought this was all working well, but then discovered that there appears to be a bug in my sync routine.
I am using transactions in the sync process, and thus a magic-key/Selector-style technique for reading and writing record data. The scripted routine collects a list of IDs that need to be synced, then loops through them: it sets the global key field in the central Sync table, reads from one side (deployed or hosted, depending on which direction the sync is going), and writes to the other. There is also a check for records that need to be deleted. For that, I have a portal off in the gutter of the layout that looks at the Orders table via the same relationship the sync process uses, i.e. the global key field controls the portal (it shows only one record at a time). The script does Go to Object [the portal], Go to Portal Row [First], Delete Portal Row.
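The per-record loop, in the Pull direction, looks roughly like this (the step names are FileMaker's; the object name, variable names, and "&lt;fields&gt;" placeholders are stand-ins for what's in my file, not the literal names):

```
# Pull: read from Hosted (source), write to Deployed (target)
Loop
  Exit Loop If [ $i > ValueCount ( $idList ) ]
  Set Field [ Sync::OrderID_g ; GetValue ( $idList ; $i ) ]
  If [ source record is flagged as deleted ]
    Go to Object [ Object Name: "portal_OrdersDelete" ]
    Go to Portal Row [ Select ; First ]
    Delete Portal Row [ No dialog ]
  Else
    # read through the source relationship, write through the target one
    Set Variable [ $data ; Value: Hosted_Orders::<fields> ]
    Set Field [ Deployed_Orders::<fields> ; $data ]
  End If
  Set Variable [ $i ; Value: $i + 1 ]
End Loop
```

The Set Field on Deployed_Orders is what creates the target record via the magic-key relationship (allow-creation is enabled on that side).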
The problem showed up when pulling records from the server to the device: not all of the Orders came through. This was during the Pull process, reading from the Hosted file (source) and writing to the Deployed file (target). The first record or two came through OK; the trouble began after creating or updating the first record. Stepping through, I discovered that the magic-key process was NOT updating the write-target relationship correctly. The central global key field would be set, and the Source/Hosted relationship would update to reflect the next OrderID, but the Deployed relationship was still pointing at the previous OrderID. So the routine would overwrite the target record it had created in the previous loop iteration.
I tried a variety of fixes. I used Todd Geist's 'Refresh XJoin' script (from his Selector architecture, which is partly what this is based on) to touch the relationship, but that didn't work. (This situation has only two tables, Sync and the target, rather than the typical three-table chain in Todd's Selector architecture, so I was already setting one of the two match fields when I set the global. What would the second predicate field be set to? The ID of the record it was already looking at? I tried that and it didn't help, but it didn't make sense anyway, because that was the wrong field. Todd's technique uses a cross-join relationship with a generic "constant1"-style field.)
On a whim I also tried inserting a Pause step; that didn't work either.
After stepping through the process a number of times, I did find a workaround. Remember the deletion portal I mentioned? It turns out that when a deletion occurred, the Go to Object / Go to Portal Row sequence caused the relationship to update correctly at the Go to Portal Row [First] step. So I put that sequence into my sync loop steps, and it works now. (I'm not doing the deletion step, obviously, just the Go to Portal Row [First].)
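Concretely, the steps I now run right after setting the global key are (the portal's object name here is a placeholder for whatever the deletion portal is actually named on my layout):

```
Set Field [ Sync::OrderID_g ; $nextOrderID ]
# workaround: visiting the portal forces the target relationship
# to re-evaluate; no Delete Portal Row follows
Go to Object [ Object Name: "portal_OrdersDelete" ]
Go to Portal Row [ Select ; First ]
# after this, Deployed_Orders points at the new OrderID as expected
```

With those two steps in place, the write side tracks the global key on every iteration instead of lagging one record behind.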
Note that I was only examining the Pull side of the sync while troubleshooting, so I'm not sure whether the same problem crops up on the Push half of the process.
My question: why doesn't the original setting of the key field cause the target side of the relationship to update? The source side updates just fine. I have the workaround, but it seems rather kludgy; I'd rather it work the way it seems it should.