Can you test by deleting enough records to bring the count down to match what's in the other tables?
You know, Wim, sometimes the obvious is staring me in the face and I still miss it.
I will definitely try that!
Now that NEVER happens to me
The other thing I was thinking about was the number of indexes on those two rogue tables vs the others...
I'm assuming the slowdown happens at the moment of commit and that you are committing after each record? If so, can you keep all changed records open and do only one commit at the end?
No, the slowdown occurs right at the Set Field step. I already hold the commit until the very end (so if the process is interrupted, it can back out and not leave things in a... bad state). But good thought.
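Roughly, the commit handling looks like this (a trimmed-down sketch with placeholder table and field names, not the actual script):

```
Set Error Capture [ On ]
# All changes ride on one open record; nothing is committed yet
Open Record/Request
Set Field [ lineItem::status ; $newStatus ]
# ... more Set Field steps for the rest of the sync ...
Set Variable [ $error ; Value: Get ( LastError ) ]
If [ $error = 0 ]
    # One commit at the very end of the whole process
    Commit Records/Requests [ With dialog: Off ]
Else
    # Interrupted or errored: back out everything since Open Record/Request
    Revert Record/Request [ With dialog: Off ]
End If
```

And the delay shows up at those Set Field steps themselves, not at that final commit.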
Okay, I ran two tests: one with about 7500-8000 records in each table, and one with 3500-4000. The results indicate that yes, Virginia, the data load is the problem. After the second run, each Set Field takes about as long as it does in the other, less-populated tables. Slightly longer, but the entire process gets down to about 15 minutes, which should be okay.
So unless there's another answer, I guess I need to recommend archiving records to a table not involved in the sync so only the active stuff is part of the syncing process.
Thanks for the help.
It's still odd, though, because the Magic Key is just a variation on the transactional approach. I'm using the transaction-in-a-portal approach to create records through a portal in tables with well over 1,000,000 records without any noticeable delay.
Are those two tables wider than the "normal" ones?
It really is a transactional approach. I have a portal on the parent side, pointing to those tables, from a Virtual List table that contains the necessary keys.
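Stripped down, it looks something like this (the layout, portal object, and field names here are placeholders, not the real ones):

```
Go to Layout [ "VL_Sync" ( VirtualList ) ]
Open Record/Request
Loop
    Set Variable [ $i ; Value: $i + 1 ]
    Exit Loop If [ $i > ValueCount ( $data ) ]
    # The blank bottom portal row: setting a field there creates
    # a new related record inside the still-open transaction
    Go to Object [ Object Name: "lineItemPortal" ]
    Go to Portal Row [ Select: On ; Last ]
    Set Field [ lineItem::description ; GetValue ( $data ; $i ) ]
End Loop
Commit Records/Requests [ With dialog: Off ]
```

All of those rows get created inside one open transaction and committed together at the end.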
The tables aren't appreciably wider than any others in the system. (I know the lineItem table has 70 fields, but 52 of them are unstored calculations not anywhere on the current layout. And you can see lineItemElement has a minimal number.)
Do you use the parent as the base for the transaction? If so, I wonder if it would make a difference if you used a "transaction" table as the parent and treated the data tables as its children.
The other test that I would try is to delete half of the unstored calcs and see if there is a difference.
Yes, the VL table is the parent table for the transaction. I’m not sure I’m following you, though, on the alternate approach. How would a “transactions” table work?
I could try removing the unstored calcs, but there aren’t any in lineItemElement, so it seems unlikely they’re involved. That table is basically just as slow.
And you’re right: very weird.
It's basically a bit of extra work.
I typically use a transaction table where I create a new "parent" record for the transactions I want to handle. Gives me a chance to keep track of them and see which ones were ok and which ones were not. From that transaction parent I have relationships to the entities whose data I want to manipulate. That allows me to create "allow create" relationships for the transaction purpose that have no place anywhere in the "real" graph.
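Sketched out (all table, field, and relationship names made up):

```
# A dedicated parent record anchors the transaction and logs it
Go to Layout [ "Transactions" ( Transaction ) ]
New Record/Request
Set Field [ Transaction::startedAt ; Get ( CurrentTimestamp ) ]
# Transaction_lineItem has "allow creation" enabled and exists
# only for this purpose, nowhere in the "real" graph
Set Field [ Transaction_lineItem::description ; $description ]
Set Variable [ $error ; Value: Get ( LastError ) ]
If [ $error = 0 ]
    Commit Records/Requests [ With dialog: Off ]
    Set Field [ Transaction::status ; "ok" ]
Else
    # Revert rolls back the data AND the new Transaction record,
    # so create a fresh one just to log the failure
    Revert Record/Request [ With dialog: Off ]
    New Record/Request
    Set Field [ Transaction::status ; "failed: " & $error ]
End If
Commit Records/Requests [ With dialog: Off ]
```

The status field is what lets me go back later and see which transactions went through and which didn't.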
Okay, that’s easy enough.
In this case, the records are being created from a completely different file (a connector file between the local and hosted files of the “real” system). I thought about issues with the Graph, but the only relationships in the connector file are the ones that are needed for record manipulation.
Unless the Graph in the target file is causing an issue?