EDIT: This script is running as a server-side scheduled script.
I have an accounts table in my database with three unstored calculation fields: one for the account status, one for the status date, and one for the last time the account was touched (not the same as last modified).
All three fields are based on whether related records exist and when they were created. That's fine for display purposes, but since unstored calcs can't be indexed, performing a find on a specific account status is obviously very slow.
So I mirrored those fields with stored fields and wrote a script that runs every 30 minutes to get the calculated values and store them for searching & sorting quickly.
I'm trying to figure out how to speed this script up, because it takes about 15 minutes to go through all 7,200+ account records and update them.
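For context, the sync script is essentially just a loop that copies each calc into its stored counterpart. A simplified sketch (layout and field names here are placeholders, not my real ones):

```
# Simplified sketch of the 30-minute sync script
Go to Layout [ "accounts" ]
Show All Records
Go to Record/Request/Page [ First ]
Loop
  Set Field [ accounts::account_status_stored ; accounts::account_status_calc ]
  Set Field [ accounts::status_date_stored ; accounts::status_date_calc ]
  Set Field [ accounts::last_touched_stored ; accounts::last_touched_calc ]
  Go to Record/Request/Page [ Next ; Exit after last: On ]
End Loop
```

Every Set Field forces the unstored calc to evaluate, so the per-record cost adds up across the whole table.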
I tried creating another calculated field, is_stored_field_mismatch, which returns 1 if any of the stored fields doesn't match its calc counterpart. I figured I could sort by that field (descending), loop through the records checking the flag, and exit the loop at the first record where it's false, saving the time of walking the entire set.
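The flag field is just a comparison of each stored field against its calc twin, roughly like this (field names are illustrative):

```
# is_stored_field_mismatch (calculation, number result)
# Returns 1 when any stored copy is out of date
If (
  account_status_stored ≠ account_status_calc
    or status_date_stored ≠ status_date_calc
    or last_touched_stored ≠ last_touched_calc ;
  1 ; 0
)
```

Note that because it references unstored calcs, this flag is necessarily unstored too, which I suspect is exactly why sorting on it is so slow.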
The sorting turned out to be super slow, about as slow as just looping through the entire table and checking records one by one.
I've also tried ExecuteSQL to get a list of UUIDs that need to be updated based on that field, hoping it would be quick, but it isn't.
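The ExecuteSQL attempt looks roughly like this (table and field names are placeholders for whatever your schema uses):

```
# Grab primary keys of records whose stored copies are stale
Set Variable [ $uuidList ;
  ExecuteSQL (
    "SELECT uuid FROM accounts WHERE is_stored_field_mismatch = 1" ;
    "" ; ""   # default field and row separators
  ) ]
```

My guess is that since the flag is unstored, the WHERE clause still has to evaluate the calc for every row, so the query ends up doing the same work as the full loop.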
Anyway, just wondering if anyone has any tricks they use to speed this process up. It's not a huge deal right now, but as the database continues to grow it may no longer be possible to finish an update cycle every 30 minutes, and that would cause some issues.