What about 'Record Locking'? Are there other users in the file at the same time as the script runs who might be locking the skipped records?
Good thought, but I don't think that is the case because I set this up on a standalone install on my laptop and I am the sole user with access to it. The database won't be put into a network version until later.
I wonder if the file's indexes are in proper condition.
You can try:
Turning the indexing off and back on for the fields used in the search, sort, or relationship
Testing a recovered copy. (Recover rebuilds the indexes in all fields, so this may fix a field you didn't think to re-index.)
I think I didn't explain the problem very clearly. Let me try again. I'm copying a block of text from a source .txt file into a global field. The text is a sectional index for a reference manual containing nine sections. The idea is to create a file that becomes a master index, with similar topics from different sections cross-indexed. Here's a representative sample from the source list:
Backing Up Your Data|10
Deleting a Document|10
A variable is populated with the first line (e.g., Quick Start) and a new record is created. If the line does not contain a pipe character (|), the entry represents a major topic and the text goes into the primary category field. If the line DOES contain a pipe, the text is split: everything to the left of the pipe goes into the subcategory field, and the characters to the right go into another field (page numbers). The top line is then removed from the initial "list" and the process repeats until the list is empty. The total number of lines in a given section varies from about 80 to as many as 800.

What's going wrong is that some lines are skipped (no record created) or the entry is partially incorrect. There isn't any discernible pattern, e.g., a record failing every so many lines; it might be the 27th, 30th, 34th, and 45th lines, or some other seemingly random sequence.

I ran a series of experiments to try to isolate the issue. I found that if, instead of processing the entire list (e.g., 80 lines), I broke it down to only 25 lines at a time, all the records are created and the data is correct. Thinking it was a timing issue, I tried timestamping each record and several other techniques. Even the number of records created in a single second varied from 25 down to as few as 10.

Each record contains only a small amount of data: an auto-enter ID, a Section ID (added by the script), the usual creator, modifier, and record ID fields, and the essential data. There are no relationships through which I'm working, no layout navigation, and no other windows open. I tried flushing the cache to disk every 25 records, but I still found the same hiccoughs (no record created, for the same lines each time). The text in a "skipped" line varied; sometimes it was a main topic line, but at other times it might be in the middle of a series of subcategory items.

Obviously, if I have to, I can work with only 25 lines at a time, but that's just a workaround.
I'd like to understand what is causing the problem.
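For what it's worth, the parsing logic you describe can be checked outside FileMaker. Here's a minimal Python sketch of the same loop (the field names `section`, `category`, `subcategory`, and `pages` are hypothetical stand-ins for your actual fields):

```python
def parse_index(text, section_id):
    """Parse a sectional index into one dict per record.

    Lines without a pipe set the current major topic (primary category);
    lines with a pipe split into subcategory | page numbers.
    """
    records = []
    current_category = None
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue  # skip blank lines rather than creating empty records
        if "|" not in line:
            current_category = line
            records.append({"section": section_id, "category": line})
        else:
            # Split only on the first pipe, in case page data contains one
            subcategory, pages = line.split("|", 1)
            records.append({
                "section": section_id,
                "category": current_category,
                "subcategory": subcategory,
                "pages": pages,
            })
    return records
```

Running the same source text through a reference parser like this would at least confirm whether the skips come from the parsing logic itself or from the record-creation side in FileMaker.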
I suggest posting your script(s) here for examination; perhaps there's an issue there. If you use FileMaker Pro Advanced, you can copy and paste your script from a Database Design Report. If you don't, you can print your script as a PDF and copy the text from the PDF.