I'm not sure if this will help, but I have a technique I use to improve performance with search-as-you-type finds: I don't trigger a new find for every new keystroke. On the "enter your search term here" field, there's an OnObjectModify trigger. The OnObjectModify-triggered script doesn't do anything but install an OnTimer trigger for a fraction of a second from now (0.2-0.3 seconds works reasonably well for me). The OnTimer-triggered script performs the find (and uninstalls the OnTimer trigger so it doesn't repeat until more characters get typed). This way, the find won't run for every ... single ... typed ... character, but instead waits for a pause in the user's typing, since each new keystroke resets the OnTimer. Since users tend not to pay attention to find results until a pause in their typing anyway, this works out just fine.
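The trigger logic above is essentially a debounce. FileMaker script triggers obviously aren't Python, but for anyone who wants to see the pattern in isolation, here is a minimal sketch of the same idea using Python's standard `threading.Timer` (all names here are illustrative, not FileMaker APIs):

```python
import threading

class DebouncedFind:
    """Debounce: each keystroke cancels the pending find and schedules a
    new one, so the find only runs after a pause in typing."""

    def __init__(self, find_fn, delay=0.25):
        self.find_fn = find_fn  # the actual find/search routine
        self.delay = delay      # pause length in seconds (0.2-0.3 works well)
        self._timer = None

    def on_keystroke(self, text):
        # Equivalent of the OnObjectModify trigger: (re)install the timer.
        if self._timer is not None:
            self._timer.cancel()  # a new keystroke resets the pending timer
        # Equivalent of Install OnTimer Script: run the find after the delay.
        self._timer = threading.Timer(self.delay, self.find_fn, args=(text,))
        self._timer.start()
```

With a 0.25 s delay, typing "f", "fi", "fin", "find" in quick succession runs the find only once, with the final text, after the user pauses.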
If the script you're using to perform the find is also sorting the found set after each find, this might also improve sort performance on the first find: delaying until a longer search string has been typed results in a smaller found set to sort. Faster performance generally comes from asking the computer to do less.
An alternative method that I've used is to have the script test the number of characters: the find doesn't trigger until at least three characters have been entered.
Another thing that will slow your first results is that the client machine has to cache the largest number of records from the server at that point.
Check the number of fields in the table you are searching. Remember that caching requires downloading all fields in the viewed records' table, not just the fields on screen. This includes every field in the table except unstored calculations, and even those if they appear on the layout. The same is true if you display any related data, or calculation results that rely on related data: then the related data also has to be cached.
Your first and largest found set is going to trigger the largest caching of records from server to client. Each additional letter narrows the results from those already-cached records, which will always be very fast compared to the first results.
Thank you very much for your suggestions. I am not too familiar with the OnTimer trigger, but will try it out this weekend. Sounds like a good option.
Stephen, as for caching the records of the viewed records' table: if I understand you correctly, all the fields of the whole table are being downloaded and sorted. That would be a huge amount of data, especially since I have container fields as well.
I just tried implementing an "invisible" find in the script that runs when a user opens the database (freezing the window while the find and sort are executed in the background). It does not eliminate the delay, but users would not have to wait when they actually want to start a search. (But then everybody has to wait at the beginning. Not ideal either.)
Not necessarily all of the records for the whole table, but yes, all of the fields in the referenced tables for all records in the list view, even if the fields are not visible. And, yes, it can be huge if you have a wide table (lots of fields per record).
In those cases, the first caching will stay slow.
Stephen, Thank you for your explanation. Very helpful.
It led me to a solution that might not be state of the art, but works fast.
As my main table is quite large and as there are many references to other tables, I tried the following: I created a new table called QuickSearch. This table has a corresponding record for each record in the main table, and it consists mainly of two fields:
- the first field holds the primary key of the corresponding record,
- the second field gathers the contents of the fields that I want to search (field1 & " " & field2 & " " & ...) from the corresponding record.
The find layout that is based on the QuickSearch table shows a list with the matching records; clicking on one of these records opens a new window with the record of the main table in full view (using the primary key to find it).
The advantage is:
- Searching the QuickSearch table using the search-as-you-type feature is extremely fast, as there is far less data that needs to be cached.
The disadvantages are:
- The file needs more storage (in my case about 0.3%)
- You must make sure that when a record is created, changed, or deleted, the corresponding record in the QuickSearch table is updated. (This can be done easily via a script.)
I think I like this idea. Unless there are pitfalls that I am not aware of. In case anybody thinks of such a pitfall, please let me know.
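For anyone who wants to see the QuickSearch idea in miniature, here is a hypothetical in-memory sketch in Python (the table and field names are illustrative only): one narrow "table" holds just the primary key and a concatenated search field, searches run against that narrow table, and the matching primary keys point back into the main table.

```python
def build_search_text(record, fields):
    # field1 & " " & field2 & " " & ...  (mirrors the concatenation field)
    return " ".join(str(record[f]) for f in fields)

def rebuild_quicksearch(main_table, fields):
    # One QuickSearch row per main-table record: {pk: concatenated text}
    return {pk: build_search_text(rec, fields) for pk, rec in main_table.items()}

def quick_find(quicksearch, query):
    # Search only the narrow table; return matching primary keys, which
    # are then used to open the full record in the main table.
    q = query.lower()
    return [pk for pk, text in quicksearch.items() if q in text.lower()]

def sync_record(quicksearch, pk, record, fields):
    # Call this on create/change; on delete use quicksearch.pop(pk, None).
    quicksearch[pk] = build_search_text(record, fields)
```

The `sync_record` step corresponds to the disadvantage noted above: every create/change/delete on the main table must touch the QuickSearch table too, or the index silently drifts out of date.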
I use a similar utility table with just a few fields and a primary key for special reporting. The main trick is to make sure the records in the reporting table exist and are up-to-date at the time of the report/sort/whatever.
This can be done with a creation script that is called when creating the master record, if it's also created via a script. Or it can be done with an import on matching fields, using the primary key as the match field and adding new records for non-matches. That way it pulls the info it needs into the special fields (matching names, or via auto-enter calcs) based on the primary key value, creating new records if there is no match existing.
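The "import matching on primary key, adding new records for non-matches" step is what other tools call an upsert. A minimal Python sketch of that behavior, with illustrative names only:

```python
def upsert_by_primary_key(reporting, source_rows, copy_fields):
    """Mimic an import matched on primary key: update existing reporting
    rows in place, and create rows for keys with no match."""
    for pk, rec in source_rows.items():
        row = reporting.setdefault(pk, {})  # create a new row if no match
        for f in copy_fields:               # pull only the needed fields
            row[f] = rec[f]
    return reporting
```

After running this, every source record has a corresponding, up-to-date row in the reporting table, which is exactly the invariant the report depends on.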
Just make sure you always have a corresponding record in the reporting table when you do the report.
Actually, the one field type that is NOT sent from server to client, and therefore not cached, is a container field (unless it is visible on a layout).