Assuming the files are stored in a directory on the server and FileMaker only references them, there are third-party apps you can point at a directory to do a batch re-save of the files with better compression. We had this exact situation with TIF files years ago; we used a utility to scan the directory and re-save the files with Group 4 compression. The utility we used was reaConverter, located here: http://www.reaconverter.com/. It appears PDFs are supported, I'm just not sure to what extent.
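If you'd rather script the batch re-save than use a GUI converter, Ghostscript's pdfwrite device can do much the same thing from the command line. This is only a sketch under assumptions: the directory names and the /ebook quality preset are mine, and the function prints each command rather than running it, so you can review the batch before piping the output to sh (or remove the echo to run directly).

```shell
# recompress_pdfs SRC DST: print a Ghostscript re-save command for each
# PDF in SRC. The /ebook preset downsamples images to ~150 dpi, which
# usually shrinks scanned PDFs considerably. Pipe the printed commands
# to sh to actually execute them.
recompress_pdfs() {
  src="$1"; dst="$2"
  mkdir -p "$dst"
  for f in "$src"/*.pdf; do
    [ -e "$f" ] || continue   # glob matched nothing; skip
    echo gs -q -dBATCH -dNOPAUSE -sDEVICE=pdfwrite \
         -dPDFSETTINGS=/ebook \
         -sOutputFile="$dst/$(basename "$f")" "$f"
  done
}
```

Usage: `recompress_pdfs ./pdfs ./pdfs-compressed | sh` after you've eyeballed the dry-run output.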
If the files are stored in the container field rather than as references, more steps would be needed: export the file, run the conversion, then re-import the file.
Pre-FM11 you could set imports to grab a thumbnail of imported PDFs, but FM11 does not actually resample PDFs for thumbnails.
We have encountered exactly this same problem, and are keeping one FM10 machine running just for managing PDF imports so that file size grows much more slowly than with FM11. We have one advantage over your situation: we don't add PDFs constantly, but on a periodic schedule, so we only back up the PDF storage file once each night, which is more often than we import into it.
Remote/WAN access will fail much more often if large PDFs are part of the record, because groups of records must be cached IN FULL: all fields, including any container content set to be shown.
WAN connections can take 10 or even 100X as long to cache records from a server, so record field counts and content size can be critical.
One solution is to place the PDF in a related table, so when a record is loaded only that one related record's container content has to be cached, not the containers for a whole group of records.
There are some FMI documents on WAN issues available which may help you better understand what you are experiencing with WAN connections, and offer some additional ideas.
My medical practice files are also huge… 65 GB in total. I have a scan table that is in a separate database. I only back up through the night; I can do four separate, sequential backups, each taking about an hour and ten to fifteen minutes at this point.
It is just not feasible to do backups during the day while seeing patients.
Ron Smith, MD
The other thing we did to speed backups was to switch our server to Solid State Drives. Our backups were over 12 minutes with fast hard drives, but they dropped to 10 SECONDS when we went to SSDs.
A time savings of over 98% made it well worth it. Live clients don't even notice a backup while using the files now. We have live client connections 24/7, so backups were becoming a major issue before going to the SSDs.
Interesting. Is the server's main (root) hard drive also an SSD, or just the FMS directory? I'm curious what might be gained just by moving the db backups to an SSD.
The db in question was something I inherited. After visiting with the primary user, I realized that the previous developer had set it up so that every initial connection would load all patients, and their records had a portal to the attachment file. Talk about a spike. I can't recall for certain, and I have yet to review the WAN docs (thanks), but I'm pretty sure that when a record loads, related data is pulled as well.
Fortunately, their office is closed over the weekend. I'll be removing the "open all" script buttons, which the owner doesn't want anyway, as well as removing the attachment portal and replacing it with a button that will bring up a list of just that patient's attachments. That should certainly speed things up a bit and hopefully put a halt to the number of lost connections each day.
Thanks to everyone for the feedback. I'll be updating this thread after the weekend.
You may also want to take a look at using Ghostscript along with ImageMagick to batch convert your PDFs to a smaller size. I have used these to batch convert single-page PDFs to JPGs and it works very well… less than a minute to do 230 single-page PDFs. Runs from the DOS command line.
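The batch conversion above can be sketched as a small shell loop around ImageMagick's `convert` command (which calls Ghostscript under the hood to rasterize PDF pages). The directory names, 150 dpi density, and JPEG quality are assumptions to adjust; the function prints each command as a dry run so you can inspect the batch before piping it to sh.

```shell
# pdfs_to_jpgs SRC DST: print an ImageMagick command converting each
# single-page PDF in SRC to a JPG in DST. -density sets the dpi used
# when Ghostscript rasterizes the PDF; -quality sets JPEG compression.
pdfs_to_jpgs() {
  src="$1"; dst="$2"
  mkdir -p "$dst"
  for f in "$src"/*.pdf; do
    [ -e "$f" ] || continue   # glob matched nothing; skip
    name=$(basename "$f" .pdf)
    echo convert -density 150 -quality 80 "$f" "$dst/$name.jpg"
  done
}
```

Usage: `pdfs_to_jpgs ./pdfs ./jpgs | sh` once the printed commands look right. Note that multi-page PDFs would produce one JPG per page with numbered suffixes, so this sketch targets the single-page case from the post.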
Yes, your WAN performance with a portal on screen for the images/PDFs at opening is going to SLOOOOOW things way down.
However, it's at least as much affected by which layout the file is left on when you last close it UNhosted.
Pick a layout in a CLOSING script for the file, so that when it was last closed unhosted it is left on a layout with no images, no portals, and not in List view (which caches more records than Form view). If you have a layout for a single-record table of global fields, that's a wonderful place to close the file unhosted, regardless of where your on-Open script takes you.
The last unhosted closing layout is where the file opens and caches data BEFORE running the On-Open script. All that stuff has to come down the pipe when it's hosted, and the on-Open script won't solve the problem alone.
And, yes, ALL drives on our FM Server machine are SSDs.