At what size does a file become too large to work well on a server? I have a file that is 137,000 KB. Is this too large to be placed on the server? Should it be broken down by years?
Thank you for your help.
Thanks for posting!
I think file size is a relative issue in terms of creating a 'performance cap' of sorts. Look at it this way: which solution would you expect to have smoother performance, an 80 GB file containing two text fields, or an 80 GB file containing hundreds of unstored calculations and container fields with images?
I would place more emphasis on how complex your database is (do that many fields really need to be indexed, do you really need that many unstored calculations or file references, etc.), how many concurrent users you are estimating, how they will be using the database, and so on.
So, essentially, there is no specific, known performance cap on database size. :)
Hope this helps,
I'd appreciate your view of my position. I have four databases hosted online, and want to make part of one table in one of them publicly accessible over the web. My instinct is to shift this public data into its own database. But: We pay for hosting per database, so I want to make the right call about this.
The database concerned has about 45 tables and currently stands at 46 MB, with dozens of complex relationships. The table I want to publish has about 450 fields, but I need to publish only about 20 of them, across roughly 1,500 records.
Do you think we would see worthwhile web publishing performance gains by shifting those 20 fields of data into their own database?