Your observation is valid.
There is nothing wrong with a multi-file solution, but it requires a little more care when making updates that should apply to all files: security settings, custom functions, and so on.
Try maintaining a database with over 600 tables.
The multi-file solution is useful. Through careful planning you can separate your files into those that change frequently and those that do not. There can be a main file with the programming and infrequently changing tables, and one file with all of your tables' critical data (D data - see below).
I also begin each of my tables with a letter defining the nature of the data contained within.
D = Data = User-entered (critical)
C = Calc = Calculated data; can be regenerated by the application
L = Lookup = Similar to D, but rarely changes
T = Temporary = Content not important except during actual processing
I = Import = Used during the import process when data/record clean-up is necessary
R = Report = Output/reporting format; content not important except during actual processing
These groupings help identify which tables are important for transfer.
Only D and occasionally L need be moved.
All of the other categories can be regenerated by a process.
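To make the transfer decision concrete, here is a minimal sketch (in Python, with made-up table names) of how the one-letter prefix convention can partition a table list into tables that must be moved and tables that can simply be regenerated:

```python
# Hypothetical table names following the one-letter prefix convention.
tables = [
    "D_Customers", "D_Invoices", "L_Countries",
    "C_InvoiceTotals", "T_ScratchPad", "I_RawImport", "R_MonthlySummary",
]

# Only D (user-entered) and sometimes L (lookup) data must be transferred;
# everything else can be regenerated by a process.
TRANSFER_PREFIXES = {"D", "L"}

transfer = [t for t in tables if t[0] in TRANSFER_PREFIXES]
regenerate = [t for t in tables if t[0] not in TRANSFER_PREFIXES]

print(transfer)     # ['D_Customers', 'D_Invoices', 'L_Countries']
print(regenerate)   # ['C_InvoiceTotals', 'T_ScratchPad', 'I_RawImport', 'R_MonthlySummary']
```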
You can also create a transfer process where you download your data to temporary files (I use the .mer format) and then run an import routine. When dealing with more than 1M records at a time, CSV or MER are the only practical formats. Even when I use the .csv extension, I use the MER setting to generate the file.
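The export/import round trip can be sketched with Python's standard csv module (the table name and fields below are hypothetical). A FileMaker merge (.mer) file is essentially a CSV with the field names as the first row, which is what the header row here mimics:

```python
import csv

# Hypothetical rows pulled from a D (user-entered) table.
fields = ["CustomerID", "Name", "Balance"]
rows = [
    (1, "Acme Corp", 1250.00),
    (2, "Widget Ltd", 87.50),
]

# Export: write a merge-style file, field names first, data after.
with open("D_Customers.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(fields)   # header row, like the MER setting produces
    writer.writerows(rows)

# Import routine: read the file back the same way.
with open("D_Customers.csv", newline="") as f:
    reader = csv.reader(f)
    header = next(reader)
    data = [tuple(r) for r in reader]
```

Note that CSV is untyped, so everything comes back as text; the real import routine has to map each column onto the right field type.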
I use a combination of the multi-file format and an import/export process. You did not say how many records you have, but I doubt it is more than the 50M+ my application maintains, with 25M+ in one table alone and an overall size of 60GB+.
With 35 tables it should not be very difficult to separate and define the update/transfer processes.