It might be the full indexing of the text fields.
Make a copy of the file, turn indexing off for those two fields, save it as a compressed copy, and see how big it is.
(Remember that not explicitly turning indexing off can mean the fields get indexed again later, as soon as you do a find on them; the same holds for defining relationships.)
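If you'd rather check the size from inside FileMaker instead of in the Finder, one quick way is a one-step script built on the Get ( FileSize ) function, which returns the current file's size in bytes. A minimal sketch; the dialog title is just for illustration:

    Show Custom Dialog [ "File size" ; Get ( FileSize ) & " bytes" ]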
Anyway, since it's a test database, you could post a zipped clone for us to play with.
I removed the indexing on the primary key field and saved a compressed copy, which dropped the file size to 2.84 GB. None of the other text fields were indexed. 2.84 GB still seems quite large for only ~46,000 records: that works out to over 60 KB per record.
If you're on a Mac, you can check whether your HD is screwed up by rebooting with Cmd-S held down (single-user mode).
When you get a prompt, type /sbin/fsck -fy and press Return.
Once it finishes, type reboot and press Return.
The 2.84 GB is out of this world; I'd really fancy experimenting a bit with your file!
I think I've discovered the issue:
The file tests the speed difference between four ways of creating records:
1. A looping script using the "New Record" script step
2. A looping script using the "Duplicate Record" script step
3. A script performing an export/import routine to create new records
4. A portal creating new records via a relationship
The second test, duplicating records, is the offender. For reasons unknown to me, each duplicated record takes up a huge amount of space; I imagine something strange is going on with the indexing.
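For anyone who wants to reproduce it, here is roughly what the first two looping scripts look like. This is a sketch only, assuming a version with script variables (FileMaker 8 or later); the 46000-record limit matches my test, but the real file's scripts may differ in detail:

    # Test 1: create empty records with New Record/Request
    Set Variable [ $i ; Value: 0 ]
    Loop
        New Record/Request
        Set Variable [ $i ; Value: $i + 1 ]
        Exit Loop If [ $i ≥ 46000 ]
    End Loop

    # Test 2: identical loop, but duplicating the current record instead
    Set Variable [ $i ; Value: 0 ]
    Loop
        Duplicate Record/Request
        Set Variable [ $i ; Value: $i + 1 ]
        Exit Loop If [ $i ≥ 46000 ]
    End Loop

Since the only difference between the two loops is the Duplicate Record/Request step, whatever extra bloat shows up after test 2 should be attributable to duplication itself.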