Thank you for your post.
The more information you put into a single field, the more the text needs to be interpreted and the more memory is required. 86,000,000 characters in a text field can definitely slow down performance.
Assuming there are no return characters in the file, I would try to split the text file into smaller files. That way, you can parse the information in smaller groups.
If there are return characters in the file, then I would import the data into a separate table and parse the information, grabbing and putting information into the main table.
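The splitting suggested above can be done outside FileMaker before import. Here is a minimal sketch in Python; the file path, chunk size, and `.partNNNN` naming are assumptions for illustration, not anything FileMaker prescribes:

```python
# Split a large text export into smaller files of at most chunk_lines
# lines each, so they can be imported into FileMaker one at a time.
def split_file(path, chunk_lines=1000):
    part = 0
    out = None
    with open(path, "r", encoding="utf-8") as src:
        for i, line in enumerate(src):
            # Start a new part file every chunk_lines lines.
            if i % chunk_lines == 0:
                if out:
                    out.close()
                part += 1
                out = open(f"{path}.part{part:04d}", "w", encoding="utf-8")
            out.write(line)
    if out:
        out.close()
    return part  # number of part files written
```

For example, a 2,500-line file split with `chunk_lines=1000` produces three part files, and each part can then be imported as a separate pass.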
Hi, when I started out I wanted FMP to be the tool for everything. But it isn't; for one thing, it's not good at parsing text. So I would parse the text with a command-line tool invoked by AppleScript; it would be a lot faster. You take your file A, parse it into a 7,000-record XML file B, and then import B.
Also take a look at the free 360Works plug-ins or SmartPill.
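The file-A-to-XML-file-B step above could be sketched roughly as follows, assuming the source is tab-delimited. This targets FileMaker's FMPXMLRESULT import grammar, but the field names and paths are placeholders, and FileMaker may also expect header elements (ERRORCODE, PRODUCT, DATABASE) not shown here, so compare the result against a file exported from FileMaker itself:

```python
from xml.sax.saxutils import escape

# Convert a tab-delimited text file into a rough FMPXMLRESULT file
# that FileMaker's XML import can read. field_names must match the
# column order in the source file.
def text_to_fmpxml(src_path, dst_path, field_names, delimiter="\t"):
    with open(src_path, encoding="utf-8") as src, \
         open(dst_path, "w", encoding="utf-8") as dst:
        rows = [line.rstrip("\n").split(delimiter)
                for line in src if line.strip()]
        dst.write('<?xml version="1.0" encoding="UTF-8"?>\n')
        dst.write('<FMPXMLRESULT xmlns="http://www.filemaker.com/fmpxmlresult">\n')
        dst.write("<METADATA>\n")
        for name in field_names:
            dst.write(f'<FIELD NAME="{escape(name)}" TYPE="TEXT" '
                      'EMPTYOK="YES" MAXREPEAT="1"/>\n')
        dst.write("</METADATA>\n")
        dst.write(f'<RESULTSET FOUND="{len(rows)}">\n')
        for row in rows:
            dst.write("<ROW>")
            for value in row:
                # Escape <, >, & so the output stays well-formed XML.
                dst.write(f"<COL><DATA>{escape(value)}</DATA></COL>")
            dst.write("</ROW>\n")
        dst.write("</RESULTSET>\n</FMPXMLRESULT>\n")
```

A script like this can be called from the shell or via AppleScript's `do shell script`, which is the workflow described above.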
I would do as the two previous posts mentioned. The problem isn't that FileMaker can't store 1 billion characters in a text field, the problem is that the text must be stored in intermediate memory during import or copy-paste through clipboard, and there either the plugin or FileMaker may fail.
As soon as your text is structured (e.g. as XML), import gets easier, because FileMaker can handle individual, smaller chunks. In your case, if you are able to assign a structure before import, your text would be split into chunks averaging 86*1024*1024/7000 = about 12.6 KB each.
In my experience, one can import XML files of up to about 650 MB. Beyond that, the import fails because the Xerces XML parser can't handle more than 2 GB in memory, since it was compiled with these limits by FileMaker (it's not only the text that must be kept in memory, but also the XML structure).
Another trick during import is to turn field indexing off. This speeds import up considerably. After import, indexing can be turned back on. Don't be surprised if indexing takes a long time, possibly even hours.
I'm wondering how I use FM to import any large text file, structured or not, in pieces as you suggest. I suppose I need a plugin, but at least the one I have from Troi doesn't work. The data I'm looking at is already structured (although not true XML). It seems to me that my only option is to use a text editor to break the file up manually into smaller pieces. Any other ideas? Is there no native FM method to import a large text file?
Mark Dambro wrote:
The data I'm looking at is already structured (although not true XML). It seems to me that my only option is to use a text editor to break the file up manually into smaller pieces. Any other ideas? Is there no native FM method to import a large text file?
Could you provide an overview of how the original data is structured?