YES! thank you, Stephen.
We will be doing some Server 12-based container imports this week, using Secure External Storage on the server. I will also post our speed test results when done.
The following works for Open storage running on FMS, and may lead to some customized ways to separate large lists.
"Slides/Image/" & Minute ( Create_TimeStamp )
I used a field that would not change for the life of the record, because I'm not sure how the server would handle a path that changes. From a few simple tests, it appears that changing the path using
"Slides/Image/" & Some_Enterable_Field
keeps the image in the original folder, but I can't be sure this doesn't cause havoc later.
That will certainly break things down, but it is still not completely scalable. One must be sure that no directory will ever exceed a specific number of files, and Minute ( TS ) yields only 60 possible values, so it will repeat unpredictably often on extremely large record counts. That's one risk of Open storage. Another is that users will mess with it in non-server solutions.
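To illustrate the scalability concern, here is a small Python sketch (not FileMaker calc) of why bucketing by Minute() caps out: there are only 60 possible bucket names, so every folder grows linearly with the record count. The `minute_bucket` helper is a stand-in for `Minute ( Create_TimeStamp )`.

```python
# Sketch: bucketing 600,000 records by a minute value (0-59).
# With only 60 possible folder names, each folder grows without bound.
from collections import Counter

def minute_bucket(record_id):
    # Stand-in for Minute ( Create_TimeStamp ): any value in 0..59.
    return record_id % 60

counts = Counter(minute_bucket(i) for i in range(600_000))
print(len(counts))           # 60 distinct folders
print(max(counts.values()))  # 10000 files in the fullest folder
```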
And, yes, you are correct that the path should never change once the file is stored.
However, FM stores a reference to where it placed the file at the time of storage, so the fact that the field used to build the path later changes does not mean the stored file is moved. I understand that the path at the time of storage is what's controlled via any user-calculated path.
Completely scalable, because Some_Enterable_Field can be anything. Use whatever calculation satisfies your breakdown (year, month, date, hour, minute, ID, random number). Go wild and place each file in a new folder.
Edit: e.g., a folder named Int ( ID / 100 ) seems to work well with hundreds of imported images.
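The Int ( ID / 100 ) idea can be sketched in Python like this: every 100 sequential IDs share one folder, so no folder ever holds more than 100 files, and new folders appear automatically as IDs grow. The `Slides/Image/` prefix is taken from the calculation earlier in the thread.

```python
# Sketch of Int ( ID / 100 ): integer division buckets IDs into
# folders of at most 100 files each.
def folder_for(record_id: int) -> str:
    return f"Slides/Image/{record_id // 100}"

print(folder_for(1))     # Slides/Image/0
print(folder_for(250))   # Slides/Image/2
print(folder_for(9999))  # Slides/Image/99
```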
What is gained from Open storage over Secure if you have to create such a complex calc that it becomes hard to locate your file in the resulting directory structure?
As far as I can tell, the only reason to prefer Open storage is the accessibility of the stored files via the OS, and that access should only be allowed in a non-server environment anyway.
Secure storage manages all the directory stuff without the effort, and lets FM optimize the number of directories required. Much more efficient than having to place each file in its own directory to keep things scalable.
Not saying it wouldn't work -- just can't imagine why anyone would bother.
I have not done the research as yet, so perhaps someone can tell me the answer:
If I need to access a file stored as a "Remote Container" via Custom Web Publishing, does the "open" vs. "secure" setting make a difference?
Of course, if I want to access the file via a URL, I have to know the exact path. In my current (FMSA 11) environment with SuperContainer, I maintain a small "index" table of documents stored via SuperContainer. I create a scrambled (randomized) path to each SuperContainer document to prevent unauthorized access, and I store this scrambled path in my index table; my PHP code can then retrieve the path, construct a URL, and open a new window to, for example, display a JPEG all by itself in the new window.
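The scrambled-path idea can be sketched as follows. This is a hypothetical Python illustration, not the poster's PHP: the function name `make_scrambled_path` and the `BASE_URL` value are made up for the example, and the random token plays the role of the unguessable folder name stored in the index table.

```python
# Hypothetical sketch: generate an unguessable path segment per document,
# store it in an index table, and build the public URL from it later.
import secrets

BASE_URL = "https://example.com/SuperContainer/Files"  # placeholder host

def make_scrambled_path(doc_name: str) -> str:
    token = secrets.token_urlsafe(16)  # 128 bits of randomness
    return f"{token}/{doc_name}"

path = make_scrambled_path("invoice-42.jpg")  # e.g. "xK3...Qw/invoice-42.jpg"
url = f"{BASE_URL}/{path}"
```

Because the token is random, an outsider cannot enumerate document URLs, yet any code with access to the index table can reconstruct them.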
I was assuming I would need to use the "open" setting to duplicate this functionality, as I was guessing there would be no way to programmatically retrieve the path to a Remote Container stored in "secure" mode.
Anyway, if I am correct, this might explain "why anyone would bother." ;-)
Thanks for any feedback!
You write that the OS fights having all the files in one folder. Is that the situation with FileMaker Server creating files in a directory (from a container field)?
In our case, that meant over 60,000 container-content files being written into one OS-level directory. The OS itself fights this kind of overload of a single directory, and the transfer process comes to a near standstill after the first few hundred records.
The discussion is very interesting, especially on how to structure the folders and keep an overview. But I searched official Microsoft and Apple sources, plus comments on the net, and it seems that Mac OS X has no real limits, that the same is true of NTFS, and that Linux should also be OK.
Have any of you tried putting 100,000 files in one directory with FileMaker?
Mac OS X
All versions of Mac OS X support up to 2.1 billion files in one folder, limited of course by storage size.
FAT: 65,517 files, and max disk size 4 GB
FAT32: 65,534 files in a single folder, and max 268,435,437 files on disk
NTFS: 4,294,967,295 files on disk or in a single folder
Read the official rec
Actually, as trickykid said, no: you can have three 4 GB files in the same dir. It's actually a physical limitation. 4 GB is the maximum on a 32-bit system, because the largest number you can write is 2^32 = 4,294,967,296,
so you can't say "I would like to request byte number 5,000,000,000"; that number can't be sent over a 32-bit bus.
On a 64-bit system that barrier is lifted to a much higher value, 2^64.
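The arithmetic behind that claim, spelled out in Python:

```python
# 32-bit vs 64-bit addressing limits from the post.
print(2**32)  # 4294967296 -> the 4 GB ceiling of a 32-bit address
print(2**64)  # 18446744073709551616 -> the 64-bit ceiling

# Byte offset 5,000,000,000 does not fit in 32 bits:
print(5_000_000_000 > 2**32)  # True
```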
It is easy to test how FileMaker/FileMaker Server reacts to a folder with, let's say, 500,000 files. During testing of the pre-release versions we tested with 10,000-20,000 files; this did not cause any problems.
Consider turning off all automatic functions, such as indexing, on the file server.
FileMaker does indeed give us the ability to organize the files placed locally on the client computer or on the server.
We have all the tools we could wish for:
How could you choose to organize your files (depending, of course, on which of these entities exist in your solution)?
- By Project
- By Customer
- By file type:-)
- By anything else
If you are really afraid of too many files (see my next post), and if you do not want to use a logical structure based on entities and attributes, you could choose to create a counter counting files and then have a new folder created for each 10, 100 or 1,000 files.
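The counter idea above can be sketched in a few lines of Python (the `Files/` prefix and the 1,000-per-folder threshold are illustrative choices): a running file counter, integer-divided by the threshold, names the folder, so a new folder starts automatically every 1,000 files.

```python
# Sketch: counter-based bucketing, a new folder per 1,000 files.
FILES_PER_FOLDER = 1000

def folder_for_file(counter: int) -> str:
    # Zero-padded so folders sort in creation order.
    return f"Files/{counter // FILES_PER_FOLDER:04d}"

print(folder_for_file(0))     # Files/0000
print(folder_for_file(999))   # Files/0000
print(folder_for_file(1000))  # Files/0001
```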
So I cannot really see the problem ... or am I misunderstanding you?
It was claimed that you could only have a very limited number of files in a folder handled by FileMaker or FileMaker Server.
We created a test file, a very very simple one - it took 5 minutes to make it.
Mac OS X 10.7.3, FileMaker 12.0v1, 51,000 small files.
The script imports a file, stores it with the name of the primary key (a UUID, for example 8C14FC46-F318-4559-850F-B20494586BE5), goes to the next record, imports the next file, exports it with a new name, and so on.
FileMaker created the records, exported the files, and imported them. Now there are 51,000 records, each with its own file in the same folder.
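A rough Python re-creation of that test loop, scaled down: write many small files, each named by a fresh UUID, into a single folder, then count them. (This only exercises the OS side; FileMaker itself is not involved.)

```python
# Sketch: many UUID-named files in one directory, as in the test above.
import tempfile
import uuid
from pathlib import Path

N = 1_000  # scaled down from the 51,000 files in the post

with tempfile.TemporaryDirectory() as d:
    folder = Path(d)
    for _ in range(N):
        # One tiny file per "record", named by its UUID primary key.
        (folder / f"{uuid.uuid4()}.txt").write_bytes(b"x")
    count = sum(1 for _ in folder.iterdir())

print(count)  # 1000
```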
In FileMaker everything is speedy, and exporting a copy (via filemaker to the desktop) is speedy.
Going into the Finder to browse the folder is no fun, but that is as expected!
This is a very limited test. But as I see it FileMaker will handle a very large number of files in one folder without problems.
I will let the test continue, and within an hour I will have 100,000-200,000 records and linked files.
I wanted to be able to keep an eye on what happened. Therefore the files are not encrypted.
Now we have 105,000 files/records.
FileMaker 12 works like a charm with them. And ... I will not open the window in Finder :-)
I will still recommend having a logical structure with a limited number of files per directory. But the arguments should probably not be based on technical problems but on common sense.
Am I the only one for whom 100,000+ files in one single folder is so unproblematic with FileMaker?
I will let the test continue, and I expect to have 500,000+ files by this afternoon.
To echo the results Carsten is seeing, I've long been putting 30,000 to 50,000 files in a single folder when storing files via SuperContainer on Windows Server 2003 and 2008, and have not run into a single problem. The only small difference is that SuperContainer creates a folder for each file stored, so what I really have is 50,000 folders inside another folder, and one file in each of the 50,000 folders. Still, Windows Server doesn't seem to wince a bit at having 50,000 items (folders) inside a single enclosing folder. I suspect others have stored far more via SuperContainer ...
In our situation, we import the file path of the source and use that when we need to open the file via a URL from FileMaker. My understanding of external storage is that opening the externally stored copy that belongs to the database itself is dangerous to the storage system.
Even with Open storage, it is only safe to open a COPY made from the file in external Open storage, and that is not doable via a URL.
I don't envision a safe way to use the URL method to access the stored content of externally stored containers.