In 11 we found the containers stored internally were the safest, and we placed that lone file in a different sub-directory of the databases on the server which was omitted from our frequent backups. A separate schedule backed up only that sub-directory once daily in the wee small hours of the morning.
We are still testing external storage systems in 12 to determine if we can effectively move a file which is already nearly 10 GB to external storage.
So far our tests are incomplete, but they strongly suggest that a new import (after conversion and switching to external storage) will be the best option in cases where a complete new import of container contents can be accomplished. This might be done from an offline copy of the same converted file which is still stored internally.
I will post the results of our container storage options tests to TechNet after we have run all the possible options we have envisioned.
So far we have done a fresh import to a clone with external storage on the server, importing from an image storage server by Folder. That was good for the images still available in the original locations, and it reduced file size for our particular settings when compared to imports using 11.
On the other hand, converting to 12 and moving to external storage before loading the file to the server has failed in 2 different configurations where the conversion and container transfer was done on a Mac, with the upload then done to a Windows server. In both Mac-to-Win uploads of External Secure storage, the file hung up the server at the point where it said it was "Successful" -- the Server admin panel froze before the Finish button was enabled.
You're missing an important change that was in FMS12: files that are not changed between backups (of the same schedule) are NOT copied again, a hard link is created instead. So if a lot of that 200GB worth of container data is static and does not change between backups, no extra disk space will be taken up by the backups.
In fact you should now be able to run much more frequent backups than you could in FMS11. Check out the whitepaper that Steven Blackwell and I wrote on the subject; you can find it on www.fmforums.com.
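To see why hard-linked backups barely consume extra disk space, here is a small Python sketch that mimics what happens for an unchanged file between two backups of the same schedule. The folder and file names are hypothetical; the point is that two directory entries share one inode, so the data exists on disk only once:

```python
import os
import tempfile

tmp = tempfile.mkdtemp()
backup1 = os.path.join(tmp, "backup_monday")
backup2 = os.path.join(tmp, "backup_tuesday")
os.makedirs(backup1)
os.makedirs(backup2)

# First backup: a real copy of the container file is written out.
src = os.path.join(backup1, "container.dat")
with open(src, "wb") as f:
    f.write(b"\0" * 1024 * 1024)  # pretend this is 1 MB of container data

# Second backup: the file is unchanged, so only a hard link is created.
dst = os.path.join(backup2, "container.dat")
os.link(dst=dst, src=src)

# Both names point at the same inode, hence the same disk blocks.
print(os.stat(src).st_nlink)       # 2 (two names, one file)
print(os.path.samefile(src, dst))  # True
```

Deleting one backup folder only removes one name; the data itself is freed when the last link to it is deleted.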
My (mis?)understanding of the new backup system was that the unchanged-so-linked files were used to speed things up only in incremental backups. Is this true in all backup schedules or only if incremental backups are enabled?
Also a correction on your understanding of progressive backups: the changes are NOT stored as separate increments. FMS collects changes and, at the end of the configured interval, rolls those changes into the older of the two full backups it keeps. When that is done, the change log is cleared.
So progressive backups keep two full backups of your solution, with no hard links like regular schedules use. If space is something you are worried about, progressive backups are not going to help you; you'll need an additional 400 GB of disk space just to use the feature.
Hard linking only happens in regular backup schedules. It has no relation to progressive (incremental) backups; there is no hard linking there.
Hard linking in regular backup schedules is specifically targeted at abross86's scenario.
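The roll-forward mechanism described above can be illustrated with a toy Python model. This is purely illustrative, under the assumption stated in the post (two full copies plus a change log that gets merged into the older copy each interval); it is not how FMS stores data internally:

```python
# Toy model of progressive backups: the live data, two full backup
# copies, and a change log that accumulates between intervals.
live = {"rec1": "a", "rec2": "b"}
older = dict(live)  # full backup copy #1
newer = dict(live)  # full backup copy #2
change_log = {}

# Changes accumulate in the log, not in either backup copy.
live["rec1"] = "a-modified"
change_log["rec1"] = "a-modified"

# At the end of the interval: roll the log into the OLDER copy...
older.update(change_log)
change_log.clear()
# ...and that copy now becomes the most recent backup; the roles swap.
older, newer = newer, older

# 'newer' now reflects the change; 'older' lags one interval behind.
```

This is why the feature needs two full copies' worth of disk space: at any moment one copy is current and one is being rolled forward.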
Ah, it does appear that you're correct. I also had assumed that hard linking only occurs in the progressive backups. Sure enough, I ran some tests, and although the backups appear to take up the full amount of disk space, the remaining space on the drive doesn't reflect that. There is one caveat: we have a process which grabs our FileMaker backups and stores them on a NAS device, which would see the hard link as a normal file and grab all of the managed files anyway. We'll have to find a way around that problem, but otherwise this alleviates my concerns. Thanks!
Edit: It appears I celebrated too soon. I just ran another scheduled backup, and that time it did, in fact, make a full copy of the data instead of a hard link. I believe this happens anytime a schedule is run for the first time. It also seems that the server makes real copies of any managed files that have been added or modified. This is an issue because we might have several GB of data added within an hour, which means slow backups and more storage space being used. What I really need is a way to just completely disable backing up of the RC_Data_FM directory. Seems like a simple request, really. Please, FileMaker devs, give us this option!
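The NAS caveat mentioned above (a naive copy process expands hard links into full duplicates) can be worked around by tracking inodes during the copy. Here is a hedged sketch; the function name is my own, and if your NAS copy is rsync-based, `rsync -aH` preserves hard links and achieves the same thing:

```python
import os
import shutil

def copy_tree_dedup(src_root: str, dst_root: str) -> None:
    """Copy a backup tree, recreating hard links instead of duplicating data.

    Files under src_root that share an inode (i.e. hard-linked backup
    copies) are copied once, then hard-linked at the destination.
    """
    seen = {}  # (device, inode) -> path of the first copy made
    for dirpath, _dirnames, filenames in os.walk(src_root):
        rel = os.path.relpath(dirpath, src_root)
        dst_dir = os.path.join(dst_root, rel)
        os.makedirs(dst_dir, exist_ok=True)
        for name in filenames:
            src = os.path.join(dirpath, name)
            dst = os.path.join(dst_dir, name)
            st = os.stat(src)
            key = (st.st_dev, st.st_ino)
            if key in seen:
                os.link(seen[key], dst)  # second name for the same data
            else:
                shutil.copy2(src, dst)   # first encounter: a real copy
                seen[key] = dst
```

Note this only helps if the destination filesystem supports hard links; many NAS protocols expose plain SMB shares where this falls back to full copies.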
I'm curious how hard linking is handled. Does the backup ALWAYS just create a hard link to the top-level RC_Data_FMS directory, or will it create a real copy of the data if any of the managed files have changed? Basically I'm just wondering if there is any instance where the scheduled backups would "go rogue" and create a real copy of the entire 200 GB directory, or even some part of it.
Yes, hard linking only happens between files of the same backup schedule. If you make a new schedule, a full backup is created the first time. Also note that hard links are done between backed-up files, not back to the original live file.
It seems you'd be better off with SuperContainer (from 360Works). With SuperContainer deployed you can have a backup schedule that backs up just the FM files and not the container data files.
I think I'm going to pass on SuperContainer due to the complications it will introduce and wait on FileMaker 12 for now. My hope is that if I make them aware of our use case (which I suspect is not uncommon), the FileMaker devs will add the feature I need. Wishful thinking, I know...
abross86 may be backing up without the lights on.
You are describing two different backup systems:
1. FileMaker Server Scheduled backups.
2. NAS backups of the FMS backup folders
As Wim described, the FMS backups will use hard links and won't use a lot of disk space on unchanged files. These are the files you back up to the NAS.
The progressive (incremental) backups use two copies and therefore a lot of disk space. As you've probably figured out, they are 'in use' and should not be backed up.
By your description the NAS backup is just a copying process. What you appear to want is a NAS backup that backs up incrementally.
I think there is backup software that can do what you want, but it's not FMS's job to back up its own backups.
If you have a lot of container information, then store it externally (a new feature in FM 12): use the FMS external storage option on your container fields. If you have a lot of items, be sure to set the storage location to a calculation that varies the location every so often. The reason is that the OS can really bog down if too many files are in a single folder. There is another discussion about this.
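The "calculation that varies the location" advice boils down to bucketing files into sub-folders so no single folder grows unbounded. The sketch below expresses one such scheme in Python for clarity (in FileMaker you would write the equivalent as a calculated open-storage path); the folder naming is my own illustration, not an FMS convention:

```python
def bucketed_path(record_id: int, files_per_folder: int = 1000) -> str:
    """Return a sub-folder path that caps how many files share one folder.

    Records 0-999 land in 'bucket_0', 1000-1999 in 'bucket_1', and so on,
    so the OS never has to list more than files_per_folder entries at once.
    """
    bucket = record_id // files_per_folder
    return f"bucket_{bucket}/record_{record_id}.dat"

print(bucketed_path(42))     # bucket_0/record_42.dat
print(bucketed_path(12345))  # bucket_12/record_12345.dat
```

Bucketing by a date field instead of a record ID works the same way and has the nice side effect that old buckets stop changing, which plays well with hard-linked backups.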
The NAS backups are secondary, really; I can solve that issue other ways. What I want is really quite simple, but is seemingly impossible at the moment: the scheduled backups need to run, but completely disregard the managed container data in the RC_Data folder. Even with hard linking, a full copy of that folder is created for each schedule... so with 200 GB of data and 4 different schedules, we're talking about 800 GB initially. We run our servers on SSDs, so that isn't practical.
We already have a directory structure set up using calculated paths, so that isn't an issue.
You could do it with an architecture change in your solution: move all the container fields into a separate file and table, and relate them back to the original table. If the solution has multiple non-container-data files, you'll also have to put the container-data file in its own folder on FMS.
That way you can set up a backup schedule that targets the folder with the non-container-data files and a separate schedule for the folder with the container-data FM file.
Exactly. That was my backup plan (no pun intended). This still leaves me with those few tables not being backed up regularly, but with some clever recovery and syncing scripts I think I could make it work.
I have the same problem, trying to separate the container data from the database files.
In my solution I have switched to external open storage without any problem. After having read your posts, I expected that, let's say, my "hourly schedule" would back up the whole data only once (which is 500 MB: a 50 MB database and 450 MB of container contents), then make backups of only the database (±50 MB) plus any changed container data.
However, all 8 hourly folders have the same size... 500 MB. So I somehow misunderstood either your posts or the feature.
Can you explain where this linking of container data should happen?