The workaround is to use "scp" to securely transfer files between the instance and your machine.
- Open port 22 on the EC2 instance from the AWS console
- Close the file you want to copy (from the FMCloud console)
- scp -i path/YourKeyPair.pem 'centos@host:/opt/FileMaker/FileMaker\ Server/Data/Database/Data.fmp12' .
- ssh -i path/YourKeyPair.pem centos@host and delete the zip files in /FileMakerData/tmp/DownloadTemp_FMS/
- Reopen the file from the FMC console
- Close port 22 from the AWS console
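The steps above, as a single terminal session (YourKeyPair.pem and HOST stand in for your own key file and instance address, and the exact database path may differ on your instance):

```shell
# Copy the (closed) database file down. The remote path contains a space
# ("FileMaker Server"), and scp expands the path on the remote side too,
# so escape the space and single-quote the whole remote spec locally:
scp -i path/YourKeyPair.pem \
    'centos@HOST:/opt/FileMaker/FileMaker\ Server/Data/Database/Data.fmp12' .

# Then remove the leftover zip files from the temp download directory:
ssh -i path/YourKeyPair.pem centos@HOST \
    'rm -f /FileMakerData/tmp/DownloadTemp_FMS/*.zip'
```

The double layer of quoting is the part that usually trips people up: the single quotes protect the backslash from your local shell, and the backslash protects the space from the remote one.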
Hope this helps!
I couldn't seem to get the scp syntax correct, but the user 'centos' was the missing piece of the puzzle for me to get sftp to work with FileZilla -- shamefacedly, I prefer a GUI...
1. As suggested above, close the file via the FM Cloud console
2. Open port 22 on the server via the AWS console (security group)
3. In FileZilla, go to Settings -> SFTP, and click 'add a key file...'. Select the private key file that you generated when setting up your AWS instance (you have hopefully kept this somewhere safe, for just such an eventuality!). Older versions of FileZilla may convert the file format, but my version accepted the PEM file as-is.
4. Close Settings
5. In FileZilla, make a new site:
• Host = your AWS instance DNS name
• Protocol = SFTP
• Logon type = Normal
• User = centos
6. Now click 'Connect'. You should see the root of your FM Cloud AWS instance. Files are under /FileMakerData/Data/Databases. Drag and drop to download.
7. It's best security practice to close port 22 on the server again.
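For anyone who does want to skip the GUI, the same connection works with OpenSSH's command-line sftp client (HOST is a placeholder for your instance's DNS name):

```shell
sftp -i path/YourKeyPair.pem centos@HOST
# Then, at the sftp> prompt:
#   cd /FileMakerData/Data/Databases
#   get Data.fmp12
#   bye
```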
- This is a workaround for a known issue with FM Cloud; it is far better to download a backup copy via the Admin Console if at all possible
- The files should be shut down gracefully before downloading
- This process is not supported or recommended by FileMaker Inc., and is done at your own risk.
I've taken a similar approach, but used a second nano instance mounted to an FM backup snapshot. With this method you are not messing with the main FM Cloud instance. See here for Soliant's write-up on this approach:
Thanks for the heads-up!
I tried this approach (not realising that someone had already blogged about it) but I didn't seem to have ownership (or permissions) of the mounted volume -- it didn't appear in /dev/ except under /drives/uuid, and when I tried to access it I got a 'permission denied' error.
As I was in a hurry to get a copy of the file, I took the simplest approach. I may re-try this method, following the blog post, to see where I went wrong.
I suppose that you could, in theory, mount a preserved backup in the FMcloud console, and do much the same thing for the FM cloud instance.
In theory, yes. But by accessing a backup snapshot and utilizing a secondary instance, you are completely outside of the FM cloud instance, and there is no need to interrupt service to users. The "issue" I ran into the first time I set it up was that I inadvertently chose the wrong AWS "server farm", so I couldn't connect the backup to the new nano instance. There may be a way around that if you know AWS well enough. I simply deleted the nano instance and remade it.
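For reference, the snapshot route can be sketched with the AWS CLI (all IDs below are placeholders); the availability-zone constraint is the "server farm" mismatch described above:

```shell
# Create a volume from the FM Cloud backup snapshot. It must be in the
# SAME availability zone as the helper instance, or it cannot be attached.
aws ec2 create-volume \
    --snapshot-id snap-0123456789abcdef0 \
    --availability-zone us-east-1a

# Attach the new volume to the nano/free-tier instance:
aws ec2 attach-volume \
    --volume-id vol-0123456789abcdef0 \
    --instance-id i-0123456789abcdef0 \
    --device /dev/sdf
```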
> But by accessing a backup snapshot and utilizing a secondary instance,
> you are completely outside of the FM cloud instance, and there is no
> need to interrupt service to users.
I had no trouble creating a volume from a snapshot, and attaching it to my free-tier linux instance (using the AWS console), but then when I tried to access it via FileZilla it was not visible, although I could navigate the instance's main volume without difficulty.
File permissions, or file-system problems?
Or my inadequate understanding of Linux file systems...
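Quite possibly just permissions: sftp logs in as an unprivileged user, so files on a volume owned by another user won't be browsable that way. A sketch of what I'd check over ssh on the helper instance (device and mount-point names are assumptions; yours may differ):

```shell
lsblk                                  # list block devices; the attached
                                       # volume shows up as e.g. xvdf
sudo mkdir -p /mnt/fmbackup
sudo mount /dev/xvdf1 /mnt/fmbackup    # partition suffix may differ
sudo ls -l /mnt/fmbackup               # root can read regardless of owner
```

An SFTP session through FileZilla can't sudo, which would explain the volume being inaccessible there even after a successful attach.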