What happens if you ignore the Elastic IP entirely and just use the public DNS or IP that was assigned by AWS?
You can find the public IP assigned by AWS under the EC2 console > Instances page > select the instance > "Description" tab.
I'm not sure, but I don't think Elastic IPs are meant to work with FMS on AWS.
If you hit the public DNS above with /fmi/webd on the end, does everything work as expected?
Also, did you make sure to allow ports 80 and 443 through the inbound rules for your AWS/EC2 security group? It sounds like port 5003 is already working there.
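If it helps anyone troubleshoot this kind of thing, here's a small sketch for checking from an outside machine whether those ports are actually reachable. It just attempts a TCP connection to each port; the hostname in the commented-out example is a placeholder, not your actual instance.

```python
import socket

def check_ports(host, ports, timeout=3.0):
    """Return {port: True/False} for TCP reachability of each port on host."""
    results = {}
    for port in ports:
        try:
            # A successful connect means the security group (and the OS
            # firewall, and the service) are all letting traffic through.
            with socket.create_connection((host, port), timeout=timeout):
                results[port] = True
        except OSError:
            results[port] = False
    return results

# Example usage (substitute your instance's real public DNS name):
# print(check_ports("ec2-203-0-113-10.compute-1.amazonaws.com", [80, 443, 5003]))
```

If 5003 comes back True but 80/443 come back False, the inbound rules on the security group are the first place to look.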
Hi Mike -
I tried what you suggested before posting, and it does get me to the log-in page, but it fails again after that.
What I ended up doing, for now, was to change the deployment from two-machine to one-machine, eliminating the traffic going between the two EC2 instances. That solved the problem immediately. I suspect that the root of the problem is a communication failure between the two machines that relates to the overall network configuration.
I suspect that if I want to go back to the two-machine deployment I'll have to learn how to put both of the machines on the same subnet within the Amazon Cloud.
For now, it works and I'm able to connect both ways.
I would still like to know how to make this work with a two-machine deployment. Anyone have ideas or suggestions?
Wizard Consulting Group, Inc.
You have to use the FQDN when registering a worker machine with the main server. You can verify by logging into the admin console and seeing the hosts listed for all worker machines, including "worker 1" which is always the main machine.
FM Server will load balance across as many worker machines as it has attached, depending on the number of users currently connected to each machine and on server load. Once a session is handed off to a worker machine, you remain on that machine until you log out, and any and all session data (global fields, variables, etc.) are stored on that machine for the duration.
With the requirements now upgraded so that each worker can handle up to 100 sessions, the need for a second machine is much lower, IMO.
Yes, as Mike Duncan says, when configuring machines together in a multi-machine deployment (e.g. when entering the master machine's address into the worker's deployment screen), you should use addresses that can be accessed from external browser client machines (i.e. FQDNs). Otherwise the load balancer can revert to the addresses that were used for configuration (which in this case are internal, and not accessible from the outside). I was going to ask when first reading your post last night whether you had a multi-machine deployment here (I wasn't sure at the time whether you could even do that with AWS instances; now I know).
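A quick way to tell whether the address you entered during deployment is one of those internal-only ones: AWS VPC-internal addresses fall in the RFC 1918 private ranges (10.x.x.x, 172.16-31.x.x, 192.168.x.x), which Python's standard library can identify. A minimal sketch (the example addresses are illustrative, not anyone's actual instance):

```python
import ipaddress
import socket

def is_private_address(host):
    """Resolve a hostname or IP and report whether it is a private
    (RFC 1918 / special-use) address that outside clients can't reach."""
    ip = ipaddress.ip_address(socket.gethostbyname(host))
    return ip.is_private

# A typical AWS-internal address reports True; a public one reports False.
print(is_private_address("172.31.5.10"))  # True
print(is_private_address("8.8.8.8"))      # False
```

If the address you used for the worker or master reports True here, external browser clients will never be able to follow a load-balancer redirect to it, which matches the failure described above.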
Hi Everyone -
I think the mistake I made was using the internal rather than the external address when setting up the worker machine, which is why removing the worker machine fixed it.
Based on the input provided here, I'm not sure I need a two-machine deployment, so I'm going to leave it as a single-machine set-up for now.
I sincerely appreciate all of the helpful input.
Wizard Consulting Group, Inc.