As you have 60 users, you'll definitely want to go for the 12-core solution, as this means FMServer can work on twelve requests at the same time, one per core, so there will be a lot less waiting on the users' machines. But if you are going to spend an extra $3,500 (€3,500) for the 12-core upgrade, then you should definitely look at upgrading the RAM to 32 or 64 GB as well. That will give FMS a larger cache, which will also speed up the whole experience.
Hope that helps
We did a lot of benchmarking today and found that more cores are better, just as faster chips are better.
Which carries the better oomph? We're still not sure...more tests need to happen.
At a certain threshold in our testing we felt like something besides these two became the bottleneck...we are pretty confident disk and RAM weren't the issue. We intentionally "pre-heated" the sample data so disk I/O was nil and we had plenty of free RAM throughout.
Maybe our FMP clients couldn't generate enough load to really put the pedal to the metal. Without automation, testing to simulate actual behavior of 60 users requires lots of hands, copies of FMP and machines. Our tests generated only 10% of that load.
We ran many test scenarios with 1, 2 and 4 cores across 1, 2, 4 and 6 clients. Pretty soon I'll be able to run the suite again with 6 cores and hopefully more clients.
Maybe in February / March I can test with more cores...if such things ever actually ship.
Will post results once clearer patterns emerge.
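For anyone wanting to script this kind of multi-client run without rounding up hands and machines, here is a minimal load-generator sketch in Python. It is generic, not FMP-specific: `simulated_client` is a hypothetical stand-in for one client's request cycle (here just a 50 ms sleep), and the client counts mirror the 1/2/4/6 matrix above.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def simulated_client(client_id):
    """Stand-in for one client's request cycle (hypothetical 50 ms server round-trip)."""
    time.sleep(0.05)
    return client_id

def run_load(n_clients):
    """Run n_clients concurrent simulated clients; return their results and wall time."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=n_clients) as pool:
        results = list(pool.map(simulated_client, range(n_clients)))
    return results, time.perf_counter() - start

if __name__ == "__main__":
    for n in (1, 2, 4, 6):
        results, elapsed = run_load(n)
        print(f"{n} client(s) finished in {elapsed:.2f}s")
```

Against a real server you would replace the sleep with an actual request; the point is that the concurrency scaffolding is a few lines, so scaling the client count stops being a staffing problem.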
Just FYI, we are testing different combinations as well. We set up our first two-server installation the other day. For our particular application, which is very "heavy" on tabs and fields to say the least, we are seeing a DEGRADATION in performance using two servers. These are two Dell R710 servers. The web server is a 12-core, 32 GB system with a RAID 10 drive array using 15K RPM drives. I don't have definitive proof, but I believe the two-server setup is introducing some latency in screen refresh speed, potentially because of the limitation of having to move the data from one server to the other via Ethernet.
We do not need to have a lot of users on a single server. So my next test will be putting everything on a single R710 and cranking up the memory a bit more. I'm also investigating how we may be able to dedicate two of the four network ports on these servers to a dual "bonded" network connection reserved for the link between the two servers. So far, I have not figured out a way to do this.
We have not done too much with our application to optimize it for WebDirect. We will have to go down that avenue as well. Despite these issues, it is usable, and has been relatively solid. The screen refresh and redraw issues are the most unsatisfactory factors at the moment. Your mileage may vary. ;-)
When you say 'web server' is that a web direct server? Otherwise I am wondering why you need so much grunt!
My thoughts at the moment are an IBM x3650 with 64 GB RAM, dual 2.3 GHz 6-core Xeons, and two RAIDs: one SSD-based (3 x 64 GB, RAID 5 for 128 GB) for the FileMaker files and one HDD-based (6 x 1 TB, RAID 5 for 5 TB) for containers. We have a lot of containers. I might have another SSD for boot, or boot from the first RAID. In total cores this is slightly down on where we are currently, but the speed of the RAID is more important.
For WebDirect I am thinking an x3550 with dual 8-core 1.8 GHz Xeons and 64 GB RAM. That will have a single 128 GB SSD as boot with an HDD for backup and failover. I see no reason for massive storage on that machine.
I am testing 4 teamed gigabit network connections on our current X Serve, if this isn't sufficient we will get a dual 10gig network card and switch as a backbone.
With regard to the original post, our current X Serve has 8 cores, effectively working as 16 with hyperthreading. We seldom see any of the 16 cores maxed, but as load increases first thing in the morning each processor in turn lights up. Typically we run at around 200% processor utilisation, most of which is the FileMaker Server daemon. I believe each new FileMaker client connection spawns a new thread passed to the next processor. Processors seem more important than GHz. I would like twin 8-core Xeons in our next FileMaker Server, but they are murderously expensive, so that might have to wait until a mid-life update; we expect the machines to serve 5 years of front-line duty before retirement as file servers, door stops etc.
Yes, the two server setup, "worker" server. Yes, it's for WebDirect.
How are you using the XSERVE as they don't support the latest OS?
Have you configured the teamed gigabit ethernet with FMS 13?
http://support.apple.com/kb/ht5842

The X Serve is an early 2009 Xeon, and will take Mavericks, although at the moment we are running FMS 11. I really, really hope we can install Mavericks; this machine is going to run Mavericks Server when it retires from being the front-line FMS server.
We have not done anything regarding configuring FMS for the teamed connection; we have simply bonded the four network ports we have (i.e. added a virtual aggregate NIC in the networking control panel) and beat the Netgear switch with a stick until it finally behaved itself and accepted bonding a bank of ports. It appears to work, and it's the only network connection on the machine, but we are only testing at the moment while the factory is shut down for Christmas.
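For reference, here is what the same "virtual aggregate NIC" looks like on the command line on a Linux box (iproute2), as a hedged sketch: the interface names eth0/eth1 and the address are assumptions, and the switch ports must be configured for LACP (802.3ad) on their side too, which is presumably what the Netgear needed the stick for.

```shell
# Create an LACP (802.3ad) bond and enslave two physical NICs to it.
# eth0/eth1 and the IP below are placeholder assumptions.
ip link add bond0 type bond mode 802.3ad
ip link set eth0 down && ip link set eth0 master bond0
ip link set eth1 down && ip link set eth1 master bond0
ip link set bond0 up
ip addr add 192.168.0.10/24 dev bond0
```

Once this is up, the OS presents bond0 as a single interface with one IP, which is exactly why FMS doesn't need to know the teaming exists.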
Why the RAID 10 and high speed disks for the web direct machine? I am guessing you are effectively mirroring?
For our setup I am wondering about (and the boss may cry when I suggest this) 4 10gig ports on the FMS Machine, 2x10gig ports and 1x1gig port on the Web Direct machine. The FileMaker server will team two ports to a 10gig switch and have 2 teamed direct to the Web Direct server. Then the web direct server will have the 1 gig directly connected to a dedicated internet connection.
The way I think of WebDirect, it's a copy of FileMaker plus a web rendering engine for each connection, so you need to think what sort of machine could handle 50 copies of FMP virtualised plus a rendering engine. We are expecting to go straight to 50 users on day one over WebDirect, and that will increase over time, so skimping here is not part of the plan.
However, I have had issues in the past with FileMaker Server not playing nicely with multiple IPs (i.e. it bonded to one random port and refused to acknowledge the others), which is why I am playing with bonding the 4 ports we have. Originally we intended to use these to bridge into separate subnets, but it never worked well. That was a good few versions before 11 though; maybe it works better now.
Yes, the server is for WebDirect. If you look at the recommendations, and comments, FMI claims disk speed is important on the web server as well. That said, I think the speed of the interface between the two machines is the bottleneck.
I don't even want to have a switch between the two servers, I want two (or more NICs) teamed directly, server to server. Not sure if this is feasible or not.
No reason you cannot team a direct connection. Once the ports are teamed they are seen as a single interface.
I don't have any knowledge to add to this thread at this time, but I want to say that I appreciate the testing efforts going on, and I (and likely others) are following them closely. Performance "best practice" documents often contain recommendations that simply don't provide a benchmarkable performance difference. Anecdotal information on what someone believes will make an improvement is also often unreliable and perhaps guilty of a placebo effect.
I'm about to buy 2 of the new Mac Pros for 2 clients, and I KNOW that the "best practice" of getting the fastest disks does make a benchmarkable difference. I'm going to buy a Promise Pegasus2 R4 from Apple with a 14-day return policy so I can benchmark the performance difference between that and the Mac Pro's internal PCIe SSD. Taylor Sharpe has benchmarked read/write differences of the previous generation Pegasus RAIDs to give me confidence that this is a promising (pun intended) option (Thanks, Taylor!).
But I simply don't have the available resources to benchmark differences between different GHz and multiple cores. And before I recommend to a client to spend an extra $500 to $3,500 on additional cores, I sure would like to see a benchmarkable FMP difference. I'm also very intrigued by the NIC teaming/bonding conversation.
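For the disk side of that comparison, a quick sequential-write micro-benchmark is only a few lines of Python. This is a rough sketch, not a substitute for a real tool: the 64 MB size and 4 MB chunk are arbitrary assumptions, and the `fsync` is there so the OS write cache doesn't flatter the number.

```python
import os
import tempfile
import time

def write_throughput(path, size_mb=64, chunk_mb=4):
    """Time a sequential write of size_mb megabytes to path; return MB/s."""
    chunk = b"\0" * (chunk_mb * 1024 * 1024)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(size_mb // chunk_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())  # force data to the device, not just the cache
    return size_mb / (time.perf_counter() - start)

if __name__ == "__main__":
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        target = tmp.name
    try:
        print(f"sequential write: {write_throughput(target):.0f} MB/s")
    finally:
        os.remove(target)
```

Pointing `target` at a file on the Pegasus volume versus the internal SSD gives a crude but repeatable A/B number before the 14-day return window closes.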
Let us know how those Pros work out.
Big bucks. ;-)
I know the ports can be teamed, I'm just not sure how it all needs to be configured on the Filemaker Server end of things.
The standard install only seems to make provisions for a single IP address.
There is nothing required on the FMS end. Network card teaming is all on the OS and FMS takes its feed from there.
Obviously you'd want to do the OS set up before installing FMS. FMS hates it when you make changes to IP configs or network card configs after it was installed.
OK, I'm not quite sure how that would work.
If I bind, say, two ports together between the two machines, I would give this connection its own IP on both servers. So I'm assuming it would be a "private" IP, something like 192.168.0.X and X+1.
The Worker server would have to have a public address, and the Master would have to have a public address.
So, when you set up the Master server, it asks you to enter the IP of the worker machine. Would I then use the private IP, or the public IP of the Worker server? (I'm guessing private.) And assuming I do that, would the other functions just work automatically with the public IPs defined on the standard NICs?
I guess I'm thinking it would be more complicated than that. I've always kept my FM Server installations super simple.