When we recently had the problems with the servers crashing during fortress and keep raids, there was a cry to just add more hardware. Now this did make me chuckle just a little bit, mainly because it conveyed a lack of understanding of what Warhammer really is, physically. It also didn’t really consider what the game servers actually do, from a number-crunching and data-distribution viewpoint.
Now we all have our PCs sitting under a desk, which exist just to run Warhammer or whatever our personal poison is at the moment. Obviously when you upgrade one of those, it’s a simple five-minute job to pop the case off and replace a component, reformat the hard drive, or change the CPU and cooler. Heh, that’s all there is to it, right? But we’ve all come across the situation where the new CPU we want will not fit in the motherboard (MB) due to a socket change (God bless you, Intel), which leaves you either changing the MB or sticking with what you have. Bear that in mind when you consider a blade server later on.
When you come to a setup like Warhammer, it’s a 24/7 operation. You have a player base who take it as a mortal offense when the servers are offline at 4.30am, 10am or whenever. Heaven forbid they should want to take them down during prime time!
But what do people actually think Warhammer is? It’s certainly not a load of desktop PCs sitting row upon row. Now what I write from here on is merely guesswork, and there will be a lot of people who run these setups who will put me straight on any obvious errors. But it’s what I would be running for this kind of environment.
Now we are realistically looking at a blade server per named server, e.g. Karak Eight Peaks. This is an example of a blade server: each blade will have no graphics card, sound card or anything else that produces or uses unnecessary heat or power. Each blade is probably a multi-CPU board with its own memory, with either a disk array at the bottom of the chassis, a connection to a SAN, or maybe both. The box to the right has 16 blades, so if we work on the premise that each tier/pairing is assigned to a blade, that’s 12 blades used. Then we would allocate a blade for handling the scenarios, another for the capital cities (maybe shared, maybe one each), and another blade for the dungeons. Now obviously this is all pie-in-the-sky guesswork and I don’t expect to see Mythic come out and say what they are running.
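To make that premise concrete, here is a minimal sketch of how the blades in a single 16-slot chassis might be divvied up for one named server. The zone names and the allocation itself are my own guesswork for illustration, not anything Mythic or GOA have confirmed.

```python
# Hypothetical blade allocation for one named server (e.g. Karak Eight Peaks),
# assuming a 16-slot chassis and one blade per tier/pairing zone.
# The layout is guesswork for illustration only.

PAIRINGS = ["Empire vs Chaos", "Dwarfs vs Greenskins", "High Elves vs Dark Elves"]
TIERS = [1, 2, 3, 4]

blade_allocation = {}
blade_id = 1

# One blade per tier/pairing combination: 3 pairings x 4 tiers = 12 blades.
for pairing in PAIRINGS:
    for tier in TIERS:
        blade_allocation[blade_id] = f"{pairing} - Tier {tier}"
        blade_id += 1

# Remaining blades for shared content.
for role in ["Scenarios", "Capital cities (shared or one each)", "Dungeons"]:
    blade_allocation[blade_id] = role
    blade_id += 1

for blade, role in blade_allocation.items():
    print(f"Blade {blade:2d}: {role}")

print(f"Blades used: {len(blade_allocation)} of 16")
```

Which leaves a slot or two spare in the chassis for growth or a hot spare, if the guess is anywhere near right.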
Now, you know when you move between tiers, go into a scenario or just visit the capital city? It is very probable that your character is being moved between blades on the blade server. Hence the delay, though it also gives time for the graphics to be loaded; I’m not really sure yet which contributes most to the delay. When I finally get my SSD drive, it will provide a useful indicator of how much of that is graphics loading time.
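For the curious, the handoff would look something like the sketch below: the character’s state is frozen and serialised on one blade, shipped over the chassis backplane, and only then does the client start loading the new zone. Every function name and step here is illustrative guesswork, not Mythic’s actual code.

```python
# Rough sketch of a zone transfer between blades; all names are hypothetical.
import json
import time

def transfer_character(character, destination_blade):
    # 1. Source blade freezes and serialises the character's state.
    state = json.dumps(character)

    # 2. State is shipped across the chassis backplane / internal network
    #    to the destination blade (simulated here with a short sleep).
    time.sleep(0.05)

    # 3. Destination blade deserialises and takes ownership.
    restored = json.loads(state)
    restored["blade"] = destination_blade

    # 4. Only now is the client told to start streaming the new zone's assets,
    #    which is where the client-side graphics loading time comes in.
    return restored

character = {"name": "Grumlok", "level": 32, "blade": "T3 Dwarfs vs Greenskins"}
print(transfer_character(character, "Scenarios"))
```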
At the last count we had about 65 named servers in Europe, sitting in a data centre in France, though I believe GOA run Warhammer out of Dublin. So imagine 65 boxes just for the game servers, probably 2 to 3 to a cabinet, and you would be looking at a setup like this (look right). So when people start flippantly talking about upgrading hardware, it’s really worth considering that we are talking about a rather large amount of money. Yes, they get our subscriptions, but whenever you end up buying hardware from a main supplier (IBM, HP, Sun plus a host of others), it always ends up costing more than buying it from Overclockers. If you want a sense of how much hardware we’re talking about, just go to the server listing pages for World of Warcraft.
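Just to put a number on the rack space, here is the quick arithmetic; the 2-to-3-per-cabinet figure is my guess from above, not a confirmed number.

```python
# Quick cabinet count for 65 chassis at 2-3 per cabinet (both figures are guesses).
chassis = 65
for per_cabinet in (2, 3):
    cabinets = -(-chassis // per_cabinet)  # ceiling division
    print(f"{per_cabinet} chassis per cabinet -> {cabinets} cabinets")
```

So somewhere between roughly 22 and 33 cabinets, before you add any supporting kit.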
When the hardware requirements for the European data centre came up in a post, the figure of 9 million euros/dollars (6 million pounds) was mentioned. You would be surprised how quickly you can spend that when you have to buy a blade server setup per named server (Karak Eight Peaks, Karak Hirn), web servers, mail servers, server management software, financial systems, software support costs, network kit, backup network kit, cabling, cooling, plus a myriad of other things.
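Here is a back-of-envelope illustration of how a budget like that disappears. Every price below is an invented round number purely for illustration; the point is how quickly 65 of anything multiplies up, not the individual figures.

```python
# Illustrative budget only; all prices are made-up round numbers.
NAMED_SERVERS = 65

costs = {
    "Blade chassis + blades per named server":    NAMED_SERVERS * 80_000,
    "SAN storage and disk arrays":                NAMED_SERVERS * 15_000,
    "Web, mail, patch and login servers":         300_000,
    "Network kit (switches, routers, firewalls)": 400_000,
    "Backup network kit and cabling":             200_000,
    "Server management and financial software":   500_000,
    "Cooling, power distribution, racks":         600_000,
    "Support contracts (first year)":             700_000,
}

total = sum(costs.values())
for item, cost in costs.items():
    print(f"{item:<45} {cost:>12,} EUR")
print(f"{'Total':<45} {total:>12,} EUR")
```

With made-up but not unreasonable round numbers, that lands just shy of 9 million before you have paid a single member of staff.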
Ultimately though, you can’t just keep throwing hardware at the problem, not when you have potentially so much hardware to replace.