Introducing 10Gbps Networking In Your Hyper-V Failover Cluster Environment

This is the first post in a series of four. Here’s a list of all parts:

  1. Introducing 10Gbps Networking In Your Hyper-V Failover Cluster Environment (Part 1/4)
  2. Introducing 10Gbps With A Dedicated CSV & Live Migration Network (Part 2/4)
  3. Introducing 10Gbps & Thoughts On Network High Availability For Hyper-V (Part 3/4)
  4. Introducing 10Gbps & Integrating It Into Your Network Infrastructure (Part 4/4)

A lot of early and current Hyper-V clusters are built on 1Gbps network architectures. That’s fine and works very well for a large number of environments. Perhaps at this moment in time you’re running solutions using blades with 10Gbps mezzanine cards/switches, with or without carving up the available bandwidth for all the different network needs a Hyper-V cluster has, or should have, for optimal performance and availability. This depends on the vendor and the type of blades you’re using. It also matters when you bought the hardware (W2K8 or W2K8R2 era) and whether you built the solution yourself or bought a fast track or reference architecture kit, perhaps even including all Microsoft software and complete with installation services.

I’ve been looking into some approaches to introducing a 10Gbps network for use with Hyper-V clusters, mainly for Cluster Shared Volume (CSV) and Live Migration (LM) traffic. In brownfield environments that are already running Hyper-V clusters there are several scenarios to achieve this, but I’m not offering the “definitive guide” on how to do it. This is not a best practices story. There is no one size fits all. Depending on your capabilities, needs & budget you’ll approach things differently, reflecting what’s best for your environment. There are some “don’t do this in production, whatever your environment is” warnings that you should take note of, but apart from that you’re free to choose what suits you best.

The 10Gbps implementations I’m dealing with are driven by one very strong operational requirement: reduce the live migration time for virtual machines with a lot of memory running under a decent to heavy load. So here it is all about bandwidth and speed. The train of thought we’re trying to follow is that we do not want to introduce 10Gbps just to share its bandwidth between 4 or more VLANs, as you might see in some high density blade solutions. There it often has to do with the limited number of NIC/switch ports in environments that also want high availability. In high density scenarios the need to reduce cabling is also more urgent. All this is often driven by a desire to cut costs, or at least keep them down as much as possible. But as technology evolves fast, my guess is that within a few years we won’t be discussing the cost of 10Gbps switches anymore, and even today there are very good deals to be made. The reduction of cabling saves on labor & helps achieve high density in the racks. I do need to stress however that way too often discussions around density, cooling and power consumption in existing data centers or server rooms are not as simple as they appear. I would state that to achieve real and optimal results from an investment in blades you have to have the server room, cooling, power and UPS designed around them. I won’t even go into the discussion over when blade servers become a cost-effective solution for SMB needs.

So back to 10Gbps networking. You should realize that Live Migration and Redirected Access with CSV absolutely benefit from getting a 10Gbps pipe just for their needs. For VMs consuming 16 GB to 32 GB of memory this is significant. Think about it. Bringing 16 seconds back to 4 seconds might not be too big of a deal for a node with 10 to 15 VMs. But when you have a dozen SQL Servers that take 180 to 300 seconds to live migrate and you reduce that to 20 to 30 seconds, that helps. Perhaps not so much during automated maintenance, but when it needs to be done fast (i.e. on a node indicating serious hardware issues) those times add up. To achieve such results we gave the Live Migration & CSV networks both a dedicated 10Gbps network. They consume about 50% of the available bandwidth, so even a failover of the CSV traffic to our Live Migration network or vice versa should be easily handled. On top of the “Big Pipes” you can test jumbo frames, VMQ, …
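
A quick sanity check for jumbo frames on such a dedicated CSV or Live Migration network, assuming you’ve configured a 9000-byte MTU end to end on the NICs and the switch ports, is to ping the other node with a large payload and the “don’t fragment” bit set from the command prompt (the IP address below is just a placeholder for the other node’s CSV or LM NIC):

ping -f -l 8972 10.10.10.2

The payload is 8972 bytes because the IP and ICMP headers add another 28 bytes on top of it. If the replies come back instead of “Packet needs to be fragmented but DF set”, jumbo frames are passing end to end.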

Now the biggest part of that live migration time is in the “Brown-Out” phase (event id 22508 in the Hyper-V-Worker log), during which the memory transfer happens. Those are the times we reduce significantly by moving to 10Gbps. The “Black-Out” phase, during which the virtual machine is brought online on the other node, creates a snapshot with the last remaining delta of “dirty memory pages”, followed by quiescing the virtual machine for the last memory copy to be performed and finally un-quiescing the virtual machine, which is then running on the other node. This is normally measured in hundreds of milliseconds (event id 22509 in the Hyper-V-Worker log). We do have a couple of very network intensive applications that sometimes have a GUI issue after a live migration (the services are fine but the consoles monitoring those services act up). We plan on moving those VMs to 10Gbps to find out if this reduces the “Black-Out” phase a bit and prevents that GUI from acting up. When I can give you more feedback on this, I’ll let you know how that worked out.

An example of these events in the Hyper-V-Worker event log is listed below:

Event ID 22508:

‘XXXXXXXX-YYYY-ZZZZ-QQQQ-DC12222DE1’ migrated with a Brown-Out time of 64.975 seconds.

Event ID 22509:

‘XXXXXXXX-YYYY-ZZZZ-QQQQ-DC12222DE1’ migrated with a Black-Out time of 0.811 seconds, 842 dirty page and 4841 KB of saved state.

Event ID 22507:

Migration completed successfully for ‘XXXXXXXX-YYYY-ZZZZ-QQQQ-DC12222DE1’ in 66.581 seconds.
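
If you want to pull these brown-out and black-out numbers for a whole batch of live migrations instead of clicking through Event Viewer, you can query the Hyper-V-Worker channel from the command prompt. A rough sketch, assuming the channel is named Microsoft-Windows-Hyper-V-Worker-Admin (check the exact channel name on your hosts):

wevtutil qe Microsoft-Windows-Hyper-V-Worker-Admin /q:"*[System[(EventID=22508 or EventID=22509)]]" /f:text /rd:true /c:20

That returns the 20 most recent brown-out (22508) and black-out (22509) events in plain text, which makes before/after comparisons of 1Gbps versus 10Gbps live migrations a lot easier.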

In these 10Gbps efforts I’m also aiming for high availability, but not when that would mean sacrificing performance because I need to keep costs down and perhaps use approaches that are only really economical in large environments. The scenarios I’m dealing with are not about large hosting environments or cloud providers. We’re talking about providing the best network performance to some Hyper-V clusters that will be running SQL Server, for example, or other high resource applications. These are relatively small environments compared to hosting and cloud providers. The economics and the needs are very different. As a small example of this: saving ten thousand switch ports means saving something like 500 times the price of a switch. To them that matters a lot more, not just in volume but also in relation to the other costs. They’re probably running services with an architecture that survives losing servers and doesn’t require clustering. It all runs on cheap hardware with high energy efficiency, as they don’t care about losing nodes when the service has been designed with that in mind. Economies of scale are what they are all about. They’d go broke building all that on highly redundant hardware and fail at achieving their needs. But most of us don’t work in such an environment.

I would also like to remind you that high availability introduces complexity. And complexity that you can’t manage will sink your high availability faster than a torpedo midship sinks a cruiser. So know what you do, why and when to do it. One final piece of advice: TEST!

So to conclude this part, take note of the fact that I’m not discussing the design of a “fast track” setup that I’ll resell for all kinds of environments, where I’d need a very cost-effective rinse & repeat solution that comes in Small, Medium & Large varieties with all bases covered. I’m not saying those aren’t good or valuable, far from it, a lot of people will benefit from those, but I’m serving other needs. If you wonder why they want to virtualize the applications at all, it has to do with disaster recovery & business continuity and replicating the environment to a remote site.

I intend to follow up on this in future blog posts when I have more information and some time to write it all up.

KB Article 2522766 & KB Article 2135160 Published Today

At this moment in time I don’t have any more Hyper-V clusters to support that are below Windows Server 2008 R2 SP1. That’s good, as I only have one list of patches to keep up to date for my own use. As for you guys still taking care of Windows 2008 R2 RTM Hyper-V clusters, you might want to take a look at KB article 2135160 FIX: "0x0000009E" Stop error when you host Hyper-V virtual machines in a Windows Server 2008 R2-based failover cluster, which was released today. The issue however is (yet again) an underlying C-State issue that has already been fixed in relation to another issue, published as KB article 983460 Startup takes a long time on a Windows 7 or Windows Server 2008 R2-based computer that has an Intel Nehalem-EX CPU installed.

And for both Windows Server 2008 R2 RTM and SP1 you might want to take a look at an MPIO issue that was also published today (you are running Hyper-V on a cluster and you are using MPIO for redundant storage access, I bet): KB article 2522766 The MPIO driver fails over all paths incorrectly when a transient single failure occurs in Windows Server 2008 or in Windows Server 2008 R2.
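
If you want to quickly check whether a node already has one of these hotfixes installed before scheduling any maintenance, a simple query from the command prompt will do (just an example; swap in whichever KB number you care about):

wmic qfe where "HotFixID='KB2522766'" get HotFixID,InstalledOn

If the hotfix isn’t installed, WMIC will report that no instances are available.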

It’s time I add a page to this blog for all the fixes related to Hyper-V and Failover Clustering with Windows Server 2008 R2 SP1, for my own reference.

Hyper-V Is Right Up There In Gartner’s Magic Quadrant for x86 Server Virtualization Infrastructure

So how do you like THEM apples?

Well, take a look at this, people: on June 30th Gartner published the Magic Quadrant for x86 Server Virtualization Infrastructure.

Figure 1: Magic Quadrant for x86 Server Virtualization Infrastructure (Source: Gartner 2011)

That’s not a bad spot if you ask me. And before the “they paid their way up there” remarks flow in: Gartner works how Gartner works, and it works like that for everyone (read: the other vendors on there), so that remark could fly right back into your face if you’re not careful. To get there in three years’ time is not a bad track record. And if you believe some of the people out there, this can’t be true. Mind you, before Hyper-V was available they only had Virtual Server to offer, and I was not using that anywhere. No, not even for non-critical production or testing, as the lack of x64 VM support made it a “no go” product for me. So the success of Hyper-V is quite an achievement. But back in 2008 I did go with Hyper-V as a highly available virtualization solution, after having tested and evaluated it during the Beta & Release Candidate time frame. Some people thought I was making a mistake.

But the features in Hyper-V were “good enough” for most of the needs I had to deal with, and yes, I knew VMware had a richer offering and was proven technology, something people never forgot to mention to me for some reason. I guess they wanted to make sure I hadn’t been living under a rock the last couple of years. They never mentioned the cost or certain trends, however, or looked at the customer’s real needs. Hyper-V was a lot better than what most environments I had to serve had running at the time. In 2008 those people I needed to help were using VMware Server or Virtual Server. Both were/are free, but for anything more than lightweight applications on the “not that important” list they are not suitable. If you’re going to do virtualization structurally you need high availability to avoid the risks associated with putting all your eggs in one basket. However, as you might have guessed, these people did not use ESX. Why? In all honesty, the cost associated with it.

In the 2005-2007 time frame servers were not yet at the cost/performance ratio they reached in 2008, and a far cry from where they are now. Those organizations didn’t do server virtualization because, from a cost perspective, both the licensing fees for the functionality and the hardware procurement just didn’t fit in yet. By 2008 the hardware cost barrier had come down, and with Hyper-V 1.0 we got a hypervisor that we knew could deliver something good enough to get the job done at a cost they could cover. We also knew that Live Migration and Dynamic Memory were in the pipeline and the product would only become better. Having tested Hyper-V, I knew I had a technology to work with at a very reasonable price (or even for free) and that included high availability.

Combine this with the notion at the time that hypervisors are becoming commodities and that people are announcing the era of the cloud. Where do you think the money needs to go? Management & applications. Where did Microsoft help with that? The System Center suite: System Center Virtual Machine Manager and Operations Manager. Are those perfect in their current incarnations? Nope. But have you looked at SCVMM 2012 Beta? Do you follow the buzz around Hyper-V 3.0 or vNext? Take a peek and you know where this is going. Think private & hybrid cloud. The beef with the MS stack lies in the hypervisor & management combination: management tools and integration capabilities that help with application delivery and hence with the delivery of services to the business. Even if you have no desire or need for the public cloud, do take a look. Having a private cloud capability enhances your internal service delivery. Think of it as “Dynamic IT on Steroids”. Having a private cloud is a prerequisite for having a hybrid cloud, which aids in the use of the public cloud when that time comes for your organization. And if it never comes, no problem, you’ve gotten the best internal environment possible, no money or time lost. See my blog for more Private Clouds, Hybrid Clouds & Public Clouds musings on this.

Is Hyper-V and System Center the perfect solution for everyone in every case? No sir. No single product or product stack can be everything to everyone. The entire VMware versus Hyper-V mud-slinging contests are at best amusing when you have the time and are in the mood for it. Most of the time I’m not playing that game. The consultant’s answer is correct: “It depends”. And very few people know all virtualization products very well and have equal experience with them. But when you’re looking to use virtualization to carry your business into the future, you should have a look at the Microsoft stack and see if it can help you. True objectivity is very hard. We all have our preferences and monetary incentives, and there are always those who’ll take it to extreme levels. There are still people out there claiming you need to reboot a Windows server daily and have BSODs all over the place. If that is really the case, they should not be blaming the technology. If the technology were that bad they would not need to try and convince people not to use it; they would run away from it by themselves and I would be asking you if you want fries with your burger. Things go “boink” sometimes with any technology, really, you’d think it was created by humans, go figure. At BriForum 2011 in London this year it was confirmed that more and more we’re seeing multiple hypervisors in use at medium to large organizations. That means there is a need for different solutions in different areas, and that Hyper-V is doing particularly well in greenfield scenarios.

Am I happy with the choices I made? Yes. We’re getting ready to do some more Hyper-V projects and those plans even include SCVMM 2012 & SCOM 2012, together with an upgrade path to Hyper-V vNext. I mailed the Gartner link to my manager, pointing out that my obstinate choice back then turned out rather well.

Follow Up on Power Options for Performance When Virtualizing

Some people asked where they can find and configure those power settings we were talking about in a previous blog post, Consider CPU Power Optimization Versus Performance When Virtualizing. So in this blog entry I’ll do a quick run-through of this. As I can get my hands on some DELL servers from two different generations (G10/G11), the screenshots are of those servers.

Let’s first look at CPUz screenshots from a DELL PE2950 III where we see two different P-states. So here we see the CPU power fluctuating between those states. This CPU supports SpeedStep but not Turbo Boost, for example.

By default SpeedStep is normally enabled in the BIOS and Windows 2008 R2 has the “Balanced” power plan as a default. So it shows up something like this.

This means you can play around and set the power plan in Windows. So far so good. Naturally, when your CPU doesn’t support fancy power management there’s not much Windows can do for you on that front. Depending on the CPU you can also enable features like C-states (core parking), P-states (SpeedStep) and Turbo Boost in the BIOS. Where exactly and what it is called depends a bit on the hardware/BIOS you’re running and the CPUs that are in there. When you disable all power saving settings in the BIOS, or set them for maximum performance, you can’t use them in Windows anymore. That’s when you’ll see something like this:

So on a Windows 2008 R2 server you’ll note that the Power Options in the GUI are disabled when the BIOS options are set to maximum performance. Note that when you install the Hyper-V role it turns Standby & Hibernation off. No need for those, unless it’s your demo machine/laptop, and then you can turn them back on (see Hibernate and Sleep with Hyper-V Role Enabled). Microsoft does state that P-states (SpeedStep) are supported and can be used, but this needs to be enabled in the BIOS.
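
By the way, you can see for yourself which sleep states are still available on a host by listing them from the command prompt:

powercfg -a

With the Hyper-V role installed you should see standby and hibernation reported as not available, which matches the behavior described above.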

To demonstrate the settings, let’s look at the BIOS of a DELL R710, which looks like what you see in the picture below. You disable SpeedStep by setting the option for CPU Power and Performance Management to “Maximum Performance”. For DELL G11 hardware you can find more information on the available options in the article Best Practices in Power Management. I suggest you search for the documentation for the servers you have at hand to see what advice the vendors have to offer on these settings and how to set them.

Possible Values here are:

Static MAX Performance
DBPM disabled (BIOS will set the P-state to MAX)
Memory frequency = Maximum Performance
Fan algorithm = Performance

OS Control
Enable OS DBPM control (BIOS will expose all possible P-states to the OS)
Memory frequency = Maximum Performance
Fan algorithm = Power

Active Power Controller
Enable Dell System DBPM (BIOS will not make all P-states available to the OS)
Memory frequency = Maximum Performance
Fan algorithm = Power

Custom
CPU Power and Performance Management: Maximum Performance | Minimum Power | OS DBPM | System DBPM
Memory Power and Performance Management: Maximum Performance | 1333MHz | 1067MHz | 800MHz | Minimum Power
Fan Algorithm: Performance | Power

And since I’m a nice guy, for all you people running a bit older hardware like a PE2950 III: there it is called “Demand-Based Power Management”, under CPU Information, and you simply disable it there.

Now when you’re running Hyper-V and you’ve disabled SpeedStep or “Cool’n’Quiet” you’ll see something like this in the GUI:

There is nothing to configure, so it’s greyed out, but it doesn’t really reflect your intentions. You can change this using the GUI, if the fact that the greyed-out options don’t reflect what you configured in the BIOS annoys you, or you can use powercfg to make them less contradictory. All you need to do is run the following line from the command prompt: “powercfg -setactive 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c” …

… and immediately you’ll see the greyed out GUI reflect a bit more what you actually set in the BIOS. Mind you this is cosmetics, but hey, we’re inclined that way by evolution.

As stated above you can also use “Change settings that are currently unavailable” to enable the radio button for “High performance”, but do note again that if you didn’t enable the operating system to control the power, it’s cosmetics.
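
A quick way to double-check what’s actually set, without trusting the greyed-out GUI, is to list the power plans and the active scheme from the command prompt:

powercfg -list
powercfg -getactivescheme

The GUID 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c used above is the built-in “High performance” plan, and -list shows the GUIDs of the other plans should you ever want to switch back.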

So now, when you think you have this figured out and you’re gazing at CPUz to watch the results, you might still see some differences. Aha, well, there is still Turbo Boost (no, not that turbo button on your 1990s PC), seen in the DELL R710 BIOS as Turbo Mode (AMD offers similar functionality with Turbo Core), that we left enabled under “Processor Information” in the BIOS. This means that sometimes, when the CPU can use an extra power boost, it will get it, on top of the full power it now has by default since we configured it for Maximum Performance.

So Turbo Mode will sometimes cause you to see a higher frequency in CPUz than what your CPU’s specification says it has, as in the left picture below. Without Turbo Boost it looks more like the specs (right picture below).
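
If you don’t have CPUz at hand you can get a rough indication from WMI as well, though keep in mind that the CurrentClockSpeed value of Win32_Processor is nowhere near as granular or real-time as what CPUz shows:

wmic cpu get Name,MaxClockSpeed,CurrentClockSpeed

On some systems this value is only sampled occasionally, so treat it as a rough indication only; for watching Turbo Boost kick in you’ll want CPUz or the vendor’s own tooling.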

And voilà, that was a quick overview of where to see & do what. I don’t have access to more modern HP kit right now, so the BIOS screenshots are from two different generations of DELL servers, but you’ll figure it out for your hardware, I’m sure. I hope this clarifies certain things for you all. I know there is a lot more to all this, how it works, how many P-states there are, but I’m not a CPU engineer or a hardcore overclocker. I’m just a systems engineer trying to get the most out of his hardware in a realistic way.