Hyper-V Is Right Up There In Gartner’s Magic Quadrant for x86 Server Virtualization Infrastructure

So how do you like THEM apples?

Well, take a look at this, people: on June 30th Gartner published the Magic Quadrant for x86 Server Virtualization Infrastructure.

Figure 1: Magic Quadrant for x86 Server Virtualization Infrastructure (Source: Gartner 2011)

That’s not a bad spot if you ask me. And before the “they paid their way up there” remarks flow in: Gartner works how Gartner works, and it works like that for everyone (read: the other vendors on there), so that remark could fly right back into your face if you’re not careful. To get there in three years’ time is not a bad track record, and if you believe some of the people out there this can’t be true. Bear in mind that before Hyper-V was available Microsoft only had Virtual Server to offer, and I was not using that anywhere. No, not even for non-critical production or testing, as the lack of x64 VM support made it a “no go” product for me. So the success of Hyper-V is quite an achievement. Back in 2008 I did go with Hyper-V as a highly available virtualization solution, after having tested and evaluated it during the Beta & Release Candidate time frame. Some people thought I was making a mistake.

But the features in Hyper-V were “good enough” for most of the needs I had to deal with, and yes, I knew VMware had a richer offering and was proven technology, something people never forgot to mention to me for some reason. I guess they wanted to make sure I hadn’t been living under a rock the last couple of years. They never mentioned the cost or certain trends, however, or looked at the customer’s real needs. Hyper-V was a lot better than what most environments I had to serve were running at the time. In 2008 the people I needed to help were using VMware Server or Virtual Server. Both were/are free, but for anything more than lightweight applications on the “not that important” list they are not suitable. If you’re going to do virtualization structurally you need high availability to avoid the risks associated with putting all your eggs in one basket. However, as you might have guessed, these people did not use ESX. Why? In all honesty: the cost associated with it.

In the 2005–2007 time frame servers had not yet reached the cost/performance ratio they hit in 2008, and they were a far cry from where they are now. Those organizations didn’t do server virtualization because of the cost, in both licensing fees for functionality and hardware procurement. It just didn’t fit in yet. By 2008 the hardware cost barrier had come down, and with Hyper-V 1.0 we got a hypervisor we knew could deliver something good enough to get the job done at a cost they could cover. We also knew that Live Migration and Dynamic Memory were in the pipeline and that the product would only get better. Having tested Hyper-V, I knew I had a technology to work with at a very reasonable price (or even for free), and that included high availability.

Combine this with the notion at the time that hypervisors were becoming commodities and that people were announcing the era of the cloud. Where do you think the money needs to go? Management & applications. Where did Microsoft help with that? The System Center suite: System Center Virtual Machine Manager and Operations Manager. Are those perfect in their current incarnations? Nope. But have you looked at the SCVMM 2012 beta? Do you follow the buzz around Hyper-V 3.0 or vNext? Take a peek and you’ll know where this is going. Think private & hybrid cloud. The beef of the MS stack lies in the hypervisor & management combination: management tools and integration capabilities that help with application delivery and hence with the delivery of services to the business. Even if you have no desire or need for the public cloud, do take a look. Having a private cloud capability enhances your internal service delivery. Think of it as “Dynamic IT on steroids”. A private cloud is also a prerequisite for a hybrid cloud, which aids in the use of the public cloud when that time comes for your organization. And if it never comes, no problem: you have gotten the best internal environment possible, no money or time lost.
See my blog for more musings on private, hybrid & public clouds.

Are Hyper-V and System Center the perfect solution for everyone in every case? No sir. No single product or product stack can be everything to everyone. The entire VMware versus Hyper-V mud-slinging contest is at best amusing when you have the time and are in the mood for it. Most of the time I’m not playing that game. The consultant’s answer is correct: “it depends”. And very few people know all virtualization products very well and have equal experience with them. But when you’re looking to use virtualization to carry your business into the future, you should have a look at the Microsoft stack and see if it can help you. True objectivity is very hard. We all have our preferences and monetary incentives, and there are always those who’ll take it to extreme levels. There are still people out there claiming you need to reboot a Windows server daily and that BSODs are all over the place. If that is really the case, they should not be blaming the technology. If the technology were that bad they would not need to try and convince people not to use it: they would run away from it themselves, and I would be asking you if you want fries with your burger. Things go “boink” sometimes with any technology; really, you’d think it was created by humans, go figure. At BriForum 2011 in London this year it was confirmed that we’re seeing multiple hypervisors in use at medium to large organizations more and more. That means there is a need for different solutions in different areas, and that Hyper-V is doing particularly well in greenfield scenarios.

Am I happy with the choices I made? Yes. We’re getting ready to do some more Hyper-V projects, and those plans even include SCVMM 2012 & SCOM 2012, together with an upgrade path to Hyper-V vNext. I mailed the Gartner link to my manager, pointing out that my obstinate choice back then turned out rather well 😉

Free Support Rant

<rant>

I blog and help out in newsgroups because I like to share ideas and solutions and to help out when and where I can. I’m active on Twitter because I enjoy the discussions, the out-loud thinking and the reflection we all get from throwing ideas, conclusions, opinions, experiences and knowledge into a pool of diverse but very skilled, passionate IT professionals and developers.

It is not always easy to share information. The potential complexity of environments, which may well have other issues and restrictions, combined with the vast number of possible configurations and designs, both valid and ill-advised, makes it near impossible to cover all eventualities. If one of my blog posts does not contain the answer to your specific problem or does not apply to your particular situation, do not complain & moan about it, let alone demand that I come up with a solution. What is written here are bits and pieces of information which I choose to share because I think they have some value and can help other people out. I do this in my own time. Really, I am not paid to blog, research technologies or build labs. I do this out of my own interest, because I enjoy it and because it has value to me in my own work. I work a lot of hours “for a boss” and those are not always the most esoteric. When you read my “About” page you’ll find the following:

I’m still in the trenches with my boys and gals. Empty suits or hollow bunnies are neither wanted nor needed. In IT you live by the sword and you die by the sword. There is no hiding when you mess up, all our mistakes are in plain sight of everyone using what we build.

That is my reality and I live by it. Perhaps others should try this. I’ve seen too many ICT “gods” come down from heaven for a short while, pushing their latest religion or product, loudly proclaiming it is the truth and the only way forward. Failure to achieve success is always due to a lack of faith among us subjects, our (at best) mediocre skills, or because we have to wait to see the benefits much later in time, but we need to keep the faith. When the shit hits the fan those gods are back on Olympus, pushing daggers into the backs of us infidels who couldn’t make it work. No thank you. I think the people I work with know the strengths and weaknesses of both myself and my solutions. I have, however, never ever left them out in the cold when something didn’t work out as planned or when things failed. Yes, eventually things, big and small, do fail. How you try to prevent that as much as possible, and how you deal with it when it happens, is what makes a huge difference. That’s where my professional responsibilities lie, not with some Microsoft-bashing, impolite wannabe who thinks insulting me is a good approach to getting me to solve their issues with a Microsoft product. You know the type: they open a pack of “M$ Sucks Quick Mix” to try to get some instant credibility and fail miserably; they even fail at asking for help.

I am not your free support desk, your dedicated Microsoft technology research engineer or your troubleshooter. I’m an IT pro with a busy job. I think certain people out there need to learn that you can catch more flies with honey than with vinegar. Don’t be a “jerk”.

</rant>

Follow Up on Power Options for Performance When Virtualizing

So some people asked where they can find and configure those power settings we were talking about in a previous blog post, Consider CPU Power Optimization Versus Performance When Virtualizing. In this blog entry I’ll do a quick run-through. As I can get my hands on some DELL servers from two different generations (G10/G11), the screenshots are of those servers.

Let’s first look at CPU-Z screenshots from a DELL PE2950 III where we see two different P-states, showing the fluctuation in CPU power. This CPU supports SpeedStep but not Turbo Boost, for example.

By default SpeedStep is enabled in the BIOS, and Windows 2008 R2 has the “Balanced” power plan as its default. So this shows up something like this.

This means you can play around and set the power plan in Windows. So far so good. Naturally, when your CPU doesn’t support fancy power management, there is not much Windows can do for you on that front. Depending on the CPU you can also enable features like C-states (core parking), P-states (SpeedStep) and Turbo Boost in the BIOS. Where exactly, and what it is called, depends a bit on the hardware/BIOS you’re running and the CPUs that are in there. When you disable all power saving settings in the BIOS, or set them for maximum performance, you can’t use them in Windows anymore. That’s when you’ll see something like this:
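If you prefer the command line over the GUI, powercfg will show you which plans Windows exposes and which one is active. A minimal sketch, run from an elevated command prompt on Windows Server 2008 R2; the GUIDs are the well-known defaults for the built-in plans:

```shell
:: List all power schemes; the active one is marked with an asterisk.
powercfg -list

:: Show the currently active scheme.
powercfg -getactivescheme

:: Switch to the "High performance" plan (well-known default GUID).
powercfg -setactive 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c

:: Switch back to "Balanced" if you change your mind.
powercfg -setactive 381b4222-f694-41f0-9685-ff5bb260df2e
```

Of course, as discussed below, what these plans actually do still depends on what the BIOS exposes to the operating system.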

So on a Windows 2008 R2 server you’ll note that the Power Options in the GUI are disabled when the BIOS options are set to maximum performance. Note that when you install the Hyper-V role it turns standby & hibernation off. No need for those, unless it’s your demo machine/laptop, and then you can turn them back on (see Hibernate and Sleep with Hyper-V Role Enabled). Microsoft does state that P-states (SpeedStep) are supported and can be used, but they need to be enabled in the BIOS for this.

To demonstrate the settings, let’s look at the BIOS of a DELL R710, which looks like what you see in the picture below. You disable SpeedStep by setting the option for CPU Power and Performance Management to “Maximum Performance”. For DELL G11 hardware you can find more information on the available options in the article Best Practices in Power Management. I suggest you search for the documentation for the servers you have at hand to see what the vendors have to offer in terms of advice on settings and how to set them.

Possible values here are:

  • Static MAX Performance ==> DBPM disabled (BIOS will set the P-state to MAX). Memory frequency = Maximum Performance. Fan algorithm = Performance.
  • OS Control ==> enables OS DBPM control (BIOS will expose all possible P-states to the OS). Memory frequency = Maximum Performance. Fan algorithm = Power.
  • Active Power Controller ==> enables Dell System DBPM (BIOS will not make all P-states available to the OS). Memory frequency = Maximum Performance. Fan algorithm = Power.
  • Custom ==> CPU Power and Performance Management: Maximum Performance | Minimum Power | OS DBPM | System DBPM. Memory Power and Performance Management: Maximum Performance | 1333MHz | 1067MHz | 800MHz | Minimum Power. Fan algorithm: Performance | Power.

And since I’m a nice guy, for all you people running a bit older hardware like the PE2950 III: there it is called “Demand-Based Power Management”, found under the CPU Information, and you actually disable it.

Now when you’re running Hyper-V and you have disabled SpeedStep or “Cool’n’Quiet”, you’ll see something like this in the GUI:

There is nothing to configure, so it’s greyed out, but it doesn’t really reflect your intentions. If it annoys you that the faded-out options don’t reflect what you configured in the BIOS, you can change this in the GUI, or you can use powercfg to make them less contradictory. All you need to do is run the following line from the command prompt: “powercfg -setactive 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c” …

… and immediately you’ll see the greyed-out GUI reflect a bit more of what you actually set in the BIOS. Mind you, this is cosmetics, but hey, we’re inclined that way by evolution.

As stated above, you can also use “Change settings that are currently unavailable” to enable the radio button for “High performance”, but do note again that if you didn’t enable the operating system to control the power, it’s just cosmetics.
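If you want to see exactly what the active plan would ask of the processor, you can dump and, if you like, pin the minimum and maximum processor state from the command line. A sketch using powercfg’s built-in aliases (again: if the BIOS doesn’t hand control to the OS, this is just as cosmetic as the GUI):

```shell
:: Dump the processor power management settings of the active plan.
powercfg -query scheme_current sub_processor

:: Pin the minimum and maximum processor state to 100% (AC values),
:: effectively telling Windows never to throttle, mirroring a
:: "maximum performance" BIOS setting.
powercfg -setacvalueindex scheme_current sub_processor PROCTHROTTLEMIN 100
powercfg -setacvalueindex scheme_current sub_processor PROCTHROTTLEMAX 100

:: Re-apply the current scheme so the new values take effect.
powercfg -setactive scheme_current
```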

So now when you think you have this figured out, and you’re gazing at CPU-Z to watch the results, you might still see some differences. Aha, well, there is still Turbo Boost (no, not that turbo button on your 1990s PC), seen in the DELL R710 BIOS as Turbo Mode under “Processor Information” (AMD offers similar functionality in Turbo Core), which we left enabled. This means that sometimes, when the CPU can use an extra power boost, it will get it, on top of the full power it now has by default since we configured it for Maximum Performance.

So Turbo Mode will sometimes cause CPU-Z to show a higher frequency than your CPU’s specification says it has, as in the left picture below. Without Turbo Boost it looks more like the specs (right picture below).

And voilà, that was a quick overview of where to see & do what. I don’t have access to more modern HP kit right now, so the BIOS screenshots are from two different generations of DELL servers, but you’ll figure it out for your hardware, I’m sure. Hope this clarifies certain things for you all. I know there is a lot more to all this, how it works, how many P-states there are, but I’m not a CPU engineer or a hardcore overclocker. I’m just a systems engineer trying to get the most out of his hardware in a realistic way.

Hyper-V 3.0 Leaked Screen Shots From Windows 8 Create A Buzz

Well, last Monday, June 20th 2011, was quite a Twitter-active day thanks to some leaked Windows 8 screenshots that lifted a tip of the veil on Hyper-V 3.0 / Hyper-V vNext. You can take a peek here (Windows Now by Robert McLaws) and here (WinRumors) to see for yourself.

Now Scott Lowe also blogged on this, but with some more detail. The list below is the one from Scott Lowe’s blog http://blogs.virtualizationadmin.com/lowe/2011/06/20/hyper-v-30-%e2%80%93-what%e2%80%99s-coming/ but I added some musings and comments on certain items.

  • Virtual Fibre Channel Adapter ==> nice, I guess the competition from iSCSI was felt. How this will turn out with regards to SAN/DRIVER/HBA support is interesting, and there is a mention of a virtual Fibre Channel SAN in the screenshots …
  • Storage Resource Pools  & Network Resource Pools   ==> this could become sweet … I’m dreaming about my wish list feedback to Microsoft but without details I won’t speculate any further.
  • New .VHDX virtual hard drive format (Up to 16TB + power failure resiliency) ==> This is just plain sweet, we’re no longer bound by 2TB LUNs on our physical storage (SAN), now we can take that to the next level.
  • Support for more than 4 cores! (My machine has 12 cores) ==> I say “Bring it on!”
  • NUMA – Memory per Node, Cores per Node, Nodes per Processor Socket ==> Well, well … what will this translate into? Help dealing with Dynamic Memory? Aid in the virtualization of SQL Servers (i.e. better support for scaling up; for now scaling out works better here)?
  • Hardware Acceleration (Virtual Machine Queue & IPsec Offload)
  • Bandwidth Management ==> Ah, that would be nice 🙂
  • DHCP Guard ==> This is supposed to drop DHCP traffic from VMs “masquerading” as DHCP servers. Could be very useful, but we need details. Will a DHCP server need to be authorized? What about non-Windows VMs, do you add “good” DHCP servers to an allow list?
  • Router Guard ==> same as above but for rogue routers. Drops router advertisement and redirection messages from unauthorized virtual machines pretending to be others. So this sounds like an allow list.
  • Monitor Port ==> provides for monitoring of network traffic into and out of a virtual machine, forwarding the information to a monitoring virtual machine. Do I hear some cheering network engineers?
  • Virtual Switch Extensions ==> So far, there appear to be two filters added: NDIS Capture LightWeight Filter and WFP vSwitch Layers LightWeight Filter.

All of this is pretty cool stuff and has many of us wanting to get our hands on the first beta 🙂 I’ve been running Windows Server tweaked as a desktop since Windows 2003, so I have Hyper-V already in that role, but hey, bring it on. I’m very eager to get started with this. I have visions of System Center Virtual Machine Manager 2012, Hyper-V 3.0 and very capable recent SAN technology … 😀