Windows Server 2012 Hyper-V Supports IPsec Task Offloading

IPsec has been around for a while now. In an ever more security-conscious and regulated world you want, and/or are required, to protect your network communication by authenticating and encrypting the contents of at least some of your network traffic. Think about SOX and HIPAA and you'll see that industry and government security requirements are only going up for all of us. This is not restricted to military or intelligence organizations.

We've had the ability to offload IPsec traffic to the NIC for a while now. This is great, as IPsec processing is a very CPU-intensive workload. Unfortunately it didn't work for virtual machines: in Windows Server 2008 R2, IPsec offload was only available to host/parent partition workloads. Virtualizing high-volume network workloads that require encryption therefore means a serious hit on host resources. If you're willing to pay, you might get by by throwing extra hosts and CPU power at the problem. But what if the load means a single virtual machine with 4 vCPUs can't hack it? Game over. Sure, Windows Server 2012 Hyper-V now allows for 32 vCPUs, but that is very costly and not a very cost-effective solution. So in some cases this led to those workloads being marked as "unsuited for virtualization".

But with Windows Server 2012 Hyper-V we get a very welcome improvement: a virtual machine can now also offload its IPsec processing to the physical NIC on the host. That frees up a lot of CPU cycles to perform more application-level work, resulting in better virtualization densities and lower costs.

Let's take a look at where you can set this in the Hyper-V GUI. You'll find it under the virtual network adapter's Hardware Acceleration settings.

image

IPsec offload is also managed by the Hyper-V switch, which controls whether the offloading is active or not. This prevents IPsec offload from stopping services when insufficient resources are available. Please do note that when IPsec is required in the guest it will be done anyway, creating an extra CPU burden; so this does not disable IPsec, just the offloading of it. On top of this, and in the gravest extreme, you can guarantee that IPsec servers get the resources they need by sacrificing less important guests, using virtual machine prioritization. The fact that you can configure the number of security associations helps balance the needs of multiple virtual machines requiring IPsec offload.

To conclude, this wouldn’t be Windows Server 2012 if you couldn’t do all this with PowerShell. Take a look at  Set-VMNetworkAdapter and notice the following parameter:

-IPsecOffloadMaximumSecurityAssociation <UInt32>

This specifies the maximum number of security associations that can be offloaded to the physical network adapter that is bound to the virtual switch and that supports IPsec Task Offload. The thing to note here is that specifying a value of zero disables the IPsec offload feature.
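
Here's a minimal PowerShell sketch of how that could look; the VM name "DemoVM" and the value of 512 are just placeholders for illustration:

# Allow up to 512 security associations to be offloaded for this VM's network adapter
Set-VMNetworkAdapter -VMName "DemoVM" -IPsecOffloadMaximumSecurityAssociation 512

# A value of zero disables IPsec task offload for the virtual network adapter
Set-VMNetworkAdapter -VMName "DemoVM" -IPsecOffloadMaximumSecurityAssociation 0

# Check the current IPsec offload related settings on the virtual network adapter
Get-VMNetworkAdapter -VMName "DemoVM" | Format-List *IPsec*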

image

Upgrading Hyper-V Cluster Nodes to Windows Server 2012 (Beta) – Part 1

This is a multipart series based on some lab testing & work I did.

  1. Part 1 Upgrading Hyper-V Cluster Nodes to Windows Server 2012 (Beta) – Part 1
  2. Part 2 Upgrading Hyper-V Cluster Nodes to Windows Server 2012 (Beta) – Part 2
  3. Part 3 Upgrading Hyper-V Cluster Nodes to Windows 8 (Beta) – Part 3

After I got back from the MVP Summit 2012 in Bellevue/Redmond I couldn't wait to start playing with a Windows 8 Hyper-V cluster, so I decided to upgrade my Windows 2008 R2 cluster nodes to Windows 8. That means evicting them one by one, upgrading them and adding them to a new Windows 8 cluster. As we can build a one-node cluster, this can be done one node at a time. This isn't a fail-proof, definitive "How To"; I'm just sharing what I did.

Evicting a node

Before evicting a node make sure all virtual machines are running on the other node(s). As you can see the cluster warrior has 2 nodes, crusader & saracen (I was listening to some Saxon heavy metal at the time I built that lab setup). We evacuated node saracen prior to evicting it.

image

Evict the node & confirm when asked.

image

image

When this is done, all storage is offline to the node evicted from the cluster. No need to worry about that.
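
For those who prefer PowerShell over the Failover Cluster Manager GUI, a rough sketch of the same drain-and-evict steps could look like this; the VM group name "MyVM" is a placeholder, the cluster and node names are the ones from this lab:

Import-Module FailoverClusters

# Live migrate a virtual machine group off the node we are about to evict
Move-ClusterVirtualMachineRole -Name "MyVM" -Node "crusader" -Cluster "warrior"

# Once the node no longer owns any groups, evict it from the cluster
Remove-ClusterNode -Name "saracen" -Cluster "warrior"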

Upgrade that node to Windows 8

To anyone who has installed or upgraded to Windows 2008 R2 this should all be a very recognizable experience. Being lazy, I left the iSCSI initiator configuration in place, with the Hyper-V & Failover Clustering roles installed, during the upgrade. For production environments I like to build my nodes from scratch to have an exactly known, new and clean installation base, but for my test lab at home I wanted to get it done as fast as possible. If only the days had more hours… For extra safety you can pull the plug (or disable the switch ports) on your iSCSI or FC connections and make sure no storage is presented to the node during the upgrade process. Now please do mind I use Intel server-grade NIC adapters for which the Windows 8 Beta has drivers. Your situation may vary, so I can't guarantee the 7-year-old FC HBA in your lab server will just work, OK!?

So run setup.exe from the Windows 8 (Beta) ISO you extracted to a folder on the server or  from the (bootable) USB you created with the downloaded ISO.

image

 

The Windows Setup installer will start.

04 run setup

 

Click on “Install now” to proceed and start the setup process.

image

 

Select "Go online to get the latest updates for Setup (Recommended)".

image

 

So it looks for updates online.

image

 

It didn’t find any but that’s OK.

image

 

Select the installation you want. I went for Server with a GUI as I want screenshots. But as I wrote in the blog post Windows 8 Server With GUI, Minimal Server Interface & Server Core Lesson with the Desktop Experience Feature, you can now turn it into a Server Core installation and back again. So no regrets with any choice you make here, which is a nice improvement that can save us a lot of time.

image
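
As a side note to that choice: converting between the two afterwards boils down to removing or adding the GUI features. A minimal sketch, assuming the feature names from the Beta (verify them with Get-WindowsFeature on your build):

# Turn a "Server with a GUI" installation into Server Core (reboot required)
Uninstall-WindowsFeature Server-Gui-Shell, Server-Gui-Mgmt-Infra -Restart

# And bring the full GUI back again later if you change your mind
Install-WindowsFeature Server-Gui-Shell, Server-Gui-Mgmt-Infra -Restart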

Accept the EULA

image

 

We opt to upgrade (in production I go for a clean install)

image

 

I get notified that I have to remove PerfectDisk. I had an evaluation copy of Raxco PerfectDisk installed that I used to do some testing with redirected CSV traffic and defragmentation (see Some Feedback On How to defrag a Hyper-V R2 Cluster Shared Volume).

image

 

So the upgrade was cancelled.

image

 

I uninstalled PerfectDisk but still it was a no-go. I had to remove all traces of it in the registry & file system that the uninstaller left behind or the upgrade just wouldn't start. But after that it worked.

image

 

That means we can kick off the upgrade! It all looks very familiar :)

image

image

image

image

After this step there are a couple of reboots, with a screen that goes dark in between, and some patience is required. But all in all it's a fast process, and then we get our restyled beta fish.

image

image

image

And voilà, we're where we need to be… :)

image

 

After the upgrade process I ran into one error: the GUI for Failover Clustering would not start. The solution I found for that was simply to remove the feature and add it again. That did the trick.

ClusGUI
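
If you prefer to do that removal and re-add from PowerShell instead of Server Manager, a quick sketch; my assumption is that cycling the Failover Clustering management tools (RSAT-Clustering-Mgmt) is enough, so check Get-WindowsFeature for the exact names on your build:

# See what clustering related features and tools are currently installed
Get-WindowsFeature *Clus*

# Remove and re-add the Failover Cluster Manager GUI tools
Uninstall-WindowsFeature RSAT-Clustering-Mgmt
Install-WindowsFeature RSAT-Clustering-Mgmt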

 

So this was a description of the first steps to transition a Windows 2008 R2 SP1 cluster to a Windows 8 (Beta) cluster. As we've seen, we evict the nodes one by one to upgrade them or do a clean install. In the latter case you'll need to redo the iSCSI initiator configuration and install the Failover Clustering feature and, in the case of a Hyper-V cluster, the Hyper-V role; a sketch of that follows below. The nodes can then be added to a new Windows 8 cluster, starting out with a one-node cluster. More on that in the second part of this blog post.
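For the clean install path, that post-install work can also be scripted. A hedged sketch of what it could look like on a freshly installed node; the cluster name, IP addresses and target portal below are made up for illustration:

# Install the Hyper-V role and the Failover Clustering feature (reboot required for Hyper-V)
Install-WindowsFeature Hyper-V, Failover-Clustering -IncludeManagementTools -Restart

# Redo the iSCSI initiator configuration (target portal address is a placeholder)
Start-Service MSiSCSI
New-IscsiTargetPortal -TargetPortalAddress "192.168.2.50"
Get-IscsiTarget | ForEach-Object { Connect-IscsiTarget -NodeAddress $_.NodeAddress }

# Start out with a one node cluster and add the other node(s) once they are upgraded
New-Cluster -Name "warrior12" -Node "saracen" -StaticAddress "192.168.2.60"
Add-ClusterNode -Cluster "warrior12" -Name "crusader"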

Windows 8 Hyper-V Improved Integration Services Setup

In the Windows 8 Beta there is a nice, functional improvement in Hyper-V Manager when you want to install or upgrade the Integration Services. It shows you what version (if any) is installed and whether an upgrade is needed or not. Until now it just "mentioned" that a previous version (no version number, it could even be the latest one) was installed and happily let you reinstall it, needed or not. This begs the question of how this deals with "corrupted" integration services, if such a thing exists. I, personally, have never seen it. Uninstall/reinstall I guess, when you come across it, as I don't know of a forced/repair install option.

Walkthrough of The Improved Integration Services Setup

In the Virtual Machine console navigate to Action and select “Insert Integration Services Setup Disk”

image

In the Virtual Machine console you’ll see that inserting the integration services disk succeeded.

image

Like before, if the setup process doesn’t start automatically just navigate to the DVD and kick start it yourself.

10

 

As you can see below, it now shows what version (if any) of the integration services is already installed and asks whether you want to update. In the example below you can see it has the Windows 2008 R2 SP1 version of the integration services. This is as expected, as this machine (a W2K3R2SP2 guest) was imported from a Hyper-V cluster running Windows 2008 R2 SP1.

Integration Components
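
By the way, you can also check this from the host with PowerShell before even inserting the setup disk; a quick sketch, with the property names as they appear on my build:

# List the integration services version and state each guest reports to the host
Get-VM | Select-Object Name, IntegrationServicesVersion, IntegrationServicesState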

 

You click OK and the installation process for the integration services will start.

02

03

 

When the installation is done you'll be notified that the virtual machine needs to restart.

image

 

The server will reboot, and if you then try to install the integration services again it will notify you that it already has the correct version of the integration services running.

09

 

Remarks

If you hit an error in the Beta of Windows 8 Hyper-V, I advise two things based on what I have experienced myself in the lab.

  1. Make sure you have enough disk space. I had one test server that had only a few MB left on the C: partition and that bit me :)
  2. Make sure you do it after a clean reboot, just so there is no pending hardware detection/install lingering around. I experienced this one on a Windows 2003 R2 SP2 guest: error code 1618, and yup, that means another installation is already in progress.

04

Windows 8 introduces SR-IOV to Hyper-V

We dive a bit deeper into SR-IOV today. I'm not a hardware or software network engineer, but this is my perspective on what it is and why it's a valuable addition to the toolbox of Hyper-V in Windows 8.

What is SR-IOV?

SR-IOV stands for Single Root I/O Virtualization. The "Single Root" part means that the PCIe device can only be shared with one system. Multi Root I/O Virtualization (MR-IOV) is a specification where it can be shared by multiple systems. That is beyond the scope of this blog, but you can imagine it being used in future high-density blade server topologies and such to share connectivity among systems.

What does SR-IOV do?

Basically, SR-IOV allows a single PCIe device to present multiple instances of itself on the PCIe bus, so it's a sort of PCIe virtualization. SR-IOV achieves this, on NICs that support it (hardware dependency), by using physical functions (PFs) and virtual functions (VFs). The physical device (think of a port on a NIC) is known as a Physical Function (PF). The virtualized instances of that physical device (that port on our NIC that gets presented multiple times) are the Virtual Functions (VFs). A PF acts like a full-blown PCIe device: it is configurable and it acts and functions like a physical device. There is only one PF per port on a physical NIC. VFs are only capable of moving data in and out of the device and can't be configured or act like real PCIe devices. However, you can have many of them tied to one PF, and they share the configuration of that PF.

It's up to the hypervisor (software dependency) to assign one or more of these VFs to a virtual machine (VM) directly. The guest can then use the VF NIC ports via a VF driver (so there need to be VF drivers in the integration components) and traffic is sent directly (via DMA) in and out of the guest to the physical NIC, bypassing the virtual switch of the hypervisor completely. This reduces CPU load and increases performance of the host, and as such also helps with network I/O to and from the guests; it's as if the virtual machine uses the physical NIC in the host directly. The hypervisor needs to support SR-IOV because it needs to know what PFs and VFs are and how they work.

image

So SR-IOV depends on both hardware (NIC) and software (hypervisor) that support it. It's not just the NIC, by the way: SR-IOV also needs a modern BIOS with virtualization support. Most decent to high-end server CPUs today support it, so that's not an issue, and likewise for the NIC; a modern, quality NIC targeted at the virtualization market supports this. And of course SR-IOV also needs to be supported by the hypervisor. Until Windows 8, Hyper-V did not support SR-IOV, but now it does.
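
To make this a bit more concrete, here is a minimal PowerShell sketch of switching it on; the adapter, switch and VM names are placeholders, and note that SR-IOV has to be enabled when the virtual switch is created:

# Check whether the physical NICs and the platform support SR-IOV
Get-NetAdapterSriov

# Create an external switch with SR-IOV enabled (this cannot be turned on afterwards)
New-VMSwitch -Name "SR-IOV Switch" -NetAdapterName "SLOT 4 Port 1" -EnableIov $true

# Give the VM's network adapter an IOV weight greater than zero so it uses a VF when one is available
Set-VMNetworkAdapter -VMName "DemoVM" -IovWeight 100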

I've read in an HP document that you can have 1 to 6 PFs per device (NIC port) and up to 256 "virtual devices" or VFs per NIC today. But in reality that might not be viable due to the overhead in hardware resources associated with this. So 64 or 32 VFs might be about the maximum, but still, 64*2=128 virtual devices from a dual-port 10Gbps NIC is already pretty impressive to me. I don't know what the limits are for Hyper-V 3.0, but there will be limits on the number of SR-IOV NICs in a server and the number of VFs per core and per host. I think they won't matter too much for most of us in reality, and as technology advances we'll only see these limits go up, as the SR-IOV standard itself allows for more VFs.

So where does SR-IOV fit in when compared to VMQ?

Well, it does away with some overhead that still remains with VMQ. VMQ took away the overload of a single core in the host having to handle all the incoming traffic. But the hypervisor still has to touch every packet coming in and out. With SR-IOV that issue is addressed, as it allows moving data in and out of a virtual machine to the physical NIC via Direct Memory Access (DMA). With this, the CPU bottleneck is removed entirely from the process of moving data in and out of virtual machines; the virtual switch never touches it. For a nice explanation of SR-IOV, take a look at the Intel SR-IOV Explanation video on YouTube.

Intel SR-IOV Explanation

VMQ Coalescing tried to address some of the pain of the next bottleneck of using VMQ, which is the large number of interrupts needed to handle traffic if you have a lot of queues. But as we discussed already, this functionality is highly under-documented and a bit of a black art, especially when NIC teaming and some advanced NIC software come into play. Dynamic VMQ is supposed to take care of that black art and make it more reliable and easier.

Now, in contrast to VMQ & RSS, which don't mix in a Hyper-V environment, you can combine SR-IOV with RSS; they work together.

Benefits Versus The Competition

One of the benefits that Hyper-V 3.0 in Windows 8 has over the competition is that you can live migrate a virtual machine using SR-IOV to a node that's not using SR-IOV. That's quite impressive.
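
A hedged one-liner to illustrate, assuming live migration is already set up between the hosts and using placeholder names; if the destination has no SR-IOV capable switch, the vNIC simply falls back to the software path through the virtual switch:

# Live migrate an SR-IOV enabled virtual machine to a host that does not use SR-IOV
Move-VM -Name "DemoVM" -DestinationHost "node2"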

Potential Drawback Of Using SR-IOV

A drawback is that by bypassing the Extensible Virtual Switch you might lose some features and extensions. Whether this is very important to you depends on your environment and needs. It would take me too far for this blog post, but Cisco seems to have enough aces up its sleeve to offer an integrated management & configuration interface covering both the networking done in the extensible virtual switch and the SR-IOV NICs. You can read more on this over here: Cisco Virtual Networking: Extend Advanced Networking for Microsoft Hyper-V Environments. Basically they:

  1. Extend enterprise-class networking functions to the hypervisor layer with Cisco Nexus 1000V Series Switches.
  2. Extend physical network to the virtual machine with Cisco UCS VM-FEX.

Interesting times are indeed ahead. Only time will tell what the many vendors have to offer in those areas & for what type of customer profiles (needs/budgets).

A Possible Usage Scenario

You can send data traffic over SR-IOV if that suits your needs. But perhaps you'll want to keep that data traffic flowing over the extensible Hyper-V virtual switch. If you're using iSCSI to the guest, though, why not send that over the SR-IOV virtual function to reduce the load on the host? There is still a lot to learn and investigate on this subject. As a little side note: how are the Fibre Channel HBAs in Hyper-V 3.0 made available to the virtual machines? I don't know the details, but I think it's done in a similar way, with the PCIe device being a Fibre Channel HBA rather than a NIC.