There seems to be a small bug in the Cluster Aware Updating GUI when the cluster name exceeds 15 characters. In our example we’ll look at a cluster with the name XXXCLUSSQLSERVERS or xxxclussqlservers.test.lab. We’ll try to connect to that cluster to do some cluster aware updating.
Click on the dropdown arrow and select our cluster
Once selected, click “Connect”
Now we’re greeted by this little message
No, you didn’t make a typo when you selected the cluster from the drop-down list. You also know that your cluster is up and running. So what happened? Well, the GUI queries AD and returns the CNOs it finds. Those are limited to the NetBIOS name and as such are at most 15 characters long. In this case the name is XXXCLUSSQLSERVERS, which gives a CNO of XXXCLUSSQLSERVE, and that is not found as a cluster.
The fix is simple: just type in the full cluster name, XXXCLUSSQLSERVERS, and voilà. You can connect and are on your way.
Let’s see if the FQDN is accepted as well, shall we? And yes, the screenshot below proves it is.
Conclusion
So this is not a problem once you know what’s going on. The CAU GUI returns the cluster CNO name, and that’s the NetBIOS name, which can be only 15 characters long. Selecting it in CAU to connect to the cluster doesn’t work; you need to fill out the complete name. As we demonstrated, the CAU GUI also accepts an FQDN. To prevent running into this issue, consider not making your cluster names longer than 15 characters. That way the CNO and the cluster name are identical, which is a smart thing to do anyway, as you’ll avoid duplicate CNOs trying (and failing) to be created, or other bugs.
In PowerShell you always submit the cluster name, so you don’t hit this issue; see the sketch below. Perhaps the GUI drop-down list could translate the CNOs into the actual cluster names?
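For completeness, here’s a minimal sketch of the PowerShell route, using the cluster name from this example (the Cluster-Aware Updating cmdlets ship with the Failover Clustering tools):

# The Cluster-Aware Updating cmdlets live in the ClusterAwareUpdating module
Import-Module ClusterAwareUpdating

# Both the cluster name and the FQDN are accepted here
Invoke-CauScan -ClusterName "XXXCLUSSQLSERVERS"
Invoke-CauScan -ClusterName "xxxclussqlservers.test.lab"

# An actual updating run against the full cluster name would look like this
# Invoke-CauRun -ClusterName "XXXCLUSSQLSERVERS" -Force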
IPsec has been around for a while now. In an ever more security-conscious and regulated world you want, and/or are required, to protect your network communication by authenticating and encrypting the contents of at least some of your network traffic. Think about SOX and HIPAA and you’ll see that trade or government security requirements are not going anywhere but up for us all. This is not restricted to military or intelligence organizations.
We’ve seen the ability to offload IPsec traffic to the NIC for a while now. This is great, as IPsec processing is a very CPU-intensive workload. Unfortunately it didn’t work for virtual machines. Until now, IPsec offload was only available to host/parent workloads in Windows Server 2008 R2. Virtualizing high-volume network traffic workloads that require encryption means a serious hit on the resources of the host. If you’re willing to pay, you might get by by throwing extra hosts and CPU power at the issue. But what if the load means a single virtual machine with 4 vCPUs can’t hack it? Game over. Sure, Windows Server 2012 Hyper-V allows for 32 vCPUs now, but that is very costly, so it’s not a very cost-effective solution. So in some cases this led to those workloads being marked as “unsuited for virtualization”.
But with Windows Server 2012 Hyper-V we get a very welcome improvement: a virtual machine can now also offload its IPsec processing to the physical NIC on the host. That frees up a lot of CPU cycles to perform more application-level work, resulting in better virtualization densities, which means lower costs, etc.
Let’s take a look at where you can set this in the Hyper-V GUI: you’ll find it under the virtual network adapter’s Hardware Acceleration settings.
IPsec offload is also managed by the Hyper-V switch, which controls whether the offloading will be active or not. This prevents IPsec offload from stopping services when insufficient resources are available. Please do note that when IPsec is required in the guest it will still be done anyway, creating an extra CPU burden. So this does not disable IPsec, just the offloading of it. On top of this, in the gravest extreme, you can guarantee that IPsec servers get the resources they need by sacrificing less important guests, if needed, using virtual machine prioritization. The fact that you can configure the number of security associations helps balance the needs of multiple virtual machines requiring IPsec offload.
To conclude, this wouldn’t be Windows Server 2012 if you couldn’t do all this with PowerShell. Take a look at Set-VMNetworkAdapter and notice the following parameter:
-IPsecOffloadMaximumSecurityAssociation <UInt32>
This specifies the maximum number of security associations that can be offloaded to the physical network adapter that is bound to the virtual switch and that supports IPsec task offload. The thing to notice here is that specifying a value of zero disables the IPsec offload feature.
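A quick sketch of how that might look in practice (the VM name below is just a placeholder of mine):

# Cap the number of offloaded security associations for a VM's virtual NIC
Set-VMNetworkAdapter -VMName "SQLVM01" -IPsecOffloadMaximumSecurityAssociation 512

# A value of zero disables IPsec task offload for that virtual NIC
Set-VMNetworkAdapter -VMName "SQLVM01" -IPsecOffloadMaximumSecurityAssociation 0

# Check what's configured (the wildcard avoids guessing the exact property name)
Get-VMNetworkAdapter -VMName "SQLVM01" | Select-Object Name, *IPsec*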
After a sad year of no TechEd Europe in 2011, one of our favorite tech conferences for Microsoft technologies is back in full force. Ladies & Gentlemen, TechEd Europe 2012 will be here sooner than you think.
It’s more than just technical training: it’s networking, whiteboard sessions and passionate discussion amongst peers, experts and the Microsoft employees who built the products. If you still need more technical content than that, take a look at the pre-conference agenda for a full day of expertly delivered education.
No, this is not just a commercial; I haven’t missed a TechEd Europe yet this century, and for good reasons. If you’d like to read why, take a look at this blog post: Why I Find Value In A Conference.
There will be loads of sessions on all the products in System Center 2012 and on Windows 8. In the developer sphere there’s the .NET Framework 4.5 and Visual Studio 2012 to look forward to. Combine this with a lot of experience-based guidance on current technologies and you can’t afford to miss out. To avoid disappointment, register as soon as possible to join your fellow IT Pros and Developers.
We dive a bit deeper into SR-IOV today. I’m not a hardware or software network engineer, but this is my perspective on what it is and why it’s a valuable addition to the toolbox of Hyper-V in Windows 8.
What is SR-IOV?
SR-IOV stands for Single Root I/O Virtualization. The “Single Root” part means that the PCIe device can only be shared with one system. Multi Root I/O Virtualization (MR-IOV) is a specification under which it can be shared by multiple systems. That is beyond the scope of this blog, but you can imagine it being used in future high-density blade server topologies and such to share connectivity among systems.
What does SR-IOV do?
Basically, SR-IOV allows a single PCIe device to emulate multiple instances of that physical PCIe device on the PCIe bus. So it’s a sort of PCIe virtualization. SR-IOV achieves this by using NICs that support it (the hardware dependency) through physical functions (PFs) and virtual functions (VFs). The physical device (think of this as a port on a NIC) is known as a Physical Function (PF). The virtualized instances of that physical device (that port on our NIC that gets emulated x times) are the Virtual Functions (VFs). A PF acts like a full-blown PCIe device and is configurable; it acts and functions like a physical device. There is only one PF per port on a physical NIC. VFs are only capable of data transfers in and out of the device and can’t be configured or act like real PCIe devices. However, you can have many of them tied to one PF, but they share the configuration of that PF.
It’s up to the hypervisor (the software dependency) to assign one or more of these VFs directly to a virtual machine (VM). The guest can then use the VF NIC ports via a VF driver (so there need to be VF drivers in the integration components) and traffic is sent directly (via DMA) in and out of the guest to the physical NIC, bypassing the virtual switch of the hypervisor completely. This reduces CPU load and increases performance of the host, and as such also helps with network I/O to and from the guests; it’s as if the virtual machine uses the physical NIC in the host directly. The hypervisor needs to support SR-IOV because it needs to know what PFs and VFs are and how they work.
So SR-IOV depends on both hardware (NIC) and software (hypervisor) that support it. It’s not just the NIC, by the way: SR-IOV also needs a modern BIOS with virtualization support. Now, most decent to high-end server CPUs today support it, so that’s not an issue. Likewise for the NIC: a modern quality NIC targeted at the virtualization market supports this. And of course SR-IOV also needs to be supported by the hypervisor. Until Windows 8, Hyper-V did not support SR-IOV, but now it does.
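A rough sketch of how you might light this up in Windows Server 2012 Hyper-V with PowerShell; the switch, NIC and VM names below are placeholders of my own:

# SR-IOV has to be enabled when the virtual switch is created; you can't turn it on afterwards
New-VMSwitch -Name "SRIOV-Switch" -NetAdapterName "NIC-Port1" -EnableIov $true

# Give the VM's virtual NIC an IOV weight above 0 so it will request a virtual function
Set-VMNetworkAdapter -VMName "WebVM01" -IovWeight 100

# Connect the VM's NIC to the SR-IOV capable switch
Connect-VMNetworkAdapter -VMName "WebVM01" -SwitchName "SRIOV-Switch"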
I’ve read in an HP document that you can have 1 to 6 PFs per device (NIC port) and up to 256 “virtual devices” or VFs per NIC today. But in reality that might not be viable due to the overhead in hardware resources associated with this. So 64 or 32 VFs might be about the maximum, but still, 64*2=128 virtual devices from a dual-port 10Gbps NIC is already pretty impressive to me. I don’t know what the limits are for Hyper-V 3.0, but there will be limits to the number of SR-IOV NICs in a server and the number of VFs per core and host; I think they won’t matter too much for most of us in reality. And as technology advances we’ll only see these limits go up, as the SR-IOV standard itself allows for more VFs.
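If you want to see what your own hardware reports, something along these lines should give you an idea (property names as I understand them from the Windows Server 2012 NetAdapter and Hyper-V modules, so treat this as a sketch and the VM name as a placeholder):

# What do the physical NICs say about their SR-IOV support and VF counts?
Get-NetAdapterSriov | Format-List Name, SriovSupport, NumVFs

# And what did a given VM's virtual NIC actually get? (wildcard avoids guessing exact property names)
Get-VMNetworkAdapter -VMName "WebVM01" | Select-Object Name, *Iov*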
So where does SR-IOV fit in when compared to VMQ?
Well, it does away with some overhead that still remains with VMQ. VMQ took away the problem of a single core in the host having to handle all the incoming traffic. But the hypervisor still has to touch every packet coming in and out. With SR-IOV that issue is addressed, as it allows moving data in and out of a virtual machine to the physical NIC via Direct Memory Access (DMA). So with this the CPU bottleneck is removed entirely from the process of moving data in and out of virtual machines. The virtual switch never touches it. To see a nice explanation of SR-IOV take a look at the Intel SR-IOV Explanation video on YouTube.
Intel SR-IOV Explanation
VMQ Coalescing tried to address some of the pain of the next bottleneck of using VMQ, which is the large number of interrupts needed to handle traffic if you have a lot of queues. But as we discussed already, this functionality is highly underdocumented and a bit of a black art, especially when NIC teaming and some advanced NIC software come into play. Dynamic VMQ is supposed to take care of that black art and make it more reliable and easier.
Now, in contrast to VMQ and RSS, which don’t mix in a Hyper-V environment, you can combine SR-IOV with RSS; they work together.
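A hedged sketch of how you could check that combination; Get-NetAdapterRss is run inside the guest and Get-NetAdapterVmq on the host, both from the Windows Server 2012 NetAdapter module:

# Inside the guest: is RSS active on the (VF-backed) adapter?
Get-NetAdapterRss | Format-List Name, Enabled, NumberOfReceiveQueues

# On the host: the VMQ state of the physical NICs, for comparison
Get-NetAdapterVmq | Format-List Name, Enabled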
Benefits Versus The Competition
One of the benefits that Hyper-V 3.0 in Windows 8 has over the competition is that you can live migrate a virtual machine that’s using SR-IOV to a node that’s not using SR-IOV. That’s quite impressive.
Potential Drawback Of Using SR-IOV
A drawback is that by bypassing the extensible virtual switch you might lose some features and extensions. Whether this is very important to you depends on your environment and needs. It would take me too far for this blog post, but Cisco seems to have enough aces up its sleeve to offer an integrated management and configuration interface covering both the networking done in the extensible virtual switch and the SR-IOV NICs. You can read more on this over here: Cisco Virtual Networking: Extend Advanced Networking for Microsoft Hyper-V Environments. Basically they:
Extend enterprise-class networking functions to the hypervisor layer with Cisco Nexus 1000V Series Switches.
Extend physical network to the virtual machine with Cisco UCS VM-FEX.
Interesting times are indeed ahead. Only time will tell what the many vendors have to offer in those areas and for what types of customer profiles (needs/budgets).
A Possible Usage Scenario
You can send data traffic over SR-IOV if that suits your needs. But perhaps you’ll want to keep that data traffic flowing over the extensible Hyper-V virtual switch. Still, if you’re using iSCSI to the guest, why not send that over the SR-IOV virtual function to reduce the load on the host? There is still a lot to learn and investigate on this subject. As a little side note: how are the HBAs in Hyper-V 3.0 made available to the virtual machines? SR-IOV, but the PCIe device here is a Fibre Channel HBA, not a NIC. I don’t know any details, but I think it’s similar.