Windows Server 2012 Hyper-V Supports IPsec Task Offloading

IPsec has been around for a while now. In an ever more security-conscious and regulated world you want, and/or are required, to protect your network communication by authenticating and encrypting the contents of at least some of your network traffic. Think about SOX and HIPAA and you’ll see that trade and government security requirements are not going anywhere but up for all of us. This is not restricted to military or intelligence organizations.

We’ve had the ability to offload IPsec traffic to the NIC for a while now. This is great, as IPsec processing is a very CPU-intensive workload. Unfortunately it didn’t work for virtual machines: until now IPsec offload was only available to host/parent workloads in Windows Server 2008 R2. Virtualizing high-volume network workloads that require encryption therefore meant a serious hit on the host’s resources. If you’re willing to pay, you might get by by throwing extra hosts and CPU power at the issue. But what if the load means a single virtual machine with 4 vCPUs can’t hack it? Game over. Sure, Windows Server 2012 Hyper-V allows for 32 vCPUs now, but burning that many cycles on encryption is very costly and not exactly a cost-effective solution. So in some cases this led to those workloads being marked as “unsuited for virtualization”.

But with Windows Server 2012 Hyper-V we get a very welcome improvement: a virtual machine can now also offload its IPsec processing to the physical NIC on the host. That frees up a lot of CPU cycles to perform more application-level work, resulting in better virtualization densities and, in turn, lower costs.

Let’s take a look at where you can set this in the Hyper-V GUI. You’ll find it under the virtual network adapter, under Hardware Acceleration.

[Image: the Hardware Acceleration settings of a virtual network adapter in the Hyper-V GUI, with IPsec task offloading]

IPsec offload is also managed by the Hyper-V switch, which controls whether the offloading is active or not. This prevents the IPsec offload from stopping services when insufficient resources are available. Please do note that when the guest requires IPsec it will still be done, just in software, creating an extra CPU burden. So this does not disable IPsec, only the offloading of it. On top of this, in the gravest extreme, you can guarantee that IPsec servers get the resources they need by sacrificing less important guests if needed, using virtual machine prioritization. The fact that you can configure the number of security associations helps balance the needs of multiple virtual machines that require IPsec offload.

To conclude, this wouldn’t be Windows Server 2012 if you couldn’t do all this with PowerShell. Take a look at  Set-VMNetworkAdapter and notice the following parameter:

-IPsecOffloadMaximumSecurityAssociation <UInt32>

This specifies the maximum number of security associations that can be offloaded to the physical network adapter that is bound to the virtual switch and that supports IPsec Task Offload. The thing to notice here is that specifying a value of zero disables the IPsec offload feature.

[Image: the IPsecOffloadMaximumSecurityAssociation parameter of Set-VMNetworkAdapter]
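As a quick sketch, and assuming a virtual machine called DB01 (the name is just a placeholder), setting and disabling the offload from PowerShell looks something like this:

Set-VMNetworkAdapter -VMName "DB01" -IPsecOffloadMaximumSecurityAssociation 512

Set-VMNetworkAdapter -VMName "DB01" -IPsecOffloadMaximumSecurityAssociation 0

The first line allows up to 512 offloaded security associations for that virtual machine’s network adapters; the second sets the value to zero and thereby turns IPsec task offloading off for them.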

Configuring Jumbo Frames with PowerShell in Windows Server 2012

During lab and test time with Windows Server 2012 Hyper-V some experimenting with PowerShell is needed to try and automate actions and settings. One of the things we have been playing around with was how to enable and configure jumbo frames.

Many advanced features like Large Send Offload have cmdlets of their own (Enable-NetAdapterLso etc.), but not all of them do, and jumbo frames is one of the latter. For those advanced features you can use the NetAdapterAdvancedProperty cmdlets (see Network Adapter Cmdlets in Windows PowerShell). You can then set/enable those features via their registry keywords and values. Let’s say we want to enable jumbo frames on a virtual adapter named “ISCSI” in a VM.

[Image: the virtual network adapter named “ISCSI” in the virtual machine]

To know what values to use you can run:

Get-NetAdapterAdvancedProperty -Name ISCSI

[Image: output of Get-NetAdapterAdvancedProperty -Name ISCSI listing the advanced properties and their registry keywords]

As you can see, Jumbo Packet has a RegistryValue of 1514 and a DisplayValue of “Disabled”. You can also see that the RegistryKeyword to use to enable and configure jumbo frames is “*JumboPacket”. So to enable jumbo frames you run the following command:

Set-NetAdapterAdvancedProperty -Name "ISCSI" -RegistryKeyword "*JumboPacket" -RegistryValue 9014

[Image: the Jumbo Packet advanced property now showing a RegistryValue of 9014]

The RegistryValue is set to 9014 and the DisplayValue is set to “9014 Bytes”, i.e. it’s enabled.
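If you just want to check that one property rather than the whole list, you can filter on the registry keyword:

Get-NetAdapterAdvancedProperty -Name "ISCSI" -RegistryKeyword "*JumboPacket"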

If you type in a disallowed value, the error will list the accepted values. Please note that these can differ from NIC to NIC, depending on what is supported. Some will only show 1514 and 4088, others will show 1514, 4088 and 9014.

[Image: the error message listing the accepted values for *JumboPacket]
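You can also list the supported values up front without provoking an error; if I recall correctly the advanced property objects expose them as ValidDisplayValues and ValidRegistryValues:

(Get-NetAdapterAdvancedProperty -Name "ISCSI" -RegistryKeyword "*JumboPacket").ValidDisplayValues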

Now to disable jumbo frames you just need to reset the RegistryValue back to 1514

Set-NetAdapterAdvancedProperty -Name "ISCSI" -RegistryKeyword "*JumboPacket" -RegistryValue 1514

The result of this command can be seen in the picture below: the DisplayName Jumbo Packet has a DisplayValue of “Disabled” again.

[Image: the Jumbo Packet advanced property back at 1514 / “Disabled”]

Let’s say you want to enable jumbo frames on all network adapters in a host. Then you can run this:

Get-NetAdapterAdvancedProperty -DisplayName "Jumbo Packet" | Set-NetAdapterAdvancedProperty -RegistryValue 9014

Or run

Set-NetAdapterAdvancedProperty -Name * -RegistryKeyword "*JumboPacket" -RegistryValue 9014

I didn’t notice much difference in speed between the two when testing this with Measure-Command.
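For completeness, timing them is as simple as wrapping each command in a script block (this is just the pattern, not a benchmark):

Measure-Command { Get-NetAdapterAdvancedProperty -DisplayName "Jumbo Packet" | Set-NetAdapterAdvancedProperty -RegistryValue 9014 }

Measure-Command { Set-NetAdapterAdvancedProperty -Name * -RegistryKeyword "*JumboPacket" -RegistryValue 9014 }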

If you mess things up too much and you want to return all the settings to a well-known status, i.e. the defaults, you can run:

Reset-NetAdapterAdvancedProperty -Name ISCSI -DisplayName *

If you’ve just messed around with the jumbo frame settings run

Reset-NetAdapterAdvancedProperty -Name ISCSI -DisplayName "Jumbo Packet"

Or you can do the same for all network adapters:

Reset-NetAdapterAdvancedProperty -Name * -DisplayName "Jumbo Packet"

There you go, you’re well on your way to doing the more advanced configuration of your network setup. Enjoy!

Windows Server 2012 Supports Data Center Bridging (DCB)

Data Center Bridging (DCB) is a collection of standards-based, end-to-end networking technologies that allow Ethernet to act as the unified fabric for multiple types of traffic in the data center. You cannot put a bunch of traffic types/protocols on the same physical pipes if you have no way of guaranteeing that they will each get what they need, when they need it, based on priority and impact. Even with ludicrous overprovisioning you could still run into issues, and even if that’s not the case it’s a very expensive option. When you think about iSCSI, Remote Direct Memory Access (RDMA) and Fibre Channel over Ethernet (FCoE) you can see where the benefits are to be found. We just can’t keep adding network infrastructure after network infrastructure for all these applications on a large scale.

  • Integrates with standard Ethernet networks
  • Prevents congestion in the NIC and the network by reserving bandwidth for particular traffic types, giving better performance for all
  • Windows Server 2012 provides support and control for DCB and allows packets to be tagged by traffic type
  • Provides lossless transport for mission-critical workloads

You can see why this can be handy in a virtualized world evolving into cloud infrastructure. By enabling multiple traffic types to use the same Ethernet fabric you can simplify and reduce the network infrastructure (hardware & cabling). In some environments this is a big deal. Imagine that a cloud provider does storage traffic over Ethernet on the same hardware infrastructure as the rest of the Ethernet traffic. You can get rid of the isolated storage-specific switches and HBAs, reducing complexity and operational costs. Potentially even equipment costs; I say potentially because I’ve seen the cost of some unified fabric switches and think your mileage may vary depending on the scale and nature of your operations.

Requirements for Data Center Bridging

DCB is based on four specifications by the DCB Task Group:

  1. Enhanced Transmission Selection (IEEE 802.1Qaz)
  2. Priority Flow Control (IEEE 802.1Qbb)
  3. Datacenter Bridging Exchange protocol
  4. Congestion Notification (IEEE 802.1Qau)

Numbers 3 and 4 are not strictly required; they are optional (and beneficial) if I understand things correctly. If you want to dive a little deeper, have a look at the DCB Capability Exchange Protocol Specification and have a chat with your network people about what you want to achieve.

You also need support for DCB in the switches and in the network adapters.

Finally, don’t forget to run Windows Server 2012 as the operating system. You can find some more information on TechNet in the Data Center Bridging (DCB) Overview, but it is incomplete. More information is coming!
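To give you an idea of what the host-side configuration can look like, here is a rough sketch using the built-in DCB/QoS PowerShell cmdlets. The traffic class, priority, bandwidth percentage and adapter name are just example values for an iSCSI traffic class, so adapt them to your own environment and check them against the documentation:

Install-WindowsFeature Data-Center-Bridging
New-NetQosPolicy "ISCSI" -iSCSI -PriorityValue8021Action 4
Enable-NetQosFlowControl -Priority 4
New-NetQosTrafficClass "ISCSI" -Priority 4 -BandwidthPercentage 40 -Algorithm ETS
Enable-NetAdapterQos -Name "SLOT2-PORT1"
Set-NetQosDcbxSetting -Willing $false

The policy tags iSCSI traffic with priority 4, PFC is enabled for that priority, the traffic class reserves 40% of the bandwidth for it via ETS, DCB is enabled on the physical adapter, and the last line tells Windows not to accept DCB settings pushed down from the switch via DCBX. Whether you want that last bit depends on how your network team manages things.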

Understanding what it is and does

So, in the same traffic metaphor we used with Data Center TCP, we can illustrate the situation and the solution with traffic lanes for emergency services and the like. Instead of having your mission-critical traffic stuck in gridlock like the fire department trucks below

[Image: fire department trucks stuck in gridlocked traffic]

You could assign a reserved lane, i.e. QoS with a guaranteed minimal bandwidth, to that mission-critical service. While you’re at it you might do the same for some less critical services that nonetheless provide a big benefit to the overall situation.

[Image: a reserved lane keeping the emergency services moving]

Windows Server 2012 Supports Data Center TCP (DCTCP)

In the grand effort to make Windows Server 2012 scale above and beyond the call of duty, Microsoft has been addressing (potential) bottlenecks all over the stack: CPU, NUMA, memory, storage and networking.

Data Center TCP (DCTCP) is one of the many improvements with which Microsoft aims to deliver much better network throughput with affordable switches. Switches that can manage large amounts of network traffic tend to have large buffers, and those push up the price a lot. The idea is that a large buffer creates the ability to deal with bursts and prevents congestion. Call it overprovisioning if you want. While this helps, it is far from ideal. Let’s call it a blunt instrument.

To mitigate this issue Windows Server 2012 is now capable of dealing with network congestion in a more intelligent way. It does so by reacting to the degree, and not merely the presence, of congestion, using DCTCP. The goals are:

  • Achieve low latency, high burst tolerance and high throughput with small-buffer (read: cheaper) switches.
  • This requires Explicit Congestion Notification (ECN, RFC 3168) capable switches. You’d think that should be no showstopper, as ECN is pretty common on most data center / rack switches, but that doesn’t seem to be the case for the really cheap ones where this would shine most.
  • The algorithm only kicks in when it makes sense to do so (low round-trip times), i.e. it will be used inside the data center, not over a worldwide WAN or the internet.
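If you’re curious which congestion provider a given TCP setting template uses, or want to change it on one of the custom templates, the NetTCPSetting cmdlets should do the trick. A sketch, with the parameter names as I remember them from the Windows Server 2012 builds, so verify against Get-Help before relying on it:

Get-NetTCPSetting | Format-Table SettingName, CongestionProvider -AutoSize

Set-NetTCPSetting -SettingName DatacenterCustom -CongestionProvider DCTCP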

To see if it is applied run Get-NetTcpConnection:

[Image: Get-NetTcpConnection output for the CSV and LM networks]

As you can see, this is applied here, on a DELL PC8024F switch, for the CSV and LM networks. The internet-connected NIC (the connection of the RDP session) shows:

[Image: Get-NetTcpConnection output for the internet-connected NIC]

Yup, it’s East-West traffic only, not North-South where it makes no sense.
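If you prefer text over screenshots, the AppliedSetting column tells you which TCP setting template, and therefore which congestion behaviour, each connection got. Something along these lines should do (assuming the property names I recall):

Get-NetTcpConnection | Sort-Object AppliedSetting | Format-Table LocalAddress, RemoteAddress, State, AppliedSetting -AutoSize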

When I was prepping a slide deck for a presentation on what this is, does and means, I compared it to “green wave” traffic light control. The space between consecutive traffic lights is the buffer and the red lights are the stops the traffic has to deal with due to congestion. This leaves room for a lot of improvement, and the way to achieve it is traffic control that intelligently manages the incoming flow so that at every hop there is a green light and the buffer isn’t saturated.

[Image: the traffic light metaphor, with red lights causing stops and queues]

Windows Server 2012 in combination with Explicit Congestion Notification (ECN) provides the intelligent traffic control to realize the green wave.

[Image: the green wave, traffic flowing through consecutive green lights]

The result is very smooth, low-latency traffic with high burst tolerance and high throughput on cheaper, small-buffer switches. To see the difference, look at the picture below (from Microsoft BUILD) of what this achieves. Pretty impressive. Here’s a paper by Microsoft Research on the subject.

[Image: slide from Microsoft BUILD showing the results achieved with DCTCP]