NIC Teaming in Windows 8 & Hyper-V

One of the many new features in Windows 8 is native NIC Teaming, also known as Load Balancing and Fail Over (LBFO). This is, amongst many others, a most welcome and long awaited improvement. Now that Microsoft has published a great whitepaper on this (see the link at the end), it’s time to publish this post that has been simmering in my drafts for too long. Most of us dealing with NIC teaming in Windows have a lot of stories to tell about incompatible modes, depending on the type of teaming, the vendors and what other advanced networking features you use. Combine that with the fact that this is a moving target, due to a constant trickle of driver & firmware updates to rid us of bugs or add support for features, and what works and what doesn’t changes over time. So you have to keep an eye on it. And then we haven’t even mentioned whether it is supported or not and the hassle & risk involved with updating a driver :-)

When it works it rocks and provides great benefits (if it didn’t, it would have been dead long ago). But it has not always been a very nice story. Not for Microsoft, not for the NIC vendors and not for us IT pros. Everyone wants things to be better, and finally it has happened!

Windows 8 NIC Teaming

Windows 8 brings in box NIC Teaming, also known as Load Balancing and Fail Over (LBFO), with full Microsoft support. This makes me happy as a user. It makes the NIC vendors happy to get out of needing to supply & support LBFO. And it makes Microsoft happy because it was a long missing feature in Windows that made things more complex and error prone than they needed to be.

So what do we get from Windows NIC Teaming?

  • It works both in the parent & in the guest. This comes in handy, read on!

[Image: NIC teaming in the host (parent) and inside the guest]

  • No need for anything else but NICs and Windows 8, that’s it. No 3rd party driver software needed.
  • A nice and simple GUI to configure & manage it.
  • Full PowerShell support for all of the above as well, so you can automate it for rapid & consistent deployment (see the sketch after this list).
  • Different NIC vendors are supported in the same team. You can create teams with NICs from different vendors in the same host, and you can also use different NICs across hosts. This is important for Hyper-V clustering, where you don’t want to be forced to use the same NICs everywhere. On top of that you can live migrate transparently between servers that have different NIC vendor setups. The fact that Windows 8 abstracts all of this for you is just great and gives us a lot more options & flexibility.
  • Depending on the switches you have it supports a number of teaming modes:
    • Switch Independent: This uses algorithms that do not require the switch to participate in the teaming. This means the switch doesn’t care about what NICs are involved in the teaming and that those teamed NICs can be connected to different switches. The benefit of this is that you can use multiple switches for fault tolerance without any special requirements like stacking.
    • Switch Dependent: Here the switch is involved in the teaming. As a result this requires all the NICs in the team to be connected to the same switch, unless you have stackable switches. In this mode network traffic travels at the combined bandwidth of the team members, which act as a single pipeline. There are two variations supported.
      1. Static (IEEE 802.3ad) or Generic: The configuration on the switch and on the server identifies which links make up the team. This is a static configuration with no extra intelligence in the form of protocols assisting in the detection of problems (a port that is down, a bad cable or misconfigurations).
      2. LACP (IEEE 802.1ax, also known as dynamic teaming). This leverages the Link Aggregation Control Protocol on the switch to dynamically identify links between the computer and a specific switch. This can be useful to automatically reconfigure a team when issues arise with a port, cable or a team member.
  • There are 2 load balancing options:
    1. Hyper-V Port: Virtual machines have independent MAC addresses, which can be used to load balance traffic. The switch sees a specific source MAC address connected to only one network adapter, so it can and will balance the egress traffic (from the switch) to the computer over multiple links, based on the destination MAC address of the virtual machine. This is very useful when using Dynamic Virtual Machine Queues. However, this mode might not be specific enough to get a well-balanced distribution if you don’t have many virtual machines. It also limits a single virtual machine to the bandwidth that is available on a single network adapter. Windows Server 8 Beta uses the Hyper-V switch port as the identifier rather than the source MAC address, because a virtual machine might be using more than one MAC address on a switch port.
    2. Address Hash: A hash (there are different types; see the whitepaper mentioned at the end for details) is created based on components of the packet. All packets with that hash value are assigned to one of the available network adapters. The result is that all traffic from the same TCP stream stays on the same network adapter, while another stream goes to another team member, and so on. That is how you get load balancing. As of yet there is no smart or adaptive load balancing available that optimizes the distribution by monitoring traffic and reassigning streams when beneficial.
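To make that PowerShell support a bit more tangible, here is a minimal sketch of creating a team with the NetLbfo cmdlets as they appear in the released bits; names and values may differ slightly in the beta, and the adapter names ("NIC1", "NIC2") and team name are just placeholders.

    # Create a switch independent team with Hyper-V Port load balancing
    # (adjust -TeamMembers to the adapter names on your host)
    New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" `
        -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

    # Switch dependent variants use -TeamingMode Static or Lacp, and the
    # Address Hash distribution maps to e.g. TransportPorts:
    # New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" `
    #     -TeamingMode Lacp -LoadBalancingAlgorithm TransportPorts

    # Check the team and its members
    Get-NetLbfoTeam
    Get-NetLbfoTeamMember -Team "Team1"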

Here is a nice overview table from the whitepaper:

[Table: overview of teaming modes & load balancing options, from the whitepaper]

Microsoft stated that this covers the most requested types of NIC teaming, but vendors are still capable of & allowed to offer their own versions, like they have for many years, when they feel those add value.

Side Note

I wonder how all this relates to / works with Windows NLB, not just on a host but also in a virtual machine, in combination with Windows NIC teaming in the host (let alone the guest). I already noticed that Windows NLB doesn’t seem to work if you use Network Virtualization in Windows 8. Combine that with the fact that there is not much news on any improvements in WNLB (it sure could use some extra features and service monitoring intelligence) and I can’t really advise customers to use it any more if they want to future proof their solutions. The Exchange team already went down that path 2 years ago. Luckily there are some very affordable & quality solutions out there; Kemp Technologies comes to mind.

  • Scalability. You can have up to 32 NICs in a single team. Yes, those monster setups do exist, and it provides a nice margin to deal with future needs :-)
  • There is no theoretical limit on how many virtual interfaces you can create on a team (see the sketch after this list). This sounds reasonable, as otherwise having an 8 or 16 member NIC team would make no sense. But let’s keep it real: there are other limits across the stack in Windows, although you should generally be able to get to at least 64 interfaces. Use your common sense. If you couldn’t put 100 virtual machines in your environment on just two 1Gbps NICs due to bandwidth & performance concerns, you shouldn’t do that on two teamed 1Gbps NICs either.
  • You can mix NICs of different speeds in the same team. Mind you, this is not necessarily a good idea. The best option is to use NICs of the same speed, due to failover and load balancing needs and the fact that you’d like some predictability in a production environment. In the lab this can be handy when you need to test things out or when you’d rather have this than no redundancy.
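As an illustration of those virtual interfaces on a team, here is a hedged sketch of adding extra team interfaces (tNICs) bound to VLANs with the NetLbfo cmdlets; the team name and VLAN IDs are made up for the example.

    # The default team interface is created together with the team itself.
    # Additional team interfaces, each bound to a VLAN, can be added like this:
    Add-NetLbfoTeamNic -Team "Team1" -VlanID 10 -Name "Team1 - VLAN 10"
    Add-NetLbfoTeamNic -Team "Team1" -VlanID 20 -Name "Team1 - VLAN 20"

    # List all interfaces exposed by the team
    Get-NetLbfoTeamNic -Team "Team1"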

Things to keep in mind

SR-IOV & NIC teaming

Once you team NICs, they do not expose SR-IOV on top of that. Meaning that if you want to use SR-IOV and need resilience for your network, you’ll need to do the teaming in the guest. See the drawing higher up. This is fully supported and works fine. It’s not the easiest option to manage, as it’s done per guest instead of just on the host, but the tip here is to use the NIC Teaming UI on a host to manage the VM teams at the same time. Just add the virtual machines to the list of managed servers.

[Image: the NIC Teaming UI managing teams inside virtual machines]

Do note that teams created in a virtual machine can only run in the Switch Independent configuration with Address Hash distribution mode. Only teams where each of the team members is connected to a different Hyper-V switch are supported. Which is very logical: as the picture below demonstrates, otherwise you won’t have a redundant solution.

[Image: guest team members connected to the same Hyper-V switch provide no redundancy]
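For the in-guest teaming scenario above, a rough sketch of the moving parts could look like this; the VM name, adapter names and switch layout are assumptions for the example, and the vNICs need teaming explicitly allowed on the host.

    # On the host: allow teaming on both virtual NICs of the VM,
    # each connected to a different external Hyper-V switch
    Set-VMNetworkAdapter -VMName "VM1" -Name "vNIC1" -AllowTeaming On
    Set-VMNetworkAdapter -VMName "VM1" -Name "vNIC2" -AllowTeaming On

    # Inside the guest: create the team using the adapter names as they
    # appear in the guest (Get-NetAdapter); only Switch Independent /
    # Address Hash is supported for teams inside a virtual machine
    New-NetLbfoTeam -Name "GuestTeam" -TeamMembers "Ethernet","Ethernet 2" `
        -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts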

Security Features & Policies Break SR-IOV

Also note that any advanced feature, like security policies on the (virtual) switch, will disable SR-IOV. It has to, or SR-IOV could be used as an effective security bypass mechanism. So be aware of this when you notice that SR-IOV doesn’t seem to be working.

RDMA & NIC Teaming Do Not Mix

Now you also need to be aware of the fact that RDMA requires each NIC to have a unique IP address. This excludes NIC teaming from being used with RDMA. So in order to get more bandwidth than one RDMA NIC can provide, you’ll need to rely on SMB Multichannel. But that’s not bad news.
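A quick, hedged way to check what you are working with; these cmdlets exist in the released bits, so treat this as an approximation of what the beta offers.

    # Which NICs are RDMA capable & enabled?
    Get-NetAdapterRdma

    # SMB Multichannel spreads traffic over multiple (RDMA) NICs by itself
    Get-SmbClientNetworkInterface
    Get-SmbMultichannelConnection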

TCP Chimney

TCP Chimney offload is not supported with network adapter teaming in Windows Server “8” Beta. This might change, but I don’t know of any plans.
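If you want to check or change the TCP Chimney offload state on a host, here is a minimal sketch, assuming the offload cmdlets and netsh behave as in the released bits.

    # Show global offload settings, including TCP Chimney
    Get-NetOffloadGlobalSetting

    # Explicitly disable Chimney on hosts where you use teaming
    Set-NetOffloadGlobalSetting -Chimney Disabled

    # The old school way still works too
    netsh int tcp show global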

Don’t Go Overboard

Note that you can’t team already teamed NICs, whether in the host/parent or inside the virtual machines themselves. There is also no support for using Windows NIC teaming to team two teams created with 3rd party (Intel or Broadcom) solutions. So don’t stack teams on top of each other.

Overview of Supported / Not Supported Features With Windows NIC Teaming

[Table: supported / not supported feature combinations with Windows NIC teaming]

Conclusion

There is a lot more to talk about and a lot more to be tested and learned. I hope to get some more labs going and run some tests to see how it all fits together. The aim of my tests is to be ready for prime time when Windows 8 goes RTM. But buyer beware, this is still “just” Beta material.

For more information, please download the excellent whitepaper NIC Teaming (LBFO) in Windows Server "8" Beta.

11 thoughts on “NIC Teaming in Windows 8 & Hyper-V”

  1. Can anyone point me at documentation that specifically states if NIC teaming works with iSCSI interfaces? I have found documentation from MS that 2008R2 should not use NIC teaming and that you should use MPIO instead (uGuid.doc at http://www.microsoft.com/en-us/download/details.aspx?id=18986).

    Can I use NIC teaming with an iSCSI interface on Windows 8?

    Can I use NIC teaming on Windows 8 together with MPIO and a DSM on an iSCSI interface, or is it mutually exclusive, where I have to select either NIC teaming or MPIO but not both?

    It would be great to include any link to MS documentation to support your answer.

    Thanks

    • The way I understand it is that you use teaming to build a converged network for all kinds of purposes, including providing clustered iSCSI storage to virtual machine guests. That means that those would get a virtual port for iSCSI storage traffic that doesn’t need MPIO, as below the virtual switch the redundancy is provided by the NIC team, but that’s not visible to the guest. You don’t team storage paths in the guest VM. There are many scenarios for converged networks, including providing a storage path for VM guests, but none of them uses storage for the host out of that NIC team. You’ll have either DAS/Shared SAS/iSCSI or FC on the host for storage, depending on your needs. And there you have MPIO for redundancy/failover. Take a look at some scenarios here: http://www.thomasmaurer.ch/2012/07/windows-server-2012-hyper-v-converged-fabric/
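      To give that converged network idea some shape, a rough sketch of the host side could look like this; all names, VLAN IDs and bandwidth weights are made up for the example, see Thomas’ post above for fully worked scenarios.

          # Team the physical NICs and put a Hyper-V switch on top of the team
          New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC1","NIC2" `
              -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort
          New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ConvergedTeam" `
              -MinimumBandwidthMode Weight -AllowManagementOS $false

          # Add virtual NICs for the host itself (management, live migration, iSCSI, ...)
          Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"
          Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "LiveMigration" -Access -VlanId 20
          Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 20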

  2. Hi,

    how is the MAC of the team determined? The following explanation: “In switch independent / address hash configuration the team will use the MAC address of the primary team member” is a little bit vague, I think. At least that’s not what I was able to observe in testing:

    1. I’ve set up a “switch independent / address hash conf.” team
    2. the MAC address of the team matched the address of the primary (the first NIC I’ve added to the team) team member
    3. after reboot the MAC address of the team matches the second NIC added to the team

    How do you work around this if you want to have your IPs assigned dynamically (DHCP)?

    Thank you for your reply.
    Best regards,
    P

    • Not much different from other teaming solutions afaik. Perhaps I’m misinterpreting your concerns. What do you want to do? Create DHCP reservations? Registering both MAC addresses might do the trick; otherwise just let DHCP sort it out. Or are you referring to the possibility with Broadcom/Intel to fix the MAC address? Thx.

      • Hi,

        thanx for your reply. I’m referring to the stochastic behavior of the MAC address being assigned to the team.

        I’ve learned that with different vendors there’s a pattern on how the driver picks up the MAC for the team (HW dependent, not important to this discussion). The important thing: the same MAC is always picked! This way only one MAC is needed to set up the DHCP scope.

        With Win2012 I’m uncertain. Once rebooted the MAC changed. The question is whether it will be randomized with each reboot or is there some logic to it perhaps.

        Pavol

        • OK. With BACS, in certain conditions (SLB) this was not guaranteed. So they provided a way to fix a MAC address (see the release notes) in the registry: HKLM\SYSTEM\CurrentControlSet\Services\Blfp\Parameters\{team_id}\TeamMacAddress

          I’m mostly interested in the scenarios you are facing with this behaviour. I’ll try to get some feedback on those. Are your main worries the out-of-date ARP table issues that have been seen with some versions of Intel/Broadcom NIC teaming software and/or firmware, or something else?

  3. Hi,

    thank you, my main worry is DHCP (TFTP to be more exact), and in order to set it up properly one needs to understand how the MAC address is determined.

    Like I mentioned earlier, with HP for example, I knew exactly which MAC address was going to be picked up by the team. This helped me to automate the deployment of servers where I needed to use TFTP (DHCP). With Win2012 I simply don’t know which MAC is finally going to make it to the team.

    See also my post on davidzi.com, where Dave was able to replicate the issue: http://alturl.com/ubcoz.

    Best regards,
    Pavol

    • Thx for the extra info. I’ll see what I can find out; it might take a while and not deliver much more than what you already know in the options you have. Do I understand correctly from the comment on Dave’s post that you got the Intel to always use the same MAC when you followed a specific installation order with a specific teaming mode? Does the HP solution always use a fixed MAC address by default and never that of a member, as otherwise what would happen during a failure of the primary member there? Unless that is the case, like with the BACS or Windows team – bar a fixed default MAC on a team by the vendor – can we be certain that, even during failover, the same MAC address will always be used?
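      In the meantime, a hedged sketch of what you could look at on the Windows team side; reading the current team MAC is straightforward, while pinning it with Set-NetAdapter is something I’d still want to verify against failover behaviour before relying on it for DHCP/TFTP automation.

          # Read the MAC address the team interface is currently using
          Get-NetAdapter -Name "Team1" | Format-List Name, MacAddress

          # Which member is primary right now?
          Get-NetLbfoTeamMember -Team "Team1"

          # Possibly pin a MAC on the team interface (untested for this
          # scenario, check what happens on failover before depending on it)
          Set-NetAdapter -Name "Team1" -MacAddress "00-15-5D-00-AA-01"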

