Trunking With Hyper-V Networking

When doing lab work or real-life implementations, you'll need to go beyond the basic 101 stuff every now and then to build solutions. This is especially true when using virtual network appliances. Networking means you'll be dealing with Link Aggregation Groups, trunking, MLAG, routing, LACP … in short, the tools of the trade. In my experience I use trunking in Hyper-V mostly to mimic real-world scenarios where trunking is used (firewalls, routers, load balancers). These tend to be limited in usable ports in real life. So even before you run out of physical ports on your Hyper-V host, you can leverage trunking to mimic the real-life environment. This leads us to trunking with Hyper-V networking.

I for one have used this on 10Gbps ports on both physical and virtual load balancers in the uplink to the switches. As you can imagine, when doing redundant (teamed) cabling with HA load balancers you're consuming 10Gbps ports, and not all VLANs warrant a dedicated 10Gbps uplink, even if you had them.

Trunking & VLAN’s are the way we deal with this in the network hardware world and we can do the same in Hyper-V. In the Hyper-V Manager GUI you will not find a way to define a trunk on an vNIC attached to a vSwitch. But this can be done via PowerShell. So please do not reject Hyper-V as not being up to the job. It is. Let me show you how you can do trunking with Hyper-V networking.

Generally, on a clean install, I dump the default vNIC. DO NOT do this blindly on an already deployed appliance virtual machine.

#Delete the default network adapter
Remove-VMNetworkAdapter -VMName VLM200-1 -Name "Network Adapter"

I then add the number of Ethernet ports I need on my Kemp Technologies virtual Load Master.

#Create the VLM200 ports (4, like its physical counterpart)
For ($Count = 0; $Count -le 3; $Count++)
{
    Add-VMNetworkAdapter -VMName VLM200-1 -Name "Eth$Count"
}

A peek at our handiwork via Get-VMNetworkAdapter -VMName VLM200-1 shows our 4 ports.
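
To trim the output to what matters here, you can format just a few columns; a small example using standard properties of the returned adapter objects:

#Show the new ports with their switch and MAC assignments
Get-VMNetworkAdapter -VMName VLM200-1 | ft Name, SwitchName, MacAddress, Status -AutoSize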

As you can see, I like to give my network adapters distinctive names. In combination with the switch name this enables me to identify the NICs better. Combine that with a good naming policy inside the VM if possible. In Windows Server 2016 you can hot add and remove vNICs and use the new "Device Naming" functionality (see Hot add/remove of network adapters and enabling device naming in Windows Server Hyper-V), which only makes the experience better in relation to uptime and automation.
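
As a quick illustration, device naming is switched on per vNIC; a minimal sketch, assuming a Windows Server 2016 host and an up-to-date VM configuration version:

#Enable device naming so the guest can see the adapter name "Eth3"
Set-VMNetworkAdapter -VMName VLM200-1 -Name "Eth3" -DeviceNaming On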

Now let's say we use Eth0 for management and Eth1 for the HA heartbeat. That leaves Eth2 and Eth3 for workloads. We could even aggregate these for redundancy. In this demo we'll configure Eth3 as a trunk with a list of allowed VLANs. We keep the native VLAN ID on 0, as it is by default. Only in specific situations where you have changed this in the network should this be changed.

#Trunk Eth3 and add the required VLAN IDs
Set-VMNetworkAdapterVlan -VMName VLM200-1 -VMNetworkAdapterName "Eth3" -Trunk -AllowedVlanIdList "10,20,30" -NativeVlanId 0

Which delivers what we need to get our network appliance going.
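
You can verify the result from the host; Get-VMNetworkAdapterVlan should now report Eth3 in Trunk mode with the allowed VLAN list:

#Check the VLAN configuration of Eth3
Get-VMNetworkAdapterVlan -VMName VLM200-1 -VMNetworkAdapterName "Eth3"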

In your virtual appliance you can now create VLANs on Eth3. How this shows up depends on the appliance, in this example a Kemp Virtual Load Master. Here we mimic a 4-port Load Master; we're not trunking because we ran out of the maximum supported number of NICs we can add to a virtual machine, but to mimic the physical appliance.

A word of warning: you will not see this configuration in the settings via the GUI, and manipulating the VLAN settings in the GUI will overwrite them without warning. So be careful when configuring your virtual network appliance(s). As an example, I'll touch the VLAN setting of Eth3 in the GUI and give it VLAN 500.
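
That GUI edit amounts to the equivalent of putting the vNIC in access mode; shown here only to illustrate what the GUI does behind the scenes (I made the change in the GUI itself):

#What the GUI edit effectively does: access mode on VLAN 500
Set-VMNetworkAdapterVlan -VMName VLM200-1 -VMNetworkAdapterName "Eth3" -Access -VlanId 500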

We now take another look at the VLAN settings of the appliance with Get-VMNetworkAdapterVlan. That vNIC is now in Access mode with VLAN 500. Ouch, that will seriously ruin your day in production! Be careful!

On top of this, some appliances do not respond well to such misconfigurations on the switch side (both physical and virtual switches). This leads not only to service interruption but can also leave you unable to manage the appliance, requiring a reboot.
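
If this happens in a lab, recovery is simply a matter of re-applying the trunk configuration from earlier; a sketch:

#Restore the trunk configuration on Eth3
Set-VMNetworkAdapterVlan -VMName VLM200-1 -VMNetworkAdapterName "Eth3" -Trunk -AllowedVlanIdList "10,20,30" -NativeVlanId 0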

Anyway, so yes, you can do trunking with Hyper-V networking on a vNIC, but this normally only makes sense if you have an appliance running that knows what to do with a trunk, such as a virtual firewall, router or load balancer.

Virtual Network Appliances I Use for Hyper-V Labs

When you build and maintain a test lab you're always on the lookout for gear you can use, either hardware or virtual appliances. My main concerns are cost, whether it works well on Hyper-V, and the ability to mimic real-world environments. That's a great help for educational purposes as well as for testing and as an aid to troubleshooting. One of the nice things virtualization, and now also cloud IaaS, offers is the ability to run virtual storage and network appliances that give us that real-world look and feel. Add to that ever more software-defined storage, networking and compute and we're able to build very realistic labs. The limits we're left with are time, money and space.

When building a lab some people tend to run into perceived limitations of their hypervisor. That's to be expected, as for many that hypervisor is just something to get up and running quickly so they can get to work writing code, implementing a backup solution or whatever the workload at hand is all about. The tip here is not to give up too fast.

More recently I'm building a new lab setup simulating different sites. I need to route between these isolated test networks and load balance traffic in a site-redundant manner. The idea was to mimic real life as well as we could. Add to that lab setup an Azure "site" and it's fun all over. It's all based on Hyper-V and Windows Server virtual machines, but some components are not. Windows NLB has had its best days and RRAS is limited in the abilities I need to test. They can and do work fine for certain scenarios, but not for all that I need to test. So I add virtual load balancers, virtual switches with the look and feel of physical ones, and the same for virtual firewalls.

Now in real life you'll be dealing with Link Aggregation Groups, trunking, MLAG, routing, teaming … in short, the tools of the trade when doing networking. One side effect of this is that on a Hyper-V host you quickly run out of physical network ports to work with. That's not a problem; in real life your firewall or load balancer does not have 48 ports either. Often you have 4 to 8 ports at your disposal, sometimes more, and depending on the complexity that's either more than enough or not nearly enough. Trunking and VLANs are the way we deal with this. In the Hyper-V GUI you will not find a way to define a trunk on a vNIC attached to a vSwitch, but it can be done via PowerShell. So please do not reject Hyper-V as not being up to the job. It is! Read about this in my blog post.

People often ask me what virtual network appliances I use for Hyper-V labs. This varies over time, but there are some constants. In the lab I hate wasting time on time-bombed trials, so I avoid those in favor of either fully featured solutions or free open source alternatives. Smart vendors provide the easiest possible access to their solutions. They realize that easy access delivers the ability to learn and test every aspect of the products, which makes a huge difference in the success of their offerings in the real world. When it comes to load balancers I use the KEMP Virtual Load Masters. You can read more about these in projects and lab testing in my blogs about the KEMP (Virtual) Load Master.

As an MVP I got one free license. Together with the ability to restore configurations, I can have a pseudo-permanent redundant load balancing setup. Only building labs for multi-site geo load balancing solutions requires starting from scratch every time. For routing I use VyOS; it works on hardware and on a bunch of hypervisors with x64 virtual machines. When I need the look and feel of a firewall you'll encounter in business I use OPNsense. It supports the synthetic vNICs with the enlightened Hyper-V drivers. Yup, the integration components are there. It doesn't boot from UEFI, so no Generation 2 virtual machine support as of yet.

Another good one is IPFire. This one also does a nice job with the integration components.

I also have a Dell SonicWALL in my home office where I have some ports to play with, but it tends to be leveraged more for the permanent parts of the lab; it's a crucial and permanent component.

SonicWALL NSA 220 Wireless-N Appliance

Find All Virtual Machines With A Duplicate Static MAC Address On A Hyper-V Cluster With PowerShell

During some troubleshooting recently I needed to find all virtual machines with a duplicate static MAC address on a Hyper-V cluster with PowerShell. I didn't feel like doing this via the GUI for obvious reasons. I needed this because, while trying to find the reason why a VM lost connectivity to one of its two NICs, I discovered it had a static MAC address. As no one had a good reason for this VM to have a static MAC address, I stopped the VM, switched that NIC to a dynamic MAC address and rebooted. All was well afterwards.

But I still needed to find out what potentially caused the issue; my guess was a duplicate MAC address (what else?). The biggest candidates for having a duplicate MAC were another VM or VMs. So here's some PowerShell that will list all clustered VMs that have a static MAC address.

Get-ClusterGroup | ? {$_.GroupType -eq 'VirtualMachine'} `
| Get-VM | Get-VMNetworkAdapter | Where-Object {$_.DynamicMacAddressEnabled -eq $False}

Let's elaborate the code a bit and search for the occurrence of duplicate MAC addresses.

#Collect all clustered vNICs that have a static MAC address
$AllNicsWithStaticMAC = Get-ClusterGroup | ? {$_.GroupType -eq 'VirtualMachine'} `
| Get-VM | Get-VMNetworkAdapter | Where-Object {$_.DynamicMacAddressEnabled -eq $False}

#Group by MAC address and keep only the addresses that occur more than once
$AllNicsWithStaticMAC | Group-Object MacAddress | ? {$_.Count -gt 1} | ft * -AutoSize

The result: in our lab simulation we have found a static MAC address that occurs 3 times!

If you have 200 VMs running on that cluster you might not want to look over the list manually. Not that I'm hoping you have 200 VMs with the same MAC address, but just to find the offending virtual machines fast, we adapt the above PowerShell a bit.

#Collect all clustered vNICs that have a static MAC address
$AllNicsWithStaticMAC = Get-ClusterGroup | ? {$_.GroupType -eq 'VirtualMachine'} `
| Get-VM | Get-VMNetworkAdapter | Where-Object {$_.DynamicMacAddressEnabled -eq $False}

if ($AllNicsWithStaticMAC -ne $null)
{
    #List the duplicates grouped by MAC address, with the adapter and VM they belong to
    ($AllNicsWithStaticMAC | Group-Object MacAddress `
    | ? {$_.Count -gt 1}).Group | ft MacAddress, Name, VMName -GroupBy MacAddress -AutoSize
}
else
{
    "No static MAC addresses were found on your cluster"
}

This results in a nice list of each duplicate MAC address, the network adapter it sits on and the virtual machine it belongs to, grouped by (duplicate) MAC address.

The lab demo is a bit fabricated, as I'm not creating duplicate MAC addresses on my lab clusters just for this blog.

I hope this helps some of you when you need to find all virtual machines with a duplicate static MAC address on a Hyper-V cluster with PowerShell. You can adapt the code to only look for dynamic duplicate MAC addresses, or for both static and dynamic ones. You get the gist. Thank you for reading.
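
As a minimal sketch of those adaptations (the $AllNics variable name is just for illustration): flip the filter to catch dynamic MAC addresses, or drop it entirely to check them all.

#Dynamic MAC addresses only
$AllNics = Get-ClusterGroup | ? {$_.GroupType -eq 'VirtualMachine'} `
| Get-VM | Get-VMNetworkAdapter | Where-Object {$_.DynamicMacAddressEnabled -eq $True}

#Both static and dynamic: no filter at all
$AllNics = Get-ClusterGroup | ? {$_.GroupType -eq 'VirtualMachine'} | Get-VM | Get-VMNetworkAdapter

#The duplicate detection itself stays the same
$AllNics | Group-Object MacAddress | ? {$_.Count -gt 1} | ft * -AutoSize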

The Mysterious Case of Infrequent Network Connectivity Issues on 2 Hyper-V VMs Out of 40 Guests

In The Mysterious Case of Infrequent Network Connectivity Issues on 2 Hyper-V VMs Out of 40 Guests I share a troubleshooting experience with you. I was asked if I could possibly take a look at a weird but very infrequent network issue with 2 VMs (W2K12R2) on a cluster (W2K12R2) running over 40 guests. Sure! These 2 virtual machines worked well 98% of the time. About 2% of the time they just fell off the network: sometimes both vNICs, sometimes both VMs. When I asked what that meant, they said "unreachable", but they couldn't find anything wrong, as all other VMs ran fine with the same configuration on the same hosts. They told me there was nothing in the event logs of either the host or the guests to explain any of this. A reboot or two, or even a live migration, sometimes fixed the issue, and normally the monthly patch cycle prevented too many problems with connectivity. Pretty weird! Usually bad firmware, drivers or bad offload feature support can cause such issues, but that would not target just 2 out of 40 VMs that have the same settings.

It was only these 2 VMs, no matter what host they were running on in the cluster. As the vNICs shared the same 2 vSwitches (teamed) with all other VMs that never had issues, I was pretty sure the configuration of the switches, NICs, teams and vSwitches was OK. This was verified for due diligence and it checked out on all hosts as expected. All firmware, drivers and offloads were configured correctly.

I also checked the VLAN settings of the vNICs themselves for those two VMs, compared them to a couple of VMs that had no issues whatsoever, and found them to be identical.

At first everything seemed fine and I was stumped. The event logs, both in the VMs and on the hosts, were squeaky clean. After that exercise I started running some PowerShell cmdlets to take a look at the configuration of the VMs on the hosts. You see, the GUI does not expose all possible configurations and I wanted to look at every configuration option.
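
The VLAN configuration, for example, is easy to pull per VM from the host; a minimal check along these lines:

#Inspect the VLAN configuration of all vNICs on a VM
Get-VMNetworkAdapterVlan -VMName DNS01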

That's when I found it: the vNICs of the 2 offending VMs were in Access mode while the VlanList held a single value, 0 (basically meaning untagged; it's a reserved VLAN for priority tagging and its use is not 100% standard across switch vendors). This just didn't compute. In the GUI we did not see this; there, things looked normal.

You cannot even set this in the GUI; it won't allow you to. But a PowerShell command does let you make this configuration. So maybe that's what happened.

Set-VMNetworkAdapterVlan -VMName DNS01 -Access -VlanId 0

No one knew, nor can I tell you. But I tested to verify that this does run and makes that configuration without any issue. Weird. Anyway, I resolved the issue by running the following command.

Set-VMNetworkAdapterVlan -VMName DNS01 -Untagged
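
A quick check afterwards should show the vNICs in Untagged mode again; the same cmdlet we used to find the problem confirms the fix:

#Confirm the vNICs are now untagged
Get-VMNetworkAdapterVlan -VMName DNS01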

The rare connectivity issue disappeared and all was well in 100% of the cases. That's how The Mysterious Case of Infrequent Network Connectivity Issues on 2 Hyper-V VMs Out of 40 Guests came to a happy end.