Windows Server 2016 TPv4 Hyper-V brings virtual machine configuration version 7

When building a Windows Server 2016 TPv4 Hyper-V cluster this weekend, I noticed that we now have a new version of the virtual machine configuration.

When we migrate virtual machines (rolling cluster upgrade, move to a new cluster or host, import on a new cluster or host) to Windows Server 2016 Hyper-V from Windows Server 2012 R2, the virtual machine’s configuration file isn’t automatically upgraded. In the past it was, which blocked moving back to a previous version of Hyper-V. Now we can move back until we manually update the virtual machine configuration version. Updating blocks going back, but it enables the new virtual machine features. Version 5.0 is the one that’s compatible with both Windows Server 2012 (R2) and Windows Server 2016. Version 6.2 was what we had in TPv3 and could only run on Windows Server 2016. Windows Server 2016 TPv4 Hyper-V brings virtual machine configuration version 7.
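
If you want a quick look at the configuration version of your existing virtual machines before and after a move, a simple PowerShell check on the host will tell you (just a quick peek, nothing more):

#List the configuration version of all VMs on this host
Get-VM | Format-Table Name, Version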

When you have virtual machines that come from Technical Preview v3, and you had already updated their configuration version or created brand new ones there, these will be at version 6.2. Since I do not consider it wise to keep testing on a version from a previous preview, I updated them all to version 7.


The code below grabs all VMs on all cluster nodes (even the non-clustered VMs), shuts them down, updates the configuration version and starts them again. It’s just a quick example.
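
Something along these lines does the job (a rough sketch; the cluster name is just an example, and it shuts everything down at once, so use with care):

#Sketch: grab all VMs on every node of cluster "MyCluster" (clustered or not),
#shut them down, update the configuration version and start them again.
#Requires the FailoverClusters and Hyper-V PowerShell modules.
ForEach ($Node in Get-ClusterNode -Cluster "MyCluster")
{
    ForEach ($VM in Get-VM -ComputerName $Node.Name)
    {
        Stop-VM -VM $VM
        Update-VMVersion -VM $VM -Confirm:$false
        Start-VM -VM $VM
    }
}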


Now do NOT do this to virtual machines with configuration version 5 that you might want to move back to, or import on, a Windows Server 2012 R2 Hyper-V host. But if you know you’ll be testing with the new features, have a blast, like me here on the TPv4 lab cluster.


I’m still looking into which features version 7.0 enables; nested virtualization is probably one of them, I’m guessing. Happy testing!

Remove Lingering Backup Checkpoints from a Hyper-V Virtual Machine

Sometimes things go wrong with a backup job. There are many reasons for this, but that’s not the focus of this blog post. We’re going to show you how to remove lingering backup checkpoints from a Hyper-V virtual machine that were not properly removed after a backup on Windows Server 2012 R2 Hyper-V.

You see a checkpoint with an “odd” icon that looks like this:


You can right click on the checkpoint, but you will not find an option to apply or delete it. This is a recovery snapshot and you’re not supposed to manipulate it manually. It’s there as part of the backup process. You can read more about that in my blog posts Some Insights Into How Windows 2012 R2 Hyper-V Backups Work and What Is AutoRecovery.avhdx all about?.

When we look at the files we see the traces of a Windows Server 2012 R2 Hyper-V backup:


Some people turn to manually merging these checkpoints. I have discussed this process in the blog posts 3 Ways To Deal With Lingering Hyper-V Checkpoints Formerly Known as Snapshots and Manually Merging Hyper-V Checkpoints. But you don’t have to do this! As a matter of fact, you should avoid it if at all possible. The good news is that this can be done via PowerShell.

I’m not convinced that the fact you can’t do it in the GUI is to be considered a bug, as some would suggest. The fact is you’re not supposed to have to do this, so the functionality is not there. I guess this is to protect people from deleting one by mistake when they see one during backups.

Anyway, PowerShell to the rescue!

So we run:

Get-VMSnapshot -ComputerName "MyHyperVHost" -VMName "VMWithLingeringBackupCheckpoint"


This shows us the checkpoint information (note that it does not show you the “XXX-AutoRecovery.avhdx” as a checkpoint).
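
If you want to list just these recovery checkpoints explicitly, you can filter on the snapshot type (same hypothetical host and VM names as above):

#List only the recovery checkpoints left behind by the backup
Get-VMSnapshot -ComputerName "MyHyperVHost" -VMName "VMWithLingeringBackupCheckpoint" -SnapshotType Recovery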


And then we simply run Remove-VMSnapshot

#Just remove the lingering backup checkpoint like you would a normal one via PoSh 
Get-VMSnapshot -ComputerName "MyHyperVHost" -VMName "VMWithLingeringBackupCheckpoint" | Remove-VMSnapshot

You can see the merging process in the GUI:


If you inspect the remaining MyVM-AutoRecovery.avhdx file you’ll see that it points to the parent vhdx. Normally your VM is now already running from the vhdx anyway. If not, you’ll need to deal with it.
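
You can check that parent relationship from PowerShell as well; a minimal sketch, assuming an example path:

#Check which parent the leftover AutoRecovery differencing disk points to
Get-VHD -Path "C:\ClusterStorage\Volume1\MyVM\Virtual Hard Disks\MyVM-AutoRecovery.avhdx" | Format-List Path, VhdType, ParentPath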


During a normal backup that file is deleted when the merge of the redirected AVHDX is done. Here you’ll need to do that manually and be done with it.
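
Once you’ve confirmed the virtual machine no longer references the AutoRecovery file, you can clean it up. Again, just a sketch with example names and paths, so double check before deleting anything:

#Confirm which disks the VM is actually using
Get-VMHardDiskDrive -ComputerName "MyHyperVHost" -VMName "VMWithLingeringBackupCheckpoint"
#Then remove the leftover AutoRecovery file
Remove-Item -Path "C:\ClusterStorage\Volume1\MyVM\Virtual Hard Disks\MyVM-AutoRecovery.avhdx"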

Conclusion: there is no need to dive in and start merging the lingering backup checkpoints manually. Leave that for those scenarios where that’s the only option you have left.

Trunking With Hyper-V Networking

When doing lab work or real life implementations you’ll need to go beyond the basic 101 stuff to build solutions every now and then. This is especially true when using virtual network appliances. Networking means you’ll be dealing with Link Aggregation Groups, Trunking, MLAG, routing, LACP … in short, the tools of the trade when doing networking. In my experience I use trunking in Hyper-V mostly to mimic real world scenarios where trunking is used (firewalls, routers, load balancers). These tend to be limited in usable ports in real life. So even before you run out of physical ports on your Hyper-V host to work with, we leverage trunking to mimic the real life environment. This leads us to trunking with Hyper-V networking.

I for one have used this on 10Gbps ports on both physical and virtual load balancers in the uplink to the switches. As you can imagine, when doing redundant (teamed) cabling with HA load balancers you’re consuming 10Gbps ports, and not all VLANs warrant a dedicated 10Gbps uplink, even if you had ’em.

Trunking & VLAN’s are the way we deal with this in the network hardware world and we can do the same in Hyper-V. In the Hyper-V Manager GUI you will not find a way to define a trunk on an vNIC attached to a vSwitch. But this can be done via PowerShell. So please do not reject Hyper-V as not being up to the job. It is. Let me show you how you can do trunking with Hyper-V networking.

Generally on a clean install I dump the default vNIC. DO NOT do this blindly on an already deployed appliance virtual machine.

#Delete the default network adapter
Remove-VMNetworkAdapter -VMName VLM200-1 -Name "Network Adapter"

I then add the number of ethernet ports I need on my Kemptechnologies virtual Load Master.

#Create the VLM200 ports (4, like its physical counterpart)
For ($Count=0; $Count -le 3; $Count++)
{
    Add-VMNetworkAdapter -VMName VLM200-1 -Name "Eth$Count"
}

A peek at our handiwork via Get-VMNetworkAdapter -VMName VLM200-1 shows our 4 ports.
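
If you prefer a compact overview, something like this will do (just a quick look):

#Show the freshly created ports and the vSwitch they're connected to
Get-VMNetworkAdapter -VMName VLM200-1 | Format-Table Name, SwitchName, MacAddress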


As you can see I like to name my network adapters with a distinctive name. In combination with the switch name it enables me to identify the NICs better. Combine that with a good naming policy inside the VM if possible. In Windows Server 2016 you can hot add and remove vNICs, and the new “Device Naming” functionality (see Hot add/remove of network adapters and enabling device naming in Windows Server Hyper-V) only makes the experience better in relation to uptime and automation.

Now let’s say we use Eth0 for management and Eth1 for the HA heartbeat. That leaves Eth2 and Eth3 for workloads. We could even aggregate these (redundancy, heartbeat). In this demo we’ll configure Eth3 as a trunk with a list of allowed VLANs. We keep the native VLAN ID on 0, as it is by default. Only in specific situations where you have changed this in the network should this be changed.

#Trunk Eth3 and add the required VLAN IDs
Set-VMNetworkAdapterVlan -VMName VLM200-1 -VMNetworkAdapterName "Eth3" -Trunk -AllowedVlanIdList "10,20,30" -NativeVlanId 0

Which delivers us what we need to get our network appliance going.
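
You can verify the trunk from the host side as well, for example:

#Check the VLAN configuration of Eth3
Get-VMNetworkAdapterVlan -VMName VLM200-1 -VMNetworkAdapterName "Eth3"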


In your virtual appliance you can now create VLANs on Eth3. How this shows up depends on the appliance. In this example it’s a Kemp Virtual Load Master. Here we mimic a 4 port Load Master. We’re not doing trunking because we ran out of the maximum supported number of NICs we can add to a virtual machine.


A word of warning: you will not see this configuration in the settings via the GUI. Manipulating the VLAN settings in the GUI will overwrite them without a warning. So be careful with the configuration of your virtual network appliance(s). As an example I’ll touch the VLAN setting of Eth3 and give it VLAN 500.


We now have a look at the VLAN settings of the appliance:


That vNIC is now in Access mode with VLAN 500. Ouch, that will seriously ruin your day in production! Be careful!
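
If that ever happens to you, the fix is simply to reapply the trunk configuration via PowerShell:

#Reapply the trunk settings the GUI just overwrote
Set-VMNetworkAdapterVlan -VMName VLM200-1 -VMNetworkAdapterName "Eth3" -Trunk -AllowedVlanIdList "10,20,30" -NativeVlanId 0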

On top of this, some appliances do not respond well to such misconfigurations on the switch side (both physical and virtual switches). This leads not only to service interruption but could also lead to the inability to manage the appliance, requiring a reboot, etc.

Anyway, so yes, you can do trunking with Hyper-V networking on a vNIC, but this normally only makes sense if you have an appliance running that knows what to do with a trunk, such as a virtual firewall, router or load balancer.

The Mysterious Case of Infrequent Network Connectivity Issues on 2 Hyper-V VMs Out of 40 Guests

In The Mysterious Case of Infrequent Network Connectivity Issues on 2 Hyper-V VMs Out of 40 Guests I share a troubleshooting experience with you. I was asked if I could possibly take a look at a weird, but very infrequent, network issue with 2 VMs (W2K12R2) on a cluster (W2K12R2) running over 40 guests. Sure! These 2 virtual machines worked well 98% of the time. About 2% of the time they just fell off the network, sometimes both vNICs, sometimes both VMs. When I asked what they meant, they said the VMs became unreachable. But they couldn’t find anything wrong, as all other VMs ran fine with the same configuration on the same hosts. They told me there was nothing in the event logs of either the host or the guests to explain any of this. A reboot or two, or even a live migration, sometimes fixed the issue. Normally the monthly patch cycle prevented too many problems with connectivity. Pretty weird! Usually bad firmware, drivers or bad offload feature support can cause issues, but that would not target just 2 out of 40 VMs that have the same settings.

It was only these 2 VMs, no matter what host they were running on in the cluster. As the vNICs shared the same 2 vSwitches (teamed) with all other VMs that never had issues, I was pretty sure the configuration of the switches, NICs, teams and vSwitches was OK. This was verified for due diligence and it checked out on all hosts as expected. All firmware, drivers and offloads were configured correctly.

I also checked the VLAN settings of the vNICs themselves for those two VMs, compared them to a couple of VMs that had no issues whatsoever, and found them to be identical.

At first everything seemed fine and I was stumped. The event logs, both in the VMs and on the hosts, were squeaky clean. After that exercise I started running some PowerShell cmdlets to take a look at the configuration of the VMs on the hosts. You see, the GUI does not expose all possible configurations and I wanted to look at every configuration option. That’s when I found the following:
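
For reference, something along these lines will dump the VLAN configuration of every vNIC on every node so you can spot the odd ones out (the cluster name is just an example):

#Sketch: dump the VLAN settings of all vNICs across the cluster nodes
ForEach ($Node in Get-ClusterNode -Cluster "MyCluster")
{
    ForEach ($VM in Get-VM -ComputerName $Node.Name)
    {
        Get-VMNetworkAdapterVlan -VM $VM
    }
}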


The vNICs for the 2 offending VMs were in Access mode while the VlanList had a single value of 0 (basically meaning untagged; it’s a reserved VLAN for priority tagging and its use is not 100% standard across switch vendors). This just didn’t compute. In the GUI we did not see this; there, things looked normal.


You cannot even set this in the GUI; it won’t allow you to.



But when run as a PowerShell command it allows you to make this configuration. So maybe that’s what happened.

Set-VMNetworkAdapterVlan -VMName DNS01 -Access -VlanId 0

No one knew, nor can I tell you. But I tested to verify that this does run and makes that configuration without any issue. Weird. Anyway, I resolved the issue by running the following command.

Set-VMNetworkAdapterVlan -VMName DNS01 -Untagged
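
A quick check afterwards shows the vNIC back in Untagged mode:

#Verify the vNIC is no longer in access mode with VLAN 0
Get-VMNetworkAdapterVlan -VMName DNS01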


The rare connectivity issue disappeared and all was well in 100% of the cases. That’s how The Mysterious Case of Infrequent Network Connectivity Issues on 2 Hyper-V VMs Out of 40 Guests came to a happy end.