How I Made Server 2012 R2 Love Hyper-V 2025

Introduction

Yes, Windows Server 2012 R2. Me, of all people, the most vocal proponent of keeping your environments up to date, to the point that I barely have a Windows Server 2022 under my care anymore.

So what gives? Some people spun up a brand-new Windows Server 2025 Hyper-V cluster and migrated a truckload of their virtual machines over. I love modern infrastructure, so this all sounded very good until they reached out with a little issue. About a dozen of their virtual machines did not boot properly but dropped into the recovery console instead. My first question was: what is the OS on those virtual machines? When the answer came back as Windows Server 2012 R2, maybe some Windows Server 2016, I had heard all I needed to know to help “fix” this. The real solution is not running those old, out-of-support OS versions anymore, but we can “fix” it so your apps keep running while you upgrade or migrate.

Symptoms

Their older but business-critical Windows Server 2012 R2 VMs (Generation 2, UEFI VMs, no less) did not boot on their shiny new Hyper-V cluster. The migration itself went smoothly, they said, but when they started the virtual machines, the apps did not come up. So they checked the consoles of those virtual machines and saw STOP 0x0000007B: INACCESSIBLE_BOOT_DEVICE errors and recovery consoles. Rebooting did not help at all. This was a solid, reproducible crash loop, exactly at the point where the bootloader should hand off to the kernel and the OS should find its boot disk. If you’ve been in the game for a while, you know that this usually spells one thing: a fundamental storage or bus driver issue. But why now?

The ACPI Identity Crisis

Windows Server 2012 (R2) and Windows Server 2016 are not supported on Windows Server 2025 Hyper-V. Upgrade or migrate before you move them.

This wasn’t just some random corruption. We were looking at a fundamental compatibility issue. To understand why, you need to understand how Hyper-V and the Guest OS communicate during the boot process.

Server 2012 R2 came out in 2013. Hyper-V 2025 is the latest and greatest at the time of writing. In the dozen years between those releases, the “hardware signatures” (hardware IDs, or HWIDs) that Hyper-V presents to a virtual machine have evolved.

In Gen 2 VMs, Windows relies heavily on the ACPI (Advanced Configuration and Power Interface) tables to find its critical components, especially the virtual machine bus (VMBus) and the storage controllers that attach to it.

When 2012 R2 boots, the kernel says, “Okay, ACPI, show me my storage bus.”

The Hyper-V 2025 host says, “Here is your storage bus, its ID is MSFT1000.”

The 2012 R2 kernel looks in its driver database and goes, “MSFT1000? I have no idea who that is. I’m looking for VMBus or nothing.”

Boom. It can’t see the bus, it can’t load the disk driver, and it can’t find its own boot disk, so it suffers an INACCESSIBLE_BOOT_DEVICE crash. The guest simply has no clue what to do.
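If you are curious which identities a given guest already knows about, you can list the ACPI hardware IDs in its registry. A quick sketch (run in an elevated PowerShell session inside a guest that still boots, or adjust the path to point at a loaded offline hive):

# List the ACPI device identities this Windows instance has enumerated so far.
# On a 2012 R2 guest that has only ever run on older hosts, you will typically
# see VMBUS but no MSFT1000/MSFT1002 entries.
Get-ChildItem 'HKLM:\SYSTEM\CurrentControlSet\Enum\ACPI' |
    Select-Object -ExpandProperty PSChildName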

The “fix” is in some offline registry editing

Since the VM was in a crash loop and couldn’t boot to Windows, we had to perform some offline registry surgery. Luckily for them, the virtual machines could boot into their recovery environments, so they did not have to boot from an ISO to reach a command prompt and access the offline system hive.

We used a combination of reg load to “mount” the system registry from the VM’s disk onto our repair environment, and then some strategic reg copy commands to “spoof” the IDs.

Step by step

  1. Mount the hive:
    Code snippet
    reg load HKLM\TempHive c:\Windows\system32\config\SYSTEM

    (This assumes c: is where the VM’s Windows volume is mounted.)
  2. Map VMBus to MSFT1000:
    Code snippet
    reg copy HKLM\TempHive\ControlSet001\Enum\ACPI\VMBus HKLM\TempHive\ControlSet001\Enum\ACPI\MSFT1000 /s

    This is the core fix. We are telling the 2012 R2 system: “Look, if you ever see a device calling itself MSFT1000, don’t ignore it. Duplicate every single setting, driver binding, and service permission you have for ‘VMBus’ and apply it to this new ‘MSFT1000’ identity.” This essentially links the modern host’s ID to the older OS’s native VMBus driver stack.
  3. Map the Generation Counter to MSFT1002:
    Code snippet
    reg copy HKLM\TempHive\ControlSet001\Enum\ACPI\Hyper_V_Gen_Counter_V1 HKLM\TempHive\ControlSet001\Enum\ACPI\MSFT1002 /s

    This maps the older Hyper_V_Gen_Counter_V1 identity (used for snapshots and consistency) to its modern equivalent on the 2025 host, MSFT1002. This is crucial for making sure integration services load properly.

After these commands, we ran reg unload HKLM\TempHive to save our changes. We exited the recovery environment, rebooted, and… Bingo. The Server 2012 R2 boot screen appeared, and the login prompt followed shortly after.

This works because Server 2012 R2 has the necessary VMBus and storage drivers; it just doesn’t know they are compatible with the hardware IDs reported by Hyper-V 2025. This registry trick just creates that necessary driver-to-hardware binding.
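If you want to double-check the result after that first successful boot, the cloned key should carry the same service binding as the original VMBus entries. A sanity-check sketch from inside the guest (the instance key names under MSFT1000 differ per VM, and I am assuming the clone worked as described above):

# Each enumerated instance under the spoofed MSFT1000 ID should bind to the
# native vmbus service, just like the original VMBus entries did.
Get-ChildItem 'HKLM:\SYSTEM\CurrentControlSet\Enum\ACPI\MSFT1000' |
    ForEach-Object { (Get-ItemProperty -Path $_.PSPath).Service }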

But remember that this is an “unsupported” hack! While this gets the VM booting, running 2012 R2 on newer hosts often means features might be degraded. Microsoft dropped official support for 2012 R2 guests on modern hosts a while ago. Windows Server 2016 RTM without modern patching suffers from the same issue, by the way.

Below is a complete script you can save as a .cmd file and run in your recovery environment to fix a virtual machine with this issue.

@echo off
echo.
echo ============================================================
echo   Hyper-V 2025 ACPI Fix for Windows Server 2012 R2 / 2016 RTM
echo   - Adds MSFT1000 (VMBus) and MSFT1002 (GenCounter)
echo   - Auto-detects ControlSet
echo   - Creates SYSTEM hive backup
echo ============================================================
echo.

:: --- Step 1: Detect Windows drive ---
echo Detecting Windows installation drive...
set WINDRV=

:: X: is skipped on purpose: in WinRE that is the recovery environment's own
:: RAM drive, which also contains a \Windows\System32\Config\SYSTEM hive.
for %%d in (C D E F G H I J K L M N O P Q R S T U V W Y Z) do (
    if exist %%d:\Windows\System32\Config\SYSTEM (
        set WINDRV=%%d:
    )
)

if "%WINDRV%"=="" (
    echo ERROR: Could not find Windows installation drive.
    echo Aborting.
    exit /b 1
)

echo Windows installation found on %WINDRV%
echo.

:: --- Step 2: Backup SYSTEM hive ---
echo Creating SYSTEM hive backup...
copy "%WINDRV%\Windows\System32\Config\SYSTEM" "%WINDRV%\Windows\System32\Config\SYSTEM.bak"
if errorlevel 1 (
    echo ERROR: Backup failed. Aborting.
    exit /b 1
)
echo Backup created: SYSTEM.bak
echo.

:: --- Step 3: Load SYSTEM hive ---
echo Loading SYSTEM hive into HKLM\TempHive...
reg load HKLM\TempHive "%WINDRV%\Windows\System32\Config\SYSTEM"
if errorlevel 1 (
    echo ERROR: Failed to load SYSTEM hive. Aborting.
    exit /b 1
)
echo Hive loaded.
echo.

:: --- Step 4: Detect active ControlSet ---
echo Detecting active ControlSet...
:: reg query returns the Current value as hex (e.g. 0x1); strip the 0x prefix.
set CURRENT=
for /f "tokens=3" %%a in ('reg query HKLM\TempHive\Select /v Current') do set CURRENT=%%a

if not defined CURRENT (
    echo ERROR: Could not determine active ControlSet.
    reg unload HKLM\TempHive
    exit /b 1
)

set CS=ControlSet00%CURRENT:~2%

echo Active ControlSet: %CS%
echo.

:: --- Step 5: Apply ACPI fixes ---
echo Applying ACPI fixes...

echo - Cloning VMBus -> MSFT1000
reg copy HKLM\TempHive\%CS%\Enum\ACPI\VMBus HKLM\TempHive\%CS%\Enum\ACPI\MSFT1000 /s /f

echo - Cloning Hyper_V_Gen_Counter_V1 -> MSFT1002
reg copy HKLM\TempHive\%CS%\Enum\ACPI\Hyper_V_Gen_Counter_V1 HKLM\TempHive\%CS%\Enum\ACPI\MSFT1002 /s /f

echo ACPI fixes applied.
echo.

:: --- Step 6: Unload hive ---
echo Unloading SYSTEM hive...
reg unload HKLM\TempHive
echo Hive unloaded.
echo.

echo ============================================================
echo   FIX COMPLETE
echo   You may now reboot the VM.
echo ============================================================
echo.
pause

Better to do this proactively; I have a PowerShell solution on GitHub that also includes the above .cmd script. The Invoke-TestAndFixHyperV2025ReadinessForLegacyVMs.ps1 script can handle virtual machines that are online, before you move them to Hyper-V 2025. https://github.com/WorkingHardInIT/Invoke-TestAndFixHyperV2025ReadinessForLegacyVMs

But I cannot migrate or upgrade yet!

I call bullshit on these in 99% of cases. And if it is not bullshit, you really need to get your act together and work on fixing your apps/vendors so you never end up in such a mess in the first place.

Conclusion

Tech debt. You know, that thing every IT manager and department has been preventing or solving for the last 30 years, yet it is still very much around. Despite all that ITIL, risk, and change management, or maybe even due to all that talk and very little action.

Sometimes, saving the day isn’t about deploying the latest and greatest tech; it’s about diving into the deepest, darkest corners of the OS and tricking it into working just one more time. There are no guarantees, and this is a ticking time bomb.

I bought these people some time. Now they need to get working! I also kindly suggested they should read their backup vendors’ support statements 😉.

Setting a static MAC address on a guest NIC team in Hyper-V

Introduction

Before we talk about setting a static MAC address on a guest NIC team in Hyper-V, we go back to Ubuntu Linux. Do you remember my blog post about configuring an interface bond in a Ubuntu Hyper-V guest? If not, please read it, as what I did there got me thinking about setting a static MAC address on a guest NIC team in Hyper-V.

Ubuntu network bond

As you have read by now in the blog post I linked to above, we need to enable MAC spoofing on both vNIC members of an interface bond in a Ubuntu virtual machine on Hyper-V. Only then will you have network connectivity and be able to get a DHCP address. On Ubuntu (or Linux in general), the bond interface gets a generated MAC address assigned; it does not take one of the MAC addresses of the member vNICs. That is why we need MAC spoofing enabled on both member vNICs in the Hyper-V settings for this to work! In a Windows guest, you will find that the LBFO team gets one of the MAC addresses of its member vNICs assigned, so this does not require MAC spoofing. During failover, it will swap to the other one.
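You can see this for yourself inside a Windows guest. A quick sketch, assuming a team named GUEST-TEAM whose team NIC carries the default name (the team's own name):

# Show the team NIC's MAC address next to those of its members; the team
# reuses one of the member MACs rather than generating its own.
$Team = Get-NetLbfoTeam -Name 'GUEST-TEAM'
$NicNames = @($Team.Name) + (Get-NetLbfoTeamMember -Team $Team.Name).Name
Get-NetAdapter -Name $NicNames | Select-Object Name, MacAddress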

Setting a static MAC address on a guest NIC team in Hyper-V

In Ubuntu, you can set a chosen static MAC address on a bond and on the member interfaces inside the guest operating system. Would we be able to do the same with a NIC team in a Windows Server guest virtual machine? Well, yes! It sounds like a dirty hack inspired by Linux bonding, which might be way beyond anything resembling a supported configuration. But, if it is allowed for Linux, why not leverage the same technique in Windows?

Configuration walkthrough

We use a mix of MAC address spoofing on the member vNICs, with “enable this network adapter to be part of a team in the guest operating system” checked (not actually needed in this case), and a hardcoded MAC address on the team NIC and both member NICs inside the virtual machine. The same MAC address!

The team interface and its members all get the same static MAC address in the guest

First, note the format of the MAC address: no dashes, dots, or colons. Also, that is a lot of clicking. Let’s try to do this with PowerShell. Using Set-NetAdapter throws an error due to the fact that it detects the duplicate MAC address. It protects you against what it thinks is a bad idea.

$TeamName = 'GUEST-TEAM'
Set-NetAdapter -Name $TeamName -MacAddress "14-52-AC-25-DF-74"
ForEach ($MemberNicName in (Get-NetLbfoTeamMember -Team $TeamName).Name){
    #Get-NetAdapter -Name $MemberNicName | Format-Table
    Set-NetAdapter -Name $MemberNicName -MacAddress "14-52-AC-25-DF-74"
}

Set-NetAdapter : The network address 1452AC25DF74 is already used on a network adapter with the name 'Guest-team-member-01'
At line:2 char:1
+ Set-NetAdapter -Name $TeamName -MacAddress "14-52-AC-25-DF-74"
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidArgument: (MSFT_NetAdapter...wisetech.corp"):ROOT/StandardCimv2/MSFT_NetAdapter) [Set-NetAdapter], CimException
    + FullyQualifiedErrorId : Windows System Error 87,Set-NetAdapter
Set-NetAdapter : The network address 1452AC25DF74 is already used on a network adapter with the name 'Guest-team-member-01'
At line:5 char:5
+     Set-NetAdapter -Name $MemberNicName -MacAddress "14-52-AC-25-DF- ...
+     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidArgument: (MSFT_NetAdapter...wisetech.corp"):ROOT/StandardCimv2/MSFT_NetAdapter) [Set-NetAdapter], CimException
    + FullyQualifiedErrorId : Windows System Error 87,Set-NetAdapter

You need to use Set-NetAdapterAdvancedProperty. Mind you, the MAC address property for the team is called “MAC Address” and for the team member NICs “Network Address”, just like in the GUI. Use the following code in the guest virtual machine.

$Team = Get-NetLbfoTeam -Name 'GUEST-TEAM'
$MACAddress = "1452AC25DF74"
$TeamName = $Team.Name
#Get-NetAdapterAdvancedProperty -Name $TeamName
Set-NetAdapterAdvancedProperty -Name $TeamName -DisplayName 'MAC Address' -DisplayValue $MACAddress

$TeamMemberNicNames = (Get-NetLbfoTeamMember -Team $TeamName).Name
foreach ($TeamMember in $TeamMemberNicNames){
    #Get-NetAdapterAdvancedProperty -Name $TeamMember
    Set-NetAdapterAdvancedProperty -Name $TeamMember -DisplayName 'Network Address' -DisplayValue $MACAddress
}

Let’s check our handiwork with PowerShell.

Verify that the team interface and its members all have the same static MAC address in the guest

Last but not least, leave the dynamically assigned MAC addresses on the vNIC team members in the Hyper-V settings, but do enable MAC spoofing.

Enable MAC address spoofing

Borrowing a trick from Linux for setting a static MAC address on a guest NIC team in Hyper-V

With this setup, we do not need separate virtual switches for each member vNIC for failover to work, but it is still very much advised if you want real failover. First, it sounds filthy, dirty, and rotten, but for lab and demo purposes, go on, be a devil. Secondly, can you use this in production? Yes, you can. Just mind the MAC addresses you assign to avoid conflicts. Now you can tie your backward software license key that depends on a fixed MAC address to a Windows LBFO team in a Hyper-V virtual machine. Why? Because we can. Finally, I would perhaps have to say that you should not do it, but Linux does, and so can Windows!

Configuring an interface bond in a Ubuntu Hyper-V guest

Introduction

In this post, we take a look at configuring an interface bond in a Ubuntu Hyper-V guest. But first, a quick word about NIC teaming and Hyper-V. In real life, teaming is most often done on physical hardware. But in the lab, or for some edge production cases, you might want to use it in virtual machines. The use case here is virtual machines used for testing and knowledge transfer. We are teaching about creating Veeam Backup & Replication hardened repositories with XFS and immutability. In that lab, we are emulating a NIC team on hardware servers.

When you need redundant, highly available networking for your Hyper-V guests, you normally create a NIC team on the host. You then use that NIC team to make your vSwitch. You can use a traditional LBFO team (deprecated) or a SET switch. The latter is the current technology and the way forward. But in this lab scenario, I am using LBFO, the native Windows NIC teaming.
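For reference, both host-side options look like this in PowerShell. A sketch; the physical NIC and switch names are placeholders:

# Legacy LBFO team plus a classic vSwitch on top (deprecated, used in this lab):
New-NetLbfoTeam -Name 'HostTeam' -TeamMembers 'NIC1', 'NIC2' -TeamingMode SwitchIndependent
New-VMSwitch -Name 'LabSwitch' -NetAdapterName 'HostTeam' -AllowManagementOS $true

# Current approach: a Switch Embedded Teaming (SET) switch on the physical NICs.
New-VMSwitch -Name 'SETSwitch' -NetAdapterName 'NIC1', 'NIC2' -EnableEmbeddedTeaming $true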

99.9% of all use cases will use teaming on the Hyper-V host

Host teaming provides bandwidth aggregation, redundancy, and failover. Typically, in 99.99% of cases, you do not mess around with NIC teaming in the guest. Below we see a figure showing guest teaming. You need two physical NICs for genuine redundancy, each with its separate virtual switch, uplinked to separate physical switches. Beware that only switch independent teaming is supported in the guest OS, so configure the switches and switch ports accordingly.

Hyper-V in-guest NIC teaming

In-guest teaming is rarely used for production workloads, bar some exceptions with SR-IOV, but that is another discussion. However, you might have a valid reason to use NIC teaming for lab work, testing, documenting configurations, teaching, etc. Luckily, that is easy to do. Hyper-V has a setting for your vNICs, enabling them to be functional members of a NIC team in a Windows guest OS. As long as that OS supports native teaming. That is the case for Windows Server 2012 and later.

NIC teaming inside a Hyper-V Guest

For each vNIC member of the NIC team in the guest, you must put a checkmark next to “enable this network adapter to be part of a team in the guest operating system”; there is nothing more to it. The big caveat here is that each member must reside on a different external vSwitch for failover to work correctly. Otherwise, you will see a “The virtual switch lacks external connectivity” error on the remaining vNIC when failing over, and packet loss.
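That checkbox corresponds to the vNIC's AllowTeaming setting, so you can script it from the host as well. A sketch with hypothetical VM and adapter names:

# Allow both member vNICs to participate in a NIC team inside the guest OS.
Get-VMNetworkAdapter -VMName 'LabVM' |
    Where-Object Name -in 'Team-Member-01', 'Team-Member-02' |
    Set-VMNetworkAdapter -AllowTeaming On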

Enable NIC teaming on the vNICs that are going to be team members in the Hyper-V settings

There is nothing more you need to make it work perfectly in a Windows guest VM. As you can see in the image below, both my LAN NIC and the team NIC get an address from the DHCP server.

Functional team in the virtual machine. Do test failover to make sure you got it right.

That’s great. But sometimes, I need to have a NIC team inside a Linux guest virtual machine. For example, recently, on Ubuntu 20.04, I went through my typical motions to get in-guest NIC teaming, or bonding in Linux speak. But, much to my surprise, I did not get an IP address from my DHCP server on my Ubuntu 20.04 guest bond. So, what could be the cause?

Configuring an interface bond in a Ubuntu Hyper-V guest

In Ubuntu, we use netplan to configure our networking, and in the image below you can see a sample configuration.

A minimal bond configuration in Ubuntu

I have created a bond using eth0 and eth1, and we should get an IP address from DHCP. The bonding mode is balance-rr. But why am I not getting an IP address? I did check the option “Enable this network adapter to be part of a team in the guest operating system” on both member vNICs.
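For those who cannot make out the screenshot, a minimal netplan bond along those lines looks like this (a sketch, assuming eth0/eth1 and the balance-rr mode mentioned above):

network:
  version: 2
  ethernets:
    eth0:
      dhcp4: no
    eth1:
      dhcp4: no
  bonds:
    bond0:
      interfaces: [eth0, eth1]
      parameters:
        mode: balance-rr
      dhcp4: yes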

Well, let’s look at the NIC interfaces and the bond. There we see something exciting.

Note that the bond and its member interfaces have the same MAC address, which does not come from the Hyper-V host pool

Note that the bond has a MAC address that is the same as both member interfaces. Also, note that this MAC address does not come from the Hyper-V host MAC address pool and is not what Hyper-V assigned to the vNICs, as you can see in the image below! That is the big secret.

With MAC addresses unknown to the hypervisor, this smells of something that requires MAC spoofing, doesn’t it? So, I enabled it, and guess what? Bingo!

So what is the difference with Windows when configuring an interface bond in a Ubuntu Hyper-V guest?

The difference with Windows is that an interface bond in a Ubuntu Hyper-V guest requires MAC address spoofing. You have to enable MAC spoofing on both vNIC members of the Ubuntu virtual machine bond. The moment you do that, you will see you get a DHCP address on the bond and get network connectivity. But why is this needed? In Ubuntu (or Linux in general), the bond interface and its members have a generated MAC address assigned. It does not take one of the MAC addresses of the member vNICs. So, we need MAC spoofing enabled on both member vNICs in the Hyper-V settings for this to work! In a Windows guest, the LBFO team gets one of the MAC addresses of its member vNICs assigned. As such, this does not require MAC spoofing.

With Ubuntu (Linux), you don’t even have to check “enable this network adapter to be part of a team in the guest operating system” on the member vNICs. Note that a guest Linux bond does not need every member interface on a separate vSwitch for failover to work, not even if you enable “enable this network adapter to be part of a team in the guest operating system.” However, a single vSwitch is still ill-advised when you want real redundancy and failover.

Virtual switch QoS mode during migrations

Introduction

In Shared nothing live migration with a virtual switch change and VLAN ID configuration, I published a sample script. The script works well, but there are two areas of improvement. The first one I covered in Checkpoint references a non-existent virtual switch. This post is about the second one: I also need to check the virtual switch QoS mode during migrations. A couple of the virtual machines on the source nodes had absolute minimum and/or maximum bandwidth values set. On the target nodes, all the virtual switches are created by PowerShell. This defaults to weight mode for QoS, which is the more sensible option, albeit not always the easiest or most practical one for people to use.

Virtual switch QoS mode during migrations

First, a quick recap of what we are doing. The challenge was to shared nothing live migrate virtual machines to a host with different virtual switch names and VLAN IDs. We did so by adding dummy virtual switches to the target host. This made shared nothing live migration possible. On arrival of the virtual machine on the target host, we immediately connect the virtual network adapters to the final virtual switch and set the correct VLAN IDs. That works very well. You drop 1 or at most 2 pings; this is as good as it gets.

This goes wrong under the following conditions:

  • The source virtual switch has QoS mode absolute.
  • A virtual network adapter connected to the source virtual switch has MinimumBandwidthAbsolute and/or MaximumBandwidth set.
  • The target virtual switch has QoS mode weighted.

This will cause connectivity loss, as you cannot set absolute values on a virtual network adapter attached to a weighted virtual switch. So connecting the virtual network adapter to the new virtual switch just fails, and you lose connectivity. Remember that the virtual machine is connected to a dummy virtual switch just to make the live migration work, and we need to swap it over immediately. The VLAN ID does get set correctly, actually. Let’s deal with this.
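A quick pre-flight check makes the mismatch visible before you move anything. A sketch, using the node names from the script below:

# Compare the QoS (bandwidth reservation) mode of the virtual switches on the
# source and target nodes before migrating.
Get-VMSwitch -ComputerName 'NODE-A', 'ZULU' |
    Select-Object ComputerName, Name, BandwidthReservationMode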

Steps to fix this issue

First of all, we adapt the script to check the QoS mode on the source virtual switches. If it is set to absolute we know we need to check for any settings of MinimumBandwidthAbsolute and MaximumBandwidth on the virtual adapters connected to those virtual switches. These changes are highlighted in the demo code below.

Secondly, we adapt the script to check every virtual network adapter for its bandwidth management settings. If we find configured MinimumBandwidthAbsolute and MaximumBandwidth values we set these to 0 and as such disable the bandwidth settings. This makes sure that connecting the virtual network adapters to the new virtual switch with QoS mode weighted will succeed. These changes are highlighted in the demo code below.

Finally, the complete script

#The source Hyper-V host
$SourceNode = 'NODE-A'
#The LUN where you want to storage migrate your VMs away from
$SourceRootPath = "C:\ClusterStorage\Volume1*"

#The target Hyper-V host
$TargetNode = 'ZULU'
#The storage path where you want to storage migrate your VMs to
$TargetRootPath = "C:\ClusterStorage\Volume1"

$OldVirtualSwitch01 = 'vSwitch-VLAN500'
$OldVirtualSwitch02 = 'vSwitch-VLAN600'
$NewVirtualSwitch = 'ConvergedVirtualSwitch'
$VlanId01 = 500
$VlanId02 = 600
  
if ((Get-VMSwitch -Name $OldVirtualSwitch01).BandwidthReservationMode -eq 'Absolute') {
    $OldVirtualSwitch01QoSMode = 'Absolute'
}
if ((Get-VMSwitch -Name $OldVirtualSwitch02).BandwidthReservationMode -eq 'Absolute') {
    $OldVirtualSwitch02QoSMode = 'Absolute'
}
    
#Grab all the VMs that have virtual disks on the source CSV - WARNING: for W2K12 you'll need to loop through all cluster nodes.
$AllVMsOnRootPath = Get-VM -ComputerName $SourceNode | Where-Object { $_.HardDrives.Path -like $SourceRootPath }

#We loop through all VMs we find on our SourceRootPath
ForEach ($VM in $AllVMsOnRootPath) {
    #We generate the final VM destination path
    $TargetVMPath = $TargetRootPath + "\" + ($VM.Name).ToUpper()
    #Grab the VM name
    $VMName = $VM.Name
    $VM.VMid
    $VMName

    #If the VM is still clustered, remove it from the cluster, as shared nothing live migration will otherwise fail.
    if ($VM.IsClustered -eq $True) {
        Write-Host -ForegroundColor Magenta $VM.Name "is clustered and is being removed from the cluster"
        Remove-ClusterGroup -VMId $VM.VMid -Force -RemoveResources
        Do { Start-Sleep -Seconds 1 } While ($VM.IsClustered -eq $True)
        Write-Host -ForegroundColor Yellow $VM.Name "has been removed from the cluster"
    }
    #If the VM has checkpoints, notify the user of the script, as this will cause issues after switching to the new virtual
    #switch on the target node. Live migration will fail between cluster nodes if the checkpoints reference 1 or more
    #non-existent virtual switches. These must be removed prior to or after completing the shared nothing migration.
    #The script does this after the migration automatically, not before, as I want the VM to be untouched if the shared nothing
    #migration fails.

    $checkpoints = Get-VMCheckpoint -VMName $VM.Name

    if ($Null -ne $checkpoints) {
        Write-Host -ForegroundColor Yellow "This VM has checkpoints"
        Write-Host -ForegroundColor Yellow "This VM will be migrated to the new host"
        Write-Host -ForegroundColor Yellow "Only after a successful migration will ALL the checkpoints be removed"
    }
    
    #Do the actual storage migration of the VM; $TargetVMPath gets the default subfolder structure
    #for the virtual machine config, snapshots, smart paging & virtual hard disk files.
    Move-VM -Name $VMName -ComputerName $VM.ComputerName -IncludeStorage -DestinationStoragePath $TargetVMPath -DestinationHost $TargetNode
    
    $MovedVM = Get-VM -ComputerName $TargetNode -Name $VMName

    $vNICsOnOldvSwitch01 = Get-VMNetworkAdapter -ComputerName $TargetNode -VMName $MovedVM.VMName | Where-Object SwitchName -eq $OldVirtualSwitch01
    if ($Null -ne $vNICsOnOldvSwitch01) {
        foreach ($VMNetworkAdapter in $vNICsOnOldvSwitch01) {
            if ($OldVirtualSwitch01QoSMode -eq 'Absolute') {
                if ($VMNetworkAdapter.BandwidthSetting.MaximumBandwidth -gt 0) {
                    Write-Host -ForegroundColor Cyan "Network adapter $($VMNetworkAdapter.Name) of VM $VMName MaximumBandwidth will be reset to 0."
                    Set-VMNetworkAdapter -Name $VMNetworkAdapter.Name -VMName $MovedVM.Name -ComputerName $TargetNode -MaximumBandwidth 0
                }
                if ($VMNetworkAdapter.BandwidthSetting.MinimumBandwidthAbsolute -gt 0) {
                    Write-Host -ForegroundColor Cyan "Network adapter $($VMNetworkAdapter.Name) of VM $VMName MinimumBandwidthAbsolute will be reset to 0."
                    Set-VMNetworkAdapter -Name $VMNetworkAdapter.Name -VMName $MovedVM.Name -ComputerName $TargetNode -MinimumBandwidthAbsolute 0
                }
            }

            Write-Host 'Moving to correct vSwitch'
            Connect-VMNetworkAdapter -VMNetworkAdapter $VMNetworkAdapter -SwitchName $NewVirtualSwitch
            Write-Host "Setting VLAN $VlanId01"
            Set-VMNetworkAdapterVlan -VMNetworkAdapter $VMNetworkAdapter -Access -VlanId $VlanId01
        }
    }

    $vNICsOnOldvSwitch02 = Get-VMNetworkAdapter -ComputerName $TargetNode -VMName $MovedVM.VMName | Where-Object SwitchName -eq $OldVirtualSwitch02
    if ($Null -ne $vNICsOnOldvSwitch02) {
        foreach ($VMNetworkAdapter in $vNICsOnOldvSwitch02) {
            if ($OldVirtualSwitch02QoSMode -eq 'Absolute') {
                if ($VMNetworkAdapter.BandwidthSetting.MaximumBandwidth -gt 0) {
                    Write-Host -ForegroundColor Cyan "Network adapter $($VMNetworkAdapter.Name) of VM $VMName MaximumBandwidth will be reset to 0."
                    Set-VMNetworkAdapter -Name $VMNetworkAdapter.Name -VMName $MovedVM.Name -ComputerName $TargetNode -MaximumBandwidth 0
                }
                if ($VMNetworkAdapter.BandwidthSetting.MinimumBandwidthAbsolute -gt 0) {
                    Write-Host -ForegroundColor Cyan "Network adapter $($VMNetworkAdapter.Name) of VM $VMName MinimumBandwidthAbsolute will be reset to 0."
                    Set-VMNetworkAdapter -Name $VMNetworkAdapter.Name -VMName $MovedVM.Name -ComputerName $TargetNode -MinimumBandwidthAbsolute 0
                }
            }
            Write-Host 'Moving to correct vSwitch'
            Connect-VMNetworkAdapter -VMNetworkAdapter $VMNetworkAdapter -SwitchName $NewVirtualSwitch
            Write-Host "Setting VLAN $VlanId02"
            Set-VMNetworkAdapterVlan -VMNetworkAdapter $VMNetworkAdapter -Access -VlanId $VlanId02
        }
    }

    #If the VM has checkpoints, this is when we remove them.
    $checkpoints = Get-VMCheckpoint -ComputerName $TargetNode -VMName $MovedVM.VMName

    if ($Null -ne $checkpoints) {
        Write-Host -ForegroundColor Yellow "This VM has checkpoints and they will ALL be removed"
        $checkpoints | Remove-VMCheckpoint
    }
}

Below is the output of a VM where we had to change the switch name, enable a VLAN ID, deal with absolute QoS settings and remove checkpoints. All this without causing downtime. Nor did we change the original virtual machine in case the shared nothing migration fails.

Some observations

The fact that we are using PowerShell is great. You can only set weighted bandwidth limits via PowerShell. The GUI only handles absolute values and will throw an error when you try to use it while the virtual switch is configured as weighted.

This means you can embed setting the weights in your script if you so desire. If you do, read up on how to handle this best. Trying to juggle the weight settings to total 100 in a dynamic environment is a bit of a challenge. So use the default flow pool and keep the number of virtual network adapters with unique settings to a minimum.
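If you do want weights afterward, it is one line per virtual network adapter. A sketch with hypothetical names and values:

# Give a virtual network adapter a relative minimum bandwidth weight (0-100) on a
# weighted virtual switch; adapters without a weight share the default flow pool.
Set-VMNetworkAdapter -VMName 'SomeVM' -ComputerName 'ZULU' -MinimumBandwidthWeight 20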

Conclusion

To avoid downtime, we removed all the configured minimum and maximum bandwidth settings on any virtual network adapter. By doing so, we ensured that the swap to the new virtual switch right after the successful shared nothing live migration would succeed. If you want, you can set weights on the virtual network adapters afterward. But as the bandwidth on these new hosts is now a redundant 25 Gbps, the need was no longer there. As a result, we just left them without any. This can always be configured later if it turns out to be needed.

Warning: this is a demo script. It lacks error handling and logging. It can also contain mistakes. But hey, you get it for free to adapt and use. Test and adapt this in a lab. You are responsible for what you do in your environments. Running scripts downloaded from the internet without any validation makes you a certified nut case. That is not my wrongdoing.

I hope this helps some of you. Thanks for reading.