Set Max Concurrent Tasks in Veeam with PowerShell

In this blog post, I’ll look at how to set the Max Concurrent Tasks in Veeam with PowerShell. When configuring your Veeam backup environment for the best possible backup performance, there are a lot of settings to tweak. The defaults do a good job of getting you going quickly, but when you have more resources it pays to optimize. One of the things to optimize is Max Concurrent Tasks.

NOTE: all PowerShell here was tested against VBR v10a

Where to set max concurrent tasks or task limits

There are actually 4 places (2 specific to Hyper-V) where you can set this in Veeam for a Hyper-V environment.

  1. Off-host proxy
  2. On-host proxy
  3. File Share Proxy (NEW in V10)
  4. Repository or SOBR extent

Also see https://helpcenter.veeam.com/docs/backup/hyperv/limiting_tasks.html?ver=100


Now, let’s dive into those a bit and show the PowerShell to get it configured.

Configuring the proxies

When configuring the on-host or off-host proxies, the max concurrent tasks are based on virtual disks. Let’s look at some examples. 4 virtual machines with a single virtual disk each consume 4 concurrent tasks. A single virtual machine with 4 virtual disks also consumes 4 concurrent tasks. 2 virtual machines with 2 virtual disks each consume, you guessed it, 4 concurrent tasks.

Note that it doesn’t matter whether these VMs are in a single job or in multiple jobs. The limits are set at the proxy level, so what counts is the sum of all virtual disks of the VMs in all concurrently running backup jobs. Once you hit the limit, the remaining virtual disks (which might translate into complete VMs) will be pending.
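
To get a feel for how many tasks a given set of VMs would consume, here is a minimal sketch using the standard Hyper-V cmdlets. The VM names are placeholders for your own environment.

#Every virtual disk is one proxy task, so sum the disks of the VMs you expect
#to be backed up concurrently. The VM names below are hypothetical examples.
$VMNames = 'DC01', 'SQL01', 'FILE01'
$TotalTasks = 0
foreach ($VMName in $VMNames) {
    $DiskCount = (Get-VMHardDiskDrive -VMName $VMName | Measure-Object).Count
    Write-Host "$VMName has $DiskCount virtual disk(s), so it consumes $DiskCount task(s)"
    $TotalTasks += $DiskCount
}
Write-Host "Backing up these VMs concurrently would consume $TotalTasks proxy tasks"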

Set the max concurrent tasks for on-host proxies

#We grab the Hyper-V on-host backup proxies. Note this code does not grab
#any other type of proxies. We set the MaxTasksCount and report back
$MaxTaskCountValueToSet = 12
$HvProxies = [Veeam.Backup.Core.CHvProxy]::GetAll()
$HvProxies.Count
Foreach ($Proxy in $HvProxies) {
    $HyperVOnHostProxy = $proxy.Host.Name
    $MaxTaskCount = $proxy.MaxTasksCount
    Write-Host "The on-host Hyper-V proxy $HyperVOnHostProxy has a concurrent task limit of $MaxTaskCount" -ForegroundColor Yellow
    $options = $Proxy.Options
    $options.MaxTasksCount = $MaxTaskCountValueToSet 
    $Proxy.SetOptions($options)
}

#Report the changes
$HvProxies = [Veeam.Backup.Core.CHvProxy]::GetAll()
Foreach ($Proxy in $HvProxies) {
    $HyperVOnHostProxy = $proxy.Host.Name
    $MaxTaskCount = $proxy.MaxTasksCount
    Write-Host "The on-host Hyper-V proxy $HyperVOnHostProxy has a concurrent task limit of $MaxTaskCount" -ForegroundColor Green
}

Set the max concurrent tasks for off-host proxies

#We grab the Hyper-V off-host backup proxies. Note this code does not grab
#any other type of proxies. We set the MaxTasksCount and report back
$MaxTaskCountValueToSet = 6
$HvOffHostProxies = Get-VBRHvProxy
foreach ($OffhostProxy in $HvOffHostProxies) {
    $HvOffHostProxyName = $OffhostProxy.Name
    $MaxTaskCount = $OffhostProxy.MaxTasksCount
    Write-Host "The on-host Hyper-V proxy $HvOffHostProxyName has a concurrent task limit of $MaxTaskCount" -ForegroundColor Yellow
    $Options = $OffhostProxy.Options
    $Options.MaxTasksCount = $MaxTaskCountValueToSet
    $OffhostProxy.SetOptions($Options)
}

#Report the changes
$HvOffHostProxies = Get-VBRHvProxy
foreach ($OffhostProxy in $HvOffHostProxies) {
    $HvOffHostProxyName = $OffhostProxy.Name
    $MaxTaskCount = $OffhostProxy.MaxTasksCount
    Write-Host "The on-host Hyper-V proxy $HvOffHostProxyName has a concurrent task limit of $MaxTaskCount" -ForegroundColor Green
}

PowerShell code to set the max concurrent tasks for file proxies

#We grab the file proxies. Note this code does not grab
#any other type of proxies. We set the MaxTasksCount and report back
$MaxTaskCountValueToSet = 12
$FileProxies = [Veeam.Backup.Core.CFileProxy]::GetAll()
Foreach ($FileProxy in $FileProxies) {
    $FileProxyName = $FileProxy.Name
    $MaxTaskCount = $FileProxy.MaxTasksCount
    Write-Host "The file proxy $FileProxyName has a concurrent task limit of $MaxTaskCount" -ForegroundColor Yellow
    $options = $FileProxy.Options
    $options.MaxTasksCount = $MaxTaskCountValueToSet 
    $FileProxy.SetOptions($options)
}

#Report the changes
$FileProxies = [Veeam.Backup.Core.CFileProxy]::GetAll()
Foreach ($FileProxy in $FileProxies) {
    $FileProxyName = $FileProxy.Name
    $MaxTaskCount = $FileProxy.MaxTasksCount
    Write-Host "The file proxy $FileProxyName has a concurrent task limit of $MaxTaskCount" -ForegroundColor Green
}

Last but not least, note that VBR v10 PowerShell also has the Get-VBRNASProxyServer and Set-VBRNASProxyServer cmdlets to work with. However, initially they seemed not to report the name of the proxies, which is annoying. After asking around I learned it can be found as a property of the Server object they return. While I was expecting $FileProxy.Name to exist (based on other Veeam proxy commands), I need to use $FileProxy.Server.Name instead.

$MaxTaskCountValueToSet = 4
$FileProxies = Get-VBRNASProxyServer
foreach ($FileProxy in $FileProxies) {
    $FileProxyName = $FileProxy.Server.Name
    $MaxTaskCount = $FileProxy.ConcurrentTaskNumber
    Write-Host "The file proxy $FileProxyName has a concurrent task limit of $MaxTaskCount" -ForegroundColor Yellow
    Set-VBRNASProxyServer -ProxyServer $FileProxy -ConcurrentTaskNumber $MaxTaskCountValueToSet
}

#Report the changes
$FileProxies = Get-VBRNASProxyServer
foreach ($FileProxy in $FileProxies) {
    $FileProxyName = $FileProxy.Server.Name
    $MaxTaskCount = $FileProxy.ConcurrentTaskNumber
    Write-Host "The file proxy $FileProxyName has a concurrent task limit of $MaxTaskCount" -ForegroundColor Green
}

Configuring the repositories/SOBR extents

First of all, for Backup Repositories, the max concurrent tasks are not based on virtual disks but on backup files (.vbk, .vib & .vrb).

Secondly, you can use either per-VM backup files or non-per-VM backup files. With per-VM backup files, every VM in the job gets its own backup file. This consumes more concurrent tasks in a single job than non-per-VM mode, where a single job writes to a single file. Let’s again look at some examples to clarify this. A single backup job in non-per-VM mode uses a single backup file and as such one concurrent task, regardless of the number of VMs in the job. A single backup job in per-VM mode uses one backup file per VM in the job.
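
A trivial back-of-the-envelope sketch of the difference (the numbers are just an example):

#Hypothetical example: repository tasks consumed by a single backup job with 10 VMs
$VMsInJob = 10
#Per-VM backup files: one backup file, and thus one repository task, per VM
$TasksPerVMMode = $VMsInJob
#Non-per-VM backup files: one backup file, and thus one repository task, for the whole job
$TasksNonPerVMMode = 1
Write-Host "Per-VM mode: $TasksPerVMMode tasks - non-per-VM mode: $TasksNonPerVMMode task"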

What you need to consider with repositories is that synthetic tasks (merges, transformations, synthetic fulls) also consume tasks and count towards the concurrent task limit on a repository/extent. So when setting it, don’t think it is only related to running active backups.

Finally, when you combine roles, be aware that the same resources (cores, memory) have to serve all of those task limits. That also means you have to consider other subsystems like the storage. If those can’t keep up, your performance will suffer.

PowerShell code to set the task limit for a repository/extent

For standard backup repositories this will do the job

Get-VBRBackupRepository | Set-VBRBackupRepository -LimitConcurrentJobs -MaxConcurrentJobs 24

For the extents of a SOBR you need to use something like this

Get-VBRBackupRepository -ScaleOut | Get-VBRRepositoryExtent | Set-VBRBackupRepository -LimitConcurrentJobs -MaxConcurrentJobs 24

If you put the output of Get-VBRBackupRepository in a foreach loop, you can also configure/report on individual backup repositories when required.

#We grab the standard repositories. Note: pipe Get-VBRBackupRepository -ScaleOut to Get-VBRRepositoryExtent if you need to grab SOBR extents.
#We set the MaxTasksCount and report back
$MaxTaskCountValueToSet = 6
$Repositories = Get-VBRBackupRepository
foreach ($Repository in $Repositories) {
    $RepositoryName = $Repository.Name
    $MaxTaskCount = $Repository.Options.MaxTaskCount
    Write-Host "The on-host Hyper-V proxy $RepositoryName has a concurrent task limit of $MaxTaskCount" -ForegroundColor Yellow

    Set-VBRBackupRepository -Repository $Repository -LimitConcurrentJobs -MaxConcurrentJobs $MaxTaskCountValueToSet
}

#Report the changes
$Repositories = Get-VBRBackupRepository
foreach ($Repository in $Repositories) {
    $RepositoryName = $Repository.Name
    $MaxTaskCount = $Repository.Options.MaxTaskCount
    Write-Host "The on-host Hyper-V proxy $RepositoryName has a concurrent task limit of $MaxTaskCount" -ForegroundColor Green
}
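
If you also want to report per SOBR extent, a sketch along the same lines is shown below. Note that the property path I use to read back the extent task limit (Repository.Options.MaxTaskCount) is an assumption on my part; verify it with Get-Member against your VBR version.

#We grab the SOBR extents, report the current limit and set the new one.
#The Repository.Options.MaxTaskCount property path is an assumption; verify with Get-Member.
$MaxTaskCountValueToSet = 24
$Extents = Get-VBRBackupRepository -ScaleOut | Get-VBRRepositoryExtent
foreach ($Extent in $Extents) {
    $ExtentName = $Extent.Repository.Name
    $MaxTaskCount = $Extent.Repository.Options.MaxTaskCount
    Write-Host "The SOBR extent $ExtentName has a concurrent task limit of $MaxTaskCount" -ForegroundColor Yellow
    $Extent | Set-VBRBackupRepository -LimitConcurrentJobs -MaxConcurrentJobs $MaxTaskCountValueToSet
}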

Conclusion

So I have shown you ways to automate these similar settings for their different purposes. The way of automating differs a bit depending on the type of proxy or whether it is a repository. I hope it helps some of you out there.

Set the Hyper-V volume-specific settings in Veeam with PowerShell

When adding and configuring Hyper-V servers to Veeam you can set the Hyper-V volume-specific settings in Veeam with PowerShell or in the GUI.

  1. Select what VSS provider to use (Windows native VSS or a Hardware VSS provider)
  2. Configure the maximum number of concurrent snapshots to allow for the volume

I will show how to set the Hyper-V volume-specific settings in Veeam with PowerShell. But first, let’s remind ourselves of what they are used for.

The first one is easy. You will use the Windows native VSS provider unless you have a hardware VSS provider installed and configured. These come from your storage array vendor. Hardware VSS providers are only available for volumes that are provided by that storage array. If you don’t set them manually, Veeam scans your host and picks the best option. It does so based on the type of volume and whether a hardware VSS provider is available or not.
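
For example, you can quickly list what Veeam sees on a host with the same cmdlets used in the script further down. The host name is a placeholder for your own environment.

#List the VSS providers and the volumes Veeam detects on a Hyper-V host
$HvHost = 'HV-NODE-01.datawisetech.corp'
Get-VBRHvVssProvider -Server $HvHost | Format-Table -AutoSize
Get-VBRHvServerVolume -Server $HvHost | Format-Table -AutoSize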

The second option’s meaning depends on the version of Windows and also on whether you leverage a hardware VSS provider or not. The value of the maximum number of concurrent snapshots does not always result in the behavior you might expect.

Let’s look at the documentation

I invite you to read the Veeam documentation on this subject. Below you will find an excerpt with my annotations.

Follow the link for each option to learn more in the online Veeam documentation.

  1. For Microsoft Hyper-V 2012 R2 and earlier, the default is set to simultaneously store 4 snapshots of one volume. To change this number, specify the Max snapshots value. It is not recommended that you increase the number of snapshots for slow storage. Many snapshots existing at the same time may cause VM processing failures.
  2. For Microsoft Hyper-V Server 2016 and later, you can simultaneously store 4 VM checkpoints on one volume. To change this number, specify the Max snapshots value. Note that this limitation works only for (recovery) checkpoints created during Veeam Backup & Replication data protection tasks. When you still use the host VSS provider in your backup process (with a SAN hardware VSS provider, combined with off-host Hyper-V proxies) this acts like before: it will not limit the number of concurrent VM backup jobs. That only happens when Hyper-V recovery checkpoints are the only thing in play. This means that for an S2D or Azure Stack HCI solution, for example, you will need to increase this value if you want to have more than 4 VMs backed up simultaneously on that volume, no matter how many concurrent tasks you set on your Hyper-V hosts and repositories. By the way, remember that a task does not equal a VM but a disk per VM / a backup file per VM. In a simple example with nothing else in play, this means that 16 tasks can be 4 VMs if those VMs all happen to have 4 disks, etc.
The default setting for maximum concurrent snapshots is 4.

Now that we have that out of the way: I find it tedious to do all this in the GUI, especially so in larger environments and during testing in the lab or prior to taking a solution into production. There can be many hosts and even more volumes to configure. This is why I set the Hyper-V volume-specific settings (and other configurations) in Veeam with PowerShell.

How to set the Hyper-V volume-specific settings in Veeam with PowerShell

So here I will share how to do this in PowerShell. It is not very difficult. The snippet below is the crux of what you need to integrate into your own scripts. I grab all the volumes on all the nodes of a cluster and set the MaxSnapshots value to 8. Run a Hyper-V backup job against those CSVs with 10 single-disk VMs and you’ll see we can now have up to 8 VMs being backed up concurrently instead of 4.

I am also showing how to set the VSS provider. Warning: PowerShell will let you set a wrong provider. The GUI protects against that, so pay attention here.

#Grab the Cluster whose nodes volumes we want to configure
$Cluster = Get-VBRServer -Name W2K19-LAB.datawisetech.corp -Type HvCluster

#Grab the correct Hyper-V hosts based on the parentid (cluster they belong to)
$ClusterNodes = Get-VBRServer -Type HvServer | Where ParentID -eq $Cluster.Id 

Foreach ($ClusterNode in $ClusterNodes) {
    $ServerVolumes = Get-VBRHvServerVolume -Server $ClusterNode.Name
    $Provider = Get-VBRHvVssProvider -Server $ClusterNode.Name -Name "Microsoft CSV Shadow Copy Provider"
    Foreach ($Volume in $ServerVolumes) {
        if ($Volume.Type -eq "CSV") {
            Set-VBRHvServerVolume -Volume $Volume -MaxSnapshots 8 -VSSProvider $Provider
        }
    }
}
Only the CSV volumes have had their max concurrent snapshots increased to 8.
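
Afterwards, a quick read-back of the CSV volumes lets you verify the new limit. This is just a sketch; I simply dump all properties with Format-List so you can spot the max snapshots value regardless of how the property is named in your VBR version.

#Read back the CSV volumes on every cluster node to verify the new snapshot limit
Foreach ($ClusterNode in $ClusterNodes) {
    Get-VBRHvServerVolume -Server $ClusterNode.Name |
        Where-Object Type -eq "CSV" |
        Format-List *
}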

Conclusion

I have shown you how to set the Hyper-V volume-specific settings in Veeam with PowerShell (VSS provider and max concurrent snapshots).

The max concurrent snapshots value is not the only setting determining how many VMs you can back up concurrently in one job. But it is an important one to know about when leveraging recovery checkpoints. You also need to mind max concurrent tasks.

Every virtual disk being backed up counts as a task. So a virtual machine with 3 disks will consume 3 tasks out of the max concurrent tasks you have set on the backup proxy. Don’t go overboard; count cores when determining how to set these values. Also, remember that taking it easy to speed things up is a rule in backups. There is no speed gained by trying to do more than your cores can handle, or, when you have plenty of cores, by depleting the IOPS on your storage.

I will show you how to configure those with PowerShell in future blog posts.

Virtual switch QoS mode during migrations

Introduction

In Shared nothing live migration with a virtual switch change and VLAN ID configuration I published a sample script. The script works well, but there are two areas of improvement. The first one I covered in Checkpoint references a non-existent virtual switch. This post is about the second one. Here I show that I also need to check the virtual switch QoS mode during migrations. A couple of the virtual machines on the source nodes had an absolute minimum and/or maximum bandwidth set. On the target nodes, all the virtual switches are created by PowerShell. This defaults to weight mode for QoS, which is the more sensible option, albeit not always the easiest or most practical one for people to use.

Virtual switch QoS mode during migrations

First, a quick recap of what we are doing. The challenge was to shared nothing live migrate virtual machines to a host with different virtual switch names and VLAN IDs. We did so by adding dummy virtual switches to the target host. This made shared nothing live migration possible. On arrival of the virtual machine on the target host, we immediately connect the virtual network adapters to the final virtual switch and set the correct VLAN IDs. That works very well. You drop 1 or at most 2 pings, which is as good as it gets.

This goes wrong under the following conditions:

  • The source virtual switch has QoS mode absolute.
  • A virtual network adapter connected to the source virtual switch has MinimumBandwidthAbsolute and/or MaximumBandwidth set.
  • The target virtual switch has QoS mode weighted.

This will cause connectivity loss, as you cannot set absolute values on a virtual network adapter attached to a weighted virtual switch. So connecting the virtual network adapter to the new virtual switch just fails and you lose connectivity. Remember that the virtual machine is connected to a dummy virtual switch just to make the live migration work and we need to swap it over immediately. The VLAN ID does get set correctly, actually. Let’s deal with this.
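
You can spot this situation up front with a couple of quick checks. The switch names below come from the script; the VM name is a placeholder.

#Check the QoS mode of the source and target virtual switches
(Get-VMSwitch -Name 'vSwitch-VLAN500').BandwidthReservationMode         #e.g. Absolute
(Get-VMSwitch -Name 'ConvergedVirtualSwitch').BandwidthReservationMode  #e.g. Weight
#Check which virtual network adapters of a VM have absolute bandwidth settings configured
Get-VMNetworkAdapter -VMName 'SQL01' |
    Select-Object Name, SwitchName,
    @{N = 'MinAbsolute'; E = { $_.BandwidthSetting.MinimumBandwidthAbsolute } },
    @{N = 'MaxBandwidth'; E = { $_.BandwidthSetting.MaximumBandwidth } }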

Steps to fix this issue

First of all, we adapt the script to check the QoS mode on the source virtual switches. If it is set to absolute we know we need to check for any settings of MinimumBandwidthAbsolute and MaximumBandwidth on the virtual adapters connected to those virtual switches. These changes are highlighted in the demo code below.

Secondly, we adapt the script to check every virtual network adapter for its bandwidth management settings. If we find configured MinimumBandwidthAbsolute and MaximumBandwidth values we set these to 0 and as such disable the bandwidth settings. This makes sure that connecting the virtual network adapters to the new virtual switch with QoS mode weighted will succeed. These changes are highlighted in the demo code below.

Finally, the complete script

#The source Hyper-V host
$SourceNode = 'NODE-A'
#The LUN where you want to storage migrate your VMs away from
$SourceRootPath = "C:\ClusterStorage\Volume1*"

#The target Hyper-V host
$TargetNode = 'ZULU'
#The storage path where you want to storage migrate your VMs to
$TargetRootPath = "C:\ClusterStorage\Volume1"

$OldVirtualSwitch01 = 'vSwitch-VLAN500'
$OldVirtualSwitch02 = 'vSwitch-VLAN600'
$NewVirtualSwitch = 'ConvergedVirtualSwitch'
$VlanId01 = 500
$VlanId02 = 600
  
if ((Get-VMSwitch -Name $OldVirtualSwitch01).BandwidthReservationMode -eq 'Absolute') {
    $OldVirtualSwitch01QoSMode = 'Absolute'
}
if ((Get-VMSwitch -Name $OldVirtualSwitch02).BandwidthReservationMode -eq 'Absolute') {
    $OldVirtualSwitch02QoSMode = 'Absolute'
}
    
#Grab all the VM we find that have virtual disks on the source CSV - WARNING for W2K12 you'll need to loop through all cluster nodes.
$AllVMsOnRootPath = Get-VM -ComputerName $SourceNode | where-object { $_.HardDrives.Path -like $SourceRootPath }

#We loop through all VMs we find on our $SourceRootPath
ForEach ($VM in $AllVMsOnRootPath) {
    #We generate the final VM destination path
    $TargetVMPath = $TargetRootPath + "\" + ($VM.Name).ToUpper()
    #Grab the VM name
    $VMName = $VM.Name
    $VM.VMid
    $VMName

    #If the VM is still clustered, get it removed from the cluster as shared nothing live migration will otherwise fail.
    if ($VM.isclustered -eq $True) {
        write-Host -ForegroundColor Magenta $VM.Name "is clustered and is being removed from cluster"
        Remove-ClusterGroup -VMId $VM.VMid -Force -RemoveResources
        Do { Start-Sleep -seconds 1 } While ($VM.isclustered -eq $True)
        write-Host -ForegroundColor Yellow $VM.Name "has been removed from cluster"
    }
    #If the VM has checkpoints, notify the user of the script as this will cause issues after switching to the new virtual
    #switch on the target node. Live migration will fail between cluster nodes if the checkpoints reference 1 or more
    #non-existent virtual switches. These must be removed prior to or after completing the shared nothing migration.
    #The script does this after the migration automatically, not before, as I want the VM to be untouched if the shared nothing
    #migration fails.

    $checkpoints = get-vmcheckpoint -VMName $VM.Name

    if ($Null -ne $checkpoints) {
        write-host -foregroundcolor yellow "This VM has checkpoints"
        write-host -foregroundcolor yellow "This VM will be migrated to the new host"
        write-host -foregroundcolor yellow "Only after a succesfull migration will ALL the checpoints be removed"
    }
    
    #Do the actual storage migration of the VM, $DestinationVMPath creates the default subfolder structure
    #for the virtual machine config, snapshots, smartpaging & virtual hard disk files.
    Move-VM -Name $VMName -ComputerName $VM.ComputerName -IncludeStorage -DestinationStoragePath $TargetVMPath -DestinationHost $TargetNode
    
    $MovedVM = Get-VM -ComputerName $TargetNode -Name $VMName

    $vNICOnOldvSwitch01 = Get-VMNetworkAdapter -ComputerName $TargetNode -VMName $MovedVM.VMName | where-object SwitchName -eq $OldVirtualSwitch01
    if ($Null -ne $vNICOnOldvSwitch01) {
        foreach ($VMNetworkAdapter in $vNICOnOldvSwitch01) {
            if ($OldVirtualSwitch01QoSMode -eq 'Absolute') {
                if (0 -ne $VMNetworkAdapter.BandwidthSetting.MaximumBandwidth) {
                    write-host -foregroundcolor cyan "Network adapter $($VMNetworkAdapter.Name) of VM $VMName MaximumBandwidth will be reset to 0."
                    Set-VMNetworkAdapter -Name $VMNetworkAdapter.Name -VMName $MovedVM.Name -ComputerName $TargetNode -MaximumBandwidth 0
                }
                if (0 -ne $VMNetworkAdapter.BandwidthSetting.MinimumBandwidthAbsolute) {
                    write-host -foregroundcolor cyan "Network adapter $($VMNetworkAdapter.Name) of VM $VMName MinimumBandwidthAbsolute will be reset to 0."
                    Set-VMNetworkAdapter -Name $VMNetworkAdapter.Name -VMName $MovedVM.Name -ComputerName $TargetNode -MinimumBandwidthAbsolute 0
                }
            }
                
            write-host 'Moving to correct vSwitch'
            Connect-VMNetworkAdapter -VMNetworkAdapter $vNICOnOldvSwitch01 -SwitchName $NewVirtualSwitch
            write-host "Setting VLAN $VlanId01"
            Set-VMNetworkAdapterVlan  -VMNetworkAdapter $vNICOnOldvSwitch01 -Access -VLANid $VlanId01
        }
    }

    $vNICsOnOldvSwitch02 = Get-VMNetworkAdapter -ComputerName $TargetNode -VMName $MovedVM.VMName | where-object SwitchName -eq $OldVirtualSwitch02
    if ($NULL -ne $vNICsOnOldvSwitch02) {
        foreach ($VMNetworkAdapter in $vNICsOnOldvSwitch02) {
            if ($OldVirtualSwitch02QoSMode -eq 'Absolute') {
                if (0 -ne $VMNetworkAdapter.BandwidthSetting.MaximumBandwidth) {
                    write-host -foregroundcolor cyan "Network adapter $($VMNetworkAdapter.Name) of VM $VMName MaximumBandwidth will be reset to 0."
                    Set-VMNetworkAdapter -Name $VMNetworkAdapter.Name -VMName $MovedVM.Name -ComputerName $TargetNode -MaximumBandwidth 0
                }
                if (0 -ne $VMNetworkAdapter.BandwidthSetting.MinimumBandwidthAbsolute) {
                    write-host -foregroundcolor cyan "Network adapter $($VMNetworkAdapter.Name) of VM $VMName MinimumBandwidthAbsolute will be reset to 0."
                    Set-VMNetworkAdapter -Name $VMNetworkAdapter.Name -VMName $MovedVM.Name -ComputerName $TargetNode -MinimumBandwidthAbsolute 0
                }
            }
            write-host 'Moving to correct vSwitch'
            Connect-VMNetworkAdapter -VMNetworkAdapter $vNICsOnOldvSwitch02 -SwitchName $NewVirtualSwitch
            write-host "Setting VLAN $VlanId02"
            Set-VMNetworkAdapterVlan  -VMNetworkAdapter $vNICsOnOldvSwitch02 -Access -VLANid $VlanId02
        }
    }

    #If the VM has checkpoints, this is when we remove them.
    $checkpoints = get-vmcheckpoint -ComputerName $TargetNode -VMName $MovedVM.VMName

    if ($Null -ne $checkpoints) {
        write-host -foregroundcolor yellow "This VM has checkpoints and they will ALL be removed"
        $CheckPoints | Remove-VMCheckpoint 
    }
}

Below is the output of a VM where we had to change the switch name, enable a VLAN ID, deal with absolute QoS settings and remove checkpoints. All this without causing downtime. Nor did we change the original virtual machine in case the shared nothing migration fails.

Some observations

The fact that we are using PowerShell is great. You can only set weighted bandwidth limits via PowerShell. The GUI only handles absolute values and will throw an error if you try to use it when the virtual switch is configured as weighted.

This means you can embed setting the weights in your script if you so desire. If you do, read up on how to handle this best. Trying to juggle the weight settings to total 100 in a dynamic environment is a bit of a challenge. So use the default flow pool and keep the number of virtual network adapters with unique settings to a minimum.
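
If you do want to set weights afterward, a minimal sketch could look like this. The switch, VM, and adapter names are placeholders and the weight values are just an example.

#Give the default flow pool the bulk of the weight and only single out what really needs more
Set-VMSwitch -Name 'ConvergedVirtualSwitch' -DefaultFlowMinimumBandwidthWeight 50
Set-VMNetworkAdapter -VMName 'SQL01' -Name 'Network Adapter' -MinimumBandwidthWeight 10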

Conclusion

To avoid downtime we removed all the configured minimum and maximum bandwidth settings on every virtual network adapter. By doing so we ensured that the swap to the new virtual switch right after the successful shared nothing live migration will succeed. If you want, you can set weights on the virtual network adapters afterward. But as the bandwidth on these new hosts is now a redundant 25 Gbps, the need was no longer there. As a result, we just left them without. This can always be configured later if it turns out to be needed.

Warning: this is a demo script. It lacks error handling and logging. It can also contain mistakes. But hey, you get it for free to adapt and use. Test and adapt this in a lab. You are responsible for what you do in your environments. Running scripts downloaded from the internet without any validation makes you a certified nut case. That is not my wrongdoing.

I hope this helps some of you. Thanks for reading.

Shared nothing live migration with a virtual switch change and VLAN ID configuration

Introduction

I was working on a hardware refresh, consolidation, and upgrade to Windows Server 2019 project. This mainly boils down to cluster operating system rolling upgrades from Windows Server 2016 to Windows Server 2019, with new servers replacing the old ones. Pretty straightforward. So what does this have to do with shared nothing live migration with a virtual switch change and VLAN ID configuration?

Due to the consolidation aspect, we also had to move virtual machines from some older clusters to the new clusters. The old cluster nodes have multiple virtual switches. These connect to different VLANs. Some of the virtual machines have only one virtual network adapter that connects to one of the virtual switches. Many of the virtual machines are multihomed. The number of virtual NICs per virtual machine was anything between 1 and 3. For this purpose, we had the challenge of doing a shared nothing live migration with a virtual switch change and VLAN ID configuration. All this without downtime.

Meeting the challenge

In the new cluster, there is only one converged virtual switch. This virtual switch attaches to trunked network ports with all the required VLANs. As we have only one virtual switch on the new Hyper-V cluster nodes, the name differs from those on the old Hyper-V cluster nodes. This prevents live migration. Fixing this is our challenge.

First of all, Compare-VM is your friend to find out blocking incompatibilities between the source and the target nodes. You can read about that in many places. Here, we focus on our challenge.
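
A minimal sketch of such a check, with host and VM names as placeholders:

#Compare a VM against the target host to list blocking incompatibilities up front
$Report = Compare-VM -Name 'SQL01' -ComputerName 'NODE-A' -DestinationHost 'ZULU'
$Report.Incompatibilities | Format-Table Message, MessageId -AutoSize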

Making Shared nothing migration work

The first step is to make sure shared nothing migration works. We can achieve this in several ways.

Option 1

We can disconnect the virtual machine network adapters from their virtual switch. While this allows you to migrate the virtual machines, this leads to connectivity loss. This is not acceptable.

Option 2

We can preemptively connect the virtual machine network adapters to a virtual switch with the same name as the one on the target and enable the VLAN ID. Consequently, this means you have to create those switches and need NICs to do so. But unless you configure and connect those to the network just like on the new Hyper-V hosts, this also leads to connectivity loss. That was not possible in this case. So this option again is unacceptable.

Option 3

What I did was create dummy virtual switches on the target hosts. For this purpose, I used some spare LOM NICs. I did not configure them otherwise. As a matter of fact, I did not even connect them. Just the fact that they exist with the same names as on the old Hyper-V hosts is sufficient to make shared nothing migration possible. Actually, this is a great point to remind ourselves that we don’t even need spare NICs. Dummy private virtual switches that are not attached to a NIC will also do.

After we have finished the migrations we just delete the dummy virtual switches. That is all there is to do if you used private ones. If you used spare NICs, just disable them again. Now all is as it was and should be on the new cluster nodes.
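
A minimal sketch of creating, and after the migrations removing, such dummy private switches on the target host. The switch names match the sample script below; the host name is a placeholder.

#Create dummy private virtual switches on the target host with the same names as on the old hosts.
#They only need to exist so shared nothing live migration can start; they carry no traffic.
New-VMSwitch -Name 'vSwitch-VLAN500' -SwitchType Private -ComputerName 'ZULU'
New-VMSwitch -Name 'vSwitch-VLAN600' -SwitchType Private -ComputerName 'ZULU'
#Once all migrations are done, clean them up again
Remove-VMSwitch -Name 'vSwitch-VLAN500' -ComputerName 'ZULU' -Force
Remove-VMSwitch -Name 'vSwitch-VLAN600' -ComputerName 'ZULU' -Force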

Turning shared nothing migration into shared nothing live migration

Remember, we need zero downtime. Keep in mind that as long as the shared nothing live migration is running, all is well. We have connectivity to the original virtual machines on the old cluster nodes. As soon as the shared nothing live migration finishes we do 2 things. First of all, we connect the virtual network adapters of the virtual machines to the new converged virtual switch. Also, we enable the VLAN ID. To achieve this, we script it out in PowerShell. As a result, it is so fast we only drop 1 or 2 pings, just like a standard live migration.

Below you can find a conceptual script you can adapt for your own purposes. For real migrations add logging and error handling. Please note that to leverage share nothing migration you need to be aware of the security requirements. Credential Security Support Provider (CredSSP) is the default option. If you want or must use Kerberos you must configure constrained delegation in Active Directory.

I chose to use CredSSP as we would decommission the old host soon afterward anyway. It also means we did not need Active Directory work done. This can be handy if that is not evident in the environment you are in. We started the script on every source Hyper-V host, migrating a bunch of VMs to a new Hyper-V host. This works very well for us. Hope this helps.
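
As a reminder, here is a minimal sketch of enabling live migration with CredSSP authentication; run it on both the source and the target Hyper-V host.

#Enable live migrations and select CredSSP as the authentication type
Enable-VMMigration
Set-VMHost -VirtualMachineMigrationAuthenticationType CredSSP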

Sample Script

    #The source Hyper-V host
    $SourceNode = 'NODE-A'
    #The LUN where you want to storage migrate your VMs away from
    $SourceRootPath = "C:\ClusterStorage\Volume1*"

    #The target Hyper-V host
    $TargetNode = 'ZULU'
    #The storage path where you want to storage migrate your VMs to
    $TargetRootPath = "C:\ClusterStorage\Volume1"

    $OldVirtualSwitch01 = 'vSwitch-VLAN500'
    $OldVirtualSwitch02 = 'vSwitch-VLAN600'
    $NewVirtualSwitch = 'ConvergedVirtualSwitch'
    $VlanId01 = 500
    $VlanId02 = 600

    #Grab all the VM we find that have virtual disks on the source CSV - WARNING for W2K12 you'll need to loop through all cluster nodes.
    $AllVMsOnRootPath = Get-VM -ComputerName $SourceNode | where-object { $_.HardDrives.Path -like $SourceRootPath }

    #We loop through all VMs we find on our $SourceRootPath
    ForEach ($VM in $AllVMsOnRootPath) {
        #We generate the final VM destination path
        $TargetVMPath = $TargetRootPath + "\" + ($VM.Name).ToUpper()
        #Grab the VM name
        $VMName = $VM.Name
        $VM.VMid
        $VMName


        if ($VM.isclustered -eq $True) {
            write-Host -ForegroundColor Magenta $VM.Name "is clustered and is being removed from cluster"
            Remove-ClusterGroup -VMId $VM.VMid -Force -RemoveResources
            Do { Start-Sleep -seconds 1 } While ($VM.isclustered -eq $True)
            write-Host -ForegroundColor Yellow $VM.Name "has been removed from cluster"
        }
    
        #Do the actual storage migration of the VM, $DestinationVMPath creates the default subfolder structure
        #for the virtual machine config, snapshots, smartpaging & virtual hard disk files.
        Move-VM -Name $VMName -ComputerName $VM.ComputerName -IncludeStorage -DestinationStoragePath $TargetVMPath -DestinationHost $TargetNode

        #Grab the VM on the target host so we can rewire its network adapters
        $MovedVM = Get-VM -ComputerName $TargetNode -Name $VMName

        $OldvSwitch01 = Get-VMNetworkAdapter -ComputerName $TargetNode -VMName $MovedVM.VMName | where-object SwitchName -eq $OldVirtualSwitch01

        if ($Null -ne $OldvSwitch01) {
            foreach ($VMNetworkAdapter in $OldvSwitch01) {
                write-host 'Moving to correct vSwitch'
                Connect-VMNetworkAdapter -VMNetworkAdapter $OldvSwitch01 -SwitchName $NewVirtualSwitch
                write-host "Setting VLAN $VlanId01"
                Set-VMNetworkAdapterVlan  -VMNetworkAdapter $OldvSwitch01 -Access -VLANid $VlanId01
            }
        }
        $OldvSwitch02 = Get-VMNetworkAdapter -ComputerName $TargetNode -VMName $MovedVM.VMName | where-object SwitchName -eq $OldVirtualSwitch02
        if ($NULL -ne $OldvSwitch02) {
            foreach ($VMNetworkAdapter in $OldvSwitch02) {
                write-host 'Moving to correct vSwitch'
                Connect-VMNetworkAdapter -VMNetworkAdapter $OldvSwitch02 -SwitchName $NewVirtualSwitch
                write-host "Setting VLAN $VlanId02"
                Set-VMNetworkAdapterVlan  -VMNetworkAdapter $OldvSwitch02 -Access -VLANid $VlanId02
            }
        }
    }