Set the Hyper-V volume-specific settings in Veeam with PowerShell
When adding and configuring Hyper-V servers in Veeam, you can set the Hyper-V volume-specific settings with PowerShell or in the GUI. These settings let you:
Select what VSS provider to use (Windows native VSS or a Hardware VSS provider)
Configure the maximum number of concurrent snapshots to allow for the volume
I will show how to set the Hyper-V volume-specific settings in Veeam with PowerShell. But first, let's remind ourselves of what these settings are used for.
The first one is easy. You will use the Windows native VSS provider unless you have a hardware VSS provider installed and configured. These come from your storage array vendor, and they are only available for volumes provided by that storage array. If you don't set them manually, Veeam scans your host and picks the best option, based on the type of volume and whether a hardware VSS provider is available.
The second option's meaning depends on the version of Windows and on whether you leverage a hardware VSS provider. The value of the maximum number of concurrent snapshots does not always result in the behavior you might expect.
Let's look at the documentation
I invite you to read the Veeam documentation on this subject. Below you will find an excerpt with my annotations.
Follow the link for each option to learn more in the online Veeam documentation.
For Microsoft Hyper-V 2012 R2 and earlier, the default is set to simultaneously store 4 snapshots of one volume. To change this number, specify the Max snapshots value. It is not recommended that you increase the number of snapshots for slow storage. Many snapshots existing at the same time may cause VM processing failures.
For Microsoft Hyper-V Server 2016 and later, you can simultaneously store 4 VM checkpoints on one volume. To change this number, specify the Max snapshots value. Note that this limitation only applies to (recovery) checkpoints created during Veeam Backup & Replication data protection tasks. When you still use a host VSS provider in your backup process (with a SAN hardware VSS provider, combined with off-host Hyper-V proxies), this behaves as before: it does not limit the number of concurrent VM backup jobs. That only happens when Hyper-V recovery checkpoints are the only thing in play. This means that for an S2D or Azure Stack HCI solution, for example, you will need to increase this value if you want more than 4 VMs backed up simultaneously on that volume, no matter how many concurrent tasks you set on your Hyper-V hosts and repositories. By the way, remember that a task does not equal a VM but a disk per VM / backup file per VM. In a simple example with nothing else in play, this means that 16 tasks can be 4 VMs if those VMs all happen to have 4 disks, etc.
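Before changing anything, it can help to see what a host's volumes are currently configured with. Below is a minimal sketch, assuming the Veeam Backup & Replication PowerShell module is loaded; the host name is a hypothetical example and I simply dump the volume objects rather than guessing at property names.
#Grab a Hyper-V host that is already registered in Veeam (hypothetical name)
$HvHost = Get-VBRServer -Name "NODE-A.datawisetech.corp" -Type HvServer
#List that host's volumes and their current volume-specific settings
Get-VBRHvServerVolume -Server $HvHost.Name | Format-List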
Now that we have that out of the way: I find it tedious to do all this in the GUI, especially in larger environments and during testing in the lab or prior to taking a solution into production. There can be many hosts and even more volumes to configure. This is why I set the Hyper-V volume-specific settings (and other configurations) in Veeam with PowerShell.
How to set the Hyper-V volume-specific settings in Veeam with PowerShell
So here I will share how to do this in PowerShell. It is not very difficult. The snippet below is the crux of what you need to integrate into your own scripts. I grab all the volumes on all the nodes of a cluster and set the MaxSnapshots value to 8. Run a Hyper-V backup job against those CSVs with 10 single-disk VMs and you'll see we can now have up to 8 VMs being backed up concurrently instead of 4.
I am also showing how to set the VSS provider. Warning: PowerShell will let you set a wrong provider. The GUI protects against that, so pay attention here.
#Grab the Cluster whose nodes volumes we want to configure
$Cluster = Get-VBRServer -Name W2K19-LAB.datawisetech.corp -Type HvCluster
#Grab the correct Hyper-V hosts based on the parentid (cluster they belong to)
$ClusterNodes = Get-VBRServer -Type HvServer | Where ParentID -eq $Cluster.Id
Foreach ($ClusterNode in $ClusterNodes) {
    $ServerVolumes = Get-VBRHvServerVolume -Server $ClusterNode.Name
    $Provider = Get-VBRHvVssProvider -Server $ClusterNode.Name -Name "Microsoft CSV Shadow Copy Provider"
    Foreach ($Volume in $ServerVolumes) {
        if ($Volume.Type -eq "CSV") {
            Set-VBRHvServerVolume -Volume $Volume -MaxSnapshots 8 -VSSProvider $Provider
        }
    }
}
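To double-check the result, you can re-query the volumes afterwards. This is a small sketch under the same assumptions as the script above; it just dumps the volume settings for each node so you can confirm the new maximum snapshot value and the VSS provider took effect.
#Re-query the volumes on every node to confirm the new settings
Foreach ($ClusterNode in $ClusterNodes) {
    Get-VBRHvServerVolume -Server $ClusterNode.Name | Format-List
}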
Conclusion
I have shown you how to set the Hyper-V volume-specific settings in Veeam with PowerShell (VSS provider / max concurrent snapshots).
The max concurrent snapshots value is not the only setting determining how many VMs you can back up concurrently in one job, but it is an important one to know about when leveraging recovery checkpoints. You also need to mind max concurrent tasks.
Every virtual disk being backed up counts as a task. So a virtual machine with 3 disks will consume 3 tasks out of the max concurrent tasks you have set on the backup proxy. Don't go overboard; count cores when determining how to set these values. Also, remember that taking it easy often speeds things up in backups. There is no speed gained by trying to do more than your cores can handle or, when you have plenty of cores, by depleting the IOPS on your storage.
I will show you how to configure those with PowerShell in future blog posts.
A proxy server is used to allow applications and web browsers to communicate with the internet. There are multiple benefits. One of them is caching of the results, although this is less important than it used to be. You also control what sites can be visited and you have a log of where the traffic is going. You can use a transparent proxy, which means that you do not need to configure your hosts with proxy settings; the firewall sends all the internet-bound traffic (HTTP/HTTPS) to the proxy server. With a standard proxy you need to tell the hosts and applications where to go and, optionally, for what sites to bypass the proxy server. There are different options and libraries to configure the Windows proxy settings. Here we focus on how to configure the WinINET proxy server settings with PowerShell.
Why? Primarily because I wanted to automate this. Secondly, I needed a solution that was easy and fast in a non-domain joined / workgroup scenario.
How to configure proxy server settings
There are basically 3 ways to define proxy settings on a Windows host.
Applications using the WinINET library. WinINET, an API, is part of Internet Explorer and can also be used by other applications. Applications leveraging the WinINET API take over the proxy settings configured in Internet Explorer. You set these per user or per machine. In the latter case only a user with administrative rights can set or change the proxy server settings. We’ll look at PowerShell to configure this for us.
Applications using the WinHTTP library. WinHTTP is the best choice for non-interactive usage; prime examples are Windows services or custom applications. WinHTTP does not use the proxy settings from WinINET unless you import them. This is the one we need to configure for applications that do not use WinINET. If you do not do so, you will run into issues when blocking direct internet access: the settings in WinINET are ignored, which means the proxy is not used by an application or service relying on WinHTTP. You set this via netsh winhttp set proxy <proxy>:<port>. You can reset it via netsh winhttp reset proxy. If you want to import the WinINET settings, use netsh winhttp import proxy source=ie (see the short example after this list).
Applications that have their own proxy settings. In this case you configure the settings in the application itself. Some applications, like Firefox, use the system's settings by default, but you can also define Firefox-specific proxy settings.
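For reference, here are the WinHTTP commands from the bullet above gathered in one place; the proxy name and port are placeholders you would replace with your own.
#Point WinHTTP at a proxy (placeholder name and port)
netsh winhttp set proxy myproxy:3128
#Import the current WinINET (Internet Explorer) proxy settings instead
netsh winhttp import proxy source=ie
#Remove the WinHTTP proxy again and go direct
netsh winhttp reset proxy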
Where to configure the proxy settings
Bar the options to automatically detect the proxy settings or to use a configuration script (via GPO or registry editing), you can manually configure the settings for WinINET.
I have stopped using Internet Explorer and as such I avoid using it to configure the WinINET settings. Instead, I prefer to use Windows Settings, Network & Internet, Proxy. The result is the same, and on modern OS versions I disable Internet Explorer.
If you want to set them machine wide you can do so via a GPO or registry editing.
Via GPO: Computer Configuration\Administrative Templates\Windows Components\Internet Explorer\Make proxy settings per-machine (rather than per user)
Via Registry: HKLM\Software\Policies\Microsoft\Windows\CurrentVersion\Internet Settings, DWORD: ProxySettingsPerUser = 0
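If you prefer PowerShell over GPO or manual registry editing for that per-machine switch, the equivalent registry write looks roughly like this; it is the same key and value the script further down uses.
#Force proxy settings to be per machine rather than per user (0 = per machine, 1 = per user)
#Note: the key must already exist; create it with New-Item first if needed
$PolicyPath = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\CurrentVersion\Internet Settings'
New-ItemProperty -Path $PolicyPath -Name 'ProxySettingsPerUser' -Value 0 -PropertyType DWORD -Force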
But bar using GPOs, setting proxy settings manually can be tedious, especially when you want to clean out old settings and there are multiple profiles on the hosts. So I threw together a PowerShell solution to use in workgroup environments. This took some research and testing to get right.
$ProxyServer = "192.168.2.5:3128"
$ProxyBypassList = "192.168.2.3;192.168.2.4;192.168.2.5;192.168.2.72;<local>"
$TurnProxyOnOff = "On"
$ProxyPerMachine = $False
<#
$ProxyServer = ""
$ProxyBypassList = ""
$TurnProxyOnOff = "Off"
$ProxyPerMachine = $False
#>
#Example: Set-InternetProxy $False "On" "mproxy:3128" "*.mysite.com;<local>"
function Set-InternetProxy($ProxyPerMachine, $TurnProxyOnOff, $proxy, $bypassUrls) {
    if ($TurnProxyOnOff -eq "Off") { $ProxyEnabled = '01'; $ProxyEnable = 0 } Else { $ProxyEnabled = '11'; $ProxyEnable = 1 }
    $regPath = "HKLM:\SOFTWARE\Policies\Microsoft\Windows\CurrentVersion\Internet Settings"
    $proxyBytes = [system.Text.Encoding]::ASCII.GetBytes($proxy)
    $bypassBytes = [system.Text.Encoding]::ASCII.GetBytes($bypassUrls)
    $defaultConnectionSettings = [byte[]]@(@(70, 0, 0, 0, 0, 0, 0, 0, $ProxyEnabled, 0, 0, 0, $proxyBytes.Length, 0, 0, 0) + $proxyBytes + @($bypassBytes.Length, 0, 0, 0) + $bypassBytes + @(1..36 | % { 0 }))
    if ($ProxyPerMachine -eq $True) { #ProxySettingsPerMachine
        New-ItemProperty -Path $regPath -Name 'ProxySettingsPerUser' -Value 0 -PropertyType DWORD -Force #-ErrorAction SilentlyContinue
        #Set the proxy settings per machine
        SetProxySettingsPerMachine $Proxy $ProxyEnable $defaultConnectionSettings
        #As we are using the per machine proxy settings, clear the user settings to tidy up.
        #This is done for all profiles found on the host as well as the default profile.
        ClearProxySettingPerUser
    }
    Elseif ($ProxyPerMachine -eq $False) { #ProxySettingsPerUser
        New-ItemProperty -Path $regPath -Name 'ProxySettingsPerUser' -Value 1 -PropertyType DWORD -Force #-ErrorAction SilentlyContinue
        #Set the proxy settings per user (this is done for all profiles found on the host as well as the default profile)
        SetProxySettingsPerUser $Proxy $ProxyEnable $defaultConnectionSettings
        #As we are using the per user proxy settings, clear the machine settings to tidy up.
        ClearProxySettingsPerMachine
    }
}
function SetProxySettingsPerUser($Proxy, $ProxyEnable, $defaultConnectionSettings) {
    # Get each user profile SID and path to the profile
    $UserProfiles = Get-ItemProperty "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList\*" | Where { $_.PSChildName -match "S-1-5-21-(\d+-?){4}$" } | Select-Object @{Name = "SID"; Expression = { $_.PSChildName } }, @{Name = "UserHive"; Expression = { "$($_.ProfileImagePath)\NTuser.dat" } }
    # We also grab the default user profile just in case the proxy settings have been changed in there, but they should not have been
    $DefaultProfile = "" | Select-Object SID, UserHive
    $DefaultProfile.SID = ".DEFAULT"
    $DefaultProfile.Userhive = "C:\Users\Public\NTuser.dat"
    $UserProfiles += $DefaultProfile
    # Loop through each profile we found on the host
    Foreach ($UserProfile in $UserProfiles) {
        # Load ntuser.dat if it's not already loaded
        If (($ProfileAlreadyLoaded = Test-Path Registry::HKEY_USERS\$($UserProfile.SID)) -eq $false) {
            Start-Process -FilePath "CMD.EXE" -ArgumentList "/C REG.EXE LOAD HKU\$($UserProfile.SID) $($UserProfile.UserHive)" -Wait -WindowStyle Hidden
            Write-Host -ForegroundColor Cyan "Loading hive" $UserProfile.UserHive "for user profile SID:" $UserProfile.SID
        }
        Else {
            Write-Host -ForegroundColor Cyan "Hive already loaded" $UserProfile.UserHive "for user profile SID:" $UserProfile.SID
        }
        $registryPath = "Registry::HKEY_USERS\$($UserProfile.SID)\Software\Microsoft\Windows\CurrentVersion\Internet Settings"
        Set-ItemProperty -Path $registryPath -Name ProxyServer -Value $proxy
        Set-ItemProperty -Path $registryPath -Name ProxyEnable -Value $ProxyEnable
        Set-ItemProperty -Path "$registryPath\Connections" -Name DefaultConnectionSettings -Value $defaultConnectionSettings
        # Unload NTuser.dat if it wasn't loaded to begin with.
        If ($ProfileAlreadyLoaded -eq $false) {
            [gc]::Collect() #Clean up any open handles to the registry to avoid getting an "Access Denied" error.
            Start-Sleep -Seconds 5 #Give it some time
            #Unload the user profile, but only if we loaded it ourselves manually.
            Start-Process -FilePath "CMD.EXE" -ArgumentList "/C REG.EXE UNLOAD HKU\$($UserProfile.SID)" -Wait -WindowStyle Hidden | Out-Null
            Write-Host -ForegroundColor Cyan "Unloading hive" $UserProfile.UserHive "for user profile SID:" $UserProfile.SID
        }
    }
}
function SetProxySettingsPerMachine ($Proxy, $ProxyEnable, $defaultConnectionSettings) {
    #Set the proxy settings per machine (this is done for both X64 and X86)
    $registryPath = "HKLM:\Software\Microsoft\Windows\CurrentVersion\Internet Settings"
    Set-ItemProperty -Path $registryPath -Name ProxyServer -Value $proxy
    Set-ItemProperty -Path $registryPath -Name ProxyEnable -Value $ProxyEnable
    Set-ItemProperty -Path "$registryPath\Connections" -Name DefaultConnectionSettings -Value $defaultConnectionSettings
    $registryPath = "HKLM:\Software\WOW6432Node\Microsoft\Windows\CurrentVersion\Internet Settings"
    Set-ItemProperty -Path $registryPath -Name ProxyServer -Value $proxy
    Set-ItemProperty -Path $registryPath -Name ProxyEnable -Value $ProxyEnable
    Set-ItemProperty -Path "$registryPath\Connections" -Name DefaultConnectionSettings -Value $defaultConnectionSettings
}
Function ClearProxySettingPerUser () {
    # Get each user profile SID and path to the profile
    $UserProfiles = Get-ItemProperty "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList\*" | Where { $_.PSChildName -match "S-1-5-21-(\d+-?){4}$" } | Select-Object @{Name = "SID"; Expression = { $_.PSChildName } }, @{Name = "UserHive"; Expression = { "$($_.ProfileImagePath)\NTuser.dat" } }
    # We also grab the default user profile just in case the proxy settings have been changed in there, but they should not have been
    $DefaultProfile = "" | Select-Object SID, UserHive
    $DefaultProfile.SID = ".DEFAULT"
    $DefaultProfile.Userhive = "C:\Users\Public\NTuser.dat"
    $UserProfiles += $DefaultProfile
    # Loop through each profile we found on the host
    Foreach ($UserProfile in $UserProfiles) {
        # Load ntuser.dat if it's not already loaded
        If (($ProfileAlreadyLoaded = Test-Path Registry::HKEY_USERS\$($UserProfile.SID)) -eq $false) {
            Start-Process -FilePath "CMD.EXE" -ArgumentList "/C REG.EXE LOAD HKU\$($UserProfile.SID) $($UserProfile.UserHive)" -Wait -WindowStyle Hidden
            Write-Host -ForegroundColor Cyan "Loading hive" $UserProfile.UserHive "for user profile SID:" $UserProfile.SID
        }
        Else {
            Write-Host -ForegroundColor Cyan "Hive already loaded" $UserProfile.UserHive "for user profile SID:" $UserProfile.SID
        }
        #As we are using per machine settings, erase any proxy settings for this user.
        $proxyBytes = [system.Text.Encoding]::ASCII.GetBytes('')
        $bypassBytes = [system.Text.Encoding]::ASCII.GetBytes('')
        $defaultConnectionSettings = [byte[]]@(@(70, 0, 0, 0, 0, 0, 0, 0, 01, 0, 0, 0, $proxyBytes.Length, 0, 0, 0) + $proxyBytes + @($bypassBytes.Length, 0, 0, 0) + $bypassBytes + @(1..36 | % { 0 }))
        $registryPath = "Registry::HKEY_USERS\$($UserProfile.SID)\Software\Microsoft\Windows\CurrentVersion\Internet Settings"
        Set-ItemProperty -Path $registryPath -Name ProxyServer -Value ''
        Set-ItemProperty -Path $registryPath -Name ProxyEnable -Value 0
        Set-ItemProperty -Path "$registryPath\Connections" -Name DefaultConnectionSettings -Value $defaultConnectionSettings
        # Unload NTuser.dat if it wasn't loaded to begin with.
        If ($ProfileAlreadyLoaded -eq $false) {
            [gc]::Collect() #Clean up any open handles to the registry to avoid getting an "Access Denied" error.
            Start-Sleep -Seconds 2 #Give it some time
            #Unload the user profile, but only if we loaded it ourselves manually.
            Start-Process -FilePath "CMD.EXE" -ArgumentList "/C REG.EXE UNLOAD HKU\$($UserProfile.SID)" -Wait -WindowStyle Hidden | Out-Null
            Write-Host -ForegroundColor Cyan "Unloading hive" $UserProfile.UserHive "for user profile SID:" $UserProfile.SID
        }
    }
}
Function ClearProxySettingsPerMachine () {
    #As we are using per user settings, erase any proxy settings per machine
    $proxyBytes = [system.Text.Encoding]::ASCII.GetBytes('')
    $bypassBytes = [system.Text.Encoding]::ASCII.GetBytes('')
    $defaultConnectionSettings = [byte[]]@(@(70, 0, 0, 0, 0, 0, 0, 0, 01, 0, 0, 0, $proxyBytes.Length, 0, 0, 0) + $proxyBytes + @($bypassBytes.Length, 0, 0, 0) + $bypassBytes + @(1..36 | % { 0 }))
    $registryPath = "HKLM:\Software\Microsoft\Windows\CurrentVersion\Internet Settings"
    Set-ItemProperty -Path $registryPath -Name ProxyServer -Value ''
    Set-ItemProperty -Path $registryPath -Name ProxyEnable -Value 0
    Set-ItemProperty -Path "$registryPath\Connections" -Name DefaultConnectionSettings -Value $defaultConnectionSettings
    $registryPath = "HKLM:\Software\WOW6432Node\Microsoft\Windows\CurrentVersion\Internet Settings"
    Set-ItemProperty -Path $registryPath -Name ProxyServer -Value ''
    Set-ItemProperty -Path $registryPath -Name ProxyEnable -Value 0
    Set-ItemProperty -Path "$registryPath\Connections" -Name DefaultConnectionSettings -Value $defaultConnectionSettings
}
Set-InternetProxy $ProxyPerMachine $TurnProxyOnOff $ProxyServer $ProxyBypassList
Test the above script to configure the WinINET proxy server with PowerShell in a VM to verify its behaviour. I hope this helps somebody, and probably my future self as well.
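For a quick spot check after running the script with per-user settings, you can read the values back from the registry of the current user; this is just a hedged convenience snippet using the same value names the script writes, plus the WinHTTP view for comparison.
#Read back the per-user WinINET proxy values for the current user
Get-ItemProperty 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Internet Settings' | Select-Object ProxyServer, ProxyEnable
#WinHTTP keeps its own settings; show them for comparison
netsh winhttp show proxy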
In Shared nothing live migration with a virtual switch change and VLAN ID configuration I published a sample script. The script works well, but there are two areas of improvement. The first one is covered in Checkpoint references a non-existent virtual switch. This post is about the second one: I also need to check the virtual switch QoS mode during migrations. A couple of the virtual machines on the source nodes had absolute minimum and/or maximum bandwidth set. On the target nodes, all the virtual switches are created with PowerShell, which defaults to weight mode for QoS. That is the more sensible option, albeit not always the easiest or most practical one for people to use.
Virtual switch QoS mode during migrations
First, a quick recap of what we are doing. The challenge was to shared nothing live migrate virtual machines to a host with different virtual switch names and VLAN IDs. We did so by adding dummy virtual switches to the target host, which made shared nothing live migration possible. On arrival of the virtual machine on the target host, we immediately connect the virtual network adapters to the final virtual switch and set the correct VLAN IDs. That works very well. You drop 1 or at most 2 pings, which is as good as it gets.
This goes wrong under the following conditions:
The source virtual switch has QoS mode absolute.
A virtual network adapter connected to the source virtual switch has MinimumBandwidthAbsolute and/or MaximumBandwidth set.
The target virtual switch has QoS mode weighted.
This causes connectivity loss, as you cannot set absolute values on a virtual network adapter attached to a weighted virtual switch. Connecting the virtual network adapter to the new virtual switch just fails and you lose connectivity. Remember that the virtual machine is connected to a dummy virtual switch just to make the live migration work, and we need to swap it over immediately. The VLAN ID does get set correctly, actually. Let's deal with this.
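A quick way to see whether you are in this situation is to check the QoS mode of the source virtual switch and the bandwidth settings of the attached virtual network adapters. This is a small sketch run on the source host, using the same cmdlets and properties as the script below; the switch name is just an example.
#Check the QoS (bandwidth reservation) mode of the source virtual switch
(Get-VMSwitch -Name 'vSwitch-VLAN500').BandwidthReservationMode
#List any absolute bandwidth settings on the adapters attached to that switch
Get-VMNetworkAdapter -VMName * | Where-Object SwitchName -eq 'vSwitch-VLAN500' |
    Select-Object VMName, Name,
        @{Name = 'MinimumBandwidthAbsolute'; Expression = { $_.BandwidthSetting.MinimumBandwidthAbsolute } },
        @{Name = 'MaximumBandwidth'; Expression = { $_.BandwidthSetting.MaximumBandwidth } }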
Steps to fix this issue
First of all, we adapt the script to check the QoS mode on the source virtual switches. If it is set to absolute we know we need to check for any settings of MinimumBandwidthAbsolute and MaximumBandwidth on the virtual adapters connected to those virtual switches. These changes are highlighted in the demo code below.
Secondly, we adapt the script to check every virtual network adapter for its bandwidth management settings. If we find configured MinimumBandwidthAbsolute and MaximumBandwidth values we set these to 0 and as such disable the bandwidth settings. This makes sure that connecting the virtual network adapters to the new virtual switch with QoS mode weighted will succeed. These changes are highlighted in the demo code below.
Finally, the complete script
#The source Hyper-V host
$SourceNode = 'NODE-A'
#The LUN where you want to storage migrate your VMs away from
$SourceRootPath = "C:\ClusterStorage\Volume1*"
#The target Hyper-V host
$TargetNode = 'ZULU'
#The storage path where you want to storage migrate your VMs to
$TargetRootPath = "C:\ClusterStorage\Volume1"
$OldVirtualSwitch01 = 'vSwitch-VLAN500'
$OldVirtualSwitch02 = 'vSwitch-VLAN600'
$NewVirtualSwitch = 'ConvergedVirtualSwitch'
$VlanId01 = 500
$VlanId02 = 600
if ((Get-VMSwitch -Name $OldVirtualSwitch01).BandwidthReservationMode -eq 'Absolute') {
    $OldVirtualSwitch01QoSMode = 'Absolute'
}
if ((Get-VMSwitch -Name $OldVirtualSwitch02).BandwidthReservationMode -eq 'Absolute') {
    $OldVirtualSwitch02QoSMode = 'Absolute'
}
#Grab all the VMs we find that have virtual disks on the source CSV - WARNING for W2K12 you'll need to loop through all cluster nodes.
$AllVMsOnRootPath = Get-VM -ComputerName $SourceNode | where-object { $_.HardDrives.Path -like $SourceRootPath }
#We loop through all VMs we find on our SourceRootPath
ForEach ($VM in $AllVMsOnRootPath) {
    #We generate the final VM destination path
    $TargetVMPath = $TargetRootPath + "\" + ($VM.Name).ToUpper()
    #Grab the VM name
    $VMName = $VM.Name
    $VM.VMid
    $VMName
    #If the VM is still clustered, get it removed from the cluster as shared nothing live migration will otherwise fail.
    if ($VM.isclustered -eq $True) {
        write-Host -ForegroundColor Magenta $VM.Name "is clustered and is being removed from cluster"
        Remove-ClusterGroup -VMId $VM.VMid -Force -RemoveResources
        Do { Start-Sleep -seconds 1 } While ($VM.isclustered -eq $True)
        write-Host -ForegroundColor Yellow $VM.Name "has been removed from cluster"
    }
    #If the VM has checkpoints, notify the user of the script as this will cause issues after switching to the new virtual
    #switch on the target node. Live migration will fail between cluster nodes if the checkpoints reference 1 or more
    #non-existent virtual switches. These must be removed prior to or after completing the shared nothing migration.
    #The script does this after the migration automatically, not before, as I want it to be untouched if the shared nothing
    #migration fails.
    $checkpoints = get-vmcheckpoint -VMName $VM.Name
    if ($Null -ne $checkpoints) {
        write-host -foregroundcolor yellow "This VM has checkpoints"
        write-host -foregroundcolor yellow "This VM will be migrated to the new host"
        write-host -foregroundcolor yellow "Only after a successful migration will ALL the checkpoints be removed"
    }
    #Do the actual storage migration of the VM, $DestinationVMPath creates the default subfolder structure
    #for the virtual machine config, snapshots, smartpaging & virtual hard disk files.
    Move-VM -Name $VMName -ComputerName $VM.ComputerName -IncludeStorage -DestinationStoragePath $TargetVMPath -DestinationHost $TargetNode
    $MovedVM = Get-VM -ComputerName $TargetNode -Name $VMName
    $vNICOnOldvSwitch01 = Get-VMNetworkAdapter -ComputerName $TargetNode -VMName $MovedVM.VMName | where-object SwitchName -eq $OldVirtualSwitch01
    if ($Null -ne $vNICOnOldvSwitch01) {
        foreach ($VMNetworkAdapter in $vNICOnOldvSwitch01) {
            if ($OldVirtualSwitch01QoSMode -eq 'Absolute') {
                if (0 -ne $VMNetworkAdapter.BandwidthSetting.MaximumBandwidth) {
                    write-host -foregroundcolor cyan "Network adapter $($VMNetworkAdapter.Name) of VM $VMName MaximumBandwidth will be reset to 0."
                    Set-VMNetworkAdapter -Name $VMNetworkAdapter.Name -VMName $MovedVM.Name -ComputerName $TargetNode -MaximumBandwidth 0
                }
                if (0 -ne $VMNetworkAdapter.BandwidthSetting.MinimumBandwidthAbsolute) {
                    write-host -foregroundcolor cyan "Network adapter $($VMNetworkAdapter.Name) of VM $VMName MinimumBandwidthAbsolute will be reset to 0."
                    Set-VMNetworkAdapter -Name $VMNetworkAdapter.Name -VMName $MovedVM.Name -ComputerName $TargetNode -MinimumBandwidthAbsolute 0
                }
            }
            write-host 'Moving to correct vSwitch'
            Connect-VMNetworkAdapter -VMNetworkAdapter $vNICOnOldvSwitch01 -SwitchName $NewVirtualSwitch
            write-host "Setting VLAN $VlanId01"
            Set-VMNetworkAdapterVlan -VMNetworkAdapter $vNICOnOldvSwitch01 -Access -VLANid $VlanId01
        }
    }
    $vNICsOnOldvSwitch02 = Get-VMNetworkAdapter -ComputerName $TargetNode -VMName $MovedVM.VMName | where-object SwitchName -eq $OldVirtualSwitch02
    if ($NULL -ne $vNICsOnOldvSwitch02) {
        foreach ($VMNetworkAdapter in $vNICsOnOldvSwitch02) {
            if ($OldVirtualSwitch02QoSMode -eq 'Absolute') {
                if ($Null -ne $VMNetworkAdapter.BandwidthSetting.MaximumBandwidth) {
                    write-host -foregroundcolor cyan "Network adapter $($VMNetworkAdapter.Name) of VM $VMName MaximumBandwidth will be reset to 0."
                    Set-VMNetworkAdapter -Name $VMNetworkAdapter.Name -VMName $MovedVM.Name -ComputerName $TargetNode -MaximumBandwidth 0
                }
                if ($Null -ne $VMNetworkAdapter.BandwidthSetting.MinimumBandwidthAbsolute) {
                    write-host -foregroundcolor cyan "Network adapter $($VMNetworkAdapter.Name) of VM $VMName MinimumBandwidthAbsolute will be reset to 0."
                    Set-VMNetworkAdapter -Name $VMNetworkAdapter.Name -VMName $MovedVM.Name -ComputerName $TargetNode -MinimumBandwidthAbsolute 0
                }
            }
            write-host 'Moving to correct vSwitch'
            Connect-VMNetworkAdapter -VMNetworkAdapter $vNICsOnOldvSwitch02 -SwitchName $NewVirtualSwitch
            write-host "Setting VLAN $VlanId02"
            Set-VMNetworkAdapterVlan -VMNetworkAdapter $vNICsOnOldvSwitch02 -Access -VLANid $VlanId02
        }
    }
    #If the VM has checkpoints, this is when we remove them.
    $checkpoints = get-vmcheckpoint -ComputerName $TargetNode -VMName $MovedVM.VMName
    if ($Null -ne $checkpoints) {
        write-host -foregroundcolor yellow "This VM has checkpoints and they will ALL be removed"
        $CheckPoints | Remove-VMCheckpoint
    }
}
Below is the output of a VM where we had to change the switch name, enable a VLAN ID, deal with absolute QoS settings and remove checkpoints. All this without causing downtime. Nor did we change the original virtual machine in case the shared nothing migration fails.
Some observations
The fact that we are using PowerShell is great. You can only set weighted bandwidth limits via PowerShell. The GUI only handles absolute values and will throw an error if you try to use it when the virtual switch is configured as weighted.
This means you can embed setting the weights in your script if you so desire (see the sketch below). If you do, read up on how to handle this best. Trying to juggle the weight settings to total 100 in a dynamic environment is a bit of a challenge. So use the default flow pool and keep the number of virtual network adapters with unique settings to a minimum.
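As an illustration of setting a weight instead of absolute limits, something along these lines works against a weighted virtual switch; the VM name and the weight value are placeholders, not part of the original script.
#Give a hypothetical VM's network adapter a relative bandwidth weight (1-100) on a weighted virtual switch
Set-VMNetworkAdapter -VMName 'DNS01' -ComputerName $TargetNode -MinimumBandwidthWeight 10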
Conclusion
To avoid downtime we removed all the configured minimum and maximum bandwidth settings on any virtual network adapter. By doing so we ensured that the swap to the new virtual switch right after the successful shared nothing live migration will succeed. If you want, you can set weights on the virtual network adapters afterward. But as the bandwidth on these new hosts is now a redundant 25 Gbps, the need was no longer there. As a result, we just left them without. This can always be configured later if it turns out to be needed.
Warning: this is a demo script. It lacks error handling and logging. It can also contain mistakes. But hey, you get it for free to adapt and use. Test and adapt this in a lab. You are responsible for what you do in your environments. Running scripts downloaded from the internet without any validation makes you a certified nut case. That is not my wrongdoing.
I hope this helps some of you. Thanks for reading.
I was working on a hardware refresh, consolidation, and upgrade to Windows Server 2019 project. This mainly boils down to cluster operating system rolling upgrades from Windows Server 2016 to Windows Server 2019, with new servers replacing the old ones. Pretty straightforward. So what does this have to do with shared nothing live migration with a virtual switch change and VLAN ID configuration?
Due to the consolidation aspect, we also had to move virtual machines from some older clusters to the new clusters. The old cluster nodes have multiple virtual switches that connect to different VLANs. Some of the virtual machines have only one virtual network adapter that connects to one of the virtual switches. Many of the virtual machines are multihomed; the number of virtual NICs per virtual machine was anything between 1 and 3. For this purpose, we had the challenge of doing a shared nothing live migration with a virtual switch change and VLAN ID configuration. All this without downtime.
Meeting the challenge
In the new cluster, there is only one converged virtual switch. This virtual switch attaches to trunked network ports with all the required VLANs. As we have only one virtual switch on the new Hyper-V cluster nodes, the name differs from those on the old Hyper-V cluster nodes. This prevents live migration. Fixing this is our challenge.
First of all, Compare-VM is your friend to find out blocking incompatibilities between the source and the target nodes. You can read about that in many places. Here, we focus on our challenge.
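For completeness, this is roughly what such a compatibility check looks like; the VM name is a placeholder and the variables match the sample script further down.
#Ask Hyper-V what would block a move of a hypothetical VM to the target node
$Report = Compare-VM -Name 'DNS01' -DestinationHost $TargetNode -IncludeStorage -DestinationStoragePath $TargetRootPath
#Any blocking issues, such as missing virtual switches, show up here
$Report.Incompatibilities | Format-Table Message, MessageId -AutoSize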
Making Shared nothing migration work
The first step is to make sure shared nothing migration works. We can achieve this in several ways.
Option 1
We can disconnect the virtual machine network adapters from their virtual switch. While this allows you to migrate the virtual machines, this leads to connectivity loss. This is not acceptable.
Option 2
We can preemptively connect the virtual machine network adapters to a virtual switch with the same name as the one on the target and enable the VLAN ID. Consequently, this means you have to create those switches and need NICs to do so. But unless you configure and connect those to the network just like on the new Hyper-V hosts, this also leads to connectivity loss. That was not possible in this case, so this option is again unacceptable.
Option 3
What I did was create dummy virtual switches on the target hosts. For this purpose, I used some spare LOM NICs. I did not configure them otherwise; as a matter of fact, I did not even connect them. Just the fact that they exist with the same names as on the old Hyper-V hosts is sufficient to make shared nothing migration possible. Actually, this is a great point to remind ourselves that we don't even need spare NICs. Dummy private virtual switches that are not even attached to a NIC will also do.
After we have finished the migrations we just delete the dummy virtual switches. That is all there is to do if you used private ones. If you used spare NICs, just disable them again. Now all is as it was and should be on the new cluster nodes. A short sketch of this follows below.
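Creating and later removing such a dummy private virtual switch is a one-liner each; the switch names below match the sample script and are otherwise just placeholders.
#Create dummy private virtual switches with the same names as on the old hosts (no NIC required)
New-VMSwitch -Name 'vSwitch-VLAN500' -SwitchType Private
New-VMSwitch -Name 'vSwitch-VLAN600' -SwitchType Private
#Once all migrations are done and the adapters are reconnected, remove the dummies again
Remove-VMSwitch -Name 'vSwitch-VLAN500', 'vSwitch-VLAN600' -Force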
Turning shared nothing migration into shared nothing live migration
Remember, we need zero downtime. Keep in mind that as long as the shared nothing live migration is running, all is well; we have connectivity to the original virtual machines on the old cluster nodes. As soon as the shared nothing live migration finishes we do 2 things. First of all, we connect the virtual network adapters of the virtual machines to the new converged virtual switch. Also, we enable the VLAN ID. To achieve this, we script it out in PowerShell. As a result, it is so fast that we drop only 1 or 2 pings, just like with a standard live migration.
Below you can find a conceptual script you can adapt for your own purposes. For real migrations, add logging and error handling. Please note that to leverage shared nothing migration you need to be aware of the security requirements. Credential Security Support Provider (CredSSP) is the default option. If you want or must use Kerberos, you must configure constrained delegation in Active Directory.
I chose to use CredSSP as we would decommission the old hosts soon afterward anyway. It also means we did not need Active Directory work done, which can be handy if that is not evident in the environment you are in. We started the script on every source Hyper-V host, migrating a bunch of VMs to a new Hyper-V host. This works very well for us. Hope this helps.
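If live migration is not yet enabled on the hosts, the host-side settings for the CredSSP scenario look roughly like this; a minimal sketch, run on both source and target.
#Enable live migration on the host and use CredSSP authentication (the default)
Enable-VMMigration
Set-VMHost -VirtualMachineMigrationAuthenticationType CredSSP
#Remember that with CredSSP you must start the migration from a session on the source host itself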
Sample Script
#The source Hyper-V host
$SourceNode = 'NODE-A'
#The LUN where you want to storage migrate your VMs away from
$SourceRootPath = "C:\ClusterStorage\Volume1*"
#The target Hyper-V host
$TargetNode = 'ZULU'
#The storage path where you want to storage migrate your VMs to
$TargetRootPath = "C:\ClusterStorage\Volume1"
$OldVirtualSwitch01 = 'vSwitch-VLAN500'
$OldVirtualSwitch02 = 'vSwitch-VLAN600'
$NewVirtualSwitch = 'ConvergedVirtualSwitch'
$VlanId01 = 500
$VlanId02 = 600
#Grab all the VMs we find that have virtual disks on the source CSV - WARNING for W2K12 you'll need to loop through all cluster nodes.
$AllVMsOnRootPath = Get-VM -ComputerName $SourceNode | where-object { $_.HardDrives.Path -like $SourceRootPath }
#We loop through all VMs we find on our SourceRootPath
ForEach ($VM in $AllVMsOnRootPath) {
    #We generate the final VM destination path
    $TargetVMPath = $TargetRootPath + "\" + ($VM.Name).ToUpper()
    #Grab the VM name
    $VMName = $VM.Name
    $VM.VMid
    $VMName
    if ($VM.isclustered -eq $True) {
        write-Host -ForegroundColor Magenta $VM.Name "is clustered and is being removed from cluster"
        Remove-ClusterGroup -VMId $VM.VMid -Force -RemoveResources
        Do { Start-Sleep -seconds 1 } While ($VM.isclustered -eq $True)
        write-Host -ForegroundColor Yellow $VM.Name "has been removed from cluster"
    }
    #Do the actual storage migration of the VM, $DestinationVMPath creates the default subfolder structure
    #for the virtual machine config, snapshots, smartpaging & virtual hard disk files.
    Move-VM -Name $VMName -ComputerName $VM.ComputerName -IncludeStorage -DestinationStoragePath $TargetVMPath -DestinationHost $TargetNode
    $MovedVM = Get-VM -ComputerName $TargetNode -Name $VMName
    $OldvSwitch01 = Get-VMNetworkAdapter -ComputerName $TargetNode -VMName $MovedVM.VMName | where-object SwitchName -eq $OldVirtualSwitch01
    if ($Null -ne $OldvSwitch01) {
        foreach ($VMNetworkAdapter in $OldvSwitch01) {
            write-host 'Moving to correct vSwitch'
            Connect-VMNetworkAdapter -VMNetworkAdapter $OldvSwitch01 -SwitchName $NewVirtualSwitch
            write-host "Setting VLAN $VlanId01"
            Set-VMNetworkAdapterVlan -VMNetworkAdapter $OldvSwitch01 -Access -VLANid $VlanId01
        }
    }
    $OldvSwitch02 = Get-VMNetworkAdapter -ComputerName $TargetNode -VMName $MovedVM.VMName | where-object SwitchName -eq $OldVirtualSwitch02
    if ($NULL -ne $OldvSwitch02) {
        foreach ($VMNetworkAdapter in $OldvSwitch02) {
            write-host 'Moving to correct vSwitch'
            Connect-VMNetworkAdapter -VMNetworkAdapter $OldvSwitch02 -SwitchName $NewVirtualSwitch
            write-host "Setting VLAN $VlanId02"
            Set-VMNetworkAdapterVlan -VMNetworkAdapter $OldvSwitch02 -Access -VLANid $VlanId02
        }
    }
}