Connect to an Azure VM via Bastion with native RDP using only Azure PowerShell

Connecting to an Azure VM via Bastion with native RDP using only Azure PowerShell requires a custom solution. By default, the user must leverage Azure CLI. It also requires the user to know the Bastion subscription and the resource ID of the virtual machine. That is all fine for an IT pro or developer, but it is a bit much to handle for a knowledge worker.

That is why I wanted to automate things for those users and hide that complexity from them. One requirement was that the solution had to work on a Windows client on which the user has no administrative rights. So, for those use cases, I wrote a PowerShell script that takes care of everything for the end user. Hence, we chose to leverage the Azure PowerShell modules, which can be installed for the current user without administrative rights if needed. Great idea, but that left us with two challenges to deal with. These I will discuss below.

A custom PowerShell Script

The user must have the rights to connect to their virtual machine in Azure over the (central) Bastion deployment. The required roles are listed below; a sketch of assigning them follows the list. See Connect to a VM using Bastion – Windows native client for more information.

  • Reader role on the virtual machine.
  • Reader role on the NIC with private IP of the virtual machine.
  • Reader role on the Azure Bastion resource.
  • Optionally, the Virtual Machine Administrator Login or Virtual Machine User Login role.
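
As a hedged sketch (not part of the original script; the user and the scope variables are assumptions), assigning those Reader roles with Az PowerShell could look like this:

#A sketch: grant the Reader roles the user needs. $VmResourceId, $NicResourceId,
#and $BastionResourceId are assumed to hold the full Azure resource IDs.
$UserPrincipalName = 'knowledge.worker@example.com' #hypothetical user
New-AzRoleAssignment -SignInName $UserPrincipalName -RoleDefinitionName 'Reader' -Scope $VmResourceId
New-AzRoleAssignment -SignInName $UserPrincipalName -RoleDefinitionName 'Reader' -Scope $NicResourceId
New-AzRoleAssignment -SignInName $UserPrincipalName -RoleDefinitionName 'Reader' -Scope $BastionResourceId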

When these permissions are in place, the script generates an RDP file for them on the desktop. The script also launches the RDP session, in which they authenticate via Azure MFA to the Bastion host and with their VM credentials to the virtual machine. The script removes the RDP file after they close the RDP session. The complete sample code can be found here on GitHub.

I don’t want to rely on Azure CLI

Microsoft uses Azure CLI to connect to an Azure VM via Bastion with native RDP. We do not control what gets installed on those clients, and if an installation requires administrative rights, that can be an issue. There are tricks with Python to get Azure CLI installed for a user, but again, we are dealing with non-technical profiles here.

So, is there a way to get around the requirement to use Azure CLI? Yes, there is! Let's dive into the Azure CLI code and see what it does. As it turns out, it is all Python! We need to dive into the extension for Bastion, and after sniffing around and wrapping my brain around it, I concluded that these lines contain the magic needed to create a PowerShell-only solution.

See for yourself over here: azure-cli-extensions/src/bastion/azext_bastion/custom.py at d3bc6dc03bb8e9d42df8c70334b2d7e9a2e38db0 · Azure/azure-cli-extensions · GitHub

In PowerShell, that translates into the code below. One thing to note: for this code to work with Windows PowerShell 5.1, we cannot use "keep-alive" as the connection setting. PowerShell Core does support this setting, but it is not installed on Windows by default.

# Connect & authenticate to the correct tenant and to the Bastion subscription
Connect-AzAccount -Tenant $TenantId -Subscription $BastionSubscriptionId | Out-Null

#Grab the Azure access token
$AccessToken = (Get-AzAccessToken).Token
If (!([string]::IsNullOrEmpty($AccessToken))) {
    #Grab your centralized Bastion host
    try {
        $Bastion = Get-AzBastion -ResourceGroupName $BastionResoureGroup -Name $BastionHostName
        if ($Null -ne $Bastion) {
            Write-Host -ForegroundColor Cyan "Connected to Bastion $($Bastion.Name)"
            Write-Host -ForegroundColor Yellow "Generating RDP file for you on the desktop..."
            $target_resource_id = $VmResourceId
            $enable_mfa = "true"
            $bastion_endpoint = $Bastion.DnsName
            $resource_port = "3389"

            #Compose the Bastion API call that returns the RDP file
            $url = "https://$($bastion_endpoint)/api/rdpfile?resourceId=$($target_resource_id)&format=rdp&rdpport=$($resource_port)&enablerdsaad=$($enable_mfa)"

            $headers = @{
                "Authorization"   = "Bearer $($AccessToken)"
                "Accept"          = "*/*"
                "Accept-Encoding" = "gzip, deflate, br"
                #"Connection" = "keep-alive" #keep-alive and close are not supported with PoSh 5.1
                "Content-Type"    = "application/json"
            }

            #Build a unique RDP file name on the user's desktop
            $DesktopPath = [Environment]::GetFolderPath("Desktop")
            $DateStamp = Get-Date -Format yyyy-MM-dd
            $TimeStamp = Get-Date -Format HHmmss
            $DateAndTimeStamp = $DateStamp + '@' + $TimeStamp
            $RdpPathAndFileName = "$DesktopPath\$AzureVmName-$DateAndTimeStamp.rdp"
            $progressPreference = 'SilentlyContinue'
        }
        else {
            Write-Host -ForegroundColor Red "We could not connect to the Azure Bastion host"
        }
    }
    catch {
        #Handle a terminating exception
        Write-Host -ForegroundColor Red "An error occurred while connecting to the Bastion host."
        $Error[0]
    }
    finally {
        #Runs after the try block, whether an exception occurred or not
    }
}
else {
    Write-Host -ForegroundColor Red "We could not get an Azure access token"
}

Finding the resource id for the Azure VM by looping through subscriptions is slow

As I build a solution for a Windows client, I am not considering leveraging a tunnel connection (see Connect to a VM using Bastion – Windows native client). I “merely” want to create a functional RDP file the user can leverage to connect to an Azure VM via Bastion with native RDP.

Therefore, to make life as easy as possible for the user, we want to hide any complexity from them as much as possible. Hence, I can only expect them to know the virtual machine's name in Azure. And if required, we can even put that in the script for them.

But no matter what, we need to find the virtual machine’s resource ID.

Azure Resource Graph to the rescue! We can leverage the code below, and even when you have to search hundreds of subscriptions, it is far more performant than Azure PowerShell's Get-AzVM, which needs to loop through all subscriptions. This leads to less waiting and a better experience for your users. The Az.ResourceGraph module can also be installed without administrative rights for the current user.

$VMToConnectTo = Search-AzGraph -Query "Resources | where type == 'microsoft.compute/virtualmachines' and name == '$AzureVmName'" -UseTenantScope

Note the use of -UseTenantScope, which ensures we search the entire tenant even if the current context is scoped to a subset of subscriptions.
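
From that result, we can grab the resource ID that the Bastion API call above needs. A minimal sketch (the Resource Graph output exposes the resource ID as the id column; some Az.ResourceGraph versions surface it as ResourceId instead):

#Grab the resource ID of the virtual machine from the Resource Graph result
$VmResourceId = $VMToConnectTo.id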

Creating the RDP file to connect to an Azure Virtual Machine over the bastion host

Next, I create the RDP file via a web request, which writes the result to a file on the desktop. From there, we launch it, and the user can authenticate to the Bastion host (with MFA) and then to the virtual machine with the appropriate credentials.

try {
    $progressPreference = 'SilentlyContinue'
    #Request the RDP file from the Bastion API and write it to the desktop
    Invoke-WebRequest $url -Method Get -Headers $headers -OutFile $RdpPathAndFileName -UseBasicParsing
    $progressPreference = 'Continue'

    if (Test-Path $RdpPathAndFileName -PathType Leaf) {
        Start-Process $RdpPathAndFileName -Wait
        Write-Host -ForegroundColor Magenta "Deleting the RDP file after use."
        Remove-Item $RdpPathAndFileName
        Write-Host -ForegroundColor Magenta "Deleted $RdpPathAndFileName."
    }
    else {
        Write-Host -ForegroundColor Red "The RDP file was not found on your desktop and, hence, could not be deleted."
    }
}
catch {
    Write-Host -ForegroundColor Red "An error occurred during the creation of the RDP file."
    $Error[0]
}
finally {
    $progressPreference = 'Continue'
}

Finally, when the user is done, the file is deleted. A new one will be created the next time the script is run. This protects against stale tokens and such.

Pretty it up for the user

I create a shortcut and rename it to something sensible for the user. Next, I change the icon to the provided one, which helps visually distinguish the shortcut from any other PowerShell script shortcut. They can copy that shortcut wherever suits them or pin it to the taskbar.
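
A hedged sketch of what creating such a shortcut could look like (the paths and names are assumptions), using the WScript.Shell COM object, which works without administrative rights:

#A sketch: create a desktop shortcut to the script with a custom icon
$Shell = New-Object -ComObject WScript.Shell
$Shortcut = $Shell.CreateShortcut("$([Environment]::GetFolderPath('Desktop'))\Connect to MyVM.lnk")
$Shortcut.TargetPath = 'powershell.exe'
$Shortcut.Arguments = '-ExecutionPolicy Bypass -File "C:\Scripts\ConnectToAzureVmOverBastion.ps1"' #hypothetical script path
$Shortcut.IconLocation = 'C:\Scripts\rdp.ico' #hypothetical icon file
$Shortcut.Save()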


ConvertFrom-Json is not serializable

Introduction

While writing Bicep recently, I was stumped by the fact that my deployment kept failing. I spent a lot of time troubleshooting many possible causes. As JSON is involved, and I am far from a JSON syntax guru, I first focused on that. Later, I moved on to how I use JSON in Bicep and PowerShell, before finally understanding that the problem was that the output of ConvertFrom-Json is not serializable.

Parameters with Bicep

When deploying resources in Azure with Bicep, I always need to consider who has to deliver or maintain the code and the parameters. It has to be somewhat structured, readable, and understandable. It can’t be one gigantic listing that confuses people to the point they are lost. Simplicity and ease of use rule my actions here. I know when it comes to IaC, this can be a challenge. So, when it comes to parameters, what are our options here?

  • I avoid hard-coding parameters in Bicep. It’s OK for testing while writing the code, but beyond that, it is a bad idea for maintainability.
  • You can use parameter files. That is a considerable improvement, but it has its limitations.
  • I have chosen the path of leveraging PowerShell to create and maintain parameters and pass those via objects to the main Bicep file for deployment. That is a flexible and maintainable approach. Sure, it is not perfect either, but neither am I.

Regarding Bicep and PowerShell, we can also put parameters in separate files and read those to create parameters. Whether this is a good idea depends on the situation. My rule of thumb is that it is worth doing when things become easier to read and maintain while reducing the places where you have to edit your IaC files. In the case of Azure Firewall Policy Rules Collection Groups, Rules collections, and Rules, it can make sense.

Bicep and JSON files

You can read file content in Bicep using loadTextContent(). With the json() function, you can tell Bicep that the content is JSON. So far, so good. The below is perfectly fine and works. We can loop through that variable in a resource deployment.

var firewallChildRGCs = [
  json(loadTextContent('./AFW/Policies/RGSsAfwChild01.json'))
  json(loadTextContent('./AFW/Policies/RGSsAfwChild02.json'))
  json(loadTextContent('./AFW/Policies/RGSsAfwChild03.json'))
]

However, I am not entirely happy with this. While I like it in some respects, it conflicts with my desire to avoid editing a working Bicep file once it is in use. So what do I like about it?

It keeps Bicep clean and concise and limits the looping to iterating over the Rule Collection Groups, thus avoiding nested looping for Rule Collections and Rules. Why is that? Because I can do this:

@batchSize(1)
resource firewallChildPolicyWEUColGroups 'Microsoft.Network/firewallPolicies/ruleCollectionGroups@2022-07-01' = [for (childrcg, index) in firewallChildRGCs: {
  parent: firewallChildPolicyWEU
  name: childrcg.name
  dependsOn: [firewallParentPolicyWEUColGroups]
  properties: childrcg.properties
}]

As you can see, I loop through the variable and pass the JSON into the properties. That way, I create all Rule Collections and Rules without needing to do any nested looping via “helper” modules to get this done.

The drawback, however, is that the loadTextContent function in Bicep cannot use dynamic parameters or variables. As a result, the paths to the files need to be hard-coded into the Bicep file, which is something we want to avoid. But until that changes, it is a hard restriction. That is because parameters are evaluated at runtime (bicep deployment), whereas loadTextContent happens at compile time (bicep build). In contrast to the early previews of Bicep, where you "transpiled" the Bicep manually, this now happens automatically before the deployment. You might think dynamic paths could work, but they do not.

PowerShell and JSON files

As mentioned above, I chose to use PowerShell to create and maintain parameters, and I want to read my JSON files there. This saves me from creating large, long, and complex-to-maintain PowerShell objects with nested arrays. Editing those is not straightforward for everyone. On top of that, it leads to the need for nested looping in Bicep via "helper" modules. While that works, and I use it, I find it more tedious with deeply nested structures and many parameters to supply. Hence, I am splitting things out into easier-to-maintain separate JSON files.

Here is what I do in PowerShell to build my array to pass via an Object parameter. First, I read the JSON files from my folder.

$ChildFilePath = "../bicep/nested/AfwChildPoliciesAndRules/*"
$Files = Get-ChildItem -File $ChildFilePath -Include '*.json' -Exclude 'DONOTUSE*.json'
$Files
#We use this array with ConvertFrom-Json to validate that the JSON files are OK,
#but we cannot use it to pass as a param to Bicep
$AfwChildCollectionGroupsValidate = @()
$AfwChildCollectionGroups = @()
Foreach ($File in $Files) {
    try {
        $AfwChildCollectionGroupsValidate += (Get-Content $File.FullName -Raw) | ConvertFrom-Json
        #DO NOT PUT JSON in here - the PSCustomObject is not serializable and passing this param to Bicep will then be empty!
        $AfwChildCollectionGroups += (Get-Content $File.FullName -Raw) #A string is serializable!
    }
    Catch {
        Write-Host -ForegroundColor Red "ConvertFrom-Json threw an error. Check your JSON in the RCG/RC/R files."
        Exit
    }
}

I can then use this to roll out the resources, as in the below example.

// Roll out the child Rule Collection Group(s)
var ChildRCGs = [for (rulecol, index) in firewallChildpolicy.RuleCollectionGroups: {
  name: json(rulecol).name
  properties: json(rulecol).properties
}]
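
For completeness, a minimal sketch (resource group, template path, and parameter layout are assumptions) of how the array of JSON strings travels from PowerShell into that Bicep loop:

#A sketch: pass the array of JSON *strings* to Bicep inside an object parameter
$params = @{
    ResourceGroupName       = 'rg-afw-weu'          #hypothetical
    TemplateFile            = '../bicep/main.bicep' #hypothetical
    TemplateParameterObject = @{
        firewallChildpolicy = @{ RuleCollectionGroups = $AfwChildCollectionGroups }
    }
}
New-AzResourceGroupDeployment @params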

Initially, the idea was that by using ConvertFrom-Json I would pass the JSON to Bicep as a parameter directly.

$AfwChildCollectionGroups += (Get-Content $File.FullName -Raw) | ConvertFrom-Json

So not only would I not need to load the files in Bicep with a hard-coded path, I would also not need to use the json() function in Bicep.

// Roll out the child Rule Collection Group(s)
var ChildRCGs = [for (rulecol, index) in firewallChildpolicy.RuleCollectionGroups: {
  name: rulecol.name
  properties: rulecol.properties
}]

However, this failed on me time and time again, with properties not being found and whatnot. Below is an example of such an error.

Line |
  30 |          New-AzResourceGroupDeployment @params -DeploymentDebugLogLeve …
     |          ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
     | 2:46:20 PM - Error: Code=InvalidTemplate; Message=Deployment template validation failed: 'The template variable 'ChildRCGs' is not valid: The language expression property 'name' doesn't exist, available properties are ''.. Please see    
     | https://aka.ms/arm-functions for usage details.'.

It did not make sense at all. That was until a dev buddy asked if the object was serializable at all. And guess what? ConvertFrom-Json creates a PSCustomObject that is NOT serializable.

You can check this quickly yourself.

((Get-Content $File.FullName -Raw) | ConvertFrom-Json).gettype().IsSerializable

Will print False

While

((Get-Content $File.FullName -Raw)).gettype().IsSerializable

Will print True

With some more testing and the use of outputs, I can even visualize that the parameter remained empty! The array contains three empty {} where I expected the JSON.


I usually do not have any issues with this in my pure PowerShell scripting. But here, I pass the object from PowerShell to Bicep, and for that to work, it has to be serializable. When I do this, there are no warnings or errors; it just seems to work, until you use the parameter and get errors that, at first, I did not understand. The root cause is that in Bicep, the parameter remained empty. Needless to say, I wasted many hours trying to fix this before I finally understood the root cause!

As you can see in the code, I still use ConvertFrom-Json to test if my JSON files contain any errors, but I do not pass that JSON to Bicep as that will not work. So instead, I pass the string and still use the json() function in Bicep.
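
If you ever do need to manipulate the JSON in PowerShell before passing it on, a round trip back to a string keeps the parameter serializable. A sketch (the edit itself is hypothetical):

$Object = (Get-Content $File.FullName -Raw) | ConvertFrom-Json
$Object.name = "$($Object.name)-patched" #hypothetical edit
$AfwChildCollectionGroups += ($Object | ConvertTo-Json -Depth 100) #a string again, so serializable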

Hence, this blog post is to help others not make the mistake I made. It will also help me remember ConvertFrom-Json is not serializable.

SecretStore local vault extension

What is the SecretStore local vault extension

The SecretStore local vault extension is a PowerShell module extension vault for Microsoft.PowerShell.SecretManagement. It is a secure storage solution that stores secret data on the local machine. It is based on .NET cryptography APIs and works on Windows, Linux, and macOS thanks to PowerShell Core.

The secret data is stored at rest in encrypted form on the file system and decrypted when returned to a user request. The store file data integrity is verified using a cryptographic hash embedded in the file.

The store can be configured to require a password or to operate password-less. Requiring a password adds to defense-in-depth, since password-less operation relies solely on file system protections. Password-less operation still encrypts the data, but the encryption key is stored in a file and is accessible. Another configuration option is the password timeout, which defaults to 15 minutes. For automation purposes, you can use Unlock-SecretStore to enter the password for the current PowerShell session for the duration of the timeout period.
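
As a sketch (900 seconds is the 15-minute default), configuring and unlocking the store for an automation run could look like this:

#Require a password, set a 15-minute timeout, and disable interactive prompting
Set-SecretStoreConfiguration -Authentication Password -PasswordTimeout 900 -Interaction None
#Unlock once for the current session; secrets can then be read until the timeout expires
Unlock-SecretStore -Password (Read-Host -AsSecureString 'SecretStore password')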

Testing the SecretStore local vault extension

Below, you will find a demonstration script where I register a vault of the type SecretStore. This is a local vault extension that creates its data and configuration files in the scope of the currently logged-in user. You specify the vault type to register via the ModuleName parameter.

$MySecureVault1 = 'LocalSecVault1'
#Register Vault1 in secret store
Register-SecretVault -ModuleName Microsoft.PowerShell.SecretStore -Name $MySecureVault1 -DefaultVault

#Verify the vault is there
Get-SecretVault

#Add secrets to Vault 1
Set-Secret -Name "DATAWISETECH\serverautomation1in$MySecureVault1" -Secret "pwdserverautom1" -Vault $MySecureVault1
Set-Secret -Name "DATAWISETECH\serverautomation2in$MySecureVault1" -Secret "pwdserverautom2" -Vault $MySecureVault1
Set-Secret -Name "DATAWISETECH\serverautomation3in$MySecureVault1" -Secret "pwdserverautom3" -Vault $MySecureVault1

#Verify secrets
Get-SecretInfo

Via Get-SecretInfo, I can see the three secrets I added to the vault LocalSecVault1.

The three secrets I added to vault LocalSecVault1
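
To actually consume such a secret in a script, a quick sketch:

#Retrieve a secret; it comes back as a SecureString by default,
#or as the raw string when you add -AsPlainText
$Secret = Get-Secret -Name "DATAWISETECH\serverautomation1in$MySecureVault1" -Vault $MySecureVault1
$PlainSecret = Get-Secret -Name "DATAWISETECH\serverautomation1in$MySecureVault1" -Vault $MySecureVault1 -AsPlainText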

The configuration and data are stored in separate files. The file location depends on the operating system. For Windows, this is %LOCALAPPDATA%\Microsoft\PowerShell\secretmanagement\localstore. For Linux and macOS, it is $HOME/.secretmanagement/localstore/.

The localstore files

As you can see, this happens under the user context. Support for an all-users or machine-wide context or scope is a planned future capability, but it is not available yet. Access to the SecretStore files is limited to the specific user/owner via NTFS file permissions (Windows) or access control lists (Linux).

Multiple Secret stores

It is possible in SecretManagement to register an extension vault multiple times. The reason for this is that an extension vault may support different contexts via the registration VaultParameters.

At first, it might seem that this means we can create multiple SecretStores, but that is not the case. The SecretStore vault currently operates under the scope of the currently logged-on user at a very specific path. As a result, it confused me when I initially tried to create multiple SecretStores: I could see all the secrets of the other stores. Initially, I thought that was what had happened, and consequently, I had a little security scare. In reality, I had just registered different vault names against the same SecretStore, as there is only one.

$MySecurevault2 = 'LocalSecVault2'
$MySecureVault3 = 'LocalSecVault3'

#Register two more vaults to secret store
Register-SecretVault -ModuleName Microsoft.PowerShell.SecretStore -Name $MySecurevault2 -DefaultVault
Register-SecretVault -ModuleName Microsoft.PowerShell.SecretStore -Name $MySecureVault3 -DefaultVault

#Note that all vaults contain the secrets of Vault1
Get-SecretInfo
 
#Add secrets to Vault 2
Set-Secret -Name "DATAWISETECH\serverautomation1in$MySecureVault2" -Secret "pwdserverautom1" -Vault $MySecureVault2
Set-Secret -Name "DATAWISETECH\serverautomation2in$MySecureVault2" -Secret "pwdserverautom2" -Vault $MySecureVault2
Set-Secret -Name "DATAWISETECH\serverautomation3in$MySecureVault2" -Secret "pwdserverautom3" -Vault $MySecureVault2

#Note that all vaults contain the secrets of Vault1 AND Vault 2
Get-SecretInfo
Note that every registered local store vault basically sees the same SecretStore, as they all point to the same files.
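
Since all the extra names point at the same store, cleaning up the redundant registrations is enough; a sketch:

#Unregister the extra vault names; the underlying SecretStore and its secrets remain
Unregister-SecretVault -Name $MySecurevault2
Unregister-SecretVault -Name $MySecureVault3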

Now, if you think that multiple SecretStores per user scope are a good idea, there is an open request to support this: Request: Multiple instances of SecretStore · Issue #58 · PowerShell/SecretStore (github.com).

Set Max Concurrent Tasks in Veeam with PowerShell

In this blog post, I'll look at how to set the Max Concurrent Tasks in Veeam with PowerShell. When configuring your Veeam backup environment for the best possible backup performance, there are a lot of settings to tweak. The defaults do a good job of getting you going quickly and reliably, but when you have more resources, it pays to optimize. One of the settings to optimize is Max Concurrent Tasks.

NOTE: all PowerShell here was tested against VBR v10a

Where to set max concurrent tasks or task limits

There are actually four places (two specific to Hyper-V) where you can set this in Veeam for a Hyper-V environment.

  1. Off-host proxy
  2. On-host proxy
  3. File Share Proxy (NEW in V10)
  4. Repository or SOBR extent

Also see https://helpcenter.veeam.com/docs/backup/hyperv/limiting_tasks.html?ver=100

(Screenshots in the post show these four spots in the GUI: Max Concurrent Tasks on an off-host proxy; the task limit on the on-host Hyper-V proxy; Max Concurrent Tasks on a file proxy (V10); and the limit on maximum concurrent tasks on a repository or SOBR extent.)

Now, let’s dive into those a bit and show the PowerShell to get it configured.

Configuring the proxies

When configuring the on-host or off-host proxies, the max concurrent tasks are based on virtual disks. Let's look at some examples. 4 virtual machines with a single virtual disk each consume 4 concurrent tasks. A single virtual machine with 4 virtual disks also consumes 4 concurrent tasks. 2 virtual machines with 2 virtual disks each consume, you guessed it, 4 concurrent tasks.

Note that it does not matter whether these VMs are in a single job or in multiple jobs. The limits are set at the proxy level, so it is the sum of all virtual disks of the VMs in all concurrently running backup jobs that counts. Once you hit the limit, the remaining virtual disks (which might translate into complete VMs) will be pending.

Set the max concurrent tasks for on-host proxies

#We grab the Hyper-V on-host backup proxies. Note this code does not grab
#any other type of proxies. We set the MaxTasksCount and report back
$MaxTaskCountValueToSet = 12
$HvProxies = [Veeam.Backup.Core.CHvProxy]::GetAll()
$HvProxies.Count
Foreach ($Proxy in $HvProxies) {
    $HyperVOnHostProxy = $Proxy.Host.Name
    $MaxTaskCount = $Proxy.MaxTasksCount
    Write-Host "The on-host Hyper-V proxy $HyperVOnHostProxy has a concurrent task limit of $MaxTaskCount" -ForegroundColor Yellow
    $Options = $Proxy.Options
    $Options.MaxTasksCount = $MaxTaskCountValueToSet
    $Proxy.SetOptions($Options)
}

#Report the changes
$HvProxies = [Veeam.Backup.Core.CHvProxy]::GetAll()
Foreach ($Proxy in $HvProxies) {
    $HyperVOnHostProxy = $Proxy.Host.Name
    $MaxTaskCount = $Proxy.MaxTasksCount
    Write-Host "The on-host Hyper-V proxy $HyperVOnHostProxy has a concurrent task limit of $MaxTaskCount" -ForegroundColor Green
}

Set the max concurrent tasks for off-host proxies

#We grab the Hyper-V off-host backup proxies. Note this code does not grab
#any other type of proxies. We set the MaxTasksCount and report back
$MaxTaskCountValueToSet = 6
$HvOffHostProxies = Get-VBRHvProxy
foreach ($OffhostProxy in $HvOffHostProxies) {
    $HvOffHostProxyName = $OffhostProxy.Name
    $MaxTaskCount = $OffhostProxy.MaxTasksCount
    Write-Host "The on-host Hyper-V proxy $HvOffHostProxyName has a concurrent task limit of $MaxTaskCount" -ForegroundColor Yellow
    $Options = $OffhostProxy.Options
    $Options.MaxTasksCount = $MaxTaskCountValueToSet
    $OffhostProxy.SetOptions($Options)
}

#Report the changes
foreach ($OffhostProxy in $HvOffHostProxies) {
    $HvOffHostProxyName = $OffhostProxy.Name
    $MaxTaskCount = $OffhostProxy.MaxTasksCount
    Write-Host "The on-host Hyper-V proxy $HvOffHostProxyName has a concurrent task limit of $MaxTaskCount" -ForegroundColor Green
}

PowerShell code to set the max concurrent tasks for file proxies

#We grab the file proxies. Note this code does not grab
#any other type of proxies. We set the MaxTasksCount and report back
$MaxTaskCountValueToSet = 12
$FileProxies = [Veeam.Backup.Core.CFileProxy]::GetAll()
Foreach ($FileProxy in $FileProxies) {
    $FileProxyName = $FileProxy.Name
    $MaxTaskCount = $FileProxy.MaxTasksCount
    Write-Host "The file proxy $FileProxyName has a concurrent task limit of $MaxTaskCount" -ForegroundColor Yellow
    $options = $FileProxy.Options
    $options.MaxTasksCount = $MaxTaskCountValueToSet 
    $FileProxy.SetOptions($options)
}

#Report the changes
$FileProxies = [Veeam.Backup.Core.CFileProxy]::GetAll()
Foreach ($FileProxy in $FileProxies) {
    $FileProxyName = $FileProxy.Name
    $MaxTaskCount = $FileProxy.MaxTasksCount
    Write-Host "The file proxy $FileProxyName has a concurrent task limit of $MaxTaskCount" -ForegroundColor Green
}

Last but not least, note that the VBR v10 PowerShell also has the Get-VBRNASProxyServer and Set-VBRNASProxyServer cmdlets to work with. However, initially, they seemed not to report the name of the proxies, which is annoying. After asking around, I learned that the name can be found as a property of the Server object they return. While I was expecting $FileProxy.Name to exist (based on other Veeam proxy commands), I need to use $FileProxy.Server.Name.

$MaxTaskCountValueToSet = 4
$FileProxies = Get-VBRNASProxyServer
foreach ($FileProxy in $FileProxies) {
    $FileProxyName = $FileProxy.Server.Name
    $MaxTaskCount = $FileProxy.ConcurrentTaskNumber
    Write-Host "The file proxy $FileProxyName has a concurrent task limit of $MaxTaskCount" -ForegroundColor Yellow
    Set-VBRNASProxyServer -ProxyServer $FileProxy -ConcurrentTaskNumber $MaxTaskCountValueToSet
}

#Report the changes
$FileProxies = Get-VBRNASProxyServer
foreach ($FileProxy in $FileProxies) {
    $FileProxyName = $FileProxy.Server.Name
    $MaxTaskCount = $FileProxy.ConcurrentTaskNumber
    Write-Host "The file proxy $FileProxyName has a concurrent task limit of $MaxTaskCount" -ForegroundColor Green
}

Configuring the repositories/SOBR extents

First of all, for Backup Repositories, the max concurrent tasks are not based on virtual disks but on backup files (.vbk, .vib & .vrb).

Secondly, you can use either per-VM backup files or non-per-VM backup files. With per-VM backup files, every VM in the job has its own backup file. So this consumes more concurrent tasks in a single job than the non-per-VM mode, where a single job has a single file. Let's again look at some examples to help clarify this. A single backup job in non-per-VM mode will use a single backup file, and as such one concurrent task, regardless of the number of VMs in the job. A single backup job in per-VM mode will use a single backup file, and thus one task, per VM in the job.

What you need to consider with repositories is that synthetic tasks (merges, transformations, synthetic fulls) also consume tasks and count towards the concurrent task limit on a repository/extent. So when setting it, do not think it only relates to running active backups.

Finally, when you combine roles, please be aware that the same resources (cores, memory) have to serve all those task limits. That also means you have to consider other subsystems, like the storage. If that cannot keep up, your performance will suffer.

PowerShell code to set the task limit for a repository/extent

For standard backup repositories, this will do the job:

Get-VBRBackupRepository | Set-VBRBackupRepository -LimitConcurrentJobs -MaxConcurrentJobs 24

For the extents of a SOBR, you need to use something like this:

Get-VBRBackupRepository -ScaleOut | Get-VBRRepositoryExtent | Set-VBRBackupRepository -LimitConcurrentJobs -MaxConcurrentJobs 24

If you put the output of Get-VBRBackupRepository in a foreach loop, you can also configure/report on individual backup repositories when required.

#We grab the repositories. Note: add -ScaleOut and Get-VBRRepositoryExtent if you need to grab SOBR extents.
#We set the max concurrent tasks and report back
$MaxTaskCountValueToSet = 6
$Repositories = Get-VBRBackupRepository
foreach ($Repository in $Repositories) {
    $RepositoryName = $Repository.Name
    $MaxTaskCount = $Repository.Options.MaxTaskCount
    Write-Host "The repository $RepositoryName has a concurrent task limit of $MaxTaskCount" -ForegroundColor Yellow

    Set-VBRBackupRepository -Repository $Repository -LimitConcurrentJobs -MaxConcurrentJobs $MaxTaskCountValueToSet
}

#Report the changes
$Repositories = Get-VBRBackupRepository
foreach ($Repository in $Repositories) {
    $RepositoryName = $Repository.Name
    $MaxTaskCount = $Repository.Options.MaxTaskCount
    Write-Host "The repository $RepositoryName has a concurrent task limit of $MaxTaskCount" -ForegroundColor Green
}

Conclusion

So, I have shown you ways to automate similar settings for different purposes. The way of automating differs a bit depending on the type of proxy, or whether it is a repository. I hope it helps some of you out there.