Create virtual machines for a Veeam hardened repository lab

Introduction

In this blog post, I will give you a script to create virtual machines for a Veeam hardened repository lab.

The script has just created two virtual machines for you

Some of you have asked me to do some knowledge transfer about configuring a Veeam hardened repository. For lab work, virtualization is your friend. I hope to show you some of the Ubuntu Linux configuration I do; when time permits, I will blog about it so you can follow along. I will share what I can on my blog.

Running the script

Now, if you have Hyper-V running on a lab node, your desktop, or your laptop, you can create virtual machines for a Veeam hardened repository lab with the PowerShell script below. Just adjust the parameters and make sure you have the Ubuntu 20.04 Server ISO in the right place. The script creates the virtual machine configuration files under a folder with the name of the virtual machine, in the path you specify in the variables. The VMs it creates will boot into the Ubuntu setup, and we can walk through it and configure the operating system.

Pay attention to the -Version parameter of the virtual machine. I run Windows Server 2022 and Windows 11 on my PCs, so you might need to adjust it to a configuration version your Hyper-V installation supports.
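You can check which configuration versions your host supports with the Hyper-V module's own cmdlets:

# List the VM configuration versions this Hyper-V host supports
Get-VMHostSupportedVersion

# Show the default configuration version used for new VMs
Get-VMHostSupportedVersion -Default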

Also, pay attention to the VLAN IDs used. That suits my lab network. It might not suit yours. Use VLAN ID 0 to disable the VLAN identifier on a NIC.
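If an adapter should not be tagged at all, you can also untag it explicitly; a one-liner sketch, assuming the VM and adapter names the script below generates:

Set-VMNetworkAdapterVlan -VMName 'AAAA-XFSREPO-01' -VMNetworkAdapterName 'LAN-HOST-01' -Untagged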

Clear-Host
$VMPrefix = 'AAAA-XFSREPO-0'
$Path = "D:\VirtualMachines\"
$ISOPath = 'D:\VirtualMachines\ISO\ubuntu-20.04.4-live-server-amd64.iso'
$NumberOfCPUs = 2
$Memory = 4GB
$vSwitch = 'DataWiseTech'
$NumberOfVMs = 2
$VlanIdTeam = 2
$VlanIDSMB1 = 40
$VlanIdSMB2 = 50
$VmVersion = '10.0'

ForEach ($Counter in 1..$NumberOfVMs) {
    $VMName = $VMPrefix + $Counter
    $DataDisk01Path = "$Path$VMName\Virtual Hard Disks\$VMName-DATA01.vhdx"
    $DataDisk02Path = "$Path$VMName\Virtual Hard Disks\$VMName-DATA02.vhdx"
    Write-Host -ForegroundColor Cyan "Creating VM $VMName in $Path ..."
    New-VM -Name $VMName -Path $Path -NewVHDPath "$Path$VMName\Virtual Hard Disks\$VMName-OS.vhdx" `
        -NewVHDSizeBytes 65GB -Version $VmVersion -Generation 2 -MemoryStartupBytes $Memory -SwitchName $vSwitch | Out-Null

    Write-Host -ForegroundColor Cyan "Setting VM $VMName its number of CPUs to $NumberOfCPUs ..."
    Set-VMProcessor -VMName $VMName -Count $NumberOfCPUs

    Write-Host -ForegroundColor Magenta "Adding NICs LAN-HOST01, LAN-HOST02, SMB1 and SMB2 to $VMName"
    #Remove-VMNetworkAdapter -VMName $VMName -Name 'Network Adapter'

    Rename-VMNetworkAdapter -VMName $VMName -Name 'Network Adapter' -NewName LAN-HOST-01
    #Connect-VMNetworkAdapter -VMName $VMName -Name LAN -SwitchName $vSwitch
    Add-VMNetworkAdapter -VMName $VMName -SwitchName DataWiseTech -Name LAN-HOST-02 -DeviceNaming On
    Add-VMNetworkAdapter -VMName $VMName -SwitchName $vSwitch -Name SMB1 -DeviceNaming On
    Add-VMNetworkAdapter -VMName $VMName -SwitchName $vSwitch -Name SMB2 -DeviceNaming On
    
    Write-Host -ForegroundColor Magenta "Assigning VLANs to NICs LAN-HOST01, LAN-HOST02, SMB1 and SMB2 to $VMName"
    Set-VMNetworkAdapterVlan -VMName $VMName -VMNetworkAdapterName LAN-HOST-01 -Access -VLANId $VlanIdTeam
    Set-VMNetworkAdapterVlan -VMName $VMName -VMNetworkAdapterName LAN-HOST-02 -Access -VLANId $VlanIdTeam  
    Set-VMNetworkAdapterVlan -VMName $VMName -VMNetworkAdapterName SMB1 -Access -VLANId $VlanIdSMB1
    Set-VMNetworkAdapterVlan -VMName $VMName -VMNetworkAdapterName SMB2 -Access -VLANId $VlanIdSmb2

    Set-VMNetworkAdapter -VMName $VMName -Name LAN-HOST-01 -DhcpGuard On -RouterGuard On -DeviceNaming On -MacAddressSpoofing On -AllowTeaming On
    Set-VMNetworkAdapter -VMName $VMName -Name LAN-HOST-02 -DhcpGuard On -RouterGuard On -MacAddressSpoofing On -AllowTeaming On
    Set-VMNetworkAdapter -VMName $VMName -Name SMB1 -DhcpGuard On -RouterGuard On -MacAddressSpoofing Off -AllowTeaming off
    Set-VMNetworkAdapter -VMName $VMName -Name SMB2 -DhcpGuard On -RouterGuard On -MacAddressSpoofing Off -AllowTeaming off

    Write-Host -ForegroundColor yellow "Adding DVD Drive to $VMName"
    Add-VMDvdDrive -VMName $VMName -ControllerNumber 0 -ControllerLocation 8 

    Write-Host -ForegroundColor yellow "Mounting $ISOPath to DVD Drive on $VMName"
    Set-VMDvdDrive -VMName $VMName -Path $ISOPath

    Write-Host -ForegroundColor White "Setting DVD with $ISOPath as first boot device on $VMName"
    $DVDWithOurISO = ((Get-VMFirmware -VMName $VMName).BootOrder | Where-Object Device -like *DVD*).Device
    
    Set-VMFirmware -VMName $VMName -FirstBootDevice $DVDWithOurISO `
    -EnableSecureBoot On -SecureBootTemplate MicrosoftUEFICertificateAuthority

    Write-Host -ForegroundColor Cyan "Creating two data disks and adding them to $VMName"
    New-VHD -Path $DataDisk01Path -Dynamic -SizeBytes 150GB | out-null
    New-VHD -Path $DataDisk02Path -Dynamic -SizeBytes 150GB | out-null

    Add-VMHardDiskDrive -VMName $VMName -ControllerNumber 0 `
    -ControllerLocation 1 -ControllerType SCSI  -Path $DataDisk01Path

    Add-VMHardDiskDrive -VMName $VMName -ControllerNumber 0 `
    -ControllerLocation 2 -ControllerType SCSI  -Path $DataDisk02Path

    $VM = Get-VM -Name $VMName
    Write-Host "VM $($VM.Name) has been created" -ForegroundColor Green
    write-Host ""
}
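From here, you can boot a VM and walk through the Ubuntu installer; a possible next step, assuming the VM names the script generates:

Start-VM -Name 'AAAA-XFSREPO-01'
vmconnect.exe localhost 'AAAA-XFSREPO-01'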

Conclusion

In conclusion, that’s it for now. Play with the script and you will create virtual machines for a Veeam hardened repository lab in no time. That way you are ready to test and educate yourself. Don’t forget that you need to have sufficient resources on your host. Virtualization is cool but it is not magic.

Some of the settings won’t make sense to all of you yet, but this will become clear in future posts. They are specific to Ubuntu networking on Hyper-V.

I hope to publish the steps I take in the coming months. As with many of you, time is my limiting factor, so have patience. In the meantime, you can read up on the Veeam hardened repository.

Failing compilation with Azure Automation State Configuration: Cannot connect to CIM server. The specified service does not exist as an installed service

Introduction

You can compile Desired State Configuration (DSC) configurations in Azure Automation State Configuration, which functions as a pull server. Next to doing this via the Azure portal, you can also use PowerShell. The latter allows for easy integration in DevOps pipelines and provides the flexibility to deal with complex parameter constructs. So, this is my preferred option. Of course, you can also push DSC configurations to Azure virtual machines via ARM templates. But I like the pull mechanisms for life cycle management just a bit more as we can update the DSC config and push it out when needed. So, that’s all good, but under certain conditions, you can get the following error: Cannot connect to CIM server. The specified service does not exist as an installed service.

When can you get into this pickle?

DSC itself is PowerShell, and that comes in quite handy. Sometimes, the logic you use inside DSC blocks is insufficient to get the job done as needed. With PowerShell, we can leverage the power of scripting to get the information and build the logic we need. One such example is formatting data disks. Configuring network interfaces would be another. A disk number is not always reliable and consistent, leading to failed DSC configurations.
For example, the block below is a classic way to wait for a disk, and when it shows up, initialize, format, and assign a drive letter to it.

xWaitforDisk NTDSDisk {
    DiskNumber       = 2
    RetryIntervalSec = 20
    RetryCount       = 30
}
xDisk ADDataDisk {
    DiskNumber  = 2
    DriveLetter = "N"
    DependsOn   = "[xWaitForDisk]NTDSDisk"
}

The disk number may vary depending on whether your Azure virtual machine has a temp disk or not, and whether you use disk encryption; both can trip up disk numbering. No worries, DSC has more up its sleeve and allows you to use the disk id instead of the disk number. That id is truly unique and consistent. You can quickly grab a disk’s unique id with PowerShell, as shown below.
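A quick way to list each disk’s unique id, using the standard Storage module that ships with Windows:

Get-Disk | Select-Object -Property Number, FriendlyName, SerialNumber, UniqueId

The DSC resources then reference the unique id instead of the disk number: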

xWaitforDisk NTDSDisk {
    DiskIdType       = 'UniqueID'
    DiskId           = $NTDSDiskUniqueId #'1223' #GetScript #$NTDSDisk.UniqueID
    RetryIntervalSec = 20
    RetryCount       = 30
}

xDisk ADDataDisk {
    DiskIdType  = 'UniqueID'
    DiskId      = $NTDSDiskUniqueId #GetScript #$NTDSDisk.UniqueID
    DriveLetter = "N"
    DependsOn   = "[xWaitForDisk]NTDSDisk"
}

Uploading and compiling the DSC configuration with PowerShell

So we upload and compile this DSC configuration with the below script.

$params = @{
    AutomationAccountName = 'MyScriptLibrary'
    ResourceGroupName     = 'WorkingHardInIT-RG'
    SourcePath            = 'C:\Users\WorkingHardInIT\OneDrive\AzureAutomation\AD-extension-To-Azure\InfAsCode\Up\App\PowerShell\ADDSServer.ps1'
    Published             = $true
    Force                 = $true
}

$UploadDscConfiguration = Import-AzAutomationDscConfiguration @params

while ($null -eq $UploadDscConfiguration.EndTime -and $null -eq $UploadDscConfiguration.Exception) {
    $UploadDscConfiguration = $UploadDscConfiguration | Get-AzAutomationDscCompilationJob
    write-Host -foregroundcolor Yellow "Uploading DSC configuration"
    Start-Sleep -Seconds 2
}
$UploadDscConfiguration | Get-AzAutomationDscCompilationJobOutput -Stream Any
Write-Host -ForegroundColor Green "Uploading done:"
$UploadDscConfiguration


$params = @{
    AutomationAccountName = 'MyScriptLibrary'
    ResourceGroupName     = 'WorkingHardInIT-RG'
    ConfigurationName     = 'ADDSServer'
}

$CompilationJob = Start-AzAutomationDscCompilationJob @params 
while ($null -eq $CompilationJob.EndTime -and $null -eq $CompilationJob.Exception) {
    $CompilationJob = $CompilationJob | Get-AzAutomationDscCompilationJob
    Start-Sleep -Seconds 2
    Write-Host -ForegroundColor cyan "Compiling"
}
$CompilationJob | Get-AzAutomationDscCompilationJobOutput -Stream Any
Write-Host -ForegroundColor green "Compiling done:"
$CompilationJob
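If your DSC configuration takes parameters (for example, the disk unique id above), Start-AzAutomationDscCompilationJob can pass them in via a hashtable; a sketch, with a placeholder value for the hypothetical NTDSDiskUniqueId parameter:

$params = @{
    AutomationAccountName = 'MyScriptLibrary'
    ResourceGroupName     = 'WorkingHardInIT-RG'
    ConfigurationName     = 'ADDSServer'
    Parameters            = @{ NTDSDiskUniqueId = '<your disk unique id>' } # placeholder value
}
$CompilationJob = Start-AzAutomationDscCompilationJob @params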

So, life is good, right? Yes, until you try and compile that (DSC) configuration in Azure Automation State Configuration. Then, you will get a nasty compile error.

Cannot connect to CIM server. The specified service does not exist as an installed service

“Exception: The running command stopped because the preference variable “ErrorActionPreference” or common parameter is set to Stop: Cannot connect to CIM server. The specified service does not exist as an installed service.”

You will see the same error in the Azure portal.

The Azure compiler wants to validate the code, and as it cannot get access to the host, compilation fails. So the configs compile on the Azure Automation server, not on the target node (which may not even exist yet) or on localhost. I find this odd. When I compile code in C#, C++, or VB.NET, it will not fail because it cannot connect to a server to validate my code by grabbing disk or interface information at compile time. The DSC code only needs to be correct and valid. I wish Microsoft would fix this behavior.

Workarounds

Compile DSC locally and upload

Yes, I know you can pre-compile the DSC locally and upload it to the Automation account. However, the beauty of using the Automation account is that you don’t have to bother with all that. I like to keep the flow as easygoing and straightforward as possible for automation. Unfortunately, compiling locally and uploading doesn’t fit nicely into that concept.

Upload a PowerShell script to a storage container in a storage account

We can store a PowerShell script in an Azure storage account. In our example, that script can do what we want, find, initialize, and format a disk.

Get-Disk | Where-Object { $_.NumberOfPartitions -lt 1 -and $_.PartitionStyle -eq "RAW" -and $_.Location -match "LUN 0" } |
Initialize-Disk -PartitionStyle GPT -PassThru | New-Partition -DriveLetter "N" -UseMaximumSize |
Format-Volume -FileSystem NTFS -NewFileSystemLabel "NTDS-DISK" -Confirm:$false

From that storage account, we download it to the Azure VM when DSC is running. This can be achieved in a script block.

$BlobUri = 'https://scriptlibrary.blob.core.windows.net/scripts/DSC/InitialiseNTDSDisk.ps1' #Get-AutomationVariable -Name 'addcInitialiseNTDSDiskScritpBlobUri'
$SasToken = '?sv=2021-10-04&se=2022-05-22T14%3A04%8S67QZ&cd=c&lk=r&sig=TaeIfYI63NTgoftSeVaj%2FRPfeU5gXdEn%2Few%2F24F6sA%3D'
$CompleteUri = "$BlobUri$SasToken"
$OutputPath = 'C:\Temp\InitialiseNTDSDisk.ps1'

Script FormatAzureDataDisks {
    SetScript  = {

        Invoke-WebRequest -Method Get -uri $using:CompleteUri -OutFile $using:OutputPath
        . $using:OutputPath
    }

    TestScript = {
        Test-Path $using:OutputPath
    }

    GetScript  = {
        @{Result = (Get-Content $using:OutputPath) }
    }
} 

But we need to set up a storage account and upload a PowerShell script to a blob. We also need a SAS token to download that script, or we must allow public access to it. Instead of hardcoding this information in the DSC script, we can store it in Automation variables. We could even (ab)use Automation credentials to store the SAS token securely. All of that is possible, but it requires more infrastructure, maintenance, and security work when integrating this into the DevOps flow.
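For completeness, reading those values from Automation variables inside the configuration could look like the sketch below; the variable names are illustrative and must exist as assets in the Automation account:

# Variable names are hypothetical; create them as assets in the Automation account first
$BlobUri     = Get-AutomationVariable -Name 'addcInitialiseNTDSDiskScriptBlobUri'
$SasToken    = Get-AutomationVariable -Name 'addcInitialiseNTDSDiskSasToken'
$CompleteUri = "$BlobUri$SasToken"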

PowerShell to generate a PowerShell script

The least convoluted workaround that I found is to generate a PowerShell script in the Script block of the DSC configuration and save that to the Azure VM when DSC is running. In our example, this becomes the below script block in DSC.

Script FormatAzureDataDisks {
    SetScript  = {
        $PoshToExecute = 'Get-Disk | Where-Object { $_.NumberOfPartitions -lt 1 -and $_.PartitionStyle -eq "RAW" -and $_.Location -match "LUN 0" } | Initialize-Disk -PartitionStyle GPT -PassThru | New-Partition -DriveLetter "N" -UseMaximumSize | Format-Volume -FileSystem NTFS -NewFileSystemLabel "NTDS-DISK" -Confirm:$false'
        $PoshToExecute | Out-File $using:OutputPath
        . $using:OutputPath
    }
    TestScript = {
        Test-Path $using:OutputPath 
    }
    GetScript  = {
        @{Result = (Get-Content $using:OutputPath) }
    }
}

So, in SetScript, we build the actual PowerShell command we want to execute on the host as a string. Then we persist it to file using our $OutputPath variable, which we can access inside the Script block via $using:OutputPath. Finally, we execute the persisted script by dot sourcing it with ". $using:OutputPath". In TestScript, we test for the existence of the file. We ignore the output of GetScript, but it needs to be there. Maintenance is easy: you edit the string variable holding the PowerShell in the DSC configuration file, which you upload and compile. That’s it.

To be fair, this will not work in all situations, and you might need to download protected files. In that case, the storage account approach above will help out.

Conclusion

Creating a PowerShell script in the DSC configuration file requires less effort and infrastructure maintenance than uploading such a script to a storage account. So that’s the pragmatic trick I use. I wish the compilation in an Automation account would just succeed, but it doesn’t, so this is the next best thing. I hope this helps someone out there facing the same error: Cannot connect to CIM server. The specified service does not exist as an installed service.

SecretStore local vault extension

What is the SecretStore local vault extension

The SecretStore local vault extension is a PowerShell module extension vault for Microsoft.PowerShell.SecretManagement. It is a secure storage solution that stores secret data on the local machine. It is based on .NET cryptography APIs and works on Windows, Linux, and macOS thanks to PowerShell Core.

The secret data is stored at rest in encrypted form on the file system and decrypted when returned to a user request. The store file data integrity is verified using a cryptographic hash embedded in the file.

The store can be configured to require a password or to operate password-less. Requiring a password adds to defense in depth, since password-less operation relies solely on file system protections. Password-less operation still encrypts the data, but the encryption key is stored in a file and is accessible. Another configuration option is the password timeout, which is 15 minutes by default. For automation purposes, you can use Unlock-SecretStore to enter the password for the current PowerShell session for the duration of the timeout period.
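Unlocking the store and adjusting the timeout in a session looks like this; a short sketch using the module’s own cmdlets:

# Unlock the SecretStore for this session (prompts for the store password)
$password = Read-Host -AsSecureString -Prompt 'SecretStore password'
Unlock-SecretStore -Password $password

# Raise the password timeout to 30 minutes (1800 seconds)
Set-SecretStoreConfiguration -PasswordTimeout 1800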

Testing the SecretStore local vault extension

Below you will find a demonstration script where I register a vault of the type SecretStore. This is a local vault extension that creates its data and configuration files in the currently logged-in user scope. You specify the vault type to register via the ModuleName parameter.

$MySecureVault1 = 'LocalSecVault1'
#Register Vault1 in secret store
Register-SecretVault -ModuleName Microsoft.PowerShell.SecretStore -Name $MySecureVault1 -DefaultVault

#Verify the vault is there
Get-SecretVault

#Add secrets to Vault 1
Set-Secret -Name "DATAWISETECH\serverautomation1in$MySecureVault1" -Secret "pwdserverautom1" -Vault $MySecureVault1
Set-Secret -Name "DATAWISETECH\serverautomation2in$MySecureVault1" -Secret "pwdserverautom2" -Vault $MySecureVault1
Set-Secret -Name "DATAWISETECH\serverautomation3in$MySecureVault1" -Secret "pwdserverautom3" -Vault $MySecureVault1

#Verify secrets
Get-SecretInfo

Via Get-SecretInfo, I can see the three secrets I added to the vault LocalSecVault1.

The three secrets I added to vault LocalSecVault1

The configuration and data are stored in separate files. The file location depends on the operating system. For Windows, this is %LOCALAPPDATA%\Microsoft\PowerShell\secretmanagement\localstore. For Linux and macOS, it is $HOME/.secretmanagement/localstore/.

The localstore files

As you can see, this happens under the user context. Support for an all-users or machine-wide scope is a planned future capability, but it is not available yet. Access to the SecretStore files is limited to the specific user/owner via NTFS file permissions (Windows) or access control lists (Linux).
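You can inspect the current scope, authentication, and timeout settings with the module’s configuration cmdlet; the output will look something like this:

Get-SecretStoreConfiguration

#       Scope Authentication PasswordTimeout Interaction
#       ----- -------------- --------------- -----------
# CurrentUser       Password             900      Prompt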

Multiple Secret stores

It is possible in SecretManagement to register an extension vault multiple times. The reason for this is that an extension vault may support different contexts via the registration VaultParameters.

At first, it might seem that this means we can create multiple SecretStores, but that is not the case. The SecretStore vault currently operates under the scope of the currently logged-on user at a very specific path. As a result, it confused me when I initially tried to create multiple SecretStores: I could see all the secrets of the other stores. Initially, I thought that was what had happened, and consequently, I had a little security scare. In reality, I had just registered different vault names to the same SecretStore, as there is only one.

$MySecurevault2 = 'LocalSecVault2'
$MySecureVault3 = 'LocalSecVault3'

#Register two more vaults to secret store
Register-SecretVault -ModuleName Microsoft.PowerShell.SecretStore -Name $MySecurevault2 -DefaultVault
Register-SecretVault -ModuleName Microsoft.PowerShell.SecretStore -Name $MySecureVault3 -DefaultVault

#Note that all vaults contain the secrets of Vault1
Get-SecretInfo
 
#Add secrets to Vault 2
Set-Secret -Name "DATAWISETECH\serverautomation1in$MySecureVault2" -Secret "pwdserverautom1" -Vault $MySecureVault2
Set-Secret -Name "DATAWISETECH\serverautomation2in$MySecureVault2" -Secret "pwdserverautom2" -Vault $MySecureVault2
Set-Secret -Name "DATAWISETECH\serverautomation3in$MySecureVault2" -Secret "pwdserverautom3" -Vault $MySecureVault2

#Note that all vaults contain the secrets of Vault1 AND Vault 2
Get-SecretInfo
Note that every registered local store vault basically sees the same SecretStore, as they all point to the same files.

Now, if you think that multiple SecretStores per user scope would be a good idea, there is an open request to support this: Request: Multiple instances of SecretStore · Issue #58 · PowerShell/SecretStore (github.com).

KeePass SecretManagement extension vault

The SecretManagement and SecretStore modules can work with SecretManagement extension vault modules. These can be found in the PowerShell Gallery using the “SecretManagement” search tag.
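You can query the gallery for these modules straight from PowerShell; a quick search using PowerShellGet:

# List extension vault modules published with the SecretManagement tag
Find-Module -Tag 'SecretManagement' | Select-Object Name, Version, Description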

I use KeePass, so the KeePass SecretManagement extension vault is the one I will demonstrate. First of all, install the module. Note that I chose to use the most recent beta version, which is 0.9.2-beta0008 at the time of writing this blog post.

Install-Module -Name SecretManagement.KeePass -AllowPrerelease

Naturally, if you haven’t installed the SecretManagement and SecretStore modules yet, you must do that now to be able to play along.

Install-Module Microsoft.PowerShell.SecretManagement, Microsoft.PowerShell.SecretStore

Now that this has been taken care of, we can start testing the KeePass SecretManagement extension vault.

Using the KeePass SecretManagement extension vault

I created a demo KeePass .kdbx file in which I stored some example user names with their passwords. This file has a master password. You can also use a key or the Windows user account if you want to do so.

Our demo .kdbx file

Now I will register the KeePass file as a vault.

Register-KeePassSecretVault -Name 'WorkingHardInITKeePassVault' -Path 'C:\SysAdmin\Authentication\workinghardinit.kdbx' -UseMasterPassword
Register the KeePass vault

As you can see, this prompts you for the KeePass master password.

Keepass Master Password
Enter the Keepass Master password for: C:\SysAdmin\Authentication\workinghardinit.kdbx
Password for user Keepass Master Password:

Now that is done, I will unlock the KeePass secret vault so I can use it in automation without being prompted for it. By default, it remains unlocked for 900 seconds (15 minutes). This is configurable.

Unlock-KeePassSecretVault -Name 'WorkingHardInITKeePassVault'

Unlock the KeePass vault by entering the SecretStore password and, if the file is not open yet, the KeePass master password.

$FCcreds = Get-Secret -Name 'FC Switch 01' -Vault 'WorkingHardInITKeePassVault'
$FCSwitchUser = $FCcreds.GetNetworkCredential().UserName
$FCSwitchPwd = $FCcreds.GetNetworkCredential().Password
Write-Host -ForegroundColor Green "FC Switch 01 username $FCSwitchUser has $FCSwitchPwd for its password"

We grab the username and password for the FC Switch 01 entry in the KeePass secret vault.

Note that the entry for the secret is a network credential. As a result, we can use the properties of the credential object to obtain the username and password in plain text. That is to say, we can (and should) use the credential object directly; you do not need to show or use the password in plain text. I did that here only to show you that we got the correct values back.

Credentials ready to use.
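Since Get-Secret returns a credential object here, you can pass it straight to any cmdlet that accepts -Credential. A hypothetical sketch, assuming the third-party Posh-SSH module and an SSH-enabled switch (the host name is invented for illustration):

# Pass the credential object directly; no plain-text handling needed
# New-SSHSession comes from the Posh-SSH module
$session = New-SSHSession -ComputerName 'fcswitch01.lab.local' -Credential $FCcreds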

Updating and adding secrets

Currently, updating existing secrets is not supported. Let’s hope they allow updating, and document how to use the hash table to enter metadata, better in the future.

For now, we need to remove the existing entry first and re-enter the information. We’ll see how this evolves.

Remove-Secret -Name 'FC Switch 01' -Vault 'WorkingHardInITKeePassVault'
$FCcreds = Get-Credential -UserName 'fcadmin'
Set-Secret -Name 'FC Switch 01' -Secret $FCcreds -Vault 'WorkingHardInITKeePassVault'
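To confirm the entry is back, list the vault’s secrets again:

Get-SecretInfo -Vault 'WorkingHardInITKeePassVault'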

Finally, the good news is that there is also a PowerShell KeePass module that you can use for that sort of work, so you have the means in PowerShell to do so. See Getting Started · PSKeePass/PoShKeePass Wiki (github.com).

Conclusion

That was fun, was it not? The SecretManagement and SecretStore modules are going places. I hope this helps, and happy scripting!