Deploying an OPNsense/pfSense Hyper-V virtual machine

Introduction

I recently wrote a three-part series (see part 1, part 2, and part 3) about setting up site-to-site VPNs to Azure using an OPNsense router/firewall appliance. That made me realize some of you would like to learn more about deploying an OPNsense/pfSense Hyper-V virtual machine in your lab or in production. In this blog post, I will show you how to use PowerShell to deploy one or more virtual machines that are near perfect for testing a multitude of scenarios with OPNsense. It is really that easy.

Deploying an OPNsense/pfSense Hyper-V virtual machine

Hyper-V virtual switches

I have several virtual switches on my Hyper-V host (or hosts if you are clustering in your lab). One is for connections to my ISP, another is for management, which can be (simulated) out-of-band (OOB) or not; that is up to you. Finally, you’ll want one for your workload’s networking needs. In the lab, you can forgo teaming (SET or even classic native teaming) and do whatever suits your capabilities and/or needs.

I can access more production-grade storage, server, and network hardware (DELL, Lenovo, Mellanox/Nvidia) for serious server room or data center work via befriended MVPs and community supporters. So that is where that test work happens.

Images speak louder than a thousand words

Let’s add some images to help you focus on what we are doing. The overview below gives you an idea of the home lab I run. I acquired the network gear through dumpster diving, scavenging around for discards, and gifts from befriended companies. The focus is on the required functionality without running up a ridiculous power bill and while minimizing noise. The computing hardware is PC-based and actually quite old. I don’t make a truckload of money and try to reduce my CO2 footprint. If you want $$$, you are better off in BS roles, and there, expert technical knowledge is only a hindrance.

The grey parts are the permanently running devices. These are one ISP router and what the rest of the home network requires: an OPNsense firewall, some Unifi WAPs, and a managed PoE switch. That provides my workstation with a network for Internet access. It also caters to my IoT, WiFi, and home networking needs, which are unimportant here.

The lab’s green part can be shut down unless I need it for lab scenarios, demos, or learning. Again, this saves on the electricity bill and noise.

The blue part of the network is my main workstation and about 28 virtual machines that are not all running. I fire those up when needed. And we’ll focus on this part for the OPNsense needs. What is not shown, but which is very important to me as a Veeam Vanguard, is the Veeam Backup & Replication / Veeam ONE part of the lab. That is where I design and test my “radical resilience” Veeam designs.

Flexibility is key

On my stand-alone Hyper-V workstation, I have my daily workhorse and a small data center running, all in one box. That helps keep costs down and means that, bar the ISP router and the permanent home network, I can shut down and cut power to the Barracuda appliance, all switches, the Hyper-V cluster nodes, and the iSCSI storage server when I don’t need them.

If you don’t have those parts in your lab, you need fewer NIC ports in the workstation. You can make the OOB and the LAN vSwitch internal or private, as traffic does not need to leave the host. In that case, one NIC port for your workstation and one for the ISP router will suffice. If you don’t get a public IP from your ISP, you can use a NIC port for an external vSwitch shared with your host.

This gives me a lot of flexibility, and I have chosen to integrate my workstation data center with my hardware components for storage and Hyper-V clustering.

Even with a laptop or a PC with one NIC, you can use the script I share here using internal or private virtual switches. As long as you stay on your host, that will do, with certain limitations, of course.

Three virtual switches

OOB-MGNT: This is attached to a subnet that serves no purpose other than to provide an IP address to manage network devices: appliances like the Kemp Loadmasters and the OPNsense/pfSense/VyOS appliances, physical devices like the switches, the Barracuda firewall, the home router, and other temporary network appliances. It does not participate in any configuration for high availability, nor does it carry data.

ISP-WAN: This vSwitch has an uplink to the ISP router. Virtual machines attached to it can get a DHCP address from that ISP router, providing internet access over NAT. Alternatively, you can configure it to receive a public IP address from your ISP via DHCP (cable) or PPPoE (VDSL). With some luck, your ISP hands out more than just one. If so, you can test BGP-based dynamic routing with a site-to-site VPN between OPNsense and Azure VWAN.

LAN: The LAN switch is for carrying configuration and workload data. For standard virtual machines, we configure the VLAN tag in the vNIC settings, in the GUI or via PowerShell. But network appliances must be able to carry all VLAN traffic. That is why we configure the appliance’s LAN vNICs in trunk mode and set the list of allowed VLANs they may carry.
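The deployment script below assumes these three virtual switches already exist on the host. If you still need to create them, a minimal sketch like the one below will do. The switch names match the script’s defaults, but the physical NIC name is an assumption you must adapt to your host, and internal switches only make sense when the traffic stays on the host.

#External vSwitch uplinked to the physical NIC facing the ISP router; not shared with the host.
#'NIC-ISP' is a placeholder for the name of your physical adapter.
New-VMSwitch -Name 'ISP-WAN' -NetAdapterName 'NIC-ISP' -AllowManagementOS $false
#OOB-MGNT and LAN can be internal (or private) switches when traffic does not need to leave the host.
New-VMSwitch -Name 'OOB-MGNT' -SwitchType Internal
New-VMSwitch -Name 'LAN' -SwitchType Internal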

Downloads

OPNsense: https://opnsense.org/download/ (choose the DVD image for the ISO)

pfSense: https://www.pfsense.org/download (choose AMD64 and the DVD image for the ISO)

PowerShell script for deploying an OPNsense/pfSense Hyper-V virtual machine

Change the parameters in the below PowerShell function. Call it by running CreateAppliance. You can parameterize the function at will and leverage it however you see fit. This is just to give you an idea of how to do it and how I configure the settings for the appliance(s).

function CreateAppliance() {
    Clear-Host


    $Title = @"
    ___           _                     _      _               _                     _     _
   /   \___ _ __ | | ___  _   _  /\   /(_)_ __| |_ _   _  __ _| |   /\/\   __ _  ___| |__ (_)_ __   ___
  / /\ / _ \ '_ \| |/ _ \| | | | \ \ / / | '__| __| | | |/ _`  | |  /    \ / _`  |/ __| '_ \| | '_ \ / _ \
 / /_//  __/ |_) | | (_) | |_| |  \ V /| | |  | |_| |_| | (_| | | / /\/\ \ (_| | (__| | | | | | | |  __/
/___,' \___| .__/|_|\___/ \__, |   \_/ |_|_|   \__|\__,_|\__,_|_| \/    \/\__,_|\___|_| |_|_|_| |_|\___|
           |_|            |___/
  __                ___  ___    __                            __      __ __
 / _| ___  _ __    /___\/ _ \/\ \ \___  ___ _ __  ___  ___   / / __  / _/ _\ ___ _ __  ___  ___
| |_ / _ \| '__|  //  // /_)/  \/ / __|/ _ \ '_ \/ __|/ _ \ / / '_ \| |_\ \ / _ \ '_ \/ __|/ _ \
|  _| (_) | |    / \_// ___/ /\  /\__ \  __/ | | \__ \  __// /| |_) |  _|\ \  __/ | | \__ \  __/
|_|  \___/|_|    \___/\/   \_\ \/ |___/\___|_| |_|___/\___/_/ | .__/|_| \__/\___|_| |_|___/\___|
                                                              |_|

"@

    Write-Host -ForegroundColor Green $Title

    filter timestamp { "$(Get-Date -Format "yyyy/MM/dd HH:mm:ss"): $_" }

    $VMPrefix = 'OPNsense-0'
    $Path = "D:\VirtualMachines\"
    $ISOPath = 'D:\VirtualMachines\ISO\OPNsense-23.7-vga-amd64.iso'
    #$ISOPath = 'C:\VirtualMachines\ISO\pfSense-CE-2.7.1-RELEASE-amd64.iso'
    $ISOFile = Split-Path $ISOPath -leaf
    $NumberOfCPUs = 2
    $Memory = 4GB

    $NumberOfVMs = 1 # Handy to create a highly available pair or multiple test platforms
    $VMGeneration = 2 # If an appliance supports generation 2, choose this, always! OPNsense, pfSense, and VyOS support this.
    $VmVersion = '10.0'  #If you need this VM to run on older Hyper-V hosts, choose the version accordingly

    #vSwitches
    $SwitchISP = 'ISP-WAN' #This external vSwitch is connected to the NIC towards the ISP router. Not shared with the Hyper-V host.
    $SwitchOOBMGNT = 'OOB-MGNT' #This can be a private/internal network or an external one, possibly shared with the host.
    $SwitchLAN = 'LAN' #This can be a private/internal network or an external one, possibly shared with the host.

    #vNICs and if applicable their special configuration.
    $WAN1 = 'WAN1'
    $WAN2 = 'WAN2'
    $OOBorMGNT = 'OOB'
    $LAN1 = 'LAN1'
    $LAN1TrunkList = "1-2048"
    $LAN2 = 'LAN2'
    $LAN2TrunkList = "1-2048"

    write-host -ForegroundColor green -Object ("Starting deployment of your appliance(s)." | timestamp)
    ForEach ($Counter in 1..$NumberOfVMs) {
        $VMName = $VMPrefix + $Counter

        try {
            Get-VM -Name $VMName -ErrorAction Stop | Out-Null
            Write-Host -ForegroundColor red ("The machine $VMName already exists. We are not creating it" | timestamp)
            exit
        }
        catch {
            $NewVhdPath = "$Path$VMName\Virtual Hard Disks\$VMName-OS.vhdx"
            If ( Test-Path -Path $NewVhdPath) {
                Write-Host ("$NewVhdPath already exists. Clean this up or specify a new location to create the VM." | timestamp)
            }
            else {
                Write-Host -ForegroundColor Cyan ("Creating VM $VMName in $Path ..." | timestamp)
                New-VM -Name $VMName -path $Path -NewVHDPath $NewVhdPath `
                    -NewVHDSizeBytes 64GB -Version $VmVersion -Generation $VMGeneration -MemoryStartupBytes $Memory | out-null

                Write-Host -ForegroundColor Cyan ("Setting VM $VMName its number of CPUs to $NumberOfCPUs ..." | timestamp)
                Set-VMProcessor –VMName $VMName –count 2

                #Get rid of the default network adapter - renaming it would also be an option
                Remove-VMNetworkAdapter -VMName $VMName -Name 'Network Adapter'

                Write-Host -ForegroundColor Magenta ("Adding Interfaces WAN1, WAN2, OOBMGNT, LAN1 & LAN2 to $VMName" | timestamp)
                write-host -ForegroundColor yellow -Object ("Creating $WAN1 Interface" | timestamp)
                #For first ISP uplink
                Add-VMNetworkAdapter -VMName $vmName -Name $WAN1 -SwitchName $SwitchISP
                write-host -ForegroundColor green -Object ("Created $WAN1 Interface succesfully" | timestamp)

                write-host -ForegroundColor yellow -Object ("Creating $WAN2 Interface" | timestamp)
                #For second ISP uplink
                Add-VMNetworkAdapter -VMName $vmName -Name $WAN2 -SwitchName $SwitchISP
                write-host -ForegroundColor green -Object ("Created $WAN2 Interface succesfully" | timestamp)

                write-host -ForegroundColor yellow -Object ("Creating $OOBorMGNT Interface" | timestamp)
                #Management Interface - This can be OOB if you want. Do note that, by default, the appliance routes to this interface.
                Add-VMNetworkAdapter -VMName $vmName -Name $OOBorMGNT  -SwitchName $SwitchOOBMGNT #For management network (LAN in OPNsense terminology - rename it there to OOB or MGNT as well - I don't use a workload network for this)
                write-host -ForegroundColor green -Object ("Created $OOBorMGNT Interface successfully" | timestamp)

                write-host -ForegroundColor yellow -Object ("Creating $LAN1 Interface" | timestamp)
                #For workload network (for the actual network traffic of the VMs.)
                Add-VMNetworkAdapter -VMName $vmName -Name $LAN1 -SwitchName $SwitchLAN
                write-host -ForegroundColor green -Object ("Created $LAN1 Interface succesfully" | timestamp)

                write-host -ForegroundColor yellow -Object ("Creating $LAN2 Interface" | timestamp)
                #For workload network (the actual network traffic of the VMs). The second one is optional but useful in lab scenarios.
                Add-VMNetworkAdapter -VMName $vmName -Name $LAN2 -SwitchName $SwitchLAN
                write-host -ForegroundColor green -Object ("Created $LAN2 Interface successfully" | timestamp)

                Write-Host -ForegroundColor Magenta ("Setting custom configuration top the Interface (trunking, allowed VLANs, native VLAN ..." | timestamp)
                #Allow all VLAN IDs we have in use on the LAN interfaces of the firewall/router. The actual config of VLANs happens on the appliance.
                write-host -ForegroundColor yellow -Object ("Set $LAN1 Interface to Trunk mode and allow VLANs $LAN1TrunkList with native VLAN = 0" | timestamp)
                Set-VMNetworkAdapterVlan -VMName $vmName -VMNetworkAdapterName $LAN1 -Trunk -AllowedVlanIdList $LAN1TrunkList -NativeVlanId 0
                write-host -ForegroundColor green -Object ("The $LAN1 Interface is now in Trunk mode and allows VLANs $LAN1TrunkList with native VLAN = 0" | timestamp)
                write-host -ForegroundColor yellow -Object ("Set $LAN2 Interface to Trunk mode and allow VLANs $LAN2TrunkList with native VLAN = 0" | timestamp)
                Set-VMNetworkAdapterVlan -VMName $vmName -VMNetworkAdapterName $LAN2 -Trunk -AllowedVlanIdList $LAN2TrunkList -NativeVlanId 0
                write-host -ForegroundColor green -Object ("The $LAN2 Interface is now in Trunk mode and allows VLANs $LAN2TrunkList with native VLAN = 0" | timestamp)

                Write-Host -ForegroundColor Magenta ("Adding DVD Drive, mounting appliance ISO, setting it to boot first" | timestamp)
                Write-Host -ForegroundColor yellow ("Adding DVD Drive to $VMName"  | timestamp)
                Add-VMDvdDrive -VMName $VMName -ControllerNumber 0 -ControllerLocation 8
                write-host -ForegroundColor green -Object ("Succesfully addded the DVD Drive." | timestamp)
                Write-Host -ForegroundColor yellow ("Mounting $ISOPath to DVD Drive on $VMName" | timestamp)
                Set-VMDvdDrive -VMName $VMName -Path $ISOPath
                write-host -ForegroundColor green -Object ("Mounted $ISOFile." | timestamp)
                Write-Host -ForegroundColor yellow  ("Setting DVD with $ISOPath as first boot device on $VMName and disabling secure boot"  | timestamp)
                $DVDWithOurISO = ((Get-VMFirmware -VMName $VMName).BootOrder | Where-Object Device -like *DVD*).Device
                #I am optimistic and set the secure boot template to what it most likely will be if they ever support it :-)
                Set-VMFirmware -VMName $VMName -FirstBootDevice $DVDWithOurISO `
                    -EnableSecureBoot Off -SecureBootTemplate MicrosoftUEFICertificateAuthority
                write-host -ForegroundColor green -Object ("Set vDVD with as the first boot device and disabled secure boot." | timestamp)
                $VM = Get-VM $VMName
                write-Host -ForegroundColor Cyan ("Virtual machine $VM  has been created."  | timestamp)
            }
        }
    }
    write-Host -ForegroundColor Green "+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++"
    write-Host -ForegroundColor Green "You have created $NumberOfVMs virtual appliance(s) with each two WAN ports, a Management port and
    two LAN ports. The chosen appliance ISO is loaded in the DVD as primary boot device, ready to install."
    write-Host -ForegroundColor Green "+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++"
}
#Run by calling
CreateAppliance

Conclusion

Deploying an OPNsense/pfSense Hyper-V virtual machine is easy. You can have one or more of them up and running in no time. For starters, it will take longer to download the ISOs for installing OPNsense or pfSense than to create the virtual machines.

Finally, the virtual machine configuration allows for many lab scenarios, demos, and designs. As such, they provide your lab with all the capabilities and flexibilities you need for learning, testing, validating designs, and troubleshooting.

ConvertFrom-Json is not serializable

Introduction

While writing Bicep recently, I was stumped by the fact that my deployment kept failing. I spent a lot of time troubleshooting many possible ideas about what might be causing this. As JSON is involved and I am far from a JSON syntax guru, I first focused on that. Later, I moved on to how I use JSON in Bicep and PowerShell, before finally understanding that the problem was that the output of ConvertFrom-Json is not serializable.

Parameters with Bicep

When deploying resources in Azure with Bicep, I always need to consider who has to deliver or maintain the code and the parameters. It has to be somewhat structured, readable, and understandable. It can’t be one gigantic listing that confuses people to the point they are lost. Simplicity and ease of use rule my actions here. I know when it comes to IaC, this can be a challenge. So, when it comes to parameters, what are our options here?

  • I avoid hard-coding parameters in Bicep. It’s OK for testing while writing the code, but beyond that, it is a bad idea for maintainability.
  • You can use parameter files. That is a considerable improvement, but it has its limitations.
  • I have chosen the path of leveraging PowerShell to create and maintain parameters and pass those via objects to the main bicep file for deployment. That is a flexible and maintainable approach. Sure, it is not perfect either, but neither am I.

Regarding Bicep and PowerShell, we can also put parameters in separate files and read those to create parameters. Whether this is a good idea depends on the situation. My rule of thumb is that it is worth doing when things become easier to read and maintain while reducing the places where you have to edit your IaC files. In the case of Azure Firewall Policy Rules Collection Groups, Rules collections, and Rules, it can make sense.

Bicep and JSON files

You can read file content in Bicep using the loadTextContent() function. With the json() function, you can tell Bicep that this content is JSON. So far, so good. The below is perfectly fine and works. We can loop through that variable in a resource deployment.

var firewallChildRGCs = [
  json(loadTextContent('./AFW/Policies/RGSsAfwChild01.json'))
  json(loadTextContent('./AFW/Policies/RGSsAfwChild02.json'))
  json(loadTextContent('./AFW/Policies/RGSsAfwChild03.json'))
]

However, I am not entirely happy with this. While I like it in some aspects, it conflicts with my desire not to have to edit a working Bicep file once it is in use. So what do I like about it?

It keeps Bicep clean and concise and limits the looping to iterate over the Rules Collection Groups, thus avoiding the nested looping for Rules collections and Rules. Why is that? Because I can do this:

@batchSize(1)
resource firewallChildPolicyWEUColGroups 'Microsoft.Network/firewallPolicies/ruleCollectionGroups@2022-07-01' = [for (childrcg, index) in firewallChildRGCs: {
  parent: firewallChildPolicyWEU
  name: childrcg.name
  dependsOn: [firewallParentPolicyWEUColGroups]
  properties: childrcg.properties
}]

As you can see, I loop through the variable and pass the JSON into the properties. That way, I create all Rule Collections and Rules without needing to do any nested looping via “helper” modules to get this done.

The drawback, however, is that the loadTextContent function in Bicep cannot use dynamic parameters or variables. As a result, the paths to the files need to be hard-coded into the Bicep file. That is something we want to avoid. But until that changes, it is a hard restriction. That is because parameters are evaluated at runtime (bicep deployment), whereas loadTextContent runs at compile time (bicep build). So, in contrast to the early previews of Bicep, where you “transpiled” the Bicep manually, this is now done for you automatically before the deployment. You might think passing the paths in as parameters could work, but it does not.

PowerShell and JSON files

As mentioned above, I chose to use PowerShell to create and maintain parameters, and I want to read my JSON files there. However, it prevents me from creating large, long, and complex to maintain PowerShell objects with nested arrays. Editing these is not straightforward for everyone. On top of that, it leads to the need for nested looping in Bicep via “helper” modules. While that works, and I use it, I find it more tedious with deeply nested structures and many parameters to supply. Hence I am splitting it out into easier-to-maintain separate JSON files.

Here is what I do in PowerShell to build my array to pass via an Object parameter. First, I read the JSON files from my folder.

$ChildFilePath = "../bicep/nested/AfwChildPoliciesAndRules/*"
$Files = Get-ChildItem -File $ChildFilePath -Include '*.json' -Exclude 'DONOTUSE*.json'
$Files
$AfwChildCollectionGroupsValidate = @() # We use this with ConvertFrom-Json to validate that the JSON file is OK, but cannot use it to pass as a param to Bicep
$AfwChildCollectionGroups = @()
Foreach ($File in $Files) {
    try {
        $AfwChildCollectionGroupsValidate += (Get-Content $File.FullName -Raw) | ConvertFrom-Json
        # DO NOT PUT JSON in here - the PSCustomObject is not serializable and the param passed to Bicep will then be empty!
        $AfwChildCollectionGroups += (Get-Content $File.FullName -Raw) # A string is serializable!
    }
    Catch {
        Write-Host -ForegroundColor Red "ConvertFrom-Json threw an error. Check your JSON in the RCG/RC/R files"
        Exit
    }
}
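That string array then travels to Bicep as part of an object parameter. A minimal sketch of such a deployment call, assuming a main Bicep file and a parameter shape of my choosing (the real names in your code will differ):

# Pass the array of JSON strings to Bicep inside an object parameter.
$params = @{
    ResourceGroupName       = 'WorkingHardInIT-RG'
    TemplateFile            = '../bicep/main.bicep'
    TemplateParameterObject = @{
        firewallChildpolicy = @{ RuleCollectionGroups = $AfwChildCollectionGroups }
    }
}
New-AzResourceGroupDeployment @params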

I can then use this to roll out the resources, as in the below example.

// Roll out the child Rule Collection Group(s)
var ChildRCGs = [for (rulecol, index) in firewallChildpolicy.RuleCollectionGroups: {
  name: json(rulecol).name
  properties: json(rulecol).properties
}]

Initially, the idea was that by using ConvertFrom-Json I would pass the JSON to Bicep as a parameter directly.

$AfwChildCollectionGroups += (Get-Content $File.FullName -Raw) | ConvertFrom-Json

So not only would I not need to load the files in Bicep with a hard-coded path, I would also not need to use the json() function in Bicep.

// Roll out the child Rule Collection Group(s)
var ChildRCGs = [for (rulecol, index) in firewallChildpolicy.RuleCollectionGroups: {
  name: rulecol.name
  properties: rulecol.properties
}]

However, this failed on me time and time again with properties not being found and whatnot. Below is an example of such an error.

Line |
  30 |          New-AzResourceGroupDeployment @params -DeploymentDebugLogLeve …
     |          ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
     | 2:46:20 PM - Error: Code=InvalidTemplate; Message=Deployment template validation failed: 'The template variable 'ChildRCGs' is not valid: The language expression property 'name' doesn't exist, available properties are ''.. Please see    
     | https://aka.ms/arm-functions for usage details.'.

It did not make sense at all. That was until a dev buddy asked if the object was serializable at all. And guess what? ConvertFrom-Json creates a PSCustomObject that is NOT serializable.

You can check this quickly yourself.

((Get-Content $File.FullName -Raw) | ConvertFrom-Json).gettype().IsSerializable

Will print False

While

((Get-Content $File.FullName -Raw)).gettype().IsSerializable

Will print True

With some more testing and the use of outputs, I could even visualize that the parameter remained empty! The array contained three empty {} where I expected the JSON.

ConvertFrom-Json is not serializable

I usually do not have any issues with this in my pure PowerShell scripting. But here, I pass the object from PowerShell to Bicep, and guess what? For that to work, it has to be serializable. Now, when I do this, there are no warnings or errors. It just seems to work, until you use the parameter and get errors that, at first, I did not understand. The root cause is that, in Bicep, the parameter remained empty. Needless to say, I wasted many hours trying to fix this before I finally understood the root cause!

As you can see in the code, I still use ConvertFrom-Json to test if my JSON files contain any errors, but I do not pass that JSON to Bicep as that will not work. So instead, I pass the string and still use the json() function in Bicep.

Hence, this blog post is to help others not make the mistake I made. It will also help me remember ConvertFrom-Json is not serializable.

Create virtual machines for a Veeam hardened repository lab

Introduction

In this blog post, I will give you a script to create virtual machines for a Veeam hardened repository lab.

The script has just created two virtual machines for you.

Some of you have asked me to do some knowledge transfer about configuring a Veeam hardened repository. For lab work, virtualization is your friend. I hope to show you some of the Ubuntu Linux configurations I do. When time permits, I will blog about this, and you can follow along. I will share what I can on my blog.

Running the script

Now, if you have Hyper-V running on a lab node, or on your desktop or laptop, you can create virtual machines for a Veeam hardened repository lab with the PowerShell script below. Just adjust the parameters and make sure you have the Ubuntu 20.04 Server ISO in the right place. The script creates the virtual machine configuration files under a folder with the name of the virtual machine, in the path you specify in the variables. The VMs it creates will boot into the Ubuntu setup, and we can walk through it and configure it.

Pay attention to the -Version parameter of the virtual machine. I run Windows Server 2022 and Windows 11 on my PCs, so you might need to adjust that to a version your Hyper-V installation supports.

Also, pay attention to the VLAN IDs used. They suit my lab network; they might not suit yours. Use VLAN ID 0 to disable the VLAN identifier on a NIC, as shown in the sketch below.
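A minimal sketch of the untagged option (the VM and NIC names match the script below; -Untagged is the explicit equivalent of passing VLAN ID 0):

# Either pass 0 as the access VLAN ...
Set-VMNetworkAdapterVlan -VMName 'AAAA-XFSREPO-01' -VMNetworkAdapterName SMB1 -Access -VlanId 0
# ... or remove tagging explicitly.
Set-VMNetworkAdapterVlan -VMName 'AAAA-XFSREPO-01' -VMNetworkAdapterName SMB2 -Untagged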

Clear-Host
$VMPrefix = 'AAAA-XFSREPO-0'
$Path = "D:\VirtualMachines\"
$ISOPath = 'D:\VirtualMachines\ISO\ubuntu-20.04.4-live-server-amd64.iso'
$NumberOfCPUs = 2
$Memory = 4GB
$vSwitch = 'DataWiseTech'
$NumberOfVMs = 2
$VlanIdTeam = 2
$VlanIDSMB1 = 40
$VlanIdSMB2 = 50
$VmVersion = '10.0'

ForEach ($Counter in 1..$NumberOfVMs) {
    $VMName = $VMPrefix + $Counter
    $DataDisk01Path = "$Path$VMName\Virtual Hard Disks\$VMName-DATA01.vhdx"
    $DataDisk02Path = "$Path$VMName\Virtual Hard Disks\$VMName-DATA02.vhdx"
    Write-Host -ForegroundColor Cyan "Creating VM $VMName in $Path ..."
    New-VM -Name $VMName -path $Path -NewVHDPath "$Path$VMName\Virtual Hard Disks\$VMName-OS.vhdx" `
        -NewVHDSizeBytes 65GB -Version 10.0 -Generation 2 -MemoryStartupBytes $Memory -SwitchName $vSwitch| out-null

    Write-Host -ForegroundColor Cyan "Setting VM $VMName its number of CPUs to $NumberOfCPUs ..."
    Set-VMProcessor –VMName $VMName –count 2

    Write-Host -ForegroundColor Magenta "Adding NICs LAN-HOST01, LAN-HOST02, SMB1 and SMB2 to $VMName"
    #Remove-VMNetworkAdapter -VMName $VMName -Name 'Network Adapter'

    Rename-VMNetworkAdapter -VMName $VMName -Name 'Network Adapter' -NewName LAN-HOST-01
    #Connect-VMNetworkAdapter -VMName $VMName -Name LAN -SwitchName $vSwitch
    Add-VMNetworkAdapter -VMName $VMName -SwitchName $vSwitch -Name LAN-HOST-02 -DeviceNaming On
    Add-VMNetworkAdapter -VMName $VMName -SwitchName $vSwitch -Name SMB1 -DeviceNaming On
    Add-VMNetworkAdapter -VMName $VMName -SwitchName $vSwitch -Name SMB2 -DeviceNaming On
    
    Write-Host -ForegroundColor Magenta "Assigning VLANs to NICs LAN-HOST01, LAN-HOST02, SMB1 and SMB2 to $VMName"
    Set-VMNetworkAdapterVlan -VMName $VMName -VMNetworkAdapterName LAN-HOST-01 -Access -VLANId $VlanIdTeam
    Set-VMNetworkAdapterVlan -VMName $VMName -VMNetworkAdapterName LAN-HOST-02 -Access -VLANId $VlanIdTeam  
    Set-VMNetworkAdapterVlan -VMName $VMName -VMNetworkAdapterName SMB1 -Access -VLANId $VlanIdSMB1
    Set-VMNetworkAdapterVlan -VMName $VMName -VMNetworkAdapterName SMB2 -Access -VLANId $VlanIdSmb2

    Set-VMNetworkAdapter -VMName $VMName -Name LAN-HOST-01 -DhcpGuard On -RouterGuard On -DeviceNaming On -MacAddressSpoofing On -AllowTeaming On
    Set-VMNetworkAdapter -VMName $VMName -Name LAN-HOST-02 -DhcpGuard On -RouterGuard On -MacAddressSpoofing On -AllowTeaming On
    Set-VMNetworkAdapter -VMName $VMName -Name SMB1 -DhcpGuard On -RouterGuard On -MacAddressSpoofing Off -AllowTeaming off
    Set-VMNetworkAdapter -VMName $VMName -Name SMB2 -DhcpGuard On -RouterGuard On -MacAddressSpoofing Off -AllowTeaming off

    Write-Host -ForegroundColor yellow "Adding DVD Drive to $VMName"
    Add-VMDvdDrive -VMName $VMName -ControllerNumber 0 -ControllerLocation 8 

    Write-Host -ForegroundColor yellow "Mounting $ISOPath to DVD Drive on $VMName"
    Set-VMDvdDrive -VMName $VMName -Path $ISOPath

    Write-Host -ForegroundColor White "Setting DVD with $ISOPath as first boot device on $VMName"
    $DVDWithOurISO = ((Get-VMFirmware -VMName $VMName).BootOrder | Where-Object Device -like *DVD*).Device
    
    Set-VMFirmware -VMName $VMName -FirstBootDevice $DVDWithOurISO `
    -EnableSecureBoot On -SecureBootTemplate MicrosoftUEFICertificateAuthority

    Write-Host -ForegroundColor Cyan "Creating two data disks and adding them to $VMName"
    New-VHD -Path $DataDisk01Path -Dynamic -SizeBytes 150GB | out-null
    New-VHD -Path $DataDisk02Path -Dynamic -SizeBytes 150GB | out-null

    Add-VMHardDiskDrive -VMName $VMName -ControllerNumber 0 `
    -ControllerLocation 1 -ControllerType SCSI  -Path $DataDisk01Path

    Add-VMHardDiskDrive -VMName $VMName -ControllerNumber 0 `
    -ControllerLocation 2 -ControllerType SCSI  -Path $DataDisk02Path

    $VM = Get-VM $VMName 
    write-Host "VM $VM  has been created" -ForegroundColor green
    write-Host ""
}

Conclusion

In conclusion, that’s it for now. Play with the script, and you will create virtual machines for a Veeam hardened repository lab in no time. That way, you are ready to test and educate yourself. Don’t forget that you need sufficient resources on your host. Virtualization is cool, but it is not magic.

Some of the settings won’t make sense to some of you, but in future posts, this will become clear. These are specific to Ubuntu networking on Hyper-V.

I hope to publish the steps I take in the coming months. As with many of you, time is my limiting factor, so have patience. In the meanwhile, you can read up on the Veeam hardened repository.

Failing compilation with Azure Automation State Configuration: Cannot connect to CIM server. The specified service does not exist as an installed service

Introduction

You can compile Desired State Configuration (DSC) configurations in Azure Automation State Configuration, which functions as a pull server. Next to doing this via the Azure portal, you can also use PowerShell. The latter allows for easy integration in DevOps pipelines and provides the flexibility to deal with complex parameter constructs. So, this is my preferred option. Of course, you can also push DSC configurations to Azure virtual machines via ARM templates. But I like the pull mechanisms for life cycle management just a bit more as we can update the DSC config and push it out when needed. So, that’s all good, but under certain conditions, you can get the following error: Cannot connect to CIM server. The specified service does not exist as an installed service.

When can you get into this pickle?

DSC itself is PowerShell, and that comes in quite handy. Sometimes, the logic you use inside DSC blocks is insufficient to get the job done as needed. With PowerShell, we can leverage the power of scripting to get the information and build the logic we need. One such example is formatting data disks. Configuring network interfaces would be another. A disk number is not always reliable and consistent, leading to failed DSC configurations.
For example, the block below is a classic way to wait for a disk, and when it shows up, initialize, format, and assign a drive letter to it.

xWaitforDisk NTDSDisk {
    DiskNumber       = 2
    RetryIntervalSec = 20
    RetryCount       = 30
}
xDisk ADDataDisk {
    DiskNumber = 2
    DriveLetter = "N"
    DependsOn   = "[xWaitForDisk]NTDSDisk"
} 

The disk number may vary depending on whether your Azure virtual machine has a temp disk or not; whether you use disk encryption or not can also trip up disk numbering. No worries, DSC has more up its sleeve and allows you to use the disk id instead of the disk number. That id is truly unique and consistent. You can quickly grab a disk’s unique id with PowerShell, like below.
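A minimal sketch, run on a VM with the same disk layout (disk number 2 matches the earlier block):

# List all disks with their unique ids, then grab the one for disk 2.
Get-Disk | Select-Object Number, FriendlyName, UniqueId
$NTDSDiskUniqueId = (Get-Disk -Number 2).UniqueId

The DSC resources below then reference that value: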

xWaitforDisk NTDSDisk {
    DiskIdType       = 'UniqueID'
    DiskId           = $NTDSDiskUniqueId #'1223' #GetScript #$NTDSDisk.UniqueID
    RetryIntervalSec = 20
    RetryCount       = 30
}

xDisk ADDataDisk {
    DiskIdType  = 'UniqueID'
    DiskId      = $NTDSDiskUniqueId #GetScript #$NTDSDisk.UniqueID
    DriveLetter = "N"
    DependsOn   = "[xWaitForDisk]NTDSDisk"
}

PowerShell in DSC causes a compilation error

So we upload and compile this DSC configuration with the below script.

$params = @{
    AutomationAccountName = 'MyScriptLibrary'
    ResourceGroupName     = 'WorkingHardInIT-RG'
    SourcePath            = 'C:\Users\WorkingHardInIT\OneDrive\AzureAutomation\AD-extension-To-Azure\InfAsCode\Up\App\PowerShell\ADDSServer.ps1'
    Published             = $true
    Force                 = $true
}

$UploadDscConfiguration = Import-AzAutomationDscConfiguration @params

while ($null -eq $UploadDscConfiguration.EndTime -and $null -eq $UploadDscConfiguration.Exception) {
    $UploadDscConfiguration = $UploadDscConfiguration | Get-AzAutomationDscCompilationJob
    write-Host -foregroundcolor Yellow "Uploading DSC configuration"
    Start-Sleep -Seconds 2
}
$UploadDscConfiguration | Get-AzAutomationDscCompilationJobOutput -Stream Any
Write-Host -ForegroundColor Green "Uploading done:"
$UploadDscConfiguration


$params = @{
    AutomationAccountName = 'MyScriptLibrary'
    ResourceGroupName     = 'WorkingHardInIT-RG'
    ConfigurationName     = 'ADDSServer'
}

$CompilationJob = Start-AzAutomationDscCompilationJob @params 
while ($null -eq $CompilationJob.EndTime -and $null -eq $CompilationJob.Exception) {
    $CompilationJob = $CompilationJob | Get-AzAutomationDscCompilationJob
    Start-Sleep -Seconds 2
    Write-Host -ForegroundColor cyan "Compiling"
}
$CompilationJob | Get-AzAutomationDscCompilationJobOutput -Stream Any
Write-Host -ForegroundColor green "Compiling done:"
$CompilationJob

So, life is good, right? Yes, until you try and compile that (DSC) configuration in Azure Automation State Configuration. Then, you will get a nasty compile error.

Cannot connect to CIM server. The specified service does not exist as an installed service

“Exception: The running command stopped because the preference variable “ErrorActionPreference” or common parameter is set to Stop: Cannot connect to CIM server. The specified service does not exist as an installed service.”

Or in the Azure Portal:

Cannot connect to CIM server. The specified service does not exist as an installed service

The Azure compiler wants to validate the code, and as it cannot get access to the host, compilation fails. The configs compile on the Azure Automation server, not on the target node (which does not even exist yet) or on localhost. I find this odd. When I compile code in C#, C++, or VB.NET, it will not fail because it cannot connect to a server to validate my code by grabbing disk or interface information at compile time. The DSC code only needs to be correct and valid. I wish Microsoft would fix this behavior.

Workarounds

Compile DSC locally and upload

Yes, I know you can pre-compile the DSC locally and upload it to the automation account. However, the beauty of using the automation account is that you don’t have to bother with all that. I like to keep the flow as easy-going and straightforward as possible for automation. Unfortunately, compiling locally and uploading doesn’t fit into that concept nicely.
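For completeness, a minimal sketch of that local-compile path; the account, resource group, and configuration names come from the scripts above, while the node name localhost is an assumption:

# Compile the configuration locally; inline PowerShell now runs on your machine, where CIM is available.
. .\ADDSServer.ps1            # dot-source the DSC configuration script
ADDSServer -OutputPath .\Mof  # produces .\Mof\<NodeName>.mof
# Upload the resulting MOF as a node configuration to the Automation account.
Import-AzAutomationDscNodeConfiguration -ResourceGroupName 'WorkingHardInIT-RG' `
    -AutomationAccountName 'MyScriptLibrary' -ConfigurationName 'ADDSServer' `
    -Path .\Mof\localhost.mof -Force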

Upload a PowerShell script to a storage container in a storage account

We can store a PowerShell script in an Azure storage account. In our example, that script does what we want: find, initialize, and format a disk.

Get-Disk | Where-Object { $_.NumberOfPartitions -lt 1 -and $_.PartitionStyle -eq "RAW" -and $_.Location -match "LUN 0" } |
Initialize-Disk -PartitionStyle GPT -PassThru | New-Partition -DriveLetter "N" -UseMaximumSize |
Format-Volume -FileSystem NTFS -NewFileSystemLabel "NTDS-DISK" -Confirm:$false

From that storage account, we download it to the Azure VM when DSC is running. This can be achieved in a script block.

$BlobUri = 'https://scriptlibrary.blob.core.windows.net/scripts/DSC/InitialiseNTDSDisk.ps1' #Get-AutomationVariable -Name 'addcInitialiseNTDSDiskScritpBlobUri'
$SasToken = '?sv=2021-10-04&se=2022-05-22T14%3A04%8S67QZ&cd=c&lk=r&sig=TaeIfYI63NTgoftSeVaj%2FRPfeU5gXdEn%2Few%2F24F6sA%3D'
$CompleteUri = "$BlobUri$SasToken"
$OutputPath = 'C:\Temp\InitialiseNTDSDisk.ps1'

Script FormatAzureDataDisks {
    SetScript  = {

        Invoke-WebRequest -Method Get -uri $using:CompleteUri -OutFile $using:OutputPath
        . $using:OutputPath
    }

    TestScript = {
        Test-Path $using:OutputPath
    }

    GetScript  = {
        @{Result = (Get-Content $using:OutputPath) }
    }
} 

But we need to set up a storage account and upload a PowerShell script to a blob. We also need a SAS token to download that script, or we must allow public access to it. Instead of hardcoding this information in the DSC script, we can store it in Automation variables. We could even abuse Automation credentials to store the SAS token securely. All that is possible, but it requires more infrastructure, maintenance, and security work when integrating this into the DevOps flow.
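A minimal sketch of that variable/credential approach; the asset names are assumptions, and these cmdlets only resolve when the DSC configuration compiles inside Azure Automation:

# Pull the blob URI from an Automation variable and the SAS token from an Automation credential.
$BlobUri = Get-AutomationVariable -Name 'addcInitialiseNTDSDiskScriptBlobUri'
$SasCred = Get-AutomationPSCredential -Name 'NTDSDiskScriptSasToken'
$SasToken = $SasCred.GetNetworkCredential().Password
$CompleteUri = "$BlobUri$SasToken"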

PowerShell to generate a PowerShell script

The least convoluted workaround that I found is to generate a PowerShell script in the Script block of the DSC configuration and save that to the Azure VM when DSC is running. In our example, this becomes the below script block in DSC.

Script FormatAzureDataDisks {
    SetScript  = {
        $PoshToExecute = 'Get-Disk | Where-Object { $_.NumberOfPartitions -lt 1 -and $_.PartitionStyle -eq "RAW" -and $_.Location -match "LUN 0" } | Initialize-Disk -PartitionStyle GPT -PassThru | New-Partition -DriveLetter "N" -UseMaximumSize | Format-Volume -FileSystem NTFS -NewFileSystemLabel "NTDS-DISK" -Confirm:$false'
        $PoshToExecute | Out-File $using:OutputPath
        . $using:OutputPath
    }
    TestScript = {
        Test-Path $using:OutputPath 
    }
    GetScript  = {
        @{Result = (Get-Content $using:OutputPath) }
    }
}

So, in SetScript, we build the actual PowerShell command we want to execute on the host as a string. Then we persist it to file using the $OutputPath variable, which we can access inside the Script block via $using:OutputPath. Finally, we execute the persisted script by dot sourcing it with ". $using:OutputPath". In TestScript, we test for the existence of the file, and GetScript returns the file’s content; its output is ignored, but it needs to be there. The maintenance is easy: you edit the string variable holding the PowerShell in the DSC configuration file, which you then upload and compile. That’s it.

To be fair, this will not work in all situations, and you might need to download protected files. In that case, the above solutions will help out.

Conclusion

Creating a PowerShell script in the DSC configuration file requires less effort and infrastructure maintenance than uploading such a script to a storage account. So that’s the pragmatic trick I use. I wish the compilation in an Automation account would succeed, but it doesn’t. So, this is the next best thing. I hope this helps someone out there facing the same issue work around the error: Cannot connect to CIM server. The specified service does not exist as an installed service.