It almost snuck by me, but on November 15th, 2020, Microsoft announced that a web app in Azure App Service now supports NAT Gateway. That might not seem like a big deal, but it can come in quite handy! Also, we have been waiting for this for quite a while.
Why is this useful?
For one, the NAT Gateway provides a dedicated, fixed IP address for outgoing traffic. That can be quite handy for whitelisting use cases. You could use Azure Firewall if you want to control egress traffic over a dedicated fixed IP address by FQDN, but then you miss out on the second benefit, scalability. On top of that, Azure Firewall is expensive overkill if all you need is a dedicated IP for outbound traffic.
An Azure NAT Gateway also helps with scaling the web application because it delivers 64,000 usable outbound SNAT ports. The Azure App Service itself has a limited number of connections you can have to the same address and port.
How to use a NAT Gateway with Azure App Service
Integrate your app with an Azure virtual network. You need to use Regional VNet Integration in order to leverage an Azure NAT Gateway. Regional VNet Integration is available for web apps in a Standard, Premium V2, or Premium V3 App Service plan. It works with Function apps as well as web or API apps. Note that some Standard App Service plans cannot use Regional VNet Integration if they run on older App Service deployments on older hardware stamps. See Clarify if PremiumV2 is required for VNET integration.
Route all the outbound traffic into your Azure virtual network.
Provision a NAT Gateway in the same virtual network and configure it with the subnet used for VNet Integration.
From now on outbound (egress) traffic initiated by your web app in Azure App Service will go out over the IP address of the NAT Gateway.
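For those who prefer to script this, here is a minimal Azure PowerShell sketch of the NAT Gateway part. The resource group, names, region, and subnet are placeholders, and it assumes the web app already uses Regional VNet Integration into that subnet.

```powershell
# Public IP and NAT Gateway (Standard SKU); names and region are placeholders.
$pip = New-AzPublicIpAddress -ResourceGroupName 'rg-web' -Name 'pip-natgw' `
    -Location 'westeurope' -Sku Standard -AllocationMethod Static

$natGw = New-AzNatGateway -ResourceGroupName 'rg-web' -Name 'natgw-web' `
    -Location 'westeurope' -Sku Standard -PublicIpAddress $pip

# Associate the NAT Gateway with the subnet used for Regional VNet Integration.
$vnet = Get-AzVirtualNetwork -ResourceGroupName 'rg-web' -Name 'vnet-web'
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name 'snet-appservice-integration' `
    -AddressPrefix '10.0.1.0/24' -NatGateway $natGw | Set-AzVirtualNetwork

# At the time of writing, routing all outbound web app traffic into the VNet
# is switched on with the WEBSITE_VNET_ROUTE_ALL = 1 app setting on the web app.
```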
I use Storage Spaces in various environments for several use cases, even with clients (see Move Storage Spaces from Windows 8.1 to Windows 10). In this blog post, we’ll walk through replacing a failed disk in a stand-alone Storage Spaces setup with Mirror Accelerated Parity. I have a number of DELL R740XD stand-alone servers with a ton of storage that I use as backup targets. See A compact, high capacity, high throughput, and low latency backup target for Veeam Backup & Replication v10 for a nice article on a high-performance design with such servers. They deliver the repositories for the extents in Veeam Backup & Replication Scale-out Backup Repositories. They have NVMe drives for the performance tier in Storage Spaces with Mirror Accelerated Parity.
Even with the best hardware and vendor, a disk can fail, and yes, it happened to one of our NVMe drives. Reseating the disk did not help and we were one disk short in Device Manager. So yes, the disk was dead (or worse, the bus where it was seated, but that is less likely).
Replacing a failed disk in a stand-alone Storage Spaces with Mirror Accelerated Parity
So let’s take a look by running
Get-PhysicalDisk
I immediately see that we have an NVMe that has lost communication. It is gone and no longer displays a disk number. It seems to be broken.
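If you want to narrow the output down to the problem disk, a small sketch like this works (the properties are standard Get-PhysicalDisk output; the filter value is what I would expect a dead disk to report):

```powershell
# List only the disks that are not healthy; a dead NVMe typically shows
# OperationalStatus 'Lost Communication' and no usable DeviceId.
Get-PhysicalDisk |
    Where-Object { $_.HealthStatus -ne 'Healthy' } |
    Select-Object FriendlyName, DeviceId, MediaType, OperationalStatus, HealthStatus
```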
That means we need to get rid of it in the storage spaces pool so we can replace it.
Getting rid of the failed disk properly
I put the disk that lost communication into a variable
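I then retire that disk, repair the virtual disks, and remove it from the pool. A minimal sketch of those commands, assuming the dead disk reports "Lost Communication" and with the storage pool name as a placeholder, looks like this:

```powershell
# Grab the disk that lost communication.
$FailedDisk = Get-PhysicalDisk | Where-Object { $_.OperationalStatus -eq 'Lost Communication' }

# Retire it so Storage Spaces stops allocating to it.
$FailedDisk | Set-PhysicalDisk -Usage Retired

# Repair the virtual disks so the data is rebuilt onto the remaining disks.
# You can follow the progress with Get-StorageJob.
Get-VirtualDisk | Repair-VirtualDisk

# Once the repair jobs have finished, remove the dead disk from the pool
# ('NVMe-Pool' is a placeholder for your storage pool's friendly name).
Remove-PhysicalDisk -StoragePoolFriendlyName 'NVMe-Pool' -PhysicalDisks $FailedDisk
```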
Let this complete and check again with Get-PhysicalDisk; you will see the problematic disk is gone. Note that there are only 7 NVMe disks left.
Device Manager does not show an unrecognized disk that is still visible to the OS somehow, so we cannot try resetting it to get it back into action. We need to replace it, so we request a replacement disk from DELL support and swap them out.
Putting the new disk into service
Now that we have our new disk, we want to add it to the storage pool. You should see the new disk in Disk Management as online and not initialized, or when you select Add Physical Disk on the storage pool in Server Manager.
But we were doing so well in PowerShell, so let’s continue there and add the new disk to the storage pool.
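A minimal sketch of that, again with the pool name as a placeholder:

```powershell
# Find the freshly swapped-in disk; it is the only one that can still be pooled.
$NewDisk = Get-PhysicalDisk -CanPool $true

# Add it to the existing storage pool ('NVMe-Pool' is a placeholder).
Add-PhysicalDisk -StoragePoolFriendlyName 'NVMe-Pool' -PhysicalDisks $NewDisk

# Rebalance the pool so the new disk gets its share of the existing data.
Optimize-StoragePool -FriendlyName 'NVMe-Pool'
```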
When you run Get-PhysicalDisk again, you will see that there are no disks left that can be pooled, meaning they are all in the storage pool. And we have 8 NVMe disks again. Good!
That’s it. All is well again and rebalanced. It also ensures the storage capacity contributed by the replaced disk will be available in the performance tier when I want to create an extra virtual disk. Storage Spaces at its best, giving me the opportunity to leverage NVMe with other disks while maximizing the benefits of ReFS.
As you have seen, replacing a failed disk in a stand-alone Storage Spaces setup with Mirror Accelerated Parity is not too hard to do. You just need to wrap your head around how Storage Spaces works and investigate the commands a little. For that, I recommend practicing on a virtual machine.
Let’s take a look at some of the recently released LoadMaster LMOS 7.2.52 firmware feature enhancements. You can read the release notes on their website. Next to security updates, there are two entries in the new features and change notices that caught my attention. The first is the new feature that delivers the Ability to use SNI in SubVS, as well as SNI-Hostname Pass-Through. The second is the enhancement that lets us configure Per-VS Health Check Settings. Both are very welcome, as I have to deal with such scenarios frequently. So let’s take a look.
Ability to use SNI in SubVS, as well as SNI-Hostname Pass-Through
The Server Name Indication (SNI) feature has been enhanced to support the following:
The ability to pass through the original hostname as the SNI hostname to the Real Server.
The ability to specify a different (manual) SNI hostname per SubVS. This is the same as the previous functionality to specify this on the parent Virtual Service (Reencryption SNI Hostname) but on the SubVS level with content switching.
These new features help make scenarios, where you may want to consolidate as many services as possible to the least amount of IP addresses, easier and less confusing to implement.
The Pass-through SNI hostname check box is available in the SSL Properties section of the Virtual Service modify screen. When this is enabled and when re-encrypting, the received SNI hostname is passed through as the SNI to be used to connect to the Real Server. If the Virtual Server has a Reencryption SNI Hostname set, this overrides the received SNI.
It is also possible to set the re-encryption SNI hostname in a SubVS (in the Basic Properties section). If it is set in a SubVS, this overrides the parent Virtual Service value and/or the received SNI value.
Per-VS Health Check Settings
Until now, health check settings were global-only, located on the Rule & Checking > Check Parameters UI page: Check Interval, Connect Timeout, and Retry Count.
Now, we can also configure these settings on the Virtual Service Real Servers tab. This means we can tune the health check behavior for specific VSs and SubVSs. By default, real server health checks use the global settings. The VS or SubVS settings change as the global settings change. So the default behaves as in previous releases.
Once you change a check parameter on a VS or SubVS level, however, those custom VS or SubVS settings will remain unchanged regardless of changes made to the global setting. The UI indicates whether the currently in-use value is the global value or is set to a custom value.
Being able to set the real server health check interval, timeout, and retry count helps us in those scenarios where we have services with different needs. The global settings were always a balancing act between all the services. So this capability is very welcome as well.
It is great to see Kemp Technologies’ offerings evolve and improve. They have established themselves quite well over the years. It makes me happy that, way back, I chose them for their price/value and excellent support. I still remember the first HA pair (LM-2200) I ever deployed (2011) for a real-time reference GPS positioning system. That was the beginning of a very successful journey building solutions with LoadMasters using both physical and virtual appliances.
A File-Level Restore in a hardened environment can trip you up. In Veeam Backup & Replication the ability to do file-level restores from virtual machine backups is a handy option to have. In secured environments with an isolated and protected Veeam backup fabric, this is not always a straightforward or fast process. Let’s look at this and make some suggestions for a solution or workarounds.
Veeam File-Level Restore in a hardened environment
Nowadays, it is a common best practice that the Veeam Backup & Replication environment is isolated and independent from the fabrics it protects. It is even more than a best practice; ransomware has educated many of us fast and hard on the need for this.
In such an environment, every server in the Veeam backup fabric is isolated, with its own credentials, and is not joined to or in any way dependent on the environment it protects. Dedicated accounts are used to run the actual backup jobs, and these are not used for normal administrative or operational tasks.
The majority of Veeam functionality works just fine in such an environment, but there are some things that need to be addressed. One such thing is a file-level restore (FLR). In a secured and isolated environment, an FLR job cannot leverage admin shares to gain access to the virtual machine where the files are being restored or to access the data in the backup. To work around this, Veeam can leverage PowerShell Direct, by which it can securely restore the files into the virtual machine anyway.
But leveraging PowerShell Direct comes at the cost of speed. 7 MB/s is okay for a small number of smaller files, but it becomes frustratingly slow when restoring many, larger files. Minutes turn into hours. In some cases, this just won’t do. In our case, the DBA who needed to do the database restore was not happy, and the developers waiting for the restored database were not overly pleased either.
So what other options do we have?
Set up a secured file share in the protected environment
Below, we will discuss the other options you have for a File-Level Restore in a hardened environment with Veeam Backup & Replication. But first, we prepare a landing zone where we can copy restored files to. For this, we set up a restore share in the protected environment that only the few admins responsible for backups and restores can write to. No one else can enumerate, read, or write to that share. When a file gets restored there, they can copy it over to where the people that need it can access it. This helps protect restored files and limits access to just the people that need to get to them. The accounts used to access that share from the isolated backup environment do not get admin permissions on the system where that share resides or anywhere else.
Providing a restore share in the protected environment is safe. You only allow certain admins to have access to it, no one else. Since the less trusted environment (the one we protect) allows access from a more secure environment (the backup environment), this does not expose the backup environment to extra danger. At no point does this allow any access to the backup environment from the protected environment.
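As a rough sketch of what that share setup can look like, run on the file server in the protected environment (the path, share name, and group names are placeholders for your environment):

```powershell
# Create the landing folder for restored files.
New-Item -Path 'D:\Restores' -ItemType Directory -Force | Out-Null

# Share it so only the restore admins can write to it; nobody else gets access.
New-SmbShare -Name 'Restores' -Path 'D:\Restores' -FullAccess 'CORP\Restore-Admins'

# Tighten the NTFS permissions as well: remove inheritance and grant only what is needed.
icacls.exe 'D:\Restores' /inheritance:r `
    /grant 'CORP\Restore-Admins:(OI)(CI)M' 'SYSTEM:(OI)(CI)F' 'BUILTIN\Administrators:(OI)(CI)F'
```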
Use the FLR wizard to copy the files
You can use the FLR wizard to copy the file locally in the secured Veeam backup environment.
From there, you can copy the files to the restore share in the protected environment. This is very fast; on our network we get 230 MB/s.
But what if you don’t have the space for that locally? Not everyone can have a large local restore disk. Would it not be handy to copy the files to the restore share in the protected environment directly? Well yes, you can try that in the wizard, and it will ask for the credentials to access the file share.
But this will fail with an “Access is denied” error when it tries to read and restore the backups.
You could try to work around it with the account VBR uses to actually make the backups. But that won’t do. For one, those credentials should be guarded and not used by anything or anyone but VBR. Secondly, it doesn’t work anyway.
So are you out of luck? The good news is no, you are not, so read on!
Copy the files directly from the VeeamFLR mountpoint
If you don’t have the space to restore the files locally in the secure backup environment, there is another trick. When you look at the restore log, you will see what the mount server is. That is the node where you can find the C:\VeeamFLR folder, which is where the content of the virtual machine’s VHDX volumes is mounted. This means that on that node you can navigate to the folders or files you want to restore.
Select the ones you want to recover and just copy them to the restore share in the protected environment. You just need to provide the credentials of the dedicated account with write access to that share; those are the only rights you need there. The copy is again fast at 230 MB/s. That is a lot better than 7 MB/s.
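If you prefer to script the copy from the mount server, a small sketch could look like this (the share path, the source folder under C:\VeeamFLR, and the account are placeholders; the VeeamFLR subfolder layout will differ per restore session):

```powershell
# Map the restore share in the protected environment with the dedicated restore account.
$cred = Get-Credential -Message 'Account with write access to the restore share'
New-PSDrive -Name RestoreShare -PSProvider FileSystem -Root '\\fileserver\Restores' -Credential $cred | Out-Null

# Copy the recovered files straight from the VeeamFLR mount point on the mount server.
Copy-Item -Path 'C:\VeeamFLR\MyVM\Volume1\SQLBackups\*' -Destination 'RestoreShare:\' -Recurse

# Clean up the mapped drive when done.
Remove-PSDrive -Name RestoreShare
```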
There is an issue with this, however. In this scenario, you need access to the server where the volumes of the virtual machine get mounted. Normally that would be the repository where the backup files reside and, guess what, you lock these down as much as possible and don’t log on to them unless needed. You could specify another mount server, but then it needs to be a server you have access to, and it copies the data over the network, which is slower. With 10 Gbps or more, that is not an issue, but with 1 Gbps it can take a while with a lot of large files. Can we fix this? Yes, with Mount to Console!
Mount to Console to the rescue
But hey, this would not be Veeam if they did not have yet another option up their sleeve! You can use the Mount to Console button on the ribbon of the Veeam Backup browser to create an extra mount point on your VBR console server. From there you can copy the files to the restore share in the protected environment a lot faster as well. All you need for this is a local user (a non-administrator account) with the Veeam Restore Operator role.
Veeam Enterprise Manager addresses the challenges we worked around here, but you’ll need Enterprise (Plus) licensing to get full-featured use of it. It is nice, as it takes care of the above scenario and lets you assign restore rights to people without compromising your backup environment’s operational security. So, yes, use Enterprise Manager when possible. If not, see the above workarounds for other options.
Conclusion
Having some insight into how Veeam Backup & Replication works can come in handy at times. It also helps if you can think a bit outside the box and act on those insights to come up with some alternatives or workarounds. This is exactly what we did here, with great results. The DBA could do his restores faster, while the devs got the version of the database they needed. Do note that the correct answer lies in Veeam Enterprise Manager but, lacking that, you now have some options. I hope these insights into a File-Level Restore in a hardened environment help you out someday.