Protecting your Veeam Backup and Replication Server is critical

Introduction

In this blog, we will demonstrate one of the things that can go wrong when someone gets hold of your Veeam Backup & Replication server administrative credentials. They can do more than “just” delete all your backups, replicas, etc. When they can log on to the Veeam Backup & Replication Server itself, they can also grab all the credentials from the Veeam configuration database. Those credentials normally carry privileges that you do not want to fall into the wrong hands. These are quite literally the keys to the kingdom. Hence, protecting your Veeam Backup & Replication Server is critical.

Protecting your Veeam Backup & Replication Server is critical

Security is not about one feature, technology or action. It takes a holistic approach. It starts with physical security. You also need to adhere rigorously to the principle of least privilege. All this while locking down access, reducing the attack surface, leveraging segmentation, etc.

A key element lies in prevention. You must avoid the harvesting of those credentials. For this reason, you absolutely must practice privileged credential hygiene. Today you also want to leverage multi-factor authentication to protect access even better. All this, and more, helps prevent unauthorized access in the first place, even when one measure fails. Read Veeam Backup & Replication 9.5 Update 3 — Infrastructure Hardening for more details on this.

Add MFA to protect your credentials from being abused when compromised

Veeam Backup & Replication itself requires credentials to do its work of protecting data and workloads. Access to servers, proxies, repositories, interaction with virtual machines, etc. cannot happen without such credentials. Veeam encrypts the passwords of these accounts with strong encryption. It uses the Microsoft CryptoAPI (FIPS certified) with a machine-specific encryption key for this.

As a side note, you might have seen the big fuss around the critical vulnerability in January 2020 regarding CryptoAPI. This is a reminder of why you need to keep your systems patched.

CryptoAPI

It ensures that decryption of those passwords fails on any host other than the one where they were encrypted. This means that even if someone steals the configuration database, or in some shape, way or form gets hold of the encrypted passwords in the database, they cannot decrypt them. This is an industry standard and quite safe. What you need to know is that when someone gains access to your machine with local administrative rights, all bets are off.
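To illustrate the principle, here is a minimal PowerShell sketch of machine-bound encryption using the Windows Data Protection API. This is not Veeam's actual code, just a demonstration of why a blob protected with the machine key only decrypts on the machine that protected it.

# Minimal illustration of machine-bound encryption, not Veeam's actual implementation.
Add-Type -AssemblyName System.Security
$scope  = [System.Security.Cryptography.DataProtectionScope]::LocalMachine
$secret = [System.Text.Encoding]::UTF8.GetBytes('MyDemoPassword')

# Protect with the machine key: the ciphertext is tied to this machine.
$cipher = [System.Security.Cryptography.ProtectedData]::Protect($secret, $null, $scope)

# Decrypting on the same machine succeeds and returns the plain text ...
$plain = [System.Security.Cryptography.ProtectedData]::Unprotect($cipher, $null, $scope)
[System.Text.Encoding]::UTF8.GetString($plain)

# ... but copy $cipher to another machine and Unprotect throws a CryptographicException
# ("Key not valid for use in specified state"), which is exactly the point.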

What can happen?

The moment an attacker logs on to the Veeam Backup & Replication server with administrative rights, it is game over.

They will be able to grant themselves access to SQL Server and query it for the credentials. With that information, all they need to do is load and use a Veeam DLL to decrypt them. When this runs on the server where you encrypted them, it will succeed. If anyone got hold of the encrypted passwords and tried to decrypt them on another host, this would fail, as that host has the wrong machine-specific key.

Let me emphasize once more that this is not an insecure implementation by Veeam. When you store encrypted passwords for a service, that service must be able to decrypt them. Otherwise, it can never use them. You cannot get the passwords via the GUI or the Veeam PowerShell commands. But via code, this is quite possible.

Sample code to prove that protecting your Veeam Backup & Replication Server is critical

I assembled a little PowerShell script that grabs the data from the Veeam configuration database. For this purpose, I filter out the passwords that have an empty string. We loop through the ones that remain and decrypt the passwords. In the end, I decided not to post the script as it might help people with bad intentions. I know it won’t stop bad actors cold in their tracks, and maybe I will update this post later. But for now, I did not include it.

Sorry, right now you can only see the output below from an example VBR server

In the screenshot below you can see the results. This is a demo lab with demo credentials, so no worries about showing this to you. Remember that you can only decrypt the passwords on the Veeam Backup & Replication server where you encrypted them.

There they are, the users with the encrypted and decrypted passwords

To prove a point, we will grab the encrypted passwords and try to decrypt them on another VBR server, as we have one around to do so. This fails with the error: Exception calling “GetLocalString” with “1” argument(s): “Key not valid for use in specified state.”

No matter what encrypted password you try to decrypt on another host, it will fail, as you don’t have the correct machine-specific key.

As you can see, even if you get hold of the encrypted strings, they cannot be decrypted on another machine. You must do this on the machine that encrypted them.

Conclusion

While to some this might be a shock when they first learn of it, it is not a gaping security hole. It just shows that security is more than encryption. It takes multiple measures on multiple levels to protect assets. I repeat: protecting your Veeam Backup & Replication Server is critical. For many people, this is indeed an eye-opener. The lesson is that you must protect your assets adequately. Do not bank on one feature to hold off any and all threats by itself. That is asking for the impossible.

I do hope that all Veeam software itself will also support MFA in the future. That would also help protect access via the Veeam Backup & Replication console.

I am a Veeam Vanguard 2020


In these somewhat disconcerting times (Corona & COVID-19, for those reading this in 10 years’ time) good news still happens. Early this evening I got an e-mail from Nikola Pejková stating that my nomination for Veeam Vanguard 2020 was approved.

The Veeam Vanguard logo and program

This, my dear readers, puts a smile on my face and makes me happy. It is a great program to be in. We share experiences, knowledge and learn with and from each other. This translates into better insights, designs, and implementations of Veeam solutions. Not only for ourselves, but also for our employers, customers and the community at large. With them, we share all this knowledge. This happens via blogging, user groups, speaking at conferences, webinars, webcasts, showcasts, podcasts, etc.

What does this mean?

We get access to key Veeam personnel who share their extensive insights with us. Their names read like a list of the top 25 people in the backup world today. Danny Allan, CTO and SVP at Veeam (executive sponsor of the Vanguard program), is one of them. Then we have Anthony Spiteri, Michael Cade, David Hill, Karinne Bessette, Melissa Palmer and Rick Vanover, Technologist staff at Veeam. They, together with Kirsten Stoner, Dmitry Kniazev, Andrew Zhelezko and Nikola Pejková, Technical Analyst staff at Veeam, get the honor of herding us cats. Last but not least are Anton Gostev, Senior Vice President, Product Management, and Mike Resseler, Director, Product Management.

Anton Gostev sharing his insights with us at the Veeam Vanguard Summit 2019

On top of that, we are invited to join the Veeam Vanguard Summit, where we spend some intensive days in briefings and discussions with that team. It is quite an experience to be there. First and foremost, I am both humbled and proud to get this opportunity again this year, with the chance to be part of all that comes with being a Veeam Vanguard 2020. Second, the opportunity once more to pick the brains of the best and provide feedback on how we see, experience and use Veeam solutions is priceless.

Finally, thank you for the opportunity, thanks for the trust and I am looking forward to working with Veeam and my fellow Vanguards for another year. I hope to see you all in good health this year!

Squid for Windows and Veeam

Introduction

Squid for Windows is a free proxy service. What do Squid for Windows and Veeam have to do with each other? Well, I have been on a path to create some guidance on how to harden your Veeam backup infrastructure. The aim of this is to improve your survival chances when your business falls victim to ransomware and other threats. In that context, I use Squid for Windows to help protect the Veeam backup infrastructure which is any of the Veeam roles (VBR Server, proxies, repositories, gateways, etc.). This does not include the actual source servers where the data we protect lives (physical servers, virtual servers or files).

Setting the context

One of the recurring issues I see when trying to secure a Veeam backup infrastructure is that there are a lot of dependencies on other services and technologies. There are some potential problems with this. Often these dependencies are not under the control of the people handling the backups. Maybe some countermeasures you would like to have in place don’t even exist!

When they do exist, sometimes you cannot get the changes you require made. So for my guidance, I have chosen to implement as much as possible with in-box Windows Server roles and features, augmented by free 3rd party offerings where required. Other than that, we rely on the capabilities of Veeam Backup & Replication. In this process, we avoid taking hard dependencies on any service that we are protecting. This avoids the chicken-and-egg symptoms when the time to recover arrives.

The benefit of this approach is that we get a reasonably secured Veeam backup infrastructure in place, even in a non-permissive environment. It helps with defense in depth if the solutions you deploy are secured well, independent of what is or is not in place.

Squid for Windows and Veeam

One of the elements of protecting your Veeam environment is allowing outgoing internet access only to those services required while disallowing access to all other sites. While the Windows firewall can help you secure your hosts, it is not a proxy server. We are also trying to make the Veeam backup infrastructure independent of the environment we are protecting, so we chose not to rely on any existing proxy services being in place. If there are any, that is fine and considered a bonus.

To get a proxy service under our control we implement this with Squid for Windows.

Install Squid for Windows

You can run this on your jump host, on a host holding the Veeam gateway server role or, depending on your deployment size, on a dedicated virtual machine. You can also opt to have a dedicated second NIC on a separate subnet/network to provide internet access. We will then point the proxy settings of all Veeam backup infrastructure servers to the Squid proxy.

Squid whitelisting

In Squid, we can add a white list with sites we want to allow access to over HTTPS and block all others. In my Veeam labs, I allow sites associated with DUO (MFA), Wasabi (budget-friendly S3-compatible cloud storage), Veeam.com, and a bunch of Microsoft sites associated with Windows Update. Basically, this is a text file where you list the allowed sites. Mine is called Allowed_Sites.txt and I store it under C:\Program Files\Squid\etc\squid.

## These are websites needed to keep the Veeam backup infra servers
## up to date and functioning well. They also include the sites needed
## by 3rd party offerings we rely on such as DUO, WASABI, CRL sites.
## Add .amazonaws.com, Azure storage as required

#DUO
.duosecurity.com
.duo.com

#WASABI
.wasabisys.com
.wasabi.com

#Windows Update
.microsoft.com
.edge.microsoft.com
.windowsupdate.microsoft.com
.update.microsoft.com
.windowsupdate.com
.redir.metaservices.microsoft.com
.images.metaservices.microsoft.com
.windows.com
.crl.microsoft.com

#VEEAM
.veeam.com

#CRLFQDNs
.GeoTrust.com
.digitalcertvalidation.com
.ws.symantec.com
.symcb.com
.globalsign.net
.globalsign.com
.Sectigo.com
.Comodoca.com

MFA providers and internet access

Warning! When leveraging MFA like DUO, it is paramount that you add .duosecurity.com to the list of allowed sites. If not, the DUO client cannot work properly. You will see errors like “The Duo authentication returned an unexpected response”.

You will have to fix this: because the server knows internet access is available but cannot contact the DUO service, the offline MFA access won’t kick in.

Use the Squid log

The Squid log lists all allowed and denied connections, so you can quickly find out what is missing in the white list and add it.

Looking at this log while observing application behavior helps create a complete white list that only contains the FQDNs needed.
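For example, a quick way to watch for denied requests as they happen is the one-liner below. The log path assumes the default Squid for Windows location used in this lab; adjust it to your install.

# Tail the Squid access log and show only denied requests.
Get-Content "C:\Program Files\Squid\var\log\squid\access.log" -Wait -Tail 50 |
    Select-String "DENIED"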

To make this work, we first need to configure our Squid service to use the white list file. Below I have listed my configuration (C:\Program Files\Squid\etc\squid\squid.conf) in the Veeam lab.

Veeam Squid Configuration

#
# Recommended minimum configuration:
#

# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed

acl localnet src 192.168.2.0/24	# RFC1918 possible internal network

acl SSL_ports port 443
acl Safe_ports port 80		# http
acl Safe_ports port 443		# https
acl CONNECT method CONNECT

## Custom ACL

#
# Recommended minimum Access Permission configuration:
#

# Deny requests to certain unsafe ports
http_access deny !Safe_ports

# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports

# Only allow cachemgr access from localhost
http_access allow localhost manager
http_access deny manager

# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on "localhost" is a local user
http_access deny to_localhost

#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
acl allowed_sites dstdomain "/etc/squid/allowed_sites.txt"
http_access deny !allowed_sites
http_access allow CONNECT allowed_sites

# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed
http_access allow localnet
http_access allow localhost


# And finally deny all other access to this proxy
http_access deny all

# Squid normally listens to port 3128
http_port 3128

# Uncomment the line below to enable disk caching - path format is /cygdrive/<full path to cache folder>, i.e.
#cache_dir aufs /cygdrive/d/squid/cache 3000 16 256

# Leave coredumps in the first cache dir
coredump_dir /var/cache/squid

# Add any of your own refresh_pattern entries above these.
refresh_pattern ^ftp:		1440	20%	10080
refresh_pattern ^gopher:	1440	0%	1440
refresh_pattern -i (/cgi-bin/|\?) 0	0%	0
refresh_pattern .		0	20%	4320

dns_nameservers 1.1.1.1 208.67.222.222 208.67.220.220

max_filedescriptors 3200
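Note that after you edit the white list or squid.conf, the Squid service needs to pick up the new configuration. A simple way to do that from PowerShell (the exact service name can differ per install, hence the wildcard) is:

# Restart the Squid service so configuration changes take effect.
Get-Service -DisplayName "*squid*" | Restart-Service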

To force the use of the proxy, you can block HTTP/HTTPS for the well-known internet ports such as 80, 8080, 443, 20, 21, etc. in the Windows firewall on the Veeam backup infrastructure hosts. The Veeam admins have local admin rights, which means they can change configuration settings, but that requires intentional action. If you want to prevent internet access at another level, your security setup will need an extra external firewall component out of reach of the Veeam admins. It can be an existing firewall or a designated one that allows only the proxy IP to access the internet. That’s all fine, and I consider that a bonus which definitely provides a more complete solution. But remember, we are trying to do as much as possible in-box to avoid dependencies on what we might not control.
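As a sketch of what that Windows firewall block could look like: the ports are the ones mentioned above, while the rule name and the use of the “Internet” address keyword are my own assumptions, so test this before rolling it out.

# Hypothetical sketch: block direct outbound web traffic from a Veeam host so that
# only the Squid proxy (on port 3128) offers a way out. Adjust ports to your needs.
New-NetFirewallRule -DisplayName "Veeam - block direct internet web access" `
    -Direction Outbound -Action Block -Protocol TCP `
    -RemotePort 80, 8080, 443, 20, 21 `
    -RemoteAddress Internet
# Verify that internal HTTPS targets (vCenter, hosts, etc.) remain reachable afterwards.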

Getting the proxy to be used

Getting the proxy to work for Veeam takes some extra configuration. Remember that there are 3 ways of setting proxy configurations in Windows. I have discussed this in my blog Configure WinINET proxy server with PowerShell. Please go there for that info. For the Veeam services, we need to leverage the WinHTTP library, not WinINET.

If the proxy is not set correctly and/or you have blocked direct internet access, you will have issues such as errors retrieving the certificates for your cloud capacity tier, or Automatic license update fails when HTTP proxy is used. All sorts of issues you do not want to happen.

We can set the WinHTTP proxy as follows with PowerShell / Netsh

# Proxy address and the hosts that should bypass it (the Veeam lab values).
$ProxyServer = "192.168.2.5:3128"
$ProxyBypassList = "192.168.2.3;192.168.2.4;192.168.2.5;192.168.2.72;<local>"
# Set the machine-wide WinHTTP proxy used by the Veeam services.
netsh winhttp set proxy $ProxyServer bypass-list=$ProxyBypassList
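You can verify what is currently configured with:

netsh winhttp show proxy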

If you want to get rid of the WinHTTP proxy setting you can do so via

netsh winhttp reset proxy

The proxy settings you configure for WinINET with the Windows GUI or Internet Explorer are not those for WinHTTP. Edge Chromium actually takes you to the Windows proxy settings; there is no separate Edge GUI for that. But, again, that is WinINET, not WinHTTP.

You can set the WinINET proxy per user or per machine. This is actually a bit less elegant than I would like it to be. Also, remember that a browser’s proxy settings can override the system proxy settings. If you have set the system proxy settings (Windows or Internet Explorer), you can import them into WinHTTP via the following command.

netsh winhttp import proxy source=ie

Having WinINET configured for your proxy might also be desirable. If you set it, I suggest you do this per machine and prevent users from changing it. Granted, the users will be limited to a small number of Veeam admins. If you want to automate it, I have some more information and some PowerShell to share with you in Configure WinINET proxy server with PowerShell.
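As a rough sketch of what that could look like outside of group policy: the registry-based approach and values below are my assumptions for a small lab (reusing the proxy address from earlier), so test this before using it.

# Sketch: force WinINET proxy settings to apply per machine instead of per user.
$policyKey = "HKLM:\SOFTWARE\Policies\Microsoft\Windows\CurrentVersion\Internet Settings"
New-Item -Path $policyKey -Force | Out-Null
Set-ItemProperty -Path $policyKey -Name ProxySettingsPerUser -Value 0 -Type DWord

# Machine-wide WinINET proxy, reusing the lab proxy and a local bypass list.
$inetKey = "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings"
Set-ItemProperty -Path $inetKey -Name ProxyEnable   -Value 1 -Type DWord
Set-ItemProperty -Path $inetKey -Name ProxyServer   -Value "192.168.2.5:3128" -Type String
Set-ItemProperty -Path $inetKey -Name ProxyOverride -Value "192.168.2.*;<local>" -Type String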

Conclusion

For our purposes, we used Squid as a proxy which we can control ourselves. It is free and easy to set up. It prevents unintentional access and surfing to the internet. Sure, it can easily be circumvented by an administrator on a Veeam host if no other countermeasures are in effect. But it serves its purpose of not allowing internet connections to just anywhere by default. In that respect, it is an aid in maintaining a more secure posture in the daily operations of the Veeam backup infrastructure.

Veeam File Share backups and knowledge worker data

Introduction

Today I focus on Veeam File Share backups and knowledge worker data testing. In Veeam NAS and File Share Backups, I did my first testing with the RTM bits of the Veeam Backup & Replication V10 file share backup options. Those tests focused on a pain point I encounter often in environments with lots of large files: being able to back them up at all! Some examples are medical imaging, insurance, GIS, and remote imaging (satellite images, aerial photography, LIDAR, mobile mapping, …).

The amount of data created has skyrocketed, driven not only by need but by advances in technology. These technologies deliver ever-better quality images, are more and more affordable, and are applicable in an ever-expanding variety of business cases. This means such data is an important use case. Anyway, for those use cases, things are looking good.

But what about Veeam File Share backups and knowledge worker data? Those millions of files in hundreds of thousands of folders. Well, in this blog post I share some results of testing with that data.

Veeam File Share backups and knowledge worker data

For this test, we use a 2 TB volume with 1.87 TB of knowledge worker data. This resides on a 2 TB LUN, formatted with NTFS and a unit allocation size of 4K.

1.87 TB of knowledge worker data on NTFS

The data consists of 2,637,652 files in 196,420 folders. The content is real-life data accumulated over many years. It contains a wide variety of file types such as office, text, zip, image, .pdf, .dbf, and movie files of various sizes. This data was not generated artificially. All servers are Windows Server 2019. The backup repository was formatted with ReFS (64K allocation unit size).
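If you want to gather similar counts for your own data set, a quick PowerShell sketch (slow on millions of files; the drive letter is just an example) could look like this:

# Count files and folders on the source volume (drive letter is an example).
$items   = Get-ChildItem -Path "D:\" -Recurse -Force -ErrorAction SilentlyContinue
$files   = @($items | Where-Object { -not $_.PSIsContainer }).Count
$folders = @($items | Where-Object { $_.PSIsContainer }).Count
"{0:N0} files in {1:N0} folders" -f $files, $folders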

Backup test

We back it up with the file server object, from an all-flash source to an all-flash target. There is a dedicated 10 Gbps backup network in the lab. As we did not have a separate spare lab node, we configured the cache on a local SSD on the repository. I set the backup I/O control for faster backup. We wanted to see what we could get out of this setup.

Below are the results.

45:28 minutes to back up 1.87 TB of knowledge worker data. I like it.

If you look at the backup image above, you see that the source was the bottleneck. As we are going for maximum speed, we are hammering the CPU cores quite a bit. The screenshot below makes this crystal clear.

We have plenty of CPU cores in the lab on our backup source and we put them to work.
The CPU core load on the backup target is far less.

This raises the question whether using the file share option would not be a better choice. We can then leverage SMB Direct, which could help save CPU cycles. With SMB Multichannel, we can leverage two 10 Gbps NICs. So we will repeat this test with the same data: once with a file share on a stand-alone file server and once with a highly available general-purpose file share with continuous availability turned on. This will allow us to compare the file server versus file share approach. Continuous availability has an impact on performance, and I would also like to see how profound that is with backups. But all that will be for a future blog post.

Restore test

The ability to restore data fast is paramount. It is even mission-critical in certain scenarios. Medical images needed for consultations and (surgical) procedures for example.

So we also put this to the test. Note that we chose to restore all data to a new LUN. This is to mimic the catastrophic loss of the original LUN and a recovery to a new one.

The restore takes longer than the backup for the same amount of data; restore speed is typically slower for large amounts of smaller files.

Below you will find a screenshot from the task manager on both the repository as well as the file server during the restore.

The repository server from where we are restoring the data
The file server where the backup is being restored completely on a new LUN. Note the peak throughput of 6.4 Gbps.

Mind you, this varies a lot, and when it hits small files the throughput slows down while the core load rises.

The file server during the restore is having to work the hardest when it has to deal with the least efficient files.

Conclusion

For now, with variable data and lots of small files, it looks like restores take 2.5 to 3 times as long as backups with knowledge worker data. We’ll do more testing with different data. With large image files, the difference is a lot smaller in our early testing. For now, this gives you a first look at our results with Veeam File Share backups and knowledge worker data. As always, test for yourself and test multiple scenarios. Your mileage will vary and you have to do your own due diligence. These lab tests are the beginning of mine, just to get a feel for what I can achieve. If you want to learn more about Veeam Backup & Replication, go here. Thank you for reading.