In these somewhat disconcerting times (Corona & COVID-19, for those reading this in 10 years' time) good news still happens. Early this evening I got an e-mail from Nikola Pejková stating that my nomination for Veeam Vanguard 2020 was approved.
This, my dear readers, puts a smile on my face and makes me happy. It is a great program to be in. We share experiences and knowledge, and learn with and from each other. This translates into better insights, designs, and implementations of Veeam solutions. Not only for ourselves, but also for our employers, customers, and the community at large, with whom we share all this knowledge. This happens via blogging, user groups, speaking at conferences, webinars, webcasts, showcasts, podcasts, etc.
What does this mean?
We get access to key Veeam personnel who share their extensive insights with us. Their names read like a list of the top 25 people in the backup world today. Danny Allan, CTO and SVP at Veeam (Executive sponsor of the Vanguard program) is one of them. Then we have Anthony Spiteri, Michael Cade, David Hill, Karinne Bessette, Melissa Palmer, Rick Vanover, Technologist staff at Veeam. They, together with Kirsten Stoner, Dmitry Kniazev, Andrew Zhelezko, Nikola Pejková, Technical Analyst staff at Veeam get the honor of herding us cats. Last but not least at all are Anton Gostev, Senior Vice President, Product Management and Mike Resseler, Director, Product Management.
On top of that, we are invited to join the Veeam Vanguard Summit, where we spend some intensive days in briefings and discussions with that team. It is quite an experience to be there. First and foremost, I am both humbled and proud to get this opportunity again this year, with everything that comes with being a Veeam Vanguard 2020. Second, the opportunity to once more pick the brains of the best and provide feedback on how we see, experience, and use Veeam solutions is priceless.
Finally, thank you for the opportunity, thanks for the trust and I am looking forward to working with Veeam and my fellow Vanguards for another year. I hope to see you all in good health this year!
Squid for Windows is a free proxy service. What do Squid for Windows and Veeam have to do with each other? Well, I have been on a path to create some guidance on how to harden your Veeam backup infrastructure. The aim is to improve your survival chances when your business falls victim to ransomware and other threats. In that context, I use Squid for Windows to help protect the Veeam backup infrastructure, meaning any of the Veeam roles (VBR server, proxies, repositories, gateways, etc.). This does not include the actual source servers where the data we protect lives (physical servers, virtual servers, or files).
Setting the context
One of the recurring issues I see when trying to secure a Veeam backup infrastructure environment is that there are a lot of dependencies on other services and technologies. There are some potential problems with this. Often these dependencies are not under the control of the people handling the backups. Maybe some countermeasures you would like to have in place don’t even exist!
When they do exist, sometimes you cannot get the changes you require made. So for my guidance, I have chosen to implement as much as possible with in-box Windows Server roles and features, augmented by free third-party offerings where required. Other than that, we rely on the capabilities of Veeam Backup & Replication. In this process, we avoid taking hard dependencies on any service that we are protecting. This avoids the chicken-and-egg problem when the time to recover arrives.
The benefit of this approach is that we get a reasonably secured Veeam backup infrastructure in place, even in a non-permissive environment. It helps with defense in depth if the solutions you deploy are well secured, independently of what else is or is not in place.
Squid for Windows and Veeam
One of the elements of protecting your Veeam environment is allowing outgoing internet access only to those services that are required, while disallowing access to all other sites. While the Windows firewall can help you secure your hosts, it is not a proxy server. We are also trying to make the Veeam backup infrastructure independent of the environment we are protecting, so we chose not to rely on any existing proxy services being in place. If there are, that is fine and considered a bonus.
To get a proxy service under our control we implement this with Squid for Windows.
You can run this on your jump host, on a host holding the Veeam gateway server role or, depending on your deployment size, on a dedicated virtual machine. You can also opt to have a dedicated second NIC on a separate subnet/network to provide internet access. We then point the proxy settings of all Veeam backup infrastructure servers to the Squid proxy.
Squid whitelisting
In Squid, we can add a whitelist with the sites we want to allow access to over HTTPS and block all others. In my Veeam labs, I allow sites associated with Duo (MFA), Wasabi (budget-friendly S3 compatible cloud storage), Veeam.com, and a bunch of Microsoft sites associated with Windows Update. Basically, this is a text file in which you list the allowed sites. Mine is called Allowed_Sites.txt and I store it under C:\Program Files\Squid\etc\squid.
## These are websites needed to keep the Veeam backup infra servers
## up to date and functioning well. They also include the sites needed
## by 3rd party offerings we rely on such as DUO, WASABI, CRL sites.
## Add .amazonaws.com, Azure storage as required
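Below the header comments, the rest of the file is simply one entry per line in Squid's dstdomain format, where a leading dot also matches all subdomains. The entries below are illustrative examples for the services mentioned above; verify the exact domains each service actually uses before relying on this list:

```
.veeam.com
.duosecurity.com
.wasabisys.com
.windowsupdate.com
.update.microsoft.com
```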
MFA providers and internet access
Warning! When leveraging MFA like Duo, it is paramount that you add .duosecurity.com to the list of allowed sites. If not, the Duo client cannot work properly. You will see errors like “The Duo authentication returned an unexpected response”.
You will have to fix this: the server cannot contact the Duo service even though it knows internet access is available, so the offline MFA access won’t kick in.
Use the Squid log
The Squid log lists all allowed and denied connections, so you can quickly find out what is missing from the whitelist and add it.
Looking at this log while observing application behavior helps create a complete white list that only contains the FQDNs needed.
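As a sketch of how to mine that log, the snippet below pulls the unique denied destinations out of Squid's native-format access.log. The log path is an assumption; adjust it to your install (for Squid for Windows, something like C:\Program Files\Squid\var\log\squid\access.log):

```shell
#!/bin/sh
# List the unique destinations Squid denied, so you can review them and,
# where legitimate, add them to the allowed-sites file.
# Assumes the default Squid native access.log format, where field 4 is
# the result code (e.g. TCP_DENIED/403) and field 7 the requested URL.
LOG="${1:-access.log}"
awk '$4 ~ /DENIED/ {print $7}' "$LOG" | sort -u
```

Run it against a copy of the log, then compare the output with your whitelist to see what is missing.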
To make this work, we first need to configure the Squid service to use the whitelist file. Below I have listed my configuration (C:\Program Files\Squid\etc\squid\squid.conf in the Veeam lab).
Veeam Squid Configuration
# Recommended minimum configuration:
# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed
acl localnet src 192.168.2.0/24 # RFC1918 possible internal network
acl SSL_ports port 443
acl Safe_ports port 80 # http
acl Safe_ports port 443 # https
acl CONNECT method CONNECT
## Custom ACL
# Recommended minimum Access Permission configuration:
# Deny requests to certain unsafe ports
http_access deny !Safe_ports
# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports
# Only allow cachemgr access from localhost
http_access allow localhost manager
http_access deny manager
# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on "localhost" is a local user
http_access deny to_localhost
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
acl allowed_sites dstdomain "/etc/squid/allowed_sites.txt"
http_access deny !allowed_sites
http_access allow CONNECT allowed_sites
# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed
http_access allow localnet
http_access allow localhost
# And finally deny all other access to this proxy
http_access deny all
# Squid normally listens to port 3128
# Uncomment the line below to enable disk caching - path format is /cygdrive/<full path to cache folder>, i.e.
#cache_dir aufs /cygdrive/d/squid/cache 3000 16 256
# Leave coredumps in the first cache dir
# Add any of your own refresh_pattern entries above these.
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320
dns_nameservers 126.96.36.199 188.8.131.52 184.108.40.206
To force the use of the proxy, you can block outbound HTTP/HTTPS on the well-known internet ports such as 80, 8080, 443, 20, 21, etc. in the Windows firewall on the Veeam backup infrastructure hosts. Note that the Veeam admins have local admin rights, which means they can change these settings, but that requires intentional action. If you want to prevent internet access at another level, your security setup will need an extra external firewall component out of reach of the Veeam admins. It can be an existing firewall or a designated one that allows only the proxy IP to access the internet. That’s all fine, and I consider it a bonus which definitely provides a more complete solution. But remember, we are trying to do everything here as much in-box as possible, to avoid dependencies on what we might not control.
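As a sketch, such a blocking rule can be created from an elevated prompt with netsh; the rule name is my own invention and the port list should be adapted to your environment. The Squid port itself (3128 by default) is not in the list, so the proxy stays reachable:

```
netsh advfirewall firewall add rule name="Block direct outbound web" dir=out action=block protocol=TCP remoteport=20,21,80,443,8080
```

Because block rules take precedence over allow rules in the Windows firewall, this stops direct web access from the host while traffic to the proxy on its own port is unaffected.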
Getting the proxy to be used
Getting the proxy to work for Veeam takes some extra configuration. Remember that there are three ways of setting proxy configurations in Windows. I have discussed this in my blog post Configure WinINET proxy server with PowerShell. Please go there for that info. For the Veeam services we need to leverage the WinHTTP library, not WinINET.
If the proxy is not set correctly and/or you have blocked direct internet access, you will run into problems such as “Automatic license update fails when HTTP proxy is used” or errors retrieving the certificates for your cloud capacity tier. All sorts of issues you do not want to happen.
We can set the WinHTTP proxy as follows with PowerShell / netsh (replace the address with your own Squid host and port):
netsh winhttp set proxy proxy-server="squidhost:3128" bypass-list="<local>"
If you want to get rid of the WinHTTP proxy setting you can do so via
netsh winhttp reset proxy
The proxy settings you configure for WinINET via the Windows GUI or Internet Explorer are not those for WinHTTP. Edge Chromium actually takes you to the Windows proxy settings; there is no separate Edge GUI for that. But, again, that is WinINET, not WinHTTP.
You can set the WinINET proxy per user or per machine. This is actually a bit less elegant than I would like it to be. Also, remember that a browser’s proxy settings can override the system proxy settings. If you have set the system proxy settings (Windows or Internet Explorer), you can import them into WinHTTP via the following command.
netsh winhttp import proxy source=ie
Having WinINET configured for your proxy might also be desirable. If you set it, I suggest you do this per machine and prevent users from changing it. Mind you, the users will be limited to a small number of Veeam admins. If you want to automate it, I have some more information and some PowerShell to share with you in Configure WinINET proxy server with PowerShell.
For our purposes, we used Squid as a free proxy which we can control ourselves. It is free and easy to set up. It prevents unintentional access and surfing to the internet. Sure, it can easily be circumvented by an administrator on a Veeam host if no other countermeasures are in effect. But it serves its purpose of not allowing internet connections to just anywhere by default. In that respect, it is an aid in maintaining a more secure posture in the daily operations of the Veeam backup infrastructure.
Today I focus on Veeam File Share backups and knowledge worker data testing. In Veeam NAS and File Share Backups I did my first testing with the RTM bits of the Veeam Backup & Replication V10 file share backup options. Those tests were focused on a pain point I encounter often in environments with lots of large files: being able to back them up at all! Some examples are medical imaging, insurance, GIS, and remote imaging (satellite images, aerial photography, LIDAR, mobile mapping, …).
The amount of data created has skyrocketed, driven not only by need but also by advances in technology. These technologies deliver ever-better quality images, are more and more affordable, and are applicable in an ever-expanding variety of business cases. This makes such data an important use case. Anyway, for those use cases, things are looking good.
But what about Veeam File Share backups and knowledge worker data? Those millions of files in hundreds of thousands of folders. Well, in this blog post I share some results of testing with that data.
Veeam File Share backups and knowledge worker data
For this test we use a 2 TB volume with 1.87 TB of knowledge worker data. It resides on a 2 TB LUN, formatted with NTFS and a 4K allocation unit size.
The data consists of 2,637,652 files in 196,420 folders. The content is real-life data accumulated over many years. It contains a wide variety of file types such as office, text, zip, image, .pdf, .dbf, and movie files of various sizes. This data was not generated artificially. All servers run Windows Server 2019. The backup repository was formatted with ReFS (64K allocation unit size).
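As an aside, if you want to gather the same statistics (file count, folder count) for your own data set, a quick POSIX shell sketch such as the one below works; on the Windows hosts themselves you would use PowerShell's Get-ChildItem instead:

```shell
#!/bin/sh
# Count the files and folders under a data set root, like the
# 2,637,652 files / 196,420 folders quoted for the test volume.
ROOT="${1:-.}"
files=$(find "$ROOT" -type f | wc -l)
dirs=$(find "$ROOT" -type d | wc -l)   # includes the root folder itself
printf '%s files in %s folders\n' "$files" "$dirs"
```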
We back it up with the file server object, from an all-flash source to an all-flash target. There is a dedicated 10 Gbps backup network in the lab. As we did not have a spare lab node, we configured the cache on a local SSD on the repository. I set the backup I/O control for faster backup. We wanted to see what we could get out of this setup.
Below are the results.
If you look at the backup screenshot above, you see that the source was the bottleneck. As we are going for maximum speed, we are hammering the CPU cores quite a bit. The screenshot below makes this crystal clear.
This raises the question of whether using the file share option would be a better choice. We can then leverage SMB Direct, which could help save CPU cycles. With SMB Multichannel we can leverage two 10 Gbps NICs. So we will repeat this test with the same data: once with a file share on a stand-alone file server, and once with a highly available general purpose file share with continuous availability turned on. This will allow us to compare the file server versus file share approach. Continuous availability has an impact on performance and I would also like to see how profound that is with backups. But all that will be for a future blog post.
The ability to restore data fast is paramount. It is even mission-critical in certain scenarios. Medical images needed for consultations and (surgical) procedures for example.
So we also put this to the test. Note that we chose to restore all data to a new LUN. This is to mimic the catastrophic loss of the original LUN and a recovery to a new one.
Below you will find a screenshot from the task manager on both the repository as well as the file server during the restore.
Mind you, this varies a lot: when it hits small files, the throughput slows down while the core load rises.
For now, with variable data and lots of small files, it looks like restores take 2.5 to 3 times as long as backups with knowledge worker data. We’ll do more testing with different data. With large image files, the difference is a lot smaller in our early testing. For now, this gives you a first look at our results with Veeam File Share backups and knowledge worker data. As always, test for yourself and test multiple scenarios. Your mileage will vary and you have to do your own due diligence. These lab tests are the beginning of mine, just to get a feel for what I can achieve. If you want to learn more about Veeam Backup & Replication, go here. Thank you for reading.
Veeam NAS and File Share Backups are a new capability in Veeam Backup & Replication V10. We can now back up SMB and NFS shares as well as file server sources. This means it covers Linux and Windows file servers and shares. It can also back up many NAS devices, of which there are plenty in both the SME and enterprise markets. I know it is fashionable to state that file servers are dead, but that is like saying e-mail is dead. Yes, right until the moment you kill their mailbox. At that moment it is mission critical again.
My first test results with the RTM bits are so good that I am doing this quick publish to share them with you.
Early testing of Veeam NAS and File Share Backups
As a Veeam Vanguard I got access to the Veeam Backup & Replication V10 RTM bits so I decided to give it a go in some of our proving grounds.
I tested a Windows file server, a Windows file share, and a general purpose file share with continuous availability on a 2-node cluster. All operating systems run Windows Server 2019, fully patched at the time of writing.
Windows File Server
This is the preferred method if you can use it, that is, if you have a Windows or Linux server as opposed to an appliance. The speeds are great and I am flirting with the 10 Gbps limit of the NIC. As this is pure TCP, it does not leverage SMB Multichannel or SMB Direct.
Windows File Share
With a NAS you might not have the option to leverage the file server object. No worries, we then use the file share. If it is SMB 3, you can even leverage VSS if the NAS supports it. It might have the added benefit that you can add more file share proxies to do the initial full backup if so required. With the file server object you are limited to the server itself. But it all depends on what the source can deliver and the target can ingest.
Note that we use an SMB 3 file share here. With a properly configured network you can leverage SMB Direct and SMB Multichannel with this.
General Purpose File Share with continuous availability on a 2 node cluster
This one is important to me. Nowadays I only deploy general purpose file shares with continuous availability where applicable and possible. SMB 3 has given us many gifts and I like to leverage them. The ease of maintenance it offers is too good not to use when possible. Can you say office-hours patching of file servers?
So here is a screenshot of a backup of a general purpose file share with continuous availability where I initiate a failover of the file share. That explains the dip in the throughput, but the backup keeps running. Awesome!
Backups are cool but restores rule. So to finish up this round of testing we share a restore. Not bad, not bad at all. 221.9 GB restored in 5.33 minutes.
More testing to follow
I will do more testing in the future. This will include small office files in large quantities. These early tests are more focused on large image data such as satellite, aerial photography and mobile mapping images. An important use case, hence our early testing focus.
For an overview of Veeam NAS and file share backups, as well as the details, take a look here for a presentation on the subject by the one and only Michael Cade at TFD20.
The Veeam NAS and file share backups in Backup & Replication V10 are delivering great results right from the start. I am happy to see this capability arrive. The only remark I have is that they should have done this sooner. But today it is here, and I am nothing but happy about it.
There are a lot of details to this, but those will be for later content.