Hyper-V Amigos Showcast Episode 20 and 21

Introduction

This is just a quick blog post to let you know the Hyper-V Amigos have recently released two webcasts: Hyper-V Amigos Showcast episodes 20 and 21. You will find links to the videos and a description of the content below.

Hyper-V Amigos Showcast – Episode 20

In episode 20 of the Hyper-V Amigos ShowCast, we continue our journey through the different ways in which we can use Storage Spaces for backup targets. In our previous “Hyper-V Amigos ShowCast (Episode 19) – Windows Server 2019 as Veeam Backup Target Part I” we looked at stand-alone or member servers with Storage Spaces, with both direct-attached storage and SMB file shares as backup targets. We also played with Multi Resilient Volumes.

For this webcast, we have one 2-node S2D cluster set up for the Hyper-V workload (Azure Stack HCI). On a second 2-node S2D cluster, we host two SOFS file shares, each on its own CSV LUN. SOFS on S2D is supported for backup and archival workloads. And as it is SMB3 and we have RDMA-capable NICs, we can leverage RDMA (RoCE, Mellanox ConnectX-5) to benefit from CPU offloading and superb throughput at ultra-low latency.
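If you want to verify that the backup traffic to such SOFS shares actually benefits from SMB Direct, a couple of in-box PowerShell checks go a long way. A minimal sketch, run from one of the Hyper-V nodes; nothing here is specific to our lab:

```powershell
# Is RDMA enabled on the (Mellanox) NICs?
Get-NetAdapterRdma | Where-Object Enabled | Format-Table Name, Enabled

# Do the SMB multichannel connections to the backup target report RDMA capability?
Get-SmbMultichannelConnection |
    Format-Table ServerName, ClientIpAddress, ServerIpAddress, ClientRdmaCapable
```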

Hyper-V Amigos ShowCast Episode 20

Some extra information

The General Purpose File Server (GPFS) role is not supported on S2D for now. You can use GPFS with shared storage and in combination with continuous availability. This performs well as a highly available backup target too. The benefit here is that it is cost-effective (Windows Server Standard licenses will do) and you get to use the shared storage of your choice. But in this showcast, we focus on the S2D scenario, and we didn’t build an unsupported one.

You would normally expect to notice the performance impact of continuous availability when you compare the speeds with the previous episode, where we used a non-highly available file share (no continuous availability possible). But we have better storage in the lab for this test, the source system is usually the bottleneck, and as such our results were pretty awesome.

The lab has 4 Tarox server nodes with a mix of Intel Optane DC Persistent Memory (Storage Class Memory), Intel NVMe and Intel SSD disks. For the networking, we leverage Mellanox ConnectX-5 100Gbps NICs and SN2100 100Gbps switches. Hence we both had a grin on our faces just prepping this lab.

As a side note, the performance impact of continuous availability and write-through is expected. I have written about it before here. The reason why you might contemplate using it, next to a requirement for high availability, is the small but realistic risk of data corruption you run with SMB shares that are not continuously available. The reason is that they do not provide write-through for guaranteed data persistence.
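To make that concrete, below is a minimal sketch of creating a continuously available share as a backup target on a clustered file server role and checking the flag afterwards. The share name, path and account are examples, not our lab configuration:

```powershell
# Create a continuously available SMB share; CA shares enforce write-through,
# so acknowledged writes are guaranteed to be on persistent storage.
New-SmbShare -Name "VeeamBackups" -Path "C:\ClusterStorage\Volume1\Backups" `
    -ContinuouslyAvailable $true -FullAccess "DOMAIN\svc-veeam"

# Verify the share is indeed continuously available.
Get-SmbShare -Name "VeeamBackups" | Format-List Name, Path, ContinuouslyAvailable
```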

We also demonstrate the “Instant Recovery” capability of Veeam to make workloads available fast and point out the benefits.

Hyper-V Amigos Showcast – Episode 21

In episode 21 we are diving into leveraging the Veeam Agent for Windows integrated with Veeam Backup & Replication (v10 RC1) to protect our physical S2D nodes. For shops that don’t have an automated cluster node build process set up, or that rely on external help to come in and do it, this can be a huge time saver.

We walk through the entire process and end up doing a bare metal recovery of one of the S2D nodes. The steps include (a rough PowerShell sketch follows the list):

  • Setting up an Active Directory protection group for our S2D cluster.
  • Creating a backup job for a Windows Server, selecting failover cluster as the type (which offers only “Managed by Backup Server” as the mode).
  • Running a backup.
  • Creating the Veeam Agent Recovery Media (the most finicky part).
  • Finally, restoring one of the S2D hosts completely using the bare metal recovery option.
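For those who prefer PowerShell over the console for the first steps, here is a hedged sketch, not a tested script: the cmdlet and parameter names are from the VBR PowerShell reference as I remember it for the agent management features (9.5 Update 3 and later), so verify them against your version's documentation. The protection group name is an example.

```powershell
# VBR still ships its PowerShell as a snap-in in 9.5/v10.
Add-PSSnapin VeeamPSSnapin

# Fetch the protection group covering the S2D cluster and trigger a rescan
# so discovery picks up all cluster nodes.
$pg = Get-VBRProtectionGroup -Name "S2D-Cluster-PG"
Rescan-VBREntity -Entity $pg -Wait

# Check which computers discovery found in the protection group.
Get-VBRDiscoveredComputer -ProtectionGroup $pg
```

Once the protection group checks out, the backup job itself (type failover cluster, mode “Managed by Backup Server”) is quickly created in the console, as shown in the video.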

Some more information

Now, we had some issues in the lab, one of them being a BSOD on the laptop used to make the recording, and we were a bit too impatient when booting from the ISO over a BMC virtual CD/DVD. Hence we had to glue some parts together and fast-forward through the boring bits. We do appreciate that watching a system boot for 10 minutes doesn’t make for good infotainment. Other than that, it went fine and we were able to demonstrate the process from beginning to end.

As is the case with any process, you should test and experiment to make sure you are familiar with it. That makes it all a little easier and hurt a little less when the day comes that you have to do it for real.

We hope the showcast helps you look into some of the capabilities and options you have with Veeam when it comes to protecting any workload. Long gone are the days when Veeam was only about protecting virtual machines. Veeam is about protecting data wherever it lives: in VMs, physical servers, workstations, PCs, laptops, on-premises, in the cloud and Office 365. On top of that, you can restore it wherever you want, to avoid lock-in and costly migration projects and tools. Check it out.

Conclusion

We will be doing more webcasts on Veeam Backup & Replication v10 in 2020, as it will be generally available in Q1 as far as I can guess.


But with Hyper-V Amigos Showcast Episode 20 and 21, that’s it for 2019. Enjoy the holidays during this festive season. The Hyper-V Amigos wish you a Merry X-Mas and a very happy New Year in 2020!

Veeam Vanguard Renewals and Nominations 2020

Introduction

Are you working with Veeam software solutions? Are you passionate about sharing your experiences, knowledge, and insights? If so, you might want to consider a nomination for the Veeam Vanguard program. If you are already a Veeam Vanguard, I’m pretty sure you already know submissions for Veeam Vanguard Renewals and Nominations 2020 are open.

Veeam Vanguard Renewals and Nominations

As we are nearing the end of 2019, Veeam has opened the Veeam Vanguard Renewals and Nominations for 2020.

Describing the Veeam Vanguard program is not easily done, but Nikola Pejková has done a great job of doing exactly that in Join the Veeam Vanguard 2020 class! She also explains how to nominate someone or yourself. Read the blog post and find out if this is something for you. I enjoy being a part of it because I get to learn with and from some of the best minds in the industry. This allows me to help others better while also keeping up with the changing IT landscape.

My fellow Veeam Vanguards and me in a Q&A session with the Veeam R&D and PM teams at the Veeam Vanguard Summit.

I would like to emphasize that the diversity of the Veeam Vanguard is paramount to me. It works because we have people in there from around the globe, from all kinds of backgrounds and job roles. This helps open up discussions with different points of view and experiences. Customers, consultants, and partners look at needs and solutions from their own perspectives. Having us together in the Vanguard benefits us all and prevents tunnel vision.

Nominate someone, yourself or be nominated

Nikola explains how to do this in her blog, so read Join the Veeam Vanguard 2020 class! and apply to become a Vanguard! It is quite an experience. Quality people who are active in the community and help by sharing their knowledge are welcomed and appreciated. Maybe you’ll find yourself to be a Veeam Vanguard in 2020!

Optimize the Veeam preferred networks backup initialization speed

When Veeam preferred networks cause slow backup initialization speeds

When using preferred networks in Veeam, you choose to use a network other than the default host network for backups and restores. In this post, we’ll discuss how to optimize the Veeam preferred networks backup initialization speed, because we aim for optimal performance. TL;DR: you need to provide the Veeam Backup & Replication server with connectivity to the preferred networks. It is a common mistake I run into every now and then. Ultimately it makes people think Veeam is slow. No, it is just a configuration mistake.

Why use a preferred network?

Backups can fill up a 1Gbps pipe very fast. Many people still use 1Gbps networking as the default connectivity to their hosts. Even when they leverage 10Gbps or better, it is often in a converged network setup, which means that only part of the bandwidth goes to host connectivity. Few have 10Gbps for “just” host connectivity. So it makes sense to select a different, higher-bandwidth network for backup and restore traffic.

Hence, for high-volume, high-performance backups and restores, it is smart to look for a bigger pipe to leverage. Some environments have dedicated backup networks at 10Gbps or better. But we find far more high-bandwidth networks in place for other purposes. In Hyper-V environments, you’ll have those for SMB networking, such as CSV, Live Migration variants and storage replication. Hyper-Converged Infrastructure deployments use these networks for storage as well. With S2D you’ll find more and more 25/50/100Gbps. All of these can be leveraged as a preferred backup network in Veeam.

Setting up a preferred network

Setting up a preferred network is easy. First of all, you figure out which network(s) to use. You then add those to the preferred networks as follows:

In the file menu, select “Network Traffic Rules”.


Click “Add” and specify the source IP range as well as the target IP range. You can opt to encrypt the traffic and/or set a bandwidth limit.


There is no need to have the preferred network registered in DNS. It will work fine without.

I hope it is clear that the source (Hyper-V hosts), the target (backup repository or the extents in a Scale-Out Backup Repository) and any off-host proxies need connectivity to the preferred network(s). If you leverage WAN accelerators, gateway servers or log shipping servers, then these also need access. Last but not least, you should also make sure that the Veeam Backup & Replication (VBR) server has access to the preferred networks. This is one a lot of people seem to forget, maybe because it is most often a VM, if it is not a shared role on the repository server or such, and things do work without it.
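A quick way to check this is to test reachability of each component over the preferred subnet from the VBR server. A minimal sketch, assuming hypothetical addresses in the 10.10.110.0/24 preferred network (the subnet you will see again in the time-out log entries below):

```powershell
# Run on the VBR server: test each component's preferred-network IP.
# Veeam data movers listen on ports in the 2500+ range by default.
$components = "10.10.110.2", "10.10.110.11", "10.10.110.12"   # extents, hosts, proxies

foreach ($ip in $components) {
    Test-NetConnection -ComputerName $ip -Port 2500 |
        Select-Object ComputerName, RemotePort, TcpTestSucceeded
}
```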

When the VBR server has no access to the preferred networks, things still work, but initialization of the backup and restore jobs is a lot slower. Let’s test this.

Slow Initialization of backup and restore jobs

When the VBR server lacks that connectivity, you will probably notice the following:

  • First of all, there is a slowdown in the overall initialization of the backup and restore job.
  • This manifests itself in a slow start of the actual VM backup/restore and a reduced number of simultaneous VM backups/restores within a job.

Without the VBR server having connectivity to the preferred networks

23:54 to complete the backup job (no connectivity to the preferred network)


With the VBR server having connectivity to the preferred networks. Notice how smooth and continuous the throughput is.

07:55 to complete the backup job (with connectivity to the preferred network) => 3 times as fast.

When you look into the Veeam backup logs for this job, you will find, at various stages, attempts by the VBR server to connect to the preferred networks. If it can’t, it has to wait until the attempt times out. You will see entries like:

A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond 10.10.110.2:2509 (System.Net.Sockets.SocketException)

Just a small part of all the socket time-outs you will find for every single VM in the job. Here VBR is trying to connect to one of the extents in the SOBR.

This happens for every file in the backup (config files and disks) and for every extent in the Scale-Out Backup Repository (per-VM backup chain). It slows down the entire backup job tremendously.
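If you want to quantify the problem quickly, counting those exceptions in the job logs is a one-liner away. A small sketch; the default VBR log location is real, the job folder name is an example:

```powershell
# Count socket time-outs per log file for a given job; lots of hits point to
# missing preferred network connectivity from the VBR server.
Select-String -Path "C:\ProgramData\Veeam\Backup\MyBackupJob\*.log" -Pattern "SocketException" |
    Group-Object Filename |
    Sort-Object Count -Descending |
    Format-Table Count, Name
```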

Conclusion

I always make sure that the VBR servers in my environments have preferred network connectivity. Consequently, initialization is faster for both backups and restores. Test it out for yourself! It is the first thing I check when people complain about really slow backups. Do they have preferred networks set up? Check whether the VBR server has connectivity to them!

CBT Driver with Veeam Agent for Microsoft Windows 2.1

Change Block Tracking comes to physical & IaaS Veeam backups

With the big improvements and new capabilities delivered in Veeam Backup & Replication 9.5 Update 3 come some interesting features related specifically to the Veeam Agent for Windows 2.1 Server Edition. We now get the ability to manage the Veeam Agent centrally from the VBR 9.5 U3 console or via PowerShell. This includes deploying the new Change Block Tracking (CBT) driver for Windows Server (not Linux).

This CBT driver is optional and works like you have come to expect from Veeam VBR when backing up Hyper-V virtual machines pre-Windows Server 2016. (Windows Server 2016 has its own CBT capability, which Veeam VBR 9.5 leverages.) The big thing here is that you now get CBT capabilities for physical and virtual in-guest workloads (that includes IaaS, people!) with the Veeam Agent for Windows 2.1 Server Edition.

Deploying the Veeam Agent for Microsoft Windows 2.1 CBT Driver

The Veeam Agent for Windows 2.1 ships with an optional, signed change block tracking filter driver for Windows servers. That agent is included in your VBR 9.5 Update 3 download, or you can choose to download a version that does not have the CBT driver included. That’s up to you. I just upgraded my lab and production environments with the agent included, as I might have a use for the drivers. If not now, then later, and at least my environment is ready for it.


When you have installed VAW 2.1, you can navigate to C:\Program Files\Veeam\Endpoint Backup\CBTDriver and find the driver files for the supported Windows Server OS versions under their respective folders.


There are CBT drivers for every version of Windows Server back to Windows Server 2008 R2. If you are running anything older, we really need to talk about your environment in 2018. I mean it.

Note that right-clicking the .inf file for your version of Windows Server and selecting Install is the most manual way of installing the CBT driver. You’ll need to reboot the host.
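For scripting that same manual install across hosts, the command-line equivalent of the right-click Install action looks roughly like this. A hedged sketch: the OS subfolder and .inf file names below are assumptions, so check what your CBTDriver folder actually contains.

```powershell
# InstallHinfSection dislikes spaces in the path, so copy the driver folder
# to a simple location first. Folder and file names here are assumptions.
Copy-Item "C:\Program Files\Veeam\Endpoint Backup\CBTDriver\WS2016" "C:\Temp\VeeamCBT" -Recurse

# DefaultInstall is the standard section for right-click installable .inf files;
# 132 is the commonly used mode (use the .inf location as source, reboot if needed).
rundll32.exe setupapi.dll,InstallHinfSection DefaultInstall 132 C:\Temp\VeeamCBT\VeeamCbt.inf

# A reboot is required before the driver is used.
Restart-Computer -Confirm
```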


Normally you’ll either integrate the deployment and updating of the CBT driver into the VBR 9.5 Update 3 console or deploy and update it manually.

Install / uninstall the CBT driver via Veeam Backup & Replication Console

You can add servers individually or as part of a protection group (Active Directory based). Whichever option you choose, you’ll have the choice of managing them via the agent manually or via the VBR server. Once you have done that, you can deploy and update the optional CBT driver for supported Windows Server versions via the individual servers or the protection groups.

Individual Server


Once the agent is installed, you can optionally install the CBT driver. After that, you can also uninstall the CBT driver and the agent from the VBR console.


Protection Group

You can add servers to protect via VAW 2.1 individually, via Active Directory (domain, organizational unit, container, computer, cluster or group) or via a CSV file with server names/IP addresses. That’s another subject, actually, but you get the gist of what a protection group is.


Checking the CBT driver version

You can always check the CBT driver version via the details of a server added to the physical or cloud infrastructure.

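If you prefer to check from the node itself rather than the console, a small sketch; the "*Veeam*" name filter is an assumption, so adjust it to whatever the driver reports on your system:

```powershell
# List Veeam kernel drivers known to the OS, with their state and binary path.
Get-CimInstance Win32_SystemDriver |
    Where-Object Name -like "*Veeam*" |
    Select-Object Name, State, PathName

# Read the file version of each matching driver binary.
Get-CimInstance Win32_SystemDriver |
    Where-Object Name -like "*Veeam*" |
    ForEach-Object { (Get-Item $_.PathName).VersionInfo.FileVersion }
```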

Install / uninstall the CBT driver via the standalone Veeam Agent for Windows

My workstation at home isn’t managed by a Veeam Backup & Replication v9.5 Update 3 server. It’s a standalone system, but it does run Windows Server 2016. Now, even while such a standalone system can send its backups to a Veeam repository, I don’t do that at home. The target is a local disk in a disk bay that I can easily swap out every week. I just rotate through a couple of recuperated larger HDDs for this purpose, which also allows me to take a backup copy off site. The Veeam Agent for Windows configuration for my home office workstation is done locally, including the installation of the CBT driver. Doing so is easy. Under settings in VAW 2.1 we now have a third entry that’s there to install the CBT driver if we want to.


When you click install, it will be done before you can even blink. It will prompt you to restart the computer to finish installing the driver. Do so. If not, the next backup will complain about failing over to MFT-analysis-based incremental backups, as you can’t use the installed CBT driver yet.


When using the new VAW 2.1 CBT driver for Windows, changes get tracked in VCT files. These can be seen under C:\ProgramData\Veeam\EndpointData\CtStore\VctStore.
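You can peek at that store with a one-liner; expect the VCT files to appear once the driver has done its first run (the path is the one mentioned above):

```powershell
# List the change tracking files the CBT driver maintains.
Get-ChildItem "C:\ProgramData\Veeam\EndpointData\CtStore\VctStore" |
    Select-Object Name, Length, LastWriteTime
```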


Ready to Go

I’ll compare the results of backing up my main workhorse with and without the CBT driver installed. Veeam indicates the use case is servers with a lot of data churn, and that’s where you should use the driver. The idea is that you don’t need to deal with updating drivers when the benefits are not there. That’s fair enough, I’d say, but I’m going to experiment a little with it anyway to see what difference I can notice without resorting to a microscope.

If we conclude that having the CBT driver installed is not worthwhile for our workstation, we can easily uninstall it again via the control panel, under settings, where we now see the option to uninstall it. Easy enough.


However, as it can track changes on NTFS as well as ReFS and FAT volumes, it might be wise to use it for servers that have one or more of such volumes, even if for NTFS volumes the speed difference isn’t that significant. Normally, the bigger the data churn delta, the bigger the benefit of the CBT driver will be.