Veeam File Share backups and knowledge worker data

Introduction

Today I focus on Veeam File Share backups and knowledge worker data testing. In Veeam NAS and File Share Backups I did my first tests with the RTM bits of the Veeam Backup & Replication V10 file share backup options. Those tests were focused on a pain point I encounter often in environments with lots of large files: being able to back them up at all! Some examples are medical imaging, insurance, GIS, and remote imaging (satellite images, aerial photography, LIDAR, mobile mapping, …).

The amount of data created has skyrocketed, driven not only by need but also by advances in technology. These technologies deliver ever-better quality images, are more and more affordable, and are applicable to an ever-expanding variety of business cases. That makes such data an important use case, and for those use cases things are looking good.

But what about Veeam File Share backups and knowledge worker data? Those millions of files in hundreds of thousands of folders. Well, in this blog post I share some results of testing with that data.

Veeam File Share backups and knowledge worker data

For this test we use a 2 TB volume with 1.87 TB of knowledge worker data. This resides on a 2 TB LUN, formatted with NTFS and an allocation unit size of 4K.

1.87 TB of knowledge worker data on NTFS

The data consists of 2,637,652 files in 196,420 folders. The content is real-life data accumulated over many years. It contains a wide variety of file types (office documents, text, zip, image, .pdf, .dbf, movie, etc.) in various sizes. This data was not generated artificially. All servers are Windows Server 2019. The backup repository was formatted with ReFS (64K allocation unit size).
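As an aside, if you want to characterize a data set like this yourself, a few lines of Python will tally files, folders, and total size. This is just a sketch and was not part of the test setup; the path is a hypothetical example:

    import os

    total_files = total_dirs = total_bytes = 0
    # Walk the tree once and tally files, folders, and bytes.
    for root, dirs, files in os.walk(r"D:\KnowledgeWorkerData"):  # hypothetical path
        total_dirs += len(dirs)
        total_files += len(files)
        for name in files:
            try:
                total_bytes += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # skip files we cannot stat
    print(f"{total_files:,} files in {total_dirs:,} folders, "
          f"{total_bytes / 1024**4:.2f} TiB")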

Backup test

We back it up with the file server object from an all-flash source to an all-flash target. There is a dedicated 10 Gbps backup network in the lab. As we did not have a separate spare lab node, we configured the cache on a local SSD on the repository. We set the backup I/O control for faster backup. We wanted to see what we could get out of this setup.

Below are the results.

Veeam File Share backups and knowledge worker data
45:28 minutes to back up 1.87 TB of knowledge worker data. I like it.
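To put that number in perspective, a quick back-of-the-envelope calculation (assuming the 1.87 TB is a binary terabyte, i.e. TiB) gives the effective throughput of the run:

    # Effective throughput of the backup run: 1.87 TiB in 45:28.
    data_bytes = 1.87 * 1024**4
    seconds = 45 * 60 + 28
    mib_per_s = data_bytes / seconds / 1024**2
    gbps = data_bytes * 8 / seconds / 1e9
    print(f"~{mib_per_s:.0f} MiB/s, ~{gbps:.1f} Gbps")  # ~719 MiB/s, ~6.0 Gbps

Roughly 6 Gbps of effective throughput on a dedicated 10 Gbps backup network is a very decent result for this kind of data.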

If you look at the backup job screenshot above you see that the source was the bottleneck. As we are going for maximum speed, we are hammering the CPU cores quite a bit. The screenshot below makes this crystal clear.

We have plenty of CPU cores in the lab on our backup source and we put them to work.
The CPU core load on the backup target is far less.

This raises the question whether using the file share option would not be a better choice. We could then leverage SMB Direct, which could help save CPU cycles. With SMB Multichannel we can leverage two 10 Gbps NICs. So we will repeat this test with the same data: once with a file share on a stand-alone file server and once with a highly available general-purpose file share with continuous availability turned on. This will allow us to compare the file server versus the file share approach. Continuous availability has an impact on performance and I would also like to see how profound that is with backups. But all that will be for a future blog post.

Restore test

The ability to restore data fast is paramount. It is even mission-critical in certain scenarios, such as medical images needed for consultations and (surgical) procedures.

So we also put this to the test. Note that we chose to restore all data to a new LUN. This is to mimic the catastrophic loss of the original LUN and a recovery to a new one.

The restore takes longer than the backup for the same amount of data; restore speed is typically lower for large amounts of smaller files.
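One way to reason about why small files hurt: with millions of files, even a tiny fixed per-file cost (file creation, metadata, ACLs) adds up on top of the raw data transfer. A toy model illustrates the shape of the effect; the per-file overhead figure below is made up for illustration, not a measurement:

    # Toy model: restore time = raw data transfer + a fixed cost per file.
    files = 2_637_652
    data_bytes = 1.87 * 1024**4
    transfer_bytes_per_s = 750e6      # ballpark sequential rate from the backup run
    per_file_overhead_s = 0.0015      # made-up figure: 1.5 ms per file
    transfer_min = data_bytes / transfer_bytes_per_s / 60
    overhead_min = files * per_file_overhead_s / 60
    print(f"transfer: ~{transfer_min:.0f} min, per-file overhead: ~{overhead_min:.0f} min")

Even a millisecond or two per file adds an hour on top of the raw copy when there are 2.6 million files, which is why file count matters as much as data volume.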

Below you will find a screenshot from Task Manager on both the repository and the file server during the restore.

The repository server from where we are restoring the data
The file server where the backup is being restored, completely onto a new LUN. Note the peak throughput of 6.4 Gbps.

Mind you, this varies a lot: when it hits small files, the throughput slows down while the core load rises.

During the restore, the file server has to work hardest when it deals with the least efficient files.

Conclusion

For now, with variable data and lots of small files, it looks like restores take 2.5 to 3 times as long as backups with office worker data. We’ll do more testing with different data. With large image files, the difference is a lot smaller in our early testing. For now, this gives you a first look at our results with Veeam File Share backups and knowledge worker data. As always, test for yourself and test multiple scenarios. Your mileage will vary and you have to do your own due diligence. These lab tests are the beginning of mine, just to get a feel for what I can achieve. If you want to learn more about Veeam Backup & Replication, go here. Thank you for reading.
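If you want to turn that ratio into a rough planning number, the arithmetic is simple (a rough estimate based on the backup time above, not a guarantee):

    # Rough restore window derived from the observed 2.5x-3x ratio.
    backup_min = 45 + 28 / 60
    for factor in (2.5, 3.0):
        restore_min = backup_min * factor
        print(f"{factor}x -> ~{restore_min:.0f} min ({restore_min / 60:.1f} h)")

So a data set that backs up in about 45 minutes lands somewhere around two hours to restore, which is the kind of number you want to know before you need it.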

Hyper-V Amigos Showcast 15 – VMFleet explained

After a short break in broadcasts, due to extremely busy times, the Hyper-V Amigos ride again! Yes, fellow Microsoft MVP Carsten Rachfahl (@hypervserver) from Rachfahl IT Solutions and yours truly are back in the saddle!

In episode 15 of the Hyper-V Amigos Showcast we look at VMFleet. VMFleet is a free Microsoft test suite to showcase, test, and help analyze the performance of a hyper-converged Storage Spaces Direct setup.

In this episode we show you how to set it up, how it works, and how to measure and evaluate your S2D deployment with it. We also share some real-life tips with you. All this in just around 2 hours. So stick around for the long haul!

Accelerated Checkpoint merging with ReFS v2 in Windows Server 2016

Introduction

This blog post is a teaser where we show you some of the results we have seen with ReFS v2 in Windows Server 2016 (TPv4). In a previous blog post (Lightning Fast Fixed VHDX File Creation Speed With ReFS on Windows Server 2016) we demonstrated the very fast VHDX file creation capabilities we got with ReFS v2. Now we look at another benefit of ReFS v2 in a Hyper-V environment, thanks to a feature of ReFS v2 called block cloning. We get accelerated checkpoint merging with ReFS v2 in Windows Server 2016.
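To build some intuition for why block cloning speeds up a checkpoint merge: a classic merge reads every block of the checkpoint's differencing disk and rewrites it into the parent VHDX, while block cloning lets the file system simply remap the existing blocks into the parent file as a metadata operation. Here is a toy Python model of that idea; it is a conceptual sketch, not how ReFS is actually implemented:

    # Toy model: a "file" is a list of references to blocks on disk.
    disk_blocks = {}   # block id -> data
    next_id = 0

    def write_block(data):
        """Allocate a new block on 'disk' and return its reference."""
        global next_id
        disk_blocks[next_id] = data
        next_id += 1
        return next_id - 1

    def merge_copy(parent, child):
        """Classic merge: read every child block, rewrite it into the parent."""
        for block_id in child:
            parent.append(write_block(disk_blocks[block_id]))  # full data copy

    def merge_clone(parent, child):
        """Block cloning: the parent just takes over the child's references."""
        parent.extend(child)  # metadata only: no data is read or rewritten

    parent = [write_block(b"base1"), write_block(b"base2")]
    child = [write_block(b"delta1"), write_block(b"delta2")]
    merge_clone(parent, child)  # near-instant, regardless of data size
    print(parent)               # [0, 1, 2, 3]

The merge_clone variant does no data I/O at all, which is why deleting a checkpoint that used to take minutes to merge can complete in seconds.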

The Demo

For this short demo we have a virtual machine running Windows Server 2016. It resides on a CSV formatted with ReFS (64K allocation unit size). Inside the virtual machine there is a second data disk. Our VM, called CheckPointReFS, has this data volume formatted with ReFS (64K allocation unit size) and it runs on the ReFS-formatted CSV. The disks in this test are fixed-size VHDX files.

On the data volume we have about 30 GB worth of ISO files. We checkpoint the VM and then create a copy of those files on the data volume.

We then delete this checkpoint.

Via events 19070 (start of a background disk merge) and 19080 (completion of a background disk merge) in the Microsoft-Windows-Hyper-V-VMMS/Admin log we calculate the time this took: 5 seconds.
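If you would rather script that calculation than eyeball the event viewer, the arithmetic is trivial once you have the TimeCreated values of the two events. The timestamps below are placeholders for illustration:

    from datetime import datetime

    # TimeCreated of event 19070 (merge start) and event 19080 (merge done),
    # from the Microsoft-Windows-Hyper-V-VMMS/Admin log. Placeholder values.
    start = datetime.fromisoformat("2015-11-22T14:10:03")
    end = datetime.fromisoformat("2015-11-22T14:10:08")
    print(f"Merge took {(end - start).total_seconds():.0f} seconds")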

There are moments you just have to say “WOW”. Really, this rocks and it’s amazing. So amazing I figured I had made a mistake and I ran it again … 4 seconds. WOOHOO! What were the times you saw when you last deleted a large checkpoint?

I am really looking forward to doing more testing with ReFS v2 capabilities with Hyper-V on Windows Server 2016.

RDMA Over RoCE With DCB Requires Tagged Non Default VLANs

It’s DCB That Requires This

For those of you who are experimenting with the RoCE variant of RDMA for SMB Direct in Windows Server 2012 (R2), make sure you have a VLAN tag in your configuration if this is more than a simple RDMA setup over two NICs. The moment you get DCB with PFC & ETS involved you’ll need non-default tagged VLANs. Do note that PFC alone is good enough; ETS is strictly speaking not a requirement, but I’d consider doing it if you can.

With Enhanced Transmission Selection (ETS) the network traffic type is classified using the priority value in the VLAN tag of the Ethernet frame. The priority value is the Priority Code Point (PCP), which is described in the IEEE 802.1Q specification and uses a 3-bit field in the VLAN tag with eight possible priority values (0 to 7).

Priority-based Flow Control (PFC) allows individual priorities of tagged traffic to be paused and helps to provide lossless or “no drop” behavior for a certain priority at the receiving port. As described above, each frame transmitted by a sending port is tagged with a priority value (0 to 7) in the VLAN tag. So for the traffic pause and resume functionality to work we need a VLAN tag to carry the priority value.
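To make that concrete, here is how the priority value is packed into the 16-bit Tag Control Information (TCI) field of an 802.1Q VLAN tag. This is a Python sketch; the priority and VLAN ID are arbitrary example values:

    # 802.1Q TCI layout: PCP (3 bits) | DEI (1 bit) | VLAN ID (12 bits)
    def build_tci(pcp: int, dei: int, vlan_id: int) -> int:
        assert 0 <= pcp <= 7 and dei in (0, 1) and 0 <= vlan_id <= 4095
        return (pcp << 13) | (dei << 12) | vlan_id

    # Example: SMB Direct traffic on priority 3, VLAN 110 (arbitrary values).
    tci = build_tci(pcp=3, dei=0, vlan_id=110)
    print(f"TCI = 0x{tci:04X}")  # TCI = 0x606E

The point is that the PCP bits only exist inside the VLAN tag: untagged traffic simply has no place to carry the priority, so PFC and ETS have nothing to classify on.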

Does It Work Without?

But you’ll tell me that, as you may be lacking a DCB-capable switch for lab purposes, you used a direct cable between your two RoCE NICs. And guess what, RoCE might indeed have worked for you without a VLAN tag. You can test and get a feel for what RoCE/RDMA can do for you with just the NICs. But as there is no switch involved, you’re not using DCB for PFC/ETS, and without that the need for the tagged VLAN isn’t there. Also see https://blog.workinghardinit.work/2013/05/03/smb-direct-roce-does-not-work-without-dcbpfc/.

So there you go. Design your RoCE/RDMA network based on DCB with PFC (and ETS) and not just on tests with a direct cable, or you might miss a few details that are quite important. Happy testing!