ReFS Supported Deployment Scenarios Updated

Introduction

Some support statements for ReFS have been updated recently. They reflect well over a year of testing and feedback to Microsoft by me, fellow MVPs and others. For all practical purposes I’m talking about ReFSv3, which was introduced with Windows Server 2016. Read up on it because that’s what I’m discussing here: Resilient File System (ReFS) overview

As many of you know, the supported storage deployment options for ReFS have fluctuated a bit. At one point support was limited to Storage Spaces and standalone disks only. That meant no RAID controllers and no FC or iSCSI LUNs via a SAN, whether that was a high-end one or an entry-level one that you normally only use for backup purposes.

I was never really satisfied with the reasons why, and I kept being a passionate advocate for a decent explanation, as tying a file system with the capabilities and potential of ReFS to almost a single storage solution (S2D, and yes, that’s a very good HCI offering) isn’t going to help proliferate the goodness of ReFS around the globe.

I was not alone, and many others, amongst them fellow MVPs Anton Gostev (Senior Vice President, Product Management at Veeam and an industry heavyweight when it comes to credibility and technical skill), Carsten Rachfahl and Jan Kappen (both at Rachfahl IT-Solutions), were arguing the case for broader ReFS support. Last week we got the news that the ReFS deployment documentation had been revised. Guess what? Progress! A big thank you to Andrew Hansen for taking the time to hear us plead our case and listen to our testing results and passionate feedback. He picked up the ball, ran with it and delivered! Let’s take a look.

ReFS Storage Deployment Options

Storage Spaces Direct

Deploying ReFS on Storage Spaces Direct is recommended for virtualized workloads or network-attached storage. This is well known and is used for hyper-converged infrastructure and converged (SOFS) solutions (Hyper-V, IIS, SQL, User Profile Disks and even archival or backup targets). You can deploy it with simple, mirrored (2-way or 3-way), parity or mirror-accelerated parity volumes.
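
To make that concrete, here is a minimal PowerShell sketch of creating a mirror-accelerated parity ReFS volume on an existing S2D pool. The friendly names and tier sizes are illustrative, not from any particular environment:

```powershell
# Create a mirror-accelerated parity CSV on an existing S2D pool.
# Volume name and tier sizes are illustrative; adjust to your pool.
New-Volume -StoragePoolFriendlyName "S2D*" `
    -FriendlyName "BackupRepo01" `
    -FileSystem CSVFS_ReFS `
    -StorageTierFriendlyNames Performance, Capacity `
    -StorageTierSizes 100GB, 900GB
```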

Storage Spaces

Storage Spaces supports local non-removable direct-attached disks via BusTypes SATA, SAS, NVMe, or attached via an HBA (aka a RAID controller in pass-through mode). You can deploy it with simple, mirrored (2-way or 3-way) or parity volumes. Do note that this can be both non-shared and shared Storage Spaces (shared SAS enclosures). The latter is the highly available Storage Spaces solution we had before Windows Server 2016 added S2D.
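
As a rough illustration, this is how you would check the bus types Storage Spaces sees and stand up a simple 2-way mirrored ReFS volume. All friendly names and sizes here are made up for the example:

```powershell
# Inspect local disks: BusType should show SATA, SAS or NVMe and
# CanPool should be True for disks eligible for a storage pool.
Get-PhysicalDisk | Select-Object FriendlyName, BusType, CanPool, Size

# Pool the eligible disks and carve out a 2-way mirrored ReFS volume.
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "Pool01" `
    -StorageSubSystemFriendlyName "Windows Storage*" `
    -PhysicalDisks $disks
New-Volume -StoragePoolFriendlyName "Pool01" -FriendlyName "Data01" `
    -FileSystem ReFS -ResiliencySettingName Mirror -Size 500GB
```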

Basic disks

Deploying ReFS on basic disks is best suited for applications that implement their own software resiliency and availability solutions. Applications that introduce their own resiliency and availability software solutions can leverage integrity streams, block cloning, and the ability to scale and support large data sets. A poster child for this use case is an Exchange DAG.

Now it is important to note that basic disks with ReFS are supported with local non-removable direct-attached disks via BusTypes SATA, SAS, NVMe, or RAID. So yes, you can have RAID 1, 5, 6 or 10 and make the storage redundant. Now, be smart: ReFS is great but it is not magic. If your workload requires redundancy and high availability you should provide it. This is no different from when you use NTFS. When you have shared PCI RAID controllers (which can be redundant, like in a Dell VRTX) these can be used as well to create highly available deployments with shared storage.
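
For completeness, here is what formatting such a RAID controller LUN (presented to Windows as a basic disk) with ReFS looks like. The disk number and label are assumptions for the example; check with Get-Disk first:

```powershell
# Bring the new basic disk online and format it with ReFS.
# Disk number 1 and the label are illustrative.
Get-Disk
Initialize-Disk -Number 1 -PartitionStyle GPT
New-Partition -DiskNumber 1 -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem ReFS -NewFileSystemLabel "BackupTarget"
```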

SAN Storage

You can also use ReFS with a SAN over FC or iSCSI; normally those are always configured with some form of storage redundancy. You can consume the ReFS SAN storage on standalone, member or clustered servers for high availability, as long as you use that storage for supported use cases. For example, it is and remains unsupported to put knowledge worker data on SOFS shares, no matter what the underlying storage for the ReFS or NTFS volumes is. For backups this can be leveraged to build some very capable solutions.
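
For iSCSI, a bare-bones sketch of consuming such a LUN looks like this. The portal address is a placeholder, and MPIO/redundant path configuration is deliberately left out:

```powershell
# Connect to an iSCSI target and keep the connection across reboots.
# The portal address is a placeholder for your SAN's address.
New-IscsiTargetPortal -TargetPortalAddress 10.0.0.50
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true
# The LUN then shows up as a disk; initialize and format it with ReFS
# just like the basic disk example above.
```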

What were the concerns that made ReFS support so limited at a given point in time?

Well, one of them was confusion and concerns around how data gets flushed and persisted with non-Storage Spaces and simple disks. A valid concern, but one you have with any file system, so any storage array or controller needs to handle this well. As it turns out, any decent piece of storage hardware/controller that’s on the Microsoft Hardware Compatibility List and is certified does its job well enough to guarantee this happens correctly. So any certified OEM SAN, from entry-level ones to high-end enterprise-grade gear, is supported. Just like any good (certified) RAID controller; those are backed with battery-backed caches that can survive downtime for days to many weeks. You just pick the one that fits your needs, use case and budget from the options you have. That can be S2D, a SAN, a RAID controller, or even basic directly attached disks.

My take on things

Why do I like the new supported options? Well, because I have been testing them for backup targets, both highly available ones and non-highly available ones. I can have the benefits of ReFS that can be leveraged by backup software (Veeam Backup & Replication 9.5 for example) and get better performance and data protection with more types of storage than S2D alone. I like to have options and choices when designing a solution.

It is important to note one thing when you do not use ReFS in combination with Storage Spaces (S2D, shared Storage Spaces or “standalone” Storage Spaces) with any form of data redundancy (2-way or 3-way mirror, parity, mirror-accelerated parity): you will not have the built-in capability to repair data corruption that can occur while data sits on disk (bit rot) by leveraging the redundant copies in Storage Spaces. That only comes when ReFS is combined with redundant Storage Spaces, not with simple Storage Spaces or any other storage array, redundant or not. The combination of ReFS with redundant Storage Spaces offers this capability and it is one of its selling points.
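
You can see and steer this per file with the integrity stream cmdlets. A quick sketch (the path is illustrative); note that on non-redundant storage integrity streams still detect bit rot, they just can’t repair it:

```powershell
# Check whether integrity streams are enabled for a file.
Get-FileIntegrity -FileName 'D:\Backups\job1.vbk'

# Enable them; on redundant Storage Spaces ReFS can then auto-repair
# a corrupt range from a healthy copy, elsewhere it can only detect.
Set-FileIntegrity -FileName 'D:\Backups\job1.vbk' -Enable $true
```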

Other than that, the above ReFS storage deployment options let you leverage the benefits ReFS has to offer, and yes, for some use cases it will be preferred over NTFS. But don’t think NTFS should now only be used for the OS and such. That’s not the case. It is and remains very much the dominant file system for Windows. It’s just that now we get to leverage the goodness of ReFS for suitable scenarios with a lot more storage deployment options. This has a reason. For example, if you are going to do Hyper-V with a SAN, the supported file system is NTFS, not ReFS. Mind you, ReFS works, but it’s not supported. I have tested this and while it works, one of the concerns is the redirected IO traffic this incurs. With S2D the network fabric to deal with this is there by design: SMB Direct (RDMA) over 10Gbps or better. With a SAN that’s not necessarily so, and as a result the network leveraged by CSV traffic might take a beating. The network traffic patterns with ReFS on SAN-based CSVs are also different from what you are used to with NTFS when it comes to owner and non-owner nodes. While I can make things work, I must weigh the benefits against the risk of being unsupported. On a good SAN with ODX support that’s not worth the risk. Might this ever change? Maybe, but for now that’s it.
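
If you want to see that redirected IO behavior for yourself on a cluster, this is a quick way to check the CSV IO mode per node:

```powershell
# Show per-node CSV I/O state; ReFS CSVs run in file system redirected
# mode, while NTFS CSVs normally show Direct on non-owner nodes too.
Get-ClusterSharedVolumeState |
    Select-Object Name, Node, StateInfo, FileSystemRedirectedIOReason
```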

That said, when I design my ReFS LUNs and fabric well with a SAN and use them for a supported use case like backup targets, I am supported and I get to leverage the benefits of ReFS as it fits the use case very well (DPM, Veeam).

A side note on mirror accelerated parity

Mirror-accelerated parity is only supported with S2D. That’s the one thing that, in regard to backup and archive targets, I want to keep testing (see Hyper-V Amigos Showcast Episode 12 – ReFS and Backup) and asking Microsoft to support, at least on non-shared Storage Spaces. I know shared Storage Spaces is being deprecated, no worries. That would make for some great, budget-friendly archival and backup targets, because you get bit rot protection from the combination of ReFS with redundant Storage Spaces. I even have some ideas on how to add tuning capabilities to the mirror/parity movement of data based on data age etc. I can dream, right?

Conclusion

To all the naysayers, the ones that bashed me when I discussed the options for and the potential of ReFSv3 outside of S2D: take note, this is where we are today.

clip_image001

And I like it. I like the options ReFSv3 offers with a variety of storage solutions to design and implement backup targets for many different needs and budgets. That’s what I like, as I’m convinced that one-size-fits-all solutions are an illusion. Even at economies of scale and with commodity materials, understanding the context in which to design and implement a solution matters, as it allows you to choose the proper methods for the given needs when you genuinely understand the challenge.

If you need help with this, there are quite a number of highly skilled, experienced people with the right mindset to help you maximize your ROI and optimize your TCO in an effective and efficient way. Many of these are MVPs who have their own business or work for IT firms that don’t milk customers like cattle but really do provide high-value services. Just reach out.

I made Veeam Vanguard 2018!

While attending the Microsoft MVP Global Summit 2018 I received notification that I was renewed as a Veeam Vanguard for 2018. This is my fourth year, as I was one of the inaugural members in 2015.

clip_image001

The Veeam Vanguard group is a collection of smart, hardworking IT experts who have a healthy interest in data protection and availability. No matter what you build in IT to support your business or customers, it needs to be protected against downtime. You also need the ability to perform disaster recovery and deliver business continuity for those days things are not going smoothly. Those requirements keep these technologists busy and honest. They have to deliver on them, and they can’t talk their way out of not being able to do that when needed. The result is that this group of experts is very experienced and knowledgeable, both in their specialties and in how to protect their workloads. Being part of the Veeam Vanguard means sharing that experience and knowledge and tapping into their collective brain power. I’m happy and proud to be a Veeam Vanguard as it is a great learning experience and it helps me deliver even more value to my employers and all Veeam customers. It’s win-win all over. Thank you, Veeam, for the opportunity and recognition.

It’s not as simple as renaming the avhdx to vhdx

This arrived via the feedback option on my blog:

Hi. I see through your website that you are an expert in vhdx / avhdx file. I had a system crash with data loss. I think this data is in an avhdx file. When I rename this file in vhdx, I can mount it but I have an error: the file is corrupted. Do you know a procedure to repair this type of file? I thank you in advance for your support!

Oh dear! An expert? While flattery can get you a long way in life with certain people, virtual disks are impervious to that sort of thing. Look, MVP, Veeam Vanguard, Dell Rockstar … tip of the spear, edge of the sword, it’s all fine and well, but titles are no good for splitting a piece of granite, and virtual disks don’t care about them either, just about how they are designed to work.

Before we dive into some more details: please use the comments section under the relevant blog post to ask questions. That way everyone can benefit from the answer. It’s all quite anonymous if you want it to be. Secondly, vendors like Microsoft have great public support forums with many thousands of pairs of eyes reading. That might also work better and faster for your needs.

Some details

When you have an avhdx, your data is stored in the avhdx and in its parent disks (possibly more avhdx files, but always at least one vhdx at the root of the chain). While you can, under certain conditions, throw away what’s in an avhdx (and lose that data) and mount the vhdx, you cannot throw away the vhdx and hope to access the data in an avhdx you renamed to vhdx.

clip_image002
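
Rather than renaming anything, inspect the chain first. A small sketch with the Hyper-V PowerShell module; the path is illustrative:

```powershell
# Show a differencing disk's type and its parent.
Get-VHD -Path 'D:\VMs\MyVM_1A2B3C.avhdx' |
    Select-Object Path, VhdType, ParentPath

# Walk up the chain until you reach the base vhdx.
$vhd = Get-VHD -Path 'D:\VMs\MyVM_1A2B3C.avhdx'
while ($vhd.ParentPath) { $vhd = Get-VHD -Path $vhd.ParentPath }
$vhd.Path   # the root vhdx of the chain
```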

For a case of real data corruption, not just a phantom or mixed-up VHDX/AVHDX chain where you can try to intervene manually if you have the skills, you’ll have to recover or restore the data.

If the storage on which the vhdx/avhdx files reside is corrupted, a good but time-consuming run of chkdsk /f /r can do the job. I have done that before with success. But there are no guarantees in this game.
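
For completeness, that looks like this (the drive letter is illustrative, and this applies to the NTFS volume holding the files; plan for the volume being offline while it runs):

```powershell
# /f fixes file system errors, /r also scans for bad sectors and
# recovers readable information. Slow on large volumes.
chkdsk D: /f /r
```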

Other than that, or when the storage is gone, it is restore time. This can be done leveraging whatever backup solution you use, or VSS snapshots on the storage side of things. Those options are your best bet. You can find some more info on manually manipulating vhdx/avhdx files here, but that’s not what you’re facing here it seems.

If you don’t have recovery options in place, what can I say?

Stop what you’re doing and contact a good data recovery company. Only damage can come from trying if you don’t know what you’re doing. You can hope trial and error will fix it but that would be the triumph of hope over experience. You’re usually not that lucky. Trust me.

The snarky bit

I’ll fight like hell if I’m in a pickle and the data is valuable. But it’s near to impossible to do it for someone else, as it’s hard, time consuming and often a case where the files have been worked on before, so they tend to be messed up. If the data is not that valuable, just eat the loss.

In reality my time always seems less valuable than other people’s data. Now, if you say you can help me retire early by trying anyway and are OK with a best-effort, no-guarantees-given deal, I might do it. But I’m pretty sure investing in backups and restores is way cheaper and will lead to better results. Your data is important and valuable, even when my time is not. Just saying.

CBT Driver with Veeam Agent for Microsoft Windows 2.1

Change Block Tracking comes to physical & IaaS Veeam backups

With the big improvements and new capabilities delivered in Veeam Backup & Replication 9.5 Update 3 come some interesting features related specifically to the Veeam Agent for Windows 2.1 Server Edition. We now get the ability to manage the Veeam Agent centrally from within the VBR 9.5 Update 3 console or via PowerShell. This includes deploying the new Change Block Tracking (CBT) driver for Windows Server (not Linux).

This CBT driver is optional and works like you have come to expect from Veeam VBR when backing up Hyper-V virtual machines pre-Windows Server 2016. Windows Server 2016 has its own CBT capabilities, which Veeam VBR 9.5 leverages. The big thing here is that you now get CBT capabilities for physical or virtual in-guest workloads (that includes IaaS, people!) with Veeam Agent for Windows 2.1 Server Edition.

Deploying the Veeam Agent for Microsoft Windows 2.1 CBT Driver

The Veeam Agent for Windows 2.1 ships with an optional, signed change block tracking filter driver for Windows servers. That agent is included in your VBR 9.5 Update 3 download, or you can choose to download a version that does not have the CBT driver included. That’s up to you. I just upgraded my lab and production environment with the CBT driver included, as I might have a use for it. If not now, then later, and at least my environment is ready for that.

clip_image001

When you have installed VAW 2.1 you can navigate to C:\Program Files\Veeam\Endpoint Backup\CBTDriver and find the driver files for the supported Windows Server OS versions under their respective folders.

clip_image003

As you can see in the screenshot above, we have CBT drivers for every version of Windows Server back to Windows Server 2008 R2. If you are running anything older, we really need to talk about your environment in 2018. I mean it.

Note that right-clicking the .inf file for your version of Windows Server and selecting Install is the most manual way of installing the CBT driver. You’ll need to reboot the host afterwards.

clip_image005
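
If you’d rather script that than right-click, pnputil can stage and install the driver. The .inf path and name below are assumptions on my part, so use the file from the folder matching your OS version:

```powershell
# Stage and install the CBT driver (path and .inf name are assumptions;
# check the CBTDriver folder for your OS version). Run elevated.
pnputil /add-driver "C:\Program Files\Veeam\Endpoint Backup\CBTDriver\<your OS folder>\VeeamFCT.inf" /install
# On older OS versions the legacy syntax is: pnputil -i -a <inf>
# A reboot is still needed before the driver is used.
Restart-Computer
```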

Normally you’ll either handle the deployment and updating of the CBT driver via the VBR 9.5 Update 3 console or you’ll deploy and update it manually.

Install / uninstall the CBT driver via Veeam Backup & Replication Console

You can add servers individually or as part of a protection group (Active Directory based). Whatever option you choose, you’ll have the option of managing them via the agent manually or via the VBR server. Once you have done that, you can deploy and update the optional CBT driver for supported Windows Server versions via the individual servers or the protection groups.

Individual Server

clip_image007

Once the agent is installed you can optionally install the CBT driver. When that’s done, you can also uninstall the CBT driver and the agent from the VBR console.

clip_image009

Protection Group

You can add servers to protect via VAW 2.1 individually, via Active Directory (domain, organizational unit, container, computer, cluster or group) or via a CSV file with server names/IP addresses. That’s another subject actually, but you get the gist of what a protection group is.

clip_image011

Checking the CBT driver version

You can always check the CBT driver version via the details of a server added to the physical or cloud infrastructure.

clip_image012
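
You can also cross-check from the OS side. A sketch, where the 'Veeam*' driver name filter is an assumption on my part:

```powershell
# List Veeam-named kernel drivers with their state and file version.
Get-CimInstance Win32_SystemDriver |
    Where-Object Name -like 'Veeam*' |
    ForEach-Object {
        [pscustomobject]@{
            Name    = $_.Name
            State   = $_.State
            Version = (Get-Item $_.PathName).VersionInfo.FileVersion
        }
    }
```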

Install / uninstall the CBT driver via the standalone Veeam Agent for Windows

My workstation at home isn’t managed by a Veeam Backup & Replication 9.5 Update 3 server. It’s a standalone system, but it does run Windows Server 2016. Now, even while such a standalone system can send its backups to a Veeam repository, I don’t do that at home. The target is a local disk in a disk bay that I can easily swap out every week. I just rotate through a couple of repurposed larger HDDs for this purpose, and this also allows me to take a backup copy off site. The Veeam Agent for Windows configuration for my home office workstation is done locally, including the installation of the CBT driver. Doing so is easy. Under settings in VAW 2.1 we now have a third entry that’s there to install the CBT driver if we want to.

clip_image014

When you click install it will be done before you can even blink. It will prompt you to restart the computer to finish installing the driver. Do so. If not, the next backup will complain about failing over to MFT analysis based incremental backups, as it can’t use the installed CBT driver yet.

clip_image016

When using the new VAW 2.1 CBT driver for Windows, changes get tracked in VCT files. These can be seen under C:\ProgramData\Veeam\EndpointData\CtStore\VctStore.

clip_image018
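
A quick way to peek at that store (look, don’t touch; the file format is Veeam’s own and undocumented as far as I know):

```powershell
# List the change tracking files and when they were last written to.
Get-ChildItem 'C:\ProgramData\Veeam\EndpointData\CtStore\VctStore' |
    Select-Object Name, Length, LastWriteTime
```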

Ready to Go

I’ll compare the results of backing up my main workhorse with and without the CBT driver installed. Veeam indicates the use case is servers with a lot of data churn, and that’s where you should use the driver. The idea is that you don’t need to deal with updating drivers when the benefits are not there. That’s fair enough, I’d say, but I’m going to experiment a little with it anyway to see what difference I can notice without resorting to a microscope.

If we conclude that having the CBT driver installed is not worthwhile for our workstation, we can easily uninstall it again via the control panel, under settings, where we now see the option to uninstall it. Easy enough.

clip_image020

However, as it can track changes on NTFS as well as ReFS and FAT partitions, it might be wise to use it for those servers that have one or more such volumes, even when for NTFS volumes the speed difference isn’t that significant. Normally, the bigger the data churn delta, the bigger the benefit of the CBT driver will be.