Introduction
Some support statements for ReFS have been updated recently. They reflect well over a year of testing and feedback to Microsoft by me, fellow MVPs and others. For all practical purposes I'm talking about ReFSv3, which was introduced with Windows Server 2016. Read up on it, because that's what I'm discussing here: Resilient File System (ReFS) overview
As many of you know, the supported storage deployment options for ReFS have fluctuated a bit. At one point ReFS was limited to Storage Spaces and standalone disks only. That meant no RAID controllers and no FC or iSCSI LUNs via a SAN, whether that was a high-end array or an entry-level one that you normally only use for backup purposes.
I was never really satisfied with the reasons why, and I kept being a passionate advocate for a decent explanation: tying a file system with the capabilities and potential of ReFS to almost a single storage solution (S2D, and yes, that's a very good HCI offering) isn't going to help proliferate the goodness of ReFS around the globe.
I was not alone. Many others, among them fellow MVPs Anton Gostev (Senior Vice President, Product Management at Veeam and an industry heavyweight when it comes to credibility and technical skill), Carsten Rachfahl and Jan Kappen (both at Rachfahl IT-Solutions), were arguing the case for broader ReFS support. Last week we got the news that the ReFS deployment documentation had been revised. Guess what? Progress! A big thank you to Andrew Hansen for taking the time to hear us plead our case and listen to our testing results and passionate feedback. He picked up the ball, ran with it and delivered! Let's take a look.
ReFS Storage Deployment Options
Storage Spaces Direct
Deploying ReFS on Storage Spaces Direct is recommended for virtualized workloads or network-attached storage. This is well known and is used for hyper-converged infrastructure and converged (SOFS) solutions (Hyper-V, IIS, SQL, user profile disks and even archival or backup targets). You can deploy it with simple, mirrored (2-way or 3-way), parity or mirror-accelerated parity volumes.
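For illustration, creating such volumes on an S2D cluster boils down to something like the sketch below (the pool wildcard, volume names, sizes and the Performance/Capacity tier names are examples; check Get-StorageTier for the tier names in your pool):

# Mirror volume (three-way by default on S2D with enough nodes), formatted with CSVFS_ReFS
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "MirrorVol" -FileSystem CSVFS_ReFS -Size 2TB

# Mirror-accelerated parity volume: writes land on the mirror tier, cold data rotates to parity
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "BackupVol" -FileSystem CSVFS_ReFS -StorageTierFriendlyNames Performance, Capacity -StorageTierSizes 2TB, 8TB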
Storage Spaces
Storage Spaces supports local non-removable direct-attached disks via BusTypes SATA, SAS, NVMe, or disks attached via an HBA (aka a RAID controller in pass-through mode). You can deploy it with simple, mirrored (2-way or 3-way) or parity volumes. Do note that this can be both non-shared and shared Storage Spaces (shared SAS enclosures). The latter is the highly available Storage Spaces solution we had before Windows Server 2016 added S2D.
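If you want to check what bus type your disks report and whether they can be pooled, a quick sketch (the pool name and sizes are examples):

# List candidate disks with their bus type (SATA, SAS, NVMe) and pool eligibility
Get-PhysicalDisk | Format-Table FriendlyName, BusType, MediaType, CanPool, Size

# Pool all poolable disks and carve out a two-way mirror ReFS volume
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "Pool01" -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks
New-Volume -StoragePoolFriendlyName "Pool01" -FriendlyName "Data" -FileSystem ReFS -ResiliencySettingName Mirror -Size 1TB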
Basic disks
Deploying ReFS on basic disks is best suited for applications that implement their own software resiliency and availability solutions. Applications that bring their own resiliency and availability can leverage integrity streams, block cloning, and the ability to scale and support large data sets. A poster child for this use case is an Exchange DAG.
Now it is important to note that basic disks with ReFS are supported with local non-removable direct-attached disks via BusTypes SATA, SAS, NVMe, or RAID. So yes, you can have RAID 1, 5, 6 or 10 and make the storage redundant. Now, be smart: ReFS is great but it is not magic. If your workload requires redundancy and high availability you should provide it. This is no different from when you use NTFS. When you have shared PCI RAID controllers (which can be redundant, like in a DELL VRTX) these can be used as well to create highly available deployments with shared storage.
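Formatting such a RAID-backed basic disk with ReFS is nothing special. A minimal sketch (the drive letter and label are examples):

# Format the volume the RAID controller presents with ReFS, using 64K clusters
Format-Volume -DriveLetter E -FileSystem ReFS -AllocationUnitSize 65536 -NewFileSystemLabel "BackupRepo01"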
SAN Storage
You can also use ReFS with a SAN over FC or iSCSI; normally those are always configured with some form of storage redundancy. You can consume the ReFS SAN storage on standalone, member or clustered servers for high availability, as long as you use that storage for supported use cases. For example, it is and remains unsupported to put knowledge worker data on SOFS shares, no matter what the underlying storage for the ReFS or NTFS volumes is. For backups this can be leveraged to build some very capable solutions.
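As a sketch, consuming a freshly presented FC or iSCSI LUN as a ReFS backup repository on a stand-alone server can look like this (the disk number, drive letter and label are assumptions; check Get-Disk first):

# Identify the new LUN, bring it online and initialize it as GPT
Get-Disk | Format-Table Number, FriendlyName, BusType, Size, PartitionStyle
Set-Disk -Number 4 -IsOffline $false
Initialize-Disk -Number 4 -PartitionStyle GPT

# Create a partition spanning the LUN and format it with ReFS
New-Partition -DiskNumber 4 -UseMaximumSize -DriveLetter R |
    Format-Volume -FileSystem ReFS -AllocationUnitSize 65536 -NewFileSystemLabel "BackupRepo02"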
What were the concerns that made ReFS Support so limited at a given point in time?
Well, one of them was confusion and concern around how data gets flushed and persisted with non-Storage Spaces and simple disks. A valid concern, but one you have with any file system, so any storage array or controller needs to handle this well. As it turns out, any decent piece of storage hardware or controller that is on the Microsoft Hardware Compatibility List and is certified does its job well enough to guarantee this happens correctly. So any certified OEM SAN, from entry-level to high-end enterprise-grade gear, is supported. Just like any good (certified) RAID controller. Those come with battery-backed caches that can survive downtime for days to many weeks. You just pick the one that fits your needs, use case and budget from the options you have. That can be S2D, a SAN, a RAID controller, or even basic directly attached disks.
My take on things
Why do I like the new supported options? Well, because I have been testing them for backup targets, both highly available ones and non-highly available ones. I can have the benefits of ReFS that can be leveraged by backup software (Veeam Backup & Replication 9.5, for example) and get better performance and data protection with more types of storage than S2D. I like to have options and choices when designing a solution.
It is important to note one thing when you do not use ReFS in combination with Storage Spaces (S2D, shared Storage Spaces or "stand-alone" Storage Spaces) with some form of data redundancy (2-way or 3-way mirror, parity, mirror-accelerated parity): you will not have the built-in capability to repair data corruption that can occur while data sits on disk (bit rot) by leveraging the redundant copies in Storage Spaces. That only comes when ReFS is combined with redundant Storage Spaces, not with simple Storage Spaces or any other storage array, redundant or not. The combination of ReFS with redundant Storage Spaces offers this capability and is one of its selling points.
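You can still turn on integrity streams on such volumes to detect corruption; you just do not get the automatic repair that the redundant copies in Storage Spaces provide. A quick sketch, with a hypothetical path:

# Check whether integrity streams are enabled on a folder (or file)
Get-FileIntegrity -FileName "R:\Backups"

# Enable integrity streams (checksums); new files created underneath inherit the setting
Set-FileIntegrity -FileName "R:\Backups" -Enable $true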
Other than that, the above ReFS storage deployment options let you leverage the benefits ReFS has to offer, and yes, for some use cases that will be preferred over NTFS. But don't think NTFS should now only be used for the OS and such. That's not the case. It is and remains very much the dominant file system for Windows. It's just that now we get to leverage the goodness of ReFS for suitable scenarios with a lot more storage deployment options. This has its reasons. For example, if you are going to do Hyper-V with a SAN, the supported file system is NTFS, not ReFS. Mind you, ReFS works, but it's not supported. I have tested this, and while it works, one of the concerns is the redirected IO traffic this incurs. With S2D the network fabric to deal with this is there by design: SMB Direct (RDMA) over 10Gbps or better. With a SAN that's not necessarily so, and as a result the network leveraged by CSV traffic might take a beating. The network traffic patterns with ReFS on SAN-based CSVs are also different from what you are used to with NTFS when it comes to owner and non-owner nodes. While I can make things work, I must weigh the benefits against the risk of being unsupported. On a good SAN with ODX support that's not worth the risk. Might this ever change? Maybe, but for now that's it.
That said, when I design my ReFS LUNs and fabric well with a SAN and use them for a supported use case like backup targets, I am supported and I get to leverage the benefits of ReFS as it fits the use case very well (DPM, Veeam).
A side note on mirror accelerated parity
Mirror-accelerated parity is only supported with S2D. That is the one thing, in regard to backup and archive targets, that I want to keep testing (see Hyper-V Amigos Showcast Episode 12 – ReFS and Backup) and asking Microsoft to support, at least on non-shared Storage Spaces. I know shared Storage Spaces is being deprecated, no worries. That would make for some great, budget-friendly archival and backup targets, because you get bit rot protection from the combination of ReFS with redundant Storage Spaces. I even have some ideas on how to add tuning capabilities to the mirror/parity movement of data based on data age etc. I can dream, right?
Conclusion
To all the naysayers, the ones that bashed me when I discussed the options for and the potential of ReFSv3 outside of S2D: take note, this is where we are today.
And I like it. I like the options ReFSv3 offers with a variety of storage solutions to design and implement backup targets for many different needs and budgets. That's what I like, as I'm convinced that one-size-fits-all solutions are an illusion. Even at economies of scale and with commodity materials, understanding the context in which to design and implement a solution matters, as it allows you to choose the proper methods for the given needs when you genuinely understand the challenge.
If you need help with this, there are quite a number of highly skilled, experienced people with the right mindset to help you maximize your ROI and TCO in an effective and efficient way. Many of these are MVPs who have their own business or work for IT firms that really do provide high-value services instead of milking customers like cattle. Just reach out.
Hi Didier, exploring mirror accelerated parity options with storage spaces (not direct) and it seems to be supported/work – per https://github.com/MicrosoftDocs/windowsserverdocs/issues/1145
Has that changed since you wrote your article?
It is a bit of a confusing area for everyone. I am not sure what your scenario is, but I would like to point out that Mirror-Accelerated Parity (MAP) is not exactly the same as tiering in the Windows Server 2012 (R2) Storage Spaces concept (real-time versus scheduled).
There was a possibility to do MAP in shared Storage Spaces during the previews of Windows Server 2016, but if I remember correctly that was blocked in the later previews and RTM (unless that has changed again or I am mistaken). You can however use tiering à la Windows Server 2012 R2 Storage Spaces in Windows Server 2016 with parity. That model, however, is being deprecated.
In the video mentioned in this blog we demo using MAP in a stand-alone Storage Spaces setup on a single server (not shared Storage Spaces), which is not blocked, but I have found no official support statement for it either. What I read and know is that MAP is S2D only in terms of being supported.
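For reference, the stand-alone setup demoed there boils down to something like this sketch (the pool name, tier names and sizes are examples, not an official recipe):

# Create a mirror (SSD) tier and a parity (HDD) tier in a stand-alone storage pool
New-StorageTier -StoragePoolFriendlyName "Pool01" -FriendlyName "SSDTier" -MediaType SSD -ResiliencySettingName Mirror
New-StorageTier -StoragePoolFriendlyName "Pool01" -FriendlyName "HDDTier" -MediaType HDD -ResiliencySettingName Parity

# Carve a ReFS volume across both tiers
New-Volume -StoragePoolFriendlyName "Pool01" -FriendlyName "Archive" -FileSystem ReFS -StorageTierFriendlyNames "SSDTier", "HDDTier" -StorageTierSizes 500GB, 20TB -AllocationUnitSize 65536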
Hope this helps a bit.
What are your thoughts on the benefits of ReFS over NTFS when you're not using Storage Spaces, but instead you're hosting the storage on an enterprise SAN? I understand you lose the ability to repair data corruption, 2-way or 3-way mirror, parity, mirror-accelerated parity, etc…
I see Veeam pushes ReFS for their snap clone, but outside of that, are there benefits for large repositories with non-Veeam backup software?
The application has to leverage the capabilities of ReFS in some way, shape or form that is beneficial to its purpose, in a supported use case for ReFS.
I'm setting up a Windows Server 2019 hypervisor on a RAID 5 SSD array. I'd like to format the VM partition with ReFS. Do you see any potential issue doing this? I like the instant fixed disk creation feature.
So this is a stand-alone host (no CSVs involved) with a quality battery-backed RAID controller? In that case, you should be just fine. The only thing you miss out on is the auto repair of bit rot that comes with the ReFS/Storage Spaces combo. But you don't have that with NTFS and RAID controllers either.
Yes. Stand-alone host. LSI 9300-8i with battery, using FastPath, No Read Ahead and Write Through, no cache. The VMs seem to be running quicker than when I've had them on an NTFS partition, though this could be placebo. Are writes more intense on SSDs with ReFS than NTFS? I love the instant fixed disk creation.
Follow MSFT's advice and stick to 4K for ReFS and you should be fine: https://blogs.technet.microsoft.com/filecab/2017/01/13/cluster-size-recommendations-for-refs-and-ntfs/
I’ve seen so many back and forth articles for Hyper-V partitions that say still stick with 64K, it’s all very confusing. LOL. Thank you for your advice! It seems as though with the newest revision of ReFS that 64K is a viable option?
It is viable, I've used it before with no issues, but for Hyper-V workloads MSFT states 4K as the recommendation.
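If you want to double-check what cluster size an existing volume was formatted with, something like this (the drive letter is an example) will tell you:

# Show the file system and cluster (allocation unit) size of an existing volume
Get-Volume -DriveLetter V | Format-List FileSystemType, AllocationUnitSize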
Hi,
I deployed Storage Spaces on Windows Server 2019 (not S2D). I created a virtual disk with storage tiers. When I create a volume on this virtual disk, I see no option for ReFS in the GUI.
I tried to search for this, and everyone says tier optimization is not available on ReFS volumes with Windows Server 2016. However, no one gave an authoritative reference.
I wonder whether it is true that ReFS does not support tiered storage, and whether anything has changed in ReFS and Storage Spaces over the years?
Thanks!
It works. You need to create the volumes in PowerShell. Here's an example. New-Volume -FriendlyName "MyVolume1" -DriveLetter T -AllocationUnitSize 65536 -FileSystem ReFS -StoragePoolFriendlyName $StoragePoolFriendlyName -StorageTierFriendlyNames $SSDTier.FriendlyName, $HDDTier.FriendlyName -StorageTierSizes 10TB, 40TB
I cannot speak for MSFT in regards to support :-). I use it for backup targets. Do use all the newer commands and stay away from examples that refer to W2K12R2 era setups.
Thanks for your reply!
"Tier optimization" is what I'm worried about. Sometimes I wonder if Microsoft hid the ReFS option to avoid "tier optimization not working" complaints.
Now that I know tier optimization works, I will use ReFS.
Use fixed (fully provisioned) volumes. Keep the size reasonable and get sufficient memory. Windows Server 2016 is better than Windows Server 2019 unless you go for the SAC releases (190X).
I know this is an old discussion thread however I am posting to see if anyone might reply to my question.
I have just assembled a stand-alone storage pool using a JBOD-compliant enclosure housing five 8TB SATA III HDDs. I created a tiered storage pool with PS and am using an SSD (2x500GB) two-way mirror, and I have added a 250GB NVMe that, according to my research, will be automatically used solely for cache purposes.
Using PS I created a hybrid pool using storage tiers of SSD and HDD. I have confirmed the existence of the tiers and the resiliency as mirror (SSD) and parity (HDD). I then used PS to create a New-Volume on the entire pool using TierSizeMax options for both tiers. I set the allocation unit size to 64K, provisioning is fixed, ReFS file system.
The above PS command has yet to complete and I am wondering if anyone may have an idea of how long it will take to format the volume and complete the process?
This device will be used primarily for backup purposes and media archiving. I anticipate this working well, any thoughts?
The format should be quite fast when you do a quick format. But… on stand-alone Storage Spaces you cannot use an NVMe cache; the cache is on the SSDs of the MAP config. The dedicated cache only works with S2D, as far as I know and have tested.
Hi, we are using DPM 2019 on Windows Server 2019 to back up over 100 VMs on Hyper-V to an iSCSI volume mounted from another Windows Server 2019 storage server. We have seen massive memory usage for ReFS of over 10 GB, freezing the VM to the point that it needed a forced reset several times.
I read up about tuning of ReFS parameters, block size and Integrity streams. Some people suggest turning off integrity streams. Would you see any major issues with disabling integrity streams for this use case?
Would you have any other suggestions for tuning ReFS for large instances of DPM?
0) Make sure the DPM server/backup target has enough CPU/Memory.
1) Make sure you are fully patched; a lot of ReFS issues have been solved since RTM/GA.
2) Optimizations:
fsutil behavior set DisableDeleteNotify ReFS 1
(disables delete/TRIM notifications for ReFS volumes)
REG ADD HKLM\System\CurrentControlSet\Control\FileSystem /v RefsEnableLargeWorkingSetTrim /t REG_DWORD /d 1
(lets ReFS trim its metadata working set more aggressively to curb memory usage; a reboot is needed for the change to take effect)
3) The next Windows Server release will have a newer ReFS version with the significant changes brought over several SAC releases.
4) Integrity streams should not have that serious an impact on backups, but you can try disabling them to see if it makes a difference.
5) Some extra tweaks exist, but there is no use applying them before the above.
Thank you for your helpful advice, very appreciated!
You are most welcome.