The rejuvenated push for excellence by Veeam for Hyper-V customers

Introduction

As an observer of the hypervisor market in 2024 and 2025, you have undoubtedly noted considerable commotion and dissent. I did not have to deal with it, as I adopted and specialized in Hyper-V from day one. Even better, I am pleased to see that many more people now have the opportunity to experience Hyper-V and appreciate its benefits.

While the management UI is not as sleek and is more fragmented than that of some competitors, all the necessary features are available for free. Additionally, PowerShell automation enables you to create any tooling you desire, tailored to your specific needs. Do that well, and you do not need System Center Virtual Machine Manager for added capabilities. Denying the technical capabilities and excellence of Hyper-V only diminishes the credibility and standing of those who do so in the community.
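As a small illustration of that point, here is a minimal sketch of the kind of home-grown tooling PowerShell makes possible: reporting checkpoints older than a week across a set of Hyper-V hosts. The host names are placeholders for your own environment.

```powershell
# Hypothetical host names; replace with your own Hyper-V hosts.
$HyperVHosts = 'HV-NODE-01', 'HV-NODE-02'

foreach ($HvHost in $HyperVHosts) {
    # List all checkpoints older than 7 days on this host.
    Get-VM -ComputerName $HvHost |
        Get-VMSnapshot |
        Where-Object { $_.CreationTime -lt (Get-Date).AddDays(-7) } |
        Select-Object ComputerName, VMName, Name, CreationTime
}
```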

That has been my approach for many years, running mission-critical, real-time, data-sensitive workloads on Hyper-V clusters. So yes, Microsoft could have managed the tooling experience a bit better, and that would have put them in an even better position to welcome converting customers. Despite that, adoption has been rising significantly over the last 18 months, and not just in the SME market.

Commotion, fear, uncertainty, and doubt

The hypervisor world commotion has led to people looking at other hypervisors to support their business, either partially or wholesale. The moment you run workloads on a hypervisor, you must be able to protect, manage, move, and restore these workloads when the need to do so arises. Trust me, no matter how blessed you are, that moment comes to us all. The extent to which you can handle it, on a scale from minimal impact to severe impact, depends on the nature of the issue and your preparedness to address it.

A more diverse hypervisor landscape among customers means that data protection vendors need to support those hypervisors. I think most people will recognize that developing high-quality software, managing its lifecycle, and supporting it in the real world requires significant investment. So then comes the question: which ones to support? What percentage of customers will go for hypervisor x versus y or z? I leave that challenge to people like Anton Gostev and his team of experts. What I can say is that Hyper-V has taken a significant leap in adoption.

The second rise of Hyper-V

Over the past 18 months, I have observed a significant increase in the adoption of Hyper-V. And why not? It is a mature and capable platform built and supported by Microsoft. The latter makes moving to it a less stressful choice as the ecosystem and community are large and well-established. I believe that Hyper-V is one of the primary beneficiaries of the hypervisor turmoil. Adoption is experiencing a second, significant rise. For Veeam, this was not a problem. They have provided excellent Hyper-V support for a long time, and I have been a pleased customer, building some of the best and most performant backup fabrics on our chosen hardware.

But who are those customers adopting Hyper-V? Are they small and medium enterprises (SMEs) or managed service providers? Or is Hyper-V making headway with big corporate enterprises as well? Well, neither Microsoft nor Veeam shares such data with me. So, what do I do? Weak-to-strong signal intelligence! I observe what companies are doing and saying, in combination with what people ask me directly. That has me convinced that some larger corporations have made the move to Hyper-V. Some of the stronger signals came from Veeam.

Current and future Veeam releases

Let’s look at the more recent releases of Veeam Backup & Replication. With version 12.3, support for Windows Server 2025 arrived very fast after the general availability of that OS. Hyper-V on Windows Server, by the way, is getting the same improvements and new capabilities as Azure Local. That indicates Microsoft’s interest in making Hyper-V an excellent option for any customer, regardless of how they choose to run it: on local storage, with shared storage, on Storage Spaces Direct (S2D), or on Azure Local. That is a strong, positive signal compared to previous statements. Naturally, Hyper-V benefits from Veeam’s ongoing efforts to resolve issues, enhance features, and add capabilities, providing the best possible backup fabric for everyone. I will discuss that in later articles.

Now, the strongest and most positive signal from Veeam regarding Hyper-V came with updates to Veeam Recovery Orchestrator. Firstly, Veeam Recovery Orchestrator 7.2 (released on February 18th, 2025) introduced support for Hyper-V environments. What does that tell me? The nature, size, and number of customers leveraging Hyper-V that need and are willing to pay for Veeam Recovery Orchestrator have grown to a point where Veeam is willing to invest in developing and supporting it. That is new! On the product update page, https://community.veeam.com/product-updates/veeam-recovery-orchestrator-7-2-9827, you can find more information. The one requirement that sticks out is the need for System Center Virtual Machine Manager. Look at these key considerations:

  • System Center Virtual Machine Manager (SCVMM) 2022 & CSV storage registered in SCVMM is supported.
  • Direct connections to Hyper-V hosts are not supported.

But not that much later, on July 9th, 2025, in Veeam Recovery Orchestrator 7.2.1 (see https://community.veeam.com/product-updates/veeam-recovery-orchestrator-7-2-1-10876), we find these significant enhancements:

  1. Support for Azure Local recovery target: You can now use Azure Local as a recovery target for both vSphere and Hyper-V workloads, expanding flexibility and cloud recovery options.
  2. Hyper-V direct-connected cluster support: Extended Hyper-V functionality enables support for direct-connected clusters, eliminating the need for SCVMM. This move simplifies deployment and management for Hyper-V environments.
  3. MFA integration for VRO UI: Multi-Factor Authentication (MFA) can now be enabled to secure logins to the VRO user interface, providing enhanced security and compliance. Microsoft Authenticator and Google Authenticator apps are supported.

Items 1 and 2 especially are essential, as they enable Veeam Recovery Orchestrator to support many more Hyper-V customers. Again, this is a strong signal that Hyper-V is making inroads, enough so for Veeam to invest. Ironically, we have Broadcom to thank for this, which is why, in November 2024, I nominated Broadcom as the clear and unchallenged winner of the “Top Hyper-V Seller Award 2024” (https://www.linkedin.com/posts/didiervanhoye_broadcom-mvpbuzz-hyperv-activity-7257391073910566912-bTTF/).

Conclusion

Hyper-V and Veeam are a potent combination that continues to evolve as market demands change. Twelve years ago, I was testing out Veeam Backup & Replication, and six months later, I became a Veeam customer. I am still convinced that, for my needs and those of the environments I support, I made a great choice.

The longevity of a technology that evolves in response to customer and security needs is a key factor in making great technology choices. In that respect, Hyper-V and Veeam have performed exceptionally well, hitting the bullseye time and again without missing a beat. And by sitting out the hypervisor drama, we have hit it once more!

ReFS Supported Deployment Scenarios Updated

Introduction

Some support statements for ReFS have been updated recently. These reflect well over a year of testing and feedback to Microsoft by me, fellow MVPs, and others. For all practical purposes, I’m talking about ReFSv3, which was introduced with Windows Server 2016. Read up on it, because that’s what I’m discussing here: Resilient File System (ReFS) overview.

As many of you know, the supported storage deployment options for ReFS have “fluctuated” a bit. At one point, support was limited to Storage Spaces and standalone disks only. That meant no RAID controllers and no FC or iSCSI LUNs via a SAN, whether that was a high-end one or an entry-level one that you normally only use for backup purposes.

I was never really satisfied with the reasons why, and I kept being a passionate advocate for a decent explanation, as tying a file system with the capabilities and potential of ReFS to almost a single storage solution (S2D, and yes, that’s a very good HCI offering) isn’t going to help proliferate the goodness of ReFS around the globe.

I was not alone, and many others, amongst them fellow MVPs Anton Gostev (Senior Vice President, Product Management at Veeam and an industry heavyweight when it comes to credibility and technical skill), Carsten Rachfahl, and Jan Kappen (both at Rachfahl IT-Solutions), were arguing the case for broader ReFS support. Last week we got the news that the ReFS deployment documentation had been revised. Guess what? Progress! A big thank you to Andrew Hansen for taking the time to hear us plead our case and listen to our testing results and passionate feedback. He picked up the ball, ran with it, and delivered! Let’s take a look.

ReFS Storage Deployment Options

Storage Spaces Direct

Deploying ReFS on Storage Spaces Direct is recommended for virtualized workloads or network-attached storage. This is well known and is used for hyper-converged infrastructure (HCI) and converged (SOFS) solutions (Hyper-V, IIS, SQL, user profile disks, and even archival or backup targets). You can deploy it with simple, mirrored (2-way or 3-way), parity, or mirror-accelerated parity volumes.
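As a minimal sketch, creating such a volume on an existing S2D pool comes down to a single cmdlet. The pool and volume names below are placeholders for your own environment:

```powershell
# Create a 2TB three-way mirror volume formatted with ReFS (as a CSV)
# on an existing S2D pool. Pool and volume names are placeholders.
New-Volume -StoragePoolFriendlyName 'S2D on Cluster01' `
           -FriendlyName 'CSV-Backup01' `
           -FileSystem CSVFS_ReFS `
           -ResiliencySettingName Mirror `
           -Size 2TB
```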

Storage Spaces

Storage Spaces supports local, non-removable, direct-attached disks via BusTypes SATA, SAS, and NVMe, or disks attached via an HBA (aka a RAID controller in pass-through mode). You can deploy it with simple, mirrored (2-way or 3-way), or parity volumes. Do note that this covers both non-shared and shared Storage Spaces (shared SAS enclosures). The latter is the highly available Storage Spaces solution we had before Windows Server 2016 added S2D.
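A quick way to check which of your local disks qualify for pooling is to inspect their BusType with the in-box Storage module; a small sketch:

```powershell
# Supported BusTypes for Storage Spaces are SATA, SAS and NVMe
# (or disks behind an HBA in pass-through mode). CanPool shows
# whether a disk is eligible to join a storage pool.
Get-PhysicalDisk |
    Select-Object FriendlyName, BusType, MediaType, CanPool, Size |
    Sort-Object BusType |
    Format-Table -AutoSize
```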

Basic disks

Deploying ReFS on basic disks is best suited for applications that implement their own software resiliency and availability solutions. Such applications can leverage integrity streams, block cloning, and the ability to scale and support large data sets. A poster child for this use case is an Exchange DAG.

Now, it is important to note that basic disks with ReFS are supported with local, non-removable, direct-attached disks via BusTypes SATA, SAS, NVMe, or RAID. So yes, you can have RAID 1, 5, 6, or 10 and make the storage redundant. Now, be smart: ReFS is great, but it is not magic. If your workload requires redundancy and high availability, you should provide it. This is no different than when you use NTFS. When you have shared PCI RAID controllers (which can be redundant, like in a DELL VRTX), these can be used as well to create highly available deployments with shared storage.
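As an illustrative sketch, formatting such a RAID-backed basic disk with ReFS looks like this; the disk number and label are placeholders, and remember the redundancy comes from the RAID controller, not from ReFS:

```powershell
# Initialize a raw disk (number 2 is a placeholder), create one
# partition spanning it, and format it with ReFS.
Get-Disk -Number 2 |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem ReFS -NewFileSystemLabel 'BackupTarget'
```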

SAN Storage

You can also use ReFS with a SAN over FC or iSCSI; normally, those are always configured with some form of storage redundancy. You can consume the ReFS SAN storage on standalone, member, or clustered servers for high availability, as long as you use that storage for supported use cases. For example, it is and remains unsupported to put knowledge worker data on SOFS shares, no matter what the underlying storage for the ReFS or NTFS volumes is. For backups, this can be leveraged to build some very capable solutions.
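For illustration, here is a sketch of consuming a SAN LUN over iSCSI with the in-box iSCSI module; the portal address and IQN are placeholders. Once the LUN surfaces as a disk, you initialize and format it with ReFS exactly as shown for basic disks above:

```powershell
# Register the SAN's iSCSI portal and connect persistently to the
# target. Portal address and IQN are placeholders.
New-IscsiTargetPortal -TargetPortalAddress '10.10.10.50'
Connect-IscsiTarget -NodeAddress 'iqn.2002-03.com.example:backup-lun01' -IsPersistent $true

# The LUN now shows up as a disk with BusType iSCSI.
Get-Disk | Where-Object { $_.BusType -eq 'iSCSI' }
```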

What were the concerns that made ReFS support so limited at a given point in time?

Well, one of them was confusion and concern around how data gets flushed and persisted with non-Storage Spaces and simple disks. That is a valid concern, but one you have with any file system, so any storage array or controller needs to handle it well. As it turns out, any decent piece of storage hardware or controller that is on the Microsoft Hardware Compatibility List and is certified does its job well enough to guarantee this happens correctly. So, any certified OEM SAN, from entry-level ones to high-end enterprise-grade gear, is supported. Just like any good (certified) RAID controller. Those are backed with battery-backed caches that can survive downtime for days to many weeks. You just pick the one that fits your needs, use case, and budget from the options you have. That can be S2D, a SAN, a RAID controller, or even basic directly attached disks.

My take on things

Why do I like the new supported options? Well, because I have been testing them for backup targets, both highly available ones and non-highly available ones. I can have the benefits of ReFS that backup software can leverage (Veeam Backup & Replication 9.5, for example) and get better performance and data protection with more types of storage than S2D. I like to have options and choices when designing a solution.

It is important to note one thing: when you do not use ReFS in combination with Storage Spaces (S2D, shared Storage Spaces, or “standalone” Storage Spaces) configured with some form of data redundancy (2-way or 3-way mirror, parity, mirror-accelerated parity), you will not have the built-in capability to repair data corruption that can occur while data sits on disk (bit rot) by leveraging the redundant copies in Storage Spaces. That capability only comes when ReFS is combined with redundant Storage Spaces, not with simple Storage Spaces or any other storage array, redundant or not. This combination is one of ReFS’s selling points.
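As a small sketch, integrity streams can be inspected and toggled per file or folder on any ReFS volume with the in-box Storage module; the paths below are placeholders. Keep in mind that detection works everywhere, but automatic repair only happens on redundant Storage Spaces:

```powershell
# Check whether integrity streams are enabled for a file (placeholder path).
Get-FileIntegrity -FileName 'D:\Backups\job01.vbk'

# Enable integrity streams for everything created under a folder.
Set-FileIntegrity -FileName 'D:\Backups' -Enable $true
```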

Other than that, the above ReFS storage deployment options let you leverage the benefits ReFS has to offer, and yes, for some use cases, that will be preferred over NTFS. But don’t think NTFS should now only be used for the OS and such. That’s not the case. It is and remains very much the dominant file system for Windows. It’s just that now we get to leverage the goodness of ReFS for suitable scenarios with a lot more storage deployment options. There are reasons the supported scenarios are what they are. For example, if you are going to do Hyper-V with a SAN, the supported file system is NTFS, not ReFS. Mind you, ReFS works, but it’s not supported.

I have tested this, and while it works, one of the concerns is the redirected IO traffic it incurs. With S2D, the network fabric to deal with this is there by design: SMB Direct (RDMA) over 10Gbps or better. With a SAN, that’s not necessarily so, and as a result, the network leveraged by CSV traffic might take a beating. The network traffic behavioral patterns with ReFS on SAN-based CSVs are also different from what you are used to with NTFS when it comes to owner and non-owner nodes. While I can make things work, I must weigh the benefits against the risk of being unsupported. On a good SAN with ODX support, that’s not worth the risk. Might this ever change? Maybe, but for now, that’s it.
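If you want to see where that redirected IO shows up, the cluster exposes the CSV IO mode directly; a minimal sketch using the in-box FailoverClusters module, run on a cluster node:

```powershell
# Show per-node CSV state: Direct vs FileSystemRedirected access,
# and the reason for any redirection.
Get-ClusterSharedVolumeState |
    Select-Object Name, Node, StateInfo, FileSystemRedirectedIOReason |
    Format-Table -AutoSize
```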

That said, when I design my ReFS LUNs and fabric well with a SAN and use them for a supported use case like backup targets, I am supported, and I get to leverage the benefits of ReFS, as it fits the use case very well (DPM, Veeam).

A side note on mirror-accelerated parity

Mirror-accelerated parity is only supported with S2D. In regard to backup and archive targets, that is the one thing I want to keep testing (see Hyper-V Amigos Showcast Episode 12 – ReFS and Backup) and keep asking Microsoft to support, at least on non-shared Storage Spaces. I know shared Storage Spaces is being deprecated, no worries. That would make for some great, budget-friendly archival and backup targets, because you get bit rot protection from the combination of ReFS with redundant Storage Spaces. I even have some ideas on how to add tuning capabilities to the mirror/parity movement of data based on data age, etc. I can dream, right?
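For reference, a mirror-accelerated parity volume on S2D is created by specifying both tiers; a hedged sketch assuming the default Performance and Capacity tier names, with placeholder pool name and sizes:

```powershell
# Writes land on the mirror (Performance) tier; colder data rotates
# to the parity (Capacity) tier. Names and sizes are placeholders.
New-Volume -StoragePoolFriendlyName 'S2D on Cluster01' `
           -FriendlyName 'Archive01' `
           -FileSystem CSVFS_ReFS `
           -StorageTierFriendlyNames Performance, Capacity `
           -StorageTierSizes 200GB, 1800GB
```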

Conclusion

To all the naysayers, the ones who bashed me when I discussed the options for and the potential of ReFSv3 outside of S2D: take note, this is where we are today.

And I like it. I like the options ReFSv3 offers with a variety of storage solutions to design and implement backup targets for many different needs and budgets. I am convinced that one-size-fits-all solutions are an illusion. Even at economies of scale and with commodity materials, understanding the context in which to design and implement a solution matters, as it allows you to choose the proper methods for the given needs when you genuinely understand the challenge.

If you need help with this, there are quite a number of highly skilled, experienced people with the right mindset to help you maximize your ROI and minimize your TCO in an effective and efficient way. Many of these are MVPs who have their own business or work for IT firms where customers are not milked like cattle but really do get high-value services. Just reach out.

Hyper-V Amigos Showcast 15 – VMFleet explained

After a short break in broadcasts, due to extremely busy times, the Hyper-V Amigos ride again! Yes, fellow Microsoft MVP Carsten Rachfahl (@hypervserver) from Rachfahl IT Solutions and yours truly are back in the saddle!

In episode 15 of the Hyper-V Amigos Showcast, we look at VMFleet. VMFleet is a free Microsoft test suite to showcase, test, and help analyze the performance of a hyper-converged Storage Spaces Direct setup.

In this episode, we show you how to set VMFleet up, how it works, and how to measure and evaluate your S2D deployment with it. We also share some real-life tips with you. All this in just around two hours, so stick around for the long haul!
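For those who want a taste before watching, here is a hedged sketch of what a typical VMFleet run looks like. It assumes the VMFleet scripts from Microsoft’s diskspd repository are already staged on the cluster and a gold VHDX has been prepared; all names, counts, and credentials below are placeholders:

```powershell
# Create a fleet of test VMs from a gold image (paths, counts and
# credentials are placeholders), then start them.
.\create-vmfleet.ps1 -BaseVHD 'C:\ClusterStorage\Collect\gold.vhdx' `
                     -VMs 20 -AdminPass 'P@ssw0rd' `
                     -ConnectUser 'fleetuser' -ConnectPass 'P@ssw0rd'
.\start-vmfleet.ps1

# Sweep an IO pattern with diskspd inside the fleet VMs:
# 4KB blocks, 8 threads, 8 outstanding IOs, 100% read, 60 seconds.
.\start-sweep.ps1 -b 4 -t 8 -o 8 -w 0 -d 60

# Watch live cluster-wide IOPS and latency while the sweep runs.
.\watch-cluster.ps1
```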

DELL EMC Ready Nodes and Storage Spaces Direct

Introduction

Unless you have been living under a rock, you must have heard about Storage Spaces Direct (S2D) in Windows Server 2016, which went RTM in Q4 2016.

There is a lot of enthusiasm for S2D, and we have seen, heard of, and assisted in early adopter situations. But that is a bit of pioneering with OEM/Microsoft-approved components. So now, bring in the DELL EMC Ready Nodes for Storage Spaces Direct.

DELL EMC Storage Spaces Direct Ready Nodes

So enter the DELL EMC Ready Nodes. These will be available this summer and should help less adventurous but interested customers get on the S2D bandwagon. They were announced at DELL EMC World 2017, and on May 30th, some information was published on the TechCenter.

It offers a fully OEM-supported S2D solution to customers that cannot or will not carry the engineering effort associated with a self-built solution.

I was sort of hoping these would leverage the PowerEdge R740xd from the start, but they seem to have opted to begin with the R730xd. I’m pretty sure the R740xd will follow soon, as it is a perfect fit for the use case with its 25Gbps support. In that respect, I expect a refresh of the switches offered as well, as the S4048 is a great switch but keeps us at 10Gbps. If I were calling the shots, I would have that ready and done sooner rather than later, as the 25/50/100Gbps network era is upon us. There is a reason I have been asking for 25Gbps-capable switches with smaller port counts for the SME market.

Maybe this is an indication of where they think this offering will sell best. But I would be considering future deployments when evaluating network gear purchases, as these have a long service life. And when S2D proves itself, I am sure the size of the deployments will grow and, with it, the need for more bandwidth. Mind you, 10Gbps is not bad, even if, for Hyper-V nodes, I would be going with 2 * dual-port Mellanox ConnectX-3 Pro cards.

Having mentioned them, I am very happy to see the Mellanox RoCE cards in there. That is the best choice they could have made. The 1Gbps onboard NICs are Intel, which matches my preference. The game is afoot!