New SMB Instances in Windows Server 2016

Introduction

I’m pretty sure you all know or remember that in Windows Server 2012 R2 one of the improvements we got for SMB 3 was the split into a default and a CSV instance. You can take a peek here at my slide deck from the presentation I gave at the Microsoft Technical Summit in Berlin in 2014: Failover Clustering - What’s new in Windows Server 2012 R2.


You can also read more on TechNet here.

We have New SMB Instances in Windows Server 2016

I’m happy to see this concept being expanded to the new SMB workloads. When playing with Windows Server 2016 TPv5 you’ll notice that we now have extra SMB instances:

[Screenshot: the extra SMB instances in Windows Server 2016 TPv5]

So what are these New SMB Instances in Windows Server 2016 about?

Default: This is what we had before. It handles nonspecific, default SMB traffic such as file shares.

CSV: Also known since Windows Server 2012 R2. This is the CSV traffic, which was isolated for better resiliency and isolation of issues. The goal was not to let CSV SMB issues affect the default SMB traffic.

SR: This stands for Storage Replica. The purpose of this instance is the same as for CSV: isolate the traffic and make SMB as a whole more resilient.

SBL: This stands for “Storage Bus Layer”, which is related to Storage Spaces Direct replication traffic. The Storage Bus Layer is now called the Software Storage Bus (SSB). Again, it is isolated in its own instance for better resiliency and isolation of issues. You can find more on the Software Storage Bus in Storage Spaces Direct and its use of SMB here: Storage Spaces Direct – Under the hood with the Software Storage Bus.

[Diagram: the Software Storage Bus in Storage Spaces Direct. Picture by Microsoft®]

“SSB uses SMB3 and SMB Direct as the transport for communication between the servers in the cluster. SSB uses a separate named instance of SMB in each server, which separates it from other consumers of SMB, such as CSVFS, to provide additional resiliency. Using SMB3 enables SSB to take advantage of the innovation we have done in SMB3, including SMB Multichannel and SMB Direct. SMB Multichannel can aggregate bandwidth across multiple network interfaces for higher throughput and provide resiliency to a failed network interface. SMB Direct enables use of RDMA enabled network adapters, including iWARP and RoCE, which can dramatically lower the CPU overhead of doing IO over the network and reduce the latency to disk devices”
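If you want to poke at these instances yourself, the SmbShare cmdlets gained an -SmbInstance parameter in Windows Server 2016 for exactly this purpose, if memory serves with values like Default, CSV, SBL and SR. A quick sketch, to be verified against your own build:

# NOTE: the -SmbInstance parameter and its values (Default, CSV, SBL, SR) are how I
# recall the Windows Server 2016 SmbShare cmdlets exposing these instances; check
# Get-Help Get-SmbMultichannelConnection -Parameter SmbInstance on your build.

# Multichannel connections used by the CSV instance
Get-SmbMultichannelConnection -SmbInstance CSV

# Sessions on the SBL (Software Storage Bus) instance of a Storage Spaces Direct node
Get-SmbSession -SmbInstance SBL | Format-Table ClientComputerName, NumOpens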

Funny anecdote: while doing some research I stumbled upon the MSDN article for the MSFT_SmbShare class. As you can see in the screenshot below, the name was mistakenly written as “Storage Bus Later”, but it should be Storage Bus Layer, or as it’s now called, the Software Storage Bus:

[Screenshot: the MSFT_SmbShare class article on MSDN showing “Storage Bus Later”]

That will no doubt be fixed soon, or by the time you read this blog post. All fun aside, if you want to see what the Software Storage Bus is capable of, look at the blog post and video by Ned Pyle on what Claus Joergenson achieved with this technology back in 2015. Amazing results!

Simplified SMB Multichannel and Multi-NIC Cluster Networks

One of the seemingly small feature enhancements in Windows Server 2016 failover clustering is simplified SMB multichannel and multi-NIC cluster networks. In Windows Server 2016, failover clustering now recognizes and uses multiple NICs on the same subnet for cluster networking (cluster and client access).


Why was this introduced?

The growth in the capabilities of the hardware (compute, memory, storage and networking) meant that failover clustering had to be able to leverage those capabilities more easily and for more use cases than before. That certainly goes for SMB, which is now used not “only” for CSV and live migration but also for Storage Spaces Direct and Storage Replica.

  • It gives us better utilization of the network capabilities and throughput with Storage Spaces Direct, CSV, SQL, Storage Replica, etc.
  • Failover clustering now works with SMB Multichannel like any other workload, without the extra requirement of multiple subnets. This is more important than it may seem at first. In many environments getting another VLAN and/or an extra subnet is a hurdle. Well, that hurdle is gone (there’s a quick way to check this sketched right after this list).
  • For IPv6 link-local subnets it just works; these are automatically configured as cluster-only networks.
  • The cluster validation wizard won’t nag about it anymore and knows it’s a valid failover cluster configuration.
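Here’s a minimal sketch of how you could check this on a cluster node. It assumes the FailoverClusters and SmbShare PowerShell modules on Windows Server 2016; the exact output columns may differ per build.

# Cluster networks and the NICs behind them: with simplified SMB multichannel,
# multiple NICs in the same subnet now live in a single cluster network.
Get-ClusterNetwork | Format-Table Name, Role, Address, AddressMask
Get-ClusterNetworkInterface | Format-Table Node, Name, Network

# Check that SMB sees the interfaces (RSS/RDMA capable) and that
# multichannel actually spreads connections across them.
Get-SmbServerNetworkInterface
Get-SmbMultichannelConnection

If the NICs are RDMA capable you’ll see SMB Direct kick in as well, which is exactly what the demo below shows.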

See it in action!

You can find a quick demo of simplified SMB multichannel and multi-NIC cluster networks on my Vimeo channel here.


In this video I demo two features. One is new: virtual machine compute resiliency. The other is an improved feature: simplified SMB multichannel and multi-NIC cluster networks. The multichannel demo is the first part of the video. Yes, it’s with RDMA over RoCEv2; you know I just have to do SMB Direct when I can!

You can read more about simplified SMB multichannel and multi-NIC cluster networks on TechNet here. Happy reading!

Get-VMHostSupportedVersion

I wrote about this little gem of a PowerShell cmdlet, Get-VMHostSupportedVersion, before here (there’s a bit more info on the impact of a VM configuration version in that blog post). Now at TPv5 I took a new peek, and what do we find?

[Screenshot: Get-VMHostSupportedVersion output on TPv5]

We now have virtual machine configuration version 7.1 at TPv5. We also got two new version IDs: 254.0 for Prerelease and 255.0 for Experimental. Clearly Microsoft has plans here. I’ll update this blog with a link to the documentation when I find it.
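For reference, here’s a quick sketch of how you can check and act on this yourself. The VM name is just a hypothetical example.

# List the VM configuration versions this host supports, and the default one
Get-VMHostSupportedVersion
Get-VMHostSupportedVersion -Default

# Check which configuration version an existing VM is at
Get-VM -Name DidierTest01 | Format-Table Name, Version

# Upgrade a VM to the host's current configuration version (a one-way operation,
# so only do this when you no longer need to move the VM back to older hosts)
Update-VMVersion -Name DidierTest01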

All bets are open as to where we’ll land at RTM for the virtual machine configuration version. I’m guessing that we’re feature complete at Technical Preview 5, but version numbers can get funky. Will all TP versions be supported at RTM? Normally upgrades from beta or preview versions are not supported, but on the other hand some people in early adopter programs are already working with it, so I’m guessing they will. We’ll see, but that’s where I put my money.

NUMA Spanning and Virtual NUMA in Hyper-V

When it comes to NUMA spanning and virtual NUMA in Hyper-V, or really anything NUMA related in Hyper-V virtualization, this is a subject that too many people don’t know enough about. And those who do know it could often be helped by some more in-depth information and examples.


Some run everything on the defaults and never learn more until they read or find out they need to dive in deeper for certain needs or use cases. I wanted to help out with some of the confusion and questions people struggle with in regards to virtual NUMA, NUMA topology, NUMA spanning and their relation to static and dynamic memory.

As I don’t have the time to answer all the questions I get on this subject, I have written an article about it. I’ve published it as a community effort on the StarWind Software blog and you can find it here: A closer look at NUMA Spanning and virtual NUMA settings.
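To give you an idea of where to start looking, here’s a small sketch of how you can inspect NUMA spanning on a host and the virtual NUMA topology a VM gets. The VM name is a hypothetical example.

# Is NUMA spanning enabled on this Hyper-V host? (It is by default.)
Get-VMHost | Format-Table ComputerName, NumaSpanningEnabled

# The physical NUMA topology of the host
Get-VMHostNumaNode

# The virtual NUMA topology presented to a VM. Remember that enabling dynamic
# memory disables virtual NUMA: the guest then sees a single virtual NUMA node.
Get-VMProcessor -VMName DidierTest01 |
    Format-Table MaximumCountPerNumaNode, MaximumCountPerNumaSocket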

I think it complements the information on this subject on TechNet well, and it also touches on the Windows Server 2016 aspects of this story. I hope you enjoy it!