Most of you might remember this slide by Jose Barreto on SMB Multichannel Auto Configuration in one of his many presentations:
- Auto configuration looks at NIC type/speed => Same NICs are used for RDMA/Multichannel (doesn’t mix 10Gbps/1Gbps, RDMA/non-RDMA)
- Let the algorithms work before you decide to intervene
- Choose adapters wisely for their function
You can fine-tune things if and when needed (only do so when it is really necessary), but let’s look at this feature in action in real life. For this test we have 2 * X520 DA 10Gbps ports using 10.10.180.8X/24 IP addresses and 2 * Mellanox 10Gbps RDMA adapters with 10.10.180.9X/24 IP addresses. There is no teaming involved, just multiple NIC ports. Do note that these IP addresses are on a different subnet than the LAN of the servers. Basically only the servers can communicate over them; they don’t have a gateway or DNS servers and as such are not registered in DNS either (life is easy for simple file sharing).
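Before kicking off any copies you can check what Multichannel has to choose from. The snippet below is just a quick inventory sketch; nothing in it is specific to this rig, the aliases and capabilities are whatever your adapters report:
# List the physical NICs and their link speed
Get-NetAdapter -Physical
# Show which interfaces the SMB client considers RSS and/or RDMA capable
Get-SmbClientNetworkInterface
# Confirm RDMA is actually enabled on the Mellanox ports
Get-NetAdapterRdma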
Let’s try to copy a 50GB fixed VHDX file from server1 to server2 using the DNS name of the target host (greyed out in the screenshots), meaning it will resolve to that host via DNS and use the LAN IP address 10.10.100.92/16. In the screenshot below you see that the two RDMA capable cards are put into action; the servers are not using the 1Gbps LAN connection. Multichannel looked at the options:
- A 1Gbps RSS capable link
- Two 10Gbps RSS capable links
- Two 10Gbps RDMA capable links
Multichannel concluded the RDMA cards are the best ones available, and as we have two of them it uses both. In other words, it works just as described.
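You can double-check that selection from PowerShell while the copy is running. A minimal sketch, assuming the target is SERVER2 as in this test (run it on the client, Server1):
# List the connections Multichannel actually established to SERVER2;
# the selected interfaces should be the two RDMA capable Mellanox ports
Get-SmbMultichannelConnection | Where-Object { $_.ServerName -eq "SERVER2" }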
Even if we try to bypass DNS and copy the files explicitly via the IP address (10.10.180.84) assigned to one of the Intel X520 DA cards, Multichannel detects that it has two better, RDMA capable cards available and, as you can see, it uses the same NICs as in the demo before. Nifty, isn’t it?
If you want to see the other NICs in action, we can disable the Mellanox cards and then Multichannel will choose the two X520 DA cards. That’s fine for testing, but in real life you need a better solution when you have to define manually which NICs can be used. This is done using PowerShell (take a look at Jose Barreto’s blog post The basics of SMB PowerShell, a feature of Windows Server 2012 and SMB 3.0 for more info).
New-SmbMultichannelConstraint -ServerName SERVER2 -InterfaceAlias "SLOT 6 Port 1", "SLOT 6 Port 2"
This tells a server that it can only use these two NICs, which in this example are the two Intel X520 DA 10Gbps cards, to access Server2. So basically you tell the client what to use for SMB 3.0 traffic to a certain server. Note the difference in send/receive traffic between RDMA and native 10Gbps.
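As a side note, the same cmdlet also accepts interface indexes instead of aliases, which can be handy when the friendly names are not stable. The indexes below are placeholders; look yours up with Get-SmbClientNetworkInterface first:
# Pin SMB 3.0 traffic to SERVER2 to two specific interfaces by index (placeholder values)
New-SmbMultichannelConstraint -ServerName SERVER2 -InterfaceIndex 12, 15
# The live connections should now only show the constrained NICs
Get-SmbMultichannelConnection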
On Server1, the client, you see this:
On Server2, the server, you see this:
Which is indeed the constraint we set up, as we can verify with:
Get-SmbMultichannelConstraint
We’re done playing so let’s clean up all the constraints:
Get-SmbMultichannelConstraint | Remove-SmbMultichannelConstraint
Seeing this technology, it’s now up to the storage industry to provide the needed capacity and IOPS in a lot more affordable way. Storage Spaces have knocked on your door; that was the wake-up call. In an environment where we throw lots of data around, we just love SMB 3.0.
Is the first picture with RDMA and the second without RDMA? I’m trying to find real-life throughput measurements “in the field” about the performance difference with and without RDMA… The way I (probably mis-)read your post, without RDMA outperforms with RDMA? Also, those figures of around 700 MB/s are with 2 RDMA cards? I would expect to already get that kind of performance with one 10GbE…
The first two are with RDMA, the third is without. The Send/Receive metrics give that away. The blog post focuses on the autoconfiguration and NIC selection part of Multichannel/RDMA, not on throughput. One large file copy maxes out at about 7Gbps with one card, 3.3/3.5 with two. But again, this is not a native 10Gbps/RDMA comparison. It also shows the offload to Ethernet in action. RDMA is about maximum throughput with low latency and low CPU usage on the host. That will begin to shine once the load really increases.
I understand the purpose of the post, but seeing those pictures got me wondering… Do you happen to know an article that is all about a native/RDMA comparison?
Here’s a good start http://blogs.technet.com/b/josebda/archive/2012/05/11/windows-server-2012-beta-with-smb-3-0-demo-at-interop-shows-smb-direct-at-5-8-gbytes-sec-over-mellanox-connectx-3-network-adapters.aspx
What switch did you use? Can I use an unmanaged switch, like a TP-Link TL-SG1024D?
With SMB Direct you can use an unmanaged 10Gbps switch (there are a few), but only for iWarp, as DCB is a hard requirement for RoCE (and I recommend it for iWarp too); a minimal host-side sketch of that DCB configuration follows below. Most 10Gbps switches are managed and a lot of them support DCB.
If you only want SMB Multichannel, any switch will do.
I mainly use Force10, PowerConnect (as in this blog post), Mellanox or Cisco.
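For the RoCE case, the host side of that DCB requirement boils down to a few cmdlets. This is only a minimal sketch with assumed values (priority 3 for SMB Direct, a 50% ETS share, placeholder adapter aliases), and the switch needs a matching PFC/ETS configuration:
# Install the DCB feature and tag SMB Direct (port 445) traffic with priority 3
Install-WindowsFeature Data-Center-Bridging
New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3
# Enable priority flow control for that priority and give it a bandwidth reservation
Enable-NetQosFlowControl -Priority 3
New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS
# Turn on DCB on the RDMA ports (aliases are placeholders)
Enable-NetAdapterQos -Name "SLOT 7 Port 1", "SLOT 7 Port 2"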