SMB Direct RoCE Does Not Work Without DCB/PFC

Introduction

SMB Direct RoCE does not work without DCB/PFC. “Yes”, you say, “we know, this is well documented. Thank you.” But before you sign off, hear me out.

Recently I plugged two RoCE cards into some test servers and linked them to a couple of 10Gbps switches. I did some quick large file copy testing and to my big surprise RDMA kicked in with stellar performance even before I had installed the DCB feature, let alone configured it. So what’s the deal here? Does it work without DCB? Does the card fall back to iWARP? Highly unlikely. I was expecting RDMA to fall back to plain vanilla 10Gbps and not be used at all, but it was being used. A quick shout-out to Jose Barreto to discuss this helped clarify things.
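
By the way, a quick way to check whether RDMA is actually being used is with the in-box PowerShell cmdlets on Windows Server 2012. A minimal sketch, nothing more (run the last one while a large copy is in flight):

    # Does the NIC report RDMA (Network Direct) capability and is it enabled?
    Get-NetAdapterRdma

    # Which interfaces does the SMB client consider RDMA capable?
    Get-SmbClientNetworkInterface

    # While a big file copy is running: are the SMB Multichannel connections
    # RDMA capable on both the client and the server side?
    Get-SmbMultichannelConnection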

DCB/PFC is a requirement for RoCE

Without DCB/PFC, the busier the network gets, the faster performance will drop. Now in our test scenario we had two servers, for a total of 4 RoCE ports, on a network consisting of a couple of beefy 48-port 10Gbps switches. So we didn’t see the negative effects of this here.

DCB (Data Center Bridging) and Priority Flow Control are considered a requirement for any kind of RoCE deployment. RDMA with RoCE operates at the Ethernet layer. That means there is no overhead from TCP/IP, which is great for performance. This is the reason you want to use RDMA in the first place. It also means it’s left on its own to deal with Ethernet-level collisions and errors. For that it needs DCB/PFC, otherwise you’ll run into performance issues due to a ton of retries at the higher network layers.

The reason that iWARP doesn’t require DCB/PFC is that it works at the TCP/IP level, also offloaded by using a TCP/IP stack on the NIC instead of in the OS. So errors are handled by TCP/IP, at a cost: iWARP delivers the same benefits as RoCE but it doesn’t scale as well. Not that iWARP performance is lousy, far from it! Mind you, for bandwidth management reasons, you’d be better off using DCB or some form of QoS as well.

Conclusion

So no, not configuring DCB on your servers and the switches isn’t an option, but apparently it isn’t blocked either, so beware of this. It might appear to be working fine but it’s a bad idea. Also, don’t think it defaults back to iWARP mode; it doesn’t, as one card does one thing, not both. There is no shortcut. RoCE RDMA does not work error-free out of the box, so you do have to install the DCB feature and configure it together with the switches.
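
To give you an idea of what the Windows side of that configuration looks like, here is a minimal sketch for SMB Direct over RoCE. The priority value, bandwidth percentage and NIC name are just example values, and the switch ports need a matching DCB/PFC configuration:

    # Install the DCB feature.
    Install-WindowsFeature Data-Center-Bridging

    # Tag SMB Direct traffic (Network Direct on port 445) with 802.1p priority 3.
    New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3

    # Enable Priority Flow Control for priority 3 only.
    Enable-NetQosFlowControl -Priority 3
    Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7

    # Reserve a share of bandwidth for that traffic class (ETS); 50% is illustrative.
    New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS

    # Apply DCB/QoS on the RoCE-capable NIC (the adapter name is an example).
    Enable-NetAdapterQos -Name "RoCE-NIC-1"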

Using RAMDisk To Test Windows Server 2012 Network Performance

I’m testing & playing with different Windows Server 2012 & Hyper-V networking scenarios with 10Gbps, Multichannel, RDMA, converged networking etc. Partially this is to find out what works best for us in regard to speed, reliability, complexity, supportability and cost.

Basically you have four basic resources in IT around which the eternal struggle for the perfect balance takes place. These are:

  • CPU
  • Memory
  • Networking
  • Storage

We need both the correct balance in capabilities, capacities and speed for these in a well designed system. For many years now, but especially the last 2 years, it’s very safe to say that, while the sky is the limit, it’s become ever easier and cheaper to get what we need when it comes to CPU and memory. These have become very powerful, fast and affordable relative to the entire cost of a solution.

Networking in the 10Gbps era is also showing its potential in quantity (bandwidth), speed (latency) and cost (well, it’s getting there) without reducing the CPU or memory to trash, thanks to a bunch of modern offload technologies. And basically, in this case, it’s these qualities we want to put to the test.

The most troublesome resource has been storage, and it has been for quite a while. While SSDs do wonders for many applications, the balance between speed, capacity & cost isn’t as sweet as it is for our other resources.

In some environments where I’m active they have a need for both capacity and IOPS, and as such they are in luck, as next to caching, a lot of spindles still equate to more IOPS. For testing the boundaries of one resource, one needs to make sure none of the others hit theirs. That’s not easy, as for performance testing you can’t always have a truckload of spindles on a modern high-speed SAN available.

RAMDisk to ease the IOPS bottleneck

To see how well the 10Gbps cards with and without Teaming, Multichannel and RDMA are behaving, and what these configurations are capable of, I wanted to take as much of the disk IOPS bottleneck out of the equation as possible. Apart from buying a Violin system capable of doing +1 million IOPS, which isn’t going to happen for some lab work, you can perhaps get the best possible IOPS by combining some local SSDs and a RAMDisk. A RAMDisk is spare memory used as a virtual disk. It’s very fast and cost effective per IOPS. But capacity-wise it’s not the world’s best, let alone most cost effective, solution.


I’m using free RAMDisk software provided by StarWind. I chose this as they allow for large-sized RAMDisks. I’m using 54GB ones right now to speed test copying fixed-size VHDX files. It installs flawlessly on Windows Server 2012 and it hasn’t caused me any issues. Throw in some SSDs on the servers for where you need persistence and you’re in business for some very nice lab work.
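
If you want to reproduce this kind of test, a simple sketch is to create a fixed-size VHDX on the RAMDisk and time a copy over SMB. The drive letter, size and UNC path below are made-up example values, and New-VHD requires the Hyper-V PowerShell module:

    # Create a fixed-size VHDX on the RAMDisk (R: and 20GB are example values).
    New-VHD -Path R:\copytest.vhdx -SizeBytes 20GB -Fixed

    # Time a copy to an SMB share on the other test server (example UNC path).
    Measure-Command {
        Copy-Item -Path R:\copytest.vhdx -Destination \\server2\ramdisk\copytest.vhdx
    }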


You also need to be aware it doesn’t persist data when you reboot the system or lose power. This is not an issue if all we are doing is speed testing, as we don’t care. Otherwise you’ll need to find a workaround, and realize that those “flush the data to persistent storage” options aren’t foolproof or super fast; the SSDs do help here.

You have to register, but the good news is that they don’t spam you to death at all, which I find cool. As said, the tool is free, works with Windows Server 2012 and allows for larger RAMDisks where other free ones are often way too limited in size.

It has allowed me to do some really nice testing. Perhaps you want to check this out as well. WARNING: The below picture is a lab setup … I’m not a magician and it’s not the kind of IOPS I get all over the datacenters from 4 cheapo SATA disks I touched with my special magic pixie dust.

image

With #WinServ 2012 storage costs/performance/capacity are the only thing limiting you  http://twitter.yfrog.com/mnuo9fp #SMB3.0 #Multichannel

Some quick tests with a 52GB NTFS RAMDisk formatted with a 64K NTFS Allocation unit size.

image image
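
For completeness, formatting a volume with a 64K NTFS allocation unit size can be scripted with the in-box storage cmdlets as well. A sketch only: the disk number and drive letter are example values, and initializing a disk wipes whatever is on it, so double-check with Get-Disk first.

    # List the disks and identify the RAMDisk before touching anything.
    Get-Disk

    # Initialize, partition and format the RAMDisk (disk 2 and R: are examples).
    Initialize-Disk -Number 2 -PartitionStyle GPT
    New-Partition -DiskNumber 2 -UseMaximumSize -DriveLetter R
    Format-Volume -DriveLetter R -FileSystem NTFS -AllocationUnitSize 65536 -NewFileSystemLabel "RAMDisk"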

I also tested with another free tool, SoftPerfect® RAM Disk FREE. It performs well but I don’t get to see the RAMDisk in the Windows Disk Management GUI, at least not on Windows Server 2012. I have not tested with W2K8R2.

NTFS Allocation unit size 4K | NTFS Allocation unit size 64K
image | image

Best of MMS 2013 – SCUG Belgium

Earlier this month I attended MMS 2013 in Las Vegas. Today the Belgian System Center Community lets us know about a live event, “Best of MMS”, they organize in order to share in-depth System Center knowledge/presentations along with their impressions.

No one less than Wally Mead, the Senior Program Manager for System Center Configuration Manager who’s perhaps better known as The Godfather of Configuration Manager, will be joining the event. Wally is presenting twice alongside the Belgian SCUG members, many of whom belong to the Microsoft Extended Experts Team (MEET) and/or are MVPs in Enterprise Client Management, Cloud & Datacenter or Virtual Machine.

Grab a seat for “Best of MMS 2013” right here on Eventbrite


You can find the (non-final) agenda on the SCUGBE website http://scug.be/events/2013/04/27/best-of-mms-19062013/. As you can see, I have an early morning session at 09:15 – 10:15 on “Availability Strategies for a Resilient Private Cloud”. This provides the foundation for the highly to continuously available private cloud my fellow speakers will be presenting on.

There will be opportunities to network, talk shop, learn and last but not least to win a TechEd Europe 2013 ticket in a lottery!

MVP Carsten Rachfahl Visits & Interviews Me On Networking & Storage in Windows Server 2012

Last month Carsten (MVP – Virtual Machine) & Kerstin Rachfahl (MVP – Office 365) visited me in my hometown. Apart from a short visit to the historic center & a sushi dinner amongst friends, we also did an interview where we discussed our ongoing Windows Server 2012 Hyper-V activities. We’re trying to leverage as much of the product as we can to get the best TCO & ROI, and as early adopters we’ve been reaping the benefits from the day the RTM bits were available to us. So far that has been delivering great results. Funny to hear me mention the Fast Track designs, as a week later we saw version 3 of those at MMS 2013. The most interesting thing to me about those was the fact that the small & medium sizes focus on Cluster in a Box and Storage Spaces!

While we were having fun talking about the above, we also enjoyed some of the most beautiful landmarks of the City of Ghent as a backdrop for the interview. It was filmed in a meeting room at AGIV, to whom I provide infrastructure services with a great team of colleagues. Just click the picture to view the video.

Videointerview_with_Didier_Van_Hoye_Storage_Networking_and_other_Stuff-Thumb2

You can also enjoy the video on Carsten’s blog: http://www.hyper-v-server.de/videos/interview-mit-didier-van-hoye-ber-seinen-storage-netwerk-und-mehr/. All I need to do now is arrange for Carsten to physically touch the Compellent storage, I think.