TechEd 2013 Revelations for Storage Vendors as the Future of Storage lies With Windows 2012 R2

Imagine you’re a storage vendor until a few years ago. Raking in the big money with profit margins unseen by any other hardware in the past decade and living it up along the Las Vegas Boulevard like there is no tomorrow. To describe those days only a continuous “WEEEEEEEEEEEEEE” will suffice.


Trying to make it through the economic recession with fewer Ferraris has been tough enough. Then in August 2012 Windows Server 2012 RTMs and introduces Storage Spaces, SMB 3.0 and Hyper-V Replica. You dismiss those as toy solutions, while demos of a few hundred thousand to over a million IOPS on the cheap, with a couple of Windows boxes and some alternative storage configurations, pop up left and right. Not even a year later Windows Server 2012 R2 is unveiled and guess what? The quote below is what your future dreams as a storage vendor could start to look like more and more every day, while an ice cold voice sends shivers down your spine.


“And I looked, and behold a pale horse: and his name that sat on him was Death, and Hell followed with him.”

OK, the theatrics above got your attention, I hope. If Microsoft keeps up this pace, traditional OEM storage vendors will need to improve their value offerings. My advice to all OEMs is to embrace SMB 3.0 & Storage Spaces. If you’re not going to help deliver it to your customers, someone else will. Sure, it might eat at the profit margins of some of your current offerings. But perhaps those are too expensive for what they need to deliver, and people buy them because there are no alternatives. Or perhaps they just don’t buy anything, as the economics are out of whack. Well, alternatives have arrived, and more than that: this also paves the path for projects that were previously economically unfeasible. So that’s a whole new market to explore. Will the OEM vendors act & do what’s right? I hope so. They have the distribution & support channels already in place. It’s not a threat, it’s an opportunity! Change is upon us.

What do we have in front of us today?

  • Read cache? We’ve got it, it’s called CSV Cache.
  • Write cache? We’ve got it: shared SSDs in Storage Spaces.
  • Storage tiering? We’ve got it in Storage Spaces.
  • Extremely good data protection, even against bit rot, with on-the-fly repairs of corrupt data without missing a beat. Let me introduce you to ReFS in combination with Storage Spaces, now available for clustering & CSVs.
  • Affordable storage, both in capacity and performance … again, meet Storage Spaces.
  • UNMAP down to the storage level. Storage Spaces already has this in Windows Server 2012.
  • Controllers? Are there still SAN vendors not using SAS for storage connectivity between disk bays and controllers?
  • Host connectivity? RDMA baby: iWarp, RoCE, InfiniBand. That PCIe 3 slot better move on to PCIe 4 if it doesn’t want to melt under the IOPS …
  • Storage fabric? Hello 10Gbps (and better) at a fraction of the cost of ridiculously expensive Fibre Channel switches and with amazingly better performance.
  • Easy to provision and manage storage? SMB 3.0 shares.
  • Scale up & scale out? SMB 3.0, SOFS & the CSV network.
  • Protection against disk bay failure? Yes, Storage Spaces has this, and it’s not space inefficient either. Heck, some SAN vendors don’t even offer this.
  • Delegation of storage administration? Check!
  • Easy in-guest clustering? Yes, via SMB 3.0, but now also via shared VHDX! That’s a biggie, people!
  • Hyper-V Replica: free, cheap, effective and easy.
  • Total VM mobility in the data center, so SAN based solutions become less important. We’ve broken out of the storage silos.
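
To give an idea of how little effort the tiering and write-back cache bullets take, here is a minimal, hypothetical Storage Spaces sketch for Windows Server 2012 R2 (the pool name, tier names and sizes are made up for illustration):

```powershell
# Gather the physical disks that are available for pooling
$disks = Get-PhysicalDisk -CanPool $true

# Create a storage pool on those disks (names are illustrative)
New-StoragePool -FriendlyName "Pool01" `
    -StorageSubSystemFriendlyName "Storage Spaces*" -PhysicalDisks $disks

# Define an SSD tier and an HDD tier in the pool
$ssdTier = New-StorageTier -StoragePoolFriendlyName "Pool01" -FriendlyName "SSDTier" -MediaType SSD
$hddTier = New-StorageTier -StoragePoolFriendlyName "Pool01" -FriendlyName "HDDTier" -MediaType HDD

# Carve out a tiered, mirrored virtual disk with a 1GB write-back cache
New-VirtualDisk -StoragePoolFriendlyName "Pool01" -FriendlyName "TieredSpace01" `
    -StorageTiers $ssdTier, $hddTier -StorageTierSizes 100GB, 900GB `
    -ResiliencySettingName Mirror -WriteCacheSize 1GB
```

That’s read caching, write caching and tiering from the list above, out of the box, with no storage controller license key in sight.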

You can’t seriously mean the “Windoze Server” can replace a custom designed SAN?

Let’s say it’s true and it isn’t as optimized as a dedicated storage appliance. So what? Add another ten commodity SSD units at the cost of one OEM SSD and make your storage fly. Windows Server 2012 can handle the IOPS, the CPU cycles and the memory demands, in both capacity and speed, together with network performance that scales beyond what most people need. I’ve talked about this before in Some Thoughts Buying State Of The Art Storage Solutions Anno 2012. The hardware is a commodity today. What if Windows can do, and does, the software part? That will wake a storage vendor up in the morning!

Whilst not perfect yet, all Microsoft has to do is develop Hyper-V Replica further. Together with developing snapshotting & replication capabilities in Storage Spaces, this would make for a very cost effective and complete solution for backups & disaster recovery. Cheaper and cheaper 10Gbps makes this feasible. SAN vendors today have one bonus left, ODX. How long will that last? ASICs, you say? Cool gear, but at what cost, when parallelism & x64 8-core CPUs are the standard and very cheap? My bet is that Microsoft will not stop here but come back to throw some dirt on a part of the classic storage world’s coffin in vNext. Listen, I know about the fancy replication mechanisms, but in a virtualized data center the mobility of VMs over the network is a fact. 10Gbps, 40Gbps, RDMA & Multichannel in SMB 3.0 put this in our hands. Next to that, application level replication is gaining more and more traction, and many apps are providing high availability in a “shared nothing“ fashion (SQL/Exchange with their database availability groups, Hyper-V, DFS-R, …). The need for the storage to provide replication in many scenarios is diminishing. Alternatives are here. Less visible than Microsoft, but there are others who know there are better economies to storage: http://blog.backblaze.com/2011/07/20/petabytes-on-a-budget-v2-0revealing-more-secrets/.

The days when storage vendors offered 85% discounts on hopelessly overpriced storage and still made a killing and a Las Vegas trip are ending. Partners and resellers who just grab 8% of that (and hence benefit from overselling as much as possible) will learn, just like with servers and switches, that they can’t keep milking that cash cow forever. They need to add true and tangible value. I’ve said it before: too many VARs have left out the VA for too long now. Hint: the more they state they are not box movers, the bigger the risk that they are. True advisors discuss solutions & designs. We need that money to invest in our dynamic “cloud” like data centers, where the ROI is better. Trust me, no one will starve to death because of this; we’ll all still make a living. SANs are not dead, but their role & position is changing. The storage market is in flux right now and I’m very interested in what will happen over the next few years.

Am I a consultant trying to sell Windows Server 2012 R2 & System Center? No, I’m a customer. The kind you can’t sell to that easily. It’s my money & livelihood on the line, and I demand Windows Server 2012 (R2) solutions that get me the best bang for the buck. Will you deliver them and make money by adding value, or do you want to stay in the denial phase? Ladies & gentlemen, storage vendors, this is your wake-up call. If you really want to know for whom the bell tolls, it tolls for thee. There will be a reckoning, and either you’ll embrace these new technologies to serve your customers or they’ll get their needs served elsewhere. Banking on customers to be and remain clueless is risky. The interest in Storage Spaces is out there and it’s growing fast. I know several people actively working on solutions & projects.



You like what you see? Sure, IOPS are not the end game and a bit of a “simplistic” way to look at storage performance, but that goes for all marketing spin from all vendors.


Can anyone ruin this party? Yes, Microsoft themselves perhaps, if they focus too much on delivering this technology only to hosting and cloud providers. If, on the other hand, they make sure there are feasible, realistic and easy channels to get it into the hands of “on premise” customers all over the globe, it will work. Established OEMs could be that channel, but by the looks of it they’re in denial and might cling to the past, hoping things won’t change. That would be a big mistake, as embracing this trend will open up new opportunities, not just threaten existing models. Asia Pacific is just one region full of eager businesses with no vested interest in keeping the status quo. Perhaps this is something to consider? And for the record, I do buy and use SANs (high-end, mid-market, or simple shared storage). Why? It depends on the needs & the budget. Storage Spaces can help balance those even better.

Is this too risky? No, start small and gain experience with it. It won’t break the bank but it might deliver great benefits. And if not … there are a lot of storage options out there, don’t worry. So go on!

Verifying SMB 3.0 Multichannel/RDMA Is Working In Windows Server 2012 (R2)

So you have spent some money on RDMA cards (RoCE in this example), spent even more money on 10Gbps switches with DCB capabilities and, last but not least, you have struggled for many hours to get PFC, ETS, … configured. Now you’d like to see that your hard work has paid off; you want to see the RDMA power that SMB 3.0 leverages in action. How?

You could just copy files and look at the speed, but when you have sufficient bandwidth and the limiting factor is disk IO, for example, how would you know? Let’s have a look below.

You can take a look at Performance Monitor for RDMA specific counters like “RDMA Activity” and “SMB Direct Connection”.
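
Besides Performance Monitor, PowerShell can confirm the same thing. A quick sketch using the in-box SMB and network adapter cmdlets (run the counter sample while a copy is in flight so there is traffic to see):

```powershell
# Is RDMA enabled and operational on the NICs?
Get-NetAdapterRdma | Format-Table Name, Enabled

# Does the SMB client see the interfaces as RDMA capable?
Get-SmbClientNetworkInterface | Format-Table FriendlyName, RdmaCapable, RssCapable

# Are the active SMB connections multichannel and RDMA capable on both ends?
Get-SmbMultichannelConnection |
    Format-Table ClientIpAddress, ServerIpAddress, ClientRdmaCapable, ServerRdmaCapable

# Sample the RDMA performance counters while a large copy is running
Get-Counter '\RDMA Activity(*)\RDMA Inbound Bytes/sec',
            '\RDMA Activity(*)\RDMA Outbound Bytes/sec'
```

If `Get-SmbMultichannelConnection` shows RDMA capable on both client and server, SMB Direct is in play for that connection.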

Whilst copying six 3.4GB ISO files over the RDMA connection we see a speed of 1.05GB/s. Not too shabby. But hey, that’s nothing a good 10Gbps connection with TCP/IP can’t handle under the right conditions.

It’s the RDMA counters in Performance Monitor that show us the traffic that is going via SMB Direct.

Another giveaway that RDMA is in play comes from Task Manager’s performance view for the RDMA NIC: 1.3Mbps of send traffic can’t possibly give us 1.05GB/s in copy speed magically.


When you run netstat -xan (instead of the usual -an) you get to see the RDMA connections. The mode is “Kernel” instead of the usual “TCP” or “UDP” that -an shows for TCP/IP connections/listeners.


If you want to go all geeky, there is an event log where you can look at RDMA events, amongst others. Jose Barreto discusses this in Deploying Windows Server 2012 with SMB Direct (SMB over RDMA) and the Mellanox ConnectX-3 using 10GbE/40GbE RoCE – Step by Step, with instructions on how to use it. You’ll need to go to Event Viewer. On the menu, select “View”, then “Show Analytic and Debug Logs”.
Expand the tree on the left: Applications and Services Logs, Microsoft, Windows, SMB Client, ObjectStateDiagnostic. On the “Actions” pane on the right, select “Enable Log”.
You then run your RDMA workload, and afterwards disable the log to view the events. Some filtering & PowerShell might come in handy to comb through them.
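
That enable/run/disable cycle and the filtering can be scripted. A sketch with Get-WinEvent, assuming the ObjectStateDiagnostic log name from Jose Barreto’s post:

```powershell
# Enable the analytic log (equivalent to "Enable Log" in Event Viewer)
$log = Get-WinEvent -ListLog Microsoft-Windows-SMBClient/ObjectStateDiagnostic
$log.IsEnabled = $true
$log.SaveChanges()

# ... run your RDMA workload here, e.g. a large file copy ...

# Disable the log again so the events can be read
$log.IsEnabled = $false
$log.SaveChanges()

# Comb through the captured events for RDMA related entries
Get-WinEvent -LogName Microsoft-Windows-SMBClient/ObjectStateDiagnostic -Oldest |
    Where-Object { $_.Message -match 'RDMA' } |
    Format-Table TimeCreated, Id, Message -AutoSize
```

The `-match 'RDMA'` filter is just one way to slice it; filter on event IDs once you know which ones your cards emit.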

Complete VM Mobility Across The Data Center with SMB 3.0, RDMA, Multichannel & Windows Server 2012 (R2)

Introduction

The moment I figured out that Storage Live Migration (in certain scenarios) and Shared Nothing Live Migration leverage SMB 3.0, and as such Multichannel and RDMA, in Windows Server 2012, I was hooked. I just couldn’t let go of the concept of leveraging RDMA for those scenarios. Let me show you the value of my current favorite network design for some demanding Hyper-V environments. I was challenged a couple of times on the cost/port of this design, which is, when you really think about it, a very myopic way of calculating TCO/ROI. Really, it is. And this week at TechEd North America 2013 Microsoft announced that all types of Live Migration support Multichannel & RDMA (next to compression) in Windows Server 2012 R2. Watch that in action at minute 39 over here at Understanding the Hyper-V over SMB Scenario, Configurations, and End-to-End Performance. You should have seen the smile on my face when I heard that one! Yes, standard Live Migration now uses multiple NICs (no teaming) and RDMA for lightning fast VM mobility & storage traffic. People, you will hit the speed boundaries of DDR3 memory with this! The TCO/ROI of our plans just became even better; just watch the session.
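
Once you’re on Windows Server 2012 R2, switching the Live Migration transport to SMB, so that Multichannel & RDMA kick in, is a one-liner with the in-box Hyper-V cmdlets. A minimal sketch (the migration limits are illustrative values, tune them to your environment):

```powershell
# Allow more simultaneous live and storage migrations (values are illustrative)
Set-VMHost -MaximumVirtualMachineMigrations 4 -MaximumStorageMigrations 4

# Use SMB as the live migration transport so Multichannel & SMB Direct (RDMA) are used;
# the other options are TCPIP and Compression
Set-VMHost -VirtualMachineMigrationPerformanceOption SMB

# Verify the setting
Get-VMHost | Format-List VirtualMachineMigrationPerformanceOption
```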

So why might I use more than two 10Gbps NIC ports in a team with converged networking for Hyper-V in Windows 2012? That’s a great solution for sure: a combined bandwidth of 2 x 10Gbps is more than what a lot of people have right now, and it can handle a serious workload. So don’t get me wrong, I like that solution. But sometimes more is asked for and warranted, depending on your environment.

The reason for this is that today there is no more limit on VM mobility within the data center. This will only become more common in the future.


This is not just a wet dream of virtualization engineers; it serves some very real needs. Of course it does, otherwise I would not spend the money. It consumes extra 10Gbps ports on the network switches, which need to be redundant as well, and you need 10Gbps RDMA capable cards and DCB capable switches. So why this investment? Well, I’m designing for very flexible and dynamic environments that have certain demands laid down by the business. Let’s have a look at those.

The Road to Continuous Availability

All maintenance operations, troubleshooting and even upgrades/migrations should be done with minimal impact on the business. This means we need to build for high to continuous availability where practical, and make sure performance doesn’t suffer too much, not noticeably anyway. That’s where the capability to live migrate virtual machines off a host, clustered or not, rapidly and efficiently, with minimal impact on the workload on the hosts involved, comes into play.

Dynamic environments won’t tolerate downtime

We also want to leverage our resources where and when they are needed most, and the infrastructure for the above can be leveraged for that too. Storage Live Migration and even Shared Nothing Live Migration can be used to place virtual machine workloads where they get the resources they need. You could see this as (dynamically) optimizing the workload both within and across clusters, or amongst standalone Hyper-V nodes. This could mean moving to an SSD-only storage array, or to a smaller but very powerful node or cluster in terms of CPU, memory and disk IO. This can be useful in scenarios where scientific applications, number crunching, IOPS-intensive software or the like need those resources, but only at certain times and not permanently.

Future proofing for future storage designs

Maybe you’re an old-time Fibre Channel user, or iSCSI rules your current data center and Windows Server 2012 has not changed that. But that doesn’t mean it won’t. The option of using a Scale-Out File Server and leveraging SMB 3.0 file shares to provide storage for Hyper-V deployments is a very attractive one in many respects. And if you build the network as I’m doing, you’re ready to switch to SMB 3.0 without missing a heartbeat. If you were to deplete the bandwidth that x number of 10Gbps ports can offer, no worries: you’ll either use 40Gbps and up, or InfiniBand. If you don’t want to go there … well, since you just dumped iSCSI or FC, you have room for some more 10Gbps ports.
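
For illustration, provisioning such an SMB 3.0 share for Hyper-V on a Scale-Out File Server can be as simple as the sketch below (the share name, path and account names are hypothetical, substitute your own):

```powershell
# Create a continuously available share for Hyper-V over SMB on a SOFS cluster node.
# Grant full access to the Hyper-V host computer accounts and the admin group.
New-SmbShare -Name "VMStore01" -Path "C:\ClusterStorage\Volume1\Shares\VMStore01" `
    -FullAccess "CONTOSO\HyperVHost1$", "CONTOSO\HyperVHost2$", "CONTOSO\HyperVAdmins" `
    -ContinuouslyAvailable $true

# Mirror the share permissions onto the NTFS ACL of the folder
(Get-SmbShare -Name "VMStore01").PresetPathAcl | Set-Acl
```

Point the virtual machine paths at `\\sofs\VMStore01` and you’re running Hyper-V over SMB.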

Future proofing performance demands

Solutions tend to stay in place longer than envisioned, and if you need longevity and a stable, standard way of doing networking, here it is. It’s not the most economical way of doing things, but it’s not as cost prohibitive as you might think. Recently I was confronted again with some of the insanities of enterprise IT. A couple of network architects, costing a hefty daily rate, stated that 1Gbps is only for the data center and not the desktop, while even arguing about the cost of some fiber cable versus RJ45 (CAT5E). Well, let’s look beyond the North-South traffic and the cost of aggregating bandwidth all the way up the stack, shall we? Let me tell you that the money spent on such advisers can buy you 10Gbps capabilities in the server room or data center (and some 1Gbps for the desktops to go) if you shop around and negotiate well. The one-size-fits-all and ridiculous economies-of-scale “to make it affordable” arguments in big central IT are not always the best fit for helping the customers. Think a little bit outside of the box, please, and don’t say no out of habit or laziness!

Conclusion

In some future blog post(s) we’ll take a look at what such a network design might look like and why. There is no one size fits all, but there are not too many permutations either. In our latest efforts we were specifically looking into making sure that a single rack failure would not bring down a cluster. So, thinking of the rack as a failure domain, we need to spread the cluster nodes across multiple racks in different rows. That means the network must provide the connectivity & capability to support this, but more on that later.

SMB Direct RoCE Does Not Work Without DCB/PFC

Introduction

SMB Direct with RoCE does not work without DCB/PFC. “Yes”, you say, “we know, this is well documented. Thank you.” But before you sign off, hear me out.

Recently I plugged two RoCE cards into some test servers and linked them to a couple of 10Gbps switches. I did some quick large file copy testing and, to my big surprise, RDMA kicked in with stellar performance even before I had installed the DCB feature, let alone configured it. So what’s the deal here? Does it work without DCB? Does the card fall back to iWarp? Highly unlikely. I was expecting it to fall back to plain vanilla 10Gbps, with RDMA not being used at all, but it was. A short shout out to Jose Barreto to discuss this helped clarify things.

DCB/PFC is a requirement for RoCE

The busier the network gets, the faster the performance will drop. Now, in our test scenario we had two servers, for a total of 4 RoCE ports, on a network consisting of a beefy 48-port 10Gbps switch. So we didn’t see the negative results of this here.

DCB (Data Center Bridging) and Priority Flow Control are considered a requirement for any kind of RoCE deployment. RDMA with RoCE operates at the Ethernet layer. That means there is no overhead from TCP/IP, which is great for performance; it’s the reason you want to use RDMA in the first place. It also means RoCE is left on its own to deal with Ethernet-level collisions and errors. For that it needs DCB/PFC, otherwise you’ll run into performance issues due to a ton of retries at the higher network layers.
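
For completeness, the server-side DCB/PFC configuration I had skipped boils down to something like the sketch below (the adapter names and the bandwidth percentage are illustrative, and the switches need a matching PFC/ETS configuration for this to mean anything end to end):

```powershell
# Install the Data Center Bridging feature (the part I had skipped above)
Install-WindowsFeature Data-Center-Bridging

# Tag SMB Direct traffic (port 445) with 802.1p priority 3
New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3

# Enable Priority Flow Control for priority 3 only
Enable-NetQosFlowControl -Priority 3
Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7

# Apply the DCB settings to the RoCE NICs (adapter names are made up)
Enable-NetAdapterQos -Name "RDMA1","RDMA2"

# Reserve a minimum bandwidth share for the SMB traffic class (percentage is illustrative)
New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS
```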

The reason that iWarp doesn’t require DCB/PFC is that it works at the TCP/IP level, also offloaded, by using a TCP/IP stack on the NIC instead of in the OS. So errors are handled by TCP/IP, at a cost: iWarp delivers the same benefits as RoCE but it doesn’t scale as well. Not that iWarp performance is lousy, far from it! Mind you, for bandwidth management reasons you’d be better off using DCB or some form of QoS there as well.

Conclusion

So no, not configuring DCB on your servers and switches isn’t an option, but apparently it isn’t blocked either, so beware of this. It might appear to be working fine, but it’s a bad idea. Also, don’t think it falls back to iWarp mode; it doesn’t, as one card does one thing, not both. There is no shortcut. RoCE RDMA does not work error free out of the box, so you do have to install the DCB feature and configure it together with the switches.