Some Thoughts Buying State Of The Art Storage Solutions Anno 2012

Introduction

I’ve been looking into storage intensively for some time. At first it was reconnaissance. You know, just looking at what exists in software & hardware solutions. At that phase it was purely about functionality, as we found our current, more traditional SANs a dead end.

After that there was the evaluation of reliability, performance and support. We went hunting for both satisfied and unsatisfied customers, experiences etc. We also considered whether a pure software SAN on commodity hardware would do for us, or whether we still need specialized hardware, or at least the combination of specialized software on vendor-certified and supported commodity hardware. Yes, even if you have been doing things a certain way for a long time and have been successful with it, it pays to step back and evaluate whether there are better ways of doing it. This prevents tunnel vision and creates awareness of what’s out there that you might have missed.

Then came the job of throwing out the vendors who we thought couldn’t deliver what was needed and/or whose solutions are great but just too expensive. After that came the ones whose culture and reputation were not suited for or compatible with our needs & culture. So that big list became (sort of) a long list, which eventually became a really short list.

There is a lot of reading, thinking, listening and discussing done during these phases, but I’m having fun as I like looking at this gear and dreaming of what we could do with it. But there are some things in the storage world that I found really annoying and odd.

Scaling Up or Scaling Out with High Availability On My Mind

All vendors, even the better ones in our humble opinion, have their strong and weak points. Otherwise they would not all exist. You’ll need to figure out which ones are a good, or the best, fit for your needs. So when a vendor writes or tells me that his product X is way above the competition and that a competitor’s product Z only competes with the lower-end Y in his portfolio, I cringe. Storage is not that simple. On the other hand they sometimes overcomplicate straightforward functionality or operational needs when they don’t have a great solution for it. Some people in storage really have gotten trivializing the important and complicating the obvious down to an art. No brownie points for them!

One thing is for sure: when working on scalability AND high availability, things become rather expensive. It’s a bit like the server world: scale up versus scale out. Scaling up alone will not do for high availability, except at very high cost. Then you have the scalability issue. There is only so much you can get out of one system, and the last 20% becomes very expensive.

So, I admit, I’m scale-out inclined. For one, you can fail over to multiple less expensive systems, and if you have an “N+1” scalability model you can cope with the load even when losing a node. On top of that you can, and will, use this functionality in your normal operations. That means you know how it works and that it will work during a crisis. Work and train in the same manner as you will when the shit hits the fan. It’s the only way you’ll really be able to cope with a crisis. Remember, chances are you won’t excel in a crisis but will fall back to your lowest mastered skill set.
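The “N+1” idea is simple arithmetic. Here is a back-of-the-envelope sketch (plain Python, purely illustrative — the node counts are made up):

```python
def max_safe_utilization(nodes: int) -> float:
    """In an N+1 model you size the cluster so that N nodes can carry the
    full load; with nodes = N + 1 running, each node should therefore stay
    below (nodes - 1) / nodes utilization to survive the loss of one node."""
    if nodes < 2:
        raise ValueError("an N+1 model needs at least 2 nodes")
    return (nodes - 1) / nodes

for n in (2, 3, 4, 5):
    print(f"{n} nodes -> keep per-node load under {max_safe_utilization(n):.0%}")
# 2 nodes -> 50%, 3 -> 67%, 4 -> 75%, 5 -> 80%
```

Note how the failover headroom each node must reserve shrinks as you scale out across more, cheaper nodes — part of why scale-out can be the more economical road to high availability.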

Oh by the way, if you do happen to operate a nuclear power plant or such please feel free to work both fronts for both scalability & reliability and then add some extra layers. Thanks!

Expensive Scale Up Solutions On Yesterday’s Hardware?

I cannot understand what kept the storage boys back so long when it comes to exploiting modern processing power. Until recently they all still lived in the 32-bit world, running on hardware I wouldn’t give to the office temp. Now I’d be fine with that if the prices reflected it. But that isn’t the case.

Why did (does) it take them so long to move to x64? That’s been our standard server build since Windows 2003, for crying out loud, and our clients have been x64 since the Vista rollout in 2007. It’s 2012, people. Yes, that’s the second decade of the 21st century.

What is holding the vendors back from using more cores? Realistically, if you look at what’s available today, it is painful to see vendors touting dual quad-core controllers (finally, and with their software running x64) as their most recent achievement. Really, dual quad-core, anno 2012? Should I be impressed?

What’s this magic limit of 2 controllers with so many vendors? Did they hard-code a 2 in the controller software and then lose the source code of that module?

On the other hand, what’s the obsession with 4 or more controllers? We’re not all giant cloud providers, and please note my thoughts on scale out versus scale up earlier.

Why are some spending time and money on ASIC development for controllers? You can have a commodity motherboard with four sockets and 8, 10 or 12 cores per socket. Just buy them AND use them. Even the vendors using commodity hardware (which is the way to go long term due to the fast pace and costs) don’t show that much love for lots of cores. It seems cheap and easy when you need a processor or motherboard upgrade. It’s not some small or miniature device where standard form factors won’t work. What is wrong in your controller software that you all seem so slow in going that route? You all talk about how advanced, high-tech and future-driven the storage industry is; well, prove it. Use the 16 to 32 cores you can easily have today. Why? Because you can use the processing power, and also because I promise you all one thing: that state-of-the-art, newly released SAN of today is the old, obsolete junk we’ll think about replacing in 4 years’ time, so we might not be inclined to spend a fortune on it. Especially not when I have to do a forklift upgrade. Been there, done that, and I’d rather not do it again. Which brings us to the next point.

Flexibility, Modularity & Support

If you want to be thrown out of the building, you just need to show even the slightest hint of a forklift upgrade for large or complex SAN environments. Don’t even think about selling me very expensive, highly scalable SANs with overrated, bureaucratic support. You know the kind, where the responsiveness in a crisis is a tenth of what it is when an ordinary disk fails.

Flexibility & Modularity

Large and complex storage solutions that cost a fortune and need to be ripped out completely, and/or where upgrades over their lifetime are impossible or cost me an arm and a leg, are a no-go. I need to be able to enhance the solution where it is needed, and I must be able to do so without spending vast amounts of money on a system I’ll need to rip out within 18 months. What I need is more like a perpetual, modular upgrade model where, over the years, you can enhance the system and keep using what is still usable.

If that’s not possible and I don’t have too large or complex storage needs, I’d rather buy a cheap but functional SAN. Sure, it doesn’t scale as well, but at least I can throw it out for a newer one after 3 to 4 years. Because it wasn’t that expensive, I can replace it long before I hit the scalability bottleneck. Or, if I do hit that limit, I’ll just buy another cheap one and add it to the mix to distribute the load. Sure, that takes some effort, but in the end I’m better, and cheaper, off than with expensive, complex, highly scalable solutions.

Support

To be brutally honest, some vendors read their own sales brochures too much and drank the Kool-Aid. They think their support processes are second to none and the best in the business. If they really believe that, they need to get out into the field and open up their eyes. If they just act like they mean it, they’ll soon find out when the money starts talking. It won’t talk to them.

Really, some of you have support processes that are excellent and easy only in your dreams. I’ll paraphrase a recent remark on this subject about a big vendor: “If vendor X’s support quality and level of responsiveness were only 10% of the quality of their hardware, buying them would be a no-brainer”. Indeed, and right now that is a risk factor or even a show stopper.

Look, all systems will fail sooner or later. They will. End of story. Sure, you might be lucky and never have an issue, but that’s just luck. We need to design and build for failure. A contract with promises is great for the lawyers. Those things, combined with the law, are their weapons on their battlefield. An SLA is great for managers & the business. These are the tools they need for due diligence, to check it off on the list of things to do. It’s CYA to a degree, but that is a real aspect of doing business and being a manager. Fair enough.

But for us, the guys and gals of ICT who are the boots on the ground, we need rock solid, easily accessible and fast support. Stuff fails; we design for that, we build for that. We don’t buy promises. We buy results. We don’t want bureaucratic support processes. I’ve seen some where the overhead is worse than the defect and the only aim is to close calls as fast as possible. We want a hot line and an activation code to bring down the best support we can, as fast as we can, when we need it. That’s what we are willing to pay real good money for. We don’t like a company that sends out evaluation forms after we replaced a failed disk, just to collect a good score. Not when that same company fails to appropriately interpret a failure that brings the business down and ignores signals from the customer that things are not right. Customers don’t forget that, trust me on this one.

And before you think I’m arrogant: I fail as well. I make mistakes, I get sick, etc. That’s why we have colleagues and partners. Perfection is not of this world. So how do I cope with this? The same way as when we design an IT solution: acknowledge that fact and work around it. Failure is not an option, people, it’s pretty much a certainty. That’s why we make backups of data and why we have backups for people. Shit happens.

The Goon Squad Versus Brothers In Arms

As a customer I never ever want to have to worry about where your interests lie. So we pick our partners with care. Don’t be the guy that acts like a gangster in the racketeering business. You know the type: they all dress pseudo-upscale to hide the fact they’re crooks. “We’re friends, we’re partners.” Yeah, sure, we’ll do grand things together, but meanwhile I need to lay down the money for their preferred solution, which seems to be the same whatever the situation and environment.

Some sales guys can be really nice guys. Exactly how nice tends to depend on the size of your pockets. More specifically, the depth of your pockets and how well they are lined with gold coin is what matters here. One tip: don’t be like that. Look, we’re all in business or employed to make money, fair enough, really. But if you want me to be your client, fix my needs & concerns first. I don’t care how much more money some vendor partnerships make you, or how easy it is to only have to know one storage solution. I’m paying you to help me, and you’ll have to make your money in that arena. If you weigh partner kickbacks higher than our needs, then I’ll introduce you to the door marked “EXIT”. It’s a one-way door. If you do help address our needs and requirements, you’ll make good money.

The best advisors – and I think we have one – are those who remember where the money really comes from and whose references really matter out there. Those guys are our brothers in arms, and we all want to come out of the process well: happy and ready to roll.

The Joy

The joy simply is great storage: modern, functional, reliable, modular, flexible, affordable and just plain awesome. What virtualization / private cloud / database / Exchange systems engineer would mind getting to work with that? No one, especially not when, in case of serious issues, the support & responsiveness prove to be rock solid. Now combine that with the Windows 8 & Hyper-V 3.0 goodness coming, and I have a big smile on my face.

I’m Attending The MVP Summit 2012

I’ll be attending the MVP Summit 2012 in Redmond from the 27th of February to the 2nd of March. I consider myself very lucky to be able to do so and I’m grateful that my employer is helping me make use of this opportunity. I appreciate that enormously.

So my hotel is booked and my flights are scheduled. It’s a long flight with a lengthy stopover in Heathrow; I actually spend a day travelling to and from the event. But I’m told it’s very much worth the effort, and I got some great tips from some veteran MVPs in the community. For newbies who can use some information on the MVP Summit, take a look at What to expect at your first MVP Summit by Pat Richard.


I also look forward to meeting so many peers in person and attending all the briefings where we’ll learn a lot of valuable new things about Hyper-V. I cannot talk about them as they are under NDA, but there will be many opportunities to provide feedback to the product teams in my expertise, “Virtual Machine” (i.e. Hyper-V). I have a bunch of questions & feedback for the product teams.

From my colleagues I have learned it’s also a good way to help pass feedback from others on to Microsoft. So this is your chance. Take it! What do you like or need in the virtualization products? What should be enhanced, and what is hurting you? What works and what doesn’t? What is missing?

With Windows 8 going into beta next month, don’t expect immediate actions and changes based on your feedback. But if you want your opinions taken into consideration, you have to let them be heard. So don’t be shy now! Let me know. Sincere & real concerns, along with problems, challenges and feedback on your experiences with the product, are very much appreciated.

So, if you have any remarks, feedback or feature requests you’d like to share with the virtualization product teams, let me know. Just post them in the comments, send me an e-mail via the contact form or message me via @workinghardinit on Twitter. My colleagues tell me the program managers in the virtualization area are a very communicative and responsive bunch. I think that’s true, judging from my experiences with them in the past.

Thinking About Windows 8 Server & Hyper-V 3.0 Network Performance

Introduction

The main purpose of this post is, as mentioned in the title, to think. This is not a design or a technical reference. When it comes to designing a virtualization solution for your private cloud, there are a lot of components to consider. Storage, networking, CPU and memory all come into play, and there is no one size fits all. It all depends on your needs and budget, in combination with how good your insight into your future plans & requirements is. This is not an easy task. Virtualizing 40 web servers is very different from virtualizing SQL Server, Exchange or SharePoint. Server virtualization is different from VDI, and VDI itself comes in many different flavors.

  • So what workloads are you hosting? Is it a homogeneous or a heterogeneous environment?
  • What kind of applications are you supporting? Client-Server, SOA/Web Services, Cloud apps?
  • What storage performance & features do you need to support all that?
  • What kind of network traffic does this require?
  • What does your business demand? Do you know, do they even know? Can you even know in a private cloud environment?
  • Do you have one customer or many (multi-tenancy), and how are they alike or different in both IT needs and business requirements?

The needs of true public cloud builders are different from those running their own private clouds in their own data centers, or in a mix of those with infrastructure at a hosting provider. On top of that, an SMB environment is different from a large enterprise, and companies of the same size will differ enormously in their requirements due to the nature of their business.

I’ve written about virtualization and CPU considerations (NUMA, power-save settings for both the OS & 10Gbps network performance) before. I’ve also written a number of posts about 10Gbps networking and different approaches on how to introduce it without breaking the bank. In 2012 I intend to blog some more on networking and storage options with Windows 8 and Hyper-V 3.0, but I still need to get my hands on the betas and release candidates of Windows 8 to do so. You’ll notice I don’t talk about InfiniBand. Well, I just don’t circulate in the ecosystems where absolute top-notch performance is so important that they can justify, and get, that kind of budget to throw at those needs.

To set the scene for these blog posts I’ll introduce some considerations around networking options with Hyper-V. There are many features and options in hardware, technologies, protocols and file systems. Even when everything is intended to make life simpler, people might get lost in all the options and choices available.

Windows Server 8 NIC features – The Alphabet Soup

  • Data Center Bridging (DCB)
  • Receive Segment Coalescing (RSC)
  • Receive Side Scaling (RSS)
  • Remote Direct Memory Access (RDMA)
  • Single Root I/O Virtualization (SR-IOV)
  • Virtual Machine Queue (VMQ)
  • IPsec Task Offload (IPsecTO)

A lot of this stuff has to do with converged networks. These offer a lot of flexibility and the potential for cost savings along the way. But convergence & cost savings are not a goal; they are means to an end. Perhaps you can have better, cheaper and more effective solutions leveraging your existing network infrastructure by adding some 10Gbps switches & NICs where they provide the best bang for the buck. Chances are you don’t need to throw it all out and do a forklift replacement. Use what you need from the options and features. Be smart about it. Remember my post A Fool With A Tool Is Still A Fool; don’t be that guy!

Now let’s focus on a couple of the features here that have to do with network I/O performance rather than convergence or QoS. As an example I like to use live migration of virtual machines over 10Gbps. Right now, with one 10Gbps NIC, I can use 75% of the bandwidth of a dedicated NIC for live migration. When running 20 or more virtual machines per host, with 4GB to 8GB of memory each, and with Windows 8 giving me multiple concurrent live migrations, I can really use that bandwidth. Why would I want to cut it down to 2 or 3Gbps in that case? Remember the goal. All the features and concepts are just tools, means to an end. Think about what you need.
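To put rough numbers on that, here is an illustrative Python sketch. It deliberately ignores live migration’s iterative dirty-page copy passes and protocol overhead, so treat the results as optimistic lower bounds, not measurements:

```python
def copy_seconds(vm_memory_gb: float, link_gbps: float,
                 usable_fraction: float = 0.75) -> float:
    """Time to push a VM's memory over the wire once.

    usable_fraction models the ~75% of a dedicated 10Gbps NIC mentioned
    above as achievable for live migration traffic.
    """
    gigabits = vm_memory_gb * 8  # GB -> gigabits
    return gigabits / (link_gbps * usable_fraction)

print(f"{copy_seconds(8, 10):.1f}s")      # 8GB VM, 10Gbps at 75% -> ~8.5s per pass
print(f"{copy_seconds(8, 3, 1.0):.1f}s")  # same VM squeezed into 3Gbps -> ~21.3s
```

Multiply that difference by 20+ virtual machines draining off a host and you see why carving the live migration NIC down to 2 or 3Gbps hurts.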

But wait, in Windows 8 we have some new tricks up our sleeve. Let’s team two 10Gbps NICs, put all traffic over that team and then divide the bandwidth up, using QoS to assure live migration gets 10Gbps when needed, but without taking bandwidth away from other network I/O when it’s not needed. That’s nice! Sounds rather cool, doesn’t it, and I certainly see a use for it. It might not be right if you can’t afford to lose that bandwidth when live migration kicks in, but if you can … more power & cost savings to you. But there are other reasons not to put everything on one NIC or team.
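The weight-based minimum-bandwidth idea can be sketched like this (illustrative Python only, not the actual Windows QoS mechanism; the traffic classes and weights are made up):

```python
def guaranteed_shares(team_gbps, weights):
    """Weight-based minimum bandwidth: each traffic class is guaranteed its
    weighted share of the team only under contention; when other classes are
    idle, any class may borrow the spare bandwidth."""
    total = sum(weights.values())
    return {name: team_gbps * w / total for name, w in weights.items()}

# Hypothetical split of a 2 x 10Gbps team
shares = guaranteed_shares(20.0, {"live_migration": 40, "vm": 30,
                                  "storage": 20, "mgmt": 10})
print(shares)  # live_migration gets an 8Gbps floor but can burst toward 20Gbps
```

The point of the sketch: the weights define floors, not ceilings, which is exactly why live migration can grab the whole pipe when everything else is quiet.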

RSS, VMQ, SR-IOV

One thing all of these have in common is that they are used to reduce the CPU load/bottleneck on the host and to optimize the network I/O and bandwidth usage of your expensive 10Gbps NICs. Both avoiding a CPU bottleneck and optimizing the use of the available bandwidth mean you get more out of your servers. That translates into not having to buy more of them to get the same workload done.

RSS is targeted at host network traffic. VMQ and SR-IOV are targeted at virtual machine network traffic, but in the end they both result in the same benefits as stated above. RSS & VMQ integrate well with other advanced Windows features. VMQ, for example, can be used with the extensible Hyper-V switch, while RSS can be combined with QoS & DCB in storage & cluster host networking scenarios. So these give you a lot of options and flexibility. SR-IOV and RDMA are more focused on raw performance and don’t integrate as well with the more advanced features for flexibility & scalability. I’ll talk some more about this in future blog posts.

Now, with all these features having their own requirements and compatibilities, you might want to reconsider putting all traffic over one pair of teamed NICs. You can’t optimize them all in such a scenario, and that might hurt you. Perhaps you’ll be fine, perhaps you won’t.


So what to use where and when depends on how many NICs you’ll use in your servers and for what purpose. For example, even in a private cloud for lightweight virtual machines running web services, you might want to separate the host management & cluster traffic from the virtual machine network traffic. You see, RSS & VMQ are mutually exclusive on the same NIC. That way you can use RSS for the host/cluster traffic and DVMQ for the virtual machine network. Now, if you need redundancy, you might see that you’ll already use 2*2 NICs with Windows 8 native NIC teaming in combination with two switches to avoid a single point of failure. Do you really need that bandwidth for the guest servers? Perhaps not, but you might find that it helps improve density, because better host & NIC performance helps you avoid the cost of buying extra servers. If you virtualize SQL Servers you’d be even more interested in all this. The picture below is just an illustration, just to get you to think; it’s not a design.
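As a toy illustration of that separation (the NIC names and the plan itself are hypothetical; the only point encoded is that RSS and DVMQ don’t mix on one NIC):

```python
def validate_nic_plan(plan):
    """Reject any NIC that tries to enable both RSS and DVMQ,
    since the two features are mutually exclusive on the same NIC."""
    for nic, features in plan.items():
        if {"RSS", "DVMQ"} <= features:
            raise ValueError(f"{nic}: RSS and DVMQ cannot be combined")

plan = {
    "HostTeam":  {"RSS"},   # host management & cluster traffic
    "GuestTeam": {"DVMQ"},  # virtual machine traffic
}
validate_nic_plan(plan)  # passes: each role gets its own NICs and offload
```

Dedicating NIC pairs per role is what makes it possible to turn on the right offload everywhere instead of compromising on one big team.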

[image: illustrative diagram]

I’m sure a lot of matrices will be produced showing which features are compatible under what conditions, perhaps even with some decision charts to help you decide what to use where and when.