Moving Clustered Virtual Machines to Windows Server 2012 with the Cluster Migration Wizard

As you might remember, I did a series of blog posts on transitioning from a Windows Server 2008 R2 Hyper-V cluster to Windows Server 2012 (well, I was using the beta at the time, not the RC yet):

  1. Upgrading Hyper-V Cluster Nodes to Windows Server 2012 (Beta) – Part 1
  2. Upgrading Hyper-V Cluster Nodes to Windows Server 2012 (Beta) – Part 2
  3. Upgrading Hyper-V Cluster Nodes to Windows Server 2012 (Beta) – Part 3

Microsoft has now blogged about the process themselves, and they use the migration wizard in Failover Cluster Manager to get the job done, whereas I did it using the Import, “register only” functionality.

This is the first step-by-step guide that describes the official way. You can read about the process here:

How to Move Highly Available (Clustered) VMs to Windows Server 2012 with the Cluster Migration Wizard

TRIM/UNMAP Support in Windows Server 2012 & Hyper-V/VHDX

Introduction

I’m very excited about the TRIM/UNMAP support in Windows Server 2012 & Hyper-V with the VHDX file. Thin provisioning is a great technology, but there is more to it than just provisioning ahead of time. It also provides a way to make sure storage allocation stays thin by reclaiming freed-up space from a LUN. Until now this required either the use of sdelete on Windows or dd for the Linux crowd, or some disk defrag product like Raxco’s PerfectDisk. It’s interesting to note here that sdelete relies on the defrag APIs in Windows, and you can see how a defragmentation tool can pull off the same stunt. Take a look at Zero-fill Free Space and Thin-Provisioned Disks & Thin-Provisioned Environments for more information on this. Sometimes an agent is provided by the SAN vendor that takes care of this for you (Compellent), and I think NetApp even has plans to support it via a future ONTAP PowerShell toolkit for NTFS partitions inside the VHD (https://communities.netapp.com/community/netapp-blogs/msenviro/blog/2011/09/22/getting-ready-for-windows-server-8-part-i). Some cluster file system vendors like Veritas (Symantec) also offer this functionality.

A common “issue” people have with sdelete and the like is that it is rather slow, rather resource intensive, and not automated unless you have scheduled tasks running on all your hosts to take care of it. sdelete also has an issue with mount points: it can’t handle them. A trick is to use the now somewhat ancient SUBST command to assign a drive letter to the path of the mount point so you can run sdelete against it. Another trick would be to script it yourself. Mind you, you can’t just create a big file in a script and delete it. That’s the same as deleting “normal” data and won’t do a thing for thin-provisioning space reclamation. You really have to zero the space out. See A PowerShell Alternative to SDelete for more information on this. The script also deals with another annoying thing about sdelete: it doesn’t leave any free space, thereby potentially endangering your operations, or at least setting off all the alarms on the monitoring tools. With a home-grown script you can force a free percentage to remain untouched.
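To make the zero-filling idea concrete, here is a minimal sketch of the reserve-percentage approach in Python. This is my own illustration, not the PowerShell script referenced above; the names `bytes_to_zero` and `zero_free_space` are assumptions for the example. The key points it demonstrates: merely creating and deleting a file does nothing for reclamation (the blocks must actually be overwritten with zeros), and a free-space reserve keeps the operation from filling the volume completely.

```python
import os
import shutil

def bytes_to_zero(free_bytes, total_bytes, reserve_pct):
    """How many zero bytes to write so that at least reserve_pct of the volume stays free."""
    reserve = int(total_bytes * reserve_pct / 100)
    return max(free_bytes - reserve, 0)

def zero_free_space(path, reserve_pct=5.0, chunk=1 << 20, max_bytes=None):
    """Fill free space on the volume holding `path` with zeros, then delete the file.

    Deleting "normal" data is not enough for thin-provisioning reclamation;
    the freed blocks have to be overwritten with zeros so the array can
    detect and reclaim them. Returns the number of bytes written.
    `max_bytes` is a safety cap for testing/partial runs.
    """
    usage = shutil.disk_usage(path)
    target = bytes_to_zero(usage.free, usage.total, reserve_pct)
    if max_bytes is not None:
        target = min(target, max_bytes)
    scratch = os.path.join(path, "zerofill.tmp")
    written = 0
    try:
        with open(scratch, "wb") as f:
            while written < target:
                n = min(chunk, target - written)
                f.write(b"\0" * n)   # the zeros are what the thin array detects
                written += n
    finally:
        if os.path.exists(scratch):
            os.remove(scratch)       # give the space back once it has been zeroed
    return written
```

A real script (like the PowerShell one linked above) would run this per volume, including mount points that sdelete can’t reach.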

TRIM/UNMAP

With Windows Server 2012 and Hyper-V VHDX we get what is described in the documentation as “Efficiency in representing data (also known as ‘trim’), which results in smaller file size and allows the underlying physical storage device to reclaim unused space. (Trim requires physical disks directly attached to a virtual machine or SCSI disks in the VM, and trim-compatible hardware.)” It also requires Windows Server 2012 on hosts & guests.

I was confused as to whether VHDX supports TRIM or UNMAP. TRIM is the specification for this functionality by Technical Committee T13, which handles all standards for ATA interfaces. UNMAP is the Technical Committee T10 specification for this and is the full equivalent of TRIM, but for SCSI disks. UNMAP is used to remove physical blocks from the storage allocation in thinly provisioned Storage Area Networks. My understanding is that what is used on the physical storage depends on what storage it is (SSD/SAS/SATA/NL-SAS, or a SAN with one or all of the above), and for a VHDX it’s UNMAP (the SCSI standard).

Basically, VHDX disks report themselves as being “thin provision capable”. That means that any deletes as well as defrag operations in the guests will send down “unmaps” to the VHDX file, which are used to ensure that block allocations within the VHDX file are freed up for subsequent allocations. The same requests are also forwarded to the physical hardware, which can reuse the space for its thin-provisioning purposes. Also see http://msdn.microsoft.com/en-us/library/hh848053(v=vs.85).aspx

So unmap makes its way down the stack from the guest Windows Server 2012 operating system, through the VHDX and the hypervisor, to the storage array. This means that a VHDX will only consume storage for data that is actually stored, not for the entire size of the VHDX, even when it is a fixed one. You can see that not just the operating system but also the application/hypervisor that owns the file system on which the VHDX lives needs to be TRIM/UNMAP aware to pull this off.
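The mechanism can be illustrated with a toy model. This is purely my own sketch of the concept, not how Hyper-V or VHDX is actually implemented: only blocks that have been written consume physical space, and an unmap for a range of blocks releases that space again, which is exactly the request that gets forwarded down the stack to the array.

```python
class ThinDisk:
    """Toy model of a thin-provisioned virtual disk.

    The virtual (provisioned) size can be large, but only blocks that
    have actually been written are physically backed. An UNMAP for a set
    of blocks releases that backing again, just like the requests a VHDX
    forwards to the physical storage.
    """

    def __init__(self, size_blocks):
        self.size_blocks = size_blocks   # virtual size, e.g. a "fixed" VHDX
        self.allocated = set()           # physically backed blocks only

    def write(self, block):
        if not 0 <= block < self.size_blocks:
            raise IndexError("block outside virtual disk")
        self.allocated.add(block)        # first write allocates the block

    def unmap(self, blocks):
        # A delete or defrag in the guest turns into unmaps like this one.
        self.allocated.difference_update(blocks)

    def physical_blocks(self):
        return len(self.allocated)

# A 1000-block "fixed" disk that only ever stores 3 blocks of real data:
disk = ThinDisk(1000)
for b in (10, 11, 12):
    disk.write(b)
disk.unmap([11])  # guest deleted a file; physical usage drops to 2 blocks
```

The point of the model: physical consumption tracks real data, not the provisioned size, and it shrinks again when unmaps flow down.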

The good news here is that there is no more sdelete to run, no scripts to write, and no agents to install. It happens “automagically”, and as ease of use is very important, I for one welcome this! By the way, some SANs also provide the means to shrink LUNs, which can be useful when the space used by a volume is much lower than what is visible/available in Windows and you don’t want people to think you’re wasting space, or that all that extra space is freely available to them.

To conclude, I’m looking forward to playing around with this and I hope to blog on our experiences with it later in the year. Until the Windows Server 2012 & VHDX specifications are RTM and fully public, we are working on some assumptions. If you want to read up on the VHDX format you can download the specs here. It looks pretty feature complete.

Some Thoughts Buying State Of The Art Storage Solutions Anno 2012

Introduction

I’ve been looking into storage intensively for some time. At first it was reconnaissance. You know, just looking at what exists in software & hardware solutions. At that phase it was purely about functionality, as we found our current, more traditional SANs a dead end.

After that there was the evaluation of reliability, performance and support. We went hunting for both satisfied and unsatisfied customers, experiences etc. We also considered whether a pure software SAN on commodity hardware would do for us, or whether we still need specialized hardware, or at least the combination of specialized software on vendor-certified and supported commodity hardware. Yes, even if you have been doing things a certain way for a long time and been successful with it, it pays to step back and evaluate whether there are better ways of doing it. This prevents tunnel vision and creates awareness of what’s out there that you might have missed.

Then came the job of throwing out vendors who we thought couldn’t deliver what was needed and/or who have solutions that are great but just too expensive. After that came the ones whose culture and reputation were not suited to or compatible with our needs & culture. So that big list became (sort of) a long list, which eventually became a really short list.

There is a lot of reading, thinking, listening and discussing done during these phases, but I’m having fun, as I like looking at this gear and dreaming of what we could do with it. But there are some things in the storage world that I found really annoying and odd.

Scaling Up or Scaling Out with High Availability On My mind

All vendors, even the better ones in our humble opinion, have their strong and weak points. Otherwise they would not all exist. You’ll need to figure out which ones are a good or the best fit for your needs. So when a vendor writes or tells me that his product X is way above the others, and that a competitor’s product Z only competes with the lower-end Y in his portfolio, I cringe. Storage is not that simple. On the other hand, they sometimes overcomplicate straightforward functionality or operational needs when they don’t have a great solution for them. Some people in storage really have gotten trivializing the important and complicating the obvious down to an art. No brownie points for them!

One thing is for sure: when working on scalability AND high availability, things become rather expensive. It’s a bit like the server world: scale up versus scale out. Scaling up alone will not do for high availability, except at very high cost. Then you have the scalability issue. There is only so much you can get out of one system, and the last 20% become very expensive.

So, I admit, I’m scale-out inclined. For one, you can fail over to multiple less expensive systems, and if you have an “N+1” scalability model you can cope with the load even when losing a node. On top of that, you can and will use this functionality in your normal operations. That means you know how it works and that it will work during a crisis. Work and train in the same manner as you will when the shit hits the fan. It’s the only way you’ll really be able to cope with a crisis. Remember, chances are you won’t excel in a crisis but will fall back to your lowest mastered skill set.

Oh, by the way, if you do happen to operate a nuclear power plant or such, please feel free to work both fronts for both scalability & reliability and then add some extra layers. Thanks!

Expensive Scale Up Solutions On Yesterday’s Hardware?

I cannot understand what keeps the storage boys back so long when it comes to exploiting modern processing power. Until recently they all still lived in the 32-bit world, running on hardware I wouldn’t give to the office temp. Now I’d be fine with that if the prices reflected it. But that isn’t the case.

Why did (does) it take them so long to move to x64? That’s been our standard server build since Windows 2003, for crying out loud, and our clients have been x64 since the Vista rollout in 2007. It’s 2012, people. Yes, that’s the second decade of the 21st century.

What is holding the vendors back from using more cores? Realistically, if you look at what’s available today, it is painful to see that vendors are touting dual quad-core controllers (finally, and with their software running x64) as their most recent achievement. Really, dual quad-core, anno 2012? Should I be impressed?

What’s this magic limit of 2 controllers with so many vendors? Did they hard code a 2 in the controller software and lost the source code of that module?

On the other hand what’s the obsession with 4 or more controllers? We’re not all giant cloud providers and please note my ideas on scale out versus scale up earlier.

Why are some spending time and money on ASIC development for controllers? You can have a commodity motherboard with four sockets and 8, 10 or 12 cores per socket. Just buy them AND use them. Even the ones using commodity hardware (which is the way to go long term due to the fast pace and costs) don’t show that much love for lots of cores. It’s cheap and easy when you need a processor or motherboard upgrade. It’s not some small or miniature device where standard form factors won’t work. What is wrong in your controller software that you all seem to be so slow in going that route? You all talk about how advanced, high-tech and future-driven the storage industry is; well, prove it. Use the 16 to 32 cores you can easily have today. Why? Because you can use the processing power, and also because I promise you all one thing: that state-of-the-art newly released SAN of today is the old, obsolete junk we’ll think about replacing in 4 years’ time, so we might not be inclined to spend a fortune on it. Especially not when I have to do a forklift upgrade. Been there, done that, and rather not do it again. Which brings us to the next point.

Flexibility, Modularity & Support

If you want to be thrown out of the building, you just need to show even the slightest form of forklift upgrade for large or complex SAN environments. Don’t even think about selling me very expensive, highly scalable SANs with overrated and bureaucratic support. You know the kind, where the response time in a crisis is 1/10 of that when an ordinary disk fails.

Flexibility & Modularity

Large and complex storage solutions that cost a fortune and need to be ripped out completely, and/or where upgrades over their lifetime are impossible or cost me an arm and a leg, are a no-go. I need to be able to enhance the solution where it is needed, and I must be able to do so without spending vast amounts of money on a system I’ll need to rip out within 18 months. It has to be more like a perpetual, modular upgrade model where over the years you can enhance it and keep using what is still usable.

If that’s not possible and I don’t have too large or complex storage needs, I’d rather buy a cheap but functional SAN. Sure, it doesn’t scale as well, but at least I can throw it out for a newer one after 3 to 4 years. That means I can replace it long before I hit the scalability bottleneck, because it wasn’t that expensive. Or if I do hit that limit, I’ll just buy another cheap one and add it to the mix to distribute the load. Sure, that takes some effort, but in the end I’m better and cheaper off than with expensive, complex, highly scalable solutions.

Support

To be brutally honest, some vendors read their own sales brochures too much and drank the Kool-Aid. They think their support processes are second to none and the best in the business. If they really believe that, they need to get out into the field and open up their eyes. If they just act like they mean it, they’ll soon find out when the money starts talking. It won’t talk to you.

Really, some of you have support processes that are only excellent and easy in your dreams. I’ll paraphrase a recent remark on this subject about a big vendor: “If vendor X’s support quality and level of responsiveness were only 10% of the quality of their hardware, buying from them would be a no-brainer”. Indeed, and right now that is a risk factor or even a show stopper.

Look, all systems will fail sooner or later. They will. End of story. Sure, you might be lucky and never have an issue, but that’s just that. We need to design and build for failure. A contract with promises is great for the lawyers. Those things, combined with the law, are their weapons on their battlefield. An SLA is great for managers & the business. These are the tools they need for due diligence, to check it off on the list of things to do. It’s CYA to a degree, but that is a real aspect of doing business and being a manager. Fair enough. But for us, the guys and gals of ICT who are the boots on the ground, we need rock-solid, easily accessible and fast support. Stuff fails; we design for that, we build for that. We don’t buy promises. We buy results. We don’t want bureaucratic support processes. I’ve seen some where the overhead is worse than the defect and the only aim is to close calls as fast as they can. We want a hot line and an activation code to bring down the best support we can get, as fast as we can, when we need it. That’s what we are willing to pay real good money for. We don’t like a company that sends out evaluation forms after we replaced a failed disk, just to get a good score. Not when that company fails to appropriately interpret a failure that brings the business down and ignores signals from the customer that things are not right. Customers don’t forget that, trust me on this one.

And before you think I’m arrogant: I fail as well. I make mistakes, I get sick, etc. That’s why we have colleagues and partners. Perfection is not of this world. So how do I cope with this? The same way as when we design an IT solution. Acknowledge that fact and work around it. Failure is not an option, people, it’s pretty much a certainty. That’s why we make backups of data and why we have backups for people. Shit happens.

The Goon Squad Versus Brothers In Arms

As a customer I never ever want to have to worry about where your interests lie. So we pick our partners with care. Don’t be the guy that acts like a gangster in the racketeering business. You know, they all dress pseudo-upscale to hide the fact they’re crooks. “We’re friends, we’re partners. Yeah sure, we’ll do grand things together.” And yet I need to lay down the money for their preferred solution, which seems to be the same whatever the situation and environment.

Some sales guys can be really nice guys. Exactly how nice tends to depend on the size of your pockets. More specifically, the depth of your pockets and how well they are lined with gold coin is important here. One tip: don’t be like that. Look, we’re all in business or employed to make money, fair enough, really. But if you want me to be your client, fix my needs & concerns first. I don’t care how much more money some vendor partnerships make you or how easy it is to only have to know one storage solution. I’m paying you to help me, and you’ll have to make your money in that arena. If you weigh partner kickbacks higher than our needs, then I’ll introduce you to the door marked “EXIT”. It’s a one-way door. If you do help address our needs and requirements, you’ll make good money.

The best advisors – and I think we have one – are those that remember where the money really comes from and whose references really matter out there. Those guys are our brothers in arms and we all want to come out of the process good, happy and ready to roll.

The Joy

The joy simply is great, modern, functional, reliable, modular, flexible, affordable and just plain awesome storage. What virtualization / private cloud / database / Exchange systems engineer would mind getting to work with that? No one, especially not when, in case of serious issues, the support & responsiveness prove to be rock solid. Now combine that with the Windows 8 & Hyper-V 3.0 goodness coming and I have a big smile on my face.

Full Steam Ahead With Windows 8 & Hyper-V in 2012

Some History

There have been a good number of people who’ve always used, some a lot more and some a lot less, a bit of Microsoft bashing to gain some extra credibility or to try to position other products as superior. Sometimes this addressed at least some real challenges and issues with Microsoft products. A lot of the time it didn’t. I have always found this ridiculous. In the early years of this century I was told to get out of the Microsoft stack and into the LAMP stack to make sure I still had a job in a few years’ time. My reaction was to buy Inside SQL Server 2000, among other technology books. The paradox, in some cases like some storage integrators, is that the ones doing the bashing forget that their customers are often heavily invested in the Microsoft stack.

I Still Have A Job

As you might have realized already, I still have a job today. I’m very busy building more and better environments based on Microsoft technologies. Microsoft does not get everything right. Who does? Sometimes it takes more than a few tries, sometimes they fail. But they also succeed in a lot of their endeavors. They are capable of learning, adapting and providing outstanding results, with a very good support system to boot (I would dare say that you get out of it what you put into it). Given the size and nature of the company, combined with IT evolving at the speed of light, that’s not an easy task.

Today that ability translates into the upcoming release of Windows 8. Things like Hyper-V 3.0, the new storage and networking features, and the improvements to clustering and the file system are the current state of an evolution. A path from Windows 2000 over Windows 2003 (R2) to the milestone Windows 2008, which was improved with Windows 2008 R2. Now Windows 8, being the next generation, improves vastly on that very good and solid foundation. With Windows 8 we’ll take the next step forward in building highly scalable, highly available, feature-rich and very functional solutions in a very cost-effective manner. On top of that, we can do more now than ever before, with less complexity and with affordable standard hardware. If you have a bigger budget, great: Windows 8 will deliver even more and better bang for the buck, if and when your hardware vendors get on the bandwagon.

Windows 8 & Storage

One of the things the Windows BUILD conference achieved is that it made me want to buy hardware that I couldn’t get yet. Just try asking DELL or HP for RDMA support on 10Gbps and you get a bit of a blank stare.

Another thing is that it made me look at our storage roadmap again. One of the few sectors in IT that is still very expensive is storage. Some of the storage vendors might start to feel a bit like a major network gear vendor. You know, the one that has also seen the effects of serious competition from high-quality but lower-cost kit. Just think about what Storage Pools/Spaces will do for affordable, easy-to-use and rich storage solutions. There is value both with standard, off-the-shelf (read affordable) hardware and with modern SANs that leverage the Windows 8 features. Heed my warning, storage vendors: you’re struggling in the SMB market due to complexity, cost and way too much overhead and expensive services. Well, it’s only going to get worse. You’ll have to come up with better proposals or you’ll end up being high-end / niche market players in the future. Let’s face it, if I can buy a Super Micro chassis with the disks of my choosing, I can build my own storage solution for cheap and use Windows 8 to achieve my storage needs. Perhaps it’s 80/20, but hey, that’s great. It’s not that much better with more expensive solutions (vendor disks are ridiculously overpriced), and the support process is sometimes a drain on your workforce’s time and motivation. And yes, you paid for that. Compare this with being able to buy some spare parts on the cheap and having it all available off the shelf. No more calls, no more bureaucratic mess for return parts, no more IT-illiterate operators to work through before you reach support that can be substandard as well. Once you reach a certain level of hardware quality there is not that much difference any more, except for price and service. Granted, some vendors are better at this than others. The really big ones often struggle to get this right.

I’ve been in this business long enough to know that all stuff breaks. SLAs are fine for lawyers and for management. CYA is part of doing business. But the IT pro in the field needs reliable people, gear and services. On top of that, you have to design for failure. You know things will break. So it should be as cheap, easy and fast as possible to fix, while your design and architecture should cope with the effects of a failure. That’s what IT pros need and that’s what keeps things running (not that SLA paper in the mailbox of your manager).

Show the Windows customers a bit more love than you have in the past. Some in the storage industry tend to look down on the Windows OS. But guess what: it is your largest customer base. Unless you want to end up in the same niche as a very expensive personal trainer for Hollywood stars (tip: there’s not a huge job market there), you’d better adjust to new realities. A lot of them are doing that already; some of them aren’t. To those: get over it and leverage the features in Windows 8. You’ll be able to sell to a more varied public, and at the high end you’ll have even better solutions to offer. Today I notice way too many storage integrators who haven’t even looked at Windows 8. It’s about time they started… really, like today. I mean, how do you want to sell me storage today if you can’t answer my queries on Windows 8 & System Center 2012 support and integration? To me this is huge! I want to know about ODX, RDMA, SMI-S, and yes, I want you to be able to answer me how your storage deals with CSVs. You should know about the consumption of persistent iSCSI-3 reservations and a rock-solid hardware VSS provider. If you can do that, it creates the warm fuzzy feeling a customer needs to make that leap of faith.

When I look at the network improvements in Windows 8, things like RDMA, SMB 2.2 and File Transfer Offload, and what they mean for file sharing and data-intensive environments, I’m pretty impressed. Then there is Hyper-V 3.0 and its many improvements. Only a fool would deny that it is a very good, affordable & rich hypervisor with a bright future as far as hypervisors go (they are not the goal, just a means to an end). Live Storage Migration, an extensible virtual switch, monitoring of the virtual switch, Network Virtualization, Hyper-V Replica… it’s just too much to mention here. But hop on over to Windows 8 Hyper-V Feature Glossary by Aidan Finn. He’s got a nice list up of the new features relevant to the Hyper-V crowd. Again, we see improvements for all business sizes, from SMB to enterprise, including the ISPs and cloud providers. Windows 8 is breaking down barriers that would prevent its use in various environments and scenarios. Objections based on missing features, scalability, performance or security in multi-tenancy environments are being wiped off the map. If you want to see some musings on this subject, just look at Group Video Interview: What is your favorite Hyper-V feature in Windows 8?.

2012 & Beyond

Hyper-V is growing. It has already won the hearts and minds of many smaller Microsoft shops, but it’s also growing in the enterprise. The hybrid world is here when you look at the numbers, even if it’s not yet the case in your neck of the woods. Why? Cost versus features. Good enough is good enough, especially when that good is rather great. On top of that, the integration is top notch, it won’t cost you a fortune, and it will save you a lot of plumbing hassle.

Basically, everyone can benefit from all this. You’ll get more and better at a lower, or at least more affordable, cost. Even if you don’t use any Microsoft technologies, you’ll benefit from the increased competition. So everyone can be happy.