Windows Server 2012 Hyper-V Live Migration DOES Support Pass-Through Disks – KB2834898 Is Wrong

See the inline update below (April 11th 2013)

I recently saw KB2834898 (since pulled) appear and it’s an important one. This fast-publish statement matters because until recently it was accepted that Live Migration with pass-through disks was supported with Windows Server 2012 Hyper-V (just like with Windows Server 2008 R2 Hyper-V), as long as the live migration is managed by the Hyper-V cluster, i.e. the pass-through disk is a clustered resource => see http://social.technet.microsoft.com/wiki/contents/articles/440.hyper-v-how-to-add-a-pass-through-disk-on-a-failover-cluster.aspx

UPDATE April 11th 2013: After consulting some very knowledgeable people at Microsoft (like Jeff Woolsey and Ben Armstrong), it’s clear this KB article is not factually correct and leaves much to be desired. It’s wrong: pass-through disks are still supported with Live Migration in Windows Server 2012 Hyper-V when managed by the cluster, just like before in Windows Server 2008 R2. The KB article has meanwhile been pulled.

Mind you, Shared Nothing Live Migration with pass-through disks has never been supported, as there is no way to move the pass-through disk between hosts. Storage Live Migration is not really relevant in this scenario either; there are no VHDX files to copy apart from the OS VHDX. Live migrations between stand-alone hosts are equally irrelevant. Hence, for pass-through disks it’s a Hyper-V cluster game only.
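For reference, here’s a minimal PowerShell sketch of that cluster-managed scenario. The VM name, disk number and node name are placeholders and this only illustrates the principle; the TechNet wiki article linked above has the full procedure.

```powershell
# The physical disk must be offline on the host before it can be passed through.
Set-Disk -Number 4 -IsOffline $true

# Attach the physical disk to the VM as a pass-through disk on the SCSI controller.
Add-VMHardDiskDrive -VMName "DemoVM" -ControllerType SCSI -ControllerNumber 0 -DiskNumber 4

# Let the failover cluster manage the VM (and its pass-through disk),
# then live migrate it to another node in the cluster.
Import-Module FailoverClusters
Add-ClusterVirtualMachineRole -VirtualMachine "DemoVM"
Move-ClusterVirtualMachineRole -Name "DemoVM" -Node "HyperVNode2" -MigrationType Live
```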

I have never been a fan of pass-through disks and we have never used them in production. Not in the Windows Server 2008 R2 era, let alone in the Windows Server 2012 time frame. No really, we never used them, not even in our SQL Server virtualization efforts, as we don’t want to lose the flexibility of VHDX files and because pass-through disks tend to complicate things (i.e. things like live migration fail).

I advise people to strongly reconsider whether they think they need them and to use them only if they are really sure they have a valid use case. I know some people had various reasons to use them in the past, but I have always found them to be a bit of over-engineering. One of the better reasons might have been that you needed disks larger than 2TB, but then I would advise iSCSI and, now with Windows Server 2012, also virtual Fibre Channel (vFC). Even that is no longer needed for size alone, as VHDX now supports up to 64TB. Both of these options support Live Migration and are useful for in-guest clustering, but not so much for size or performance reasons in Windows Server 2012 Hyper-V. On the performance side of things we might have accepted a small IO hit before in exchange for the nice benefits of using VHDs, but even a Microsoft health check of our virtualized SQL Server environment didn’t show any performance issues. Sure, your needs may be different from ours, but the performance argument with Windows Server 2012 and VHDX can be laid to rest. I refer you to my blog Hyper-V Guest Storage Performance: Above & Beyond 1 Million IOPS for more information on VHDX performance improvements and to Windows Server 2012 with Hyper-V & The New VHDX Format Leads The Way for VHDX capabilities in general (size, unmap, …).
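To illustrate why size is no longer an argument, here’s a hedged sketch of creating a large dynamic VHDX well past the old 2TB VHD limit and attaching it to a VM; the path, size and VM name are just examples.

```powershell
# Create a 10TB dynamically expanding VHDX (path and size are examples).
New-VHD -Path "D:\VMs\DemoVM\Data01.vhdx" -SizeBytes 10TB -Dynamic

# Attach it to the VM over the SCSI controller instead of using a pass-through disk.
Add-VMHardDiskDrive -VMName "DemoVM" -ControllerType SCSI -ControllerNumber 0 -Path "D:\VMs\DemoVM\Data01.vhdx"
```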

I see only one valid reason why you might have to use them today: you have > 2TB disks in the VM and your backup vendor doesn’t support the VHDX format. Unfortunately that is still a reality today, but it can be fixed by changing to another vendor.

Saying Goodbye To Old Hardware Responsibly

Last year we renewed our SAN storage and our backup systems. They had been serving us for 5 years and were truly end of life, as both technologies are functionally obsolete in the current era of virtualization and private clouds. The timing was fortunate, as we would have been limited in our Windows Server 2012, Hyper-V & disaster recovery plans if we had had to keep them going for another couple of years.

Now any time you dispose of old hardware it’s a good idea to wipe the data securely to a decent standard such as DoD 5220.22-M. This holds true whether it’s a laptop, a printer or a storage system.

We did the following:

  • Un-initialize the SAN/VLS
  • Reinitialize the SAN/VLS
  • Un-initialize the SAN/VLS
  • Swap a lot of disks around between SAN/VLS and disk bays in a random fashion
  • Un-initialize the SAN/VLS
  • Create new (mirrored) LUNs, as large as possible.
  • Mount them to a host or hosts
  • Run the DoD-grade disk wiping software against them (a sketch of the kind of multi-pass overwrite involved follows after this list)
  • That process is completely automatic and goes faster than we were led to believe, so it was not really such a pain to do in the end. Just let it run 24/7 for a week and you’ll wipe a whole lot of data. There is no need to sit and watch progress counters.
  • Un-initialize the SAN/VLS
  • Have it removed by a certified company that assures proper disposal
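We used a commercial wiping tool for this, but conceptually the multi-pass overwrite it performs looks something like the sketch below. The drive letter and block size are placeholders and this only illustrates the principle (the actual DoD 5220.22-M passes alternate fixed patterns and random data); it is not the product we ran.

```powershell
# Illustration only: overwrite all free space on a mounted wipe LUN (W: is a
# placeholder) with three passes of random data by filling the volume each pass.
$target = "W:\wipe.bin"
$buffer = New-Object byte[] (4MB)
$rng    = New-Object System.Security.Cryptography.RNGCryptoServiceProvider

foreach ($pass in 1..3) {
    $stream = [System.IO.File]::Open($target, 'Create', 'Write')
    try {
        while ($true) {
            $rng.GetBytes($buffer)                     # fresh random data for every block
            $stream.Write($buffer, 0, $buffer.Length)  # keep writing until the volume is full
        }
    }
    catch [System.IO.IOException] {
        # Volume full: this pass has overwritten all free space on the LUN.
    }
    finally {
        $stream.Close()
    }
    Remove-Item $target
}
```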

We would have loved to take it all to a shooting range and blast the hell out of those things, but alas, that’s not very practical nor feasible. It would have been very therapeutic for the IT Ops guys who’ve been babysitting the ever faster failing VLS hardware over the last years.

Here are some pictures of the decommissioned systems. Below are the two old VLS backup systems, broken down and removed from the data center, awaiting disposal. It’s cheap commodity hardware with a reliability problem once it’s over 3 years old and way too expensive for what it is. Especially for scaling up and out later in the lifecycle, it’s just madness. Not to mention that those things gave us more issues than the physical tape library (those still have a valid and viable role to play when used for the correct purposes). Anyway, I consider this to have been my biggest technology choice mistake ever. If you want to read more about that, go to Why I’m No Fan Of Virtual Tape Libraries.

To see what replaced this with great success, go to Disk to Disk Backup Solution with Windows Server 2012 & Commodity DELL Hardware – Part II.

The old EVA 8000 SANs are awaiting removal in the junk yard area of the data center. They served us well and we’ve been early and loyal customers. But the platform was as dead as a dodo long before HP wanted to admit it. It took them quite a while to get the 3Par ready for the same market segment and I expect that cost them some sales. They’re ready today; they were not 12 to 24 months ago.


So they’ve been replaced with Compellent SANs. You can read some info on this in previous blog posts: Multi Site SAN Storage & Windows Server 2012 Hyper-V Efforts Under Way and Migration LUNs to your Compellent SAN.

In the next years the storage wars will rage and the landscape will change a lot, but we’re out of the storm for now. We’ll leverage what we’ve got. One tip for all storage vendors: start listening to your SME customers a lot more than you do now and get the features they need into their hands. There are only so many big enterprises, so until we’re all 100% cloudified, don’t ignore us, as together we buy a lot of stuff too. Many SMEs are interested in more optimal & richer support for their Windows environments; if you can deliver that, you’ll see your sales rise. Keep commodity components, keep building blocks and form factors, but don’t use a cookie cutter to determine our needs or “sell” us needs we don’t have. Time to market & open communication are important here. We really do keep an eye on technologies, so it’s bad to come late to the party.

Are Data Tsunamis Inevitable Or Man Made Disasters?

What happens when people who have no real knowledge and context about how to handle data, infrastructure or applications insist on being in charge and need to be seen taking strong, decisive action without ever being held responsible? It leads to really bad, often silly decisions with a bunch of unintended consequences. Storage vendors love this. More iron to sell. And yes, all this is predictable. When I’m able and allowed to poke around in storage and the data stored on it, I often come to the following conclusion: there’s a bulk amount of data that is stored in an economically unsound fashion. Storage vendors & software vendors love this, as there are now data lifecycle management tools & appliances to be sold.

And the backlash of all this? Cost cutting, which then leads to the data that genuinely needs to be stored and protected not getting the resources it should. Why? Well, who’s going to take responsibility for pushing the delete button to remove the other data? As we get ever better technology to store, transport and protect data, we manage to do more with less money and personnel. But as is often the case, no good deed goes unpunished. Way too often these savings or efficiencies flow straight into the bottomless pit caused by that age-old “horror vacui” principle in action in the world of data storage.

You get situations like this: “Can I have 60TB of storage? It’s okay, I discussed this with your colleague last year, he said you’d have 60TB available in this time frame.”

What is the use case? How do you need it? What applications or services will consume this storage? Do you really need this to be on a SAN or can we dump this in cost-effective Windows Server Storage Spaces with ReFS? What are the economics involved around this data? Is it worth doing? What project is this assigned to? Who’s the PM? Where is the functional analysis? Will this work? Has there been a POC? Was that POC sound? Was there a pilot? What’s the RTO? The RPO? Does it need to be replicated off site? What IOPS are required? How will it be accessed? What security is needed? Any encryption required? Any laws affecting the above? All you get is a lot of vacant, blank stares and lots of “just get it done”. How can it be that with so many analysts and managers of all sorts running from meeting to meeting, all in order to get companies running like a well-oiled, slick, mean machine, we end up with this question at the desk of an operational systems administrator? Basically: what are you asking for, why are you asking for it, and did you think it through?
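For what it’s worth, the cost-effective alternative mentioned above isn’t hard to stand up. Here’s a hedged sketch of carving a mirrored, ReFS-formatted volume out of a Storage Spaces pool on commodity disks; the pool, disk and volume names and the size are placeholders, not a recommendation for any specific workload.

```powershell
# Pool all disks that are eligible for pooling into a new storage pool.
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "BulkDataPool" `
    -StorageSubSystemFriendlyName "Storage Spaces*" `
    -PhysicalDisks $disks

# Create a thinly provisioned, mirrored virtual disk in that pool.
New-VirtualDisk -StoragePoolFriendlyName "BulkDataPool" -FriendlyName "BulkData01" `
    -ResiliencySettingName Mirror -Size 10TB -ProvisioningType Thin

# Initialize, partition and format the new disk with ReFS.
Get-VirtualDisk -FriendlyName "BulkData01" | Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem ReFS -NewFileSystemLabel "BulkData"
```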


Consider the following. What if you asked for 30 billion gallons of water at our desk and we said “sure” and just sent it to you? We did what you asked. Perhaps you meant bottled drinking water, but a flood is what you’ll end up with. And yes, it’s completely up to specifications, limited as they are.


The last words heard while drowning will be “Who ordered this?” You can bet no one will be responsible, especially not when the bill arrives and the resulting mess needs to be cleaned up. Data in the cloud will not solve this. Like the hosting business, which serves up massive amounts of idle servers, the cloud will host massive amounts of idle data, as in both situations it’s providing the service that generates revenue, not your real use of that service or its economic value to you.

Money Saving Hero of 2012: Windows Server 2012 In-Box Deduplication Delivers Big Value

To wave goodbye to 2012, I’m posting the latest screenshot of the easiest and most effective money-saving feature you got in Windows Server 2012, which went RTM in August. Below you’ll find the status report of a backup LUN in a small environment. Yes, those are real numbers from a production environment.

If you are not using it, you’re throwing away vast amounts of money on storage right this moment. If you’re in the market for a practical, economical and effective backup solution, my advice to you is the following: scrap any backup vendor or product that prevents its files or LUNs from being deduplicated by Windows Server 2012. They might as well be robbing you at gunpoint.
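If you want to try it yourself, a minimal sketch looks like the following; the drive letter is a placeholder for a data volume (never the system volume) and the defaults, such as the minimum file age, are left untouched.

```powershell
# Install the deduplication role service and enable dedup on a data volume
# (E: is a placeholder for your backup LUN).
Import-Module ServerManager
Add-WindowsFeature FS-Data-Deduplication

Enable-DedupVolume -Volume "E:"

# Kick off an optimization job right away instead of waiting for the schedule.
Start-DedupJob -Volume "E:" -Type Optimization

# Check how much space you are getting back.
Get-DedupVolume -Volume "E:" | Format-List Volume, SavingsRate, SavedSpace
Get-DedupStatus -Volume "E:"
```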

You can pay for a very nice company New Year’s party with these savings.

I wish you all a great end of 2012 and a magnificent 2013 ahead. In 2013 we’ll push Windows Server 2012 into service where we couldn’t before (we’re waiting for 3rd party vendor support, and if they keep dragging their feet they are out of the door) and work at making our infrastructure ever more resilient and protected. With System Center SP1, some products of that suite will make a comeback in our environment. 10Gbps is bound to become the standard all over our little data center network, and not just for our most important workloads.