Windows Hyper-V Server 2012 Live Migration DOES support pass-through disks – KB2834898 is Wrong

See the update inline below (April 11th 2013)

I recently saw KB2834898 (since pulled) appear and it’s an important one. This fast publish statement matters because until recently it was accepted that Live Migration with pass-through disks is supported with Windows Server 2012 Hyper-V (just like with Windows Server 2008 R2 Hyper-V) as long as the live migration is managed by the Hyper-V cluster, i.e. the pass-through disk is a clustered resource => see http://social.technet.microsoft.com/wiki/contents/articles/440.hyper-v-how-to-add-a-pass-through-disk-on-a-failover-cluster.aspx

UPDATE April 11th 2013: After consulting some very knowledgeable people at Microsoft (like Jeff Woolsey and Ben Armstrong), it turns out this KB article is not factually correct and leaves much to be desired. It’s wrong: pass-through disks are still supported with Live Migration in Windows Server 2012 Hyper-V when managed by the cluster, just like before in Windows Server 2008 R2. The KB article has since been pulled.

Mind you that Shared Nothing Live Migration with pass-through disks has never been supported, as there is no way to move the pass-through disk between hosts. Storage Live Migration is not really relevant in this scenario either; there is no VHDX file to copy apart from the OS VHDX. Live migration between stand-alone hosts is equally irrelevant. Hence it’s a Hyper-V cluster game only for pass-through disks.
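For reference, here is a minimal PowerShell sketch of what that supported, cluster-managed scenario looks like. The VM name, disk number and node name are made-up examples, and the cmdlets come from the in-box Storage, Hyper-V and FailoverClusters modules in Windows Server 2012:

```powershell
# The physical disk must be offline on the host before it can be passed through.
Set-Disk -Number 4 -IsOffline $true

# Attach physical disk 4 to the VM as a pass-through disk on its SCSI controller.
Add-VMHardDiskDrive -VMName "SQLVM01" -ControllerType SCSI -DiskNumber 4

# Make the VM a clustered role so the pass-through disk is managed by the cluster.
Add-ClusterVirtualMachineRole -VMName "SQLVM01"

# Live migrate via the cluster, which moves ownership of the pass-through disk along with the VM.
Move-ClusterVirtualMachineRole -Name "SQLVM01" -Node "HVNODE02" -MigrationType Live
```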

I have never been a fan of pass-through disks and we have never used them in production. Not in the Windows Server 2008 R2 era, let alone in the Windows Server 2012 time frame. No really, we never used them, not even in our SQL Server virtualization efforts, as we just don’t like losing the flexibility of VHDX files and because they tend to complicate things (e.g. live migration can fail).

I advise people to strongly reconsider if they think they need them and only to use them if they are really sure they actually have a valid use case. I know some people had various reasons to use them in the past, but I have always found them to be a bit of over-engineering. One of the better reasons might have been that you needed disks larger than 2TB, but then I would advise iSCSI and, now with Windows Server 2012, also virtual Fibre Channel (vFC), which is however no longer needed for size alone since VHDX now supports up to 64TB. Both these options support Live Migration and are useful for in-guest clustering, but not so much for size or performance issues in Windows Server 2012 Hyper-V.

On the performance side of things we might have eaten a small IO hit before in exchange for the nice benefits of using VHDs, but even a MSFT health check of our virtualized SQL Server environment didn’t show any performance issues. Sure, your needs may be different from ours, but the performance argument with Windows Server 2012 and VHDX can be laid to rest. I refer you to my blog post Hyper-V Guest Storage Performance: Above & Beyond 1 Million IOPS for more information on VHDX performance improvements and to Windows Server 2012 with Hyper-V & The New VHDX Format Leads The Way for VHDX capabilities in general (size, unmap, …).
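To make that 64TB point concrete, here is a minimal sketch of the VHDX alternative to a > 2TB pass-through disk. The path, size and VM name are just placeholders:

```powershell
# Create a 10TB dynamically expanding VHDX on a CSV volume.
# VHDX in Windows Server 2012 supports sizes up to 64TB.
New-VHD -Path "C:\ClusterStorage\Volume1\SQLVM01\Data01.vhdx" -SizeBytes 10TB -Dynamic

# Attach it to the VM over the SCSI controller, no pass-through disk needed.
Add-VMHardDiskDrive -VMName "SQLVM01" -ControllerType SCSI `
    -Path "C:\ClusterStorage\Volume1\SQLVM01\Data01.vhdx"
```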

I see only one valid reason why you might have to use them today: you have > 2TB disks in the VM and your backup vendor doesn’t support the VHDX format. Unfortunately that’s still a reality today, but it can be fixed by changing to another backup vendor.

Traveling To MMS 2013

Well, I’ll be traveling to MMS 2013. A big thank you, by the way, to the team back home for keeping an eye on things while I’m reading the Windows Server 2012 Hyper-V Installation and Configuration Guide.

I’m attending this conference for the great networking opportunities and to establish the role System Center will have in our future. Many thousands of us will be attending MMS 2013 in Las Vegas (Nevada, USA) once again for that very same reason. I’m travelling via LHR to LAS with the help of British Airways, as one of their Boeing 747s does the job quite adequately.


System Center 2012 SP1 has been released with full support for Windows Server 2012, whilst Windows 8 is gaining traction and the BYOD & hybrid trends keep raising the bar for management & support. Meanwhile we’re faced with ever bigger challenges keeping up with private, hybrid & public cloud efforts and trends while maintaining our “legacy” systems.

I’m looking forward to discussing some serious issues we’re dealing with in managing an ever more varied ecosystem. Things are moving fast in technology, which means we need to adapt and move even faster with the flow. My friends, colleagues, fellow MEET members & MVPs, business partners and Microsoft employees: I’m looking forward to meeting up at the Summit in Mandalay Bay!

Besides the sessions, I have meetings lined up with vendors, friends & colleagues from around the globe, as we optimize our time when we can meet face to face to talk shop and provide feedback. If you can’t attend, follow some of the action at MMS 2013 Live!


If you read my blog or follow me on Twitter and are attending, be sure to let us know so we can meet & greet.

Belgian TechDays 2013 Sessions Are Online

Just a short heads up to let you all know that the sessions of TechDays 2013 in Belgium are available on the TechNet site. The slide decks can be found at http://www.slideshare.net/technetbelux

In case you want to see my two sessions you can follow these links:

Now there are plenty more good sessions, so I encourage you to browse and have a look. Kurt Roggen’s session on PowerShell is a great one to start with.

Windows Server 2012 NIC Teaming Mode “Independent” Offers Great Value

There, I said it. In switching, just like in real life, being independent often beats the alternatives. In switching, the alternative would mean stacking. Windows Server 2012 NIC teaming in switch independent mode, active-active, makes this possible. And if you do want or need stacking for link aggregation (i.e. more bandwidth), you might go the extra mile and opt for vPC (Virtual Port Channel a la Cisco) or VLT (Virtual Link Trunking a la Force10 – DELL).

What, have you gone nuts? Nope. Windows Server 2012 NIC teaming gives us great redundancy with even cheaper 10Gbps switches.

What I hate about stacking is that the switches go down during a firmware upgrade, so no redundancy there. Also, on the cheaper switches it often costs a lot of 10Gbps ports (no dedicated stacking ports). The only way to work around this is by designing your infrastructure so you can evacuate the nodes in that rack, so that when the stack is upgraded it doesn’t affect the services. That’s nice if you can do it, but it’s also rather labor intensive. If you can’t evacuate a rack (which has effectively become your “unit of upgrade”) and you can’t afford the vPC or VLT kind of redundant switch configuration, you might be better off running your 10Gbps switches independently and leveraging Windows Server 2012 NIC teaming in switch independent mode, active-active. The only reason not to do so would be the need for bandwidth aggregation in all possible scenarios, which only LACP/Static teaming can provide, but in that case I really prefer vPC or VLT.
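As a minimal sketch, this is what such a switch independent, active-active team looks like in PowerShell. The team and adapter names are assumptions, and the load balancing algorithm is a choice you would tune to your own workload:

```powershell
# Create a switch independent team out of two 10Gbps NICs, each connected to a
# different, independently managed switch. With no standby member this is active-active.
New-NetLbfoTeam -Name "Team10G" -TeamMembers "NIC1","NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

# Verify both members are active (no standby adapter configured).
Get-NetLbfoTeamMember -Team "Team10G"
```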

Independent 10Gbps Switches

Benefits:

  • Cheaper 10Gbps switches
  • No potential loss of 10Gbps ports for stacking
  • Switch redundancy in all scenarios, if cluster networking is set up correctly
  • Switch configuration is very simple

Drawbacks:

  • You won’t get > 10 Gbps aggregated bandwidth in any possible NIC teaming scenario

Stacked 10Gbps Switches

Benefits:

  • Stacking is available with cheaper 10Gbps switches (often at the cost of 10Gbps ports)
  • Switch redundancy (but not during firmware upgrades)
  • Get 20Gbps aggregated bandwidth in any scenario

Drawbacks:

  • Potential loss of 10Gbps ports
  • Firmware upgrades bring down the stack
  • Potentially more “complex” switch configuration

vPC or VLT 10Gbps Switches

Benefits:

  • 100% Switch redundancy
  • Get > 10Gbps aggregated bandwidth in any possible NIC team scenario

Drawbacks:

  • More expensive switches
  • More “complex” switch configuration

So all in all, if you come to the conclusion that 10Gbps is a big pipe that will serve your needs and aggregation via teaming is not needed, you might be better off with cheaper 10Gbps switches, leveraging Windows Server 2012 NIC teaming in switch independent mode, active-active. You optimize the 10Gbps port count as well. It’s cheap, it reduces complexity and it doesn’t stop you from leveraging SMB Multichannel/RDMA.
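If you want to see what your own hosts expose on that front, a quick sketch of the checks I would run with the in-box Windows Server 2012 cmdlets (no assumptions beyond having the adapters installed):

```powershell
# Which adapters are RDMA capable (and have it enabled)?
Get-NetAdapterRdma

# What capabilities (RSS, RDMA) does SMB Multichannel see per client interface?
Get-SmbClientNetworkInterface

# Which SMB Multichannel connections are actually in use right now?
Get-SmbMultichannelConnection
```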

So right now I’m either in favor of switch independent 10Gbps networking, or I go all out for a vPC (Virtual Port Channel a la Cisco) or VLT (Virtual Link Trunking a la Force10 – DELL) like setup and forgo stacking altogether. As said, if you’re willing and able to evacuate all the nodes on a stack/rack you can work around that drawback; the colors in the racks in the diagram below indicate the same clusters. That’s not always possible, and while it sounds like a great idea, I’m not convinced.

[Diagram: racks, with colors indicating which nodes belong to the same cluster]

When the shit hits the fan… you need as little to worry about as possible. And yes, I know firmware upgrades are supposed to be easy, planned events. But then there is reality and sometimes it bites, especially when you cannot evacuate the workload until you’ve resolved a networking issue with a firmware upgrade. Choose your poison wisely.