Hyper-V and Disk Fragmentation

There are three types of disk fragmentation you might need to deal with in regard to Hyper-V:

  1. Fragmentation of the file system on the host LUN where the VMs reside.
  2. Fragmentation of the file systems on the LUNs inside of the VM.
  3. Block fragmentation of the VHDX itself. This is potentially more of an issue with dynamic disks and differencing disks.

We deal with the first type by defragmenting the LUN, which might be a CSV; in that case take a look at Defragmenting your CSV Windows 2012 R2 Style with Raxco Perfect Disk 13 SP2 for more information. For more information on fragmentation in general, see What’s New in Defrag for Windows Server 2012/2012 R2. The second type is business as usual and similar to the first one, except that it’s the file system inside a VM.
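
For those first two types, a minimal example with the in-box cmdlet, assuming D: as an example volume (run it on the host for the first type, inside the guest for the second; a CSV needs the extra care described in the linked post):

# Analyze first to see whether it's even worth the effort, then defragment.
Optimize-Volume -DriveLetter D -Analyze -Verbose
Optimize-Volume -DriveLetter D -Defrag -Verbose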

For the third type we need to create a new virtual disk using the fragmented one as the source. See Checking and Correcting Virtual Hard Disk Fragmentation. This is easily done, but it causes downtime unless you leverage storage live migration. So that’s my preferred method, especially as I leverage ODX when I do this, which makes it pretty fast. So always leave yourself some margin on storage to be able to perform maintenance operations. That has always been true and still is.
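
Both approaches as a minimal sketch, with example VM names and paths. Convert-VHD needs the virtual disk to be offline, while Move-VMStorage is the storage live migration route without downtime:

# Offline: create a new, defragmented VHDX from the fragmented source (the VM must be off).
Convert-VHD -Path 'D:\VMs\DemoVM\Disk0.vhdx' -DestinationPath 'E:\VMs\DemoVM\Disk0.vhdx' -VHDType Dynamic

# Online: storage live migration rewrites the VHDX at the destination while the VM keeps running.
Move-VMStorage -VMName 'DemoVM' -DestinationStoragePath 'E:\VMs\DemoVM'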

But how do you find out that you have this issue? PowerShell is your friend! Here’s a snippet that checks the VHDX files of all the VMs on a cluster:

# Grab all VMs on all nodes in the cluster
$AllVMsOnAllNodesInCluster = Get-VM -ComputerName (Get-ClusterNode)
ForEach ($VM in $AllVMsOnAllNodesInCluster)
{
    $VM.Name
    # Query the VHDX files of this VM on the node that owns it
    Invoke-Command -ComputerName $VM.ComputerName -ScriptBlock {
        param ($VM)
        Get-VM -Name $VM.Name | Get-VMHardDiskDrive | Get-VHD |
            Format-Table Path, FragmentationPercentage -AutoSize
    } -ArgumentList $VM
}

Here’s a screenshot of some output of this snippet

[Screenshot: output of the snippet, showing Path and FragmentationPercentage per VHDX]

As mentioned, the best solution that does not incur downtime is to storage (live) migrate the affected virtual disks. We can automate this and put in some logic to do it for all virtual hard disks that are more than X% fragmented. Do take care to also check for free disk space on the destination or the migration will fail.
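
Here’s a minimal sketch of what that automation could look like, assuming an example threshold of 25% and an example destination CSV; it does a rough free space check before it moves anything:

# Example values; adjust the threshold and destination CSV to your environment.
$Threshold   = 25
$Destination = 'C:\ClusterStorage\Volume2'

ForEach ($VM in Get-VM -ComputerName (Get-ClusterNode))
{
    # Find the VHDX files of this VM that are fragmented beyond the threshold.
    $FragmentedDisks = ForEach ($Disk in ($VM | Get-VMHardDiskDrive))
    {
        Get-VHD -Path $Disk.Path -ComputerName $VM.ComputerName |
            Where-Object { $_.FragmentationPercentage -gt $Threshold }
    }
    if (-not $FragmentedDisks) { continue }

    # Rough free space check on the destination CSV, with some margin.
    $SpaceNeeded = ($FragmentedDisks | Measure-Object -Property FileSize -Sum).Sum
    $FreeSpace   = (Get-ClusterSharedVolume |
        Where-Object { $_.SharedVolumeInfo.FriendlyVolumeName -eq $Destination }).SharedVolumeInfo.Partition.FreeSpace

    if ($FreeSpace -gt ($SpaceNeeded * 1.2))
    {
        # This moves all of the VM's storage; use the -Vhds parameter if you only want to move individual disks.
        Move-VMStorage -ComputerName $VM.ComputerName -VMName $VM.Name -DestinationStoragePath "$Destination\$($VM.Name)"
    }
    else
    {
        Write-Warning "Not enough free space on $Destination to migrate $($VM.Name)"
    }
}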

Hope this helps some of you!

In Defense of Switch Independent Teaming With Hyper-V

For many old-timers (heck, that includes me) NIC teaming with LACP mode was the best of the best, at least when it comes to teaming options. Other modes often led to passive/active setups and less than optimal aggregation of received network traffic. Basically, and perhaps oversimplified, I could say the other options were only used if you had no other choice to get things to work. Which we did a lot … I used Intel’s different teaming modes for various reasons in the past (before we had MLAG, VLT, VPC, …). Trying to use LACP where possible was a good approach in the past in physical deployments and early virtualized environments, when 1Gbps networking dominated the datacenter realm and Windows did not have native support for LBFO.

But even LACP, even in those days, had some drawbacks. It’s the most demanding form of teaming. For one, it required switch stacking, which demands the same brand and type of switches and means you have no redundancy during firmware upgrades. That’s bad, as the only way to work around it is to move all workloads to another rack unit … if you even had the capability to do that! So even in days past we chose different modes of teaming out of need, or because of the above limitations for high availability. But the superiority of NIC teaming with LACP still stands for many, and as modern switches support MLAG, VLT, etc., the drawback of stacking can be avoided. So does that mean LACP for NIC teaming is always the superior choice today?

Some argue it is, and now they have found support for that in the Microsoft CPS system documentation. Look, even if Microsoft chose to use LACP in their solution, that’s based on their particular design and the needs of that design. I do not concur that this is the best choice overall. It is, however, a valid and probably the best choice for their specific setup. I applaud the use of MLAG (when available to you at no or very low cost) to have all bases covered, but it does not mean that LACP is the best choice for the majority of Hyper-V deployments. Microsoft actually agrees with me on this in their Windows Server 2012 R2 NIC Teaming (LBFO) Deployment and Management guide. They state that Switch Independent configuration / Dynamic distribution (or Hyper-V Port if on Hyper-V and not on W2K12R2) is the best possible default choice for teaming in both native and Hyper-V environments. I concur, even if perhaps not that strongly for native workloads (it depends). Exceptions to this:

  • Teaming is being performed in a VM (which should be rare),
  • Switch dependent teaming (e.g., LACP) is required by policy, or
  • Operation of a two-member Active/Standby team is required by policy.

In other words, in 2 out of 3 cases the reason is a policy, not a technically superior solution …

Note that there are differences between Address Hash, Hyper-V Port and the new Dynamic distribution modes; the latter has made things better in W2K12R2 in regards to bandwidth, but you’ll need to read the white papers. Use Dynamic as the default, it is the best. Also note that LACP / switch dependent teaming doesn’t mean you can send and receive to and from a VM over the aggregated bandwidth of all team members. Life is more complicated than that. So if that’s your main reason for switch dependent and you think you’re done => beware.
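
For reference, this is what that default looks like in PowerShell; a minimal sketch, assuming two physical NICs named NIC1 and NIC2 and example names for the team and the virtual switch:

# Create a switch independent team with Dynamic load balancing (the recommended default).
New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic -Confirm:$false

# Bind the Hyper-V virtual switch to the resulting team interface (it gets the team name by default).
New-VMSwitch -Name "ConvergedvSwitch" -NetAdapterName "ConvergedTeam" -AllowManagementOS $false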

Switch Independent is also way better for optimization of VMQ. You have more queues available (sum-of-queues) and the IO path is very predictable & optimized.
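
A quick way to check the VMQ configuration and queue usage on a host with the in-box cmdlets:

# Is VMQ enabled per physical NIC and how many receive queues does each one expose?
Get-NetAdapterVmq | Format-Table -AutoSize
# Which queues are currently allocated, and to which virtual network adapters?
Get-NetAdapterVmqQueue | Format-Table -AutoSize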

If you don’t control the switches, switch dependent teaming means a lot more cross-team communication to get teaming set up for your hosts. There’s more complexity in these configurations, so more possibilities for errors or bugs. Operational ease is also a factor.

The biggest drawback could be that for receiving traffic you cannot get more than the bandwidth a single team member can deliver. That’s true, but optimizing receiving traffic has its own demands and might not always work that well if the switch configuration isn’t that smart & capable. Do I ever miss the potential ability to aggregate incoming traffic? In real life I do not (yet), but in some configurations it could do a great job of optimizing that when needed.

When using 10Gbps or higher you’ll rarely be in a situation where receiving traffic exceeds 10Gbps, and if you want to handle that amount of traffic you really need to leverage DVMQ. And as said, switch independent teaming with Hyper-V Port or Dynamic mode gives you the most bang for the buck, as you have more queues available. This drawback is also mitigated a bit by the fact that modern NICs have a way larger number of queues available than they used to. But if you have more than one VM that is eating close to 10Gbps in a non-lab environment and you’re planning to have more than 2 of those on a host, you need to start thinking about 40Gbps instead of aggregating a fistful of 10Gbps cables. Remember the golden rule: a single bigger pipe is always better than a bunch of small pipes.

When using 1Gbps you’ll be at that point sooner, and as 1Gbps isn’t a great fit for (Dynamic) VMQ anyway, I’d say sure, give LACP a spin to try and get a bit more bandwidth, but will it really matter? In native workloads it might, but with a vSwitch? Modern CPUs eat 1Gbps NICs for breakfast, so I would not bother with VMQ. But when you’re tied to 1Gbps it’s probably due to budget constraints, and you might not even have stackable, MLAG, VLT or otherwise capable switches. The arguments can be made, it depends (see Don’t tell me “It depends”! But it does!). But in any case I start saving for 10Gbps.

Today, with the PC8100 series and the N4000 series available (budget 10Gbps switches, and yes, I know “budget” is relative, but in the 10Gbps world they offer outstanding value for money), I tend to set up MLAG with two of these per rack. This means we have all options and needs covered at no extra cost and without sacrificing redundancy under any condition. However, look at the needs of your VMs and the capability of your NICs before using LACP for teaming by default. The fact that switch independent works with any combination of budget switches to get redundancy doesn’t mean it’s only to be used in such scenarios. That’s a perk for those without more advanced gear, not a consolation prize.

My best advice: do not over engineer it. Engineer it for the best possible solution for the environment at hand. When choosing a default it’s not about the best possible redundancy and bandwidth under certain conditions. It’s about the best possible redundancy and bandwidth under most conditions. It’s there that switch independent comes into its own, today more than ever!

There is one other very good, but luckily also very rare, case where LACP / switch dependent teaming will save you and switch independent won’t: dead switch ports, where the port becomes dysfunctional without the link going down. Switch independent teaming protects against NIC, switch and cable failures, but here it doesn’t help you because it can’t detect the problem (it acts on link failures, not on logical issues on a port).

For the majority of my Hyper-V deployments I do not use switch dependent teaming / LACP. The rare situation where I did had to do with Windows NLB in combination with IGMP multicast.

Note: you can do VLT, MLAG or stacking and still leverage switch independent teaming; LACP or static switch dependent teaming is NOT mandatory even when it is possible.

Hyper-V did not find virtual machines to import from the location . The operation failed with error code ‘32784’.

I got contacted by some people who ran into issues importing VMs from W2K12R2 Hyper-V into W2K12 Hyper-V. They got bitten by this “little” issue: Importing a VM that is exported from Windows Server 2012 R2 into Windows Server 2012 is not supported.

This means you get greeted by

Hyper-V did not find virtual machines to import from the location <folder location>.
The operation failed with error code ‘32784’.

[Screenshot: the import error dialog]

No, the trick of not exporting the VM but doing an “in place” registration doesn’t cut it either. That’s great for W2K8R2 to W2K12 or W2K12 to W2K12R2, but not from W2K12R2 to a lower version. In that way the title of the KB article could be seen as a bit misleading or incomplete, but the content is pretty clear.

And that’s it. Whoops! What, you have 200 VMs on the LUNs from the old cluster that you already blew away to build the new one? You do have a tested exit plan for this, right? Uh, no?

Facepalm Combo

Oh MAN, NOOOOO!

Now if it’s only one or two VMs you can always work around this by creating new VMs using the old VHDXs. This leaves you to deal with networking cleanup inside the VMs and configuring TCP/IP. PowerShell can help here, but at large volumes this remains a serious effort. This is also the time that documentation pays off!
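
A minimal sketch of that workaround, with example names, paths and memory size; any extra data disks get reattached separately:

# Create a new VM around the existing system VHDX (no export/import involved).
New-VM -Name "OldVM01" -MemoryStartupBytes 4GB -VHDPath "C:\ClusterStorage\Volume1\OldVM01\Virtual Hard Disks\OldVM01.vhdx" -SwitchName "vSwitch"

# Reattach any additional data disks the VM had.
Add-VMHardDiskDrive -VMName "OldVM01" -Path "C:\ClusterStorage\Volume1\OldVM01\Virtual Hard Disks\OldVM01-Data.vhdx"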

Now what if this happens to you when you’re trying to roll back a migration of a Hyper-V cluster (revert W2K12R2 to W2K12, for example)? Well, for one, you should have known, as you did test all this, right? Right?!

What are your other options to roll back, other than the above? Off the top of my head and without going into details:

  • Move back to your old cluster. You didn’t already nuke it, I hope.
  • If you have a SAN, take a snapshot of the LUNs before you move them to Windows Server 2012 R2 for faster fallback. But beware: if you’re running applications in those VMs that require some tender loving care in relation to snapshots, like Exchange or Active Directory … shutting all VMs down before you create the snapshot helps mitigate issues but is not a foolproof approach! “Know thy apps”!
  • A great backup & RESTORE solution to get you back up and running also comes in handy, but don’t forget that it requires you to know your apps here as well. Yes, it’s not always just “CLICKEDYCLICKCLICKDONE”.
  • Perhaps it’s now time to activate your paused replicas on the DRC cluster or hosts (see the sketch below)? You did test this, didn’t you?

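For that last option, here’s a minimal sketch of a planned failover with Hyper-V Replica, assuming an example VM "OldVM01" replicated from "PrimaryHost" to "ReplicaHost" (in a real disaster, with the primary gone, you’d do an unplanned failover on the replica side instead):

# On the primary side: stop the VM and prepare the planned failover.
Stop-VM -Name "OldVM01" -ComputerName "PrimaryHost"
Start-VMFailover -VMName "OldVM01" -ComputerName "PrimaryHost" -Prepare

# On the replica side: fail over, reverse replication and start the VM.
Start-VMFailover -VMName "OldVM01" -ComputerName "ReplicaHost"
Set-VMReplication -VMName "OldVM01" -ComputerName "ReplicaHost" -Reverse
Start-VM -Name "OldVM01" -ComputerName "ReplicaHost"
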
Now, for anyone involved in a migration to Windows Server 2012 R2 there is no excuse not to know this in advance, and no excuse not to test the new cluster hardware as much as you can. This minimizes the chance you’ll need to fall back. And please test your exit scenarios, really, I mean it. Also, remember you can migrate one LUN/CSV at a time. Try to run the VMs on the first migrated LUN/CSV before you do all the others. That way you can do some damage control.

Now, this is not great, but it is what it is, and at least now you know before you migrate. We’ve also asked MSFT to make falling back a bit less “involved” in future versions. Perhaps they’ll do that, I’m pretty sure they’ll consider it. And by what we’ve seen in the recently available Technical Preview, they did!

Microsoft Hyper-V S3 Cap warning when upgrading a Hyper-V Virtual Machine

When you do an in-place upgrade of a Hyper-V virtual machine you’ll get a warning that Microsoft Hyper-V S3 Cap may not work after the upgrade and that you need to update the driver prior to the upgrade. This warning is logged to the Windows Compatibility Report.htm.

[Screenshot: the Windows Compatibility Report warning about Microsoft Hyper-V S3 Cap]

Microsoft Hyper-V S3 Cap is an old S3 Trio 765 emulated video device and the driver isn’t included anymore, so you get this particular warning. This will never give you any issues, as all the drivers needed are indeed in the install bits. You can safely ignore it and successfully upgrade.

Some people uninstall the device via device manager but basically that’s pure cosmetics & doesn’t really serve a purpose.

This warning is an artifact of generation 1 virtual machines, which still have this device on a PCI bus. Below is a screenshot of a generation 1 VM with W2K12R2. As you can see, the Microsoft Hyper-V S3 Cap device is perfectly fine. No worries.

[Screenshot: Device Manager in a generation 1 W2K12R2 VM, showing the Microsoft Hyper-V S3 Cap device working properly]

As a matter of fact, you will not even see this device in a generation 2 virtual machine, so we should not see this warning when upgrading those.
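
A quick way to see which of your VMs are generation 1 (and thus still carry these emulated devices) and which are generation 2:

# Generation 1 VMs are the ones that will show the Microsoft Hyper-V S3 Cap warning on upgrade.
Get-VM | Sort-Object Generation | Format-Table Name, Generation -AutoSize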


I will have to wait for a public preview of Windows vNext to test an upgrade of a generation 2 machine to prove my thinking that this cosmetic error won’t be there anymore.