Hyper-V Virtual Machines and the Storage Optimizer

Windows Server 2012 (R2) introduced many improvements to how storage optimization and maintenance are done. You can read a lot more about this in What’s New in Defrag for Windows Server 2012/2012R2. It boils down to a more intelligent approach depending on the capabilities of the underlying storage.

This is reflected in the Media type we see when we look at Optimize Drives.

This is my workstation … it looks pretty correct: a couple of SSDs and a couple of HDDs.

[Screenshot: Optimize Drives on the workstation]

SSDs are optimized intelligently, by the way. When VSS is leveraged, SSDs do get fragmentation, so once in a while they are “defragmented”. This has to do with keeping performance up to par. Read more about this in The real and complete story – Does Windows defragment your SSD? by Scott Hanselman.

The next example is a Hyper-V cluster. You can see the local disks identified as HDD and the CSVs as thin provisioned disks. Makes sense to me; the SAN I use supports thin provisioning.

[Screenshot: Optimize Drives on a Hyper-V cluster node]
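By the way, if you prefer PowerShell over the Optimize Drives GUI, Get-PhysicalDisk gives you a rough equivalent of that media type column. A quick sketch, with the caveat that the Storage Optimizer does its own classification and Get-PhysicalDisk may well report Unspecified for some disks:

# How the host classifies its disks; MediaType shows HDD, SSD or
# Unspecified depending on what the storage stack reports.
Get-PhysicalDisk | Format-Table FriendlyName, MediaType, Size -AutoSize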

But now, let’s look at a Virtual Machine with virtual disks of every type known and on any type of storage we could find. All virtual disks are identified as “Thin provisioned disk”. How can that be?

[Screenshot: Optimize Drives inside a virtual machine]

What had me puzzled a little bit is that in a virtual machine each and every virtual disk is identified as a thin provisioned disk. It doesn’t matter what type of virtual disk it is: fixed VHD/VHDX or dynamically expanding VHD/VHDX. It also doesn’t matter what physical disk the virtual disk resides on: SATA, SAS, SSD, a SAN (iSCSI/FC) LUN or CSV, an SMB share …

So how does this work with a fixed VHD on a local SATA disk? A VHD doesn’t know about UNMAP, does it? And a SATA HDD? How does that compute? Well, my understanding is that all virtual disks, dynamically expanding or fixed, VHDX or VHD, are identified as thin provisioned disks, no matter what type of physical disk they reside on (CSV, SAS, SATA, SSD, shared/non-shared). This allows UNMAP commands (RETRIMs in Storage Optimizer speak, which is a way of dealing with the TRIM limitations/imperfections; again, see Scott Hanselman’s blog for this) to be sent from the guest to the Hyper-V storage stack below. If it’s a VHD, those UNMAP commands are basically black-holed, just like they would never be passed down to a local SATA HDD (on the host) that has no idea what UNMAP is or what it’s used for.
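You can watch this at work from inside a guest. A minimal sketch, assuming a guest running Windows Server 2012 or later (the drive letter is just an example):

# Check that delete notifications (TRIM/UNMAP) are enabled in the guest;
# 0 means enabled.
fsutil behavior query DisableDeleteNotify

# Have the Storage Optimizer send UNMAP for all free space on C:.
# On a VHDX on capable storage this trickles down the stack; on a VHD
# the commands are simply discarded.
Optimize-Volume -DriveLetter C -ReTrim -Verbose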

But wait a minute … what about SSDs and defragmentation, you say; my VHDX lives on an SSD. Well, for one, the virtual disks are not identified as SSD or HDD. The hypervisor deals with storage optimization at the virtual layer. The host OS handles the physical layer and optimizes the disks as intelligently as it can. How that happens depends on the actual storage beneath: in the case of a modern SAN you’ll notice it’s also identified as a thin provisioned disk. SANs and hyper-converged storage arrays provide you with storage that is itself virtual, with all kinds of features, and are often based on tiered storage that mixes SSD/SAS/NL-SAS and in some cases even NVMe flash. So what would an OS have to identify it as? The storage array must play its part in this.

So, if you ever wondered why that is, now you know. Hope you found this interesting!

Hyper-V and Disk Fragmentation

There are three types of disk fragmentation you might need to deal with in regard to Hyper-V:

  1. Fragmentation of the file system on the host LUN where the VMs reside.
  2. Fragmentation of the file system on the LUNs inside of the VM.
  3. Block fragmentation of the VHDX itself. This is potentially more of an issue with dynamically expanding disks and differencing disks.

We deal with the first type by defragmenting the LUN, which might be a CSV; in that case, take a look at Defragmenting your CSV Windows 2012 R2 Style with Raxco Perfect Disk 13 SP2 for more information. For more information on fragmentation in general, take a look at What’s New in Defrag for Windows Server 2012/2012R2. The second type is business as usual and is similar to the first one, except that it’s the file system inside a VM.
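For a plain (non-CSV) LUN on the host, and likewise inside the guest, this is just the regular optimizer at work. A minimal sketch, the drive letter being an example:

# Analyze first, then only defragment if the numbers warrant it.
Optimize-Volume -DriveLetter E -Analyze -Verbose
Optimize-Volume -DriveLetter E -Defrag -Verbose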

For the third type we need to create a new virtual disk using the fragmented one as the source. See Checking and Correcting Virtual Hard Disk Fragmentation. This is easily done, but it does cause downtime unless you leverage storage live migration. So that’s my preferred method, especially as I leverage ODX when I do this, so it’s pretty fast. So always leave yourself some margin on storage to be able to perform maintenance operations. That has always been true and still is.
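If you can afford the downtime, the offline route is a straightforward Convert-VHD: writing a fresh copy lays the blocks out contiguously. A sketch with hypothetical paths; the VM must be shut down and you point it at the new file afterwards:

# Create a defragmented copy of the virtual disk (paths are examples).
Convert-VHD -Path 'D:\VMs\guest.vhdx' -DestinationPath 'E:\VMs\guest.vhdx' -VHDType Dynamic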

But how do you find out that you have this issue? PowerShell is your friend! Here’s a snippet that checks the VHDX files of all VMs on a cluster:

# Grab every VM on every node in the cluster
$AllVMsOnAllNodesInCluster = Get-VM -ComputerName (Get-ClusterNode)
ForEach ($VM in $AllVMsOnAllNodesInCluster)
{
    $VM.Name
    # Run Get-VHD on the node that owns the VM, as it needs access to the file path
    Invoke-Command -ComputerName $VM.ComputerName -ScriptBlock {
        param ($VMName)
        Get-VM -Name $VMName | Get-VMHardDiskDrive | Get-VHD |
            Format-Table Path, FragmentationPercentage -AutoSize
    } -ArgumentList $VM.Name
}

Here’s a screenshot of some of the output of this snippet:

[Screenshot: output of the snippet]

As said, the best solution that does not incur downtime is to storage (live) migrate the affected virtual disks. We can automate this and put in some logic to do it for all virtual hard disks that are more than X% fragmented, as sketched below. Do take care to also check for disk space or the migration will fail.
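A minimal sketch of that logic; the threshold and destination are hypothetical values, and the free space guard is deliberately simple. Run it on each cluster node for its local VMs:

# Hypothetical values – adapt to your environment.
$Threshold   = 10
$Destination = 'C:\ClusterStorage\Volume2'

ForEach ($VM in Get-VM)
{
    $Fragmented = $VM | Get-VMHardDiskDrive | Get-VHD |
        Where-Object { $_.FragmentationPercentage -gt $Threshold }
    If ($Fragmented)
    {
        # Check there is room at the destination first or the storage migration will fail.
        $NeededBytes = ($Fragmented | Measure-Object FileSize -Sum).Sum
        $FreeBytes   = (Get-Volume -FilePath $Destination).SizeRemaining
        If ($FreeBytes -gt $NeededBytes)
        {
            Move-VMStorage -VMName $VM.Name -DestinationStoragePath $Destination
        }
    }
}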

Hope this helps some of you!

Defragmenting your CSV Windows 2012 R2 Style with Raxco Perfect Disk 13 SP2

When it comes to defragmenting CSVs, it seemed we took a step back when it comes to support from 3rd party vendors. While Windows provides a great toolset to defragment a CSV, it seemed to have disappeared from 3rd party vendor software, even from the really good Raxco PerfectDisk. They did have support for this with Windows 2008 R2 and I even mentioned that in a blog.

If you need information on how to defragment a CSV in Windows 2012 R2, look no further. There is an absolutely fantastic blog post on the subject, How to Run ChkDsk and Defrag on Cluster Shared Volumes in Windows Server 2012 R2, by Subhasish Bhattacharya, one of the program managers in the Clustering and High Availability product group. He’s a great guy to talk shop with, by the way, if you ever get the opportunity to do so. One bizarre thing is that this must be the only place where PowerShell (the Repair-ClusterSharedVolume cmdlet) is deprecated in favor of chkdsk.
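The short version of his post, as I understand it and as a sketch (the CSV mount point is an example): in Windows Server 2012 R2 you scan the CSV online and only spot-fix what the scan flagged.

# Online scan of the CSV; no downtime required.
chkdsk.exe C:\ClusterStorage\Volume1 /scan

# Fix the issues the scan logged (brief pause in availability).
chkdsk.exe C:\ClusterStorage\Volume1 /spotfix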

3rd party wise, the release of Raxco Perfect Disk 13 SP2 brought back support for defragmenting CSVs.

[Screenshot: Raxco Perfect Disk 13 SP2 showing CSV support]

I don’t know why it took them so long, but the support is here now. It looks like they struggled to get CSVFS (the way CSVs are done since Windows Server 2012) supported. While they were at it, they threw in support for ReFS by the way; this is the first time I’ve ever seen that. Anyway, it’s here and that’s good, because I have a hard time accepting that any product (whatever it does) supports Hyper-V if it can’t handle CSVs, not if it wants to be taken seriously anyway. No CSV support equals the do-not-buy list in my book.

Here’s a screenshot of Perfect Disk defragmenting away. One of the CSV LUNs in my lab is an SSD and the other an HDD.

[Screenshot: Perfect Disk defragmenting the CSV LUNs]

Notice that in Global Settings you can tweak the optimization behavior for various drive types, including CSVFS, but you should just leave the defaults alone, unless you like manual labor or love PowerShell so much you can’t forgo any opportunity to use it ;-)

[Screenshot: Perfect Disk Global Settings]
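If you do want the manual PowerShell route, it boils down to putting the CSV in redirected access, defragmenting the mount point, and resuming. A sketch with a hypothetical cluster resource name:

# Put the CSV into redirected access so defrag can run against it.
Suspend-ClusterResource -Name 'Cluster Disk 1' -RedirectedAccess

# Defragment the CSV mount point.
Defrag.exe C:\ClusterStorage\Volume1 /U /V

# Bring the CSV back into direct access.
Resume-ClusterResource -Name 'Cluster Disk 1'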

Perfect Disk cannot detect what kind of disks you have behind the CSV LUN, so you might want to change the optimization method if you’re running SSDs instead of HDDs.

[Screenshot: Perfect Disk optimization method settings]

I’d love for Raxco to comment on this or point to some guidance.

What would also be beneficial to a lot of customers is guidance on defragmentation on the different auto-tiering storage arrays. That would make for a fine discussion I think.

Some Feedback On How to defrag a Hyper-V R2 Cluster Shared Volume

Hans Vredevoort posted a nice blog entry recently on the defragmentation of Cluster Shared Volumes and asked for some feedback & experiences on this subject. He describes the process used and the steps taken to defrag your CSV storage, and notes that there may be third party products that can handle this automatically. Well yes, there are. Two of the best known defragmentation products support Cluster Shared Volumes and automate the process described by Hans in his blog. Calvin made a very useful suggestion to use Redirected Access instead of Maintenance mode. This is what commercial tools like Raxco PerfectDisk and Diskeeper also do.

As the defragmentation of Cluster Shared Volumes requires them to be put into Redirected Access, you should not have “always on” defragmentation running on a clustered Hyper-V node. Sure, the software will take care of it all for you, but the performance hit is there and it is considerable. I might just use this point here as yet another plug for 10 Gbps networks for CSV :-). Also note that the defragmentation has to run on the current owner or coordinator node. Intelligent defragmentation software should know which node to run the defrag on, move the ownership to that node, or just run it on all nodes and skip the CSV storage it isn’t the coordinator for; the latter isn’t that intelligent. John Savill did a great blog post on this for Windows IT Pro Magazine before Windows 2008 R2 went RTM, in which he uses PowerShell scripts to move the ownership of the storage to the node where he’ll perform the defragmentation and retrieves the GUID of the disk to use with the defrag command. You can read his blog post here and see how our lives have improved with the commands he mentioned would be available in the RTM version of W2K8R2 (Repair-ClusterSharedVolume with the –Defrag option).
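Moving the coordinator role to the node you will defragment from is a one-liner these days. A sketch with a hypothetical CSV resource name:

# See which node currently owns (coordinates) the CSV.
(Get-ClusterSharedVolume -Name 'Cluster Disk 1').OwnerNode

# Move ownership to the node the defrag will run on.
Move-ClusterSharedVolume -Name 'Cluster Disk 1' -Node $env:COMPUTERNAME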

For more information on Raxco PerfectDisk you can take a look at the Raxco support article, but the information is rather limited. You can also find some more information from Diskeeper on this subject here. I would like to add that you should use defragmentation intelligently, not blindly. Do it with a purpose and in a well thought out manner to reap the benefits. Don’t just do it out of habit because you used to do it in DOS back in the day :-).

To conclude, I’ll leave you with some screenshots from my lab, taken during the defragmentation of a Hyper-V cluster node.

As you can see, the CSV storage is put into redirected access:

[Screenshot: the CSV in Redirected Access]

And our machines remain online and available:

[Screenshot: the virtual machines remaining online]

This is because we started to defrag it on the Hyper-V cluster node:

[Screenshot: the defragmentation running on the Hyper-V cluster node]

Here you can see that the guest files are indeed being defragmented, in this case the VHD for the guest server Columbia (red circle at the bottom):

[Screenshot: the VHD for guest server Columbia being defragmented]