E2EVC 2015 Berlin SMB Direct Slide Deck

I attended and presented at E2EVC 2015 in Berlin from June 12th to June 14th. The networking was a blast. No “marchitecture” bullshit or vendor fairy tales whatsoever, and lots of very open discussions on the realities we’re seeing and facing in virtualization and cloud. Most account managers and esoteric presales people would die a painful (but fast) death in this environment.


One session was with my Hyper-V Amigo buddy Carsten Rachfahl and was a pure demo extravaganza, so no slides. My own session was “SMB Direct – The Secret Decoder Ring” and was an attempt to position this technology by looking at the why and where, followed by the how, by whom and when.


I hope a lot of people left with at least a better understanding of SMB Direct, RDMA and DCB. The second aim was to take away the fear many people have of this tech by showcasing it in short demos. Time constraints were a challenge, so it was not a 200-level session.

Please download the presentation here if interested.

Enjoy. If you have any concerns or questions, ask, and I’ll try to answer.

Windows Deduplication And Mysterious Folder & File Sizes

There was a brief moment of “this can’t be good” when the sysadmin looked at the size of the backup folders and compared it to the size reported for the files. Sure, I had told him that the Windows inbox deduplication rocked, but either this was too good to be true or deduplication had just eaten all the backup files and he was “toast”. It was neither, but that requires some explanation. The good news is that Windows Data Deduplication, combined with a backup product that supports it like VEEAM, will save you a ton of money on the deduplication licenses some vendors charge, as well as on storage costs.

This is what he saw, and what caused the raised eyebrow: 12.4 TB reduced to 285 GB.


Deduplication can’t be that great, right? Did something go wrong? Checking the properties of all the selected files reported nothing different, but compared to the used space in the volume info something seemed very wrong. That was supposed to be 5.34 TB.


The volume properties report the effective space consumed on the volume, so that reflects the true deduplication results. You can confirm this with PowerShell.

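A minimal sketch, assuming the Data Deduplication role is installed and E: is the deduplicated backup volume (the drive letter is an example):

# Report the dedup savings for the volume; UnoptimizedSize is the logical
# size of the data, UsedSpace the actual footprint on disk.
Get-DedupVolume -Volume "E:" |
    Format-List Volume, SavingsRate, SavedSpace, UnoptimizedSize, UsedSpace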
A savings rate of 57%, 5.34 TB of actually consumed space (5880575557632 bytes) and an unoptimized size of 12.4 TB, just as Server Manager reports.


So what is Explorer up to at the folder and file level? Nothing, it just can’t show you the complete picture. Windows Data Deduplication stores the deduplicated chunks in the System Volume Information folder. Windows Explorer runs under your account, has no access to that folder and doesn’t report the size of the chunks in there. The only thing it does report are the non-deduplicated bits that are left in the source folder, in our case where the backups reside. The result is, as said, raised eyebrows.

The same is true for any other tool that runs under your account, WinDirStat for example.


When we run these tools as SYSTEM we get a different picture, and we can navigate to the actual ChunkStore and learn more about the internals.

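One way to do that, a quick sketch assuming the Sysinternals PsExec tool is on hand and the path matches your dedup-enabled volume:

# Start an interactive PowerShell session under the SYSTEM account
# (-s = run as SYSTEM, -i = interactive on the current desktop)
psexec -s -i powershell.exe

# From that session the chunk store becomes visible; sum up the chunks
# (the path below is where they typically live on a deduplicated volume)
Get-ChildItem 'E:\System Volume Information\Dedup\ChunkStore' -Recurse |
    Measure-Object -Property Length -Sum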

Hyper-V and Disk Fragmentation

There are three types of disk fragmentation you might need to deal with in regard to Hyper-V:

  1. Fragmentation of the file system on the host LUN where the VMs reside.
  2. Fragmentation of the file system on the LUNs inside of the VM.
  3. Block fragmentation of the VHDX itself. This is potentially more of an issue with dynamic disks and differencing disks.

We deal with the first type by defragmenting the LUN, which might be a CSV, in which case you can take a look at Defragmenting your CSV Windows 2012 R2 Style with Raxco Perfect Disk 13 SP2 for more information. For more information on fragmentation in general, take a look at What’s New in Defrag for Windows Server 2012/2012 R2. The second type is business as usual and is similar to the first one, except that it’s the file system inside a VM.
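For these first two types the inbox tooling can do the job as well; a minimal example, assuming E: is the volume to defragment (the drive letter is an example):

# Defragment the volume with the inbox cmdlet and show progress
Optimize-Volume -DriveLetter E -Defrag -Verbose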

For the third type we need to create a new virtual disk using the fragmented one as the source. See Checking and Correcting Virtual Hard Disk Fragmentation. This is easily done, but it does cause downtime unless you leverage storage live migration. So that’s my preferred method, especially as I leverage ODX when I do this, which makes it pretty fast. So always leave yourself some margin on storage to be able to perform maintenance operations. That has always been true and still is.
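For the offline route, a minimal sketch (the VM must be shut down and the paths are examples): Convert-VHD writes a brand new copy of the virtual disk, which comes out defragmented.

# Write a fresh VHDX from the fragmented source; swap the VM's disk
# to the new path afterwards
Convert-VHD -Path 'C:\ClusterStorage\Volume1\VM01\VM01.vhdx' `
            -DestinationPath 'C:\ClusterStorage\Volume2\VM01\VM01.vhdx'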

But how do you find out that you have this issue? PowerShell is your friend! Here’s a snippet that checks the VHDX files of all VMs on a cluster:

# Get all VMs on all nodes of the cluster
$AllVMsOnAllNodesInCluster = Get-VM -ComputerName (Get-ClusterNode).Name
ForEach ($VM in $AllVMsOnAllNodesInCluster)
{
    $VM.Name
    # Query the VHDX fragmentation on the node that currently owns the VM
    Invoke-Command -ComputerName $VM.ComputerName -ScriptBlock {
        param ($VM)
        Get-VM -Name $VM.Name | Get-VMHardDiskDrive | Get-VHD |
            Format-Table Path, FragmentationPercentage -AutoSize
    } -ArgumentList $VM
}

The output lists every VM followed by the path and fragmentation percentage of each of its virtual hard disks.


As said, the best solution that does not incur downtime is to storage (live) migrate the affected virtual disks. We can automate this and put in some logic to do it for all virtual hard disks that are more than X% fragmented. Do take care to also check for free disk space at the destination or the migration will fail.
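A minimal sketch of that logic, with a hypothetical 20% threshold and an example destination path, including a rough free-space check:

$Threshold   = 20                            # hypothetical fragmentation limit
$Destination = 'C:\ClusterStorage\Volume2'   # example destination CSV

ForEach ($VM in Get-VM)
{
    # Find this VM's virtual disks that exceed the threshold
    $Fragmented = $VM | Get-VMHardDiskDrive | Get-VHD |
        Where-Object { $_.FragmentationPercentage -gt $Threshold }
    If ($Fragmented)
    {
        # Rough check that the destination can hold the disks
        $Needed = ($Fragmented | Measure-Object -Property FileSize -Sum).Sum
        $Free   = (Get-Volume -FilePath $Destination).SizeRemaining
        If ($Free -gt $Needed)
        {
            # Storage live migrate the VM's files to the new location
            Move-VMStorage -VMName $VM.Name -DestinationStoragePath "$Destination\$($VM.Name)"
        }
    }
}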

Hope this helps some of you!

Hyper-V Amigo Chat Ignite 2015

Many MVPs attended Microsoft Ignite 2015 in Chicago to see what our future will look like.


Carsten published the “Hyper-V Amigo Chat” we did right after Ignite. The conference was a blast for us all. Tired but happy, we chat about Storage Spaces Direct, Nano Server, ReFS, Dedupe, Azure Stack, … Enjoy!

Here’s the link to the video Hyper V Amigos Chat – Microsoft Ignite 2015 on Carsten’s blog.