KB3063283 Updates the Hyper-V Integration Components for Windows Server 2012 R2 to 6.3.9600.17831

While investigating a backup issue with some VMs I noticed an entry in the Veeam Backup & Replication logs stating that the Hyper-V integration components were out of date.


This was actually the case on all the guests on that particular cluster. A quick look at the integration components (IC) version on the host showed them to be at 6.3.9600.17831.


Comparing that to the version in the guests made clear very quickly that those were still at 6.3.9600.16384, so lower.
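If you want to check this quickly for every guest on a host, PowerShell can list the integration services version and state per VM. A minimal sketch, assuming the Hyper-V PowerShell module is available and using a placeholder host name:

# List the integration services version and update state for every VM on a host
# "MyHyperVHost" is a placeholder for your actual Hyper-V host name
Get-VM -ComputerName "MyHyperVHost" |
    Select-Object Name, IntegrationServicesVersion, IntegrationServicesState |
    Sort-Object IntegrationServicesVersion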


A web search for Hyper-V Integration components led us to KB3063283 “Update to improve the backup of Hyper-V Integration components in Hyper-V Server 2012 R2”, which had been installed on their Hyper-V hosts. They keep a tight ship, but due to regulations they are normally 3 to 4 months behind on patches and updates, so in their case they only recently installed that update. KB3063283 updates the Hyper-V Integration Components for Windows Server 2012 R2 to 6.3.9600.17831.

So a little word of warning: while you are keeping your Hyper-V environment up to date (you should), don’t forget to update the integration components of your virtual machines. A good backup product like Veeam Backup & Replication will log this during backups. It might not make the backups fail per se, but the components have been updated for a good reason. This update was even specifically for backup related issues, so it’s wise to upgrade the virtual machines to this version a.s.a.p.
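If you manage a cluster and want to hunt down which guests still report that an update is required, something like the sketch below works. This assumes the Failover Clustering and Hyper-V PowerShell modules are installed; “MyCluster” is a hypothetical cluster name:

# Find VMs on every cluster node whose integration services report that an update is required
# "MyCluster" is a placeholder for your actual cluster name
$nodes = Get-ClusterNode -Cluster "MyCluster" | Select-Object -ExpandProperty Name
Get-VM -ComputerName $nodes |
    Where-Object { $_.IntegrationServicesState -ne "Up to date" } |
    Select-Object ComputerName, Name, IntegrationServicesVersion, IntegrationServicesState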

BitLooker In Veeam Backup and Replication v9

When your backup size is bigger than the amount of disk space used in the virtual machine you might wonder why that is. Well, it’s deleted data whose blocks have not been released for reuse by the OS yet. BitLooker in Veeam Backup and Replication v9, as announced at VeeamON 2015, offers a solution for this situation. BitLooker analyses the NTFS MFT to identify deleted data. It uses this information to reduce the size of an image-based backup file and helps reduce the bandwidth needed for replication. It just makes sense!

I really like these additions that help optimize the consumption of backup storage. Now I immediately wondered if this would make any difference on the recent versions of Hyper-V that support UNMAP. Well, probably not. My take on this is that the Hyper-V virtual machine already passes the deleted blocks down via UNMAP, so those blocks will not get backed up anyway. This is one of the examples of the excellent storage optimization capabilities of Hyper-V.
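If you want to give a guest a nudge, a retrim inside the virtual machine sends UNMAP for the free space down the stack. A minimal sketch (run inside the guest; the drive letter is an assumption):

# Run inside the guest OS: send TRIM/UNMAP for the free space on the C: volume
# so the virtual (and, where supported, physical) layers can reclaim it
Optimize-Volume -DriveLetter C -ReTrim -Verbose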


It’s a great new addition to Veeam Backup & Replication v9, especially when you’re running legacy hypervisors like Windows Server 2012 or older, or VMware. When you’ve been rocking Windows Server 2012 R2 for the last three years, Hyper-V already had your back with truly excellent UNMAP support in the virtual layer.

Remove Lingering Backup Checkpoints from a Hyper-V Virtual Machine

Sometimes things go wrong with a backup job. There are many reasons for this, but that’s not the focus of this blog post. We’re going to show you how to remove lingering backup checkpoints from a Hyper-V virtual machine that were not properly removed after a backup on Windows Server 2012 R2 Hyper-V.

You see a checkpoint with an “odd” icon that looks like this:


You can right-click on the checkpoint but you will not find an option to apply or delete it. This is a recovery snapshot and you’re not supposed to manipulate it manually. It’s there as part of the backup process. You can read more about that in my blog posts Some Insights Into How Windows 2012 R2 Hyper-V Backups Work and What Is AutoRecovery.avhdx all about?.

When we look at the files we see the traces of a Windows Server 2012 R2 Hyper-V backup.


Some people turn to manually merging these checkpoints. I have discussed this process in the blog posts 3 Ways To Deal With Lingering Hyper-V Checkpoints Formerly Known as Snapshots and Manually Merging Hyper-V Checkpoints. But you don’t have to do this! As a matter of fact, you should avoid this if at all possible. The good news is that removing the checkpoint can be done via PowerShell.

I’m not convinced that the inability to do this in the GUI should be considered a bug, as some suggest. The fact is you’re not supposed to have to do this, so the functionality is not there. I guess this protects people from deleting one by mistake when they see one during backups.

Anyway, PowerShell to the rescue!

So we run:

Get-VMSnapshot -ComputerName "MyHyperVHost" -VMName "VMWithLingeringBackupCheckpoint"


This shows us the checkpoint information (note that it does not show the XXX-AutoRecovery.avhdx as a checkpoint).


And then we simply run Remove-VMSnapshot:

#Just remove the lingering backup checkpoint like you would a normal one via PoSh 
Get-VMSnapshot -ComputerName "MyHyperVHost" -VMName "VMWithLingeringBackupCheckpoint" | Remove-VMSnapshot

You can see the merging process in the GUI:


If you inspect the remaining MyVM-AutoRecovery.avhdx file you’ll see that it points to the parent vhdx. Normally your VM is now already running from the vhdx anyway. If not, you’ll need to deal with it.


During a normal backup that file is deleted when the merge of the redirect AVHDX is done. Here you’ll need to do that manually and be done with it.
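Here’s a hedged sketch of that last bit of cleanup. The host name, VM name and file path are the placeholders from this example, and you should verify that the VM really runs from the parent vhdx before deleting anything:

# Verify which virtual disk the VM is actually running from
Get-VMHardDiskDrive -ComputerName "MyHyperVHost" -VMName "VMWithLingeringBackupCheckpoint" | Select-Object VMName, Path

# Inspect the leftover auto-recovery differencing disk and its parent (run on the host that owns the file)
Get-VHD -Path "D:\VMs\MyVM\Virtual Hard Disks\MyVM-AutoRecovery.avhdx" | Select-Object Path, ParentPath, VhdType

# Only when the VM runs from the parent vhdx and nothing references the avhdx anymore, delete it
Remove-Item -Path "D:\VMs\MyVM\Virtual Hard Disks\MyVM-AutoRecovery.avhdx"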

Conclusion: there is no need to dive in and start merging the lingering backup checkpoints manually. Leave that for those scenarios where that’s the only option you have left.

Windows Server 2016 Data Deduplication Scales and Performs Better

I’ve been leveraging Windows Server Data Deduplication since it became available with great results.


One of the enhanced features in Windows Server 2016 is Data Deduplication and it’s one I welcome very much. The improvements we’re getting mostly have to do with scale and performance. I’m quite pleased that Microsoft listened to our previous feedback on this.


You cannot imagine how much money on backup target storage we have saved by using this. So we’re very happy that Windows Server 2016 Data Deduplication scales and performs better. The fact that we can now get even better scale and performance is music to our ears. The backup target servers are the first in line for an upgrade, that’s for sure! That’s the reason I mentioned it as a subject to look into in the Hyper-V Amigos interview at Ignite!

Scale Improvement of the supported LUN sizes, up to 64TB

Actually I was already pushing this to 50TB in some cases for testing, but overall I used 6 to 10 TB volumes. The support for bigger volumes is very welcome. Now, please note that you should NOT go any higher than 64TB (I actually stay below that), otherwise deduplication doesn’t work due to its dependency on VSS. Please read my blog post

Windows 2012 R2 Data Deduplication Leverages Shadow Copies: “LastOptimizationResultMessage : A volume shadow copy could not be created or was unexpectedly deleted” on this subject.
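As a quick sanity check before enabling deduplication, you can flag volumes that sit at or above that ceiling. A small sketch, nothing more:

# Flag volumes at or above the 64TB ceiling where deduplication won't work reliably due to VSS
Get-Volume | Where-Object { $_.Size -ge 64TB } |
    Select-Object DriveLetter, FileSystemLabel, @{Name="SizeTB";Expression={[math]::Round($_.Size / 1TB, 2)}}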

In Windows 2012 R2 we were limited because data deduplication used a single-threaded job and I/O queue for each volume. That made it wiser to have 10 target LUNs of 6TB than one huge 60TB LUN. The big issue otherwise is that on large volumes the dedup processing could fail to keep up with the rate of data changes (“churn”). Now, your mileage will vary depending on the type of data and the delta. More info on this in the blog post Sizing Volumes for Data Deduplication in Windows Server. It will help you size the volumes, but note that in Windows Server 2016 the rules have changed.

The dedup optimization processing now runs multiple threads in parallel, using multiple I/O queues on a single volume, which gives you better performance without incurring the overhead of having to use more, smaller LUNs.
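To make that concrete, here’s a hedged sketch of kicking off and keeping an eye on an optimization job on a backup target volume (the drive letter is an assumption):

# Start an optimization job on volume E: and watch its progress
Start-DedupJob -Volume "E:" -Type Optimization
Get-DedupJob -Volume "E:"       # shows the job type, state and progress
Get-DedupStatus -Volume "E:"    # shows optimized file counts and savings once jobs have run

# A running job can be cancelled if needed
# Get-DedupJob -Volume "E:" | Stop-DedupJob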

File sizes up to 1TB are good for dedup

Windows Server 2012 R2 Data Deduplication supports the use of file sizes up to 1TB, but they are considered “not good candidates” for dedup. So that DPM workaround of backing up to a truckload of virtual machines with 1TB virtual disks that are deduplicated is borderline. You can see one improvement in CPS v2 coming already (also see the next header). 1TB files are now fully supported and good candidates. I’ll be pushing it higher … in my opinion this is where the most work will need to be done for future improvements. It would allow for more scenarios (I have VMs that hold VHDX virtual disks of 2TB or more). Scale is something that helps keep things simple. Simplicity avoids the costs & issues that come with complexity. That’s always a good thing if possible.

In Windows Server 2012 the algorithms couldn’t scale as well and performance suffered because things like scanning for and inserting changes slow down as the total data set increases. These processes have been redesigned in Windows Server 2016. It now uses new stream map structures and improved partial file optimization. As a result, 1TB file sizes have become good candidates.

Virtualized backup is a new usage type

DPM is already leveraging deduplication of virtual machines (CPS drove that I think, see Deduplicating DPM Storage).


In Windows Server 2016 all the dedup configuration settings for this scenario have been combined into a new usage type called “Backup”. This simplifies the deployment and helps “future proof” your setup, as future changes can automatically be applied through this usage type.
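On a Windows Server 2016 backup target that looks something like the sketch below; the volume letter is an assumption on my part:

# Enable deduplication on the backup target volume with the new Backup usage type
Enable-DedupVolume -Volume "E:" -UsageType Backup

# Check the configuration and the savings afterwards
Get-DedupVolume -Volume "E:" | Select-Object Volume, UsageType, SavedSpace, SavingsRate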

Nano Server support

Data deduplication is (or will be) fully supported in Nano Server (new in TPv3). It’s not completely done yet so deduplication support in Nano Server still has a few restrictions:

  • Support has only been validated in non-clustered configurations
  • Deduplication job cancellation must be done manually (using the Stop-DedupJob PowerShell command)

Microsoft welcomes any feedback on the deduplication feature via an email sent to dedupfeedback@microsoft.com. For me the standing order is to break through that 1TB barrier!

My take & Magic Ball

In combination with the right backup product it saves a ton of money. I have leveraged Veeam and, in the past, Windows Backup (in-box) with great results. The benefit of these two is that you can back up to physical storage and leverage deduplication. Virtualized backup as a new usage type makes life easier for the supported “workaround” around the limitations of DPM, which normally only supports deduplication for VDI. What I’m really curious about is another possible future usage type: “Virtual Servers” … I guess deduplication support for the OS disk would be very beneficial for “cloud” providers. We’ll see.