Windows Server 2016 Data Deduplication Scales and Performs Better

I’ve been leveraging Windows Server Data Deduplication since it became available with great results.


One of the enhanced features in Windows Server 2016 is Data Deduplication and it’s one I welcome very much. The improvements we’re getting mostly have to do with scale and performance. I’m quite pleased that Microsoft listened to our previous feedback on this.


You cannot imagine how much money on backup target storage we have saved by using this. So we’re very happy that Windows Server 2016 Data Deduplication scales and performs better. The fact that we can now get even better scale and performance is music to our ears. The backup target servers are the first in line for an upgrade, that’s for sure! That’s the reason I mentioned it as a subject to look into in the Hyper-V Amigos interview at Ignite!

Scale improvement: supported LUN sizes up to 64TB

Actually, I was already pushing this to 50TB in some cases for testing, but overall I used 6 to 10TB volumes. The support for bigger volumes is very welcome nonetheless. Now, please note that you should NOT go any higher than 64TB (I actually stay below that), otherwise deduplication doesn’t work due to its dependency on VSS. Please read my blog Windows 2012 R2 Data Deduplication Leverages Shadow Copies: “LastOptimizationResultMessage : A volume shadow copy could not be created or was unexpectedly deleted” on this subject.

In Windows 2012 R2 we were limited because data deduplication used a single-threaded job and I/O queue for each volume. That made it wiser to have 10 target LUNs of 6TB than one huge 60TB LUN. The big issue otherwise is that on large volumes the dedup processing could fail to keep up with the rate of data changes (“churn”). Your mileage will vary depending on the type of data and the delta. There is more info on this in the blog post Sizing Volumes for Data Deduplication in Windows Server. It will help you size the volumes, but note that in Windows Server 2016 the rules have changed.

The dedup optimization processing now runs multiple threads in parallel using multiple I/O queues on a single volume, which gives you better performance and doesn’t incur the overhead of having to use more, smaller LUNs.
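If you want to play with this yourself, here’s a minimal sketch of enabling deduplication on a backup target volume and kicking off an optimization job by hand; the drive letter E: is just a placeholder for illustration:

    # Requires the Data Deduplication role service to be installed on the server
    Import-Module Deduplication
    # Enable deduplication on the backup target volume (E: is a placeholder)
    Enable-DedupVolume -Volume "E:" -UsageType Default
    # Start an optimization job manually and check on it
    Start-DedupJob -Volume "E:" -Type Optimization
    Get-DedupJob -Volume "E:"
    # Once it's done, look at the savings
    Get-DedupStatus -Volume "E:" | Format-List

On Windows Server 2016 that optimization job fans out over multiple threads and I/O queues on the single volume without you having to configure anything.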

File sizes up to 1TB are good for dedup

Windows Server 2012 R2 Data Deduplication supports the use of file sizes up to 1TB, but they are considered “not good candidates” for dedup. So that DPM workaround of backing up to a truckload of virtual machines with 1TB virtual disks that are deduplicated is borderline. You can see one improvement in CPS v2 coming already (also see the next header). 1TB is now fully supported and a good candidate. I’ll be pushing it higher … in my opinion this is where the most work will need to be done for future improvements. It would allow for more scenarios (I have VMs that hold VHDX virtual disks of 2TB or more). Scale is something that helps keep things simple. Simplicity avoids the costs and issues that come with complexity. That’s always a good thing if possible.

In Windows Server 2012 R2 the algorithms don’t scale as well and performance suffers because operations like scanning for and inserting changes slow down as the total data set increases. These processes have been redesigned in Windows Server 2016. Dedup now uses new stream map structures and improved partial file optimization. As a result, 1TB file sizes have become good candidates.

Virtualized backup is a new usage type

DPM is already leveraging deduplication of virtual machines (CPS drove that I think, see Deduplicating DPM Storage).


In Windows Server 2016 all the dedup configuration settings needed for the DPM backup scenario have been combined into a new usage type called “Backup”. This simplifies the deployment and helps “future proof” your setup, as future changes can automatically be applied through this usage type.
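In practice that’s a one-liner; a minimal sketch, again assuming a backup repository volume E::

    # Enable dedup with the new Backup usage type (Windows Server 2016)
    Enable-DedupVolume -Volume "E:" -UsageType Backup
    # Verify the usage type and the savings so far
    Get-DedupVolume -Volume "E:" | Format-List Volume, UsageType, SavedSpace, SavingsRate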

Nano Server support

Data deduplication is (or will be) fully supported in Nano Server (new in TPv3). It’s not completely done yet, so deduplication support in Nano Server still has a few restrictions:

  • Support has only been validated in non-clustered configurations
  • Deduplication job cancellation must be done manually (using the Stop-DedupJob PowerShell command), as sketched below
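For that second restriction, this is roughly what canceling jobs by hand looks like (run in a remote PowerShell session to the Nano Server):

    # Cancel all running dedup jobs on this server
    Get-DedupJob | Stop-DedupJob
    # Or just the jobs on one volume (E: is a placeholder)
    Stop-DedupJob -Volume "E:"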

Microsoft welcomes any feedback on the deduplication feature via an email sent to [email protected]. For me the standing order is to break through that 1TB barrier!

My take & Magic Ball

In combination with the right backup product it saves a ton of money. I have leveraged VEEAM and, in the past, Windows Backup (inbox) with great results. The benefit of these two is that you can back up to physical storage and leverage deduplication. Virtualized backup as a new usage type makes life easier for the supported “workaround” around the limitations of DPM, where normally they only support VDI with deduplication. What I’m really curious about is another possible future usage type: “Virtual Servers” … I guess for that one deduplication support for the OS disk would be very beneficial for “cloud” providers. We’ll see.

Production Checkpoints in Windows Server 2016

We’ve  had snapshots, or better checkpoints as we call them now for consistency amongst products, for a longest time in Hyper-V. I have always liked and used them to my benefit. That’s what they are intended for. But you have to use them correctly, in a supported and smart manner. Some (or perhaps not an insignificant number of) people did not read the manual and/or do not test their assumptions before trying something in production. Some times that leads to a lesson, sometimes it leads to tears.

We now have the choice between two types of checkpoints: Production Checkpoints and Standard Checkpoints.

With a standard virtual machine checkpoint, all the memory state of running applications gets stored, and when you apply the checkpoint it’s magically back. Doing this to a production SQL or Exchange Server, for example, causes (huge) problems. With some applications these problems are minor or transient, but it’s not a healthy, consistent state to be in, and recovery has to happen. That could happen automatically or require disaster recovery, depending on the situation at hand.

Production checkpoints are made in an application consistent manner. For this they leverage the Volume Shadow Copy Service (or File System Freeze on Linux), which puts the virtual machine into a safe state to create a checkpoint that can be restored like a VSS based, application consistent backup or SAN snapshot. This does mean that applying a production checkpoint requires the restored virtual machine to boot from an offline state, just like with a restored backup.

The choice of checkpoint type can be made on a per virtual machine basis, which makes it flexible: you can pick the best option for a particular virtual machine for a specific purpose. As you might have guessed, that still requires some insight, reading the manual and testing your assumptions. But you can now have the behavior you want and that way too many assumed they already had.


We also have the option of allowing or disallowing a standard checkpoint to be made when, for any reason, a production checkpoint cannot be made (the VSS snapshot in Windows or the file system freeze in Linux in the guest might not work or not be available). MSDN has a table of what type of checkpoint can be used when. I conclude that the chosen default is the best fitting one for most scenarios.


You also have the option of choosing standard checkpoints for a virtual machine. That gives you exactly the same behavior as all previous versions of Hyper-V.

I love the GUI for ad hoc work, but when I need to do this on dozens or hundreds of virtual machines, or potentially tens of thousands when running a larger private cloud, this is not the way to go. PowerShell is your long time trusted friend here!


With just the “-CheckpointType” parameter you control the checkpointing behavior. And as it is very easy to grab all virtual machines on a host or cluster, setting this for all or a selection of your virtual machines is easy and fast. Let’s set it to “ProductionOnly” and grab the setting for that VM via PowerShell.
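A minimal sketch of that, assuming a virtual machine with the hypothetical name DEMOVM:

    # Set the checkpoint type; "Production" still falls back to a standard
    # checkpoint when needed, "ProductionOnly" fails instead of falling back
    Set-VM -Name "DEMOVM" -CheckpointType ProductionOnly
    # Grab the setting to verify
    Get-VM -Name "DEMOVM" | Select-Object Name, CheckpointType
    # And the bulk version: set it for every VM on this host in one go
    Get-VM | Set-VM -CheckpointType ProductionOnly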


When you create a checkpoint of a virtual machine on a Windows Server 2016 Hyper-V host you’ll even get a nice notification by default (you can turn it off) that the production checkpoint used backup technology in the guest operating system.
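Taking the checkpoint itself is just as easy from PowerShell; a quick sketch, again with the hypothetical DEMOVM:

    # With CheckpointType set to ProductionOnly this creates an
    # application consistent production checkpoint
    Checkpoint-VM -Name "DEMOVM" -SnapshotName "Before maintenance"
    # List the checkpoints for the VM
    Get-VMSnapshot -VMName "DEMOVM"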


It’s also important to realize that this capability is the basis of the new checkpoint based way of making backups in Windows Server 2016 as well. But that’s a subject for another blog post. Thank you for reading!

RemoteFX and vGPU Improvements in Windows Server 2016 Hyper-V

UPDATE 2015/11/23: RemoteFX works in Windows Server 2016 TPv4 and I’m successfully running OpenGL in a server VM with a W2K16TPv4 guest and a W2K16TPv4 host!


Let’s take a look at some of the RemoteFX and vGPU Improvements in Windows Server 2016 Hyper-V. For me the abilities they are adding in this release are significant and a break through. Why? They are talking away many of the last show stoppers for a number of scenarios that are important to the ecosystems I roam around in, when the CxO have a clue that is.

What are we looking at that’s new for Windows Server 2016?

The things that are breaking down the biggest showstoppers are:

  • OpenGL & OpenCL API Support (FINALLY!)
  • 1GB dedicated VRAM
  • 4K Resolution
  • Server VM support (very important in our GIS environment actually)
  • Generation 2 VM support (YES!)
  • Improved performance
  • H.264/AVC codec investment

Now, I missed this initially, but it was announced at Microsoft Ignite 2015 that RemoteFX will support generation 2 virtual machines, which allows us to benefit from the future of virtual machines without losing RemoteFX. Until now generation 2 virtual machines were not compatible with RemoteFX. This was due to generation 2 virtual machines not having an emulated PCI bus, which RemoteFX needed up to Windows Server 2012 R2 and Windows 8.1.

Generation 2 support combined with server support in the virtual machine and OpenGL (up to 4.4) / OpenCL (up to 1.1) is a breakthrough; let’s hope the versions supported don’t spoil the party. I wonder if they can come up with a mechanism to upgrade support when newer OpenGL versions are released. Up to now, application compatibility was very limiting.

This is really great news and will make Hyper-V a far better candidate for many more scenarios than ever before.

Get your test rig set up

So it’s time to upgrade the lab server with a RemoteFX capable GPU to Windows Server 2016 TPv3 and test this.

I think some of our GIS engineers will be very happy with these new capabilities for ESRI ArcGIS, Adobe, AutoCAD, … and many more less well known specialty software packages they need.

If you want to test it out, here’s the Experience guide for Enabling OpenGL Support for vGPU in Server 2016.
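The basic setup can also be scripted. A sketch under some assumptions (an NVIDIA GPU in the host and a test VM named DEMOVM, both placeholders):

    # See which GPUs the host offers for RemoteFX and enable one
    Get-VMRemoteFXPhysicalVideoAdapter
    Get-VMRemoteFXPhysicalVideoAdapter -Name "*NVIDIA*" | Enable-VMRemoteFXPhysicalVideoAdapter
    # Add a RemoteFX 3D video adapter to the test VM (VM must be off)
    Add-VMRemoteFx3dVideoAdapter -VMName "DEMOVM"
    # Crank up the resolution; 4K (3840x2160) is new in Windows Server 2016
    Set-VMRemoteFx3dVideoAdapter -VMName "DEMOVM" -MaximumResolution "3840x2160" -MonitorCount 1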

So we set it all up, but unfortunately there is still an issue being worked out at the moment of writing.


But I will help you get started for when it’s fixed, which I hope will be soon! To me it looks like they “just forgot” to activate RemoteFX for server, as it looks a lot like a Windows Server 2012 R2 VM where one tries to add a RemoteFX card: it just doesn’t work. The same host with Windows 10 Enterprise does not have this issue …


 

So why not test with Windows 10? Well, the OpenGL/CL capabilities are server only. And those are important to us!

The Hitch Hikers Guide to Hyper-V Administration: Don’t Panic

Not all information you might see or that is presented to you is valid. You need to check; that’s the prime reason we have the “trust but verify” mantra in IT. If you don’t, you might start troubleshooting a ghost issue. An example of this is GUI issues, such as when you leave the Hyper-V Manager GUI open for way too long and the information goes stale in the cache.

A stale screen like that is what caused some diligent admins to start troubleshooting a non-existent problem. They figured that the VMs were left in a locked state due to backups failing. But hey, all backups had run and succeeded?! So they searched and found KB article 2964439 Hyper-V virtual machine backup leaves the VM in a locked state. When they wanted to install the hotfix it failed, stating it was not applicable to their system.

At that moment they considered killing the VMMS.exe service and/or failing over the nodes. While preparing for that they’d logged in to all nodes, only to see the issue was not present there. That made ’em think and step back for a while.


In this case it’s just a quirk of Hyper-V Manager when it’s left open way too long. Right-clicking the host and refreshing, or closing the GUI and reopening it, is all that’s needed to see the real information.
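Better still, verify from PowerShell, which queries the hosts directly instead of a possibly stale console; a one-line sketch with placeholder host names:

    # Pull the live VM state straight from the cluster nodes
    Get-VM -ComputerName "HOST1","HOST2" | Select-Object ComputerName, Name, State, Status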

So slow down before you start troubleshooting and recovering from a “ghost” problem. It may cause real issues. The lesson here is that you should not go into “Action Jackson” mode. You can move swiftly and efficiently, but the ability to execute is not just about speed; it’s about doing what’s needed, how and when it’s needed. Here ends the lesson.