Hyper-V Amigos Showcast Episode 14: RemoteFX & DDA

Carsten and I dove into our labs and played around with RemoteFX and Discrete Device Assignment in Windows Server 2016 Hyper-V and RDS. This resulted in the Hyper-V Amigos Showcast Episode 14: RemoteFX & DDA.

Some background on RemoteFX & DDA

I’ve discussed the new capabilities in previous blog posts such as https://blog.workinghardinit.work/?s=DDA&submit=Search and RemoteFX and vGPU Improvements in Windows Server 2016 Hyper-V. But here the Hyper-V Amigos talk about it for your benefit and enjoyment. I for one know we had a ton of fun. Microsoft-only VDI solutions are really taking off, both on-premises and in Azure, in cost-conscious environments that still need good performance. I think we’ll see an uptake of such deployments as Microsoft has made some decisions and added some features to make this more feasible.

Hyper-V Amigos Showcast Episode 14: RemoteFX & DDA.

Click this link or the image below to watch Hyper-V Amigos Showcast Episode 14: RemoteFX & DDA

[Image: link to the Hyper-V Amigos Showcast Episode 14 video]

There’s a bit of a learning curve associated with using DDA in Windows Server 2016. You’ll have to get acquainted with how to do it and put it to the test in labs and POCs. Do this before you even start thinking about designing production-ready solutions. Having a good understanding of how it works and behaves is paramount to success.
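If you want a feel for the moving parts, here’s a minimal sketch of assigning a GPU to a VM with DDA via PowerShell on Windows Server 2016. The location path and VM name are lab examples, so substitute your own (you can find the location path in Device Manager or via Get-PnpDevice).

  # Example values from a lab; replace with your own GPU location path and VM name
  $locationPath = "PCIROOT(0)#PCI(0300)#PCI(0000)"
  $vmName = "RFX-VM01"
  # DDA requires the VM to turn off rather than save on host shutdown
  Set-VM -Name $vmName -AutomaticStopAction TurnOff
  # Recommended for GPUs: guest controlled cache types and enlarged MMIO space
  Set-VM -Name $vmName -GuestControlledCacheTypes $true -LowMemoryMappedIoSpace 3GB -HighMemoryMappedIoSpace 33280MB
  # Disable the device on the host first (Device Manager or Disable-PnpDevice), then dismount and assign it
  Dismount-VMHostAssignableDevice -Force -LocationPath $locationPath
  Add-VMAssignableDevice -LocationPath $locationPath -VMName $vmName

To hand the device back to the host you reverse the process with Remove-VMAssignableDevice and Mount-VMHostAssignableDevice.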

Enjoy!

Warning on Windows Server 2016 Deduplication Corruption

UPDATE 2 – 2017/02/06

DO NOT INSTALL KB3216755 if you don’t need it. A huge memory leak has been reported in association with this update. If you need it, I’d consider all my options.

UPDATE – GET KB3216755

As you can read in the comments, Microsoft reached out and confirms the issues are fixed as part of KB3216755 => https://support.microsoft.com/en-us/help/4011347/windows-10-update-kb3216755 . I commend them for responding so quickly and getting it sorted. Do note that at the time of writing this (late on January 30th CET) the Windows Server 2016 update isn’t in the Windows Catalog yet, only the Windows 10 ones. But Microsoft confirms you should install the update on their blog:

Windows Server 2016 Data Deduplication users: please install KB3216755!
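To quickly verify whether a host already has it, a one-liner will do (run on each Windows Server 2016 deduplication host):

  # Returns the hotfix entry if present; no output means the update still needs to be installed
  Get-HotFix -Id KB3216755 -ErrorAction SilentlyContinue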

The issue

Good morning. A quick blog post to give a heads-up to my readers who might not be subscribed to Anton Gostev’s (Veeam) “The Word From Gostev”. It concerns a warning on Windows Server 2016 deduplication corruption.

Warning on Windows Server 2016 Deduplication Corruption

There are multiple reports of data corruption with Windows Server 2016 deduplication. One is related to file sizes over 2TB. The other involves the loss of checksum values. Microsoft is aware of these issues and a fix is coming.

I quote Gostev:

I’ve already received the official confirmation from Microsoft that this is the known issue (ID 10165851) which is scheduled to be addressed in the next Windows Server 2016 servicing update. There are actually two separate issues, both leading to file corruption when using deduplication on very large files. One issue occurs when files grow to 2.2TB or larger, and another one causes loss of checksums for files with “smaller sizes” – this is the actual wording of the official note, so I have no idea how small.

What to do?

If you use Windows Server 2016 deduplication for backups, create new full backups regularly. Also make sure you do backup integrity testing and restore tests. Follow up on the update when it arrives.
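Deduplication itself can also check the integrity of the data store for you. A minimal sketch, assuming your deduplicated backup volume is E::

  # Run a full scrubbing job to detect and report corruption on the deduplicated volume
  Start-DedupJob -Volume "E:" -Type Scrubbing -Full
  # Follow the job and review the volume's health afterwards
  Get-DedupJob
  Get-DedupStatus -Volume "E:" | Format-List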

If you use it for production data, make sure you have frequent and validated backups! Design & operate under the mantra of “trust but verify”.

Also, we’ve heard reports and noticed ourselves that Windows Server 2016 deduplication resource configuration isn’t always respected. That is, it can consume all available resources despite the limits being set. We hope a fix for this is also under way.
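For reference, these are the limits I mean, the ones you can pass to a manual job; the percentages below are just example values:

  # In theory this caps the optimization job at 50% of memory and CPU cores
  Start-DedupJob -Volume "E:" -Type Optimization -Memory 50 -Cores 50 -Priority Normal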

Import of RD Gateway configuration file with policies referencing local resources wipes all policies clean!

Introduction

When you have a Windows Server 2016 RD Gateway server and you expect to be able to import a configuration XML file, you might find yourself in a pickle when you are also using local resources. Because the import of an RD Gateway configuration file with policies referencing local resources wipes all policies clean! By local resources I mean local user accounts and groups. These are leveraged more than I imagined at first.

When does it happen?

In the past I have blogged about migrating RD Gateway servers that contain policies referencing local resources here: Fixing Event ID 2002 “The policy and configuration settings could not be imported to the RD Gateway server “%1” because they are associated with local computer groups on another RD Gateway server”.

We used to be able to use the trick of making sure the local resources exist on the new server (either by recreating them there via the server migration wizard or manually) and changing the server name in the exported configuration XML file to successfully import the configuration. That no longer works. You get an error.
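For those who haven’t read that older post, the trick boiled down to something like this (the group name, file path and server names are examples), and again, on Windows Server 2016 this now fails:

  # Recreate the local group the policies reference on the new server
  New-LocalGroup -Name "RDGW Allowed Devices"
  # Swap the old server name for the new one in the exported configuration file
  (Get-Content "C:\Temp\RDGWConfig.xml") -replace "OLDRDGW", "NEWRDGW" | Set-Content "C:\Temp\RDGWConfig.xml"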

As far as migrations from older versions go, they work fine as long as you don’t have policies with local resources. Otherwise you’d better do an in-place upgrade or recreate the resources & policies on the new servers. The method described in my blog no longer works. That’s too bad. But it gets worse.

Import of RD Gateway configuration file with policies referencing local resources wipes all policies clean!

As said, it doesn’t end there. The issue occurs even when you try to import the configuration onto the same server you exported it from. That’s really bad, as an export is a quick way to protect against any mistakes you might make and allows you to get back to the original configuration.

What’s even worse, when the import fails it wipes ALL the policies on the RD Gateway server => dangerous! So yes, the import of an RD Gateway configuration file with policies referencing local resources wipes all policies clean!

Precautions

Only a backup or a checkpoint can save you then (or you’ll have to recreate them all manually)! Again, this is only when the exported configuration file references local resources! The fastest way to clean out an RD Gateway configuration on Windows Server 2016 is actually importing a configuration export which contains a policy referring to a local resource. Ouch! I’m not aware of a fix at the time of writing.

For now your only protection is a checkpoint or a backup. Depending on where and how you source your virtual machines you might not have access to a checkpoint.
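If your RD Gateway runs as a Hyper-V virtual machine, that checkpoint is a one-liner you should make a habit of before touching the configuration (the VM name is an example):

  # Take a checkpoint of the RD Gateway VM before importing any configuration
  Checkpoint-VM -Name "RDGW01" -SnapshotName "Before RD Gateway config import"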

You have been warned, be careful.

vNIC Speed in guests on Windows Server 2016 Hyper-V

Prior to Windows Server 2016 Hyper-V, the speed a vNIC reported was an arbitrary fixed value. In Windows Server 2012 R2 Hyper-V that was 10Gbps.

This is a screenshot of a Windows Server 2012 R2 VM attached to a vSwitch on an LBFO of 2*1Gbps running on Windows Server 2012 R2 Hyper-V.

[Screenshot: Windows Server 2012 R2 guest vNIC reporting 10Gbps]

This is a screenshot of a Windows Server 2016 VM attached to a vSwitch on an LBFO of 2*10Gbps running on Windows Server 2012 R2 Hyper-V.

[Screenshot: Windows Server 2016 guest vNIC also reporting 10Gbps on a Windows Server 2012 R2 host]

As you can see, the fixed speed of 10Gbps meant that even when the vSwitch was attached to an LBFO with two 1Gbps NICs, it would still show 10Gbps. Obviously you’d never achieve that throughput unless the two VMs are on the same host, where the physical NIC’s limitations don’t come into play because it is bypassed. Do note that the version of Windows in the guest doesn’t matter here, as demonstrated above.

The reported speed behavior has changed in Windows Server 2016 Hyper-V. You’ll get a more realistic view of the network capabilities of the VM in some scenarios and configurations.

Here’s a screenshot of a VM attached to a vSwitch on a 1Gbps NIC.

[Screenshot: guest vNIC reporting 1Gbps]

As you can see, it reports the speed the vSwitch can achieve: 1Gbps. Now let’s look at a VM whose vNIC is attached to an LBFO of two 10Gbps NICs.

[Screenshot: guest vNIC reporting 20Gbps]

This NIC reports 20Gbps inside of the VM, so that’s 2 * 10Gbps.

You get the idea. The vNIC reports the aggregated maximum bandwidth of the NICs used for the vSwitch. If we had four 10Gbps NICs in the LBFO used for the vSwitch, we’d see 40Gbps.
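You can verify this easily from inside the guest:

  # Run inside the VM: the vNIC reports the aggregated speed of the team under the vSwitch
  Get-NetAdapter | Select-Object Name, InterfaceDescription, LinkSpeed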

You do have to realize some things:

  • Whether a VM has access to the entire aggregated bandwidth depends on the model of the aggregation. Just consider switch-independent teaming versus LACP teaming modes (see the sketch after this list).
  • The reported bandwidth has no knowledge of any type of QoS, whether hardware based or virtual via Hyper-V itself.
  • The bandwidth also depends on the capabilities of the other components involved (CPU, PCIe, VMQ, uplink speed, potentially disk speed etc.)
  • Traffic within a host bypasses the physical NIC and as such isn’t constrained by the NIC’s capabilities.
  • As before the BIOS power configuration has an impact on the speed of your 10Gbps or higher NICs.
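As for the teaming mode mentioned in the list above, you can check how the LBFO team backing the vSwitch is configured on the host:

  # On the host: show the teaming mode, load balancing algorithm and member speeds
  Get-NetLbfoTeam | Select-Object Name, TeamingMode, LoadBalancingAlgorithm
  Get-NetLbfoTeamMember | Select-Object Name, Team, TransmitLinkSpeed

Regardless of what the vNIC reports, a single traffic flow will never exceed the speed of one physical team member.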