Import of RD Gateway configuration file with policies referencing local resources wipes all policies clean!

Introduction

When you have a Windows Server 2016 RD Gateway server and you expect to be able to import a configuration XML file, you might find yourself in a pickle when you are also using local resources. That’s because the import of an RD Gateway configuration file with policies referencing local resources wipes all policies clean! By local resources I mean local user accounts and groups. These are leveraged more than I imagined at first.

When does it happen?

In the past I have blogged about migrating RD Gateway servers that contain policies referencing local resources here: Fixing Event ID 2002 “The policy and configuration settings could not be imported to the RD Gateway server “%1” because they are associated with local computer groups on another RD Gateway server”.

We used to be able to use the trick of making sure the local resources exist on the new server (either by recreating them there via the server migration wizard or manually) and changing the server name in the exported configuration XML file to successfully import the configuration. That no longer works. You get an error.

As far as migrations from older versions go, they work fine as long as you don’t have policies with local resources. Otherwise you’d better do an in-place upgrade or recreate the resources & policies on the new servers. The method described in my blog no longer works. That’s too bad. But it gets worse.

Import of RD Gateway configuration file with policies referencing local resources wipes all policies clean!

As said, it doesn’t end there. The issue occurs even when you try to import the configuration onto the same server you exported it from. That’s really bad, as an export is a quick way to protect against any mistakes you might make and get back to the original configuration.

What’s even worse, when the import fails it wipes ALL the policies on the RD Gateway server => dangerous! So yes, the import of an RD Gateway configuration file with policies referencing local resources wipes all policies clean!

Precautions

Only a backup or a checkpoint can save you then (or you can recreate them all manually)! Again, this only happens when the exported configuration file references local resources! The fastest way to clean out an RD Gateway configuration on Windows Server 2016 is actually importing a configuration export which contains a policy referring to local resources. Ouch! I’m not aware of a fix to this date.

For now your only protection is a checkpoint or a backup. Depending on where and how you source your virtual machines you might not have access to a checkpoint.
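If your RD Gateway runs as a Hyper-V virtual machine, taking a checkpoint right before the import is the quickest insurance. A minimal sketch, assuming the gateway is a VM called RDGW01 on a host called HV01 (both names are placeholders):

  # Take a checkpoint of the RD Gateway VM before touching the configuration.
  Checkpoint-VM -ComputerName HV01 -Name RDGW01 -SnapshotName "Before RD Gateway config import"

  # If the import wipes your policies, roll back to that checkpoint.
  Get-VMSnapshot -ComputerName HV01 -VMName RDGW01 |
      Where-Object Name -eq "Before RD Gateway config import" |
      Restore-VMSnapshot -Confirm:$false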

You have been warned, be careful.

vNIC Speed in guests on Windows Server 2016 Hyper-V

Prior to Windows Server 2016 Hyper-V the speed a vNIC reported was an arbitrary fixed value. In Windows Server 2012 R2 Hyper-V that was 10Gbps.

This is a screenshot of a Windows Server 2012 R2 VM attached to a vSwitch on an LBFO of 2*1Gbps running on Windows Server 2012 R2 Hyper-V.

image

This is a screenshot of a Windows Server 2016 VM attached to a vSwitch on an LBFO of 2*10Gbps running on Windows Server 2012 R2 Hyper-V.

image

As you can see, the fixed speed of 10Gbps meant that even when the vSwitch was attached to an LBFO with two 1Gbps NICs it would still show 10Gbps. Obviously you would not actually get that throughput unless the two VMs are on the same host, where the limitations of the physical NIC don’t come into play as it is bypassed. Do note that the version of Windows in the guest doesn’t matter here, as demonstrated above.

The reported speed behavior has changed in Windows Server 2016 Hyper-V. You’ll get a more realistic view of the network capabilities of the VM in some scenarios and configurations.

Here’s a screenshot of a VM attached to a vSwitch on a 1Gbps NIC.

image

As you can see it reports the speed the vSwitch can achieve, 1Gbps. Now let’s look at a VM whose vNIC is attached to an LBFO of two 10Gbps NICs.

image

This NIC reports 20Gbps inside of the VM, so that’s 2 * 10Gbps.

You get the idea. The vNIC reports the aggregated maximum bandwidth of the NICs used for the vSwitch. If we had four 10Gbps NICs in the LBFO used for the vSwitch, we would see 40Gbps.

You do have to realize some things:

  • Whether a VM has access to the entire aggregated bandwidth depends on the teaming model. Just consider switch-independent teaming versus LACP teaming modes.
  • The reported bandwidth has no knowledge of any type of QoS, whether hardware based or virtual via Hyper-V itself.
  • The achievable bandwidth also depends on the capabilities of the other components involved (CPU, PCIe, VMQ, uplink speed, potentially disk speed etc.).
  • Traffic within a host bypasses the physical NIC and as such isn’t constrained by the capabilities of the NIC itself.
  • As before, the BIOS power configuration has an impact on the speed of your 10Gbps or higher NICs.
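You can check both ends of this quickly with PowerShell. A small sketch, assuming an LBFO-based vSwitch like in the examples above:

  # Inside the guest: the LinkSpeed column is the value the screenshots above show.
  Get-NetAdapter | Select-Object Name, InterfaceDescription, LinkSpeed

  # On the Hyper-V host: the LBFO team behind the vSwitch and its teaming mode,
  # which determines how much of the aggregate a single flow can really use.
  Get-NetLbfoTeam | Select-Object Name, TeamingMode, LoadBalancingAlgorithm, Members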

Migrate a Windows Server 2012 R2 AD FS farm to a Windows Server 2016 AD FS farm

Introduction

I recently went through the effort to migrate a Windows Server 2012 R2 AD FS farm to a Windows Server 2016 AD FS farm. For this exercise the people in charge wanted to maintain the server names and IP addresses. By doing so there is no need for changes in the Kemp Technologies load balancer.

Farm Behavior Level Feature

In Windows Server 2016 AD FS we now have a thing called the Farm Behavior Level (FBL) feature. It determines the features that the AD FS farm can use. Logically, the FBL of a Windows Server 2012 R2 AD FS farm is the Windows Server 2012 R2 FBL.

The FBL feature and mixed mode make the “trick” many used to upgrade an AD FS farm possible without the hassle of setting up a new farm and exporting / importing the configuration. Organizations looking to upgrade to Windows Server 2016 will not have to deploy an entirely new farm and export and import configuration data. Instead, they can add Windows Server 2016 nodes to an existing farm while it is online and only incur the relatively brief downtime involved in the FBL raise.

We can add a secondary Windows Server 2016 AD FS server to a Windows Server 2012 R2 farm. The farm will continue operating at the Windows Server 2012 R2 FBL. This is “mixed mode” so to speak. There is no need to move all the nodes to the same version immediately.

As long as you are in mixed mode you don’t get the benefits of the new capabilities and features in Windows Server 2016 AD FS. These are simply not available while the farm operates at the Windows Server 2012 R2 farm behavior level.

When all Windows Server 2012 R2 nodes have been removed from the farm and all nodes are Windows Server 2016 you can raise the FBL. This results in the new Windows Server 2016 AD FS features being enabled and ready for configuration and use.

The Migration Path Notes

WARNING

You cannot in-place upgrade a Windows Server 2012 R2 AD FS farm node to Windows Server 2016. You will need to remove it from the farm and replace it with a new Windows Server 2016 AD FS node.

Note

In this migration we are preserving the node names and IP addresses. This means the load balancer needed no configuration changes. So in that respect this process is different from what is normally recommended.

This is a WID based deployment example. You can do the same for an SQL based deployment.

The FBL approach is only valid for a migration from Windows Server 2012 R2 AD FS to Windows Server 2016 AD FS. A migration from AD FS 2.0 or 2.1 (Windows Server 2008 R2 or Windows Server 2012) requires the use of Export-FederationConfiguration and Import-FederationConfiguration as before.

Also see https://technet.microsoft.com/en-us/windows-server-docs/identity/ad-fs/overview/ad-fs-2016-requirements

Step by Step

  • Start with the secondary nodes. For each of them make sure you have the server name and the IP configuration.
  • Make sure you have the Service Communications SSL cert for your AD FS and the domain or managed service account name and password.
  • Make sure you have an ADFS configuration backup and also that you have a backup or an export (cool thing about VMs) of the VMs for rapid recovery if needed.
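A small PowerShell sketch of that information gathering; the certificate thumbprint, file path and password prompt are placeholders for your own values:

  # Record the node name and IP configuration so you can recreate them identically.
  $env:COMPUTERNAME
  Get-NetIPConfiguration | Select-Object InterfaceAlias, IPv4Address, IPv4DefaultGateway, DNSServer

  # Export the Service Communications certificate including its private key.
  $pfxPwd = Read-Host -AsSecureString -Prompt "PFX password"
  Get-ChildItem Cert:\LocalMachine\My\<thumbprint> |
      Export-PfxCertificate -FilePath C:\Backup\adfs-ssl.pfx -Password $pfxPwd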

  • Remove the ADFS role via Server Manager

image

  • Shut down the VM
  • Edit the settings and remove the OS VHDX. Delete the file (you have a backup/export)
  • Copy your completely patched and sysprepped Windows Server 2016 OS VHDX to the location for this VM. Rename that VHDX to something sensible like adfs2disk01.vhdx.
  • Edit the settings and add the new sysprepped OS VHDX to the VM. Make sure that the disk is first in the boot order.
  • Start the VM
  • Go through the mini setup wizard and log in.
  • Configure the NIC with the same settings as the old ADFS node
  • Rename the VM to the original VM name and join the domain.
  • Restart the VM
  • Log in to the VM and install ADFS using Add Roles and Features in Server Manager (a scripted alternative is sketched below)

image
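If you prefer to script the in-guest part of these steps, here’s a hedged sketch. All names, IP addresses, domain and credentials are placeholders for your environment:

  # On the old node, before shutting it down: remove the ADFS role.
  Uninstall-WindowsFeature ADFS-Federation

  # On the rebuilt guest: recreate the old node's static IP configuration.
  New-NetIPAddress -InterfaceAlias "Ethernet" -IPAddress 192.168.1.21 -PrefixLength 24 -DefaultGateway 192.168.1.1
  Set-DnsClientServerAddress -InterfaceAlias "Ethernet" -ServerAddresses 192.168.1.10, 192.168.1.11

  # Give it the old node's name, join the domain, then install the ADFS role.
  Rename-Computer -NewName ADFS2 -Restart
  Add-Computer -DomainName corp.contoso.com -Credential (Get-Credential) -Restart
  Install-WindowsFeature ADFS-Federation -IncludeManagementTools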

  • When done, configure ADFS

image

  • Select to add the node to an existing federation farm

image

  • Make sure you have an account with AD admin permissions

image

  • Tell the node which server is the primary federation server

image

  • Import your certificate

image

  • Specify the ADFS Service account and its password

image

  • You’re ready to go on

image

  • If any prerequisites aren’t met you’ll be notified; here we’re good to go!

image

  • Let the wizard complete all its steps

image

  • When the configuration is done you need to restart the VM to complete adding the node to the ADFS farm.

image

  • Restart your VM and log back in. When you open up ADFS you’ll see that this new Windows Server 2016 node is a secondary node in your ADFS farm.

image
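The same join can be scripted instead of using the wizard. A sketch for a WID-based farm; the thumbprint, primary server name and service account are placeholders:

  # Join this freshly installed node to the existing farm.
  $svc = Get-Credential -Message "ADFS service account"
  Add-AdfsFarmNode -CertificateThumbprint "<service-communications-cert-thumbprint>" `
                   -PrimaryComputerName "ADFS1" `
                   -ServiceAccountCredential $svc
  # The node needs the same reboot the wizard asks for to finish joining the farm.
  Restart-Computer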

  • Note that from a load balancer perspective nothing has changed. It just saw the node go down and come back up a few times, if it was paying attention at all.
  • Now repeat the entire process for all your secondary ADFS farm nodes. When done we’ll swap the primary role to one of the secondary nodes. This is needed so you can repeat the process for the last remaining node in the farm, which at that time needs to be a secondary node. In the example of our 2-node farm we swap the roles between ADFS1 and ADFS2.

image

  • Verify that ADFS2 is the primary node and if so, repeat the migration process for the last remaining node (ADFS1 in our case).
  • Once that’s been completed we swap them back to have exactly the same situation as before the migration.

image
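For a WID-based farm the role swap itself is two small PowerShell commands; a sketch using our example server names:

  # On ADFS2, which becomes the new primary node:
  Set-AdfsSyncProperties -Role PrimaryComputer

  # On ADFS1 (and any other secondary node), point it at the new primary:
  Set-AdfsSyncProperties -Role SecondaryComputer -PrimaryComputerName ADFS2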

  • On the primary node run Get-AdfsFarmInformation (a new cmdlet in Windows Server 2016).

image

  • You’ll see that our current farm behavior is 1 and our 2 nodes (all of them Windows Server 2016) are listed. Note that any nodes still on Windows Server 2012 R2 would not be shown.

WARNING: to raise the FBL to Windows Server 2016 your AD schema needs to be upgraded to at least the Windows Server 2016 version (schema version 85) or higher. This is also the case for new AD FS farm installations, which will be at the latest FBL by default. My environment is already 100% on Windows Server 2016 AD, so I’m good to go. If yours is not, don’t forget to upgrade your schema. You don’t need to upgrade your DCs unless you want to leverage Microsoft Passport authentication; then you need at least 1 Windows Server 2016 domain controller. See https://technet.microsoft.com/en-us/windows-server-docs/identity/ad-fs/overview/ad-fs-2016-requirements
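A quick way to verify the schema version (requires the ActiveDirectory PowerShell module); 85 corresponds to the Windows Server 2016 schema:

  # Read the objectVersion attribute of the schema naming context.
  Get-ADObject (Get-ADRootDSE).schemaNamingContext -Properties objectVersion |
      Select-Object objectVersion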

  • As we know all our nodes are on Windows Server 2016 we can raise our Farm Behavior Level (FBL) by running Invoke-AdfsFarmBehaviorLevelRaise

image

  • Just let it run; it has some work to do, including creating a new database.

image

  • It will tell you when it’s done and point out changes in the configuration.

image

  • Now run Get-AdfsFarmInformation again

image

  • Note that the current farm behavior is now 3 and our 2 nodes (all of them Windows Server 2016) are listed. If any nodes had still been on Windows Server 2012 R2 they would have been kicked from the farm and should be removed from the load balancer.

image

PS: with some creativity and by having a look at my blog post at https://blog.workinghardinit.work/2016/11/28/easily-migrating-non-ad-integrated-dns-servers-while-preserving-server-names-and-ip-addresses/ you can easily figure out how to add some extra steps to move to generation 2 VMs while you’re at it, if you don’t use those yet.

Changes in RDP over UDP behavior in Windows 10 and Windows 2016

Introduction

With Windows Server 2012 and Windows 8 (and, with some updates, the RDP 8.0 client on Windows 7) we got support for RDP to use UDP for data transport. This gave us a great experience over less reliable and even rather bad networks.

Anecdote: I was in an area of the world where there was no internet access available bar a very bad and lousy Wi-Fi connection at the shop/cafeteria. That was just fine, I wasn’t there for the great Wi-Fi access at all. But I needed to check e-mail and that wasn’t succeeding in any way, the network reliability was just too bad. I got the job done by using RDP to connect to a workstation back home (across the ocean on another continent) and check my e-mail there. Not a super great experience but UDP made it possible where nothing else worked. I was impressed.

Changes in RDP over UDP behavior in Windows 10 and Windows 2016

When connecting to Windows Server 2016 or Windows 10 over an RD Gateway we see 1 HTTP and only 1 UDP connection being established for a session. We used to see 1 HTTP and 2 UDP connections per session with Windows 8/8.1 and Windows Server 2012 (R2).

It doesn’t matter if your client is running RDP 8.0 or RDP 10.0 or whether the RD Gateway itself is running Windows Server 2012 R2 or Windows Server 2016. The only thing that does matter is the target that you are connecting to.

Also, this has nothing to do with a firewall or the like acting up; we tested with and without one, with the same IPs etc. Let’s take a quick look at some examples and compare.

When connecting to Windows 10 or Windows Server 2016 we see that 1 UDP connection is established.

In total, there are 8 events logged for a successful connection over the RD Gateway.

clip_image002

You’ll find 2 event ID 302 events (1 for the HTTP connection and 1 for the UDP connection) as well as 1 event ID 205 event for the UDP proxy usage.

clip_image003

clip_image004

In the RD Gateway Manager monitoring we can see 1 HTTP and 1 UDP connection for one RDP session to a Windows Server 2016 host.

clip_image006

When connecting to Windows 8/8.1 or Windows Server 2012 (R2) we see that 2 UDP connections are established.

In total, there are 10 events logged for a successful connection over the RD Gateway:

clip_image008

You’ll find 3 event ID 302 events (1 for the HTTP connection and 2 for the UDP connections) as well as 2 event ID 205 events for the UDP proxy usage.

In the RD Gateway Manager monitoring we can see 1 HTTP and 2 UDP connections for one RDP session to a Windows Server 2012 R2 host.

clip_image010
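If you want to reproduce this comparison yourself, you can count the relevant events on the RD Gateway with PowerShell. A sketch; the five-minute window is just an example to isolate a single test session:

  # Count connection events (ID 302) and UDP proxy events (ID 205) in the RD Gateway log.
  Get-WinEvent -FilterHashtable @{
      LogName   = 'Microsoft-Windows-TerminalServices-Gateway/Operational'
      Id        = 302, 205
      StartTime = (Get-Date).AddMinutes(-5)
  } | Group-Object Id | Select-Object Name, Count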

So, RDP-wise something seems to have changed, but I do not know the full story behind it or why.