vNIC Speed in guests on Windows Server 2016 Hyper-V

Prior to Windows Server 2016 Hyper-V, the speed a vNIC reported was an arbitrary fixed value. In Windows Server 2012 R2 Hyper-V that was 10Gbps.

This is a screenshot of a Windows Server 2012 R2 VM attached to a vSwitch on an LBFO team of 2*1Gbps running on Windows Server 2012 R2 Hyper-V.

image

This is a screenshot of a Windows Server 2016 VM attached to a vSwitch on an LBFO team of 2*1Gbps, also running on Windows Server 2012 R2 Hyper-V.

image

As you can see, the fixed speed of 10Gbps meant that even when the vSwitch was attached to an LBFO team of two 1Gbps NICs it would still report 10Gbps. Obviously you would not achieve that throughput unless the two VMs are on the same host, where traffic bypasses the physical NIC and its limitations don't come into play. Do note that the version of Windows in the guest doesn't matter here, as demonstrated above.

The reported speed behavior has changed in Windows Server 2016 Hyper-V. You’ll get a more realistic view of the network capabilities of the VM in some scenarios and configurations.

Here’s a screenshot of a VM attached to a vSwitch on a 1Gbps NIC.

image

As you can see it reports the speed the vSwitch can achieve: 1Gbps. Now let's look at a VM whose vNIC is attached to an LBFO team of two 10Gbps NICs.

image

This NIC reports 20Gbps inside of the VM, so that’s 2 * 10Gbps.

You get the idea: the vNIC reports the aggregated maximum bandwidth of the NICs used for the vSwitch. If we had four 10Gbps NICs in the LBFO team used for the vSwitch we would see 40Gbps.
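
A quick way to verify this is to compare what the guest reports with the team configuration on the host. A minimal PowerShell sketch; names and output will vary with your environment:

# Inside the guest: the link speed the vNIC reports.
Get-NetAdapter | Select-Object Name, InterfaceDescription, LinkSpeed

# On the Hyper-V host: the LBFO team and the member NICs whose speeds get aggregated.
Get-NetLbfoTeam | Select-Object Name, TeamingMode, LoadBalancingAlgorithm
Get-NetLbfoTeamMember | Select-Object Name, Team, TransmitLinkSpeed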

You do have to realize some things:

  • Whether a VM has access to the entire aggregated bandwidth depends on the teaming mode. Just consider switch independent teaming versus LACP teaming modes.
  • The reported bandwidth has no knowledge of any type of QoS, whether hardware based or virtual via Hyper-V itself.
  • The achievable bandwidth also depends on the capabilities of the other components involved (CPU, PCIe, VMQ, uplink speed, potentially disk speed etc.).
  • Traffic within a host bypasses the physical NIC and as such isn’t constrained by the NIC capabilities itself.
  • As before, the BIOS power configuration has an impact on the speed of your 10Gbps or higher NICs (see the quick check below).
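
For that last point you can at least verify the active Windows power plan from an elevated prompt; the BIOS settings themselves you have to check in the firmware:

# Shows the active power plan; High Performance is what you want on Hyper-V hosts.
powercfg /getactivescheme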

Happy New Year & Microsoft MVP 2017 Renewal

Happy new year to all of you. May you and your loved ones enjoy good health, happiness, prosperity and interesting work and studies in 2017.

image


While I enjoy the time off around New Year and appreciate the comfort of a soapstone wood stove on a cold winter evening, I also enjoy IT work. Luckily so, as I cannot retire yet to enjoy road trips for sightseeing and hiking for the remainder of my time on this planet.

Designing, building, deploying, supporting & troubleshooting highly available on premises, hybrid and cloud infrastructure is what I love to do. Today that means an ever more software defined approach. That doesn’t mean you have to work at Amazon, Google or Microsoft. It means you have to investigate how PowerShell, DSC, JSON & Azure Automation can help you achieve your goals. It also doesn’t mean you no longer have to understand clustering, networking, storage or virtualization. Trust me on that.

This afternoon I also received my renewal e-mail as a Microsoft MVP in the Cloud and Datacenter Management expertise. This is my sixth award and I’m as happy, honored and proud to be one as ever.

image

2017 will be filled with many Windows Server 2016 projects on top of our already strong start in 2016 after it became generally available. These projects will be tied to some new cloud efforts in Azure, efforts that go beyond IaaS alone.

The IT world evolves and moves fast but technology doesn’t disappear overnight. Keeping things tied together, moving forward to the new, leveraging new capabilities, enabling new opportunities and staying up to date takes a serious effort. Sharing what we learn with the global community is what the MVP program recognizes and stimulates. We all learn together and advance by sharing experiences and knowledge. We also help each other out, and this year I’ve seen and participated in a number of cases where community members and fellow MVPs came together when needed to solve some serious problems.

Migrate a Windows Server 2012 R2 AD FS farm to a Windows Server 2016 AD FS farm

Introduction

I recently migrated a Windows Server 2012 R2 AD FS farm to a Windows Server 2016 AD FS farm. For this exercise the people in charge wanted to maintain the server names and IP addresses, so there was no need for changes in the Kemp Technologies load balancer.

Farm Behavior Level Feature

In Windows Server 2016 AD FS we now have the Farm Behavior Level (FBL) feature. It determines the features that the AD FS farm can use. You can think of a Windows Server 2012 R2 AD FS farm as being at the Windows Server 2012 R2 FBL.

The FBL feature and mixed mode spare us the “trick” many organizations had to use to upgrade an AD FS farm to Windows Server 2012 R2: setting up a new farm and exporting / importing the configuration. Organizations looking to upgrade to Windows Server 2016 do not have to deploy an entirely new farm and export and import configuration data. Instead, they can add Windows Server 2016 nodes to an existing farm while it is online and only incur the relatively brief downtime involved in the FBL raise.

We can add a secondary Windows Server 2016 AD FS server to a Windows Server 2012 R2 farm. The farm then runs in “mixed mode”, so to speak, and continues operating at the Windows Server 2012 R2 FBL. There is no need to move all the nodes to the same version immediately.

As long as you are in mixed mode you don’t get the benefit of the new capabilities and features in Windows Server 2016 AD FS; these are simply not available yet.

When all Windows Server 2012 R2 nodes have been removed from the farm and all nodes are Windows Server 2016 you can raise the FBL. This results in the new Windows Server 2016 AD FS features being enabled and ready for configuration and use.

Notes on the Migration Path

WARNING

You cannot do an in-place upgrade of a Windows Server 2012 R2 AD FS farm node to Windows Server 2016. You will need to remove it from the farm and replace it with a new Windows Server 2016 AD FS node.

Note

In this migration we are preserving the node names and IP addresses. This means the load balancer needed no configuration changes. So in that respect this process is different from what is normally recommended.

This is a WID based deployment example. You can do the same for an SQL based deployment.

The FBL approach is only valid for a migration from Windows Server 2012 R2 AD FS to Windows Server 2016 AD FS. A migration from AD FS 2.0 or 2.1 (Windows Server 2008 R2 or Windows Server 2012) requires the use of Export-FederationConfiguration and Import-FederationConfiguration as before.

Also see https://technet.microsoft.com/en-us/windows-server-docs/identity/ad-fs/overview/ad-fs-2016-requirements

Step by Step

  • Start with the secondary nodes. For each of them make sure you have the server name and the IP configuration.
  • Make sure you have the Service Communications SSL certificate for your AD FS farm and the domain or managed service account name and password.
  • Make sure you have an AD FS configuration backup and a backup or an export of the VMs (a cool thing about VMs) for rapid recovery if needed. A sketch of this preparation follows below.
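
A minimal sketch of that preparation in PowerShell, with hypothetical paths and VM names:

# Inside the guest: record the server name and IP configuration.
New-Item -ItemType Directory -Path C:\ADFSMigration -Force | Out-Null
$env:COMPUTERNAME | Out-File C:\ADFSMigration\servername.txt
Get-NetIPConfiguration -Detailed | Out-File C:\ADFSMigration\ipconfig.txt

# On the Hyper-V host: export the VM for rapid recovery if needed.
Export-VM -Name ADFS2 -Path D:\Exports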

  • Remove the AD FS role via Server Manager

image

  • Shut down the VM.
  • Edit the settings and remove the OS VHDX. Delete the file (you have a backup/export).
  • Copy your completely patched and sysprepped Windows Server 2016 OS VHDX to the location for this VM. Rename that VHDX to something sensible like adfs2disk01.vhdx.
  • Edit the settings and add the new sysprepped OS VHDX to the VM. Make sure that the disk is first in the boot order.
  • Start the VM.
  • Go through the mini setup wizard and log in.
  • Configure the NIC with the same settings as the old AD FS server.
  • Rename the server to the original server name and join the domain.
  • Restart the VM.
  • Log in to the VM and install the AD FS role using Add Roles and Features in Server Manager (these guest-side steps can also be scripted, see the sketch after the screenshot).

image
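
For reference, a sketch of those guest-side steps, run inside the new VM with placeholder names and addresses:

# Recreate the old node's IP configuration (placeholder values).
New-NetIPAddress -InterfaceAlias 'Ethernet' -IPAddress 192.168.2.21 -PrefixLength 24 -DefaultGateway 192.168.2.1
Set-DnsClientServerAddress -InterfaceAlias 'Ethernet' -ServerAddresses 192.168.2.10
# Take over the original server name and join the domain; reboots when done.
Add-Computer -DomainName 'yourdomain.local' -NewName 'ADFS2' -Restart
# After the reboot, install the AD FS role.
Install-WindowsFeature ADFS-Federation -IncludeManagementTools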

  • When done configure ADFS

image

  • Select to add the node to an existing federation farm

image

  • Make sure you have an account with AD admin permissions

image

  • Tell the node which server is the primary federation server

image

  • Import your certificate

image

  • Specify the ADFS Service account and its password

image

  • You’re ready to go on

image

  • If any prerequisite checks fail you’ll be notified. We’re good to go!

image

  • Let the wizard complete all its steps

image

  • When the configuration is done you need to restart the VM to complete adding the node to the ADFS farm.

image

  • Restart your VM and log back in. When you open up the AD FS console you’ll see that this new Windows Server 2016 node is a secondary node in your AD FS farm (a scripted alternative follows below).

image
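
If you prefer PowerShell over the wizard, joining an existing WID-based farm can be scripted with Add-AdfsFarmNode. A minimal sketch; the thumbprint and primary node name are placeholders for your own values:

# Placeholder values: use your own service communications certificate thumbprint.
$svcCred = Get-Credential -Message 'AD FS service account'
Add-AdfsFarmNode -CertificateThumbprint '<SSL certificate thumbprint>' -ServiceAccountCredential $svcCred -PrimaryComputerName 'ADFS1'
Restart-Computer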

  • Note that from a load balancer perspective nothing has changed. It just saw the node go down and come back up a few times, if it was paying attention at all.
  • Now repeat the entire process for all your secondary AD FS farm nodes. When done, we’ll swap the primary role to one of the secondary nodes. This is needed so you can repeat the process for the last remaining node in the farm, which at that point needs to be a secondary node. In our two-node farm example we swap the roles between ADFS1 and ADFS2, as shown in the sketch below.

image
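
The role swap itself is done with Set-AdfsSyncProperties on both nodes. A sketch for our two-node example:

# On ADFS2: promote this node to primary.
Set-AdfsSyncProperties -Role PrimaryComputer

# On ADFS1 (and any other secondaries): point them at the new primary.
Set-AdfsSyncProperties -Role SecondaryComputer -PrimaryComputerName 'ADFS2'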

  • Verify that ADFS2 is the primary node and, if so, repeat the migration process for the last remaining node (ADFS1 in our case).
  • Once that’s completed we swap them back to have exactly the same situation as before the migration.

image

  • On the primary node run Get-AdfsFarmInformation (a new cmdlet in Windows Server 2016).

image

  • You’ll see that the current farm behavior level is 1 and our two nodes (both Windows Server 2016) are listed. Note that any nodes still on Windows Server 2012 R2 would not be shown.

WARNING: to raise the FBL to Windows Server 2016 your AD schema needs to be upgraded to the Windows Server 2016 version (schema version 85 or higher). This is also the case for new AD FS farm installations, which will be at the latest FBL by default. My environment is already 100% on Windows Server 2016 Active Directory, so I’m good to go. If yours is not, don’t forget to upgrade your schema. You don’t need to upgrade your DCs unless you want to leverage Microsoft Passport authentication; then you need at least one Windows Server 2016 domain controller. See https://technet.microsoft.com/en-us/windows-server-docs/identity/ad-fs/overview/ad-fs-2016-requirements
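
You can check the current schema version from PowerShell (requires the ActiveDirectory module). A quick sketch:

# The objectVersion of the schema naming context holds the schema version;
# 85 or higher is required to raise the FBL.
Import-Module ActiveDirectory
$schemaNC = (Get-ADRootDSE).schemaNamingContext
(Get-ADObject $schemaNC -Properties objectVersion).objectVersion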

  • As we know all our nodes are on Windows Server 2016, we can raise the Farm Behavior Level (FBL) by running Invoke-AdfsFarmBehaviorLevelRaise.

image

  • Just let it run; it has some work to do, including creating a new database.

image

  • It will tell you when it’s done and point out changes in the configuration.

image

  • Now run Get-AdfsFarmInformation again

image

  • Note that the current farm behavior level is 3 and our two nodes (both Windows Server 2016) are listed. Note that if any nodes had still been on Windows Server 2012 R2 they would have been kicked from the farm and should be removed from the load balancer.

image

PS: with some creativity and a look at my blog post at https://blog.workinghardinit.work/2016/11/28/easily-migrating-non-ad-integrated-dns-servers-while-preserving-server-names-and-ip-addresses/ you can easily figure out how to add some extra steps to move to generation 2 VMs while you’re at it, if you don’t use those yet.

Unscheduled Maintenance for WorkingHardInIT Blog

If you tried to visit my blog on December 15th you might have found it to be unreachable or that it timed out. My blog is attracting quite a bit of traffic and that had been beginning to show for a while now, mainly in CPU cycles and memory consumption. To be honest, the sizing was “tight” for budget reasons.

Well, the site finally caved in under a load that became too much for the Azure virtual machine it’s running on. I did investigate other possible causes, but in the end it just needed more CPU and memory. So that’s what was behind the spotty behavior and suboptimal responsiveness yesterday, which we kindly refer to as unscheduled maintenance.

Anyway, this meant I had to move the VM to a bigger size and pay more. I hope this will do for a while; I’ll keep an eye on the bill to see whether the costs are sustainable. Blogging is “just” a community effort, in the end.