Find and Update Your KMS Service Host Key To Activate Windows 10

I have done a series of blog posts on preparing your KMS environment for Windows 10 activation. You should be set to go, but the final step you need to take is to update the KMS Service Host Key. That means that in a corporate environment you’ll need to get your KMS service updated so you can activate the Windows 10 clients.

This one has tripped up some people when it comes to finding it, so we’ll address that here as well.

Please note that only Windows Server 2012 (R2) or Windows 8, 8.1 or 10 can act as KMS Service Hosts.

Preparations

On the Volume Licensing Service Center you can not only get the bits but also the MAK and KMS keys. Normally you’d go directly to Downloads and Keys, filter down to what you’re looking for and find your KMS host key over there. That works for the Windows clients as before. But right now you cannot find an updated KMS service key for Windows Server; that will probably work again when Windows Server 2016 goes RTM.


But for now you need to get the key by using a bit of a different path than you’re used to. Go to Licenses and select Relationship Summary.


Navigate to the correct license ID and click it to open the details of your license. There you select Product Keys.


In the list of keys under that license you’ll find the KMS key you need under the product keys for the product Windows Srv 2012R2 DataCtr/Std KMS for Windows 10.


The Windows 10 KMS client keys are listed publicly by Microsoft. They are the keys the clients use to activate against a KMS server. If you have volume licensing media, that’s normally the key that comes in the box with the client. You can read more in my blog Windows 10 KMS Client Setup Keys. If you don’t want to or cannot use a KMS, you’ll need to use MAK keys on the clients. These are found on the Volume Licensing Service Center as well when you have a valid license.

You also need to have installed an update on your KMS service hosts. You can read all about that in my blog post KB3058168: Update that enables Windows 8.1 and Windows 8 KMS hosts to activate Windows 10. If you don’t install this update, registering a Windows 10 KMS key will throw an error:

0xc004f015: The Software Licensing Service reported that the license is not installed.
SL_E_PRODUCT_SKU_NOT_INSTALLED

So grab the hotfix if it isn’t installed via Windows Update, WSUS etc. and install it from an elevated command prompt. Just follow the instructions and you’ll be fine.
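As a quick sanity check you can verify whether the update is already present before you go hunting for the download. A minimal sketch in PowerShell; the .msu file name below is an assumption, use the name of the package you actually downloaded:

# Check whether KB3058168 is already installed on the KMS host
Get-HotFix -Id KB3058168
# If it is missing, install the downloaded package from an elevated prompt
# (the .msu file name is hypothetical)
wusa.exe .\Windows8.1-KB3058168-x64.msu /quiet /norestart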

Upgrading the KMS Service Host Key

It goes without saying that we’ll need to update the KMS service host key or Windows 10 clients will fail to activate with errors like the one below:

0xc004f042 – SL_E_VL_KEY_MANAGEMENT_SERVICE_ID_MISMATCH
The Software Licensing Service determined that the specified Key Management Service (KMS) cannot be used.

This is also described in KB 3086418: Error 0xC004F015 while activating Windows 10 Enterprise using a Windows Server 2012 R2 KMS host.

We take a look at our current situation by running slmgr.vbs /dlv, which shows us a W2K12R2 KMS service host that can activate all servers and clients up to Windows 8.1 / Windows Server 2012 R2.


Uninstall the current key; please use an elevated command prompt.
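For reference, a minimal sketch of that step; slmgr.vbs /upk uninstalls the currently installed product key:

# Remove the current KMS host key (run elevated)
slmgr.vbs /upk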


Now you can install the new Windows Srv 2012R2 DataCtr/Std KMS for Windows 10 key. If you run into any issues here, restarting the KMS service can help (“net stop sppsvc” and “net start sppsvc”). Try that first.

slmgr.vbs /ipk PIRAT-ESARE-NOTGE-TTING-AKEY!

Be patient, it’s not instantaneous.
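Once the key registers, you can activate the host and verify the result. A minimal sketch:

# Activate the newly installed KMS host key
slmgr.vbs /ato
# Display detailed license information to confirm the new key is in place
slmgr.vbs /dlv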


For all you wannabe pirates out there, that’s not a real key. As far as you are concerned, this is the Navy. If you’re looking for illegal keys, cracks, keygens, activators or dodgy KMS virtual machines and such, this is not the place!

See what’s up and running now by running slmgr.vbs /dlv again. As you can see, we’re in business to activate all our Windows Server 2012 R2 and Windows 10 machines as well as all lower versions down to Windows Server 2008 and Windows Vista.

So we’re ready to roll out Windows 10 now via MDT and have our KMS server activate them.
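If a client doesn’t find the KMS host via DNS auto-discovery, you can point it there manually. A sketch; the host name is hypothetical and 1688 is the default KMS port:

# On the Windows 10 client: register the KMS host manually (hypothetical name)
slmgr.vbs /skms kms01.yourdomain.local:1688
# Trigger activation
slmgr.vbs /ato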

I’m a Veeam Vanguard 2015

Veeam announced its Veeam Vanguard program last month while I was on vacation. I am honored to have been nominated as one of 31 professionals worldwide. Veeam states the following, which I consider to be a great compliment:

These individuals have been nominated as Veeam Vanguards for 2015. A Veeam Vanguard represents our brand to the highest level in many of the different technology communities in which we engage. These individuals are chosen for their acumen, engagement and style in their activities on and offline.


Rick Vanover is spearheading this program together with the Veeam Product Strategy Team, and the entire company is behind this initiative, as you can read here: What is the Veeam Vanguard Program?

Veeam now has a program like the VMware vExpert, Cisco Champion and Microsoft MVP programs. I’m honored to be nominated and I’m sure Veeam will execute this well, as I have had one very consistent experience with both Veeam employees and products: quality and dedication to deliver the best possible solutions for their customers. The fact that I’ve been nominated makes me feel appreciated by people whom I respect for their professionalism and skills. As I’m comfortable acting as the tip of the spear implementing technologies at the organizations I support, I kind of feel that being a Veeam Vanguard is a great fit.

I have shared insights, ideas and feedback with Veeam before and I’m sure we’ll get plenty of opportunities to do even more of that in the future.

High performance live migration done right means using SMB Direct

I saw people team two 10Gbps NICs for live migration and use TCP/IP. They leveraged LACP for this, as per my blog Teamed NIC Live Migrations Between Two Hosts In Windows Server 2012 Do Use All Members. That was a nice post but not a commercial to use it. It was to prove a point that LACP/static switch dependent teaming did allow for multiple VMs to be live migrated in the same direction between two nodes. But for speed, maximum throughput and low CPU usage, teaming is not the way to go. It is not needed, as you can achieve bandwidth aggregation and redundancy with SMB via Multichannel. This doesn’t require any LACP configuration at all and allows for switch independent aggregation and redundancy. Which is great, as it avoids stacking with switches that don’t do VLT, MLAG, …

Even when you team your NICs you’re better off using SMB. The bandwidth aggregation is often better. But again, you can have that without LACP NIC teaming, so why bother? Perhaps one reason: with LACP failover is faster, but that’s of no big concern with live migration.
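For completeness: in Windows Server 2012 R2 the live migration transport is a host setting, so switching to SMB is a one-liner. A minimal sketch in PowerShell:

# Select SMB as the live migration performance option on the host
Set-VMHost -VirtualMachineMigrationPerformanceOption SMB
# The alternatives are TCPIP and Compression; verify the setting
Get-VMHost | Select-Object VirtualMachineMigrationPerformanceOption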

We’ll do some simple examples to show you why these choices matter. We’ll also demonstrate the importance of an optimized RSS configuration. Do note that the configuration we use here is not a production environment; it’s just a demo to showcase results.

But there is yet another benefit to SMB: SMB Direct. That provides for maximum throughput, low latency and low CPU usage.

LACP NIC TEAM with 2*10Gbps with TCP

With the RSS settings at the inbox defaults we have problems reaching the best possible throughput (17Gbps). But that’s not all. Look at the CPU at the time of the live migration. As you can see it’s pretty taxing on the system at 22%.
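For reference, the LACP team used in this test scenario could be built like this. A sketch with hypothetical team and NIC names; your interface names will differ:

# LACP, switch dependent team with dynamic load balancing (hypothetical names)
New-NetLbfoTeam -Name "LMTeam" -TeamMembers "NIC1","NIC2" -TeamingMode Lacp -LoadBalancingAlgorithm Dynamic
# The corresponding physical switch ports must be configured for LACP as well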


If we optimize RSS with 8 RSS queues assigned to 8 physical cores per NIC on a different CPU (dual socket, 8 core system), we sometimes get lower CPU overhead at +/- 12%, but the throughput does not improve much and it’s not very consistent. It can get worse and look more like the above.
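That RSS tuning is done per NIC with Set-NetAdapterRss. A sketch under the assumption of a dual socket system with 8 physical cores per socket and hyperthreading disabled; the NIC names and processor numbers are illustrative:

# Pin NIC1's 8 RSS queues to the cores of the first CPU (hypothetical values)
Set-NetAdapterRss -Name "NIC1" -BaseProcessorNumber 0 -MaxProcessorNumber 7 -MaxProcessors 8
# Pin NIC2's queues to the cores of the second CPU
Set-NetAdapterRss -Name "NIC2" -BaseProcessorNumber 8 -MaxProcessorNumber 15 -MaxProcessors 8
# Verify the effective settings
Get-NetAdapterRss -Name "NIC1","NIC2"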


LACP NIC TEAM with 2*10Gbps with SMB (Multichannel)

With the default RSS settings we still have problems reaching the best possible throughput, but it’s better (19Gbps). CPU-wise it’s still pretty taxing on the system at 24%.


If we optimize RSS with 8 RSS queues assigned to 8 physical cores per NIC on a different CPU (dual socket, 8 core system), we get lower CPU overhead at +/- 8%, but the throughput actually declined (17.5Gbps). When we ran the test again we were back to the results we saw with the default RSS settings.


Is there any value in using SMB over TCP with LACP for live migration?

Yes, there is. Below you see two VMs live migrating over TCP with RSS optimized. One core per VM is used and the throughput isn’t great, is it? Depending on the speed of your CPU you get at best 4.5 to 5Gbps throughput per VM, as that single core per VM is the limiting factor. Hence we see about 9Gbps here, as there are 2 VMs, each leveraging 1 core.


Now look at only one VM, with RSS optimized, using SMB over an LACP NIC team. Even a single large memory VM leverages 8 cores and achieves 19Gbps.


What about Switch Independent Teaming?

Ah well, that consumes a lot fewer CPU cycles, but it comes at the price of speed. There is less CPU overhead to deal with compared to LACP, but a switch independent team can only receive on one team member. The good news is that even a single VM can achieve 10Gbps (better than LACP) at lower CPU overhead. With SMB you get better CPU distribution, but as that one member is a bottleneck, it is not faster. But why bother when we have better options!? Read on!

No Teaming – 2*10Gbps with SMB Multichannel, RSS Optimized

We are reaching very good throughput (20Gbps) with 8 RSS queues assigned to 8 physical cores per NIC. The CPU usage at the time of the live migration is pretty good at 6%-7%.
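You can check that SMB Multichannel is actually spreading the traffic across both NICs while a migration is running. A minimal sketch:

# On the source host, during a live migration, list the SMB connections in use
Get-SmbMultichannelConnection
# Both 10Gbps interfaces should show up, with RSS capable reading True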


Important: This is what you want to use if you don’t have 10Gbps but you do have 4 * 1Gbps NICs for live migration. You can test with compression and LACP teaming if you want to or can, to see if you get better results. Your mileage may vary. If you have only one 1Gbps NIC, compression is your sole savior.

2*10Gbps with SMB Direct

We’re using perfmon here to see the used bandwidth as RDMA traffic does not show up in Task Manager.
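To confirm RDMA is actually in play you can check both the adapters and SMB’s view of them. A sketch:

# Check that RDMA is enabled and operational on the adapters
Get-NetAdapterRdma
# SMB's view: RDMA Capable should read True for the live migration NICs
Get-SmbClientNetworkInterface
# In perfmon, the "RDMA Activity" counter set shows the actual RDMA traffic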


We have no problems reaching the best possible throughput (20Gbps, line speed). But now look at the CPU during the live migration. How do you like them numbers?

Do not buy non RDMA capable NICs or Switches without DCB support!

These are real numbers; the only thing is that the type and quality of the NICs, firmware and drivers used also play a role and can skew the results a bit. The onboard, run-of-the-mill LOM NICs aren’t always the best choice. Do note that configuration matters, as you have seen. But SMB Direct eats them all for breakfast, no matter what.

Convinced yet? People, one of my core, highly valuable skill sets is getting commodity hardware to perform, and I tend to give solid advice. You can read all my tips for fast live migrations in Live Migration Speed Check List – Take It Easy To Speed It Up.

Does all of this matter to you? I say yes, it does. It depends on your environment and usage patterns. Maybe you’re totally overprovisioned and run only very small workloads in your virtual machines. But it’s safe to say that if you want to use your hardware to its full potential under most circumstances, you really want to leverage SMB Direct for live migrations. What about that Hyper-V cluster with compute and storage heavy applications, what about SQL Server virtualization? Would you not like to see this picture with SMB RDMA? The Mellanox RDMA cards are very good value for money. Great 10Gbps switches that support DCB (for PFC/ETS) can be bought at decent prices. You’re missing out and potentially making a huge mistake by not leveraging SMB Direct for live migrations and many other workloads. Invest and design your solutions wisely!

Troubleshooting Intermittent Virtual Machine Network Connectivity

I was asked to take a look at an issue with virtual machines losing network connectivity. The problems were described as follows:

Sometimes some VMs had connectivity, sometimes they didn’t. It was not tied to specific virtual machines. Sometimes the problem was not there, then it showed up again. It was not an issue of a wrong subnet mask or gateway.

They suspected firmware or driver issues. Maybe it was a Windows NIC teaming bug or problems with DVMQ or NIC offload settings. There are a lot of potential reasons; just Google “intermittent VM connectivity issues Hyper-V” and you’ll get a truckload of options.

So a round of wishful firmware and driver upgrading started, followed by a round of wishful disabling of network features. That’s one way to do it. But why not sit back and look at the issue?

Based on what they said I looked at the environment and asked if it was tied to a specific host, as only VMs on one of the hosts had the issue. Could it be after a live migration or a VM restart? They didn’t really know, but it could be. So we started looking at the hosts. All teams for the vSwitch were correctly configured on all hosts. No tagged VLAN on the member NICs. No extra team interfaces that would violate the rule that there can be only one if the team is used by a Hyper-V switch. They used the switch independent teaming mode with the load balancing mode set to Dynamic, all members active. Perfect.
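Verifying that from PowerShell takes seconds on each host. A minimal sketch:

# Teaming mode, load balancing algorithm and member status
Get-NetLbfoTeam | Format-List Name, TeamingMode, LoadBalancingAlgorithm, Members
# Team interfaces: there should be only one, in default mode (no VLAN tag)
Get-NetLbfoTeamNic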

I asked if they sometimes used tagged VLANs on the VMs. They said yes, which gave me a clue that they had trunking or general mode configured on the ports. So I looked at the switches to see what the port configuration was like. Guess what? All ports on both switches were correctly configured bar the ports of the vSwitch team members on one Hyper-V host: the one with the problematic VMs. The two ports were in general mode, but the port on the top switch had PVID* 100 and the one on the bottom switch had PVID 200. That was the issue. If a VM “landed” on the team member with PVID 200 it had no network connectivity.

[Figure: Hyper-V vSwitch team with the wrong native VLAN on one team member’s switch port]

* PVID (switchport general pvid 200) is the default VLAN of the port; in Cisco speak that would translate into “native VLAN”, as in switchport trunk native vlan 200.
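On the Hyper-V side you can quickly list which VMs are tagged and which rely on the port’s PVID/native VLAN. A sketch:

# Untagged VMs inherit the switch port's PVID, so a PVID mismatch between
# the two team members' ports intermittently breaks their connectivity
Get-VM | Get-VMNetworkAdapterVlan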

Yes, NIC firmware and drivers have issues, and there are bugs or problems with advanced features once in a while. But you really do need to check that the configuration is correct and that the design or setup makes sense. Do yourself a favor by not assuming anything. Trust but verify.