Issues to watch out for when configuring Discrete Device Assignment

When you’re figuring out how to get Discrete Device Assignment to work, there are some potential bumps that might trip you up. So what are the issues to watch out for when configuring Discrete Device Assignment? We’ll share some here, but note that this is from testing with Windows Server 2016 Technical Preview 4. Changes can and probably will happen before RTM.

Make sure your VMs are running on the latest configuration version. That means 7.x at the time of writing. Many of the new features require this, as discussed in Windows Server 2016 TPv4 Hyper-V brings virtual machine configuration version 7.

Check the configuration version of the VM

When you try to add a GPU to a VM via Discrete Device Assignment you’ll get an error when the VM has version 5.0 instead of 7.x. This can easily happen when you move VMs from older versions to a shiny new Windows 2016 environment, as in the example below:

image

Naturally, all of this is logged in the Hyper-V-VMMS Admin logs as well:

‘W2K12R2’ cannot add device ‘Virtual Pci Express Port’ until the virtual machine is upgraded. (Virtual machine ID 592A920F-B0E9-480C-9052-A397B377BCC9)
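For what it’s worth, checking and fixing this from PowerShell is quick. A minimal sketch, assuming the Hyper-V module with the cmdlet names as they stand in recent builds (the VM must be shut down to upgrade, and the upgrade is irreversible):

#Check the configuration version of the VMs on the host
Get-VM | ft Name, Version -AutoSize
#Upgrade the VM from the error above to the host’s latest configuration version
Update-VMVersion -Name W2K12R2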

Mind your Dynamic Memory settings

Another thing you need to watch out for is that when you use Dynamic Memory, the startup memory and the minimum memory values have to match. So minimum memory cannot be lower than the startup memory. Do note that this is TPv4 and things might change.

image

Cannot add the device to ‘W2K12R2’ as that virtual machine has Dynamic Memory configured with different startup memory and minimum memory values. When adding a device, the virtual machine must be configured with equal startup memory and minimum memory values.(Virtual machine ID 592A920F-B0E9-480C-9052-A397B377BCC9)
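A sketch of a compliant configuration via Set-VMMemory; the sizes here are hypothetical, the point is that -StartupBytes and -MinimumBytes are equal:

#Startup and minimum memory must be equal for DDA to work with Dynamic Memory
Set-VMMemory -VMName W2K12R2 -DynamicMemoryEnabled $true -StartupBytes 2GB -MinimumBytes 2GB -MaximumBytes 8GB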

If you try to change this on a VM with discrete device assignment enabled you’ll also find that this isn’t allowed.

image

Cannot perform the operation for ‘W2K12R2’ as the specified memory settings are not compatible for device assignment. The startup memory size and minimum memory size must be equal when Dynamic Memory is enabled and devices are also assigned.(Virtual machine ID 592A920F-B0E9-480C-9052-A397B377BCC9)

Set the automatic stop action to “Turn off the virtual machine”

I already mentioned this in the blog but you need to make sure that the automatic stop action for the virtual machine is set to “turn off the virtual machine” and not to the default of “save the virtual machine state”. You cannot use DDA unless you do so.

image

Cannot add the device to ‘W2K12R2’ as that virtual machine is configured to go to saved state on host shutdown. (Virtual machine ID 592A920F-B0E9-480C-9052-A397B377BCC9)
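For the sake of completeness, setting this from PowerShell is a one-liner:

#Set the automatic stop action to “Turn off the virtual machine”
Set-VM -Name W2K12R2 -AutomaticStopAction TurnOff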

Again, changing this on a VM that has DDA assigned will not work.

image

Discrete means one on one

Remember that you cannot assign a device to more than one VM. The thing here is that it won’t block you from doing so when both VMs are shut down, at least not in TPv4. But the device is dedicated, and when you then try to start any of those VMs it won’t work.

image

An error occurred while attempting to start the selected virtual machine(s).

‘RFX-WIN10ENT’ failed to start.

Virtual Pci Express Port (Instance ID 9B15DD32-5F94-46EF-8524-501007830322): Failed to Power on with Error ‘The device is in use by an active process and cannot be disconnected.’.
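If you want to check up front which VM a device is already assigned to, you can query each VM for its assigned devices. A minimal sketch, assuming the returned device objects expose VMName, InstanceID and LocationPath as they did in my TPv4 testing:

#List the assignable devices already attached to each VM on this host
Get-VM | ForEach-Object { Get-VMAssignableDevice -VMName $_.Name } |
    ft VMName, InstanceID, LocationPath -AutoSize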

When you try to assign a GPU that is already assigned to a running VM, however, it will block you!

The error clearly identifies the VM the device is already assigned to.

Add-VMAssignableDevice -LocationPath $LocationPathOfDismountedDA -VMName RFX-WIN10ENT
Add-VMAssignableDevice : 'RFX-WIN10ENT' failed to add resources to 'RFX-WIN10ENT'.
Virtual Pci Express Port (Instance ID EA7CB907-C38A-4396-97E0-A9A8F3C2D1B0): Failed to Power on with Error 'The device
is in use by an active process and cannot be disconnected.'.
'RFX-WIN10ENT' failed to add resources. (Virtual machine ID 425A366E-E380-4D8C-AADE-DE16EAC0A104)
'RFX-WIN10ENT' Virtual Pci Express Port (Instance ID EA7CB907-C38A-4396-97E0-A9A8F3C2D1B0): Failed to Power on with
Error 'The device is in use by an active process and cannot be disconnected.' (0x80070964). (Virtual machine ID
425A366E-E380-4D8C-AADE-DE16EAC0A104)
Could not allocate the PCI Express device with the Plug and Play Device Instance path
'PCIP\VEN_10DE&DEV_0FF2&SUBSYS_101210DE&REV_A1\6&17F903&0&00400010' because it is already in use by another VM.
At line:1 char:1
+ Add-VMAssignableDevice -LocationPath $LocationPathOfDismountedDA -VMN …
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo          : NotSpecified: (:) [Add-VMAssignableDevice], VirtualizationException
+ FullyQualifiedErrorId : OperationFailed,Microsoft.HyperV.PowerShell.Commands.AddVmAssignableDevice

Shut down your VM to make changes to DDA

Last but not least, to use DDA (assign, configure) with a VM you have to shut it down. Removing devices whilst the VM is running isn’t blocked, but the results can be quite “harsh”. This is me removing a DDA GPU from a Windows 2012 R2 VM whilst it’s running.

image

The fun part is that you can add it again while the VM is running and it will work, but it’s not a healthy thing to do.
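The healthy sequence is to shut the VM down first, make the DDA change and only then start it again. A sketch, with $locationpath being the location path you recorded when dismounting the device:

#Shut the VM down before touching the DDA configuration
Stop-VM -Name W2K12R2
Remove-VMAssignableDevice -LocationPath $locationpath -VMName W2K12R2
Start-VM -Name W2K12R2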

As stated above, these notes are from testing with Windows Server 2016 Technical Preview 4 so things can still change. Happy testing!

Setting up Discrete Device Assignment with a GPU

Introduction

Let’s take a look at setting up Discrete Device Assignment with a GPU. Windows Server 2016 introduces Discrete Device Assignment (DDA). This allows a PCI Express connected device that supports this to be passed directly through to a virtual machine.

The idea behind this is to gain extra performance. In our case we’ll use one of the four display adapters in our NVIDIA GRID K1 to assign to a VM via DDA. The 3 others can remain for use with RemoteFX. Perhaps we could even leverage DDA for GPUs that do not support RemoteFX to be used directly by a VM, we’ll see.

As we directly assign the hardware to a VM we need to install the drivers for that hardware inside of that VM, just like you need to do with real hardware.

I refer you to the starting blog of a series on DDA in Windows 2016:

Here you can get a wealth of extra information. My experimentation with this feature relied heavily on these blogs and on the GitHub script Microsoft provides to query a host for DDA capable devices. That was very educational in regards to finding out the PowerShell we needed to get DDA to work! Please see A 1st look at Discrete Device Assignment in Hyper-V to see the output of this script and how we identified that our NVIDIA GRID K1 card was a DDA capable candidate.

Requirements

There are some conditions the host system needs to meet to even be able to use DDA. The host needs to support Access Control Services (ACS), which enables pass-through of PCI Express devices in a secure manner. The host also needs to support SLAT and Intel VT-d2 or AMD I/O MMU. This is dependent on UEFI, which is not a big issue; all my W2K12R2 cluster nodes & member servers run UEFI already anyway. All in all, these requirements are covered by modern hardware. The hardware you buy today for Windows Server 2012 R2 meets those requirements when you buy decent enterprise grade hardware such as the DELL PowerEdge R730 series. That’s the model I had available to test with. Nothing about these requirements is shocking or unusual.

A PCI Express device that is used for DDA cannot be used by the host in any way. You’ll see we actually dismount it from the host. It also cannot be shared amongst VMs; it’s used exclusively by the VM it’s assigned to. As you can imagine this is not a scenario for live migration and VM mobility. This is a major difference between DDA and SR-IOV or virtual Fibre Channel, where live migration is supported in very creative, different ways. Now I’m not saying Microsoft will never be able to combine DDA with live migration, but to the best of my knowledge it’s not available today.

The host requirements are also listed here: https://technet.microsoft.com/en-us/library/mt608570.aspx

  • The processor must have either Intel’s Extended Page Table (EPT) or AMD’s Nested Page Table (NPT).

The chipset must have:

  • Interrupt remapping: Intel’s VT-d with the Interrupt Remapping capability (VT-d2) or any version of AMD I/O Memory Management Unit (I/O MMU).
  • DMA remapping: Intel’s VT-d with Queued Invalidations or any AMD I/O MMU.
  • Access control services (ACS) on PCI Express root ports.
  • The firmware tables must expose the I/O MMU to the Windows hypervisor. Note that this feature might be turned off in the UEFI or BIOS. For instructions, see the hardware documentation or contact your hardware manufacturer.

You get this technology both on premises with Windows Server 2016 and with virtual machines running Windows Server 2016, Windows 10 (1511 or higher) and Linux distros that support it. It’s also an offering on high end Azure VMs (IaaS). It supports both generation 1 and generation 2 virtual machines, albeit that generation 2 is x64 only, which might be important for certain client VMs. We dumped 32-bit operating systems over a decade ago, so to me this is a non-issue.

For this article I used a DELL PowerEdge R730 and an NVIDIA GRID K1 GPU, with Windows Server 2016 TPv4 with the CU of March 2016 and Windows 10 Insider Build 14295.

Microsoft supports 2 device types at the moment:

  • GPUs and coprocessors
  • NVMe (Non-Volatile Memory express) SSD controllers

Other devices might work but you’re dependent on the hardware vendor for support. Maybe that’s OK for you, maybe it’s not.

Below I describe the steps to get DDA working. There’s also a rough video out on my Vimeo channel: Discrete Device Assignment with a GPU in Windows 2016 TPv4.

Setting up Discrete Device Assignment with a GPU

Preparing a Hyper-V host with a GPU for Discrete Device Assignment

First of all, you need a Windows Server 2016 host running Hyper-V. It needs to meet the hardware specifications discussed above, boot from UEFI with VT-d enabled, and you need a PCI Express GPU to work with that can be used for Discrete Device Assignment.

It pays to get the most recent GPU driver installed, which for our NVIDIA GRID K1 was 362.13 at the time of writing.

clip_image001

On the host, when your installation of the GPU and drivers is OK, you’ll see 4 NVIDIA GRID K1 display adapters in device manager.

clip_image002

We create a generation 2 VM for this demo. In case you recuperate a VM that already has a RemoteFX adapter in use, remove it. You want a VM that only has a Microsoft Hyper-V Video Adapter.

clip_image003
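If you’re building the VM from scratch rather than recuperating one, a minimal sketch could look like this (the VM name and VHDX path are hypothetical; the Remove-VMRemoteFx3dVideoAdapter step only matters when a RemoteFX adapter is present):

#Create a bare generation 2 VM for the demo
New-VM -Name RFX-WIN10ENT -Generation 2 -MemoryStartupBytes 4GB -NewVHDPath 'D:\VMs\RFX-WIN10ENT.vhdx' -NewVHDSizeBytes 60GB
#Strip any leftover RemoteFX 3D video adapter so only the Microsoft Hyper-V Video Adapter remains
Remove-VMRemoteFx3dVideoAdapter -VMName RFX-WIN10ENT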

In Hyper-V Manager I also exclude the NVIDIA GRID K1 GPU I’ll configure for DDA from being used by RemoteFX. In this showcase we’ll use the first one.

clip_image005

OK, we’re all set to start with our DDA setup for an NVIDIA GRID K1 GPU!

Assign the PCI Express GPU to the VM

Prepping the GPU and host

As stated above, to have a GPU assigned to a VM we must make sure that the host no longer has use of it. We do this by dismounting the display adapter, which renders it unavailable to the host. Once that is done we can assign that device to a VM.

Let’s walk through this. Tip: run PoSh or the ISE as an administrator.

We run Get-VMHostAssignableDevice. This returns nothing, as no devices have yet been made available for DDA.

I now want to find my display adapters.

#Grab all the GPUs in the Hyper-V host
$MyDisplays = Get-PnpDevice | Where-Object {$_.Class -eq "Display"}
$MyDisplays | ft -AutoSize

This returns

clip_image007

As you can see it lists all adapters. Let’s limit this to the NVIDIA ones alone.

#We can get all NVIDIA cards in the host by querying for the nvlddmkm
#service, which is an NVIDIA kernel mode driver
$MyNVIDIA = Get-PnpDevice | Where-Object {$_.Class -eq "Display"} |
    Where-Object {$_.Service -eq "nvlddmkm"}
$MyNVIDIA | ft -AutoSize

clip_image009

If you have multiple types of NVIDIA cards you might also want to filter those out based on the friendly name. In our case, with only one GPU, this doesn’t filter anything. What we really want to do is exclude any display adapter that has already been dismounted. For that we use the -PresentOnly parameter.

#We actually only need the NVIDIA GRID K1 cards, so let’s filter some more,
#as there might be other NVIDIA GPUs. We might already have dismounted some
#of those GPUs before. For this exercise we want to work with the ones that
#are mounted; the parameter -PresentOnly will do just that.
$MyNVidiaGRIDK1 = Get-PnpDevice -PresentOnly | Where-Object {$_.Class -eq "Display"} |
    Where-Object {$_.Service -eq "nvlddmkm"} |
    Where-Object {$_.FriendlyName -eq "NVIDIA Grid K1"}
$MyNVidiaGRIDK1 | ft -AutoSize

Extra info: when you have already used one of the display adapters for DDA, its status shows as “Unknown”, like in the screenshot below.

clip_image011

We can filter out any already unmounted device by using the -PresentOnly parameter. As we could have more NVIDIA adapters in the host, potentially different models, we’ll filter that out with the FriendlyName so we only get the NVIDIA GRID K1 display adapters.

clip_image013

In the example above you see 3 display adapters, as 1 of the 4 on the GPU is already dismounted. The “Unknown” one isn’t returned anymore.

Anyway, when we run

$MyNVidiaGRIDK1 = Get-PnpDevice -PresentOnly | Where-Object {$_.Class -eq "Display"} |
    Where-Object {$_.Service -eq "nvlddmkm"} |
    Where-Object {$_.FriendlyName -eq "NVIDIA Grid K1"}
$MyNVidiaGRIDK1 | ft -AutoSize

We get an array with the display adapters relevant to us. I’ll use the first one (which I excluded from use with RemoteFX). In a zero based array this means I disable that display adapter as follows:

Disable-PnpDevice -InstanceId $MyNVidiaGRIDK1[0].InstanceId -Confirm:$false

Your screen might flicker when you do this. This is actually like disabling it in device manager, as you can see when you take a peek at it.

clip_image015

When you now run

$MyNVidiaGRIDK1 = Get-PnpDevice -PresentOnly | Where-Object {$_.Class -eq "Display"} |
    Where-Object {$_.Service -eq "nvlddmkm"} |
    Where-Object {$_.FriendlyName -eq "NVIDIA Grid K1"}
$MyNVidiaGRIDK1 | ft -AutoSize

Again you’ll see

clip_image017

The disabled adapter now has “Error” as its status. This is the one we will dismount so that the host no longer has access to it. As the array is zero based, we grab the data about that display adapter as follows:

#Grab the data (multi string value) for the display adapter
$DataOfGPUToDDismount = Get-PnpDeviceProperty DEVPKEY_Device_LocationPaths -InstanceId $MyNVidiaGRIDK1[0].InstanceId
$DataOfGPUToDDismount | ft -AutoSize

clip_image019

We grab the location path out of that data (it’s the first value, zero based, in the multi string value).

#Grab the location path out of the data (it’s the first value, zero based)
#How do I know: read the MSFT blogs and the script by MSFT I mentioned earlier.
$locationpath = ($DataOfGPUToDDismount).data[0]
$locationpath | ft -AutoSize

clip_image021

This locationpath is what we need to dismount the display adapter.

#Use this location path to dismount the display adapter
Dismount-VmHostAssignableDevice -locationpath $locationpath -force

Once you dismount a display adapter it becomes available for DDA. When we now run

$MyNVidiaGRIDK1 = Get-PnpDevice -PresentOnly | Where-Object {$_.Class -eq "Display"} |
    Where-Object {$_.Service -eq "nvlddmkm"} |
    Where-Object {$_.FriendlyName -eq "NVIDIA Grid K1"}
$MyNVidiaGRIDK1 | ft -AutoSize

We get:

clip_image023

As you can see the dismounted display adapter is no longer present in the display adapters when filtering with -PresentOnly. It’s also gone in device manager.

clip_image024

Yes, it’s gone in device manager; there are only 3 NVIDIA GRID K1 adapters left. Do note that the device is unmounted and as such unavailable to the host, but it is still fully functional and can be assigned to a VM. The remaining NVIDIA GRID K1 adapters can still be used with RemoteFX for VMs.

It’s not “lost” however. When we adapt our query to find the system devices that have “Dismounted” in the friendly name, we can still get to it (needed to restore the GPU to the host when required). This means that -PresentOnly has a different outcome depending on the class: the device is no longer available in the display class, but it is in the system class.
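A sketch of that adapted query (it’s the same filter we’ll use later on to return the GPU to the host):

#The dismounted GPU no longer shows up in the Display class, but it does in the System class
Get-PnpDevice -PresentOnly |
    Where-Object {$_.Class -eq "System" -and $_.FriendlyName -like "*Dismounted*"}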

clip_image026

And we can also see it in the System devices node in Device Manager, where it is labeled as “PCI Express Graphics Processing Unit – Dismounted”.

We now run Get-VMHostAssignableDevice again and see that our dismounted adapter has become available to be assigned via DDA.

clip_image029

This means we are ready to assign the display adapter exclusively to our Windows 10 VM.

Assigning a GPU to a VM via DDA

You need to shut down the VM

Change the automatic stop action for the VM to “turn off”

clip_image031

This is mandatory or you can’t assign hardware via DDA; it will throw an error if you forget this.
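In PowerShell, both prerequisites together are simply:

#DDA changes require the VM to be off and the stop action to be “turn off”
Stop-VM -Name RFX-WIN10ENT
Set-VM -Name RFX-WIN10ENT -AutomaticStopAction TurnOff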

I also set my VM configuration as described in https://blogs.technet.microsoft.com/virtualization/2015/11/23/discrete-device-assignment-gpus/

I give it up to 4GB of memory as that’s what this NVIDIA model seems to support. According to the blog the GPUs work better (or only work) if you set -GuestControlledCacheTypes to true.

“GPUs tend to work a lot faster if the processor can run in a mode where bits in video memory can be held in the processor’s cache for a while before they are written to memory, waiting for other writes to the same memory. This is called “write-combining.” In general, this isn’t enabled in Hyper-V VMs. If you want your GPU to work, you’ll probably need to enable it”

#Let’s set the memory resources on our generation 2 VM for the GPU
Set-VM RFX-WIN10ENT -GuestControlledCacheTypes $True -LowMemoryMappedIoSpace 2000MB -HighMemoryMappedIoSpace 4000MB

You can query these values with Get-VM RFX-WIN10ENT | fl *

We now assign the display adapter to the VM using that same $locationpath

Add-VMAssignableDevice -LocationPath $locationpath -VMName RFX-WIN10ENT

Boot the VM, login and go to device manager.

clip_image033

We now need to install the device driver for our NVIDIA GRID K1 GPU, basically the one we used on the host.

clip_image035

Once that’s done we can see our NVIDIA GRID K1 in the guest VM. Cool!

clip_image037

You’ll need to restart the VM in relation to the hardware change. And the result after all that hard work is a very nice graphical experience compared to RemoteFX.

clip_image039

What, you don’t believe it’s using an NVIDIA GPU inside of a VM? Open up perfmon in the guest VM and add counters; you’ll find the NVIDIA GPU ones and see you have a GRID K1 in there.

clip_image041

Start some GPU intensive process and see those counters rise.

image

Remove a GPU from the VM & return it to the host

When you no longer need a GPU for DDA to a VM you can reverse the process to remove it from the VM and return it to the host.

Shut down the VM guest OS that’s currently using the NVIDIA GPU graphics adapter.

In an elevated PowerShell prompt or ISE we grab the locationpath for the dismounted display adapter as follows

$DisMountedDevice = Get-PnpDevice -PresentOnly |
    Where-Object {$_.Class -eq "System" -AND $_.FriendlyName -like "PCI Express Graphics Processing Unit - Dismounted"}
$DisMountedDevice | ft -AutoSize

clip_image045

We only have one GPU that’s dismounted so that’s easy. When there are more display adapters unmounted this can be a bit more confusing. Some documentation might be in order to make sure you use the correct one.
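A sketch that could serve as that documentation, pairing every dismounted device with its instance ID and location path:

#Pair each dismounted device with its instance ID and location path
$DisMountedDevice | ForEach-Object {
    [pscustomobject]@{
        InstanceId   = $_.InstanceId
        LocationPath = (Get-PnpDeviceProperty DEVPKEY_Device_LocationPaths -InstanceId $_.InstanceId).Data[0]
    }
} | ft -AutoSize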

We then grab the locationpath for this device, which is at index 0, as it’s an array with one entry (zero based). So in this case we could even leave out the index.

$LocationPathOfDismountedDA = ($DisMountedDevice[0] | Get-PnpDeviceProperty DEVPKEY_Device_LocationPaths).data[0]
$LocationPathOfDismountedDA

clip_image047

Using that locationpath we remove the DDA GPU from the VM

#Remove the display adapter from the VM.
Remove-VMAssignableDevice -LocationPath $LocationPathOfDismountedDA -VMName RFX-WIN10ENT

We now mount the display adapter on the host using that same locationpath

#Mount the display adapter again.
Mount-VmHostAssignableDevice -locationpath $LocationPathOfDismountedDA

We grab the display adapter that’s now back, as disabled in device manager or in an “Error” status in the display class of the PnP devices.

#It will now show up in our query for -PresentOnly NVIDIA GRID K1 display adapters
#Its status will be “Error” (not “Unknown”)
$MyNVidiaGRIDK1 = Get-PnpDevice -PresentOnly | Where-Object {$_.Class -eq "Display"} |
    Where-Object {$_.Service -eq "nvlddmkm"} |
    Where-Object {$_.FriendlyName -eq "NVIDIA Grid K1"}
$MyNVidiaGRIDK1 | ft -AutoSize

clip_image049

clip_image051

We grab that first entry to enable the display adapter (or do it in device manager)

#Enable the display adapter
Enable-PnpDevice -InstanceId $MyNVidiaGRIDK1[0].InstanceId -Confirm:$false

The GPU is now back and available to the host. When you run Get-VMHostAssignableDevice it won’t return this display adapter anymore.

We’ve enabled the display adapter and it’s ready for use by the host or RemoteFX again. Finally, we set the memory resources & configuration for the VM back to their defaults before starting it again (PS: these defaults are the values of a standard VM that never had any DDA GPU installed; that’s where I got them).

#Let’s set the memory resources on our VM for the GPU back to the defaults
Set-VM RFX-WIN10ENT -GuestControlledCacheTypes $False -LowMemoryMappedIoSpace 256MB -HighMemoryMappedIoSpace 512MB

clip_image053

Now tell me all this wasn’t pure fun!

A 1st look at Discrete Device Assignment in Hyper-V

Let’s take a 1st look at Discrete Device Assignment in Hyper-V

Discrete Device Assignment (DDA) is the ability to take suitable PCI Express devices in a server and pass them through directly to a virtual machine.

This sounds a lot like SR-IOV, which we got in Windows 2012. It might also make you think of virtual Fibre Channel in a VM, where you get access to the FC HBA on the host. The concept is similar in that it gives physical capabilities to a virtual machine.

But this is about Windows Server 2016’s new capability, which takes all this a lot further. The biggest use cases for this seem to be GPUs and NVMe disks. In environments where absolute speed matters, the gains from doing DDA are worth the effort, especially if this ever works well with live migrations (not yet as far as I can see right now, and it’s probably quite a challenge).

There’s a great series of blog posts on DDA.

Microsoft also has a survey script available to find potential DDA devices on your hosts. It will report on which of them meet the criteria for passing them through to a VM.

https://github.com/Microsoft/Virtualization-Documentation/tree/master/hyperv-samples/benarm-powershell/DDA
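To grab and run it on a host, something like the sketch below should do; I’m assuming the script file in that repo is still named SurveyDDA.ps1, so check the repository before you copy/paste:

#Download the survey script from the repo above and run it on the host
Invoke-WebRequest 'https://raw.githubusercontent.com/Microsoft/Virtualization-Documentation/master/hyperv-samples/benarm-powershell/DDA/SurveyDDA.ps1' -OutFile .\SurveyDDA.ps1
.\SurveyDDA.ps1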

That script is a very nice educational tool by the way. When I look at one of my servers I find that I have a couple of devices that are potential candidates:

So I’m looking at the Mellanox NICs, and I wondered if this is the first step to RDMA in the guest. Probably not; the way I see the network stack evolving in Windows 2016, that’s the place where that will be handled.

image

And the NVIDIA GRID K1 – one of the poster children of DDA, needed to compete in the high end VDI market.

image

And yes, the Emulex FC HBAs in the server. This is interesting and I’m curious about the vFC/DDA-with-FC story, but I have no further info on this. vFC is still there in W2K16 and I don’t think DDA will replace it.

image

And finally it showed me the PERC controller as a potential candidate. Now, trying to get a PERC 730 exposed to a VM with DDA sounds like an experiment I might just spend some evening hours on, just to see where it leads. Only NVMe is supported, but hey, a boy can play, right?

image

So we have some lab work to do and we’ll see where we end up when we get TPv5 in our hands. What I also really need to get my hands on are some NVMe disks. But soon I’ll publish my findings on configuring this with an NVIDIA GRID K1 GPU.