Hyper-V Technical Preview Live Migration & Changing Static Memory Size

I have played with Hot Add & Remove Static Memory in a Hyper-V vNext Virtual Machine before and I love it. As I’m currently testing (and sometimes abusing) the Technical Preview a bit to see what breaks, I sometimes test silly things. This is one of them.

I took a Technical Preview VM with 45GB of memory, running in a Technical Preview Hyper-V cluster, and live migrated it. I then tried to change the memory size up and down during the live migration to see what happens, or at least to confirm that nothing goes “BOINK”. Well, not much happens: we get a notification that we’re being silly. So no failed migrations, no crashed or messed-up VMs or, even worse, hosts.

[Screenshot: notification that the memory size cannot be changed while a live migration is in progress]
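If you want to reproduce the test, here’s a rough PowerShell sketch. The VM group name RAGNAR and the node name NODE2 are assumptions from my lab, not part of the original test:

# Kick off a live migration of a clustered VM (RAGNAR and NODE2 are lab names).
Move-ClusterVirtualMachineRole -Name RAGNAR -Node NODE2 -MigrationType Live

# From a second session, try to resize the static memory while the migration runs.
# In the Technical Preview this is simply refused with a notification; nothing breaks.
Set-VM -VMName RAGNAR -MemoryStartupBytes 50GB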

It’s early days yet, but we’re getting a head start, as there is a lot to test and that will only increase. The aim is to get a good understanding of the features, the capabilities and the behavior, to make sure we can leverage our existing infrastructures and Software Assurance benefits as fast as possible. Rolling cluster upgrades should certainly help us do that faster, with more ease and less risk. What are your plans for vNext? Are you getting a feel for it yet, or are you waiting for a more recent test version?

Hot add/remove of network adapters and enabling device naming in Windows Server Hyper-V

One of the cool new features in Windows Server vNext Hyper-V (in Technical Preview at the moment of writing) is that you gain the ability to hot add and remove NICs. That might not sound too important to those uninitiated in the fine art of virtualization & clouds. But it is. You see, anything you can do to a VM’s configuration that does not require downtime is good. That’s what helps shift the needle of high availability towards that holy grail of continuous availability.

On top of that, the names of the network adapters are now exposed to the guest, which is also great. It becomes a lot easier to automate the VM network configuration.

Hot adding NICs can be done via the GUI and PoSh.

[Screenshot: adding a network adapter in the VM settings GUI]

But naming the network adapter seems a PowerShell-only game for now (nothing hard, no sweat). This can be done when creating the network adapter. Here I add a NIC to VM RAGNAR, connected to the ISCSI-GUEST switch and named ISCSI.

Add-VMNetworkAdapter -VMName RAGNAR -SwitchName ISCSI-GUEST -Name ISCSI

Now I want this name to be reflected in the VM’s NIC configuration properties. This is done by enabling device naming. You can do this via the GUI or PoSh.

Set-VMNetworkAdapter -VMName RAGNAR -Name ISCSI -DeviceNaming On

That’s it.

[Screenshot: the ISCSI network adapter with device naming enabled in the VM settings]

So now let’s play with our existing network adapter “Network Adapter”, which connects our Hyper-V guests to the LAN via the HYPER-V-GUESTS switch. Can you rename it? Yes, you can. In PoSh, run this:

Rename-VMNetworkAdapter -VMName RAGNAR -Name "Network Adapter" -NewName "LAN"

And that’s it. If you refresh the settings of your VM or reopen them, you’ll see the name change.

[Screenshot: the renamed “LAN” network adapter in the VM settings]

The one thing that I see in the Tech Preview is that I need to reboot the VM to see the name change reflected inside the VM, in the NIC configuration under advanced properties, in a property called “Hyper-V Network Adapter Name”. Until then, existing adapters show their old name and new ones are empty.

[Screenshot: the “Hyper-V Network Adapter Name” advanced property in the guest NIC configuration]

 

Two important characteristics to note about enabling device naming

Note that you can edit this field in the NIC configuration inside the VM, but the change doesn’t move up the stack into the settings of the VM. Security-wise this seems logical to me, and it’s not intended to work. It’s a GUI limitation that the field cannot be disabled for editing, but at least no one can be “funny” by renaming the Ethernet adapter in the VM’s settings from inside the guest ;-)

Do note that this is not exactly the same as Consistent Device Naming in Windows Server 2012 or later. It’s not reflected in the name of the NIC in the guest’s GUI; those are still Ethernet, Ethernet 2, … Device naming is mainly meant to let you identify the adapter assigned to the VM from inside the VM, mainly for automation (see the sketch below). You can name the NIC in the guest whatever works best for you and you’ll never lose the correlation between the network adapter in your VM settings and the Hyper-V Network Adapter Name in the NIC configuration properties. In that respect it is a bit more solid/permanent: even if someone found it funny to rename all vNICs to random names, you’re still OK with this feature.
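As an illustration of that automation angle, here’s a minimal sketch you could run inside the guest to read the host-assigned names back. It assumes the in-box NetAdapter PowerShell module and the advanced property display name shown above:

# List each NIC in the guest together with the name assigned on the host.
Get-NetAdapterAdvancedProperty -DisplayName 'Hyper-V Network Adapter Name' |
    Select-Object Name, DisplayValue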

That’s it, off you go! Download the Technical Preview bits from MSDN, start exploring and learning. Knowledge is seldom a bad thing ;-)

Hot Add & Remove Static Memory in a Hyper-V vNext Virtual Machine

One of the very nice and handy new capabilities in Windows Server vNext Hyper-V (Windows Server 2015 or Windows Server 10?) is the fact that you can now hot add or remove memory from a virtual machine with a fixed amount of memory. Another step towards continuous availability.

Here’s a test virtual machine RAGNAR running the Technical Preview bits on my test cluster. As you can see, it has 1024MB of fixed memory and dynamic memory is not enabled.

[Screenshot: VM RAGNAR with 1024MB of static memory and dynamic memory disabled]

I can simply adjust this upward by typing in the value and clicking “Apply”

[Screenshot: increasing the memory value and clicking “Apply”]

and downwards in the same way

[Screenshot: decreasing the memory value in the same way]

Cool huh!

On top of this, to make sure you’re not adding memory needlessly when someone says they need more memory for their VM, you can now see the memory demand in the GUI. Both in Hyper-V Manager and in the Failover Clustering GUI.

[Screenshot: memory demand shown in Hyper-V Manager]

[Screenshot: memory demand shown in the Failover Clustering GUI]

The Technical Preview of Windows vNext is looking good even at this early stage and I have no doubt it’s only going to get better :-D

Some things to note

  • It’s configurable via PowerShell, so you can start dreaming of scripts that query memory assignment & demand and use that output to redistribute the memory available on the host amongst the VMs (see the sketch after this list) …

[Screenshot: adjusting the memory of a running VM via PowerShell]

  • Both Hyper-V host and guest have to run vNext (Technical Preview)
  • The guest has to be a generation 2 VM
  • It’s still virtualization technology and not magic, so if you try to assign more memory than is available you’ll get a warning ;-)

[Screenshot: warning when trying to assign more memory than the host has available]
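As promised, a minimal sketch of that idea, assuming only the Hyper-V PowerShell module on a vNext host (RAGNAR is my lab VM):

# Query assigned memory versus current demand for all running VMs on this host.
Get-VM | Where-Object { $_.State -eq 'Running' } |
    Select-Object Name, MemoryAssigned, MemoryDemand

# Hot-resize the static memory of a running VM based on what you learned.
Set-VM -VMName RAGNAR -MemoryStartupBytes 2GB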

First experiences with a rolling cluster upgrade of a lab Hyper-V Cluster (Technical Preview)

Introduction

In vNext we have gotten a long-awaited & very welcome new capability: rolling cluster upgrades, which for the Hyper-V role is a 100% zero-downtime experience. The only step that requires some downtime is the upgrade of the virtual machine configuration files to the vNext version (version 5 to 6), as the VM has to be shut down for this.

How to

The process for a rolling upgrade is so straightforward I’ll just give you a quick bullet list of the first part of the process (a PowerShell sketch of these steps follows the list):

  • Evacuate the workload from the cluster node you’re going to upgrade
  • Evict the node to upgrade to vNext from the cluster
  • Upgrade (an in-place upgrade is not supported, but in your lab you can get away with it)
  • Add the upgraded node to the cluster
  • Rinse & repeat until all nodes have been upgraded (that can take a while with larger clusters)
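A hedged sketch of that per-node loop, assuming the FailoverClusters module and a node named NODE1, run from a vNext node or a Windows 10 system with the vNext RSAT:

# Drain the workload off the node (live migrates the VMs away).
Suspend-ClusterNode -Name NODE1 -Drain -Wait

# Evict the node from the cluster so it can be upgraded.
Remove-ClusterNode -Name NODE1

# ... reinstall NODE1 with the vNext bits ...

# Add the freshly upgraded node back into the cluster.
Add-ClusterNode -Name NODE1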

Please note that all administration you do on a cluster in mixed mode should be done from a node running vNext or from a system running Windows 10 with the vNext RSAT installed.

Once you’ve upgraded all nodes in the cluster, the situation you’re in is basically that you’re running a Windows Server vNext Hyper-V cluster at cluster functional level 8 (W2K12R2), and the next step is to upgrade to 9, which is vNext. No, there’s no 10 yet in server ;-)

You do this by executing the Update-ClusterFunctionalLevel cmdlet. This is an online process. Again, do this from a node running vNext or a system running Windows 10 with the vNext RSAT installed. Note that this is where you commit to the vNext level for the cluster. That’s where you want to go, but you get to decide when. Once you’ve done this you can’t go back to W2K12R2. As a matter of fact, as long as you’re running cluster functional level 8, you can reverse the entire process. Talk about having options! I like having options, just ask Carsten Rachfahl (@hypervserver), he’ll tell you it’s one of my mantras.
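The step itself is a one-liner, run from a vNext node against the local cluster:

# Commit the cluster to functional level 9 (vNext). This runs online, but there
# is no way back to level 8 afterwards, so only do it when you're ready.
Update-ClusterFunctionalLevel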

[Screenshot: running the Update-ClusterFunctionalLevel cmdlet]

When this goes well, you can easily check the cluster functional level as follows:

[Screenshot: checking the cluster functional level in PowerShell]
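For reference, a one-liner that should do the same check, assuming the ClusterFunctionalLevel property the preview exposes:

# 8 = W2K12R2, 9 = vNext.
Get-Cluster | Select-Object Name, ClusterFunctionalLevel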

When this is done, you can upgrade the VM configurations by running the Update-VMConfigurationVersion cmdlet. This is an offline process, where the VMs you’re updating have to be shut down. You can do this for just one VM, all of them, or anything in between. This is when you decide you’re committing to all the goodness vNext brings you. But the fact that you have some time before you need to do it means you can easily get those machines to run smoothly on a W2K12R2 cluster in case you need to roll back. Remember, options are good!

Doing so updates the VM version from 5 to 6 and enables the new Hyper-V features (hit F5 a lot or reopen Hyper-V Manager to see the value change).
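Per VM, the sequence could look like the sketch below. RAGNAR is my lab VM, and I’m assuming the cmdlet keeps this preview-era name and a plain -Name parameter; check Get-Help on your build:

# The configuration upgrade is offline, so shut the VM down first.
Stop-VM -Name RAGNAR
Update-VMConfigurationVersion -Name RAGNAR
Start-VM -Name RAGNAR

# Verify the result: the version should now read 6 instead of 5.
Get-VM -Name RAGNAR | Select-Object Name, Version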

[Screenshot: VM configuration version before the upgrade]

[Screenshot: VM configuration version after the upgrade]

Note: if in the lab you’re running some VMs on a cluster node that are not highly available (i.e. they’re not clustered), they cannot be updated until the cluster functional level has been upgraded to version 9.