Upgrading Hyper-V Cluster Nodes to Windows Server 2012 (Beta) – Part 3

This is a multipart series based on some lab tests & work I did.

  1. Part 1 Upgrading Hyper-V Cluster Nodes to Windows Server 2012 (Beta) – Part 1
  2. Part 2 Upgrading Hyper-V Cluster Nodes to Windows Server 2012 (Beta) – Part 2
  3. Part 3 Upgrading Hyper-V Cluster Nodes to Windows Server 2012 (Beta) – Part 3

And we have arrived at part three of my adventures while “transitioning” my Hyper-V cluster nodes to Windows Server 2012. I prefer the term transition as it is more correct. We still cannot do a rolling upgrade of a cluster. We still need to create a new cluster and recuperate the evicted nodes.

I’ll repeat myself here (again) by stating that I did not reinstall the evicted nodes but upgraded them. Why? Because I can, and I wanted to try it out and see what happens. For production purposes I advise you to rebuild nodes from scratch using a well-defined and automated plan if possible. I already mentioned this in Upgrading Hyper-V Cluster Nodes to Windows Server 2012 (Beta) – Part 1.

Moving the Storage & Hyper-V Guests

So we stopped in Part 2 at a newly created cluster without any storage. That’s what we’ll be taking care of in this part. Let’s recap what we already mentioned at the end of Part 2.

We have several options for storage here. We could assign new storage, but a Quick Storage Migration between clusters using SCVMM 2008 R2 doesn’t fly, as SCVMM 2008 R2 can’t manage Windows Server 2012 clusters and I don’t know if it ever will. We could also do a good old manual or scripted export to and import from the new storage, but that takes a considerable amount of time for the VMs, and you need to have the extra storage available.

We can also recuperate the old storage with the VMs still on there. This could get tricky, as no two clusters should be able to see & use the storage at the same time. The benefit is that we can just use the new import type in Windows Server 2012, “Register the virtual machine in-place” (use the existing unique ID), and be done with it. We’ll try that one. We’ll still have some downtime but it should be pretty fast. It’s only from Windows Server 2012 on that we’ll be able to do Shared Nothing Live Migrations between clusters, and life will be good. If you have a SAN you could also use clones to get this job done with less risk: you work on cloned data and keep the original around instead of using it for the process described below.

So how do we approach this?

Since Windows Server 2008, storage & clustering isn’t the pain it could be in earlier versions. It’s the disk manager handling all that, and it makes life a lot easier. All disks presented to a cluster node are offline to the operating system until you bring them online, even if they contain data or are presented to another host, whether that is a member of another cluster or a standalone host. Pretty cool. It also means you can have all your nodes online during the process. The process of bringing the disk online and, if needed, formatting it with NTFS and then adding it to the cluster as storage can be done on just one of the nodes.

As you recall, I unplugged the evicted node from the iSCSI storage (you could also disable the ports) before I upgraded it. The entire iSCSI configuration got upgraded perfectly, so all I needed to do was plug the iSCSI cables in and the storage appeared, offline. My old cluster node was up and running, still accessing it. Pretty slick, and great as a demo, but you can play it safer. That was fun, but perhaps we won’t be that brave in a production environment.

Options

You could decide to bring all LUNs over at once or one at a time. The process is the same. If you do it one by one, you’ll have to rely on the above behavior to protect the LUNs against corruption, or you can un-present the LUNs remaining on the old cluster from the new cluster so you’ll never have an issue. We’ve done both and it works out rather fine in testing. Windows clustering is really doing its best to prevent you from shooting yourself in the foot.

Let’s say I go LUN by LUN. Now I can just remove the VMs from the old cluster using the Failover Cluster GUI so they are no longer highly available. When I have no more clustered VMs on a CSV LUN, I can shut down all the guests in Hyper-V Manager and stop right there.

On the old cluster I remove that LUN from the CSV storage and from the cluster storage. At that moment the LUN is already taken offline for you!
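
If you prefer to script these two steps, here is a minimal sketch using the FailoverClusters PowerShell module; the group name "VM1" and the disk name "Cluster Disk 1" are placeholders for your own names:

```powershell
Import-Module FailoverClusters

# On an old Windows 2008 R2 node: un-cluster the VM. This removes the cluster
# group and its resources, but the VM itself stays put in Hyper-V Manager.
Remove-ClusterGroup -Name "VM1" -RemoveResources -Force

# Once a CSV LUN no longer hosts clustered VMs: remove it from CSV and
# from the cluster's storage. The LUN is taken offline at this point.
Remove-ClusterSharedVolume -Name "Cluster Disk 1"
Remove-ClusterResource -Name "Cluster Disk 1" -Force
```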

Great, Windows is protecting us against any possible data corruption! So now I can un-present the LUN from the old cluster nodes. The next step is to enable the iSCSI ports and present that LUN to the new cluster node or nodes (depending on where you are in the node-by-node process), or just plug in the cable.

You’ll then see the new LUN, offline, on the new cluster. We can bring the LUN online so it will be available to add to the cluster. Just right-click that disk and select “Online”.
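
The same can be done with the new Storage cmdlets in Windows Server 2012; a small sketch, assuming the recuperated LUN shows up as disk number 4 on this node:

```powershell
# List the disks the OS still sees as offline; the recuperated LUN should be among them
Get-Disk | Where-Object { $_.IsOffline }

# Bring it online, the equivalent of right-click > "Online"
Set-Disk -Number 4 -IsOffline $false
```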

Right-click on Storage.

Select a disk that’s available to add to the cluster.
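
Scripted, that’s a short pipeline in the FailoverClusters module:

```powershell
# Show the disks that are eligible to be added to the cluster, then add them
Get-ClusterAvailableDisk
Get-ClusterAvailableDisk | Add-ClusterDisk
```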

Things have gotten a lot simpler with CSV in Windows Server 2012. No more enabling it with a funky warning message that’s well meant but rather confusing and annoying. You just right-click the disk, choose “Add to Cluster Shared Volumes” and that’s it.
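
The PowerShell equivalent is a one-liner, assuming the disk came in as "Cluster Disk 1":

```powershell
# Promote the clustered disk to a Cluster Shared Volume; it mounts under C:\ClusterStorage
Add-ClusterSharedVolume -Name "Cluster Disk 1"
```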

And there it is. That disk in our new cluster is ready to use as a CSV.

So we can now use a nifty new capability in Windows Server 2012 Hyper-V: “Register the virtual machine in-place” (use the existing unique ID).
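
If you prefer PowerShell over the wizard shown below, the new Hyper-V module should do the same register in-place import; a sketch, where the path to the VM's configuration XML is just an example:

```powershell
# Register the VM in place, keeping its existing unique ID; -Path points at the
# VM's configuration XML on the CSV (path and GUID here are hypothetical)
Import-VM -Path 'C:\ClusterStorage\Volume1\VM1\Virtual Machines\<VM-GUID>.xml' -Register
```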

The wizard starts.

Select the folder where your VM or VMs live. Yes, you can do multiple, provided your folder structure allows for it.

It’s found one VM in our folder.

We click Next.

We select “Register the virtual machine in-place” (use the existing unique ID) and click Next.

If something is not right, like some forgotten “saved” states, you’ll get a chance to dump those or cancel the process so you can deal with it properly before trying again.

If virtual network names do not match, you’ll get the opportunity to correct that by specifying which virtual switch to use.

If all was well in the first place, or after you’ve fixed any issues like the ones demonstrated above, you’re good to go. Click Finish and enjoy your Windows Server 2012 Hyper-V guest.
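
You can also surface these incompatibilities up front with Compare-VM, which builds a compatibility report without importing anything; a sketch, using the same hypothetical path as before:

```powershell
# Generate a compatibility report for the VM without actually importing it
$report = Compare-VM -Path 'C:\ClusterStorage\Volume1\VM1\Virtual Machines\<VM-GUID>.xml'

# List anything that would block or alter the import (missing switches, saved state, ...)
$report.Incompatibilities | Format-Table MessageId, Message -AutoSize
```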

At this point you can already start your VMs. I know the next step is to make all these VMs highly available, but here we have some good news as well: you can now make running VMs highly available. Yeah! They no longer need to be shut down. All this is done via the well-known process, so I’m not going to walk through it in its entirety here.
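
From PowerShell that’s one line per VM, and it should work against a running VM just like the GUI does; "VM1" is a placeholder:

```powershell
# Make the (running) virtual machine highly available on the new cluster
Add-ClusterVirtualMachineRole -VirtualMachine "VM1"
```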

Enhanced (Failover) Placement of Virtual Machines on a Windows Server 2012 Hyper-V Cluster

Among the nice features in Windows Server 2012 Hyper-V clustering are the “drain node” capability and virtual machine priorities. You can see this in action in a video by Aidan Finn here.

For more details on draining a node, see Draining Nodes for Planned Maintenance with Windows Server "8".
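
Draining is scriptable as well; a quick sketch against one of my node names:

```powershell
# Pause the node and drain it: running VMs are live migrated off, other roles moved
Suspend-ClusterNode -Name "saracen" -Drain

# When maintenance is done, resume the node and pull its roles back right away
Resume-ClusterNode -Name "saracen" -Failback Immediate
```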

Another very important feature is that the cluster is intelligent enough to determine which node is best suited as a target during failover or live migration. Not only are the CPU and memory load of the hosts taken into consideration, but also the resource needs of the VM and the priority you have given it. This entire process is NUMA aware and, as such, can be evaluated on a per-virtual-machine basis in Windows Server 2012. This means the cluster will always try to get the best possible placement, and thus performance, for your virtual machines.
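
The priority itself is just a property on the cluster group; a sketch, where "SQL-VM" is a hypothetical group name (3000 = High, 2000 = Medium, 1000 = Low, 0 = No Auto Start):

```powershell
# Favor an important guest during placement, start ordering and failover
(Get-ClusterGroup -Name "SQL-VM").Priority = 3000
```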

We also have affinity and anti-affinity rules now. Anti-affinity ensures that the nodes of a virtualized NLB farm will be placed on separate hosts to minimize risk. You don’t want one host to house all the nodes of your NLB farm!

On the other hand, sometimes you want virtual machines to stick together. Let’s say you have an NLB farm, but the virtual machines with the front end and middle tier need to stay together. In that case you use affinity rules to achieve this. On top of that, the anti-affinity rules will ensure that the NLB farm virtual machines end up on different nodes.
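
Anti-affinity is configured by giving the cluster groups of the farm members the same AntiAffinityClassNames value; a hedged sketch with placeholder group names:

```powershell
# Tag both NLB farm VMs with the same anti-affinity class name so the cluster
# tries to keep them on different nodes
$class = New-Object System.Collections.Specialized.StringCollection
$class.Add("NLBWebFarm") | Out-Null
(Get-ClusterGroup -Name "NLB-Web1").AntiAffinityClassNames = $class
(Get-ClusterGroup -Name "NLB-Web2").AntiAffinityClassNames = $class
```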

Do note that when the cluster has to choose between breaking these rules and not being able to run the virtual machines, it will choose to keep them running. It knows its priorities! If in such a situation there are not enough resources, the priorities will also come into play, and low-priority machines may be shut down to ensure the higher-priority ones can be up and running.

As you can imagine there are potentially a lot of factors/permutations at play here, and I’m looking into doing some more tests of these features and the intelligence in the process, to see if we would make the same decisions and how best to configure this for maximum performance & availability.

Learning Resources for Windows Server 2012 Clustering, WNLB & Hyper-V 3.0

I know a lot of you are, just like me, actively investigating Windows Server 2012 & Hyper-V. You’re downloading the Beta, reading blog posts, discussing scenarios and use cases, and that’s great. You’re investing in a bright future.

Here are some blog posts that contain gems of information and that I’d like to bring to your attention:

Enjoy!

Upgrading Hyper-V Cluster Nodes to Windows Server 2012 (Beta) – Part 1

This is a multipart series based on some lab tests & work I did.

  1. Part 1 Upgrading Hyper-V Cluster Nodes to Windows Server 2012 (Beta) – Part 1
  2. Part 2 Upgrading Hyper-V Cluster Nodes to Windows Server 2012 (Beta) – Part 2
  3. Part 3 Upgrading Hyper-V Cluster Nodes to Windows Server 2012 (Beta) – Part 3

After I got back from the MVP Summit 2012 in Bellevue/Redmond I couldn’t wait to start playing with a Windows Server 2012 Hyper-V cluster, so I decided to upgrade my Windows 2008 R2 cluster nodes to Windows Server 2012. That means evicting them one by one, upgrading them and adding them to a new Windows Server 2012 cluster. As we can build a one-node cluster, this can be done a node at a time. This isn’t a failproof, definitive “How To”; I’m just sharing what I did.

Evicting a node

Before evicting a node, make sure all virtual machines are running on the other node(s). As you can see, the cluster warrior has 2 nodes, crusader & saracen (I was listening to some Saxon heavy metal at the time I built that lab setup). We evacuated node saracen prior to evicting it.

Evict the node & confirm when asked.
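
On Windows 2008 R2 the FailoverClusters module can script both the evacuation and the eviction; a sketch using the node names from this lab ("VM1" is a placeholder):

```powershell
Import-Module FailoverClusters

# Check what is still running on the node we want to evict
Get-ClusterGroup | Where-Object { $_.OwnerNode -eq "saracen" }

# Live migrate a clustered VM over to the remaining node
Move-ClusterVirtualMachineRole -Name "VM1" -Node "crusader"

# Evict the now empty node from the cluster
Remove-ClusterNode -Name "saracen"
```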

When this is done, all storage is offline to the node evicted from the cluster. No need to worry about that.

Upgrade that node to Windows Server 2012

To anyone having installed/upgraded to Windows 2008 R2 this should all be a very recognizable experience. Being lazy, I left the iSCSI initiator configuration in there, with the Hyper-V & failover cluster roles installed, during the upgrade. Now, for production environments I like to build my nodes from scratch to have an exactly known, new and clean installation base, but for my test lab at home I wanted to get it done as fast as possible. If only the days had more hours … For extra safety you can pull the plug (or disable the switch ports) on your iSCSI or FC connections and make sure no storage is presented to the node during the upgrade process. Please do mind that I use Intel server-grade NIC adapters for which the Windows Server 2012 beta has drivers. Your situation may vary, so I can’t guarantee the 7-year-old FC HBA in your lab server will just work, OK!?

So run setup.exe from the Windows Server 2012 (Beta) ISO you extracted to a folder on the server, or from the (bootable) USB you created with the downloaded ISO.

The Windows Setup installer will start.

Click on “Install now” to proceed and start the setup process.

Select “Go online to get the latest updates for Setup (Recommended)”.

So it looks for updates online.

It didn’t find any but that’s OK.

Select the installation you want. I went with Server with a GUI as I want screenshots. But as I wrote in the blog post Windows 8 Server With GUI, Minimal Server Interface & Server Core Lesson with the Desktop Experience Feature, you can now turn it into a Server Core installation and back again. So no regrets with any choice you make here, which is a nice improvement that can save us a lot of time.
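
As a hedged sketch of what that conversion boils down to (feature names as I know them from the Windows Server 2012 builds; verify them on your beta build):

```powershell
# Turn a Server with a GUI installation into Server Core ...
Uninstall-WindowsFeature Server-Gui-Shell, Server-Gui-Mgmt-Infra -Restart

# ... and turn it back into a full GUI installation later
Install-WindowsFeature Server-Gui-Mgmt-Infra, Server-Gui-Shell -Restart
```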

Accept the EULA

We opt to upgrade (in production I go for a clean install).

I get notified that I have to remove PerfectDisk. I had an evaluation copy of Raxco PerfectDisk installed that I used to do some testing with redirected CSV traffic and defragmentation (see Some Feedback On How to defrag a Hyper-V R2 Cluster Shared Volume).

So the upgrade was cancelled.

I uninstalled PerfectDisk but still it was a no-go. I had to remove all traces of it in the registry & file system that the uninstall left behind, or the upgrade just wouldn’t start. But after that it worked.

That means we can kick off the upgrade! It all looks very familiar. It takes a couple of reboots and some patience, but all in all it’s a fast process.

After some reboots, and a screen that goes dark in between those, we get our restyled beta fish.

And voilà, we’re where we need to be …

After the upgrade process I ran into one error: the GUI for Failover Clustering would not start. The solution I found for that was simply to remove that feature and add it again. That did the trick.
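
If you want to script that fix, the Failover Cluster Manager GUI is an RSAT feature; a sketch (the feature name is what I know from later 2012 builds, so treat it as an assumption on the beta):

```powershell
# Remove and re-add the Failover Clustering management tools (the GUI)
Remove-WindowsFeature RSAT-Clustering-Mgmt
Install-WindowsFeature RSAT-Clustering-Mgmt
```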

So this was a description of the first steps to transition a Windows 2008 R2 SP1 cluster to a Windows Server 2012 (Beta) cluster. As seen, we evict the nodes one by one to upgrade them or do a clean install. In the latter case you’ll need to do the iSCSI initiator configuration again, install the Failover Clustering feature and, in the case of a Hyper-V cluster, the Hyper-V role. The nodes can then be added to a new Windows Server 2012 cluster, starting out with a one-node cluster. More on that in the second part of this blog post.
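
For completeness, creating that one-node cluster and growing it node by node is straightforward from PowerShell; a sketch with a hypothetical cluster name and address:

```powershell
# Create a new one-node cluster with the freshly upgraded node
New-Cluster -Name "warrior2012" -Node "saracen" -StaticAddress 192.168.1.100

# Later, as each remaining node is evicted and upgraded, join it to the new cluster
Add-ClusterNode -Name "crusader" -Cluster "warrior2012"
```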