Cluster Operating System Rolling Upgrade Leaves Traces

Introduction

When you perform a cluster OS rolling upgrade of a Windows Server 2012 R2 cluster to a Windows Server 2016 cluster, you have two options.

1. You evict the nodes, one after the other, perform a clean OS install and join them to the existing cluster.

2. You do an in-place OS upgrade of the nodes (there's no need to evict the nodes, but you can if you want to). I tested this and blogged about it in In Place upgrades of cluster nodes to Windows Server 2016.

Both options give you the benefit that you can keep your workloads (Hyper-V, SOFS, SQL Server) running and you don't have to create a new cluster to do so. The moment you add Windows Server 2016 nodes to an existing Windows Server 2012 R2 cluster you are running in mixed-OS mode. Until all your nodes have been upgraded to Windows Server 2016, the cluster will remain running in mixed-OS mode.

Illustration showing the three stages of a cluster OS rolling upgrade: all nodes Windows Server 2012 R2, mixed-OS mode, and all nodes Windows Server 2016

When there are only Windows Server 2016 nodes left you can decide to also upgrade the cluster functional level. This enables all the new capabilities in Windows Server 2016 Failover Clustering and also means you cannot go back to a Windows Server 2012 R2 cluster anymore. So, only take this step after a final validation of all drivers and firmware to make sure you don't need to go back and you're ready to fully commit to a fully functional Windows Server 2016 Failover Cluster.

A cluster operating system rolling upgrade does leave some traces, but that’s OK. Let’s take a look. 

This is what Get-Cluster output looks like against a Windows Server 2016 cluster that was upgraded from Windows Server 2012 R2.

Screenshot showing the Get-Cluster output of the upgraded cluster with ClusterFunctionalLevel 8 and ClusterUpgradeVersion 3

As you can see, the cluster functional level is 8 and not 9 yet. This means that we have not run the Update-ClusterFunctionalLevel command on this cluster yet, which still allows us to roll back all the way to a cluster running only Windows Server 2012 R2 nodes. The ClusterUpgradeVersion has a value of 3.
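If you want to check this on your own cluster, something like the quick PowerShell look below does the trick. Run it from a Windows Server 2016 node, as the properties only exist there:

    Get-Cluster | Format-List Name, ClusterFunctionalLevel, ClusterUpgradeVersion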

We now execute the Update-ClusterFunctionalLevel command and take a look at Get-Cluster again.

Screenshot showing the Get-Cluster output after Update-ClusterFunctionalLevel with ClusterFunctionalLevel 9 and ClusterUpgradeVersion 8

As you can see, we are now at cluster functional level 9, which enables all the capabilities offered by Windows Server 2016 Failover Clustering. The ClusterUpgradeVersion is now 8. That's the previous cluster functional level we were at before we executed Update-ClusterFunctionalLevel.
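For reference, the whole sequence in PowerShell is short. A minimal sketch, run from one of the Windows Server 2016 nodes once you are sure you won't need to roll back:

    # Rehearse first: -WhatIf reports whether the cluster is ready without committing anything.
    Update-ClusterFunctionalLevel -WhatIf
    # Commit to Windows Server 2016. This step cannot be undone.
    Update-ClusterFunctionalLevel
    # Verify the result.
    Get-Cluster | Format-List ClusterFunctionalLevel, ClusterUpgradeVersion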

Note that the ClusterFunctionalLevel and ClusterUpgradeVersion properties are only available with Windows Server 2016. You will not find them on a Windows Server 2012 R2 or lower cluster. If you run Get-Cluster from Windows Server 2016 against a Windows Server 2012 R2 cluster, both properties will be empty. If you run it on a Windows Server 2012 R2 host against a Windows Server 2012 R2 or lower cluster, or even against a Windows Server 2016 cluster, these properties are not there at all. The cmdlet on those OS versions is older and doesn't know about these properties yet.

What about when you create a brand-new cluster, perhaps even on freshly installed Windows Server 2016 nodes? What does ClusterUpgradeVersion have as a value then? Well, it's also 8. In the end, there is no difference between an in-place upgraded Windows Server 2016 cluster and a cleanly created one. So where are those traces?

Cluster Operating System Rolling Upgrade Leaves Traces

What gives a rolling upgrade away is that in the registry, under HKLM\Cluster, the OS and OSVersion values are not updated (purple in the picture below). This is a benign artifact and I don't know whether this is on purpose or not. I have changed them to Windows Server 2016 Datacenter as an experiment and I have not found any issues by doing so. Now, please don't take this as a recommendation to do so. The smartest and safest thing is to leave them alone. These values are not used, so don't worry about them.

Screenshot showing HKLM\Cluster on the upgraded cluster with the stale OS and OSVersion values (purple) and the ClusterFunctionalLevel value (green)

But even if you were to change those values, a cluster resulting from a cluster operating system rolling upgrade still has other ways of telling you it was not born as a Windows Server 2016 cluster.

Under HKLM\Cluster (and 0.Cluster) you'll find the value ClusterFunctionalLevel, which does not exist on a cleanly installed Windows Server 2016 cluster (green in the picture above). As you can see, this is a Windows Server 2016 cluster running at functional level 9.

There is even an extra key, OperatingVersion, under HKLM\Cluster that you will not find on a cleanly installed cluster either. Under that key there is also a Mixed Mode value, which indicates whether the cluster is still running in mixed mode or not.
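If you prefer not to browse the registry by hand, a small PowerShell sketch like the one below reads the same traces (OS, OSVersion, ClusterFunctionalLevel and the OperatingVersion key) on the node you run it on. It only reads; it changes nothing:

    # Read the values discussed above from the loaded Cluster hive (missing ones simply come back empty).
    Get-ItemProperty -Path 'HKLM:\Cluster' | Select-Object OS, OSVersion, ClusterFunctionalLevel
    # The OperatingVersion key (with its Mixed Mode value) only exists on a cluster that went through a rolling upgrade.
    Test-Path -Path 'HKLM:\Cluster\OperatingVersion'
    Get-ItemProperty -Path 'HKLM:\Cluster\OperatingVersion' -ErrorAction SilentlyContinue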

Screenshot showing the OperatingVersion key under HKLM\Cluster with its Mixed Mode value

Here is a screenshot of a newly installed/created Windows Server 2016 cluster. There is no ClusterFunctionalLevel value, the OS and OSVersion values are correct and there is no OperatingVersion key to be found.

Screenshot showing HKLM\Cluster on a freshly created Windows Server 2016 cluster: correct OS and OSVersion values, no ClusterFunctionalLevel value and no OperatingVersion key

What if you don’t like traces?

First of all, these traces are harmless. One thing you can do if you want to weed out all traces of a rolling upgrade (as far as the cluster is concerned) is to destroy the cluster and create a new one with the same CNO (and IP address, if that was a fixed one). This might be a bit more involved when it comes to CSV naming and other existing resources, but then these remnants will be gone in a supported way. Of course, this defeats one of the main purposes of this feature: no downtime. The operating system itself might also contain traces if you did in-place OS upgrades, but the cluster will not.

Just adapting OS/OSVersion and ClusterFunctionalLevel or deleting the OperatingVersion key from HKLM\Cluster (and HKLM\0.Cluster) are not supported actions, and messing around in the cluster registry keys can lead to problems, so don't! The advice is to just leave it all alone. Microsoft developed cluster operating system rolling upgrade the way they did for a reason, and leaving things as Microsoft has set or left them makes sure you are always in a fully supported condition.

So, use it if it fits the circumstances and you comply with all the prerequisites. Look at these traces as a flag of honor, not a smudge on your shining armor. When I see these artifacts, I see people who have used this feature to their own benefit. Well done, I say.
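For completeness, a minimal sketch of the destroy-and-recreate route mentioned above could look like the lines below. The cluster name DEMOCLU, the node names and the IP address are placeholders, and every role and resource has to be dealt with beforehand, so plan for downtime:

    # WARNING: this removes the cluster configuration. Document and evacuate all roles, CSVs and resources first.
    Remove-Cluster -Cluster 'DEMOCLU' -CleanupAD -Force
    # Recreate the cluster with the same CNO and, if it was fixed, the same IP address.
    New-Cluster -Name 'DEMOCLU' -Node 'NODE-A','NODE-B' -StaticAddress 192.168.2.100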

Learn more about the Cluster OS Rolling Upgrade process

Next to my blogs, like First experiences with a rolling cluster upgrade of a lab Hyper-V Cluster (Technical Preview) and In Place upgrades of cluster nodes to Windows Server 2016, there are many resources out there by fellow bloggers and Microsoft. A great video on the subject is Introducing Cluster OS Rolling Upgrades in Windows Server 2016 with Rob Hindman, who actually works on this feature and knows it inside out.

An important thing to keep in mind is that this process can be automated using PowerShell or by leveraging SCVMM for orchestration, for example. Third-party tools could also support this and help you automate the process in order to scale it when needed.
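To give you an idea, the per-node pattern you would automate looks roughly like this. It's a sketch only, run from a cluster node; the actual clean install or in-place upgrade of each node happens outside the loop, by hand or via SCVMM or other tooling:

    # Rough per-node pattern of a rolling upgrade.
    foreach ($node in (Get-ClusterNode).Name) {
        Suspend-ClusterNode -Name $node -Drain -Wait        # drain the workloads off the node
        # ... evict and clean install, or in-place upgrade, this node to Windows Server 2016 here ...
        Resume-ClusterNode -Name $node -Failback Immediate  # bring it back into the cluster
    }
    # Only when every node runs Windows Server 2016 and validation looks good:
    # Update-ClusterFunctionalLevel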

Finally, the official documentation can be found here: Cluster operating system rolling upgrade.

The Cluster and 0.Cluster Registry Hives

The cluster database

In a Windows Server Cluster the cluster database is where the cluster configuration gets stored. It’s a file called CLUSDB with some assisting files (CLUSDB.1.container, CLUSDB.2.container, CLUSDB.blf) and you’ll find those in C:\Windows\Cluster (%systemroot%\Cluster).
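You can list them quickly with PowerShell on any node:

    # Show the cluster database and its supporting files.
    Get-ChildItem -Path "$env:SystemRoot\Cluster" -Filter 'CLUSDB*'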

Screenshot showing the CLUSDB file and its supporting files in C:\Windows\Cluster

But the cluster database also lives in a registry hive that gets loaded when the cluster service starts. You'll find it under HKEY_LOCAL_MACHINE and it's called Cluster. You might also find a 0.Cluster hive on one of the nodes of the cluster.

Screenshot showing the Cluster and 0.Cluster hives under HKEY_LOCAL_MACHINE in Registry Editor

The 0.Cluster hive gets loaded on the node that is the owner of the disk witness. So if you have a cloud witness or a file share witness, this hive will not be found on any cluster node. Needless to say, if there is no witness at all it won't be found either.
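If you want to see for yourself which node has the 0.Cluster hive loaded, a quick check like the one below works, assuming PowerShell remoting is enabled on the nodes and you run it on a cluster node:

    # Report per node whether the 0.Cluster hive is currently loaded.
    Invoke-Command -ComputerName (Get-ClusterNode).Name -ScriptBlock {
        [pscustomobject]@{ Node = $env:COMPUTERNAME; Has0ClusterHive = Test-Path 'HKLM:\0.Cluster' }
    }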

On a lab cluster you can stop the cluster service and see that the registry hive or hives go away. When you restart the cluster service the Cluster hive will reappear. The 0.Cluster hive won't, as some other node has become the owner of the disk witness in the meantime, and even when the restarted cluster service gets its vote for the witness back, the 0.Cluster hive stays on that new owner node.

If you don’t close the Cluster or 0.Cluster registry hive and navigate to another key when you test this you’ll get an error message thrown that the key cannot be opened. It won’t prevent the cluster service from being stopped but you’ll see an error as the key has gone. If you navigate away, refresh (F5) you’ll see they have indeed gone.

So much for the introduction to the Cluster and 0.Cluster registry hives.

How is the cluster database kept in sync and consistent?

Good, so now we know the cluster database lives in multiple places and gets replicated between the nodes. That replication is paramount to a healthy cluster and it should not be messed with. You can see a value under the Cluster key called PaxosTag (see https://support.microsoft.com/en-us/kb/947713 for more information). That's where the version number lives that keeps track of any changes, and it is important in maintaining cluster DB consistency between the nodes and the disk witness – if present – as it drives the replication of changes.
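Reading it is harmless (writing to it is not), for example:

    # Read the current PaxosTag on this node.
    (Get-ItemProperty -Path 'HKLM:\Cluster' -Name PaxosTag).PaxosTag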

Screenshot showing the PaxosTag value under the Cluster key

You might know that certain operations require all the nodes to be online and some do not. When all nodes are required, you can be pretty sure it's a change that's paramount to the health of the cluster.

To demonstrate the PaxosTag, edit the cluster networks' live migration settings by enabling or disabling some networks.

Screenshot showing the live migration settings for the cluster networks being edited

Hit F5 on the Cluster (or 0.Cluster) registry hive and notice the tag has increased. That will be the case on all nodes!
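You can verify that across the cluster with a quick sketch like this, again assuming PowerShell remoting is enabled and you run it on a cluster node:

    # Compare the PaxosTag on every node; once replication has done its job they should all match.
    Invoke-Command -ComputerName (Get-ClusterNode).Name -ScriptBlock {
        [pscustomobject]@{ Node = $env:COMPUTERNAME; PaxosTag = (Get-ItemProperty -Path 'HKLM:\Cluster' -Name PaxosTag).PaxosTag }
    }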

As said, when you have a disk witness, the owner node of the witness disk also has the 0.Cluster hive, which gets loaded from the copy of the cluster DB that resides on the witness disk.

Screenshot showing the 0.hive copy of the cluster database and its supporting files in the Cluster folder on the witness disk

As you can see, you find a 0.hive file for the cluster DB and the equivalent supporting files (.container, .blf), just like you see under C:\Windows\Cluster, in the Cluster folder on the witness disk. Note that there is no reason to have a drive letter assigned to the witness disk. You don't need to go there; I only did so to easily show you the content.

Is there a functional difference between a disk witness and a file share or cloud witness?

Yes, a small one that you'll notice under certain conditions. Remember that a file share or cloud witness does not hold a copy of the cluster database. That also means there is no 0.Cluster hive to be found in the registry of the owner node. In the case of a file share witness you'll find a folder with a GUID for its name and some files in it, and with a cloud witness you'll see a file with the GUID of the ClusterInstanceID for its name in the storage blob. It's organized a bit differently, but the functionality of these two is exactly the same. This information is used to determine which node holds the latest change and, in combination with the PaxosTag, what should be replicated.
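Not sure which witness type your cluster uses? A quick check:

    # Show the configured quorum witness resource.
    Get-ClusterQuorum | Format-List *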

The reason I mention this difference is that the copy of the cluster DB on the disk witness gives it a small edge over the other witness types in certain scenarios.

Before Windows Server 2008 there was no witness disk but a "quorum drive". It always had the latest copy of the database. It acted as the master copy and was the source for replicating any changes to all nodes to keep them up to date. When a cluster was shut down and had to come up, the first node would download the copy from the quorum drive and then the cluster was formed. That reliance on the quorum copy was actually a single point of failure. So that has changed. The PaxosTag is paramount here. All nodes and the disk witness hold an up-to-date copy, which means the PaxosTag is the same everywhere. Any change, as you just tested above, updates the PaxosTag on the node you're working on and is replicated to every other node and to the disk witness.

So now, when a cluster is brought up, the first node you start compares its PaxosTag with the one on the disk witness. The higher (more recent) one "wins" and that copy is used. So either the local cluster DB is used and updates the version on the disk witness, or vice versa. No more single point of failure!

There’s a great article on this subject called Failover Cluster Node Startup Order in Windows Server 2012 R2. When you read this you’ll notice that the disk witness has an advantage in some scenarios when it comes to the capability to keep a cluster running and started. With a file share or cloud witness you might have to use -forcequorum to get the cluster up if the last node to be shut down can’t be started the first. Sure these are perhaps less common or “edge” scenarios but still. There’s a very good reason why the dynamic vote and dynamic witness have been introduced and it makes the cluster a lot more resilient. A disk witness can go just a little further under certain conditions. But as it’s not suited for all scenarios (stretched cluster) we have the other options.

Heed my warnings!

The cluster DB resides in multiple places on each node, both in files and in the registry. It is an extremely bad idea to mess around in the Cluster and 0.Cluster registry hives to clean out "cluster objects". You're not touching the CLUSDB file that way, or the PaxosTag used for replicating changes, and things go bad rather quickly. It's a bad situation to be in, and for a VM you tried to remove that way you might see:

  • You cannot live or quick migrate that VM. You cannot start that VM. You cannot remove that VM from the cluster. It’s a phantom.
  • Even worse, you cannot add a node to the cluster anymore.
  • To make it totally scary, a server restart ends up with a node where the cluster service won’t start and you’ve just lost a node that you have to evict from the cluster.

I have luckily only seen a few situations where people had registry corruption or “cleaned out” the registry of cluster objects they wanted to get rid of. This is a nightmare scenario and it’s hard, if even possible at all, to recover from without backups. So whatever pickle you get into, cleaning out objects in the Cluster and/or 0.Cluster registry hive is NOT a good idea and will only get you into more trouble.

Heed the warnings in the aging but still very relevant TechNet blog Deleting a Cluster resource? Do it the supported way!

I have been in very few situations where I managed to get out of such a mess, but it's a tedious nightmare and it only worked because I had the information I really needed to fix it. Once I succeeded with almost no downtime, which was pure luck. The other time the cluster was brought down, the cluster service on multiple nodes didn't even start anymore, and it was a restore of the cluster registry hives that saved the day. Without a system state backup of the cluster nodes you're out of luck and you have to destroy that cluster and recreate it. Not exactly a great moment for high availability.
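So make sure you have that system state backup. As an example, with the built-in Windows Server Backup feature and an available target volume (E: is just a placeholder), it can be as simple as:

    # One-time setup: install the Windows Server Backup feature.
    Install-WindowsFeature -Name Windows-Server-Backup
    # Take a system state backup of this node to the E: volume.
    wbadmin start systemstatebackup -backupTarget:E: -quiet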

If you decide to muck around in the registry anyway and then ask me for help, I'll only do so if it pays 2000 € per hour, without any promise or guarantee of results, and I bill a minimum of 24 hours. Just to make sure you never ever do that again.