Windows Server 2012 Cluster Reset Recent Events Feature

There are various small improvements in Windows Server 2012 Failover Clustering that make life a little easier. When playing in the lab, one of the things I like to do is break stuff. You know, like pulling the power plug of a host during a live migration, removing a network cable for one or more of the networks, flipping the power of the switch off and on again, crashing the vmms.exe process and other really bad things … Just getting a feel for what happens and how Windows Server 2012 & Hyper-V respond.

As you can imagine this fills up the cluster event logs real fast, and it informs you that you’ve had issues in the past 24 hours. Up to Windows Server 2008 R2 those recent cluster events could not be cleared or set to “acknowledged” except by deleting the log files. That has to be done on all nodes, is something you should not do in production, and is probably even prohibited; there are environments where this is indeed a “resume generating” action. But it’s annoying that you can’t leave a client with a healthy looking environment after you have fixed an issue.


For the lab, or environments where event log auditing is not an issue, I used to run a little script that would clear the event logs of the lab cluster nodes, so I wouldn’t have to deal with too much noise between tests and could leave behind a GUI that represents the healthy state of the cluster for the customer.
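
As an illustration, here’s a minimal sketch of what such a cleanup script can look like, assuming you have administrative rights on all nodes and PowerShell remoting is enabled. The cluster name “LabCluster” is a placeholder, and this is for lab use only, obviously:

# Clear the failover clustering operational log on every node of a lab cluster.
Import-Module FailoverClusters
$nodes = Get-ClusterNode -Cluster "LabCluster" | Select-Object -ExpandProperty Name
foreach ($node in $nodes) {
    Invoke-Command -ComputerName $node -ScriptBlock {
        # wevtutil cl clears the given event log channel on the local node.
        wevtutil.exe cl "Microsoft-Windows-FailoverClustering/Operational"
    }
}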

This has become a lot easier and better in Windows Server 2012: we now have a feature for this built into the Failover Cluster Manager GUI. Just right-click the cluster events and select “Reset Recent Events”.


The good thing is that this hides the recent events from before “now”, but it does not clear the event log; you can configure the query to show the older events again. This is nice during testing in the lab. Even in a production environment, where clearing logs is a big no-no, you can now get rid of the noise from previous issues, focus on the problem you’re working on, or leave the scene with a clean state after fixing an issue without upsetting any auditors.


Configuring Jumbo Frames with PowerShell in Windows Server 2012

During lab and test time with Windows Server 2012 Hyper-V, some experimenting with PowerShell is needed to try and automate actions and settings. One of the things we have been playing around with is how to enable and configure jumbo frames.

Many advanced features like Large Send Offload have cmdlets of their own (Enable-NetAdapterLso etc.), but not all of them do, and jumbo frames is one of the latter. For those advanced features you can use the NetAdapterAdvancedProperty cmdlets (see Network Adapter Cmdlets in Windows PowerShell). You can then set/enable those features via their registry keywords & values. Let’s say we want to enable jumbo frames on a virtual adapter named “ISCSI” in a VM.


To know what values to use you can run:

Get-NetAdapterAdvancedProperty -Name ISCSI


As you can see, Jumbo Packet has a RegistryValue of 1514 and a DisplayValue of “Disabled”. You can also see that the RegistryKeyword to use to enable and configure jumbo frames is “*JumboPacket”. So to enable jumbo frames you run the following command:

Set-NetAdapterAdvancedProperty -Name "ISCSI" -RegistryKeyword "*JumboPacket" -RegistryValue 9014


The RegistryValue is set to 9014 and the DisplayValue is set to “9014 Bytes”, i.e. it’s enabled.
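
Once it’s enabled, you can verify that jumbo frames actually make it across the wire end to end with a simple “don’t fragment” ping. A quick sketch, assuming an MTU of 9000 on the path; the target IP is a placeholder for your iSCSI target:

# -f sets the "don't fragment" flag, -l sets the ICMP payload size.
# 8972 bytes = 9000 bytes MTU minus 20 bytes IP header and 8 bytes ICMP header.
ping.exe -f -l 8972 10.10.10.10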

If you type in a disallowed value it will list the accepted values. Please note that these can differ from NIC to NIC depending on what is supported: some will only show 1514 and 4088, while others will show 1514, 4088 and 9014.
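
You don’t have to find this out by trial and error, though. Here’s a small sketch of how you can query the values a given NIC accepts up front, using the ValidRegistryValues property of the objects the cmdlet returns:

# List the jumbo packet values this particular NIC's driver will accept.
Get-NetAdapterAdvancedProperty -Name "ISCSI" -RegistryKeyword "*JumboPacket" |
    Select-Object -ExpandProperty ValidRegistryValues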


Now, to disable jumbo frames, you just need to reset the RegistryValue back to 1514:

Set-NetAdapterAdvancedProperty -Name "ISCSI" -RegistryKeyword "*JumboPacket" -RegistryValue 1514

After running this command, the DisplayName Jumbo Packet has a DisplayValue of “Disabled” again.


Let’s say you want to enable jumbo frames on all network adapters in a host. You can run this:

Get-NetAdapterAdvancedProperty -DisplayName "Jumbo Packet" | Set-NetAdapterAdvancedProperty -RegistryValue "9014"

Or run

Set-NetAdapterAdvancedProperty -Name * -RegistryKeyword "*JumboPacket" -RegistryValue 9014

I didn’t notice much difference in speed when testing this with Measure-Command.
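
For reference, this is roughly how you can time both variants yourself; a sketch, and the timings will of course vary from system to system:

# Time the pipeline variant.
Measure-Command {
    Get-NetAdapterAdvancedProperty -DisplayName "Jumbo Packet" |
        Set-NetAdapterAdvancedProperty -RegistryValue "9014"
}

# Time the wildcard variant.
Measure-Command {
    Set-NetAdapterAdvancedProperty -Name * -RegistryKeyword "*JumboPacket" -RegistryValue 9014
}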

If you mess things up too much and you want to return all DisplayName settings to a well-known status, i.e. the defaults, you can run:

Reset-NetAdapterAdvancedProperty -Name ISCSI -DisplayName *

If you’ve just messed around with the jumbo frame settings, run:

Reset-NetAdapterAdvancedProperty -Name ISCSI -DisplayName "Jumbo Packet"

Or you can do the same for all network adapters:

Reset-NetAdapterAdvancedProperty -Name * -DisplayName "Jumbo Packet"

There you go, you’re well on your way to doing the more advanced configuration of your network setup. Enjoy!

Transition a Windows Server 8 Beta Hyper-V Cluster to Windows Server 2012 Release Candidate

For those of you interested in moving your lab from Windows Server 8 Beta to Windows Server 2012 Release Candidate, I can refer you to my 3-part blog series on Upgrading Hyper-V Cluster Nodes to Windows Server 2012 (Beta).

  1. Upgrading Hyper-V Cluster Nodes to Windows Server 2012 (Beta) – Part 1
  2. Upgrading Hyper-V Cluster Nodes to Windows Server 2012 (Beta) – Part 2
  3. Upgrading Hyper-V Cluster Nodes to Windows 8 (Beta) – Part 3

The entire process is very similar, except that to move off Windows Server 8 Beta you have to do a clean install on every node you evict during the process. An upgrade is not supported and not possible. I even tried the old trick of editing the cversion.ini file in the sources folder to lower the supported minimum version for an upgrade, but no joy.

You probably remember this trick to enable an upgrade from the beta/RC to RTM with Windows Server 2008 R2/Windows 7. But it doesn’t work here, and even if it did, it would not be supported.

But just follow the 3-part series and do a fresh install instead of an upgrade of the cluster nodes and you’ll be just fine.

Windows Server 2012 Supports Data Center Bridging (DCB)

Data Center Bridging (DCB) is a collection of standards-based end-to-end networking technologies that allow Ethernet to act as the unified fabric for multiple types of traffic in the data center. You cannot put a bunch of traffic types/protocols on the same physical pipes if you have no way of guaranteeing that each of them will get what it needs when it needs it, based on priority and impact. Even with ludicrous overprovisioning you could still run into issues, and even when you don’t, it’s a very expensive option. When you think about iSCSI, Remote Direct Memory Access (RDMA) and Fibre Channel over Ethernet (FCoE), you can see where the benefits are to be found. We just can’t keep adding network infrastructure after network infrastructure for all these applications on a large scale. In short, DCB:

  • Integrates with the standard Ethernet networks
  • Prevents congestion in the NIC & network by reserving bandwidth for particular traffic types, giving better performance for all
  • Windows Server 2012 provides support & control for DCB and allows packets to be tagged by traffic type
  • Provides lossless transport for mission-critical workloads

You can see why this can be handy in a virtualized world evolving into “*” cloud infrastructure (private, public, hybrid, you name it). By enabling multiple traffic types to use one Ethernet fabric you can simplify & reduce the network infrastructure (hardware & cabling). In some environments this is a big deal. Imagine that a cloud provider does storage traffic over Ethernet on the same hardware infrastructure as the rest of the Ethernet traffic: you can get rid of the isolated storage-specific switches and HBAs, reducing complexity and operational costs. Potentially even equipment costs; I say potentially because I’ve seen the cost of some unified fabric switches, and I think your mileage may vary depending on the scale and nature of your operations.

Requirements for Data Center Bridging

DCB is based on four specifications by the DCB Task Group:

  1. Enhanced Transmission Selection (IEEE 802.1Qaz)
  2. Priority Flow Control (IEEE 802.1Qbb)
  3. Data Center Bridging Exchange protocol (DCBX)
  4. Congestion Notification (IEEE 802.1Qau)

3. & 4. are not strictly required but optional (and beneficial) if I understand things correctly. If you want to dive a little deeper have a look here at the DCB Capability Exchange Protocol Specification and have a chat with your network people on what you want to achieve.

You also need support for DCB in the switches and in the network adapters.

Finally, don’t forget to run Windows Server 2012 as the operating system. You can find some more information in the Data Center Bridging (DCB) Overview on TechNet, but it is incomplete. More information is coming!
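
Meanwhile, to give you an idea of what this looks like in practice, here is a minimal sketch of configuring DCB for SMB traffic with the Windows Server 2012 QoS cmdlets. The priority value, the bandwidth percentage and the adapter name “RDMA1” are example values picked for illustration, not recommendations:

# Install the Data Center Bridging feature.
Install-WindowsFeature Data-Center-Bridging

# Classify SMB Direct traffic (port 445) and tag it with 802.1p priority 3.
New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3

# Enable Priority Flow Control (the lossless part) for that priority only.
Enable-NetQosFlowControl -Priority 3

# Reserve a minimum bandwidth share for the SMB traffic class via ETS.
New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 40 -Algorithm ETS

# Apply the DCB configuration to the network adapter.
Enable-NetAdapterQos -Name "RDMA1"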

Understanding what it is and does

So, in the same metaphor of a traffic situation as we used with Data Center TCP, we can illustrate the situation & solution with traffic lanes for emergency services and the like. Instead of having your mission-critical traffic stuck in gridlock like a fire department truck in rush hour, you could assign it a reserved lane: QoS, a guaranteed minimal bandwidth for that mission-critical service. While you’re at it, you might do the same for some less critical services that nonetheless provide a big benefit to the overall situation as well.