July 2016 update rollup for Windows RT 8.1, Windows 8.1, and Windows Server 2012 R2

Microsoft recently released another update rollup (a.k.a. cumulative update): the July 2016 update rollup for Windows RT 8.1, Windows 8.1, and Windows Server 2012 R2 (KB3172614).

This rollup includes improvements and fixes, but more importantly it also contains the ‘improvements’ from the June 2016 update rollup KB3161606 and the May 2016 update rollup KB3156418. When it comes to the June rollup KB3161606, this one fixes the bugs that caused anything from concerns with the Hyper-V Integration Components (IC) to serious downtime for Scale-Out File Server (SOFS) users. My fellow MVP Aidan Finn discusses this in this blog post. Let’s say it caused a wrinkle in the community.

In short, with KB3161606 the Integration Components needed an upgrade (to 6.3.9600.18339), but due to a mix-up with the manifest files this failed. You could leave them in place, but it’s messy. To make matters worse, this cumulative update also messed up SOFS deployments, which could only be dealt with by removing it.

Enter update rollup KB3172614. This will install on hosts and guests whether they have KB3161606 already installed or not, and it fixes these issues. I have now deployed it on our infrastructure and the ICs updated successfully to 6.3.9600.18398. The issues with SOFS are also resolved by this update. We have not seen any problems so far.
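If you want to verify where your guests landed, you can query the integration services version from the host. A minimal sketch (run it on each Hyper-V host; the version number is the one mentioned above):

```powershell
# List the integration services version the host sees for each VM;
# after KB3172614 has done its job the guests should report 6.3.9600.18398
Get-VM | Sort-Object Name |
    Format-Table Name, IntegrationServicesVersion, IntegrationServicesState
```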


In short, KB3161606 should be gone from Windows Update and WSUS. If it was already installed, you don’t need to remove it: KB3172614 will install on those servers (hosts and guests) and this time it does things right.
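If you want to check which of the rollups is present on a given host or guest, Get-HotFix does the trick. A quick sketch:

```powershell
# Check whether the troublesome June rollup and/or the fixed July rollup are installed
Get-HotFix -Id KB3161606, KB3172614 -ErrorAction SilentlyContinue |
    Format-Table HotFixID, Description, InstalledOn
```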

I hope this leads to better QA in Redmond, as the current state of affairs really is causing a lot of people grief. It also feeds the conspiracy nuts’ theories that MSFT is sabotaging on-premises deployments to promote Azure usage even more. Let’s not feed the trolls, shall we?

SOFS / SMB 3 Offers Best VM Resiliency Experience

I have blogged about virtual machine resiliency in Windows Server 2016 failover clustering before, in Testing Virtual Machine Compute Resiliency in Windows Server 2016.

Those tests and demos were done with block-level storage: CSV on Fibre Channel, iSCSI or shared SAS. Today we’ll look at the experience when you’re running your VMs on a continuously available file share on a Scale-Out File Server (SOFS). This configuration offers the best possible experience.

Why? Well, when a cluster node is in Isolated mode, this has no impact on the SOFS share, as that share is a resource external to the Hyper-V cluster. In other words, it remains online. This means the VMs keep running, even though they lose their high availability while the node is Isolated. After all, there is nothing wrong with Hyper-V itself. With block-level CSV storage, the isolated node loses access to the storage, as that storage is a cluster resource. That’s why the VMs go into a paused-critical state during a transient failure with block-level storage, but they don’t when you’re using SOFS.
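You can watch this happen from PowerShell as well. A minimal sketch (FailoverClusters module, run from a surviving node):

```powershell
# During a transient failure the affected node shows up as Isolated
Get-ClusterNode | Format-Table Name, State

# With SOFS-backed VMs the cluster groups stay Online; with block-level CSV
# you'd see the VMs drop into a paused-critical state instead
Get-ClusterGroup | Format-Table Name, State
```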


The virtual machine compute resiliency feature in action shows you that the VMs survive a transient failure without issues. Your services need never know something was up. Even a recurring transient failure doesn’t have to mean downtime: the node will be quarantined and, once it comes back up, the workload will be live migrated away.


You can watch a video of this in action here on Vimeo.

The quarantine threshold and duration, as well as the resiliency period, can be tweaked to fit your environment and get the best possible results.
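These knobs live on the cluster object. A sketch showing what I understand to be the Windows Server 2016 defaults (tune to your own environment, these are not recommendations):

```powershell
$cluster = Get-Cluster
$cluster.ResiliencyDefaultPeriod = 240    # seconds a node may run Isolated (default 240)
$cluster.QuarantineThreshold     = 3      # failures within an hour before quarantine (default 3)
$cluster.QuarantineDuration      = 7200   # seconds a quarantined node is kept out (default 7200)
```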


SMB 3 for the win! This is yet one more convincing argument to start looking into SOFS and leveraging the capabilities of SMB 3. Remember that you can run an SOFS cluster against your existing shared storage to get started, if it can deliver the IOPS and latency you require. But also look into Storage Spaces, especially Storage Spaces Direct, which avoids some of the drawbacks SANs have in such a scenario. It’s high time for storage vendors to really scale out, implement SMB 3 well and completely, and keep the great added-value features they already have in their offerings. It’s that or becoming yet a bit more irrelevant in today’s storage scene in the Microsoft ecosystem.

Simplified SMB Multichannel and Multi-NIC Cluster Networks

One of the seemingly small feature enhancements in Windows Server 2016 failover clustering is simplified SMB multichannel and multi-NIC cluster networks. In Windows Server 2016, failover clustering now recognizes and uses multiple NICs on the same subnet for cluster networking (cluster and client access).


Why was this introduced?

The growth in the capabilities of the hardware (compute, memory, storage and networking) meant that failover clustering had to leverage those capabilities more easily and for more use cases than before. Take SMB: it is now used not “only” for CSV and live migration, but also for Storage Spaces Direct and Storage Replica.

  • It gives us better utilization of the network capabilities and throughput with Storage Spaces Direct, CSV, SQL Server, Storage Replica, etc.
  • Failover clustering now works with multichannel like any other workload, without the extra requirement of multiple subnets. This is more important than it seemed to me at first: in many environments, getting another VLAN and/or an extra subnet is a hurdle. Well, that hurdle is gone (see the snippet after this list).
  • For IPv6 link-local subnets it just works; these are auto-configured as cluster-only networks.
  • The cluster validation wizard won’t nag about it anymore and knows it’s a valid failover cluster configuration.
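Here’s that snippet: a quick way to see the multi-NIC cluster network and to confirm SMB multichannel is using all interfaces (a sketch; names will obviously differ in your environment):

```powershell
# Multiple NICs in the same subnet now surface as one cluster network
# with several interfaces per node
Get-ClusterNetwork | Format-Table Name, Role, Address
Get-ClusterNetworkInterface | Format-Table Name, Node, Network, Adapter

# Confirm SMB multichannel is actually spreading traffic across the NICs
Get-SmbMultichannelConnection
```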

See it in action!

You can find a quick demo of simplified SMB multichannel and multi-NIC cluster networks on my Vimeo channel here.


In this video I demo two features. One is new: virtual machine compute resiliency. The other is an improved feature: simplified SMB multichannel and multi-NIC cluster networks. The multichannel demo is the first part of the video. Yes, it’s with RDMA (RoCEv2); you know I just have to do SMB Direct when I can!
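If you want to confirm RDMA is in play on your own setup, something like this will do (a sketch, not part of the video):

```powershell
# Are the adapters RDMA capable and is RDMA enabled on them?
Get-NetAdapterRdma | Format-Table Name, Enabled

# The SMB client's view: RdmaCapable should be True for the relevant NICs
Get-SmbClientNetworkInterface | Format-Table FriendlyName, RdmaCapable, LinkSpeed
```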

You can read more about simplified SMB multichannel and multi-NIC cluster networks on TechNet here. Happy reading!

Get-VMHostSupportedVersion

I wrote about this little gem of a PowerShell cmdlet, Get-VMHostSupportedVersion, before here (there’s a bit more info on the impact of a VM configuration version in that blog post). Now at TPv5 I took a new peek, and what do we find?


We now have virtual machine configuration version 7.1 at TPv5. We also got two new version IDs: 254.0 for Prerelease and 255.0 for Experimental. Clearly Microsoft has plans here. I’ll update this blog with a link to the documentation when I find it.
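For those who want to check on their own lab hosts, the commands are simply these (on TPv5 the list includes 7.1 plus the 254.0 Prerelease and 255.0 Experimental entries mentioned above):

```powershell
# All VM configuration versions this host can run
Get-VMHostSupportedVersion

# The version a new VM gets by default on this host
Get-VMHostSupportedVersion -Default
```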

All bets are open as to where we’ll land at RTM for the virtual machine configuration version. I’m guessing that we’re feature complete at Technical Preview 5 but version numbers can get funky. Will all TP version be supported at RTM? Normally upgrades from beta / preview versions are not supported but on the other hand some people in early adopter programs are working on it already so I’m guessing they will. We’ll see, but that’s where I put my money.
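For completeness: once you do decide to move a VM forward, Update-VMVersion is the cmdlet that bumps it to the host’s current configuration version. A sketch with a hypothetical VM name (the VM needs to be shut down, and there’s no way back afterwards):

```powershell
# Upgrade a (shut down) VM to the host's current configuration version;
# 'DemoVM' is a placeholder name
Get-VM -Name 'DemoVM' | Update-VMVersion -Confirm:$false
```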