Troubleshooting Intermittent Virtual Machine Network Connectivity

I was asked to take a look at an issue with virtual machines losing network connectivity. The problems were described as follows:

Sometimes some VMs had connectivity, sometimes they didn’t. It was not tied to specific virtual machines. Sometimes the problem was not there, then it showed up again. It was not an issue of a wrong subnet mask or gateway.

They suspected firmware or driver issues. Maybe it was a Windows NIC teaming bug, or problems with DVMQ or NIC offload settings. There are a lot of potential reasons; just Google “intermittent VM connectivity issues Hyper-V” and you’ll get a truckload of options.

So a round of wishful firmware and driver upgrading started, followed by a round of wishfully disabling network features. That’s one way to do it. But why not sit back and look at the issue?

Based on what they said I looked at the environment and asked whether it was tied to a specific host, as only VMs on one of the hosts had the issue. Could it be after a live migration or a VM restart? They didn’t really know, but it could be. So we started looking at the hosts. All teams for the vSwitch were correctly configured on all hosts. No tagged VLAN on the member NICs. No extra team interfaces that would violate the rule that there can be only one if the team is used by a Hyper-V switch. They used switch independent teaming mode with the load balancing mode set to Dynamic, all members active. Perfect.
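A quick way to double-check all of that from PowerShell on each host is sketched below; the team name is a made-up example, not the one from this environment:

    # List every team with its teaming mode, load balancing mode and members
    Get-NetLbfoTeam | Format-List Name, TeamingMode, LoadBalancingAlgorithm, Members, Status

    # The team interface used by the Hyper-V switch: there should be only one,
    # and it should not carry a VLAN ID
    Get-NetLbfoTeamNic -Team "vSwitchTeam" | Format-Table Name, VlanID, Primary -AutoSize

    # All members should be active, none in standby
    Get-NetLbfoTeamMember -Team "vSwitchTeam" | Format-Table Name, AdministrativeMode, OperationalStatus -AutoSize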

I asked whether they sometimes used tagged VLANs on the VMs. They said yes, which gave me a clue that they had trunking or general mode configured on the ports. So I looked at the switches to see what the port configuration was like. Guess what? All ports on both switches were correctly configured, bar the ports of the vSwitch team members on one Hyper-V host: the one with the problematic VMs. The two ports were in general mode, but the port on the top switch had PVID* 100 and the one on the bottom switch had PVID 200. That was the issue. If a VM “landed” on the team member with PVID 200, it had no network connectivity.
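Below is a small sketch of how you can see which VMs are tagged and which rely on the untagged (native/PVID) VLAN of the physical ports; nothing in it is specific to the environment above:

    # For every VM network adapter on this host, show the VLAN mode and access VLAN ID.
    # VMs in Untagged mode depend entirely on the PVID / native VLAN of the member ports.
    Get-VM | Get-VMNetworkAdapterVlan |
        Format-Table ParentAdapter, OperationMode, AccessVlanId -AutoSize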

[Diagram: Hyper-V vSwitch team with the wrong native VLAN (PVID) on one member port]

* PVID (switchport general pvid 200) is the default VLAN of the port; in Cisco speak that would translate into “native VLAN”, as in switchport trunk native vlan 200.

Yes, NIC firmware and drivers have issues. There are bugs or problems with advanced features once in a while. But you really do need to check that the configuration is correct and that the design or setup makes sense. Do yourself a favor by not assuming anything. Trust but verify.

Hyper-V Storage QoS in Windows Server 2016 Works on SOFS and on LUNs/CSVs

Introduction

I addressed storage QoS in Windows Server 2012 R2 at length in a couple of blog posts quite a while ago.

I love the capability and I use it in real life. I also discussed where we were still lacking features and capabilities. I addressed the fact that there is no multi-host QoS, no cluster-wide QoS and no storage-wide QoS in Windows. On top of that, if there is QoS in the storage array (not many have that), most of the time it has no knowledge of Hyper-V, the cluster or the virtual machines. There is one well-known exception, and that’s GridStore, possibly the only storage vendor that doesn’t treat Hyper-V as a second-class citizen.

Any decent storage QoS implementation not only provides maximums but also minimums, does this via policies, and is cluster, hypervisor and even virtual machine aware. It needs to be easy to implement and manage. This is not a very common feature, and where it exists it’s tied to the storage vendor, most of the time a startup or challenger.

Windows Server 2016

In Windows Server 2016 they are taking a giant step for all mankind in addressing these issues, at least in my humble opinion. You can read more here:

Basically, Microsoft enables us to define IOPS management policies for virtual machines based on their virtual hard disks, with IOPS reservations and limits that can be shared by a group of virtual machines / virtual hard disks. We get better resource allocation between VMs, or groups of VMs. These could be high-priority VMs or VMs belonging to a platinum customer/tenant. Storage QoS enhances what we have had since Windows Server 2012 R2: it enables us to monitor and enforce performance thresholds via policies on groups of VMs or on individual VMs.

Great for SLAs, but also to make sure a runaway VM that’s doing way too much IO doesn’t negatively impact the other VMs and customers on the cluster. They did this via a centralized policy controller. Microsoft Research really delivered here, I dare say. As a public cloud provider they must have invested a lot in this capability.
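To give you an idea of what that looks like, here’s a minimal sketch based on the cmdlets in the Technical Preview guide; the policy name, the IOPS numbers and the VM name are made up:

    # Create a policy with an IOPS reservation (minimum) and limit (maximum).
    # Run this on the storage cluster (SOFS) or on the Hyper-V CSV cluster.
    $policy = New-StorageQosPolicy -Name "PlatinumTenant" -MinimumIops 300 -MaximumIops 5000

    # Apply the policy to all virtual hard disks of a (hypothetical) VM named DemoVM01
    Get-VM -Name "DemoVM01" | Get-VMHardDiskDrive |
        Set-VMHardDiskDrive -QoSPolicyID $policy.PolicyId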

At Ignite 2015 there was a great session by Senthil Rajaram and Jose Barreto on this subject. Watch it for some more details.

What caught my eye after attending and watching sessions and talking to MSFT at the booth was the following, marked in red:

[Screenshot: slide from the session, with the relevant statement marked in red]

So it’s not enabled by default on non-SOFS storage, but can you enable it on your block-level CSV Hyper-V cluster? There is a lot of focus on Microsoft providing Storage QoS for SOFS, which ties into the “common knowledge” that virtualization and LUNs are a bad idea: you need file shares and insight into the files of the virtual machines to put intelligence into the hypervisor or storage system, right? Well, perhaps not! In Windows Server 2016 there is now also the ability to provide it for any block-level storage you use for Hyper-V. Yes, your low-end iSCSI SAN or your high-end 16Gbps FC SAN … as long as it’s leveraging CSV (and you should!). Yes, this is what they state in an awesome interview with my fellow Hyper-V MVP Carsten Rachfahl at Ignite 2015.

[Video interview thumbnail: Senthil and Jose talking Storage QoS with Carsten Rachfahl]

Senthil and Jose look happy and proud. They should be. I’m happy and proud of them actually, as to me this is huge. This information is also in the TechNet guide Storage Quality of Service in Windows Server Technical Preview.

Storage QoS supports two deployment scenarios:

Hyper-V using a Scale-Out File Server. This scenario requires both of the following:

  • Storage cluster that is a Scale-Out File Server cluster
  • Compute cluster that has at least one server with the Hyper-V role enabled
  • For Storage QoS, the Failover Cluster is required on Storage, but optional on Compute. All servers (used for both Storage and Compute) must be running Windows Server Technical Preview.

Hyper-V using Cluster Shared Volumes. This scenario requires both of the following:

  • Compute cluster with the Hyper-V role enabled
  • Hyper-V using Cluster Shared Volumes (CSV) for storage

Failover Cluster is required. All servers must be running Windows Server Technical Preview.

So let’s have a quick go following the TechNet guide on a lab cluster leveraging CSV over FC with a Dell Compellent.

Which gives me a running Storage QoS Resource:

[Screenshot: the Storage QoS Resource running in the cluster]
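You can verify that from PowerShell as well; in my lab the cluster resource is called “Storage Qos Resource”, but treat that name as an assumption and check what your cluster actually shows:

    # Check that the Storage QoS Resource exists and is online on the Hyper-V CSV cluster
    Get-ClusterResource -Name "Storage Qos Resource" |
        Format-Table Name, State, OwnerNode, ResourceType -AutoSize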

And I can play with the new PoSh commands … Get-StorageQosFlow, Get-StorageQosPolicy and Get-StorageQosVolume …

[Screenshot: output of the Storage QoS PowerShell cmdlets]

The guide is full of commands, examples and tips. Go play with it. It’s great stuff. I’ll blog more as I experiment.
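As a first taste, something as simple as the sketch below already tells you a lot; your output will obviously look different:

    # What policies exist, and what are their reservations and limits?
    Get-StorageQosPolicy | Format-Table Name, PolicyId, MinimumIops, MaximumIops -AutoSize

    # Per CSV volume: the aggregated IOPS and latency as seen by Storage QoS
    Get-StorageQosVolume | Format-Table Mountpoint, IOPS, Latency -AutoSize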

Here are my test VMs doing absolutely nothing, bar one on which I’m generating traffic. Even without a policy set, it shows the IOPS each VM is responsible for on the storage node.

You can dive into this command and get details about which virtual disks on which volumes are contributing to this, per storage node.

[Screenshot: per virtual hard disk and per volume details on each storage node]
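A hedged sketch of that kind of dive, using the property names from the TechNet guide:

    # Which virtual hard disk (FilePath), belonging to which VM (InitiatorName),
    # is generating how many IOPS on which storage node?
    Get-StorageQosFlow |
        Sort-Object StorageNodeIOPS -Descending |
        Format-Table InitiatorName, StorageNodeName, StorageNodeIOPS, FilePath -AutoSize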

More later, no doubt, but I just wanted to share this, as to me it is very important! You can have the cookie of your choice and eat it too! So the storage can be:

  1. Provided by SOFS (with PCI RAID, shared SAS, FC, FCoE or iSCSI storage as the backend storage, that doesn’t matter). In this case the Hyper-V nodes can be clustered or stand-alone.
  2. Any other block-level storage: iSCSI, FC or FCoE, it doesn’t matter, as long as you use CSVs. So yes, this is clustered only; that Storage QoS Resource has to run somewhere.

You know the saying that you can’t do storage QoS on a LUN because it can’t be tweaked to the individual VM and virtual hard disk? Well, that myth has been busted, it seems.

What’s left? Well, if you have SOFS against a SAN or other block-level storage, you cannot know whether that storage is also being used for workloads that are not Hyper-V, policies are not cross-cluster, and stand-alone hosts are a no-go without SOFS; the cluster is a requirement for this to work with non-SOFS Hyper-V deployments. Also, this has no deep knowledge of what’s happening inside your storage array. So it knows how many IOPS you get, but it’s unaware of the total IOPS capability of the entire storage system, of controller congestion, etc. Is that a big showstopper? No. The focus here is on QoS for virtualization, and the storage array’s behavior is always in flux anyway; it’s unpredictable by nature. Storage QoS is dynamic and it looks pretty darn promising to me! People, this is just great. Really great, and it’s quite unique as far as I can tell. Microsoft, you guys rock.

MS15-068: Vulnerability in Windows Hyper-V could allow remote code execution: July 14, 2015

Hi people, Hyper-V has been blessed with a very good security track record. The few security issues that did arise over the years have always been resolved quickly. Today it’s time to act fast once again and make sure you have your security & patching act together.

Note the following:

  • Microsoft has not identified any mitigating factors for this vulnerability.
  • Microsoft has not identified any workarounds for this vulnerability.

This security update resolves vulnerabilities in Windows that could allow remote code execution in a host context if a specially crafted application is run by an authenticated and privileged user on a guest virtual machine that is hosted by Hyper-V. An attacker must have valid logon credentials for a guest virtual machine to exploit this vulnerability.

To learn more about the vulnerability, see Microsoft Security Bulletin MS15-068.

This one is critical, so do not delay long after your smoke testing of this patch. You have some time to act, but don’t wait too long:

Microsoft received information about this vulnerability through coordinated vulnerability disclosure. When this security bulletin was originally issued Microsoft had not received any information to indicate that this vulnerability had been publicly used to attack customers.

Go and secure your environment wisely and effectively now.

Hyper-V Virtual Machines and the Storage Optimizer

Windows Server 2012 (R2) has made many improvements to how storage optimization and maintenance is done. You can read a lot more about this in What’s New in Defrag for Windows Server 2012/2012R2. It boils down to a more intelligent approach depending on the capability of the underlying storage.

This is reflected in the Media type we see when we look at Optimize Drives.

This is my workstation … looks pretty correct: a couple of SSDs and a couple of HDDs.

[Screenshot: Optimize Drives on my workstation showing SSD and HDD media types]

SSDs are optimized intelligently, by the way. When VSS is leveraged SSDs do get fragmentation, so once in a while they are “defragmented”. This has to do with keeping performance up to par. Read more about this in The real and complete story – Does Windows defragment your SSD? by Scott Hanselman.
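If you want to poke at this yourself, a quick look from PowerShell shows what the OS thinks it’s dealing with and what optimization it would run; the drive letter is just an example:

    # What media type does the OS see for each physical disk?
    Get-PhysicalDisk | Format-Table FriendlyName, MediaType, Size -AutoSize

    # Analyze a volume first ...
    Optimize-Volume -DriveLetter C -Analyze -Verbose

    # ... then run the default optimization, which depends on the detected media type
    # (defrag for HDD, retrim for SSD and thin provisioned storage)
    Optimize-Volume -DriveLetter C -Verbose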

The next example is a Hyper-V cluster. You can see the local disks identified as HDD and the CSVs as thin provisioned disks. Makes sense to me; the SAN I use supports thin provisioned disks.

[Screenshot: Optimize Drives on a Hyper-V cluster node, with local disks as HDD and CSVs as thin provisioned disks]

But now, let’s look at a virtual machine with virtual disks of every type known, on every type of storage we could find. All virtual disks are identified as “Thin provisioned disk”. How can that be?

[Screenshot: Optimize Drives inside the virtual machine, where every virtual disk shows up as a thin provisioned disk]

What had me puzzled a little bit is that in a virtual machine each and every virtual disk is identified as a thin provisioned disk. It doesn’t matter what type of virtual disk it is: fixed VHD/VHDX or dynamically expanding VHD/VHDX. It also doesn’t matter on what physical disk the virtual disk resides: SATA, SAS, SSD, a SAN (iSCSI/FC) LUN or CSV, an SMB share …

So how does this work with a fixed VHD on a local SATA disk? A VHD doesn’t know about UNMAP, does it? And a SATA HDD? How does that compute? Well, my understanding is that all virtual disks, dynamically expanding or fixed, both VHDX and VHD, are identified as thin provisioned disks no matter what type of physical disk they reside on (CSV, SAS, SATA, SSD, shared or non-shared). This allows UNMAP commands (RETRIMs in Storage Optimizer speak, a way of dealing with the TRIM limitations and imperfections; again, see Scott Hanselman’s blog for this) to be sent from the guest to the Hyper-V storage stack below. If it’s a VHD, those UNMAP commands are basically black-holed, just as they would never be passed down to a local SATA HDD (on the host) that has no idea what they are or what they’re used for.
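You can see this behavior from inside a guest by kicking off a retrim pass yourself; D: is just an example volume that lives on a virtual disk:

    # Run inside the virtual machine: send UNMAP (a retrim) for the free space on the volume.
    # Whether anything useful happens further down depends on the layers below
    # (VHDX versus VHD, the physical disk, the array), exactly as described above.
    Optimize-Volume -DriveLetter D -ReTrim -Verbose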

But wait a minute … what about SSDs and defragmentation, you say; my VHDX lives on an SSD. Well, for one, they are not identified as SSD or HDD. The hypervisor deals with storage optimization at the virtual layer. The host OS handles the physical layer as intelligently as it can to optimize the disks as best it can. How that happens depends on the actual storage beneath; in the case of a modern SAN you’ll notice it’s also identified as a thin provisioned disk. SANs or hyper-converged storage arrays provide you with storage that is itself virtual, with all kinds of features, and they are often based on tiered storage, a mix of SSD/SAS/NL-SAS and in some cases even NVMe flash. So what would the OS have to identify it as? The storage array must play its part in this.

So, if you ever wondered why that is, now you know. Hope you found this interesting!