Musings On Switch Embedded Teaming, SMB Direct and QoS in Windows Server 2016 Hyper-V

When you have been reading up on what's new in Windows Server 2016 Hyper-V networking you have probably read about Switch Embedded Teaming (SET). Basically, SET takes the concept of teaming and moves it into the vSwitch, which means you no longer have to team at the host level. The big benefit this opens up is that RDMA can be leveraged on vNICs. With host based teaming the RDMA capabilities of your NICs are no longer exposed, i.e. you can't leverage RDMA. With SET that has become possible, and that's pretty big.
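As a quick illustration, here's a minimal sketch of what that looks like in PowerShell on a host running the preview; the switch, NIC and vNIC names are placeholders I made up for this example.

# Create a vSwitch with Switch Embedded Teaming across two physical NICs (names are hypothetical)
New-VMSwitch -Name "ConvergedSET" -NetAdapterName "NIC1","NIC2" -EnableEmbeddedTeaming $true
# Add a host vNIC and enable RDMA on it, something host based teaming never allowed
Add-VMNetworkAdapter -ManagementOS -Name "SMB1" -SwitchName "ConvergedSET"
Enable-NetAdapterRdma -Name "vEthernet (SMB1)"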


With the rise of 10, 25, 40, 50 and 100Gbps NICs and switches the lure of going fully converged becomes ever stronger. Given the fact that we no longer lose RDMA capabilities on the vNICs exposed to the host, that call only grows louder for many. But wait, there's even more to lure us to a fully converged solution: we no longer lose RSS on those vNICs either! All good news.

I have written an entire whitepaper on convergence and its benefits, drawbacks, risks & rewards. I will not repeat all of that here. One point I do need to make is that lossless traffic and QoS are paramount to the success of fully converged networking. After all, we don't want lossy storage traffic and we need to assure adequate bandwidth for all our types of traffic. For now, in Technical Preview 3, we have support for Software Defined Networking (SDN) QoS.

What does that mean in regards to what we already use today? There is no support for native QoS and vSwitch QoS in Windows Server 2016 TPv3. There is, however, mention of DCB (PFC/ETS), which is hardware QoS, in the TechNet docs on Remote Direct Memory Access (RDMA) and Switch Embedded Teaming (SET). Cool!

But wait a minute. When we look at all the kinds of traffic in a converged Hyper-V environment we see CSV (storage) traffic, live migration (all variations) and backups over SMB3 all potentially leveraging SMB Direct. Due to the features and capabilities in SMB3 I like that. Don't get me wrong about that. But it also worries me a bit when it comes to handling QoS on the hardware side of things.

In DCB, Priority Flow Control (PFC) is the lossless part and Enhanced Transmission Selection (ETS) is the minimum bandwidth QoS part. But how do we leverage ETS when all types of traffic use SMB Direct? On the host it all gets tagged with the same priority. ETS works by tagging different priorities to different workloads and assuring each a minimal bandwidth out of a total of 100%, without reserving it for a workload that doesn't need it. Here's a blog post on ETS with a demo video: DCB ETS Demo with SMB Direct over RoCE (RDMA).
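To make the issue concrete, below is a minimal sketch of a typical DCB configuration for SMB Direct today; the NIC name and the 50% reservation are just example values, not recommendations. Everything SMB Direct sends lands in that single priority 3 traffic class, which is exactly the point I'm making.

# Tag SMB traffic (TCP port 445) with priority 3 and make that priority lossless (PFC)
New-NetQosPolicy -Name "SMBDirect" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3
Enable-NetQosFlowControl -Priority 3
Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7
# Reserve a minimum bandwidth share for that priority via ETS (example percentage)
New-NetQosTrafficClass -Name "SMBDirect" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS
Enable-NetAdapterQos -Name "NIC1"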

Does this mean an SDN QoS-only approach to deal with the various types of SMB Direct traffic, or do they have some aces up their sleeves?

This isn't a new “concern” of mine, but with SET and the sustained push for convergence it does have the potential to become an issue. We already have the SMB bandwidth limitation feature for live migration. That's what is used to prevent live migration from starving CSV traffic when needed. See Preventing Live Migration Over SMB Starving CSV Traffic in Windows Server 2012 R2 with Set-SmbBandwidthLimit.
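For reference, a minimal sketch of that feature; the 2GB value is only an example, not a recommendation.

# Requires the SMB Bandwidth Limit feature (FS-SMBBW) to be installed
Add-WindowsFeature FS-SMBBW
# Cap live migration over SMB so it cannot starve CSV traffic (example value)
Set-SmbBandwidthLimit -Category LiveMigration -BytesPerSecond 2GB
Get-SmbBandwidthLimit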

Now in real life I have rarely, if ever, seen a hard need for this. But it's there to make sure you have something when needed. It hasn't caused me issues yet, but I'm performance & scale first, in a “non-economies of scale” world compared to hosters. As such, convergence is a tool I use with moderation. My testing shows that when traffic competes without ETS, every type gets part of the cake, but not in a very predictable or consistent way. SMB bandwidth limitation is a bit of a “bolted on” solution: you can see the perf counters push down the bandwidth in an epic struggle to contain it, but as said, it's a struggle, not a nice flat line.

Also, Set-SmbBandwidthLimit is not a percentage but a hard maximum bandwidth limit, so when you lose a SET member the math is off and you could be in trouble fast. Perhaps it's these categories that could or will be used, but it doesn't seem like the most elegant solution/approach. That, with ever more traffic leveraging SMB Direct, makes me ever more curious. Some switches offer up to 4 lossless queues now, so perhaps leveraging more priorities is the way to go … Interesting stuff! My preferred and easiest QoS tool, getting even bigger pipes, is an approach that convergence and the evolution of network needs keep pushing aside. Anyway, I'll be very interested to see how this is dealt with. For now I'll conclude my musings on Switch Embedded Teaming, SMB Direct and QoS in Windows Server 2016 Hyper-V.

Virtual Network Appliances I Use for Hyper-V Labs

When you build and maintain a test lab you're always on the lookout for gear you can use, either hardware or virtual appliances. My main concerns are cost, that it works well on Hyper-V and the ability to mimic real world environments. That's a great help for educational purposes as well as for testing and as an aid to troubleshooting. One of the nice things virtualization, and now also cloud IaaS, offers is the ability to run virtual storage and network appliances that give us that real world look and feel. Add to that ever more software defined storage, networking and compute and we're able to build very realistic labs. The limits we're left with are time, money and space.

When building a lab some people tend to run into perceived limitations of their hypervisor. That's to be expected, as for many that hypervisor is just something to quickly get up and running so they can get to work writing code, implementing a backup solution or whatever the workload at hand is all about. The tip here is not to give up too fast.

More recently I'm building a new lab setup simulating different sites. I need to route between these isolated test networks and load balance traffic in a site redundant manner. The idea was to mimic real life as well as we could. Add to that lab setup an Azure “site” and it's fun all over. It's all based on Hyper-V and Windows Server virtual machines, but some components are not. Windows NLB has had its best days and RRAS is limited in the abilities I need to test. They can and do work fine for certain scenarios, but not for all that I need to test. So I add virtual load balancers, virtual switches with the look and feel of physical ones and the same for virtual firewalls.

Now in real life you'll be dealing with Link Aggregation Groups, trunking, MLAG, routing, teaming … in short, the tools of the trade when doing networking. One side effect of this is that on a Hyper-V host you quickly run out of physical network ports to work with. That's not a problem; in real life your firewall or load balancer does not have 48 ports either. Often you have 4 to 8 ports at your disposal, sometimes more, and depending on the complexity that's more than enough or not nearly enough. Trunking & VLANs are the way we deal with this. In the Hyper-V GUI you will not find a way to define a trunk on a vNIC attached to a vSwitch, but this can be done via PowerShell. So please do not reject Hyper-V as not being up to the job. It is! Read about this in my blog post; a quick taste follows below.
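As a small example (the VM name and VLAN IDs are made up for the lab), this is the sort of thing PowerShell exposes that the GUI does not:

# Turn the vNIC of a virtual router/firewall into a trunk carrying several VLANs
Set-VMNetworkAdapterVlan -VMName "LabRouter01" -VMNetworkAdapterName "Network Adapter" -Trunk -AllowedVlanIdList "10,20,30" -NativeVlanId 10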

People often ask me what virtual network appliances I use for Hyper-V labs. This does vary over time, but there are some constants. In the lab I hate wasting time on time bombed trials, so I avoid those in favor of either fully featured solutions or free open source alternatives. Smart vendors provide the easiest access possible to their solutions. They realize that easy access delivers the ability to learn and test every aspect of their products, which makes a huge difference in the success of their offerings in the real world. When it comes to load balancers I use the KEMP Virtual LoadMasters. You can read more about these, in projects and lab testing, in my blogs about the KEMP (Virtual) LoadMaster.

As an MVP I got one free license. Together with the ability to restore configurations I can have a pseudo permanent, redundant load balancing setup. Only building labs for multi-site geo load balancing solutions requires starting from scratch every time. For routing I use VyOS; it works on hardware and on a bunch of hypervisors with x64 virtual machines. When I need the look and feel of a firewall you'll encounter in business I use OPNsense. It supports the synthetic vNICs with the enlightened Hyper-V drivers. Yup, the integration components are there. It doesn't boot from UEFI, so no Generation 2 virtual machine support as of yet.
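Because there is no Generation 2 support yet, such an appliance goes into a Generation 1 VM. A minimal sketch, with made-up names, paths and sizes:

# Generation 1 VM for a virtual firewall appliance (no UEFI boot support yet)
New-VM -Name "OPNsense01" -Generation 1 -MemoryStartupBytes 1GB -SwitchName "LabWAN" -NewVHDPath "D:\VMs\OPNsense01.vhdx" -NewVHDSizeBytes 20GB
# Second vNIC for the LAN side and the installation ISO
Add-VMNetworkAdapter -VMName "OPNsense01" -SwitchName "LabLAN"
Set-VMDvdDrive -VMName "OPNsense01" -Path "D:\ISO\OPNsense.iso"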

Another good one is IPFire. This one also does a nice job with the integration components.


I also have a DELL SonicWALL in my home office where I have some ports to play with, but it tends to be leveraged more for the permanent parts of the lab. It's a crucial & permanent component.

SonicWALL NSA 220 Wireless-N Appliance

Recover From Expanding VHD or VHDX Files On VMs With Checkpoints

So you've expanded the virtual disk (VHD/VHDX) of a virtual machine that has checkpoints (or snapshots as they used to be called) on it. Did you forget about them? Did you really leave them lingering around for that long? Bad practice and not supported (we don't have production checkpoints yet, that's for Windows Server 2016). Anyway, your virtual machine won't boot. Depending on the importance of that VM you might be chewed out big time or ridiculed. But what if you don't have a restore that works? Suddenly it might have become a resume generating event.

All does not have to be lost. There might be hope, if you didn't panic and make even more bad decisions. Please, if you're unsure what to do, call an expert, a real one, or at least someone who knows real experts. It also helps if you have spare disk space, the fast sort if possible, and a Hyper-V node where you can work without risk. We'll walk you through the scenarios for both a VHDX and a VHD.

How did you get into this pickle?

If you go to the Edit Virtual Hard Disk Wizard via the VM settings, it won't allow you to expand the disk if the VM has checkpoints, whether the VM is online or not.


VHDs cannot be expanded online. If the VM had checkpoints it must have been shut down when you expanded the VHD. If you went to the Edit Disk tool in Hyper-V Manager directly to open up the disk you don't get a warning. It's treated as a virtual disk that's not in use. Same deal if you do it in PowerShell:

Resize-VHD -Path "C:\ClusterStorage\Volume2\DidierTest06\Virtual Hard Disks\RuinFixedVHD.vhd" -SizeBytes 15GB

That just works.

VHDXs can be expanded online if they're attached to a vSCSI controller. But if the VM has checkpoints it will not allow expanding.


So yes, you deliberately shut it down to be able to do it with the Edit Disk tool in Hyper-V Manager. I know, the warning message was not specific enough, but consider this: the Edit Disk tool, when launched directly, has no idea what the disk you're opening is used for, only whether it's online/locked. A quick check of your own, as sketched below, would have told you.
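A hedged sketch of the check I'd run before ever touching a virtual disk with the Edit Disk tool (the VM name is the one from this lab):

# Any checkpoints on the VM? And are the attached disks actually AVHD/AVHDX differencing disks?
Get-VMSnapshot -VMName "DidierTest06"
Get-VMHardDiskDrive -VMName "DidierTest06" | Select-Object ControllerType, ControllerNumber, Path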

Anyway the result is the same for the VM whether it was a VHD or a VHDX. An error when you start it up.

[Window Title]
Hyper-V Manager

[Main Instruction]
An error occurred while attempting to start the selected virtual machine(s).

[Content]
‘DidierTest06’ failed to start.

Synthetic SCSI Controller (Instance ID 92ABA591-75A7-47B3-A078-050E757B769A): Failed to Power on with Error ‘The chain of virtual hard disks is corrupted. There is a mismatch in the virtual sizes of the parent virtual hard disk and differencing disk.’.

Virtual disk ‘C:\ClusterStorage\Volume2\DidierTest06\Virtual Hard Disks\RuinFixedVHD_8DFF476F-7A41-4E4D-B41F-C639478E3537.avhd’ failed to open because a problem occurred when attempting to open a virtual disk in the differencing chain, ‘C:\ClusterStorage\Volume2\DidierTest06\Virtual Hard Disks\RuinFixedVHD.vhd’: ‘The size of the virtual hard disk is not valid.’.


You might want to delete the checkpoint, but the merge will only succeed for the virtual disks that have not been expanded. You actually don't need to do this now; it's better if you don't, it saves you some stress and extra work. You could remove the expanded virtual disks from the VM. It will boot, but in many cases the missing data on those disks is very bad news. But at least you've proven the root cause of your problems.

If you inspect the AVHD/AVHDX file you'll get an error that states

The differencing virtual disk chain is broken. Please reconnect the child to the correct parent virtual hard disk.


However attempting to do so will fail in this case.

Failed to set new parent for the virtual disk.

The Hyper-V Virtual Machine Management service encountered an unexpected error: The chain of virtual hard disks is corrupted. There is a mismatch in the virtual sizes of the parent virtual hard disk and differencing disk. (0xC03A0017).


Is there a fix?

Let's say you don't have a backup (shame on you). So now what? Make copies of the VHDX/AVHDX or VHD/AVHD files and safeguard those. You can work on the copies or on the original files. I'll just use the originals here, as this blog post is already way too long. Note that some extra disk space and speed come in very handy now. You might even copy the files off to a lab server. That takes more time, but at least you're not working on a production host then.

Working on the original virtual disk files (VHD/AVHD and / or VHDX/AVHDX)

If you know the original size of the VHDX before you expanded it, you can shrink it to exactly that. If you don't, there's PowerShell to the rescue to find out the minimum size.
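A minimal sketch, using the disk from this lab:

# MinimumSize tells you how far the virtual disk can be shrunk
Get-VHD -Path "C:\ClusterStorage\Volume2\DidierTest06\Virtual Hard Disks\RuinFixedVHD.vhd" | Select-Object Path, Size, FileSize, MinimumSize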


But even better, you can shrink it to its minimum size; there's a parameter for that!

Resize-VHD -Path "C:\ClusterStorage\Volume2\DidierTest06\Virtual Hard Disks\RuinFixedVHD.vhd" -ToMinimumSize

Now you're not home yet. If you restart the VM right now it will fail with the following error:

‘DidierTest06’ failed to start. (Virtual machine ID 7A54E4DB-7CCB-42A6-8917-50A05354634F)

‘DidierTest06’ Synthetic SCSI Controller (Instance ID 92ABA591-75A7-47B3-A078-050E757B769A): Failed to Power on with Error ‘The chain of virtual hard disks is corrupted. There is a mismatch in the identifiers of the parent virtual hard disk and differencing disk.’ (0xC03A000E). (Virtual machine ID 7A54E4DB-7CCB-42A6-8917-50A05354634F)


What you need to do is reconnect the AVHDX to its parent and choose to ignore the ID mismatch. You can do this via Edit Disk in Hyper-V Manager or in PowerShell. For more information on manually merging & repairing checkpoints see my blogs on this subject here; a minimal PowerShell sketch follows below.
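In PowerShell it comes down to something like this sketch (paths taken from this lab; the same idea applies to an AVHDX/VHDX pair):

# Reconnect the differencing disk to its parent and ignore the ID mismatch
Set-VHD -Path "C:\ClusterStorage\Volume2\DidierTest06\Virtual Hard Disks\RuinFixedVHD_8DFF476F-7A41-4E4D-B41F-C639478E3537.avhd" -ParentPath "C:\ClusterStorage\Volume2\DidierTest06\Virtual Hard Disks\RuinFixedVHD.vhd" -IgnoreIdMismatch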


Once that's done your VHDX is good to go.

A VHD you can't shrink with the inbox tools. There is, however, a free command line tool that can do it, named VhdTool.exe. The original is hard to find on the web, so here is the installer if you need it. You only need the executable, which is actually portable; don't install this on a production server. It has a repair switch to deal with just this occurrence!

Here's an example from my lab …

D:\SysAdmin>VhdTool.exe /repair "C:\ClusterStorage\Volume2\DidierTest06\Virtual Hard Disks\RuinFixedVHD.vhd" "C:\ClusterStorage\Volume2\DidierTest06\Virtual Hard Disks\RuinFixedVHD_8DFF476F-7A41-4E4D-B41F-C639478E3537.avhd"


That’s it for the VHD …

You're back in business! All that's left to do is get rid of the checkpoints, so you delete them. If you wanted to apply them and get rid of the delta, you could have just removed the disks, re-added the VHD/VHDX and been done with it, actually. But in most of these scenarios you want to keep the delta, as you most probably didn't even realize you still had checkpoints around. Zero data loss!

Conclusion

Save yourself the stress, hassle and possibly the expense of hiring an expert. How? Please do not expand a VHD or VHDX of a virtual machine that has checkpoints. It will cause boot issues with the expanded virtual disk or disks! You will end up in a stressful, painful pickle that you might not get out of if you make the wrong decisions and choices!

As a closing note, you must have backups and restores that you have tested. Do not rely on your smarts and creativity, or those of others, let alone on luck. Luck runs out. Options run out. Even for the best and luckiest of us. Veeam has saved my proverbial behind a few times already.

Exchange 2016 On The Horizon

With Exchange 2016 on the horizon (RTM in Q4 2015) I've been prepping the lab infrastructure and dusting off some parts of the Exchange 2010/2013 lab deployments to make sure I'm ready to start testing a migration to Exchange 2016. While Office 365 offers great value for money, sometimes there is no option to switch over completely and a hybrid scenario is the way to go. This can be due to regulations, politics, laws, etc. No matter what, we have to come up with solutions that satisfy all needs as well as possible. Even in 2015 or 2016 this will mean on premises e-mail. This is not my “default” option when it comes to e-mail anno 2015, but it's still a valid option and choice. So they can get the best of both worlds and be compliant. Is this the least complex solution? No, but it gives them the compliance they need and/or want. It's not my job to push companies 100% to the cloud. That's for the CIO to decide and for cloud vendors to market/sell. I'm in the business of helping create the best possible solution to the challenge at hand.

Figure: Exchange 2016 Architecture © Microsoft

The labs were set up to test & prepare for production deployments. It all runs on Hyper-V and it has survived hypervisor upgrades (half of the VMs are even running on Windows Server 2016 hosts) and the conversion of VHD to VHDX. These labs have been kept around for testing and troubleshooting and are fully up to date. It's fun to see the old 2009 test mails still sitting in some mailboxes.


Both Windows NLB and KEMP Technologies LoadMasters are used. Going forward we'll certainly keep using the hardware load balancing solution. Oh, when it comes to load balancing, there is only the best possible solution for your needs in your environment. That will determine which of the various options you'll use. In Exchange 2016 that will be very different from Exchange 2010 in regards to session affinity; affinity is no longer needed since Exchange 2013.

In case you're wondering what LoadMaster you need, take a look at their sizing guides.

Another major change will be the networking. On Windows Server 2012 R2 we'll go with a teamed 10Gbps NIC for all workloads, simplifying the setup. Storage wise, one change will be the use of ReFS, especially if we can do this on Storage Spaces; the data protection you get from that combination is just awesome. Disk wise, the IOPS requirements have dropped yet a little more, so that should be OK. Now, being a geek, I'm still tempted to leverage cheap/larger SSDs to get flying performance. If at all possible I'd like to make it a self contained solution, so no external storage from any type of SAN / centralized storage, just local disks. I'm eyeing the DELL R730xd for that purpose: ample storage, the ability to have 2 controllers, and my experience with these has been outstanding. A sketch of the networking and storage choices follows below.
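A minimal sketch of those two choices; the team, pool and volume names as well as the size are made up for illustration.

# Team the two 10Gbps NICs for all workloads (Windows Server 2012 R2 LBFO teaming)
New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "10GbE-1","10GbE-2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
# Carve an ReFS volume out of a Storage Spaces pool for the mailbox databases (example size)
New-Volume -StoragePoolFriendlyName "ExchangePool" -FriendlyName "MailboxDB01" -FileSystem ReFS -Size 2TB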

So no virtualization? Sure, where and when it makes sense and it fits in with the needs, wants and requirements of the business. You can virtualize Exchange and it is supported. It goes without saying (serious bragging alert here) that I can make Hyper-V scale and perform like a boss.