Virtual Network Appliances I Use for Hyper-V Labs

When you build and maintain a test lab you’re always on the lookout for gear you can use, either hardware or virtual appliances. My main concerns are cost, whether it works well on Hyper-V and the ability to mimic real world environments. That’s a great help for educational purposes as well as for testing and as an aid to troubleshooting. One of the nice things virtualization, and now also cloud IaaS, offers is the ability to run virtual storage and network appliances that give us that real world look and feel. Add to that ever more software defined storage, networking and compute and we’re able to build very realistic labs. The limits we’re left with are time, money and space.

When building a lab some people tend to run into perceived limitations of their hypervisor. That’s to be expected, as for many that hypervisor is just something to get up and running quickly so they can get to work writing code, implementing a backup solution or whatever the workload at hand is all about. The tip here is not to give up too fast.

More recently I’m building a new lab setup simulating different sites. I need to route between these isolated test networks and load balance traffic in a site redundant manner. The idea is to mimic real life as closely as we can. Add to that lab setup an Azure “site” and it’s fun all over. It’s all based on Hyper-V and Windows Server virtual machines, but some components are not. Windows NLB has had its best days and RRAS is limited in the abilities I need to test. They can and do work fine for certain scenarios, but not for all that I need to test. So I add virtual load balancers, virtual switches with the look and feel of physical ones, and the same for virtual firewalls.

Now in real life you’ll be dealing with Link Aggregation Groups, trunking, MLAG, routing, teaming … in short, the tools of the trade when doing networking. One side effect of this is that on a Hyper-V host you quickly run out of physical network ports to work with. That’s not a problem; in real life your firewall or load balancer does not have 48 ports either. Often you have 4 to 8 ports at your disposal, sometimes more, and depending on the complexity that’s either plenty or not nearly enough. Trunking and VLANs are the way we deal with this. In the Hyper-V GUI you will not find a way to define a trunk on a vNIC attached to a vSwitch, but this can be done via PowerShell. So please do not reject Hyper-V as not being up to the job. It is! Read about this in my blog post.
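A minimal sketch of such a trunk (the VM name, adapter name and VLAN IDs below are just placeholders from my lab, adjust to yours):

Set-VMNetworkAdapterVlan -VMName "VyOS01" -VMNetworkAdapterName "Network Adapter" -Trunk -AllowedVlanIdList "10-20" -NativeVlanId 10
# The vNIC now passes tagged traffic for VLANs 10 through 20 into the guest, with VLAN 10 as the native (untagged) VLAN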

People often ask me what virtual network appliances I use for Hyper-V labs. This does vary over time, but there are some constants. In the lab I hate wasting time on time bombed trials, so I avoid those in favor of either fully featured solutions or free open source alternatives. Smart vendors provide the easiest access possible to their solutions. They realize that easy access delivers the ability to learn and test every aspect of their products, which makes a huge difference in the success of their offerings in the real world. When it comes to load balancers I use the KEMP Virtual LoadMasters. You can read more about these in projects and lab testing in my blogs about the KEMP (Virtual) LoadMaster.

As an MVP I got one free license. Together with the ability to restore configurations I can have a pseudo permanent redundant load balancing setup. Only building labs for multi-site geo load balancing solutions requires starting from scratch every time. For routing I use VyOS; it works on hardware as well as on a bunch of hypervisors with x64 virtual machines. When I need the look and feel of a firewall you’ll encounter in business I use OPNsense. It supports the synthetic vNICs with the enlightened Hyper-V drivers. Yup, the integration components are there. It doesn’t boot from UEFI, so no Generation 2 virtual machine support as of yet.
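Since these appliances boot BIOS-style, a Generation 1 VM is the way to go. A minimal sketch (names, paths and sizes are mine, adjust to your lab):

New-VM -Name "OPNsense01" -Generation 1 -MemoryStartupBytes 2GB -SwitchName "LabSwitch" -NewVHDPath "D:\VMs\OPNsense01\OPNsense01.vhdx" -NewVHDSizeBytes 20GB
Set-VM -Name "OPNsense01" -ProcessorCount 2
# Attach the installer ISO to the Generation 1 VM's IDE DVD drive
Set-VMDvdDrive -VMName "OPNsense01" -Path "D:\ISO\OPNsense.iso"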

Another good one is IPFire. This one also does a nice job with the integration components.


I also have a DELL SonicWALL in my home office where I have some ports to play with, but it tends to be leveraged more for the permanent parts of the lab. It’s a crucial and permanent component.

SonicWALL NSA 220 Wireless-N Appliance

Recover From Expanding VHD or VHDX Files On VMs With Checkpoints

So you’ve expanded the virtual disk (VHD/VHDX) of a virtual machine that has checkpoints (or snapshots as they used to be called) on it. Did you forget about them? Did you really leave them lingering around for that long? That’s bad practice and not supported (we don’t have production checkpoints yet, that’s for Windows Server 2016). Anyway, your virtual machine won’t boot. Depending on the importance of that VM you might be chewed out big time or ridiculed. But what if you don’t have a restore that works? Suddenly it might have become a resume generating event.

All does not have to be lost. There might be hope if you didn’t panic and make even more bad decisions. Please, if you’re unsure what to do, call an expert, a real one, or at least someone who knows real experts. It also helps if you have spare disk space, the fast sort if possible, and a Hyper-V node where you can work without risk. We’ll walk through the scenarios for both a VHDX and a VHD.

How did you get into this pickle?

If you go to the Edit Virtual Hard Disk Wizard via the VM settings, it won’t allow you to expand the disk if the VM has checkpoints, whether the VM is online or not.


VHDs cannot be expanded online. If the VM had checkpoints it must have been shut down when you expanded the VHD. If you went to the Edit Disk tool in Hyper-V Manager directly to open up the disk, you don’t get a warning. It’s treated as a virtual disk that’s not in use. Same deal if you do it in PowerShell:

Resize-VHD -Path "C:\ClusterStorage\Volume2\DidierTest06\Virtual Hard Disks\RuinFixedVHD.vhd" -SizeBytes 15GB

That just works.

VHDXs can be expanded online if they’re attached to a vSCSI controller. But if the VM has checkpoints, expanding will not be allowed.


So yes, you deliberately shut it down to be able to do it with the Edit Disk tool in Hyper-V Manager. I know, the warning message was not specific enough, but consider this: the Edit Disk tool, when launched directly, has no idea what the disk you’re opening is used for, only whether it’s online / locked.

Anyway, the result is the same for the VM whether it was a VHD or a VHDX: an error when you start it up.

[Window Title]
Hyper-V Manager

[Main Instruction]
An error occurred while attempting to start the selected virtual machine(s).

‘DidierTest06’ failed to start.

Synthetic SCSI Controller (Instance ID 92ABA591-75A7-47B3-A078-050E757B769A): Failed to Power on with Error ‘The chain of virtual hard disks is corrupted. There is a mismatch in the virtual sizes of the parent virtual hard disk and differencing disk.’.

Virtual disk ‘C:\ClusterStorage\Volume2\DidierTest06\Virtual Hard Disks\RuinFixedVHD_8DFF476F-7A41-4E4D-B41F-C639478E3537.avhd’ failed to open because a problem occurred when attempting to open a virtual disk in the differencing chain, ‘C:\ClusterStorage\Volume2\DidierTest06\Virtual Hard Disks\RuinFixedVHD.vhd’: ‘The size of the virtual hard disk is not valid.’.


You might want to delete the checkpoint, but the merge will only succeed for the virtual disks that have not been expanded. You actually don’t need to do this now; it’s better if you don’t, it saves you some stress and extra work. You could remove the expanded virtual disks from the VM. It will boot, but in many cases the missing data on those disks is very bad news. But at least you’ve proven the root cause of your problems.

If you inspect the AVHD/AVHDX file you’ll get an error that states

The differencing virtual disk chain is broken. Please reconnect the child to the correct parent virtual hard disk.


However attempting to do so will fail in this case.

Failed to set new parent for the virtual disk.

The Hyper-V Virtual Machine Management service encountered an unexpected error: The chain of virtual hard disks is corrupted. There is a mismatch in the virtual sizes of the parent virtual hard disk and differencing disk. (0xC03A0017).


Is there a fix?

Let’s say you don’t have a backup (shame on you). So now what? Make copies of the VHDX/AVHDX or VHD/AVHD files and safeguard those. You can work on copies or on the originals; I’ll just use the originals, as this blog post is already way too long. Note that some extra disk space and speed come in very handy now. You might even copy them off to a lab server. That takes more time, but at least you’re not working on a production host then.
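Making those safety copies is quick in PowerShell; a minimal sketch using this lab’s paths (the destination folder is mine):

# Create a safe location and copy the VHD/AVHD chain there before touching anything
New-Item -ItemType Directory -Path "D:\SafeCopies\DidierTest06" -Force
Copy-Item -Path "C:\ClusterStorage\Volume2\DidierTest06\Virtual Hard Disks\*" -Destination "D:\SafeCopies\DidierTest06" -Recurse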

Working on the original virtual disk files (VHD/AVHD and/or VHDX/AVHDX)

If you know the original size of the VHDX before you expanded it, you can shrink it to exactly that. If you don’t, there’s PowerShell to the rescue to find out the minimum size.
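A quick way to find that minimum size (using this lab’s disk):

Get-VHD -Path "C:\ClusterStorage\Volume2\DidierTest06\Virtual Hard Disks\RuinFixedVHD.vhd" | Select-Object Path, Size, MinimumSize
# MinimumSize is the smallest size the virtual disk can be shrunk to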


But even better, you can shrink it to its minimum size; it’s a parameter!

Resize-VHD -Path "C:\ClusterStorage\Volume2\DidierTest06\Virtual Hard Disks\RuinFixedVHD.vhd" -ToMinimumSize

Now you’re not home yet. If you restart the VM right now it will fail … with the following error:

‘DidierTest06’ failed to start. (Virtual machine ID 7A54E4DB-7CCB-42A6-8917-50A05354634F)

‘DidierTest06’ Synthetic SCSI Controller (Instance ID 92ABA591-75A7-47B3-A078-050E757B769A): Failed to Power on with Error ‘The chain of virtual hard disks is corrupted. There is a mismatch in the identifiers of the parent virtual hard disk and differencing disk.’ (0xC03A000E). (Virtual machine ID 7A54E4DB-7CCB-42A6-8917-50A05354634F)


What you need to do is reconnect the AVHDX to its parent and choose to ignore the ID mismatch. You can do this via Edit Disk in Hyper-V Manager or in PowerShell. For more information on manually merging and repairing checkpoints see my blogs on this subject. In this post I’ll just show the screenshots as a walkthrough.
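The PowerShell equivalent is a one-liner; a sketch with this lab’s paths:

Set-VHD -Path "C:\ClusterStorage\Volume2\DidierTest06\Virtual Hard Disks\RuinFixedVHD_8DFF476F-7A41-4E4D-B41F-C639478E3537.avhd" -ParentPath "C:\ClusterStorage\Volume2\DidierTest06\Virtual Hard Disks\RuinFixedVHD.vhd" -IgnoreIdMismatch
# Reconnects the differencing disk to its parent while ignoring the ID mismatch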

Once that’s done your VHDX is good to go.

For a VHD you can’t shrink it with the inbox tools. There is however a free command line tool that can do it, named VhdTool.exe. The original is hard to find on the web, so here is the installer if you need it. You only need the executable, which is actually portable; don’t install this on a production server. It has a repair switch to deal with just this occurrence!

Here’s an example of my lab …

D:\SysAdmin>VhdTool.exe /repair "C:\ClusterStorage\Volume2\DidierTest06\Virtual Hard Disks\RuinFixedVHD.vhd" "C:\ClusterStorage\Volume2\DidierTest06\Virtual Hard Disks\RuinFixedVHD_8DFF476F-7A41-4E4D-B41F-C639478E3537.avhd"


That’s it for the VHD …

You’re back in business! All that’s left to do is get rid of the checkpoints, so you delete them. If you wanted to apply them and get rid of the delta, you could have just removed the disks, re-added the VHD/VHDX and been done with it. But in most of these scenarios you want to keep the delta, as you most probably didn’t even realize you still had checkpoints around. Zero data loss!
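Deleting them in PowerShell is straightforward (VM name from this lab):

Get-VMSnapshot -VMName "DidierTest06" | Remove-VMSnapshot
# Deleting a checkpoint merges its AVHD/AVHDX delta back into the parent disk, so the current data is kept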


Save yourself the stress, hassle and possibly the expense of hiring an expert. How? Please do not expand a VHD or VHDX of a virtual machine that has checkpoints. It will cause boot issues with the expanded virtual disk or disks! You will be in a stressful, painful pickle that you might not get out of if you make the wrong decisions and choices!

As a closing note, you must have backups and restores that you have tested. Do not rely on your smarts and creativity, or those of others, let alone luck. Luck runs out. Options run out. Even for the best and luckiest of us. Veeam has saved my proverbial behind a few times already.

Exchange 2016 On The Horizon

With Exchange 2016 on the horizon (RTM in Q4 2015) I’ve been prepping the lab infrastructure and dusting off some parts of the Exchange 2010/2013 lab deployments to make sure I’m ready to start testing a migration to Exchange 2016. While Office 365 offers great value for money, sometimes there is no option to switch over completely and a hybrid scenario is the way to go. The reasons can be regulations, politics, laws, etc. No matter what, we have to come up with a solution that satisfies all needs as well as possible. Even in 2015 or 2016 this will mean on premises e-mail. This is not my “default” option when it comes to e-mail anno 2015, but it’s still a valid option and choice. So they can get the best of both worlds and be compliant. Is this the least complex solution? No, but it gives them the compliance they need and/or want. It’s not my job to push companies 100% to the cloud. That’s for the CIO to decide and for cloud vendors to market and sell. I’m in the business of helping create the best possible solution to the challenge at hand.

Figure: Exchange 2016 Architecture © Microsoft

The labs were set up to test and prepare for production deployments. It all runs on Hyper-V and it has survived upgrades of the hypervisor (half of the VMs are even running on Windows Server 2016 hosts) and the conversion of the VHDs to VHDX. These labs have been kept around for testing and troubleshooting, and they are fully up to date. It’s fun to see the old 2009 test mails still sitting in some mailboxes.


Both Windows NLB and KEMP Technologies LoadMasters are used. Going forward we’ll certainly keep using the hardware load balancing solution. Oh, when it comes to load balancing, there is only the best possible solution for your needs in your environment. That will determine which of the various options you’ll use. Exchange 2016 will be very different from Exchange 2010 in regard to session affinity; affinity is no longer needed since Exchange 2013.

In case you’re wondering what LoadMaster you need, take a look at their sizing guides.

Another major change will be the networking. On Windows Server 2012 R2 we’ll go with a teamed 10Gbps NIC for all workloads, simplifying the setup. Storage wise, one change will be the use of ReFS, especially if we can do this on Storage Spaces. The data protection you get from that combination is just awesome. Disk wise, the IOPS needed have dropped yet a little more, so that should be OK. Now, being a geek, I’m still tempted to leverage cheap, larger SSDs for flying performance. If possible at all I’d like to make it a self contained solution, so no external storage from any type of SAN / centralized storage. Just local disks. I’m eyeing the DELL R730XD for that purpose: ample storage, the ability to have 2 controllers, and my experience with these has been outstanding.
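A hedged sketch of both changes (the team name, NIC names and drive letter are placeholders, not the production values):

# Team two 10Gbps NICs for all workloads
New-NetLbfoTeam -Name "Team10G" -TeamMembers "10GbE-1","10GbE-2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
# Format the Exchange database volume with ReFS
Format-Volume -DriveLetter E -FileSystem ReFS -NewFileSystemLabel "ExchangeDB"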

So no virtualization? Sure, where and when it makes sense and fits in with the needs, wants and requirements of the business. You can virtualize Exchange and it is supported. It goes without saying (serious bragging alert here) that I can make Hyper-V scale and perform like a boss.

Windows Server 2016 Data Deduplication Scales and Performs Better

I’ve been leveraging Windows Server Data Deduplication since it became available with great results.


One of the enhanced features in Windows Server 2016 is Data Deduplication and it’s one I welcome very much. The improvements we’re getting mostly have to do with scale and performance. I’m quite pleased that Microsoft listened to our previous feedback on this.


You cannot imagine how much money on backup target storage we have saved by using this, so we’re very happy that Windows Server 2016 Data Deduplication scales and performs better. The fact that we can now get even better scale and performance is music to our ears. The backup target servers are the first in line for an upgrade, that’s for sure! That’s the reason I mentioned it as a subject to look into in the Hyper-V Amigos interview at Ignite!
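If you want to see those savings for yourself on a volume where dedup is enabled, a quick check with the inbox Deduplication module:

Get-DedupVolume | Select-Object Volume, SavedSpace, SavingsRate
# SavedSpace and SavingsRate show what deduplication is actually winning you per volume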

Scale improvement: supported LUN sizes up to 64TB

Actually, I was already pushing this to 50TB in some cases for testing, but overall I used 6 to 10 TB volumes. Still, the support for bigger volumes is very welcome. Now, please note that you should NOT go any higher than 64TB (I actually stay below that), otherwise deduplication doesn’t work due to its dependency on VSS. Please read my blog

Windows 2012 R2 Data Deduplication Leverages Shadow Copies: “LastOptimizationResultMessage : A volume shadow copy could not be created or was unexpectedly deleted” on this subject.

In Windows 2012 R2 we were limited because data deduplication used a single-threaded job and a single I/O queue for each volume. That made it wiser to have 10 target LUNs of 6TB than one huge 60TB LUN. The big issue otherwise is that large volumes could lead to the dedup processing not keeping up with the rate of data changes (“churn”). Now, your mileage will vary depending on the type of data and the delta. More info on this is in the blog post Sizing Volumes for Data Deduplication in Windows Server. It will help you size the volumes, but note that in Windows Server 2016 the rules have changed.
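To check whether optimization is keeping up with the churn on a volume, look at the dedup status; the LastOptimizationResultMessage property is the very one quoted in the blog title above (the drive letter is mine):

Get-DedupStatus -Volume "E:" | Format-List Volume, InPolicyFilesCount, OptimizedFilesCount, LastOptimizationTime, LastOptimizationResultMessage
# A growing gap between in-policy and optimized files means optimization is falling behind the churn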

The dedup optimization processing now runs multiple threads in parallel, using multiple I/O queues on a single volume, which gives you better performance without incurring the overhead of having to use more, smaller LUNs.

File sizes up to 1TB are good for dedup

Windows Server 2012 R2 Data Deduplication supports file sizes up to 1TB, but they are considered “not good candidates” for dedup. So the DPM workaround of backing up to a truckload of virtual machines with 1TB virtual disks that are deduplicated is borderline. You can see one improvement in CPS v2 coming already (also see the next header). 1TB is now fully supported and a good candidate. I’ll be pushing it higher … in my opinion this is where the most work will need to be done for future improvements. It would allow for more scenarios (I have VMs that hold VHDX virtual disks of 2TB or more). Scale is something that helps keep things simple. Simplicity avoids costs and issues with complexity. That’s always a good thing if possible.

In Windows Server 2012 R2 the algorithms can’t scale as well, and performance suffers because things like scanning for and inserting changes slow down as the total data set increases. These processes have been redesigned in Windows Server 2016. It now uses new stream map structures and improved partial file optimization. As a result, 1TB file sizes have become good candidates.

Virtualized backup is a new usage type

DPM is already leveraging deduplication of virtual machines (CPS drove that, I think; see Deduplicating DPM Storage).


In Windows Server 2016 all the dedup configuration settings have been combined into a new usage type called “Backup”. This simplifies deployment and helps “future proof” your setup, as future changes can automatically be applied through this usage type.
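Enabling it on a backup target volume then becomes a one-liner (the drive letter is mine):

Enable-DedupVolume -Volume "E:" -UsageType Backup
# The Backup usage type bundles all the settings for virtualized backup workloads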

Nano Server support

Data deduplication is (or will be) fully supported in Nano Server (new in TPv3). It’s not completely done yet, so deduplication support in Nano Server still has a few restrictions:

  • Support has only been validated in non-clustered configurations
  • Deduplication job cancellation must be done manually (using the Stop-DedupJob PowerShell cmdlet; see the sketch below)
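A minimal sketch of that manual cancellation (the drive letter is mine):

Get-DedupJob                 # list the running or queued deduplication jobs
Stop-DedupJob -Volume "E:"   # cancel the jobs on that volume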

Microsoft welcomes any feedback on the deduplication feature via e-mail. For me the standing order is to break through that 1TB barrier!

My take & Magic Ball

In combination with the right backup product it saves a ton of money. I have leveraged Veeam and, in the past, Windows Backup (inbox) with great results. The benefit of these two is that you can back up to physical storage and leverage deduplication. Virtualized backup as a new usage type makes life easier for the supported “workaround” around the limitations of DPM, where normally they only support VDI with deduplication. What I’m really curious about is another possible future usage type: “Virtual Servers” … I guess for that one deduplication support for the OS disk would be very beneficial for “cloud” providers. We’ll see.