Lightning Fast Fixed VHDX File Creation Speed With ReFS on Windows Server 2016

In this blog post we’re going to take a quick look at the lightning fast fixed VHDX file creation speed with ReFS on Windows Server 2016. We’ll compare it to creating fixed VHDX files on NTFS with a SAN that supports ODX. Both the NTFS and the ReFS volumes are CSV disks in a Hyper-V cluster and the test is run on the same node. The ODX capable SAN is a Dell Compellent with Storage Center 6.5.20.
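As an aside, if you want to double check that ODX hasn’t been disabled on a host before running such a comparison, the FilterSupportedFeaturesMode registry value is the usual switch. A quick sketch; to my knowledge 0 (the default) means ODX is enabled and 1 means it has been turned off:

```powershell
# Check whether ODX is enabled on this host.
# FilterSupportedFeaturesMode: 0 = ODX enabled (default), 1 = ODX disabled.
Get-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem' `
                 -Name FilterSupportedFeaturesMode |
    Select-Object -ExpandProperty FilterSupportedFeaturesMode
```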

We create a selection of fixed VHDX file sizes (50GB, 100GB, 500GB and 1TB) on an NTFS volume on a Windows Server 2016 host. You can see the quite excellent file creation speeds with ODX.
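The actual test script isn’t reproduced here, but a minimal PowerShell sketch of such a timing test could look like this (the target path is an assumption, point it at the volume under test):

```powershell
# Time the creation of fixed VHDX files of various sizes on a given volume.
# $target is a placeholder path; adjust it to the NTFS, ReFS or CSV volume you want to test.
$target = 'C:\ClusterStorage\Volume1\VHDXTest'
$sizes  = 50GB, 100GB, 500GB, 1TB

foreach ($size in $sizes) {
    $path = Join-Path $target ("Fixed_{0}GB.vhdx" -f ($size / 1GB))
    $time = Measure-Command { New-VHD -Path $path -SizeBytes $size -Fixed | Out-Null }
    '{0,4} GB fixed VHDX created in {1:N1} seconds' -f ($size / 1GB), $time.TotalSeconds
    Remove-Item $path    # clean up before the next run
}
```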

[Screenshot: fixed VHDX creation times on the NTFS volume with ODX]

These results are very good (Dell Compellent has always done a great job implementing ODX) and the time to create a 1TB fixed VHDX is just over 5 seconds consistently. Impressive by any standard I would say! When we start using CSVs we see that times double for the larger VHDX sizes, but +/- 12 seconds for a 1TB disk is still very respectable. There is little difference whether the node where the script runs owns the CSV or not.

[Screenshots: fixed VHDX creation times on NTFS CSVs, run from the owner node and from a non-owner node]

Can things be more impressive? Let’s do the same exercise on a ReFS volume on a Windows Server 2016 host. Same server, same SAN with ODX enabled, but note that ReFS does not even support ODX, so it cannot be leveraged here.
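You can quickly confirm which file system a volume is formatted with from PowerShell (the drive letter below is just a placeholder):

```powershell
# Confirm the file system of the target volume (placeholder drive letter).
Get-Volume -DriveLetter R | Select-Object DriveLetter, FileSystemLabel, FileSystem, Size
```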

[Screenshot: fixed VHDX creation times on the ReFS volume]

No matter what the size of our fixed VHDX files, they are created in just over 1 second consistently. This is very impressive.

[Screenshots: fixed VHDX creation times on ReFS CSVs]

When we use a CSV LUN we still see the same impressive results. On CSV LUNs not owned by the node where we execute the test script we see a creation time of 2 seconds for VHDX sizes of 1TB. Still lightning fast.
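To know whether the node running the test owns the CSV or not, a quick look at the cluster tells you (a small sketch, assuming the FailoverClusters module is available on the node):

```powershell
# List each Cluster Shared Volume and the node that currently owns it.
Import-Module FailoverClusters
Get-ClusterSharedVolume | Select-Object Name, OwnerNode, State
```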

If you do not have a SAN that supports ODX, you can see why ReFS might become a very attractive choice of file system for your Hyper-V virtual machine data volumes in Windows Server 2016. I can see why it was mentioned as the preferred option for Storage Spaces Direct. Do note that ReFS does not support deduplication or UNMAP (and I see no dedupe support for virtual server workloads on the horizon yet either). If you move large amounts of data around, ODX does provide significant assistance with that, and ReFS cannot leverage it. So with ReFS, go for a large SSD tier. All-flash without deduplication or erasure coding might be cost prohibitive, I’m afraid.

But let this not put you off ReFS. It has many benefits in combination with Storage Spaces, and these new VHDX operation capabilities just add to that. So for many environments with commodity-based storage this has become an even more interesting choice.

The Hitch Hiker’s Guide to Hyper-V Administration: Don’t Panic

Not all information you see or that is presented to you is valid. You need to check; that’s the prime reason we have the “trust but verify” mantra in IT. If you don’t, you might start troubleshooting a ghost issue. An example of this is stale GUI information, such as when you leave the Hyper-V Manager GUI open for way too long and the information in its cache goes stale.

The screenshot below is what caused some diligent admins to start troubleshooting a non-existent problem. They figured that the VMs were left in a locked state due to backups failing. But hey, all backups had run and succeeded?! So they searched and found KB article 2964439, “Hyper-V virtual machine backup leaves the VM in a locked state”. When they wanted to install the hotfix, it failed stating it was not applicable to their system.

At that moment they considered killing the VMMS.exe service and/or failing over the nodes. While preparing for that they logged in to all the nodes, only to see the issue wasn’t present there. That made them think and step back for a while.

[Screenshot: stale VM status information in Hyper-V Manager]

In this case it’s just a quirk of a Hyper-V Manager console that has been left open way too long. Right-clicking the host and refreshing, or closing and reopening the GUI, is all that’s needed to see the real information.
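Or skip the GUI altogether and ask the hosts directly with PowerShell, which doesn’t suffer from a stale console cache (a quick sketch, the node names are placeholders):

```powershell
# Query the real, current VM state straight from the Hyper-V hosts.
Get-VM -ComputerName 'Node1', 'Node2' |
    Select-Object ComputerName, Name, State, Status |
    Sort-Object ComputerName, Name
```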

So slow down before you start troubleshooting and recovering from a “ghost” problem. It may cause real issues. The lesson here is that you should not go into “Action Jackson” mode. You can move swiftly and efficiently, but the ability to execute is not just about speed; it’s about doing what’s needed, when it’s needed. Here ends the lesson. :)

Windows Server 2016 Technical Preview Version 3 Cluster Upgrades

I was eagerly awaiting the release of Windows Server 2016 Technical Preview 3 for further experimenting and testing, and August 19th 2015 was the big day, with a truckload of announcements and press releases including the arrival of TPv3, which also made containers publicly available for all of us to test.

After a swift download I set out upgrading the labs, both the PC hardware based one and the one on enterprise grade server hardware. I always test out the less wise things as well, just to kick the tires and test behavior left and right.

As always I tested some in-place upgrades just to see how well that goes before doing clean installs. Not recommended in production, but hey, testing is good. At first all networking seemed to be OK, but it wasn’t. So I ended up doing clean installs, which are advisable, even more so with non-production versions of the OS. The product is not finished yet! This is also the supported way of doing a new cluster build. The end result is a lab at home on PC hardware and an enterprise grade lab to work with in the datacenter. Busy times ahead.

For help on what’s new in this build, go read What’s New in Windows Server 2016 Technical Preview 3, and good luck with your Windows Server 2016 Technical Preview Version 3 cluster upgrades!

Happy testing!

Troubleshooting Intermittent Virtual Machine Network Connectivity

I was asked to take a look at an issue with virtual machines losing network connectivity. The problems were described as follows:

Sometimes some VMs had connectivity, sometimes they didn’t. It was not tied to specific virtual machines. Sometimes the problem was not there, then it showed up again. It was not an issue of a wrong subnet mask or gateway.

They suspected firmware or driver issues. Maybe it was a Windows NIC teaming bug or problems with DVMQ or NIC offload settings. There are a lot of potential reasons; just Google “intermittent VM connectivity issues Hyper-V” and you’ll get a truckload of options.

So a round of wishful firmware and driver upgrading started, followed by a round of wishfully disabling network features. That’s one way to do it. But why not sit back and look at the issue?

Based on what they said, I looked at the environment and asked if it was tied to a specific host, as only VMs on one of the hosts had the issue. Could it be after a live migration or a VM restart? They didn’t really know, but it could. So we started looking at the hosts. All teams for the vSwitch were correctly configured on all hosts. No tagged VLAN on the member NICs. No extra team interfaces that would violate the rule that there can be only one if the team is used by a Hyper-V switch. They used the switch independent teaming mode with the load balancing mode set to Dynamic, all members active. Perfect.
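For reference, the host side of those checks is easy to pull up with PowerShell (a minimal sketch of the kind of information we looked at):

```powershell
# Teaming mode, load balancing algorithm and member NICs of each team.
Get-NetLbfoTeam | Format-List Name, TeamingMode, LoadBalancingAlgorithm, Members

# Team interfaces: there should be only one, with no VLAN, when the team is bound to a vSwitch.
Get-NetLbfoTeamNic | Format-List Team, Name, VlanID, Primary

# Which adapter (team interface) each Hyper-V switch is bound to.
Get-VMSwitch | Format-List Name, SwitchType, NetAdapterInterfaceDescription, AllowManagementOS
```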

I asked if they sometimes used tagged VLANs on the VMs. They said yes, which gave me a clue they had trunking or general mode configured on the ports. So I looked at the switches to see what the port configuration was like. Guess what: all ports on both switches were correctly configured, bar the ports of the vSwitch team members on one Hyper-V host. The one with the problematic VMs. The two ports were in general mode, but the port on the top switch had PVID* 100 and the one on the bottom switch had PVID 200. That was the issue. If a VM “landed” on the team member with PVID 200, it had no network connectivity.
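Whether and how the VMs actually rely on VLAN tagging is also quick to verify from the host itself (a small sketch; run it on the problematic host):

```powershell
# Show the VLAN operation mode (Untagged, Access, Trunk) per VM network adapter.
Get-VM | Get-VMNetworkAdapterVlan
```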

[Diagram: Hyper-V vSwitch team with a mismatched native VLAN (PVID) on one of the switch ports]

* PVID (switchport general pvid 200) is the default VLAN of the port; in Cisco speak that would translate into the “native VLAN”, as in switchport trunk native vlan 200.

Yes, NIC firmware and drivers have issues. There are bugs or problems with advanced features once in a while. But you really do need to check that the configuration is correct and that the design or setup makes sense. Do yourself a favor by not assuming anything. Trust but verify.