Cloud & Datacenter Conference Germany

It’s with great joy that I can share that the Cloud & Datacenter Conference Germany website is live and you can now register to attend. My fellow MVP and friend Carsten Rachfahl has realized one of his ambitions: to organize a large community-driven conference by and for the IT community in the DACH region. But please feel free to attend if you’re from outside that region; it is open to all and welcomes anyone who wishes to attend. Just note that not all sessions will be in English; some will be in German.

Organizing such an event is not an easy undertaking and I want to applaud Carsten for making this happen. He’s one of Germany’s foremost experts and, via his company, Rachfahl IT-Solutions, he has always contributed heavily to the community. Thank you Carsten, you contribute a lot and we appreciate those efforts.


I invite you all to attend and join us on May 12 for the first edition of the Cloud & Datacenter Conference Germany in Düsseldorf. It offers more than 25 presentations by top community speakers in five parallel tracks. These tracks cover the entire spectrum of Microsoft technologies available to help you design, build and maintain a state-of-the-art modern IT infrastructure. The conference covers Windows Server 2016, Hyper-V, Microsoft software-defined storage, networking, Azure Stack, System Center, OMS, failover clustering, IaaS, Azure, Nano Server, PowerShell, containers, and much more.

I’m happy and honored to speak at this conference alongside so many true real-life experts who are part of the global community around Microsoft technologies. My presentation will aim to get you briefed on the new and improved functionality in Windows Server 2016 Failover Clustering. In that respect it’s a nice addition to my session What’s new in Failover Clustering in Windows Server 2012 R2. I can only suggest you get up to speed on that material, as it is still very much valid and I’ll be focusing on the delta between Windows Server 2012 R2 and 2016.

The breadth and depth of the technologies available to us cannot be overstated and are still growing. It takes a team effort, with both complementary and overlapping expertise, to stay on top of things. Education is a huge and important part of daily life for anyone working in IT.

Nowadays, when any meeting can be held online, an in-person conference is still very valuable. It enables you to focus on absorbing the content without being distracted by the realities and interrupts of daily work life. That’s why I still invest in attending conferences and I hope you do so as well. When you attend one, be there! That might sound silly, but it’s painful to see attendees working remotely and being on the phone all the time. Bar true emergencies, that’s a waste of money and effort. Allow yourself or your employees to optimize the ROI of that conference by doing what you came for: learn, get inspired and network with peers.

Register soon to secure your spot. The price is set at a level that makes sure it will not be an issue. Sponsoring by companies that have a real investment in cloud and datacenter management, and that benefit from a flourishing, well-informed ecosystem, makes this possible.

Shared VHDX In Windows Server 2016: VHDS and the backing storage file

Introduction to the VHD Set

I have talked about the VHD Set with a VHDS file and an AVHDX backing storage file in Windows Server 2016 in a previous blog post, A first look at shared virtual disks in Windows Server 2016. One of the questions I saw pass by a couple of times is whether this is still a “normal VHDX” or a new type of virtual disk. Well, the VHDS file is nothing but a small file containing some metadata to coordinate disk actions amongst the guest cluster nodes accessing the shared virtual disk. The AVHDX file associated with that VHDS file is an automatically managed, dynamically expanding or fixed virtual disk. How do I know this? Well, I tested it.
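As a minimal sketch of that two-file layout, creating a VHD Set with New-VHD on a Windows Server 2016 Hyper-V host (the path and size below are hypothetical) should leave you with both the small VHDS metadata file and its AVHDX backing storage file side by side:

[sourcecode language="powershell"]
# A minimal sketch, assuming a Windows Server 2016 Hyper-V host where
# New-VHD accepts a .vhds path. The path and size are hypothetical.
New-VHD -Path C:\ClusterStorage\Volume1\SharedDisk01.vhds -SizeBytes 50GB -Dynamic

# The small VHDS metadata file and the AVHDX backing storage file
# should now both exist next to each other.
Get-ChildItem C:\ClusterStorage\Volume1\SharedDisk01.* |
Select-Object Name, Length
[/sourcecode]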

There is nothing preventing you from copying or moving the AVHDX file of a VHD Set that is not in use. You can rename the extension from avhdx to vhdx. You can attach it to another VM or mount it on the host and get to the data. In essence this is a VHDX file. The “a” in avhdx stands for automatic. This means that an AVHDX is under the control of the hypervisor and you’re not supposed to manipulate it, but let the hypervisor handle it for you. But as you can see for yourself if you try the above, you can get to the data if that’s the only option left. Normally you should just leave it alone. It does however serve as proof that the VHD Set uses a standard virtual disk (VHDX) file.

I’ll demonstrate this with an example below.

Fun with a backing storage file in a VHD Set

Shut down all the nodes of the guest cluster so that the VHD Set files are not in use. We then rename the virtual disk’s extension from avhdx to vhdx.
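In PowerShell that rename is a one-liner; a minimal sketch with a hypothetical path and file name:

[sourcecode language="powershell"]
# A minimal sketch; the path and file name are hypothetical.
# Make sure no guest cluster node has the VHD Set in use first.
Rename-Item -Path C:\ClusterStorage\Volume1\SharedDisk01.avhdx `
-NewName SharedDisk01.vhdx
[/sourcecode]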


You can then mount it on the host.
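Mounting it is a one-liner with Mount-VHD from the Hyper-V module on the host; a minimal sketch, again with a hypothetical path:

[sourcecode language="powershell"]
# A sketch with a hypothetical path. Mount-VHD is part of the Hyper-V
# PowerShell module on the host.
# -Passthru lets us pipe on to find the drive letter the volume received.
Mount-VHD -Path C:\ClusterStorage\Volume1\SharedDisk01.vhdx -Passthru |
Get-Disk | Get-Partition | Get-Volume
[/sourcecode]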


And after mounting the VHDX we can see the content we put on the virtual disk when it was a CSV in that guest cluster.


We add some files while this VHDX is mounted on the host.
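For example (E: is a hypothetical drive letter; use whatever letter the mounted volume actually received):

[sourcecode language="powershell"]
# E: is hypothetical; use the drive letter the mounted volume received.
New-Item -Path E:\AddedWhileMountedOnHost.txt -ItemType File
[/sourcecode]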


Rename the virtual disk back to the avhdx extension.
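Remember to dismount the virtual disk from the host first; a minimal sketch, with the same hypothetical paths as above:

[sourcecode language="powershell"]
# Dismount the virtual disk from the host before renaming it back.
Dismount-VHD -Path C:\ClusterStorage\Volume1\SharedDisk01.vhdx
Rename-Item -Path C:\ClusterStorage\Volume1\SharedDisk01.vhdx `
-NewName SharedDisk01.avhdx
[/sourcecode]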


We boot the nodes of the guest cluster and have a look at the data on the CSV. Bingo!


I’m NOT advocating you do this as a standard operating procedure. This is a demo to show you that the backing storage files are normal VHDX files that are managed by the hypervisor and as such get the avhdx extension (automatic vhdx) to indicate that you should not manipulate them under normal circumstances. But in a pinch, it’s a normal virtual disk, so you can get to it with all the options and tools at your disposal if needed.

Maximum bandwidth in Hyper-V storage QoS policies

Introduction

In a previous blog post, Hyper-V Storage QoS in Windows Server 2016 Works on SOFS and on LUNs/CSV, I discussed Storage QoS Policies in Windows Server 2016. I also demonstrated this in a lab setup at VeeamON 2015 in one of my talks at the Microsoft presentation area. It’s one of those features where a home lab will do the job. There is no need for special storage hardware. It’s all in-box functionality. Cool!

Maximum bandwidth in Hyper-V storage QoS policies

Now, that was in the Technical Preview 2 and 3 era, where it all revolved around minimum and maximum IOPS. In Windows Server 2016 Technical Preview 4 we got some new features in regards to storage QoS policies. One of those is that we can now also set the maximum bandwidth on a policy using the parameter MaximumIOBandwidth. This parameter, which is set in bytes per second, determines the maximum bandwidth that any flow assigned to the policy is allowed to consume.


We use that policy ID to assign it to the two shared virtual disks of our cluster nodes. You’ll need to do this for all of the guest cluster nodes.

You can copy the PoSh demo script below:

[sourcecode language="powershell"]

#Create a storage QoS policy
$DemoVMPolicy = New-StorageQosPolicy -Name DemoVMPolicy -PolicyType MultiInstance `
-MinimumIops 250 -MaximumIops 500 -MaximumIOBandwidth 100MB

#Look at our storage QoS policy
Get-StorageQosPolicy -Name DemoVMPolicy

#Grab our policy ID
$DemoVMPolicyId = (Get-StorageQosPolicy -Name DemoVMPolicy).PolicyId
$DemoVMPolicyId

#Look at our VM's policy settings before and after assigning a storage policy.
#We assign the storage policy to the 2 shared virtual disks
#that are located at locations 1 and 2 on SCSI controller 0.

Get-VM -Name GuestClusterNode1 | Get-VMHardDiskDrive |
ft Path, MinimumIOPS, MaximumIOPS, MaximumIOBandwidth, QoSPolicyID -AutoSize

Get-VM -Name GuestClusterNode1 | Get-VMHardDiskDrive | Where-Object {$_.ControllerLocation -ge 1} |
Set-VMHardDiskDrive -QoSPolicyID $DemoVMPolicyId

Get-VM -Name GuestClusterNode1 | Get-VMHardDiskDrive |
ft Path, MinimumIOPS, MaximumIOPS, MaximumIOBandwidth, QoSPolicyID -AutoSize
[/sourcecode]

You can use MaximumIOBandwidth by itself or you can combine it with the maximum IOPS setting. When both of these parameters are set in a storage QoS policy they are both active. The one that is reached first by a flow assigned to this policy will be the limiting factor on the I/O of that flow.

As an example, let’s say you specify 500 IOPS and 100 MB/s of bandwidth as maxima. If your workload hits 500 IOPS but only consumes 58 MB/s, it’s the IOPS that are limiting the flow.
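Which of the two caps binds first depends on the average I/O size of the flow. A quick back-of-the-envelope check in PowerShell, using just the example numbers above:

[sourcecode language="powershell"]
# Back-of-the-envelope: which cap binds first depends on the I/O size
# of the flow. The numbers below are just the example values.
$maxIops = 500
$maxBandwidth = 100MB # bytes per second

# At 8 KB per I/O, 500 IOPS only moves about 4 MB/s, so the IOPS cap binds.
($maxIops * 8KB) / 1MB # 3.90625 MB/s

# The crossover I/O size where both caps are hit at the same time;
# flows with larger I/Os hit the bandwidth cap before the IOPS cap.
($maxBandwidth / $maxIops) / 1KB # 204.8 KB per I/O
[/sourcecode]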

Between Windows Server 2016 TPv3 and TPv4 we moved from ReFS version 2.0 to 3.0

Introduction

The fact that between Windows Server 2016 TPv3 and TPv4 we moved from ReFS version 2.0 to 3.0 is something I stumbled upon by accident. In Windows Server 2016 we’re getting a new and improved version of ReFS. ReFS (Resilient File System) was introduced in Windows Server 2012.

Since the Windows Server 2016 Technical Previews we have a new capability with fsutil, as it now knows about ReFS. Using fsutil we can check the version of ReFS. The command you need for that is:

fsutil fsinfo refsinfo <driveletter>
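From PowerShell you can filter that output down to just the version line; a sketch, noting that the exact layout of the fsutil output may vary between builds, so the string filter below is an assumption:

[sourcecode language="powershell"]
# A sketch; the exact fsutil output layout may vary between builds,
# so filtering on "REFS Version" is an assumption. D: is hypothetical.
fsutil fsinfo refsinfo D: | Select-String "REFS Version"
[/sourcecode]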

This is something we definitely could not do in Windows Server 2012 or Windows Server 2012 R2. I stumbled onto this by accident while experimenting with ReFS in the previews. Considering the ReFS focus in Windows Server 2016 this is not a surprise.

TPv3 to TPv4 = ReFS version 2.0 to 3.0

In Windows Server 2016 TPv2 and TPv3, fsutil fsinfo refsinfo reports ReFS version 2.0.


After a clean install of TPv4 I was faced with the fact that my existing ReFS formatted volumes showed up as RAW; they could not be mounted.


I had to reformat those (or move them to a TPv3 installation to recuperate the data). When investigating this on Windows Server 2016 TPv4 with fsutil, I noticed that we are at ReFS version 3.0.


The same actually goes the other way: a ReFS version 3.0 volume shows up as RAW in Windows Server 2016 TPv3 and is unusable there.

The important thing to keep in mind going forward is that, from my upgrade experiences, ReFS version 2.0 is not usable in TPv4. Keep that in mind when upgrading. You might want to copy your data to an NTFS volume first if you still need it.

I also don’t know whether in a future technical preview release, or whatever they are called by then, we’ll see version 3.1 or 4.0 arrive. But it’s something I’ll watch very carefully when moving to those versions.