802.1x Support with the Hyper-V switch is here!

Introduction

THANK YOU MICROSOFT!

Anyone who has had to support developers and IT Pros alike running Hyper-V on their clients and test systems in an environment with 802.1x port authentication knows the extra effort that went into workarounds. These were needed because the Hyper-V switch did not support 802.1X EAPoL. Sometimes it was an extra NIC on non-authenticated ports, physical security for rooms with non-authenticated ports, going Wi-Fi everywhere and for everything, etc. But in conditions where multiple interfaces are a requirement, this becomes impractical (not enough outlets, multiple dongles or add-in cards, etc.).

On top of that, there was always at least someone less than happy with the workaround. 802.1x support with the Hyper-V switch looks like it could or should work when you look at the vNICs, both on the host and inside the VMs. You’ll see that the authentication properties are there and the policies to make it all work are pushed, but no joy: authentication would fail.

802.1x Support with the Hyper-V switch is here!

Windows Server 2019 LTSC (1809) & Windows 10 (1809), as well as the 1809 or later SAC versions, now offer 802.1x Support with the Hyper-V switch.

This is not enabled by default. You will need to add a registry key for it to be enabled. From an elevated command prompt, run:

Reg add "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\vmsmp\parameters" /v 8021xEnabled /t REG_DWORD /d 1 /f

This change requires a reboot. So, we also give the Hyper-V host a kick:

shutdown /r /t 0
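If you prefer PowerShell over the classic Reg command, a minimal equivalent sketch could look like this. It writes the same registry value as the Reg add command above; the verification step is optional.

```powershell
# Enable 802.1X (EAPoL) support on the Hyper-V switch (requires a reboot).
# This sets the same value as the Reg add command above.
$path = 'HKLM:\SYSTEM\CurrentControlSet\Services\vmsmp\parameters'
Set-ItemProperty -Path $path -Name '8021xEnabled' -Value 1 -Type DWord

# Verify the value before rebooting.
Get-ItemProperty -Path $path -Name '8021xEnabled'

# Reboot the host for the change to take effect.
Restart-Computer
```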

When you have a Hyper-V switch that you share with the management OS you will see that the management vNIC now authenticates.
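If you want a quick command-line check of the wired authentication state, remember that the Wired AutoConfig service (dot3svc) handles 802.1X on wired interfaces. A rough sketch (the exact output varies per NIC, switch and policy):

```powershell
# Wired AutoConfig must be running for 802.1X on wired interfaces.
Start-Service dot3svc

# Show the 802.1X state of the wired interfaces, which includes the
# Hyper-V management vNIC; check the state/authentication fields.
netsh lan show interfaces
```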

You can also authenticate VMs. Depending on your needs, the configuration and setup will differ. 802.1x allows for Single-Session/Single-Host, Single-Session/Multiple-Host and Multiple-Session modes (names and abilities vary with switch type, model and vendor), and you’ll need to work out what is needed where for the scenarios you want to support; you won’t have one size fits all with port authentication. I’ll be sharing my experiences in the future.

The point is you’ll have to wrap your head around port authentication with 802.1x and its various options and permutations on the switches and RADIUS servers. I normally deal with Windows NPS for the RADIUS needs, and the majority of my sites have DELL campus switches. Depending on the needs of the users (developers, IT Pros, engineers) of your VMs, you will have to configure port authentication a bit differently, and you’d better either own that network or have a willing and able network team to work with.

Conclusion

Hurrah! I am a very happy camper. I am so very happy that 802.1x support with the Hyper-V switch is here. This was missing from Hyper-V for such a long time that the joy of finally getting it makes me forget how long I had to wait! For this feature, I will shout “BOOM”!

With the extra focus on making Hyper-V on Windows 10 the premier choice for developers, this had to be fixed, and they did. There are a lot more environments in my neck of the woods that leverage (physical) port authentication via 802.1x than I actually see IPsec in the wild. It might be different in other places, but that’s my reality. With ever more mobile and flex work, as well as body shoppers and temp labor that bring their own devices, I see physical port authentication remaining for a very long time still.

Move Storage Spaces from Windows 8.1 to Windows 10

Introduction

I recently assisted a little at a help desk. It was to validate the steps to move Storage Spaces from Windows 8.1 to Windows 10 workstations (DELL Precision). The engineers tend to have considerable local storage needs. Delivering to them both the capacity and performance they need, as well as a choice in protection levels, was facilitated tremendously by Storage Spaces.

Ever bigger SSD and NVMe disks helped a lot as well. I remember way back when we built and validated a workstation configuration that had a JBOD to achieve the needed IOPS and capacity. That was very cool. Not literally (it produced quite some heat) and it was also a bit noisy. But their needs at that time required it. We have it easy now with 4TB NVMe and SSD drives readily available. Then again, they need ever more and faster storage. That has not changed at all.

PS: making backups is easier than ever as well and for that, I leverage Veeam Agent for Windows. You have free and paid versions and it is a great tool in our arsenal. You can have them leverage it as a DIY solution or centrally manage it from the Veeam Backup & Replication console. Whatever fits your need and budgets.

Why Storage Spaces in the first place

By leveraging Storage Spaces for the data volume(s) we avoid a dependency on RAID controllers. The quality ones are expensive, and you run the risk that when the workstation needs to be replaced you either have to move the RAID controller with it (drivers, firmware and support might be an issue) or you can’t easily move between controllers when dealing with different vendors.

Move Storage Spaces from Windows 8.1 to Windows 10

Let’s take a look at the Storage Pool and the volume in the old Windows 8.1 workstation. As you can see all is well. I advise you to fix any issues before you move the disks to the new Windows 10 Workstation.

A healthy Storage pool
The volume with all the engineering data
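You can run the same health check from PowerShell with the Storage Spaces cmdlets before pulling the disks; a quick sketch (pool and disk names will differ in your environment):

```powershell
# Check the pool, the virtual disk(s) and the physical disks before the
# move; everything should report Healthy / OK before you pull the drives.
Get-StoragePool -IsPrimordial $false |
    Select-Object FriendlyName, HealthStatus, OperationalStatus

Get-VirtualDisk |
    Select-Object FriendlyName, HealthStatus, OperationalStatus

Get-PhysicalDisk |
    Select-Object FriendlyName, HealthStatus, OperationalStatus, Usage
```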

Shut down the Windows 8.1 workstation.
Remove the disks used in the storage spaces pool from the Windows 8.1 workstation.
Add these disks to the new Windows 10 workstation (no raid controller or such, Storage Spaces rules apply!).

You can have up to 10 drives in most modern workstations. Buy your own drives for better pricing & sizing.

Boot the Windows 10 workstation and log in.
Open Windows Explorer. The data on the Storage Spaces volume is already there and accessible. Smooth!

All the data is right there !

Open Storage Spaces Manager. You will see an informational block about upgrading the Storage Spaces pool to enable new features. This is recommended when you know you don’t have to move the pool back to an older OS.

Informational block about upgrading the Storage Spaces pool to enable new features

Click “Change settings” and then click “Upgrade pool”. You will be asked to click the “Upgrade pool” button to confirm this. Note that when you upgrade the pool you can no longer move it back to Windows 8.1.

Upgrade the pool to enable the new features that come with Windows 10.

That’s it. Upgrading the storage pool is fast and online. The only downtime was to physically move the disks from the old to the new workstation. As the last action, I chose to optimize drive usage once the workstations were returned to the engineers’ desks. This is new in Windows 10.

Just let it run. It has some performance impact, but the engineers were too happy with an easy data move and their new workstation to complain about that.
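Both the pool upgrade and the drive usage optimization can also be done from PowerShell instead of the Storage Spaces control panel. A sketch, where the pool name is an assumption for illustration:

```powershell
# Upgrade the pool to the Windows 10 version. This is one-way: after the
# upgrade the pool can no longer be moved back to Windows 8.1.
# 'EngineeringPool' is a placeholder; use your own pool's friendly name.
Update-StoragePool -FriendlyName 'EngineeringPool'

# Rebalance data across all disks in the pool ("optimize drive usage").
Optimize-StoragePool -FriendlyName 'EngineeringPool'
```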

Conclusion

Banking on Storage Spaces to provide the GIS and CAD engineers of some organizations with a lot of local storage in their workstations has proven to be a rock solid choice. They get to have both capacity and performance, which can be balanced. Large SSD disk sizes have been a great help with this. Anyway, one makes choices, but ours to leverage Storage Spaces on the client has been a success. The migration of the disks from old workstations to new ones was easy and straightforward. It allowed us to move Storage Spaces from Windows 8.1 to Windows 10 workstations easily. The portability of Storage Spaces rocks. IT Support happy, clients happy. Some of the engineers, on their own or with the help desk, are replacing disks with bigger ones or moving to SSD.

Welcome 2019

Happy New Year! Today we welcome 2019. I wish all my readers the best for 2019. May your hikes and journeys, both recreational & inspirational, lead you to beautiful places and gorgeous views to behold. Enjoy the experience, the adventure and efforts along the way to get there. Be grateful you have the abilities to do so.

Me relaxing after hiking up and down the trail network at Lake O’Hara in Yoho National Park (yes, we got a golden ticket and were allowed in for hiking those gorgeous trails) – I just love the Rockies and RoCE 

As I welcome 2019, I’ll be diving into some interesting technologies, trends & strategies to investigate, discuss, implement and advise on. Join me on my journey in 2019!

SCOS 7.3 Distributed Spares

Introduction

When you have upgraded your SC Series SAN to SCOS 7.3 (7.3.5.8.4 at the time of writing, see https://blog.workinghardinit.work/2018/08/13/sc-series-scos-7-3/ ) you are immediately ready to start utilizing the SCOS 7.3 distributed spares feature. This is very easy and virtually transparent to do.

SCOS 7.3 Distributed Spares
7.3 on an SC-7020 AFA

You will actually notice a capacity and threshold jump in your SC array when you upgrade to 7.3. The system now knows that spare capacity is dealt with differently.

SCOS 7.3 Distributed Spares
Usable space and alert threshold increase right after upgrading to SCOS 7.3

How?

After upgrading you’ll see a notification either in Storage Manager or in Unisphere Central that informs you about the following:

“Distributed spare optimizer is currently disabled. Enabling optimizer will increase long-term disk performance.”

SCOS 7.3 Distributed Spares

Once you click enable you’ll be asked if you want to proceed.

SCOS 7.3 Distributed Spares

When you click “OK” the optimizer is configured and will start its work. That’s a one-way street. You cannot go back to classic hot spares. That is good to know, but in reality you don’t want to go back anyway.

SCOS 7.3 Distributed Spares

In “Background Processes” you’ll be able to follow the progress of the initial redistribution. This goes reasonably fast; I did 3 SANs during a workday. No one even noticed or complained about any performance issues.

SCOS 7.3 Distributed Spares
The Raid rebuild starts … 
SCOS 7.3 Distributed Spares
RAID rebuild near the end …  it took about 2-3 hours on All Flash Arrays.

The benefits are crystal clear

The benefits of SCOS 7.3 Distributed Spares are crystal clear:

  • Better performance. All disks contribute to the overall IOPS. There are no disks idling while reserved as a hot spare. Depending on the number of disks in your array, the number of hot spares adds up. Next up for me is to rerun my baseline performance test and see if I can measure this.
  • The lifetime of disks increases. On each disk, a portion is set aside as sparing capacity. This leads to an extra amount of under-provisioning. The workload on each of the drives is reduced a bit and gives the storage controller additional capacity from which to perform wear-leveling operations thus increasing the life span.
  • Faster rebuilds. This is the big one. As we can now read AND write to many disks, the rebuild speed increases significantly. With ever bigger disks this is something you need, and it was long overdue. But it’s here! It also allows for fast redistribution when a failed disk is replaced. On top of that, when a disk is marked suspect before it actually fails, a copy of the data to spare capacity takes place, and only when that is done is the original data on the failing disk taken out of use and the disk marked as failed for replacement. This copy operation is less costly than a rebuild.