Windows Server 2019 is a supported guest OS on Windows Server 2016 Hosts

Is this even a concern?

While many of you are probably already running Windows Server 2019 VMs in test and production without a worry, a little hiccup in the Microsoft documentation caused some concern. So, yes, it is, or rather, it was. Some people noticed, or were told, that Windows Server 2019 is not a supported guest OS on Windows Server 2016 hosts. That was a mistake in the documentation, and it confused some people and account managers. But yes, Windows Server 2019 is a supported guest OS on Windows Server 2016 hosts. No worries!

The documentation mistake has been fixed

When we look at Supported Windows guest operating systems for Hyper-V on Windows Server and at GitHub https://github.com/MicrosoftDocs/windowsserverdocs/commit/2c54e781c64e0cc3fec2cef349a762b972987870#diff-5347e6e782aa2be9a9ec94ff6ef0436b today, we see that the mistake has been corrected. In the good tradition of Hyper-V, the host supports guest OS versions up to N+1. This means that Windows Server 2019 is a supported guest OS on Windows Server 2016 hosts.
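If you want to double-check what your hosts are running, you can ask the guests themselves. Below is a minimal PowerShell sketch, assuming you run it on the Hyper-V host and that the VMs are running with the Data Exchange (KVP) integration service enabled; it lists the OS name each guest reports, so you can confirm your Windows Server 2019 guests are humming along on a Windows Server 2016 host.

```powershell
# Minimal sketch: list the OS name each VM reports via the Data Exchange
# (KVP) integration service. Run on the Hyper-V host; only running VMs
# with Data Exchange enabled will report a value.
Get-CimInstance -Namespace root\virtualization\v2 `
    -ClassName Msvm_ComputerSystem -Filter "Caption='Virtual Machine'" |
ForEach-Object {
    $kvp = Get-CimAssociatedInstance -InputObject $_ `
        -ResultClassName Msvm_KvpExchangeComponent
    foreach ($item in $kvp.GuestIntrinsicExchangeItems) {
        $xml  = [xml]$item
        $name = $xml.SelectSingleNode("/INSTANCE/PROPERTY[@NAME='Name']/VALUE").InnerText
        if ($name -eq 'OSName') {
            $os = $xml.SelectSingleNode("/INSTANCE/PROPERTY[@NAME='Data']/VALUE").InnerText
            '{0} : {1}' -f $_.ElementName, $os
        }
    }
}
```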

But until recently you might have seen the below.

This is what caused the concern. It was a simple mistake. So please, if someone tells you Windows Server 2019 guests are not, or might not be, supported on a Windows Server 2016 host, tell them to check again and point them to the above links.


The good news is that the mistake is fixed and all is well. I’m sorry if your decision makers or managers were scared off by those documents before the mistake was fixed, but all is well. Windows Server 2019 is a supported guest OS on Windows Server 2016 hosts. Be happy and start rolling out and upgrading as soon as you have everything, like backups, covered. I know I am.

Move Storage Spaces from Windows 8.1 to Windows 10

Introduction

I recently assisted a little at a help desk. It was to validate the steps to move Storage Spaces from Windows 8.1 to Windows 10 workstations (DELL Precision). The engineers tend to have considerable local storage needs. Delivering to them both the capacity and performance they need, as well as a choice of protection levels, was facilitated tremendously by Storage Spaces.

Ever bigger SSD and NVMe disks have helped a lot as well. I remember way back when we built and validated a workstation configuration that had a JBOD to achieve the needed IOPS and capacity. That was very cool. Not literally (it produced quite some heat) and it was also a bit noisy. But their needs at that time required it. We have it easy now, with 4 TB NVMe and SSD drives readily available. Then again, the need for ever more and faster storage has not changed at all.

PS: making backups is easier than ever as well, and for that I leverage Veeam Agent for Windows. There are free and paid versions, and it is a great tool in our arsenal. You can have the engineers leverage it as a DIY solution or centrally manage it from the Veeam Backup & Replication console. Whatever fits your needs and budget.

Why Storage Spaces in the first place

By leveraging Storage Spaces for the data volume(s) we avoid a dependency on RAID controllers. Quality ones are expensive, and you run the risk that when the workstation needs to be replaced you either have to move the RAID controller with it (drivers, firmware and support might be an issue) or you can’t easily move between controllers when dealing with different vendors.
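For illustration, here is a minimal sketch of how such a data volume can be built from plain local disks with the in-box Storage cmdlets; the pool, disk and volume names are hypothetical, and a two-way mirror is just one of the protection levels you can pick.

```powershell
# Pool all eligible local disks (hypothetical names throughout).
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName 'EngineeringPool' `
    -StorageSubSystemFriendlyName 'Windows Storage*' `
    -PhysicalDisks $disks

# Carve out a two-way mirrored space using all available capacity.
New-VirtualDisk -StoragePoolFriendlyName 'EngineeringPool' `
    -FriendlyName 'EngineeringData' `
    -ResiliencySettingName Mirror -UseMaximumSize

# Bring the new space online as an NTFS data volume.
Get-VirtualDisk -FriendlyName 'EngineeringData' | Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel 'EngineeringData'
```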

Move Storage Spaces from Windows 8.1 to Windows 10

Let’s take a look at the Storage Pool and the volume in the old Windows 8.1 workstation. As you can see, all is well. I advise you to fix any issues before you move the disks to the new Windows 10 workstation.

A healthy Storage pool
The volume with all the engineering data
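If you prefer to double-check from PowerShell before pulling the disks, a quick health pass over the pool, the virtual disks and the physical disks looks like this (standard in-box Storage cmdlets):

```powershell
# Everything should report Healthy / OK before you move the disks.
Get-StoragePool  | Format-Table FriendlyName, HealthStatus, OperationalStatus
Get-VirtualDisk  | Format-Table FriendlyName, HealthStatus, OperationalStatus
Get-PhysicalDisk | Format-Table FriendlyName, HealthStatus, OperationalStatus, Usage
```

The same three lines are handy again on the Windows 10 workstation after the move, to confirm the pool arrived healthy.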

Shut down the Windows 8.1 workstation.
Remove the disks used in the storage spaces pool from the Windows 8.1 workstation.
Add these disks to the new Windows 10 workstation (no RAID controller or such; Storage Spaces rules apply!).

You can have up to 10 drives in most modern workstations. Buy your own drives for better pricing & sizing.

Boot the Windows 10 workstation and log in.
Open Windows Explorer. The data on the Storage Spaces volume is already there and accessible. Smooth!

All the data is right there!

Open Storage Spaces Manager. You will see an informational block about upgrading the Storage Spaces pool to enable new features. This is recommended when you know you don’t have to move the pool back to an older OS.

Informational block about upgrading the Storage Spaces pool to enable new features

Click Change settings and then click Upgrade pool. You will be asked to click the Upgrade pool button to confirm this. Note that when you upgrade the pool you can no longer move it back to Windows 8.1.

Upgrade the pool to enable the new features that come with Windows 10.
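If you prefer PowerShell over the GUI, the same upgrade is a one-liner; the pool name below is hypothetical. It is just as irreversible as the button.

```powershell
# One-way operation: after this, the pool can no longer be used on Windows 8.1.
# You will be prompted to confirm.
Update-StoragePool -FriendlyName 'EngineeringPool'
```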

That’s it. Upgrading the storage pool is fast and happens online. The only downtime was to physically move the disks from the old to the new workstation. As the last action, I chose to optimize drive usage once the workstations were returned to the engineers’ desks. This is new in Windows 10.

Just let it run. It has some performance impact, but the engineers were too happy with an easy data move and their new workstation to complain about that.
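Optimizing drive usage is also exposed as a cmdlet, handy if you want to kick it off after hours; again, the pool name is hypothetical.

```powershell
# Rebalances existing data across all disks in the pool; runs online,
# with some performance impact while it works.
Optimize-StoragePool -FriendlyName 'EngineeringPool'
```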

Conclusion

Banking on Storage Spaces to provide some organizations’ GIS and CAD engineers with a lot of local storage in their workstations has proven to be a rock-solid choice. They get to have both capacity and performance, which can be balanced. Large SSD disk sizes have been a great help with this. Anyway, one makes choices, and ours to leverage Storage Spaces on the client has been a success. The migration of the disks from old workstations to new ones was easy and straightforward. It allowed us to move Storage Spaces from Windows 8.1 to Windows 10 workstations easily. The portability of Storage Spaces rocks. IT support happy, clients happy. Some of the engineers, on their own or with the help desk, are replacing disks with bigger ones or moving to SSD.

Welcome 2019

Happy New Year! Today we welcome 2019. I wish all my readers the best for 2019. May your hikes and journeys, both recreational & inspirational, lead you to beautiful places and gorgeous views to behold. Enjoy the experience, the adventure and efforts along the way to get there. Be grateful you have the abilities to do so.

Me relaxing after hiking up and down the trail network at Lake O’Hara in Yoho National Park (yes, we got a golden ticket and were allowed in to hike those gorgeous trails) – I just love the Rockies and RoCE 

As I welcome 2019, I’ll be diving into some interesting technologies, trends & strategies to investigate, discuss, implement and advise on. Join me on my journey in 2019!

SCOS 7.3 Distributed Spares

Introduction

When you have upgraded your SC Series SAN to SCOS 7.3 (7.3.5.8.4 at the time of writing, see https://blog.workinghardinit.work/2018/08/13/sc-series-scos-7-3/ ) you are immediately ready to start utilizing the SCOS 7.3 distributed spares feature. This is very easy and virtually transparent to do.

7.3 on an SC-7020 AFA

You will actually notice a capacity and threshold jump in your SC array when you have upgraded to 7.3. The system now knows that spare capacity is dealt with differently.

Usable space and alert threshold increase right after upgrading to SCOS 7.3

How?

After upgrading you’ll see a notification either in Storage Manager or in Unisphere Central that informs you about the following:

“Distributed spare optimizer is currently disabled. Enabling optimizer will increase long-term disk performance.”


Once you click Enable, you’ll be asked if you want to proceed.


When you click “OK” the optimizer is configured and will start its work. That’s a one-way street: you cannot go back to classic hot spares. That is good to know, but in reality you don’t want to go back anyway.


In “Background Processes” you’ll be able to follow the progress of the initial redistribution. This goes reasonably fast; I did 3 SANs during a workday. No one even noticed or complained about any performance issues.

The RAID rebuild starts…
RAID rebuild near the end… it took about 2 to 3 hours on all-flash arrays.

The benefits are crystal clear

The benefits of SCOS 7.3 Distributed Spares are crystal clear:

  • Better performance. All disks contribute to the overall IOPS. There are no disks idling while reserved as hot spares. Depending on the number of disks in your array, the number of hot spares adds up. Next up for me is to rerun my baseline performance test (a DiskSpd sketch follows below this list) and see if I can measure this.
  • The lifetime of disks increases. On each disk, a portion is set aside as sparing capacity. This leads to an extra amount of under-provisioning. The workload on each of the drives is reduced a bit, which gives the storage controller additional capacity from which to perform wear-leveling operations, thus increasing the life span.
  • Faster rebuilds. This is the big one. As we can now read AND write to many disks, the rebuild speed increases significantly. With ever bigger disks this is something you need, and it was long overdue. But it’s here! It also allows for fast redistribution when a failed disk is replaced. On top of that, when a disk is marked suspect before it actually fails, a copy of its data to spare capacity takes place; only when that is done is the original data on the failing disk taken out of use and the disk marked as failed for replacement. This copy operation is less costly than a rebuild. A back-of-the-envelope sketch of the rebuild-time difference follows right after this list.
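To get a feel for why distributed sparing speeds up rebuilds so much, here is that back-of-the-envelope sketch with purely hypothetical numbers. A classic hot spare funnels every rebuild write into a single disk, while distributed sparing spreads those writes across all surviving disks; real arrays won’t hit these ideal figures, but the ratio is what matters.

```powershell
# Back-of-the-envelope only; all numbers are hypothetical.
$rebuildGB      = 4000   # capacity of the failed disk to reconstruct
$perDiskGBps    = 0.2    # sustained write throughput per disk
$survivingDisks = 23     # disks left in the pool after one failure

$classicHours     = $rebuildGB / $perDiskGBps / 3600
$distributedHours = $rebuildGB / ($perDiskGBps * $survivingDisks) / 3600

'Classic hot spare   : {0:N1} hours' -f $classicHours      # ~5.6 hours
'Distributed sparing : {0:N2} hours' -f $distributedHours  # ~0.24 hours
```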
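As for that baseline performance test, my tool of choice is DiskSpd. A sketch of a mixed random I/O run is below; the target path and every parameter value are just examples to adapt to your own environment.

```powershell
# 4 threads, 16 outstanding I/Os, 8K blocks, 70/30 read/write random mix,
# software and hardware caching disabled, latency stats captured,
# against a 20 GB test file (hypothetical path).
diskspd.exe -b8K -d120 -t4 -o16 -r -w30 -Sh -L -c20G D:\diskspd\test.dat
```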