Windows Server 2019 in place upgrade testing

In theory, in-place upgrade testing is easy. You just validate Microsoft's efforts and the testing that went into the process. If it succeeds, all is well. Well, not really. The number of permutations in real life is so large it can never be done for all of them. But even today, in this era of "servers as cattle", in-place upgrades have a role to play. I would say even more than before. That means that Windows Server 2019 in-place upgrade testing is also important.


In place upgrade paths to Windows Server 2019

In-place upgrade allows an administrator to upgrade an existing installation of Windows Server to a newer version, retaining settings and installed features.

The following Long-Term Servicing Channel (LTSC) versions and editions of Windows Server, with their supported paths for in-place upgrade, are shown below:

[Table: supported in-place upgrade paths to Windows Server 2019]

Please note that cluster operating system rolling upgrades can only be done from version N-1 to N. That means you can only do those from Windows Server 2016 to Windows Server 2019.


The ability to perform cluster operating system rolling upgrades is just one benefit you get by keeping your environment current.
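Verifying where a cluster stands before and after such a rolling upgrade is worth automating too. Here is a minimal sketch, assuming you run it on a cluster node with the FailoverClusters module available (the cluster name is a placeholder):

```powershell
# Check the current cluster functional level. A mixed-mode cluster keeps
# running at the lower level until the upgrade is committed.
Get-Cluster -Name "MyCluster" | Select-Object Name, ClusterFunctionalLevel

# Preview what committing the upgrade would do, without actually doing it.
Update-ClusterFunctionalLevel -WhatIf

# Once ALL nodes run the new OS version, commit the upgrade. This is a
# one-way operation: afterwards you can no longer add down-level nodes.
Update-ClusterFunctionalLevel
```

Running Update-ClusterFunctionalLevel with -WhatIf first is cheap insurance, as committing the functional level cannot be undone.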

Conclusion

Currently, a couple of fellow MVPs and I are busy doing some testing on "real" hardware. That means servers, the kind you'd use in a professional environment, not a PC lab. Testing on virtual machines rocks, and those are heavily used in real life, but you can't test everything you need to verify in hardware deployments. Think about S2D, Persistent Memory, SET, vRSS/VMQ/VMMQ, etc.

Part of that testing is in-place upgrades. Yes, there are times and places where I will avoid them; there are also moments where I leverage them. I do think they are important and have their place. Whether I do them depends on the value they can offer.

Whatever you do, you test, you verify and you break stuff in the lab before casually upgrading a cluster to a new version of the operating system on a Monday morning. I hope I don't need to explain this anymore anno 2018? Or actually, I do: we always have new talent joining us, and we all have to learn. So here's a big tip: learning on the job doesn't always equal learning in production. That will happen anyway, but don't default to it.

SC Series SCOS 7.3

Introduction

While I was on vacation, SC Series SCOS 7.3 was announced by DELL to the public at large. Finally, I would almost say, as I really expected this to be a bigger thing at DELL World 2018. SCOS updates are free to people with a valid support contract. Beyond bug fixes, we really get a lot of feature enhancements and additions in this new version. As a matter of fact, we get so much I can only wonder what they have planned for 8.x!


What’s new in SC Series SCOS 7.3

Let’s look very briefly at what is new in the SC Series SCOS 7.3 release:

  • Considerable performance gains for hybrid and All Flash Arrays. I tend to use a 70/30 read/write ratio and random IO for my baselines, so it won't be a magical doubling of speed. But hey, IOPS/latency/bandwidth measurements are a sport by themselves. As long as you can measure real and significant progress for your workloads against a baseline, you're doing well!
  • Easy SC4020 upgrades: you can now migrate the storage enclosure to new controller units.
  • 25 GbE & 100 GbE iSCSI support for SC5020, SC5020F, SC7020, SC7020F and SC9000.
  • CloudIQ support. CloudIQ is a free cloud-based analytics and monitoring tool for Unity that is now available for the SC Series.
  • Management with Unisphere:
    • "Unisphere for SC HTML5 Web UI" – the web UI is back & no more Java.
    • "Unisphere for SC" for managing a single array.
    • "Unisphere Central for SC" when you need to manage multiple arrays.
  • The SCv2000 can now federate & replicate with other SC array models.
  • Capacity increases for many SC Series models.
  • Distributed spares offer up to 500% faster rebuilds. On top of that, all drives are now used instead of letting assigned hot spare drives go to waste when not needed.
  • ALUA support for Live Volumes brings lower latency by reducing/optimizing network traffic.
  • An increase in the number of Live Volumes supported in the array.

My personal top favorite in SCOS 7.3 is distributed spares. First of all, this allows for way better performance overall, as we no longer reserve physical hot spares. The array just reserves spare space, so all disks add to the total IOPS available.


Secondly, rebuilds are now a lot faster due to "many to many" reads/writes instead of many to one. Third, more disks help extend the life span of SSDs, as do larger SSDs actually, so this is also an added benefit. With ever bigger SSDs in our arrays, and now that I am leveraging All Flash Arrays (AFA) with 15 TB SSDs, the latter is very much needed and welcome. If you read my blog post My first Dell SC7020(F) Array, you know this was on my priority list!

Another great benefit to me is the inherently better performance SCOS 7.3 brings us. Even with AFA we can always use more, especially at crunch time with transactional workloads, backups, data copies, etc. VDI customers will also welcome this.

Conclusion

I really look forward to this SCOS version and I'll share my upgrade experiences with you here. It fixes my main concern around rebuilds anno 2018. I'm still very happy with SCOS as far as general-purpose traditional SANs for a variety of workloads go. It is on my buy list and I am a repeat buyer. That is actually worth something and means they do things well. Now they should upgrade Replay Manager to really support and understand the Windows Server 2016 and 2019 Hyper-V improvements. What they have now is "works with" (à la Windows Server 2012). I would not call that supported yet. Anyway, SC Series SCOS 7.3 is definitely bringing a lot to the table. You can read more here.

My perspective on work and life

Introduction

What is so important about my perspective on work and life? Well, nothing at all, unless you're me. As an IT expert I spend way too much time in front of screens. It's an occupational hazard. It's not that I don't talk to other people. I do, quite a lot. I do so for my work, but also, a lot of the time, outside of my day job. That's essential to prevent tunnel vision and echo chambers. But a big part of my time is spent working on projects (design, architecture, implementation). The remainder goes to assisting others, learning and experimenting, or troubleshooting. That's a never-ending story: rinse and repeat. This never-ending cycle can lead to a loss of perspective. Not just the loss of your professional perspective, but work- and life-wise. The rat race goes fast, and in IT everything comes and goes faster than ever. You can work very hard and not get ahead. You might make lots of money but have no time to enjoy it. And it can all be over in a second. You can spend your whole life working for something, just to have it taken away by illness, accident, natural or man-made disaster, or crime. Sobering thoughts, to say the least.

My perspective on work and life

While I love the IT business from silicon to the clouds, I also adore the wonderful scenery that real clouds help create in the great outdoors. That's why it's good to take a break and go on a "walkabout". When looking out over the Grand Canyon, hiking in Yellowstone valleys or in Great Basin with its 5,000-year-and-older Bristlecone pines, you can't help but feel insignificant. Both in the big picture and over time. On a geological scale, what's a couple of million years anyway, let alone less? So every now and then I get my proverbial behind out of the IT cloud, the data center and the mind-numbing open landscape offices. I go watch wildlife and hike through landscapes formed by many hundreds of millions of years of nature's forces at work.

[Image: GSA geologic time scale]

It's a mindset where the little aid above, the GSA (Geological Society of America) geologic time scale, becomes relevant to appreciate and try to understand the natural beauty around me.

Some advice

Don't take life and work too seriously; step out of the "rat race" now and then. Changing my priorities and my perspective on work and life during time off is a good thing. During vacations it sure is a lot different, and I love it. Seeing the Rocky Mountains scenery as you drive to a hike in a comfy Ford Explorer is just magnificent.

From the majestic Rockies to the Pacific Northwest & Southwest, the views during a road trip are stunning, the hikes amazing, and the serenity soothing to the soul. I feel great when exploring them. Take a long weekend, go on a road trip, hike around and recharge your batteries. If you're able to work remotely, do so, and explore your local natural resources during your downtime or breaks.

Get over that fear of missing out and realize that "promotions" or work are less important than your own best interest. No one will pay you double when you work twice as hard or give you back your time. It's a typical example of diminishing returns. Remember that you don't get a second life. Live this one. Don't pointlessly rush through it from birth to death. You won't be THAT rich or THAT famous (or infamous) to be remembered. You'll probably be forgotten within one or two generations. So enjoy yourself a bit. Even if Rome does burn down during your absence, that's where new empires can grow.

Monitor the UNMAP/TRIM effect on a thin provisioned SAN

Introduction

During demos I give on the effectiveness of storage efficiencies (UNMAP, ODX) in Hyper-V, I use some PowerShell code to help show this. TRIM in the virtual machine and on the Hyper-V host passes information about deleted blocks along to a thin provisioned storage array. That means that every layer can be as efficient as possible. Here's a picture of me doing a demo to monitor the UNMAP/TRIM effect on a thin provisioned SAN.

[Image: demoing the UNMAP/TRIM monitoring script]

The script shows how a thin provisioned LUN on a SAN (DELL SC Series) grows in actual used space as data is created or copied inside VMs. When data is hard deleted, TRIM/UNMAP prevents dynamically expanding VHDX files from growing more than they need to. When a VM is shut down, they even shrink. The same info is passed on to the storage array. So, when data is deleted, we can see the actual space used in a thin provisioned LUN on the SAN go down. That makes for a nice demo. I have some more info on the benefits and the potential issues of UNMAP when used carelessly here.
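To reproduce such a demo, TRIM must be enabled inside the guest and you need a way to trigger a retrim on demand. A small sketch using in-box Windows tooling, assuming a data volume D: inside the VM:

```powershell
# Inside the VM: check that delete notifications (TRIM/UNMAP) are enabled.
# DisableDeleteNotify = 0 means TRIM is enabled for the file system.
fsutil behavior query DisableDeleteNotify

# After hard-deleting test data, retrim the volume so the freed blocks are
# reported down the stack (guest -> VHDX -> host -> thin provisioned SAN LUN).
Optimize-Volume -DriveLetter D -ReTrim -Verbose
```

With the monitoring script from this post running against the matching SAN volume, you should see the actual consumed space drop shortly after the retrim completes.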

Scripting options for the DELL SC Series (Compellent)

Your storage array needs to support thin provisioning and TRIM/UNMAP with Windows Server Hyper-V. If so, all you need is the PowerShell library your storage vendor must provide. For the DELL Compellent series that used to be the PowerShell Command Set (2008), which made them an early adopter of PowerShell automation in the industry. That evolved with the array capabilities and still works today with the older SC Series models. In 2015, Dell Storage introduced the Enterprise Manager API (EM-API) and also the Dell Storage PowerShell SDK, which uses the EM-API. This works via an EM Data Collector server and no longer connects directly to the management IP of the controllers. It is the only way to work with the newer SC Series models.

It’s a powerful tool to have and allows for automation and orchestration of your storage environment when you have wrapped your head around the PowerShell commands.

That does mean that I needed to replace my original PowerShell Command Set scripts. Depending on what those scripts do, this can be done easily and quickly, or it might require some more effort.

Monitoring UNMAP/TRIM effect on a thin provisioned SAN with PowerShell

As a short demo, let me showcase the Command Set and the DELL Storage PowerShell SDK versions of a script to monitor the UNMAP/TRIM effect on a thin provisioned SAN with PowerShell.

Command Set version

Bar the way you connect to the array, the difference is in the cmdlets. In Command Set, retrieving the storage info is done as follows:

$SanVolumeToMonitor = "MyDemoSANVolume"

#Get the size of the volume
$CompellentVolumeSize = (Get-SCVolume -Name $SanVolumeToMonitor).Size

#Get the actual disk space consumed in that volume
$CompellentVolumeRealDiskSpaceUsed = (Get-SCVolume -Name $SanVolumeToMonitor).TotalDiskSpaceConsumed

In the DELL Storage PowerShell SDK version it is not harder, just different from what it used to be.

$SanVolumeToMonitor = "MyDemoSANVolume"
$Volume = Get-DellScVolume -StorageCenter $StorageCenter -Name $SanVolumeToMonitor

$VolumeStats = Get-DellScVolumeStorageUsage -Instance $Volume.InstanceID

#Get the size of the volume
$CompellentVolumeSize = ($VolumeStats).ConfiguredSpace

#Get the actual disk space consumed in that volume
$CompellentVolumeRealDiskSpaceUsed = ($VolumeStats).ActiveSpace

Which gives …

[Screenshot: script output showing the LUN size and the space actually used on the SAN]

I hope this gave you some inspiration to get started automating your storage provisioning and governance. On premises or in the cloud, a GUI and a click have their place, but automation is the way to go. As a bonus, the complete script is below.

#region PowerShell to keep the PoSh window on top during demos
$signature = @'
[DllImport("user32.dll")] 
public static extern bool SetWindowPos( 
    IntPtr hWnd, 
    IntPtr hWndInsertAfter, 
    int X, 
    int Y, 
    int cx, 
    int cy, 
    uint uFlags); 
'@
$type = Add-Type -MemberDefinition $signature -Name SetWindowPosition -Namespace SetWindowPos -Using System.Text -PassThru

$handle = (Get-Process -id $Global:PID).MainWindowHandle 
$alwaysOnTop = New-Object -TypeName System.IntPtr -ArgumentList (-1) 
$type::SetWindowPos($handle, $alwaysOnTop, 0, 0, 0, 0, 0x0003) | Out-null
#endregion

function WriteVirtualDiskVolSize () {
    $Volume = Get-DellScVolume -Connection $Connection -StorageCenter $StorageCenter -Name $SanVolumeToMonitor
    $VolumeStats = Get-DellScVolumeStorageUsage -Connection $Connection -Instance $Volume.InstanceID
       
    #Get the size of the volume
    $CompellentVolumeSize = ($VolumeStats).ConfiguredSpace
    #Get the actual disk space consumed in that volume.
    $CompellentVolumeRealDiskSpaceUsed = ($VolumeStats).ActiveSpace

    Write-Host -Foregroundcolor Magenta "Didier Van Hoye - Microsoft MVP / Veeam Vanguard
& Dell Techcenter Rockstar"
    Write-Host -Foregroundcolor Magenta "Hyper-V, Clustering, Storage, Azure, RDMA, Networking"
    Write-Host -Foregroundcolor Magenta  "http://blog.workinghardinit.work"
    Write-Host -Foregroundcolor Magenta  "@workinghardinit"
    Write-Host -Foregroundcolor Cyan "DELLEMC Storage Center model $SCModel version" $SCVersion.version
    Write-Host -Foregroundcolor Cyan  "Dell Storage PowerShell SDK" (Get-Module DellStorage.ApiCommandSet).version
    Write-host -foregroundcolor Yellow "
 _   _  _   _  __  __     _     ____   
| | | || \ | ||  \/  |   / \   |  _ \ 
| | | ||  \| || |\/| |  / _ \  | |_) |
| |_| || |\  || |  | | / ___ \ |  __/
 \___/ |_| \_||_|  |_|/_/   \_\|_|
"
    Write-Host "" -ForegroundColor Red
    Write-Host "Size Of the LUN on SAN: $CompellentVolumeSize" -ForegroundColor Red
    Write-Host "Space Actually Used on SAN: $CompellentVolumeRealDiskSpaceUsed" -ForegroundColor Green 

    #Wait a while before you run these queries again.
    Start-Sleep -Milliseconds 1000
}

#If the Storage Center module isn't loaded, load it!
if (!(Get-Module DellStorage.ApiCommandSet)) {    
    import-module "C:\SysAdmin\Tools\DellStoragePowerShellSDK\DellStorage.ApiCommandSet.dll"
}

$DsmHostName = "MyDSMHost.domain.local"
$DsmUserName = "MyAdminName"
$DsmPwd = "MyPass"
$SCName = "MySCName"
# Convert the demo password to a secure string (don't hard-code credentials outside of demos)
$DsmPassword = (ConvertTo-SecureString -AsPlainText $DsmPwd -Force)

# Create the connection
$Connection = Connect-DellApiConnection -HostName $DsmHostName `
    -User $DsmUserName `
    -Password $DsmPassword

$StorageCenter = Get-DellStorageCenter -Connection $Connection -name $SCName 
$SCVersion = $StorageCenter | Select-Object Version
$SCModel = (Get-DellScController -Connection $Connection -StorageCenter $StorageCenter -InstanceName "Top Controller").model.Name.toupper()

$SanVolumeToMonitor = "MyDemoSanVolume"

#Just let the script run in a loop indefinitely.
while ($true) {
    Clear-Host
    WriteVirtualDiskVolSize
}