Nuclear data waste

Introduction

Nowadays everyone seems to be heading for the hills to cash in on the new gold rush: data! You have all heard that data is the new gold. Some call it the new oil if that is their favorite metaphor; that’s all fine. But there are drawbacks to this.

The cost of data and data waste

Like any resource you mine, it comes with associated costs, costs that have to be covered by the value you derive from it. That value has to exceed the money spent to gather, process and consume the data. That can be an expensive business.

On top of that you have to deal with “the waste” it creates as a byproduct. Waste can be toxic. Mining data tends to produce nuclear data waste, the bad kind where “safe levels” are hard to determine. In the rush to grab the gold, many forget a couple of important lessons from history. We should know by now that we need to act proactively to avoid waste. That is the cheapest option in the long run and mitigates many of the risks. We should also know that not all data is gold; some of it just glitters but isn’t valuable. Fool’s data, like fool’s gold, is essentially worthless no matter how much money you have spent on it. Even worse, it still produces nuclear data waste that can get you into (legal) trouble.

Data storages and backups

How much data do you need to get to the gold, and at what cost? Storage capabilities as well as storage capacity grow fast, and cost seems under control for now. But will this last forever? And even if so, what’s the ratio of data gold versus raw data stored? Can we improve that ratio? Because even when things are cheap, why do it if it is not needed?

Protecting the data and the waste

And then there is the cost of protecting that data, as well as the governance around it. The sad reality with data is that once you have it, the probability that it will get you into trouble is real. Data, sooner or later, will get lost, misplaced, sold, hacked, leaked, … it’s almost guaranteed. Ask any real InfoSec professional (not the standard issue, policy-quoting security officers, those are just window dressing) and they will open your eyes to the reality of the risks. It’s very sobering.

The gold rush

As with any hype or gold rush, we can avoid costly mistakes by looking at history. Think about who benefited and who lost out. Think about why this happened and how. Can you see any parallels?

  • Many people are drawn to the data gold fields. Very few strike a gold vein.
  • There is a lot of money to be made selling the tools, supplies and gear to mine the data, process it, store and protect it.
  • Gathering raw data and processing it can be highly toxic.
  • Storing and protecting the gold is expensive and hard.

Let’s dive a bit deeper into these issues, what they mean and how they materialize. They all have one thing in common for sure: the fear of missing out is one of the driving factors.

Gold diggers

The reality is that many people who now become “data scientists” are not all highly skilled mathematicians and experts at statistical analysis. It’s a new hype, just like OLAP tools and data mining were before. We now have BI, big data and data science. That’s where the gold can be found, so that’s where gold diggers flock to. Some have the skills, abilities and luck to derive wealth from that. Most will just have a job digging.

Pick Axe

There is a lot more data than there is science in the hype created around data scientists. Data scientists should be great at math and statistics. Those are fields of human endeavor that do not scale well. They are not even popular. Attaching “scientist” to something doesn’t make it a science. Be sure of the quality of your gold and make sure it is not fool’s gold. But the gold rush is on. There’s money to be made. In an era where science is viewed by many as “an opinion”, the urge to derive some credibility from adding “science” to any endeavor is a paradox, and it is on the rise. Clearly, this shows the value of real science, even when some only seem to like it when it suits their agenda.

But the field is exploding as companies want people working on all the raw data they collect. As one statistician stated: “I used to be a boring, underpaid geek with glasses; now that I’m a data scientist I’m cool, in demand and paid very well, even if the work is less scientific.” That’s the nature of the beast. Her employer got a real statistician, but as the mines require a lot more bodies, many will make do with less.

Merchants

The “sure” money is in the supply chain. Storage, networking, compute … no matter where it is (cloud, fog or on-premises computing). There is money to be made with tools to process the data, protect it (backups) and secure it against unwanted prying eyes and theft. If you’re selling any of those, business is booming.

Everyone seems obsessed with collecting data. Luckily storage costs per GB are down and we can store ever more. We also need to protect more. But who ever deletes data? Who dares push that button? A lot of data is collected “just in case”: we might find gold in there later, and if we don’t have it, we cannot look for it. The fear of missing out in action. That is great if you’re selling stuff. Data lakes, data ponds, storage blobs or tables, MongoDB or SQL PaaS, storage arrays, data processing technology and data protection. These can be products or services, it doesn’t matter; there is money to be made. And while you’re selling, you’re not asking the buyers if they really need it. You don’t question them, you praise their insights and help them protect their investment. Everyone is doing it, so must you. The copy/paste strategy in action.

Nuclear data waste

While the vast growth in data is spectacular, a lot of it is crap. Yet there is very little effort put into being selective. It’s too cheap right now to collect and store it. No one wants to say “we don’t need it” and be the one blamed later for not having it.

But in the age of data leaks, hackers, privacy concerns and ever more legislation around data protection, it’s worth making sure you don’t store data just because you can. Storing data holds inherent risks: the risk of losing it, corrupting it, deriving faulty information from it, leaking it, having it stolen or abused.

In the age of GDPR and many other rightful privacy and data protection concerns collecting data should be treated like nuclear power. The value it brings is undeniable. But you don’t need vast amounts of nuclear fuel to deliver that value. You do need very good processes, fail safes, regulation, capable people and technology.

We should start looking at data as nuclear fuel, and as such, after use and processing, part of it is left as toxic nuclear data waste. It’s a long-term toxic by-product of the process of deriving information from data. Minimize the collection and storage of data to what achieves your goals at minimum cost and risk. Luckily, we have a very good solution for toxic data waste: you can delete it and wipe it securely.
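On Windows, for instance, the built-in cipher.exe can overwrite the free space that deleted files leave behind. A minimal sketch, assuming the stale data set lives on drive D: (the path is a placeholder, and full sanitization standards go further than this):

#Remove the data set we no longer need
Remove-Item -Path "D:\StaleDataSet" -Recurse -Force

#Overwrite the deallocated space on the volume so the deleted data cannot be trivially recovered.
#cipher.exe /w writes zeros, ones and random data over all free space; this can take a while.
cipher.exe /w:D:\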

We have to stop thinking that more is better when much of it is junk. The overhead of caring for that junk is ridiculous. We might have to do so for nuclear waste out of necessity, but for data there are alternatives. Destroy it if you don’t need it. It’s the safest way to handle the legal and reputational risks related to it. That will take a conscious effort.

Efforts and costs

Critical thinking about collecting data is lacking. That is understandable: a lot of the money to be made in data mining is in providing the tools to collect, process, store and protect the data. Even with the many people who warn us of the security issues and legal responsibilities around it, it is often about selling services and products. For many, all this might turn out to be a lot like the other gold rushes: there were far more suppliers of tools that got rich than actual finders of profitable gold mines. This means there is also a lot of pressure and incentive to feed the “data is the new gold” beast.

Where a SQL database or a data warehouse at least meant you had to put effort into collecting the data, the rise of unstructured data technologies means that way too often we don’t care and we’ll figure it out later. Imagine doing that with nuclear fuel! For now, the technical advances in storage and data technologies have allowed us to act without too much deliberation on the sanity of our choices. That might change; it might be wise to avoid the cold shower when it does and benefit from minimizing toxic data risks today.

Conclusion

Now, true data gold is very valuable, but make sure you can recognize it. Just going through the motions, buying the tools and copying “in the know” statements from the internet isn’t going to cut it. That is called pretending. Sure, it’s fun. It is also a very dangerous and costly mistake when things get real. At best you look like an idiot with money, and many salespeople will separate you from that money very efficiently.

The smarter organizations already have a data strategy that includes waste avoidance, reduction and management. Many don’t, unfortunately. For those, collecting data is the main goal, driven by the tyranny of action over strategy. You have to be seen acting and being in charge. The buzzwords have to be present and you have to come across as a “can do sir, yes sir” person. Well, that is what will kill you. The late Norman Schwarzkopf knew this all too well.

Take care of your weaknesses; figure them out before they hurt you and before they destroy your ability to exploit your strengths. That, people, is a strategy exercise. I can do that for you and it will cost you a lot of money. But remember, strategies are not products you can buy; they are not commodities, and as such buying them is a paradox. A strategy is what will give you the edge over your competitors. If you have others determine your strategy, your competitors will pay them more to find it out. So, roll up your sleeves and put in the effort yourself. In the end, it’s all about common sense, and this is true for data mining, AI and BI as well.

Monitor the UNMAP/TRIM effect on a thin provisioned SAN

Introduction

During demos I give on the effectiveness of storage efficiencies (UNMAP, ODX) in Hyper-V, I use some PowerShell code to help show this. TRIM in the virtual machine and on the Hyper-V host passes information about deleted blocks along to a thin provisioned storage array. That means that every layer can be as efficient as possible. Here’s a picture of me doing a demo to monitor the UNMAP/TRIM effect on a thin provisioned SAN.

[Photo: monitoring the UNMAP/TRIM effect live during a demo]

The script shows how a thin provisioned LUN on a SAN (DELL SC Series) grows in actual used space when data is created or copied inside VMs. When data is hard deleted, TRIM/UNMAP prevents dynamically expanding VHDX files from growing more than they need to; when a VM is shut down, they even shrink. The same info is passed on to the storage array. So, when data is deleted, we can see the actual space used in a thin provisioned LUN on the SAN go down. That makes for a nice demo. I have some more info on the benefits and the potential issues of UNMAP if used carelessly here.
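If you want to check that delete notifications are enabled on a host before such a demo, or force a retrim pass on a volume, the in-box tooling covers it. A quick sketch (the drive letter is a placeholder):

#DisableDeleteNotify = 0 means TRIM/UNMAP delete notifications are enabled on this host
fsutil behavior query DisableDeleteNotify

#Ask the file system to resend TRIM for the free space on a volume, handy to speed up the effect in demos
Optimize-Volume -DriveLetter D -ReTrim -Verbose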

Scripting options for the DELL SC Series (Compellent)

Your storage array needs to support thin provisioning and TRIM/UNMAP with Windows Server Hyper-V. If so, all you need is the PowerShell library your storage vendor provides. For the DELL Compellent series that used to be the PowerShell Command Set (2008), which made them an early adopter of PowerShell automation in the industry. That evolved with the array capabilities and still works today with the older SC Series models. In 2015, Dell Storage introduced the Enterprise Manager API (EM-API) and also the Dell Storage PowerShell SDK, which uses the EM-API. This works via an EM Data Collector server and no longer connects directly to the management IP of the controllers. It is the only way to work with the newer SC Series models.

It’s a powerful tool to have and allows for automation and orchestration of your storage environment when you have wrapped your head around the PowerShell commands.

That does mean that I needed to replace my original PowerShell Command Set scripts. Depending on what those scripts do, this can be done easily and fast, or it might require some more effort.
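For reference, here is a minimal sketch of connecting through the Data Collector with the SDK, using the same cmdlets as the full script below (the host name, user name and Storage Center name are placeholders):

#Load the Dell Storage PowerShell SDK module
Import-Module "C:\SysAdmin\Tools\DellStoragePowerShellSDK\DellStorage.ApiCommandSet.dll"

#Connect to the Enterprise Manager / DSM Data Collector, not to the SAN controllers directly
$DsmPassword = Read-Host -AsSecureString "DSM password"
$Connection = Connect-DellApiConnection -HostName "MyDSMHost.domain.local" -User "MyAdminName" -Password $DsmPassword
$StorageCenter = Get-DellStorageCenter -Connection $Connection -Name "MySCName"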

Monitoring UNMAP/TRIM effect on a thin provisioned SAN with PowerShell

As a short demo, let me showcase the Command Set and the DELL Storage PowerShell SDK versions of a script to monitor the UNMAP/TRIM effect on a thin provisioned SAN with PowerShell.

Command Set version

Apart from the way you connect to the array, the difference is in the cmdlets. In Command Set, retrieving the storage info is done as follows:

$SanVolumeToMonitor = "MyDemoSANVolume"

#Get the size of the volume
$CompellentVolumeSize = (Get-SCVolume -Name $SanVolumeToMonitor).Size

#Get the actual disk space consumed in that volume
$CompellentVolumeRealDiskSpaceUsed = (Get-SCVolume -Name $SanVolumeToMonitor).TotalDiskSpaceConsumed

In the DELL Storage PowerShell SDK version it is not harder, just different from the way it used to be.

$SanVolumeToMonitor = "MyDemoSANVolume"
$Volume = Get-DellScVolume -StorageCenter $StorageCenter -Name $SanVolumeToMonitor

$VolumeStats = Get-DellScVolumeStorageUsage -Instance $Volume.InstanceID

#Get the size of the volume
$CompellentVolumeSize = ($VolumeStats).ConfiguredSpace

#Get the actual disk space consumed in that volume
$CompellentVolumeRealDiskSpaceUsed = ($VolumeStats).ActiveSpace

Which gives …

[Screenshot: script output showing the size of the LUN and the space actually used on the SAN]

I hope this gave you some inspiration to get started automating your storage provisioning and governance. On-premises or cloud, a GUI and a click have their place, but automation is the way to go. As a bonus, the complete script is below.

#region PowerShell to keep the PoSh window on top during demos
$signature = @'
[DllImport("user32.dll")] 
public static extern bool SetWindowPos( 
    IntPtr hWnd, 
    IntPtr hWndInsertAfter, 
    int X, 
    int Y, 
    int cx, 
    int cy, 
    uint uFlags); 
'@
$type = Add-Type -MemberDefinition $signature -Name SetWindowPosition -Namespace SetWindowPos -Using System.Text -PassThru

$handle = (Get-Process -id $Global:PID).MainWindowHandle 
$alwaysOnTop = New-Object -TypeName System.IntPtr -ArgumentList (-1) 
$type::SetWindowPos($handle, $alwaysOnTop, 0, 0, 0, 0, 0x0003) | Out-null
#endregion

function WriteVirtualDiskVolSize () {
    $Volume = Get-DellScVolume -Connection $Connection -StorageCenter $StorageCenter -Name $SanVolumeToMonitor
    $VolumeStats = Get-DellScVolumeStorageUsage -Connection $Connection -Instance $Volume.InstanceID
       
    #Get the size of the volume
    $CompellentVolumeSize = ($VolumeStats).ConfiguredSpace
    #Get the actual disk space consumed in that volume.
    $CompellentVolumeRealDiskSpaceUsed = ($VolumeStats).ActiveSpace

    Write-Host -Foregroundcolor Magenta "Didier Van Hoye - Microsoft MVP / Veeam Vanguard
& Dell Techcenter Rockstar"
    Write-Host -Foregroundcolor Magenta "Hyper-V, Clustering, Storage, Azure, RDMA, Networking"
    Write-Host -Foregroundcolor Magenta  "http:/blog.workinghardinit.work"
    Write-Host -Foregroundcolor Magenta  "@workinghardinit"
    Write-Host -Foregroundcolor Cyan "DELLEMC Storage Center model $SCModel version" $SCVersion.version
    Write-Host -Foregroundcolor Cyan  "Dell Storage PowerShell SDK" (Get-Module DellStorage.ApiCommandSet).version
    Write-host -foregroundcolor Yellow "
 _   _  _   _  __  __     _     ____   
| | | || \ | ||  \/  |   / \   |  _ \ 
| | | ||  \| || |\/| |  / _ \  | |_) |
| |_| || |\  || |  | | / ___ \ |  __/
 \___/ |_| \_||_|  |_|/_/   \_\|_|
"
    Write-Host ""-ForegroundColor Red
    Write-Host "Size Of the LUN on SAN: $CompellentVolumeSize" -ForegroundColor Red
    Write-Host "Space Actually Used on SAN: $CompellentVolumeRealDiskSpaceUsed" -ForegroundColor Green 

    #Wait a while before you run these queries again.
    Start-Sleep -Milliseconds 1000
}

#If the Storage Center module isn't loaded, do so!
if (!(Get-Module DellStorage.ApiCommandSet)) {    
    import-module "C:\SysAdmin\Tools\DellStoragePowerShellSDK\DellStorage.ApiCommandSet.dll"
}

$DsmHostName = "MyDSMHost.domain.local"
$DsmUserName = "MyAdminName"
$DsmPwd = "MyPass"
$SCName = "MySCName"
# Convert the plain-text demo password to a secure string
$DsmPassword = (ConvertTo-SecureString -AsPlainText $DsmPwd -Force)

# Create the connection
$Connection = Connect-DellApiConnection -HostName $DsmHostName `
    -User $DsmUserName `
    -Password $DsmPassword

$StorageCenter = Get-DellStorageCenter -Connection $Connection -name $SCName 
$SCVersion = $StorageCenter | Select-Object Version
$SCModel = (Get-DellScController -Connection $Connection -StorageCenter $StorageCenter -InstanceName "Top Controller").model.Name.toupper()

$SanVolumeToMonitor = "MyDemoSanVolume"

#Just let the script run in a loop indefinitely.
while ($true) {
    Clear-Host
    WriteVirtualDiskVolSize
}
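One note on the demo script above: it hard-codes the DSM password, which is fine for a lab but not for anything else. A safer variation, reusing the same Connect-DellApiConnection parameters, prompts for credentials instead:

#Prompt for credentials rather than storing a plain-text password in the script
$Cred = Get-Credential -UserName $DsmUserName -Message "Enter the DSM credentials"
$Connection = Connect-DellApiConnection -HostName $DsmHostName `
    -User $Cred.UserName `
    -Password $Cred.Password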

 

Collect cluster nodes with HBA WWN info

Introduction

Below is a script that I use to collect cluster nodes with HBA WWN info. It grabs the cluster nodes and their HBA (virtual port) WWN information from an existing cluster. In this example the nodes have Fibre Channel (FC) HBAs, but it works equally well for iSCSI HBAs or other cards. You can use the collected info in real time. As an example, I also demonstrate writing and reading the info to and from a CSV file.

This script comes in handy when you are replacing storage arrays. You’ll need that info to do the FC zoning, for example, and to create the cluster and server objects with the correct HBAs on the new storage arrays if they allow for automation. As a Hyper-V cluster admin you can grab all that info from your cluster nodes without the need to have access to the SAN or FC fabrics. You can use it yourself and hand it over to those handling them, who can use it to cross-check the info they see on the switches or the old storage arrays.


Script to collect cluster nodes with HBA WWN info

The script demos a single cluster, but you could use it for many. It collects the cluster name, the cluster nodes and their Emulex HBAs. It writes that information to a CSV file you can read easily in an editor or Excel.


The script also demonstrates reading that CSV file back and parsing the info. That info can be used in PowerShell to script the creation of the cluster and server objects on your SAN and to add the HBAs to the server objects. I recently used it to move a bunch of Hyper-V and file clusters to new DELLEMC SC Series storage arrays, which have the DELL Storage PowerShell SDK. You might find it useful as an example and adapt it for your own needs (iSCSI, brand, model of HBA, etc.).

#region Supporting Functions
Function Convert-OutputForCSV {
    <#
        .SYNOPSIS
            Provides a way to expand collections in an object property prior
            to being sent to Export-Csv.

        .DESCRIPTION
            Provides a way to expand collections in an object property prior
            to being sent to Export-Csv. This helps to avoid the object type
            from being shown such as system.object[] in a spreadsheet.

        .PARAMETER InputObject
            The object that will be sent to Export-Csv

        .PARAMETER OutputPropertyType
            This determines whether a property that holds a collection will be
            shown in the CSV as a comma-delimited string or as a stacked string.

            Possible values:
            Stack
            Comma

            Default value is: Stack

        .NOTES
            Name: Convert-OutputForCSV
            Author: Boe Prox
            Created: 24 Jan 2014
            Version History:
                1.1 - 02 Feb 2014
                    -Removed OutputOrder parameter as it is no longer needed; inputobject order is now respected 
                    in the output object
                1.0 - 24 Jan 2014
                    -Initial Creation

        .EXAMPLE
            $Output = 'PSComputername','IPAddress','DNSServerSearchOrder'

            Get-WMIObject -Class Win32_NetworkAdapterConfiguration -Filter "IPEnabled='True'" |
            Select-Object $Output | Convert-OutputForCSV | 
            Export-Csv -NoTypeInformation -Path NIC.csv    
            
            Description
            -----------
            Using a predefined set of properties to display ($Output), data is collected from the 
            Win32_NetworkAdapterConfiguration class and then passed to the Convert-OutputForCSV
            function which expands any property with a collection so it can be read properly prior
            to being sent to Export-Csv. Properties that had a collection will be viewed as a stack
            in the spreadsheet.        
            
    #>
    #Requires -Version 3.0
    [cmdletbinding()]
    Param (
        [parameter(ValueFromPipeline)]
        [psobject]$InputObject,
        [parameter()]
        [ValidateSet('Stack', 'Comma')]
        [string]$OutputPropertyType = 'Stack'
    )
    Begin {
        $PSBoundParameters.GetEnumerator() | ForEach {
            Write-Verbose "$($_)"
        }
        $FirstRun = $True
    }
    Process {
        If ($FirstRun) {
            $OutputOrder = $InputObject.psobject.properties.name
            Write-Verbose "Output Order:`n $($OutputOrder -join ', ' )"
            $FirstRun = $False
            #Get properties to process
            $Properties = Get-Member -InputObject $InputObject -MemberType *Property
            #Get properties that hold a collection
            $Properties_Collection = @(($Properties | Where-Object {
                        $_.Definition -match "Collection|\[\]"
                    }).Name)
            #Get properties that do not hold a collection
            $Properties_NoCollection = @(($Properties | Where-Object {
                        $_.Definition -notmatch "Collection|\[\]"
                    }).Name)
            Write-Verbose "Properties Found that have collections:`n $(($Properties_Collection) -join ', ')"
            Write-Verbose "Properties Found that have no collections:`n $(($Properties_NoCollection) -join ', ')"
        }
 
        $InputObject | ForEach {
            $Line = $_
            $stringBuilder = New-Object Text.StringBuilder
            $Null = $stringBuilder.AppendLine("[pscustomobject] @{")

            $OutputOrder | ForEach {
                If ($OutputPropertyType -eq 'Stack') {
                    $Null = $stringBuilder.AppendLine("`"$($_)`" = `"$(($line.$($_) | Out-String).Trim())`"")
                }
                ElseIf ($OutputPropertyType -eq "Comma") {
                    $Null = $stringBuilder.AppendLine("`"$($_)`" = `"$($line.$($_) -join ', ')`"")                   
                }
            }
            $Null = $stringBuilder.AppendLine("}")
 
            Invoke-Expression $stringBuilder.ToString()
        }
    }
    End {}
}
function Get-WinOSHBAInfo {
<#
Basically adds 3 nicely formatted properties to the HBA info we get via WMI.
These are the NodeWWN, the PortWWN and the FabricName. The raw attributes
from WMI are not readily consumable. WWNs are given with a ":" delimiter.
This can easily be replaced or removed depending on the need.
#>

param ($ComputerName = "localhost")
 
# Get HBA Information
$Port = Get-WmiObject -ComputerName $ComputerName -Class MSFC_FibrePortHBAAttributes -Namespace "root\WMI"
$HBAs = Get-WmiObject -ComputerName $ComputerName -Class MSFC_FCAdapterHBAAttributes  -Namespace "root\WMI"
 
$HBAProperties = $HBAs | Get-Member -MemberType Property, AliasProperty | Select -ExpandProperty name | ? {$_ -notlike "__*"}
$HBAs = $HBAs | Select-Object $HBAProperties
$HBAs | % { $_.NodeWWN = ((($_.NodeWWN) | % {"{0:x2}" -f $_}) -join ":").ToUpper() }
 
ForEach ($HBA in $HBAs) {
 
    # Get Port WWN
    $PortWWN = (($Port |? { $_.instancename -eq $HBA.instancename }).attributes).PortWWN
    $PortWWN = (($PortWWN | % {"{0:x2}" -f $_}) -join ":").ToUpper()
    Add-Member -MemberType NoteProperty -InputObject $HBA -Name PortWWN -Value $PortWWN
    # Get Fabric WWN
    $FabricWWN = (($Port |? { $_.instancename -eq $HBA.instancename }).attributes).FabricName
    $FabricWWN = (($FabricWWN | % {"{0:x2}" -f $_}) -join ":").ToUpper()
    Add-Member -MemberType NoteProperty -InputObject $HBA -Name FabricWWN -Value $FabricWWN
 
    # Output
    $HBA
}
}
#endregion 

#Grab the cluster name in a variable. Adapt this code to loop through all your clusters.
$ClusterName = "DEMOLABCLUSTER"
#Grab all cluster nodes
$ClusterNodes = Get-Cluster -name $ClusterName | Get-ClusterNode
#Create array of custom object to store ClusterName, the cluster nodes and the HBAs
$ServerWWNArray = @()

ForEach ($ClusterNode in $ClusterNodes) {
    #We loop through the nodes of the cluster and for each one we grab the relevant HBAs.
    #My lab nodes have different types installed on and off, so I specify the manufacturer to get the relevant ones.
    #Adapt to your needs. You can also use ModelDescription to filter out FCoE versus FC HBAs etc.
    $AllHBAPorts = Get-WinOSHBAInfo -ComputerName $ClusterNode.Name | Where-Object {$_.Manufacturer -eq "Emulex Corporation"} 

    #The SC Series SAN PowerShell takes the WWNs without any delimiters, so we dump the ":" for this use case.
    $WWNs = $AllHBAPorts.PortWWN -replace ":", ""
    $NodeName = $ClusterNode.Name

    #Build a nice node object with the info and add it to the $ServerWWNArray 
    $ServerWWNObject = New-Object psobject -Property @{
        WWN         = $WWNs
        ServerName  = $NodeName 
        ClusterName = $ClusterName         
    }
    $ServerWWNArray += $ServerWWNObject
}

#Show our array
$ServerWWNArray

#just a demo to list what's in the array
ForEach ($ServerNode in $ServerWWNArray) {    
    $Servernode.ServerName
    
    ForEach ($WWN in $Servernode.WWN)
    {$WWN}

}

#Convert the array so collections export to CSV properly
$Export = $ServerWWNArray | Convert-OutputForCSV
#region write to CSV and read from CSV

#You can dump this in a file
$Export | export-csv -Path "c:\SysAdmin\$ClusterName.csv" -Delimiter ";"

#and get it back from a file
Get-Content -Path "c:\SysAdmin\$ClusterName.csv"
$ClusterInfoFile = Import-CSV -Path "c:\SysAdmin\$ClusterName.csv" -Delimiter ";"
$ClusterInfoFile | Format-List

#just a demo to list what's in the array
$MyClusterName = $ClusterInfoFile.clustername | get-unique
$MyClusterName
ForEach ($ClusterNode in $ClusterInfoFile) {  

    $ClusterNode.ServerName
    
    ForEach ($WWN in $ClusterNode.WWN) {
        $WWN
    }

}
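Keep in mind that Convert-OutputForCSV stacks multiple WWNs into a single multi-line string, so after Import-CSV the WWN property is one string rather than an array. If you need one WWN at a time, for instance to feed them to SAN automation, split the value again. A small sketch:

ForEach ($ClusterNode in $ClusterInfoFile) {
    $ClusterNode.ServerName
    #The stacked WWNs come back from the CSV as one string; split on line breaks to get one WWN per iteration
    ForEach ($WWN in ($ClusterNode.WWN -split "`r?`n" | Where-Object { $_ })) {
        $WWN
    }
}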

Dell SC Series MPIO Registry Settings script

Introduction

When you’re using DELL Compellent (SC Series) storage you might be leveraging the Dell SC Series MPIO Registry Settings script they provide to set the recommended settings. That’s a nice little script you can test, verify and adapt to integrate into your setup scripts. You can find it in the Dell EMC SC Series Storage and Microsoft Multipath I/O best practices document.

Dell SC Series MPIO Registry Settings script

Recently I was working with a new deployment (7.2.40) to test and verify it in a lab environment. The lab cluster nodes had a lot of NICs & FC HBAs to test all kinds of possible scenarios: Microsoft Windows clusters, S2D, Hyper-V, FC, iSCSI, etc. The script detected the iSCSI service but did not update any settings, and it threw errors.


After verifying things in the registry myself, it was clear that the entries for the Microsoft iSCSI Initiator that the script is looking for are there, but the script did not pick them up.


Looking over the script, it became clear quickly what the issue was. The variable $IscsiRegPath = "HKLM:\SYSTEM\CurrentControlSet\Control\Class\{4d36e97b-e325-11ce-bfc1-08002be10318}\000*" has 3 leading zeros out of a maximum of 4 characters. This means that if the Microsoft iSCSI Initiator info is in 0009 it gets picked up, but not when it is in 0011, for example.

So I changed that to only 2 leading zeros. This assumes you won’t exceed 0099, which is a safer assumption, but you could argue it should even be only one leading zero, as 999 is an even safer assumption.

$IscsiRegPath = "HKLM:\SYSTEM\CurrentControlSet\Control\Class\{4d36e97b-e325-11ce-bfc1-08002be10318}\00*"
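To verify on a given host which numbered subkey actually holds the Microsoft iSCSI Initiator, you can run the script’s own filter interactively:

#List the subkey(s) whose DriverDesc identifies the Microsoft iSCSI Initiator
Get-Item -Path "HKLM:\SYSTEM\CurrentControlSet\Control\Class\{4d36e97b-e325-11ce-bfc1-08002be10318}\00*" |
    Where-Object { (Get-ItemProperty $_.PSPath).DriverDesc -eq "Microsoft iSCSI Initiator" } |
    Select-Object -ExpandProperty PSChildName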

I’m sharing the snippet with my adaptation here in case you want it. As always, I assume no responsibility for what you do with the script or the outcomes in your environment. Big boy rules apply.

# MPIO Registry Settings script
# This script will apply recommended Dell Storage registry settings
# on Windows Server 2008 R2 or newer
#
# THIS CODE IS MADE AVAILABLE AS IS, WITHOUT WARRANTY OF ANY KIND.
# THE ENTIRE RISK OF THE USE OR THE RESULTS FROM THE USE OF THIS CODE
# REMAINS WITH THE USER.
# Assign variables

$MpioRegPath = "HKLM:\SYSTEM\CurrentControlSet\Services\mpio\Parameters"
$IscsiRegPath = "HKLM:\SYSTEM\CurrentControlSet\Control\Class\"
#DIDIER adaptation to 2 leading zeros instead of 3, as 0010 and 0011 would not be
#found otherwise. This makes the assumption you won't exceed 0099, which is a
#safer assumption, but you could argue that this should even be only one
#leading zero, as 999 is an even safer assumption.
$IscsiRegPath += "{4d36e97b-e325-11ce-bfc1-08002be10318}\00*"

# General settings
Set-ItemProperty -Path $MpioRegPath -Name "PDORemovePeriod" -Value 120
Set-ItemProperty -Path $MpioRegPath -Name "PathRecoveryInterval" -Value 25
Set-ItemProperty -Path $MpioRegPath -Name "UseCustomPathRecoveryInterval" -Value 1
Set-ItemProperty -Path $MpioRegPath -Name "PathVerifyEnabled" -Value 1

# Apply OS-specific general settings
$OsVersion = ( Get-WmiObject -Class Win32_OperatingSystem ).Caption
If ( $OsVersion -match "Windows Server 2008 R2" )
{
New-ItemProperty -Path $MpioRegPath -Name "DiskPathCheckEnabled" -Value 1 -PropertyType DWORD -Force
New-ItemProperty -Path $MpioRegPath -Name "DiskPathCheckInterval" -Value 25 -PropertyType DWORD -Force
}
Else
{
Set-ItemProperty -Path $MpioRegPath -Name "DiskPathCheckInterval" -Value 25
}

# iSCSI settings
If ( ( Get-Service -Name "MSiSCSI" ).Status -eq "Running" )
{
# Get the registry path for the Microsoft iSCSI initiator parameters
$IscsiParam = Get-Item -Path $IscsiRegPath | Where-Object { ( Get-ItemProperty $_.PSPath ).DriverDesc -eq "Microsoft iSCSI Initiator"} | Get-ChildItem | Where-Object { $_.PSChildName -eq "Parameters" }

# Set the Microsoft iSCSI initiator parameters
Set-ItemProperty -Path $IscsiParam.PSPath -Name "MaxRequestHoldTime" -Value 90
Set-ItemProperty -Path $IscsiParam.PSPath -Name "LinkDownTime" -Value 35
Set-ItemProperty -Path $IscsiParam.PSPath -Name "EnableNOPOut" -Value 1
}
Else
{
Write-Host "iSCSI Service is not running."
Write-Host "iSCSI registry settings have NOT been configured."
}

Write-Host "MPIO registry settings have been configured successfully."
Write-Host "The system must be restarted for the changes to take effect."