SC Series SCOS 7.3

Introduction

While I was on vacation, DELL announced SC Series SCOS 7.3 to the public at large. Finally, I would almost say, as I really expected this to be a bigger deal at DELL World 2018. SCOS updates are free to anyone with a valid support contract. On top of the usual bug fixes, we really get a lot of feature enhancements and additions in this new version. As a matter of fact, we get so much I can only wonder what they have planned for 8.x!


What’s new in SC Series SCOS 7.3

Let’s look very briefly at what is new in the SC Series SCOS 7.3 release:

  • Considerable performance gains for Hybrid or All Flash Arrays. I tend to use a 70/30 read/write ratio and random IO for my baselines, so it won’t be a magical doubling of speed. But hey, IOPS/latency/bandwidth measurements are a sport by themselves. As long as you can measure real, significant progress for your workloads against a baseline, you’re doing well! (A baseline sketch follows after this list.)
  • Easy SC4020 upgrades: you can now migrate the storage enclosure to new controller units.
  • 25GbE & 100GbE iSCSI support for SC5020, SC5020F, SC7020, SC7020F and SC9000.
  • CloudIQ support. CloudIQ is a free cloud-based analytics and monitoring tool for Unity that is now available for the SC Series.
  • Management with Unisphere:
    • “Unisphere for SC HTML5 Web UI” – the web UI is back & no more Java.
    • “Unisphere for SC” for managing a single array.
    • “Unisphere Central for SC” when you need to manage multiple arrays.
  • SCv2000 can now federate & replicate with other SC array models.
  • Capacity increases for many SC series models.
  • Distributed spares offer up to 500% faster rebuilds. On top of that, all drives are now used instead of letting dedicated hot spare drives go to waste when they are not needed.
  • ALUA support for Live Volumes brings lower latency by reducing/optimizing network traffic.
  • An increase in the number of Live Volumes supported per array.
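
Not part of the release notes, but since I mentioned baselines: below is a minimal sketch of how such a 70/30 random IO baseline can be run with Microsoft’s free DiskSpd tool. The test file location, size, thread count and queue depth are assumptions you should tune to your own workload.

#Minimal DiskSpd baseline sketch. Assumes diskspd.exe is available and T: is a volume on the SC array.
#8KB blocks, 70% read / 30% write (-w30), fully random (-r), 8 threads with 8 outstanding IOs each,
#60 seconds, a 20GB test file, caching disabled (-Sh), latency statistics captured (-L).
.\diskspd.exe -b8K -d60 -o8 -t8 -r -w30 -c20G -Sh -L T:\baseline.dat > baseline-before.txt
#Run the same command again after the SCOS 7.3 upgrade and compare the two result files.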

My personal top favorite in SCOS 7.3 is distributed spares. First of all, this gives us way better performance overall, as we no longer reserve hot spares physically. The array just reserves space, so all disks add to the total IOPS available.


Secondly, rebuilds are now a lot faster due to “many to many” reads/writes instead of many to one. Third, more active disks help extend the life span of SSDs, as do larger SSDs actually, so this is also an added benefit. Now that I am leveraging All Flash Arrays (AFA) with ever bigger SSDs, 15TB ones even, the faster rebuilds are very much needed and welcome. If you read my blog post My first Dell SC7020(F) Array, you know this was on my priority list!

Another great benefit to me is the inherently better performance SCOS 7.3 brings us. Even with an AFA we can always use more, especially at crunch time with transactional workloads, backups, data copies, etc. VDI customers will also welcome this.

Conclusion

I really look forward to this SCOS version and I’ll share my upgrade experiences with you here. It fixes my main concern around rebuilds anno 2018. I’m still very happy with SCOS as far as general-purpose traditional SANs for a variety of workloads go. It is on my buy list and I am a repeat buyer. That is actually worth something and means they do things well. Now they should upgrade Replay Manager to really support and understand the Windows Server 2016 and 2019 Hyper-V improvements. What they have now is “works with” (à la Windows Server 2012); I would not call that supported yet. Anyway, the SC Series SCOS 7.3 is definitely bringing a lot to the table. You can read more here.

My perspective on work and life

Introduction

What is so important about my perspective on work and life? Well, nothing at all, unless you’re me. As an IT expert I spend way too much time in front of screens. It’s an occupational hazard. It’s not that I don’t talk to other people. I do, quite a lot. I do so for my work but also, a lot of the time, outside of my day job. That’s essential to prevent tunnel vision and echo chambers. But a big part of my time is spent working on projects (design, architecture, implementation). The remainder goes to assisting others, learning, experimenting and troubleshooting. That’s a never ending story: rinse and repeat. It is a cycle that can lead to loss of perspective. Not just the loss of your professional perspective, but work & life wise. The rat race goes fast, and in IT everything comes and goes faster than ever. You can work very hard and not get ahead. You might make lots of money but have no time to enjoy it. And it can all be over in a second. You can spend your whole life working for something, just to have it taken away by illness, accident, natural or man-made disaster, or crime. Sobering thoughts, to say the least.

My perspective on work and life

While I love the IT business from silicon to the clouds, I also adore the wonderful scenery that real clouds help create in the great outdoors. That’s why it’s good to take a break and go on a “walkabout”. When looking out over the Grand Canyon, hiking through the valleys of Yellowstone, or standing in Great Basin with its 5,000-year-old and older bristlecone pines, you can’t help but feel insignificant, both in the big picture and over time. On a geological scale, what’s a couple of million years anyway, let alone less? So every now and then I get my proverbial behind out of the IT cloud, the data center and the mind-numbing open landscape offices. I go watch wildlife and hike through landscapes formed by many hundreds of millions of years of nature’s forces at work.

[Image: the GSA geologic time scale]

It’s a mindset where the little aid above, the GSA (Geological Society of America) geologic time scale, becomes relevant to appreciating & trying to understand the natural beauty around me.

Some advice

Don’t take life and work too seriously; step out of the “rat race” now and then. Changing my priorities and my perspective on work and life during time off is a good thing, and I love it. Seeing the Rocky Mountains scenery as you drive to a hike in a comfy Ford Explorer is just magnificent.


From the majestic Rockies & the Pacific North & South West, the views during a road trip are stunning. The hikes are amazing & the serenity is soothing to the soul. I feel great when exploring them. Take a long weekend, go on a road trip, hike around and recharge your batteries. If you’re able to work remotely, do so, and explore your local natural resources during your down time or breaks.

Get over that fear of missing out and realize that “promotions” or work are less important than your own best interest. No one will pay you double when you work twice as hard, or give you back your time. It’s a typical example of diminishing returns. Remember that you don’t get a second life. Live this one. Don’t pointlessly rush through it from birth to death. You won’t be THAT rich or THAT famous (or infamous), not enough to be remembered anyway. You’ll probably be forgotten within one or two generations. So enjoy yourself a bit. Even if Rome does burn down during your absence, that’s where new empires can grow.

Monitor the UNMAP/TRIM effect on a thin provisioned SAN

Introduction

During demos I give on the effectiveness of storage efficiencies (UNMAP, ODX) in Hyper-V, I use some PowerShell code to help show this. TRIM in the virtual machine and on the Hyper-V host passes information about deleted blocks along to a thin provisioned storage array. That means that every layer can be as efficient as possible. Here’s a picture of me doing a demo to monitor the UNMAP/TRIM effect on a thin provisioned SAN.

[Image: demoing the monitoring of the UNMAP/TRIM effect on a thin provisioned SAN]

The script shows how a thin provisioned LUN on a SAN (DELL SC Series) grows in actual used space when data is created or copied inside the VMs. When data is hard deleted, TRIM/UNMAP prevents the dynamically expanding VHDX files from growing more than they need to; when a VM is shut down, the VHDX even shrinks. The same information is passed on to the storage array. So, when data is deleted, we can see the actual space used in a thin provisioned LUN on the SAN go down. That makes for a nice demo. I have some more info on the benefits and the potential issues of UNMAP if used carelessly here.
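
As a side note, the original demo does not need this, but if you want to verify inside a guest that TRIM is actually enabled before running such a demo, the standard in-box Windows commands below do the trick. The drive letter D: is just an assumption for a data volume.

#Check that delete notifications (TRIM/UNMAP) are enabled; DisableDeleteNotify = 0 means enabled.
fsutil behavior query DisableDeleteNotify

#Force a retrim pass so freed blocks are reported down the stack to the dynamically
#expanding VHDX and, ultimately, to the thin provisioned LUN on the SAN.
Optimize-Volume -DriveLetter D -ReTrim -Verbose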

Scripting options for the DELL SC Series (Compellent)

Your storage array needs to support thin provisioning and TRIM/UNMAP with Windows Server Hyper-V. If it does, all you need is the PowerShell library your storage vendor provides. For the DELL Compellent series that used to be the PowerShell Command Set (2008), which made them an early adopter of PowerShell automation in the industry. That evolved along with the array capabilities and still works today with the older SC series models. In 2015, Dell Storage introduced the Enterprise Manager API (EM-API) along with the Dell Storage PowerShell SDK, which uses the EM-API. This works via an EM Data Collector server, no longer connecting directly to the management IP of the controllers, and it is the only way to work with the newer SC series models.

It’s a powerful tool to have and allows for automation and orchestration of your storage environment when you have wrapped your head around the PowerShell commands.

That does mean that I needed to replace my original PowerShell Command Set scripts. Depending on what those scripts do this can be done easily and fast or it might require some more effort.

Monitoring UNMAP/TRIM effect on a thin provisioned SAN with PowerShell

As a short demo, let me showcase the Command Set and the DELL Storage PowerShell SDK versions of a script to monitor the UNMAP/TRIM effect on a thin provisioned SAN with PowerShell.

Command Set version

Bar the way you connect to the array, the difference is in the cmdlets. In Command Set, retrieving the storage info is done as follows:

$SanVolumeToMonitor = "MyDemoSANVolume"

#Get the size of the volume
$CompellentVolumeSize = (Get-SCVolume -Name $SanVolumeToMonitor).Size

#Get the actual disk space consumed in that volume
$CompellentVolumeRealDiskSpaceUsed = (Get-SCVolume -Name $SanVolumeToMonitor).TotalDiskSpaceConsumed
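
For completeness: a Command Set script first connected directly to the management IP of the controllers. From memory it looked roughly like the sketch below, so treat the exact cmdlet parameters as assumptions to verify against your Command Set version.

#Hypothetical Command Set connection sketch; connects straight to the controller management IP.
$SCConnection = Get-SCConnection -HostName "MySCManagementIP" -User "MyAdminName" -Password "MyPass"
#The connection can then be passed to the other cmdlets, e.g.
#Get-SCVolume -Name $SanVolumeToMonitor -Connection $SCConnection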

In the DELL Storage PowerShell SDK version it is not harder, just different from what it used to be:

$SanVolumeToMonitor = "MyDemoSANVolume"
$Volume = Get-DellScVolume -Connection $Connection -StorageCenter $StorageCenter -Name $SanVolumeToMonitor

$VolumeStats = Get-DellScVolumeStorageUsage -Connection $Connection -Instance $Volume.InstanceID

#Get the size of the volume
$CompellentVolumeSize = ($VolumeStats).ConfiguredSpace

#Get the actual disk space consumed in that volume
$CompellentVolumeRealDiskSpaceUsed = ($VolumeStats).ActiveSpace

Which gives …

[Image: script output showing the size of the LUN and the actual space used on the SAN]

I hope this gives you some inspiration to get started automating your storage provisioning and governance. On premises or in the cloud, a GUI and a click have their place, but automation is the way to go. As a bonus, the complete script is below.

#region PowerShell to keep the PoSh window on top during demos
$signature = @'
[DllImport("user32.dll")]
public static extern bool SetWindowPos(
    IntPtr hWnd,
    IntPtr hWndInsertAfter,
    int X,
    int Y,
    int cx,
    int cy,
    uint uFlags);
'@
$type = Add-Type -MemberDefinition $signature -Name SetWindowPosition -Namespace SetWindowPos -Using System.Text -PassThru

$handle = (Get-Process -id $Global:PID).MainWindowHandle 
$alwaysOnTop = New-Object -TypeName System.IntPtr -ArgumentList (-1) 
$type::SetWindowPos($handle, $alwaysOnTop, 0, 0, 0, 0, 0x0003) | Out-null
#endregion

function WriteVirtualDiskVolSize () {
    $Volume = Get-DellScVolume -Connection $Connection -StorageCenter $StorageCenter -Name $SanVolumeToMonitor
    $VolumeStats = Get-DellScVolumeStorageUsage -Connection $Connection -Instance $Volume.InstanceID
       
    #Get the size of the volume
    $CompellentVolumeSize = ($VolumeStats).ConfiguredSpace
    #Get the actual disk space consumed in that volume.
    $CompellentVolumeRealDiskSpaceUsed = ($VolumeStats).ActiveSpace

    Write-Host -Foregroundcolor Magenta "Didier Van Hoye - Microsoft MVP / Veeam Vanguard
& Dell Techcenter Rockstar"
    Write-Host -Foregroundcolor Magenta "Hyper-V, Clustering, Storage, Azure, RDMA, Networking"
    Write-Host -Foregroundcolor Magenta  "http://blog.workinghardinit.work"
    Write-Host -Foregroundcolor Magenta  "@workinghardinit"
    Write-Host -Foregroundcolor Cyan "DELLEMC Storage Center model $SCModel version" $SCVersion.version
    Write-Host -Foregroundcolor Cyan  "Dell Storage PowerShell SDK" (Get-Module DellStorage.ApiCommandSet).version
    Write-host -foregroundcolor Yellow "
 _   _  _   _  __  __     _     ____   
| | | || \ | ||  \/  |   / \   |  _ \ 
| | | ||  \| || |\/| |  / _ \  | |_) |
| |_| || |\  || |  | | / ___ \ |  __/
 \___/ |_| \_||_|  |_|/_/   \_\|_|
"
    Write-Host ""-ForegroundColor Red
    Write-Host "Size Of the LUN on SAN: $CompellentVolumeSize" -ForegroundColor Red
    Write-Host "Space Actually Used on SAN: $CompellentVolumeRealDiskSpaceUsed" -ForegroundColor Green 

    #Wait a while before you run these queries again.
    Start-Sleep -Milliseconds 1000
}

#If the Storage Center module isn't loaded, do so!
if (!(Get-Module DellStorage.ApiCommandSet)) {    
    import-module "C:\SysAdmin\Tools\DellStoragePowerShellSDK\DellStorage.ApiCommandSet.dll"
}

$DsmHostName = "MyDSMHost.domain.local"
$DsmUserName = "MyAdminName"
$DsmPwd = "MyPass"
$SCName = "MySCName"
# Convert the plain text demo password to a secure string (use Get-Credential to prompt in production)
$DsmPassword = (ConvertTo-SecureString -AsPlainText $DsmPwd -Force)

# Create the connection
$Connection = Connect-DellApiConnection -HostName $DsmHostName `
    -User $DsmUserName `
    -Password $DsmPassword

$StorageCenter = Get-DellStorageCenter -Connection $Connection -name $SCName 
$SCVersion = $StorageCenter | Select-Object Version
$SCModel = (Get-DellScController -Connection $Connection -StorageCenter $StorageCenter -InstanceName "Top Controller").model.Name.toupper()

$SanVolumeToMonitor = "MyDemoSanVolume"

#Just let the script run in a loop indefinitely.
while ($true) {
    Clear-Host
    WriteVirtualDiskVolSize
}


SFP+ and SFP28 compatibility

Introduction

25Gbps (SFP28) is en route to displacing 10Gbps (SFP+) from its leading role as the workhorse in the datacenter. That means that 10Gbps is slowly but surely becoming “the LOM option”, passing on to the role and place 1Gbps has held for many years. Where extension slots are concerned, we see 25Gbps cards rise tremendously in popularity. The same is happening on the switches, where 25-100Gbps ports are readily available. As this transition takes place and we start acquiring 25Gbps or faster gear, the question of SFP+ and SFP28 compatibility arises for anyone involved in planning it.


Who needs 25Gbps?

When I got really deep into 10Gbps about 7 years ago, I was considered a bit crazy and accused of overdelivering. That was until they saw the speed of a live migration. From Windows Server 2012 onward that was driven home even more with shared nothing live migration, storage live migration, SMB 3 Multichannel and SMB Direct.

On top of that, Storage Spaces and SOFS came onto the storage scene in the Microsoft Windows Server ecosystem. This led us to S2D and Storage Replica in Windows Server 2016 and later. The need for more bandwidth, higher throughput and low latency became ever more obvious. Microsoft has a rather extensive collection of features & capabilities that leverage SMB 3 and as such can leverage RDMA.
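
As a quick aside, and not something I covered back then: the in-box PowerShell cmdlets below show whether your nodes can actually leverage RDMA and SMB Multichannel.

#Show which NICs are RDMA capable and have RDMA enabled.
Get-NetAdapterRdma

#Show the interfaces SMB 3 sees, including their RSS and RDMA capabilities.
Get-SmbClientNetworkInterface

#While an SMB 3 workload (e.g. a live migration over SMB) runs, list the active
#multichannel connections to verify RDMA is being used.
Get-SmbMultichannelConnection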

In this time frame we also saw the strong rise of All Flash Array solutions with SSD and NVMe. Today we even see storage class memory come into the picture. All this means even bigger needs for high throughput at low latency, so the trend for ever faster Ethernet is not over yet.

What does this mean?

That means that 10Gbps is slowly but surely becoming the LOM option, passing on to the role 1Gbps has held for many years. In our extension slots we see 25-100Gbps cards rise in popularity. The same is happening on the switches, where we see 25, 50, 100Gbps or even higher. I’m not sure 50Gbps is ever going to be as popular, but 25Gbps is for sure. In any case, I am not crazy; I just know how to avoid tech debt and get as much long-term use out of hardware as possible.

When it comes to the optical components, SFP+ is commonly used for 10Gbps. This provides a path to 40Gbps and 100Gbps via QSFP. For 25Gbps we have SFP28 (1 channel or lane of 25Gbps). This gives us a path to 50Gbps (2*25Gbps, two lanes) and to 100Gbps (4*25Gbps, four lanes) via QSFP28. In the end this is a lot more economical. But let’s look at SFP+ and SFP28 compatibility now.

SFP+ and SFP28 compatibility

When it comes to SFP+ and SFP28 compatibility, we’re golden. SFP+ and SFP28 share the same form factor & are “compatible”. The moment I learned that SFP28 shares the same form factor with SFP+, I was hopeful they would only differ in speed. And indeed, that hope turned into a sigh of relief when I experimentally confirmed the following things I had read:

  1. I can plug an SFP28 module into an SFP+ port.
  2. I can plug an SFP+ module into an SFP28 port.
  3. Connectivity is established at the lowest common denominator, which is 10Gbps.
  4. The connectivity is functional, but you don’t gain the benefits SFP28 brings to the table.

Compatibility for migrations & future proofing

For a migration path that is phased over time this is great news as you don’t need to have everything in place right away from day one. I can order 25Gbps NIC in my servers now, knowing that they will work with my existing 10Gbps network. They’ll be ready to roll when I get my switches replaced 6 months or a year later. Older servers with 10Gbps SFP+ that are still in production when the new network gear arrives can keep working on new SFP28 network gear.

  • SFP+: 10Gbps.
  • SFP28: 25Gbps, but it can go up to 28, hence the name SFP28, not SFP25. Note that SFP28 can handle 25Gbps, 10Gbps and even 1Gbps.
  • QSFP28: 100Gbps that can split into 4*25Gbps or 2*50Gbps, giving you flexibility and port density.
  • 25Gbps / SFP28 is the new workhorse, delivering more bandwidth, better error control, less crosstalk and an economically sound upgrade path.

Do note that SFP+ modules will work in SFP28 ports and vice versa, but you have to be a bit careful (see the sketch after this list):

  • Fix the port speed when you’re not running at the default speed.
  • On SFP28 modules you might need to disable options such as forward error correction.
  • Make sure a 10Gbps switch is OK with 25Gbps cables; it might not be.
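
On the Windows side, pinning the NIC port speed can be done with the in-box NetAdapter cmdlets. A driver-dependent sketch follows: the adapter name, the “Speed & Duplex” display name and the display value are assumptions that vary per vendor, so list the valid values first.

#List the advanced properties and their valid values for the port in question.
Get-NetAdapterAdvancedProperty -Name "SLOT 2 Port 1" | Select-Object DisplayName, DisplayValue, ValidDisplayValues

#Pin the port to a fixed speed; the display name and value are driver specific, adjust as needed.
Set-NetAdapterAdvancedProperty -Name "SLOT 2 Port 1" -DisplayName "Speed & Duplex" -DisplayValue "10 Gbps Full Duplex"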

If you have all your gear from a vendor specializing in RDMA technology, like Mellanox, all of this is detected and taken care of for you. When mixing vendors and 3rd party cables, pay extra attention to verifying that all will be well.

SFP+ and SFP28 compatibility is also important for future proofing upgrade paths. When you buy and introduce new network gear it is nice to know what will work with what you already have and what will work with what you might or will have in the future. Some people will get all new network switches in at once while others might have to wait for a while before new servers with SFP28 arrive. Older servers might be around and will not force you to keep older switches around just for them.

SFP28 / QSFP28 provides flexibility

Compatibility is also important for purchase decisions, as you don’t need to match 25Gbps NIC ports to 25Gbps switch ports. You can use QSFP28 cables and split them into 4 * 25Gbps SFP28.


QSFP28

The same goes for 50Gbps: a 100Gbps QSFP28 port split into 2 * 50Gbps.


This means you can have switch port density and future proofing if you so desire. Some vendors offer modular switches where you can mix port types (e.g. the Dell EMC Networking S6100-ON).

Conclusion

More bandwidth at less cost is a no-brainer. It also makes your bean counters happy, as this is achieved with fewer switches and cables. That also translates to less space in the datacenter, less power consumption and less cooling. And the less material you have, the less it costs in operational expenses (management and maintenance). This is only partially offset by our ever-growing need for more bandwidth. As converged networking matures and becomes better, that also helps with the cost, even where economies of scale don’t matter that much. The transition to 25Gbps and higher is facilitated by SFP+ and SFP28 compatibility, and that is good news for all involved.