Monitor the UNMAP/TRIM effect on a thin provisioned SAN

Introduction

During the demos I give on the effectiveness of storage efficiencies (UNMAP, ODX) in Hyper-V, I use some PowerShell code to help show this. TRIM in the virtual machine and on the Hyper-V host passes information about deleted blocks along to a thin provisioned storage array. That means that every layer can be as efficient as possible. Here's a picture of me doing a demo to monitor the UNMAP/TRIM effect on a thin provisioned SAN.

[Image: live demo of monitoring the UNMAP/TRIM effect on a thin provisioned SAN]

The script shows how a thin provisioned LUN on a SAN (DELL SC Series) grows in actual used space when data is created or copied inside VMs. When data is hard deleted, TRIM/UNMAP prevents dynamically expanding VHDX files from growing more than they need to. When a VM is shut down, the VHDX even shrinks. The same information is passed on to the storage array, so when data is deleted we can see the actual space used in a thin provisioned LUN on the SAN go down. That makes for a nice demo. I have some more info on the benefits and the potential issues of UNMAP if used carelessly here.
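If you want to force the effect during a demo instead of waiting for Windows to get around to it, you can trigger the TRIM pass inside the guest yourself. A minimal sketch, assuming a Windows guest with a data volume on D: (the drive letter is a placeholder):

#Check that delete notifications (UNMAP/TRIM) are enabled in the guest (0 = enabled)
fsutil behavior query DisableDeleteNotify

#After hard deleting data, force a retrim so the freed blocks are passed
#down to the VHDX and on to the thin provisioned LUN
Optimize-Volume -DriveLetter D -ReTrim -Verbose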

Scripting options for the DELL SC Series (Compellent)

Your storage array needs to support thin provisioning and TRIM/UNMAP with Windows Server Hyper-V. If so, all you need is the PowerShell library your storage vendor provides. For the DELL Compellent series that used to be the PowerShell Command Set (2008), which made them an early adopter of PowerShell automation in the industry. That evolved with the array capabilities and still works today with the older SC series models. In 2015, Dell Storage introduced the Enterprise Manager API (EM-API) along with the Dell Storage PowerShell SDK, which uses the EM-API. This works via an EM Data Collector server and no longer connects directly to the management IP of the controllers. It is the only way to work with the newer SC series models.
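To give you an idea, connecting via the Data Collector looks like the snippet below. It uses the same cmdlets as the full script at the end of this post, with placeholder values for the host name and credentials:

#Load the Dell Storage PowerShell SDK and connect via the EM Data Collector
Import-Module "C:\SysAdmin\Tools\DellStoragePowerShellSDK\DellStorage.ApiCommandSet.dll"
$Connection = Connect-DellApiConnection -HostName "MyDSMHost.domain.local" `
    -User "MyAdminName" `
    -Password (Read-Host -AsSecureString -Prompt "DSM password")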

It’s a powerful tool to have and allows for automation and orchestration of your storage environment when you have wrapped your head around the PowerShell commands.

That does mean that I needed to replace my original PowerShell Command Set scripts. Depending on what those scripts do, this can be quick and easy, or it might require some more effort.

Monitoring UNMAP/TRIM effect on a thin provisioned SAN with PowerShell

As a short demo, let me showcase the Command Set and the DELL Storage PowerShell SDK versions of a script that monitors the UNMAP/TRIM effect on a thin provisioned SAN with PowerShell.

Command Set version

Bar the way you connect to the array, the difference is in the cmdlets. In Command Set, retrieving the storage info is done as follows:

$SanVolumeToMonitor = "MyDemoSANVolume"

#Get the size of the volume
$CompellentVolumeSize = (Get-SCVolume -Name $SanVolumeToMonitor).Size

#Get the actual disk space consumed in that volume
$CompellentVolumeRealDiskSpaceUsed = (Get-SCVolume -Name $SanVolumeToMonitor).TotalDiskSpaceConsumed
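Note that the snippet above assumes you have already connected to the array. In Command Set that was a direct connection to the management IP of the controllers; a rough sketch from memory, so treat the exact cmdlet parameters as an assumption:

#Connect directly to the Storage Center management IP (legacy Command Set)
$SCConnection = Get-SCConnection -HostName "MySC.domain.local" -User "MyAdminName" -Password "MyPass"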

In the DELL Storage PowerShell SDK version it is not harder, just different from what it used to be.

$SanVolumeToMonitor = "MyDemoSANVolume"
$Volume = Get-DellScVolume -StorageCenter $StorageCenter -Name $SanVolumeToMonitor

$VolumeStats = Get-DellScVolumeStorageUsage -Instance $Volume.InstanceID

#Get the size of the volume
$CompellentVolumeSize = ($VolumeStats).ConfiguredSpace

#Get the actual disk space consumed in that volume
$CompellentVolumeRealDiskSpaceUsed = ($VolumeStats).ActiveSpace

Which gives …

[Image: script output showing the size of the LUN and the actual space used on the SAN]

I hope this gave you some inspiration to get started automating your storage provisioning and governance. On premises or in the cloud, a GUI and a click have their place, but automation is the way to go. As a bonus, the complete script is below.

#region PowerShell to keep the PoSh window on top during demos
$signature = @'
[DllImport("user32.dll")] 
public static extern bool SetWindowPos( 
    IntPtr hWnd, 
    IntPtr hWndInsertAfter, 
    int X, 
    int Y, 
    int cx, 
    int cy, 
    uint uFlags); 
'@
$type = Add-Type -MemberDefinition $signature -Name SetWindowPosition -Namespace SetWindowPos -Using System.Text -PassThru

$handle = (Get-Process -id $Global:PID).MainWindowHandle 
$alwaysOnTop = New-Object -TypeName System.IntPtr -ArgumentList (-1) 
$type::SetWindowPos($handle, $alwaysOnTop, 0, 0, 0, 0, 0x0003) | Out-null
#endregion

function WriteVirtualDiskVolSize () {
    $Volume = Get-DellScVolume -Connection $Connection -StorageCenter $StorageCenter -Name $SanVolumeToMonitor
    $VolumeStats = Get-DellScVolumeStorageUsage -Connection $Connection -Instance $Volume.InstanceID
       
    #Get the size of the volume
    $CompellentVolumeSize = ($VolumeStats).ConfiguredSpace
    #Get the actual disk space consumed in that volume.
    $CompellentVolumeRealDiskSpaceUsed = ($VolumeStats).ActiveSpace

    Write-Host -Foregroundcolor Magenta "Didier Van Hoye - Microsoft MVP / Veeam Vanguard
& Dell Techcenter Rockstar"
    Write-Host -Foregroundcolor Magenta "Hyper-V, Clustering, Storage, Azure, RDMA, Networking"
    Write-Host -Foregroundcolor Magenta  "http:/blog.workinghardinit.work"
    Write-Host -Foregroundcolor Magenta  "@workinghardinit"
    Write-Host -Foregroundcolor Cyan "DELLEMC Storage Center model $SCModel version" $SCVersion.version
    Write-Host -Foregroundcolor Cyan  "Dell Storage PowerShell SDK" (Get-Module DellStorage.ApiCommandSet).version
    Write-host -foregroundcolor Yellow "
 _   _  _   _  __  __     _     ____   
| | | || \ | ||  \/  |   / \   |  _ \ 
| | | ||  \| || |\/| |  / _ \  | |_) |
| |_| || |\  || |  | | / ___ \ |  __/
 \___/ |_| \_||_|  |_|/_/   \_\|_|
"
    Write-Host ""-ForegroundColor Red
    Write-Host "Size Of the LUN on SAN: $CompellentVolumeSize" -ForegroundColor Red
    Write-Host "Space Actually Used on SAN: $CompellentVolumeRealDiskSpaceUsed" -ForegroundColor Green 

    #Wait a while before you run these queries again.
    Start-Sleep -Milliseconds 1000
}

#If the Storage Center module isn't loaded, do so!
if (!(Get-Module DellStorage.ApiCommandSet)) {    
    import-module "C:\SysAdmin\Tools\DellStoragePowerShellSDK\DellStorage.ApiCommandSet.dll"
}

$DsmHostName = "MyDSMHost.domain.local"
$DsmUserName = "MyAdminName"
$DsmPwd = "MyPass"
$SCName = "MySCName"
# Convert the plain text password to a secure string
$DsmPassword = (ConvertTo-SecureString -AsPlainText $DsmPwd -Force)
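# In a real deployment, avoid hard coding the password; prompting is safer.
# A hypothetical alternative using Get-Credential:
# $DsmCredential = Get-Credential -UserName $DsmUserName -Message "DSM password"
# $DsmPassword = $DsmCredential.Password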

# Create the connection
$Connection = Connect-DellApiConnection -HostName $DsmHostName `
    -User $DsmUserName `
    -Password $DsmPassword

$StorageCenter = Get-DellStorageCenter -Connection $Connection -name $SCName 
$SCVersion = $StorageCenter | Select-Object Version
$SCModel = (Get-DellScController -Connection $Connection -StorageCenter $StorageCenter -InstanceName "Top Controller").model.Name.toupper()

$SanVolumeToMonitor = "MyDemoSanVolume"

#Just let the script run in a loop indefinitely.
while ($true) {
    Clear-Host
    WriteVirtualDiskVolSize
}

 

SFP+ and SFP28 compatibility

Introduction

25Gbps (SFP28) is en route to displacing 10Gbps (SFP+) from its leading role as the workhorse in the datacenter. That means 10Gbps is slowly but surely becoming "the LOM option", passing on to the role and place 1Gbps has held for many years. Where extension slots are concerned, we see 25Gbps cards rise tremendously in popularity. The same is happening on the switches, where 25-100Gbps ports are readily available. As this transition takes place and we start working on acquiring 25Gbps or faster gear, the question of SFP+ and SFP28 compatibility arises for anyone involved in planning it.


Who needs 25Gbps?

When I got really deep into 10Gbps about 7 years ago, I was considered a bit crazy and accused of overdelivering. That was until people saw the speed of a live migration. From Windows Server 2012 onwards that was driven home even more with shared nothing live migration, storage live migration, SMB 3 Multichannel and SMB Direct.

On top of that, Storage Spaces and SOFS came onto the storage scene in the Microsoft Windows Server ecosystem. This led us to S2D and Storage Replica in Windows Server 2016 and later. It meant that the need for more bandwidth, higher throughput and low latency became ever more obvious. Microsoft has a rather extensive collection of features & capabilities that leverage SMB 3 and as such can leverage RDMA.

In this time frame we also saw the strong rise of All Flash Array solutions with SSD and NVMe. Today we even see storage class memory come into the picture. All this means even bigger needs for high throughput at low latency, so the trend for ever faster Ethernet is not over yet.

What does this mean?

That means that 10Gbps is slowly but surely becoming the LOM option and is passing on to the role 1Gbps has held for many years. In our extension slots we see 25-100Gbps cards rise in popularity. The same is happening on the switches where we see 25, 50, 100Gbps or even higher. I’m not sure if 50Gbps is ever going to be as popular but 25Gbps is for sure. In any case I am not crazy but I do know how to avoid tech debt and get as much long term use out of hardware as possible.

When it comes to the optic components, SFP+ is commonly used for 10Gbps. This provides a path to 40Gbps and 100Gbps via QSFP. For 25Gbps we have SFP28 (1 channel or lane of 25Gbps). This gives us a path to 50Gbps (2*25Gbps, two lanes) and to 100Gbps (4*25Gbps, four lanes) via QSFP28. In the end this is a lot more economical. But let's look at SFP+ and SFP28 compatibility now.

SFP+ and SFP28 compatibility

When it comes to SFP+ and SFP28 compatibility we're golden. SFP+ and SFP28 share the same form factor and are "compatible". The moment I learned that SFP28 shares the same form factor as SFP+, I was hopeful that the two would only differ in speed. And indeed, that hope became a sigh of relief when I read, and experimentally demonstrated to myself, the following:

  1. I can plug an SFP28 module into an SFP+ port
  2. I can plug an SFP+ module into an SFP28 port
  3. Connectivity is established at the lowest common denominator, which is 10Gbps (see the check right after this list)
  4. The connectivity is functional, but you don't gain the benefits SFP28 brings to the table.
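Verifying that lowest common denominator from the OS is trivial. On a Windows host, a quick sketch with Get-NetAdapter shows the negotiated speed (the adapter name is a placeholder):

#A SFP28 NIC plugged into an SFP+ switch port should report 10 Gbps here, not 25 Gbps
Get-NetAdapter -Name "SLOT 3 Port 1" | Select-Object Name, InterfaceDescription, LinkSpeed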

Compatibility for migrations & future proofing

For a migration path that is phased over time this is great news, as you don't need to have everything in place right away from day one. I can order 25Gbps NICs in my servers now, knowing that they will work with my existing 10Gbps network. They'll be ready to roll when I get my switches replaced six months or a year later. Older servers with 10Gbps SFP+ that are still in production when the new network gear arrives can keep working on the new SFP28 network gear.

  • SFP+: 10Gbps
  • SFP28: 25Gbps, but the standard can go up to 28Gbps, hence the name SFP28 rather than SFP25. Note that SFP28 can handle 25Gbps, 10Gbps and even 1Gbps.
  • QSFP28: 100Gbps, which can break out into 4*25Gbps or 2*50Gbps, giving you flexibility and port density.
  • 25Gbps / SFP28 is the new workhorse, delivering more bandwidth, better error control, less crosstalk and an economically sound upgrade path.

Do note that SFP+ modules will work in SFP28 ports and vice versa but you have to be a bit careful:

  • Fix the port speed when you're not running at the default speed
  • On SFP28 modules you might need to disable options such as forward error correction
  • Make sure a 10Gbps switch is OK with 25Gbps cables; it might not be

If you have all your gear from a vendor specializing in RDMA technology, like Mellanox, all of this is detected and taken care of for you. Between vendors and with 3rd party cables, pay extra attention to verifying that all will be well.

SFP+ and SFP28 compatibility is also important for future proofing upgrade paths. When you buy and introduce new network gear it is nice to know what will work with what you already have and what will work with what you might or will have in the future. Some people will get all new network switches in at once while others might have to wait for a while before new servers with SFP28 arrive. Older servers might be around and will not force you to keep older switches around just for them.

SFP28 / QSFP28 provides flexibility

Compatibility is also important for purchase decisions, as you don't need to match 25Gbps NIC ports to 25Gbps switch ports. You can use QSFP28 cables and split them into 4 * 25Gbps SFP28.

[Image: a QSFP28 port breaking out into 4 * 25Gbps SFP28]

The same goes for 50Gbps, where a 100Gbps QSFP28 port splits into 2 * 50Gbps.


This means you can have switch port density and future proofing if you so desire. Some vendors offer modular switches where you can mix port types (e.g. Dell EMC Networking S6100-ON).

Conclusion

More bandwidth at less cost is a no brainer. It also makes your bean counters happy, as this is achieved with fewer switches and cables. That also translates into less space in the datacenter, less power consumption and less cooling. And the less material you have, the less it costs in operational expenses (management and maintenance). This is only partially offset by our ever-growing need for more bandwidth. As converged networking matures and becomes better, that also helps with the cost, even where economies of scale don't matter that much. The transition to 25Gbps and higher is facilitated by SFP+ and SFP28 compatibility, and that is good news for all involved.

Talking to business & technical audiences

Introduction

In my professional IT life I have been a developer and an IT Pro. I have worked on specific parts of solutions or owned the entire stack, top to bottom. No matter what the environment is like, the one "truth" is that both business management and technologists need to trust and respect each other. The solution is always a compromise between the needs, budgets and politics within an environment. This is the context I often talk about. Without context you're blindly doing "stuff" on a playing field you do not see, let alone understand. No matter how much money, resources, cool tech and superb PMs you have, the result will be suboptimal, often mediocre, and always too expensive, taking too long to deliver and even longer to fix. Now, talking to business & technical audiences about IT requires the right content for the public you talk to.

Talking to business & technical audiences about IT

I have nothing but the greatest respect for good managers and good sales people, even as a techie. My problem with them is simply that there are way too few of them around! That's a pity, as we need them to deliver great results and address needs. It also makes things easier. As a technologist I have talked to C level executives and boards of directors to get funding for key projects. There was even that special occasion where I had to go and defend a major project to get the funding after the IT manager had been thrown out by the board during the previous meeting. That was fun! One full hour in front of the board, convincing them of the value. Normally you don't spend that long in a board meeting to finally succeed, and I had to get on a later flight to a conference because of it. They actually paid for my flight change. I was having a beer with my fellow MVPs in Vienna late that spring evening when I received a couple of messages from some of our C level execs congratulating me. Times when CxOs and IT are collaborating and on the same page are the best. You can even overcome the odds at such moments.

[Image courtesy of @rawpixel at https://unsplash.com/photos/phDXV_uhx_g]

Know your audience

But such heroic moments are seldom. It's all about preparation, a bit of evangelizing and continuous communication about value. The general consensus is that when communicating with diverse audiences on the subject of IT, you must recognize the differences and adapt to them. Good sales people know this. Most others struggle with it. But to get things going we need everyone on board. Technical people care about the why, the what and the how. Managerial types are more focused on the what, the why and the budget. When both have some context and understanding of each other's needs, that helps tremendously in terms of effectiveness, because you can then focus on telling each what they need to know and nothing more.

There are prerequisites

This comes with a warning, however. Communication between C levels, middle management, technical architects, analysts and implementing technologists must be functional. They should understand the context and the dependencies, and you have to make sure those are dealt with. If not, giving them only the information they need isn't going to work. For that to happen, the right people in the right places must have the capabilities, budget and mandate to achieve this. Trust is a factor in all of this. When that is the case, the real challenge, making sure the communication lines are open, effective and efficient, is normally taken care of. That makes it possible to talk constructively with all parties.

In many organizations that struggle with IT, this is a huge challenge. If the quality of the people in those roles isn't up to the required level, that is the key problem, more so than how you talk to business & technical audiences.

The lure of having a Ransomware Fund

Introduction

What is the lure of having a ransomware fund all about? It's the idea that just paying is the best way to deal with a ransomware incident. While preventing as many ransomware attacks as possible is great, prevention is not something that will be 100% effective. Detecting an incident as early as possible is key to minimizing the effects. Even in the event of successful and early detection, some data will have been compromised (encrypted). The nature and function of that data will determine the blast radius and the fallout. To recover, the attack needs to be stopped by finding and eliminating the points of infection. Next to that, the proven ability to restore data, and to do so fast, is a key capability when it comes to recovering from a ransomware attack. If you can't, you'll either need to eat the loss or try to pay up.

Dealing with Ransomware step by step

  • Prevention is not 100% effective. Don’t bank on it.
  • Early detection
  • Swift & adequate response
  • Quarantine, wipe (nuke from orbit) of contaminated systems & data
  • See if a free decryption solution is available via the security community or your police service's cybercrime department
  • Restore your data. You must have multiple options and you must have implemented the 3-2-1 rule. But beware: your off-site, air gapped copy cannot be too old. You need fairly recent backups in there to have a decent RPO that is meaningful to the business (see the sketch right after this list).
  • Bring data, systems and services back into production.
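To make the "not too old" requirement concrete, here is a minimal, hypothetical PowerShell sketch that checks the age of the newest file in a backup repository against a target RPO. The repository path and the 24 hour value are placeholder assumptions:

# Placeholder path to the off-site / air gapped backup repository
$BackupRepository = "\\AirGappedTarget\Backups"
# Assumed RPO target in hours
$MaxAgeHours = 24

# Find the newest backup file and warn when it is older than the RPO allows
$Newest = Get-ChildItem -Path $BackupRepository -Recurse -File |
    Sort-Object LastWriteTime -Descending | Select-Object -First 1
if ((Get-Date) - $Newest.LastWriteTime -gt (New-TimeSpan -Hours $MaxAgeHours)) {
    Write-Warning "Newest backup ($($Newest.FullName)) is older than $MaxAgeHours hours."
}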

Now make sure you can do this for end user files and server data (images, VMs, databases, configuration files, backups), regardless of where it lives (on-premises, private, hybrid & public cloud) and what delivery model it comes in (physical, virtual, IaaS, PaaS, SaaS, serverless).

The lure of having a Ransomware Fund (Isn’t it cheaper to pay?)

Now some bean counter might come up with the idea that paying is cheaper (and easier) than prevention, let alone backup & restore capabilities.


Some would even consider it a "cost of doing business". This is the lure of having a ransomware fund. Ouch. I know many parts of the world are a lot less safe than mine, but this is a path down a slippery slope so dangerous you will fall sooner or later. Let's look at why that is.

[Image: Petya ransomware]

First, let's not forget about the downtime caused, no matter how you resolve it. So prevention and early detection are key. You might not even survive as a business, even if you pay and get your data back.

Secondly, while I love the idea of prevention and early detection, this doesn't mean you can get rid of your backup and restore capabilities. Prevention is a mitigation strategy; it doesn't eradicate the issue. Early detection minimizes the immediate and secondary damage in many cases, but not in all cases, and it is not perfect either.

Third, when you pay your ransom, how sure are you that you'll get your decryption key and be able to access your data? It seems that happens in only 50% of the cases. Now, some ransomware "businesses" have better customer service than many commercial companies and governments. But that doesn't mean all of them do, and by definition they are not honest people. Unless you consider ransomware "Encryption As A Service" that helps you with GDPR. I think not. You might think that a smart ransomware player delivers so as not to ruin future revenue streams by acquiring a bad reputation. Probably true, but they too can make mistakes, you can make mistakes, and you can become roadkill of vandals, or of criminals who desire, or are hired, to wreak havoc on a certain industry.

Finally, you might end up being a repeat victim, as you have shown the willingness & ability to pay. Don't forget that ransomware is not like mobster protection money: it will not protect you from others, or from the same ones doing it again.

Conclusion

Banking on having an emergency stash of Bitcoin (a ransomware fund) just to pay off ransomware isn't your best option. It might be a last resort when faced with the alternative of bankruptcy, but even then it remains a costly and risky gamble.

I know that for some people in IT, backups seem outdated, a relic of an era gone by, a solution to yesterday's problem. I kid you not. Well, I advise you to think again and act upon what you conclude.