When using file shares as backup targets you should leverage continuous available SMB 3 file shares


When using file shares as backup targets you should leverage Continuously Available SMB 3 file shares. For now, at least. A while back Anton Gostev wrote a very interesting piece in his “The Word from Gostev”. It was about an issue they saw with people using SMB 3 file shares as backup targets with Veeam Backup & Replication. To some it was a reason to cry wolf. But it’s a little-known issue that can and, as such, will occur. You need to be aware of it to make good decisions and give good advice.

I’m in the business of building rock solid solutions that are highly available to continuously available. This means I’m always looking into the benefits and drawbacks of design choices. By that I mean I study, test and verify them as well. I don’t do “Paper Proofs of Concept”. Those are borderline fraud.

So, what’s going on and what can you do to mitigate the risk or avoid it all together?

Setting the scenario

Your backup software (in our case Veeam Backup & Replication) running on Windows leverages an SMB 3 file share as a backup target. This could be a Windows Server file share but it doesn’t have to be. It could be a 3rd party appliance or storage array.

When using file shares as backup targets you should leverage Continuous Available SMB 3 file shares.

The SMB client

The client is the SMB 3 client Microsoft delivers in the OS (the version depends on the OS version). This client is under the control of Microsoft. Let’s face it, the source in these scenarios is a Hyper-V host/cluster or a Windows SMB 3 file share, clustered or not.

The SMB server

In regards to the target, i.e. the SMB server, you have a couple of possibilities: Microsoft or 3rd party.

It might be a third-party SMB 3 implementation on Linux or an appliance. You might not even know what is used under the hood as the OS and 3rd party SMB 3 solution. It could be a storage vendor’s native SMB 3 implementation on their storage array or a simple commodity NAS that bought a 3rd party solution to leverage. It might be highly available, but in many (most?) cases it is not. It’s hard to know whether the 3rd party implements / leverages the full capabilities of the SMB 3 stack as Microsoft does. You might not know whether there are any bugs in there either.

You get the picture. If you bank on appliances, find out and test it (trust but verify). But let’s assume its capabilities are on par with what Windows offers, which means the subject being discussed applies to both 3rd party offerings and Windows Server.
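A quick “trust but verify” step from the Windows side is to check which SMB dialect your client actually negotiated with the target. This is a minimal sketch using the in-box SMB cmdlets (Windows Server 2012 or later); the server and share names are simply whatever you happen to be connected to:

```powershell
# List active SMB connections and the dialect negotiated per server.
# A dialect below 3.0 means none of the SMB 3 capabilities apply.
Get-SmbConnection |
    Select-Object ServerName, ShareName, Dialect |
    Sort-Object ServerName
```

If a 3rd party target negotiates SMB 2.x here, the rest of this discussion is moot: you have a bigger issue to address first.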

When the target is Windows Server we are talking about SMB 3 File Shares that are either Continuous Available or not. For backup targets General Purpose File Shares will do. You could even opt to leverage SOFS (S2D for example). In this case you know what’s implemented in what version and you get bug fixes from MSFT.

When you have continuously available (CA) SMB 3 shares you should be able to sleep soundly. SMB 3 has you covered. The risk we are discussing is related to non-CA SMB 3 file shares.
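For Windows Server, a minimal sketch of creating and verifying such a share looks like this. The share name, path and account are examples; true continuous availability also requires the share to be hosted on a clustered file server role:

```powershell
# Create a share with continuous availability for use as a backup target.
# -ContinuouslyAvailable only delivers real protection on a clustered file server role.
New-SmbShare -Name "VeeamBackups" -Path "C:\ClusterStorage\Volume1\Backups" `
    -ContinuouslyAvailable $true -FullAccess "DOMAIN\VeeamServiceAccount"

# Verify which existing shares are continuously available.
Get-SmbShare | Select-Object Name, Path, ContinuouslyAvailable
```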

What could go wrong?

Let’s walk through this. When your backup software writes to an SMB 3 share it leverages the SMB 3 client & server in the SMB 3 stack. Unlike when Veeam uses its own data mover, all the cool data persistence stuff is handled by Windows transparently. The backup software literally hands off the job to Windows. Which is why you can also leverage SMB Multichannel and SMB Direct with your backups if you so desire. Read Veeam Backup & Replication leverages SMB Multichannel and Veeam Backup & Replication Preferred Subnet & SMB Multichannel for more on this.

If you are writing to a non-CA SMB 3 share your backup software receives the message that the data has been written. What this actually means is that the data is cached in the SMB client’s “queue” of data to write, but it might not have been written to the storage yet.

For short interruptions this is survivable, and for Office work and the like this works well and delivers fast performance. If the connection is interrupted or the share is unavailable, the queue keeps the data in memory for a while. So, if the connection is restored, the data can still be written. The SMB 3 client is smart.

However, this has its limits. The data cache in the queue doesn’t exist eternally. If the connectivity loss or file share unavailability takes too long, the data in the SMB 3 client cache is lost. But it was not written to storage! To add a little insult to injury, the SMB client sends back “we’re good” even when the share has been unreachable for a while.

For backups this isn’t optimal. Actually, the alarm bells should start ringing when it is about backups. Your backup software got a message that the data has been written and doesn’t know any better. But it is not on the backup target. This means the backup software will run into issues with corrupted backups sooner or later (next backup, restores, synthetic full backups, merges, whatever comes first).

Why did they make it this way?

This is OK default behavior. It works just fine for Office files and most knowledge worker client software that have temp files, auto recovery, and all such lovely capabilities, where work is mostly individual and interactive. Those applications are resilient to this by nature. Mind you, all my SMB 3 file share deployments are clustered and highly available where appropriate. By “appropriate” I mean when we don’t have offline caching for those shares as a requirement, as those don’t mix well (https://blogs.technet.microsoft.com/filecab/2016/03/15/offline-files-and-continuous-availability-the-monstrous-union-you-should-not-consecrate/). But when you know what you’re doing it rocks. I can actually fail over my file server roles all day long for patching, maintenance & fun when the clients talk SMB 3. Oh, and it was a joy to move that data to new SANs under the hood. More on that perhaps in another post. But I digress.

You need adequate storage in all use cases

This is a no brainer. Nothing will save you if the target storage isn’t up to the task. Not the Veeam data mover, nor SMB 3 shares with continuous availability. Let’s be very clear about this. Even at the cost-effective side of the equation the storage has to be of sufficiently decent quality to prevent data loss. That means decent controllers with battery-backed cache as a safeguard, etc. Whether that’s a SAN or a “simple” RAID controller or pass-through HBAs for Storage Spaces doesn’t matter. You have to have it. Putting your data on SATA drives without any safeguard is a sure way of risking data loss. That’s as simple as it gets. You don’t do that, unless you don’t care. And if you didn’t care, you would not be reading this!

Can this be fixed?

Well, as a non-SMB 3 developer, I would say we need an option that allows the SMB 3 client to be configured not to report success until the data has effectively been written on the target, or at least has landed somewhere on quality, cache-protected storage.

This option does not exist today. I do not work for Microsoft but I know some people there and I’m pretty sure they want to fix it. I’m just not sure how big a priority it is at the moment. For me it’s important that when a backup application goes to a non-continuously available file share, it can request that the data not be cached and the SMB server says “OK, got it, I will behave accordingly”. Now, the details of the implementation will differ, but you get the message.

I would like to make the case that it should be a configurable option. It is not needed for all scenarios and it might (will) have an impact on performance. How big that would be I have no clue. I’m just a blogger who does IT as a job. I’m not a principal PM at Microsoft or so.

If you absolutely want to make sure, use clustered continuously available file shares. Works like a charm. Read the blog post Continuous available general purpose file shares & ReFSv3 provide high available backup targets; there is even one of my not so professional videos showcasing this.

It’s also important not to panic. Most of you might never even have heard of or experienced this. But depending on the use case and the quality of the network and processes, you might. In a backup scenario this is not something that makes for a happy day.

The cry wolf crowd

I’ll be blunt. WARNING. Take a hike if you have a smug “Windoze sucks” attitude. If you want to deal dope you shouldn’t be smoking too much of your own stuff, but primarily know it inside out. NFS in all its varied implementations has potential issues as well. So, I’d also do my due diligence with any solution you recommend. Trust but verify, remember?! Actually, an example of one such issue was given for an appliance with NFS by Veeam. Guess what, everyone has issues. Choose your poison, drink it and let others choose theirs. Condescending remarks just make you look bad every time. And guess what, that impression tends to last. Now, on the positive side, I hear that caching can be disabled on modern NFS client implementations. So, the potential issue is known and is being addressed there as well.


Don’t panic. I just discussed a potential issue that can occur and that you should be aware of when deciding on a backup target. If you have rock solid networking and great server management processes you can go far without issues, but that’s not 100% failproof. As I’m in the business of building the best possible solutions, it’s something you need to be aware of.

But know that these issues can occur, when and why, so you can manage the risk optimally. Making Windows Server SMB 3 file shares continuously available will protect against this effectively. It does require failover clustering. But at least now you know why I say that when using file shares as backup targets you should leverage continuously available SMB 3 file shares.

When you buy appliances or 3rd party SMB 3 solutions, where this issue also exists, be extra diligent even with highly available shares. Make sure it works as it should!

I hope Microsoft resolves this issue as soon as possible. I’m sure they want to. They want their products to be the best and fix any possible concerns you might have.

Correcting the permissions on the folder with VHDS files & checkpoints for host level Hyper-V guest cluster backups


It’s no secret that while guest clustering with VHD Sets works very well, we’ve had some struggles in regards to host level backups. Right now I leverage Veeam Agent for Windows (VAW) to do in-guest backups. The most recent versions of VAW support Windows Failover Clustering. I’d love to leverage host level backups but I was struggling to make this reliable for quite a while. As it turned out recently, there are some virtual machine permission issues involved we need to fix. Both Microsoft and Veeam have published guidance on this in a KB article. We automated correcting the permissions on the folder with VHDS files & checkpoints for host level Hyper-V guest cluster backups.

The KB articles

Early August, Microsoft published a KB article with all the tips for when things fail: Errors when backing up VMs that belong to a guest cluster in Windows. Veeam also recapitulated the needed conditions and settings to leverage guest clustering and perform host level backups. The Veeam article is Backing up Hyper-V guest cluster based on VHD set. Read these articles carefully and make sure all you need to do has been done.

For some reason another prerequisite is not mentioned in these articles. It is however discussed in ConfigStoreRootPath cluster parameter is not defined and here: https://docs.microsoft.com/en-us/powershell/module/hyper-v/set-vmhostcluster?view=win10-ps. You will need to set this to create the proper Hyper-V collections needed for recovery checkpoints on VHD Sets. It is a little-known setting with very little documentation.
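Setting it boils down to something like the sketch below; the cluster name and CSV path are examples, adjust them to your environment:

```powershell
# Point the Hyper-V collections configuration store to a path
# that every node in the cluster can reach, such as a CSV folder.
Set-VMHostCluster -ClusterName "LAB-CLUSTER" `
    -ConfigStoreRootPath "C:\ClusterStorage\Volume1\Hyper-V\ConfigStoreRoot"

# Verify the setting took effect.
Get-VMHostCluster -ClusterName "LAB-CLUSTER"
```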

But the big news here is fixing a permissions related issue!

The latest addition to the list of attention points is a permission issue. These permissions are not correct by default for the guest cluster VMs’ shared files. This leads to a hard to pinpoint error.

Error Event 19100 Hyper-V-VMMS 19100 ‘BackupVM’ background disk merge failed to complete: General access denied error (0x80070005). To fix this issue, the folder that holds the VHDS files and their snapshot files must be modified to give the VMMS process additional permissions. To do this, follow these steps for correcting the permissions on the folder with VHDS files & checkpoints for host level Hyper-V guest cluster backup.

Determine the GUIDS of all VMs that use the folder. To do this, start PowerShell as administrator, and then run the following command:

get-vm | fl name, id
Output example:
Name : BackupVM
Id : d3599536-222a-4d6e-bb10-a6019c3f2b9b

Name : BackupVM2
Id : a0af7903-94b4-4a2c-b3b3-16050d5f80f2

For each VM GUID, assign the VMMS process full control by running the following command:
icacls <Folder with VHDS> /grant "NT VIRTUAL MACHINE\<VM GUID>":(OI)F

icacls "c:\ClusterStorage\Volume1\SharedClusterDisk" /grant "NT VIRTUAL MACHINE\a0af7903-94b4-4a2c-b3b3-16050d5f80f2":(OI)F
icacls "c:\ClusterStorage\Volume1\SharedClusterDisk" /grant "NT VIRTUAL MACHINE\d3599536-222a-4d6e-bb10-a6019c3f2b9b":(OI)F

My little PowerShell script

The above is tedious manual labor with a lot of copy-pasting. It is time consuming and error prone at best, and with larger guest clusters the probability of mistakes increases. To fix this, we wrote a PowerShell script to handle it for us.

#Didier Van Hoye
#Twitter: @WorkingHardInIT
#Blog: https://blog.Workinghardinit.work
#Correct shared VHD Set disk permissions for all nodes in the guest cluster

$GuestCluster = "DemoGuestCluster"
$HostCluster = "LAB-CLUSTER"

$PathToGuestClusterSharedDisks = "C:\ClusterStorage\NTFS-03\GuestClustersSharedDisks"

$GuestClusterNodes = Get-ClusterNode -Cluster $GuestCluster

ForEach ($GuestClusterNode in $GuestClusterNodes) {
    #Passing the cluster name to -ComputerName only works in W2K16 and up.
    #As this is about VHDS you need to be running 2016, so no worries here.
    $GuestClusterNodeGuid = (Get-VM -Name $GuestClusterNode.Name -ComputerName $HostCluster).Id

    Write-Host $GuestClusterNodeGuid "belongs to" $GuestClusterNode.Name

    #Build the icacls argument string: "<path>" /grant "NT VIRTUAL MACHINE\<GUID>":(OI)F
    $IcalsExecute = """$PathToGuestClusterSharedDisks""" + " /grant " + """NT VIRTUAL MACHINE\" + $GuestClusterNodeGuid + """:(OI)F"
    Write-Host "Executing " $IcalsExecute
    CMD.EXE /C "icacls $IcalsExecute"
}


Below is an example of the output of this script. It provides some feedback on what is happening.

Correcting the permissions on the folder with VHDS files & checkpoints for host level Hyper-V guest cluster backup


PowerShell for the win. This saves you some searching and typing and potentially making some mistakes along the way. Have fun. More testing is underway to make sure things are now predictable and stable. We’ll share our findings with you.

Monitor the UNMAP/TRIM effect on a thin provisioned SAN


During demos I give on the effectiveness of storage efficiencies (UNMAP, ODX) in Hyper-V, I use some PowerShell code to help show this. TRIM in the virtual machine and on the Hyper-V host passes along information about deleted blocks to a thin provisioned storage array. That means that every layer can be as efficient as possible. Here’s a picture of me doing a demo to monitor the UNMAP/TRIM effect on a thin provisioned SAN.


The script shows how a thin provisioned LUN on a SAN (DELL SC Series) grows in actual used space when data is being created or copied inside VMs. When data is hard deleted, TRIM/UNMAP prevents dynamically expanding VHDX files from growing more than they need to. When a VM is shut down they even shrink. The same info is passed on to the storage array. So, when data is deleted we can see the actual space used in a thin provisioned LUN on the SAN go down. That makes for a nice demo. I have some more info on the benefits and the potential issues of UNMAP if used carelessly here.
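If you want to poke at this yourself during such a demo, the in-box tooling is all you need on the Windows side. A small sketch (the drive letter is an example):

```powershell
# Check that delete notifications (TRIM/UNMAP) are passed down the storage stack.
# DisableDeleteNotify = 0 means TRIM/UNMAP is enabled.
fsutil behavior query DisableDeleteNotify

# Ask the file system to resend UNMAP for free space on a volume,
# which is handy to force the effect you want to demonstrate.
Optimize-Volume -DriveLetter D -ReTrim -Verbose
```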

Scripting options for the DELL SC Series (Compellent)

Your storage array needs to support thin provisioning and TRIM/UNMAP with Windows Server Hyper-V. If so, all you need is the PowerShell library your storage vendor provides. For the DELL Compellent series that used to be the PowerShell Command Set (2008), which made them an early adopter of PowerShell automation in the industry. That evolved with the array capabilities and still works today with the older SC series models. In 2015, Dell Storage introduced the Enterprise Manager API (EM-API) and also the Dell Storage PowerShell SDK, which uses the EM-API. This works via an EM Data Collector server and no longer connects directly to the management IP of the controllers. This is the only way to work with the newer SC series models.

It’s a powerful tool to have and allows for automation and orchestration of your storage environment when you have wrapped your head around the PowerShell commands.

That does mean that I needed to replace my original PowerShell Command Set scripts. Depending on what those scripts do this can be done easily and fast or it might require some more effort.

Monitoring UNMAP/TRIM effect on a thin provisioned SAN with PowerShell

As a short demo, let me showcase the Command Set and the DELL Storage PowerShell SDK versions of a script to monitor the UNMAP/TRIM effect on a thin provisioned SAN with PowerShell.

Command Set version

Bar the way you connect to the array, the difference is in the cmdlets. In Command Set, retrieving the storage info is done as follows:

$SanVolumeToMonitor = "MyDemoSANVolume"

#Get the size of the volume
$CompellentVolumeSize = (Get-SCVolume -Name $SanVolumeToMonitor).Size

#Get the actual disk space consumed in that volume
$CompellentVolumeRealDiskSpaceUsed = (Get-SCVolume -Name $SanVolumeToMonitor).TotalDiskSpaceConsumed

In the DELL Storage PowerShell SDK version it is not harder, just different than it used to be.

$SanVolumeToMonitor = "MyDemoSANVolume"
$Volume = Get-DellScVolume -StorageCenter $StorageCenter -Name $SanVolumeToMonitor

$VolumeStats = Get-DellScVolumeStorageUsage -Instance $Volume.InstanceID

#Get the size of the volume
$CompellentVolumeSize = ($VolumeStats).ConfiguredSpace

#Get the actual disk space consumed in that volume
$CompellentVolumeRealDiskSpaceUsed = ($VolumeStats).ActiveSpace

Which gives …


I hope this gave you some inspiration to get started automating your storage provisioning and governance. On premises or in the cloud, a GUI and a click have their place, but automation is the way to go. As a bonus, the complete script is below.

#region PowerShell to keep the PoSh window on top during demos
$signature = @'
[DllImport("user32.dll")]
public static extern bool SetWindowPos(
    IntPtr hWnd,
    IntPtr hWndInsertAfter,
    int X,
    int Y,
    int cx,
    int cy,
    uint uFlags);
'@

$type = Add-Type -MemberDefinition $signature -Name SetWindowPosition -Namespace SetWindowPos -Using System.Text -PassThru

$handle = (Get-Process -Id $Global:PID).MainWindowHandle
$alwaysOnTop = New-Object -TypeName System.IntPtr -ArgumentList (-1)
$type::SetWindowPos($handle, $alwaysOnTop, 0, 0, 0, 0, 0x0003) | Out-Null
#endregion

function WriteVirtualDiskVolSize () {
    $Volume = Get-DellScVolume -Connection $Connection -StorageCenter $StorageCenter -Name $SanVolumeToMonitor
    $VolumeStats = Get-DellScVolumeStorageUsage -Connection $Connection -Instance $Volume.InstanceID
    #Get the size of the volume
    $CompellentVolumeSize = ($VolumeStats).ConfiguredSpace
    #Get the actual disk space consumed in that volume.
    $CompellentVolumeRealDiskSpaceUsed = ($VolumeStats).ActiveSpace

    Write-Host -ForegroundColor Magenta "Didier Van Hoye - Microsoft MVP / Veeam Vanguard & Dell TechCenter Rockstar"
    Write-Host -ForegroundColor Magenta "Hyper-V, Clustering, Storage, Azure, RDMA, Networking"
    Write-Host -ForegroundColor Magenta "http://blog.workinghardinit.work"
    Write-Host -ForegroundColor Magenta "@workinghardinit"
    Write-Host -ForegroundColor Cyan "DELLEMC Storage Center model $SCModel version" $SCVersion.Version
    Write-Host -ForegroundColor Cyan "Dell Storage PowerShell SDK" (Get-Module DellStorage.ApiCommandSet).Version
    Write-Host -ForegroundColor Yellow "
 _   _  _   _  __  __     _     ____
| | | || \ | ||  \/  |   / \   |  _ \
| | | ||  \| || |\/| |  / _ \  | |_) |
| |_| || |\  || |  | | / ___ \ |  __/
 \___/ |_| \_||_|  |_|/_/   \_\|_|
"
    Write-Host "Size Of the LUN on SAN: $CompellentVolumeSize" -ForegroundColor Red
    Write-Host "Space Actually Used on SAN: $CompellentVolumeRealDiskSpaceUsed" -ForegroundColor Green

    #Wait a while before you run these queries again.
    Start-Sleep -Milliseconds 1000
}

#If the Storage Center module isn't loaded, do so!
if (!(Get-Module DellStorage.ApiCommandSet)) {
    Import-Module "C:\SysAdmin\Tools\DellStoragePowerShellSDK\DellStorage.ApiCommandSet.dll"
}

$DsmHostName = "MyDSMHost.domain.local"
$DsmUserName = "MyAdminName"
$DsmPwd = "MyPass"
$SCName = "MySCName"
# Convert the password to a secure string
$DsmPassword = (ConvertTo-SecureString -AsPlainText $DsmPwd -Force)

# Create the connection
$Connection = Connect-DellApiConnection -HostName $DsmHostName `
    -User $DsmUserName `
    -Password $DsmPassword

$StorageCenter = Get-DellStorageCenter -Connection $Connection -Name $SCName
$SCVersion = $StorageCenter | Select-Object Version
$SCModel = (Get-DellScController -Connection $Connection -StorageCenter $StorageCenter -InstanceName "Top Controller").Model.Name.ToUpper()

$SanVolumeToMonitor = "MyDemoSanVolume"

#Just let the script run in a loop indefinitely, refreshing the stats.
while ($true) {
    Clear-Host
    WriteVirtualDiskVolSize
}


SFP+ and SFP28 compatibility


As 25Gbps (SFP28) is en route to displacing 10Gbps (SFP+) from its leading role as the workhorse in the datacenter, 10Gbps is slowly but surely becoming “the LOM option”. It is passing on to the role and place 1Gbps has held for many years. Where extension slots are concerned, we see 25Gbps cards rise tremendously in popularity. The same is happening on the switches, where 25-100Gbps ports are readily available. As this transition takes place and we start working on acquiring 25Gbps or faster gear, the question of SFP+ and SFP28 compatibility arises for anyone who’s involved in planning this.

SFP+ and SFP28 compatibility

Who needs 25Gbps?

When I got really deep into 10Gbps about 7 years ago, I was considered a bit crazy and accused of overdelivering. That was until they saw the speed of a live migration. From Windows Server 2012 and later versions that was driven home even more with shared nothing live migration, storage live migration, SMB 3 Multichannel and SMB Direct.

On top of that, Storage Spaces and SOFS came onto the storage scene in the Microsoft Windows Server ecosystem. This led us to S2D and Storage Replica in Windows Server 2016 and later. This meant that the need for more bandwidth, higher throughput and low latency was ever more obvious and clear. Microsoft has a rather extensive collection of features & capabilities that leverage SMB 3 and as such can leverage RDMA.

In this time frame we also saw the strong rise of All Flash Array solutions with SSD and NVMe. Today we even see storage class memory come into the picture. All this means even bigger needs for high throughput at low latency, so the trend for ever faster Ethernet is not over yet.

What does this mean?

That means that 10Gbps is slowly but surely becoming the LOM option and is passing on to the role 1Gbps has held for many years. In our extension slots we see 25-100Gbps cards rise in popularity. The same is happening on the switches where we see 25, 50, 100Gbps or even higher. I’m not sure if 50Gbps is ever going to be as popular but 25Gbps is for sure. In any case I am not crazy but I do know how to avoid tech debt and get as much long term use out of hardware as possible.

When it comes to the optic components, SFP+ is commonly used for 10Gbps. This provides a path to 40Gbps and 100Gbps via QSFP. For 25Gbps we have SFP28 (1 channel or lane of 25Gbps). This gives us a path to 50Gbps (2*25Gbps – two lanes) and to 100Gbps (4*25Gbps – four lanes) via QSFP28. In the end this is a lot more economical. But let’s look at SFP+ and SFP28 compatibility now.

SFP+ and SFP28 compatibility

When it comes to SFP+ and SFP28 compatibility we’re golden. SFP+ and SFP28 share the same form factor & are “compatible”. The moment I learned that SFP28 shares the same form factor with SFP+, I was hopeful that they would only differ in speed. And indeed, that hope became a sigh of relief when I read, and experimentally demonstrated to myself, the following:

  1. I can plug in a SFP28 module into an SFP+ port
  2. I can plug in a SFP+ module into an SFP28 port
  3. Connectivity is established at the lowest common denominator, which is 10Gbps
  4. The connectivity is functional but you don’t gain the benefits SFP28 brings to the table.

Compatibility for migrations & future proofing

For a migration path that is phased over time this is great news as you don’t need to have everything in place right away from day one. I can order 25Gbps NIC in my servers now, knowing that they will work with my existing 10Gbps network. They’ll be ready to roll when I get my switches replaced 6 months or a year later. Older servers with 10Gbps SFP+ that are still in production when the new network gear arrives can keep working on new SFP28 network gear.

  • SFP+: 10Gbps
  • SFP28: 25Gbps, but it can go up to 28Gbps, hence the name SFP28, not SFP25. Note that SFP28 can handle 25Gbps, 10Gbps and even 1Gbps.
  • QSFP28: 100Gbps, with breakout to 4*25Gbps or 2*50Gbps, which gives you flexibility and port density.
  • 25Gbps / SFP28 is the new workhorse to deliver more bandwidth, better error control, less crosstalk and an economically sound upgrade path.

Do note that SFP+ modules will work in SFP28 ports and vice versa but you have to be a bit careful:

  • Fix the port speed when you’re not running at the default speed.
  • On SFP28 modules you might need to disable options such as forward error correction.
  • Make sure a 10Gbps switch is OK with 25Gbps cables; it might not be.
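A quick sanity check on the Windows side after mixing SFP+ and SFP28 gear is to verify at what speed the links actually came up. A minimal sketch (adapter names will differ per system):

```powershell
# Confirm the negotiated link speed per NIC; a 25Gbps SFP28 NIC connected
# to an SFP+ switch port should report 10 Gbps here.
Get-NetAdapter | Sort-Object Name |
    Format-Table Name, InterfaceDescription, LinkSpeed, Status
```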

If you have all your gear from a vendor specializing in RDMA technology, like Mellanox, all of this is detected and taken care of for you. Between vendors and with 3rd party cables, pay extra attention to verifying that all will be well.

SFP+ and SFP28 compatibility is also important for future proofing upgrade paths. When you buy and introduce new network gear it is nice to know what will work with what you already have and what will work with what you might or will have in the future. Some people will get all new network switches in at once while others might have to wait for a while before new servers with SFP28 arrive. Older servers might be around and will not force you to keep older switches around just for them.

SFP28 / QSFP28 provides flexibility

Compatibility is also important for purchase decisions, as you don’t need to match 25Gbps NIC ports to 25Gbps switch ports. You can use QSFP28 ports with breakout cables and split them to 4 * 25Gbps SFP28.

SFP+ and SFP28 compatibility


The same goes for 50Gbps: a 100Gbps QSFP28 port breaks out to 2 * 50Gbps.

SFP+ and SFP28 compatibility

This means you can have switch port density and future proofing if you so desire. Some vendors offer modular switches where you can mix port types (Dell EMC Networking S6100-ON).


More bandwidth at less cost is a no brainer. It also makes your bean counters happy, as this is achieved with fewer switches and cables. That also translates to less space in the datacenter, less power consumption and less cooling. And the less material you have, the less it costs in operational expenses (management and maintenance). This is only partially offset by our ever-growing need for more bandwidth. As converged networking matures and becomes better, that also helps with the cost, even where economies of scale don’t matter that much. The transition to 25Gbps and higher is facilitated by SFP+ and SFP28 compatibility and that is good news for all involved.