Upgrading to DELLEMC Unisphere Central for SC Series

To prepare for rolling out SCOS 7.3 (see my blog post SC Series SCOS 7.3 for more information on this version) we upgraded our Dell Storage Manager Data Collector and DELL Storage Manager Client to 18.1.10.171. I am happy to report that this went flawlessly. This means we are ready to work with CoPilot and will upgrade our SANs over the next week. That is always a phased rollout, to minimize risk.

Upgrading to 18.1.10.171 actually means we are upgrading to DELLEMC Unisphere Central for SC Series, which was announced as part of the SCOS 7.3 upgrade benefits.

The upgrade process itself is straightforward and isn't different from what we are used to. First you upgrade the Storage Manager Data Collector, then the Storage Manager Client. If you have a remote Storage Manager Data Collector you must then upgrade that one as well.

Make sure you have a successful backup and create a checkpoint before you start the upgrade. That way you also have an easy exit plan when things go south.
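
If your Data Collector runs in a Hyper-V VM, that checkpoint is a one-liner from the host. A minimal sketch, assuming a Hyper-V setup; the host and VM names are placeholders:

# Create a checkpoint of the Data Collector VM before starting the upgrade.
# "MyHyperVHost" and "DSM-DataCollector" are placeholder names.
Checkpoint-VM -ComputerName "MyHyperVHost" -Name "DSM-DataCollector" -SnapshotName "Pre-18.1.10.171-upgrade"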

Upgrading the Storage Manager Data Collector & Client

This needs to be done first. It can take a while, so be patient. Run the Storage Manager Data Collector 18.1.10.171.exe with elevated permissions. It unpacks and asks you to select a language.

Click OK to continue and just follow the wizard.

It will ask you to confirm you want to upgrade.

Click yes and follow the wizard.

Click “Next” to kick off the upgrade and relax.

The wizard will provide you with plenty of feedback on what it is doing along the way.

The final step after the upgrade is to start the Data Collector service.

Starting the Data Collector service can take quite a while. Be patient. When it’s done the wizard will inform you of this.

Click “Finish” to close the installer.

On your desktop you’ll notice that you now have an icon called DELL EMC Unisphere Central. This indicates that storage management for the DELL EMC offerings is converging.

Do note that if you have a remote Storage Manager Data Collector you must now upgrade that one as well. Do NOT forget to keep both deployments at the same software level.

You are now ready to upgrade the Storage Manager Client. Run the installer with elevated permissions.

Just follow the wizard; normally this goes really fast and that’s it. You can log into the new client and see that the GUI is very familiar to anyone already using the product.

What is new is the look and feel of DELLEMC Unisphere Central for SC Series. It’s not your father’s data collector anymore.

We’ll talk about DELLEMC Unisphere Central for SC Series later, when we have had a chance to work with it some more in real life.

Add HBAs to SC Series Servers with Add-DellScPhysicalServerHba

Introduction

Before I dump the script to add HBAs to SC Series Servers with Add-DellScPhysicalServerHba on you, first some context. I have been quite busy with multiple SAN migrations: a bunch of older DELLEMC SC Series (Compellent) arrays to newer All Flash Arrays (see My first Dell SC7020(F) Array Solution). When I find the time I’ll share some more PowerShell snippets I use to make such efforts a bit easier. It’s quite addictive and it allows you to migrate effectively and efficiently.

In the SC Series we create cluster objects in which we place the server objects. That makes life easier on the SAN end.

Those server objects are connected to the SAN via FC or iSCSI. For this we need to add the HBAs to the servers after we have set up the zoning correctly. That’s a whole different subject.

This is tedious work in the user interface, especially when there are many WWN entries visible that need to be assigned. Mistakes can happen. This is where automation comes in handy and is a real time saver when you have many clusters/nodes and multiple SANs. So we’ll show you how to grab the WWN info you need from the cluster nodes to add HBAs to SC Series Servers with Add-DellScPhysicalServerHba.

Below you see a script that loops through all the nodes of a cluster and gets the HBA WWNs we need. It then adds those WWNs to the SC Series server objects. In another blog post I’ll share some snippets to gather the cluster info needed to create the cluster objects and server objects on the Compellent SC Series SAN. In this blog post we’ll assume the server objects have already been created.

We leverage the Dell Storage Manager – 2016 R3.20 Release (Public PowerShell SDK for Dell Storage API). I hope it helps.

Add HBAs to SC Series Servers with Add-DellScPhysicalServerHba
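
Below is a minimal sketch of that script. It assumes the DellStorage.ApiCommandSet module from the 2016 R3.20 SDK and the FailoverClusters module are installed, and it only handles Fibre Channel. The Data Collector, user and cluster names are placeholders, and the exact parameter names on the Dell cmdlets (-Connection, -Instance, -HbaPortType, -WwnOrIscsiName) are my reading of the SDK help, so verify them with Get-Help Add-DellScPhysicalServerHba in your environment.

# Dell Storage PowerShell SDK (2016 R3.20) and the failover clustering cmdlets.
Import-Module DellStorage.ApiCommandSet
Import-Module FailoverClusters

$DsmHostName = "dsm-datacollector.domain.local"   # placeholder: Data Collector FQDN
$DsmUserName = "DsmAdminUser"                     # placeholder: DSM user
$DsmPassword = Read-Host -AsSecureString -Prompt "Password for $DsmUserName"
$ClusterName = "MyHyperVCluster"                  # placeholder: cluster to process

# Connect to the Data Collector.
$Connection = Connect-DellApiConnection -HostName $DsmHostName -User $DsmUserName -Password $DsmPassword

foreach ($Node in Get-ClusterNode -Cluster $ClusterName) {
    # Grab the WWPNs of the FC HBAs on this node. PortAddress comes back as a
    # plain hex string without separators, which is what the SC Series expects.
    $Wwpns = Invoke-Command -ComputerName $Node.Name -ScriptBlock {
        Get-InitiatorPort | Where-Object { $_.ConnectionType -eq "Fibre Channel" } |
            Select-Object -ExpandProperty PortAddress
    }

    # Find the matching server object on the SC Series (assumes it is named after the node).
    $ScServer = Get-DellScPhysicalServer -Connection $Connection -Name $Node.Name

    foreach ($Wwpn in $Wwpns) {
        # Add the HBA WWN to the server object.
        Add-DellScPhysicalServerHba -Connection $Connection -Instance $ScServer `
            -HbaPortType FibreChannel -WwnOrIscsiName $Wwpn -AllowManual
    }
}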


 

Dell SC Series MPIO Registry Settings script

Introduction

When you’re using DELL Compellent (SC Series) storage you might be leveraging the Dell SC Series MPIO Registry Settings script they give you to set the recommended settings. That’s a nice little script you can test, verify and adapt to integrate into your setup scripts. You can find it in the Dell EMC SC Series Storage and Microsoft Multipath I/O best practices document.

Dell SC Series MPIO Registry Settings script

Recently I was working with a new deployment (7.2.40) to test and verify it in a lab environment. The lab cluster nodes had a lot of NICs & FC HBAs to test all kinds of possible scenarios: Microsoft Windows Clusters, S2D, Hyper-V, FC, iSCSI, etc. The script detected the iSCSI service but did not update any settings and threw errors.

After verifying things in the registry myself it was clear that the entries for the Microsoft iSCSI Initiator that the script looks for were there, but the script did not pick them up.

Looking over the script it quickly became clear what the issue was. The variable $IscsiRegPath = “HKLM:\SYSTEM\CurrentControlSet\Control\Class\{4d36e97b-e325-11ce-bfc1-08002be10318}\000*” has 3 leading zeros out of a maximum of 4 characters. This means that if the Microsoft iSCSI Initiator info is in subkey 0009 it gets picked up, but not when it is in 0011, for example.

So I changed that to only 2 leading zeros. That assumes you won’t exceed subkey 0099, which is safer, but you could argue it should be only one leading zero, as 0999 is an even safer assumption.

$IscsiRegPath = “HKLM:\SYSTEM\CurrentControlSet\Control\Class\{4d36e97b-e325-11ce-bfc1-08002be10318}\00*”

I’m sharing the snippet with my adaptation here in case you want it. As always, I assume no responsibility for what you do with the script or the outcomes in your environment. Big boy rules apply.
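
Here is a trimmed-down sketch of the adapted iSCSI detection with the two-leading-zero wildcard. It is not the full Dell script, just the relevant part; the MaxRequestHoldTime (90) and LinkDownTime (35) values shown as examples are the ones from the best practices document, so double-check them against the version that matches your SCOS release.

# Two leading zeros instead of three, so subkeys 0010 and up are matched as well.
$IscsiRegPath = "HKLM:\SYSTEM\CurrentControlSet\Control\Class\{4d36e97b-e325-11ce-bfc1-08002be10318}\00*"

if (Get-Service -Name MSiSCSI -ErrorAction SilentlyContinue) {
    # Walk all matching class subkeys and keep the one(s) that belong to the Microsoft iSCSI Initiator.
    Get-Item -Path $IscsiRegPath | Where-Object {
        (Get-ItemProperty -Path $_.PSPath -ErrorAction SilentlyContinue).DriverDesc -eq "Microsoft iSCSI Initiator"
    } | ForEach-Object {
        $ParamsPath = Join-Path -Path $_.PSPath -ChildPath "Parameters"
        # Example values from the best practices document; verify them for your SCOS release.
        Set-ItemProperty -Path $ParamsPath -Name MaxRequestHoldTime -Value 90
        Set-ItemProperty -Path $ParamsPath -Name LinkDownTime -Value 35
    }
}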

 

Replay Manager 7.8 and cluster OS rolling upgrade Tips

Compellent Replay Manager 7.8 and Windows Server 2016 clusters in mixed mode or at cluster functional level 8

Consider this a quick publish with tips for when you combine Replay Manager 7.8, Compellent and Windows Server 2016. Many of you will be doing a cluster operating system rolling upgrade of your Windows Server 2012 R2 clusters to Windows Server 2016. Even if you have done your homework and made sure your hardware is supported, you can still run into a surprise. As long as you’re in mixed mode (W2K12R2 mixed with W2K16 nodes) or have not updated the cluster functional level to 9 (Windows Server 2016) you will have a few issues.

In Replay Manager 7.8 itself you’ll notice that the nodes of your cluster only see, under local volumes, the CSV LUNs they currently own. Normally you’ll see all of the CSV LUNs of the (Hyper-V) cluster on all of the nodes of that cluster, so that’s not the expected behavior. This leads to failed restore points when you run a snapshot from a host that is not the owner of the CSV, etc.

On top of that when you try to run a backup job it will fail. The reason given is:

The requested volumes is not supported because it is not managed by the provider, is a dynamic volume, or it has some other incompatibility with the current operation.

The fix? Just update your upgraded cluster to cluster functional level 9.

It’s as easy as that. The moment you upgrade your cluster functional level to 9 you will see all the CSVs of the cluster on every node of that cluster you connect to, and at that moment the replays will also work. That’s OK: you want to move swiftly through the rolling upgrade anyway, once you’re comfortable all drivers and firmware are working fine. You do not want to stay at the lower cluster functional level too long, but upgrade to benefit from the new capabilities in Windows Server 2016 failover clustering. You do need to know this before you start your upgrades.
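
If you prefer to check and raise the functional level from PowerShell, the standard FailoverClusters cmdlets do it (this is plain Microsoft tooling, nothing Compellent specific):

# Check the current cluster functional level (8 = Windows Server 2012 R2, 9 = Windows Server 2016).
Get-Cluster | Select-Object Name, ClusterFunctionalLevel

# Once all nodes run Windows Server 2016 and drivers/firmware check out, raise the level.
# Note that this is a one-way operation; there is no going back to mixed mode afterwards.
Update-ClusterFunctionalLevel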

Close your backup apps, restart the Replay Manager service on the cluster nodes, refresh / reconnect in the backup apps, and voilà. You’ll see the image you are used to in Replay Manager 7.8 (green text / arrows) and the backup jobs will work again, as will any other backup product using the Compellent Replay Manager 7.8 hardware VSS provider.
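
If you want to do that service restart on all nodes in one go, something like the snippet below works. Note that the service name used here is an assumption; check the actual Replay Manager service name with Get-Service on one of your nodes first, and the cluster name is a placeholder.

# Restart the Replay Manager service on every node of the cluster.
# "ReplayManager" is an assumed service name; verify it with Get-Service *replay* on a node.
$Nodes = Get-ClusterNode -Cluster "MyHyperVCluster" | Select-Object -ExpandProperty Name
Invoke-Command -ComputerName $Nodes -ScriptBlock { Restart-Service -Name "ReplayManager" }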

I hope this helps some of you out there. So yes, Replay Manager 7.8 supports Windows Server 2016 clusters with CSV LUNs, but if you upgraded your cluster via a cluster operating system rolling upgrade you need to have upgraded your cluster functional level! Until then, Replay Manager 7.8 isn’t going to work very well.

So there you go, that’s another reason to move through that process as fast and smoothly as you can.

Still missing in action for Hyper-V with Replay Manager 7.8

I’d really like for Replay Manager to be a bit more cluster friendly. No matter what node you are connected to, it shows you all CSV LUNs in the cluster. But since Replay Manager 7.8 with Windows Server 2016, when you run a job manually you must start it while connected to the cluster node that owns the CSV, or the job will fail with “No resources found on current cluster node for backup set”.

This was not the case with Windows Server 2012 (R2) and earlier versions of Replay Manager. That did throw some benign errors in the event logs on the cluster node, but it did work. I would love for DELLEMC to make the Replay Manager client smart enough to detect which node owns the CSV and start the job from that node. That would be a lot more user friendly. At the very least it should indicate which of the CSV LUNs you see are owned by the cluster node you are connected to, because right now, when you launch a backup job for a CSV that’s not owned by the node you are connected to, the job just quits/fails. They could detect the node they need, launch the job on that node and show it to you. That avoids having to go find out yourself which cluster node to connect to in Replay Manager when you need to run an out-of-schedule job manually. The tech/logic is already there, as the scheduled jobs do get launched on the correct node.
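
Until that logic exists, a quick way to find out which node to connect to is to ask the cluster itself (the cluster name is a placeholder):

# Show which cluster node currently owns each CSV, so you know where to start the manual job.
Get-ClusterSharedVolume -Cluster "MyHyperVCluster" |
    Select-Object Name, OwnerNode, @{ Name = "Path"; Expression = { $_.SharedVolumeInfo.FriendlyVolumeName } }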

It would also be great if they could finally build the logic into Replay Manager for the Hyper-V VM backups to know on what CSV and Hyper-V node a VM lives and deal with that. Sure, it might cause more snapshots to be made, but that’s not a valid argument: when the VMs are on the same node but on different CSVs, that’s already happening. Relying on one VM per job to avoid this isn’t a great answer.