Dell was the first OEM to actively support and deliver Microsoft Storage Spaces solutions to its customers.
They recognized the changing storage landscape and saw that this was one of the options customers are interested in. When DELL adds its logistical prowess and support infrastructure to the equation, it helps deliver Storage Spaces to more customers. It removes barriers.
In June 2015 DELL launched their newest offering based on generation 13 hardware.
You can find some more information on DELL storage spaces here and here.
I’m looking forward to what they’ll offer in 2016 in regards to Storage Spaces Direct (S2D) and networking (10/25/40/50/100Gbps). I expect that to be the result of years of experience combined with the most recent networking stack and storage components: 12 Gbps SAS controllers and NVMe options in Storage Spaces Direct. Dell has the economies of scale and knowledge to be one of the major players in this area. Let’s hope they leverage this to all our advantage. They could (and should) be first to market with the most recent and most modern hardware to make these solutions shine when Windows Server 2016 goes RTM somewhere next year.
Recently, as DELL Compellent customers, we got access to version 184.108.40.206. I downloaded it and found some welcome new capabilities in the release notes.
Support for vSphere 6
2048 bit public key support for SSL/TLS
The ability to retry failed jobs (Microsoft Extensions Only)
The ability to modify a backup set (Microsoft Extensions Only)
The ability to retry failed jobs is handy. There might be a conflicting backup running via a 3rd party tool leveraging the hardware VSS provider, and the ability to retry can mitigate this. As we take multiple replays per day and have them scheduled recurrently, we had already mitigated the negative effects of this, but it gives us more options to deal with such situations. It’s good.
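The retry behavior boils down to a familiar pattern: if a job fails because something else holds the VSS provider, wait and try again. A minimal sketch of that pattern, where `run_job` is a hypothetical stand-in for kicking off a Replay Manager job (not an actual Replay Manager API):

```python
import time

def retry_job(run_job, max_attempts=3, delay_seconds=60):
    """Run a backup job, retrying on failure.

    run_job: a callable returning True on success, False on failure
             (hypothetical stand-in for launching a snapshot job).
    Returns the attempt number that succeeded, or None if all failed.
    """
    for attempt in range(1, max_attempts + 1):
        if run_job():
            return attempt  # succeeded on this attempt
        if attempt < max_attempts:
            # wait out a conflicting backup holding the VSS provider
            time.sleep(delay_seconds)
    return None  # all attempts failed
```

The delay between attempts matters more than the count: it only helps if it is long enough for the conflicting backup to release the hardware VSS provider.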
The ability to modify a backup set is one I love. It was just so annoying not to be able to do this before. A change in the environment meant having to create a new backup set. That also meant keeping around the old job for as long as you wanted to retain the replays associated with that job. Not the most optimal way of handling change I’d say, so this made me happy when I saw it.
Now I’d like DELL to invest a bit more in making the restore of volume based replays of virtual machines easier. I actually like the volume based ones with Hyper-V as it’s one snapshot per CSV for all VMs and it doesn’t require all the VMs to reside on the host where we originally defined the backup set. Optimally you run all the VMs on the node that owns the CSV, but otherwise it has fewer restrictions. In my humble opinion anything that restricts VM mobility is bad and goes against the grain of virtualization and dynamic optimization. I wonder if this has more to do with older CSV/Hyper-V versions, current limitations in Windows Server Hyper-V or CSV, or a combination. This makes for a nice discussion, so if anyone from MSFT or the DELL Storage team responsible for Replay Manager wants to have one, just let me know.
Last but not least, I’d love DELL to communicate in Q4 of 2015 on how they will integrate their data protection offering in Compellent/Replay Manager with the Windows Server 2016 backup changes and enhancements. That’s quite a change that’s happening for Hyper-V and it would be good for all to know what’s being done to leverage it. Another thing that is high on my priority list is enabling the use of replays with Live Volumes. For me that’s the biggest drawback to Live Volumes: having to choose between high/continuous availability and application consistent replays for data protection and other use cases.
I have some more things on my wish list but these are out of scope in regards to the subject of this blog post.
If you have a virtual Loadmaster you gain a capability you do not have with an appliance: console access. You can have lost all network connectivity to the Loadmaster and still gain access over the Hyper-V console connection to the virtual machine. But virtual appliances are not the only or best choice for all environments and needs. When evaluating your options you should also consider a bare metal solution like the one based on the DELL R320.
These are basically DELL servers and as such have a Dell Remote Access Card (DRAC) that allows for remote access independent of the production network. That’s great for when you need to resolve an issue but can no longer connect to the unit and you’re not near the Loadmaster. It also allows for remote shutdown and start, mounting images for updates, … all the good stuff. Basically, it offers everything a DELL server with a DRAC has to offer.
That means I have an independent way into my load balancer to deal with problems when I can no longer connect to it via the network interface, or even when it is shut down. As we telecommute as much as possible, whether from the office, on the road or at home, this is a great feature to have. It sure beats driving to your data center at zero dark thirty, if that is even a feasible option.
I know that normally you put in two units for high availability, but that will not cover all scenarios. If you have a data center filled with DELL PowerEdge servers with DRACs and you cannot restore services because you cannot get to your load balancers, that’s a bummer. It’s for that same reason we have IP-managed PDUs and OOB capabilities on the switches. The idea is to have options and be able to restore services remotely as much as possible. This is faster, cheaper and easier than going on site, so reducing that need as much as possible is good. Knowledge today flies across the planet a lot faster than a human being can.
It happens to the best of us: sometimes we select the wrong option during deployment and/or configuration of our original virtual disks. Or, even with the best of planning, the realities and use cases of your storage change, so the original choice might no longer be the most optimal one. Luckily, on a DELL MD PowerVault storage device you do not need to delete the virtual disk or disks and lose your data to reconfigure the segment size. Even better, you can do this online as a background process, which is a must as it can take a very long time and would cause prohibitively long downtime if you had to take the data offline for that duration.
You have some control over the speed at which this happens via the priority setting, but do realize that this takes a (very) long time. Because it’s a background process you can keep working. I have noticed little to no impact on performance, but your mileage may vary.
How long does it take? Hard to predict. This is a screenshot of two 50TB virtual disks where the segment size is being adjusted online…
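While the exact duration is hard to predict, you can get a rough lower bound from the fact that every block has to be read and rewritten at least once. A back-of-envelope sketch, where the effective rewrite rate is purely illustrative (the real rate depends on the priority setting, array load and spindle count):

```python
def migration_hours(capacity_tb, throughput_mb_s):
    """Rough lower bound for an online segment-size change.

    Assumes every block is read and rewritten once.
    throughput_mb_s: assumed effective background rewrite rate
    (illustrative; depends on priority setting and array load).
    """
    capacity_mb = capacity_tb * 1024 * 1024  # TB -> MB (binary units)
    seconds = capacity_mb / throughput_mb_s
    return seconds / 3600

# e.g. one 50 TB virtual disk at an assumed effective 100 MB/s
# works out to roughly 145 hours (about 6 days) per pass
```

That order of magnitude lines up with the experience above: for large virtual disks you are looking at days, not hours, per pass.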
You cannot always go to the desired segment size in one step. Sometimes you have only an intermediate size available. This is the case in the example below.
The trick is to first move to that segment size and then repeat the process to reach the size you require. In this case, we’ll first move to 256 KB and then to 512 KB segment size. So this again takes a long time. But again, it all happens online.
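The stepwise path is easy to plan ahead of time. A small sketch, under the assumption (consistent with the 256 KB then 512 KB example above) that the array only offers the adjacent power-of-two size at each step; the actually available transitions come from the Modular Disk Storage Manager itself:

```python
def segment_size_steps(current_kb, target_kb):
    """Plan the intermediate segment sizes for a stepwise migration.

    Assumes the array only offers the adjacent power-of-two size at
    each step (e.g. 128 KB -> 256 KB -> 512 KB); the real options are
    whatever the MD Storage Manager presents for your virtual disk.
    Both sizes are expected to be powers of two in KB.
    """
    steps = []
    size = current_kb
    while size != target_kb:
        size = size * 2 if size < target_kb else size // 2
        steps.append(size)
    return steps
```

Each entry in the returned list is one full background migration of the virtual disk, so a two-step change means paying the (long) rewrite time twice.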
In conclusion, it’s great to have this capability. When you need to change the segment size while there is already data on the PowerVault virtual disks, you can do so online and the data remains available. That this can require multiple steps and take a long time is not a huge deal. You kick it off and let it run. No need to sit there and watch it.