May 2017 will be a travelling month

Introduction

In ICT, you never stop learning. Changes come and go fast. Navigating these turbulent times of rapid change and short value cycles in order to provide continuity in both services and value, without falling behind or being held back, is a challenge we all face every day. If you hire or employ technologists, please take a moment to consider what they pull off for you every day. It helps to be realistic about what to expect from them and what to achieve with them. For that, a solid understanding of the technology ecosystem and good doctrine to achieve your goals are necessary. For that to really happen and for their efforts to pay off, we need to make sure politics and bureaucracy are kept under control. Let your people shine and move ahead.

Long-term planning does not equate to a strategy, and you might find yourself outpaced and outmaneuvered by the industry and your competitors. That’s one reason you see technologists move up the ladder and take on leadership roles inside many companies. They tend to be better placed to see the opportunities and what these require. In that respect, it pays off to walk out of your office every now and then to prevent tunnel vision and echo chambers. That’s one of the reasons that, for me, May 2017 will be a travelling month.


Cloud & Datacenter Conference Germany

First, I’ll be in Munich, Germany, for the Hyper-V community day and to both attend and speak at the Cloud & Datacenter Conference Germany 2017. That’s a conference for and by the community, and the speakers are all highly experienced people who talk the talk and walk the walk.


If you can, grab a ticket ASAP. From the very first edition, the Cloud & Datacenter Conference Germany has set the standard for what a conference should be like. I’ll be talking about SMB Direct / RDMA on the Hyper-V community day and about Windows Server 2016 Failover Clustering & Hyper-V at the conference. Please feel free to come over and chat.

Dell EMC World 2017

After that I’m off to Dell EMC World 2017, where I’ll be diving into the offerings that exist today and in the near future. As you might have guessed, I’m very interested in the Dell Storage Spaces solutions and their take on and use of ReFSv3 and Windows Server 2016. Next to that, I would not be nicknamed RoCE Balboa if I was not interested in networking. Hardware-wise, I have my eye on the S-Series S6100-ON, as that is one versatile piece of equipment. Man, I can imagine having a lab with six of those to test and play around with, not to mention the S2D clusters and backup targets to hammer them with a nice workload. Throw in the Mellanox cards for good measure. I can dream, right? As I’m a realist, I’m also very interested in their servers and, still, the Compellent offerings, which, as far as traditional SANs go, are one easy-to-manage and easy-to-leverage piece of gear. It goes without saying I’ll be taking a look at what the EMC addition to the portfolio can achieve for us, as well as the Dell EMC 3rd party offerings.


VeeamOn 2017

After that I continue on to VeeamON 2017, which makes a great addition to the two above. Take the Windows Server 2016 core stack as the basis for Azure Stack, with S2D running on that great Dell EMC hardware. Now have that protected and made continuously available by the Veeam Availability Suite 9.5. That’s how you get an amazing stack of technologies on which to build and support amazingly good services.


At VeeamON 2017 I’ll be joining two big names in the industry, Luca Dell’Oca and Carsten Rachfahl, to talk about ReFSv3. We’ll be attending sessions and “hanging out” at the MSFT booth as well.

So, no rest for a Microsoft MVP, Microsoft Extended Experts Team member, Azure Advisor, Dell Community Rockstar and Veeam Vanguard. We’re always reading up, learning, investigating, sharing experiences and insights with our peers and learning from them. Conferences done right are very valuable and a great networking and learning opportunity. Make the most of them when you can.

My value is your value

These conferences, together with our focus on some very innovative and promising public and hybrid cloud technologies in Azure, will keep me busy contemplating designs, testing the concepts of the solutions I have in mind, and delivering solutions that are efficient and effective both in functionality and in TCO and ROI. That (and caffeine), combined with working with great and smart people, is what makes me run. So for that reason alone, I do not mind that May 2017 will be a travelling month.

Full or Thick Provisioned Volume on Compellent

Introduction

There are pundits out there who claim that you cannot create a fully provisioned LUN on a Compellent SAN. Now that’s what I call an unsubstantiated rumor, better known as bullshit.

Sure, the magic sauce of many modern storage arrays lies in thin provisioning. Let there be no mistake about that. But there are scenarios where you might want to leverage a fully provisioned volume, also known as a thick provisioned LUN. You can read about one such scenario, where they make perfect sense, in the blog post Mind the UNMAP Impact On Performance In Certain Scenarios.
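To make the distinction concrete, here is a toy model in plain Python (nothing Compellent-specific; the `Volume` class and its fields are purely illustrative) of how the two provisioning styles account for space:

```python
# Toy model contrasting thick and thin provisioning: a thick volume
# consumes its full size on the array at creation time, while a thin
# volume only consumes space as data is actually written to it.

class Volume:
    def __init__(self, size_gb, thick=False):
        self.size_gb = size_gb      # provisioned (logical) size
        self.thick = thick          # True = fully preallocated
        self.written_gb = 0         # data actually written so far

    @property
    def consumed_gb(self):
        # Thick: all space is claimed up front.
        # Thin: only what has actually been written.
        return self.size_gb if self.thick else self.written_gb

    def write(self, gb):
        # Writes can never exceed the provisioned size.
        self.written_gb = min(self.size_gb, self.written_gb + gb)

thick = Volume(500, thick=True)
thin = Volume(500, thick=False)
print(thick.consumed_gb, thin.consumed_gb)  # 500 0
thin.write(20)
print(thin.consumed_gb)  # 20
```

Same 500GB volume in both cases, but the thick one eats 500GB of (pre-RAID) capacity before a single byte of user data lands on it.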

Create a Full or Thick Provisioned Volume on Compellent

First of all, you create a brand new volume in the Storage Center System Explorer. That’s as standard as it gets.

You then map this volume to a server.

At that moment, before you even mount that volume on your server, let alone do anything else such as bringing it online or formatting it, you’ll “Preallocate Storage” for that volume in Storage Center.


You’ll get a warning, as this is not a default action, and you should only do this when the I/O conditions warrant it.


When you continue, you’ll get some feedback. This can take quite some time depending on the size of the volume.


When it’s done, take a peek at the statistics of that fully (or thick) provisioned volume on the Compellent. Even before we formatted the volume on a server or wrote any data to it, it’s using all of its space on the SAN from the start.


Due to data protection, it’s actually even more than the provisioned size: a 500GB disk fully provisioned in RAID 10 is using 1TB of space, as it all still sits in RAID 10 (no tiering down has occurred yet). RAID 10 has an overhead factor of 2. The volume is for a large part in Tier 2 because my Tier 1 is full, so writing spilled over into Tier 2.
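The overhead math is easy to verify (a minimal sketch; the function name is mine):

```python
# Sanity check on the numbers above: a fully preallocated volume
# immediately consumes provisioned_size * RAID overhead factor in raw
# capacity. RAID 10 mirrors everything, so its overhead factor is 2.

def raw_consumption_gb(provisioned_gb, overhead_factor):
    """Raw SAN capacity consumed by a fully provisioned volume."""
    return provisioned_gb * overhead_factor

# The 500 GB volume from the screenshot, in RAID 10:
print(raw_consumption_gb(500, 2))  # 1000 GB, i.e. 1 TB of raw capacity
```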

Now compare this to a thinly provisioned volume that we just created; again, we haven’t touched it in any other way.


Yup, until we actually write data to the volume it’s highly space efficient: there is absolutely no space used, and we’ll see only a little when we mount the disk, initialize it in Windows, create a simple volume and format it.

This one is completely in Tier 2, as my Tier 1 is full. I accept donations of SANs and SSDs for my lab if this bothers you 😉. When we write data to it you’ll see this rise, and over time you’ll see it tier down and up as well.

Dell Compellent SCOS 6.7 ODX Bug Heads Up

UPDATE 3: Bad and disappointing news. After update 2 we’ve seen Dell change the CSTA (CoPilot Services Technical Alert) on the customer website to “will be fixed in a future version”. According to the latest comment on this blog post, that would be in Q1 2017. Basically this is unacceptable, and it’s a shame to see a SAN that was one of the best when it comes to Hyper-V support in Windows Server 2012 / 2012 R2 decline in this way. If 7.x is required for Windows Server 2016 support, this is pretty bad, as it means early adopters are stuck or we’ll have to find and recommend another solution. This is not a good day for Dell storage.

UPDATE 2: As you can read in the comments below people are still having issues. Do NOT just update without checking everything.

UPDATE: This issue has been resolved in Storage Center 6.7.10 and 7.X.

If you have 6.7.x below 6.7.10 it’s time to think about moving to 6.7.10!

No vendor is exempt from errors, issues, mistakes and trouble with advanced features, and unfortunately Dell Compellent has issues with Windows Server 2012 (R2) ODX in the current release of SCOS 6.7. Bar a performance issue in a 6.4 version, they had a very good track record with regard to ODX, UNMAP, … so far. But no matter how good you are, bad things can happen.


I’ve had two people who were bitten by it contact me. The issue is described below.

In SCOS 6.7 an issue has been determined when the ODX driver in Windows Server 2012 requests an Extended Copy between a source volume which is unknown to the Storage Center and a volume which is presented from the Storage Center. When this occurs the Storage Center does not respond with the correct ODX failure code. This results in the Windows Server 2012 not correctly recognizing that the source volume is unknown to the Storage Center. Without the failure code Windows will continually retry the same request which will fail. Due to the large number of failed requests, MPIO will mark the path as down. Performing ODX operations between Storage Center volumes will work and is not exposed to this issue.

You might think that this is not a problem, as you might only use Compellent storage, but think again. Local disks on the hosts where data is stored temporarily, and external storage you use to transport data in and out of your datacenter or to copy backups to, are all use cases we can encounter. When ODX is enabled (and it is by default on Windows Server 2012 (R2)), the file system will try to use it and, when that fails, fall back to normal (non-ODX) operations. All of this is transparent to the users. But now MPIO will mark the Compellent path as down. Ouch. I will not risk that. Any I/O between a non-Compellent LUN and a Compellent LUN might cause this to happen.

The only workaround for now is to disable ODX on all your hosts. To me that’s unacceptable and I will not be upgrading to 6.7 for now. We rely on ODX to gain performance benefits at both the physical and virtual layer. We even have our SMB 3 capable clients in the branch offices leverage ODX to avoid costly data copies to our clustered Transparent Failover File Servers.
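For reference, ODX can be checked and toggled per host via the Microsoft-documented FilterSupportedFeaturesMode registry value (0 = ODX enabled, 1 = disabled). A quick sketch, to be run from an elevated PowerShell prompt and tested before rolling out widely:

```powershell
# Check whether ODX is currently enabled (0 = enabled, 1 = disabled).
Get-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem" `
    -Name "FilterSupportedFeaturesMode"

# Disable ODX on this host (the workaround discussed above).
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem" `
    -Name "FilterSupportedFeaturesMode" -Value 1
```

Set the value back to 0 to re-enable ODX once a fixed SCOS version is in place.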

When a version arrives that fixes the issue, I’ll test even more elaborately than before. We’ve learned to watch for performance issues and data corruption with many vendors, models and releases, but this MPIO issue is a new one for me.

Kemp LoadMaster OEM Servers and Dell Firmware Updates with Lifecycle Controller

When you buy a Dell OEM based Kemp Technologies LoadMaster, you might wonder who will handle the hardware updates to the server. Well, Dell handles all OEM updates via its usual options, and as with all LoadMasters, Kemp Technologies handles the firmware updates of the LoadMaster image.


Hardware-wise, both Dell and Kemp are companies that excel in support. If you can find a solution of theirs that meets your needs, it’s a great choice. Combine them and it makes for a great experience. Let me share a small issue I ran into updating Kemp LoadMaster OEM servers with Dell firmware updates via the Lifecycle Controller.

I was upgrading a set of Dell R320 LoadMasters in HA. I not only wanted to move to 7.1-Patch28b-BARE-METAL.bin, but also to take the opportunity to bring the firmware of those servers up to the latest versions, as that had been a while (since they had been delivered on site).

There is no OS running on those servers, as they are OEM hardware based appliances for the LoadMaster image. No worries: these Dell servers come with DRAC & Lifecycle Controllers, so you can leverage those to do the firmware updates from a Server Update Utility ISO locally, via virtual media, or over the network via FTP or a network share. FTP can be either the Dell FTP site or an internal one.


Now, as I had just downloaded the latest SUU at the time (SUU-32_15.09.200.74.ISO – for now you need to use the 32-bit installers with the Lifecycle Controller), I decided to just mount it via the virtual media, boot into the Lifecycle Controller and update using local media.


But I got stuck…

It doesn’t throw an error; it just returns to the starting point, and nothing would fix it, not even adding “/repository” to the file path. You can type the name of an individual DUP (32-bit!) and that works. Scanning the entire repository, however, wouldn’t move beyond step 2, “Enter Access Details”.

Scanning for an individual DUP seemed to work, but leaving the file path blank to find all eligible updates returned no results, so I could not advance. The way I was able to solve this was by leveraging the DRAC’s ability to update its own firmware, using the firmware image file to move to the most recent version. I got mine by extracting the DUP and taking the image file from the payload subfolder.


You can read how to upgrade the DRAC / Lifecycle Controller via the DRAC here.


When you’ve done that, give the system a reboot for good measure and try again. I have found that in all my cases this fixes the issue. My take on this is that older firmware can’t handle more recent SUU repositories. So give it a try if you run into this, and you’ll be well on your way to getting your firmware updated. If you need help with this process, Dell has excellent documentation here in “Lifecycle Controller Platform Update/Firmware Update in Dell PowerEdge 12th Generation Servers”.


The end result is a fully updated Dell server / Kemp LoadMaster. Mission accomplished. All this can be done from the comfort of your home office. A win-win for both you and your customer or employer. Think about it: it would be a shame to miss out on all the benefits you get from working in the cloud when the on-premises part of your hybrid infrastructure forces you to get in a car and drive to a data center 70 km away. Especially at 21:21 at night.