Join me for a TechNet Live Meeting: Hyper-V Storage Efficiencies & Optimizations in Windows Server 2012 R2

So you have been playing with, or downright seriously testing, Windows Server 2012 and perhaps even Windows Server 2012 R2. That’s great. Many of you might have it running in production or are working on that. That’s even better.

Windows Server 2012 has brought us unprecedented capabilities & performance enhancements that make it a future-proof foundation for many versions to come, and it is ready for the ever accelerating pace of hardware improvements. R2 has fine-tuned some points and added improvements that are stepping stones to something better today and even greater in vNext. I’d like to invite you to a free TechNet Live Meeting on Hyper-V Storage Efficiencies & Optimizations in Windows Server 2012 R2 and look at some of these capabilities with me.

image

As a virtualization guy two subjects are very dear to me, and those are networking & storage; this event is about a subset of the storage improvements. You might have heard about ODX and UNMAP but not have had the chance to play with them. You have read about the tremendous scalability of the IOPS in a VM and about large sector support for the next generation of hard disk drives. Some of these we’ll demonstrate (ODX, UNMAP, dynamically expanding VHDX performance) if the demo gods are with us. Others we’ll discuss so you’ll know where they come into play and how you’ll benefit from them, even without realizing you do. So without further delay, register for the free TechNet Live Event here.

Future Proofing Storage Acquisitions Without A Crystal Ball

Dealing with an unknown future without a crystal ball

I’ve said it before and I’ll say it again: Storage Spaces in Windows Server 2012 (R2) is the first step by Microsoft to really make a difference in (or put a dent into) the storage world. See TechEd 2013 Revelations for Storage Vendors as the Future of Storage lies With Windows 2012 R2 (that was a nice blog, by the way, to find out which resellers & vendors have no sense of humor & perspective). It’s not just Microsoft who’s doing so. There are many interesting initiatives at smaller companies to do the same. The question is not if these offerings can match the feature sets, capabilities and scenarios of the established storage vendors’ offerings. The real question is whether the established vendors offer enough value for money to maintain themselves in a good enough is good enough world, which in itself is a moving target due to the speed at which technology & business needs evolve.

The balance of cost versus value becomes critical for selecting storage. You need it now and you know you’ll run it for 3 to 5 years. Perhaps longer, which is fine if it serves your needs, but you just don’t know. Due to the speed of change you can’t invest in a solution that will last you for the long term. You need a good fit now at reasonable cost with some headway for scale up / scale out. The ROI/TCO has to be good within 6 months or a year. If possible get a modular solution, one where you can replace the parts that are the bottleneck without having to do a forklift upgrade. That allows for smaller, incremental, affordable improvements until you have either morphed into a new system altogether over a period of time, or have gotten out of the current solution what’s possible and the time has arrived to replace it. Never would I invest in an expensive, long term, forklift, ultra scalable solution. Why not? Too expensive and as such too high risk. The risk is due to the fact I don’t have one of these:

http://trustbite.co.nz/wp-content/uploads/2010/01/Crystal-Ball.jpg

So storage vendors need to perform a delicate balancing act. It’s about price, value, technology evolution, rapid adoption, diversification, integration, assimilation & licensing models in a good enough is good enough world where the solution needs to deliver from day one.

I for one will be very interested to see whether all storage vendors can deliver enough value to retain the mid market or whether they’ll become top feeders only. The push to the cloud and the advancements in data replication & protection in the application and platform layers are shaking up the traditional storage world. Combine that with the fast pace at which SSD & flash storage are evolving, together with Windows Server 2012 having morphed into a very capable storage platform, and the landscape looks very volatile for the years to come. Think about ever more solutions at the application (Exchange, SQL Server) and platform layer (Hyper-V Replica), with orchestration on premises and/or in the cloud, and the pressure is really on.

So how do you choose a solution in this environment?

Whenever you are buying storage the following will happen. Vendors, resellers & sales people are going to start pulling at you. Now, some are way better than others at this; some are even downright good at this whole process and proceed very intelligently.

Sometimes it involves FUD, doom & gloom combined with predictions of data loss & corruption by what seem to be prophets of disaster. The good thing is that when you buy whatever they are selling that day, they can save you from all that. The thing is, this changes with the profit margin and kickbacks they are getting. Sometimes you can attribute this to the time limited value of technology: things evolve and today’s best is not tomorrow’s best. But some of them are chasing the proverbial $ so hard they portray themselves as untrustworthy fools.

That’s why I’m not too fond of the really big $ projects. Too much politics & sales. Sure, you can have people take care of that, but you are the only one there to look out for your own interests. To do that all you need to do is your own due diligence and be brave. Look, a lot of SAN resellers have never ever run a SAN, servers, Hyper-V clusters, virtualized SQL Server environments or VDI solutions in real live production environments for a sustained period of time. You have. You are the one whose needs it’s all about, as you will have to live and work with the solution for years to come. We did this exercise and it was worthwhile. We got the best value for money by looking out for our own interests.

Try this with a reseller or vendor. Ask them how their hardware VSS providers & snapshot software deal with the intricacies of CSV 2.0 in a Hyper-V cluster. Ask them how it works and tell them you need references to speak to who are running this in production. Also make sure you find your own references. You can, it’s a big world out there, and it’s a fun exercise to watch their reactions ;-)

As Aidan remarked in his blog on ODX – Not All SANs Are Created Equally:

These comparisons reaffirm what you should probably know: don’t trust the whitepapers, brochures, or sales-speak from a manufacturer.  Evidently not all features are created equally.

You really have to do your own due diligence. Some companies can afford the time, expense & personnel to have the shortlisted vendors deliver a system for them to test. Costs & effort rise fast if you need a setup that’s comparable to the production environment. You need to devise tests that mimic real life scenarios in storage capacity, IOPS and read/write patterns, and make sure you don’t have bottlenecks outside of the storage system in the lab.

Even for those that can, this is a hard thing to do. Some vendors also offer labs at their Tech Centers or Solution Centers where customers or potential customers can try out scenarios. No matter what options you have, you’ll realize that this takes a lot of effort. So what do I do? I always start early. You won’t have all the information, questions & answers available after a few hours of browsing the internet & reading some brochures. You’ll also notice that there’s always something else to deal with or do, so give yourself time, but don’t procrastinate.

I did visit the Tech Centers & Solution Centers in Europe of the shortlisted vendors. Next to that I did a lot of reading, asked questions and talked to a lot of people about their views and experiences with storage. Don’t just talk to the vendors or resellers. I talked a lot with people in my network, at conferences and in the community. I even tracked down owners of the shortlisted systems and asked to talk to them. All this was part of my litmus test of the offered storage solutions. While perfection is not of this world, there is a significant difference between vendors’ claims and the reality in the field. Our goal was to find the best solution for our needs based on price/value, and whose capabilities, usability & support excellence materialized with the biggest possible majority of customers in the field.

Friendly Advice To Vendors

So while the entire marketing and sales process is important for a vendor, I’d like to remind all of them of a simple fact. Delivering what you sell makes for very happy customers whose simple stories of their experiences with the products will sell it by word of mouth. Those people can afford to talk about the imperfections & some vNext wishes they have. That’s great, as those might be important to you, but you’ll be able to see if they are happy with their choice and they’ll tell you why.

Windows Server 2012 R2 Unmap, ODX On A Dell Compellent SAN Demo

UNMAP & ODX Video

Some things are easier to show using a video, so have a look at this video on UNMAP/ODX used with Windows Server 2012 R2 and a Compellent SAN:

You can also go directly to the Vimeo page by clicking on the screen shot below.

image

We start out with a 10.5TB thinly provisioned LUN that has about 203GB of space in use on the SAN. So while the LUN on the SAN is 10.5TB and Windows sees a volume that is 10.5TB, only the data effectively stored consumes storage space on the SAN. That ought to demonstrate the principle of thin provisioning adequately. The nice PowerShell counter is made possible via the Compellent PowerShell Command Set.
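To give an idea of what such a counter looks like, here is a minimal sketch that contrasts what Windows reports with what the SAN actually consumes. The Compellent cmdlet names and properties used here (Get-SCConnection, Get-SCVolume and the size properties) are assumptions based on the Storage Center Command Set and may differ per version; the connection details are hypothetical.

```powershell
# What Windows sees: the full size of the (thinly provisioned) volume.
Get-Volume -DriveLetter G |
    Select-Object DriveLetter, @{ n = 'SizeGB'; e = { [math]::Round($_.Size / 1GB, 1) } }

# What the SAN actually consumes for that LUN (cmdlet/property names assumed,
# check your version of the Compellent PowerShell Command Set).
$conn = Get-SCConnection -HostName 'san01.contoso.local' -User 'Admin'
Get-SCVolume -Connection $conn -Name 'CSV01' |
    Select-Object Name, ConfiguredSize, ActiveSpace
```

Polling the second command in a loop during a large copy is enough to watch the thinly provisioned LUN grow in near real time, as in the video.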

We then copy 42GB worth of ISO files inside a Windows Server 2012 virtual machine from a fixed VHD to a dynamically expanding VHDX. Those are nice speeds. And look at how the size of the VHDX file grows on the CSV volume and how the space used on the SAN is growing. That’s because the LUN is thinly provisioned.

Secondly we copy the same ISO files to a fixed size VHDX. Again, some really nice speeds. As the VHDX is fixed in size you do not see it grow. When looking at the little SAN counter, however, we do see that the thinly provisioned LUN is using more storage capacity.

Once that is done we see that the total space consumed on the SAN for that CSV LUN has risen to 284GB. We then delete the data from the dynamically expanding VHDX and are about to run the Optimize-Volume command when we notice that the SAN has already reclaimed the space. So we don’t run the optimize command. Keep that in mind. By the way, this process is done as part of standard maintenance (defrag) and an NTFS checkpointing mechanism that runs every 5 minutes and sends the info down from the virtual layer to the physical layer, the SAN. During demos it’s kind of boring to sit around and wait for it to happen. Just remember that in real life it’s a zero touch feature; you don’t need to babysit it.

We then also delete the ISO files from the fixed VHDX and run Optimize-Volume G –ReTrim, and as a result you see the space reclaimed on the SAN. As this is a fixed disk the size of the VHDX will not change. But what about the dynamically expanding VHDX? Well, you need to shut it down for that. But hey, nothing happens. So we fire it up again, run Optimize-Volume H –ReTrim before shutting it down again, and voila.
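The reclaim workflow from the demo boils down to a couple of commands. The drive letters match the video; the VM name is hypothetical, and the retrim for the dynamically expanding disk runs inside the guest:

```powershell
# Fixed size VHDX (drive G in the demo): after deleting data, explicitly send
# TRIM/UNMAP down the stack so the SAN can reclaim the thinly provisioned blocks.
Optimize-Volume -DriveLetter G -ReTrim -Verbose

# Dynamically expanding VHDX (drive H): the SAN side may already have been
# reclaimed by the periodic NTFS maintenance, but to let the VHDX file itself
# shrink, retrim inside the guest first, then shut the VM down.
Optimize-Volume -DriveLetter H -ReTrim -Verbose
Stop-VM -Name 'DemoVM'   # hypothetical VM name; shutdown compacts the VHDX
```

Note the order: a shutdown alone did nothing in the demo; the retrim inside the guest before shutting down is what made the dynamically expanding VHDX shrink.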

So what do you need for this?

Rest assured, you don’t need the most high end, most expensive, complex and proprietary SAN hardware to get this done. What you need is good software (firmware) on quality commodity hardware and you’re golden. If any SAN vendor wants to charge you a license fee for ODX/UNMAP, just throw them out. If they don’t even offer it, walk away and just use Storage Spaces. There are better alternatives than overpriced SANs lacking features.
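If you do go the Storage Spaces route, thin provisioning with TRIM/UNMAP support is built in. A minimal sketch, with hypothetical pool and disk names, looks like this:

```powershell
# Gather the local disks that are eligible for pooling.
$disks = Get-PhysicalDisk -CanPool $true

# Create a storage pool from those disks.
New-StoragePool -FriendlyName 'Pool01' `
    -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName `
    -PhysicalDisks $disks

# Carve out a thinly provisioned, mirrored virtual disk: 10TB is presented
# to Windows, but only written data consumes space in the pool.
New-VirtualDisk -StoragePoolFriendlyName 'Pool01' -FriendlyName 'ThinDisk01' `
    -ResiliencySettingName Mirror -ProvisioningType Thin -Size 10TB
```

Initialize, partition and format the resulting disk as usual; deleting data and retrimming the volume then returns the space to the pool, just like on a thin LUN.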

I’ve found that systems like EqualLogic & Compellent are in the sweet spot for 90% of their markets based on price versus capabilities and features. Let’s look at a Compellent for example. For all practical intents and purposes this SAN runs on commodity hardware: servers & disk bays, SAS to the storage and FC, iSCSI or SMB/NFS for access. With capable hardware the magic is in the software. Make no mistake about it, commodity hardware, when done right, is very, very capable. You don’t need special proprietary hardware & processors except for some specialized niche markets. And if you think you do, what about buying commodity hardware anyway at 50% of the cost, replacing it with the latest and greatest commodity hardware after 4 years, and still coming out on top cost wise, whilst beating the crap out of that now 4 year old ASIC and reaping the benefits of the new capabilities technology evolution offers? Things move fast and you can’t predict the future anyway.

Upgrading Your DELL Compellent Storage Center Firmware (Part 2)

This is Part 2 of this blog. You’ll find Part 1 over here.

In part 1 we prepared our Compellent SAN for the installation of Storage Center 6.3.10, which has gone public. As said, 6.3.10 brings interesting features like ODX and UNMAP to us Windows Server 2012 Hyper-V users. It also introduces some very nice improvements to synchronous replication and Live Volumes. Here we’ll do the actual upgrade; the preparations & health check were done in part 1, so we can get started right away.

Log in to your Compellent system and navigate to the Storage Management menu. Click on “System”, select “Update” and finally click on “Install Update”. The update is already there, as we downloaded it in Part 1. Click on “Install Now” to kick it all off.

image

Click on Install now to launch the upgrade.

image

After initialization you could walk away for 10 minutes, but you might want to keep an eye on the progress of the process.

image

So go have a look at your storage center. Look at the Alert Monitor for example and notice that the “System is undergoing maintenance”.

image

When the controller holding the VIP address of the SAN reboots, the VIP becomes unavailable. After a while you can log in again to the other controller via the VIP; if you can’t wait a few seconds, just use the IP address of the active controller. That will do.

image

When you log in again you’ll see the evidence of an ongoing SAN firmware upgrade. Nothing to panic about.

image

This is also evident in the Alert Monitor. CoPilot knows you’re doing the upgrade, so no unexpected calls to make sure your system is OK will come in; they’re there every step of the way. The cool thing is that this is the very first SAN we ever owned for which we don’t need engineers on site or a complex and expensive procedure to do all this. It’s all just part of the outstanding customer service Compellent & DELL deliver.

image

You can also take a peek at your Enterprise Manager software to see paths going down and so on: the artifacts of sequential controller failovers during an upgrade. Mind you, you’re not suffering downtime in most cases.

image

Just be patient and keep an eye on the process. When you log in again after the firmware upgrade and your system is up and running, you’ll be asked to rebalance the ports & IO load between the controllers on the system. You do, so click yes.

image

image

When done you’ll return to the Storage Center interface. Navigate to “Help” and click on “About Compellent Storage Center”.

image

You can see that both controllers are running 6.3.10.

image

You’re rocking the new firmware. As you kept an eye on your hosts, you should know these are good to go. Send off an e-mail to CoPilot support and they’ll run a complete health check on your system to make sure everything is fine. Now it’s time to start leveraging the new capabilities you just got.