Attending the Dell Tech Summit EMEA

As you read this I’m preparing to get on my way to the DELL Tech Summit in Lisbon, Portugal for a few days. I’ll be discussing the needs we, as customers, have from them (and from their competition, for that matter) when it comes to hardware in the Microsoft landscape in the era of Windows Server 2012.

image

I’m very happy and eager to tell them what, in my humble opinion, they are doing wrong, what they are doing right and even what they are not doing at all Smile. I believe in giving feedback to and interacting with vendors. Not that I have any illusions of self-importance as to the impact of my voice on the grand scheme of things, but if I don’t speak up nothing changes either. As Intel and Microsoft are there as well, this makes for a good selection of the partners involved. So here I go:

  1. More information on storage features, specifications and roadmaps
  2. Faster information on storage features, specifications and roadmaps
    • Some of these are in regard to Windows Server 2012 & System Center 2012 (Storage Pools & Spaces, SMI-S, ODX, UNMAP, RDMA/SMB 3.0 …) and some are more generic, like easier & better SAN/cluster failover capabilities, ease of use, the number of SCSI-3 persistent reservations, etc.
  3. How to address the IOPS lag in the technology evolution. Their views versus my ideas on how to tackle it until we get better solutions.
  4. Plans, if any, for Cluster In a Box (CiB) building blocks for Windows Server 2012 Private Cloud solutions.
  5. When does convergence make sense and when does it not, cost/benefit-wise (and at what level)? I’d like a bit more insight into what DELL’s vision is and how they’ll execute it. What will new storage options mean for that converged network, e.g. SMB 3.0, Multichannel & RDMA-capable NICs? Convergence always seems tied to one technology/protocol (VoIP in the past, FCoE at the moment) and it shouldn’t be; there are plenty of other needs for loads of bandwidth (Live Migration, Storage Live Migration, Shared Nothing Live Migration, CSV redirected mode, …), as the sketch after this list illustrates.
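
To put some rough numbers on that bandwidth hunger, here is a back-of-the-envelope sketch. The VM memory sizes, the efficiency factor and the link speeds are purely illustrative assumptions, not measurements:

```python
# Rough, illustrative estimate of how long copying a VM's memory takes for a
# live migration at different link speeds. All figures below are assumptions.

def migration_seconds(vm_memory_gb: float, link_gbps: float, efficiency: float = 0.8) -> float:
    """Time to copy a VM's memory once over the wire at a given link speed."""
    usable_gbps = link_gbps * efficiency      # protocol overhead, competing traffic
    memory_gigabits = vm_memory_gb * 8        # GB -> gigabits
    return memory_gigabits / usable_gbps

for vm_gb in (8, 32, 128):                    # hypothetical VM memory sizes
    for link_gbps in (1, 10, 20):             # 1GbE, 10GbE, dual 10GbE (SMB Multichannel)
        print(f"{vm_gb:>4} GB VM over {link_gbps:>2} Gbps: ~{migration_seconds(vm_gb, link_gbps):7.1f} s")
```

Even this crude math shows why a converged pipe sized around one protocol runs out of headroom fast once live migrations, storage traffic and CSV redirected mode all want a piece of it.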

Now while it’s important to listen to your customers, this is not easy if you want to do it right, far from it. For one, we’re all over the place as a group. This is always the case unless you cater to a specialized niche market. But DELL serves both consumers and enterprises, from one-person shops to Fortune 500 companies, in all fields of human endeavor. That makes for a nice cocktail of views and opinions, I suspect.

Even more important than listening is processing what you hear from your customers. Do you ignore it, react to it, or take it away as more or less valuable information? Information on which to act or not, to use in decision making, and perhaps even in executing those decisions. And let’s face it, without execution decisions are pretty academic exercises. In the end management is in control and, for all the feedback, advice and research that is gathered and done, they are at the steering wheel and they are responsible for the results.

One thing that I do know from my fellow MVPs and the community is that, over the past 12 months, any vendor addressing those questions with a good plan and clear communication would have become a top favorite during hardware selection at many customers for a lot of projects.

Disk to Disk Backup Solution with Windows Server 2012 – Part I

Backing Up 100-Plus Terabytes of Data Cheaply

When dealing with large amounts of data to back up you’re going to start bleeding money. Sure, people will try to sell you great solutions with deduplication, but in a lot of scenarios this is not a very cost-effective approach. Dedupe, in either backup hardware or software, is very expensive and in some scenarios the cost cannot be justified. It’s also not very portable, by the way, except in certain scenarios where you stick with certain vendors. Once you get into backing up > 100TB you need to forget about overly expensive hardware & software. Just build your own solutions. Now, depending on your needs, you might want to buy backup software anyway, but forget about dedupe licenses. Some of the more profitable hosting companies & cloud providers are not buying appliances or dedupe software either. They make really good money, but they’d rather spend it on SUVs and swimming pools.

What Can You Do?

You can build your own solution. Really. You can put together some building blocks that scale up and out. You’ll need a dual-socket server with two 8-core CPUs and 24GB of RAM, perhaps 32GB. Plug in some 6Gbps SAS controllers, hook those up to a bunch of 3.5” disk bays with 12 x 2TB or 3TB disks each and you’re good to go. You can scale out to about 8 disk bays if you don’t cluster. Plug in a dual-port 10Gbps card. You’ll need that, as you’ll be hammering that server. If you need more than this system, then scale out: put in a second, a third, etc. 3.5TB–4TB of backup capacity per hour in total should be achievable.
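
As a sanity check on that throughput and capacity claim, here’s a quick back-of-the-envelope calculation. The per-disk throughput, the efficiency factor and the disk sizes are assumptions for illustration, not benchmarks of any specific gear:

```python
# Back-of-the-envelope sizing for one DIY backup node.
# Every constant below is an assumption for illustration only.

DISKS_PER_BAY = 12
BAYS = 8                      # rough scale-out limit for a non-clustered node
DISK_TB = 3                   # 2TB or 3TB NL-SAS disks
DISK_MBPS = 100               # conservative large-block sequential rate per disk
NIC_GBPS = 2 * 10             # dual-port 10Gbps NIC
EFFICIENCY = 0.5              # parity/RAID overhead, concurrency, protocol overhead

raw_capacity_tb = BAYS * DISKS_PER_BAY * DISK_TB
disk_side_gbps = BAYS * DISKS_PER_BAY * DISK_MBPS * 8 / 1000
ingest_gbps = min(disk_side_gbps, NIC_GBPS) * EFFICIENCY
ingest_tb_per_hour = ingest_gbps / 8 * 3600 / 1000

print(f"Raw capacity per node : {raw_capacity_tb} TB")
print(f"Disk-side throughput  : {disk_side_gbps:.0f} Gbps (theoretical)")
print(f"Realistic ingest rate : ~{ingest_tb_per_hour:.1f} TB/hour")
```

With these assumed numbers the network, not the disks, is the bottleneck, and you land in the same 3.5TB–4TB per hour ballpark mentioned above.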

image

When you buy the components from Supermicro and some online retailers you can do this pretty cheaply. Spare parts, you say? Buy some cold spares. You can have a dozen disks on the shelf, a SAS controller and even a spare disk bay if you want. You could use hardware redundancy (RAID, hot spares) or use storage pools & spaces if you’re going the Windows Server 2012 route and save some extra money. Disk bay failure? Scale out so that even when you lose a node you still have three others up and running. Spread backups around. Don’t back up the same data only to the same node. I know it’s not perfect for deduplication with Windows Server 2012 that way but hey, you win some, you lose some. Checks & balances, right? If you need a bit more support get some DELL PowerVaults or the like. It depends on what you’re comfortable with and how deep your pockets are.

You can buy more storage than dedupe will ever save you & still come out with money left over to pay for the electricity. Okay, it’s less good for the penguins but trust me, the companies selling those solutions would fry a penguin for breakfast every day if it made them money. Now, talking about those penguins, the Windows Server 2012 deduplication feature could be providing me with the tools to save them Smile, but that’s for another post. I hope this works. I’d love to see it work. I bet some would hate to see it work. So much, perhaps, that they might even consider making their backup format non-dedupable Devil?
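
To illustrate that “just buy more disks” argument with some numbers, here’s a sketch. None of these prices or ratios come from a real quote; they are made-up assumptions you should replace with your own figures:

```python
# Illustrative comparison: dedupe licensing/appliance premium vs. buying raw disk.
# Every number here is a made-up assumption; plug in your own quotes.

backup_data_tb = 100
dedupe_ratio = 4                   # assumed, optimistic 4:1 reduction on backup data
dedupe_premium_eur = 60_000        # assumed extra cost of dedupe licenses/appliance
cost_per_raw_tb_eur = 120          # assumed cost of commodity NL-SAS capacity per TB

tb_saved = backup_data_tb - backup_data_tb / dedupe_ratio
cost_of_equivalent_disk = tb_saved * cost_per_raw_tb_eur

print(f"Capacity dedupe would save : {tb_saved:.0f} TB")
print(f"Cost of just buying that   : {cost_of_equivalent_disk:,.0f} EUR")
print(f"Assumed dedupe premium     : {dedupe_premium_eur:,.0f} EUR")
print("Raw disk wins" if cost_of_equivalent_disk < dedupe_premium_eur else "Dedupe wins")
```

Run it with your own quotes; the point is simply that at commodity disk prices the break-even for dedupe sits a lot higher than the sales pitch suggests.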

Tip for users: don’t use really cheap green SATA disks. They’re pretty environmentally friendly but the performance sucks. My view on “Green IT” is to right-size everything, never to oversubscribe, and to let that infrastructure work hard for you. This minimizes the hardware needed and the performance is way better than with all the power-saving settings and green hardware, which will ruin the environment anyway as you’ll end up buying more gear to compensate for the lack of performance, unless you just suffer the bad performance. Keep the green disks for the home user’s picture, movie & music collection and use 2TB/3TB SAS/NL-SAS. Remember that when you don’t cluster (shared storage) you can make do nicely without the enterprise NL-SAS disks.

Now I’m not saying you should do what I suggest here, but you might find it useful to test this at your own scale for your own purposes. I did it for the money. For the money? Yup, for the money. No, not for me personally; I don’t have a swimming pool and I don’t even own a car, let alone an SUV. But saving your company 100,000 or more in cash isn’t going to get you into trouble now, is it? Or perhaps this is the only way you’re going to afford to back up that volume of data. People don’t throw away data and they don’t care about budgets, but you’d better be able to restore their data. Which reminds me, you will also need a backup software solution that doesn’t cost an arm and a leg. That’s also a challenge, as you need one that can handle large amounts of data and has some intelligence when it comes to virtualization, snapshots, etc. It also has to be easy to use, as simple as possible, as this helps ensure backups are made and are valid.

Are we trying to replace appliances or other solutions? No, we’re trying to provide lots of cheap and “fast enough” storage. Reading the data & delivering it to the backup device can be an issue as well. Why fast enough? Pure speed on the target side is not useful if the sources can’t deliver. We need this backup space for when the shit really hits the fan and all else has failed. That doesn’t have to be a SAN crash or a SAN firmware issue ruining all your nice snapshots. It can also be the business detecting a mistake in a large data set a mere 14 months after the fact, when all replicas, snapshots, etc. have already expired. I’m sure you’ve got quality assurance that is so rock solid that this would never happen to you but hey, welcome to my world Sarcastic smile.

Migrating LUNs to your Compellent SAN

A Hidden Gem in Compellent

As you might well know, I’m in the process of doing a multi-site SAN replacement project to modernize the infrastructure at an undisclosed organization. The purpose is to have a modern, feature-rich, reliable and affordable storage solution that can provide the Windows Server 2012 rollout with modern features (ODX, SMI-S, …).

One of the nifty things you can do with a Compellent SAN is migrate LUNs from the old SAN to the Compellent SAN with absolutely minimal downtime. For us this has proven a really good way of migrating away from 2 HP EVA 8000 SANs to our new DELL Compellent environment. We use it to migrate file servers, Exchange 2010 DAG member servers (zero downtime), Hyper-V clusters, SQL Servers, etc. It’s nothing less than a hidden gem not enough people are aware of, and it comes with the SAN. I was told by some that it was hard & not worth the effort … well, clearly they never used it and as such don’t know it. Or they work for competitors and want to keep this hidden Winking smile.

The Process

You have to set up the zoning on all SANs involved to all fabrics. This needs to be done right of course, but I won’t be discussing that here. I want to focus on the process itself and what you can do. This is not a comprehensive how-to. It depends on your environment and I can’t write you a migration manual without digging into that. And I can’t do that for free anyway; I need to eat & pay bills as well Winking smile.

Basically you add your target Compellent SAN as a host to your legacy SAN (in our case HP EVA 8000) with an operating system type of “Unknown”. This will provide us with a path to expose EVA LUNs to our Compellent SAN.

image

Depending on what server LUNs you are migrating, this is when you might have some short downtime for that LUN. If you have shared-nothing storage, like an Exchange 2010 DAG or a SQL Server 2012 AlwaysOn Availability Group, you can do this without any downtime at all.

Stop any IO to the LUN if you can (suspend copies, shut down databases and virtual machines) and take CSVs or disks offline. Do whatever is needed to prevent any application or data issues; this varies.

What we then do is unpresent the server’s LUN on the legacy SAN.

image

After a rescan of the disks on the server you’ll see that disk/LUN disappear.

We then present that same LUN to the Compellent host we added above.

image

 

We then “Scan for Disks” in the Compellent Controller GUI. This will detect the LUN as an unassigned disk. That unassigned disk can be mapped to an “External Device” which we name after the LUN to keep things clear (“Classify Disk as External Device” in the picture below).

image

 

Then we right click that External Device and choose to “Restore Volume from External Device”.

image

This kicks off replication from the mapped EVA LUN to the Compellent target LUN. We can now map that replica to the host, as you can see in this picture.

image

After this, rescan the disks on the server and voilà, the server sees the LUN again. Bring the disk/CSV back online and you’re good to go.

image

All the downtime you’ll have is at a well-defined moment in time that you choose. You can do this one LUN at a time or multiple LUNs at once. Just don’t overdo it with the number of concurrent migrations. Keep an eye on the CPU usage of your controllers.
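
To get a feel for how long a single LUN replication will run, and thus how many you can sensibly overlap, a rough estimate like the one below helps. The effective copy rate is a pure assumption; your controllers, fabric and source SAN determine the real number:

```python
# Rough planning estimate for how long a LUN replication runs.
# The sustained copy rate is an assumption; measure your own environment.

def replication_hours(lun_size_gb: float, effective_mb_per_s: float = 200) -> float:
    """Hours to copy a LUN once at a given sustained rate."""
    return lun_size_gb * 1024 / effective_mb_per_s / 3600

for lun_gb in (500, 2048, 8192):      # hypothetical LUN sizes
    print(f"{lun_gb:>5} GB LUN: ~{replication_hours(lun_gb):4.1f} h at 200 MB/s sustained")
```

If several large LUNs would each take many hours, stagger them rather than kicking them all off at once; the controllers and the old SAN have production IO to serve too.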

After the replication has completed the Compellent SAN will transparently map the destination LUN to the server and remove the mapping for the replica.

image

 

The next step is that the mirror is reversed. That means that, while this replica exists, the data written to the Compellent LUN is also mirrored to the old SAN LUN until you break the mirror.

image

 

Once you decide you’re done replicating and don’t want to keep both LUNs in sync anymore, you break the mirror.

image

 

You delete the remaining replica disk and you release the external disk.

image

 

Now you unpresent the LUN from the Compellent host on your old SAN.

image

 

After a rescan, your disks will show as down under unassigned disks and you can delete them there. This completes the cleanup after a LUN migration.

image

 

Conclusion

When set up properly it works very well. Sure, it takes some experimenting to deal with some intricacies, but once you figure all that out you’re good to go and ready to deal with any hiccups that might occur. The main takeaway is that this provides for minimal downtime, at a moment that you choose. You get this out of the box with your Compellent. That’s a pretty good deal, I say!

So as you can see this particular environment will be ready for Windows Server 2012 & Hyper-V. Life is good!

Windows Server 2012 Cluster in a Box as a New Form Factor?

Let’s look at “Cluster in a Box” (CiB) as a building block or a form factor. Let’s say you’ve committed to building a private/hybrid cloud for your organization, but you’re at the end of your hardware life cycle or you just don’t have the capacity right now to build it. What options do you have? Do you want to acquire storage, data connectivity, network gear, servers, NICs, etc., or will you just buy CiB blocks to scale out as you go? Perhaps you’ll buy a Hyper-V Fast Track solution or, if you’re really big, one or multiple containers.

I do think that the modular principle throughout the data center is pretty cool. The industry has done a great job at this with servers and smaller components, as well as with the modular containers from SUN, HP and DELL.

clip_image002

While I do like and admire the concept of the “shipping container form factor”, I find it a couple of sizes too large to be practical for most of us. After all, let’s face it, we’re not all building public cloud service data centers. This means that between what we have seen today with server & storage modularity and the container form factor we’ve got a void. While some of these voids have been filled for specific applications like Exchange 2010, through custom-built solutions by some vendors, you cannot call this modular. It is a very application-specific solution. The other, more generic, solution that has existed for a while now is the hardware that vendors deliver with the Hyper-V Fast Track we’ve mentioned already. While these are nice, pre-configured solutions, they are, again, not very modular. It’s not a complete unit that just needs to be hooked up to the network and provisioned with power. The time is ripe, with the current state of Microsoft Windows Server 2012, to fill that void using the “Cluster in a Box” form factor. That would mean that in the future we could get the same benefits as the big players, but at a size that fits our purposes in the smaller data centers. This opens up a lot of scenarios for better efficiency.

What if the entire unit shipped to a customer contained everything packed away internally? That is, servers, networking and storage. You just have to mount it in a rack and connect it to redundant power outlets and redundant network paths. That’s it. Just power it up, fill out the wizard and be done with it. That’s all it takes to have a functional Hyper-V, Scale-Out File Server or SQL Server cluster, etc. With the capabilities delivered by Windows Server 2012 this could very well be a scenario that evolves. It’s more than just a business-in-a-box or a branch-office-in-a-box. It can also be more than the Scale-Out File Server unit for a private cloud solution. It just might be the first step of a new form factor building block for medium to even some large enterprises. If the economics are too good to be ignored, I think this might happen.

clip_image004

The reason I think that this concept will work is that we have virtual machine mobility now, so we no longer need to fear the isolation that silos might create. As a matter of fact, this is a key element that might drive this. For the applications that are less suited to virtualization today we see two solutions. One is the scalability of the Hyper-V platform with Windows Server 2012 and the other is the fact that the shared-nothing approach is gaining popularity. It started with Exchange 2010 but is now also available with SQL Server 2012.

These clusters in a box can be made with existing servers (blades or not), storage and switches, but I think there will also be new designs that are purpose-built and not just existing hardware in a “rackable” box as in my drawings below Smile. Those boxes might have some scale-up capability or come in different sizes.

image

But scale-out is the way that would make this work in the bigger environments, whatever the size of the Cluster in a Box.

image