First Windows Server 2012 Cluster/Hyper-V related Patches

With November 2012 Patch Tuesday having come and gone, the first hotfixes (well, a cumulative update) related to Windows Server 2012 are available. These are relevant to both Hyper-V & Failover Clustering (Scale-Out File Server). There is also an older hotfix that has been brought to our attention that relates to certain versions of Windows Server 2008/R2 domain controllers, which is also important for Windows Server 2012 clustering. None of these are urgent/critical and they only apply in specific circumstances, but it’s good to keep up with these and protect your environment.

Windows 8 and Windows Server 2012 cumulative update: November 2012

http://support.microsoft.com/kb/2770917: A collection of small changes. For HA VMs (Hyper-V on a cluster) there are three minor CSV file system fixes in this hotfix: it improves clustered server performance and reliability in Hyper-V and Scale-Out File Server scenarios, and it improves SMB service and client reliability under certain stress conditions.

Error code when the kpasswd protocol fails after you perform an authoritative restore: “KDC_ERROR_S_PRINCIPAL_UNKNOWN”

http://support.microsoft.com/kb/976424: Install this on every domain controller running Windows Server 2008 with Service Pack 2 or Windows Server 2008 R2 in order to add a Windows Server 2012 failover cluster. This fix is included in Windows Server 2008 R2 Service Pack 1, so just check whether you need it in your environment or not.
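A quick way to check whether a given domain controller already has this fix (or is on SP1, which includes it); the computer name below is hypothetical:

    # Returns the hotfix if present, errors if it is not installed
    Get-HotFix -Id KB976424 -ComputerName DC01

    # Check the OS version and service pack level while you're at it
    Get-WmiObject Win32_OperatingSystem -ComputerName DC01 |
        Select-Object Caption, ServicePackMajorVersion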

I’m happy to see Microsoft acting fast on these issues, even if they’re not critical, to serve & protect their customers’ deployments.

Windows Server 2012 Deduplication Results In A Small Environment

There is a small environment that provides web presence and services. In total there are about 20 production virtual machines. These are all backed up to a Transparent Failover file share on a Windows Server 2012 cluster that is used to host all the infrastructure and management services.

The LUN/volume for the backups provides about 5.5TB of storage. The folder layout is shown in the screenshot below. The backups are run “in guest” using native Windows Backup, which has the WindowsImageBackup subfolder as its target. Those backups are then archived to an “Archives” folder. That archive folder is the one that gets deduplicated, as the WindowsImageBackup folder is excluded.

[Screenshot: folder layout of the backup volume, with the WindowsImageBackup target and the Archives folder]
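For context, the in-guest backup and the archiving step look roughly like this; the share and folder names are made up, not our actual layout:

    # In-guest backup with native Windows Backup to the file share
    wbadmin start backup -backupTarget:\\filecluster\backups$\VM01 -allCritical -quiet

    # Archive the resulting WindowsImageBackup folder with a date stamp
    $stamp = Get-Date -Format yyyyMMdd
    robocopy "\\filecluster\backups$\VM01\WindowsImageBackup" "\\filecluster\backups$\Archives\VM01\$stamp" /E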

This means that the most recent version is not deduplicated, guaranteeing the fastest possible restore times at the cost of some disk space. All older (> 1 day) backup files are deduplicated (a configuration sketch follows the list below). We achieve the following with this approach:

  • It provides us with enough disk space savings to keep archived backups around for longer in case we need them.
  • It also provides enough storage to back up more virtual machines while still being able to maintain a satisfactory number of archived backups.
  • Any combination of the above two benefits can be balanced against the business needs.
  • It’s a free, zero-cost solution.
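Here is a minimal sketch of how such a setup is configured, assuming E: is the backup volume and using made-up folder names: enable deduplication, exclude the folder that holds the most recent backups and only optimize files older than one day.

    # The Data Deduplication role service must be installed first
    Add-WindowsFeature FS-Data-Deduplication

    # Enable deduplication on the backup volume
    Enable-DedupVolume -Volume E:

    # Exclude the most recent backups, only dedupe files older than 1 day
    Set-DedupVolume -Volume E: -ExcludeFolder "E:\Backups\WindowsImageBackup" -MinimumFileAgeDays 1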

The Results

About 20 virtual machines are backed up every week (small delta and lots of stateless applications). As the optimization runs we see the savings grow. That’s perfectly logical. The more backups we make of virtual machines with a small delta, the more deduplication can shine. So let’s look at the results using Get-DedupStatus | fl

[Screenshot: Get-DedupStatus | fl output after the first optimization runs]

A couple of weeks later it looks like this.

[Screenshot: Get-DedupStatus | fl output a couple of weeks later]

Give it some more months, with more retained backups, and I think we’ll keep this around 88%-90%. From tests we had done (ddpeval.exe) we thought we’d max out at around an 80% savings rate, but it’s a bit less here overall because we excluded the most recent backups. Guess what, that’s good enough for us ;-). It beats buying extra storage or paying a wad of money for disk deduplication licenses from some backup vendor or appliance. Just using the built-in deduplication mechanisms in Windows Server 2012 saved us a bunch of money.
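For reference, the evaluation and follow-up described above boil down to a few commands along these lines (drive letter and path assumed):

    # Estimate potential savings on existing data before enabling anything
    DDPEval.exe E:\Backups\Archives

    # Once enabled, check the actual savings rate
    Get-DedupStatus | Format-List

    # Kick off an optimization job manually instead of waiting for the schedule
    Start-DedupJob -Volume E: -Type Optimization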

The next step is to also convert the production Hyper-V cluster to Windows Server 2012 so we can do host-based backups with the native Windows Backup, which now supports Cluster Shared Volumes (another place where that 64TB VHDX size can come in handy, as Windows Backup now writes to VHDX).
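As an aside, creating such a large VHDX backup target is a one-liner with the Hyper-V module; the path and size here are purely illustrative:

    # VHDX supports up to 64TB; create a dynamically expanding one as a backup target
    New-VHD -Path "E:\Backups\BackupTarget.vhdx" -SizeBytes 64TB -Dynamic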

Some interesting screenshots

[Screenshot: volume properties]

The volume properties report we’re using about 3TB of data, so 2.4TB is free.

[Screenshot: backup folder properties]

Looking at the backup folder, you see 10.9TB of data stored on 1.99TB of disk.

So the properties of the volume report more disk space used than the actual folder containing the data. Let’s use WinDirStat to have a look.

[Screenshot: WinDirStat overview of the volume]

So the above agrees with the volume properties. In the details of this volume we again see about 2TB of consumed space.

[Screenshot: WinDirStat volume details showing about 2TB consumed]

Could it be that the volume is reserving some space to ensure proper functioning?
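My hunch, and this is an assumption rather than something verified in this environment: the difference is most likely the deduplication chunk store, which lives under System Volume Information and counts against the volume’s used space while not showing up in the backup folder’s properties. A quick way to peek at it (volume letter assumed):

    # Reports deduplication chunk store details and other metadata for the volume
    Get-DedupMetadata -Volume E: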

When you dive deeper you get some cool views of the storage space used. Where Windows Explorer is aware of deduplication and shows the non-deduplicated size for the VHD file, WinDirStat is not: it always shows the size on disk, which is a whole lot less.

[Screenshot: WinDirStat showing the size on disk of a deduplicated VHD file]

This is the same as when you ask for the properties of a file in Windows Explorer.

[Screenshot: Windows Explorer file properties showing size versus size on disk]

Discussion

Is it the best solution for everyone? Not always, no. The deduplication is done on the target after the data is copied there. So in environments where bandwidth is seriously constrained and there is absolutely no technical and/or economical way to provide the needed throughput, this might not be a viable solution. But don’t dismiss this option too fast. In a lot of scenarios it is a very good and cost-effective feature. Technically & functionally it might be wiser to do it on the target, as you don’t consume too much memory (deduplication is a memory hog) and CPU cycles on the source hosts. Also nice is that these deduplicated files are portable across systems. VEEAM has demonstrated some nice examples of combining their deduplication with Windows dedupe, by the way. So this might also be an interesting scenario.

Financially, the cost of deduplication functionality with hardware appliances or backup software hurts like the kick of a horse straight to the head. So even if you have to invest a little in bandwidth and cabling, you might be a lot better off. Perhaps, as you’re replacing older switches with new 1Gbps or 10Gbps gear, you might be able to recuperate the old ones as dedicated backup switches. We’re using mostly recuperated switch ports and native Windows NIC teaming, and it works brilliantly. I’ve said this before: saving money whilst improving operations rarely gets you fired. The sweet thing is that this is achieved by building good & reliable solutions, which means they are efficient even if they cost some money to achieve. Some managers focus way too much on efficiency from the start, as to them it means nothing more than a euphemism for saving every € possible. Penny wise and pound foolish. Bad move. Efficiency, unless it is the goal itself, is a side effect of a well designed and optimized solution. A very nice and welcome one for that matter, but it’s not the be all and end all of a solution or you’ll have the wrong outcome.

The Microsoft Management Summit 2013

MMS 2013 is in Las Vegas, Nevada, USA

Time flies fast and it’s time to look ahead to 2013. My continuing investment in myself is part of that. Despite a lot of rumors about big changes to MMS (its future, location, timing, etc.), things will go forward as they have in past years. That includes the location. As you probably already heard, it’s back in Las Vegas, Nevada, USA. So after the, for many people, somewhat disconcerting announcement at MMS 2012 indicating the above mentioned changes, MMS 2013 will once again be held in Las Vegas. As before, it will be focused on the entire System Center suite. That was confirmed by a mail from the MMS conference team recently and a TechNet blog post.

[Image: MMS 2013 announcement]

Recently it was announced that the MMS 2013 content survey is now open. So they’re planning the Microsoft Management Summit 2013 content and they’d like to hear from us. Why? Well, the better they align the content of the conference to our needs, the better the experience will be. This means our return on investment will be bigger, which is always a good thing. So if you’re going or thinking of going, this is the place, the MMS 2013 Content Survey, to voice your opinion on what it should look like content wise. You have two more weeks to fill it out and then it’s scheduled to close down.

Why Attend?

It’s great to have an event focused on managing, deploying and protecting the infrastructure we’ve spent so much time, effort and money building. This conference is dedicated to exactly that. Smaller in scale but very focused. Everyone together in the same hotel/conference center for 5 long days, living in System Center and nothing else. As the world’s top operators in this space are there, the networking opportunities are also excellent. I can still remember the amount of talking and discussing I did with my colleagues in 2012; that was stimulating.

It’s also the place to provide feedback to Microsoft about System Center. Things you like, don’t like, things that are missing etc. I most certainly have some feedback for them.

Will I attend?

I’ll most certainly try to attend, that’s for sure. So it’s time to fill out the request form and start cutting through the red tape. Let’s hope the economy doesn’t tank completely and that we can go. The chips might be down right now but let’s not cost cut ourselves out of skills, education, opportunities and a future. Remember, keep moving forward and don’t quit yet, you can always give up later ;-).

Hyper-V Guest Storage Performance: Above & Beyond 1 Million IOPS

Making a million IOPS possible in a Windows Server 2012 Hyper-V VM

A lot of you will have seen the demos of a Hyper-V guest with VHDX disks running on Windows Server 2012 doing a million IOPS; if you haven’t yet, take a look here. While some quickly dismissed this as “irrelevant boasting” without real-life relevance, I respectfully disagree. This is smart future proofing by Microsoft and provides a hypervisor ready for future hardware capabilities and capable of handling the most demanding workloads today & in the years to come. Sure, such a demo is under lab/ideal conditions and does not reflect the majority of real-life environments, but it’s nice to see what a hypervisor is capable of if and when you might need it. Remember, there was a day that 4GB was a lot of RAM and 2TB sounded gigantic. Also remember that some people have larger needs than others. Until Windows Server 2008 R2 you had some limitations in storage IO performance that would not allow for a million IOPS. These had to be addressed or all the efforts with regards to capabilities and performance in storage, CPU, networking and memory would just hit those particular bottlenecks. So it is addressing real needs and indeed also smart future proofing.

Capabilities of virtual machine storage IO throughput in Windows 2008 R2

The capabilities listed below dictate the IO capabilities in virtual machines running on Windows Server 2008 R2 Hyper-V:

  1. IO was limited to one channel per virtual SCSI controller.
  2. There was a 256 queue depth per virtual SCSI adapter, shared by all devices attached to that adapter.
  3. One fixed vCPU (0) was dedicated to handling IO.

[Diagram: two virtual SCSI controllers in Windows Server 2008 R2 Hyper-V, each with two VHDs sharing a single IO channel]

The picture above illustrates these limits. You see two virtual SCSI controllers, each with two VHD virtual disks attached. Each disk shares the single channel of the controller it is attached to.

These limits could become a bottleneck, but that was never too big of a problem with a maximum of 4 vCPUs in Windows 2008 R2 Hyper-V. If needed, we attached VHDs to different virtual SCSI controllers for the best possible performance in Windows Server 2008 R2 Hyper-V.
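For illustration, spreading disks across virtual SCSI controllers looks like this with the Hyper-V PowerShell module that ships with Windows Server 2012 (on 2008 R2 itself this was done through the GUI or WMI); the VM name and path are made up:

    # Add a second virtual SCSI controller to the VM
    Add-VMScsiController -VMName "SQLVM01"

    # Attach a data disk to the new controller (controller numbers start at 0)
    Add-VMHardDiskDrive -VMName "SQLVM01" -ControllerType SCSI -ControllerNumber 1 -Path "C:\VMs\SQLVM01\Data2.vhdx"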

With 64 vCPUs and ever more demanding workloads these limitations would become a (serious) issue, so this needed to be addressed. If not, despite all other efforts in regard to the four big resources (compute, memory, storage and network) in Windows Server 2012, this would remain the limiting factor for IOPS inside a virtual machine on Windows Server 2012.

Windows Server 2012 improvements to virtual machine storage IO scaling

[Diagram: Windows Server 2012 Hyper-V IO scaling, with multiple channels per virtual SCSI device and interrupt handling distributed across vCPUs]

The picture above illustrates the improvements in Windows Server 2012 Hyper-V IO Scaling:

  1. There is now 1 channel per 16 vCPUs, per virtual SCSI device, per controller. That means you have 4 channels per VHDX attached to a virtual SCSI controller when the virtual machine has 64 vCPUs. Compared to before this is a significant improvement, and a much needed one now that there is a 64 vCPU capability (see the sketch after this list).
  2. IO interrupt handling is now distributed amongst all vCPUs, and this process is NUMA aware. This is a huge improvement!
  3. There is now a 256 queue depth per device attached to a specific virtual SCSI adapter. That’s another big improvement.
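As a quick back-of-the-envelope sketch of point 1, assuming the channel count simply scales in steps of 16 vCPUs with a minimum of one channel (my reading of the rule, not an official formula):

    # Channels per virtual SCSI device under the 1-channel-per-16-vCPUs rule
    $vCPUs = 64
    $channels = [Math]::Max(1, [Math]::Floor($vCPUs / 16))
    "$vCPUs vCPUs gives $channels channel(s) per device"   # 64 vCPUs gives 4 channels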

That, people, is how you get a virtual machine to handle a million IOPS. Nice! The questions or doubts about whether Hyper-V can deliver the capacity, throughput & performance have been wiped off the table, yes, also for virtual storage IOPS. You can now go straight to how it will address your business needs. From my experience it does so brilliantly and very cost effectively. Life might not be perfect but it is very good :-)