Logging Cluster Aware Updating Hotfix Plug-in Installations To A File Share

As an early adopter of Windows Server 2012 it’s not about being the first, it’s about using the great new features. When you leverage the Cluster Aware Updating (CAU) Plug-in to deploy hardware vendor updates, like those from DELL which are called DUPs (Dell Update Packages), you have the option to log the process via the /L parameter.

This is what that looks like in the config XML file for CAU (I’ll address this XML file in more detail later):

<Folder name="Optiplex980DUPS" alwaysReboot="false">
    <Template path="$update$" parameters="/S /L=\\zulu\CAULogging\CAULog.log"/>
</Folder>

As you can see I use a file share, as I don’t want to log locally; that would mean I’d have to collect the logs on all the nodes of a cluster. Now, if you log to a file share, you need to do two things, which we’ll discuss below.

1. Set up a share where you can write the log or logs to

Please note that you cannot and should not use the CAU file share for this. First of all, only a few accounts are allowed to have write permissions to the CAU file share. This is documented in How CAU Plug-ins Work:

Only certain security principals are permitted (but are not required) to have Write or Modify permission. The allowed principals are the local Administrators group, SYSTEM, CREATOR OWNER, and TrustedInstaller. Other accounts or groups are not permitted to have Write or Modify permission on the hotfix root folder.

This makes sense. SMB Signing and Encryption are used to protect against tampering with the files in transit and to make sure you talk to the one and only real CAU file share. To protect the actual content of that share you need to make sure no one but some trusted accounts and a select group of trusted administrators can add installers to the share. If not, you might be installing malicious content on your cluster nodes without ever realizing it. Perhaps some auditing on that folder structure might be a good idea?
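If you want to go that route, here’s a minimal sketch of adding an audit (SACL) entry on such a folder with PowerShell. The path is a made-up example, and you’d still need to enable object access auditing via policy for the entries to actually land in the Security event log:

    # Sketch: audit writes/deletes on the hotfix root (path is an example).
    $path = "D:\CAUHotfixRoot"
    $acl = Get-Acl -Path $path -Audit
    $rule = New-Object System.Security.AccessControl.FileSystemAuditRule(
        "Everyone", "Write,Delete", "ContainerInherit,ObjectInherit", "None", "Success,Failure")
    $acl.AddAuditRule($rule)
    Set-Acl -Path $path -AclObject $acl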

This means that you need a separate file share so you can add Modify, or at least Write, permissions for the necessary accounts on the folder. Which brings us to the second thing you need to do.

2. Set up Write or Modify permissions on the log share

You’ll need to set up Write or Modify permissions on the log share for all the cluster node computer accounts. To make this more practical with larger clusters, you can add the computer accounts to an AD group, which makes for easier administration.
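Purely as a sketch, this is roughly what that looks like in PowerShell. The group, domain, node, share and path names (CAULogWriters, MYDOMAIN, NODE1/NODE2, D:\CAULogging) are made up for this example:

    # Sketch: group the cluster node computer accounts for easy administration.
    New-ADGroup -Name "CAULogWriters" -GroupScope Global -GroupCategory Security
    Add-ADGroupMember -Identity "CAULogWriters" -Members 'NODE1$', 'NODE2$'

    # Create the log share and grant the group Change access on the SMB side ...
    New-SmbShare -Name "CAULogging" -Path "D:\CAULogging" -ChangeAccess "MYDOMAIN\CAULogWriters"

    # ... and Modify on the NTFS side.
    $acl = Get-Acl -Path "D:\CAULogging"
    $rule = New-Object System.Security.AccessControl.FileSystemAccessRule(
        "MYDOMAIN\CAULogWriters", "Modify", "ContainerInherit,ObjectInherit", "None", "Allow")
    $acl.AddAccessRule($rule)
    Set-Acl -Path "D:\CAULogging" -AclObject $acl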

The two cluster nodes have permissions to write to the location.

As you can see, the first node to create the log file becomes the owner.

Some extra tips

The log can grow quite large if used a lot. Keep an eye on it to avoid space issues and so it doesn’t get too big to handle and remain useful. For clarity’s sake you might want a different log per cluster or even per folder type; you can customize this to your needs.
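For example, separate logs per folder could look something like this in the configuration XML (the folder names, reboot flags and log file names are just illustrative):

    <Folder name="Optiplex980DUPS" alwaysReboot="false">
        <Template path="$update$" parameters="/S /L=\\zulu\CAULogging\Optiplex980.log"/>
    </Folder>
    <Folder name="R710BIOS" alwaysReboot="true">
        <Template path="$update$" parameters="/S /L=\\zulu\CAULogging\R710BIOS.log"/>
    </Folder>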

Join Us For The VKernel (Dell) VIRTu Alley Online Symposium

vKernel (now part of DELL) is kicking off 2013 with the VIRTu Alley Online Symposium. It runs over two days and focuses on virtualization and cloud management.

When

VIRTu Alley takes place on Tuesday, January 15th from 10:00 – 13:00 EST and on Wednesday, January 16th from 10:00 – 13:30 EST. Don’t forget it’s EST, meaning you need to add 5 hours if you’re in GMT (Dublin) or 6 hours if you’re in GMT+1 (Brussels).

Where

From the comfort of your office & home.

Agenda

Go to the VIRTu Alley web page and have a look at the agenda for both days. You’ll see they’ve managed to line up some great speakers, some of whom I know personally. Aidan Finn and Damian Flynn (MVPs and published authors) are both presenting on day 2, just like myself: Aidan on what’s new & improved in Windows Server 2012 Hyper-V and Damian on network virtualization. I will be speaking on Advanced Hyper-V Maintenance with Cluster Aware Updating.

It’s Free

The event itself is free, but you do need to register for each day on which you want to attend sessions.

See you there!

Quest Technical Experts Conference 2012 Europe Podcast

As the readers of my blog know, I was in Barcelona this week to attend the Technical Experts Conference Europe 2012 organized by Quest (now a part of DELL). Together with my fellow MVPs, friends and colleagues Aidan Finn (@joe_elway), Carsten Rachfahl (@hypervserver) and Hans Vredevoort (@hvredevoort) we presented 8 sessions in the Virtualization Track and did a Hyper-V Experts panel to share what we have learned and to answer attendees’ questions on the new capabilities of Hyper-V in Windows Server 2012. It was both fun and interesting to do. We learned some more from each other and also from the questions of an alert audience whom we enjoyed presenting to.

Mattias Sundling, a technical evangelist at Quest and the owner of the Virtualization Track at TEC 2012 Europe, did an audio podcast with all 4 of us MVPs in the “Virtual Machine” expertise presenting in that track. You can find that podcast here on the vKernel/DELL web site http://www.vkernel.com/podreader/items/tec-europe-2012-mvps-with-mattias-sundling or on YouTube. Enjoy.

Disk to Disk Backup Solution with Windows Server 2012 & Commodity DELL Hardware – Part II

As I blogged in a previous post, we’ve been building a Disk2Disk based backup solution with commodity hardware, as all the appliances on the market are either too small in capacity for our needs, ridiculously expensive, or sometimes just suck, or a combination of the above (Virtual Library Systems or Virtual Tape Libraries come to mind: one of my biggest technology mistakes ever, at least the ones I had, in my humble opinion).

We are using just two backup media agent building blocks (server + storage) in our setup for now, so we can scale out later.

In a future post I hope to discuss storage spaces & Windows deduplication thrown into the mix.

So what do we get?

Not too shabby … > 1TB/hour

Too great …

In this close-up you are seeing just 2 Windows Server 2012 Hyper-V cluster nodes, each being backed up over a native LBFO team of 2*1Gbps NIC ports, to one Windows Server 2012 backup media agent with a 10Gbps pipe. Look at the maximum throughput we got …

Sure, this is under optimal conditions, but guess what? When doing backups from multiple hosts to dual backup media servers or more, we’re getting very fast backups at very low cost compared to some other solutions out there. This is our backup beast. More bandwidth needed at the backup media server? It has dual port 10Gbps that can be teamed and/or leverage SMB 3.0 Multichannel. High volume hosts can use 10Gbps at the source as well.
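As a sketch, creating such a team and checking that SMB 3.0 Multichannel actually kicks in looks roughly like this (the team and adapter names are examples):

    # Sketch: team two 1Gbps ports on a Hyper-V node (names are examples).
    New-NetLbfoTeam -Name "BackupTeam" -TeamMembers "NIC3", "NIC4" `
        -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts

    # Verify RSS is enabled and Multichannel is being used during a backup.
    Get-NetAdapterRss | Format-Table Name, Enabled
    Get-SmbMultichannelConnection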

Lessons learned

  • The Windows Server 2012 networking improvements rock. Upgrade and benefit from them! We’re seeing great results thanks to Multichannel leveraging RSS and in-box NIC teaming (LBFO).
  • A couple of 1Gbps NICs teamed on Windows Server 2012 work really well. Don’t worry about not having 10Gbps on all your hosts.
  • Having 10Gbps on your backup media hosts (target) is great as you’ll be pushing a lot of data to them from multiple (source) hosts.
  • Make sure your backup software supports enough streams before it keels over under the load you’re pushing through. More streams means more concurrent files (read: VHDs/VMs) and thus more throughput, and it allows Multichannel to shine over RSS capable NICs.
  • Find the sweet spot for the number of disks per node and total IOPS versus the throughput you can send to the backup media agents. 4 nodes of 50TB might be better than 2 nodes of 100TB. If you can, experiment a bit to find your optimal backup block size.
  • Isolate your backup network traffic from data traffic, either physically or by other means (QoS), and don’t route it all over the place to end up where it needs to be.
  • We’re doing this using Dell PowerConnect 5424 (end of life) / 5524 switches … no need for the real high end, very expensive gear to do this. The 10Gbps switch, well yes, that’s always high end at the moment.
  • Use JBODs with SAS/Storage Spaces & you’ll be fine (see the sketch after this list). Select them carefully for performance. You can use bays like the MD3X00 if you want to replicate the backups somewhere; otherwise the MD12x0 will do, or any other decent JBOD => even cheaper. You can also mix: some building blocks that can replicate & others on Storage Spaces/JBODs. Mixing and matching for different backup needs gives you flexibility. Note that at TechEd Europe (June 2012), in a session by DELL, they mentioned the need for a firmware update with the MD1200 to optimize performance with Storage Spaces.
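As referenced in the list above, here is a minimal sketch of pooling JBOD disks with Storage Spaces and carving out a backup volume. The pool and disk names are invented, and you’d pick the resiliency setting that fits your protection needs:

    # Sketch: pool all poolable (JBOD) disks into one Storage Spaces pool.
    New-StoragePool -FriendlyName "BackupPool" `
        -StorageSubSystemFriendlyName "Storage Spaces*" `
        -PhysicalDisks (Get-PhysicalDisk -CanPool $true)

    # Carve out a large simple (striped) virtual disk as a backup target.
    New-VirtualDisk -StoragePoolFriendlyName "BackupPool" -FriendlyName "BackupLUN01" `
        -ResiliencySettingName Simple -UseMaximumSize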

It’s all about the money in a smart way!

As I said before, you will not get fired for:

  • Increasing backup throughput at least 4 fold (without dedupe)
  • Increasing backup capacity 3.5 fold (without deduplication)
  • Doing the above at 20% of the cost of the systems being replaced & of new offerings with specialized appliances (even at hilarious discount rates). That’s CAPEX reduction.
  • Helping pay for the primary storage, DRC site & extra SAN for data replication in case of disaster
  • Making backups faster, more reliable & reducing OPEX (the difference for us is huge and important)
  • Putting an affordable scale up & scale out Disk2Disk backup solution into place so the business can safely handle future backup loads at very acceptable costs.
  • It’s a modular solution, which we like. On top of that, it’s about as close to zero vendor lock-in as it gets. You can mix servers, bays, switches. Use what you like best for your needs. Only the bays have to remain the same within an individual “building block”.

Cost reduction is one thing but look at what we get whilst saving money… wow!

What am I missing? Specialized dedupe. Yes, but we’re going for the poor man’s workaround there. More on that later. As long as we get enough throughput, that doesn’t hurt us. And given the cost, we cannot justify it + it’s way too much vendor lock-in. If you can’t get backups done without it, sure, you might need to walk that route. But for now I’m using a different path. Our money is better spent in different places.
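Assuming that poor man’s workaround is the in-box Windows Server 2012 deduplication I mentioned earlier, a minimal sketch of turning it on for a backup target volume looks like this (the drive letter is an example):

    # Sketch: enable in-box deduplication on a backup target volume.
    Install-WindowsFeature -Name FS-Data-Deduplication
    Enable-DedupVolume -Volume "E:"
    # Dedupe files regardless of age (the default waits a few days).
    Set-DedupVolume -Volume "E:" -MinimumFileAgeDays 0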

Now how to get the same economic improvements from the backup software? Disk capacity licensing sucks. But we need a solution that can handle this throughput & load, is reliable, has great support & product information, gets support for new releases fast after RTM (come on CommVault, get a move on) and is simple to use ==> even more money and time saved.

Spin-off: a huge file server project?

Why is support for new releases in backup software important? Because the lack of it is causing me delays. Delays cost me time, money & opportunities. I’m really interested in converting our large LUN file servers to Windows Server 2012 Hyper-V virtual machines, which I now can do rather smoothly thanks to the big VHDX sizes that are possible now, and in slashing the backup times of those millions of small files to pieces by backing them up per VHDX over this setup. But I’ll have to wait and see when CommVault will support VHDX files and GPT disks in guests, because they are not moving as fast as a leading (and high cost) product should. Altaro & Veeam have them beaten solid there and I don’t like being held back.