Optimizing Backups: PowerShell Script To Move All Virtual Machines On A Cluster Shared Volume To The Node Owning That CSV

When you are optimizing the number of snapshots to be taken for backups, or are dealing with storage vendor software that leverages their hardware VSS provider, you sometimes encounter requirements that are at odds with virtual machine mobility and dynamic optimization.

For example when backing up multiple virtual machines leveraging a single CSV snapshot you’ll find that:

  • Some SAN vendor software requires that the virtual machines in that job are owned by the same host or the backup will fail.
  • Backup software can also require that all virtual machines are running on the same node when you want them to be protected using a single CSV snapshot. The better ones don’t let the backup job fail, they just create multiple snapshots when needed, but that’s less efficient and potentially makes you run into issues with your hardware VSS provider.

image

VEEAM B&R v8 in action … 8 SQL Server VMs with multiple disks on the same CSV being backed up by a single hardware VSS provider snapshot (DELL Compellent 6.5.20 / Replay Manager 7.5) and an off host proxy. Organizing & orchestrating backups requires some effort, but can lead to great results.

Normally, when designing your cluster, you balance things out as well as you can. That helps reduce the need for constant dynamic optimization. You also make sure that, if at all possible, you keep all files related to a single VM together on the same CSV.

Naturally you’ll have drift. If not, you either have a very stable environment or are not leveraging the capabilities of your Hyper-V cluster. Mobility, dynamic optimization and high to continuous availability are what we want, and we don’t block those to serve the backups. We do try to help out the backups as much as possible, however. A good design does this.

If you’re not running a backup every 15 minutes in a very dynamic environment you can deal with this by live migrating resources to where they need to be in order to optimize backups.

Here’s a little PowerShell snippet that will live migrate all virtual machines on the same CSV to the owner node of that CSV. You can run this as a script prior to the backups starting, or you can run it as a weekly scheduled task (a sketch of that follows after the script) to prevent the drift from the ideal situation for your backups becoming too large, requiring more VSS snapshots or even failing backups. The exact approach depends on the storage vendor and/or backup software you use, in combination with the needs and capabilities of your environment.

cls

#Requires the FailoverClusters & Hyper-V modules (present on any Hyper-V cluster node)
Import-Module FailoverClusters
Import-Module Hyper-V

$Cluster = Get-Cluster
$AllCSV = Get-ClusterSharedVolume -Cluster $Cluster

ForEach ($CSV in $AllCSV)
{
    Write-Output "$($CSV.Name) is owned by $($CSV.OwnerNode.Name)"

    #We grab the friendly name of the CSV
    $CSVVolumeInfo = $CSV | Select-Object -ExpandProperty SharedVolumeInfo
    $CSVPath = $CSVVolumeInfo.FriendlyVolumeName

    #We deal with the \ being an escape character for the -match comparison below
    $FixedCSVPath = $CSVPath -replace '\\', '\\'

    #We grab all clustered VMs whose owner node differs from the node that owns the CSV we're working with.
    #From those we keep the ones that are located on that CSV.
    $VMsToMove = Get-ClusterGroup | Where-Object {($_.GroupType -eq 'VirtualMachine') -and ($_.OwnerNode -ne $CSV.OwnerNode.Name)} | Get-VM | Where-Object {$_.Path -match $FixedCSVPath}

    ForEach ($VM in $VMsToMove)
    {
        Write-Output "`tThe VM $($VM.Name) located on $CSVPath is not running on host $($CSV.OwnerNode.Name) which owns that CSV"
        Write-Output "`tbut on $($VM.ComputerName). It will be live migrated."
        #Live migrate that VM to the node that owns the CSV it resides on
        Move-ClusterVirtualMachineRole -Name $VM.Name -MigrationType Live -Node $CSV.OwnerNode.Name
    }
}
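
If you want this to run unattended as a weekly scheduled task, a minimal sketch could look like the snippet below. The script path, task name and schedule are assumptions; adjust them to your environment and make sure the account the task runs under has the rights to manage the cluster.

#Minimal sketch: register the script as a weekly scheduled task (path, name & schedule are assumptions)
$Action = New-ScheduledTaskAction -Execute 'PowerShell.exe' -Argument '-NoProfile -ExecutionPolicy Bypass -File C:\Scripts\Move-VMsToCSVOwner.ps1'
$Trigger = New-ScheduledTaskTrigger -Weekly -DaysOfWeek Sunday -At '22:00'
$Principal = New-ScheduledTaskPrincipal -UserId 'SYSTEM' -RunLevel Highest
Register-ScheduledTask -TaskName 'Move VMs To CSV Owner' -Action $Action -Trigger $Trigger -Principal $Principal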

Now there is a lot more to discuss, i.e. what and how to optimize for virtual machines that are themselves clustered. For optimal redundancy you’ll have those running on different nodes and CSVs. But even beyond that, you might have the clustered VMs running on different clusters, which are separate failure domains. But I get the remark that my blogs are wordy and verbose, so … that’s for another time Winking smile

Some Insights Into How Windows 2012 R2 Hyper-V Backups Work

How Windows Server 2012 R2 backups differ from Windows Server 2012 and earlier

You’ll remember our previous blog post about backing up a virtual machine on Windows Server 2012 R2 throwing this error:

Dealing With Event ID 10103 “The virtual machine ‘VM001′ cannot be hot backed up since it has no SCSI controllers attached. Please add one or more SCSI controllers to the virtual machine before performing a backup. (Virtual machine ID DCFE14D3-7E08-845F-9CEE-21E0605817DC)” In Windows Server 2012 R2

The fix was easy enough: adding a virtual SCSI controller to the virtual machine (a minimal PowerShell sketch follows below). But why does it need that now?
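
Purely as a hedged sketch, assuming a VM named 'VM001' that can be shut down briefly, since a vSCSI controller cannot be added to a running virtual machine:

#Sketch: add a vSCSI controller to a VM (VM name is an assumption, the VM must be off)
Stop-VM -Name 'VM001'
Add-VMScsiController -VMName 'VM001'
Start-VM -Name 'VM001'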

Well, this all has to do with the changed way Windows Server 2012 R2 backups work. Before Windows Server 2012 R2, the VSS provider created a VSS snapshot inside the guest virtual machine. That snapshot was exposed to the host in order to create a volume snapshot for backup purposes. Right after the volume snapshot was taken, the VSS snapshot inside the guest virtual machine was reverted. The backup then runs against that volume snapshot and is consistent thanks to both host & guest VSS capabilities.

For an overview of the VSS based backup process in general, take a peek at Overview of Processing a Backup Under VSS.

Now it is the “Hyper-V Integration Services Shadow Copy Provider” that is being used. When the host initiates a volume snapshot (Microsoft or hardware VSS provider), the host VSS writer goes into freeze. This process leverages the Hyper-V Integration Services Shadow Copy Provider to create the virtual machine checkpoint. After that the volume/LUN/CSV snapshot is taken. When that is done the host VSS writer goes into thaw and the virtual machine checkpoint is deleted. After that the backup runs against the volume snapshot, and at the end that is also deleted. You can follow this process quite nicely in the GUI of your Hyper-V host and on your SAN (if you use a hardware VSS provider).
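
If you want to check the moving parts on your own systems, here is a small hedged sketch: it lists the VSS writers and providers on the host and verifies that the backup integration service is enabled in a guest. The VM name is an assumption, and the display name of the backup integration service varies slightly between Windows versions, hence the wildcard.

#On the Hyper-V host: list VSS writers and providers (the Hyper-V VSS Writer should show as stable)
vssadmin list writers
vssadmin list providers

#Check that the backup integration service is enabled for a guest (VM name is an assumption)
Get-VMIntegrationService -VMName 'VM001' | Where-Object { $_.Name -like 'Backup*' }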

Dear storage vendors: a great, reliable, fast VSS Hardware Provider is paramount to success in a Microsoft environment. You need to get this absolutely right and out of the door before spending any more time and money on achieving yet more IOPS. Keep scalability in mind when doing this.

Dear backup software vendors: think about scalability when designing your products. If we have 200 or 500 or a thousand VMs … can we leverage CSV based backups to protect every VM on the LUN, or do we need to snap the LUN for every VM backed up? Having a choice there is good for both data protection schemes and scalability.

At this stage the hardware VSS snapshot is being taken …

image

Contrary to common belief, this means that the backup will indeed be application consistent to the time of the checkpoint, as the CSV snapshot being taken is of a consistent checkpoint. It’s the delta in the active avhdx that is only crash consistent, like any running VM by the way. Now pay attention to the screenshot below. The two red arrows indicate two NTFS source events; two volumes seem to be exposed to the next free drive letters, E: and F: here, as C: is the virtual machine OS and D: the DVD.

image

Look at the detail. Indeed, two. In the previous screenshot we only saw one in the CSV path, but there are indeed two avhdx files.

image

Exposing a snapshot on the SAN to a server actually shows us this much better … look here at the avhdx with the GUID and the one with “AutoRecovery” in the name. So that makes for two NTFS events … and as the backup needs to do this live, it requires a vSCSI controller to be present in the virtual machine … a vIDE controller can’t do this.

image
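
If you want to see those files from the Hyper-V side while the backup is running, here is a minimal sketch (the VM name is an assumption) that lists the virtual disks attached to the VM; while the backup checkpoint exists, the paths point at the active avhdx files:

#List the virtual disks attached to a VM (name is an assumption); during the backup checkpoint
#the Path column shows the active avhdx file(s) instead of the parent vhdx
Get-VMHardDiskDrive -VMName 'VM001' | Select-Object ControllerType, ControllerNumber, Path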

Anyway, enough under the hood detective work for now. In VEEAM that stage looks like this:

image

And on the Compellent it looks like this. The screenshots are from different backups at different times, so don’t get confused about the time stamps here. It’s just an illustration of what you can expect to see.

image

Now when the CSV snapshot has been taken, the virtual machine checkpoint is removed. At that time the backup runs against the CSV snapshot. In our case (hardware VSS provider) this is a snapshot on the SAN that gets exposed in a view and mapped to the off host backup proxy VEEAM server. On the DELL Compellent it looks like this.

image

This takes a while to do … but after a while the backup will kick off. Do note that the checkpoint has merged and is no longer visible at this time.

image

Once the backup is complete, the mapping is removed, the view deleted and the snapshot expired. So your SAN is left as the backup found it.

There you go. I hope this helped clarify certain things on how Hyper-V guest backups work in Windows Server 2012 R2. Your backups are still application consistent, just not when you’re running Linux or DOS or NT 4.0, as there is no VSS support for those. However, they are based on a consistent virtual machine snapshot, which explains why Hyper-V backups can protect Linux guests very adequately!

You Got To Love Windows Server 2012 Deduplication for Backups

I’ve discussed this before in Windows Server 2012 Deduplication Results In A Small Environment but here’s a little updated screenshot of a backup volume:

image

Not too shabby I’d say, and 100% free, in-box, portable deduplication … What are you waiting for Winking smile

Windows Server 2012 Deduplication Results In A Small Environment

There is a small environment that provides web presence and services. In total there are about 20 production virtual machines. These are all backed up to a Transparent Failover File Share on a Windows Server 2012 cluster that is used to host all the infrastructure and management services.

The LUN/volume for the backups provides about 5.5TB of storage. The folder layout is shown in the screenshot below. The backups are run “in guest” using native Windows Backup, which has the WindowsImageBackup subfolder as target. Those backups are archived to an “Archives” folder. That archive folder is the one that gets deduplicated, as the WindowsImageBackup folder is excluded.

image

This means that basically the most recent version is not deduplicated, guaranteeing the fastest possible restore times at the cost of some disk space. All older (> 1 day) backup files are deduplicated. We achieve the following with this approach (a configuration sketch follows after the list):

  • It provides us with enough disk space savings to keep archived backups around for longer in case we need ‘m.
  • It also provides enough storage to back up more virtual machines while still being able to maintain a satisfactory number of archived backups.
  • Any combination of the above two benefits can be balanced against the business needs.
  • It’s a free, zero cost solution
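
As promised above, here is a hedged sketch of how such a setup can be configured with the in-box cmdlets. The drive letter and folder name are assumptions based on the layout described earlier.

#Enable deduplication on the backup volume (drive letter is an assumption)
Import-Module Deduplication
Enable-DedupVolume -Volume 'D:'

#Exclude the folder holding the most recent backups and only dedupe files older than a day
Set-DedupVolume -Volume 'D:' -ExcludeFolder '\WindowsImageBackup' -MinimumFileAgeDays 1

#Kick off an optimization job manually instead of waiting for the schedule
Start-DedupJob -Volume 'D:' -Type Optimization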

The Results

About 20 virtual machines are backed up every week (small delta and lots of stateless applications). As the optimization runs we see the savings grow. That’s perfectly logical: the more backups we make of virtual machines with a small delta, the more deduplication can shine. So let’s look at the results using Get-DedupStatus | fl
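
The screenshots below come from exactly that; for completeness, a quick hedged sketch of the in-box reporting cmdlets:

#Show the dedup results per volume: files in scope, saved space, optimization progress
Get-DedupStatus | Format-List

#The savings rate and saved space are also reported per volume
Get-DedupVolume | Format-List Volume, SavingsRate, SavedSpace, UnoptimizedSize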

image

A couple of weeks later it looks like this.

image

Give it some more months, with more retained backups, and I think we’ll keep this around 88%-90%. From tests we have done (ddpeval.exe) we think we’ll max out at around 80% savings rate. But it’s a bit less here overall because we excluded the most recent backups. Guess what, that’s good enough for us Winking smile. It beats buying extra storage or paying a wad of money for disk deduplication licenses from some backup vendor or appliance. Just using the built-in deduplication mechanisms in Windows Server 2012 saved us a bunch of money.
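
For the record, ddpeval.exe is the in-box evaluation tool that estimates the achievable savings against an existing data set before you enable anything; as a hedged sketch (the path is an assumption):

#Estimate potential dedup savings for an existing folder (path is an assumption)
ddpeval.exe D:\Backups\Archives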

The next step is to also convert the production Hyper-V cluster to Windows Server 2012 so we can do host based backups with the native Windows Backup that now supports Cluster Shared Volumes (another place where that 64TB VHDX size can come in handy as Windows Backup now writes to VHDX).
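
We haven’t rolled that out yet, so purely as a hedged sketch of what a host level backup of a couple of VMs could look like with the in-box tooling (the VM names and backup target are assumptions, and the -hyperv switch is how I understand wbadmin selects virtual machines):

#Host level backup of specific VMs with the native Windows Server Backup tooling
#(VM names and backup target are assumptions, adjust to your environment)
wbadmin start backup -backupTarget:\\backupserver\backups -hyperv:"VM001,VM002" -quiet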

Some interesting screenshots

image

The volume reports we’re using 3TB in data. So 2.4TB is free.

image

Looking at the backup folder you see 10.9TB of data stored on 1.99TB of disk.

So the properties of the volume report more disk space used than the actual folder containing the data. Let’s use WinDirStat to have a look.

image

So the above agrees with the volume properties. In the details of this volume we again see about 2TB of consumed space.

image

Could it be that the volume is reserving some space to ensure proper functioning?

When you dive deeper you get a cool view of the storage space used. Where Windows Explorer is aware of deduplication and shows the non-deduplicated size for the vhd file, WinDirStat does not; it always shows the size on disk, which is a whole lot less.

image

This is the same as when you ask for the properties of a file in Windows Explorer.

image

Discussion

Is it the best solution for everyone? Not always, no. The deduplication is done on the target after the data is copied there. So in environments where bandwidth is seriously constrained, and there is absolutely no technical and/or economical way to provide the needed throughput, this might not be a viable solution. But don’t dismiss this option too fast; in a lot of scenarios it is a very good and cost effective feature. Technically & functionally it might be wiser to do it on the target, as you don’t consume too much memory (deduplication is a memory hog) and CPU cycles on the source hosts. Also nice is that these deduplicated files are portable across systems. VEEAM has demonstrated some nice examples of combining their deduplication with Windows dedupe, by the way. So this might also be an interesting scenario.

Financially, the cost of deduplication functionality with hardware appliances or backup software hurts like the kick of a horse straight to the head. So even if you have to invest a little in bandwidth and cabling you might be a lot better off. Perhaps, as you’re replacing older switches by new 1Gbps or 10Gbps gear, you might be able to recuperate the old ones as dedicated backup switches. We’re using mostly recuperated switch ports and native Windows NIC teaming, and it works brilliantly. I’ve said this before: saving money whilst improving operations rarely gets you fired. The sweet thing is that this is achieved by building good & reliable solutions, which means they are efficient even if it costs some money to achieve. Some managers focus way too much on efficiency from the start, as to them it means nothing more than a euphemism for saving every € possible. Penny wise and pound foolish. Bad move. Efficiency, unless it is the goal itself, is a side effect of a well designed and optimized solution. A very nice and welcome one for that matter, but it’s not the end all be all of a solution or you’ll have the wrong outcome.