VEEAM Endpoint Backup has gone RTM, and that’s great news. I’ve been using it since the beta version with great results. I moved to the release candidate when that became available and now I’m running RTM. The version number of the RTM bits is 1.0.0.1954.
You can download it here and put it into action straight away!
Quick Tips & Findings
There is no supported upgrade path from the beta release. As a matter of fact, the RTM version cannot read the beta’s backup files. When trying to upgrade from beta to RTM you’ll be greeted with this message:
Now that’s OK. You should have been on the RC already, and there things are better. Mind you, there’s no way to do an in-place upgrade from the RC either, but the RTM version can read the backups made by the RC version!
With a clean install (greenfield, or after uninstalling the beta or RC version) the installation will kick off.
Now in the case of our RC backups we tested two things:
Can we restore the existing backups? Yes we can!
How are backups made by the RTM version handled with regard to the ones already present? We simply reconfigured the backups to the same repository and kicked off a backup. A new backup job folder was created and the backup was written there. So our DBAs’ great self-service SQL Server backup offloading repository, made with the release candidate, is still available for restores, while RTM backs up to its own new folder.
Well there you go, VEEAM Endpoint Backup just got launched into production. We still have to wait for the production-ready update for integration with VEEAM Backup & Replication v8, but that will arrive soon enough. The future looks bright.
It happens to the best of us: sometimes we select the wrong option during deployment and/or configuration of our original virtual disks. Or, even with the best of planning, the realities and use cases of your storage change, so the original choice might no longer be the optimal one. Luckily, on a DELL PowerVault MD storage device you do not need to delete the virtual disk or disks and lose your data to reconfigure the segment size. Even better, you can do this online as a background process, which is a must, as it can take a very long time and would cause prohibitively long downtime if you had to take the data offline for that duration.
You have some control over the speed at which this happens via the priority setting, but do realize that it takes a (very) long time. Because it’s a background process you can keep working. I have noticed little to no impact on performance, but your mileage may vary.
How long does it take? Hard to predict. This is a screenshot of two 50TB virtual disks where the segment size is being adjusted online…
You cannot always go to the desired segment size in one step. Sometimes you have only an intermediate size available. This is the case in the example below.
The trick is to first move to that segment size and then repeat the process to reach the size you require. In this case, we’ll first move to 256 KB and then to 512 KB segment size. So this again takes a long time. But again, it all happens online.
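If you’d rather script the two-step change than click through the MD Storage Manager GUI, it can also be driven via the MD-series SMcli. This is only a hedged sketch: the array name, virtual disk name, and SMcli install path are placeholders, and you should verify the exact `set virtualDisk … segmentSize` syntax against the PowerVault CLI guide for your array’s firmware before running anything.

```powershell
# Sketch only: array name, virtual disk name, and install path are assumptions.
# Verify the command syntax against your PowerVault MD-series CLI guide.
$SMcli = 'C:\Program Files (x86)\Dell\MD Storage Manager\client\SMcli.exe'  # assumed path

# Step 1: move to the intermediate 256 KB segment size.
# The modification runs as a background operation on the array.
& $SMcli -n 'md-array-01' -c 'set virtualDisk ["Data01"] segmentSize=256;'

# Wait for that modification operation to complete, then
# Step 2: repeat with the final 512 KB segment size.
& $SMcli -n 'md-array-01' -c 'set virtualDisk ["Data01"] segmentSize=512;'
```

As with the GUI, each step is a long-running background job; don’t fire the second command until the array reports the first modification has finished.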
In conclusion, it’s great to have this capability. When you need to change the segment size while there is already data on the PowerVault virtual disks, you can do so online and the data remains available. That this can require multiple steps and take a long time is not a huge deal. You kick it off and let it run. No need to sit there and watch it.
When you are optimizing the number of snapshots to be taken for backups, or are dealing with storage vendors’ software that leverages their hardware VSS provider, you sometimes encounter requirements that are at odds with virtual machine mobility and dynamic optimization.
For example when backing up multiple virtual machines leveraging a single CSV snapshot you’ll find that:
Some SAN vendor software requires that the virtual machines in that job are owned by the same host or the backup will fail.
Backup software can also require that all virtual machines are running on the same node when you want them to be protected using a single CSV snapshot. The better ones don’t let the backup job fail; they just create multiple snapshots when needed, but that’s less efficient and potentially makes you run into issues with your hardware VSS provider.
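Before a backup window you can quickly check whether any drift has crept in. The sketch below assumes the FailoverClusters and Hyper-V PowerShell modules are available and uses a placeholder CSV resource name; it groups the VMs stored on one CSV by the node they are running on. More than one group means a single CSV snapshot won’t cover them without moving VMs first.

```powershell
Import-Module FailoverClusters, Hyper-V

# 'Cluster Disk 1' is a placeholder; use your CSV resource name.
$CSV = Get-ClusterSharedVolume -Name 'Cluster Disk 1'
$CSVPath = ($CSV | Select-Object -ExpandProperty SharedVolumeInfo).FriendlyVolumeName

# All clustered VMs whose files live on this CSV, grouped by the host they run on.
Get-ClusterGroup |
    Where-Object { $_.GroupType -eq 'VirtualMachine' } |
    Get-VM |
    Where-Object { $_.Path -match [Regex]::Escape($CSVPath) } |
    Group-Object -Property ComputerName |
    Select-Object Name, Count
```

If the output shows a single group matching the CSV’s owner node, you’re already in the ideal single-snapshot situation.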
VEEAM B&R v8 in action: 8 SQL Server VMs with multiple disks on the same CSV being backed up by a single hardware VSS provider snapshot (DELL Compellent 6.5.20 / Replay Manager 7.5) and an off-host proxy. Organizing & orchestrating backups requires some effort, but can lead to great results.
Normally when designing your cluster you balance things out as well as you can. That helps reduce the need for constant dynamic optimization. You also make sure that, if at all possible, you keep all files related to a single VM together on the same CSV.
Naturally you’ll have drift. If not, you either have a very stable environment or are not leveraging the capabilities of your Hyper-V cluster. Mobility, dynamic optimization, and high to continuous availability are what we want, and we don’t block those to serve the backups. We do try to help the backups as much as possible, however. A good design does this.
If you’re not running a backup every 15 minutes in a very dynamic environment, you can deal with this by live migrating resources to where they need to be in order to optimize backups.
Here’s a little PowerShell snippet that will live migrate all virtual machines on the same CSV to the owner node of that CSV. You can run it as a script prior to the backups starting, or as a weekly scheduled task, to prevent the drift from the ideal situation for your backups from becoming too large, which would require more VSS snapshots or even cause backups to fail. The exact approach depends on the storage vendor and/or backup software you use, in combination with the needs and capabilities of your environment.
cls
$Cluster = Get-Cluster
$AllCSV = Get-ClusterSharedVolume -Cluster $Cluster
ForEach ($CSV in $AllCSV)
{
    Write-Output "$($CSV.Name) is owned by $($CSV.OwnerNode.Name)"
    #Grab the friendly name (mount path) of the CSV
    $CSVVolumeInfo = $CSV | Select-Object -ExpandProperty SharedVolumeInfo
    $CSVPath = $CSVVolumeInfo.FriendlyVolumeName
    #Escape the backslashes so the path can be used as a regex pattern by -match
    $FixedCSVPath = [Regex]::Escape($CSVPath)
    #Grab all clustered VMs whose owner node differs from the CSV's owner node,
    #then keep only the ones whose files are located on the CSV we're working with
    $VMsToMove = Get-ClusterGroup | Where-Object {($_.GroupType -eq 'VirtualMachine') -and ($_.OwnerNode.Name -ne $CSV.OwnerNode.Name)} | Get-VM | Where-Object {$_.Path -match $FixedCSVPath}
    ForEach ($VM in $VMsToMove)
    {
        Write-Output "`tThe VM $($VM.Name) located on $CSVPath is not running on host $($CSV.OwnerNode.Name), which owns that CSV,"
        Write-Output "`tbut on $($VM.ComputerName). It will be live migrated."
        #Live migrate the VM to the node that owns the CSV it resides on
        Move-ClusterVirtualMachineRole -Name $VM.Name -MigrationType Live -Node $CSV.OwnerNode.Name
    }
}
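To run this as the weekly scheduled task mentioned above, here’s a minimal sketch using the ScheduledTasks module (Windows Server 2012 or later). The script path, task name, and schedule are all assumptions; adjust them to your environment.

```powershell
# Assumes the snippet above was saved as C:\Scripts\Align-VMsToCSVOwner.ps1
# (a hypothetical path) and should run Saturday evening, ahead of a weekend
# backup window. Run from an elevated prompt on a cluster node.
$Action = New-ScheduledTaskAction -Execute 'powershell.exe' `
    -Argument '-NoProfile -ExecutionPolicy Bypass -File C:\Scripts\Align-VMsToCSVOwner.ps1'
$Trigger = New-ScheduledTaskTrigger -Weekly -DaysOfWeek Saturday -At 22:00
Register-ScheduledTask -TaskName 'Align VMs To CSV Owner' `
    -Action $Action -Trigger $Trigger -User 'SYSTEM' -RunLevel Highest
```

Running it under SYSTEM keeps it independent of any user session; if your cluster cmdlets need specific credentials, register it with a service account instead.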
Now there is a lot more to discuss, e.g. what and how to optimize for virtual machines that are themselves clustered. For optimal redundancy you’ll have those running on different nodes and CSVs. But even beyond that, you might have the clustered VMs running on different clusters, which are separate failure domains. But I get the remark that my blogs are wordy and verbose, so… that’s for another time.