Veeam will be holding its annual conference, VeeamOn 2017, in New Orleans, Louisiana on May 16th – 18th. You can already pre-register for the conference today; just follow this link. Pre-registering qualifies you for a $200 discount.
But don’t stop there. If you work with Veeam products you might have interesting solutions and experiences to share. Maybe you got creative and designed a smart solution for your needs. That’s something that can inspire people to rethink how they use the products. So please, don’t be shy: consider submitting a proposal for a presentation at VeeamOn 2017. Help your peers achieve the availability they need in an always-on world. Go to https://www.veeam.com/veeamon/call-for-presentations and share your experience, knowledge and insights.
I hope to see you there to learn from and be inspired by you, my peers and colleagues from all over the world!
One of the many gems in Veeam Backup & Replication v9 is the introduction of storage-level corruption guard for primary backup jobs. This was already a feature for backup copy jobs, but now we also have the option of periodically scanning our primary backup files for storage issues. It works like this: if any corrupt data blocks are found, the correct blocks are retrieved from the primary storage and the backup file is auto-healed. Ever-bigger disks, vast amounts of storage and huge amounts of data mean more chances of bit rot. It’s an industry-wide issue. Microsoft tries to address it with ReFS on Storage Spaces, for example, where you also see an auto-healing mechanism based on retrieving the needed data from redundant copies.
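To make the scan-and-heal idea concrete, here is a deliberately simplified sketch in Python. This is purely illustrative and not Veeam’s actual implementation: I’m assuming a hypothetical block store where each backup block has a checksum recorded at backup time, a periodic scan recomputes those checksums, and any mismatched block is re-fetched from the primary copy when that data still exists there.

```python
import hashlib

BLOCK_SIZE = 4  # tiny blocks for illustration; real backup blocks are far larger


def split_blocks(data):
    """Carve a byte string into fixed-size blocks."""
    return [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]


class BackupFile:
    """Hypothetical backup: a list of blocks plus a per-block checksum
    recorded when the backup was taken."""

    def __init__(self, data):
        self.blocks = split_blocks(data)
        self.checksums = [hashlib.sha256(b).hexdigest() for b in self.blocks]

    def scan_and_heal(self, primary):
        """Corruption-guard pass: find blocks whose current checksum no longer
        matches the recorded one, then re-fetch them from primary storage if
        the original data is still available there. Returns the indices of
        healed blocks and of blocks that could not be recovered."""
        primary_blocks = split_blocks(primary) if primary is not None else None
        healed, unrecoverable = [], []
        for i, block in enumerate(self.blocks):
            if hashlib.sha256(block).hexdigest() == self.checksums[i]:
                continue  # block is intact, nothing to do
            if (primary_blocks is not None
                    and i < len(primary_blocks)
                    and hashlib.sha256(primary_blocks[i]).hexdigest() == self.checksums[i]):
                self.blocks[i] = primary_blocks[i]  # auto-heal from primary
                healed.append(i)
            else:
                unrecoverable.append(i)  # data is gone from primary too
        return healed, unrecoverable


# Simulate bit rot in block 1 and heal it from the (unchanged) primary data.
backup = BackupFile(b"ABCDEFGHIJKL")
backup.blocks[1] = b"XXXX"
healed, lost = backup.scan_and_heal(b"ABCDEFGHIJKL")
```

The `unrecoverable` branch is exactly the caveat discussed further on: if the corrupted block’s data has also disappeared from (or changed on) the primary storage, the scan can detect the rot but cannot heal it, which is why periodic synthetic fulls remain useful.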
We find this option on the maintenance tab of the advanced settings for the storage settings of a backup job, where you can enable it and set a schedule.
The idea is that this is more efficient than running periodic active full backups to protect against data corruption. You can reduce their frequency or, perhaps better, get rid of them altogether.
Veeam describes Storage-level corruption guard as follows:
Can it replace every form of full backup completely? I don’t think so. The optimal use case seems to lie in combining storage-level corruption guard with periodic synthetic full backups. Here’s why: when the bit rot is in older data that no longer exists on the production storage, the corruption guard cannot fix it, as the correct data is no longer there to retrieve. So we’ll have to weigh the frequency of these corruption guard scans against the reduction in full backups that is wise for our environment and needs. The most interesting scenario seems to be the one where we can indeed eliminate periodic active full backups altogether. To mitigate the recovery risk described above, we’d still create synthetic full backups periodically, in combination with the storage-level corruption guard option enabled. Doing this gives us the following benefits:
- We protect our backup against corruption, bit rot etc.
- We avoid making periodic full backups, which are the most expensive in storage space, I/O and time.
- We avoid being left without a useful backup in the scenario where storage-level corruption guard needs to retrieve data from the primary storage that is no longer there.
To me this seems a very interesting scenario for optimizing backup times and economics. In the end it’s all about weighing risk versus cost and effort, and storage-level corruption guard gives us yet another tool to strike a better balance between the two. I have enabled it on a number of jobs to see how it does in real life. So far things have been working out well.