I’m back from attending, speaking, learning and sharing experiences and knowledge at VeeamON 2017 (and DELL EMC World before that). It was a blast, and I had the opportunity to engage in very interesting discussions with experts from around the globe.
As it was a Veeam event, it will be no surprise that we got some very interesting information about new Veeam offerings, both current and upcoming. Points of particular interest to me are:
- Veeam backup for file shares. This might well solve my entire dilemma around virtualizing the very large capacity clustered file shares (100-200 TB) I have to protect. I’m looking forward to testing and leveraging the various restore options, such as file share rollback. Handy when ransomware has just struck.
- I like what Veeam is doing for disaster recovery in Microsoft’s Azure public cloud: Veeam’s Direct Restore and the new Veeam Powered Network (Veeam PN) to facilitate and automate the disaster recovery process.
- The Veeam agent that can protect Windows and Linux based physical servers and endpoints, along with applications running in Microsoft Azure, AWS and other public clouds, tied into Veeam Backup & Replication. We will also get support for failover clusters with this, something I have been lobbying for!
- Native object storage support, including Amazon S3, Amazon Glacier and Microsoft Azure Blob storage.
- They announced improved and extended Office 365 protection, including OneDrive for Business and SharePoint Online. One of those improvements is very handy when dealing with multiple tenants.
Ransomware did something very significant beyond reminding everyone of the importance of recoverable backups: it reignited interest in tape as a backup medium. The inherent “air gap” that tape offers has become more interesting to many people, as ransomware can also delete or encrypt backups. So the 3-2-1 rule has never been more important and is being extended with additional rules of thumb. The product I want to investigate is the StarWind Virtual Tape Library (VTL). What I like is that I can have an air-gapped backup, integrated with Veeam, in Amazon AWS. Even while my entire business might run in Azure, this separates my data protection technology and location from my production/development environment. That’s ideal for maximum isolation to protect us from both external and insider threats and risks, while avoiding the need to deal with physical tapes. Those are and remain a major concern for operational costs and RTO.
The new capabilities are very welcome to help solve the challenges we have now and the ones we see coming in the near future. We have plenty of ideas and plans to build the next generation of data protection and data availability solutions. Whatever the environment, on-premises, IaaS, PaaS, SaaS, private/hybrid/public cloud, the need to protect data against loss and downtime is there in one form or another. That is and remains a primary responsibility of any business, regardless of the technology. As always, my fellow MVPs and Vanguards are ready, willing and able to get the job done.
Today DELL EMC World 2017 ends with a dinner with DELL EMC management and engineers to discuss our impressions of the information we took away from the event. I would like to thank the ever-hardworking Sarah Vela for making this possible. It’s much appreciated.
Professionally I’m blessed with multiple opportunities to attend conferences and summits. That’s where I get to talk to the skilled and passionate people who work on the technologies we use intensively. This is very much a two-way street where we learn from each other. At many conferences I might also be a speaker or participate in advisory boards to provide feedback. Some of those latter discussions are under NDA. This is normal, and I have NDAs with other companies as well. That’s the legal side of the trust we place in each other in order to discuss evolving and future technologies.
I attend multiple events from different players. Some of these disagree with me, and that is fine; we learn from being challenged. It helps us define more clearly what we design and build, as well as why and how. More and more, solutions become a diverse, multi-pronged combination of components, each with its specific capabilities at our disposal. These change fast, and so do our solutions, an element not to be ignored when designing them. That’s one takeaway from DELL EMC World that seems to have hit home. The other is that some companies are in a rather dire IT condition due to years of standstill.
I’m happy to see that today and tomorrow DELL EMC has the technologies we need to deliver modern IT solutions. The way in which we choose to do so is our choice, and DELL EMC states it is committed to supporting that. As a testimonial to that, we got to see the DELL EMC Storage Spaces Direct Ready Nodes based on the soon-to-be-available generation 14 PowerEdge servers.
That is how we worked with DELL for many years, and we have been assured we can continue to work that way with DELL EMC. That is what Michael Dell committed to, and I have seen them deliver on that promise for many years. For me that’s enough to be confident in it until proven otherwise. Even if that message was sometimes delivered in a way that made me think Las Vegas had gotten the better of some conference managers. But let’s not let the form get in the way of the content.
On a final note, DELL EMC is not anti public cloud or pro on-premises. That’s how it should be, and that’s how we deliver IT. We use the tools at our disposal to build the best possible solutions we can. What we use depends on the needs and changes as technology evolves. That’s OK. Saying you need hardware doesn’t make you a cloud hater, or vice versa. The world is not that simple.
Veeam will be holding its annual conference, VeeamOn 2017, in New Orleans, Louisiana on May 16th – 18th. You can already pre-register for the conference today; just follow this link. This qualifies you for a $200 discount.
But don’t stop there. If you work with Veeam products, you might have some interesting solutions and experiences to share. Maybe you got creative and designed a smart solution to your needs. That’s something that can inspire people to think about how they use the products. So please, don’t be shy: consider submitting a proposal for a presentation at VeeamOn 2017. Help your peers achieve the availability they need in an always-on world. Go to https://www.veeam.com/veeamon/call-for-presentations and share your experience, knowledge and insights.
I hope to see you there to learn from and be inspired by you, my peers and colleagues from all over the world!
One of the many gems in Veeam Backup & Replication v9 is the introduction of storage-level corruption guard for primary backup jobs. This was already a feature for backup copy jobs, but now we have the option of periodically scanning our backup files for storage issues. It works like this: if any corrupt data blocks are found, the correct ones are retrieved from the primary storage and auto-healed. Ever bigger disks, vast amounts of storage and huge amounts of data mean more chances of bit rot. It’s an industry-wide issue. Microsoft tries to address this with ReFS on Storage Spaces, for example, where you also see an auto-healing mechanism based on retrieving the needed data from redundant copies.
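To make the mechanism concrete, here is a minimal, purely illustrative sketch of the general checksum-scan-and-heal idea (my own toy model, not Veeam’s actual implementation or block sizes): per-block checksums are stored when the backup is written, a later scan recomputes them, and any mismatching block is re-fetched from a healthy source copy.

```python
import hashlib

def checksums(blocks):
    """Per-block SHA-256 digests, computed when the backup is written."""
    return [hashlib.sha256(b).hexdigest() for b in blocks]

def health_check(backup_blocks, stored_sums, fetch_from_source):
    """Scan every backup block; replace any block whose checksum no
    longer matches with a fresh copy from the source. Returns the
    indexes of the blocks that were healed."""
    healed = []
    for i, block in enumerate(backup_blocks):
        if hashlib.sha256(block).hexdigest() != stored_sums[i]:
            backup_blocks[i] = fetch_from_source(i)  # auto-heal from the primary copy
            healed.append(i)
    return healed

# Simulate a tiny backup of three blocks, then corrupt one ("bit rot").
source = [b"AAAA", b"BBBB", b"CCCC"]
backup = list(source)
sums = checksums(backup)
backup[1] = b"BxBB"  # silent corruption in the backup file
healed = health_check(backup, sums, lambda i: source[i])
print(healed, backup == source)  # → [1] True
```

Note the dependency this toy model shares with the real feature: healing only works while the correct block can still be fetched from the source, which is exactly the limitation discussed below.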
We find this option on the maintenance tab of the advanced storage settings of a backup job, where you can enable it and set a schedule.
The idea behind this is that it is more efficient than doing periodic active full backups to protect against data corruption. You can reduce those in frequency or, perhaps better, get rid of them altogether.
Veeam describes Storage-level corruption guard as follows:
Can it replace any form of full backup completely? I don’t think so. The optimal use case seems to lie in combining storage-level corruption guard with periodic synthetic full backups. Here’s why. When the bit rot is in older data that can no longer be found on the production storage, the health check can fail to fix it, as the correct data is no longer there to retrieve. So we’ll have to weigh the frequency of these corruption guard scans to determine what reduction in full backups is wise for our environment and needs. The most interesting scenario seems to be the one where we can indeed eliminate periodic full backups altogether. To mitigate the potential issue of not being able to recover, described above, we’d still create synthetic full backups periodically, with the storage-level corruption guard option enabled. Doing this gives us the following benefits:
- We protect our backups against corruption, bit rot, etc.
- We avoid making periodic active full backups, which are the most expensive in storage space, I/O and time.
- We avoid being left without a useful backup in the scenario where storage-level corruption guard needs to retrieve data from the primary storage that is no longer there.
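A rough back-of-envelope calculation illustrates the I/O savings of the second point. All numbers here are assumptions I picked for illustration (a 10 TB server with roughly 2% daily change), not measurements:

```python
# Production reads per week, in GB, under two backup schemes.
# Assumed figures: 10 TB of data, ~2% (200 GB) changed per day.
FULL_GB = 10_000
DAILY_CHANGE_GB = 200

# Weekly active full: one complete re-read of production,
# plus six daily incrementals.
active_full_week = FULL_GB + 6 * DAILY_CHANGE_GB

# Forever incremental: seven daily incrementals; a synthetic full is
# assembled from blocks already in the backup repository, so it costs
# no additional reads from production storage.
forever_incremental_week = 7 * DAILY_CHANGE_GB

print(active_full_week, forever_incremental_week)  # → 11200 1400
```

Under these assumed numbers the synthetic-full approach reads almost an order of magnitude less from production per week; the corruption guard scans then shoulder the job of keeping those long backup chains trustworthy.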
To me this seems a very interesting scenario for optimizing backup times and economics. In the end it’s all about weighing risks versus cost and effort. Storage-level corruption guard gives us yet another tool to strike a better balance between the two. I have enabled it on a number of jobs to see how it does in real life. So far things have been working out well.