LoadMaster LMOS 7.2.52 firmware feature enhancements

Let’s take a look at some of the feature enhancements in the recently released LoadMaster LMOS 7.2.52 firmware. You can read the release notes on the Kemp website. Besides the security updates, two entries in the new features and change notices caught my attention. The first is the new ability to use SNI in a SubVS, as well as SNI hostname pass-through. The second is the enhancement that lets us configure per-VS health check settings. Both are very welcome, as I deal with such scenarios and needs frequently. So let’s take a look.

Ability to use SNI in SubVS, as well as SNI-Hostname Pass-Through

The Server Name Indication (SNI) feature has been enhanced to support the following:

  • The ability to pass through the original hostname as the SNI hostname to the Real Server.
  • The ability to specify a different (manual) SNI hostname per SubVS. This mirrors the existing option on the parent Virtual Service (Reencryption SNI Hostname), but at the SubVS level, where it combines with content switching.


These new features make it easier and less confusing to implement scenarios where you want to consolidate as many services as possible onto the fewest possible IP addresses.

The Pass-through SNI hostname check box is available in the SSL Properties section of the Virtual Service modify screen. When this is enabled and the Virtual Service re-encrypts, the received SNI hostname is passed through as the SNI used to connect to the Real Server. If the Virtual Service has a Reencryption SNI Hostname set, that value overrides the received SNI.

[Figure: The pass-through SNI hostname can be overridden at the VS level]

It is also possible to set the Reencryption SNI Hostname in a SubVS (in the Basic Properties section). If it is set in a SubVS, it overrides the parent Virtual Service value and/or the received SNI value.

[Figure: You can also define the Reencryption SNI Hostname at the SubVS level]
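
As an aside, anything you can set in this UI can usually also be scripted against the LoadMaster RESTful API. Below is a minimal PowerShell sketch of what that could look like. The /access/modvs endpoint is the documented way to modify a (Sub)VS, but the ReverseSNIHostname parameter name is my assumption based on the UI label, so verify it against the API reference for 7.2.52 before relying on it.

    $lm   = 'https://loadmaster.example.local'   # placeholder LoadMaster address
    $cred = Get-Credential -Message 'LoadMaster API credentials'

    # A SubVS is addressed by its own index, just like a parent Virtual Service.
    # 'ReverseSNIHostname' is my assumed API name for "Reencryption SNI Hostname".
    # Self-signed management certificates may need extra handling
    # (e.g. -SkipCertificateCheck on PowerShell 7).
    Invoke-RestMethod -Uri "$lm/access/modvs?vs=12&ReverseSNIHostname=app.example.com" -Credential $cred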

Per-VS Health Check Settings

Until now, health check settings were global only, located on the Rule & Checking > Check Parameters UI page: Check Interval, Connect Timeout, and Retry Count.

[Figure: We still have global Service Check Parameters, but they can now be overridden at the VS and SubVS level]

Now, we can also configure these settings on the Virtual Service Real Servers tab. This means we can tune the health check behavior for specific VSs and SubVSs. By default, Real Server health checks use the global settings, and the VS or SubVS values track any changes to the global settings. So, by default, the behavior is the same as in previous releases.

[Figure: You can mix: customize the check interval, use the default value for the timeout, and leverage the global setting for the retry count]

Once you change a check parameter at the VS or SubVS level, however, that custom value remains unchanged regardless of changes made to the global setting. The UI indicates whether the value currently in use is the global value or a custom one.
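
If you script your LoadMaster configuration, the same per-VS override should be possible through the RESTful API. Here is a hedged PowerShell sketch; the CheckInterval and RetryCount parameter names are my assumptions based on the UI labels, so confirm them in the API reference for your firmware level.

    $lm   = 'https://loadmaster.example.local'   # placeholder LoadMaster address
    $cred = Get-Credential -Message 'LoadMaster API credentials'

    # Override the check interval and retry count for the VS with index 1 only.
    # 'CheckInterval' and 'RetryCount' are assumed names; check the API reference.
    Invoke-RestMethod -Uri "$lm/access/modvs?vs=1&CheckInterval=30&RetryCount=5" -Credential $cred

    # Anything you do not override keeps tracking the global values, which still
    # live under Rule & Checking > Check Parameters.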

Conclusion

I am really happy with the enhancements introduced with FW 7.2.52. The two I highlighted above address needs I run into frequently. Not having a re-encryption SNI hostname at the SubVS level was something we could work around (see How to Re-Encrypt Multiple SNIs on the Same IP and Port with Kemp LoadMaster – PART 1 and How to Re-Encrypt Multiple SNIs on the same IP and port with a Kemp LoadMaster – PART 2). But having this feature on the SubVS makes life a lot easier.

Being able to set the Real Server health check interval, timeout, and retry count per VS helps in those scenarios where services have different needs. The global settings were always a balancing act between all the services. So this capability is very welcome as well.

It is great to see Kemp Technologies’ offerings evolve and improve. They have established themselves quite well over the years. It makes me happy that, way back, I chose them for their price/value and excellent support. I still remember the first HA pair (LM-2200) I ever deployed, in 2011, for a real-time reference GPS positioning system. That was the beginning of a very successful journey building solutions with LoadMasters, using both physical and virtual appliances.

File-Level Restore in a hardened environment

Introduction

A File-Level Restore in a hardened environment can trip you up. In Veeam Backup & Replication, the ability to do file-level restores from virtual machine backups is a handy option to have. But in secured environments with an isolated and protected Veeam backup fabric, this is not always a straightforward or fast process. Let’s look at why and make some suggestions for solutions or workarounds.

[Figure: The Veeam Backup browser for File-Level Restores]

Veeam File-Level Restore in a hardened environment

Nowadays, it is a common best practice to keep the Veeam Backup & Replication environment isolated and independent from the fabrics it protects. It is even more than a best practice: ransomware has educated many of us fast and hard on the need for this.

In such an environment, every server in the Veeam backup fabric is isolated, has its own credentials, and is not joined to or in any way dependent on the environment it protects. Dedicated accounts run the actual backup jobs, and these are not used for normal administrative or operational tasks.

The majority of Veeam functionality works just fine in such an environment, but some things need to be addressed. One such thing is a file-level restore (FLR). In a secured and isolated environment, an FLR job cannot leverage admin shares to gain access to the virtual machine to which files are being restored. To work around this, Veeam can leverage PowerShell Direct, through which it can securely restore the files into the virtual machine anyway.
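
To illustrate the mechanism (this is the PowerShell Direct primitive Veeam builds on, not Veeam’s internal code): a file copy into a guest runs over the VMBus, so no network path or admin share is involved. The VM name, paths, and credentials below are placeholders.

    # PowerShell Direct illustration: run this on the Hyper-V host where the VM lives.
    $guestCred = Get-Credential -Message 'Credentials valid inside the guest VM'

    # The session is established over the VMBus, not the network.
    $session = New-PSSession -VMName 'SQLVM01' -Credential $guestCred

    # Copy a file into the guest without any admin share.
    Copy-Item -Path 'C:\Restores\CustomerDB.bak' -Destination 'D:\Restores\' -ToSession $session

    Remove-PSSession $session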

But leveraging PowerShell Direct comes at the cost of speed. Around 7 MB/s is okay for a small number of smaller files. It becomes frustratingly slow when restoring many, larger files; minutes turn into hours. In some cases, that just won’t do. In our case, the DBA who needed to do the database restore was not happy, and the developers waiting for the restored database were not overly pleased either.

So what other options do we have?

Set up a secured file share in the protected environment

Below, we will discuss the other options you have for a File-Level Restore in a hardened environment via FLR in Veeam Backup & Replication. But first, we prepare a landing zone in the secured environment where we can copy restored files to. For this, we set up a restore share in the protected environment that only the few admins responsible for backups and restores can write to. No one else can enumerate, read, or write to that share. When a file gets restored there, they can copy it over to where the people who need it can access it. This helps protect restored files and limits access to just the people who need them. The accounts used to access that share from the isolated backup environment do not get admin permissions on the system where that share resides, or anywhere else.

Providing a restore share in the protected environment is safe. You only allow certain admins to have access to it, no one else. Since the less trusted environment (the one we protect) allows access from the more secure environment (the backup environment), this does not expose the backup environment to extra danger. At no point does this allow any access to the backup environment from the protected environment.
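
Here is a minimal sketch of such a landing zone on a Windows file server in the protected environment. The paths and group names are placeholders, and in practice you would probably also keep SYSTEM and the local Administrators group on the NTFS ACL.

    # Create the landing zone folder (all names below are placeholders).
    New-Item -Path 'D:\RestoreLandingZone' -ItemType Directory -Force | Out-Null

    # Share it so that only the dedicated restore operators group has access.
    New-SmbShare -Name 'RestoreDrop$' -Path 'D:\RestoreLandingZone' -FullAccess 'PROTECTEDDOM\RestoreOperators'

    # Mirror that on NTFS: break inheritance and grant only the operators group.
    $acl = Get-Acl -Path 'D:\RestoreLandingZone'
    $acl.SetAccessRuleProtection($true, $false)
    $rule = [System.Security.AccessControl.FileSystemAccessRule]::new('PROTECTEDDOM\RestoreOperators', 'Modify', 'ContainerInherit,ObjectInherit', 'None', 'Allow')
    $acl.AddAccessRule($rule)
    Set-Acl -Path 'D:\RestoreLandingZone' -AclObject $acl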

Use the FLR wizard to copy the files

You can use the FLR wizard to copy the file locally in the secured Veeam backup environment.

[Figure: FLR: copy the file to restore to a local folder]

From there, you can copy it to the restore share in the protected environment. This is very fast, as nothing has to pass through PowerShell Direct. On our network, we get 230 MB/s.

But what if you don’t have the space for that locally? Not everyone can have a large local restore disk on the console server. Would it not be handy to copy the files to the restore share in the protected environment directly? Well, you can try: the wizard will ask for the credentials to access the file share.

But the restore will then fail with an “Access is denied” error.

You could work around this by using the account VBR uses to actually make the backups. But that won’t do. For one, those credentials should be guarded and not used by anything or anyone but VBR. Secondly, it doesn’t work anyway.

So are you out of luck? The good news is no, you are not, so read on!

Copy the files directly from the VeeamFLR mountpoint

If you don’t have the space to restore the file locally in the secure backup environment, there is another trick. When you look at the restore log, you will see which server is the mount server. That is the node where you can find the C:\VeeamFLR folder, which is where the content of the virtual machine’s VHDX volumes is mounted. This means that on that node, you can navigate to the folders or files you want to restore.


Select the ones you want to recover and just copy them. Navigate to the restore share in the protected environment. You just need to provide the credentials of the dedicated account with write access to that share; those are the only rights you need. The copy is again fast, at 230 MB/s. That is a lot better than 7 MB/s.
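
Scripted, that copy looks something like the sketch below, run on the mount server while the FLR session is open. The server, VM, and path names are placeholders.

    # List the mounted VM volumes exposed by the open FLR session.
    Get-ChildItem -Path 'C:\VeeamFLR'

    # Map the restore share with the dedicated account; it grants nothing else.
    $cred = Get-Credential -Message 'Dedicated account with write access to the restore share'
    New-PSDrive -Name 'Restore' -PSProvider FileSystem -Root '\\fileserver.protected.local\RestoreDrop$' -Credential $cred | Out-Null

    # Copy the files to recover straight from the mount point to the share.
    Copy-Item -Path 'C:\VeeamFLR\SQLVM01\Volume2\Backups\*.bak' -Destination 'Restore:\' -Verbose

    Remove-PSDrive -Name 'Restore'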

There is an issue with this, however. In this scenario, you need access to the server where the volumes of the virtual machine get mounted. Normally, that would be the local repository where the backup files reside, and guess what: you lock those down as much as possible and don’t log on to them unless needed. You could specify another mount server, but it needs to be a server you have access to, and the data then copies over the network, which is slower. With 10 Gbps or more that’s not an issue, but at 1 Gbps it can take a while with a lot of large files. Can we fix this? Yes, with Mount to Console!

Mount to Console to the rescue

But hey, this would not be Veeam if they did not have yet another option up their sleeve! You can use the Mount to Console button on the ribbon of the Veeam Backup browser to create an extra mount point on your VBR console server. From there, you can copy the files to the restore file share in the protected environment a lot faster as well. All you need for this is a local user (a non-administrator account) with the Veeam Restore Operator role.

[Figure: Mount to Console to the rescue]
[Figure: Et voilà, you can browse the files even if you don’t have space on your console server]

See https://helpcenter.veeam.com/docs/backup/hyperv/guest_restore_save_hv.html?ver=100 for more information on this.

The correct answer is Veeam Enterprise Manager

Veeam Enterprise Manager addresses the challenges we worked around here. But you’ll need an Enterprise (Plus) license to get full-featured use of it. It is nice, as it takes care of the above scenario, and you can assign restore rights to people without compromising your backup environment’s operational security. So yes, use Enterprise Manager when possible. If not, see the above workarounds for other options.

Conclusion

Having some insight into how Veeam Backup & Replication works can come in handy at times. It also helps if you can think a bit outside the box and act on those insights to come up with alternatives or workarounds. That is exactly what we did here, with great results. The DBA could do his restores faster, and the devs got the version of the database they needed. Do note that the correct answer lies in Veeam Enterprise Manager but, lacking that, you now have some options. I hope these insights into a File-Level Restore in a hardened environment help you out someday.

Veeam Live 2020

Attend Veeam Live 2020

I am attending Veeam Live 2020 from the comfort of my home this year. I can stay safe and still learn, connect, and investigate new technologies and options.

This works for me 🙂

Allow me to invite you to Veeam Live 2020. This year, the content focuses on “Cloud Data Best Practices”. The online event takes place on October 20th, 2020, and runs for a full day.
Veeam is gathering its global talent pool to present at this event. That talent is both internal and external to Veeam. Some of my fellow Veeam Vanguards are presenting and sharing their expertise.

With names like Anton Gostev, Danny Allan, Rick Vanover, Michael Cade, Anthony Spiteri, Dave Kawula, Andrew Zhelezko, Dmitry Kniazev, David Hill, Karinne Bessette, Kirsten Stoner, Dave Russell, Melissa Palmer, Sander Berkouwer, Drew J. Como, and so many others, the experience and expertise on offer are second to none. Many industry and customer experts are also joining in to share their insights.

As Veeam states:

“At Veeam Live, you’ll gain data management guidance you can activate today. You’ll learn how to up your data protection game across your enterprise, connect with like-minded professionals, set the strategy right for your organization, and be part of the future of Cloud Data Management™.”

[Figure: Veeam Live 2020, October 2020 – Join for free]

So, no matter what level you are at or what part you play in managing and safeguarding the data of your organization, there are things to explore and learn.

Topics

Topics to be discussed are Multi-Cloud Data Management, AWS- and Azure-Native Backup, Office 365 Backup, Ransomware Best Practices, Kubernetes Backup and App Mobility. Check out the full agenda to find the topics and sessions that are of most interest to you.

On all those subjects, Veeam is actively developing and releasing new capabilities. Just think about their recent acquisition of Kasten. They are also sharing information about Veeam Backup & Replication v11, which is currently in beta.

Get your questions answered

Do you want to find out how you can make your solutions more efficient? Need to figure out the biggest threats and opportunities there are in today’s technical, business, and security landscape? Want to learn what new technologies you need to keep an eye on and learn about? Is the evergrowing ransomware threat keeping you awake at night?

Free for all

The event is free for all. You can register here.

[Figure: Join Veeam Live 2020 for free]

Join us from the comfort of your own (home) office or couch. It all works. Just bring an open mind and a willingness to listen and learn. The interesting thing about Veeam is that they sell solutions that cater to the real, existing, and emerging needs of their (potential) customers. They keep it real and have a tradition of explaining why they develop and bring their solutions and offerings to the market. It makes for educational and insightful sessions and events.

So now you know the secret of how I stay on top of things in the data protection and management world. I listen. Not to the sound of crickets (that’s for vacations), but to people who are smart, experienced, and have a proven track record of delivering value in a very competitive and ever-changing landscape. So, now that you also know how to stay up to speed, all that is left to do is register today. You are very welcome.

Project Bicep, an ARM Domain-Specific Language

Project Bicep

Project Bicep? Biceps? Do you mean like bicep curls? Muscles? What does this have to do with ARM or ARM templates? Well, to master ARM templates, we can use a little extra power. It’s a joke so bad it’s good, as Microsoft’s Alex Frankel put it.

[Figure: Impressive power, but not the kind of biceps we are talking about (image by Eduardo Romero on Pexels.com)]

Over the years, I have noticed a couple of challenges when it comes to Infrastructure as Code (IaC). It is not an easy thing to achieve in practice, not only in the cloud but anywhere, and that is a significant hurdle to adopting IaC. Maybe you have the same experiences.

Azure Infrastructure as Code

In Azure, one of the biggest challenges has been the learning curve when it comes to writing the JSON: the “human-readable” data interchange format that brings ARM and ARM templates to life. It isn’t something you pick up super quickly, and it turns out to be harder and harder to use as things become more complex and diverse.

Other challenges are related to managing the templates and getting pipelines set up reliably and consistently for all resources in Azure tenants, subscriptions, resource groups, etc. It is not something I would call inviting or easy to do.

Then there are the real-world realities we need to handle. There is a ton of “stuff” out there where deployment, configuration, orchestration, and change happen in different ways. How does one onboard all of that into an IaC process without too much risk of breaking things? Unfortunately, this is tedious and fragile.

We like Infrastructure as Code

For many people, the above is a bit discouraging. Don’t get me wrong: people see, understand, and like the idea of Infrastructure as Code. They just have a hard time getting there. There are all sorts of tools for various environments and needs. We have all at least heard of, and probably looked at, Chef, Ansible, Puppet, or Terraform. There are many others still, but I just listed the ones that have been getting serious attention over the last four years. Choosing one means losing the benefits of another. Using them all is an operational, skills, and management challenge. They all have their strengths and weaknesses. The main differences lie in whether they take a procedural, declarative, or orchestral approach to getting the job done.

While orchestration is very popular, it does feel a bit like a failure, but that is “emotion”. Why? Well, because in the end we cannot manage change very well, so we end up throwing everything away and replacing it with a new deployment that has the changes in it. Everything is cattle now, slaughtered and replaced when it doesn’t function as expected or needs to change. That works well for lightweight and fast deployments. It is somewhat painful in more massive deployments. But still, when looking at the results and at preventing configuration drift, it gets the job done.

But even the best tools have issues that can be best described as “death by a thousand cuts.” The concept is simple, but that doesn’t make it easy to do!

Microsoft has heard this feedback

We like Infrastructure as Code. We just find it too hard to do well, especially if it is not the bulk of your work and you are not a guru at it like Stanislav Zhelyazkov.

When Microsoft asked on Twitter, “What do you think is a knowledge gap for traditional #ITPros when it comes to transitioning to the cloud” I replied, “The biggest skills hurdle is related to IaC. ARM is tedious and hard to learn for many, yet a cornerstone … Fix that, and we can move 10 faster in any cloud journey.”.

[Figure: My honest reply to Microsoft’s Anna Chu]

The above is not new feedback, far from it. But recently (at Build 2020, if I recall correctly) they talked publicly about what they are doing about it. Yes, Microsoft is addressing this challenge. Just last week, I saw Project Bicep go public on GitHub!

So what is project Bicep?

Bicep is a project Microsoft announced at Build 2020 (May). It delivers what Microsoft calls Transparent Abstraction over Azure ARM and ARM templates. ARM ==> Bicep, get the joke? OK, never mind. The name is terrible to search for, however; you get a lot of irrelevant hits.

It has a couple of goals, as you can learn from the video; a small syntax sketch follows the list below.

  • Human-friendly: readability and comprehensibility are essential. You have to be able to understand what you read and write without much effort.
  • What you write will generate, or compile, the JSON for you. Microsoft now seems to prefer the term “transpiles” for this, where earlier they sort of made the analogy of JSON being the Intermediate Language (IL) of IaC.
  • If you think of JSON as an IL (as MSFT suggests), it is easy to see that, just like with .NET, you might see different languages used to achieve the same goal. But for now, that is not the goal. The goal is a working, functional declarative language that is suitable for all kinds of users. We’ll see where this ends up.
  • It focuses on modularity, so no, it will not create giant ARM templates, but modular ones. That means there is multi-file support.
  • It should evolve at the speed of Azure, so no waiting for six months to get new functionality implemented. Microsoft calls this “transparent abstraction.”
  • They plan a migration/conversion/export tool for existing ARM JSON!
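
To give you a feel for the readability goal, here is a tiny sketch in the alpha syntax of the public tutorial. Expect the details (even comment support) to change while the project is this young; bicep build main.bicep transpiles it into a full ARM JSON template.

    // A minimal Bicep sketch (alpha syntax, expect changes).
    // Transpile to an ARM JSON template with: bicep build main.bicep
    param location string = 'westeurope'

    resource stg 'Microsoft.Storage/storageAccounts@2019-06-01' = {
      name: 'stbicepdemo001' // storage account names must be globally unique
      location: location
      kind: 'StorageV2'
      sku: {
        name: 'Standard_LRS'
      }
    }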

Read up on Project Bicep over at https://github.com/Azure/bicep. It clarifies the current state of what Bicep is and is not. I hope this moves fast and delivers better tooling to make Infrastructure as Code a better, more accessible, and more achievable goal for all of us.

WARNING

Bicep is at a super early stage of its existence. This is the earliest alpha you can imagine. It is going to break, barf, and probably puke on your Azure stuff once in a while. So please, DO NOT USE THIS IN PRODUCTION. Right now, it is only there for you to get a feel for it, tinker around, and give some feedback. This is your only and final warning.

In all honesty, it is very raw, and as a (non-Linux) hardcore dev this is not love at first sight for me, as I had hoped to use PowerShell for this. I hope it will mature, and that I will grow to love it and like using it much more than ARM. Anyway, dive into the Bicep tutorial to see what you think.