Since diving into the Veeam Backup & Replication v11 Linux hardened repository I have started to use XFS in bite-size deployments to gain experience with it. One of the things that will certainly come in handy is extending a Veeam Repository XFS file system. In this blog post, I show how to do that.
Mind you, I am doing this with a virtual machine on Hyper-V (Windows Server 2019) in the lab, so I cannot cover every permutation of hardware and storage controllers you might find. Still, the procedure will not differ that much.
Determine the size of the current disk.
sudo lsblk
Ours is the 20 TB disk, sdd, a SCSI disk.
Now take note of the bytes and sectors
sudo fdisk -l
We just note the size, bytes, and sectors so we can compare them after we have extended the disk.
Expand the disk
In the virtual machine settings, I extend the virtual disk I want to grow by the required capacity.
Let’s add 10 TB and make it 30 TB in total.
In real life, that might mean growing a RAID controller’s virtual disk by adding physical disks to it, expanding the volume on a storage array, or simply adding disks to the local server and adding them to the software-based RAID solution you use.
The virtual machine will pick up the extra capacity right away. For our Ubuntu 20.04.1 OS to see it, we’ll need to rescan the SCSI buses for changes. In a virtual machine, this can be done via rescan-scsi-bus.sh, available in the scsitools package, which will need to be installed if it is not there already.
Use the -s option, as that will also report resized disks.
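Roughly, on our Ubuntu lab VM, that amounts to the following (package name taken from the scsitools reference above):

sudo apt install scsitools   # only needed if rescan-scsi-bus.sh is not present yet
sudo rescan-scsi-bus.sh -s   # -s also reports resized disks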
Yup, that’s our disk on SCSI controller 1, location 0.
Now let’s check the disk size again
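The same two commands as before will do:

sudo lsblk
sudo fdisk -l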
Yes, lsblk shows 30 TB and fdisk -l confirms it. Note the new bytes and sector values; they have gone up.
Extend the XFS volume to use the unallocated space
Now we need our XFS volume to use the unallocated capacity on this disk. For that, we use xfs_growfs with the -d option, as this will grow the file system to the largest possible size, 30 TB in our case.
Note: If you run the below command with -n instead of -d, it gives you the current information on your XFS volume without extending the file system yet.
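As a sketch, assuming a hypothetical mount point of /mnt/veeamxfsrepo01 for our repository (swap in your own), the command looks like this:

sudo xfs_growfs -d /mnt/veeamxfsrepo01   # grow the data section to the maximum size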
Note: What I did find is that if you just expand the disk and then extend the XFS file system, it also works. It seems to just work without rescanning the disk after extending it! The disk size in df -h will then show the extra space as well.
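A quick df -h against that same hypothetical mount point confirms the new size:

df -h /mnt/veeamxfsrepo01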
Conclusion
That was it. Short and sweet. There is not much to it once you know how to do it. One thing to remember is that you cannot shrink an XFS file system. So, as always, start smaller and grow when needed. Always leave spare capacity to work with when needed. Yes, even in 2021 this is advice to live by in the storage world. For Veeam this means that multiple smaller repositories or extents give you more wiggle room than fewer very large ones. Leave capacity in reserve, either in a spare repository/extent or unallocated. This, especially combined with a scale-out backup repository in Veeam, will allow you to work yourself out of most capacity pickles you might find yourself in.
As I have started to use XFS in bite-size deployments to gain experience with it, I wanted to write up some of the tooling I found to manage XFS file systems. Here’s how to check, repair, and defragment an XFS volume.
My main use case for XFS volumes is on hardened Linux repositories with immutability to use with Veeam Backup & Replication v11 and higher. It’s handy to be able to find out if an XFS volume needs repairing and, if it does, repair it. Another consideration is fragmentation. You can also check that and defrag the volume.
Check XFS Volume and repair it
xfs_repair is the tool you need. You can both check if a volume needs repair and actually repair it with the same tool. Note that xfs_check has been deprecated and may not even be available anymore.
To work with xfs_repair you have to unmount the filesystem, so there will be downtime. Plan for a maintenance window.
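In our lab, that means unmounting the repository volume first (your device will differ):

sudo umount /dev/sdc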
To check the file system use the -n switch
sudo xfs_repair -n /dev/sdc
A dry run with xfs_repair -n
There is not much to do here, but let’s now run the repair anyway.
sudo xfs_repair /dev/sdc
Repairing an XFS file system
The output is similar to that of the check we did earlier, as the check is basically a dry run of what will be done. In this case, nothing.
Now, don’t forget to mount the file system again!
sudo mount /dev/sdc /mnt/veeamxfsrepo01-02
Check a volume for fragmentation and defrag it
Want to check the fragmentation of an XFS volume? You can, this time with xfs_db. But again, the file system has to be unmounted for that, or you will get the error “xfs_db: can’t determine device size”. To check for fragmentation, run the following command against the storage device/file system.
sudo xfs_db -c frag -r /dev/sdc
A lab simulation of sudo xfs_db -c frag -r /dev/sdc – yeah, I know it’s meaningless 😉
Cool, now we know that we can defrag it online. For that we use xfs_fsr.
sudo xfs_fsr /dev/sdc /mnt/veeamxfsrepo01-02
There is nothing to do in our example
xfs_scrub – the experimental tool
xfs_scrub is a more recent addition, but the program is still experimental. The good news is it will check and repair a mounted XFS file system. At least it sounds promising, right? It does, but unfortunately it doesn’t work on Ubuntu 20.04.1 LTS.
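For reference, the invocation itself is simple; assuming our repository mount point from above, this is what it would look like once it works:

sudo xfs_scrub -n /mnt/veeamxfsrepo01-02   # check only, no repairs
sudo xfs_scrub /mnt/veeamxfsrepo01-02      # check and repair while mounted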
No joy – still a confirmed bug – not assigned yet, importance undecided. Not yet, my friends.
Conclusion
That’s it. I hope knowing a bit more about the tooling helps you when you decide to take XFS for a spin for your storage needs. As said, for me, the main use case is hardened Linux repositories with immutability to use with Veeam Backup & Replication v11. In a Hyper-V environment, of course.
A File-Level Restore in a hardened environment can trip you up. In Veeam Backup & Replication the ability to do file-level restores from virtual machine backups is a handy option to have. In secured environments with an isolated and protected Veeam backup fabric, this is not always a straightforward or fast process. Let’s look at this and make some suggestions for a solution or workarounds.
The Veeam Backup browser for File-Level Restores
Veeam File-Level Restore in a hardened environment
Nowadays, it is a common best practice that the Veeam Backup & Replication environment is isolated and independent from the fabrics it protects. It is even more than a best practice; ransomware has educated many of us fast and hard on the need for it.
In such an environment, every server in the Veeam backup fabric is isolated, with its own credentials, and not joined to or in any way dependent on the environment it is protecting. Dedicated accounts are used to run the actual backup jobs, and these are not used for normal administrative or operational tasks.
The majority of Veeam functionality works just fine in such an environment. But there are some things that need to be addressed. One such thing is a file-level restore (FLR). In a secured and isolated environment, an FLR job cannot leverage admin shares to gain access to the virtual machine to which files are being restored, or to access the data in the backup. To work around this, Veeam can leverage PowerShell Direct, by which it can securely restore the files into the virtual machine anyway.
But leveraging PowerShell Direct comes at the cost of speed. 7 MB/s is okay for a small number of smaller files. It becomes frustratingly slow when it comes to restoring many and larger files. Minutes turn into hours. In some cases, this just won’t do. In our case, the DBA who needed to do the database restore was not happy and the developers waiting for the restored database were not overly pleased either.
So what other options do we have?
Set up a secured file share in the protected environment
Below, we will discuss the other options you have for a file-level restore in a hardened environment with Veeam Backup & Replication. But first, we prepare a landing zone in the secured environment where we can copy restored files to. For this, we set up a restore share in the protected environment that only the few admins responsible for backups and restores can write to. No one else can enumerate, read, or write to that share. When a file gets restored there, they can copy it over to where the people that need it can access it. This helps protect restored files and limits access to just the people that need to get to them. The accounts used to access that share from the isolated backup environment do not get admin permissions on the system where that share resides or anywhere else.
Providing a restore share in the protected environment is safe. You only allow certain admins to have access to it, no one else. Since the less trusted environment (the one we protect) allows access from a more secure environment (the backup environment), this does not expose the backup environment to extra danger. At no point does this allow any access to the backup environment from the protected environment.
Use the FLR wizard to copy the files
You can use the FLR wizard to copy the file locally in the secured Veeam backup environment.
FLR: copy the file to restore to a local folder
From there, you can copy it to the restore share in the protected environment. This is very fast; on our network we get 230 MB/s.
But what if you don’t have the space for that locally? Not everyone can have a large local restore disk. Would it not be handy to copy it to the restore share in the protected environment directly? Well, you can try that in the wizard, and it will ask for the credentials to access the file share.
But it will fail to read and restore the backups; you get an “Access is denied” error.
You could try to work around it by using the account VBR uses to actually make the backups. But that won’t do. For one, those credentials should be guarded and not used by anything or anyone but VBR. Secondly, it doesn’t work anyway.
So are you out of luck? The good news is no, you are not, so read on!
Copy the files directly from the VeeamFLR mountpoint
If you don’t have the space to restore the file locally in the secure backup environment, there is another trick. When you look at the restore log, you will see what the mount server is. That is the node where you can find the C:\VeeamFLR folder, which is where the content of the virtual machine’s VHDX volumes is mounted. This means that on that node you can navigate to the folders or files you want to restore.
Select the ones you want to recover and just copy them. Navigate to the restore share in the protected environment. You just need to provide the credentials of the dedicated account with write access to that share. Those are the only rights you need on that share. The copy is again fast at 230 MB/s. That is a lot better than 7 MB/s.
There is an issue with this, however. In this scenario, you need access to the server where the volumes of the virtual machine get mounted. Normally, that would be the local repository where the backup files reside and, guess what, you lock those down as much as possible and don’t log on to them unless needed. You could specify another mount server, but then it needs to be a server you have access to, and it copies the data over the network, which is slower. With 10 Gbps or more that’s not an issue, but with 1 Gbps it can take a while with a lot of large files. Can we fix this? Yes, with Mount to Console!
Mount to Console to the rescue
But hey, this would not be Veeam if they did not have yet another option up their sleeve! You can use the Mount to Console button on the ribbon of the Veeam Backup browser to create an extra mount point on your VBR console server. From there you can copy the files to the restore file share in the protected environment a lot faster as well. All you need for this is a local user (non-administrator account) with the Veeam Restore Operator role.
Et voilà, you can browse the files even if you don’t have space on your console server
Veeam Enterprise Manager addresses the challenges we worked around here. But you’ll need Enterprise (Plus) to get full-featured use of that. It is nice, as it takes care of the above scenario and you can assign restore rights to people without compromising your backup environment’s operational security. So, yes, use Enterprise Manager when possible. If not, see the above workarounds for other options.
Conclusion
Having some insight into how Veeam Backup & Replication works can come in handy at times. It also helps if you can think a bit outside the box and act on those insights to come up with some alternatives or workarounds. This is exactly what we did here, with great results. The DBA could do his restores faster, and the devs got the version of the database they needed. Do note that the real answer lies in Veeam Enterprise Manager but, lacking that, you now have some options. I hope these insights into a file-level restore in a hardened environment help you out someday.
I am attending Veeam Live 2020 from the comfort of my home this year. I can stay safe and still learn, connect, and investigate new technologies and options.
This works for me 🙂
Allow me to invite you to Veeam Live 2020. This year the content focus area is “Cloud Data Best Practices”. The online event takes place on October 20th, 2020 for a full day. Veeam is gathering its global talent pool to present at this event. That talent is both internal and external to Veeam. Some of my fellow Veeam Vanguards are presenting and sharing their expertise.
With names like Anton Gostev, Danny Allan, Rick Vanover, Michael Cade, Anthony Spiteri, Dave Kawula, Andrew Zhelezko, Dmitry Kniazez, David Hill, Karinne Bessette, Kirsten Stoner, Dave Russell, Melissa Palmer, Sander Berkouwer, Drew J. Como and so many others, the experience and expertise on offer are second to none. Many industry and customer experts are also joining in to share their insights.
As Veeam states:
“At Veeam Live, you’ll gain data management guidance you can activate today. You’ll learn how to up your data protection game across your enterprise, connect with like-minded professionals, set the strategy right for your organization, and be part of the future of Cloud Data Management™.”
Veeam Live 2020 – October 2020 – Join for free
So no matter what level you are at or what part you play in managing and safeguarding the data of your organization, there are things to explore and learn.
Topics
Topics to be discussed are Multi-Cloud Data Management, AWS- and Azure-Native Backup, Office 365 Backup, Ransomware Best Practices, Kubernetes Backup and App Mobility. Check out the full agenda to find the topics and sessions that are of most interest to you.
On all those subjects Veeam is actively developing and releasing new capabilities. Just think about their recent acquisition of Kasten. They are also sharing information about Veeam Backup & Replication V11 which is currently in Beta.
Get your questions answered
Do you want to find out how you can make your solutions more efficient? Need to figure out the biggest threats and opportunities there are in today’s technical, business, and security landscape? Want to learn what new technologies you need to keep an eye on and learn about? Is the ever-growing ransomware threat keeping you awake at night?
Join us from the comfort of your own (home) office or couch. It all works. Just bring an open mind, a willingness to listen and learn. The interesting thing about Veeam is that they sell solutions that cater to real, existing, and emerging needs of their (potential) customers. They keep it real and have a tradition of explaining why they develop and bring their solutions and offerings to the market. It makes for educational and insightful sessions and events.
So now you know the secret of how I stay on top of things in the data protection and management world. I listen. Not to the sound of crickets (that’s for vacations) but to people who are smart, experienced, and have a proven track record of delivering value in a very competitive and ever-changing landscape. So, now you also know how to stay up to speed; all that is left to do is register today. You are very welcome.