Check/repair/defragment an XFS volume

Introduction

As I have started to use XFS in bite-size deployments to gain experience with it, I wanted to write up some of the tooling I found to manage XFS file systems. Here’s how to check, repair, and defragment an XFS volume.

My main use case for XFS volumes is hardened Linux repositories with immutability for use with Veeam Backup & Replication v11 and higher. It’s handy to be able to find out if an XFS file system needs repairing and, if it does, to repair it. Another consideration is fragmentation. You can check that as well and defragment the volume if needed.

Check an XFS volume and repair it

xfs_repair is the tool you need. You can both check whether a volume needs repair and actually repair it with the same tool. Note that xfs_check has been deprecated and may not even be available anymore.

To work with xfs_repair you have to unmount the filesystem, so there will be downtime. Plan for a maintenance window.
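On my lab repository that step looks like this (a minimal sketch; the mount point is the one from my lab, so substitute your own):

# unmount the XFS file system before running xfs_repair
sudo umount /mnt/veeamxfsrepo01-02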

To check the file system, use the -n (no modify) switch:

sudo xfs_repair -n /dev/sdc
[Figure: a dry run with xfs_repair -n]

There is not much to do here, but let’s now run the repair anyway.

sudo xfs_repair /dev/sdc
[Figure: repairing an XFS file system]

The output is similar to that of the check we did, as checking for anything to repair is basically a dry run of what the actual repair will do. In this case, nothing.

Now, don’t forget to mount the file system again!

sudo mount /dev/sdc /mnt/veeamxfsrepo01-02
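If you want to script the whole cycle, you can: when run with -n, xfs_repair returns exit status 1 if it detected corruption and 0 if the file system is clean. A minimal sketch, using my lab device and mount point, could look like this:

#!/bin/bash
# Check an XFS volume and repair it only when needed (lab device and mount point).
DEV=/dev/sdc
MNT=/mnt/veeamxfsrepo01-02

sudo umount "$MNT"                  # xfs_repair needs the file system unmounted
if sudo xfs_repair -n "$DEV"; then  # dry run: exit status 0 = clean, 1 = corruption found
    echo "File system is clean, nothing to repair."
else
    echo "Corruption detected, repairing..."
    sudo xfs_repair "$DEV"
fi
sudo mount "$DEV" "$MNT"            # don't forget to mount the file system again!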

Check a volume for fragmentation and defrag it

Want to check the fragmentation of an XFS volume? You can, this time with xfs_db. But again, the file system has to be unmounted for that, or you will get the error “xfs_db: can’t determine device size”. To check for fragmentation, run the following command against the storage device / file system.

sudo xfs_db -c frag -r /dev/sdc
[Figure: a lab simulation of sudo xfs_db -c frag -r /dev/sdc – yeah, I know, it’s meaningless 😉]

Cool. Now that we know the fragmentation level, we can defragment the volume, and that even works online: xfs_fsr runs against a mounted file system, so mount it again first.

sudo xfs_fsr /mnt/veeamxfsrepo01-02
[Figure: there is nothing to do in our example]
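xfs_fsr has a couple of handy switches as well. A small sketch based on the man page: -v gives verbose output and -t limits how long a run may take in seconds (the default is 7200, i.e. two hours), which helps keep a defrag pass inside a maintenance window:

# verbose defrag run, capped at one hour
sudo xfs_fsr -v -t 3600 /mnt/veeamxfsrepo01-02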

xfs_scrub – the experimental tool

xfs_scrub is a more recent addition, but the program is still experimental. The good news is that it will check and repair a mounted XFS file system. That sounds promising, right? It does, but it doesn’t work (Ubuntu 20.04.1 LTS).

No joy – it is still a confirmed bug, not assigned yet, importance undecided. Not yet, my friends.
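For reference, and in case it does work on your distribution, the invocation itself is straightforward. A sketch based on the man page, run against my lab mount point, where -n only reports problems without fixing anything:

# report-only check of the mounted file system
sudo xfs_scrub -n /mnt/veeamxfsrepo01-02
# check and, where possible, repair the mounted file system
sudo xfs_scrub /mnt/veeamxfsrepo01-02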

Conclusion

That’s it. I hope this helps you, now that you know a bit more about the tooling, when you decide to take XFS for a spin for your storage needs. As said, for me the main use case is hardened Linux repositories with immutability for use with Veeam Backup & Replication v11 – in a Hyper-V environment, of course.

Veeam FastSCP for Microsoft Azure IaaS went into beta

Veeam is also keeping us on our toes here at Ignite in Chicago. They just publicly announced the beta of a new free tool that looks extremely handy: Veeam FastSCP. It’s a tool that enables you to copy files in and out of Azure virtual machines without the need for a VPN. People who have been working with IaaS in Azure for labs or production know that even tasks that are benign on premises can be a bit convoluted in the cloud without a VPN or ExpressRoute to Azure.


Until today, our options without a VPN (to leverage file shares / SMB) were to either use RDP, which gives us two ways of moving files:

  1. Direct copy/paste (limited to 2GB)
  2. Mapped local drives in your VM

or leverage the portability of a VHD.

So why is Veeam FastSCP a big deal? Well, the virtual hard disk method is painstakingly tedious. Putting data into a VHD and moving that around to get data in and out of a virtual machine is a workable workaround but hardly a great solution. It works and can be automated with PowerShell, but you only do it because you have no other choice.

The first RDP method (copy/paste) is fast and easy, but it is hard to automate and it’s a bit silly to launch an RDP session just to copy files. It also has a file size limit of 2 GB; anything bigger will just throw an error.


The second RDP method is to leverage your mapped local drives in the VM, but that’s not great for automation either.


Sure, you could start running FTPS or SFTP servers in all your VMs, but that’s borderline silly as well.

Veeam FastSCP for Microsoft Azure

Veeam is offering this as a quick, secure, and easy tool to copy files in and out of Azure virtual machines without the need for a VPN or turning your virtual machine into a free target for the bad people in the world. Do note this is not meant for blob storage or anything else but an Azure virtual machine; there are plenty of tools to go around for blob storage already.


The tool connects to the PowerShell endpoint port on your public IP address. No VPN, third-party tool, or extra encryption setup is required; it’s all self-contained. Inside the VM it’s based on WinRM.


This will not interfere with your normal RDP or PowerShell sessions at all, so no worries there. When using this tool, there is also no file size limit to worry about, as there is with copy/paste over RDP.

Via the GUI, you connect to the virtual machine with your credentials. After that, you can browse the file system of that VM and copy data in and out. All of this is secured over SSL.


A nice thing is that you don’t need to keep the GUI open after you’ve started the copy; just close it and things will get done. No babysitting required.

It’s all wizard-driven, so it’s very easy, and to top it all off you can schedule jobs, making it a perfect little automation tool that bypasses the limitations we’re facing right now.


Some use cases

Anyone who has an IaaS lab in Azure will appreciate this tool, I think. It’s quick and easy to get files in and out of your VMs, and you can schedule it.

Backups. I regularly create a backup of my WordPress blog and its MySQL database to file. While these are protected in the cloud themselves, I love backup in depth and having extra options in case plan A fails. Using the built-in scheduler, I can now easily download a copy of those files just in case Azure goes south for longer than I care to suffer. Having an off-cloud copy is just another option to have when Murphy comes knocking.

This is another valuable tool in my toolkit, courtesy of Veeam, and all I can say is: thank you! To get it, you can register here and download the beta bits.

A Fool With A Tool Is Still A Fool

Aidan Finn started things off with a cool blog post visually explaining how cool Hyper-V engineers are. This prompted a funny response by Marcel van den Berg concerning the technology used. Well, those blog posts inspired me to demonstrate, with the help of some visual aids, an issue popping up in certain ICT projects to our business audience. That audience might not always be IT savvy, but I think we can show them what goes wrong in the ICT world every now and then, especially when experience, context, and realism are missing in a team. For this purpose I’ll use technology everyone knows from TV, the movies & the news. That way even the technically uninitiated (management) will get the drift.

So what goes wrong with a certain percentage of IT implementations today? Well, they tend to look like this:

Over-the-top deployments, using every option & technology known to man, that become unmanageable to a “ridiculous” level and end up reducing operational capabilities and reliability. These projects cost vast amounts of money and are very costly in time / billable hours.

Look, we have a lot of features at our disposal. That’s great, as this gives us options to build the best solution, in a cost-effective way, for the business need that has to be addressed. But we don’t have to use everything everywhere just because we can. Look at the monster setup above: all pretty neat tools & options in themselves, but it just won’t work this way. Do note that this is not just a simple case of overkill. That would be more like using a tank where a rifle suffices. This is using the entire contents of the toolbox when only a few tools are needed.

Constructions like this only result in final proof that TCO stands for “Totally Cost Oblivious” and ROI for “Running On Instinct”. These configurations are, more often than not, bought & configured by wannabe “professionals” who do so in a vain attempt to gain some instant credibility. The “hey, it sure does look impressive” approach, so to speak. These people can’t hack it anyway and often look like this guy.

[image]

He’s got the gear, he’s got the tools. But there is just no way poor “Bubba” can figure out what’s wrong. Really, he can’t.

[image]

Now, a good engineer (like the one below) knows how to use the correct technology where and when needed, in a professional manner. He or she does so in the most cost- & result-effective way.

[image]

And it’s not only implementations where things go wrong; stuff also breaks. That’s where a secondary (a.k.a. a backup) comes in. We all know that, no matter how charmed the lives we lead are, inevitably, luck runs out at times. Yes, Murphy is out there, and bad things happen to the best of us. So tell me, when that luck runs out, who do you want to come take care of business and save you? Bubba or the guy above? In ICT, that’s exactly the same question you need to answer to address the challenges your business faces. Great solutions are, even in this era of commoditization, seldom bought off the shelf as a one-size-fits-all package; they are custom built to specs for the job at hand.