You cannot shrink a VHDX file because you cannot shrink the volume on the virtual disk

Introduction

I have discussed the capability of resizing a VHDX online in the blog post Online Resizing Of Hyper-V Virtual Disks Is Possible in Windows 2012 R2. It’s a good resource on how to do so successfully.

Despite this you might still run into issues. As mentioned in the above blog post, you need unallocated disk space at the end of the disk inside the virtual machine or you cannot shrink the VHDX at all. This situation is shown in the screenshot below.

clip_image001

In most cases this will call for you to shrink the volume inside your virtual machine first, as all space might be allocated to the volume. For this article we’ve set up a lab virtual machine to recreate the issue. The virtual machine initially had the page file disabled. We copied lots of data into it and then created shadow copies. Only then did we create a 10GB fixed-size page file, to make sure it sat somewhere in the beginning of the volume space. All of this was done to simulate a real world situation with a lot of data churn over time. We then shift-deleted the data. We now take a look at the disk, where we need to shrink volume C in order to be able to shrink the virtual disk itself.

clip_image002

For the shrinking of a volume to succeed you need free space in that volume. But sometimes you cannot shrink the volume as much as you’d like, or even at all, despite the amount of free space you see in it, as in the figure below.

clip_image003

It seems we should be able to free up to 26GB. But when you try to shrink that volume you see this:

clip_image004

Only 11GB of available shrink space. Not quite what you’d expect based on the free space on the volume! We’ve seen this a couple of times before with virtual servers in real life. The reasons are actually well known, although more often associated with your PC at home than with virtualized servers. So how do we deal with this?
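
If you want to check those numbers before opening the GUI, PowerShell can tell you the same thing. A minimal sketch, assuming the volume is C: and using the Storage module cmdlets available since Windows Server 2012:

```powershell
# Ask Windows how far the partition can shrink right now
$supported = Get-PartitionSupportedSize -DriveLetter C
$partition = Get-Partition -DriveLetter C

# SizeMin is the smallest supported size; the difference with the
# current size is the maximum shrink space available today
"Current size  : {0:N2} GB" -f ($partition.Size / 1GB)
"Shrinkable by : {0:N2} GB" -f (($partition.Size - $supported.SizeMin) / 1GB)
```

If the shrinkable amount is far below the free space on the volume, you’re hitting exactly the unmovable-file problem discussed below.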

Dealing with a volume with free space that cannot shrink

The issue at hand is most probably that you have files at the end of that volume on your virtual hard disk that prevent the volume from being shrunk. There are a couple of tips and tricks associated with getting this fixed.

Defragment the volume

As long as files are movable, fragmentation by itself should not prevent resizing a volume. But it never hurts to run a defragmentation first, and it will create contiguous free space at the end of the volume that can be shrunk. What’s more important here is that defragmentation cannot move all files; some are unmovable. These files can sit scattered all over the place and might prevent you from shrinking the volume.

On modern Windows operating systems defragmentation is part of the storage optimization maintenance job. It also runs UNMAP which informs the virtual hard disk of free space due to data having been deleted.

clip_image006
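
If you want to kick this off manually rather than wait for the maintenance job, the built-in Optimize-Volume cmdlet does both the defragmentation and the UNMAP (retrim) pass. A minimal sketch, assuming the volume to shrink is C::

```powershell
# See whether a defragmentation pass is worthwhile at all
Optimize-Volume -DriveLetter C -Analyze -Verbose

# Defragment and send UNMAP/TRIM hints to the virtual disk in one go
Optimize-Volume -DriveLetter C -Defrag -ReTrim -Verbose
```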

That’s all good and it means that you don’t even need to run defragmentation manually. But how can we deal with these unmovable files?

There are free and commercial tools that can defragment unmovable files during a boot-time defragmentation run. They can even defragment and move system files that are otherwise impossible to move. A commercial tool can do offline defragmentation of your page file and other system files. By doing the defragmentation during boot time they can handle NTFS metadata files on the %systemdrive% volume (usually C:) such as $MFTMirr, $LogFile, $Volume, $Bitmap, $Boot, and $BadClus:$Bad.

Not all unmovable files can be dealt with this way, however. You must realize that since Windows Vista the contents of the System Volume Information directory, where Windows stores System Restore Points (shadow copies), are completely off-limits to defragmentation software.

As with many things there are manual workarounds.

Remove any “previous versions” or restore points created by shadow copies

Space efficient as these shadow copies are for data protection, they can and do consume space on the disk you’re trying to shrink. As mentioned above, we cannot deal with them via defragmentation. Getting rid of them temporarily can help in this case. Just enable them again if needed when you’re done resizing the volume.

clip_image008

Tip: You can locate the shadow copies on a different disk. That’s worth considering when they grow large, for both space and performance reasons.
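
If you prefer the command line over the GUI shown above, the built-in vssadmin tool can do the same. A hedged sketch (note that the delete verb is only available on Windows Server editions, and deleting shadow copies removes your restore points, so only do this when you can afford to):

```powershell
# List the shadow copies that exist for the volume we want to shrink
vssadmin list shadows /for=C:

# Delete them all; re-enable shadow copies once the resize is done
vssadmin delete shadows /for=C: /all
```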

Could the hibernation file cause issues?

We are discussing resizing a virtual hard disk, and in a virtual machine you won’t find a hiberfil.sys file. This only comes into play when shrinking a volume on physical hardware. Hibernation is not supported or even available inside a guest OS. You can see this if you try to enable it:

clip_image009

Disable the page file

The page file itself can become fragmented and it can reside completely or partially on a location of the disk that prevents the volume from being shrunk. While a page file is important to the operating system, you can disable it during a maintenance window to make sure it doesn’t block resizing of the virtual hard disk. Be aware that both disabling and re-enabling the page file require a reboot. So this does mean the online VHDX resize will cause downtime, but that’s not because it’s not supported; it’s because of the action you need to take here to be able to shrink the volume.
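
The screenshots below show the GUI way of doing this. If you’d rather script it, here is a minimal sketch using the CIM cmdlets, assuming a default, automatically managed page file (and remembering that the reboot is still required):

```powershell
# Turn off automatic page file management
Get-CimInstance -ClassName Win32_ComputerSystem |
    Set-CimInstance -Property @{ AutomaticManagedPagefile = $false }

# Remove the existing page file setting(s)
Get-CimInstance -ClassName Win32_PageFileSetting | Remove-CimInstance

# A reboot makes the change effective; re-enable the page file the
# same way (in reverse) once the volume has been shrunk
Restart-Computer
```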

clip_image010

clip_image012

clip_image013

The little extra unallocated space left is taken care of by extending the disk a little. Done!

clip_image014

Don’t forget to turn the page file back on in the best possible configuration for your workload afterwards.

Some situations require even more drastic interventions

Another issue might be that there are multiple volumes on the virtual hard disk and the free space is not at the end of the disk, as in the screenshot below.

clip_image016

Unless you can delete volume H: and create it again, restoring the data to the new volume which is then at the end of volume F:, you’ll need to turn to third-party tools. Free open source tools like GParted will do the job nicely and I have used it extensively. I have a blog post on using it: Using Gparted to fix virtual disk resizing issues. You still want a backup or a copy of your VHDX before doing anything like that, just in case.

The results

In the example above, which is a lab setup, deleting the shadow copies and getting rid of the unfortunately located page file that prevented us from shrinking the volume any further allowed us to shrink by 23GB instead of 11GB. Not bad.

clip_image017

That gives us 23GB of unallocated space on the virtual disk.

clip_image018

We can now shrink the virtual hard disk by that amount!

clip_image019
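
For those who prefer PowerShell over the Edit Disk wizard, the Hyper-V module can do the shrink as well. A minimal sketch, assuming a hypothetical path for the lab VHDX (online shrinking requires the VHDX to be attached to a SCSI controller):

```powershell
# Shrink the virtual disk as far as its contents allow
Resize-VHD -Path 'D:\VMs\LabVM\LabVM-OS.vhdx' -ToMinimumSize

# Or shrink it to an explicit size instead
Resize-VHD -Path 'D:\VMs\LabVM\LabVM-OS.vhdx' -SizeBytes 40GB
```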


Don’t forget to turn the page file back on in the best possible configuration for your workload afterwards and re-enable shadow copies if needed.

A real world example

A real world example of this is when we needed to move 120 GB of indexing files to a dedicated virtual disk because they were causing the OS volume, the C:\ drive, to run out of space. We could have grown the virtual hard disk on which the guest OS volume was located, but we did not want to. After we had moved the index we wanted to shrink the volume by about 120 GB, leaving ample free space for the OS volume to function optimally, but we could not. We could gain a pitiful 2GB of space!

First we made sure the index data was shift-deleted and ran the optimizer to defrag the disk, but that did not help. We checked for shadow copies but there were none present. As this was a virtual server we did not have a hiberfil.sys file to worry about. In the end what did the trick for us was disabling the page file, rebooting the virtual machine, shrinking the volume and rebooting the virtual machine again.

Conclusion

You have seen how to address an issue where, despite having free space in a volume, you cannot shrink it and, as a result, cannot shrink a VHDX file in size. That was blocking our real goal here, which was to shrink the virtual hard disk. While the latter is possible online, we cannot always mitigate the issues we encounter with shrinking a volume (by itself an online event) without downtime. Disabling or enabling the page file requires a reboot. Defragmentation can be done online most of the time, but not when it comes to NTFS metadata. Disabling and enabling shadow copies is an online process, however.

This is of course a prime example of what DevOps and cloud computing at scale discourage. That brave new world promotes treating your servers as cattle. When one is giving you an issue you don’t nurse it back to health but fire up the barbecue, as Jeffrey Snover would put it. That’s a great model if it applies to your environment. But before you do so I’d make sure that your server is not a holy cow instead of cattle. Many applications in the enterprise, even modern ones, cannot just be killed off. If you do, you’d better have great backups, but even those will not solve issues like the one we’ve addressed here. The backups are there to protect you when things go wrong with your interventions.

NVMe Storage for Backup Targets

Introduction

I’ve used NVMe disks on a modest scale already for code build servers, SQL Server deployments (physical or virtual) and basically for any workload where the benefits of better storage performance outweigh the loss of high availability (clustering, live migration). Workstation use is another example: I can run a pretty nice lab on my workstation and not feel miserable due to disk IO contention. Let’s see what NVMe storage for backup targets can do!

For the price you pay and the problems they solve, the performance benefits of NVMe are a great deal. Just run Windows Server 2016 with nested Hyper-V on an NVMe disk as a developer with a dozen VMs for AD, IIS, middleware and SQL Server. You’ll see what it means. Anything less than 8 cores, DDR4 and a modern motherboard need not apply, by the way.
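
As an aside, enabling nested Hyper-V on such a lab VM in Windows Server 2016 only takes a few cmdlets. A hedged sketch, assuming a hypothetical VM name:

```powershell
# Expose the virtualization extensions to the guest; nested
# virtualization also requires static (non-dynamic) memory
Set-VMProcessor -VMName 'LabHyperVHost' -ExposeVirtualizationExtensions $true
Set-VMMemory -VMName 'LabHyperVHost' -DynamicMemoryEnabled $false

# MAC address spoofing lets the nested guests reach the network
Get-VMNetworkAdapter -VMName 'LabHyperVHost' |
    Set-VMNetworkAdapter -MacAddressSpoofing On
```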

We’re looking forward to NVMe deployments where highly available storage is available (shared or shared nothing) for virtualized workloads. We’re seeing the first examples of this in certain Storage Spaces Direct deployments with Windows Server 2016. I’m pretty sure the industry will push NVMe usage to new heights in such scenarios in the coming years with NVMe fabrics.

Recently we’ve been looking at NVMe disks as a highly performant backup tier in our backup storage targets. Yup, read on. Sometimes I get a crazy idea I need to scratch or, better, test out in the lab.

NVMe Storage for Backup Targets

When needed you can build a pretty solid backup target with cheap, “high capacity” SATA SSDs as well. The thing is that you’ll be limited by the capabilities of SATA itself, and you need decent controllers to mitigate those limits, with the associated costs. SATA isn’t exactly the best choice for high throughput, concurrent workloads either. You can move up to SAS to go beyond the limits of SATA for SSDs, but the cost goes up accordingly.

When it comes to cost versus performance, that’s where PCIe shines brighter than anything we have today. Sure, it’s not yet feasible to do so for large data volumes, but we’re not looking at this for the bulk of our VMs or data. We’re looking at a use case where we need stellar performance in a reasonable volume we can drop into a server.

Some people will shout in a visceral reaction (*) that I’m nuts spending that amount of money on backup storage. Well no, I’m not. You have to look at the needs of the use case and the economics of achieving a solution. For a company that needs to back up a number of stateful virtual machines every 10 minutes and wants to keep 12-24 or so restore points around, NVMe disks can deliver a very cost effective solution. You’re probably running those VMs on highly available, shared tier 1 storage already, the cost of which is a multiple of a couple of NVMe disks. Let’s look at an example. Say we’re leveraging Scale-Out Backup Repositories with Veeam Backup & Replication and we have 3 to 4 repositories. Dropping 1 or 2 NVMe disks into every node can deliver 6 to 8 TB of stellar performance to your existing setup. In many of my deployments we get all the other resources in those nodes cost effectively because we typically recycle our Hyper-V hosts. So cores, memory and bandwidth are plentiful without huge investments in new dedicated servers. If you do buy some of the high density kit, the cost of memory and CPU cores won’t kill the project. So am I nuts for trying or not? Heck no, we’ll learn a lot and I’m sure prices will drop and capacities will rise without sacrificing performance.

Really, the price isn’t that bad. Just look on Amazon for the cheapest pricing of Intel 750 series NVMe disks of 1.2 TB and come back.

clip_image002

Today you won’t be buying 20 of them anyway to put in a JBOD, as those don’t exist yet. You’ll put one or two in one or more backup target servers to provide high performance backup storage.

clip_image004

Testing 64K 100% sequential writes with 8 worker nodes enabled … not too shabby
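
You can reproduce a similar load pattern yourself with Microsoft’s free DiskSpd tool. A minimal sketch, assuming the NVMe disk is mounted as a hypothetical E: volume:

```powershell
# 64K blocks, 100% writes, sequential (interlocked across threads),
# 8 threads with 8 outstanding IOs each, 60 seconds, software and
# hardware caching disabled, latency stats, against a 20GB test file
.\diskspd.exe -b64K -w100 -si -t8 -o8 -d60 -Sh -L -c20G E:\diskspd-test.dat
```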

NVMe disks have stellar IOPS and throughput at low latencies. If you ever wear them out, they are cheap enough to swap out for a new one. They absolutely rock under concurrent use, with multiple sessions and heavy workloads. Their massive IO queues make them shine as server storage in many-to-one scenarios. So backing up many different Hyper-V nodes (clustered or not) concurrently and continuously throughout the day is a use case where they should rock. Just search for some of the reviews out there for details.

Do you need bigger NVMe disks and a bit more “enterprise grade” comfort? Look at the Intel 3700 series or equivalents. Simplistically these are the same family, but the 750 series disk has been tuned to do better for workstation workloads. Even then, most people won’t get to see their true capabilities. Anyway, the 3700 series are more expensive and the 2TB size mark might be what pushes you to buy them. Compared to some OEM enterprise grade SAS SSDs you’re still getting a pretty good deal. In any case, many workstations cannot even make the Intel 750 series break out in a single drop of sweat. We can push them a bit more with server workloads.

If you need redundancy with local NVMe storage you have some options. You can make local NVMe disks redundant today via Storage Spaces if you want, or mitigate the risk by using two of them and having two backup jobs protect the same VMs to different targets.
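
A hedged sketch of the Storage Spaces route, assuming two unpooled NVMe disks and hypothetical pool and disk names:

```powershell
# Pool the NVMe disks and carve a mirrored virtual disk out of them
$disks = Get-PhysicalDisk -CanPool $true | Where-Object BusType -eq 'NVMe'

New-StoragePool -FriendlyName 'NVMeBackupPool' `
    -StorageSubSystemFriendlyName 'Windows Storage*' `
    -PhysicalDisks $disks

New-VirtualDisk -StoragePoolFriendlyName 'NVMeBackupPool' `
    -FriendlyName 'BackupTarget' `
    -ResiliencySettingName Mirror -UseMaximumSize

# Initialize, partition and format the mirror as a backup target
Get-VirtualDisk -FriendlyName 'BackupTarget' | Get-Disk |
    Initialize-Disk -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem ReFS
```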

clip_image006

The Intel 750 NVMe disk installed in a Dell R730 dual socket server

clip_image008

Booting the DELL R730 which provides sufficient resources to evaluate the capabilities of an NVMe disk.

I cannot share too much info on this yet, but look at the screenshot below. The VMs run on Storage Spaces (pure SSD) and the backup target is the Intel 750 1.2 TB NVMe disk.

When the delta in the VMs is low, the amount of data you’ll need to back up with Veeam and Windows Server 2016 CBT is minimal, so backup target performance is not that big a deal. But when you have bigger deltas and multiple backup jobs running simultaneously, that becomes a point that requires attention.

clip_image010

Look at the above screenshot of some tests backing up VMs on Storage Spaces (Windows Server 2016) ReFS v3 source storage to NVMe with ReFS v3 target storage. Continuously protecting a company’s gold doesn’t have to cost you a king’s ransom in diamonds. We’re running Windows Server 2016 TPv5 and Veeam Backup & Replication 9.5 Beta. I hope to discuss the capabilities of Windows Server 2016, ReFS and Veeam Backup & Replication 9.5 in later posts.

What will that cost me?

So let’s say you need 2 TB of backup storage in your backup target for your “always on”, mission critical, stateful virtual machines. For under 1600 € you can have that with Intel 750 Series NVMe disks. Today this really is not the technology to build a 300TB backup capacity solution with, but when used for the right reasons in the right place with the right use cases this is a good solution.

Now, this isn’t the cheapest per GB, far from it, but it is the absolute best offering when it comes to fantastic throughput even, or better, especially when hitting that target storage with multiple concurrent backups from multiple sources. That’s where it shines beyond anything we have today. The real challenge there will be for the other resources to keep up, and for the operating system and backup software to be capable of delivering what the NVMe disk(s) can handle. Compared to the OEM prices for their enterprise SAS SSDs this is still reasonable.

We’ll compare this to “standard” SSDs with controllers and see where this gets us. You can learn whether this works for you at relatively low cost, gain experience (i.e. find the bottlenecks in the rest of your stack) and deliver a great result for the workloads you’re testing it with. Good backup software lets you fine-tune the backups and even throttle them based on latency of the source storage, so you don’t have to worry about it killing the performance of your primary workloads.

Disclaimer: Don’t run off to your boss telling her or him that I told you to implement NVMe backup storage targets. Only do so if you have a use case for this and are willing to try it out. Heck, I bought one on my own dime so I could try it out and see if we can leverage this. If not, I have a great use case for the disk in my workstation for all those Hyper-V virtual machines.

For those 20 ultra-special stateful virtual machines in an “Always-On” environment … this might be the current solution. And please think beyond backups, think recovery of those virtual machines!

clip_image012

It’s kind of cool to use Veeam’s Instant VM recovery when the backup resides on an NVMe.

The future

Today, even with the NVMe Fabric v1.0 specification published recently, we don’t yet have “NVMe JBODs” or fabrics we can buy as commodity components, but I’m rather sure those will come soon. These are interesting times and I’ll keep a keen eye on the evolutions around NVMe.

Until then I’ll leverage commodity SSDs for landing the short term backups of VMs. When speed and frequency of those backups become crucial, I’ll add one or more NVMe disks to the mix.

I can send long term backups to other backup targets, either via different jobs that run at night and/or via backup copies.

On top of all this, the availability of 7.5 and 15 TB 3D NAND disks is about to change the way we look at high capacity disk based storage solutions. Those capacities in small form factors provide tremendous opportunities to deliver high capacity and performance in small building blocks, making the power and cooling economics significantly better. Needing half a rack or a full rack of 3 or 6TB HDDs to get both capacity and IOPS doesn’t seem that attractive anymore, looking at the TCO over 5 years compared with 2 disk bays full of 7.5 or 15TB SSDs. In the future, with the rise of high capacity SSDs and dropping prices, we might soon find that ever bigger SSDs deliver the bulk of our storage and NVMe is reserved for the truly demanding workloads.

Slowly but surely we can put most businesses in my country in one rack or half a rack without compromising on anything or needing to buy vendor lock-in converged solutions to make it happen. The scenario where we deliver on premises where it makes the most sense and move to the public cloud where it matters the most is more and more cost effective for those that can’t make data center zero happen yet. Combine that with a software defined approach and you’re looking good.

(*) I had a discussion about using NVMe for certain backup loads with some data center architects recently and they were convinced it was too expensive, too early, and needed a consulting engagement leading to a POC to determine if this was a good idea. That would involve project and administrative costs, time and materials, etc. Well, we just bought a couple of NVMe disks on our own budget to test out the idea and concept. It works and is affordable for the right use cases. Just make sure you don’t put an NVMe disk in an anemic budget server where all the other resources will be the bottlenecks. Also make sure you have the intra-host bandwidth to deliver the throughput. Last but not least, it’s pretty silly to have super performant backup targets when your backup source storage can’t deliver the data fast enough. Use common sense and you’ll be alright. It doesn’t need to cost you 10K to find out if buying 800 or 1600 € of NVMe storage will work for you. If it seems to work, we can drop 2TB worth of NVMe storage in 3 backup target servers for under 4800 €. Using that in production for 6 months will teach us more than an expensive POC anyway.

Discrete Device Assignment with Storage Controllers

Discrete Device Assignment with storage controllers is the second type of DDA we’ll look at. I have written before on Discrete Device Assignment in Windows Server 2016. At the time of writing, the two officially supported use cases are GPU and NVMe disk pass-through. I have demonstrated the configuration of DDA with a NVIDIA GRID K1 GPU here.

Meanwhile I have also successfully configured DDA with an NVMe disk. I’ll demonstrate how to do this later, but in this blog post I want to address a consequence of this experiment. So let’s take a preliminary look at Discrete Device Assignment with storage controllers.

With an Intel NVMe disk you do not assign the actual disk to the virtual machine; it’s the controller. By disabling and dismounting the standard NVM Express Controller from the host and assigning it to the guest, you make the NVMe disk available in the guest.
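
A minimal sketch of those steps in PowerShell, assuming a hypothetical VM name and that the friendly name below matches your controller; this mirrors the generic DDA flow of disable, dismount, assign:

```powershell
# Find the NVMe controller and its PCIe location path
$device = Get-PnpDevice -FriendlyName '*NVM Express Controller*'
$locationPath = (Get-PnpDeviceProperty -InstanceId $device.InstanceId `
    -KeyName DEVPKEY_Device_LocationPaths).Data[0]

# Disable the device on the host and dismount it from host control
Disable-PnpDevice -InstanceId $device.InstanceId -Confirm:$false
Dismount-VMHostAssignableDevice -LocationPath $locationPath -Force

# Hand the controller, and thus the NVMe disk behind it, to the guest
Add-VMAssignableDevice -LocationPath $locationPath -VMName 'DemoVM'
```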

image

This is supported. It makes me wonder if Microsoft would consider officially supporting other storage controllers. What if you need 8TB of high performance storage dedicated to a single VM? You could assign an extra controller in the host with a RAID 10 SSD virtual disk to the virtual machine. How different would that be from NVMe? Not too much, I guess.

https://i0.wp.com/blog.workinghardinit.work/wp-content/uploads/2016/04/image-7.png?ssl=1

By the way, this assigning and un-assigning keeps data intact. This means it is also a roundabout sort of way to get data in and out of a virtual machine. One of the funky, crazy ideas I already have is to use this to export and import data. Maybe.
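
A hedged sketch of that round trip, reversing the assignment so the host sees the controller, and the data on its disk, again (the names are the ones assumed in the previous sketch):

```powershell
# Take the controller away from the VM and give it back to the host
Remove-VMAssignableDevice -LocationPath $locationPath -VMName 'DemoVM'
Mount-VMHostAssignableDevice -LocationPath $locationPath

# Re-enable the PnP device so the host can use the disk again
Get-PnpDevice -FriendlyName '*NVM Express Controller*' |
    Enable-PnpDevice -Confirm:$false
```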

I really do wonder how things will evolve here. Perhaps these are too much “niche” use case scenarios, but it’s interesting nonetheless. Then again, perhaps the advances in NVMe fabrics and the added performance available via VHDX will outpace the need for DDA storage solutions.

Anyway, enough musing, as we’ll be taking a more hands-on look at assigning an NVMe disk to a VM and Discrete Device Assignment with storage controllers via PowerShell in a later blog post and video.

Storage-level corruption guard

One of the many gems in Veeam Backup & Replication v9 is the introduction of storage-level corruption guard for primary backup jobs. This was already a feature for backup copy jobs, but now we have the option of periodically scanning our backup files for storage issues. It works like this: if any corrupt data blocks are found, the correct ones are retrieved from the primary storage and the backup is auto-healed. Ever bigger disks, vast amounts of storage and huge amounts of data mean more chances of bit rot. It’s an industry wide issue. Microsoft tries to address this with ReFS and Storage Spaces, for example, where you also see an auto-healing mechanism based on retrieving the needed data from the redundant copies.

We find this option on the Maintenance tab of the advanced storage settings of a backup job, where you can enable it and set a schedule.

image

The idea behind this is that it is more efficient than doing periodic active full backups to protect against data corruption. You can reduce their frequency or, perhaps better, get rid of them altogether.

Veeam describes Storage-level corruption guard as follows:

image

Can it replace any form of full backup completely? I don’t think so. The optimal use case seems to lie in the combination of storage-level corruption guard with periodic synthetic full backups. Here’s why. When the bit rot is in older data that can no longer be found in the production storage, corruption guard could fail to fix it, as the correct data is no longer to be found there. So we’ll have to weigh the frequency of these corruption guard scans to determine what reduction in full backups is wise for our environment and needs. The most interesting scenario seems to be the one where we can indeed eliminate periodic full backups altogether. To mitigate the potential issue of not being able to recover, which we described above, we’d still create synthetic full backups periodically in combination with the storage-level corruption guard option enabled. Doing this gives us the following benefits:

  • We protect our backups against corruption, bit rot, etc.
  • We avoid making periodic full backups, which are the most expensive in storage space, I/O and time.
  • We avoid having no useful backup left in the scenario where storage-level corruption guard needs to retrieve data from the primary storage that is no longer there.

To me this seems a very interesting scenario for optimizing backup times and economics. In the end it’s all about weighing risks versus cost and effort. Storage-level corruption guard gives us yet another tool to strike a better balance between the two. I have enabled it on a number of jobs to see how it does in real life. So far things have been working out well.