Hyper-V Guest Storage Performance: Above & Beyond 1 Million IOPS

Making a million IOPS possible in a Windows Server 2012 Hyper-V VM

A lot of you will have seen the demos of a Hyper-V guest with VHDX disks running on Windows Server 2012 doing a million IOPS; if you haven't seen them yet, take a look here. While some quickly dismissed this as "irrelevant boasting" with no real-life relevance, I respectfully disagree. This is smart future proofing by Microsoft: it provides a hypervisor that is ready for future hardware capabilities and capable of handling the most demanding workloads, today and in the years to come. Sure, such a demo runs under ideal lab conditions and does not reflect the majority of real-life environments, but it's nice to see what a hypervisor is capable of if and when you need it. Remember, there was a day when 4GB was a lot of RAM and 2TB sounded gigantic. Also remember that some people have larger needs than others.

Up to and including Windows Server 2008 R2 there were limitations in storage IO performance that would not allow for a million IOPS. These had to be addressed, or all the other efforts around capabilities and performance for storage, CPU, networking and memory would simply run into those particular bottlenecks. So this is addressing real needs and indeed also smart future proofing.

Capabilities of virtual machine storage IO throughput in Windows 2008 R2

The limits listed below dictate the IO capabilities of virtual machines running on Windows Server 2008 R2 Hyper-V:

  1. IO was limited to one channel per virtual SCSI controller.
  2. A queue depth of 256 per virtual SCSI controller, shared by all devices attached to that controller.
  3. One fixed vCPU (vCPU 0) was dedicated to handling all IO.

[Figure: Windows Server 2008 R2 Hyper-V storage IO limits]

The picture above illustrates these limits. You see two virtual SCSI controllers, each with two VHD virtual disks attached. All the disks on a controller share that controller's single IO channel.

These limits could become a bottleneck, but that was rarely too big of a problem with the maximum of 4 vCPUs in Windows Server 2008 R2 Hyper-V. When performance demanded it, we would attach VHDs to different virtual SCSI controllers to get the best possible results.

With 64 vCPUs and ever more demanding workloads these limitations would become a (serious) issue, so they needed to be addressed. If not, despite all the other efforts around the four big resources (compute, memory, storage and network) in Windows Server 2012, this would remain the limiting factor for IOPS inside a virtual machine on Windows Server 2012.

Windows Server 2012 improvements to virtual machine storage IO scaling

[Figure: Windows Server 2012 Hyper-V IO scaling, with multiple channels per virtual SCSI device and interrupt handling spread across vCPUs]

The picture above illustrates the improvements in Windows Server 2012 Hyper-V IO Scaling:

  1. There is now 1 channel per 16 vCPUs, per virtual SCSI device, per controller. That means you get 4 channels per VHDX attached to a virtual SCSI controller when the virtual machine has 64 vCPUs. Compared to before, this is a significant improvement and a much needed one now that 64 vCPUs are supported (see the back-of-envelope sketch below).
  2. IO interrupt handling is now distributed amongst all vCPUs and this process is NUMA aware. This is a huge improvement!
  3. There is now a queue depth of 256 per device attached to a specific SCSI controller. That's another big improvement.
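
To make the difference concrete, here is a quick back-of-envelope sketch of per-disk IO concurrency under both models. It only models the numbers quoted above (channels and queue depth) and is an illustration, not a description of the actual VMBus implementation.

```python
# Rough model of per-disk IO concurrency on a virtual SCSI controller.
# Numbers come from the text: 2008 R2 has 1 channel and a 256 queue depth
# shared per controller; 2012 has 1 channel per 16 vCPUs per device and a
# 256 queue depth per device. Illustration only.

def per_disk_2008r2(vcpus: int, disks_on_controller: int) -> dict:
    return {
        "channels": 1,                              # one channel, shared by the controller
        "queue_depth": 256 // disks_on_controller,  # 256 shared across all attached disks
    }

def per_disk_2012(vcpus: int, disks_on_controller: int) -> dict:
    return {
        "channels": max(1, vcpus // 16),            # 1 channel per 16 vCPUs, per device
        "queue_depth": 256,                         # queue depth is now per device
    }

if __name__ == "__main__":
    print("2008 R2, 4 vCPUs, 2 disks :", per_disk_2008r2(4, 2))
    print("2012, 64 vCPUs, 2 disks   :", per_disk_2012(64, 2))
```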

That, people, is how you get a virtual machine to handle a million IOPS. Nice! The questions and doubts about whether Hyper-V can deliver the capacity, throughput & performance have been wiped off the table, and yes, that goes for virtual storage IOPS too. You can now go straight to how it will address your business needs. In my experience it does so brilliantly and very cost effectively. Life might not be perfect, but it is very good.

Quest Technical Experts Conference 2012 Europe Podcast

As the readers of my blog know, I was in Barcelona this week to attend the Technical Experts Conference Europe 2012 organized by Quest (now a part of DELL). Together with my fellow MVPs, friends and colleagues Aidan Finn (@joe_elway), Carsten Rachfahl (@hypervserver) and Hans Vredevoort (@hvredevoort), I presented 8 sessions in the Virtualization Track and took part in a Hyper-V experts panel to share what we have learned and to help answer attendees' questions on the new capabilities of Hyper-V in Windows Server 2012. It was both fun and interesting to do. We learned some more from each other and from the questions of an alert audience whom we enjoyed presenting to.

Mattias Sundling, a technical evangelist at Quest and the owner of the Virtualization Track at TEC 2012 Europe, recorded an audio podcast with all 4 of us MVPs in the "Virtual Machine" expertise presenting in that track. You can find that podcast on the vKernel/DELL web site at http://www.vkernel.com/podreader/items/tec-europe-2012-mvps-with-mattias-sundling or on YouTube by clicking on the screenshot below. Enjoy.

[Screenshot: link to the podcast on YouTube]

I’m Presenting at the Technical Experts Conference 2012 Europe

I'll be speaking at the Technical Experts Conference 2012 Europe in Barcelona on Windows Server 2012 Hyper-V and its storage and network related improvements and promising new features. Some of you might know that I'm a Microsoft MVP in the Virtual Machine expertise (i.e. Hyper-V), but these sessions are not marketing or vaporware. Being an MVP is about sharing knowledge and experiences with you. I'm an early adopter, running the RTM bits in production from the day they became available, and we're already reaping the benefits of those features, so it's more than just lab work and theory.


I won’t be there alone, as my friends, colleagues and fellow MVPs Aidan Finn (@joe_elway), Carsten Rachfahl (@hypervserver) and Hans Vredevoort (@hvredevoort) will be there as well to present and share their knowledge, which is extensive, I assure you. It’s great to have the chance to come together again and talk about our technology passions.

You can find an overview of the session agenda here.

So I hope you can join us for an interesting conference and interactive event where we can discuss your challenges and ways to address them. Trust me when I say that talking to other customers and technologists is a great way to learn, understand the needs and find opportunities. We learn a lot from presenting and talking to you. I've attended a lot of conferences in my career and I still find them valuable. The return on investment for my employers has been great: motivated and skilled employees can save a business tenfold the cost spent on keeping them that way.

Haven't heard of TEC before?

Neither had I until a couple of years ago, but by good fortune I had the opportunity to attend as a delegate and found it very worthwhile in both content and networking opportunities. As it turns out, The Experts Conference (TEC) has been running for over a decade now and delivers level 400 sessions on core Microsoft technologies. It focuses on Active Directory and Identity, Exchange, Virtualization and User Workspace Management.


TEC Europe is held at the Hotel Rey Carlos in Barcelona from 22-24 October 2012. Quest, as an alliance partner of Microsoft, welcomes program management, product management and development staff from Redmond, plus a number of field team members, to the event every year to support the training requirements of its users. This means two things: it's a valuable event and, I admit, I'm honored to be invited to speak at it.

Budgets are tight

A great tip: Quest is offering a discount rate of 850 Euro to delegates who register by 21 September! You can get a discount code for registration by sending an email to [email protected]

Windows Server 2012 with Hyper-V & The New VHDX Format Leads The Way

Introduction

Whether you realize it or not, our trusted old VHD format is getting a bit long in the tooth. As a matter of fact, it has been around since the last century. It has served us well, but now it needs a major overhaul to better serve us today and to prepare us for the decades to come. We (at least in the environments I support) see a continuing demand for bigger virtual disks & ever better performance. This should be no surprise. Not only does the amount of data produced keep going up year after year, but we're also virtualizing more resource intensive workloads than ever. Think image intensive data that has to be processed by number crunching virtual machines, or large databases like SQL Server. Sure, 64 vCPUs and 1TB of memory are great and impressive, but we also need loads of fast and ever more reliable storage.

Trying to serve and support these needs with combinations of 2TB disks is very cumbersome (to be polite), and pass-through disks take away a lot of the flexibility & options the VHD format gives us. So here comes the new VHDX format. There is no back porting here; the only OS that supports VHDX at the moment is Windows Server 2012. The good news is that we have in-box tools to convert between VHD & VHDX.
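
As a quick illustration of that last point, here is a minimal sketch of driving the in-box Convert-VHD cmdlet (part of the Windows Server 2012 Hyper-V PowerShell module) from a script. The file paths are made up for the example; in practice most admins will simply run the cmdlet directly in an elevated PowerShell session.

```python
# Minimal sketch: convert a VHD to VHDX with the in-box Convert-VHD cmdlet.
# The paths are hypothetical examples; the VM using the disk should be off.
import subprocess

source = r"D:\VMs\app01\disk0.vhd"        # hypothetical existing VHD
destination = r"D:\VMs\app01\disk0.vhdx"  # VHDX file to create

subprocess.run(
    [
        "powershell.exe", "-NoProfile", "-Command",
        f"Convert-VHD -Path '{source}' -DestinationPath '{destination}'",
    ],
    check=True,  # raise if the conversion fails
)
```

The reverse direction (VHDX back to VHD) works the same way, as long as the disk fits within the old 2TB limit.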

Bigger, Better & Faster

Size

The VHDX format now supports up to 64TB. Yes, that is 32 times more than the current VHD. As a matter of fact, a lot of SANs still in use today won't even give you a LUN of that size. Is there a need for this? Well, I move in some circles with huge files in massive amounts, so I can use big LUNs and large data VHDX files. Concatenating disks is something I do not like to do; come upgrade/maintenance/renewal time, that one bites too much for comfort.

There are also some other virtual disk formats that need to wake up and break that 2TB size boundary, especially since Microsoft states that 64TB is not a hard limitation of the file format. By that they mean they have room to increase it. Wow!

Protection Against Disk Corruption

The VHDX format also provides protection against corruption of the VHDX file during power failures. This is done with a logging mechanism for updates to the VHDX metadata structures. The log is contained within the VHDX file itself, so you won't have to worry about managing log files. The overhead is minimal, as only metadata such as block allocations and block state updates is logged, NOT the actual data being stored. So no, it has not become a database you need to manage, don't worry. Keep in mind that this protects only the VHDX file itself against corruption, not the data written to it; that job falls to NTFS or ReFS.
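
The toy sketch below only illustrates the general write-ahead-logging idea the paragraph describes (make the intended metadata change durable in a log before applying it, so an interrupted update can be recovered); it has nothing to do with the actual VHDX on-disk log structures.

```python
# Toy illustration of write-ahead logging for metadata updates.
# The intended change is flushed to a log before the metadata itself is
# rewritten, so a power failure mid-update leaves enough to recover from.
import json
import os

LOG = "metadata.log"
META = "metadata.json"

def update_block_state(block_id: int, state: str) -> None:
    record = {"block_id": block_id, "state": state}
    # 1. Append the intent to the log and force it to stable storage.
    with open(LOG, "a") as log:
        log.write(json.dumps(record) + "\n")
        log.flush()
        os.fsync(log.fileno())
    # 2. Only now apply the change to the metadata structure itself.
    meta = json.load(open(META)) if os.path.exists(META) else {}
    meta[str(block_id)] = state
    with open(META, "w") as f:
        json.dump(meta, f)

update_block_state(42, "allocated")
```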

The Need For Speed

With VHDX we also get larger block sizes, up to 256MB, for dynamic & differencing disks, meaning they perform better with workloads that allocate in larger chunks.

Modern Large Sector Disks

We get support for running VHDX on large sector disks without losing performance.

I refer you to KB articles Using Hyper-V with large sector drives on Windows Server 2008 and Windows Server 2008 R2 and Information about Microsoft support policy for large-sector drives in Windows.

As you can read there, the performance hit for both non-fixed VHDs and applications is pretty bad. The 512e approach (4K physical and 512-byte logical sector size) suffers from the Read-Modify-Write (RMW) overhead it causes in dynamic & differencing disks, and 4K native disks (4K logical sector size) simply aren't supported by Hyper-V before Windows Server 2012. The maximum logical & physical sector size is now 4KB, which means we get much better performance when running applications that are designed for 4KB IO in Hyper-V 3.0. VHDX structures are aligned on MB boundaries, so the need for RMW at the disk is eliminated when the physical sector size of the virtual disk is set to 4K.

[Figure: VHDX structures aligned to MB boundaries on a 4K sector disk, avoiding Read-Modify-Write]
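
To make the RMW penalty a bit more tangible, here is a rough, purely illustrative model of what a 512-byte write costs on a 4K physical sector disk compared to a 4K-aligned write. The "disk ops" counts are a simplification for the example, not benchmark results.

```python
# Rough illustration of the Read-Modify-Write (RMW) penalty on 512e disks.
# A 512-byte logical write to a 4K physical sector forces the disk to read
# the whole sector, patch 512 bytes in memory and write the 4K sector back.
# A 4K-aligned write simply writes the sector.

PHYSICAL_SECTOR = 4096

def disk_ops(write_size: int, aligned: bool) -> int:
    sectors = (write_size + PHYSICAL_SECTOR - 1) // PHYSICAL_SECTOR
    if aligned and write_size % PHYSICAL_SECTOR == 0:
        return sectors          # plain writes only
    return 2 * sectors          # one read plus one write per touched sector

print("512-byte write on a 512e disk:", disk_ops(512, aligned=False), "disk ops")
print("4K-aligned write             :", disk_ops(4096, aligned=True), "disk ops")
```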

Storing Custom Metadata

We also get the ability to store custom metadata in the VHDX file for information we find relevant. This could be about what's on there, the OS version or the patches applied. This custom data is stored using key/value pairs that support up to 1024 entries of 1MB each. That should be adequate for a while.

VHDX Leverages Offloaded Data Transfer (ODX)

The virtual storage stack allows ODX requests from the guest to flow all the way down to the hardware, and as such VHDX operations benefit from this as well. Examples of this are:

  • Creating VHDX files, even very large ones, has gotten a whole lot faster, especially if you can offload it to the SAN. If your storage vendor supports ODX then you're in VHDX creation speed heaven! As a bonus, even VHD files created on Windows Server 2012 benefit from this technology.
  • On top of that, merge & mirror operations are also offloaded to the hardware, which is great for merging snapshots or for live storage migration.
  • In the future the virtual machines themselves might/will be able to pass through offload operations. This is hardcore stuff and, due to the file layout, far from trivial.

Please note that this only works with SCSI-attached VHDX files; IDE devices have no ODX support.
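
As a small sketch of putting this together, here is one way to create a large fixed VHDX and attach it to a VM's virtual SCSI controller with the in-box New-VHD and Add-VMHardDiskDrive cmdlets. The VM name, path and size are made-up examples, and whether the creation is actually offloaded depends on your array supporting ODX.

```python
# Sketch: create a large fixed VHDX and attach it to a VM's SCSI controller
# using the in-box Hyper-V cmdlets. Names, paths and sizes are examples only;
# the creation is offloaded to the array only if it supports ODX.
import subprocess

vhdx_path = r"C:\ClusterStorage\Volume1\sql01\data01.vhdx"  # hypothetical path
vm_name = "sql01"                                           # hypothetical VM

commands = [
    f"New-VHD -Path '{vhdx_path}' -Fixed -SizeBytes 4TB",
    f"Add-VMHardDiskDrive -VMName '{vm_name}' -ControllerType SCSI -Path '{vhdx_path}'",
]

for command in commands:
    subprocess.run(["powershell.exe", "-NoProfile", "-Command", command], check=True)
```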

TRIM/UNMAP Support

With Windows Server 2012 / VHDX we get what the documentation describes as "efficiency in representing data (also known as 'trim'), which results in smaller file size and allows the underlying physical storage device to reclaim unused space. (Trim requires physical disks directly attached to a virtual machine or SCSI disks in the VM, and trim-compatible hardware.)" It also requires Windows Server 2012 on hosts & guests.

It's a major benefit for the "stay thin" philosophy associated with thin provisioning. No more running "sdelete" in your Windows VMs (tedious, slow, resource intensive) or installing an agent (less tedious) to reclaim space. This is important to many of us, and this level of support and integration makes our lives a lot easier & speeds things up. So choose your storage wisely.

TRIM is the specification for this functionality from Technical Committee T13, which handles all standards for ATA interfaces. UNMAP is the Technical Committee T10 specification for this; it is the full equivalent of TRIM, but for SCSI disks. UNMAP is used to release physical blocks from the storage allocation in thinly provisioned Storage Area Networks. My understanding is that which of the two is used on the physical storage depends on what that storage is (SSD/SAS/SATA/NL-SAS, or a SAN with one or all of the above).

Basically, VHDX disks report themselves as thin provisioning capable. That means deletes as well as defrag operations in the guest send "unmaps" down to the VHDX file, which are used to ensure that block allocations within the VHDX file are freed up for subsequent allocations. The same requests are also forwarded to the physical hardware, which can reuse the space for its thin provisioning purposes. This means a VHDX will only consume storage for the data actually stored, not for the entire size of the VHDX, even when it is a fixed one. You can see that not just the operating system but also the application/hypervisor that owns the file system on which the VHDX lives needs to be TRIM/UNMAP aware to pull this off. It is worth noting that this means it only works for SCSI-attached storage in the virtual machine, not for IDE-connected VHDX disks.
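
If you want to check whether a Windows Server 2012 guest is issuing these delete notifications at all, one simple approach is to query the DisableDeleteNotify setting with fsutil and, if needed, ask Optimize-Volume to re-send TRIM/UNMAP for a volume. The sketch below just wraps those two commands; the drive letter is an example.

```python
# Sketch: check whether delete notifications (TRIM/UNMAP) are enabled inside a
# Windows Server 2012 guest and re-send them for already-freed space on D:.
# DisableDeleteNotify = 0 means delete notifications are enabled.
import subprocess

subprocess.run(["fsutil", "behavior", "query", "DisableDeleteNotify"], check=True)

subprocess.run(
    ["powershell.exe", "-NoProfile", "-Command",
     "Optimize-Volume -DriveLetter D -ReTrim -Verbose"],
    check=True,
)
```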

Closing Thoughts On The Future Proof VHDX Format

For anyone interested in developing against the VHDX format, the specification will be published, which is good news for ISVs big and small. For all the reasons mentioned above I'm a fan of the VHDX format, and it's yet one more reason to go full speed ahead with testing Windows Server 2012 so we can move forward fast and reap the benefits of reliability & scalability without sacrificing performance.