Hyper-V Guest Storage Performance: Above & Beyond 1 Million IOPS

Making a million IOPS possible in a Windows Server 2012 Hyper-V VM

A lot of you will have seen the demos of a Hyper-V guest with VHDX disks running on Windows Server 2012 doing a million IOPS; if you haven’t yet, take a look here. While some quickly dismissed this as “irrelevant boasting” without real-life relevance, I respectfully disagree. This is smart future proofing by Microsoft: it provides a hypervisor that is ready for future hardware capabilities and able to handle the most demanding workloads, today and in the years to come. Sure, such a demo runs under ideal lab conditions and does not reflect the majority of real-life environments, but it is nice to see what a hypervisor is capable of if and when you might need it. Remember, there was a day when 4GB was a lot of RAM and 2TB sounded gigantic. Also remember that some people have larger needs than others.

Until Windows Server 2008 R2 there were limitations in storage IO performance that would not allow for a million IOPS. These had to be addressed, or all the efforts around capabilities and performance for storage, CPU, networking and memory would simply hit those particular bottlenecks. So this addresses real needs and is indeed also smart future proofing.

Capabilities of virtual machine storage IO throughput in Windows Server 2008 R2

The limits listed below dictate the storage IO capabilities of virtual machines running on Windows Server 2008 R2 Hyper-V:

  1. Limited to one IO channel per virtual SCSI Controller
  2. A queue depth of 256 per virtual SCSI controller, shared by all devices attached to that controller.
  3. There was one fixed vCPU (0) dedicated to handling IO.

[Image: Windows Server 2008 R2 Hyper-V – two virtual SCSI controllers, each with two VHDs sharing the controller’s single IO channel]

The picture above illustrates these limits. You see two virtual SCSI controllers, each with two VHD virtual disks attached. Each disk shares the single IO channel of the controller it is attached to.

These limits could become a bottleneck, but that was never too big a problem with a maximum of 4 vCPUs in Windows Server 2008 R2 Hyper-V. If performance demanded it, we attached VHDs to different virtual SCSI controllers to get the best possible throughput out of Windows Server 2008 R2 Hyper-V.
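
To make that concrete, here is a minimal back-of-the-envelope sketch in Python. The two constants are the documented 2008 R2 limits above; the function, its name and its output are purely illustrative assumptions on my part, not any Hyper-V API.

```python
# Back-of-the-envelope model of the Windows Server 2008 R2 Hyper-V virtual
# storage limits listed above. The two constants are the documented limits;
# everything else is illustrative and not part of any Hyper-V API.

CHANNELS_PER_CONTROLLER = 1        # one IO channel per virtual SCSI controller
QUEUE_DEPTH_PER_CONTROLLER = 256   # shared by every device on that controller

def per_disk_budget(disks_on_controller: int) -> dict:
    """Rough per-VHD share of channel and queue depth on one controller."""
    return {
        "channel_share": CHANNELS_PER_CONTROLLER / disks_on_controller,
        "queue_depth_share": QUEUE_DEPTH_PER_CONTROLLER // disks_on_controller,
    }

# Two VHDs on one controller: each effectively gets half a channel and
# a queue depth share of 128 ...
print(per_disk_budget(2))   # {'channel_share': 0.5, 'queue_depth_share': 128}

# ... while spreading them over two controllers gives each VHD a full
# channel and the full 256 queue depth, hence the advice to use multiple
# virtual SCSI controllers for IO-heavy guests on 2008 R2.
print(per_disk_budget(1))   # {'channel_share': 1.0, 'queue_depth_share': 256}
```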

With 64 vCPUs and ever more demanding workloads, these limitations would become a serious issue, so they needed to be addressed. If not, despite all the other efforts around the 4 big resources (compute, memory, storage and network) in Windows Server 2012, this would remain the limiting factor for IOPS inside a virtual machine on Windows Server 2012.

Windows Server 2012 improvements to virtual machine storage IO scaling

[Image: Windows Server 2012 Hyper-V virtual machine storage IO scaling – multiple IO channels per virtual SCSI device and distributed, NUMA-aware interrupt handling]

The picture above illustrates the improvements in Windows Server 2012 Hyper-V IO Scaling:

  1. There is now 1 IO channel per 16 vCPUs, per virtual SCSI device, per controller. That means each VHDX attached to a virtual SCSI controller gets 4 channels when the virtual machine has 64 vCPUs. Compared to before, this is a significant and much needed improvement now that 64 vCPUs are supported (see the sketch after this list).
  2. IO interrupt handling is now distributed amongst all vCPUs and this process is NUMA aware. This is a huge improvement!
  3. The queue depth of 256 now applies per device attached to a SCSI adapter, rather than per adapter. That’s another big improvement.
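
As promised above, here is a small Python sketch of that channel arithmetic, assuming only the ratios just listed (1 IO channel per 16 vCPUs per virtual SCSI device, a queue depth of 256 per device). The helper function is hypothetical and not a real Hyper-V API.

```python
# Sketch of the Windows Server 2012 IO channel arithmetic described above:
# 1 IO channel per 16 vCPUs per virtual SCSI device, and a queue depth of
# 256 per device. The helper below only mirrors those ratios; it is not a
# real Hyper-V API.

VCPUS_PER_CHANNEL = 16
QUEUE_DEPTH_PER_DEVICE = 256

def channels_per_device(vcpu_count: int) -> int:
    """IO channels each VHDX on a virtual SCSI controller gets."""
    return max(1, vcpu_count // VCPUS_PER_CHANNEL)

for vcpus in (4, 16, 32, 64):
    print(f"{vcpus:>2} vCPUs -> {channels_per_device(vcpus)} channel(s) per VHDX, "
          f"queue depth {QUEUE_DEPTH_PER_DEVICE} per device")
# 64 vCPUs -> 4 channels per VHDX, matching the figure above.
```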

That, people, is how you get a virtual machine to handle a million IOPS. Nice! Any questions or doubts about whether Hyper-V can deliver the capacity, throughput and performance have been wiped off the table, also for virtual storage IOPS. You can now go straight to how it will address your business needs. In my experience it does so brilliantly and very cost effectively. Life might not be perfect, but it is very good. 🙂