This question came up again recently and deserves a little blog post. If you want to see the benefits of ODX you’ll need to connect your virtual disks to a vSCSI controller or another supported controller option: iSCSI, vFC, an SMB 3 file share or a pass-through disk. But unless you have a really good reason to use pass-through disks, don’t. They limit you in too many ways.
Basically, in generation 1 virtual machines that boot from a vIDE controller this rules out the system disk. So the tip here is to store the data that’s moved around in or between virtual machines on vSCSI attached VHD or (preferably) VHDX virtual disks. If you can use generation 2 virtual machines, you’ll be able to leverage ODX on the system partition as well, as it boots from vSCSI.
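If you’re scripting this, here’s a minimal sketch using the Hyper-V PowerShell module (Windows Server 2012 and later); the VM name and paths are just examples, adjust them to your environment:

```powershell
# Create a data disk and attach it to the virtual SCSI controller
# (VM name and paths below are examples, not from the original post)
New-VHD -Path "C:\ClusterStorage\Volume1\DemoVM\Data.vhdx" -SizeBytes 100GB -Dynamic
Add-VMHardDiskDrive -VMName "DemoVM" -ControllerType SCSI -ControllerNumber 0 `
    -Path "C:\ClusterStorage\Volume1\DemoVM\Data.vhdx"

# Verify which controller each disk hangs off
Get-VMHardDiskDrive -VMName "DemoVM" |
    Select-Object ControllerType, ControllerNumber, Path
```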
It goes without saying that you need to store any virtual disks involved on ODX capable LUNs, accessed via iSCSI, FC, FCoE, an SMB 3 file share or SAS, for ODX to be available to the virtual machine.
Also beware that ODX only works on NTFS volumes. The files cannot be compressed or encrypted, sparse files are not supported, and the volume cannot be BitLocker protected.
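A quick way to sanity-check these prerequisites from PowerShell; the registry value below is how Windows exposes the ODX on/off switch (0 = enabled, 1 = disabled), and the file path is just an example:

```powershell
# Is ODX enabled on this host? (0 = enabled, 1 = disabled)
Get-ItemProperty "HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem" |
    Select-Object FilterSupportedFeaturesMode

# Check that a given file has none of the attributes that rule out ODX
# (compressed, encrypted or sparse); returns 0 when the file is clean
$file = Get-Item "D:\ISO\example.iso"   # example path
$file.Attributes -band ([IO.FileAttributes]::Compressed -bor
                        [IO.FileAttributes]::Encrypted  -bor
                        [IO.FileAttributes]::SparseFile)
```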
Here’s a screenshot of a copy of 30GB worth of ISO files to a VHDX attached to a vSCSI controller:
Here’s a screenshot of a copy of 30GB worth of ISO files to a VHDX attached to a vIDE controller:
You’ll notice quite a difference. Depending on the load on the controllers/SAN, the copy via the vIDE controller is on average three times slower than the same action to a VHDX disk on a vSCSI controller.
I was preparing a presentation on highly available & high performance Hyper-V cluster networking by, you guessed it, presenting it. During that presentation I mentioned Jumbo Frames & VMQ (VMDq in Intel speak) for the virtual machine, Live Migration and CSV networks. Jumbo frames are rather well known nowadays, but VMQ is still something people have read about or, at best, tinkered with; not many are using it in production.
One of the reasons for this is that it isn’t explained and documented very well. You can find some decent explanations of what it is and what it does for you, but that’s about it. The implementation information is woefully inadequate and, as with many advanced network features, there are many hiccups and intricacies. But that’s a subject for another blog post; I need some more input from Intel and/or MSFT before I can finish that one.
Someone stated that they knew Jumbo frames are good for throughput on iSCSI networks and as such would also benefit iSCSI networks provided to the virtual machines, but asked: what about VMQ? Does that do anything at all for IP based storage? Yes it does. As a matter of fact, it’s highly recommended by MSFT IT in one of their TechEd 2010 USA presentations on Hyper-V and storage.
So yes, enable VMQ on both NIC ports used for iSCSI to the guest. Ideally these are two dedicated NICs connected to two separate switches to avoid a single point of failure. You do not need to team these on the host or have Multipath I/O (MPIO) running for this at the parent level. The MPIO part is done in the guests themselves, as that’s where the iSCSI initiator lives with direct connect. And to address the question that followed: you can also use Multiple Connections per Session (MCS) in the guest if your storage device supports it, but I must admit I have not seen this used in the wild. And then, finally coming to the point, both MPIO and MCS work transparently with Jumbo Frames and VMQ. So you’re good to go.
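On Windows Server 2008 R2 you configure VMQ and jumbo frames through the NIC’s advanced driver properties; on Windows Server 2012 and later the same settings are scriptable. A sketch with made-up adapter names (the exact jumbo frame keyword and value can differ per driver):

```powershell
# Check VMQ status, then enable it on the two NICs dedicated to guest iSCSI
# (adapter names are examples, not from the original post)
Get-NetAdapterVmq | Select-Object Name, Enabled, BaseProcessorNumber
Enable-NetAdapterVmq -Name "iSCSI-NIC1", "iSCSI-NIC2"

# Jumbo frames on the same NICs; "*JumboPacket" is the common standardized
# keyword, but verify what your NIC driver exposes
Set-NetAdapterAdvancedProperty -Name "iSCSI-NIC1", "iSCSI-NIC2" `
    -RegistryKeyword "*JumboPacket" -RegistryValue 9014
```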
Ever since Windows 2008 R2 SP1 became available, people have been waiting for Hyper-V Server 2008 R2 to catch up. The wait is over: last week Microsoft made it available on their website http://technet.microsoft.com/en-us/evalcenter/dd776191.aspx. That’s a nice package to have when it serves your needs, and there’s little to argue about. Guidance on how to configure it and how to get remote management set up has been out for a while and is quite complete, so that barrier shouldn’t stop you from using it where appropriate. If you’re starting out, head over to José Barreto’s blog to get a head start, and here’s some more information on the subject: http://technet.microsoft.com/en-us/library/cc794756(WS.10).aspx. Naturally, there are also some tools around to help out if needed and the Microsoft provided tools are not to your liking: http://coreconfig.codeplex.com/. So there you go: a free and very capable hypervisor available to the public that gives you high availability, Live Migration, Dynamic Memory, RemoteFX, and they even threw in their software iSCSI Target 3.3, so you can build a free iSCSI SAN supported by Microsoft. Life is good.