I’m Presenting at the Technical Experts Conference 2012 Europe

I’ll be speaking at the Technical Experts Conference 2012 Europe in Barcelona on Windows Server 2012 Hyper-V and its storage and network related improvements and promising new features. Some of you might know that I’m a Microsoft MVP in the Virtual Machine expertise (i.e. Hyper-V), but these sessions are not marketing or vaporware. Being an MVP is about sharing knowledge and experiences with you. I’m an early adopter in production from the day the RTM bits became available and we’re already reaping the benefits of those features, so it’s more than just lab work and theory.


I won’t be there alone, as my friends, colleagues and fellow MVPs Aidan Finn (@joe_elway), Carsten Rachfahl (@hypervserver) and Hans Vredevoort (@hvredevoort) will be there as well to present and share their knowledge, which is extensive, I assure you. It’s great to have the chance to come together again and talk about our technology passions.

You can find an overview of the session agenda here.

So I hope you can join us for an interesting and interactive conference where we can discuss your challenges and ways to address them. Trust me when I say that talking to other customers and technologists is a great way to learn, understand needs and find opportunities. We learn a lot from presenting and talking to you. I’ve attended a lot of conferences in my career and I still find them valuable. The return on investment for my employers has been great: motivated and skilled employees can save a business tenfold the cost spent keeping them that way.

Haven’t heard of TEC before?

Neither had I until a couple of years ago, but by good fortune I had the opportunity to attend as a delegate and found it very worthwhile in both content and networking opportunities. As it turns out, The Experts Conference (TEC) has been running for over a decade now and it delivers level 400 sessions on core Microsoft technologies. It focuses on Active Directory and Identity, Exchange, Virtualization and User Workspace Management.


TEC Europe is held at the Hotel Rey Carlos in Barcelona from 22-24 October 2012. Quest, as an alliance partner of Microsoft, welcomes program management, product management and development staff from Redmond, plus a number of field team members, to the event every year to support the training requirements of its users. This means two things: it’s a valuable event and, I admit, I’m honored to be invited to speak at it.

Budgets are tight

A great tip: Quest is offering a discount rate of 850 Euro to delegates who register by 21 September! You can get a discount code for registration by sending an email to [email protected]

Altaro Hyper-V Backup 3.5 Supports Windows 2012

Altaro is one of the first backup vendors to support Windows Server 2012, with the release of Hyper-V Backup v3.5. Few can match that speed to market, and then we’re talking the likes of CommVault, whom Altaro can teach some lessons left and right. I should know, as I’m a long time CommVault customer: whilst it’s a great product, they should really address some issues. Hiring a GUI developer is one, providing decent information and accessible support is another, and we won’t even mention pricing.

With Altaro Hyper-V Backup v3.5 we get full support for Windows Server 2012, that is CSV 2.0, VSS backups of SMB 3.0 shares, etc. As an early adopter I can appreciate the speed to market of a backup product. I do not like 3rd party vendors holding me back from getting the most out of Volume License Software Assurance, so these things matter to me when selecting products.

Check out http://www.altaro.com/hyper-v-backup/ for more information. Some of their customers are enough to make you look at their solution. At least it made me do so: Harvard University, Max-Planck Institute, Los Alamos National Laboratory, Princeton University, US Geological Survey, etc. (I’m a scientist by training, so yes, these customers appeal to me).

Disclaimer: no, I have never accepted any offer of sponsorship from any vendor, even when asked, just because I want to make sure you know whose story I’m telling.

Intel X520 Series NIC on Windows 2012 With Hyper-V Enabled Port Flapping Issue

When you install Windows Server 2012 RTM on a server with X520 series NICs you’ll notice that there is a native driver available, and the performance of that driver is fantastic. It’s really impressive to see.


That’s great news but I’ve noticed an issue in RTM that I already dealt with in the release candidate.

The moment you install Hyper-V, some of the X520 NIC ports can start flapping (connected/disconnected). You’ll see the sequence below endlessly on one port, sometimes on more.

[Screenshots: a NIC port endlessly cycling between connected and disconnected]

As you can imagine this ruins the party in Hyper-V networking a bit too much for comfort, but it can be fixed. I do not know the root cause, but it is driver related; the same thing happened in the release candidate. Now, however, things are easier to fix. Navigate to the Intel site to download their freshly released driver for the X520 series on Windows Server 2012 and install it (you don’t need to install the extra software with Advanced Network Services, as native Windows NIC teaming has arrived). After that the flapping will be gone.


Hope this helps some folks out!

Windows Server 2012 with Hyper-V & The New VHDX Format Leads The Way

Introduction

Whether you realize it or not, our trusted old VHD format is getting a bit long in the tooth. As a matter of fact it has been around since the last century. It has served us well, but now it needs a major overhaul to better serve us at present and to prepare us for the decades to come. We (at least in the environments I support) see a continuing demand for bigger virtual disks & ever better performance. This should be no surprise. Not only does the amount of data produced keep going up year after year, but we’re virtualizing more very resource intensive workloads than ever. Think image intensive data that has to be processed by number crunching virtual machines, or large databases like SQL Servers. Sure, 64 vCPUs and 1TB of memory are great and impressive, but we also need loads of fast and ever more reliable storage. Trying to serve and support these needs by combining 2TB disks is very cumbersome (to be polite), and pass-through disks take away a lot of the flexibility & options the VHD format gives us. So here comes the new VHDX format. There is no back porting here; the only OS at the moment that supports VHDX is Windows Server 2012. The good news is that we have in-box tools to convert between VHD & VHDX.

Bigger, Better & Faster

Size

The VHDX format supports up to 64TB now. Yes, that is 32 times more than the current VHD. As a matter of fact, a lot of SANs still in use today won’t give you a LUN of that size. Is there a need for this? Well, I work in some places with huge files in massive amounts, so I can use big LUNs and large data VHDX files. Concatenating disks is something I do not like to do. Come upgrade/maintenance/renewal time, that one bites a bit too much for comfort.

There are also some other virtual disk formats that need to wake up and break that 2TB size boundary, especially when Microsoft states that this is not a hard limitation of the file format. By that they mean they have room to increase it. Wow!

Protection Against Disk Corruption

The VHDX format also provides protection against corruption of the VHDX file during power failures. This is done by a logging mechanism for updates of the VHDX metadata structures. The log is contained within the VHDX file itself, so you won’t have to manage log files. The overhead is minimal, as only metadata such as block allocations and block state updates is logged, NOT the actual data stored. So no, it has not become a database you need to manage, don’t worry. Note that this protects only the VHDX file structures and not the data that is written to it; that job falls to NTFS or ReFS.
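The mechanism is the classic write-ahead logging pattern. Here is a minimal Python sketch of the idea, purely illustrative and in no way a reflection of the actual VHDX on-disk log format:

```python
# Illustrative write-ahead log for metadata updates (NOT the real VHDX
# format). Each update is appended to the log with a checksum before
# the metadata itself changes, so a torn write during a power failure
# leaves a detectable, discardable log entry instead of corrupt metadata.
import json
import zlib

class MetadataLog:
    def __init__(self):
        self.log = []        # stands in for the log region inside the file
        self.metadata = {}   # stands in for the VHDX metadata structures

    def update(self, key, value):
        entry = json.dumps({"key": key, "value": value})
        # the checksum lets replay detect a partially written entry
        self.log.append((zlib.crc32(entry.encode()), entry))
        self.metadata[key] = value

    def replay(self):
        # after a crash: rebuild metadata from intact log entries only
        recovered = {}
        for crc, entry in self.log:
            if zlib.crc32(entry.encode()) != crc:
                break  # torn entry: stop replaying here
            rec = json.loads(entry)
            recovered[rec["key"]] = rec["value"]
        return recovered

log = MetadataLog()
log.update("block_7", "allocated")
log.update("block_8", "allocated")
print(log.replay())  # {'block_7': 'allocated', 'block_8': 'allocated'}
```

Note that, just as described above, only small metadata records pass through the log; the payload data itself is never logged.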

The Need For Speed

With VHDX we also get larger block sizes up to 256MB for dynamic & differencing disks, meaning they perform better with workloads that allocate in larger chunks.
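Some back-of-the-envelope arithmetic shows why that matters for a dynamic disk, which grows one block at a time. The 10GB write and the smaller block sizes below are my own illustrative picks, not figures from the VHDX documentation:

```python
# A dynamic disk grows one block at a time, so larger blocks mean far
# fewer grow-and-update-metadata operations for big sequential writes.
MB = 1024 * 1024
write_size = 10 * 1024 * MB          # a hypothetical 10 GB sequential write
for block_size in (2 * MB, 32 * MB, 256 * MB):
    allocations = write_size // block_size
    print(f"{block_size // MB:>3} MB blocks -> {allocations} block allocations")
```

Fewer allocations means less metadata churn while the disk file grows, which is exactly where dynamic & differencing disks used to lag behind fixed ones.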

Modern Large Sector Disks

We get support to run VHDX on large sector disks without losing performance.

I refer you to KB articles Using Hyper-V with large sector drives on Windows Server 2008 and Windows Server 2008 R2 and Information about Microsoft support policy for large-sector drives in Windows.

As you can read there, the performance hit for both non-fixed VHDs and applications is pretty bad. On 512e disks (4K physical and 512-byte logical sector size) the problem is the Read-Modify-Write (RMW) process overhead in dynamic & differencing disks. 4K native disks (4K logical sector size) just aren’t supported by Hyper-V before Windows Server 2012. The maximum logical & physical sector size is now 4KB, and that means we get a lot better performance when running applications that are designed for 4KB workloads in Hyper-V 3.0. VHDX structures are aligned on MB boundaries, so the need for the RMW on the disk is eliminated if the physical sector size of the virtual disk is set to 4K.
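A toy model makes the RMW penalty concrete: a write that is smaller than, or not aligned to, the 4K physical sector forces the drive to read the whole sector, modify it and write it back, while an aligned 4K write is a single physical write. This is a conceptual sketch, not a disk driver:

```python
# Toy model of RMW on a 512e disk: 4K physical sectors behind
# 512-byte logical sectors. Any write that only partially covers a
# physical sector needs a read (to preserve the untouched bytes)
# before the write, i.e. Read-Modify-Write.
PHYS = 4096  # physical sector size in bytes

def physical_ops(offset, length):
    """Count physical sector (reads, writes) for one logical write."""
    first = offset // PHYS
    last = (offset + length - 1) // PHYS
    reads = writes = 0
    for sector in range(first, last + 1):
        start = sector * PHYS
        covered = min(offset + length, start + PHYS) - max(offset, start)
        if covered < PHYS:
            reads += 1   # partial coverage: read-modify-write needed
        writes += 1
    return reads, writes

print(physical_ops(512, 512))    # unaligned 512 B write -> (1, 1): RMW
print(physical_ops(4096, 4096))  # aligned 4K write      -> (0, 1): no RMW
```

The MB-aligned VHDX structures mentioned above are what keep guest 4K writes landing in the second, RMW-free case.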


Storing Custom Metadata

We also get the ability to store custom metadata in the VHDX file for information we find relevant. This could be about what’s on there, the OS version or patches applied. This custom data is stored using key/value pairs that support up to 1024 entries of 1MB each. That should be adequate for a while.
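As a conceptual illustration of those documented limits (the class and method names below are made up for this sketch, not the real VHDX metadata API):

```python
# Hypothetical key/value store enforcing the documented VHDX custom
# metadata limits: up to 1024 entries of up to 1 MB each.
MAX_ENTRIES = 1024
MAX_ENTRY_SIZE = 1024 * 1024  # 1 MB

class CustomMetadata:
    def __init__(self):
        self.entries = {}

    def set(self, key, value: bytes):
        if len(value) > MAX_ENTRY_SIZE:
            raise ValueError("entry exceeds 1 MB")
        if key not in self.entries and len(self.entries) >= MAX_ENTRIES:
            raise ValueError("more than 1024 entries")
        self.entries[key] = value

meta = CustomMetadata()
meta.set("os-version", b"Windows Server 2012")
meta.set("patch-level", b"RTM")
print(sorted(meta.entries))  # ['os-version', 'patch-level']
```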

VHDX Leverages Offloaded Data Transfer (ODX)

The virtual stack allows ODX requests from the guest to flow down all the way to the hardware and as such VHDX operations benefit from this as well. Examples of this are:

  • Creating VHDX files, even very large ones, has gotten a whole lot faster, especially if you can offload this to the SAN. If your storage vendor supports ODX, then you’re in VHDX creation speed heaven! As a bonus, even VHD files created in Windows Server 2012 benefit from this technology.
  • On top of that, Merge & Mirror operations are also offloaded to the hardware, which is great for merging snapshots or live storage migration.
  • In the future the virtual machines themselves might be able to pass through offload operations. This is hard-core stuff and, due to the file layout, far from trivial.

Please note that this only works with SCSI attached VHDX files. IDE devices have no ODX support capabilities.

TRIM/UNMAP Support

With Windows Server 2012 / VHDX we get what is described in the documentation as “Efficiency in representing data (also known as ‘trim’), which results in smaller file size and allows the underlying physical storage device to reclaim unused space. (Trim requires physical disks directly attached to a virtual machine or SCSI disks in the VM, and trim-compatible hardware.)” It also requires Windows Server 2012 on hosts & guests.

It’s a major benefit for the “stay thin” philosophy associated with thin provisioning. No more running “sdelete” in your Windows VMs (tedious, slow, resource intensive) or installing an agent (less tedious) to reclaim space. This is important to many of us, and this level of support and integration makes our lives a lot easier & speeds things up. So choose your storage wisely.

TRIM is the specification for this functionality by Technical Committee T13, which handles all standards for ATA interfaces. UNMAP is the Technical Committee T10 specification for this; it is the full equivalent of TRIM, but for SCSI disks. UNMAP is used to remove physical blocks from the storage allocation in thinly provisioned Storage Area Networks. My understanding is that which one is used on the physical storage depends on what that storage is (SSD/SAS/SATA/NL-SAS, or a SAN with one or all of the above).

Basically, VHDX disks report themselves as thin provision capable. That means that deletes as well as defrag operations in the guests will send down “unmaps” to the VHDX file, which are used to ensure that block allocations within the VHDX file are freed up for subsequent allocations; the same requests are also forwarded to the physical hardware, which can reuse the space for its thin provisioning purposes. This means that a VHDX will only consume storage for the data really stored in it & not for the entire size of the VHDX, even when it is a fixed one. You can see that not just the operating system but also the application/hypervisor that owns the file system on which the VHDX lives needs to be TRIM/UNMAP aware to pull this off. It is worth noting that this only works on SCSI attached storage in the virtual machine, not on IDE connected VHDX disks.
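Conceptually the flow looks like this little model: the guest deletes data, an unmap travels down to the virtual disk, which frees the block for reuse and forwards the request to the physical storage. All names here are illustrative, not a real API:

```python
# Conceptual "stay thin" model: a thin virtual disk only holds space
# for blocks that were actually written, and an unmap both frees the
# block for reuse and is forwarded down to the physical storage.
class ThinDisk:
    def __init__(self):
        self.allocated = {}   # virtual block number -> data
        self.forwarded = []   # unmaps passed down to the hardware

    def write(self, block, data):
        self.allocated[block] = data      # allocate on first write

    def unmap(self, block):
        self.allocated.pop(block, None)   # free for subsequent allocations...
        self.forwarded.append(block)      # ...and tell the array to reclaim it

    @property
    def consumed_blocks(self):
        return len(self.allocated)        # space used = data really stored

disk = ThinDisk()
disk.write(0, b"file contents")
disk.write(1, b"more data")
disk.unmap(1)                             # guest deleted the second file
print(disk.consumed_blocks)  # 1
```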

Closing Thoughts On The Future Proof VHDX Format

For anyone interested in developing against the VHDX format, the specification will be published, so that’s good news for ISVs big and small. For all the reasons mentioned above I’m a fan of the VHDX format, and it’s yet one more reason to go full speed ahead with testing Windows Server 2012 so we can move forward fast and reap the benefits of reliability & scalability without sacrificing performance.