Microsoft Management Summit 2013 Registration opens on December 3rd, 2012

Just a heads up to everyone planning to attend the Microsoft Management Summit 2013 (MMS 2013): this post is to let you know that registration opens on December 3rd, 2012.


So, I’d keep an eye out for the MMS 2013 site and register as soon as you get the opportunity. This event has the tendency to sell out fast.

Shared Nothing Live Migration Leverages SMB 3.0 Under the Hood

Shared Nothing Live Migration

By now most of you must have heard about the Shared Nothing Live Migration capabilities introduced with Windows Server 2012 Hyper-V. If not, I suggest you check it out over here and then come back for some extra insight into how it works.

Shared Nothing Live Migration is not magic, however. It is made possible by the fact that it relies on some of the new capabilities SMB 3.0 in Windows Server 2012 brought us. Once you know this, you also realize that this can be quite fast. The reason is that you can design the network for Shared Nothing Live Migration with 10Gbps or higher, SMB Multichannel and RDMA for unprecedented throughput. Yup, if you invest in setting up the networking right, the remaining bottleneck might be the amount of storage IO you can handle whilst reading from the source and writing to the target, or the CPU load you put on your host. Windows will protect you from draining your host beyond reason, by the way.

Making Shared Nothing Live Migration Work

You need to set it up, of course, and do it right. Here’s a list of steps you need to do / check on every Hyper-V host involved.

  1. Enable incoming and outgoing live migrations on all involved Hyper-V hosts, otherwise it will not work. If your hosts are part of a cluster this is taken care of for you.
  2. Select an authentication protocol (CredSSP or Kerberos).
    Kerberos authentication allows you to live migrate VMs without having to log on to the source host itself. It does require you to configure constrained delegation in Active Directory (don’t go for "Trust this computer for delegation to any service"; follow the principle of least privilege).
  3. Set the number of simultaneous Live Migrations. Experiment a little to find the best value for your environment.
  4. Set the network(s) for incoming Live Migrations. It’s best to design this and not just use any network.

See Keith Mayer’s excellent blog for more details.
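
If you prefer PowerShell over the GUI for those four steps, a minimal sketch with the Hyper-V module looks like this; the subnet and the number of simultaneous migrations are placeholder values, and you run it on every host involved:

```powershell
# A minimal sketch; run on every Hyper-V host involved.
# The subnet and migration counts are placeholder values for your own design.

# 1. Enable incoming and outgoing live migrations on this host
Enable-VMMigration

# 2. Use Kerberos authentication (requires the constrained delegation discussed below)
Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos

# 3. Set the number of simultaneous live (and storage) migrations
Set-VMHost -MaximumVirtualMachineMigrations 4 -MaximumStorageMigrations 4

# 4. Only accept incoming live migrations on the dedicated migration network
Set-VMHost -UseAnyNetworkForMigration $false
Add-VMMigrationNetwork -Subnet '10.10.10.0/24'
```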

Constrained Delegation

Shared Nothing Live Migration needs some prep work security wise before it will work. In Active Directory you need to set up some constrained delegation permissions. To some people the concept of constrained delegation is brand new, but if you’ve been deploying multi-tiered web applications in your environment before, this is something you’ve dealt with many times. It’s the same approach you need to get a web client using Windows Authentication to talk via an IIS web app or service to a SQL Server database and/or read file data from somewhere else; you’ve configured this plenty of times.

Use an account to perform the Shared Nothing Live Migration that has administrator privileges on all computers involved. While you can use groups in AD to make your life and permission management easier when it comes to granting share permissions & NTFS rights on folders, it doesn’t work that way with constrained delegation. Groups cannot be used here, so you’ll need to use individual accounts. PowerShell scripting can help lessen the work if you have many Hyper-V hosts involved. In large environments (up to 64 nodes!) this inundates the constrained delegation tab with computer names, so PowerShell really is your friend here.
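
One way to script this is with the Active Directory module, writing the SPNs into the msDS-AllowedToDelegateTo attribute of each computer object. The sketch below only illustrates the idea; the host names and domain are placeholders, and you should verify the result on the Delegation tab afterwards:

```powershell
# A sketch only: assumes the ActiveDirectory module is available and that the
# host names and domain below are placeholders for your own environment.
Import-Module ActiveDirectory

$hyperVHosts = 'ZULU', 'TANGO', 'VICTOR'   # hypothetical host names
$dnsDomain   = 'contoso.com'               # hypothetical DNS domain

foreach ($sourceHost in $hyperVHosts) {
    # Build the SPN list for every *other* host this one may migrate VMs to
    $spns = foreach ($targetHost in ($hyperVHosts | Where-Object { $_ -ne $sourceHost })) {
        "cifs/$targetHost"
        "cifs/$targetHost.$dnsDomain"
        "Microsoft Virtual System Migration Service/$targetHost"
        "Microsoft Virtual System Migration Service/$targetHost.$dnsDomain"
    }
    # Add (not replace) the constrained delegation entries on the computer object
    Set-ADComputer -Identity $sourceHost -Add @{ 'msDS-AllowedToDelegateTo' = $spns }
}
```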

On each computer object you need to set the delegation permissions for the CIFS service and the Microsoft Virtual System Migration Service to all other computers you want to involve in Shared Nothing Live Migration as a source or a target.

IMPORTANT! Hey, why do we need CIFS constrained delegation here? Well, because Shared Nothing Live Migration under the hood leverages SMB 3.0. It creates a temporary file share on the target to get the job done! So once you realize that Shared Nothing Live Migration uses SMB 3.0 shares to do its magic, it becomes obvious why these constrained delegation permissions for CIFS in Active Directory are needed.

Visualizing the SMB 3.0 share in action

At the source server (ZULU), after starting the Shared Nothing Live Migration, we see that we have a connection to a share on the target server. That share is named after the source server with an ID, like ZULU.3341302342$, so it’s a hidden share.

[Screenshot: the SMB connection from the source server to the hidden share on the target]

On the target server we run Get-SmbSession | fl and see that indeed the source computer has two sessions open on the target server.

[Screenshot: Get-SmbSession output on the target server]

Let’s see if a share is created, using Get-SmbShare on the target. Yes, there is:

[Screenshot: Get-SmbShare output on the target server showing the hidden migration share]

In Computer Management it shows up like this on the target server:

[Screenshot: the hidden share in Computer Management on the target server]
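
If you want to follow along yourself while a Shared Nothing Live Migration is running, these are the kinds of commands to use. A quick sketch; the target-side cmdlets are the ones shown above, and Get-SmbConnection is the one that shows the client side of that hidden share on the source:

```powershell
# On the source host (ZULU): list the SMB connections this host has open;
# the hidden migration share on the target shows up here
Get-SmbConnection

# On the target host: the source server has sessions open against it
Get-SmbSession | Format-List

# On the target host: list the shares, including the hidden temporary one
Get-SmbShare
```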

In Explorer you can see this as a $VSM$ folder in the root of C:, which has a subfolder with the name of the source server and an ID, like ZULU.2541288334$. This subfolder is shared (hidden) and contains a shortcut to the volume where the selected target folder resides; this could be C: or D: on local storage (DAS), shared storage (CSV) or an SMB 3.0 share as well. In the screenshot below the folder doesn’t match up with the share name, as they were taken from different Shared Nothing Live Migrations.

[Screenshot: the $VSM$ folder and its hidden subfolder in Explorer]

Security wise we’re supposed to keep our hands off, and the security settings reflect this. But if you take ownership you can go peek at what’s in there, when writing a blog post for example. We indeed saw the copied disk size of the VM being live migrated increase in the selected target folder.

[Screenshots: the copied virtual disk growing in the target folder during the migration]

Conclusion

I find it pretty cool to see how this all works under the hood. Hope you found this educational and interesting as well. It’s a testimonial to how SMB 3.0 can be leveraged for all kinds of interesting scenarios.

KB2770917 Updating Host & Guest Integration Services Components – Most Current Version Depends on Guest OS

After installing http://support.microsoft.com/kb/2770917 on Windows Server 2012 Hyper-V hosts, the integration services components are upgraded from 6.2.9200.16384 to 6.2.9200.16433. Windows Server 2012 guests get that same upgrade and as such also the newer integration services components. Guests with older OS versions need a different approach, so I turned to all the great PowerShell support now available for Hyper-V to automate this. Pretty pleased with the results of our adventures in PowerShell scripting, I let the script loose on a Hyper-V cluster dedicated to test & development. As such there are some virtual machines on there running Windows 2003 SP2 (x64) and Windows XP SP3 (x86). Guess what, after running my script and verifying the integration services version I see that those VMs still report version 6.2.9200.16384. No update. Didn’t my new scripting achievement “take” on those older guests?
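
The full update script is beyond the scope of this post, but the kind of version check it performs boils down to something like this (the cluster name is a placeholder):

```powershell
# What integration services version does every VM on this host report?
Get-VM |
    Select-Object Name, State, IntegrationServicesVersion |
    Sort-Object IntegrationServicesVersion |
    Format-Table -AutoSize

# Or across all nodes of a cluster (cluster name is a placeholder)
Get-ClusterNode -Cluster 'TestDevCluster' |
    ForEach-Object { Get-VM -ComputerName $_.Name } |
    Select-Object ComputerName, Name, IntegrationServicesVersion
```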

So I try the install manually and this is what I get:

[Screenshot: result of manually running the integration services setup in one of the older guests]

Why is there no upgrade for these guests? Is one not needed, or do I have an issue? So I mount the ISO and dig around in the files to find a clue in the dates:

[Screenshot: file dates inside the integration services ISO]

It looks like there are indeed no updated components in there for Windows XP / W2K3. So then I look at the following registry key on the host, where I normally use the Microsoft-Hyper-V-Guest-Installer-Win6x-Package value to find out what integration services version my hosts are running:

[Screenshot: the integration services version values in the host registry]

Bingo, there it is indicated that we indeed need a different integration services version for XP/W2K3 than for W2K8(R2)/W2K12 and Vista/Windows 7/Windows 8. Cool, but I had to check whether this was indeed as it should be, and I’m happy to confirm all is well. Ben Armstrong (http://blogs.msdn.com/b/virtual_pc_guy/) confirmed that this is how it should be. There was an update needed for backup that only applied to Windows 8 / Windows Server 2012 guests. As this fix was in a common component for Windows Server 2008 and later, they all got the update. But for the older OS versions this was not the case and hence no update is needed, which is reflected in all of the above. In short, this means your XP SP3 & W2K3 SP2 VMs are just fine running the older version of the integration services and are not in any kind of trouble.

This does leave me with another task. I was planning to add enhancements to my script like feedback on progress, some logging and some better logic for clustered and non-clustered environments, but now I also have to address this possibility and verify, using the registry keys on the host, which IC version I should check against per guest OS version. Checking against just the one related to the host isn’t good enough.
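
So the script needs to read those values and compare per guest OS family. A sketch of that lookup; the exact key path and the Win5x value name are assumptions you should verify on your own hosts:

```powershell
# A sketch; verify the key path and value names on your own Hyper-V hosts.
$key = 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization\GuestInstaller\Version'

# Win5x = XP / W2K3 guests, Win6x = Vista / W2K8 and later guests
Get-ItemProperty -Path $key |
    Select-Object 'Microsoft-Hyper-V-Guest-Installer-Win5x-Package',
                  'Microsoft-Hyper-V-Guest-Installer-Win6x-Package'
```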

Windows Server 2012 VHDX Thin Provisioning Benefits Explored

Thin Provisioning With Hyper-V

Windows Server 2012 provides thin provisioning at the virtual layer via the VHDX file format. It also provides it at the physical storage layer when your storage supports it. For the latter, don’t forget that this also means Storage Spaces! So even in environments where budgets are really tight you can leverage this on the physical storage now. It’s not just for the feature-rich SAN owners anymore.

Even if you use a storage subsystem that does not support thin provisioning at the physical layer, you will benefit from this mechanism when you use dynamic VHDX files. Not only will these grow less, but during shutdown they shrink by the size of the empty blocks. Pretty cool! I do however see a potential risk of increased fragmentation. This has a negative impact on performance and needs defragmentation to remediate, which also has an impact on IO performance. How much of a concern this is depends on your environment and needs. We’ll also have to see in real life how well dynamic VHDX files live up to the performance improvements they got with Windows Server 2012, to entice more people to use them. You have proponents and naysayers. I’m selective and let the circumstances and needs/requirements decide.

Thin Provisioning at the Virtual Layer

You can take a look at the TechEd 2012 session VIR301 by Senthil Rajaram to see how VHD versus VHDX behaves in regards to thin provisioning. I will not repeat all of this here. What I am going to do is look at some other situations.

Important note: You get this UNMAP feature automatically in Windows. There’s no need to manually run the Optimize-Volume command we’ll use in the scenarios below. It’s run automatically for us when the standard defrag scheduled task runs, or by the NTFS check pointing mechanism that sends the info down every 5 minutes. So these will normally take care of all that. But as the defrag “only” runs every week by default, you might want to tweak it or create your own scheduled task in your environment if needed. In demos and labs we’re rather impatient geeks, so even the 5 minute interval of the check pointing mechanism is too long, so we run “Optimize-Volume -DriveLetter X –ReTrim” to get immediate gratification while testing. In real life it’s a zero touch feature; you don’t need to babysit it.
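
For the impatient, here’s what that looks like in practice; the scheduled task path and name are what I find on Windows Server 2012, so verify them on your own systems:

```powershell
# The built-in weekly task that, among other things, sends the retrim/UNMAP info down
# (task path and name as found on Windows Server 2012; verify on your own systems)
Get-ScheduledTask -TaskPath '\Microsoft\Windows\Defrag\' -TaskName 'ScheduledDefrag'

# In a demo or lab: force an immediate retrim on a volume instead of waiting
Optimize-Volume -DriveLetter X -ReTrim -Verbose
```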

Fixed VHDX versus Dynamic VHDX

Apart from the fact that you’ll have no shrink on shutdown, this optimization does nothing for the file size of a fixed VHDX. The only benefit here is that the UNMAP command can be passed to the physical storage, where it can help if that storage supports it. At the virtual layer it doesn’t matter for a fixed sized VHDX disk.

Dynamic VHDX Disk

You’ll profit from the savings in storage when the dynamically expanding VHDX file doesn’t need to grow as much. This reduces the overhead of expanding the disk, which is a performance benefit, and it even helps your non thin provisioning capable storage go further.
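
A quick way to see the difference between what a dynamic VHDX provisions and what it actually consumes on disk; the path and size are placeholder values:

```powershell
# Create a 50 GB dynamically expanding VHDX (path and size are placeholders)
New-VHD -Path 'D:\VMs\Demo\Data1.vhdx' -SizeBytes 50GB -Dynamic

# Compare the provisioned size with what the file actually consumes on disk
Get-VHD -Path 'D:\VMs\Demo\Data1.vhdx' |
    Select-Object VhdFormat, VhdType,
        @{ Name = 'SizeGB';     Expression = { $_.Size / 1GB } },
        @{ Name = 'FileSizeGB'; Expression = { [math]::Round($_.FileSize / 1GB, 2) } }
```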

Watch Senthil’s presentation (from around minute 20) to see the benefits in action. With VHDX, if you “shift delete” the files inside the VM, then run “Optimize-Volume -DriveLetter X –ReTrim” or the defrag job, and then copy new files, you’ll see that there is no additional file growth as long as you don’t exceed the current size of the VHDX. If you don’t do this, both the VHD and VHDX file will grow.

But there is another potential benefit that makes this important. Even with the block sizes that have been increased to have less overhead when growing dynamic VHDX files, we still have to deal with fragmentation of the VHDX files on the storage where they live. The better/more the empty blocks are reused, the less the dynamic files will have to grow. This means you’ll have less opportunity for fragmentation. Whether this compensates for the potential of more fragmentation due to the shrinking when they are shut down, I don’t know. Whether the performance improvements for dynamic disks are good enough will depend on your environment and needs. Defragmentation can help mitigate this, but IO performance suffers during the defragmentation process. Do it, or better, schedule it, wisely!

Virtual SCSI controller attached versus virtual IDE controller attached

What about a guest (boot) VHDX disk attached to an IDE controller? I see a lot of one disk virtual machines out there, so it would be a pity if it didn’t work for those and only for the ones that have extra vSCSI disks attached. So let’s test this.


Below you see the disk size of the VHD and VHDX files and what type of controller they are attached to. As you can see, they had one or two 3.3 GB ISO files copied to them which were then “shift deleted”. The size of the VHD(X) files reflects the amount of data they stored.

[Screenshot: VHD/VHDX file sizes and the controllers they are attached to, before the retrim]
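
To pull the same information with PowerShell instead of the GUI, something like this will do (a sketch; run it on the host):

```powershell
# Which controller is each virtual disk attached to, and how big is the
# VHD(X) file on disk right now?
Get-VM | Get-VMHardDiskDrive |
    Select-Object VMName, ControllerType, ControllerNumber, ControllerLocation, Path,
        @{ Name = 'FileSizeGB'; Expression = { [math]::Round((Get-Item $_.Path).Length / 1GB, 2) } }
```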

Now, after running the defrag job or executing “Optimize-Volume -DriveLetter X –ReTrim” inside the VM, you’ll see the results below once you shut down the VM:

[Screenshot: VHD(X) file sizes after the retrim and VM shutdown]

So as it turns out, the thin provisioning benefits work with IDE attached VHDX files as well! Yes, inside a Windows Server 2012 virtual machine you get UNMAP support with IDE attached VHDX disks too. Think of hosting companies with many thousands of single disk virtual machines who can leverage this as well. This is something you might not expect after having watched the video, as there they only talk about virtual SCSI / FC controllers.

Conclusion

Tests like these are a bit artificial, but they do demonstrate how the technology works. In real life it will translate into efficiencies over time, based on the data creation and deletion in your VHDX files. Think about hundreds or thousands of virtual machines in your environment leveraging this mechanism. Over time, on that scale, the amount of storage consumed will be reduced, which results in better economies. Now leverage that together with thin provisioning support in Storage Spaces and you see that there are some very interesting scenarios to investigate. Somehow it’s starting to look like you can have your cookie and eat it too. You don’t need an expensive SAN to get these efficiencies at the physical storage layer, but if you have one and used to have to mess around with sdelete or agents, it’s easy to see the benefit you get from this here as well.