Assigning Large Memory To Virtual Machine Fails: Event ID 3320 & 3050

We had a kind reminder recently that we shouldn’t forget to complete all steps in a Hyper-V cluster node upgrade process. The proof of a plan lies in the execution. We needed to configure a virtual machine with a whopping 50GB of memory for an experiment. No sweat, we have plenty of memory in those new cluster nodes. But when we tried to do so it failed with a rather obscure error in System Center Virtual Machine Manager 2008 R2.

Error (12711)

VMM cannot complete the WMI operation on server hypervhost01.lab.test because of error: [MSCluster_Resource.Name="Virtual Machine MYSERVER"] The group or resource is not in the correct state to perform the requested operation.

(The group or resource is not in the correct state to perform the requested operation (0x139F))

Recommended Action

Resolve the issue and then try the operation again.


One option we considered was that SCVMM 2008 R2 didn’t want to assign that much memory because one of the old hosts was still a member of the cluster and “only” had 48GB of RAM. But nothing that advanced was going on here. Looking at the logs, we found the culprit pretty fast: lack of disk space.

We saw the following errors in the Microsoft-Windows-Hyper-V-Worker-Admin event log:

Log Name:      Microsoft-Windows-Hyper-V-Worker-Admin
Source:        Microsoft-Windows-Hyper-V-Worker
Date:          17/08/2011 10:30:36
Event ID:      3050
Task Category: None
Level:         Error
Keywords:     
User:          NETWORK SERVICE
Computer:      hypervhost01.lab.test
Description:
‘MYSERVER’ could not initialize memory: There is not enough space on the disk. (0x80070070). (Virtual machine ID DEDEFFD1-7A32-4654-835D-ACE32EEB60EE)

Log Name:      Microsoft-Windows-Hyper-V-Worker-Admin
Source:        Microsoft-Windows-Hyper-V-Worker
Date:          17/08/2011 10:30:36
Event ID:      3320
Task Category: None
Level:         Error
Keywords:     
User:          NETWORK SERVICE
Computer:      hypervhost01.lab.test
Description:
‘MYSERVER’ failed to create memory contents file ‘C:\ClusterStorage\Volume1\MYSERVER\Virtual Machines\DEDEFFD1-7A32-4654-835D-ACE32EEB60EE\DEDEFFD1-7A32-4654-835D-ACE32EEB60EE.bin’ of size 50003 MB. (Virtual machine ID DEDEFFD1-7A32-4654-835D-ACE32EEB60EE)

Sure enough, a smaller amount of memory, 40GB, less than the remaining free space on the CSV, did work. That made me remember we still needed to expand the LUNs on the SAN to provide the storage space for the large BIN files associated with these kinds of large memory configurations. Can you say "luxury problems"? The BIN file contains the memory of a virtual machine or snapshot that is in a saved state. Now you need to know that the BIN file actually requires the same disk space as the amount of physical memory assigned to a virtual machine. That means it can require a lot of room. Under "normal" conditions these don’t get this big and we provide a reasonable buffer of free space on the LUNs anyway for performance reasons, growth etc. But this was a bit more than that buffer could absorb.
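Since the BIN file reservation simply equals the configured memory, a quick free-space check on the CSV before assigning a big chunk of memory would have flagged this up front. Below is a minimal PowerShell sketch of such a check (not part of the original troubleshooting; the CSV mount point and memory figure are just examples), run on one of the cluster nodes:

# Compare the memory you plan to assign against the free space left on the CSV,
# since the .bin file will need roughly the same amount of disk space.
$csvPath  = "C:\ClusterStorage\Volume1"   # example CSV mount point
$memoryGB = 50                            # memory you intend to assign to the VM

$volume = Get-WmiObject -Class Win32_Volume | Where-Object { $_.Name -eq ($csvPath + "\") }
$freeGB = [math]::Round($volume.FreeSpace / 1GB, 1)

if ($freeGB -lt $memoryGB) {
    Write-Warning "Only $freeGB GB free on $csvPath, not enough room for a $memoryGB GB .bin file."
} else {
    Write-Host "$freeGB GB free on $csvPath, enough for the $memoryGB GB .bin file."
}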

As the plan already stated that we needed to expand the LUNs a bit to be able to deal with this kind of memory hog, the storage to do so was available and the LUN wasn’t maxed out yet. If not, we would have been in a bit of a pickle.

So there you go: a real-life example of what Aidan Finn warns about when using dynamic memory. Also see KB 2504962 “Dynamic Memory allocation in a Virtual Machine does not change although there is available memory on the host”, which discusses the scenario where dynamic memory allocation seems not to work due to lack of disk space. Don’t forget about your disk space requirements for the BIN files when using virtual machines with this much memory assigned. They tend to consume considerable chunks of your storage space. And even if you don’t forget about it in your planning, please don’t forget to execute every step of the plan.

Hyper-V Cluster Nodes Upgrade: Zero Down Time With Intel VT FlexMigration

Well, the oldest Hyper-V cluster nodes are 3+ years old. They’ve been running Hyper-V clusters since the RTM of Hyper-V for Windows Server 2008. Yes, you needed to update the “beta” version to the RTM version of Hyper-V that came later. Bit of a messy decision back then, but all in all that experience was painless.

These nodes/clusters were upgraded to W2K8R2 Hyper-V clusters very soon after that SKU went RTM, but now they have reached the end of their “Tier 1” production life. The need for more capacity (CPU, memory) was felt. Scaling out was not really an option. The cost of fiber channel cards is big enough, but fiber channel switch ports need activation licenses and the cost for those borders on legalized extortion.

So upgrading to more capable nodes was the standing order. Those nodes became DELL R810 servers. The entire node upgrade process itself is actually quite easy. You just live migrate the virtual machines over to clear a host, which you then evict from the cluster. You recuperate the fiber channel HBAs to use in the new node that you then add to the cluster. You just rinse and repeat until you’re done with all nodes. Thank you Microsoft for the easy clustering experience in Windows 2008 (R2)! Those nodes now also have 10Gbps networking kit to work with (Intel X520 DA SFP+).
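For reference, here is a rough sketch of the evict/add part of that process using the FailoverClusters PowerShell module that ships with Windows Server 2008 R2. The cluster and node names are made up for the example, and it assumes you have already live migrated all virtual machines off the old node (for example via VMM maintenance mode):

Import-Module FailoverClusters

# Evict the old node once it no longer owns any virtual machines or other cluster resources.
Remove-ClusterNode -Cluster "HVCluster01" -Name "OldNode01"

# After moving the HBAs into the new server and installing Hyper-V and
# failover clustering on it, join it to the cluster.
Add-ClusterNode -Cluster "HVCluster01" -Name "NewNode01"

# Re-run validation to document the new configuration (expect the service pack
# nag mentioned below if your node builds differ).
Test-Cluster -Cluster "HVCluster01"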

If you do your homework this process works very well. The cool thing is that there is not much to do on the SAN/HBA/fiber switch configuration side, as you recuperate the HBAs with their World Wide Names. You just need to update some names/descriptions to represent the new nodes. The only thing to note is that the cluster validation wizard nags about inconsistencies in node configuration and service packs. That’s because the new nodes are installed with SP1 integrated, as opposed to the original ones having been upgraded to SP1, etc.

The beauty is that by sticking to Intel CPUs we could live migrate the virtual machines between nodes having Intel E5430 2.66GHz CPUs (5400-series “Harpertown”) and those having the new X7560 2.27GHz CPUs (Nehalem-EX “Beckton”). There was no need to use the “Allow migration to a virtual machine with a different processor” option. Intel’s investment (and ours) in VT FlexMigration is paying off, as we had a zero down time upgrade process thanks to this.


You can read more about Intel VT FlexMigration here.

And in case you’re wondering: those PE2950 III servers are getting a second life. Believe it or not, there are software vendors that don’t have application life cycle management, virtualization support or roadmaps. So some hardware comes in handy to transplant those servers onto when needed. Yes, it’s 2011 and we’re still dealing with that crap in the cloud era. I do hope the vendors of those applications get the message or management cuts the rope and lets them fall.

Microsoft Offers Operations Manager Community Evaluation Program (2012 CEP)

At TechEd 2011 Microsoft announced the OpsMgr 2012 Community Evaluation Program (CEP) and they are now inviting everyone to apply to take part in this during the public Beta time frame. They position a CEP as follows:

Many of you are likely familiar with Microsoft TAP’s, Technology Adoption Programs, where a small pool of customers partner with our engineering teams to preview and provide feedback on pre-beta software. TAP participants provide our engineers with some early guidance and validation of next generation software, prior to us releasing publicly-available beta software. TAP is a great program, but it starts very, very early on and usually fills up quick (and waay before beta). The OpsMgr 2012 TAP has been very active in helping us with early builds, but it is unfortunately full.

The Community Evaluation Program (CEP) has recently been created to provide a broader range of customers with an in-depth experience with our upcoming beta software.

Essentially, a CEP is an organized way of bringing our subject matter experts (SMEs) from our product teams, our community (like MVPs and experienced users) and those interested in taking a deep look at our v.Next software for evaluation and preparation for deployment purposes.

This is good news. We’ve got the SCVMM 2012 Beta running in the lab, and it will be nice to get our hands on the SCOM 2012 Beta as well. For an overview of the Operations Manager 2012 CEP, take a look at the TechNet blog post http://blogs.technet.com/b/momteam/archive/2011/06/02/now-enrolling-for-the-operations-manager-2012-cep.aspx and the OM12 CEP overview datasheet.

If this is to your liking you can get all the information you need here and follow this link to apply: Apply for the OpsMgr 2012 CEP. Somewhere in June the accepted participants will get the SCOM 2012 topic schedule & access to the CEP discussion forums. If you have questions on all this you can send them to [email protected].

System Center Virtual Machine Manager 2012 Using WSUS To Update Hyper-V Cluster Hosts & Other Fabric Servers

One very neat feature in System Center Virtual Machine Manager 2012 (SCVMM 2012), which is currently in Beta, is the integration with WSUS to automate the patching of Hyper-V cluster hosts (plus the library servers, SCVMM servers and the update servers, i.e. the fabric). The fact that SCVMM 2012 will give you the complete toolset to take care of this is yet another great addition to the functionality available in Virtual Machine Manager 2012. More and more I’m looking forward to using it in production as it has so many improvements and new features. Combine that with what’s being delivered in System Center Operations Manager (SCOM 2012) and the other members of the System Center family and I’m quite happy with what is coming.

But let’s get back to the main subject of this blog: using WSUS and SCVMM 2012 to auto-update the Hyper-V cluster hosts without interruption to the virtual machines that are running on them. Up until now, we needed to script such a process out with PowerShell, even though having SCVMM 2008 R2 makes it easier since we have Maintenance Mode in that product, which will evacuate all VMs from that particular host, one by one. The workflow of this script looks like this (a rough PowerShell sketch follows the list below):

  • Place the Host Node in Maintenance Mode in SCOM 2007 R2 (So we don’t get pesky alerts)
  • Place the Host Node in Maintenance Mode in SCVMM2008R2 (this evacuates the VMs from the host via Live Migration to the other nodes in the cluster)
  • Patch the Host and restart it
  • Stop Maintenance Mode on the host node in SCVMM2008R2 (So it can be used to run VMs again)
  • Stop Maintenance Mode on the host node in SCOM 2007 (We want it to be monitored again)
  • Rinse & Repeat until all Host nodes are done. Depending on the size of the cluster you can do this with multiple nodes at the same time. Just remember that there can be only one Live Migration action taking place per node. That means you need at least 4 nodes to do something like Live migrate from Node A to Node B and Live Migrate from Node C to node D. So you need to work out what’s optimal for your cluster depending on load and number of nodes you have to work with.
  • Have the virtual machines redistributed so that the last host also gets its share of virtual machines
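Here is a minimal sketch of what that per-node loop looked like in our scripts, using the VMM 2008 R2 PowerShell snap-in. The host and server names are examples, the SCOM 2007 R2 maintenance mode steps are only indicated as comments (they run from the OpsMgr shell), and you should verify the exact parameter names with Get-Help in your own environment:

Add-PSSnapin Microsoft.SystemCenter.VirtualMachineManager
Get-VMMServer -ComputerName "vmmserver01.lab.test" | Out-Null

$nodes = "hypervhost01.lab.test", "hypervhost02.lab.test"

foreach ($node in $nodes) {
    $vmHost = Get-VMHost -ComputerName $node

    # 1. (SCOM) put the node in maintenance mode so we don't get pesky alerts.

    # 2. VMM maintenance mode: live migrate all HA VMs to the other nodes.
    Disable-VMHost -VMHost $vmHost -MoveWithinCluster

    # 3. Patch and reboot the node here (WSUS / your deployment tool of choice)
    #    and wait for it to come back up before continuing.

    # 4. Take the node out of VMM maintenance mode so it can run VMs again.
    Enable-VMHost -VMHost $vmHost

    # 5. (SCOM) end maintenance mode so the node is monitored again.
}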

Now with SCVMM2012 we can do this out of the box using WSUS and all of this is achieved without ever interrupting any services provided by the guests as all virtual machines are kept running and are live migrated away from the host that will be patched. If you’re a shop that isn’t running System Center Configuration Manager you can still do this thanks to the use of WSUS and that’s great news.  There is an entire sub-section on the subject of Managing Fabric Updates in VMM 2012 already available on TechNet. But it goes beyond the Hyper-V host. It’s also the SCVMM server, the library server, and the Update Server that get patched. But don’t go wild now, that’s the entire scope of this. That means you still need regular WSUS or SCCM for patching the virtual machine guests and other physical servers. The aim of this solution is to patch your virtualization solution’s infrastructure as a separate entity, not your entire environment.

So how do we get this up and running? Well, it isn’t hard. Depending on your needs and environment you can choose to run WSUS and SCVMM on the same server or not. If you choose the latter, please make sure you install the WSUS Administration Console on the SCVMM server. This is achieved by downloading WSUS 3.0 SP2 and installing it. Otherwise, just use the WSUS role from the roles available in Windows 2008 R2, which handles the prerequisites for you as well. It is also advisable to install the WSUS role on a separate server when your SCVMM 2012 infrastructure is a highly available clustered one. For more information see http://technet.microsoft.com/en-us/library/gg675099.aspx. Time-saving tip: create a separate domain account for the WSUS server integration; it cannot be the SCVMM 2012 service domain account.

Make sure you pay attention to the details in the documentation: don’t forget to install the WSUS 3.0 SP2 Administration Console on the SCVMM 2012 server or servers and to restart the SCVMM service when asked to. That will save you some trouble. Also, realize that this WSUS server will only be used for updating the SCVMM 2012 fabric and nothing else. So we do not configure anything except the operating system (W2K8R2) and the languages needed. All other options & products that are not related to virtualization are unchecked, as we don’t need them. Combine this with dynamic optimization to distribute the VMs for you and you’re golden. A good thing to note here is that you’re completely in control. You, as the virtualization infrastructure / SCVMM 2012 fabric administrator, control what happens regarding updates, service packs, …
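If you prefer the VMM command shell over the console, adding the WSUS server can also be done along these lines. Treat this as a sketch against the Beta: the server names, port and Run As account are examples, and the exact parameters should be checked with Get-Help Add-SCUpdateServer in your build:

Import-Module virtualmachinemanager
Get-SCVMMServer -ComputerName "vmmserver01.lab.test" | Out-Null

# Run As account created earlier for WSUS integration (not the VMM service account).
$wsusRunAs = Get-SCRunAsAccount -Name "WSUS Integration"

# Register the WSUS server with VMM (8530 is the default WSUS port in this example).
Add-SCUpdateServer -ComputerName "wsus01.lab.test" -TCPPort 8530 -Credential $wsusRunAs

# Kick off an initial synchronization so baselines can be built from the catalog.
$updateServer = Get-SCUpdateServer
Start-SCUpdateServerSynchronization -UpdateServer $updateServer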

You do need to get used to the GUI a bit when playing around with SCVMM2012 for the first time to make sure you’re in the right spot, but once you get the hang of it you’ll do fine. I’ll leave you with some screenshots of my lab cluster being scanned to check the compliance status and then being remediated. It works pretty neatly.

Here are the hosts being scanned.

You can right-click and select remediate per baseline, or select the host and select remediate from the context menu or the ribbon bar.

The crusader host is being remediated. I could see it being restarted in the lab.
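And for completeness, the same scan and remediate actions can be kicked off from the VMM 2012 command shell. Again a hedged sketch: the baseline and host names are from my lab, and the parameters should be verified against Get-Help in the Beta you’re running:

Import-Module virtualmachinemanager
Get-SCVMMServer -ComputerName "vmmserver01.lab.test" | Out-Null

$baseline = Get-SCBaseline -Name "Security Updates"
$node     = Get-SCVMMManagedComputer -ComputerName "crusader.lab.test"

# Scan the host against its assigned baselines to refresh the compliance status.
Start-SCComplianceScan -VMMManagedComputer $node

# Check where we stand...
Get-SCComplianceStatus -VMMManagedComputer $node

# ...and remediate; on a cluster node VMM orchestrates maintenance mode,
# live migration, patching and the reboot for you.
Start-SCUpdateRemediation -VMMManagedComputer $node -Baseline $baseline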