Transparent Failover & Node Fault Tolerance With SMB 2.2 Tested

Transparent Failover and node fault tolerance with SMB 2.2 in Windows 8 Server is something that caught my attention immediately. The entire effort in infrastructure has been to keep the plumbing as invisible & unnoticed as possible. In some areas we had great success, in others not so much. Planned & unplanned downtime of file servers has always been an issue, as there was always a shorter or longer outage, and any failover meant disconnecting & reconnecting, leading to all kinds of end-user problems and confusion. To them, the network is down. The same issues exist on the server side with apps that depend on file shares, or servers like SQL Server that write backups to a remote share or read data from one. It often takes some kind of human intervention to correct the situation. Not even third-party clustered file systems and active-active clustering software could achieve this; the SMB protocol prior to 2.2 simply did not allow for it.
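To make the pain concrete: before transparent failover, anything writing to a share had to carry its own retry plumbing, or wait for an admin to fix things. Here's a minimal sketch of that kind of workaround in Python; the paths, retry counts and delays are made up for illustration and this is not how any particular product does it.

import shutil
import time

# Hypothetical paths: a local backup file and a destination on a clustered share.
SOURCE = r"D:\Backups\sales_db.bak"
DESTINATION = r"\\FileServer\Backups\sales_db.bak"

def copy_with_retries(src, dst, attempts=5, delay=30):
    # Before SMB 2.2 a failover of the file server typically surfaced here as
    # an I/O error, and a loop like this (or a human) had to pick up the pieces.
    for attempt in range(1, attempts + 1):
        try:
            shutil.copyfile(src, dst)
            return True
        except OSError as error:
            print(f"Attempt {attempt} failed: {error}")
            time.sleep(delay)  # wait and hope the share is back after failover
    return False

if not copy_with_retries(SOURCE, DESTINATION):
    print("Backup copy failed; manual intervention needed.")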

So when one hears this is now possible, we want to test it! So we throw some virtual machines on the test cluster, build a file cluster with Windows 8 Server, and add a third server to act as a client with SMB 2.2. Open Failover Cluster Manager, right-click Roles and choose to configure a role.

image

You’ll see the familiar wizard; click Next.

image

And choose the File Server role.

image

Give the Client Access Point a name and add an IP address.

image

Add some storage

image

And voila … after the confirmation we’re asked to configure high availability

image

This opens the New Share Wizard

image

…this is all pretty straightforward, so I’ll leave out the screenshots except for the most important one, where we explicitly uncheck “Enable continuous availability” because we want to first run a test without it. :)

image

Continue through the wizard & voila, you have a clustered file server with a Client Access Point as a single namespace. Please note that you can connect to it using that single namespace. No need for \\ServerA\BizzShare & \\ServerB\BizzShare, or for getting fancy with redundant DFS namespaces and the like.

Remember, we still need to make this share continuously available, but first let’s do some file copies and fail over the node to see what this looks like without transparent failover. Select the role, right-click, choose “Move” and then “Select Node”.

image

Choose an available node and click OK

image

As you can see, this looks rather familiar.

image
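If you want something more repeatable than dragging files around in Explorer while you move the role, a little probe like the one below can run on the client during both tests, the one without continuous availability and the one with it later on. It’s a rough Python sketch; the UNC path is a made-up stand-in for your own Client Access Point share. Without continuous availability the writes error out during the move; with it you should only see a stall.

import time
from datetime import datetime

# Made-up UNC path pointing at the Client Access Point share created above.
TARGET = r"\\MyCAPName\BizzShare\failover-probe.txt"

while True:
    stamp = datetime.now().isoformat(timespec="seconds")
    started = time.monotonic()
    try:
        # Append a timestamp and flush so every iteration really touches the share.
        with open(TARGET, "a") as handle:
            handle.write(stamp + "\n")
            handle.flush()
        elapsed = time.monotonic() - started
        if elapsed > 1:
            print(f"{stamp}: write stalled for {elapsed:.1f}s (failover in progress?)")
    except OSError as error:
        print(f"{stamp}: write failed: {error}")
    time.sleep(1)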

Let’s make that share continuously available. Go to the share you want to configure and double-click it.

image

You’ll see a progress dialog whilst information is retrieved …

image

… and then the share properties are presented. Most of it is familiar stuff, but we need the bottom section, “Settings”. Select the check box to make the share continuously available.

image

Now let’s try that file copy again whilst failing over the file server role to another node.

image

So there is no loss of data, no need for the client to reconnect, and you don’t have to retry, but you do get a freeze that lasts about 20 seconds in my test lab. I hope this will still improve before RTM.

What we learned here is that we can have Transparent (File Share) Failover with SMB 2.2 in a virtualized environment, and we can give it a “Client Access Point” name like “MyOldFileServer” so that users are not confused and don’t need to learn another UNC path. There are many options for keeping old namespaces around for end-user ease of use, but this is an extra ace up our sleeve. For now, planned (patching, server maintenance) or unplanned (crash) downtime is a 20-second-freeze experience as the file share fails over. That freeze is probably due to active-passive clustering. For now, active-active is not recommended/supported for file sharing in an end-user scenario. I think they are “worried” about huge file shares with a zillion metadata updates to sync. But it is supported for apps like Hyper-V, SQL Server backups or other apps needing file data. I’m going to try that next, and also for user data. Things might change before RTM, and with multichannel, RDMA, 10Gbps and NIC teaming in the OS, perhaps that active-active scenario might become feasible for user file data? PLEASE? Otherwise here’s another request for “Windows 8 Server R2” ;)

The secret sauce is in:

  • SMB 2.2 on both client & server
  • Resume Key
  • SMB 2.2 Witness service, which is set to running when you make a share continuously available (see the sketch below for a quick way to check it).

image
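A quick way to confirm the Witness service is running on a node is to wrap the built-in sc tool in a few lines of Python. Note that “SmbWitness” is my assumption for the service’s short name in this Developer Preview build; check services.msc and adjust if yours differs.

import subprocess

# Query the Witness service state on this node. "SmbWitness" is an assumed
# service name for the Developer Preview; adjust it if services.msc shows
# something different on your build.
result = subprocess.run(
    ["sc", "query", "SmbWitness"],
    capture_output=True,
    text=True,
)
print(result.stdout or result.stderr)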

Go watch the sessions from the Build conference to hear more on all this. The work they’ve put into this, plus some of the complexities involved, is quite amazing. http://www.buildwindows.com/Sessions

Things to find out: how to rename a Client Access Point or how to delete it. Adding a new one is easy.

Warning: It’s September 23rd 2011 and the Developer Preview is a little rough around the edges, so don’t run this on anything you need to get your bills paid yet. ;)

Windows 8 Server Developer Preview: NIC Teaming In The Operating System Works Just Fine

A short blog to share some first experiences with Windows 8 Server functionality. I set up a couple of Hyper-V guests with Windows 8 to start playing with some of the functionality that is very promising. One of the first things I just had to try out was NIC teaming in the operating system. Well, the experience is still a little rough as the product is not yet finished, but setting up NIC teaming is rather easy and it works like a charm! If this is the experience of future tests, then bring it on!

image

Set it up with just the defaults and start unplugging cables or disconnecting virtual NICs from the virtual networks in Hyper-V. I guarantee some fun. No pings dropped, no file copies failed. :)
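If you want a timestamped record of that instead of staring at a ping window, a small loop like this does the job. The address is a made-up example and the return-code check is only a rough indicator of a dropped ping, so treat it as a sketch.

import subprocess
import time
from datetime import datetime

# Made-up address of a host reached through the teamed NICs.
TARGET = "192.168.1.50"

while True:
    # One echo request with a 1-second timeout; a non-zero return code is a miss.
    result = subprocess.run(
        ["ping", "-n", "1", "-w", "1000", TARGET],
        capture_output=True,
    )
    if result.returncode != 0:
        print(f"{datetime.now():%H:%M:%S} lost a ping to {TARGET}")
    time.sleep(1)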

I can also confirm that iSCSI works just like before; not much change there, a walk in the park. So I’m building a cluster with some virtual machines to play some more with the new functionality. I’ll report on that as I find the time between real work, work that pays the bills and some needed R&R once in a while.

Data Protection & Disaster Recovery in Windows 8 Server Hyper-V 3.0

The news coming in from the Build Windows conference is awesome. The speculation of the last months is being validated by what is being announced, and on top of that more goodness is thrown at us Hyper-V techies.

On the data protection and disaster recovery front some great new weapons are at our disposal. Let’s take a look at some of them.

Live Migration & Storage Live Migration

Among the goodies are the improvements in Live Migration and the introduction of Storage Live Migration. Hyper-V 3.0 now supports multiple concurrent Live Migrations, which, combined with adequate bandwidth, will provide for fast evacuation of problematic hosts. Storage Live Migration means you can move a VM (configuration, VHD & snapshots) to different storage while the guest remains online, so users are not hindered by this. I’m trying to find out if they will support multiple networks / NICs with this.

Now, to make this shine even more, MSFT has another ace up its sleeve: you can do Live Migration and Storage Live Migration without the requirement of shared storage on the back end. This combination is a big one. It means “shared nothing” high availability. Even now, when prices for entry-level shared storage have plummeted, we see SMBs being wary of SAN technology. It’s foreign to them, and the fact that they haven’t yet gained any confidence with the technology makes them hesitant. The real or perceived complexity might also hold them back. For that segment of the market it is now possible to have high availability anyway with the Live Migration / Storage Live Migration combo. Add to this that Hyper-V now supports running virtual machines on a file share, and you can see the possibilities for NAS appliances in this part of the market to achieve some very nice solutions.

Replication to complete the picture

To top this off you have replication built in, meaning we have the possibility to provide reasonably fast disaster recovery. It might not be real-time data center failover, but a lot of clients don’t need that. However, they do need easy recoverability, and here it is. To give you even more options, especially if you only have one location, you can replicate to the cloud.

So now I start dreaming. :) We have shared nothing Live & Storage Live Migration, we have replication. What could we achieve with this? Do synchronous replication locally over 10Gbps, for example, and use that to build something like continuous availability. There we go, we already have requirements for “Windows 8 Server R2”!

NIC Teaming in the OS

No more worries about third-party NIC teaming woes. It has arrived in the OS (finally!) and it will support load balancing & failover. I welcome this; again, it makes all of this a lot more feasible for the SMB shops.

IP Virtualization / Address Mobility

Another thing that will aid with any kind of off-site disaster recovery / high availability is IP address mobility. You have an IP address for the hosting of the VM and one for internal use by the VM. That means you can migrate to other environments (cloud, remote site) with other addresses, as the VM can change the hosted IP address while the internal IP address remains the same. Just imagine the flexibility this gives us during maintenance, recovery and troubleshooting of network infrastructure issues, all without impacting the users who depend on the VM to get their job done.

Conclusion

Everything we described is out of the box with Windows 8 Server Hyper-V. To a lot of businesses this can mean a huge improvement in their current availability and disaster recovery situation. More than ever, there is now no reason for any company to go down or even out of business due to catastrophic data loss, as all this technology is available on site, in hybrid scenarios and in the cloud with the providers.

Assigning Large Memory To Virtual Machine Fails: Event ID 3320 & 3050

We had a kind reminder recently that we shouldn’t forget to complete all steps in a Hyper-V cluster node upgrade process. The proof of a plan lies in the execution. :) We needed to configure a virtual machine with a whopping 50GB of memory for an experiment. No sweat, we have plenty of memory in those new cluster nodes. But when we tried to do so, it failed with a rather obscure error in System Center Virtual Machine Manager 2008 R2:

Error (12711)

VMM cannot complete the WMI operation on server hypervhost01.lab.test because of error: [MSCluster_Resource.Name="Virtual Machine MYSERVER"] The group or resource is not in the correct state to perform the requested operation.

(The group or resource is not in the correct state to perform the requested operation (0x139F))

Recommended Action

Resolve the issue and then try the operation again.

image

One option we considered was that SCVMM 2008 R2 didn’t want to assign that much memory because one of the old hosts was still a member of the cluster and “only” had 48GB of RAM. But nothing that advanced was going on here. Looking at the logs we found the culprit pretty fast: lack of disk space.

We saw the following errors in the Microsoft-Windows-Hyper-V-Worker-Admin event log:

Log Name:      Microsoft-Windows-Hyper-V-Worker-Admin
Source:        Microsoft-Windows-Hyper-V-Worker
Date:          17/08/2011 10:30:36
Event ID:      3050
Task Category: None
Level:         Error
Keywords:     
User:          NETWORK SERVICE
Computer:      hypervhost01.lab.test
Description:
‘MYSERVER’ could not initialize memory: There is not enough space on the disk. (0x80070070). (Virtual machine ID DEDEFFD1-7A32-4654-835D-ACE32EEB60EE)

Log Name:      Microsoft-Windows-Hyper-V-Worker-Admin
Source:        Microsoft-Windows-Hyper-V-Worker
Date:          17/08/2011 10:30:36
Event ID:      3320
Task Category: None
Level:         Error
Keywords:     
User:          NETWORK SERVICE
Computer:      hypervhost01.lab.test
Description:
‘MYSERVER’ failed to create memory contents file ‘C:\ClusterStorage\Volume1\MYSERVER\Virtual Machines\DEDEFFD1-7A32-4654-835D-ACE32EEB60EE\DEDEFFD1-7A32-4654-835D-ACE32EEB60EE.bin’ of size 50003 MB. (Virtual machine ID DEDEFFD1-7A32-4654-835D-ACE32EEB60EE)
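For what it’s worth, you can pull these entries from a node without clicking through Event Viewer by wrapping the built-in wevtutil tool; a quick sketch:

import subprocess

# Grab the 10 most recent 3050/3320 entries from the Hyper-V worker admin log.
query = "*[System[(EventID=3050 or EventID=3320)]]"
result = subprocess.run(
    [
        "wevtutil", "qe", "Microsoft-Windows-Hyper-V-Worker-Admin",
        f"/q:{query}", "/f:text", "/c:10", "/rd:true",
    ],
    capture_output=True,
    text=True,
)
print(result.stdout or result.stderr)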

Sure enough, a smaller amount of memory, 40GB, less than the remaining disk space on the CSV, did work. That made me remember we still needed to expand the LUNs on the SAN to provide the storage space for the large BIN files associated with these kinds of large memory configurations. Can you say "luxury problems"? The BIN file contains the memory of a virtual machine or snapshot that is in a saved state. Now, you need to know that the BIN file requires the same amount of disk space as the physical memory assigned to the virtual machine, which means it can require a lot of room. Under "normal" conditions these don’t get this big, and we provide a reasonable buffer of free space on the LUNs anyway for performance reasons, growth etc. But this was a bit more than that buffer could absorb.
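The arithmetic is simple enough to sanity check up front. A back-of-the-envelope sketch, with all numbers made up for illustration:

# A VM's BIN file is as large as its assigned memory, so a CSV needs room for
# the sum of all assigned memory plus whatever free buffer you keep for growth
# and performance. All figures below are made up.
vm_memory_gb = {"MYSERVER": 50, "SQL01": 16, "FILE01": 8}
free_space_on_csv_gb = 60
buffer_gb = 20  # headroom we want to keep free anyway

needed_gb = sum(vm_memory_gb.values()) + buffer_gb
if needed_gb > free_space_on_csv_gb:
    print(f"Need about {needed_gb} GB for BIN files plus buffer, "
          f"but only {free_space_on_csv_gb} GB is free: expand the LUN first.")
else:
    print("Enough room for the BIN files.")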

As the planning already stated that we needed to expand the LUNs a bit to be able to deal with these kinds of memory hogs, the storage to do so was available and the LUN wasn’t maxed out yet. If not, we would have been in a bit of a pickle.

So there you go, a real-life example of what Aidan Finn warns about when using dynamic memory. Also see KB 2504962, “Dynamic Memory allocation in a Virtual Machine does not change although there is available memory on the host”, which discusses the scenario where dynamic memory allocation seems not to work due to lack of disk space. Don’t forget about your disk space requirements for the BIN files when using virtual machines with this much memory assigned; they tend to consume considerable chunks of your storage space. And even if you don’t forget about it in your planning, please don’t forget to execute every step of the plan. ;)