Unable to retrieve all data needed to run the wizard. Error details: “Cannot retrieve information from server ‘Node A’. Error occurred during enumeration of SMB shares: The WinRM protocol operation failed due to the following error: The WinRM client sent a request to an HTTP server and got a response saying the requested HTTP URL was not available. This is usually returned by a HTTP server that does not support the WS-Management protocol.”

I was recently configuring a Windows Server 2012 file server cluster to provide SMB Transparent Failover with continuously available file shares for end users. So we’re not talking about a Scale-Out File Server here.

All seemed to go pretty smoothly until we hit a problem. When the role is running on Node A and you are using the GUI on Node A, this is what you see:

image

When you try to add a share, you get this:

"Unable to retrieve all data needed to run the wizard. Error details: 'Cannot retrieve information from server "Node A". Error occurred during enumeration of SMB shares: The WinRM protocol operation failed due to the following error: The WinRM client sent a request to an HTTP server and got a response saying the requested HTTP URL was not available. This is usually returned by a HTTP server that does not support the WS-Management protocol.'"

image

When you fail over the file server role to the other node, things seem to work just fine. This is the case where you run the GUI on Node A while the file server role resides on Node B.

image

You can add a share; it all works. You notice the exact same behavior on the other node. So as long as the role is running on a node other than the one on which you use Failover Cluster Manager, you’re fine. Once you’re on the same node you run into this issue. So what’s going on?

So what to do? It’s related to WinRM, so let’s investigate that.

image

So the WinRM config comes via a GPO. The local GPO for this is not configured, so that’s not the one; it must come from the domain. The IP addresses listed are the node IP and the two cluster networks. What’s not there is localhost (127.0.0.1), the cluster IP address, or any of the IPv6 addresses.
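If you want to check this yourself, the standard WinRM and Group Policy tooling will show you what the policy actually produced on a node (the report path below is just an example):

```powershell
# Show the effective WinRM listener configuration on this node,
# including the ListeningOn addresses
winrm enumerate winrm/config/listener

# The same information via the WSMan provider
Get-ChildItem WSMan:\localhost\Listener

# A Resultant Set of Policy report tells you which GPO delivers
# the WinRM settings
gpresult /h C:\Temp\rsop.html
```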

I experimented with a lot of settings. First we ended up creating an OU inside the OU where the cluster nodes reside and blocked inheritance on it. We then ran gpupdate /target:computer /force on both nodes to make sure WinRM was no longer configured by the domain GPO. As the local GPO was not configured, the settings reverted to the defaults. The listener now shows up as listening on all IPv4 and IPv6 addresses. Nice, but the GPO was now effectively disabled.
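The refresh-and-verify step on each node looks roughly like this (run from an elevated prompt):

```powershell
# Refresh the computer policy after blocking inheritance on the OU
gpupdate /target:computer /force

# Verify the listener has reverted to the defaults: ListeningOn should
# now show all IPv4 and IPv6 addresses on the node
winrm enumerate winrm/config/listener
```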

image

This is interesting, but things still don’t work. For that we needed to disable and re-enable WinRM:

Configure-SMRemoting -disable
Configure-SMRemoting -enable

or via server manager

image

That fixed it, and the disable/enable cycle seems to be a necessity. Do note that to disable/enable remote management, it must not be configured via a GPO, or it throws an error like this:

image

or

image

Some more testing

We experimented by adding 127.0.0.0-127.0.0.1 and enabling the GPO again. We then saw that the listener did show the localhost, cluster & file role IP addresses, but the issue was back. Using * for just IPv4 did not do the trick either.

image

What did the trick was to use * in the filter for IPv6 and keep our original filters on IPv4. The good news is that, after removing the GPO and disabling/enabling WinRM, the cluster IP address & file server role IP address are now in the list. That could be good for other use cases.

This is not ideal, but it all works now.

What we settled for

So we ended up still restricting the GPO settings for IPv4 to subnet ranges while allowing * for IPv6. This made sure that even when we run the Failover Cluster Manager GUI from the node that owns the file server role, everything still works.
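On the nodes, the policy boils down to something like the values below. The subnet ranges are placeholders for our actual ones, and note that these values should be set in the GPO (Allow remote server management through WinRM), not written to the registry directly; reading the policy key is just a convenient way to verify what applied:

```powershell
# Verify the WinRM policy values that the domain GPO delivered
Get-ItemProperty HKLM:\SOFTWARE\Policies\Microsoft\Windows\WinRM\Service |
    Select-Object IPv4Filter, IPv6Filter

# IPv4Filter : 10.0.1.1-10.0.1.254, 10.0.2.1-10.0.2.254  (placeholder ranges)
# IPv6Filter : *
```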

One workaround is to work from a remote host, not from a cluster member, which is a good practice anyway.
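Another way to sidestep the wizard entirely is to create the share with PowerShell, which works from any node or from a remote host. A sketch, with placeholder names and paths; -ScopeName targets the clustered file server role rather than the local node:

```powershell
# Create a continuously available share scoped to the clustered
# file server role (name, path, role and group are placeholders)
New-SmbShare -Name "Data" -Path "D:\Shares\Data" `
    -ScopeName "FileServerRole" -ContinuouslyAvailable $true `
    -FullAccess "DOMAIN\FileUsers"
```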

The key takeaway is that when Microsoft says they test with IPv6 enabled they literally mean for everything.

Note

There is a TechNet article on WinRM GPO settings for SCVMM 2012 RC where they advise setting both IPv4 and IPv6 to * to avoid issues with SCVMM operations: How to Add Trusted Hyper-V Hosts and Host Clusters in VMM.

However, we found that IPv6 is the key requirement here; * for just IPv4 alone did not work.

Cluster Validation Failure while setting up a Windows 2012 Continuous Available File Share: The password does not meet the password policy requirements

We were installing a Windows Server 2012 cluster in a W2K8R2 domain, and while we were checking our work by running the cluster validation we got one warning we’d never seen before:

Validate CSV Settings

Description: Validate that settings and configuration required by Cluster Shared Volumes are present. This test can only be run with an administrative account, and it only tests servers that are cluster nodes.

Start: 9/24/2012 5:01:18 PM.

Validating Server Message Block (SMB) share access through the IP address of the fault tolerant network driver for failover clustering (NetFT), and connecting with the user account associated with validation.

Begin Cluster Shared Volumes support testing on node server1.test.lab.

Failure while setting up to run Cluster Shared Volumes support testing on node server1.test.lab: The password does not meet the password policy requirements. Check the minimum password length, password complexity and password history requirements.

Begin Cluster Shared Volumes support testing on node server2.test.lab.

Failure while setting up to run Cluster Shared Volumes support testing on node server2.test.lab: The password does not meet the password policy requirements. Check the minimum password length, password complexity and password history requirements.

This test requires more than one node. If your cluster contains more than one node, please run validation tests again with more than one node specified.

Now as it turns out, this Active Directory domain does enforce lengthy and complex passwords. By this they are basically driving the admins to use passphrases, which are a lot more secure. That also means that the account we are using to run the validation has adequate length & complexity.
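You can check what password policy a node effectively sees with standard tooling (the second command needs the Active Directory PowerShell module installed):

```powershell
# Show the effective password policy (minimum length, history, etc.)
# as this node sees it
net accounts

# Or query the domain's default password policy directly
Get-ADDefaultDomainPasswordPolicy
```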

So, what if we tone down the password length requirements and then run GPUPDATE from an elevated command prompt on all nodes of the cluster? Bingo! The cluster validation now passes with flying colors.
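The refresh-and-retest cycle can also be driven from PowerShell; the node names below match the ones in the validation report above:

```powershell
# Refresh computer policy (run on every node), then rerun validation
gpupdate /target:computer /force
Test-Cluster -Node server1.test.lab, server2.test.lab
```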

I’m guessing that perhaps the local account doesn’t have a strong enough password to meet the requirements. But this is just guessing. This is the account that is involved in reducing the cluster’s dependency on Active Directory, so that CSV, for example, can come online even if there is no domain controller to contact. Hence my guess that this is related. This did not happen in a lab environment, so I’m not going to change the password on all nodes to a more complex one. That is for a lab. :)

image