You cannot connect multiple NICs to a single Hyper-V vSwitch without teaming on the host

Can you connect multiple NICs to a single Hyper-V vSwitch without teaming on the host?

Recently I got a question asking whether a Hyper-V virtual switch can be connected to multiple NICs without teaming. The answer is no: you cannot connect multiple NICs to a single Hyper-V vSwitch without teaming on the host.

This question makes sense, as many people are interested in the ease of use and the great results of SMB Multichannel when it comes to aggregation and redundancy. But the answer lies in the name “SMB”: it’s only available for SMB traffic. Believe it or not, there is still a massive amount of network traffic that is not SMB, and all that traffic has to pass through the Hyper-V vSwitch.

What can we do?

This means that any redundancy scenario that needs to support traffic other than SMB 3 will require a different solution than SMB Multichannel. Basically, this means using NIC teaming on the server. In the pre-Windows Server 2012 era that meant 3rd party products. Since Windows Server 2012 it means native LBFO (switch independent, static or LACP). In Windows Server 2016, Switch Embedded Teaming (SET) was added to your choice of options. SET only supports switch independent teaming (for now?).
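A minimal PowerShell sketch of both options, assuming placeholder adapter names like "NIC1" and "NIC2" and switch/team names of your own choosing:

```powershell
# Option 1: a native LBFO team, then bind the vSwitch to the team interface
New-NetLbfoTeam -Name "HostTeam" -TeamMembers "NIC1","NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
New-VMSwitch -Name "vSwitch-LBFO" -NetAdapterName "HostTeam" -AllowManagementOS $true

# Option 2: Switch Embedded Teaming (Windows Server 2016) - the vSwitch teams the NICs itself
New-VMSwitch -Name "vSwitch-SET" -NetAdapterName "NIC1","NIC2" -EnableEmbeddedTeaming $true
```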

If redundancy on the vSwitch is not an option, you can use multiple vSwitches connected to separate NICs and physical switches, with Windows native LBFO inside the guests. That works, but it’s a lot of extra work and overhead, so you only do this when it makes sense. One such example is SR-IOV, which isn’t exposed on top of an LBFO team.
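A rough sketch of that SR-IOV scenario, assuming hypothetical NIC, vSwitch and VM names ("NIC1", "Guest01", …); inside the guest you would then team the two vNICs with a native LBFO team:

```powershell
# Two SR-IOV capable vSwitches, each on its own physical NIC (and ideally its own physical switch)
New-VMSwitch -Name "vSwitch-SRIOV-1" -NetAdapterName "NIC1" -EnableIov $true
New-VMSwitch -Name "vSwitch-SRIOV-2" -NetAdapterName "NIC2" -EnableIov $true

# Give the guest a vNIC on each vSwitch and enable SR-IOV on them
Add-VMNetworkAdapter -VMName "Guest01" -SwitchName "vSwitch-SRIOV-1" -Name "vNIC1"
Add-VMNetworkAdapter -VMName "Guest01" -SwitchName "vSwitch-SRIOV-2" -Name "vNIC2"
Set-VMNetworkAdapter -VMName "Guest01" -Name "vNIC1" -IovWeight 50
Set-VMNetworkAdapter -VMName "Guest01" -Name "vNIC2" -IovWeight 50
```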

Unable to correctly configure Time Service on non PDC Domain Controller

Introduction

Around New Year, between the 31st of December 2016 and the 1st of January 2017, some ISPs had issues with their time service: it jumped 24 hours ahead. This caused all kinds of online service issues, ranging from non-working digital TV to problems with the time service within companies. That meant some intervention time and temporarily switching the external reliable NTP time server sources to another provider that didn’t show this behavior. Some services required a server reboot to sort things out, but things were operational again. However, it became clear we still had a lingering issue afterwards, as we were unable to correctly configure the Time Service on a non-PDC Domain Controller.
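For reference, switching the external time sources on the PDC emulator comes down to something like the sketch below; the pool addresses are just examples, use whichever trusted NTP servers you prefer:

```powershell
# On the PDC emulator: point to an explicit list of external NTP servers
w32tm /config /manualpeerlist:"0.pool.ntp.org 1.pool.ntp.org" /syncfromflags:manual /reliable:yes /update
Restart-Service w32time
w32tm /resync
```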

Unable to correctly configure Time Service on non PDC Domain Controller

A few days later we still had one domain, which happened to be 100% virtualized, with issues. As it turned out, the second domain controller, which did not hold the PDC role, wasn’t syncing with the PDC, no matter what we tried to get it to do so. If you want to find out how to do this properly for a virtualized environment, I refer you to the blog post by Ben Armstrong, Time Synchronization in Hyper-V, and the one by fellow MVP Kevin Green, Hyper V Time Synchronization on a Windows Based Network.

But no matter what I did, the DC kept getting the wrong date. I could configure it to refer to the PDC as much as I wanted; nothing helped. It also kept saying the source for the time was the local CMOS clock (w32tm /query /source). I kept getting an error we’re normally able to fix by configuring the time service correctly.
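For completeness, this is what that "normal" fix looks like on a non-PDC domain controller, a sketch of the standard w32tm commands rather than anything specific to this case:

```powershell
# On the non-PDC domain controller: sync from the domain hierarchy (the PDC emulator)
w32tm /config /syncfromflags:domhier /update
Restart-Service w32time
w32tm /resync /rediscover

# Verify: the source should be the PDC, not "Local CMOS Clock"
w32tm /query /source
w32tm /query /status
```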


Another troubleshooting path

The IT universe was not aligned to let me succeed. So that’s when you quit … for a coffee break. You relax a bit, look out of the window whilst sipping your coffee. After that you dive back in.

I dove into the registry settings for the Windows Time service in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time on a functional DC in my lab and on the problematic DC in the production domain. I started comparing the settings and it all seemed to be in order, except for one serious issue with the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\Security key on the problematic DC.

Trying to open that key greeted me with the following error:

Error Opening key

Security cannot be opened. An error is preventing this key from being opened. Details: The system cannot find the file specified.


That key was empty. Not good!


I exported the entire W32Time registry key and the Security key on the problematic DC as a backup, for good measure. I then grabbed an export of the Security key from the working DC (any functional domain-joined server will do) and imported it into the problematic DC. The next step was to restart the time service, but that wasn’t enough, or I was too impatient. So finally I restarted the DC and after 10 minutes I got the result I needed …
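The export/import itself is plain reg.exe work; a sketch with example file paths (C:\Temp\… is just a placeholder):

```powershell
# On the problematic DC: back up the whole W32Time key first
reg export "HKLM\SYSTEM\CurrentControlSet\Services\W32Time" C:\Temp\W32Time-backup.reg

# On a working, domain-joined machine: export just the Security subkey
reg export "HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Security" C:\Temp\W32Time-Security.reg

# Back on the problematic DC: import the good Security key and restart the time service
reg import C:\Temp\W32Time-Security.reg
Restart-Service w32time
w32tm /resync
```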


Problem solved!

Testing Compellent Replay Manager 7.8

Testing Compellent Replay Manager 7.8

So today I found the Replay Manager 7.8 bits to download.

I was awaiting this eagerly (see Off Host Backup Jobs with Veeam and Replay Manager 7.8). So naturally, I started my day by testing Compellent Replay Manager 7.8. I deployed it on a 2-node DELL PowerEdge cluster with FC access to a secondary DELL Compellent running SC 6.7.30 (you need to be on 6.7).


The first thing I noticed is the new icon.


That test cluster is running Windows Server 2016 Datacenter edition and is fully patched. The functionality is much the same as it was. There is one difference: if you manually launch the backup set of a local volume for a CSV, and that CSV is not owned by the node on which you launch it, the backup is blocked.


This was not the case before. With scheduled backup sets this is not an issue; they detect the owner of the CSV and use that.


Just remember that when running a backup manually you need to launch it from the CSV owner node in Replay Manager and all is fine.
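A quick way to check, or change, CSV ownership before you launch a manual backup set; the CSV and node names below are examples:

```powershell
# Check which node owns each CSV
Get-ClusterSharedVolume | Select-Object Name, OwnerNode

# Or move a CSV to the node you are working on
Move-ClusterSharedVolume -Name "Cluster Disk 1" -Node "Node01"
```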


Other than that, testing has been smooth, and naturally we’ll be leveraging RM 7.8 with transportable snapshots and Veeam B&R 9.5 as well.

Things to note

Replay Manager 7.8 is not backward compatible with 7.7.1 or lower so you have to have the same version on your Replay Manager management server as on the hosts you want to protect. You also have to be running SC 6.7 or higher.

Wish list

I’d love to see Replay Manager become more intelligent and handle VM mobility better. The fact that VMs are tied to the node on which the backup set is created is really not compatible with the mobility of VMs (maintenance, dynamic optimization, CSV balancing, …). A little time and effort here would go a long way.

Second, Live Volumes have gotten a lot better, but we still need to choose between Replay Manager snapshots and Live Volumes. In an ideal world that would not be the case and Replay Manager would have the ability to handle this dynamically. A big ask perhaps, but it would be swell.

I just keep giving this feedback as I’m convinced this is a great SAN for Hyper-V environments and they could beat anyone by making a few more improvements.

DELL EMC Ready Nodes and Storage Spaces Direct

Introduction

Unless you have been living under a rock, you must have heard about Storage Spaces Direct (S2D) in Windows Server 2016, which went RTM in Q4 2016.

There is a lot of enthusiasm for S2D, and we have seen, heard of, and assisted in early adopter situations. But that’s a bit of pioneering with OEM/MSFT approved components. So now, bring in the DELL EMC Ready Nodes and Storage Spaces Direct.

DELL EMC Storage Spaces Direct Ready Nodes

So enter the DELL EMC Ready Nodes. These will be available this summer and should help less adventurous but interested customers get on the S2D bandwagon. They were announced at DELL EMC World 2017, and on May 30th some information was published on the TechCenter.

It offers a fully OEM supported S2D solution to customers that cannot or will not carry the engineering effort associated with a self-built solution.

I was sort of hoping these would leverage the PowerEdge R740XD from the start, but they seem to have opted to begin with the DELL R730XD. I’m pretty sure the R740XD will follow soon, as it’s a perfect fit for the use case with its 25Gbps support. In that respect I expect a refresh of the switches offered as well, as the S4048 is a great switch but keeps us at 10Gbps. If I were calling the shots, I’d have that ready and done sooner rather than later, as the 25/50/100Gbps network era is upon us. There’s a reason I’ve been asking for 25Gbps capable switches with smaller port counts for SME.

Maybe this is an indication of where they think this offering will sell best. But I’d be considering future deployments when evaluating network gear purchases; that gear has a long service life. And when S2D proves itself, I’m sure the size of the deployments will grow, and with it the need for more bandwidth. Mind you, 10Gbps isn’t bad; even so, for Hyper-V nodes I would be doing 2 x dual port Mellanox ConnectX-3 Pro cards.

Having mentioned them, I am very happy to see the Mellanox RoCE cards in there. That’s the best choice they could have made. The 1Gbps onboard NICs are Intel, which matches my preference. The game is afoot!