Cloud & Datacenter Conference Germany 2018

The Cloud & Datacenter Conference Germany 2018 is a shining beacon of light in a sea of marketing-driven IT events. It is organized by Carsten Rachfahl via his company Rachfahl IT Solutions. Carsten is a Microsoft MVP and Regional Director whose commitment to excellence has shown for many years in his community engagements. I think his integrity and style are a driving force behind this conference's ability to attract the quality of attendees, speakers and sponsors it does.


Cloud & Datacenter welcomes top expert speakers from the community and the industry. They deliver high-quality sessions and share their combined experience and knowledge with attendees who are truly interested in working with those technologies. That combination delivers high-value interactions and knowledge sharing. The sessions, in combination with the interaction between everyone there, work very well due to the size of the conference. It's big enough to offer the breadth of topics needed in today's IT landscape, while it is small enough to allow people to dive in deeper and discuss architecture, design, implementation and visions.

Some extra information

The Cloud & Datacenter Conference Germany 2018 is being held on May 15-16 2018 in the Congress Park Hanau, Schlossplatz 1, 63450 Hanau, Germany. That's close to Frankfurt and as such has good travel connections. Topics of interest will be Azure, Azure Stack, Hybrid Cloud, Private Cloud, Software Defined Datacenter, System Center & O365. The conference is a real-life technology event, so no one is pretending that the esoteric future is already here. We are working on that future by building it in our daily jobs and helping organizations move forward in an efficient and effective manner.

This is a great conference by technologists, for technologists. The opportunities to learn, network and exchange information are great. The speakers are approachable and all of them are there both to share and to learn themselves. From my past experience the organization was outstanding and the feedback from attendees was excellent.

I'll be speaking on RDMA to give a roadmap of this ever more important technology. On top of that I'll be around to discuss high availability, clustering and data protection, both on premises and in (hybrid) cloud scenarios.

Do yourself a favor and register for the Cloud & Datacenter Conference Germany 2018.

All I can say is that you should really consider attending. It's most definitely worthwhile. The quality of the attendees, the speakers and the absolute top-notch organization of the conference have been proven in previous years. The Cloud & Datacenter Conference is a testimony to the professionalism, integrity and quality my fellow MVP and friend Carsten Rachfahl delivers with his company Rachfahl IT Solutions on a daily basis to his customers. So, help yourself out in your career and register right here. I hope to see you there.

Note: The CDC is a German-language conference, but as some speakers are from around the globe you'll have to listen to some of them speaking in English. If you've ever heard my German, I'm sure you'll prefer me speaking English anyway.

Veeam Vanguard nominations are now open for 2018!

Just a quick blog post on the Veeam Vanguard program. The nominations for 2018 are open! That means that if you know people who would make a Veeam Vanguard you can nominate them. You can even nominate yourself, that’s perfectly fine. It’s not frowned upon, but it also doesn’t change anything in terms of evaluation for the program.


Rick blogged about this yesterday on the Veeam blog in "Veeam Vanguard nominations are now open for 2018!" and gave some more insight into what the program is, what it tries to achieve and what it does. He also discusses the selection process. The key take-away is that you cannot study for this and that it is not some kind of certification. Some of the current Vanguards were quoted on how they look at the program and one thing is constant: the people in these programs are contributors to the global tech community, and it's about sharing and helping others get the best out of their environment and their investment in Veeam. It also helps Veeam, as they get a very communicative group of people to give them feedback on their offerings, both products and services. It's just one more tool that helps them get things right or fix things when they get them wrong. Likewise, understanding Veeam and their products better helps us make better decisions on the design, implementation and operation of them.

You can have a look at the current lineup of Veeam Vanguards over here.


You'll find a short video on the program on that page as well. So go meet the Vanguards, find their blogs and communities, and follow @VeeamVanguard and the hashtag #VeeamVanguard to see what's going on.


So, people, this is the moment if you want to nominate someone, or yourself, to join the Veeam Vanguards in 2018. You have until December 29th 2017 to do so. I have always felt honored to be selected, I have fond memories of the events I was able to attend, and to this day I'm happy to be active in the Veeam Vanguard ecosystem. It's a fine group of professionals in a program of a great company.

Windows Server 2016 RDMA and the Hyper-V vSwitch – Part I

Introduction

With Windows Server 2012 R2, using both RDMA and the Hyper-V vSwitch on the same host required separate physical network adapters (pNICs). There are 2 reasons for this.

  • First, a vSwitch is generally created on top of a native Windows NIC team, and such a NIC team does not expose RDMA capabilities.
  • Second, in Windows Server 2012 R2 you cannot expose RDMA capabilities via a vSwitch, even when you are using a non-teamed, RDMA-capable NIC.

As a result, the need for RDMA required more NICs on the Hyper-V hosts, and/or a fully converged setup had some serious drawbacks. As servers have been quite capable and our VMs serve ever more intensive workloads, this was not dramatic. Leveraging 2*10Gbps for a vSwitch and 2*10Gbps for redundant RDMA / SMB Direct traffic has long been one of my favorite designs. It leaves room for other traffic, such as backups, and it allows for high VM density. But with 40Gbps NICs that is overkill and a tad expensive in many scenarios, even when connecting to a SOFS share for Hyper-V storage, so 4*40Gbps on a Hyper-V host is not something I ever saw in real life.

Windows Server 2016 can expose RDMA capabilities via a vSwitch even without SET

What many people seem to have missed is that reason 2 is gone in Windows Server 2016 Hyper-V. Reason 1 still holds true, but that has been solved by Switch Embedded Teaming (SET). This means that you actually do not need SET to leverage RDMA with a vSwitch in Windows Server 2016 Hyper-V. You can do this as follows:

#Create a vSwitch on the RDMA capable pNIC
New-VMSwitch -Name RDMACapable-vSwitch -NetAdapterName "NODE-A-S4P1-SW12P05-SMB1"

#Now add a host vNIC for the SMB Direct traffic
Add-VMNetworkAdapter -SwitchName RDMACapable-vSwitch -Name SMB1 -ManagementOS

#Enable RDMA on it
Enable-NetAdapterRdma -Name "vEthernet (SMB1)"

#Grab that vNIC on the management OS and set the VLAN - PFC requires tagged VLANs
$NicSMB1 = Get-VMNetworkAdapter -Name SMB1 -ManagementOS
Set-VMNetworkAdapterVlan -VMNetworkAdapter $NicSMB1 -Access -VlanId 110


Below is what this looks like. We have one vNIC on the management OS leveraging RDMA/SMB Direct, consuming all 10Gbps of the NIC we connected to the vSwitch. This is a nice lab demo, but you can see this isn't perhaps the best idea in real life.

[Screenshot: the management OS vNIC pushing RDMA/SMB Direct traffic at the full 10Gbps of the underlying pNIC]
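
If you want to verify that SMB Direct is actually being used over that vNIC rather than plain SMB Multichannel, a quick check from the management OS could look like the sketch below. It assumes the vNIC name from the example above; adapt the names to your environment.

#Check that the vNIC is seen as RDMA capable by the SMB client
Get-SmbClientNetworkInterface | Where-Object {$_.FriendlyName -eq "vEthernet (SMB1)"} | Format-Table FriendlyName, RdmaCapable, LinkSpeed

#While a file copy or Live Migration over SMB is running, look at the
#"Client RDMA Capable" column of the active SMB connections
Get-SmbMultichannelConnection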

Other things to note

Do realize this still requires the pNIC to be RDMA capable. This is not some sort of soft RoCE or other software RDMA magic as of today. The pNIC also has to have RDMA enabled, or the virtual NIC won't be able to leverage RDMA and will fall back to SMB Multichannel only instead of SMB Direct. Likewise, RDMA has to be enabled on the vNIC as well. So don't forget: RDMA must be enabled on both the pNIC and the vNIC for this to work.
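
A quick sketch of that check, using the pNIC and vNIC names from the example above (adapt them to your own adapters):

#RDMA must be enabled on the physical NIC backing the vSwitch...
Get-NetAdapterRdma -Name "NODE-A-S4P1-SW12P05-SMB1" | Format-Table Name, Enabled

#...and on the host vNIC as well
Get-NetAdapterRdma -Name "vEthernet (SMB1)" | Format-Table Name, Enabled

#Enable it where it is missing
Enable-NetAdapterRdma -Name "NODE-A-S4P1-SW12P05-SMB1"
Enable-NetAdapterRdma -Name "vEthernet (SMB1)"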

DCB's PFC/ETS requires a tagged VLAN to carry the priority, so don't forget to tag the vNIC. There is actually no need to tag the pNIC as long as the switch port has the tagged VLAN set – most likely as a trunk or in general mode. If you don't tag consistently across the entire network stack, you'll have network issues anyway and RDMA performance will be bad, if it works at all.
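
For completeness, a minimal sketch of the tagging check and a basic DCB/PFC configuration on the host. The priority value (3) and the ETS bandwidth percentage are assumptions you'd align with your switch configuration; this is not a full DCB walkthrough.

#Verify the host vNIC carries the tagged VLAN
Get-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName SMB1

#Tag SMB Direct (port 445) traffic with priority 3 and enable PFC only for that priority
New-NetQosPolicy -Name "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3
Enable-NetQosFlowControl -Priority 3
Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7

#Reserve bandwidth for SMB Direct via ETS and apply DCB to the pNIC
New-NetQosTrafficClass -Name "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS
Enable-NetAdapterQos -Name "NODE-A-S4P1-SW12P05-SMB1"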

Finally, don't forget that this example is not using VMM / Network Controller and as such uses Set-VMNetworkAdapterVlan and not Set-VMNetworkAdapterIsolation.
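
For reference, in a Network Controller / VMM managed environment the equivalent would look something like the line below; treat it as an illustrative sketch, not a full SDN configuration.

#SDN/VMM managed hosts use isolation settings instead of the classic VLAN cmdlet
Set-VMNetworkAdapterIsolation -ManagementOS -VMNetworkAdapterName SMB1 -IsolationMode Vlan -DefaultIsolationID 110 -AllowUntaggedTraffic $false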

In real life, we need more than a single-NIC vSwitch

The caveat here is that, while you have a converged setup, you have no redundancy for the vSwitch (there is no team). This also means that you're limited to a single NIC in regard to throughput for that vSwitch. Depending on the needs of the solution, that might be perfectly fine. If it's not – and in most real-world scenarios you'll need redundancy – you have to use SET in a converged scenario. That's what we'll take a look at in part 2. Then there is the question of QoS, as you don't want SMB Direct traffic to consume too much bandwidth at will. That's yet another issue to discuss and address.
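
As a teaser for part 2, a converged, redundant setup would be built on SET instead of a single NIC. A minimal sketch, with made-up adapter names, could look like this:

#Create a SET enabled vSwitch over two RDMA capable pNICs (names are examples)
New-VMSwitch -Name SETSwitch -NetAdapterName "NODE-A-S4P1","NODE-A-S4P2" -EnableEmbeddedTeaming $true -AllowManagementOS $false

#Add two host vNICs for SMB Direct and enable RDMA on them
Add-VMNetworkAdapter -SwitchName SETSwitch -Name SMB1 -ManagementOS
Add-VMNetworkAdapter -SwitchName SETSwitch -Name SMB2 -ManagementOS
Enable-NetAdapterRdma -Name "vEthernet (SMB1)","vEthernet (SMB2)"

#Pin each SMB vNIC to a team member for predictable traffic flows
Set-VMNetworkAdapterTeamMapping -ManagementOS -VMNetworkAdapterName SMB1 -PhysicalNetAdapterName "NODE-A-S4P1"
Set-VMNetworkAdapterTeamMapping -ManagementOS -VMNetworkAdapterName SMB2 -PhysicalNetAdapterName "NODE-A-S4P2"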

The Hyper-V Processor Relative Weight

Introduction

Hyper-V offers 3 ways of managing or tweaking the CPU scheduler to provide the best possible configuration for certain scenarios and use cases. The defaults normally work fine, but under certain conditions you might want to tweak them for the best possible outcome. The CPU resource controls at your disposal are:

  • Virtual machine reserve – Think of this as the minimum CPU “QoS”
  • Virtual machine limit – Think of this as the maximum CPU “QoS”
  • Relative weight – Think of this as the scale defining which VM is more important.

Note that you should understand what these settings are and what they can do. Treat them like spices: select the ones you need and don't overdo it. They're there to help you, and if needed you can leverage all three. But it's highly unlikely you'll need to do so. Using one or two will serve you best if and when you need them.
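
To illustrate where these knobs live, here's a minimal sketch using a hypothetical VM name. Reserve and limit are percentages of the assigned virtual processors, the relative weight is the number discussed below.

#Set all three CPU resource controls on a VM (example values, hypothetical VM name)
Set-VMProcessor -VMName "DemoVM" -Reserve 10 -Maximum 75 -RelativeWeight 200

#Check the current settings
Get-VMProcessor -VMName "DemoVM" | Format-Table VMName, Count, Reserve, Maximum, RelativeWeight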

In this blog post we’ll look at the relative weight.

Relative weight

Relative weight is a relative number between 1 and 10000 that you can assign to a virtual machine. It determines the relative importance of a virtual machine's CPU resources in regard to other virtual machines. So it's not a percentage or a number of cycles, it's just an arbitrary weight. By default this is set to 100.


You need to come up with a scale and stick to it. 100, 200 and 300 for low, medium and high importance virtual machines is a good example. You could also create 10 “classes”: 1000, 1500, 2000, 2500, 3000, 3500, 4000, 4500, 5000, 5500. This leaves room to create even more (lower, in between and higher).
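
Assigning such a scale is a one-liner per virtual machine. A sketch with hypothetical VM names, using the 100/200/300 scale:

#Low, medium and high importance VMs (hypothetical names)
Set-VMProcessor -VMName "TestVM01" -RelativeWeight 100
Set-VMProcessor -VMName "AppVM01" -RelativeWeight 200
Set-VMProcessor -VMName "SQLVM01" -RelativeWeight 300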

Note that as long as there are sufficient CPU resources on a host, the relative weight does not come into play. It really doesn't matter whether a virtual machine has a relative weight of 1000 or 5000 at that time. They both get whatever they need as there's plenty to go around.

Relative weight kicks in when the demand is higher than what's available on a physical host. When you have left all the virtual machines at the default of 100, they will all get an equal share. But when you've set virtual machines with a higher relative weight, those will get a bigger share of the available CPU cycles. For example, under contention a virtual machine with a weight of 300 will be given roughly three times the CPU time of one with a weight of 100.

Use Cases

Not all virtual machines are created equal. In reality some workloads are more important than others. This might be development and test versus production, or high-priority workloads versus lower-priority workloads. The lower-priority workloads are the ones that you care about less when there is contention for CPU cycles, or workloads where fewer CPU cycles and slower response times don't make a real difference.

Another use case might be your developer or lab host, where you give a CPU-sensitive workload a much higher weight and leave the others at the default of 100.

To make sure that the high-priority workloads, or those that really depend on CPU cycles being delivered fast, don't have to play second fiddle to those that don't have such needs, we use relative weight. It's very flexible and only kicks in when needed, so there is no waste or inefficiency there.

Limitations

The biggest limitation is in the name: it's all relative. Whereas reserve and limit give you a minimum and a maximum respectively, the relative weight only defines which virtual machine is more important than another in regard to CPU cycles. So some virtual machines get more than others, but that might not be enough. It's all about the balance between virtual machines, not guaranteed minima or maxima.

You need to agree on a standard within the company to define weight. If everyone starts using a different scale you’re in trouble.

Let's take one admin who uses 100 for less important virtual machines, 200 for standard virtual machines and 300 for the most important ones. That's all great when he's the only one defining the settings and when he does so consistently on all nodes/clusters for all VMs. In that case all is well, even when VMs move around between hosts or between clusters. But what happens when many admins use different "scales"? Well, it's a mess and the behavior won't be what you want when your colleague used 1000, 2000 and 3000 respectively for the same definitions. It's also smart not to use 100, 101 and 102. Leave some margin for adding a category when needed.
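
A quick way to spot inconsistent scales is to report the weights across your hosts. A sketch, where the host names are placeholders for your own Hyper-V nodes:

#Report relative weights per VM across a set of hosts to spot inconsistent scales
$HyperVHosts = "NODE-A","NODE-B" #placeholder names
Get-VM -ComputerName $HyperVHosts | Get-VMProcessor | Sort-Object RelativeWeight | Format-Table VMName, ComputerName, RelativeWeight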

Conclusion

This is one handy tool to have at your disposal and I tend to use it to proactively set a higher weight for very important VMs. Even in an environment where there are no predefined categories or known minima, this allows me to tell Hyper-V that, if there ever is contention for CPU cycles, the virtual machines with a higher weight are the ones to get a bigger share of the limited resources.