Kemp LoadMaster template import issue: Command serverinit needs 1 parameter(s)

Introduction

The error Kemp LoadMaster template import issue: Command serverinit needs 1 parameter(s) was encountered during a migration project I was involved in recently. The job was to migrate the virtual services of an aging Kemp LoadMaster HA solution (LM-2200, 32-bit) to newer and more capable versions (running 7.2.43, 64-bit). Even more important, to migrate to a version that is still in support. The LM-2200 series is, like many older models, End of Life (EOL). Kemp still delivers important fixes for critical security issues, like this year’s 7.1.35.5 and 7.1.35.6, but that is about all the long term support (LTS) amounts to. This is not bad, these models have been supported for a very long time, but the aging 32-bit hardware has reached the point where replacing it is the only right thing to do.

With Kemp we can select new hardware, virtual machines or appliances in the cloud. So on-premises, hybrid and public cloud needs can be taken care of with the same familiar load balancer / Application Delivery Controller (ADC).

One technique to migrate Kemp workloads is to export the virtual services to templates and use these to configure the new solution. It is during that process that we ran into an error: Kemp LoadMaster template import issue: Command serverinit needs 1 parameter(s).

Kemp LoadMaster template import issue: Command serverinit needs 1 parameter(s)

One of the applications is a mission critical one that has many different virtual services: FTP/FTPS/HTTP/HTTPS as well as a bunch of TCP/UDP connectivity over ports that are very application/industry specific. A dozen virtual services in total, all on the same IP address with different ports and configuration needs.

We exported all of them to templates, which we then used to recreate the virtual services on the replacement ADC. While doing so we used a different IP address for these newly created virtual services for testing purposes. This test IP address is changed to the original production IP when we are ready to make the move, after disabling the old virtual services. That also allows for a quick exit / rollback if needed.

The process of creating a new virtual service from a template still requires a little work, such as defining a gateway, SSL certificates, adding real servers, … but the bulk of the work is done for you.

But with certain templates we got an error: Syntax error detected in template file: Command serverinit needs 1 parameter(s)

[screenshot: the syntax error shown during template import]

When looking at the template in a text editor we see the following:

[screenshot: the exported template opened in a text editor]

Clearly “serverinit” does not have a parameter set. So it probably needs one, to specify which type of Server Initiating Protocol needs to be set for our generic service.

[screenshot: the Server Initiating Protocols setting on the virtual service]

Finding a solution

We need to find out what parameter value goes with our setting and add it to the template. That value is found easily enough, via educated trial and error. On a test virtual service on the current version of the ADC we set the Server Initiating Protocols value to the one we need (other server initiating) and then export this virtual service to a template. We look at the template and note the number. That way we can map which server initiating protocol corresponds to which value. In our case other server initiating is “3”.

[screenshot: the test template showing the serverinit value]

We edit our exported templates to include the correct parameter value we found this way and try the import again.

[screenshot: the edited template with the serverinit parameter added]
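If you have a whole batch of exported templates to fix, the edit can be scripted rather than done by hand. Below is a minimal Python sketch, not an official tool, and it assumes the exported template is plain text and that the offending line is literally just serverinit with no value; 3 is the value we mapped for other server initiating, so adapt it to whatever your own mapping exercise gives you and verify the result in a text editor before importing.

```python
import sys
from pathlib import Path

# Value we mapped for "other server initiating" by exporting a test
# virtual service and reading the resulting template (3 in our case).
SERVERINIT_VALUE = "3"

def patch_template(path: Path) -> bool:
    """Append the missing parameter to bare 'serverinit' lines."""
    lines = path.read_text().splitlines()
    changed = False
    for i, line in enumerate(lines):
        # Only touch lines that are exactly 'serverinit' with no parameter.
        if line.strip() == "serverinit":
            lines[i] = f"{line} {SERVERINIT_VALUE}"
            changed = True
    if changed:
        path.write_text("\n".join(lines) + "\n")
    return changed

if __name__ == "__main__":
    for arg in sys.argv[1:]:
        p = Path(arg)
        print(f"{p}: {'patched' if patch_template(p) else 'no change'}")
```

Run it against copies of the exported template files so you always have the originals to fall back on.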

That’s all we needed to do to get these exported templates to import with no further issues.

[screenshots: the templates importing successfully]

All that’s left to do is to finish configuring the virtual service and continue our migration.

Conclusion

Don’t panic. Many problems you encounter have a solution, workaround or fix. Maybe this will help someone out there. Take care and until next time.

In-place upgrade of RD Gateway farm nodes to Windows Server 2016 removes the Loopback adapter for UDP load balancing

Here’s a quick heads up to anyone who’s involved in upgrading existing Windows Server 2012 (R2) RD Gateway farms to Windows Server 2016.

In my recent experience the in-place upgrade (of VMs) works rather well. Just make sure the netlogon service is set to automatic (a known issue, a fix is coming) after you upgrade and install all updates. Also make sure that you don’t have this issue:

Windows Time Service settings are not preserved during an in-place upgrade to Windows Server 2016 or Windows 10 Version 1607

There is however one network-specific issue you’ll need to deal with when leveraging UDP with a load balancer via Direct Server Return.

When you have an RD Gateway farm you load balance it with a (preferably highly available) load balancer like a Kemp LoadMaster. I have described this in these blogs/videos: Load balancing Hyper-V Workloads With High To Continuous Availability With a KEMP Loadmaster and Quick Demo Video Of Site Failover With KEMP Loadmaster Global Balancing.

What you also do is load balance both HTTPS (TCP, port 443) and UDP (port 3391). For UDP we use Direct Server Return (DSR) as described in my blog post Load balancing UDP for a RD Gateway farm with a KEMP Loadmaster. This requires a properly configured loopback adapter.

[screenshot: the loopback adapter configuration]

During the in-place upgrade to Windows Server 2016 this loopback adapter is removed from the nodes. So you need to add it back just as described in my original blog post. Normally it will find the settings for it in the registry, but it’s best to check it all out, as I’ve found that the loopback adapter did have “Register this connection’s addresses in DNS” enabled, as well as NetBIOS over TCP/IP. So, per my blog post, check it all to make sure. Other than that, after installing all the Windows Server 2016 updates, everything works smoothly after an in-place upgrade.
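If you want to double check those settings on every node without clicking through the GUI, you can read them from WMI. Here is a rough Python sketch, purely illustrative, assuming the third-party wmi package is installed and that the loopback adapter’s connection name contains “Loopback” (adjust the filter to whatever you named the adapter):

```python
import wmi  # third-party WMI wrapper for Windows (pip install wmi)

c = wmi.WMI()

# Win32_NetworkAdapter carries the friendly connection name;
# Win32_NetworkAdapterConfiguration carries the DNS/NetBIOS settings.
for nic in c.Win32_NetworkAdapter():
    name = nic.NetConnectionID or ""
    if "loopback" not in name.lower():
        continue  # we only care about the DSR loopback adapter
    for cfg in c.Win32_NetworkAdapterConfiguration(Index=nic.Index):
        print(f"Adapter: {name}")
        # Should be False: the loopback adapter must not register in DNS.
        print(f"  Register this connection's addresses in DNS: {cfg.FullDNSRegistrationEnabled}")
        # TcpipNetbiosOptions: 0 = default, 1 = enabled, 2 = disabled (we want 2 here).
        print(f"  NetBIOS over TCP/IP option: {cfg.TcpipNetbiosOptions}")
```

The same information is of course visible in the adapter’s GUI properties; the script is just a quick way to sweep all the farm nodes.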

Hope this helps someone out there!

High Availability has a price

We’ll go back to basics today. Sometimes the obvious, no matter how evident it is to us technologists, is challenged. Recently we got the remark that we were wasting CPU cycles by assigning too many vCPUs to certain virtual machines on our Hyper-V cluster. So we had to explain that high availability has a price. On top of that we had to explain that things are not as wasteful as they seem in a virtual environment.

The case

Here’s one of the “offending” virtual machines. They assumed that we were wasting at least 50% of its 12 vCPUs.

[screenshot: node 1 virtual machine and its assigned vCPUs]

This is one node in a dual-node, load balancing (active-active) and highly available solution. This provides for zero downtime during scheduled maintenance and very little downtime during system failures.

And here’s the second node (yes, the first node has been down for scheduled maintenance more recently than node 2).

[screenshot: node 2 virtual machine and its assigned vCPUs]

In a 2-node HA solution you need to make sure that one node can handle the entire workload. This is the absolute borderline of an N+1 solution. It means you can lose 1 node. N is the number of nodes needed to guarantee an agreed-upon service level, and the number after the plus sign defines how many node failures can be tolerated before the service is affected.

In the above example each node needs the CPU resources to run the entire workload on its own without affecting the service. Therefore, when both nodes are up, this might seem like a waste to the uninitiated. It is, however, required to achieve the high availability goal. A constant CPU usage of over 75% will lead to a reduction in service quality in this case and even compromise the usability of that service.
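To put a number on that, here is a quick back-of-the-envelope calculation. It is only a sketch: the 75% figure is the quality threshold from this case, and it assumes the load redistributes evenly over the surviving nodes.

```python
def max_steady_state_per_node(nodes: int, failures: int, quality_threshold: float) -> float:
    """Highest average utilization each node can run at so that the
    surviving nodes stay under the quality threshold after losing
    'failures' nodes (assuming the load redistributes evenly)."""
    surviving = nodes - failures
    return quality_threshold * surviving / nodes

# Two nodes, tolerate one failure, keep the survivor under 75% CPU:
print(max_steady_state_per_node(2, 1, 0.75))   # 0.375 -> at most ~37.5% average per node
# Ten nodes, one failure: losing a node only costs 10% of the capacity:
print(max_steady_state_per_node(10, 1, 0.75))  # 0.675 -> at most ~67.5% average per node
```

With both nodes up, that 37.5% average can look like waste, but it is exactly the headroom the failover promise requires.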

I did not even dive into the dangers of designing purely based on averages during this “explanation”. That was one step too far for the level of the discussion.

It’s also important to note that Hyper-V CPU scheduling is highly intelligent and far less susceptible to wasting CPU cycles through over-provisioning of vCPUs than some other solutions are or used to be. Knowing the capabilities and inner workings of the technology used is also important in all this. More nodes generally also make “over-provisioning” less of an issue. When you have 10 nodes and you lose 1, you have only lost 10% of the capacity, not 33% as in a 3-node cluster.

Ideally you have 3 nodes, so that even during an issue with one node you still maintain redundancy. However, if you want acceptable service during a 2-node failure you’ll need to go to N+2, meaning that you need 2 nodes to provide the service and can handle losing 2 nodes gracefully. In that case you’ll need 4 nodes, and so on. The larger the node count, the wiser it is to go to an N+2 model, and ideally you’ll provide separate failure domains over which the nodes are distributed. An example of this is a redundant geo-load-balanced web farm of 32 virtual machine nodes spread over 2 locations and running on separate hardware failover clusters in each location. As you can see, the higher the stakes and demands, the faster the cost and potential complexity rise. You can offload some of the complexity by leveraging a public cloud like Azure, but the costs will still be there. There is no such thing as a free lunch, though some are quite easy and affordable for what you get.
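Running the same back-of-the-envelope calculation from above for the 4-node N+2 case: with 2 surviving nodes out of 4 and the same 75% quality threshold, each node can average at most 0.75 × 2 / 4 = 37.5%. That is the same headroom as the 2-node N+1 design, but now you can survive a double failure.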

Conclusion

High Availability has a price. I did mention that already, right? To be able to keep your services running at a level that is both workable and acceptable to your customers and stakeholders, you will need to over-provision to a degree. There is no magic here. When your solutions are being scrutinized by people with no real background, experience or context in high availability, you might need to explain this.