Perversions of IT: License Dongles

The Sum of All Evil resulting in the Mother of All Fears

One of the most annoying aspects of infrastructure support is the sometimes convoluted schemes software vendors come up with to protect their commercial interests. The “solution” we despise the most is the license dongle. It sounds easy and straightforward, but it never ever is. Murphy’s Law!

Vendors think they are unique

What’s one dongle? What are you complaining about? Huh … we have customers who have more than 10. Only one application we know of is actually capable of finding multiple dongles and surviving the loss of one. Do we need multiple machines to be on the safe side? Yes. How many? At least two. Why? To allow “quick” redirection, if that is possible at all (using scripts, GPOs, etc.).

We have a USB hub attached to the rack so we can plug in many dongles. It saves on hardware. Once in a while the USB gets a hiccup, and unplugging the dongle and plugging it back in usually does the trick. As mentioned before, this is not very handy when that dongle is in a secured room or data center. To add insult to injury, network issues might stop applications that would otherwise be fine from working, just because they can’t find a license server.

Today they are almost always USB dongles, but the parallel port dongle was quite popular as well. If you’re lucky, the vendor provides you with a USB dongle when your hardware no longer has a parallel port. Sometimes you are not that lucky. Most laptops and indeed most PCs don’t come with a parallel port anymore, and no, for road warriors with laptops a USB to parallel converter isn’t exactly user friendly. Furthermore, a dongle sticking out is an accident waiting to happen (broken, lost), and some laptops, especially the small ones road warriors love, only have one USB port, which is taken up by their 3G broadband stick. Heaven forbid some users actually have two applications that require a dongle and an internet connection. These are only the silly, practical issues you have to deal with when license dongles come into the picture; it gets worse fast when things like uptime, redundancy and high availability are added.

Some dongles are attached to a network license server; some are attached locally to the PC or server running the software. In all cases they need drivers and software to run. The server software is sometimes very basic, rather flimsy and error prone. Combine this with various vendors who all require license dongles with various brands and versions of software. USB ports themselves have been known to malfunction now and then. As you can imagine, you end up with a USB hub lit up like an X-Mas tree and lots of finger crossing.

Reliable, Predictable, Highly Available

Dongles and high availability are almost by definition mutually exclusive. If they are not, then it’s a very expensive “we’ll work around it” type of solution trying to make a dongle highly available, and that is only possible when the software that requires it supports redundant setups. I have only seen this once. With some vendors you need to buy extra licenses to get extra dongles … if the software package costs 50.000 € that hurts. Some vendors will show leniency in such cases; some are plain arrogant and complacent. The fact that they are still in business proves that the world runs on bullshit, I guess. But even when you do have multiple dongles and multiple servers hosting them, most dongle-protected software is not written to deal with losing a dongle, so you can’t get high availability, only some form of hot standby. Supporting that is also a lot of work that requires cloning, copying, scripts, manual interventions, etc.

Some vendors are even so paranoid they check five times a minute whether they can still find a license, and if not, the application fails. That means that even rebooting a dongle host for patching or another intervention takes down the application. Zero tolerance for failure … dongle wise, power wise, human error wise … pretty unrealistic. And even if the dongle is attached to a redundant server in a secure data center, you’ll see the USB port fail for some reason. The only reliable and predictable thing in this story is the fact that you will fail.
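A saner client would tolerate a short outage instead of failing on the first missed check. As a thought experiment (none of the vendors above actually do this; the callback and the grace period are entirely hypothetical), a grace window could be sketched like this in Python:

```python
import time

def license_ok_with_grace(check, grace_seconds=300, clock=time.monotonic):
    """Yield True while the license is reachable, or while the outage is
    shorter than grace_seconds; yield False only once the grace window
    has expired. 'check' stands in for the vendor's license lookup."""
    last_ok = clock()
    while True:
        if check():
            last_ok = clock()
            yield True
        elif clock() - last_ok <= grace_seconds:
            # Dongle host rebooting for patches? Keep running for now.
            yield True
        else:
            # Only after a sustained outage does the application give up.
            yield False
```

With a five-minute grace window, rebooting a dongle host for patching would no longer take the application down with it.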

Security

This is a good one. Do you really want to hurt some of the companies I work or consult for? Walk around their offices or data center, unplug any dongle you can find and flush ‘m down the toilet. That will take ‘m down for a while. Yes, I know, they should have better physical security. Either we put the license dongle on a network server, which makes it a bit more realistic, or we lock up all those PCs in a secure room. That is not always feasible, due to cost or simple practicality. And by the way, that doesn’t protect you from a pissed off contractor or employee who has access. Even when security cameras can identify them fast, the damage is already done.

Dongles sticking out of a 1U server prevent the use of bezels that help lock down access to the server, and the USB ports in the back are already used for KVM over IP or a keyboard and mouse.

In some models you can plug the dongle into a USB port inside the server chassis, but then the old trick of unplugging and reinserting the dongle when it goes haywire isn’t that easy anymore. On top of that, the dongle sits in a data center somewhere, so getting to it might not be feasible, and you need to grant someone access to the server just to reach the dongle.

Dongles and Virtualization

When you need to virtualize server applications that need a locally attached dongle, you have to start looking for USB over Ethernet solutions that are reliable. When you find one, you need to manage it very carefully. You need to track the versions of the server software and the client software: we’ve seen network connectivity loss when the versions don’t match up, even though the software didn’t complain about the difference. You need to test its stability and have extra hardware and extra dongles for testing, as not all dongles respond well to this type of setup. We can’t afford to bring down production environments with USB over Ethernet software “upgrades of faith”. The need for dongles adds an extra layer of complexity and management, one that is very error prone and hard to make redundant, let alone highly available. It’s not a pretty picture.
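Since we only ever saw trouble when the client and server versions of the USB over Ethernet software drifted apart, a pre-upgrade sanity check is cheap insurance. A minimal sketch, assuming a simple major.minor matching policy (the policy and the version strings are illustrative, not any vendor’s documented rule):

```python
def versions_match(server_version: str, client_version: str) -> bool:
    """Compare only major.minor: in this hypothetical policy, patch-level
    differences are tolerated, but a major/minor mismatch is treated as
    a risk of connectivity loss and blocks the upgrade."""
    def major_minor(version):
        parts = version.split(".")
        return tuple(int(p) for p in parts[:2])
    return major_minor(server_version) == major_minor(client_version)
```

Run a check like this against every dongle host before rolling out a new client version, rather than finding out about a mismatch in production.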

We used to buy Fabulatech for such implementations. Version 4.1.1 was rock solid but ever since version 4.2/4.3 & 4.4 Beta they have brought us nothing but “Blue Screen of Death” hell. We now implement KernelPro (2.5.5) which seems to be functioning very well for the moment.

Dongles are a virtualization show stopper in some environments due to these issues and risks. Behold: dongle David brings down virtualization Goliath.

The Bottom Line

The biggest perversion, in what is essentially a big mess, is the fact that the only people affected by all this are your paying customers. Software vendors, take note: paying clients despise your convoluted, error prone, “accidents waiting to happen” dongle licensing schemes. Not only do you have no clue what it means to run reliable IT operations, you don’t even care about your customers’ needs. There is only one rule: software and hardware should work under all circumstances, without the need for dongles. That darn piece of 50 cent plastic and silicon could well bring an entire application down. Let us just hope it isn’t the geo routing software for 911 or 112 services.

There are two possibilities when you sell software. One is that your application is very popular and as such is being “keygenned” and cracked all over the place, and the only ones you’re hurting are your paying customers. The other is that your software is so unique and expensive it’s only bought by specialized firms and entities that couldn’t even operate it without being exposed as thieves. Stop fooling yourselves and stop making life hell for your customers. Protect your rights as well as you can, but not at the expense of paying customers. You might even sell more if you care about their needs. Go figure. Maybe I’m just too demanding? Nah!

Dynamic Memory Allocation for Hyper-V in Windows Server 2008 R2 SP1

Great news, and it’s finally coming to our production environments (it was the buzz @ Tech Ed 2008 in Barcelona for the next version of Hyper-V, together with Live Migration): dynamic memory allocation comes to Hyper-V in Windows Server 2008 R2 SP1. This is a great and most welcome addition! We can adjust memory allocations on the fly, without downtime, from a memory pool on the host; memory virtualization if you will. Grab the announcement here: http://blogs.technet.com/windowsserver/archive/2010/03/18/announcing-windows-server-2008-r2-and-windows-7-service-pack-1.aspx

DHCP Behavioral Change in 2008 R2

Well, today we got bitten by an unexpected functional change in Windows 2008 R2 DHCP. A colleague of mine needed to replace a defective printer that had a reservation in DHCP so it would always get the same reserved IP configuration. This can be handy for some “light security” reasons, or for when a repairman resets the configuration to its default settings.

So he set out to replace the original reservation by deleting the existing one and creating a new reservation with the same IP address but with the MAC address of the NIC in the new printer. At least, that was the intention. He swiftly received an error: “The specified DHCP Client is not a reserved client.”

So what does that mean? Well, it turns out that something that was quite possible up until Windows 2008 is no longer allowed in Windows 2008 R2. Consider the following:

For example, if you have a 10.100.0.0/16 subnet and at most about 600 client devices on your LAN, you could make your DHCP servers fault tolerant by setting them up as follows:

DHCP Server 1 with a Scope lease range of 10.100.20.0 – 10.100.23.255

DHCP Server 2 with a Scope lease range of 10.100.24.0 – 10.100.26.255

You can lose a DHCP server that way and the remaining one still has more than enough IP addresses in its range to hand out to all potential client devices. This is about the easiest and most simplistic way of providing DHCP redundancy. You might not like it (it doesn’t really follow the 80/20 rule for a split scope according to the book), but it is very widely used, simple, and it works in environments where IP addresses are plentiful.
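Using the example ranges above, a quick sanity check with Python’s standard ipaddress module shows why either server alone can cover the roughly 600 clients:

```python
from ipaddress import ip_address

def range_size(first, last):
    """Inclusive count of addresses between two IPs, e.g. a DHCP lease range."""
    return int(ip_address(last)) - int(ip_address(first)) + 1

server1 = range_size("10.100.20.0", "10.100.23.255")  # 4 x /24 = 1024 addresses
server2 = range_size("10.100.24.0", "10.100.26.255")  # 3 x /24 = 768 addresses
```

Both pools exceed the ~600 devices on the LAN, so losing either DHCP server leaves the survivor with room for every client.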

Now, before Windows 2008 R2 you could add reservations for IP addresses that are in the subnet of the scope but outside the lease range. For example, if you agreed internally to put all printers on 10.100.50.x, you could add reservations for them even if they fall outside the lease ranges of DHCP Server 1 and 2, because they were in the same subnet as the scope. That no longer works and gives you the above error.
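The distinction Windows 2008 R2 now enforces is easy to express with the same ipaddress module: a printer address in the agreed 10.100.50.x range is inside the scope’s subnet, but not inside either lease range:

```python
from ipaddress import ip_address, ip_network

subnet = ip_network("10.100.0.0/16")
lease_ranges = [("10.100.20.0", "10.100.23.255"),
                ("10.100.24.0", "10.100.26.255")]
printer = ip_address("10.100.50.10")  # example printer address

in_subnet = printer in subnet
in_lease_range = any(ip_address(lo) <= printer <= ip_address(hi)
                     for lo, hi in lease_ranges)
# Pre-R2, the reservation only had to be in the subnet (in_subnet is True);
# R2 also demands it sit inside a lease range (in_lease_range is False here),
# hence the "not a reserved client" error.
```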

Why did we never notice this before? The existing reservations just keep working; you just can’t create new ones. And yes, this was the first time we needed to create one after the upgrade to Windows 2008 R2.

The Solution (no impact on IP addressing schemes already in use)

My colleague (great guy, keen eye for troubleshooting) was well on his way to the solution, and in the end we implemented the following: we set up a split scope with a lease range that includes the IP addresses used by the printers, and then added exclusions for the ranges that are used for clients on the other DHCP server. That is closer to the split-scope DHCP concept by the book, just without the 80/20 rule.
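The workaround can be sketched with the same example addresses: widen the lease range so it reaches the printer range, then exclude the block used by the other DHCP server (the exact boundaries below are illustrative):

```python
from ipaddress import ip_address

def usable(first, last, exclusions):
    """The dynamic pool: the inclusive lease range minus every excluded
    inclusive range, as a set of integer-encoded addresses."""
    lo, hi = int(ip_address(first)), int(ip_address(last))
    ex = [(int(ip_address(a)), int(ip_address(b))) for a, b in exclusions]
    return {n for n in range(lo, hi + 1)
            if not any(a <= n <= b for a, b in ex)}

# Server 1's widened scope now reaches the printer range (10.100.50.x),
# so reservations there are accepted again, while the exclusion keeps it
# from dynamically handing out server 2's addresses.
pool = usable("10.100.20.0", "10.100.50.255",
              [("10.100.24.0", "10.100.49.255")])
```

The printer addresses are now inside the lease range (so reservations can be created), yet the exclusion keeps the two servers from handing out overlapping dynamic leases.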

When you do this, turn off the printer or other device and delete the lease it might already have for its MAC address. You can then work without any further issues caused by the IP or hardware address already having a lease. Oh yeah, when you delete a lease in DHCP, refresh the MMC tree manually or you might not see the result of your deletion 🙂

Et voila. We’re done and back in business. This helpdesk call on replacing a printer turned out to be a rather expensive one 🙂

Conclusion

DHCP got some really neat extras in Windows 2008 R2 (DHCP allow & deny filtering with scripting support for automation, the split scope wizard, …), but this little change is going to bite a lot of people when they migrate or upgrade from previous versions, as split DHCP scopes are a de facto standard in a lot of DHCP implementations. They provide simple and easy DHCP service redundancy, and they used to let you define reserved IP addresses for special uses (printers, scanners, wireless access points, …) in a range that was not handed out to other clients. Now you have to work around it (or be a bit more by the book), but as you have seen, you can still get it to function. So beware of this when you move to DHCP on Windows 2008 R2 and implement a solution accordingly.