StarWind Software V2V Image Converter: a nice free tool for the toolbox

I get questions now and then about free tools for converting virtual machine files from one format to another. One of my old-time favorites (VMDK to VHD Converter) for converting virtual machine files from VMware's format to Microsoft's VHD is getting old and can give you performance issues and problems on x64 operating systems. It also doesn't support other file format conversions. If you need to try another tool, you can give StarWind Software's V2V Image Converter a try.

It works pretty well on all major Windows operating systems, up to and including Windows 7 / Windows Server 2008 R2 x64. It supports conversions between VMDK, VHD, and IMG files, and it doesn't alter the source files in any way. However, to prevent the software from crashing I need to run it as an administrator, which I consider a drawback. Make sure you have a good machine with a modern processor, a fast (extra) disk, and adequate free space to get optimal speed. Really, replace that P4, go on!

Another drawback is that you cannot choose the location where the converted file is saved, so you can't get optimal speed by using two physical disks, one for reading and one for writing. It also means that for large virtual disk files you'll need loads of free space. The download requires registration, which you might not like (you can grab it here: StarWind V2V Image Converter). On the whole it's a pretty fast tool that does the job well, and a nice instrument in my toolkit.
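Since you can't pick the destination yourself, it pays to verify up front that the drive the tool writes to has enough free room for the converted image, rather than finding out halfway through a long conversion. A minimal sketch of such a pre-flight check (the function name and the 10% safety margin are my own assumptions, not anything from the tool):

```python
import shutil

def has_room_for_conversion(source_size_bytes, target_dir, margin=1.1):
    """Return True if target_dir's drive has enough free space to hold
    the converted image, with a safety margin (10% by default, since a
    converted image can end up somewhat larger than its source)."""
    free = shutil.disk_usage(target_dir).free
    return free >= source_size_bytes * margin
```

Feed it the size of the source VMDK before you kick off the conversion; it only takes a second and saves you from a disk-full error an hour in.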

Hyper-V 3 & Windows 8, Musings on Hypervisors & Crystal Ball Time

I think Microsoft sales might be getting a headache from the ever-increasing speed with which people look at and long for features in the "vNext" versions of their products, while they are still getting customers to adopt the current releases. But I like the insights and bits of information; they help me plan better for the long term.

A lot of you, just like me, have been playing around with Hyper-V since the betas of Windows 2008. As I run Windows Server tweaked to act and look like a workstation, I wanted to move my desktop virtualization solution to Hyper-V as well. I use Windows Server as a desktop because it allows me to install the server roles and features for quick testing, taking screenshots, managing the lab, etc. while writing and documenting.

Now, a lot of you will have run into some performance issues on the host related to the video card, the GPU. Ben Armstrong mentioned it on his blog and wrote a Knowledge Base article on it (http://support.microsoft.com/kb/961661). He later provided more insight into the cause of this behavior in the following blog post: http://blogs.msdn.com/b/virtual_pc_guy/archive/2009/11/16/understanding-high-end-video-performance-issues-with-hyper-v.aspx. It's a good write-up explaining why things are the way they are and why this cannot be "fixed" easily.

For me this was a bummer, as I had a decent GPU in my workstation and I sometimes do need the advanced graphics capabilities of the card.

So when the first rumors about "Windows 8" and "Hyper-V version 3" hit the internet, I was very happy to see mention of Hyper-V being used in Windows 8 as a client hypervisor virtualization solution. See http://virtualization.info/en/news/2010/07/first-details-about-hyper-v-3-0-appear-online.html; this link was brought to my attention by Friea Berg from NetApp on Twitter (@friea). Now, there is more to it than just my tiny needs and wishes: there is the integration with App-V and other functionality that integrating Hyper-V into "MinWin" can offer. Have a look at the link and follow the source links if you can read French.

The thing is that Hyper-V in the client would mean they will have fixed this GPU performance issue by then. They have to; otherwise those plans can't work. As the code bases of the Windows client and server run in parallel, it should then also be fixed on the server side. We're used to richer functionality in desktop virtualization from VMware Workstation and Virtual PC. Fixing this also makes sense in another way: Microsoft could be moving toward one virtualization solution on both the server and the desktop, gradually phasing out Virtual PC. They can opt to provide richer functionality, with extra features that might be unnecessary or even undesirable on a server but are very handy on a workstation or a lab server. This is all pure speculation (crystal ball time) on my part, but I'm pretty convinced this is where things are heading.

Combine this with the fact that by the time "Windows 8" arrives, most hardware in use will be much more capable of providing advanced virtualization features and enhancements, and in all aspects things are looking bright. So now I can dream of affordable 32 GB laptops with dual 8-core CPUs and a wicked high-end GPU running Hyper-V.

By the way, VMware is also working on similar ideas to provide a true hypervisor on the desktop, I guess, as they seem to be abandoning VMware Server (no enhancements, no fixes, etc.), and I can also imagine them turning VMware Workstation into a true hypervisor to reduce product line development and support costs. Pure speculation, I know, especially given the confusing message around offline VDI, but never underestimate the ability of a company to change its mind when it's practical for them. 😉

Someone at Sun/Oracle must be smiling at all of this, especially as VirtualBox is getting richer and richer, with memory ballooning, hot-add CPU capability (I like this and want it in Hyper-V), etc., unless Microsoft and VMware totally succeed in making hosted virtualization a thing of the past. In the type 1 hypervisor space, Oracle is consolidating what it bought: Virtual Iron (Xen) was killed almost immediately, and the Sun xVM hypervisor is also dead. Both have been replaced by Oracle VM (Xen).

So, as everyone seems to have a good, ever-improving type 1 hypervisor, it might become less and less of a differentiator and more of a commodity, one day totally embedded in the hardware by Intel and AMD. The OS and software vendors would then provide the management, high-availability features, and integration with their products. And if that is how things evolve, where does that leave KVM (Linux) in the long run? Probably the world is big enough for both types of hypervisor; for the moment, both seem to be doing fine.

As I said, all of this is musings and crystal ball time. Dreaming is allowed on lazy, sunny Sunday afternoons.

Partially Native USB support coming to W2K8R2 with SP1!?

As you might recall from a previous blog post of mine (https://blog.workinghardinit.work/2010/03/29/perversions-of-it-license-dongles/), one of the show stoppers for virtualization can be USB dongles. Apart from my aversion to USB license dongles, which should never be mentioned in the same sentence as reliability and predictability, the push for VDI has now exposed another weakness: the need for end users to have USB access. Well, Microsoft seems to have heard us. Take a look at this blog post: http://blogs.technet.com/virtualization/archive/2010/04/25/Microsoft-RemoteFX_3A00_-Closing-the-User-Experience-Gap.aspx

What remains to be seen is whether this will work with license dongles. Anyway, for desktop virtualization a much-needed improvement is under way. I would like to thank Christophe Van Mollekot from Microsoft Belgium for bringing this to my attention. This, together with the VDI license improvements for SLA customers, gives desktop virtualization a much better chance of being adopted. Sometimes stuff like this really makes the difference. You can't explain to your end users that the great, super-modern virtualized environment doesn't support the ubiquitous USB drive. Trust me on that one.

Perversions of IT: License Dongles

The Sum of All Evil resulting in the Mother of All Fears

One of the most annoying things in the life of infrastructure support is the sometimes convoluted schemes software vendors come up with to protect their commercial interests. The "solution" we despise the most is the license dongle. It sounds easy and straightforward, but it never ever is. Murphy's Law!

Vendors think they are unique

What's one dongle? What are you complaining about? Huh… we have customers who have more than 10. Only one application is actually capable of finding multiple dongles and surviving the loss of one. Do we need multiple machines to be on the safe side? Yes. How many? You need at least two. Why? To allow "quick" redirection, if that is possible at all (using scripts, GPOs, etc.).

We have a USB hub attached to the rack so we can stick in many dongles; it saves on hardware. Now, once in a while the USB will get a hiccup, and unplugging the dongle and plugging it back in usually does the trick. As mentioned before, this is not very handy when that dongle is in a secured room or data center. To add insult to injury, network issues might stop applications that would otherwise be fine from working, just because they can't find a license server.
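When an application dies like that, a cheap first check is to probe the license server's TCP port before anyone starts pulling dongles. A minimal sketch (the function name is mine, and the host and port are hypothetical; substitute whatever your vendor's license service actually listens on):

```python
import socket

def license_server_reachable(host, port, timeout=3.0):
    """Return True if a TCP connection to the license service succeeds.
    This only proves network reachability, not that a license is free
    or that the dongle behind the server is still alive."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If the probe fails, you're chasing a network or server problem; if it succeeds and the application still can't get a license, it's time to go visit the dongle.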

Today they are almost always USB dongles, but the parallel port dongle was quite popular as well. If you're lucky, the vendor provides you with a USB dongle when your hardware no longer has a parallel port. But sometimes you are not that lucky: today most laptops, and indeed most PCs, don't come with a parallel port anymore. And no, to road warriors with laptops, a USB-to-parallel converter isn't really user friendly. Furthermore, dongles sticking out are accidents waiting to happen (broken, lost). Finally, some laptops, especially the smaller ones road warriors love, sometimes have only one USB port, and it's taken up by their 3G broadband stick. Heaven forbid that some users actually have two applications that require a dongle, plus an internet connection. These are only the silly, practical issues you have to deal with when license dongles come into the picture; it gets worse fast when things like uptime, redundancy, and high availability are added.

Some dongles are attached to a network license server; some are attached locally to the PC or server running the software. In all cases they need drivers/software to run. The server software is sometimes very basic, rather flimsy, and error-prone. Combine this with various vendors who all require license dongles with various brands/versions of software, and remember that USB ports themselves have been known to malfunction now and then. As you can imagine, you end up with a USB hub lit up like a Christmas tree and lots of finger-crossing.

Reliable, Predictable, Highly Available

Dongles and high availability are almost by definition mutually exclusive. If they are not, then it's a very expensive, "we'll work around it" type of solution trying to make dongles highly available, and it's only possible when the software that requires them supports redundant setups. I have only seen that possibility once. With some vendors you need to buy extra licenses to get an extra dongle… if the software package costs €50,000, that hurts. Some vendors will show leniency in such cases; some are plain arrogant and complacent. The fact that they are still in business proves that the world runs on bullshit, I guess. But even when you do have multiple dongles and multiple servers hosting them, most dongle-protected software is not written to deal with losing a dongle, so you can't get high availability, only some form of hot standby. Supporting that is also a lot of work, requiring cloning, copying, scripts, manual interventions, etc.

Some vendors are even so paranoid that they check five times a minute whether they can still find a license, and if they can't, the application fails. That means even rebooting a dongle host for patching or another intervention takes down the application. Zero tolerance for failure, dongle-wise, power-wise, human-error-wise… pretty unrealistic. And even if the dongle is attached to a redundant server in a secure data center, you'll see the USB port fail for some reason. The only reliable and predictable thing in this story is that you will fail.

Security

This is a good one. Do you really want to hurt some of the companies I work/consult for? Walk around their offices or data center, unplug any dongle you can find, and flush 'em down the toilet. That will take 'em down for a while. Yes, I know, they should have better physical security. Either we put the license dongle on a network server, which makes securing it a bit more realistic, or we lock all those PCs up in a secure room. That is not always feasible, due either to cost or to practicality. And by the way, that doesn't protect you from a pissed-off contractor or employee who has access. Even when security cameras can identify them fast, the damage is already done.

Dongles sticking out of a 1U server prevent the use of bezels that help lock down access to the server, and the USB ports in the back are used for KVM over IP or a keyboard and mouse.

In some models you can plug the dongle into a USB port inside the server chassis, but then the old trick of unplugging and reinserting the dongle when it goes haywire isn't that easy anymore. On top of that, the dongle sits in a data center somewhere, so getting to it might not be feasible, and you have to grant someone access to the server just to reach the dongle.

Dongles and Virtualization

When you need to virtualize server applications that need a locally attached dongle, you have to start looking for USB-over-Ethernet solutions that are reliable. When you find one, you need to manage it very carefully and well. You need to manage the versions of the server software and the client software: we've seen network connectivity loss when the versions don't match up, even when the software didn't complain about the different versions. You need to test its stability, and have extra hardware and extra dongles for testing, as not all dongles respond well to this type of setup. We can't afford to bring down production environments with USB-over-Ethernet software "upgrades of faith". The need for dongles adds an extra layer of complexity and management, one that is very error-prone and hard to make redundant, let alone highly available. It's not a pretty picture.
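Because "compatible" mismatches between the client and server pieces have cost us connectivity before, a strict exact-match gate before any rollout is a sensible policy. A trivial sketch of such a guard (function names are mine; the version strings are illustrative only):

```python
def parse_version(version):
    """Parse a dotted version string like '4.1.1' into a tuple of ints
    so '4.10' correctly compares as different from '4.1'."""
    return tuple(int(part) for part in version.strip().split("."))

def safe_to_deploy(server_version, client_version):
    """Only deploy when the USB-over-Ethernet client and server report
    exactly the same version; treat anything else as a no-go."""
    return parse_version(server_version) == parse_version(client_version)
```

Wire it into whatever deployment script you use, so a mismatched pair stops the rollout instead of silently dropping connections in production later.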

We used to buy Fabulatech for such implementations. Version 4.1.1 was rock solid, but versions 4.2, 4.3, and the 4.4 beta have brought us nothing but "Blue Screen of Death" hell. We now implement KernelPro (2.5.5), which seems to be functioning very well for the moment.

Dongles are a virtualization show stopper in some environments due to these issues and risks. Behold dongle David brings down virtualization Goliath.

The Bottom Line

The biggest perversion, in what is essentially a big mess, is that the only people affected by all this are your paying customers. Software vendors should take note: paying clients despise your convoluted, error-prone, accidents-waiting-to-happen dongle licensing schemes. You not only have no clue what it means to run reliable IT operations, you don't even care about your customers' needs. There is only one rule: software and hardware should work under all circumstances, without the need for dongles. That darn piece of 50-cent plastic and silicon could well bring an entire application down. Let us just hope it isn't the geo-routing software for 911 or 112 services.

There are two possibilities when you sell software. One is that your application is very popular and as such is being keygenned and cracked all over the place, and the only ones you're hurting are your paying customers. The other is that your software is so unique and expensive that it's only bought by specialized firms and entities that couldn't even operate it without being exposed as thieves. Stop fooling yourselves and stop making life hell for your customers. Protect your rights as well as you can, but not at the expense of paying customers. You might even sell more if you care about their needs. Go figure. Maybe I'm just too demanding? Nah!