Microsoft Hyper-V Server 2008 R2 SP1 is available for download

Ever since Windows Server 2008 R2 SP1 became available, people have been waiting for Microsoft Hyper-V Server 2008 R2 to catch up. The wait is over: last week Microsoft made it available for download at http://technet.microsoft.com/en-us/evalcenter/dd776191.aspx. That's a nice package to have when it serves your needs, and there's little to argue about. Guidance on how to configure it and how to set up remote management has been out for a while and is quite complete, so that barrier shouldn't stop you from using it where appropriate. If you're starting out, head over to José Barreto's blog to get a head start, and here's some more information on the subject: http://technet.microsoft.com/en-us/library/cc794756(WS.10).aspx. Naturally there are also some tools around to help out if the Microsoft-provided tools are not to your liking: http://coreconfig.codeplex.com/. So there you go: a free and very capable hypervisor, available to the public, that gives you high availability, Live Migration, Dynamic Memory and RemoteFX. They even threw in their software iSCSI Target 3.3, so you can build a free iSCSI SAN supported by Microsoft. Life is good.
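If you'd rather skip the sconfig menus, the basic remote-management plumbing on the core installation can also be opened up from the command prompt. A minimal sketch, not a complete hardening guide; the firewall rule group names shown are the English-locale ones and may differ on localized builds:

```shell
:: Enable WinRM so the box accepts remote management (WinRM/PowerShell remoting)
winrm quickconfig -q

:: Open the firewall rule groups used by the remote administration MMC snap-ins
netsh advfirewall firewall set rule group="Remote Administration" new enable=yes
netsh advfirewall firewall set rule group="Remote Volume Management" new enable=yes
```

On a workgroup (non-domain) host you'll additionally need to sort out credentials and trusted hosts on the managing workstation, which is exactly where the guidance linked above comes in.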

Windows Server 2008 R2 SP1 – RemoteFX Hardware To Get The Needed GPU Performance

When the first information about RemoteFX in the Windows Server 2008 R2 SP1 beta became available, a lot of people working on VDI solutions found this pretty cool and good news. It's a much-needed addition in this arena. After that first happy reaction, the question soon arises of how the host will provide all the GPU power needed to serve a rich GUI experience to those virtual machines. In VDI solutions you're dealing with at least dozens, and often hundreds, of VMs. When you think about it, it's clear that the onboard GPU alone won't cut it. And how many high-performance GPUs can you put into a server? Not many, or even none, depending on the model. So where do the VDI hosts in a cluster get their GPU resources? Well, there are some servers that can hold a lot of GPUs, but in most cases you add GPU units to the rack and attach them to the supported server models. Such units exist for both rack servers and blade servers. Dell has some information on this up over here. The specs for the PowerEdge C410x, a 3U external PCIe expansion chassis by DELL, can be found by following this link: C410x. It works much like an external DAS disk bay: you can attach one or more 1U/2U servers to a chassis holding up to 16 GPUs. They also have solutions for blade servers. So that's what building a RemoteFX-enabled VDI farm will look like. Unlike some of the early pictures showing a huge server chassis with room to stuff in all those GPU cards, the reality will be the use of one or more external GPU chassis, depending on the requirements.