Update that enables Windows 7 and Windows Server 2008 R2 KMS hosts to activate Windows 10

Today, a blog reader notified me that Microsoft released an update that extends KMS for Windows 7 Service Pack 1 (SP1) and Windows Server 2008 R2 SP1 to enable the activation of Windows 10-based clients. Find it here.


I blogged before about the update for Windows Server 2012 (R2) and Windows 8/8.1, but now these older operating systems can also serve as KMS hosts for Windows 10. All you need is the update that enables Windows 7 and Windows Server 2008 R2 KMS hosts to activate Windows 10, as discussed in KB3079821.

Until now the workaround was to leverage Active Directory-based activation for Windows 10 whilst you left your KMS server in place for the older clients. That works well and I have used it before. But with this update you can use your older KMS servers for all activations of Windows clients.
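For reference, here is a minimal sketch of the usual slmgr.vbs steps once the update is in place. The host key is a placeholder for your own KMS host key from the Volume Licensing Service Center, and kms01.contoso.com is a made-up host name; run everything from an elevated PowerShell prompt.

# On the KMS host, after installing KB3079821 (placeholder key, use your own).
cscript //nologo C:\Windows\System32\slmgr.vbs /ipk XXXXX-XXXXX-XXXXX-XXXXX-XXXXX
cscript //nologo C:\Windows\System32\slmgr.vbs /ato   # activate the KMS host itself
cscript //nologo C:\Windows\System32\slmgr.vbs /dlv   # verify license status & the current KMS count

# On a Windows 10 client that doesn't find the host via DNS auto-discovery:
cscript //nologo C:\Windows\System32\slmgr.vbs /skms kms01.contoso.com:1688
cscript //nologo C:\Windows\System32\slmgr.vbs /ato   # request activation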

Legacy Apps Preventing Your Move From Windows XP to Windows 8.1?

Are old applications holding you back from getting rid of Windows XP? It’s a reason we hear a lot, and these apps do exist. But often it’s because the effort to make them work isn’t considered worth the cost. Year after year. So some people today are stuck on a Windows Server 2000/2003 & XP infrastructure. How does that cost compare now to the cost of dealing with the application? Was it worth not moving the application & having an out-of-date infrastructure holding your ENTIRE company down?


While some things can’t be fixed, putting in some effort could have prevented you from being in this mess. Yes, it would have cost you a pretty penny, but nothing compared to where you are now with your infrastructure “challenges”.

Here’s a little example for you. Over a period of 13 years we’ve kept an old application (using a Borland database engine & ISAPI DLLs in IIS) moving. It ran on Windows Server 2000. It was P2V’d to VMware Server. Over the years the database moved from Informix to SQL Server 2000, 2005, 2008 and 2008 R2. We upgraded the VM to Windows Server 2003 (x86), moved to Hyper-V, upgraded to Windows 2008 (x86) & finally put it on W2K12R2 (x64). So what do you mean you can’t get rid of XP? We moved the client app for that VM to x64 with Vista in 2007. We were not going to let that app block our way to the future and Windows 7 (x64) and Windows 8 & 8.1 (x64). If in 2014 you cannot move, you need to reconsider your approach to IT, as you have totally painted the organization into a corner.

We did not have installers for anything. We extracted registry entries & bits from installed systems and built installers ourselves with the free NSIS installer. We used Windows SysInternals tools to figure out where the application wrote & read and what permissions were needed, and added those to the installer to make sure it did not need local admin rights. It gave the business over a decade to get a grip on application life cycle management & replace the app. They failed twice, and while that’s bad and we do not like it, it was not fatal, as the rest of the company was not left to suffer for it. Never, ever let your infrastructure get stuck in the past. But wait, you say, what you did is not supported. That’s right. That’s one app, that works, and it beats being left with an unsupportable infrastructure blocking progress.

You might need some help, and here’s a great place to start helping yourself: The App Compat Guy. Read and view (TechEd presentations) anything Chris Jackson is offering on this subject and you’ll be on your way. Need a helping hand? Here’s a good place to start if you’re in Belgium: Microsoft Extended Experts Team (MEET). Chances are some of them know someone who knows how to get it done, or are themselves the person to talk to.

A VDI Reality Check @ BriForum 2011 For Resource Hungry Desktops In A Demanding Environment

So what did we notice? VDI generates plenty of interest from various angles, that is for sure, both on the demand side and on the (re)seller & integrator side. Most storage vendors are bullish enough to claim that they can handle whatever IOPS are required to get the most bang for the buck, but only the smaller or newest players were present and engaged in interaction with the attendees. One thing is for sure: VDI has some serious potential, but it has to be prepared well and implemented thoroughly. Don’t do it over the weekend and see if it works out for all your users.

The amount of tools & tactics for VDI, on both the storage side and the configuration/management side, is both more complex and more diverse than with server virtualization. The possible variations on how to tackle a VDI project are almost automatically more numerous as well. This is due to the fact that desktops are often a lot more complex and heterogeneous in nature than server-side apps. On top of that, the IO on a desktop can be quite high. Some of it can be blamed on the client OS, but a lot of it has to do with the applications and utilities used on desktops. I think that developers had so many resources at their disposal that there wasn’t too much pressure on optimization there. The age of multi-core and x64 will help in thinking more about how an application uses CPU cycles, but virtualization might very well help in abstracting that away. When a PC has one vCPU and the host has 4*8 cores, how good is that hypervisor at using all that pCPU power to address the needs of that one vCPU? But I digress. All in all, it takes more effort and complexity to do VDI than server virtualization. So there is a higher cost, or at least the CAPEX isn’t such a convincing, clear-cut story as it is with server virtualization. If you’re not doing the latter today when and where you can, you are missing out on a number of major benefits that are just too good to ignore. I wouldn’t dare say that for VDI. Treating VDI just like server virtualization is said to be one of the main reasons for VDI failing, being put on hold, or being limited to a smaller segment of the desktop population.

My experience with server virtualization is also with rather heterogeneous environments, where we have VMs with anything between 1 and 4 virtual CPUs and 2 to 12 GB of RAM. And yet I have to admit it has been a great success. Nevertheless, I can’t say that helped me much in my confidence that a large part of our desktop environment can be virtualized successfully and cost-effectively, as I think our desktops are such vicious resource hogs they need another step forward in raw power and functionality versus cost. Let me briefly describe the environment. 85% of the workforce at my current gig has dual 24” wide screens, anything between 4 GB and 8 GB of RAM, quad-core CPUs and SCSI/SATA 10,000 RPM disks with anything between 250 GB and 1 TB of local storage, in combination with very decent GPUs. The employees run Visual Studio, SQL Server, multiple CAD & GIS packages, and various specialized image processing software that chews through image and other files that can be 2 GB or even larger. If they aren’t that large, then they are still very numerous. On top of that, 1 Gbps network to the desktop is the only thing we offer anymore. So this is not a common office-suite-plus-a-couple-of-LOB-applications order; this is a large and rich menu for a very hard to please audience. That means that if you ask them what they want, they only answer more, more, more… And I won’t even mention 3D screens & goggles.

Now I know that a certain amount of the time the machines are idle or doing a lot less, but in the end that’s just a very nice statistic. When a couple of dozen users start playing around with those tools and throw that data around, you still need them and their colleagues to be happy customers. Frankly, even with the physical hardware that they have now, that can be a challenge. And please don’t start about better, less resource-wasting applications and such. You can’t just f* the business and tell them to get or wait for better apps. That flies in the face of reality. You have to be able to deliver the power where and when needed with the software they use. You just can’t control the entire universe.

I heard about integrators achieving 40-60 VMs per host in a VDI project. Some customers can make do with Windows 7 and 1 GB of RAM. I’m not one of those. I think the guys & gals of the service desk would need armed escorts if we rolled that out to the employees they care for. One of the things I noticed is that a lot of people choose to implement storage just for VDI. I’m not surprised. But until now I’ve not needed to do that. Not even for databases and other resource hogs. Separate clusters, yes, as the pCPU/vCPU ratio and memory requirements differ a lot from the other servers. The fact that the separate cluster uses other HBAs and LUNs also helps.

Next to SANs, local storage for VDI is another option for both performance and cost. But for recovery, this isn’t quite that good a solution. The idea of having non-persistent disks (in a pool), or a combination of those with persistent disks, is not something I can see fly with our users. And frankly, a show of hands at BriForum seems to indicate that this isn’t very widespread. VDI takes really high-performance storage, isolated from your server virtualization, to make it a success. On top of that, you need control, rapid provisioning, user virtualization & workspace management in a layered/abstracted way. Lots of interest there, but again, yet more tools to get it done. Then there is also application virtualization, terminal-service-based solutions, etc. So we get a more involved, diverse, and expensive solution compared to server virtualization. Now to offset these costs we need to look at what we can gain. So where are the benefits to be found?

With non-persistent disks you have rapid provisioning of known-good machines in a pool, but your environment must accept this, and I don’t see this flying well in the face of the consumerization of ICT. De-duplication and thin provisioning help to get the storage needs under control, but the bigger and the more diverse the client-side storage needs are, the fewer gains can be found there. Better control, provisioning, resource sharing, manageability, disaster recovery: it is all possible, but it is all so very specific to the environment compared to server virtualization, and some solutions contradict gains that might have been secured with other approaches (disaster recovery and business continuity with SAN versus local storage). One of the most interesting possibilities for the environment I described was perhaps doing virtualization on the client. I look at it as booting from VHD in the Windows 7 era, but on steroids. If you can safeguard the images/disks on a SAN with de-duplication & thin provisioning, you can have high availability & business continuity, as losing the desktops is a matter of pushing the VM to other hardware, which, due to the abstraction by virtualization, shouldn’t be a problem. It also deals with the network issues of VDI, a hidden bottleneck, as most people focus on the storage. Truth be told, the bandwidth we consume is so big that VDI might bring its best improvements for us on that front.

Somewhat surprising was that Microsoft, whilst being really present at PubForum in Dublin, was nowhere to be seen at BriForum. Citrix was saving its best for its own conference (Synergy), I think. Too bad; I mean, when talking about VDI in 2011 we’re talking about Windows 7 for the absolute majority of implementations, and Citrix has a strong position in VDI, really giving VMware a run for their money. Why miss the opportunity? And yesterday at TechEd USA we heard the HSBC story of a 100,000 seat VDI solution on Hyper-V http://www.microsoft.com/Presspass/press/2011/may11/05-16TechEd11PR.mspx.

On a side note, I wish I would/could have gone to PubForum as well. Should have done that. Now, these musings are based upon what I see at my current place of endeavor. VDI has a time and place where it can provide operational and usage advantages significant enough to make the business case. Today, I’m not convinced this is the case for our needs at this moment in time. Looking at our refresh schedule, we’ll probably pass on a VDI solution for the coming one. But booting from VHD as a standard in the future… I’m going to look into that; it will be a step towards the future, I think.
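For those wondering what native boot from VHD involves, here is a minimal sketch of the bcdedit steps; the VHD path is a made-up example, and you substitute the GUID that bcdedit prints for the copied entry. Run it from an elevated PowerShell prompt (the quotes around the braces keep PowerShell from eating them).

# Copy the current boot entry and point it at a VHD (path is a placeholder).
bcdedit /copy '{current}' /d "Windows 7 - native VHD boot"

# bcdedit prints the new entry's GUID; substitute it for {guid} below.
bcdedit /set '{guid}' device 'vhd=[C:]\Images\Win7.vhd'
bcdedit /set '{guid}' osdevice 'vhd=[C:]\Images\Win7.vhd'
bcdedit /set '{guid}' detecthal on   # let Windows redetect the HAL on first boot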

To conclude: BriForum 2011 was a good experience, and its smaller scale makes for plenty of good opportunities for interaction and discussion. A very positive note is that most vendors & companies present were discussing real issues we all face. So it was more than just sales demos. Brian, nice job.

Kick Starting Your Windows 7 Deployments With Mastering Windows 7 Deployment

I have to hand it to Aidan Finn; he doesn’t stop at sharing information via his blogs or the community. He joined forces with Darril Gibson & Kenneth van Surksum and went the extra mile. They wrote a readable, useful book, Mastering Windows 7 Deployment, about a subject on which consolidated documentation is scarce, scattered around the internet, written so badly you still can’t figure it out, or so boring you just don’t read it. If I need to define the goal of this book: give people a good head start for Windows 7 deployments in a planned and organized fashion.

This is not a book for the absolute newbie who doesn’t know the difference between a local and a domain account. It isn’t targeted at the WDS/MDT experts who’ve solved, fixed and worked around any and all PXE boot errors, network errors, cryptic WDS or MDT deployment errors & configuration challenges known to mankind. In that case this stuff is known to you (or should be). The point is that those experts have already learnt a lot the hard way, and they put in a considerable effort to do so. But knowledge needs to be transferred and spread around, and to do that you need to cover the basics and work up from there, showing progress and results. Progress and results motivate people.

In that respect, this book gets you started on that path from chapter one, and by page 5 you’re already being guided into auditing & reporting via MAP to prepare a roll-out proposal. The effort put into discussing the Application Compatibility Toolkit (ACT) is important. I remember the work that we needed to do for Vista x64 and how that paid off when deploying Windows 7. What surprises me is that a lot of IT pros don’t even know about the ACT, file and registry virtualization, or shims. I recommend another blog on this subject, http://blogs.msdn.com/b/cjacks/ , by Chris Jackson, the “App Compat Guy” and a very good conference speaker on the subject. The scenarios with the User State Migration Tool will benefit system administrators who dread touching end users’ PCs and the precious data they might contain. If so, I hope you are backing up the data on those workstations; if not, that is really scary.

Perhaps some readers will already be using certain tools touched upon in the book, but not others. In that case this is a great way to start with them and see where they fit in and what they can do for you. We did Vista x64 deployments in 2008 with WDS and rolled out Windows 7 x64 in 2010 using WDS/MDT, and I still found this book interesting enough to buy some copies and add it to the toolkit of my team. What I’d like to add as a useful hint: look into disabling rearming by using <SkipRearm>1</SkipRearm> in the unattended XML file you can pass to sysprep, as in “/generalize /quiet /unattend:<file_name>.xml”, so you don’t run into a fatal error when you do it too many times on the same image (An error message occurs when you run "Sysprep /generalize" in Windows Vista or Windows 7: "A fatal error occurred while trying to Sysprep the machine").
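To make that concrete, here is a minimal sketch of the relevant unattend fragment and the sysprep call. The component name and the generalize pass are to the best of my recollection for Windows 7, so verify them against the Windows AIK documentation for your build.

<!-- unattend.xml fragment: skip the rearm when sysprep runs /generalize -->
<settings pass="generalize">
  <component name="Microsoft-Windows-Security-SPP"
             processorArchitecture="amd64"
             publicKeyToken="31bf3856ad364e35"
             language="neutral"
             versionScope="nonSxS">
    <SkipRearm>1</SkipRearm>
  </component>
</settings>

The sysprep call that consumes it then typically looks like this (path to the answer file is a placeholder):

C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /quiet /unattend:C:\unattend.xml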

The Microsoft Deployment Toolkit (MDT) sections point you directly to some gems we found very useful in our deployments. That you can pre-stage computers in the MDT database to help make the roll-outs as “light touch” as possible is cool, but that you can automate it with the MDT PowerShell module makes it really very valuable. See http://blogs.technet.com/b/mniehaus/archive/2009/05/15/manipulating-the-microsoft-deployment-toolkit-database-using-powershell.aspx for more details. Michael Niehaus is to MDT what Chris Jackson is to ACT. As the identifier we use the MAC address, as we get that on a label on the PC and we can easily get a list of those to mass-import, together with creating the computer objects in Active Directory. We also added driver profiles depending on the client make & model. When you combine this with boot from PXE provided by WDS to boot into an MDT WinPE, and remember WDS also gives you multicast, you have a real sweet solution going. This is the route we went last year and it has served us well (we came from a pure WDS solution, and RIS before that when we still did XP rollouts, but that was more than 4 years ago… time flies).
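As an illustration, here is a minimal sketch of that pre-staging with the MDTDB PowerShell module from Michael Niehaus’s post linked above. The SQL server, database and CSV file names are placeholders, and the exact parameters are as I recall them from that post, so double-check there.

# Minimal sketch, assuming Michael Niehaus's MDTDB.psm1 module (see link above).
Import-Module .\MDTDB.psm1

# Connect to the MDT database (placeholder SQL server & database names).
Connect-MDTDatabase -sqlServer "SQL01" -database "MDT"

# Pre-stage a single computer by MAC address with per-machine settings.
New-MDTComputer -macAddress "00:15:5D:01:0A:BC" -description "PC-0123" `
    -settings @{ OSInstall = 'YES'; OSDComputerName = 'PC-0123' }

# Mass-import from a CSV with MACAddress,Name columns (hypothetical file).
Import-Csv .\computers.csv | ForEach-Object {
    New-MDTComputer -macAddress $_.MACAddress -description $_.Name `
        -settings @{ OSInstall = 'YES'; OSDComputerName = $_.Name }
}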

The task sequencer is a gem that we indeed also use, to roll out certain default software like 7zip, a PDF reader, an ISO burner, anti-malware, etc. The fact that these are not in the image makes it very easy to deploy newer versions as they become available.

The chapter on KMS, VAMT and volume licensing will be of use to people coming from Windows 2003/XP who have never dealt with it.

This book will come into its own for any SME or enterprise departmental system administrator who needs to be launched swiftly and sent on his or her way to the target: smooth Windows 7 deployments. A lot of production system administrators are in the process of looking at Windows 7 and might have a lot of experience with Windows XP and Windows 2003, but not with Windows 2008 (R2) and Vista/Windows 7. If you’re in that bracket, you’re definitely going to get a kick start with this book, and it contains some neat tips and tricks to get over some initial gotchas. Don’t think that this is for big enterprises only. Apart from the System Center products, most tools are free downloads or part of the Windows Server license you already own.

As always, the only way to understand technologies is to work with them, use them. That’s the way to gain insight, experience, and context. So play with this stuff in a lab. Run into a bunch of issues and fix them. If you need to get up to speed with all this stuff, then you should dig into this book with a hands-on approach. The book will also help you make more sense of other information out there, and you’ll be able to put it into context better. As a bonus, I’m pretty sure that anything you learn from it will help you with deploying Windows vNext as well.