Workshop Datacenter Modernization - Microsoft Technical Summit 2014 Germany (Berlin)

While speaking ("What's New in Failover Clustering in Windows Server 2012 R2") at and attending the Microsoft Technical Summit 2014, I'm taking the opportunity to see how Microsoft Germany and partners run a workshop based on the IT Camps they have been delivering over the past year. There is a lot of content to cover, and both trainers, Carsten Rachfahl (Rachfahl IT-Solutions GmbH) and Bernhard Frank (Partner Technology Strategist (Hosting), Microsoft), are doing that magnificently.

One thing I noticed is that they sure do put in a lot of effort. The workshop I'm attending requires some server infrastructure, a couple of switches, cabling for over 50 laptops, etc. All of this has been neatly packed into road cases, and the 50+ laptops were placed, cabled and deployed using PXE boot/WDS the night before. Yes, even in the era of the cloud you need hardware, especially if you're doing an IT Camp on "Datacenter Modernization" (think private & hybrid infrastructure design and deployment).


Not bypassing this aspect of private cloud building adds value to the workshop and is made possible with the help of Wortmann AG. Yes, the attendees get to deploy Storage Spaces, Scale-Out File Server, networking, etc. They don't abstract away any of the underlying technologies; I like that a lot, as it adds value and realism.

I'm happy to see that they leverage the real-world experience of experts (fellow Hyper-V MVP Carsten Rachfahl), who help hosting companies and enterprises deploy these technologies. Storage, Scale-Out File Server, Hyper-V clusters, System Center and self-service (Azure Pack) are the technologies used to achieve the goals of the workshop.


The smart use of PowerShell (workflows, PDT) allows them to automate the process and frees up time to discuss and explain the technologies and design decisions. They take great care to explain the steps and tools used so the attendees can use these later in their own environments. Talking about their own experiences and mistakes helps the attendees avoid common mishaps and move along faster.
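To give an idea of what that kind of automation looks like, here is a minimal, hypothetical sketch of a PowerShell workflow (the style the trainers used) that deploys roles to a batch of lab hosts in parallel instead of one by one. The workflow and host names are mine, not theirs:

```powershell
# Hypothetical sketch: deploy roles to many lab hosts in parallel
# using a PowerShell workflow (requires PowerShell 3.0+ and remoting).
workflow Deploy-LabHosts {
    param([string[]]$ComputerName)

    # "foreach -parallel" fans the work out to all nodes at once.
    foreach -parallel ($Node in $ComputerName) {
        InlineScript {
            # Install the Hyper-V and file server roles on each node.
            Add-WindowsFeature -Name Hyper-V, FS-FileServer -IncludeManagementTools
        } -PSComputerName $Node
    }
}

Deploy-LabHosts -ComputerName 'HOST01', 'HOST02', 'HOST03'
```

With 50+ machines in the room, the difference between serial and parallel deployment is what frees up the time for discussion.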


The fact that they have added workshops like this to the summit adds value. I think it's a great idea that they are held on the last day, as this means attendees can put the information they gathered from two days of sessions into practice. This helps them understand the technologies better.

There is very little criticism to be given on the content and the way they deliver it; it's all very well done. Perhaps they make private cloud look a bit too easy ;-). Bernhard, Carsten, well done guys, I'm impressed. If you're based in Germany and you or your team members need to get up to speed on how these technologies can be leveraged to modernize your data center, I can highly recommend these guys and their workshops/IT Camps.

Private Clouds, Hybrid Clouds & Public Clouds

Now that I'm thinking about and working more and more toward a concept that might be described as a hybrid cloud, using tools and technologies that should facilitate it (Azure, Virtual Machine Manager 2012, System Center vNext, Hyper-V vNext), I get to thinking about what cloud means, is, and could or might become. In the end it is nothing more or less than utility computing based on standard components to deliver commodity services. What you do with those, and how, determines the success of your endeavors. After all, not all devices run by electricity were brilliant and successful. Why is this important to note? Well, people tend to get involved in silly discussions of "my cloud is better than yours". And to confuse things even more, in between all those discussions vendors are fighting about what constitutes the best technology (hardware & software) for building one. That can be fun, but it has little to do with the value of the cloud as a concept.

Now where do private clouds and hybrid clouds fit in? Are the private cloud and the hybrid cloud something temporary, facilitators toward a "true" public cloud? Or is this just the case for the private cloud?

If you concur that not all IT is the same, that not all organizations are the same, and that not all IT is or will become a commodity, one could state that the hybrid cloud has a more permanent character. It will aid in having a unified, holistic way to manage it all. Multiple environments with separate management are less attractive, as that incurs overhead in costs and perhaps even skills.

But what about that private cloud? Let's face it: most (none) of us will ever be able to get the sheer volume, and thus the economies of scale, for cost, pricing, redundancy or flexibility that the public cloud has. If we could, it would mean the public cloud vendors are not doing their job right. On the other hand, there are other needs in business than cutting costs. In the end cost cutting is a valuable tool, but not a business model. You can't run a company whose only mission statement is "we're cutting down on our costs". So I think the private cloud might be more than just a transition model. It can live on, but probably not for most businesses. Depending on their needs, I see a very bright future for the hybrid cloud, both for transition and as a permanent solution. I would be hard pressed to call the choice for a private or hybrid cloud wrong; it all depends on whether it's a decision made for the right reasons. One should always note that choices and decisions have a limited life span. Business is very competitive and moves very fast, and the very nature of cloud computing will only accelerate this.

It won't surprise anyone that a lot of discussions around cloud are based on some assumptions. One of them is that we are discussing very well run organizations: businesses and governments that have a clear understanding of their IT & business needs, have an IT strategy to support that, and use the best-fitting management styles and methodologies as required and dictated by those needs. Sigh, perhaps it's me, but while I do see occurrences of this at companies, I have never worked at or for one that is that well squared away. And, to me, becoming a financially sound success story with your business and IT in the cloud requires just this.

Perhaps the lure or the push of the cloud will achieve for some companies and organizations what nothing else has: help transform them into better run entities. Some things, ugly hacks, internal IT can do (unwillingly) now are not possible in the cloud. The financial pressure will be bigger as well. It's hard to hide or forget about certain costs in the cloud. If there is one thing that ISPs, telcos (transforming into cloud vendors, but by origin giant "billing engines" for communications) and native cloud vendors are very good at, it is sending you the monthly invoices. When things become visible they tend to attract attention; sooner or later the bean counters will find you. A lot of the existing companies with legacy IT and politics will have a harder time dealing with all of this than the new, emerging ones built from the ground up on utility computing, so it's time to step up to the plate and at least practice batting.

My First Hands On Experience With The System Center Virtual Machine Manager 2012 Beta

Today I made some time to take System Center Virtual Machine Manager 2012 (SCVMM 2012) Beta for a little test drive. Nothing fancy yet, just some first impressions and experiences. I already had two VMs standing by: one running SQL Server 2008 R2 to take care of the database needs and one to install SCVMM 2012 on. Normally, in further testing, I will install the Self-Service Portal on a separate machine for more flexibility, but for now it's a one-host deal with a separate database server.

The documentation is already available on TechNet. I'm pretty sure this will grow a lot, but the installation guidelines are already pretty good. Still, as this is a test drive and I want to see how the installer behaves, I deliberately didn't get all the prerequisites ready from the start, just to get a feel for how the install handles that.

Right from the start we run into a symptom you need to take into consideration when using Dynamic Memory in a VM guest, one that was already discussed by Aidan Finn in Software Setup Does Not Meet Memory Requirements with Dynamic Memory Enabled. Just make sure you have plenty of memory at install time; afterward you can tweak it a bit to get some more breathing room for the lab hosts.
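The practical workaround is to give the guest a generous startup allocation before running setup. On a host with the Hyper-V PowerShell module (Windows Server 2012 and later; my VM name here is just an example), that could look like:

```powershell
# Give the guest plenty of startup RAM so the installer's memory check
# passes, while still capping Dynamic Memory for the lab host.
Set-VMMemory -VMName 'SCVMM2012' `
    -DynamicMemoryEnabled $true `
    -StartupBytes 4GB `
    -MaximumBytes 5GB
```

After the install you can dial the startup value back down to reclaim memory for the other lab VMs.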

The VMM 2012 setup wizard adds one prerequisite automatically for you if it's missing: the .NET Framework 3.5.1 feature (which is not installed by default).

OK people, this is a bit rough and packed with screenshots, but here we go!

Start the setup and accept the License Agreement.

I opt to install all the roles on a single host.

I provide the needed information; the product key can wait, so don't worry about that here.

I'm opting in to Microsoft Update to keep my lab server running healthy & protected.

I’m happy with the default installation location

It’s checking the prerequisites

And it complains. I’ve been too cheap on memory and the Dynamic Memory settings are not bailing me out as already indicated above.

So I fix both issues by adding more memory and installing IIS. Make sure you read the TechNet documentation for all the IIS components you need:

  • .NET Extensibility
  • Default Document
  • Directory Browsing
  • HTTP Errors
  • IIS 6 Metabase Compatibility
  • IIS 6 WMI Compatibility
  • ISAPI Extensions
  • ISAPI Filters
  • Request Filtering
  • Static Content

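Rather than ticking these off one by one in Server Manager, you can install the lot from PowerShell. A sketch using the Windows Server 2008 R2 ServerManager module (the feature names below are my mapping of the list above; double-check them against the TechNet docs):

```powershell
# Install IIS plus the role services the VMM Self-Service Portal needs.
# Feature names assume Windows Server 2008 R2's ServerManager module.
Import-Module ServerManager

Add-WindowsFeature Web-Server, `
    Web-Net-Ext, `           # .NET Extensibility
    Web-Default-Doc, `       # Default Document
    Web-Dir-Browsing, `      # Directory Browsing
    Web-Http-Errors, `       # HTTP Errors
    Web-Metabase, `          # IIS 6 Metabase Compatibility
    Web-WMI, `               # IIS 6 WMI Compatibility
    Web-ISAPI-Ext, `         # ISAPI Extensions
    Web-ISAPI-Filter, `      # ISAPI Filters
    Web-Filtering, `         # Request Filtering
    Web-Static-Content       # Static Content
```

One command, and the prerequisite checker stops complaining about IIS.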
Then we rerun the prerequisite checks and we get another issue: we need the WAIK (Windows Automated Installation Kit). You can avoid all these warnings and errors by reading the docs and preparing the server, but as stated, I wanted to get a look at how the process behaves. So we download and install the WAIK.

The installer still thinks I'm too cheap, but it's only a warning now. I did end up giving the VM 4 GB with a limit of 5 GB of RAM.

The next error is just because I was too fast launching the setup; we need to give the WinRM service some time to start. It's a service that has a delayed start.

We didn't do our prerequisites homework, so we get nagged about the SQL Command Line Utilities. We can continue without them, but if you do install them, note that they depend on the SQL Native Client, which needs to be installed first.

I have my database already up and running, so I have no worries there. The account here needs permissions to install and configure the database; it's used for that purpose only. As you can see, I use the default instance and create a new database. Make sure your SQL Server is set up right for remote access, the firewall is configured, etc.

I've prepared a nice and shiny new domain account for the SCVMM 2012 service to run under. I don't use a managed service account because I'm not sure whether I might use this account on multiple machines in more elaborate fault-tolerant installations.

As I'm not very creative and don't want to use non-default ports I'll only forget, I elect to keep all the default ports.

I also leave the default settings for the self-service portal.

But I do change the location of the library share to a separate large disk.

OK, the installer is ready to rock.

The install goes very fast, by the way. I went to get some coffee, called a colleague, and voilà …

In my environment it took about 6 minutes.

And after all that I got my reward: SCVMM 2012 up and running.

To which I add my test cluster

I need to provide some credentials that can discover the hosts and install the agents

I add some cluster hosts


SCVMM2012 picks up that it’s a cluster


And I add it to the "All Hosts" group I created. Neatly organizing already (neurotic behavior is widespread in the IT world).

The cluster and hosts are added to SCVMM 2012
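For the record, the same cluster import can be scripted with the VMM 2012 PowerShell cmdlets instead of clicking through the wizard. A sketch, with my lab's cluster and host group names as placeholders:

```powershell
# Script the cluster import done in the wizard above.
# Cluster FQDN and host group name are placeholders for my lab.
$Credential = Get-Credential   # account that can discover hosts and install agents
$HostGroup  = Get-SCVMHostGroup -Name 'All Hosts'

Add-SCVMHostCluster -Name 'labcluster.demo.local' `
    -Credential $Credential `
    -VMHostGroup $HostGroup
```

Handy once you're past the first exploratory install and want repeatable lab builds.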

For fun, I put a host into maintenance mode. It offers to use live migration, as that is available.

And that went just fine.
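Maintenance mode can also be toggled from the VMM command shell. A sketch (the host name is a placeholder); the -MoveWithinCluster switch is what triggers the live migration of running VMs to the other nodes:

```powershell
# Put a host into maintenance mode; -MoveWithinCluster live-migrates
# its running VMs to the other cluster nodes first.
$VMHost = Get-SCVMHost -ComputerName 'HOST01'   # placeholder host name
Disable-SCVMHost -VMHost $VMHost -MoveWithinCluster

# When the maintenance work is done, bring it back into service.
Enable-SCVMHost -VMHost $VMHost
```

Useful for patch windows, where you want the whole drain-patch-resume cycle scripted.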

Well, there you have it: a first rough hands-on experience with SCVMM 2012 Beta. We're off to a very good start with this. More to follow later, without a doubt.