Building A New Lab For 2011 And Beyond

Well, with all this (Hyper-V) clustering, virtualization, the System Center Suite, Exchange 2010 & Lync, SQL Servers and iSCSI making demands on my lab network, I really need to refresh my hardware. It sounds a bit like a paradox, but such is life for the people building all this stuff. Yes, they still need some hardware, pretty beefy machines actually, to set it all up, test it, break it, fix it and keep learning. I’ve exhausted my 4-year-old lab hardware, in which I can’t put more than 4 GB of RAM. Now that I have finished all my infrastructure projects for 2010 I have time to focus on improving my old setup. Or at least I hope so; things are very busy. Thanks to the W2K8R2 SP1 beta I could use Dynamic Memory, which helped me keep churning away at all of this and various Exchange setups, but now with Lync coming into the picture I want and need an upgrade. A couple of SQL Servers in various high availability setups help eat any remaining resources. Add to that the fact that I want to do some private cloud testing, so there it is.

I need hosts with at least an Intel Quad Core (i7) and at least 16 GB of DDR3 memory. They should have room for extra NIC cards, and I always try to get some speedy disks where it matters. Now that Windows Server 2008 R2 added support for Second Level Address Translation (SLAT), which Intel calls Extended Page Tables (EPT) and which AMD calls Nested Page Tables (NPT) or Rapid Virtualization Indexing (RVI), we can make use of better graphics cards. Until now none of my processors had SLAT support; with the Intel i7 (Nehalem) processors I’m good to go. As all machines in my lab are Intel, I’m sticking with them, because Hyper-V migrations don’t work between CPU brands.
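If you want to check whether a CPU you already own supports SLAT before spending any money, the quickest way I know is Sysinternals Coreinfo. A rough sketch, run from an elevated prompt (the exact output differs between Intel and AMD boxes):

```powershell
# Coreinfo is a free Sysinternals tool; the -v switch dumps only
# the virtualization-related features of the processor.
.\Coreinfo.exe -v

# On an Intel box, look for an asterisk next to EPT (extended page tables);
# on an AMD box, the equivalent line is NP (nested paging / RVI).
# No asterisk means no SLAT, and thus no RemoteFX on that host.
```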

So here’s a logical overview of my setup. This is what I already have in place with my current hardware, but I’ve now drawn it with my coveted hardware refresh. Oh yes, the dual 1Gbps switches for iSCSI are new for this setup; I’m adding a second one so I can play with MPIO in the lab.
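Since that second switch is only there for multipathing, here’s a minimal sketch of how I’d enable MPIO on a Windows Server 2008 R2 host and let it claim the Microsoft iSCSI-attached disks. The device hardware ID below is the standard one for iSCSI devices, but verify it against your own configuration before running this:

```powershell
# Add the Multipath I/O feature on the Hyper-V host.
Import-Module ServerManager
Add-WindowsFeature Multipath-IO

# Claim all iSCSI-attached devices for MPIO and reboot (-r) to finish.
mpclaim -r -i -d "MSFT2005iSCSIBusType_0x9"

# After the reboot, list the MPIO disks and their load-balance policy.
mpclaim -s -d
```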

For disks I use 300GB – 16MB – 10,000 rpm and 600GB – 32MB – 10,000 rpm Raptors in combination with an external eSATA 1TB/2TB Western Digital Black disk for storage of VHDs, images, backups, etc. I have to buy some extra now. The faster disks are expensive, but a lab environment needs some performance, as waiting around for servers and virtual machines becomes a major annoyance when you need to get work done. The 10,000 rpm disks are great for iSCSI storage, for which I use the iSCSI Target from Windows Storage Server 2008 R2 via my TechNet subscription.
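Connecting the hosts to that target over both switches is then mostly a matter of adding the two portals and logging in. A rough sketch with the built-in iscsicli tool, where the portal addresses and the target IQN are placeholders for my lab values:

```powershell
# Make sure the Microsoft iSCSI Initiator service runs and starts automatically.
Set-Service MSiSCSI -StartupType Automatic
Start-Service MSiSCSI

# Add a target portal on each iSCSI switch/subnet (placeholder addresses).
iscsicli QAddTargetPortal 10.10.10.10
iscsicli QAddTargetPortal 10.10.20.10

# Discover the targets, then log in to the one we want (placeholder IQN).
iscsicli ListTargets
iscsicli QLoginTarget iqn.1991-05.com.microsoft:storage1-lab-target
```

For persistent logins and a session per path I tend to finish the job in the iSCSI Initiator GUI, which is less error-prone than typing out the full iscsicli login parameter list.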

All this kit should keep me up and running from 2011 until the end of 2014. Is this expensive? Yes and no. I can reuse my 1 Gbps Intel NICs and most of my hard disks, and I already have my network switches, monitors and KVM switches. So all in all it’s the new motherboards, CPUs and memory that will eat most of the budget. It’s quite a sum to put down, but here’s a note to all IT Pros out there: you need to invest in yourself every now and then.

I’ve blogged about this before in https://blog.workinghardinit.work/2010/02/04/having-a-lab-using-it/. Self-improvement and learning is a continuous process that never ends. Sure, it has some peak moments in financial cost when you need equipment, but remember you don’t need to buy it all at once. Talk to your employer about this if you’re not self-employed. Look at how much a 5-day advanced course or a conference costs, and consider that you can use a lab to learn and experiment for many years to come. So basically the potential ROI is very good. In the end, what my employers and customers get out of this is knowledge, insight, skills and results. Think about it; it helps to put the investment in perspective. Sure, I invest more than just the hardware: my time, which is very valuable to me. You can’t make more time, everyone has the same 24 hours in a day. Now it really helps if you like this stuff and have fun whilst learning new technologies or setting up a proof of concept. In a way, what people put into their job and knowledge is an indicator of their professionalism. You do not become an expert by working 9 to 5 and only learning when a course is provided. It’s not going to happen. Even a genius has to put in the effort to stand out amongst his or her peers. The same goes for you, but be smart about it. You can work yourself to death and not accomplish anything. So smart & hard is the way to go.

Windows 2008 R2 SP1 – RemoteFX Hardware To Get The Needed GPU Performance

When the first information about RemoteFX in Windows 2008 R2 SP1 Beta became available, a lot of people busy with VDI solutions found this pretty cool and good news. It is a much needed addition in this arena. After that first happy reaction, the question soon arises of how the host will provide all the GPU power needed to serve a rich GUI experience to those virtual machines. In VDI solutions you’re dealing with at least dozens and often hundreds of VMs. It’s clear, when you think about it, that just the onboard GPU won’t cut it. And how many high-performance GPUs can you put into a server? Not many, or even none, depending on the model. So where do the VDI hosts in a cluster get the GPU resources? Well, there are some servers that can contain a lot of GPUs, but in most cases you just add GPU units to the rack and attach them to the supported server models. Such units exist for both rack servers and blade servers. Dell has some info up on this over here. The specs on the PowerEdge C410x, a 3U external PCIe expansion chassis by DELL, can be found by following this link: C410x. It’s just like with external DAS disk bays: you can attach one or more 1U/2U servers to a chassis with up to 16 GPUs. They also have solutions for blade servers. So that’s what building a RemoteFX enabled VDI farm will look like. Unlike some of the early pictures showing a huge server chassis with room to stuff in all those GPU cards, the reality will be the use of one or more external GPU chassis, depending on the requirements.
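If you’re wondering what a host has on board today, a quick WMI query shows the GPUs Windows sees and the video memory they report. This is just a sanity-check sketch; the actual RemoteFX requirements (a SLAT-capable CPU plus a supported GPU and driver) still need to be checked against the SP1 documentation:

```powershell
# List the GPUs the host currently exposes, with driver version and reported video RAM.
Get-WmiObject Win32_VideoController |
    Select-Object Name, DriverVersion,
        @{Name = "AdapterRAM(MB)"; Expression = { [math]::Round($_.AdapterRAM / 1MB) }}
```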

Exchange 2010 SP1 Public Folder High Availability Returns with Roll Up 2

A lot of people were cheering in the interactive session on Exchange 2010 SP1 High Availability with Scott Schnoll and Ross Smith of the Exchange Team. They announced (in between goofing around) that the alternate server mechanism for public folders, which provides failover to the clients so they can select another public folder database to connect to and which is sadly missing from Exchange 2010, will return with Exchange 2010 SP1 Roll Up 2. This feature is needed by Outlook to automatically connect to an alternate public folder, and its return means that high availability will finally be achievable for public folders in Exchange 2010 SP1. That’s great news and frankly an “oversight” that shouldn’t have happened, even in Exchange 2010 RTM. The issue is described in the knowledge base article “You cannot open a public folder item when the default public folder database for the mailbox database is unavailable in an Exchange Server 2010 environment”, which you can find here: http://support.microsoft.com/kb/2409597.

In previous versions of Exchange you made public folders highly available to Outlook clients by having replicas. The Outlook clients could access a replica on another server if the default public folder database, as defined in the client settings of their mailbox database, was not available. Clustering in Exchange 2010 does nothing for public folders. In Exchange 2010 the Outlook clients connect directly to the mailbox server in order to get to a public folder, so they do not leverage the CAS or CAS array. The DAG does not protect public folders either: as clustering happens at the database level on DAG members and no longer at the server level, we no longer get any high availability for the clients with clustering in Exchange 2010. Sure, if you have multiple replicas the data is highly available, but the switch to another replica/database/server for public folders doesn’t happen automatically in Outlook when you’re running Exchange 2010. To make that happen, an alternate server needs to be offered to the client for selection, and as this feature is missing from Exchange 2010 up to and including SP1 Roll Up 1, in reality you need to keep using Exchange 2003/2007 to have public folder high availability. Exchange 2010 SP1 Roll Up 2 will change that. I call that good news.
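The replica side of this hasn’t changed: you still add replicas so the data exists on more than one server, and Roll Up 2 then brings back the client-side failover on top of that. A rough Exchange Management Shell sketch, where the server and database names are placeholders and the Scripts path assumes a default Exchange 2010 installation:

```powershell
# Add a second server as a replica for the entire public folder tree.
cd "C:\Program Files\Microsoft\Exchange Server\V14\Scripts"
.\AddReplicaToPFRecursive.ps1 -TopPublicFolder "\" -ServerToAdd "EXCH02"

# Point a mailbox database at its preferred default public folder database.
Set-MailboxDatabase "Mailbox Database 01" -PublicFolderDatabase "Public Folder Database 02"
```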

New Spatial & High Availability Features in SQL Server Code-Named “Denali”

The SQL Server team is hard at work on SQL Server vNext, code-named “Denali”. They have a whitepaper out on their web site, “New Features in SQL Server Code-Named “Denali” Community Technology Preview 1”, which you can download here.

As I do a lot of infrastructure work for people who really dig all this spatial and GIS-related “stuff”, I always keep an eye out for related information that can make their lives easier and enhance the use of the technology stack they own. Another part of the new features coming in “Denali” is Availability Groups. More information will be available later this year, but for now I’ll leave you with the knowledge that it will provide Multi-Database Failover, Multiple Secondaries, Active Secondaries and Fast Client Connection Redirection, can run on Windows Server Core and supports Multisite (Geo) Clustering, as shown in the Microsoft (Tech Ed Europe, Justin Erickson) illustration below.

Availability Groups can provide redundancy for databases on both standalone instances and failover cluster instances using Direct Attached Storage (DAS), Network Attached Storage (NAS) and Storage Area Networks (SAN), which is useful for physical servers in a high availability cluster and for virtualization. The latter is significant, as they will support it with Hyper-V Live Migration, whereas Exchange 2010 Database Availability Groups do not. I confirmed this with a Microsoft PM at Tech Ed Europe 2010. Download the CTP here and play all you want, but please pay attention to the fact that in CTP 1 a lot of stuff isn’t quite ready for show time. Take a look at the Tech Ed Europe 2010 session on the high availability features here; you can also download the video and the PowerPoint presentation via that link. At first I thought Microsoft might be going the same way with SQL Server as they did with Exchange, less choice in high availability but easier and covering all needs, but then I don’t think they can. SQL Server applications are beyond the realm of control of Redmond, whereas they do control Outlook & OWA. So I think the SQL Server team needs to provide backward compatibility and functionality far more than the Exchange team did. Brent Ozar (Twitter: @BrentO) wrote a blog post on “Denali”/Hadron which you can read here: http://www.brentozar.com/archive/2010/11/sql-server-denali-database-mirroring-rocks/. What he says about clustering is true. I used to cluster Windows 2000/2003 and suffered some kind of mental trauma. That was completely cured with Windows 2008 (R2), and I’m now clustering Hyper-V, Exchange 2010, file servers, etc. with a big smile on my face. I just love it!