Exchange 2016 On The Horizon

With Exchange 2016 on the horizon (RTM in Q4 2015) I’ve been prepping the lab infrastructure and dusting off some parts of the Exchange 2010/2013 lab deployments to make sure I’m ready to start testing a migration to Exchange 2016. While Office 365 offers great value for money, sometimes switching over completely isn’t an option and a hybrid scenario is the way to go. The reasons can be regulations, politics, laws, etc. No matter what, we have to come up with a solution that satisfies all needs as well as possible, and even in 2015 or 2016 that will mean on-premises e-mail. It’s not my “default” option when it comes to e-mail anno 2015, but it’s still a valid option and choice: customers get the best of both worlds and stay compliant. Is this the least complex solution? No, but it gives them the compliance they need and/or want. It’s not my job to push companies 100% to the cloud. That’s for the CIO to decide and for cloud vendors to market and sell. I’m in the business of helping create the best possible solution to the challenge at hand.

Figure: Exchange 2016 Architecture © Microsoft

The labs were set up to test & prepare for production deployments. It all runs on Hyper-V and it has survived upgrades of the hypervisor (half of the VMs are even running on Windows Server 2016 hosts) and the conversion of the VHDs to VHDX. These labs have been kept around for testing and troubleshooting and they are fully up to date. It’s fun to see the old 2009 test mails still sitting in some mailboxes.

image

Both Windows NLB and Kemp Technologies LoadMasters are used. Going forward we’ll certainly keep using the hardware load balancing solution. When it comes to load balancing there is only the best possible solution for your needs in your environment, and that will determine which of the various options you’ll use. In Exchange 2016 that will be very different from Exchange 2010 when it comes to session affinity: affinity is no longer needed since Exchange 2013.

image

In case you’re wondering which LoadMaster you need, take a look at their sizing guides:

Another major change will be the networking. On Windows Server 2012 R2 we’ll go with a teamed 10Gbps NIC for all workloads, simplifying the setup. Storage-wise, one change will be the use of ReFS, especially if we can do this on Storage Spaces; the data protection you get from that combination is just awesome. Disk-wise the IOPS requirements have dropped yet a little more, so that should be OK. Now, being a geek, I’m still tempted to leverage cheap, larger SSDs for flying performance 😊. If at all possible I’d like to make it a self-contained solution, so no external storage from any type of SAN or centralized storage, just local disks. I’m eyeing the DELL R730xd for that purpose: ample storage, the ability to have 2 controllers, and my experience with these has been outstanding.
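To give you an idea, here’s a minimal sketch of how that teamed 10Gbps NIC and an ReFS volume on Storage Spaces could be set up with PowerShell on Windows Server 2012 R2. The team, NIC, pool and disk names are just placeholders for illustration, not the ones from my lab.

```powershell
# Team two 10Gbps NICs for all workloads (adapter and team names are placeholders)
New-NetLbfoTeam -Name "Team10G" -TeamMembers "NIC1","NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# Build a Storage Spaces pool from the local disks that are eligible for pooling
New-StoragePool -FriendlyName "Pool01" `
    -StorageSubSystemFriendlyName "Storage Spaces*" `
    -PhysicalDisks (Get-PhysicalDisk -CanPool $true)

# Carve out a mirrored virtual disk and format it with ReFS
New-VirtualDisk -StoragePoolFriendlyName "Pool01" -FriendlyName "ExchData" `
    -ResiliencySettingName Mirror -UseMaximumSize |
    Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem ReFS -NewFileSystemLabel "ExchData"
```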

So no virtualization? Sure, where and when it makes sense and it fits in with the needs, wants and requirements of the business. You can virtualize Exchange and it is supported. It goes without saying (serious bragging alert here) that I can make Hyper-V scale and perform like a boss.

Putting Windows Server 2016 TPv3 To The Test

I’ve dedicated some time to investigating the new and improved features and capabilities ever since Technical Preview 1 (TPv1). We kept going with TPv2 and now TPv3. The proving grounds for putting Windows Server 2016 TPv3 to the test are up and running.

image

As usual I’ll be sharing some of the results and findings. I only use the public Technical Previews for this, so it’s public information you can read about and go test or find out about for yourself.

So far things are going quite well. I’m learning a lot. Sure, I’m also hitting some issues left and right, but on the whole Windows Server 2016 is giving me good vibes.

Expertise, insights, knowledge and experience are hard won. They’re never free. So I test what I need to find out about, find interesting or think will be valuable in the future. Asking me to go and test things for you on demand isn’t really going to work. I have bills to pay and cannot spend time, effort & resources on all of the numerous roles and features available to us in this release. Trust me, I get enough offers to work for free or peanuts from both strangers and employers, so thanks for the offers but I need no more 😉

Using RAMDisk To Test Windows Server 2012 Network Performance

I’m testing & playing with different Windows Server 2012 & Hyper-V networking scenarios with 10Gbps, Multichannel, RDMA, converged networking, etc. Partially this is to find out what works best for us in regard to speed, reliability, complexity, supportability and cost.
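For what it’s worth, these are the kind of quick checks I run to see whether SMB Multichannel and RDMA are actually being used on a test box; nothing here is specific to my lab setup.

```powershell
# Which adapters are RDMA capable and enabled?
Get-NetAdapterRdma | Where-Object Enabled

# Is SMB Multichannel active, and over which interfaces?
Get-SmbMultichannelConnection
Get-SmbClientNetworkInterface

# On the file server side: per-interface RSS/RDMA capabilities as SMB sees them
Get-SmbServerNetworkInterface
```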

Basically you have four basic resources in IT around which the eternal struggle for the perfect balance takes place. These are:

  • CPU
  • Memory
  • Networking
  • Storage

We need the correct balance in capabilities, capacities and speed for these in a well-designed system. For many years now, but especially over the last 2 years, it’s very safe to say that, while the sky is the limit, it has become ever easier and cheaper to get what we need when it comes to CPU and memory. These have become very powerful, fast and affordable relative to the entire cost of a solution.

Networking in the 10Gbps era is also showing its potential in quantity (bandwidth), speed (latency) and cost (well, it’s getting there) without reducing the CPU or memory to trash, thanks to a bunch of modern offload technologies. And basically in this case it’s these qualities we want to put to the test.

The most troublesome resource has been storage, and it has been for quite a while. While SSDs do wonders for many applications, the balance between speed, capacity & cost isn’t as sweet as for our other resources.

In some environments where I’m active they have a need for both capacity and IOPS, and as such they are in luck, as next to caching a lot of spindles still equates to more IOPS. For testing the boundaries of one resource you need to make sure none of the others hit theirs. That’s not easy, as for performance testing you can’t always have a truckload of spindles on a modern high-speed SAN available.

RAMDisk to ease the IOPS bottleneck

To see how well the 10Gbps cards with and without teaming, Multichannel and RDMA are behaving, and what these configurations are capable of, I wanted to take as much of the disk IOPS bottleneck out of the equation as possible. Apart from buying a Violin system capable of doing 1+ million IOPS, which isn’t going to happen for some lab work, you can perhaps get the best possible IOPS by combining some local SSDs and a RAMDisk. A RAMDisk is spare memory used as a virtual disk. It’s very fast and cost effective per IOPS. But capacity-wise it’s not the world’s best, let alone most cost-effective, solution.

image

I’m using free RAMDisk software provided by StarWind. I chose this as they allow for large RAMDisks. I’m using 54GB ones right now to speed test copying fixed-size VHDX files. It installs flawlessly on Windows Server 2012 and it hasn’t caused me any issues. Throw in some SSDs on the servers for where you need persistence and you’re in business for some very nice lab work.
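To give an idea of the kind of copy test I mean: create a fixed-size VHDX on the RAMDisk and time copying it across the 10Gbps links. The R: drive letter, the share name and the node name below are just assumptions for illustration.

```powershell
# Create a fixed-size VHDX on the RAMDisk as copy-test payload (R: is an assumed drive letter)
New-VHD -Path "R:\CopyTest.vhdx" -SizeBytes 40GB -Fixed

# Copy it to the RAMDisk share on the other test node and time the transfer
Measure-Command {
    Copy-Item -Path "R:\CopyTest.vhdx" -Destination "\\NODE2\RAMDisk\CopyTest.vhdx"
}
```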

clip_image001

You also need to be aware that it doesn’t persist data when you reboot the system or lose power. This is not an issue if all we are doing is speed testing, as we don’t care. Otherwise you’ll need to find a workaround and realize that those “flush the data to persistent storage” options aren’t foolproof or super fast; the SSDs do help here.

You have to register, but the good news is that they don’t spam you to death at all, which I find cool. As said, the tool is free, works with Windows Server 2012 and allows for larger RAMDisks where other free ones are often way too limited in size.

It has allowed me to do some really nice testing. Perhaps you want to check this out as well. WARNING: The picture below is a lab setup… I’m not a magician, and this is not the kind of IOPS I have all over the datacenters from 4 cheapo SATA disks I touched with my special magic pixie dust.

image

With #WinServ 2012 storage costs/performance/capacity are the only thing limiting you  http://twitter.yfrog.com/mnuo9fp #SMB3.0 #Multichannel

Some quick tests with a 52GB NTFS RAMDisk formatted with a 64K NTFS Allocation unit size.
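For reference, this is roughly how a RAMDisk volume can be brought online and formatted with a 64K allocation unit via PowerShell. The disk number is whatever the RAMDisk shows up as on your host; 5 is just an example, and your RAMDisk tool may already present a formatted volume.

```powershell
# Bring the RAMDisk online and format it with a 64K NTFS allocation unit size
# (disk number 5 is an example; check Get-Disk for the actual number)
Get-Disk -Number 5 |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -AllocationUnitSize 65536 -NewFileSystemLabel "RAMDisk"
```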

image image

I also tested with another free tool, SoftPerfect RAM Disk FREE. It performs well, but I don’t get to see the RAMDisk in the Windows Disk Management GUI, at least not on Windows Server 2012. I have not tested with W2K8R2.

NTFS Allocation unit size 4K NTFS Allocation unit size 64K
image image

Transitioning a Windows Server 8 Hyper-V Cluster to Windows Server 2012 Release Candidate

For those of you interested in moving your lab from Windows Server 8 Beta to Windows Server 2012 Release Candidate, I can refer you to my 3-part blog series on Upgrading Hyper-V Cluster Nodes to Windows Server 2012 (Beta).

  1. Part 1 Upgrading Hyper-V Cluster Nodes to Windows Server 2012 (Beta) – Part 1
  2. Part 2 Upgrading Hyper-V Cluster Nodes to Windows Server 2012 (Beta) – Part 2
  3. Part 3 Upgrading Hyper-V Cluster Nodes to Windows 8 (Beta) – Part 3

So the entire process is very similar, but for the fact that to go from Windows Server 8 Beta you have to do a clean install on every node you evict during the process. An upgrade is not supported and not possible. I even tried the old trick of editing the cversion.ini file in the sources folder to lower the supported minimum version for an upgrade, but no joy.

image image

You probably remember this trick to enable an upgrade from the beta/RC to RTM with Windows Server 2008 R2/Windows 7. But that doesn’t work here, and even if it did it would not be supported.

But just follow the 3-part series and do a fresh install instead of an upgrade of the cluster nodes and you’ll be just fine.
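If you prefer PowerShell over the Failover Cluster Manager GUI for the evict and re-add part of that node-by-node rebuild, the basic cmdlets look like this; the cluster and node names are placeholders, and the full series covers the rest of the process.

```powershell
# Evict the node that will get a clean install of Windows Server 2012 RC
Remove-ClusterNode -Cluster "HVCluster01" -Name "HVNode02"

# ... perform the clean install on the node, patch it, add the Hyper-V and
#     Failover Clustering features, and reconfigure networking/storage ...

# Add the freshly rebuilt node back into the cluster
Add-ClusterNode -Cluster "HVCluster01" -Name "HVNode02"
```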