Best of MMS 2013 – SCUG Belgium

Earlier this month I attended MMS 2013 in Las Vegas. Today the Belgian System Center community lets us know about a live “Best of MMS” event they are organizing to share in-depth System Center knowledge and presentations, along with their impressions.

No one less than Wally Mead, the Senior Program Manager for System Center Configuration Manager who’s perhaps better known as The Godfather of Configuration Manager, will be joining the event. Wally is presenting twice alongside the Belgian SCUG members, many of whom belong to the Microsoft Extended Experts Team (MEET) and/or are MVPs in Enterprise Client Management, Cloud & Datacenter or Virtual Machine.

Grab a seat for “Best of MMS 2013” right here on Eventbrite.

[Image: Eventbrite registration button]

You can find the (non-final) agenda on the SCUGBE web site http://scug.be/events/2013/04/27/best-of-mms-19062013/. As you can see I have an early morning session from 09.15 to 10.15 on “Availability Strategies for a Resilient Private Cloud”. This provides the foundation for the highly to continuously available private cloud my fellow speakers will be presenting on.

There will be opportunities to network, talk shop, learn and last but not least to win a TechEd Europe 2013 ticket in a lottery!

MVP Carsten Rachfahl Visits & Interviews Me On Networking & Storage in Windows Server 2012

Last month Carsten (MVP – Virtual Machine) & Kerstin Rachfahl (MVP – Office 365) visited me in my home town. Apart from a short visit to the historic center & a sushi dinner amongst friends, we also did an interview in which we discussed our ongoing Windows Server 2012 Hyper-V activities. We’re trying to leverage as much of the product as we can to get the best TCO & ROI, and as early adopters we’ve been reaping the benefits from the day the RTM bits were available to us. So far that has been delivering great results. Funny to hear me mention the Fast Track designs, as a week later we saw version 3 of those at MMS 2013. Most interesting to me was the fact that the small & medium sizes focus on Cluster in a Box and Storage Spaces!

While we were having fun talking about the above, we also enjoyed some of the most beautiful landmarks of the City of Ghent as a backdrop for the interview. It was filmed in a meeting room at AGIV, to whom I provide infrastructure services with a great team of colleagues. Just click the picture to view the video.

[Video: Interview with Didier Van Hoye on storage, networking and more]

You can also enjoy the video on Carsten’s blog: http://www.hyper-v-server.de/videos/interview-mit-didier-van-hoye-ber-seinen-storage-netwerk-und-mehr/. All I need to do now, I think, is arrange for Carsten to physically touch the Compellent storage.

April 24th–Windows 2003 Is 10 Years Old

I’d like to chime in on a recent blog post by Aidan Finn, Hey Look–Your Business Is Running On A 10-Year Old Server Operating System (W2003). The sad thing is that this is so true, and the “good” thing is that some are even still on Windows Server 2000, so they’re in even worse shape. Now I realize that not all industries are the same, but keeping your operating systems up to date has its benefits for all types of companies:

  • Security Improvements
  • Improved, richer, enhanced features
  • New functionality
  • Support for state of the art hardware & software
  • Support for the day the SHTF
  • Future Proofing of your current investments

For one, all of the above will save you time and money. On top of that, it mitigates the risk of lost revenue due to security incidents & unsupported environments no one can fix for you.

Think about it: if you’re running Windows Server 2000 or 2003, chances are you are paying for software to provide functionality that’s available right out of the box. You’re also putting in extra effort & jumping through hoops to run those on modern server grade hardware.

You’re also building up debt. Instead of yearly improvements keeping your infrastructure & services top notch, you’re actively digging an ever bigger, very expensive, complex and high risk hole that you’ll have to dig yourself out of. If you can, that is. Not a good place to be in. Still think leveraging Software Assurance is a bad thing?

So while way too many companies now have to assign resources to mitigating that looming problem, we’re focusing on other ventures (such as Hyper-V, Azure, Hybrid Cloud, …) and just keep our OS up to date at a steady pace, like before. Well people, that doesn’t happen by accident. We’ve maintained a very healthy pace of upgrading to the most recent version of Windows in our environments. At times I have had to fight for that, and I’m sure I will again. But look at our baseline: even if the economy tanks completely, we’re in darn good shape to weather that storm and come out ahead. It’s not going to happen by sitting there avoiding change out of fear or laziness. So start today.

A point where I agree with Aidan completely: if your “Zombie ISV” and other vendors are telling you Windows 2003 is great and you shouldn’t use those new, unproven versions of the OS, they are really touting BS. They have fallen so far behind on the technology stack that they need you to stay in their black hole of despair with them or they’ll go broke. Just move on. Trust me, they need you more than the other way around.

SMB 3.0 Multichannel Auto Configuration In Action With RDMA / SMB Direct

Most of you might remember this slide by Jose Barreto on SMB Multichannel Auto Configuration in one of his many presentations:

[Slide: SMB Multichannel Auto Configuration]

  • Auto configuration looks at NIC type/speed => Same NICs are used for RDMA/Multichannel (doesn’t mix 10Gbps/1Gbps, RDMA/non-RDMA)
  • Let the algorithms work before you decide to intervene
  • Choose adapters wisely for their function

You can fine-tune things if and when needed (only do this when it’s really necessary), but first let’s look at this feature in action.

For this test we have 2 * X520 DA 10Gbps ports using 10.10.180.8X/24 IP addresses and 2 * Mellanox 10Gbps RDMA adapters with 10.10.180.9X/24 IP addresses. No teaming involved, just multiple NIC ports. Do note that these IP addresses are on a different subnet than the LAN of the servers. Basically only the servers can communicate over them; they don’t have a gateway or DNS servers and as such are not registered in DNS either (life is easy for simple file sharing).
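
Before kicking off the copy you can check what Multichannel has to work with on the client. A quick sketch using the SmbShare and NetAdapter modules (property names as I recall them, so treat this as illustrative):

# Show the SMB client's view of its NICs: speed, RSS and RDMA capability
Get-SmbClientNetworkInterface | Sort-Object Speed -Descending |
    Format-Table FriendlyName, Speed, RssCapable, RdmaCapable, IpAddresses -AutoSize

# Double-check which adapters actually have RDMA enabled
Get-NetAdapterRdma | Format-Table Name, Enabled -AutoSize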

[Screenshot]

Let’s try and copy a 50GB fixed VHDX file from server1 to server2 using the DNS name of the target host (greyed out in the screenshots), meaning it will resolve to that host via DNS and use the LAN IP address 10.10.100.92/16. In the screenshot below you see that the two RDMA capable cards are put into action; the servers are not using the 1Gbps LAN connection. Multichannel looked at the options:

  • A 1Gbps RSS capable Link
  • Two 10Gbps RSS capable Links
  • Two 10Gbps RDMA capable links

Multichannel concluded that the RDMA cards are the best ones available and, as we have two of those, it uses both. In other words, it works just as described.
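
If you want more than the Task Manager graphs, you can ask SMB itself what it picked while the copy is in flight. A minimal sketch, assuming the target host is SERVER2 and going by the property names as I remember them:

# While the copy runs, list the multichannel connections to the target host and
# check which client/server interfaces were selected and whether RDMA is in play
Get-SmbMultichannelConnection -ServerName SERVER2 |
    Format-Table ServerName, Selected, ClientInterfaceIndex, ServerInterfaceIndex, ClientRdmaCapable -AutoSize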

[Screenshot]

Even if we try to bypass DNS and copy the file explicitly via the IP address (10.10.180.84) assigned to the Intel X520 DA cards, Multichannel detects that it has two better, RDMA capable cards available and, as you can see, it uses the same NICs as in the demo before. Nifty, isn’t it?
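
For reference, such a copy straight to the IP address looks roughly like this from PowerShell (the file and share paths are made up for this example):

# Copy the test VHDX to the X520 DA IP address instead of the host name;
# Multichannel still prefers the two RDMA capable Mellanox ports for the transfer
Copy-Item -Path 'D:\Test\BigFixedDisk.vhdx' -Destination '\\10.10.180.84\D$\Test\'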

[Screenshot]

If you want to see the other NICs in action you can disable the Mellanox cards, and then Multichannel will choose the two X520 DA cards. That’s fine for testing, but in real life you need a better solution when you have to manually define which NICs can be used. This is done using PowerShell (take a look at Jose Barreto’s blog post The basics of SMB PowerShell, a feature of Windows Server 2012 and SMB 3.0 for more info).
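
If you’re not sure which interface aliases to pass, you can list them first; the alias is simply the adapter’s Name. A small sketch:

# List the adapters with their aliases, descriptions, speed and status so you
# know which names to feed to New-SmbMultichannelConstraint
Get-NetAdapter | Format-Table Name, InterfaceDescription, LinkSpeed, Status -AutoSize

With the aliases in hand, the constraint itself looks like this: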

New-SmbMultichannelConstraint -ServerName SERVER2 -InterfaceAlias "SLOT 6 Port 1", "SLOT 6 Port 2"

This tells the client it can only use these two NICs, which in this example are the two Intel X520 DA 10Gbps cards, to access Server2. So basically you configure/tell the client what to use for SMB 3.0 traffic to a certain server. Note the difference in send/receive traffic between RDMA and native 10Gbps.

On Server1, the client, you see this:

[Screenshot]

On Server2, the server, you see this:

[Screenshot]

Which is indeed the constraint we set up, as we can verify with:

Get-SmbMultichannelConstraint

[Screenshot: Get-SmbMultichannelConstraint output]

We’re done playing so let’s clean up all the constraints:

Get-SmbMultichannelConstraint | Remove-SmbMultichannelConstraint

[Screenshot: constraint removed]

Seeing this technology, it’s now up to the storage industry to provide the needed capacity and IOPS in a lot more affordable way. Storage Spaces has knocked on your door; that was the wake-up call. In an environment where we throw lots of data around, we just love SMB 3.0.