Looking Back at the DELL CIO Executive Summit 2014

Yesterday I attended the DELL CIO Executive Summit 2014 in Brussels. It was basically a home match for me (yes, that happens) and I consider it a compliment to have been invited to a day of C-level discussions.

image

Apart from a great networking opportunity with our peers, we had direct access to many of DELL’s executives. I found it interesting to hear what some existing customers had to say about their experiences with DELL Services, especially on the security side of things, where they provide a level of expertise and assistance I had not yet realized they offered.

The format was small scale and encouraged interactive discussions. That succeeded quite well and made for good interaction between the attending CIOs and DELL executives. We were not being sold to or killed by PowerPoint. Instead we engaged in very open discussions on our challenges and opportunities while providing feedback. It reminded me of the great interaction-promoting format at the DELL Enterprise Forum 2014 in Frankfurt this year. You learn a lot from each other and from how others deal with the opportunities that arise.

To give you an idea of the amount of access we got, consider the following. Where else can you walk up to the CEO of a roughly $24 billion company and give him feedback on what you like and don’t like about the company he founded? Even better, you get a direct, no-nonsense answer that explains why and where. Does he need to do this? My guess is not, but he does, and I appreciate that as an IT professional, Microsoft MVP and customer.

Before the CIO Executive Summit started I joined the Solutions Summit to talk shop with sponsors/partners like Intel and Microsoft, DELL employees and peers, and to lay my eyes on some generation 13 hardware for the first time in real life.

It was a long but very good day. As the question gets asked every now and then why I attend such summits and events, I can only say that it’s highly interesting to talk to your peers, vendors, engineers and executives. It prevents tunnel vision and acting in your own village without knowledge of the world around you. Keeping up your situational awareness in IT and business requires you to put in the effort, and it is highly advisable. It’s as important as a map, reconnaissance and intelligence are to the military; without them you’re acting on a playing field you don’t even see, let alone understand.

DELL CIO Executive Summit

I’ve been invited to and I’m attending the CIO Executive Summit with DELL’s Executive Leadership Team on Wednesday September 17, 2014 in Brussels. It’s an opportunity to meet and network with my peers and IT leaders. It also provides the opportunity to discuss our challenges with DELL executives and hear where they see DELL helping us with those.

It runs parallel with the DELL Solutions Tour 2014 Brussels (see http://www.dellsolutionstour2014.com/ for events near you), where I’m sure many will be looking at the recently released generation 13 servers and new Intel CPU offerings.

image

I’ll be attending 2 “Strategic Deep Dive Sessions” that address some of the critical challenges facing C-level IT professionals. I’m doing the one on security. This is important, as only eternal vigilance, preparedness and situational awareness can help mitigate disaster. The technology is just a force multiplier.

The other track is on future-ready IT solutions. That means a lot of different things to many of us. The new capabilities and ever faster evolving IT place a financial and operational burden on everyone. I’m very interested to discuss how DELL will deal with this beyond the traditional answers. The need for fast, effective and cost-effective solutions that deliver great ROI and TCO is definitely there, but the move to OPEX versus CAPEX and the potential loss of ownership also introduces risks that can cost us dearly if not managed right. IT is still more than a financial model of service billing, even if it sometimes looks like that. It’s important to keep the mix in balance and do it smart.

So on Wednesday I’ll be focusing on strategy, not action or tools. Something that gets missed way too much, by way too many, way too often. Michael Dell will be there, and if I get the opportunity I’ll be happy to give some feedback.

Configuring timestamps in logs on DELL Force10 switches

When you get your Force10 switches up and running and are about to configure them, you might notice that, when looking at the logs, the default timestamp is the time passed since the switch booted. During configuration, looking at the logs can be very handy for seeing what’s going on as a result of your changes. When you’re purposely testing, it’s not too hard to see which events you need to look at. When you’re working on stuff or troubleshooting after the fact, matching things up gets tedious. So one thing I like to do is set the timestamp to reflect the date and time.

This is done by setting timestamps for the logs to datetime in configuration mode. By default it uses uptime, which logs events as the time passed since the switch started, in weeks, days and hours.

service timestamps [log | debug] [datetime [localtime] [msec] [show-timezone] | uptime]

I use: service timestamps log datetime localtime msec show-timezone

F10>en
Password:
F10#conf
F10(conf)#service timestamps log datetime localtime msec show-timezone
F10(conf)#exit
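
Note that this only changes the running configuration. To make the new timestamp format survive a reboot, save it to the startup configuration with the standard FTOS copy command:

F10#copy running-config startup-config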

Don’t worry if you see a $ sign appear to the left or right of your line like this:

F10(conf)##$ timestamps log datetime localtime msec show-timezone

it’s just that the line is too long and your prompt is scrolling Winking smile.

This gives me the detailed information I want to see. Opting to display the time zone helps me correlate the events to other events and times on equipment that might not have the time zone set (you don’t always control this, and perhaps it can’t be configured on some devices).

image

As you can see, the logging is now very detailed (purple). The logs on this switch were last cleared before I added these timestamps instead of the uptime to the logs. This is evident from the entry for last logging buffer cleared: 3w6d12h (green).

Voila, that’s how you get to see the times in your logs, which is a bit handier if you need to correlate them to other events.

SMB 3, ODX, Windows Server 2012 R2 & Windows 8.1 perform magic in file sharing for both corporate & branch offices

SMB 3 for Transparent Failover File Shares

SMB 3 gives us lots of goodies and one of them is Transparent Failover which allows us to make file shares continuously available on a cluster. I have talked about this before in Transparent Failover & Node Fault Tolerance With SMB 2.2 Tested (yes, that was with the developer preview bits after BUILD 2011, I was hooked fast and early) and here Continuously Available File Shares Don’t Support Short File Names – "The request is not supported" & “CA failure – Failed to set continuously available property on a new or existing file share as Resume Key filter is not started.”

image

This is an awesome capability to have. It also made me decide to deploy Windows 8, and now 8.1, as the default client OS. The fact that maintenance (it’s the Resume Key filter that makes this possible) can now happen during the day and patching can be done via Cluster Aware Updating is such a win-win for everyone that it’s a no-brainer. Just do it. Even better, it’s continuous availability thanks to the Witness service!

When the node running the file share crashes, the clients experience a somewhat long delay in responsiveness, but after about 10 seconds they continue where they left off once the role has resumed on the other node. Awesome! Learn more about this here: Continuously Available File Server: Under the Hood and SMB Transparent Failover – making file shares continuously available.
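
For reference, creating such a continuously available share from PowerShell on the clustered file server role is straightforward; the share name and path below are just hypothetical examples:

New-SmbShare -Name "Data" -Path "D:\Data" -ContinuouslyAvailable $true
Get-SmbShare -Name "Data" | Select-Object Name, ContinuouslyAvailable

The second command lets you verify that the ContinuouslyAvailable property did indeed get set on the share.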

Windows clients also benefit from ODX

But there is more: it’s SMB 3 & ODX together that bring us even more goodness: the offloading of reads and writes to the SAN, saving CPU cycles and bandwidth. Especially in the case of branch offices this rocks. When SMB 3 clients copy data between file shares on Windows Server 2012 (R2) whose storage lives on an ODX-capable SAN, the transfer request is translated to ODX by the server, which gets a token that represents the data. Windows uses this token to do the copying and hands it to the storage array, which internally does all the heavy lifting and tells the client the job is done. No more reading data from disk, translating it into TCP/IP, moving it across the wire to reassemble it on the other side and write it to disk.

image

To make ODX happen we need a decent SAN that supports it well. A DELL Compellent shines here. Next to that, you can’t have any filter drivers on the volumes that don’t support offloaded reads and writes. This means we need to make sure that features like data deduplication support this, but also that 3rd party vendors of anti-virus and backup software don’t ruin the party.

image

In the screenshot above you can see that Windows data deduplication supports ODX. And if you run antivirus on the host, you have to make sure that its filter driver supports ODX. In our case McAfee Enterprise does, so we’re good. Do make sure to exclude the cluster-related folders & subfolders from on-access scans and scheduled scans.
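
If you want to check this on your own hosts, you can inspect the filter driver instances on a volume from an elevated prompt (D: below is a placeholder for your data volume); the SprtFtrs value shows which offload features each filter supports. ODX can also be turned off system-wide via a registry value, so it’s worth confirming it hasn’t been disabled (0 means enabled):

fltmc instances -v D:
reg query HKLM\SYSTEM\CurrentControlSet\Control\FileSystem /v FilterSupportedFeaturesMode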

Do not run DFS Namespace servers on the cluster nodes. The DfsDriver does not support ODX!

image

The solution is easy, run your DFS Namespaces servers separate from your cluster hosts, somewhere else. That’s not a show stopper.

The user experience

What does it look like to a user? Totally normal, except for the speed at which the file copies happen.

Here’s me copying an ISO file from a file share on server A to a file share on server B from my Windows 8.1 workstation at the branch office in another city, 65 KM away from our data center and connected via a 200Mbps pipe (MPLS).

image

On average we get about 300 MB/s or 2.4 Gbps, which “over” a 200Mbps WAN is a kind of magic. I assure you that the users aren’t complaining and get used to this quite (too) fast Winking smile.

The IT Pro experience

Leveraging SMB 3 and ODX means we avoid people consuming tons of bandwidth over the WAN and make copying large data sets a lot faster. On top of that, the CPU cycles and bandwidth on the server are conserved for other needs. All this while we can fail over the cluster nodes without our business users being impacted. Continuous to high availability, speed, less bandwidth & fewer CPU cycles needed. What’s not to like?

Pretty cool huh! These improvements help out a lot and we’ve paid for them via software assurance so why not leverage them? Light up your IT infrastructure and make it shine.

What’s stopping you?

So what are your plans to leverage your software assurance benefits? What’s stopping you? When I asked that I got a couple of answers:

  • I don’t have money for new hardware. Well, my SAN is also pre-Windows 2012 (DELL Compellent SC40 controllers). I just chose based on my own research, not on what VARs like to sell to get maximal kickbacks Winking smile. The servers I used are almost 4 years old but fully up to date DELL PowerEdge R710’s, recuperated from their duty as Hyper-V hosts. These servers easily last us 6 years, and over time we collected some spare servers for parts or replacement after the support expires. DELL doesn’t take away your access to firmware & drivers like some do, and their servers aren’t artificially crippled in feature set.
  • Skills? Study, learn, test! I mean it, no excuse!
  • Bad support from ISVs and OEMs for recent Windows versions is holding you back? Buy other brands, vote with your money and do not accept their excuses. You pay them to deliver.

As IT professionals we must and we can deliver. This is only possible as the result of sustained effort & planning. All the labs, testing and studying help out when I’m designing and deploying solutions. Because I take the entire stack into account in designs and we do our due diligence, I know it will work. Being active in the community also helps me learn early on which vendors & products have issues, so we can avoid the “marchitecture” solutions that don’t deliver when deployed. You can achieve this as well, you just have to make it happen. That’s not too expensive or time consuming, at least a lot less than being stuck after you’ve spent your money.