Failed at dumping XP in a timely fashion? Redeem yourself by doing better with Windows Server 2003!

I could write a blog post for Windows Server 2003 that repeats the things I said about XP, with even more drama attached, but I won’t. There’s plenty about that on the internet and you can always read these blogs again:

I also refer you to an old tweet of mine that got picked up by someone who kind of agreed:

image

Replace “XP” with “Server 2003” and voila, instant insight into the situation. You are blocking yourself from moving ahead and it’s getting worse by the day. All IT systems & solutions rot over time. They become an ever bigger problem to manage and maintain, costing you time, effort, money and lost opportunities because progress is blocked. There comes a day when creative solutions won’t pop up anymore, like the one in this blog post Windows XP Clients Cannot Execute Logon Scripts against a Windows Server 2012 R2 Domain Controller – Workaround, and more recently this one, where people just waited too long to move AD over from Windows Server 2003 to something more recent: It turns out that weird things can happen when you mix Windows Server 2003 and Windows Server 2012 R2 domain controllers. All situations where not moving ahead out of fear of breaking stuff actually broke the stuff.

In the environments I manage I look at the technology stack and plan the technologies that will be upgraded in the coming 12 months in the context of what needs to happen to support & sustain initiatives. This has the advantage that the delta between versions & technologies can never become too big. It avoids risk because it doesn’t let the delta grow for 10 years and it blocks the introduction of “solutions” that only support old technology stacks. It makes sure you never fall behind too much, pays off existing technology debt in a timely fashion and opens up opportunities & possibilities. That’s why our AD is running Windows Server 2012 R2 and our ADFS was moved to 3.0 already. Just because a lot of things have become commodities doesn’t mean you should hand them over to the janitor in break/fix mode. Oh, the simplicity by which some wander this earth …

OODA

Observe, Orient, Decide, Act. Right now in 2014 we’ve given management and every product/application owner their marching orders: move away from any Windows 2008 / R2 server that is still in production. Why? They demand a modern, capable infrastructure that can deliver what’s needed to grasp the opportunities that exist with current technology. In return they cannot allow apps to block this. It’s as easy and simple as that. And we’ll stick to the 80/20 rule to call it successful and up the effort next year for the remainder. Whether it’s an informal group of dedicated IT staff or a full blown ITIL process that delivers that doesn’t matter. It’s about the result, and if I still see Windows 7 or Windows 2008 R2 being rolled out as a standard, I look deeper and often find a slew of Windows 2003 or even Windows 2000 servers, hopefully virtualized by now. But what does this mean? That you’re in a very reactive mode & in a bad place. Courage & plans are what’s needed. Combine this with the skills to deal with the fact that no plan ever works out perfectly. Or as Mike Tyson said “Everybody has a plan until they get punched in the mouth. … Then, like a rat, they stop in fear and freeze.”

Organizations that still run XP and Windows Server 2003 are paralyzed by fear & have frozen even before they got hit. They hide behind whatever process or methodology they can (or the abuse of it) to avoid failure by doing the absolute minimum for the least possible cost. Somehow they define that as success and it has become a mission statement. If you messed up with XP, there’s very little time left to redeem yourself and avoid the same shameful situation with Windows Server 2003. What are you waiting for? Observe, Orient, Decide, Act.

Configuring timestamps in logs on DELL Force10 switches

When you get your Force10 switches up and running and are about to configure them, you might notice that, when looking at the logs, the default timestamp is the time passed since the switch booted. During configuration, looking at the logs can be very handy to see what’s going on as a result of your changes. When you’re purposely testing, it’s not too hard to know which events to look at. When you’re working on stuff or troubleshooting after the fact, things get tedious to match up. So one thing I like to do is set the timestamp to reflect the date and time.

This is done by setting the timestamps for the logs to datetime in configuration mode. By default it uses uptime, which logs the events as time passed since the switch started, in weeks, days and hours.

service timestamps [log | debug] [datetime [localtime] [msec] [show-timezone] | uptime]

I use: service timestamps log datetime localtime msec show-timezone

F10>en
Password:
F10#conf
F10(conf)#service timestamps log datetime localtime msec show-timezone
F10(conf)#exit

Don’t worry if you see a $ sign appear to the left or right of your line like this:

F10(conf)##$ timestamps log datetime localtime msec show-timezone

it’s just that the line is too long and your prompt is scrolling ;-).

This gives me the detailed information I want to see. Opting to display the time zone helps me correlate the events to other events and times on different equipment that might not have the time zone set (you don’t always control this and perhaps it can’t be configured on some devices).

image

As you can see the logging is now very detailed (purple). The logs on this switch were last cleared before I added these timestamps instead of the uptime to the logs. This is evident from the entry for last logging buffer cleared: 3w6d12h (green).

Voila, that’s how you get to see the actual date and time in your logs, which is a bit handier if you need to correlate them to other events.

Defragmenting your CSV Windows 2012 R2 Style with Raxco Perfect Disk 13 SP2

When it comes to defragmenting CSVs it seems we took a step back when it comes to support from 3rd party vendors. While Windows provides a great toolset to defragment a CSV, that capability seemed to have disappeared from 3rd party vendor software, even from the really good Raxco Perfect Disk. They did have support for this with Windows 2008 R2 and I even mentioned that in a blog.

If you need information on how to defragment a CSV in Windows 2012 R2, look no further. There is an absolutely fantastic blog post on the subject, How to Run ChkDsk and Defrag on Cluster Shared Volumes in Windows Server 2012 R2, by Subhasish Bhattacharya, one of the program managers in the Clustering and High Availability product group. He’s a great guy to talk shop with, by the way, if you ever get the opportunity to do so. One bizarre thing is that this must be the only place where PowerShell (the Repair-ClusterSharedVolume cmdlet) is deprecated in favor of chkdsk.
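To see what the in-box toolset looks like, here is a minimal sketch, assuming a CSV mounted at the default C:\ClusterStorage\Volume1 path (a placeholder for your own volume) and run from the cluster node that owns that CSV; the linked post covers the details, including when a CSV has to be put into redirected access first:

# Minimal sketch: analyze fragmentation on a CSV from the owning node.
# C:\ClusterStorage\Volume1 is a placeholder path, adjust to your environment.
$csvPath = 'C:\ClusterStorage\Volume1'
defrag.exe $csvPath /A /V   # /A = analyze only, /V = verbose fragmentation statistics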

3rd party wise, the release of Raxco Perfect Disk 13 SP2 brought back support for defragmenting CSVs.

image

I don’t know why it took them so long but the support is here now. It looks like they struggled to get CSVFS (the way CSVs are done since Windows Server 2012) supported. While they were at it, they threw in support for ReFS by the way. This is the first time I’ve ever seen that. Anyway, it’s here and that’s good, because I have a hard time accepting that any product (whatever it does) supports Hyper-V if it can’t handle CSVs, not if you want to be taken seriously anyway. No CSV support equals the do-not-buy list in my book.

Here’s a screenshot of Perfect Disk defragmenting away. One of the CSV LUNs in my lab is an SSD and the other an HDD.

image

Notice that in Global Settings you can tweak the optimization behavior for the various drive types, including CSVFS, but you can just leave the defaults on, unless you like manual labor or love PowerShell so much that you can’t forgo any opportunity to use it ;-)

image

Perfect Disk cannot detect what kind of disks you have behind the CSV LUN, so you might want to change the optimization method if you’re running SSDs instead of HDDs.

image

I’d love for Raxco to comment on this or point to some guidance.
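In the meantime, if you want to handle this with the in-box tooling instead, defrag.exe already distinguishes between media types. A minimal sketch, with hypothetical mount points (whether the media type behind a CSV LUN is detected correctly depends on your storage, so treat this purely as an illustration):

# /L = retrim, the sensible operation for SSD-backed volumes
defrag.exe C:\ClusterStorage\Volume1 /L /U
# /D = traditional defragmentation for HDD-backed volumes
defrag.exe C:\ClusterStorage\Volume2 /D /U
# /O lets Windows pick the proper optimization for the detected media type, /U prints progress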

What would also be beneficial to a lot of customers is guidance on defragmentation on the different auto-tiering storage arrays. That would make for a fine discussion I think.

Migrate A Windows 2003 RADIUS–IAS Server to Windows Server 2012 R2

Some days you walk into environments where legacy services have been left running for 10 years because:

  1. They do what they need to do
  2. No one dares touch them
  3. They have been forgotten, yet they provide a much used service

Recently I had the honor of migrating an IAS that was still running on Windows Server 2003 R2 x86, which was still there for reason 1. Fair enough, but with W2K3 on its way out it’s high time to replace it. The good news was it had already been virtualized (P2V) and was running on Hyper-V.

Since Windows 2008 the RADIUS service is provided by the Network Policy Server (NPS) role. Note that in this environment SQL is not used for logging.

Now, in W2K3 there is no export/import functionality for the IAS configuration. So are we stuck? Well no, a tool has been provided!

Install a brand new virtual machine with W2K12R2 and update it. Navigate to the C:\Windows\SysWOW64 folder and grab a copy of IasMigReader.exe.

image
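While you’re working on that new server, the NPS role itself also needs to be installed before you can import anything later on. A minimal sketch, assuming the standard Windows Server 2012 R2 feature name (verify it on your box with Get-WindowsFeature NPAS*):

# Install the Network Policy Server role service plus the management tools
Install-WindowsFeature NPAS-Policy-Server -IncludeManagementTools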

Place IasMigReader.exe in the C:\Windows\System32 path on the source W2K3 IAS server. That folder is in the %path% environment variable, so the tool will be available from the command prompt anywhere.
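A minimal sketch of that copy, using OLD-IAS01 as a purely hypothetical name for the source W2K3 server and assuming its administrative C$ share is reachable from the new machine:

# Push the migration tool from the new W2K12R2 server to the old W2K3 IAS server
Copy-Item 'C:\Windows\SysWOW64\IasMigReader.exe' '\\OLD-IAS01\C$\Windows\System32\'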

  • Open an elevated command prompt
  • Run IasMigReader.exe

image

  • Copy the resulting ias.txt file from the C:\Windows\System32\IAS folder. Please keep this file secure, as it contains passwords. TIP: As a side effect you can migrate your RADIUS setup even if no one remembers the shared secrets, and now you have them again ;-)

image

Note: The good news is that in W2K12 (R2) the problem with IasMigReader.exe generating a bad parameter in ias.txt has been fixed (The EAP method is configured incorrectly during the migration process from a 32-bit or 64-bit version of Windows Server 2003 to Windows Server 2008 R2). So there’s no need to mess around in there.

  • Copy the ias.txt file to a folder on your target NPS server & run the following command from an elevated prompt:

netsh nps import <path>\ias.txt

image

  • Open the NPS MMC and check that everything went well; normally all your settings will be there (see the PowerShell sketch below for a quick alternative check).

image
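For that quick alternative check from PowerShell, a minimal sketch using the in-box NPS module on Windows Server 2012 R2 (C:\Temp is just a placeholder path):

# List the RADIUS clients that came over with the import
Get-NpsRadiusClient
# Once you’re happy with the result, export a backup of the full NPS configuration
Export-NpsConfiguration -Path C:\Temp\nps-after-import.xml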

When Network Policy Server (NPS) is a member of an Active Directory® Domain Services (AD DS) domain, NPS performs authentication by comparing user credentials that it receives from network access servers with the credentials that are stored for the user account in AD DS. In addition, NPS authorizes connection requests by using network policy and by checking user account dial-in properties in AD DS.

For NPS to have permission to access user account credentials and dial-in properties in AD DS, the server running NPS must be registered in AD DS.

Membership in Domain Admins, or equivalent, is the minimum required to complete this procedure.
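Registering can be done from the NPS console (right-click NPS (Local) and select Register server in Active Directory) or from the command line; the latter effectively adds the NPS computer account to the RAS and IAS Servers group in the domain:

# Register this NPS server in its default domain (run from an elevated prompt on the NPS server)
netsh ras add registeredserver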

  • All that’s left to do now is pointing the WAPs (or switches & other RADIUS clients) to the new RADIUS servers. On decent WAPs this is easy, as either one of them acts as a controller or you have a dedicated controller device in place.
  • TIP: Most decent WAPs & switches will allow for 2 RADIUS servers to be configured. So if you want, you can repeat this to create a second NPS server, which gives you redundancy & load balancing very easily. Only in larger environments do multiple NPS proxies pointing to a number of NPS servers make sense. Here’s a DELL PowerConnect W-AP105 (Aruba) example of this.

image