DELL PE1850 Domain Controller Upgrade to Windows 2008 R2

UPDATE May 20th 2010: Dell changed the location of the PDFs. I updated the links to them in this blog post, as I see lots of people clicking on them. Hope this helps.

Just in case someone needs to do something similar, I'm posting some of the issues I needed to resolve when I did an Active Directory upgrade for a partner's IT department (September/October 2009). There was a need for Windows 2008 R2 DNS (for DirectAccess) and the Active Directory Recycle Bin, as well as the desire to have as many servers as possible running the same OS to keep management easy. This is not an extensive manual of any tool used, but it will give you some pointers.

The hardware in use was Dell: seven PowerEdge 1850 servers spread around the country in corporate HQ and large branch offices. Those servers were 3 years old at the time and had 2 years of support remaining on the contract. A hardware replacement or virtualization was not an option, so we needed to find out if the upgrade was possible. The good news was we could get our hands on a spare PE1850 for testing and, if need be, a "swing type" migration. But to reduce the work an in-place upgrade was preferred. The original installation of these domain controllers was x64; otherwise it would have been a no go, as W2K8R2 is x64 only. It was quite a smart and forward-looking chap who did the original project. OK, I admit it was me, so this is blatant self-promotion. The anti-virus had W2K8R2 support, we got an updated agent for the UPS from the vendor, etc. It all looked pretty good.

I checked the following support documents on the Dell website:

Microsoft® Windows Server® 2008 R2 for Dell™ PowerEdge™ Systems

Important Information Guide

https://support.dell.com/support/edocs/software/win2008/WS08_R2/en/IIG/WS08R2IG.pdf

Microsoft® Windows Server® 2008 R2 for Dell™ PowerEdge™ Systems

Installing Microsoft Windows Server 2008 R2

http://support.dell.com/support/edocs/software/win2008/WS08_R2/en/ING/WS08R2In.pdf

There I found that all driver and firmware updates needed to support W2K8R2 on a PE1850 were available except for one: the driver for the RAID controller, a PERC 4e/Si. That was a potential show stopper, but a driver was coming. So I kept a close eye on the Dell FTP site, and around 3 September it showed up. Using the SUU 6.1 DVD or manually downloaded installation packages I upgraded the firmware of the servers (BIOS, DRAC).

I also ran the Upgrade Advisor and found that we needed to remove the Dell OpenManage Server Assistant version installed, as the aging Dell OpenManage Diagnostic Service used an unsigned driver (C:\Program Files (x86)\Dell\SysMgt\oldiags\packages\PORTACCESSOR64.sys). All potentially problematic software and unneeded tools and drivers (video, anti-virus, UPS, …) were removed as well, as this makes any upgrade process less risky.

No Native RAID Driver

As the Dell PERC 4e/Si is not natively supported by Windows 2008 R2, we needed to use the Dell driver (R227150.exe) from the FTP site. You could put the drivers in a $WinPEDriver$ folder on the root of a volume that Windows can find during the upgrade (hard disk, USB thumb drive, …). Now, to make absolutely sure we didn't have any issues with the RAID controller, we decided to inject the driver into the WIM files and build a custom ISO. That might be redundant, but we wanted to have an ISO with all needed drivers for disaster recovery purposes anyway. The drivers need to be injected into the boot.wim and the install.wim files using DISM (Deployment Image Servicing and Management, from the WAIK for Windows 7 and W2K8R2); see The Windows® Automated Installation Kit (AIK) for Windows® 7 @ http://www.microsoft.com/downloads/details.aspx?familyID=f1bae135-4190-4d7c-b193-19123141edaa&displaylang=en.
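As an aside, if you go the $WinPEDriver$ route instead, staging the driver is as simple as this PowerShell sketch (the drive letters and the PERC4 subfolder name are just examples):

# Sketch: stage the PERC driver in a $WinPEDriver$ folder on the root of a
# volume Setup can see during the upgrade (here a USB stick on E:).
# The folder name is literal, dollar signs included; single quotes keep
# PowerShell from expanding them as variables.
New-Item -ItemType Directory -Path 'E:\$WinPEDriver$\PERC4' -Force | Out-Null
Copy-Item 'D:\DRIVERS\DELL\R227150\*' 'E:\$WinPEDriver$\PERC4' -Recurse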

We used the x64 versions of the tools, as our WIM files are x64.

The following commands inject the driver into the boot.wim file's two indexes:

DISM /MOUNT-WIM /WIMFILE:D:\InjectDriver\boot.wim /INDEX:1 /MOUNTDIR:D:\MOUNTHERE

DISM /IMAGE:D:\MOUNTHERE /ADD-DRIVER /DRIVER:D:\DRIVERS\DELL\R227150

DISM /UNMOUNT-WIM /MOUNTDIR:D:\MOUNTHERE /COMMIT

DISM /MOUNT-WIM /WIMFILE:D:\InjectDriver\boot.wim /INDEX:2 /MOUNTDIR:D:\MOUNTHERE

DISM /IMAGE:D:\MOUNTHERE /ADD-DRIVER /DRIVER:D:\DRIVERS\DELL\R227150

DISM /UNMOUNT-WIM /MOUNTDIR:D:\MOUNTHERE /COMMIT

Index 1 is the Microsoft Windows Preinstallation Environment (WinPE) and index 2 is the actual Windows Setup that you run when booted into WinPE. DISM has a command to find out more about the image files: DISM.exe /Get-WimInfo. The documentation in the WAIK is quite good. Read it!
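For example, to see which indexes the WIM files hold before mounting anything (using the same paths as in this post):

DISM /Get-WimInfo /WimFile:D:\InjectDriver\boot.wim
DISM /Get-WimInfo /WimFile:D:\InjectDriver\install.wim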

The following commands inject the driver into the install.wim file. You need to do that for every index you want or need (Web, Standard, Enterprise, Core and full install, …). Just paste everything you need into a cmd file and you're good to go.

DISM /MOUNT-WIM /WIMFILE:D:\InjectDriver\install.wim /INDEX:1 /MOUNTDIR:D:\MOUNTHERE

DISM /IMAGE:D:\MOUNTHERE /ADD-DRIVER /DRIVER:D:\DRIVERS\DELL\R227150

DISM /UNMOUNT-WIM /MOUNTDIR:D:\MOUNTHERE /COMMIT
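If you need to hit several indexes, a small PowerShell loop saves some pasting. A minimal sketch, assuming install.wim holds four indexes (verify with /Get-WimInfo first); the paths are the ones used above:

# Sketch: inject the PERC driver into every index of install.wim.
$wim = "D:\InjectDriver\install.wim"
$mount = "D:\MOUNTHERE"
$driver = "D:\DRIVERS\DELL\R227150"
1..4 | ForEach-Object {
    dism /Mount-Wim "/WimFile:$wim" "/Index:$_" "/MountDir:$mount"
    dism "/Image:$mount" /Add-Driver "/Driver:$driver"
    dism /Unmount-Wim "/MountDir:$mount" /Commit
}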

Video Driver Injection Hiccup

As we wanted the sys admins to have a decent screen resolution, we also embedded the video drivers. The screen resolution with the native driver wasn't very good, so we looked around for one that would work. We found the Radeon 7000M driver (ATI_Radeon-7000M_A00_R177829.exe) on the Dell website and injected it into the boot.wim and install.wim image files using DISM as well. That way we didn't need to update the video drivers after installation. Cool. Most video drivers are packed twice, though. The trick is that you still need to expand the drivers after you have extracted them from the installer using WinZip, 7-Zip, WinRAR or whatever it is you use or prefer. Otherwise you'll get an error like this after DISM has found the .inf:

Searching for driver packages to install…
Found 2 driver package(s) to install.
Installing 1 of 2 – D:\DRIVERS\DELL\R177829\Driver\XP6A_INF\CA_58688.inf: Error – An error occurred. The driver package could not be installed. For more information, check for log files in the <windir>\inf folder of the target image.
Installing 2 of 2 – D:\DRIVERS\DELL\R177829\Driver\XP_INF\CX_58688.inf: Error – An error occurred. The driver package could not be installed. For more information, check for log files in the <windir>\inf folder of the target image.

Error 30

The command completed with errors. For more information, refer to the log file.

The DISM log file can be found at C:\Windows\Logs\DISM\dism.log

If you look in dism.log you'll find an error code like 0x8007001E as the cause of the error, which is rather cryptic. Anyway, you can prevent this by expanding the files:

Expand D:\DRIVERS\DELL\R177829\Driver\XP6A_INF\B_58469\*.* D:\Drivers\Expanded

Copy the expanded files into a copy of the original folder structure, replacing the original files. Make sure you repeat this exercise for any subfolders as well, or you'll expand only a portion of the files. When you've done that you can add the drivers to the WIM files. Beware that adding those large video drivers can take rather long.
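Doing that by hand for every subfolder gets tedious, so here is a rough PowerShell sketch of the same idea. The paths are examples; expand.exe will complain about files that are not actually compressed, which you can ignore:

# Sketch: mirror the extracted driver package into a second tree, running
# expand.exe against the contents of every folder (including the root).
$source = "D:\DRIVERS\DELL\R177829"
$target = "D:\Drivers\Expanded"
$folders = @(Get-Item $source) + @(Get-ChildItem $source -Recurse | Where-Object { $_.PsIsContainer })
foreach ($folder in $folders) {
    $dest = $target + $folder.FullName.Substring($source.Length)
    New-Item -ItemType Directory -Path $dest -Force | Out-Null
    expand "$($folder.FullName)\*.*" $dest
}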

Now that we have added all the drivers to the WIM files, we write the customized installation to an ISO file (oscdimg.exe, WAIK). We can burn this to a CD to add to the disaster recovery kit, or mount the ISO using the DRAC virtual media.

"C:Program FilesWindows AIKToolsamd64oscdimg.exe" -n -m -bD:W2K8R2WithDellPerc4esiDriverW2K8R2bootetfsboot.com "D:W2K8R2WithDellPerc4esiDriverW2K8R2" "D:W2K8R2WithDellPerc4eDiAndRadeon7000M.iso"

Using the custom install we were able to upgrade the domain controllers fast and without any issues. All that was left to do after the upgrade was check that all was well with the DC. After that we cleaned up the post-upgrade artifacts, installed anti-virus, UPS software and the Dell management tools, and provided for and scheduled the backups.

So all in all we did 7 in-place upgrades, and they have now been running at the native Windows 2008 domain & forest functional level for over 4 months without any issues. Nice job. That's the good thing about that partner: they always have some interesting jobs to do, and helping out is always appreciated.

Setting Dates on Folders With PowerShell

A friend of mine with a Business Intelligence company asked me a favor. They have a lot of data (files & folders) that has to be copied around in the lab, at clients, etc. This often leaves the modified date on the folders not reflecting the most recent modification in that folder's substructure. This causes a lot of confusion in their processes, communication and testing.

They needed a script to correct that. Now, they wanted a script, not an application (no installations, editable code). The good news: they had a Windows machine (XP or higher) to run the code on, and file sharing on Linux was done with Samba, so we could use PowerShell. VBScript/JScript can only change dates on files using the Shell.Application object, NOT on folders. They also can't directly call Windows APIs. First of all that's "unmanaged code to the extreme", and using a COM DLL to get access to the Windows API violates the conditions set out from the start. But luckily PowerShell came to the rescue!
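Setting a folder's modified date is trivial in PowerShell, because .NET exposes it directly. A one-liner as illustration (the path and date are made up):

# PowerShell writes a folder's LastWriteTime straight through .NET,
# something Shell.Application cannot do for folders.
(Get-Item "E:\TestRoot\SomeFolder").LastWriteTime = Get-Date "2009-10-01 12:00"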

To accomplish the request we sort of needed to walk the tree backwards, from all its branches back to the root. I'm no math guru, so writing that sort of reverse recursive algorithm wasn't really an option. I decided to use plain good old recursion and count the depth of the folder structure, so I'd know how many times I needed to recursively parse it for the correct modified date to "walk up" the folder structure. Here's a snippet as a demo:


# Demo snippet

$root = "E:\TestRoot\TestDataStructure" # The folder structure to parse
$DeepestLevel = 0 # A counter to persist the deepest level found so far

# Loop through the folder structure recursively to determine the deepest level.
foreach ($folder in Get-ChildItem $root -Recurse | Where-Object {$_.PsIsContainer})
{
    $search = $folder.FullName
    Write-Host "Folder: $search"
    # Sort the returned objects by modified date and select the most recent (last) one
    $Return = Get-ChildItem $search | Sort-Object LastWriteTime | Select-Object -Last 1
    Write-Host "Child file/subfolder most recently modified: $Return"
    # Check how deep the current level is
    $LevelCheck = $Return.FullName.split("\").Count - 1
    # Compare with the deepest level found so far and set to the new value if needed.
    if ($LevelCheck -gt $DeepestLevel) {$DeepestLevel = $LevelCheck}
    Write-Host "LevelCheck: $LevelCheck"
    Write-Host "DeepestLevel: $DeepestLevel"
}

# Now actually walk the folder structure recursively x times, where x = DeepestLevel
do {
    foreach ($folder in Get-ChildItem $root -Recurse | Where-Object {$_.PsIsContainer})
    {
        $search = $folder.FullName
        # Sort the returned objects by modified date and select the most recent (last) one
        $Return = Get-ChildItem $search | Sort-Object LastWriteTime | Select-Object -Last 1
        Write-Host "Child file or folder most recently modified: " $Return.FullName
        # Set the modified date on the parent folder to that of the most recently modified child object
        if ($Return -ne $null) {$folder.LastWriteTime = $Return.LastWriteTime}
        Write-Host "Parent folder " $search " last modified date set to " $Return.LastWriteTime
    }
    $DeepestLevel-- # Counter -1
} until ($DeepestLevel -eq 0)

Going through the folder structure too often is OK; going through it too few times is bad, as it doesn't accomplish the goal. So the logical bug in the code, which loops once too many due to the "\" in a UNC path, isn't an issue. Not really elegant, but very effective. The speed is also acceptable: it ran through 30,000 files, 20 GB in all, in about a minute. Quick & dirty does the trick sometimes.

The code works with PowerShell 1.0/2.0 against both local and UNC paths, as long as you have the correct permissions.

This is just a code snippet, not the production code with error handling, so please test it in a lab & understand what it does before letting it rip through your folder structures.

Cheers

SCVMM 2008 R2 Phantom VM guests after Blue Screen

UPDATE: Microsoft posted a SQL clean-up script to deal with this issue. Not exactly a fix, and let's hope it gets integrated into SCVMM vNext 🙂 Look at the script here: http://blogs.technet.com/b/m2/archive/2010/04/16/removing-missing-vms-from-the-vmm-administrator-console.aspx. There is a link to this and another related blog post in the newsgroup threads at the bottom of this article as well.

I've now seen an annoying hiccup in SCVMM 2008 R2 (November 2009) in combination with Hyper-V R2 live migration two times. In both cases a blue screen (due to the "Nehalem" bug, http://support.microsoft.com/kb/975530) was the cause. Basically, when a node in the Hyper-V cluster blue screens you can end up with some (I've never seen all) VMs on that node being in a failed/missing state. The VMs, however, did fail over to another node and are actually running happily. They will even fail back to the original node without an issue. So, as a matter of fact, all things are up and running. Basically you have a running VM and a phantom one: there are just multiple entries in different states for the same VM. Refreshing SCVMM doesn't help, and a repair of the VM doesn't work either.

While it isn't a show stopper, it is very annoying and confusing to see a VM guest in a missing state, especially since the VM is actually up and running. You're just seeing a phantom entry. However, be careful when deleting the phantom VM, as you'll throw away the running VM as well: they point to the same files.

Removing the failed/orphaned VM in SCVMM is a no go when you use shared storage such as CSV, as it points to the same files as the running one, and those files are visible to both the good VM and the phantom one. Meaning it will ruin your good VM as well.

Snooping around in the SCVMM database tables revealed multiple VMs with the same name but with separate GUIDs. In production it's a real NO GO to mess around with those records, not even as a last resort, because we don't know enough about the database schema and its dependencies. So I found two workarounds that do work (I used them both):

  1. Export the good VM for safe keeping, delete the missing/orphaned VM entry in SCVMM (this takes the good one with it, which is why you exported it first) and import the exported VM again. This means downtime for the VM guest.
  2. Remove the Hyper-V cluster from VMM and re-add it. This has the benefit that it creates no downtime for the good VM, and the bad/orphaned one is gone.
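Before trying either workaround, it helps to list exactly which entries VMM considers missing, so you don't touch the healthy duplicates. A minimal, read-only sketch using the VMM 2008 R2 PowerShell snap-in; the server name is an example, and the "Missing" status text is an assumption based on what the Administrator Console displays:

# Sketch: enumerate the VM entries VMM reports as missing (read-only).
Add-PSSnapin Microsoft.SystemCenter.VirtualMachineManager
Get-VMMServer -ComputerName "vmmserver.domain.local" | Out-Null
Get-VM | Where-Object { $_.Status -eq "Missing" } | Select-Object Name, ID, Status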

Searching the net didn't reveal much info, but I did find these threads that discuss the issue as well: http://social.technet.microsoft.com/Forums/en-US/virtualmachinemanager/thread/1ea739ec-306c-4036-9a5d-ecce22a7ab85 and http://social.technet.microsoft.com/Forums/en/virtualmachinemgrclustering/thread/a3b7a8d0-28dd-406a-8ccb-cf0cd613f666

I've also contacted some Hyper-V people about this, but it's a rare and not well-known issue. I'll post more on this as I find out more.

Enterprise Architecture Meets Technical Architect

During an introductory talk with the Enterprise Architecture consultants I made an accidental connection by using the word "coherency". As it turns out, it's used frequently in the new terminology of their profession, as in the "Coherent Enterprise". So I guess that talk went well; they even laughed politely at my jokes and took plenty of notes. They are also concise, which I like: a talk lasting over one hour was reduced to its essence in one paragraph of the report. As I told them, the success of their efforts and the results are determined by the execution. Let's hope they can keep that concise approach for the duration. The last thing the world needs is another 2000 pages of cellulose no one ever reads.