Setting Dates on Folders With PowerShell

A friend of mine who runs a Business Intelligence company asked me for a favor. They have a lot of data (files & folders) that has to be copied around in the lab, at clients, etc. This often leaves the modified date on the folders not reflecting the most recent modification anywhere in that folder’s substructure, which causes a lot of confusion in their processes, communication and testing.

They needed a script to correct that. Mind you, they wanted a script, not an application (no installations, editable code). The good news was that they had a Windows machine (XP or higher) to run the code on, and file sharing on Linux used Samba, so we could use PowerShell. VBScript/JScript can change dates on files using the Shell.Application object, but NOT on folders. They also can’t directly call Windows APIs. First of all that’s “unmanaged code to the extreme”, and using a COM DLL to get access to the Windows API violates the conditions set out from the start. But luckily PowerShell came to the rescue!

To accomplish the request we sort of needed to walk the tree backwards, from all its branches back to the root. I’m no math guru, so writing that sort of reverse-recursive algorithm wasn’t really an option. I decided to use plain good old recursion and count the depth of the folder structure, so I’d know how many times I needed to recursively parse through it to get the correct modified date to “walk up” the folder structure. Here’s a snippet as a demo:

[sourcecode language="powershell"]

# Demo snippet

$root = "E:\TestRoot\TestDataStructure" # The folder structure to parse
$DeepestLevel = 0 # A counter to persist the deepest level found so far
$LevelCheck = 0
$Return = $null

# Loop through the folder structure recursively to determine the deepest level.
foreach ($folder in Get-ChildItem $root -Recurse | Where-Object {$_.PsIsContainer})
{
  $search = $folder.FullName
  Write-Host "Folder: $search"
  # Sort the returned objects by modified date and select the most recent (last) one
  $Return = Get-ChildItem $search | Sort-Object LastWriteTime | Select-Object -Last 1
  Write-Host "Child file/subfolder most recently modified: $Return"
  # Check how deep the current level is (count the path separators)
  $LevelCheck = $Return.FullName.Split("\").Count - 1
  # Compare with the deepest level found so far and set to the new value if needed.
  if ($LevelCheck -gt $DeepestLevel) {$DeepestLevel = $LevelCheck}
  Write-Host "LevelCheck: $LevelCheck"
  Write-Host "DeepestLevel: $DeepestLevel"
}

# Now actually walk the folder structure recursively x times, where x = $DeepestLevel
do {
  foreach ($folder in Get-ChildItem $root -Recurse | Where-Object {$_.PsIsContainer})
  {
    $search = $folder.FullName
    # Sort the returned objects by modified date and select the most recent (last) one
    $Return = Get-ChildItem $search | Sort-Object LastWriteTime | Select-Object -Last 1
    Write-Host "Child file or folder most recently modified: " $Return.FullName
    # Set the modified date on the parent folder to that of the most recently modified child object
    if ($Return -ne $null) {$folder.LastWriteTime = $Return.LastWriteTime}
    Write-Host "Parent folder " $search " last modified date set to " $Return.LastWriteTime
  }
  $DeepestLevel-- # Counter -1
}
until ($DeepestLevel -eq 0)

[/sourcecode]

Going through the folder structure too often is OK; going through it too few times is bad, as it doesn’t accomplish the goal. So the logical bug in the code, which loops once too many due to the “\” at the start of a UNC path, isn’t an issue. Not really elegant, but very effective. The speed is also acceptable: it ran through 30,000 files, 20 GB in all, in about a minute. Quick & dirty does the trick sometimes.
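To illustrate where that extra pass comes from: the snippet counts depth as the number of “\” separators in a full path, and a UNC path’s leading “\\” produces two empty segments when split, inflating the count by one. A minimal sketch (the paths are made-up examples):

```
# Depth counted as the number of "\" separators in the path
"E:\TestRoot\TestData\SubFolder".Split("\").Count - 1    # 3

# A UNC path starts with "\\", which yields two empty leading
# segments when split, so the count comes out one higher and
# the do/until loop makes one extra (harmless) pass.
"\\Server\Share\TestData\SubFolder".Split("\").Count - 1 # 5
```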

The code will work with PowerShell 1.0/2.0 against both a local path and a UNC path, as long as you have the correct permissions.

This is just a code snippet, not the production code with error handling, so please test it in a lab & understand what it does before letting it rip through your folder structures.

Cheers

SCVMM 2008 R2 Phantom VM guests after Blue Screen

UPDATE: Microsoft posted an SQL clean-up script to deal with this issue. Not exactly a fix, and let’s hope it gets integrated into SCVMM vNext 🙂 You can find the script here: http://blogs.technet.com/b/m2/archive/2010/04/16/removing-missing-vms-from-the-vmm-administrator-console.aspx. There are links to this and another related blog post in the newsgroup thread at the bottom of this article as well.

I’ve seen an annoying hiccup in SCVMM 2008 R2 (November 2009) in combination with Hyper-V R2 Live Migration twice now. In both cases a blue screen (due to the “Nehalem” bug, http://support.microsoft.com/kb/975530) was the cause. Basically, when a node in the Hyper-V cluster blue screens, you can end up with some (I’ve never seen all) VMs on that node being in a failed/missing state. The VMs did, however, fail over to another node and are actually running happily. They will even fail back to the original node without an issue. So, as a matter of fact, everything is up and running. Basically you have a running VM and a phantom one: there are multiple entries in different states for the same VM. Refreshing SCVMM doesn’t help, and a repair of the VM doesn’t work.

While it isn’t a show stopper, it is very annoying and confusing to see VM guests in a missing state, especially since the VM is actually up and running. You’re just seeing a phantom entry. However, be careful when deleting the phantom VM, as you’ll throw away the running VM as well: they point to the same files.

Removing the failed/orphaned VM in SCVMM is a no-go when you use shared storage, like for example CSV, as it points to the same files as the running one, and those files are visible to both the node with the good VM and the one with the phantom. Meaning it will ruin your good VM as well.

Snooping around in the SCVMM database tables revealed multiple VMs with the same name but with separate GUIDs. In production it’s really a NO-GO to mess around with those records, not even as a last resort, because we don’t know enough about the database schema and its dependencies. So I found two workarounds that do work (I’ve used them both).

  1. Export the good VM for safekeeping, delete the missing/orphaned VM entry in SCVMM (this takes the good one with it, which is why you exported it first) and import the exported VM again. This means downtime for the VM guest. 
  2. Remove the Hyper-V cluster from SCVMM and re-add it. This has the benefit that it creates no downtime for the good VM, and the bad/orphaned one is gone. 

Searching the net didn’t reveal much info, but I did find these threads that discuss the issue as well: http://social.technet.microsoft.com/Forums/en-US/virtualmachinemanager/thread/1ea739ec-306c-4036-9a5d-ecce22a7ab85 and http://social.technet.microsoft.com/Forums/en/virtualmachinemgrclustering/thread/a3b7a8d0-28dd-406a-8ccb-cf0cd613f666

I’ve also contacted some Hyper-V people about this but it’s a rare and not well-known issue. I’ll post more on this when I find out.

Enterprise Architecture Meets Technical Architect

During an introductory talk with the Enterprise Architecture consultants, I made an accidental connection by using the word “coherency”. As it turns out, it’s used frequently in the new terminology of their profession, as in the “Coherent Enterprise”. So I guess that talk went well; they even laughed politely at my jokes and took plenty of notes. They are also concise, and I like that: a talk lasting over one hour was reduced to its essence in one paragraph of the report. As I told them, the success of their efforts and results will be determined by the execution. Let’s hope they can keep that concise approach for the duration. The last thing the world needs is another 2,000 pages of cellulose no one ever reads.