After installing http://support.microsoft.com/kb/2770917 on Windows Server 2012 Hyper-V hosts, the integration services components are upgraded from 6.2.9200.16384 to 6.2.9200.16433. Windows Server 2012 guests get that same upgrade and as such also run the newer integration services components. Guests with older OS versions needed a different approach, so I turned to all the great PowerShell support now available for Hyper-V to automate this. Pretty pleased with the results of our adventures in PowerShell scripting, I let the script loose on a Hyper-V cluster dedicated to test & development. That cluster hosts some virtual machines running Windows 2003 SP2 (x64) and Windows XP SP3 (x86). Guess what: after running my script and verifying the integration services version, I see that those VMs still report version 6.2.9200.16384. No update. Didn’t my new scripting achievement “take” on those older guests?
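For those curious, one way to do such a check from the management host looks roughly like this. It is a minimal sketch (not my actual script), assuming the Hyper-V and FailoverClusters PowerShell modules are available; the cluster name is just a placeholder for my test & development cluster:

# Minimal sketch: list the integration services version of every VM on each node
# of a cluster. "TestDevCluster" is a placeholder name.
Import-Module Hyper-V
Import-Module FailoverClusters
Get-ClusterNode -Cluster "TestDevCluster" | ForEach-Object {
    Get-VM -ComputerName $_.Name |
        Select-Object ComputerName, Name, IntegrationServicesVersion
} | Sort-Object IntegrationServicesVersion | Format-Table -AutoSize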
So I try the install manually and this is what I get:
Why is there no upgrade for these guests? Is it not needed or do I have an issue? So I mount the ISO and dig around in the files to find a clue in the file dates:
It looks like there are indeed no update components in there for Windows XP/ W2K3. So then I look at the following registry key on the host where I normally use the Microsoft-Hyper-V-Guest-Installer-Win6x-Package value to find out what integration services version my hosts are running:
Bingo, there it is indicated that we indeed need version 6.2.9200.16384 for XP/W2K3 and version 6.2.9200.16433 for W2K8(R2)/W2K12 and Vista/Windows 7/Windows 8. Cool, but I had to check whether this was indeed as it should be, and I’m happy to confirm all is well. Ben Armstrong (http://blogs.msdn.com/b/virtual_pc_guy/) confirmed that this is how it should be. There was an update needed for backup that only applied to Windows 8 / Windows Server 2012 guests. As this fix was in a common component for Windows Server 2008 and later, those versions all got the update. But for the older OS versions this was not the case and hence no update is needed, which is reflected in all of the above. In short, this means your XP SP3 & W2K3 SP2 VMs are just fine running the 6.2.9200.16384 version of the integration services and are not in any kind of trouble.
This does leave me with another task. I was planning enhancements to my script like feedback on progress, some logging, and better logic for clustered and non-clustered environments, but now I also have to address this possibility and use the registry keys on the host to verify which IC version I should check against per guest OS version. Checking against just the one related to the host isn’t good enough.
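For that host-side lookup, a quick sketch of what I have in mind. The registry path is how it looks on my Windows Server 2012 hosts, and the Win5x value name for the XP/W2K3 package is an assumption on my part, so verify both before relying on them:

# Sketch: read the integration services versions the host expects per guest OS family.
# The registry path and the Win5x value name should be verified on your own hosts.
$keyPath = 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization\GuestInstaller\Version'
$versions = Get-ItemProperty -Path $keyPath
"IC version for Vista/W2K8 and later guests: $($versions.'Microsoft-Hyper-V-Guest-Installer-Win6x-Package')"
"IC version for XP/W2K3 guests: $($versions.'Microsoft-Hyper-V-Guest-Installer-Win5x-Package')"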
Making a million IOPS possible in a Windows Server 2012 Hyper-V VM
A lot of you will have seen the demos of a Hyper-V guest with VHDX disks running on Windows Server 2012 doing a million IOPS; if you haven’t yet, take a look here. While some quickly dismissed this as “irrelevant boasting” without real-life relevance, I respectfully disagree. This is smart future proofing by Microsoft and provides a hypervisor ready for future hardware capabilities and capable of handling the most demanding workloads today & in the years to come. Sure, such a demo is under lab/ideal conditions and does not reflect the majority of real-life environments, but it’s nice to see what a hypervisor is capable of if and when you might need it. Remember there was a day that 4GB was a lot of RAM and 2TB sounded gigantic. Also remember that some people have larger needs than others. Until Windows Server 2008 R2 you had some limitations in storage IO performance that would not allow for a million IOPS. These had to be addressed or all the efforts with regard to capabilities and performance in storage, CPU, networking and memory would just hit those particular bottlenecks. So it is addressing real needs and indeed also smart future proofing.
Capabilities of virtual machine storage IO throughput in Windows 2008 R2
The capabilities listed below dictate the IO capabilities in virtual machines running on Windows Server 2008 R2 Hyper-V:
Limited to one IO channel per virtual SCSI Controller
A queue depth of 256 per virtual SCSI adapter, shared by all devices attached to that adapter.
There was one fixed vCPU (0) dedicated to handling IO.
The picture above illustrates these limits. You see two virtual SCSI controllers, each with 2 VHD virtual disks attached. Each disk shares the single channel of the controller it is attached to.
These limits could become a bottleneck, but that was never too big of a problem with a maximum of 4 vCPUs in Windows 2008 R2 Hyper-V. If needed for performance, we might have attached VHDs to different virtual SCSI controllers to get the best possible performance in Windows Server 2008 R2 Hyper-V.
With 64 vCPUs and ever more demanding workloads these limitations would become a (serious) issue, so this needed to be addressed. If not, despite all other efforts in regard to the 4 big resources (compute, memory, storage and network) in Windows 2012, this would remain the limiting factor for IOPS inside a virtual machine on Windows 2012.
Windows Server 2012 improvements to virtual machine storage IO scaling
The picture above illustrates the improvements in Windows Server 2012 Hyper-V IO Scaling:
There is now 1 channel per 16 vCPUs, per virtual SCSI device, per controller. That means you have 4 channels per VHDX attached to a virtual SCSI controller when you have 64 vCPUs in the virtual machine. Compared to before, this is a significant improvement and a much needed one given the 64 vCPU capability that now exists.
IO interrupt handling is now distributed amongst all vCPUs and this process is NUMA aware. This is a huge improvement!
There is now a queue depth of 256 per device attached to a specific SCSI adapter. That’s another big improvement.
That, people, is how you get a virtual machine to handle a million IOPS. Nice! The questions or doubts whether Hyper-V can deliver the capacity, throughput & performance have been wiped off the table, yes, also for virtual storage IOPS. You can now go straight to how it will address your business needs. From my experience it does so brilliantly and very cost effectively. Life might not be perfect, but it is very good.
At the end of this day I was doing some basic IO tests on some LUNs on one of the new Compellent SANs. It’s amazing what 10 SSDs can achieve … We can still beat them in certain scenarios but it takes 15 times more disks. But that’s not what this blog is about. This is about goofing off after 20:00 following another long day in another very long week, it’s about kicking the tires of Windows and the SAN now that we can.
For fun I created a 300TB LUN on a DELL Compellent, thin provisioned of course, as I only have 250TB.
I then mounted it to a Windows 2008 R2 test server.
The documented limit of a volume in Windows 2008 R2 is 256TB when you use a 64K allocation unit size. So I tested this limit by trying to format the entire LUN and create a 300TB simple volume. I brought it online, initialized it to a GPT disk, created a simple volume with an allocation unit size of 64K, and, well, that failed with the following error:
There is nothing unexpected about this. It has to do with the maximum NTFS volume size supported on a GPT disk, which depends on the cluster size selected at the time of formatting. NTFS is currently limited to 2^32-1 allocation units. This yields a 256TB volume when using 64K clusters. However, this has only been tested to 16TB, or 17,592,186,040,320 bytes, using a 4K cluster size. You can read up on this in Frequently asked questions about the GUID Partitioning Table disk architecture. The sketch below works out the NTFS limits based on cluster size.
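To make the arithmetic explicit, the limit is simply (2^32 - 1) clusters multiplied by the cluster size; a quick back-of-the-envelope sketch rather than an official table:

# Back-of-the-envelope: maximum NTFS volume size = (2^32 - 1) clusters * cluster size.
$maxClusters = [math]::Pow(2, 32) - 1
foreach ($clusterKB in 4, 8, 16, 32, 64) {
    $maxBytes = $maxClusters * $clusterKB * 1KB
    "{0,2}K cluster size -> ~{1}TB maximum NTFS volume size" -f $clusterKB, [math]::Round($maxBytes / 1TB)
}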
This was the first time I had the opportunity to test these limits. I formatted part of that LUN to a size close to the limit and then formatted the remainder as a second simple volume.
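For what it’s worth, on a Windows Server 2012 box these steps could be scripted with the storage cmdlets (they are not available on the Windows 2008 R2 test server above); a hedged sketch where the disk number, partition size and drive letter are placeholders:

# Hedged sketch using the Windows Server 2012 storage cmdlets; disk number,
# partition size and drive letter are placeholders for illustration only.
Initialize-Disk -Number 2 -PartitionStyle GPT
New-Partition -DiskNumber 2 -Size 250TB -DriveLetter T
Format-Volume -DriveLetter T -FileSystem NTFS -AllocationUnitSize 65536 -Confirm:$false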
I still need to get a Windows Server 2012 test server hooked up to the SAN to see if anything has changed there. One thing is for sure: you could put at least three 64TB VHDX files on a single volume in Windows. Not too shabby. It’s more than enough to get just about any backup software into trouble. Be warned: MSFT tested and guarantees performance & behavior up to 64TB in Windows Server 2012, but beyond that you’d better do your own due diligence.
The next thing I’ll do when I have a Windows Server 2012 host hooked up is create a 64TB VHDX file and see if I can go beyond it before things break. Why? Well, because I can and I want to take the new SAN and Windows 2012 for a ride to see what boundaries we can push. The SANs are just being set up, so now is the time to do some testing.
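Creating that test disk should be a one-liner with the Hyper-V PowerShell module; a sketch, with a placeholder path:

# Sketch: create a dynamically expanding VHDX at the documented 64TB maximum.
# The path is a placeholder.
New-VHD -Path 'T:\Test\Huge64TB.vhdx' -SizeBytes 64TB -Dynamic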
A lot of us who build failover clusters are bound to run into the fact that the node names shown in the Failover Cluster Manager GUI are not always consistent in their format. Sometimes they are lower case, sometimes they are upper case. See the example below of a Windows Server 2008 R2 SP1 cluster.
Many a system administrator has some slight neurotic tendencies, and he or she can’t stand this. I’ve seen people do crazy things trying to fix this, up to renaming a node in the registry. Do NOT do that. You’ll break that host. People check whether the computer object in AD is lower or upper case, whether the host name is lower or upper case, check how the nodes are registered in DNS, etc. They try to keep them all in sync, sometimes at high cost. But in the end you can never be sure that all nodes will have the same case using the GUI.
So what can you do?
Use cluster.exe to add the node to the cluster. That enforces the case you type in the name! For example, when you’d like upper case node names, something along the lines shown below does the trick.
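A hedged example; the cluster and node names are placeholders, and double-check the exact cluster.exe syntax on your version:

# Placeholders for cluster and node names; the node name case you type here is what gets used.
cluster.exe /cluster:DEMOCLUSTER /add /node:NODEUPPERCASE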
Some claim that when you add all nodes at the same time, they will all be the same. But I’m not too sure this will always work.
Windows 2012
In Windows 2012, PowerShell replaces cluster.exe (it is still there for backward compatibility, but for how long?) and the cmdlets don’t seem to enforce the case of the node names. For more info on failover clustering PowerShell, look at Failover Clusters Cmdlets in Windows PowerShell; it’s a good starting point.
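For completeness, the PowerShell counterpart of the cluster.exe example above would be something like this (names are again placeholders); in my experience the case you type here is not guaranteed to be reflected in the GUI:

# Placeholders for cluster and node names; the cmdlet does not seem to enforce the case.
Add-ClusterNode -Cluster DEMOCLUSTER -Name NODEUPPERCASE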
Don’t despair, my fellow IT pros. Learn to accept that failover clustering is case insensitive and you’ll never run into any issue. Let it go … well, unless you get a GUI bug like we had with Exchange 2010 SP1, or any other kind of bug that has issues with the case of the nodes.
If you want to use cluster.exe (or MSClus for that matter) you’ll need to add it via the Add Roles and Features Wizard / Remote Server Administration Tools / Feature Administration Tools / Failover Clustering Tools. Note that these are not present by default.
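You can also add them from PowerShell; a sketch assuming the feature names are what I believe them to be on Windows Server 2012 (verify with Get-WindowsFeature RSAT-Clustering* first):

# Assumed feature names: RSAT-Clustering-CmdInterface (cluster.exe) and
# RSAT-Clustering-AutomationServer (MSClus). Verify them with Get-WindowsFeature before use.
Install-WindowsFeature RSAT-Clustering-CmdInterface, RSAT-Clustering-AutomationServer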
On an upgraded node I needed to uninstall failover clustering and reinstall it to get it to work, so even in that scenario the tools are gone and I needed to add them again.
MSClus and Cluster.EXE support Windows Server 2012, Windows 2008 R2 and Windows 2008 clusters. The Windows Server 2012 PowerShell module for clustering supports Windows Server 2012 and Windows Server 2008 R2, not Windows Server 2008.