Learn & Evaluate Windows Server 2012 R2 Preview

If you are anything like me and a lot of people I know, you will be eager to start testing the new features & capabilities of Windows Server 2012 R2, which is now available as a public preview for evaluation. Microsoft has also launched its IT Pro Summer Grand Prix campaign, which might get you something extra on top of the knowledge you will gain.

If you are going to do this, why not surf to http://www.microsoft.com/nl-be/technet/summer-grandprix/#track1 and dive into Track 1 of the IT Pro Summer Grand Prix to download the public preview of Windows Server 2012 R2?

When you do so, feel free to leave your contact information and be eligible to win a rather exclusive Windows Server headset.

Stay tuned, because the next track, which will be all about System Center (June 15th), holds even bigger benefits for being an early evaluator of the R2 wave.

Happy testing, learning and playing! One thing I found is that Windows Server 2012 R2 installations are so fast it feels like driving a race car :-)

Teamed NIC Live Migrations Between Two Hosts In Windows Server 2012 Do Use All Members

Introduction

This blog post, NIC Teaming in Windows Server 2012 Brings Simple, Affordable Traffic Reliability and Load Balancing to your Cloud Workloads, states: “TCP/IP can recover from missing or out-of-order packets. However, out-of-order packets seriously impact the throughput of the connection. Therefore, teaming solutions make every effort to keep all the packets associated with a single TCP stream on a single NIC so as to minimize the possibility of out-of-order packet delivery. So, if your traffic load comprises a single TCP stream (such as a Hyper-V live migration), then having four 1Gb/s NICs in an LACP team will still only deliver 1 Gb/s of bandwidth since all the traffic from that live migration will use one NIC in the team. However, if you do several simultaneous live migrations to multiple destinations, resulting in multiple TCP streams, then the streams will be distributed amongst the teamed NICs.” Other information out there, such as support forum replies, says the same: when you live migrate between two nodes in a cluster only one stream is active and you will never exceed the bandwidth of a single team member. When running some simple tests with a 10Gbps NIC team this seems true. We also know that you can consume nearly all of the aggregated bandwidth of the members in a NIC team for live migration if these conditions are met:

  1. The live migrations must not all be destined for the same remote machine. Live migration will only use one TCP stream between any pair of hosts. Since neither Windows NIC Teaming nor the adjacent switch will spread traffic from a single stream across multiple interfaces, live migration between host A and host B, no matter how many VMs you’re migrating, will only use one NIC’s bandwidth.

2. You must use Address Hash (TCP ports) for the NIC Teaming. Hyper-V Port mode will put all the outbound traffic, in this case, on a single NIC.

When we look at these conditions and compare them to the behavior we expect from the various forms of NIC teaming in Windows Server 2012, this is a bit surprising, as one might expect all members to be involved. So let’s take a look at some of the different NIC teaming setups.
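For reference, here is a minimal PowerShell sketch of how such a team is created with the inbox LBFO cmdlets. The team and adapter names ("LM-Team", "NIC1", "NIC2") are placeholders for my lab setup; substitute your own.

  # Switch dependent (LACP) team using Address Hash (TCP ports) load balancing
  New-NetLbfoTeam -Name "LM-Team" -TeamMembers "NIC1","NIC2" -TeamingMode Lacp -LoadBalancingAlgorithm TransportPorts

  # Verify the teaming mode and traffic distribution algorithm
  Get-NetLbfoTeam -Name "LM-Team" | Format-List Name, TeamingMode, LoadBalancingAlgorithm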

Any form of NIC teaming with Hyper-V Port Mode

This one is easy, as condition 2 above very much holds. In all my testing with any NIC team configuration using the Hyper-V Port traffic distribution algorithm I have not been able to exceed 10Gbps. I have seen no difference between switch dependent static or LACP mode and switch independent (active-active) for this condition. As you can see in the screenshots below, the traffic maxes out at 10Gbps.
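If you want to reproduce this, switching an existing team to the Hyper-V Port algorithm is a one-liner; "LM-Team" is again just a placeholder name.

  # Switch the team to the Hyper-V Port traffic distribution algorithm
  Set-NetLbfoTeam -Name "LM-Team" -LoadBalancingAlgorithm HyperVPort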

[Screenshots: live migration traffic maxing out at 10Gbps]

This is also demonstrated in the following screenshots, taken with Resource Monitor, where you can see that only half of the bandwidth of the team is being used.

[Screenshots: Resource Monitor showing only half of the team’s bandwidth in use]

Exceeding a single NIC team member’s bandwidth when migrating between 2 nodes

The first condition above doesn’t seem to hold up. True, in some quick testing with a low number of virtual machines and not too much memory assigned, you never exceed the bandwidth of one 10Gbps NIC team member, so on the surface it might seem that way.

But during testing on a 2 node cluster with dual port 10Gbps cards I found the following:

Switch Dependent LACP and Static

  1. Take a sufficient number of large memory virtual machines to exceed the capacity of a single 10Gbps pipe for a longer time (that way you’ll see it in the GUI).
  2. Live migrate them all from host A to host B (“Pause” with “Drain Roles” or “select all” + “Move”; a PowerShell equivalent is sketched after this list).
  3. Note that with a 2 node cluster there is no possibility to live migrate to multiple nodes simultaneously. It’s A to B, or B to A, or both at the same time.
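For those who prefer PowerShell over the GUI, here is a minimal sketch of step 2, assuming the Failover Clustering module is available and using placeholder node names.

  # Drain all roles off the current node (the equivalent of "Pause" with "Drain Roles")
  Suspend-ClusterNode -Name "HostA" -Drain

  # Or live migrate the virtual machine roles to the other node
  Get-ClusterGroup | Where-Object { $_.GroupType -eq "VirtualMachine" } | Move-ClusterVirtualMachineRole -MigrationType Live -Node "HostB"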

Basically it didn’t take long to see well over 10Gbps being used. So the information out there seems to be wrong. Yes, we can leverage the aggregated bandwidth when we migrate from host A to host B, as long as we have enough memory assigned to the VMs and we migrate a sufficient number of them. Switch dependent teaming, whether it is static or LACP, does its job as you would expect.

Let’s think about this. The number of VMs you need to live migrate to see more than 10Gbps used is not set in stone. Could it be that there is some intelligence in the live migration algorithm where it decides to set up multiple streams when a certain number of virtual machines with sufficient memory are migrated, weighed against the amount of bandwidth that can be leveraged? Perhaps VMMS.EXE kicks off more streams when needed or beneficial? Further experimenting indicates that this is not the case. All you need is more than 1 VM being live migrated. When looking at this in Task Manager you do need them to be of sufficient memory size and/or migrate enough of them to make it visible. I have also tried playing with the number of allowed simultaneous live migrations (i.e. 4, 6 or 12) to see if this has an effect, but I did not find one.
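For completeness, this is how the number of allowed simultaneous live migrations is changed in PowerShell; the value below is just one of the ones I tried.

  # Allow more simultaneous live migrations on both hosts (the default is 2)
  Set-VMHost -MaximumVirtualMachineMigrations 6

  # Check the current setting
  Get-VMHost | Format-List MaximumVirtualMachineMigrations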

It looks like it is one TCP/IP connection per live migration that is indeed tied to one NIC team member. So when you live migrate VMs between two hosts you see one VM live migration go over one member and another over the other, as static/LACP switch dependent teaming does its job. When you do enough live migrations of large VMs simultaneously you see this in Task Manager, as shown below. In this case, as each VM live migration stream sticks to a NIC team member, you do not need to worry about out-of-order packets impacting performance.

[Screenshot: Task Manager showing both NIC team members in use during simultaneous live migrations]

But to make sure, and to avoid falling victim to the limits of the Task Manager GUI while testing this behavior, we also used Performance Monitor to see what’s going on. This confirms that we are indeed using both 10Gbps NIC team members, on both the target and the source host. This is even the case with a live migration of just 2 virtual machines. As long as it’s more than one VM and enough memory is assigned to make the live migration last long enough, you can see it in Task Manager; otherwise you might miss it. Performance Monitor, however, does not.
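If you’d rather log this than stare at graphs, the same counters can also be sampled from PowerShell, for example:

  # Sample send and receive throughput for all network interfaces, 30 samples at 2 second intervals
  Get-Counter -Counter "\Network Interface(*)\Bytes Sent/sec",
              "\Network Interface(*)\Bytes Received/sec" -SampleInterval 2 -MaxSamples 30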

[Performance Monitor screenshots: both 10Gbps NIC team members in use on the source and target host]

This is interesting and frankly a bit unexpected, as the documentation on this subject does not reflect it. However, it IS in agreement with the documented NIC teaming behavior for traffic other than live migration. We took a closer look and can reproduce this over and over again. We tested both switch dependent static and LACP modes and found the behavior to be the same.

Switch Independent with Address Hash

Let’s test live migration over switch independent teaming with Address Hash. Here we see that the source server sends on both members of the NIC team but that the target server receives on only one. This is normal behavior for switch independent teaming. From the documentation, however, we would expect one member on the source server to send and one member on the target server to receive. Not so.

Basically with Windows Server 2012 this doesn’t give you any benefit for throughput. You are limited to the bandwidth of one member, i.e. 10Gbps.
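For completeness, a switch independent team with Address Hash (TCP ports) is created like this; the team and adapter names are placeholders once more.

  # Switch independent team using Address Hash (TCP ports) load balancing
  New-NetLbfoTeam -Name "LM-Team-SI" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts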

[Performance Monitor screenshots: Bytes Sent/sec on the source host and Total Bytes received on the target host]

Red is Total Bytes received on the target host. It’s clear only one member is being used. Green is Bytes Sent/sec on the source server. As you can see, both team members are involved. In a switch independent scenario the receiving side limits the throughput. This is in agreement with the documented behavior of switch independent NIC teaming with Address Hash.

Helpful documentation on this is Windows Server 2012 NIC Teaming (LBFO) Deployment and Management (A Guide to Windows Server 2012 NIC Teaming for the novice and the expert).

Hope this helps sort out some of the confusion.

Using RAMDisk To Test Windows Server 2012 Network Performance

I’m testing & playing different Windows Server 2012 & Hyper-V networking scenarios with 10Gbps, Multichannel, RDAM, Converged networking etc. Partially this is to find out what works best for us in regards to speed, reliability, complexity, supportability and cost.

Basically there are four basic resources in IT around which the eternal struggle for the perfect balance takes place. These are:

  • CPU
  • Memory
  • Networking
  • Storage

We need the correct balance in capabilities, capacity and speed for these in a well designed system. For many years now, but especially over the last 2 years, it is very safe to say that, while the sky is the limit, it has become ever easier and cheaper to get what we need when it comes to CPU and memory. These have become very powerful, fast and affordable relative to the entire cost of a solution.

Networking in the 10Gbps era is also showing its potential in quantity (bandwidth), speed (latency) and cost (well, it’s getting there) without grinding the CPU or memory down, thanks to a bunch of modern offload technologies. And basically in this case it’s these qualities we want to put to the test.

The most troublesome resource has been storage, and it has been for quite a while. While SSDs do wonders for many applications, the balance between speed, capacity & cost isn’t as sweet as for our other resources.

In some environments where I’m active there is a need for both capacity and IOPS, and as such they are in luck: next to caching, a lot of spindles still equates to more IOPS. To test the boundaries of one resource you need to make sure none of the others hit theirs. That’s not easy, as for performance testing you can’t always have a truckload of spindles on a modern high speed SAN available.

RAMDisk to ease the IOPS bottleneck

To see how well the 10Gbps cards behave with and without teaming, Multichannel and RDMA, and what these configurations are capable of, I wanted to take as much of the disk IOPS bottleneck out of the equation as possible. Apart from buying a Violin system capable of doing 1+ million IOPS, which isn’t going to happen for some lab work, you can perhaps get the best possible IOPS by combining some local SSDs and a RAMDisk. A RAMDisk is spare memory used as a virtual disk. It’s very fast and cost effective per IOPS. But capacity wise it’s not the world’s best, let alone most cost effective, solution.

I’m using free RAMDisk software provided by StarWind. I chose it as it allows for large RAMDisks. I’m using 54GB ones right now to speed test copying fixed size VHDX files. It installs flawlessly on Windows Server 2012 and hasn’t caused me any issues. Throw in some SSDs on the servers where you need persistence and you’re in business for some very nice lab work.

You also need to be aware that it doesn’t persist data when you reboot the system or lose power. This is not an issue if all we are doing is speed testing, as we don’t care. Otherwise you’ll need to find a workaround and realize that those “flush the data to persistent storage” options aren’t foolproof or super fast; the SSDs do help here.

You have to register, but the good news is that they don’t spam you to death at all, which I find cool. As said, the tool is free, works with Windows Server 2012 and allows for larger RAMDisks where other free ones are often way too limited in size.

It has allowed me to do some really nice testing. Perhaps you want to check this out as well. WARNING: the picture below is from a lab setup … I’m not a magician, and this is not the kind of IOPS I get all over the datacenters from 4 cheapo SATA disks I touched with my special magic pixie dust.

[Screenshot: lab IOPS test results]

With #WinServ 2012 storage costs/performance/capacity are the only thing limiting you  http://twitter.yfrog.com/mnuo9fp #SMB3.0 #Multichannel

Some quick tests with a 52GB NTFS RAMDisk formatted with a 64K NTFS Allocation unit size.

[Screenshots: quick test results with the 52GB RAMDisk]

I also tested with another free tool, SoftPerfect RAM Disk FREE. It performs well, but I don’t get to see the RAMDisk in the Windows Disk Management GUI, at least not on Windows Server 2012. I have not tested with W2K8R2.

[Screenshots: results with a 4K vs. a 64K NTFS allocation unit size]
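In case you want to repeat the allocation unit size comparison, formatting a volume with a 64K allocation unit is a one-liner; the drive letter R: is simply what my RAMDisk got assigned.

  # Format the RAMDisk volume with NTFS and a 64K (65536 byte) allocation unit size
  Format-Volume -DriveLetter R -FileSystem NTFS -AllocationUnitSize 65536 -NewFileSystemLabel "RAMDisk"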

How To Deploy Windows Server 2012 on DELL UEFI Now–Notes From The field

The current UEFI OS deployment on an R810 is a bit finicky when you want to deploy Windows Server 2012 using the normal procedure and selecting “Other OS”, as the entry for Windows Server 2012 is obviously not in there yet. The problem is that the Windows installer doesn’t create the best practice UEFI partitions. It just seems to create a 320MB System Reserved partition, and the rest is used for your OS installation as a primary partition. In a good (by the book UEFI) install you’d see a layout like this (from Sample: Configure UEFI/GPT-Based Hard Drive Partitions by Using Windows Setup):

[Screenshots: the recommended UEFI/GPT partition layout]
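For reference, the same by-the-book layout expressed as a diskpart sketch (an EFI System Partition, an MSR partition and the OS partition; the sizes are just common example values). This is only for illustration: the workaround below lets Windows Setup create these partitions itself.

  rem Recommended UEFI/GPT layout on the target disk (disk 0 here)
  select disk 0
  clean
  convert gpt
  rem EFI System Partition
  create partition efi size=100
  format quick fs=fat32 label="System"
  rem Microsoft Reserved (MSR) partition
  create partition msr size=128
  rem Windows (OS) partition, takes the remaining space
  create partition primary
  format quick fs=ntfs label="Windows"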

The reason for this seems to be that the firmware is not yet 100% up to date for how Windows Server 2012 deals with UEFI installations. This I learned via my very helpful Twitter friend Florian Klaffenbach.

While an update for the system firmware is in the works and won’t be too long away, let me share how I dealt with this issue. It’s a bit more work but it gets the job done, at least for me on an R810 with BIOS version 2.7.4.

I’m copying the step-by-step instructions from the Microsoft Windows Server 2012 Early Adopter Guide – Dell here and adapting them to how I worked around the issue. It’s “magic” ;-)

Installing Using Dell Unified Server Configurator

  1. Connect the keyboard, monitor, mouse, and any additional peripherals to your system
  2. Turn on the system and the attached peripherals.
  3. Press <F10> in the POST to start the System Services. The Initializing UEFI. Please wait… and the Entering System Services…Starting Unified Server Configurator messages are displayed.
  4. In the Unified Server Configurator window, if you want to configure hardware, diagnostics, or change settings, click the appropriate option. If no changes are required, press OS Deployment. => You can opt to start with a cleanly built VDisk, which is best and should suffice. But it doesn’t. We’ll clean the disk anyway later on in step 14.
  5. In the Operating System Deployment window, click Deploy OS. The Configure or Skip RAID window is displayed. If Redundant Array of Independent Disks (RAID) is configured, the window displays the existing RAID configuration details.
  6. Select Go directly to OS Deployment. If RAID is not yet configured, configure it at this time.
  7. Click Next. The Select Operating System window is displayed with a list of compatible operating systems.
  8. Choose Microsoft Windows Server 2012 and click Next. NOTE: If Microsoft Windows Server 2012 is not listed, choose any other operating system.
  9. Choose whether you want to deploy the operating system in UEFI or BIOS mode, and click Next => I do not get this choice if UEFI is already on in the BIOS settings
  10. In the Insert OS Media window, insert the Windows Server 2012 media and click Next.
  11. In the Reboot the System screen, follow the instructions on the screen and click Finish. If a Windows operating system is already installed on your system, the following message is displayed: Press any key to boot from the CD/DVD …Press any key to begin the installation. If you used a clean VDisk this is no issue
  12. In the Windows Setup screen, select the appropriate option for Language, Time and Currency Format, and Keyboard or Input Method.
  13. Click Next to continue.
  14. STOP => Select to REPAIR your system and launch a command line. From there you start diskpart and run the following commands on the disk where you want to deploy Windows Server 2012:
    • select disk 0
    • clean
    • convert gpt

      In my case this is Disk 0. This is what the installer should be able to do automatically with a clean disk anyway, but it doesn’t happen.

      Now DO NOT navigate to the X: root and launch setup again. Just exit the repair console and shut down the server.

  15. Start the server
  16. Press <F10> in the POST to start the System Services. The Initializing UEFI. Please wait… and the Entering System Services…Starting Unified Server Configurator messages are displayed. => DO NOT TOUCH ANYTHING ANYMORE. It will take longer than expected but you will boot into the installation of Windows 2012 again.
  17. In the Windows Setup screen, select the appropriate option for Language, Time and Currency Format, and Keyboard or Input Method.
  18. Click Next to continue.
  19. On the next page, click Install Now.
  20. In the Operating System Install screen, select the operating system you want to install. Click Next. The License Terms window is displayed, click Next.
  21. In the Which Type of Installation Do You Want screen, click Custom: Install Windows only (advanced), if it is not selected already.
  22. In the Where do you want to install Windows screen, specify the partition on which you want to install the operating system. To create a partition and begin installation:
    1. Click New
    2. Specify the size of the partition in MB, and click Apply. A “Windows might create additional partitions for system files” message is displayed. => NOW the UEFI partitions on the GPT disk are created :-D
    3. Click OK. Select the newly created operating system partition and click Next.
      The Installing Windows screen is displayed and the installation process begins. After the operating system is installed the system reboots. You must set the administrator password before you can log in for the first time
  23. In the Settings screen, enter the password, confirm the password, and click Finish.
    The operating system installation is complete.

Now, while this worked for me on the Dell R810 with BIOS 2.7.4, I give no guarantees whatsoever. You’ll have to test it yourself or wait for the firmware update that is coming soon. Anyway, perhaps it helps some of you out there!