July 2016 update rollup for Windows RT 8.1, Windows 8.1, and Windows Server 2012 R2

Microsoft recently released another update rollup (aka cumulative update): the July 2016 update rollup for Windows RT 8.1, Windows 8.1, and Windows Server 2012 R2.

This rollup includes improvements and fixes, but more importantly it also contains the ‘improvements’ from the June 2016 update rollup KB3161606 and the May 2016 update rollup KB3156418. When it comes to the June rollup KB3161606, the July rollup fixes the bugs that caused concerns with the Hyper-V Integration Components (IC) and even serious downtime for Scale-Out File Server (SOFS) users. My fellow MVP Aidan Finn discusses this in a blog post. Let’s say it caused a wrinkle in the community.

In short, with KB3161606 the Integration Components needed an upgrade (to 6.3.9600.18339), but due to a mix-up with the manifest files this failed. You could leave them in place, but it’s messy. To make matters worse, this cumulative update also messed up SOFS deployments, which could only be dealt with by removing it.

Bring in update rollup KB3172614. This will install on hosts and guests whether they have KB3161606 already installed or not, and it fixes these issues. I have now deployed it on our infrastructure and the ICs updated successfully to 6.3.9600.18398. The issues with SOFS are also resolved with this update. We have not seen any issues so far.
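
If you want to verify this on your own systems, a quick check from an elevated PowerShell prompt on a host will do. This is just a sketch using standard cmdlets; the KB number and IC version are the ones mentioned above.

# Confirm the July 2016 rollup is installed on this host
Get-HotFix -Id KB3172614

# List the Integration Services version each guest reports to the host
Get-VM | Format-Table Name, State, IntegrationServicesVersion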

In short, the June CU KB3161606 should be gone from Windows Update and WSUS. If it was already installed you don’t need to remove it. The July CU KB3172614 will install on those servers (hosts and guests) and this time it does things right.

I hope this leads to better QA in Redmond, as these issues really are causing a lot of people grief at the moment. It also feeds the conspiracy theories that MSFT is sabotaging on-premises deployments to promote Azure usage even more. Let’s not feed the trolls, shall we?

Windows Server 2016 Active Memory Dump

Introduction

In Windows Server 2016 we have a new option when it comes to creating memory dumps when a system failure occurs. The new option to configure a memory dump – “Active Memory Dump” – is not strictly related to failover clustering or Hyper-V, but this is the poster child environment where the setting makes a significant impact when collecting MEMORY.DMP files for troubleshooting.

Hyper-V clusters tend to consist of multiple hosts with large amounts of RAM; 256 GB to 1 TB of RAM is not an exception anymore. There are two reasons for this. In general, virtual machine density increases as servers become ever more capable and affordable. The second reason is that ever more high performance, resource intensive workloads are being virtualized.

The N+X nature of clusters means that even more RAM is provisioned, as we need to allow for the remaining hosts to serve extra virtual machines during scheduled or unscheduled maintenance.

To troubleshoot issues with a Hyper-V host, support engineers often request a complete memory dump. This contains the processor state and the contents of memory at the time of the crash. The size of these memory dumps becomes problematically large on hosts with large amounts of memory: you run out of space to create them (who has 512 GB or more of free space to write that dump to?) and it is problematic and time consuming to copy such files and upload them for analysis.

Active Memory Dump

So how does an active memory dump address these concerns? For troubleshooting issues with the Hyper-V host itself we usually do not need the part of the RAM that is assigned to the virtual machines, and on a large memory Hyper-V host the majority of the RAM goes to virtual machines. An active memory dump filters out that part of the RAM content. By doing so, the memory dump contains the processor state and the memory content related to the parent partition, including the user mode space, which are truly relevant to troubleshooting Blue Screen of Death events. While it’s not the smallest of the possible memory dump options, it is significantly smaller than a complete memory dump.

How do I configure it?

There are two ways to do this: via the GUI or via PowerShell. Both result in exactly the same changes and configuration, but the PowerShell method gives us better insight into how an active memory dump is created.

GUI

On the Advanced tab of System Properties, you select the Settings button under “Startup and Recovery”. That’s where you can set the memory dump option under “Write debugging information”.

This is reflected in two registry settings under the HKLM:\System\CurrentControlSet\Control\CrashControl key:

The REG_DWORD value CrashDumpEnabled is set to 1 (the default is 7, an automatic memory dump), which translates into a complete memory dump.

The REG_DWORD value FilterPages is created and set to 1.

This translates into what we explained above.

An active memory dump is a complete memory dump (CrashDumpEnabled value = 1) that is filtered (FilterPages value = 1). Note that when you choose another option in the GUI, the FilterPages value is not set to 0 but is actually removed.

PowerShell

Using PowerShell this is achieved as follows.

Keep in mind that the FilterPages value doesn’t exist if you haven’t configured Active memory dump, so trying to read it will throw an error.
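
If you just want to check whether the value is there without tripping over that error, you can suppress it. A small sketch:

# Returns the FilterPages value when Active memory dump is configured, otherwise nothing (and no error)
Get-ItemProperty -Path HKLM:\System\CurrentControlSet\Control\CrashControl -Name FilterPages -ErrorAction SilentlyContinue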

If you want to mimic the GUI exactly via PowerShell you’ll need to remove the value instead of setting it to 0.

PoSh code

#Take a look at the settings
Get-ItemProperty -Path HKLM:\System\CurrentControlSet\Control\CrashControl -Name CrashDumpEnabled
Get-ItemProperty -Path HKLM:\System\CurrentControlSet\Control\CrashControl -Name FilterPages

#Configure Active memory dump
Set-ItemProperty -Path HKLM:\System\CurrentControlSet\Control\CrashControl -Name CrashDumpEnabled -Value 1
Set-ItemProperty -Path HKLM:\System\CurrentControlSet\Control\CrashControl -Name FilterPages -Value 1

#Set it back to Automatic memory dump (default)
Set-ItemProperty -Path HKLM:\System\CurrentControlSet\Control\CrashControl -Name CrashDumpEnabled -Value 7
Remove-ItemProperty -Path HKLM:\System\CurrentControlSet\Control\CrashControl -Name FilterPages

NOTE: When you edit the registry to change these settings, a reboot is required to activate them, so the GUI might be your preferred way of doing things here. For people using Windows Server Core there is a command, “systempropertiesadvanced”, you can run from the command prompt to get to the Advanced tab of System Properties. From there you get to the Startup and Recovery settings. Also note that some changes between these settings will always require a restart.

Results

To get an idea of what this means on a large memory Hyper-V host, I did some testing on an enterprise grade server in a simulated setup, so in real life the active memory dump might very well be a bit larger than here, but the ratios still tell the story.

  • Active memory dump: 7.60 GB

  • Kernel memory dump: 6.62 GB

  • Complete memory dump: 319 GB

Now, what you need to realize is that to create that active memory dump you don’t need a page file the size of your physical memory. That’s a big deal! In many situations you’d otherwise hit the problem of insufficient disk space for the page file and the memory dump.
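
If you want to see how a host’s page file is configured today before counting on that, a quick CIM query helps. Nothing Hyper-V specific here, just a sketch:

# Is Windows managing the page file size automatically?
Get-CimInstance -ClassName Win32_ComputerSystem | Select-Object AutomaticManagedPagefile

# Manually configured page files and their sizes in MB (returns nothing when automatic management is on)
Get-CimInstance -ClassName Win32_PageFileSetting | Select-Object Name, InitialSize, MaximumSize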

The active memory dump option gives us (or the support engineers) all the most relevant information they need without the overhead and practical problems associated with a complete memory dump. It kind of has my vote to become the new default option.

NVMe Storage for Backup Targets

Introduction

I’ve already used NVMe disks on a modest scale for code build servers, SQL Server deployments (physical or virtual) and basically any workload where the benefits of better storage performance outweigh the loss of high availability (clustering, live migration), such as workstation use. I can run a pretty nice lab on my workstation and not feel miserable due to disk IO contention. Let’s see what NVMe storage for backup targets can do!

For the price you pay and the problems they solve, the performance benefits of NVMe are a great deal. Just run Windows Server 2016 with nested Hyper-V on an NVMe disk as a developer with a dozen VMs for AD, IIS, middleware and SQL Server and you’ll see what it means. Anything less than 8 cores, DDR4 and a modern motherboard need not apply, by the way.
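
For that kind of developer box, nested Hyper-V in Windows Server 2016 is switched on per virtual machine. A minimal sketch; the VM name is just an example:

# Expose the virtualization extensions to the guest so it can run Hyper-V itself (the VM must be off)
Set-VMProcessor -VMName "DevHyperVLab" -ExposeVirtualizationExtensions $true

# Nested guests behind a virtual switch also need MAC address spoofing on the outer VM's network adapter
Get-VMNetworkAdapter -VMName "DevHyperVLab" | Set-VMNetworkAdapter -MacAddressSpoofing On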

We’re looking forward to NVMe deployments where highly available storage is available (shared or shared nothing) for virtualized workloads. We’re seeing the first examples of this in certain Storage Spaces Direct deployments with Windows Server 2016. I’m pretty sure the industry will push NVMe usage to new heights for such scenarios in the coming years with NVMe over Fabrics.

Recently we’ve been looking at NVMe disks as a high performance tier in our backup storage targets. Yup, read on. Sometimes I get a crazy idea I need to scratch, or better, test out in the lab.

NVMe Storage for Backup Targets

When needed you can build a pretty solid backup target with cheap, “high capacity” SATA SSDs as well. The thing is that you’ll be limited by the capabilities of SATA itself, and you need decent controllers, with the associated costs, to mitigate those limits. SATA isn’t exactly the best choice for high throughput, concurrent workloads either. You can move up to SAS to go beyond the limits of SATA for SSDs, but the cost goes up accordingly.

When it comes to cost versus performance, that’s where PCIe shines brighter than anything we have today. Sure, it’s not yet feasible to do this for large data volumes, but we’re not looking at this for the bulk of our VMs or data. We’re looking at a use case where we need stellar performance in a reasonable volume we can drop into a server.

Some people will shout in a visceral reaction (*) that I’m nuts spending that amount of money on backup storage. Well no, I’m not. You have to look at the needs of the use case and the economics of achieving a solution. For a company that needs to back up a number of stateful virtual machines every 10 minutes and wants to keep 12-24 or so restore points around, NVMe disks can deliver a very cost effective solution. You’re probably already running those VMs on highly available, shared tier 1 storage, the cost of which is a multitude of a couple of NVMe disks. Let’s look at an example. Say we’re leveraging Scale-out Backup Repositories with Veeam Backup & Replication and we have 3 to 4 repositories. Dropping 1 or 2 NVMe disks into every node can deliver 6 to 8 TB of stellar performance to your existing setup. In many of my deployments we get all the other resources in those nodes cost effectively because we typically recycle our Hyper-V hosts, so cores, memory and bandwidth are plentiful without huge investments in new dedicated servers. And if you do buy some of the high density kit, the cost of memory and CPU cores won’t kill the project. So am I nuts for trying or not? Heck no, we’ll learn a lot and I’m sure prices will drop and capacities will rise without sacrificing performance.

Really, the price isn’t that bad. Just look on Amazon for the cheapest pricing of Intel 750 series NVMe disks of 1.2 TB and come back.

Today you won’t be buying 20 of them anyway to put in a JBOD, as those don’t exist yet. You’ll put one or two in one or more backup target servers to provide high performance backup storage.

Testing 64K 100% sequential writes with 8 worker nodes enabled … not too shabby
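
For reference, a roughly comparable test can be run with Microsoft’s free DiskSpd tool. The screenshot above was not produced with the command below, so treat it as an assumed equivalent of the same pattern (64K blocks, 100% sequential writes, 8 workers) with a made-up test file path:

# 64K blocks, 100% writes, sequential, 8 threads with 8 outstanding IOs each, caching disabled, 60 second run against a 50GB test file
.\diskspd.exe -c50G -d60 -b64K -w100 -t8 -o8 -si -Sh -L D:\NVMeTarget\testfile.dat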

NVMe disks have stellar IOPS and throughput at low latencies. If you ever wear them out, they are cheap enough to swap out for a new one. They absolutely rock under concurrent use, with multiple sessions and heavy workloads. Their massive IO queues make them shine as server storage in many-to-one scenarios, so backing up many different Hyper-V nodes (clustered or not) concurrently and continuously throughout the day is a use case where they should rock. Just search for some of the reviews out there for details.
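
To quickly see which disks in a (backup target) server are NVMe and how big they are, something like this will do:

# List NVMe disks with their size and health
Get-PhysicalDisk | Where-Object BusType -eq "NVMe" |
    Select-Object FriendlyName, MediaType, @{Name='SizeGB';Expression={[math]::Round($_.Size/1GB)}}, HealthStatus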

Do you need bigger NVMe disks and a bit more “enterprise grade” comfort? Look at the Intel DC P3700 series or equivalents. Simplistically, these are the same family, but the 750 series disk has been tuned to do better with workstation workloads, and even then most people won’t get to see its true capabilities. Anyway, the P3700 series is more expensive and the 2 TB size mark might be what pushes you to buy them. Compared to some OEM enterprise grade SAS SSDs you’re still getting a pretty good deal. In any case, many workstations cannot even make the Intel 750 series break out in a single drop of sweat. We can push them a bit more with server workloads.

If you need redundancy with local NVMe storage you have some options. You can make local NVMe disks redundant today via Storage Spaces if you want, or mitigate the risk by using two of them and having two backup jobs protect the same VMs to different targets.
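
The Storage Spaces route for two local NVMe disks boils down to a simple two-way mirror. A sketch under the assumption of two poolable NVMe disks; the pool, virtual disk and volume names are made up:

# Pool the NVMe disks and carve a two-way mirrored, ReFS formatted backup target out of them
$NVMeDisks = Get-PhysicalDisk -CanPool $true | Where-Object BusType -eq "NVMe"
New-StoragePool -FriendlyName "NVMeBackupPool" -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $NVMeDisks
New-VirtualDisk -StoragePoolFriendlyName "NVMeBackupPool" -FriendlyName "NVMeBackupMirror" -ResiliencySettingName Mirror -UseMaximumSize
Get-VirtualDisk -FriendlyName "NVMeBackupMirror" | Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem ReFS -NewFileSystemLabel "BackupTarget"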

The Intel 750 NVMe disk installed in a Dell R730 dual socket server

Booting the Dell R730, which provides sufficient resources to evaluate the capabilities of an NVMe disk.

I cannot share too much info on this yet, but look at the screenshot below. The VMs run on Storage Spaces (pure SSD) and the backup target is the Intel 750 1.2 TB NVMe disk.

When the delta in the VMs is low, the amount of data you’ll need to back up with Veeam and Windows Server 2016 CBT is minimal, so backup target performance is not that big a deal. But when you have bigger deltas and multiple backup jobs running simultaneously, that becomes a point that requires attention.

[Screenshot: test backups of VMs on Storage Spaces ReFS v3 source storage to the NVMe ReFS v3 target]

Look at the above screenshot of some tests backing up VMs on Storage Spaces (Windows Server 2016) ReFS v3 source storage to NVMe with ReFS v3 target storage. Continuously protecting a company’s gold doesn’t have to cost you a king’s ransom in diamonds. We’re running Windows Server 2016 TPv5 and Veeam Backup & Replication 9.5 Beta. I hope to discuss the capabilities of Windows Server 2016, ReFS and Veeam Backup & Replication 9.5 in later posts.

What will that cost me?

So let’s say you need 2 TB of backup storage in your backup target for your “always on”, mission critical, stateful virtual machines. For under 1600 € you can have that with Intel 750 Series NVMe disks. Today this really is not the technology to build a 300 TB backup capacity solution with, but when used for the right reasons in the right place with the right use cases it is a good solution.

Now, this isn’t the cheapest per GB, far from it, but it is absolutely the best offering when it comes to throughput, even, or rather especially, when hitting that target storage with multiple concurrent backups from multiple sources. That’s where it shines beyond anything we have today. The real challenge will be for the other resources, the operating system and the backup software to keep up and deliver what the NVMe disk(s) can handle. Compared to the OEM prices for enterprise SAS SSDs this is still reasonable.

We’ll compare this to “standard” SSDs with controllers and see where that gets us. You can learn whether this works for you at relatively low cost, gain experience (i.e. find the bottlenecks in the rest of your stack) and deliver a great result for the workloads you’re testing it with. Good backup software lets you fine tune the backups and even throttle them based on the latency of the source storage, so you don’t have to worry about it killing the performance of your primary workloads.

Disclaimer: Don’t run off to your boss telling her or him I told you to implement NVMe backup storage targets. Only do so if you have a use case for this and are willing to try it out. Heck, I bought one on my own dime so I could try it out and see if we can leverage this. If not, I have a great use case for the disk in my workstation for all those Hyper-V virtual machines.

For those 20 ultra-special stateful virtual machines in an “Always-On” environment … this might be the current solution. And please think beyond backups, think recovery of those virtual machines!

It’s kind of cool to use Veeam’s Instant VM recovery when the backup resides on an NVMe.

The future

Today, even with the NVMe over Fabrics 1.0 specification published recently, we don’t yet have “NVMe JBODs” or fabrics we can buy as commodity components, but I’m rather sure those will come soon. These are interesting times and I’ll keep a keen eye on the evolutions around NVMe.

Until then I’ll leverage commodity SSDs for landing the short term backups of VMs. When the speed and frequency of those backups become crucial, I’ll add one or more NVMe disks to the mix.

I can send long term backups to other backup targets, either via different jobs that run at night and/or via backup copies.

On top of all this, the availability of 7.5 and 15 TB 3D NAND disks is about to change the way we look at high capacity disk based storage solutions. Those capacities in small form factors provide tremendous opportunities to deliver high capacity and performance in small building blocks, making the power and cooling economics significantly better. Needing half a rack or a full rack of 3 or 6 TB HDDs to get both capacity and IOPS doesn’t seem that attractive anymore when looking at the TCO over 5 years compared with 2 disk bays full of 7.5 or 15 TB SSDs. In the future, with the rise of high capacity SSDs and dropping prices, we might soon find that ever bigger SSDs deliver the bulk of our storage and NVMe is reserved for the truly demanding workloads.

Slowly but surely we can put most businesses in my country in one rack or half a rack without compromising on anything or needing to buy vendor lock-in converged solutions to make it happen. The scenario where we deliver on premises where it makes the most sense and move to the public cloud where it matters the most is more and more cost effective for those that can’t make data center zero happen yet. Combine that with a software defined approach and you’re looking good.

(*) I had a discussion about using NVMe for certain backup loads with some data center architects recently and they were convinced it was too expensive, too early and needed a consulting engagement leading to a POC to determine if this was a good idea. That would involve project and administrative costs, time and materials, etc. Well, we just bought a couple of NVMe disks on our own budget to test out the idea and concept. It works and is affordable for the right use cases. Just make sure you don’t put an NVMe disk in an anemic budget server where all the other resources will be the bottlenecks. Also make sure you have the intra-host bandwidth to deliver the throughput. Last but not least, it’s pretty silly to have super performant backup targets when your backup source storage can’t deliver the data fast enough. Use common sense and you’ll be alright. It doesn’t need to cost you 10K to find out if buying 800 or 1600 € of NVMe storage will work for you. If it seems to work, we can drop 2 TB worth of NVMe storage into 3 backup target servers for under 4800 €. Using that in production for 6 months will teach us more than an expensive POC anyway.

The Hyper-V Processor Relative Weight

Introduction

Hyper-V offers 3 ways of managing or tweaking the CPU scheduler to provide the best possible configuration for certain scenarios and use cases. The defaults normally work fine, but under certain conditions you might want to tweak them for the best possible outcome. The CPU resource controls at your disposal are:

  • Virtual machine reserve – think of this as the minimum CPU “QoS”
  • Virtual machine limit – think of this as the maximum CPU “QoS”
  • Relative weight – think of this as the scale defining which VM is more important

Note that you should understand what these settings are and what they can do. Treat them like spices: select the ones you need and don’t overdo it. They’re there to help you and, if needed, you can leverage all three, but it’s highly unlikely you’ll need to do so. Using one or two will serve you best if and when you need them.
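
All three controls live on the virtual machine’s processor settings and can be inspected (and set) with the Hyper-V PowerShell module. A quick sketch to see what’s configured today:

# Show the current CPU resource controls for all VMs on this host
Get-VM | Get-VMProcessor | Format-Table VMName, Reserve, Maximum, RelativeWeight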

In this blog post we’ll look at the relative weight.

Relative weight

Relative weight is a relative number between 1 and 10000 that you can assign to a virtual machine. It determines the relative importance of a virtual machine’s CPU resources in regard to other virtual machines. So it’s not a percentage or a number of cycles, it’s just an arbitrary weight. By default this is set to 100.

You need to come up with a scale and stick to it. 100, 200 and 300 for low, medium and high importance virtual machines is a good example. You could also create 10 “classes”: 1000, 1500, 2000, 2500, 3000, 3500, 4000, 4500, 5000, 5500. This leaves room to create even more (lower, in between and higher).
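
Setting the weight itself is a one-liner per virtual machine via Set-VMProcessor. A sketch applying the low/medium/high example above; the VM names are made up:

# Low, medium and high importance classes using the 100/200/300 scale
# (processor settings can only be changed while the VM is powered off)
Set-VMProcessor -VMName "TestVM01" -RelativeWeight 100
Set-VMProcessor -VMName "AppServer01" -RelativeWeight 200
Set-VMProcessor -VMName "SQLProd01" -RelativeWeight 300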

Note that as long as there are sufficient CPU resources on a host, the relative weight does not come into play. It really doesn’t matter whether a virtual machine has a relative weight of 1000 versus 5000 at that time; they both get whatever they need, as there’s plenty to go around.

Relative weight kicks in when the demand is higher than what’s available on the physical host. When you have left all the virtual machines at the default of 100 they will all get an equal share. But the virtual machines you have given a higher relative weight will get a proportionally larger share of the available CPU cycles; roughly speaking, under full contention a virtual machine with a weight of 300 gets about three times the CPU time of one with a weight of 100, all else being equal.

Use Cases

Not all virtual machines are created equal. In reality some workloads are more important than others. This might be development and test versus production, or high priority workloads versus lower priority workloads. The lower priority workloads are the ones you care about less when there is contention for CPU cycles, or workloads where fewer CPU cycles and slower response times don’t make a real difference.

Another use case might be your developer or lab host, where you give a CPU sensitive workload a much higher weight and leave the others at the default of 100.

To make sure that high priority workloads, or those that really depend on CPU cycles being delivered fast, don’t have to play second fiddle to those that don’t have such needs, we use relative weight. It’s very flexible and only kicks in when needed, so there is no waste or inefficiency.

Limitations

The biggest limitation is in the name: it’s all relative. Whereas reserve and limit give you a minimum and a maximum respectively, the relative weight only defines which virtual machine is more important than another in regard to CPU cycles. So some virtual machines get more than others, but that might not be enough. It’s all about balance between virtual machines, not guaranteed minima or maxima.

You need to agree on a standard within the company to define weight. If everyone starts using a different scale you’re in trouble.

Let’s take one admin who uses 100 for less important virtual machines, 200 for standard virtual machines and 300 for the most important ones. That’s all great when he’s the only one defining the settings and when he does so consistently on all nodes/clusters for all VMs. In that case all is well, even when VMs move around between hosts or between clusters. But what happens when many admins use different “scales”? Well, it’s a mess and the behavior won’t be what you want when your colleague used 1000, 2000 and 3000 respectively for the same definitions. It’s also smart not to use 100, 101 and 102; leave some margin for adding a category when needed.
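
One way to spot such inconsistencies is to simply audit the weights in use across a cluster. A sketch, assuming the Hyper-V and Failover Clustering PowerShell modules are available and you run it from a node of the cluster you want to check:

# Gather the relative weight of every VM on every cluster node and group them to spot diverging scales
Get-VM -ComputerName (Get-ClusterNode).Name | Get-VMProcessor |
    Group-Object -Property RelativeWeight |
    Format-Table Name, Count -AutoSize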

Conclusion

This is one handy tool to have at your disposal and I tend to use it to proactively set a higher weight for very important VMs. Even in an environment where there are no predefined categories or known minima, this allows me to tell Hyper-V that, if there ever is contention for CPU cycles, the virtual machines with a higher weight are the ones to get a bigger share of the limited resources.