Heading Home after Ignite 2016

While traveling back home I jotted down some musings on Microsoft Ignite 2016. I’m not going to regurgitate all the news and announcements here.


There were many and they were diverse. Azure identity, security, storage, management, Windows Server 2016, Hyper-V, Storage Spaces, Storage Replica … all offer a wide variety of new capabilities and options. It’s impressive now and it will be even more impressive in the future. When I connect the dots and look at the opportunities, my take on what the future roadmap can and might be takes shape in front of my eyes. That’s the value I can add to an organization that’s committed to its future and realizes it needs to leverage IT to its fullest potential.

That means you cannot treat IT as a facility just because we build it on commodity products. Every success is built on creative and well-directed use of the components and their capabilities. This requires a lot more than lip service or merely covering up bad choices and political ambitions with a thin layer of “big principles”. The keys to success are speed, agility and insight in a world where mobile and cloud offer tremendous new opportunities. Large, long-term, centralized projects have their place, but sticking to them by default, in the wrong place and in the wrong way, will lead to failure in a 24/7/365 mobile world where federation and collaboration across boundaries are paramount. Small, cost-effective and efficient projects delivering real value with a purpose will make giants, both in government and the private sector, stumble and even fall.

We have so much opportunity here that many cannot see the trees through the forest anymore. This will lead to many failed projects, ambitions and organizations, combined with a waste of time and money. That’s where we can make the difference.

As an attendee and MVP I was very happy to be able to attend in order to calibrate my compass and correct course. In good tradition I signed the billboard for attending MVPs at Ignite 2016. I’m already looking forward to heading back to Redmond for the MVP Global Summit to continue the discussion at the Microsoft headquarters.

MVPIgnite

To me, the Ignite 2016 edition was one of intensive networking with Microsoft experts and management. This extended to 3rd party vendors and partners of Microsoft. This, in combination with the discussions with my peers to discover their views and insights, has given me a very up-to-date view on where we are and where things are going. That’s the value I’m taking back home to work with and to help people reach their full potential. That’s not an easy task, as many today feel anywhere from a bit out of balance to completely lost. Technologists are the ones to step up, all the way to the board level, and steer their organizations towards a successful future. Many companies are not ready for this and some management feels threatened by it. There’s basically no need for that fear, as we are technologists, not politicians. We solve problems, we don’t create them. We drive companies towards success, if you let us.

DELL Compellent Storage Center 7.1 Certified for Windows Server 2016

When it comes to selecting storage, especially a “traditional” SAN, you all know that price/performance-wise I’ve been using the DELL Compellent series with great success for many years now. It’s a very capable solution that also has some other benefits when it comes to Windows Server and Hyper-V. It has one of the better hardware VSS providers, way better than average support for ODX and UNMAP, etc., and it’s also very good at delivering fast support for new versions of Windows Server. This has allowed us to move from Windows Server 2008 R2 to 2012 and from there to Windows Server 2012 R2 very fast.
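
If you want to verify that ODX and UNMAP are actually active on a host talking to such an array, here’s a minimal sketch. These are standard Windows checks, not Compellent-specific:

    # Check whether ODX (Offloaded Data Transfer) is enabled on this host.
    # FilterSupportedFeaturesMode 0 (or absent) = ODX enabled, 1 = disabled.
    Get-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem" |
        Select-Object FilterSupportedFeaturesMode

    # Check whether delete notifications (TRIM/UNMAP) are passed down to storage.
    # DisableDeleteNotify = 0 means UNMAP is enabled.
    fsutil behavior query DisableDeleteNotify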

In that regard I’m very happy to see that Storage Center 7.1 is already in the catalog as certified for Windows Server 2016.

image

Customers that have up-to-date hardware and want to move fast to benefit from and leverage the new and improved capabilities in Windows Server 2016 Hyper-V, clustering, networking, … are ready to do so. Nice!

Veeam Leads the way by leveraging ReFS v3 capabilities

Introduction

You might have noticed that I’m pretty impressed by what Microsoft is doing with ReFS v3 in Windows Server 2016. You can read some of my musings on it in ReFS vNext Block Cloning and ODX and take a look at a comparison between ReFS & ODX speeds when creating VHDX files in Lightning Fast Fixed VHDX File Creation Speed With ReFS on Windows Server 2016.

Note that this is also leveraged for accelerated checkpoint merges, VHDX resizing etc.
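
You can see the VHDX creation speedup for yourself with a minimal sketch like the one below. The R: drive is assumed to be a ReFS v3 formatted volume with an existing Demo folder, and the Hyper-V PowerShell module is assumed to be installed:

    # Time the creation of a 100 GB fixed VHDX on an (assumed) ReFS v3 volume R:.
    # ReFS v3 can initialize the file almost instantly; on NTFS, without ODX,
    # the file has to be zeroed out in full.
    Measure-Command {
        New-VHD -Path "R:\Demo\Fast.vhdx" -SizeBytes 100GB -Fixed
    }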

Now it goes without saying that Hyper-V (they’re the tip of the spear at MSFT) and other Microsoft products would take advantage of the capabilities of ReFS. But now we know that Veeam Backup & Replication 9.5 also makes use of ReFS, to help with the resilience of backups, the speed of synthetic full backups and the space they require.

image

To a Hyper-V MVP and a Veeam Vanguard it was obvious these two combined just had to lead the way for others to follow.


Veeam Backup & Replication 9.5 will leverage ReFS v3 …

image

and by doing so they deliver the following benefits:

  • Shorter backup windows and a reduced backup storage load on the repository
  • Reduced backup target storage capacity, which reduces or eliminates the need for deduplication in many scenarios
  • Better protection of backup data by leveraging the native ReFS capabilities to guard against bit rot, one of the prime goals Microsoft designed ReFS for

How is this done?

ReFS v3 has “fast cloning” technology which Veeam is leveraging. This results in up to 10 times faster creation and transformation of synthetic full backup files! ReFS fast cloning allows for creating new files without physically moving data blocks between files. This is what delivers shorter backup windows and a lower backup storage load on the repository or repositories.

They use what they call “spaceless full backup technology”, which allows multiple full backup files that share the same physical data blocks to reside on the same ReFS volume. As a result they need less storage capacity, which can reduce or eliminate the need (and cost) of deduplication appliances whilst leveraging commodity storage.
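
If you’re preparing such a repository volume yourself, here’s a minimal sketch. Disk number 2, drive letter R: and the 64 KB allocation unit size are assumptions for this example; check Veeam’s guidance for your environment:

    # Initialize a dedicated disk and format it with ReFS and a 64 KB
    # allocation unit size for use as a backup repository volume.
    Get-Disk -Number 2 |
        Initialize-Disk -PartitionStyle GPT -PassThru |
        New-Partition -UseMaximumSize -DriveLetter R |
        Format-Volume -FileSystem ReFS -AllocationUnitSize 65536 -NewFileSystemLabel "VeeamRepo"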

Let’s look at how this saves space. A “legacy” full backup is created and consumes 30% of the storage capacity. Then we make incremental backups.

image

Three incremental backups add 3 × 10% of delta to the needed backup storage capacity, which brings the total to 60%.

image

We create a synthetic full backup and the copies of the data require another 30% of space, bringing the total to 90%.

image

Now let’s compare this to v9.5 leveraging a Windows Server 2016 ReFS formatted backup target repository. Instead of copying data, ReFS references the already existing data blocks for a new file. This saves on IO, space and time!

image

Is this safe? What if those data blocks that are referenced multiple times become corrupted? Well, Veeam already has protection against that in place! But it goes the extra mile, as ReFS has the capability to protect against this itself, or its power would also become its biggest weakness.

Veeam’s data integrity streams integration leverages the ReFS data integrity scanner, and even proactive error correction when used in combination with Storage Spaces, to protect backup files from bit rot and allow for more reliable forever-incremental archiving. This helps make the spaceless full backup technology trustworthy & safe, alongside the health checking & error fixing capabilities already available in Veeam Backup & Replication.
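
To get a feel for the underlying ReFS capability, you can inspect and toggle integrity streams on a file yourself. A minimal sketch (the backup file path is hypothetical, and Veeam manages this for you in practice):

    # Show whether ReFS integrity streams are enabled on a (hypothetical) backup file.
    Get-FileIntegrity -FileName "R:\Backups\Job1.vbk"

    # Enable integrity streams on that file so ReFS checksums its data.
    Set-FileIntegrity -FileName "R:\Backups\Job1.vbk" -Enable $True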

Conclusion

I’m impressed by Veeam’s forward-looking and fast adoption of the capabilities of ReFS v3, and I’m testing Backup & Replication v9.5 Beta in the lab today. They have more up their sleeve by the way, as they’re doing some interesting work with PowerShell Direct to make backups ever more resilient in ever more scenarios. More on that later.

Anyone who said Veeam would lose its edge in the world of Hyper-V backups when Microsoft introduced its own native change block tracking (resilient change tracking) has clearly never dealt with Veeam seriously and professionally. I have, and I’m always happy to chat with them, as they have serious technical skills combined with the vision and business acumen that make sure they’re leaders in the business of backup. It makes me proud to be a Veeam Vanguard and an MVP with a specialization in Hyper-V.

A First look at Cloud Witness

Introduction

In Windows Server 2012 R2 Failover Clustering we have 2 types of witness:

  1. Disk witness: a shared disk that can be seen by all cluster nodes
  2. File share witness (FSW): an SMB 3 file share that is accessible by all cluster nodes

Since Windows Server 2012 R2 the recommendation is to always configure a witness. The reason for this is dynamic quorum and dynamic witness. These two capabilities offer the best possible resiliency without administrator intervention and are enabled by default. The cluster dynamically assigns a quorum vote to a node when it’s up and removes it when it’s down. Likewise, the witness is given a vote when it’s better to have a witness; if you’re better off without the witness it won’t get a vote. That’s why Microsoft now advises to always set a witness: it will be managed automatically. The result is that you’ll get the best possible uptime for a cluster under any given circumstance.
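
You can watch this behavior on any Windows Server 2012 R2 or later cluster. A quick look, assuming the FailoverClusters PowerShell module is available:

    # DynamicWeight shows whether a node currently holds a quorum vote,
    # next to its statically assigned NodeWeight.
    Get-ClusterNode | Format-Table Name, State, NodeWeight, DynamicWeight

    # WitnessDynamicWeight shows whether the witness currently gets a vote.
    (Get-Cluster).WitnessDynamicWeight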

This is still the case in Windows Server 2016, but failover clustering does introduce a new witness option: the cloud witness.

Why do we need a cloud witness?

For certain scenarios, such as a cluster without shared storage and especially when a stretched cluster is involved, you’ll have to use an FSW. It’s a great solution that works as well as a disk witness in most cases. Why do I say most? Well, there is a scenario where a disk witness provides better resiliency, but let’s not go there now.

Now the caveat here is that you’ll need to place the FSW in a 3rd, independent site. That’s a hard order for many to fulfill. You can put it on the desktop of the receptionist at a branch office or on a virtual machine on the cluster itself, but that’s “suboptimal”. Ideally the FSW is independent and highly available, not dependent on what it’s supposed to support in achieving quorum.

One of the other workarounds was to extend AD to Azure, deploy an SOFS cluster with a non-CA file share on a cluster of VMs in Azure and give both other sites access to it over VPN or Express Route. That works, but in a time of easy, fast, cheap and good solutions it’s still a serious effort, just for a file share.

As Microsoft has more and more use cases that require an FSW (site-aware stretched clusters, Storage Spaces Direct, Exchange DAGs, SQL Server Availability Groups, workgroup or multi-domain clusters) they had to find a solution for the growing number of customers that do not have a 3rd site but do need an FSW. The cloud idea above is great, but the implementation isn’t the best as it’s rather complex and expensive. Bar using virtual machines, you can’t use Azure file services in the cloud, as those are primarily for consumption by applications and security is handled not via ACLing but via access keys. That means the security for the Cluster Name Object (CNO) can’t be set. So even though you can expose a cloud file share on premises to Windows Server 2016 (any OS that supports SMB 3, actually) by mapping it via NET USE, the cluster GUI can’t set the required security for the cluster nodes, so it will fail. And no, you can’t set it manually either. Just to prove this, I tried it for you, to save you the trouble. Do NOT even go there!

clip_image002

So what is possible? Well, come Windows Server 2016, failover clustering now has a 3rd type of witness: the cloud witness. Functionally it’s like an FSW. The big difference is that it’s a dedicated, cloud-based solution that mitigates the need and costs of a 3rd data center and avoids the cost of the workarounds people came up with.

Implementing the cloud witness

In your Azure subscription you create a storage account. For this purpose I’ve created one named democloudwitness in my resource group RG-Demo. I’m using a separate storage account to keep things tidy and separated from my other demo storage accounts.

A storage account gets two access keys and two connection strings. The reason for this is that when you need to regenerate one key, you can have your workloads use the other one in the meantime, so key rotation can be done without downtime.
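
You can list both keys from PowerShell as well. A minimal sketch using the AzureRM module that was current at the time (the exact output shape varies by module version), with the account and resource group names from the demo above:

    # Log in and list both access keys for the demo storage account.
    Login-AzureRmAccount
    Get-AzureRmStorageAccountKey -ResourceGroupName "RG-Demo" -Name "democloudwitness"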

clip_image004

In Azure the work is actually already done. The rest happens on premises, on the cluster: we’ll configure the cluster with a witness. In PowerShell this is a one-liner, shown below.
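
A minimal sketch with the demo storage account name from above; swap in your own primary access key:

    # Configure a cloud witness for the cluster using the storage account
    # name and its primary access key (placeholder value here).
    Set-ClusterQuorum -CloudWitness -AccountName "democloudwitness" -AccessKey "<YourPrimaryAccessKey>"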

clip_image006

If you get an error, make sure the information is correct and that you can reach Azure over HTTPS via the internet, VPN or Express Route. You normally do not need the endpoint parameter; it’s only for the rare case where you need to specify a different Azure service endpoint.

The access key in the screenshot is a fake one by the way, just so you know. Once you’re done, Get-ClusterQuorum returns Cloud Witness as the QuorumResource.

clip_image008

In the GUI you’ll see

clip_image010

When you open up the Blob service in your storage account you’ll see that a container has been created with the name msft-cloud-witness. When you select it you’ll see a file with a GUID as its name.

clip_image012

That GUID is actually the same as your cluster instance ID, which you can find in the registry of your cluster nodes under the HKLM\Cluster key in the string value ClusterInstanceID.
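
A quick way to check this from PowerShell on one of the nodes:

    # Read the cluster instance ID from the registry; it matches the blob
    # file name in the msft-cloud-witness container.
    (Get-ItemProperty -Path "HKLM:\Cluster" -Name ClusterInstanceID).ClusterInstanceID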

Your storage account can be used for multiple clusters; you’ll just see extra entries, each with their own GUID.

clip_image014

All this consumes so few resources it’s quite possibly the cheapest ever way of getting a cluster witness. Time will tell.

Things to consider

• Cloud Witness uses the HTTPS REST interface (NOT SMB 3) of the Azure storage account service. This means it requires outbound HTTPS (port 443) to be open on all cluster nodes for access over the internet. Alternatively, an Azure site-to-site VPN or Express Route can be used; you’ll need one of those. (A quick connectivity check is included in the sketch after this list.)

• REST means there’s no ACLing to be done for the CNO like on an SMB 3 FSW. Security is handled automatically by the failover cluster, which doesn’t store the actual access key but generates a shared access signature (SAS) token using the access key and stores that securely.

• The generated SAS token is valid as long as the access key remains valid. When rotating the primary access key, it is important to first update the cloud witness (on all your clusters that are using that storage account) with the secondary access key before regenerating the primary access key (see the sketch after this list).

• Plan your governance between cluster & Azure admins if these are not the same people. I see Azure resource governance being neglected, and as a cluster admin it’s nice to have some degree of control or say in the Azure part of the equation.
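
A minimal sketch of that connectivity check and the key rotation order, using the demo names from above (New-AzureRmStorageAccountKey is the AzureRM-era cmdlet for regenerating a key):

    # 1. Verify a cluster node can reach the storage account's blob endpoint
    #    over HTTPS (default Azure service endpoint assumed).
    Test-NetConnection -ComputerName "democloudwitness.blob.core.windows.net" -Port 443

    # 2. Point the witness at the secondary access key first, on every
    #    cluster that uses this storage account (placeholder key here).
    Set-ClusterQuorum -CloudWitness -AccountName "democloudwitness" -AccessKey "<YourSecondaryAccessKey>"

    # 3. Only then regenerate the primary access key in Azure.
    New-AzureRmStorageAccountKey -ResourceGroupName "RG-Demo" -Name "democloudwitness" -KeyName key1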

For completeness I’ll mention that the entire setup of a cloud witness is also very nicely integrated into the Failover Cluster Manager GUI.

Right-click on the desired cluster and select “Configure Cluster Quorum Settings” from the menu under “More Actions”.

clip_image015

Click through the wizard’s opening page (unless you’ve never done this before, in which case you might want to read it).

clip_image017

Select either “Select the quorum witness” or “Advanced quorum configuration”.

clip_image019

We keep the default selection of all nodes.

clip_image021

We select “Configure a cloud witness”.

clip_image023

Type in your Azure storage account name, enter your primary access key in the “Azure storage account key” field and leave the endpoint at its default. You normally won’t need to change this unless you have to use a different Azure service endpoint.

clip_image025

Click “Next” to review what you’re about to do.

clip_image027

Click Next again and let the wizard run.

clip_image029

You’ll get a report when it’s done. If you get an error, make sure the information is correct and that you can reach Azure over HTTPS via the internet, VPN or Express Route.

Conclusion

I was pleasantly surprised by how easy it was to set up a cloud witness. The biggest hurdle for some might be access to Azure in secured environments. The file itself contains no sensitive information at all, and while VPN or Express Route are secure connectivity options, these might not be allowed or viable in certain environments. Other than that I have found it to be very reliable, effective, cheap and easy. I really encourage you to test it and see what it can do for you.