Bug when changing the “store this conditional forwarder in active directory” setting

Recently I encountered a bug when changing the “store this conditional forwarder in active directory” setting. I have been doing quite some active directory extensions to Azure lately. Part of that, post-process, is making sure that DNS name resolution from on-premises to Azure and vice versa is working optimally. When it comes to resolving Azure private endpoints and other private DNS zones from on-premises we need to add the conditional forwarders for the respective Azure DNS zones.

As we have different needs for this configuration on-premises versus in Azure, we disable “Store this conditional forwarder in Active Directory, and replicate as follows” for all zones. This is the default when you add a conditional forwarder.

However, in certain cases you will also need to do this for other conditional forwarders, depending on the DNS infrastructure between Azure and on-premises. I tend to change those non-Azure conditional forwarders before I add the ones needed for Azure.
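For illustration, here is a minimal PowerShell sketch of adding such a conditional forwarder without storing it in Active Directory. The server name, the private DNS zone, and the forwarder IP address are hypothetical; leaving out -ReplicationScope is what keeps the zone out of Active Directory.

```powershell
# Hypothetical example: a conditional forwarder for an Azure private DNS zone,
# pointing at an Azure-side DNS forwarder/inbound resolver at 10.10.0.4.
# Omitting -ReplicationScope stores the zone locally (file-backed), i.e.
# "Store this conditional forwarder in Active Directory" is NOT enabled.
Add-DnsServerConditionalForwarderZone -ComputerName "DC01" `
    -Name "privatelink.blob.core.windows.net" `
    -MasterServers 10.10.0.4
```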

The “store this conditional forwarder in Active Directory” setting

While that sounds easy enough, you can easily get into a pickle. When you change this setting, even though the configuration looks perfectly fine, name resolution for the zones you changed stops working. That is bad. No bueno!

That can break a lot of services and applications, leading to support calls, upset application owners, and lost revenue, while leaving you scrambling to find a fix.

So how do we fix this?

Well, the only solution is to remove each and every conditional forwarder involved and add them again. While re-adding one, you might get an “unknown error” in the GUI, but ignore it and just go ahead. When your reverse lookup zones are in order, it will resolve to the FQDN and name resolution will start working again. You can also use PowerShell or the command line. It is worth checking whether changing the setting via PowerShell or the command line triggers the bug at all.
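If you prefer PowerShell over the GUI for the remove-and-re-add dance, a minimal sketch could look like this; the zone name, forwarder IP, and test record are assumptions:

```powershell
# Hypothetical zone and master server; adjust to your environment.
$zone    = "privatelink.blob.core.windows.net"
$masters = "10.10.0.4"

# Remove the broken conditional forwarder zone and recreate it as a local,
# non-AD-integrated conditional forwarder.
Remove-DnsServerZone -Name $zone -Force
Add-DnsServerConditionalForwarderZone -Name $zone -MasterServers $masters

# Verify that name resolution works again (the record name is hypothetical).
Resolve-DnsName -Name "mystorageaccount.privatelink.blob.core.windows.net" -Server localhost
```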

Please note that, as you are not replicating the conditional forwarders in Active Directory, you must do this on all on-premises DNS servers involved in resolving Azure resources.
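A simple sketch of repeating that fix on every DNS server involved, assuming you can reach them remotely and with placeholder server names:

```powershell
# Hypothetical server names, zone, and master server.
$dnsServers = "DC01", "DC02", "BRANCH-DC01"
$zone       = "privatelink.blob.core.windows.net"
$masters    = "10.10.0.4"

# Because the zone is not replicated via Active Directory, fix it on each server.
foreach ($server in $dnsServers) {
    Remove-DnsServerZone -Name $zone -Force -ComputerName $server
    Add-DnsServerConditionalForwarderZone -Name $zone -MasterServers $masters -ComputerName $server
}
```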

Is this a known bug?

Well, it looks like it, but I have yet to find a knowledge base article about it. There are mentions of other people running into the issue, and it is not per se Azure-related. Take a look at DNS Conditional Forwarder stops working as soon as it’s Domain Replicated – Microsoft Q&A and AD Integrating conditional DNS forwarders stops them working (microsoft.com).

Note that this bug when changing the “store this conditional forwarder in active directory” setting appears whether you enable or disable it.

This bug has existed for many years and across many versions of Windows DNS. My most recent encounters were with Windows Server 2019 and 2022, but beware with Windows Server 2016 and 2012 (R2) as well.

Warning on Windows Server 2016 Deduplication Corruption

UPDATE 2 – 2017/02/06

DO NOT INSTALL KB3216755 if you don’t need it. A huge memory leak has been reported to be associated with it. If you do need it, I’d consider all my options.

UPDATE – GET KB3216755

As you can read in the comments, Microsoft reached out and confirmed the issues are fixed as part of KB3216755 => https://support.microsoft.com/en-us/help/4011347/windows-10-update-kb3216755. I commend them for responding so quickly and getting it sorted. Do note that at the time of writing (late on January 30th CET) the Windows Server 2016 update isn’t in the Windows Update Catalog yet, only the Windows 10 ones. But Microsoft confirms on their blog that you should install the update.
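If you want to check quickly whether a server already has the update, a minimal sketch:

```powershell
# Check whether KB3216755 is installed; Get-HotFix errors when the ID is not found,
# hence the try/catch.
try {
    Get-HotFix -Id "KB3216755" -ErrorAction Stop
}
catch {
    Write-Warning "KB3216755 is not installed on this server."
}
```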

Windows Server 2016 Data Deduplication users: please install KB3216755!

The issue

Good morning. A quick blog post to give a heads-up to my readers who might not be subscribed to Anton Gostev’s (Veeam) “The Word From Gostev”. It concerns a warning on Windows Server 2016 Deduplication corruption.

Warning on Windows Server 2016 Deduplication Corruption

There are multiple reports of data corruption with Windows Server 2016 deduplication. One is related to file sizes over 2TB. The other involves the loss of checksum values. Microsoft is aware of these issues and a fix is coming.

I quote Gostev

I’ve already received the official confirmation from Microsoft that this is the known issue (ID 10165851) which is scheduled to be addressed in the next Windows Server 2016 servicing update. There are actually two separate issues, both leading to file corruption when using deduplication on very large files. One issue occurs when files grow to 2.2TB or larger, and another one causes loss of checksums for files with “smaller sizes” – this is the actual wording of the official note, so I have no idea how small.
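To get an idea of whether you are anywhere near that 2.2TB danger zone, a rough sketch like the one below can help; the volume letter is an assumption and the scan can take a while on large volumes.

```powershell
# Confirm which volumes have deduplication enabled.
Get-DedupVolume

# List files of 2TB or more on a (hypothetical) deduplicated volume D:.
Get-ChildItem -Path "D:\" -Recurse -File -ErrorAction SilentlyContinue |
    Where-Object { $_.Length -ge 2TB } |
    Select-Object FullName, @{ Name = "SizeTB"; Expression = { [math]::Round($_.Length / 1TB, 2) } }
```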

What to do?

If you use Windows Server 2016 deduplication for backups, create new full backups regularly. Also make sure you do backup integrity testing and restore tests. Follow up on the update when it arrives.
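As part of that integrity testing you can also let deduplication check itself by running a scrubbing job, which validates the deduplicated data and reports its findings in the Deduplication event logs. A minimal sketch, with the volume letter as an assumption:

```powershell
# Run a deduplication scrubbing job against a (hypothetical) volume D: and follow it.
Start-DedupJob -Volume "D:" -Type Scrubbing
Get-DedupJob
```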

If you use it for production data, make sure you have frequent and validated backups! Design & operate under the mantra of “trust but verify”.

Also, we’ve heard reports and noticed ourselves that the Windows Server 2016 Deduplication resource configuration isn’t always respected; it can take all resources away despite the limits that were set. We hope a fix for this is also under way.
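For reference, this is the kind of limit we are talking about; a minimal sketch with hypothetical values that, going by those reports, is not always honored in practice:

```powershell
# Kick off an optimization job with a memory cap (percentage) and low priority on a
# (hypothetical) volume D:, then watch whether the limits are actually respected.
Start-DedupJob -Volume "D:" -Type Optimization -Memory 50 -Priority Low
Get-DedupJob
```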

Import of RD Gateway configuration file with policies referencing local resources wipes all policies clean!

Introduction

When you have a Windows Server 2016 RD Gateway server and you expect to be able to import a configuration XML file, you might find yourself in a pickle when you are also using local resources. Because the import of an RD Gateway configuration file with policies referencing local resources wipes all policies clean! With local resources I mean local user accounts and groups. These are leveraged more than I imagined at first.

When does it happen?

In the past I have blogged about migrating RD Gateway servers that contain policies referencing local resources here: Fixing Event ID 2002 “The policy and configuration settings could not be imported to the RD Gateway server “%1” because they are associated with local computer groups on another RD Gateway server”.

We used to be able to use the trick of making sure the local resources exist on the new server (either by recreating them there via the server migration wizard or manually) and changing the server name in the exported configuration XML file to successfully import the configuration. That no longer works. You get an error.

As far as migrations from older versions go, they work fine as long as you don’t have policies with local resources. Otherwise you’d better do an in-place upgrade or recreate the resources & policies on the new servers. The method described in my blog no longer works. That’s too bad. But it gets worse.

Import of RD Gateway configuration file with policies referencing local resources wipes all policies clean!

As said, it doesn’t end there. The issue occurs even when you try to import the configuration onto the same server you exported it from. That’s really bad, as such an import would otherwise be a quick way to protect against any mistakes you might make and to get back to the original configuration.

What’s even worse, when the import fails it wipes ALL the policies on the RD Gateway server => dangerous! So yes, the import of an RD Gateway configuration file with policies referencing local resources wipes all policies clean!

Precautions

Only a backup or a checkpoint can save you then (or recreating them all manually)! Again, this only happens when the exported configuration file references local resources! The fastest way to clean out an RD Gateway configuration on Windows Server 2016 is actually importing a configuration export which contains a policy referring to local resources. Ouch! I’m not aware of a fix to this date.

For now your only protection is a checkpoint or a backup. Depending on where and how you source your virtual machines, you might not have access to a checkpoint.
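If the RD Gateway runs on Hyper-V, a checkpoint taken on the host just before you touch the configuration is the cheapest insurance. A minimal sketch, with a hypothetical VM name:

```powershell
# On the Hyper-V host: checkpoint the (hypothetical) RD Gateway VM before the import.
Checkpoint-VM -Name "RDGW01" -SnapshotName "Before RD Gateway config import"

# If the import wipes the policies, roll the whole VM back to that checkpoint:
# Restore-VMSnapshot -VMName "RDGW01" -Name "Before RD Gateway config import" -Confirm:$false
```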

You have been warned, be careful.

July 2016 update rollup for Windows RT 8.1, Windows 8.1, and Windows Server 2012 R2

Microsoft recently released another update rollup (aka cumulative update): the July 2016 update rollup for Windows RT 8.1, Windows 8.1, and Windows Server 2012 R2.

This rollup includes improvements and fixes, but more importantly, it also contains the ‘improvements’ from the June 2016 update rollup KB3161606 and the May 2016 update rollup KB3156418. When it comes to the June rollup KB3161606, this new rollup fixes the bugs that caused concerns with the Hyper-V Integration Components (IC) and even serious downtime for Scale-Out File Server (SOFS) users. My fellow MVP Aidan Finn discusses this in this blog post. Let’s say it caused a wrinkle in the community.

In short, with KB3161606 the Integration Components needed an upgrade (to 6.3.9600.18339), but due to a mix-up with the manifest files this failed. You could leave them in place, but it’s messy. To make matters worse, this cumulative update also messed up SOFS deployments, which could only be dealt with by removing it.

Bring in update rollup 3172614. This will install on hosts and guests whether they already have KB3161606 installed or not, and it fixes these issues. I have now deployed it on our infrastructure and the ICs updated successfully to 6.3.9600.18398. The issues with SOFS are also resolved with this update. We have not seen any issues so far.
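To verify the result on your own hosts, a quick sketch; the version number is the one mentioned above:

```powershell
# Confirm the rollup is installed on the host.
Get-HotFix -Id "KB3172614"

# On the Hyper-V host: the Integration Services version per VM should end up at
# 6.3.9600.18398 once the guests have been updated and rebooted.
Get-VM | Select-Object Name, IntegrationServicesVersion
```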


In short, KB3161606 should be gone from Windows Update and WSUS. If it was already installed, you don’t need to remove it. KB3172614 will install on those servers (hosts and guests) and this time it does things right.

I hope this leads to better QA in Redmond, as it really is causing a lot of people grief at the moment. It also feeds conspiracy theories that MSFT is sabotaging on-premises to promote Azure usage even more. Let’s not feed the trolls, shall we?