The Perfect Storm of Azure DNS resolver, a custom DNS resolver, and DNS configuration ambiguities

TL;DR

The very strict Azure recursive DNS resolver, when combined with a custom DNS resolver, can cause a timeout-sensitive application to experience service disruption due to ambiguities in third-party DNS NS delegation configurations.

Disclaimer

I am using fictitious FQDNs and made-up IP addresses here, not the real ones involved in the issue.

Introduction

A GIS-driven business noticed timeouts in some of its services. Upon investigation, this was believed to be a DNS issue. That was indeed the case, but not due to a network or DNS infrastructure error, let alone a gross misconfiguration.

The Azure platform DNS resolver (168.63.129.16) is a high-speed and very strict resolver. While it can return the IP information, it also indicates a server error (SERVFAIL):

nslookup pubdata.coast.be
Server:         127.0.0.11
Address:        127.0.0.11#53

Non-authoritative answer:
pubdata.coast.be canonical name = www.coast.be.
Name:   www.coast.be
Address: 154.152.150.211
Name:   www.coast.be
Address: 185.183.181.211
** server can't find www.coast.be: SERVFAIL

Azure handles this by responding fast and reporting the issue. The custom DNS service, which provides DNS name resolution for the service by forwarding recursive queries to the Azure DNS resolver, reports the same problem, but not as fast as Azure. Here, it takes 8 seconds (the Recursive Query Timeout value), potentially 4 seconds longer due to the Additional Timeout value, for a worst case of 12 seconds. So, while DNS works, something is wrong, and the extra time before the timeout occurs causes service issues.

When I was first asked to help out, my first questions were whether it had ever worked and whether anything had changed. The next question was whether they had any control over the timeout period, so they could adjust it upward and let the service function correctly. The latter was not possible, or at least not easy, so they came to me for troubleshooting and a potential workaround or fix.

So I dove in with the tools of the trade: nslookup, nameresolver, dig, https://dnssec-analyzer.verisignlabs.com/, and https://dnsviz.net/. The usual suspects were DNSSEC and zone delegation mismatches.

First, I run:

nslookup -debug pubdata.coast.be

In the output, we find:

Non-authoritative answer:
Name:    www.coast.be
Addresses:  154.152.150.211
            185.183.181.211
Aliases:  pubdata.coast.be

We learn that pubdata.coast.be is a CNAME for www.coast.be. Let’s see if any CNAME delegation or DNSSEC issues are in play. Run:

dig +trace pubdata.coast.be

;; global options: +cmd
.                       510069  IN      NS      a.root-servers.net.
.                       510069  IN      NS      b.root-servers.net.
...
.                       510069  IN      NS      l.root-servers.net.
.                       510069  IN      NS      m.root-servers.net.
.                       510069  IN      RRSIG   NS 8 0 518400 20250807170000 20250725160000 46441 . <RRSIG_DATA_ANONYMIZED>
;; Received 525 bytes from 1.1.1.1#53(1.1.1.1) in 11 ms

be.                     172800  IN      NS      d.nsset.be.
...
be.                     172800  IN      NS      y.nsset.be.
be.                     86400   IN      DS      52756 8 2 <DS_HASH_ANONYMIZED>
be.                     86400   IN      RRSIG   DS 8 1 86400 20250808050000 20250726040000 46441 . <RRSIG_DATA_ANONYMIZED>
;; Received 753 bytes from 198.41.0.4#53(a.root-servers.net) in 13 ms

coast.be.               86400   IN      NS      ns1.corpinfra.be.
coast.be.               86400   IN      NS      ns2.corpinfra.be.
<hash1>.be.             600     IN      NSEC3   1 1 0 - <next-hash1> NS SOA RRSIG DNSKEY NSEC3PARAM
<hash1>.be.             600     IN      RRSIG   NSEC3 8 2 600 20250813002955 20250722120003 62188 be. <RRSIG_DATA_ANONYMIZED>
<hash2>.be.             600     IN      NSEC3   1 1 0 - <next-hash2> NS DS RRSIG
<hash2>.be.             600     IN      RRSIG   NSEC3 8 2 600 20250816062813 20250724154732 62188 be. <RRSIG_DATA_ANONYMIZED>
;; Received 610 bytes from 194.0.37.1#53(b.nsset.be) in 10 ms

pubdata.coast.be.       3600    IN      CNAME   www.coast.be.
www.coast.be.           3600    IN      NS      dns-lb1.corpinfra.be.
www.coast.be.           3600    IN      NS      dns-lb2.corpinfra.be.
;; Received 151 bytes from 185.183.181.135#53(ns1.corpinfra.be) in 12 ms

The DNSSEC configuration is not the issue, as the signatures and DS records appear to be correct. So, the delegation inconsistency is what causes the SERVFAIL, and the duration of the custom DNS servers’ recursive query timeout causes the service issues.

The real trouble is here:

pubdata.coast.be.    3600    IN    CNAME    www.coast.be.
www.coast.be.        3600    IN    NS       dns-lb1.corpinfra.be.

This means pubdata.coast.be is a CNAME to www.coast.be. But www.coast.be is served by different nameservers than the parent zone (coast.be uses ns1/ns2.corpinfra.be). This creates a delegation inconsistency:

The resolver must follow the CNAME and query a different set of nameservers. If those nameservers don’t respond authoritatively or quickly enough, or if glue records are missing, resolution may fail.

Strict resolvers (such as Azure DNS) may treat this as a lame delegation or a broken chain, even if DNSSEC is technically valid.

Workarounds

I have already mentioned that fixing the issue in the service configuration setting was not on the table, so what else do we have to work with?

  • A quick workaround is to use the Azure platform DNS resolver (168.63.129.16) directly, which, due to its speed, avoids the additional time required to finalize the query. However, due to other DNS requirements in the environment, this workaround is not always an option.
  • The other one is to reduce the recursive query timeout and additional timeout values on the custom DNS solution, which is what we did to resolve the issue as soon as possible. The timeout value is now 2 seconds (default is 8), and the additional timeout value is now 2 seconds (default is 4); see the sketch after this list. Monitor this to ensure that no other problems arise after taking this action.
  • Third, we could conditionally forward coast.be to the dns-lb1.corpinfra.be and dns-lb2.corpinfra.be NS servers. That works, but it requires maintenance when those name servers change, so we need to keep an eye on that. We already have enough work.
  • A fourth workaround is to have the application code resolve the pubdata.coast.be FQDN itself via a query to a public DNS server, such as 1.1.1.1 or 8.8.8.8, and use the returned IP address. This is tedious and not desirable.
  • The most elegant solution would be to address the DNS configuration Azure has an issue with. That is out of our hands, but it can be requested from the responsible parties. For that purpose, you will find the details of our findings below.
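
For the second workaround, here is a minimal sketch of the timeout change, assuming the custom DNS solution is Windows Server DNS; the server name 'AZ-DNS01' is a made-up example:

# Lower the recursion timeouts from their defaults (8 s and 4 s) to 2 s each
Set-DnsServerRecursion -ComputerName 'AZ-DNS01' -Timeout 2 -AdditionalTimeout 2
# Verify the new values
Get-DnsServerRecursion -ComputerName 'AZ-DNS01' | Format-List Timeout, AdditionalTimeout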

Issue Summary

The .be zone delegates coast.be to the NS servers:

dns-lb1.corpinfra.be
dns-lb2.corpinfra.be

However, the coast.be zone itself lists different NS servers:

ns1.corpinfra.be
ns2.corpinfra.be

This discrepancy between the delegation NS records in .be and the authoritative NS records inside the coast.be zone is a violation of DNS consistency rules.

Some DNS resolvers, especially those performing strict DNSSEC and delegation consistency checks, such as the Azure platform DNS resolver, interpret this as a misconfiguration and return SERVFAIL errors. This happens even when the IP address(es) for pubdata.coast.be can indeed be resolved.

Other resolvers (e.g., Google Public DNS, Cloudflare) may be more tolerant and return valid answers despite the mismatch, without mentioning any issue.
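
You can observe that difference yourself with PowerShell's Resolve-DnsName. A quick sketch, using the fictitious FQDN from this post (note that 168.63.129.16 is only reachable from inside Azure):

# Tolerant public resolver: returns the A records without complaint
Resolve-DnsName -Name 'pubdata.coast.be' -Server '8.8.8.8'
# Azure platform resolver, run from an Azure VM: errors out with a server failure (SERVFAIL)
Resolve-DnsName -Name 'pubdata.coast.be' -Server '168.63.129.16'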

Why could this be a problem?

DNS relies on consistent delegation to ensure:

  • Security
  • Data integrity
  • Reliable resolution

When delegation NS records and authoritative NS records differ, recursive resolvers become uncertain about the actual authoritative servers.

This uncertainty often triggers a SERVFAIL: when NS records differ between parent and child zones, resolvers may reject responses to prevent the use of stale, spoofed, or malicious data.

Overview

Zone Level    | NS Records                                 | Notes
.be (parent)  | dns-lb1.corpinfra.be, dns-lb2.corpinfra.be | Delegation NS for coast.be
coast.be      | ns1.corpinfra.be, ns2.corpinfra.be         | Authoritative NS for the zone

Corpinfra.be (see https://www.dnsbelgium.be/nl/whois/info/corpinfra.be/details; this is an example, the domain is fictitious) operates all four NS servers, which resolve to IPs in the same subnet, but the naming inconsistency causes delegation mismatches.

Recommended Fixes

Option 1: Update coast.be zone NS records to match the delegation NS

Add dns-lb1.corpinfra.be and dns-lb2.corpinfra.be as NS records in the coast.be zone alongside existing ones (ns1 and ns2), so the zone’s NS RRset matches the delegation.

coast.be.   IN  NS  ns1.corpinfra.be.
coast.be.   IN  NS  ns2.corpinfra.be.
coast.be.   IN  NS  dns-lb1.corpinfra.be.
coast.be.   IN  NS  dns-lb2.corpinfra.be.

Option 2: Update .be zone delegation NS records to match the zone’s NS records

Change the delegation NS records in the .be zone to use only:

ns1.corpinfra.be
ns2.corpinfra.be

and remove dns-lb1.corpinfra.be and dns-lb2.corpinfra.be.

Option 3: Align both the .be zone delegation and coast.be NS records to a consistent unified set

Either use only ns1.corpinfra.be and ns2.corpinfra.be for both the delegation and the authoritative zone NS records, or use only dns-lb1.corpinfra.be and dns-lb2.corpinfra.be for both. Or use all of them; three or more geographically dispersed DNS servers are recommended anyway. Which to choose depends on who owns and manages the zones.

What to choose?

Option | Description                                       | Pros                          | Cons
1      | Add dns-lb1 and dns-lb2 to the zone file          | Quick fix, minimal disruption | The zones may be managed by different entities
2      | Update .be delegation to match zone NS (ns1, ns2) | Clean and consistent          | Requires coordination with DNS Belgium
3      | Unify both delegation and zone NS records         | Most elegant                  | Requires full agreement between all parties

All three options are valid, but Option 3 is the most elegant and future-proof. That said, this is a valid configuration as is, and one might argue that the strictness of Azure's DNS resolver is the cause of the issue. Sure, but in a world where DNSSEC is growing in importance, such strictness might well become more common. Additionally, if the service configuration could handle a longer timeout, that would also address this issue. However, that is outside my area of responsibility.

Simulation: Resolver Behavior

Resolver           | Behavior with Mismatch | Notes
Azure DNS resolver | SERVFAIL               | Strict DNSSEC & delegation checks
Google Public DNS  | Resolves normally      | Tolerant of NS mismatches
Cloudflare DNS     | Resolves normally      | Ignores delegation inconsistencies
Unbound (default)  | May vary               | Depends on configuration flags
BIND (strict mode) | SERVFAIL               | Enforces delegation consistency

Notes

  • No glue records are needed for coast.be, because the NS records point to a different domain (corpinfra.be), so-called out-of-bailiwick name servers, and .be correctly delegates using standard NS records.
  • After making changes, flush the DNS caches.

Conclusion

When wading through the RFCs, we can summarize the findings as follows.

RFC Summary: Parent vs. Child NS Record Consistency

RFC      | Section     | Position on NS Matching     | Key Takeaway
RFC 1034 | §4.2.2      | No mandate on matching      | Describes resolver traversal and authoritative zones, not strict delegation consistency
RFC 1034 | §6.1 & §6.2 | No strict matching rule     | Discusses glue records and zone cuts, but doesn't say they must be aligned
RFC 2181 | §5.4.1      | Explicit: child may differ  | The parent's NS records are not authoritative for the child; the child can define its own set
RFC 4035 | §2.3        | DNSSEC implications         | Mismatched NS sets can cause issues with DNSSEC validation if not carefully managed
RFC 7719 | Glossary    | Reinforces delegation logic | Clarifies that delegation does not imply complete control or authority over the child zone

In a nutshell, RFC 2181 Section 5.4.1 is explicit: the NS records in a parent zone are authoritative only for that parent, not for the child. That means the child zone can legally publish entirely different NS records, and the RFC allows it. So, why is there an issue with some DNS resolvers, such as Azure?

Azure DNS “Soft” Enforces Parent-Child NS Matching

Azure DNS resolvers implement strict DNS validation behavior, which aligns with principles of security, reliability, and operational best practice, not just the letter of the RFC. This is a soft enforcement; the name resolution does not fail.

Why

1. Defense Against Misconfigurations and Spoofing

Mismatched NS records can indicate stale or hijacked delegations.

Azure treats mismatches as potential risks, especially in DNSSEC-enabled zones, and returns SERVFAIL to warn about potential spoofed responses, but does not fail the name resolution.

2. DNSSEC Integrity

DNSSEC depends on a trusted chain of delegation.

If the parent refers to NS records that don’t align with the signed child zone, validation can’t proceed.

Azure prioritizes integrity over leniency, which is why there is stricter enforcement.

3. Predictable Behavior for Enterprise Networks

In large infrastructures (like hybrid networks or private resolvers), predictable resolution is critical.

Azure’s strict policy ensures that DNS resolution failures are intentional and traceable, not silent or inconsistent like in looser implementations.

4. Internal Resolver Design

Azure resolvers often rely on cached referral points.

When those referrals don’t match authoritative data at the zone apex, Azure assumes the delegation is unreliable or misconfigured and aborts resolution.

Post-mortem summary

Azure DNS resolvers enforce delegation consistency by returning a SERVFAIL error when parent-child NS records mismatch, thereby signaling resolution failure rather than silently continuing or aborting. While RFC 2181 §5.4.1 allows child zones to publish different NS sets than the parent, Azure chooses to explicitly flag inconsistencies to uphold DNSSEC integrity and minimize misconfiguration risks. This deliberate error response enhances reliability in enterprise environments, ensuring resolution failures are visible, traceable, and consistent with secure design principles.

This was a perfect storm. A too-tight timeout setting in the service (which I do not control), combined with the rigorous behavior of the Azure DNS resolver, fronted by a custom DNS solution required to serve all possible DNS needs in the environment, resulted in longer recursive DNS resolution times that finally tripped up the calling service.

Use DNS Application Directory Partitions with conditional forwarders to resolve Azure private endpoints


Before I explain how to use DNS Application Directory Partitions with conditional forwarders, we need to set the stage. We will briefly revisit how DNS name resolution is set up and configured for hybrid Azure environments. That will help you understand why DNS Application Directory Partitions are useful in such scenarios.

Active Directory Domain Services extended to Azure

In the context of this article, an Azure hybrid environment is one where you have Active Directory Domain Services (ADDS) extended to Azure. That is, you have at least one AD site on-premises and at least one AD site in Azure, with connectivity between the two set up via ExpressRoute or a site-to-site VPN, the firewall configured, etc. In most cases, the DNS servers for the ADDS environment are AD-integrated, which is what we want for this scenario.

You must have DNS name resolution sorted out effectively and efficiently in such an environment. Queries from on-premises to Azure need to be resolved, and so do queries from Azure to on-premises.

To resolve private endpoints in Azure, and potentially other private DNS zones in Azure, we need to leverage conditional forwarders. That means we must create all the relevant public DNS zones as conditional forwarders on the on-premises domain controllers. We point these at our custom DNS servers in Azure, or at an Azure Firewall DNS proxy that points to our custom DNS servers in Azure. The latter is the better option if it is available. Those custom DNS servers will most likely be our AD/DNS servers in Azure. These will forward the queries to the Azure VIP 168.63.129.16, which will let Azure DNS handle the actual name resolution.
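
As a minimal sketch, creating such a conditional forwarder on an on-premises DC/DNS server could look like this; the zone is one of the Azure public DNS zones, and the forwarder IPs (the Azure custom DNS servers or the firewall DNS proxy) are made-up examples:

# Forward queries for an Azure public DNS zone to the custom DNS servers in Azure
Add-DnsServerConditionalForwarderZone -Name 'blob.core.windows.net' `
    -MasterServers '10.10.0.4', '10.10.0.5'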

Conditional forwarders on-premises

There are two attention points in this scenario. First, the conditional forwarders should only exist on the on-premises DC/DNS servers. That is normal. The DC/DNS servers in Azure can just forward the queries to the Azure VIP, which will have the Azure recursive DNS service query the private DNS zones and provide an answer. That is why we forward the on-prem queries to them directly or via the firewall DNS proxy.
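
For the Azure-side DC/DNS servers, that is just a standard server-level forwarder. A sketch, with a hypothetical computer name:

# Point the Azure DC/DNS server at the Azure platform resolver
# (Set-DnsServerForwarder replaces the existing forwarder list)
Set-DnsServerForwarder -IPAddress '168.63.129.16' -ComputerName 'AZ-DC01'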

Secondly, and less frequent but more common than you might think, ADDS on-premises does not always translate into a single Azure tenant or deployment. You can have multiple AD sites in Azure for the same on-premises ADDS environment. That happens through different business units, mergers and acquisitions, politics, life, or whatever.

Example on-prem / Azure ADDS environment with Azure FW DNS proxy.

Both attention points mean that we must ensure that the on-prem conditional forwarders only live on the DC/DNS servers that forward to the correct custom DNS services in Azure. Some DC/DNS servers on-premises might send their queries to Azure AD site 1 and others to Azure AD site 2, which might live in separate tenants or Azure deployments.

How do we achieve this?

One option is to create the conditional forwarders without storing and replicating them to all DC/DNS servers in the forest or the domain. That works quite well, but it leaves the burden of configuring them on every DC/DNS server where required. That's OK; PowerShell is your friend. See PowerShell script to maintain Azure Public DNS zone conditional forwarders – Working Hard In IT for a script that does that for you.

But another option might be handy if you have, say, 10 on-premises Active Directory sites and 5 Azure Active Directory sites. That option is called DNS partitioning. You can create your own Active Directory application directory partitions and add the desired DC/DNS servers to them. You can then create your conditional forwarders, store them in Active Directory, and replicate them to their respective custom partitions. That leaves the flexibility to keep the conditional forwarders off the Azure DC/DNS servers and enables different conditional forwarder configurations per on-premises Active Directory site.

DNS Application Directory Partitions

To create the DNS application directory partitions, you can use PowerShell or the dnscmd command-line tool. I will use PowerShell here. If you want to use the command line, look at How to create and apply a custom application directory partition on an Active Directory-integrated DNS zone on how to do this.

Add-DnsServerDirectoryPartition -Name "OP-BLUE-ADDS-SITE"

The command above did two things: it created the application directory partition and registered the DNS server on which you ran it. You can test that with Get-DnsServerDirectoryPartition -Name "OP-BLUE-ADDS-SITE", or with Get-DnsServerDirectoryPartition -Name "OP-BLUE-ADDS-SITE" -ComputerName 'DC01' if you want to specify the computer name.

By the way, if you run just Get-DnsServerDirectoryPartition, you will see all partition info for the current node or the node you specify.

Register the second DC/DNS server in this partition with the following command.

Register-DnsServerDirectoryPartition -Name "OP-BLUE-ADDS-SITE" -ComputerName 'DC02'

This returns nothing by default, or an error in case something is wrong. Check your handiwork with the command below.

Get-DnsServerDirectoryPartition -Name "OP-BLUE-ADDS-SITE" -ComputerName 'DC02'

Note that the Get-DnsServerDirectoryPartition only shows the registered DNS server for the node you are running it on or the one you specify. You do not get a list of all registered servers.

Now go ahead and store some zones in Active Directory, replicated to the BLUE partition, on one of the DC/DNS servers, and you will see the ZoneCount go up on both. Just wait for replication to do its job, or force replication to happen.

Storing the conditional forwarder in your DNS application directory partition

It is easy to store conditional forwarders in your custom DNS application directory partition. You can do this by adding or editing a conditional forwarding zone. However, be aware of the bug I wrote about in Bug when changing the “store this conditional forwarder in active directory” setting.

When you create the conditional forwarder zones for the private endpoints, you can store them in Active Directory and replicate them to their respective partitions. Just select the correct partition in the drop-down list. You will only see the partitions for which your DC/DNS server has been registered, not every existing partition.

Select the correct DNS application directory partition in the drop-down list
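
A PowerShell equivalent of what the GUI does here, as a minimal sketch; the zone name and forwarder IPs are the made-up examples from earlier, and the partition is the one created above:

# Store the conditional forwarder in the custom application directory partition
Add-DnsServerConditionalForwarderZone -Name 'blob.core.windows.net' `
    -MasterServers '10.10.0.4', '10.10.0.5' `
    -ReplicationScope 'Custom' `
    -DirectoryPartitionName 'OP-BLUE-ADDS-SITE'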

When done, the properties for the conditional forwarder will show that the zone is stored in Active Directory and replicated to all domain controllers in a user-defined scope.

Conditional forwarder replicated to all DCs in a user-defined scope

Create a partition for each collection of DC/DNS servers whose Azure private endpoint DNS queries must go to a specific Azure deployment. Depending on your situation, that might be one partition or more.

As I mentioned, a custom partition will not even be offered on any DC/DNS server that has not been enlisted in that partition. This protects against people selecting the wrong custom partition for their environment.

Conclusion

That's it. You now have one more option in your tool belt when configuring on-premises to Azure name resolution in hybrid scenarios. The fun thing is that I have never seen more people learn about using DNS Application Directory Partitions with conditional forwarders than now, when they have to design and configure DNS for hybrid on-premises/Azure ADDS environments. Maybe you learned something new today. If so, I am happy you did.

PowerShell script to maintain Azure Public DNS zone conditional forwarders

Introduction

I recently wrote a PowerShell script to maintain Azure Public DNS zone conditional forwarders. If you look at the list, it is quite long. Adding these manually is tedious and error-prone. Sure, you might only need a few, but hey, I think and prepare long-term.

Some background on DNS and private endpoints

When using private endpoints in Azure, correct DNS name resolution is essential. While Azure can do a lot of things for you under the hood, it is important to wrap your head around name resolution in Azure for all your public, private, and custom DNS requirements. In the end, you need a DNS solution that is maintainable and works for current and future use cases. Your peaceful IT existence will fall apart fast without the ability to correctly resolve private endpoint IP addresses to their fully qualified domain names (FQDNs).

That in itself is a big subject I will not dive into right now. I will say that hosts files (if applicable) are OK for testing but not a maintainable solution, except for the smallest environments. In Azure, you can link virtual networks to Private DNS zones to resolve DNS queries for private endpoints. As an alternative, you can use your own custom DNS server(s) with a forwarder to Azure's VIP 168.63.129.16 and, at least on-premises, conditional forwarders.

The latter is a requirement for resolving DNS queries for Azure resources with private endpoints from on-premises. At least until Azure DNS Private Resolver becomes generally available; that will be the way forward if you otherwise have no need for custom DNS servers.

Please note that name resolution for private endpoints uses the public DNS zones. This allows existing Microsoft Azure services with DNS configurations for a public endpoint to keep functioning when accessed from the internet. Azure will intercept queries that originate from Azure or connected on-premises locations and reply with the private IP address of private endpoints. This configuration must be overridden to connect using your private endpoint.
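
Inside Azure, that override happens by linking your virtual networks to the Private DNS zones. A sketch using the Az.PrivateDns module; the resource group, VNet, and link names are made-up examples:

# Link a VNet to the Private DNS zone so its VMs resolve private endpoint records
$vnet = Get-AzVirtualNetwork -ResourceGroupName 'rg-network' -Name 'vnet-hub'
New-AzPrivateDnsVirtualNetworkLink -ResourceGroupName 'rg-dns' `
    -ZoneName 'privatelink.blob.core.windows.net' `
    -Name 'link-vnet-hub' `
    -VirtualNetworkId $vnet.Id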

On-premises DNS Servers

While your custom DNS servers in Azure can forward queries they are not authoritative for to the Azure VIP 168.63.129.16, on-premises servers cannot reach that IP address. They need to send the DNS queries for private Azure resources to a custom DNS server in Azure via conditional forwarding. The Azure custom DNS server will then forward the query to 168.63.129.16.

Example on-prem / Azure ADDS environment with Azure FW DNS proxy

PowerShell script to maintain Azure Public DNS zone conditional forwarders

The script consumes a CSV file I created with all the Azure public DNS conditional forwarder zones. Zones with placeholders for regions, partitions, or SQL instances will be generated; for that, you need to provide the correct parameters. If you do not, these zones are ignored.

Adding all Azure public DNS conditional forwarders to an on-premises DNS server
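
The actual script lives in the GitHub repository linked below; here is a minimal sketch of the core idea, assuming a hypothetical CSV with a ZoneName column and made-up forwarder IPs:

# Add any missing Azure public DNS zones as conditional forwarders
$forwarders = '10.10.0.4', '10.10.0.5'
Import-Csv -Path '.\AzurePublicDnsZones.csv' | ForEach-Object {
    if (-not (Get-DnsServerZone -Name $_.ZoneName -ErrorAction SilentlyContinue)) {
        Add-DnsServerConditionalForwarderZone -Name $_.ZoneName -MasterServers $forwarders
    }
}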

Another attention point is the fact that you can opt to store the zones in Active Directory or not. If so, you can specify in which built-in or custom partition.

There are examples in the script TestAzurePublicDNSZoneForwardersScript.ps1 on how to use it. You will need at least one playground DNS server or, better, two AD-integrated DC/DNS servers for testing.

You can find the script at WorkingHardInIT/AzurePublicDnsZoneForwarders (github.com)

Conclusion

That's it. I can extend the PowerShell script to maintain Azure Public DNS zone conditional forwarders with extra options when adding or updating conditional forwarders. Right now, for its current role, it does what I need. I do not plan to add an option to update the "store this conditional forwarder in Active Directory" setting, as changing that setting triggers a bug.

See Bug when changing the "store this conditional forwarder in active directory" setting for more info. The gist is that changing the setting causes DNS queries for the conditional forwarder to fail. We avoid that issue by removing and re-adding the conditional forwarders. In many (most?) use cases so far, the default setting of not storing the conditional forwarder in Active Directory is what I need, so the script has no option to change that default setting until I might need it.


Bug when changing the "store this conditional forwarder in active directory" setting

Recently, I encountered a bug when changing the "store this conditional forwarder in active directory" setting. I have been doing quite a few Active Directory extensions to Azure lately. Part of that, as a post-configuration step, is making sure that DNS name resolution from on-premises to Azure and vice versa works optimally. When it comes to resolving Azure private endpoints and other private DNS zones from on-premises, we need to add the conditional forwarders for the respective Azure DNS zones.

As we have different needs for this configuration on-premises versus in Azure, we disable "Store this conditional forwarder in Active Directory, and replicate as follows" for all zones. This is the default when you add a conditional forwarder.

However, in certain cases you will also need to do this for other conditional forwarders, depending on the DNS infrastructure between Azure and on-premises. I tend to change those non-Azure resource conditional forwarders before I add the ones needed for Azure.

The "store this conditional forwarder in active directory" setting

While that sounds easy enough, you can easily get into a pickle. When you change this setting, the configuration seems perfectly fine, but name resolution for those zones stops working. That is bad. No bueno!

That can break a lot of services and applications, leading to support calls, upset application owners, and lost revenue, while leaving you scrambling to find a fix.

So how do we fix this?

Well, the only solution is to remove each and every conditional forwarder involved and add them again. While re-adding them, you might get an "unknown error" in the GUI, but ignore it; just go ahead. When your reverse lookup zones are in order, it will resolve to the FQDN, and name resolution will start working again. You can also use PowerShell or the command line. It is worth checking whether changing the setting via PowerShell or the command line triggers the bug or not.
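
A minimal sketch of that remove-and-re-add fix in PowerShell, using a hypothetical zone name and made-up forwarder IPs; run it on every affected DNS server:

# Remove the broken conditional forwarder and recreate it
# (not stored in Active Directory, which is the default)
$zone = 'blob.core.windows.net'
$forwarders = '10.10.0.4', '10.10.0.5'
Remove-DnsServerZone -Name $zone -Force
Add-DnsServerConditionalForwarderZone -Name $zone -MasterServers $forwarders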

Please note that, as you are not replicating the conditional forwarders in Active Directory, you must do this on all on-premises DNS servers involved in resolving Azure resources.

Is this a known bug?

Well, it looks like it, but I have yet to find a knowledge base article about it. There are mentions of other people running into the issue, and it is not per se Azure-related. Take a look at DNS Conditional Forwarder stops working as soon as it's Domain Replicated – Microsoft Q&A and AD Integrating conditional DNS forwarders stops them working (microsoft.com).

Note that this bug when changing the "store this conditional forwarder in active directory" setting appears whether you enable or disable it.

This bug has existed for many years and across many versions of Windows DNS. My last encounters were with Windows Server 2019 and 2022, but beware with Windows Server 2016 and 2012 (R2) as well.