Passive FTP over SSL support in Azure Firewall

Introduction

Since Azure does not offer full-blown FTPS as a service, I usually migrate existing Windows Server-based FTPS solutions to IaaS in Azure. That means a virtual machine in Azure. Many such implementations or lift-and-shift projects leverage Active Directory integrated authentication. As a result, they fit well into many projects where a small percentage of on-premises workloads have to live on in an IaaS ADDS Azure environment anyway.

The reasons for this vary, but most often, these workloads cannot be refactored or replaced by cloud-native solutions, either because of the technology required or because of budget constraints. Ultimately, one cannot expect to swap out all currently used technology for newer technology all of the time. Such is the reality of real-world IT unless you cherry-pick projects to only deal with cloud-native solutions. Since Azure landing zones often leverage Azure Firewall, we have arrived at the topic of this blog post: configuring passive FTP over SSL support in Azure Firewall.

Passive FTP over SSL support in Azure Firewall

A quick search on the internet leads to results quickly confirming that Azure Firewall supports passive FTP. As expected, this requires DNAT rules for port 21 and the chosen port range for the FTP data channel. The official Microsoft documentation confirms this. It mentions that you must “configure FTP server to accept data and control channels from different source IP addresses.” I glanced over that one, and I should not have done that. On the other hand, that line is about accessing an FTP server on the Internet with the FTP client behind the Azure Firewall. That is the Outbound VNet – Internet (FTP client in VNet, server on Internet) section in the Microsoft docs.

Here, I run the FTP server behind Azure Firewall (a secured hub in Azure Virtual WAN) with the client on the Internet. That is covered in the Inbound DNAT (FTP client on Internet, server in VNet) section, which does not mention configuring the FTP server to accept data and control channels from different source IP addresses.
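To make that concrete, here is a minimal Azure PowerShell sketch of what such DNAT rules could look like. The rule names, public IP, private IP, and the single example data port are all made-up values, not the configuration from this post, and you would repeat or script the data channel rule for your entire chosen passive port range.

# Hypothetical DNAT rules for the FTP control channel and one passive data port.
$ftpControl = New-AzFirewallPolicyNatRule -Name 'DNAT-FTP-Control' -Protocol TCP `
    -SourceAddress '*' -DestinationAddress '203.0.113.10' -DestinationPort 21 `
    -TranslatedAddress '10.10.10.4' -TranslatedPort 21

$ftpData = New-AzFirewallPolicyNatRule -Name 'DNAT-FTP-Data-50000' -Protocol TCP `
    -SourceAddress '*' -DestinationAddress '203.0.113.10' -DestinationPort 50000 `
    -TranslatedAddress '10.10.10.4' -TranslatedPort 50000

# Group the rules in a DNAT rule collection that you then add to a rule
# collection group of the firewall policy.
$natCollection = New-AzFirewallPolicyNatRuleCollection -Name 'FTP-DNAT' -Priority 100 `
    -ActionType Dnat -Rule $ftpControl, $ftpData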

The FTP Server

I restored the original FTPS virtual machine (FTP via IIS on Windows Server 2022) to Azure using Veeam Backup & Replication v12. That went smoothly. After fixing the IP configuration in the FTP sites, I was ready to configure the NSGs and Azure Firewall with all the needed rules.

Initial testing and troubleshooting

Using the FileZilla FTP client, I started testing FTPS connectivity over explicit FTPS, which initially seemed to work well. But while browsing through the folders, I soon noticed that I had directory listing issues in about 50% of cases. I quickly tried WinSCP as an FTPS client to see if that worked as expected, but I had the same issue.

I checked the Azure Firewall DNAT rules, and these were all in place and correct. Next, I checked the rules in the NSG on both the subnet and the NIC of the FTPS virtual machine to see if I missed something. Those were all in order as well. Next up was the Windows Firewall, but again, all was well. Finally, I tested FTP from a virtual machine in the same subnet, bypassing the Azure Firewall, and everything worked as expected. Comparing the FTP logs, I could see some differences in data channel logging but no errors.

So I tried Passive FTP without SSL. But again, I had the same issue. The error message is a bit different, as we don’t have TLS failures in this case.

OK, so what’s happening here?

Wireshark

As I often do, I resorted to using Wireshark on the FTPS virtual machine to get more insight into what was happening. Things got interesting very quickly.

Passive FTP without TLS

I am using passive FTP without TLS in the screenshots, which shows the FTP commands better.


Let’s look at the Wireshark capture we made when connecting and navigating the folders.

In the green rectangle, you see us connect and navigate to /Public/A-A-Test03. That works well. Observe that all control and data channel traffic (fluorescent green marks data channel activity) comes from 10.250.250.69. Next, look at the red rectangle. Control channel (10.250.250.69) and data channel (fluorescent red, 10.250.250.70) traffic comes in from different IP addresses. We have found why the directory listing fails with IIS’s default and most secure data channel security configuration. We now also realize what “configure FTP server to accept data and control channels from different source IP addresses” means in the Azure Firewall documentation for passive FTP.

Passive FTP with TLS

The same is true for passive FTP over TLS; the capture is slightly less readable and more verbose, but the directory listing fails again. Note that in FileZilla, we also see an extra error: “The data connection could not be established: ECONNABORTED – Connection aborted” as the TLS data connection also fails.


Note the red rectangle again where, at a given moment, while navigating directories, the data channel comes in from a different IP (red fluorescent), and the directory listing fails.


Searching for answers

The good news is that I now know clearly when and why directory listing fails. That is possibly due to how the Azure Firewall redundant nodes behind the internal load balancers deal with passive FTP. That is me speculating here, not an official statement ;-). The next step is figuring out how to solve this issue. I have no control over how Azure Firewall handles this. Replacing Azure Firewall with a 3rd party solution where we know this to work might not work in Azure, and it should not be the first and final answer anyway.

Some internet searching led me to a solution I was eager to try. Again, the information in the Azure Firewall docs should link to the IIS/FTP documentation on how to do this. The fact remains, however, that the Microsoft docs mention this for outbound access from a client in a VNet to an FTP server on the Internet. We do the opposite. We have our FTP server in a VNet behind Azure Firewall, and the FTP client is on the Internet. For that case, no mention is made of configuring the FTP server to accept data and control channels from different source IP addresses. Also note that in our case the source addresses are private, not public.

Read this article and note the following.

Implementing the workaround

Nevertheless, that does indeed ring a bell with what we see in Wireshark and with that line in the Azure Firewall documentation we glanced over so cavalierly. It does not seem like a good idea for security reasons, but let’s try it! You can find where this setting lives at https://learn.microsoft.com/en-us/previous-versions/iis/settings-schema/ff713742(v=vs.90).

Navigate to the IIS Server and, under “Management” in the middle pane, open the configuration editor.

Once there, open the system.applicationHost/sites node and navigate to siteDefaults/ftpServer/security/dataChannelSecurity. Find the matchClientAddressForPasv element and set it to false.

Apply your configuration change, and your passive FTP(S) experience will consistently work when accessed over an Azure Firewall!
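If you prefer to script that change rather than use the configuration editor, a minimal PowerShell sketch, assuming the WebAdministration module that ships with the IIS FTP role, could look like this.

# Allow the FTP data channel to come from a different source IP than the
# control channel by setting matchClientAddressForPasv to false in the
# FTP site defaults.
Import-Module WebAdministration

Set-WebConfigurationProperty -PSPath 'MACHINE/WEBROOT/APPHOST' `
    -Filter 'system.applicationHost/sites/siteDefaults/ftpServer/security/dataChannelSecurity' `
    -Name 'matchClientAddressForPasv' `
    -Value $false

# Restart the Microsoft FTP service so the change takes effect.
Restart-Service FTPSVC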

Now let’s try passive FTP again. With or without TLS, it now works like a charm. No more failures to retrieve directory listings at all! In Wireshark, you will still see that the data channel and control channel can come from a different IP (Redundant Azure Firewall with internal load balancers), but it no longer breaks FTP. Hoorah!

Musings on this setting

I have set up many FTPS solutions with Windows Server IIS FTP, and I have never needed to make this configuration change before to get passive FTP to work. Somehow it always worked behind Check Point, Cisco ASA, or Barracuda firewalls. My explanation is that these hide the two separate internal IPs of the redundant firewall by using a VIP. Fair enough. Ultimately, I was lucky to have always used FTP with IIS. That solution allowed us to solve the issue with a configuration change.

That used to be possible in other popular FTP server software, like older versions of FileZilla Server. Today, it is no longer available in recent versions. The reason is that this poses a security risk.


The IIS documentation also mentions this and is clear that it is preferable not to change this setting.

Ultimately, with Azure Firewall, even though the MSFT docs do not mention it for this scenario, we must set matchClientAddressForPasv to false under data channel security in the IIS FTP configuration. If not, client connections from the Internet to our passive FTP server behind an Azure Firewall will not work reliably. Maybe this is due to the fact that this is a Virtual WAN setup with a secured hub and not a traditional hub/spoke network?

Finally, I am not looking forward to discussing this matter with the CISO office. The implication for the future of the Azure landing zone’s firewall of choice (i.e., Azure Firewall) is that it will be a point of discussion again. Now, I’ll leave it to security experts to decide how dangerous this setting is when the two servers involved are both nodes of a redundant firewall.

Conclusion

First of all, RTFM. That’s it. Read The Fucking Manual. Now that is all good and well, but you need to understand what you read. You also need to know what that means for all parts of your configuration. Add to that the fact that what I read was for the scenario where the FTP server lives on the Internet and the FTP client sits behind Azure Firewall. We have the opposite, but Wireshark does not lie, and the “fix” makes it work. However, troubleshooting this made me comprehend it very well, as I concluded what to try next before revisiting the documentation. The docs initially confused me, as I was not supposed to apply this workaround in this scenario, but the hint about what to try was in there.

Another conclusion is that this gives the Azure Firewall haters ammo for their case for 3rd party firewalls. They now have a less secure passive FTP server configuration to point at, which is always an excellent drum to beat on, especially when other FTP servers no longer allow this configuration. To add insult to injury, Microsoft warns against making this FTP data channel security change.

You can say that this is a niche use case, perfection is not of this world, and that this setting’s security risk is benign. Maybe so, but there is always a but! First, FTPS is more common than you might want to hear or think for valid use cases. Second, it all adds up, and death by a thousand little cuts does occur.

Presenting at and attending Experts Live Europe 2023


I am happy to share that I am both presenting at and attending Experts Live Europe 2023. It runs September 18-20, 2023, in Prague.


Isidora Katanic (@IsidoraKatanic) is the lead organizer and driving force behind Experts Live Europe. She’s dedicated to making this one of the best Microsoft technology-focused conferences in Europe. You can already see that when you look at the pre-conference and session calendar. She and her team have lined up everything to make the 2023 edition a great professional and community experience. Experts Live Europe is a two-day conference (three days with the pre-conference workshops) and is scheduled in Prague, September 18-20, 2023. It is the first edition since 2019 due to the Corona/Covid pandemic. Personally, I am happy that this is possible again, and I know many others are too. Next to that, I am thrilled to share my research and expertise at this conference once more.

In my session “Azure Storage – The SMB over QUIC protocol is here!” I will be diving deeper into the why and how of SMB over QUIC.


This is a very powerful and promising, relatively recent addition to the SMB 3 stack. Once again, it shows that SMB file sharing is far from obsolete in the era of anything “cloud”.

Meet the experts and ask me anything galore

This conference is about you and me, about us, sharing insights, experiences, knowledge, and expertise. Both the concept and the setup of the conference facilitate this by design.


While I’m there, come say hi and talk shop about networking, storage, clustering, Hyper-V, DevOps, Bicep, and Veeam data protection in on-premises, hybrid, and Azure scenarios. I’ll be around during the breaks for the “Ask The Expert” sessions and at the dedicated speaker’s booth in the expo area. Next to being a Microsoft MVP, I am also a Veeam Vanguard. Veeam is a gold sponsor, and I’ll be around their booth as well. So come find me if you want to talk about Veeam Backup & Replication, hardened (immutable) repositories, and other related subjects.

I am there to learn as well

Finally, I also look forward to the sessions other speakers are giving. One of those sessions, “Azure Firewall: The Legacy Firewall Killer”, is presented by Aidan Finn (@joe_elway). That subject is both very interesting and a bit controversial. Many people know and master 3rd party firewall interfaces with their specific tooling and capabilities. While there is nothing wrong with that, many people scoff at Azure Firewall. But you should not write off Azure Firewall just because you are used to different products. This is especially true when you start delivering Azure Firewall via Infrastructure as Code (IaC).

Call to action

Do not delay! Register to attend Experts Live Europe and do not miss out on a ton of great sessions by expert speakers,  networking with knowledgeable attendees, and talking shop with your fellow IT professionals, who are as passionate about technology as you.  I look forward to seeing you there.


ConvertFrom-Json is not serializable

Introduction

While writing Bicep recently, I was stumped by the fact that my deployment kept failing. I spent a lot of time troubleshooting many possible ideas on what might be causing this. As JSON is involved and I am far from a JSON syntax guru, I first focused on that. Later I moved to how I use JSON in Bicep and PowerShell before finally understanding the problem was due to the fact that ConvertFrom-Json is not serializable.

Parameters with Bicep

When deploying resources in Azure with Bicep, I always need to consider who has to deliver or maintain the code and the parameters. It has to be somewhat structured, readable, and understandable. It can’t be one gigantic listing that confuses people to the point they are lost. Simplicity and ease of use rule my actions here. I know when it comes to IaC, this can be a challenge. So, when it comes to parameters, what are our options here?

  • I avoid hard-coding parameters in Bicep. It’s OK for testing while writing the code, but beyond that, it is a bad idea for maintainability.
  • You can use parameter files. That is a considerable improvement, but it has its limitations.
  • I have chosen the path of leveraging PowerShell to create and maintain parameters and pass those via objects to the main bicep file for deployment. That is a flexible and maintainable approach. Sure, it is not perfect either, but neither am I.

Regarding Bicep and PowerShell, we can also put parameters in separate files and read those to create parameters. Whether this is a good idea depends on the situation. My rule of thumb is that it is worth doing when things become easier to read and maintain while reducing the places where you have to edit your IaC files. In the case of Azure Firewall Policy Rules Collection Groups, Rules collections, and Rules, it can make sense.

Bicep and JSON files

You can read file content in Bicep using the loadTextContent() function. With the json() function, you can tell Bicep that this content is JSON. So far, so good. The below is perfectly fine and works. We can loop through that variable in a resource deployment.

var firewallChildRGCs = [
  json(loadTextContent('./AFW/Policies/RGSsAfwChild01.json'))
  json(loadTextContent('./AFW/Policies/RGSsAfwChild02.json'))
  json(loadTextContent('./AFW/Policies/RGSsAfwChild03.json'))
]

However, I am not entirely happy with this. While I like it in some aspects, it conflicts with my desire to avoid editing a working Bicep file once it is in use. So what do I like about it?

It keeps Bicep clean and concise and limits the looping to iterating over the Rule Collection Groups, thus avoiding the nested looping for Rule Collections and Rules. Why is that? Because I can do this:

@batchSize(1)
resource firewallChildPolicyWEUColGroups 'Microsoft.Network/firewallPolicies/ruleCollectionGroups@2022-07-01' = [for (childrcg, index) in firewallChildRGCs: {
  parent: firewallChildPolicyWEU
  name: childrcg.name
  dependsOn: [firewallParentPolicyWEUColGroups]
  properties: childrcg.properties
}]

As you can see, I loop through the variable and pass the JSON into the properties. That way, I create all Rule Collections and Rules without needing to do any nested looping via “helper” modules to get this done.

The drawback, however, is that the loadTextContent function in Bicep cannot use dynamic parameters or variables. As a result, the paths to the files need to be hard-coded into the Bicep file. That is something we want to avoid, but until that changes, it is a hard restriction. The reason is that parameters are evaluated during runtime (bicep deployment), whereas loadTextContent happens while compiling (bicep build). So, in contrast to the early previews of Bicep, where you “transpiled” the Bicep manually, that is now done for you automatically before the deployment. You might think dynamic paths can work, but they do not.

PowerShell and JSON files

As mentioned above, I chose to use PowerShell to create and maintain parameters, and I want to read my JSON files there. However, it prevents me from creating large, long, and complex to maintain PowerShell objects with nested arrays. Editing these is not straightforward for everyone. On top of that, it leads to the need for nested looping in Bicep via “helper” modules. While that works, and I use it, I find it more tedious with deeply nested structures and many parameters to supply. Hence I am splitting it out into easier-to-maintain separate JSON files.

Here is what I do in PowerShell to build my array to pass via an object parameter. First, I read the JSON files from my folder.

$ChildFilePath = "../bicep/nested/AfwChildPoliciesAndRules/*"
$Files = Get-ChildItem -File $ChildFilePath -Include '*.json' -Exclude 'DONOTUSE*.json'
$Files
# We use this with ConvertFrom-Json to validate that the JSON file is OK,
# but we cannot use it to pass as a param to Bicep.
$AfwChildCollectionGroupsValidate = @()
$AfwChildCollectionGroups = @()
Foreach ($File in $Files) {
    try {
        $AfwChildCollectionGroupsValidate += (Get-Content $File.FullName -Raw) | ConvertFrom-Json
        # DO NOT PUT JSON in here - the PSCustomObject is not serializable and passing this param to Bicep will then be empty!
        $AfwChildCollectionGroups += (Get-Content $File.FullName -Raw) # A string is serializable!
    }
    Catch {
        Write-Host -ForegroundColor Red "ConvertFrom-Json threw an error. Check your JSON in the RCG/RC/R files"
        Exit
    }
}

I can then use this to roll out the resources, as in the below example.

// Roll out the child Rule Collection Group(s)
var ChildRCGs = [for (rulecol, index) in firewallChildpolicy.RuleCollectionGroups: {
  name: json(rulecol).name
  properties: json(rulecol).properties
}]

Initially, the idea was that by using ConvertFrom-Json I would pass the JSON to Bicep as a parameter directly.

$AfwChildCollectionGroups += (Get-Content $File.FullName -Raw) | ConvertFrom-Json

So not only would I not need to load the files in Bicep with a hard-coded path, I would also not need to use the json() function in Bicep.

// Roll out the child Rule Collection Group(s)
var ChildRCGs = [for (rulecol, index) in firewallChildpolicy.RuleCollectionGroups: {
  name: rulecol.name
  properties: rulecol.properties
}]

However, this failed on me time and time again with properties not being found and what not. Below is an example of such an error.

Line |
  30 |          New-AzResourceGroupDeployment @params -DeploymentDebugLogLeve …
     |          ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
     | 2:46:20 PM - Error: Code=InvalidTemplate; Message=Deployment template validation failed: 'The template variable 'ChildRCGs' is not valid: The language expression property 'name' doesn't exist, available properties are ''.. Please see    
     | https://aka.ms/arm-functions for usage details.'.

It did not make sense at all. That was until a dev buddy asked if the object was serializable at all. And guess what? ConvertFrom-Json creates a PSCustomObject that is NOT serializable.

You can check this quickly yourself.

((Get-Content $File.FullName -Raw) | ConvertFrom-Json).gettype().IsSerializable

Will print False

While

((Get-Content $File.FullName -Raw)).gettype().IsSerializable

Will print True
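A quick, self-contained way to see the same difference without any files; the JSON string here is just a made-up example.

$asObject = '{"name":"demo"}' | ConvertFrom-Json
$asString = '{"name":"demo"}'

$asObject.GetType().IsSerializable   # False - PSCustomObject is not serializable
$asString.GetType().IsSerializable   # True  - a string is serializable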

With some more testing and the use of outputs, I can even visualize that the parameter remained empty! The array contains three empty {} where I expected the JSON.


I usually do not have any issues with this in my pure PowerShell scripting. But here, I pass the object from PowerShell to Bicep, and guess what? For that to work, it has to be serializable. Now, when I do this, there are no warnings or errors. It just seems to work until you use the parameter and get errors that, at first, I did not understand. But the root cause is that in Bicep, the parameter remained empty. Needless to say, I wasted many hours trying to fix this before I finally understood the root cause!

As you can see in the code, I still use ConvertFrom-Json to test if my JSON files contain any errors, but I do not pass that JSON to Bicep as that will not work. So instead, I pass the string and still use the json() function in Bicep.
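For completeness, this is roughly how that string array travels to Bicep as part of an object parameter. The resource group name, template path, and parameter name below are assumptions for illustration, not the exact code from my deployment script.

# Wrap the raw JSON strings in the object parameter that Bicep expects.
$firewallChildpolicy = @{
    RuleCollectionGroups = $AfwChildCollectionGroups
}

# Hypothetical deployment call; firewallChildpolicy is declared as an object
# parameter in main.bicep.
$params = @{
    ResourceGroupName   = 'rg-network-weu'
    TemplateFile        = '../bicep/main.bicep'
    firewallChildpolicy = $firewallChildpolicy
}
New-AzResourceGroupDeployment @params -Verbose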

Hence, this blog post is to help others not make the mistake I made. It will also help me remember ConvertFrom-Json is not serializable.

Use DNS Application Directory Partitions with conditional forwarders to resolve Azure private endpoints

Use DNS Application Directory Partitions with conditional forwarders

Before I explain how to use DNS Application Directory Partitions with conditional forwarders, we need to set the stage. We will briefly revisit how DNS name resolution is set up and configured for hybrid Azure environments. That will help you understand why DNS Application Directory Partitions are useful in such scenarios.

Active Directory Domain Services extended to Azure

In the context of this article, an Azure hybrid environment is one where you have Active Directory Domain Services (ADDS) extended to Azure. I.e., you have at least one AD site on-premises and at least one AD site in Azure, with connectivity between the two set up via ExpressRoute or a site-to-site VPN, the firewall configured, etc. In most cases, the DNS servers for the ADDS environment are AD integrated, which is what we want for this scenario.

You must have DNS name resolution sorted out effectively and efficiently in such an environment. Queries from on-premises to Azure need to be resolved, and so do queries from Azure to on-premises.

To resolve private endpoints in Azure, and potentially other private DNS zones in Azure, we need to leverage conditional forwarders. That means we must create all the public DNS zones as conditional forwarders on the on-premises domain controllers. We point these at our custom DNS servers in Azure or at our Azure Firewall DNS proxy, which in turn points to our custom DNS servers in Azure. The latter is the better option if it is available. Those custom DNS servers will most likely be our AD/DNS servers in Azure. These will forward the queries to the Azure VIP 168.63.129.16, which will let Azure DNS handle the actual name resolution.

Conditional forwarders on-premises

There are two attention points in this scenario. First, the conditional forwarders should only exist on the on-premises DC/DNS servers. That is normal. The DC/DNS servers in Azure can just forward the queries to the Azure VIP, which will have the Azure recursive DNS service query the private DNS zones and provide an answer. That is why we forward the on-prem queries to them directly or via the firewall DNS proxy.

Secondly, and less frequently, but more often than you might think, ADDS on-premises does not translate into a single Azure tenant or deployment. You can have multiple AD sites in Azure for the same on-prem ADDS environment. That happens due to different business units, mergers and acquisitions, politics, life, or whatever.

Example on-prem / Azure ADDS environment with Azure FW DNS proxy.

Both attention points mean that we must ensure that the on-prem conditional forwarders only live on the DC/DNS servers that forward to the correct custom DNS services in Azure. Some DC/DNS servers on-premises might send their queries to Azure AD site 1 and others to Azure AD site 2, which might live in separate tenants or Azure deployments.

How do we achieve this?

One option is to create the conditional forwarders without storing and replicating them to all DC/DNS servers in the forest or the domain. That works quite well, but it leaves you with the burden of configuring them on every DC/DNS server where they are required. That’s OK; PowerShell is your friend. See PowerShell script to maintain Azure Public DNS zone conditional forwarders – Working Hard In IT for a script to do that for you.
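As a simple illustration of that first option, the sketch below creates non-AD-integrated conditional forwarders for a few Azure public zones on the local DC/DNS server. The zone list and the forwarder IP (standing in for your Azure Firewall DNS proxy or custom DNS servers in Azure) are made-up examples.

# Hypothetical forwarder target and zone list.
$AzureDnsForwarder = '10.100.0.4'
$PublicZones = 'blob.core.windows.net', 'file.core.windows.net', 'database.windows.net'

Foreach ($Zone in $PublicZones) {
    # Without -ReplicationScope, the conditional forwarder is stored locally,
    # not in Active Directory, so it only exists on this DC/DNS server.
    Add-DnsServerConditionalForwarderZone -Name $Zone -MasterServers $AzureDnsForwarder
}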

But another option might be handy if you have, say, 10 on-premises Active Directory sites and 5 Azure Active Directory sites. That option is a custom DNS application directory partition. You can create your own Active Directory partitions and add the desired DC/DNS servers to them. You can then create your conditional forwarders, store them in Active Directory, and replicate them to their respective custom partition. That leaves the flexibility to keep the conditional forwarders off the Azure DC/DNS servers and enables different conditional forwarder configurations per on-premises Active Directory site.

DNS Application Directory Partitions

To create the DNS application directory partitions, you can use PowerShell or the ‘dnscmd’ command line tool. I will use PowerShell here. If you want to use the command line, look at How to create and apply a custom application directory partition on an Active Directory-integrated DNS zone to see how to do this.

Add-DnsServerDirectoryPartition -Name "OP-BLUE-ADDS-SITE"

The command above did two things. It created the application directory partition and registered the DNS server on which you ran the script in it. You can test that with Get-DnsServerDirectoryPartition -Name "OP-BLUE-ADDS-SITE", or with Get-DnsServerDirectoryPartition -Name "OP-BLUE-ADDS-SITE" -ComputerName 'DC01' if you want to specify the computer name.

By the way, if you run just Get-DnsServerDirectoryPartition, you will see all partition info for the current node or the node you specify.

Register the second DC/DNS server in this partition with the following command.

Register-DnsServerDirectoryPartition -Name "OP-BLUE-ADDS-SITE" -ComputerName 'DC02'

This returns nothing by default, or an error in case something is wrong. Check your handiwork with the command below.

Get-DnsServerDirectoryPartition -Name "OP-BLUE-ADDS-SITE" -ComputerName 'DC02'

Note that the Get-DnsServerDirectoryPartition only shows the registered DNS server for the node you are running it on or the one you specify. You do not get a list of all registered servers.

Now go ahead and store some zones in Active Directory, replicated to the BLUE partition, on one of the DC/DNS servers, and you will see the ZoneCount go up on both. Just wait for replication to do its job or force replication to happen.

Storing the conditional forwarder in your DNS application directory partition

It is easy to store conditional forwarders in your custom DNS application directory partition. You can do this by adding or editing a conditional forwarding zone. However, be aware of the bug I wrote about in Bug when changing the “store this conditional forwarder in active directory” setting.

When you create the conditional forwarder zones for the private endpoints, you can store them in Active Directory and replicate them to their respective partitions. Just select the correct partition in the drop-down list. You will only see the partition for which your DC/DNS server has been registered, not every existing partition.

Select the correct DNS application directory partition in the drop-down list

When done, the properties for the conditional forwarder will show that the zone is stored in Active Directory and replicated to all domain controllers in a user-defined scope.

Conditional forwarder replicated to all DCs in a user-defined scope
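If you prefer PowerShell over the GUI for this step, a minimal sketch could look like the one below. The zone name and forwarder IP (again standing in for your Azure Firewall DNS proxy or custom DNS servers in Azure) are examples.

# Create a conditional forwarder for an Azure public zone, store it in
# Active Directory, and replicate it only to the custom application directory
# partition. Only DC/DNS servers enlisted in that partition receive it.
Add-DnsServerConditionalForwarderZone -Name 'blob.core.windows.net' `
    -MasterServers 10.100.0.4 `
    -ReplicationScope Custom `
    -DirectoryPartitionName 'OP-BLUE-ADDS-SITE'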

Create a partition for every collection of DC/DNS servers that needs to send its Azure private endpoint DNS queries to a specific Azure deployment. So, depending on your situation, that might be one or more partitions.

As I mentioned, the custom partition will not even be offered on any DC/DNS server that has not been enlisted for that partition. This protects against people selecting the wrong custom partition for their environment.

Conclusion

That’s it. You now have one more option on your tool belt when configuring on-premises to Azure name resolution in hybrid scenarios. The fun thing is that I have never seen more people learn about using DNS Application Directory Partitions with conditional forwarders than now, when they have to design and configure DNS for hybrid on-premises/Azure ADDS environments. Maybe you learned something new today. If so, I am happy you did.