DCDIAG.EXE Problem On Windows 2008(R2): VerifyEnterpriseReferences indicates problem “Missing Expected Value” & points to Knowledge Base Article: Q312862

I was preparing to replace some 5 year old DELL PE1850 servers running Active Directory with new DELL R610 servers when the DCDIAG.exe output showed a possible issue with SYSVOL FRS and some missing expected value.

Starting test: VerifyEnterpriseReferences

The following problems were found while verifying various important DN

references.  Note, that  these problems can be reported because of

latency in replication.  So follow up to resolve the following

problems, only if the same problem is reported on all DCs for a given

domain or if the problem persists after replication has had reasonable time to replicate changes.

[1] Problem: Missing Expected Value

Base Object: CN=DC1,OU=CITY,OU=Domain Controllers,DC=corp,DC=com

Base Object Description: "DC Account Object"

Value Object Attribute Name: msDFSR-ComputerReferenceBL

Value Object Description: "SYSVOL FRS Member Object"

Recommended Action: See Knowledge Base Article: Q312862

The log points to Knowledge Base Article Q312862, but that has no relevance here. This is a phantom error that appears under the following circumstances. It occurs on Windows 2008 or Windows 2008 R2 when you are running at the Windows 2008 or Windows 2008 R2 domain functional level. Since Windows 2008, the File Replication Service (FRS) that SYSVOL uses can be replaced with the Distributed File System Replication (DFSR) service as used by DFS. If you're not yet running DFSR where you can (which is highly recommended, see http://blogs.technet.com/b/askds/archive/2010/04/22/the-case-for-migrating-sysvol-to-dfsr.aspx, but not required), you'll see this error show up when running DCDIAG.exe, so no real issue at all.

There are lots of posts on the internet pointing to various possible issues or causes: http://social.technet.microsoft.com/Forums/en-US/winserverDS/thread/2ce07c3f-9956-4bec-ae46-055f311c5d96/ & http://social.technet.microsoft.com/Forums/en-IE/winserverDS/thread/3062d40a-b73e-42ea-b27a-e817ee29abc1. But before you worry too much, I suggest you check that everything that has to do with replication is running well. If so, and you're running at the Windows 2008 or Windows 2008 R2 domain functional level, you'll see this error go away once you complete your migration to DFSR.
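Before you go chasing ghosts, a quick health check from an elevated command prompt on a DC goes a long way to confirm replication is fine and to see where you stand with the SYSVOL migration (dfsrmig.exe is only relevant once you're at the Windows 2008 domain functional level):

repadmin /replsummary
dcdiag /test:VerifyEnterpriseReferences /v
dfsrmig /getglobalstate
dfsrmig /getmigrationstate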

So, to recap, if you have a well maintained & working Active Directory, do not panic when you see some warnings or failures in diagnostic test results. Make sure things are indeed fine and, if you conclude that you don't have any lingering problems, do some further research into what the real reason might be. This phantom error is a fine example of that.

There is an absolutely brilliant step-by-step guide to getting the move from FRS to DFSR completed without a problem in a series by the storage team at Microsoft. You can find the first of the five-part blog series here: http://blogs.technet.com/b/filecab/archive/2008/02/08/sysvol-migration-series-part-1-introduction-to-the-sysvol-migration-process.aspx.
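For quick reference, the migration itself is driven by dfsrmig.exe in stages. A minimal sketch of the command sequence (the blog series above walks through each stage properly, so treat this only as a memory aid and let every state replicate to all DCs before moving on):

dfsrmig /setglobalstate 1
dfsrmig /getmigrationstate
dfsrmig /setglobalstate 2
dfsrmig /getmigrationstate
dfsrmig /setglobalstate 3
dfsrmig /getmigrationstate

State 1 is Prepared, 2 is Redirected and 3 is Eliminated; once you reach Eliminated, FRS no longer replicates SYSVOL and there is no going back.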

While you are at it: if you're still running DFS in Windows 2000 native mode, you might want to upgrade that as well. More on that later 🙂

Windows 2008 R2: The system image restore failed. Error details: The parameter is incorrect. 0x80070057

Note to self: read your own blogs on Windows 2008 R2 Native Backup :-). Yes people, a Windows 2008 R2 bare metal restore to dissimilar hardware does work as long as you follow the rules and guidelines. Those are not super evidently documented, but still, if I can find 'm you can too! Today, however, we lost some time because we didn't heed one of the rules that trips people up frequently. That rule is that the disk layout on the restore server can't differ from the original one. I literally wrote "Pay close attention to the disk layout/ boot order as well, the restore doesn't allow for variation from the original layout" in https://blog.workinghardinit.work/2010/01/27/using-windows-2008-r2-backups-to-go-virtual-2/. That means you need to simulate the same disk layout on the new hardware. If the new server has an extra disk, disable that one for the restore; if it has one less, add one. Another situation where the disk layout comes into play is when you boot from a USB stick with W2K8R2. If you leave it plugged in during the restore, the recovery will fail: since that extra attached disk isn't the one containing the backup image, you'll get a very harsh error:

“The system image restore failed. Error details: The parameter is incorrect. 0x80070057”


Not very helpful as an explanation, but it generally means you've got a disk layout issue, in this case because you still have the bootable USB stick attached. Once you've booted into the "Repair your computer" functionality, selected "Select a system image backup" and found your image to restore, you should remove the bootable USB stick from the server if you're not going to be doing an install. Beware of this! Typically when you boot from DVD or PXE you wouldn't even notice, but when using a bootable USB device with W2K8R2 you might forget that this changes the disk layout. So again, always pull the bootable USB stick from the server before you restore and you'll be fine. The recovery will still work: as soon as you've booted you don't need the media anymore, so you can unplug it safely. You can even attach another USB disk in its place containing the backups if you only have one USB port available. That will work because the disk with the backup itself is never taken into consideration and won't cause any issues with the restore.
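If you want to double check what the recovery environment actually sees before kicking off the restore, you can open a command prompt from the recovery options and list the disks with diskpart. Assuming you just unplugged the bootable USB stick, it should no longer show up:

diskpart
list disk
exit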

So we'll never forget to heed our own warnings again (I hope). The good thing is we had some refresher training on restoring today and it's all fresh in our minds 🙂

Reflections on Getting Windows Network Load Balancing To Work (Part 2)

This is part 2 in series on Windows Network Load Balancing. Part 1 can be found here: https://blog.workinghardinit.work/2010/07/01/reflections-on-getting-windows-network-load-balancing-to-work-part-1/

On Default Gateways, Routing & Forwarding.

Here’s a bullet list of what people tend to trip over when configuring NLB network settings.

  • No support for multiple Default Gateways that are on multiple subnets
  • The default gateway does not have to be empty on the NLB NIC
  • The Private and the NLB NIC can be on separate or the same subnets
  • You can have multiple Default Gateways if they are on the same subnet
  • Don’t forget about static routes where and when needed.
  • Beware of the strong host model in Windows 2008 (R2) for both IPv4 & IPv6 (in W2K3 it applied only to IPv6)
  • Mind the order of the connections in Adapters and Bindings.

Now let’s address the subjects in this list.

No support for multiple Default Gateways that are on multiple subnets

When using IP addresses from different subnets you cannot have a default gateway on every NIC, because that will cause routing issues. This is no different for the NICs used in Windows NLB. So you can have only one NIC with a default gateway, and if the other NICs need to route somewhere you need to add static persistent routes. Those routes must be persistent or they will not survive a reboot of the server. In the figure below you see a classic two-NIC NLB cluster with the default gateway empty on the NLB NIC. This could be a valid setup for an intranet. You add routes for the subnets in the company that need to be able to talk to the NLB cluster and you're golden. The Private NIC gets a default gateway and acts like any other NIC in your network.

In this example we have the default gateway on the Private NICs so they can route internally and to the internet. If you need traffic to & from the internet from the NLB NIC, you could enable forwarding on the NLB NIC or enable weak host behavior, which can be done in a more granular fashion than what you achieve by enabling forwarding. If you only need to route internally we could use the same approach of enabling forwarding instead of adding static persistent routes for the NLB NIC. But then you don't isolate & protect traffic that neatly and it will route to everywhere the default gateway can get.

So we prefer to play with static persistent routes in this case. We’ll briefly look at some examples now. If you only need to route internally (i.e. to reach the database or a client PC) from the NLB NIC we add the needed static persistent routes on the NLB NICs using the route command.

In order for the NLB NICs to reach the database with strong host model and no forwarding enabled:

Route add -p 10.30.0.0 mask 255.255.0.0 10.10.0.1

To reach the client PC’s:

Route add -p 10.20.0.0 mask 255.255.0.0 10.10.0.1

(Using route print you can look at the routes and using route delete you can get rid of them.)

Or by using netsh, (it’s advised to use netsh from Windows 2008 on)

netsh interface ipv4 add route 10.30.0.0/16 “NLB NIC” 10.10.0.1

netsh interface ipv4 add route 10.20.0.0/16 “NLB NIC” 10.10.0.1

(You can look at the routing table by using netsh interface ipv4 show route; with netsh interface ipv4 delete route you get rid of them. See http://technet.microsoft.com/en-us/library/cc731521(WS.10).aspx for more information.)
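For completeness, removing those routes again looks like this with both tools (shown for the 10.30.0.0/16 example from above):

route delete 10.30.0.0 mask 255.255.0.0

netsh interface ipv4 delete route 10.30.0.0/16 "NLB NIC" 10.10.0.1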

You could also connect to the database over the PRIVATE NIC and then you don’t need that route. If you can configure it like that it’s a good solution. But all situations differ.

You can also play with the weakhost / stronghost model behaviour:

netsh interface ipv4 set interface "Private NIC" weakhostsend=enabled

netsh interface ipv4 set interface "Private NIC" weakhostreceive=enabled

netsh interface ipv4 set interface "NLB NIC" weakhostsend=enabled

netsh interface ipv4 set interface "NLB NIC" weakhostreceive=enabled

Now don't just blindly enable this on every NIC you can find on the server. Test what you really need and use only that. I leave that as an exercise to the readers. It really depends on the needs of your particular situation 🙂. Keep in mind that when you enable weakhostsend and weakhostreceive on every NIC you revert your Windows 2008 servers back to Windows 2003 behavior, and that might not be needed or wanted. So just enable what you need for optimal security.

Naturally enabling forwarding will do the trick as well, as this creates a weak host model. Depending on how many NICs you use and how traffic must flow you might have to do it on more than one NIC, normally the one(s) without a default gateway.

netsh interface ipv4 set interface “NLB NIC” forwarding=enabled

 

If you want to see the configuration of the NIC you can run:

netsh interface ipv4 show interfaces level=verbose

That will produce something like below:

Interface Local Area Connection Parameters

IfLuid                             : ethernet_5
IfIndex                            : 3
State                              : connected
Metric                             : 10
Link MTU                           : 1500 bytes
Reachable Time                     : 21500 ms
Base Reachable Time                : 30000 ms
Retransmission Interval            : 1000 ms
DAD Transmits                      : 3
Site Prefix Length                 : 64
Site Id                            : 1
Forwarding                         : disabled
Advertising                        : disabled
Neighbor Discovery                 : enabled
Neighbor Unreachability Detection  : enabled
Router Discovery                   : dhcp
Managed Address Configuration      : enabled
Other Stateful Configuration       : enabled
Weak Host Sends                    : disabled
Weak Host Receives                 : disabled

Use Automatic Metric               : enabled
Ignore Default Routes              : disabled
Advertised Router Lifetime         : 1800 seconds
Advertise Default Route            : disabled
Current Hop Limit                  : 0
Force ARPND Wake up patterns       : disabled
Directed MAC Wake up patterns      : disabled


The default gateway does not have to be empty on the NLB NIC

It is not a hard requirement to leave the default gateway on the NLB NIC empty and put it on the private NIC. You can set it on the NLB NIC and leave the private NIC's gateway empty instead. You can see an example of this in the demo. This is the best choice in my opinion when you need the NLB NIC to route to destinations you don't know how to reach, i.e. the internet, so for public websites. That is exactly the prime function of the default gateway: when you don't know where to send traffic, send it to the default gateway. If you need to reach other internal subnets from the Private NIC, just use static routes. Don't use the NLB NIC for that, as it is internet facing in this case. You can see an example of this in the figure below. Also note that in this case you do not have to enable forwarding on the NIC using netsh, as the NIC that has to answer to the unknown IP address has the default gateway. This setup works great, for example, in a managed domain environment for internet access where the NLB NICs are internet facing and the private NIC is for management, Active Directory, backups, etc.

In this example we have the default gateway on the NLB NICs so they can route internet traffic. Any routes needed in the Private NIC subnet are added as persistent static routes. An example of this is to reach the database server.

As traffic from the Private range is never supposed to go via the NLB Public range and vice versa, we do not need to care about forwarding or strong host/weak host models. We can keep traffic nicely separated, and that is a good thing. If you build this on Windows 2008 (R2) just like you did on Windows 2003, it will work out of the box and you might not even know about the change in default behavior from the weak host model to the strong host model.

To get the PRIVATE NIC to reach the database server you’d add static routes and be done with it.

Add needed static persistent routes using the route command:

Route add -p 10.20.0.0 mask 255.255.0.0 172.16.2.1

Or by using netsh, (it’s advised to use netsh from Windows 2008 on)

netsh interface ipv4 add route 10.20.0.0/16 “PRIVATE NIC” 172.16.2.1

No requirement to have different subnets for Private and NLB NICs / Multiple gateways when the subnets are the same

There is no requirement to have different subnets for every NIC. Sometimes I read on forums that this is a requirement when someone is having issues, but it's not. You can also experiment with multiple default gateways if they are on the same subnet (WARNINGS APPLY*).

So here you can play with giving every NIC a default gateway (same subnet, so no issues), with static persistent routes, with enabling forwarding and with weak host / strong host configuration. I tend to use only one gateway and use static persistent routes. If I need to relay I'll go for a minimal weak host configuration or revert to forwarding.

WARNINGS APPLY*: When you start having multiple NICs for multiple NLB clusters on the same NLB nodes, things can get a bit complicated and unpredictable. So I prefer only to use a default gateway on both NICs when you have two NICs: one for private (management) traffic and one for the NLB cluster traffic. Once you have multiple NICs for multiple NLB clusters (1 private NIC + 2 or more NLB cluster NICs) you can no longer play this game safely, even if they are all on the same subnet, without running into trouble, as I have experienced. You can get an event id 18 "NLB cluster [X.X.X.X]: NLB detected duplicate cluster subnets. This may be due to network partitioning, which prevents NLB heartbeats of one or more hosts from reaching the other cluster hosts. Although NLB operations have resumed properly, please investigate the cause of the network partitioning". Also, in this situation you can't have a default gateway on the management NIC and one on one of the NLB NICs while leaving the second NLB NIC without a default gateway. Forget that. You can get issues with a node remaining in "converging" forever and, what's worse, the NLB cluster will send traffic to all nodes so 1/x connections will fail. Rebooting one node might help, but once you reboot them both you run the risk of this happening and you really don't want that. Once you're dealing with multiple cluster IP addresses on multiple separate NICs you'd better stick to one default gateway on one of the NICs and nowhere else. This kind of makes me wonder whether it's pure luck that it works with two cluster NICs or not; with multiple NICs and with reboots of the nodes I know we run into trouble and that's no good.

It's also smart not to mix static routes with forwarding to achieve the same thing. And please use the exact same configuration for each particular NIC on every node. Not one node with NLB NIC 1 routing via static routes and the other node using forwarding on NLB NIC 1. That's asking for inconsistent behavior.

We’ll briefly look at some examples now.

If you only need to route internally (i.e. to reach the database or a client PC) we add the needed static persistent routes on the NLB NICs using the route command.

In order for the NLB NICs to reach the database with strong host model and no forwarding enabled:

Route add -p 10.30.0.0 mask 255.255.0.0 10.10.0.1

To reach the client PC’s:

Route add -p 10.20.0.0 mask 255.255.0.0 10.10.0.1

(Using route print you can look at the routes and using route delete you can get rid of them.)

Or by using netsh, (it’s advised to use netsh from Windows 2008 on)

netsh interface ipv4 add route 10.30.0.0/16 “NLB NIC” 10.10.0.1

netsh interface ipv4 add route 10.20.0.0/16 “NLB NIC” 10.10.0.1

(You can look at the routing table by using netsh interface ipv4 show route; with netsh interface ipv4 delete route you get rid of them. See http://technet.microsoft.com/en-us/library/cc731521(WS.10).aspx for more information.)

You can also just enter the default gateway on the NLB NICs as well. Since all NICs are on the same subnet this will cause no issues. Just remember that traffic will also go to wherever that gateway routes, even to the internet.

We already know we can play with the weakhost / stronghost model:

netsh interface ipv4 set interface "Private NIC" weakhostsend=enabled

netsh interface ipv4 set interface "Private NIC" weakhostreceive=enabled

netsh interface ipv4 set interface "NLB NIC" weakhostsend=enabled

netsh interface ipv4 set interface "NLB NIC" weakhostreceive=enabled

Again, don't just blindly enable this on every NIC you can find on the server. Test what you really need and use only that. I leave that as an exercise to the readers. As I've said before, it really depends on the needs of your particular situation. Keep in mind that when you enable weakhostsend and weakhostreceive on every NIC you just revert your Windows 2008 server to Windows 2003 behavior, and that might not be needed or wanted. So just enable what you need for optimal security.

There is a very good explanation of strong and weak host behavior by "The Cable Guy" at http://technet.microsoft.com/en-us/magazine/2007.09.cableguy.aspx. I strongly advise you to go take a look.

And naturally enabling forwarding will do the trick in this scenario as well, as this creates a weak host model. Depending on how many NICs you use and how traffic must flow you might have to do it on more than one NIC, normally the one(s) without a default gateway.

netsh interface ipv4 set interface “NLB NIC” forwarding=enabled

When & Why Use Three NICs or more?

NLB supports using multiple network adapters to configure separate clusters. This allows for configuring multiple independent clusters on each host. We used to have only virtual clusters, meaning that you could configure multiple clusters on a single network adapter. Anyone who has ever had to troubleshoot networking or configuration issues on a production NLB cluster will appreciate the ability to limit interruptions and problems to one cluster instead of two or more. As an example of this, I had to troubleshoot a two-node NLB implementation for the Exchange CAS/HUB roles. The NLB cluster for the CAS role had this very issue, but since it was running on its own cluster with a separate NIC, the HUB role NLB cluster had no issues whatsoever. Another good reason to use more NICs is to separate traffic, for example FTP versus HTTP on the same NLB cluster.

One of the worst things that can happen is an issue that messes up the proper functioning of NLB itself. In that case, even if the virtual IP remains available, no host or only some of the hosts get network traffic. That means the cluster is unavailable or only partially responding. This is a bad situation to be in and can be hard to troubleshoot. Since it's a high availability technology, you can bet someone is looking over your shoulder with a vested interest in getting it resolved as soon as possible.

Mind the order of the connections in Adapters and Bindings

Make sure the PRIVATE NIC that is to be used for private network traffic (DNS, AD, RDP, …) is listed first. That prevents any issues (speed, functionality) with those services and your experience will be much better. This is illustrated in the figures below. LAN-HUB is the PRIVATE NIC here. The others are for NLB (yup, it's an Exchange 2010 setup).

Conclusion & recapitulation

I’ll finish with some closing musings on single & multiple default gateway and getting/sending network traffic where it needs to go.

When you enter a gateway on a second, third and subsequent NIC, next to the one on the first NIC, you'll get a warning:

—————————

Microsoft TCP/IP

—————————

Warning – Multiple default gateways are intended to provide redundancy to a single network (such as an intranet or the Internet). They will not function properly when the gateways are on two separate, disjoint networks (such as one on your intranet and one on the Internet). Do you want to save this configuration?

—————————

Yes No

—————————

This will not work reliably when you have multiple subnets. This is why you use static persistent routing entries. Depending on your needs you can also use forwarding or the weak host model, and even combine those with static persistent routes if needed or desired. The above also means that if you have multiple NICs with IP addresses on the same subnet, you can indeed enter a default gateway on all of them.

If you don't have or cannot have a default gateway filled in, you are left with two options. If you know what needs to go where, you can add static routes, which is basically telling the NIC the IP of a gateway to send traffic to for a certain destination. This assumes you can reach that IP, that a route is defined for the source and destination of the traffic, that the firewalls allow for it, etc.

If you have no route or you can't specify one (i.e. you can't predict where traffic will have to go), you have one other option left and that is to route the traffic via the NIC that does have a default gateway. This used to work out of the box on Windows 2003 and earlier, but it doesn't work out of the box since Windows 2008 (R2). That is because by default NICs in Windows 2008 (R2) operate in a strong host model, so a NIC will not receive or send traffic destined for, or originating from, an IP address other than its own. For that you'll need to set the NIC properties to weak host send and receive, or you need to enable forwarding. Actually, forwarding is disabled by default on Windows 2003 as well; the big difference is that Windows 2003 operates in a weak host manner (send/receive) as opposed to the strong host model of Windows 2008 (R2). By enabling forwarding we put the Windows 2008 server in weak host mode and as such it works (see RFC 1122). On the internet you'll find both solutions, but the link between the two is often never made. Using weak host receive and weak host send allows for more granular, custom configurations than forwarding.
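To summarize those options in command form, using the example NLB NIC and gateway from earlier (pick one approach per NIC and keep it identical on every node):

Static persistent route on the NIC without a default gateway:

netsh interface ipv4 add route 10.20.0.0/16 "NLB NIC" 10.10.0.1

Weak host behavior, enabled only in the direction you actually need:

netsh interface ipv4 set interface "NLB NIC" weakhostreceive=enabled

Forwarding, which puts that NIC in weak host mode:

netsh interface ipv4 set interface "NLB NIC" forwarding=enabled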

Contact me via the web site or leave a comment if you have any questions or suggestions.

Post Script / Side Note because someone asked 🙂

Basically you can have multiple gateways on a server but only one default gateway in use at a time. You can add more than one default gateway on the same NIC, but then the extra ones will only be used when the first default gateway is not available; it will then try the next one and so forth. You can add multiple gateways to a single NIC, or one or more to multiple NICs, but that can get messy very quickly. Whether it is wise to provide gateway redundancy in such a manner is another discussion. See also KB article http://support.microsoft.com/kb/157025. Be mindful of the extra configuration you'll need (Dead Gateway Detection). This is a rather uncommon scenario on a Windows server. You can use it for redundancy or when you want the traffic to go to a certain default gateway instead of another when it is available (to separate traffic, for example for cost reasons or to reduce the traffic load).
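If you do want to experiment with that, remember that a default gateway is nothing more than a 0.0.0.0/0 route, so a backup default gateway is just a second 0.0.0.0/0 route with a higher metric. A sketch, assuming the NIC is called "Private NIC" and 10.10.0.254 is a hypothetical second gateway:

netsh interface ipv4 add route 0.0.0.0/0 "Private NIC" 10.10.0.254 metric=20 store=persistent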

And then there’s adding a default gateway that’s on another subnet than the IP address of the NIC. In that case you get this warning:


—————————

Microsoft TCP/IP

—————————

Warning – The default gateway is not on the same network segment (subnet) that is defined by the IP address and subnet mask. Do you want to save this configuration?

—————————

Yes No

—————————

All pretty cool stuff you can do to mess with people's heads and their understanding of what's going on. (It can work if the router on the local subnet has a route to the subnet where that default gateway lives and proxy ARP is working… but we're not going to turn this into a networking course, or pretty soon we'll be installing RRAS and turning the server into a router.)

Pollution of the Gene Pool: A Real Life "FTP over SSL" Story

Imagine you get asked to implement a secure temporary data exchange solution for known and authenticated clients as fast as possible. You're told to use what's already available, so no programming, buying products or using services. The data size can be a few KB to hundreds of megabytes, or even more. At that moment they were already using FTP, both anonymous and with clear text authentication, but obviously that's very insecure. You're told they need the solution a.s.a.p., meaning by the end of the week. So what do you do? You turn to FTP over SSL in Windows 2008 (IIS 7.0, Release To Web -RTW- download) or Windows 2008 R2 (IIS 7.5, integrated), as the one thing the company did allow for was the cost of a commercial SSL certificate, and they had Windows 2008. If you want to read up on configuring that, please have a look at the following entries, http://learn.iis.net/page.aspx/304/using-ftp-over-ssl/ and http://learn.iis.net/page.aspx/309/configuring-ftp-firewall-settings/, where you'll find lots of practical guidance.

You set it all up and test it: user folder isolation, NTFS permissions regulated with domain groups, virtual directory links for common data folders between users, etc. It all looks pretty good & is very cost effective. Customers start using it and if they have a problem they are helped out by the service desk. Good, mission accomplished you'd think. Except for someone who is not having any of that insecure, firewall-breaching FTP over SSL and starts kicking and screaming. The gross injustice of being forced into opening some ports in their firewall is unacceptable. That same someone, who has been using clear text authentication for FTP downloads for many years and never even blinked at that, has now discovered "security".

FTP in a Security-Conscious World

We live, for all practical purposes, in a NAT/PAT & firewall world. These things became necessities of life after the FTP protocol was invented. You see, IPv4 has come a long way since its creation, as have the protocols used over it. But originally, by design, it was not meant to provide security, just communications. Security in those early days was armed military personnel guarding physical buildings where you had access to the network, and if you didn't belong there they'd just shoot you. As a result TCP/IP is a lot like a flower power love child living in a very secure universe where everyone loves everyone. Fast forward 30 years and that universe looks more like something out of a post-apocalyptic movie like Doomsday or Mad Max. If you don't have security you become road kill, and rather fast. So we built security on top of TCP/IP and we retrofitted it to the stack (a lot of the security in IPv6 was back ported to IPv4). We also invented firewalls acting like the walls of medieval castles. To add some more complexity, there was not enough IPv4 love (i.e. public IP addresses) to go around, which makes them expensive and/or unavailable. Network Address Translation came to the rescue. So we ended up where we are today with hundreds of millions of private IP range networks that are connected to the internet through NAT/PAT and are protected by firewalls. The size of these private networks ranges from huge corporate entities in the Fortune 500 list to all those *DSL & cable modem/routers in our homes.

All of this makes the FTP protocol go “BOINK”. FTP needs two connections and quite liberal settings to work. But as the security story above indicates the internet world has moved from free love to the AIDS era so that doesn’t fly anymore. We need and have protection. But we also need to make FTP work.

Let’s first look at the basics. FTP client software needs two connections between the client and the server. One is the control channel (port 21 server side) the other is the data channel (port 20 server side). On the client side dynamic ports are used (1024-65535). These two connections present a problem for firewalls.

So port 21 needs to be allowed through the firewall on the FTP server side. That's pretty easy, but it's not enough. Port 21 is the control channel that we use to connect, authenticate and even delete and create directories if you have the correct file system permissions. To view and browse/traverse folder structures and to exchange data we need that data channel to pass through the firewall as well. That's a dynamic port on the client that the server connects to from port 20. Firewall admins and dynamic ports don't get along very well. You can't say "open up the range 1024 to 65535 for me will you?" to firewall administrators without being escorted out of the building by physical security people.

But still FTP seems to work, so how does this happen? For that purpose a lot of firewall/NAT devices make life a bit more secure and a lot easier by pro-actively looking at the network traffic for FTP packets and opening the required dynamic port automatically for the duration of the connection. This is called stateful FTP. Since this is the default behavior of a lot of SOHO firewall/NAT devices, most people don't even realize this is happening. You do not need to define rules that punch holes in the firewall. Instead the firewall punches them transparently when needed for FTP traffic. This is a risk, as it happens without the users even being aware of it, let alone knowing what ports are being used. This isn't very pretty but it works quite well.

Here’s an illustration of Active FTP in action


You see, initially there was only Active FTP, which is very client-side firewall unfriendly because it means opening up dynamic ports on the client side for traffic initiated by a remote FTP server. This needed to be fixed. That fix is Passive FTP, described in RFC 1579 "Firewall Friendly FTP". Here it is the server that listens passively on a dynamic port and the client that connects actively to that port. So Passive FTP makes the automatic punching of holes for incoming FTP traffic in the firewall/NAT devices more secure on the client side. With passive FTP the server does not initiate the data connection, the client does. When the client contacts the FTP server on port 21 it gets a response, then the client asks for passive FTP using the PASV command. The FTP server responds by setting up a dynamic port to which the client can connect, and notifies the client of that port in its reply to the PASV command. Outgoing traffic initiated by the client, from a dynamic port to a port on the FTP server, is more firewall friendly (i.e. more secure) for the clients and thus more easily accepted by the security administrators. On the server side it is somewhat less secure.
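To make that a bit more tangible, this is roughly what the control channel dialogue looks like for a passive data connection (the exact reply texts vary per server; the address and port match the NAT example further down):

Client: PASV
Server: 227 Entering Passive Mode (192,168,1,32,203,8)
Client: opens a data connection to 192.168.1.32 on port 203*256+8 = 51976
Client: RETR somefile.zip
Server: 150 Opening BINARY mode data connection
Server: 226 Transfer complete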


Be aware that there are FTP clients which you need to explicitly configure for passive FTP (internet browsers, basic FTP client software). Some old or crappy clients don't even support it, but that should be rare nowadays. When the client software automatically tries both active and passive to connect, the user often doesn't even know what's being used, which can lead to some confusion. Also keep in mind that often multiple firewalls are involved, both on the hosts and on the edge of both the client and FTP server networks, and they all need the proper configuration.

As an example of client side stuff to keep in mind: Configuring Internet Explorer to use Passive FTP and making sure ftp can also be used in Windows Explorer.


Improving FTP Security

One of the ways to reduce the number of ports that are used, and as a result must be opened on the firewalls involved, is to use a small predefined range of dynamic ports. Good FTP servers allow for this, and so do IIS 7.0 and IIS 7.5. This reduces the number of ports to be allowed through, and thus the conflicts with the security people, enormously.
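In IIS 7.0/7.5 the data channel port range is a server level setting which you can configure in the FTP Firewall Support feature of IIS Manager or from the command line. A sketch with appcmd, assuming a hypothetical range of 50000-50050 (verify the attribute names against the firewall settings article linked earlier, and restart the Microsoft FTP Service afterwards so the new range is picked up):

appcmd set config -section:system.ftpServer/firewallSupport /lowDataChannelPort:50000 /highDataChannelPort:50050 /commit:apphost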

Now when we use FTP over SSL it becomes a practical necessity to use a small pre-defined range of dynamic ports. Snooping around in the packets to see if it's FTP traffic so that dynamic ports can be opened on the fly just doesn't work anymore, because the traffic is encrypted. Opening thousands of ports is not an option; those would become targets of attacks. Another hiccup you can trip over is that some firewalls by default block SSL/TLS traffic on any port other than 443 (HTTPS).

So what do we need for FTP over SSL/TLS:

· Use Passive FTP and port 21 (Explicit SSL) or 990 (Implicit SSL)

· Select a small range of dynamic ports to define on the firewall and communicate it to your clients. This range needs to be opened in the outgoing rules for the clients that want to connect and in the incoming/outgoing rules on the server side. Both the FTP server and the FTP clients need to respect this range.

· Use an FTP client that supports FTP over TLS. I used passive FTP with Explicit SSL to maintain the default port 21 for the control channel. If the client doesn't negotiate data encryption we refuse the connection. See FTPS on http://en.wikipedia.org/wiki/FTPS for more information on this.

· Buy a commercial SSL certificate from a trusted source (VeriSign, Comodo, GoDaddy, Thawte, Entrust, …)

By using a commercial SSL certificate that securely identifies and verifies the FTP server, by limiting the communication through the firewall to some well-defined ports and by only allowing that traffic between a limited number of hosts, the risks are reduced immensely. The risks avoided are connecting to falsified hosts, password sniffing and data theft. The traffic that is allowed is far less risky and dangerous than anonymous or, what they used to do and allow, clear text authentication to non-verified servers on the internet. But still some people insisted that the FTP over SSL solution was introducing a serious security risk. Really? And this isn't the case with passive FTP without SSL? Sure it is, you just don't realize that it happens and allow FTP traffic to a wide range of dynamic ports and unknown hosts. So frankly, crying wolf about properly configured FTP over SSL is like using "coitus interruptus" for birth control because you've read that condoms are not 100% failsafe. You'll end up pregnant and infected with AIDS. That kind of logic is pure gene pool pollution. It's also proof of an old saying: "never argue with an idiot, they drag you down to their level and beat you with experience".

Beware of NAT/PAT

As we mentioned in the beginning, NAT has its own issues to deal with, so we still have to touch on the subject of NAT/PAT with FTP servers. Let's first look at what is needed to make this work. You have already seen how the basics of a passive FTP data connection work: the client sends a PASV command and the server responds by entering passive mode and telling the client what port to use.

Now with NAT/PAT devices the IP address needs to be swapped around. To do this, these devices sniff the network traffic for the PASV command and its reply to find what port is used, and turn the FTP server response from "227 Ok, Entering Passive Mode (192,168,1,32,203,8)" into "227 Ok, Entering Passive Mode (193,211,10,27,203,8)".

As you can see, the private IP address (blue, the first 4 numbers) is swapped for the public IP address (green) on which the FTP server is reachable, while the port to use (red) is retained. The last two numbers, in red, describe the port number as follows: 203*256+8 = 51976. When the client connects, the reverse process takes place and the public IP is swapped for the private one.

[Figure: passive FTP NAT rewrite of the PASV response]

You can already see where this is going with SSL. The NAT/PAT device cannot sniff the traffic for the PASV & PORT commands to see on what dynamic port the data channel should be established, and due to the encryption it also cannot alter the PASV response to swap around the IP addresses.

The best solution to this is to specify a firewall helper address for passive FTP, which we can set to the public IP address of our FTP server. Your FTP server must support this; you'll find that IIS 7.0 and IIS 7.5 do.
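In IIS 7.0/7.5 that is the "External IP Address of Firewall" setting under FTP Firewall Support, configured per site. A sketch with appcmd, assuming a hypothetical site name of "FTPOverSSL" and using the public address from the example above (again, double check the exact attribute path against the firewall settings article linked earlier):

appcmd set config -section:system.applicationHost/sites "/[name='FTPOverSSL'].ftpServer.firewallSupport.externalIp4Address:193.211.10.27" /commit:apphost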

Other possible solutions and workarounds are:

· FTP clients that "guess" the address to use when the IP address in the PASV reply doesn't work (that would be an internal private range IP address). They then try to use the public IP address to establish the connection, which can work as the chance is that it is the public IP address of the FTP server or the public IP address of the NAT/PAT device. No guarantees are given that this will work.

· NAT/PAT devices sometimes allow specified port ranges to be forwarded to a specific IP address. So you could configure this for the small range of dynamic ports you defined for Passive FTP.

· Some FTP servers support the EPSV command (Extended Passive Mode), which only sends the port; the IP address used is the one the control connection was established to (see the example right after this list).
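To illustrate that last point: an EPSV reply only contains a port number, so there is no embedded IP address for the NAT device to rewrite (reply text again varies per server):

Client: EPSV
Server: 229 Entering Extended Passive Mode (|||51976|)
Client: opens the data connection to the same address it used for the control channel, on port 51976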

Be Mindful of Load Balancing on Server and/or Client Side

If load balancers are in play we must make sure that the communication always goes via the same node and IP address when using SSL, or you'll break SSL. If multiple IP addresses are used to route certain traffic via a certain device, make sure the FTP client doesn't switch to another IP address for the data connection, as this will fail. Both the control and data channels must use the same IP address or passive FTP will fail, even without using SSL. Also don't forget that some customers use load balancers to route traffic based on purpose, cost, redundancy, etc. So this is also a concern on the client side. In the IIS log you'll see that it complains about IP addresses that do not match. I've had this happen at two customer sites; it was easily fixed, but took some intervention by their IT staff. Luckily they both had a competent SMB IT consulting firm looking after their infrastructure.

Table with FTP risks and mitigations

RISK                                 | MITIGATION                                  | RESULT
Server connects to client            | Use passive FTP                             | Client initiates connection
Dynamic ports in use                 | Select smaller fixed range of ports         | Less ports to open on firewall
Server not verified                  | Use commercial SSL certificate              | Server can be verified
Authentication not encrypted         | Use SSL for authentication                  | Authentication encrypted
Data not encrypted                   | Use SSL for data transport                  | Data transport encrypted
Connections from & to unknown hosts  | Allow only trusted clients and/or servers   | No more FTP from/to any host.