Pollution of the Gene Pool: A Real-Life “FTP over SSL” Story

Imagine you get asked to implement a secure temporary data exchange solution for known and authenticated clients as fast as possible. You’re told to use what’s available already, so no programming, no buying products, no using services. The data size can range from a few KB to hundreds of megabytes, or even more. At that point they were already using FTP, both anonymous and with clear text authentication, but obviously that’s very insecure. You’re told they need the solution a.s.a.p., meaning by the end of the week. So what do you do? You turn to FTP over SSL in Windows 2008 (IIS 7.0, Release To Web -RTW- download) or Windows 2008 R2 (IIS 7.5, integrated), because the one thing the company did allow for was the cost of a commercial SSL certificate, and they had Windows 2008. If you want to read up on configuring that, have a look at http://learn.iis.net/page.aspx/304/using-ftp-over-ssl/ and http://learn.iis.net/page.aspx/309/configuring-ftp-firewall-settings/ where you’ll find lots of practical guidance.

You set it all up and test it: user folder isolation, NTFS permissions regulated with domain groups, virtual directory links for common data folders between users, etc. It all looks pretty good & is very cost effective. Customers start using it and if they have a problem they are helped out by the service desk. Good, mission accomplished you’d think. Except that someone is not having any of that insecure, firewall-breaching FTP over SSL and starts kicking and screaming. The gross injustice of being forced to open some ports in their firewall is unacceptable. That same someone has been using clear text authentication for FTP downloads for many years without ever blinking at it, but has now discovered “security”.

FTP in a Security-Conscious World

We live, for all practical purposes, in a NAT/PAT & firewall world. These things became necessities of life after the FTP protocol was invented. You see, IPv4 has come a long way since its creation, as have the protocols used over it. But originally, by design, it was not meant to provide security, just communications. Security in those early days was armed military personnel guarding physical buildings where you had access to the network, and if you didn’t belong there they’d just shoot you. As a result TCP/IP is a lot like a flower power love child living in a very secure universe where everyone loves everyone. Fast forward 30 years and that universe looks more like something out of a post-apocalyptic movie like Doomsday or Mad Max. If you don’t have security you become road kill, and rather fast. So we built security on top of TCP/IP and we retrofitted it to the stack (a lot of the security in IPv6 was back ported to IPv4). We also invented firewalls, acting like the walls of medieval castles. To add some more complexity, there was not enough IPv4 love (i.e. public IP addresses) to go around, which made addresses expensive and/or unavailable. Network Address Translation came to the rescue. So we ended up where we are today: hundreds of millions of private IP range networks that are connected to the internet through NAT/PAT and are protected by firewalls. The size of these private networks ranges from huge corporate entities in the Fortune 500 list to all those *DSL & cable modem/routers in our homes.

All of this makes the FTP protocol go “BOINK”. FTP needs two connections and quite liberal settings to work. But as the security story above indicates, the internet world has moved from free love to the AIDS era, so that doesn’t fly anymore. We need and have protection. But we also need to make FTP work.

Let’s first look at the basics. FTP needs two connections between the client and the server. One is the control channel (port 21 on the server side); the other is the data channel (port 20 on the server side for active FTP). On the client side dynamic ports are used (1024-65535). These two connections present a problem for firewalls.

So port 21 needs to be allowed through the firewall on the FTP server side. That’s pretty easy, but it’s not enough. Port 21 is the control channel that we use to connect, authenticate and even delete and create directories if you have the correct file system permissions. To view and browse/traverse folder structures and to exchange data we need that data channel to pass through the firewall as well. That’s a dynamic port on the client that the server connects to from port 20. Firewall admins and dynamic ports don’t get along very well. You can’t say “open range 1024 to 65535 for me, will you?” to firewall administrators without being escorted out of the building by physical security people.

But still FTP seems to work, so how does this happen? A lot of firewall/NAT devices make life a bit more secure and a lot easier by pro-actively inspecting the network traffic for FTP packets and opening the required dynamic port automatically for the duration of the connection. This is called stateful FTP. It is the default behavior of a lot of SOHO firewall/NAT devices, so most people don’t even realize it is happening. You do not need to define rules that punch holes in the firewall. Instead the firewall punches them transparently when needed for FTP traffic. This is a risk, as it happens without the users even being aware of it, let alone knowing what ports are being used. It isn’t very pretty but it works quite well.

Here’s an illustration of Active FTP in action

[Figure: Active FTP in action]

You see, initially there was only Active FTP, which is very unfriendly to client side firewalls because it means opening up dynamic ports on the client side for traffic initiated by a remote FTP server. This needed to be fixed. That fix is Passive FTP, described in RFC 1579, “Firewall Friendly FTP”. Here it is the server that listens passively on a dynamic port and the client that connects actively to that port. So with Passive FTP the firewall/NAT devices on the client side no longer need to accept incoming connections for the data channel, which is more secure for the client. With passive FTP the server does not initiate the data connection, the client does. When the client contacts the FTP server on port 21 it gets a response, then the client asks for passive FTP using the PASV command. The FTP server responds by opening a dynamic port for the data channel and tells the client which port that is in its reply to the PASV command. Traffic initiated on the client from a dynamic port to a port on the FTP server is more firewall friendly (i.e. more secure) for the clients and thus more easily accepted by the security administrators. On the server side it is somewhat less secure.

[Figure: Passive FTP in action]

Be aware that there are FTP clients which you need to explicitly configure for passive FTP (internet browsers, basic FTP client software). Some old or crappy clients don’t even support it, but that should be rare nowadays. When the client software automatically tries both active and passive to connect, the user often doesn’t even know what’s being used, which can lead to some confusion. Also keep in mind that often multiple firewalls are involved, both on the hosts and at the edge of the client and FTP server networks, and they all need the proper configuration.

As an example of client side stuff to keep in mind: configuring Internet Explorer to use passive FTP and making sure FTP can also be used in Windows Explorer.

[Screenshots: Internet Explorer passive FTP setting and Windows Explorer FTP folder view]

Improving FTP Security

One of the ways to reduce the number of ports that are used, and as a result must be opened on the firewalls involved, is to use a small predefined range of dynamic ports. Good FTP servers allow for this, and so do IIS 7.0 and IIS 7.5. This enormously reduces the number of ports to be allowed through, and thus the conflicts with the security people.
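
If you prefer to script that setting rather than click through IIS Manager, something along the lines of the sketch below should work. Treat it as a minimal, hedged example: it assumes IIS 7.x with the FTP publishing service installed, and the section and property names (system.ftpServer/firewallSupport, lowDataChannelPort, highDataChannelPort) are the ones from the FTP firewall settings documentation linked above, so verify them against your version. The range 50000-50050 is just an example.

Option Explicit

Dim oAdminManager, oFirewallSection
' AHADMIN lets you edit applicationHost.config from script.
Set oAdminManager = CreateObject("Microsoft.ApplicationHost.WritableAdminManager")
oAdminManager.CommitPath = "MACHINE/WEBROOT/APPHOST"
Set oFirewallSection = oAdminManager.GetAdminSection("system.ftpServer/firewallSupport", "MACHINE/WEBROOT/APPHOST")
' Restrict the passive data channel to a small, well known range.
oFirewallSection.Properties.Item("lowDataChannelPort").Value = 50000
oFirewallSection.Properties.Item("highDataChannelPort").Value = 50050
oAdminManager.CommitChanges

You may need to restart the Microsoft FTP Service for a new data channel port range to take effect, and of course that same range still has to be opened on the firewalls and communicated to the clients.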

Now when we use FTP over SSL it becomes a practical necessity to use a small pre-defined range of dynamic ports. Sniffing the packets to figure out which dynamic ports to open on the fly just doesn’t work anymore, because the traffic is encrypted. Opening thousands of ports is not an option either; those would become targets of attacks. Another hiccup you can trip over is that some firewalls by default block SSL/TLS traffic on any port other than 443 (HTTPS).

So what do we need for FTP over SSL/TLS:

· Use Passive FTP and port 21 (Explicit SSL) or 990 (Implicit SSL)

· Select a small range of dynamic ports to define on the firewall and communicate that range to your clients. It needs to be opened in the outgoing rules of the clients that want to connect and in the incoming/outgoing rules on the server side. Both the FTP server and the FTP clients need to respect this range.

· Use an FTP client that supports FTP over TLS. I used passive FTP with Explicit SSL to maintain the default port 21 for the control channel. If the client doesn’t negotiate data encryption we refuse the connection. See http://en.wikipedia.org/wiki/FTPS for more information on this.

· Buy a commercial SSL certificate from a trusted source (VeriSign, Comodo, GoDaddy, Thawte, Entrust, …)

By using a commercial SSL certificate that securely identifies and verifies the FTP server, by limiting the communication through the firewall to a few well-defined ports and by only allowing that traffic between a limited number of hosts, the risks are reduced immensely. The risks avoided are connecting to falsified hosts, password sniffing and data theft. The traffic that is allowed is far less risky and dangerous than anonymous or, what they used to do and allow, clear text authentication to non-verified servers on the internet. But still some people insisted that the FTP over SSL solution introduced a serious security risk. Really? And this isn’t the case with passive FTP without SSL? Sure it is, you just don’t realize that it happens and allow FTP traffic to a wide range of dynamic ports and unknown hosts. So frankly, crying wolf about properly configured FTP over SSL is like using “coitus interruptus” for birth control because you’ve read that condoms are not 100% failsafe. You’ll end up pregnant and infected with AIDS. That kind of logic is pure gene pool pollution. It’s also proof of an old saying: “never argue with an idiot, they drag you down to their level and beat you with experience”.

Beware of NAT/PAT

As we mentioned in the beginning, NAT has its own issues to deal with, so we still have to touch on the subject of NAT/PAT with FTP servers. Let’s first look at what is needed to make this work. You have already seen how the basics of the passive FTP data connection work. The client sends a PASV command and the server responds by entering passive mode and telling the client what port to use.

Now with NAT/PAT devices the IP address needs to be swapped around. To do this these devices sniff the network traffic for the PASV exchange to find out what port is used and turn the FTP server response “227 Ok, Entering Passive Mode (192,168,1,32,203,8)” into “227 Ok, Entering Passive Mode (193,211,10,27,203,8)”.

As you can see the private IP address (the first four numbers) is swapped for the public IP address on which the FTP server is reachable, while the port information (the last two numbers) is retained. Those last two numbers encode the port number as follows: 203*256 + 8 = 51976. When the client connects, the reverse process takes place and the public IP is swapped for the private one.

[Figure: NAT/PAT rewriting the PASV reply]
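
In code, that arithmetic looks like this. A minimal VBScript sketch (the reply string is just the example from above) that pulls the six numbers out of a PASV reply and computes the data channel endpoint:

Option Explicit

Dim sReply, sNumbers, aParts, sServerIP, lDataPort
' Example PASV reply as it comes in over the control channel.
sReply = "227 Ok, Entering Passive Mode (192,168,1,32,203,8)"
' Grab everything between the parentheses and split it on the commas.
sNumbers = Mid(sReply, InStr(sReply, "(") + 1, InStr(sReply, ")") - InStr(sReply, "(") - 1)
aParts = Split(sNumbers, ",")
sServerIP = aParts(0) & "." & aParts(1) & "." & aParts(2) & "." & aParts(3)
lDataPort = CLng(aParts(4)) * 256 + CLng(aParts(5))
WScript.Echo "Data channel: " & sServerIP & ":" & lDataPort ' 192.168.1.32:51976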

You can already see where this is going with SSL. The NAT/PAT device cannot sniff the traffic for the PASV & PORT commands to see on what dynamic port the data channel should be established, and because of the encryption it cannot alter the PASV reply to swap around the IP addresses either.

The best solution to this is to have the FTP server itself hand out the right address: you specify the external (public) IP address of the firewall for passive FTP on the FTP server. Your FTP server must support this; you’ll find that IIS 7.0 and IIS 7.5 do in their FTP Firewall Support settings.

Other possible solutions and workarounds are:

· FTP clients that “guess” the address to use when the IP address in the PASV reply doesn’t work (that would be an internal, private range IP address). They then try the public IP address they already used for the control connection to establish the data connection, which can work, as chances are it is the public IP address of the FTP server or of the NAT/PAT device. No guarantees are given that this will work.

· NAT/PAT devices sometimes allow for specified ranges to be forwarded to a specific IP address. So you could configure this to be the case for the small range of dynamic ports you defined for Passive FTP.

· Some FTP servers support the EPSV command (Extended Passive Mode), which returns only the port; the client simply reuses the IP address it already used for the control connection.

Be Mindful of Load Balancing on Server and/or Client Side

If load balancers are in play we must make sure that the communication always goes via the same node and IP address when using SSL, or you’ll break SSL. If multiple IP addresses are used to route certain traffic via a certain device, make sure the FTP client doesn’t switch to another IP address for the data connection, as this will fail. Both control and data channels must use the same IP address or passive FTP will fail, even without using SSL. Also don’t forget some customers use load balancers to route traffic based on purpose, cost, redundancy, etc., so this is a concern on the client side as well. In the IIS log you’ll see it complain about IP addresses that do not match. I’ve had this happen at two customer sites; it was easily fixed but took some intervention by their IT staff. Luckily they both had a competent SMB IT consulting firm looking after their infrastructure.

Table with FTP risks and mitigations

RISK | MITIGATION | RESULT
Server connects to client | Use passive FTP | Client initiates the connection
Dynamic ports in use | Select a small fixed range of ports | Fewer ports to open on the firewall
Server not verified | Use a commercial SSL certificate | Server can be verified
Authentication not encrypted | Use SSL for authentication | Authentication encrypted
Data not encrypted | Use SSL for data transport | Data transport encrypted
Connections from & to unknown hosts | Allow only trusted clients and/or servers | No more FTP from/to just any host

Reflections on Getting Windows Network Load Balancing To Work (Part 1)

This is part 1 in a series on Windows Network Load Balancing. Part 2 can be found here: https://blog.workinghardinit.work/2010/07/23/reflections-on-getting-windows-network-load-balancing-to-work-part-2/
Introduction

This will not be an extensive NLB installation & configuration manual. You’ll find plenty of material on that searching the internet. I would like to reflect on some issues and options when using Windows Network Load Balancing.

I will not be discussing NLB solutions using just one NIC with multicast. I think they lack so badly in resilience, configuration and troubleshooting capabilities that I never consider using them, not even in the lab. Even in a lab you need to work like in real life, bar some exceptions. Apart from having no free slots in a server to add NICs you have no excuse not to, and even then, just make sure you find a way. NIC ports are very cheap nowadays and especially in a virtual environment there is nothing stopping you from adding some extra virtual ports. Do yourself a favor and always use two or more NIC ports. Even in the year 2000 I grinned when I read that one of the drawbacks was the cost of the extra NIC. Really, you have a real business need and are prepared to pay for multiple servers to set up a Windows Network Load Balancing cluster but you can’t spring for an extra NIC? Remember, in those days servers really meant hardware, and in the Windows 2000 era you needed Windows 2000 Advanced Server or Windows 2000 Datacenter Server.

What I also will not discuss any further beyond the following is hardware load balancing. Yes good hardware load balancers have extra functions and features that can be very valuable and even necessary for certain deployments. They can be rather expensive for some budgets but they are very capable devices. It is up to you as an engineer to look at the needs, the budget, the risks and benefits of technologies for a business case and come up with good, affordable and working solutions. In some cases that solution will be Windows Load Balancing, in other cases it will be hardware load balancing. Needs, circumstances and environments differ, so do the solutions.

Another thing I’ll wipe off the map from the start is the use of a crossover cable to connect the private NICs. Do not use one. It is not supported and will cause issues or fail.

Then there is the confusion around the use of default gateways, the question whether the private and the NLB NIC must or must not be on the same subnet, and the routing and forwarding differences between Windows 2003 & Windows 2008 (R2). These are the issues I’ll address later in Part 2. But first we need to talk about unicast & multicast a bit. This is unavoidable when using Windows Network Load Balancing. To complete the information here I will provide some examples using two NICs on the same and on different subnets with different default gateway and routing solutions, and also an example using multiple independent clusters (3 NICs).

Things to consider when using unicast & multicast

A topic I will not address too much is which is better: unicast or multicast. Well, that depends on the needs, the environment and whether the products or solutions used support it. For example, when using VMware guests you’ll have to use multicast if you want it to work without breaking things like VMotion. Another example: ISA Server 2006 didn’t support multicast until the release of a hotfix (that was later included in SP1 and higher). It also depends on the network gear that’s available, etc.

My take on it all is the following: use what works best given the circumstances. If you have no access to the switch configuration or your networking gear has issues with multicast NLB, you can whine all day long that it’s better than unicast but you won’t get anywhere. When practical I use unicast with multiple NICs, and when the circumstances or the products used allow for it, I use multicast with multiple NICs. Which is best is a discussion that sometimes smells of “mine is bigger than yours” and I hope you never had that phase, and if you did, that you’ve left it far behind together with your other growing pains. Thank you.

Why are Unicast & Multicast so Important

Unicast or multicast mode defines how the MAC address for the cluster virtual IP is handled. The network delivers packets for the cluster virtual IP based on the cluster MAC address advertised by the cluster. That cluster MAC address is used because all traffic for the NLB cluster needs to be delivered to all nodes.

I will not go into detail on how unicast and multicast work. That has been done very well on CISCO’s web site (http://www.cisco.com/en/US/products/hw/switches/ps708/products_configuration_example09186a0080a07203.shtml), on TechNet (http://technet.microsoft.com/en-us/library/cc782694(WS.10).aspx) and by Thomas Shinder (http://www.isaserver.org/articles/basicnlbpart2.html).

Unicast issues to consider
  • You need two NIC ports. This is because of the “bogus MAC address” (see the CISCO link above for an explanation). Oh please … give me a break already! Again, don’t even consider using a single NIC NLB solution in production.
  • Port flooding can’t be stopped at the switch level. A valid argument in many cases.
  • It does work in most environments and with just about all network gear.

The good news is that you can prevent flooding by using a hub, or a switch configured as a hub, in front of the upstream switch. If you have enough nodes in the NLB cluster this might be a good way to go, as you will be attaching 8, 16 or more nodes anyway. If you have only two or three nodes it might be a bit of overkill that takes up room in the rack and uses power. Another way is to use a VLAN to separate the traffic. This works well unless you need the NLB subnet to be the same as the rest or can’t get it configured (politics, rules, existing environment …).

Multicast issues to consider
  • You can use a one NIC solution. Multicast allows setting up an NLB cluster with only one NIC which, by some, is considered a benefit. I think I was very clear already about this. I never implement single NIC Windows Network Load Balancing solutions.
  • Port flooding. But here we have some good news for switch admins: multicast allows you to stop port flooding by using static ARP entries on the switches upstream of your server. This is very valuable. When you only have a couple of nodes in the NLB cluster, or can’t create or use VLANs to separate the NLB traffic, this is a very good reason to use multicast. See also http://www.cisco.com/en/US/products/hw/switches/ps708/products_configuration_example09186a0080a07203.shtml. This is one of the reasons multicast is considered better by some people, but as mentioned you can also prevent flooding by using a “hub” in front of the upstream switch or by separating the traffic in another VLAN, which for larger NLB clusters is not that much overhead. You might still need to do that if for some reason the static ARP entries on the switch ports of the NLB NICs can’t be done. You can also use IGMP snooping to examine the contents of multicast packets and associate a port with a multicast address. If that is not possible the static ARP entries mentioned above do the job.
  • As mentioned on TechNet (http://technet.microsoft.com/en-us/library/cc782694(WS.10).aspx), upstream routers might not support mapping a unicast IP address (the cluster IP address) to a multicast MAC address. In those situations you must upgrade or replace the router. If that’s not possible then you can’t use multicast.
  • So you’ll need to talk to your network people (or to yourself if you do the networking as well) to get it figured out and see what they prefer, allow, tolerate and support.
Virtualization comes into the picture

In a virtualized environment the discussion on the “best” way of preventing port flooding also changes a bit. You don’t need so many physical ports, but they do often become more scarce and valuable as the number of NIC ports on the virtualization hosts is limited. Also, a lot of virtualization technologies need their own specific little tweaks to get stuff working right, depending on the version etc.

Closing thoughts on unicast/multicast

So in the end, when choosing between unicast and multicast NLB, take a long hard look at the environment, the possibilities and needs, the politics and the available skill sets, and then pick the one that is best suited for that particular situation. It’s not that big an issue, until you meet some CISCO or Juniper networking gurus who’ll jabber on for hours about how the NLB multicast implementation sucks.

In part 2 we’ll talk a bit about subnets, default gateways, routing, forwarding and the strong host model in Windows 2008 (R2).

Geeking Out

Did any of you ever use a disk duplex setup in a Windows server? A disk duplex can be achieved using a Windows server that has two RAID controllers on which you create two mirrored virtual disks. You then use software mirroring in the OS to create a mirror using those two virtual mirrored disks. That way a RAID controller can malfunction completely and your system stays up. That kind of setup was hard to find or come by. So what does a geek do when he gets his hands on a Hyper-V host that has access to two EVA 8000 SANs? Well, he creates a VM that has two disks: one on EVA 1 (RAID 5 or 1), the other on EVA 2 (RAID 5 or 1). When he’s done installing the OS, he converts the Windows basic disk to a dynamic disk and creates a software mirror. The end result: a disk duplex in a virtual machine. Instead of using two RAID controllers we used two SANs with storage controllers! I just had to do it, couldn’t resist 🙂

Calling x64 CLI Tools in x86 Scripting Tools and Processes

Every now and then I get the same question from people who only recently decided to make the switch to an x64 Windows operating system. I’ve been running on x64 since Vista RTM and I’m very happy with it. When those people start scripting with their tools, which are 32 bit, calling some CLI tool in %windir%\System32, they can run into an annoying issue that expresses itself in the correct yet somewhat misleading “WshShell.Exec: The system cannot find the file specified.”. But you know it’s there in %windir%\System32, you checked and double checked!

When your scripting tool is 32 bit and you run your script, it usually launches a 32 bit version of the CLI tool you’re calling. This behavior is a result of file redirection. This is a transparent process that’s part of the Windows 32-bit on Windows 64-bit (WOW64) subsystem that is used to run 32 bit apps. When a 32 bit application calls a CLI tool in the %windir%\System32 directory, WOW64 silently redirects this to %windir%\SysWOW64, where 32 bit apps can happily run without a worry on an x64 operating system. Yes, indeed: %windir%\System32 is for x64 code only and %windir%\SysWOW64 is for 32 bit code.

What’s in a name 🙂 Some people argue they should have used System32 for 32 bit and System64 for x64, but I’m sure they had their reasons for what they did (i.e. it would have been hell for some reason, I guess). Other suggestions have also been made by people who are far better qualified than I am, for example by Mark Russinovich, a hard core systems developer, in http://blogs.technet.com/b/markrussinovich/archive/2005/05/07/running-everyday-on-64-bit-windows.aspx.

Now all this can happen transparently for the user if the tools used have both an x64 and an x86 version. Cmd.exe and ping.exe are fine examples. If you run some VBScript in my favorite scripting tool (SAPIEN PrimalScript), which is 32 bit, it will launch a 32 bit cmd.exe, which launches the 32 bit version of cscript.exe, which in turn launches ping.exe (using WScript.Shell) in %windir%\SysWOW64 by silently redirecting your %windir%\System32 path. No worries, you don’t know any better and the result is the same. So it’s usually not a problem if there is both an x64 and an x86 version of the CLI tool, as you have seen in the ping.exe example. When a 32 bit process calls a tool in %windir%\System32 it’s redirected to %windir%\SysWOW64 and uses the 32 bit version. No harm done.

The proverbial shit hits the fan when you call a CLI tool that only has an x64 version. As the scripting tool is x86, its call is redirected to %windir%\SysWOW64 and the script fails miserably as the CLI tool can’t be found there. This can be pretty annoying when writing and testing scripts. The CLI tool of Windows Backup, wbadmin.exe, is a prime example: it does not have a 32 bit version. Consider this little script for example:

Option Explicit

Dim oShell
Dim oExecShell
Dim sBackupCommandString
Dim sText

Set oShell = CreateObject("WScript.Shell")
'sBackupCommandString = "%windir%\sysnative\wbadmin get disks" ' works from a 32 bit process on x64
sBackupCommandString = "%windir%\system32\wbadmin get disks"   ' redirected to SysWOW64 by WOW64, where wbadmin does not exist

Set  oExecShell = oShell.Exec(sBackupCommandString)

Do While oExecShell.Status = 0
    Do While Not oExecShell.StdOut.AtEndOfStream
        sText = oExecShell.StdOut.ReadLine()
        Wscript.Echo sText 
    Loop    
Loop

Set oShell = Nothing
Set oExecShell = Nothing

There is a lot of file redirection going on to %windir%\SysWOW64 when running this code in the 32 bit scripting tool. That tool launches the 32 bit cmd.exe and thus the 32 bit cscript.exe, which then launches a 32 bit shell and tries to run "%windir%\system32\wbadmin get disks", which is also redirected to %windir%\SysWOW64, where wbadmin cannot be found, throwing the error: “WshShell.Exec: The system cannot find the file specified.”. If you don’t have a 32 bit code editor, just launch the script from a 32 bit command prompt to see the error.

The solution, as demonstrated here, is to use “%windir%\sysnative\wbadmin get disks”. Uncomment that line and comment out the line with sBackupCommandString = "%windir%\system32\wbadmin get disks". Do the same test again and voila, it runs. So there you have it, you can easily test your script now. Just make sure that when the time comes to put it out into the wild you replace it with the real path if the calling process is x64, which for example wscript.exe and cscript.exe are when you launch them from an x64 shell (explorer.exe or cmd.exe), which is the default on an x64 operating system. The x86 versions run when you launch them from an x86 shell. But remember: the default on x64 operating systems is x64, and Sysnative only functions when called from a 32 bit process (it’s a virtual directory that doesn’t really exist).
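
If you’d rather not edit the script every time it moves between x86 and x64 calling processes, a little run time check does the trick. A minimal sketch, relying on the fact that WOW64 only sets the PROCESSOR_ARCHITEW6432 environment variable for 32 bit processes running on an x64 OS:

Option Explicit

Dim oShell, sBackupCommandString
Set oShell = CreateObject("WScript.Shell")
' PROCESSOR_ARCHITEW6432 only exists for a 32 bit process running under WOW64 on x64.
If oShell.Environment("Process")("PROCESSOR_ARCHITEW6432") <> "" Then
    ' 32 bit process on an x64 OS: go through the Sysnative alias to dodge file redirection.
    sBackupCommandString = "%windir%\sysnative\wbadmin get disks"
Else
    ' Native process (x64 on x64, or plain x86 Windows): system32 is the real thing.
    sBackupCommandString = "%windir%\system32\wbadmin get disks"
End If
WScript.Echo "Command to run: " & sBackupCommandString
Set oShell = Nothing

You can feed that command string straight into the oShell.Exec call from the script above.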

Sysnative was introduced in Vista/Windows 2008 x64. Not only 32 bit script editor users are affected by this; all 32 bit processes launching tools in %windir%\System32 are. See more on MSDN via this link: http://msdn.microsoft.com/en-us/library/aa384187(VS.85).aspx. For the folks running XP or Windows 2003 x64, it is perhaps time you consider upgrading to Windows 2008 R2 or Windows 7 x64? If you can’t, no need to worry, you’re in luck. Microsoft did create a hotfix for you (http://support.microsoft.com/?scid=kb;en-us;942589) that introduces Sysnative on those platforms. So welcome to the x64 universe, beware of file redirection in WOW64 and happy scripting 🙂