
Web Security Testing: Automating Specific Tasks with cURL - Checking for Directory Traversal with cURL


Problem

Directory traversal is a problem in which the web server displays listings of files and directories. Often this can lead to unexpected disclosures of the inner workings of the application: source code, or data files that influence the application's execution, might be disclosed. We want to take known valid URLs from the site, derive the directories those URLs imply, and then make sure that requesting those directories directly does not reveal their contents.

Solution

Before you conduct the test, you need a list of directories or paths that you want to try. You might get the list of URLs by spidering your website. You might also consider what you know about your application and any particular paths that it protects with access control.

You need to create two files: a shell script, as shown in Example 1, and a plain-text file of URLs, similar to what is shown in Example 2.

Example 1. Testing directory traversal with cURL
#!/bin/bash
CURL=/sw/bin/curl
# a file with known pages, one URL per line
URLFILE=pages.txt
# file descriptor 3 is our URLs
exec 3<"${URLFILE}"
typeset -i FAILED
# for each URL in the URLFILE
while read -u 3 URL
do
    FAILED=0
    # call curl to fetch the page. Get the headers, too. We're
    # interested in the first line that gives the status
    RESPONSE=$(${CURL} -D - -s "${URL}" | head -1)
    OIFS="$IFS"
    set - ${RESPONSE}
    result=$2
    IFS="$OIFS"
    # If we got something in the 200 series, it's probably a failure
    if [ $result -lt 300 ]
    then
        echo "FAIL:   $result ${URL}"
        FAILED=${FAILED}+1
    else
        # response in the 300 series is a redirect. Need to check manually
        if [ $result -lt 400 ]
        then
            echo "CHECK:  $result ${URL}"
            FAILED=${FAILED}+1
        else
            # response in the 400 series is some kind of
            # denial. That's generally considered "success"
            if [ $result -lt 500 ]
            then
                echo "PASS:   $result ${URL}"
            else
                # response in the 500 series means server
                # failure. Anything we haven't already accounted for
                # will be called a failure.
                echo "FAIL:   $result ${URL}"
                FAILED=${FAILED}+1
            fi
        fi
    fi
done
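The CURL variable points at /sw/bin/curl (a Fink installation on Mac OS X); adjust it to wherever curl lives on your system, such as /usr/bin/curl. Assuming you save the script under a name like traverse.sh (a hypothetical name of our choosing) in the same directory as pages.txt, you would run it like this:

chmod +x traverse.sh
./traverse.sh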
Example 2. Example pages.txt
http://www.example.com/images
http://www.example.com/images/
http://www.example.com/css/
http://www.example.com/js/

Discussion

The script bases its pass/fail decision on whether or not it was denied access to the directory: an HTTP 200 response code (which normally indicates success) is considered a failure because it means we actually saw something we shouldn't have. If our request is denied (e.g., with an HTTP 400-series code), it is considered a passing result because we assume we were not shown the directory's contents. Unfortunately, there are lots of reasons why this simplistic approach might return false results.
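As an aside, the same three-way decision can be written more compactly with a bash case statement. This is only an equivalent sketch of the logic in Example 1, not part of the original script:

case "${result}" in
    2??) echo "FAIL:   ${result} ${URL}" ;;  # content was served; likely disclosure
    3??) echo "CHECK:  ${result} ${URL}" ;;  # redirect; verify by hand
    4??) echo "PASS:   ${result} ${URL}" ;;  # request denied; the desired outcome
    *)   echo "FAIL:   ${result} ${URL}" ;;  # 5xx or anything else; server failure
esac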

Some applications are configured to respond with HTTP 200 on virtually every request, regardless of whether or not it was an error. In this case, the text of the page might say “object not found,” but the HTTP response code gives our script no clue. It will be reported as a failure, when it should technically pass.
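One quick way to find out whether your application behaves this way is to request a URL that certainly does not exist and watch the status code; curl's -w '%{http_code}' option prints just the numeric code. The probe URL below is made up for illustration:

# if this prints 200, the application masks errors and the status-code
# logic in Example 1 will misreport results
curl -s -o /dev/null -w '%{http_code}\n' "http://www.example.com/no-such-page-$RANDOM"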

Likewise, some applications redirect to an error page when there is an error. An attempt to access a protected resource might receive an HTTP 302 (or similar) response that redirects the browser to the login page. The solution in this recipe will flag that with “CHECK,” but it might turn out that every URL you try ends up being a “CHECK.”
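When you do get a 300-series response, the Location header tells you where the redirect points; a login page there usually means access control kicked in. A minimal manual check, using one of the URLs from Example 2:

# dump the headers, discard the body, and show the redirect target
curl -s -D - -o /dev/null "http://www.example.com/images/" | grep -i '^location:'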

The input to this script is the key to its success, but only a human can make good input. That is, someone has to know which URLs should be retrievable and which should not. For example, the site's main page (http://www.example.com/) will respond with HTTP 200, but that is not an error, so it does not belong in the input file. In many cases, the main page will respond with HTTP 302 or 304, but that's normal and okay as well; it is not (normally) an instance of directory traversal. Likewise, some sites use pretty URLs like http://www.example.com/news/, which return HTTP 200 but again are not errors. A person must sit down with some of the directories in the filesystem and/or use clues in the HTML source and come up with examples like those shown in the pages.txt file. The directories have to be chosen so that if the server responds with an HTTP 200, it is a failure.
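If you already have a spidered list of page URLs, you can mechanically derive candidate directories from it and then prune the result by hand. This sketch assumes a file named spidered-urls.txt (our hypothetical name) in which every URL has a path component after the hostname:

# strip the last path component from each URL, leaving a trailing slash,
# then de-duplicate into a starting point for pages.txt
sed -e 's|[^/]*$||' spidered-urls.txt | sort -u > pages.txt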

Lastly, applications that respond consistently with a 200 or 302 response, regardless of input, can still be tested this way. You have to combine the existing solution with some of the techniques of Recipe 1. Remove the header-fetching option (-D - in Example 1) from the curl command line so you fetch the page body (instead of the headers) to a temporary file, and then grep for the correct string. The correct string might be <title>Access Denied</title> or something similar, but make sure it corresponds to your actual application.
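Here is a minimal sketch of that body-based variant, written to drop into the while loop of Example 1 in place of the status-code checks. The denial string and the temporary-file handling are our own illustrative choices, so adapt both to your application:

# fetch only the page body and look for the application's denial message
BODY=$(mktemp)
${CURL} -s -o "${BODY}" "${URL}"
if grep -q '<title>Access Denied</title>' "${BODY}"
then
    echo "PASS:   ${URL}"
else
    echo "FAIL:   ${URL}"
fi
rm -f "${BODY}"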

Note

This solution flags all server responses of 500 and above as errors. That follows the official HTTP standard, in which 500-series codes indicate server failures, and it is handled consistently across web platforms. If your web server hands out an error 500 or above, something has probably gone seriously wrong, either in the server itself or in your software. If you do modify this solution, we strongly recommend that you keep the check for HTTP 500 intact.
