You will need to create three files like those shown in Examples 1, 2, and 3. The shell script uses the two text files as input.
Example 1. Cross-site scripting test script using cURL
#!/bin/bash
CURL=/usr/local/bin/curl
# where do we put temporary output?
TEMPDIR=/tmp
# a file with URLs to attack, one per line
URLFILE=urls.txt
# a file containing XSS attack strings, one per line
ATTACKS=xss-strings.txt
# file descriptor 3 is our URLs
exec 3<"${URLFILE}"
typeset -i FAILED

# for each URL in the URLFILE
while read -r -u 3 URL
do
    TEMPFILE="${TEMPDIR}/curl${RANDOM}.html"
    FAILED=0
    # (re)open file descriptor 4 on our XSS attack strings so that
    # every URL gets the full list of attacks
    exec 4<"${ATTACKS}"
    # attack with each attack in the ATTACKS file
    while read -r -u 4 XSS
    do
        # call curl to fetch the page. Save to a temp file because we
        # need to check the error code, too. We'll grep to see if we
        # got anything.
        "${CURL}" -f -s -o "${TEMPFILE}" "${URL}${XSS}"
        RETCODE=$?
        echo "ret: $RETCODE"
        # check to see if curl failed or the server failed
        if [ $RETCODE != 0 ]
        then
            echo "FAIL: (curl ${RETCODE}) ${URL}${XSS}"
        else
            # curl succeeded. Check the output for our attack string.
            result=$(grep -c "${XSS}" "${TEMPFILE}")
            # if we got 1 or more matches, that's a failure
            if [ "$result" != 0 ]
            then
                echo "FAIL: ${URL}${XSS}"
                FAILED=${FAILED}+1
            else
                echo "PASS: ${URL}${XSS}"
            fi
        fi
        rm -f "${TEMPFILE}"
    done
    if [ $FAILED -gt 0 ]
    then
        echo "$FAILED failures for ${URL}"
    else
        echo "PASS: ${URL}"
    fi
done
Example 2. Example urls.txt file
http://www.example.com/cgi-bin/test-cgi?test=
http://www.example.com/servlet/login.do?user=
http://www.example.com/getFile.asp?fileID=
Example 3. Example xss-strings.txt file
<script>alert('xss');</script>
"><BODY%20ONLOAD=alert('XSS')><a%20name="
"><BODY ONLOAD=alert('XSS')><a name="
abc>xyz
abc<xyz
abc'xyz
abc"xyz
abc(xyz
abc)xyz
abc<hr>xyz
abc<script>xyz
Realize that there are infinitely many possible test strings for cross-site scripting. Your goal is neither to use only the ones we show in Example 3 nor to exhaust every possible string your time and budget allow. Choose representative samples that vary in interesting ways, and use a different sample set in each test run. That way you are always testing some XSS, but never so many cases that they bog down your efforts.
This script uses a couple of loops to iterate across your website, trying lots of test strings on every URL you specify. You might get the list of URLs by spidering your website. The set of attack strings can come from lots of places: books, websites, vulnerability announcements, security consultants, etc.
The particular strings we chose in Example 3 are intended to help you zero in on what defenses, if any, the application has. You'll note that we have wrapped "abc" and "xyz" around each test string. That's because we're going to do a very simple grep of the output. If we want to find out whether a single < in the input is reflected in the output, we have to be sure that it's our < that is reflected. Clearly, grepping for < alone will return lots of spurious matches unless we make the string unique in this way. The examples get progressively worse. That is, reflecting a few dangerous characters, like <, >, and ", is bad, but reflecting the whole string <script> is an unmitigated failure. Also, we have seen applications that perform blacklisting as a defense. While they will allow some characters through, if they see <script> in the input they will replace it with something harmless or remove it altogether. ColdFusion does this in some situations, for example.
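The value of the wrapping is easy to demonstrate with a quick shell experiment. The page body below is made up for illustration; it contains ordinary markup plus one reflected copy of the wrapped test string:

```shell
# A made-up page body: ordinary markup plus one reflected copy of the
# wrapped test string abc<xyz.
page='<html><body>hello abc<xyz world</body></html>'

# Counting bare "<" characters matches the page's own markup, too.
bare=$(printf '%s' "$page" | grep -o '<' | wc -l)

# Counting the wrapped string matches only our reflected input.
wrapped=$(printf '%s' "$page" | grep -o 'abc<xyz' | wc -l)

echo "bare: ${bare} wrapped: ${wrapped}"
```

Here the bare < matches five times (every tag in the markup), while the wrapped string matches exactly once, and only because our input really was reflected.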
There are a few things to note about this particular script. It is a primitive script that does nothing graceful in the case of bad input. Blank lines, comments, or anything else stray in the urls.txt file will cause connection failures when the script tries to use them as URLs. Likewise, stray data in the xss-strings.txt file will be attempted during testing. It is even possible to put bad parameters in the xss-strings.txt file that cause cURL itself to fail. In such cases the script will report that cURL failed, but you will have to dig into the test case to figure out why it failed and what you want to do to fix it.
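One easy hardening step is to filter both input files before the loops read them. The helper below is our own addition, not part of the original script; it drops blank (or whitespace-only) lines and #-style comment lines so that stray input never reaches cURL:

```shell
# Hypothetical helper: emit only usable lines from an input file,
# skipping "#" comments and blank or whitespace-only lines.
clean_lines() {
    grep -v '^[[:space:]]*#' "$1" | grep -v '^[[:space:]]*$'
}
```

The script could then open its descriptors with process substitution, e.g. exec 3< <(clean_lines "${URLFILE}"), instead of reading the raw file.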
There are a few other interesting situations where the software being tested could fail but the failure would not be detected by this simple script (so-called "false negatives"). Encoded strings are one case: input encoded in a way that bypasses input filtering may come back as an unencoded string that allows XSS. Imagine a test where you send the < character encoded as %3C in the attack string, but the actual unencoded < character is returned in the page body. That could well be part of a failure, yet this simple script won't detect it, because the string that was sent was not found verbatim in the output. Another possible false negative arises when the input is broken across several lines in the response even though it was sent as one line in the attack. The grep will not notice that half the string was found on one line and the other half on the next.
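Both false negatives can be narrowed with a slightly smarter check. The function below is a sketch, not part of the original script, and it assumes the attack string uses only simple %XX encoding: it decodes the attack, flattens the response so multi-line reflections still match, and greps for either form:

```shell
# Sketch: "reflected ATTACK FILE" exits 0 if the attack string appears
# in FILE either verbatim or in %XX-decoded form, even across newlines.
reflected() {
    local attack="$1" file="$2"
    # turn %3C-style sequences into \x3C escapes; bash's printf %b
    # understands \xNN and decodes them
    local decoded
    decoded=$(printf '%b' "${attack//%/\\x}")
    # flatten the response so a string split across lines still matches;
    # -F treats both patterns as literal strings, not regexes
    tr -d '\n' < "$file" | grep -qF -e "$attack" -e "$decoded"
}
```

This still misses cleverer transformations (double encoding, entity encoding), but it catches the two cases described above.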
An improvement to this script would be to mimic Nikto and provide, in the xss-strings.txt file, both an attack string and a corresponding failure string to look for. You'd want to separate the two with a character that is easy to work with but unlikely to be significant (or present) in your attack strings, such as Tab. You could manage the strings in Excel and save the file as tab-delimited text, if that suits your test environment.
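A minimal sketch of that improvement, assuming a hypothetical xss-pairs.txt file with one Tab-separated attack/failure-string pair per line:

```shell
# Sketch: read Tab-separated attack/failure-string pairs. In the real
# script, ${ATTACK} would be appended to the URL and the response would
# be grepped for ${EXPECT} rather than for the attack string itself.
read_pairs() {
    while IFS=$'\t' read -r ATTACK EXPECT
    do
        printf 'attack=%s expect=%s\n' "${ATTACK}" "${EXPECT}"
    done < "$1"
}
```

Setting IFS to a single Tab for the read means the attack string itself may freely contain spaces, quotes, and angle brackets without being split.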
Warning
To be sure, passing this test is no guarantee that XSS is impossible in your web software. Equally sure, however, is that failing this test guarantees that XSS is possible. Furthermore, if your software has either been attacked successfully or a security audit turns up the possibility of cross-site scripting, you can add the successful attack strings to this script as a form of regression test. You can help ensure that known failures don’t recur.