1. Abusing High-Load Actions
1.1. Problem
When a single attacker is able to disable your entire web
application, we call that a denial-of-service (DoS) attack. Standard
quality efforts ensure performance and reliability; your security
testing should consider these factors as well. By identifying when
low-cost input triggers high-load actions, we can reveal areas where
your web application might be put under extreme stress and potential
downtime.
1.2. Solution
There are a number of actions traditionally associated with high
load. These include common actions, such as executing complex SQL
queries, sorting large lists, and transforming XML documents. Yet it’s
best to take the guesswork out of this: if you’ve performed load and
reliability testing, find out which actions generated the highest load
on the server or took the longest to issue a response. You might look at
your performance test results, database profiling results, or user
acceptance test results (if they show how long it takes to serve a
page).
For each of the highest load items, identify whether or not a user
may initiate the action repeatedly. Most often, a user may repeat the
same request simply by hitting the Refresh button.
If there are controls in place preventing a single user from
executing the high-load item repeatedly, investigate possible ways to
circumvent this protection. If the action is controlled via a session
cookie, can the cookie be manually reset (as discussed in an earlier recipe)? If
navigational steps prevent a user from going back and repeating the
step, can those steps be bypassed?
If a user is consistently prevented from repeating the high-load
action, consider the possibility of simultaneous execution by many cooperating users. If your
application allows one to sign up for additional user accounts, do just
that. Sign into one account, activate the high-load item, and log out
again. If you automate these steps, you can execute them sequentially at
high speed or even simultaneously using threads or multiple
computers.
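The sign-up, activate, log-out cycle above can be automated along these lines. This is only a sketch: the target URL is a made-up placeholder, and although the book’s own examples use Perl, the same idea is shown here with Python’s standard threading module.

```python
import threading

# Hypothetical target: substitute the most expensive request identified
# during your own load testing. The URL below is a placeholder.
EXPENSIVE_URL = "https://app.example.com/reports/full-export"

def hammer(fetch, url, count):
    """One worker: issue `count` back-to-back copies of the request."""
    for _ in range(count):
        fetch(url)

def run_load(fetch, url=EXPENSIVE_URL, threads=20, per_thread=50):
    """Launch `threads` workers, each repeating the high-load request."""
    workers = [threading.Thread(target=hammer, args=(fetch, url, per_thread))
               for _ in range(threads)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
```

In a real test, `fetch` would wrap something like `urllib.request.urlopen` carrying a session cookie from a prior login; spreading workers across several machines multiplies the load further.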
1.3. Discussion
Web applications are built to remain responsive for many
simultaneous users. Yet because performance can have security
implications as well, sometimes it’s dangerous to provide too much
responsiveness to each and every user.
Your typical corporate web application will involve multiple
servers, divided up between application logic, database storage, and
other tiers. One display of this kind of abuse comes especially to
mind. In this case, an application ran on an impressive amount of
dedicated hardware, yet a colleague wrote a relatively simple Perl
script that initiated twenty threads, each of which logged in to the
application and repeatedly executed a particularly demanding request
against the servers. This small script ran
on a standard laptop via a normal wireless internet connection,
repeating the same command over and over. Yet in just a few minutes, the
script was able to completely overload the entire set of dedicated
servers and hardware.
Unfortunately, no matter how quickly your application responds, it
will always be possible to overburden it via an extreme load. This
recipe, and the general capability it describes, is commonly referred to
as a denial-of-service attack. When many computers are used
simultaneously to target specific applications or networks, even the
best hardware in the world may be brought down. These distributed
denial-of-service attacks have temporarily disabled such giants as
Yahoo!, Amazon, and CNN.com.
It is important to realize, as we think about designing to resist
attacks, that there exist some attacks that we probably cannot repel.
In the arms race of attacker versus defender on the Web, there are
those who have nuclear weapons and there are those who do not. Botnets
represent a kind of nuclear weapon against which most web applications
will surely fail.
“Bots” are computers—frequently personal computers at home,
work, or school—that have been compromised by some kind of malicious
software. By and large, they are PCs running some vulnerable version
of Microsoft Windows, but they don’t have to be. These computers work
more or less normally for their owners. The owners are usually
completely unaware that any malicious software is running. The malware
maintains a connection to a central communications channel where a
so-called bot herder can issue commands to his bots.
When a network of bots (a “botnet”) can consist of 10,000,
50,000, or even 100,000 individual computers, many defenses become
insufficient. For example, brute force guessing of passwords is often
thwarted by limits on number of attempts per connection, per host, or
per time period. Many of those defenses will fail if 10,000
independent requests come in, each originating from a completely
different computer. Attempts at blocking, for example, IP address
ranges will fail because botnets use computers all over the globe.
Many IP load balancers, switches, routers, and reverse proxies can be
configured in such a way that they operate well under normal or even
heavy load, yet they crumple in the face of a concentrated attack by a
botnet.
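The attempts-per-host defense described above can be sketched as a simple counter; the five-attempt threshold is an arbitrary assumption for illustration. Against one attacking address it works, but against a botnet of distinct addresses every request stays under the quota:

```python
from collections import defaultdict

MAX_ATTEMPTS_PER_HOST = 5   # assumed threshold, for illustration only

class PerHostLimiter:
    """Track attempts per source address and cut off heavy hitters."""
    def __init__(self, limit=MAX_ATTEMPTS_PER_HOST):
        self.limit = limit
        self.attempts = defaultdict(int)

    def allow(self, source_ip):
        """Permit a request unless this one address exceeded its quota."""
        self.attempts[source_ip] += 1
        return self.attempts[source_ip] <= self.limit

limiter = PerHostLimiter()
# A single attacking host is throttled after five tries...
single = [limiter.allow("203.0.113.7") for _ in range(100)]
# ...but 10,000 bots making one request each are all allowed through.
botnet = [limiter.allow("10.%d.%d.1" % (i // 256, i % 256))
          for i in range(10000)]
```

The limiter is doing exactly what it was designed to do; the attack simply never trips the per-host condition.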
Bringing your attention to botnets simply helps you realize that
your software cannot repel every attack. Furthermore, you may
find it necessary to plan for absolutely massive attacks that you
cannot hope to simulate. That is, you might have to plan how to
respond to a botnet attack, but have no way to test your plan.
2. Abusing Restrictive Functionality
2.1. Problem
Many applications restrict usage in certain situations, typically in
the pursuit of stronger security. Such restrictions are often necessary,
but they can backfire: automatic restrictions can be abused by
attackers to deny normal usage to other, legitimate
users.
2.2. Solution
In your application, identify an area where functionality is
restricted as a response to user actions. In most applications, this
will mean a time-out or lockout when user credentials are submitted
incorrectly.
To abuse this functionality, simply enter another user’s
credentials. If the prompt is for a username and password, you don’t have to know the user’s real password
to abuse these restrictions. Enter a known username and any random
password, and you’re likely to be denied access.
Repeat this step until the restriction locks that user’s account,
and you have effectively denied that user access until he or she
contacts an administrator or the time-out period expires.
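The repeat-until-locked step can be sketched as a short loop. The `attempt_login` callable and the `"locked"` return value are hypothetical stand-ins for however the target application accepts credentials and reports a locked account (an error page, a status code, and so on).

```python
import itertools

def lock_out(attempt_login, username):
    """Submit deliberately wrong passwords until the account locks;
    returns the number of attempts it took."""
    for attempt in itertools.count(1):
        if attempt_login(username, "wrong-password-%d" % attempt) == "locked":
            return attempt
```

Running this over a list of known usernames, on a timer shorter than the lockout period, yields the sustained multiusername lockout described in the Discussion.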
2.3. Discussion
Overly strong restrictions may be abused. This abuse
can lock out individual accounts or, if an attacker automates the
process, many known users. Even when a lockout is only
temporary, one could automate the process to lock out an
individual user indefinitely by triggering a fresh temporary lockout every few
minutes.
One could even combine the automated multiusername lockout with
the automated repeated lockout, essentially shutting off all access to
an application. This latter scenario would take considerable bandwidth
and dedicated resources, but is well within the capabilities of a
sophisticated attacker.
Web applications offer another avenue of abuse: often a user may
reset her password and have the new password emailed. Emailing a new password
can be considered a temporary lockout as well, as it will take the user some
time to determine why the old password isn’t working.
A famous example of this attack is how it was used on eBay many
years ago. At the time, eBay locked an account for several minutes after
a number of incorrect password attempts. Ostensibly, this was to prevent
attackers from trying to guess passwords. However, eBay is known for its
fierce last-minute bidding wars, where two (or more) users bidding for
the same item will all attempt to bid on it during the last minute of
the auction. Yet eBay listed the usernames of all bidders on an auction,
so you could see whom you were bidding against.
Can you guess the attack? It’s both simple and ingenious—users
looking to avoid bidding wars would submit their bid, log out of eBay,
and then repeatedly attempt to log in as their competitors. After a
number of (failed) login attempts, the competitor would be locked out of
eBay for several minutes. These several minutes were just long enough
for the auction to end, and thus the devious attacking bidder prevented
any competing bids!
3. Abusing Race Conditions
3.1. Problem
A race condition arises when two actions operate on the same
protected piece of data at nearly the same time. This data can be a database
record, a file, or just a variable in memory. If an attacker is able to access
or modify the protected data while another action is still operating on it, it is
possible to corrupt that data and any behavior relying upon it.
3.2. Solution
Race conditions are difficult to test for explicitly; finding them requires
insight into how an application works. The warning sign to look for is
any situation where two users may act on a single piece of
data in rapid succession.
Imagine an online gambling system (such as a poker site) that
allows balance transfers to other accounts within that system. Because
such transfers are within the system itself, they may occur
instantaneously—as soon as the request is confirmed. If this transaction
is implemented in a non-atomic way, without the use of locking or a
database transaction, the following situation could arise:
1. User accounts A, B, and C are all controlled by a single
attacker.
2. User account A contains $1,000. Accounts B and C are
empty.
3. The attacker initiates two balance transfers at the exact same
moment (accomplished via automation; see the recipes on Perl). One
balance transfer sends all $1,000 to account B, and the other sends
all $1,000 to account C.
4. The application receives request 1 and checks to ensure that
the user has $1,000 in his account, and that the balance upon
completion will be $0. This is true.
5. The application receives request 2 and checks to ensure that
the user has $1,000 in his account, and that the balance upon
completion will be $0. This is true, as request 1 hasn’t been fully
processed yet.
6. The application processes request 1, adds $1,000 to account B,
and sets account A to $0.
7. The application processes request 2, adds $1,000 to account C,
and sets account A to $0.
The attacker has just succeeded in doubling his money, at the expense
of the gambling application.
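The steps above can be reproduced in miniature with two threads and a deliberately non-atomic transfer. This is a toy in-memory sketch, not any real banking code; the 0.1-second sleep stands in for server processing time and widens the race window so the outcome is reliable.

```python
import threading
import time

balances = {"A": 1000, "B": 0, "C": 0}

def transfer(src, dst, amount):
    """A non-atomic transfer: check, pause, then update."""
    if balances[src] >= amount:                 # time of check
        new_src_balance = balances[src] - amount
        time.sleep(0.1)                         # request still "processing"
        balances[dst] += amount                 # time of use
        balances[src] = new_src_balance         # both requests set A to $0

t1 = threading.Thread(target=transfer, args=("A", "B", 1000))
t2 = threading.Thread(target=transfer, args=("A", "C", 1000))
t1.start(); t2.start()
t1.join(); t2.join()
print(balances)   # {'A': 0, 'B': 1000, 'C': 1000} -- $1,000 became $2,000
```

Both threads pass the balance check before either one writes, so each believes it is spending the same $1,000.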
3.3. Discussion
This example is referred to as a TOCTOU (Time of Check, Time of Use) race condition.
Database management systems include strong mechanisms to protect against
these race conditions, but they are not enabled by default. Actions that
must be completed in a specific order need to be wrapped up into atomic
transaction requests to the database. Protections on files must include
locks or other concurrency methods. These things are not easy to
program, so please take the time to check your application.
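As a sketch of the atomic-transaction fix, the transfer from the Solution can be written so that the balance check and the debit happen in a single statement inside one transaction. SQLite is used here merely as a stand-in for whatever DBMS the application runs on, and the table layout is assumed.

```python
import sqlite3

def atomic_transfer(conn, src, dst, amount):
    """Move funds atomically: the conditional UPDATE combines the
    balance check and the debit, eliminating the check-to-use gap."""
    with conn:  # one transaction: commit on success, roll back on error
        debited = conn.execute(
            "UPDATE accounts SET balance = balance - ? "
            "WHERE name = ? AND balance >= ?",
            (amount, src, amount))
        if debited.rowcount == 0:
            raise ValueError("insufficient funds")
        conn.execute(
            "UPDATE accounts SET balance = balance + ? WHERE name = ?",
            (amount, dst))
```

A second transfer attempted against the now-empty account fails instead of doubling the money. On client/server databases, the same effect comes from constructs such as `SELECT ... FOR UPDATE` or a serializable isolation level.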
The area where these issues have cropped up with the most severe
effects has been multiplayer online games. The ability to duplicate
in-game money or items has led to the collapse of in-game economies.
This might not be such a big deal, except for two aspects. First, if the
game is less fun due to rampant cheating, paying players may cancel
their accounts. Second, some games allow one to buy and sell in-game
items for real-world money. This represents a substantial profit motive
for a hacker.