Worker roles are blank slates—you can do almost
anything with them. In this section, we’re going to explore some common,
and some maybe not-so-common, uses for worker roles.
The most common use is to
offload work from the frontend. This is a common architecture in many
applications, in the cloud or not. We’ll also look at how to use
multithreading in roles, how to simulate a worker role, and how to break
a large process into connected smaller pieces.
15.3.1. Offloading work from the frontend
We’re all familiar with the
user experience of putting products into a shopping cart and then
checking out with an online retailer. You might have even bought this book that way.
How retailers process your cart and your order is one of the key
scenarios for how a worker role might be used in the cloud.
Many large online retailers
split the checkout process into two pieces. The first piece is
interactive and user-facing. You happily fill your shopping cart with
lots of stuff and then check out. At that time, the application gathers
your payment details, gives you an order number, and tells you that the
order has been processed. Then it emails all of this so you can have it
all for your records. This is the notification email shown in figure 1.
Figure 1. The
typical online retailer will process a customer’s order in two stages.
The first saves the cart for processing and immediately sends back a
thank you email with an order number. Then the backend servers pick up
the order and process it, resulting in a final email with all of the
After the customer-facing
work is done, the backend magic kicks in to complete the processing of
the order. You see, when the retailer gave you an order number, they
were sort of fibbing. All they did was submit the order to the backend
processing system via a message queue and give you the order number that
can be used to track it. One of the servers that are part of the
backend processing group picks up the order and completes the
processing. This probably involves charging the credit card, verifying
inventory, and determining the ability to ship according to the
customer’s wishes. Once this backend work is completed, a second email
is sent to the customer with an update, usually including the package
tracking number and any other final details. This is the final email
shown in figure 1.
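The frontend's half of this handoff is small: at checkout time it generates an order number, drops the order onto a queue, and returns right away. Here's a rough sketch against the Azure storage client; the queue name, connection-string name, order-number scheme, and message format are our own placeholders, not a prescribed design:

```csharp
using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public class CheckoutService
{
    public string SubmitOrder(string serializedCart)
    {
        // Hand the customer an order number right away...
        // (a tick count is just a placeholder; use something sturdier)
        string orderNumber = DateTime.UtcNow.Ticks.ToString();

        var account = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
        var queue = account.CreateCloudQueueClient().GetQueueReference("orders");
        queue.CreateIfNotExist();

        // ...and let the backend do the real work later.
        queue.AddMessage(new CloudQueueMessage(orderNumber + "|" + serializedCart));
        return orderNumber;
    }
}
```

All the expensive work (payment, inventory, shipping) is deferred; the web server only pays the cost of one queue write.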
By breaking the system into
two pieces, the online retailer gains a few advantages. The biggest is
that the user’s experience of checking out is much faster, giving them a
nice shopping experience. This also takes a lot of load off of the web
servers, which should be simple HTML shovels. Because only a fraction of
shoppers actually check out (e-tailers call this the conversion rate),
it’s important to be able to scale the web servers very easily. Having
them broken out makes it easy to scale them horizontally (by adding more
servers), and makes it possible for each web server to require only
simple hardware. The general strategy at the web server tier is to have
an army of ants, or many low-end servers.
This two-piece system also
makes it easier to plan for failure. You wouldn’t want a web server to
crash while processing a customer's order and lose the revenue, would
you?
This leaves the backend all
the time it needs to process the orders. Backend server farms tend to
consist of fewer, larger servers, when compared to the web servers.
Although you can scale the number of backend servers as well, you won’t
have to do that as often, because you can just let the flood of orders
back up in the queue. As long as your server capacity can process them
in a few hours, that’s OK.
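On the backend, each worker instance runs a simple polling loop: grab a message, process the order, and only then delete the message, so that a crash mid-order leaves the work in the queue for another instance. A sketch, assuming the SDK's RoleEntryPoint model; ProcessOrder is a stand-in for the real charging/inventory/shipping logic:

```csharp
using System;
using System.Threading;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;
using Microsoft.WindowsAzure.ServiceRuntime;

public class OrderWorkerRole : RoleEntryPoint
{
    public override void Run()
    {
        var account = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
        var queue = account.CreateCloudQueueClient().GetQueueReference("orders");
        queue.CreateIfNotExist();

        while (true)
        {
            CloudQueueMessage msg = queue.GetMessage();
            if (msg == null)
            {
                Thread.Sleep(5000); // queue is empty; back off before polling again
                continue;
            }

            ProcessOrder(msg.AsString); // charge the card, verify inventory, send the final email
            queue.DeleteMessage(msg);   // delete only after the work succeeds
        }
    }

    private void ProcessOrder(string order)
    {
        // stand-in for the real order-processing pipeline
    }
}
```

Because the delete happens after processing, a message whose processing fails will reappear in the queue once its visibility timeout expires.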
Azure provides a variety
of server sizes for your instances to run on, and sometimes you’ll want
more horsepower in one box for what you’re doing. In that case, you can
use threading on the server to tap all of that horsepower.
15.3.2. Using threads in a worker role
There may be times when the work assigned to a particular worker role
instance needs multithreading, or the ability to process work in
parallel by using separate threads of execution. This is especially true
when you’re migrating an existing application to the Azure platform.
Developing and debugging multithreaded applications is very difficult,
so deciding to use multithreading isn't a decision you should make lightly.
The worker role does allow
for the creation and management of threads for your use, but as with
code running on a normal server, you don’t want to create too many
threads. When the number of threads increases, so does the amount of
memory in use. The context-switching cost of the CPU will also hinder
efficient use of your resources. You should limit the number of threads
you’re using to two to four per CPU core.
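A simple way to honor that guideline is to derive the thread count from the machine's core count rather than hardcoding it. A minimal sketch; the DoWork method is a placeholder for your own processing loop:

```csharp
using System;
using System.Threading;

class WorkerThreads
{
    static void Main()
    {
        // Two to four threads per core; we start at the conservative end.
        int threadCount = Environment.ProcessorCount * 2;

        for (int i = 0; i < threadCount; i++)
        {
            var t = new Thread(DoWork) { IsBackground = true };
            t.Start();
        }
    }

    static void DoWork()
    {
        // each thread's processing loop goes here
    }
}
```

Deriving the count from Environment.ProcessorCount means the same code behaves sensibly whether it lands on a small instance or a large one.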
A common scenario is to spin up
an extra thread in the background to process some asynchronous work.
Doing this is OK, but if you plan on building a massive computational
engine, you’re better off using a framework to do the heavy lifting for
you. The Parallel Extensions to .NET is a framework Microsoft has
developed to help you parallelize your software. The Parallel Extensions
to .NET shipped as part of .NET 4.0 in April of 2010.
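With the Parallel Extensions, parallelizing a loop over a batch of work items is a small change, and the framework decides how many threads to use for you. A quick illustrative sketch (the order list is made up):

```csharp
using System;
using System.Threading.Tasks;

class ParallelSample
{
    static void Main()
    {
        string[] orders = { "A-100", "A-101", "A-102", "A-103" };

        // Parallel.ForEach partitions the work across the available cores.
        Parallel.ForEach(orders, order =>
        {
            Console.WriteLine("Processing " + order);
        });
    }
}
```

Compare this with hand-rolling the thread management yourself: the framework handles partitioning, scheduling, and core count.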
Although we always want to
logically separate our code to make it easier to maintain, sometimes
the work involved doesn’t need a lot of horsepower, so we may want to
deploy both the web and the worker sides of the application to a
single web role.
15.3.3. Simulating worker roles in a web role
Breaking your application into discrete pieces, some of which are frontend and some of
which are backend, is a good thing. But there are times when you need
the logical separation, but not the physical separation. This might be
for speed reasons, or because you don’t want to pay for a whole worker
role instance when you just need some lightweight background work done.
Maintaining Logical Separation
If you go down this
path, you must architect your system so you can easily break it out into
a real worker role later on as your needs change. This means making
sure that while you’re breaking the physical separation, you’re at least
keeping the logical separation. You should still use the normal methods
of passing messages to that worker code. If it would use a queue to
process messages in a real worker instance, it should use a queue in the
simulated worker instance as well. Take a gander at figure 2
to see what we mean. At some point, you’ll need to break the code back
out to a real worker role, and you won’t want to have to rewrite a whole
bunch of code.
Figure 2. You can simulate a worker role in your web role if it’s very lightweight.
Be aware that the Fabric
Controller will be ignorant of what you’re doing, and it won’t be able
to manage your simulated worker role. If that worker role code goes out of
control, it will take down the web instance it’s running in, which
could cascade to a series of other problems. You’ve been warned.
If you’re going to do this,
make sure to put the worker code into a separate library so that primary
concerns of the web instance aren’t intermingled with the concerns of
the faux worker instance. You can then reference that library and
execute it in its own thread, passing messages to it however you would
like. This will also make it much easier to split it out into its own
real worker role later.
Utilizing Background Threads
The other issue is getting a
background thread running so it can execute the faux worker code. An
approach we’ve worked with is to launch the process on a separate thread
during the Session_Start event of the global.asax. This will fire up the thread when the web app receives its first visitor, and leave it running.
Our first instinct was to use the Application_Start event, but this won’t work. The RoleManager isn’t available in the Application_Start event, so it’s too early to start the faux worker.
We want to run the following code:
Thread t = new Thread(new ThreadStart(FauxWorkerSample.Start));
t.Start();
Putting the thread start code in the Session_Start
event has the effect of trying to start another faux worker every time a
new ASP.NET session is started, which is whenever there’s a new visitor
to the website. To protect against thousands of background faux workers
being started, we use the Singleton pattern. This pattern will make
sure that only one faux worker is started in that web instance.
When we’re about to create the thread, we check a flag in the application state to see if a worker has already been created:
object obj = Application["FauxWorkerStarted"];
if (obj == null)
{
    Application.Lock();   // guard against two new sessions racing to start the worker
    if (Application["FauxWorkerStarted"] == null)
    {
        Application["FauxWorkerStarted"] = true;
        Thread t = new Thread(new ThreadStart(FauxWorkerSample.Start));
        t.Start();
    }
    Application.UnLock();
}
If the worker hasn’t been created, the flag won’t exist in the application state property bag, so it will equal null in that case. If this is the first session, the thread will be created, pointed at the method we give it (FauxWorkerSample.Start in this case), and it will start processing in the background.
When you start it in this manner, you’ll have access to the RoleManager
with the ability to write to the log, manage system health, and act
like a normal worker instance. You could adapt this strategy to work
with the OnStart event handler in
your webrole.cs file. This might be a cleaner place to put it, but we
wanted to show you the dirty workaround here.
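For reference, the same thread launch hosted in OnStart would look roughly like this, assuming the SDK's RoleEntryPoint model; FauxWorkerSample is the hypothetical library class from earlier:

```csharp
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // OnStart runs exactly once per instance, so no singleton
        // guard is needed here, unlike Session_Start.
        var t = new Thread(new ThreadStart(FauxWorkerSample.Start));
        t.IsBackground = true;
        t.Start();

        return base.OnStart();
    }
}
```

Because OnStart fires once per instance before any requests arrive, the application-state flag and locking from the Session_Start approach disappear entirely.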
Our next section covers how best to handle a large and complex worker role.