Windows Azure : Messaging with the queue - Patterns for message processing

As simple as queues are, they can prove valuable in a lot of complex scenarios. This section will focus on some common approaches developers tend to use with queues.

1. Shared counters

You might run into a scenario where a piece of work is broken into many small pieces, and you need to make sure all of those small pieces are completed before you move on to the next step in your process. Sometimes these pieces are subsets of the main problem.

This is called single instruction, multiple data. The same processing is performed on each piece of data, but each piece is a subset of the whole. Consider working on an image: if you break the image into 100 pieces and apply the same process to each piece, you need to know for sure that all 100 pieces are completed before you can stitch them back into the larger picture.

If you just break the image into 100 pieces and throw them into the queue, it can be difficult to know for sure when all of the 100 units of work have been completed. This has to do with the visibility timeout and the nondeterministic nature of queues. You might think that you could simply check the estimated length of the queue using q.ApproximateMessageCount:


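Here is a minimal sketch of that check, assuming the v1.x StorageClient library and an existing CloudQueue reference named q; the variable name is illustrative.

q.RetrieveApproximateMessageCount();          // fetch the current estimate from the queue in the cloud
var estimate = q.ApproximateMessageCount;     // an approximate count, not an exact one
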
If you do, you must call RetrieveApproximateMessageCount, which fetches the count from the queue in the cloud into the ApproximateMessageCount property, before you read the property itself. The property returns an approximate count of the items in the queue, not an exact count, for two reasons. The first is that the queue is running in triplicate, and an add operation might have completed on one instance but not the other two, which would lead to an inconsistent result. The second is that you might get a zero back from your check, only to have an invisible message turn visible again when its timeout expires. Then you would have a message in the queue you didn't know about.

You need a deterministic way to know for sure that all 100 pieces have been processed. One way to do this is to use a shared counter, perhaps in an Azure table. You can see a visualization of this process in figure 1.

Figure 1. Using a shared counter is one way to deterministically track how many messages have been processed. This is a good approach if you have a specific number of messages to process and you need a precise count.


When the processing starts, a table is made with a counter set to 0. As items are submitted into the queue by the producer, the counter is incremented. As the work is completed in the consumer and the messages are deleted, the counter is decremented.

There is one small flaw this approach suffers from, and it’s a flaw all shared counters have: it’s possible to run into a concurrency problem. If one process reads the counter, adds 1 to it, and then writes it back to the table while another process is doing the same thing, they could end up overwriting each other, resulting in losing track of the count. In order to fix this, you need to use locking on the counter. Another solution, which is used in eventually consistent scenarios, is to read the counter a second time before you write to it, to make sure it wasn’t changed by someone else while you weren’t looking.
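To make the fix concrete, here is a hedged sketch of the read-check-write loop. CounterRow, ReadCounterRow, and TryUpdateCounter are hypothetical stand-ins for your table access code; the point is that the write is conditional on the row's ETag and is retried when another process updated the counter first.

// Hypothetical helpers: ReadCounterRow returns the row's current value along with
// its ETag, and TryUpdateCounter performs a conditional write that fails when the
// ETag no longer matches (someone else updated the row first).
CounterRow row = ReadCounterRow("imagejob42");
bool written = TryUpdateCounter(row, row.Value + 1);
while (!written)
{
    row = ReadCounterRow("imagejob42");              // lost the race; read the latest value
    written = TryUpdateCounter(row, row.Value + 1);  // and try the conditional write again
}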

This approach will give you a simple count indicating the progress of the work. What if you want to know which pieces are done and which aren’t? No problem. You can do this with a small change to the previous approach.

Instead of writing to a shared counter on each put operation, create a new record in the shared table. This will result in one record per queue message. As they’re processed and the queue messages are deleted, the corresponding row should be deleted in the table. Another option would be to mark a property of the row in the table as complete, or store a completed time and date for performance tracking.

In either of these ways—with the shared counter or the shared message tracking table—you can know with a simple query whether all of the work has been completed or not. You should think about wiring up a management portal that monitors the counter or table to show the progress to an administrator.

2. Work complete receipt

The preceding scenario works when you can control the producer and you have a closed loop. But what if you don’t own the producer, or there are too many producers for you to make them also manage a counter? In this case, you can use a return receipt, or a work complete receipt.

In this approach, as work is completed, a message is sent through a separate channel, perhaps another queue, back to the producer. This alerts the producer that the work is done. This is common in scenarios where the process takes a long time to complete, and the producer wants an asynchronous notification when the work is done.
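As a hedged sketch, the consumer's side of a receipt could look something like the following, assuming the v1.x StorageClient library, an existing CloudQueueClient named queueClient, and a workItemId identifying the finished job; the queue name and message format are illustrative.

// After the real work is finished and the inbound message is deleted, drop a
// small receipt on a queue the producer monitors.
CloudQueue receiptQueue = queueClient.GetQueueReference("work-receipts");
receiptQueue.CreateIfNotExist();
receiptQueue.AddMessage(new CloudQueueMessage("completed:" + workItemId));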

Instead of using a return queue, we’ve also had the consumer call a small notification web service on the producer side, sending a simple message regarding the status of the work. This makes the consumer an active part of the process, and it removes the need for the producer to monitor a queue and become a consumer itself.

3. Asymmetric queues versus symmetric queues

Queues are decidedly one-way. They’re a way for one or more producers to communicate with one or more consumers, but not the other way around.

Using one queue in this manner is an asymmetric queue. Generally, in an asymmetric queue, the producer finds out about the work being completed in a passive way. The new file happens to be in the right place when the user hits Refresh, or the customer receives an email when the order is shipped, or any number of other scenarios.

Sometimes using symmetric queues can be useful. This makes the response from the consumer back to the producer an active one. Using a queue to do this does help decouple the two halves of the system, but it can lead to too much complexity. This also turns the original producer into a consumer in its own right, which can be hard to implement if the original producer was a website. Because websites only respond to outside requests (a person performs a GET or POST with a web browser to view the catalog), they don't have a running process to proactively read the queue and respond to it. How this might work is laid out in figure 2.

Figure 2. A symmetric queue is when two queues are used together to allow for two-way communication between a set of consumers and a set of producers. This keeps both sets of systems loosely coupled. The inbound queue is on the top, and the outbound queue is on the bottom.


In figure 2, you can see two queues connecting the consumers and the producers together. The top queue is the inbound queue, carrying messages to the consumers. The bottom queue is the outbound queue, carrying messages back to the producers. A typical message pattern would be for the producer to send a message to the consumer by putting it in the inbound queue. The consumer picks up the message, does some critical business work, like ordering pudding, and submits a confirmation message back to the producer by putting it in the outbound queue. The producer finally receives the confirmation message.

4. Truncated exponential backoff

It’s quite common for a worker role to have an infinite loop that polls the queue, processes the work, and pauses for a period of time if the queue is empty. The following listing shows the typical infinite polling loop.

Listing 1. The typical infinite loop to poll a queue

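A minimal sketch of such a loop, assuming the v1.x StorageClient library, a CloudQueue named queue, System.Threading for the sleep, and a hypothetical DoWork method standing in for the real processing:

while (true)
{
    CloudQueueMessage msg = queue.GetMessage();   // returns null when the queue is empty
    if (msg != null)
    {
        DoWork(msg);                 // hypothetical: the real processing happens here
        queue.DeleteMessage(msg);    // delete only after the work is fully completed
    }
    else
    {
        Thread.Sleep(5000);          // nothing to do; wait five seconds and poll again
    }
}
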
In this listing, a permanent true condition starts the loop off, looping until true equals false, which shouldn't ever happen (and if it does, we're all in for a world of hurt). We grab the first message off of the queue and check to see if it's null. If there aren't any messages on the queue, GetMessage returns a null message. If there is a message, we process it, not deleting it until the real work is fully completed.

If there wasn't a new message retrieved, the loop sleeps for a period of time, and then tries again.

Sometimes you might find that a queue is polled too often. If this is a concern, you can dynamically change the wait time in the bottom of the loop. A common algorithm used in networking is called truncated exponential backoff. You can see an example of how to implement this in listing 2. Under this system, each time a queue check doesn't return a message, the loop delay is extended exponentially until a certain ceiling is reached. If the check does return a message, the loop delay is decreased, either back to the lowest setting, or to the next lowest setting in the progression.

Listing 2. Adjusting the delay in a polling loop

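A hedged sketch of that adjustment, using the same assumptions as the previous sketch plus a 2-second floor and a 512-second ceiling on the polling interval:

int delaySeconds = 2;                 // floor of the polling interval
const int maxDelaySeconds = 512;      // ceiling: the "truncated" part of the backoff

while (true)
{
    CloudQueueMessage msg = queue.GetMessage();
    if (msg != null)
    {
        DoWork(msg);                  // hypothetical: the real processing
        queue.DeleteMessage(msg);
        delaySeconds = Math.Max(2, delaySeconds / 2);   // step one rung back down the ladder
        // (set delaySeconds = 2 here instead for an immediate drop to the floor)
    }
    else
    {
        delaySeconds = Math.Min(maxDelaySeconds, delaySeconds * 2);  // back off exponentially
    }
    Thread.Sleep(TimeSpan.FromSeconds(delaySeconds));
}
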
For example, suppose the start value for the delay loop was 2 seconds. After an empty poll, the length would be doubled to 4 seconds. Successive empty checks would result in the time delay increasing to 8, 16, 32, 64, 128, 256, and 512 seconds. Listing 2 contains the code that produces this progression, and figure 3 shows how the delay increases as the queue remains empty.

At 512 seconds, the counter reaches the maximum set for the system, so it doesn't rise above 512 seconds. After some time a message appears. The system processes the message, and then checks the length of the queue. Because there was activity on the queue, the delay setting is set to the next step down the ladder, to 256 seconds. The code also supports an immediate drop to the floor interval if you want an aggressive increase in the rate of polling.

Figure 3 shows the backoff polling in action. The sample code will randomly put messages in the monitored queue—there’s a one-in-three chance of it doing so. The interval started at 2 seconds, and was immediately bumped to 4. Then a message was processed, so it was dialed back down to 2. Then there was a succession of empty calls, leading the code to rapidly increase the polling interval all the way up to 16 seconds between checks.

Figure 3. The output from running the truncated exponential backoff polling algorithm. You can see the polling interval exponentially increase from 2 to 4 to 8 to 16 as the queue continues to be empty.


5. Queue creation on startup

One advantage of the Azure storage system is the easy creation and deletion of queues. A common trend is to inspect the storage system on system startup to determine if the needed entities (BLOBs, tables, and queues) exist. If they don’t exist, they’re created on the spot. This makes it easy to deploy a system, knowing that it will self-provision the storage resources it needs during startup.
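A minimal sketch of that startup check for a queue, assuming the v1.x StorageClient and ServiceRuntime libraries; the setting name and queue name are illustrative, and the same pattern applies to BLOB containers and tables.

var account = CloudStorageAccount.Parse(
    RoleEnvironment.GetConfigurationSettingValue("DataConnectionString"));   // illustrative setting name
var queueClient = account.CreateCloudQueueClient();
var workQueue = queueClient.GetQueueReference("workqueue");                  // illustrative queue name
workQueue.CreateIfNotExist();   // creates the queue only if it doesn't already exist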

A possible drawback to this approach is forgetting to manage the state of these resources carefully. If you’re rolling out an upgrade to the system, and the initialization steps clear out the work queues and other storage entities, it’s possible that you could lose valuable data or work in progress. Make sure that you test both new deployments and upgrades to your system in a safe environment before relying on the self-provisioning code. The automatic provisioning may also impact performance on system startup, because it will be busy checking the infrastructure and configuring things as needed.

6. Dynamic queues versus static queues

Most queues in your applications will be static in nature. The design of your system will require whatever queues it needs, and these will be provisioned when the application is deployed and left to run as is.

Alternatively, you can create queues dynamically. For example, you can create a new custom queue for each new order that’s being processed by the system. This helps your application dynamically scale, and it also helps you separate different concerns in your system.

Perhaps you're building a system for a value-added network that manages the flow of purchase order messages from vendors and suppliers. All day long you're signing up new vendors and suppliers, and some occasionally stop using your service. A great way to automate provisioning is to create the queues and related infrastructure dynamically as each vendor or supplier signs up and is approved as a user of the system.

You want to pay careful attention to the state of each customer, and make sure that any leftover data is cleaned up, and entities are deprovisioned as customers stop using your service. You don’t want to have to pay for unneeded infrastructure that’s forgotten and left lying around.

Another scenario would be even more short term than the preceding one. Think back to the image-processing scenario. Because each image needs a queue to manage its breakdown and processing, you could dynamically create a queue for each new image job that’s submitted. When the processing is complete, the queue could be torn down.
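As a hedged sketch, again assuming the v1.x StorageClient library and an existing CloudQueueClient named queueClient; the queue name and jobId are illustrative.

// Create a short-lived queue for one image job, then tear it down when the job is done.
var jobQueue = queueClient.GetQueueReference("imagejob-" + jobId);
jobQueue.CreateIfNotExist();
// ... enqueue the individual pieces and let the consumers work ...
jobQueue.Delete();   // remove the queue once all the pieces have been processed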

7. Ordered delivery

Some scenarios require guaranteed ordered delivery, such as some EDI scenarios, but the queue, as it is, doesn’t support ordered delivery. There are several approaches for adding this capability on top of the normal queue service.

The simplest approach hinges on the series of messages having a header message that declares the length of the series, or the existence of a trailer message that tells the system when the last message has been received.

The basic approach would be for a process to monitor the normal queue, pull the messages off, and store them in a temporary Azure table. The table should have an integer property that stores the order of each message in the series. Once the defined number of messages has been received, or when the trailer message is received, the messages can be properly ordered (usually by some element present in the messages themselves) and then sent on to the final system that needs the messages in order.

An optimization would be to have the process that’s ordering the messages start trickling them on to the final destination when it knows that it has some of the messages already in order. For example, if messages arrive in the order 1, 2, 3, 5, 6, 4, the message collator could almost immediately send messages 1, 2, and 3. The collator would have to wait for message 4 to arrive before it could forward messages 4, 5, and 6.
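Here is a hedged sketch of that trickle-forward optimization, using a sorted buffer keyed by sequence number; ForwardToDestination is a hypothetical stand-in for delivery to the final system. With the arrival order 1, 2, 3, 5, 6, 4, this forwards 1 through 3 immediately and holds 5 and 6 until 4 shows up.

private readonly SortedDictionary<int, string> buffer = new SortedDictionary<int, string>();
private int nextToSend = 1;

// Called once for each message pulled off the queue, with its position in the series.
public void Collate(int sequenceNumber, string body)
{
    buffer[sequenceNumber] = body;
    while (buffer.ContainsKey(nextToSend))
    {
        ForwardToDestination(buffer[nextToSend]);   // hypothetical: deliver to the final system
        buffer.Remove(nextToSend);
        nextToSend++;
    }
}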

8. Long queues

Most of the queues we’ve discussed so far have been in the form of an immediately serviced queue, where there are one or more consumers actively processing the messages in the queue.

There are times when it’s important to have a long queue in play. This might be a queue that receives messages all day long, without an active consumer. The messages would be processed in a batch later that evening, when the consumer comes online. You might have an application in the field that sends messages into the cloud during the day; then, in the evening, a backend system comes online, processes all of the messages, and goes back offline. This would be a useful scenario when the consuming system isn’t always available to process the messages you’re holding in the queue.

9. Dynamically scaling to meet queue demand

The promise of queues is that the processing of messages is decoupled from the production of those messages. By decoupling the backend, you’re free to scale the backend to meet the demands of the number of messages in the queue.

In a traditional environment, the number of producers is fairly static. The pool of consumers can be scaled, but it requires all the work of buying an additional server, provisioning it, and deploying it to the pool. This can be time consuming, and you’re likely to miss the spike in demand while you’re waiting for hardware to be shipped from your vendor.

The promise of cloud computing is the true dynamic allocation of resources to your computing needs. A management tool can be deployed that monitors the length of the queue in question. The tool can then dynamically create additional consumers (by increasing the number of deployed worker role instances using the service management API) based on the length of the queue. The management tool should define a cap on the number of instances that can be created, and also rules as to when instances should be created or destroyed.

For example, you might define the minimum number of instances as zero, with a maximum of five instances. The rule of thumb would be one instance per 20 messages in the queue. The management tool would need to determine the length of the queue on a regular basis, perhaps every 3 minutes. Your rules should always allow for a little reserve buffer capacity. If you run too close to actual demand, the slight delay it takes (a few minutes) to bring on new instances could have a deleterious effect on the performance of your system.
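A minimal sketch of that rule of thumb, run on each check; SetWorkerInstanceCount is a hypothetical wrapper over the service management API, and the cap of 5 and the 20-messages-per-instance ratio are the values from the example above.

// One worker instance per 20 messages in the queue, capped at 5, with a floor of 0.
int queueLength = queue.RetrieveApproximateMessageCount();   // refreshes the estimate from the cloud
int targetInstances = Math.Min(5, (int)Math.Ceiling(queueLength / 20.0));
SetWorkerInstanceCount(targetInstances);   // hypothetical wrapper over the service management API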

Because Azure is billed based on the number of active instances, and not the actual use of the CPU, this can be a way to not only meet spikes in demand with grace, but also to minimize the costs of the solution.
