The following subsections discuss some examples of BPEL simulation.
Scenarios
On
the ideal BPEL simulator, we run three scenarios. The first considers
the impact of synchronously invoking time-consuming services. Process
(a) in the following figure calls three such services (Invoke Sync System 1, Invoke Sync System 2, and Invoke Sync System 3)
in sequence. We expect these expensive calls to tie
up the process's inbound queue, creating a backlog of arrivals. Process
(b), which uses a BPEL flow structure
to call the services in parallel rather than in sequence, does not
improve on the first; it ties up the inbound queue for just as long.
Process (c), which invokes the services asynchronously and waits for
their replies in parallel, should reduce the backlog, but requires more
message traffic into the process. Is this an acceptable tradeoff?
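As a rough BPEL 2.0-style sketch of process (a), with invented partner link, operation, and variable names, and with all declarations (partnerLinks, variables) omitted:

    <sequence>
      <receive partnerLink="client" operation="start" variable="request"
               createInstance="yes"/>
      <!-- Three synchronous invokes; each blocks until its response arrives.
           The names here are illustrative, not from a real WSDL. -->
      <invoke partnerLink="system1" operation="call"
              inputVariable="req1" outputVariable="resp1"/>
      <invoke partnerLink="system2" operation="call"
              inputVariable="req2" outputVariable="resp2"/>
      <invoke partnerLink="system3" operation="call"
              inputVariable="req3" outputVariable="resp3"/>
    </sequence>

Process (b) wraps the same three invokes in a flow instead of a sequence. Process (c) replaces each synchronous invoke with a one-way invoke plus a receive for the callback; a sketch of it appears under Queues and Bursts in the Scenarios below, where its burst structure matters.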
The
second scenario, shown in the following figure, focuses on the
granularity of services. Process (a) invokes two services synchronously (Invoke Sync 1 and Invoke Sync 2) before proceeding to do useful work with the results. In process (b), these services are combined into one (Invoke Sync 1 and 2),
which does the same amount of work but does it in one call. The move to
coarser granularity in (b) by itself has marginal benefit, but it is a
move in the right direction. In process (c), the synchronous call is
changed to a single asynchronous request (Invoke Async 1 and 2) with two parallel responses: Receive Reply 1 and Receive Reply 2. Process (d) is similar, but combines the responses into one: Receive Reply 1 and 2. In the next section, we carefully compare processes (c) and (d) to see which performs better.
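As a rough sketch, again with invented names, the tails of processes (c) and (d) differ only in how the replies come back:

    <!-- Process (c): one asynchronous request, two parallel callbacks -->
    <invoke partnerLink="combined" operation="request1and2" inputVariable="req"/>
    <flow>
      <receive partnerLink="combined" operation="onReply1" variable="resp1"/>
      <receive partnerLink="combined" operation="onReply2" variable="resp2"/>
    </flow>

    <!-- Process (d): one asynchronous request, one combined callback -->
    <invoke partnerLink="combined" operation="request1and2" inputVariable="req"/>
    <receive partnerLink="combined" operation="onReply1and2" variable="resp"/>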
In the third and
final scenario, we consider the impact of having a huge service
message. Process (a) in the following figure asynchronously invokes a
service (Async Invoke) and, some time later, receives in response a huge message (Async Response (Huge)).
Huge messages fill up queues quickly and require extra time to move
through communication pipes, so we expect the huge response to stress
the system. Process (b) reduces the impact by receiving the response in
smaller chunks, getting each chunk (Async Response (Chunk))
in each iteration of a while loop. There is an implied delay between the
arrivals of successive chunks; there would be no benefit over process (a) if they came
in quick succession. Thus, we expect process (b) to have a longer cycle
time than (a). We let the statistics decide whether this is an acceptable
compromise.
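In BPEL, the chunked receive in process (b) might be sketched as follows; the moreChunks variable, the payload part, and the isLast field are assumptions about how the partner marks the final chunk:

    <!-- Request the data, then pull the response down chunk by chunk.
         moreChunks and isLast are invented for illustration. -->
    <invoke partnerLink="partner" operation="requestData" inputVariable="req"/>
    <assign>
      <copy><from>'true'</from><to variable="moreChunks"/></copy>
    </assign>
    <while>
      <condition>$moreChunks = 'true'</condition>
      <sequence>
        <receive partnerLink="partner" operation="onChunk" variable="chunk"/>
        <!-- Each chunk is assumed to carry an isLast marker in its payload part -->
        <assign>
          <copy>
            <from>string($chunk.payload/tns:isLast != 'true')</from>
            <to variable="moreChunks"/>
          </copy>
        </assign>
      </sequence>
    </while>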
Queues and Bursts in the Scenarios
In the
simulations, we use two queues. BPELIN is the inbound queue for BPEL
processes. All starting and intermediate events for the process are
placed on this queue. BPELOUT is the outbound queue, where processes put
messages when invoking a partner service asynchronously. When the
partner service replies, it places the response message back on the
BPELIN queue. Synchronous invocations do not use the queues; the process
calls the service and gets back the response in a single step.
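A hedged fragment (names invented) shows which queue each style of interaction touches; the comments, not the syntax, carry the point:

    <!-- One-way invoke: the request message is placed on the BPELOUT queue -->
    <invoke partnerLink="partner" operation="requestWork" inputVariable="req"/>

    <!-- The partner's callback arrives on the BPELIN queue -->
    <receive partnerLink="partner" operation="onResult" variable="resp"/>

    <!-- A synchronous invoke bypasses both queues: call and response in one step -->
    <invoke partnerLink="partner" operation="doWorkNow"
            inputVariable="req" outputVariable="resp"/>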
The work that a
process performs from the time it receives an event to the time it
pauses to wait for the next event is known as a burst. In a typical architecture, there is a process engine
that is responsible for getting the event off the BPELIN queue and
stepping the process through its burst. The longer the burst, the longer
the engine is tied up, and the less capacity it has to handle other
requests. A process with several short bursts uses the engine frequently (once per burst), but each burst ties it up only briefly.
The
figure that follows shows that process (a) of scenario 1 is a
single-burst process. The burst starts on the initial receive event
(marked with a large "B") and continues with three synchronous calls to
three separate systems. The event that starts the burst is the request
from the client application. The event is placed on the BPELIN queue,
bound for the process.
According to the following figure, process (c) of scenario 1 has four bursts (marked with Bs),
one triggered by the initial client request, the other three by
responses from three partner systems to asynchronous requests made by
the process. Each of these four events is placed on the BPELIN queue for
the process. The process puts its asynchronous requests on the BPELOUT
queue.
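To make the burst boundaries concrete, here is a hedged BPEL sketch of process (c), with comments marking where each of the four bursts begins; as before, the names are illustrative:

    <sequence>
      <!-- Burst 1 begins: the client request is taken off BPELIN -->
      <receive partnerLink="client" operation="start" variable="request"
               createInstance="yes"/>
      <!-- Three one-way requests go out on the BPELOUT queue -->
      <invoke partnerLink="system1" operation="requestWork" inputVariable="req1"/>
      <invoke partnerLink="system2" operation="requestWork" inputVariable="req2"/>
      <invoke partnerLink="system3" operation="requestWork" inputVariable="req3"/>
      <!-- Burst 1 ends: the process pauses to wait for the callbacks -->
      <flow>
        <!-- Each callback taken off BPELIN starts a burst of its own (bursts 2-4) -->
        <receive partnerLink="system1" operation="onResult" variable="resp1"/>
        <receive partnerLink="system2" operation="onResult" variable="resp2"/>
        <receive partnerLink="system3" operation="onResult" variable="resp3"/>
      </flow>
    </sequence>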
In
scenario 1, processes (a) and (b) have one burst, and process (c) has
four. In scenario 2, processes (a) and (b) have one burst, (c) has
three, and (d) has two. In scenario 3, process (a) has two bursts, and
(b) has one plus the number of chunks.