Frame-Based Animation
The simplest
animation technique is frame-based animation, which is widely used
in nongaming animation. Frame-based animation involves simulating
movement by displaying a sequence of pregenerated, static frame images. A
movie is a perfect example of frame-based animation: Each frame of the
film is a frame of animation, and when the frames are shown in rapid
succession, they create the illusion of movement.
Frame-based animation has no
concept of a graphical object distinguishable from the background;
everything appearing in a frame is part of that frame as a whole. The
result is that each frame image contains all the information necessary
for that frame in a static form. This is an important point because it
distinguishes frame-based animation from cast-based animation, which you
learn about in the next section. Figure 1 shows a few frames in a frame-based animation.
Figure 1
shows how a paratrooper is drawn directly onto each frame of animation,
so there is no separation between the paratrooper object and the sky
background. This means that the paratrooper cannot be moved
independently of the background. The illusion of movement is achieved as
each frame is redrawn with the paratrooper in a slightly different
position. This type of animation is of limited use in games because
games typically require the ability to move objects around independently
of the background.
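To make the idea concrete, here is a minimal sketch of frame-based animation written in Java against the MIDP user interface classes. The class name and the frame image file names are placeholders; the point is simply that each pregenerated frame is a complete image that gets drawn in its entirety, one after another.

    import java.io.IOException;
    import javax.microedition.lcdui.Canvas;
    import javax.microedition.lcdui.Graphics;
    import javax.microedition.lcdui.Image;

    // Minimal sketch of frame-based animation: every frame is a complete,
    // pregenerated image, and movement is simulated by showing the frames
    // in rapid succession. The image names are placeholders.
    public class FrameAnimCanvas extends Canvas {
        private Image[] frames;   // pregenerated frame images
        private int current;      // index of the frame currently shown

        public FrameAnimCanvas() throws IOException {
            frames = new Image[4];
            for (int i = 0; i < frames.length; i++) {
                frames[i] = Image.createImage("/frame" + i + ".png");
            }
        }

        // Called from a game loop or timer to advance to the next frame.
        public void advance() {
            current = (current + 1) % frames.length;
            repaint();
        }

        protected void paint(Graphics g) {
            // The entire scene is contained in the frame image itself; there is
            // no separate background or movable object to draw.
            g.drawImage(frames[current], 0, 0, Graphics.TOP | Graphics.LEFT);
        }
    }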
Cast-Based Animation
A more
powerful animation technique employed by many games is cast-based
animation, which is also known as sprite animation. Cast-based animation
involves graphical objects that move independently of a background. At
this point, you might be a little confused by the usage of the term
“graphical object” when referring to parts of an animation. In this
case, a graphical object is something that logically can be thought of
as a separate entity from the background of an animation image. For
example, in the animation of a space shoot-em-up game, the aliens are
separate graphical objects that are logically independent of the star
field background.
Gamer’s Garage
The
term “cast-based animation” comes from the fact that sprites can be
thought of as cast members moving around on a stage. This analogy of
relating computer animation to theatrical performance is very useful. By
thinking of sprites as cast members and the background as a stage, you
can take the next logical step and think of an animation as a theatrical
performance. In fact, this isn’t far from the mark because the goal of
theatrical performances is to entertain the audience by telling a story
through the interaction of the cast members. Likewise, cast-based
animations use the interaction of sprites to entertain the user, while
often telling a story.
Each graphical object in a cast-based animation is referred to as a sprite,
and has a position that can vary over time. In other words, a sprite
can have a velocity associated with it that determines how its position
changes over time. Almost every video game uses sprites to some degree.
For example, every object in the classic Asteroids game is a sprite that
moves independently of the background; even though Asteroids relies on
vector graphics, the objects in the game are still sprites. Figure 2 shows an example of how cast-based animation simplifies the paratrooper example you saw in the previous section.
In this example, the
paratrooper is now a sprite that can move independently of the
background sky image. So, instead of having to draw every frame manually
with the paratrooper in a slightly different position, you can just
move the paratrooper image around on top of the background.
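In plain Java terms, the difference can be sketched as follows; the class, field, and image names here are purely illustrative. The paratrooper's position and velocity are stored separately from the background, and each update redraws the background and then overlays the paratrooper at its new position.

    import javax.microedition.lcdui.Graphics;
    import javax.microedition.lcdui.Image;

    // Illustrative sketch of cast-based animation: the paratrooper is a
    // separate image whose position and velocity are tracked independently
    // of the background.
    public class ParatrooperScene {
        private Image background;        // static sky image
        private Image paratrooper;       // sprite image drawn on top of it
        private int x, y;                // current sprite position
        private int velX = 1, velY = 2;  // velocity in pixels per update

        public ParatrooperScene(Image background, Image paratrooper) {
            this.background = background;
            this.paratrooper = paratrooper;
        }

        // Update the sprite's position according to its velocity.
        public void update() {
            x += velX;
            y += velY;
        }

        // Redraw the background, then overlay the sprite at its new position;
        // no frame has to be drawn by hand.
        public void draw(Graphics g) {
            g.drawImage(background, 0, 0, Graphics.TOP | Graphics.LEFT);
            g.drawImage(paratrooper, x, y, Graphics.TOP | Graphics.LEFT);
        }
    }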
Even though the
fundamental principle behind sprite animation is the positional movement
of a graphical object, there is no reason you can’t incorporate
frame-based animation into a sprite. This enables you to change the
image of the sprite as well as alter its position. This hybrid type of
animation is actually built into the sprite support in the MIDP 2.0 API,
as you soon learn.
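As a rough preview of that hybrid support, the following sketch uses the MIDP 2.0 Sprite class to combine the two techniques. The image file name and the 32x32 frame size are assumptions made for illustration; the source image is expected to hold several equally sized frames in a grid.

    import java.io.IOException;
    import javax.microedition.lcdui.Image;
    import javax.microedition.lcdui.game.Sprite;

    // Sketch of a hybrid sprite: it moves positionally and also cycles
    // through frames packed into a single source image.
    public class AnimatedSpriteExample {
        public static Sprite createParatrooper() throws IOException {
            // Sprite slices the source image into 32x32 frames automatically.
            Image frames = Image.createImage("/paratrooper_frames.png");
            Sprite paratrooper = new Sprite(frames, 32, 32);
            paratrooper.setPosition(20, 0);
            return paratrooper;
        }

        public static void updateParatrooper(Sprite paratrooper) {
            paratrooper.move(0, 2);    // positional movement...
            paratrooper.nextFrame();   // ...combined with frame animation
        }
    }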
I mentioned in the
frame-based animation discussion that a movie is a good example of
frame-based animation. But can you think of something on television that
is created in a manner similar to cast-based animation (other than
animated movies and cartoons)? Have you ever wondered how weather people
magically appear in front of a computer-generated map showing the
weather? The news station uses a technique known as blue-screening or
green-screening, which enables them to overlay the weatherperson on top
of the weather map in real time. It works like this: The person stands
in front of a solid colored backdrop (blue or green), which serves as a
transparent background. The image of the weatherperson is overlaid onto
the weather map; the trick is that the colored background is filtered
out when the image is overlaid so that it is effectively transparent. In
this way, the weatherperson is acting exactly like a sprite!
Seeing Through Objects with Transparency
The
weatherperson example brings up a very important point regarding
sprites: transparency. Because bitmapped images are rectangular by
nature, a problem arises when sprite images aren’t rectangular in shape.
In sprites that aren’t rectangular, which is the majority of
them, the pixels surrounding the sprite image are unused. In a graphics
system without transparency, these unused pixels are drawn just like any
others. The end result is sprites that have visible rectangular borders
around them, which completely destroys the effectiveness of having
sprites overlaid on a background image.
What’s the solution? Well,
one solution is to make all your sprites rectangular. Because that
isn’t very practical, a more realistic approach is
transparency, which allows you to define a certain color in an image as
unused, or transparent. When drawing routines encounter pixels of this
color, they simply skip them, leaving the original background showing
through. Transparent colors in images act exactly like the
weatherperson’s colored screen in the earlier example.
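The following sketch shows how such a drawing routine might work, using plain integer pixel buffers and an arbitrarily chosen transparent color. The class and method names are illustrative, and bounds checking is omitted for brevity.

    // Sketch of a color-keyed drawing routine: any source pixel that matches
    // the designated "unused" color is skipped, so the background underneath
    // shows through. Pixel buffers are int arrays of 0xAARRGGBB values.
    public class TransparentBlit {
        // The color reserved in the sprite image to mean "nothing here".
        private static final int TRANSPARENT = 0xFFFF00FF;   // magenta, say

        public static void drawSprite(int[] screen, int screenWidth,
                                      int[] sprite, int spriteWidth, int spriteHeight,
                                      int destX, int destY) {
            for (int sy = 0; sy < spriteHeight; sy++) {
                for (int sx = 0; sx < spriteWidth; sx++) {
                    int pixel = sprite[sy * spriteWidth + sx];
                    if (pixel == TRANSPARENT) {
                        continue;   // skip: leave the background pixel untouched
                    }
                    screen[(destY + sy) * screenWidth + (destX + sx)] = pixel;
                }
            }
        }
    }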
Adding Depth with Z-Order
In many instances, you
will want some sprites to appear on top of others. For example, in a war
game you might have planes flying over a battlefield dropping bombs on
everything in sight. If a plane sprite happens to fly over a tank
sprite, you obviously want the plane to appear above the tank and,
therefore, hide the tank as it passes over. You handle this problem by
assigning each sprite a screen depth, which is also referred to as Z-order.
Z-order is the
relative depth of sprites on the screen. The depth of sprites is called
Z-order because it works somewhat like another dimension, a z axis.
You can think of sprites moving around on the screen along the x and y axes.
Similarly, the z axis can be thought of as an axis projected into
the screen that determines how the sprites overlap each other. To put it
another way, Z-order determines a sprite’s depth within the screen.
Because they make use of a z axis, you might think that Z-ordered
sprites are 3D. The truth is that Z-ordered sprites can’t be considered
3D because the z axis is a hypothetical axis only used to determine how
sprite objects hide each other.
Construction Cue
The
easiest way to control Z-order in a game is to pay close attention to
the order in which you draw the game graphics. Fortunately, the MIDP 2.0 API
provides a class called LayerManager
that simplifies the task of managing multiple graphics objects (layers)
and their respective Z-orders, as sketched in the example that follows.
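As a rough sketch of how LayerManager might be used, consider the war game example again; the plane, tank, and battlefield sprites are assumed to be created elsewhere. The layer at index 0 is painted on top of the others, so inserting the plane there makes it hide the tank as it passes over.

    import javax.microedition.lcdui.Graphics;
    import javax.microedition.lcdui.game.LayerManager;
    import javax.microedition.lcdui.game.Sprite;

    // Sketch of controlling Z-order with the MIDP 2.0 LayerManager class.
    public class ZOrderExample {
        public static LayerManager buildScene(Sprite plane, Sprite tank,
                                              Sprite battlefield) {
            LayerManager layers = new LayerManager();
            layers.insert(plane, 0);         // nearest to the viewer
            layers.insert(tank, 1);          // below the plane
            layers.insert(battlefield, 2);   // background, deepest layer
            return layers;
        }

        // One call paints every layer, deepest first, so the Z-order implied
        // by the layer indexes is respected automatically.
        public static void drawScene(LayerManager layers, Graphics g) {
            layers.paint(g, 0, 0);
        }
    }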
Just
to make sure that you get a clear picture of how Z-order works, let’s
go back for a moment to the good old days of traditional animation. You
learned earlier that traditional animators, such as those at Disney,
used celluloid sheets to draw animated objects. They drew on celluloid
sheets because the sheets could be overlaid on a background image and
moved independently; cel animation is an early version of sprite
animation. Each cel sheet corresponds to a unique Z-order value,
determined by where in the pile of sheets the sheet is located. If a
sprite near the top of the pile happens to be in the same location on
the cel sheet as any lower sprites, it conceals them. The location of
each sprite in the stack of cel sheets is its Z-order, which determines
its visibility precedence. The same thing applies to sprites in
cast-based animations, except that the Z-order is determined by the
order in which the sprites are drawn, rather than the cel sheet
location.
Detecting Collisions between Objects
No discussion of animation
as it applies to games would be complete without covering collision
detection. Collision detection is the method of determining whether
sprites have collided with each other. Although collision detection
doesn’t directly play a role in creating the illusion of movement, it is
tightly linked to sprite animation and crucial in games.
Collision
detection is used to determine when sprites physically interact with
each other. In an Asteroids game, for example, if the ship sprite
collides with an asteroid sprite, the ship is destroyed and an explosion
appears. Collision detection is the mechanism employed to find out
whether the ship collided with the asteroid. This might not sound like a
big deal; just compare their positions and see whether they overlap,
right? Correct, but consider how many comparisons must take place when a
lot of sprites are moving around—each sprite must be compared to every
other sprite in the system. It’s not hard to see how the processing
overhead of effective collision detection can become difficult to
manage.
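As a rough illustration of the cost, the following sketch compares every pair of sprites once per frame, which amounts to roughly n*(n-1)/2 checks for n sprites. It uses the MIDP 2.0 Sprite class; the surrounding helper class is hypothetical.

    import java.util.Vector;
    import javax.microedition.lcdui.game.Sprite;

    // Sketch of a brute-force collision pass: every sprite is tested against
    // every other sprite, so the work grows quickly as sprites are added.
    public class CollisionPass {
        // Returns true as soon as any two sprites' bounding rectangles overlap.
        public static boolean anyCollision(Vector sprites) {
            for (int i = 0; i < sprites.size(); i++) {
                Sprite a = (Sprite) sprites.elementAt(i);
                for (int j = i + 1; j < sprites.size(); j++) {
                    Sprite b = (Sprite) sprites.elementAt(j);
                    if (a.collidesWith(b, false)) {   // false = rectangle check
                        return true;
                    }
                }
            }
            return false;
        }
    }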
Not
surprisingly, there are many approaches to handling collision detection.
The simplest approach is to compare the bounding rectangles of each
sprite with the bounding rectangles of all the other sprites. This
method is efficient, but if you have objects that are not rectangular, a
certain degree of error occurs when the objects brush by each other.
Corners might overlap and indicate a collision when really only the
transparent areas are overlapping. The less rectangular the shape of the
sprites, the more error typically occurs. Figure 3 shows how simple rectangle collision works.
In the figure, the
areas determining the collision detection are shaded. You can see how
simple rectangle collision detection isn’t very accurate unless you’re
dealing with sprites that are rectangular in shape. An improvement upon
this technique is to shrink the collision rectangles a little, which
reduces the error. This method improves things a little, but it has the
potential to cause error in the reverse direction by allowing sprites
to overlap in some cases without signaling a collision. Figure 4
shows how shrinking the collision rectangles can improve the error on
simple rectangle collision detection. Shrunken rectangle collision is
just as efficient as simple rectangle collision because all you are
doing is comparing rectangles for intersection.
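One way to get shrunken rectangle collision with the MIDP 2.0 Sprite class is its defineCollisionRectangle() method, as in the following sketch; the margin value is an arbitrary choice for illustration.

    import javax.microedition.lcdui.game.Sprite;

    // Sketch of shrunken rectangle collision: the default full-image collision
    // rectangle is replaced with a smaller one, reducing false positives at
    // the corners while keeping the test a cheap rectangle intersection.
    public class ShrunkenCollision {
        public static void shrinkCollisionRect(Sprite s, int margin) {
            // The rectangle is specified relative to the sprite's own pixels.
            s.defineCollisionRectangle(margin, margin,
                                       s.getWidth() - 2 * margin,
                                       s.getHeight() - 2 * margin);
        }

        public static boolean collide(Sprite a, Sprite b) {
            return a.collidesWith(b, false);   // still a plain rectangle test
        }
    }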
The most
accurate collision detection technique is to detect collision based on
the sprite image data, which involves actually checking whether it is the
transparent parts of the sprites or the sprite images themselves that are
overlapping. In this case, you get a collision only if the actual sprite
images are overlapping. This is the ideal technique for detecting
collisions because it is exact and enables objects of any shape to move
by each other without error. Figure 5 shows collision detection that uses the sprite image data.
Unfortunately, the technique shown in Figure 5
requires more processing overhead than rectangle collision detection
and can be a bottleneck in game performance. It really depends on the
importance of extremely accurate collision detection in your specific
game, and how much room you have to carry out the processing without
killing your frame rate. You’ll find that shrunken rectangle collision
detection is sufficient in a lot of games.
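If a game does need exact results, the MIDP 2.0 Sprite class can perform this pixel-level test for you: passing true to collidesWith() asks for an image-data check rather than a rectangle check. The sprite names in this one-method sketch are hypothetical.

    import javax.microedition.lcdui.game.Sprite;

    // Sketch of image-data collision detection: a collision is reported only
    // when opaque pixels of the two sprites actually overlap.
    public class PixelCollision {
        public static boolean shipHitAsteroid(Sprite ship, Sprite asteroid) {
            return ship.collidesWith(asteroid, true);   // true = pixel-level
        }
    }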