Textures are often the biggest memory hog in
graphically intense applications. Block compression
is a technique that’s quite popular in real-time graphics, even on desktop
platforms. Like JPEG compression, it can cause a loss of image quality,
but unlike JPEG, its compression ratio is constant and deterministic. If
you know the width and height of your original image, then it’s simple to
compute the number of bytes in the compressed image.
Block compression works particularly well on photographs, and in some cases it’s difficult to notice the quality loss.
The noise is much more noticeable when applied to images with regions of
solid color, such as vector-based graphics and text.
I strongly encourage you to use block
compression when it doesn’t make a noticeable difference in image quality.
Not only does it reduce your memory footprint, but it can boost
performance as well, because of increased cache coherency. The iPhone
supports a specific type of block compression called PVRTC, named after
the PowerVR chip that serves as the iPhone’s graphics processor. PVRTC has
four variants, as shown in Table 1.
Table 1. PVRTC formats
GL format | Contains alpha | Compression ratio | Byte count
---|---|---|---
GL_COMPRESSED_RGBA_PVRTC_4BPPV1_IMG | Yes | 8:1 | Max(32, Width * Height / 2)
GL_COMPRESSED_RGB_PVRTC_4BPPV1_IMG | No | 6:1 | Max(32, Width * Height / 2)
GL_COMPRESSED_RGBA_PVRTC_2BPPV1_IMG | Yes | 16:1 | Max(32, Width * Height / 4)
GL_COMPRESSED_RGB_PVRTC_2BPPV1_IMG | No | 12:1 | Max(32, Width * Height / 4)
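If you want to compute these sizes in code, here’s a minimal C++ sketch based on the byte-count column of Table 1 (the helper name is hypothetical and not part of the sample project):
#include <algorithm>

// Hypothetical helper: size in bytes of one PVRTC mipmap level,
// following Table 1. bitsPerPixel is 2 or 4, and every level
// occupies at least 32 bytes.
int PvrtcDataSize(int width, int height, int bitsPerPixel)
{
    return std::max(32, width * height * bitsPerPixel / 8);
}

// For example, a 256x256 image at 4 bpp compresses to 32,768 bytes.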
Warning:
Be aware of some important restrictions with
PVRTC textures: the image must be square, and its width/height must be a
power of two.
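If you’d like to catch violations of these restrictions early, a check along the following lines could be added to your loading code; this is a sketch of mine, not part of the sample code:
// Hypothetical check: PVRTC images must be square with power-of-two sides.
bool IsValidPvrtcSize(int width, int height)
{
    bool isSquare = (width == height);
    bool isPowerOfTwo = (width > 0) && ((width & (width - 1)) == 0);
    return isSquare && isPowerOfTwo;
}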
The iPhone SDK comes with a command-line
program called texturetool that you can use to generate
PVRTC data from an uncompressed image, and it’s located here:
/Developer/Platforms/iPhoneOS.platform/Developer/usr/bin
It’s possible Apple has modified the path since
the time of this writing, so I recommend verifying the location of
texturetool using the Spotlight feature in Mac OS X. By
the way, there are actually several command-line tools at this location
(including a rather cool one called pngcrush). They’re
worth a closer look!
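For instance, assuming Spotlight has indexed the developer folders, you can locate the tool from a Terminal prompt like this:
mdfind -name texturetool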
Here’s how you could use
texturetool to convert Grid16.png
into a compressed image called Grid16.pvr:
texturetool -m -e PVRTC -f PVR -p Preview.png -o Grid16.pvr Grid16.png
Some of the parameters are explained
here.
- -m
Generate mipmaps.
- -e PVRTC
Use PVRTC compression. This can be
tweaked with additional parameters, explained here.
- -f PVR
This may seem redundant, but it chooses
the file format rather than the encoding. The PVR
format includes a simple header before the image data that contains
size and format information. I’ll explain how to parse the header
later.
- -p PreviewFile
This is an optional PNG file that gets
generated to allow you to preview the quality loss caused by
compression.
- -o OutFile
This is the name of the resulting PVR
file.
The encoding argument can be tweaked with
optional arguments. Here are some examples:
- -e PVRTC --bits-per-pixel-2
Specifies a 2-bits-per-pixel
encoding.
- -e PVRTC --bits-per-pixel-4
Specifies a 4-bits-per-pixel encoding.
This is the default, so there’s not much reason to include it on the
command line.
- -e PVRTC --channel-weighting-perceptual --bits-per-pixel-2
Use perceptual compression and a
2-bits-per-pixel format. Perceptual compression doesn’t change the
format of the image data; rather, it tweaks the compression
algorithm such that the green channel preserves more quality than
the red and blue channels. Humans are more sensitive to variations
in green.
- -e PVRTC --channel-weighting-linear
Apply compression equally to all color
components. This defaults to “on,” so there’s no need to specify it
explicitly.
Note:
At the time of this writing,
texturetool does not include an argument to control
whether the resulting image has an alpha channel. It automatically
determines this based on the source format.
Rather than executing
texturetool from the command line, you can make it an
automatic step in Xcode’s build process. Go ahead and perform the
following steps:
Right-click the
Targets group, and then choose Add→New Build Phase→New Run Script Build Phase.
There’s a lot of stuff in the next dialog:
Leave the shell as
/bin/sh.
Enter this directly into the script
box:
BIN=${PLATFORM_DIR}/../iPhoneOS.platform/Developer/usr/bin
INFILE=${SRCROOT}/Textures/Grid16.png
OUTFILE=${SRCROOT}/Textures/Grid16.pvr
${BIN}/texturetool -m -f PVR -e PVRTC $INFILE -o $OUTFILE
Add this to Input Files:
$(SRCROOT)/Textures/Grid16.png
Add this to Output Files:
$(SRCROOT)/Textures/Grid16.pvr
These fields are important to set
because they make Xcode smart about rebuilding; in other words, it
should run the script only when the input file has been
modified.
Close the dialog by clicking the X in
the upper-left corner.
Open the Targets group
and its child node. Drag the Run Script item so
that it appears before the Copy Bundle Resources
item. You can also rename it if you’d like; simply right-click it and
choose Rename.
Build your project once to run the script.
Verify that the resulting PVRTC file exists. Don’t try running
yet.
Add Grid16.pvr to your
project (right-click the Textures group, select Add→Existing Files and choose
Grid16.pvr). Since it’s a build artifact, I don’t
recommend checking it into your source code control system. Xcode
gracefully handles missing files by highlighting them in red.
Make sure that Xcode doesn’t needlessly
rerun the script when the source file hasn’t been modified. If it
does, then there could be a typo in the script dialog. (Simply
double-click the Run Script phase to reopen the
script dialog.)
Before moving on to the implementation, we need to incorporate a couple of source files from Imagination Technologies’ PowerVR SDK.
Click the link for “Khronos OpenGL ES 2.0
SDKs for PowerVR SGX family.”
Select the download link under Mac OS /
iPhone 3GS.
In your Xcode project, create a new group
called PowerVR. Right-click the new group, and
choose Get Info. To the right of the “Path” label on the General tab,
click Choose and create a New Folder called PowerVR. Click Choose and
close the group info window.
After opening up the tarball, look for
PVRTTexture.h and
PVRTGlobal.h in the Tools
folder. Drag these files to the PowerVR group, check the “Copy items”
checkbox in the dialog that appears, and then click Add.
Enough Xcode shenanigans; let’s get back to
writing real code. Before adding PVR support to the
ResourceManager class, we need to make some
enhancements to Interfaces.hpp. These changes are
highlighted in bold in Example 1.
Example 1. Adding PVRTC support to Interfaces.hpp
enum TextureFormat {
    TextureFormatGray,
    TextureFormatGrayAlpha,
    TextureFormatRgb,
    TextureFormatRgba,
    TextureFormatPvrtcRgb2,
    TextureFormatPvrtcRgba2,
    TextureFormatPvrtcRgb4,
    TextureFormatPvrtcRgba4,
};

struct TextureDescription {
    TextureFormat Format;
    int BitsPerComponent;
    ivec2 Size;
    int MipCount;
};

// ...

struct IResourceManager {
    virtual string GetResourcePath() const = 0;
    virtual TextureDescription LoadPvrImage(const string& filename) = 0;
    virtual TextureDescription LoadPngImage(const string& filename) = 0;
    virtual void* GetImageData() = 0;
    virtual ivec2 GetImageSize() = 0;
    virtual void UnloadImage() = 0;
    virtual ~IResourceManager() {}
};
Example 2
shows the implementation of LoadPvrImage (you’ll
replace everything within the class definition except
the GetResourcePath and LoadPngImage
methods). It parses the header fields by simply casting the data pointer
to a pointer-to-struct. The size of the struct isn’t necessarily the size
of the header, so the GetImageData method looks at the
dwHeaderSize field to determine where the raw data
starts.
Example 2. Adding PVRTC support to ResourceManager.mm
...
#import "../PowerVR/PVRTTexture.h"
class ResourceManager : public IResourceManager {
public:
    // ...
    TextureDescription LoadPvrImage(const string& file)
    {
        NSString* basePath = [NSString stringWithUTF8String:file.c_str()];
        NSString* resourcePath = [[NSBundle mainBundle] resourcePath];
        NSString* fullPath = [resourcePath stringByAppendingPathComponent:basePath];

        m_imageData = [NSData dataWithContentsOfFile:fullPath];
        m_hasPvrHeader = true;
        PVR_Texture_Header* header = (PVR_Texture_Header*) [m_imageData bytes];
        bool hasAlpha = header->dwAlphaBitMask ? true : false;

        TextureDescription description;
        switch (header->dwpfFlags & PVRTEX_PIXELTYPE) {
            case OGL_PVRTC2:
                description.Format = hasAlpha ? TextureFormatPvrtcRgba2 :
                                                TextureFormatPvrtcRgb2;
                break;
            case OGL_PVRTC4:
                description.Format = hasAlpha ? TextureFormatPvrtcRgba4 :
                                                TextureFormatPvrtcRgb4;
                break;
            default:
                assert(!"Unsupported PVR image.");
                break;
        }

        description.Size.x = header->dwWidth;
        description.Size.y = header->dwHeight;
        description.MipCount = header->dwMipMapCount;
        return description;
    }
    void* GetImageData()
    {
        if (!m_hasPvrHeader)
            return (void*) [m_imageData bytes];

        PVR_Texture_Header* header = (PVR_Texture_Header*) [m_imageData bytes];
        char* data = (char*) [m_imageData bytes];
        unsigned int headerSize = header->dwHeaderSize;
        return data + headerSize;
    }
    void UnloadImage()
    {
        m_imageData = 0;
    }
private:
    NSData* m_imageData;
    bool m_hasPvrHeader;
    ivec2 m_imageSize;
};
Note that we changed the type of
m_imageData from CFDataRef to
NSData*. Since we create the NSData
object using autorelease semantics, there’s no need to call a release
function in the UnloadImage() method.
Note:
CFDataRef and
NSData are said to be “toll-free bridged,” meaning
they are interchangeable in function calls. You can think of
CFDataRef as being the vanilla C version and
NSData as the Objective-C version. I prefer using
NSData (in my Objective-C code) because it can work
like a C++ smart pointer.
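As a small illustration of what toll-free bridging allows (the variable names here are just for demonstration), you can pass an NSData object to a Core Foundation call with a simple cast, and vice versa:
NSData* nsData = [NSData dataWithContentsOfFile:@"Grid16.pvr"];

// Hand the Objective-C object to a plain C (Core Foundation) function.
CFIndex length = CFDataGetLength((CFDataRef) nsData);

// Or treat a CFDataRef as an NSData object.
CFDataRef cfData = (CFDataRef) nsData;
NSUInteger byteCount = [(NSData*) cfData length];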
Because of this change, we’ll also need to make
one change to LoadPngImage. Find this line:
m_imageData = CGDataProviderCopyData(CGImageGetDataProvider(cgImage));
and replace it with the following:
CFDataRef dataRef = CGDataProviderCopyData(CGImageGetDataProvider(cgImage));
m_imageData = [NSData dataWithData:(NSData*) dataRef];
You should now be able to build and run,
although your application is still using the PNG file.
Example 3 adds a
new method to the rendering engine for creating a compressed texture
object. This code will work under both ES 1.1 and ES 2.0.
Example 3. RenderingEngine::SetPvrTexture()
private:
    void SetPvrTexture(const string& name) const;
    // ...

void RenderingEngine::SetPvrTexture(const string& filename) const
{
    TextureDescription description =
        m_resourceManager->LoadPvrImage(filename);
    unsigned char* data =
        (unsigned char*) m_resourceManager->GetImageData();
    int width = description.Size.x;
    int height = description.Size.y;

    int bitsPerPixel;
    GLenum format;
    switch (description.Format) {
        case TextureFormatPvrtcRgba2:
            bitsPerPixel = 2;
            format = GL_COMPRESSED_RGBA_PVRTC_2BPPV1_IMG;
            break;
        case TextureFormatPvrtcRgb2:
            bitsPerPixel = 2;
            format = GL_COMPRESSED_RGB_PVRTC_2BPPV1_IMG;
            break;
        case TextureFormatPvrtcRgba4:
            bitsPerPixel = 4;
            format = GL_COMPRESSED_RGBA_PVRTC_4BPPV1_IMG;
            break;
        case TextureFormatPvrtcRgb4:
            bitsPerPixel = 4;
            format = GL_COMPRESSED_RGB_PVRTC_4BPPV1_IMG;
            break;
    }

    for (int level = 0; width > 0 && height > 0; ++level) {
        GLsizei size = std::max(32, width * height * bitsPerPixel / 8);
        glCompressedTexImage2D(GL_TEXTURE_2D, level, format, width,
                               height, 0, size, data);
        data += size;
        width >>= 1;
        height >>= 1;
    }

    m_resourceManager->UnloadImage();
}
You can now replace this:
SetPngTexture("Grid16.png");
with this:
SetPvrTexture("Grid16.pvr");
Since the PVR file contains multiple mipmap
levels, you’ll also need to remove any code you added for mipmap
autogeneration (glGenerateMipmap under ES 2.0,
glTexParameter with
GL_GENERATE_MIPMAP under ES 1.1).
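For reference, the autogeneration calls you’d be removing look something like this; the exact placement depends on how you added them earlier:
// ES 2.0: remove the call made after uploading the base level.
glGenerateMipmap(GL_TEXTURE_2D);

// ES 1.1: remove the parameter set before uploading the texture.
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);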
After rebuilding your project, your app will
now be using the compressed texture.
Of particular interest in Example 3 is the section that loops over each mipmap
level. Rather than calling glTexImage2D, it uses
glCompressedTexImage2D to upload the data. Here’s its
formal declaration:
void glCompressedTexImage2D(GLenum target, GLint level, GLenum format,
GLsizei width, GLsizei height, GLint border,
GLsizei byteCount, const GLvoid* data);
- target
Specifies which binding point to upload
the texture to. For ES 1.1, this must be
GL_TEXTURE_2D.
- level
Specifies the mipmap level.
- format
Specifies the compression
encoding.
- width, height
Specifies the dimensions of the image
being uploaded.
- border
Must be zero. Texture borders are not
supported in OpenGL ES.
- byteCount
The size of data being uploaded. Note
that glTexImage2D doesn’t have a parameter like
this; for noncompressed data, OpenGL computes the byte count based
on the image’s dimensions and format.
- data
Pointer to the compressed data.
Note:
In addition to PVRTC formats, the iPhone also
supports compressed paletted textures to be conformant to the OpenGL ES
1.1 standard. But paletted images on the iPhone won’t buy you much;
internally they get expanded into normal true-color images.