Some of the biggest gotchas in texturing are
the various constraints imposed on texture size. Strictly speaking, OpenGL
ES 1.1 stipulates that all textures must have dimensions that are powers
of two, while OpenGL ES 2.0 has no such restriction. In the graphics
community, textures that have a power-of-two width and height are commonly
known as POT textures; non-power-of-two textures are NPOT.
For better or worse, the iPhone platform
diverges from the OpenGL core specifications here. The POT constraint in
ES 1.1 doesn’t always apply, nor does the NPOT feature in ES 2.0.
Newer iPhone models support an extension to ES
1.1 that relaxes the POT restriction, but only under a certain set of
conditions. It’s called
GL_APPLE_texture_2D_limited_npot, and it basically states the following:
Nonmipmapped 2D textures that use
GL_CLAMP_TO_EDGE wrapping for the S and T coordinates need not have
power-of-two dimensions.
As hairy as this seems, it covers quite a few
situations, including the common case of displaying a background texture
with the same dimensions as the screen (320×480). Since it requires no
minification, it doesn’t need mipmapping, so you can create a texture
object that fits “just right.”
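Assuming the device supports the extension (more on checking for that in a
moment), here’s a minimal sketch of creating a full-screen 320×480 background
texture that satisfies those conditions; the pixels pointer is a hypothetical
buffer of 320×480 RGBA data you’ve already loaded:
// Sketch: a 320x480 NPOT texture that satisfies the limited_npot rules.
// 'pixels' is a hypothetical pointer to 320x480 RGBA image data.
GLuint backgroundTexture;
glGenTextures(1, &backgroundTexture);
glBindTexture(GL_TEXTURE_2D, backgroundTexture);

// No mipmapping: use a non-mipmapped minification filter.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

// Both wrap modes must be GL_CLAMP_TO_EDGE.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 320, 480, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);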
Not all iPhones support the aforementioned
extension to ES 1.1; the only surefire way to find out is by
programmatically checking for the extension string, which can be done like
this:
const char* extensions = (char*) glGetString(GL_EXTENSIONS);
bool npot = strstr(extensions, "GL_APPLE_texture_2D_limited_npot") != 0;
The extensions string returned by OpenGL is
long and space-delimited, so it’s a bit difficult for humans to read. As
a useful diagnostic procedure, I often dump a “pretty print” of the
extensions list to Xcode’s Debugger Console at startup time. This can be
done with the following code snippet:
void PrettyPrintExtensions()
{
    std::string extensions = (const char*) glGetString(GL_EXTENSIONS);
    char* extensionStart = &extensions[0];
    char** extension = &extensionStart;
    std::cout << "Supported OpenGL ES Extensions:" << std::endl;
    while (*extension)
        std::cout << '\t' << strsep(extension, " ") << std::endl;
    std::cout << std::endl;
}
If your 320×480 texture needs to be mipmapped
(or if you’re supporting older iPhones), then you can simply use a 512×512
texture and adjust your texture coordinates to address a 320×480
subregion. One quick way of doing this is with a texture matrix:
glMatrixMode(GL_TEXTURE);
glLoadIdentity();
glScalef(320.0f / 512.0f, 480.0f / 512.0f, 1.0f);
Unfortunately, the portions of the image that
lie outside the 320×480 subregion are wasted. If this causes you to
grimace, keep in mind that you can add “mini-textures” to those unused
regions.
If you don’t want to use a 512×512 texture,
then it’s possible to create five POT textures and carefully puzzle them
together to fit the screen, as shown in Figure 1. This is a hassle, though, and I don’t
recommend it unless you have a strong penchant for masochism.
By the way, according to the official OpenGL ES
2.0 specification, NPOT textures are actually allowed in
any situation! Apple has made a minor transgression
here by imposing the aforementioned limitations.
Keep in mind that even when the POT restriction
applies, your texture can still be non-square (for example, 512×256),
unless it uses a compressed format.
Think these are a lot of rules to juggle? Well,
it’s not over yet! Textures also have a maximum allowable size. At the
time of this writing, the first two iPhone generations have a maximum size
of 1024×1024, and third-generation devices have a maximum size of
2048×2048. Again, the only way to be sure is to query the device’s
capabilities at runtime, like so:
GLint maxSize;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxSize);
Don’t groan, but there’s yet another gotcha I
want to mention regarding texture dimensions. By default, OpenGL expects
each row of uncompressed texture data to be aligned on a 4-byte boundary.
This isn’t a concern if your texture is GL_RGBA with
GL_UNSIGNED_BYTE components; in this case, the data is always
properly aligned. However, if your format has a texel size less than 4
bytes, you should take care to ensure each row is padded out to the proper
alignment. Alternatively, you can turn off OpenGL’s alignment restriction
like this:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
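If you’d rather keep the default alignment, you can pad the rows yourself.
Here’s a minimal sketch for a one-byte-per-texel GL_LUMINANCE image; srcWidth,
srcHeight, and src are hypothetical names for your unpadded source data:
// Sketch: pad each row of a 1-byte-per-texel image to a 4-byte boundary
// so it satisfies the default GL_UNPACK_ALIGNMENT of 4.
int paddedRowSize = (srcWidth + 3) & ~3;  // round width up to a multiple of 4
std::vector<unsigned char> padded(paddedRowSize * srcHeight, 0);
for (int y = 0; y < srcHeight; ++y)
    memcpy(&padded[y * paddedRowSize], &src[y * srcWidth], srcWidth);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, srcWidth, srcHeight, 0,
             GL_LUMINANCE, GL_UNSIGNED_BYTE, &padded[0]);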
Also be aware that the PNG decoder in Quartz
may or may not internally align the image data; this can be a concern if
you load images using the CGDataProviderCopyData function
presented in Example 2. It’s more
robust (but less performant) to load in images by drawing to a Quartz
surface, which we’ll go over in the next section.
Before moving on, I’ll forewarn you of yet
another thing to watch out for: the iPhone Simulator doesn’t necessarily
impose the same restrictions on texture size that a physical device would.
Many developers throw up their hands and simply stick to power-of-two
dimensions only; I’ll show you how to make this easier in the next
section.
1. Scaling to POT
One way to ensure that your textures are
power-of-two is to scale them using Quartz. Normally I’d recommend
storing the images in the desired size rather than scaling them at
runtime, but there are situations where runtime scaling is necessary.
For example, you might be creating a texture from an image captured by the
iPhone camera (which we’ll demonstrate in the next section).
For the sake of example, let’s walk through
the process of adding a scale-to-POT feature to your
ResourceManager class. First add a new field to the
TextureDescription structure called
OriginalSize, as shown in Example 1.
Example 1. Interfaces.hpp
struct TextureDescription {
    TextureFormat Format;
    int BitsPerComponent;
    ivec2 Size;
    int MipCount;
    ivec2 OriginalSize;
};
We’ll use this to store the image’s original
size; this is useful, for example, to retrieve the original aspect
ratio. Now let’s go ahead and create the new
ResourceManager::LoadImagePot() method, as shown in
Example 2.
Example 2. ResourceManager::LoadImagePot
TextureDescription LoadImagePot(const string& file)
{
    NSString* basePath = [NSString stringWithUTF8String:file.c_str()];
    NSString* resourcePath = [[NSBundle mainBundle] resourcePath];
    NSString* fullPath = [resourcePath stringByAppendingPathComponent:basePath];
    UIImage* uiImage = [UIImage imageWithContentsOfFile:fullPath];

    TextureDescription description;
    description.OriginalSize.x = CGImageGetWidth(uiImage.CGImage);
    description.OriginalSize.y = CGImageGetHeight(uiImage.CGImage);
    description.Size.x = NextPot(description.OriginalSize.x);
    description.Size.y = NextPot(description.OriginalSize.y);
    description.BitsPerComponent = 8;
    description.Format = TextureFormatRgba;

    // 4 bytes per pixel for RGBA with 8 bits per component.
    int bpp = description.BitsPerComponent / 2;
    int byteCount = description.Size.x * description.Size.y * bpp;
    unsigned char* data = (unsigned char*) calloc(byteCount, 1);

    // Draw the image into a POT-sized Quartz context, scaling it to fill.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo =
        kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big;
    CGContextRef context = CGBitmapContextCreate(data,
        description.Size.x, description.Size.y, description.BitsPerComponent,
        bpp * description.Size.x, colorSpace, bitmapInfo);
    CGColorSpaceRelease(colorSpace);
    CGRect rect = CGRectMake(0, 0, description.Size.x, description.Size.y);
    CGContextDrawImage(context, rect, uiImage.CGImage);
    CGContextRelease(context);

    m_imageData = [NSData dataWithBytesNoCopy:data
                                       length:byteCount
                                 freeWhenDone:YES];
    return description;
}

unsigned int NextPot(unsigned int n)
{
    n--;
    n |= n >> 1;
    n |= n >> 2;
    n |= n >> 4;
    n |= n >> 8;
    n |= n >> 16;
    n++;
    return n;
}
Example 2 is fairly
straightforward; most of it is the same as the
LoadImage method presented in the previous section,
with the exception of the NextPot method. It’s
amazing what can be done with some bit shifting! If the input to the
NextPot method is already a power of two, then it
returns the same value back to the caller; if not, it returns the
next power of two. I won’t bore you with the
derivation of this algorithm, but it’s fun to impress your colleagues
with this trick.
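To see the new method in action, here’s a hypothetical caller; the file name
and the m_resourceManager pointer are stand-ins for whatever your application
uses. For a 320×480 image, NextPot returns 512 for both dimensions, so the
image is stretched to fill a 512×512 texture, and OriginalSize lets you
recover the true proportions when you size the quad you draw it on:
// Hypothetical usage of LoadImagePot; "Background.png" and m_resourceManager
// are stand-in names.
TextureDescription desc = m_resourceManager->LoadImagePot("Background.png");

// The image was stretched to fill the POT texture, so texture coordinates
// still span 0..1. Use OriginalSize to restore the original aspect ratio
// when sizing the geometry.
float aspect = (float) desc.OriginalSize.x / desc.OriginalSize.y;
float quadHeight = 2.0f;                // arbitrary height in world units
float quadWidth = quadHeight * aspect;  // width that preserves the proportions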