
iPhone 3D Programming : Textures and Image Capture - Creating Textures with the Camera

Let’s create an app called CameraTexture that allows the user to snap a photo and wrap it around an ellipsoid (a squashed sphere). The embarrassingly simple user interface consists of a single button for taking a new photo, as shown in Figure 1. We’ll also add some animation by periodically spinning the ellipsoid about the x-axis.
Figure 1. CameraTexture sample


Unlike much of the sample code in this book, the interesting parts here will actually be in Objective-C rather than C++. The application logic is simple enough that we can dispense with the IApplicationEngine interface.

Using ModelViewer as the baseline, start by removing all the ApplicationEngine-related code as follows:

  1. Remove IApplicationEngine and CreateApplicationEngine from Interfaces.hpp.

  2. Remove the ApplicationEngine.ParametricViewer.cpp file from the Xcode project, and send it to the trash.

  3. Remove the m_applicationEngine field from GLView.h.

  4. Remove the call to CreateApplicationEngine from GLView.mm.

  5. Replace the call to m_applicationEngine->Initialize with m_renderingEngine->Initialize().

  6. Remove touchesBegan, touchesEnded, and touchesMoved from GLView.mm.

The code won’t build until we fill it out a bit more. Replace the IRenderingEngine interface in Interfaces.hpp with Example 1, and move the TextureFormat and TextureDescription type definitions to the top of the file.
Example 1. CameraTexture’s IRenderingEngine interface
struct IRenderingEngine {
    virtual void Initialize() = 0;
    virtual void Render(float zScale, float theta, bool waiting) const = 0;
    virtual void LoadCameraTexture(const TextureDescription&, void* data) = 0;
    virtual ~IRenderingEngine() {}
};
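
While you’re in Interfaces.hpp, here’s a reminder of the shape of the two types you’re moving. This is a minimal sketch reconstructed only from the fields this sample touches; the definitions from earlier chapters may include additional formats and members (such as a mip count), so prefer your existing versions:

enum TextureFormat {
    TextureFormatRgb,
    TextureFormatRgba,
    // ...earlier chapters define several more formats.
};

struct TextureDescription {
    TextureFormat Format;    // e.g., TextureFormatRgba for camera data
    int BitsPerComponent;    // 8 in this sample
    ivec2 Size;              // dimensions of the OpenGL texture (256x256 here)
    ivec2 OriginalSize;      // dimensions of the captured photo
};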

We’ll go over the implementation of these methods later. Let’s jump back to the Objective-C side first, since that’s where the interesting stuff is. For starters, we need to modify the GLView class declaration by adopting a couple of new protocols and adding a few data fields; see Example 2.

Example 2. CameraTexture’s GLView.h
#import <Foundation/Foundation.h>
#import <UIKit/UIKit.h>
#import <OpenGLES/EAGL.h>
#import "Interfaces.hpp"

@interface GLView : UIView <UIImagePickerControllerDelegate,
                            UINavigationControllerDelegate> {
@private
    IRenderingEngine* m_renderingEngine;
    IResourceManager* m_resourceManager;
    EAGLContext* m_context;
    UIViewController* m_viewController;
    bool m_paused;
    float m_zScale;
    float m_xRotation;
}

- (void) drawView: (CADisplayLink*) displayLink;

@end

Next, open GLView.mm, and rewrite the drawView method as in Example 3. The code that computes the time step is the same as in previous examples; perhaps more interesting are the mathematical shenanigans used to oscillate between two types of useless and silly animation: “spinning” and “pulsing.”

Example 3. CameraTexture’s drawView method
- (void) drawView: (CADisplayLink*) displayLink
{
    if (m_paused)
        return;

    if (displayLink != nil) {
        float t = displayLink.timestamp / 3;
        int integer = (int) t;
        float fraction = t - integer;
        if (integer % 2) {
            m_xRotation = 360 * fraction;
            m_zScale = 0.5;
        } else {
            m_xRotation = 0;
            m_zScale = 0.5 + sin(fraction * 6 * M_PI) * 0.3;
        }
    }

    m_renderingEngine->Render(m_zScale, m_xRotation, false);
    [m_context presentRenderbuffer:GL_RENDERBUFFER];
}

While we’re still in GLView.mm, let’s go ahead and write the touch handler. Because of the embarrassingly simple UI, we need to handle only a single touch event: touchesEnded, as shown in Example 4. Note that the first thing it does is check whether the touch location lies within the bounds of the button’s rectangle; if not, it returns early.

Example 4. CameraTexture’s touchesEnded method
- (void) touchesEnded: (NSSet*) touches withEvent: (UIEvent*) event
{
    UITouch* touch = [touches anyObject];
    CGPoint location = [touch locationInView: self];

    // Return early if touched outside the button's area.
    if (location.y < 395 || location.y > 450 ||
        location.x < 75 || location.x > 245)
        return;

    // Instance the image picker and set up its configuration.
    UIImagePickerController* imagePicker =
        [[UIImagePickerController alloc] init];
    imagePicker.delegate = self;
    imagePicker.navigationBarHidden = YES;
    imagePicker.toolbarHidden = YES;

    // Enable camera mode if supported, otherwise fall back to the default.
    UIImagePickerControllerSourceType source =
        UIImagePickerControllerSourceTypeCamera;
    if ([UIImagePickerController isSourceTypeAvailable:source])
        imagePicker.sourceType = source;

    // Instance the view controller if it doesn't already exist.
    if (m_viewController == 0) {
        m_viewController = [[UIViewController alloc] init];
        m_viewController.view = self;
    }

    // Turn off the OpenGL rendering cycle and present the image picker.
    m_paused = true;
    [m_viewController presentModalViewController:imagePicker animated:NO];
}


Warning:

When developing with UIKit, the usual convention is that the view controller owns the view, but in this case, the view owns the view controller. This is acceptable in our situation, since our application is mostly rendered with OpenGL, and we want to achieve the desired functionality in the simplest possible way. I’m hoping that Apple will release a lower-level camera API in future versions of the SDK, so that we don’t need to bother with view controllers.


Perhaps the most interesting piece in Example 4 is the code that checks whether the camera is supported; if so, it sets the camera as the picker’s source type:

UIImagePickerControllerSourceType source =
    UIImagePickerControllerSourceTypeCamera;
if ([UIImagePickerController isSourceTypeAvailable:source])
    imagePicker.sourceType = source;

I recommend following this pattern even if you know a priori that your application will run only on devices with cameras. The fallback path provides a convenient testing platform on the iPhone Simulator; by default, the image picker simply opens a file picker with image thumbnails.
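
Incidentally, the picker’s default source type is the photo library, which is exactly what the fallback path relies on. If you’d rather spell the fallback out, a variation like the following behaves identically (this is just a sketch; the sample itself simply leaves the default in place):

if ([UIImagePickerController isSourceTypeAvailable:
     UIImagePickerControllerSourceTypeCamera]) {
    // The device has a camera; use it.
    imagePicker.sourceType = UIImagePickerControllerSourceTypeCamera;
} else {
    // Simulator and camera-less devices: browse the photo library instead.
    imagePicker.sourceType = UIImagePickerControllerSourceTypePhotoLibrary;
}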

Next we’ll add a couple of new methods to GLView.mm for implementing the UIImagePickerControllerDelegate protocol, as shown in Example 5. Depending on the megapixel resolution of your camera, the captured image can be quite large, much larger than what we need for an OpenGL texture. So, the first thing we do is scale the image down to 256×256. Since this destroys the aspect ratio, we’ll store the original image’s dimensions in the TextureDescription structure just in case. Also note the three CTM calls before CGContextDrawImage: Quartz rotates about the origin, so to correct for the camera’s orientation we move the pivot to the center of the canvas, rotate by theta, and move it back before drawing.

Example 5. imagePickerControllerDidCancel and didFinishPickingMediaWithInfo
- (void) imagePickerControllerDidCancel:(UIImagePickerController*) picker
{
    [m_viewController dismissModalViewControllerAnimated:NO];
    m_paused = false;
    [picker release];
}

- (void) imagePickerController:(UIImagePickerController*) picker
        didFinishPickingMediaWithInfo:(NSDictionary*) info
{
    UIImage* image =
        [info objectForKey:UIImagePickerControllerOriginalImage];

    float theta = 0;
    switch (image.imageOrientation) {
        case UIImageOrientationDown: theta = M_PI; break;
        case UIImageOrientationLeft: theta = M_PI / 2; break;
        case UIImageOrientationRight: theta = -M_PI / 2; break;
    }

    int bpp = 4;
    ivec2 size(256, 256);
    int byteCount = size.x * size.y * bpp;
    unsigned char* data = (unsigned char*) calloc(byteCount, 1);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo =
        kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big;
    CGContextRef context = CGBitmapContextCreate(data,
                                                 size.x,
                                                 size.y,
                                                 8,
                                                 bpp * size.x,
                                                 colorSpace,
                                                 bitmapInfo);
    CGColorSpaceRelease(colorSpace);
    CGRect rect = CGRectMake(0, 0, size.x, size.y);
    CGContextTranslateCTM(context, size.x / 2, size.y / 2);
    CGContextRotateCTM(context, theta);
    CGContextTranslateCTM(context, -size.x / 2, -size.y / 2);
    CGContextDrawImage(context, rect, image.CGImage);

    TextureDescription description;
    description.Size = size;
    description.OriginalSize.x = CGImageGetWidth(image.CGImage);
    description.OriginalSize.y = CGImageGetHeight(image.CGImage);
    description.Format = TextureFormatRgba;
    description.BitsPerComponent = 8;

    m_renderingEngine->LoadCameraTexture(description, data);
    m_renderingEngine->Render(m_zScale, m_xRotation, true);
    [m_context presentRenderbuffer:GL_RENDERBUFFER];

    CGContextRelease(context);
    free(data);

    [m_viewController dismissModalViewControllerAnimated:NO];
    m_paused = false;
    [picker release];
}

@end

1. CameraTexture: Rendering Engine Implementation

Crack your OpenGL ES knuckles; it’s time to implement the rendering engine using ES 1.1. Go ahead and remove the contents of RenderingEngine.ES1.cpp, and add the new class declaration and Initialize method, shown in Example 6.

Example 6. RenderingEngine class declaration and initialization
#include <OpenGLES/ES1/gl.h>
#include <OpenGLES/ES1/glext.h>
#include <iostream>
#include "Interfaces.hpp"
#include "Matrix.hpp"
#include "ParametricEquations.hpp"

using namespace std;

struct Drawable {
    GLuint VertexBuffer;
    GLuint IndexBuffer;
    int IndexCount;
};

namespace ES1 {

class RenderingEngine : public IRenderingEngine {
public:
    RenderingEngine(IResourceManager* resourceManager);
    void Initialize();
    void Render(float zScale, float theta, bool waiting) const;
    void LoadCameraTexture(const TextureDescription& description,
                           void* data);
private:
    GLuint CreateTexture(const string& file);
    Drawable CreateDrawable(const ParametricSurface& surface);
    void RenderDrawable(const Drawable& drawable) const;
    void UploadImage(const TextureDescription& description,
                     void* data = 0);
    Drawable m_sphere;
    Drawable m_button;
    GLuint m_colorRenderbuffer;
    GLuint m_depthRenderbuffer;
    GLuint m_cameraTexture;
    GLuint m_waitTexture;
    GLuint m_actionTexture;
    IResourceManager* m_resourceManager;
};

IRenderingEngine* CreateRenderingEngine(IResourceManager* resourceManager)
{
    return new RenderingEngine(resourceManager);
}

RenderingEngine::RenderingEngine(IResourceManager* resourceManager)
{
    m_resourceManager = resourceManager;
    glGenRenderbuffersOES(1, &m_colorRenderbuffer);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, m_colorRenderbuffer);
}

void RenderingEngine::Initialize()
{
    // Create vertex buffer objects.
    m_sphere = CreateDrawable(Sphere(2.5));
    m_button = CreateDrawable(Quad(4, 1));

    // Load up some textures.
    m_cameraTexture = CreateTexture("Tarsier.png");
    m_waitTexture = CreateTexture("PleaseWait.png");
    m_actionTexture = CreateTexture("TakePicture.png");

    // Extract width and height from the color buffer.
    int width, height;
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES,
                                    GL_RENDERBUFFER_WIDTH_OES, &width);
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES,
                                    GL_RENDERBUFFER_HEIGHT_OES, &height);
    glViewport(0, 0, width, height);

    // Create a depth buffer that has the same size as the color buffer.
    glGenRenderbuffersOES(1, &m_depthRenderbuffer);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, m_depthRenderbuffer);
    glRenderbufferStorageOES(GL_RENDERBUFFER_OES,
                             GL_DEPTH_COMPONENT16_OES,
                             width, height);

    // Create the framebuffer object.
    GLuint framebuffer;
    glGenFramebuffersOES(1, &framebuffer);
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, framebuffer);
    glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES,
                                 GL_COLOR_ATTACHMENT0_OES,
                                 GL_RENDERBUFFER_OES,
                                 m_colorRenderbuffer);
    glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES,
                                 GL_DEPTH_ATTACHMENT_OES,
                                 GL_RENDERBUFFER_OES,
                                 m_depthRenderbuffer);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, m_colorRenderbuffer);

    // Set up various GL state.
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_NORMAL_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glEnable(GL_LIGHT0);
    glEnable(GL_TEXTURE_2D);
    glEnable(GL_DEPTH_TEST);

    // Set up the material properties.
    vec4 diffuse(1, 1, 1, 1);
    glMaterialfv(GL_FRONT_AND_BACK, GL_DIFFUSE, diffuse.Pointer());

    // Set the light position.
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    vec4 lightPosition(0.25, 0.25, 1, 0);
    glLightfv(GL_LIGHT0, GL_POSITION, lightPosition.Pointer());

    // Set the model-view transform.
    mat4 modelview = mat4::Translate(0, 0, -8);
    glLoadMatrixf(modelview.Pointer());

    // Set the projection transform.
    float h = 4.0f * height / width;
    mat4 projection = mat4::Frustum(-2, 2, -h / 2, h / 2, 5, 10);
    glMatrixMode(GL_PROJECTION);
    glLoadMatrixf(projection.Pointer());
    glMatrixMode(GL_MODELVIEW);
}

} // end namespace ES1

There are no new concepts in Example 6; at a high level, the Initialize method performs the following tasks:

  1. Creates two vertex buffers using the parametric surface helper: a quad for the button and a sphere for the ellipsoid.

  2. Creates three textures: the initial ellipsoid texture, the “Please Wait” text, and the “Take Picture” button text.

  3. Performs some standard initialization work, such as creating the FBO and setting up the transformation matrices.

Next, let’s implement the two public methods, Render and LoadCameraTexture, as shown in Example 7.

Example 7. Render and LoadCameraTexture
void RenderingEngine::Render(float zScale, float theta, bool waiting) const
{
    glClearColor(0.5f, 0.5f, 0.5f, 1);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glPushMatrix();

    // Draw the button.
    glTranslatef(0, -4, 0);
    glBindTexture(GL_TEXTURE_2D, waiting ? m_waitTexture : m_actionTexture);
    RenderDrawable(m_button);

    // Draw the sphere.
    glBindTexture(GL_TEXTURE_2D, m_cameraTexture);
    glTranslatef(0, 4.75, 0);
    glRotatef(theta, 1, 0, 0);
    glScalef(1, 1, zScale);
    glEnable(GL_LIGHTING);
    RenderDrawable(m_sphere);
    glDisable(GL_LIGHTING);

    glPopMatrix();
}

void RenderingEngine::LoadCameraTexture(const TextureDescription& desc,
                                        void* data)
{
    glBindTexture(GL_TEXTURE_2D, m_cameraTexture);
    UploadImage(desc, data);
}

That was simple! Next we’ll implement the four private methods (Example 8).

Example 8. CreateTexture, CreateDrawable, RenderDrawable, UploadImage
GLuint RenderingEngine::CreateTexture(const string& file)
{
    GLuint name;
    glGenTextures(1, &name);
    glBindTexture(GL_TEXTURE_2D, name);
    glTexParameteri(GL_TEXTURE_2D,
                    GL_TEXTURE_MIN_FILTER,
                    GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D,
                    GL_TEXTURE_MAG_FILTER,
                    GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
    UploadImage(m_resourceManager->LoadImagePot(file));
    return name;
}

Drawable RenderingEngine::CreateDrawable(const ParametricSurface& surface)
{
    // Create the VBO for the vertices.
    vector<float> vertices;
    unsigned char vertexFlags = VertexFlagsNormals | VertexFlagsTexCoords;
    surface.GenerateVertices(vertices, vertexFlags);
    GLuint vertexBuffer;
    glGenBuffers(1, &vertexBuffer);
    glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
    glBufferData(GL_ARRAY_BUFFER,
                 vertices.size() * sizeof(vertices[0]),
                 &vertices[0],
                 GL_STATIC_DRAW);

    // Create a VBO for the indices.
    int indexCount = surface.GetTriangleIndexCount();
    GLuint indexBuffer;
    vector<GLushort> indices(indexCount);
    surface.GenerateTriangleIndices(indices);
    glGenBuffers(1, &indexBuffer);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuffer);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER,
                 indexCount * sizeof(GLushort),
                 &indices[0],
                 GL_STATIC_DRAW);

    // Fill in a descriptive struct and return it.
    Drawable drawable;
    drawable.IndexBuffer = indexBuffer;
    drawable.VertexBuffer = vertexBuffer;
    drawable.IndexCount = indexCount;
    return drawable;
}

void RenderingEngine::RenderDrawable(const Drawable& drawable) const
{
    int stride = sizeof(vec3) + sizeof(vec3) + sizeof(vec2);
    const GLvoid* normalOffset = (const GLvoid*) sizeof(vec3);
    const GLvoid* texCoordOffset = (const GLvoid*) (2 * sizeof(vec3));
    glBindBuffer(GL_ARRAY_BUFFER, drawable.VertexBuffer);
    glVertexPointer(3, GL_FLOAT, stride, 0);
    glNormalPointer(GL_FLOAT, stride, normalOffset);
    glTexCoordPointer(2, GL_FLOAT, stride, texCoordOffset);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, drawable.IndexBuffer);
    glDrawElements(GL_TRIANGLES, drawable.IndexCount,
                   GL_UNSIGNED_SHORT, 0);
}

void RenderingEngine::UploadImage(const TextureDescription& description,
                                  void* data)
{
    GLenum format;
    switch (description.Format) {
        case TextureFormatRgb: format = GL_RGB; break;
        case TextureFormatRgba: format = GL_RGBA; break;
    }

    GLenum type = GL_UNSIGNED_BYTE;
    ivec2 size = description.Size;

    if (data == 0) {
        data = m_resourceManager->GetImageData();
        glTexImage2D(GL_TEXTURE_2D, 0, format, size.x, size.y,
                     0, format, type, data);
        m_resourceManager->UnloadImage();
    } else {
        glTexImage2D(GL_TEXTURE_2D, 0, format, size.x, size.y,
                     0, format, type, data);
    }
}


Much of Example 8 is fairly straightforward. The UploadImage method is used both for camera data (where the raw data is passed in) and for image files (where the raw data is obtained from the resource manager).

We won’t bother with an ES 2.0 backend in this case, so you’ll want to turn on the ForceES1 flag in GLView.mm, comment out the call to ES2::CreateRenderingEngine, and remove RenderingEngine.ES2.cpp from the project.
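
For reference, the change amounts to something like the following in GLView.mm’s initializer. This is a sketch against the ModelViewer baseline from earlier chapters, so your variable names and surrounding code may differ:

const bool ForceES1 = true;  // CameraTexture ships no ES 2.0 backend.
...
m_renderingEngine = ES1::CreateRenderingEngine(m_resourceManager);
// Removed along with RenderingEngine.ES2.cpp:
// m_renderingEngine = ES2::CreateRenderingEngine(m_resourceManager);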

This completes the CameraTexture sample, another fun but useless iPhone program!
