
Programming with DirectX : Shading and Surfaces - Additional Texturing Topics

A tremendous number of texture-based techniques can be performed in computer graphics. In this section we will examine a few very common techniques to give you an idea of what else can be done with textures. Elsewhere in this book we look at additional techniques such as bump mapping, shadow mapping, and so forth.

Manually Loading and Generating Textures

There might come a time when you do not want to use the Direct3D utility functions to load your textures but instead want to do so manually. Loading your own data into a texture object is fairly straightforward, and in this section you will see how to do it by calling the CreateTexture2D() function to create the 2D texture object and the CreateShaderResourceView() function to create the shader resource view. This discussion will be kept brief, since manually loading a texture essentially requires two function calls.


So far, we have loaded images from files using D3DX utility functions. If you wanted to manually place color data into an ID3D10Texture2D object, you would need a texture description of type D3D10_TEXTURE2D_DESC and a subresource description of type D3D10_SUBRESOURCE_DATA. Once you’ve created the ID3D10Texture2D object, you can create the shader resource view and use the texture as normal. The D3D10_TEXTURE2D_DESC object describes the characteristics of the texture being created, such as its width, height, and color format. An example of creating and filling in such an object is as follows.

D3D10_TEXTURE2D_DESC textureDesc;

textureDesc.ArraySize = 1;
textureDesc.BindFlags = D3D10_BIND_SHADER_RESOURCE;
textureDesc.Usage = D3D10_USAGE_DYNAMIC;
textureDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
textureDesc.Width = image_width;
textureDesc.Height = image_height;
textureDesc.MipLevels = 1;
textureDesc.SampleDesc.Count = 1;
textureDesc.SampleDesc.Quality = 0;
textureDesc.CPUAccessFlags = D3D10_CPU_ACCESS_WRITE;
textureDesc.MiscFlags = 0;

The width and height in the texture description are the image’s resolution. MipLevels is the number of mip map levels contained in the data. ArraySize is the number of textures in the texture array created by the CreateTexture2D() function call. Format is the color format of the image. SampleDesc describes the image’s multi-sampling. Usage describes how the texture will be read from and written to. BindFlags describe how the resource will be bound to the rendering pipeline. CPUAccessFlags determines whether the CPU can read from or write to the texture. MiscFlags holds any of the miscellaneous resource flags.

The CPUAccessFlags can be D3D10_CPU_ACCESS_READ, D3D10_CPU_ACCESS_WRITE, or both combined using the logical OR operator.

The Usage field can be D3D10_USAGE_DEFAULT (the resource allows GPU read and write operations), D3D10_USAGE_IMMUTABLE (the resource can only be read by the GPU), D3D10_USAGE_DYNAMIC (the resource can be read by the GPU and written to by the CPU), or D3D10_USAGE_STAGING (the resource supports data transfer from the GPU to the CPU). Unlike the bind and CPU access flags, usage values cannot be combined with the logical OR operator; a resource has exactly one usage, and it must not conflict with the other fields of the description. For example, you cannot use D3D10_USAGE_IMMUTABLE, which allows only GPU reads, with CPU access flags that request reading or writing by the CPU.
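As a quick illustration of how these fields fit together, the following sketch (not from the original text; the variable names are illustrative only) describes a staging texture that the CPU can both read and write. Note that Usage takes a single value, while the CPU access flags may be OR’d together.

D3D10_TEXTURE2D_DESC stagingDesc;

stagingDesc.Width = image_width;
stagingDesc.Height = image_height;
stagingDesc.MipLevels = 1;
stagingDesc.ArraySize = 1;
stagingDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
stagingDesc.SampleDesc.Count = 1;
stagingDesc.SampleDesc.Quality = 0;

// Staging resources use a single usage value and cannot be bound to the pipeline.
stagingDesc.Usage = D3D10_USAGE_STAGING;
stagingDesc.BindFlags = 0;

// CPU access flags, unlike the usage value, can be combined with OR.
stagingDesc.CPUAccessFlags = D3D10_CPU_ACCESS_READ | D3D10_CPU_ACCESS_WRITE;
stagingDesc.MiscFlags = 0;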

The BindFlags can be any of the following:

  • D3D10_BIND_VERTEX_BUFFER if we are creating a vertex buffer

  • D3D10_BIND_INDEX_BUFFER if it’s an index buffer

  • D3D10_BIND_CONSTANT_BUFFER for constant buffers

  • D3D10_BIND_SHADER_RESOURCE for shader resources

  • D3D10_BIND_STREAM_OUTPUT if the buffer is to be used for stream output

  • D3D10_BIND_RENDER_TARGET for rendering targets

  • D3D10_BIND_DEPTH_STENCIL for binding a texture as a depth and stencil buffer output

With the texture description filled in, the next object you need is the subresource data description. In the D3D10_SUBRESOURCE_DATA structure you set the pSysMem member, which points to the actual image data; the SysMemPitch member, which is the number of bytes in one row of image data (the width multiplied by the number of bytes per pixel, where RGB would be 3 and RGBA would be 4); and the SysMemSlicePitch member, which is the number of bytes in one full 2D slice (the row pitch multiplied by the image height) and is only used for 3D textures. An example of creating and filling in a D3D10_SUBRESOURCE_DATA object and calling CreateTexture2D() to create a 2D texture is shown as follows. image_data is assumed to be an array of bytes, and image_width is the width of the image in pixels.

D3D10_SUBRESOURCE_DATA resData;
resData.pSysMem = (void*)image_data;
resData.SysMemPitch = image_width * 4;
resData.SysMemSlicePitch = 0;

ID3D10Texture2D *texture = NULL;

HRESULT hr = g_d3dDevice->CreateTexture2D(&textureDesc, &resData, &texture);

Keep in mind that the number of bytes per pixel depends on the format you specified in the texture description. With the ID3D10Texture2D object created, you can then create a shader resource view by calling CreateShaderResourceView(). Once you have the shader resource view, you can use the texture as normal. An example of creating a shader resource view is shown as follows, where Format is the texture’s format, MipLevels is the total number of mip map levels, MostDetailedMip is the mip map level with the largest resolution, and ViewDimension describes the resource type:

D3D10_SHADER_RESOURCE_VIEW_DESC svDesc;

svDesc.Format = textureDesc.Format;
svDesc.Texture2D.MipLevels = 1;
svDesc.Texture2D.MostDetailedMip = 0;
svDesc.ViewDimension = D3D10_SRV_DIMENSION_TEXTURE2D;

ID3D10ShaderResourceView *shaderResourceView = NULL;
g_d3dDevice->CreateShaderResourceView(texture, &svDesc,
&shaderResourceView);
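
With the view created, the texture can be sampled like any other. The following one-line sketch (not part of the original listing) binds the view directly to the first pixel shader slot; if you are using the effects framework, you would typically call ID3D10EffectShaderResourceVariable::SetResource() on the appropriate effect variable instead.

// Bind the manually created texture to pixel shader slot 0 (illustrative sketch).
if(shaderResourceView != NULL)
    g_d3dDevice->PSSetShaderResources(0, 1, &shaderResourceView);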

Compressed Textures

Compression is a term used to refer to algorithms that take source data and represent it using fewer bytes. The two main types of compression are lossless and lossy.

Lossless compression does not affect the original quality. Data that has been losslessly compressed decompresses to an exact copy of the original, and compressing it again does not degrade the quality of the data.

For a simple example of lossless compression, let’s say you have an image with 100 pixels made up of only 10 unique colors. At 3 bytes per pixel, the original image is 300 bytes. If you place those 10 colors in an array of RGB values, you need 30 bytes to store the unique colors. If you then used a 1-byte array index for each pixel, you would need 130 bytes to store the image (100 bytes for the pixel indexes and 30 bytes for the array of unique colors), a savings of 170 bytes. If you used 4 bits per index instead of a full byte (4 bits can store values from 0 to 15, which is more than enough for a 0 to 9 index), it would take 30 bytes for the unique-colors array and 50 bytes for the indexes. This means you could take a 300-byte image and reduce it to 80 bytes using lossless compression.
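
The following is a minimal C++ sketch of the palette idea just described; it stores one full byte per index for clarity (the 4-bit packing is omitted), and the function and variable names are illustrative only, not part of any Direct3D API.

#include <vector>

struct RGB { unsigned char r, g, b; };

// Build a palette of unique colors plus one index per pixel (lossless).
// With 10 unique colors and 100 pixels this is 30 + 100 = 130 bytes,
// versus 300 bytes for the raw 3-bytes-per-pixel image.
// Assumes the image contains fewer than 256 unique colors.
void PaletteCompress(const std::vector<RGB> &image,
                     std::vector<RGB> &palette,
                     std::vector<unsigned char> &indices)
{
    palette.clear();
    indices.clear();

    for(size_t i = 0; i < image.size(); i++)
    {
        size_t p = 0;

        for(; p < palette.size(); p++)
        {
            if(palette[p].r == image[i].r &&
               palette[p].g == image[i].g &&
               palette[p].b == image[i].b)
                break;
        }

        if(p == palette.size())
            palette.push_back(image[i]);   // new unique color

        indices.push_back((unsigned char)p);
    }
}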

You can perfectly re-create the original image using this data, which means quality wasn’t reduced. If you used fewer bits for the indexes, there would not be enough values to represent the entire index range. For example, with 3 bits (0 to 7) you would have to drop some colors, since the index can no longer address every element in the array. You could then clamp the indexes so that the highest index that still fits is used for all pixels that originally referenced colors above it. For example, you could drop the last two colors, and any pixels that used them would use the eighth color (index 7) instead. This is lossy compression: the data is reduced in a way that cannot be decompressed to perfectly match the original source, since some values were dropped and replaced. The replaced data can change the image enough that under close examination it is clearly not accurate.
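
Continuing the sketch above, a lossy variant might clamp any index that can no longer be represented; again, the helper name is purely illustrative.

#include <vector>

// Clamp every palette index above maxIndex down to maxIndex. Pixels that
// referenced the dropped colors now point at the last remaining color, so
// the original image can no longer be reconstructed exactly (lossy).
void ClampIndices(std::vector<unsigned char> &indices, unsigned char maxIndex)
{
    for(size_t i = 0; i < indices.size(); i++)
    {
        if(indices[i] > maxIndex)
            indices[i] = maxIndex;
    }
}

// Example: with 3-bit indexes only values 0 through 7 survive.
// ClampIndices(indices, 7);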

Lossy compression compresses data by altering it, so the compressed data does not retain the original quality. It sacrifices quality and accuracy for file size. If you recompress data that has already been lossy compressed, the quality suffers even more, because the data being compressed is no longer the original but data whose quality has already been reduced. Therefore, if you need to recompress data, it is best to compress the original data rather than data that has already been lossy compressed.

Take a JPEG image, for example, which uses lossy compression. The higher the compression ratio, the smaller the file size and the worse the image quality. This is illustrated in Figure 1, which shows the original image, a compressed version, and an image recompressed from an already compressed version.

Figure 1. Original (left), compressed (middle), and re-compressed from an already compressed image (right).


Compression is an extremely important topic in video game development. For example, if you can compress all textures by a factor of four to one, making each texture 25% of its original size, you reduce the disk space and memory requirements for your game: less data moves through the rendering pipeline, loading times drop because there is less information to read, texture streaming technology can work faster, and so forth.

Looking at the issue strictly from a graphics rendering perspective, the reduced data size also means developers can use four times as many textures in a game level (assuming all textures are the same size and each is compressed to 25% of its original size). It could also mean there is room for textures with four times as many pixels, since each compressed texture takes only one-fourth of its original space.

All major graphics hardware from the past few generations supports compressed textures. To use compressed textures in Direct3D, you do not need to enable anything or write any special code; you only need to save your images in a compressed file format. When Direct3D 10 loads compressed images in a format the hardware supports, they are used directly by the graphics hardware, which handles decompressing the data and returning the correct color value when a compressed texture is sampled. To create compressed textures you can save your textures from a tool such as Adobe Photoshop, or you can use the DirectX Texture tool that comes with the DirectX SDK.

Modern graphics hardware supports the S3 Texture Compression (S3TC) algorithms. There are five versions of these algorithms, DXT1 through DXT5. To save your images in one of these formats, save them as .DDS files using Photoshop, the DirectX Texture tool, or a similar tool and specify one of these format types.

The DXT1 format uses four bits for each pixel in the image and is the format that can give the largest compression and the worst quality, depending on the image. The base DXT1 format does not store an alpha channel, although an extended version supports 1-bit alpha. DXT1 offers six-to-one compression for 24-bit images without an alpha channel and eight-to-one for 32-bit images with the alpha channel.

The DXT2 and DXT3 formats use an additional four bits per pixel for an alpha channel. The DXT2 format’s alpha is premultiplied with the color, while the DXT3 format has an explicit alpha that is not premultiplied. DXT2 and DXT3 offer a four-to-one compression ratio.

DXT4 uses an interpolated alpha channel that is premultiplied with the color. DXT5 is similar to DXT4 but without the premultiplication. DXT4 and DXT5 also offer a four-to-one compression ratio.

When looking at compressed textures, keep in mind that there is nothing you need to do with Direct3D 10 to use them other than to save your textures using DXT1 through DXT5. Some images will look better with some formats than others, and usually you can experiment when saving textures to see which formats give you the best compression while retaining an acceptable level of quality. The 3Dc and S3TC (DXT) compression algorithms use lossy compression.
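
To get a rough feel for what these ratios mean in bytes, the following sketch (not from the original text) estimates the size of a single DXT-compressed mip level. S3TC works on 4 × 4 pixel blocks, with DXT1 storing 8 bytes per block and DXT2 through DXT5 storing 16 bytes per block.

// Estimate the compressed size of one mip level of a DXT texture.
unsigned int DXTSizeInBytes(unsigned int width, unsigned int height, bool isDXT1)
{
    unsigned int blocksWide = (width + 3) / 4;
    unsigned int blocksHigh = (height + 3) / 4;
    unsigned int bytesPerBlock = isDXT1 ? 8 : 16;

    return blocksWide * blocksHigh * bytesPerBlock;
}

// Example: a 1024 x 1024 DXT5 texture is 256 * 256 * 16 = 1,048,576 bytes (1 MB),
// versus 4 MB for the same image stored as uncompressed 32-bit RGBA (4:1).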

Multi-Sampling

Multi-sampling is any technique in which a texture is sampled more than once so that colors can be averaged, decreasing blocky artifacts in images. You can enable multi-sampling in the texture description so that when a texture is sampled in the pixel shader, instead of fetching a single color, the hardware returns an average color value based on the surrounding pixels. For example, if the hardware samples the pixel above, below, and to each side of the requested pixel, the color that is fetched is not the center (original) pixel alone but the average of the center pixel and its neighbors.
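
To make the averaging concrete, here is a small CPU-side sketch (purely illustrative, not Direct3D code) that averages a pixel with its four direct neighbors in the cross pattern described above; on real hardware this averaging happens when the texture is sampled.

struct Color { float r, g, b; };

// Average the pixel at (x, y) with its four direct neighbors, skipping
// samples that fall outside the image.
Color AverageCross(const Color *image, int width, int height, int x, int y)
{
    const int offsets[5][2] = { {0, 0}, {-1, 0}, {1, 0}, {0, -1}, {0, 1} };

    Color sum = { 0.0f, 0.0f, 0.0f };
    int count = 0;

    for(int i = 0; i < 5; i++)
    {
        int sx = x + offsets[i][0];
        int sy = y + offsets[i][1];

        if(sx < 0 || sy < 0 || sx >= width || sy >= height)
            continue;

        const Color &c = image[sy * width + sx];
        sum.r += c.r; sum.g += c.g; sum.b += c.b;
        count++;
    }

    sum.r /= count; sum.g /= count; sum.b /= count;
    return sum;
}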

Multi-sampling aims to smooth out color differences of nearby pixels. For example, in Figure 2 the color from one pixel to the next can change quite sharply. By averaging the neighboring pixels, you can blur these sharp edges so that the blocky staircase rendering artifacts do not show up or at least are not as noticeable. This is simply a blurring operation and in some cases can help improve the quality of textured surfaces.

Figure 2. Original (top) center pixel versus averaged (bottom) center pixel.


Multi-sampling is a hardware-accelerated technique, and to use it you simply enable it when creating textures in the texture description. Texture filtering such as bilinear, trilinear, and so on also averages pixels to create this effect of smoothing out sharp variations. You can also multi-sample the rendering targets so that the entire rendered scene is smoothed out a little.
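
If you want to request multi-sampling when filling in a texture description, the following hedged sketch queries the device for supported quality levels with CheckMultisampleQualityLevels() before asking for four samples per pixel; keep in mind that multi-sampled 2D textures in Direct3D 10 use a single mip level and are typically bound as render targets rather than initialized with CPU data.

// Ask the device how many quality levels it supports for 4x multi-sampling
// of this format, then request it in the texture description (sketch only).
UINT qualityLevels = 0;

g_d3dDevice->CheckMultisampleQualityLevels(DXGI_FORMAT_R8G8B8A8_UNORM,
                                           4, &qualityLevels);

if(qualityLevels > 0)
{
    textureDesc.SampleDesc.Count = 4;
    textureDesc.SampleDesc.Quality = qualityLevels - 1;

    // Multi-sampled textures use a single mip level, default usage, and are
    // usually bound as render targets rather than filled with CPU data.
    textureDesc.MipLevels = 1;
    textureDesc.Usage = D3D10_USAGE_DEFAULT;
    textureDesc.BindFlags = D3D10_BIND_RENDER_TARGET;
    textureDesc.CPUAccessFlags = 0;
}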

Adaptive super-sampling is essentially multi-sampling that only occurs on pixels that require it, rather than all pixels in the image.
