Shader Series Primer: Fundamentals of the Programmable Pipeline in XNA Game Studio Express
Level: Intermediate
Area: Graphics Programming
Summary
This document is an introduction to the Shader Series, a set of samples, tutorials, and articles. This sequential set of educational documentation and sample code is intended to help an intermediate 3D developer begin exploring the programmable graphics pipeline.
Audience
This document will be most useful for developers with some previous experience with either the fixed-function pipeline or the BasicEffect type in the XNA Framework. This document assumes no previous experience writing shaders or effects.
Background
History
In 1999, Microsoft introduced an important new feature in DirectX 7 that came to be known as hardware transformation and lighting, or hardware T&L. The new technology moved the expensive vertex transformation and lighting calculations from the CPU to the GPU. The DirectX 7 API exposed an elaborate set of render states that allowed a fixed number of lights, textures, and other settings to be applied to a given draw call.
While hardware transformation and lighting had an incredible impact on the quality of 3D graphics on the personal computer, it came with a significant drawback: the calculations used to light and display the world were hard-wired on the GPU. As a result, games of that era began to look very similar, since they could differentiate their lighting models only by applying different combinations of states. The era of universal 3D acceleration had raised the overall quality bar, but at the cost of the flexibility afforded by software rendering.
In the field of offline software image rendering for movie special effects, small programs or functions called shaders were increasingly being used to define custom interactions between a variety of materials and lighting conditions. Real-time applications were soon to follow; in 2002, the first consumer-level programmable GPUs became available. Game developers were eager to make use of the new functionality. Most early consumers of programmable GPUs associated shaders with the classic rippling water effect, which at the time was considered the height of real-time graphical “eye candy.”
DirectX 8.0 was the first Microsoft graphics API to support programmable shaders, though initially, all shader programs had to be written in assembly code. As shader hardware increased in complexity, so did the programs. High-level shading languages were introduced to make shader development manageable. Today, the Microsoft High-Level Shader Language (HLSL) is the standard language used by all Microsoft 3D APIs, including the XNA Framework. The language compiles directly to the bytecode used to execute shaders on GPUs.
Shaders have evolved greatly over the years. Generations of shader hardware are usually categorized by the DirectX shader models they support. Early shader models placed extreme limits on the kinds and number of instructions that could be run on each vertex or pixel. Later models added new instructions, allowed far more instructions per shader program, and enabled looping and branching. Many XNA Windows materials are written with a minimum bar of Shader Model 2.0, while the Xbox 360 platform supports its own version of Shader Model 3.0. These models have the flexibility to support a huge number of rendering and optimization scenarios.
High-Level Shader Language (HLSL)
HLSL is the programming language created by Microsoft for writing shaders. It is similar to many C-style languages. The DirectX SDK is a great place to get more information about HLSL. A very complete set of documentation on HLSL can be found here:
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/directx9_c/HLSL_Shaders.asp.
A full description of HLSL is outside the scope of this document. Familiarity with HLSL is not required for the rest of this document, but the link above is a great starting point for further reading.
Often the best way to learn something is to jump straight in. All of the upcoming example shaders are labeled and commented to make comprehension as straightforward as possible. This document assumes no previous experience with HLSL and attempts to clarify new HLSL syntax as it comes up.
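To give a first taste of the language before the full examples, the following fragment illustrates HLSL's C-style grammar. The variable and function names here are purely illustrative and are not taken from the series samples; the fragment simply shows familiar types such as float3, ordinary function syntax, and built-in intrinsics such as dot() and saturate().

// A global parameter that the application can set before drawing.
float3 LightColor = float3(1, 1, 1);

// An ordinary function, written much as it would be in C.
float3 ComputeDiffuse(float3 normal, float3 lightDirection, float3 materialColor)
{
    // Standard N dot L diffuse term, clamped to the [0, 1] range by saturate().
    float intensity = saturate(dot(normal, -lightDirection));
    return materialColor * LightColor * intensity;
}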
Other Shader Languages
There are plenty of other high-level shader languages available, but HLSL is the only language supported intrinsically by XNA and DirectX. Other common languages are GLSL (the OpenGL Shading Language) and NVIDIA's Cg language. When non-XNA materials reference these languages, keep in mind that they fill essentially the same role for other APIs that HLSL does for DirectX and the XNA Framework.
Effects
Introduction to Vertex Shaders
Vertex shaders expose functionality that was originally hard-coded into fixed-function hardware transformation and lighting. Vertex shader programs are functions run once on each vertex passed into a Draw call. They are responsible for transforming raw, unlit vertices into processed vertex data usable by the rest of the graphics pipeline. The input of the vertex shader corresponds to the untransformed data in the vertex buffer.
At the bare minimum, a vertex shader only needs to return a transformed position.
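As a rough sketch of that bare minimum (the parameter and struct names below are placeholders rather than names from the series samples), a complete vertex shader can consist of nothing more than a matrix multiplication:

// Combined world, view, and projection matrix, set by the application.
float4x4 WorldViewProjection;

struct VertexShaderInput
{
    float4 Position : POSITION0;  // untransformed position from the vertex buffer
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;  // transformed position for the rest of the pipeline
};

VertexShaderOutput MinimalVertexShader(VertexShaderInput input)
{
    VertexShaderOutput output;
    // Transform the model-space position into projection space.
    output.Position = mul(input.Position, WorldViewProjection);
    return output;
}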
Introduction to Pixel Shaders
Pixel shaders add a level of control not available in classic fixed-function pipelines. To understand a pixel shader, you have to know a bit about what happens after the vertex shader runs. The processed vertex data is used to set up triangles, which in turn are used to determine which pixels on the screen will be drawn. The input to the pixel shader is calculated using the vertex shader outputs from each of the triangle's three vertices.
The inputs of a pixel shader function are therefore bound to the outputs of the vertex shader. So if a vertex shader returns color data, the pixel shader inputs may include this color data. The data is typically interpolated using the three surrounding vertices. For example, imagine an equilateral triangle whose three vertices are each a different color: one red, one green, and one blue. The color input to the pixel shader will be calculated by linear interpolation on those three colors. Pixels that are close to the red vertex will be mostly red, pixels that are closer to the blue vertex will be mostly blue, and so on.
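As a brief illustration (again using placeholder names, not code from the series samples), a pixel shader that simply returns this interpolated color could look like the following. The color parameter arrives already interpolated by the rasterizer, so the shader itself does no blending:

float4 InterpolatedColorPixelShader(float4 color : COLOR0) : COLOR0
{
    // 'color' was produced by interpolating the COLOR0 outputs of the
    // triangle's three vertices, so it already reflects this pixel's
    // position within the triangle.
    return color;
}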
...
...