Sunday, November 21, 2010

Shaders Part I: Hello World!

This post is the first in a series on shader programming. Shaders are programs executed by the GPU. Vertex shaders run once for each vertex that goes through the graphics pipeline, geometry shaders run once for each primitive (triangle, line, etc.), and pixel shaders (OpenGL calls them fragment shaders) run once for each fragment. This will become clearer once you see some code. We'll be using AMD's Render Monkey for writing shaders, since it lets us focus on the shaders themselves rather than on the application code (I'll try to cover that in later posts). Go ahead and download Render Monkey from here. After you download and install it, fire it up. You should see something like this:

Right-click on Effect Workspace -> Add Effect Group -> Effect Group W/ DirectX Effect
You should now see something like this:

Now you have a red ball... isn't that exciting? Well, not really, but it's a start. Let's have a look at what's involved in rendering this red ball. First off, look at the tree panel on the left, where you can see three variables and a rendering pass. The first variable is the ViewProjection matrix, and it's supplied by the application code (in this case by Render Monkey); variables that are set by the application are called uniform (in this case it's actually declared as a global variable). The next variable is named Stream Mapping, and it's used to correlate model data to vertex shader inputs. The third variable is your model, which should be pretty self-explanatory (you can right-click it to change models or coordinate system orientation).
Next we have a rendering pass with a vertex and a pixel shader. Shader syntax is very similar to C, so it should look pretty familiar. Keep in mind that dynamic conditionals (and loops) are only available in shader model 3.0 or higher, and even there they can be quite slow. The rendering pass also holds a reference to our model and to our Stream Mapping (you can recognize a reference by the small arrow icon). You need such references because models and stream maps are implicit (you don't use your model's name anywhere in your shader code), so Render Monkey needs to know what to use for each pass.
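To make that concrete, here is a small hypothetical helper using a dynamic branch (everything in it is made up for illustration; it isn't part of this effect):

float4 pickColor( float intensity )
{
   // dynamic branch: requires shader model 3.0 or higher; on older
   // hardware the compiler may evaluate both sides and select a result
   if( intensity > 0.5f )
      return float4( 1.0f, 1.0f, 1.0f, 1.0f );   // white
   return float4( 0.0f, 0.0f, 0.0f, 1.0f );      // black
}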
The vertex shader simply takes every vertex and transforms it by the ModelViewProjection matrix (in this case just the ViewProjection matrix, since in Render Monkey object space is actually world space). Let's look at it line by line.

float4x4 matViewProjection;


This line declares the view projection matrix as a 4x4 matrix of floats. Most variables in shaders tend to be floating point: half (16 bits), float (32 bits), or double (64 bits); keep in mind that using halfs is significantly faster on most GPUs.
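For illustration, here are a few hypothetical declarations showing these types (lightDir and time are made-up names, not part of this effect):

float4x4 matViewProjection;   // 4x4 matrix of 32-bit floats
half3    lightDir;            // 3-component vector of 16-bit halfs
float    time;                // a single 32-bit float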


struct VS_INPUT
{
   float4 Position : POSITION0;
  
};


This defines our input structure, in this case a single 4-component float vector. POSITION0 is what's called a semantic; it tells the shader compiler to bind an incoming vertex's position to this variable.
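A sketch of a richer input structure, to show how semantics scale up (the extra members are hypothetical; each one would need a matching entry in the stream mapping):

struct VS_INPUT
{
   float4 Position : POSITION0;   // vertex position
   float3 Normal   : NORMAL0;     // vertex normal
   float2 Texcoord : TEXCOORD0;   // first set of texture coordinates
};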

struct VS_OUTPUT
{
   float4 Position : POSITION0;
  
};



Our output looks similar to our input, in that we only write the position. A vertex shader must always write POSITION0, since vertices go through other stages of the pipeline after the vertex shader.

VS_OUTPUT vs_main( VS_INPUT Input )
{
   VS_OUTPUT Output;

   Output.Position = mul( Input.Position, matViewProjection );
  
   return( Output );
  
}


Our entry point is called vs_main (this is the main function of the vertex shader; we can have other functions as well), and it takes a VS_INPUT parameter and returns a VS_OUTPUT. Using structures isn't mandatory; we could also declare it like this (on a side note, POSITION0 and POSITION are the same thing):


float4 vs_main( float4 Position : POSITION ) : POSITION

The only actual computation in the vertex shader is multiplying the input position by the ViewProjection matrix. The mul function is very flexible: you can multiply two matrices, or a matrix and a vector, as long as they are properly sized.
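A quick sketch of that flexibility (matWorld is a hypothetical extra matrix, assumed to be set by the application like matViewProjection):

float4x4 matWorld;   // hypothetical world matrix

// matrix * matrix: combine two transforms into one
float4x4 matWVP = mul( matWorld, matViewProjection );

// vector * matrix: mul treats a vector on the left as a row vector
float4 posWorld = mul( Input.Position, matWorld );

This is also why the order of the arguments matters: mul( v, M ) and mul( M, v ) give different results.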

On to the pixel shader:

float4 ps_main() : COLOR0
{  
   return( float4( 1.0f, 0.0f, 0.0f, 1.0f ) );
  
}
In this case, the pixel shader takes no parameters (the ball is uniformly red) and returns a color (a pixel shader has to write COLOR0).

Let's try changing our model and adding a texture. First, right-click the model node (not the reference to it), go to Change Model, and choose Cracked Quad.3ds. Now we need some texture coordinates; to get them, double-click the stream mapping and add TEXCOORD to the stream list, like so:
Now we need an actual texture: right-click the top node -> Add Texture -> Add 2D Texture and choose Fieldstone.tga.

We also have to add a texture sampler to our render pass. Right-click Pass 0 -> Add Texture Object and select our texture. This links a sampling unit to your texture image. The sampling unit is in charge of, well... sampling the texture (this involves filtering), and you should know that samplers are a limited resource (D3D9 GPUs have 16 samplers or more, for example).
Now, let's modify the pixel shader code (by the way, I renamed my sampler to Diffuse; Texture0 is a pretty non-descriptive name). First, we need to add our sampler as a global variable:
sampler2D Diffuse;
Now we need to add the texture coordinates as an input, so our definition of ps_main becomes:
float4 ps_main(float2 texcoords:TEXCOORD0) : COLOR0
Let's also sample the texture:
float4 diffuse = tex2D(Diffuse, texcoords); 
return( diffuse );
The first argument of the tex2D function is the sampler, and the second one is a set of coordinates (for a 2D texture we only need two coordinates, but we can also sample 1D textures, 3D textures, or cube maps).
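The other texture types follow the same pattern. A sketch, with all names hypothetical and not part of this effect:

sampler1D   Gradient;      // one coordinate
sampler3D   Volume;        // three coordinates
samplerCUBE Environment;   // sampled with a direction vector

float4 ps_samples( float3 uvw : TEXCOORD0 ) : COLOR0
{
   float4 a = tex1D( Gradient, uvw.x );
   float4 b = tex3D( Volume, uvw );
   float4 c = texCUBE( Environment, uvw );
   return a + b + c;
}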
If all went well, you should now see a beige-colored model. Now, that doesn't look like our image, so what went wrong? There are two things that could go wrong here: the sampler might not be set up correctly, or the texture coordinates aren't the right ones. The latter is easier to test: just output them to the screen. Change the return line to the following:
return( float4(texcoords,0.0,1.0) ); 
You should now see a black screen; this means our texture coordinates are always (0.0, 0.0). That can't be right... The problem is that, except for uniform variables, anything that reaches the pixel shader needs to pass through the vertex shader as well, which doesn't happen with our texture coordinates. Modify the vertex shader to this:



float4x4 matViewProjection;

struct VS_INPUT
{
   float4 Position : POSITION0;
   float2 texcoord : TEXCOORD;
  
};

struct VS_OUTPUT
{
   float4 Position : POSITION0;
   float2 texcoord : TEXCOORD;
  
};

VS_OUTPUT vs_main( VS_INPUT Input )
{
   VS_OUTPUT Output;

   Output.Position = mul( Input.Position, matViewProjection );
   Output.texcoord = Input.texcoord;
   return( Output );
  
}
 


This simply forwards the texture coordinates to the pixel shader. You should now see something like this:




Now simply change the return statement in the pixel shader back, and you should have a textured cracked quad.
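As a small experiment to try on your own, you can modulate the texture by a color. The Tint variable below is hypothetical; you would add it as a global, just like the sampler:

sampler2D Diffuse;
float4 Tint;   // hypothetical, e.g. (1.0, 0.5, 0.5, 1.0) for a reddish look

float4 ps_main( float2 texcoords : TEXCOORD0 ) : COLOR0
{
   float4 diffuse = tex2D( Diffuse, texcoords );
   return diffuse * Tint;   // component-wise multiply
}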



 
