Rabid Lion Games


First up, apologies to everyone who’s had to wait a little while for this second part of the series. I wanted to make sure that the series was as accessible to everyone as possible, even if they’ve never written a shader before, and so I’ve ended up going into rather more depth than I intended! That and an unexpected exile to a place without the internet over Christmas & New Year has led to a couple of weeks delay, so sorry about that! Fingers crossed there’ll be a much shorter wait for the next installment…
On with the show!


Shaders: a quick introduction

I’m going to start the series with a quick introduction to shaders. I’ll talk a bit about how shaders work, and then we’ll go through a small sample where we write a simple shader that we’ll actually use in our Lighting system. For those of you who are already comfortable with shaders, feel free to skip this section, I’ll do a quick recap of the shader we create here when we come to use it later in the series.
For those following on, I’ve linked to the full Visual Studio solution for this part of the series at the end of this post, so if you’d like to just review the code and move on then feel free to skip to the end.


What are shaders?

Shaders, very simply, are small programs that run on the graphics processors of your PC, console, or smart phone. There are a number of different kinds of shaders, but for our lighting system we will only talk about one kind of shader: Pixel Shaders.
A pixel shader is called for every pixel of the surface that the graphics processor is drawing to. Usually that means the pixels on your monitor or TV. The job of the pixel shader is to determine what the final color of that particular pixel will be.

If that seems a bit complicated, think of it like this. When the graphics processor wants to draw to the screen, it’s actually deciding for each pixel what color it should be. For each pixel on the screen, it runs the pixel shader and the pixel shader returns a color back to the graphics processor, and the graphics processor then sets the pixel to that color.

So if we wanted to color the whole screen red using a shader, our pixel shader would just contain the line:

return float4(1, 0, 0, 1);

Woah now, I hear you cry. What’s this float4(1, 0, 0, 1) business? Well, in shaders, a color is represented by a float4, which is like a Vector4 in XNA. It just holds 4 float values. Each of the 4 values in the float4 has a special meaning.

As an example, let’s think about a float4 called Col. The first element of Col can be accessed either through Col.x (just like a Vector4 in XNA) or by Col.r. The ‘r’ stands for red, as this value represents how much red there is in the color Col. The second value is Col.y or Col.g (for green) and the third is Col.z or Col.b (for blue). The fourth element of Col is Col.w or Col.a. The ‘a’ stands for alpha, and represents how opaque the color is, with 1 being fully opaque and 0 being fully transparent. All of the values in a float4 representing a color are between 0 and 1, with 1 being the maximum.
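To make the x/r, y/g, z/b, w/a aliasing concrete, here is a toy Python model of a float4. This is purely illustrative: real HLSL float4s are built into the language, and the Float4 class below is my own invention.

```python
class Float4:
    """Toy model of an HLSL float4 color: each component can be read
    through its positional name (x, y, z, w) or its color name (r, g, b, a)."""
    def __init__(self, x, y, z, w):
        self._v = (x, y, z, w)

    # Positional and color names alias the same underlying component.
    x = r = property(lambda self: self._v[0])
    y = g = property(lambda self: self._v[1])
    z = b = property(lambda self: self._v[2])
    w = a = property(lambda self: self._v[3])

# float4(1, 0, 0, 1): maximum red, no green or blue, fully opaque.
col = Float4(1.0, 0.0, 0.0, 1.0)
```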

So above when we said return float4(1, 0, 0, 1); we were returning a Color with the maximum amount of red, no blue or green, and fully opaque.

I’m going to walk through the shader that we’re going to write first, and then we’ll see how it fits in with a simple XNA sample so that you can test it on your own machine.

In our lighting system, each frame we’re going to create light maps. You can think of these a bit like stencils you use for spray painting, like a sheet that covers the scene we’re lighting, and has cutouts showing which bits will be lit with everything else in shadow. For the areas that are lit, the light map will also show us how much light is falling on that part of the scene. This ‘light value’ can be represented for each small section of the scene by a value between 0 and 1, where 1 represents a bit of the scene that is fully lit and 0 represents a bit that is fully in shadow.

This should sound familiar. It’s exactly the way that we talked about storing the red, green, blue, and alpha components in our color Col above. In fact, we can go further than just a single value between 0 and 1 for each bit of the scene in our light map. Instead we can have a value between 0 and 1 for the amount of red light, the amount of green light, and the amount of blue light at each bit of the scene. This will allow our light map to show not just how much light each bit of the scene is getting, but what color that light is.

We will leave the detail of how we generate the light map each frame for later tutorials. Our aim right now is to write a shader that takes a light map, a background texture, and combines them to show what that background looks like with lights applied to it.

Just to illustrate, here is the light map that we’re going to be using in our example:


You can see where the objects that are casting shadows are, even though they’re not drawn on the light map.

Here is the background texture we will be using (yep, just plain grey):



Note that again we can’t see the objects that are creating the shadows. That’s because we don’t want the objects that cast the shadows to be in shadow themselves, so we don’t want to apply the light map to them.

Here is the background after it’s been lit with the light map, using the shader that we’re about to write. Notice that the result is slightly lighter than the light map alone; this extra brightness comes from the background:



And finally, here is the whole scene including the texture with the objects that are casting the shadows:



Hopefully that gives you an idea of where we’re heading. So without further ado, let’s write our first useful shader!


The LightBlend shader

Before we start, we’re going to need an XNA project. Head into Visual Studio and create a new Windows XNA game. Once you’re done, right click on the content project and go to Add -> New… . In the dialog box that pops up, select Effect file, and call it LightBlend. Click OK.

Visual Studio will open up a very intimidating looking code file, full of some familiar, and some not so familiar looking code. Don’t worry about this for now. Select it all and hit delete. For our first shader, we’re going to be writing everything from scratch.


Structure of a pixel shader

Shaders (in XNA and DirectX) are written in a language called the High Level Shader Language, or HLSL, which is based on the syntax of C, the language that C# also evolved from.

A shader looks a lot like a function in C#. It’s enclosed in curly braces (i.e. {}), it takes arguments, and returns a type. In fact, it is a function, just a special one that runs on the graphics processor and not the CPU.

So let’s start by writing the skeleton of our shader/ function, which we will call ‘PixelShaderFunction’.

First up, just like in C#, we need to state what our function returns. For pixel shaders this is basically always a float4, as its job is to tell the graphics processor what color the pixel should be. So for the moment our shader looks like this:


float4 PixelShaderFunction()



Now, we said earlier that the pixel shader gets called for each pixel on the screen, and in the example we gave earlier that colored the screen red, we didn’t care which pixel we were drawing to, because the result was always the same. Normally, however, we’ll want to return a different color for different pixels depending on where they are on the screen. If we think of it in terms of a sprite that covers the whole screen, usually that sprite will have a texture which will determine what color each pixel of that sprite should be.

In XNA we would just use SpriteBatch.Draw() to draw the texture over a rectangle with the same dimensions as the screen. This is something we’re going to need to do in our shader when we draw the background, and when we apply the light map. In order for the shader to know which bit of the texture to look at, it needs texture coordinates. This is just a float2 (like a Vector2) that holds a pair of floats between 0 and 1. These represent x and y coordinates for the texture, telling us where to look on the texture to find the color for this pixel (with (0, 0) in the top left hand corner of the texture, and (1, 1) in the bottom right).
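As a rough illustration, the mapping from a pixel position to texture coordinates can be sketched in Python. Treat this as an approximation: real GPUs sample at slightly different offsets depending on the API (XNA/Direct3D 9, for example, has a well-known half-pixel offset), and the function name here is my own.

```python
def pixel_to_texcoords(x, y, width, height):
    """Map a pixel position to normalized texture coordinates in [0, 1],
    with (0, 0) at the top-left and (1, 1) at the bottom-right.
    Samples at the pixel centre; exact conventions vary by graphics API."""
    return ((x + 0.5) / width, (y + 0.5) / height)

# The top-left pixel of an 800x600 screen maps very close to (0, 0),
# and the bottom-right pixel maps very close to (1, 1).
top_left = pixel_to_texcoords(0, 0, 800, 600)
bottom_right = pixel_to_texcoords(799, 599, 800, 600)
```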

Fortunately, when we use SpriteBatch.Draw() in this way, the texture coordinates are passed to our pixel shader. In order to use them in our shader we need to add a parameter to our arguments list:


float4 PixelShaderFunction(float2 texcoords)



We actually need to add something else to be able to use that parameter in our shader, but we’ll come back to that later on. For now let’s just treat this like a C# function, and assume that the graphics processor will pass it the right information.

So we have a function, and we have coordinates. What now? Well, if we want to draw the background with the light map applied, a good first step would be just drawing the background.

To do this is pretty straight forward, but there are a few new concepts to introduce along the way. First, let’s think about what we want the graphics processor to do.

Since we already have access to the texture coordinates for the current pixel in our shader, all we need the graphics processor to do is to look up what the color of the texture is at that coordinate, and display that color. Now hopefully at this point you’re thinking ‘hang on, what texture?’.

Good point. We need to give our shader a way to reference a texture. The way we do that in shaders is to create something called a sampler. This is used by the graphics processor to ‘sample’ a specific texture in a specific way. As it happens, using SpriteBatch.Draw(), most of the information about the texture and how we want to sample it is passed into the shader by our game code. In order to get access to that information in our shader we need to add the following at the top of our file:

sampler textureSampler;

Again, there’s something else we’ll need to add later to make it work, but this will do for now. All we’ve done here is declare a sampler object called textureSampler.

To actually sample the texture we need to call a function that’s built into HLSL called tex2D(). tex2D() takes as its arguments a sampler and a float2 holding the texture coordinates that we want to sample. It returns a float4 containing the color of the texture at those coordinates.
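To get a feel for what tex2D does, here is a much-simplified Python model. This is a sketch of my own, not HLSL: it only does nearest-neighbour lookup on a grid of RGBA tuples, while the real tex2D also handles filtering, wrapping modes, and more.

```python
def tex2d(texture, texcoords):
    """Toy model of HLSL's tex2D: look up the texel at normalized
    coordinates (u, v) in a 2D grid of RGBA tuples, nearest-neighbour style."""
    u, v = texcoords
    height = len(texture)
    width = len(texture[0])
    # Scale the [0, 1] coordinates up to texel indices, clamping at the edge.
    x = min(int(u * width), width - 1)
    y = min(int(v * height), height - 1)
    return texture[y][x]

# A 2x2 "texture": red, green on the top row; blue, white on the bottom.
texture = [
    [(1.0, 0.0, 0.0, 1.0), (0.0, 1.0, 0.0, 1.0)],
    [(0.0, 0.0, 1.0, 1.0), (1.0, 1.0, 1.0, 1.0)],
]
```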

We’ll see how to make sure that our sampler is associated with the texture we want to draw a bit later, but for now let’s assume that it is. To draw our background all we need to do is sample from the texture coordinates that were passed into our shader function. We do this by adding the following line in our shader function:

float4 backgroundCol = tex2D(textureSampler, texcoords);

So now we have a float4 holding the color of the bit of our background texture that corresponds to the pixel we’re trying to draw at the moment. So all we need to do to draw our background is to return this color, and the Graphics Processor will change the pixel on the monitor to the correct color:

return backgroundCol;

So our full shader for outputting the background texture looks like this:


sampler textureSampler;

float4 PixelShaderFunction(float2 texcoords)
{
    float4 backgroundCol = tex2D(textureSampler, texcoords);
    return backgroundCol;
}


If we were to write the accompanying XNA code to make this work, and use the texture provided with the sample, we’d get this result:



Not exactly exciting, but a good start.

That wasn’t so complicated. However, like most examples, it’s also a little pointless, as it’s much easier to just use spriteBatch to draw a fullscreen texture (and we’re also missing a couple of bits of code to make it work, they’re coming I promise!).

In order to make it more interesting, we need to apply our lightMap. As mentioned earlier, our lightMap will represent fully lit with a 1 in each of the r, g, b, and a components, and minimum light with 0 for each of r, g, and b, and 1 for a (as we don’t want it to be invisible, just dark).

So how do we change the color of our background to make it look like it’s been lit up with the light map? Let’s start with a simple case. Let’s assume that there is no ambient light in the scene whatsoever. By that I mean that if the light isn’t touching a pixel, then it should display as completely black. In this case the minimum light value for any pixel is (0, 0, 0, 1).

If the pixel is fully lit, it should show exactly the color of the background pixel, i.e (background.r, background.g, background.b, background.a). What about if the light map has a value of say, (1, 0.5, 0.25, 1)? What does this mean?

It means that we want the full contribution of the red component of the background color, half of the green component of the background color, a quarter of the blue component of the background color, and all of the alpha component of the background color. Hopefully you’re way ahead of me and already thinking ‘so in general, it’s:

(background.r * lightMap.r, background.g * lightMap.g, background.b * lightMap.b, background.a * lightMap.a).

If so, then you’re right! If not though, don’t worry, just go back and read the last paragraph or so again and see if you can come to the same conclusion.

Before we can use the lightMap in this way, first we need to be able to sample it. For this we need another sampler object, so add the following at the top of the file after we declared textureSampler:

sampler lightMapSampler;

And we need to use it in the same way inside our shader, so add the following before the return statement in our function:

float4 lightMapCol = tex2D(lightMapSampler, texcoords);

Now we can change our ‘return’ statement to apply the lightMap values to the background color:

return float4(backgroundCol.r * lightMapCol.r, backgroundCol.g * lightMapCol.g, backgroundCol.b * lightMapCol.b, backgroundCol.a * lightMapCol.a);

Our updated shader would now produce the following output. If we were to apply this lightMap:


to this background:



we would get this result:



That’s looking a bit more interesting. However, it looks a little harsh: in reality you don’t get pitch black just because there are no lights, as there’s always some light reflected off of other surfaces. To approximate this in our lighting system we let the programmer define a minimum value between 0 and 1 that we will let the light go down to.

float4 MinLight;

We’ll see how to set this value later on. First we need to work out how we’re going to use it. There are two main ways we could enforce a minimum value for the light in our scene. The first is to simply say that any pixel which has a light value less than MinLight we’ll set equal to MinLight. This would certainly work, however it would have another effect. Let’s compare the screen capture we had earlier to the one we get if we use this method:



As you can see, the area that appears to be lit by the lights is much smaller (note that the areas affected by the two lights now fall a long way short of one another). In fact, as you’ll see as we go through the series, we want the programmer to be able to control the radius themselves directly, so we don’t want to affect that here.

Another method is to use the lightMap value to linearly interpolate (or lerp) between the value of MinLight and (1, 1, 1, 1). What this means is that we want the area covered by the light to be the same, with the outermost edge having the value MinLight instead of (0, 0, 0, 1), and the pixel at the point the light originates from (where the bulb would be on the flashlight, if you like) being the color of the light itself, with a smooth gradient in between.

There is a built-in function for this, but I want you to understand what’s happening, so we will do it manually. For now, let’s just focus on the red component of the lightMap (the process will be the same for blue and green, and we want to leave alpha as it is, so we don’t need to worry about it).

The sort of thing we want to do would look something like this:


float4 newLightMapCol;

newLightMapCol.r = (lightMapCol.r * (1 - MinLight.r)) + MinLight.r;


So, what are we doing here? Let’s think about it in terms of lightMapCol.r. lightMapCol.r tells us how much (on a scale of 0 to 1) red light there should be at that pixel, where 0 means we have MinLight.r red light, and 1 means we have the maximum amount (1) of red light. E.g. when lightMapCol.r is 0.5 we should have a red light value for that pixel exactly half way (or 0.5 of the way) between MinLight.r and 1.

To get to half way between MinLight.r and 1, we need to add half of the difference between them (the difference being 1 - MinLight.r) to the smaller of them (MinLight.r). In other words, half way between MinLight.r and 1 is:

0.5 * (1 - MinLight.r) + MinLight.r

If we repeat that exercise again, except this time we want lightMapCol.r to be 0.1, we get to:

0.1 * (1 - MinLight.r) + MinLight.r

So in general, we want the amount of red light at that pixel to be:

lightMapCol.r * (1 - MinLight.r) + MinLight.r
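This formula is a standard linear interpolation, and we can sanity-check its endpoints with a few lines of Python (illustrative only; the function name is my own):

```python
def min_light_lerp(light, min_light):
    """Remap a raw light value in [0, 1] onto [min_light, 1]:
    returns min_light when light is 0, 1 when light is 1, linear in between."""
    return light * (1 - min_light) + min_light

# With a minimum red light of 0.3: unlit pixels get 0.3, fully lit pixels
# get 1.0, and half-lit pixels land exactly half way between 0.3 and 1.0.
floor = min_light_lerp(0.0, 0.3)
ceiling = min_light_lerp(1.0, 0.3)
midpoint = min_light_lerp(0.5, 0.3)
```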


As I said above, the same will be true of lightMapCol.b and lightMapCol.g, and we don’t want to change lightMapCol.a. So we could write the following:


float4 newLightMapCol;

newLightMapCol.r = (lightMapCol.r * (1 - MinLight.r)) + MinLight.r;

newLightMapCol.g = (lightMapCol.g * (1 - MinLight.g)) + MinLight.g;

newLightMapCol.b = (lightMapCol.b * (1 - MinLight.b)) + MinLight.b;

newLightMapCol.a = lightMapCol.a;



However, HLSL lets us do this much more efficiently. I’m going to write the code first, and then explain it:

float4 newLightMapCol = lightMapCol;
newLightMapCol.rgb = (lightMapCol.rgb * (1 - MinLight.rgb)) + MinLight.rgb;

This, my friends, is called swizzling (I kid you not). Basically, for float4s (and float3s and float2s) we can work with multiple components at once. Here, we can think of it as being the same as the code we wrote above, except condensed into one line, with the Graphics Processor picking r, then g, then b, and executing the line for each in turn. In reality it does some magic on its end in order to do all three at once in a super-efficient way, but we don’t need to worry about how it does it.
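The claim that the swizzled line behaves like the three per-component lines can be checked in plain Python. This is a sketch treating colors as 4-tuples: there is no actual swizzling here, just the same arithmetic applied to r, g, and b while a is left alone.

```python
def remap_rgb(light_map, min_light):
    """Apply light * (1 - min) + min to the r, g, b components of an RGBA
    tuple, leaving alpha untouched, like the .rgb swizzle in the shader."""
    *rgb, a = light_map
    *min_rgb, _ = min_light
    new_rgb = [l * (1 - m) + m for l, m in zip(rgb, min_rgb)]
    return (*new_rgb, a)

# A fully dark light map raised to a minimum light of 0.5 per channel:
darkest = remap_rgb((0.0, 0.0, 0.0, 1.0), (0.5, 0.5, 0.5, 1.0))
# → (0.5, 0.5, 0.5, 1.0)
```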

In fact, while we’re on the subject of efficiency, we don’t really need a new variable to store the altered lightMapCol value, as we won’t need the original value of the lightMap again, so we can simplify to:


lightMapCol.rgb = (lightMapCol.rgb * (1 - MinLight.rgb)) + MinLight.rgb;

Putting it all together, our full shader now looks like this:

sampler textureSampler;

sampler lightMapSampler;

float4 MinLight;

float4 PixelShaderFunction(float2 texcoords)
{
    float4 backgroundCol = tex2D(textureSampler, texcoords);
    float4 lightMapCol = tex2D(lightMapSampler, texcoords);

    lightMapCol.rgb = (lightMapCol.rgb * (1 - MinLight.rgb)) + MinLight.rgb;

    return float4(backgroundCol.r * lightMapCol.r, backgroundCol.g * lightMapCol.g, backgroundCol.b * lightMapCol.b, backgroundCol.a * lightMapCol.a);
}


There’s one more thing for now that we need to add to our effect file. This is called a technique. To get certain effects, graphics programmers sometimes need the Graphics Processor to run one shader immediately followed by another shader.

The technique is the place where we tell the Graphics Processor what shader functions to run and in what order. If we were using a Vertex Shader for example (you don’t need to worry about what these are!) they have to run before the pixel shader, so our technique would tell the Graphics Processor to first run the vertex shader, then our pixel shader.

The technique has one more important role. It tells the compiler which version of HLSL it is using, so that the compiler can compile the code correctly. For us, since we’ve only written a pixel shader, we need to tell it what Pixel Shader version we’ve used. For our code we need Pixel Shader v. 2.0.

In general the earlier the version you can use, the older the hardware that will be able to run it, so for compatibility, you generally want to use the lowest number version you can get away with. There are exceptions of course. If you’re targeting fixed hardware (like the Xbox 360 for instance) you don’t need to worry about supporting older versions. There are other differences, but we don’t need to worry about those.

So, at the bottom of the effect file, add the following:


technique Technique1
{
    pass Pass1
    {
        PixelShader = compile ps_2_0 PixelShaderFunction();
    }
}

This is going to be identical in every shader we write throughout this series, so I’m not going to go into what it all means. All you need to know is what I said earlier, that this tells the Graphics Processor what shader to run, and the compiler what version to compile it against.

At this point we’re mostly done with our shader code. There are a couple of details that we’ll need to add to enable our game code to talk to the shader properly, but the meat of the shader is all there.

Let’s head back to our Game1.cs file that was created when we created our project earlier.

First of all we need to add a field to the class for our shader. In XNA shaders are Effect objects, so at the top of the Game1 class add:

Effect lightBlendEffect;

We’ll also need our textures. They can be downloaded from here:


Background Texture
Midground Texture

LightMap Texture


Add these to your content project and add fields for them at the top of Game1:

Texture2D background;
Texture2D midground;
Texture2D lightMap;

Next, in LoadContent() add the following line that loads and compiles our shader when we build the solution, as well as lines to load our textures:


lightBlendEffect = Content.Load<Effect>(@"LightBlend");

background = Content.Load<Texture2D>(@"Background");

midground = Content.Load<Texture2D>(@"Midground");

lightMap = Content.Load<Texture2D>(@"LightMap");


Finally, in Draw(), just after the code that clears the screen to Cornflower blue, add the following lines (I’ll explain them shortly):


GraphicsDevice.Textures[1] = lightMap;

lightBlendEffect.Parameters["MinLight"].SetValue(Vector4.One * 0.3f);

spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend, null, null, null, lightBlendEffect);

spriteBatch.Draw(background, new Rectangle(0, 0, GraphicsDevice.Viewport.Width, GraphicsDevice.Viewport.Height), Color.White);


GraphicsDevice.Textures[1] = null;


spriteBatch.Draw(midground, new Rectangle(0, 0, GraphicsDevice.Viewport.Width, GraphicsDevice.Viewport.Height), Color.White);



So, what are we doing here? A lot of this should look familiar, but some of it is new. The first line is

GraphicsDevice.Textures[1] = lightMap;

The GraphicsDevice actually keeps an array of all the textures it is using at any one time. All we’re doing here is manually adding our lightMap texture to the array. You may have noticed that we’re actually adding it as the second texture in the array.

This (as you may have guessed) is because SpriteBatch automatically puts its texture parameter (in our case background) as the first element in that array, which would have overwritten the lightMap texture.

Once more you may be wondering how the Graphics Processor matches these textures to the samplers that we created in our shader. I promise, all will be explained shortly!

The next line is a little more self-explanatory:

lightBlendEffect.Parameters["MinLight"].SetValue(Vector4.One * 0.3f);

When we load our shader into our lightBlendEffect object inside LoadContent(), it gives us a way to access each of the parameters in our shader, by storing them by name in what is effectively a Parameters dictionary. To set our MinLight value in our shader, we just set the parameter that matches the name we put in the braces.

Next up is a version of spriteBatch.Begin() that you may not have come across before. There are a lot of parameters here, but we only really care about the last one, which tells spriteBatch to draw using our shader that we loaded into lightBlendEffect instead of spriteBatch’s default pixel shader.

spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend, null, null, null, lightBlendEffect);

The only other line that might need explaining is

GraphicsDevice.Textures[1] = null;

All we are doing here is un-setting the texture we set at the beginning of this block of code.

With that, we’re very close to being done. On to the final hurdle!



Semantics

For quite a while now I’ve been promising you that I’ll explain how the Graphics Processor can possibly match up all the information in our XNA code with the information we wrote in our effect file.

I explained how it knows what value to give MinLight, but what about the two samplers? Or the texture coordinates? In fact, how does it know for certain that the output of this shader is meant to be a color that it’s supposed to draw to the screen?

For that, there is a feature in HLSL called semantics, and it requires some small editing to our effect file, as well as a bit of explanation as to how the Graphics Processor stores textures.

Semantics are basically labels in the code of the shader that tell the Graphics Processor that certain variables or values have a special meaning, and should be treated accordingly.

The first example we’ll look at is the texcoords parameter that our PixelShaderFunction takes as an argument. At the moment the first line of our function looks like this:

float4 PixelShaderFunction(float2 texcoords)

You probably asked at the time, where is this function called from? How do the correct texture coordinates get passed as an argument? Well, in a way it’s all an illusion.

When we call SpriteBatch.Draw(), XNA tells the graphics processor what the texture coordinates are for each of the four corners of our sprite (which are the four corners of the screen in our case), and the Graphics Processor interpolates the correct coordinates for each pixel.

In order for the value that the Graphics Processor has calculated to end up as the value for the texcoords parameter, we have to add a label (or rather, a semantic) to the texcoords parameter to let the Graphics Processor know that it needs to set it to the texture coordinates for this pixel.

To do this, we change the first line of the shader function to the following:

float4 PixelShaderFunction(float2 texcoords : TEXCOORD0)

This label, signified by a ‘:’ followed by one of a set number of labels, tells the Graphics Processor that it needs to set texcoords equal to the texture coordinate for this pixel. The ‘0’ at the end indicates that we are referring to the first set of texture coordinates that Graphics Processor is holding in memory. In more complex operations the Graphics Processor might have a number of different sets of texture coordinates in memory at the same time.

Before we move on to the textures/ samplers, let’s just think for a moment about the return value of our function. We know that it’s meant to be read as the color for the Graphics Processor to set the pixel to, but can the Graphics Processor be sure that’s what we mean? Although based on what we’ve discussed so far the answer might seem like ‘yes’, actually it’s a bit more complicated.

The Graphics Processor can do a lot more than just drawing to the screen. It can render to a texture (like drawing to the screen, but instead of the finished image appearing on the monitor, it is saved as a texture to memory – we will be using this later in the series), it can render to multiple textures at once, it can also write out depths (single float values rather than float4s) and perform a whole host of other functions.

To make it clear to the Graphics Processor that what we’re outputting is a color and should be used as such, we need to add a semantic to the return value like so:

float4 PixelShaderFunction(float2 texcoords : TEXCOORD0) : COLOR0

Once again, we’re adding a ‘:’ followed by a specific label that tells the Graphics Processor how to interpret the value (COLOR in this case) and a number to tell it which one of that type of value it might be holding we’re talking about (i.e. if we were rendering to two textures instead of the screen we might refer to them using the semantics COLOR0 and COLOR1).

Finally we move onto the textures/ samplers. As we discussed earlier, the Graphics Processor can hold an array of textures, which we can assign to in our XNA code. The question is, how does it know which of our samplers to use with which of the textures? What is to stop it sampling the lightMap when we want it to sample the background?

To solve this we again need to add semantics, but these are going to take a bit more explaining:

sampler textureSampler : register(s0);

sampler lightMapSampler : register(s1);

Yep, those are definitely going to take some explaining! The first thing to talk about is what these objects actually are. As discussed before these are samplers, rather than the textures themselves. They’re basically an interface that tells the Graphics Processor not just which texture to sample, but how to sample it.

So actually, we need each of these to reference a fully formed sampler, not just a texture. Fortunately we don’t need to worry about creating a sampler: XNA and SpriteBatch create them for us based on the textures we’ve put in the GraphicsDevice’s textures array, or that we’ve passed to SpriteBatch.Draw(). XNA then stores the samplers that it’s created in the memory that belongs to the Graphics Processor.

In particular the Graphics Processor has a number of special areas of memory called registers. These are each dedicated to a type of object, such as textures, samplers, normal maps, and other resources. In our case we’re only interested in the sampler registers.

The Graphics Processor has a number of registers dedicated to samplers. XNA stores the samplers that it creates for us in these registers, storing the sampler for the first texture in register s0 (s being for sampler!), the sampler for the second texture in s1, and so on.

As discussed before, the first texture is always the one passed into spriteBatch.Draw(), which for us was the background texture. So the sampler for the background texture must be stored in register s0.

In our shader we want the sampler that we’ve called textureSampler to sample our background texture, and so we need to tell the Graphics Processor that this sampler can be found in register s0. To do this we add the semantic ‘register(s0)’ to the end of the line that we declared our sampler on:

sampler textureSampler : register(s0);

Similarly, as our lightMap texture was the second texture on the GraphicsDevice, its sampler will be in register s1, so the line in our effect file needs to be:

sampler lightMapSampler : register(s1);

And that’s it for semantics! Hopefully that all made some kind of sense to you, but if not don’t worry about it, you won’t need a detailed knowledge of semantics and registers for this series, just enough that you don’t need to worry about why we’re adding them to our shaders.

And in fact, that’s it for this introduction to shaders! Save everything and run the program. If everything has gone well you should now be seeing the image we saw at the beginning of this tutorial:



I’ve uploaded the full Visual Studio solution for this project at the link below, so if your solution isn’t working and you’re having trouble figuring out where you went wrong, you can compare your code to the completed solution:


Part 2: Full solution


This series is all based around shaders, so if you don’t feel that you’ve gotten a good grasp of what shaders are and what they do from this post then I highly suggest that you take a look at some of the other simple shader tutorials linked to below, otherwise you may struggle to follow the shader code in the rest of this series.

I hope that this hasn’t scared you off of shader code, it really is some of the most interesting technology that XNA lets you play with, and if you take the time to play around with them you can find yourself creating many weird and wacky effects for your games! Hopefully I’ll see you all back here soon for the next installment in this series, where I’ll outline the algorithm for the 2D lighting engine.

The next part should *hopefully* be significantly shorter than this, and fingers crossed I’ll be able to get it posted up faster too!

Until next time!


Other shader/ HLSL tutorials



