Rabid Lion Games


Last time we figured out how to use our occlusion map to generate a light map for a point light in our scene. Had we altered the XNA rendering code to show us what a typical light map generated looks like, we would have seen something like this:

If you click on it to see the larger version you'll see that it's not particularly attractive, is it? Although it fades nicely towards the edge of the light radius, the edges caused by shadow-casting objects are very sharp (not to mention heavily aliased). Of course what we want is for these edges to also be nice and smooth. Furthermore, we want the shadows to be crisper closer to the light, and more indistinct the further from the light the shadow is. As I mentioned in Part 2, this section of the algorithm is heavily based on @CatalinZima's method for accomplishing the same. So if my explanation of the techniques we use doesn't quite make sense to you, I suggest you head over to @CatalinZima's blog (linked again at the end of this part) for the explanation over there.

Our light maps actually need to be blurred twice: once vertically and once horizontally. The basic idea is that we replace each pixel's value with a weighted average of the pixels around it, with those closest to the original pixel having the biggest weighting and those furthest from it having the lowest. The main considerations in choosing a blurring algorithm are how many samples to take when creating this average, and how spread out they should be. The more spread out the samples, the more blurred the final image, whereas the more samples we take, the fewer artefacts we have in the final image.

For now let's just create a simple vertical blur shader, and then we'll adapt it for our purposes.

Fire up our Visual Studio solution and add a new Effect file called 'VerticalBlur.fx'. As usual we delete the contents that XNA gives us by default, and add the following stub:


float4 PixelShaderFunction(float2 texcoord : TEXCOORD0) : COLOR0
{
}

technique Technique1
{
    pass Pass1
    {
        PixelShader = compile ps_2_0 PixelShaderFunction();
    }
}


Now we create a variable to hold the sum of each of our samples at the top of our Pixel Shader function:


float sum = 0;


We need to decide how many samples we want our shader to take. I've followed @CatalinZima here and have borrowed his table of distances/weightings. The normal name for this is a sampling kernel. Let's now add it to our shader. At the top of the file add the following:


static const int KernelSize = 13;

static const float2 OffsetAndWeight[KernelSize] =
{
    { -6, 0.002216 },
    { -5, 0.008764 },
    { -4, 0.026995 },
    { -3, 0.064759 },
    { -2, 0.120985 },
    { -1, 0.176033 },
    {  0, 0.199471 },
    {  1, 0.176033 },
    {  2, 0.120985 },
    {  3, 0.064759 },
    {  4, 0.026995 },
    {  5, 0.008764 },
    {  6, 0.002216 },
};


If you're wondering where these values come from, they're just a weighting for each distance, chosen so that they add up to approximately 1. By multiplying each sample by its weighting and summing them we get a weighted average of the samples (the maths behind what weighting to give each sample is slightly more complex, so I won't go into it here).
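Although the maths is skipped here, these particular numbers happen to match samples of a Gaussian curve with a standard deviation of 2 pixels. A quick Python sketch (illustrative only, not part of the XNA project) that reproduces them:

```python
import math

def gaussian_kernel(radius=6, sigma=2.0):
    """Pixel offsets paired with Gaussian weights, as in the table above."""
    return [(o, math.exp(-o * o / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi)))
            for o in range(-radius, radius + 1)]

kernel = gaussian_kernel()
# The weights sum to just under 1, because the tails beyond +/-6 are cut off.
total = sum(w for _, w in kernel)
```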

We now use these to create our weighted average. Let’s add the following code to our PixelShaderFunction(), and then we’ll discuss exactly what it does:


for (int i = 0; i < KernelSize; i++)
    sum += tex2D(textureSampler, texcoord + OffsetAndWeight[i].x / ScreenDims.y * float2(0, 1)).r * OffsetAndWeight[i].y;


Before we discuss this let’s quickly add the textureSampler and ScreenDims parameters to the top of the file:


sampler textureSampler : register(s0);

float2 ScreenDims;


So what is going on in this for loop? Well, essentially we’re sampling a pixel for each entry in our kernel, multiplying it by some value, and adding it to our sum to give us our average. To decide which pixel to sample each time through the loop, we’re adding some offset value to the texture coordinates of the current pixel. That offset is calculated by taking one of the values in our kernel (which are stored as a number of pixels), dividing it by the height of the screen to get it into the texture coordinate scale, and multiplying that by a normal pointing vertically downwards, to give us a float2 that we can add to our texture coordinates. We also only look at the r channel of each sampled pixel. This is because at the moment on our lightmap we only have pixels where the r, g, and b components are all the same, so to simplify matters we will only look at one channel.
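To make the arithmetic concrete, here is the same loop written as plain Python (a hypothetical helper, not project code), with tex2D replaced by clamped array indexing on a single column of red-channel values:

```python
OFFSET_AND_WEIGHT = [
    (-6, 0.002216), (-5, 0.008764), (-4, 0.026995), (-3, 0.064759),
    (-2, 0.120985), (-1, 0.176033), (0, 0.199471), (1, 0.176033),
    (2, 0.120985), (3, 0.064759), (4, 0.026995), (5, 0.008764),
    (6, 0.002216),
]

def vertical_blur_pixel(column, y):
    """Weighted average of 13 samples above and below row y.
    Indices are clamped at the edges rather than wrapped."""
    total = 0.0
    for offset, weight in OFFSET_AND_WEIGHT:
        sample_y = min(max(y + offset, 0), len(column) - 1)
        total += column[sample_y] * weight
    return total
```

Blurring a hard 0-to-1 edge gives intermediate values either side of it, which is exactly the softening we're after.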

Finally, we add the following lines to the bottom of our function in order to return the correct value:


float4 result = sum;
result.a = 1;
return result;


All we are doing here is creating a return color from our sum value, and setting the alpha channel to 1, as we want every pixel in our light map to be opaque.

So, that’s a simple, uniform, vertical blur.

However, as discussed above, we actually want the amount of blur to depend on the distance of the pixel from the light. Or to be more precise, we want our sampling kernel to be more 'spread out' the further from the light we get. To do this we need to multiply the offset value of our kernel by some value based on the distance from the pixel to the light. The obvious way to do this is to linearly interpolate between some minimum blur value at the light position and a maximum blur value at the radius of the light, which is in fact exactly what we do! The code for our altered PixelShaderFunction() to achieve this is as follows:


float sum = 0;
float2 screenPos = float2(texcoord.x * ScreenDims.x, texcoord.y * ScreenDims.y);
float dist = length(screenPos - LightPos);

for (int i = 0; i < KernelSize; i++)
sum += tex2D(textureSampler, texcoord + OffsetAndWeight[i].x * lerp(minBlur, maxBlur, saturate(dist / Radius)) / ScreenDims.y * float2(0, 1)).r * OffsetAndWeight[i].y;

float4 result = sum;
result.a = 1;
return result;


Here we use the built-in lerp function rather than interpolating manually. We also use the built-in function 'saturate', which just clamps dist / Radius to the range 0 – 1. Before we finish with this shader we need to add some parameters to the top of our shader file:
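For anyone who finds that one-liner dense, the scaling logic behaves like this in Python (saturate and lerp written out by hand, since they are HLSL intrinsics; the function name is invented):

```python
def saturate(x):
    """Clamp to the 0-1 range, like HLSL's saturate."""
    return min(max(x, 0.0), 1.0)

def lerp(a, b, t):
    """Linear interpolation, like HLSL's lerp."""
    return a + (b - a) * t

def blur_scale(dist, radius, min_blur=0.0, max_blur=5.0):
    """Factor each kernel offset is multiplied by: no extra spread at
    the light itself, maximum spread at (and beyond) the light's radius."""
    return lerp(min_blur, max_blur, saturate(dist / radius))
```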


float2 LightPos;
float Radius;

float minBlur;
float maxBlur;


And we're done with VerticalBlur.fx. Let's barrel on to HorizontalBlur.fx. Create a new effect file named HorizontalBlur.fx in the usual way, and empty it as normal. Now I'm going to tell you to do something that will make some of you shake your head in disgust, but I'm doing it to illustrate a point (ok, and because I'm a bit lazy!).

Copy the contents of your VerticalBlur shader and paste them into the new file. You can type it all out again if you really want to, I’ll wait. The rest of you grab a coffee or something…

Right, now we’re all caught up we only actually need to change 3 characters in the whole file (see why I told you to copy and paste now?).


First up, inside our call to tex2D() inside our for loop, look for our reference to ScreenDims.y. Change this to ScreenDims.x.

Secondly (and thirdly), follow a bit further along that line to where we create the normal float2(0, 1). Change this to float2(1, 0).


And that's it. You've just created a horizontal blur. Everything else is exactly the same, as you should probably expect. If you're scratching your head wondering why we didn't have to change our sampling kernel, remember that the kernel just holds single values. In order to turn them into coordinates we multiply them by a normal pointing either directly down or directly right, to give us vertical or horizontal offsets from our texture coordinates.

Before we finish we need to add some code to our LightRenderer class to set our shader parameters. Open up LightRenderer.cs and add the following to the top of the class:


public float minBlur = 0.0f;
public float maxBlur = 5.0f;


These are just values that I've found to work well; feel free to experiment at your leisure. Next up, in PrepareResources() add the following lines:





Finally we go to the BlurLightMaps() method. Just before the first call to spriteBatch.Begin(), add the following lines:




and then finally, just before the second call to spriteBatch.Begin(), add these lines:




And we’re done! We now have all the code we need to blur our light maps. If you’d like to see the before and after shots, I’ve added them below:



If you recall from Part 3, after the light maps are blurred they are additively rendered to a texture that accumulates all of the maps for each light, eventually leading to the final light map that we can use with our shader from Part 2. The final two steps in our series, before we can write some game code to actually use our system, are to do with spot lights. Once again these need to be 'unwrapped' before their rays can be used to create an occlusion map, and then we'll need to create a shader that uses that occlusion map to generate a light map. It's a bit different to what we did with point lights though, so we'll still have plenty of ground to cover.

Finally, as usual, here is an upload of the solution so far up to this point:


Part 6 Solution


Until next time!

(Link to @CatalinZima’s blog: http://www.catalinzima.com/2010/07/my-technique-for-the-shader-based-dynamic-2d-shadows/ )


In the last part of the series we saw how to ‘unwrap’ the rays of a point light onto a texture. In Part 3 we saw how to use blend states to collapse these unwrapped rays into an occlusion map, a 1D texture holding the distance of the closest shadow casting pixel to the light along each ray. In this part of the series we will see how to use that occlusion map to create a light map that can be used with the shader we created in Part 2 to create the final lit scene.


Creating the light map can be broadly split into the following steps:

1) Calculating the vector from the light to the pixel

2) Determining which ray the pixel lies on

3) Sampling the value stored in the cell of the occlusion map that corresponds to that ray

4) Using the sampled value to calculate the distance along that ray to the closest shadow casting pixel

5) Calculating what the value of the light map for that pixel would be assuming that it is not obscured by a shadow caster

6) Testing whether the pixel is closer to the light than the closest shadow caster on that ray (taking into account a small bias to avoid jagged edges being visible): if it is then store the value calculated in step 5. If not, store (0, 0, 0, 1).


Most of these steps are self explanatory, so we simply need to implement them in our shader, but one or two will need further explanation as and when we come to write the code for them.
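Before we commit the steps to HLSL, the whole procedure can be sketched as a plain Python function (a stand-in to make the flow concrete; every name here is invented for illustration, and occlusion_map is modelled as a list of (r, g) pairs, one per ray):

```python
import math

def light_map_value(px, py, light_pos, radius, light_pow, bias,
                    occlusion_map, diagonal_len):
    # Step 1: vector from the light to the pixel.
    to_pix = (px - light_pos[0], py - light_pos[1])
    pix_dist = math.hypot(to_pix[0], to_pix[1])

    # Step 2: which ray? Angle from 'straight down', positive both sides.
    theta = math.acos(to_pix[1] / pix_dist) if pix_dist > 0 else 0.0

    # Step 3: sample the occlusion cell for that ray.
    index = min(int(theta / math.pi * len(occlusion_map)), len(occlusion_map) - 1)
    cell = occlusion_map[index]

    # Step 4: decode to pixels; left of the light uses r, right uses g.
    occ_dist = (cell[0] if px < light_pos[0] else cell[1]) * diagonal_len

    # Step 5: linear fall-off, assuming the pixel isn't in shadow.
    lighting = (1 - pix_dist / radius) * light_pow

    # Step 6: lit only if nearer (less a small bias) than both the
    # radius and the closest shadow caster on this ray.
    return lighting if pix_dist - bias < min(radius, occ_dist) else 0.0
```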


The PointLight shader

First things first. Fire up your solution from last time and create a new .fx file in the Content project called 'PointLight.fx'. As usual, delete the contents and add the usual stub:


float4 PixelShaderFunction(float2 Texcoord : TEXCOORD0) : COLOR0
{
}

technique Technique1
{
    pass Pass1
    {
        PixelShader = compile ps_2_0 PixelShaderFunction();
    }
}


The first thing we need to do is calculate the screen coordinates (rather than texture coordinates) of the current pixel. To do that all we need to do is multiply the screen dimensions by the texture coordinates as we’ve done in previous parts of the series. Add the following to the top of the shader function:


float2 screenCoords = float2(ScreenDimensions.x
    * Texcoord.x, ScreenDimensions.y * Texcoord.y);


Obviously for this to work we’ll need to add a ScreenDimensions parameter to the top of the shader file:


float2 ScreenDimensions;


Now to get the vector from the light to the current pixel we simply subtract the light’s screen coordinates from the current pixel’s screen coordinates:


float2 lightToPix = screenCoords - LightPos;


And so we’ll need a parameter at the top of our file again for our Light position:


float2 LightPos;


And that’s step 1) done. On to step 2!


Before we move on, we will quickly calculate the distance of the pixel to the light. This is of course just the length of the vector we just calculated:


float pixDist = length(lightToPix);


Now we look at calculating which ray the pixel lies on. To do that we need some trigonometry:


float occlusionU = acos(dot(float2(0, 1), normalize(lightToPix)));


Let's pause and break this down. Inside the brackets we have the function dot(), which takes the dot product of two vectors. The dot product is given by the following formula:


dot(a, b) = cos(theta) * length(a) * length(b)


Where theta is the angle between the two vectors. In our case both of the vectors we’re feeding it have length 1 (since we normalize lightToPix), so this formula becomes:


dot(a, b) = cos(theta)


Then it should be clear that taking acos() of this will give us the original theta. Note that the other vector we are using in the dot product is our zero angle vector that we decided on in the last part.
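A quick numerical check of that identity (Python, with math.acos standing in for HLSL's acos; the function name is invented):

```python
import math

def ray_angle(light_to_pix):
    """Angle between the zero direction (0, 1) (straight down in texture
    space) and the light-to-pixel vector, recovered from the dot product."""
    length = math.hypot(light_to_pix[0], light_to_pix[1])
    ny = light_to_pix[1] / length
    # dot((0, 1), normalized vector) is just its y component, i.e. cos(theta)
    return math.acos(ny)
```

Note that vectors mirrored either side of the zero line come back with the same positive angle, which suits how we stored the two halves of the texture.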

There is one more decision from the last part that we should remark on at this point. Recall that we decided to make the angle positive in both directions from the zero line since we were storing the rays on each half of the texture separately. This now comes in handy, since we don’t have to do any work to figure out whether the result from acos() should be positive or negative, we can just use the positive value that it gives us.

So this then gives us the angle of the ray that the pixel lies on, which completes step 2 of our procedure above.


Step 3 involves sampling the occlusion map to get the value stored that corresponds to our ray. In order to do this we need to convert our angle that we calculated in step 2 into a texture coordinate. Since the occlusion map is a 1D row of pixels we know that the y coordinate will be 0.5 (the centre of the row vertically). To calculate the x coordinate we need to transform the angle from its current range of 0 – PI to the texture coordinate range of 0 – 1. We accomplish this simply by dividing the angle by PI:


occlusionU /= PI;


And again we’ll need to add our definition of PI to the top of the shader file:


#define PI 3.14159265


Now we just need to sample the occlusion map in the normal way:


float4 occlusionDist = tex2D(occlusionSampler, float2(occlusionU, 0.5));


And clearly we’ll need to create a sampler for our occlusion map at the top of the file:


sampler occlusionSampler : register(s0);


And that’s it for step 3, as we now have the value stored in the occlusion map for our ray.

For step 4, calculating the actual distance along the ray to the first shadow-casting pixel, we have two stages to go through. The first is to convert the value from the occlusion map to pixel/screen coordinates. Recall from the last part that we made this conversion by dividing the distance of the pixel from the light by the diagonal length of the screen, in order to get the values into the 0 – 1 range. So to get the distance back we multiply the sampled value by the diagonal length of the screen, like so:


occlusionDist.xyz = mul(occlusionDist.xyz, DiagonalLength);


Note that we use the .xyz suffix as we don’t want to multiply the alpha component by the DiagonalLength, since this wouldn’t make a lot of sense!

Once again we’ll need a parameter for DiagonalLength for this to work:


float DiagonalLength;



We haven't quite completed step 4 yet as, if you recall, there are actually two distances encoded in each cell of our occlusion map: one in the r channel for rays to the left of the light, and one in the g channel for rays to the right of the light. Clearly we are only interested in one of these. So how do we determine which? Simple: we determine whether the pixel we're lighting is to the left or the right of the light, and use the outcome to select the right channel of our occlusionDist vector:


float dist = (screenCoords.x < LightPos.x) ? occlusionDist.r : occlusionDist.g;


And we’re done with step 4!


Step 5 is going to need unpacking a bit more before we can determine what shader code we need to add. Assuming that there are no shadow casters in the scene, how do we go about deciding what light value to give our pixel? There are several factors.

Each light has a radius determined by the developer. Clearly any pixel outside of that radius won’t receive any light at all. Inside the radius there are several ways of deciding how the light should fall-off as it gets further away from the light source. The simplest way would be to simply linearly decrease the light from maximum power at the light source itself, down to 0 at the edge of the radius. For a moment let’s assume that this is the method we’ve chosen. How would we do this?

For our current pixel we already know how far it is from the light source. Assuming that it is within the radius then this distance lies on a range between 0 – r, where r is the radius of the light in pixels. However, we want to transform this to the range 1 – 0, crucially where 1 corresponds to 0 pixels from the light and 0 corresponds to r pixels from the light. We also want the value to decrease linearly.

The first step to solve this puzzle is to transform the distance from the light to the range 0 – 1 (rather than 1 – 0). Once we have that we can simply take our answer away from 1 to transform this new value to the range 1 – 0. This is a much simpler problem, one that we have solved several times before. We simply divide the distance to the light by the radius, which transforms it to the range 0 – 1, with 0 being the light source and 1 being the edge of the radius. Or, in the form of a formula:

lighting = 1 - (pixDist / Radius)

However, if we limit the developer to only having lights that decrease linearly we won't be able to represent lights that are more or less powerful. What if the developer wants a powerful spot light? Or a dimmer light that gets lighter and darker? To handle these situations, we allow the developer to specify the power of the light. Given that we are also allowing them to decide on the light radius, this means that they can decide how much of the area covered by the light will receive maximum power, and how quickly it will fall off to nothing. If the developer specifies a high enough power there would be no fall-off at all and they would have a very stark, bright light. Alternatively, if they choose a power value less than 1 they can 'dim' the light so that it is darker at the light source and gradually fades to nothing.
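To make the effect of the power value concrete, here is the fall-off curve as a Python sketch (the function name is invented, and the explicit clamp is an assumption: in the shader itself the value is clamped when it is written to a standard Color render target):

```python
def point_light_falloff(pix_dist, radius, light_pow):
    """Linear fall-off from the light, scaled by the developer-chosen
    power, clamped to the displayable 0-1 range."""
    return min(max((1 - pix_dist / radius) * light_pow, 0.0), 1.0)
```

With a power of 4, every pixel closer than three-quarters of the radius receives full light; with a power of 0.5 the light never exceeds half brightness, even at its centre.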

There are many other, more realistic ways that we could have used to decide how fast the light falls off. However, by allowing the developer to change the radius, light power, and light color (see later in this post) the developer can simulate most kinds of light that they would want to represent, and we still have a relatively simple shader. If you’re implementing your own system and want to go for more realism then feel free to experiment with other methods.

So, what does this look like in code?:


float3 lighting = (1 - (pixDist / Radius)) * LightPow;


There are a couple of new parameters here to specify, so we’ll need to add the following to our parameters at the top of the file:


float Radius;

float LightPow;


Now I imagine there are a few people wondering how exactly this handles pixels outside the light's radius, and you're right, it doesn't. We actually deal with those pixels as part of Step 6, which we will now move on to, since we're done with Step 5.


The final step for our shader to accomplish is actually pretty straight forward. We simply test whether the pixel we’re drawing to is closer or further away from the light than the value we calculated in step 4. If it’s closer, return the value we calculated in step 5, otherwise return (0, 0, 0, 1). Ok, so I lied, it’s not quite that simple.

We also need to determine whether the pixel is inside the light radius. However, it turns out this is rather easy to combine with testing against the value from step 4. If the pixel is closer than the value from step 4, but further than the radius, we want to return (0, 0, 0, 1). If it's closer than the radius, but further than the step 4 value, then we also want to return (0, 0, 0, 1). In other words, we only return the lighting value from step 5 if the distance from the pixel to the light is less than the minimum of the light radius and the step 4 value. Or, in a formula, we return lighting only if the following is true:

pixDist < min(Radius, dist)

Where dist is the value we calculated in step 4. There’s one more element we need to add before we write the code though. If we leave this as it is then tiny bits of shadow may be visible along the edges of the shadow casting objects nearest to the light. This is because we deem any partially transparent pixels to cast a shadow, but of course the shadow that is ‘underneath’ these pixels will be partially visible. To counteract this, we have a bias value (which can be tinkered with by the developer), which causes the shadow to start slightly further away from the light. This ensures that the start of the shadow is hidden underneath the object that is casting it. So our final comparison becomes:

pixDist - Bias < min(Radius, dist)

Or in code:


if (pixDist - Bias < min(Radius, dist))
    return float4(lighting, 1);
return float4(0, 0, 0, 1);


Clearly we need to add Bias to our parameters:


float Bias;


And we’re done with our shader! Now on to the C# code (and don’t worry, I haven’t forgotten about the light color I mentioned earlier).

Open up the LightRenderer class and add the following field at the top of the class:


public float lightBias = -1f;


We want our developer to set the lightBias when they initialize the system, so by setting the value to -1 it should create some horrible visual effects with shadows poking out from under sprites to remind them! (I know, I know, terrible API design, but it served as a reminder for me, feel free to change it).

Next in PrepareResources() add the following:






And then add the following to CreateLightMap(PointLight pLight) after graphics.GraphicsDevice.Clear(Color.Black):






And we're done! There's one more point I promised to address earlier in the post. I mentioned that we also allow the developer to tune the light's color. That actually happens in the AccumulateLightMaps() method, which is called after we generate each light map to add the results to our final light map texture. When we do this we pass in light.color as the Color parameter to spriteBatch.Draw. This has the effect of tinting our light map: instead of ranging from pure white at maximum to black at minimum, the maximum value for the light map becomes light.color. In this way, if the developer wants to dim a light without altering the fall-off, they can simply reduce light.color's magnitude by multiplying it by a value between 0 and 1.


As usual I’ve uploaded the solution so far on the codeplex site for you to check your code. I’ve also corrected the version I uploaded at the end of the last part, as I managed to miss out some of the code in the original version!


Part 5 solution


That’s it for now. Next time we’ll be looking at how we process the lightMap we get as a result of our pointLight shader to give us some pleasantly soft shadows.


Until then!



In the last part of this series we outlined the algorithm that our lighting system will be using, as well as writing some of the code for our LightRenderer class. If you haven't already done so, you'll need to go back and work through that part, otherwise I'm afraid this tutorial won't make a lot of sense!

There are still some fairly big gaps to fill in our system, namely the shaders that will be doing all of the work, as well as some of the code that supports these shaders. Over the next few parts of the series we’ll be focussing on writing these shaders, and adding them to our system.

To start with we will be looking at PointLights.

PointLights and SpotLights, whilst similar, require different shaders to unwrap their rays into the columns of our render target 'unwrapTarget'. This might seem odd at first. Surely a PointLight is just a SpotLight with an arc of 360 degrees?

Well, yes, and we could have written our shaders that way, but it would have made our lives (and the lives of any other programmer using our system) rather complicated. In fact, it turns out that because PointLights always light a full 360 degrees around them, their shaders are rather simpler than those of SpotLights. Let's have a look at why this is.


The PointLight ‘Unwrap’ algorithm

With SpotLights we have to determine for each pixel whether it is inside or outside the arc of the light, as those outside of this arc won’t lie on any of the light’s rays. For PointLights we don’t need to do this, as every pixel is within the arc of the light, and so every pixel will map onto one of the light’s rays.

Let's outline the steps we need to take in order to unwrap our rays. Remember: the graphics processor effectively iterates over each pixel of the surface we are drawing to, i.e. the unwrapTarget. Recall that for any given pixel on our unwrapTarget, the 'column' of pixels it is in (i.e. its x coordinate) represents a ray emanating from the light, and the 'row' (y coordinate) represents how far along that ray the pixel is. With this in mind, the steps are:

1) Determine which ray on the Shadow casters texture corresponds to the current pixel on the unwrapTarget.

2) Calculate the normal of that ray (i.e. vector pointing from the light along the ray with length 1).

3) Determine the length of that ray (i.e. how long it would appear to be on the shadow casters texture).

4) Use the current pixel’s y coordinate to determine how far along that ray it lies (and so how far from the light it is).

5) Multiply the normal from step 2 by the result in step 4.

6) Add the coordinates of the Light to those found in Step 5, convert the result to texture coordinates and sample that pixel from the shadow casters texture.

7) If the sampled pixel does not cast a shadow, then store the value ‘1’ in the current pixel of the unwrapTexture. Otherwise, store the distance of the sampled pixel from the light (divided by the diagonal distance of the screen to scale it to the range 0 – 1).

Essentially we want to know what pixel on the shadow casters texture corresponds to the current pixel on the unwrapTarget, and then, depending on whether or not that pixel casts a shadow, we store a distance from the light to that pixel. Hopefully all will become clear as we write the shader, and I'll throw in some illustrations which should shed more light on the issue (pun semi-intended).
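The seven steps can be strung together in Python pseudocode before we write any HLSL (all names invented; the two callables stand in for pieces of logic the shader builds up over the rest of this part):

```python
import math

def unwrap_pixel(u, v, light_pos, diagonal_len, ray_length, casts_shadow):
    """One pixel of the unwrap target, for the left-of-light half."""
    theta = u * math.pi                          # step 1: which ray?
    norm = (-math.sin(theta), math.cos(theta))   # step 2: the ray's normal
    length = ray_length(norm, light_pos)         # step 3: the ray's length
    dist = v * length                            # step 4: distance along it
    # steps 5-6: the point on the shadow casters texture to sample
    sample = (light_pos[0] + norm[0] * dist,
              light_pos[1] + norm[1] * dist)
    # step 7: 1 for 'no shadow caster here', else the scaled distance
    return 1.0 if not casts_shadow(sample) else dist / diagonal_len
```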


The Unwrap Shader

Open up your solution from Part 3 and add a new effect file to the Effects folder in the content project that we created. Name this file Unwrap.fx.

Delete the contents of the file that XNA has kindly added for us, we'll be writing our shader from scratch. If you're not familiar with shaders I suggest you go back and work through Part 2, or else try one of the tutorial series that I linked to at the bottom of that post.

First up, let's create a stub for the PixelShader we'll be writing:


float4 PixelShaderFunction(float2 texCoord : TEXCOORD0) : COLOR0
{
}



And while we’re here, we’ll add the technique:


technique Technique1
{
    pass Pass1
    {
        PixelShader = compile ps_2_0 PixelShaderFunction();
    }
}


Now, before we continue, recall that we are intending to store the left and right halves of the screen in two different channels of the texture. This means that effectively each column of the unwrapTarget corresponds to two rays: one for the part of the target to the left of the light (which we represent in the red channel of the target), and one for the part of the target to the right of the light (which we represent in the green channel).

So for each step of our algorithm we’ll need two sections of code, one for each of the two points on the shadow caster texture that our pixel maps to.

The first step in our shader is to determine which ray (or rather rays) our pixel lies on. To do this we’ll need some trigonometry. First we need to decide on which line should represent the angle zero. The obvious choices are either directly up or down. Up is the normal choice, however in normal geometry ‘up’ is the direction of the positive y axis.

In the case of texture coordinates, the positive y axis points down, and so we shall choose this as our zero line.

Next we need to choose which direction (clockwise or anti-clockwise) is the positive direction. Normally this direction is clockwise, however, in our case, since we are dealing with the two sections of the texture separately, we can choose both directions to be positive. This will simplify some calculations we need to make down the line.

In order to determine what the angle of our ray is, we need to convert its x texture coordinate to an angle between zero (our line pointing straight down) and 180 degrees (the angle of the line pointing straight up). However, working in degrees isn't particularly useful in trigonometry, as you may know. Instead we work in radians, in which case the line pointing directly upward from the light has the angle PI.

So we need to map our x texture coordinate (between 0 and 1), to our angle range of 0 – PI. To do this we simply need to multiply the x texture coordinate by PI:

float rayAngle = texCoord.x * PI;

For this to work we first need to define PI. Add the following line at the top of the file:

#define PI 3.14159265

Next, as described in step 2), we will calculate the ray normals (these are just vectors pointing from the light along the rays with length 1):

float sinTheta, cosTheta;

sincos(rayAngle, sinTheta, cosTheta);

float2 norm1 = float2(-sinTheta, cosTheta);

float2 norm2 = float2(sinTheta, cosTheta);

The intrinsic function sincos() takes an angle, and two floats, and stores sin of the angle in the first float, and cos of the angle in the second. Then we use the fact that, for angles that increase in a clockwise direction, the normal at an angle theta is given by (-sin(theta), cos(theta)).

Note for mathematicians: normally the normal is (+sin(theta), cos(theta)) for angles that increase clockwise. However, this assumes that the axes have the positive x direction at 90 degrees clockwise from the positive y direction, similar to the normal way of drawing axes for a graph. Texture coordinates are the opposite of this (positive x is 90 degrees anti-clockwise of positive y), so the sign of the x component in our normals is reversed.

To get the normal for the ray on the other side of the light, we just need to change the sign of the x coordinate. This is because essentially the coordinate system for the other side of the light is a reflection of the normal coordinate system, reflected in the line that passes vertically through our light. I’ve drawn a diagram below to illustrate:
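The mirroring can also be checked numerically (Python; at theta = PI/2 the two rays point directly left and directly right of the light):

```python
import math

def ray_normals(theta):
    """The pair of unit vectors at angle theta either side of the
    'straight down' zero line, in texture space (y increases downwards)."""
    s, c = math.sin(theta), math.cos(theta)
    return (-s, c), (s, c)   # left-of-light ray, right-of-light ray

left, right = ray_normals(math.pi / 2)
```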



Next we need to calculate the length of the rays. This is actually quite complex, so we will start by considering only those rays to the left of the light, and will then extend the technique to cover those to the right of the light as well.

So, how do we determine the length of a given ray? Well, the length of the ray is determined by the distance between the light and whichever edge of the texture the ray hits. So first of all we’ll need to know the coordinates of our light. The only way we’re going to get that is through a parameter, so we’ll add the following at the top of our shader file:

float2 LightPos;

Next we need to determine which edge the ray will hit. This is a bit trickier than it sounds, and took me a bit of time to figure out. My first instinct was to calculate the angle between the corners and the light and compare it to the angle of the ray, but that involves inverse trigonometry, which is expensive, and once that’s done we would still need to calculate where along that edge the ray hit.

The alternative is to calculate the point at which the ray intersects each of the edges of the rectangle; whichever point of intersection is closest to the light is the one we want. So how do we do this?

Well, let’s start with the top edge of the rectangle. When the ray intersects this edge, the y coordinate of that point will be 0 (as the y coordinate of the top edge of the rectangle is 0). We also know that the point lies somewhere on the ray. Any point along this line can be described as some multiple of the normal we calculated above, added to the position of the light. In other words a point that is say distance 5 away from the light on the ray with normal norm1 would have the following coordinates:

(5 * norm1.x + LightPos.x, 5 * norm1.y + LightPos.y)

However, we know that norm1 is (-sinTheta, cosTheta), from above, so a point that is distance d from the light would have the following coordinates:

(-d * sinTheta + LightPos.x, d * cosTheta + LightPos.y)

Now, we know from above that at the point our ray intersects the top edge of the rectangle, the y coordinate is 0, which means that d * cosTheta + LightPos.y = 0.

We can rearrange this to get d = -LightPos.y / cosTheta, which tells us how far from the light the point of intersection between the ray and the top edge of the texture is!

If we follow the same procedure for the other two possible edges (not 3, as a ray on the left of the light can never hit the right hand edge of the texture), we find that the distance to the left edge is:

d = LightPos.x / sinTheta;

and the distance to the bottom edge is:

d = (TextureHeight - LightPos.y) / cosTheta;

Where TextureHeight is the height of the texture. As an aside, we’ll need to add TextureHeight as a parameter at the top of the file:

float TextureHeight;

So now we simply need to choose the smallest positive distance of these 3 distances, and we’ll have the length of our ray! Why the smallest positive distance? Well, let’s imagine we have a ray that hits the top of the screen. If you extend this ray on the other side of the light, it will also hit the bottom of the screen. However, our equations above will give us the distance as a negative, because it is in the opposite direction to the normal vector. Clearly we want to ignore this result, so we only choose from the positive results.
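The same smallest-positive-distance logic can be sanity-checked on the CPU. Here is a minimal Python sketch (the light position and texture size are arbitrary example values, and it filters out negative distances directly rather than using the sentinel trick the shader uses):

```python
import math

def ray_edge_distance(theta, light_x, light_y, width, height):
    """Distance from the light to the first texture edge hit by a
    left-hand ray with normal (-sin(theta), cos(theta))."""
    sin_t, cos_t = math.sin(theta), math.cos(theta)
    # Distances along the ray to the top, left, and bottom edges.
    hits = [-light_y / cos_t, light_x / sin_t, (height - light_y) / cos_t]
    # Negative distances point the wrong way along the ray, so keep
    # the smallest positive one.
    return min(d for d in hits if d > 0)

# Light at (320, 180) in a 640 x 360 texture: a ray angled up and to
# the left (theta = 3*pi/4) hits the top edge 180 * sqrt(2) away.
print(ray_edge_distance(3 * math.pi / 4, 320, 180, 640, 360))
```
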

What does this look like in code? Something like this (but don’t copy it down just yet!):


float LightDist;

float topHit = -LightPos.y / cosTheta;

float leftHit = LightPos.x / sinTheta;

float bottomHit = (TextureHeight - LightPos.y) / cosTheta;

topHit = (topHit < 0) ? 2 * TextureWidth : topHit;

leftHit = (leftHit < 0) ? 2 * TextureWidth : leftHit;

bottomHit = (bottomHit < 0) ? 2 * TextureWidth : bottomHit;

LightDist = min(topHit, min(leftHit, bottomHit));


You may be wondering why we’re setting each of the ‘Hit’ values to 2 * TextureWidth if they are less than 0. This is because we need them to be positive (otherwise min would return them instead of the smallest positive value), but we need to ensure it is longer than any of the positive values, hence we choose a value for it longer than any ray can possibly be. We’ll need to add TextureWidth as a parameter while we’re here:

float TextureWidth;

However, each of those ternary operators (?: operators) is a ‘branching’ instruction, which are quite expensive in shader programming. So, as a bit of an optimisation (this actually should result in fewer instructions), we change this code to the following (note, still not final code!):


float LightDist;

float3 hit = float3(-LightPos.y / cosTheta, LightPos.x / sinTheta, (TextureHeight - LightPos.y) / cosTheta);

hit = (hit < 0) ? 2 * TextureWidth : hit;

LightDist = min(hit.x, min(hit.y, hit.z));


The line where we use the ternary operator on hit actually performs the same operation on all 3 components at once, which is exactly what we need. However we can’t quite use this code yet, as we haven’t yet considered the other side of the light.

Now we get to see where choosing to have the angles increase positively in both directions from the zero line will benefit us. If you look back at the equations for the distance along the ray to the top and bottom of the texture, you’ll see that they only involve cosTheta. This in turn is because they only involve the y component of the normal.

Now, if you look at the equations for our two normals, you will see that the y coordinate is the same for both, meaning that the distance along the ray to the top and bottom edges of the texture will be the same for both sides of the light. So, in order to incorporate the rays on both sides of our light into our code, we only need to calculate the distance to one more edge. In this case, using the same method as above, the equation will be:

d = (TextureWidth - LightPos.x) / sinTheta

Before we add that into our code, let’s consider what will happen to LightDist. At the moment we only need a single float value, since we’re only measuring a single distance. However, the distance to the nearest edge could easily differ between the rays on either side of the light (e.g. the light is vertically centred but very close to one side edge, and so very far from the other). So we will need to record two distances, which we will do by converting LightDist to a float2. We’ll also need to take this into account when it comes to calculating the value to store in LightDist.

So lets have a look at the new code (this will actually be the final version of this bit of code this time!):


float2 LightDist;


float4 hit = float4(-LightPos.y / cosTheta, LightPos.x / sinTheta, (TextureHeight - LightPos.y) / cosTheta, (TextureWidth - LightPos.x) / sinTheta);

hit = (hit < 0) ? 2 * TextureWidth : hit;

LightDist = min(hit.wy, min(hit.x, hit.z));


The main thing we need to explain is the line where we assign a value to LightDist. Essentially we need to end up with LightDist.x holding the minimum of the distances to the top, bottom, and right hand side of the screen, which are held in hit.x, hit.z, and hit.w respectively. LightDist.y needs to end up holding the minimum of the distances to the top, bottom, and left hand side, i.e. hit.x, hit.z, and hit.y.

Since both LightDist.x and LightDist.y need to know the minimum between hit.x and hit.z, we do that inside the nested min(), and then in the outer min() we find the minimum of that result with each of the right and left distance respectively.

We have one more issue to deal with before we can move on to the next part of the shader. At certain angles either sinTheta or cosTheta may have the value zero. If that happens we will have a problem, as we will be dividing by zero; in some environments that would crash, and in shaders it will give us very strange results. Either way we don’t want to do it, so we’ll need to add some code to handle these occasions. Fortunately, it’s very easy to compute the distances manually in these situations. If cosTheta is zero it means our rays are pointing directly right and left respectively, which means that the ray lengths are just TextureWidth - LightPos.x and LightPos.x.

If sinTheta is zero then either both rays are pointing up or both are pointing down. The way to determine which is to check cosTheta: if cosTheta is 1 then the rays are pointing down, and if it’s -1 then they are pointing up. We can use some clever maths to avoid using any if statements or ternary operators, and still get the right result. Let’s write the final version of the code to determine the length of the ray:


float2 LightDist;

if (cosTheta == 0)
{
    LightDist = float2(TextureWidth - LightPos.x, LightPos.x);
}
else if (sinTheta == 0)
{
    LightDist = abs((((cosTheta + 1) / 2.0) * TextureHeight) - LightPos.y);
}
else
{
    float4 hit = float4(-LightPos.y / cosTheta, LightPos.x / sinTheta, (TextureHeight - LightPos.y) / cosTheta, (TextureWidth - LightPos.x) / sinTheta);

    hit = (hit < 0) ? 2 * TextureWidth : hit;

    LightDist = min(hit.wy, min(hit.x, hit.z));
}
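To see why the branch-free abs() expression for the sinTheta == 0 case works, here is a quick Python check of the two possibilities (a sketch with example values, not shader code):

```python
def vertical_ray_length(cos_theta, light_y, texture_height):
    """With sinTheta == 0 both rays point straight down (cosTheta = 1)
    or straight up (cosTheta = -1).  (cosTheta + 1) / 2 maps these to
    1 and 0, selecting the bottom edge (y = TextureHeight) or the top
    edge (y = 0) without any branching."""
    return abs(((cos_theta + 1) / 2.0) * texture_height - light_y)

# Light 50 pixels from the top of a 360-pixel-high texture.
down = vertical_ray_length(1, 50, 360)   # ray to the bottom edge: 310
up = vertical_ray_length(-1, 50, 360)    # ray to the top edge: 50
```
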


Phew! You’ll be pleased to know that that’s the hardest part out of the way. Now all that’s left is for us to use the information we’ve gathered so far to determine which pixel on the texture we want to sample. So next up is Step 4) from above, use the y texture coordinate and the length of the ray to find out how far from the light our pixel is. Since the y coordinate is between 0 and 1 we can just multiply it by the length of the ray (as 1 would give us the intersection between the ray and the edge of the texture, and 0 would give us the light itself). So we can add the following line to our shader:

LightDist = mul(LightDist, texCoord.y);

Remember that LightDist is a float2, which means that this line actually calculates the distance from the light for both the pixel on the ray to the left of the light and the pixel on the ray to the right of the light.

Next we move on to Step 5). Here we simply multiply the normals by the correct components of LightDist to get the offset from the light (in horizontal and vertical pixels) of the pixels we want to sample:

norm1 = mul(norm1, LightDist.y);

norm2 = mul(norm2, LightDist.x);

Next up is Step 6). Here we add the position of the light to our offset to get the actual coordinates (in pixels) of the pixels we want to sample. We then convert them to texture coordinates and sample them from the shadow casters texture:

float4 sample1 = tex2D(shadowCastersSampler, float2((LightPos.x + norm1.x) / TextureWidth, (LightPos.y + norm1.y) / TextureHeight));

float4 sample2 = tex2D(shadowCastersSampler, float2((LightPos.x + norm2.x) / TextureWidth, (LightPos.y + norm2.y) / TextureHeight));

In order for this to work we’ll need to add a sampler for our shadow caster texture to the top of our shader file:

sampler shadowCastersSampler : register(s0);

So now finally we’ve sampled the pixels from our shadow caster texture so that we can determine whether or not they are shadow casting pixels. All that remains is for us to store our results for our two rays in the first two channels of our render target. As we described in Step 7), if the pixel we’ve sampled isn’t casting a shadow then we want to store 1 as the result, otherwise we want to store the distance of that pixel from the light divided by the diagonal length of the shadow caster texture:

return float4((sample1.a < 0.01) ? 1 : LightDist.x / DiagonalLength, (sample2.a < 0.01) ? 1 : LightDist.y / DiagonalLength, 0, 1);

Once again we’ll need to add a parameter to our shader file for this to work. This time it’s the variable DiagonalLength. We could of course calculate this in the shader from TextureHeight and TextureWidth. However, it would be expensive to calculate this for every single pixel on our render target when we can just calculate it once on the CPU and pass it in as a parameter:

float DiagonalLength;
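The CPU-side calculation behind DiagonalLength is just Pythagoras; in Python it would look something like this (example 1280 x 720 back buffer):

```python
import math

# Computed once on the CPU (e.g. at startup or on resolution change),
# then passed to the shader as the DiagonalLength parameter.
texture_width, texture_height = 1280, 720
diagonal_length = math.hypot(texture_width, texture_height)
print(diagonal_length)  # roughly 1468.6
```
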

And… we’re done with our shader! The final code file, fully completed, should look something like this:


#define PI 3.14159265

float2 LightPos;

float TextureWidth;

float TextureHeight;

float DiagonalLength;

sampler shadowCastersSampler : register(s0);

float4 PixelShaderFunction(float2 texCoord : TEXCOORD0) : COLOR0
{
    float sinTheta, cosTheta;

    sincos((texCoord.x * PI), sinTheta, cosTheta);

    float2 norm1 = float2(-sinTheta, cosTheta);

    float2 norm2 = float2(sinTheta, cosTheta);

    float2 LightDist;

    if (cosTheta == 0)
    {
        LightDist = float2(TextureWidth - LightPos.x, LightPos.x);
    }
    else if (sinTheta == 0)
    {
        LightDist = abs((((cosTheta + 1) / 2.0) * TextureHeight) - LightPos.y);
    }
    else
    {
        float4 hit = float4(-LightPos.y / cosTheta, LightPos.x / sinTheta, (TextureHeight - LightPos.y) / cosTheta, (TextureWidth - LightPos.x) / sinTheta);

        hit = (hit < 0) ? 2 * TextureWidth : hit;

        LightDist = min(hit.wy, min(hit.x, hit.z));
    }

    LightDist = mul(LightDist, texCoord.y);

    norm1 = mul(norm1, LightDist.y);

    norm2 = mul(norm2, LightDist.x);

    float4 sample1 = tex2D(shadowCastersSampler, float2((LightPos.x + norm1.x) / TextureWidth, (LightPos.y + norm1.y) / TextureHeight));

    float4 sample2 = tex2D(shadowCastersSampler, float2((LightPos.x + norm2.x) / TextureWidth, (LightPos.y + norm2.y) / TextureHeight));

    return float4((sample1.a < 0.01) ? 1 : LightDist.x / DiagonalLength, (sample2.a < 0.01) ? 1 : LightDist.y / DiagonalLength, 0, 1);
}

technique Technique1
{
    pass Pass1
    {
        PixelShader = compile ps_2_0 PixelShaderFunction();
    }
}


Before we finish we need to add some code to the project we started in the last part of the series to set the shader parameters.

Open up the LightRenderer class file. First of all add the following field to the top of the class:

Vector2 screenDims;

And then add the following to the bottom of Initialize():

screenDims = new Vector2(graphics.GraphicsDevice.Viewport.Width, graphics.GraphicsDevice.Viewport.Height);

Followed by the following at the top of PrepareResources():





And finally, add the following lines to UnwrapShadowCasters(PointLight pLight):





And that’s it! We now have a working Unwrap shader for point lights. This generates the input for our CreateOcclusionMap() method, which produces our occlusion map. I’ve uploaded the solution so far to CodePlex as usual, which you can find here (with some typos fixed from the one I uploaded for the last part in the series; the code in the tutorial itself should be fine, I just failed to copy it correctly!):


Part 4 solution


In the next part of the series we’ll see how we use the occlusion map to generate our light map for the scene. See you soon!




Welcome to the third instalment of this series. In the last part we explained how we will be using light maps to light our scene. In this part we will be talking about the algorithm we will be using to generate these light maps each frame.

This algorithm will not only deal with lighting pixels based on how far they are from the light, but also determine which pixels are in shadow based on the parts of the scene that the developer wishes to cast shadows. The details of the implementation will be left for later parts, but there are one or two pieces that fit best in this part, where I will go into slightly greater detail with some code.

After describing the algorithm I will go over the structure of the LightRenderer class which will be the main way that the developer interacts with the lighting system, and will go over the structure of the code the developer will need to write to use the system once it’s finished.

Let’s get going!


The algorithm

The idea for this algorithm was inspired by the lighting system described in @CatalinZima’s blog (which you can find here: http://www.catalinzima.com/2010/07/my-technique-for-the-shader-based-dynamic-2d-shadows/).

This algorithm deviates from @CatalinZima’s technique in a number of ways, and also adds extra features such as spot lights, and an unlimited light radius (although I’m sure @CatalinZima’s version could easily be adapted to add these features).

I haven’t done any performance testing between my system and @CatalinZima’s, however I have attempted to tackle the step in the original version which @CatalinZima highlighted as the bottleneck of the algorithm. I leave it to the reader to decide which to use (and I encourage you to find ways of improving either or both, and to publish them for the community to benefit from!).



The basic idea of the algorithm is to cast a large number of rays out from the light in the arc of the light (360 degrees on a point light), and find the first pixel that has a shadow casting object drawn on it for each ray, and keep a record of how far it is from the light.

Then for each pixel on the screen we determine which ray it falls closest to and test whether it is nearer to the light than the first shadow casting pixel on that ray. If it is, we determine how much light it gets based on factors such as how far from the light it is, and what color the light is. If it isn’t then it is in shadow, so we don’t light it at all.

We use this information to create a light map for each light, and then combine them to get an overall light map for the scene, and then finally we light the scene using the shader we wrote in part 2. Since we regenerate the light maps each frame if the lights, shadow casting objects, or background change at all this will be reflected in the lighting/ shadows.


Step 1: Caching the background and shadow casters

First of all, the developer will need to draw all of the sprites that make up the background that will be lit by our lighting system onto a single texture. We will make this as easy as possible for the developer by keeping the interface with our system nice and familiar. We also have to do the same for the shadow casting sprites. It’s important for our system that we have one single texture with the background sprites and one single texture for the shadow casting sprites, but we don’t want to force the developer to handle this themselves.


Step 2: Casting the rays

For those of you that are thinking ‘hang on, this whole method sounds like ray tracing’, well, it is, sort of. However the way we’ll be approaching this means that the expensive bit of ray tracing, finding the point at which the ray intersects the scenery, can be done for all rays simultaneously using a single draw call (we shall come to that later).

The first stage in casting the rays is clearly borrowed from @CatalinZima’s method. As illustrated below, we transform the scene so that all of the rays are aligned linearly (in our case vertically):




Now, when we ‘unwrap’ the full set of rays into one channel of a texture, the number of rays we can cast is limited by the horizontal resolution of the texture. Let me explain. Let’s revisit the first image above, this time labelling the rays:



Now we unwrap them so that they are all vertical, keeping the labelling:



The maximum number of rays we can cast is limited by the number of ‘columns’ of pixels in our unwrapped texture, i.e. the horizontal resolution of the texture. However, as we discussed above, all we actually care about is how far from the light the first opaque pixel is for each ray.

Let’s take a step back and unpack that statement. There are two bits of information we need: 1) is the pixel opaque? If it is, we want to know 2) how far from the light it is.

Based on this we can actually encode the information we want for each pixel into a single float as follows:

float result = (sample.a < 0.01) ? 1 : LightDistance;


If the pixel is opaque, store its distance from the light (we’ll discuss methods for transforming this into the range 0 – 1 in a moment). Otherwise set it to 1 (the maximum distance from the light).

Then all we need to do is find the minimum value for that column to get the distance of the first opaque pixel from the light (as all the non-opaque pixels will automatically have values further away than the opaque ones).

So by doing this we can encode the information from each ray into a single texture channel. In a standard texture there are 4 channels, so we could increase the maximum number of rays we can cast to 4 times the horizontal resolution of the texture we’re unwrapping to.
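The encode-then-minimise idea can be simulated on the CPU, using a toy 4 x 4 alpha grid in place of one channel of the unwrapped texture (a Python sketch, purely illustrative):

```python
def encode(alpha, distance):
    # Transparent pixels get the maximum value 1; opaque pixels keep
    # their (0-1 scaled) distance from the light.
    return 1.0 if alpha < 0.01 else distance

# Rows are distances from the light (0.25 to 1.0), columns are rays.
alphas = [
    [0.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [1.0, 0.0, 0.0, 1.0],
    [1.0, 1.0, 0.0, 1.0],
]
encoded = [[encode(alphas[r][c], (r + 1) / 4.0) for c in range(4)]
           for r in range(4)]

# The occlusion value for each ray is just the minimum of its column:
# the distance to the first opaque pixel along that ray.
occlusion = [min(encoded[r][c] for r in range(4)) for c in range(4)]
print(occlusion)  # [0.75, 0.5, 1.0, 0.75] - ray 2 is never blocked
```
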

In reality we won’t use all 4 channels, for reasons I’ll come onto later. In fact we use 2 channels. We unwrap all of the rays to the left of the light to the red channel of the texture and all of the rays to the right of the light to the green channel of the texture. To see what this looks like, the following image is the unwrapped texture of the image above:




A bit psychedelic eh?

As I mentioned above, we needed to transform the distances from the light so that they sit in the range 0 – 1 (otherwise they will be clamped to the 0 – 1 range by the graphics processor). There are a few ways we could have done this.

I chose to do this by simply taking a value guaranteed to be longer than the distance from the light to any pixel on the screen (the diagonal length of the screen), and dividing by that. Some information is lost, as most of the time the pixel distances will only be represented by a small number of the values between 0 and 1, however the final quality seemed good enough.

The most accurate would have been to divide the distance by the length of each ray. I discounted this as it would have meant calculating the length of each ray twice, once when unwrapping the rays, and once when using the value to retrieve the distance again when we decide which parts of the ray are in shadow when creating the light map.

A cheap compromise would have been to calculate the longest distance of the light to the furthest corner of the screen on each side (since we store each side of the screen separately) and divide by that. In the worst case it would be the same as our method, but in general would theoretically provide better quality. Were I to write the system again I would probably use this method, as these distances would only need to be calculated once per frame on the CPU.

Edit: Another method, which has only occurred to me since writing this post, would be to divide by the ‘radius’ or maximum reach of the light, as any values beyond this would be in shadow anyway. This will be something I will look into in the future.

There is one more issue we need to deal with when unwrapping the rays. If we were to just rotate the rays as they are, then we would have lots of columns of different lengths.

There were 3 possible solutions to this.

The first would be to make our texture resolution big enough to hold the longest ray. Since this length will change as we move the light (and we can’t create a new texture each frame without incurring the wrath of the garbage collector on the Xbox) this would need to be equal to the longest value a ray can be, i.e. the diagonal length of the screen. For a 1280 x 720 screen that would mean the texture would need to be 1468 pixels high.

The second option would be to scale all of the rays down so that the longest ray fits the texture. So if we had a square texture of say 1280 x 1280, then if any ray was longer than 1280 pixels, all of the rays would be compressed to fit into the texture.

The third option is to scale each ray individually so that it fits into the texture. That way only rays longer than 1280 pixels would be compressed, while shorter rays would lose no information. This is the option I went for, as the first would likely take up too much texture memory (particularly at higher resolutions), and the second seemed to unnecessarily penalise shorter rays just because there’s a longer ray in the scene.

So by this point we would have a texture for the current light with all of the rays unwrapped, with each pixel holding a distance from the light, scaled down to the range 0 – 1 by dividing by the diagonal length of the screen.


Step 3) Find the nearest shadow caster

The next step is to find the distance along each ray to the first shadow casting pixel. As discussed above, because of the way we’ve encoded our values in the texture all we need to do is find the minimum value in each column. @CatalinZima uses a folding technique and a special shader to find the minimum, which he states is probably the bottleneck of his technique.

We will take a different approach. Ideally what we want to end up with is a texture 1 pixel high with the same width as our unwrapped texture, with each pixel holding the minimum value from the corresponding columns in the unwrapped texture, illustrated below:



We will call this our occlusion map, as it holds, for each ray, the distance at which pixels start to become occluded.

In order to do this, we must look at each ‘row’ of the unwrapped texture at least once and compare it to the value already in our occlusion map. If the value is less than the one already stored in the occlusion map, then we want to write it to the occlusion map; otherwise we discard it. Unfortunately there isn’t (to my knowledge) an easy way of doing this with shaders in a single pass.

However, there is a way we can compare the value we are trying to write to a surface to the value already there, and have the final value depend on some function of the two: Blending.


Aside: Blending and BlendStates

You’ve probably already come across blending in XNA. The three types most people come across are Opaque Blending, Alpha Blending and Additive Blending.

Opaque Blending is probably what you used when you started with XNA. Alpha Blending is what you use when you want to display partially transparent sprites, whereas Additive Blending is used for effects like fire, lightning, and neon glows. I’ll take a moment to use these to explain exactly what blending is, before I go on to explain how we use it to find the minimum distance in each column of our texture.

I’ll start with Opaque Blending.

Opaque Blending is very simple; in fact it’s not really even blending per se. When you draw to a surface (a texture, the screen, etc.), after the shader has determined the color of the pixel it’s working on (which we call the Source), it goes to output it to the surface at the correct position, but there is already a color there (the Destination).

Possibly we’ve already drawn a sprite that covers that pixel, or maybe it’s just the color from the GraphicsDevice.Clear() call that we always put at the beginning of Draw() in XNA.

The Graphics Processor has a choice: does it simply overwrite the Destination color with the Source color, or does it use it somehow to determine the final color that gets stored on the surface?

In Opaque Blending, it takes the first option, and just overwrites the Destination color with the Source color. This means that sprites drawn later that overlap with sprites drawn earlier will appear to be ‘in front’ of the earlier sprites.

Additive Blending is only slightly more complex. With Additive Blending the Graphics Processor takes the Source color and adds it to the Destination Color. This makes the image in this area seem brighter, as well as blending the colors together.

Alpha Blending is slightly more complex, and I won’t go into great detail here, you can read about it on this MSDN blog: http://blogs.msdn.com/b/etayrien/archive/2006/12/07/alpha-blending-part-1.aspx

In general though, it uses the alpha value of the Source Color to determine how much of the Destination Color to add to it. We can generalise the blending process to the following equation:

Final Color = BlendFunction(Source * SourceBlendFactor, Destination * DestinationBlendFactor)

Where BlendFunction, SourceBlendFactor, and DestinationBlendFactor can each be one of a set number of options.

E.g. for Additive blending we have:

FinalColor = Add(Source * 1, Destination * 1)


BlendFunction = Add, SourceBlendFactor = 1 & DestinationBlendFactor = 1.

And for Opaque blending we have:

Final Color = Add(Source * 1, Destination * 0)


Using the Min BlendFunction

For our purposes, we’re interested in a different Blend function: Min.

As the name suggests, Min finds the minimum of its two parameters, so the blend function we want is:

Final Color = Min(Source * 1, Destination * 1)
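These blend configurations can be mimicked per colour channel on the CPU. A Python sketch (illustrative values, ignoring the 0 – 1 clamping the hardware performs):

```python
def blend(source, dest, blend_function, src_factor, dst_factor):
    """Final = BlendFunction(Source * SrcFactor, Dest * DstFactor),
    applied to each colour channel independently."""
    return [blend_function(s * src_factor, d * dst_factor)
            for s, d in zip(source, dest)]

src, dst = [0.25, 0.5, 0.125], [0.25, 0.25, 0.75]

additive = blend(src, dst, lambda s, d: s + d, 1, 1)  # Add(S*1, D*1)
opaque = blend(src, dst, lambda s, d: s + d, 1, 0)    # Add(S*1, D*0)
minimum = blend(src, dst, min, 1, 1)                  # Min(S*1, D*1)
```
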

This way, if we can draw all of the ‘rows’ of the unwrapped texture onto our occlusion map texture, we should be left with the minimum value of each column in the corresponding pixel of our occlusion map. What does this look like in code? Like this:




spriteBatch.Begin(SpriteSortMode.Deferred, collapseBlendState, sampleState, null, null);

for (int j = 0; j < fullScreen.Width; j++)
    spriteBatch.Draw(unwrapTarget, new Rectangle(0, 0, graphics.GraphicsDevice.Viewport.Width, 1), new Rectangle(0, j, graphics.GraphicsDevice.Viewport.Width, 1), Color.White);


During initialization we create the BlendState we want to use:


collapseBlendState = new BlendState();
collapseBlendState.ColorBlendFunction = BlendFunction.Min;

collapseBlendState.AlphaBlendFunction = BlendFunction.Min;

collapseBlendState.ColorSourceBlend = Blend.One;

collapseBlendState.ColorDestinationBlend = Blend.One;

collapseBlendState.AlphaSourceBlend = Blend.One;

collapseBlendState.AlphaDestinationBlend = Blend.One;


Then we begin a SpriteBatch using this BlendState:

spriteBatch.Begin(SpriteSortMode.Deferred, collapseBlendState, sampleState, null, null);

Then, assuming that we’ve set the graphics device to draw to our occlusion map (more on this later), we simply use a for loop to draw a sprite containing a single row of the unwrapped texture at a time onto the occlusion map:


for (int j = 0; j < fullScreen.Width; j++)
    spriteBatch.Draw(unwrapTarget, new Rectangle(0, 0, graphics.GraphicsDevice.Viewport.Width, 1), new Rectangle(0, j, graphics.GraphicsDevice.Viewport.Width, 1), Color.White);


And then we simply call End() on the SpriteBatch:


spriteBatch.End();

Because this is only using a single texture as a source, SpriteBatch should batch this up into a single draw call, meaning that we effectively find the first shadow casting pixel for each ray in one go! Huzzah!


Step 4) Creating the light map

In this step we need to work out how much light each pixel receives from our light and generate a light map, using our occlusion map to determine whether a pixel is in shadow or not.

We do this using a special shader, which, for each pixel:

1) Determines which ray that pixel lies on

2) Determines the pixel’s distance from the light

3) Looks up the distance from the light to the first shadow casting pixel

4) Compares the two distances to determine if the pixel is in shadow: if it is, color it black, if not continue

5) Based on how far it is from the light (and the angle it makes with the light, for spotlights) determine how much light it should get.

6) Multiply the light’s color by this value and color the pixel with the result

In reality we use a method which avoids the branching logic of an if statement in step 4, but this is the basic idea.
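Those steps can be modelled roughly on the CPU for a single pixel of a point light's light map (a Python sketch; the linear falloff and the `radius` parameter are illustrative assumptions, not the actual shader):

```python
import math

def light_map_value(pixel, light_pos, radius, occlusion_distance):
    """occlusion_distance is the distance from the light to the first
    shadow caster on this pixel's ray, as read from the occlusion map."""
    # Steps 1-2: distance from the light to this pixel.
    dist = math.hypot(pixel[0] - light_pos[0], pixel[1] - light_pos[1])
    # Steps 3-4: a pixel behind the first shadow caster is in shadow.
    if dist > occlusion_distance:
        return 0.0
    # Steps 5-6: attenuate with distance (illustrative linear falloff).
    return max(0.0, 1.0 - dist / radius)

lit = light_map_value((100, 0), (0, 0), 200, float("inf"))  # 0.5
shadowed = light_map_value((100, 0), (0, 0), 200, 50)       # 0.0
```
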


Step 5) Blurring the light map

As you may know (or can observe), shadows that are close to a light have very crisp edges, whereas shadows further from a light source are more indistinct.

In reality there are some fairly complex equations that govern this, but I again took a leaf from @CatalinZima’s book and used a radial blur to approximate this effect. If we had a lot of processing headroom in our game and wanted our shadows to be a bit more realistic we could do some research and attempt a more complex method to create these soft shadows.

In our case however we simply blur the light map for our light using two shaders, one that blurs the image horizontally, and the other vertically. The shader works by taking a number of samples from the light map around the current pixel, and averaging them to give a final color.

The more spread out these samples are, the more blurred the image. In our case we use the distance from the light of the current pixel to decide how spread out these samples should be, so that the light map gets more blurred the further from the light it is.
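A one-dimensional sketch of that idea in Python (the binomial weights and spread values are illustrative, not the shader's actual numbers):

```python
def blur_1d(values, centre, spread):
    """Weighted average of five samples around `centre`, `spread`
    pixels apart; samples nearer the centre get larger weights."""
    offsets = [-2, -1, 0, 1, 2]
    weights = [1, 4, 6, 4, 1]  # simple binomial weights
    total = 0.0
    for off, w in zip(offsets, weights):
        # Clamp sample positions to the ends of the row.
        i = min(max(centre + off * spread, 0), len(values) - 1)
        total += w * values[i]
    return total / sum(weights)

# A hard shadow edge: dark on the left, lit from index 3 onwards.
row = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0]

# For the pixel at index 1 (inside the shadow), a small spread leaves
# it almost black, while a larger spread (used further from the
# light) lets more of the lit region bleed in.
near_light = blur_1d(row, 1, 1)      # 0.0625
far_from_light = blur_1d(row, 1, 2)  # 0.3125
```
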


Step 6) Adding the light maps together

Once we’ve completed steps 1 – 4 with our light we can add the result to a texture representing our final light map. Because in the real world lights work additively (if you have two lights lighting the same point the resulting light reflected is the sum of the two lights) we can simply additively blend the light maps together into the final light map texture.


Step 7) Render the final scene

Finally we draw our background lit by the light map using the light blend shader we wrote in Part 2, followed by the shadow casters in the scene, and any foreground sprites that are not affected by the light system (e.g. a Heads Up Display).

We will expand on each of these stages as we get to them in the series, but that should give you an outline of the algorithm our light system uses.

For the rest of the post we will talk about the main classes that we will be writing that the developers using our lighting system will need to interact with, and how their game code will use the lighting system.


The LightRenderer class

Fire up Visual Studio and create a new Windows Game project, which we shall call “2DLightingSystem”. Visual Studio will create the default files with a namespace of ‘_2DLightingSystem’.

Once it’s been created we need to create 4 new classes.

The first 3 will just be stubs that we’ll flesh out later on in the series, so create 3 empty code files called “Light.cs”, “PointLight.cs”, and “SpotLight.cs”, and add the following code to each respectively:


using Microsoft.Xna.Framework;

namespace _2DLightingSystem
{
    public abstract class Light
    {
        public Vector2 Position;
        public float Power = 0f;
        public Color color;
        public float radius = 0f;

        public Light(Vector2 pos)
            : this(pos, 0f, Color.White)
        {
        }

        public Light(Vector2 pos, float power, Color color)
        {
            Position = pos;
            Power = power;
            this.color = color;
        }
    }
}

using Microsoft.Xna.Framework;

namespace _2DLightingSystem
{
    public class PointLight : Light
    {
        public PointLight(Vector2 pos)
            : base(pos, 0f, Color.White)
        {
        }

        public PointLight(Vector2 pos, float power, float radius)
            : base(pos, power, Color.White)
        {
            this.radius = radius;
        }

        public PointLight(Vector2 pos, Color color)
            : base(pos, 0f, color)
        {
        }

        public PointLight(Vector2 pos, float radius)
            : base(pos, 0f, Color.White)
        {
            this.radius = radius;
        }

        public PointLight(Vector2 pos, float power, float radius, Color color)
            : base(pos, power, color)
        {
            this.radius = radius;
        }
    }
}

using System;

using Microsoft.Xna.Framework;

namespace _2DLightingSystem
{
    public class SpotLight : Light
    {
        public Vector2 direction;
        public float innerAngle;
        public float outerAngle;

        public SpotLight(Vector2 pos, Vector2 dir, float inner, float outer, float power, float _radius, Color col)
            : base(pos, power, col)
        {
            direction = dir;
            innerAngle = inner;
            outerAngle = outer;
            radius = _radius;
        }

        public SpotLight(Vector2 pos, Vector2 dir, float angle, float power, Color col)
            : base(pos, power, col)
        {
            direction = dir;
            innerAngle = angle;
            outerAngle = angle;
        }

        public SpotLight(Vector2 pos, Vector2 dir, float angle, Color col)
            : base(pos, 0f, col)
        {
            direction = dir;
            innerAngle = outerAngle = angle;
        }

        public float GetAngleBias()
        {
            float diffAngle = (float)Math.Acos(Vector2.Dot(direction, Vector2.UnitY));
            if (float.IsNaN(diffAngle))
                diffAngle = (float)(((Math.Sign(-direction.Y) + 1) / 2f) * Math.PI);
            if (diffAngle - (outerAngle / 2f) < 0)
                return 0;
            return MathHelper.Pi * 2f;
        }
    }
}


Next up create an empty file called “LightRenderer.cs”, and add the following stub to it:


using System;

using System.Collections.Generic;

using Microsoft.Xna.Framework;

using Microsoft.Xna.Framework.Graphics;

using Microsoft.Xna.Framework.Content;

namespace _2DLightingSystem
{
    public class LightRenderer
    {
    }
}


We’ll need to keep hold of a reference to the GraphicsDeviceManager, so add the following to the top of the class:

public GraphicsDeviceManager graphics;

And create the following constructor:


public LightRenderer(GraphicsDeviceManager _graphics)
{
    graphics = _graphics;
}


The first thing we need to do is decide how our developer will interface with the system. Since this system is built on the assumption that the developer is familiar with XNA, they will likely also be familiar with SpriteBatch’s Begin(), Draw(), End() pattern, so we shall borrow from that.

As mentioned in Step 1 above, we will need the developer to draw all of their background sprites onto a single texture, and all of their shadow casting sprites onto another. In order to signal this to the developer, we will require them to use the following pattern:



BeginDrawBackground();

//Developer draws sprites with spritebatch

EndDrawBackground();

BeginDrawShadowCasters();

//Developer draws sprites with spritebatch

EndDrawShadowCasters();

So let’s create stubs for these methods in our LightRenderer class:


public void BeginDrawBackground()
{
}

public void EndDrawBackground()
{
}

public void BeginDrawShadowCasters()
{
}

public void EndDrawShadowCasters()
{
}


We’ll come back to what we need to do in these methods shortly.

After the developer has done this, we have almost everything we need to draw the scene, with the exception of the foreground, so we will let the developer make that happen with a single method call:

DrawLitScene();
Then the developer can continue to draw any sprites, such as a foreground, HUD etc as usual.

The final missing element for the developer is a way to add lights to the scene. To keep this simple we will just keep two public lists of the lights (one for point lights and one for spot lights), which the developer can manipulate as they wish. All we need to add is the following at the top of our LightRenderer class:

public List<SpotLight> spotLights;

public List<PointLight> pointLights;

And that’s it for the public interface to the LightRenderer class.

Next we’ll flesh out some of these methods a bit, leaving space for us to come back later in the series to add code that we’re not quite ready for yet.

We’ll start with BeginDrawBackground(). Before the developer can start drawing their background sprites, we need to make sure they’re drawing to our texture, and not the screen. Before we write the code for this, we’ll need a texture. More specifically, we need a special kind of texture that we can draw to, called a render target or, in XNA, a RenderTarget2D.

So add the following to the top of the class:

public RenderTarget2D backBufferCache;

And create an Initialize() method containing the following code:


public void Initialize()
{
    backBufferCache = new RenderTarget2D(graphics.GraphicsDevice, graphics.GraphicsDevice.Viewport.Width, graphics.GraphicsDevice.Viewport.Height);

    spotLights = new List<SpotLight>();
    pointLights = new List<PointLight>();
}


So now we have a render target, we need to make sure that when the developer starts drawing their background sprites the Graphics Processor draws them to our render target and not the screen.

We do this by adding the following at the start of BeginDrawBackground():

graphics.GraphicsDevice.SetRenderTarget(backBufferCache);

This code does exactly as you’d expect: it tells the Graphics Processor that from now on it should draw to our render target, and not the screen.

And we’re done with BeginDrawBackground().

For EndDrawBackground, there’s actually nothing we need to do. You could in fact omit it entirely. The reason I’ve left it in is because it demarcates the area that the developer should draw their background sprites and matches the API pattern they’re used to with XNA and SpriteBatch.

Next up is BeginDrawShadowCasters(). Once again we’ll need a RenderTarget2D for the developer to draw to, so add the following at the top of the class:

public RenderTarget2D midGroundTarget;

And the following to Initialize():

midGroundTarget = new RenderTarget2D(graphics.GraphicsDevice, graphics.GraphicsDevice.Viewport.Width, graphics.GraphicsDevice.Viewport.Height);

Then in the method we tell the GraphicsDevice to use our new RenderTarget2D.

Note: by setting another render target on the GraphicsDevice using SetRenderTarget() we are implicitly un-setting the current render target.

We also need to clear the render target. The developer shouldn’t clear the render target themselves, as they are effectively drawing the mid-ground of their scene, and clearing the target at this point would normally mean erasing their background (it would have a similar effect here).

However the render target does need to be cleared, as uninitialized bits of render target default to a lovely shade of purple. In our case we want to clear it to Color.Transparent, as we want the alpha values of all non-shadow-casting pixels to be 0. So we add the following to our BeginDrawShadowCasters() method:

graphics.GraphicsDevice.SetRenderTarget(midGroundTarget);
graphics.GraphicsDevice.Clear(Color.Transparent);
And we’re done with BeginDrawShadowCasters().


Once again, EndDrawShadowCasters() is empty, but again, we want to keep the familiar Begin…End pattern.


Now we move on to the heavy lifter of our system – DrawLitScene(). For now most of this method will just refer to method stubs, or else use place-holder comments, and we will flesh them out later in the series. For now though, the structure of the method is going to look like this:


public void DrawLitScene()
{
    //Error checking

    PrepareResources();

    for (int i = 0; i < spotLights.Count; i++)
    {
        //Spotlight specific calculations

        UnwrapShadowCasters(spotLights[i] /*,other params*/);

        CreateOcclusionMap();

        CreateLightMap(spotLights[i] /*,other params*/);

        BlurLightMaps(spotLights[i]);

        AccumulateLightMaps(spotLights[i]);
    }

    for (int i = 0; i < pointLights.Count; i++)
    {
        UnwrapShadowCasters(pointLights[i]);

        CreateOcclusionMap();

        CreateLightMap(pointLights[i]);

        BlurLightMaps(pointLights[i]);

        AccumulateLightMaps(pointLights[i]);
    }

    RenderFinalScene();
}

Let’s have a quick look at each part of this. First up is some error checking in case our developer has set some nonsense values for some of the parameters. In this series I haven’t bothered with throwing exceptions or returning error codes, but obviously you’d want to follow your normal error checking strategy in a game to be released.

Next up is PrepareResources(). This is mostly concerned with setting the parameters in our shaders that hold true for all lights in the scene, such as scene dimensions, shadow bias (we will discuss what that is in a later part) etc.

This could actually all be done outside of the rendering loop, using properties to update the Effect objects when the developer changes them, but we’re being slightly lazy setting them each frame.

Also in PrepareResources() we’ll need to set the first of a number of RenderTarget2D’s that we’ll be using throughout the method, so add the following to the beginning of the class:

public RenderTarget2D lightMap;

Then add this to Initialize():


lightMap = new RenderTarget2D(graphics.GraphicsDevice, graphics.GraphicsDevice.Viewport.Width, graphics.GraphicsDevice.Viewport.Height, false, SurfaceFormat.Color, DepthFormat.None, 1, RenderTargetUsage.PreserveContents);


That’s a slightly different RenderTarget2D constructor than we’re used to. In particular the last parameter sets the RenderTargetUsage for our render target. In our case we’re setting it to RenderTargetUsage.PreserveContents. This tells the Graphics Processor not to throw away the current contents of the render target when we set it as the active target (which it normally does by default). This is important because, as discussed above, we want to add the light maps of each of our lights in turn to the final light map.

Then create the PrepareResources() method like so:


private void PrepareResources()
{
    //Set effect parameters
}

At the beginning of the method we will eventually be setting some parameters in our shaders, however for now we’ll just leave the place-holder as above.

Next, add this code to the end of the method to set and clear our render target:

graphics.GraphicsDevice.SetRenderTarget(lightMap);
graphics.GraphicsDevice.Clear(Color.Black);
And that’s it for PrepareResources().


The next parts of our DrawLitScene() method are those actions that we need to do per light.

First of all we loop through our list of spot lights and calculate some values that we’ll need to pass in our shaders. We’ll cover these in the part of the series on spot lights.

The first call in our loop through the spot lights list is to UnwrapShadowCasters(), which takes the current light as a parameter along with the values that we will eventually be calculating at the start of the loop.

There are actually 2 different versions of UnwrapShadowCasters() that take different parameters. For now we can distinguish between them by the first parameter, as one takes a spot light and the other a point light. Also for now they will both hold the same contents, so you can create 2 stubs with the same code:


private void UnwrapShadowCasters(SpotLight sLight /*,other params*/)
{
    //More setting of effect parameters

    graphics.GraphicsDevice.SetRenderTarget(unwrapTarget);

    spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Opaque, SamplerState.PointClamp, null, null, unwrapSpotlight);
    spriteBatch.Draw(midGroundTarget, new Rectangle(0, 0, fullScreen.Width, fullScreen.Width), Color.White);
    spriteBatch.End();
}

private void UnwrapShadowCasters(PointLight pLight)
{
    //Set effect parameters

    graphics.GraphicsDevice.SetRenderTarget(unwrapTarget);

    spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Opaque, SamplerState.PointClamp, null, null, unwrap);
    spriteBatch.Draw(midGroundTarget, new Rectangle(0, 0, fullScreen.Width, fullScreen.Width), Color.White);
    spriteBatch.End();
}


Visual Studio will now complain about several of the objects we’ve referenced in that snippet, as most of them don’t exist yet! Let’s fix that.

First, add the following declarations at the top of the class:


public RenderTarget2D unwrapTarget;

public Effect unwrapSpotlight;

public Effect unwrap;

Rectangle fullScreen;

SpriteBatch spriteBatch;


followed by the following in Initialize():

unwrapTarget = new RenderTarget2D(graphics.GraphicsDevice, graphics.GraphicsDevice.Viewport.Width, graphics.GraphicsDevice.Viewport.Width, false, SurfaceFormat.HdrBlendable, DepthFormat.None);

fullScreen = new Rectangle(0, 0, graphics.GraphicsDevice.Viewport.Width, graphics.GraphicsDevice.Viewport.Height);


and then add a LoadContent() method with the following content:


public void LoadContent(ContentManager Content)
{
    spriteBatch = new SpriteBatch(graphics.GraphicsDevice);

    unwrap = Content.Load<Effect>(@"Effects\Unwrap");
    unwrapSpotlight = Content.Load<Effect>(@"Effects\UnwrapSpotlight");
}


First let’s look at the spot light version of UnwrapShadowCasters(). The unwrapSpotlight Effect is exactly what you’d expect: the shader that handles the ‘unwrapping’ of the rays that we discussed earlier. The fullScreen rectangle just caches the screen dimensions, as we use them many times throughout the class.

The unwrapTarget RenderTarget2D is where we will be storing the unwrapped rays so that we can use them in the next stage of the algorithm. This target is slightly different to the others so far. Rather than go into the reasons for this here, I will discuss them at the end of the post.

The same goes for the SamplerState parameter in spriteBatch.Begin(). For now, all you need to know is that because we are using a different type of RenderTarget2D, we need to sample it (i.e. pick out individual pixels) in a slightly different way.

Now let’s look at what will be in the point light version of the method.

As you can see, the only new object is the unwrap Effect. As you can probably guess, this is the equivalent unwrap shader for point lights that we shall be looking at later in the series.


Returning to DrawLitScene(), the next method that we indicated that we’d be calling is CreateOcclusionMap(). This method is the same for both spot lights and point lights, and doesn’t take any parameters.

As we discussed earlier once we’ve unwrapped our rays into a texture so that each ray is represented by a column of the texture, we use a special blend state to find the minimum value in each column. So first up, let’s create the method stub:


private void CreateOcclusionMap()
{
}

Next we need our RenderTarget2D, which needs to be the same size as a single row of the unwrap texture, i.e. the same width but only 1 pixel high. Let’s add a declaration for it at the top of the class:

public RenderTarget2D occlusionMap;

with the following in Initialize():

occlusionMap = new RenderTarget2D(graphics.GraphicsDevice, graphics.GraphicsDevice.Viewport.Width, 1, false, SurfaceFormat.HdrBlendable, DepthFormat.None);

Back in CreateOcclusionMap(), the first thing we need to do is set our occlusionMap texture as the active render target:

graphics.GraphicsDevice.SetRenderTarget(occlusionMap);
Next, let’s create our special BlendState. First let’s declare our BlendState at the top of the class:

public BlendState collapseBlendState;

I’ve named this the collapseBlendState because each column ‘collapses’ down to the minimum value of any pixel in that column. To create the blend state we need the following in Initialize():


collapseBlendState = new BlendState();
collapseBlendState.ColorBlendFunction = BlendFunction.Min;
collapseBlendState.AlphaBlendFunction = BlendFunction.Min;
collapseBlendState.ColorSourceBlend = Blend.One;
collapseBlendState.ColorDestinationBlend = Blend.One;
collapseBlendState.AlphaSourceBlend = Blend.One;
collapseBlendState.AlphaDestinationBlend = Blend.One;


As you can see, we have various fields in collapseBlendState. Above we discussed the following equation:

Final Color = BlendFunction(Source * SourceBlendFactor, Destination * DestinationBlendFactor)

Now, you might be slightly confused by the fact that instead of a single BlendFunction, we have ColorBlendFunction and AlphaBlendFunction. The reason for this is that we can specify different functions for the rgb, and the a components of the colors that we’re blending. In other words the actual equation is something like this:

FinalColor.rgb = ColorBlendFunction(Source.rgb * ColorSourceBlend, Destination.rgb * ColorDestinationBlend);

FinalColor.a = AlphaBlendFunction(Source.a * AlphaSourceBlend, Destination.a * AlphaDestinationBlend);

In our case, for simplicity, we will set alpha to behave in the same way as rgb. Also notice that our SourceBlendFactor has been split into ColorSourceBlend and AlphaSourceBlend, and that DestinationBlendFactor has been split into ColorDestinationBlend and AlphaDestinationBlend.

For us we want our equations to be:

FinalColor.rgb = Min(Source.rgb * 1, Destination.rgb * 1);

FinalColor.a = Min(Source.a * 1, Destination.a * 1);

So we set the values in our blend state accordingly.
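The effect of this blend state can be simulated in a few lines of Python (illustrative only — on the GPU this happens automatically for every pixel written to the target):

```python
# Simulation of the 'collapse' blend state: with BlendFunction.Min and
# both blend factors set to One, the GPU keeps the per-channel minimum
# of the incoming (source) pixel and the pixel already in the target.

def min_blend(source, destination):
    # source/destination are (r, g, b, a) tuples; all blend factors are 1.
    return tuple(min(s * 1.0, d * 1.0) for s, d in zip(source, destination))

dst = (1.0, 1.0, 1.0, 1.0)          # target starts at white
for src in [(0.9, 0.9, 0.9, 0.9),   # three rows blended in turn
            (0.3, 0.3, 0.3, 0.3),
            (0.7, 0.7, 0.7, 0.7)]:
    dst = min_blend(src, dst)
# dst is now (0.3, 0.3, 0.3, 0.3): the smallest value survives
```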

Returning to CreateOcclusionMap(), we now need to use our blend state with spritebatch to draw each row of the unwrap texture one after the other onto our render target:


spriteBatch.Begin(SpriteSortMode.Deferred, collapseBlendState, SamplerState.PointClamp, null, null);

for (int i = 0; i < fullScreen.Width; i++)
{
    spriteBatch.Draw(unwrapTarget, new Rectangle(0, 0, graphics.GraphicsDevice.Viewport.Width, 1), new Rectangle(0, i, graphics.GraphicsDevice.Viewport.Width, 1), Color.White);
}

spriteBatch.End();


Note that our unwrap texture was fullScreen.Width high as well as wide, which is why we loop up to fullScreen.Width. And that’s it for CreateOcclusionMap().
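The net result of that loop can be sketched in Python (not the XNA code — just the arithmetic it performs, with made-up values):

```python
# What CreateOcclusionMap() computes: drawing every row of the (square)
# unwrap texture onto a 1-pixel-high target with min blending leaves, in
# each column, the smallest value in that column - i.e. the distance to
# the nearest shadow caster along that ray.

def collapse(unwrap):
    # unwrap is a square 2D list; start the 1-high target at 'white'.
    width = len(unwrap[0])
    occlusion = [1.0] * width
    for row in unwrap:                       # one Draw() call per row
        occlusion = [min(o, v) for o, v in zip(occlusion, row)]
    return occlusion

unwrap = [
    [1.0, 0.5, 1.0],
    [0.2, 1.0, 1.0],
    [1.0, 0.8, 0.9],
]
occlusion_map = collapse(unwrap)
# occlusion_map is [0.2, 0.5, 0.9]: each column's minimum
```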

Next up is CreateLightMap(). Much like UnwrapShadowCasters(), there are two versions of this method, one for spot lights and one for point lights. Once again, the spot light version takes as parameters some of the values that we will be calculating at the beginning of the spotlight loop in DrawLitScene(). Let’s create the stubs for both versions of the method:


private void CreateLightMap(SpotLight sLight /*,other params*/)
{
}

private void CreateLightMap(PointLight pLight)
{
}


In fact, for the moment there will only be one difference between our two methods. The contents of the spotlight version of CreateLightMap() looks like this:


//Set params

graphics.GraphicsDevice.SetRenderTarget(postProcessTarget);

spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Opaque, SamplerState.PointClamp, null, null, spotLight);
spriteBatch.Draw(occlusionMap, fullScreen, Color.White);
spriteBatch.End();


This should all be very self explanatory by now, but as always, we need to create and initialize a few variables so that we don’t run into problems later. Add the following to the top of the class:

public RenderTarget2D postProcessTarget;
public Effect spotLight;

And then this to Initialize():

postProcessTarget = new RenderTarget2D(graphics.GraphicsDevice, graphics.GraphicsDevice.Viewport.Width, graphics.GraphicsDevice.Viewport.Height);

Followed by this to LoadContent():

spotLight = Content.Load<Effect>(@"Effects\SpotLight");

The render target is called postProcessTarget because it’s going to be acting as the source for the various processes that we will be performing to the light map after it’s been rendered, i.e. post-process.

The spotLight effect will use our occlusionMap to create the light map for this light. Similarly, the contents of the point light version will be the following:


//Set params

graphics.GraphicsDevice.SetRenderTarget(postProcessTarget);

spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Opaque, SamplerState.PointClamp, null, null, pointLight);
spriteBatch.Draw(occlusionMap, fullScreen, Color.White);
spriteBatch.End();


As you can see, the only difference is that we’re using the pointLight Effect instead of the spotLight Effect, as the two create different light maps (as you’d expect!). Before we move on we need to declare and initialize our pointLight Effect. Add the following to your other declarations:

public Effect pointLight;

And this line to LoadContent():

pointLight = Content.Load<Effect>(@"Effects\PointLight");

And we’re done for this part with CreateLightMap(). Obviously we’ll be coming back to these methods later in the series.

In DrawLitScene() once more, the next method we’ll call is BlurLightMaps(). There is actually only one version of this, and it takes a Light object as a parameter (recall both SpotLight and PointLight inherit from Light).

Our BlurLightMaps() method is going to look something like this:


private void BlurLightMaps(Light light)
{
    //Set some params

    graphics.GraphicsDevice.SetRenderTarget(horizontalBlurTarget);

    spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Opaque, null, null, null, horizontalBlur);
    spriteBatch.Draw(postProcessTarget, fullScreen, Color.White);
    spriteBatch.End();

    //Set some more params

    graphics.GraphicsDevice.SetRenderTarget(verticalBlurTarget);

    spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Opaque, null, null, null, verticalBlur);
    spriteBatch.Draw(horizontalBlurTarget, fullScreen, Color.White);
    spriteBatch.End();
}


As you can see we have two more render targets and two more effects to add declarations for:


public RenderTarget2D horizontalBlurTarget;
public RenderTarget2D verticalBlurTarget;

public Effect horizontalBlur;

public Effect verticalBlur;


along with code in Initialize():

horizontalBlurTarget = new RenderTarget2D(graphics.GraphicsDevice, graphics.GraphicsDevice.Viewport.Width, graphics.GraphicsDevice.Viewport.Height);
verticalBlurTarget = new RenderTarget2D(graphics.GraphicsDevice, graphics.GraphicsDevice.Viewport.Width, graphics.GraphicsDevice.Viewport.Height);

and code in LoadContent():

verticalBlur = Content.Load<Effect>(@"Effects\VerticalBlur");
horizontalBlur = Content.Load<Effect>(@"Effects\HorizontalBlur");

This should all be pretty much self-explanatory by now. As discussed above, we blur the lightMap first in one direction, and then the other.

One point to note is that we need two different render targets. The reason for this is that we need the results of the horizontal blur in order to do the vertical blur, and a texture can’t be set as the render target AND appear in the list of textures at the same time, so instead we have two separate render targets.

Back in DrawLitScene() and we’re onto the last method that we’ll be calling from within our two loops – AccumulateLightMaps().

Again there is only one version of this method. The idea of this method is to ‘add’ our light map onto a single, final light map which we will eventually use to light the scene. It will look something like this:


private void AccumulateLightMaps(Light light)
{
    graphics.GraphicsDevice.SetRenderTarget(lightMap);

    spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Additive, null, null, null);
    spriteBatch.Draw(verticalBlurTarget, fullScreen, light.color);
    spriteBatch.End();
}


As you can see, it’s remarkably simple. We already declared and initialized the render target when we cleared it in PrepareResources. The only part we need to explain is the use of light.color in spriteBatch.Draw(). As you can probably work out, by passing the color of the light to spriteBatch we are tinting the lightmap in the color of the light.

Since the light at each point needs to be added together to get the correct value, we can just use additive blending as described above, and blend the light maps for each light onto the final lightMap target.

We should note at this point that if we’re tight on memory we can easily reuse some of these render targets for different purposes at different points in our algorithm. For our purposes we’ll stick with one for each usage, just to keep things clear.

We return to DrawLitScene() one final time. After we have closed our second loop we have just one more method to call – RenderFinalScene().

By this point we have our final lightMap and so all that remains is to use our lightBlend shader from part 2 to render the final scene. If you read part 2 this should all look familiar, so I won’t dwell on it long. The contents of RenderFinalScene() look like this:


private void RenderFinalScene()
{
    graphics.GraphicsDevice.SetRenderTarget(null);

    graphics.GraphicsDevice.Textures[1] = lightMap;

    spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend, null, null, null, lightBlend);
    spriteBatch.Draw(backBufferCache, fullScreen, Color.White);
    spriteBatch.End();

    graphics.GraphicsDevice.Textures[1] = null;

    spriteBatch.Begin();
    spriteBatch.Draw(midGroundTarget, fullScreen, Color.White);
    spriteBatch.End();
}


Obviously we’ll need to declare all of the resources we used in part 2:

public float minLight = -1f;
public Effect lightBlend;

We will set the value of minLight from within our game code, which we’ll write later in the series. Next we need to load the content of the effect:

lightBlend = Content.Load<Effect>(@"Effects\LightBlend");

We’ll add the game code to load and draw the textures later in the series, but for now you can add the textures and effect file to the content project. You should create one sub-folder entitled ‘Textures’, and a second called ‘Effects’. You can find the textures here:


Midground texture

Background Texture


And for those of you who didn’t follow along in Part 2, the code for the LightBlend shader is here:


LightBlend shader


This shader linearly interpolates the values in the lightMap so that they lie between MinLight and 1, where MinLight is a value specified by the developer: the color that the parts of the scene in shadow get multiplied by. It then multiplies the background texture by this new value.
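That interpolation is simple enough to sketch in Python (illustrative only — the real version is the HLSL LightBlend shader, working per channel):

```python
# Sketch of the LightBlend step: light-map values are remapped from
# [0, 1] into [minLight, 1], then multiply the background colour, so
# fully shadowed areas are scaled by minLight rather than going black.

def light_blend(background, light, min_light):
    # background and light are single-channel values in [0, 1].
    scale = min_light + light * (1.0 - min_light)
    return background * scale

# With minLight = 0.25, an unlit pixel keeps a quarter of its colour:
shadowed = light_blend(0.8, 0.0, 0.25)
lit = light_blend(0.8, 1.0, 0.25)
```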

And that’s it for LightRenderer!

We will revisit bits of the code as we go through the series to add in the necessary parameters after we’ve written each of the shaders.

Unfortunately you won’t be able to build and run your code until the end of the series, but save the project as I will assume that you’ve followed along when we come to adding code to the class later in the series. If you want to double check your code I’ve uploaded the full source of the project up to this point here:


2DLightingSystem (Part 3)


Before we stop, there’s one more thing I promised I’d talk about – Different types of render target.


Render target formats and precision

When we store colors in a texture, the range of colors we can store is limited by how many bits (a data item that can be either 1 or 0) we use to store the color of each pixel.

For example, if we only had 1 bit for each of r, g, and b, then we could only have the following colors: Black (0, 0, 0), Blue (0, 0, 1), Green (0, 1, 0), Cyan (0, 1, 1), Red (1, 0, 0), Magenta (1, 0, 1), Yellow (1, 1, 0), White (1, 1, 1). A whole 8 colors! By contrast, if you had 8 bits (also known as a byte) for each of r, g, and b, then you could show 16,777,216 colors!

This is the default amount of storage each pixel gets in XNA. In general the more bits you have to store the color, the more precisely you can represent a given color, i.e. the higher the precision.
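The arithmetic behind those counts is just exponentiation, as this quick check shows:

```python
# With b bits per channel, each of r, g, and b can take 2**b values, so
# the total number of representable colours is (2**b) cubed.

def colours(bits_per_channel):
    return (2 ** bits_per_channel) ** 3

one_bit = colours(1)     # 8 colours
eight_bit = colours(8)   # 16777216 colours
```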

The different ways of storing data in a texture are called texture formats, or in the case of render targets, render target formats. The standard XNA format is called ‘Color’.

We have the same concern when storing our data on the distance of the first shadow casting pixel from the light. If we use a render target format with too low a precision, then we’ll be limited as to how precisely we can measure how far the pixel is from the light. This could lead to very jagged looking shadows, which could be particularly noticeable if the shadow casting object is very thin. This effect would look something like this (blurring turned off for clarity):



Note the jagged edges along curved shadow casters that are close to the light.

In fact, the normal color format isn’t quite enough for our purposes. Since we are only storing our values in one or two channels, we only have 256 different values to represent our distance, which are stored in even intervals between 1 and 0.

In the worst case, when our light is in the corner of the screen, a ray could need to cover the diagonal distance across the screen, which at 720p is ~1469 pixels. This means that we could only be accurate to the nearest ~6 pixels. However if your shadow starts 6 pixels too close or 6 pixels too far away from the light, in some circumstances this could be quite noticeable.

To solve this problem we had a couple of options.

The first that I considered was to somehow use the extra channels in the texture to encode higher precision into the standard texture format. We could do this in many ways, some of which are simpler but not very efficient, and some of which are more accurate but pay the price in extra shader complexity, along with being harder to understand.

In a production environment, if we were short on memory budget for textures then I would use one of these methods.

A simpler way is to choose a render target format which offers more precision. If we want to guarantee that our system will work on PC and Xbox there is only one choice – the HdrBlendable format. HDR stands for ‘high dynamic range’, and is used to create certain effects in 3D games.

To do this it requires higher precision, which is exactly what we need! Looking at the XNA docs, HdrBlendable gives 16 bits to each of the r, g, b, and alpha channels on PC, and 10 bits for each of r, g, and b, with 2 for alpha, on the Xbox.

The Xbox version may not seem much better than the 8 bits we had in the Color format, but it actually allows us 4 times the precision (for every extra bit in a channel, you double the number of values you can represent in that channel).

This means that on the Xbox we can represent 1024 different distances on our ray, giving us worst case accuracy of ~1.5 pixels on our rays. This is probably acceptable, as the worst case will only occur rarely in most scenes. On the PC we can easily be precise enough to represent pixel-perfect shadows.
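The worst-case figures quoted above can be reproduced with a little arithmetic (assuming a 1280×720 screen):

```python
# The longest possible ray is the screen diagonal; with n distinct
# values in a channel, distance is quantised into steps of
# diagonal / n pixels.
import math

diagonal = math.hypot(1280, 720)       # 720p diagonal, ~1469 px

step_8bit = diagonal / 2 ** 8          # Color format: ~5.7 px per step
step_10bit = diagonal / 2 ** 10        # HdrBlendable on Xbox: ~1.4 px
step_16bit = diagonal / 2 ** 16        # HdrBlendable on PC: sub-pixel
```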

For this reason, in any texture that we are storing the distance of the pixel to the light, we require an HdrBlendable format. However, this raises another issue – Sampling.

Without going into too much detail, Sampling is the term that describes how, given a set of texture coordinates, the Graphics Processor decides what the color of the texture is at that point. Now imagine we have a texture that is precisely half Red and half Blue, divided left and right, like so:



Now imagine we’re sampling texture coordinates (0.5, 0.5). We’re exactly in between the two halves of the texture, and so half-way between the two colors. The Graphics Processor needs to decide which color to display, or some mixture of the two.

One option is to choose to round down or round up, and pick one of the colors. In general this method chooses the nearest pixel to our coordinates to give the final sampled color. This is called Point sampling.

Our other option is to mix the values of the pixels we’re between to get our final color. Choosing the nearest pixel to the texture coordinates can give us slightly jagged edges around high-contrast areas of our textures: e.g. if we have a diagonal black line running across a white background, the edge of the line should run through individual pixels, but each of those pixels must be either white or black, not half and half as they would be in reality.

In general XNA uses one of a number of methods to take a kind of average of the nearby pixels to determine the final sampled color. However, this only works on textures that have format Color. For HdrBlendable textures we can only use point sampling, hence the need for a special sampler state when we use this format for our render targets.
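The difference between the two approaches can be shown with a tiny 1D sketch in Python (illustrative only — the texel indexing here is a simplification of what real hardware does):

```python
# Point vs linear sampling on a 1D 'texture' of single-channel texels.
# Point sampling snaps to the nearest texel (what PointClamp gives us);
# linear sampling blends the two neighbouring texels.

def point_sample(texels, u):
    # u in [0, 1]; pick the nearest texel.
    i = min(len(texels) - 1, int(u * len(texels)))
    return texels[i]

def linear_sample(texels, u):
    # Blend the two texels either side of the sample position.
    pos = u * len(texels) - 0.5
    i = max(0, min(len(texels) - 2, int(pos)))
    frac = min(1.0, max(0.0, pos - i))
    return texels[i] * (1.0 - frac) + texels[i + 1] * frac

tex = [0.0, 0.0, 1.0, 1.0]     # half one colour, half another
exact_middle = point_sample(tex, 0.5)    # snaps to one side: 1.0
blended_middle = linear_sample(tex, 0.5) # mixes the two: 0.5
```

Point sampling commits to one side of the boundary; linear sampling returns the half-way mix, which is why it smooths high-contrast edges.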



That’s it for this part. We’ve talked over the outline of the algorithm that our system will be using, described the way that a developer using our system will interact with it, and coded a skeleton of the main class that our system relies on.

Over the next few parts we’ll be focusing on writing the shaders for the various stages of the algorithm, adding the necessary code to our LightRenderer class as we go along.

’til next time!


First up, apologies to everyone who’s had to wait a little while for this second part of the series. I wanted to make sure that the series was as accessible to everyone as possible, even if they’ve never written a shader before, and so I’ve ended up going into rather more depth than I intended! That and an unexpected exile to a place without the internet over Christmas & New Year has led to a couple of weeks delay, so sorry about that! Fingers crossed there’ll be a much shorter wait for the next installment…
On with the show!


Shaders: a quick introduction

I’m going to start the series with a quick introduction to shaders. I’ll talk a bit about how shaders work, and then we’ll go through a small sample where we write a simple shader that we’ll actually use in our Lighting system. For those of you who are already comfortable with shaders, feel free to skip this section, I’ll do a quick recap of the shader we create here when we come to use it later in the series.
For those following on, I’ve linked to the full Visual Studio solution for this part of the series at the end of this post, so if you’d like to just review the code and move on then feel free to skip to the end.


What are shaders?

Shaders, very simply, are small programs that run on the graphics processors of your PC, console, or smart phone. There are a number of different kinds of shaders, but for our lighting system we will only talk about one kind of shader: Pixel Shaders.
A pixel shader is called for every pixel of the surface that the graphics processor is drawing to. Usually that means the pixels on your monitor or TV. The job of the pixel shader is to determine what the final color of that particular pixel will be.

If that seems a bit complicated, think of it like this. When the graphics processor wants to draw to the screen, it’s actually deciding for each pixel what color it should be. For each pixel on the screen, it runs the pixel shader and the pixel shader returns a color back to the graphics processor, and the graphics processor then sets the pixel to that color.

So if we wanted to color the whole screen red using a shader, our pixel shader would just contain the line:

return float4(1, 0, 0, 1);

Woah now, I hear you cry. What’s this float4(1, 0, 0, 1) business? Well, in shaders, a color is represented by a float4, which is like a Vector4 in XNA. It just holds 4 float values. Each of the 4 values in the float4 has a special meaning.

As an example, let’s think about a float4 called Col. The first element of Col can be accessed either through Col.x (just like a Vector4 in XNA) or by Col.r. The ‘r’ stands for red, as this value represents how much red there is in the color Col. The second value is Col.y or Col.g (for green) and the third is Col.z or Col.b (for blue). The fourth element of Col is Col.w or Col.a. The ‘a’ stands for alpha, and represents how opaque the color is, with 1 being fully opaque and 0 being fully transparent. All of the values in a float4 representing a color are between 0 and 1, with 1 being the maximum.

So above when we said return float4(1, 0, 0, 1); we were returning a Color with the maximum amount of red, no blue or green, and fully opaque.

I’m going to walk through the shader that we’re going to write first, and then we’ll see how it fits in with a simple XNA sample so that you can test it on your own machine.

In our lighting system, each frame we’re going to create light maps. You can think of these a bit like stencils you use for spray painting, like a sheet that covers the scene we’re lighting, and has cutouts showing which bits will be lit with everything else in shadow. For the areas that are lit, the light map will also show us how much light is falling on that part of the scene. This ‘light value’ can be represented for each small section of the scene by a value between 1 and 0, where 1 represents a bit of the scene that is fully lit and 0 represents a bit that is fully in shadow.

This should sound familiar. It’s exactly the way that we talked about storing the red, green, blue, and alpha components in our color Col above. In fact, we can go further than just a single value of 1 and 0 for each bit of the scene in our light map. Instead we can have a value between 1 and 0 for the amount of red light, the amount of green light, and the amount of blue light at each bit of the scene. This will allow our light map to show not just how much light each bit of the scene is getting, but what color that light is.

We will leave the details of how we generate the light map each frame for later tutorials. Our aim right now is to write a shader that takes a light map and a background texture, and combines them to show what that background looks like with lights applied to it.

Just to illustrate, here is the light map that we’re going to be using in our example:


You can see where the objects that are casting shadows are, even though they’re not drawn on the light map.

Here is the background texture we will be using (yep, just plain grey):



Note that again we can’t see the objects that are creating the shadows. That’s because we don’t want the objects that cast the shadows to be in shadow themselves, so we don’t want to apply the light map to them.

Here is the background when it’s been lit with the light map, using the shader that we’re about to write. Notice that it is slightly lighter than the light map alone; that extra brightness comes from the background:



And finally, here is the whole scene including the texture with the objects that are casting the shadows:



Hopefully that gives you an idea of where we’re heading. So without further ado, let’s write our first useful shader!


The LightBlend shader

Before we start, we’re going to need an XNA project. Head into Visual Studio and create a new Windows XNA game. Once you’re done, right click on the content project and go to add-> New… . In the dialog box that pops up, select Effect file, and call it LightBlend. Click ok.

Visual Studio will open up a very intimidating looking code file, full of some familiar, and some not so familiar looking code. Don’t worry about this for now. Select it all and hit delete. For our first shader, we’re going to be writing everything from scratch.


Structure of a pixel shader

Shaders (in XNA and DirectX) are written in a language called the High Level Shader Language, or HLSL, which is based on the syntax of C, the language that C# also evolved from.

A shader looks a lot like a function in C#. It’s enclosed in curly braces (i.e. {}), it takes arguments, and returns a type. In fact, it is a function, just a special one that runs on the graphics processor and not the CPU.

So let’s start by writing the skeleton of our shader function, which we will call ‘PixelShaderFunction’.

First up, just like in C#, we need to state what our function returns. For pixel shaders this is basically always a float4, as its job is to tell the graphics processor what color the pixel should be. So for the moment our shader looks like this:


float4 PixelShaderFunction()



Now, we said earlier that the pixel shader gets called for each pixel on the screen, and in the example we gave earlier that colored the screen red, we didn’t care which pixel we were drawing to, because the result was always the same. Normally, however, we’ll want to return a different color for different pixels depending on where they are on the screen. If we think of it in terms of a sprite that covers the whole screen, usually that sprite will have a texture which will determine what color each pixel of that sprite should be.

In XNA we would just use SpriteBatch.Draw() to draw the texture over a rectangle with the same dimensions as the screen. This is something we’re going to need to do in our shader when we draw the background, and when we apply the light map. In order for the shader to know which bit of the texture to look at, it needs texture coordinates. This is just a float2 (like a Vector2) that holds a pair of floats between 0 and 1. These represent x and y coordinates for the texture, telling us where to look on the texture to find the color for this pixel (with (0, 0) in the top left hand corner of the texture, and (1, 1) in the bottom right).
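Just to pin down that convention, here is a hypothetical Python helper (not part of the XNA sample) that converts a (u, v) coordinate in [0, 1] into the texel it refers to:

```python
def texcoord_to_texel(u, v, width, height):
    """Map texture coordinates in [0, 1] onto a texel index, with (0, 0)
    the top-left texel and (1, 1) the bottom-right."""
    x = min(int(u * width), width - 1)
    y = min(int(v * height), height - 1)
    return x, y

# On a 256x128 texture:
print(texcoord_to_texel(0.0, 0.0, 256, 128))  # (0, 0): top left
print(texcoord_to_texel(1.0, 1.0, 256, 128))  # (255, 127): bottom right
```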

Fortunately, when we use SpriteBatch.Draw() in this way, the texture coordinates are passed to our pixel shader. In order to use them in our shader we need to add a parameter to our arguments list:


float4 PixelShaderFunction(float2 texcoords)



We actually need to add something else to be able to use that parameter in our shader, but we’ll come back to that later on. For now let’s just treat this like a C# function, and assume that the graphics processor will pass it the right information.

So we have a function, and we have coordinates. What now? Well, if we want to draw the background with the light map applied, a good first step would be just drawing the background.

To do this is pretty straight forward, but there are a few new concepts to introduce along the way. First, let’s think about what we want the graphics processor to do.

Since we already have access to the texture coordinates for the current pixel in our shader, all we need the graphics processor to do is to look up what the color of the texture is at that coordinate, and display that color. Now hopefully at this point you’re thinking ‘hang on, what texture?’.

Good point. We need to give our shader a way to reference a texture. The way we do that in shaders is to create something called a sampler. This is used by the graphics processor to ‘sample’ a specific texture in a specific way. As it happens, using SpriteBatch.Draw(), most of the information about the texture and how we want to sample it is passed into the shader by our game code. In order to get access to that information in our shader we need to add the following at the top of our file:

sampler textureSampler;

Again, there’s something else we’ll need to add later to make it work, but this will do for now. All we’ve done here is declare a sampler object called textureSampler.

To actually sample the texture we need to call a function that’s built into HLSL called tex2D(). tex2D() takes as its arguments a sampler and a float2 holding the texture coordinates that we want to sample. It returns a float4 containing the color of the texture at those coordinates.

We’ll see how to make sure that our sampler is associated with the texture we want to draw a bit later, but for now let’s assume that it is. To draw our background all we need to do is sample from the texture coordinates that were passed into our shader function. We do this by adding the following line in our shader function:

float4 backgroundCol = tex2D(textureSampler, texcoords);

So now we have a float4 holding the color of the bit of our background texture that corresponds to the pixel we’re trying to draw at the moment. So all we need to do to draw our background is to return this color, and the Graphics Processor will change the pixel on the monitor to the correct color:

return backgroundCol;

So our full shader for outputting the background texture looks like this:


sampler textureSampler;

float4 PixelShaderFunction(float2 texcoords)
{
    float4 backgroundCol = tex2D(textureSampler, texcoords);
    return backgroundCol;
}


If we were to write the accompanying XNA code to make this work, and use the texture provided with the sample, we’d get this result:



Not exactly exciting, but a good start.

That wasn’t so complicated. However, like most examples, it’s also a little pointless, as it’s much easier to just use spriteBatch to draw a fullscreen texture (and we’re also missing a couple of bits of code to make it work, they’re coming I promise!).

In order to make it more interesting, we need to apply our lightMap. As mentioned earlier, our lightMap will represent fully lit by all 1’s in each of the r, g, b, and a components, and minimum light by 0 for each of r, g, and b, and 1 for a (as we don’t want it to be invisible, just dark).

So how do we change the color of our background to make it look like it’s been lit up with the light map? Let’s start with a simple case. Let’s assume that there is no ambient light in the scene whatsoever. By that I mean that if the light isn’t touching a pixel, then it should display as completely black. In this case the minimum light value for any pixel is (0, 0, 0, 1).

If the pixel is fully lit, it should show exactly the color of the background pixel, i.e. (background.r, background.g, background.b, background.a). What about if the light map has a value of, say, (1, 0.5, 0.25, 1)? What does this mean?

It means that we want the full contribution of the red component of the background color, half of the green component of the background color, a quarter of the blue component of the background color, and all of the alpha component of the background color. Hopefully you’re way ahead of me and already thinking ‘so in general, it’s:

(background.r * lightMap.r, background.g * lightMap.g, background.b * lightMap.b, background.a * lightMap.a).

If so, then you’re right! If not though, don’t worry, just go back and read the last paragraph or so again and see if you can come to the same conclusion.
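The formula above is just a component-wise multiply. As a quick sanity check, here is the same arithmetic in plain Python (the color values are illustrative, not taken from the sample textures):

```python
def apply_light_map(background, light):
    """Component-wise multiply of two RGBA colors, every channel in [0, 1]."""
    return tuple(b * l for b, l in zip(background, light))

background = (0.8, 0.8, 0.8, 1.0)   # a light grey background pixel
light_map  = (1.0, 0.5, 0.25, 1.0)  # full red, half green, quarter blue

print(apply_light_map(background, light_map))  # (0.8, 0.4, 0.2, 1.0)
```

A fully lit pixel, (1, 1, 1, 1), leaves the background unchanged, while (0, 0, 0, 1) multiplies it down to opaque black, which is exactly the behaviour we described.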

Before we can use the lightMap in this way, first we need to be able to sample it. For this we need another sampler object, so add the following at the top of the file after we declared textureSampler:

sampler lightMapSampler;

And we need to use it in the same way inside our shader, so add the following before the return statement in our function:

float4 lightMapCol = tex2D(lightMapSampler, texcoords);

Now we can change our ‘return’ statement to apply the lightMap values to the background color:

return float4(backgroundCol.r * lightMapCol.r, backgroundCol.g * lightMapCol.g, backgroundCol.b * lightMapCol.b, backgroundCol.a * lightMapCol.a);

Our updated shader would now produce the following output. If we were to apply this lightMap:


to this background:



we would get this result:



That’s looking a bit more interesting. However, it looks a little harsh: in reality you don’t get pitch black just because there are no lights, as there’s always some light reflected off of other surfaces. To approximate this in our lighting system we let the programmer define a minimum value between 0 and 1 that the light is allowed to fall to.

float4 MinLight;

We’ll see how to set this value later on. First we need to work out how we’re going to use it. There are two main ways we could enforce a minimum value for the light in our scene. The first is to simply say that any pixel which has a light value less than MinLight we’ll set equal to MinLight. This would certainly work, however it would have another effect. Let’s compare the screen capture we had earlier to the one we get if we use this method:



As you can see, the area that appears to be lit by the lights is much smaller (note that the areas affected by the two lights now fall a long way short of one another). In fact, as you’ll see as we go through the series, we want the programmer to be able to control the radius directly, so we don’t want to affect it here.

Another method we can use is to use the lightMap value to linearly interpolate (or lerp) between (1, 1, 1, 1) and the value of MinLight. What this means is that we want the area covered by the light to stay the same, with the outermost edge having the value MinLight instead of (0, 0, 0, 1), and the pixel at the point the light originates from (where the bulb would be on the flashlight, if you like) being the color of the light itself, with a smooth gradient in between.

There is a built-in function for this, but I want you to understand what’s happening, so we will do it manually. For now, let’s just focus on the red component of the lightMap (the process will be the same for green and blue, and we want to leave alpha as it is, so we don’t need to worry about it).

The sort of thing we want to do would look something like this:


float4 newLightMapCol;

newLightMapCol.r = (lightMapCol.r * (1 - MinLight.r)) + MinLight.r;


So, what are we doing here? Let’s think about it in terms of lightMapCol.r. lightMapCol.r tells us how much (on a scale of 0 to 1) red light there should be at that pixel, where 0 means we have MinLight.r red light, and 1 means we have the maximum amount (1) of red light. E.g. when lightMapCol.r is 0.5 we should have a red light value for that pixel exactly half way (or 0.5 of the way) between MinLight.r and 1.

To get to half way between MinLight.r and 1, we need to add half of the difference between them (1 - MinLight.r) to the smaller of the two (MinLight.r). In other words, half way between MinLight.r and 1 is:

0.5 * (1 - MinLight.r) + MinLight.r

If we repeat that exercise again, except this time we want lightMapCol.r to be 0.1, we get to:

0.1 * (1 - MinLight.r) + MinLight.r

So in general, we want the amount of red light at that pixel to be:

lightMapCol.r * (1 - MinLight.r) + MinLight.r
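If you want to convince yourself the formula behaves as advertised, here is the same expression in plain Python (a MinLight.r of 0.25 is just an illustrative value; the sample code later uses 0.3):

```python
def remap_light(light, min_light):
    """Remap a raw light-map value in [0, 1] onto the range [min_light, 1]."""
    return light * (1.0 - min_light) + min_light

MIN_LIGHT = 0.25  # illustrative floor that the light never drops below

print(remap_light(0.0, MIN_LIGHT))  # 0.25  - fully shadowed pixels get MinLight
print(remap_light(1.0, MIN_LIGHT))  # 1.0   - fully lit pixels are unchanged
print(remap_light(0.5, MIN_LIGHT))  # 0.625 - half lit: halfway between 0.25 and 1
```

Note that unlike the clamping approach, the lit area keeps its full radius; the whole 0-to-1 range is squeezed into MinLight-to-1 instead of being cut off at the bottom.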


As I said above, the same will be true of lightMapCol.b and lightMapCol.g, and we don’t want to change lightMapCol.a. So we could write the following:


float4 newLightMapCol;

newLightMapCol.r = (lightMapCol.r * (1 - MinLight.r)) + MinLight.r;

newLightMapCol.g = (lightMapCol.g * (1 - MinLight.g)) + MinLight.g;

newLightMapCol.b = (lightMapCol.b * (1 - MinLight.b)) + MinLight.b;

newLightMapCol.a = lightMapCol.a;



However, HLSL lets us do this much more efficiently. I’m going to write the code first, and then explain it:

float4 newLightMapCol = lightMapCol;
newLightMapCol.rgb = (lightMapCol.rgb * (1 - MinLight.rgb)) + MinLight.rgb;

This, my friends, is called swizzling (I kid you not). Basically, for float4s (and float3s and float2s) we can work with multiple components at once. The .rgb line is the same as the three separate red, green, and blue lines we wrote above, condensed into one: you can think of the Graphics Processor picking r, then g, then b, and executing the line for each in turn. In reality it does some magic on its end in order to do all 3 at once in a super-efficient way, but we don’t need to worry about how it does it.
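If it helps, the equivalence can be checked in plain Python: the per-component version and the all-channels-at-once version give exactly the same answer (illustrative values again, nothing from the sample):

```python
light_rgb = (1.0, 0.5, 0.25)    # standing in for lightMapCol.rgb
min_rgb   = (0.25, 0.25, 0.25)  # standing in for MinLight.rgb

# Long-hand: one component at a time, as in the three separate HLSL lines.
r = light_rgb[0] * (1 - min_rgb[0]) + min_rgb[0]
g = light_rgb[1] * (1 - min_rgb[1]) + min_rgb[1]
b = light_rgb[2] * (1 - min_rgb[2]) + min_rgb[2]

# All three channels in a single expression, like the .rgb swizzle.
swizzled = tuple(l * (1 - m) + m for l, m in zip(light_rgb, min_rgb))

print((r, g, b) == swizzled)  # True
```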

In fact, while we’re on the subject of efficiency, we don’t really need a new variable to store the altered lightMapCol value, as we won’t need the original value of the lightMap again, so we can simplify to:


lightMapCol.rgb = (lightMapCol.rgb * (1 - MinLight.rgb)) + MinLight.rgb;

With that change in place, our full shader file now looks like this:


sampler textureSampler;

sampler lightMapSampler;

float4 MinLight;

float4 PixelShaderFunction(float2 texcoords)
{
    float4 backgroundCol = tex2D(textureSampler, texcoords);
    float4 lightMapCol = tex2D(lightMapSampler, texcoords);

    lightMapCol.rgb = (lightMapCol.rgb * (1 - MinLight.rgb)) + MinLight.rgb;

    return float4(backgroundCol.r * lightMapCol.r, backgroundCol.g * lightMapCol.g, backgroundCol.b * lightMapCol.b, backgroundCol.a * lightMapCol.a);
}


There’s one more thing for now that we need to add to our effect file. This is called a technique. To get certain effects, graphics programmers sometimes need the Graphics Processor to run one shader immediately followed by another shader.

The technique is the place where we tell the Graphics Processor what shader functions to run and in what order. If we were using a Vertex Shader for example (you don’t need to worry about what these are!) they have to run before the pixel shader, so our technique would tell the Graphics Processor to first run the vertex shader, then our pixel shader.

The technique has one more important role. It tells the compiler which version of HLSL it is using, so that the compiler can compile the code correctly. For us, since we’ve only written a pixel shader, we need to tell it what Pixel Shader version we’ve used. For our code we need Pixel Shader v. 2.0.

In general the earlier the version you can use, the older the hardware that will be able to run it, so for compatibility, you generally want to use the lowest number version you can get away with. There are exceptions of course. If you’re targeting fixed hardware (like the Xbox 360 for instance) you don’t need to worry about supporting older versions. There are other differences, but we don’t need to worry about those.

So, at the bottom of the effect file, add the following:


technique Technique1
{
    pass Pass1
    {
        PixelShader = compile ps_2_0 PixelShaderFunction();
    }
}


This is going to be identical in every shader we write throughout this series, so I’m not going to go into what it all means. All you need to know is what I said earlier, that this tells the Graphics Processor what shader to run, and the compiler what version to compile it against.

At this point we’re mostly done with our shader code. There are a couple of details that we’ll need to add to enable our game code to talk to the shader properly, but the meat of the shader is all there.

Let’s head back to our Game1.cs file that was created when we created our project earlier.

First of all we need to add a field to the class for our shader. In XNA shaders are Effect objects, so at the top of the Game1 class add:

Effect lightBlendEffect;

We’ll also need our textures. They can be downloaded from here:


Background Texture
Midground Texture

LightMap Texture


Add these to your content project and add fields for them at the top of Game1:

Texture2D background;
Texture2D midground;
Texture2D lightMap;

Next, in LoadContent() add the following line that loads and compiles our shader when we build the solution, as well as lines to load our textures:


lightBlendEffect = Content.Load<Effect>(@"LightBlend");

background = Content.Load<Texture2D>(@"Background");

midground = Content.Load<Texture2D>(@"Midground");

lightMap = Content.Load<Texture2D>(@"LightMap");


Finally, in Draw(), just after the code that clears the screen to Cornflower blue, add the following lines (I’ll explain them shortly):


GraphicsDevice.Textures[1] = lightMap;

lightBlendEffect.Parameters["MinLight"].SetValue(Vector4.One * 0.3f);

spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend, null, null, null, lightBlendEffect);

spriteBatch.Draw(background, new Rectangle(0, 0, GraphicsDevice.Viewport.Width, GraphicsDevice.Viewport.Height), Color.White);


GraphicsDevice.Textures[1] = null;


spriteBatch.Draw(midground, new Rectangle(0, 0, GraphicsDevice.Viewport.Width, GraphicsDevice.Viewport.Height), Color.White);

spriteBatch.End();



So, what are we doing here? A lot of this should look familiar, but some of it is new. The first line is

GraphicsDevice.Textures[1] = lightMap;

The GraphicsDevice actually keeps an array of all the textures it is using at any one time. All we’re doing here is manually adding our lightMap texture to the array. You may have noticed that we’re actually adding it as the second texture in the array.

This (as you may have guessed) is because spritebatch automatically puts its texture parameter (in our case background) as the first element in that array, which would have overwritten the lightMap texture.

Once more you may be wondering how the Graphics Processor matches these textures to the samplers that we created in our shader. I promise, all will be explained shortly!

The next line is a little more self-explanatory:

lightBlendEffect.Parameters["MinLight"].SetValue(Vector4.One * 0.3f);

When we load our shader into our lightBlendEffect object inside LoadContent(), it gives us a way to access each of the parameters in our shader, by storing them by name in what is effectively a Parameters dictionary. To set our MinLight value in our shader, we just set the parameter that matches the name we put in the braces.

Next up is a version of spriteBatch.Begin() that you may not have come across before. There are a lot of parameters here, but we only really care about the last one, which tells spriteBatch to draw using our shader that we loaded into lightBlendEffect instead of spriteBatch’s default pixel shader.

spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend, null, null, null, lightBlendEffect);

The only other line that might need explaining is

GraphicsDevice.Textures[1] = null;

All we are doing here is un-setting the texture we set at the beginning of this block of code.

With that, we’re very close to being done. On to the final hurdle!



For quite a while now I’ve been promising you that I’ll explain how the Graphics Processor can possibly match up all the information in our XNA code with the information we wrote in our effect file.

I explained how it knows what value to give MinLight, but what about the two samplers? Or the texture coordinates? In fact, how does it know for certain that the output of this shader is meant to be a color that it’s supposed to draw to the screen?

For that, there is a feature in HLSL called semantics, and it requires some small editing to our effect file, as well as a bit of explanation as to how the Graphics Processor stores textures.

Semantics are basically labels in the code of the shader that tell the Graphics Processor that certain variables or values have a special meaning, and should be treated accordingly.

The first example we’ll look at is the texcoords parameter that our PixelShaderFunction takes as an argument. At the moment the first line of our function looks like this:

float4 PixelShaderFunction(float2 texcoords)

You probably asked at the time, where is this function called from? How do the correct texture coordinates get passed as an argument? Well, in a way it’s all an illusion.

When we call SpriteBatch.Draw(), XNA tells the graphics processor what the texture coordinates are for each of the four corners of our sprite (which are the four corners of the screen in our case), and the Graphics Processor interpolates the correct coordinates for each pixel.

In order for the value that the Graphics Processor has calculated to end up as the value for the texcoords parameter, we have to add a label (or rather, a semantic) to the texcoords parameter to let the Graphics Processor know that it needs to set it to the texture coordinates for this pixel.

To do this, we change the first line of the shader function to the following:

float4 PixelShaderFunction(float2 texcoords : TEXCOORD0)

This label, signified by a ‘:’ followed by one of a set number of labels, tells the Graphics Processor that it needs to set texcoords equal to the texture coordinate for this pixel. The ‘0’ at the end indicates that we are referring to the first set of texture coordinates that the Graphics Processor is holding in memory. In more complex operations the Graphics Processor might have a number of different sets of texture coordinates in memory at the same time.
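Conceptually (ignoring hardware details), the interpolation behind TEXCOORD0 for a full-screen sprite looks something like this Python sketch; the helper name and the pixel-centre convention are illustrative assumptions, not part of the XNA sample:

```python
def interpolated_texcoord(px, py, screen_w, screen_h):
    """Texture coordinate a full-screen sprite's pixel shader would
    receive for the pixel at (px, py), sampling at pixel centres."""
    u = (px + 0.5) / screen_w
    v = (py + 0.5) / screen_h
    return u, v

# On an 800x600 screen, the pixel nearest the middle gets coordinates
# just under (0.5, 0.5), and the top-left pixel sits near (0, 0).
print(interpolated_texcoord(399, 299, 800, 600))
print(interpolated_texcoord(0, 0, 800, 600))
```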

Before we move on to the textures/ samplers, let’s just think for a moment about the return value of our function. We know that it’s meant to be read as the color for the Graphics Processor to set the pixel to, but can the Graphics Processor be sure that’s what we mean? Although based on what we’ve discussed so far the answer might seem like ‘yes’, actually it’s a bit more complicated.

The Graphics Processor can do a lot more than just drawing to the screen. It can render to a texture (like drawing to the screen, but instead of the finished image appearing on the monitor, it is saved as a texture to memory – we will be using this later in the series), it can render to multiple textures at once, it can also write out depths (single float values rather than float4s) and perform a whole host of other functions.

To make it clear to the Graphics Processor that what we’re outputting is a color and should be used as such, we need to add a semantic to the return value like so:

float4 PixelShaderFunction(float2 texcoords : TEXCOORD0) : COLOR0

Once again, we’re adding a ‘:’ followed by a specific label that tells the Graphics Processor how to interpret the value (COLOR in this case) and a number to tell it which one of that type of value it might be holding we’re talking about (i.e. if we were rendering to two textures instead of the screen we might refer to them using the semantics COLOR0 and COLOR1).

Finally we move onto the textures/ samplers. As we discussed earlier, the Graphics Processor can hold an array of textures, which we can assign to in our XNA code. The question is, how does it know which of our samplers to use with which of the textures? What is to stop it sampling the lightMap when we want it to sample the background?

To solve this we again need to add semantics, but these are going to take a bit more explaining:

sampler textureSampler : register(s0);
sampler lightMapSampler : register(s1);

Yep, those are definitely going to take some explaining! The first thing to talk about is what these objects actually are. As discussed before these are samplers, rather than the textures themselves. They’re basically an interface that tells the Graphics Processor not just which texture to sample, but how to sample it.

So actually, we need each of these to reference a fully formed sampler, not just a texture. Fortunately we don’t need to worry about creating a sampler: XNA and SpriteBatch create them for us based on the textures we’ve put in the GraphicsDevice’s textures array, or that we’ve passed to SpriteBatch.Draw(). XNA then stores the samplers that it’s created in the memory that belongs to the Graphics Processor.

In particular the Graphics Processor has a number of special areas of memory called registers. These are each dedicated to a type of object, such as textures, samplers, normal maps, and other resources. In our case we’re only interested in the sampler registers.

The Graphics Processor has a number of registers dedicated to samplers. XNA stores the samplers that it creates for us in these registers, storing the sampler for the first texture in register s0 (s being for sampler!), the sampler for the second texture in s1, and so on.

As discussed before, the first texture is always the one passed into spriteBatch.Draw(), which for us was the background texture. So the sampler for the background texture must be stored in register s0.

In our shader we want the sampler that we’ve called textureSampler to sample our background texture, and so we need to tell the Graphics Processor that this sampler can be found in register s0. To do this we add the semantic ‘register(s0)’ to the end of the line that we declared our sampler on:

sampler textureSampler : register(s0);

Similarly, as our lightMap texture was the second texture on the GraphicsDevice, its sampler will be in register s1, so the line in our effect file needs to be:

sampler lightMapSampler : register(s1);

And that’s it for semantics! Hopefully that all made some kind of sense to you, but if not don’t worry about it, you won’t need a detailed knowledge of semantics and registers for this series, just enough that you don’t need to worry about why we’re adding them to our shaders.

And in fact, that’s it for this introduction to shaders! Save everything and run the program. If everything has gone well you should now be seeing the image we saw at the beginning of this tutorial:



I’ve uploaded the full Visual Studio solution for this project at the link below, so if your solution isn’t working and you’re having trouble figuring out where you went wrong you can compare your code to the completed solution:


Part 2: Full solution


This series is all based around shaders, so if you don’t feel that you’ve gotten a good grasp of what shaders are and what they do from this post then I highly suggest that you take a look at some of the other simple shader tutorials linked to below, otherwise you may struggle to follow the shader code in the rest of this series.

I hope that this hasn’t scared you off of shader code, it really is some of the most interesting technology that XNA lets you play with, and if you take the time to play around with them you can find yourself creating many weird and wacky effects for your games! Hopefully I’ll see you all back here soon for the next installment in this series, where I’ll outline the algorithm for the 2D lighting engine.

The next part should *hopefully* be significantly shorter than this, and fingers crossed I’ll be able to get it posted up faster too!

Until next time!


Other shader/ HLSL tutorials





2D Lighting System tutorial series

Welcome to the first in a series of blog posts that are going to walk you through the creation of a fully featured 2D lighting system to use in your games. The system focuses on creating soft lighting and shadows that can be cast by any arbitrary sprite in your scene.
For a preview of the kind of system we’ll be creating, have a look at the following video:

YouTube Preview Image

The video quality isn’t great as I only have the free version of Fraps to record with, but hopefully it’s given you an idea of what we are working towards.

For those of you who are already comfortable with shaders and digging around in other people’s source code, I’ve uploaded the full source to both the lighting system and a basic sample that creates 2 different colored spotlights and lets you move them around the scene:


For the rest of you, in this post I’m going to set out what we’ll be covering in the rest of the series.

The series will be split into 9 (fingers crossed!) parts:

1) Introduction, and series contents.

2) Introduction to shaders and the LightBlend shader.

3) Structure of the lighting system, overview of the algorithm, and the LightRenderer class.

4) Point lights: ‘Unwrapping’ the shadow casters, and creating the occlusion map.

5) Point lights: Lighting the scene.

6) Blurring the LightMap: Creating soft shadows.

7) Spotlights: ‘Unwrapping’ the shadow casters part 2.

8) Spotlights: Lighting the scene and soft shadows.

9) Conclusion: Optimisations for the future.

I’ll try and keep each part to a manageable length, but there is a lot of material to cover, so there may be a couple of long posts along the way!


December 17th, 2011

Well, what a surprise! Yet another massive hiatus in updating my blog.

For the last few months I’ve been in something of a rut. I started designing maps for the-game-formally-known-as-Sphero, and found myself lacking motivation. I was clear on the experience I wanted the player to have, but I was struggling to convert that into what I thought would be compelling puzzles and an interesting environment to traverse. I’d also stopped programming, which is the bit of game development that I enjoy more than anything else.

As a break I decided to work on a side project. I started with a 2D lighting engine, based very loosely on the same principle as the one described by @CatalinZima here:


I had some innovative ideas to improve the approach, and I thought it would be a good short break from designing. Also, I’ve always wanted to play around with shaders and write some from scratch based on my own ideas, so this seemed a great place to start. I also thought it would give me the coding fix I needed, and let me recharge my batteries for a fresh run at designing Sphero levels.

Unfortunately, I encountered a few tricky hurdles, and ended up with some very stubborn bugs in my shader code that left me stranded once more, mid-project, with no momentum. Rather than persevere, as I should have, I let myself get distracted by Ludum Dare.

For those of you who don’t know, Ludum Dare is a quarterly, 48-hour game development competition. There are no prizes, just prestige. Participants vote on the theme in the run up to the competition, and the theme is announced at the start. I’d been toying with a new, very simple, mobile game idea for a while, and this seemed like the perfect opportunity to prototype it, if the theme fit. The theme that particularly fit with my idea was ‘Escape’. So I up-voted Escape and decided that I would only compete if that ended up being the theme. I’m not sure whether I expected it or not, but when I awoke on the morning of the competition, lo and behold, the chosen theme was ‘Escape’.

So I stuck to my guns and dived head first into creating the game idea that had been brewing in my head for a week or so. The premise was simple. You have balls falling through space, and by tapping on the screen you create small black holes, the gravity of which alters the ball’s path, the goal being to alter their path so that they hit an ‘exit’ portal. When all of the balls have been deposited in the exit portal, the player has passed the level. Any balls that get sucked into one of the black holes reappear at the top of the screen again.

In my head, this seemed like a pretty simple game. I decided to use XNA and Farseer for Ludum Dare, with the intention of porting to iOS later if I liked the game.

Even using APIs I was familiar with, I failed miserably to create anything playable in the time limit. There were other distractions, but the main reason was that I completely underestimated how difficult it would be to code everything from scratch.

Despite this, I still thought that the game had potential, so I carried on with it after the competition. Some readers may have noticed a pattern emerging at this point, but I won’t spoil it for those that haven’t figured it out yet!

I ended up spending over 2 weeks creating something approaching a basic playable prototype. It took a lot of tweaking of physics values to create a system that let the player actually influence the path of the balls with any degree of control. But eventually I got there. There was a lot more that I had planned for it, but the core of the game was there.

And it was BORING! I realised very quickly that there was no way I was going to come up with an interesting game based on this mechanic. It was almost impossible to come up with level designs where the solution wasn’t immediately obvious, and then it became a game of trial and error, playing with the placement of black holes to get them just right to solve the puzzle. In order to come up with interesting levels they would have had to be massively more complex, at which point they no longer lent themselves to the limited screen space of a mobile device.

Deflated, but determined not to stall yet again, I ploughed on half-heartedly trying to implement another feature, portals, which I thought might make the game a bit more interesting.

Then came the Microsoft Build conference, which was the last time I posted to this blog. Microsoft released the developer build of Windows 8, and the XNA community immediately noticed that something was missing: XNA!

There was a lot of noise about XNA being dead, which led me to write my last post on here. Incidentally, my confidence in the future of XNA has actually slipped further in the months since, but that’s for another time. In light of the fact that, at least initially, it was clear that XNA games weren’t going to be a viable option in Metro style apps, and therefore the Windows marketplace, I decided that there was a gap in the market for a new API.

For a while I’d been toying with the idea of throwing in my current day-job/ career when I finish my part-time CS MSc course and trying to get a programming job in an established game studio. For that I need real-world C++ coding experience, and I’d been looking for a chance to use C++ in a project.

These two events collided, and I decided I could kill two birds with one stone by writing a C++ version of the XNA API, written in DirectX 11 so as to allow XNA devs to port to C++ easily and use the new Metro UI. Even at the time I knew it was massively ambitious, but my fall back position was to write enough of the API so that I could port Alta, and so Sphero, to give me a new motivation to work on Sphero – get it done as a launch game for Windows 8.

I set about the task, and actually got pretty far, completing a number of the smaller XNA classes in their entirety. Then I hit the real meat of the Graphics classes and started struggling. I wouldn’t be in a position to properly test any of them until I’d written more, and without knowing DirectX 11 properly I could be heading in completely the wrong direction, but might not realise it for weeks. Still, I persevered (again).

Then a few things happened at once. Work got very busy. I was leaving for work an hour and a half earlier than usual and leaving work at the same time. I was averaging maybe 5-6 hours sleep a night. I’d started lectures again for my MSc, which was taking up a lot of my time with assignments. And then Steve Jobs died.

It’s strange. I’m not a huge Apple fan. I have a MacBook, but spend most of my time on MS products. I have an iPod touch, but my phone runs Windows Phone 7. I’d always seen Steve Jobs as an impressive leader and a good salesman, but I would never have considered him an idol.

Yet when he died, I felt sad. Even now, months later, I have no idea why. But I did. This was before all of the media hype surrounding his death, before watching his famous Stanford address, or reading his sister’s eulogy. When I heard the news I felt sad. And, as it played out in the media for the next few days, I, like tech enthusiasts and wannabe entrepreneurs everywhere, started re-evaluating my life.

I started making changes. I took up running. For reference, at school I could never finish the 1500m without stopping to walk at some point. I started following a program recommended by the NHS called Couch to 5k, which aims to get you from doing no exercise at all to running 5k over the course of 9 weeks. I’ve just finished week 8, and my last run was 4.9k in 28 minutes. I’m literally running further and for longer than I ever have in my entire life. Why? To prove to myself that I can.

I knew I needed to work some exercise into my routine, but I never feel that I have the time. I knew running took the least time and would burn a lot of calories quickly, that I could do it in the park next to my house so there’d be no travel time built in, but I ruled it out because ‘I can’t run long distances’. In the aftermath of Steve Jobs’ death I started questioning any and all assumptions that started with ‘I can’t’.

As you might expect, I also applied this approach to my game development. By this point I had 4 projects on the go. That was at least 2, maybe 3 too many, so I started by questioning why I had so many. I wasn’t ready to throw in the towel on Sphero, I still believed it was a genuinely fun game, I just had to get the level design right. My lighting engine was maybe 70% done, but the bugs I’d hit had made me lose momentum, and I’d let them beat me. My mobile game was just not any fun, and my native XNA port was horrifically over-ambitious.

So I started cutting down my projects. My lighting engine was the closest to completion, I was genuinely learning a lot about shaders, and so it was fulfilling its goals. The best way to get that project off the books was to finish it, so that would be my first priority.

My mobile game was DOA. The only reason I hadn’t written it off was that I didn’t want to give up on a project. But in this case, it was dead weight. The whole point of prototyping was to weed out the ideas that are no good, and this was one of those. Holding on to it would do me no good. So I declared it scrapped, with no intention of ever returning to it.

My native XNA port was tricky. The reasons for starting it were based on conflicting objectives. On the one hand, porting Sphero assumed both that I’d finished it, and that I’d be staying an indie developer. On the other, I wanted to do it to get C++ experience to get into the industry. It was an embodiment of my own indecision over my future path in games development. So, reluctantly, I put it into a perma-hiatus. I may one day resurrect it, but only if I actually need a C++ XNA port.

That left me with 2 projects: my lighting engine, which I’d work on first, and then back to Sphero. It was important to me to get the lighting engine finished first. I wanted to prove to myself that I could actually finish a project that I’d designed myself from the ground up, not just a clone of an existing game.

2-3 months on, and I’ve finished the lighting engine. The source is on CodePlex, and I’m in the process of writing an accompanying tutorial series to guide others through how to develop their own. I’ll be posting the series up here as I complete it.

And then, come the new year, it’ll be back to Sphero. I had toyed with the idea of taking up a new project that would help me bring Sphero to more platforms than just the Xbox and Windows, and might also help speed up the process of developing Sphero, but I’ve resisted the temptation for now. Again, if the reasons are right, I might consider it, but I’m done with changing projects to try and get myself out of a rut.

So, without further ado, I shall shortly be posting the first (and possibly second) tutorial in my new series on developing a 2D lighting/ shadow engine. I hope other people can get as much from it as I did developing it.


It’s been some time since I wrote here, and hopefully I’ll soon have time to fill you all in on what I’ve been up to the last couple of months, but before I do that, I want to comment on the current state of XNA, and my opinion as to its future.


Earlier this week, Microsoft kicked off its BUILD conference, looking at Windows 8, the new Metro UI, and all of the new technologies that sit underneath it. As the keynotes and session videos started to appear online, there was one technology conspicuous by its absence: XNA. Naturally Twitter started to get worried.

On the second day of the conference, in a session on developing DirectX games for Metro UI, some brave attendee asked the question: What about XNA?

The response was essentially: You can’t use XNA with Metro UI.

Later on Giant Bomb posted an official statement from Microsoft: http://www.giantbomb.com/news/the-future-of-xna-game-studio-and-windows-8/3667/

The statement said (reprinted from Giant Bomb):

“XNA Game Studio remains the premier tool for developing compelling games for both Xbox LIVE Indie Games and Windows Phone 7; more than 70 Xbox LIVE games on Windows Phone and more than 2000 published Xbox LIVE Indie Games have used this technology. While the XNA Game Studio framework will not be compatible with Metro style games, Windows 8 offers game developers the choice to develop games in the language they are most comfortable with and at the complexity level they desire. If you want to program in managed C#, you can. If you want to write directly to DirectX in C++, you can. Or if you want the ease of use, flexibility, and broad reach of HTML and Javascript, you can use that as well. Additionally, the Windows 8 Store offers the same experience as the current App Hub marketplace for XNA Game Studio, providing a large distribution base for independent and community game developers around the world.”

Why people are worried

The upshot of all of this is the following – XNA games can still be made for Windows 8, but only as Desktop apps, not Metro apps.

This also means that XNA games can be listed in the Windows 8 app store, but won’t be sold through it, instead the listing will link to an external website of the developer’s choice to allow users to buy the game.

This is an obvious barrier to people buying XNA games, especially when you consider that both casual games built with HTML5/ Javascript and high performance games built with DirectX 11 will have access to the Metro UI, and so can be sold directly in the app store. Why would someone click through to a site they might not have heard of, and fill out their credit card details, when the app store already knows their details and they can buy safely with a single click?

The final blow for XNA devs is that Microsoft have announced that the ARM version of Windows 8 will only support Metro UI apps, not Desktop apps. So if you were hoping to bring your XNA game to a Windows-powered tablet audience, you’re out of luck.

You’d be forgiven for thinking that this clearly shows Microsoft is abandoning XNA as a future technology for gaming on its Windows platform. It also calls into doubt whether XNA will be supported on future versions of Windows Phone, and on the next version of the Xbox.

So that brings us to our next question:

Why isn’t XNA supported with Metro Apps?

It seems to many that if Microsoft was serious about XNA as a technology then this was a prime opportunity to make XNA a first class citizen in their Windows ecosystem. Both large and small studios have been using XNA for their games on Windows Phone 7, and more and more successful indie titles on Steam are using XNA as well (Terraria, Magicka, and most recently Bastion, to name only a few). Surely allowing XNA developers to build games for a new generation of Windows powered tablets is a no brainer?

Turns out, it’s not as simple as you might think. The reason? I think it comes down to one thing: DirectX 9.

XNA is built on top of DirectX 9. DirectX 9 is now a pretty old technology, and Microsoft has decided that it will not work with Metro UI. Personally I agree with this decision; DirectX 9 games are still built in order to reach Windows XP users, and, more importantly, because it allows code sharing with Xbox 360 builds. However, the speculation in the gaming press is that the Xbox vNext is in development, and that we’ll probably see an announcement at some point next year.

So if DirectX 9 doesn’t work with Metro UI, then by extension neither does XNA. And let’s face it, Microsoft was never going to spend time and effort supporting DirectX 9 with Metro UI just for XNA games. It would mean including a DX9 runtime in the ARM version exclusively for XNA games, and probably a million other headaches I’ve not even thought of.

Why I’m not worried (yet)…

Strangely, since finding out why XNA isn’t supported in Metro UI, most of my fears that XNA was dead have faded away. I’m going to try and explain why.

Let’s think about this another way. What would Microsoft have needed to do in order to support XNA with Metro Apps out of the box? The way I see it they had 2 options:

They could have supported DirectX 9, but there are plenty of reasons they wouldn’t want to do that (see above).

Or they could have re-written XNA to run on DirectX 11 under the hood.

Let’s unpack that second option for a moment. The Xbox 360, and probably Windows Phone 7, use some form of DirectX 9. That means that either a DirectX 11 version of XNA would be Windows only, and we’d still need to use the current XNA version for developing on Xbox and the phone, or else the API would need to be static and a DirectX 11 code path would need to be put in place for Windows.

That would be a fair amount of work, but you might think that it’s worth it if Microsoft is serious about XNA. Unless you consider the Xbox vNext.

Most likely the Xbox vNext will run on DX11 or some form of it. If Microsoft plans to put XNA on Xbox vNext (as I hope they do if they’re serious about it), then it would make sense to do a full DX11 rewrite of XNA at that point. They can’t do it now, because I doubt the software stack for the new Xbox is anywhere near being nailed down yet.

So in conclusion, I believe the lack of Metro support for XNA means one of two things. Either our worries are justified, and Microsoft plans to cut XNA loose OR Microsoft is actually really serious about XNA as a future technology, and is waiting until it can be rewritten for the Xbox vNext and Windows 8 at the same time.

Until we know if XNA is going to be on the Xbox vNext, I haven’t given up hope.

Design-time: first area map

July 28th, 2011


It’s been a while since my last update for one reason or another. I took some time out from working on Sphero to start up a small side project. I won’t elaborate on it here now, as I’m not sure if it’s going to go anywhere. If it does then I’ll be sure to post an update 🙂

I started back on Sphero a bit over a week ago, and I’ve finally turned my hand to designing the map/ puzzles for the game.

The game is essentially a cross between a Metroidvania and a puzzle platformer. The world is split into 5 areas: the Forest, the Ice Wastelands, the Mine, the City, and the Core. Each of the first four areas contains one of the 4 totems, each of which unlocks one of the gameplay mechanics (double-jumping, turbo speed, wall-crawling and er, being on fire!).

The two images below show what I’ve designed so far. The first is a general guide for the 5 areas, and the second is the detailed design for the Forest area. The overlay with arrows/ words etc is just a guide so that I know when I need to design the puzzles in such a way that the player will need a certain ability to proceed:

You may have noticed the little yellow section marked ‘ship’ in the first image. You’ll just have to wait and see what that’s all about… 😉

The map might not look like much, but it’s taken a lot, lot longer to design these puzzles than I’d ever expected. I seem to be saying this a lot lately, but design is hard! Given that a puzzle platformer lives or dies by the quality of its puzzles, its balance between being challenging enough and not being frustrating, and its learning curve, there’s a lot to consider, and I’d be very surprised if I don’t have to make significant changes to all of the puzzles I’ve designed so far before I’m through, but it’s a start.

Anywho, that’s all for now, I’ll post again when I have the map for the next area finished.

Catch you next time.


And as promised, a video showing off switching mode with the right thumb-stick, Crysis style. It should probably be noted that all the artwork, including the interface for mode switching, is just place-holder art at the moment.

A quick note on what each of the modes are: Black is normal, Red currently is the same as normal (but will be on fire!), Grey allows a double jump (Air), Blue increases movement speed (Lightning), and Brown allows wall crawling (Rock).

You’ll just have to use your imaginations until I get some proper art work in place!

Here’s the video, enjoy!

YouTube Preview Image
Copyright © Rabid Lion Games. All rights reserved.