Rabid Lion Games

Dilemma

May 13th, 2013

Firstly, I’ve been working steadily on Blip Legend for a few months now. I’ve prototyped lots of puzzle elements and have started building levels. Unity is a dream to work with once you get used to it, especially combined with 2D Toolkit. Generally everything is going well, but I haven’t got to the point of having something to share yet.

And that’s the problem.

Despite my feeling that this game was of a manageable scope for my first commercial title, the pace of development is just too slow. I estimate that I put in between 20 and 30 hours a week, and it’s taking me ~2 weeks to complete a small area of the game. I estimate that at a minimum I’ll need 36 areas, and that doesn’t include boss battles.

What’s more, I’ll probably want to remake everything at least once more on average because I’m not happy with the quality. Totalled up, that’s 72 more weeks of development time in a best case scenario. The reality could be double that, because stuff always takes longer than expected.

By this point, I’ve realised that I’m staring down the barrel of 18 months to 3 years dev time on my first game. It’s not impossible, but everything I’ve read says alarm bells should be ringing.

So what are my options?

Well right now I’m creating all content in the game myself, including art, music, and sound effects. That’s easily taking up 75% of my dev time, so getting others in to handle those would speed things up significantly.

The problem is that there is a LOT of content and I’m not in a position to pay people up front for work. Without a proven track record I’m unlikely to be able to get quality help for profit share, and the same goes for crowd funding.

I could drastically shorten the game, but quite frankly it’s already going to be a short experience. Cutting it further would mean that it just wasn’t worth the effort.

The final option is to put this project on hold and start again with a game that’s smaller in scope for my first release. I have a game in mind that would be perfect – low art requirements, smaller game, purely based on a couple of interesting mechanics. I genuinely think I could get to the point of a build ready for play testing by the end of 2013, if not earlier.

It sounds like a no-brainer, but I really don’t like the idea of putting my current project on hold.

Dilemma!

(Don’t) show me the money

March 2nd, 2013

Micro-transactions. They’ve been in the news a lot in the last few days. First EA CFO Blake Jorgensen claimed that “Consumers are enjoying and embracing that way of the business.” Then, after the inevitable community backlash, Cliff Bleszinski jumped into the fray, defending game developers’ right to make money from their products, and calling out the community for demonising EA but holding Valve (who have embraced micro-transactions in games such as Team Fortress 2) up as a shining example of a model game developer.

I can’t offer a counter argument as a game developer, since I’ve never (yet) actually released a game. However I can offer my opinion as a gamer, and so that’s what I’ll do.

So here’s the thing: I wouldn’t buy a game that includes micro-transactions. Before I explain that statement, I should point out that I’m not the typical ‘core’ gamer that publishers like EA are targeting. For one, I don’t play multiplayer games. I’m strictly a single-player person, occasionally playing a co-op game like Portal 2 with my better half. I like immersive, usually story-driven games where it’s just me and the pocket universe that the game developer has created.

Because of this, the Team Fortress 2 model of micro-transactions is immediately out. Paying real money for cosmetic changes to a character becomes far less appealing if you’re the only one that will ever see them. It’s kind of like spending lots of money on looking good when you’re a hermit – it just doesn’t make sense. Instead we’re left with the other two popular models of micro-transactions, which I shall call Pay-to-play and Pay-to-win.

The first of these is the kind of game where you pay to get more content. This could be paying for extra levels, or for more turns in a given time period in a turn-based game. The problem with this is that, when you purchase the game in the first place, you’re not *actually* purchasing the game. It’s more like putting a deposit down on a car: yes, you can enjoy it to a certain extent, but eventually, if you want more, you need to pay more. And depending on how the micro-transactions are set up, it could be impossible to know exactly how much the game is going to cost you when you buy it. To some people that’s not a problem, but it puts me off getting the game in the first place.

Why? Because if I’m deeply immersed in a game and then get told that to continue the illusion I need to pay more money, I’m dragged out of that fantasy world and made to consider that most ugly feature of reality: do I have enough money to pay for this? Personally, I play games to escape from reality, so I’m not going to buy a game where an integral part of it requires regularly considering something as grounded in reality as my bank balance.

The other category is Pay-to-win. These games usually offer in-game items that actually make the game easier to play, in exchange for real money. A recent example is Dead Space 3. In some cases you can also earn these items by progressing through the game; in others you can only purchase them with real money.

If you can only purchase these items with real money, then they fall into either the ‘I haven’t bought the whole game’ problem of Pay-to-play, or the ‘there’s no point in vanity in a single player game’ problem, depending on whether or not the item substantially changes the way that the game plays. If you can get the item anyway by, say, grinding up levels in an RPG-esque way, then what I’m essentially being told by the developer is ‘you can pay some money and skip this boring bit of the game’. But if this bit of the game is boring, and you’ve acknowledged that by letting me pay to skip it, then why is it in the game at all? If the answer is ‘to encourage players to spend money’ then all you’re going to do is leave me feeling like I’m being scammed. And I don’t play games to feel like I’m being shaken down for cash.

So there you have it: one gamer’s view on why, if your game has micro-transactions in it, I won’t be buying it. Perhaps if you hide them away, make them easy to ignore, and they don’t alter the game at all, then I might overlook the fact that your game has them. But if that’s the case, then surely that defeats the purpose of having them in the first place?

Either way, if micro-transactions are the future of AAA, then I want no part of it.

Start your engines

January 17th, 2013

I’m still a couple of months off starting/ restarting my game in earnest, but I’ve started planning out how I’m going to keep development time to a minimum.

My biggest consideration is the engine I will use. Here, I’m fighting a battle.

I’m a programmer, and although I’m working hard at being a better game designer, I will always enjoy writing my own tools and engine from scratch. The goal of using an off-the-shelf engine is the complete opposite of that: to leave me needing to write as little code as possible. But I love writing code.

So it’s been a struggle to objectively evaluate the engine options out there. I’m hoping that going through my thought process out here in the open will keep me honest, and stop me from straying from my objective: actually finishing a game.

 

So what are my requirements for an engine?

Essential criteria:

1) Has to handle ‘proper’ 2D – whilst 2D can be ‘faked’ with a low-FOV perspective projection, I’m aiming at pixel-perfect platforming, which is significantly harder without an orthographic projection

2) Must support 2D physics

3) Has to use either C++ or C# as the development/ scripting language – these are the two languages I know the best, and becoming proficient in a different language would eat into development time

4) Must have tools that either provide the facilities I need – placing arbitrary sprites and colliders in the scene and creating/ editing an arbitrary 2D surface for the ‘ground’ – or are easily extensible enough for me to quickly code these facilities myself (or add them through 3rd party plug-ins).

5) Must be robust – As a one person team with a full time non-game job my time for supporting the game and bug fixing will be limited, so I’ll need a base that I can be confident isn’t full of bugs.

6) Must be able to handle per-sprite shaders for certain effects that are integral to the gameplay.

7) Must be able to handle large numbers of particles.

 

Desirable Criteria:

1) To support custom post-process shaders.

2) To be cross-platform.

3) To use a familiar API.

4) Is used ‘in the wild’ so that obscure bugs that only occur in certain hardware configurations are already documented.

 

The contenders:

There are 3 paths I’m considering:

A. Unity3D

B. SunBurn

C. Rolling my own using DXUT, and DirectXTK

I’ve already eliminated other engines that don’t really meet my essential criteria. But I’ve kept option C, which technically doesn’t meet any of the criteria since it doesn’t exist yet, but it could meet all but number 5, and given that I would know the source code, bugs in the engine code are not as terminal as they are with a 3rd party binary-only engine.

It should come as no surprise, given my introduction to this post, that my heart is screaming to go with option C. But, in the spirit of trying to be objective, let’s have a look at each option in turn against my criteria.

Unity3D:

Essential criteria:

Unity can handle orthographic 2D out of the box. 2D physics seems to work just fine by adding constraints to the existing 3D physics engine. C# is the recommended scripting option. The tools are easily extensible, and in fact between the 2D Toolkit and Smooth Moves plugins I might not even need to build custom tools at all. It certainly seems robust (and the bugs that do exist are usually well documented given the sheer number of users Unity has). It has support for per-sprite shaders in its material system. It has a built-in particle system, but its performance is unknown to me at the moment.

So that’s 6.5-7/7 on essential criteria (the half is for the particles since it’s an unknown).

Desirable criteria:

Custom post-process effects are a Pro-only feature – I won’t be paying $1,500 for Pro just for those unless I feel the game *really* needs them. Unity is the definition of a cross-platform engine. The API is brand new to me, so I will be starting from scratch there. It is commonly used, so I’m hopeful that obscure bugs will be google-able.

Score: 2/4

Not bad overall, but let’s have a look at SunBurn.

SunBurn:

Essential Criteria:

SunBurn is perfectly capable of orthographic 2D. I could either use a constrained BEPU or integrate Farseer Physics for 2D physics. It’s C#. The tools are not straightforward to extend, but they are extensible – it’s just some work, and there are no existing plugins providing the functionality I need. It seems robust, and the team has a strong middleware history. Per-sprite shaders are straightforward. It’s high performance, so lots of particles should be fine, but again it’s untested by me.

Score: 6-6.5/7

Desirable criteria:

Custom post-process effects are available. SunBurn is sort of cross-platform at the moment, and they’ve hinted that other platforms are in the pipeline. The XNA part of the API is familiar to me, but the rest isn’t. It’s not that widely used, certainly not to the same extent as Unity, and even less so on PC.

Score: 2/4

That’s pretty tight between SunBurn and Unity. There’s also nothing in cost between them, as I already have a SunBurn license and I’d be using Unity’s free version.

Finally, rolling my own:

Custom engine:

There is no point comparing this against the criteria. With a blank piece of paper it could go above and beyond all of the essential criteria, even robustness, given enough time. It could even meet most of the desirable criteria, barring the ‘used in the wild’ criterion. But that’s the point: the purpose of using an engine is to save time. To roll my own when there are existing engines that can do exactly what I need, or can be made to with little effort, is just unjustifiable.

Conclusion?:

Between Unity and SunBurn, there’s not a lot in it. Overall I’m drawn to Unity as it’s more commonly used, and for a smallish investment I could save weeks, if not longer, on writing my own editor and 2D extensions.

Despite this, I’m drawn heavily to rolling my own engine. At my core I think I’ll always be a programmer and want to get elbow deep in code. And, if I’m honest, I think it would be more fun.

But fun isn’t my goal, getting a game finished and out the door is. I suspect that I’ll pick up a side project of writing my own game engine (something of a resurrection of the Alta Engine that I started this blog to write about all that time ago!) but my focus will be on getting my game done in Unity.

Development is due to start in April, and I’ll be blogging every step of the way.

Watch this space!

2013 – Looking forwards

January 1st, 2013

And that was 2012. It was something of a consolidation year for me. I finished up my lighting tutorial series, completed my Masters (including a 6 month project writing a Garbage Collector, which was my first medium-sized C++ project), and learnt a lot more about C++, OpenGL, DirectX, Box2D, assembly, programming, and computers in general.

One thing that was notable by its absence was any work on actual games. The main reason for this was that I spent a lot of time thinking about my motivation for making games, and what my goals were. I’ve always struggled with the design side of gamedev, not to mention art and audio. My strength has always been programming and solving interesting problems with code, and I’ve not pushed myself beyond that, generally accepting that I’m not a good designer and treating game design as a means to an end (i.e. doing something interesting in code).

As a result, I all but concluded that I’d be better off trying to get a job at a AAA studio than I would be being an indie. That way I could focus on programming and get to a point where making games is the ‘day job’ much faster. If that was my goal, then my focus needed to be on creating an impressive and polished portfolio, rather than fun, interesting games.

I’ve had a few nagging doubts about this decision, however. For one, I enjoy making games – actually seeing them come to life and having the satisfaction that whatever small amount of progress I made was down to me and me alone. I also felt like this was a cop-out: I’d found something that was hard and decided to work around it rather than work at being better at it. But my biggest doubt was whether I would even enjoy working in the AAA industry. The horror stories are too prevalent to be entirely exaggeration, the pay is terrible compared to my current job, and job security seems non-existent.

For a while I considered trying to team up with a designer and work on their ideas in my spare time. That still has an appeal, but it doesn’t get rid of that nagging feeling that I’m running away from design just because it’s hard.

So over Christmas I’ve spent a lot of time soul searching, reading, watching, and listening to people talk about game design, in particular Jonathan Blow and his and Marc Ten Bosch’s IndieCade talk on one design aesthetic. And I’ve come out the other side feeling as though design *is* something I can do if I work at it, and that in fact in my last prototype (Sphero, now renamed Blip Legend as a working title) I had the embryo of a game with a lot of potential, but I was being too mechanical with its design and spending too long focusing on interesting technology.

So, for 2013, I will:

Rebuild the Blip Legend mechanics in Unity

Get a vertical slice of the game implemented as early in the year as possible

Focus on the design of the puzzles above all else, not worrying about fun, but making the game interesting

Be in a position to bring an artist on board towards the end of the year.

It’s a tall order given that I have a fairly demanding day job, but I’ve finally decided that this really is what I want. It may take me another 10 years to get there, but I’m determined that one day I will be indie full time.

Wish me luck!

Avast ye!: Piracy

December 22nd, 2012

***Warning: Long opinion post. If you want the tl;dr version, scroll down to the bottom for the ‘So what’s the answer?’ section.***

 

Games are in the news again this week because of the tragic events in the US and the NRA pointing the finger at games (among other media) for glorifying violence.

I try to make it a point not to get involved in topical issues because temperatures are generally running high, people will repeat the arguments that they have read in the media for or against, and people quickly become entrenched, knowing in their heart of hearts that it’s their argument which is correct, and everyone else is an idiot or lying outright.

However, I thought now would be a great time to write up some thoughts on another controversial issue that affects our industry – Piracy.

 

I’ll start with full disclosure: I have never pirated a game. Ever. That’s not to say I’ve never pirated *anything*, but never a game. The reason for that is simple: I will never run an executable from a source I don’t trust on my PC, and you can’t get a much more untrustworthy source than torrents/ cyber-lockers where people post cracked executables. It’s that simple. No moral high-ground, no holier-than-thou, just being sensible with the security of my machine. So what’s my interest in piracy? Two reasons. Firstly, as a gamer, I am affected by the steps companies take to keep their games from being pirated (including Ubisoft’s adorable ‘always connected’ DRM). Secondly, as a developer who *will* one day release games for sale, it is inevitable that one day my game will be pirated.

 

What is Piracy?

So, now that’s out of the way, let’s start from the beginning. What is piracy?

Traditionally piracy is no more than armed robbery at sea (with some kidnapping thrown in for good measure). Pirates would hijack ships carrying cargo, steal the cargo, probably kill the crew, and leave.

So how do we get from shenanigans on the high-seas to downloading a dodgy copy of Call of Duty? On this one, the internet comes up blank, but it’s fairly clear that the term ‘piracy’ has been used to describe the act of copyright infringement (an important term we’ll come back to later) for around 400 years (according to the font of all knowledge, Wikipedia). Originally it was applied to people who were illegally copying and printing books.

If I were to guess I’d say that the term piracy was used not in relation to the act of printing the books, but rather the ‘hi-jacking’ of the right to copy the books and using it for financial gain, in the same way that a traditional pirate would hi-jack a ship in order to use its cargo for their own financial gain.

In the case of games, what we’re really talking about is copyright infringement. This means that legally only certain people have the right to make and distribute copies of specific material, but others copy/ distribute it anyway.

 

Is Piracy theft?

Piracy, or rather copyright infringement, is not theft. And that’s not my opinion, that’s the considered opinion of the US Supreme Court in Dowling v. United States (1985). The following is an extract from the judgement which makes the distinction between copyright infringement and theft:

“The phonorecords in question were not “stolen, converted or taken by fraud” for purposes of [section] 2314. The section’s language clearly contemplates a physical identity between the items unlawfully obtained and those eventually transported, and hence some prior physical taking of the subject goods. Since the statutorily defined property rights of a copyright holder have a character distinct from the possessory interest of the owner of simple “goods, wares, [or] merchandise,” interference with copyright does not easily equate with theft, conversion, or fraud. The infringer of a copyright does not assume physical control over the copyright nor wholly deprive its owner of its use. Infringement implicates a more complex set of property interests than does run-of-the-mill theft, conversion, or fraud.”

The point is here that nothing is taken from the copyright holder. They are not deprived of any property. If you walk into a shop and steal a loaf of bread, that shop will lose the money that they would have made from selling that loaf of bread. If you copy a game and give the copy to your friend, you have not directly taken anything from the copyright owner. 

 

Piracy = lost sales

“But wait!” I hear you cry “If someone downloads an illegal copy of a game they’re depriving the copyright holder of a sale, so they’ve basically stolen that money from the copyright holder!”

It’s an interesting argument, but there’s a big assumption there: that the person would have purchased the material legally had they not illegally downloaded it. The question is, can we legitimately make that assumption? The answer has to be no. Let’s consider a group of people that we can safely say this doesn’t apply to: Students.

I can personally vouch for the fact that there are plenty of students and teenagers who are flat broke and download several new releases a month (films, games, music, everything). These people do not have the means to pay for the material, and so the argument that they would have paid for the material if they couldn’t get it illegally cannot apply. You might argue that they probably would have eventually purchased the material, but that argument can also be dismissed. People who consume material at this rate could never ‘catch-up’ in terms of purchasing all of that material at a later date, as in order to afford to buy it they probably have a job, and so won’t have enough time to get through a game, two movies, and 4 episodes of their favourite TV shows a week. That’s without taking into account the rate at which new material comes out. Whichever way you look at it, the argument that these people would have paid for the material just doesn’t stack up.

So, am I saying that piracy never leads to lost sales? Not at all. There will always be some people who would pay if they had to, but choose to pirate to save money. For some this ‘saved’ money might go on other copyrighted material that they wouldn’t have bought if they had paid for the material they downloaded, but it could equally have been spent on a takeaway, or an extra flutter on the horses.

But even where someone would have bought the material otherwise, is it always a lost sale? No. A good friend of mine has been watching the TV show ‘Fringe’ illegally on-line. Knowing that it was something her father would enjoy as well she bought him the box set of the first 3 seasons for his birthday. Had she bought them for herself she would have still reached the conclusion that her father would enjoy it, but she would have lent him her copy, rather than buy a second box set for him.

 

So what does this mean?

Quite simply, it means that, although copyright holders lose some money through piracy, no one can ever know how much. I dare say research has been done surveying illegal downloaders to ask them whether they would have bought the material they downloaded if they couldn’t get it for free, but there are several factors that would be very difficult to capture accurately, such as the available disposable income to purchase the material, what it was spent on otherwise, loss of ‘secondary’ sales (e.g. word of mouth, gift purchases) as a result of not having downloaded the material etc.

What can be measured, or at least estimated, is the number of times a piece of material has been downloaded illegally. We’ve all read ridiculous figures in the media from publishers/ record labels who claim that piracy has ‘cost’ them some amount which turns out to be the cost of the material multiplied by the number of illegal downloads. That sort of stunt helps no one, because people know it’s not representative, which makes it look like the publisher is trying to pull the wool over the eyes of the public.

 

Are there other ways that piracy can cause harm other than lost sales?

For games, absolutely. If you have any features in your game like dedicated multi-player servers or centralised high score tables, or if your entire game is run on servers you have to pay for, then piracy has a direct cost, because those players are costing you money in keeping the servers running without giving anything back. You might have to buy or rent extra server space or bandwidth so that people who haven’t paid for your game can play. This is bad for everyone in the long run, because if enough people pirate that game the costs could simply get too high for the developer/ publisher to carry on supporting it.

 

So, what’s the answer? (or tl;dr)

If I knew that, I’d probably have so many job offers that the offer letters wouldn’t fit through my door. At some point soon I hope to write up a survey of the different solutions that have been presented to the world so far, covering technical solutions like DRM, different business models like Free to Play and Freemium, legal solutions such as tougher laws and cracking down on illegal downloaders, and new funding models like Kickstarter. Here though I want to draw the logical conclusion from the discussion above.

There are some people that, if they couldn’t illegally download games, would buy them. Of them some would buy them at full price, some would wait for a sale, and others would buy them but not buy some other game as a result. There are also many people that would simply do without. We have no way of knowing what that split looks like, and we probably never will.

Any solution that is put in place to tackle piracy runs the risk of alienating people who might have bought your game. I know specifically of two PC gamers who stopped playing the Assassin’s Creed franchise because of Ubisoft’s ‘Always connected’ DRM, and I know another who refuses to use Steam. What we, as developers, publishers, copyright holders, need to do is judge how many customers we may lose from trying to tackle Piracy, vs how many of the ‘Would have bought’ pirates will be nudged into paying for our game. Given that we don’t have a clue how many people are in either camp, whatever we do is a gamble. If we don’t have overheads per person playing our game (i.e. servers that we have to pay for etc) then maybe, just maybe, it would be a better business decision to take the bird in the hand, and focus our energies instead on making content that people want to buy.

I’m baaaack

November 25th, 2012

Wow, longest break from updating the blog yet!

There’s a good reason for that. I spent from March until September working pretty much non-stop on the final project for my part-time Masters, which was a Garbage Collection library designed with game development in mind (at some point I’ll be open sourcing the code).

Since then I’ve built a new PC and been taking the opportunity to start looking at what I want to do in the game development world from here. I should *hopefully* soon have a pretty good Masters degree in Computer Science, and so soon I’m going to have to make the choice: AAA or indie.

That decision will decide what projects I focus on from here on in. If I’m aiming to go down the AAA route, I need to spend the next year or so building a solid portfolio (I only have my free time, since I have a full-time day job). The portfolio pieces would probably mostly be C++ tech demos, since the tools/ engine side of programming is where I’d want to end up, but I’d probably need some good gameplay pieces in there as well.

If I go indie then the next step is to jump into my first commercial project. If I continue on solo then that would probably be resurrecting my ‘Sphero’ project in Unity. The other option is to look for an established team or someone else who wants to start something up and has some awesome ideas. That would let me focus on the programming side of things, but comes with the pressure of being responsible for bringing someone else’s vision to life.

So lots of decisions to make. In the meantime I’m honing my programming skills in C++, because whatever I end up doing, knowing a ‘close(ish) to the metal’ language will help me when working in higher level languages. I’ve learnt so much working on my Garbage Collection library about the nitty-gritty of how languages like C++ and C# work under the hood that I don’t want to stop there.

So I’m reading the books I should have read years ago (Stroustrup, Meyers, Sutter, GoF etc.), and diving into DirectX and OpenGL to get a feel for what engines like Unity and frameworks like XNA are actually doing behind the scenes.

All of this means it might be quiet here for a while until I finally decide which way I want to jump.

 

A question came up on the forums/ twitter about a Linear Burn effect. The basic effect can be defined by the formula Final = Color1 + Color2 – 1, clamped to 0 – 1 as usual. If the two textures you’re using to sample Color1 and Color2 from use the same texture coordinates, then this is very straightforward, but what if you want to use different coordinates for the two textures, e.g. you want to rotate and scale the textures independently?
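As a quick reference, here’s a minimal CPU-side sketch of that formula in C# (the function is mine, purely for illustration – handy for sanity-checking the blending trick below against expected values):

// Per-channel Linear Burn: a + b - 1, clamped to the 0 - 1 range as usual.
float LinearBurn(float a, float b)
{
    return MathHelper.Clamp(a + b - 1f, 0f, 1f);
}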

This may cause you problems if you want to write a shader that uses spritebatch, because spritebatch only sets one set of texture coordinates in the pixel shader. If you specify a second set of texture coordinates as a parameter in your pixel shader, they will have the value (0, 0) whatever you do with spritebatch, which isn’t very helpful.

Also, you might be looking to create this effect on Windows Phone 7, in which case you don’t have the luxury of writing custom shaders. So can we achieve the same effect without using shaders at all? The answer is yes, we can achieve this with a combination of additive and subtractive blending using the normal spritebatch shader and a RenderTarget2D. However, it’s not *quite* that simple.

The main issue with not using shaders is that the backbuffer (or a normal render target with Color format) can only hold values in the range 0 – 1. If Color1 + Color2 is greater than 1, it is clamped to 1. So if we were to break down our Linear Burn into two stages like so: temp = Color1 + Color2, final = temp – 1, then we’d have a problem for certain values of Color1 and Color2. E.g. let’s have a look at what happens when Color1 and Color2 have components that are set to 0.75:

Our original formula gives:

0.75 + 0.75 – 1 = 0.5.

Our two stage formula gives:

0.75 + 0.75 = 1.5 => 1 (clamped because of the limitations of the render target).

1 – 1 = 0.

So clearly this does not give us the same result as our original formula. However, we can be sneaky to get around this.

Let’s rewrite our original formula like this:  final = 2 * (0.5 * (Color1 + Color2 – 1)). This can be re-written as final = 2 * (0.5 * (Color1 + Color2) – 0.5).

So we can now split this up into 3 stages:

temp = (0.5 * Color1) + (0.5 * Color2)

temp2 = temp – 0.5

final = temp2 + temp2

Each of these stages we can easily do with simple additive and subtractive blending, and we can guarantee the value of temp does not exceed 1 and so does not get clamped.

Now, you might be worrying about what happens when we subtract 0.5. Surely, you might say, this might become negative and so get clamped to 0. Well, yes it might, but only when the original formula would be clamped to 0.

Note that our original formula gives a negative answer (which would be clamped to 0) exactly when Color1 + Color2 < 1. This implies that (0.5 * Color1) + (0.5 * Color2) < 0.5, which means temp2 in our split-up formula will also be negative and get clamped to 0. Since 0 + 0 = 0, our final value will still be 0, matching our original formula.

Can temp2 ever get clamped to 0 when our original formula wouldn’t have? No.

temp2 is less than 0 only when (0.5 * Color1) + (0.5 * Color2) < 0.5, which implies that Color1 + Color2 < 1, and hence the original would have been clamped as well.

Therefore, our split-up formula gives us exactly the same answer as our original. (Checking our 0.75 example: temp = 0.375 + 0.375 = 0.75, temp2 = 0.75 – 0.5 = 0.25, and final = 0.25 + 0.25 = 0.5, matching the original.) Let’s have a look at some code snippets to see how to implement this.

First up we’ll need a special blend state:

 

BlendState subtractive;

 

Which we initialize like so:

 

subtractive = new BlendState();
subtractive.ColorBlendFunction = BlendFunction.Subtract;
subtractive.ColorDestinationBlend = Blend.One;
subtractive.ColorSourceBlend = Blend.One;
subtractive.AlphaBlendFunction = BlendFunction.Add;
subtractive.AlphaDestinationBlend = Blend.One;
subtractive.AlphaSourceBlend = Blend.One;

 

There are two different subtractive blend functions. Effectively we have the choice between FinalColor = SourceColor – DestinationColor, or FinalColor = DestinationColor – SourceColor. In our case we’ll be clearing the back buffer to (0.5, 0.5, 0.5, 1), so we’ll want SourceColor – DestinationColor, which is the function given by BlendFunction.Subtract.
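For reference, XNA also exposes the other ordering as BlendFunction.ReverseSubtract, so if you ever need FinalColor = DestinationColor – SourceColor instead (not what we want here), the only change would be:

// Hypothetical alternative, not used in this sample:
// FinalColor = DestinationColor - SourceColor
subtractive.ColorBlendFunction = BlendFunction.ReverseSubtract;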

We’ll also need a Color set to (0.5, 0.5, 0.5, 1), which we create like so:

 

Color halfColor;

 

And initialize like so:

 

halfColor = new Color(new Vector3(0.5f));

 

Finally we’ll need two RenderTarget2Ds to draw to:

 

RenderTarget2D target1;
RenderTarget2D target2;

 

And initialize (I’m assuming this is a full-size effect, if not then you might need to tweak these to meet your needs):

 

target1 = new RenderTarget2D(GraphicsDevice, GraphicsDevice.Viewport.Width, GraphicsDevice.Viewport.Height);
target2 = new RenderTarget2D(GraphicsDevice, GraphicsDevice.Viewport.Width, GraphicsDevice.Viewport.Height);

 

For the purposes of these snippets I’m assuming your two textures are called tex1 and tex2, and that their source rectangles are rec1 and rec2. Since we’re only dealing with SpriteBatch as normal you can alter anything about the SpriteBatch.Draw() call to match your needs.

So what does our draw call look like? Something like this:

 

GraphicsDevice.SetRenderTarget(target1);
GraphicsDevice.Clear(Color.Transparent);
spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Additive);
spriteBatch.Draw(tex1, rec1, halfColor);
spriteBatch.Draw(tex2, rec2, halfColor);
spriteBatch.End();

GraphicsDevice.SetRenderTarget(target2);
GraphicsDevice.Clear(halfColor);

spriteBatch.Begin(SpriteSortMode.Deferred, subtractive);
spriteBatch.Draw(target1, target1.Bounds, Color.White);
spriteBatch.End();

// Switch back to target1: a render target can't be sampled while it's
// still bound, and we want the final result in target1 for the composite below.
GraphicsDevice.SetRenderTarget(target1);
GraphicsDevice.Clear(Color.Transparent);

spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Additive);
spriteBatch.Draw(target2, target2.Bounds, Color.White);
spriteBatch.Draw(target2, target2.Bounds, Color.White);
spriteBatch.End();

GraphicsDevice.SetRenderTarget(null);
GraphicsDevice.Clear(Color.CornflowerBlue);

spriteBatch.Begin();
//Draw the parts of the scene that lie behind your Linear Burn sprites
spriteBatch.Draw(target1, target1.Bounds, Color.White);
//Draw the parts of the scene that lie in front of your Linear Burn sprites
spriteBatch.End();

 

Again, you’ll have to adapt this to fit into the rendering of the rest of your scene, but that should give you an idea of what it should look like. I’ve added a sample on CodePlex, linked below, that takes the following two textures:

 

 

And outputs this as the result:

 

 

Here’s the link to the download:

 

Linear Burn sample

 

Enjoy!

 

Well, it’s been a long haul, but we’ve finally got here! In the last part we finished writing our lighting system and got it to build. Now we can finally start using it. This is also the part where, if you just want to use the system rather than understand how it works, you’ll see how to do that.

Once we’ve written the code to create the demo we saw way back in the first part of the series, we’ll discuss ways that the system could be improved, both in optimisations to improve performance/ resource usage, and in extra features that we could add to improve the image quality of our final scene.

 

First up, Game code!

Fire up our Visual Studio solution one last time and open up the Game1 class. This currently has some auto-generated XNA code in it. Leave that there.

We’ll start by adding the fields we’re going to need. Add the following to the top of the class:

 

Texture2D Background;
Texture2D MidGround;

List<PointLight> pointLightHandles;
List<SpotLight> spotLightHandles;

public LightRenderer lightSystem;

 

These are just the textures we’ll be using to draw, a list to hold each of our types of light (we’ll only be creating spot lights in this example, but you should create both to make sure both shaders are working), and our lighting system itself. Next up we head to the Game1() constructor and add the following code after the code that’s already there:

 

graphics.PreferredBackBufferWidth = 1280;
graphics.PreferredBackBufferHeight = 720;

spotLightHandles = new List<SpotLight>();
pointLightHandles = new List<PointLight>();

lightSystem = new LightRenderer(graphics);

 

All we’re doing is setting the screen resolution (feel free to set it to whatever resolution you prefer), and creating our light lists and lighting system. Next we go to Initialize(). Add this at the top of the method:

 

lightSystem.Initialize();
lightSystem.minLight = 0.5f;
lightSystem.lightBias = 3f;

spotLightHandles.Add(new SpotLight(
    new Vector2(GraphicsDevice.Viewport.Width/2f, GraphicsDevice.Viewport.Height / 2f), 
    Vector2.UnitY * 1.0001f, 0.8f, 1.5f, 1.25f, 500f, Color.Blue));
spotLightHandles.Add(new SpotLight(
    new Vector2((GraphicsDevice.Viewport.Width/2f)+100, 
    (GraphicsDevice.Viewport.Height / 2f) + 100),
    Vector2.UnitY * -1.0001f, 0.8f, 1.5f, 1.25f, 500f, Color.Red));

lightSystem.spotLights.Add(spotLightHandles[0]);
lightSystem.spotLights.Add(spotLightHandles[1]);

 

Here we’re calling Initialize() on the lightSystem to set up all the internals of the system, setting a couple of the parameters that we need to specify, and creating two spot lights 100 pixels apart, one blue and pointing down, the other red and pointing up. The other parameters we’ve set to values that seem to work. Feel free to play around with them to get different effects.

Next we head to LoadContent() and add the following after the auto-generated code:

 

Background = Content.Load<Texture2D>(@"Textures\Background");
MidGround = Content.Load<Texture2D>(@"Textures\Midground");

lightSystem.LoadContent(Content);

 

Here all we’re doing is loading our two textures, and calling LoadContent() on our lighting system so that it can load the effect files that it needs from the Content project.

Now we get to the meatier stuff. First, the Update() method. In this we add some code to make our spot lights move according to the left and right thumbsticks, and rotate based on the A and B buttons. This code is really just showing how to manipulate spot lights rather than how to interact with our Lighting system. Add the following code before the call to base.Update():

 

if (GamePad.GetState(PlayerIndex.One).ThumbSticks.Left.X < -0.25f)
{
    spotLightHandles[0].Position.X -= 0.25f * gameTime.ElapsedGameTime.Milliseconds;
    if (spotLightHandles[0].Position.X < 0)
        spotLightHandles[0].Position.X = 0;
}
else if (GamePad.GetState(PlayerIndex.One).ThumbSticks.Left.X > 0.25f)
{
    spotLightHandles[0].Position.X += 0.25f * gameTime.ElapsedGameTime.Milliseconds;
    if (spotLightHandles[0].Position.X > GraphicsDevice.Viewport.Width)
        spotLightHandles[0].Position.X = GraphicsDevice.Viewport.Width;
}
else if (GamePad.GetState(PlayerIndex.One).ThumbSticks.Left.Y > 0.25f)
{
    spotLightHandles[0].Position.Y -= 0.25f * gameTime.ElapsedGameTime.Milliseconds;
    if (spotLightHandles[0].Position.Y < 0)
        spotLightHandles[0].Position.Y = 0;
}
else if (GamePad.GetState(PlayerIndex.One).ThumbSticks.Left.Y < -0.25f)
{
    spotLightHandles[0].Position.Y += 0.25f * gameTime.ElapsedGameTime.Milliseconds;
    if (spotLightHandles[0].Position.Y > GraphicsDevice.Viewport.Height)
        spotLightHandles[0].Position.Y = GraphicsDevice.Viewport.Height;
}

if (GamePad.GetState(PlayerIndex.One).Buttons.A == ButtonState.Pressed)
{
    spotLightHandles[0].direction = Vector2.Transform(spotLightHandles[0].direction,
         Quaternion.CreateFromAxisAngle(Vector3.UnitZ, 
             0.0015f * gameTime.ElapsedGameTime.Milliseconds));
}

if (GamePad.GetState(PlayerIndex.One).ThumbSticks.Right.X < -0.25f)
{
    spotLightHandles[1].Position.X -= 0.25f * gameTime.ElapsedGameTime.Milliseconds;
    if (spotLightHandles[1].Position.X < 0)
        spotLightHandles[1].Position.X = 0;
}
else if (GamePad.GetState(PlayerIndex.One).ThumbSticks.Right.X > 0.25f)
{
    spotLightHandles[1].Position.X += 0.25f * gameTime.ElapsedGameTime.Milliseconds;
    if (spotLightHandles[1].Position.X > GraphicsDevice.Viewport.Width)
        spotLightHandles[1].Position.X = GraphicsDevice.Viewport.Width;
}
else if (GamePad.GetState(PlayerIndex.One).ThumbSticks.Right.Y < -0.25f)
{
    spotLightHandles[1].Position.Y += 0.25f * gameTime.ElapsedGameTime.Milliseconds;
    if (spotLightHandles[1].Position.Y > GraphicsDevice.Viewport.Height)
        spotLightHandles[1].Position.Y = GraphicsDevice.Viewport.Height;
}
else if (GamePad.GetState(PlayerIndex.One).ThumbSticks.Right.Y > 0.25f)
{
    spotLightHandles[1].Position.Y -= 0.25f * gameTime.ElapsedGameTime.Milliseconds;
    if (spotLightHandles[1].Position.Y < 0)
        spotLightHandles[1].Position.Y = 0;
}

if (GamePad.GetState(PlayerIndex.One).Buttons.B == ButtonState.Pressed)
{
    spotLightHandles[1].direction = Vector2.Transform(spotLightHandles[1].direction,
        Quaternion.CreateFromAxisAngle(Vector3.UnitZ, 
            0.0015f * gameTime.ElapsedGameTime.Milliseconds));
}

 

This should all look familiar if you’re used to moving objects around in XNA. You might not be completely familiar with Quaternion rotation, but all we do is specify the angle we want to rotate by and the axis that we rotate around.

Finally we get to the part of the API that we designed way back in part 3. Find the Draw method, delete the auto-generated call to GraphicsDevice.Clear() and add the following code to the beginning of the method:

 

lightSystem.BeginDrawBackground();
spriteBatch.Begin();
spriteBatch.Draw(Background, 
    new Rectangle(0, 0, GraphicsDevice.Viewport.Width, GraphicsDevice.Viewport.Height), 
    Color.White);
spriteBatch.End();
lightSystem.EndDrawBackground();

lightSystem.BeginDrawShadowCasters();
spriteBatch.Begin();
spriteBatch.Draw(MidGround, 
    new Rectangle(0, 0, GraphicsDevice.Viewport.Width, GraphicsDevice.Viewport.Height), 
    Color.White);
spriteBatch.End();
lightSystem.EndDrawShadowCasters();

lightSystem.DrawLitScene();

 

This is basically exactly the API we described back in part 3. If any foreground sprites need to be drawn they should be drawn after the call to DrawLitScene(). And that’s it, we’ve written our last line of code. Now for the moment of truth. Hit F5 or Build & Run and see your light system in action!

The code for the finished solution is at the link below:

 

Finished Solution

 

Optimisations and Improvements

 

So we have our basic light system working. The question then is: now what? There are several optimisations we could make to improve performance if you’re already pushing your PC/ Xbox to its limits. For starters, the code could use a tidy-up. Because a lot of the system was exposed to game code while I was developing it, many fields are public that don’t need to be.

The biggest single performance optimisation we could make involves combining the Unwrap stage with the creation of the occlusion map. We would do this in the following way. At the moment we are creating a full texture with all of the rays unwrapped on it, and then we draw each row of that texture on top of each other using our special blend state. Instead, we could create those rows from scratch using a special version of our unwrap shader and draw them on top of each other using our blend state. This would save us one draw call per light, and the memory of our large unwrap texture.

The main issue with this is that we need to tell the new version of our unwrap shader which row we are currently drawing. If we do this using an effect parameter then we would effectively be using a separate draw call for each row of the texture, which would be several orders of magnitude worse than our current method. So we need to find another way to pass this to the shader. The way I was intending to do this was to encode the row number inside the Color parameter for spriteBatch. There are several ways we could do this, but I won’t go into them here. There is plenty of material on the internet about this problem.
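To sketch the idea anyway (illustration only, untested – numRows, rowRect, and unwrapSource are made-up names): the row index could be packed into, say, the red channel on the CPU side, with the shader recovering it as something like round(input.Color.r * 255). Because the Color is baked into each sprite’s vertex data rather than set as an effect parameter, spriteBatch can still batch the rows together rather than issuing one draw call per row.

// Tag each row's draw with its row index via the Color parameter.
// Assumes no more than 256 rows, since a colour channel holds 8 bits.
for (int row = 0; row < numRows; row++)
{
    Color rowId = new Color(row, 0, 0); // red channel carries the row number
    spriteBatch.Draw(unwrapSource, rowRect, rowId);
}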

You may also need to learn a bit about vertex shaders, as when I attempted this technique it didn’t quite work with the normal spriteBatch vertex shader. I’ll leave it as an exercise for the reader :)

 

Other micro-optimisations can be made, such as moving some of the shader parameters out of the Game loop, since we only really need to set them once at the beginning.
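For example (a sketch – spotLightEffect here stands in for however you hold a reference to the effect), values that never change between frames could be set once after loading instead:

// Set once, e.g. at the end of LoadContent(), rather than every frame.
spotLightEffect.Parameters["ScreenDimensions"].SetValue(
    new Vector2(GraphicsDevice.Viewport.Width, GraphicsDevice.Viewport.Height));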

 

As for improvements, there are many. The most obvious is to allow the lighting (but not shadows) to affect the shadow casters. This would give the scene a bit more realism, as at the moment the light behaves as though it is on the same level as the shadow casters, but we might want it to appear as though it is slightly closer to the gamer, i.e. a bit further ‘out of the screen’. This is not as simple as it sounds, and as far as I can tell it would require an extra pass over the scene, or else splitting the pointLight/ spotLight shaders into two steps: one to create the lights as though there were no shadow casters, and a second to black out those pixels that are obscured from the light by shadow casters. Then the non-blacked-out map could be used to light the shadow casters themselves.

The second big enhancement would be to adapt the pointLight and spotLight to use normal maps, making it seem like the background (and shadow casters) have depth and texture. There’s lots of material out there on normal mapping, and I don’t think this should be too difficult.

These are just a few suggestions for optimisation and improvements. Hopefully this series has started to give you the tools you need to start reading more on lighting techniques, and, more importantly, the confidence to start experimenting with shaders to create your own effects.

And if you come up with something cool, then share it with the community! We all learn from each other and we’re all better developers for it in the end.

 

That’s it for the series. I really hope you’ve enjoyed working through these tutorials, and that you’ve maybe learnt something along the way. I’d love to hear about projects people use the system in, or about your own systems that you’ve built based on what you’ve learnt here.

 

Greetings! We’ve come a long way so far, and we have a little way to go yet, but you’ll be pleased to know that this is the last part of the series where we’re actually working on our lighting system. By the end of this post you’ll be able to build your lighting engine (and work through all the inevitable typos that come from us not having built it up until now… oops!).

 

In this part we’ll be using the light map generated from the occlusion map that we wrote the code for in the last part of the series. In the last part we briefly talked about the idea of a spot light having an inner cone and an outer cone, or in our case, an inner and outer triangle. Pixels within the inner triangle are treated exactly as they would be if they were lit by a point light. The pixels in the outer triangle are treated differently. The power of the light in these pixels drops off linearly between the edge of the inner triangle and the edge of the outer triangle.

In other words, a pixel just on the edge of the inner triangle will be lit the same as if it were lit by a point light, a pixel on the edge of the outer triangle will have the value minLight, and a pixel halfway between the two (measured by angle) will have a value exactly half way between the two.
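To make that concrete, here’s the falloff factor as a small C# sketch. The names are illustrative, and it mirrors the lerpVal calculation the shader performs later (minLight, which the rest of the system accounts for, is left out):

// Falloff for a ray 'angleFromCenter' radians from the light's centre ray:
// 1 inside the inner triangle, dropping linearly to 0 at the outer edge.
float SpotFalloff(float angleFromCenter, float halfInnerArc, float halfOuterArc)
{
    float t = (MathHelper.Clamp(angleFromCenter, halfInnerArc, halfOuterArc)
        - halfInnerArc) / (halfOuterArc - halfInnerArc);
    return 1f - t;
}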

The steps for creating our light map are broadly similar to those that we used for creating our pointLight light map, with a few additions towards the end (note however that even where we have the same step the implementation may be different):

1) Calculating the vector from the light to the pixel

2) Determining which ray the pixel lies on

3) Sampling the value stored in the cell of the occlusion map that corresponds to that ray

4) Using the sampled value to calculate the distance along that ray to the closest shadow casting pixel

5) Determining if the pixel is visible

6) Calculating what the value of the light map for that pixel would be assuming that it is not obscured by a shadow caster AND that it is within the inner triangle

7) Calculating how far, on a scale of 0 – 1, the ray that the pixel lies on is between the edge of the inner triangle and the edge of the outer triangle, with everything inside the inner triangle up to its edge having the value 1 and everything outside the outer triangle up to its edge having the value 0.

8) Using the results from steps 5, 6, and 7 to determine the final lighting value for this pixel and returning it

 

Once again, we’ll explain the steps that aren’t self-explanatory (and weren’t covered in part 5) as we go along. For now though, let’s get started on our implementation!

 

The SpotLight shader

As you’re probably tired of doing by now, fire up our Visual Studio solution from the end of the last part and create a new Effect file, this time called SpotLight.fx. As normal, delete the contents and add the usual stub:

 

float4 PixelShaderFunction(float2 TexCoord : TEXCOORD0) : COLOR0
{

}

technique Technique1
{
    pass Pass1
    {
        PixelShader = compile ps_2_0 PixelShaderFunction();
    }
}

 

First up, let’s implement the first step: calculating the vector from the light to the pixel. You’ll be pleased to know this is identical to the code from this step in the PointLight shader, so either copy-paste from there or add the following code to the top of the function:

 

float2 screenCoords = float2(ScreenDimensions.x * TexCoord.x, ScreenDimensions.y * TexCoord.y);
float2 lightToPix = screenCoords - LightPos;
float pixDist = length(lightToPix);

Once again we drop the line in here to calculate the distance from the pixel to the light as it’s convenient.

The next step is more involved than it was for our point lights. Due to the fact that we may or may not need to add a bias to some of our angles for them to make sense (see the discussion at the end of the last part), we need to do some calculations to get to the correct angle for our ray. Add the following code to your shader and then we’ll discuss it:

 

float rayAngle = acos(dot(float2(0, 1), normalize(lightToPix)));
float leftOrRight = sign(-lightToPix.x);

rayAngle *= leftOrRight;
rayAngle += (1 - ((leftOrRight + 1) / 2.0f)) * AngleBias;

rayAngle = clamp(rayAngle, OuterMinAngle, OuterMaxAngle);

float occlusionU = (rayAngle - OuterMinAngle) / (OuterMaxAngle - OuterMinAngle);

 

Before we explain this, let’s quickly add the parameters for both this, and the last section of code to the top of the file:

 

float2 ScreenDimensions;

float2 LightPos;

float AngleBias;

float OuterMinAngle;
float OuterMaxAngle;

 

Right, so what were we doing in that snippet? First of all we were using the inverse cos function to find the angle of the ray in the same way we did for our point light shader. However, we can’t just use this value as-is. acos() returns an angle between 0 and PI. Before we can use this value we need to calculate whether we need to add our angle bias to it.

The first step in determining if we need to add our angle bias to the angle is to see whether this ray is to the left or the right of the light. As we discussed in the last part, those rays to the left of the light are treated in the same way as they were with our point light. Those to the right may need a bias adding to them depending on which direction the light is pointing. However, we already wrote the code to calculate the angle bias in the last part, so we don’t need to worry about that. All we need to know is whether our pixel potentially needs a bias adding to it (i.e. whether it is to the right of the light) and if so add the bias to it. Rather than using an if statement to accomplish this, I have again used some mathematical trickery to avoid that expense.

This code effectively adds the AngleBias (which may be zero, remember) to our angle if and only if the pixel lies on the right hand side of the light: for pixels on the right, leftOrRight is -1, so the factor (1 - ((leftOrRight + 1) / 2.0f)) works out to 1 and the full bias is added; for pixels on the left, leftOrRight is 1, the factor works out to 0, and nothing is added. There is one edge case we need to mention: what if the pixel is directly above or below the light (i.e. lightToPix.x == 0)?

Actually, our maths still holds. lightToPix.x == 0 if and only if the ray is pointing vertically up or down, i.e. if the angle is either zero or PI. Since sign(0) = 0, leftOrRight will be zero. If we substitute zero for leftOrRight we get the following:

 

rayAngle *= 0 // which equals zero

rayAngle += (1 – ((0 + 1) / 2.0f)) * AngleBias; // which cancels down to 0.5f * AngleBias

 

So our final value for rayAngle (before we clamp it in the next stage) is 1/2 of AngleBias. Now recall that if the light’s arc includes the ray that points straight down (i.e. the correct value for rayAngle = 0) then we want our AngleBias to be zero (since we need the angles to be continuous over 0). In that instance our formula gives us the correct answer (0.5f * 0 == 0).

On the other hand if our light’s arc includes the ray that points straight up (i.e. the correct value for rayAngle = PI) then we want our AngleBias to be 2 * PI in order for the angles to be continuous either side of the angle PI. Again, our code gives us the correct answer, as 0.5f * (2 * PI) == PI. Hence our code holds for our edge case and we don’t need to handle it as a special case.

The next step in our code is to clamp the angle to within the arc of the outer triangle of our light (i.e. between OuterMinAngle and OuterMaxAngle). This is because any pixel on a ray outside of these bounds gets treated in exactly the same way as one on the edge of them: it receives no light from the spot light. In this way we will get the correct value for the rest of our scene whilst not wasting any calculations testing each ray to see if it lies within the arc of the light.

Finally we convert this angle to the range 0 – 1 (with zero representing OuterMinAngle and 1 representing OuterMaxAngle) by linearly interpolating between them as normal. We could have used the built in lerp() function here, but it makes little difference.

The next step is identical to the equivalent step in the pointLight shader: we simply sample our occlusion map at the x coordinate calculated in the previous step:

 

float4 occlusionDist = tex2D(occlusionSampler, float2(occlusionU, 0.5));

 

As usual we need to add the occlusionSampler’s declaration to the top of the file:

 

sampler occlusionSampler : register(s0);

 

 

Next up is step 4, using this value to calculate the distance to the nearest shadow casting pixel. This is very similar to the code we used in the point light shader, with one minor difference:

 

occlusionDist.xyz = mul(occlusionDist.xyz, DiagonalLength);

float dist = occlusionDist.x;

 

And we add the DiagonalLength parameter to the top of the file:

 

float DiagonalLength;

 

 

The only real difference here is that we are only interested in the x channel. As mentioned before, we could in theory pack multiple spotLight occlusion maps into a single texture if we wanted, in which case we’d need to look at the other channels as well.

From here we deviate from the steps used in the point light shader. For starters, the next step is to determine the visibility of the current pixel. The difference here is that in the point light shader we used this as the conditional in an if statement to determine what color to return. Here we simply cache the value as a 1 (for visible) or a 0 (for not visible) to be used later:

 

float visible = (pixDist - Bias < min(Radius, dist)) ? 1 : 0;

 

And we’ll need to add Bias and Radius as parameters:

 

float Bias;

float Radius;

 

This is exactly the same test as in the point light shader, just returning a 1 or a 0 instead of being used inside an if statement.

Next up is step 6: calculating what the light value would be if the pixel were inside the light’s inner triangle. This is again identical to the point light shader:

 

float MaxLight = (1 - (pixDist / Radius)) * LightPow;

 

And again we’ll need to add a parameter at the top of the file, this time LightPow:

 

float LightPow;

 

On to step 7, and the last of our non-trivial steps. We effectively need to linearly interpolate our angle between the min/max angle of the inner triangle and the min/max angle of the outer triangle, where we use min if the angle is anticlockwise from the centre ray of the light, and max otherwise. Any ray within the inner triangle can be clamped to one or other of these intervals, since pixels at the edge of the inner triangle are treated the same as those that lie fully within it.

In fact, it’s much simpler if we can somehow use the fact that the light is symmetrical around its centre ray. We do this by subtracting the centerAngle from our rayAngle, and then throwing away the sign of the answer we get (i.e. taking the absolute value). We can then clamp this value between the distances from the centre ray to the edges of the inner and outer triangles respectively, before mapping it to the range 0 – 1, with 1 being the distance from the centre ray to the edge of the inner triangle, and 0 being the distance from the centre ray to the edge of the outer triangle. Or, in code:

 

float lerpVal = (clamp(abs(rayAngle - CenterAngle), HalfInnerArc, HalfOuterArc) - HalfInnerArc) / (HalfOuterArc - HalfInnerArc);

lerpVal = 1 - lerpVal;

 

And we have to add a whole bunch of parameters for that code to work:

 

float CenterAngle;

float HalfInnerArc;

float HalfOuterArc;

 

Finally we move on to our last step, using everything we’ve calculated so far (the maximum lighting value, the lerp value, and the visibility) to get our final lighting value for our pixel and return it:

 

float3 lighting = (visible * lerpVal * MaxLight);

return float4(lighting, 1);

 

And that’s it, our final line of shader code written! The full code for the shader file is set out below:

 

sampler occlusionSampler : register(s0);

float2 ScreenDimensions;
float DiagonalLength;
float2 LightPos;
float LightPow;
float Radius;
float AngleBias;
float Bias;
float OuterMinAngle;
float OuterMaxAngle;
float CenterAngle;
float HalfInnerArc;
float HalfOuterArc;

float4 PixelShaderFunction(float2 TexCoord : TEXCOORD0) : COLOR0
{
    // Step 1: find this pixel's position relative to the light, its distance,
    // and the (unsigned) angle of the ray it sits on.
    float2 screenCoords = float2(ScreenDimensions.x * TexCoord.x, ScreenDimensions.y * TexCoord.y);
    float2 lightToPix = screenCoords - LightPos;
    float pixDist = length(lightToPix);
    float rayAngle = acos(dot(float2(0, 1), normalize(lightToPix)));

    // Step 2: sign the angle (left half-plane positive) and apply the bias so
    // the angles are continuous across the light's arc.
    float leftOrRight = sign(-lightToPix.x);
    rayAngle *= leftOrRight;
    rayAngle += (1 - ((leftOrRight + 1) / 2.0f)) * AngleBias;

    // Step 3: clamp to the outer arc and sample the occlusion map.
    rayAngle = clamp(rayAngle, OuterMinAngle, OuterMaxAngle);
    float occlusionU = (rayAngle - OuterMinAngle) / (OuterMaxAngle - OuterMinAngle);
    float4 occlusionDist = tex2D(occlusionSampler, float2(occlusionU, 0.5));

    // Steps 4 and 5: recover the distance to the nearest shadow caster and
    // test whether this pixel is visible from the light.
    occlusionDist.xyz = mul(occlusionDist.xyz, DiagonalLength);
    float dist = occlusionDist.x;
    float visible = (pixDist - Bias < min(Radius, dist)) ? 1 : 0;

    // Steps 6 to 8: attenuate with distance, fade between the inner and outer
    // triangles, and combine with visibility for the final value.
    float MaxLight = (1 - (pixDist / Radius)) * LightPow;
    float lerpVal = (clamp(abs(rayAngle - CenterAngle), HalfInnerArc, HalfOuterArc) - HalfInnerArc) / (HalfOuterArc - HalfInnerArc);
    lerpVal = 1 - lerpVal;
    float3 lighting = (visible * lerpVal * MaxLight);
    return float4(lighting, 1);
}

technique Technique1
{
    pass Pass1
    {
        PixelShader = compile ps_2_0 PixelShaderFunction();
    }
}

 

Now on to our LightRenderer code. Open up the LightRenderer class as normal. Jump to the PrepareResources() method and add the following code:

 

spotLight.Parameters["ScreenDimensions"].SetValue(screenDims);
spotLight.Parameters["DiagonalLength"].SetValue(screenDims.Length());
spotLight.Parameters["Bias"].SetValue(lightBias);

 

Then go to the CreateLightMap(SpotLight sLight) method. We left a comment here to indicate that there would be more parameters. Add them now by changing the method’s signature to the following:

 

private void CreateLightMap(SpotLight sLight, float lightDirAngle, float angleBias)

 

And add the following lines of code after the call to graphics.GraphicsDevice.Clear():

 

spotLight.Parameters["LightPos"].SetValue(sLight.Position);
spotLight.Parameters["LightPow"].SetValue(sLight.Power);
spotLight.Parameters["Radius"].SetValue(sLight.radius);
spotLight.Parameters["OuterMinAngle"].SetValue(lightDirAngle - (sLight.outerAngle / 2f));
spotLight.Parameters["OuterMaxAngle"].SetValue(lightDirAngle + (sLight.outerAngle / 2f));
spotLight.Parameters["CenterAngle"].SetValue(lightDirAngle);
spotLight.Parameters["HalfInnerArc"].SetValue(sLight.innerAngle / 2f);
spotLight.Parameters["HalfOuterArc"].SetValue(sLight.outerAngle / 2f);
spotLight.Parameters["AngleBias"].SetValue(angleBias);

 

Phew, that’s a lot of parameters! Finally, let’s head to the DrawLitScene() method and, with trembling fingers, write the last bits of code for our lighting system. Find the call to CreateLightMap(spotLights[i]) inside our spotLight loop and change it to the following:

 

CreateLightMap(spotLights[i], lightDirAngle, angleBias);

 

And we’re done! Now the moment of truth… Go to Build -> Build Solution… and admire the nice long stream of errors! Yep, I made several typos over the course of the series. We haven’t missed out any code, but where I swapped names around to try to make the code read more clearly, I introduced small errors that we’re only finding now. Oops! Fortunately I’ve linked a clean copy of the code below if you don’t fancy wading through and fixing the typos yourself. I’ve also gone back and corrected the earlier parts of the series, so if you started the series after this part was published, you may be wondering what on earth I’m going on about! It also means that if there are big differences between your code and the code linked below, you can go back to the appropriate part and figure out where you (or I) went wrong.

 

Part 8 Solution

 

Next time we’ll be writing some code in our Game class to actually use our Lighting system to create the demo that you saw all those weeks ago. I’ll also discuss some of the enhancements that you could make if you want to take your engine further and create some more ambitious effects with it.

We’re nearly there, see you back here soon for the final installment!

 

Welcome back!

Last time we looked at how to blur a light map to create soft shadows. We were looking at point lights at the time, but in fact the same method and shaders are used to blur all of our light maps, not just those for point lights.
In this part we’re moving away from point lights and looking at the other type of light that our lighting system will be able to handle: spot lights.

A very simple model of a spot light (and one often used in simple 2D lighting) is to treat it as a ‘slice’ of a point light. To achieve this, all we’d need to do is generate an occlusion map using our point light unwrap shader, and then write a spot light shader (in place of the point light shader) with just one extra test: determine whether the ray that the current pixel sits on is one of those covered by the spot light. If it is, continue as normal; if it isn’t, color the pixel black.

If that’s the effect you’re looking for, then feel free to have a crack at writing the spot light shader yourself (and remember to share it with the community when you’re finished :) ). For this series however, we’re going to be looking at a slightly more complex model of a spot light.
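
To make the ‘slice’ idea concrete, here's a minimal C# sketch of that extra test (all names here are illustrative, not code from this series):

 

class SliceModel
{
    // Light value for a pixel under the naive 'slice of a point light' model.
    // pointLightValue is whatever the point light shader would have computed;
    // minAngle and maxAngle bound the arc of the spot light.
    static float SliceSpotLight(float rayAngle, float minAngle, float maxAngle, float pointLightValue)
    {
        // Outside the arc: color the pixel black.
        if (rayAngle < minAngle || rayAngle > maxAngle)
            return 0f;

        // Inside the arc: behave exactly like the point light.
        return pointLightValue;
    }
}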

We essentially use a 2D version of the model described here: D3DBook

The basic idea is that there is an ‘inner cone’ or inner triangle in 2D, that gets the full power of the light, and an outer cone/ triangle in which the light power decreases the further from the centre of the light it gets.

To keep things simple the rate at which the light decreases in the outer triangle will be linear, however if you want a slightly more realistic effect check out the formulas & code in the link above.

But that’s all a problem for next time. For now we’re only worried about the unwrap shader for spot lights.

Technically there is nothing stopping us from using the point light unwrap shader as described above. However, this would be rather wasteful, as the vast majority of the occlusion map would go unused, storing rays that we’d ignore because they’re not in the arc of the light. Despite the waste, if I were starting over this is probably the approach I’d use, as it took me quite a while to work out how to code the method described below!

Instead we will use the whole texture to store an occlusion map for just the arc of the light. This has the effect of squeezing more rays into the arc of our light, in theory giving us a bit better accuracy.

So how are we going to do this? Actually it’s surprisingly similar to the unwrap shader for point lights, but with a few major changes:

1. We want our x texture coordinate to map from 0 – 1 to MinAngle – MaxAngle instead of 0 – PI (where MinAngle is defined as the angle of the leftmost ray if you are looking down the centre ray of the light away from the source, and MaxAngle is the angle of the rightmost ray).

2. We only store one occlusion map in the texture, rather than one in each of 2 different channels as we did with point lights.

3. We can’t guarantee which direction any of the rays point in (unlike our left/right divide for point lights), so we need to check all 4 edges of the screen for a collision with each ray to determine its length.

 

So let’s try to alter our unwrap shader for point lights to meet these requirements. Create a new Effect file called UnwrapSpotlight.fx, and delete its contents as normal.

First up, let’s take the code from Unwrap.fx and paste it in:

 

#define PI 3.14159265

float2 LightPos;

float TextureWidth;

float TextureHeight;

float DiagonalLength;

sampler shadowCastersSampler  : register(s0);

float4 PixelShaderFunction(float2 texCoord : TEXCOORD0) : COLOR0
{
    float rayAngle = texCoord.x * PI;

    float sinTheta, cosTheta;

    sincos(rayAngle, sinTheta, cosTheta);

    float2 norm1 = float2(-sinTheta, cosTheta);

    float2 norm2 = float2(sinTheta, cosTheta);

    float2 LightDist;

    if (cosTheta == 0)
    {
        LightDist = float2(TextureWidth - LightPos.x, LightPos.x);
    }
    else if (sinTheta == 0)
    {
        LightDist = abs((((cosTheta + 1) / 2.0) * TextureHeight) - LightPos.y);
    }
    else
    {
        float4 hit = float4(-LightPos.y / cosTheta, LightPos.x / sinTheta, (TextureHeight - LightPos.y) / cosTheta, (TextureWidth - LightPos.x) / sinTheta);

        hit = (hit < 0) ? 2 * TextureWidth : hit;

        LightDist = min(hit.wy, min(hit.x, hit.z));
    }

    LightDist = mul(LightDist, texCoord.y);

    norm1 = mul(norm1, LightDist.y);

    norm2 = mul(norm2, LightDist.x);

    float4 sample1 = tex2D(shadowCastersSampler, float2((LightPos.x + norm1.x) / TextureWidth, (LightPos.y + norm1.y) / TextureHeight));

    float4 sample2 = tex2D(shadowCastersSampler, float2((LightPos.x + norm2.x) / TextureWidth, (LightPos.y + norm2.y) / TextureHeight));

    return float4((sample1.a < 0.01) ? 1 : LightDist.x / DiagonalLength, (sample2.a < 0.01) ? 1 : LightDist.y / DiagonalLength, 0, 1);

}

technique Technique1
{
    pass Pass1
    {
        PixelShader = compile ps_2_0 PixelShaderFunction();
    }
 }

 

 

Next, we change it so that we’re only looking at the ray that pointed left.

 

float4 PixelShaderFunction(float2 texCoord : TEXCOORD0) : COLOR0
{
    float rayAngle = texCoord.x * PI;

    float sinTheta, cosTheta;

    sincos(rayAngle, sinTheta, cosTheta);

    float2 norm = float2(-sinTheta, cosTheta);

    float LightDist;

    if (cosTheta == 0)
    {
        LightDist = float2(TextureWidth - LightPos.x, LightPos.x); // note: still the point light version; we rework this case below
    }
    else if (sinTheta == 0)
    {
        LightDist = abs((((cosTheta + 1) / 2.0) * TextureHeight) - LightPos.y);
    }
    else
    {
        float4 hit = float4(-LightPos.y / cosTheta, LightPos.x / sinTheta, (TextureHeight - LightPos.y) / cosTheta, (TextureWidth - LightPos.x) / sinTheta);

        hit = (hit < 0) ? 2 * TextureWidth : hit;

        LightDist = min(hit.wy, min(hit.x, hit.z));
    }

    LightDist = mul(LightDist, texCoord.y);

    norm = mul(norm, LightDist);

    float4 sample = tex2D(shadowCastersSampler, float2((LightPos.x + norm.x) / TextureWidth, (LightPos.y + norm.y) / TextureHeight));

    return float4((sample.a < 0.01) ? 1 : LightDist / DiagonalLength, 0, 0, 1);

}

 

 

The reason we’ve chosen this ray is because of the way we defined MinAngle and MaxAngle. Imagine our spot light covers a 180 degree arc, with the MinAngle ray pointing straight down (our zero direction from when we were working with point lights). The rays of the spot light would then be exactly those of the left hand side of the equivalent point light, with MinAngle = 0 and MaxAngle = PI.

 

Next we change rayAngle so that it lies in the range MinAngle – MaxAngle:

 

float rayAngle = MinAngle + (texCoord.x * (MaxAngle - MinAngle));

 

 

We’ll also need to add parameters for MinAngle and MaxAngle at the top of the file:

 

float MinAngle;

float MaxAngle;

 

 

Now recall that we had to handle special cases when cosTheta == 0 or sinTheta == 0. Because we only ever had angles in the range 0 – PI, we were able to simplify the case when cosTheta == 0. Since our angles might now lie outside this range, we have to alter this case to handle all possibilities (it ends up very similar to our sinTheta case):

 

LightDist = abs((((1 - sinTheta) / 2.0) * TextureWidth) - LightPos.x);

 

I won’t go over the trigonometry involved, but essentially if cosTheta == 0 then the ray is pointing either directly left or directly right. If left, then sinTheta == 1; if right, then sinTheta == -1. Armed with that information you should be able to work out that the code gives LightDist as LightPos.x when the ray points directly left, and as TextureWidth - LightPos.x when it points directly right, which is exactly what we want.
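
If you'd rather check that claim numerically than work through the trigonometry, this throwaway C# snippet (purely illustrative, with example values) evaluates the expression for both directions:

 

using System;

class EdgeCaseCheck
{
    static void Main()
    {
        float textureWidth = 800f; // example screen width
        float lightPosX = 300f;    // example light position

        // sinTheta == 1: the ray points directly left.
        Console.WriteLine(Math.Abs((((1f - 1f) / 2f) * textureWidth) - lightPosX));    // 300 == LightPos.x

        // sinTheta == -1: the ray points directly right.
        Console.WriteLine(Math.Abs((((1f - (-1f)) / 2f) * textureWidth) - lightPosX)); // 500 == TextureWidth - LightPos.x
    }
}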

In the general case we need to test against all 4 sides of the screen to find the one our ray hits first. All that means for our code is that the length we’re looking for is the minimum of all four values in our hit float4, rather than separating out the top/bottom from the sides. We also need to change one of the four values, to account for the fact that for our point light the rays hitting the right hand side of the screen had their angles increasing in the opposite direction to those we’re dealing with in our spot light:

 

float4 hit = float4(-LightPos.y / cosTheta, LightPos.x / sinTheta, (TextureHeight - LightPos.y) / cosTheta, (LightPos.x - TextureWidth) / sinTheta);

hit = (hit < 0) ? 2 * TextureWidth : hit;

LightDist = min(min(hit.x, hit.y), min(hit.w, hit.z));

 

Finally we need to change the two lines that transform norm to give us the offset (in pixels) of our sample from the light. The reason is that the intrinsic function mul doesn’t work on single values, so we need to fall back on the good old * operator:

 

LightDist = LightDist * texCoord.y;

norm = norm * LightDist;

 

And that’s it! Well, sort of. That’s it for our shader, but I’ve left a rather glaring hole in all of this, which we’ll have to deal with in our LightRenderer code. We’ve already said, in a roundabout way, that for the rays to the left of the light we’ll be using the same angles as we did for point lights, with down having angle 0, left having angle PI/2, and up having angle PI. But what about the rays on the other side of the light? For point lights we used the same scale, because we dealt with each half separately, but clearly that won’t work for spot lights. We have 2 other options, each of which has its own issues.

We could say that the angles continue to increase past PI, meaning that for the rays on the right hand side, up will still be PI, right would be (3 * PI) / 2, and down would be 2 * PI. The problem with this, of course, is what happens if the arc of the spot light straddles the zero line. Then we’d have MinAngle somewhere between PI and 2 * PI, and MaxAngle between 0 and PI. Clearly our shader wouldn’t work in this scenario.

The other option is to instead extend the range of angles anti-clockwise from 0, so that for the rays on the right hand side of the light, down remains 0, right becomes (- PI) / 2, and up becomes -PI. However, here again we have a problem, this time if the arc of the light includes the up direction, as we’d have MinAngle somewhere in the range 0 – PI, and MaxAngle in the range (-PI) – 0. Again, our shader wouldn’t work in this scenario.
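
Before we look at the fix, here's a small illustrative C# sketch (hypothetical names, using System.Numerics in place of XNA's Vector2) that shows the two labellings side by side for a ray pointing directly right:

 

using System;
using System.Numerics;

class AngleModels
{
    // Signed angle of a normalized direction measured from 'down' (0, 1),
    // with the left half-plane positive, matching the convention in this series.
    static float SignedAngle(Vector2 dir)
    {
        float a = (float)Math.Acos(Vector2.Dot(dir, Vector2.UnitY));
        return a * Math.Sign(-dir.X);
    }

    static void Main()
    {
        Vector2 right = new Vector2(1, 0); // a ray pointing directly right

        float modelB = SignedAngle(right);                                   // -PI/2 in the -PI to PI model
        float modelA = modelB < 0 ? modelB + (2f * (float)Math.PI) : modelB; // 3*PI/2 in the 0 to 2*PI model

        Console.WriteLine(modelA); // ~4.712
        Console.WriteLine(modelB); // ~-1.571
    }
}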

To solve this problem, we need to use different methods of labelling the angles in different situations. To make this work we need to limit our spot light to a maximum arc of 90 degrees (which is fine, because it’d look weird otherwise!). Then we can determine which model to use based on the sign of the y component of the spot light’s direction (i.e. the ‘centre’ ray). If this ray is pointing ‘up’ at all (i.e. its y component is negative; recall that down is positive in texture coordinates) then we can safely use the model that ranges from 0 – 2 * PI. Otherwise we need to use the model that ranges from -PI – PI.

In order to switch models we need to test which quadrant of the circle around the light the spot light’s direction lies in, and add an appropriate bias to it to change the model that we’re using. We also need to take account of which quadrant MinAngle and MaxAngle lie in, to determine whether they need to have that bias added to them as well. Originally I wrote this as a mind-numbing pile of if/else if statements handling every permutation. In the end, however, I condensed it into a few lines manipulating the signs of the direction vector’s components, wrapped in two methods within the SpotLight class. Feel free to dissect them in your own time, or leave a comment if you need more explanation. I know this is far from the best way of achieving this result, but hey, it works….

Open up the SpotLight class and add the following code to the bottom of it:

 

public float GetAngleBias()
{
    // Angle between the light's direction and straight down (range 0 - PI).
    float diffAngle = (float)Math.Acos(Vector2.Dot(direction, Vector2.UnitY));

    // Acos returns NaN if float error pushes the dot product outside [-1, 1],
    // which can only happen when the direction is straight up or straight down.
    if (float.IsNaN(diffAngle))
        diffAngle = (float)(((Math.Sign(-direction.Y) + 1) / 2f) * MathHelper.Pi);

    // If the arc includes the straight-down ray, use the -PI to PI model (no bias)...
    if (diffAngle - (outerAngle / 2f) < 0)
        return 0;

    // ...otherwise use the 0 to 2 * PI model.
    return MathHelper.Pi * 2f;
}

public float GetBiasedDirAngle()
{
    // Signed angle of the centre ray from straight down, with the left
    // half-plane positive to match the shader's sign(-lightToPix.x).
    float lightDirAngle = Vector2.Dot(direction, Vector2.UnitY);
    lightDirAngle = (float)(Math.Acos(lightDirAngle) * Math.Sign(-direction.X));

    // Same guard as above, plus the vertical case where Sign(-direction.X)
    // is zero and would wipe out the angle.
    if (float.IsNaN(lightDirAngle) || direction.X == 0)
        lightDirAngle = ((Math.Sign(-direction.Y) + 1) / 2f) * (float)Math.PI;

    // Add the bias to right-hand (negative) angles only, shifting them into
    // the 0 to 2 * PI model when that's the model in use.
    float angleBias = GetAngleBias();
    lightDirAngle += (Math.Abs(Math.Sign(lightDirAngle))) * (angleBias * (1 - ((Math.Sign(lightDirAngle) + 1) / 2f)));
    return lightDirAngle;
}
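
As a quick sanity check, here's a standalone throwaway harness (my own, not part of the series' code) that copies the GetAngleBias logic, using System.Numerics in place of XNA's Vector2 and MathHelper, so you can see the two models being selected:

 

using System;
using System.Numerics;

class BiasCheck
{
    // A standalone copy of GetAngleBias' logic; direction must be normalized.
    static float GetAngleBias(Vector2 direction, float outerAngle)
    {
        float diffAngle = (float)Math.Acos(Vector2.Dot(direction, Vector2.UnitY));
        if (float.IsNaN(diffAngle))
            diffAngle = ((Math.Sign(-direction.Y) + 1) / 2f) * (float)Math.PI;

        if (diffAngle - (outerAngle / 2f) < 0)
            return 0f;              // arc includes straight down: -PI to PI model
        return 2f * (float)Math.PI; // otherwise: 0 to 2*PI model
    }

    static void Main()
    {
        float arc = (float)Math.PI / 2f; // a 90 degree spot light

        Console.WriteLine(GetAngleBias(new Vector2(0, 1), arc));  // pointing down: 0
        Console.WriteLine(GetAngleBias(new Vector2(0, -1), arc)); // pointing up: ~6.283
    }
}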

 

 

So how do we use these methods? Flick back to the LightRenderer class and find the UnwrapShadowCasters(SpotLight sLight) method. If you recall, we left a comment here to indicate that there were extra parameters to be added to this method. So, change the method’s signature to the following:

 

private void UnwrapShadowCasters(SpotLight sLight, float lightDirAngle, float angleBias)

 

And then within the method itself add the following after the line graphics.GraphicsDevice.Clear(Color.Transparent):

 

unwrapSpotlight.Parameters["LightPos"].SetValue(sLight.Position);
unwrapSpotlight.Parameters["MinAngle"].SetValue(lightDirAngle - (sLight.outerAngle / 2f));
unwrapSpotlight.Parameters["MaxAngle"].SetValue(lightDirAngle + (sLight.outerAngle / 2f));

 

 

Now find where we called this method inside DrawLitScene(), just inside the for loop that iterates over the spot lights. At the beginning of the loop add the following code:

 

float lightDirAngle = spotLights[i].GetBiasedDirAngle();

float angleBias = spotLights[i].GetAngleBias();

 

 

And then change the following line:

 

UnwrapShadowCasters(spotLights[i] /*,other params*/);

 

To:

UnwrapShadowCasters(spotLights[i], lightDirAngle, angleBias);

Finally, head to PrepareResources() and add the following code:

 

unwrapSpotlight.Parameters["TextureWidth"].SetValue(screenDims.X);
unwrapSpotlight.Parameters["TextureHeight"].SetValue(screenDims.Y);
unwrapSpotlight.Parameters["DiagonalLength"].SetValue(screenDims.Length());

 

And we’re done! We now have all we need to create an occlusion map for our spot lights.

As usual you can find the solution for the series so far up on CodePlex at the link below, so that you can compare your code:

 

Part 7 solution

 

Next time we’ll be adding the last of the code we need for our lighting system itself, specifically a shader to use this occlusion map to generate light maps for our spot lights. We’re nearly there!

 

Till then!

 

 
