Quote: "I want to learn all this stuff, for myself"
Same here. Of course there may be lots of great shaders and techniques out there already, but I just want to make my own.
Quote: "Is there any chance you could briefly explain"
Sure. One of my intentions for this thread is to exchange some knowledge. It's a very complex subject, after all, so here you go:
Vertex Lighting:
One of the most basic lighting techniques, used heavily in the past. It works like this: light is computed on a per-vertex basis, so every vertex of a model/scene gets light applied to it. This is very fast since you're not working on pixels here, just on vertices, of which there are far fewer. The lighting calculations (distance, attenuation, ...) take place in the vertex shader. The drawback is that the quality of the light is only as good as the number of vertices, because the light just gets interpolated across each polygon between its vertices.
Vertex Lighting is used in DarkGDK by default.
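To make this concrete, here's a minimal sketch of a per-vertex point light in DX9-style HLSL (the kind of .fx shader DarkGDK loads). All the parameter names (WorldViewProjection, LightPosition, LightRange, ...) are just my own placeholders, not anything DarkGDK requires:

```hlsl
// Minimal per-vertex point light sketch (DX9-style HLSL / .fx).
float4x4 World;
float4x4 WorldViewProjection;
float3   LightPosition;   // point light position in world space
float3   LightColor;
float    LightRange;      // distance at which the light has faded out

struct VSOutput
{
    float4 Position : POSITION;
    float4 Color    : COLOR0;
};

VSOutput VS(float4 position : POSITION, float3 normal : NORMAL)
{
    VSOutput output;
    output.Position = mul(position, WorldViewProjection);

    // The light is evaluated ONCE PER VERTEX:
    float3 worldPos    = mul(position, World).xyz;
    float3 worldNormal = normalize(mul(normal, (float3x3)World));
    float3 toLight     = LightPosition - worldPos;
    float  dist        = length(toLight);

    float  diffuse     = saturate(dot(worldNormal, toLight / dist));
    float  attenuation = saturate(1.0 - dist / LightRange);

    output.Color = float4(LightColor * diffuse * attenuation, 1.0);
    return output;
}

// The pixel shader does no lighting at all -- it just receives the
// color interpolated between the triangle's three vertices.
float4 PS(float4 color : COLOR0) : COLOR0
{
    return color;
}

technique VertexLighting
{
    pass P0
    {
        VertexShader = compile vs_2_0 VS();
        PixelShader  = compile ps_2_0 PS();
    }
}
```

Notice that the pixel shader just passes the interpolated vertex color through; that interpolation is exactly where the quality loss on low-poly meshes comes from.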
Per-Pixel Lighting:
This goes one step further than Vertex Lighting. Here you do the light calculations on a per-pixel basis, which means they take place in the pixel shader. Because every pixel of the model/scene has to be calculated, this consumes a lot more processing power. But the results are much better and more precise, because it no longer matters how many vertices or polygons you have.
You can see this by comparing the two screenshots above: in the first one, light is distributed across the cube's faces in a somewhat strange and unrealistic way (interpolated from one vertex to another).
In the second image you can see that the light is distributed much more smoothly and evenly, even on low-poly objects like the cube.
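Here's the same point light moved into the pixel shader, again as a rough DX9-style HLSL sketch with placeholder names. The vertex shader now only passes geometry (world position and normal) down, and the actual lighting math runs per pixel:

```hlsl
// The same point light, but evaluated per pixel.
float4x4 World;
float4x4 WorldViewProjection;
float3   LightPosition;
float3   LightColor;
float    LightRange;

struct VSOutput
{
    float4 Position : POSITION;
    float3 WorldPos : TEXCOORD0;
    float3 Normal   : TEXCOORD1;
};

VSOutput VS(float4 position : POSITION, float3 normal : NORMAL)
{
    VSOutput output;
    output.Position = mul(position, WorldViewProjection);
    // Pass geometry down instead of pre-baked light:
    output.WorldPos = mul(position, World).xyz;
    output.Normal   = mul(normal, (float3x3)World);
    return output;
}

// The light is now evaluated ONCE PER PIXEL, so even a 12-triangle
// cube gets a smooth falloff across its faces.
float4 PS(float3 worldPos : TEXCOORD0, float3 normal : TEXCOORD1) : COLOR0
{
    float3 n       = normalize(normal); // re-normalize after interpolation
    float3 toLight = LightPosition - worldPos;
    float  dist    = length(toLight);

    float  diffuse     = saturate(dot(n, toLight / dist));
    float  attenuation = saturate(1.0 - dist / LightRange);

    return float4(LightColor * diffuse * attenuation, 1.0);
}

technique PerPixelLighting
{
    pass P0
    {
        VertexShader = compile vs_2_0 VS();
        PixelShader  = compile ps_2_0 PS();
    }
}
```

Note that the pixel shader has to re-normalize the interpolated normal, since interpolating unit vectors across a triangle shortens them.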
Deferred Rendering:
Ok, this is a bit more complicated, and I'm only just getting into it myself at the moment. I'll go into more detail later.
For now, just remember that this is a fundamentally different approach to rendering: you're separating (deferring) the rendering steps. First you process all the data you need (geometry, normals, depth, materials/specularity, ...) and write it to what is called a G-Buffer. Then, in a second step, you perform the lighting based on that data, but only in the areas that are actually affected by light (lights are rendered as actual geometry here, like spheres and cones). This basically allows you to have a much greater number of dynamic lights in a scene, and much more flexibility with them. But the programming isn't that easy, since you have to take care of everything that's normally done by the fixed-function pipeline.
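Since I'm still working through this myself, take the following only as a rough sketch of the first (geometry) pass: the pixel shader writes albedo, normal and depth to three render targets at once. The G-Buffer layout and all the names here are just one possible choice, not the way it has to be done:

```hlsl
// Rough G-Buffer (geometry) pass sketch with multiple render targets.
// Layout (one possible choice): RT0 = albedo, RT1 = world normal,
// RT2 = linear depth.
float4x4 World;
float4x4 WorldView;
float4x4 WorldViewProjection;
float    FarClip; // used to squeeze depth into 0..1

texture DiffuseMap;
sampler DiffuseSampler = sampler_state { Texture = <DiffuseMap>; };

struct VSOutput
{
    float4 Position : POSITION;
    float2 UV       : TEXCOORD0;
    float3 Normal   : TEXCOORD1;
    float  Depth    : TEXCOORD2;
};

VSOutput VS(float4 position : POSITION, float3 normal : NORMAL, float2 uv : TEXCOORD0)
{
    VSOutput output;
    output.Position = mul(position, WorldViewProjection);
    output.UV       = uv;
    output.Normal   = mul(normal, (float3x3)World);
    // Linear view-space depth, normalized by the far plane:
    output.Depth    = mul(position, WorldView).z / FarClip;
    return output;
}

struct PSOutput
{
    float4 Albedo : COLOR0; // material color
    float4 Normal : COLOR1; // world normal packed into 0..1
    float4 Depth  : COLOR2; // depth for position reconstruction
};

PSOutput PS(float2 uv : TEXCOORD0, float3 normal : TEXCOORD1, float depth : TEXCOORD2)
{
    PSOutput output;
    output.Albedo = tex2D(DiffuseSampler, uv);
    output.Normal = float4(normalize(normal) * 0.5 + 0.5, 1.0);
    output.Depth  = float4(depth, 0.0, 0.0, 1.0);
    return output;
}

technique GBufferPass
{
    pass P0
    {
        VertexShader = compile vs_2_0 VS();
        PixelShader  = compile ps_2_0 PS();
    }
}
```

The second pass would then render the light volumes (spheres, cones) and, for every pixel they cover, read these targets back, reconstruct the position from the stored depth, and do the actual lighting there.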
So, I hope this bit of info helps you.
Please consider that we haven't even touched on shadows here! Things get a lot more complex with those.
Quote: "will you be sharing any more of your shaders with us?"
Yes, but I'd like to finish things up first and build some small demos/usage examples around them, so people can actually learn from them.
Questions/suggestions are very welcome!
Now the plot thickens, the fps decreases, and the awesomeness goes through the roof.