Quote: "but how you attach this to the scene with your cube mapping still doesn't make complete sense to me"
The cube mapping commands just apply the cube map to texture stage 1 for use in the shader.
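On the fx side that just means declaring a cube sampler that picks up whatever is sitting in that stage. A minimal sketch (the names here are illustrative, not lifted from my demo):

texture ShadowCubeTex;            // fed by the cube map applied to texture stage 1

samplerCUBE ShadowCube = sampler_state
{
    Texture   = <ShadowCubeTex>;
    MinFilter = Point;            // point filtering - we're storing distances, not colours
    MagFilter = Point;
};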
Quote: "I take it there are 6 camera renders per light, one for each face of the cube map?"
Yes.
Quote: "Out of interest, do you think it's be possible to expand the FOV to 180, or maybe even the complete 360 if that's possible, perform fewer renders and extract 6 images from these larger perspectives?"
Interesting question. The advantage of using six images at 90 degrees FOV is that all the pieces of the scene fit together neatly, and shader cube map lookups do all the awkward projection maths for you. Even if you could fiddle with the FOV and get away with, say, four renders rather than six, I suspect the final image extraction/lookup stage would be too expensive. If you could render to triangular viewports you could capture the whole scene on the faces of a tetrahedron, though some kind of masking procedure would probably be needed. My gut feeling is that it would be a lot of work to get the idea working. It should be simpler to get the bugs out of DBPro ...
Quote: "Without seeing the code, I don't know fully get how you're rendering to specific channels"
Have a look at the attached simplified demo, which shows you how easy it is in DBPro. If you can't compile it, just look at the fx and dba code; it's very basic. The zip file contains a sample camera image where you can see the result of the red and green renders added together.
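In case the attachment goes missing, here's a rough sketch of the trick (hypothetical names, not the demo's actual code): the pixel shader writes the light-to-pixel distance into every channel, and the D3D9 ColorWriteEnable render state masks off all but the channel belonging to the current light, so successive renders into the same image fill different channels.

float4x4 WorldViewProj;
float4x4 World;
float3   LightPos;     // position of the light being rendered
float    MaxRange;     // scales distance into the 0..1 colour range

struct VSOut { float4 Pos : POSITION; float3 WorldPos : TEXCOORD0; };

VSOut VS(float4 Pos : POSITION)
{
    VSOut Out;
    Out.Pos      = mul(Pos, WorldViewProj);
    Out.WorldPos = mul(Pos, World).xyz;
    return Out;
}

float4 PS(VSOut In) : COLOR
{
    // Distance from the light, squeezed into 0..1.
    float d = length(In.WorldPos - LightPos) / MaxRange;
    return float4(d, d, d, d);    // the write mask picks the channel
}

technique RedLight
{
    pass P0
    {
        ColorWriteEnable = Red;   // only the red channel gets written
        VertexShader = compile vs_2_0 VS();
        PixelShader  = compile ps_2_0 PS();
    }
}

technique GreenLight
{
    pass P0
    {
        ColorWriteEnable = Green; // the second light goes into green
        VertexShader = compile vs_2_0 VS();
        PixelShader  = compile ps_2_0 PS();
    }
}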
Quote: "I like that approach because I think 3 of 4 dynamic lights mapped to separate colour components is plenty. More than 4 is a overkill I think."
Agreed.
But if we could get the DBPro bugs/limitations removed, then masochists could experiment with 16 lights and four cube maps, or worse.
After all, if you're in a street with lots of street lights you'd want dynamic shadows from all of them, regardless of the processing cost, wouldn't you?
Quote: "Also, how do the final shadow map renders become shadows from the camera's perspective? This is the bit I don't get. Does the shader calculate how the shadow map is applied to the final camera render, or does cube mapping handle it?"
Somewhat surprisingly, that's actually the easy bit. Cube map lookups in shaders are trivial. Suppose you need to decide whether a particular point in your main scene is in shadow. The shader knows the world space position of your point. It also knows the position of each light (assuming you remembered to tell it, of course). You then just calculate the direction of your world point from the light's position, i.e. simply the difference between the two sets of XYZ coordinates, and use that vector in your cube map lookup for the light. There's no need to normalize the direction vector; the hardware/DX9 does that automatically when looking up a cube map. The texture lookup tells you the distance of the nearest rendered point in that particular direction. If your object's point is further away than that rendered point then it must be in shadow for that light. You then just repeat that process for each light, using a different colour component or cube map as appropriate. So yes, all the real work is done in the shader; the cube mapping simply applies the cube map to a texture stage.
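In shader terms the whole test boils down to a few lines. A sketch (again with illustrative names, and assuming distances were scaled by the same MaxRange when the cube map was rendered):

float3      LightPos;    // world space position of the light
float       MaxRange;    // same scale used when rendering the cube map
samplerCUBE ShadowCube;  // distances rendered from the light's viewpoint

float ShadowFactor(float3 worldPos)
{
    // Direction of the point from the light - just the difference,
    // no normalization needed for a cube map lookup.
    float3 dir = worldPos - LightPos;

    // Nearest rendered distance in that direction (red channel here).
    float stored = texCUBE(ShadowCube, dir).r;

    // This point's own distance on the same 0..1 scale.
    float current = length(dir) / MaxRange;

    // A small bias stops surfaces shadowing themselves.
    return (current - 0.005f > stored) ? 0.0f : 1.0f;   // 0 = in shadow
}

For the multi-light version you'd read .g, .b or .a instead of .r, or switch to a second cube sampler, exactly as described above.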
I wonder if it's too late to ask IanM to look into this???