Quote: "1) What kinds of datatypes are these? What can they hold? What are the REAL datatypes of their subvariables, if any?"
http://msdn2.microsoft.com/en-us/library/bb509647(VS.85).aspx
With all of these data types, I believe you aren't required to use all the components, e.g. for a float4 semantic like COLOR (should be COLOUR!!) you can just use a float2 if you don't require all 4 components. Be aware, though, that certain semantics such as COLOR carry saturated values (clamped to 0.0-1.0), whereas UV/W etc. aren't clamped.
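As a rough illustration (the names and the sampler here are hypothetical, not taken from the linked page), a pixel shader input could look like this:

sampler2D DiffuseSampler; // assumed: bound to a texture by the application

struct PSInput
{
    float2 UV  : TEXCOORD0; // only 2 of the 4 possible components declared
    float4 Col : COLOR0;    // COLOR0 arrives saturated (clamped to 0.0-1.0)
};

float4 PSMain( PSInput In ) : COLOR0
{
    // UV may hold any value the vertex shader wrote; Col has been clamped.
    return In.Col * tex2D( DiffuseSampler, In.UV );
}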
Quote: "4) This is the only texture defined for Bloom, which is understandable. Why can't the shader combine all three techniques to need only a single camera in DBP? Why make everything so complicated?"
For the sake of efficiency. In the case of tone mapping and the blur passes used for bloom, all the work is done in the fragment (pixel) shader, and when the full-screen quad is rendered its pixels are drawn left to right, then down a row, and so on. Say I'm drawing the first pixel: I apply tone mapping to it, then the X blur. Instantly there's a problem - the X blur requires me to sample to the left and right, but the pixels to the right haven't been tone mapped yet. So I'd either have to tone map every sample I take (lots of wasted GPU cycles), or do away with tone mapping in that pass and give it a pass of its own.
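To make that concrete, here's a minimal sketch of tone mapping done as its own pass; the sampler, the Exposure constant and the exponential curve are all illustrative assumptions, not the actual shader from the sample:

sampler2D SceneSampler; // assumed: camera 1's scene render target
float Exposure;         // assumed: exposure control set by the application

// Tone mapping runs on its own full-screen quad, so the blur passes
// that follow always read fully tone-mapped pixels.
float4 Tone( float2 UV : TEXCOORD0 ) : COLOR0
{
    float4 Col = tex2D( SceneSampler, UV );
    return 1.0 - exp( -Col * Exposure ); // simple exponential tone curve
}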
Great, now we don't have the sampling issue, so we can just combine the X and Y blur into one pass, right? No. Let's say I render a scene that has a one-pixel dot in the middle, and say my blur passes blur by 5 pixels each way. If I used a single pass, then as I scanned through the image I'd only be sampling to the left and right (by 5 pixels) as well as up and down. Because that dot is only a single pixel, my final blurred image would end up being a big + sign: the only time a sample lands on the dot is when the pixel currently being drawn is directly above or below it by 5px or less, or directly to its left or right, again within 5px. That would look highly undesirable.

So if you use 2 passes instead, doing 5 samples on either side per pass (the same cost as the single-pass scenario), the combined result effectively does the job of 100 samples per pixel (10 horizontal x 10 vertical) done in a single pass. To explain using the above scenario: I have a single lit pixel. As I render my quad and sample the rendered texture, the horizontal pass only picks up that pixel when I'm within 5px of it on the X axis, which leaves me with a - shape, a 10-pixel horizontal line. If I then do a vertical blur pass, I now have essentially 10 lit pixels which can each be blurred up and down by 5 pixels, so I end up with one bright pixel in the middle and a nice box of glow around it.
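For reference, a minimal sketch of the two blur passes (SceneSampler and TexelSize are assumed names; each pass is bound while drawing its own quad, with SceneSampler pointing at the previous pass's output):

sampler2D SceneSampler; // assumed: the previous pass's render target
float TexelSize;        // assumed: 1.0 / render target width (or height)

// Horizontal pass: 5 taps either side of the centre tap (11 in total).
float4 BlurX( float2 UV : TEXCOORD0 ) : COLOR0
{
    float4 Sum = 0;
    for( int i = -5; i <= 5; i++ )
        Sum += tex2D( SceneSampler, UV + float2( i * TexelSize, 0 ) );
    return Sum / 11.0;
}

// Vertical pass: identical, but run over the output of BlurX.
float4 BlurY( float2 UV : TEXCOORD0 ) : COLOR0
{
    float4 Sum = 0;
    for( int i = -5; i <= 5; i++ )
        Sum += tex2D( SceneSampler, UV + float2( 0, i * TexelSize ) );
    return Sum / 11.0;
}

That's 22 taps across the two passes versus the 100 or so an equivalent single-pass kernel would need, which is where the saving comes from.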
Also, the reason why so many cameras are used is that if you have camera 1 render the main scene to a texture, and you then texture a quad with that render target, you can't use the same camera to render the quad, because the camera would be reading from the very image it's outputting to. Or at least I believe that's the issue from the last time I tested it (I recall getting nothing but a black output). Either way, using multiple cameras makes the code easier to understand, as you can associate specific cameras with specific shader passes.
Quote: "5) If my understanding of the sample code is correct, then it captures the screen image and passes it to a plain in front of another camera passing it through the Tone technique, then that camera captures the iamge and passes it onto a plain in front of another camera after passing it through the Blur X technique, then that camera captures it and puts onto the plain in front of the real camera after passing it through the BlurY technique. Why is it that the shader can't do all of this in one go? Why are we taking such a round-about manner for this? Why would it be necessary?"
Answered above.
Quote: "6) Do we get performance increases out of doing it the round-about method? If so, then why don't we seperate BlurXA and BlurXB into two different camera renders as well? Wouldn't that increase performance?"
Answered above also. Using 3 cameras is far less efficient than using a single one, but there isn't much that can be done about it. Ideally there would be some variant of the Sync command where you could specify a single object ID to render, or a range, because most multiple-camera techniques are very slow in any realistic (non tech-demo) game; the Sync command (even Fastsync, and with excluded objects) adds considerable drag to render performance.
Also, another issue with most DBPro implementations of bloom and other post-processing (full-screen) effects is that you must render the scene to a second camera (i.e. render the complete visible scene twice) just to get the same general output. Some of Evolved's latest demos (and my own bloom one) got around this by using camera 1 to render the scene at the display mode resolution and applying the last bloom pass (blur Y) to a quad that camera 0 renders. All in all you only end up rendering your complete scene once plus a few quads, which is great; however, doing this adds some limitations. For instance, DBP currently lacks any native AA-enabling feature. You can force the GPU drivers to override these settings via your display control panel, but that doesn't affect camera-to-image render targets, so you can't hope to get AA in your scene. There are other issues too: some people's GPUs don't support non-square textures, which means you can't set camera 1 to render at the display mode (viewport) resolution. You can get around that by rounding up or down to the nearest power-of-two size, but this can result in impaired visual quality (for smaller RTs) or a loss of performance (larger RTs).
[edit, didn't read the last page]
Quote: "Sounds like you need to get Dark Shader. Among other things it gives you the additional set camera effect command which is useful for things like bloom as you suggest."
Does this not just force a quad with a shader in front of the designated camera? Something that could easily be done without the plug-in?
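For what it's worth, the heart of such an effect is just a screen-aligned quad with a pass-through vertex shader along these lines (illustrative names, and the quad's vertices are assumed to already be in clip space):

struct VSOut
{
    float4 Pos : POSITION;
    float2 UV  : TEXCOORD0;
};

// Pass-through vertex shader for a screen-aligned quad; the pixel
// shader bound alongside it does all the actual post-processing work.
VSOut FullScreenQuadVS( float4 Pos : POSITION, float2 UV : TEXCOORD0 )
{
    VSOut Out;
    Out.Pos = Pos; // vertices assumed already in clip space (-1..1)
    Out.UV  = UV;
    return Out;
}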
Quote: "Especially if it'll make Bloom 10x faster."
Why would it? My aforementioned idea of rendering a single object (the RT quad) would be the only way to really speed up shaders like bloom. And rendering a single object isn't the whole story: if you're only rendering a quad, you could also do away with the Z buffer for that scene, which would likely speed up rendering quite a bit.