
DarkBASIC Professional Discussion / Single geometry pass deferred rendering possible in DBP?

Rudolpho
Joined: 28th Dec 2005
Location: Sweden
Posted: 15th Jul 2013 18:54
I've been looking at deferred shading a bit lately and it seems pretty handy. I know I can put geometry data (position, normals, diffuse colour etc.) into separate textures by capturing the scene several times with different shader techniques applied to all (relevant) objects, using cameras set to render to images, and then providing these textures to the post-effect shader responsible for the lighting / what-have-you calculations via different texture stages. From looking at some online tutorials, though, it would seem that you can put all the mesh data in a single texture with high enough bit-depth in just a single vertex shading pass. I have no idea how to go about this in DBPro, however, if it is at all possible.
At first I thought I would just call set camera to image with a large enough image format to accommodate all the needed data (such as D3DFMT_A32B32G32R32F) and then texture a quad with this for post-processing the pixels, but that didn't seem to work as I could only ever manage to sample the standard RGBA floats.
I was wondering if any shader-savvy member around here may know more about this and could point me in the right direction?

Thanks for any advice,
Rudolpho


"Why do programmers get Halloween and Christmas mixed up?"
TheComet
Joined: 18th Oct 2007
Location: I`m under ur bridge eating ur goatz.
Posted: 15th Jul 2013 19:30
I don't know if it's possible to access the geometry buffer directly through native DBPro commands, and even if you could, I think DBPro internally still applies both vertex and pixel shaders to each object anyway, though this is just a guess about DBPro's fixed function pipeline.

In any case, ImageKit helps separate vertex and pixel shaders. It allows you to run only pixel shaders on render targets without having to create a quad, texture it, and position it in front of the screen just to keep the vertex shader satisfied. I'm sure that's going to save you some headaches.

The command set camera to image returns a render target which can directly be used by ImageKit.

TheComet

TheComet
Joined: 18th Oct 2007
Location: I`m under ur bridge eating ur goatz.
Posted: 15th Jul 2013 20:08 Edited at: 15th Jul 2013 20:15
Sorry for the double post.

Quote: "it would seem that you can put all the mesh data in a single texture with high enough bit-depth in just a single vertex shading pass."


This is impossible, because every object requires a new rendering state (new matrices need to be calculated, new textures need to be bound, new stream mapping).

The reason is: When you set an object's position/rotation/scale or whatever, you aren't actually modifying the object's vertices. A lot of people don't grasp the concept that a 3D object loaded into memory never ever changes position/rotation/scale (unless you actually go and modify the vertices, such as with the vertexdata commands). No, what you're really changing with these operations are the matrices affecting the object when it is rendered.

That is why the object goes through these steps:
1) the World matrix transforms the object into world space (this is where all of the position, rotation and scaling of the object gets applied)
2) the View matrix transforms the object into view space (this is where the position and rotation of the camera are applied)
3) the Projection matrix projects the object into screen space (this is where the camera's frustum is applied to the object)

More on that here: http://robertokoci.com/world-view-projection-matrix-unveiled/
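For illustration, those three steps in an HLSL vertex shader would look roughly like this (a minimal sketch only; the matrix and struct names are made up, and DBPro effects typically receive these matrices through effect semantics such as World, View and Projection):

float4x4 World;        // object space -> world space
float4x4 View;         // world space  -> view (camera) space
float4x4 Projection;   // view space   -> clip/screen space

struct VSInput  { float4 Pos : POSITION; };
struct VSOutput { float4 Pos : POSITION; };

VSOutput VS_Transform(VSInput input)
{
    VSOutput output;
    float4 worldPos = mul(input.Pos, World);      // 1) apply the object's position/rotation/scale
    float4 viewPos  = mul(worldPos, View);        // 2) apply the camera's position/rotation
    output.Pos      = mul(viewPos, Projection);   // 3) apply the camera's frustum
    return output;
}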

So given that, you can see why it's impossible: you cannot render all of your objects in a single vertex shading pass if they have different render states (such as different positions).

It is possible (in other languages) to have all objects only execute their vertex shader, but you will lose any data you do not save in the geometry buffer, for instance, varyings (OpenGL).

It is also possible (in other languages) to have a geometry buffer passed to a pixel shader, which then renders everything to a render target (be it a texture or a screen).

Note that rasterization occurs immediately before the pixel shader is invoked, not any earlier in the pipeline.

[EDIT] So in short: when using deferred rendering, for every object you have in your scene you are forced to run its vertex shader and its pixel shader before being able to do the deferred shading itself. There's no getting around that. These passes aren't really that costly though, because you'll only be doing basic transformations in the vertex shader and sampling the object's texture (if any) in the pixel shader.

TheComet

Rudolpho
Joined: 28th Dec 2005
Location: Sweden
Posted: 15th Jul 2013 22:01
I think you are misunderstanding my intentions a bit.
I don't mean to put all meshes in the texture, nor use it for instancing the same object several times in order to reduce draw calls.
What I mean to do is to store vertex data to an image so that it can later be used by a post-processing shader. As such, vertex data will be written to a texture and later overwritten if other geometry turns out to overlap the previous entry. The idea is that you only have to do the more intense calculations for each pixel that actually ends up on screen, rather than for every shaded fragment, including those that are later overwritten because their polygons are overlapped from the camera's point of view.

As such, a position may be stored and then resolved using the WVP matrices (which will of course be the same in the last pass, when the post-processing pixel shader is applied).
There will also be a pixel shader in the effect responsible for storing this "geometry data" (it is what writes the data to the texture in the first place), so I don't need to separate vertex and pixel shaders either.


And finally, no, I don't really know what I'm talking about, but that is how I understand it is supposed to work and I'm just trying to see if I can do it.


"Why do programmers get Halloween and Christmas mixed up?"
TheComet
Joined: 18th Oct 2007
Location: I`m under ur bridge eating ur goatz.
Posted: 15th Jul 2013 22:21
Oh, you're trying to save the transformed geometry in a texture? That's impossible, I believe, because the vertex shader doesn't know anything about screen space. I suppose you could forward the vertex data to the pixel shader, which could write it into screen space? But then you have the issue with textures again: how do you texture objects which have already been rendered to a texture? Or rather, how do you know which texture belongs to which object after you've rendered them all to a texture?

Quote: "What I mean to do is to store vertex data to an image so that it can later be used by a post-processing shader."


Here are the issues with this idea:
1) You will only have a bunch of dots (vertices)
2) You will have no idea which vertex is connected to which, thus losing the information needed to create surfaces
3) You will have no idea how to texture your objects (or what's left of them)

The way deferred rendering works (how I understand it, at least) is you render all objects to a render target without any fancy effects, and then you pass the resulting render target to a pixel shader which computes all of the fancy stuff.

So this would be the process involved (in pseudo code):
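Or, sketched as HLSL rather than strict pseudo code (a sketch only; the texture, sampler and shader names are invented for illustration):

// Pass 1: every object is rendered "plainly" into a render target image.
texture DiffuseTex;
sampler DiffuseSamp = sampler_state { Texture = <DiffuseTex>; };

float4 PS_Plain(float2 uv : TEXCOORD0) : COLOR0
{
    // no lighting or other fancy effects, just the raw surface colour
    return tex2D(DiffuseSamp, uv);
}

// Pass 2: a full-screen quad is textured with that render target;
// the expensive per-pixel work happens here, once per screen pixel.
texture SceneTex;
sampler SceneSamp = sampler_state { Texture = <SceneTex>; };

float4 PS_Deferred(float2 uv : TEXCOORD0) : COLOR0
{
    float4 scene = tex2D(SceneSamp, uv);
    // ...lighting / fog / whatever "fancy stuff" you want goes here...
    return scene;
}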



Would you like me to write up an example?

TheComet

Rudolpho
Joined: 28th Dec 2005
Location: Sweden
Posted: 15th Jul 2013 22:55
That would be what I'm after yes, sorry if my descriptions were unclear.
The problem as I saw it was how to put more data than the standard 32-bit colour values into the render target and then sample it from a render quad's pixel shader.

Quote: "Would you like me to write up an example?"

If you have nothing better to do, that would be awesome.


"Why do programmers get Halloween and Christmas mixed up?"
Green Gandalf
Joined: 3rd Jan 2005
Playing: Malevolence:Sword of Ahkranox, Skyrim, Civ6.
Posted: 16th Jul 2013 02:50
Doesn't Evolved's Advanced Lighting system do something along these lines?
Rudolpho
Joined: 28th Dec 2005
Location: Sweden
Posted: 16th Jul 2013 03:29
It might, but I'd rather put together a smaller, more concise system that I can understand myself at this point. His is way over my head at the moment.

Also my final goal, after prototyping some in DBP, is to move the solution (if I can find any) over to a game engine written in DGDK, which AL is well known not to work with. I figure writing workarounds would be a lot easier if I knew what the shaders and managing code were actually doing.


"Why do programmers get Halloween and Christmas mixed up?"
Green Gandalf
Joined: 3rd Jan 2005
Playing: Malevolence:Sword of Ahkranox, Skyrim, Civ6.
Posted: 16th Jul 2013 18:20
Quote: "but I'd rather write together a smaller, more concise system that I can understand myself at this point. His is way over my head at the moment "


Same here - but I thought you merely wanted to know whether it was possible.

Quote: "Also my final goal, after prototyping some in DBP, is to move the solution (if I can find any) over to a game engine written in DGDK"


I believe davidw is using something like this in DGDK (or C++ possibly - you'd need to contact him).
Rudolpho
Joined: 28th Dec 2005
Location: Sweden
Posted: 18th Jul 2013 15:11
Quote: "but I thought you merely wanted to know whether it was possible"

True enough.
I've been looking through some of Evolved's shaders and it seems he too is using a 128-bit texture and packing values into it, which obviously works but means relatively low precision for what is stored (i.e. 8-bit normals).
What I was mostly interested in was whether there is any way to utilize multiple render targets with DBP. This would allow output to up to 4 different textures from the same shader pass (using the COLOR0 - COLOR3 semantics). It doesn't seem like this works though; I even tried recompiling the camera DLL and forcing it in there with a crowbar, but all render targets except the primary one (the image set using set camera to image) just ended up blank and were never rendered to as far as I could tell. Maybe further testing could make it work, but that's beside the point - standard DBP does not seem to handle it.
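For reference, this is roughly what the COLOR0 - COLOR3 route would look like on the HLSL side if the runtime did bind four render targets (a sketch only; the struct, texture and input names are invented for illustration):

// One G-buffer write pass outputting to four render targets at once.
struct GBufferOut
{
    float4 Diffuse  : COLOR0;   // albedo colour
    float4 Normal   : COLOR1;   // world-space normal remapped to 0..1
    float4 Position : COLOR2;   // world-space position
    float4 Depth    : COLOR3;   // linear view-space depth
};

texture DiffuseTex;
sampler DiffuseSamp = sampler_state { Texture = <DiffuseTex>; };

GBufferOut PS_WriteGBuffer(float2 uv          : TEXCOORD0,
                           float3 worldNormal : TEXCOORD1,
                           float3 worldPos    : TEXCOORD2,
                           float  viewDepth   : TEXCOORD3)
{
    GBufferOut o;
    o.Diffuse  = tex2D(DiffuseSamp, uv);
    o.Normal   = float4(normalize(worldNormal) * 0.5 + 0.5, 1.0);
    o.Position = float4(worldPos, 1.0);
    o.Depth    = float4(viewDepth, 0.0, 0.0, 1.0);
    return o;
}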

Anyhow, back to the 128-bit textures, which might be enough for what I'm after. I've found it surprisingly hard to actually pack data into floats and then unpack it again. Using integers it's a breeze, but somehow the floats, while working for certain values, always end up being expanded wrongly when unpacked. This has happened with every approach I've tried, including one home-written solution and two approaches stolen straight off the internet.
Is this a common issue and is there any way to solve it?
After all, a float is a 32-bit value; surely you should be able to store three 8-bit values in it and extract them again with predictable results (23 bits store the mantissa, or 24 counting the implicit leading bit, which is where the 3 bytes should fit, with 8 bits for the exponent and one sign bit)?


Quote: "I believe davidw is using something like this in DGDK"

Hm, that sounds interesting.


"Why do programmers get Halloween and Christmas mixed up?"
Rudolpho
Joined: 28th Dec 2005
Location: Sweden
Posted: 19th Jul 2013 15:55 Edited at: 19th Jul 2013 15:57
Well, I believe I've finally solved the packing issue.
The following works well when you, for example, return float4(Unpack(Pack(float3(red, green, blue))), 1.0) from a pixel shader:


However, it seems that this information is not carried over well between shaders.
I currently have an object shader that uses the Pack function above to store an RGB colour into the X component of its pixel output. The scene is rendered with this shader by a camera that is set to render to an image with the D3DFMT_A32B32G32R32F format (value 116, at least according to the help files). The camera image is then set as the texture of a screen quad which is rendered by camera 0. The quad then uses the Unpack function to resolve the packed colour back to a float4 (the alpha component is always set to 1). However, this always results in a pure black screen being drawn.

It's hard to debug the shaders as you can't get any numerical outputs from them, but I was thinking that perhaps set camera to image might not be working properly when called with the additional parameters (such as the image format)?
I could imagine that there would be problems if the render target was silently downsampled to 32 bits per pixel instead of 4 x 32 bits, but I have no idea how to check that (saving the render target image outputs a normal 32-bit image, but that might just be a limitation of the save image function, which converts the pixels to integer values rather than floats anyway).

Any ideas would be greatly appreciated; I'm kind of stuck with this at the moment.


"Why do programmers get Halloween and Christmas mixed up?"
Rudolpho
Joined: 28th Dec 2005
Location: Sweden
Posted: 29th Jul 2013 13:12 Edited at: 29th Jul 2013 13:13
Found the problem; this works for packing / unpacking 3 floats into 8-bit resolution in a single (32-bit) float:
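Something along these lines (an illustrative HLSL sketch rather than the exact original code; it works because three packed bytes stay below 2^24 and therefore fit exactly in a float's 24-bit mantissa):

// Pack three 0..1 values into one float at 8-bit precision each.
// Exact because any integer below 2^24 (16777216) is representable
// exactly in a 32-bit float: 255 + 255*256 + 255*65536 = 16777215.
float Pack(float3 rgb)
{
    float3 bytes = floor(saturate(rgb) * 255.0 + 0.5);
    return bytes.r + bytes.g * 256.0 + bytes.b * 65536.0;
}

// Reverse the packing and return the three values in the 0..1 range.
float3 Unpack(float packed)
{
    float b = floor(packed / 65536.0);
    float g = floor((packed - b * 65536.0) / 256.0);
    float r = packed - b * 65536.0 - g * 256.0;
    return float3(r, g, b) / 255.0;
}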



"Why do programmers get Halloween and Christmas mixed up?"
