Well, I believe I've finally solved the packing issue.
The following works well when, for example, you return float4(Unpack(Pack(float3(red, green, blue))), 1.0) from a pixel shader:
float Pack(float3 colour) {
    // The bit layout of a 32-bit float is as follows (sign bit, exponent bits, mantissa bits):
    // SEEEEEEE EMMMMMMM MMMMMMMM MMMMMMMM
    // The exponent bits must not be all 0 or all 1, as those patterns are reserved for special meta-values (denormals, infinities and NaN).
    int packed = (floor(colour.x * 255) * 65536) + (floor(colour.y * 255) * 256) + floor(colour.z * 255);
    if(packed > 16777216)
        return -(float)(packed - 16777216); // Store the final bit in the sign bit of the float (with 24-bit packing the maximum is 16777215, so this branch never actually fires)
    return (float)packed;
}
float3 Unpack(float f) {
    int3 vec;
    int col = (int)abs(f);
    if(f < 0.0f)
        col += 16777216;
    vec.x = col / 65536;
    col -= (vec.x * 65536);
    vec.y = col / 256;
    vec.z = col - (vec.y * 256);
    return saturate(float3(vec.x, vec.y, vec.z) / 255.0); // Divide by 255, not 256, so a channel value of 255 maps back to exactly 1.0 (Pack multiplied by 255)
}
However, it seems that this information is not carried over well between shaders.
I currently have an object shader that uses the Pack function above to store an RGB colour in the X component of its pixel output. The scene is rendered with this shader by a camera that is set to render to an image with the D3DFMT_A32B32G32R32F format (value 116, at least according to the help files). The camera image is then set as the texture of a screen quad, which is rendered by camera 0. The quad then uses the Unpack function to resolve the packed colour back to a float4 (the alpha component is always set to 1). However, this always results in a pure black screen being drawn.
It's hard to debug the shaders since you can't get any numerical output from them, but I was wondering whether perhaps set camera to image isn't working with the additional parameters, as in:
set camera to image 2, 2, screen width(), screen height(), 3, 116 rem Yes, camera 1 does exist but is currently not in use
I could imagine that there would be problems if the render target's precision were reduced to 32 bits per pixel instead of 32 bits per channel (4 x 32 bits), but I have no idea how to check that (saving the render target outputs a normal 32-bit image, but that might just be a limitation of the save image function, which converts the pixels to integer values rather than floats anyway).
Any ideas would be greatly appreciated; I'm kind of stuck with this at the moment.
"Why do programmers get Halloween and Christmas mixed up?" Because Oct(31) = Dec(25)