There's actually a discussion going on in the forums about this right now; although a few members refuse to believe what ATi are doing, some of us are genuinely wondering whether what they're doing constitutes cheating or genius.
The arguments go both ways. Not that it matters much, because the 6-Series at its retail clocks is quite noticeably faster than the X800;
but from a developer's point of view, does using 16-bit floating-point calculations within a 24-bit colour space and a 128-bit register space (essentially doubling pixel shader operation speeds) constitute a cheat or a clever design feature?
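To make the precision side of that question concrete, here's a rough sketch of why mantissa width matters. It just rounds a value to a given number of mantissa bits (FP16 carries about 10, ATi's FP24 about 16, full FP32 about 23); real GPU rounding modes and denormal handling differ, so treat this as a toy model, not actual hardware behaviour:

```python
import math

def quantize(x, mantissa_bits):
    """Round x to the nearest value representable with the given
    number of mantissa bits (FP16 ~ 10, FP24 ~ 16, FP32 ~ 23).
    Toy model only -- ignores exponent range, rounding modes, denormals."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)              # x = m * 2**e, with 0.5 <= |m| < 1
    scale = 2.0 ** mantissa_bits
    return math.ldexp(round(m * scale) / scale, e)

# The representation error for a value like 1/3 shrinks as the
# mantissa widens -- the gap between FP16 and FP24 is what a long
# shader chain would accumulate per operation:
for bits in (10, 16, 23):
    print(bits, abs(quantize(1 / 3, bits) - 1 / 3))
```

Each extra shader operation can compound that per-step error, which is why the choice of internal precision shows up in final colour output at all.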
There's also the question of colour quality loss.
It's like choosing between the DirectX texture formats;
there's no actual data loss, the only loss is colour-wise, in that Radeon output looks more washed out in its palettes. So artists aren't getting exactly the results they'd expect; however, the digital vibrance ATi use compensates for it to an extent.
I mean what do you guys think?
If anyone wants, I can post the entire explanation of what is going on; I know some will be like 'ATi don't cheat, burn the heathen', but from everything I've shown in the developer forums, some people don't want to see what is right in front of their faces.
Like how ATi Radeons output 24/32/48/64/128-bit colour ranges, accept 64/128-bit floating-point images, and have a 128-bit colour space.
Given that they don't even natively support the formats the card can use, why would ATi use 96-bit registers? It just plainly makes no damn sense!!
It makes even less sense to then have a 32-bit Z-depth buffer when your colour range is 24-bit MAX!!
Why would you deliberately design a processor that has to recalculate the standard formats?!
There are so many inconsistencies, like how come the Radeon can only push around half the pixels per clock that the GeForce can, yet pixel operations are double the speed!?
It's like the F-Buffer, which supposedly allows infinite shader programs... that's complete bull, it doesn't; it buffers the maximum shader routine in video RAM and then moves it across. Although that decreases latency from main RAM by around 80%, it still means that with longer shader programs, Radeons hang or even loop when they have to load the next pixel buffer instruction set.
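For anyone who hasn't followed the F-Buffer debate, the idea above can be sketched as multi-pass execution: a shader longer than the hardware limit gets run in chunks, with intermediate results parked in a buffer between passes. The names here (`MAX_INSTRUCTIONS_PER_PASS`, `run_long_shader`) are hypothetical, and this is only a conceptual model of chunked execution, not ATi's actual implementation:

```python
MAX_INSTRUCTIONS_PER_PASS = 4  # hypothetical hardware instruction limit

def run_long_shader(instructions, pixel_value):
    """Execute a shader longer than the per-pass limit by splitting it
    into chunks, as if intermediates were parked in an F-Buffer
    between passes. Conceptual model only."""
    f_buffer = pixel_value
    for start in range(0, len(instructions), MAX_INSTRUCTIONS_PER_PASS):
        chunk = instructions[start:start + MAX_INSTRUCTIONS_PER_PASS]
        for op in chunk:           # one "hardware pass" over the pixel
            f_buffer = op(f_buffer)
        # real hardware would flush and reload the buffer here -- that
        # boundary is where the extra latency per extra pass comes from
    return f_buffer
```

The point is that "infinite" shaders don't run in one go: every extra chunk is an extra pass with a buffer flush/reload at the boundary, which is where the stalls on long programs come from.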
I'm not having a go here; these are legitimate questions which ATi have actually refused outright to answer, stating that 'information on how the technology used within our hardware is prohibited'...
I don't give a damn what they're doing to gain speed; I just give a damn that what I program up in FX Composer or something is actually going to appear exactly how I want it at the other end.