Quote: "Which graphics card is the most future-proof ? ie directx 9 ? I think it's the radeon 9800XT... okay it's a $500 card...still whoops the geforcefx5950 ultra in directx9 as the fx's are missing a vital key dx9 feature which is lack of support for floating-point texture formats"
a) A Radeon 9800XT you'll be lucky to find under $580; you can get a Pro for around $520, but they lack a lot of features.
The FX5950 is going to be no more than $500, and that's only what the top cards will cost; you'll be able to get OEM versions for much less.
b) As for floating-point textures, I think you've misheard what was said, because they were the FIRST to support floating-point calculations for shaders, and this extends to the images.
What they don't support, however, is the floating-point stencil, nor floating-point images within the texture pipelines, which means those must be compiled directly rather than pre-processed. However, this is a driver issue and only for DirectX, not a card capability issue.
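For reference, this is roughly how an application would ask Direct3D 9 whether the standard floating-point texture formats are usable on the installed card. It's a minimal sketch assuming the DirectX 9 SDK headers; the formats listed are the stock FP16/FP32 surface formats, nothing vendor-specific, and it says nothing about either driver's internals.

```cpp
// Sketch: querying Direct3D 9 for floating-point texture format support.
// Assumes the DirectX 9 SDK (d3d9.h, link against d3d9.lib).
#include <d3d9.h>
#include <cstdio>

int main()
{
    IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);
    if (!d3d) return 1;

    const D3DFORMAT fpFormats[] = { D3DFMT_R16F, D3DFMT_A16B16G16R16F,
                                    D3DFMT_R32F, D3DFMT_A32B32G32R32F };

    for (int i = 0; i < 4; ++i)
    {
        // Ask whether the default adapter can use this format as a plain
        // texture against a standard 32-bit desktop mode.
        HRESULT hr = d3d->CheckDeviceFormat(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL,
                                            D3DFMT_X8R8G8B8, 0,
                                            D3DRTYPE_TEXTURE, fpFormats[i]);
        std::printf("format %d: %s\n", (int)fpFormats[i],
                    SUCCEEDED(hr) ? "supported" : "not supported");
    }

    d3d->Release();
    return 0;
}
```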
Quote: "The NV3x chipset uses 12bit integer shader operations in Doom 3, whereas the R3xx chipset uses 24bit floating point shader operations in Doom 3. When the NV3x is forced to use 24bit floating point operations, the R3xx exceeds the NV3x performance wise."
Really? Try again... the NV30 uses 16-bit half-precision and 32-bit full-precision floating-point shader operations, and it is also capable of 16-bit full-precision integer operations (the NV35 and up are capable of 32-bit double precision).
The R300, on the other hand, is only capable of 24-bit compressed floating point, which is then widened to 32-bit; on top of that, its texture pipelines only use 16-bit, allowing the drivers to compress two texture cycles into the so-called 32-bit operation.
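To put some numbers on those precisions: here's a small sketch of the sign/exponent/mantissa splits generally quoted for these shader formats (FP16 and FP32 follow the familiar half/single layouts, and the 24-bit split is the one commonly attributed to the R300's pixel shader ALUs). The struct and names are mine, not anything out of either SDK.

```cpp
// Sketch: the bit budgets usually quoted for these shader precisions.
#include <cstdio>

struct FloatFormat { const char* name; int sign, exponent, mantissa; };

int main()
{
    const FloatFormat formats[] = {
        { "FP16 (NV3x partial precision)", 1, 5, 10 },
        { "FP24 (R3xx pixel shader)",      1, 7, 16 },
        { "FP32 (NV3x full precision)",    1, 8, 23 },
    };

    for (int i = 0; i < 3; ++i)
    {
        // Mantissa bits (plus the implicit leading 1) set how many
        // significant binary digits each shader operation can carry.
        std::printf("%-32s %d+%d+%d bits, ~%d significant bits\n",
                    formats[i].name, formats[i].sign, formats[i].exponent,
                    formats[i].mantissa, formats[i].mantissa + 1);
    }
    return 0;
}
```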
The way the Radeons are set up, technically they ARE capable of double the geometry pipelines and half the texture pipelines.
This SHOULD make their vertex and rendering capabilities far quicker than the FX's, while leaving their texturing abilities at only half the speed.
However, when you look at the actual benchmark results, what you find is that in the polygon-for-polygon tests the Radeons actually fall short even at comparable speeds, while their texturing seems to show that they're really only comparable at the single level rather than dual or more levels.
So take a game such as Quake2 and run it at 1280x1024x8 on the FX5900 Ultra and the 9800 Pro: what we see is the FPS stay at a very level 320fps for both cards.
Yet take Quake3 and run it with its multitexturing shaders at 1024x1024x16, and we notice that the difference in texture speed hits the Radeon hard.
Now the FX5900 Ultra is doing 160fps whereas the 9800 Pro is only producing around 98fps.
The even more interesting thing is that if you then put the game into 32-bit mode, the speeds somehow become more comparable: the 9800 Pro is still doing 98fps, whereas the FX5900 Ultra is now down to 119fps.
That right there is a very interesting pointer to what the Radeon drivers are doing. Although 32-bit texture mode gives the APIs more precision to blend the textures, the textures themselves within the Radeons are being stripped of the additional 16 bits of precision in order to push double the textures down the pipelines; this effectively turns one pipeline into two, making it equal to the FX.
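Purely to illustrate the kind of trick being alleged here, this is what packing two 16-bit texel samples into a single 32-bit word looks like. This is my own hypothetical sketch of the claim above, not disassembled driver behaviour.

```cpp
// Hypothetical sketch of the alleged packing: two 16-bit texel samples
// squeezed into one 32-bit word so a single pass moves both.
// Illustration of the claim above only, not actual driver code.
#include <cstdint>
#include <cstdio>

inline uint32_t pack2x16(uint16_t a, uint16_t b)
{
    return (uint32_t(b) << 16) | uint32_t(a);   // b in the high half, a in the low half
}

inline void unpack2x16(uint32_t packed, uint16_t& a, uint16_t& b)
{
    a = uint16_t(packed & 0xFFFF);
    b = uint16_t(packed >> 16);
}

int main()
{
    uint16_t t0 = 0x1234, t1 = 0xABCD;
    uint32_t word = pack2x16(t0, t1);

    uint16_t u0, u1;
    unpack2x16(word, u0, u1);
    std::printf("packed 0x%08X -> 0x%04X, 0x%04X\n", word, u0, u1);
    return 0;
}
```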
The only major problem is that this causes a noticeable quality drop. Although reviewers seem not to notice this drop, gamers have, and so has the industry as a whole. When a reviewer refers to image quality, all they're referring to is the FSAA abilities of the card. While it's admittedly true that the Radeons are capable of slightly better-looking FSAA at equal levels, the Radeons and FX do FSAA in totally different manners.
The Radeons use an onboard chip which has direct access to the framebuffer; what this chip does is simply supersample the ENTIRE framebuffer by some factor, then average it back down.
Smoothvision 6x, for example, expands the image by 3.0, then averages it down with a 3x3 sample range.
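In code terms, that whole-framebuffer approach is just a box downsample: render at three times the resolution, then average each 3x3 block down to one output pixel. A rough sketch of that idea follows; it's my own simplification of the scheme described above, not ATI's actual hardware path.

```cpp
// Rough sketch of whole-frame supersampling: render FACTOR times larger,
// then average each FACTOR x FACTOR block down to one output pixel.
#include <cstdint>
#include <vector>

static const int FACTOR = 3;   // 3x oversampled framebuffer, 3x3 averaging window

struct Pixel { uint8_t r, g, b; };

std::vector<Pixel> downsample(const std::vector<Pixel>& big, int outW, int outH)
{
    const int bigW = outW * FACTOR;
    std::vector<Pixel> out(outW * outH);

    for (int y = 0; y < outH; ++y)
        for (int x = 0; x < outW; ++x)
        {
            int r = 0, g = 0, b = 0;
            // Average the FACTOR x FACTOR block of oversampled pixels
            // that covers this output pixel.
            for (int sy = 0; sy < FACTOR; ++sy)
                for (int sx = 0; sx < FACTOR; ++sx)
                {
                    const Pixel& p = big[(y * FACTOR + sy) * bigW + (x * FACTOR + sx)];
                    r += p.r; g += p.g; b += p.b;
                }
            const int n = FACTOR * FACTOR;
            Pixel avg = { uint8_t(r / n), uint8_t(g / n), uint8_t(b / n) };
            out[y * outW + x] = avg;
        }
    return out;
}
```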
IntelliSample 6xS, on the other hand, doesn't do anything to the actual image. What it does is use its 24-bit Z-buffer to create a depth image, from which it then calculates a 3x9 sample average. It then uses that image as a template and blurs the onscreen pixels by 3.0 that correspond to the template.
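The selective version boils down to "find the discontinuities in the depth buffer, then only filter those pixels". Here's a loose sketch of that edge-detection step; the neighbourhood and threshold are mine for illustration, not NVIDIA's actual IntelliSample implementation.

```cpp
// Loose sketch of edge-selective AA: detect discontinuities in the depth
// buffer, then only the flagged colour pixels would be filtered.
#include <cmath>
#include <vector>

std::vector<bool> findDepthEdges(const std::vector<float>& depth,
                                 int w, int h, float threshold)
{
    std::vector<bool> edge(w * h, false);
    for (int y = 1; y < h - 1; ++y)
        for (int x = 1; x < w - 1; ++x)
        {
            const float d = depth[y * w + x];
            // Flag the pixel if a neighbour's depth jumps past the threshold,
            // i.e. a silhouette or polygon edge rather than a flat surface.
            const float dx = std::fabs(depth[y * w + x + 1] - d);
            const float dy = std::fabs(depth[(y + 1) * w + x] - d);
            if (dx > threshold || dy > threshold)
                edge[y * w + x] = true;
        }
    return edge;
}
```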
The difference in result is that the Radeons' FSAA blurs the ENTIRE screen, whereas the GeForce simply blurs the affected edges. This gives the Radeons a softer, more console-like look, whereas it gives the GeForce a much sharper and, personally, better overall image quality.
But then again, a lot of people don't agree with me on which games have good graphics and which have crap graphics.
Half-Life 2 generally has very bland and crappy graphics in my eyes, whereas Final Fantasy XI has truly breathtaking graphics.
Likewise, I think Half-Life had very little atmosphere and its graphics were too bland again, whereas Quake2 has a superb atmosphere to it.
But then I know too many people who'd totally disagree, some because they don't like me and a good few because they honestly have a different taste and perspective.
Really, the more you look into what ATI are doing, the more what comes across is just stupid.
They're affecting the overall quality of their games and graphics for the sake of speed, and although you've probably been brainwashed into thinking that nvidia are doing the same, quite frankly you'd be a fool to think so.
These are the same people who think the Radeons' current graphics levels show no difference.
And the speed difference that the Radeons have over the FX in DirectX9 is not simply a case of "the FX technology is unoptimised".
It's because the Catalyst drivers actually replace DirectX9 components on your system.
ATI haven't just optimised their drivers, they've optimised DirectX9 for their own use. You want the proof of this pudding? Check the DX9 libraries on your computer: the sizes differ from a normal installation. More proof is in the fact that during the Half-Life 2 scandal it was uncovered that ATI had been caught tampering with DirectX 9.0 and 8.0 information, which slipped out underneath the media's attention because they were all relieved that an ATI employee wasn't the main cause.
But the fact of the matter remains that ATI have done EVERYTHING they can... you want to know why they have to patch every single game that has a bug? It's because you're not running the Microsoft release of DirectX9.
Sure, it has the MS front to it, but the actual libraries being used AREN'T.
You think it makes this all cool that you get an extra 5fps because of it? At the end of the day, this just goes to show that the Radeons really aren't up to what everyone believes they are.
Effectively, ATI have set your system up like a console and are maintaining it in a way that will keep suspicion from them.
It is also why their graphics cards quite rightly suck under OpenGL. The FX, in a lot of instances, is getting double the speed.
And as for saying the Radeons' new shader technology is supposed to be equal: R380 vs NV35, id Software has shown that quite frankly there was NO competition at all, and the FX5950 just extends the gap.
ATI of course have been blaming this speed difference on CgFX, but quite frankly the speed difference has nothing to do with CgFX, because it has ZERO optimisations for any card in it.
At the end of the day, the FX's minorly disappointing speed in OpenGL comes down to unoptimised and not yet fully featured drivers; ATI, however, are just happy that most games being released are for DirectX, because as Jedi Knight Academy recently showed, they're close to useless in OpenGL.
This also makes them pretty much bin fodder for anyone using Macintosh or Linux based systems.
Considering I know how many of you love Linux, I'd be interested to hear from anyone using it who has high praise for the ATI Radeon line.