Think about what my primary job is, Fallout ... and then think about who I've been working for these past 6 months.
I know the FX series inside and out.
I call them Ti (Titanium) because that's what they were called in development, and it's only with the FX line that they took up the totally stupid Ultra name again for retail.
The GeForce2 was confusing because there were some Ti and some Ultra cards; the GeForce3 settled the Ti name on the most advanced versions, and the GeForce4 came in two flavours, MX or Ti.
So I'm gonna keep referring to them like that, because it's simpler and it doesn't sound so naff.
That aside... there is FAR more difference between the 5600 & 5800 than just pure shaders.
The 5800 will not take ANY performance hit from using FSAA up to 4x, as it has HTC, which is different from IntelliSample in that it is hardware based rather than software based.
It can also handle a maximum of 16 hardware shadows, as opposed to 6 in the 5200/5600, and it can handle more shaders.
And before you say not everything is about shaders: there are 32 titles due next year which use them, and more are no doubt on the way as people finally learn them. The more (and better) shaders your card can handle, the better.
Not to mention the 5800 has access to the extended NV30/v3.0 shaders, which cover light and shadow, as well as having the 2.5 extensions.
And if you somehow feel that shaders are just PURE graphical updates, you should understand that most games now use Vertex Shader 1.0 for their animations, which most of the GeForce line can do either in hardware or in software; the FX line, however, can push the performance here even further, especially with the enhanced floating-point values.
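To make that concrete, here's a rough sketch of the kind of Cg vertex program games use for skinned animation. It's just an illustration, not code from any real engine, and parameter names like boneMatrices and modelViewProj are made up for the example:

    // a minimal sketch of two-bone matrix-palette skinning in Cg
    void skin_vp(float4 position : POSITION,
                 float2 weights  : BLENDWEIGHT,     // how much each bone pulls this vertex
                 float2 indices  : BLENDINDICES,    // which two bones to use
                 out float4 oPos : POSITION,
                 uniform float4x4 boneMatrices[20], // one transform per bone
                 uniform float4x4 modelViewProj)
    {
        // transform the vertex by both bones and blend the results
        float3 p0 = mul(boneMatrices[(int)indices.x], position).xyz;
        float3 p1 = mul(boneMatrices[(int)indices.y], position).xyz;
        float3 skinned = p0 * weights.x + p1 * weights.y;

        // then the usual clip-space transform
        oPos = mul(modelViewProj, float4(skinned, 1.0));
    }

Every vertex of the character gets bent by its bones on the GPU instead of the CPU, which is exactly the kind of animation work Vertex Shader 1.x is being used for.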
The 5800 has 128-bit integer and 128-bit floating-point precision, which is different from the 5200/5600's 32-bit integer and 64-bit floating point, and that can add a great deal of speed when dealing with shaders if they're programmed using Cg. I know a lot of cretins don't believe there is a difference between using DirectX HLSL and Cg, but there is, and it will come in handy in titles like Half-Life 2 ... it's the difference of almost 30fps.
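Here's a hedged little sketch of what I mean by precision mattering: the same per-pixel lighting maths written in Cg, where the half type maps onto the FX's 16-bit float path and float stays at full 32-bit. Again, names like lightDir and lightColor are just illustrative:

    // a rough sketch of per-pixel lighting in Cg - the point is the precision hints
    half4 lit_fp(float3 normal : TEXCOORD0,
                 uniform float3 lightDir,     // normalised direction to the light
                 uniform half4  lightColor) : COLOR
    {
        // the dot product and clamp can happily run at half (16-bit) precision,
        // which is where the FX parts pick up speed over doing everything in full float
        half ndotl = saturate(dot(normalize(normal), lightDir));
        return lightColor * ndotl;
    }
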
Not to mention the particle routines that handle up to 63 million particle instances, which programmers can use for anything.
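The classic trick here is a stateless particle system: instead of the CPU updating every particle each frame, the vertex program recomputes each particle's position from its launch data. A rough sketch in Cg (the parameter names are mine, not from any particular demo):

    // sketch of a stateless particle vertex program: the whole simulation lives on the GPU
    void particle_vp(float4 startPos : POSITION,   // where the particle was launched
                     float3 velocity : TEXCOORD0,  // launch velocity
                     float  birth    : TEXCOORD1,  // launch time in seconds
                     out float4 oPos : POSITION,
                     out float4 oCol : COLOR0,
                     uniform float    time,        // current time
                     uniform float3   gravity,
                     uniform float4x4 modelViewProj)
    {
        float age = time - birth;

        // basic ballistic motion: p = p0 + v*t + 0.5*g*t*t
        float3 p = startPos.xyz + velocity * age + 0.5 * gravity * age * age;
        oPos = mul(modelViewProj, float4(p, 1.0));

        // fade the particle out over a two-second life
        oCol = float4(1.0, 1.0, 1.0, saturate(1.0 - age * 0.5));
    }

The CPU only has to hand over a time value each frame; the card does the rest, which is how you get to those huge particle counts.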
You want to see what your card can do? That Time Machine demo is a damn good example, even more so when you realise you can put FSAA to 4x, like the demo intends you to, at a resolution of 1280x960x32, and it still runs as smoothly as if it wasn't using it.
You can't run that demo on a Radeon or a 5600, and you want to know why? Because they just don't have the shader support ... to you shaders might not seem important.
But ask any programmer which they'd prefer to program for: an Intel Pentium 4, with its eight mostly fixed-purpose x86 registers, or the GameCube's PowerPC Gekko, with its 32 free general-purpose registers. What do you think they'll choose, and why?
Shaders are not just graphical extras; they are programs running on the GPU's own registers. Sure, on the surface it might seem like the ASM shaders only do standard tasks, but GPUs are now TRUE processors. They are fantastic RISC-style processors which finally give PC developers a console-style standard to program towards ... and they ARE doing so.
So the 5800 Ti can't physically push any more polygons than the Radeon 9800 Pro ... well, the Pentium 200 and the Pentium 200 MMX had no difference between them other than 57 extra instructions for packed multimedia operations, and you know what? You use that technology every day now; every single processor for the past 8 years has had it ... and what performance does it yield?
None whatsoever for standard programs, but enable the MMX extensions when you compile a Windows application and you instantly get a 10% speed increase when performing colour and desktop operations.
People look at shaders as technically just graphics, but they're not, not by far ... they're giving you an EXTRA processor inside your home system, with additional power and registers to use however you see fit.
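For instance, here's a hedged sketch of using the fragment pipeline as a general-purpose array processor, which is the sort of "use it however you see fit" I mean. Two blocks of numbers get packed into textures, and the "colour" the shader writes out is just their element-by-element sum (dataA and dataB are names I've made up; the textures hold data, not images):

    // treating fragments as array elements rather than pixels
    float4 add_arrays_fp(float2 coord : TEXCOORD0,
                         uniform sampler2D dataA,
                         uniform sampler2D dataB) : COLOR
    {
        // each fragment is really one element of the output array
        return tex2D(dataA, coord) + tex2D(dataB, coord);
    }

Draw one full-screen quad into a render target and you've just added two big arrays together without the CPU touching a single element.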
The 5800 and 5900 FX are THE MOST ADVANCED shader cards on the market; raw technical power on paper is nothing. Think about it: how can your standard 5800 actually match head-to-head with a Radeon 9700 Pro when it's 1/4 of the speed?