
Dark Physics & Dark A.I. & Dark Dynamix / NVIDIA's acquisition of AGEIA

pdq
18 Years of Service
Joined: 20th Jul 2006
Location:
Posted: 5th Feb 2008 21:27 Edited at: 5th Feb 2008 21:27
I'm glad I did not buy a PPU.

Is this move by NVIDIA good for TGC and Dark Physics?
Is support for Dark Physics going to get harder?
monotonic
18 Years of Service
Joined: 24th Mar 2006
Location: Nottinghamshire, England
Posted: 6th Feb 2008 18:04
I would hazard a guess at no. If anything I'd assume it will make things easier, since the PPU will be built into the graphics hardware, and, just like AGEIA, NVIDIA wants to make money from selling that hardware. Restricting developers by imposing heavy licensing fees would only hamper that.

But this is just speculation!

Projects Started: 20
Projects Completed: 0
Pixel Perfect
17 Years of Service
Joined: 21st Feb 2007
Location: UK
Posted: 6th Feb 2008 21:43
I agree with monotonic. With Intel having bought Havok for use with their up-and-coming GPU, and now NVIDIA going for AGEIA, the competition's hotting up. Makes you wonder what ATI are going to do!

Combined graphics and physics acceleration is definitely on the way, which is probably only a good thing for end users, but it may see game developers having to support multiple physics interfaces or side with one or the other. Interesting times!

No matter how good your code is, someone will improve on it
monotonic
18 Years of Service
Joined: 24th Mar 2006
Location: Nottinghamshire, England
Posted: 7th Feb 2008 11:37
Quote: "but may see game developers having to support multiple physics interfaces or side with one or the other."


Yeah, this could be problematic. I heard a while ago that Microsoft was in talks with ATI and NVIDIA about adding a DirectPhysics layer to the DirectX API, but this acquisition kind of throws the whole thing into the air.

Projects Started: 20
Projects Completed: 0
Airslide
20 Years of Service
Joined: 18th Oct 2004
Location: California
Posted: 7th Feb 2008 16:39
DirectPhysics as a part of DirectX could at the very least provide a bridge between multiple physics systems and handle their usage based on what is supported by the hardware/software. That way it handles all the multiple-interface support. Of course, the problem with that is that the physics engines naturally behave a little differently, which it could perhaps compensate for, but you can't really compensate for one lacking certain features of another.
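
To make the bridge idea concrete, here is a rough C++ sketch of what such a layer might look like: one common interface, with the backend picked at runtime by what the hardware supports. Every name in it is invented for illustration; none of this is a real DirectPhysics API.

#include <iostream>
#include <memory>

// Lowest-common-denominator interface every backend must implement.
class IPhysicsBackend {
public:
    virtual ~IPhysicsBackend() = default;
    virtual const char* name() const = 0;
    virtual void step(float dt) = 0;  // advance the simulation by dt seconds
};

// CPU fallback that always works.
class SoftwareBackend : public IPhysicsBackend {
public:
    const char* name() const override { return "software"; }
    void step(float) override { /* integrate rigid bodies on the CPU */ }
};

// Stand-in for a hardware-accelerated engine (PhysX, Havok, ...).
class PhysXBackend : public IPhysicsBackend {
public:
    const char* name() const override { return "PhysX"; }
    void step(float) override { /* hand the scene off to the PPU/GPU */ }
};

// A real layer would query the driver here; this stub just says no.
bool physxHardwarePresent() { return false; }

std::unique_ptr<IPhysicsBackend> selectBackend() {
    if (physxHardwarePresent())
        return std::make_unique<PhysXBackend>();
    return std::make_unique<SoftwareBackend>();  // graceful fallback
}

int main() {
    auto physics = selectBackend();
    std::cout << "Using " << physics->name() << " backend\n";
    physics->step(1.0f / 60.0f);  // one 60 Hz frame
}

The catch raised above still applies: a common interface can only expose the features all the engines share, so anything engine-specific would need feature queries layered on top.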

monotonic
18 Years of Service
Joined: 24th Mar 2006
Location: Nottinghamshire, England
Posted: 7th Feb 2008 16:49
Airslide,

Yeah, that is true, you would still have to write some variations of your code to suit the different hardware capabilities. But this is already the case, to a certain extent, when utilising different graphics hardware at the moment; I believe TGC worked closely with NVIDIA when developing FPSC X10.

Projects Started: 20
Projects Completed: 0
General Reed
18 Years of Service
Joined: 24th Feb 2006
Location:
Posted: 9th Feb 2008 15:51
Quote: "Intel having bought Havok for use with their up and coming GPU"


Intel make GPUs? I'm aware of the onboard ones, but they're hardly worth integrating Havok into.

CPU: AMD X2 6000+ 3.0ghz GFX: NVIDIA BFG Geforce 8800GTS 640MB OC-550mhz core RAM: 2048mb

david w
18 Years of Service
Joined: 18th Dec 2005
Location: U.S.A. Michigan
Posted: 9th Feb 2008 17:50
I am sure Intel has the means to produce a "real" GPU. Think about that. Also, I wouldn't count ATI out on this either, considering AMD's processors.

Perhaps we are witnessing the beginning of a three-way race for top physics + GPU performance, maybe even with custom CPUs that complement them in some way. Think about it.

Maybe we will have to start making games for each chip/GPU, like console programmers do. Maybe three SDKs.
Pixel Perfect
17 Years of Service
Joined: 21st Feb 2007
Location: UK
Posted: 9th Feb 2008 20:04
@General Reed

Intel have been keeping fairly quiet about this one, but it's believed their new GPU, code-named 'Larrabee', will be launched sometime in Q2 2008. Plans have already been revealed showing a PCIe 2.0 board layout based on the Larrabee chip, so it definitely looks like they are aiming at the graphics card marketplace.

No matter how good your code is, someone will improve on it
Visigoth
19 Years of Service
Joined: 8th Jan 2005
Location: Bakersfield, California
Posted: 11th Feb 2008 07:01
I think ATI has had Havok integrated with their GPUs for some time. I think NVIDIA is just playing catch-up.
geecee3
20 Years of Service
Joined: 25th Feb 2004
Location: edinburgh.scotland.
Posted: 11th Feb 2008 15:31
NVIDIA buying AGEIA, AMD buying ATI, Intel joining the 'real' GPU race: this can only mean one thing in the end, better, faster and more innovative technology for us, with an even quicker turnaround.

It's no surprise really that Intel have gotten into the GPU market in a more serious way, because the buzz at the moment is affordable stream processing and parallelism, where ATI and NVIDIA have an advantage. A simple explanation of this would be a highly accurate physics simulation based on multiple results computed in parallel from the same input data; this intermediate set of results could then be processed further to produce what would be deemed a very accurate final result from many concurrent simulations. The only really cost-effective way to do this at the moment is to use the stream processors on modern GPUs. We now have the ability to perform scientific-level supercomputing tasks on desktop hardware through GPU technology. This will translate into deeper and faster simulations of physics, and very soon AI based on differences produced from parallel problem solving, all accelerated in hardware and ending up in the games you play. Even more amazing is the fact that the technology is scalable through SLI and CrossFireX type technologies, allowing multiple GPUs to work in concert on a specific task.
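
As a toy illustration of the "many concurrent simulations, one combined result" idea, here is a C++ sketch that runs perturbed copies of the same simulation on CPU threads and averages them; on a GPU each run would map onto a stream processor instead. All the numbers and names are made up for illustration.

#include <cmath>
#include <iostream>
#include <numeric>
#include <random>
#include <thread>
#include <vector>

// One simulation run: a projectile launched with slightly perturbed
// initial conditions; returns how far it lands (ideal ballistic range).
double simulate(unsigned seed) {
    std::mt19937 rng(seed);
    std::normal_distribution<double> noise(0.0, 0.05);
    double v     = 20.0   + noise(rng);   // launch speed, m/s
    double angle = 0.7854 + noise(rng);   // ~45 degrees, in radians
    const double g = 9.81;
    return v * v * std::sin(2.0 * angle) / g;
}

int main() {
    const int runs = 64;                  // concurrent simulations
    std::vector<double> results(runs);
    std::vector<std::thread> pool;

    // Launch every run in parallel from the same input model.
    for (int i = 0; i < runs; ++i)
        pool.emplace_back([&results, i] { results[i] = simulate(i); });
    for (auto& t : pool) t.join();

    // Process the intermediate results into one combined final answer.
    double mean = std::accumulate(results.begin(), results.end(), 0.0) / runs;
    std::cout << "mean range over " << runs << " runs: " << mean << " m\n";
}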

now anyone can have a supercomputer.

Who really gives a carp about who's in the lead or which technology is best? The truth of the matter is that it makes no odds; the balance of power looks different depending on your perspective, and is fairly even at the end of the day when taken as a whole. There's nothing worse than a narrow-minded fanboy's view of GPU technology; it's not all about framerates and pretty pictures anymore. Some might say it's sacrilege to use graphical processing power for 'other' tasks; I say embrace the power, because nothing else comes close.

http://ati.amd.com/products/streamprocessor/specs.html

http://www.nvidia.com/object/tesla_gpu_processor.html

Ohd Chinese Ploverb say : Wise Eskimo, not eat yerrow snow.
TinTin
18 Years of Service
Joined: 16th May 2006
Location: BORG Drone Ship - Being Assimilated near Roda Beta (28)
Posted: 11th Feb 2008 16:38
Just drooling about a physics-based particle system...
There is a little-overlooked problem, though, that is going to hold back any development (let me explain...).

For framerate reasons, especially in first person shooters, the game level/arena is optimised to look like it has more scale and detail than it actually has; skyboxes are an example. Now, true-to-life physics calculations would (say, dropping a brick in the ocean) calculate the resulting wave as it travels across the water. In the 3D environment this would hit the skybox pretty quickly, so the effect would get truncated. To resolve this, programmers would have to be aware of the level design and write routines to handle it; level designers would also have to consider this when designing levels. Imagine an explosion sending vital level-completing equipment out of the arena.
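
A C++ sketch of the kind of routine that implies (all names invented): each frame, deactivate physics objects that have left the playable bounds, so the simulation never burns time past the skybox.

#include <vector>

struct Bounds { float minX, maxX, minY, maxY, minZ, maxZ; };

struct PhysicsObject {
    float x, y, z;
    bool  active;  // still participating in the simulation
};

// Run once per frame after the physics step: anything outside the
// playable volume stops being simulated (or gets clamped/destroyed).
void cullOutOfBounds(std::vector<PhysicsObject>& objects, const Bounds& b) {
    for (auto& o : objects) {
        bool inside = o.x > b.minX && o.x < b.maxX &&
                      o.y > b.minY && o.y < b.maxY &&
                      o.z > b.minZ && o.z < b.maxZ;
        if (!inside)
            o.active = false;
    }
}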

Now, with the amount of extra coding required to achieve this, we return to code that will eventually grind to a halt, so limits will have to be put in place to control the overhead; hence the reason most games that employ physics limit it to projectiles and objects (crates) at the moment. Someday someone may create a tool to automate the process, like the ones used to calculate lighting at the moment, but I'd guess they'd be limited in their application, as a truly realistic simulation should allow the player to create unlimited possibilities using the available in-game resources.

Cyberspace was becoming overcrowded and slummy so I decided to move. These nice chaps gave me a lift.
Dewi Morgan
16 Years of Service
Joined: 16th Jan 2008
Location: London, UK
Posted: 25th Feb 2008 01:10
Emergent behaviour has caused headaches for game designers for a long time now. Basically, the answer is to ignore the non-damaging parts (let players stack paintbrushes to get wherever they want, if they really want to), and code around the damaging bits: instead of having keys that you can knock off the edge of the world, either mark the item as exempt from the physics simulation, or have something else instead of a key - a lever, a keycode, etc.

Many also mark quest items as "important" and, if they are destroyed (e.g. by falling outside the skybox and into the "object destruction" drain), respawn them either in the player's inventory or at their initial start point.
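
That respawn rule is simple to express; a minimal C++ sketch (names invented, not any particular engine):

struct Vec3 { float x, y, z; };

struct Item {
    Vec3 position;
    Vec3 spawnPoint;  // where the item started the level
    bool important;   // quest-critical: must never be lost
};

// Called when an item falls into the "object destruction" drain
// outside the skybox.
void onDestroyed(Item& item) {
    if (item.important)
        item.position = item.spawnPoint;  // respawn at its start point
    // otherwise let it go: stacked paintbrushes are harmless
}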

The best solution, though, is to avoid single-path bottlenecks in sandbox games. Aim to always have more than one possible solution to a problem, to allow for multiple different play styles and multiple different ways of messing up before you complete the mission.

Can't unlock that door? Then blow it open, lockpick it, smash a window and deal with the alarmed guards, stack crates to get in through the roof, lure the inhabitant to come out and unlock it...

Yet another game programmer
Natflash Games
18 Years of Service
Joined: 7th Feb 2006
Location:
Posted: 3rd Mar 2008 02:00
I heard that they're going to enable PhysX on CUDA. Does this mean that with a GeForce 8 series card we'll be able to use hardware mode in DP?

If this is the case, I may not be so quick to get rid of my old 8600: one card for graphics and the other for physics!

It's a very exciting thought!


Check out my site for the latest on my games.
Natflash Games
18 Years of Service
Joined: 7th Feb 2006
Location:
Posted: 11th Mar 2008 20:26 Edited at: 11th Mar 2008 23:58
Anyone know wether this is the case?

Oops... hehe


Check out my site for the latest on my games.
jason p sage
17 Years of Service
Joined: 10th Jun 2007
Location: Ellington, CT USA
Posted: 11th Mar 2008 20:47
Sunny, clear Skies cold... Oh.. Whether... No Idea. LOL (Like I'm one to salk about wpelling!)

I'd suspect you could do that. I mean, if you have two cards, I would think you'd be able to select which one is your primary display adapter, and the physics hardware in theory would be accessible as the separate module it is... that is, of course, unless the GeForce series shuts everything down when it's not set as the current adapter.

Hmm... I think it will work in your favor though!
