Below you will find a video demonstrating real-time physics simulations running on an NVIDIA GeForce GTX 680.
I wonder how many years it will be before we see this level of simulation in games, and I don’t mean rubber duck simulation games. I don’t mind seeing realistic physics for the sake of visual flair, but I do wonder how the realistic jiggle of water balloons will affect gameplay.
Saw this article on Tech Report and thought it was interesting. I’ve always wondered why PhysX couldn’t be implemented on the CPU and it looks like there’s no real reason.
The x87 floating point instructions are positively ancient and have long since been deprecated in favor of the much more efficient SSE2 instructions (and soon AVX). Intel started discouraging the use of x87 with the introduction of the P4 in late 2000. AMD has deprecated x87 since the K8 in 2003, as x86-64 is defined with SSE2 support; VIA’s C7 has supported SSE2 since 2005. In 64-bit versions of Windows, x87 is deprecated for user mode and prohibited entirely in kernel mode. Pretty much everyone in the industry has recommended SSE over x87 since 2005, and there is no reason to use x87 unless software has to run on an embedded Pentium or 486.
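To make the gap concrete, here is a minimal sketch of my own (not PhysX code) contrasting scalar math, which is all x87 can do, with packed SSE math that processes four single-precision values per instruction:

```c
/* My own illustration, not PhysX code: scalar vs. packed addition of two
 * float arrays. A compiler targeting x87 emits one floating-point operation
 * per element; the packed version handles four per instruction.
 * Compile with, e.g., gcc -O2 -msse2. */
#include <xmmintrin.h>  /* SSE packed single-precision intrinsics */

/* Scalar version: one add per loop iteration. */
void add_scalar(const float *a, const float *b, float *out, int n)
{
    for (int i = 0; i < n; ++i)
        out[i] = a[i] + b[i];
}

/* Packed version: four adds per iteration using 128-bit registers.
 * Assumes n is a multiple of 4 to keep the sketch short. */
void add_packed(const float *a, const float *b, float *out, int n)
{
    for (int i = 0; i < n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(out + i, _mm_add_ps(va, vb));
    }
}
```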
No Technical Reason to Run on x87
The truth is that there is no technical reason for PhysX to be using x87 code. PhysX uses x87 because Ageia, and now NVIDIA, want it that way. NVIDIA already has PhysX running on consoles using the AltiVec extensions for PPC, which are very similar to SSE. It would probably take about a day or two to get PhysX to emit modern packed SSE2 code, and several weeks for compatibility testing. In fact, for backwards compatibility, PhysX could select at install time whether to use an SSE2 version or an x87 version, just in case the elusive gamer with a Pentium Overdrive decides to try it.
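As a rough illustration of how simple that selection would be, here is a hypothetical dispatch sketch. The solver entry points are invented names and PhysX ships nothing like this, but the CPU check itself is a one-liner on GCC/Clang:

```c
/* Hypothetical sketch, not anything PhysX actually ships: probe the CPU once
 * and route to a packed-SSE2 build of the solver, falling back to an x87
 * build only on museum-piece hardware. Uses GCC/Clang's
 * __builtin_cpu_supports; MSVC would need __cpuid instead. */
#include <stdio.h>

/* Placeholder solver entry points; the names are invented for illustration. */
static void solver_step_sse2(void) { puts("running packed SSE2 path"); }
static void solver_step_x87(void)  { puts("running legacy x87 path"); }

int main(void)
{
    __builtin_cpu_init();               /* initialize CPU feature detection */
    if (__builtin_cpu_supports("sse2")) /* every x86-64 CPU reports SSE2 */
        solver_step_sse2();
    else
        solver_step_x87();
    return 0;
}
```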
CPU PhysX may never be as effective as running on a fast GPU, but it would be far better than it is now if it were properly optimized instead of running on x87 instructions. Multi-core CPUs are already underutilized in modern PC gaming, and proprietary tactics like this (intentional or not) do not help things.
Of course, this is NVIDIA’s baby and they could raise it however they wish. The real takeaway from this little debacle is the need for some kind of physics standard like what we have for graphics. I wonder who will step up.
NVIDIA shed some light on how they are going to utilize their recently acquired Ageia PhysX property. NVIDIA plans to incorporate the PhysX engine into the Compute Unified Device Architecture (CUDA) SDK.
What will this mean for the end user? Every single GeForce 8 owner will have GPU-accelerated physics capability out of the box. If NVIDIA’s CEO is to be believed, it may also entice high-end gamers into buying more video cards.
Ultimately, this is all moot if no one supports it. I assume Unreal Engine 3 will just work with the PhysX-enabled CUDA SDK, but that’s just speculation on my part. Until developers announce their endorsement and actually utilize this technology, there’s no reason to celebrate yet.
AGEIA made a name for itself with the introduction of its PhysX software and hardware solutions. The PhysX SDK is available for a wide variety of platforms, including the Xbox 360, Wii, PlayStation 3, and the PC. While its hardware solutions have been less than stellar, its SDK has garnered some attention from the likes of Epic Games and their Unreal Engine 3.0.
Well, it now appears that NVIDIA has deemed them worthy of purchase. A press release earlier today announced that NVIDIA is going to “bring amazing physics dynamics to millions of gamers” through the acquisition of AGEIA. Does this mean we are going to see PhysX physics processing units (PPUs) alongside the upcoming GeForce video cards? I have no idea. One thing is for certain, though: this is the best thing to happen to AGEIA since its inception.