Twilight of the GPU: an epic interview with Tim Sweeney

Inflection points

"Inflection point" is a much abused word these days, but if it's appropriate anywhere, then it's appropriate for describing the moment in the history of computing that we're rapidly approaching. It's a moment in which the shift to many-core hardware and multithreaded programming has quite literally broken previous paradigms for understanding the relationship between hardware and software, and the industry hasn't yet sorted out which new paradigms will replace the old ones. HangZhou Night Net

Importantly, the entire computing industry won't pass through this inflection point all at once; it will happen at different times in different markets, as Moore's Law increases core and thread counts for different classes of processors. The first type of device to pass this inflection point will be the GPU, as it goes from being a relatively specialized, function-specific coprocessor to a much more generally programmable, data-parallel device. When the GPU has fully made that shift, game developers will have the opportunity to rethink real-time 3D rendering from the ground up.

For Tim Sweeney, co-founder of Epic Games and the brains behind every iteration of the widely licensed Unreal series of 3D game engines, this inflection point has been a long time coming. Back when Unreal 1 was still in stores and the 3dfx Voodoo owned the nascent discrete 3D graphics market, Sweeney was giving interviews in which he predicted that rendering would eventually return to the CPU. Take a 1999 interview with Gamespy, for instance, in which he lays out a future timeline for the development of 3D game rendering that has proven remarkably prescient:

2006-7: CPU's become so fast and powerful that 3D hardware will be only marginally beneficial for rendering, relative to the limits of the human visual system, therefore 3D chips will likely be deemed a waste of silicon (and more expensive bus plumbing), so the world will transition back to software-driven rendering. And, at this point, there will be a new renaissance in non-traditional architectures such as voxel rendering and REYES-style microfacets, enabled by the generality of CPU's driving the rendering process. If this is the case, then the 3D hardware revolution sparked by 3dfx in 1997 will prove to only be a 10-year hiatus from the natural evolution of CPU-driven rendering.

Sweeney was off by at least two years, but otherwise it looks increasingly likely that he'll turn out to be correct about the eventual return of software rendering and the death of the GPU as a fixed-function coprocessor. Intel's forthcoming Larrabee product will be sold as a discrete GPU, but it is essentially a many-core processor, and there's little doubt that its eventual competitors from NVIDIA and ATI will be similarly programmable, even if their individual cores are simpler and more specialized.

At NVIDIA's recent NVISION conference, Sweeney sat down with me for a wide-ranging conversation about the rise and impending fall of the fixed-function GPU, a fall that he maintains will also sound the death knell for graphics APIs like Microsoft's DirectX and the venerable, SGI-authored OpenGL. Game engine writers will, Sweeney explains, be faced with a C compiler, a blank text editor, and a dizzying array of possibilities for bending a new generation of general-purpose, data-parallel hardware toward the task of putting pixels on a screen.

Goodbye, graphics APIs

JS: I'd like to chat a little bit about Larrabee and software rendering. I'm sure you're NDA'd on it, but Intel just did a pretty substantial reveal so we can talk in more detail about it. So first off, I'm wondering if you're looking at any of the Larrabee native stuff. What do you think about the prospects of this whole idea of not doing Direct3D or OpenGL, but writing directly to Larrabee's micro-OS?

TS: I expect that in the next generation we'll write 100 percent of our rendering code in a real programming language—not DirectX, not OpenGL, but a language like C++ or CUDA. A real programming language unconstrained by weird API restrictions. Whether that runs on NVIDIA hardware, Intel hardware or ATI hardware is really an independent question. You could potentially run it on any hardware that's capable of running general-purpose code efficiently.

JS: So you guys are just going to skip these graphics APIs entirely?

TS: That's my expectation. Graphics APIs only make sense in the case where you have some very limited, fixed-function hardware underneath the covers. It made perfect sense back with the 3Dfx Voodoo and the first NVIDIA cards, and the very first GeForces, but now that you have completely programmable shaders, the idea that you divide your scene up into triangles rendered in a certain order to a large framebuffer using fixed-function rasterizer features is really an anachronism. With all that general hardware underneath, why do you want to render scenes that way when you have more interesting possibilities available?
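
To make the idea concrete, here is a minimal sketch, in CUDA, of what rendering without a graphics API could look like: a kernel that computes a color for each pixel and writes it straight into a framebuffer, with no Direct3D or OpenGL anywhere in the loop. The scene logic is a placeholder (a simple gradient), and the names and structure are illustrative assumptions rather than anything from Epic's actual renderer; a real engine would run ray casting, voxel traversal, or REYES-style micropolygon shading in the same spot.

    // Hypothetical sketch: per-pixel rendering as an ordinary data-parallel kernel.
    // Nothing here comes from Epic; the "shading" is just a gradient placeholder.
    #include <cuda_runtime.h>
    #include <cstdio>

    __global__ void shade(uchar4* framebuffer, int width, int height)
    {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= width || y >= height) return;

        // Any algorithm could run here: ray casting, voxels, micropolygons.
        unsigned char r = (unsigned char)(255.0f * x / width);
        unsigned char g = (unsigned char)(255.0f * y / height);
        framebuffer[y * width + x] = make_uchar4(r, g, 128, 255);
    }

    int main()
    {
        const int width = 1280, height = 720;
        uchar4* framebuffer = nullptr;
        cudaMalloc(&framebuffer, width * height * sizeof(uchar4));

        dim3 block(16, 16);
        dim3 grid((width + block.x - 1) / block.x,
                  (height + block.y - 1) / block.y);
        shade<<<grid, block>>>(framebuffer, width, height);
        cudaDeviceSynchronize();

        printf("rendered %dx%d pixels without a graphics API\n", width, height);
        cudaFree(framebuffer);
        return 0;
    }

The point is simply that once the hardware runs general-purpose code, the "rendering pipeline" is whatever the kernel says it is.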
