COMBATSIM.COM: The Ultimate Combat Simulation and Strategy Gamers' Resource.
 

3D Hardware Part I
   Leonard "Viking1" Hjalmarson
 

What Is 3D Hardware?

3D video accelerators have arrived on the scene in the last two years, erupting out of commercial graphics workstations and onto the PC gaming platform. A short time ago it was difficult to find many games that supported any established 3D standard, but today the majority of releases support 3D acceleration through one of three major APIs (application programming interfaces, the software layer a game uses to communicate with your hardware).

In spite of the common APIs, the hardware choices are mind-boggling, and the technology is quite complex. Why 3D hardware anyway? Who are the players? What are the differences? What is cutting edge today (as opposed to the last generation, a mere three months ago)?

It's staggering to realize how fast the technology is advancing. When 3dfx's first chipset arrived about eighteen months ago, it was stunning in speed and quality. When their second generation chipset arrived in the spring of 1998, it was roughly 2.5x faster and carried more advanced features.

Similarly, the second generation Riva chipset produced by nVidia arrived about twelve months ago on boards like the STB Velocity 128 in a 4 meg incarnation. The third generation has been on the streets for barely a month, sporting 16 megabytes of memory, an advanced AGP interface, and roughly 2.5x the speed of the last generation with much better image quality.

History Lessons

If we look back only two years, all 3D rendering was being done by the software/application. In other words, everything you saw on the screen was being computed, processed, and delivered to the video card by the CPU. With the advent of the 3D accelerator card, polygons could be drawn using only their 3-dimensional vertices, and textures could then be mapped onto those polygons.

In addition to 3D acceleration, other features such as shading, anti-aliasing, fogging, and dithering have been carried over from previous 2D graphics accelerators and improved.

The 3D accelerator goes beyond the 2-dimensional world by calculating the hidden element: the z-axis. If you look at your monitor, the left-and-right direction could be called the X-axis and the up-and-down direction the Y-axis. The Z-axis goes into the monitor (we call it the "depth"). By utilizing a 3D accelerator in your computer, applications (such as flight simulators) can offload the bulk of the 3D rendering to the video card, freeing the CPU to perform other important tasks. So, there you have it: the one minute explanation of what a 3D accelerator card is and does!
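To make the one minute explanation concrete, here is a minimal sketch in C of the kind of arithmetic being offloaded: projecting a vertex with x, y, and z coordinates onto the screen with a simple perspective divide. The numbers (a 640x480 screen, a focal length of 256) are illustrative only, not tied to any particular card or API.

    #include <stdio.h>

    /* A vertex in 3D space: x (left-right), y (up-down), z (depth). */
    typedef struct { float x, y, z; } Vertex3;

    /* Project a 3D vertex onto a 640x480 screen with a perspective
       divide: the larger z is (the deeper "into" the monitor), the
       closer the point lands to the center of the screen. */
    void project(Vertex3 v, float focal, float *sx, float *sy)
    {
        *sx = 320.0f + (v.x * focal) / v.z;   /* screen x */
        *sy = 240.0f - (v.y * focal) / v.z;   /* screen y */
    }

    int main(void)
    {
        Vertex3 v = { 1.0f, 2.0f, 10.0f };    /* 10 units "into" the monitor */
        float sx, sy;
        project(v, 256.0f, &sx, &sy);
        printf("screen: (%.1f, %.1f)\n", sx, sy);   /* (345.6, 188.8) */
        return 0;
    }

A 3D accelerator performs this kind of work (and the texturing, shading, and depth testing that follow it) for thousands of vertices per frame, which is exactly the load taken off the CPU.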

The Players….

There is a bundle of major chipsets on the 3D market: 3Dfx (Voodoo/Voodoo2), nVidia (Riva 128 and TNT), NEC (PowerVR2), Intel (i740), Rendition (V2200 and soon Redline), ATI (Rage Pro and soon Rage Pro 128), S3 (the new Savage3D), and last [but definitely not least], Matrox (the new G200). Conspicuously absent so far, No.9 will soon enter the 3D superhighway with their Revolution IV. Each chipset has its strengths and weaknesses, but increasingly the feature list is becoming standard and the bar is rising.

Talking to Your Hardware

There are different ways for a game or application to communicate or interface with the 3D video card. Applications communicate with the card via Application Programming Interfaces (APIs). This interface is what we call a "layer". The most common API on the PC side, embedded in Windows 95, is DirectX [by none other than Microsoft]. DirectX is actually a suite of software components (a minimal initialization sketch follows the list); it includes:

  • DirectDraw for enhanced 2-D graphics services.
  • Direct3D for enhanced 3-D graphics services.
  • DirectSound for enhanced sound-mixing and playback services.
  • DirectPlay for enhanced multiplayer game connectivity over the Internet.
  • DirectInput for enhanced joystick and other input device performance.
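
As a hedged illustration of how a game gets at these components, the C fragment below acquires the DirectDraw interface and claims the display. It is a sketch rather than a complete program: it assumes the application has already created a window (hWnd), and the surface creation and Direct3D query that would follow are elided.

    #include <windows.h>
    #include <ddraw.h>

    /* Fragment: acquire DirectDraw, take the screen in full-screen
       exclusive mode, and set a 640x480, 16-bit display mode.
       hWnd is assumed to be a window the game already created. */
    LPDIRECTDRAW lpDD = NULL;

    if (SUCCEEDED(DirectDrawCreate(NULL, &lpDD, NULL))) {
        lpDD->lpVtbl->SetCooperativeLevel(lpDD, hWnd,
                                          DDSCL_EXCLUSIVE | DDSCL_FULLSCREEN);
        lpDD->lpVtbl->SetDisplayMode(lpDD, 640, 480, 16);
        /* ... create surfaces, query for Direct3D, render ... */
        lpDD->lpVtbl->Release(lpDD);
    }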

OpenGL is another popular API; however, it is not supported by all video cards and will not accelerate 3D in a window on a 3Dfx-based video card. OpenGL was originally created by SGI as a cross-platform technology and became popular in the gaming world with the release of OpenGL Quake. For the most part, this API has not been used as widely as some other APIs, but that may change!

As an important note, OpenGL drivers for many video cards are still in beta, and some are considered "mini-GL" drivers because they only support a subset of the OpenGL functionality (mostly for Quake compatibility). So, as a general rule, if you have an application that requires 3D acceleration in a window (as opposed to full-screen mode), you will have to shy away from 3Dfx.
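For a taste of the immediate-mode style that even a mini-GL driver must cover, here is a textured triangle in core OpenGL 1.1 C calls. A working OpenGL context and an already-bound texture are assumed; this is a sketch, not a full program.

    #include <GL/gl.h>

    /* Draw one textured triangle in OpenGL immediate mode.
       An OpenGL context and a bound texture are assumed to exist. */
    void draw_triangle(void)
    {
        glEnable(GL_TEXTURE_2D);
        glBegin(GL_TRIANGLES);
            glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f, -1.0f, -5.0f);
            glTexCoord2f(1.0f, 0.0f); glVertex3f( 1.0f, -1.0f, -5.0f);
            glTexCoord2f(0.5f, 1.0f); glVertex3f( 0.0f,  1.0f, -5.0f);
        glEnd();
    }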


Performance

The Glide API was developed by 3dfx to serve as an API layer between the 3Dfx card and the operating system. Glide is more efficient, and thus quicker, than other popular APIs because it was written exclusively for 3Dfx-based hardware.
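For flavor, here is the skeleton of a Glide program in C, based on the 2.x entry points shipped in 3dfx's SDK. Treat the exact parameters as illustrative and consult the SDK before relying on them; the point is how thin the layer is between the game and the Voodoo hardware.

    #include <glide.h>

    /* Skeleton of a Glide 2.x program: initialize the library, open
       the screen on the first Voodoo board found, clear and swap
       the buffers, then shut down. */
    int main(void)
    {
        grGlideInit();
        grSstSelect(0);   /* first 3Dfx board in the system */
        grSstWinOpen(0, GR_RESOLUTION_640x480, GR_REFRESH_60Hz,
                     GR_COLORFORMAT_ARGB, GR_ORIGIN_UPPER_LEFT, 2, 1);

        grBufferClear(0x00000000, 0, GR_WDEPTHVALUE_FARTHEST);
        /* ... grDrawTriangle() calls would go here ... */
        grBufferSwap(1);  /* swap on the next vertical retrace */

        grGlideShutdown();
        return 0;
    }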

The most common questions our readers ask are: what is Glide and do I need it? Which cards support Glide? How does Glide relate to Direct3D? In short, all recent 3D boards support Direct3D, but only boards that sport chipsets made by 3dfx support Glide.

In the past (read: until a few months ago), Glide was considerably better than Microsoft's API as incarnated in DX5. For example, if one ran a Voodoo 1-based board like the Canopus Pure3D in F22: ADF, the game looked very good and the frame rate was also quite good. Moving to Direct3D under DX5, the game looked poor and the frame rate dropped.

Now, however, the situation is changing: a simulation like EAW looks about the same under Glide or Direct3D (DX6), and the frame rate is about the same. DX6 has added many new features to the API, while Glide hasn't evolved as much. Glide 3 has brought new features for added speed to Voodoo2-based boards, but simulations that support these features will be rare until the new year (Jane's WWII Fighters, Empire's MiG Alley, and MicroProse's Falcon 4.0, for example).

In some new simulations Glide is no longer supported at all; instead, D3D has become the only API. Microsoft's own Combat Flight Simulator, for example, runs ONLY under Direct3D, so a 3dfx V1 or V2 owner has to run their board under D3D rather than Glide. Because of certain limitations in the 3dfx hardware, the sim looks better on other D3D boards like the new Matrox G200, and higher resolutions become available.

In other cases where both Glide and D3D are supported, Direct3D as incarnated in DX6 looks considerably better than Glide. This is the case in Falcon 4.0. Why is this so? There seem to be at least two reasons. First, Voodoo chipsets prior to the Banshee support only 16-bit color, whereas the rest of the new generation of 3D hardware sports 32-bit rendering pipelines that then output data at 16 bits.
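The cost of a 16-bit pipeline is easy to demonstrate. The C sketch below packs a color into the common 5:6:5 16-bit layout and unpacks it again; the blue channel comes back visibly off. Nothing here is specific to any one chipset.

    #include <stdio.h>

    /* Pack an 8-bit-per-channel color into the common 16-bit 5:6:5
       format and back, to show the precision lost at 16 bits. */
    unsigned short pack565(unsigned char r, unsigned char g, unsigned char b)
    {
        return (unsigned short)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
    }

    int main(void)
    {
        unsigned short p = pack565(200, 100, 50);
        unsigned char r = (p >> 11) << 3;           /* 200 -> 200 */
        unsigned char g = ((p >> 5) & 0x3F) << 2;   /* 100 -> 100 */
        unsigned char b = (p & 0x1F) << 3;          /* 50  -> 48  */
        printf("%u %u %u\n", r, g, b);
        return 0;
    }

A pipeline that computes internally at 32 bits only rounds once, at output; a pipeline that works at 16 bits throughout accumulates this kind of error at every blending step, which shows up on screen as banding.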

Second, the new generation of 3D boards sports up to 16 meg of onboard RAM and is AGP 2x compliant, which means they can access system memory for very large textures beyond what they can store onboard.

3D video cards have two types of memory: texture memory and frame buffer memory. Frame buffer memory determines the maximum resolution at which a card can operate. Texture memory is used to store the surface images that are applied to the polygons; the polygons themselves are still rendered in frame buffer memory.
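The frame buffer requirement is simple arithmetic. The sketch below assumes a typical double-buffered display with a Z-buffer, all at 16 bits per pixel; buffer counts vary by board, so treat it as an estimate.

    #include <stdio.h>

    /* Frame buffer memory needed at a given resolution: front buffer
       + back buffer + Z-buffer, all at 16 bits (2 bytes) per pixel. */
    int main(void)
    {
        long w = 1024, h = 768, bytes_per_pixel = 2;
        long frame = w * h * bytes_per_pixel;      /* one buffer       */
        long total = frame * 3;                    /* front + back + Z */
        printf("%ld KB per buffer, %ld KB total\n",
               frame / 1024, total / 1024);        /* 1536 KB, 4608 KB */
        return 0;
    }

At 1024x768 that works out to about 4.5 meg before a single texture is stored, which is why frame buffer memory caps the resolution a card can run.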

It's worth noting that the average game uses a LOT more than 2MB of textures. Any time a game has to draw a texture that is NOT present in texture memory, it must either cheat and not do the work at all, or discard some texture currently stored in memory and load the new one. Loading new textures means lost time, which means a speed reduction, defeating the whole point of having a hardware accelerator in the first place.
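To see why that costs speed, consider a toy model in C of the bookkeeping involved. Everything below (the names, the structure, the upload_texture stand-in) is hypothetical, purely to illustrate the discard-and-reload cycle just described.

    /* Toy model: before drawing with a texture, make sure it is
       resident in on-board texture memory, evicting the least
       recently used texture when memory is full. */
    #define MAX_RESIDENT 32          /* however many textures fit on board */

    typedef struct {
        int  id;                     /* which texture */
        long last_used;              /* "time" of last use */
    } Resident;

    static Resident cache[MAX_RESIDENT];
    static int resident_count = 0;

    void use_texture(int id, long now)
    {
        int i, lru = 0;

        for (i = 0; i < resident_count; i++)
            if (cache[i].id == id) { cache[i].last_used = now; return; }

        /* Slow path: the texture is not on the board and must be
           uploaded over the bus -- this is the lost time. */
        if (resident_count < MAX_RESIDENT)
            i = resident_count++;
        else {
            for (i = 1; i < MAX_RESIDENT; i++)   /* pick the LRU victim */
                if (cache[i].last_used < cache[lru].last_used) lru = i;
            i = lru;
        }
        cache[i].id = id;
        cache[i].last_used = now;
        /* upload_texture(id);  -- hypothetical stand-in for the transfer */
    }

The more texture memory a board has, the less often the slow path fires, which is the whole argument of the next paragraph.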

The significant point in the great 3dfx vs. "other" debate is that even the most advanced 3dfx boards currently available max out at 4 meg of texture memory (in the case of V2, 4 megs per texture unit). With the pace of software development, that is rapidly becoming a minimum rather than an ideal amount of memory. 6-8 megabytes of texture memory is much more desirable, and 4 megabytes will limit the speed and image quality of coming simulations.
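A quick back-of-the-envelope calculation shows how tight 4 meg is. At a typical 256x256 size and 16 bits per pixel, each texture costs 128K:

    #include <stdio.h>

    /* How many 16-bit, 256x256 textures fit in a given texture memory? */
    int main(void)
    {
        long tex = 256L * 256L * 2L;                        /* 128 KB each */
        printf("4 meg: %ld textures\n", (4L << 20) / tex);  /* 32 */
        printf("8 meg: %ld textures\n", (8L << 20) / tex);  /* 64 */
        return 0;
    }

Thirty-two distinct textures is not much for a simulation drawing terrain, aircraft skins, cockpit panels, and effects every frame.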

This is already evident in MicroProse's coming simulation, Falcon 4.0. It will be even more evident in simulations released in 1999. 3dfx's coming Banshee board takes a first step toward a remedy by increasing texture memory to 8 meg, equivalent to that on the best D3D chips like the Matrox MGA G200 and the Riva TNT. However, the Banshee board is a compromise. Unlike the TNT and Voodoo2 chipsets, it does not have dual texture processing units, and it does not support even standard AGP features (the second revision of the Banshee in 1999 will do so).

Go to Part II

 

 
This material is copyrighted and may not be reprinted in any form without permission of the publisher.
Last Updated September 29th, 1998
