COMBATSIM.COM: The Ultimate Combat Simulation and Strategy Gamers' Resource.
 

3D Accelerators
by COMBATSIM.COM
 

3D Accelerator Information

What's A 3D Accelerator?

Just as a graphics accelerator is optimized for making graphics appear on your screen as quickly as possible, and a multimedia accelerator contains specialized hardware for making video playback smooth and realistic, a 3D accelerator is designed to enhance performance when using software that presents a three-dimensional environment on your display.

Although affordable 3D accelerators are new, people have been displaying three-dimensional environments on two-dimensional (flat) displays for years. Architects and engineers have used computers to create 3D projections of plans and models since the late 1960s. On the entertainment front, Atari's Battlezone, with its abstract geometrical shapes and cold green lines, was unleashed in 1980, and 3D charts and graphs have been a staple of spreadsheets and financial programs for years. The benchmark for quality is continually raised -- Doom was revolutionary when it was released in 1993, but today's 3D games like EF2000, Jane's Longbow and Earthsiege II have advanced even further.

However, the sheer amount of mathematical work needed to draw complex 3D scenes has kept the level of graphics quality lower than that of non-3D games. Just compare the image quality of Doom to that of Myst. As far as computing technology has come, 3D games and applications still have plenty of potential to look even better.

But looks alone are NOT the issue. Because the host CPU is tied up rendering perspective-corrected 3D objects in a fluid environment, it can't be used for other things, like special effects and communicating with other computers via a modem or network. This is where 3D accelerators come in. They can render a 3D scene much faster than the processor on your motherboard can. And, by performing this processor-intensive work, they free up the processor on your motherboard to work on other things. As a result, 3D games and applications can run at higher screen resolutions, with more colors and more realistically shaded and textured objects, all at higher frame rates.

The results can be phenomenal. Imagine a first-person game with the speed and response of Doom but with the gorgeous, high resolution detail of Myst. Imagine arcade-quality graphics on your PC. Imagine workstation-level architectural rendering speed.

But the Internet too will benefit from these advances. Imagine using a VRML browser to cruise through staggeringly realistic virtual cyberspace worlds that come alive with liquid-smooth response.

Simply put, 3D accelerators represent a quantum leap in affordable computing technology -- in the words of PC Gamer in their March, 1996 issue, "The potential is exciting. 3D acceleration technology actually gives game developers a way to develop superior games with far less time wasted worrying about hardware. It's a watershed event, comparable to the introduction of sound cards, CD-ROMs, or the original VGA card."

Software Support

Several 3D APIs are coming into maturity, making it easier than ever to write a powerful 3D game or application. Some existing APIs are Intel's 3DR, Criterion's RenderWare, Argonaut's BRender, and OpenGL. Of particular importance is Microsoft's upcoming Direct3D / Reality Lab interface.

Which one is best? It depends on what sort of 3D program you're going to develop. Some are better for simulation and engineering; others are aimed at game developers. Some are attractive because they're cross-platform -- for example, BRender is useful for development on the PC and porting to the Sony PlayStation and other consoles. Many people believe that Direct3D / Reality Lab will become the de facto standard for 3D development under Windows 95.

A Quick Course In 3D Terminology

3D accelerators bring with them a whole new vocabulary. Here are brief definitions of the terms you'll find on a spec sheet or advertisement for a 3D accelerator:

3D API

API stands for application programming interface. It's a collection of routines, or a "cookbook," for writing a program that supports a particular type of hardware or operating system. A 3D API allows a programmer to create 3D software that automatically utilizes a 3D accelerator's powerful features. 3D engines can be very different when you program them at a low level by talking directly to registers and memory, so without an API that supports multiple 3D accelerators, it would be hard for a software developer to port their game or application to a wide range of cards.

A reasonable chipset, like the S3 ViRGE, supports every major 3D API, including Direct3D / Reality Lab, OpenGL, 3DR, RenderWare, and BRender.

Alpha Blending

Alpha blending is a technique which provides for transparent objects. A 3D object on your screen normally has red, green and blue values for each pixel. If the 3D environment allows for an alpha value for each pixel, it is said to have an alpha channel. The alpha value specifies the transparency of the pixel. An object can have different levels of transparency: for example, a clear glass window would have a very high transparency (or, in 3D parlance, a very low alpha value), while a cube of gelatin might have a midrange alpha value. Alpha blending is the process of "combining" two objects on the screen while taking the alpha channels into consideration – for example, a monster half-hidden behind a large cube of strawberry gelatin (hey, it could happen!) would be tinted red and blurred where it was behind the gelatin.
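
In code, the blend itself is just a weighted average. Here's a minimal C sketch of the idea, assuming an 8-bit alpha value where 0 is fully transparent and 255 is fully opaque (the names and value range are for illustration only; a real card does this in hardware):

```c
#include <stdint.h>

/* Blend one colour channel of a source pixel over the destination,
   weighted by the source's alpha value (0 = transparent, 255 = opaque).
   The same formula is applied to red, green, and blue. */
static uint8_t blend_channel(uint8_t src, uint8_t dst, uint8_t alpha)
{
    /* result = alpha * src + (1 - alpha) * dst, in 8-bit fixed point */
    return (uint8_t)((src * alpha + dst * (255 - alpha)) / 255);
}
```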

A good chipset, like the S3 ViRGE, supports alpha blending in hardware. This means that the application or game developer doesn't need to use a slower software routine to make sure that transparent objects are drawn correctly. The developer just defines the transparency of each object, and the hardware takes care of the rest. Because this powerful feature is in hardware, developers will be encouraged to create more realistic environments by adding transparent objects, previously difficult to do.

Depth Cueing and Fogging

Fogging is just what it sounds like: the limits of the virtual world are covered with a haze. The amount of fog, color, and other particulars are set by the programmer.

Depth cueing is reducing an object's color and intensity as a function of its distance from the observer. For example, a bright, shiny red ball might look duller and darker the farther away it is from the observer.

Both of these tools are useful for determining what the "horizon" will look like. They allow the developer to set up a 3D virtual world (for a game, interactive walk-through, and so on) without having to worry about extending it infinitely in all directions, or far-away items appearing as bright points that confuse the user -- features can fade away into the distance for a natural effect.
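
A simple linear depth cue or fog works like this in software; the start and end distances below are illustrative assumptions, not anything a particular chipset specifies:

```c
/* Fade an object's colour toward the fog colour as its distance from
   the observer grows.  Anything nearer than fog_start is untouched;
   anything beyond fog_end is pure fog. */
typedef struct { float r, g, b; } Color;

static Color apply_fog(Color object, Color fog, float distance,
                       float fog_start, float fog_end)
{
    float f = (fog_end - distance) / (fog_end - fog_start);
    if (f < 0.0f) f = 0.0f;   /* beyond fog_end: fully fogged   */
    if (f > 1.0f) f = 1.0f;   /* closer than fog_start: no fog  */

    Color out;
    out.r = object.r * f + fog.r * (1.0f - f);
    out.g = object.g * f + fog.g * (1.0f - f);
    out.b = object.b * f + fog.b * (1.0f - f);
    return out;
}
```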


Shading: Flat, Gouraud, and Texture Mapping

Most 3D objects are made up of polygons, which must be "colored in" in some fashion so they don't look like wire frames. Flat shading is the simplest method and the fastest. A uniform color is assigned to each polygon. This yields unrealistic results, and is best for quick rendering and other environments where speed is more important than detail. Gouraud shading is slightly better. Each point of the polygon is assigned a hue, and a smooth color gradient is drawn on the polygon. This is a quick way of generating lighting effects -- for example, a polygon might be colored with a gradient that goes from bright red to dark red.
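
To make the difference concrete, here's a rough C sketch of the Gouraud idea for a single horizontal span of a polygon: the two ends carry vertex colours, and everything in between is a linear blend. A real rasterizer also walks the polygon edges; this only shows the per-span step.

```c
typedef struct { float r, g, b; } Color;

/* Fill one scanline span with a smooth gradient between the colour at
   its left edge and the colour at its right edge. */
static void gouraud_span(Color left, Color right, int width, Color *out)
{
    for (int x = 0; x < width; x++) {
        float t = (width > 1) ? (float)x / (float)(width - 1) : 0.0f;
        out[x].r = left.r + (right.r - left.r) * t;
        out[x].g = left.g + (right.g - left.g) * t;
        out[x].b = left.b + (right.b - left.b) * t;
    }
}
```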

Texture mapping is the most compelling and realistic method of drawing an object, and the method that most modern games like Doom rely on. A picture (this can be a digitized image, a pattern, or any bitmap image) is mapped onto the polygon. A developer designing a racing game might use this technology to draw realistic rubber tires or to place decals on cars.
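
At its simplest, texture mapping is a lookup: each point on the polygon gets a pair of texture coordinates, and those coordinates pick a pixel (a "texel") out of the bitmap. A minimal sketch, with an assumed flat-array layout for the texture:

```c
#include <stdint.h>

/* Nearest-texel lookup: map texture coordinates (u, v), each in the
   range 0..1, onto a tex_w x tex_h bitmap stored as one flat array. */
static uint32_t sample_texture(const uint32_t *texels,
                               int tex_w, int tex_h,
                               float u, float v)
{
    int x = (int)(u * (float)(tex_w - 1) + 0.5f);
    int y = (int)(v * (float)(tex_h - 1) + 0.5f);
    return texels[y * tex_w + x];
}
```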

Video texture mapping is a particularly exciting form of texture mapping that fits in well with 3D chipsets like the ViRGE, which employ high-speed video processing. A video stream (either live, or from an AVI or MPEG file) is treated like a texture, and is mapped onto a 3D surface.

Perspective Correction

This process is necessary for texture mapped objects to truly look realistic. It’s a mathematical calculation that ensures that a bitmap correctly converges on the portions of the object that are "farther away" from the viewer. This is a processor-intensive task, so it’s vital for a state-of-the-art 3D accelerator to offer this feature. Just as importantly, the 3D accelerator must do this in a robust way in order to preserve realism. The quality of a 3D accelerator’s perspective correction is an excellent overall quality indicator.
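
The standard trick is to interpolate the texture coordinates divided by depth, rather than the coordinates themselves, and divide back at each pixel. A sketch with illustrative variable names (w is the perspective depth at each end of the span):

```c
/* Perspective-correct texture coordinates across a span: interpolate
   u/w, v/w, and 1/w linearly, then recover u and v by dividing.
   Interpolating u and v directly is what makes textures "swim". */
static void perspective_uv(float u0, float v0, float w0,   /* left end     */
                           float u1, float v1, float w1,   /* right end    */
                           float t,                        /* 0..1 across  */
                           float *u, float *v)
{
    float uw = u0 / w0 + (u1 / w1 - u0 / w0) * t;
    float vw = v0 / w0 + (v1 / w1 - v0 / w0) * t;
    float iw = 1.0f / w0 + (1.0f / w1 - 1.0f / w0) * t;
    *u = uw / iw;
    *v = vw / iw;
}
```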

Bilinear and Trilinear Texture Filtering

These are methods of smoothing textures during texture mapping. Trilinear filtering is more sophisticated and also requires MIP-mapping (a set of pre-scaled copies of each texture at progressively lower resolutions).

Anti-Aliasing

Low-res graphics have a tendency to look blocky, especially along diagonal lines. Anti-aliasing slightly blurs these "stairsteps," which fools the eye into thinking it sees a smoother line.
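
In practice the "blur" is a weighted blend: a pixel only partly covered by the line is drawn as a mix of line colour and background. A tiny sketch, assuming the rasterizer supplies a 0-to-1 coverage value:

```c
#include <stdint.h>

/* One colour channel of an anti-aliased edge pixel: blend line colour
   and background colour according to how much of the pixel the line
   actually covers (0.0 = none, 1.0 = all). */
static uint8_t aa_channel(uint8_t line, uint8_t background, float coverage)
{
    return (uint8_t)(line * coverage + background * (1.0f - coverage) + 0.5f);
}
```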

Dithering

The following is only a simple example of a widely used trick of the eye. If you had only two colors, red and yellow, but wished to display orange, you could do so by making a checkerboard pattern of red and yellow pixels. Properly applied, this same technique can produce the appearance of many more colors than are actually being displayed.
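
The checkerboard itself is trivial to generate; a hypothetical helper like this decides which of the two available colours each pixel gets:

```c
#include <stdint.h>

/* Checkerboard dither: alternate the two available colours pixel by
   pixel so the eye averages them into a shade the hardware can't show
   directly (e.g. red and yellow blending toward orange). */
static uint32_t dither_pixel(int x, int y, uint32_t color_a, uint32_t color_b)
{
    return ((x + y) & 1) ? color_a : color_b;
}
```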

Z-buffering

Z-Buffering is a technique for performing "hidden surface removal" – the act of drawing objects so that items which are "behind" others aren’t shown. Performing Z-buffering in hardware frees software applications from having to perform the intensive hidden surface removal algorithm.
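
The per-pixel test is straightforward. Here's a sketch of what the hardware does conceptually, assuming smaller z means closer to the viewer (some implementations use the opposite convention):

```c
#include <stdint.h>

/* Write a pixel only if it is closer to the viewer than whatever is
   already stored at that position in the Z-buffer. */
static void plot_with_zbuffer(uint32_t *frame, float *zbuf, int width,
                              int x, int y, float z, uint32_t color)
{
    int i = y * width + x;
    if (z < zbuf[i]) {       /* nearer than the stored depth? */
        zbuf[i] = z;         /* remember the new depth        */
        frame[i] = color;    /* and draw the pixel            */
    }
}
```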

How big are the polygons?


Flat Shaded: This is the fastest to compute, but it also appears the most primitive. Think of what a large ball made from sheets of plywood would look like.

Gouraud shading: While not nearly as fast for the computer to generate, these polygons have a much smoother look. Most of the current games use this type of shading, games like Tie Fighter, Flight Sim 5.1 and TFX.

Phong Shading: Now the computer really has to work hard to maintain a respectable framerate when using this type of shading, but the results are much more impressive. Phong shading is used in many rendering programs like 3D Studio and others. This is where 3D cards are headed in the near future, and when they are capable of this type of smoothing you can expect simulators to look more like Myst and less like SU27.
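
The extra work comes from re-evaluating the lighting at every pixel, using a surface normal interpolated across the polygon, rather than only at the vertices. A minimal sketch of the per-pixel diffuse term (variable names are illustrative, and the specular highlight is left out for brevity):

```c
#include <math.h>

typedef struct { float x, y, z; } Vec3;

static Vec3 normalize3(Vec3 v)
{
    float len = sqrtf(v.x * v.x + v.y * v.y + v.z * v.z);
    Vec3 n = { v.x / len, v.y / len, v.z / len };
    return n;
}

/* Diffuse light intensity at one pixel, from that pixel's interpolated
   surface normal and the direction toward the light source. */
static float phong_diffuse(Vec3 pixel_normal, Vec3 light_dir)
{
    Vec3 n = normalize3(pixel_normal);
    Vec3 l = normalize3(light_dir);
    float d = n.x * l.x + n.y * l.y + n.z * l.z;
    return d > 0.0f ? d : 0.0f;   /* surfaces facing away get no light */
}
```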


Sub-Pixel Correction

A bitmap is in essence a grid. If the line you are trying to draw does not fall directly on a pixel, it is snapped to the nearest one. This process introduces an error which shows up on the screen as uneven diagonal lines, blocky circles, or flickering dots. The first step in removing this error is to break the pixel "grid" into smaller so-called sub-pixels in memory. Now, instead of going to pixel (x = 32, y = 41), you can go to (x = 32.25, y = 40.75). The second step, now that you have more exact points along the line, is to anti-alias the resulting pixels.
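
One way to picture it: store coordinates in quarter-pixel units instead of whole pixels, so the fractional position survives the edge-stepping math. The 2-bit fraction below is only an example; real chips use more precision.

```c
/* Keep a coordinate like x = 32.25 as 129 quarter-pixels instead of
   snapping it to pixel 32, then round back to a whole pixel only at
   the very last moment. */
static int to_quarter_pixels(float coord)
{
    return (int)(coord * 4.0f + 0.5f);   /* 32.25 -> 129 */
}

static int to_whole_pixel(int quarter_pixels)
{
    return (quarter_pixels + 2) >> 2;    /* round to the nearest pixel */
}
```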


Bi-Linear Filtering

A form of image smoothing in which a pixel's value is based not only on the pixels to each side, but on those above and below as well.
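
Applied to texture mapping, that means blending the four texels surrounding the sample point, weighted by where the sample falls between them. A sketch for one colour channel:

```c
#include <stdint.h>

/* Bilinear blend of the four texels around a sample point.  fx and fy
   (each 0..1) are the fractional position of the sample between the
   left/right and top/bottom texel pairs. */
static uint8_t bilinear(uint8_t top_left, uint8_t top_right,
                        uint8_t bottom_left, uint8_t bottom_right,
                        float fx, float fy)
{
    float top    = top_left    + (top_right    - top_left)    * fx;
    float bottom = bottom_left + (bottom_right - bottom_left) * fx;
    return (uint8_t)(top + (bottom - top) * fy + 0.5f);
}
```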


Quadratic Mapping

This is a feature only a few cards have, but it holds much promise. Most objects are made of polygons; for angular subjects this is fine, but a curved surface can only be approximated, and the closer you get to the curved object the more apparent the approximation becomes. The usual alternative is to add more polygons, but you only have a certain number of polygons to work with if you are going to maintain a respectable frame rate. Quadratic mapping, on the other hand, replaces the polygons with 9 control points; these points serve as a guide by which an image is bent or curved to the desired shape.
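
One way to read "9 control points" is a biquadratic patch: a 3 x 3 grid of points that the surface is bent through. The sketch below evaluates such a patch at a parameter pair (u, v); it illustrates the general idea rather than any particular card's scheme.

```c
typedef struct { float x, y, z; } Vec3;

/* Quadratic Bezier blend of three values. */
static float bezier2(float a, float b, float c, float t)
{
    float s = 1.0f - t;
    return s * s * a + 2.0f * s * t * b + t * t * c;
}

/* Evaluate a 3x3 control-point patch at (u, v), each in 0..1:
   first blend along each row, then blend the three row results. */
static Vec3 quad_patch(const Vec3 p[3][3], float u, float v)
{
    Vec3 row[3], out;
    for (int i = 0; i < 3; i++) {
        row[i].x = bezier2(p[i][0].x, p[i][1].x, p[i][2].x, u);
        row[i].y = bezier2(p[i][0].y, p[i][1].y, p[i][2].y, u);
        row[i].z = bezier2(p[i][0].z, p[i][1].z, p[i][2].z, u);
    }
    out.x = bezier2(row[0].x, row[1].x, row[2].x, v);
    out.y = bezier2(row[0].y, row[1].y, row[2].y, v);
    out.z = bezier2(row[0].z, row[1].z, row[2].z, v);
    return out;
}
```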


Interpolation

A procedure in which, after a low-res image (e.g. 320 x 200) is scaled to a higher resolution such as 640 x 480, the resulting gaps between the pixels are filled in with pixels whose values are based on the surrounding pixels. This is most commonly found on cards that have real-time video playback. While this feature is great for seeing low-res animations and movies full screen, be sure that the card can support 640 x 480 animations in real time as well.
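
Along a single scanline, the gap-filling amounts to averaging neighbours. A sketch for one grayscale row doubled in width (a real scaler does the same vertically and per colour channel):

```c
#include <stdint.h>

/* Double a scanline's width: even output pixels copy a source pixel,
   odd ones are filled with the average of the two source pixels that
   surround them. */
static void upscale_row_2x(const uint8_t *src, int src_w, uint8_t *dst)
{
    for (int x = 0; x < src_w; x++) {
        dst[2 * x] = src[x];
        dst[2 * x + 1] = (x + 1 < src_w)
            ? (uint8_t)((src[x] + src[x + 1]) / 2)
            : src[x];                    /* last pixel: just repeat it */
    }
}
```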


Transparency Mapping

This capability is important in that it allows a programmer to "cut" holes or windows into an object without having to actually model any additional faces.
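
A common way to do this is with a colour key (or a zero-alpha texel): any texel matching the key simply isn't drawn, leaving a hole. The key value below is arbitrary, purely for illustration.

```c
#include <stdint.h>

/* Skip "keyed" texels entirely, punching a transparent hole in the
   surface without any extra geometry. */
static void plot_keyed(uint32_t *frame, int width, int x, int y,
                       uint32_t texel, uint32_t key_color)
{
    if (texel != key_color)
        frame[y * width + x] = texel;
}
```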


Reflection Mapping

The ability to map an image onto an object to simulate reflection. This makes glass look much more convincing.
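
Conceptually, the reflected view direction is what picks the texel out of the environment image. The sketch below uses the standard reflection formula and the classic sphere-map lookup; it is offered as an illustration, not as what any specific card implements.

```c
#include <math.h>

typedef struct { float x, y, z; } Vec3;

/* Reflect the (unit) view direction off the (unit) surface normal:
   r = view - 2 * (view . normal) * normal */
static Vec3 reflect(Vec3 view, Vec3 normal)
{
    float d = 2.0f * (view.x * normal.x + view.y * normal.y + view.z * normal.z);
    Vec3 r = { view.x - d * normal.x,
               view.y - d * normal.y,
               view.z - d * normal.z };
    return r;
}

/* Turn the reflected vector into sphere-map texture coordinates. */
static void sphere_map_uv(Vec3 r, float *u, float *v)
{
    float m = 2.0f * sqrtf(r.x * r.x + r.y * r.y + (r.z + 1.0f) * (r.z + 1.0f));
    *u = r.x / m + 0.5f;
    *v = r.y / m + 0.5f;
}
```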

 

© 1997 - 2000 COMBATSIM.COM, Inc. All Rights Reserved.
Last updated on June 30, 1999.
This article was originally published on COMBATSIM.COM in 1998.
