|S3's Transform and Lighting Engine
White Paper Courtesy of S3
While digging about on the S3 site one day I found their white paper on the Savage2000 transform and lighting engine. It's well written and very informative. Most of the information here applies to any geometry processor, and therefore also to the NVIDIA GeForce 256.
Hardware Transformation And Lighting
The introduction of hardware transform and lighting into mainstream 3D accelerators is the most significant upgrade in consumer 3D capabilities since 1996. It will revolutionize application design and bring higher quality, higher performance 3D graphics to every PC user.
What is hardware transformation and lighting?
The process of generating a 3D scene from supplied data (a 3D 'world') is carried out by two functional units. The first functional unit is a 'geometry engine,' which converts (transforms) the 3D coordinates of the world into 2D coordinates. This has to be done because the monitor on which the 3D world is to be viewed is a 2D device, which cannot directly accept 3D coordinates. The second functional unit is a 'rasterization engine' which performs the task of drawing the (now-2D) world onto the screen.
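The core of that 3D-to-2D conversion is the perspective divide: points farther from the viewer are drawn closer to the centre of the screen. A minimal sketch in Python (the focal distance and the sample point below are illustrative, not anything specific to the Savage2000):

```python
# Sketch of the geometry engine's basic job: project a 3D point
# (already expressed relative to the camera) onto a 2D screen plane.

def project(x, y, z, d=1.0):
    """Perspective-project a 3D point to 2D by dividing by depth z."""
    # d is the distance from the eye to the screen plane; larger z
    # (farther away) pulls the point toward the screen centre.
    return (d * x / z, d * y / z)

# A point 4 units in front of the camera, 2 to the right and 1 up,
# lands at (0.5, 0.25) on a screen plane at distance 1.
print(project(2.0, 1.0, 4.0))  # (0.5, 0.25)
```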
In addition to transforming 3D coordinates into 2D coordinates, the geometry engine can be used to generate lighting information for the world - storing information about light sources and calculating the amount of light each source casts on any given point in the world.
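The per-point lighting calculation the paper alludes to is, at its simplest, Lambert's law: the light a source casts on a point falls off with the cosine of the angle between the surface normal and the direction to the light. A hedged sketch (function names and sample vectors are illustrative, not S3's API):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = math.sqrt(dot(v, v))
    return tuple(x / length for x in v)

def diffuse_intensity(normal, to_light):
    """Clamped N.L: full brightness facing the light, zero facing away."""
    n = normalize(normal)
    l = normalize(to_light)
    return max(0.0, dot(n, l))

# Surface facing straight up, light directly overhead -> intensity 1.0
print(diffuse_intensity((0, 1, 0), (0, 1, 0)))  # 1.0
# Light 60 degrees off the normal -> intensity cos(60) = 0.5
print(diffuse_intensity((0, 1, 0), (math.sqrt(3) / 2, 0.5, 0)))  # 0.5
```

A hardware geometry engine evaluates terms like this for every vertex and every enabled light, which is exactly the work being lifted off the CPU.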
While rasterization engines (part of a standard 3D video card) have become ubiquitous on consumer PCs over the last few years, only the high-end workstation markets have had access to hardware geometry calculations. Now, with the announcement of its Savage2000 accelerator featuring a completely integrated geometry engine, S3 is bringing hardware transformation and lighting technology to the mainstream PC market for the very first time.
S3's geometry engine
A tightly integrated design, S3's highly efficient geometry engine is optimized to work directly with S3's advanced rasterizing engine to provide the highest possible performance.
First, the 3D data making up the scene to be rendered is accepted - this can be directly from the CPU or, more efficiently, from AGP/video memory, so that the CPU is never involved.
This data then passes through a transformation stage, in which it is converted from the original 3D scene data (known as 'object space') into 3D data as seen in the view to be rendered ('view space').
In view space each point can be lit using the stored lighting data, before the data is transformed into the position of the user's eye - the correct position for viewing (known, naturally, as 'eye space'). From here it is converted to 2D screen coordinates and corrected for distance (an operation known as perspective transformation).
Finally, the resulting data is clipped to the screen - eliminating all data that is off screen, to avoid wasting effort in later rendering. It is the combination of these several transformation stages into a pipelined hardware architecture that delivers the benefits of hardware transformation and lighting.
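The stages above can be sketched end to end under some heavy simplifications: the view transform below is just a camera translation, and clipping is a bounds test on the projected coordinates. A real pipeline (S3's included) uses full 4x4 matrices and clips against view-frustum planes; everything here is illustrative.

```python
def to_view_space(point, camera):
    """Object space -> view space: express the point relative to the camera."""
    return tuple(p - c for p, c in zip(point, camera))

def perspective(point, d=1.0):
    """View space -> 2D screen coordinates via the perspective divide."""
    x, y, z = point
    return (d * x / z, d * y / z)

def clip(points_2d):
    """Discard points falling outside the visible screen square."""
    return [p for p in points_2d
            if -1.0 <= p[0] <= 1.0 and -1.0 <= p[1] <= 1.0]

camera = (0.0, 0.0, -5.0)                       # illustrative camera position
world = [(0.0, 0.0, 0.0), (20.0, 0.0, 0.0)]     # one visible point, one far off-screen
screen = clip([perspective(to_view_space(p, camera)) for p in world])
print(screen)  # only the on-screen point survives: [(0.0, 0.0)]
```

Because each stage feeds directly into the next, the whole chain maps naturally onto a hardware pipeline, with different vertices occupying different stages simultaneously.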
The OpenGL lighting model
S3's hardware lighting model is based on the OpenGL lighting model (which, with a very small number of changes - also supported in hardware - is equivalent to the Direct3D lighting model).
There is a bewildering number of parameters that can be used to control each individual light (S3 supports the OpenGL standard of eight simultaneous lights), but OpenGL lights fall into three basic types: directional lights, point lights, and spot lights.
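The three types differ in how a light's direction and intensity are computed at each point, following the OpenGL attenuation and spotlight formulas. A hedged sketch - the coefficients, positions, and helper names below are illustrative, not S3 defaults:

```python
import math

def length(v):
    return math.sqrt(sum(x * x for x in v))

def normalize(v):
    l = length(v)
    return tuple(x / l for x in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def directional_light(direction):
    """Same direction everywhere, no attenuation: intensity is constant."""
    return 1.0, normalize(direction)

def point_light(light_pos, point, kc=1.0, kl=0.0, kq=0.0):
    """Direction depends on position; OpenGL-style distance attenuation."""
    to_light = tuple(l - p for l, p in zip(light_pos, point))
    d = length(to_light)
    attenuation = 1.0 / (kc + kl * d + kq * d * d)
    return attenuation, normalize(to_light)

def spot_light(light_pos, spot_dir, cutoff_deg, exponent, point, **atten):
    """A point light restricted to a cone, with exponent-controlled falloff."""
    attenuation, to_light = point_light(light_pos, point, **atten)
    # Angle between the spot axis and the direction light -> point.
    cos_angle = dot(normalize(spot_dir), tuple(-x for x in to_light))
    if cos_angle < math.cos(math.radians(cutoff_deg)):
        return 0.0, to_light            # outside the cone: unlit
    return attenuation * cos_angle ** exponent, to_light
```

A directional light (the sun) skips the per-point distance work entirely, a point light (a bare bulb) adds it, and a spot light (a headlamp) adds the cone test on top - which is why the three types have noticeably different per-vertex costs in hardware.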
Go to Part II
Copyright © 1997 - 2000 COMBATSIM.COM, INC. All Rights Reserved.
Last Updated September 15th, 1999