Sound Advice: Aureal A3D
An Interview with Skip McIlvaine, Developer Relations Manager, Aureal Inc.
Not long ago I was reading something about A3D version 2, and I realized that I had many unanswered questions about A3D, EAX, and sound APIs in general.
Sound APIs, like video APIs, allow software to communicate with your hardware, in this case your sound board. These interfaces vary in their abilities and efficiency, but the goal is the same: greater immersion in the game of choice. The two most popular positional sound APIs used by game designers are EAX and A3D.
Sound hardware has been evolving even more rapidly than video hardware, so I decided to contact Aureal with my questions and try to achieve some clarity. Once I received Aureal's answers, I forwarded them to Creative Labs for a response.
Aureal seems to have the more sophisticated technology, but some game designers choose to support only one API or the other. With this in mind we have also fired off a question to the producers of a host of upcoming simulation products: which do they prefer and why? We'll post the results of that query as soon as we have the responses back.
Q: Thanks for taking the time to talk to us. To begin, let's talk in general about sound APIs (application programming interfaces). What are they and why do we need them?
A: Thank you. It's my pleasure.
Sound APIs are high-level languages designed to make controlling sound cards easier and more "English-like." Chips and computers process binary information; in other words, they process 1s and 0s. This is not a natural way for humans to communicate. Defining a set of "English-like" commands that each translate into millions of 1s and 0s makes programming the chip much easier.
A sound API is basically a dictionary of commands that both computers and humans can use to "communicate." An API aims to expose a large set of operations through a single simple "phrase." Reducing millions of instructions to a small set of "phrases" greatly reduces the work required of a programmer.
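The abstraction described above can be sketched in a few lines of code. This is a toy illustration only: every name here (the class, the "registers," the call) is invented for the example, and no real sound-card interface looks exactly like this.

```python
# Toy sketch of a "sound API": one friendly, English-like call stands in
# for many low-level hardware commands the programmer never has to see.
class ToySoundAPI:
    def __init__(self):
        self.command_log = []  # stands in for the raw traffic sent to the chip

    def _write_register(self, reg, value):
        # A real driver would poke a device register here; we just record
        # the low-level step the API hides from the caller.
        self.command_log.append((reg, value))

    def play_at(self, sound_id, x, y, z):
        """The single 'phrase' the game programmer actually writes."""
        self._write_register("SRC_SELECT", sound_id)
        self._write_register("POS_X", x)
        self._write_register("POS_Y", y)
        self._write_register("POS_Z", z)
        self._write_register("TRIGGER", 1)
        return len(self.command_log)  # how many low-level steps were hidden

api = ToySoundAPI()
steps = api.play_at(sound_id=7, x=1.0, y=0.0, z=-2.5)
print(steps)  # one high-level call expanded into 5 low-level commands
```

The game programmer writes `play_at(...)` once; the API does the bookkeeping. A real API does the same thing at vastly larger scale.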
Q: How many different sound APIs are out there?
A: The ones I am familiar with are: A3D by Aureal, DirectSound(3D) by Microsoft, Miles 5.0 and Miles 5.5 by RAD, Q3DMX(?) by QSound, IAS by EAR, EAX from Creative, and MacroFX by Sensaura. The first four are complete sound APIs and engines; the others are simply extensions to other APIs, but are nonetheless APIs.
Q: Tell us about the relationship of DirectSound and DirectSound3D to A3D.
A: In the case of A3D 1.0, DirectSound provides the base API and engine, and A3D 1.0 is simply an extension to it that provides added functionality. A3D 1.0 has the same relationship to DirectSound3D that EAX has: it's an extension of the DirectSound3D functionality.
In the case of A3D 2.0, things are completely different. A3D 2.0 is a complete sound engine (not simply an extension) that does not require another API or engine to operate, and it is designed to be cross-platform portable. It still sends DirectSound3D calls to the hardware where that is the optimal path: when the user does not have an Aureal sound card but does have another hardware-accelerated DirectSound3D PCI sound card under Windows. The programmer only has to write to one API, and the user still gets the best possible experience according to his or her sound card's capabilities.
A3D 2.0 in this case also augments the DirectSound3D support with the addition of Aureal Wavetracing-based occlusions (the effect upon sounds as they are absorbed and blocked by objects and walls), advanced resource management and improved acoustic effects.
On Aureal Vortex 2 hardware all the features of A3D 2.0 are turned on and run optimally, but in the absence of such hardware, improved audio performance, 3D positioning and other features are still provided to the best of the abilities of the sound card.
So, while A3D 2.0 doesn't require DirectSound3D it's as if the programmer coded to DirectSound3D (with the added bonus of A3D's additional features). It's our way of helping unclutter the sound API picture and still provide more functionality to all gamers.
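The fallback behavior described in the last few answers amounts to capability-based dispatch: one API inspects the installed hardware and routes calls down the best available path. The sketch below is an illustration of that idea only; the backend names and the selection order are assumptions for the example, not Aureal's actual implementation.

```python
# Hypothetical sketch of one-API-many-backends dispatch: the engine picks
# the best rendering path the user's sound card supports.
def pick_audio_backend(capabilities):
    """Return the best backend for a given set of hardware features."""
    if "a3d2_wavetracing" in capabilities:
        return "A3D 2.0 (full Wavetracing)"   # e.g. Aureal Vortex 2 hardware
    if "ds3d_hardware" in capabilities:
        return "DirectSound3D (hardware)"     # other accelerated PCI cards
    return "host-processed 3D"                # older, non-accelerated cards

print(pick_audio_backend({"a3d2_wavetracing", "ds3d_hardware"}))
print(pick_audio_backend({"ds3d_hardware"}))
print(pick_audio_backend(set()))
```

The point of the design is that the game code above the dispatch line never changes; only the rendering path underneath it does.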
Q: Please define A3D for us. What is it and what does it accomplish?
A: A3D stands for Aureal 3D. What it accomplishes depends on whether you are the developer or the gamer. For the developer, A3D 2.0 allows you to code to one API and support most, and eventually all, of the features of all the other APIs. It provides the most robust and complete audio functionality set "under one roof." It simplifies the work necessary to support all of these features and ensures that they run as well as the end-user's sound card allows.
If you're a gamer it provides support for all the cool features the developer wants you to experience when listening to their game. A short list of these features includes things like:
- the ability to attach sounds and sound properties to objects in the game and have them play in accordance with the interactive, ever-changing world created by the game
- hardware accelerated 3D positional audio (in the case where the gamer's sound card supports hardware acceleration; host processing is used on older non-accelerated cards)
- highly advanced resource management (which helps to keep the game running as fast as possible without dropping sounds)
- realistic and dynamic acoustic reflection effects, as sound waves bounce off objects in the game; this helps to tell you where objects (including the listener) are in relation to other objects and in relation to the structure of the environment, as well as indicate what the objects and surfaces in this space are made of (carpet, wood, brick, steel, concrete, glass, water, etc.) (in the case of Aureal Vortex 2 and future hardware)
- realistic and dynamic occlusion effects; the effect of sound waves being blocked or absorbed by other objects, surfaces and structures in the game; this tells you exactly where things are in the world: behind a door, around the corner, outside your car, etc. and what they are made of
- equalizations and environmental effects; as an example: under water, high frequencies are absorbed, so sounds are more muffled and quieter; this sort of effect better describes the environment you are listening in and the environment from which the sounds are emanating
- the ability to render acoustic effects like Doppler shifts, pitch shifts, distance modeling, etc.
- a whole heck of a lot more
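Two of the effects listed above, distance modeling and Doppler shift, reduce to short textbook formulas. The sketch below uses the standard inverse-distance gain model and the speed of sound in air (roughly 343 m/s); these are general physics, not A3D's actual rendering code.

```python
# Textbook sketch of two acoustic effects named above: distance
# attenuation and Doppler shift (stationary listener, approaching source).
SPEED_OF_SOUND = 343.0  # metres per second, in air at about 20 degrees C

def distance_gain(distance_m, reference_m=1.0):
    """Inverse-distance attenuation: gain is 1.0 at the reference distance."""
    return reference_m / max(distance_m, reference_m)

def doppler_frequency(freq_hz, source_speed_ms):
    """Observed frequency for a source moving toward a stationary listener."""
    return freq_hz * SPEED_OF_SOUND / (SPEED_OF_SOUND - source_speed_ms)

print(round(distance_gain(10.0), 2))             # quieter at 10 m than at 1 m
print(round(doppler_frequency(440.0, 34.3), 1))  # pitch rises as the source approaches
```

An audio engine evaluates formulas like these every frame, per sound source, which is why offloading them to dedicated hardware matters.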
Go to Part II
Copyright © 1997 - 2000 COMBATSIM.COM, INC. All Rights Reserved.
Last Updated June 26th, 1999