Object Types

The most fundamental type of object is the DirectSound object, which represents the sound card itself. The IDirectSound Component Object Model (COM) interface controls the DirectSound object; the methods of this interface allow the application to change the characteristics of the card.
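For example, a minimal sketch of creating the DirectSound object for the default sound device and setting its cooperative level might look like the following; hwnd is assumed to be the application's main window handle, supplied by the application's own window-creation code.

#include <windows.h>
#include <dsound.h>

LPDIRECTSOUND g_pDirectSound = NULL;

HRESULT InitDirectSound(HWND hwnd)
{
    // Create the DirectSound object for the default playback device.
    HRESULT hr = DirectSoundCreate(NULL, &g_pDirectSound, NULL);
    if (FAILED(hr))
        return hr;

    // The cooperative level determines how the application shares the
    // sound device with other applications.
    return g_pDirectSound->SetCooperativeLevel(hwnd, DSSCL_PRIORITY);
}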

The second type of object is a sound buffer. DirectSound uses primary and secondary sound buffers. Primary sound buffers represent the audio data that is actually heard by the user, while secondary sound buffers represent individual source sounds. DirectSound provides controls for both primary and secondary sound buffers through the IDirectSoundBuffer interface.

Primary buffers control sound characteristics such as output format and overall volume. Your application can also write directly to the primary buffer; in that case, however, the DirectSound mixing and hardware-acceleration features are not available. In addition, writing directly to the primary buffer can interfere with other DirectSound applications. When possible, your application should write to secondary buffers instead of the primary buffer. Secondary buffers allow the system to emulate features that might not be present in the hardware, and they allow an application to share the sound card with other applications in the system.
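As a sketch of setting the output format, the following fragment (building on the headers and DirectSound object from the earlier sketch) obtains the primary buffer and sets the format to 22 kHz, 16-bit stereo; it assumes pDS was created with at least the DSSCL_PRIORITY cooperative level, which is required to change the primary buffer format.

HRESULT SetPrimaryFormat(LPDIRECTSOUND pDS)
{
    // Describe the primary buffer; it takes no size or format here.
    DSBUFFERDESC dsbd;
    ZeroMemory(&dsbd, sizeof(dsbd));
    dsbd.dwSize  = sizeof(dsbd);
    dsbd.dwFlags = DSBCAPS_PRIMARYBUFFER;

    LPDIRECTSOUNDBUFFER pPrimary = NULL;
    HRESULT hr = pDS->CreateSoundBuffer(&dsbd, &pPrimary, NULL);
    if (FAILED(hr))
        return hr;

    // Set the output format to 22 kHz, 16-bit stereo PCM.
    WAVEFORMATEX wfx;
    ZeroMemory(&wfx, sizeof(wfx));
    wfx.wFormatTag      = WAVE_FORMAT_PCM;
    wfx.nChannels       = 2;
    wfx.nSamplesPerSec  = 22050;
    wfx.wBitsPerSample  = 16;
    wfx.nBlockAlign     = wfx.nChannels * wfx.wBitsPerSample / 8;
    wfx.nAvgBytesPerSec = wfx.nSamplesPerSec * wfx.nBlockAlign;

    hr = pPrimary->SetFormat(&wfx);
    pPrimary->Release();
    return hr;
}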

Secondary buffers represent individual sound sources that an application uses. An application can play or stop each buffer independently. DirectSound mixes all playing buffers into the primary buffer and then outputs the primary buffer to the sound device. Secondary buffers can reside either in hardware memory on the sound card or in system memory; hardware buffers are mixed by the sound device itself, without any system-processing overhead.
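A sketch of creating one such secondary buffer follows; pwfx is assumed to be a WAVEFORMATEX filled in by the application (for example, as in the previous fragment), and the buffer is sized here to hold one second of audio in that format.

HRESULT CreateSecondaryBuffer(LPDIRECTSOUND pDS, WAVEFORMATEX *pwfx,
                              LPDIRECTSOUNDBUFFER *ppBuffer)
{
    // Describe a secondary buffer large enough for one second of audio
    // in the given format, with per-buffer volume control.
    DSBUFFERDESC dsbd;
    ZeroMemory(&dsbd, sizeof(dsbd));
    dsbd.dwSize        = sizeof(dsbd);
    dsbd.dwFlags       = DSBCAPS_CTRLVOLUME;
    dsbd.dwBufferBytes = pwfx->nAvgBytesPerSec;
    dsbd.lpwfxFormat   = pwfx;

    return pDS->CreateSoundBuffer(&dsbd, ppBuffer, NULL);
}

After the application writes audio data into the buffer, IDirectSoundBuffer::Play starts playback and IDirectSoundBuffer::Stop halts it; passing DSBPLAY_LOOPING to Play makes the buffer repeat until it is stopped.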

Secondary sound buffers can be either static or streaming buffers. A static sound buffer contains an entire sound. A streaming sound buffer contains only part of a sound at any given time, so your application must continually write new data to the buffer while it is playing. DirectSound attempts to store static buffers in sound memory located on the sound hardware, if such memory is available. Buffers stored on the sound hardware do not consume system processing time when they are played, because the mixing is done in hardware. Reusable sounds, such as gunshots, are perfect candidates for static buffers.
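For example, a static buffer for a gunshot sound might be requested as sketched below; the DSBCAPS_STATIC flag asks DirectSound to place the buffer in on-card sound memory when it is available. The variables pDS, cbGunshotData, and wfxGunshot are assumed to be supplied by the application.

// Request a static buffer for a short, reusable sound.
DSBUFFERDESC dsbd;
ZeroMemory(&dsbd, sizeof(dsbd));
dsbd.dwSize        = sizeof(dsbd);
dsbd.dwFlags       = DSBCAPS_STATIC | DSBCAPS_CTRLVOLUME;
dsbd.dwBufferBytes = cbGunshotData;   // size of the entire sound (assumed)
dsbd.lpwfxFormat   = &wfxGunshot;     // format of the sound data (assumed)

LPDIRECTSOUNDBUFFER pGunshot = NULL;
HRESULT hr = pDS->CreateSoundBuffer(&dsbd, &pGunshot, NULL);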

Your application works with two significant positions within a sound buffer: the current play position and the current write position (sometimes called the play cursor and the write cursor). The current play position indicates the location in the buffer where the sound is currently being played. The current write position indicates the location at which you can safely begin changing the data in the buffer. The following illustration shows the relationship between these two positions.

Although DirectSound buffers are conceptually circular, they are implemented by using contiguous, linear memory. When the current play position reaches the end of the buffer, it wraps back to the beginning of the buffer.
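The following fragment sketches how a streaming application might use these positions: it queries the play and write cursors with IDirectSoundBuffer::GetCurrentPosition and then locks a region of the buffer for writing. Because the buffer is circular, Lock can return two pointers, one for the block that runs to the end of the buffer and one for the portion that wraps to the beginning. The variables dwWriteOffset, pNewData, and cbNewData are assumed to come from the application's own streaming logic.

HRESULT WriteToBuffer(LPDIRECTSOUNDBUFFER pBuffer, DWORD dwWriteOffset,
                      const BYTE *pNewData, DWORD cbNewData)
{
    // The play cursor marks where the buffer is being played; data should
    // be changed only at or beyond the write cursor. A streaming loop
    // would use these values to decide where and how much to write.
    DWORD dwPlayPos = 0, dwWritePos = 0;
    HRESULT hr = pBuffer->GetCurrentPosition(&dwPlayPos, &dwWritePos);
    if (FAILED(hr))
        return hr;

    // Lock the region to be updated. If it crosses the end of the buffer,
    // the second pointer covers the wrapped portion at the beginning.
    LPVOID pBlock1 = NULL, pBlock2 = NULL;
    DWORD  cbBlock1 = 0, cbBlock2 = 0;
    hr = pBuffer->Lock(dwWriteOffset, cbNewData,
                       &pBlock1, &cbBlock1, &pBlock2, &cbBlock2, 0);
    if (FAILED(hr))
        return hr;

    CopyMemory(pBlock1, pNewData, cbBlock1);
    if (pBlock2 != NULL)
        CopyMemory(pBlock2, pNewData + cbBlock1, cbBlock2);

    return pBuffer->Unlock(pBlock1, cbBlock1, pBlock2, cbBlock2);
}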

This section discusses the DirectSound object, the DirectSoundBuffer object, and how your applications can use these objects.

· The DirectSound Object

· The DirectSoundBuffer Object