Playback Overview

The DirectSound buffer object represents a buffer containing sound data. Buffer objects are used to start, stop, and pause sound playback, as well as to set attributes such as frequency and format.
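As a minimal sketch of that control surface, the following fragment starts, retunes, and stops a secondary buffer (secondary buffers are described below). The pointer pSecondaryBuffer is an assumption for illustration; it would come from IDirectSound8::CreateSoundBuffer, and the buffer is assumed to have been created with the DSBCAPS_CTRLFREQUENCY flag so that SetFrequency is permitted.

    // Assumes <dsound.h> is included and pSecondaryBuffer is an
    // IDirectSoundBuffer8* created elsewhere with DSBCAPS_CTRLFREQUENCY.
    HRESULT hr;

    // Start playback from the current play cursor; loop until stopped.
    hr = pSecondaryBuffer->Play(0, 0, DSBPLAY_LOOPING);

    // Change the playback frequency to 44.1 kHz; only the rate at which
    // samples are consumed changes, not the data in the buffer.
    if (SUCCEEDED(hr))
        hr = pSecondaryBuffer->SetFrequency(44100);

    // Stop playback. The play cursor stays where it is, so a later call
    // to Play resumes from the same position; this is how a pause is done.
    if (SUCCEEDED(hr))
        hr = pSecondaryBuffer->Stop();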

The primary sound buffer holds the audio that the listener will hear. Secondary sound buffers each contain a single sound or stream of audio. DirectSound automatically creates a primary buffer, but it is the application's responsibility to create secondary buffers. When sounds in secondary buffers are played, DirectSound mixes them in the primary buffer and sends them to the output device. Only the available processing time limits the number of buffers that DirectSound can mix.
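The following sketch shows one way an application might create a secondary buffer for a 22-kHz, 16-bit stereo sound. It assumes pDirectSound is an initialized IDirectSound8 pointer; the control flags and the two-second buffer size are illustrative choices, not requirements.

    // Assumes pDirectSound is an initialized IDirectSound8*.
    WAVEFORMATEX wfx = {0};
    wfx.wFormatTag      = WAVE_FORMAT_PCM;
    wfx.nChannels       = 2;
    wfx.nSamplesPerSec  = 22050;
    wfx.wBitsPerSample  = 16;
    wfx.nBlockAlign     = wfx.nChannels * wfx.wBitsPerSample / 8;
    wfx.nAvgBytesPerSec = wfx.nSamplesPerSec * wfx.nBlockAlign;

    DSBUFFERDESC dsbd = {0};
    dsbd.dwSize        = sizeof(DSBUFFERDESC);
    dsbd.dwFlags       = DSBCAPS_CTRLVOLUME | DSBCAPS_CTRLFREQUENCY
                       | DSBCAPS_CTRLPOSITIONNOTIFY;
    dsbd.dwBufferBytes = wfx.nAvgBytesPerSec * 2;   // two seconds of audio
    dsbd.lpwfxFormat   = &wfx;

    // CreateSoundBuffer returns an IDirectSoundBuffer; query it for the
    // IDirectSoundBuffer8 interface to use the DirectX 8 methods.
    IDirectSoundBuffer*  pBuffer  = NULL;
    IDirectSoundBuffer8* pBuffer8 = NULL;
    HRESULT hr = pDirectSound->CreateSoundBuffer(&dsbd, &pBuffer, NULL);
    if (SUCCEEDED(hr))
    {
        hr = pBuffer->QueryInterface(IID_IDirectSoundBuffer8, (LPVOID*)&pBuffer8);
        pBuffer->Release();
    }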

It is your responsibility to stream data in the correct format into the secondary sound buffers. DirectSound does not include methods for parsing a sound file or a wave resource. However, there is code in the accompanying sample applications that will help you with this task. For more information, see Using Wave Files and Reading Wave Data from a Resource.
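As a sketch of how data gets into a secondary buffer once it has been parsed, the following fragment copies PCM samples into the buffer through Lock and Unlock. The names pBuffer8, pWaveData, and cbWaveData are assumptions for illustration; the wave data itself must already be in the buffer's format.

    // Assumes pBuffer8 is an IDirectSoundBuffer8*, and pWaveData points to
    // cbWaveData bytes of PCM data in the buffer's format.
    LPVOID pv1 = NULL, pv2 = NULL;
    DWORD  cb1 = 0,    cb2 = 0;

    HRESULT hr = pBuffer8->Lock(0, cbWaveData, &pv1, &cb1, &pv2, &cb2, 0);
    if (hr == DSERR_BUFFERLOST)      // buffer memory was lost; restore and retry
    {
        pBuffer8->Restore();
        hr = pBuffer8->Lock(0, cbWaveData, &pv1, &cb1, &pv2, &cb2, 0);
    }
    if (SUCCEEDED(hr))
    {
        // The locked region can wrap around the end of the buffer,
        // so two pointers may be returned.
        memcpy(pv1, pWaveData, cb1);
        if (pv2 != NULL)
            memcpy(pv2, (BYTE*)pWaveData + cb1, cb2);

        pBuffer8->Unlock(pv1, cb1, pv2, cb2);
    }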

Depending on the card type, DirectSound buffers can exist in hardware as on-board RAM, wave-table memory, a direct memory access (DMA) channel, or a virtual buffer (for an I/O port-based audio card). Where there is no hardware implementation of a DirectSound buffer, it is emulated in system memory.
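An application can check what hardware resources are available before deciding where a buffer should live. The following is a hedged sketch using IDirectSound8::GetCaps; what counts as "enough" free hardware capacity is an application decision, not something the API defines.

    // Assumes pDirectSound is an initialized IDirectSound8*.
    DSCAPS dscaps = {0};
    dscaps.dwSize = sizeof(DSCAPS);

    HRESULT hr = pDirectSound->GetCaps(&dscaps);
    if (SUCCEEDED(hr))
    {
        // dwFreeHwMixingAllBuffers: hardware mixing slots not yet allocated.
        // dwFreeHwMemBytes: free sound-card memory for hardware buffers.
        BOOL bUseHardware = (dscaps.dwFreeHwMixingAllBuffers > 0);

        DWORD dwLocationFlag = bUseHardware ? DSBCAPS_LOCHARDWARE
                                            : DSBCAPS_LOCSOFTWARE;
        // Add dwLocationFlag to DSBUFFERDESC.dwFlags to force the location,
        // or omit both flags to let DirectSound choose.
    }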

Multiple applications can create DirectSound objects for the same sound device. When the input focus changes between applications, the audio output automatically switches from one application's streams to another's. As a result, applications do not have to repeatedly play and stop their buffers when the input focus changes.
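A short sketch of the initialization behind this focus behavior follows, assuming hwnd is the application's top-level window; DSSCL_PRIORITY is a typical cooperative level, not the only valid choice.

    // Assumes hwnd is the application's top-level window handle.
    IDirectSound8* pDirectSound = NULL;
    HRESULT hr = DirectSoundCreate8(NULL, &pDirectSound, NULL);   // default device

    // DSSCL_PRIORITY is the usual cooperative level for applications that
    // need to set the primary buffer format.
    if (SUCCEEDED(hr))
        hr = pDirectSound->SetCooperativeLevel(hwnd, DSSCL_PRIORITY);

    // A secondary buffer created with DSBCAPS_GLOBALFOCUS in its DSBUFFERDESC
    // continues to play even when another application has the input focus.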

To know when a streaming buffer is ready to receive new data, or when any buffer has stopped playing, an application can use the IDirectSoundNotify interface to set up notification positions. When the play cursor reaches one of these positions, an event is signaled. Alternatively, an application can poll the position of the play cursor at regular intervals.
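As a sketch of the notification approach, the following sets two positions on a buffer that was created with the DSBCAPS_CTRLPOSITIONNOTIFY flag: the halfway point of a hypothetical 64-KB buffer, and a notification for when playback stops. The event handles would typically be waited on with WaitForMultipleObjects in a streaming thread.

    // Assumes pBuffer8 was created with DSBCAPS_CTRLPOSITIONNOTIFY
    // and is 65,536 bytes long.
    IDirectSoundNotify* pNotify = NULL;
    HRESULT hr = pBuffer8->QueryInterface(IID_IDirectSoundNotify, (LPVOID*)&pNotify);
    if (SUCCEEDED(hr))
    {
        HANDLE hMidEvent  = CreateEvent(NULL, FALSE, FALSE, NULL);
        HANDLE hStopEvent = CreateEvent(NULL, FALSE, FALSE, NULL);

        DSBPOSITIONNOTIFY notify[2];
        notify[0].dwOffset     = 32768;              // halfway through the buffer
        notify[0].hEventNotify = hMidEvent;
        notify[1].dwOffset     = DSBPN_OFFSETSTOP;   // signaled when playback stops
        notify[1].hEventNotify = hStopEvent;

        // Notification positions can be set only while the buffer is not playing.
        hr = pNotify->SetNotificationPositions(2, notify);
        pNotify->Release();
    }

The polling alternative reads the play cursor with IDirectSoundBuffer8::GetCurrentPosition instead of waiting on events.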