Buffer Basics

DirectSound buffer objects control the delivery of waveform data from a source to a destination. The source might be a synthesizer, another buffer, a WAV file, or a resource. For most buffers, the destination is a mixing engine called the primary buffer. From the primary buffer, the data goes to the hardware that converts the samples to sound waves.
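The following sketch shows one way to obtain the device object and the primary buffer. It is a minimal example, not a complete implementation: it assumes hwnd is the application's window handle, omits error handling, and the function and variable names (InitDirectSound, g_pDS, g_pPrimary) are illustrative.

    #include <dsound.h>

    // A minimal sketch; error handling is omitted and hwnd is assumed to be
    // the application's window handle.
    LPDIRECTSOUND8      g_pDS      = NULL;
    LPDIRECTSOUNDBUFFER g_pPrimary = NULL;

    void InitDirectSound(HWND hwnd)
    {
        // Create the device object for the default sound device.
        DirectSoundCreate8(NULL, &g_pDS, NULL);

        // A cooperative level must be set before buffers are created.
        g_pDS->SetCooperativeLevel(hwnd, DSSCL_PRIORITY);

        // Describe and create the primary buffer, which acts as the mixing
        // engine. No size or format is specified for the primary buffer.
        DSBUFFERDESC dsbd;
        ZeroMemory(&dsbd, sizeof(DSBUFFERDESC));
        dsbd.dwSize  = sizeof(DSBUFFERDESC);
        dsbd.dwFlags = DSBCAPS_PRIMARYBUFFER;
        g_pDS->CreateSoundBuffer(&dsbd, &g_pPrimary, NULL);
    }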

Information about using DirectSound buffers is contained in the following topics:

For information about capture buffers, see Capturing Waveforms.

Your application must create at least one secondary sound buffer for storing and playing individual sounds.
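The sketch below illustrates creating such a secondary buffer. It assumes pDS is an initialized IDirectSound8 interface, omits error handling, and the chosen format (22.05 kHz, 16-bit mono PCM) and two-second size are arbitrary examples.

    // A minimal sketch; pDS is assumed to be an initialized IDirectSound8
    // interface, and error handling is omitted.
    LPDIRECTSOUNDBUFFER CreateSecondaryBuffer(LPDIRECTSOUND8 pDS)
    {
        // Format of the data the buffer will hold: 22.05 kHz, 16-bit mono PCM.
        WAVEFORMATEX wfx;
        ZeroMemory(&wfx, sizeof(WAVEFORMATEX));
        wfx.wFormatTag      = WAVE_FORMAT_PCM;
        wfx.nChannels       = 1;
        wfx.nSamplesPerSec  = 22050;
        wfx.wBitsPerSample  = 16;
        wfx.nBlockAlign     = wfx.nChannels * wfx.wBitsPerSample / 8;
        wfx.nAvgBytesPerSec = wfx.nSamplesPerSec * wfx.nBlockAlign;

        // Describe a secondary buffer large enough for two seconds of audio.
        DSBUFFERDESC dsbd;
        ZeroMemory(&dsbd, sizeof(DSBUFFERDESC));
        dsbd.dwSize        = sizeof(DSBUFFERDESC);
        dsbd.dwFlags       = DSBCAPS_CTRLVOLUME;
        dsbd.dwBufferBytes = 2 * wfx.nAvgBytesPerSec;
        dsbd.lpwfxFormat   = &wfx;

        LPDIRECTSOUNDBUFFER pSecondary = NULL;
        pDS->CreateSoundBuffer(&dsbd, &pSecondary, NULL);
        return pSecondary;
    }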

A secondary buffer can exist throughout the life of an application, or it can be destroyed when no longer needed. It can be a static buffer that contains a single short sound, or a streaming buffer that is refreshed with new data as it plays. To limit demands on memory, long sounds should be played through streaming buffers that hold no more than a few seconds of data.
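A streaming buffer is refreshed by locking the region that has already been played and writing new data into it. The following sketch shows the basic pattern; pStream is assumed to be a looping secondary buffer, and GetNextChunk is a hypothetical helper that copies the next block of PCM data into the locked memory.

    // Hypothetical helper that supplies the next block of PCM data.
    void GetNextChunk(void *dest, DWORD bytes);

    // A minimal sketch of refreshing a streaming buffer while it plays.
    void RefreshStreamingBuffer(LPDIRECTSOUNDBUFFER pStream,
                                DWORD writeOffset, DWORD chunkBytes)
    {
        void  *p1 = NULL, *p2 = NULL;
        DWORD  b1 = 0,     b2 = 0;

        // Lock the region of the buffer just played so it can be rewritten.
        if (SUCCEEDED(pStream->Lock(writeOffset, chunkBytes,
                                    &p1, &b1, &p2, &b2, 0)))
        {
            GetNextChunk(p1, b1);          // fill the first locked block
            if (p2 != NULL)
                GetNextChunk(p2, b2);      // fill the wrap-around block, if any
            pStream->Unlock(p1, b1, p2, b2);
        }
    }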

You mix sounds from different secondary buffers simply by playing them at the same time. Any number of secondary buffers can play simultaneously, up to the limits of available processing power.
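For example, the two calls below start two sounds that DirectSound mixes automatically; pExplosion and pEngine are assumed to be secondary buffers already filled with sound data.

    pExplosion->Play(0, 0, 0);              // play once
    pEngine->Play(0, 0, DSBPLAY_LOOPING);   // loop until explicitly stopped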

Secondary buffers are not all created alike. They differ in characteristics such as whether they are static or streaming, whether they reside in hardware or software memory, and which controls (such as volume, pan, and frequency) they support.
