High-performance applications and games require efficient, dynamic sound production. Microsoft provides two methods for achieving this: MIDI streams and DirectSound. MIDI streams are part of the Windows 95 multimedia application programming interface. They make it possible to time stamp MIDI messages and send a buffer of these messages to the system, which can then integrate them efficiently with its own processing. For more information about MIDI streams, see the documentation included with the Win32 SDK.
DirectSound implements a new model for playing back digitally recorded sound samples and mixing different sample sources together. As with other object classes in the DirectX 2 SDK, DirectSound uses the hardware to its greatest advantage whenever possible and emulates features in software when they are not present in the hardware. You can query hardware capabilities at run time to determine the best solution for any given personal computer configuration.
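For example, a minimal sketch of such a capability query (assuming dsound.h, dsound.lib, and a window handle hwnd supplied by the application) might create the DirectSound object and examine the driver's capabilities before choosing buffer formats and a mixing strategy:

    // Sketch only: assumes dsound.h, dsound.lib, and a window handle hwnd
    // supplied by the application. Returns TRUE if the driver is emulated.
    #include <windows.h>
    #include <dsound.h>

    BOOL IsSoundDriverEmulated(HWND hwnd)
    {
        LPDIRECTSOUND lpDS = NULL;
        DSCAPS        dscaps;
        BOOL          bEmulated = FALSE;

        // Create the DirectSound object for the default sound device.
        if (DirectSoundCreate(NULL, &lpDS, NULL) != DS_OK)
            return FALSE;

        // A cooperative level must be set before most DirectSound calls.
        lpDS->SetCooperativeLevel(hwnd, DSSCL_NORMAL);

        // Ask the driver what it can do; dwSize must be set before the call.
        dscaps.dwSize = sizeof(DSCAPS);
        if (lpDS->GetCaps(&dscaps) == DS_OK)
        {
            // DSCAPS_EMULDRIVER is set when the hardware features are being
            // emulated in software by DirectSound.
            bEmulated = (dscaps.dwFlags & DSCAPS_EMULDRIVER) != 0;
        }

        lpDS->Release();
        return bEmulated;
    }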
DirectSound is built on the COM-based interfaces IDirectSound and IDirectSoundBuffer, and is extensible to other interfaces. For more information about the COM concepts required for programming applications using the DirectX 2 SDK, see The Component Object Model.
The DirectSound object represents the sound card and its various attributes. A DirectSoundBuffer object is created with the DirectSound object's IDirectSound::CreateSoundBuffer method and represents a buffer containing sound data. Several DirectSoundBuffer objects can exist at once and can be mixed together into the primary DirectSoundBuffer object. DirectSound buffers are used to start, stop, and pause sound playback, and to set attributes such as frequency and format.
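As an illustration, the following sketch creates a secondary buffer of a given size through IDirectSound::CreateSoundBuffer. The 11 kHz, 8-bit mono format is only an assumption for the example, and lpDS is an IDirectSound pointer created earlier with DirectSoundCreate:

    // Sketch only: assumes the same headers and an existing IDirectSound
    // pointer lpDS. Creates a secondary buffer holding dwBytes of sound data.
    LPDIRECTSOUNDBUFFER CreateSecondaryBuffer(LPDIRECTSOUND lpDS, DWORD dwBytes)
    {
        WAVEFORMATEX        wfx;
        DSBUFFERDESC        dsbd;
        LPDIRECTSOUNDBUFFER lpDSB = NULL;

        // Describe the wave format the buffer will hold (11 kHz, 8-bit mono
        // is an arbitrary choice for this example).
        ZeroMemory(&wfx, sizeof(wfx));
        wfx.wFormatTag      = WAVE_FORMAT_PCM;
        wfx.nChannels       = 1;
        wfx.nSamplesPerSec  = 11025;
        wfx.wBitsPerSample  = 8;
        wfx.nBlockAlign     = wfx.nChannels * wfx.wBitsPerSample / 8;
        wfx.nAvgBytesPerSec = wfx.nSamplesPerSec * wfx.nBlockAlign;

        // Describe the buffer itself; the control flags enable volume, pan,
        // and frequency changes on this buffer after it is created.
        ZeroMemory(&dsbd, sizeof(dsbd));
        dsbd.dwSize        = sizeof(DSBUFFERDESC);
        dsbd.dwFlags       = DSBCAPS_CTRLVOLUME | DSBCAPS_CTRLPAN |
                             DSBCAPS_CTRLFREQUENCY;
        dsbd.dwBufferBytes = dwBytes;
        dsbd.lpwfxFormat   = &wfx;

        if (lpDS->CreateSoundBuffer(&dsbd, &lpDSB, NULL) != DS_OK)
            return NULL;

        return lpDSB;   // caller fills the buffer with sound data and plays it
    }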
Depending on the card type, DirectSound buffers can exist in hardware as onboard RAM, wave table memory, a direct memory access (DMA) channel, or a virtual buffer (for an I/O port-based audio card). Where there is no hardware implementation of a DirectSound buffer, it is emulated in system memory.
The primary buffer is generally used to mix sounds from secondary buffers, but it can also be accessed directly for custom mixing or other specialized activities. (Use caution when locking the primary buffer, because doing so blocks all access to the sound hardware from other sources.)
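The sketch below shows one way the primary buffer might be obtained for direct access. It assumes the same headers as above, an existing IDirectSound pointer, and a window handle, and it uses the DSSCL_WRITEPRIMARY cooperative level, which prevents other applications from playing sound while it is held:

    // Sketch only: obtains the primary buffer for direct access.
    LPDIRECTSOUNDBUFFER GetPrimaryBuffer(LPDIRECTSOUND lpDS, HWND hwnd)
    {
        DSBUFFERDESC        dsbd;
        LPDIRECTSOUNDBUFFER lpPrimary = NULL;

        // Writing to the primary buffer requires the write-primary level.
        if (lpDS->SetCooperativeLevel(hwnd, DSSCL_WRITEPRIMARY) != DS_OK)
            return NULL;

        // The primary buffer is identified only by DSBCAPS_PRIMARYBUFFER;
        // its size and format are not specified here.
        ZeroMemory(&dsbd, sizeof(dsbd));
        dsbd.dwSize  = sizeof(DSBUFFERDESC);
        dsbd.dwFlags = DSBCAPS_PRIMARYBUFFER;

        if (lpDS->CreateSoundBuffer(&dsbd, &lpPrimary, NULL) != DS_OK)
            return NULL;

        return lpPrimary;   // lock only briefly; locking blocks other access
    }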
Secondary buffers can store common sounds played throughout an application, such as a game. A sound stored in a secondary buffer can be played as a single event or as a looping sound that plays repeatedly until it is stopped.
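For example, given a secondary buffer lpDSB that has already been filled with sound data, the two kinds of playback might look like this fragment (which assumes the same headers as above):

    // Play the sound once from the start of the buffer.
    lpDSB->SetCurrentPosition(0);
    lpDSB->Play(0, 0, 0);

    // Or play it as a looping sound that repeats until Stop is called.
    lpDSB->Play(0, 0, DSBPLAY_LOOPING);

    // ... later ...
    lpDSB->Stop();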
Secondary buffers can also play sounds that are larger than the available sound buffer memory. When used to play a sound larger than the buffer, the secondary buffer serves as a streaming queue that holds the portions of the sound about to be played.
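A simplified refill routine for such a streaming buffer might look like the following sketch. The buffer plays with DSBPLAY_LOOPING while the application periodically writes the next portion of the sound behind the play cursor; GetNextSoundData is a hypothetical helper that copies the next portion of the source sound into the locked region:

    // Sketch only: assumes the same headers as above. GetNextSoundData is a
    // hypothetical helper that copies dwBytes of source data to lpDest.
    void GetNextSoundData(LPVOID lpDest, DWORD dwBytes);

    void RefillStreamingBuffer(LPDIRECTSOUNDBUFFER lpDSB,
                               DWORD dwWritePos, DWORD dwBytes)
    {
        LPVOID lpvPtr1 = NULL, lpvPtr2 = NULL;
        DWORD  dwBytes1 = 0, dwBytes2 = 0;

        // Lock the region to be refilled; it may wrap around the end of the
        // buffer, so two pointer/length pairs can come back.
        if (lpDSB->Lock(dwWritePos, dwBytes,
                        &lpvPtr1, &dwBytes1,
                        &lpvPtr2, &dwBytes2, 0) != DS_OK)
            return;

        // Copy the next portion of the sound into the locked region(s).
        GetNextSoundData(lpvPtr1, dwBytes1);
        if (lpvPtr2 != NULL)
            GetNextSoundData(lpvPtr2, dwBytes2);

        // Unlock as soon as the copy is finished.
        lpDSB->Unlock(lpvPtr1, dwBytes1, lpvPtr2, dwBytes2);
    }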