Buffers


Microsoft DirectSound buffer objects control the delivery of waveform data from a source to a destination. For most buffers, the destination is a mixing engine called the primary buffer. From the primary buffer, the data goes to the hardware that converts digital samples to sound waves.

Information about using DirectSound buffers is contained in the following topics.

For information about capture buffers, see Capturing Waveforms.

Buffer Basics

When DirectSound is initialized, it automatically creates and manages a primary buffer for mixing sounds and sending them to the output device.

Your application must create at least one secondary sound buffer for storing and playing individual sounds. For more information on how to do this, see Creating Secondary Buffers.

A secondary buffer has a waveform format that must match the format of the data. This format is described in a WaveFormat structure. The format of a secondary buffer cannot be changed. Sounds of different formats can be played in different secondary buffers, and are automatically mixed to a common format in the primary buffer.

A secondary buffer can exist throughout the life of an application or it can be destroyed when no longer needed. It can contain a single sound that is to be played repeatedly, such as a sound effect in a game, or it can be filled with new data from time to time. The application can play a sound stored in a secondary buffer as a single event or as a looping sound that plays continuously. Secondary buffers can also be used to stream data, in cases where a sound file contains more data than can conveniently be stored in memory.

Buffers can be located either in hardware or in software. Hardware buffers are mixed by the sound card processor, and software buffers are mixed by the CPU. Software buffer data is always in system memory; hardware buffer data can be in system memory or, if the application requests it and resources are available, in on-board memory. For more information, see Dynamic Voice Management and Hardware Acceleration on ISA and PCI Cards.

You mix sounds from different secondary buffers simply by playing them at the same time. Any number of secondary buffers can be played at one time, up to the limits of available processing power.

Normally, you do not have to concern yourself at all with the primary buffer; DirectSound manages it behind the scenes. However, if your application is to perform its own mixing, DirectSound will let you write directly to the primary buffer. If you do this, you cannot also use secondary buffers.

Creating Secondary Buffers

To create a secondary sound buffer, use one of the five overloaded constructors of SecondaryBuffer, passing an instantiated Device object to the constructor. As appropriate, also pass to the constructor a Stream object containing the source data, or a String containing the name of the source file.

Buffer objects are owned by the Device object that created them. When the Device object is released, all buffers created by that object also will be released and should not be referenced.

DirectSound allocates hardware resources to the first buffer that can take advantage of them. Because hardware buffers are mixed by the sound card processor, they have much less impact on application performance. To specify the buffer location, pass a BufferDescription object to the constructor as follows.

BufferDescription Options

An instantiated BufferDescription object may be passed to the SecondaryBuffer constructor in order to describe the characteristics and location of the buffer.

If you want to specify the location of a buffer instead of letting DirectSound decide where it belongs, set either the BufferDescription.LocateInHardware property or BufferDescription.LocateInSoftware property. If LocateInHardware is set and there are insufficient hardware resources, the buffer creation request fails.

To take advantage of the voice management features of DirectSound, set the BufferDescription.DeferLocation property when creating the buffer. This property defers the allocation of resources for the buffer until it is played. For more information, see Dynamic Voice Management.

You can ascertain the location of an existing buffer by getting the Buffer.Caps property (inherited by SecondaryBuffer). Check the BufferCaps structure for either LocateInHardware or LocateInSoftware properties. One or the other is always specified.

Setting BufferDescription.StaticBuffer lets DirectSound know that the buffer should be created in on-board hardware memory if possible. Buffer creation does not fail if a hardware buffer is not available. Hardware static buffers should be used only for short sounds that are to be played repeatedly. This property has no effect on most modern sound cards, which use system memory for their buffers.
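
As an illustration, the following C# fragment requests a hardware buffer and then verifies where the buffer was actually placed. This is only a sketch; FileName and device are assumed to be a valid path to a WAV file and an initialized Device object, as in the example later in this topic.

[C#]
BufferDescription desc = new BufferDescription();
desc.LocateInHardware = true;      // creation fails if hardware resources are unavailable
// desc.DeferLocation = true;      // alternatively, defer the decision until the buffer is played

SecondaryBuffer secBuff = new SecondaryBuffer(FileName, desc, device);

// After creation, the inherited Caps property reports the actual location.
if (secBuff.Caps.LocateInHardware)
{
    // The buffer is mixed by the sound card processor.
}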

See Also

Hardware Acceleration on ISA and PCI Cards.

Buffer Control Options

When creating a sound buffer, your application must specify the control options needed for that buffer. This is done through the BufferDescription class, which exposes control properties corresponding to those in the BufferCaps structure. Some of these properties are listed in the following table; this is not a complete list of all available properties.

Property                 Description
Control3D                Buffer supports 3-D control. Cannot be combined with ControlPan.
ControlFrequency         Buffer supports frequency control.
ControlEffects           Buffer supports effects processing.
ControlPan               Buffer supports pan control. Cannot be combined with Control3D.
ControlPositionNotify    Buffer supports notifications of cursor position.
ControlVolume            Buffer supports volume control.

To obtain the best performance on all sound cards, your application should specify only control options it will use.

DirectSound uses the control options in determining whether hardware resources can be allocated to sound buffers. For example, a device might support hardware buffers but provide no pan control on those buffers. In this case, DirectSound would use hardware acceleration only if the ControlPan property was not specified.

If your application attempts to use a control that a buffer lacks, the method call fails. For example, if you attempt to set the Buffer.Volume property, an exception occurs unless the ControlVolume property was specified when the buffer was created.
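
For example, the following sketch creates a buffer that requests only volume control; an attempt to pan the sound then fails because ControlPan was not specified at creation. FileName and device are placeholders, as before.

[C#]
BufferDescription desc = new BufferDescription();
desc.ControlVolume = true;          // request only the control that will actually be used

SecondaryBuffer secBuff = new SecondaryBuffer(FileName, desc, device);
secBuff.Volume = -600;              // succeeds: ControlVolume was requested

try
{
    secBuff.Pan = -1000;            // fails: the buffer has no pan control
}
catch (Exception)
{
    // Handle or log the error; the buffer itself is still usable.
}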

See Also

Playback Controls

3-D Algorithms for Buffers

When you create a secondary buffer with the BufferCaps.Control3D property and the buffer is in software, you can specify an algorithm to simulate 3-D spatial location of a sound. By default, no head-related transfer function (HRTF) processing is performed, and the location of the sound relative to the listener is indicated by panning and volume only. You can request two levels of HRTF for the buffer.
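
As a rough sketch, an HRTF algorithm can be requested through the Guid3DAlgorithm property of the buffer description. The DSoundHelper field name used here is an assumption and should be checked against the reference documentation.

[C#]
BufferDescription desc = new BufferDescription();
desc.Control3D = true;
// Assumed helper field requesting full HRTF processing; a lighter HRTF and the
// default pan/volume algorithm are the other choices.
desc.Guid3DAlgorithm = DSoundHelper.Guid3DAlgorithmHrtfFull;

SecondaryBuffer secBuff = new SecondaryBuffer(FileName, desc, device);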

See Also

BufferDescription.Guid3DAlgorithm.

Filling and Playing Static Buffers

A secondary buffer that contains an entire self-contained sound is called a static buffer. Although it is possible to reuse the same buffer for different sounds, typically data is written to a static buffer only once.

Static buffers are created and managed just like streaming buffers. The only difference is in the way they are used: static buffers are filled once and then played, but streaming buffers are constantly refreshed with data as they are playing.

Note: A static buffer is not necessarily one created by setting the BufferDescription.StaticBuffer property in the buffer description. That property requests allocation of memory on the sound card, which is not available on most modern hardware. A static buffer can exist in system memory and can be created with either the LocateInHardware or the LocateInSoftware property. For more information, see Hardware Acceleration on ISA and PCI Cards.

When you create a secondary buffer using the SecondaryBuffer(String,BufferDescription,Device) or SecondaryBuffer(Stream,BufferDescription,Device) constructors, the buffer is automatically given the correct format and size, and is filled with the data. If you use the SecondaryBuffer(String,Device) or SecondaryBuffer(Stream,Device) constructors, you are responsible for setting the format and size of the buffer, and you must write the data to the buffer by using the Buffer.Write method (inherited by SecondaryBuffer).

To play the buffer, call Buffer.Play.

The following C# example creates a buffer from a file and plays it. The buffer supports volume control, panning, and frequency control. Assume that FileName is a valid path to a WAV file, and device is a Device object.
[C#]
SecondaryBuffer secBuff = null;
BufferDescription desc = new BufferDescription();
desc.ControlPan = true;
desc.ControlVolume = true;
desc.ControlFrequency = true;
secBuff = new SecondaryBuffer(FileName, desc, device);
secBuff.Play(0, BufferPlayFlags.Default);

Because the BufferPlayFlags.Looping flag is not set in the example, the buffer automatically stops when it reaches the end. You can also stop it prematurely, or stop a looping buffer, by using Buffer.Stop.

Using Streaming Buffers

A streaming buffer plays a long sound that cannot all fit into the buffer at once. As the buffer plays, old data is periodically replaced with new data. Play a streaming sound with the following procedure.

  1. Call SecondaryBuffer(Stream,BufferDescription,Device) to create a buffer with the correct waveform format, and of a convenient size. A buffer large enough to hold one or two seconds of data is typical; smaller buffers can be used, but they have to be refreshed more frequently, which can be inefficient.
  2. Set notification positions so that your application knows when to refresh a portion of the buffer. For example, you can set notifications halfway through the buffer and at the end. When the play cursor reaches the halfway point, the first part of the buffer is refreshed; when it reaches the end, the second part is refreshed. Alternatively, you can poll the play cursor as the buffer plays, but this is less efficient than notification. For more information, see Play and Write Cursors.
  3. Load the entire buffer with data by using Buffer.Write.
  4. Call Buffer.Play (inherited by SecondaryBuffer), specifying the BufferPlayFlags.Looping flag.
  5. When the play cursor reaches the first point at which you want to refresh data, call Write again, writing new data from the start of the buffer up to the play cursor. Save the position of the last byte written.
  6. Call Write each time the cursor reaches a refresh point, writing data to the part of the buffer that lies between the saved position and the play cursor. Note that wraparound is handled automatically: if the buffer is 10,000 bytes long, and you write 2000 bytes to offset 9000, the last 1000 bytes are written to the front of the buffer.
  7. When all data has been written to the buffer, set a notification position at the last valid byte, or poll for this position. When the cursor reaches the end of the data, call Buffer.Stop.

Never write to the entire buffer while it is playing. Refresh only a portion of the buffer each time. Do not write to the part of the buffer that lies between the play cursor and the write cursor.
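
The following sketch shows the polled variant of this procedure for a buffer holding two seconds of 16-bit stereo audio at 44.1 kHz, refreshed half a buffer at a time. GetNextChunk and moreDataAvailable are hypothetical stand-ins for the application's own source-data handling, the three-argument Write overload taking a byte array is assumed, and the end-of-data handling from step 7 is simplified.

[C#]
// GetNextChunk and moreDataAvailable are supplied by the application.
const int bufferBytes = 352800;          // 2 seconds * 44,100 samples * 4 bytes per frame
const int halfBytes = bufferBytes / 2;
int staleHalfOffset = 0;                 // offset of the half that is due for a refresh

secBuff.Write(0, GetNextChunk(bufferBytes), LockFlag.None);   // step 3: fill the whole buffer
secBuff.Play(0, BufferPlayFlags.Looping);                     // step 4: play as a loop

while (moreDataAvailable)
{
    int playPos = secBuff.PlayPosition;

    // Refresh a half only after the play cursor has moved out of it.
    if (staleHalfOffset == 0 && playPos >= halfBytes)
    {
        secBuff.Write(0, GetNextChunk(halfBytes), LockFlag.None);
        staleHalfOffset = halfBytes;
    }
    else if (staleHalfOffset == halfBytes && playPos < halfBytes)
    {
        secBuff.Write(halfBytes, GetNextChunk(halfBytes), LockFlag.None);
        staleHalfOffset = 0;
    }

    System.Threading.Thread.Sleep(50);   // poll several times per half-buffer interval
}

secBuff.Stop();                          // step 7: stop once the last valid byte has played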

Playback Controls

To retrieve and set the volume at which a buffer is played, your application can use the Buffer.Volume property. (All properties are inherited by SecondaryBuffer.) Setting the volume on the primary buffer changes the waveform-audio volume of the sound card. Volume is measured in hundredths of a decibel and is expressed as a negative value, representing attenuation from full volume (0). Because the decibel scale is not linear, effective silence may occur long before the minimum volume of -10,000 is reached.

The Buffer.Frequency property controls the frequency at which audio samples play, in samples per second. Set this property to 0 to reset the frequency to the original value, as specified in the WaveFormat structure that describes the format of the buffer. You cannot change the frequency of the primary buffer.

The Buffer.Pan property controls the right-left position of the sound. The pan effect is achieved by attenuating one channel. When the sound is at the center (a Pan value of 0), both channels are at full volume.

In order to use any of these controls, you must set the appropriate flags when creating the buffer. See Buffer Control Options.
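
As a brief illustration, assuming secBuff was created with the ControlVolume, ControlPan, and ControlFrequency options (as in the earlier example):

[C#]
secBuff.Volume = -1000;      // attenuate by 10 dB (units are hundredths of a decibel; 0 is full volume)
secBuff.Pan = -2500;         // shift toward the left channel by attenuating the right
secBuff.Frequency = 22050;   // play back at 22,050 samples per second
secBuff.Frequency = 0;       // restore the original frequency from the buffer's WaveFormat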

Play and Write Cursors

DirectSound maintains two pointers into the buffer: the play cursor and the write cursor. These positions are byte offsets into the buffer.

The Buffer.Play method always starts playing at the buffer's play cursor. When a buffer is created, the cursor position is set to 0. As a sound is played, the cursor moves and always points to the next byte of data to be output. When the buffer is stopped, the cursor remains at the next byte of data.

The write cursor is the point after which it is safe to write data into the buffer. The block between the current play position and the current write position is already committed to be played, and it cannot be changed safely.

You might visualize the buffer as a clock face, with data written to it in a clockwise direction. The play position and the write position are like two hands sweeping around the face at the same speed, the write position always keeping a little ahead of the play position. If the play position points to the 1 and the write position points to the 2, it is only safe to write data after the 2. Data between the 1 and the 2 may already have been queued for playback by DirectSound and should not be touched.

The write position moves with the play position, not with data written to the buffer. If you are streaming data, you are responsible for maintaining your own pointer into the buffer to indicate where the next block of data should be written.

An application can retrieve the play and write cursors from the PlayPosition and WritePosition properties. You can also set the PlayPosition property, which indirectly changes WritePosition.

To ensure that the play cursor is reported as accurately as possible, always specify the BufferCaps.CanGetCurrentPosition property when creating a secondary buffer.
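
For example, the cursors can be read during playback to find the region that is safe to refresh:

[C#]
int playPos = secBuff.PlayPosition;      // next byte to be sent to the output
int writePos = secBuff.WritePosition;    // first byte that is safe to overwrite

// The block from playPos up to writePos is already committed to playback;
// new data should be written at or after writePos, wrapping at the end of the buffer.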

Play Buffer Notification

When streaming audio, you may want your application to be notified when the play cursor reaches a certain point in the buffer, or when playback is stopped. By using the Notify.SetNotificationPositions method, you can set any number of points within the buffer where events are to be signaled. You cannot do this while the buffer is playing.

To set up notifications, do the following:

  1. Create an AutoResetEvent for each notification position.
  2. Obtain the Notify object by passing the SecondaryBuffer object to the Notify constructor.
  3. Create an array of BufferPositionNotify structures, one for each notification position. Set the Offset property to the byte offset where you want to be notified. Set the EventNotifyHandle property to the AutoResetEvent.Handle of one of the events you created in step 1.
  4. Call Notify.SetNotificationPositions, passing in the array of BufferPositionNotify structures.

You can now play the buffer in a separate thread and use WaitHandle.WaitAny to wait for notifications.
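
The following sketch sets notifications halfway through and at the end of a buffer. It assumes bufferBytes holds the buffer size in bytes; AutoResetEvent and WaitHandle are the standard System.Threading classes.

[C#]
AutoResetEvent halfwayEvent = new AutoResetEvent(false);
AutoResetEvent endEvent = new AutoResetEvent(false);

BufferPositionNotify[] positions = new BufferPositionNotify[2];
positions[0].Offset = bufferBytes / 2 - 1;
positions[0].EventNotifyHandle = halfwayEvent.Handle;
positions[1].Offset = bufferBytes - 1;
positions[1].EventNotifyHandle = endEvent.Handle;

Notify notify = new Notify(secBuff);
notify.SetNotificationPositions(positions);

// On a worker thread, after the buffer starts playing:
int index = WaitHandle.WaitAny(new WaitHandle[] { halfwayEvent, endEvent });
// index identifies which notification position was reached.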

Mixing Sounds

Mixing is done automatically when you play multiple secondary buffers at the same time.

Under earlier driver models, the DirectSound mixer produces the best sound quality if all your application's sounds use the same WAV format and the hardware output format is matched to the format of the sounds. If this is done, the mixer does not need to perform any format conversion.

Your application can change the hardware output format by creating a primary sound buffer object and setting the Buffer.Format property. This primary buffer object is for control purposes only; creating it is not the same as obtaining write access to the primary buffer, and you do not need the CooperativeLevel.WritePrimary cooperative level. However, you do need a cooperative level of CooperativeLevel.Priority. DirectSound restores the hardware format to the most recently set format every time the application gains the input focus.

You must set the format of the primary buffer before creating any secondary buffers.
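
A sketch of what this might look like under the earlier driver models follows. It assumes the application passes its main form (this) to SetCooperativeLevel, and that the control-only primary buffer is obtained by setting the BufferDescription.PrimaryBuffer property and using the Buffer(BufferDescription, Device) constructor.

[C#]
device.SetCooperativeLevel(this, CooperativeLevel.Priority);   // priority level is required

BufferDescription primaryDesc = new BufferDescription();
primaryDesc.PrimaryBuffer = true;                              // control-only primary buffer object

Buffer primary = new Buffer(primaryDesc, device);

WaveFormat format = new WaveFormat();
format.FormatTag = WaveFormatTag.Pcm;
format.Channels = 2;
format.SamplesPerSecond = 44100;
format.BitsPerSample = 16;
format.BlockAlign = (short)(format.Channels * format.BitsPerSample / 8);
format.AverageBytesPerSecond = format.SamplesPerSecond * format.BlockAlign;

primary.Format = format;   // match the hardware output format to the application's sound data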

Note: With Microsoft Windows Driver Model (WDM) drivers, setting the primary buffer format has no effect. The format is determined by the kernel mixer. For more information, see DirectSound Driver Models.

Losing and Restoring Buffers

Memory for a sound buffer can be lost in certain situations: for example, when buffers are located in sound card memory and another application gains control of the hardware resources. Loss can also occur when an application with the write-primary cooperative level moves to the foreground; in this case, DirectSound makes all other sound buffers lost so that the foreground application can write directly to the primary buffer.

An exception is raised when the application attempts to play or write data to a lost buffer. When the application that caused the loss either lowers its cooperative level from write-primary or moves to the background, other applications can attempt to reallocate the buffer memory by calling the Buffer.Restore method. If successful, this method restores the buffer memory and all other settings for the buffer, such as volume and pan settings. However, a restored buffer may not contain valid sound data, so the owning application should rewrite the data to the buffer.
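
A sketch of the recovery path follows; checking a BufferLost flag on the Buffer.Status property is an assumption about the API, and RefillBuffer stands in for whatever code rewrites the sound data.

[C#]
if (secBuff.Status.BufferLost)              // assumed status flag indicating a lost buffer
{
    secBuff.Restore();                      // try to reallocate the buffer memory
    RefillBuffer(secBuff);                  // restored memory may not contain valid data
    secBuff.Play(0, BufferPlayFlags.Default);
}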

