
Write an Audio Capture Filter


This article outlines important points to consider when writing an audio capture filter. The Microsoft® DirectShow™ SDK includes a standard Audio Capture filter.

Contents of this article:

Audio Capture Pin Requirements
Registering an Audio Capture Filter
Producing Data
Controlling Individual Streams
Time Stamping
Necessary Interfaces

Audio Capture Pin Requirements

The capture filter's capture pin and preview pin (if there is one) must support the IKsPropertySet interface. See Capture and Preview Pin Requirements for more details and sample code for implementing IKsPropertySet on your capture pin.
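
For example, a minimal sketch of the Get method on the capture pin might look like the following; CMyCapturePin is a hypothetical pin class, and the Set and QuerySupported methods, which must also be implemented, are omitted here.

STDMETHODIMP CMyCapturePin::Get(REFGUID guidPropSet, DWORD dwPropID,
	void *pInstanceData, DWORD cbInstanceData,
	void *pPropData, DWORD cbPropData, DWORD *pcbReturned)
{
	if (guidPropSet != AMPROPSETID_Pin)
		return E_PROP_SET_UNSUPPORTED;
	if (dwPropID != AMPROPERTY_PIN_CATEGORY)
		return E_PROP_ID_UNSUPPORTED;
	if (pPropData == NULL && pcbReturned == NULL)
		return E_POINTER;
	if (pcbReturned)
		*pcbReturned = sizeof(GUID);
	if (pPropData == NULL)		// caller only wants the required buffer size
		return S_OK;
	if (cbPropData < sizeof(GUID))
		return E_UNEXPECTED;
	*(GUID *)pPropData = PIN_CATEGORY_CAPTURE;	// PIN_CATEGORY_PREVIEW on a preview pin
	return S_OK;
}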

You must have one input pin for every sound source the capture card can mix before it digitizes the audio. For instance, if your sound card has a line in, a microphone in, and a CD-ROM input, you would have three input pins. You don't typically connect these input pins to any other filters; instead, you support the IAMAudioInputMixer interface on each pin, and an application uses that interface to set the recording level, balance, treble, and so on, for each source.
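
As a rough sketch, assuming a hypothetical CMyAudioInputPin class that already exposes IAMAudioInputMixer through its QueryInterface, and a hypothetical SetHardwareRecordLevel helper that programs the mixer hardware, two of the interface's methods might look like this:

// Enable or disable this source in the hardware mix.
STDMETHODIMP CMyAudioInputPin::put_Enable(BOOL fEnable)
{
	m_fEnabled = fEnable;	// hypothetical member
	return SetHardwareRecordLevel(fEnable ? m_dMixLevel : 0.0);
}

// Set the recording level for this source (0.0 = silence, 1.0 = full scale).
STDMETHODIMP CMyAudioInputPin::put_MixLevel(double Level)
{
	if (Level < 0.0 || Level > 1.0)
		return E_INVALIDARG;
	m_dMixLevel = Level;	// hypothetical member
	return m_fEnabled ? SetHardwareRecordLevel(Level) : S_OK;
}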

Registering an Audio Capture Filter

You must register your filter in the audio capture filter category. See the AMovieDllRegisterServer2 function for more information.
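
A minimal sketch of what DllRegisterServer might look like when the filter is registered under the audio capture category; CLSID_MyAudioCapture and the friendly name are hypothetical, and COM is assumed to be initialized (regsvr32 does this) before the call:

STDAPI DllRegisterServer()
{
	// Standard base-class registration first (registers the COM server entries).
	HRESULT hr = AMovieDllRegisterServer2(TRUE);
	if (FAILED(hr))
		return hr;

	IFilterMapper2 *pFM2 = NULL;
	hr = CoCreateInstance(CLSID_FilterMapper2, NULL, CLSCTX_INPROC_SERVER,
			IID_IFilterMapper2, (void **)&pFM2);
	if (SUCCEEDED(hr))
	{
		REGFILTER2 rf2;
		rf2.dwVersion = 1;
		rf2.dwMerit = MERIT_DO_NOT_USE;	// capture filters are enumerated by category, not merit
		rf2.cPins = 0;
		rf2.rgPins = NULL;

		hr = pFM2->RegisterFilter(
				CLSID_MyAudioCapture,			// hypothetical filter CLSID
				L"My Audio Capture Filter",		// friendly name
				NULL,					// no device moniker
				&CLSID_AudioInputDeviceCategory,	// audio capture filter category
				NULL, &rf2);
		pFM2->Release();
	}
	return hr;
}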

Producing Data

Produce data on the capture pin only when the filter graph is in a running state. Do not send data from your pins when the filter graph is paused. This will confuse the filter graph unless you return VFW_S_CANT_CUE from the CBaseFilter::GetState function, which warns the filter graph that you do not send data when paused. The following code sample shows how to do this.


STDMETHODIMP CMyVidcapFilter::GetState(DWORD dwMilliSecsTimeout, FILTER_STATE *State)
{
	*State = m_State;
	// Warn the filter graph that this filter does not send data while paused.
	if (m_State == State_Paused)
		return VFW_S_CANT_CUE;
	else
		return S_OK;
}

Controlling Individual Streams

All output pins should support the IAMStreamControl interface, so an application can turn each pin on or off individually (for instance, to preview without capturing). IAMStreamControl enables applications to switch between preview and capture without building a different graph. See the source code for the VidCap (Video Capture Filter) sample for details.
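
For example, an application might use IAMStreamControl like this to stop captured data from flowing while preview continues; pCapturePin is assumed to be the capture output pin already retrieved from the filter:

IAMStreamControl *pStreamControl = NULL;
HRESULT hr = pCapturePin->QueryInterface(IID_IAMStreamControl,
		(void **)&pStreamControl);
if (SUCCEEDED(hr))
{
	pStreamControl->StopAt(NULL, FALSE, 0);	// NULL means stop now; the graph keeps running
	// ... later, to resume capturing:
	pStreamControl->StartAt(NULL, 0);	// NULL means start now
	pStreamControl->Release();
}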

Time Stamping

When you send captured audio samples, the starting time stamp for each packet is the graph clock's time when the first sample in the packet was captured. The ending time stamp is the starting time stamp plus the duration of the audio the packet represents. If your audio capture filter is not providing the graph clock, the time stamps won't match up exactly (the end time of one packet won't equal the starting time stamp of the next), but that's okay. See Write a Video Capture Filter and the source code for the VidCap (Video Capture Filter) sample for time-stamping examples.
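
A minimal sketch of stamping one captured buffer, assuming the filter uses the m_pClock and m_tStart members provided by CBaseFilter, and that pSample, nSamples, and nSamplesPerSec describe the buffer the hardware just delivered:

// Read the graph clock; ideally this is done when the first sample of the
// buffer is captured, not when the full buffer is delivered.
REFERENCE_TIME rtNow;
m_pClock->GetTime(&rtNow);

REFERENCE_TIME rtStart = rtNow - m_tStart;	// stream time of the first sample in the packet
REFERENCE_TIME rtEnd = rtStart +
	(REFERENCE_TIME)nSamples * UNITS / nSamplesPerSec;	// UNITS = 10,000,000 (100-ns units per second)

pSample->SetTime(&rtStart, &rtEnd);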

You should also set the media time of each sample you deliver, in addition to the regular time stamp. The media time is the running sample count within the stream. So if you are sending one-second packets of 44.1 kilohertz (kHz) audio, you would set media time values of (0, 44100), (44100, 88200), and so on. This enables downstream filters to detect whether any audio samples were dropped, even when the regular time stamps are a little irregular because the clock being used is not the audio digitizing clock.
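
Continuing the sketch above, the media times could come from a running sample count; m_llSampleCount is a hypothetical member that persists across buffers:

// Media times carry the cumulative sample count so downstream filters can
// detect dropped audio.
LONGLONG llMediaStart = m_llSampleCount;
LONGLONG llMediaEnd = m_llSampleCount + nSamples;
pSample->SetMediaTime(&llMediaStart, &llMediaEnd);
m_llSampleCount += nSamples;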

One other thing: If the filter graph is in a running state, and then paused, and then run again, you must not produce a sample with a time stamp less than the last one you produced before pausing. Time stamps can never go back in time, not even back to before a pause occurred.
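
One way to guarantee this, as a sketch, is to clamp the computed time stamps against the end time of the last sample delivered before the pause; m_rtLastEnd is a hypothetical member, and rtStart and rtEnd are the stamps computed as in the earlier sketch:

// Never let a time stamp go backward across a pause.
if (rtStart < m_rtLastEnd)
{
	REFERENCE_TIME rtShift = m_rtLastEnd - rtStart;	// push the whole packet forward
	rtStart += rtShift;
	rtEnd += rtShift;
}
m_rtLastEnd = rtEnd;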

Necessary Interfaces

Read about the following interfaces and consider implementing them; they provide functionality that applications might rely on, so supporting them is strongly recommended:

IKsPropertySet (on the capture pin and, if present, the preview pin)
IAMAudioInputMixer (on each input pin)
IAMStreamControl (on each output pin)
