DirectMusic Producer Frequently Asked Questions

Chanel Summers
Microsoft Corporation

January 3, 2000

Summary: This article answers the most frequently asked questions about Microsoft DirectMusic Producer and is based on Microsoft DirectX, version 7.0. (13 printed pages)

Contents

General DirectMusic Issues
Planning Issues/Communication with the Developer
Troubleshooting Issues
Instrument Issues
Segments
Styles, Patterns, and Bands
Tips, Tricks, and Techniques

General DirectMusic Issues

General Tip   Please make sure to check out the Microsoft® DirectMusic® Producer Release Notes, a text file that provides some general information on the DirectX® 7.0 release of Producer, as well as a list of the new features and a section titled “Some Major Things to Watch Out For.”

What is the minimum sampling rate for a sound/instrument that DirectMusic can handle?

Within a DLS collection, and even within a single instrument, you can mix and match wave files with different sample rates. Although the software synthesizer can be set to output 11, 22, or 44.1 kilohertz (kHz), it will automatically convert the frequencies of the source waves to match the output rate. It is recommended that waves placed in DLS collections have sample rates between 3 kHz and 80 kHz. If a wave’s sample rate happens to fall outside of these boundaries, its pitch will be incorrect on playback.

What are some recommendations for optimization?

Why is it bad practice to use AutoDownload? Would you use it for anything?

If you’re working on a typical game application, you probably don’t want to use AutoDownload, as it can cause a performance hit when playing back Segments. Instead, download manually with Segment->SetParam(GUID_Download, pIPerformance) to tell the Segment to download the DLS instruments associated with it. Call this at a convenient time (such as a scene change), or call it in a separate thread prior to playback. The Band should be placed in a Band Track in your Segment to ensure that this works properly.

After playing the Segment, call Segment->SetParam(GUID_Unload, pIPerformance) when you’re done with the Segment. For this to work, all collections must be referenced properly from within the Band. When you load the Segment, the Band Track reads the name, file name, and GUID for each referenced collection and asks the Loader to load those as well. The easiest way for the Loader to know where to find them is to rely on file names. If you store your data in a resource, then you should call SetObject on each resource chunk first so the Loader will know where to find it.
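The download-before-play, unload-when-done sequence above can be sketched with stand-in objects. Note the real interfaces are the C++ COM types IDirectMusicSegment and IDirectMusicPerformance from the DirectX SDK, and the real SetParam call also takes group bits, an index, and a time; the class and method names below are illustrative only:

```python
# Illustrative stand-ins for the DirectMusic COM objects; not the real API.
GUID_DOWNLOAD = "GUID_Download"
GUID_UNLOAD = "GUID_Unload"

class MockPerformance:
    """Stand-in for IDirectMusicPerformance."""

class MockSegment:
    """Stand-in for IDirectMusicSegment that records SetParam calls."""
    def __init__(self):
        self.calls = []

    def set_param(self, guid, performance):
        self.calls.append(guid)

def play_with_manual_download(segment, performance):
    # Download the Segment's DLS instruments once, at a convenient time
    # (for example, during a scene change), before playback begins.
    segment.set_param(GUID_DOWNLOAD, performance)
    # ... play the Segment, possibly many times, with no per-play cost ...
    # Unload only when the Segment is no longer needed.
    segment.set_param(GUID_UNLOAD, performance)
    return segment.calls
```

The point of the pattern is that the download cost is paid once, up front, rather than on every play as with AutoDownload.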

When using AutoDownload, if your primary Segment uses only instruments from the default collection (GM.DLS) and the Band in your Secondary Segment references only instruments from a custom collection (replacing GM.DLS), the instruments from the default collection should be restored automatically when Secondary Segment playback stops, provided the primary Segment is still playing.

If you write a basic play-Segment/MIDI-file app, you can use AutoDownload so you don’t have to manage downloading the instruments. However, in a typical game situation, AutoDownload incurs a performance hit if you ever play a Segment more than once. AutoDownload also causes the downloading of instruments to occur right at the start of Segment playback, causing a spike in CPU usage at that point and a potential delay in playback. Downloading and unloading repeatedly (which AutoDownload may do) takes time and can degrade performance. If you are concerned about CPU performance in your application, consider turning AutoDownload off.

Relying on AutoDownload can cause other problems. You might also want to turn off AutoDownload if you have a Band in a Secondary Segment (Secondary Segments are played on top of a primary Segment). Otherwise the instruments in the Band may be downloaded automatically when the Secondary Segment starts, changing your instruments. If the Secondary Segment stops playing before the primary Segment stops, AutoDownload will then unload the Band. If this happens, you will not revert to the original instruments, as you may expect. Rather, you may lose sound output entirely because you now have no Bands loaded.

What does the new Echo MIDI feature enable me to do?

With the Echo MIDI feature, you can define a virtual MIDI driver that can route the output from a separate MIDI software application to Producer. This makes it possible for a separate application, such as a sequencer program, to play using the Band setup currently open in Producer.

Any Band that you set up in Producer can be used. The Band can include custom DLS instruments, sounds in a hardware device, or any combination of ports and instruments. The greatest benefit of this feature is that a separate application can use the DLS instruments you create in Producer.

The virtual MIDI driver you use serves as an electronic “patch bay” to route the output signals from a separate MIDI application directly to the input channels of Producer without the use of hardware, and without having to send the output from the separate application to any playback device. You select the virtual MIDI driver as the playback port (device) in the separate MIDI application and select the same virtual MIDI driver as the input port in Producer.

Virtual MIDI drivers are available at numerous commercial sites on the Web. A recommended driver, called Hubi’s LoopBack Device, is available at no cost from the Web site http://www.geocities.com/SiliconValley/Vista/2872/hmidilb/hmdlpbk.html.

The file you download may be in a compressed format that requires expanding in a separate folder on your computer before you can use it. After downloading and expanding the file, you can install the virtual MIDI driver using the instructions provided.

After you have installed the virtual MIDI driver on your computer as a MIDI device, you can select it as an input port in Producer and as an output port in the separate MIDI application.

How do I enable Echo MIDI?

  1. In the Transport controls, click the MIDI Options button.

  2. In the Echo MIDI Input Device box, select the virtual MIDI driver. (Hubi’s LoopBack Driver offers a choice of four channels: LB 1, LB 2, LB 3, and LB 4.)

  3. Click the Echo MIDI In check box to enable the Echo MIDI feature for the selected virtual MIDI device.

  4. In the Output PChannels box, select a range of 16 PChannels (such as 1 through 16, 17 through 32, or 33 through 48) for the virtual MIDI device.

  5. Open any band file in Producer using the Band Editor.

  6. In the separate MIDI application, select the same virtual MIDI driver that you selected in step 2.

After you have enabled the Echo MIDI connection between Producer and the separate MIDI application, you can play any file in the separate application and hear it play using the currently opened Band in Producer.

Note   To ensure that all DLS instrument sounds used in the Band are loaded into your computer and are available for playback by the separate MIDI application, you may need to select either a Segment or a Style that uses the Band and play it briefly in Producer.

Planning Issues/Communication with the Developer

What are some planning tips?

Up-front planning is key! Keep good records and label everything. Make sure to agree on triggers for interactive audio elements.

What are the parameters that are not included in the run-time files—essentially “audition only” in Producer—and need to be communicated to the developer?

Troubleshooting Issues

Why won’t my Style Bands play consistently in my Segments?

In most cases a Segment will require a Band Track with its own Band objects, not references. People often edit Style Bands and think that they’ll show up automatically in the Segments. This is complicated by the fact that DirectMusic persists whatever has been previously downloaded—such as the Style’s Band—so they actually hear it in the Segment even though the Segment has no Band Track. If the Segment is played through the API, though, it won’t play the proper instruments, or possibly anything at all.

Why can’t I hear PChannel 98?

Most likely the channel has not been created in the Port Configuration Editor. Make sure the channel group covering PChannels 97 through 112 has been created there.
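The arithmetic behind this is simple: PChannels are grouped 16 to a channel group, so (using the 1-based numbering Producer displays) the group for any PChannel can be computed as:

```python
def channel_group(pchannel):
    """Return the 1-based channel group for a 1-based PChannel.

    Groups hold 16 PChannels each: 1-16 are group 1, 17-32 are
    group 2, and so on.
    """
    return (pchannel - 1) // 16 + 1

# PChannel 98 falls in group 7, which covers PChannels 97-112.
```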

What is a GUID and why does Producer use them?

A GUID is a globally unique identifier. Producer uses GUIDs instead of file names to keep track of which components are being used in combination with other components. Because it is essentially impossible to create two files with the same GUID, this is a much more accurate means of distinguishing files than using file names, and it ensures that two components with exactly the same name are not confused with each other. If you distribute Producer files over the Internet or by other means where there is a chance that another file with the same name may exist, you may want to keep a careful record of the GUIDs assigned to your files. Producer can create a new GUID, virtually guaranteed to be unique, for any component.

Important Note   When you duplicate files within Producer, the GUIDs are identical in both files. If you end up delivering both run-time files, it is recommended that you change one of the GUIDs (see the Edit GUID... button on the Info tab in the property page).

Why should the first Band in a Segment usually have a negative time?

The inherent behavior of the DirectMusic architecture is that if you have multiple events with the same timestamp (notes, patch/Band changes, tempo events, and so on), the order in which they are played is indeterminate. For example:

Suppose you have a 2-bar Segment that has a Trumpet patch change at measure 1, beat 1, tick 0 and a Bagpipe patch change at measure 2, beat 1, tick 0, and that you have a note also at measure 2, beat 1, tick 0. When you play the Segment back, you may hear the note play as a Trumpet or maybe as the Bagpipe. To ensure that you get what you expected, change the time of the Bagpipe patch change to measure 2, beat 1, tick –1.
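To see why tick –1 works, remember that events are dispatched in timestamp order, and that the order of events sharing a timestamp is undefined. A sketch of the arithmetic, using DirectMusic’s 768 ticks per quarter note (DMUS_PPQ), so that one 4/4 measure is 3,072 ticks:

```python
PPQ = 768                    # ticks per quarter note (DirectMusic's DMUS_PPQ)
TICKS_PER_MEASURE = 4 * PPQ  # one 4/4 measure = 3072 ticks

def absolute_tick(measure, beat, tick):
    """Convert a 1-based measure/beat plus a tick offset to absolute time."""
    return (measure - 1) * TICKS_PER_MEASURE + (beat - 1) * PPQ + tick

# The Bagpipe patch change at measure 2, beat 1, tick -1 sorts strictly
# before the note at measure 2, beat 1, tick 0, so the note is
# guaranteed to play on the Bagpipe.
events = [
    (absolute_tick(2, 1, 0), "note"),
    (absolute_tick(2, 1, -1), "bagpipe patch change"),
    (absolute_tick(1, 1, 0), "trumpet patch change"),
]
events.sort(key=lambda e: e[0])
```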

How do Bands work as a “snapshot” separate from CC tracks that may be affecting pan and volume?

Bands are one-shot settings of volume and pan (as well as priority, patch changes, and so forth). Compare these to CC tracks, which can have either one-shot (instantaneous) settings or curves, which change value over time. If you place a Band in a Band Track right in the middle of a CC curve, the data of the curve will almost immediately run over the Band’s data (you might notice a brief jump in pan or volume). I’d suggest using mid-Segment Bands with volume and pan information only when changing Styles (that need a new Band) or when you know that your current Style has no volume and pan information in CC tracks. If you’re just changing an instrument’s patch, turn off the volume and pan change data in the Band so that only the patch itself changes.

How does the Producer environment differ from the actual game environment?

Producer automatically downloads all DLS collections, keeps several versions of files around for editing, and uses a lot more CPU.

Why do my notes sometimes clip each other?

Why do I sometimes get the wrong instrument on a PChannel?

Why can’t I hear the instruments in my DLS collection in a Non-Microsoft Synthesizer?

If your synthesizer is DLS-2-compliant, some DLS collections authored in DirectMusic Producer may produce no sound. This is because the DirectX 7.0 release of Producer sets the velocity range (the note velocities at which a wave is triggered) to 0-0 instead of 0-127. In DLS-1, region velocity is ignored because multiple layers are not supported, but this becomes a major problem for DLS-2 synthesizers that do respond to velocity layering. To address this issue, install the DirectX 7.0a SDK, which contains an updated version of Producer that saves velocity ranges with the proper parameters. Open and resave all DLS collections in this updated version of Producer to remedy the problem. (We are also planning to release a reduced-download Web patch that updates Producer; we will provide a pointer to it when it is up on the Web site.)

Important Note   We recommend that you install the DirectX 7.0a SDK, regardless of whether you think this will ever be an issue, because if you ever have content with 0-0, there’s a good chance that it will someday play on a default synthesizer that supports DLS-2 and therefore it may not be heard.

Also, what is the priority setting, which I can turn on and off for every instrument in the Band, used for?

The priority setting determines which instruments continue to play when the maximum number of allocated voices is reached. The number of voices defaults to 32 and can be set from the Port Configuration Editor (right-click the little green button with a 1 or a 2 on it).

I have created a sustained pad sound with three notes in the pattern (C, E and G). If I change the chord in the segment from major to minor, I have no problems, but if I want a major 7th, it still only plays a major triad. How do I create 4-note chords?

In order to have a 4-note chord you need to actually play 4 notes in the Pattern. The chord Play Mode will reduce it to three if that’s all you need. So in this case:

Put notes C, E, G, B in your Pattern. In the Part Properties, set the Default Play Mode to Chord (this particular mode makes 7ths optional depending on the chord). Now any 4-note chords in your Chordmap or Segment will play as 4 notes, and 3-note chords will play as 3.

Instrument Issues

DirectMusic breaks through the 16-channel limitation of MIDI with the concept of Channel Groups, but is there a limitation in the number of voices that can be played simultaneously with DirectMusic? If so, what is it?

The number of voices is determined by the synthesizer. You specify how many voices you want when you open it, and the synthesizer responds with how many it actually allocated. The default for the software synthesizer is 32 voices, but it will allocate up to a thousand if you ask for that many. In practice, therefore, there is no limitation.

Do the instruments in a DLS collection take up much memory or is it primarily the waves that take up the memory?

It is the waves.

I want to create more instruments using the same waves because I am combining two projects into one and want to have separate instruments.

Just use the wave once and have it referenced by multiple instruments within the same DLS collection file.

Can you have several instruments within the same PChannel?

DirectMusic follows a “one instrument per PChannel” rule at any given time—the instrument can, of course, be changed over time with Band changes.

The easiest way to manage multiple instrument layering is to create new parts as linked parts (rather than copying the data to a new PChannel, which would add unnecessarily to file size). If you have variations that you wish to be locked, choose a common variation lock ID in the part properties. That way, when the flute on PChannel 3 plays variation 4, the linked horn part on PChannel 4 plays variation 4 also.
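The lock-ID behavior can be sketched as follows: when a variation is chosen for a part, every other part sharing its lock ID gets the same variation number, while unlocked parts choose independently. This is an illustration of the rule, not Producer’s internal implementation:

```python
import random

def choose_variations(parts, rng=random):
    """Pick a variation for each part, honoring variation lock IDs.

    `parts` maps a part name to (lock_id, variation_count); a lock_id
    of None means the part is unlocked and chooses independently.
    """
    lock_choices = {}  # one shared choice per lock ID
    result = {}
    for name, (lock_id, count) in parts.items():
        if lock_id is None:
            result[name] = rng.randrange(count)
        else:
            if lock_id not in lock_choices:
                lock_choices[lock_id] = rng.randrange(count)
            result[name] = lock_choices[lock_id]
    return result

parts = {
    "flute (PChannel 3)": ("A", 8),    # locked together: always play the
    "horn (PChannel 4)": ("A", 8),     # same variation number
    "drums (PChannel 10)": (None, 4),  # unlocked: chooses on its own
}
```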

Is it possible to pan regions independently in a single DLS instrument?

You can do this in DLS-1—specifically, in a drum instrument—by assigning each region a unique articulation with a different pan. DLS Designer supports this. Because the DirectMusic synthesizer supports multiple articulations in melodic instruments as well, you can actually do this with any instrument.

Important Note   This is technically outside the boundaries of the DLS Level 1 specification, so be advised that other synthesizers may enforce the specification’s limits and ignore the per-region articulations.

Can I create a single “16-instrument” drum channel for all my drum sounds, or do I need 16 separate channels, even though only 3 drum sounds play at the same time?

You should definitely create regions within your DLS instruments—and for drum sounds, it is usually a single wave per region. So you can create 128 regions for a single drum instrument.

Using our software synthesizer, you can also create 128 regions for melodic (that is, non-drum) instruments, but you could run into trouble if the hardware is only DLS-1-compliant—DLS-1 allows only 16 regions per melodic instrument (128 for drums).
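If hardware DLS-1 compliance matters for your title, it is easy to sanity-check a collection’s region counts against those limits. The data layout below is illustrative only, not the actual DLS file format:

```python
DLS1_MELODIC_REGION_LIMIT = 16
DLS1_DRUM_REGION_LIMIT = 128

def dls1_region_violations(instruments):
    """Return names of instruments whose region count exceeds the DLS-1 limit.

    `instruments` is a list of (name, is_drum, region_count) tuples.
    """
    violations = []
    for name, is_drum, regions in instruments:
        limit = DLS1_DRUM_REGION_LIMIT if is_drum else DLS1_MELODIC_REGION_LIMIT
        if regions > limit:
            violations.append(name)
    return violations

collection = [
    ("DrumKit", True, 128),  # fine: drums allow up to 128 regions
    ("BigPad", False, 24),   # too many regions for a DLS-1 melodic instrument
    ("Flute", False, 5),     # fine
]
```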

So how do I go about doing this?

In the DLS Designer, right-click the Waves folder to import waves, then add a new instrument by right-clicking the Instruments folder and choosing Insert Instrument. Double-click the instrument to open the DLS Editor.

A single instrument with one region and one wave is inserted. To add more regions, first resize the existing region by grabbing its ends and dragging, or by using the range spin controls. Now there is room to add other regions: Ctrl+click in the region strip to add regions (alternatively, right-click and choose Insert Region). They can be resized and reassigned to different waves as you go along.

I want to set up a drum instrument. I’ve assigned the samples to different key regions and clicked the “drums” button, but can find no way to assign different volume and panning info to these samples. My only solution so far has been to assign each drum to its own PChannel, and move them around in the Bands window. This is not a good solution because key changes move the drum notes as well. What do I need to do?

You can set up drum regions that have an articulation for each region (if they don’t, they’ll follow the instrument articulation). In the region articulation, set the pan slider (bottom of the editor). In the Region Property Page, you can set attenuation that would override any wave settings you may have. You can set each drum instrument to its own PChannel, which has advantages (you can have independent variations and CCs) and disadvantages (it’s a little more work to keep organized). In order to stop each part from transposing with chord and key changes, set the Play Mode in the Part Property Page to “fixed - absolute.”

Is it possible to change the LFO rate of an instrument while the game is playing? Specifically, I may need to adjust the “frequency” rate, to ensure that the instrument’s oscillation rate will match any tempo changes in the music.

No, there is no way to interactively change any of the DLS parameters on the fly. If you want to have vibrato or tremolo that changes with the rate of the music, you should place pitch or volume curves in your music to do it. This guarantees exact synchronization to the tempo. Alternatively, you (or your developer) could write a Tool or Track Type to interject them under program control.

Can you tell me how to adjust the Pitch Bend range for an instrument?

Right-click in a CC track and choose “Insert Pitch Bend Range Curve.” This places, at the point you inserted it, four curves that together control the value. The new range remains in place for that part until it is set otherwise. Also, CC 6 lets you change the value in the middle of playback without having to insert the whole set again.
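For reference, pitch bend range is standard MIDI Registered Parameter 0 (pitch bend sensitivity), which is why CC 6 (Data Entry MSB) alone can re-adjust it mid-playback once the RPN has been selected. The raw control change sequence, with a 0-based channel as used at the wire level, looks like this:

```python
def pitch_bend_range_messages(channel, semitones):
    """Build the MIDI control change messages that set pitch bend range.

    CC 101/100 select RPN 0 (pitch bend sensitivity), CC 6 sets the
    range in semitones, and CC 38 sets the fractional part in cents.
    """
    status = 0xB0 | channel  # control change status byte for the channel
    return [
        (status, 101, 0),        # RPN MSB = 0
        (status, 100, 0),        # RPN LSB = 0 -> RPN 0: pitch bend range
        (status, 6, semitones),  # Data Entry MSB: range in semitones
        (status, 38, 0),         # Data Entry LSB: 0 cents
    ]
```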

Segments

What are Controlling Segments? How do I audition them in Producer?

A Controlling Segment is a Secondary Segment whose tracks affect—and essentially replace—the corresponding tracks of the primary Segment. For example, suppose you have a melodic theme playing in a primary Segment, and a certain event calls for more of a minor feeling. You can interactively replace the major chord changes with minor ones in the Controlling Segment while continuing the overall flow of the primary Segment. When playback of the Secondary Segment stops, the primary Segment reverts to its original state.

To audition a Segment as a Controlling Secondary Segment:

Can you change the tempo setting mid-Segment?

Yes, you can change the tempo setting either by putting in a Tempo Track and setting it to what you want or, interactively, by setting up a Controlling Segment that has a Tempo Track with your new tempo.

Styles, Patterns, and Bands

Does DirectMusic Producer support multiple time signatures?

Producer supports multiple time signatures in Segments (just add a Time Signature Track), but not within a single Style. You must use different Styles to change time signatures, and if a Segment has a Style Track, the Style Track owns the time signature.

Is there a way to make one measure a different time signature than the rest of the measures in the pattern? For example, I have a piece that is 4/4 for three measures, then one bar of 2/4, but I cannot change the last measure to 2/4. If I change the properties it simply does not give me access to the last two notes, but still plays them.

DirectMusic Producer only allows a single “master” time signature for each Style. We are looking into changing this for a future version of DirectX—but there are a number of issues, especially if you have to display a particular Groove Level that might be in different time signatures, depending on which Pattern is selected.

The workarounds for now include making the time signature 14/4 (three bars of 4/4 plus one of 2/4), or creating a different 2/4 Style used for that one bar in the Segment.

When I change the volume/panning of a part on the volume/panning grid, the changes I have made do not register in my Segment, although they do register in a Pattern. This is despite the fact that I have created the appropriate “Band Track” in the Segment.

The Band Track in a Segment does not contain references to Bands in a Style—it contains the Bands themselves. When you drag a Band into a Segment, a copy of the Band actually goes into the Segment. If you change the original Band in the Style, you will have to drag it into the Segment again. Alternatively, you can double-click the Band in the Band Track of the Segment, make your changes there, and then drag it back to the Style.

Is it possible to have two or more different Bands within a single Style played at random by one or more Patterns? I tried assigning a Style Pattern to two Style Bands, hoping that the Bands would alternate at random, but only one or the other would play, never both.

Unfortunately, Bands are designed to be interactive, but not variable. This could be done with just a little bit of code.

However, you could create this effect by duplicating (and linking) all of your parts on separate PChannels that will pick up the second Band (for example, PChannels 1-6 are Band 1 and PChannels 7-12 are Band 2). Then assign them to unique variations and lock them (for example, when variation 1 plays flute, variation 1 in the tuba part is silent, and the reverse). You could set it up so that either PChannels 1-6 play, or PChannels 7-12 (or combinations in between if you wanted).

Is it possible to have, for example, a 4-bar Pattern looping around and playing variations, and then over the top of that Pattern play one, two, or maybe three melodies of, for example, 12 bars each? I want the melodies to be set, but the Patterns to contain variations. However, I would only want these set melodies to play over certain Patterns, but not others. If this is possible, how can it be done?

It could be done with Motifs, but that would require the programmer to make sure they are cued on 4-bar boundaries. It would be better accomplished by creating a second Style Track (in Track Group 2) and layering a 12-bar Pattern on top of the 4-bar Patterns in the Style Track in Track Group 1. That way, when one Track Group changes grooves, the other does as well, so the melodies stay in sync with the appropriate underlying Pattern.

Is it possible to place a priority on a variation(s)? I wish to have a theme play using variations 1 on all the Patterns first time around, then as the Segment repeats from then on, for the variations to play at random.

Unfortunately there’s no way to do exactly what you’re describing. However, by copying the Pattern to a second Pattern, you could assign Groove Level 1 to a Pattern in which all of the variations are disabled except variation 1, then assign the other Pattern to Groove Level 2, where all variations play randomly. You could also set all of the parts in the first Pattern to “play in numerical order,” so they’d all play variation 1 first, then switch.

Tips, Tricks, and Techniques

Use Part Linking to minimize file size.

Whenever there is an element to be reused as-is across Styles (for example, a drum groove or a comping figure), don’t copy and paste—create a part link instead. It’s very economical because it is a pointer to the same data.

Mix and match extended audio files with short (single note) instrumental samples as the creative circumstance dictates.

With DirectMusic it’s not an either/or proposition. Where the performance cannot be encapsulated into single samples (such as a sax solo), go ahead and incorporate “mini performances” into the mix.

Add a chord track to a sequence track (or extended audio track) so that interactive elements can be added.

This is a fairly easy way to get into interactivity because the underlying music is a MIDI file or an audio file. Secondary Segments layered on top will be shaped harmonically by the Chord Track.

Add a Groove Track to an extended sequence (or theme) so that any transition out of the sequence would have the perfect transition to the new material.

Every bar might be a different groove, so the music at bar 5 would have a fill of groove 5 to match the melody or intensity of the current music.

Use Part Locking to fatten up thinner sounds or to create flange/delay effects while still maintaining the variability.

Create Secondary Segments with control changes to interactively shape the mix.

This can be used to bring in/out certain elements that match the drama of the music. It can also be used to smoothly fade from one texture to another while keeping the form intact.

Gradually combine sound effects into a rhythmic pattern that takes on musical characteristics.

Create musical textures by combining only Secondary Segments, each with its own length and time signature.

The new multiple Secondary Segment Toolbar in DirectX 7.0 is very handy for this!

Use the “play variations in numerical order” feature in an interesting way by creating different numbers of variations in each part.

So the drums might have three variations, the bass five, the sitar eight. Different juxtapositions keep the resulting music fresh.

Separate the elements of a musical performance into four categories that can be interactively reconstructed: the content (rhythmic and melodic tendencies), the chords, the texture (sparse or full), and the timbres (which instruments).

Use random settings in velocity and start time.

Use inversion boundaries to give the illusion of voice leading for chordal parts.