Chanel Summers
Microsoft Corporation
January 3, 2000
Summary: This article answers the most frequently asked questions about Microsoft DirectMusic Producer and is based on Microsoft DirectX, version 7.0. (13 printed pages)
General DirectMusic Issues
Planning Issues/Communication with the Developer
Troubleshooting Issues
Instrument Issues
Segments
Styles, Patterns, and Bands
Tips, Tricks, and Techniques
General Tip Please make sure to check out the Microsoft® DirectMusic® Producer Release Notes, a text file that provides some general information on the DirectX® 7.0 release of Producer, as well as a list of the new features and a section titled “Some Major Things to Watch Out For.”
Within a DLS collection, and even within a single instrument, you can mix and match wave files with different sample rates. Although the software synthesizer can be set to output 11, 22, or 44.1 kilohertz (kHz), it will automatically convert the frequencies of the source waves to match the output rate. It is recommended that waves placed in DLS collections have sample rates between 3 kHz and 80 kHz. If a wave’s sample rate happens to fall outside of these boundaries, its pitch will be incorrect on playback.
Table 1. CPU usage regarding reverb and sampling rate
Reverb Status | Sampling Rate | Notes
Reverb off | 22 kHz | Least CPU.
Reverb on | 22 kHz | Better sounding.
Reverb off | 44.1 kHz | Probably not that useful; 22 kHz with reverb on usually sounds better, but use your own taste.
Reverb on | 44.1 kHz | Sounds best if you are using 44.1-kHz samples.
You can also give the user ultimate control through the audio control panel in your game. Of course, if all your samples are 22 kHz, you should run the synthesizer no faster than 22 kHz.
If you’re working on a typical game application, you probably don’t want to use AutoDownload, as it can cause a performance hit when playing back Segments. Instead, manually download with Segment->SetParam(GUID_Download, pIPerformance) to tell the Segment to download the DLS instruments associated with the Segment. This should be called at a convenient time (like a scene change) or you can call it in a separate thread prior to playback. The Band should be placed in a Band Track in your Segment to ensure that this will work properly.
After playing the Segment, call Segment->SetParam(GUID_Unload, pIPerformance) when you’re done with the Segment. For this to work, all collections must be referenced properly from within the Band. When you load the Segment, the Band Track reads the name, file name, and GUID for each referenced collection and asks the Loader to load those as well. The easiest way for the Loader to know where to find them is to rely on file names. If you store your data in a resource, then you should call SetObject on each resource chunk first so the Loader will know where to find it.
When using AutoDownload, if you are using only the instruments from the default collection (GM.DLS) in your primary Segment and the Band in your Secondary Segment references only instruments from a custom collection (replacing GM.DLS), then the instruments from the default collection should be returned automatically when the Secondary Segment playback stops, if the primary Segment is still playing.
If you write a basic Play Segment/MIDI file app, you can use AutoDownload so you don’t have to manage downloading the instruments. However, in a typical game situation, AutoDownload incurs a performance hit if you ever play a Segment more than once. AutoDownload also causes the downloading of instruments to occur right at the start of Segment playback, causing a blip in CPU at that point and potential delay in performance. Downloading and unloading repeatedly (which AutoDownload may do) takes time, and can potentially degrade performance. If you are concerned about CPU performance in your application, consider turning AutoDownload off.
Relying on AutoDownload can cause other problems. You might also want to turn off AutoDownload if you have a Band in a Secondary Segment (Secondary Segments are played on top of a primary Segment). Otherwise the instruments in the Band may be downloaded automatically when the Secondary Segment starts, changing your instruments. If the Secondary Segment stops playing before the primary Segment stops, AutoDownload will then unload the Band. If this happens, you will not revert to the original instruments, as you may expect. Rather, you may lose sound output entirely because you now have no Bands loaded.
With the Echo MIDI feature, you can define a virtual MIDI driver that can route the output from a separate MIDI software application to Producer. This makes it possible for a separate application, such as a sequencer program, to play using the Band setup currently open in Producer.
Any Band that you set up in Producer can be used. The Band can include custom DLS instruments, sounds in a hardware device, or any combination of ports and instruments. The greatest benefit of this feature is that a separate application can use the DLS instruments you create in Producer.
The virtual MIDI driver you use serves as an electronic “patch bay” to route the output signals from a separate MIDI application directly to the input channels of Producer without the use of hardware, and without having to send the output from the separate application to any playback device. You select the virtual MIDI driver as the playback port (device) in the separate MIDI application and select the same virtual MIDI driver as the input port in Producer.
Virtual MIDI drivers are available at numerous commercial sites on the Web. A recommended driver, called Hubi’s LoopBack Device, is available at no cost from the Web site http://www.geocities.com/SiliconValley/Vista/2872/hmidilb/hmdlpbk.html.
The file you download may be in a compressed format that requires expanding in a separate folder on your computer before you can use it. After downloading and expanding the file, you can install the virtual MIDI driver using the instructions provided.
After you have installed the virtual MIDI driver on your computer as a MIDI device, you can select it as an input port in Producer and as an output port in the separate MIDI application.
After you have enabled the Echo MIDI connection between Producer and the separate MIDI application, you can play any file in the separate application and hear it play using the currently opened Band in Producer.
Note To ensure that all DLS instrument sounds used in the Band are loaded into your computer and are available for playback by the separate MIDI application, you may need to select either a Segment or a Style that uses the Band and play it briefly in Producer.
Up-front planning is key! Keep good records and label everything. Make sure to agree on triggers for interactive audio elements.
In most cases a Segment will require a Band Track with its own Band objects, not references. People often edit Style Bands and think that they’ll show up automatically in the Segments. This is complicated by the fact that DirectMusic persists whatever has been previously downloaded—like the Style’s Band—so they’ll actually hear the right instruments in the Segment even though it has no Band Track. If played through the API, though, it won’t play the proper instruments, or possibly anything at all.
Most likely it has not been created in the Port Configuration Editor. Therefore, you need to make sure the channel group for channels 97 to 112 has been created in the Port Configuration Editor.
A GUID is a globally unique identifier. Producer uses GUID numbers instead of file names to keep track of which components are being used in combination with other components. Because it is essentially impossible to create two files with the same GUID number, this is a much more accurate means of distinguishing files than using file names. Also, this ensures that two components with the exact same name are not confused with each other. If you distribute Producer files over the Internet or by other means where there is a chance that another file with the same name may exist, you may want to keep a careful record of the GUID numbers assigned to your files. Producer can create a new, virtually guaranteed unique GUID for any component.
Important Note When you duplicate files within Producer, the GUIDs are identical in both files. If you end up delivering both run-time files, it is recommended that you change one of the GUIDs (see the Edit GUID... button on the Info tab in the property page).
The inherent behavior of the DirectMusic architecture is that if you have multiple events with the same timestamp (that is, notes, patch/Band changes, tempo events, and so on), the order in which they are played is indeterminate. For example:
Suppose you have a 2-bar Segment that has a Trumpet patch change at measure 1, beat 1, tick 0 and a Bagpipe patch change at measure 2, beat 1, tick 0, and that you have a note also at measure 2, beat 1, tick 0. When you play the Segment back, you may hear the note play as a Trumpet or maybe as the Bagpipe. To ensure that you get what you expected, change the time of the Bagpipe patch change to measure 2, beat 1, tick –1.
Bands are one-shot settings of volume and pan (as well as priority, patch changes, and so forth). Compare these to CC tracks, which can have either one-shot (instantaneous) settings or curves, which change value over time. If you place a Band in a Band Track right in the middle of a CC curve, the data of the curve will almost immediately run over the Band’s data (you might notice a brief jump in pan or volume). I’d suggest using mid-Segment Bands with volume and pan information only when changing Styles (that need a new Band) or when you know that your current Style has no volume and pan information in CC tracks. If you’re just changing an instrument’s patch, turn off the volume and pan change data in the Band so that only the patch itself changes.
Producer automatically downloads all DLS collections, keeps several versions of files around for editing, and uses a lot more CPU.
If your synthesizer is DLS-2-compliant, some DLS collections authored in DirectMusic Producer may not be heard. This is because the DirectX 7.0 release of Producer sets the velocity range (at what note velocities this wave is triggered) to 0-0 instead of 0-127. In DLS-1, region velocity is ignored, because multiple layers are not supported, but this becomes a major problem for DLS-2 synthesizers that do respond to velocity layering. To address this issue, you should install the DirectX 7.0a SDK, which contains an updated version of Producer that will save the velocity ranges with proper parameters. Open and resave all DLS collections in this updated version of Producer to remedy this problem. (We are planning on releasing a reduced-download web patch that would update Producer as well. We will provide a pointer to it when it's up on the Web site.)
Important Note We recommend that you install the DirectX 7.0a SDK, regardless of whether you think this will ever be an issue, because if you ever have content with 0-0, there’s a good chance that it will someday play on a default synthesizer that supports DLS-2 and therefore it may not be heard.
The priority determines which voices continue to play when you reach the maximum number of voices allocated. The number of voices defaults to 32, and can be set from the Port Configuration Editor (right-click the little green button with a 1 or a 2 on it).
In order to have a 4-note chord, you need to actually play 4 notes in the Pattern. The chord Play Mode will reduce it to three if that’s all you need. So in this case:
Put the notes C, E, G, and B in your Pattern. In the Part Properties, set the Default Play Mode to Chord (this particular mode makes 7ths optional, depending on the chord). Now any 4-note chord in your Chordmap or Segment will play as 4 notes, and any 3-note chord will play as 3.
The number of voices is determined by the synthesizer. You specify how many voices you want when you open it, and the synthesizer responds with how many it actually allocated. The default for the software synthesizer is 32 voices, but it will allocate up to a thousand if you ask for it. Therefore, there is no practical limitation.
It is the waves that take up the space.
So, just use each wave once and have it referenced by multiple instruments within the same DLS collection file.
DirectMusic goes with the “one instrument per PChannel” at any given time—it can be changed over time with Band changes, of course.
The easiest way to manage multiple instrument layering is to create new parts as linked parts (rather than copying over the data to a new PChannel, which would add unnecessarily to file size). If you have variations that you wish to be locked, choose a common variation lock ID in the part properties. That way, when PChannel 3 flute plays variation 4, then linked part PChannel 4 horn plays variation 4 also.
You can do it in DLS1, specifically, in a drum instrument. You assign a unique articulation to each region with a different pan. DLS Designer supports it. Because the DirectMusic synthesizer supports multiple articulations in melodic instruments, you can actually do this with any instrument.
Important Note This is technically outside the boundaries specified by the DLS Level 1 specification, so be advised that other synthesizers may enforce this limitation.
You should definitely create regions within your DLS instruments—and for drum sounds, it is usually a single wave per region. So you can create 128 regions for a single drum instrument.
Using our software synthesizer, you can also create 128 regions for melodic (that is, non-drum) instruments, but you could run into trouble if hardware is DLS1-compliant—only 16 regions for each melodic instrument are allowed (128 for drums).
In the DLS Designer, right-click the Wave folder to import waves, then add a new instrument by right-clicking the Instrument folder and choosing Insert Instrument. Double-click the instrument to open the DLS Editor.
A single instrument with one region and one wave is inserted. To add more regions, first resize the existing region by grabbing the ends and dragging, or by using the range spin controls. Now there is room to add other regions. You can control-click in the region strip to add regions (alternatively, right-click and choose Insert Region). They can be resized and reassigned different waves as you go along.
You can set up drum regions that have an articulation for each region (if they don’t, they’ll follow the instrument articulation). In the region articulation, set the pan slider (bottom of the editor). In the Region Property Page, you can set attenuation that would override any wave settings you may have. You can set each drum instrument to its own PChannel, which has advantages (you can have independent variations and CCs) and disadvantages (it’s a little more work to keep organized). In order to stop each part from transposing with chord and key changes, set the Play Mode in the Part Property Page to “fixed - absolute.”
No, there is no way to interactively change any of the DLS parameters on the fly. If you want to have vibrato or tremolo that changes with the rate of the music, you should place pitch or volume curves in your music to do it. This guarantees exact synchronization to the tempo. Alternatively, you (or your developer) could write a Tool or Track Type to interject them under program control.
Right-click in a CC track and choose “Insert Pitch Bend Range Curve” to place it on the timeline at the point you clicked; this inserts the four controller curves that together set the value. The range remains in place for that part until it is set otherwise. Also, CC6 by itself allows you to change that value in the middle of playback without having to insert the whole set again.
Any tracks in the Controlling Segment affect and essentially replace the corresponding tracks of the primary Segment. For example, let’s say you have a melodic theme playing on a primary Segment, and a certain event triggers more of a minor feeling. You can interactively replace the major chord changes with minor ones in the Controlling Segment, while continuing the overall flow of the primary Segment. When playback of the Secondary Segment stops, the primary Segment reverts to its original state.
To audition a Segment as a Controlling Secondary Segment:
Yes, you can change the tempo setting either by putting in a Tempo Track and setting it to what you want or, interactively, by setting up a Controlling Segment that has a Tempo Track with your new tempo.
Producer supports multiple time signatures in Segments (just add a Time Signature Track), but not within a single Style. You must use different Styles to change time signatures, and if a Segment has a Style Track, the Style Track owns the time signature.
DirectMusic Producer only allows a single “master” time signature for each Style. We are looking into changing this for a future version of DirectX—but there are a number of issues, especially if you have to display a particular Groove Level that might be in different time signatures, depending on which Pattern is selected.
The workarounds for now include making the time signature 14/4, or just creating a different Style that is 2/4 used for that one bar in the Segment.
The Band track in a Segment does not contain references to Bands in a Style—it actually contains Bands themselves. When you drag a Band into a Segment, the Band actually goes into the Segment. If you change the original Band in the Style, you will have to drag it back into the Segment. Alternatively, however, you can double-click on the Band in the Band Track of the Segment and make your changes there. You could then drag it back to the Style.
Unfortunately, Bands are designed to be interactive, but not variable. This could be done with just a little bit of code.
However, you could create this effect by duplicating (and linking) all of your parts on separate PChannels that will pick up the second Band (for example, PChannels 1-6 are Band 1 and PChannels 7-12 are Band 2). Then assign them to unique variations and lock them (for example, when variation 1 plays flute, variation 1 in the tuba part is silent, and the reverse). You could set it up so that either PChannels 1-6 play, or PChannels 7-12 (or combinations in between if you wanted).
It could be done with Motifs, but it would require the programmer to make sure they are cued on 4-bar boundaries. It would be better accomplished by creating a second Style Track (Track Group 2) and layering a 12-bar Pattern on top of the 4-bar Patterns in the Style Track in Track Group 1. That way, when one Track Group changed grooves, the other would as well, so the melodies would stay in sync with the appropriate underlying Pattern.
Unfortunately there’s no way to do exactly what you’re describing. However, by copying the Pattern to a second Pattern, you could assign a Groove Level 1 to a Pattern where all of the variations are disabled except for 1, then assign the other Pattern to Groove Level 2 where all variations would play randomly. You could also set all of the parts in the first Pattern to “play in numerical order,” so they’d all play variation 1, then switch.
Whenever there is an element to be reused as-is in Styles (that is, a drum groove, comping figure, and so forth), don’t copy and paste, but create a part link. It’s very economical because it is a pointer to the same data.
With DirectMusic it’s not an either/or proposition. Where the performance cannot be encapsulated into single samples (such as a sax solo), go ahead and incorporate “mini performances” into the mix.
This is a fairly easy way to get into interactivity because the underlying music is a MIDI file or an audio file. Secondary Segments layered on top will be shaped harmonically by the Chord Track.
Every bar might be a different groove, so the music at bar 5 would have a fill of groove 5 to match the melody or intensity of the current music.
This can be used to bring in/out certain elements that match the drama of the music. It can also be used to smoothly fade from one texture to another while keeping the form intact.
The new multiple Secondary Segment Toolbar in DirectX 7.0 is very handy for this!
So the drums might have three variations, the bass five, the sitar eight. Different juxtapositions keep the resulting music fresh.