This section describes Microsoft® DirectAnimation® media behaviors and how to use them. Behaviors that are rendered for presentation to the user are called media behaviors and include two-dimensional (2-D) image behaviors (the DAImage class and Java ImageBvr class), three-dimensional (3-D) geometry behaviors (the DAGeometry class and Java GeometryBvr class), and sound behaviors (the DASound class and Java SoundBvr class). Textures are also covered in this section since texturing often involves an image textured onto a geometry.
This section contains the following topics.
DirectAnimation can construct or import 2-D images (DAImage objects or Java ImageBvr objects), and can perform sequences of operations on any source of animated images in any order. This section discusses the following topics.
Empty Images
DirectAnimation has two types of empty images. An EmptyImage is the null image. It is transparent and undetectable. A DetectableEmptyImage has infinite extent (that is, it extends throughout the entire view) and is transparent and detectable throughout its extent. The DetectableEmptyImage is useful for specifying hot spots in certain images. An image is said to be tangible at a certain point if it is either detectable or nontransparent at that point. Otherwise, the image at that point is said to be nontangible.
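For example, a minimal JScript sketch (the file name comes from the import example later in this section, and the Overlay call is used here only for illustration) makes an entire view respond to picking by layering a picture over DetectableEmptyImage:

// Overlay a picture on a detectable empty image so that picking
// succeeds anywhere in the view, not just over the picture itself.
picture = m.ImportImage("bird.gif");
hotImage = m.Overlay(picture, m.DetectableEmptyImage);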
Constructing Images
The following approaches can be used to construct 2-D images:
This section includes the following:
Importing Images
A basic way to create images is to import bitmaps represented in commonly used formats. This provides good leverage from the many image-authoring tools that currently exist.
The following JScript code shows how to import an image:
mediaBase = "..\\..\\..\\..\\media\\";
imgBase = mediaBase + "image\\";
myImage = m.ImportImage(imgBase + "bird.gif");
The following Java code shows how to import an image:
// Create a URL base.
URL imageBase = buildURL(getImportBase(), "file:/c:/DxM/Media/image");
// Create an image behavior by importing an image file.
ImageBvr img = importImage(buildURL(imageBase, "picture.jpg"));
DirectAnimation supports the importation of bitmaps in the .jpg, .gif, and .bmp formats. The string reference to the image can be any valid URL. Importing an image has the following results:
The relationship between pixels and meters changes depending on the display resolution. While one display might have a ratio of 18,000 pixels per meter, another display might have a ratio of 22,000 per meter. This means that the bounding box returned for an imported image varies depending on the specific display resolution.
There are ways to work around the device dependency of imported bitmaps. They involve scaling the bitmap in a device-dependent way so that the resultant image size is of some fixed, desired value across the different device resolutions. See Meter-Based Space for a description.
Creating Images
You can create several kinds of 2-D images: solid color, lines, text, geometric images, gradient filled images, and montages. For a discussion of geometric images, see Using Geometries.
This section discusses the following:
One of the simplest images to construct is a solid color image, as shown in the following example.
im = m.SolidColorImage(m.Blue);
This results in an infinite-extent image with the DAColor behavior Blue. Typically, such an image is cropped or clipped before being used. Solid color images are commonly used to obtain solid-colored polygons or as viewport backgrounds, where the color can be time-varying.
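For example, a minimal JScript sketch (the coordinates are illustrative; DAImage coordinates are in meters) crops the blue image from the preceding example to a 2-centimeter square centered on the origin:

// Keep only a 2 cm by 2 cm square centered on the origin.
panel = im.Crop(m.Point2(-0.01, -0.01), m.Point2(0.01, 0.01));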
Hatch Filled Boxes
You can also construct images by creating hatch marks in boxes, as shown in the following example with HatchCross:
im = m.HatchCross(m.Green, 0.2);
Text
You can construct images by rendering text, as shown in the following example with StringImageAnim and StringImage:
myText = "This is the default font style.";
defaultFS = m.DefaultFont;
myTextImage = m.StringImage(myText, defaultFS);
The text, which can have attributes such as font type, color, and style (bold or italic), is rendered into an image with its extent centered on the origin. By default, the text's color is black, its font family is Times-Roman, its size is 12 points, and it is neither bold nor italic.
You can set the text attributes and where the text is displayed. The following code sample renders bold, red text to the screen centered on position posX, posY.
im = m.StringImage("Hello, World!", m.Font("Arial", 10, m.Red).Bold);
im2 = im.Transform(m.Translate2(posX, posY));
Gradient Fill Images
Gradient fills provide a very compact form of color images, where regions are filled with smoothly interpolated colors between specified colors at given vertices. For example, you can construct an image from a gradient-filled square, as shown in the following example with GradientSquare:
im = m.GradientSquare(m.Red, m.Green, m.Blue, m.Yellow);
You can obtain the most general form of a gradient fill through the DAStatics function GradientPolygon(pts, colors), which specifies an array of points and a corresponding array of colors. The arrays specify a triangular mesh with one color per vertex. The resultant image is based on a linear interpolation of the colors across each triangle in the RGB color space. Although interpolation in other color spaces might be desired, DirectAnimation maps this CPU-intensive operation to hardware, which currently performs the interpolation only in RGB.
Other more specialized forms of gradient fills, which are essentially shortcuts based on the general form described in this section, include the following DAStatics methods:
Rendering Geometry into an Image
You can construct 2-D images by rendering 3-D geometries into two dimensions, as shown in the following example.
Geo = m.ImportGeometry("cube.x"); Camera = m.PerspectiveCamera(projPointNum, maxZNum); Img = Geo.Render(Camera);
Note that the Camera parameters must be calculated from the size of the geometry and the display viewport.
Montages
An image can be the rendering of a montage (DAMontage) object, which consists of a group of DAImage objects, each with a number that indicates its z-order in the list. The ImageMontage(Image, z-order) and ImageMontageAnim(Image, z-order) functions can be used to add images to a montage. The z-order determines how the images overlay each other. Images with larger z-values are always on top of (in front of) images with smaller z-values. If the z-order number is animated (a DANumber object), the order in which the images are layered changes with time. Montages are ideal for situations where the layering of the different images is dynamic.
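For example, a minimal JScript sketch (imgA and imgB are assumed to be existing DAImage objects, and the UnionMontage and Render calls reflect the montage functions described in the reference) layers two images and flattens the result back into a single image:

layerA = m.ImageMontage(imgA, 0);          // farther back
layerB = m.ImageMontage(imgB, 1);          // in front of layerA
combined = m.UnionMontage(layerA, layerB); // combine the two montages
finalImage = combined.Render();            // flatten the montage into one image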
Operations on Images
DirectAnimation operations can generate new and more complex images from given images. Also, some of these operations generate useful data, such as the bounding-box around a given image.
The operations for composing animated 2-D images include the following:
You can perform any sequence of the above operations on any source of animated images in any order.
Transformations
You can obtain a transformed version of an image by applying the Transform(xf) method, where xf is of type DATransform2 and can be an arbitrary 2-D affine transform. The DATransform2 type has operations for constructing scales, translations, rotations, and shears from basic parameters such as DANumber objects and DAVector2 objects. You can also construct transforms based on 2-by-3 matrices. Additional operations can compose transforms, obtain their inverses, and check whether they are singular.
You can use combinations of time-varying shears, scales, and translations to construct animation from a single image, for example, to move a sprite over a background image.
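For example, a minimal JScript sketch (sprite and background are assumed to be existing DAImage objects) scales a sprite to half size and slides it back and forth across a background over time:

// Compose a time-varying translation with a constant half-size scale.
xf = m.Compose2(m.Translate2(m.Sin(m.LocalTime), 0), m.Scale2(0.5, 0.5));
movingSprite = sprite.Transform(xf);
// Overlay the moving sprite on the background image.
scene = m.Overlay(movingSprite, background);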
In DirectAnimation, you can construct or import 3-D geometry (DAGeometry objects), and perform sequences of operations on any source of animated geometry in any order.
This section discusses the following topics:
Constructing Geometries
The simplest way to construct a geometry behavior is to import a geometry from a .X formatted file by using ImportGeometryAsync or ImportGeometry(URL). You can also augment geometry with lights, including ambient, directional, point, and spot lights. You can attach a sound source to a geometry, which is used to give 3-D spatial characteristics to a sound by embedding it in other geometric models.
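For example, a minimal JScript sketch (the file name reuses the sphere geometry from the texturing example later in this section) imports a geometry and adds a red point light positioned above it:

Geo = m.ImportGeometry("sphere.x");
// Create a red point light and move it 5 meters up the y-axis.
Light = m.PointLight.LightColor(m.Red).Transform(m.Translate3(0, 5, 0));
// Combine the model and the light into a single geometry.
LitGeo = m.UnionGeometry(Geo, Light);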
EmptyGeometry is a constant DAGeometry object that contains no geometry.
You can also construct your own geometries with the scripting function TriMesh or the COM method TriMeshEx. With these, you can create a set of triangle meshes that build almost any geometry. See the reference pages for TriMesh and TriMeshEx for more information and examples.
DAGeometry objects (and Java GeometryBvr objects) can be:
All of these attributes can vary with time.
Operations on Geometries
The operations for composing 3-D geometric animations include:
You can perform any sequence of the preceding operations on any source of animated geometry in any order.
DirectAnimation uses a camera to project a 3-D model (a DAGeometry object) into a 2-D image (a DAImage object) with the DAGeometry Render function.
Using Direct3D as the underlying 3-D rendering mechanism, the Render operation projects a geometry through a camera.
DirectAnimation supports two cameras: the PerspectiveCameraAnim (and PerspectiveCamera) type for perspective projection, and the ParallelCameraAnim (and ParallelCamera) type for parallel projection.
These cameras consist of the following three elements:
Rendering involves projecting the geometry located on the side of the near plane opposite the projection point into an image on the image plane. The resultant image extends infinitely. If you are interested in only a section of the image, you must use 2-D operations such as Crop or Clip on the image to extract that section.
The following figure shows the relationship between the projection point, the near clip plane, and the image plane.
DirectAnimation cameras have an origin at (0,0,0) that can be transformed. Such transformations affect the placement of the projection point, the near clip plane, and the image plane. You can use translation and rotation for positioning the camera relative to the geometric model being projected.
You can use the x- and y-scales to zoom in or out in relation to the rendered model. Increasing the XY scale results in a zoom out (the rendered image shrinks), while decreasing the XY scale results in a zoom in (the rendered image grows). Control the perspective effect by changing the distance of the projection point along the positive z-axis in relation to the origin (that is, the image plane). The closer the projection point is to the projected object, the more pronounced the perspective effect becomes.
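For example, a minimal JScript sketch (reusing the Geo geometry from the rendering example earlier, with placeholder camera parameters) positions the camera farther from the model and then crops the infinite rendered image to the region of interest:

// The camera parameter values here are placeholders.
Cam = m.PerspectiveCamera(0.1, 0.01);
// Move the entire camera 5 meters along the positive z-axis, away from the model.
FarCam = Cam.Transform(m.Translate3(0, 0, 5));
// Render, then keep only a 10 cm square of the infinite result.
Img = Geo.Render(FarCam).Crop(m.Point2(-0.05, -0.05), m.Point2(0.05, 0.05));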
Just as you can project a 3-D object (a DAGeometry object) as a 2-D image (a DAImage object) through a camera, you can apply a 2-D image as a texture to a 3-D object through texture mapping.
The DAGeometry TextureImage function takes an image and an existing geometry, and returns a geometry textured with that image. Because images and geometries are such rich data types, this simple texture method is extremely powerful. The following example shows how to apply a texture to an imported sphere.
Geo = m.ImportGeometry("sphere.x"); Marble = m.ImportImage("image/marble.jpg")); MarbledSphere = Geo.TextureImage(Marble);
Often a geometry is constructed by importing an existing geometry file. These files usually have 2-D texture coordinates associated with the vertices, generally ranging from (0,0) to (1,1). When these geometries are texture-mapped with images, the texture coordinates are mapped to the identical image coordinates.
Thus, texture map coordinates of (0.3, 0.7) in the geometry map directly to the (0.3, 0.7) point in texture image coordinate space.
The correspondence between texture coordinate systems and image coordinate systems is simple, but also powerful and flexible. Consider the task of making an image wiggle along the first dimension as a texture on a cube. You can achieve this by texture mapping a horizontally wiggling image onto a static cube, as shown in the following example.
WiggleTexture = OrigTexture.Transform(m.Translate2(m.Sin(m.LocalTime), 0));
NewCube = OldCube.TextureImage(WiggleTexture);
If you want to infinitely tile a small texture onto a geometry, you can create a tiled image using the DAImage Tile function, then use that resultant image as a texture.
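For example, a minimal JScript sketch (reusing the marble image and sphere geometry from the earlier example, and assuming Tile takes no parameters) tiles the image and applies it as a texture:

Marble = m.ImportImage("image/marble.jpg");
// Tile the image infinitely, then texture the geometry with the tiled image.
TiledMarble = Marble.Tile();
TiledSphere = Geo.TextureImage(TiledMarble);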
The DASound object and the Java SoundBvr behavior class implement sound in DirectAnimation. You can import sound from many audio formats, including WAV, MIDI, and MP2 files, or synthesize sound using built-in synthesizer sound sources. You can mix different sounds even if they originated from files of different audio formats. You can also spatialize sound by embedding sounds in 3-D objects and rendering them with a microphone (DAMicrophone or MicrophoneBvr).
Currently, up to two channels are supported for sound in DirectAnimation. Sound also has infinite resolution, and each sound wave is continuous, which means that in the abstract model there are no discrete sample values.
This section gives an overview of the following topics:
Constructing Sound
You can import sound from a file in the WAV, MIDI, and MP2 formats with the DAStatics functions ImportSound and ImportSoundAsync, or with the Java method importSound.
For example, to import and play a sound in JScript, you can use the following code:
m = DAControl.PixelLibrary;
mySound = m.ImportSound("file://c:/dxmedia/media/sound/clock1.mp2").Sound;
DAControl.Sound = mySound.Loop();
The ImportSound function returns a DAImportationResult object. To obtain a DASound object, you access the Sound property of the DAImportationResult object by appending ".Sound", as in the preceding example.
If you use the Java SoundBvr method importSound(URL, length), the length of the imported sound, in seconds, is returned as the length parameter. For example:
URL soundBase = buildURL(getImportBase(), "file:/c:/dxmedia/media/sound/");
NumberBvr length[] = new NumberBvr[1];
SoundBvr snd = importSound(buildURL(soundBase, "earth.wav"), length);
You can then use the length parameter as follows:
SoundBvr twoSounds = (SoundBvr)until(snd, timer(length[0]), anotherSound);
If you are not interested in the length, you can pass null for the length parameter.
DirectAnimation currently has an internal standard audio format: a dynamic range of 16 bits and a sampling rate of 22,050 Hz. For best results, import audio files that already use this format. DirectAnimation converts all other formats to its internal standard, which can degrade quality.
You can also synthesize sound. For example, you can mix multiple sine waves, each with different attributes, to produce many diverse sounds. The SinSynth property in DAStatics (sinSynth in Java, a SoundBvr) represents a constant tone; the Silence property (silence in Java) represents complete silence. By default, SinSynth produces a 1 Hz tone, which is below the audible range, so you need to increase the frequency. For example, the following JScript code produces an audible 440 Hz tone:
sndA4 = m.SinSynth.Rate(440);
The following Java code also produces a tone.
SoundBvr sndA4 = sinSynth.rate(toBvr(440));
Operations on Sound
You can obtain new sounds from other sounds by modifying sound parameters or by mixing. In DirectAnimation, the DASound class functions Loop, GainAnim and Gain, RateAnim and Rate, PhaseAnim and Phase, PanAnim and Pan (and the Java SoundBvr methods loop, gain, rate, phase, and pan), and the DAStatics Mix function allow creation of complex-sounding animations with few operations.
The Loop() function takes a sound and repeats it continuously. If you have a sound whose composition changes with an event or user interaction and you tell the sound to loop, it will loop on its individual parts, not on the composition. For example, assume you have a sound that is sound1 until the left mouse button is pressed, and then becomes sound2, and you loop this composite sound, as shown in the following JScript code:
snd = m.Until(sound1, m.LeftButtonDown, sound2);
loopsnd = snd.Loop();
The resulting sound is a continuous loop of sound1 until the left mouse button is pressed, and then becomes a continuous loop of sound2. It does not loop the composite behavior; that is, it does not play sound1 until the button is pressed, play sound2 once, and then return to sound1 until the button is pressed again.
The Gain (volume) function scales the amplitude of the sound wave. The Rate function changes the rate of sample playback; for digital audio and synthesized sounds, this scales the frequency and changes the pitch. For audio in MIDI format, this changes the tempo. The GainAnim and RateAnim variants take animated (DANumber) parameters.
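For example, a minimal JScript sketch (the file name is a placeholder) plays a sound at double speed and half volume:

Engine = m.ImportSound("engine.wav").Sound;
// Rate(2) doubles the playback rate; Gain(0.5) halves the volume.
DAControl.Sound = Engine.Rate(2).Gain(0.5).Loop();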
The PanAnim and Pan (balance) functions vary the gain in the left and right channels of a stereo sound. For example, a panning sample could draw a red circle that moves in a straight line and tie the pan parameter to the position of the moving ball, so that the sound follows the circle across the view.
The PhaseAnim and Phase functions shift the point in the sound cycle where the sound starts (all sound is cyclic except pure white noise). For positive phase values, the sound starts later in the sound cycle. For negative values, the sound starts earlier in the sound cycle. For example, assume you have a sound that contains a person counting "one, two, three, four, five", who says "one" at time zero and takes two seconds to say each number. A positive phase shift moves the timeline forward, so with a positive phase shift of 4, the sound would start with the person saying "three." A negative phase shift moves the timeline back, so with a negative phase shift of 2, you would hear silence for two seconds, then the person saying "one." See the following diagram for an illustration.
A phase-shifting sample could let you enter different phase values dynamically as a sound plays back, optionally looping the sound or mixing the phase-shifted sound with the original, unshifted sound. Mixing the two sometimes makes the effect of the phase shift easier to hear.
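Expressed as a minimal JScript sketch of the counting scenario above (the file name is a placeholder, and the phase values are in seconds, as in the description):

Counting = m.ImportSound("counting.wav").Sound;
SkipAhead = Counting.Phase(4);    // starts with the person saying "three"
Delayed = Counting.Phase(-2);     // two seconds of silence, then "one"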
You can perform any sequence of the preceding operations (phase, pan, rate, and so on) on any source of animated sound in any order.
You can mimic sound effects by using these DASound functions and SoundBvr methods. For example, you can simulate the changing distance between a moving sound source and a listener by varying the GainAnim parameter. However, you can achieve much more realistic effects by embedding sounds in a geometry, which also relieves you of the overhead of tracking relative positions and modifying sounds yourself. For example, the following JScript code embeds the sound Snd in the geometry Geo:
// Get the sound and the geometry.
Snd = m.ImportSound("bird.wav").Sound;
Geo = m.ImportGeometry("sphere.x");
// Embed the sound in a blank geometry.
GeoSound = m.SoundSource(Snd);
// Join the sound with the desired geometry.
GeoWithSound = m.UnionGeometry(Geo, GeoSound);
// Create a microphone at the origin.
Mic = m.DefaultMicrophone;
// Move the microphone somewhere else.
Mic2 = Mic.Transform(m.Translate3(5, 4, 5));
// Render the geometric sound.
AmbSound = GeoWithSound.RenderSound(Mic2);
Pan
The PanAnim and Pan functions and the Java pan method take a sound and move the energy between the left and right channels according to a pan parameter. The pan parameter varies between -1 and +1. In the stereo case, negative values of the pan parameter increase the energy sent to the left channel and decrease the energy sent to the right channel. Positive values of the pan parameter increase the energy sent to the right channel and decrease the energy sent to the left channel. A value of 0 balances the energy between the two channels.
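For example, a minimal JScript sketch (reusing the Snd sound imported in the earlier spatialization example) sweeps a looping sound back and forth between the channels over time:

// Sin(LocalTime) varies smoothly between -1 and +1, sweeping the sound left and right.
Swept = Snd.PanAnim(m.Sin(m.LocalTime)).Loop();
DAControl.Sound = Swept;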
Gain
When authoring sounds for DirectAnimation, you should use the full 16-bit dynamic range to provide the best resolution, and then use the Gain function or the Java gain method to scale the volume of the sounds in your animation.
For example, the following JScript code varies the gain with the sine of LocalTime:
DAControl.Sound = mySound.Loop().GainAnim(m.Sin(m.LocalTime));
DirectAnimation assumes all sounds have been normalized to use the full 16-bit dynamic range. Consequently, all sounds seem equally loud. For example, a jet plane and a whisper will sound as though they are the same volume. Sounds can be prescaled with the Gain function. Because there is no way to change the actual gain of a personal computer's amplifier, sounds can only be attenuated. Attenuating a sound is the same as multiplying it by a value between 0 and 1.
In the following Java example, two sounds, which are heard in the left and right channels respectively, have equal volume even though their gains are 1 and 5, because the gain is clamped at 1:
URL soundBase = buildURL(getImportBase(), "file:/c:/DxM/Media/sound/");
SoundBvr snd1 = importSound(buildURL(soundBase, "seagull.wav"), null);
SoundBvr snd2 = importSound(buildURL(soundBase, "surf.wav"), null);
setSound(mix(snd1.gain(toBvr(1)).pan(toBvr(-1)),
             snd2.gain(toBvr(5)).pan(toBvr(1))));
In the following example, the sound in the left channel is louder than the sound in the right channel because their gains are 1 and 0.2:
setSound(mix(snd1.gain(toBvr(1)).pan(toBvr(-1)),
             snd2.gain(toBvr(0.2)).pan(toBvr(1))));
When sounds are embedded in a geometry and spatialized (using the Java GeometryBvr method soundSource and a MicrophoneBvr), gains greater than 1 still do not produce louder sounds, but they do produce sounds that can be heard from a greater distance, because they increase the size of the spatial region in which the sound plays at maximum volume (for example, a large sphere of maximum sound instead of a small one).
In particular, loud sounds such as a jet engine should be scaled by a gain greater than 1. Unlike sounds in the real world, a spatialized sound becomes louder as you approach its source only until the effective gain reaches 1. Once this limit is reached, there is no more amplification, and the sound's volume remains constant regardless of how close you get to the source of the sound.
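For example, a minimal JScript sketch (the file name is a placeholder; Geo and the microphone usage follow the spatialization example earlier in this section) prescales a loud source so it remains audible from farther away:

// A gain of 4 does not make the sound louder, but it enlarges the region of maximum volume.
Jet = m.ImportSound("jet.wav").Sound.Gain(4);
JetGeo = m.UnionGeometry(Geo, m.SoundSource(Jet));
AmbJet = JetGeo.RenderSound(m.DefaultMicrophone);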
The following diagram shows that, for gains larger than 1, the volume is clamped at 1.
Mix
The DAStatics Mix function and the Java mix method merge two sound waves into one by adding the corresponding waves. You can mix any sounds, even if they originated from different audio formats. If mixing produces an overflow value, the result is clamped.
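For example, a minimal JScript sketch (reusing the surf.wav file name from the Java example above) mixes a quiet synthesized tone with an imported sound:

Waves = m.ImportSound("surf.wav").Sound;
Tone = m.SinSynth.Rate(440).Gain(0.3);
DAControl.Sound = m.Mix(Tone, Waves);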
Sound Layering Technique
The operations that DirectAnimation provides on DASound objects (and Java SoundBvr objects) enable the generation of dynamic, high-quality synthetic sounds from basic sound seeds by parameterization and layering (or mixing). These sounds can always be fresh and responsive to the action in the animation.
Traditionally, people have authored loops of compressed sounds for ambient noise. This reduces the size of the audio needed but also produces boring sounds, because a loop soon starts to sound repetitive and unrealistic. With DirectAnimation, you can produce much more realistic sounds, for example, a wave sound that amplifies only when the wave actually breaks on the shore in your animation, a seagull cry that follows the motion of the seagull across the viewport, and wind levels that can be controlled by user interaction (see the Lighthouse sample in DXMedia\Samples\Multimedia\DAnim\Java\Showcase\Lighthouse.html).
You create synthetic sounds by modifying seed sounds with parameters that are random or that are related to your animation. These parameters are usually time-varying. You then mix the results together in a way suitable to your animation. Thus, sound can be generated as flexibly and synthetically as 3-D models are: simple parts are transformed, colored, or textured and then combined into more interesting models.
For example, the SinSynth function produces a sine-wave based sound, which can be given a variety of attributes to create a diverse set of synthetically generated sound waves. The Lighthouse sample in DXMedia\Samples\Multimedia\DAnim\Java\Showcase\Lighthouse.html generates an ocean ambient sound modified by the weather condition parameter. The weather condition parameter is controlled by the user with the slider controls. The Lighthouse sample demonstrates two basic techniques for creating sound. One is the generation of parameterized cyclic sounds and the other is the generation of random periodic sounds (the latter has a silent period between the successive occurrences).
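A minimal JScript sketch of the layering idea (the file names reuse those from the Java gain example, and windLevel is assumed to be an animated DANumber driven by user input) might look like this:

// An ambient base loop plus an accent layer whose gain tracks the weather parameter.
Base = m.ImportSound("surf.wav").Sound.Loop().Gain(0.6);
Gulls = m.ImportSound("seagull.wav").Sound.Loop().GainAnim(windLevel);
DAControl.Sound = m.Mix(Base, Gulls);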