This section covers a variety of concepts and techniques useful to many types of Microsoft® DirectAnimation® authors: Web site builders, script writers, and application developers. The following topics are discussed in this section.
This section introduces the basic elements of the DirectAnimation model, including media expressions, and suggests analogies to number expressions and spreadsheets.
Media Expressions
DirectAnimation can be explained in terms of the familiar concept of expressions on numbers in programming languages (Fortran, C++, Visual Basic, Java, and so on). Such expressions are based on operations that construct new numbers from existing numbers. For example, (5 + 7) * 4 + 3 constructs the number 51.
Before expressions were introduced into programming languages, it took the following tedious steps to evaluate this expression:
Load 5 -> x
Load 7 -> y
Add x,y -> x
Load 4 -> y
Mult x,y -> x
Load 3 -> y
Add x,y -> x
DirectAnimation uses an expression-based approach to construct more complex media values based on simpler media values and operations working on these values. For example, the expression
frontImage*rotation + backImage
evaluates to a rotating image overlaying a second image, just as the numeric expression above evaluates to 51.
In algebra, operations and numbers combine to return a result according to certain rules. So too, in DirectAnimation, do operations on animation and media elements combine to produce interesting interactive, animated and mixed-media content. You can apply many of the concepts of algebra to constructing interesting animation in DirectAnimation.
DirectAnimation has a set of abstract data types; for example, a number behavior (NumberBvr) or a color behavior (ColorBvr). Abstract types are described in terms of the high-level operations they support without exposing implementation or representation details. Since it is typical to have thousands of behaviors in an animation, thinking of behaviors as operations on abstract data types can help make the complexity manageable. Operations create composite behaviors; for example, angle = sin(NumberBvr theta), or redGeo = geo.diffuseColor(ColorBvr red), which applies the color constant red to a geometry.
As in algebra, where t stands for time in an expression such as angle = sin(t * 2 * pi / period), behaviors in DirectAnimation can be time-varying. DirectAnimation generalizes time-varying values beyond numbers. For example, you can construct a time-varying image as follows:
ImageBvr image3 = overlay(image1.transform(rotate(localTime)), image2)
This expression sets image3 to be image1 rotating at one radian per second and overlaying image2.
Behaviors Are Retained
Time-varying behaviors in DirectAnimation are retained, in the sense that executing a behavior expression (such as the one in the previous section) constructs a data structure that is retained after the execution concludes. This is unlike traditional number expressions in programming languages where the expression evaluates conclusively into a result at the time of execution. In this respect, DirectAnimation expressions are more like cells in a spreadsheet; they don't execute just once, but are retained, and their value changes as their parameters (other behaviors) change.
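The spreadsheet analogy can be sketched in a few lines of plain JavaScript. This is illustrative only, not the DirectAnimation API: a behavior is modeled as a retained node that can be sampled over and over, so its value always reflects the current value of its inputs.

```javascript
// Illustrative sketch, not DirectAnimation: a behavior as a retained node,
// like a spreadsheet cell whose value tracks its inputs.
const localTime = { sample: t => t };   // time itself, one unit per second

// lift turns an ordinary function into an operation on behaviors;
// the result is a retained node, not a one-time result.
const lift = (fn, ...inputs) =>
  ({ sample: t => fn(...inputs.map(b => b.sample(t))) });

const angle = lift(Math.sin, localTime);   // angle = sin(time), retained

// Sampling the same node at different times yields different values.
const a0 = angle.sample(0);            // sin(0)
const a1 = angle.sample(Math.PI / 2);  // sin(pi/2)
```

The point of the sketch is that nothing is computed "once and for all" when the expression is built; evaluation is deferred to each sample, which is what makes the structure behave like a spreadsheet cell.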
Using behavior expressions relieves the developer of significant maintenance work. For example, assume you have defined a path behavior (Path2Bvr) called line and a transformation behavior (Transform2Bvr) called wiggle that is a sinusoidal translation. You can then combine them into a time-varying image behavior (ImageBvr) called wigglingLine. You can then import an image (perhaps a JPEG from a file), call it background, and overlay wigglingLine on background. Call the resulting time-varying image myImage.
Now, assume you have imported or synthesized a three-dimensional (3-D) cube and assigned it a path to traverse or a rate at which to expand. You can texture the animated cube with myImage and get a cube that is both animated and has an animated texture. Since this behavior is retained, once you have joined the texture to the cube the texturing operation keeps checking the time-varying value of its wiggling line; it calculates the result and applies the new texture to the cube. The programmer constructs the relation between the texture and the cube and then doesn't need to intervene any further. If, on the other hand, the line is fixed and not wiggling, DirectAnimation is optimized not to calculate texture values over and over and thus saves CPU cycles.
Reactive Behaviors
There are two key concepts in DirectAnimation: continuous behaviors, and events. These are combined into reactive behaviors. A reactive behavior varies continuously with time and reacts to specific events by switching to new behaviors. For example, a ball bouncing (moving on its path) in a room is a continuous behavior. The ball colliding with the wall or floor is an event.
Assume you have imported two geometries (from two .X files, for instance) that are 3-D balls with radii r1 and r2 separated by a time-varying distance d. You can define a collision event as:
Collide = Predicate(leq(d, add(r1, r2)))
That is, when the distance between the two balls is less than or equal to the sum of their radii, they collide.
This event can be used as follows:
bvr1 = until (initial_behavior, Collide, bounce_behavior)
The until operation constructs a behavior which starts as an initial behavior (such as "parabolic path") until the event occurs, then switches to a bounce behavior. The behavior switched to after the event can be constant (the ball stops) or can be calculated from parameters at the time of the event (a new path is calculated based on the ball's position and the paths the two balls were traveling at the time of the collision). Thus, until creates a new reactive behavior (bvr1) that is the first behavior until the event occurs, and then becomes the second behavior.
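The semantics of until can be sketched in plain JavaScript. This is illustrative only: the real until switches when an event is detected, whereas the sketch below uses a fixed switch time for simplicity.

```javascript
// Illustrative sketch of "until" semantics (not the DirectAnimation API):
// the reactive behavior follows its initial behavior until the event fires,
// then becomes the second behavior. Here the event is a fixed time.
const until = (initial, eventTime, after) =>
  t => (t < eventTime ? initial(t) : after(t));

// A ball on a parabolic path until a collision at t = 3 seconds,
// after which it holds its position (the "ball stops" case).
const parabola = t => 10 * t - 4.9 * t * t;
const stopped  = _t => parabola(3);   // height frozen at the moment of the event
const bvr1 = until(parabola, 3, stopped);
```

Before the event, bvr1 tracks the parabolic path; after it, the same expression yields the frozen value, with no further intervention from the programmer.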
Similarly, it is possible to define an event based on the user selecting one of the balls by pointing and clicking:
Grab = andEvent(Pick, leftButtonDown)
This event can be used as follows:
bvr2 = until(parabolic_path_behavior, Grab, follow_mouse_position_behavior)
Or you can make the ball explode on either event as follows:
Explosion = orEvent(Collide, Grab)
Explosion_Bvr = until(parabolic_path_behavior, Explosion, flying_ball_fragments_behavior)
DirectAnimation supports both arrays and tuples. An array is a homogeneous list of behaviors (behaviors that are all of the same type, such as all colors or all images). A tuple is a heterogeneous list of behaviors (behaviors that can be different, such as a color behavior and an image behavior). Tuples are especially useful for grouping behaviors that all switch at the same event. For example:
Synch_tuple = until(tuple1, event, tuple2)
This helps to synchronize the switching of different behaviors at one event.
Interactive Versus Reactive Behaviors
Reactivity is the notion of switching from one behavior to another based on some event. Interactivity is the notion of user input and how it influences the progression of the animation. Events can be based on user input (such as a button press) or they can be synthetic (based on computations, such as the collision event discussed previously). User input can be in the form of an event or it can be in the form of a continuous behavior such as moving the mouse cursor. For example, a user can drag the ball with a mouse to give it continuous motion, or click on the ball and stop it with an event. Both plain behaviors and events can be either interactive or computed (synthetically generated).
Media Graphs
DirectAnimation supports operations on media types that result in expressions producing composite media values. These expressions build retained structures called media graphs which operate on time-varying entities. Media graphs are akin to scene graphs in 3-D graphics systems, but they differ in that they are mixed-media based and contain combinations of behaviors, events, and user input. User input (as both continuous behaviors and events) feeds into constructed media graphs, which in turn produce visual and audio media behaviors that are displayed on the user's system.
DirectAnimation supports the following media file types.
DirectAnimation supports all the audio and video file types supported in Microsoft® DirectShow®. If the user has a DirectShow filter installed for a particular format, such as DV or line 21 (for closed captions), then that format will also be supported.
The DirectAnimation scripting interfaces make it easy to use DirectAnimation from an HTML environment. The DirectAnimation library provides a set of methods for animating multimedia elements such as images, sprites, movies, and sound, in addition to 2-D and 3-D objects. It includes support for animation paths, rotations, and other transformations. A timeline sequencing feature makes it possible to build lifelike animations using simple components as modular building blocks.
The library works in cooperation with the DirectAnimation integrated-media control (DAViewerControl). Library calls are made through a scripting language such as JScript or VBScript to construct an animation and play it in the dynamic HTML compositing space.
Typically you declare your DAViewerControl object as shown in the following JScript sample. The object's name can be anything. Here it is DAControl. The CLSID must be as shown:
<DIV ID=controlDiv>
<OBJECT ID="DAControl"
    STYLE="position:absolute; left:10; top:10; width:500; height:450"
    CLASSID="CLSID:B6FFC24C-7E13-11D0-9B47-00C04FC2F51D">
</OBJECT>
</DIV>
<SCRIPT LANGUAGE="JScript">
<!--
m = DAControl.PixelLibrary;
...
//-->
</SCRIPT>
The variable "m" is shorthand for the DAStatics Library. All functions and properties contained in that library must be preceded by an "m" so the interpreter can find them. For example:
myImage = m.SolidColorImage(m.Red);
In this example, SolidColorImage is a DAStatics function that creates a DAImage object, and Red is a DAStatics property that defines the color red.
Once you have constructed the renderable media type (an image or sound), you tell the DAViewerControl object what to display, as shown in the following JScript code:
DAControl.Image = myImage;
DAControl.Sound = mySound;
Then tell the object to start the model with the Start subroutine, as shown in the following JScript code:
DAControl.Start();
The DAViewerControl has the CLSID B6FFC24C-7E13-11D0-9B47-00C04FC2F51D and is a windowless control, so it can be used with dynamic HTML, and it can be used over or under other objects on the screen. DirectAnimation also provides the more traditional windowed control (class DAViewerControlWindowed) with the CLSID 69AD90EF-1C20-11D1-8801-00C04FC29D46. If you are working in a traditional environment, such as Visual Basic, it is recommended that you use the windowed control.
The DAStatics class (in Java, the com.ms.dxmedia.Statics class) collects all the static functions and constants provided by the other classes and makes them available as static methods on the DAStatics (Java Statics) class. In scripting languages this means all the static functions and constants are available through the control object. For example:
<DIV ID=controlDiv>
<OBJECT ID="DAControl"
    STYLE="position:absolute; left:10; top:10; width:500; height:450"
    CLASSID="CLSID:B6FFC24C-7E13-11D0-9B47-00C04FC2F51D">
</OBJECT>
</DIV>
<SCRIPT LANGUAGE="JScript">
<!--
m = DAControl.PixelLibrary;
...
//-->
</SCRIPT>
After this declaration, each function, property, and constant that is contained in the DAStatics library must be preceded by an "m" so the interpreter can find it. For example:
myImage = m.SolidColorImage(m.Red);
In Java, all of the static methods in the Statics library become available without qualification. For example:
ImageBvr im = solidColorImage(blue);
If you want to write DirectAnimation code outside of a Model class without qualifying your calls to static methods, you can create your own class that extends the DAStatics class. If this is not feasible (because the class already extends another class), then explicit name qualifications are required.
This section discusses the naming conventions used in the COM API, which are exposed to JScript and VBScript users. In COM, and thus in scripting, when two similar methods differ only in the type of at least one of their parameters, then these two methods must have different names. This is in contrast to other languages such as Java and C++, which make it possible to use the same method name in such a case. For this reason, in the Scripting Reference you'll find different variations of essentially the same method, with slightly adapted names. The different suffixes that are used are as follows:
Anim Naming Convention
The functions with Anim in the name, such as Point2Anim, are counterparts of the non-Anim versions, such as Point2. The difference is that the Anim versions take time-varying parameters and can produce animated results. Functions with Anim take animated parameters of type DANumber or DAString, while functions without Anim take doubles, integers, and regular strings. For example, P1 and P2 in the following code are equivalent:
m = DAControl.PixelLibrary;
P1 = m.Point2(34, 100);
P2 = m.Point2Anim(m.DANumber(34), m.DANumber(100));
(Note that DANumber(x) converts the regular number x into a number behavior.)
Given that the second form is longer, what is its value? The longer form becomes necessary when you want to construct a time-varying point. For example:
// Constructs a number that varies between 34 and 50 in 3 seconds.
xNum = m.SlowInSlowOut(34, 50, 3, 0);
// Constructs a point that travels on the Y = 100 line, between
// the two X values in 3 seconds.
P2 = m.Point2Anim(xNum, m.DANumber(100));
P2 is a time-varying point that can be used to build other time-varying entities. For example, it can be used as a parameter to Translate2Point. Note that in cases where there is no non-Anim version of a function, the Anim suffix is omitted even if the function's parameters are time-varying, as is the case with Translate2Point.
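The idea behind the Anim variants can be sketched in plain JavaScript. The helper names below are illustrative, not the real library, and the easing stand-in is a simple linear ramp rather than a true slow-in/slow-out:

```javascript
// Illustrative sketch (not the DirectAnimation API): animated numbers and
// points are modeled as functions of time in seconds.
const num = x => _t => x;                                  // like m.DANumber(x)
const point2Anim = (xBvr, yBvr) => t => ({ x: xBvr(t), y: yBvr(t) });

// Linear stand-in for an eased number: moves from 34 to 50 over 3 seconds.
const xNum = t => 34 + (50 - 34) * Math.min(t / 3, 1);

const P2 = point2Anim(xNum, num(100));   // travels along the y = 100 line
```

Sampling P2 at various times shows why the Anim form matters: a point built from constant numbers never changes, while this one sweeps from (34, 100) to (50, 100).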
Rate Naming Convention
Sometimes to construct time-varying values it is easier to specify the rate of change of the value than to specify the value itself. DirectAnimation provides variations to functions that accept rate-of-change parameters as a convenience. For example:
RotXf = m.Rotate2Rate(Math.PI/3);
This code constructs a time-varying 2-D rotation of PI/3 radians per second, and is equivalent to:
RotXf = m.Rotate2Anim(m.Mul(m.LocalTime, m.DANumber(Math.PI/3)));
Note that Mul multiplies two DANumbers together, and LocalTime is time, which increases by one unit per second.
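The equivalence can be sketched in plain JavaScript (illustrative names only): the Rate variant simply multiplies local time by the constant rate.

```javascript
// Illustrative sketch, not the DirectAnimation API.
const localTime = t => t;                              // one unit per second
const rotate2Anim = angleBvr => t => angleBvr(t);      // angle in radians
const rotate2Rate = rate => rotate2Anim(t => localTime(t) * rate);

const rotXf = rotate2Rate(Math.PI / 3);   // PI/3 radians per second
```

After 3 seconds the accumulated rotation is 3 * PI/3 = PI radians, exactly what the longer Rotate2Anim form would produce.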
Degrees Naming Convention
Methods with the Degrees suffix take their angle parameters in degrees as opposed to radians. These are convenience functions that spare the user from doing the conversion explicitly, if the user chooses to specify angle parameters in degrees. For example, consider the following statement with radian parameters:
RotXf = m.Rotate2Rate(Math.PI/3);
This can be expressed with degrees as:
RotXf = m.Rotate2RateDegrees(60);
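The conversion the Degrees variants perform is just the standard degrees-to-radians formula, sketched here in plain JavaScript (illustrative, not the library API):

```javascript
// A Degrees variant converts its angle parameter before delegating:
// radians = degrees * PI / 180.
const degreesToRadians = deg => deg * Math.PI / 180;

// 60 degrees is the same angle as PI/3 radians, so Rotate2RateDegrees(60)
// matches Rotate2Rate(Math.PI / 3).
const rate = degreesToRadians(60);
```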
To view the DirectAnimation library functions in Visual Basic, follow these steps:
The JScript sample below creates a red oval and spins it around. JScript samples generally involve the following basic steps:
You can use any name for the control. You can set the control's position and size on the HTML page by specifying the value of the left position, the value of the top position, and the height and width of the control. You can also specify that the windowless control appears underneath other images and text on the page by specifying z-index: -1.
<OBJECT ID="DAControl"
    STYLE="position:absolute; left:30%; top:100; width:300; height:300; z-index: -1"
    CLASSID="CLSID:B6FFC24C-7E13-11D0-9B47-00C04FC2F51D">
</OBJECT>
m = DAControl.PixelLibrary;
myImg = m.SolidColorImage(m.Red);
DAControl.Image = myImg;
DAControl.Start()
You can add interaction to this sample by changing the code to respond to a user-initiated event. You can make an image red until the left mouse button is clicked, then turn it green with the following statement:
fillImg = m.Until(m.SolidColorImage(m.Red), m.LeftButtonDown, m.SolidColorImage(m.Green));
The Until function causes fillImg to be red until the LeftButtonDown event occurs, then turns fillImg green.
The following "Hello, World" example demonstrates some of the basic steps involved in developing a DirectAnimation application. The first step uses DirectAnimation for Java to construct an applet that, when displayed, results in a rendered text string that says "Hello, World." There is no animation and no interaction.
import com.ms.dxmedia.*;

public class MyModel extends Model {
  public void createModel(BvrsToRun blist) {
    FontStyleBvr fs = defaultFont.family(toBvr("Arial")).color(blue).bold();
    ImageBvr tx = textImage(toBvr("Hello, World"), fs);
    setImage(tx);
  }
}

public class MyApplet extends DXMApplet {
  public void init() {
    // Always call the superclass's init() first to ensure codeBase is set.
    super.init();
    // Now set the model.
    setModel(new MyModel());
  }
}
This DirectAnimation applet simply renders a piece of static text. You can make this example more interesting by adding just a few more lines of code. First, however, you must understand how the initial example works.
The DirectAnimation Model class includes the abstract createModel method. MyModel subclasses Model and implements createModel to build behaviors; this example has a string behavior and an image behavior. The createModel method then calls the setImage method to set the model's image behavior.
The DirectAnimation ImageBvr is constructed in two steps: the toBvr method converts the literal string "Hello, World" into a string behavior, and that string behavior is then rendered into an image of the text with the font style specified by defaultFont.
Next, MyApplet subclasses DXMApplet, the DirectAnimation subclass of the Abstract Window Toolkit (AWT) Applet class, and its init method sets the model to an instance of the MyModel class. When the applet is invoked, DirectAnimation builds the model (by invoking createModel) and displays it in the applet.
A small addition to the previous code causes the text to change color continuously as it is rendered. The MyApplet class doesn't change at all, while the MyModel class is changed to the following:
public class MyModel extends Model {
  public void createModel(BvrsToRun blist) {
    ColorBvr col = colorHsl(localTime, toBvr(0.5), toBvr(0.5));
    FontStyleBvr fs = defaultFont.family(toBvr("Arial")).color(col).bold();
    ImageBvr tx = textImage(toBvr("Hello, World"), fs);
    setImage(tx);
  }
}
One line was added to the createModel method. It defines a color, using the colorHsl method. This method allows you to define a color using the Hue, Saturation, Lightness (HSL) model. The colorHsl method takes 0.5 for its saturation and lightness arguments, but uses the built-in behavior called localTime for the hue. The localTime behavior is a time-varying value of type NumberBvr that increases at the rate of one unit per second. Values of all the defined DirectAnimation types are potentially time-varying and interactive.
Using localTime in a color-producing method such as colorHsl yields a time-varying color. Using this time-varying color as an argument in the font style color(col) method yields a time-varying font color. Attaching the time-varying color value to text that is converted into an image produces a time-varying image. Because setImage now sets a time-varying image, the result of this code is an image that is animated.
Note that the entire applet is as you see it. No other methods are required and, in particular, there is no need for a frame loop, even though the applet is displaying an animation. For programmers who have been using the awt.Graphics package to do animation, this means you do not need to worry about threads, while() loops, or repainting the screen.
The next step is to add some simple interactivity. The new version of MyModel, shown here, uses the time-varying color until the left mouse button is pressed. It then changes the color to red.
public class MyModel extends Model {
  public void createModel(BvrsToRun blist) {
    ColorBvr col = colorHsl(localTime, toBvr(0.5), toBvr(0.5));
    ColorBvr mouseCol = (ColorBvr)until(col, leftButtonDown, red);
    FontStyleBvr fs = defaultFont.family(toBvr("Arial")).color(mouseCol).bold();
    ImageBvr tx = textImage(toBvr("Hello, World"), fs);
    setImage(tx);
  }
}
A single line of code adds the ability to respond to the mouse:
until(col, leftButtonDown, red);
This expression produces a color behavior that is initially col and remains so until the leftButtonDown event occurs. When this happens, col changes to red. There is still no need to provide a frame loop. In addition, there is no need to provide an event detection/response loop (to wait for the leftButtonDown), because this is handled internally by the implementation of the until method.
As this sample shows, time-varying, interactive behaviors are constructed out of media data types and operations. The DirectAnimation run-time system then takes on the task of animation, event detection, and media presentation.
The DirectAnimation API uses continuous temporal and spatial 2-D and 3-D coordinate systems. The basic unit of time is the second. The basic unit of space is the meter. The x-axis is the horizontal axis, increasing to the right. The y-axis is the vertical axis, increasing upward. Three-dimensional coordinates are y-axis up, positive z-axis near, and negative z-axis far.
The following diagram illustrates these coordinate systems.
DirectAnimation provides several mechanisms for handling these differences. Fundamentally, the DirectAnimation coordinate system is a meter-based system. When PARAM tags are used to specify all or part of the model, the coordinate system will implicitly be that of HTML. When the library mode of the DirectAnimation control is set to pixel mode, the coordinate system is converted from the DirectAnimation default to the pixel convention. With these methods, the scripter can construct models in resolution-independent units. For example, in JScript:
m = DAControl.PixelLibrary;
- or -
m = DAControl.MeterLibrary;
This section discusses the following topics:
HTML and Pixel Coordinates
HTML uses device pixels as the standard unit of measure in a left-handed coordinate system, with the y-axis pointing down and the origin in the upper left corner. When PARAM tags are used to specify animations, the coordinate system is that of HTML. The DirectAnimation client controls use a pixel-coordinate system with the y-axis down and the origin in the center of the window.
DirectAnimation is fundamentally a meter-based coordinate system, which is preferred for resolution independent animation. However, DirectAnimation provides the pixel construction mode as a convenience for users who are familiar with the HTML coordinate system and would like their animation coordinates to match that as closely as possible.
When using the PixelLibrary (as opposed to the MeterLibrary), the 2-D coordinate system is left-handed, with the positive y-axis going downward and a centered origin (not an origin in the upper left corner, as in the HTML coordinate system). See the sample in DXMedia\Samples\Multimedia\DAnim\JScript\Templates\CoordsAndPath.html for a detailed example of the pixel construction mode. Also, compare the PixelMode.html and MeterMode.html samples in DXMedia\Samples\Multimedia\DAnim\JScript\Exercises. All 2-D coordinate references in this construction mode, such as points, paths, and translation factors, are interpreted as pixel-valued. Note that this is a construction mode only, so when you extract information back, such as extents, you get it in meters, because those are the units of the internal representation. For advanced animations, the meter mode is strongly recommended, especially because the conversion from meter to HTML coordinate space is a single transform.
Units
The DirectAnimation API is designed to ensure that one unit is actually one meter in physical space. However, variables such as monitor curvature and monitor controls that change the display area preclude absolute accuracy.
DirectAnimation provides predefined constants that you can use as multipliers to convert to other units, including cm, foot, inch, meter, mm, and pixel.
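Such unit multipliers can be sketched in plain JavaScript. The numeric values follow from the definitions of the units themselves; pixel is omitted because its physical size depends on the display:

```javascript
// Illustrative sketch of unit-multiplier constants, expressed in meters.
const meter = 1.0;
const cm    = 0.01  * meter;
const mm    = 0.001 * meter;
const inch  = 0.0254 * meter;   // by definition of the inch
const foot  = 12 * inch;

// Using a multiplier to express a length in the API's meter-based units:
const tableWidth = 75 * cm;     // 0.75 meters
```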
Working with Pixel Values
Given that pixel size differs from one display to another and even across different settings of the same display device, DirectAnimation provides a built-in number behavior, pixel, which is the dimension of a pixel in meters. The pixel constant is a single scalar because pixels are square.
The pixel constant enables an author to coordinate synthetic imagery precisely with imported bitmaps regardless of the display resolution.
Continuous Coordinate Systems
Continuous coordinate systems, such as those used by DirectAnimation, provide some key advantages. They remove problems of device- and resolution-dependence, allow for more portable content, and provide more flexibility to the implementation. However, continuous coordinate systems do not always meet the needs of programmers. For example, in addition to continuous time, DirectAnimation also provides the mechanism for applications to control frame generation through application-generated time ticks. Similarly, in spatial coordinates, it is generally the case that a programmer who imports a GIF or JPEG file wants it displayed at the same resolution at which it was authored and stored. For this reason, DirectAnimation supports pixels.
Image Coordinate System
The image coordinate system is called the image plane. Depending on whether the PixelLibrary or MeterLibrary is used, or if PARAM tags are used, the default unit of measure is pixels or meters, and the coordinate system is in pixel coordinates (origin centered, y down), meter coordinates (origin centered, y up), or HTML coordinates (origin in the upper left, y down).
All image and 2-D geometric primitives live in this same continuous coordinate system. This includes all of the Vector2, Point2, Transform2, Path2, Matte, Text, Montage, and Image values.
On its own, the image plane extends infinitely, with an origin and x- and y-axes. DirectAnimation images are constructed in this abstract coordinate system. However, when it is time to display a DirectAnimation image, a certain section of the infinite plane is mapped onto a region of a display device called a viewport, as shown in the following illustration.
The programmer decides what the display region or viewport will be. The mapping from the image plane into the viewport is straightforward. The origin of the image plane is mapped to the center of the viewport, and then the mapping happens in like units of measure. For example, if there is a red point 2 centimeters above the origin in the image plane, this point would map to a red pixel 2 centimeters above the center of the viewport. (Note that windowless controls commonly don't occupy their full viewports.)
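The mapping can be sketched in plain JavaScript. This is illustrative only; metersPerPixel stands in for the display's actual pixel size:

```javascript
// Illustrative sketch: map a point in the meter-based image plane onto a
// pixel-based viewport. The plane's origin maps to the viewport center, and
// like units then map one-to-one.
function planeToViewport(ptMeters, viewport, metersPerPixel) {
  return {
    px: viewport.centerX + ptMeters.x / metersPerPixel,
    py: viewport.centerY - ptMeters.y / metersPerPixel, // screen y runs down
  };
}

const vp = { centerX: 250, centerY: 225 };
// A point 2 cm above the image-plane origin, on a display with 0.5 mm pixels,
// lands 40 pixels above the viewport center.
const p = planeToViewport({ x: 0, y: 0.02 }, vp, 0.0005);
```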
Geometry Coordinate System
The geometry coordinate system is a 3-D coordinate system of infinite extent where geometric models are specified and transformed. The default unit of measure is the meter. It has an origin and x-, y-, and z-axes. The coordinate system is a right-handed one, as shown in the following illustration.
The following illustration shows the direction of positive rotation along an axis of rotation (the arrow points in the positive direction).
DirectAnimation geometric models are constructed in this abstract coordinate system. However, you must have a camera to display a model. The camera projects infinite space into an infinite plane, which is the image plane discussed in the Image Coordinate System section. In other words, the 2-D image plane is the 3-D projection plane, because the result of projecting geometry through a camera is an image. The resulting abstract image is amenable to the same operations and rules as other images, including the display through a viewport.
Meter-Based Space
In a meter-based 2-D composition space, 3-D models are authored to a certain size in meters, which is to be preserved in all renderings, and images are imported and scaled into a predetermined size, also in meters, independent of the pixel resolution. You can determine how much to scale an image by computing the bounding box of the imported image, comparing the desired size to that bounding box, and using the ratio as the scale factor. This effectively compensates for pixel size variability.
While this technique has the advantage of being independent of the device resolution, you need to scale images before displaying them. It thus requires more processing than simple pixel blitting and is prone to pixel aliasing. (Pixel blitting is the bit-block transfer of pixels, used to transfer all or part of a bitmap from a source, such as memory or the screen, to a destination, such as another memory or display surface. Pixel aliasing causes ragged edges because pixels are blocks; this can be improved by averaging edge pixels together.)
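The scale-factor calculation described above is simple division, sketched here in plain JavaScript (the names are illustrative):

```javascript
// Illustrative sketch: choose a scale factor so an imported image renders at
// a fixed physical size regardless of display resolution.
function scaleFactor(desiredWidthMeters, boundsWidthMeters) {
  return desiredWidthMeters / boundsWidthMeters;
}

// Shrink an import whose bounding box is 0.25 m wide down to 0.10 m:
const factor = scaleFactor(0.10, 0.25);
```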
Pixel-Based Space
In a pixel-based composition space, images are imported and displayed in a pixel-to-pixel mapping onto the screen, which is the traditional method in 2-D sprite systems and content (for example, 2-D sprite-based games and Director content). While the result is device-resolution dependent, there are several benefits. One is the large amount of legacy material written in this method. The reuse of this content requires using this pixel-based space.
In addition, because the pixels are mapped pixel-to-pixel, without any transformations, this method tends to be very fast and avoids pixel aliasing. The 3-D parts must be scaled based on the pixel size, so that when they are projected into the image plane they have consistent proportionality and coordination with the 2-D images.
Center-Based Positioning
When DirectAnimation imports or constructs an image, it is centered on the origin in the image plane. Similarly, when an image is created by rendering text (which takes no coordinates as parameters), it also is centered on the origin. The center of a camera's field of view coincides with the origin of the image plane. This means that scenes, when rendered into images, are centered on the origin.
A key concept behind DirectAnimation is the idea of "video editing" or "media timelines." Editing a video sequence and moving a sprite (subimage) along an animation path are conceptually very similar. Both scripted animation, such as that used in multimedia applications, and natural media, such as recorded sound and video, are sequences of clips. Each clip has a local timeline. Each can be used in more than one place on the screen (compositing) or in more than one place in time (sequencing). DirectAnimation uses the term behavior to denote these fundamental building blocks of media presentation.
A DirectAnimation behavior is a subsequence of media with its own timeline. You build an animated presentation by sequencing and compositing (overlaying or rendering) the behavior subsequences. DirectAnimation behaviors support a range of media types. For instance, there are image behaviors that treat video, cel animation, sprite (subimage) animation, and animated line graphics uniformly. There is a sound behavior for mixed, parameterized audio. Animated light behaviors illuminate three-dimensional scenes.
Timelines are animation fragments of (typically) fixed duration that can be sequenced together to form complex behaviors. Timelines are a flexible mechanism for constructing, manipulating, and running sequences of animation. For example, consider the following JScript code:
startPt = m.Point2(25, 50);
endPt = m.Point2(150, 50);
myPath = m.Line(startPt, endPt);
followPath = m.FollowPath(myPath, 5);
The FollowPath method creates a behavior of fixed duration that represents movement along a geometric path (myPath) that lasts five seconds. There are several ways to create timelines. The most general way is to construct an animation fragment by setting a Duration for a behavior. In the following JScript example, colors are given a duration of 5 seconds:
color1 = m.Red.Duration(5);
color2 = m.Green.Duration(5);
myImage = m.SolidColorImage(m.Sequence(color1, color2));
In general, you can think of behaviors as animations (movies, sounds, sprite animations, 3-D animations, and so on) that run forever.
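The Duration and Sequence model above can be sketched in ordinary JavaScript (the clip list and the valueAt helper here are illustrative, not part of the DirectAnimation API): given clips of fixed duration played in order and repeated forever, the value at any global time t is found by wrapping t into one cycle and scanning the clips.

```javascript
// Each clip has a value and a fixed duration, as with Duration(5) above.
var clips = [
    { value: "red",   duration: 5 },
    { value: "green", duration: 5 }
];

// Return the clip value active at global time t (seconds), with the
// whole sequence repeating forever.
function valueAt(clips, t) {
    var total = clips.reduce(function (s, c) { return s + c.duration; }, 0);
    var local = t % total;                  // wrap into one cycle
    for (var i = 0; i < clips.length; i++) {
        if (local < clips[i].duration) return clips[i].value;
        local -= clips[i].duration;         // shift into the next clip's local timeline
    }
}
```

At t = 6 the sequence is one second into its second clip, and at t = 12 it has wrapped around to the first clip again.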
Consider a JScript sample that draws a red circle moving in a straight line. The circle starts at the beginning of the line, waits 1 second, and then starts to move. It travels from the beginning of the line to the end in 4 seconds, repeats these actions 3 times, and then stops at the end. All measurements are in pixels, so the code uses the PixelLibrary. Because the control is windowless, the red circle can pass on top of or underneath a button on the page. It passes underneath the button if you specify z-index: -1 in the STYLE attribute of the <OBJECT> tag.
A common way to construct behaviors is to loop through a sequence of other behaviors over time. Examples include flipping through pre-rendered images or a list of colors. DirectAnimation supports this through the DAArray and Java ArrayBvr types.
In JScript, you can construct a DAArray object from an array and use the NthAnim function with a time-varying parameter to cycle through the array.
The following Java code constructs an ArrayBvr out of a Java array, and uses the nth method with a time-varying parameter to cycle through that list.
ColorBvr[] arr = { red, green, blue, yellow, green, cyan, magenta };
ArrayBvr arrBvr = new ArrayBvr(arr);

// Build an indexer that ramps from 0 toward length - 1 and wraps
// back to 0, advancing at a rate of one unit per second.
NumberBvr indexer = mod(localTime, toBvr(arr.length - 1));

// Use this to index into the ArrayBvr.
ColorBvr cyclingCol = (ColorBvr)arrBvr.nth(indexer);
Note that although nth takes a NumberBvr, it uses the greatest integer less than or equal to the number's value (the floor) to determine the index. Array indexes are zero-based, and any attempt to index beyond the array's length generates a run-time error.
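The floor-and-wrap indexing can be sketched in plain JavaScript (cycleIndex is an illustrative helper, not part of the DirectAnimation API). Taking the floor of the time value first and then wrapping with the modulus keeps the index within bounds while visiting one element per second:

```javascript
// Map a time value t (seconds) to a valid array index, advancing one
// element per second and wrapping, never indexing out of bounds.
function cycleIndex(t, length) {
    return Math.floor(t) % length;
}
```

For a seven-element array, the index climbs from 0 to 6 over the first seven seconds and then wraps back to 0.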
Tuple behaviors (DATuple and Java TupleBvr) are the same as array behaviors, except that a tuple object can contain behaviors of different types (such as a DAColor and a DAImage), while all the behaviors in an array object must be the same type. Tuple objects are useful for grouping behaviors that all switch at the same event. For example:
SynchTuple = m.Until(tuple1, event, tuple2);
This helps to synchronize many different behaviors to one event.
Note that unlike array behaviors, a tuple behavior cannot have an animated index.
Paths in DirectAnimation are created by building a URL base, and then adding relative paths to the base. For example:
mediaBase = "..\\..\\..\\..\\media\\";
sndBase = mediaBase + "sound\\";
imgBase = mediaBase + "image\\";
geoBase = mediaBase + "geometry\\";

mySnd = m.ImportSound(sndBase + "clock1.mp2").Sound;
myImage = m.ImportImage(imgBase + "bird.gif");
cubeGeo = m.ImportGeometry(geoBase + "cube.x");
When you install the DirectX Media SDK, the geometry, image, and sound files are installed by default in DXMedia\Media\geometry, DXMedia\Media\image, and DXMedia\Media\sound directories, respectively. Your path should be the relative path to your media files from where your HTML file resides.
Alternatively, you can specify the URL absolutely. For example:
mySnd = m.ImportSound("file://c:/dxmedia/media/sound/clock1.mp2").Sound;
myImage = m.ImportImage("file://c:/dxmedia/media/image/bird.gif");
In Java, a base URL is set for you automatically. When you build a Java applet, the base URL is set by default to the directory from which your class files are loaded. If you are creating an application instead of an applet, the base URL is set by default to the current working directory. You can also set the base URL explicitly with the setImportBase method, as shown in the following Java code:
try {
    setImportBase(new URL("file:/c:/dx/dxm/media/image/"));
} catch (MalformedURLException exc) {
    System.out.println("Bad URL - " + exc);
}
The URL you specify in setImportBase can be any fully-qualified URL. For example:
setImportBase(new URL("http://www.example.microsoft.com/MediaLib/"));
To build your full URL relative to the URL base, you retrieve the URL base with the getImportBase method, and then complete your path relative to this base. For example, if your applet was built in the directory C:\Animations\MyAnimations\DA\Java\MyApp, the following code would set the URL mediaBase to C:\Animations\Media.
URL mediaBase = buildURL(getImportBase(),"../../../Media/");
You then use the URL base as shown in the following code:
ImageBvr image1 = importImage(buildURL(mediaBase, "image/apple.gif"));
ImageBvr image2 = importImage(buildURL(mediaBase, "../../MyMediaLibrary/MyImages/peach.gif"));
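The relative resolution that buildURL performs can be sketched in plain JavaScript (resolveRelative is a hypothetical stand-in written for illustration, not a DirectAnimation or Java function). Following standard URL resolution, the base's last segment is dropped first, then each ".." segment removes one more directory before the remaining segments are appended:

```javascript
// Resolve a relative reference against a base URL string.
function resolveRelative(base, rel) {
    var parts = base.split("/");
    parts.pop();                         // drop the base's last segment
    var segs = rel.split("/");
    for (var i = 0; i < segs.length; i++) {
        if (segs[i] === "..") parts.pop();          // up one directory
        else if (segs[i] !== "" && segs[i] !== ".") parts.push(segs[i]);
    }
    return parts.join("/");
}
```

With the base from the earlier example, resolveRelative("file:/C:/Animations/MyAnimations/DA/Java/MyApp", "../../../Media/") yields file:/C:/Animations/Media, matching the mediaBase result described above.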
You can also override the base URL and specify a fully-qualified URL. Note that the URL does not become the default base URL.
URL altURL = buildURL(getImportBase(), "file:/c:/dx/dxm/media/image/");
URL anotherURL = buildURL(getImportBase(), "http://www.mycompany/MediaLib/");
You could then import images as shown in the following code:
ImageBvr img1 = importImage(buildURL(altURL, "apple.gif"));
ImageBvr img2 = importImage(buildURL(anotherURL, "pictures/orange.gif"));
You can also import from absolute URLs without building a URL base. For example:
ImageBvr img1 = importImage(buildURL(getImportBase(), "file:/c:/dx/dxm/media/image/apple.gif"));
ImageBvr img2 = importImage(buildURL(getImportBase(), "http://www.mycompany/MediaLib/pictures/orange.gif"));
Note: You should use the HTTP protocol or local files to import audio and video. If you use a UNC path such as "file:\\SERVER\SHARE\FOLDER\MEDIA_FILE", the imported audio or video is not downloaded. Instead, the media file is used over the network as if it were a local file. If your network does not have the bandwidth to do this effectively, you will see severe performance degradation, such as delays and skipping.
The following illumination equation describes the general lighting model used by DirectAnimation. Note that the actual implementation depends on the rendering device, but this equation gives the general model.
I = emissive
    + (La * ambient)
    + foreach(i) { Li * diffuse * cos(theta) }
    + foreach(i) { Li * specular * cos(phi)**specularExp }
    + ((white - opacity) * Lt)
In the preceding equation, La is the ambient light; Li parameters are point, spot or directional lights; and Lt is the light coming through the object. The properties that describe the material are emissive, ambient, diffuse, specular, and specularExp.
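To make the equation concrete, here is a single-channel sketch of the model in plain JavaScript (the function and its parameter layout are illustrative; the real computation happens inside the rendering device). Each light's diffuse contribution falls off with cos(theta), and its specular contribution with cos(phi) raised to specularExp:

```javascript
// Evaluate the illumination model for one color channel.
// mat: { emissive, ambient, diffuse, specular, specularExp, opacity }
// La: ambient light; lights: array of { Li, theta, phi }; Lt: transmitted light.
function illuminate(mat, La, lights, Lt) {
    var I = mat.emissive + La * mat.ambient;
    for (var i = 0; i < lights.length; i++) {
        var L = lights[i];
        I += L.Li * mat.diffuse * Math.cos(L.theta);                      // diffuse term
        I += L.Li * mat.specular * Math.pow(Math.cos(L.phi), mat.specularExp); // specular term
    }
    I += (1 - mat.opacity) * Lt;   // "(white - opacity) * Lt", per channel
    return I;
}
```

For example, a fully opaque, non-specular material facing one unit light directly receives its full diffuse color plus the ambient contribution, while a half-opaque material in the dark passes through half of the light behind it.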
The emissive property is the color that the object emits independent of lighting. In other words, if an object has an emissive color of red, it will appear red even if you're in a dark room with all of the lights turned off. Emissive color is useful for making objects that glow (LEDs, for example) or objects that are fluorescent, or for turning invisible (ultra-violet) illumination into visible illumination, for black light or neon effects. In addition, emissive color is useful for representing light sources. For example, if you want to model a light bulb, you will probably put a white-emitting sphere around a point light source. The emissive color can be set with EmissiveColor.
Ambient light (La in the preceding equation) approximates the light in a scene that is everywhere, due to reflection from the many surfaces in the scene. In reality, ambient light is not constant; otherwise, you'd have no darker areas in a room. However, this approximation helps ensure that surfaces not directly lit by a light don't appear completely black. For scenes set in space, you usually set the ambient light to black, since surfaces in shadow receive almost no illumination. You can also set the ambient light to a color, if most of the lights in the scene are that color. For example, if you're modeling a submarine interior under red lighting, you'd probably want to give the ambient light a red tint as well.
The ambient property is the color reflected for ambient light. You will almost always want to set this value to the same color as the diffuse color. In other words, your material in shadow will be the color of the object, but darker. In some rare situations, you might set the ambient color of the object to be shifted to the hue of its neighboring surface colors, but this is unusual.
The diffuse property is the diffuse color of the material. Think of it as the fundamental color of the surface. This color is multiplied by cos(theta), where theta is the angle between the surface and the light. If the surface is facing the light, then it is brightly lit; otherwise, it fades to shadow. The diffuse color can be set with DiffuseColor.
The specularExp property gives the shininess of the surface. Shiny surfaces are approximated by high values of specularExp, and less shiny surfaces by lower values. The lowest value this should be set to is 1, which yields a very broad highlight that closely resembles diffuse reflection. A value of 10-20 is quite reasonable for a moderately glossy surface, and a value of 100 yields a shiny object. The specular exponent can be set with SpecularExponentAnim.
Imagine a ray going from the light to the surface and bouncing off. If you, as the viewer, are hit in the eye by the ricocheting ray, you're looking at a direct reflection of the light, so it appears bright. As you move away from that ray, you move away from the reflection of the light, so the reflection looks less bright. The parameter phi is the angle between your line of sight and the reflected ray. Increasing specularExp makes this reflection sharper. Specifically, the specular exponent is the power applied to the cosine of the angle between the reflection vector and the sight vector. Thus, the higher the specular exponent, the faster the falloff, and the sharper the highlight.
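The effect of the exponent on highlight sharpness can be checked numerically (plain JavaScript, for illustration only): at the same off-axis angle phi, a larger exponent leaves far less of the highlight.

```javascript
// Relative highlight intensity at angle phi (radians) off the
// reflected ray, for a given specular exponent.
function highlight(phi, specularExp) {
    return Math.pow(Math.cos(phi), specularExp);
}

var broad = highlight(0.3, 10);   // moderately glossy: still fairly bright off-axis
var sharp = highlight(0.3, 100);  // shiny: nearly dark at the same angle
```

Both curves peak at 1 when phi is 0 (a direct reflection); the exponent only controls how quickly intensity falls away from that peak.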
The specular property is the color of the specular reflection. Surfaces such as plastic, ceramic, or colored glass tend to reflect the light without shifting the light's color. Thus, you generally want the specular color to be a grey-level so that it doesn't alter the color of the light source. For example, the reflection of a white light on a blue plastic material looks white. The specular property can be set with SpecularColor.
On the other hand, metallic surfaces color the reflection of the light with their surface color. For example, a white light reflected off a green metallic surface will look green. This is also true for materials such as satin or pearl. For these materials, you should set the specular color to some intensity of the diffuse color.
For some materials, the specular reflection can be a different color than the diffuse reflection. For example, some beetle shells have diffuse green colors with yellow specular highlights.
Lt in the preceding equation is the light transmitted through the object. If the object is opaque, then no light is transmitted. In simple transparency (the way most simple scan-line renderers do it today), Lt is just the color of the scene behind the material (that is, no refraction).
The opacity property is the color to which the material is completely opaque. If the opacity color is blue, for example, the material looks completely opaque to blue light and completely transparent to colors that contain no blue. DirectAnimation supports only scalar transparency, a value from 0 (invisible) to 1 (opaque). The opacity can be set with OpacityAnim.
Note that opacity does not affect the emissive or specular components of a material. This means that if you make the material of an object more transparent, you'll still see the highlight at full strength. This mimics what you'd see, for example, as you made the glass for a sphere more and more transparent - you'd still see the light's reflection from the surface. Similarly, if you take a glass sphere and heat it red-hot, you still see the redness (emissive color), regardless of how clear the glass is.
If you want to modulate the specular and emissive colors of the object, you need only multiply them by the opacity. The end result is the same as rendering the object at full opacity and then applying a single transparency value to the resulting image.
This set of examples gives you the property settings you need to mimic different materials.
The core DirectAnimation API is based on values and expressions. In contrast, the Drawing Surface API provides an alternative style in which methods are used to set a context and then primitives are rendered based on this context. The context can be saved into a stack and restored. The Drawing Surface API is limited to constructing 2-D vector models, so to get to broader features such as sound, images, and 3-D, use the core API.
You can use the Drawing Surface API in conjunction with the core API. After a 2-D vector model is constructed, it can be rendered into an image and then combined with images based on the core API. In the VBScript sample Runners (DXMedia\Samples\Multimedia\DAnim\VBScript\Showcase\Runners.html), the runner model is constructed using the Drawing Surface API, while the animation is constructed using the core API. For basic Drawing Surface samples, see DXMedia\Samples\Multimedia\DAnim\JScript\Templates\BasicDS.html and DXMedia\Samples\Multimedia\DAnim\VBScript\Templates\BasicDS.html.
The DADrawingSurface class uses a style of drawing that depends on a graphics context. This is in contrast to the other DirectAnimation classes that use a constructive approach. The constructive approach combines primitive data types into a more complex model, in a hierarchical fashion.
The graphics context approach is analogous to a pen plotter, where the pen acts as the mechanical extension of a hand, and the plotter acts as a piece of paper. To draw something, the pen is directed to a location on the plotter and then draws the shape. Many people find this approach to be more familiar than hierarchical modeling. For example, to draw the sun in the sky, it may seem more obvious to simply draw a yellow circle exactly where you want it, instead of first drawing a circle, then creating a yellow circle, and then, with a transform, creating yet another yellow circle that is now correctly placed. The difference is in the approach rather than in capabilities.
A DADrawingSurface object maintains a graphics state object for 2-D graphics. This state includes attributes such as line style, fill style, border style, font style, and 2-D transformations. The drawing commands (such as Oval and RoundRect) are always interpreted in terms of the current state. See DADrawingSurface in the Scripting Reference for details about the subroutines and functions.
To use the DADrawingSurface functions and subroutines, create a DAViewerControl object in your HTML as usual, and then use the DAStatics function NewDrawingSurface to create the DADrawingSurface object, as shown in the following JScript code:
<DIV ID=controlDiv>
<OBJECT ID="DAControl"
    STYLE="position:absolute; left:10; top:10; width:450; height:450"
    CLASSID="CLSID:B6FFC24C-7E13-11D0-9B47-00C04FC2F51D">
</OBJECT>
</DIV>

<SCRIPT LANGUAGE="JScript">
m = DAControl.MeterLibrary;
ds = m.NewDrawingSurface();
Once you have the DADrawingSurface object ds, you can use its functions and subroutines. For example, the following JScript code draws a line between the specified x and y coordinates.
ds.Line(-0.1, 0.04, 0.01, 0.04);
DAControl.Image = ds.Image;
DAControl.Start();
</SCRIPT>
Consider another JScript sample that draws a green oval and spins it around.
You can use the Structured Graphics control with the Drawing Surface interface. For example, the following VBScript code uses the Drawing Surface methods with the Structured Graphics control's drawing surface:
<OBJECT ID=SG1
    STYLE="POSITION: absolute; HEIGHT:200; WIDTH:200; TOP:130; LEFT:215;
           VISIBILITY:VISIBLE; ZINDEX:1"
    CLASSID="CLSID:369303C2-D7AC-11d0-89D5-00A0C90833E6">
...
</OBJECT>

<SCRIPT LANGUAGE=VBSCRIPT>
...
Sub renderNew
    Set Lib = SG1.Library
    Set DRS = SG1.DrawSurface
    Call DRS.LineDashStyle(0)
    Call DRS.FillColor(Lib.green)
    Call DRS.SecondaryFillColor(Lib.red)
    Call DRS.GradientExtent(0, 0, 200, 0)
    Call DRS.GradientRolloffPower(1)
    Call DRS.FillStyle(9)
    Call DRS.Rect(-75, -75, 100, 100)
    SG1.DrawSurface = DRS
End Sub
</SCRIPT>
The statement SG1.DrawSurface = DRS resets the Structured Graphics control's drawing surface to what has been created in DRS.
Note on using the DirectAnimation Structured Graphics control with the Drawing Surface interface in the Pixel construction mode:
Although the positive y-axis goes downward, angles in the arc and pie functions are interpreted as counter-clockwise. For example, a pie shape created by calling PieDegrees with a starting angle of 0 degrees and an ending angle of 90 degrees will be above the x-axis (negative y), while a pie created with a starting angle of 0 degrees and an ending angle of -90 degrees will be below the x-axis (positive y).
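This convention can be sketched in plain JavaScript (pointOnArc is a hypothetical helper, not part of the control's API): because the positive y-axis points downward in Pixel mode while angles remain counter-clockwise, the y component of a point on the arc is negated.

```javascript
// Point on an arc of radius r at `deg` degrees, in pixel coordinates
// (y grows downward, angles measured counter-clockwise).
function pointOnArc(r, deg) {
    var a = deg * Math.PI / 180;
    return { x: r * Math.cos(a), y: -r * Math.sin(a) };
}
```

For example, pointOnArc(10, 90) lies above the x-axis (y = -10), while pointOnArc(10, -90) lies below it (y = 10), matching the PieDegrees behavior described above.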
DirectAnimation supports animation splines. These are effectively animation paths for numbers, 2-D points and vectors, and 3-D points and vectors. The paths can be linear, quadratic, or cubic b-splines, both rational and non-rational. In DirectAnimation, an animation spline is constructed by supplying an array of knots, control points, and, possibly, weights (all of which themselves are behaviors and are potentially time-varying). This creates either a DANumber, DAPoint2, DAPoint3, DAVector2, or DAVector3 behavior.
The CubicBSplinePath function is a form of spline that constructs a 2-D cubic b-spline polynomial path (other forms construct numbers, points, and vectors). This path, like any other path, can be stroked as a line, filled by any fill style, or used for extracting a motion path with the FollowPath, FollowPathAngle, and FollowPathAngleUpright functions. See the JScript sample AnnotatedLogo.html in DXMedia\Samples\Multimedia\DAnim\JScript\Templates where this spline is used as both the shape of the logo and as an animation path for HTML text in a DIV tag.
You need a knot vector and a list of control elements to construct a spline. The control elements (numbers, points, or vectors) relate intuitively to the shape of the spline, while the knots don't. If you want to construct your spline in terms of control elements and not in terms of the knot vector, choose a uniform knot vector with standard end conditions (interpolate the first and last points). This means the first and last knots are repeated d times, where d is the degree of the spline, and intermediate knots are 1 unit apart. For example, the following JScript function (used in the sample CoordsAndPaths.html in DXMedia\Samples\Multimedia\DAnim\JScript\Templates) takes a list of 2-D control points, generates the uniform knot vector automatically, and constructs and returns a CubicBSplinePath.
function SimpleSpline(pts)
{
    // Divide by 2, since the points are 2-D coordinates.
    numPts = pts.length / 2;

    // We need 2 more knots (since this is a cubic spline) than control points.
    knts = new Array(numPts + 2);

    // Set the uniform knots. Note that the knot vector doesn't need
    // to start from 0; only the relative spacing between knots
    // is significant.
    for (i = 2; i < numPts; i++)
        knts[i] = i;

    // The first knot must have multiplicity 3 (the degree) to
    // interpolate the first point.
    knts[0] = knts[1] = knts[2];

    // The last knot must have multiplicity 3 (the degree) to
    // interpolate the last point.
    knts[numPts + 1] = knts[numPts] = knts[numPts - 1];

    // Finally, construct and return the spline path.
    return (m.cubicBSplinePath(pts, knts));
}
In the absence of a visual authoring tool for creating splines, it may be easiest to design them with pencil and paper on a square grid. Draw the x- and y-axes, sketch the desired spline, and then place control points that follow its general shape, with the first and last points placed exactly where the spline should begin and end. You also need to pick either the PixelLibrary or the MeterLibrary. If you use pixels, place the spline points only on grid points, and map each grid square to 25 pixels.
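The knot-vector construction inside SimpleSpline can be isolated and checked in plain JavaScript (uniformCubicKnots is an illustrative name): for numPts control points, a cubic spline needs numPts + 2 knots, with the first and last knot values each repeated 3 times.

```javascript
// Build a uniform knot vector with standard end conditions for a
// cubic B-spline with numPts control points (numPts + 2 knots).
function uniformCubicKnots(numPts) {
    var knts = new Array(numPts + 2);
    for (var i = 2; i < numPts; i++)
        knts[i] = i;                        // interior knots 1 unit apart
    knts[0] = knts[1] = knts[2];            // first knot, multiplicity 3
    knts[numPts + 1] = knts[numPts] = knts[numPts - 1]; // last knot, multiplicity 3
    return knts;
}
```

For example, uniformCubicKnots(5) yields [2, 2, 2, 3, 4, 4, 4]: three repeated knots at each end so the spline interpolates its first and last control points, with the interior knots spaced 1 unit apart.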
The COM/C++ Reference contains methods that can only be accessed from C++, typically because they use IUnknown pointers or arrays with unspecified bounds. However, C++ programmers can also access all the properties and methods listed in the Scripting Reference. In order to use the Scripting Reference to create C++ code, you need to adapt the scripting syntax to C++ syntax. How you adapt the scripting syntax depends on whether you include the line #import "danim.dll" in your header file.
The easiest way to use DirectAnimation from C++ is to include #import "danim.dll" in your header file. When you do, the DirectAnimation objects are wrapped and you are provided with smart pointers to the objects, such as IDAStaticsPtr, IDATransform3Ptr, IDAGeometryPtr, and so on. When wrapped, calls to COM methods return a behavior object or property value directly, rather than returning an HRESULT with the desired value as one of the method's parameters. See Using DirectAnimation from C++ for more COM/C++ programming information.
The following sections discuss how to perform various tasks in C++ and script, so you can see how to adapt the Scripting Reference syntax.
In scripting, you access the Statics library as follows:
m = DAControl.PixelLibrary;
You can then use the DAStatics class functions and properties by qualifying them with m, as shown in the following code.
myImage = m.SolidColorImage(m.Red);
In C++, if you use #import "danim.dll", you access the View and the Statics library as follows:

CDAViewerCtl::CDAViewerCtl() : _view(NULL)
{
    _view.CreateInstance(__uuidof(DAViewerControlWindowed));
}

IDAStaticsPtr e;
e = _view->GetPixelLibrary();
If you are not using #import "danim.dll", you need to use CoCreateInstance to initialize the View and the Statics library.
hr = CoCreateInstance(CLSID_DAView, NULL, CLSCTX_INPROC,
                      IID_IDAView, (void**) &_view);
hr = CoCreateInstance(CLSID_DAStatics, NULL, CLSCTX_INPROC,
                      IID_IDAStatics, (void**) &e);
After you have a pointer to the Statics library, you can use the DAStatics functions and properties through e->, as shown in the following code.
IDAImagePtr myImage = e->SolidColorImage(e->Red);
If you use #import "danim.dll", you can access properties directly, as shown with the Black property in the following example, where e is a pointer to the Statics library.
IDAColorPtr col = e->Black;
If you do not use #import "danim.dll", you need to prepend get_ or put_ to the property name. Use get_PropertyName to retrieve the value of a property, as shown in the following code.
pCol = NULL;
hr = e->get_Black(&pCol);
If you use #import "danim.dll", you can set a property directly, as shown in the following code, where view is a pointer to a DAView object.
view->CompositeDirectlyToTarget = TRUE;
If you do not use #import "danim.dll", use put_PropertyName to set the value of a property, as shown in the following code.
hr = view->put_CompositeDirectlyToTarget(TRUE);
If you use #import "danim.dll", you can use methods with nearly the same syntax they have when used in script. For example, you can create a rotation in JScript as shown in the following code.
xform3D = m.Rotate3RateDegrees(m.YVector3, 60);
This same rotation in C++ with #import "danim.dll" included is as follows:
IDATransform3Ptr xform3D = e->Rotate3RateDegrees(e->YVector3, 60);
This same rotation in C++ without #import "danim.dll" is as follows:
IDAVector3 *yVector3 = NULL;
IDATransform3 *p3DXform = NULL;
hr = e->get_YVector3(&yVector3);
hr = e->Rotate3RateDegrees(yVector3, 60, &p3DXform);
If you include #import "danim.dll", the correspondence between the scripting syntax in the Scripting Reference and the C++ syntax is close, and you should be able to adapt the Scripting Reference to C++.
The following comparison shows how to create and render a spinning cube in JScript and C++. The C++ code assumes you have included #import "danim.dll" in your C++ header file.
Initialize the control

JScript:

<OBJECT ID="DAControl"
    STYLE="width:300;height:300"
    CLASSID="CLSID:B6FFC24C-7E13-11D0-9B47-00C04FC2F51D">
</OBJECT>

C++:

CDAViewerCtl::CDAViewerCtl() : _vc(NULL)
{
    _vc.CreateInstance(__uuidof(DAViewerControlWindowed));
}

Initialize the library

JScript:

m = DAControl.MeterLibrary;

C++:

IDAStaticsPtr e;
e = _vc->GetMeterLibrary();

Set the media path

JScript:

mediaBase = "..\\..\\..\\media\\";
geoBase = mediaBase + "geometry\\";
imgBase = mediaBase + "image\\";

C++:

TCHAR szMediaBase[_MAX_PATH];
TCHAR szGeo[_MAX_PATH];
TCHAR szImg[_MAX_PATH];
GetModuleFileName(GetModuleHandle(NULL), szMediaBase, sizeof(szMediaBase));
char *pos = strrchr(szMediaBase, (int)'\\');
int result = pos - szMediaBase + 1;
szMediaBase[result] = '\0';
TCHAR *bin = _tcsstr(szMediaBase, "\\bin\\");
if (bin)
    _tcscat(szMediaBase, _T("../../../media/"));
else
    _tcscat(szMediaBase, _T("../../../../media/"));
_tcscpy(szGeo, szMediaBase);
_tcscpy(szImg, szMediaBase);
_tcscat(szGeo, _T("geometry/"));
_tcscat(szImg, _T("image/"));

Import media

JScript:

cubeTexture = m.ImportImage(imgBase + "apple.gif");
cubeGeo = m.ImportGeometry(geoBase + "cube.x");

C++:

IDAImagePtr cubeTexture = e->ImportImage(_bstr_t(szImg) + _bstr_t("apple.gif"));
IDAGeometryPtr cubeGeo = e->ImportGeometry(_bstr_t(szGeo) + _bstr_t("cube.x"));

Create the spinning cube

JScript:

cubeGeo = cubeGeo.TextureImage(cubeTexture);
cubeTransRot = m.Rotate3Rate(m.YVector3, 30);
cubeGeo = cubeGeo.Transform(cubeTransRot);

C++:

cubeGeo = cubeGeo->TextureImage(cubeTexture->MapToUnitSquare());
IDATransform3Ptr cubeTransRot = e->Rotate3Rate(e->YVector3, 30);
cubeGeo = cubeGeo->Transform(cubeTransRot);

Create the camera, lights, and final image

JScript:

camera = m.PerspectiveCamera(0.06, 0.033);
light = m.PointLight.Transform(m.Translate3(0.015, 0.0075, 0.0225));
finalImg = m.UnionGeometry(cubeGeo, light).Render(camera);

C++:

IDACameraPtr camera = e->PerspectiveCamera(0.06, 0.033);
IDAGeometryPtr light = e->PointLight->Transform(e->Translate3(0.015, 0.0075, 0.0225));
IDAImagePtr finalImg = e->UnionGeometry(cubeGeo, light)->Render(camera);

Render

JScript:

DAControl.Image = finalImg;
DAControl.Sound = m.Silence;
DAControl.Start();

C++:

_vc->PutImage(finalImg);
_vc->PutSound(e->Silence);
_vc->Start();
You can access DirectX Transform objects from DirectAnimation to create interesting effects, such as exploding geometries and melting images. To use the 3-D effects, you must have DirectX foundation version 6 or later installed on your computer. To use some of the 2-D effects, you must have optional DLLs installed. You can install these DLLs by choosing "Optional DirectX Transforms" during installation of the DirectX Media SDK. DirectX foundation is included on the DirectX Media SDK CD. For a demonstration of the DirectX Transform effects, see the DirectX Transform Effects interactive script demo.
You access the effects through new ActiveXObject("type.creator.name"), where type is the type of transform, creator is the company that created it, and name is the name of the effect. If the effect is a 2-D transform, the type is DXImageTransform; for example, xf = new ActiveXObject("DXImageTransform.Microsoft.Compositor");. If the effect is 3-D, the type is DX3DTransform; for example, xf = new ActiveXObject("DX3DTransform.Microsoft.Explode");. The string "type.creator.name" is the ProgID (programmatic identifier) of the effect.
The following code shows how to use the Wipe effect, which wipes between two images. First, you import the two images. Then you create the transform object with new ActiveXObject, and then you create an evaluator. The evaluator is usually a set of behaviors that describes how to map the DirectX Transform effect's progress to time. DirectX Transform effects measure progress from 0 to 1, with 1 being complete. In this example, the evaluator is a sequence that holds at 0 for two seconds, moves to 1 over three seconds, holds at 1 for two seconds, and moves back to 0 over three seconds. This means the initial image is displayed for two seconds. Over the next three seconds, the second image wipes across the first. The second image is then displayed for two seconds, and then the first image wipes back across the second image over three seconds. The sequence then repeats.
rawImg1 = m.ImportImage(imageBase + "tigerstripe.jpg");
rawImg2 = m.ImportImage(imageBase + "metablob.jpg");

xf = new ActiveXObject("DXImageTransform.Microsoft.Wipe");

holdTime1 = m.DANumber(0).Duration(2);
holdTime2 = m.DANumber(1).Duration(2);
forward = m.Interpolate(0, 1, 3);
back = m.Interpolate(1, 0, 3);
evaluator = m.Sequence(holdTime1,
            m.Sequence(forward,
            m.Sequence(holdTime2, back))).RepeatForever();

result = m.ApplyDXTransform(xf, new Array(rawImg1, rawImg2), evaluator);
realImg = result.OutputBvr;
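The time-to-progress mapping that this evaluator builds can be sketched as a plain JavaScript function (illustrative only; DirectAnimation evaluates it as a behavior). One cycle is 2 + 3 + 2 + 3 = 10 seconds, repeated forever:

```javascript
// Progress (0..1) of the Wipe effect at global time t (seconds).
function wipeProgress(t) {
    var local = t % 10;                    // RepeatForever: wrap into one cycle
    if (local < 2) return 0;               // hold at 0 for two seconds
    if (local < 5) return (local - 2) / 3; // ramp 0 -> 1 over three seconds
    if (local < 7) return 1;               // hold at 1 for two seconds
    return 1 - (local - 7) / 3;            // ramp 1 -> 0 over three seconds
}
```

At t = 1 the first image is still fully shown (progress 0), at t = 3.5 the wipe is halfway across, and at t = 6 the second image is fully shown (progress 1).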
Note that if you have a pickable image or geometry and you apply a DirectX Transform effect to it, you must make the image or geometry pickable again after the effect is applied. You can see an example of this in the JScript sample Explode.html located in the Samples\Multimedia\danim\JScript\showcase subdirectory, where applying the effect is immediately followed by calling the DAGeometry method Pickable.
explodeCow = simpleApply("DX3DTransform.Microsoft.Explode", unpickedGeo, eval);
pickCowGeo = explodeCow.Pickable();
The DirectX Transform documentation gives the ProgId of each transform and also specifies what software in addition to the DirectX media run time you need to have installed to use the effect, if any.
© 2000 Microsoft and/or its suppliers. All rights reserved. Terms of Use.