Last modified: January 30, 1996. This document can be found at http://webspace.sgi.com/moving-worlds/spec/spec.main.html
This document describes the complete specification for VRML 2.0. It contains the following sections:
Sections describing features this proposal adds to VRML 1.0 are indicated by the word new in the section heading. Features in VRML 1.0 that have been changed are indicated by the word modified in the section heading.
If you want to print the specification, it may be convenient to download it as a single document. (To download the PostScript versions, use your browser's Save Link As... feature (Shift+click in Netscape); if you just follow the link, the document might come up in a PostScript viewer instead of letting you save it to disk.)
This section describes key concepts related to the use of VRML, including how nodes are combined into scene graphs, how fields receive and generate events, how to create nodes and node sets using prototypes, how to add node types to VRML and export them for use by others, and how to incorporate programmatic scripts into a VRML file.
This subdocument includes the following sections:
For easy identification of VRML files, every VRML 2.0 file must begin with the characters:
#VRML V2.0 utf8
The identifier utf8 allows international characters to be displayed in VRML using the UTF-8 encoding of the ISO 10646 standard. Unicode is an alternate encoding of ISO 10646. UTF-8 is explained under the Text node.
Any characters after these on the same line are ignored. The line is terminated by either the ASCII newline or carriage-return characters.
The '#' character begins a comment; all characters until the next newline or carriage return are ignored. The only exception to this is within double-quoted SFString and MFString fields, where the '#' character will be part of the string.
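For example, in this illustrative fragment (the info string is invented; the WorldInfo node is discussed in the note that follows), the first '#' lines are comments while the '#' inside the quoted string is preserved:

```vrml
#VRML V2.0 utf8
# This entire line is a comment and is ignored.
WorldInfo {
    info [ "Part #7 of the demo" ]   # '#' inside the quoted string is kept
}
```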
Note: Comments and whitespace may not be preserved; in particular, a VRML document server may strip comments and extraneous whitespace from a VRML file before transmitting it. WorldInfo nodes should be used for persistent information like copyrights or author information. Info nodes could also be used for object descriptions. New uses of named info nodes for conveying syntactically meaningful information are deprecated. Use the extension nodes mechanism or prototyping instead.
Blanks, tabs, newlines, and carriage returns are whitespace characters wherever they appear outside of string fields. One or more whitespace characters separate the syntactical entities in VRML files, where necessary.
After the required header, a VRML file can contain the following:
Field names start with lowercase letters; node types start with uppercase letters. The remaining characters may be any printable ASCII character (21H-7EH) except curly braces {}, square brackets [], single ' or double " quotes, sharp #, backslash \, plus +, period . or ampersand &.
Node names must not begin with a digit, but they may begin with and contain any UTF-8 character except those below 21H (control characters and whitespace) and the characters {} [] ' " # \ + . and &.
VRML is case-sensitive; "Sphere" is different from "sphere" and "BEGIN" is different from "begin."
A URL (Uniform Resource Locator) specifies a file located on a particular server and accessed through a specified protocol. A URN (Uniform Resource Name) specifies only a file within a particular domain; it does not specify the server or the protocol. The contents of a URN are guaranteed not to change. If the contents change, the name must change as well.
VRML 2.0 browsers are not required to support URNs. If they do not support URNs, they should ignore any URNs that appear along with URLs in MFString fields. URN support is specified in a separate document at http://earth.path.net/mitra/papers/vrml-urn.html, which may undergo minor revisions to keep it in line with parallel work happening at the IETF.
The file extension for VRML files is .wrl (for world).
The MIME type for VRML files is defined as follows:
x-world/x-vrml
The MIME major type for 3D world descriptions is x-world. The MIME minor type for VRML documents is x-vrml. Other 3D world descriptions, such as oogl for The Geometry Center's Object-Oriented Geometry Language, or iv, for SGI's Open Inventor ASCII format, can be supported by using different MIME minor types.
It is anticipated that the official type will change to "model/vrml". Until then, servers should present files as being of type x-world/x-vrml, and browsers should recognize both x-world/x-vrml and model/vrml.
At the highest level of abstraction, VRML is just a way for objects to read and write themselves. Theoretically, the objects can contain anything--3D geometry, MIDI data, JPEG images, and so on. VRML defines a set of objects useful for doing 3D graphics. These objects are called nodes. Nodes contain data, which is stored in fields.
VRML defines several different classes of nodes. Most of the nodes can be classified into one of two categories: grouping nodes and leaf nodes. Grouping nodes gather other nodes together, allowing collections of nodes (referred to as their children) to be treated as a single object. Some grouping nodes also control whether or not their children are drawn. Grouping nodes can have only other grouping nodes or leaf nodes as children.
A leaf node is any node that can be added to a grouping node. A leaf node is not itself a grouping node and it cannot have children. Leaf nodes include shapes, lights, viewpoints, sounds, and nodes that provide information to the browser. Shape nodes contain two kinds of additional information: geometry and appearance. For purposes of discussion, this specification also uses a third node grouping, subsidiary nodes, for nodes that are always used within fields of other nodes and are not used alone. These nodes include geometry (for example, Cone and Cube), geometric property (for example, Coordinate3 and Texture2), appearance (Appearance) and appearance property nodes (for example, Material and FontStyle).
Nodes can be prototyped and shared. Nodes are arranged in hierarchical structures called scene graphs. A Frame node is a kind of grouping node that defines a coordinate system for its child (leaf) nodes. Each Frame node defines a coordinate system relative to its parent nodes (see Coordinate Systems and Transformations).
Applications that interpret VRML files need not maintain the scene graph structure internally; the scene graph is merely a convenient way of describing objects.
A node has the following characteristics:
The syntax chosen to represent these pieces of information is as follows:
objecttype { fields eventsIn eventsOut children }
Only the object type and curly braces are required; nodes may or may not have fields, events, and children.
For example, this file contains a simple scene defining a view of a red sphere and a blue cube, lit by a directional light:
#VRML V2.0 utf8
Frame {
    DirectionalLight {
        direction 0 0 -1        # Light shining from viewer into scene
    }
    Frame {                     # The red sphere
        translation 3 0 1
        Shape {
            geometry Sphere { radius 2.3 }
            appearance Appearance [ Material { diffuseColor 1 0 0 } ]   # Red
        }
    }
    Frame {                     # The blue cube
        translation -2.4 .2 1
        rotation 0 1 1 .9
        Shape {
            geometry Cube {}
            appearance Appearance [ Material { diffuseColor 0 0 1 } ]   # Blue
        }
    }
}
This section describes the general scene graph hierarchy, how to reuse nodes within a file, coordinate systems and transformations in VRML files, and the general model for viewing and interaction within a VRML world.
A scene graph consists of grouping nodes and leaf nodes. Grouping nodes, such as Frame, LOD, and Switch, can have child nodes. These children can be other grouping nodes or leaf nodes, such as shapes, browser information nodes, lights, cameras, and sounds. Appearance, appearance properties, geometry, and geometric properties are contained within Shape nodes.
Transformations are stored within Frame nodes. Each Frame node defines a coordinate space for its children. This coordinate space is relative to the parent (Frame) node's coordinate space--that is, transformations accumulate down the scene graph hierarchy. Geometric sensors are contained within a Frame node.
Some nodes are not part of the scene graph hierarchy. These nodes are the Script and TimeSensor nodes.
A node may be the child of more than one group. This is called instancing (using the same instance of a node multiple times; called "aliasing" or "multiple references" by other systems) and is accomplished by using the DEF and USE keywords.
The DEF keyword gives a node a name. The USE keyword indicates that a named node should be used again. If several nodes were given the same name, then the last DEF encountered during parsing "wins." DEF/USE is limited to a single file. There is no mechanism for using nodes that are defined in other files. Nodes cannot be shared between files. For example, if a node is defined inside a file referenced by a WWWInline node, the file containing the WWWInline node cannot USE that node.
Rendering the following scene results in three spheres being drawn. Two sphere nodes are given the name "Joe"; the second (smaller) sphere is drawn twice:
#VRML V2.0 utf8
Frame {
    DEF Joe Sphere { }
    Translation { translation 2 0 0 }
    Frame {
        DEF Joe Sphere { radius .2 }
    }
    Translation { translation 2 0 0 }
    USE Joe    # the most recently defined Joe (radius .2) is used here
}
VRML uses a Cartesian, right-handed, 3-dimensional coordinate system. By default, objects are projected onto a 2-dimensional device by projecting them in the direction of the positive Z axis, with the positive X axis to the right and the positive Y axis up. A camera or modeling transformation can be used to alter this default projection.
The standard unit for lengths and distances specified is meters. The standard unit for angles is radians.
VRML scenes may contain an arbitrary number of local (or object-space) coordinate systems, defined by the transformation fields of the Frame node. These fields are translation, rotation, scaleFactor, scaleOrientation, and center.
Given a vertex V and a series of transformations such as:
Frame {
    translation T
    rotation R
    scaleFactor S
    Shape {
        geometry [ PointSet { ... } ]
    }
}
the vertex is transformed into world-space to get V' by applying the transformations in the following order:
V' = T·R·S·V   (if you think of vertices as column vectors)
or
V' = V·S·R·T   (if you think of vertices as row vectors)
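The ordering can be checked with a small numeric sketch (plain Java, not the VRML API; all values are chosen for illustration): a vertex (1, 0, 0) scaled by 2, rotated 90 degrees about +Z, then translated by (0, 0, 3) ends up at (0, 2, 3).

```java
// A worked example of the column-vector order V' = T·R·S·V:
// scale is applied first, then rotation, then translation.
public class TransformOrder {
    // Applies scaleFactor 2, a 90-degree rotation about +Z, and a
    // translation of (0, 0, 3) -- illustrative values only.
    public static double[] apply(double[] v) {
        // S: uniform scale by 2
        double x = v[0] * 2, y = v[1] * 2, z = v[2] * 2;
        // R: rotate 90 degrees about the +Z axis
        double a = Math.PI / 2;
        double rx = Math.cos(a) * x - Math.sin(a) * y;
        double ry = Math.sin(a) * x + Math.cos(a) * y;
        // T: translate by (0, 0, 3)
        return new double[] { rx + 0, ry + 0, z + 3 };
    }

    public static void main(String[] args) {
        double[] out = apply(new double[] { 1, 0, 0 });
        // (1,0,0) -> scaled (2,0,0) -> rotated (0,2,0) -> translated (0,2,3)
        System.out.printf("%.1f %.1f %.1f%n", out[0], out[1], out[2]);
    }
}
```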
Conceptually, VRML also has a world coordinate system. The various local coordinate transformations map objects into the world coordinate system, which is where the scene is assembled. Transformations accumulate downward through the scene graph hierarchy, with each Frame inheriting the transformations of its parents. (Note however, that this series of transformations takes effect from the leaf nodes up through the hierarchy. The local transformations closest to the Shape object take effect first, followed in turn by each successive transformation upward in the hierarchy.)
A camera node operates within the local coordinate system defined by its parent Frame node. Its position in the scene is determined by its location in the scene graph. The camera "sees" everything in the scene that is in front of it, regardless of where those objects are defined in the VRML file.
This specification assumes that there is a user viewing and interacting with the VRML world. It is expected that a future extension to this specification will provide mechanisms for creating multi-participant worlds. The viewing and interaction model that should be used for the single-participant case is described here.
The world creator may place any number of cameras in the world, described using VRML's camera nodes (PerspectiveCamera and OrthographicCamera). Cameras exist in a particular coordinate system, and either the camera or the coordinate system may be animated. Cameras are the same as "viewpoints"--interesting places from which the user might wish to view the world.
It is expected that browsers will support both user-interface and scripting language mechanisms by which users may "teleport" or "attach" (and unattach) themselves from one camera to another. If a user teleports to a camera that is moving (it or one of its parent coordinate systems is animating), then the user should move along with that camera.
The browser may provide a user interface that allows the user to change his or her view, in which case all changes should be relative to the camera in the world to which the user is attached (if any). The cameras in the world are controlled solely by the behaviors in the world; they should not change when the user manually changes the view. The only mechanisms behaviors have for controlling what the user sees is to attach or unattach the user to or from the cameras in the scene.
The browser controls the passage of time in a world by causing TimeSensors to generate events as time passes. Specialized browsers or authoring applications may cause time to pass more quickly or slowly than in the real world, but typically the times generated by TimeSensors will roughly correspond to "real" time.
A world's creator must make no assumptions about how often a TimeSensor will generate events but can safely assume that each time event generated will be greater than any previous time event.
Typically, a TimeSensor affecting a visible (or otherwise perceptible) portion of the world will generate events once per "frame," where a "frame" is one rendering frame or one timestep in a simulation.
Most nodes can receive events, which have names and types corresponding to their fields, with the effect that the corresponding field is changed to the value of the event received. For example, the Frame node can receive setTranslation events (of type SFVec3f) that change the Frame's translation field (it may also receive setRotation events, setScaleFactor events, and so on).
Nodes can also generate events that have names and types corresponding to their fields when those fields are changed. For example, the Frame node generates a translationChanged event when its translation field changes.
The connection between the node generating the event and the node receiving the event is called a route. A node that produces events of a given name (and a given type) can be routed to a node that receives events of the same type using the following syntax:
ROUTE NodeName.eventOutName -> NodeName.eventInName
Routes are not nodes; ROUTE is merely a syntactic construct for establishing event paths between nodes.
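As an illustrative sketch (the node names are invented, and the event names are assumed to follow the setX/xChanged naming pattern described above; they are not normative), a PlaneSensor's drag output could be routed to move a Frame:

```vrml
DEF Dragger PlaneSensor { }
DEF Box Frame {
    Shape { geometry Cube { } }
}
# Drag events from the sensor move the Frame:
ROUTE Dragger.translationChanged -> Box.setTranslation
```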
Sensor nodes generate events. Geometric sensor nodes (BoxProximitySensor, ClickSensor, and PlaneSensor) generate events based on user actions, such as a mouse click or navigating close to a particular object. TimeSensor nodes generate events at regular intervals, as time passes.
The BoxProximitySensor node generates events giving the camera's position and orientation whenever the camera is within a box of a given size centered at a given point. These events can then be routed to other nodes for processing.
The ClickSensor node is associated with a particular piece of geometry in the scene. When the user moves the pointing device over that geometry, a ClickSensor can generate an isOver event. While the pointing device is still pointing to the object, if the user clicks the pointing device's button, the ClickSensor generates several other events indicating where on the object the user clicked.
The PlaneSensor notices when the user clicks and drags a pointing device, interpreting the dragging motion as a translation in two dimensions (parallel to the xy plane of the sensor's local coordinate space). It generates the same events as a ClickSensor as well as translation events indicating where the pointer is being dragged to.
The TimeSensor node is similar to other sensors in that it generates events, but it does so according to a given start and end time, not in response to any user activity. The values of the generated events depend on what mode the TimeSensor is set to use. They can be actual times, or numbers varying over time from 0 to 1, or any of several other possibilities. TimeSensor nodes are not part of the scene graph hierarchy. Like Script nodes, they sit apart from the scene and communicate with the scene and with Script nodes by way of ROUTE statements.
Prototyping is a mechanism that allows the set of node types to be extended from within a VRML file. It allows the encapsulation and parameterization of geometry, behaviors, or both.
A prototype definition consists of the following:
Square brackets enclose the list of events and fields, and braces enclose the definition itself:
PROTO typename [ eventIn eventtypename name
                 eventOut eventtypename name
                 field fieldtypename name defaultValue
                 ... ]
{
    node { ... }
    Script and/or ROUTEs and/or PROTOs
}
A prototype is NOT a node; it merely defines a prototype (named typename) that can be instantiated later in the same file as if it were a built-in node. The implementation of the prototype is contained in the scene graph rooted by node. That node may be followed by Script and/or ROUTE declarations, as necessary to implement the prototype.
The eventIn and eventOut declarations export events from the scene graph rooted by node. Specifying the type of each event in the prototype is intended to prevent errors when the implementation of prototypes is changed and to provide consistency with external prototypes. Events generated or received by nodes in the prototype's implementation are associated with the prototype using the keyword IS. For example, the following statement exposes the built-in setTranslation event by giving it a new name (setPosition) in the prototype interface:
setTranslation IS setPosition
Fields hold the persistent state of VRML objects. Allowing a prototype to export fields allows the initial state of a prototyped object to be specified when an instance of the prototype is created. The fields of the prototype are associated with fields in the implementation using the IS keyword. For example:
translation IS position
A prototype is instantiated as if typename were a built-in node. For example, a simple chair with variable colors for the leg and seat might be prototyped as:
PROTO TwoColorChair [ field MFColor legColor .8 .4 .7
                      field MFColor seatColor .6 .6 .1 ]
{
    Frame {
        Frame {
            DEF seat Material { diffuseColor IS seatColor }
            Cube { ... }
        }
        Frame {
            Transform { ... }
            DEF leg Material { diffuseColor IS legColor }
            Cylinder { ... }
        }
    }    # End of root Frame
}    # End of prototype
The prototype is now defined. Although it contains a number of nodes, only the legColor and seatColor fields are public. Instead of using the default legColor and seatColor, this instance of the chair has red legs and a green seat:
TwoColorChair {
    legColor 1 0 0
    seatColor 0 1 0
}
A prototype instance can be used in the scene graph wherever its root node can be used. For example:
PROTO MyObject [ field ... field ... ]
{
    Frame { ... }
}
can be used wherever a Frame can be used, since the root object of this small hierarchy is a Frame node.
Prototype definitions can be nested. A prototype instance may be DEF'ed or USE'ed. Prototype or DEF names declared inside the prototype are not visible outside the prototype.
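For example, an instance of the TwoColorChair prototype defined above can be named with DEF and reused with USE like any other node (a sketch; the name RedChair is invented):

```vrml
DEF RedChair TwoColorChair { legColor 1 0 0 }
Translation { translation 2 0 0 }
USE RedChair    # a second instance of the same red-legged chair
```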
The set of built-in VRML nodes can be extended using either prototypes or external prototypes. External prototypes provide a way to extend a system in a manner that all browsers will understand. If a new node type is defined as an external prototype, other browsers can parse it and understand what it looks like, or they can ignore it. An external prototype uses the URL syntax to refer to an internal or built-in implementation of a node. For example, suppose your system has a Torus geometry node. This node can be exported to other systems using an external prototype:
EXTERNPROTO Torus [ field SFFloat bigRadius
                    field SFFloat smallRadius ]
[ "internal:Torus", "http://machine/directory/protofile" ]
The browser first looks for its own, internal implementation of the Torus node. If it does not find one, it goes to the next URL and searches for the specified prototype file. In this case, if the file is not found, it ignores the Torus. If more URLs are listed, the browser tries each one until it succeeds in locating an implementation for the node or it reaches the end of the list.
Unlike a prototype, an external prototype does not contain an inline implementation of the node type. Instead, the prototype definition and implementation is found in a set of URLs. The other difference between a prototype and an external prototype is that external prototypes do not contain default values for fields. The external prototype points to a file that contains the prototype implementation, and this file contains the default values.
The syntax for defining prototypes in external files is as follows:
EXTERNPROTO typename [ eventIn eventtypename name
                       eventOut eventtypename name
                       field fieldtypename
                       ... ]
URL or [ URL, URL, ... ]
The external prototype is then given the name typename in this file's scope (allowing possible naming clashes to be avoided). It is an error if the eventIn/eventOut declaration in the EXTERNPROTO is not a subset of the eventIn/eventOut declarations specified in the PROTO referred to by the URL. If multiple URLs are specified, the first one that can be fetched is used.
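Once declared, an external prototype is instantiated by name just like a built-in node. For example, given the Torus declaration shown earlier (the field values here are invented for illustration):

```vrml
Torus {
    bigRadius   2.0
    smallRadius 0.5
}
```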
Check the "File Syntax and Structure" section of this standard for the rules on valid characters in names.
To avoid namespace collisions with nodes defined by other people, any of the following conventions should be followed.
Usually, events from sensors are not directly routed to a node. Some logic is often necessary to decide what effect an event should have on the scene -- "if the vault is currently closed AND the correct combination is entered, THEN open the vault." These kinds of decisions are expressed as Script nodes that take in events, process them, and generate other events. A Script node can also keep track of some information between invocations, "remembering" what its internal state is over time.
The event processing is done by a program contained in (or referenced by) the Script node's behavior field. This program can be written in any programming language that the browser supports, but most scripting will probably be done using Java, as that's the only language all browsers are required to support.
A Script node is activated when it receives an event. At that point the browser executes the program in the Script node's behavior field (passing the program to an external interpreter if necessary). The program can perform a wide variety of actions: sending out events (and thereby changing the scene), performing calculations, communicating with servers elsewhere on the Internet, and so on.
Two of the most common uses for scripts will probably be animation (using interpolators [[edit this if interps don't get put back in]] to smoothly move objects from one position to another) and network operations, connecting to servers to allow multi-user interaction.
[[Should we talk about scripts running as separate threads? What happens if a script doesn't exit? how many copies of a given script can run at once? does a script run concurrently with user interaction, or does the browser finish executing the script before going on?]]
Scripts can be written in a variety of languages, including Java, C, and Perl. However, browsers aren't required to support any language other than Java. In fact, a browser is only required to execute Java bytecode (that is, browsers are not guaranteed to be able to compile Java source code). Two appendices to the Moving Worlds spec describe the bindings for the VRML API in Java and C.
The scriptType field of the Script node indicates what language is used. The only scriptType that all browsers are required to support is "javabc", corresponding to bytecode-compiled Java in base64 format.
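A Script node using Java bytecode might therefore look like the following sketch (the behavior URL and the event declarations are hypothetical, not taken from this spec):

```vrml
DEF VaultLogic Script {
    scriptType "javabc"
    behavior   "vault.class"        # hypothetical URL of compiled Java bytecode
    eventIn    SFBool setCombinationCorrect
    eventOut   SFBool vaultOpened
}
```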
Every time a Script node receives an eventIn, it executes its script. (Scripts aren't executed at any other time.) First, all pending eventIn values are queued. For each queued event, in timestamp order from oldest to newest, the eventIn method or function that has the same name is called. (Any given eventIn calls exactly one method.) When the queue is empty, the eventsProcessed() method of the script is called to do any final post-processing that might be needed. For instance, the eventIn methods can simply collect data, leaving eventsProcessed() to process all the data at once, in order to prevent duplication of work.
After execution of the eventsProcessed() method, values stored during script execution as eventOuts are sent as events, one for each eventOut that was set at least once during script execution. At most one message is sent for each eventOut value, and all eventOuts have the same time stamp.
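The dispatch model above can be sketched as a small self-contained simulation (plain Java, not the actual browser Script API; all class and event names are invented):

```java
import java.util.*;

// A sketch of the Script execution model: queued eventIns are dispatched
// oldest-first, eventsProcessed() runs once, then each eventOut that was set
// during execution is sent exactly once, all with the same timestamp.
public class ScriptModel {
    private static class Event {
        final double time; final String name;
        Event(double time, String name) { this.time = time; this.name = name; }
    }

    private final List<Event> queue = new ArrayList<>();
    private final Set<String> eventOutsSet = new LinkedHashSet<>();
    public final List<String> trace = new ArrayList<>();

    public void queueEventIn(double time, String name) {
        queue.add(new Event(time, name));
    }

    // Stand-in for a per-eventIn method: record the call and set an eventOut.
    private void dispatch(Event e) {
        trace.add("eventIn:" + e.name);
        eventOutsSet.add(e.name + "Changed");    // may be set twice, sent once
    }

    public void execute(double now) {
        queue.sort(Comparator.comparingDouble(e -> e.time));  // oldest first
        for (Event e : queue) dispatch(e);
        queue.clear();
        trace.add("eventsProcessed");
        for (String out : eventOutsSet)                 // one message each,
            trace.add("send:" + out + "@" + now);       // shared timestamp
        eventOutsSet.clear();
    }
}
```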
In languages that allow multiple threads, such as Java, you can use the standard language mechanisms to start new threads. When the browser disposes of the Script node (as, for instance, when the current world is unloaded), it calls the shutdown() method for each currently active thread, to give threads a chance to shut down smoothly. [[should this method be added to the Script class, or is it part of Java?]].
When an exception occurs, the browser passes a string (indicating the type of the exception) to the script's exception handler. In Java, this procedure involves passing the appropriate string to the exception() method. [[should this method be added to the Script class, or is it part of Java?]]
If you want to keep static data in a script (that is, to retain values from one invocation of the script to the next), you can use instance variables -- local variables within the script, declared private. However, the value of such variables can't be relied on if the script is unloaded from the browser's memory; to guarantee that values will be retained, you have to store them in fields of the Script node.
[[pretty much covered above; do we need to say more here about events per se?]]
The API provides a data type in the scripting language for every field type in VRML. For instance, the Java bindings contain a class called SFFloat, which defines methods for getting and setting the value of variables of type SFFloat. A script can get and set the value of its own fields using these data types and methods. For values that can't be changed, the API provides read-only data types (the names of which are all prefixed with Const in the Java binding), the values of which can't be changed by the script. For instance, Java's ConstSFFloat class defines a getValue() method but no setValue() method. For a full listing of Java classes corresponding to fields, see the Java appendix.
The API also provides a way to access other nodes in the scene. It allows getting the value of any field of any named node. [[Not clear to me what the postEventIn() method does, but it ought to be described here.]]
The API provides ways for scripts to find out and change information about the browser. When a browser reads in a scene, it determines certain information based on the fields of the scene's NavigationInfo node. If you want to change that information later, use these browser calls -- changing the fields of the NavigationInfo node via routes wouldn't work even if it were possible. [[I'm assuming the fields will no longer be exposed -- if they stay exposed, this explanation needs to be reworked slightly.]]
Here are descriptions of the functions/methods that the browser API supports. The syntax given is the Java syntax; bindings for other languages are not necessarily supported by all browsers.
public static String getName();
public static String getVersion();
The getName() and getVersion() methods get the "name" and "version" of the browser currently in use. These values are defined by the browser writer, and identify the browser in some (unspecified) way. They are not guaranteed to be unique or to adhere to any particular format, and are for information only. If the information is unavailable these methods return empty strings.
public static String getNavigationType();
public static void setNavigationType(String type) throws Exception;
The getNavigationType() and setNavigationType() methods get and set the navigation type currently in use. An empty string indicates that no navigation is currently being used; such a setting may be useful in cases when the browser has control over the viewer's motion, for instance. A value of "unknown" is returned if the browser performs some sort of navigation but the type cannot be determined. For information on standard navigation types, see the NavigationInfo node section. A navigation type used only by a particular browser should follow the convention specified in the "Naming Conventions" section. [[what Naming Conventions section?]] If the browser does not support the navigation type requested by setNavigationType(), an exception is generated.
public static float getNavigationSpeed();
public static void setNavigationSpeed(float speed);
The getNavigationSpeed() and setNavigationSpeed() methods get and set the "navigation speed" currently in use. Navigation speeds are given in meters per second; the given value indicates the normal or average speed the browser should travel at, not the actual speed the user is traveling at any given moment (see below). The interpretation of navigation speed values beyond that definition is left to the browser.
public static float getCurrentSpeed();
The getCurrentSpeed() method returns the speed at which the viewpoint is currently moving, in meters per second. If speed of motion is not meaningful in the current navigation type, or if the speed cannot be determined for some other reason, 0.0 is returned.
public static float getNavigationScale();
public static void setNavigationScale(float scale);
The getNavigationScale() and setNavigationScale() methods get and set the scale to use for an avatar. This is typically used to determine the size of the avatar surrounding the camera, for collision detection. A value of 1.0 indicates that the "normal" size should be used. [[do we need to define "avatar" somewhere? do we even discuss avatars anywhere else in the spec?]]
public static boolean getHeadlight();
public static void setHeadlight(boolean onOff);
The getHeadlight() and setHeadlight() methods get and set information about the browser's headlight. A headlight is usually a directional-style light, illuminating parallel to the direction that the viewpoint is facing, but different browsers may interpret the headlight in different ways. For more information on headlights, see the headlight field of the NavigationInfo node. [[does a headlight have a location? the original description said "attached to the viewer," but isn't it a DirectionalLight? Or is that browser-dependent?]]
public static String getWorldURL();
public static void loadWorld(String[] url);
The getWorldURL() method returns the URL for the root of the currently loaded world. loadWorld() loads one of the URLs in the passed string array and replaces the current scene root with the VRML file loaded. The browser first attempts to load the first URL in the list; if that fails, it tries the next one, and so on until a valid URL is found or the end of the list is reached. If no URL can be loaded, some browser-specific mechanism is used to notify the user. It's up to the browser whether to block on a loadWorld() until the new URL finishes loading, or whether to return immediately and at some later time (when the load operation has finished) replace the current scene with the new one. [[The original statement here, about whether loadWorld() returns or blocks, was awfully unclear; I think I got the sense of it, but someone should check this to make sure.]]
public static float getCurrentFrameRate();
The getCurrentFrameRate() method returns the current frame rate in frames per second. The way in which this is measured and whether or not it is supported at all is browser dependent. If frame rate is not supported, or can't be determined, 100.0 is returned.
public static Node createVrmlFromURL( String[] url ) throws Exception; public static Node createVrmlFromString( String vrmlSyntax ) throws Exception;
The createVrmlFromURL() method takes the URL of a VRML file and returns the root node of the VRML scene described by that file. [[and what happens to that scene? does it replace the current scene?]]
The createVrmlFromString() method takes a string consisting of a VRML scene description and returns the root node of the corresponding VRML scene.
public void addRoute(Node fromNode, String fromEventOut, Node toNode, String toEventIn) throws Exception; public void deleteRoute(Node fromNode, String fromEventOut, Node toNode, String toEventIn) throws Exception;
These methods respectively add and delete a route between the given event names for the given nodes. An exception is generated if the given nodes do not have the given event names or if an attempt is made to delete a route that does not exist. Note that these methods are part of the browser interface; they add or remove routes from the browser's current internal model of the scene, without changing any actual nodes.
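The file-format counterpart of these methods is the ROUTE statement. As a sketch (the node names and events here are hypothetical, chosen only to illustrate the syntax), a route from a sensor's eventOut to a script's eventIn might be written:

```
DEF Clicker TouchSensor { }
DEF OpenVault Script { ... }    # a Script node with an SFBool vaultClosed eventIn

# Connect the sensor's boolean output event to the script's input event:
ROUTE Clicker.isActive TO OpenVault.vaultClosed
```

The addRoute() and deleteRoute() calls manipulate exactly this kind of connection, but at run time rather than in the file.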
public void bindBackground(Node background); public void unbindBackground(); public boolean isBackgroundBound(Node background);
bindBackground() allows a script to specify which Background node should be used to provide a backdrop for the scene. Once a Background node has been bound, isBackgroundBound() indicates whether a given Background node is the currently bound one, and unbindBackground() restores the Background node in use before the previous bind. If unbindBackground() is called when nothing is bound, nothing happens. Changing the fields of a currently bound Background node changes the currently displayed background.
public void bindNavigationInfo(Node navigationInfo); public void unbindNavigationInfo(); public boolean isNavigationInfoBound(Node navigationInfo);
bindNavigationInfo() allows a script to specify which NavigationInfo node should be used to provide hints to the browser about how to navigate through a scene. Once a NavigationInfo node has been bound, isNavigationInfoBound() indicates whether a given node is the currently bound one, and unbindNavigationInfo() restores the NavigationInfo node in use before the previous bind. If unbindNavigationInfo() is called when nothing is bound, nothing happens. A script can change the fields of a NavigationInfo node using events and routes. Changing the fields of a currently bound NavigationInfo node changes the associated parameters used by the browser.
public void bindViewpoint(Node viewpoint); public void unbindViewpoint(); public boolean isViewpointBound(Node viewpoint);
In some cases, a script may need to manipulate the user's current view of the scene. For instance, if the user enters a vehicle (such as a roller coaster or elevator), the vehicle's motion should also be applied to the viewer. bindViewpoint() provides a way to bind the viewer to a given Viewpoint node. This binding doesn't itself change the viewer location or orientation; instead, it changes the fields of the given Viewpoint node to correspond to the current viewer location and orientation. (It also places the viewer in the coordinate space of the given Viewpoint node.) Once a Viewpoint is bound, the script can animate the transformation fields of the Frame that the Viewpoint is in (probably using an interpolator to generate values) and move the viewer through the scene.
Note that scripts should animate the Viewpoint's frame of reference (the transformation of the enclosing Frame) rather than the Viewpoint itself, in order to allow the user to move the viewer a little during transit (for instance, to let the user walk around inside the elevator while it's between floors). Fighting with the user for control of the viewer is a bad idea.
Note also that results are undefined for vehicle travel if the user is allowed to move out of the vehicle while the animation is running. This problem is best resolved by using collision detection to prevent the user leaving the vehicle while it's in motion. Another option is to turn off the browser's user interface during animation by setting the current navigation type to "none".
When the script has finished transporting the user, unbindViewpoint() releases the viewer from the influence of the currently bound Viewpoint, returning the viewer to the coordinate space of the previous viewpoint binding (or the base coordinate system of the scene if there's no previous binding). The fields of the now-unbound Viewpoint node return to the values they had before the binding. [[Or do they return to the original values they had at file read-in? position and orientation are exposedFields, so those two things are not necessarily the same.]]
And of course isViewpointBound() returns TRUE if the specified Viewpoint node is currently bound to the viewer (which implies that the fields of that Viewpoint node indicate the current position and orientation of the viewer). The method returns FALSE if the specified Viewpoint is not bound.
[[Unsure what to say here.]]
[[should this supplant the example in the Java API appendix?]]
[[Someone ought to check the syntax here to be sure it's right.]]
A Script node that decided whether or not to open a bank vault might receive vaultClosed and combinationEntered messages, produce openVault messages, and remember the correct combination and whether or not the vault is currently open. The VRML for this Script node might look like this:
DEF OpenVault Script {
    # Declarations of what's in this Script node:
    eventIn  SFBool   vaultClosed
    eventIn  SFString combinationEntered
    eventOut SFBool   openVault
    field    SFString correctCombination "43-22-9"
    field    SFBool   currentlyOpen FALSE

    # Implementation of the logic:
    scriptType "javabc"
    behavior "data:java bytecodes in base64 format go here"
}
The bytecodes in the behavior field might be a compiled version of the following Java source code:
import vrml;

class VaultScript extends Script {
    // Declare fields
    private SFBool   currentlyOpen      = (SFBool) getField("currentlyOpen");
    private SFString correctCombination = (SFString) getField("correctCombination");

    // Declare eventOuts
    private SFBool openVault = (SFBool) getEventOut("openVault");

    // Handle eventIns
    public void vaultClosed() {
        currentlyOpen.setValue(false);
    }

    public void combinationEntered(ConstSFString combo) {
        if (currentlyOpen.getValue() == false &&
                combo.getValue().equals(correctCombination.getValue())) {
            currentlyOpen.setValue(true);
            openVault.setValue(true);
        }
[[could do above in eventsProcessed() instead, but would require another field, right? In order to pass the data to eventsProcessed(), I mean.]]
    }

    public void eventsProcessed() {
    }
}
January 29, 1996
This section provides a detailed description of each node in VRML 2.0. It is organized by functional group. Nodes within each group are listed alphabetically. (An Index of Nodes is included at the end of this document.)
Intrinsic nodes are nodes whose functionality cannot be duplicated by any combination of other nodes; they form the core functionality of VRML. The functional groups used in this section are as follows:
These nodes provide common functionality that all VRML implementations are required to support but that can be created using one or more of the intrinsic nodes. A reference PROTO implementation is given for these nodes. (Note: we didn't have time before the VRML 2.0 RFP to do all implementations; for several nodes we just sketch out what the PROTO would look like.)
The last item in each node description is the public interface for the node, with default values. (The syntax for the public interface is the same as that for prototypes.) For example:
DirectionalLight {
    exposedField SFBool  on        TRUE
    exposedField SFFloat intensity 1
    exposedField SFColor color     1 1 1
    exposedField SFVec3f direction 0 0 -1
}
Fields that have associated implicit set_ and _changed events are labeled exposedField. For example, the on field has a set_on input event and an on_changed output event.
Note that this information is arranged in a slightly different manner in the file format for each node. Using the same example, the file format would be
DirectionalLight {
    on        TRUE
    intensity 1
    color     1 1 1
    direction 0 0 -1
}
The file format for nodes lists field names and their values but does not indicate field types or exposed fields.
Grouping nodes can contain other grouping nodes or leaf nodes as children. Grouping nodes include the Collision, Group, LOD, Frame, Switch, WWWAnchor, and WWWInline nodes.
The children of a grouping node are specified using an MFNode field.
The Collision grouping node specifies to a browser what objects in the scene should not be navigated through. It is useful to keep viewers from walking through walls in a building, for instance. Collision response is browser-defined. For example, when the user comes sufficiently close to an object to register as a collision, the browser may have the user bounce off the object or simply come to a stop.
The children of a Collision node are drawn as if the Collision node was a Group.
By default, collision detection is ON. The collide field in this node allows collision detection to be turned off, in which case the children of the Collision node will be "invisible" to collisions, even though they will still be drawn.
Since collision with arbitrarily complex geometry is computationally expensive, one method of increasing efficiency is to be able to define an alternate geometry that could serve as a proxy for colliding against. This collision proxy, contained in the proxy field, could be as crude as a simple bounding box or bounding sphere, or could be more sophisticated (for example, the convex hull of a polyhedron).
If the value of the collide field is FALSE, then no collision is performed with the affected geometry. If the value of the collide field is TRUE, then the proxy field defines the geometry against which collision testing is done. If the proxy value is NULL, the children of the collision node are collided against. If the proxy value is not NULL, then it contains the geometry that is used in collision computations.
If the children field is empty, collide is TRUE, and a proxy is specified, then collision detection is done against the proxy but nothing is displayed. This is a way of colliding against "invisible" geometry.
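For example, a detailed object might use a simple cube as its collision proxy, sparing the browser from colliding against the full geometry (the node contents here are only sketched):

```
Collision {
    collide TRUE
    proxy Shape {                  # crude stand-in used only for collision tests
        geometry Cube { }
    }
    children [
        Shape { ... }              # the detailed geometry actually drawn
    ]
}
```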
The collision eventOut will generate an event containing the time when the path of the user through the scene intersected a geometry against which collisions are being checked. An ideal implementation would compute the exact moment of intersection, but implementations may approximate the ideal by sampling the positions of geometries and the viewer.
Collision {
    exposedField SFBool collide  TRUE
    field        SFNode proxy    NULL
    exposedField MFNode children [ ]
    eventOut     SFTime collision
}
A Frame is a grouping node that defines a coordinate system for its children and inherits the transformations of its parents. A Frame's children can include any leaf nodes: lights, viewpoints, sounds, shapes, and browser information nodes. See also "Coordinate Systems and Transformations."
The bboxCenter and bboxSize fields may be used to specify a maximum possible bounding box for the objects inside this Frame. These are hints to the browser that it may use to optimize certain operations such as determining whether or not the Frame needs to be drawn. If the specified bounding box is smaller than the true bounding box of the Frame, results are undefined.
The add_children event adds the nodes passed in to the Frame's children field. Any nodes passed in the add_children event that are already in the Frame's children list are simply ignored. The remove_children event removes the nodes passed in from the Frame's children field. Any nodes passed in the remove_children event that are not in the Frame's children list are simply ignored.
The translation, rotation, scale, scaleOrientation and center fields define a geometric 3D transformation consisting of (in order) a (possibly) non-uniform scale about an arbitrary point, a rotation about an arbitrary point and axis, and a translation. The Frame node:
Frame {
    translation      T1
    rotation         R1
    scale            S
    scaleOrientation R2
    center           T2
    ...
}
is equivalent to the nested sequence of:
Frame { translation T1
  Frame { translation T2
    Frame { rotation R1
      Frame { rotation R2
        Frame { scale S
          Frame { rotation -R2
            Frame { translation -T2
              ...
}}}}}}}

Frame {
    field        SFVec3f    bboxCenter       0 0 0
    field        SFVec3f    bboxSize         0 0 0
    exposedField SFVec3f    translation      0 0 0
    exposedField SFRotation rotation         0 0 1 0
    exposedField SFVec3f    scale            1 1 1
    exposedField SFRotation scaleOrientation 0 0 1 0
    exposedField SFVec3f    center           0 0 0
    exposedField MFNode     children         [ ]
    eventIn      MFNode     add_children
    eventIn      MFNode     remove_children
}
This section describes the leaf nodes in detail and is organized into the following subsections:
This functional group includes nodes that provide information to the browser (Background, NavigationInfo, Viewpoint, and WorldInfo).
The Background node is used to specify a ground and sky plane as well as an environment texture, or panorama, that is placed behind all geometry in the scene and in front of the ground and sky planes.
The groundRange field is a list of floating point values that indicate the cutoff for each groundColor value. Its implicit initial value is 0 radians (downward), and the last value is the elevation angle of the ground plane. The skyRange field implicitly starts at 0 radians (upward) and works its way down to pi radians. If groundColor is empty, no ground plane exists.
The panorama is the image that is wrapped around the user; it specifies a full sphere around the user. An alpha value in the panorama allows the author to make parts of the panorama transparent, so that the groundColor and skyColor values show through.
If multiple URLs are specified for the panorama field, then this expresses a descending order of preference. A browser may display a URL for a lower-preference file while it is obtaining, or if it is unable to obtain, the higher-preference file. See also the section on URNs.
The first Background node found during reading of the world is used. Subsequent Background nodes are ignored.
Neither the ground plane nor the panoramic images translate with respect to the viewer; this allows the simple implementation of texturing a cube with the projected images. Thus, the panorama may be interpreted as the six faces of a texture-mapped cube.
Background {
    field MFColor  groundColor [ 0.14 0.28 0.00,    # light green
                                 0.09 0.11 0.00 ]   # to dark green
    field MFFloat  groundRange [ .785 ]             # horizon = 45 degrees
    field MFColor  skyColor    [ 0.02 0.00 0.26,    # twilight blue
                                 0.02 0.00 0.65 ]   # to light blue
    field MFFloat  skyRange    [ .785 ]             # horizon = 45 degrees
    field MFString panorama    [ ]
}
The NavigationInfo node contains information for the viewer through several fields: type, speed, collisionRadius, and headlight.
The type field specifies a navigation paradigm to use. The types that all VRML viewers should support are "walk", "examiner", "fly", and "none". A walk viewer would constrain the user to a plane (x-z), suitable for architectural walkthroughs. An examiner viewer would let the user tumble the entire scene, suitable for examining single objects. A fly viewer would provide six-degree-of-freedom movement. The "none" choice removes all viewer controls, forcing the user to navigate using only WWWAnchors linked to viewpoints. The type field is multi-valued so that authors can specify fallbacks in case a browser does not understand a given type.
The speed is the rate at which the viewer travels through a scene in meters per second. Since viewers may provide mechanisms to travel faster or slower, this should be the default or average speed of the viewer. In an examiner viewer, this only makes sense for panning and dollying--it should have no effect on the rotation speed.
The collisionRadius field specifies the smallest allowable distance between the user's position and any collision object (as specified by Collision) before a collision is detected.
The headlight field specifies whether a browser should turn a headlight on. A headlight is a directional light that always points in the direction the user is looking. Setting this field to TRUE allows the browser to provide a headlight, possibly with user interface controls to turn it on and off. Scenes that use precomputed lighting (e.g., radiosity solutions) can specify the headlight off here. The headlight should have intensity 1, color 1 1 1, and direction 0 0 -1.
NavigationInfo {
    field MFString type            "walk"
    field SFFloat  speed           1.0
    field SFFloat  collisionRadius 1.0
    field SFBool   headlight       TRUE
}
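Because the type field is multi-valued, an author can list fallbacks. For instance, a scene intended for an examiner-style viewer might specify (values here are illustrative):

```
NavigationInfo {
    type  [ "examiner", "fly" ]   # use "examiner" if supported, else "fly"
    speed 2.0                     # average travel speed, in meters per second
}
```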
The Viewpoint node defines an interesting location in a local coordinate system from which the user might wish to view the scene. Viewpoints may be animated, and Script nodes may "bind" the user to a particular viewpoint using Script API calls to the browser. A world creator can automatically move the user's view through the world by binding the user to a viewpoint and then animating that viewpoint.
The position and orientation fields of the Viewpoint node specify relative locations in the local coordinate system. Position is relative to the coordinate system's origin (0,0,0), while orientation specifies a rotation relative to the default orientation; the default orientation has the user looking down the -Z axis with +X to the right and +Y straight up. Note that the single orientation rotation (which is a rotation about an arbitrary axis) is sufficient to completely specify any combination of view direction and "up" vector.
The fieldOfView field specifies a preferred field of view from this viewpoint, in radians. A smaller field of view corresponds to a zoom lens on a camera; a larger field of view corresponds to a wide-angle lens on a camera. The field of view should be greater than zero and smaller than PI; the default value corresponds to a 45 degree field of view. It is a hint to the browser and may be ignored.
A viewpoint can be placed in a VRML world to specify the initial location of the viewer when that world is entered. Browsers should recognize the URL syntax "..../scene.wrl#ViewpointName" as specifying that the user's initial view when entering the "scene.wrl" world should be the viewpoint named "ViewpointName".
The description field of the viewpoint may be used by browsers that provide a way for users to travel between viewpoints. The description should be kept brief, since browsers will typically display lists of viewpoints as entries in a pull-down menu, etc.
Viewpoint {
    exposedField SFVec3f    position    0 0 0
    exposedField SFRotation orientation 0 0 1 0
    exposedField SFFloat    fieldOfView 0.785
    field        SFString   description ""
}
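For example, a named entry viewpoint that a browser could list in its viewpoint menu, and that a URL of the form "scene.wrl#Entrance" could select, might look like this (the values are illustrative):

```
DEF Entrance Viewpoint {
    position    0 1.6 10          # ten meters back from the origin, at eye height
    orientation 0 0 1  0          # default orientation: looking down the -Z axis
    description "Front entrance"
}
```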
The WorldInfo node contains information about the world. The title of the world is stored in its own field, allowing browsers to display it--for instance, in their window border. Any other information about the world can be stored in the info field--for instance, the scene author, copyright information, and public domain information.
WorldInfo {
    field SFString title ""
    field MFString info  ""
}
This grouping includes nodes that light the scene (DirectionalLight, PointLight, and SpotLight).
The DirectionalLight node defines a directional light source that illuminates along rays parallel to a given 3-dimensional vector.
A directional light source illuminates only the objects in its coordinate system, as defined by the enclosing Frame node. The light illuminates everything within this coordinate system, including the objects that precede it in the scene graph--for example:
Frame {
    Shape { ... }
    DirectionalLight { .... }    # lights the preceding shape
}

DirectionalLight {
    exposedField SFBool  on        TRUE
    exposedField SFFloat intensity 1
    exposedField SFColor color     1 1 1
    exposedField SFVec3f direction 0 0 -1
}
The Fog node defines an axis-aligned ellipsoid of dense, colored atmosphere. The size field defines the size of this foggy region in the local coordinate system. The maxVisibility field defines the density of the fog in this region; if there is more than maxVisibility fog between the viewer and an object then that object is completely obscured by the fog. The color field may be used to simulate different kinds of atmospheric effects by changing the fog's color.
An ideal implementation of fog would compute exactly how much attenuation occurs between the viewer and every object rendered, and render the scene appropriately. However, implementations are free to approximate this ideal behavior, perhaps by computing the intersection of the viewing direction vector with any foggy regions and computing some overall fogging parameters each time the scene is rendered.
Fog {
    exposedField SFVec3f size          0 0 0
    exposedField SFFloat maxVisibility 1
    exposedField SFColor color         1 1 1
}
The PointLight node defines a point light source at a fixed 3D location. A point source illuminates equally in all directions; that is, it is omni-directional.
A point light illuminates everything within radius of its location. Its illumination should drop off to zero at a distance of radius, with the drop-off curve controlled by the dropOffRate field (a dropOffRate of zero gives constant illumination out to radius, one gives linear attenuation, two gives distance-squared drop-off, and so on).
PointLight {
    exposedField SFBool  on          TRUE
    exposedField SFFloat intensity   1
    exposedField SFColor color       1 1 1
    exposedField SFVec3f location    0 0 1
    exposedField SFFloat radius      1
    exposedField SFFloat dropOffRate 0
}
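The dropOffRate values can be read as exponents on a normalized distance term. One plausible, non-normative implementation sketch in Java (the class and method names here are hypothetical; the spec leaves the exact attenuation curve to the browser):

```java
public class LightAttenuation {
    // Illumination scale factor at distance d from the light's location.
    // dropOffRate 0 yields constant illumination out to radius, 1 yields
    // linear attenuation, 2 yields distance-squared drop-off, and so on.
    // Beyond radius the light contributes nothing.
    public static double attenuation(double d, double radius, double dropOffRate) {
        if (d >= radius) {
            return 0.0;
        }
        return Math.pow(1.0 - d / radius, dropOffRate);
    }

    public static void main(String[] args) {
        // Halfway out with linear attenuation: half intensity.
        System.out.println(attenuation(0.5, 1.0, 1.0));
    }
}
```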
The SpotLight node defines a light source that is placed at a fixed location in 3-space and illuminates in a cone along a particular direction.
The cone of light extends a maximum distance of radius from the given location; the intensity of illumination should drop off exponentially as this distance is reached, with the rate of drop-off controlled by the dropOffRate field.
The intensity of the illumination drops off exponentially as a ray of light diverges from this direction toward the edges of the cone. The rate of drop-off and the angle of the cone are controlled by the dropOffRate and cutOffAngle fields.
SpotLight {
    exposedField SFBool  on          TRUE
    exposedField SFFloat intensity   1
    exposedField SFColor color       1 1 1
    exposedField SFVec3f location    0 0 0
    exposedField SFVec3f direction   0 0 -1
    exposedField SFFloat radius      1
    exposedField SFFloat dropOffRate 0
    exposedField SFFloat cutOffAngle 0.785398
}
The Sound functional grouping includes the DirectedSound and PointSound nodes.
ISSUE: What sound file formats should be required?
The DirectedSound node describes a sound which emits primarily in the direction defined by the direction vector. Where minRange and maxRange determine the extent of a PointSound, the extent of a DirectedSound is determined by four fields: minFront, minBack, maxFront, and maxBack.
Around the location of the emitter, minFront and minBack determine the extent of the ambient region in front of and behind the sound. If the location of the sound is taken as a focus of an ellipse, and the minBack and minFront values (in combination with the direction vector) as determining the two vertices, these three points describe an ellipse bounding the ambient region of the sound. Similarly, maxFront and maxBack determine the limits of audibility in front of and behind the sound; they describe a second, outer ellipse.
The inner ellipse is analogous to the sphere determined by the minRange field in the PointSound definition: within this ellipse, the sound is non-directional, with constant and maximal intensity. The outer ellipse is analogous to the sphere determined by the maxRange field in the PointSound definition and represents the limits of audibility of the sound. Between the two ellipses, the intensity drops off proportionally with distance and the sound is localized in space.
One advantage of this model is that a DirectedSound behaves as expected when approached from any angle; the intensity increases smoothly even if the emitter is approached from the back.
See the PointSound node for a description of all other fields.
DirectedSound {
    field        MFString name        ""
    field        SFString description ""
    exposedField SFFloat  intensity   1
    exposedField SFVec3f  location    0 0 0
    exposedField SFVec3f  direction   0 0 1
    exposedField SFFloat  minFront    10
    exposedField SFFloat  maxFront    10
    exposedField SFFloat  minBack     10
    exposedField SFFloat  maxBack     10
    exposedField SFBool   loop        FALSE
    exposedField SFTime   start       0
    exposedField SFTime   pause       0
}
This functional group includes only one node, the Shape node.
A Shape node has two fields: appearance and geometry. These fields, in turn, contain other nodes. The appearance field contains an Appearance node that has material, texture, and textureTransform fields (see the Appearance node). The geometry field contains a geometry node. See Subsidiary Nodes.
Shape {
    field SFNode appearance NULL
    field SFNode geometry   NULL
}
The following groups of nodes are only used in fields within other nodes. They cannot stand alone in the scene graph.
A Shape node contains one geometry node in its geometry field. This node can be a Cone, Cube, Cylinder, ElevationGrid, GeneralCylinder, IndexedFaceSet, IndexedLineSet, PointSet, Sphere, or Text node. A geometry node can appear only in the geometry field of a Shape node. Geometry nodes usually contain Coordinate3, Normal, and TextureCoordinate2 nodes in specified SFNode fields. All geometry nodes are specified in a local coordinate system determined by the geometry's parent node(s).
The ccw field indicates whether the vertices are ordered in a counter-clockwise direction when the shape is viewed from the outside (TRUE). If the order is clockwise or unknown, this field value is FALSE. The solid field indicates whether the shape encloses a volume (TRUE). If nothing is known about the shape, this field value is FALSE. The convex field indicates whether all faces in the shape are convex (TRUE). If nothing is known about the faces, this field value is FALSE.
These hints allow VRML implementations to optimize certain rendering features. Optimizations that may be performed include enabling backface culling and disabling two-sided lighting. For example, if an object is solid and has ordered vertices, an implementation may turn on backface culling and turn off two-sided lighting. If the object is not solid but has ordered vertices, it may turn off backface culling and turn on two-sided lighting.
The IndexedFaceSet node represents a 3D shape formed by constructing faces (polygons) from vertices listed in the coord field. The coord field must contain a Coordinate3 node. IndexedFaceSet uses the indices in its coordIndex field to specify the polygonal faces. An index of -1 indicates that the current face has ended and the next one begins. The Coordinate3 node must contain at least as many vertex coordinates as the greatest index in the coordIndex field.
For descriptions of the coord, normal, and texCoord fields, see the Coordinate3, Normal, and TextureCoordinate2 nodes.
If the color field is not NULL, then it must contain a Color node, whose colors are applied to the vertices or faces of the IndexedFaceSet according to the colorPerFace and colorIndex fields.
If the normal field is NULL, then the browser should automatically generate normals, using creaseAngle to determine if and how normals are smoothed across shared vertices.
If the normal field is not NULL, then it must contain a Normal node, whose normals are applied to the vertices or faces of the IndexedFaceSet in a manner exactly equivalent to that described above for applying colors to vertices/faces.
If the texCoord field is not NULL, then it must contain a TextureCoordinate2 node. The texture coordinates in that node are applied to the vertices of the IndexedFaceSet according to the textureCoordIndex field.
If the texCoord field contains NULL, a default texture coordinate mapping is calculated using the bounding box of the shape. The longest dimension of the bounding box defines the S coordinates, and the next longest defines the T coordinates. If two or all three dimensions of the bounding box are equal, then ties should be broken by choosing the X, Y, or Z dimension in that order of preference. The value of the S coordinate ranges from 0 to 1, from one end of the bounding box to the other. The T coordinate ranges between 0 and the ratio of the second greatest dimension of the bounding box to the greatest dimension.
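The default mapping above can be expressed numerically. This hypothetical helper (not part of any VRML API) returns the maximum S and T values for a given bounding box; the X/Y/Z tie-breaking preference is omitted since it does not affect the ratio:

```java
import java.util.Arrays;

public class DefaultTexCoords {
    // Returns { sMax, tMax } for the default texture mapping: S spans 0..1
    // along the longest bounding-box dimension; T spans 0..(second/longest).
    public static double[] textureRange(double dx, double dy, double dz) {
        double[] dims = { dx, dy, dz };
        Arrays.sort(dims);                  // dims[2] = longest, dims[1] = second
        return new double[] { 1.0, dims[1] / dims[2] };
    }

    public static void main(String[] args) {
        // A 4 x 2 x 1 box: S runs 0..1, T runs 0..0.5.
        double[] r = textureRange(4.0, 2.0, 1.0);
        System.out.println(r[0] + " " + r[1]);
    }
}
```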
See the introductory Geometry section for a description of the ccw, solid, convex, and creaseAngle fields.
IndexedFaceSet {
    exposedField SFNode  coord             NULL
    exposedField SFNode  color             NULL
    exposedField SFNode  normal            NULL
    exposedField SFNode  texCoord          NULL
    field        MFInt32 coordIndex        [ ]
    field        MFInt32 colorIndex        [ ]
    field        SFBool  colorPerFace      FALSE
    field        MFInt32 normalIndex       [ ]
    field        SFBool  normalPerFace     FALSE
    field        MFInt32 textureCoordIndex [ ]
    field        SFBool  ccw               TRUE
    field        SFBool  solid             TRUE
    field        SFBool  convex            TRUE
    field        SFFloat creaseAngle       0
}
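A minimal sketch of an IndexedFaceSet in use: two triangles forming a unit square, wrapped in a Shape node as described earlier (this assumes the Coordinate3 node carries its vertices in a point field, as in VRML 1.0):

```
Shape {
    geometry IndexedFaceSet {
        coord Coordinate3 {
            point [ 0 0 0,  1 0 0,  1 1 0,  0 1 0 ]
        }
        coordIndex [ 0, 1, 2, -1,    # first triangle; -1 ends the face
                     0, 2, 3, -1 ]   # second triangle
    }
}
```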
This node represents a 3D shape formed by constructing polylines from vertices listed in the coord field. IndexedLineSet uses the indices in its coordIndex field to specify the polylines. An index of -1 indicates that the current polyline has ended and the next one begins.
For a description of the coord field, see the Coordinate3 node.
Lines are not texture-mapped or affected by light sources.
If the color field is not NULL, it must contain a Color node, and the colors are applied to the polyline(s) according to the colorPerLine and colorIndex fields.
IndexedLineSet {
    exposedField SFNode  coord        NULL
    exposedField SFNode  color        NULL
    field        MFInt32 coordIndex   [ ]
    field        MFInt32 colorIndex   [ ]
    field        SFBool  colorPerLine FALSE
}
The PointSet node represents a set of points listed in the coord field. PointSet uses the coordinates in order. The number of points in the set is specified by the numPoints field.
Points are not texture-mapped or affected by light sources.
If the color field is not NULL, it must contain a Color node that contains at least numPoints colors. Colors are always applied to each point in order.
PointSet {
    exposedField SFNode  coord     NULL
    field        SFInt32 numPoints 0
    field        SFNode  color     NULL
}
The Sphere node represents a sphere. By default, the sphere is centered at the origin and has a radius of 1.
Spheres generate their own normals. When a texture is applied to a sphere, the texture covers the entire surface, wrapping counterclockwise from the back of the sphere. The texture has a seam at the back on the yz-plane.
Sphere {
    exposedField SFFloat radius 1
}
The Text node represents one or more text strings specified using the UTF-8 encoding of the ISO 10646 character set (UTF-8 encoding is described below). An important note is that ASCII is a subset of UTF-8, so any ASCII string is also a valid UTF-8 string.
The text strings are contained in the string field. The fontStyle field contains one FontStyle node that specifies the font size, font family and style, direction of the text strings, and any specific language rendering techniques that must be used for non-English text.
The justify field determines where the text is positioned in relation to the origin (0,0,0) of the object coordinate system. The values for the justify field are 0 (beginning), 1 (end), and 2 (center). For a left-to-right direction, 0 specifies left-justified text, 1 specifies right-justified text, and 2 specifies centered text. See the FontStyle node for details of text placement.
The spacing field determines the spacing between multiple text strings. The size field of the FontStyle node specifies the height (in object space units) of glyphs rendered and determines the vertical spacing of adjacent lines of text. All subsequent strings advance in either x or y by -( size * spacing). A value of 0 for spacing causes the string to be in the same position. A value of -1 causes subsequent strings to advance in the opposite direction.
The maxExtent field limits and scales the text string if the natural length of the string is longer than the maximum extent. If the text string is shorter than the maximum extent, it is not scaled. The maximum extent is measured horizontally for horizontal text (FontStyle node: horizontal=TRUE) and vertically for vertical text (FontStyle node: horizontal=FALSE).
The width field contains an MFFloat value that specifies the width of each text string. If the string is too short, it is stretched (either by scaling the text itself or by adding space between the characters). If the string is too long, it is compressed. If a width value is missing--for example, if there are four strings but only three width values--the missing values are considered to be 0.
For both the maxExtent and width fields, specifying a value of 0 indicates to allow the string to be any width.
Textures are applied to 3D text as follows. The texture origin is at the origin of the first string, as determined by the justification. The texture is scaled equally in both S and T dimensions, with the font height representing 1 unit. S increases to the right, T increases up.
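As an illustrative sketch (all field values here are examples, not defaults), a Text node that centers two strings and limits their extent might be written:

```
Text {
  string    [ "VRML", "adds a dimension!" ]
  justify   2         # centered text
  spacing   1.0
  maxExtent 8.0       # compress any string longer than 8 units
  width     [ 0, 5 ]  # first string at natural width, second stretched to 5 units
}
```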
UTF-8 Character Encodings
The 2 byte (UCS-2) encoding of ISO 10646 is identical to the Unicode standard. References for both ISO 10646 and Unicode are given in the references section at the end.
In order to avoid introducing binary data into VRML, we have chosen to support the UTF-8 encoding of ISO 10646. This encoding allows ASCII text (0x0..0x7F) to appear without any changes and encodes all characters from 0x80..0x7FFFFFFF into a series of six or fewer bytes.
If the most significant bit of the first byte is 0, then the remaining seven bits are interpreted as an ASCII character. Otherwise, the number of leading 1 bits indicates the number of bytes in the encoding. There is always a 0 bit between the count bits and the data bits.
The first byte has one of the following forms, where each X marks a bit available to encode the character:

0XXXXXXX  one byte     0..0x7F (ASCII)
110XXXXX  two bytes    maximum character value is 0x7FF
1110XXXX  three bytes  maximum character value is 0xFFFF
11110XXX  four bytes   maximum character value is 0x1FFFFF
111110XX  five bytes   maximum character value is 0x3FFFFFF
1111110X  six bytes    maximum character value is 0x7FFFFFFF
All following bytes have this format: 10XXXXXX
A two-byte example: the symbol for a registered trademark ("circled R registered sign") is character 174 in both ISO Latin-1 (8859-1) and ISO 10646. In hexadecimal it is 0xAE; in HTML it is written &#174;. In UTF-8 it has the two-byte encoding 0xC2, 0xAE.
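As a sketch, a VRML file written with raw UTF-8 bytes could include such a character directly in a Text string; the two file bytes 0xC2 0xAE between the quotes are displayed as a single glyph:

```
Text {
  # The UTF-8 byte sequence C2 AE inside the quotes encodes
  # ISO 10646 character 174, the registered sign.
  string "VRML®"
}
```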
Text {
  exposedField MFString string    ""
  field        SFNode   fontStyle NULL
  field        SFInt32  justify   0
  field        SFFloat  spacing   1.0
  exposedField SFFloat  maxExtent 0.0
  field        MFFloat  width     [ ]
}
Geometric properties are always contained in the corresponding SFNode fields of geometry nodes such as the IndexedFaceSet, IndexedLineSet, PointSet, and ElevationGrid nodes.
This node defines a set of RGB colors to be used in the color fields of the IndexedFaceSet, IndexedLineSet, PointSet, Cone, and Cylinder nodes.
Color nodes are only used to specify multiple colors for a single piece of geometry, such as a different color for each face or vertex. A Material node should be used to specify the overall material parameters of a geometry. If both a Material and a Color node are specified for a geometry, the colors should ideally replace the diffuse component of the material.
Textures take precedence over colors; specifying both a Texture and a Color node for a geometry will result in the Color node being ignored.
Note that some browsers may not support this functionality, in which case an average color should be computed and used instead.
Color { exposedField MFColor rgb [] }
This node defines a set of 3D coordinates to be used in the coord field of an IndexedFaceSet, IndexedLineSet, or PointSet node.
Coordinate3 { exposedField MFVec3f point [] }
This node defines a set of 3D surface normal vectors to be used in the normal field of vertex-based shape nodes (IndexedFaceSet, IndexedLineSet, PointSet, ElevationGrid). This node contains one multiple-valued field that contains the normal vectors.
To save network bandwidth, it is expected that implementations will be able to automatically generate appropriate normals if none are given. However, the results will vary from implementation to implementation.
Normal { exposedField MFVec3f vector [] }
This node defines a set of 2D coordinates to be used in the texCoord field to map textures to the vertices of PointSet, IndexedLineSet, IndexedFaceSet, and ElevationGrid objects.
Texture coordinates range from 0 to 1 across the texture. The horizontal coordinate, called S, is specified first, followed by the vertical coordinate, T.
TextureCoordinate2 { exposedField MFVec2f point [] }
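As an illustrative sketch, a unit quad with a texture coordinate at each corner might be written as follows (the coordIndex field belongs to the IndexedFaceSet node, which is described elsewhere in this specification):

```
IndexedFaceSet {
  coord      Coordinate3        { point [ 0 0 0, 1 0 0, 1 1 0, 0 1 0 ] }
  coordIndex [ 0, 1, 2, 3, -1 ]
  # S runs left to right, T runs bottom to top across the texture
  texCoord   TextureCoordinate2 { point [ 0 0, 1 0, 1 1, 0 1 ] }
}
```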
The Appearance node occurs only within the appearance field of a Shape node. The value for any of the fields in this node can be NULL. However, if the field contains anything, it must contain one specific type of node. Specifically, the material field, if specified, must contain a Material node. The texture field, if specified, must contain a Texture2 node. The textureTransform field, if specified, must contain a Texture2Transform node.
Appearance {
  exposedField SFNode material         Material {}
  exposedField SFNode texture          NULL
  exposedField SFNode textureTransform NULL
}
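For example, a Shape might combine a Material and a Texture2 in its appearance field. This is a sketch only: the texture URL is hypothetical, and the Shape node's appearance and geometry fields are as described elsewhere in this specification:

```
Shape {
  appearance Appearance {
    material Material { diffuseColor 0.8 0.2 0.2 }
    texture  Texture2 { filename "brick.jpg" }  # hypothetical URL
  }
  geometry Sphere { }
}
```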
The Material, Texture2, and Texture2Transform appearance property nodes are always contained within the appearance field of an Appearance node. The FontStyle node is always contained in the fontStyle field of a Text node.
The FontStyle node, which is always used in the fontStyle field of a Text node, defines the size, font family, and style of the text font, as well as the direction of the text strings and any specific language rendering techniques that must be used for non-English text.
The size field specifies the height (in object space units) of glyphs rendered and determines the vertical spacing of adjacent lines of text. All subsequent strings advance in either x or y by -( size * spacing). (See the Text node for a description of the spacing field.)
Font Family and Style: Font attributes are defined with the family and style fields. It is up to the browser to assign specific fonts to the various attribute combinations.
The family field contains an SFString value that can be "SERIF" (the default) for a serif font such as Times Roman; "SANS" for a sans-serif font such as Helvetica; or "TYPEWRITER" for a fixed-pitch font such as Courier.
The style field contains an SFString value that can be an empty string (the default); "BOLD" for boldface type; "ITALIC" for italic type; or "BOLD ITALIC" for bold and italic type.
Direction: The horizontal, leftToRight, and topToBottom fields indicate the direction of the text. The horizontal field indicates whether the field is horizontal (TRUE; the default) or vertical (FALSE). The leftToRight field indicates whether the text progresses from left to right (TRUE; the default) or from right to left (FALSE). The topToBottom field indicates whether the text progresses from top to bottom (TRUE; the default), or from bottom to top (FALSE).
The justify field of the Text node determines where the text is positioned in relation to the origin (0,0,0) of the object coordinate system. The values for the justify field are 0 (beginning), 1 (end), and 2 (center). For a left-to-right direction (leftToRight = TRUE), 0 would specify left-justified text, 1 would specify right-justified text, and 2 would specify centered text.
For horizontal text (horizontal is TRUE), the first line of text is positioned with its baseline (bottom of capital letters) at y = 0. The text is positioned on the positive side of the x origin when leftToRight is TRUE and justify is 0; the same positioning is used when leftToRight is FALSE and justify is 1. The text is on the negative side of x when leftToRight is TRUE and justify is 1 (and when leftToRight is FALSE and justify is 0). For justify = 2 and horizontal = TRUE, each string will be centered at x = 0.
For vertical text (horizontal is FALSE), the first line of text is positioned with the left side of the glyphs along the y = 0 axis. When topToBottom is TRUE and justify is 0 (or when topToBottom is FALSE and justify is 1), the text is positioned with the top left corner at the origin. When topToBottom is TRUE and justify is 1 (or when topToBottom is FALSE and justify is 0), the bottom left is at the origin. For justify = 2 and horizontal = FALSE, the text is centered vertically at y = 0.
HORIZONTAL TEXT (horizontal=TRUE)

  LR=TRUE      LR=TRUE      LR=TRUE
  justify=0    justify=1    justify=2

  VRML         VRML         VRML
  adds a       adds a       adds a
  dimension!   dimension!   dimension!

  LR=FALSE     LR=FALSE     LR=FALSE
  justify=0    justify=1    justify=2

  LMRV         LMRV         LMRV
  a sdda       a sdda       a sdda
  !noisnemid   !noisnemid   !noisnemid

VERTICAL TEXT (horizontal=FALSE): the original figure shows the same strings drawn as vertical columns, one column per string, for TB=TRUE and TB=FALSE with justify values 0, 1, and 2.
Text Language: There are many languages in which the proper rendering of the text requires more than just a sequence of glyphs. The language field allows the author to specify which, if any, language specific rendering techniques to use. For simple languages, such as English, this node may be safely ignored.
The tag used to specify languages will follow RFC1766 - Tags for the Identification of Languages. This RFC specifies that a language tag may simply be a two letter ISO 639 tag, for example "en" for English, "ja" for Japanese, and "sv" for Swedish. This may be optionally followed by a two letter country code from ISO 3166. So, Americans would be absolutely safe with "en-US". ISO does not have documents online, yet. Hardcopy documents can be ordered.
FontStyle {
  field SFFloat  size        1.0
  field SFString family      "SERIF"  # "SERIF", "SANS", "TYPEWRITER"
  field SFString style       ""       # "BOLD", "ITALIC", "BOLD ITALIC"
  field SFBool   horizontal  TRUE
  field SFBool   leftToRight TRUE
  field SFBool   topToBottom TRUE
  field SFString language    ""
}
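For example, bold sans-serif text drawn from right to left could be specified as follows (an illustrative sketch):

```
Text {
  string "VRML"   # drawn progressing from right to left
  fontStyle FontStyle {
    family      "SANS"
    style       "BOLD"
    leftToRight FALSE
  }
}
```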
The Material node defines surface material properties for an associated geometry node. Different shapes interpret materials with multiple values differently. To bind diffuse colors to shapes, use the colorBinding field within the geometry node.
The lighting parameters defined by the Material node are the same parameters defined by the OpenGL lighting model. For a rigorous mathematical description of how these parameters should be used to determine how surfaces are lit, see the description of lighting operations in the OpenGL Specification. Also note that OpenGL specifies the specular exponent as a non-normalized value in the range 0-128, whereas VRML specifies it as a normalized 0-1 value; multiply the VRML value by 128 to obtain the OpenGL parameter.
For rendering systems that do not support the full OpenGL lighting model, the following simpler lighting model is recommended:
A transparency value of 0 is completely opaque, a value of 1 is completely transparent. Browsers need not support partial transparency, but should support at least fully transparent and fully opaque surfaces, treating transparency values >= 0.5 as fully transparent.
Issues for Low-End Rendering Systems. Many low-end PC rendering systems are not able to support the full range of the VRML material specification. For example, many systems do not render individual red, green and blue reflected values as specified in the specularColor field. The following table describes which Material fields are typically supported in popular low-end systems and suggests actions for browser implementors to take when a field is not supported.
Field          Supported?  Suggested Action
ambientColor   No          Ignore
diffuseColor   Yes         Use
specularColor  No          Ignore
emissiveColor  No          Use in place of diffuseColor if != 0 0 0
shininess      Yes         Use
transparency   No          Ignore
It is also expected that simpler rendering systems may be unable to support both lit (diffuse) and unlit (emissive) objects in the same scene.
Material {
  exposedField SFColor ambientColor  0.2 0.2 0.2
  exposedField SFColor diffuseColor  0.8 0.8 0.8
  exposedField SFColor specularColor 0 0 0
  exposedField SFColor emissiveColor 0 0 0
  exposedField SFFloat shininess     0.2
  exposedField SFFloat transparency  0
}
The Texture2 node defines a texture map and parameters for that map.
The texture can be read from the URL specified by the filename field. To turn off texturing, set the filename field to an empty string (""). Implementations should support the JPEG and PNG image file formats. Also supporting the GIF format is recommended.
If multiple URLs are presented, this expresses a descending order of preference. A browser may display a lower-preference URL while the higher-preference file is not available. See the section on URNs.
Textures can also be specified inline by setting the image field to contain the texture data. Supplying both image and filename fields will result in undefined behavior.
Texture images may be one component (greyscale), two component (greyscale plus transparency), three component (full RGB color), or four-component (full RGB color plus transparency). An ideal VRML implementation will use the texture image to modify the diffuse color and transparency of an object's material (specified in a Material node), then perform any lighting calculations using the rest of the object's material properties with the modified diffuse color to produce the final image. The texture image modifies the diffuse color and transparency depending on how many components are in the image, as follows:
Browsers may approximate this ideal behavior to increase performance. One common optimization is to calculate lighting only at each vertex and combine the texture image with the color computed from lighting (performing the texturing after lighting). Another common optimization is to perform no lighting calculations at all when texturing is enabled, displaying only the colors of the texture image.
The repeatS and repeatT fields specify how the texture wraps in the S and T directions. If repeatS is TRUE (the default), the texture map is repeated outside the 0-to-1 texture coordinate range in the S direction so that it fills the shape. If repeatS is FALSE, the texture coordinates are clamped in the S direction to lie within the 0-to-1 range. The repeatT field is analogous to the repeatS field.
Texture2 {
  exposedField SFString filename ""
  exposedField SFImage  image    0 0 0
  field        SFBool   repeatS  TRUE
  field        SFBool   repeatT  TRUE
}
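For example (the URL is hypothetical), a texture that tiles vertically but is clamped horizontally could be written:

```
Texture2 {
  filename "wood.png"  # hypothetical URL
  repeatS  FALSE       # clamp S coordinates to the 0-to-1 range
  repeatT  TRUE        # repeat the texture in the T direction
}
```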
The Texture2Transform node defines a 2D transformation that is applied to texture coordinates. This node is used only in the textureTransform field of the Appearance node and affects the way textures are applied to the surfaces of the associated Geometry node. The transformation consists of (in order) a nonuniform scale about an arbitrary center point, a rotation about that same point, and a translation. This allows a user to change the size and position of the textures on shapes.
Texture2Transform {
  field SFVec2f translation 0 0
  field SFFloat rotation    0
  field SFVec2f scaleFactor 1 1
  field SFVec2f center      0 0
}
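As a sketch (the texture URL is hypothetical, and the rotation value assumes radians, as elsewhere in VRML), a transform that scales the texture coordinates by two and rotates them about the middle of the texture might read:

```
Appearance {
  textureTransform Texture2Transform {
    scaleFactor 2 2       # scale texture coordinates in S and T
    rotation    1.5708    # about 90 degrees, in radians (assumed units)
    center      0.5 0.5   # scale and rotate about the texture's middle
  }
  texture Texture2 { filename "tile.png" }  # hypothetical URL
}
```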
Geometric sensor nodes are children of a Frame node. They generate events with respect to the Frame's coordinate system and children.
Proximity sensors are nodes that generate events when the viewpoint enters, exits, or moves inside a space. A proximity sensor can be activated or deactivated by sending it an enable event with a value of TRUE or FALSE.
A BoxProximitySensor generates isActive TRUE/FALSE events as the viewer enters/exits the region defined by its center and size fields. Ideally, implementations will interpolate viewpoint positions and timestamp the isActive events with the exact time the viewpoint first intersected the volume.
A BoxProximitySensor with a (0 0 0) size field (the default) will sense the region defined by the objects in its coordinate system. The axis-aligned bounding box of the Frame containing the BoxProximitySensor should be computed and used instead of the center and size fields in this case.
Between the enter and exit times, position and orientation events, giving the position and orientation of the viewer in the BoxProximitySensor's coordinate system, are generated whenever either the viewer or the coordinate system of the sensor moves.
Multiple BoxProximitySensors will generate events at the same time if the regions they are sensing overlap. Unlike ClickSensors, there is no notion of a BoxProximitySensor lower in the scene graph "grabbing" events.
A BoxProximitySensor that surrounds the entire world will have an enter time equal to the time that the world was entered, and can be used to start up animations or behaviors as soon as a world is loaded.
BoxProximitySensor {
  exposedField SFVec3f center  0 0 0
  exposedField SFVec3f size    0 0 0
  exposedField SFBool  enabled TRUE
  eventOut     SFBool     isActive
  eventOut     SFVec3f    position
  eventOut     SFRotation orientation
}
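As a sketch, a sensor around a doorway region could notify a script when the viewer enters. The ROUTE syntax is as described elsewhere in this specification, and the Script's behavior URL and event name are hypothetical:

```
DEF DoorRegion BoxProximitySensor {
  center 0 1 0
  size   2 3 2
}
DEF DoorScript Script {
  scriptType "JAVA"
  behavior   "door.class"   # hypothetical script
  eventIn SFBool nearDoor
}
ROUTE DoorRegion.isActive TO DoorScript.nearDoor
```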
A ClickSensor tracks the pointing device with respect to some geometry. This sensor can be made active/inactive by being sent enable events.
The ClickSensor generates events as the pointing device passes over some geometry, and when the pointing device is over the geometry will also generate button press and release events for the button associated with the pointing device. Typically, the pointing device is a mouse and the button is a mouse button.
An enter event is generated when the pointing device passes over any of the shape nodes contained underneath the ClickSensor; the event contains the time at which this occurred. Likewise, an exit event is generated when the pointing device is no longer over the ClickSensor's geometry. isOver events are generated along with enter/exit events: an isOver TRUE event is generated at the same time as an enter event, and an isOver FALSE event is generated with an exit event.
All of these events are generated only when the pointing device moves or the user clicks the button.
If the user presses the button associated with the pointing device while the cursor is located over its geometry, the ClickSensor will grab all further motion events from the pointing device until the button is released (other Click or Drag sensors will not generate events during this time). isActive TRUE/FALSE events are generated along with the press/release events. Motion of the pointing device while it has been grabbed by a ClickSensor is referred to as a "drag".
As the user drags the cursor over the ClickSensor's geometry, the point on that geometry which lies directly underneath the cursor is determined. When isOver and isActive are TRUE, hitPoint, hitNormal, and hitTexture events are generated whenever the pointing device moves. hitPoint events contain the 3D point on the surface of the underlying geometry, given in the ClickSensor's coordinate system. hitNormal events contain the surface normal at the hitPoint. hitTexture events contain the texture coordinates of that surface at the hitPoint, which can be used to support the 3D equivalent of an image map.
ClickSensor {
  exposedField SFBool enabled TRUE
  eventOut     SFBool  isOver
  eventOut     SFBool  isActive
  eventOut     SFVec3f hitPoint
  eventOut     SFVec3f hitNormal
  eventOut     SFVec2f hitTexture
}
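For example, a ClickSensor grouped with a sphere makes the sphere respond to the pointing device; its events could then be routed to other nodes (the ROUTE shown in the comment is hypothetical):

```
Group {
  children [
    DEF Touch ClickSensor { }
    Sphere { }
  ]
}
# Elsewhere, for instance:
# ROUTE Touch.hitPoint TO SomeScript.location   (hypothetical script and eventIn)
```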
The PlaneSensor maps dragging motion into a translation in two dimensions, in the x-y plane of its local space.
PlaneSensor { exposedField SFVec2f minPosition 0 0 exposedField SFVec2f maxPosition 0 0 exposedField SFBool enabled TRUE eventOut SFBool isOver eventOut SFBool isActive eventOut SFVec3f hitPoint eventOut SFVec3f hitNormal eventOut SFVec2f hitTexture eventOut SFVec3f trackPoint eventOut SFVec3f translation }
minPosition and maxPosition may be set to clamp translation events to a range of values as measured from the origin of the x-y plane. If the x or y component of minPosition is greater than or equal to the corresponding component of maxPosition, translation events are not clamped in that dimension. trackPoint events provide the unclamped drag position in the x-y plane.
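As a sketch, a PlaneSensor that constrains dragging to a 10 x 5 region of its x-y plane could be written:

```
PlaneSensor {
  minPosition 0 0
  maxPosition 10 5   # translation events clamped to 0..10 in x, 0..5 in y
}
```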
These nodes are not part of the world's transformational hierarchy.
Files that describe node behavior are referenced through a Script node:
Script {
  field MFString behavior      ""
  field SFString scriptType    ""
  field SFBool   mustEvaluate  FALSE
  field SFBool   directOutputs FALSE

  # And any number of:
  eventIn  eventTypeName eventName
  field    fieldTypeName fieldName initialValue
  eventOut eventTypeName eventName
}
For example:
Script {
  behavior   "http://foo.com/bar.class"  # MFString
  scriptType "JAVA"                      # SFString
  eventIn  SFString name
  eventIn  SFBool   selected
  eventOut SFString lookto
  field SFInt32 currentState 0
  field SFBool  mustEvaluate TRUE
}
Each Script node has some associated code in some programming language that is executed to carry out the Script node's function. That code will be referred to as "the script" in the rest of this description.
A Script node's scriptType field describes which scripting language is being used. The contents of the behavior field depends on which scripting language is being used. Typically the behavior field will contain URLs/URNs from which the script should be fetched.
Each scripting language supported by a browser defines bindings for the following functionality. See Appendices A and B for the standard Java and C language bindings.
The script is created, and any language-dependent or user-defined initialization is performed. The script should be able to receive and process events that are sent to it. Each event that can be received must be declared in the Script node using the same syntax as is used in a prototype definition:
eventIn type name
"eventIn" is a VRML keyword. The type can be any of the standard VRMLfield types, and name must be an identifier that is unique for this Script node.
The Script node should be able to generate events in response to the incoming events. Each event that can be generated must be declared in the Script node using the following syntax:
eventOut type name
If the Script node's mustEvaluate field is FALSE, the browser can delay sending input events to the script until its outputs are needed by the browser. If the mustEvaluate field is TRUE, the browser should send input events to the script as soon as possible, regardless of whether the outputs are needed. The mustEvaluate field should be set to TRUE only if the Script has effects that are not known to the browser (such as sending information across the network); otherwise, poor performance may result.
The script should be able to read and write the fields of the corresponding Script node.
Once the script has access to some VRML node (via an SFNode or MFNode value either in one of the Script node's fields or passed in as an eventIn), the script should be able to read the contents of that node's exposed field. If the Script node's directOutputs field is TRUE, the script may also send events directly to any node to which it has access.
A script should also be able to communicate directly with the VRML browser to get and set global information such as navigation information, the current time, the current world URL, and so on.
It is expected that all other functionality (such as networking capabilities, multi-threading capabilities, and so on) will be provided by the scripting language.
TimeSensors generate events as time passes. A TimeSensor remains inactive until its startTime is reached. At the first simulation tick where real time >= startTime, the TimeSensor begins generating time and alpha events, which may be routed to other nodes to drive continuous animation or simulated behaviors. The length of time a TimeSensor generates events is controlled using cycleInterval and cycleCount; a TimeSensor stops generating time events at time startTime + cycleInterval * cycleCount. The time events contain times relative to startTime, so they start at zero and increase up to cycleInterval * cycleCount.
The forward and back fields control the mapping of time to alpha values. If forward is TRUE and back is FALSE (the default), alpha events rise from 0.0 to 1.0 over each interval. If forward is FALSE and back is TRUE, the opposite happens (alpha events fall from 1.0 to 0.0 during each interval). If they are both TRUE, alpha events alternate, rising from 0.0 to 1.0 on one interval and falling from 1.0 to 0.0 on the next, reversing direction on each interval. If they are both FALSE, alpha and time events are generated only once per cycle (and the alpha values generated will always be 0).
pauseTime may be set to interrupt the progress of a TimeSensor. If pauseTime is greater than startTime, time and alpha events will not be generated after the pause time. pauseTime is ignored if it is less than or equal to startTime.
If cycleCount is <= 0, the TimeSensor will tick continuously, as if cycleCount were infinite. This use of TimeSensor should be employed with caution, since it incurs continuous overhead on the simulation.
Setting cycleCount to 1 and cycleInterval to 0 will result in a single event being generated at startTime; this can be used to build an alarm that goes off at some point in the future.
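The alarm described above can be sketched as follows (the startTime value is illustrative; in practice it would be set to an absolute time in the future, perhaps via an eventIn):

```
DEF Alarm TimeSensor {
  startTime     828000000   # an absolute time in the future (illustrative)
  cycleInterval 0
  cycleCount    1           # a single time/alpha event fires at startTime
}
```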
No guarantees are made with respect to how often a TimeSensor will generate time events, but TimeSensors are guaranteed to generate final alpha and time events at or after time (startTime+cycleInterval*cycleCount) if pauseTime is less than or equal to startTime.
TimeSensor {
  exposedField SFTime  startTime     0
  exposedField SFTime  pauseTime     0
  exposedField SFTime  cycleInterval 1
  exposedField SFInt32 cycleCount    1
  exposedField SFBool  forward       TRUE
  exposedField SFBool  back          FALSE
  eventOut     SFTime  time
  eventOut     SFFloat alpha
}
A Group node is a lightweight grouping node that can contain any number of children. It does not contain any transformation fields.
The bboxCenter and bboxSize fields may be used to specify a maximum possible bounding box for the objects inside this Group. These are hints to the browser that it may use to optimize certain operations such as determining whether the Group needs to be drawn. If the specified bounding box is smaller than the true bounding box of the Group, results are undefined.
The add_children event adds the nodes passed in to the Group's children field. Any nodes passed in the add_children event that are already in the Group's children list are simply ignored. The remove_children event removes the nodes passed in from the Group's children field. Any nodes passed in the remove_children event that are not in the Group's children list are simply ignored.
Group {
  field        SFVec3f bboxCenter 0 0 0
  field        SFVec3f bboxSize   0 0 0
  exposedField MFNode  children   [ ]
  eventIn      MFNode  add_children
  eventIn      MFNode  remove_children
}
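For example, a Group with a bounding-box hint enclosing its children could be written (an illustrative sketch; the box must be at least as large as the true bounds of the children):

```
Group {
  bboxCenter 0 1 0
  bboxSize   4 2 4    # hint: all children fit inside this box
  children [ Sphere { }, Cube { } ]
}
```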
The LOD node is used to allow browsers to switch between various representations of objects automatically. The children of this node typically represent the same object or objects at varying levels of detail, from highest detail to lowest.
The distance from the viewpoint, transformed into the local coordinate space of the LOD node (including any scaling transformations), to the specified center point of the LOD is calculated. If the distance is less than the first value in the range array, then the first child of the LOD is drawn. If between the first and second values in the range array, the second child is drawn, and so on. If there are N values in the range array, the LOD group should have N+1 children. Specifying too few children will result in the last child being used repeatedly for the lowest levels of detail; if too many children are specified, the extra children will be ignored. Each value in the range array should be greater than the previous value; otherwise results are undefined. Not specifying any values in the range array (the default) is a special case that indicates that the browser may decide which child to draw to optimize rendering performance.
Authors should set LOD ranges so that the transitions from one level of detail to the next are barely noticeable. Browsers may adjust which level of detail is displayed to maintain interactive frame rates, to display an already-fetched level of detail while a higher level of detail (contained in a WWWInline node) is fetched, or might disregard the author-specified ranges for any other implementation-dependent reason. Authors should not use LOD nodes to emulate simple behaviors, because the results will be undefined. For example, using an LOD node to make a door appear to open when the user approaches probably will not work in all browsers. Use a ProximitySensor instead.
For best results, specify ranges only where necessary, and nest LOD nodes with and without ranges. For example:
LOD {
  range [100, 1000]
  LOD {
    Frame { ... detailed version... }
    DEF LoRes Frame { ... less detailed version... }
  }
  USE LoRes
  Info { }  # Display nothing
}
In this example, nothing at all will be displayed if the viewer is farther than 1,000 meters away from the object. A low-resolution version of the object will be displayed if the viewer is between 100 and 1,000 meters away, and either a low-resolution or a high-resolution version of the object will be displayed when the viewer is closer than 100 meters from the object.
LOD {
  field        MFFloat range  [ ]
  field        SFVec3f center 0 0 0
  exposedField MFNode  levels [ ]
}
The Switch grouping node traverses zero or one of its children (which are specified in the choices field).
The whichChild field specifies the index of the child to traverse, where the first child has index 0. If whichChild is less than zero or greater than the number of nodes in the choices array then nothing is chosen.
Switch {
  exposedField SFInt32 whichChild -1
  exposedField MFNode  choices    [ ]
}
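For example, a Switch that traverses only its second choice could be written:

```
Switch {
  whichChild 1   # traverse the Cube; the default of -1 traverses nothing
  choices [ Sphere { }, Cube { } ]
}
```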
The WWWAnchor grouping node loads a new scene into a VRML browser when one of its children is chosen. Exactly how a user "chooses" a child of the WWWAnchor is up to the VRML browser; typically, clicking on one of its children with the mouse will result in the new scene replacing the current scene. A WWWAnchor with an empty ("") name does nothing when its children are chosen. The name is an arbitrary URL.
If multiple URLs are presented, this expresses a descending order of preference. A browser may display a lower-preference URL if the higher-preference file is not available. See the section on URNs.
The description field in the WWWAnchor allows for a friendly prompt to be displayed as an alternative to the URL in the name field. Ideally, browsers will allow the user to choose the description, the URL, or both to be displayed for a candidate WWWAnchor.
A WWWAnchor may be used to take the viewer to a particular viewpoint in a virtual world by specifying a URL ending with "#viewpointName", where "viewpointName" is the name of a viewpoint defined in the world. For example:
WWWAnchor { name "http://www.school.edu/vrml/someScene.wrl#OverView" Cube { } }
specifies an anchor that puts the viewer in the "someScene" world looking from the viewpoint named "OverView" when the Cube is chosen. If no world is specified, then the current scene is implied; for example:
WWWAnchor { name "#Doorway" Sphere { } }
will take the viewer to the viewpoint defined by the "Doorway" viewpoint in the current world when the sphere is chosen.
WWWAnchor {
  field        MFString name        ""
  field        SFString description ""
  exposedField MFNode   children    [ ]
}
The WWWInline node is a light-weight grouping node like Group that reads its children from anywhere in the World Wide Web. Exactly when its children are read is not defined; reading the children may be delayed until the WWWInline is actually displayed. A WWWInline with an empty name does nothing. The name is an arbitrary set of URLs.
Referring to a non-VRML URL in a WWWInline node is undefined.
If multiple URLs are specified, then this expresses a descending order of preference. A browser may display a URL for a lower-preference file while it is obtaining, or if it is unable to obtain, the higher-preference file. See also the section on URNs.
If the WWWInline's bboxSize field specifies a non-empty bounding box (a bounding box is non-empty if at least one of its dimensions is greater than zero), then the WWWInline's object-space bounding box is specified by its bboxSize and bboxCenter fields. This allows an implementation to quickly determine whether or not the contents of the WWWInline might be visible. This is an optimization hint only; if the true bounding box of the contents of the WWWInline is different from the specified bounding box, results will be undefined.
WWWInline { field MFString name [ ] field SFVec3f bboxSize 0 0 0 field SFVec3f bboxCenter 0 0 0 }
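For example, the following sketch (the URL and bounding-box values are hypothetical, not taken from the spec) inlines a remote scene and declares an 8-unit bounding box centered 4 units above the origin, letting a browser decide visibility before the contents are fetched:

```
WWWInline {
    name       [ "http://www.school.edu/vrml/someScene.wrl" ]  # hypothetical URL
    bboxSize   8 8 8    # non-empty box: the contents promise to fit inside it
    bboxCenter 0 4 0
}
```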
The PointSound node defines a sound source located at a specific 3D location. The name field specifies a URL from which the sound is read. Implementations should support at least the ??? ??? sound file formats. Streaming sound files may be supported by browsers; otherwise, sounds should be loaded when the sound node is loaded. Browsers may limit the maximum number of sounds that can be played simultaneously.
If multiple URLs are specified, then this expresses a descending order of preference. A browser may use a URL for a lower-preference file while it is obtaining, or if it is unable to obtain, the higher-preference file. See also the section on URNs.
The description field is a textual description of the sound, which may be displayed in addition to or in place of playing the sound.
The intensity field adjusts the volume of each sound source; an intensity of 0 is silence, and an intensity of 1 is whatever intensity is contained in the sound file.
The sound source has a radius specified by the minRadius field. While the viewpoint is within this radius, the sound's intensity (volume) is constant, as indicated by the intensity field. Outside minRadius, the intensity drops off to zero at a distance of maxRadius from the source location. If the two radii are equal, the drop-off is sharp and sudden. Otherwise, the drop-off should be proportional to the square of the distance of the viewpoint from the minRadius sphere.
Browsers may also support spatial localizations of sound. However, within minRadius, localization should not occur, so intensity is constant in all channels. Between minRadius and maxRadius, the sound location should be the point on the minRadius sphere that is closest to the current viewpoint. This ensures a smooth change in location when the viewpoint leaves the minRadius sphere. Note also that an ambient sound can therefore be created by using a large minRadius value.
The loop field specifies whether or not the sound is constantly repeated. By default, the sound is played only once. If the loop field is FALSE, the sound has length "length," which is not specified in the VRML file but is implicit in the sound file pointed to by the URL in the name field. If the loop field is TRUE, the sound has an infinite length.
The start field specifies the time at which the sound should start playing. The pause field may be used to make a sound stop playing some time after it has started.
With the start time "start," pause time "pause," and current time "now," the rules are as follows:
if: now < start : OFF else if: (loop is FALSE) AND (now >= start+length) : OFF else if: (pause > start) AND (now >= pause) : OFF else: ON
Whenever start, pause, or now changes, the above rules need to be applied to figure out if the sound is playing. If it is, then it should be playing the bit of sound at (now - start) or, if it is looping, fmod( now - start, realLength).
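These rules can be sketched as follows. This is a non-normative sketch: the function name is an assumption, and "length" stands for the implicit duration of the sound file (the spec's "realLength" when looping).

```python
import math

def sound_state(now, start, pause, length, loop):
    """Return (is_playing, offset_into_sound_file) for a PointSound.

    Applies the start/pause/length rules from the text; 'length' is the
    implicit duration of the sound file, ignored when looping.
    """
    if now < start:
        return (False, 0.0)
    if not loop and now >= start + length:
        return (False, 0.0)
    if pause > start and now >= pause:
        return (False, 0.0)
    # Playing: offset is (now - start), wrapped by the length when looping.
    offset = math.fmod(now - start, length) if loop else (now - start)
    return (True, offset)
```

A browser would re-evaluate this whenever start, pause, or the current time changes.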
A sound's location in the scene graph determines its spatial location (the sound's location is transformed by the current transformation) and whether or not it can be heard. A sound can only be heard while it is part of the traversed scene; sound nodes underneath LOD nodes or Switch nodes will not be audible unless they are traversed. If it is later part of the traversal again, the sound picks up where it would have been had it been playing continuously.
PointSound { field MFString name "" field SFString description "" exposedField SFFloat intensity 1 exposedField SFVec3f location 0 0 0 exposedField SFFloat minRadius 10 exposedField SFFloat maxRadius 10 exposedField SFBool loop FALSE exposedField SFTime start 0 exposedField SFTime pause 0 }
This node represents a simple cone whose central axis is aligned with the y-axis. By default, the cone is centered at (0,0,0) and has a size of -1 to +1 in all three directions. The cone has a radius of 1 at the bottom and a height of 2, with its apex at y = +1 and its base at y = -1.
The cone has two parts: the side and the bottom. Each part has an associated SFBool field that specifies whether it is visible (TRUE) or invisible (FALSE).
When a texture is applied to a cone, it is applied differently to the sides and bottom. On the sides, the texture wraps counterclockwise (from above) starting at the back of the cone. The texture has a vertical seam at the back, intersecting the yz-plane. For the bottom, a circle is cut out of the texture square and applied to the cone's base circle. The texture appears right side up when the top of the cone is rotated towards the -Z axis.
Cone { exposedField SFFloat bottomRadius 1 exposedField SFFloat height 2 field SFBool side TRUE field SFBool bottom TRUE }
This node represents a cuboid aligned with the coordinate axes. By default, the cube is centered at (0,0,0) and measures 2 units in each dimension, from -1 to +1. A cube's width is its extent along its object-space X axis, its height is its extent along the object-space Y axis, and its depth is its extent along its object-space Z axis.
Textures are applied individually to each face of the cube; the entire texture goes on each face. On the front, back, right, and left sides of the cube, the texture is applied right side up. On the top, the texture appears right side up when the top of the cube is tilted toward the user. On the bottom, the texture appears right side up when the top of the cube is tilted towards the -Z axis.
Cube { exposedField SFFloat width 2 exposedField SFFloat height 2 exposedField SFFloat depth 2 }
This node represents a simple capped cylinder centered around the y-axis. By default, the cylinder is centered at (0,0,0) and has a default size of -1 to +1 in all three dimensions. You can use the radius and height fields to create a cylinder with a different size.
The cylinder has three parts: the side, the top (y = +1) and the bottom (y = -1). Each part has an associated SFBool field that indicates whether the part is visible (TRUE) or invisible (FALSE).
When a texture is applied to a cylinder, it is applied differently to the sides, top, and bottom. On the sides, the texture wraps counterclockwise (from above) starting at the back of the cylinder. The texture has a vertical seam at the back, intersecting the yz-plane. For the top and bottom, a circle is cut out of the texture square and applied to the top or bottom circle. The top texture appears right side up when the top of the cylinder is tilted toward the +Z axis, and the bottom texture appears right side up when the top of the cylinder is tilted toward the -Z axis.
Cylinder { exposedField SFFloat radius 1 exposedField SFFloat height 2 field SFBool side TRUE field SFBool top TRUE field SFBool bottom TRUE }
This node creates a rectangular grid with varying heights, especially useful in modeling terrain. The model is primarily described by a scalar array of height values that specify the height of the surface above each point of the grid.
The verticesPerRow and verticesPerColumn fields define the number of grid points in the X and Z directions, respectively, defining a surface that contains (verticesPerRow-1) x (verticesPerColumn-1) rectangles.
The vertex locations for the rectangles are defined by the height field and the gridStep field. The vertex corresponding to the ith row and jth column is placed at
( gridStep[0] * j, heights[ i*verticesPerRow + j ], gridStep[ 1 ] * i )
in object space, where
0 <= i < verticesPerColumn,
0 <= j < verticesPerRow
The height field is an array of scalar values representing the height above the grid for each vertex. The height values are stored so that row 0 is first, followed by rows 1, 2, ..., verticesPerColumn-1. Within each row, the height values are stored so that column 0 is first, followed by columns 1, 2, ..., verticesPerRow-1. The rows have fixed Z values; the columns have fixed X values.
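The vertex placement described above can be sketched as follows (a non-normative sketch; the function name is an assumption):

```python
def elevation_grid_vertices(heights, vertices_per_row, vertices_per_column, grid_step):
    """Compute object-space (x, y, z) positions for an ElevationGrid.

    heights is row-major: row 0 first, and column 0 first within each row.
    grid_step is (x_step, z_step); rows have fixed Z, columns fixed X.
    """
    verts = []
    for i in range(vertices_per_column):      # rows: fixed Z
        for j in range(vertices_per_row):     # columns: fixed X
            verts.append((grid_step[0] * j,
                          heights[i * vertices_per_row + j],
                          grid_step[1] * i))
    return verts
```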
The default texture coordinates range from [0,0] at the first vertex to [1,1] at the far side of the diagonal. The S texture coordinate will be aligned with X, and the T texture coordinate with Z.
The colorPerQuad field determines whether colors (if specified in the color field) should be applied to each vertex or each quadrilateral of the ElevationGrid. If colorPerQuad is TRUE and the color field is not NULL, then the color field must contain a Color node containing at least (verticesPerColumn-1)*(verticesPerRow-1) colors. If colorPerQuad is FALSE and the color field is not NULL, then the color field must contain a Color node containing at least verticesPerColumn*verticesPerRow colors.
See the introductory Geometry section for a description of the ccw, solid, and creaseAngle fields.
By default, the rectangles are defined with a counterclockwise ordering, so the Y component of the normal is positive. Setting the vertexOrdering field of the current ShapeHints node to CLOCKWISE reverses the normal direction. (??Not sure how to edit this, since CLOCKWISE is not exactly the same as FALSE for ccw, is it?? Better to have either one or two booleans to determine whether front and/or back should be displayed?) Backface culling is enabled when the ccw field and the solid field are both TRUE (the default).
ElevationGrid { exposedField SFInt32 verticesPerColumn 0 exposedField SFInt32 verticesPerRow 0 exposedField SFVec2f gridStep 1 1 exposedField MFFloat height [ ] exposedField SFNode color NULL exposedField SFNode normal NULL exposedField SFNode texCoord NULL field SFBool colorPerQuad FALSE field SFBool normalPerQuad FALSE field SFBool ccw TRUE field SFBool solid TRUE field SFFloat creaseAngle 0 }
The GeneralCylinder node is used to parametrically describe numerous families of shapes: extrusions (along an axis or an arbitrary path), surfaces of revolution, and bend/twist/taper objects.
A GeneralCylinder is defined by a 2D crossSection piecewise linear curve, a 3D spine piecewise linear curve, a list of profile parameters, and a list of twist parameters. Shapes are constructed as follows. The cross section curve is scaled by the first profile parameter and twisted counter-clockwise by the first twist parameter. It is then extruded through space by the first segment of the spine curve. Next, it is scaled and twisted by the second profile and twist parameters and extruded by the second segment of the spine, and so on.
A transformed cross section is found for each joint (see below), and then these are connected to form the surface. No check is made for self-penetration. Each transformed cross section is determined as follows.
For all points other than the first or last: The tangent for point[i] is found by normalizing
(point[i+1] - point[i-1])
??--insert diagram here??
If the spine curve is closed: The first and last points need to have the same rotation so that they match. Their tangent is found as above, using point[0] for point[i], point[1] for point[i+1], and point[n-2] for point[i-1], where point[n-2] is the next-to-last point on the curve. The last point in the curve, point[n-1], is the same as the first, point[0].
If the curve is not closed: The tangent used for the first point is just the direction from point[0] to point[1], and the tangent used for the last is the direction from point[n-2] to point[n-1].
In the simple case where the spine curve is flat in the x/y plane, these are all just rotations about the z axis. In the more general case where the spine curve is any 3D curve, then it's more complicated. You need to find the destinations for all 3 of the local x,y, and z axes so you can completely specify the rotation. The z axis is found by taking the cross product of
(point[i-1] - point[i]) and (point[i+1] - point[i]).
If the three points are collinear, this cross product is zero, so the value from the previous point is used instead. Once you have the z axis (from the cross product) and the y axis (from the approximate tangent), calculate the x axis as the cross product of y and z. These three axes then define the rotation matrix for the joint.
Finally, the cross section is translated to the location of the spine point.
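The joint-frame computation for an interior spine point can be sketched as follows (non-normative; the function names are assumptions, and the collinear fallback to the previous joint's value is omitted):

```python
def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalize(v):
    m = (v[0] ** 2 + v[1] ** 2 + v[2] ** 2) ** 0.5
    return (v[0] / m, v[1] / m, v[2] / m)

def joint_frame(prev_pt, pt, next_pt):
    """Return the (x, y, z) axes of the local frame at an interior spine joint."""
    # y axis: approximate tangent, normalize(point[i+1] - point[i-1])
    y = normalize(tuple(n - p for n, p in zip(next_pt, prev_pt)))
    # z axis: cross((point[i-1] - point[i]), (point[i+1] - point[i]))
    z = normalize(cross(tuple(p - c for p, c in zip(prev_pt, pt)),
                        tuple(n - c for n, c in zip(next_pt, pt))))
    # x axis completes the frame
    x = cross(y, z)
    return x, y, z
```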
If the crossSection field is NULL (the default), a circle is used.
Surfaces of Revolution: If the cross section is an approximation of a circle and the spine is straight, then the GeneralCylinder will be equivalent to a surface of revolution, where the profile parameters define the thickness of the cross section along the spine. In this case, the spine must define points along the extrusion where the profile parameters will be applied.
Cookie-cutter Extrusions: If both the profile and spine are straight, then the cross section acts like a cookie cutter, with the thickness of the cookie equal to the length of the spine.
Bend/Twist/Taper Objects: These shapes are the result of using all fields. The spine curve bends the extruded shape defined by the cross section, the twist parameters twist it, and the profile parameters taper it.
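For example, a sketch of a tapered, twisted extrusion (the specific values are illustrative only, and the choice of radians for twist is an assumption):

```
GeneralCylinder {
    spine        [ 0 0 0, 0 1 0, 0 2 0 ]        # straight spine, three joints
    crossSection [ 1 0, 0 1, -1 0, 0 -1, 1 0 ]  # closed diamond cross section
    profile      [ 1, 0.75, 0.25 ]              # taper toward the top
    twist        [ 0, 0.5, 1.0 ]                # twist at each joint
}
```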
Planar top and bottom surfaces will be generated when the crossSection is closed (i.e., when the first and last points of the crossSection are equal). However, if the profile is also closed, the top and bottom are not generated; this is because a closed crossSection extruded along a closed profile creates a shape that is closed without the addition of top and bottom parts.
GeneralCylinder has three parts: the side, the top (the end of the profile curve with the greater Y value) and the bottom (the end of the profile curve with the lesser Y value). Each part has an associated SFBool field that indicates whether the part is visible (TRUE) or invisible (FALSE).
GeneralCylinder automatically generates its own normals. (It does not have a normalBinding field.) Orientation of the normals is determined by the vertex ordering of the triangles generated by GeneralCylinder. The vertex ordering is in turn determined by the crossSection curve. If the crossSection is drawn counterclockwise, then the polygons will have counterclockwise ordering when viewed from the 'outside' of the shape (and vice versa for clockwise ordered crossSections).
Texture coordinates are automatically generated by general cylinders. These will map textures like the label on a soup can: the coordinates will range in the u direction from 0 to 1 along the crossSection curve and in the v direction from 0 to 1 along the spine. If the top and/or bottom exist, textures map onto them in a planar fashion.
When a texture is applied to a general cylinder, it is applied differently to the sides, top, and bottom. On the sides, the texture wraps [0,1] of the u-direction of the texture along the crossSection from first point to last; it wraps [0,1] of the v-direction of the texture along the direction of the spine, from first point to last. When the crossSection is closed, the texture has a seam that follows the line traced by the crossSection's start/end point as it travels along the spine. For the top and bottom, the crossSection is cut out of the texture square and applied to the top or bottom circle. The top and bottom textures' u and v directions correspond to the x and z directions in which the crossSection coordinates are defined.
See the introductory Geometry section for a description of the ccw, solid, convex, and creaseAngle fields.
GeneralCylinder { exposedField MFVec3f spine [ 0 0 0, 0 1 0 ] exposedField MFVec2f crossSection [ ] exposedField MFFloat profile [ 1 ] exposedField MFFloat twist [ 0 ] field SFBool sides TRUE field SFBool top TRUE field SFBool bottom TRUE field SFBool ccw TRUE field SFBool solid TRUE field SFBool convex TRUE field SFFloat creaseAngle 0 }
Interpolators are nodes that are useful for doing keyframed animation. Given a sufficiently powerful scripting language, all of these interpolators could be implemented using Logic nodes (browsers might choose to implement these as pre-defined prototypes of appropriately defined Logic nodes). We believe that keyframed animation will be common enough to justify the inclusion of these classes as built-in types.
Interpolator node names are defined based on the concept of what is to be interpolated: an index, orientation, coordinates, position, color, normals, etc. The fields for each interpolator provide the details on what the interpolators are affecting.
This node interpolates among a set of MFColor values, to produce MFColor outValue events. The number of colors in the values field must be an integer multiple of the number of keyframe times in the keys field; that integer multiple defines how many colors will be contained in the outValue events. For example, if 7 keyframe times and 21 colors are given, each keyframe consists of 3 colors; the first keyframe will be colors 0,1,2, the second colors 3,4,5, etc. The color values are linearly interpolated in each coordinate.
[[The description of MF values in and out belongs in the general interpolator section above, or maybe we should split up the interpolators into single-valued and multi-valued sections.]]
FILE FORMAT/DEFAULTS ColorInterpolator { field MFFloat keys [] field MFColor values [] eventIn SFFloat set_alpha eventOut MFColor outValue }
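The keyframe lookup common to the linear interpolators can be sketched as follows (a non-normative sketch; the function name is an assumption). For a multi-valued interpolator such as ColorInterpolator, each keyframe owns a fixed-size slice of the values list, and each component is interpolated linearly:

```python
def interpolate(keys, values, alpha):
    """Linear keyframe interpolation over a flat list of components.

    keys:   increasing keyframe times
    values: length must be a multiple of len(keys); each keyframe owns
            len(values) // len(keys) consecutive entries
    alpha:  the input time (the set_alpha event value)
    """
    per_key = len(values) // len(keys)
    if alpha <= keys[0]:
        return values[:per_key]
    if alpha >= keys[-1]:
        return values[-per_key:]
    for i in range(len(keys) - 1):
        if keys[i] <= alpha <= keys[i + 1]:
            t = (alpha - keys[i]) / (keys[i + 1] - keys[i])
            a = values[i * per_key:(i + 1) * per_key]
            b = values[(i + 1) * per_key:(i + 2) * per_key]
            return [(1 - t) * lo + t * hi for lo, hi in zip(a, b)]
```

IndexInterpolator and BoolInterpolator would instead return the value slice at the start of the interval containing alpha, with no blending.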
This node interpolates among a set of SFRotation values. The rotations are absolute in object space and are, therefore, not cumulative. The values field must contain exactly as many rotations as there are keyframe times in the keys field, or an error will be generated and results will be undefined.
FILE FORMAT/DEFAULTS OrientationInterpolator { field MFFloat keys [] field MFRotation values [] eventIn SFFloat set_alpha eventOut SFRotation outValue }
This node linearly interpolates among a set of SFVec3f values. This would be appropriate for interpolating a translation.
FILE FORMAT/DEFAULTS PositionInterpolator { field MFFloat keys [] field MFVec3f values [] eventIn SFFloat set_alpha eventOut SFVec3f outValue }
This node linearly interpolates among a set of multiple-valued Vec3f values. This would be appropriate for interpolating vertex positions for a geometric morph.
The number of coordinates in the values field must be an integer multiple of the number of keyframe times in the keys field; that integer multiple defines how many coordinates will be contained in the outValue events.
FILE FORMAT/DEFAULTS CoordinateInterpolator { field MFFloat keys [] field MFVec3f values [] eventIn SFFloat set_alpha eventOut MFVec3f outValue }
This node interpolates among a set of multi-valued Vec3f values, suitable for interpolating normal vectors. All output vectors will have been normalized by the interpolator.
The number of normals in the values field must be an integer multiple of the number of keyframe times in the keys field; that integer multiple defines how many normals will be contained in the outValue events.
FILE FORMAT/DEFAULTS NormalInterpolator { field MFFloat keys [] field MFVec3f values [] eventIn SFFloat set_alpha eventOut MFVec3f outValue }
This node linearly interpolates among a set of SFFloat values. This interpolator is appropriate for any parameter defined using a single floating point value, e.g., width, radius, intensity, etc. The values field must contain exactly as many numbers as there are keyframe times in the keys field, or an error will be generated and results will be undefined.
FILE FORMAT/DEFAULTS ScalarInterpolator { field MFFloat keys [] field MFFloat values [] eventIn SFFloat set_alpha eventOut SFFloat outValue }
This node interpolates among a set of SFInt32 values and can be used to switch the active child of a Switch node. The values field must contain exactly as many entries as there are keyframe times in the keys field, or an error will be generated and results will be undefined. The interpolation output is defined to be the value at the start of the interval in which the alpha value is found.
FILE FORMAT/DEFAULTS IndexInterpolator { field MFFloat keys [] field MFInt32 values [] eventIn SFFloat set_alpha eventOut SFInt32 outValue }
This node interpolates among a set of SFBool values and can be used to turn on/off aspects of the world, e.g., lights. The values field must contain exactly as many entries as there are keyframe times in the keys field, or an error will be generated and results will be undefined. The interpolation output is defined to be the value at the start of the interval in which the Alpha value is found.
FILE FORMAT/DEFAULTS BoolInterpolator { field MFFloat keys [] field MFBool values [] eventIn SFFloat set_alpha eventOut SFBool outValue }
(complete alphabetical listing and description)
There are two general classes of fields: fields that contain a single value (where a value may be a single number, a vector, or even an image), and fields that contain multiple values. Single-valued field types have names that begin with "SF"; multiple-valued field types have names that begin with "MF". Each field type defines the format for the values it writes.
Multiple-valued fields are written as a series of values separated by commas, all enclosed in square brackets; the last value may optionally be followed by a comma. If the field has zero values, only the square brackets ("[]") are written. If the field has exactly one value, the brackets may be omitted and just the value written. For example, all of the following are valid for a multiple-valued field containing the single integer value 1:
1 [1,] [ 1 ]
A field containing a single boolean (true or false) value. SFBools may be written as TRUE or FALSE.
Fields containing one (SFColor) or zero or more (MFColor) RGB colors. Each color is written to file as an RGB triple of floating point numbers in ANSI C floating point format, in the range 0.0 to 1.0. For example:
[ 1.0 0. 0.0, 0 1 0, 0 0 1 ]
is an MFColor field containing the three colors red, green, and blue.
Fields that contain one (SFFloat) or zero or more (MFFloat) single-precision floating point numbers. SFFloats are written to file in ANSI C floating point format. For example:
[ 3.1415926, 12.5e-3, .0001 ]
is an MFFloat field containing three values.
A field that contains an uncompressed 2-dimensional color or greyscale image.
SFImages are written to file as three integers representing the width, height, and number of components in the image, followed by width*height hexadecimal values representing the pixels in the image, separated by whitespace.

A one-component image has one-byte hexadecimal values representing the intensity of the image; for example, 0xFF is full intensity and 0x00 is no intensity. A two-component image puts the intensity in the first (high) byte and the transparency in the second (low) byte. Pixels in a three-component image have the red component in the first (high) byte, followed by the green and blue components (so 0xFF0000 is red). Four-component images put the transparency byte after red/green/blue (so 0x0000FF80 is semi-transparent blue). A transparency value of 0xFF is completely transparent; 0x00 is completely opaque.

Note: each pixel is actually read as a single unsigned number, so a 3-component pixel with value "0x0000FF" can also be written as "0xFF" or "255" (decimal).

Pixels are specified from left to right, bottom to top. The first hexadecimal value is the lower left pixel of the image, and the last value is the upper right pixel.
For example,
1 2 1 0xFF 0x00
is a 1 pixel wide by 2 pixel high greyscale image, with the bottom pixel white and the top pixel black. And:
2 4 3 0xFF0000 0xFF00 0 0 0 0 0xFFFFFF 0xFFFF00
is a 2 pixel wide by 4 pixel high RGB image, with the bottom left pixel red, the bottom right pixel green, the two middle rows of pixels black, the top left pixel white, and the top right pixel yellow.
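The per-pixel packing can be sketched as follows (non-normative; the function name is an assumption):

```python
def unpack_pixel(value, num_components):
    """Split one SFImage pixel (read as a single unsigned int) into bytes.

    Components are packed high byte first: for 3 components,
    0xFF0000 unpacks to (255, 0, 0), i.e. (red, green, blue).
    """
    out = []
    for shift in range(8 * (num_components - 1), -8, -8):
        out.append((value >> shift) & 0xFF)
    return tuple(out)
```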
Fields containing one (SFInt32) or zero or more (MFInt32) 32-bit integers. SFInt32s are written to file as an integer in decimal or hexadecimal (beginning with '0x') format. For example:
[ 17, -0xE20, -518820 ]
is an MFInt32 field containing three values.
A field containing a transformation matrix. SFMatrices are written to file in row-major order as 16 floating point numbers separated by whitespace. For example, a matrix expressing a translation of 7.3 units along the X axis is written as:
1 0 0 0 0 1 0 0 0 0 1 0 7.3 0 0 1
... syntax is just node syntax, DEF/USE allowed, etc...
A field containing an arbitrary rotation. SFRotations are written to file as four floating point values separated by whitespace. The 4 values represent an axis of rotation followed by the amount of right-handed rotation about that axis, in radians. For example, a 180 degree rotation about the Y axis is:
0 1 0 3.14159265
Fields containing one (SFString) or zero or more (MFString) UTF-8 strings (sequences of characters). Strings are written to file as a sequence of UTF-8 octets enclosed in double quotes. Any characters (including newlines and '#') may appear within the quotes. To include a double quote character within the string, precede it with a backslash; to include a backslash, write two backslashes. For example:
"One, Two, Three" "He said, \"Immel did it!\""
are both valid strings.
Field containing a single time value. Each time value is written to file as a double-precision floating point number in ANSI C floating point format. An absolute SFTime is the number of seconds since January 1, 1970, 00:00:00 GMT.
Field containing a two-dimensional vector. SFVec2fs are written to file as a pair of floating point values separated by whitespace.
Field containing a three-dimensional vector. SFVec3fs are written to file as three floating point values separated by whitespace.
January 28, 1996
This appendix describes the Java classes and methods that allow scripts to interact with associated scenes. It contains links to various Java pages as well as to certain sections of the Moving Worlds spec.
Java(TM) is a portable, interpreted, object-oriented programming language developed at Sun Microsystems. It's the only language that VRML browsers are required to support in Script nodes. A full description of Java is far beyond the scope of this appendix; see the Java web site for more information. This appendix describes only the Java bindings of the VRML API (the calls that allow the script in a VRML Script node to interact with the scene in the VRML file).
For information on the general execution model for VRML scripts, see the "Scripting" section of the "Concepts" document.
[[anything we should say about Java execution model specifically? If not, cut this section.]]
Java classes for VRML are defined in the package vrml. (Package names are generally all-lowercase, in deference to UNIX file system naming conventions.)
The Field class extends Java's Object class by default (when declared without an explicit superclass, as below); thus, Field has the full functionality of the Object class, including the getClass() method. The rest of the package defines a "Const" read-only class for each VRML field type, with a getValue() method for each class; and another read/write class for each VRML field type, with both getValue() and setValue() methods for each class. Most [[why not all?]] of the setValue() methods are declared as "throws Exception," meaning that errors are possible -- you need to write exception handlers (using Java's try/catch construct) when you use those methods. Any method not declared as "throws Exception" is guaranteed to generate no exceptions.
package vrml;

class Field { }

//
// Read-only (constant) classes, one for each field type:
//
class ConstSFBool extends Field { public boolean getValue(); }
class ConstSFColor extends Field { public float[] getValue(); }
class ConstMFColor extends Field { public float[][] getValue(); }
class ConstSFFloat extends Field { public float getValue(); }
class ConstMFFloat extends Field { public float[] getValue(); }
class ConstSFImage extends Field { public byte[] getValue(int[] dims); }
class ConstSFInt32 extends Field { public int getValue(); }
class ConstMFInt32 extends Field { public int[] getValue(); }
class ConstSFNode extends Field { public Node getValue(); }
class ConstMFNode extends Field { public Node[] getValue(); }
class ConstSFRotation extends Field { public float[] getValue(); }
class ConstMFRotation extends Field { public float[][] getValue(); }
class ConstSFString extends Field { public String getValue(); }
class ConstMFString extends Field { public String[] getValue(); }
class ConstSFVec2f extends Field { public float[] getValue(); }
class ConstMFVec2f extends Field { public float[][] getValue(); }
class ConstSFVec3f extends Field { public float[] getValue(); }
class ConstMFVec3f extends Field { public float[][] getValue(); }
class ConstSFTime extends Field { public double getValue(); }

//
// And now the writeable versions of the above classes:
//
class SFBool extends Field { public boolean getValue(); public void setValue(boolean value); }
class SFColor extends Field { public float[] getValue(); public void setValue(float[] value) throws Exception; }
class MFColor extends Field { public float[][] getValue(); public void setValue(float[][] value) throws Exception; }
class SFFloat extends Field { public float getValue(); public void setValue(float value); }
class MFFloat extends Field { public float[] getValue(); public void setValue(float[] value); }
class SFImage extends Field { public byte[] getValue(int[] dims); public void setValue(byte[] data, int[] dims) throws Exception; }

// In Java, the int type is a 32-bit integer
class SFInt32 extends Field { public int getValue(); public void setValue(int value); }
class MFInt32 extends Field { public int[] getValue(); public void setValue(int[] value); }
class SFNode extends Field { public Node getValue(); public void setValue(Node node); }
class MFNode extends Field { public Node[] getValue(); public void setValue(Node[] node); }
class SFRotation extends Field { public float[] getValue(); public void setValue(float[] value) throws Exception; }
class MFRotation extends Field { public float[][] getValue(); public void setValue(float[][] value) throws Exception; }

// In Java, the String class is a Unicode string
class SFString extends Field { public String getValue(); public void setValue(String value); }
class MFString extends Field { public String[] getValue(); public void setValue(String[] value); }
class SFVec2f extends Field { public float[] getValue(); public void setValue(float[] value) throws Exception; }
class MFVec2f extends Field { public float[][] getValue(); public void setValue(float[][] value) throws Exception; }
class SFVec3f extends Field { public float[] getValue(); public void setValue(float[] value) throws Exception; }
class MFVec3f extends Field { public float[][] getValue(); public void setValue(float[][] value) throws Exception; }
class SFTime extends Field { public double getValue(); public void setValue(double value); }

//
// Interfaces relating to events and nodes: [[should describe the
// use of these in more detail]]
//
interface EventIn {
    public String getName();
    public SFTime getTimeStamp();
    public ConstField getValue();
}

interface Node {
    public ConstField getValue(String fieldName) throws Exception;
    public void postEventIn(String eventName, Field eventValue) throws Exception;
}

//
// This is the general Script class, to be subclassed by all scripts:
//
class Script implements Node {
    public void eventsProcessed() throws Exception;
    protected Field getEventOut(String eventName) throws Exception;
    protected Field getField(String fieldName) throws Exception;
}
This section lists the public Java interfaces to the Browser class, which allows scripts to get and set browser information. For descriptions of the methods, see the "Browser Interface" section of the "Scripting" section of the spec.
public class Browser {
    public static String getName();
    public static String getVersion();

    public static String getNavigationType();
    public static void setNavigationType(String type) throws Exception;
    public static float getNavigationSpeed();
    public static void setNavigationSpeed(float speed);
    public static float getCurrentSpeed();
    public static float getNavigationScale();
    public static void setNavigationScale(float scale);

    public static boolean getHeadlight();
    public static void setHeadlight(boolean onOff);

    public static String getWorldURL();
    public static void loadWorld(String[] url);

    public static float getCurrentFrameRate();

    public static Node createVrmlFromURL(String[] url) throws Exception;
    public static Node createVrmlFromString(String vrmlSyntax) throws Exception;
}
[[anything special here, or do we just use standard system and networking stuff built in to Java?]]
Here's an example of a Script node which determines whether a given color contains a lot of red. The Script node exposes a color field, an eventIn, and an eventOut:
Script {
  field    SFColor currentColor 0 0 0
  eventIn  SFColor colorIn
  eventOut SFBool  isRed
  scriptType "javabc"
  behavior   "ExampleScript.java"
}
[[should we rename colorIn to setCurrentColor, or would that imply that one was required to use this naming convention?]]
And here's the source code for the "ExampleScript.java" file, whose methods are called every time an eventIn is routed to the above Script node:
import vrml.*;

class ExampleScript extends Script {
    // Declare field(s)
    private SFColor currentColor = (SFColor) getField("currentColor");

    // Declare eventOut field(s)
    private SFBool isRed = (SFBool) getEventOut("isRed");

    public void colorIn(ConstSFColor newColor) {
        // This method is called when a colorIn event is received
        currentColor.setValue(newColor.getValue());
[[changed "value" to "newColor" above -- make sure that's okay. such variables aren't required to be named "value," are they?]]
    }

    public void eventsProcessed() {
        if (currentColor.getValue()[0] >= 0.5)  // if red is at or above 50%
            isRed.setValue(true);
    }
}
For details on when the methods defined in ExampleScript are called, see the "Execution Model" section of the "Concepts" document.
26 January 1996
This appendix describes the C datatypes and functions that allow scripts to interact with associated scenes.
VRML browsers aren't required to support C in Script nodes; they're only required to support Java. In fact, supporting C is problematic: compiled C code cannot be executed safely, since a malicious script could, for example, contain the call

    system("rm -r /*");
Therefore, the C bindings given in this document for interaction between VRML Script nodes and the rest of a VRML scene are provided for reference purposes only.
/*
 * vrml.h - vrml support procedures for C
 */
typedef void *Field;
typedef char *String;
typedef int boolean;

/* Node must be defined before its first use in ConstSFNode below */
typedef void *Node;

typedef struct {
    unsigned char *value;
    int dims[3];
} SFImageType;

/*
 * Read-only (constant) type definitions, one for each field type:
 */
typedef const boolean *ConstSFBool;
typedef const float *ConstSFColor;
typedef const float *ConstMFColor;
typedef const float *ConstSFFloat;
typedef const float *ConstMFFloat;
typedef const SFImageType *ConstSFImage;
typedef const int *ConstSFInt32;
typedef const int *ConstMFInt32;
typedef const Node *ConstSFNode;
typedef const Node *ConstMFNode;
typedef const float *ConstSFRotation;
typedef const float *ConstMFRotation;
typedef const String ConstSFString;
typedef const String *ConstMFString;
typedef const float *ConstSFVec2f;
typedef const float *ConstMFVec2f;
typedef const float *ConstSFVec3f;
typedef const float *ConstMFVec3f;
typedef const double *ConstSFTime;

/*
 * And now the writeable versions of the above types:
 */
typedef boolean *SFBool;
typedef float *SFColor;
typedef float *MFColor;
typedef float *SFFloat;
typedef float *MFFloat;
typedef SFImageType *SFImage;
typedef int *SFInt32;
typedef int *MFInt32;
typedef Node *SFNode;
typedef Node *MFNode;
typedef float *SFRotation;
typedef float *MFRotation;
typedef String SFString;
typedef String *MFString;
typedef float *SFVec2f;
typedef float *MFVec2f;
typedef float *SFVec3f;
typedef float *MFVec3f;
typedef double *SFTime;

/*
 * Event-related types and functions
 */
typedef void *EventIn;

String getEventInName(EventIn eventIn);
int getEventInIndex(EventIn eventIn);
SFTime getEventInTimeStamp(EventIn eventIn);
void *getEventInValue(EventIn eventIn);

void *getNodeValue(Node *node, String fieldName);
void postNodeEventIn(Node *node, String eventName, Field eventValue);

/*
 * C script
 */
typedef void *Script;

Field getScriptEventOut(Script script, String eventName);
Field getScriptField(Script script, String fieldName);

void exception(String error);
This section lists the functions that allow scripts to get and set browser information. For descriptions of the functions, see the "Browser Interface" section of the "Scripting" section of the spec. Since these functions aren't defined as part of a "Browser" class in C, their names all include the word "Browser" for clarity.
String getBrowserName();
float getBrowserVersion();

String getBrowserNavigationType();
void setBrowserNavigationType(String type);
float getBrowserNavigationSpeed();
void setBrowserNavigationSpeed(float speed);
float getBrowserCurrentSpeed();
float getBrowserNavigationScale();
void setBrowserNavigationScale(float scale);

boolean getBrowserHeadlight();
void setBrowserHeadlight(boolean onOff);

String getBrowserWorldURL();
void loadBrowserWorld(String url);

float getBrowserCurrentFrameRate();

Node createVrmlFromURL(String url);
Node createVrmlFromString(String vrmlSyntax);
[[anything special here, or do we just use standard C system and networking libraries?]]
[[need to put in the actual Script node here... And I think the program needs to be completely rewritten to use new entrypoint model, with function named for each eventIn plus an eventsProcessed function. Is FooScriptType even necessary under new model?]]
/*
 * FooScript.c
 */
#include "vrml.h"

typedef struct {
    Script parent;
    SFInt32 fooField;
    SFFloat barOutEvent;
} FooScriptType;

typedef FooScriptType *FooScript;

void constructFooScript(FooScript foo, Script p)
{
    foo->parent = p;

    /* Initialize field(s) */
    foo->fooField = (SFInt32) getScriptField(foo->parent, "foo");

    /* Initialize eventOut field(s) */
    foo->barOutEvent = (SFFloat) getScriptEventOut(foo->parent, "bar");
}

void processFooScriptEvents(FooScript foo, EventIn *list, int length)
{
    int i;
    for (i = 0; i < length; i++) {
        EventIn event = list[i];
        switch (getEventInIndex(event)) {
        case 0:
        case 1:
            /* Convert the int field's value to float (not its bit pattern) */
            *foo->barOutEvent = (float) *foo->fooField;
            break;
        default:
            exception("Unknown eventIn");
        }
    }
}