PDF generated using the open source mwlib toolkit. See http://code.pediapress.com/ for more information.
PDF generated at: Mon, 03 Feb 2014 19:37:10 UTC
Contents

Articles

3D modeling
3D computer graphics
3D computer vision
3D reconstruction
Bounding volume
Box modeling
Cloth modeling
COLLADA
Crowd simulation
Cutaway drawing
Demoparty
Depth map
Digital puppetry
Draw distance
Edge loop
Euler operator
Explicit modeling
False radiosity
Fiducial marker
Fluid simulation
Forward kinematics
Geometry instancing
Geometry pipelines
Geometry processing
Gimbal lock
Glide API
GloriaFX
Image plane
Image-based meshing
Inflatable icons
Inverse kinematics
Isosurface
Joint constraints
Kinematic chain
Light stage
Low poly
Marching cubes
Mesh parameterization
Metaballs
Micropolygon
Motion capture
Newell's algorithm
Nonobtuse mesh
Normal (geometry)
Painter's algorithm
Parallax barrier
Parallel rendering
Particle system
Point cloud
Polygon mesh
Polygon soup
Polygonal modeling
Pre-rendering
Procedural modeling
Procedural texture
Progressive meshes
3D projection
Pyramid of vision
Quantitative Invisibility
Andreas Raab
RealityEngine
Retained mode
Schlick's approximation
Sculpted prim
Silhouette edge
Skeletal animation
Sketch-based modeling
Smoothing group
Solid modeling
Specularity
Static mesh
Stereoscopic acuity
Subdivision surface
Supinfocom
Surface caching
Surfel
Suzanne Award
Time-varying mesh
Timewarps
Triangle mesh
Vector slime
Vertex (geometry)
Vertex pipeline
Viewing frustum
Viewport
Virtual actor
Virtual replay
Volume mesh
Voxel
Web3D

References

Article Sources and Contributors

Article Licenses

License
3D modeling
Models
3D models represent a 3D object using a collection of points in 3D
space, connected by various geometric entities such as triangles, lines,
curved surfaces, etc. Being a collection of data (points and other
information), 3D models can be created by hand, algorithmically
(procedural modeling), or scanned.
3D models are used widely in 3D graphics; in fact, their
use predates the widespread adoption of 3D graphics on personal computers.
Many computer games used pre-rendered images of 3D models as
sprites before computers could render them in real time.
3D model of a spectrograph
Representation
Almost all 3D models can be divided into two categories:
Solid - These models define the volume of the object they represent
(like a rock). They are more realistic, but more difficult to build.
Solid models are mostly used for nonvisual simulations such as
medical and engineering simulations, for CAD, and for specialized
visual applications such as ray tracing and constructive solid
geometry.
Shell/boundary - These models represent the surface, i.e. the
boundary of the object, not its volume (like an infinitesimally thin
eggshell). They are easier to work with than solid models. Almost
all visual models used in games and film are shell models.
Because the appearance of an object depends largely on the exterior of the object, boundary representations are
common in computer graphics. Two dimensional surfaces are a good analogy for the objects used in graphics,
though quite often these objects are non-manifold. Since surfaces are not finite, a discrete digital approximation is
required: polygonal meshes (and to a lesser extent subdivision surfaces) are by far the most common representation,
although point-based representations have been gaining some popularity in recent years. Level sets are a useful
representation for deforming surfaces which undergo many topological changes such as fluids.
The process of transforming representations of objects, such as the center point coordinate of a sphere and a point
on its circumference, into a polygon representation of a sphere is called tessellation. This step is used in
polygon-based rendering, where objects are broken down from abstract representations ("primitives") such as
spheres, cones, etc., to so-called meshes, which are nets of interconnected triangles. Meshes of triangles (instead of
e.g. squares) are popular as they have proven to be easy to render using scanline rendering.[3] Polygon
representations are not used in all rendering techniques, and in these cases the tessellation step is not included in the
transition from abstract representation to rendered scene.
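As a sketch of this tessellation step, the following converts a sphere primitive (center and radius) into a triangle mesh; the stack and slice counts and all parameter names here are illustrative choices, not part of any particular renderer's API.

```python
import math

def tessellate_sphere(center, radius, stacks=8, slices=12):
    """Approximate a sphere primitive by a mesh of triangles (tessellation)."""
    cx, cy, cz = center
    vertices = []
    for i in range(stacks + 1):            # latitude rings, pole to pole
        phi = math.pi * i / stacks
        for j in range(slices):            # longitude divisions
            theta = 2 * math.pi * j / slices
            vertices.append((cx + radius * math.sin(phi) * math.cos(theta),
                             cy + radius * math.sin(phi) * math.sin(theta),
                             cz + radius * math.cos(phi)))
    triangles = []                         # indices into the vertex list
    for i in range(stacks):
        for j in range(slices):
            a = i * slices + j
            b = i * slices + (j + 1) % slices
            c = (i + 1) * slices + j
            d = (i + 1) * slices + (j + 1) % slices
            triangles.append((a, b, d))    # two triangles per quad patch
            triangles.append((a, d, c))
    return vertices, triangles

verts, tris = tessellate_sphere((0.0, 0.0, 0.0), 1.0)
```

Increasing the stack and slice counts trades more triangles for a closer approximation of the curved surface, which is exactly the trade-off polygon-based rendering makes.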
Modeling process
There are three popular ways to represent a model:
3D polygonal modeling of a human face.

1. Polygonal modeling - Points in 3D space, called vertices, are
connected by line segments to form a polygonal mesh. The vast
majority of 3D models today are built as textured polygonal models,
because they are flexible and because computers can render them
quickly. However, polygons are planar and can only approximate
curved surfaces using many polygons.
2. Curve modeling - Surfaces are defined by curves, which are
influenced by weighted control points. The curve follows (but does
not necessarily interpolate) the points; increasing the weight for a
point will pull the curve closer to that point. Curve types include
nonuniform rational B-splines (NURBS), splines, patches and geometric primitives.
3. Digital sculpting - Still a fairly new method of modeling, 3D sculpting has become very popular in the few years
it has been around.[citation needed] There are currently three types of digital sculpting: displacement, which is the
most widely used among applications at this moment; volumetric; and dynamic tessellation. Displacement uses a
dense model (often generated by subdivision surfaces of a polygon control mesh) and stores new locations for the
vertex positions through use of a 32-bit image map that stores the adjusted locations. Volumetric sculpting, based
loosely on voxels, has similar capabilities to displacement but does not suffer from polygon stretching when there
are not enough polygons in a region to achieve a deformation. Dynamic tessellation is similar to voxel sculpting
but divides the surface using triangulation to maintain a smooth surface and allow finer details. These methods
allow for very artistic exploration, as the model will have a new topology created over it once its form and possibly
details have been sculpted. The new mesh will usually have the original high-resolution mesh information
transferred into displacement data or normal map data if it is intended for a game engine.
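The weighted-control-point behavior described under curve modeling can be sketched with a rational Bézier curve, the building block of NURBS. The control points and weights below are illustrative values chosen for the example.

```python
from math import comb

def rational_bezier(points, weights, t):
    """Evaluate a 2-D rational Bezier curve at parameter t in [0, 1].
    Each control point attracts the curve in proportion to its weight."""
    n = len(points) - 1
    num_x = num_y = den = 0.0
    for i, ((x, y), w) in enumerate(zip(points, weights)):
        b = comb(n, i) * t**i * (1 - t)**(n - i)   # Bernstein basis function
        num_x += b * w * x
        num_y += b * w * y
        den += b * w
    return num_x / den, num_y / den

pts = [(0.0, 0.0), (1.0, 2.0), (2.0, 0.0)]
mid_w1 = rational_bezier(pts, [1.0, 1.0, 1.0], 0.5)  # uniform weights -> (1.0, 1.0)
mid_w5 = rational_bezier(pts, [1.0, 5.0, 1.0], 0.5)  # heavier middle control point
```

With the middle point's weight raised from 1 to 5, the curve's midpoint moves from y = 1.0 toward the control point at (1, 2), illustrating that the curve follows but does not interpolate the weighted points.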
The modeling stage consists of shaping individual objects that are later used in the scene. There are a number of
modeling techniques, including:
constructive solid geometry
implicit surfaces
subdivision surfaces
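Two of these techniques combine naturally: when primitives are given as implicit surfaces (signed distance functions, negative inside and positive outside), the boolean operations of constructive solid geometry reduce to min/max. A minimal sketch, with illustrative primitives (the box function is sign-correct, though not an exact distance outside the box):

```python
import math

# Implicit-surface primitives: f(p) < 0 inside, > 0 outside, 0 on the surface.
def sphere(center, r):
    return lambda p: math.dist(p, center) - r

def box(center, half):
    return lambda p: max(abs(p[i] - center[i]) - half[i] for i in range(3))

# CSG boolean operations on signed distance functions
def union(a, b):     return lambda p: min(a(p), b(p))
def intersect(a, b): return lambda p: max(a(p), b(p))
def subtract(a, b):  return lambda p: max(a(p), -b(p))

# A cube with a spherical bite taken out of one corner
shape = subtract(box((0, 0, 0), (1, 1, 1)), sphere((1, 1, 1), 0.8))
inside = shape((0.0, 0.0, 0.0)) < 0   # deep inside the cube
carved = shape((0.9, 0.9, 0.9)) < 0   # inside the removed sphere
```

Evaluating `shape` over a grid and extracting the zero level set (e.g. with marching cubes, covered elsewhere in this collection) turns such a CSG expression into a renderable mesh.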
Modeling can be performed by means of a dedicated program (e.g., Cinema 4D, formZ, Maya, 3DS Max, Blender,
Lightwave, Modo, solidThinking) or an application component (Shaper, Lofter in 3DS Max) or some scene
description language (as in POV-Ray). In some cases, there is no strict distinction between these phases; in such
cases modeling is just part of the scene creation process (this is the case, for example, with Caligari trueSpace and
Realsoft 3D).
Complex materials such as blowing sand, clouds, and liquid sprays are modeled with particle systems, and are a
mass of 3D coordinates which have either points, polygons, texture splats, or sprites assigned to them.
Compared to 2D methods
3D photorealistic effects are often achieved without wireframe modeling and are sometimes indistinguishable in
the final form. Some graphic art software includes filters that can be applied to 2D vector graphics or 2D raster
graphics on transparent layers.
Wireframe 3D modeling has several advantages over exclusively 2D methods.
Disadvantages compared to 2D photorealistic rendering may include a software learning curve and difficulty
achieving certain photorealistic effects. Some photorealistic effects may be achieved with special rendering filters
included in the 3D modeling software. For the best of both worlds, some artists use a combination of 3D modeling
followed by editing the 2D computer-rendered images from the 3D model.
3D model market
A large market for 3D models (as well as 3D-related content, such as textures, scripts, etc.) still exists - either for
individual models or large collections. Online marketplaces for 3D content, such as TurboSquid, The3DStudio,
CreativeCrash, CGTrader, NoneCG, CGPeopleNetwork and DAZ 3D, allow individual artists to sell content that
they have created. Often, the artists' goal is to get additional value out of assets they have previously created for
projects. By doing so, artists can earn more money out of their old content, and companies can save money by
buying pre-made models instead of paying an employee to create one from scratch. These marketplaces typically
split the sale between themselves and the artist who created the asset; artists receive 40% to 95% of the sales,
depending on the marketplace. In most cases, the artist retains ownership of the 3D model, while the customer only
buys the right to use and present it. Some artists sell their products directly in their own stores, offering them at a
lower price by not using intermediaries.
3D printing
3D printing is a form of additive manufacturing technology in which a three-dimensional object is created by laying
down successive layers of material.
Human models
The first widely available commercial application of human virtual models appeared in 1998 on the Lands' End web
site. The human virtual models were created by the company My Virtual Model Inc. and enabled users to create a
model of themselves and try on 3D clothing. There are several modern programs that allow for the creation of virtual
human models (Poser being one example).
Uses
3D modeling is used in various industries, including film, animation and gaming, interior design, and architecture.
It is also used in the medical industry for interactive representations of anatomy. A wide range of 3D
software is also used in constructing digital representations of mechanical models or parts before they are actually
manufactured. CAD/CAM-related software is used in such fields, and with it one can not only
construct the parts, but also assemble them and observe their functionality.
3D modeling is also used in the field of industrial design, wherein products are 3D modeled before being presented
to clients. In the media and event industries, 3D modeling is used in stage and set design.
References
[1] http://en.wikipedia.org/w/index.php?title=Template:3D_computer_graphics&action=edit
[2] Ding, H., Hong, Y. (2003), NURBS curve controlled modeling for facial animation, Computers and Graphics, 27(3):373-385
[3] Jon Radoff, Anatomy of an MMORPG (http://radoff.com/blog/2008/08/22/anatomy-of-an-mmorpg/), August 22, 2008
3D computer graphics
3D computer graphics (in contrast to 2D computer graphics) are graphics that use a three-dimensional
representation of geometric data (often Cartesian) that is stored in the computer for the purposes of performing
calculations and rendering 2D images. Such images may be stored for viewing later or displayed in real-time.
3D computer graphics rely on many of the same algorithms as 2D computer vector graphics in the wire-frame model
and 2D computer raster graphics in the final rendered display. In computer graphics software, the distinction
between 2D and 3D is occasionally blurred; 2D applications may use 3D techniques to achieve effects such as
lighting, and 3D may use 2D rendering techniques.
3D computer graphics are often referred to as 3D models. Apart from the rendered graphic, the model is contained
within the graphical data file. However, there are differences: a 3D model is the mathematical representation of any
three-dimensional object, and a model is not technically a graphic until it is displayed. Due to 3D printing, 3D models
are not confined to virtual space. A model can be displayed visually as a two-dimensional image through a process
called 3D rendering, or used in non-graphical computer simulations and calculations.
History
William Fetter was credited with coining the term computer graphics in 1961[1] to describe his work at Boeing. One
of the first displays of computer animation was Futureworld (1976), which included an animation of a human face
and a hand that had originally appeared in the 1971 experimental short A Computer Animated Hand, created by
University of Utah students Edwin Catmull and Fred Parke.
Overview
3D computer graphics creation falls into three basic phases:
3D modeling - the process of forming a computer model of an object's shape
Layout and animation - the motion and placement of objects within a scene
3D rendering - the computer calculations that, based on light placement, surface types, and other qualities,
generate the image
Modeling
Modeling is the process of forming the shape of an object. The two most common sources of 3D models are
those that an artist or engineer originates on the computer with some kind of 3D modeling tool, and models scanned
into a computer from real-world objects. Models can also be produced procedurally or via physical simulation.
Basically, a 3D model is formed from points called vertices (or vertexes) that define the shape and form polygons. A
polygon is an area formed from at least three vertexes (a triangle). A four-point polygon is a quad, and a polygon of
more than four points is an n-gon[citation needed]. The overall integrity of the model and its suitability for use in
animation depend on the structure of the polygons.
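A minimal sketch of this vertex-and-polygon representation, with a face normal computed from the winding order of the vertices; the quad and its coordinates are illustrative.

```python
def face_normal(vertices, face):
    """Normal of a planar polygon face via the cross product of two edges.
    The direction depends on the winding order of the face's vertex indices."""
    (x0, y0, z0), (x1, y1, z1), (x2, y2, z2) = (vertices[i] for i in face[:3])
    ux, uy, uz = x1 - x0, y1 - y0, z1 - z0
    vx, vy, vz = x2 - x0, y2 - y0, z2 - z0
    return (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)

# A unit square in the xy-plane stored as vertices plus one four-point polygon
vertices = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
quad = (0, 1, 2, 3)               # a polygon of four vertices: a "quad"
n = face_normal(vertices, quad)   # points along +z for counter-clockwise winding
```

A triangle would simply be a three-index face, and an n-gon a longer index tuple over the same vertex list; this shared-vertex layout is what lets modelers stretch a mesh by moving vertices without tearing adjacent faces apart.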
Rendering
Rendering converts a model into an image either by simulating light transport to get photo-realistic images, or by
applying some kind of style as in non-photorealistic rendering. The two basic operations in realistic rendering are
transport (how much light gets from one place to another) and scattering (how surfaces interact with light). This step
is usually performed using 3D computer graphics software or a 3D graphics API. Altering the scene into a suitable
form for rendering also involves 3D projection, which displays a three-dimensional image in two dimensions.
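The projection step can be sketched as a minimal pinhole model that maps a camera-space point onto a 2D image plane; the focal length and sample points are illustrative assumptions.

```python
def project(point, focal=1.0):
    """Perspective-project a camera-space point onto the z = focal image plane."""
    x, y, z = point
    if z <= 0:
        raise ValueError("point must lie in front of the camera (z > 0)")
    # Dividing by depth is what makes distant objects appear smaller.
    return (focal * x / z, focal * y / z)

near = project((1.0, 2.0, 2.0))  # -> (0.5, 1.0)
far = project((1.0, 2.0, 8.0))   # -> (0.125, 0.25): the same offsets, farther away
```

Real pipelines express this division as a 4x4 projection matrix applied to homogeneous coordinates, but the geometric idea is the same divide-by-depth shown here.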
Examples of 3D rendering
Left: A 3D rendering with ray tracing and ambient occlusion using Blender and YafaRay.
Center: A 3D model of a Dunkerque-class battleship rendered with flat shading.
Right: During the 3D rendering step, the number of reflections light rays can take, as well as various other
attributes, can be tailored to achieve a desired visual effect. Rendered with Cobalt.
Communities
There are a multitude of websites designed to help educate and support 3D graphic artists. Some are managed by
software developers and content providers, but there are standalone sites as well. These communities allow
members to seek advice, post tutorials, provide product reviews, or post examples of their own work.
References
[1] Computer Graphics, comphist.org (http://www.comphist.org/computing_history/new_page_6.htm)
External links
A Critical History of Computer Graphics and Animation (http://accad.osu.edu/~waynec/history/lessons.html)
How Stuff Works - 3D Graphics (http://computer.howstuffworks.com/3dgraphics.htm)
History of Computer Graphics series of articles (http://hem.passagen.se/des/hocg/hocg_1960.htm)
3D computer graphics software
3D computer graphics software produces computer-generated imagery (CGI) through 3D modeling and 3D
rendering.
Classification
Modeling
3D modeling software is a class of 3D computer graphics software used to produce 3D models. Individual programs
of this class are called modeling applications or modelers.
3D modelers allow users to create and alter models via their 3D mesh. Users can add, subtract, stretch and otherwise
change the mesh to their desire. Models can be viewed from a variety of angles, usually simultaneously. Models can
be rotated and the view can be zoomed in and out.
3D modelers can export their models to files, which can then be imported into other applications as long as the
metadata are compatible. Many modelers allow importers and exporters to be plugged-in, so they can read and write
data in the native formats of other applications.
Most 3D modelers contain a number of related features, such as ray tracers and other rendering alternatives and
texture mapping facilities. Some also contain features that support or allow animation of models. Some may be able
to generate full-motion video of a series of rendered scenes (i.e. animation).
Rendering
Although 3D modeling and CAD software may perform 3D rendering as well (e.g. Autodesk 3ds Max or Blender),
exclusive 3D rendering software also exists.
Computer-aided design
Computer-aided design software may employ the same fundamental 3D modeling techniques that 3D modeling
software uses, but their goals differ. CAD software is used in computer-aided engineering, computer-aided
manufacturing, finite element analysis, product lifecycle management, 3D printing, and computer-aided
architectural design.
Complementary tools
After producing video, studios then edit or composite the video using programs such as Adobe Premiere Pro or Final
Cut Pro at the low end, or Autodesk Combustion, Digital Fusion, or Shake at the high end. Match moving software is
commonly used to match live video with computer-generated video, keeping the two in sync as the camera moves.
Use of real-time computer graphics engines to create a cinematic production is called machinima.
External links
3D Tools table (http://wiki.cgsociety.org/index.php/Comparison_of_3d_tools) from the CGSociety wiki
Comparison of 10 most popular modeling software (http://tideart.com/?id=4e26f595) from TideArt
3D computer vision
3D computer vision, or human-like computer vision, is the ability of devices with two cameras and a processor to
acquire a real-time picture of the world in three dimensions. Such instant 3D capture was not possible with the
traditional technology of stereo cameras, owing to the huge resources required to process and compare/combine
images received from two misaligned image sensors.
In 2006 Science Bureau Inc. came up with an idea for how to seamlessly
transition from 2D to 3D technology in personal computers and other
mobile devices, enabling them to see the world as humans do. The idea
behind the invention was quite simple: to avoid the enormous
processing resources needed to compensate for the misalignment of two
image sensors, the sensors must be precisely aligned so that the
rows of their sensing elements are parallel to the line connecting their
optical centers. Then the rows can be easily compared on the fly. There
is no further need for powerful image processors, which makes the
technology very inexpensive and suitable for low-budget mass
implementation.
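The aligned-rows argument can be sketched as a toy depth-from-disparity computation: with precisely rectified rows, a left-image pixel's counterpart lies somewhere in the same right-image row, so matching reduces to a per-scanline search. The camera parameters and pixel values below are illustrative assumptions, not figures from the text.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Rectified (row-aligned) stereo: a feature at column xL in the left image
    and column xR in the right image has depth Z = f * B / (xL - xR)."""
    return focal_px * baseline_m / disparity_px

def match_row(left_row, right_row):
    """Brute-force scanline matching: for each left-row pixel intensity, find
    the column of the most similar pixel in the SAME right-image row."""
    return [min(range(len(right_row)), key=lambda j: abs(l - right_row[j]))
            for l in left_row]

# Illustrative (assumed) parameters: 700 px focal length, 10 cm baseline,
# 35 px disparity between the two views of the same feature.
Z = depth_from_disparity(700.0, 0.10, 35.0)  # -> 2.0 metres
```

Without row alignment, the same search would have to scan a 2D neighborhood per pixel, which is exactly the processing burden the passage says precise alignment removes.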
3D Computer Vision System

The idea was introduced to all major companies playing in the home electronics and computer games market back
in 2007, but was not acquired by any of them. In 2010, US Patent 7,729,530 [1] was issued to protect the
intellectual rights. The same year, all kinds of 3D devices began flooding the market in North America.
Despite these recent breakthroughs in 3D technology, there is still a lack of real-time 3D vision computer systems on
the market. A few high-profile products come close to achieving instant 3D image reconstruction.
Nevertheless, they are still far from providing real-time image and gesture recognition for computer games and
device control. Let's take a closer look at them.
1. Microsoft's Kinect for Xbox 360. The product uses the suggested advanced technology in part, having two
image sensors with aligned rows of sensing elements. However, Microsoft utilizes a special light source that
projects a large pattern onto surrounding objects to be captured and recognized by the imaging part. Due to the
specifics of the pattern the image resolution is very low, and the device is only capable of recognizing major body
movements. The device uses low-resolution image sensors and is still not fast enough to process the received images.
2. Fuji's stereo camera. Precisely aligned sensors with high-grade optics; it could provide a great real-time 3D image
if connected to and controlled by a computer.
3. Panasonic's 3D camcorder. A great idea, with mechanically alignable sensors to capture 3D video images.
4. HTC has unveiled the EVO 3D, a follow-up to Sprint Nextel's breakout smartphone. It has a 4.3-inch (110mm)
touchscreen, which can display eye-popping 3D without needing glasses. Users will also be able to capture photos
and videos in 3D using a pair of cameras on the back.
5. LG Electronics has been working for a year and a half on a 3D smartphone of its own. The Optimus 3D, as it's
been called, will launch on AT&T Mobility's network under the name Thrill 4G. LG developers spent a great deal of
time fine-tuning the pair of 5-megapixel cameras to accurately capture 3D media. Calibrating the cameras to
produce good-looking stills and video is more difficult than pulling off a glasses-free display.
6. Nintendo's 3DS also has a pair of cameras for capturing scenes in 3D, and it works quite well. Being the first out
of the gate to offer a mainstream glasses-free 3D gadget, Nintendo expected to find competitors, and it soon did
when LG announced its phone.
7. Both LG and HTC are planning to debut tablet computers that, like their phones, should be able to capture 3D
with a pair of cameras.
All of the above companies are clearly on the right track, building their products on technology that aligns two
image sensors as precisely as possible. Therefore, if the technology keeps going in this direction, we will soon
witness computers recognizing and communicating with their users; robots everywhere, doing everything from
surgery to driving cars; 3D virtual games with instant avatar creation from images of the players; and 3D
technologies everywhere from smartphones to TVs.
References
[1] http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&Sect2=HITOFF&d=PALL&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.htm&r=1&f=G&l=50&s1=7729530.PN.&OS=PN/7729530&RS=PN/7729530
External links
"3-D smartphones ditch the glasses", CNN, 03/24/2011 (http://www.cnn.com/2011/TECH/mobile/03/24/3d.phones.tablets/index.html?hpt=Sbin)
"Finepix Real 3D W1 Stereo Camera by Fuji" (http://www.fujifilm.com/products/3d/camera/finepix_real3dw1/)
"3D Camcorder by Panasonic" (http://www2.panasonic.com/consumer-electronics/shop/Cameras-Camcorders/Camcorders/model.HDC-SDT750K_11002_7000000000000005702)
"Kinect Xbox 360" (http://www.xbox.com/en-ca/kinect/?WT.srch=1)
"United States Patent and Trademark Office: US Patent 7,729,530" (http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&Sect2=HITOFF&d=PALL&p=1&u=/netahtml/PTO/srchnum.htm&r=1&f=G&l=50&s1=7729530.PN.&OS=PN/7729530&RS=PN/7729530)
Acquisition
Acquisition can occur through a multitude of methods, including 2D images, acquired sensor data, and on-site sensors.
Software
Software used for airborne laser scanning includes OPALS (Orientation and Processing of Airborne Laser Scanning
data), ...[16]
Cost
Terrestrial laser scanning devices (pulse or phase devices) plus processing software generally start at a price of
€150,000. Some less precise devices (such as the Trimble VX) cost around €75,000.
Terrestrial LIDAR systems cost around €300,000.
Systems using regular still cameras mounted on RC helicopters (photogrammetry) are also possible, and cost
around €25,000. Systems that use still cameras with balloons are even cheaper (around €2,500), but require
additional manual processing. As the manual processing takes around one month of labor for every day of taking
pictures, this is still an expensive solution in the long run.
Obtaining satellite images is also an expensive endeavor. High-resolution stereo images (0.5 m resolution) cost
around €11,000; image satellites include QuickBird and Ikonos. High-resolution monoscopic images cost around
€5,500. Somewhat lower-resolution images (e.g. from the CORONA satellite, with a 2 m resolution) cost around
€1,000 per two images. Note that Google Earth images are too low in resolution to make an accurate 3D model.[17]
Object reconstruction
After the data has been collected, the acquired (and sometimes already processed) data from images or sensors needs
to be reconstructed. This may be done in the same program, or in some cases the 3D data needs to be exported and
imported into another program for further refining and/or to add additional data. Such additional data could be
GPS location data, ... Also, after reconstruction, the data might be directly implemented into a local (GIS)
map[18][19] or a worldwide map such as Google Earth.
Software
Several software packages are used into which the acquired (and sometimes already processed) data from images or
sensors is imported. The software packages include[20] (in alphabetical order):
3DF Zephyr
Canoma
Cyclone
Leica Photogrammetry Suite
MountainsMap SEM (microscopy applications only)
Neitra 3D pro
Orthoware
PhotoModeler
SketchUp
Smart3Dcapture (acute3D)
Rhinophoto
3D reconstruction
In computer vision and computer graphics, 3D reconstruction is the
process of capturing the shape and appearance of real objects. This
process can be accomplished either by active or passive methods. If the
model is allowed to change its shape in time, this is referred to as
non-rigid or spatio-temporal reconstruction.
3D reconstruction of the general anatomy of the right-side view of a small marine slug, Pseudunela viatoris.

Active methods
These methods actively interfere with the reconstructed object, either mechanically or radiometrically. A simple
example of a mechanical method would use a depth gauge to measure the distance to a rotating object put on a
turntable. More applicable radiometric methods emit radiance towards the object and then measure its reflected
part. Examples range from moving light sources, colored visible light, and time-of-flight lasers to microwaves or
ultrasound. See 3D scanning for more details.
Passive methods
Passive methods of 3D reconstruction do not interfere with the reconstructed object; they only use a sensor to
measure the radiance reflected or emitted by the object's surface to infer its 3D structure. Typically, the sensor is an
image sensor in a camera sensitive to visible light, and the input to the method is a set of digital images (one, two or
more) or video. In this case we talk about image-based reconstruction, and the output is a 3D model.
External links
3D Reconstruction from Multiple Images [1]
References
[1] http://homepages.inf.ed.ac.uk/rbf/CVonline/LOCAL_COPIES/MOHR_TRIGGS/node51.html
Binary space partitioning

Overview
Binary space partitioning is a generic process of recursively dividing a scene into two until the partitioning satisfies
one or more requirements. It can be seen as a generalisation of other spatial tree structures such as k-d trees and
quadtrees, one where hyperplanes that partition the space may have any orientation, rather than being aligned with
the coordinate axes as they are in k-d trees or quadtrees. When used in computer graphics to render scenes composed
of planar polygons, the partitioning planes are frequently (but not always) chosen to coincide with the planes defined
by polygons in the scene.
The specific choice of partitioning plane and criterion for terminating the partitioning process varies depending on
the purpose of the BSP tree. For example, in computer graphics rendering, the scene is divided until each node of the
BSP tree contains only polygons that can render in arbitrary order. When back-face culling is used, each node
therefore contains a convex set of polygons, whereas when rendering double-sided polygons, each node of the BSP
tree contains only polygons in a single plane. In collision detection or ray tracing, a scene may be divided up into
primitives on which collision or ray intersection tests are straightforward.
Binary space partitioning arose from the computer graphics need to rapidly draw three dimensional scenes composed
of polygons. A simple way to draw such scenes is the painter's algorithm, which produces polygons in order of
distance from the viewer, back to front, painting over the background and previous polygons with each closer object.
This approach has two disadvantages: time required to sort polygons in back to front order, and the possibility of
errors in overlapping polygons. Fuchs and co-authors showed that constructing a BSP tree solved both of these
problems by providing a rapid method of sorting polygons with respect to a given viewpoint (linear in the number of
polygons in the scene) and by subdividing overlapping polygons to avoid errors that can occur with the painter's
algorithm. A disadvantage of binary space partitioning is that generating a BSP tree can be time-consuming.
Generation
The canonical use of a BSP tree is for rendering polygons (that are double-sided, that is, without back-face culling)
with the painter's algorithm. Each polygon is designated with a front side and a back side which could be chosen
arbitrarily and only affects the structure of the tree but not the required result. Such a tree is constructed from an
unsorted list of all the polygons in a scene. The recursive algorithm for construction of a BSP tree from that list of
polygons is:
1. Choose a polygon P from the list.
2. Make a node N in the BSP tree, and add P to the list of polygons at that node.
3. For each other polygon in the list:
1. If that polygon is wholly in front of the plane containing P, move that polygon to the list of polygons in front of
P.
2. If that polygon is wholly behind the plane containing P, move that polygon to the list of polygons behind P.
3. If that polygon is intersected by the plane containing P, split it into two polygons and move them to the
respective lists of polygons behind and in front of P.
4. If that polygon lies in the plane containing P, add it to the list of polygons at node N.
4. Apply this algorithm to the list of polygons in front of P.
5. Apply this algorithm to the list of polygons behind P.
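The construction algorithm above can be sketched in a few lines, here for 2-D line segments as in the diagram that follows. This is a minimal sketch: the splitting of straddling segments in step 3.3 is omitted, so such segments are simply assigned to the front list, and all class and field names are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Line:
    """2-D line segment (x1, y1) -> (x2, y2); its 'front' is its left-hand side."""
    x1: float
    y1: float
    x2: float
    y2: float

    def side(self, x, y):
        # > 0 in front of the segment's carrier line, < 0 behind, 0 on it
        return (self.x2 - self.x1) * (y - self.y1) - (self.y2 - self.y1) * (x - self.x1)

@dataclass
class Node:
    polys: list                      # segments lying in this node's partition line
    front: "Node | None" = None
    back: "Node | None" = None

def build_bsp(segments):
    if not segments:
        return None
    p, rest = segments[0], segments[1:]    # step 1: choose a partition segment
    node = Node(polys=[p])                 # step 2: make a node for it
    front, back = [], []
    for q in rest:                         # step 3: classify every other segment
        s1, s2 = p.side(q.x1, q.y1), p.side(q.x2, q.y2)
        if s1 == 0 and s2 == 0:
            node.polys.append(q)           # step 3.4: lies in the partition line
        elif s1 >= 0 and s2 >= 0:
            front.append(q)                # step 3.1: wholly in front
        elif s1 <= 0 and s2 <= 0:
            back.append(q)                 # step 3.2: wholly behind
        else:
            front.append(q)                # step 3.3 would split q here (omitted)
    node.front = build_bsp(front)          # step 4: recurse on the front list
    node.back = build_bsp(back)            # step 5: recurse on the back list
    return node
```

Because the first segment in the list always becomes the partition, the choice of input order plays the role of the partition-selection heuristic discussed at the end of this section.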
The following diagram illustrates the use of this algorithm in converting a list of lines or polygons into a BSP tree.
At each of the eight steps (i.-viii.), the algorithm above is applied to a list of lines, and one new node is added to the
tree.
Start with a list of lines, (or in 3-D, polygons) making up the scene. In the tree diagrams, lists are denoted
by rounded rectangles and nodes in the BSP tree by circles. In the spatial diagram of the lines, direction
chosen to be the 'front' of a line is denoted by an arrow.
i.
Following the algorithm, a line, A, is chosen from the list, added to a node, and the rest of the list is split into
those lines in front of A and those behind it.
ii.
We now apply the algorithm to the list of lines in front of A (containing B2, C2, D2). We choose a line, B2,
add it to a node and split the rest of the list into those lines that are in front of B2 (D2), and those that are
behind it (C2, D3).
iii.
Choose a line, D2, from the list of lines in front of B2. It is the only line in the list, so after adding it to a
node, nothing further needs to be done.
iv.
We are done with the lines in front of B2, so consider the lines behind B2 (C2 and D3). Choose one of
these (C2), add it to a node, and put the other line in the list (D3) into the list of lines in front of C2.
v.
Now look at the list of lines in front of C2. There is only one line (D3), so add this to a node and continue.
vi.
We have now added all of the lines in front of A to the BSP tree, so we now start on the list of lines behind
A. Choosing a line (B1) from this list, we add B1 to a node and split the remainder of the list into lines in
front of B1 (i.e. D1), and lines behind B1 (i.e. C1).
vii.
Processing first the list of lines in front of B1, D1 is the only line in this list, so add this to a node and
continue.
viii. Looking next at the list of lines behind B1, the only line in this list is C1, so add this to a node, and the BSP
tree is complete.
The final number of polygons or lines in a tree is often larger (sometimes much larger) than the original list, since
lines or polygons that cross the partitioning plane must be split into two. It is desirable to minimize this increase, but
also to maintain reasonable balance in the final tree. The choice of which polygon or line is used as a partitioning
plane (in step 1 of the algorithm) is therefore important in creating an efficient BSP tree.
Traversal
A BSP tree is traversed in linear time, in an order determined by the particular function of the tree. Again using the
example of rendering double-sided polygons with the painter's algorithm: to draw a polygon P correctly, all
polygons behind the plane in which P lies must be drawn first, then polygon P, and finally the polygons in front of
P. If this drawing order is satisfied for all polygons in a scene, then the entire scene renders in the correct order. This
procedure can be implemented by recursively traversing a BSP tree using the following algorithm. From a given
viewing location V, to render a BSP tree:
1. If the current node is a leaf node, render the polygons at the current node.
2. Otherwise, if the viewing location V is in front of the current node:
1. Render the child BSP tree containing polygons behind the current node
2. Render the polygons at the current node
3. Render the child BSP tree containing polygons in front of the current node
3. Otherwise, if the viewing location V is behind the current node:
1. Render the child BSP tree containing polygons in front of the current node
2. Render the polygons at the current node
3. Render the child BSP tree containing polygons behind the current node
4. Otherwise, the viewing location V must be exactly on the plane associated with the current node. Then:
1. Render the child BSP tree containing polygons in front of the current node
2. Render the child BSP tree containing polygons behind the current node
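The traversal can be sketched in the same 2-D setting. This is an illustrative sketch, not code from the article; the `Node` layout and the `side` test are assumptions, and `draw` is whatever the renderer supplies.

```python
class Node:
    def __init__(self, polys, front=None, back=None):
        self.polys, self.front, self.back = polys, front, back

def side(p, a, b):
    """> 0 if point p lies in front of the directed line a->b."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def render(node, eye, draw):
    """Painter's-algorithm traversal: emit polygons back-to-front w.r.t. eye."""
    if node is None:
        return
    a, b = node.polys[0]                 # the node's partition line
    s = side(eye, a, b)
    if s > 0:                            # eye in front: draw the far (back) side first
        render(node.back, eye, draw)
        for poly in node.polys:
            draw(poly)
        render(node.front, eye, draw)
    elif s < 0:                          # eye behind: draw the far (front) side first
        render(node.front, eye, draw)
        for poly in node.polys:
            draw(poly)
        render(node.back, eye, draw)
    else:                                # eye on the plane: node's polygons are edge-on
        render(node.front, eye, draw)
        render(node.back, eye, draw)
```

Note that in the on-plane case the node's own polygons are viewed edge-on and contribute nothing, which is why they are skipped.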
Applying this algorithm recursively to the BSP tree generated above results in the following steps:
The algorithm is first applied to the root node of the tree, node A. V is in front of node A, so we apply the
algorithm first to the child BSP tree containing polygons behind A.
Timeline
1969 Schumacker et al. published a report that described how carefully positioned planes in a virtual environment
could be used to accelerate polygon ordering. The technique made use of depth coherence, which states that a
polygon on the far side of the plane cannot, in any way, obstruct a closer polygon. This was used in flight
simulators made by GE as well as Evans & Sutherland. However, creation of the polygonal data organization
was performed manually by the scene designer.
1980 Fuchs et al. extended Schumacker's idea to the representation of 3D objects in a virtual environment by
using planes that lie coincident with polygons to recursively partition the 3D space. This provided a fully
automated and algorithmic generation of a hierarchical polygonal data structure known as a Binary Space
Partitioning Tree (BSP Tree). The process took place as an off-line preprocessing step that was performed once
per environment/object. At run-time, the view-dependent visibility ordering was generated by traversing the tree.
1981 Naylor's Ph.D. thesis contained a full development of both BSP trees and a graph-theoretic approach using
strongly connected components for pre-computing visibility, as well as the connection between the two methods.
BSP trees as a dimension-independent spatial search structure were emphasized, with applications to visible
surface determination. The thesis also included the first empirical data demonstrating that the size of the tree and
the number of new polygons were reasonable (using a model of the Space Shuttle).
1983 Fuchs et al. described a micro-code implementation of the BSP tree algorithm on an Ikonas frame buffer
system. This was the first demonstration of real-time visible surface determination using BSP trees.
1987 Thibault and Naylor described how arbitrary polyhedra may be represented using a BSP tree as opposed to
the traditional b-rep (boundary representation). This provided a solid representation vs. a surface-based
representation. Set operations on polyhedra were described using a tool, enabling Constructive Solid
Geometry (CSG) in real time. This was the forerunner of BSP level design using brushes, introduced in the
Quake editor and picked up in the Unreal Editor.
References
Additional references
[NAYLOR90] B. Naylor, J. Amanatides, and W. Thibault, "Merging BSP Trees Yields Polyhedral Set
Operations", Computer Graphics (Siggraph '90), 24(3), 1990.
[NAYLOR93] B. Naylor, "Constructing Good Partitioning Trees", Graphics Interface (annual Canadian CG
conference), May 1993.
[CHEN91] S. Chen and D. Gordon, "Front-to-Back Display of BSP Trees" (http://cs.haifa.ac.il/~gordon/
ftb-bsp.pdf), IEEE Computer Graphics & Applications, pp. 79–85, September 1991.
[RADHA91] H. Radha, R. Leonardi, M. Vetterli, and B. Naylor, "Binary Space Partitioning Tree Representation
of Images", Journal of Visual Communications and Image Processing, vol. 2(3), 1991.
[RADHA93] H. Radha, "Efficient Image Representation using Binary Space Partitioning Trees", Ph.D. thesis,
Columbia University, 1993.
[RADHA96] H. Radha, M. Vetterli, and R. Leonardi, "Image Compression Using Binary Space Partitioning
Trees", IEEE Transactions on Image Processing, vol. 5, no. 12, December 1996, pp. 1610–1624.
[WINTER99] A. S. Winter, "An Investigation into Real-time 3D Polygon Rendering Using BSP Trees",
April 1999. Available online.
Mark de Berg, Marc van Kreveld, Mark Overmars, and Otfried Schwarzkopf (2000). Computational Geometry
(2nd revised ed.). Springer-Verlag. ISBN 3-540-65620-0. Section 12: Binary Space Partitions: pp. 251–265.
Describes a randomized painter's algorithm.
External links
BSP trees presentation (http://www.cs.wpi.edu/~matt/courses/cs563/talks/bsp/bsp.html)
Another BSP trees presentation (http://web.archive.org/web/20110719195212/http://www.cc.gatech.edu/
classes/AY2004/cs4451a_fall/bsp.pdf)
A Java applet that demonstrates the process of tree generation (http://symbolcraft.com/graphics/bsp/)
A Master Thesis about BSP generating (http://archive.gamedev.net/archive/reference/programming/features/
bsptree/bsp.pdf)
BSP Trees: Theory and Implementation (http://www.devmaster.net/articles/bsp-trees/)
BSP in 3D space (http://www.euclideanspace.com/threed/solidmodel/spatialdecomposition/bsp/index.htm)
Bounding interval hierarchy
Overview
Bounding interval hierarchies (BIH) exhibit many of the properties of both bounding volume hierarchies (BVH) and
kd-trees. Whereas the construction and storage of a BIH is comparable to that of a BVH, the traversal of a BIH
resembles that of a kd-tree. Furthermore, BIH are also binary trees, just like kd-trees (and, in fact, their superset,
BSP trees). Finally, BIH are axis-aligned, as are their ancestors. Although a more general non-axis-aligned
implementation of the BIH should be possible (similar to the BSP tree, which uses unaligned planes), it would
almost certainly be less desirable due to decreased numerical stability and increased complexity of ray traversal.
The key feature of the BIH is the storage of two planes per node (as opposed to one for the kd-tree and six for an
axis-aligned bounding box hierarchy), which allows for overlapping children (just like a BVH) while at the same
time imposing an order on the children along one dimension/axis (as is the case for kd-trees).
It is also possible to use the BIH data structure only for the construction phase and then traverse the tree the way a
traditional axis-aligned bounding box hierarchy does. This enables some simple speed-up optimizations for large ray
bundles [3] while keeping memory/cache usage low.
Some general attributes of bounding interval hierarchies (and techniques related to BIH) are described in [4].
Operations
Construction
To construct any space-partitioning structure, some form of heuristic is commonly used. For this, the surface area
heuristic, commonly used with many partitioning schemes, is a possible candidate. Another, more simplistic
heuristic is the "global" heuristic described in [4], which only requires an axis-aligned bounding box rather than the
full set of primitives, making it much more suitable for fast construction.
The general construction scheme for a BIH:
calculate the scene bounding box
use a heuristic to choose one axis and a split plane candidate perpendicular to this axis
sort the objects into the left or right child (exclusively) depending on the bounding box of the object (note that
objects intersecting the split plane may be sorted by their overlap with the child volumes or by any other
heuristic)
calculate the maximum bounding value of all objects in the left child and the minimum bounding value of those in
the right child for that axis (this can be combined with the previous step for some heuristics)
store these two values, along with two bits encoding the split axis, in a new node
continue with step 2 for the children
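The scheme above might be sketched as follows. This is illustrative only: objects are axis-aligned boxes given as (lo, hi) corner tuples, the midpoint heuristic stands in for the candidates discussed next, and the tuple-based node layout is an assumption, not part of the article.

```python
def build(objs, box):
    """objs: list of (lo, hi) corner tuples; box: (lo, hi) bounds of this node."""
    if len(objs) <= 1:
        return ('leaf', objs)
    # heuristic: split the longest axis of the node box at its midpoint
    axis = max(range(3), key=lambda i: box[1][i] - box[0][i])
    mid = 0.5 * (box[0][axis] + box[1][axis])
    left = [o for o in objs if 0.5 * (o[0][axis] + o[1][axis]) <= mid]
    right = [o for o in objs if 0.5 * (o[0][axis] + o[1][axis]) > mid]
    if not left or not right:                # degenerate split: stop here
        return ('leaf', objs)
    lmax = max(o[1][axis] for o in left)     # left clip plane for this axis
    rmin = min(o[0][axis] for o in right)    # right clip plane for this axis
    lbox = (box[0], box[1][:axis] + (mid,) + box[1][axis + 1:])
    rbox = (box[0][:axis] + (mid,) + box[0][axis + 1:], box[1])
    # a node stores only the two clip-plane values and the split axis
    return ('node', axis, lmax, rmin, build(left, lbox), build(right, rbox))
```

Storing only `lmax` and `rmin` per node is what distinguishes the BIH from a full AABB hierarchy: the children may overlap, yet remain ordered along the split axis.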
Potential heuristics for the split plane candidate search:
Classical: pick the longest axis and the middle of the node bounding box on that axis
Classical: pick the longest axis and a split plane through the median of the objects (this results in a leftist tree,
which is often unfortunate for ray tracing)
Global heuristic: pick the split plane based on a global criterion, in the form of a regular grid (avoids unnecessary
splits and keeps node volumes as cubic as possible)
Surface area heuristic: calculate the surface area and number of objects for both children, over the set of all
possible split plane candidates, then choose the one with the lowest cost (claimed to be optimal, though the cost
function rests on assumptions that cannot be fulfilled in practice; it is also an exceptionally slow heuristic to
evaluate)
Ray traversal
The traversal phase closely resembles a kd-tree traversal: one has to distinguish four simple cases, where the ray
intersects only the left child,
intersects only the right child,
intersects both children, or
intersects neither child (the only case not possible in a kd-tree traversal).
For the third case, depending on the ray direction (negative or positive) of the component (x, y or z) equalling the
split axis of the current node, the traversal continues first with the left (positive direction) or the right (negative
direction) child, and the other one is pushed onto a stack.
Traversal continues until a leaf node is found. After intersecting the objects in the leaf, the next element is popped
from the stack. If the stack is empty, the nearest intersection of all pierced leaves is returned.
It is also possible to add a fifth traversal case, which, however, requires a slightly more complicated construction
phase. By swapping the meanings of the left and right plane of a node, it is possible to cut off empty space on both
sides of a node. This requires an additional bit that must be stored in the node to detect this special case during
traversal. Handling this case during the traversal phase is simple, as the ray
just intersects the only child of the current node, or
intersects nothing.
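A recursive sketch of the basic traversal (illustrative only; nodes are assumed stored as `('node', axis, left_max, right_min, left, right)` and `('leaf', objects)` tuples, the ray component along the split axis is assumed non-zero, and a real implementation would use an explicit stack and visit the nearer child first):

```python
def traverse(node, orig, dirn, t0, t1, visit):
    """Visit leaves pierced by the ray orig + t*dirn restricted to [t0, t1]."""
    if node[0] == 'leaf':
        visit(node[1])
        return
    _, axis, lmax, rmin, left, right = node
    inv = 1.0 / dirn[axis]               # assumes a non-zero ray component
    tl = (lmax - orig[axis]) * inv       # where the ray crosses the left plane
    tr = (rmin - orig[axis]) * inv       # where the ray crosses the right plane
    if inv < 0.0:                        # negative direction: swap the roles
        tl, tr, left, right = tr, tl, right, left
    if t0 <= min(t1, tl):                # ray interval overlaps the near child
        traverse(left, orig, dirn, t0, min(t1, tl), visit)
    if max(t0, tr) <= t1:                # ray interval overlaps the far child
        traverse(right, orig, dirn, max(t0, tr), t1, visit)
    # if both tests fail, the ray passes between the two clip planes and
    # intersects nothing (the case that cannot occur in a kd-tree traversal)
```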
Properties
Numerical stability
All operations during the hierarchy construction/sorting of the triangles are min/max operations and comparisons.
Thus no triangle clipping has to be done, as is the case with kd-trees, where clipping can become a problem for
triangles that just slightly intersect a node. Even if a kd-tree implementation is carefully written, numerical errors
can result in a non-detected intersection and thus rendering errors (holes in the geometry) due to the missed
ray-object intersection.
Extensions
Instead of using two planes per node to separate geometry, it is also possible to use any number of planes to create
an n-ary BIH, or to use multiple planes in a standard binary BIH (one and four planes per node were already
proposed, and then properly evaluated in [5]) to achieve better object separation.
References
Papers
[1] Nam, Beomseok; Sussman, Alan. A comparative study of spatial indexing techniques for multidimensional scientific datasets
(http://ieeexplore.ieee.org/Xplore/login.jsp?url=/iel5/9176/29111/01311209.pdf)
[2] Zachmann, Gabriel. Minimal Hierarchical Collision Detection (http://zach.in.tu-clausthal.de/papers/vrst02.html)
[3] Wald, Ingo; Boulos, Solomon; Shirley, Peter (2007). Ray Tracing Deformable Scenes using Dynamic Bounding Volume Hierarchies
(http://www.sci.utah.edu/~wald/Publications/2007/BVH/download/togbvh.pdf)
[4] Wächter, Carsten; Keller, Alexander (2006). Instant Ray Tracing: The Bounding Interval Hierarchy (http://ainc.de/Research/BIH.pdf)
[5] Wächter, Carsten (2008). Quasi-Monte Carlo Light Transport Simulation by Efficient Ray Tracing
(http://vts.uni-ulm.de/query/longview.meta.asp?document_id=6265)
External links
BIH implementations: Javascript (http://github.com/imbcmdth/jsBIH).
Bounding volume
For building code compliance, see Bounding.
In computer graphics and computational geometry, a bounding
volume for a set of objects is a closed volume that completely contains
the union of the objects in the set. Bounding volumes are used to
improve the efficiency of geometrical operations by using simple
volumes to contain more complex objects. Normally, simpler volumes
have simpler ways to test for overlap.
A bounding volume for a set of objects is also a bounding volume for the single object consisting of their union,
and vice versa. Therefore, it is possible to confine the description to the case of a single object, which is assumed
to be non-empty and bounded (finite).
A three dimensional model with its bounding box
drawn in dashed lines.
A bounding box is a cuboid, or in 2-D a rectangle, containing the object. In dynamical simulation, bounding boxes
are preferred to other shapes of bounding volume such as bounding spheres or cylinders for objects that are roughly
cuboid in shape when the intersection test needs to be fairly accurate. The benefit is obvious, for example, for objects
that rest upon others, such as a car resting on the ground: a bounding sphere would show the car as possibly
intersecting with the ground, which would then need to be rejected by a more expensive test of the actual model of
the car; a bounding box immediately shows the car as not intersecting with the ground, saving the more expensive
test.
A bounding capsule is a swept sphere (i.e. the volume that a sphere takes as it moves along a straight line segment)
containing the object. Capsules can be represented by the radius of the swept sphere and the segment that the sphere
is swept across. A capsule has traits similar to a cylinder's, but is easier to use, because the intersection test is
simpler. A capsule and another object intersect if the distance between the capsule's defining segment and some
feature of the other object is smaller than the capsule's radius. For example, two capsules intersect if the distance
between the capsules' segments is smaller than the sum of their radii. This holds for arbitrarily rotated capsules,
which is why they're more appealing than cylinders in practice.
A bounding cylinder is a cylinder containing the object. In most applications the axis of the cylinder is aligned with
the vertical direction of the scene. Cylinders are appropriate for 3-D objects that can only rotate about a vertical axis
but not about other axes, and are otherwise constrained to move by translation only. Two vertical-axis-aligned
cylinders intersect when, simultaneously, their projections on the vertical axis intersect (these are two line
segments) and their projections on the horizontal plane intersect (two circular disks). Both are easy to test. In video
games, bounding cylinders are often used as bounding volumes for people standing upright.
A bounding ellipsoid is an ellipsoid containing the object. Ellipsoids usually provide tighter fitting than a sphere.
Intersections with ellipsoids are done by scaling the other object along the principal axes of the ellipsoid by an
amount equal to the multiplicative inverse of the radii of the ellipsoid, thus reducing the problem to intersecting the
scaled object with a unit sphere. Care should be taken to avoid problems if the applied scaling introduces skew.
Skew can make the usage of ellipsoids impractical in certain cases, for example collision between two arbitrary
ellipsoids.
A bounding slab is the volume between two parallel planes; it is related to the AABB and used to speed up ray tracing.[1]
A bounding sphere is a sphere containing the object. In 2-D graphics, this is a circle. Bounding spheres are
represented by centre and radius. They are very quick to test for collision with each other: two spheres intersect
when the distance between their centres does not exceed the sum of their radii. This makes bounding spheres
appropriate for objects that can move in any number of dimensions.
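The sphere-sphere test described above is short enough to state directly (an illustrative sketch; centres are coordinate tuples of any matching dimension):

```python
def spheres_intersect(c1, r1, c2, r2):
    """Two spheres overlap iff the centre distance does not exceed r1 + r2."""
    d2 = sum((a - b) ** 2 for a, b in zip(c1, c2))
    return d2 <= (r1 + r2) ** 2          # squared comparison avoids the sqrt
```

Comparing squared distances is a common micro-optimization: it gives the same answer while skipping the square root.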
In many applications the bounding box is aligned with the axes of the co-ordinate system, and it is then known as an
axis-aligned bounding box (AABB). To distinguish the general case from an AABB, an arbitrary bounding box is
sometimes called an oriented bounding box (OBB). AABBs are much simpler to test for intersection than OBBs,
but have the disadvantage that when the model is rotated they cannot be simply rotated with it, but need to be
recomputed.
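The simplicity of the AABB test comes from the fact that two axis-aligned boxes overlap exactly when their intervals overlap on every axis (an illustrative sketch; boxes are (min_corner, max_corner) tuples):

```python
def aabbs_intersect(a, b):
    """a, b: (min_corner, max_corner); overlap iff intervals overlap on each axis."""
    return all(a[0][i] <= b[1][i] and b[0][i] <= a[1][i]
               for i in range(len(a[0])))
```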
A bounding triangle in 2-D is quite useful to speed up the clipping or visibility test of a B-Spline curve. See "Circle
and B-Splines clipping algorithms" under the subject Clipping (computer graphics) for an example of use.
A convex hull is the smallest convex volume containing the object. If the object is the union of a finite set of points,
its convex hull is a polytope.
A discrete oriented polytope (DOP) generalizes the AABB. A DOP is a convex polytope containing the object (in
2-D a polygon; in 3-D a polyhedron), constructed by taking a number of suitably oriented planes at infinity and
moving them until they collide with the object. The DOP is then the convex polytope resulting from intersection of
the half-spaces bounded by the planes. Popular choices for constructing DOPs in 3-D graphics include the
axis-aligned bounding box, made from 6 axis-aligned planes, and the beveled bounding box, made from 10 (if
beveled only on vertical edges, say), 18 (if beveled on all edges), or 26 planes (if beveled on all edges and corners).
A DOP constructed from k planes is called a k-DOP; the actual number of faces can be less than k, since some can
become degenerate, shrunk to an edge or a vertex.
A minimum bounding rectangle (MBR), the least AABB in 2-D, is frequently used in the description of
geographic (or "geospatial") data items, serving as a simplified proxy for a dataset's spatial extent (see geospatial
metadata) for the purpose of data search (including spatial queries as applicable) and display. It is also a basic
component of the R-tree method of spatial indexing.
For the ranges [m, n] and [o, p] it can be said that they do not intersect if m > p or o > n. Thus, by projecting the
ranges of two OBBs along the I, J, and K axes of each OBB, and checking for non-intersection, it is possible to detect
non-intersection. By additionally checking along the cross products of these axes (I0×I1, I0×J1, ...) one can be more
certain that intersection is impossible.
This concept of determining non-intersection via axis projection also extends to convex polyhedra; however, the
normals of each polyhedral face are used instead of the base axes, and the extents are based on the minimum and
maximum dot products of each vertex against the axes. Note that this description assumes the checks are being done
in world space.
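The projection-and-range idea above can be sketched directly (an illustrative separating-axis sketch; the vertex lists and the set of candidate axes are supplied by the caller, and the axes need not be unit length since only range comparisons are made):

```python
def project(verts, axis):
    """Range [m, n] of the vertices' dot products onto the given axis."""
    dots = [sum(v[i] * axis[i] for i in range(3)) for v in verts]
    return min(dots), max(dots)

def separated(verts_a, verts_b, axes):
    """True if some tested axis yields disjoint projection ranges."""
    for axis in axes:
        m, n = project(verts_a, axis)
        o, p = project(verts_b, axis)
        if m > p or o > n:               # ranges [m, n] and [o, p] do not intersect
            return True
    return False                         # no separation found along these axes
```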
References
[1] POV-Ray Documentation (http://www.povray.org/documentation/view/3.6.1/323/)
External links
Illustration of several DOPs for the same model, from epicgames.com (http://udn.epicgames.com/Two/rsrc/
Two/CollisionTutorial/kdop_sizes.jpg)
Bounding volume hierarchy
Construction
There are three primary categories of tree construction methods: top-down, bottom-up, and insertion methods.
Top-down methods proceed by partitioning the input set into two (or more) subsets, bounding them in the chosen
bounding volume, then keep partitioning (and bounding) recursively until each subset consists of only a single
primitive (leaf nodes are reached). Top-down methods are easy to implement, fast to construct and by far the most
popular, but do not result in the best possible trees in general. Bottom-up methods start with the input set as the
leaves of the tree and then group two (or more) of them to form a new (internal) node, proceed in the same manner
until everything has been grouped under a single node (the root of the tree). Bottom-up methods are more difficult to
implement, but likely to produce better trees in general. Both top-down and bottom-up methods are considered
off-line methods as they both require all primitives to be available before construction starts. Insertion methods build
the tree by inserting one object at a time, starting from an empty tree. The insertion location should be chosen so
that the tree grows as little as possible according to a cost metric. Insertion methods are considered on-line
methods since they do not require all primitives to be available before construction starts, and thus allow updates to
be performed at runtime.
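A top-down build of the kind described above can be sketched as follows (illustrative only; primitives are axis-aligned boxes given as (lo, hi) corner tuples, the object-median split on the longest axis is just one of the possible partitioning rules, and the dict-based node layout is an assumption):

```python
def bvh(prims):
    """Top-down build: enclose, split at the object median on the longest axis, recurse."""
    lo = tuple(min(p[0][i] for p in prims) for i in range(3))
    hi = tuple(max(p[1][i] for p in prims) for i in range(3))
    if len(prims) == 1:                  # leaf: a single primitive
        return {'box': (lo, hi), 'prim': prims[0]}
    axis = max(range(3), key=lambda i: hi[i] - lo[i])
    prims = sorted(prims, key=lambda p: p[0][axis] + p[1][axis])
    half = len(prims) // 2               # median split keeps the tree balanced
    return {'box': (lo, hi), 'left': bvh(prims[:half]), 'right': bvh(prims[half:])}
```

The median split illustrates why top-down builds are easy but not optimal: it balances the tree without considering how much the two child boxes overlap.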
Usage
BVHs are often used in ray tracing to eliminate potential intersection candidates within a scene by omitting
geometric objects located in bounding volumes that are not intersected by the current ray.[3]
References
[1] Herman Johannes Haverkort, Results on geometric networks and data structures, 2004. Chapter 1: Introduction, pages 9–10, 16.
(http://igitur-archive.library.uu.nl/dissertations/2004-0506-101707/c1.pdf)
[2] Christer Ericson, Real-Time Collision Detection, pp. 236–237
[3] Johannes Günther, Stefan Popov, Hans-Peter Seidel and Philipp Slusallek, Realtime Ray Tracing on GPU with BVH-based Packet Traversal
(http://www.mpi-inf.mpg.de/~guenther/BVHonGPU/)
External links
BVH implementations: Javascript (http://github.com/imbcmdth/jsBVH).
Box modeling
Box modeling is a technique in 3D modeling in which the artist starts with a basic primitive shape (such as a box or
cylinder), forms a rough draft of the final model from that basic shape, and then sculpts out the final model. The
process uses various tools and steps that are often repeated until the model is done. Despite this repetition, box
modeling lets the artist work quickly and control the amount of detail added, slowly building the model up from a
low level of detail to a high one.
Subdivision
Subdivision modeling is derived from the idea that, as the work progresses, should the artist want to make the work
appear less sharp, or "blocky", each face would be divided up into smaller, more detailed faces (usually into sets of
four). However, more experienced box modelers manage to create their models without subdividing the faces of the
model. Basically, box modeling is broken down into the very basic concept of polygonal management.
Quads
Quadrilateral faces, commonly named "quads", are the fundamental entity in box modeling. If an artist were to start
with a cube, the artist would have six quad faces to work with before extrusion. While most applications for
three-dimensional art allow faces of any size, results are often more predictable and consistent when working with
quads. This is because if one were to draw an X connecting the corner vertices of a quad, the surface normal is
nearly always the same. We say nearly because, when a quad is something other than a perfect parallelogram (such
as a rhombus or trapezoid), the surface normal would differ. Also, a quad subdivides cleanly into two or four
triangles, making it easier to prepare the model for software that can only handle triangles.
References
Recursive evaluation
CatmullClark surfaces are defined recursively, using the following
refinement scheme:
Start with a mesh of an arbitrary polyhedron. All the vertices in this
mesh shall be called original points.
For each face, add a face point
Set each face point to be the average of all original points for the
respective face.
For each edge, add an edge point
Set each edge point to be the average of the two neighbouring face points
and the edge's two original endpoints.
For each original point P, take the average F of all n (recently created) face
points for faces touching P, and the average R of the midpoints of all n
(original) edges touching P. Move each original point to (F + 2R + (n − 3)P) / n.
Connect each new face point to the new edge points of all original edges
defining the original face, and connect each new vertex point to the new edge
points of all original edges incident on the original vertex.
The new mesh consists only of quadrilaterals, which in general are not planar.
The limit surface is smooth everywhere except at extraordinary vertices (those
with valence other than four), where it has only C1 (rather than C2
continuity). After one iteration, the number of extraordinary points on the surface
remains constant.
The arbitrary-looking barycenter formula was chosen by Catmull and Clark based on the aesthetic appearance of the
resulting surfaces rather than on a mathematical derivation, although Catmull and Clark do go to great lengths to
rigorously show that the method yields bicubic B-spline surfaces.
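The first refinement step, computing one face point per face as the average of that face's original points, can be sketched directly (an illustrative sketch; the mesh is assumed given as a vertex list plus faces as tuples of vertex indices):

```python
def face_points(verts, faces):
    """One new point per face: the average of that face's original points."""
    pts = []
    for face in faces:                   # face: tuple of indices into verts
        n = len(face)
        pts.append(tuple(sum(verts[i][k] for i in face) / n for k in range(3)))
    return pts
```

Edge points and the repositioned original points are computed from these face points in the subsequent steps.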
Exact evaluation
The limit surface of CatmullClark subdivision surfaces can also be evaluated directly, without any recursive
refinement. This can be accomplished by means of the technique of Jos Stam. This method reformulates the
recursive refinement process into a matrix exponential problem, which can be solved directly by means of matrix
diagonalization.
3ds max
3D-Coat
AC3D
Anim8or
AutoCAD
Blender
Carrara
CATIA (Imagine and Shape)
CGAL
Cheetah3D
Cinema4D
Clara.io
DAZ Studio, 2.0
Gelato
Hammer
Hexagon
Houdini
K-3D
LightWave 3D, version 9
Maya
Metasequoia
modo
Mudbox
PRMan
Realsoft3D
Remo 3D
Shade
Rhinoceros 3D - Grasshopper 3D Plugin - Weaverbird Plugin
Silo
SketchUp - Requires a Plugin.
Softimage XSI
Strata 3D CX
Wings 3D
Zbrush
TopMod
References
Cloth modeling
Cloth modeling is the term used for simulating cloth within a computer program, usually in the context of 3D
computer graphics. The main approaches used for this may be classified into three basic types: geometric, physical,
and particle/energy.
Background
Most models of cloth are based on "particles" of mass connected in some manner of mesh. Newtonian physics is
used to model each particle through the use of a "black box" called a physics engine. This involves using the basic
law of motion (Newton's second law), F = ma: the net force on each particle equals its mass times its acceleration.
In all of these models, the goal is to find the position and shape of a piece of fabric using this basic equation and
several other methods.
Geometric methods
Weil pioneered the first of these, the geometric technique, in 1986.[1] His work focused on approximating the
look of cloth by treating cloth like a collection of cables and using hyperbolic cosine (catenary) curves. Because of
this, it is not suitable for dynamic models but works very well for stationary or single-frame renders. This technique
creates an underlying shape out of single points; then, it parses through each set of three of these points and maps a
catenary curve to the set. It then takes the lowest out of each overlapping set and uses it for the render.
Physical methods
The second technique treats cloth like a grid work of particles connected to each other by springs. Whereas the
geometric approach accounted for none of the inherent stretch of a woven material, this physical model accounts for
stretch (tension), stiffness, and weight.
Particle/energy methods
The last method is more complex than the first two. The particle technique takes the physical technique a
step further and supposes that we have a network of particles interacting directly. That is to say that, rather than
springs, we use the energy interactions of the particles to determine the cloth's shape. For this we use an energy
equation that sums the following terms:
The energy of repelling is an artificial element we add to prevent cloth from intersecting itself.
The energy of stretching is governed by Hooke's law as with the Physical Method.
The energy of bending describes the stiffness of the fabric
The energy of trellising describes the shearing of the fabric (distortion within the plane of the fabric)
The energy of gravity is based on acceleration due to gravity
We can also add terms for energy added by any source to this equation, then derive and find minima, which
generalizes our model. This allows us to model cloth behavior under any circumstance, and since we are treating the
cloth as a collection of particles its behavior can be described with the dynamics provided in our physics engine.
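A minimal mass-spring step of the kind the physical method describes can be sketched as follows. This is an illustrative sketch, not from the original text: it applies Hooke's law along each spring, adds gravity, and advances with semi-implicit Euler integration; damping, bending/trellis energies, and collision handling are all omitted.

```python
def spring_force(p, q, rest, k):
    """Hooke's law along p->q: magnitude k * (current length - rest length)."""
    d = [q[i] - p[i] for i in range(3)]
    length = sum(c * c for c in d) ** 0.5
    scale = k * (length - rest) / length
    return [scale * c for c in d]        # force on p, toward q when stretched

def step(pos, vel, springs, mass, dt, g=(0.0, -9.81, 0.0)):
    """One semi-implicit Euler step of a mass-spring sheet (no damping/collisions)."""
    forces = [[mass * gi for gi in g] for _ in pos]
    for i, j, rest, k in springs:        # spring: (particle i, particle j, rest, k)
        f = spring_force(pos[i], pos[j], rest, k)
        for a in range(3):
            forces[i][a] += f[a]
            forces[j][a] -= f[a]         # equal and opposite reaction
    vel = [[v[a] + dt * f[a] / mass for a in range(3)] for v, f in zip(vel, forces)]
    pos = [[p[a] + dt * v[a] for a in range(3)] for p, v in zip(pos, vel)]
    return pos, vel
```

Updating velocities before positions (semi-implicit Euler) is a common stability improvement over plain explicit Euler for stiff spring systems.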
References
Cloth Modeling [2]
Notes
[1] Tutorial on Cloth Modeling (http://www.webcitation.org/query?url=http://www.geocities.com/SiliconValley/Heights/5445/cloth.html&date=2009-10-25+09:48:40)
[2] http://davis.wpi.edu/~matt/courses/cloth/
COLLADA
Filename extension: .dae
Internet media type: model/vnd.collada+xml
Developed by: Sony Computer Entertainment, Khronos Group
Initial release: October 2004
Latest release: 1.5.0 / August 2008
Type of format: 3D computer graphics
Extended from: XML
Website: collada.org [1]
COLLADA (from collaborative design activity) is an interchange file format for interactive 3D applications. It is
managed by the nonprofit technology consortium, the Khronos Group, and has been adopted by ISO as a publicly
available specification, ISO/PAS 17506.
COLLADA defines an open standard XML schema for exchanging digital assets among various graphics software
applications that might otherwise store their assets in incompatible file formats. COLLADA documents that describe
digital assets are XML files, usually identified with a .dae (digital asset exchange) filename extension.
History
Originally created at Sony Computer Entertainment by Rémi Arnaud and Mark C. Barnes, it has since become the
property of the Khronos Group, a member-funded industry consortium, which now shares the copyright with Sony.
The COLLADA schema and specification are freely available from the Khronos Group. The COLLADA DOM uses
the SCEA Shared Source License [2].
Several graphics companies collaborated with Sony from COLLADA's beginnings to create a tool that would be
useful to the widest possible audience, and COLLADA continues to evolve through the efforts of Khronos
contributors. Early collaborators included Alias Systems Corporation, Criterion Software, Autodesk, Inc., and Avid
Technology. Dozens of commercial game studios and game engines have adopted the standard.
Members of the developer team:
Lilli Thompson[3]
In March 2011, Khronos released[4] the COLLADA Conformance Test Suite (CTS). The suite allows applications
that import and export COLLADA to test against a large suite of examples, ensuring that they conform properly to
the specification. In July 2012, the CTS software was released on GitHub,[5] allowing for community contributions.
ISO/PAS 17506:2012 Industrial automation systems and integration -- COLLADA digital asset schema specification
for 3D visualization of industrial data was published in July 2012.
Software tools
COLLADA was originally intended as an intermediate format for transporting data from one digital content creation
(DCC) tool to another application. Applications exist to support the usage of several DCCs, including:
EskoArtwork Studio
FreeCAD
FormZ
GPure
Houdini (Side Effects Software)
iBooks Author
LightWave 3D (v 9.5)
Maya (ColladaMaya)
MeshLab
Mobile Model Viewer (Android) [7]
modo
Okino PolyTrans [8] for bidirectional Collada conversions
OpenRAVE
Poser Pro (v 7.0)
Presagis Creator
Robot Operating System
SAP Visual Enterprise Author
Shade 3D (E Frontier, Mirye)
SketchUp (v 8.0) - a KMZ file is a zip file containing a KML file, a COLLADA file, and texture images
Softimage|XSI
Strata 3D
Urban PAD
Vectorworks
Visual3D Game Development Tool for Collada scene and model viewing, editing, and exporting
Wings 3D
Xcode (v 4.4)
Game engines
Although originally intended as an interchange format, many game engines now support COLLADA natively,
including:
Ardor3D
C4 Engine
CryEngine 2
GLGE
Irrlicht Engine
Panda3d
ShiVa
Spring
Torque 3D
Turbulenz
Unity
Unreal Engine
Vanda Engine [9]
Visual3D Game Engine
GamePlay
Applications
Some games and 3D applications have started to support COLLADA:
ArcGIS
Autodesk Infrastructure Modeler
Google Earth (v 4) - users can simply drag and drop a COLLADA file on top of the virtual Earth
Maple (software) - 3D plots can be exported as COLLADA
Open Wonderland
OpenSimulator
Mac OS X 10.6's Preview
NASA World Wind
Second Life
TNTmips
SAP Visual Enterprise Author supports import and export of .dae files.
Google SketchUp imports .dae files.
Kerbal Space Program uses .dae files for 3D model mods.
Libraries
There are several libraries available to read and write COLLADA files under programmatic control:
COLLADA DOM [10] (C++) - The COLLADA DOM is generated at compile-time from the COLLADA schema.
It provides a low-level interface that eliminates the need for hand-written parsing routines, but is limited to
reading and writing only one version of COLLADA, making it difficult to upgrade as new versions are released.
FCollada [11] (C++) - A utility library available from Feeling Software. In contrast to the COLLADA DOM, Feeling Software's FCollada provides a higher-level interface. FCollada is used in ColladaMaya [12], ColladaMax [13], and several commercial game engines. Development of the open-source part was discontinued by Feeling Software in 2008; the company continues to support its paying customers and licensees with improved versions of its software.
OpenCOLLADA [14] (C++) - The OpenCOLLADA project provides plugins for 3ds Max and Maya and the
sources of utility libraries which were developed for the plugins.
pycollada [15] (Python) - A Python module for creating, editing and loading COLLADA. The library allows the
application to load a COLLADA file and interact with it as a Python object. In addition, it supports creating a
COLLADA Python object from scratch, as well as in-place editing.
Scene Kit [16] (Objective-C) - An Objective-C framework introduced in OS X 10.8 Mountain Lion that allows
reading, high-level manipulation and display of COLLADA scenes.
GLGE (JavaScript) - a JavaScript library presenting COLLADA files in a web browser using WebGL.
Three.js (JavaScript) - a 3D JavaScript library capable of loading COLLADA files in a web browser.
StormEngineC (JavaScript) - a JavaScript 3D graphics library with an option to load COLLADA files.
Physics
As of version 1.4, physics support was added to the COLLADA standard. The goal is to allow content creators to
define various physical attributes in visual scenes. For example, one can define surface material properties such as
friction. Furthermore, content creators can define the physical attributes for the objects in the scene. This is done by
defining the rigid bodies that should be linked to the visual representations. More features include support for
ragdolls, collision volumes, physical constraints between physical objects, and global physical properties such as
gravitation.
Physics middleware products that support this standard include Bullet Physics Library, Open Dynamics Engine, PAL
and NVIDIA's PhysX. These products support the standard by reading the abstract physics description found in the COLLADA file and translating it into a form that the middleware can represent in a physical simulation. This also enables different middleware and tools to exchange physics data in a standardized manner.
The Physics Abstraction Layer provides support for COLLADA Physics to multiple physics engines that do not
natively provide COLLADA support including JigLib, OpenTissue, Tokamak physics engine and True Axis. PAL
also provides support for COLLADA to physics engines that also feature a native interface.
Versions
References
[1] http://collada.org/
[2] http://research.scea.com/scea_shared_source_license.html
[3] Building Game Development Tools with App Engine, GWT, and WebGL (http://www.google.com/events/io/2011/sessions/building-game-development-tools-with-app-engine-gwt-and-webgl.html), Google I/O 2011, Lilli Thompson.
[4] http://www.khronos.org/news/press/khronos-group-releases-free-collada-conformance-test-suite
[5] http://www.khronos.org/news/permalink/opencollada-and-collada-cts-now-on-github
[6] http://www.cheddarcheesepress.com/
[7] http://www.mobilemodelviewer.com/
[8] http://www.okino.com/conv/exp_collada.htm
[9] http://www.vandaengine.com
[10] http://collada.org/mediawiki/index.php/COLLADA_DOM
[11] http://collada.org/mediawiki/index.php/FCollada
[12] http://collada.org/mediawiki/index.php/ColladaMaya
[13] http://collada.org/mediawiki/index.php/ColladaMax
[14] https://github.com/khronosGroup/OpenCOLLADA
[15] http://pycollada.github.com/
[16] http://developer.apple.com/library/mac/documentation/3DDrawing/Conceptual/SceneKit_PG/Introduction/Introduction.html
External links
References
Freed JA. Possibility of correcting image cytometric DNA (ploidy) measurements in tissue sections: Insights from
computed corpuscle sectioning and the reference curve method. Analyt Quant Cytol Histol 19(5):376-386, 1997.
[1]
Freed JA. Improved correction of quantitative nuclear DNA (ploidy) measurements in tissue sections. Analyt
Quant Cytol Histol 21(2):103-112, 1999.[2]
Freed JA. Conceptual comparison of two computer models of corpuscle sectioning and of two algorithms for
correction of ploidy measurements in tissue sections. Analyt Quant Cytol Histol 22(1): 17-25, 2000. [3]
"A general method for determining the volume and profile area of a sectioned corpuscle", U.S. Pat. No. 5,918,038
issued 6/29/99 to Jeffrey A. Freed. [4]
"Method for correction of quantitative DNA measurements in a tissue section", U.S. Pat. No. 6,035,258 issued
3/7/00 to Jeffrey A. Freed. [5]
Use of perimeter measurements to improve ploidy measurements in a tissue section, U.S. Pat. No. 6,603,869
issued 8/5/03 to Jeffrey A. Freed. [6]
External links
Computed Corpuscle Sectioning and the Reference Curve Method [7]
Flattening a surface
Some open surfaces and surfaces closed in one direction may be flattened into a plane without deformation of the
surface. For example, a cylinder can be flattened into a rectangular area without distorting the surface distance
between surface features (except for those distances across the split created by opening up the cylinder). A cone may
also be so flattened. Such surfaces are linear in one direction and curved in the other (surfaces linear in both
directions were flat to begin with). Sheet metal surfaces which have flat patterns can be manufactured by stamping
a flat version, then bending them into the proper shape, such as with rollers. This is a relatively inexpensive process.
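The flat-pattern dimensions follow directly from the geometry. A small Python sketch (function names are illustrative) computes the unrolled size of a cylinder and of a cone, whose flat pattern is a circular sector with radius equal to the slant height:

```python
import math

def cylinder_flat_pattern(radius, height):
    """A cylinder unrolls to a rectangle: height x circumference."""
    return (height, 2 * math.pi * radius)

def cone_flat_pattern(radius, height):
    """A cone unrolls to a circular sector whose radius is the slant
    height; the sector angle preserves the base circumference."""
    slant = math.hypot(radius, height)
    angle = 2 * math.pi * radius / slant  # sector angle in radians
    return (slant, angle)

h, w = cylinder_flat_pattern(1.0, 2.0)  # rectangle 2.0 x 2*pi
s, a = cone_flat_pattern(3.0, 4.0)      # sector radius 5.0
```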
Other open surfaces and surfaces closed in one direction, and all surfaces closed in both directions, can't be flattened
without deformation. A hemisphere or sphere, for example, can't. Such surfaces are curved in both directions. This is
why maps of the Earth are distorted. The larger the area the map represents, the greater the distortion. Sheet metal
surfaces which lack a flat pattern must be manufactured by stamping using 3D dies (sometimes requiring multiple dies).
Surface patches
A surface may be composed of one or more patches, where each patch has its own U-V coordinate system. These
surface patches are analogous to the multiple polynomial arcs used to build a spline. They allow more complex
surfaces to be represented by a series of relatively simple equation sets rather than a single set of complex equations.
Thus, the complexity of operations such as surface intersections can be reduced to a series of patch intersections.
Surfaces closed in one or two directions frequently must also be broken into two or more surface patches by the
software.
Faces
Surfaces and surface patches can only be trimmed at U and V flow lines. To overcome this severe limitation, surface
faces allow a surface to be limited to a series of boundaries projected onto the surface in any orientation, so long as
those boundaries are collectively closed. For example, trimming a cylinder at an angle would require such a surface
face.
A single surface face may span multiple surface patches on a single surface, but can't span multiple surfaces.
Planar faces are similar to surface faces, but are limited by a collectively closed series of boundaries projected to an
infinite plane, instead of a surface.
Transition to solids
Volumes can be filled in to build a solid model (possibly with other volumes subtracted from the interior). Skins and
faces can also be offset to create solids of uniform thickness.
Types of continuity
A surface's patches and the faces built on that surface typically have point continuity (no gaps) and tangent
continuity (no sharp angles). Curvature continuity (no sharp radius changes) may or may not be maintained.
Skins and volumes, however, typically only have point continuity. Sharp angles between faces built on different
supports (planes or surfaces) are common.
[Figure: wireframe display showing hidden edges]
[Figure: wireframe display showing u-v isolines]
Faceted mode. In this mode each surface is drawn as a series of planar regions, usually rectangles. Hidden line
removal is typically used with such a representation. Static hidden line removal does not update which lines are
hidden during rotation, but only once the screen is refreshed. Dynamic hidden line removal continuously updates
which curves are hidden during rotations.
[Figures: facet wireframe and facet shaded displays]
Shaded mode. Shading can then be added to the facets, possibly with blending between the regions for a
smoother display. Shading can also be static or dynamic. A lower quality of shading is typically used for dynamic
shading, while high quality shading, with multiple light sources, textures, etc., requires a delay for rendering.
[Figure: shaded display]
[Figures: reflection lines and reflected image]
External links
3D-XplorMath: Program to visualize many kinds of surfaces in wireframe, patch and anaglyph mode. [1]
References
[1] http://3d-xplormath.org
Workings of CSG
The simplest solid objects used for the representation are called primitives. Typically they are objects of simple shape: cuboids, cylinders, prisms, pyramids, spheres, cones. The set of allowable primitives is limited by each software package. Some software packages allow CSG on curved objects while other packages do not.
[Figure: Venn diagram created with CSG]
An object is said to be constructed from primitives by means of allowable operations, which are typically Boolean operations on sets: union, intersection and difference.
A primitive can typically be described by a procedure which accepts some number of parameters; for example, a
sphere may be described by the coordinates of its center point, along with a radius value. These primitives can be
combined into compound objects using operations like these:
Union
Merger of two objects into one
Difference
Subtraction of one object from another
Intersection
Portion common to both objects
Combining these elementary operations, it is possible to build up objects with high complexity starting from simple
ones.
Applications of CSG
Constructive solid geometry has a number of
practical uses. It is used in cases where simple
geometric objects are desired, or where
mathematical accuracy is important. The Quake
engine and Unreal engine both use this system, as
does Hammer (the native Source engine level
editor), and Torque Game Engine/Torque Game
Engine Advanced. CSG is popular because a
modeler can use a set of relatively simple objects
to create very complicated geometry. When CSG
is procedural or parametric, the user can revise
their complex geometry by changing the position
of objects or by changing the Boolean operation
used to combine those objects.
One of the advantages of CSG is that it can easily assure that objects are "solid" or water-tight if all of the primitive shapes are water-tight. This can be important for some manufacturing or engineering computation applications. By comparison, when creating geometry based upon boundary representations, additional topological data is required, or consistency checks must be performed to assure that the given boundary description specifies a valid solid object.
[Figure: CSG objects can be represented by binary trees, where leaves represent primitives and nodes represent operations: intersection, union, and difference.]
A convenient property of CSG shapes is that it is easy to classify arbitrary points as being either inside or outside the
shape created by CSG. The point is simply classified against all the underlying primitives and the resulting boolean
expression is evaluated. This is a desirable quality for some applications such as collision detection.
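This inside/outside test can be sketched in a few lines of Python; the primitives, operations and the example tree below are illustrative, not taken from any particular package:

```python
# Point-in-CSG classification: each primitive is a membership test, and
# the Boolean set operations map directly onto Boolean logic.

def sphere(cx, cy, cz, r):
    return lambda x, y, z: (x-cx)**2 + (y-cy)**2 + (z-cz)**2 <= r*r

def box(xmin, xmax, ymin, ymax, zmin, zmax):
    return lambda x, y, z: (xmin <= x <= xmax and
                            ymin <= y <= ymax and
                            zmin <= z <= zmax)

def union(a, b):        return lambda x, y, z: a(x, y, z) or b(x, y, z)
def intersection(a, b): return lambda x, y, z: a(x, y, z) and b(x, y, z)
def difference(a, b):   return lambda x, y, z: a(x, y, z) and not b(x, y, z)

# Example tree: a cube with a spherical bite taken out of one corner.
shape = difference(box(0, 2, 0, 2, 0, 2), sphere(2, 2, 2, 1))

inside  = shape(0.5, 0.5, 0.5)  # deep inside the cube
removed = shape(1.9, 1.9, 1.9)  # inside the subtracted sphere
```

Evaluating the classification this way against every primitive is exactly why collision and containment queries on CSG trees are straightforward.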
3Delight
Blender (provides meta objects)
BRL-CAD
Clara.io
NETGEN [1] - an automatic 3d tetrahedral mesh generator. It accepts input from constructive solid geometry
(CSG) or boundary representation (BRep)
Feature Manipulation Engine
FreeCAD
GtkRadiant
HyperFun
OpenSCAD
PhotoRealistic RenderMan
PLaSM - Programming Language of Solid Modeling
POV-Ray
SimpleGeo [2] - a solid modeling tool for particle transport Monte Carlo simulations
SolidWorks mechanical CAD suite
Gaming
3D World Studio
UnrealEd
Valve Hammer Editor
Leadwerks [5]
Libraries
Carve CSG [6] - a fast and robust constructive solid geometry library
CSG.js [7] A JavaScript implementation using WebGL
GTS [8] - an Open Source Free Software Library intended to provide a set of useful functions to deal with 3D
surfaces meshed with interconnected triangles
sgCore C++/C# library [9]
External links
Leadwerks Software 'What is Constructive Solid Geometry?' [10] - explanation of CSG definitions, equations,
techniques, and uses.
References
[1] http://sourceforge.net/projects/netgen-mesher
[2] http://www.cern.ch/theis/simplegeo
[3] http://unbboolean.sourceforge.net/
[4] http://www.inevo.pt/portfolio/gides/
[5] http://www.leadwerks.com/
[6] http://code.google.com/p/carve/
[7] http://evanw.github.com/csg.js/
[8] http://gts.sourceforge.net/index.html
[9] http://www.geometros.com
[10] http://www.leadwerks.com/files/csg.pdf
Definition
A unit quaternion can be described as:
$$\mathbf{q} = q_0 + q_1 i + q_2 j + q_3 k, \qquad q_0^2 + q_1^2 + q_2^2 + q_3^2 = 1.$$
We can associate a quaternion with a rotation around an axis by the following expression:
$$q_0 = \cos(\alpha/2), \quad q_1 = \sin(\alpha/2)\cos(\beta_x), \quad q_2 = \sin(\alpha/2)\cos(\beta_y), \quad q_3 = \sin(\alpha/2)\cos(\beta_z)$$
where $\alpha$ is a simple rotation angle (the value in radians of the angle of rotation) and $\cos(\beta_x)$, $\cos(\beta_y)$ and $\cos(\beta_z)$ are the "direction cosines" locating the axis of rotation (Euler's theorem).
Rotation matrices
[Figure: Euler angles. The xyz (fixed) system is shown in blue, the XYZ (rotated) system is shown in red. The line of nodes, labelled N, is shown in green.]
The orthogonal matrix (post-multiplying a column vector) corresponding to a clockwise/left-handed rotation by the unit quaternion $\mathbf{q} = q_0 + q_1 i + q_2 j + q_3 k$ is given by the inhomogeneous expression:
$$\begin{bmatrix} 1 - 2(q_2^2 + q_3^2) & 2(q_1 q_2 + q_0 q_3) & 2(q_1 q_3 - q_0 q_2) \\ 2(q_1 q_2 - q_0 q_3) & 1 - 2(q_1^2 + q_3^2) & 2(q_2 q_3 + q_0 q_1) \\ 2(q_1 q_3 + q_0 q_2) & 2(q_2 q_3 - q_0 q_1) & 1 - 2(q_1^2 + q_2^2) \end{bmatrix}$$
If $\mathbf{q}$ is not a unit quaternion then the homogeneous form is still a scalar multiple of a rotation matrix, while the inhomogeneous form is in general no longer an orthogonal matrix. This is why in numerical work the homogeneous form is to be preferred if distortion is to be avoided.
The direction cosine matrix corresponding to a Body 3-2-1 sequence with Euler angles $(\psi, \theta, \phi)$ is given by:
$$\begin{bmatrix} \cos\theta\cos\psi & \cos\theta\sin\psi & -\sin\theta \\ \sin\phi\sin\theta\cos\psi - \cos\phi\sin\psi & \sin\phi\sin\theta\sin\psi + \cos\phi\cos\psi & \sin\phi\cos\theta \\ \cos\phi\sin\theta\cos\psi + \sin\phi\sin\psi & \cos\phi\sin\theta\sin\psi - \sin\phi\cos\psi & \cos\phi\cos\theta \end{bmatrix}$$
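One common form of the quaternion-to-matrix conversion can be sketched in Python as follows (sign conventions differ between sources, so treat this as one possible convention rather than the definitive one):

```python
import math

def quat_to_matrix(q0, q1, q2, q3):
    """Rotation matrix from a unit quaternion, using one common
    sign convention (post-multiplying a column vector)."""
    return [
        [1 - 2*(q2*q2 + q3*q3), 2*(q1*q2 + q0*q3),     2*(q1*q3 - q0*q2)],
        [2*(q1*q2 - q0*q3),     1 - 2*(q1*q1 + q3*q3), 2*(q2*q3 + q0*q1)],
        [2*(q1*q3 + q0*q2),     2*(q2*q3 - q0*q1),     1 - 2*(q1*q1 + q2*q2)],
    ]

# A 90-degree rotation about z: q = (cos 45deg, 0, 0, sin 45deg)
h = math.sqrt(0.5)
R = quat_to_matrix(h, 0.0, 0.0, h)
```

For a unit quaternion the result is orthogonal; a non-unit quaternion would need the homogeneous form mentioned above.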
Conversion
By combining the quaternion representations of the Euler rotations we get for the Body 3-2-1 sequence, where the airplane first does a yaw (body-z) turn while taxiing on the runway, then pitches (body-y) during take-off, and finally rolls (body-x) in the air. The resulting orientation of the Body 3-2-1 sequence is equivalent to that of the Lab 1-2-3 sequence, where the airplane is rolled first (lab-x axis), then nosed up around the horizontal lab-y axis, and finally rotated around the vertical lab-z axis:
$$\mathbf{q} = \begin{bmatrix} q_0 \\ q_1 \\ q_2 \\ q_3 \end{bmatrix} = \begin{bmatrix} \cos(\phi/2)\cos(\theta/2)\cos(\psi/2) + \sin(\phi/2)\sin(\theta/2)\sin(\psi/2) \\ \sin(\phi/2)\cos(\theta/2)\cos(\psi/2) - \cos(\phi/2)\sin(\theta/2)\sin(\psi/2) \\ \cos(\phi/2)\sin(\theta/2)\cos(\psi/2) + \sin(\phi/2)\cos(\theta/2)\sin(\psi/2) \\ \cos(\phi/2)\cos(\theta/2)\sin(\psi/2) - \sin(\phi/2)\sin(\theta/2)\cos(\psi/2) \end{bmatrix}$$
where $\psi$ is yaw, $\theta$ is pitch and $\phi$ is roll.
[Figure: yaw, pitch and roll angles of an aircraft.] Here the x-axis points forward, the y-axis to the right and the z-axis downward, and in the example to follow the rotation occurs in the order yaw, pitch, roll (about body-fixed axes).
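Carrying out that composition for the Body 3-2-1 (yaw, pitch, roll) sequence gives a familiar closed form, sketched here in Python (an aerospace-style convention; the function name and argument order are assumptions):

```python
import math

def euler_zyx_to_quat(yaw, pitch, roll):
    """Body 3-2-1 (yaw, pitch, roll) angles in radians to a unit
    quaternion (q0, q1, q2, q3), composing the three half-angle
    rotations about body-fixed axes."""
    cy, sy = math.cos(yaw / 2),   math.sin(yaw / 2)
    cp, sp = math.cos(pitch / 2), math.sin(pitch / 2)
    cr, sr = math.cos(roll / 2),  math.sin(roll / 2)
    q0 = cr*cp*cy + sr*sp*sy
    q1 = sr*cp*cy - cr*sp*sy
    q2 = cr*sp*cy + sr*cp*sy
    q3 = cr*cp*sy - sr*sp*cy
    return (q0, q1, q2, q3)

q = euler_zyx_to_quat(math.pi / 2, 0.0, 0.0)  # a pure 90-degree yaw
```

A pure 90-degree yaw yields q = (cos 45deg, 0, 0, sin 45deg), a rotation purely about the body z-axis, which is a quick sanity check on the signs.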
Singularities
One must be aware of singularities in the Euler angle parametrization when the pitch approaches ±90° (north/south pole). These cases must be handled specially. The common name for this situation is gimbal lock.
External links
Q60. How do I convert Euler rotation angles to a quaternion? [2] and related questions at The Matrix and
Quaternions FAQ
References
[1] http://www.euclideanspace.com/maths/geometry/rotations/conversions/quaternionToEuler/
[2] http://www.j3d.org/matrix_faq/matrfaq_latest.html#Q60
Crowd simulation
Crowd simulation is the process of simulating the movement of a large number of entities or characters, now often appearing in 3D computer graphics for film. While simulating these crowds, observed human behavior and interaction are taken into account to replicate the collective behavior. It is a method of creating virtual cinematography.
The need for crowd simulation arises when a scene calls for more characters than can be practically animated using conventional systems, such as skeletons/bones. Simulating crowds offers the advantages of being cost-effective and allowing total control of each simulated character or agent.
Animators typically create a library of motions, either for the entire character or for individual body parts. To
simplify processing, these animations are sometimes baked as morphs. Alternatively, the motions can be generated
procedurally - i.e. choreographed automatically by software.
The actual movement and interactions of the crowd are typically handled in one of two ways:
Particle Motion
The characters are attached to point particles, which are then animated by simulating wind, gravity, attractions, and
collisions. The particle method is usually inexpensive to implement, and can be done in most 3D software packages.
However, the method is not very realistic because it is difficult to direct individual entities when necessary, and
because motion is generally limited to a flat surface.
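A toy version of the particle approach can be sketched in Python: each character is a point pulled toward a shared goal and pushed away from close neighbours. All constants and the scenario are invented for illustration:

```python
# One simulation step for a 2D particle crowd: goal attraction plus
# pairwise separation (a crude stand-in for collision avoidance).
def step(positions, goal, dt=0.1, pull=1.0, push=0.5, radius=1.0):
    new = []
    for i, (x, y) in enumerate(positions):
        # attraction toward the goal
        vx, vy = pull * (goal[0] - x), pull * (goal[1] - y)
        # repulsion from particles closer than `radius`
        for j, (ox, oy) in enumerate(positions):
            if i == j:
                continue
            dx, dy = x - ox, y - oy
            d2 = dx * dx + dy * dy
            if 0 < d2 < radius * radius:
                vx += push * dx / d2
                vy += push * dy / d2
        new.append((x + dt * vx, y + dt * vy))
    return new

crowd = [(0.0, 0.0), (0.2, 0.0), (5.0, 5.0)]
crowd = step(crowd, goal=(10.0, 10.0))
```

Even this crude model shows the two behaviours the text describes: the whole group drifts toward the goal while near-coincident characters spread apart.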
Crowd AI
The entities - also called agents - are given artificial intelligence, which guides the entities based on one or more
functions, such as sight, hearing, basic emotion, energy level, aggressiveness level, etc. The entities are given goals
and then interact with each other as members of a real crowd would. They are often programmed to respond to
changes in environment, enabling them to climb hills, jump over holes, scale ladders, etc. This system is much more
realistic than particle motion, but is very expensive to program and implement.
The most notable examples of AI simulation can be seen in New Line Cinema's The Lord of the Rings films, where
AI armies of many thousands battle each other. The crowd simulation was done using Weta Digital's Massive
software.
Sociology
Crowd simulation can also refer to simulations based on group dynamics and crowd psychology, often in public
safety planning. In this case, the focus is just the behavior of the crowd, and not the visual realism of the simulation.
Crowds have been studied as a scientific interest since the end of the 19th century. Much research has focused on the collective social behavior of people at social gatherings, assemblies, protests, rebellions, concerts, sporting events
and religious ceremonies. Gaining insight into natural human behavior under varying types of stressful situations
will allow better models to be created which can be used to develop crowd controlling strategies.
Emergency response teams such as policemen, the National Guard, military and even volunteers must undergo
some type of crowd control training. Using researched principles of human behavior in crowds can give disaster
training designers more elements to incorporate to create realistic simulated disasters. Crowd behavior can be observed during panic and non-panic conditions. When natural and unnatural events throw social order into chaos, such as the events of 9/11 and Hurricane Katrina, humanity's social capabilities are truly put to the test. Military programs are looking more towards simulated training, involving emergency responses, due to its cost-effectiveness and how effectively the learning can be transferred to the real world.[citation needed] Many events that start out controlled can take an unexpected turn into catastrophic situations, where decisions need to be made on the spot. It is these situations in which an understanding of crowd dynamics would play a vital role in reducing the potential for anarchy.
Modeling techniques of crowds vary from holistic or network approaches to understanding the individualistic or behavioral aspects of each agent. For example, the Social Force Model describes a need for individuals to find a balance between social interaction and physical interaction. An approach that incorporates both aspects, and is able to adapt depending on the situation, would better describe natural human behavior, always incorporating some measure of unpredictability. With the use of multi-agent models, understanding these complex behaviors has become a much more comprehensible task. With this type of software, systems can now be tested under extreme conditions, and can simulate conditions over long periods of time in a matter of seconds.
External links
CrowdManagementSimulation.com [1]
CrowdSimulation.org [2], Open discussion forum on crowd simulations
CSG [3], Crowd simulation research.
UNC GAMMA Group [4], Crowd simulation research at the University of North Carolina at Chapel Hill
SteerSuite [5], An open-source framework for developing and evaluating crowd simulation algorithms
Crowd Tracking [6], Crowd tracking research in computer vision
Cutaway drawing
A cutaway drawing, also called a cutaway diagram, is a 3D graphics drawing, diagram or illustration in which surface elements of a three-dimensional model are selectively removed to make internal features visible, but without sacrificing the outer context entirely.
Overview
According to Diepstraten et al. (2003) "the purpose of a cutaway drawing is to allow the viewer to have a look into
an otherwise solid opaque object. Instead of letting the inner object shine through the surrounding surface, parts of
outside object are simply removed. This produces a visual appearance as if someone had cutout a piece of the object
or sliced it into parts. Cutaway illustrations avoid ambiguities with respect to spatial ordering, provide a sharp
contrast between foreground and background objects, and facilitate a good understanding of spatial ordering".[2]
Though cutaway drawings are not dimensioned manufacturing blueprints, they are meticulously drawn by a handful of devoted artists who either had access to manufacturing details or deduced them by observing the visible evidence of the hidden skeleton (e.g. rivet lines, etc.). The goal of these drawings in studies can be to identify common design patterns for particular vehicle classes. Thus, the accuracy of most of these drawings, while not 100 percent, is certainly high enough for this purpose.[3]
The technique is used extensively in computer-aided design, see first image. It has also been incorporated into the
user interface of some video games. In The Sims, for instance, users can select through a control panel whether to
view the house they are building with no walls, cutaway walls, or full walls.
History
The cutaway view and the exploded view were minor graphic
inventions of the Renaissance that also clarified pictorial
representation. This cutaway view originates in the early fifteenth
century notebooks of Marino Taccola (1382-1453). In the 16th
century cutaway views in definite form were used in Georgius
Agricola's (1494-1555) mining book De Re Metallica to illustrate
underground operations. [4] The 1556 book is a complete and
systematic treatise on mining and extractive metallurgy, illustrated
with many fine and interesting woodcuts which illustrate every
conceivable process to extract ores from the ground and metal from the
ore, and more besides. It shows the many watermills used in mining,
such as the machine for lifting men and material into and out of a mine
shaft, see image.
The term "cutaway drawing" was already in use in the 19th century, but became popular in the 1930s.
Technique
The location and shape to cut the outside object depends on many different factors, for example:
the sizes and shapes of the inside and outside objects,
the semantics of the objects,
personal taste, etc.
These factors, according to Diepstraten et al. (2003), "can seldom be formalized in a simple algorithm", but the properties of a cutaway can be distinguished in two classes of cutaway drawing:
cutout: illustrations where the cutaway is restricted to very simple, regularly shaped cuts, often only a small number of planar slices into the outside object.
breakaway: a cutaway realized by a single hole in the outside of the object.
Examples
Some more examples of cutaway drawings, from products and systems to architectural building.
A dynamic loudspeaker
Mercury spacecraft.
References
[1] http://en.wikipedia.org/w/index.php?title=Template:Views&action=edit
[2] J. Diepstraten, D. Weiskopf & T. Ertl (2003). "Interactive Cutaway Illustrations" (http://www.vis.uni-stuttgart.de/~weiskopf/publications/eg2003.pdf). In: Eurographics 2003. P. Brunet and D. Fellner (ed). Vol 22 (2003), Nr 3.
[3] Mark D. Sensmeier and Jamshid A. Samareh (2003). "A Study of Vehicle Structural Layouts in Post-WWII Aircraft" (http://140.116.81.56/FS/NASA-aiaa-2004-1624.pdf). Paper, American Institute of Aeronautics and Astronautics.
[4] Eugene S. Ferguson (1999). Engineering and the Mind's Eye. p. 82.
Demoparty
A demoparty is an event that gathers demosceners[2] and other computer enthusiasts to take part in competitions.[3]
A typical demoparty is a non-stop event lasting over a weekend, providing the visitors a lot of time to socialize. The
competing works, at least those in the most important competitions, are usually shown at night, using a video
projector and big loudspeakers. The most important competition is usually the demo compo.[4]
Concept
The visitors of a demoparty often bring their own computers to compete and show off their works. To this end, most
parties provide a large hall with tables, electricity and usually a local area network connected to the Internet. In this
respect, many demoparties resemble LAN parties, and many of the largest events also gather gamers and other
computer enthusiasts in addition to demosceners. A major difference between a real demoparty and a LAN party is
that demosceners typically spend more time socializing (often outside the actual party hall) than in front of their
computers.[5]
Large parties have often tried to come up with alternative terms to describe the concept to the general public. While
the events have always been known as "demoparties", "copyparties" or just "parties" by the subculture itself, they are
often referred to as "computer conferences", "computer fairs", "computer festivals", "computer art festivals",
"youngsters' computer events" or even "geek gatherings" or "nerd festivals" by the mass media and the general
public.
Demoscene events are most frequent in continental Europe, with around fifty parties every year. In comparison, there have been only a dozen or so demoparties in the United States in total. Most events are local, gathering demomakers
mostly from a single country, while the largest international parties (such as Breakpoint and Assembly) attract
visitors from all over the globe.[6]
Most demoparties are relatively small in size, with the number of visitors varying from dozens to a few hundred. The
largest events typically gather thousands of visitors, although most of them have little or no connection to the
demoscene. In that respect, the scene separates "pure" parties (which avoid non-scene-related activities and promotion) from "crossover" parties.
History
Demoparties started to appear in the 1980s in the form
of copyparties, where software pirates and demomakers
gathered to meet each other and share their software.
Competitions did not become a major aspect of the
events until the early 1990s.
Copyparties mainly pertained to the Amiga and C64
scene. As the PC compatibles started to take over the
market, the difficulties in easily making nice demos
and intros increased. Along with increased police
crackdowns on copying of pirated software, the
"underground" copyparties were gradually replaced by
slightly higher-profile events that came to be known as
demoparties. However, some of the "old-school"
demosceners still prefer to use the word copyparty even
for today's demoparties.
Common properties
Parties usually last from two to four days, most often
from Friday to Sunday to ensure that sceners who work
or study are also able to attend. Small parties (under
100 attendees) usually take place in cultural centres or schools, whereas larger parties (over 400-500 people)
typically take place in sports halls.
Entrance fees are usually between €10 and €40, depending on the size and location of the party. It is still a common
practice in many countries to allow females to enter the
party for free (mostly due to the low concentration of
female attendees, which is usually under 20%), albeit
most parties enforce an "only vote with ticket" rule,
which means that an attendee who got in free can only
vote with a paid ticket.
Attendees are allowed to bring their desktop computers along, but this is by no means a necessity, and many sceners, especially those who travel a long distance, choose not to. Those who have computer-related jobs may even regard a demoparty as a well-deserved break from sitting in front of a computer. For those who do bring a computer, it is becoming increasingly common to bring a laptop or some sort of handheld device rather than a complete desktop PC.
Partygoers often bring various senseless gadgets to parties to make their desk space look unique; this can be anything
from a disco ball or a plasma lamp to a large LED display panel complete with a scrolling message about how "elite"
its owner is. Many visitors also bring large loudspeakers for playing music. This kind of activity is particularly
common among new partygoers, while the more experienced attendees tend to prefer a more quiet and relaxed
atmosphere.
Those who need housing during the party are often offered a separate "sleeping room", usually an isolated empty
room with some sort of carpet or mats, where the attendees are able to sleep, separated from the noise. Most sceners
prefer bringing sleeping bags for this, as well as inflatable mattresses or polyfoam rolls. Parties that do not offer a
sleeping room generally allow sceners to sleep under the tables.
Party venues are often decorated by visitors with flyers and banners. These serve promotional purposes, in most cases advertising a certain group, but sometimes promoting a demoscene product, such as a demo or a diskmag, possibly to be released later at the party.
A major portion of the events at a demoparty often takes place outdoors. Demosceners usually spend considerable time outside having a beer and talking, or engaging in open-air activities such as barbecuing or sports like hardware throwing or soccer. It is also a common tradition to gather around a bonfire during the night, usually after the compos.
In recent years, many parties have been made available to spectators over the Internet. This tradition was started by the live team of demoscene.tv, who broadcast from events live or created footage for a post-mortem video report. They have since largely been succeeded by the SceneSat radio crew, who provide live streaming radio shows from parties, and larger parties now offer their own dedicated streaming video solutions.
Depth map
In 3D computer graphics a depth map is an image or image channel that contains information relating to the
distance of the surfaces of scene objects from a viewpoint. The term is related to and may be analogous to depth
buffer, Z-buffer, Z-buffering and Z-depth.[1] The "Z" in these latter terms relates to a convention that the central
axis of view of a camera is in the direction of the camera's Z axis, and not to the absolute Z axis of a scene.
Examples
[Image: Cubic Structure]
Two different depth maps can be seen here, together with the original model from which they are derived. The first depth map shows luminance in proportion to the distance from the camera: nearer surfaces are darker, further surfaces are lighter. The second depth map shows luminance in relation to the distance from a nominal focal plane: surfaces closer to the focal plane are darker, and surfaces further from it are lighter, whether they are closer to or further away from the viewpoint.
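The two mappings described above can be sketched in a few lines. In this Python sketch the function names and depth values are illustrative, not taken from any particular renderer:

```python
# Sketch (illustrative names/values): turning camera-space depths into
# the two luminance mappings described above.

def camera_depth_map(depths):
    """First mapping: nearer surfaces darker, farther surfaces lighter."""
    flat = [d for row in depths for d in row]
    lo, hi = min(flat), max(flat)
    scale = (hi - lo) or 1.0
    return [[(d - lo) / scale for d in row] for row in depths]

def focal_depth_map(depths, focal):
    """Second mapping: luminance grows with distance from a nominal
    focal plane, on either side of it."""
    hi = max(abs(d - focal) for row in depths for d in row) or 1.0
    return [[abs(d - focal) / hi for d in row] for row in depths]

depths = [[2.0, 4.0], [6.0, 8.0]]          # hypothetical camera distances
print(camera_depth_map(depths))            # nearest -> 0.0, farthest -> 1.0
print(focal_depth_map(depths, focal=5.0))  # brightest at depths 2.0 and 8.0
```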
Uses
Depth maps have a number of uses, including:
- Simulating the effect of uniformly dense semi-transparent media within a scene, such as fog, smoke or large volumes of water.
- Simulating shallow depths of field, where some parts of a scene appear to be out of focus. Depth maps can be used to selectively blur an image to varying degrees. A shallow depth of field can be a characteristic of macro photography, and so the technique may form a part of the process of miniature faking.
- Z-buffering and z-culling, techniques which can be used to make the rendering of 3D scenes more efficient. They can be used to identify objects hidden from view which may therefore be ignored for some rendering purposes. This is particularly important in real-time applications such as computer games, where a fast succession of completed renders must be available in time to be displayed at a regular and fixed rate.
- Shadow mapping, part of one process used to create shadows cast by illumination in 3D computer graphics. In this use, the depth maps are calculated from the perspective of the lights, not the viewer.
- Providing the distance information needed to create and generate autostereograms, and for other related applications intended to create the illusion of 3D viewing through stereoscopy.
- Subsurface scattering, which can be used as part of a process for adding realism by simulating the semi-transparent properties of translucent materials such as human skin.
[Image: Fog effect]
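As an illustration of the fog use above, a depth value can drive a simple blend between a shaded pixel and a fog colour. This Python sketch uses a common exponential fog model; the density value and function name are illustrative:

```python
# Sketch (illustrative density value): blending a shaded pixel toward a
# uniform fog colour using its depth, via a common exponential fog model.
import math

def apply_fog(pixel, depth, fog_color, density=0.1):
    """Return pixel blended toward fog_color; deeper pixels get more fog."""
    keep = math.exp(-density * depth)      # fraction of original colour kept
    return tuple(keep * p + (1.0 - keep) * c for p, c in zip(pixel, fog_color))

near = apply_fog((1.0, 0.0, 0.0), depth=1.0, fog_color=(0.5, 0.5, 0.5))
far = apply_fog((1.0, 0.0, 0.0), depth=50.0, fog_color=(0.5, 0.5, 0.5))
print(near)  # still mostly red
print(far)   # almost entirely fog grey
```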
Limitations
- Single-channel depth maps record the first surface seen, and so cannot carry information about surfaces seen through or refracted by transparent objects, or reflected in mirrors. This can limit their use in accurately simulating depth of field or fog effects.
- Single-channel depth maps cannot convey multiple distances where they occur within the view of a single pixel. This may occur where more than one object occupies the location of that pixel, as can be the case, for example, with models featuring hair, fur or grass. More generally, edges of objects may be ambiguously described where they partially cover a pixel.
- Depending on the intended use of a depth map, it may be useful or necessary to encode the map at higher bit depths. For example, an 8-bit depth map can only represent a range of up to 256 different distances.
- Depending on how they are generated, depth maps may represent the perpendicular distance between an object and the plane of the scene camera. For example, a scene camera pointing directly at, and perpendicular to, a flat surface may record a uniform distance for the whole surface, even though geometrically the actual distances from the camera to the areas of the surface seen in the corners of the image are greater than the distance to the central area. For many applications, however, this discrepancy is not a significant issue.
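The perpendicular-versus-actual-distance discrepancy in the last point can be checked with a little trigonometry. In this Python sketch the camera sits at the origin looking down +z, and the lateral offsets are illustrative:

```python
# Sketch: planar-z versus true Euclidean distance, for a camera at the
# origin looking down +z at a flat wall of constant planar depth z.
import math

def radial_distance(x, y, planar_z):
    """True camera-to-point distance for a point stored at planar depth."""
    return math.sqrt(x * x + y * y + planar_z * planar_z)

z = 10.0                                  # the depth map stores 10.0 everywhere
center = radial_distance(0.0, 0.0, z)     # 10.0 at the image centre
corner = radial_distance(4.0, 3.0, z)     # ~11.18 toward an image corner
print(center, corner)
```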
References
[1] Computer Arts / 3D World Glossary (ftp://ftp.futurenet.co.uk/pub/arts/Glossary.pdf), retrieved 26 January 2011.
Digital puppetry
Digital puppetry is the manipulation and performance of digitally animated 2D or 3D figures and objects in a
virtual environment that are rendered in real time by computers. It is most commonly used in filmmaking and
television production, but has also been utilized in interactive theme park attractions and live theatre.
The exact definition of what is and is not digital puppetry is subject to debate among puppeteers and computer
graphics designers, but it is generally agreed that digital puppetry differs from conventional computer animation in
that it involves performing characters in real time, rather than animating them frame by frame.
Digital puppetry is closely associated with motion capture technologies and 3D animation, as well as skeletal
animation. Digital puppetry is also known as virtual puppetry, performance animation, living animation, live
animation and real-time animation (although the latter also refers to animation generated by computer game
engines). Machinima is another form of digital puppetry, and Machinima performers are increasingly being
identified as puppeteers.
Waldo C. Graphic
Perhaps the first truly commercially successful example of a digitally animated figure being performed and rendered
in real-time is Waldo C. Graphic, a character created in 1988 by Jim Henson and Pacific Data Images for the Muppet
television series The Jim Henson Hour. Henson had been trying to create computer generated puppets as early as
1985[2] and Waldo grew out of experiments Henson conducted to create a computer generated version of his
character Kermit the Frog.[3]
Waldo's strength as a computer generated puppet was that he could be controlled by a single puppeteer (Steve
Whitmire[4]) in real-time in concert with conventional puppets. The computer image of Waldo was mixed with the
video feed of the camera focused on physical puppets so that all of the puppeteers in a scene could perform together.
(It was already standard Muppeteering practice to use monitors while performing, so the use of a virtual puppet did
not significantly increase the complexity of the system.) Afterwards, in post production, PDI re-rendered Waldo in
full resolution, adding a few dynamic elements on top of the performed motion.[5]
Waldo C. Graphic can be seen today in Jim Henson's Muppet*Vision 3D at the Disney's Hollywood Studios and
Disney California Adventure Park theme parks.
Mike Normal
Another significant development in digital puppetry in 1988 was Mike Normal, which Brad DeGraf and partner
Michael Wahrman developed to show off the real-time capabilities of Silicon Graphics' then-new 4D series
workstations. Unveiled at the 1988 SIGGRAPH convention, it was the first live performance of a digital character.
Mike was a sophisticated talking head driven by a specially built controller that allowed a single puppeteer to control
many parameters of the character's face, including mouth, eyes, expression, and head position.[6]
The system developed by deGraf/Wahrman to perform Mike Normal was later used to create a representation of the
villain Cain in the motion picture RoboCop 2, which is believed to be the first example of digital puppetry being
used to create a character in a full-length motion picture.
Trey Stokes was the puppeteer for both Mike Normal's SIGGRAPH debut and RoboCop 2.
Cave Troll and Gollum in The Lord of the Rings: The Fellowship of the Ring (2001)
In 2000, Ramon Rivero was the first person to perform a digital puppet using optical motion capture against pre-recorded action footage of a feature film. The character was the Cave Troll in the first film of The Lord of the Rings trilogy. The motion capture technology was created by Biomechanics Inc in Atlanta (now Giant Studios). Ramon's ideas contributed to enhancements to the technology, directly related to marker systems, virtual feedback of footage, computerised versions of the film sets, and the retargeting software called CharMapper (short for Character Mapper). Although the final footage was made with keyframe animation, a few seconds of Ramon's original performance can still be seen in the film. The character Gollum, tested by Ramon but performed by Andy Serkis, was made with the same technology and is still considered the epitome of a virtual character in the film industry. Unlike the Cave Troll, most of the animation of Gollum made it to the final footage using the original motion-captured performance.
Bugs Live
"Bugs Live" was a digital puppet of Bugs Bunny created by Phillip Reay for Warner Brothers Pictures. The puppet
was created using hand drawn frames of animation that were puppeteered by Bruce Lanoil and David Barclay. The
Bugs Live puppet was used to create nearly 900 minutes of live, fully interactive interviews with the 2D animated Bugs
character about his role in the movie Looney Tunes: Back in Action, in English and Spanish. Bugs Live also appeared
at the 2004 SIGGRAPH Digital Puppetry Special Session with the Muppet puppet Gonzo.
Machinima
Machinima is a production technique that can be used to perform digital puppets. It involves creating computer-generated imagery (CGI) using the low-end 3D engines in video games. Players act out scenes in real time using characters and settings within a game, and the resulting footage is recorded and later edited into a finished film.
References
[1] A Critical History of Computer Graphics and Animation: Analog approaches, non-linear editing, and compositing (http://accad.osu.edu/~waynec/history/lesson12.html), accessed April 28, 2007
[2] Sturman, David J. A Brief History of Motion Capture for Computer Character Animation (http://www.siggraph.org/education/materials/HyperGraph/animation/character_animation/motion_capture/history1.htm), accessed February 9, 2007
[3] Finch, Christopher. Jim Henson: The Works (New York: Random House, 1993)
[4] Henson.com Featured Creature: Waldo C. Graphic (archive.org) (http://web.archive.org/web/20030222193241/http://henson.com/fun/fcreature/waldo_fcbts.html), accessed February 9, 2007
[5] Walters, Graham. The Story of Waldo C. Graphic. Course Notes: 3D Character Animation by Computer, ACM SIGGRAPH '89, Boston, July 1989, pp. 65-79
[6] Robertson, Barbara. Mike, the Talking Head. Computer Graphics World, July 1988, pp. 15-17
[7] Yilmaz, Emre. Elmo's World: Digital Puppetry on Sesame Street. Conference Abstracts and Applications, SIGGRAPH 2001, Los Angeles, August 2001, p. 178
External links
The Henson Digital Puppetry Wiki - Wiki for Henson Digital Puppetry projects, people, characters, and
technology.
Animata (http://animata.kibu.hu) - Free, open source real-time animation software commonly used to create
digital puppets.
Mike the talking head (http://mambo.ucsc.edu/psl/mike.html) - Web page about Mike Normal, one of the
earliest examples of digital puppetry.
External links
http://wheger.tripod.com/vhl/vhl.htm
Evaluation
Just as for Catmull–Clark surfaces, Doo–Sabin limit surfaces can also be evaluated directly without any recursive refinement, by means of the technique of Jos Stam.[3] The solution is, however, not as computationally efficient as for Catmull–Clark surfaces because the Doo–Sabin subdivision
References
[1] D. Doo: A subdivision algorithm for smoothing down irregularly shaped polyhedrons, Proceedings on Interactive Techniques in Computer Aided Design, pp. 157-165, 1978 (pdf (http://trac2.assembla.com/DooSabinSurfaces/export/12/trunk/docs/Doo%201978%20Subdivision%20algorithm.pdf))
[2] D. Doo and M. Sabin: Behaviour of recursive division surfaces near extraordinary points, Computer-Aided Design, 10(6):356-360, 1978 (doi (http://dx.doi.org/10.1016/0010-4485(78)90111-2), pdf (http://www.cs.caltech.edu/~cs175/cs175-02/resources/DS.pdf))
[3] Jos Stam: Exact Evaluation of Catmull–Clark Subdivision Surfaces at Arbitrary Parameter Values, Proceedings of SIGGRAPH '98. In Computer Graphics Proceedings, ACM SIGGRAPH, 1998, pp. 395-404 (pdf (http://www.dgp.toronto.edu/people/stam/reality/Research/pdf/sig98.pdf), downloadable eigenstructures (http://www.dgp.toronto.edu/~stam/reality/Research/SubdivEval/index.html))
Draw distance
Draw distance is a computer graphics term for the maximum distance at which objects in a three-dimensional scene are drawn by the rendering engine. Polygons that lie beyond the draw distance are not drawn to the screen.
As the draw distance increases, more distant polygons need to be drawn that would otherwise be clipped. This requires more computing power: the graphic quality and realism of the scene increase with draw distance, but overall performance (frames per second) decreases. Many games and applications allow users to set the draw distance manually to balance performance and visuals.
Alternatives
A common trick used in games to disguise a short draw distance is to obscure the area with a distance fog.
Alternative methods have been developed to sidestep the problem altogether using level of detail manipulation.
Black & White was one of the earlier games to use adaptive level of detail to decrease the number of polygons in
objects as they moved away from the camera, allowing it to have a massive draw distance while maintaining detail in
close-up views.
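The two ideas above, a hard draw-distance cutoff softened by fog, can be sketched as follows. The constants and function names are illustrative, and the fog ramp is the standard linear fog model rather than any specific game's implementation:

```python
# Sketch of draw-distance culling plus a linear fog band that hides the
# cutoff. Constants are illustrative.

DRAW_DISTANCE = 100.0
FOG_START = 60.0               # fog begins well before the cutoff

def visible(distance):
    """Objects beyond the draw distance are simply not drawn."""
    return distance <= DRAW_DISTANCE

def fog_factor(distance):
    """0.0 = fully visible, 1.0 = fully fogged (standard linear fog)."""
    t = (distance - FOG_START) / (DRAW_DISTANCE - FOG_START)
    return min(1.0, max(0.0, t))

print(visible(120.0))          # False: beyond the draw distance, culled
print(fog_factor(80.0))        # 0.5: halfway into the fog band
```

By the time an object pops in at the draw distance it is fully fogged, so the transition is invisible to the player.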
The Legend of Zelda: The Wind Waker uses a variant of the level-of-detail programming mentioned above. The game's overworld is divided into 49 squares. Each square contains an island; the distance between the island and the borders of its square is considerable. Everything within a square is loaded when it is entered, including all models used in close-up views and animations. Using the telescope item, one can see just how detailed even far-away areas are. However, textures are not displayed; they are faded in as one gets closer to the square's island (this may actually be an aesthetic effect rather than a way to free up system resources). Islands outside the current square are less detailed; however, these far-away island models do not degrade any further than that, even though some of them can be seen from everywhere else in the overworld. In both indoor and outdoor areas there is no distance fog, although in some areas "distance" fog is used as an atmospheric effect. As a consequence of the developers' attention to detail, however, some areas of the game have lower framerates due to the large number of enemies on screen.
Grand Theft Auto III made particular use of fogging; however, this made the game less playable when driving or flying at high speed, as objects would pop up out of the fog and cause the player to crash into them.
Halo 3 is claimed by its creators at Bungie Studios to have a draw distance upwards of 14 miles, an example of the vastly improved draw distances made possible by more recent game consoles. Crysis is said to have a draw distance of up to 16 kilometers (9.9 mi), while Cube 2: Sauerbraten has a potentially unlimited draw distance, possibly due to the larger map size. Grand Theft Auto V was praised for its seemingly infinite draw distance despite having a large, detailed map.
External links
draw distance/fog problem - Beyond3D Forum [1]
How to: Optimize Your Frame Rates - Features at GameSpot [2]
References
[1] http://forum.beyond3d.com/showthread.php?t=42599
[2] http://www.gamespot.com/features/6168650/index.html?cpage=9
Edge loop
An edge loop, in computer graphics, can loosely be defined as a set of connected edges across a surface. Usually the
last edge meets again with the first edge, thus forming a loop. The set or string of edges can for example be the outer
edges of a flat surface or the edges surrounding a 'hole' in a surface.
In a stricter sense, an edge loop is defined as a set of edges in which the loop follows the middle edge at every "four-way junction".[1] The loop ends when it encounters another type of junction (a three- or five-way junction, for example). Consider an edge on a mesh surface that connects with three other edges at one end, forming a four-way junction: following the middle "road" each time, you will either complete a loop or stop at another type of junction.
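The "follow the middle edge" rule can be sketched directly. In this Python sketch the mesh is a hypothetical adjacency map giving each vertex's neighbours in cyclic order, and the "middle" edge at a four-way junction is the one opposite the incoming edge:

```python
# Sketch: walking an edge loop on a hypothetical quad-mesh adjacency map.
# ring[v] lists v's neighbours in cyclic order around v.

def walk_edge_loop(ring, v0, v1):
    """Follow the loop starting along edge (v0, v1); return vertices visited."""
    loop = [v0, v1]
    prev, cur = v0, v1
    while len(ring[cur]) == 4:             # continue only through 4-way junctions
        nbrs = ring[cur]
        # the "middle" edge is opposite the incoming one in the cyclic order
        prev, cur = cur, nbrs[(nbrs.index(prev) + 2) % 4]
        if (prev, cur) == (v0, v1):        # back at the starting edge: closed loop
            return loop
        loop.append(cur)
    return loop                            # stopped at a non-four-way junction

# Tiny example mesh: vertices 0-3 form a quad ring, each with one extra
# neighbour "above" (u*) and one "below" (d*), listed in cyclic order.
ring = {
    0: [3, "u0", 1, "d0"], 1: [0, "u1", 2, "d1"],
    2: [1, "u2", 3, "d2"], 3: [2, "u3", 0, "d3"],
}
for nbrs in list(ring.values()):
    for n in nbrs:
        if isinstance(n, str):
            ring[n] = [0]                  # dummies have valence 1: loops stop there

print(walk_edge_loop(ring, 0, 1))          # the loop closes: [0, 1, 2, 3, 0]
```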
Edge loops are especially practical in organic models which need to be animated. In organic modeling edge loops
play a vital role in proper deformation of the mesh.[2] A properly modeled mesh will take into careful consideration
the placement and termination of these edge loops. Generally edge loops follow the structure and contour of the
muscles that they mimic. For example, in modeling a human face edge loops should follow the orbicularis oculi
muscle around the eyes and the orbicularis oris muscle around the mouth. The hope is that by mimicking the way the
muscles are formed they also aid in the way the muscles are deformed by way of contractions and expansions. An
edge loop closely mimics how real muscles work, and if built correctly, will give you control over contour and
silhouette in any position.
An important part in developing proper edge loops is by understanding poles.[3] The E(5) Pole and the N(3) Pole are
the two most important poles in developing both proper edge loops and a clean topology on your model. The E(5)
Pole is derived from an extruded face. When this face is extruded, four 4-sided polygons are formed in addition to
the original face. Each lower corner of these four polygons forms a five-way junction. Each one of these five-way
junctions is an E-pole. An N(3) Pole is formed when 3 edges meet at one point creating a three-way junction. The
N(3) Pole is important in that it redirects the direction of an edge loop.
References
[1] Edge Loop (http://wiki.cgsociety.org/index.php/Edge_Loop), CG Society
[2] Modeling With Edge Loops (http://zoomy.net/2008/04/02/modeling-with-edge-loops/), Zoomy.net
[3] "The pole" (http://www.subdivisionmodeling.com/forums/showthread.php?t=907), SubdivisionModeling.com
External links
Edge Loop (http://wiki.cgsociety.org/index.php/Edge_Loop), CG Society
Euler operator
In mathematics, Euler operators may refer to:
Euler–Lagrange differential operator d/dx, see Lagrangian system
Cauchy–Euler operators, e.g. x·d/dx
quantum white noise conservation, or QWN-Euler operator
Description
The basic Euler operators:

MBFLV    Make Body-Face-Loop-Vertex
MEV      Make Edge-Vertex
MEFL     Make Edge-Face-Loop
MEKL     Make Edge, Kill Loop
KFLEVB   Kill Faces-Loops-Edges-Vertices-Body

Each operator adds or removes elements, changing the model's vertex (V), edge (E), face (F), hole (H), shell (S) and genus (G) counts.
Geometry
Euler operators modify the mesh's graph, creating or removing faces, edges and vertices according to simple rules while preserving the overall topology, thus maintaining a valid boundary (i.e. not introducing holes). The operators themselves do not define how geometric or graphical attributes (e.g. position, gradient, UV texture coordinates) map to the new graph; these depend on the particular implementation.
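As a sketch of the topology-preserving behaviour described above, the operators can be modelled purely as deltas on the (V, E, F, H, S, G) counts and checked against the Euler–Poincaré invariant V − E + F − H − 2(S − G) = 0. The deltas follow the standard definitions of these operators; the mesh itself is not represented:

```python
# Sketch: Euler operators as deltas on topological counts, checking that
# each one preserves the Euler-Poincare invariant. Deltas follow the
# standard operator definitions; no actual mesh data structure is built.

OPS = {
    "MBFLV": dict(V=+1, E=0, F=+1, H=0, S=+1, G=0),  # Make Body-Face-Loop-Vertex
    "MEV":   dict(V=+1, E=+1, F=0, H=0, S=0, G=0),   # Make Edge-Vertex
    "MEFL":  dict(V=0, E=+1, F=+1, H=0, S=0, G=0),   # Make Edge-Face-Loop
    "MEKL":  dict(V=0, E=+1, F=0, H=-1, S=0, G=0),   # Make Edge, Kill Loop
}

def invariant(c):
    return c["V"] - c["E"] + c["F"] - c["H"] - 2 * (c["S"] - c["G"])

def apply_op(counts, op):
    return {k: counts[k] + OPS[op][k] for k in counts}

# Build a tetrahedron from nothing: one MBFLV, three MEVs, three MEFLs.
counts = dict(V=0, E=0, F=0, H=0, S=0, G=0)
for op in ["MBFLV", "MEV", "MEV", "MEV", "MEFL", "MEFL", "MEFL"]:
    counts = apply_op(counts, op)
    assert invariant(counts) == 0      # boundary stays valid at every step
print(counts)  # {'V': 4, 'E': 6, 'F': 4, 'H': 0, 'S': 1, 'G': 0}: a tetrahedron
```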
References
[1] Baumgart, B.G.: "Winged edge polyhedron representation", Stanford Artificial Intelligence Report No. CS-320, October 1972.
compsci/1587). Unfortunately this typo-ridden (OCRd?) paper can be quite hard to read.
Easier-to-read reference (http://solidmodel.me.ntu.edu.tw/lessoninfo/file/Chapter03.pdf), from a
solid-modelling course at NTU.
Another reference (http://www.cs.mtu.edu/~shene/COURSES/cs3621/NOTES/model/euler-op.html) that
uses a slightly different definition of terms.
Sven Havemann, Generative Mesh Modeling (http://www.eg.org/EG/DL/dissonline/doc/havemann.pdf),
PhD thesis, Braunschweig University, Germany, 2005.
Martti Mäntylä, An Introduction to Solid Modeling, Computer Science Press, Rockville MD, 1988. ISBN 0-88175-108-1.
Explicit modeling
With explicit modeling, designers quickly and easily create 3D CAD designs, which they then modify through direct, on-the-fly interactions with the model geometry.
Advantages
The explicit approach is flexible and easy to use, so it is ideal for companies that create one-off or highly customized products: products that simply do not require all the extra effort of up-front planning and the embedding of information within models. With an explicit approach to 3D design, the interaction is with the model geometry and not with an intricate sequence of design features. That makes initial training on the software easier, but it also means designers working with an explicit 3D CAD system can easily pick up a design where others left off, much like anyone can open up and immediately continue working on a Microsoft Word document. Explicit modeling thus appeals to a variety of audiences: companies with flexible staff, infrequent users of 3D CAD, and anyone who is concurrently involved in a large number of design projects.
Use in repurposing
When designers repurpose a model, they take an existing 3D CAD design and radically transform it by
cutting/copying/pasting geometry to derive a new model that has no relationship to the original model. With an
explicit approach, companies have demonstrated accelerated product development by repurposing existing designs
into new and completely different products. This unique characteristic of an explicit approach can shave weeks or
even months from project schedules.
Even with direct modeling capabilities, the parametric approach is still designed to leverage embedded product
information. The explicit approach, on the other hand, intentionally limits the amount of information captured as part
of the model definition in order to provide a genuinely lightweight and flexible product design process.
False radiosity
False Radiosity is a 3D computer graphics technique used to create texture mapping for objects that emulates patch
interaction algorithms in radiosity rendering. Though practiced in some form since the late 90s, this term was coined
only around 2002 by architect Andrew Hartness, then head of 3D and real-time design at Ateliers Jean Nouvel.
During the period of nascent commercial enthusiasm for radiosity-enhanced imagery, but prior to the
democratization of powerful computational hardware, architects and graphic artists experimented with time-saving
3D rendering techniques. By darkening areas of texture maps corresponding to corners, joints and recesses, and
applying maps via self-illumination or diffuse mapping in a 3D program, a radiosity-like effect of patch interaction
could be created with a standard scan-line renderer. Successful emulation of radiosity required a theoretical
understanding and graphic application of patch view factors, path tracing and global illumination algorithms. Texture
maps were usually produced with image editing software, such as Adobe Photoshop. The advantage of this method is
decreased rendering time and easily modifiable overall lighting strategies.
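The corner-darkening bake described above can be sketched as generating a small grayscale map. The function name and falloff value below are illustrative; a real workflow would paint or composite this in an image editor rather than compute it:

```python
# Sketch (illustrative names/values): baking a grayscale texture whose
# texels darken toward the two edges of a wall-floor joint, roughly
# imitating radiosity falloff in a recess.

def corner_darkened_texture(size, falloff=0.25):
    """Texel (u, v) darkens as it approaches the u=0 or v=0 edge."""
    tex = []
    for v in range(size):
        row = []
        for u in range(size):
            # normalized distance (0..1) to the nearest darkened edge
            d = min(u, v) / (size - 1)
            row.append(min(1.0, d / falloff))  # ramp: 0 in the joint, 1 when lit
        tex.append(row)
    return tex

tex = corner_darkened_texture(8)
# tex[0][0] is 0.0 (deep in the joint); tex[7][7] is 1.0 (fully lit)
```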
Another common approach similar to false radiosity is the manual placement of standard omni-type lights with
limited attenuation in places in the 3D scene where the artist would expect radiosity reflections to occur. This
method uses many lights and can require an advanced light-grouping system, depending on what assigned
materials/objects are illuminated, how many surfaces require false radiosity treatment, and to what extent it is
anticipated that lighting strategies be set up for frequent changes.
References
Autodesk interview with Hartness about False Radiosity and real-time design [1]
[1] http://usa.autodesk.com/adsk/servlet/item?siteID=123112&id=5549510&linkID=10371177
Fiducial marker
A fiducial marker or fiducial is an object placed in the field of view of an imaging system which appears in the
image produced, for use as a point of reference or a measure. It may be either something placed into or on the
imaging subject, or a mark or set of marks in the reticle of an optical instrument.
Accuracy
In high-resolution optical microscopy, fiducials can be used to actively stabilize the field of view. Stabilization to better than 0.1 nm is achievable (Carter et al., Applied Optics, 2007).
Applications
Physics
In physics, 3D computer graphics, and photography, fiducials are reference points: fixed points or lines within a
scene to which other objects can be related or against which objects can be measured. Cameras outfitted with reseau
plates produce these reference marks (also called reseau crosses) and are commonly used by NASA. Such marks are
closely related to the timing marks used in optical mark recognition.
Geographical Survey
Airborne geophysical surveys also use the term "fiducial" as a sequential reference number in the measurement of
various geophysical instruments during a survey flight. This application of the term evolved from air photo frame
numbers that were originally used to locate geophysical survey lines in the early days of airborne geophysical
surveying. This method of positioning has since been replaced by GPS, but the term "fiducial" continues to be used
as the time reference for data measured during flights.
Virtual Reality
In applications of augmented reality or virtual reality, fiducials are often manually applied to objects in a scene so
that the objects can be recognized in images of the scene. For example, to track some object, a light-emitting diode
can be applied to it. With knowledge of the color of the emitted light, the object can easily be identified in the
picture.
The appearance of markers in images may act as a reference for image scaling, or may allow the image and physical
object, or multiple independent images, to be correlated. By placing fiducial markers at known locations in a subject,
the relative scale in the produced image may be determined by comparison of the locations of the markers in the
image and subject. In applications such as photogrammetry, the fiducial marks of a surveying camera may be set so that they define the principal point, in a process called "collimation"; this usage differs somewhat from how the term collimation is conventionally understood.
Medical Imaging
Fiducial markers are used in a wide range of medical imaging applications. Images of the same subject produced
with two different imaging systems may be correlated by placing a fiducial marker in the area imaged by both
systems. In this case, a marker which is visible in the images produced by both imaging modalities must be used. By
this method, functional information from SPECT or positron emission tomography can be related to anatomical
information provided by magnetic resonance imaging (MRI).[1] Similarly, fiducial points established during MRI can
be correlated with brain images generated by magnetoencephalography to localize the source of brain activity. Such
fiducial points or markers are often created in tomographic images such as magnetic resonance and computed
tomography images using a device known as the N-localizer.
ECG
In electrocardiography, fiducial points are landmarks on the ECG complex such as the isoelectric line (PQ junction),
and onset of individual waves such as PQRST.
Cell Biology
In processes that involve following a labelled molecule as it is incorporated in some larger polymer, such markers
can be used to follow the dynamics of growth/shrinkage of the polymer, as well as its movement. Commonly used
fiducial markers are fluorescently labelled monomers of bio-polymers. The task of measuring and quantifying what
happens to these is borrowed from methods in physics and computational imaging like Speckle imaging.
Radio Therapy
In radiotherapy and radiosurgical systems such as the CyberKnife, fiducial points are landmarks in the tumour that facilitate correct targeting of the treatment. In neuronavigation, a fiducial spatial coordinate system is used as a
reference, for use in neurosurgery, to describe the position of specific structures within the head or elsewhere in the
body. Such fiducial points or landmarks are often created in magnetic resonance imaging and computed tomography
images by using the N-localizer.
PCB
In printed circuit board (PCB) design, fiducial marks, also known as
circuit pattern recognition marks or simply "fids," allow automated
assembly equipment to accurately locate and place parts on boards.
These devices locate the circuit pattern by providing common
measurable points. They are usually made by leaving a circular area of
the board bare from solder-stop coating (similar to clearcoat), in which
a filled copper circle is placed. This center metallic disc can be
solder-coated, gold-plated or otherwise treated, although bare copper is
most common as it is not a current-carrying contact.
Most placement devices are fed boards for assembly by a rail conveyor, with the board being clamped down in the assembly area of the machine. Each board will clamp slightly differently than the others, and the variance, which will generally be only tenths of a millimeter, is sufficient to ruin a board without proper calibration. Consequently, a typical PCB will have three fids to allow placement robots to precisely determine the board's orientation. By measuring the location of the fids relative to the board plan stored in the machine's memory, the machine can reliably compute the degree to which parts must be moved relative to the plan, called offset, to ensure accurate placement. Using three fiducials enables the machine to determine offset in both the X and Y axes, as well as to determine whether the board has rotated during clamping, allowing the machine to rotate parts to be placed to match. Parts requiring a very high degree of placement precision, such as integrated circuit chip packages with many fine leads, may have subsidiary fiducial marks near the package placement area of the board to further fine-tune the targeting.
Conversely, low-end, low-precision boards may only have two fiducials, or use fiducials applied as part of the screen printing process applied to most circuit boards. Some very low-end boards may use the plated mounting screw holes as ersatz fiducials, although this yields very low accuracy.
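The offset computation described above can be sketched with basic trigonometry. This Python sketch (function name and coordinates are illustrative) recovers a rotation and translation from two fiducials; a third fiducial would add redundancy against measurement error:

```python
# Sketch: recovering a board's rotation and translation from two measured
# fiducial positions. Coordinates are illustrative.
import math

def board_transform(plan, measured):
    """Given two fiducials' (x, y) in the board plan and as measured by the
    machine's camera, return (angle_radians, dx, dy)."""
    (p1, p2), (m1, m2) = plan, measured
    ang_plan = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
    ang_meas = math.atan2(m2[1] - m1[1], m2[0] - m1[0])
    theta = ang_meas - ang_plan
    # rotate the first plan fiducial, then translate it onto its measurement
    c, s = math.cos(theta), math.sin(theta)
    rx = c * p1[0] - s * p1[1]
    ry = s * p1[0] + c * p1[1]
    return theta, m1[0] - rx, m1[1] - ry

plan = [(0.0, 0.0), (100.0, 0.0)]
measured = [(2.0, 1.0), (2.0, 101.0)]   # board shifted and rotated 90 degrees
theta, dx, dy = board_transform(plan, measured)
# theta ~ pi/2, (dx, dy) ~ (2.0, 1.0)
```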
For prototyping and small batch production runs, the use of a fiducial camera can greatly improve the process of
board fabrication. By automatically locating fiducial markers, the camera automates board alignment. This helps
with front to back and multilayer applications, eliminating the need for set pins.[2]
Printing
In color printing, fiducials (also called "registration black") are used at the edge of the cyan, magenta, yellow and black (CMYK) printing plates so that they can be correctly aligned with each other.
References
[1] Correlation of single photon emission CT with MR image data using fiduciary markers. (http://www.ajnr.org/cgi/content/abstract/14/3/713) BJ Erickson and CR Jack Jr., American Journal of Neuroradiology, Vol 14, Issue 3, 713-720 (1993).
[2] http://www.youtube.com/watch?v=-tVZ-sdxG2o
Fluid simulation
Fluid simulation is an increasingly popular tool in computer graphics
for generating realistic animations of water, smoke, explosions, and
related phenomena. Given some input configuration of fluid and scene
geometry, a fluid simulator evolves the motion of the fluid forward in
time, making use of the (possibly heavily simplified) Navier-Stokes
equations which describe the physics of fluids. In computer graphics,
such simulations range in complexity from extremely time-consuming
high quality animations for film & visual effects, to simple real-time
particle systems used in modern games.
Example of fluid simulation
Approaches
There are several competing techniques for liquid simulation with a variety of trade-offs. The most common are
Eulerian grid-based methods, smoothed particle hydrodynamics (SPH) methods, vorticity-based methods, and
Lattice Boltzmann methods. These methods originated in the computational fluid dynamics community, and have
steadily been adopted by graphics practitioners. The key difference in the graphics setting is that the results need
only be plausible. That is, if a human observer is unable to identify by inspection whether a given animation is
physically correct, the results are sufficient, whereas in physics, engineering, or mathematics, more rigorous error
metrics are necessary.
Development
In computer graphics, the earliest attempts to solve the Navier-Stokes equations in full 3D came in 1996, by Nick
Foster and Dimitris Metaxas, who based their work primarily on a classic CFD paper from 1965 by Harlow &
Welch. Prior to this, many methods were built on ad-hoc particle systems, lower dimensional techniques such as 2D
shallow water models, and semi-random turbulent noise fields. In 1999, Jos Stam published the so-called Stable
Fluids method at SIGGRAPH, which exploited a semi-Lagrangian advection technique and implicit integration of
viscosity to provide unconditionally stable behaviour. This allowed for much larger time steps and in general, faster
simulations. This general technique was extended by Fedkiw and collaborators to handle complex 3D water
simulations using the level set method in papers in 2001 and 2002.
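Stam's semi-Lagrangian advection step can be sketched in a few lines. The following is a simplified illustration (uniform grid, bilinear sampling, clamped boundaries), not production fluid-solver code:

```python
import numpy as np

def advect(field, vx, vy, dt):
    """Semi-Lagrangian advection (after Stam 1999): for each grid cell,
    trace the velocity backwards in time and sample the field there.
    Unconditionally stable: values are interpolated, never extrapolated."""
    n, m = field.shape
    ys, xs = np.meshgrid(np.arange(n), np.arange(m), indexing="ij")
    # Backtrace departure points, clamped to the grid.
    x0 = np.clip(xs - dt * vx, 0, m - 1)
    y0 = np.clip(ys - dt * vy, 0, n - 1)
    # Bilinear interpolation at the departure points.
    i0, j0 = np.floor(y0).astype(int), np.floor(x0).astype(int)
    i1, j1 = np.minimum(i0 + 1, n - 1), np.minimum(j0 + 1, m - 1)
    fy, fx = y0 - i0, x0 - j0
    return ((field[i0, j0] * (1 - fx) + field[i0, j1] * fx) * (1 - fy) +
            (field[i1, j0] * (1 - fx) + field[i1, j1] * fx) * fy)

# A blob of smoke density carried to the right by a uniform wind.
density = np.zeros((16, 16))
density[8, 4] = 1.0
vx = np.ones((16, 16))    # uniform +x velocity
vy = np.zeros((16, 16))
for _ in range(6):
    density = advect(density, vx, vy, dt=1.0)
```

Because each cell only ever reads interpolated old values, large time steps cannot blow up the simulation, which is the property that made this step the backbone of real-time solvers.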
Some notable academic researchers in this area include James F. O'Brien, Ron Fedkiw, Mark Carlson, Greg Turk,
Robert Bridson, Ken Museth and Jos Stam.
Software
Several options are available for fluid simulation in off-the-shelf 3D packages. A popular open-source package is
Blender 3D, with a stable Lattice Boltzmann method implemented in addition to two distinct SPH approaches.
Another option is Glu3D, a plugin for 3ds Max very similar to Blender's fluid capability. Other options are RealFlow,
FumeFX and AfterBurn for Max; Dynamite for LightWave 3D; ICE SPH Fluids and Mootzoid's emFluid4 for
Softimage; and Turbulence.4D, PhyFluids3D and DPIT for Cinema 4D. Houdini and Maya support fluids natively;
however, plugins can be bought to improve the simulations.
Forward kinematics
Forward kinematics refers to the use of the kinematic equations
of a robot to compute the position of the end-effector from
specified values for the joint parameters. The kinematics equations
of the robot are used in robotics, computer games, and animation.
The reverse process that computes the joint parameters that
achieve a specified position of the end-effector is known as
inverse kinematics.
Kinematics equations
An articulated six-DOF robotic arm uses forward kinematics to position the gripper.

The kinematics equations for the serial chain of a robot are obtained by composing the joint and link transformations:

    [T] = [Z1][X1][Z2][X2] ... [Xn-1][Zn]

where [T] is the transformation locating the end-link. These equations are called the kinematics equations of the
serial chain.[1]
Link transformations

In 1955, Jacques Denavit and Richard Hartenberg introduced a convention for the definition of the joint matrices [Z]
and link matrices [X] to standardize the coordinate frame for spatial linkages.[2][3] This convention positions the
joint frame so that it consists of a screw displacement along the Z-axis,

    [Zi] = Trans(di) Rot(θi),

and the link frame so that it consists of a screw displacement along the X-axis,

    [Xi] = Trans(ai,i+1) Rot(αi,i+1),

where θi, di, αi,i+1 and ai,i+1 are known as the Denavit-Hartenberg parameters. Here Trans and Rot denote the
4x4 homogeneous translation and rotation matrices about the joint's axis, linking the frame of one joint to the frame
of the next.

Denavit-Hartenberg matrix

The use of the Denavit-Hartenberg convention yields the link transformation matrix [i-1Ti], relating frame i to frame
i-1, as

    [i-1Ti] = [Zi][Xi],

known as the Denavit-Hartenberg matrix.
References
[1] J. M. McCarthy, 1990, Introduction to Theoretical Kinematics, MIT Press, Cambridge, MA.
[2] J. Denavit and R. S. Hartenberg, 1955, "A kinematic notation for lower-pair mechanisms based on matrices." Trans ASME J. Appl. Mech, 23:215-221.
[3] Hartenberg, R. S., and J. Denavit. Kinematic Synthesis of Linkages. New York: McGraw-Hill, 1964; on-line through KMODDL (http://ebooks.library.cornell.edu/k/kmoddl/toc_hartenberg1.html)
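The chaining of Denavit-Hartenberg link transformations can be sketched in code. The two-joint planar arm and its parameters below are hypothetical, chosen only to illustrate the composition:

```python
import numpy as np

def dh_matrix(theta, d, a, alpha):
    """One link transformation built from the four Denavit-Hartenberg
    parameters: rotate/translate about Z (joint), then about X (link)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_params):
    """Chain the link transformations [Z1][X1]...[Zn][Xn] to obtain [T],
    the transformation locating the end-link in the base frame."""
    T = np.eye(4)
    for theta, d, a, alpha in dh_params:
        T = T @ dh_matrix(theta, d, a, alpha)
    return T

# Hypothetical planar 2R arm: two revolute joints, link lengths 1.0 and 0.5.
params = [(np.pi / 2, 0.0, 1.0, 0.0),    # joint 1 at +90 degrees
          (-np.pi / 2, 0.0, 0.5, 0.0)]   # joint 2 at -90 degrees
T = forward_kinematics(params)
end_effector = T[:3, 3]   # position of the end-effector
```

The first link swings the arm up to (0, 1); the second joint cancels the rotation, so the end-effector lands at (0.5, 1.0, 0).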
Freeform surface modelling
Freeform surface, or freeform surfacing, is used in CAD and other computer graphics software to describe the skin
of a 3D geometric element. Freeform surfaces do not have rigid radial dimensions, unlike regular surfaces such as
planes, cylinders and conic surfaces. They are used to describe forms such as turbine blades, car bodies and boat
hulls. Initially developed for the automotive and aerospace industries, freeform surfacing is now widely used in all
engineering design disciplines from consumer goods products to ships. Most systems today use nonuniform rational
B-spline (NURBS) mathematics to describe the surface forms; however, there are other methods such as Gordon
surfaces or Coons surfaces.

Variable smooth blend between surfaces (animated version).
The forms of freeform surfaces (and curves) are not stored or defined in CAD software in terms of polynomial
equations, but by their poles, degree, and number of patches (segments with spline curves). The degree of a surface
determines its mathematical properties, and can be seen as representing the shape by a polynomial with variables to
the power of the degree value. For example, a surface with a degree of 1 would be a flat cross section surface. A
surface with degree 2 would be curved in one direction, while a degree 3 surface could (but does not necessarily)
change once from concave to convex curvature. Some CAD systems use the term order instead of degree. The order
of a polynomial is one greater than the degree, and gives the number of coefficients rather than the greatest exponent.
Example surface pole map

The poles (sometimes known as control points) of a surface define its shape. The natural surface edges are defined
by the positions of the first and last poles. (Note that a surface can have trimmed boundaries.) The intermediate poles
act like magnets drawing the surface in their direction. The surface does not, however, go through these points. The
second and third poles, as well as defining shape, respectively determine the start and tangent angles and the
curvature. In a single patch surface (Bézier surface), there is one more pole than the degree values of the surface.
Surface patches can be merged into a single NURBS surface; at these junctions are knot lines. The number of knots
will determine the influence of the poles on either side and how smooth the transition is. The smoothness between
patches, known as continuity, is often referred to in terms of a C value:
C0: just touching, could have a nick
C1: tangent, but could have sudden change in curvature
C2: the patches are curvature continuous to one another
Two more important aspects are the U and V parameters. These are values on the surface ranging from 0 to 1, used
in the mathematical definition of the surface and for defining paths on the surface: for example, a trimmed boundary
edge. Note that they are not proportionally spaced along the surface. A curve of constant U or constant V is known
as an isoparametric curve, or U (V) line. In CAD systems, surfaces are often displayed with their poles of constant U
or constant V values connected together by lines; these are known as control polygons.
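The way poles "pull" a single-patch Bézier surface without passing through it can be illustrated with de Casteljau evaluation. This is an illustrative sketch with a made-up pole grid, not any particular CAD system's routine:

```python
import numpy as np

def de_casteljau(points, t):
    """Evaluate a Bezier curve at parameter t by repeated linear
    interpolation of the poles (control points)."""
    pts = np.array(points, dtype=float)
    while len(pts) > 1:
        pts = (1 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

def bezier_surface(poles, u, v):
    """Evaluate a single-patch Bezier surface at (u, v): reduce each row
    of the pole grid in u, then reduce the resulting points in v."""
    row_points = [de_casteljau(row, u) for row in poles]
    return de_casteljau(row_points, v)

# A degree (2, 1) patch: a 2 x 3 grid of poles (one more pole than the
# degree in each direction).  The raised middle poles pull the surface
# upward, but the surface does not reach them.
poles = [[(0, 0, 0), (1, 0, 1), (2, 0, 0)],
         [(0, 1, 0), (1, 1, 1), (2, 1, 0)]]
p = bezier_surface(poles, 0.5, 0.5)        # centre of the patch
corner = bezier_surface(poles, 0.0, 0.0)   # equals the first pole
```

At the patch centre the height is 0.5, half the height of the interior poles, showing the "magnet" behaviour; only the corner poles lie exactly on the surface.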
Modelling
When defining a form, an important factor is the continuity between surfaces - how smoothly they connect to one
another.
One example of where surfacing excels is automotive body panels. Just blending two curved areas of the panel with
different radii of curvature together, maintaining tangential continuity (meaning that the blended surface doesn't
change direction suddenly, but smoothly) won't be enough. They need to have a continuous rate of curvature change
between the two sections, or else their reflections will appear disconnected.
The continuity is defined using the terms
G0 position (touching)
G1 tangent (angle)
G2 curvature (radius)
G3 acceleration (rate of change of curvature)
To achieve a high quality NURBS or Bézier surface, degrees of 5 or greater are generally used. Depending on the
product and production process, different levels of accuracy are used, but tolerances usually range from 0.02 mm to
0.001 mm (for example, in the fairing of BIW concept surfaces to production surfaces). For shipbuilding this need
not be so tight, but for precision gears and medical devices it is much finer.
History of terms
The term lofting originally came from the shipbuilding industry, where loftsmen worked on "barn loft" type
structures to create the keel and bulkhead forms out of wood. The practice was then passed on to the aircraft and
then automotive industries, which also required streamlined shapes.
The term spline also has nautical origins, coming from an East Anglian dialect word for a thin long strip of wood
(probably from the Old English and Germanic word splint).
Software

CATIA
Cobalt (Ashlar-Vellum [1])
formZ (form-Z [2])
PowerSHAPE (PowerSHAPE [3])
Solidworks
SolidThinking
ICEM Surf
Imageware
ProEngineer ISDX ([4])
NX (Unigraphics)
ProEngineer
Rhinoceros 3D
VSR Shape Modeling ([5])
FreeForm Modeling Plus from SensAble Technologies (Sensable.com [6]) Now part of Geomagic Design.
Autodesk Inventor
Alias StudioTools
FreeSHIP (FreeSHIP [7]) Link broken as of Jan 2014.
References
[1] http://www.ashlar.com/sections/products/cobalt/cobalt.html
[2] http://www.formz.com
[3] http://www.powershape.com/
[4] http://www.ptc.com/products/creo/interactive-surface-design-extension
[5] http://www.virtualshape.com/en/products/shape-modeling
[6] http://www.sensable.com/
[7] http://www.freeship.org
[8] http://www.right-toolbox.com.ar/genesis/index.html
[9] http://www.omnicad.com
[10] http://www.superfici3d.com
[11] http://www.Bentley.com
[12] http://punchcad.com/index_pro.htm
[13] http://moi3d.com/
Geometry instancing
In real-time computer graphics, geometry instancing is the practice of rendering multiple copies of the same mesh
in a scene at once. This technique is primarily used for objects such as trees, grass, or buildings which can be
represented as repeated geometry without appearing unduly repetitive, but may also be used for characters. Although
vertex data is duplicated across all instanced meshes, each instance may have other differentiating parameters (such
as color, or skeletal animation pose) changed in order to reduce the appearance of repetition.
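Conceptually, instancing reuses one vertex buffer together with a small table of per-instance parameters. The sketch below mimics on the CPU what the GPU does per instanced draw call; the instance fields (offset, scale, color) are illustrative assumptions:

```python
import numpy as np

# One shared mesh (a unit quad), stored once -- the "vertex buffer".
base_mesh = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)

# Per-instance data: only this small table varies between instances;
# the mesh vertices themselves are never duplicated in memory.
instances = [
    {"offset": np.array([0.0, 0.0]), "scale": 1.0, "color": "green"},
    {"offset": np.array([3.0, 1.0]), "scale": 2.0, "color": "dark green"},
    {"offset": np.array([5.0, 4.0]), "scale": 0.5, "color": "yellow"},
]

def expand_instances(mesh, instances):
    """What instanced rendering does per draw call: re-use the same
    vertices, transformed by each instance's own parameters."""
    return [(inst["scale"] * mesh + inst["offset"], inst["color"])
            for inst in instances]

drawn = expand_instances(base_mesh, instances)
```

On real hardware the loop runs on the GPU (for example via instanced draw calls), so thousands of trees or grass tufts cost a single draw call plus one small per-instance buffer.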
External links
EXT_draw_instanced documentation [1]
A quick overview on D3D9 instancing on MSDN [2]
References
[1] http://www.opengl.org/registry/specs/EXT/draw_instanced.txt
[2] http://msdn.microsoft.com/en-us/library/bb173349(VS.85).aspx
Geometry pipelines
Geometric manipulation of modeling primitives, such as that performed by a geometry pipeline, is the first stage in
computer graphics systems which perform image generation based on geometric models. While geometry pipelines
were originally implemented in software, they have become highly amenable to hardware implementation,
particularly since the advent of very-large-scale integration (VLSI) in the early 1980s. A device called the Geometry
Engine developed by Jim Clark and Marc Hannah at Stanford University in about 1981 was the watershed for what
has since become an increasingly commoditized function in contemporary image-synthetic raster display systems.
Geometric transformations are applied to the vertices of polygons, or other geometric objects used as modelling
primitives, as part of the first stage in a classical geometry-based graphic image rendering pipeline. Geometric
computations may also be applied to transform polygon or patch surface normals, and then to perform the lighting
and shading computations used in their subsequent rendering.
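The transform stage described above can be sketched as matrices applied to homogeneous vertices. The camera and projection below are deliberately minimal assumptions (translation-only view, simple pinhole projection), not a full pipeline:

```python
import numpy as np

def look_at_translation(eye):
    """View transform (hypothetical, translation-only camera): move the
    world so the camera sits at the origin looking down -z."""
    m = np.eye(4)
    m[:3, 3] = -np.asarray(eye, dtype=float)
    return m

def perspective_divide(clip):
    """Homogeneous (perspective) divide, the last geometry-pipeline step
    before mapping to screen space."""
    return clip[:3] / clip[3]

# Simple pinhole projection with focal length f: x' = f*x / (-z).
f = 2.0
projection = np.array([[f, 0,  0, 0],
                       [0, f,  0, 0],
                       [0, 0,  1, 0],
                       [0, 0, -1, 0]])   # w receives -z

vertex = np.array([1.0, 2.0, -4.0, 1.0])        # model-space vertex
view = look_at_translation([0.0, 0.0, 0.0])     # camera at the origin
clip = projection @ (view @ vertex)             # transform stage
ndc = perspective_divide(clip)                  # normalized position
```

A vertex at (1, 2, -4) projects to (0.5, 1.0): four units away, its screen coordinates shrink by the ratio f/|z|, which is exactly the work the Geometry Engine accelerated in hardware.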
History
Hardware implementations of the geometry pipeline were introduced in the early Evans & Sutherland Picture
System, but perhaps received broader recognition when later applied in the broad range of graphics systems products
introduced by Silicon Graphics (SGI). Initially the SGI geometry hardware performed simple model space to screen
space viewing transformations with all the lighting and shading handled by a separate hardware implementation
stage, but in later, much higher performance applications such as the RealityEngine, they began to be applied to
perform part of the rendering support as well.
More recently, perhaps dating from the late 1990s, the hardware support required to perform the manipulation and
rendering of quite complex scenes has become accessible to the consumer market. Companies such as Nvidia and
AMD Graphics (formerly ATI) are two current leading representatives of hardware vendors in this space. The
GeForce line of graphics cards from Nvidia was the first to support full OpenGL and Direct3D hardware geometry
processing in the consumer PC market, while some earlier products such as Rendition Verite incorporated hardware
geometry processing through proprietary programming interfaces. On the whole, earlier graphics accelerators by
3Dfx, Matrox and others relied on the CPU for geometry processing.
This subject matter is part of the technical foundation for modern computer graphics, and is a comprehensive topic
taught at both the undergraduate and graduate levels as part of a computer science education.
Geometry processing
Geometry processing, or mesh processing, is a fast-growing[citation needed] area of research that uses concepts from
applied mathematics, computer science and engineering to design efficient algorithms for the acquisition,
reconstruction, analysis, manipulation, simulation and transmission of complex 3D models.
Applications of geometry processing algorithms already cover a wide range of areas from multimedia, entertainment
and classical computer-aided design, to biomedical computing, reverse engineering and scientific computing.[citation
needed]
External links
Siggraph 2001 Course on Digital Geometry Processing (http://www.multires.caltech.edu/pubs/DGPCourse/),
by Peter Schroder and Wim Sweldens
Symposium on Geometry Processing (http://www.geometryprocessing.org/)
Multi-Res Modeling Group (http://www.multires.caltech.edu/), Caltech
Mathematical Geometry Processing Group (http://geom.mi.fu-berlin.de/index.html), Free University of
Berlin
Computer Graphics Group (http://www.graphics.rwth-aachen.de), RWTH Aachen University
Polygonal Mesh Processing Book (http://www.pmp-book.org/)
Gimbal lock
Gimbal lock is the loss of one degree of freedom in a
three-dimensional, three-gimbal mechanism that occurs when the axes
of two of the three gimbals are driven into a parallel configuration,
"locking" the system into rotation in a degenerate two-dimensional
space.
The word lock is misleading: no gimbal is restrained. All three gimbals
can still rotate freely about their respective axes of suspension.
Nevertheless, because of the parallel orientation of two of the gimbals'
axes, there is no gimbal available to accommodate rotation about one
axis.
Gimbals
A gimbal is a ring that is so suspended as to rotate about an axis.
Gimbals typically nest one within another to accommodate rotation
about multiple axes.
Gimbal lock can occur in gimbal systems with two degrees of freedom
such as a theodolite with rotations about an azimuth and elevation in
two dimensions. These systems can gimbal lock at zenith and nadir, because at those points azimuth is not
well-defined, and rotation in the azimuth direction does not change the direction the theodolite is pointing.
Consider tracking a helicopter flying towards the theodolite from the horizon. The theodolite is a telescope mounted
on a tripod so that it can move in azimuth and elevation to track the helicopter. The helicopter flies towards the
theodolite and is tracked by the telescope in elevation and azimuth. The helicopter flies immediately above the tripod
(i.e. it is at zenith) when it changes direction and flies at 90 degrees to its previous course. The telescope cannot track
this maneuver without a discontinuous jump in one or both of the gimbal orientations. There is no continuous motion
that allows it to follow the target: it is in gimbal lock. So there is an infinity of directions around zenith for which the
telescope cannot continuously track all movements of a target. Note that even if the helicopter does not pass through
zenith, but only near zenith, so that gimbal lock does not occur, the system must still move exceptionally rapidly to
track it, as it rapidly passes from one bearing to the other. The closer to zenith the nearest point is, the faster this
must be done, and if it actually goes through zenith, the limit of these "increasingly rapid" movements becomes
infinitely fast, namely discontinuous.
To recover from gimbal lock the user has to go around the zenith explicitly: reduce the elevation, change the
azimuth to match the azimuth of the target, then change the elevation to match the target.
Mathematically, this corresponds to the fact that spherical coordinates do not define a coordinate chart on the sphere
at zenith and nadir. Alternatively, the corresponding map T2 → S2 from the torus T2 to the sphere S2 (given by the
point with given azimuth and elevation) is not a covering map at these points.
Solutions
This problem may be overcome by use of a fourth gimbal, intelligently
driven by a motor so as to maintain a large angle between roll and yaw
gimbal axes. Another solution is to rotate one or more of the gimbals to
an arbitrary position when gimbal lock is detected and thus reset the
device.
Modern practice is to avoid the use of gimbals entirely. In the context
of inertial navigation systems, that can be done by mounting the
inertial sensors directly to the body of the vehicle (this is called a
strapdown system) and integrating sensed rotation and acceleration
digitally using quaternion methods to derive vehicle orientation and
velocity. Another way to replace gimbals is to use fluid bearings or a
flotation chamber.
After the Lunar Module had landed, Mike Collins aboard the Command Module joked "How about sending me a
fourth gimbal for Christmas?"
Robotics
In robotics, gimbal lock is commonly referred to as "wrist flip", due to
the use of a "triple-roll wrist" in robotic arms, where three axes of the
wrist, controlling yaw, pitch, and roll, all pass through a common
point.
An example of a wrist flip, also called a wrist singularity, is when the
path through which the robot is traveling causes the first and third axes
of the robot's wrist to line up. The second wrist axis then attempts to
spin 180° in zero time to maintain the orientation of the end effector.
The result of a singularity can be quite dramatic and can have adverse
effects on the robot arm, the end effector, and the process.
The importance of non-singularities in robotics has led the American
National Standard for Industrial Robots and Robot Systems Safety
Requirements to define it as "a condition caused by the collinear
alignment of two or more robot axes resulting in unpredictable robot motion and velocities".[1]

Industrial robot operating in a foundry.
To make a comparison: all the translations can be described using three numbers x, y and z, as the succession
of three consecutive linear movements along three perpendicular axes X, Y and Z. The same holds for rotations: all
the rotations can be described using three numbers α, β and γ, as the succession of three
rotational movements around three axes that are perpendicular one to the next. This similarity between linear
coordinates and angular coordinates makes Euler angles very intuitive, but unfortunately they suffer from the gimbal
lock problem.

Consider, for example, rotations in the z-x-z convention, written as the product R = Rz(α) Rx(β) Rz(γ). Setting
β = 0, the above becomes

    R = | cos α  -sin α  0 |   | 1  0  0 |   | cos γ  -sin γ  0 |
        | sin α   cos α  0 | . | 0  1  0 | . | sin γ   cos γ  0 |
        |   0       0    1 |   | 0  0  1 |   |   0       0    1 |

The second matrix is the identity matrix and has no effect on the product. Carrying out matrix multiplication of first
and third matrices gives

    R = | cos(α+γ)  -sin(α+γ)  0 |
        | sin(α+γ)   cos(α+γ)  0 |
        |    0           0     1 |

Changing the values of α and γ in the above matrix has the same effects: the rotation angle α+γ
changes, but the rotation axis remains in the Z direction: the last column and the last row in the matrix won't change. Only one
degree of freedom, the rotation angle about Z, remains; a second, independent axis of rotation has been lost.
A common alternative is to represent orientation with quaternions, which do not suffer from gimbal lock.
Accumulated numerical error in a rotation matrix is corrected by converting the matrix
into its nearest orthonormal representation. For quaternions, re-normalization requires only performing quaternion
normalization.
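The collapse of a degree of freedom can be checked numerically. In this sketch (z-x-z Euler convention assumed), two different (α, γ) pairs with the same sum yield identical rotation matrices once β = 0:

```python
import numpy as np

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def rot_x(b):
    c, s = np.cos(b), np.sin(b)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def euler_zxz(alpha, beta, gamma):
    """Rotation matrix from z-x-z Euler angles."""
    return rot_z(alpha) @ rot_x(beta) @ rot_z(gamma)

# With beta = 0 the middle matrix is the identity, so only the sum
# alpha + gamma matters: two very different angle triples produce the
# exact same rotation matrix -- one degree of freedom is gone.
R1 = euler_zxz(0.3, 0.0, 0.5)
R2 = euler_zxz(0.7, 0.0, 0.1)
same = np.allclose(R1, R2)
```

Both triples reduce to a single rotation about Z by 0.8 radians, which is why an animation system interpolating Euler angles near this configuration can produce wild, unintended swings of α and γ.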
References
[1] ANSI/RIA R15.06-1999
External links
Euler angles and gimbal lock (video) Part 1 (http://guerrillacg.org/home/3d-rigging/the-rotation-problem),
Part 2 (http://guerrillacg.org/home/3d-rigging/euler-rotations-explained)
Gimbal Lock - Explained (http://www.youtube.com/watch?v=rrUCBOlJdt4) at YouTube
Glide API
Glide

Original author(s): 3dfx Interactive
Written in: Assembly, C
Operating system: Cross-platform
Type: 3D graphics API
License: 3DFX Glide Source Code General Public License [2]
Website: glide.sourceforge.net [1]
API
Glide is based on the basic geometry and "world view" of OpenGL. OpenGL is a large graphics library with 336
calls[citation needed] in the API, many of which are of limited use. Glide was an effort to select primarily features that
were useful for real-time rendering of 3D games. The result was an API that was small enough to be implemented
entirely in late-1990s hardware. However, this focus led to various limitations in Glide, such as a 16-bit color depth
limit in the display buffer.[3]
Use in games
The combination of the hardware performance of Voodoo Graphics (Voodoo 1) and Glide's easy-to-use API resulted
in Voodoo cards generally dominating the gaming market during the latter half of the 1990s. The name Glide was
chosen to be indicative of the GL underpinnings, while being different enough to avoid trademark problems. [citation
needed]
References
[1] http://glide.sourceforge.net/
[2] (http://www.ohloh.net/licenses/3DFX GLIDE Source Code General Public License) The 3DFX GLIDE Source Code General Public License
[3] GLIDE programming manual: http://www.gamers.org/dEngine/xf3D/glide/glidepgm.htm
[4] 3dfx wraps up wrapper Web sites (http://www.theregister.co.uk/1999/04/08/3dfx_wraps_up_wrapper_web/), The Register, April 8, 1999.
[5] http://www.theregister.co.uk/1999/12/07/3dfx_open_sources_glide_voodoo/
[6] http://sourceforge.net/projects/glide/
GloriaFX

Gloria FX
Type: Incorporated
Founded: 2008
Headquarters: Dnepropetrovsk, Ukraine
Key people: Tomash Kuzmitskiy
Gloria FX is a Ukrainian visual effects company based in Dnipropetrovsk, Ukraine. The company is known for
creating high-quality visual effects for feature films, music videos and commercials.[1]
It was founded in 2008 by Tomash Kuzmitskiy. The company has more than 45 creative artists: VFX supervisors,
animators, modelers, FX TDs, matte painters, compositors, rotoscopers and matchmove artists.[2]
In 2013 the company opened the Gloria FX School, which trains professionals to the level required for further
projects.
The company collaborates with major U.S. and European production companies such as Riveting Entertainment,
London Alley Entertainment, RockHard, DNA, Iconoclast, the Masses, Saatchi & Saatchi, BHC Films, Friendly Films
AS, NE Derection, Ramble West Productions, Aggressive Group, Doomsday Entertainment and Star Media. It has
also worked with directors such as Colin Tilley, Ray Kay, Nabil and Chris Marrs Piliero.
Gloria FX has completed many successful effects projects for music artists, including Chris Brown,[3] Lil
Wayne,[4] Wiz Khalifa,[5] Rick Ross,[6] Kelly Clarkson, Nicki Minaj,[7] Daft Punk, Tyga, Justin Bieber,[8] Foals,[9]
Madcon, Cher, Ciara, Busta Rhymes, Hurts, Snoop Dogg and many others.
Awards

[Awards table: entries for 2011-2013 include wins at MTV and BET ceremonies, among them "Artist to watch" and awards connected to the Gloria FX reel and to Chris Brown ft. Lil Wayne & Busta Rhymes - "Look at Me Now".][10]

Music videography

[Videography table: music videos from 2011 to 2013.]
References
[1] http://vfxg.org/profiles/blogs/the-kick-ass-ukrainian-vfx-company-you-ve-never-heard-of?fb_action_ids=430594120387072&fb_action_types=og.likes&fb_source=aggregation&fb_aggregation_id=288381481237582
[2] http://www.cgmeetup.net/home/gloriafx-visual-effects-demoreel-2012/
[3] http://gloriafx.com/chris-brown-love-more-ft-nicki-minaj/
[4] http://www.videostatic.com/watch-it/2013/07/15/lil-wayne-god-bless-amerika-eif-rivera-dir
[5] http://gloriafx.com/wiz-khalifa-no-sleep/
[6] http://www.hotnewhiphop.com/rick-ross-feat-future-no-games-video-video.21251.html
[7] http://www.dailymotion.com/video/xugfh2_nicki-minaj-cassie-the-boys-explicit_music
[8] http://www.directlyrics.com/justin-bieber-boyfriend-lyrics.html
[9] http://www.stereogum.com/1383932/foals-bad-habit-video-nsfw/video/
[10] https://www.facebook.com/media/set/?set=a.314901068528152.82780.160300277321566&type=1
Hemicube

Shape
Unfolding a hemicube
Uses
The hemicube may be used in the radiosity algorithm, or in other light transport algorithms, in order to determine the
amount of light arriving at a particular point on a surface.
In some cases, a hemicube may be used in environment mapping or reflection mapping.
Image plane
In 3D computer graphics, the image plane is that plane in the world which is identified with the plane of the
monitor. If one makes the analogy of taking a photograph to rendering a 3D image, the surface of the film is the
image plane. In this case, the viewing transformation is a projection that maps the world onto the image plane. A
rectangular region of this plane, called the viewing window or viewport, maps to the monitor. This establishes the
mapping between pixels on the monitor and points (or rather, rays) in the 3D world.
In optics, the image plane is the plane that contains the object's projected image, and lies beyond the back focal
plane.
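The mapping from world points through the image plane to monitor pixels can be sketched as follows; the focal length, viewing window and screen size are illustrative defaults, not any particular system's conventions:

```python
def world_to_pixel(point, focal=1.0, window=(-1.0, 1.0, -1.0, 1.0),
                   screen=(640, 480)):
    """Project a 3D point (camera at the origin looking down -z) onto the
    image plane, then map the viewing window to monitor pixels."""
    x, y, z = point
    # Projection onto the image plane at distance `focal`.
    u = focal * x / -z
    v = focal * y / -z
    # Viewport transform: window coordinates -> pixel coordinates.
    left, right, bottom, top = window
    w, h = screen
    px = (u - left) / (right - left) * w
    py = (top - v) / (top - bottom) * h   # pixel rows grow downward
    return px, py

# A point straight ahead of the camera lands at the screen centre.
centre = world_to_pixel((0.0, 0.0, -5.0))
```

Every pixel corresponds, through the inverse of this mapping, to a ray from the eye through the image plane, which is exactly the correspondence ray tracers exploit.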
Image-based meshing
Image-based meshing is the automated process of creating computer models for computational fluid dynamics
(CFD) and finite element analysis (FEA) from 3D image data (such as magnetic resonance imaging (MRI),
computed tomography (CT) or microtomography). Although a wide range of mesh generation techniques are
currently available, these were usually developed to generate models from computer-aided design (CAD), and
therefore have difficulty meshing from 3D imaging data.
CAD-based approach
The majority of approaches used to date still follow the traditional CAD route by using an intermediary step of
surface reconstruction which is then followed by a traditional CAD-based meshing algorithm.[1] CAD-based
approaches use the scan data to define the surface of the domain and then create elements within this defined
boundary. Although reasonably robust algorithms are now available, these techniques are often time consuming, and
virtually intractable for the complex topologies typical of image data. They also do not easily allow for more than
one domain to be meshed, as multiple surfaces are often non-conforming with gaps or overlaps at interfaces where
one or more structures meet.[2]
Image-based approach
This approach is more direct, as it combines the geometric detection and mesh creation stages in one process,
which offers a more robust and accurate result than meshing from surface data. Voxel conversion techniques
providing meshes with brick elements[3] and with tetrahedral elements[4] have been proposed. Another approach
generates 3D hexahedral or tetrahedral elements throughout the volume of the domain, thus creating the mesh
directly with conforming multipart surfaces.[5]
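The voxel-conversion idea can be sketched by turning each foreground voxel of a segmented image into one brick (hexahedral) element with shared corner nodes. This is a minimal illustration of the concept, not any package's algorithm:

```python
import numpy as np

def voxels_to_bricks(mask):
    """Convert a binary 3D image into brick (hexahedral) elements: one
    unit-cube element per foreground voxel, with corner nodes shared
    between neighbouring voxels so the mesh is conforming."""
    nodes = {}          # (x, y, z) corner -> node index
    elements = []
    corners = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),
               (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)]
    for i, j, k in zip(*np.nonzero(mask)):
        elem = []
        for dx, dy, dz in corners:
            key = (i + dx, j + dy, k + dz)
            if key not in nodes:
                nodes[key] = len(nodes)   # create each shared node once
            elem.append(nodes[key])
        elements.append(elem)
    return nodes, elements

# A 2-voxel segmented region -> 2 bricks sharing a face of 4 nodes.
mask = np.zeros((3, 3, 3), dtype=bool)
mask[0, 0, 0] = mask[1, 0, 0] = True
nodes, elements = voxels_to_bricks(mask)
```

Two adjacent voxels yield two 8-node elements with 12 distinct nodes (2 × 8 corners minus the 4 shared on the common face); sharing nodes at interfaces is what makes the resulting multipart mesh conforming.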
Generating a model
The steps involved in the generation of models based on 3D imaging data are:
Typical use
References
[1] Viceconti et al., 1998. TRI2SOLID: an application of reverse engineering methods to the creation of CAD models of bone segments. Computer Methods and Programs in Biomedicine, 56, 211-220.
[2] Young et al., 2008. An efficient approach to converting 3D image data into highly accurate computational models. Philosophical Transactions of the Royal Society A, 366, 3155-3173.
[3] Fyhrie et al., 1993. The probability distribution of trabecular level strains for vertebral cancellous bone. Transactions of the 39th Annual Meeting of the Orthopaedic Research Society, San Francisco.
[4] Frey et al., 1994. Fully automatic mesh generation for 3-D domains based upon voxel sets. International Journal of Methods in Engineering, 37, 2735-2753.
[5] Young et al., 2008. An efficient approach to converting 3D image data into highly accurate computational models. Philosophical Transactions of the Royal Society A, 366, 3155-3173.
External links
ScanIP commercial image-based meshing software: www.simpleware.com (http://www.simpleware.com)
Mimics 3D image-based engineering software for FEA and CFD on anatomical data: Mimics website (http://
www.materialise.com/mimics)
Google group on image-based modelling: (http://groups.google.co.uk/group/image-based-modelling)
Avizo Software's 3D image-based meshing tools for CFD and FEA
iso2mesh: a free 3D surface and volumetric mesh generator for matlab/octave (http://iso2mesh.sourceforge.net/
)
Inflatable icons
Inflatable Icons refers to a technique that turns 2D icons into 3D models. There are many applications for this
technique, such as rapid prototyping, simulations and presentations, where non-professional computer users could
benefit from the ability to create simple 3D models. Existing tools are geared towards the creation of
production-quality 3D models by professional users with sufficient background, time and motivation to overcome
steep learning curves.

2D icon and the resulting 3D model.

References

Repenning, A. 2005. Inflatable Icons: Diffusion-based Interactive Extrusion of 2D Images into 3D Models [1]. The Journal of Graphical Tools, 10(1): 1-15.

[1] http://jgt.akpeters.com/papers/Repenning05/
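The diffusion-based extrusion of Repenning's paper can be caricatured in a few lines: heights diffuse inside the icon's silhouette while a pressure term pushes them up. The constants and grid below are illustrative assumptions, not the paper's actual parameters:

```python
import numpy as np

def inflate(mask, pressure=0.1, iterations=200):
    """Diffusion-based inflation sketch: each interior height is replaced
    by the average of its four neighbours plus a pressure term, turning a
    flat 2D silhouette into a smooth 3D bump.  Pixels outside the mask
    stay flat, pinning the boundary to height zero."""
    z = np.zeros(mask.shape)
    for _ in range(iterations):
        avg = (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
               np.roll(z, 1, 1) + np.roll(z, -1, 1)) / 4.0
        z = np.where(mask, avg + pressure, 0.0)
    return z

# A square icon silhouette inflates into a dome-like height field.
mask = np.zeros((9, 9), dtype=bool)
mask[2:7, 2:7] = True
z = inflate(mask)
```

Because the update is a simple local averaging pass, a non-expert can steer the result interactively (for example by painting the mask or changing the pressure) without ever touching a 3D modelling tool.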
History
IDCA is an expansion of the original strategic partnership between Temasek Polytechnic and IM Innovations Pte Ltd
which started with the establishment of the 3D Media Studio in 2005 to promote the pervasive use of 3D
visualisation and digital media solutions and services. With the success of 3D Media Studio, Temasek Polytechnic,
IM Innovations and EON Reality Inc. formally launched IDCA on 1st November 2007, supported by the Infocomm
Development Authority of Singapore (IDA).
Background
Methods for simulating deformation, such as changes of shapes, of dynamic bodies involve intensive calculations,
and several models have been developed. Some of these are known as free-form deformation, skeleton-driven
deformation, dynamic deformation and anatomical modelling. Skeletal animation is well known in computer
animation and 3D character simulation. Because of the computational intensity of the simulation, few interactive
systems are available which can realistically simulate dynamic bodies in real-time. Being able to interact with such a
realistic 3D model would mean that calculations would have to be performed within the constraints of a frame rate
acceptable via a user interface.
Recent research has been able to build on previously developed models and methods to provide sufficiently efficient
and realistic simulations. The promise of this technique extends from mimicking human facial expressions, for the
perception of simulating a human actor in real-time, to other cell organisms. Using skeletal constraints and
parameterized force to calculate deformations also has the benefit of matching how a single cell has a shaping
skeleton, as well as how a larger living organism might have an internal bone skeleton, such as the vertebrae. The
generalized external body force simulations make elasticity calculations more efficient, and mean that real-time
interactions are possible.
Basic theory
There are several components to such a simulation system, including a hierarchical basis defined over the simulation
domain, with calculations of these hierarchical functions similar to that of lazy wavelets.
Rather than fitting the object to the skeleton, as is common, the skeleton is used to set constraints for deformation.
Also, the hierarchical basis means that detail levels can be introduced or removed when needed, for example when
observing from a distance or for hidden surfaces.
Pre-calculated poses are used to be able to interpolate between shapes and achieve realistic deformations throughout
motions. This means traditional keyframes are avoided.
There are performance tuning similarities between this technique and procedural generation, wavelet and data
compression methods.
Algorithmic considerations
To achieve interactivity, several optimizations are necessary, and these are implementation specific.
Start by defining the object you wish to animate as a set (i.e. define all the points). You need to define the rest state
of the object (the non-wobble point).
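The rest-state idea can be illustrated with a minimal sketch in which each point is elastically pulled toward its rest position as transformed by a skeletal bone. The 2D rotation, the `stiffness` parameter and the explicit update rule are illustrative assumptions, not the published method:

```python
import math

def deform_step(points, rest, angle, stiffness=0.5):
    """One explicit integration step: each point is pulled toward its rest
    position transformed by the bone's rotation (a 2D rotation here).
    `stiffness` in (0, 1] controls how quickly the body follows the bone."""
    c, s = math.cos(angle), math.sin(angle)
    new_points = []
    for (x, y), (rx, ry) in zip(points, rest):
        tx, ty = c * rx - s * ry, s * rx + c * ry   # bone-constrained target
        new_points.append((x + stiffness * (tx - x),
                           y + stiffness * (ty - y)))
    return new_points
```

Iterating the step makes the points converge geometrically to the skeleton-constrained pose, which is the real-time behaviour the text describes.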
Projects
Projects are taking place to further develop this technique and to present results at SIGGRAPH, with detailed
references available. Academic institutions and commercial enterprises such as Alias Systems Corporation (the
makers of the Maya rendering software), Intel and Electronic Arts are among the known proponents of this work.
There are also videos available showcasing the techniques, with editors demonstrating real-time interactivity with
realistic results. The computer game Spore has also showcased similar techniques.
References
Interactive Character Animation Using Dynamic Elastic Simulation [1], 2004, Steve Capell, Ph.D. dissertation.
Interactive Skeleton-Driven Dynamic Deformations [2], 2002 SIGGRAPH. Authors: Steve Capell, Seth Green,
Brian Curless, Tom Duchamp and Zoran Popović.
A Multiresolution Framework for Dynamic Deformations [3], 2002 SIGGRAPH. Authors: Steve Capell, Seth
Green, Brian Curless, Tom Duchamp and Zoran Popović.
Physically Based Rigging for Deformable Characters [4], 2005 SIGGRAPH. Authors: Steve Capell, Matthew
Burkhart, Brian Curless, Tom Duchamp and Zoran Popović.
Skeleton-driven Deformation - lecture on physically-based modelling, simulation and animation [5], 2005, Ming
C. Lin, University of North Carolina, USA.
External links
Video of an interactive skeletal and model editor with introduction to the basic theory [6], University of
Washington, USA.
Deformable Objects and Characters project [7], University of Washington, USA. Has example videos of the
techniques.
Motion Libraries for Character Animation project [8], University of Washington, USA. Has example videos of
the techniques.
References
[6] http://grail.cs.washington.edu/projects/deformation/Capell-2002-ISD-divx.avi
[7] http://grail.cs.washington.edu/projects/deformation/
[8] http://grail.cs.washington.edu/projects/charanim/
Inverse kinematics
Inverse kinematics refers to the use of the kinematics equations of a
robot to determine the joint parameters that provide a desired position
of the end-effector. Specification of the movement of a robot so that its
end-effector achieves a desired task is known as motion planning.
Inverse kinematics transforms the motion plan into joint actuator
trajectories for the robot.
The movement of a kinematic chain whether it is a robot or an
animated character is modeled by the kinematics equations of the
chain. These equations define the configuration of the chain in terms of
its joint parameters. Forward kinematics uses the joint parameters to
compute the configuration of the chain, and inverse kinematics
reverses this calculation to determine the joint parameters that achieve
a desired configuration.[1][2][3]
For example, inverse kinematics formulas allow calculation of the joint
parameters that position a robot arm to pick up a part. Similar formulas
determine the positions of the skeleton of an animated character that is
to move in a particular way.
Kinematic analysis
Kinematic analysis is one of the first steps in the design of most industrial
robots. Kinematic analysis allows the designer to obtain information on
the position of each component within the mechanical system. This
information is necessary for subsequent dynamic analysis along with
control paths.
Inverse kinematics is an example of the kinematic analysis of a
constrained system of rigid bodies, or kinematic chain. The kinematic
equations of a robot can be used to define the loop equations of a
complex articulated system. These loop equations are non-linear
constraints on the configuration parameters of the system. The
independent parameters in these equations are known as the degrees of
freedom of the system.
While analytical solutions to the inverse kinematics problem exist for a
wide range of kinematic chains, computer modeling and animation tools
often use Newton's method to solve the non-linear kinematics equations.
Other applications of inverse kinematic algorithms include interactive
manipulation, animation control and collision avoidance.
Let the configuration of the chain be given by the joint parameter vector θ, and let x = f(θ) be the end-effector
position computed by forward kinematics. A first-order approximation of the residual Δx = x_target − f(θ) gives
Δx ≈ J(θ) Δθ,
where J is the Jacobian of f. Note that the (i, k)-th entry of the Jacobian matrix can be determined numerically:
J_ik ≈ ( f_i(θ + h e_k) − f_i(θ) ) / h,
where e_k is the k-th coordinate direction and h is a small increment. The joint parameter update is then
Δθ = J⁺ Δx,
where J⁺ is the Moore–Penrose pseudoinverse of J. Once some Δθ-vector has caused the error to drop close to zero,
the algorithm should terminate. Existing methods based on the Hessian matrix of the system have been reported to
converge to desired values using fewer iterations, though in some cases requiring more computational resources.
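A minimal sketch of this scheme for a planar two-link arm follows. The link lengths, starting angles, damping constant and step count are illustrative, and damped least squares is used here in place of the bare pseudoinverse to keep the 2×2 solve robust near singular poses:

```python
import math

def fk(thetas, lengths=(1.0, 1.0)):
    """Forward kinematics of a planar 2-link arm: joint angles -> (x, y)."""
    t1, t2 = thetas
    l1, l2 = lengths
    return (l1 * math.cos(t1) + l2 * math.cos(t1 + t2),
            l1 * math.sin(t1) + l2 * math.sin(t1 + t2))

def solve_ik(target, thetas=(0.3, 0.3), steps=100, h=1e-6, lam=1e-4):
    """Iterative IK: numeric Jacobian J_ik ~ (f_i(t + h e_k) - f_i(t)) / h,
    then a damped least-squares update dt = J^T (J J^T + lam I)^-1 dx."""
    t = list(thetas)
    for _ in range(steps):
        x, y = fk(t)
        ex, ey = target[0] - x, target[1] - y
        if ex * ex + ey * ey < 1e-12:
            break
        J = [[0.0, 0.0], [0.0, 0.0]]
        for k in range(2):          # numeric 2x2 Jacobian by finite differences
            tp = list(t)
            tp[k] += h
            xp, yp = fk(tp)
            J[0][k] = (xp - x) / h
            J[1][k] = (yp - y) / h
        # solve (J J^T + lam I) y = e by the 2x2 closed form, then dt = J^T y
        a = J[0][0] * J[0][0] + J[0][1] * J[0][1] + lam
        b = J[0][0] * J[1][0] + J[0][1] * J[1][1]
        d = J[1][0] * J[1][0] + J[1][1] * J[1][1] + lam
        det = a * d - b * b
        y0 = (d * ex - b * ey) / det
        y1 = (-b * ex + a * ey) / det
        t[0] += J[0][0] * y0 + J[1][0] * y1
        t[1] += J[0][1] * y0 + J[1][1] * y1
    return t
```

Calling `solve_ik((1.0, 1.0))` drives the end-effector to the target, after which `fk` reproduces the requested position to within the stopping tolerance.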
References
[1] J. M. McCarthy, 1990, Introduction to Theoretical Kinematics, MIT Press, Cambridge, MA.
[2] J. J. Uicker, G. R. Pennock, and J. E. Shigley, 2003, Theory of Machines and Mechanisms, Oxford University Press, New York.
[3] J. M. McCarthy and G. S. Soh, 2010, Geometric Design of Linkages (http://books.google.com/books?id=jv9mQyjRIw4C), Springer, New York.
External links
Robotics and 3D Animation in FreeBasic (http://sites.google.com/site/proyectosroboticos/
cinematica-inversa-iii) (Spanish)
Analytical Inverse Kinematics Solver (http://openrave.programmingvision.com/index.
php?title=Component:Ikfast) - Given an OpenRAVE robot kinematics description, generates a C++ file that
analytically solves for the complete IK.
Inverse Kinematics algorithms (http://freespace.virgin.net/hugo.elias/models/m_ik2.htm)
Robot Inverse solution for a common robot geometry (http://www.learnaboutrobots.com/inverseKinematics.
htm)
HowStuffWorks.com article How do the characters in video games move so fluidly? (http://entertainment.
howstuffworks.com/question538.htm) with an explanation of inverse kinematics
3D Theory Kinematics (http://www.euclideanspace.com/physics/kinematics/joints/index.htm)
Protein Inverse Kinematics (http://cnx.org/content/m11613/latest/)
Simple Inverse Kinematics example with source code using Jacobian (http://diegopark.googlepages.com/
computergraphics)
Detailed description of Jacobian and CCD solutions for inverse kinematics (http://billbaxter.com/courses/290/
html/index.htm)
Isosurface
An isosurface is a three-dimensional analog of an isoline. It is a
surface that represents points of a constant value (e.g. pressure,
temperature, velocity, density) within a volume of space; in other
words, it is a level set of a continuous function whose domain is
3D-space.
Isosurfaces are normally displayed using computer graphics, and are
used as data visualization methods in computational fluid dynamics
(CFD), allowing engineers to study features of a fluid flow (gas or
liquid) around objects, such as aircraft wings. An isosurface may
represent an individual shock wave in supersonic flight, or several
isosurfaces may be generated showing a sequence of pressure values in
the air flowing around a wing. Isosurfaces tend to be a popular form of
visualization for volume datasets since they can be rendered by a
simple polygonal model, which can be drawn on the screen very
quickly.
References
Charles D. Hansen; Chris R. Johnson (2004). Visualization Handbook [1]. Academic Press. p. 7.
ISBN 978-0-12-387582-2.
External links
Isosurface Polygonization [2]
References
[1] http://books.google.com/books?id=ZFrlULckWdAC&pg=PA7
[2] http://www2.imm.dtu.dk/~jab/gallery/polygonization.html
Joint constraints
Joint constraints are rotational constraints on the joints of an artificial bone system. They are used in an inverse
kinematics chain, for such things as 3D animation or robotics. Joint constraints can be implemented in a number of
ways, but the most common method is to limit rotation about the X, Y and Z axis independently. An elbow, for
instance, could be represented by limiting rotation on Y and Z axis to 0 degrees, and constraining the X-axis rotation
to 130 degrees.
To simulate joint constraints more accurately, dot-products can be used with an independent axis to repulse the child
bones orientation from the unreachable axis. Limiting the orientation of the child bone to a border of vectors tangent
to the surface of the joint, repulsing the child bone away from the border, can also be useful in the precise restriction
of shoulder movement.
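Limiting rotation about each axis independently, as in the elbow example above, can be sketched as follows (the degree limits and the dictionary layout are illustrative):

```python
def clamp(value, lo, hi):
    """Clamp a rotation (in degrees) to the closed interval [lo, hi]."""
    return max(lo, min(hi, value))

def constrain_joint(rx, ry, rz, limits):
    """Apply independent per-axis rotation limits, e.g. an elbow:
    limits = {'x': (0, 130), 'y': (0, 0), 'z': (0, 0)}."""
    return (clamp(rx, *limits['x']),
            clamp(ry, *limits['y']),
            clamp(rz, *limits['z']))
```

An IK solver would call such a constraint after each update so that, for instance, an elbow asked to bend to 150 degrees is held at its 130-degree limit while the other axes stay locked.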
Kinematic chain
Kinematic chain refers to an assembly of
rigid bodies connected by joints that is the
mathematical model for a mechanical
system.[1] As in the familiar use of the word
chain, the rigid bodies, or links, are
constrained by their connections to other
links. An example is the simple open chain
formed by links connected in series, like the
usual chain, which is the kinematic model
for a typical robot manipulator.[2]
Mathematical models of the connections, or joints, between two links are termed kinematic pairs. Kinematic pairs
model the hinged and sliding joints fundamental to robotics, often called lower pairs, and the surface contact joints
critical to cams and gearing, called higher pairs. These joints are generally modeled as holonomic constraints. A
kinematic diagram is a schematic of the mechanical system that shows the kinematic chain.
The JPL mobile robot ATHLETE is a platform with six serial chain legs ending in wheels.
Mobility formula
The degrees of freedom, or mobility, of a
kinematic chain is the number of parameters
that define the configuration of the chain.[5]
A system of n rigid bodies moving in space
has 6n degrees of freedom measured relative
to a fixed frame. This frame is included in the count of bodies, so that mobility does not depend on the link that
forms the fixed frame. This means the degree of freedom of this system is M = 6(N − 1), where N = n + 1 is the
number of moving bodies plus the fixed body.
The arms, fingers and head of the JSC Robonaut are modeled as kinematic chains.
The movement of the Boulton & Watt steam engine is studied as a system of rigid
bodies connected by joints forming a kinematic chain.
The constraint equations of a kinematic chain couple the range of movement allowed at each joint to the dimensions
of the links in the chain, and form algebraic equations that are solved to determine the configuration of the chain
associated with specific values of input parameters, called degrees of freedom.
The constraint equations for a kinematic chain are obtained using rigid
transformations [Z] to characterize the relative movement allowed at each
joint and separate rigid transformations [X] to define the dimensions of
each link. In the case of a serial open chain, the result is a sequence of
rigid transformations alternating joint and link transformations from the
base of the chain to its end link, which is equated to the specified position
for the end link. A chain of n links connected in series has the kinematic equations
[T] = [Z1][X1][Z2][X2] … [X(n−1)][Zn],
where [T] is the transformation locating the end link.
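For a planar serial chain, the alternating product of joint transforms [Z] and link transforms [X] can be sketched as follows. The 3×3 homogeneous matrices and revolute planar joints are a simplification of the spatial case:

```python
import math

def rot(theta):
    """Joint transform [Z]: rotation about the joint axis (planar case)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def trans(length):
    """Link transform [X]: translation along the link."""
    return [[1.0, 0.0, length], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def end_link_transform(thetas, lengths):
    """Compose [Z1][X1][Z2][X2]...[Zn][Xn] for a serial chain of n links."""
    T = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    for theta, length in zip(thetas, lengths):
        T = matmul(T, rot(theta))
        T = matmul(T, trans(length))
    return T
```

For two unit links bent up 90 degrees and back down, the composed transform places the end link at (1, 1) with identity orientation, which matches tracing the chain by hand.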
Its geometrical form: how are neighbouring joints spatially connected to each other?
Explanation: Two or more rigid bodies in space are collectively called a rigid body system. We can hinder the
motion of these independent rigid bodies with kinematic constraints. Kinematic constraints are constraints between
rigid bodies that result in the decrease of the degrees of freedom of the rigid body system.
References
[1] Reuleaux, F., 1876, The Kinematics of Machinery (http://books.google.com/books?id=WUZVAAAAMAAJ) (trans. and annotated by A. B. W. Kennedy), reprinted by Dover, New York (1963).
[2] J. M. McCarthy and G. S. Soh, 2010, Geometric Design of Linkages (http://books.google.com/books?id=jv9mQyjRIw4C), Springer, New York.
[3] Larry L. Howell, 2001, Compliant Mechanisms (http://books.google.com/books/about/Compliant_mechanisms.html?id=tiiSOuhsIfgC), John Wiley & Sons.
[4] Alexander Slocum, 1992, Precision Machine Design (http://books.google.com/books?id=uG7aqgal65YC), SME.
[5] J. J. Uicker, G. R. Pennock, and J. E. Shigley, 2003, Theory of Machines and Mechanisms, Oxford University Press, New York.
[6] J. M. McCarthy, 1990, Introduction to Theoretical Kinematics, MIT Press, Cambridge, MA.
[7] R. S. Hartenberg and J. Denavit, 1964, Kinematic Synthesis of Linkages, McGraw-Hill, New York.
[8] Suh, C. H., and Radcliffe, C. W., Kinematics and Mechanism Design, John Wiley and Sons, New York, 1978.
[9] Sandor, G. N., and Erdman, A. G., 1984, Advanced Mechanism Design: Analysis and Synthesis, Vol. 2, Prentice-Hall, Englewood Cliffs, NJ.
[10] Hunt, K. H., Kinematic Geometry of Mechanisms, Oxford Engineering Science Series, 1979.
photons/(s·cm²·sr), which is the same as for the normal observer: the observed radiance of a Lambertian surface is
independent of the viewing angle. The result follows from a change of variables on the unit sphere, using the
determinant of the Jacobian matrix for the unit sphere, and the definition of luminous flux.
Uses
Lambert's cosine law in its reversed form (Lambertian reflection) implies that the apparent brightness of a
Lambertian surface is proportional to the cosine of the angle between the surface normal and the direction of the
incident light.
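Lambertian reflection as stated, with apparent brightness proportional to the cosine of the angle between the surface normal and the light direction, can be sketched as follows (the vector inputs and the clamp-to-zero convention for back-facing light are illustrative):

```python
import math

def lambertian_brightness(normal, light_dir, intensity=1.0):
    """Apparent brightness of a Lambertian surface: proportional to the
    cosine of the angle between the normal and the incident light direction.
    Negative values (light arriving from behind the surface) clamp to zero."""
    nx, ny, nz = normal
    lx, ly, lz = light_dir
    nlen = math.sqrt(nx * nx + ny * ny + nz * nz)
    llen = math.sqrt(lx * lx + ly * ly + lz * lz)
    cos_theta = (nx * lx + ny * ly + nz * lz) / (nlen * llen)
    return intensity * max(0.0, cos_theta)
```

Head-on illumination gives full brightness, light at 60 degrees gives half brightness (cos 60° = 0.5), and light from behind gives none, which is what produces the light and dark stripes on mouldings described below.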
This phenomenon is, among others, used when creating mouldings, which are a means of applying light- and
dark-shaded stripes to a structure or object without having to change the material or apply pigment. The contrast of
dark and light areas gives definition to the object. Mouldings are strips of material with various cross-sections used
to cover transitions between surfaces or for decoration.
References
[1] RCA Electro-Optics Handbook, p.18 ff
[2] Modern Optical Engineering, Warren J. Smith, McGraw-Hill, p.228, 256
[3] Incropera and DeWitt, Fundamentals of Heat and Mass Transfer, 5th ed., p.710.
Light stage
A light stage or light cage is an instrumentation set-up used for reflectance, texture and motion capture often with
structured light and a multi-camera setup.
Reflectance capture
The reflectance field over a human face was first captured in 2000 by Paul Debevec et al. The method they used to
find the light that travels under the skin was based on the existing scientific knowledge that light reflecting off the
air-to-oil interface retains its polarization, while light that travels under the skin loses its polarization.
Using this information, the simplest, yet most revolutionary to date, light stage was built by Paul Debevec et al.; it
consisted of:
1. Moveable digital camera
2. Moveable simple light source (full rotation with
adjustable radius and height)
3. 2 polarizers set into various angles in front of the
light and the camera
4. A computer with relatively simple programs doing
relatively simple tasks.
See also
Digital Emily [1], presented at the SIGGRAPH convention in 2008, for which the reflectance field of actress
Emily O'Brien was captured using the USC light stage 5,[2] and a prerendered digital look-alike was made in
association with Image Metrics. The video includes USC light stage 5 and USC light stage 6.
Digital Ira [3], which runs precomputed but is also fairly convincingly rendered in real-time, was presented at the
2013 SIGGRAPH in association with Activision. Digital Emily, shown in 2008, was a pre-computed simulation,
whereas Digital Ira ran in real-time in 2013 and looks fairly realistic even in real-time rendering of animation. The
field is rapidly moving from movies to computer games and leisure applications. The video includes USC light
stage X.
References
[1] http://www.ted.com/talks/paul_debevec_animates_a_photo_real_digital_face.html
[2] Paul Debevec animates a photo-real digital face - Digital Emily (http://www.ted.com/talks/paul_debevec_animates_a_photo_real_digital_face.html), 2008.
[3] http://gl.ict.usc.edu/Research/DigitalIra
Light Transport
The amount of light transported is measured by flux density, that is, flux per unit area. See [1] for a PDF explaining
"A Theory of Inverse Light Transport".
Models
Hemisphere
Given a surface S, a hemisphere H can be projected onto S to calculate the amount of incoming and outgoing light.
If a point P is selected at random on the surface S, the amount of light incoming and outgoing can be calculated by
its projection onto the hemisphere.
Hemicube
The hemicube model works in a similar way to the hemisphere model, with the exception that a hemicube is
projected instead of a hemisphere. The similarity is only conceptual; the actual calculation done by integration has a
different form factor.
Rendering
Rendering converts a model into an image either by simulating light transport to get physically based photorealistic
images, or by applying some kind of style as in non-photorealistic rendering. The two basic operations in realistic
rendering are transport (how much light gets from one place to another) and scattering (how surfaces interact with
light).
References
[1] http://www.cs.toronto.edu/~kyros/pubs/05.iccv.interreflect.pdf
External links
Charles Loop: Smooth Subdivision Surfaces Based on Triangles, M.S.
Mathematics thesis, University of Utah, 1987 (pdf [1]).
Homepage of Charles Loop [2].
Jos Stam: Evaluation of Loop Subdivision Surfaces, Computer Graphics
Proceedings ACM SIGGRAPH 1998 (pdf [3], downloadable eigenstructures [4]).
Loop subdivision of an icosahedron (top) after one and after two refinement steps.
Low poly
Low poly is a polygon mesh in 3D computer graphics
that has a relatively small number of polygons. Low
poly meshes occur in real-time applications (e.g.
games) and contrast with high poly meshes in animated
movies and special effects of the same era. The term
low poly is used in both a technical and a descriptive
sense; the number of polygons in a mesh is an
important factor to optimize for performance but can
give an undesirable appearance to the resulting
graphics.
This polygon mesh representing a dolphin would be considered low
poly by modern (2013) standards.
Polygon budget
A combination of the game engine or rendering method and the computer being used defines the polygon budget:
the number of polygons which can appear in a scene and still be rendered at an acceptable frame rate. Therefore, the
use of low poly meshes is mostly confined to computer games and other software in which a user must manipulate
3D objects in real time, because processing power is limited to that of a typical personal computer or games console
and the frame rate must be high. Computer-generated imagery for films or still images has a higher polygon budget
because rendering does not need to be done in real-time. In addition, computer processing power in these situations
is typically less limited, often using a large network of computers or what is known as a render farm. Each frame can
take hours to create, despite the enormous computer power involved. A common example of the difference this
makes is full motion video sequences in computer games, which, because they can be pre-rendered, look much
smoother than the games themselves.
Whether a mesh is considered low poly also depends on factors such as:
The time the meshes were designed and for what hardware
The detail required in the final mesh
The shape and properties of the object in question
As computing power inevitably increases, the number of polygons that can be used increases too. For example,
Super Mario 64 would be considered low poly today, but was considered a stunning achievement when it was
released in 1996. Similarly, in 2009, using hundreds of polygons on a leaf in the background of a scene would be
considered high poly, but using that many polygons on the main character would be considered low poly.
Marching cubes
Marching cubes is a computer graphics algorithm,
published in the 1987 SIGGRAPH proceedings by
Lorensen and Cline,[1] for extracting a polygonal mesh
of an isosurface from a three-dimensional scalar field
(sometimes called voxels). This paper has been one of the
most cited papers in the computer graphics field. The
applications of this algorithm are mainly concerned
with medical visualizations such as CT and MRI scan
data images, and special effects or 3-D modelling with
what is usually called metaballs or other metasurfaces.
An analogous two-dimensional method is called the
marching squares algorithm.
History
The algorithm was developed by William E. Lorensen
and Harvey E. Cline as a result of their research for
General Electric. At General Electric they worked on a
way to efficiently visualize data from CT and MRI
devices.
Head and cerebral structures (hidden) extracted from 150 MRI slices
using marching-cubes (about 150,000 triangles)
Algorithm
The algorithm proceeds through the scalar field, taking eight neighbor locations at a time (thus forming an imaginary
cube), then determining the polygon(s) needed to represent the part of the isosurface that passes through this cube.
The individual polygons are then fused into the desired surface.
This is done by creating an index to a precalculated array of 256 possible polygon configurations (28=256) within the
cube, by treating each of the 8 scalar values as a bit in an 8-bit integer. If the scalar's value is higher than the
iso-value (i.e., it is inside the surface) then the appropriate bit is set to one, while if it is lower (outside), it is set to
zero. The final value after all 8 scalars are checked, is the actual index to the polygon indices array.
Finally each vertex of the generated polygons is placed on the appropriate position along the cube's edge by linearly
interpolating the two scalar values that are connected by that edge.
The gradient of the scalar field at each grid point is also the normal vector of a hypothetical isosurface passing through
that point. Therefore, we may interpolate these normals along the edges of each cube to find the normals of the
generated vertices which are essential for shading the resulting mesh with some illumination model.
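The index construction and edge interpolation described above can be sketched as follows. The corner-to-bit ordering here is one possible convention; a full implementation pairs the index with matching precalculated edge and triangle tables:

```python
def cube_index(scalars, iso):
    """Treat each of the cube's 8 corner samples as one bit: set the bit
    when the sample is above the iso-value, giving an index 0..255 into
    the precalculated array of 2^8 = 256 polygon configurations."""
    index = 0
    for bit, value in enumerate(scalars):
        if value > iso:
            index |= 1 << bit
    return index

def edge_vertex(p1, p2, v1, v2, iso):
    """Place a generated vertex on a cube edge by linearly interpolating
    the two corner samples v1 and v2 connected by that edge."""
    t = (iso - v1) / (v2 - v1)
    return tuple(a + t * (b - a) for a, b in zip(p1, p2))
```

A cube whose top four corners are inside the surface yields index 0b11110000 = 240, and an edge running from value 0 to 1 crossed at iso-value 0.25 places the vertex a quarter of the way along.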
Patent issues
The marching cubes algorithm is claimed by anti-software patent advocates as a prime example in the graphics field
of the woes of patenting software[citation needed]. An implementation was patented (United States Patent 4,710,876)
despite being a relatively obvious solution to the surface-generation problem, they claim. Another similar algorithm
was developed, called marching tetrahedra, in order to circumvent the patent as well as solve a minor ambiguity
problem of marching cubes with some cube configurations. This patent expired in 2005, and it is now legal for the
graphics community to use it without royalties since more than 17 years have passed from its issue date (December
1, 1987).
Sources
[1] William E. Lorensen, Harvey E. Cline: Marching Cubes: A high resolution 3D surface construction algorithm. In: Computer Graphics, Vol.
21, Nr. 4, July 1987
External links
Lorensen, W. E.; Cline, Harvey E. (1987). "Marching cubes: A high resolution 3d surface construction
algorithm". ACM Computer Graphics 21 (4): 163–169. doi: 10.1145/37402.37422 (http://dx.doi.org/10.1145/
37402.37422).
Nielson, G. M.; Hamann, Bernd (1991). "The asymptotic decider: resolving the ambiguity in marching cubes"
(http://dl.acm.org/citation.cfm?id=949621). Proc. 2nd conference on Visualization (VIS '91): 83–91.
Montani, Claudio; Scateni, Riccardo; Scopigno, Roberto (1994). "A modified look-up table for implicit
disambiguation of Marching cubes". The Visual Computer 10 (6): 353–355. doi: 10.1007/BF01900830 (http://
dx.doi.org/10.1007/BF01900830).
Nielson, G. M.; Sung, Junwon (1997). "Interval volume tetrahedrization". 8th IEEE Visualization (VIS'97). doi:
10.1109/VISUAL.1997.663886 (http://dx.doi.org/10.1109/VISUAL.1997.663886).
Paul Bourke. "Overview and source code" (http://paulbourke.net/geometry/polygonise/).
Matthew Ward. "GameDev overview" (http://www.gamedev.net/page/resources/_/technical/
math-and-physics/overview-of-marching-cubes-algorithm-r424).
"Introductory description with additional graphics" (http://users.polytech.unice.fr/~lingrand/MarchingCubes/
algo.html).
"Marching Cubes" (http://www.marchingcubes.org/index.php/Marching_Cubes).. Some of the early history
of Marching Cubes.
Newman, Timothy S.; Yi, Hong (2006). "A survey of the marching cubes algorithm". Computers and Graphics
30 (5): 854–879. doi: 10.1016/j.cag.2006.07.021 (http://dx.doi.org/10.1016/j.cag.2006.07.021).
Stephan Diehl. "Specializing visualization algorithms" (http://extras.springer.com/2003/978-1-4020-7259-8/media/diehl/diehl.pdf).
Mesh parameterization
Given two surfaces with the same topology, a bijective mapping between them exists. On triangular mesh surfaces,
the problem of computing this mapping is called mesh parameterization. The parameter domain is the surface that
the mesh is mapped onto.
Parameterization was mainly used for mapping textures to surfaces. Recently, it has become a powerful tool for
many applications in mesh processing.[citation needed] Various techniques are developed for different types of
parameter domains with different parameterization properties.
Applications
Texture mapping
Normal mapping
Detail transfer
Morphing
Mesh completion
Mesh Editing
Mesh Databases
Remeshing
Surface fitting
Techniques
Barycentric Mappings
Differential Geometry Primer
Non-Linear Methods
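As an illustration of the barycentric mappings listed above (Tutte's embedding), the following sketch pins the boundary vertices to a convex polygon in the (u, v) plane and iteratively moves each interior vertex to the average of its neighbours. The tiny example mesh, uniform weights and fixed-point iteration are illustrative simplifications of a proper linear solve:

```python
def barycentric_parameterization(neighbors, boundary_uv, iterations=500):
    """Barycentric (Tutte) mapping: boundary vertices are fixed on a convex
    polygon; every interior vertex is repeatedly replaced by the average of
    its neighbours' (u, v) coordinates, converging to the unique solution
    of the corresponding linear system.
    `neighbors` maps vertex id -> list of adjacent vertex ids;
    `boundary_uv` maps boundary vertex id -> pinned (u, v) position."""
    uv = {v: boundary_uv.get(v, (0.0, 0.0)) for v in neighbors}
    interior = [v for v in neighbors if v not in boundary_uv]
    for _ in range(iterations):
        for v in interior:
            us = [uv[n][0] for n in neighbors[v]]
            vs = [uv[n][1] for n in neighbors[v]]
            uv[v] = (sum(us) / len(us), sum(vs) / len(vs))
    return uv
```

With a square boundary and one interior vertex connected to all four corners, the interior vertex lands at the centre of the square, a bijective flattening of the mesh as the theory promises for convex boundaries.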
Implementations
A fast and simple stretch-minimizing mesh parameterization [1]
Graphite [2]: ABF, ABF++, DPBF, LSCM, HLSCM, Barycentric, mean-value coordinates, L2 stretch, spectral
conformal, Periodic Global Parameterization, Constrained texture mapping, texture atlas generation
Linear discrete conformal parameterization [3]
Discrete Exponential Map [4]
External links
"Mesh Parameterization: theory and practice" [5]
Metaballs
Metaballs are, in computer graphics, organic-looking
n-dimensional objects. The technique for rendering metaballs was
invented by Jim Blinn in the early 1980s.
Each metaball is defined as a function in n dimensions (i.e. for three dimensions, f(x, y, z); three-dimensional
metaballs tend to be most common, with two-dimensional implementations popular as well). A thresholding value is
also chosen, to define a solid volume. Then, comparing the summed field
Σᵢ fᵢ(x, y, z)
with the threshold determines whether the volume enclosed by the surface defined by the metaballs is filled at
(x, y, z) or not. A typical function chosen for metaballs is
f(x, y, z) = 1 / ((x − x₀)² + (y − y₀)² + (z − z₀)²),
where (x₀, y₀, z₀) is the center of the metaball. However, because of the division this is computationally expensive,
and approximating polynomial falloff functions are often used instead. A desirable property of such a falloff curve is
smoothness: because the isosurface is the result of adding the fields together, its smoothness is dependent on the
smoothness of the falloff curves.
The simplest falloff curve that satisfies these criteria is:
There are a number of ways to render the metaballs to the screen. In the case of three dimensional metaballs, the two
most common are brute force raycasting and the marching cubes algorithm.
2D metaballs were a very common demo effect in the 1990s. The effect is also available as an XScreensaver module.
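The summed-field-and-threshold test can be sketched as follows, using the inverse-square falloff given as the typical function; the specific centres and threshold value are illustrative:

```python
def metaball_field(point, centers):
    """Summed inverse-square field of all metaballs at `point`: the field
    grows without bound near a centre and falls off with squared distance."""
    x, y, z = point
    total = 0.0
    for cx, cy, cz in centers:
        d2 = (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2
        total += 1.0 / d2 if d2 > 0 else float('inf')
    return total

def inside(point, centers, threshold):
    """With this falloff the field is largest near the centres, so a point
    is inside the blended surface when the sum meets the threshold."""
    return metaball_field(point, centers) >= threshold
```

Two metaballs two units apart blend together at their midpoint (field 1 + 1 = 2), while a distant point falls well below the threshold, which is precisely the merging behaviour that gives metaballs their organic look.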
External links
Implicit Surfaces article [2] by Paul Bourke
References
Intro to Metaballs [7]
Micropolygon
In 3D computer graphics, a micropolygon (or -polygon) is a polygon that is very small relative to the image being
rendered. Commonly, the size of a micropolygon is close to or even less than the area of a pixel. Micropolygons
allow a renderer to create a highly detailed image.
The concept of micropolygons was developed within the Reyes algorithm, in which geometric primitives are
tessellated at render time into a rectangular grid of tiny, four-sided polygons. A shader might fill each micropolygon
with a single color or assign colors on a per-vertex basis. Shaders that operate on micropolygons can process an
entire grid of them at once in SIMD fashion. This often leads to faster shader execution, and allows shaders to
compute spatial derivatives (e.g. for texture filtering) by comparing values at neighboring micropolygon vertices.
Furthermore, a renderer using micropolygons can support displacement mapping simply by perturbing micropolygon
vertices during shading. This displacement is usually not limited to the local surface normal but can be given an
arbitrary direction.
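A dicing heuristic of the kind described, choosing a grid resolution so that each four-sided micropolygon covers at most about a pixel, might be sketched as follows (the function name and area parameter are illustrative, not taken from the Reyes papers):

```python
import math

def dice_rate(patch_size_pixels, max_micropolygon_area=1.0):
    """Choose a tessellation grid (nu, nv) for a primitive whose projection
    spans w x h pixels, so that each quad in the grid covers at most
    `max_micropolygon_area` square pixels."""
    w, h = patch_size_pixels
    side = math.sqrt(max_micropolygon_area)   # max edge length in pixels
    nu = max(1, math.ceil(w / side))
    nv = max(1, math.ceil(h / side))
    return nu, nv
```

A patch projecting to 10 × 4 pixels dices into a 10 × 4 grid of roughly pixel-sized quads; tightening the area bound to a quarter pixel quadruples the quad count in each region, which is how a renderer trades shading cost for detail.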
Further reading
Robert L. Cook, Loren Carpenter, and Edwin Catmull. "The Reyes image rendering architecture." Computer
Graphics (SIGGRAPH '87 Proceedings), pp. 95–102.
Anthony A. Apodaca, Larry Gritz: Advanced RenderMan: Creating CGI for Motion Pictures, Morgan Kaufmann
Publishers, ISBN 1-55860-618-1
Morph target animation
Technique
The "morph target" is a deformed version of a shape. When applied to a human face, for example, the head is first
modelled with a neutral expression and a "target deformation" is then created for each other expression. When the
face is being animated, the animator can then smoothly morph (or "blend") between the base shape and one or
several morph targets. Typical examples of morph targets used in facial animation are a smiling mouth, a closed eye,
and a raised eyebrow, but the technique can also be used to morph between, for example, Dr Jekyll and Mr Hyde.
In this example from the open source project Sintel, four facial expressions have been defined as deformations of
the face geometry. The mouth is then animated by morphing between these deformations. Dozens of similar
controllers are used to animate the rest of the face.
When used for facial animation, these morph targets are often referred to as "key poses". The interpolations between
key poses when an
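Morph-target blending as described, interpolating a base shape toward one or several weighted targets, can be sketched as follows (the function name and per-vertex tuple layout are illustrative):

```python
def blend_morph_targets(base, targets, weights):
    """Morph-target ("blend shape") evaluation: for each vertex,
    result = base + sum_i weight_i * (target_i - base).
    `base` is a list of (x, y, z) vertices; each target is a deformed copy
    of the same vertex list; `weights` holds one blend weight per target."""
    result = []
    for vi, bv in enumerate(base):
        x, y, z = bv
        for target, w in zip(targets, weights):
            tx, ty, tz = target[vi]
            x += w * (tx - bv[0])
            y += w * (ty - bv[1])
            z += w * (tz - bv[2])
        result.append((x, y, z))
    return result
```

Setting a weight to 0 leaves the neutral base pose, 1 reproduces that key pose exactly, and intermediate weights give the smooth in-between expressions the animator scrubs through.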
References
External links
Morph target example using C# and Microsoft XNA (http://mvinetwork.co.uk/2011/02/02/xna-morph-targets/)
Motion capture
Motion capture is the process of recording the movement of objects or people. It is used in military, entertainment,
sports, and medical applications, and for validation of computer vision[1] and robotics. In filmmaking and video
game development, it refers to recording actions of human actors, and using that information to animate digital
character models in 2D or 3D computer animation. When it includes face and fingers or captures subtle expressions,
it is often referred to as performance capture. In many fields, motion capture is sometimes called motion tracking,
but in filmmaking and games, motion tracking more usually refers to match moving.

Motion capture of two pianists' fingers playing the same piece (slow motion, no sound).
In motion capture sessions, movements of one or more actors are sampled many times per second. Whereas early
techniques used images from multiple cameras to calculate 3D positions, often the purpose of motion capture is to
record only the movements of the actor, not his or her visual appearance. This animation data is mapped to a 3D
model so that the model performs the same actions as the actor. This process may be contrasted to the older
technique of rotoscope, such as the Ralph Bakshi 1978 The Lord of the Rings and 1981 American Pop animated
films where the motion of an actor was filmed, then the film used as a guide for the frame-by-frame motion of a
hand-drawn animated character.
Camera movements can also be motion captured so that a virtual camera in the scene will pan, tilt, or dolly around
the stage driven by a camera operator while the actor is performing, and the motion capture system can capture the
camera and props as well as the actor's performance. This allows the computer-generated characters, images and sets
to have the same perspective as the video images from the camera. A computer processes the data and displays the
movements of the actor, providing the desired camera positions in terms of objects in the set. Retroactively obtaining
camera movement data from the captured footage is known as match moving or camera tracking.
Advantages
Motion capture offers several advantages over traditional computer animation of a 3D model:
More rapid, even real time results can be obtained. In entertainment applications this can reduce the costs of
keyframe-based animation. The Hand Over technique is an example of this.
The amount of work does not vary with the complexity or length of the performance to the same degree as when
using traditional techniques. This allows many tests to be done with different styles or deliveries, giving a
different personality only limited by the talent of the actor.
Complex movement and realistic physical interactions such as secondary motions, weight and exchange of forces
can be easily recreated in a physically accurate manner.
The amount of animation data that can be produced within a given time is extremely large when compared to
traditional animation techniques. This contributes to both cost effectiveness and meeting production deadlines.
Potential for free software and third party solutions reducing its costs.
Disadvantages
Specific hardware and special software programs are required to obtain and process the data.
The cost of the software, equipment and personnel required can be prohibitive for small productions.
The capture system may have specific requirements for the space it is operated in, depending on camera field of
view or magnetic distortion.
When problems occur, it is easier to reshoot the scene rather than trying to manipulate the data. Only a few
systems allow real time viewing of the data to decide if the take needs to be redone.
The initial results are limited to what can be performed within the capture volume without extra editing of the
data.
Movement that does not follow the laws of physics cannot be captured.
Traditional animation techniques, such as added emphasis on anticipation and follow through, secondary motion
or manipulating the shape of the character, as with squash and stretch animation techniques, must be added later.
If the computer model has different proportions from the capture subject, artifacts may occur. For example, if a
cartoon character has large, over-sized hands, these may intersect the character's body if the human performer is
not careful with their physical motion.
Applications
Video games often use motion capture to animate
athletes, martial artists, and other in-game characters.[2]
This has been done since the Atari Jaguar CD-based
game Highlander: The Last of the MacLeods, released
in 1995.
Snow White and the Seven Dwarfs used an early form
of motion capture technology. Actors and actresses
would act out scenes and would be filmed. The
animators would then use the individual frames as a
guide to their drawings.
Movies use motion capture for CG effects, in some
cases replacing traditional cel animation, and for
completely computer-generated creatures, such as
Gollum, The Mummy, King Kong, Davy Jones from
Pirates of the Caribbean, the Na'vi from the film
Avatar, and Clu from Tron: Legacy. The Great Goblin,
the three Stone-trolls, many of the orcs and goblins in
the 2012 film The Hobbit: An Unexpected Journey, and
Smaug were created using motion capture.
Sinbad: Beyond the Veil of Mists was the first movie made primarily with motion capture, although many character
animators also worked on the film, which had a very limited release. Final Fantasy: The Spirits Within was the first
widely released movie made primarily with motion capture.

Motion Capture Performers at Centroid, Pinewood Studios
The Lord of the Rings: The Two Towers was the first feature film to utilize a real-time motion capture system. This
method streamed the actions of actor Andy Serkis into the computer generated skin of Gollum / Smeagol as it was
being performed.
Out of the three nominees for the 2006 Academy Award for Best Animated Feature, two of the nominees (Monster
House and the winner Happy Feet) used motion capture, and only Disney-Pixar's Cars was animated without motion
capture. In the ending credits of Pixar's film Ratatouille, a stamp appears labelling the film as "100% Pure Animation
- No Motion Capture!"
Motion capture has begun to be used extensively to produce films which attempt to simulate or approximate the look
of live-action cinema, with nearly photorealistic digital character models. The Polar Express used motion capture to
allow Tom Hanks to perform as several distinct digital characters (in which he also provided the voices). The 2007
adaptation of the saga Beowulf animated digital characters whose appearances were based in part on the actors who
provided their motions and voices. James Cameron's Avatar used this technique to create the Na'vi that inhabit
Pandora. The Walt Disney Company has produced Robert Zemeckis's A Christmas Carol using this technique. In
2007, Disney acquired Zemeckis' ImageMovers Digital (that produces motion capture films), but then closed it in
2011, after a string of failures.
Television series produced entirely with motion capture animation include Laflaque in Canada, Sprookjesboom and
Cafe de Wereld in The Netherlands, and Headcases in the UK.
Virtual Reality and Augmented Reality allow users to interact with digital content in real-time. This can be useful for
training simulations, visual perception tests, or performing virtual walk-throughs in a 3D environment. Motion
capture technology is frequently used in digital puppetry systems to drive computer generated characters in
real-time.
Gait analysis is the major application of motion capture in clinical medicine. Techniques allow clinicians to evaluate
human motion across several biometric factors, often while streaming this information live into analytical software.
During the filming of James Cameron's Avatar all of the scenes involving this process were directed in realtime
using Autodesk Motion Builder software to render a screen image which allowed the director and the actor to see
what they would look like in the movie, making it easier to direct the movie as it would be seen by the viewer. This
method allowed views and angles not possible from a pre-rendered animation. Cameron was so proud of his results
that he even invited Steven Spielberg and George Lucas on set to view the system in action.
In Marvel's The Avengers, Mark Ruffalo used motion capture so he could play his character the Hulk, rather than
have him be only CGI like previous films.
Optical systems
Optical systems utilize data captured from image sensors to triangulate the 3D position of a subject between one or
more cameras calibrated to provide overlapping projections. Data acquisition is traditionally implemented using
special markers attached to an actor; however, more recent systems are able to generate accurate data by tracking
surface features identified dynamically for each particular subject. Tracking a large number of performers or
expanding the capture area is accomplished by the addition of more cameras. These systems produce data with 3
degrees of freedom for each marker, and rotational information must be inferred from the relative orientation of three
or more markers; for instance shoulder, elbow and wrist markers providing the angle of the elbow. Newer hybrid
systems are combining inertial sensors with optical sensors to reduce occlusion, increase the number of users and
improve the ability to track without having to manually clean up data.
Passive markers
Passive optical systems use markers coated with a retroreflective material to reflect light that is generated near the
camera's lens. The camera's threshold can be adjusted so only the bright reflective markers will be sampled, ignoring
skin and fabric.
The centroid of the marker is estimated as a position within the two-dimensional image that is captured. The
grayscale value of each pixel can be used to provide sub-pixel accuracy by finding the centroid of the Gaussian.
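A common way to realize this sub-pixel estimate is an intensity-weighted average of the pixels belonging to the blob, which approximates locating the peak of the marker's roughly Gaussian intensity profile. A Python sketch (the function name and the toy 3x3 image are invented for illustration):

```python
import numpy as np

def marker_centroid(patch, threshold=0.0):
    """Estimate a marker's centre to sub-pixel accuracy.

    patch : 2-D grayscale array containing one bright marker blob.
    Pixels above `threshold` contribute in proportion to their
    grayscale value, approximating the centroid of the Gaussian.
    Returns (row, col) in fractional pixel coordinates.
    """
    weights = np.where(patch > threshold, patch, 0.0).astype(float)
    total = weights.sum()
    rows, cols = np.indices(patch.shape)
    return (rows * weights).sum() / total, (cols * weights).sum() / total

# A symmetric blob whose centre falls exactly on the middle pixel.
blob = np.array([[0, 1, 0],
                 [1, 4, 1],
                 [0, 1, 0]], dtype=float)
centre = marker_centroid(blob)
```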
An object with markers attached at known positions is used to calibrate
the cameras and obtain their positions and the lens distortion of each
camera is measured. If two calibrated cameras see a marker, a three-dimensional fix can be obtained. Typically a
system will consist of around 2 to 48 cameras. Systems of over three hundred cameras exist
to try to reduce marker swap. Extra cameras are required for full
coverage around the capture subject and multiple subjects.
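When two calibrated cameras both see a marker, the three-dimensional fix can be obtained by linear triangulation. The sketch below uses the standard direct linear transform (DLT); the projection matrices and image coordinates are hypothetical stand-ins for the data a real calibration would provide:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one marker seen by two cameras.

    P1, P2 : 3x4 camera projection matrices from calibration.
    x1, x2 : the marker centroid (u, v) in each camera's image.
    Returns the estimated 3-D marker position.
    """
    # Each observation contributes two linear constraints on the
    # homogeneous point X; the solution is the null vector of A.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

# Two toy cameras one unit apart, both viewing the point (0, 0, 5).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
fix = triangulate(P1, P2, (0.0, 0.0), (-0.2, 0.0))
```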
Active marker
Active optical systems triangulate positions by illuminating one LED at a time very quickly or multiple LEDs with
software to identify them by their relative positions, somewhat akin to celestial navigation. Rather than reflecting
back light that is generated externally, the markers themselves are powered to emit their own light. Since the inverse
square law provides one quarter of the power at two times the distance, this can increase the distances and volume for capture.
The TV series Stargate SG-1 produced episodes using an active optical system for the VFX, allowing the actor to
walk around props that would make motion capture difficult for other non-active optical systems.
ILM used active markers in Van Helsing to allow capture of Dracula's flying brides on very large sets, similar to
Weta's use of active markers in Rise of the Planet of the Apes. The power to each marker can be provided
sequentially in phase with the capture system providing a unique identification of each marker for a given capture
frame at a cost to the resultant frame rate. The ability to identify each marker in this manner is useful in realtime
applications. The alternative method of identifying markers is to do it algorithmically requiring extra processing of
the data.
Semi-passive imperceptible marker
One can reverse the traditional approach based on high speed cameras. Systems such as Prakash use inexpensive
multi-LED high speed projectors. The specially built multi-LED IR projectors optically encode the space. Instead of
retro-reflective or active light emitting diode (LED) markers, the system uses photosensitive marker tags to decode
the optical signals. By attaching tags with photo sensors to scene points, the tags can compute not only their own
locations, but also their own orientation, incident illumination, and reflectance. Microsoft's Kinect system, released
for the Xbox 360, projects an invisible infrared pattern for depth recovery and motion acquisition.

IR sensors can compute their location when lit by mobile multi-LED emitters, e.g. in a moving car. With an ID per
marker, these sensor tags can be worn under clothing and tracked at 500 Hz in broad daylight.
These tracking tags work in natural lighting conditions and can be imperceptibly embedded in attire or other objects.
The system supports an unlimited number of tags in a scene, with each tag uniquely identified to eliminate marker
reacquisition issues. Since the system eliminates a high speed camera and the corresponding high-speed image
stream, it requires significantly lower data bandwidth. The tags also provide incident illumination data which can be
used to match scene lighting when inserting synthetic elements. The technique appears ideal for on-set motion
capture or real-time broadcasting of virtual sets but has yet to be proven.
Underwater motion capture
The vital part of the system, the underwater camera, has a waterproof housing. The housing has a finish that
withstands corrosion and chlorine, which makes it perfect for use in basins and swimming pools. The underwater
camera comes with a cyan light strobe instead of the typical IR light for minimum falloff under water. Since the
index of refraction of water differs from that of air, a special internal and external calibration has been implemented.
Measurement volume
An underwater camera is typically able to measure 15-20 meters depending on the water quality and the type of
marker used. Unsurprisingly, the best range is achieved when the water is clear, and, as always, the measurement
volume is also dependent on the number of cameras. A range of underwater markers are available for different
circumstances.
Tailored
Different pools require different mountings and fixtures. Therefore all underwater motion capture systems are
uniquely tailored to suit each specific pool installment. For cameras placed in the center of the pool, specially
designed tripods, using suction cups, are provided.
Markerless
Emerging techniques and research in computer vision are leading to the rapid development of the markerless
approach to motion capture. Markerless systems such as those developed at Stanford University, the University of
Maryland, MIT, and the Max Planck Institute, do not require subjects to wear special equipment for tracking. Special
computer algorithms are designed to allow the system to analyze multiple streams of optical input and identify
human forms, breaking them down into constituent parts for tracking. Applications of this technology extend deeply
into popular imagination about the future of computing technology. Several commercial solutions for markerless
motion capture have also been introduced, including systems by Organic Motion[3] and Xsens.[4] ESC Entertainment,
a subsidiary of Warner Brothers Pictures created specially to enable virtual cinematography, including photorealistic
digital look-alikes for filming The Matrix Reloaded and The Matrix Revolutions, used a technique called Universal
Capture that utilized a seven-camera setup and tracked the optical flow of all pixels over all the 2-D planes of the
cameras for motion, gesture and facial expression capture, leading to photorealistic results.
Traditional systems
Traditionally, markerless optical motion tracking is used to track various objects, including airplanes, launch
vehicles, missiles and satellites. Many such optical motion tracking applications occur outdoors, requiring
differing lens and camera configurations. High resolution images of the target being tracked can thereby provide
more information than just motion data. The image obtained from NASA's long-range tracking system on space
shuttle Challenger's fatal launch provided crucial evidence about the cause of the accident. Optical tracking systems
are also used to identify known spacecraft and space debris, despite the disadvantage compared to radar that the
objects must reflect or emit sufficient light.
An optical tracking system typically consists of three subsystems: the optical imaging system, the mechanical
tracking platform, and the tracking computer.
The optical imaging system is responsible for converting the light from the target area into a digital image that the
tracking computer can process. Depending on the design of the optical tracking system, the optical imaging system
can vary from as simple as a standard digital camera to as specialized as an astronomical telescope on the top of a
mountain. The specification of the optical imaging system determines the upper-limit of the effective range of the
tracking system.
The mechanical tracking platform holds the optical imaging system and is responsible for manipulating the optical
imaging system in such a way that it always points to the target being tracked. The dynamics of the mechanical
tracking platform combined with the optical imaging system determines the tracking system's ability to keep the lock
on a target that changes speed rapidly.
The tracking computer is responsible for capturing the images from the optical imaging system, analyzing the image
to extract target position and controlling the mechanical tracking platform to follow the target. There are several
challenges. First, the tracking computer has to be able to capture the image at a relatively high frame rate. This places
a requirement on the bandwidth of the image capturing hardware. The second challenge is that the image processing
software has to be able to extract the target image from its background and calculate its position. Several textbook
image processing algorithms are designed for this task, but each has its own limitations. This problem can be
simplified if the tracking system can expect certain characteristics that are common in all the targets it will track. The
next problem down the line is to control the tracking platform to follow the target. This is a typical control system
design problem rather than a challenge, which involves modeling the system dynamics and designing controllers to
control it. This will however become a challenge if the tracking platform the system has to work with is not designed
for real-time and highly dynamic applications, in which case the tracking software has to compensate for the
mechanical and software imperfections of the tracking platform.
Traditionally, optical tracking systems involve highly customized optical and electrical subsystems. The
software that runs such systems is also customized for the corresponding hardware components. Because of the
real-time nature of the application and the limited size of the market, commercializing optical tracking software poses
a big challenge. One example of such software is OpticTracker, which controls computerized telescopes to track
moving objects at great distances, such as planes and satellites.
Non-optical systems
Inertial systems
Inertial Motion Capture technology is based on miniature inertial sensors, biomechanical models and sensor fusion
algorithms. The motion data of the inertial sensors (inertial guidance system) is often transmitted wirelessly to a
computer, where the motion is recorded or viewed. Most inertial systems use gyroscopes to measure rotational rates.
These rotations are translated to a skeleton in the software. Much like optical markers, the more gyros the more
natural the data. No external cameras, emitters or markers are needed for relative motions, although they are required
to give the absolute position of the user if desired. Inertial motion capture systems capture the full six degrees of
freedom body motion of a human in real-time and can give limited direction information if they include a magnetic
bearing sensor, although these are much lower resolution and susceptible to electromagnetic noise. Benefits of using
Inertial systems include: no solving, portability, and large capture areas. Disadvantages include 'floating' where the
user looks like a marionette on strings, lower positional accuracy and positional drift which can compound over time.
These systems are similar to the Wii controllers but are more sensitive and have greater resolution and update rates.
They can accurately measure the direction to the ground to within a degree. The popularity of inertial systems is
rising amongst independent game developers, mainly because of the quick and easy set up resulting in a fast
pipeline. A range of suits are now available from various manufacturers and base prices range from $5,000 to
$80,000 USD. Ironically, the $5,000 systems use newer chips and sensors and are wireless, taking advantage of the
next generation of inertial sensors and wireless devices.
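The positional drift mentioned above arises because rates from the inertial sensors must be integrated over time, so even a tiny constant bias accumulates into a growing error. A one-dimensional Python illustration (the sample rate and bias values are made up):

```python
import numpy as np

def integrate_gyro(rates, dt):
    """Integrate gyroscope angular-rate samples into a heading angle.

    rates : yaw-rate samples in rad/s; dt : sample interval in seconds.
    A real system fuses gyros, accelerometers and a biomechanical
    model; this sketch only shows the bare integration step.
    """
    return np.cumsum(np.asarray(rates, float) * dt)

dt = 0.01                        # 100 Hz sampling
true_rate = np.zeros(1000)       # the subject is not rotating at all
bias = 0.002                     # small constant gyro bias in rad/s
heading = integrate_gyro(true_rate + bias, dt)
# After 10 seconds the estimate has drifted by bias * 10 s = 0.02 rad,
# even though the true heading never changed.
```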
Mechanical motion
Mechanical motion capture systems directly track body joint angles and are often referred to as exo-skeleton motion
capture systems, due to the way the sensors are attached to the body. Performers attach the skeletal-like structure
to their body, and as they move, so do the articulated mechanical parts, measuring the performer's relative motion.
Mechanical motion capture systems are real-time, relatively low-cost, free-of-occlusion, and wireless (untethered)
systems that have unlimited capture volume. Typically, they are rigid structures of jointed, straight metal or plastic
rods linked together with potentiometers that articulate at the joints of the body. These suits tend to be in the $25,000
to $75,000 range plus an external absolute positioning system. Some suits provide limited force feedback or Haptic
input.
Magnetic systems
Magnetic systems calculate position and orientation by the relative magnetic flux of three orthogonal coils on both
the transmitter and each receiver. The relative intensity of the voltage or current of the three coils allows these
systems to calculate both range and orientation by meticulously mapping the tracking volume. The sensor output is
6DOF, which provides useful results obtained with two-thirds the number of markers required in optical systems;
one on upper arm and one on lower arm for elbow position and angle. The markers are not occluded by nonmetallic
objects but are susceptible to magnetic and electrical interference from metal objects in the environment, like rebar
(steel reinforcing bars in concrete) or wiring, which affect the magnetic field, and electrical sources such as monitors,
lights, cables and computers. The sensor response is nonlinear, especially toward edges of the capture area. The
wiring from the sensors tends to preclude extreme performance movements. The capture volumes for magnetic
systems are dramatically smaller than they are for optical systems. With the magnetic systems, there is a distinction
between AC and DC systems: one uses square pulses, the other uses sine wave pulses.
Related techniques
Facial motion capture
Most traditional motion capture hardware vendors provide for some type of low resolution facial capture utilizing
anywhere from 32 to 300 markers with either an active or passive marker system. All of these solutions are limited
by the time it takes to apply the markers, calibrate the positions and process the data. Ultimately the technology also
limits their resolution and raw output quality levels.
High fidelity facial motion capture, also known as performance capture, is the next generation of fidelity and is
utilized to record the more complex movements in a human face in order to capture higher degrees of emotion.
Facial capture is currently arranging itself in several distinct camps, including traditional motion capture data, blend
shaped based solutions, capturing the actual topology of an actor's face, and proprietary systems.
The two main techniques are stationary systems with an array of cameras capturing the facial expressions from
multiple angles and using software such as the stereo mesh solver from OpenCV to create a 3D surface mesh, or to
use light arrays as well to calculate the surface normals from the variance in brightness as the light source, camera
position or both are changed. These techniques tend to be only limited in feature resolution by the camera resolution,
apparent object size and number of cameras. If the user's face covers 50 percent of the working area of the camera and the
camera has megapixel resolution, then sub-millimeter facial motions can be detected by comparing frames. Recent
work is focusing on increasing the frame rates and doing optical flow to allow the motions to be retargeted to other
computer generated faces, rather than just making a 3D Mesh of the actor and their expressions.
RF positioning
RF (radio frequency) positioning systems are becoming more viable as higher frequency RF devices allow greater
precision than older RF technologies such as traditional radar. The speed of light is 30 centimeters per nanosecond
(billionth of a second), so a 10 gigahertz (billion cycles per second) RF signal enables an accuracy of about 3
centimeters. By measuring amplitude to a quarter wavelength, it is possible to improve the resolution down to about
8mm. To achieve the resolution of optical systems, frequencies of 50 gigahertz or higher are needed, which are
almost as line of sight and as easy to block as optical systems. Multipath and reradiation of the signal are likely to
cause additional problems, but these technologies will be ideal for tracking larger volumes with reasonable accuracy,
since the required resolution at 100 meter distances is not likely to be as high. Many RF scientists believe that radio
frequency will never produce the accuracy required for motion capture.
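The resolution figures above follow directly from the carrier wavelength, as the small calculation below shows (the helper name is invented; it simply divides the speed of light by the frequency):

```python
C_M_PER_NS = 0.3  # speed of light: 30 centimetres per nanosecond

def wavelength_m(freq_ghz):
    """Wavelength in metres of an RF carrier at freq_ghz gigahertz."""
    return C_M_PER_NS / freq_ghz  # metres per cycle

ten_ghz = wavelength_m(10)       # 0.03 m: the 3 cm wavelength above
quarter = ten_ghz / 4            # 0.0075 m: roughly the quoted 8 mm
fifty_ghz = wavelength_m(50)     # 6 mm: behaves almost like light
```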
Non-traditional systems
An alternative approach was developed where the actor is given an unlimited walking area through the use of a
rotating sphere, similar to a hamster ball, which contains internal sensors recording the angular movements,
removing the need for external cameras and other equipment. Even though this technology could potentially lead to
much lower costs for motion capture, the basic sphere is only capable of recording a single continuous direction.
Additional sensors worn on the person would be needed to record anything more.
Another alternative is using a 6DOF (Degrees of freedom) motion platform with an integrated omni-directional
treadmill with high resolution optical motion capture to achieve the same effect. The captured person can walk in an
unlimited area, negotiating different uneven terrains. Applications include medical rehabilitation for balance training,
biomechanical research and virtual reality.
References
[1] David Noonan, Peter Mountney, Daniel Elson, Ara Darzi and Guang-Zhong Yang. "A Stereoscopic Fibroscope for Camera Motion and 3D
Depth Recovery During Minimally Invasive Surgery". In Proc. ICRA 2009, pp. 4463-4468.
<http://www.sciweavers.org/external.php?u=http%3A%2F%2Fwww.doc.ic.ac.uk%2F%7Epmountne%2Fpublications%2FICRA%25202009.pdf&p=ieee>
[2] Jon Radoff, "Anatomy of an MMORPG", http://radoff.com/blog/2008/08/22/anatomy-of-an-mmorpg/
[3] http://www.newsweek.com/video/2007/03/06/videogames-organic-motion.html
[4] http://venturebeat.com/2009/08/04/xsens-technologies-captures-every-human-motion-with-body-suit/
Newell's algorithm
Newell's Algorithm is a 3D computer graphics procedure for elimination of polygon cycles in the depth sorting
required in hidden surface removal. It was proposed in 1972 by brothers Martin Newell and Dick Newell, and Tom
Sancha, while all three were working at CADCentre.
In the depth sorting phase of hidden surface removal, if two polygons have no overlapping extents (extreme
minimum and maximum values) in the x, y, and z directions, then they can be easily sorted. If two polygons, Q and P,
do have overlapping extents in the Z direction, then it is possible that cutting is necessary.
In that case Newell's algorithm tests the following:
1. Test for Z overlap; implied in the selection of the face Q from the sort list
2. The extreme coordinate values in X of the two faces do not overlap (minimax test in X)
3. The extreme coordinate values in Y of the two faces do not overlap (minimax test in Y)
4. All vertices of P lie deeper than the plane of Q
5. All vertices of Q lie closer to the viewpoint than the plane of P
6. The rasterisation of P and Q do not overlap
Note that the tests are given in order of increasing computational difficulty.
Note also that the polygons must be planar.
If the tests are all false, then the polygons must be split. Splitting is accomplished by selecting one polygon and
cutting it along the line of intersection with the other polygon. The above tests are again performed, and the
algorithm continues until all polygons pass the above tests.
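The ordered tests above can be sketched in Python as follows. This is an illustrative partial implementation (function names invented): it covers tests 2 to 5 and omits test 6, the exact rasterisation-overlap check, as well as the splitting step:

```python
import numpy as np

def plane(poly):
    """Unit normal n and offset d of a planar polygon (n . x = d)."""
    p0, p1, p2 = (np.asarray(p, float) for p in poly[:3])
    n = np.cross(p1 - p0, p2 - p0)
    n = n / np.linalg.norm(n)
    return n, n.dot(p0)

def no_split_needed(P, Q, view=(0.0, 0.0, 1.0)):
    """Newell's ordered tests for polygons with overlapping Z extents.

    Returns True as soon as one test passes, meaning P can safely be
    drawn before Q without splitting. The viewer looks along -Z, so a
    larger z value is closer. Cheapest tests run first; test 1 (the Z
    overlap) is assumed to have already been established by the sort.
    """
    P, Q, view = np.asarray(P, float), np.asarray(Q, float), np.asarray(view)
    # Tests 2 and 3: minimax tests in X and Y.
    for axis in (0, 1):
        if P[:, axis].max() <= Q[:, axis].min() or Q[:, axis].max() <= P[:, axis].min():
            return True
    # Test 4: all vertices of P lie deeper than the plane of Q.
    n, d = plane(Q)
    if n.dot(view) < 0:
        n, d = -n, -d            # orient Q's normal toward the viewer
    if np.all(P.dot(n) <= d + 1e-9):
        return True
    # Test 5: all vertices of Q lie closer to the viewer than P's plane.
    n, d = plane(P)
    if n.dot(view) < 0:
        n, d = -n, -d
    return bool(np.all(Q.dot(n) >= d - 1e-9))

# Two overlapping parallel squares: P (at z=0) lies behind Q's plane,
# so test 4 passes and no split is required.
P = [[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]]
Q = [[0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1]]
```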
References
Sutherland, Ivan E.; Sproull, Robert F.; Schumacker, Robert A. (1974), "A characterization of ten hidden-surface
algorithms", Computing Surveys 6 (1): 1-55, doi:10.1145/356625.356626 [1].
Newell, M. E.; Newell, R. G.; Sancha, T. L. (1972), "A new approach to the shaded picture problem", Proc. ACM
National Conference, pp. 443-450.
References
[1] http://dx.doi.org/10.1145%2F356625.356626
Non-uniform rational B-spline

History
Development of NURBS began in the 1950s by engineers who were in need of a mathematically precise
representation of freeform surfaces like those used for ship hulls, aerospace exterior surfaces, and car bodies, which
could be exactly reproduced whenever technically needed. Prior representations of this kind of surface only existed
as a single physical model created by a designer.
Three-dimensional NURBS surfaces can have complex, organic shapes. Control points
influence the directions the surface takes. The outermost square below delineates the X/Y
extents of the surface.
A NURBS curve.
Use
NURBS are commonly used in computer-aided design
(CAD), manufacturing (CAM), and engineering (CAE)
and are part of numerous industry wide standards, such
as IGES, STEP, ACIS, and PHIGS. NURBS tools are
also found in various 3D modelling and animation
software packages.
They can be efficiently handled by the computer
programs and yet allow for easy human interaction.
NURBS surfaces are functions of two parameters mapping to a surface in three-dimensional space. The shape of the
surface is determined by control points.

Motoryacht design.
NURBS surfaces can represent simple geometrical
shapes in a compact form. T-splines and subdivision surfaces are more suitable for complex organic shapes because
they reduce the number of control points twofold in comparison with the NURBS surfaces.
In general, editing NURBS curves and surfaces is highly intuitive and predictable. Control points are always either
connected directly to the curve/surface, or act as if they were connected by a rubber band. Depending on the type of
user interface, editing can be realized via an element's control points, which are most obvious and common for
Bézier curves, or via higher level tools such as spline modeling or hierarchical editing.
A surface under construction, e.g. the hull of a motor yacht, is usually composed of several NURBS surfaces known
as patches. These patches should be fitted together in such a way that the boundaries are invisible. This is
mathematically expressed by the concept of geometric continuity.
Higher-level tools exist which benefit from the ability of NURBS to create and establish geometric continuity of
different levels:
Positional continuity (G0)
holds whenever the end positions of two curves or surfaces are coincidental. The curves or surfaces may still
meet at an angle, giving rise to a sharp corner or edge and causing broken highlights.
Tangential continuity (G1)
requires the end vectors of the curves or surfaces to be parallel and pointing the same way, ruling out sharp
edges. Because highlights falling on a tangentially continuous edge are always continuous and thus look
natural, this level of continuity can often be sufficient.
Curvature continuity (G2)
further requires the end vectors to be of the same length and rate of length change. Highlights falling on a
curvature-continuous edge do not display any change, causing the two surfaces to appear as one. This can be
visually recognized as perfectly smooth. This level of continuity is very useful in the creation of models that
require many bi-cubic patches composing one continuous surface.
Geometric continuity mainly refers to the shape of the resulting surface; since NURBS surfaces are functions, it is
also possible to discuss the derivatives of the surface with respect to the parameters. This is known as parametric
continuity. Parametric continuity of a given degree implies geometric continuity of that degree.
First- and second-level parametric continuity (C0 and C1) are for practical purposes identical to positional and
tangential (G0 and G1) continuity. Third-level parametric continuity (C2), however, differs from curvature
continuity in that its parameterization is also continuous. In practice, C2 continuity is easier to achieve if uniform
B-splines are used.
The definition of the continuity C^n requires that the nth derivatives of the curve/surface (d^n C(u) / du^n) are equal
at a joint.[1] Note that the (partial) derivatives of curves and surfaces are vectors that have a direction and a
magnitude; both should be equal.
Highlights and reflections can reveal the perfect smoothing, which is otherwise practically impossible to achieve
without NURBS surfaces that have at least G2 continuity. This same principle is used as one of the surface
evaluation methods whereby a ray-traced or reflection-mapped image of a surface with white stripes reflecting on it
will show even the smallest deviations on a surface or set of surfaces. This method is derived from car prototyping
wherein surface quality is inspected by checking the quality of reflections of a neon-light ceiling on the car surface.
This method is also known as "Zebra analysis".
Technical specifications
A NURBS curve is defined by its order, a set of weighted control points, and a knot vector. NURBS curves and
surfaces are generalizations of both B-splines and Bézier curves and surfaces, the primary difference being the
weighting of the control points, which makes NURBS curves rational (non-rational B-splines are a special case of
rational B-splines). Whereas Bézier curves evolve in only one parametric direction, usually called s or u, NURBS
surfaces evolve in two parametric directions, called s and t or u and v.
By evaluating a Bézier or a NURBS
curve at various values of the
parameter, the curve can be
represented in Cartesian two- or
three-dimensional space. Likewise, by
evaluating a NURBS surface at various
values of the two parameters, the
surface can be represented in Cartesian
space.
NURBS curves and surfaces are useful
for a number of reasons:
They are invariant under affine
transformations:[2] operations like
rotations and translations can be
applied to NURBS curves and surfaces by applying them to their control points.
They offer one common mathematical form for both standard analytical shapes (e.g., conics) and free-form
shapes.
They provide the flexibility to design a large variety of shapes.
They reduce the memory consumption when storing shapes (compared to simpler methods).
They can be evaluated reasonably quickly by numerically stable and accurate algorithms.
In the next sections, NURBS is discussed in one dimension (curves). It should be noted that all of it can be
generalized to two or even more dimensions.
Control points
The control points determine the shape of the curve.[3] Typically, each point of the curve is computed by taking a
weighted sum of a number of control points. The weight of each point varies according to the governing parameter.
For a curve of degree d, the weight of any control point is only nonzero in d+1 intervals of the parameter space.
Within those intervals, the weight changes according to a polynomial function (basis functions) of degree d. At the
boundaries of the intervals, the basis functions go smoothly to zero, the smoothness being determined by the degree
of the polynomial.
As an example, the basis function of degree one is a triangle function. It rises from zero to one, then falls to zero
again. While it rises, the basis function of the previous control point falls. In that way, the curve interpolates between
the two points, and the resulting curve is a polygon, which is continuous, but not differentiable at the interval
boundaries, or knots. Higher degree polynomials have correspondingly more continuous derivatives. Note that
within the interval the polynomial nature of the basis functions and the linearity of the construction make the curve
perfectly smooth, so it is only at the knots that discontinuity can arise.
The fact that a single control point only influences those intervals where it is active is a highly desirable property,
known as local support. In modeling, it allows the changing of one part of a surface while keeping other parts equal.
Adding more control points allows better approximation to a given curve, although only a certain class of curves can
be represented exactly with a finite number of control points. NURBS curves also feature a scalar weight for each
control point. This allows for more control over the shape of the curve without unduly raising the number of control
points. In particular, it adds conic sections like circles and ellipses to the set of curves that can be represented
exactly. The term rational in NURBS refers to these weights.
The control points can have any dimensionality. One-dimensional points just define a scalar function of the
parameter. These are typically used in image processing programs to tune the brightness and color curves.
Three-dimensional control points are used abundantly in 3D modeling, where they are used in the everyday meaning
of the word 'point', a location in 3D space. Multi-dimensional points might be used to control sets of time-driven
values, e.g. the different positional and rotational settings of a robot arm. NURBS surfaces are just an application of
this. Each control 'point' is actually a full vector of control points, defining a curve. These curves share their degree
and the number of control points, and span one dimension of the parameter space. By interpolating these control
vectors over the other dimension of the parameter space, a continuous set of curves is obtained, defining the surface.
Knot vector
The knot vector is a sequence of parameter values that determines where and how the control points affect the
NURBS curve. The number of knots is always equal to the number of control points plus curve degree plus one (i.e.
number of control points plus curve order). The knot vector divides the parametric space in the intervals mentioned
before, usually referred to as knot spans. Each time the parameter value enters a new knot span, a new control point
becomes active, while an old control point is discarded. It follows that the values in the knot vector should be in
nondecreasing order, so (0, 0, 1, 2, 3, 3) is valid while (0, 0, 2, 1, 3, 3) is not.
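The two structural rules just stated — the count relation and the nondecreasing ordering — can be checked mechanically. A minimal sketch in Python (the function name is illustrative):

```python
def is_valid_knot_vector(knots, num_control_points, degree):
    """Check the two structural rules for a NURBS knot vector:
    1. len(knots) == number of control points + degree + 1
    2. knot values are in nondecreasing order."""
    if len(knots) != num_control_points + degree + 1:
        return False
    return all(a <= b for a, b in zip(knots, knots[1:]))

# The examples from the text: 6 knots fit 4 control points of degree 1.
print(is_valid_knot_vector([0, 0, 1, 2, 3, 3], 4, 1))  # True
print(is_valid_knot_vector([0, 0, 2, 1, 3, 3], 4, 1))  # False (2 > 1)
```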
Consecutive knots can have the same value. This then defines a knot span of zero length, which implies that two
control points are activated at the same time (and of course two control points become deactivated). This has impact
on continuity of the resulting curve or its higher derivatives; for instance, it allows the creation of corners in an
otherwise smooth NURBS curve. A number of coinciding knots is sometimes referred to as a knot with a certain
multiplicity. Knots with multiplicity two or three are known as double or triple knots. The multiplicity of a knot is
limited to the degree of the curve, since a higher multiplicity would split the curve into disjoint parts and would
leave control points unused. For first-degree NURBS, each knot is paired with a control point.
The knot vector usually starts with a knot that has multiplicity equal to the order. This makes sense, since this
activates the control points that have influence on the first knot span. Similarly, the knot vector usually ends with a
knot of that multiplicity. Curves with such knot vectors start and end in a control point.
The individual knot values are not meaningful by themselves; only the ratios of the difference between the knot
values matter. Hence, the knot vectors (0, 0, 1, 2, 3, 3) and (0, 0, 2, 4, 6, 6) produce the same curve. The positions of
the knot values influence the mapping of parameter space to curve space. Rendering a NURBS curve is usually
done by stepping with a fixed stride through the parameter range. By changing the knot span lengths, more sample
points can be used in regions where the curvature is high. Another use is in situations where the parameter value has
some physical significance, for instance if the parameter is time and the curve describes the motion of a robot arm.
The knot span lengths then translate into velocity and acceleration, which are essential to get right to prevent damage
to the robot arm or its environment. This flexibility in the mapping is what the phrase non-uniform in NURBS refers
to.
Necessary only for internal calculations, knots are usually not helpful to the users of modeling software. Therefore,
many modeling applications do not make the knots editable or even visible. It's usually possible to establish
reasonable knot vectors by looking at the variation in the control points. More recent versions of NURBS software
(e.g., Autodesk Maya and Rhinoceros 3D) allow for interactive editing of knot positions, but this is significantly less
intuitive than the editing of control points.
Order
The order of a NURBS curve defines the number of nearby control points that influence any given point on the
curve. The curve is represented mathematically by a polynomial of degree one less than the order of the curve.
Hence, second-order curves (which are represented by linear polynomials) are called linear curves, third-order curves
are called quadratic curves, and fourth-order curves are called cubic curves. The number of control points must be
greater than or equal to the order of the curve.
In practice, cubic curves are the ones most commonly used. Fifth- and sixth-order curves are sometimes useful,
especially for obtaining continuous higher order derivatives, but curves of higher orders are practically never used
because they lead to internal numerical problems and tend to require disproportionately large calculation times.
Construction of the basis functions
The B-spline basis functions used in the construction of NURBS curves are usually denoted N_{i,n}(u), in which i
corresponds to the i-th control point and n to the degree of the basis function. The definition is recursive in n. The
degree-0 functions N_{i,0} are piecewise constant functions: they are one on the corresponding knot span and zero
everywhere else. Effectively, N_{i,n} is a linear interpolation of N_{i,n-1} and N_{i+1,n-1}.[4] For example, N_{i,1} is a triangular
function, nonzero over two knot spans, rising from zero to one on the
first, and falling to zero on the second knot span. Higher order basis
functions are non-zero over correspondingly more knot spans and have
correspondingly higher degree. If u is the parameter, and k_i is the
i-th knot, we can write the interpolation as

N_{i,n}(u) = f_{i,n}(u) N_{i,n-1}(u) + g_{i+1,n}(u) N_{i+1,n-1}(u)

with the functions

f_{i,n}(u) = (u - k_i) / (k_{i+n} - k_i)   and   g_{i,n}(u) = (k_{i+n} - u) / (k_{i+n} - k_i).

The function f_{i,n} rises linearly from zero to one on the interval where N_{i,n-1} is non-zero, while g_{i+1,n} falls
from one to zero on the interval where N_{i+1,n-1} is non-zero. The functions f and g are positive where the lower
order basis functions are non-zero. By induction on n it follows that the basis functions are non-negative for all
values of n and u. This makes the computation of the basis functions numerically stable.
Again by induction, it can be proved that the sum of the basis functions for a particular value of the parameter is
unity. This is known as the partition of unity property of the basis functions.
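The recursion and the partition of unity property can be verified numerically. A small sketch in Python, using the usual convention that a term with a zero denominator is dropped (the function name is illustrative):

```python
def basis(i, n, u, knots):
    """B-spline basis function N_{i,n}(u) via the Cox-de Boor recursion."""
    if n == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    total = 0.0
    denom = knots[i + n] - knots[i]          # f_{i,n} term
    if denom > 0:
        total += (u - knots[i]) / denom * basis(i, n - 1, u, knots)
    denom = knots[i + n + 1] - knots[i + 1]  # g_{i+1,n} term
    if denom > 0:
        total += (knots[i + n + 1] - u) / denom * basis(i + 1, n - 1, u, knots)
    return total

# Degree-2 basis functions over a clamped knot vector for 5 control points.
knots = [0, 0, 0, 1, 2, 3, 3, 3]
for u in [0.0, 0.5, 1.5, 2.9]:
    s = sum(basis(i, 2, u, knots) for i in range(5))
    assert abs(s - 1.0) < 1e-12  # partition of unity
```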
The figures show the linear and the quadratic basis functions for the
knots {..., 0, 1, 2, 3, 4, 4.1, 5.1, 6.1, 7.1, ...}
One knot span is considerably shorter than the others. On that knot
span, the peak in the quadratic basis function is more distinct, reaching
almost one. Conversely, the adjoining basis functions fall to zero more
quickly. In the geometrical interpretation, this means that the curve
approaches the corresponding control point closely. In case of a double
knot, the length of the knot span becomes zero and the peak reaches
one exactly. The basis function is no longer differentiable at that point.
The curve will have a sharp corner if the neighbour control points are not collinear.
General form of a NURBS curve
Using the basis functions N_{i,n} and weights w_i, a NURBS curve takes the following form:[5]

C(u) = ( Σ_{i=1..k} N_{i,n}(u) w_i P_i ) / ( Σ_{i=1..k} N_{i,n}(u) w_i )

In this, k is the number of control points P_i and w_i are the corresponding weights. The denominator is a
normalizing factor that evaluates to one if all weights are one. This can be seen from the partition of unity property
of the basis functions. It is customary to write this as

C(u) = Σ_{i=1..k} R_{i,n}(u) P_i

with

R_{i,n}(u) = N_{i,n}(u) w_i / ( Σ_{j=1..k} N_{j,n}(u) w_j )

in which the functions R_{i,n} are known as the rational basis functions.
Knot insertion
As the term suggests, knot insertion inserts a knot into the knot vector. If the degree of the curve is n, then n - 1
control points are replaced by n new ones, computed so that the shape of the curve does not change.
A knot can be inserted multiple times, up to the maximum multiplicity of the knot. This is sometimes referred to as
knot refinement and can be achieved by an algorithm that is more efficient than repeated knot insertion.
Knot removal
Knot removal is the reverse of knot insertion. Its purpose is to remove knots and the associated control points in
order to get a more compact representation. Obviously, this is not always possible while retaining the exact shape of
the curve. In practice, a tolerance in the accuracy is used to determine whether a knot can be removed. The process is
used to clean up after an interactive session in which control points may have been added manually, or after
importing a curve from a different representation, where a straightforward conversion process leads to redundant
control points.
Degree elevation
A NURBS curve of a particular degree can always be represented by a NURBS curve of higher degree. This is
frequently used when combining separate NURBS curves, e.g. when creating a NURBS surface interpolating
between a set of NURBS curves or when unifying adjacent curves. In the process, the different curves should be
brought to the same degree, usually the maximum degree of the set of curves. The process is known as degree
elevation.
Curvature
The most important property in differential geometry is the curvature κ. It describes the local properties (edges,
corners, etc.) and the relation between the first and second derivative, and thus the precise curve shape. Having
determined the derivatives, it is easy to compute the curvature of a curve r(t) as

κ = |r′ × r″| / |r′|³

or, for an arc-length parametrization, directly from the second derivative as κ = |r″(s₀)|. The direct computation of
the curvature with these formulas is a big advantage of parameterized curves over their polygonal representations.
Example: a circle
Non-rational splines or Bézier curves may approximate a circle, but they cannot represent it exactly. Rational splines
can represent any conic section, including the circle, exactly. This representation is not unique, but one possibility
appears below:

x    y    z    weight
1    0    0    1
1    1    0    √2/2
0    1    0    1
-1   1    0    √2/2
-1   0    0    1
-1   -1   0    √2/2
0    -1   0    1
1    -1   0    √2/2
1    0    0    1

The order is three, since a circle is a quadratic curve and the spline's order is one more than the degree of its
piecewise polynomial segments. The knot vector is (0, 0, 0, π/2, π/2, π, π, 3π/2, 3π/2, 2π, 2π, 2π). The
circle is composed of four quarter circles, tied together with double knots. Although double knots in a third order
NURBS curve would normally result in loss of continuity in the first derivative, the control points are positioned in
such a way that the first derivative is continuous. In fact, the curve is infinitely differentiable everywhere, as it must
be if it exactly represents a circle.
The curve represents a circle exactly, but it is not exactly parametrized in the circle's arc length. This means, for
example, that the point at parameter t does not lie at (cos t, sin t) (except for the start, middle and end point of each
quarter circle, since the representation is symmetrical). This would be impossible, since the x coordinate of the circle
would provide an exact rational polynomial expression for cos t, which is impossible. The circle does make one
full revolution as its parameter t goes from 0 to 2π, but this is only because the knot vector was arbitrarily chosen
as multiples of π/2.
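The circle representation can be checked numerically. The sketch below evaluates the rational curve C(u) = Σ N_{i,2}(u) w_i P_i / Σ N_{i,2}(u) w_i with the nine control points on a square, weights alternating between 1 and √2/2, and knots at multiples of π/2, as in this example; every sample should land on the unit circle:

```python
import math

def basis(i, n, u, knots):
    """B-spline basis N_{i,n}(u) via the Cox-de Boor recursion."""
    if n == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    total = 0.0
    d = knots[i + n] - knots[i]
    if d > 0:
        total += (u - knots[i]) / d * basis(i, n - 1, u, knots)
    d = knots[i + n + 1] - knots[i + 1]
    if d > 0:
        total += (knots[i + n + 1] - u) / d * basis(i + 1, n - 1, u, knots)
    return total

h = math.pi / 2
knots = [0, 0, 0, h, h, 2*h, 2*h, 3*h, 3*h, 4*h, 4*h, 4*h]
pts = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0),
       (-1, -1), (0, -1), (1, -1), (1, 0)]
w = [1, math.sqrt(2) / 2] * 4 + [1]   # weights alternate 1, sqrt(2)/2

def circle_point(u):
    """Evaluate the rational (NURBS) curve at parameter u."""
    num_x = num_y = den = 0.0
    for i, (x, y) in enumerate(pts):
        b = basis(i, 2, u, knots) * w[i]
        num_x += b * x
        num_y += b * y
        den += b
    return num_x / den, num_y / den

for u in [0.0, 0.5, 1.2, 2.5, 3.9, 5.5]:
    x, y = circle_point(u)
    assert abs(x * x + y * y - 1.0) < 1e-9  # on the unit circle
```

Note that 12 knots = 9 control points + degree 2 + 1, consistent with the count relation given earlier.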
References
Les Piegl & Wayne Tiller: The NURBS Book, Springer-Verlag 1995–1997 (2nd ed.). The main reference for
Bézier, B-spline and NURBS; chapters on mathematical representation and construction of curves and surfaces,
interpolation, shape modification, programming concepts.
Dr. Thomas Sederberg, BYU NURBS, http://cagd.cs.byu.edu/~557/text/ch6.pdf
Dr. Lyle Ramshaw. Blossoming: A connect-the-dots approach to splines, Research Report 19, Compaq Systems
Research Center, Palo Alto, CA, June 1987, http://www.hpl.hp.com/techreports/Compaq-DEC/SRC-RR-19.
pdf
David F. Rogers: An Introduction to NURBS with Historical Perspective, Morgan Kaufmann Publishers 2001.
Good elementary book for NURBS and related issues.
Gershenfeld, Neil A. The nature of mathematical modeling. Cambridge university press, 1999.
Notes
[1] Foley, van Dam, Feiner & Hughes: Computer Graphics: Principles and Practice, section 11.2, Addison-Wesley 1996 (2nd ed.).
[2] David F. Rogers: An Introduction to NURBS with Historical Perspective, section 7.1
[3] Gershenfeld: The Nature of Mathematical Modeling, page 141, Cambridge University Press 1999
[4] Les Piegl & Wayne Tiller: The NURBS Book, chapter 2, sec. 2
[5] Les Piegl & Wayne Tiller: The NURBS Book, chapter 4, sec. 2
[6] Les Piegl & Wayne Tiller: The NURBS Book, chapter 4, sec. 4
[7] Les Piegl & Wayne Tiller: The NURBS Book, chapter 5
[8] L. Piegl, Modifying the shape of rational B-splines. Part 1: curves, Computer-Aided Design, Volume 21, Issue 8, October 1989, Pages 509–518, ISSN 0010-4485, http://dx.doi.org/10.1016/0010-4485(89)90059-6.
External links
Clear explanation of NURBS for non-experts (http://www.rw-designer.com/NURBS)
Interactive NURBS demo (http://geometrie.foretnik.net/files/NURBS-en.swf)
About Nonuniform Rational B-Splines - NURBS (http://www.cs.wpi.edu/~matt/courses/cs563/talks/nurbs.html)
An Interactive Introduction to Splines (http://ibiblio.org/e-notes/Splines/Intro.htm)
http://www.cs.bris.ac.uk/Teaching/Resources/COMS30115/all.pdf
http://homepages.inf.ed.ac.uk/rbf/CVonline/LOCAL_COPIES/AV0405/DONAVANIK/bezier.html
http://mathcs.holycross.edu/~croyden/csci343/notes.html (Lecture 33: Bézier Curves, Splines)
http://www.cs.mtu.edu/~shene/COURSES/cs3621/NOTES/notes.html
A free software package for handling NURBS curves, surfaces and volumes (http://octave.sourceforge.net/nurbs) in Octave and Matlab
Nonobtuse mesh
A nonobtuse triangle mesh is composed of a set of triangles in which every angle is less than or equal to 90°; we
call these triangles nonobtuse triangles. If each (triangle) face angle is strictly less than 90°, then the triangle mesh is
said to be acute. The immediate benefits of having a nonobtuse or acute mesh include more efficient and more
accurate geodesic computation on meshes using fast marching, and guaranteed validity for planar mesh embeddings
via discrete harmonic maps.
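Whether a single triangle is nonobtuse (or acute) can be tested directly from dot products of its edge vectors: the angle at a vertex is obtuse exactly when the dot product of the two edges leaving it is negative, and right when it is zero. A small illustrative sketch (a mesh-level check would simply apply this to every face):

```python
def classify_triangle(a, b, c):
    """Classify a 2D triangle by the sign of the edge dot products at
    each vertex. 'acute' and 'nonobtuse' triangles both satisfy the
    nonobtuse condition (every angle <= 90 degrees); 'nonobtuse' here
    marks the presence of an exact right angle."""
    def dot_at(p, q, r):
        # Dot product of the two edges leaving vertex p.
        return (q[0] - p[0]) * (r[0] - p[0]) + (q[1] - p[1]) * (r[1] - p[1])
    dots = [dot_at(a, b, c), dot_at(b, c, a), dot_at(c, a, b)]
    if any(d < 0 for d in dots):
        return 'obtuse'
    if any(d == 0 for d in dots):
        return 'nonobtuse'   # contains a right angle
    return 'acute'

print(classify_triangle((0, 0), (1, 0), (0, 1)))      # 'nonobtuse' (right angle)
print(classify_triangle((0, 0), (2, 0), (1, 2)))      # 'acute'
print(classify_triangle((0, 0), (3, 0), (0.1, 0.5)))  # 'obtuse'
```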
The first guaranteed nonobtuse mesh generation in 3D was introduced in Eurographics Symposium on Geometry
Processing [1] 2006 by Li [2] and Zhang [3].
References
Nonobtuse Remeshing and Mesh Decimation [4]
Guaranteed Nonobtuse Meshes via Constrained Optimizations [5]
References
[1] http:/ / www. geometryprocessing. org/
[2]
[3]
[4]
[5]
Normal (geometry)
In geometry, a normal is an object such as a line or vector that is
perpendicular to a given object. For example, in the two-dimensional
case, the normal line to a curve at a given point is the line
perpendicular to the tangent line to the curve at the point.
In the three-dimensional case a surface normal, or simply normal, to
a surface at a point P is a vector that is perpendicular to the tangent
plane to that surface at P. The word "normal" is also used as an
adjective: a line normal to a plane, the normal component of a force,
the normal vector, etc. The concept of normality generalizes to
orthogonality.
The concept has been generalized to differentiable manifolds of
arbitrary dimension embedded in a Euclidean space. The normal
vector space or normal space of a manifold at a point P is the set of
the vectors which are orthogonal to the tangent space at P. In the case
of differential curves, the curvature vector is a normal vector of special
interest.
The normal is often used in computer graphics to determine a surface's orientation toward a light source for flat
shading, or the orientation of each of the corners (vertices) to mimic a curved surface with Phong shading.
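For the flat-shading case mentioned above, the normal of a triangular face is conventionally obtained as the cross product of two of its edge vectors. A minimal sketch in pure Python (function name illustrative):

```python
def triangle_normal(p1, p2, p3):
    """Unit normal of the triangle (p1, p2, p3), computed as the cross
    product of two edge vectors; orientation follows the winding order."""
    ux, uy, uz = (p2[0] - p1[0], p2[1] - p1[1], p2[2] - p1[2])
    vx, vy, vz = (p3[0] - p1[0], p3[1] - p1[1], p3[2] - p1[2])
    nx, ny, nz = (uy * vz - uz * vy,   # cross product u x v
                  uz * vx - ux * vz,
                  ux * vy - uy * vx)
    length = (nx * nx + ny * ny + nz * nz) ** 0.5
    return (nx / length, ny / length, nz / length)

# A counter-clockwise triangle in the z = 0 plane has normal (0, 0, 1).
print(triangle_normal((0, 0, 0), (1, 0, 0), (0, 1, 0)))  # (0.0, 0.0, 1.0)
```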
Calculating a surface normal
For a plane given by the equation ax + by + cz + d = 0, the vector (a, b, c) is a normal. If a surface S is given
implicitly as the set of points (x, y, z) satisfying F(x, y, z) = 0, then a normal at a point on the surface is given by
the gradient ∇F(x, y, z), since the gradient at any point is perpendicular to the level set S.
For a surface S given explicitly as a function f(x, y) of the independent variables x, y
(e.g., z = f(x, y)), its normal can be found in at least two equivalent ways. The first
one is obtaining its implicit form F(x, y, z) = z - f(x, y) = 0, from which the normal follows readily as the
gradient

∇F(x, y, z) = (-∂f/∂x, -∂f/∂y, 1).

(Notice that the implicit form could be defined alternatively as F(x, y, z) = f(x, y) - z;
these two forms correspond to the interpretation of the surface being oriented upwards or downwards, respectively,
as a consequence of the difference in the sign of the partial derivative ∂F/∂z.) The second way of obtaining the
normal follows directly from the gradient of the explicit form, ∇f(x, y) = (∂f/∂x, ∂f/∂y);
by inspection,

∇F(x, y, z) = (-∇f(x, y), 1).
If a (possibly non-flat) surface S is parameterized by a system of curvilinear coordinates x(s, t), with s and t real
variables, then a normal is given by the cross product of the partial derivatives

n = ∂x/∂s × ∂x/∂t,

where ∂x/∂s and ∂x/∂t are tangent
vectors.
If a surface does not have a tangent plane at a point, it does not have a normal at that point either. For example, a
cone does not have a normal at its tip nor does it have a normal along the edge of its base. However, the normal to
the cone is defined almost everywhere. In general, it is possible to define a normal almost everywhere for a surface
that is Lipschitz continuous.
Transforming normals
When applying a transform to a surface it is sometimes convenient to derive normals for the resulting surface from
the original normals. All points P on the tangent plane are transformed to P′. We want to find n′ perpendicular to P′.
Let t be a vector on the tangent plane and M_l be the upper 3×3 matrix (the translation part of the transformation
does not apply to normal or tangent vectors). The transformed tangent is M_l t, and the transformed normal n′ = W n
must satisfy

0 = (W n) · (M_l t) = (W n)^T (M_l t) = n^T (W^T M_l) t

for every tangent t, which holds if W^T M_l = I, i.e. W = (M_l⁻¹)^T.
So use the inverse transpose of the linear transformation (the upper 3x3 matrix) when transforming surface normals.
Also note that the inverse transpose is equal to the original matrix if the matrix is orthonormal, i.e. purely rotational
with no scaling or shearing.
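The rule can be verified numerically. In the sketch below (NumPy), a non-uniform scale is applied: transforming the normal with the matrix itself breaks perpendicularity, while the inverse transpose preserves it:

```python
import numpy as np

M = np.diag([2.0, 1.0, 1.0])        # non-uniform scale (upper 3x3 part only)
n = np.array([1.0, 1.0, 0.0])       # normal of the plane x + y = 0
t = np.array([1.0, -1.0, 0.0])      # a tangent vector: n . t == 0

t2 = M @ t                           # transformed tangent
naive = M @ n                        # transforming n like a point: wrong
correct = np.linalg.inv(M).T @ n     # inverse transpose: right

print(np.dot(naive, t2))    # 3.0 -> no longer perpendicular
print(np.dot(correct, t2))  # 0.0 -> still perpendicular
```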
Hypersurfaces in n-dimensional space
The definition of a normal to a surface in three-dimensional space can be extended to (n-1)-dimensional
hypersurfaces in an n-dimensional space. A hypersurface may be locally defined implicitly as the set of points x
satisfying an equation F(x) = 0, where F is a given scalar function. If F is continuously
differentiable then the hypersurface is a differentiable manifold in the neighbourhood of the points where the
gradient is not null. At these points the normal vector space has dimension one and is generated by the gradient

∇F(x) = (∂F/∂x₁, …, ∂F/∂xₙ).
The normal line at a point of the hypersurface is defined only if the gradient is not null. It is the line passing through
the point and having the gradient as direction.
More generally, a variety may be defined as the common zeros of k differentiable functions f₁, …, f_k in n variables.
The Jacobian matrix of the variety is the k×n matrix whose i-th row is the gradient of f_i. By the implicit function
theorem, the variety is a manifold in the neighborhood of a point of it where the Jacobian matrix has rank k. At such
a point P, the normal vector space is the vector space generated by the values at P of the gradient vectors of the f_i.
In other words, a variety is defined as the intersection of k hypersurfaces, and the normal vector space at a point is
the vector space generated by the normal vectors of the hypersurfaces at the point.
The normal (affine) space at a point P of the variety is the affine subspace passing through P and generated by the
normal vector space at P.
These definitions may be extended verbatim to the points where the variety is not a manifold.
Example
Let V be the variety defined in the 3-dimensional space by the equations

x y = 0,  z = 0.

This variety is the union of the x-axis and the y-axis. At a point (a, 0, 0) with a ≠ 0, the rows of the Jacobian matrix
are (0, a, 0) and (0, 0, 1), so the normal affine space is the plane of equation x = a. Similarly, if b ≠ 0, the normal
plane at (0, b, 0) is the plane of equation y = b. At the point (0, 0, 0), the rows of the Jacobian matrix are (0, 0, 0)
and (0, 0, 1), so the normal vector space and the normal affine space both have dimension 1, and the normal affine
space is the z-axis.
Uses
References
External links
An explanation of normal vectors (http://msdn.microsoft.com/en-us/library/bb324491(VS.85).aspx) from Microsoft's MSDN
Clear pseudocode for calculating a surface normal (http://www.opengl.org/wiki/Calculating_a_Surface_Normal) from either a triangle or polygon.
Painter's algorithm
The painter's algorithm, also known as a priority fill, is one of the simplest solutions to the visibility problem in
3D computer graphics. When projecting a 3D scene onto a 2D plane, it is necessary at some point to decide which
polygons are visible, and which are hidden.
The name "painter's algorithm" refers to the technique employed by many painters of painting distant parts of a scene
before parts which are nearer, thereby covering some areas of distant parts. The painter's algorithm sorts all the
polygons in a scene by their depth and then paints them in this order, farthest to closest. It will paint over the parts
that are normally not visible, thus solving the visibility problem, at the cost of having painted invisible areas of
distant objects.
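The algorithm reduces to a depth sort followed by back-to-front drawing. A toy sketch in Python with a one-dimensional "framebuffer"; the scene data are purely illustrative:

```python
# Each polygon: (depth, colour, set of covered pixel columns).
polygons = [
    (10.0, 'mountain', {0, 1, 2, 3, 4, 5}),
    ( 5.0, 'meadow',   {1, 2, 3, 4}),
    ( 1.0, 'tree',     {2, 3}),
]

framebuffer = {}
# Paint farthest to closest: nearer polygons overwrite farther ones.
for depth, colour, pixels in sorted(polygons, key=lambda p: p[0], reverse=True):
    for px in pixels:
        framebuffer[px] = colour

print([framebuffer[i] for i in range(6)])
# ['mountain', 'meadow', 'tree', 'tree', 'meadow', 'mountain']
```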
The distant mountains are painted first, followed by the closer meadows; finally, the closest objects in this scene, the
trees, are painted.
References
Foley, James; van Dam, Andries; Feiner, Steven K.; Hughes, John F. (1990). Computer Graphics: Principles and
Practice. Reading, MA, USA: Addison-Wesley. p. 1174. ISBN 0-201-12110-7.
Parallax barrier
A parallax barrier is a device placed in front of an image source, such
as a liquid crystal display, to allow it to show a stereoscopic image or
multiscopic image without the need for the viewer to wear 3D glasses.
Placed in front of the normal LCD, it consists of a layer of material
with a series of precision slits, allowing each eye to see a different set
of pixels, so creating a sense of depth through parallax in an effect
similar to what lenticular printing produces for printed products and
lenticular lenses for other displays. A disadvantage of the technology is
that the viewer must be positioned in a well-defined spot to experience
the 3D effect. Another disadvantage is that the effective horizontal
pixel count viewable for each eye is reduced by one half; however,
there is research attempting to improve these limitations.
History
Comparison of parallax-barrier and lenticular autostereoscopic displays. Note: the figure is not to scale. Lenticules
can be modified and more pixels can be used to make automultiscopic displays.
The principle of the parallax barrier was independently invented by Auguste Berthier, who published first but
produced no practical results,[1] and by Frederic E. Ives, who made and exhibited the first known functional
autostereoscopic image in 1901. About two years later, Ives began selling specimen images as novelties, the first
known commercial use. Nearly a century later, Sharp developed the electronic flat-panel application of this old
technology to commercialization, briefly selling two laptops with the world's only 3D LCD screens. These displays
are no longer available from Sharp but are still manufactured and further developed by other companies such as
Tridelity and SpatialView. Similarly,
Hitachi has released the first 3D mobile phone for the Japanese market under distribution by KDDI. In 2009,
Fujifilm released the Fujifilm FinePix Real 3D W1 digital camera, which features a built-in autostereoscopic LCD
display measuring 2.8" diagonal. Nintendo has also implemented this technology on its latest portable gaming
console, the Nintendo 3DS.
Applications
In addition to films and computer games, the technique has found uses in areas such as molecular
modelling[citation needed] and airport security. It is also being used for the navigation system in the 2010-model
Range Rover, allowing
the driver to view (for example) GPS directions, while a passenger watches a movie. It is also used in the Nintendo
3DS hand-held game console and LG's Optimus 3D and Thrill smartphones, HTC's EVO 3D[2] as well as Sharp's
Galapagos Android SmartPhone series.
The technology is harder to apply for 3D television sets, because of the requirement for a wide range of possible
viewing angles. A Toshiba 21-inch 3D display uses parallax barrier technology with 9 pairs of images, to cover a
viewing angle of 30 degrees.
149
The geometry follows from similar triangles: light from a pair of adjacent left- and right-eye pixels of pitch p must
diverge, after passing a barrier slit, to the two eyes of separation e at viewing distance z. Accounting for refraction
in the material of refractive index n between the pixels and the barrier, the pixel-barrier separation d satisfies

d / n = p z / e

and

Therefore:

d = n p z / e

For a typical auto-stereoscopic display of pixel pitch 65 micrometers,
eye separation 63 mm, viewing distance 30 cm, and refractive index
1.52, the pixel-barrier separation needs to be about 470 micrometers.
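The quoted figure can be checked with the similar-triangles relation d = n·p·z/e, assuming that relation holds for the geometry described above:

```python
n = 1.52        # refractive index of the material between pixels and barrier
p = 65e-6       # pixel pitch (m)
z = 0.30        # viewing distance (m)
e = 63e-3       # eye separation (m)

d = n * p * z / e                 # pixel-barrier separation (m)
print(round(d * 1e6))             # -> about 470 micrometers
```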
Barrier position
Note that the parallax barrier may also be placed behind the LCD
pixels. In this case, light from a slit passes the left image pixel in the
left direction, and vice versa. This produces the same basic effect as a
front parallax barrier.
In a parallax barrier system, the left eye sees only half the pixels (that is to say, the left image pixels) and the same
is true for the right eye. The resolution of the display is therefore reduced, so it can be advantageous to make a
parallax barrier that can be switched on when 3D is needed and off when a 2D image is required. One method of
switching the parallax barrier on and off is to form it from a liquid crystal material; the parallax barrier can then be
created and removed in a similar way to the way an image is formed in a liquid crystal display.
The design requires a display that can switch fast enough to avoid
image flicker as the images swap each frame.
Crosstalk is the interference that exists between the left and right views
in a 3D display. In a display with high crosstalk, the left eye would be able
to see the right eye image faintly in the background. The perception of
crosstalk in stereoscopic displays has been studied widely, and it is widely
acknowledged that the presence of high levels of crosstalk in a
stereoscopic display is detrimental. The effects of crosstalk in an
image include ghosting and loss of contrast, loss of 3D effect and
depth resolution, and viewer discomfort. The visibility of crosstalk (ghosting) increases with increasing contrast and
increasing binocular parallax of the image. For example, a high-contrast stereoscopic image will exhibit more
ghosting on a particular stereoscopic display than will an image with low contrast.
Measurement
A technique to quantify the level of crosstalk from a 3D display involves measuring the percentage of light that
deviates from one view to the other.
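In one common formulation (the exact definition varies between authors), crosstalk is the leakage luminance expressed as a percentage of the intended-view luminance, with the display's black level subtracted from both. A sketch under that assumption, with illustrative numbers:

```python
def crosstalk_percent(leakage, signal, black_level=0.0):
    """Percentage of light from the unintended view, relative to the
    intended view, both measured above the display's black level."""
    return 100.0 * (leakage - black_level) / (signal - black_level)

# Illustrative luminance readings in cd/m^2 (not from a real measurement):
print(crosstalk_percent(leakage=4.0, signal=101.0, black_level=1.0))  # 3.0
```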
The crosstalk in a typical parallax-barrier based 3D system at the best
eye position might be 3%. Results of subjective tests carried out to
determine the image quality of 3D images conclude that for high
quality 3D, crosstalk should be 'no greater than around 1 to 2%'.
In the newest and most convenient design, commercial products like the Nintendo 3DS, HTC Evo 3D, and LG
Optimus 3D do not have the physical parallax barrier in front of the pixels, but behind the pixels and in front of
the backlight. They thus send not different images to the two eyes but different light to each. This allows the two
channels of light to pass through the pixels with less glare over the opposite pixels, giving the best image quality.
Pros
Clear image
Largest viewing angle
Cons
More expensive for mass production
Uses 20-25% more backlight than normal displays
References
[1] Berthier, Auguste. (May 16 and 23, 1896). "Images stéréoscopiques de grand format" (in French). Cosmos 34 (590, 591): 205–210, 227–233 (see 229–231)
[2] HTC EVO 3D (http://www.gsmarena.com/htc_evo_3d-3895.php), from GSMArena
External links
Video explaining how the parallax barrier works (http://vimeo.com/44261419)
Principle of autostereo display (http://mrl.nyu.edu/~perlin/experiments/autostereo/) - Java applet illustrating
the idea
Parallel rendering
Parallel rendering (or Distributed rendering) is the application of parallel programming to the computational
domain of computer graphics. Rendering graphics can require massive computational resources for complex scenes
that arise in scientific visualization, medical visualization, CAD applications, and virtual reality. Rendering is an
embarrassingly parallel workload in multiple domains (e.g., pixels, objects, frames) and thus has been the subject of
much research.
Workload Distribution
There are two, often competing, reasons for using parallel rendering. Performance scaling allows frames to be
rendered more quickly while data scaling allows larger data sets to be visualized. Different methods of distributing
the workload tend to favor one type of scaling over the other. There can also be other advantages and disadvantages
such as latency and load balancing issues. The three main options for primitives to distribute are entire frames,
pixels, or objects (e.g. triangle meshes).
Frame distribution
Each processing unit can render an entire frame from a different point of view or moment in time. The frames
rendered from different points of view can improve image quality with anti-aliasing or add effects like depth-of-field
and three dimensional display output. This approach allows for good performance scaling but no data scaling.
When rendering sequential frames in parallel there will be a lag for interactive sessions. The lag between user input
and the action being displayed is proportional to the number of sequential frames being rendered in parallel.
Pixel distribution
Sets of pixels in the screen space can be distributed among processing units in what is often referred to as sort first
rendering.[1]
Distributing interlaced lines of pixels gives good load balancing but makes data scaling impossible. Distributing
contiguous 2D tiles of pixels allows for data scaling by culling data with the view frustum. However, there is a data
overhead from objects on frustum boundaries being replicated and data has to be loaded dynamically as the view
point changes. Dynamic load balancing is also needed to maintain performance scaling.
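A sort-first tile distribution can be sketched as follows (names and the round-robin assignment are illustrative; real systems pair this with view-frustum culling and dynamic load balancing, as noted above):

```python
def assign_tiles(width, height, tile, units):
    """Sort-first distribution: partition screen space into contiguous
    2D tiles and deal them round-robin to processing units.

    Returns {unit_id: [(x0, y0, x1, y1), ...]} in pixel coordinates.
    """
    buckets = {u: [] for u in range(units)}
    i = 0
    for y in range(0, height, tile):
        for x in range(0, width, tile):
            # Clamp edge tiles so the whole screen is covered exactly once.
            rect = (x, y, min(x + tile, width), min(y + tile, height))
            buckets[i % units].append(rect)
            i += 1
    return buckets

tiles = assign_tiles(1920, 1080, 256, units=4)
total = sum((x1 - x0) * (y1 - y0)
            for rects in tiles.values() for x0, y0, x1, y1 in rects)
print(total)  # 2073600 == 1920 * 1080: every pixel assigned exactly once
```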
Object distribution
Distributing objects among processing units is often referred to as sort last rendering.[2] It provides good data scaling
and can provide good performance scaling, but it requires the intermediate images from processing nodes to be alpha
composited to create the final image. As the image resolution grows, the alpha compositing overhead also grows.
A load balancing scheme is also needed to maintain performance regardless of the viewing conditions. This can be
achieved by over partitioning the object space and assigning multiple pieces to each processing unit in a random
fashion, however this increases the number of alpha compositing stages required to create the final image. Another
option is to assign a contiguous block to each processing unit and update it dynamically, but this requires dynamic
data loading.
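The compositing step can be illustrated per pixel with the Porter-Duff "over" operator on premultiplied RGBA values (a simplified sketch; real sort-last systems composite whole intermediate images, often with algorithms such as binary-swap):

```python
def over(front, back):
    """Porter-Duff 'over' for premultiplied (r, g, b, a) tuples."""
    fr, fg, fb, fa = front
    br, bg, bb, ba = back
    return (fr + br * (1 - fa),
            fg + bg * (1 - fa),
            fb + bb * (1 - fa),
            fa + ba * (1 - fa))

def composite(layers):
    """Blend partial results from processing nodes, ordered front to
    back, into the final pixel value."""
    result = (0.0, 0.0, 0.0, 0.0)   # start fully transparent
    for layer in layers:            # front-most node first
        result = over(result, layer)
    return result

# Two nodes: an opaque red layer in front hides a green one behind it.
print(composite([(1.0, 0.0, 0.0, 1.0), (0.0, 1.0, 0.0, 1.0)]))
# (1.0, 0.0, 0.0, 1.0)
```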
Hybrid distribution
The different types of distributions can be combined in a number of fashions. A couple of sequential frames can be
rendered in parallel while also rendering each of those individual frames in parallel using a pixel or object
distribution. Object distributions can try to minimize their overlap in screen space in order to reduce alpha
compositing costs, or even use a pixel distribution to render portions of the object space.
References
[1] Molnar, S., M. Cox, D. Ellsworth, and H. Fuchs. A Sorting Classification of Parallel Rendering. IEEE Computer Graphics and Applications,
pages 23-32, July 1994.
[2] Molnar, S., M. Cox, D. Ellsworth, and H. Fuchs. A Sorting Classification of Parallel Rendering. IEEE Computer Graphics and Applications,
pages 23-32, July 1994.
External links
Cluster Rendering at Princeton University (http://www.cs.princeton.edu/~rudro/cluster-rendering/)
Particle system
The term particle system refers to a computer graphics technique that
uses a large number of very small sprites or other graphic objects to
simulate certain kinds of "fuzzy" phenomena, which are otherwise very
hard to reproduce with conventional rendering techniques - usually
highly chaotic systems, natural phenomena, and/or processes caused by
chemical reactions.
Examples of such phenomena which are commonly replicated using
particle systems include fire, explosions, smoke, moving water (such
as a waterfall), sparks, falling leaves, clouds, fog, snow, dust, meteor
tails, stars and galaxies, or abstract visual effects like glowing trails,
magic spells, etc. - these use particles that fade out quickly and are then
re-emitted from the effect's source. Another technique can be used for
things that contain many strands, such as fur, hair, and grass: rendering
an entire particle's lifetime at once, which can then be drawn and
manipulated as a single strand of the material in question.
Typical implementation
(Figure: Ad hoc particle system used to simulate a galaxy, created in 3dengfx.)
Typically a particle system's position and motion in 3D space are controlled by what is referred to as an emitter.
The emitter acts as the source of the particles, and its location in 3D space determines where they are generated
and whence they proceed. A regular 3D mesh object, such as a cube or a plane, can be used as an emitter. The
emitter has attached to it a set of particle behavior
parameters. These parameters can include the spawning rate (how many particles are generated per unit of time), the
particles' initial velocity vector (the direction they are emitted upon creation), particle lifetime (the length of time
each individual particle exists before disappearing), particle color, and many more. It is common for all or most of
these parameters to be "fuzzy": instead of a precise numeric value, the artist specifies a central value and the
degree of randomness allowable on either side of it (e.g., the average particle's lifetime might be 50 frames, plus
or minus 20%).
Simulation stage
During the simulation stage, the number of new particles that must be created is calculated based on spawning rates
and the interval between updates, and each of them is spawned in a specific position in 3D space based on the
emitter's position and the spawning area specified. Each of the particle's parameters (i.e. velocity, color, etc.) is
initialized according to the emitter's parameters. At each update, all existing particles are checked to see if they have
exceeded their lifetime, in which case they are removed from the simulation. Otherwise, the particles' position and
other characteristics are advanced based on a physical simulation, which can be as simple as translating their current
position, or as complicated as performing physically accurate trajectory calculations which take into account external
forces (gravity, friction, wind, etc.). It is common to perform collision detection between particles and specified 3D
objects in the scene to make the particles bounce off of or otherwise interact with obstacles in the environment.
Collisions between particles are rarely used, as they are computationally expensive and not visually relevant for most
simulations.
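The simulation stage described above can be sketched in a few lines (class and parameter names are illustrative; a production system would add spawning areas, color parameters, and collision handling):

```python
import random

class Particle:
    def __init__(self, pos, vel, life):
        self.pos, self.vel, self.life = pos, vel, life

class Emitter:
    """Minimal emitter following the stages described above."""
    def __init__(self, pos, spawn_rate, lifetime, gravity=(0.0, -9.8, 0.0)):
        self.pos = pos
        self.spawn_rate = spawn_rate    # particles per second
        self.lifetime = lifetime        # seconds
        self.gravity = gravity
        self.particles = []

    def update(self, dt):
        # 1. Spawn: number of new particles from rate and update interval,
        #    with a "fuzzy" (randomized) initial velocity vector.
        for _ in range(int(self.spawn_rate * dt)):
            vel = tuple(random.uniform(-1.0, 1.0) for _ in range(3))
            self.particles.append(Particle(self.pos, vel, self.lifetime))
        # 2. Remove particles that have exceeded their lifetime.
        self.particles = [p for p in self.particles if p.life > 0.0]
        # 3. Advance the rest: simple Euler integration under gravity.
        for p in self.particles:
            p.vel = tuple(v + g * dt for v, g in zip(p.vel, self.gravity))
            p.pos = tuple(x + v * dt for x, v in zip(p.pos, p.vel))
            p.life -= dt

emitter = Emitter(pos=(0.0, 0.0, 0.0), spawn_rate=100, lifetime=2.0)
for _ in range(60):              # one second at 60 updates per second
    emitter.update(1.0 / 60.0)
print(len(emitter.particles))    # 60 (one particle per update at this rate)
```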
Rendering stage
After the update is complete, each particle is rendered, usually in the form of a textured billboarded quad (i.e. a
quadrilateral that is always facing the viewer). However, this is not necessary; a particle may be rendered as a single
pixel in small resolution/limited processing power environments. Particles can be rendered as Metaballs in off-line
rendering; isosurfaces computed from particle-metaballs make quite convincing liquids. Finally, 3D mesh objects
can "stand in" for the particles: a snowstorm might consist of a single 3D snowflake mesh being duplicated and
rotated to match the positions of thousands or millions of particles.
However, if the entire life cycle of each particle is rendered simultaneously, the result is static particles: strands of
material that show the particles' overall trajectory, rather than point particles. These strands can be used to simulate
hair, fur, grass, and similar materials. The strands can be controlled with the same velocity vectors, force fields,
spawning rates, and deflection parameters that animated particles obey. In addition, the rendered thickness of the
strands can be controlled and in some implementations may be varied along the length of the strand. Different
combinations of parameters can impart stiffness, limpness, heaviness, bristliness, or any number of other properties.
The strands may also use texture mapping to vary the strands' color, length, or other properties across the emitter
surface.
External links
Particle Systems: A Technique for Modeling a Class of Fuzzy Objects [1] William T. Reeves (ACM
Transactions on Graphics, April 1983)
The Particle Systems API [2] - David K. McAllister
The ocean spray in your face. [3] Jeff Lander (Graphic Content, July 1998)
Building an Advanced Particle System [4] John van der Burg (Gamasutra, June 2000)
Particle Engine Using Triangle Strips [5] Jeff Molofee (NeHe)
Designing an Extensible Particle System using C++ and Templates [6] Kent Lai (GameDev.net)
repository of public 3D particle scripts in LSL Second Life format [7] - Ferd Frederix
GPU-Particlesystems using WebGL [8] - Particle effects directly in the browser using WebGL for calculations.
Point cloud
A point cloud is a set of data points in some
coordinate system.
In a three-dimensional coordinate system,
these points are usually defined by X, Y, and
Z coordinates, and often are intended to
represent the external surface of an object.
Point clouds may be created by 3D scanners.
These devices measure in an automatic way
a large number of points on the surface of an
object, and often output a point cloud as a
data file. The point cloud represents the set
of points that the device has measured.
As the result of a 3D scanning process, point clouds are used for many purposes, including to create 3D CAD
models for manufactured parts, for metrology/quality inspection, and for a multitude of visualization, animation,
rendering, and mass customization applications.
While point clouds can be directly rendered and inspected,[2] they are usually not directly usable in most 3D
applications, and are therefore converted to polygon mesh or triangle mesh models, NURBS surface models, or
CAD models through a process commonly referred to as surface reconstruction. There are many techniques for
converting a point cloud to a 3D surface. Some approaches, like Delaunay triangulation, alpha shapes, and ball
pivoting, build a network of triangles over the existing vertices of the point cloud, while other approaches convert
the point cloud into a volumetric distance field and reconstruct the implicit surface so defined through a marching
cubes algorithm.[3]
(Figure: Geo-referenced point cloud by DroneMapper.[1])
One application in which point clouds are directly usable is industrial metrology or inspection. The point cloud of a
manufactured part can be aligned to a CAD model (or even another point cloud), and compared to check for
differences. These differences can be displayed as color maps that give a visual indicator of the deviation between
the manufactured part and the CAD model. Geometric dimensions and tolerances can also be extracted directly from
the point cloud.
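The comparison step can be illustrated as a nearest-neighbor deviation computation (a brute-force sketch with illustrative names; real inspection tools first align the clouds, e.g. with ICP, and use spatial indices such as k-d trees):

```python
import math

def deviations(scanned, reference):
    """Per-point deviation of a scanned point cloud from a reference
    cloud: the distance from each scanned point to its nearest
    reference point (brute force, O(n*m))."""
    return [min(math.dist(p, q) for q in reference) for p in scanned]

reference = [(x * 0.1, 0.0, 0.0) for x in range(11)]    # ideal points
scanned   = [(x * 0.1, 0.02, 0.0) for x in range(11)]   # offset by 0.02
devs = deviations(scanned, reference)
print(max(devs))   # ~0.02: the constant offset of the scanned cloud
```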
Point clouds can also be used to represent volumetric data, for example in medical imaging. Using point clouds,
multi-sampling and data compression can be achieved.[4]
In geographic information systems, point clouds are one of the sources used to make digital elevation models of
the terrain.[5] Point clouds are also employed to generate 3D models of urban environments.[6]
References
[1] http://dronemapper.com
[2] Rusinkiewicz, S. and Levoy, M. 2000. QSplat: a multiresolution point rendering system for large meshes. In Siggraph 2000. ACM, New
York, NY, 343-352. DOI= http://doi.acm.org/10.1145/344779.344940
[3] Meshing Point Clouds (http://meshlabstuff.blogspot.com/2009/09/meshing-point-clouds.html) A short tutorial on how to build surfaces
from point clouds
[4] Sitek et al. "Tomographic Reconstruction Using an Adaptive Tetrahedral Mesh Defined by a Point Cloud" IEEE Trans. Med. Imag. 25 1172
(2006) (http://dx.doi.org/10.1109/TMI.2006.879319)
[5] From Point Cloud to Grid DEM: A Scalable Approach (http://terrain.cs.duke.edu/pubs/lidar_interpolation.pdf)
[6] K. Hammoudi, F. Dornaika, B. Soheilian, N. Paparoditis. Extracting Wire-frame Models of Street Facades from 3D Point Clouds and the
Corresponding Cadastral Map. International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences (IAPRS), vol.
38, part 3A, pp. 91-96, Saint-Mandé, France, 1-3 September 2010. (http://www.isprs.org/proceedings/XXXVIII/part3/a/pdf/
91_XXXVIII-part3A.pdf)
External links
PCL (Point Cloud Library) - a comprehensive BSD open source library for n-D point clouds and 3D geometry
processing. http://pointclouds.org
Polygon
- Floating point
- Fixed-point
- Polygon
  - Because of rounding, every scanline has its own direction in space and may show its front or back side to the viewer.
- Fraction (mathematics)
- Bresenham's line algorithm
  - Polygons have to be split into triangles.
  - The whole triangle shows the same side to the viewer.
  - The point numbers from the transform and lighting stage have to be converted to fractions.
- Barycentric coordinates (mathematics)
  - Used in ray tracing.
Polygon mesh
A polygon mesh is a collection of vertices, edges and
faces that defines the shape of a polyhedral object in
3D computer graphics and solid modeling. The faces
usually consist of triangles, quadrilaterals or other
simple convex polygons, since this simplifies
rendering, but may also be composed of more general
concave polygons, or polygons with holes.
The study of polygon meshes is a large sub-field of
computer graphics and geometric modeling. Different
representations of polygon meshes are used for
Example of a triangle mesh representing a dolphin.
different applications and goals. The variety of
operations performed on meshes may include Boolean
logic, smoothing, simplification, and many others. Network representations, "streaming" and "progressive" meshes,
are used to transmit polygon meshes over a network. Volumetric meshes are distinct from polygon meshes in that
they explicitly represent both the surface and volume of a structure, while polygon meshes only explicitly represent
the surface (the volume is implicit). As polygonal meshes are extensively used in computer graphics, algorithms also
exist for ray tracing, collision detection, and rigid-body dynamics of polygon meshes.
Objects created with polygon meshes must store different types of elements. These include vertices, edges, faces,
polygons and surfaces. In many applications, only vertices, edges and either faces or polygons are stored. A renderer
may support only 3-sided faces, so polygons must be constructed of many of these, as shown in Figure 1. However,
many renderers either support quads and higher-sided polygons, or are able to convert polygons to triangles on the
fly, making it unnecessary to store a mesh in a triangulated form. Also, in certain applications like head modeling, it
is desirable to be able to create both 3- and 4-sided polygons.
A vertex is a position along with other information such as color, normal vector and texture coordinates. An edge is
a connection between two vertices. A face is a closed set of edges, in which a triangle face has three edges, and a
quad face has four edges. A polygon is a coplanar set of faces. In systems that support multi-sided faces, polygons
and faces are equivalent. However, most rendering hardware supports only 3- or 4-sided faces, so polygons are
represented as multiple faces. Mathematically a polygonal mesh may be considered an unstructured grid, or
undirected graph, with additional properties of geometry, shape and topology.
Surfaces, more often called smoothing groups, are useful, but not required, to group smooth regions. Consider a
cylinder with caps, such as a soda can. For smooth shading of the sides, all surface normals must point horizontally
away from the center, while the normals of the caps must point straight up and down. Rendered as a single,
Phong-shaded surface, the crease vertices would have incorrect normals. Thus, some way of determining where to
cease smoothing is needed to group smooth parts of a mesh, just as polygons group 3-sided faces. As an alternative
to providing surfaces/smoothing groups, a mesh may contain other data for calculating the same data, such as a
splitting angle (polygons with normals above this threshold are either automatically treated as separate smoothing
groups or some technique such as splitting or chamfering is automatically applied to the edge between them).
Additionally, very high resolution meshes are less subject to issues that would require smoothing groups, as their
polygons are so small as to make the need irrelevant. Further, another alternative is simply to detach the surfaces
themselves from the rest of the mesh: renderers do not attempt to smooth edges across noncontiguous polygons.
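The splitting-angle approach can be sketched as a union-find over faces that share an edge (an illustrative sketch for triangle meshes; real packages also handle quads and split the per-vertex normals themselves):

```python
import math

def face_normal(verts, face):
    """Unit normal of a triangle via the cross product of two edges."""
    (ax, ay, az), (bx, by, bz), (cx, cy, cz) = (verts[i] for i in face)
    ux, uy, uz = bx - ax, by - ay, bz - az
    vx, vy, vz = cx - ax, cy - ay, cz - az
    nx, ny, nz = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
    l = math.sqrt(nx * nx + ny * ny + nz * nz) or 1.0
    return (nx / l, ny / l, nz / l)

def smoothing_groups(verts, faces, split_angle_deg=30.0):
    """Faces sharing an edge stay in one smoothing group when the angle
    between their normals is below the splitting angle."""
    normals = [face_normal(verts, f) for f in faces]
    parent = list(range(len(faces)))          # union-find forest
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]     # path halving
            i = parent[i]
        return i
    # Map each undirected edge to the faces that use it.
    edges = {}
    for fi, f in enumerate(faces):
        for k in range(3):
            e = tuple(sorted((f[k], f[(k + 1) % 3])))
            edges.setdefault(e, []).append(fi)
    cos_limit = math.cos(math.radians(split_angle_deg))
    for fs in edges.values():
        for a in fs[1:]:                      # compare to first face on edge
            b = fs[0]
            dot = sum(x * y for x, y in zip(normals[a], normals[b]))
            if dot >= cos_limit:              # crease shallower than threshold
                parent[find(a)] = find(b)
    groups = {}
    for fi in range(len(faces)):
        groups.setdefault(find(fi), []).append(fi)
    return list(groups.values())

# Two coplanar triangles plus one folded 90 degrees: the fold becomes
# a smoothing-group boundary.
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0), (1, 0, 1)]
faces = [(0, 1, 2), (1, 3, 2), (0, 4, 1)]
print(sorted(len(g) for g in smoothing_groups(verts, faces)))  # [1, 2]
```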
The mesh format may or may not define other useful data. Groups may be defined which identify separate elements
of the mesh and are useful for determining separate sub-objects for skeletal animation or separate actors for
non-skeletal animation. Generally materials will be defined, allowing different portions of the mesh to use different
shaders when rendered. Most mesh formats also support some form of UV coordinates, which are a separate 2D
representation of the mesh "unfolded" to show what portion of a 2-dimensional texture map to apply to different
polygons of the mesh.
Representations
Polygon meshes may be represented in a variety of ways, using different methods to store the vertex, edge and face
data. These include:
Face-vertex meshes: A simple list of vertices, and a set of polygons that point to the vertices they use.
Winged-edge meshes, in which each edge points to two vertices, two faces, and the four (clockwise and
counterclockwise) edges that touch it. Winged-edge meshes allow constant time traversal of the surface, but with
higher storage requirements.
Half-edge meshes: Similar to winged-edge meshes except that only half the edge traversal information is used.
(see OpenMesh [1])
Quad-edge meshes, which store edges, half-edges, and vertices without any reference to polygons. The polygons
are implicit in the representation, and may be found by traversing the structure. Memory requirements are similar
to half-edge meshes.
Corner-tables, which store vertices in a predefined table, such that traversing the table implicitly defines
polygons. This is in essence the triangle fan used in hardware graphics rendering. The representation is more
compact, and more efficient to retrieve polygons, but operations to change polygons are slow. Furthermore,
corner-tables do not represent meshes completely. Multiple corner-tables (triangle fans) are needed to represent
most meshes.
Vertex-vertex meshes: A "VV" mesh represents only vertices, which point to other vertices. Both the edge and
face information is implicit in the representation. However, the simplicity of the representation does not allow for
many efficient operations to be performed on meshes.
Each of the representations above have particular advantages and drawbacks, further discussed in Smith (2006).[2]
The choice of the data structure is governed by the application, the performance required, size of the data, and the
operations to be performed. For example, it is easier to deal with triangles than general polygons, especially in
computational geometry. For certain operations it is necessary to have a fast access to topological information such
as edges or neighboring faces; this requires more complex structures such as the winged-edge representation. For
hardware rendering, compact, simple structures are needed; thus the corner-table (triangle fan) is commonly
incorporated into low-level rendering APIs such as DirectX and OpenGL.
Vertex-vertex meshes
Vertex-vertex meshes represent an object as a set of vertices connected to other vertices. This is the simplest
representation, but not widely used since the face and edge information is implicit. Thus, it is necessary to traverse
the data in order to generate a list of faces for rendering. In addition, operations on edges and faces are not easily
accomplished.
However, VV meshes benefit from small storage space and efficient morphing of shape. Figure 2 shows the
four-sided cylinder example represented using VV meshes. Each vertex indexes its neighboring vertices. Notice that
the last two vertices, 8 and 9 at the top and bottom center of the "box-cylinder", have four connected vertices rather
than five. A general system must be able to handle an arbitrary number of vertices connected to any given vertex.
For a complete description of VV meshes see Smith (2006).
Face-vertex meshes
Face-vertex meshes represent an object as a set of faces and a set of vertices. This is the most widely used mesh
representation, being the input typically accepted by modern graphics hardware.
Face-vertex meshes improve on VV meshes for modeling in that they allow explicit lookup of the vertices of a face,
and the faces surrounding a vertex. Figure 3 shows the "box-cylinder" example as an FV mesh. Vertex v5 is
highlighted to show the faces that surround it. Notice that, in this example, every face is required to have exactly 3
vertices. However, this does not mean every vertex has the same number of surrounding faces.
For rendering, the face list is usually transmitted to the GPU as a set of indices to vertices, and the vertices are sent
as position/color/normal structures (in the figure, only position is given). This has the benefit that changes in shape,
but not geometry, can be dynamically updated by simply resending the vertex data without updating the face
connectivity.
Modeling requires easy traversal of all structures. With face-vertex meshes it is easy to find the vertices of a face.
Also, the vertex list contains a list of faces connected to each vertex. Unlike VV meshes, both faces and vertices are
explicit, so locating neighboring faces and vertices is constant time. However, the edges are implicit, so a search is
still needed to find all the faces surrounding a given face. Other dynamic operations, such as splitting or merging a
face, are also difficult with face-vertex meshes.
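A minimal face-vertex structure might look like this (an illustrative sketch: the per-vertex face lists give the constant-time lookups described above, while edge-based queries remain searches):

```python
class FaceVertexMesh:
    """Minimal face-vertex mesh: a vertex list, a face list of vertex
    indices, and a per-vertex list of incident faces."""
    def __init__(self, vertices, faces):
        self.vertices = list(vertices)          # (x, y, z) positions
        self.faces = [tuple(f) for f in faces]  # 3 vertex indices each
        self.vertex_faces = [[] for _ in self.vertices]
        for fi, face in enumerate(self.faces):
            for vi in face:
                self.vertex_faces[vi].append(fi)

    def face_vertices(self, fi):
        """Vertices of a face: direct (constant-time) lookup."""
        return self.faces[fi]

    def faces_around_vertex(self, vi):
        """Faces surrounding a vertex: direct (constant-time) lookup."""
        return self.vertex_faces[vi]

    def faces_around_face(self, fi):
        """Neighbouring faces: edges are implicit, so this requires a
        search through the faces incident to the face's vertices."""
        out = set()
        for vi in self.faces[fi]:
            out.update(self.vertex_faces[vi])
        out.discard(fi)
        return out

# A quad split into two triangles sharing the edge (0, 2).
mesh = FaceVertexMesh([(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)],
                      [(0, 1, 2), (0, 2, 3)])
print(mesh.faces_around_vertex(0))   # [0, 1]
print(mesh.faces_around_face(0))     # {1}
```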
Winged-edge meshes
Introduced by Baumgart in 1975, winged-edge meshes explicitly represent the vertices, faces, and edges of a mesh.
This representation is widely used in modeling programs to provide the greatest flexibility in dynamically changing
the mesh geometry, because split and merge operations can be done quickly. Their primary drawback is large storage
requirements and increased complexity due to maintaining many indices. A good discussion of implementation
issues of Winged-edge meshes may be found in the book Graphics Gems II.
Winged-edge meshes address the issue of traversing from edge to edge, and providing an ordered set of faces around
an edge. For any given edge, the number of outgoing edges may be arbitrary. To simplify this, winged-edge meshes
provide only four, the nearest clockwise and counter-clockwise edges at each end. The other edges may be traversed
incrementally. The information for each edge therefore resembles a butterfly, hence "winged-edge" meshes. Figure 4
shows the "box-cylinder" as a winged-edge mesh. The total data for an edge consists of 2 vertices (endpoints), 2
faces (on each side), and 4 edges (winged-edge).
Rendering of winged-edge meshes for graphics hardware requires generating a Face index list. This is usually done
only when the geometry changes. Winged-edge meshes are ideally suited for dynamic geometry, such as subdivision
surfaces and interactive modeling, since changes to the mesh can occur locally. Traversal across the mesh, as might
be needed for collision detection, can be accomplished efficiently.
See Baumgart (1975) for more details.[3]
Summary of mesh representation operations:

Operation | Vertex-vertex | Face-vertex | Winged-edge | Render dynamic
V-V (all vertices around a vertex) | Explicit | V → f1, f2, f3, ... → v1, v2, v3, ... | V → e1, e2, e3, ... → v1, v2, v3, ... | V → e1, e2, e3, ... → v1, v2, v3, ...
E-F (all edges of a face) | F(a,b,c) → {a,b}, {b,c}, {a,c} | F → {a,b}, {b,c}, {a,c} | Explicit | Explicit
V-F (all vertices of a face) | F(a,b,c) → {a,b,c} | Explicit | F → e1, e2, e3 → a, b, c | Explicit
F-V (all faces around a vertex) | Pair search | Explicit | V → e1, e2, e3 → f1, f2, f3 | Explicit
E-V (all edges around a vertex) | V → {v,v1}, {v,v2}, {v,v3}, ... | V → f1, f2, f3 → v1, v2, v3 | Explicit | Explicit
F-E (both faces of an edge) | List compare | List compare | Explicit | Explicit
V-E (both vertices of an edge) | E(a,b) → {a,b} | E(a,b) → {a,b} | Explicit | Explicit
Flook (find face with given vertices) | F(a,b,c) → {a,b,c} | Set intersection of v1, v2, v3 | Set intersection of v1, v2, v3 | Set intersection of v1, v2, v3
Storage size | V*avg(V,V) | 3F + V*avg(F,V) | 3F + 8E + V*avg(E,V) | 6F + 4E + V*avg(E,V)
Example: 10 vertices, 16 faces, 24 edges | 10*5 = 50 | 3*16 + 10*5 = 98 | 3*16 + 8*24 + 10*5 = 290 | 6*16 + 4*24 + 10*5 = 242
In the above table, explicit indicates that the operation can be performed in constant time, as the data is directly
stored; list compare indicates that a list comparison between two lists must be performed to accomplish the
operation; and pair search indicates a search must be done on two indices. The notation avg(V,V) means the average
number of vertices connected to a given vertex; avg(E,V) means the average number of edges connected to a given
vertex, and avg(F,V) is the average number of faces connected to a given vertex.
The notation "V → f1, f2, f3, ... → v1, v2, v3, ..." describes that a traversal across multiple elements is required to
perform the operation. For example, to get "all vertices around a given vertex V" using the face-vertex mesh, it is
necessary to first find the faces around the given vertex V using the vertex list. Then, from those faces, use the face
list to find the vertices around them. Notice that winged-edge meshes explicitly store nearly all information, and
other operations always traverse to the edge first to get additional info. Vertex-vertex meshes are the only
representation that explicitly stores the neighboring vertices of a given vertex.
As the mesh representations become more complex (from left to right in the summary), the amount of information
explicitly stored increases. This gives more direct, constant time, access to traversal and topology of various
elements but at the cost of increased overhead and space in maintaining indices properly.
Figure 7 shows the connectivity information for each of the four techniques described in this article. Other
representations also exist, such as half-edge and corner tables. These are all variants of how vertices, faces and edges
index one another.
As a general rule, face-vertex meshes are used whenever an object must be rendered on graphics hardware that does
not change geometry (connectivity), but may deform or morph shape (vertex positions) such as real-time rendering
of static or morphing objects. Winged-edge or render dynamic meshes are used when the geometry changes, such as
in interactive modeling packages or for computing subdivision surfaces. Vertex-vertex meshes are ideal for efficient,
complex changes in geometry or topology so long as hardware rendering is not of concern.
Other representations
Streaming meshes store faces in an ordered, yet independent, way so that the mesh can be transmitted in pieces. The
order of faces may be spatial, spectral, or based on other properties of the mesh. Streaming meshes allow a very large
mesh to be rendered even while it is still being loaded.
Progressive meshes transmit the vertex and face data with increasing levels of detail. Unlike streaming meshes,
progressive meshes give the overall shape of the entire object, but at a low level of detail. Additional data, new edges
and faces, progressively increase the detail of the mesh.
Normal meshes transmit progressive changes to a mesh as a set of normal displacements from a base mesh. With this
technique, a series of textures represent the desired incremental modifications. Normal meshes are compact, since
only a single scalar value is needed to express displacement. However, the technique requires a complex series of
transformations to create the displacement textures.
File formats
There exist many different file formats for storing polygon mesh data. Each format is most effective when used for
the purpose intended by its creator. Some of these formats are presented below:
File suffix | Format name | Organization(s) | Program(s) | Description
.raw | Raw mesh | Unknown | Various |
.blend | | Blender Foundation | Blender 3D |
.fbx | Autodesk FBX format | Autodesk | Various |
.3ds | 3ds Max file | Autodesk | 3ds Max |
.dae | Digital Asset Exchange (COLLADA) | Sony Computer Entertainment, Khronos Group | N/A |
.dgn | MicroStation file | Bentley Systems | MicroStation |
.3dm | Rhino file | Robert McNeel & Associates | Rhinoceros 3D |
.dxf | Drawing Exchange Format | Autodesk | AutoCAD |
.obj | Wavefront OBJ | Wavefront Technologies | Various |
.ply | Polygon File Format | Stanford University | Unknown |
.pmd | | | |
.stl | Stereolithography format | 3D Systems | N/A |
.amf | Additive Manufacturing File Format | ASTM International | N/A | Like the STL format, but with added native color, material, and constellation support.
.wrl | Virtual Reality Modeling Language | Web3D Consortium | Web browsers |
.wrz | VRML compressed | Web3D Consortium | Web browsers |
.x3d, .x3db, .x3dv | Extensible 3D | Web3D Consortium | Web browsers |
.x3dz, .x3dbz, .x3dvz | X3D compressed binary | Web3D Consortium | Web browsers |
.c4d | Cinema 4D file | MAXON | CINEMA 4D |
.lwo | LightWave 3D object file | NewTek | LightWave 3D |
.msh | Gmsh mesh | Gmsh developers | Gmsh project |
.mesh | OGRE XML | OGRE Development Team | OGRE, purebasic |
.z3d | Z3d | Oleg Melashenko | Zanoza Modeler |
.vtk | VTK mesh | VTK, Kitware | VTK, Paraview |
References
[1] http://www.openmesh.org/
[2] Colin Smith, On Vertex-Vertex Meshes and Their Use in Geometric and Biological Modeling, http://algorithmicbotany.org/papers/smithco.dis2006.pdf
[3] Bruce Baumgart, Winged-Edge Polyhedron Representation for Computer Vision. National Computer Conference, May 1975. http://www.baumgart.org/winged-edge/winged-edge.html
[4] Tobler & Maierhofer, A Mesh Data Structure for Rendering and Subdivision. 2006. (http://wscg.zcu.cz/wscg2006/Papers_2006/Short/E17-full.pdf)
External links
Weisstein, Eric W., " Simplicial complex (http://mathworld.wolfram.com/SimplicialComplex.html)",
MathWorld.
Weisstein, Eric W., " Triangulation (http://mathworld.wolfram.com/Triangulation.html)", MathWorld.
OpenMesh (http://www.openmesh.org/) open source half-edge mesh representation.
Polygon soup
A polygon soup is a group of unorganized triangles, with generally no relationship whatsoever. Polygon soups are a
geometry storage format in 3D modeling packages such as Maya, Houdini, and Blender. A polygon soup can save
memory, load/write time, and disk space compared to the equivalent polygon mesh, and the larger the polygon soup,
the larger the savings. For instance, fluid simulations, particle simulations, rigid body simulations, environments,
and character models can reach into the millions of polygons for feature films, causing large disk space and
read/write overhead. As soon as any kind of hierarchical sorting or clustering scheme is applied, the data becomes
something else (one example being an octree, a subdivided cube). Any kind of polygonal geometry that has not been
grouped in any way can be considered polygon soup. Optimized meshes may contain grouped items to make
drawing faster.
Polygonal modeling
In 3D computer graphics, polygonal modeling is an approach for modeling objects by representing or
approximating their surfaces using polygons. Polygonal modeling is well suited to scanline rendering and is
therefore the method of choice for real-time computer graphics. Alternate methods of representing 3D objects
include NURBS surfaces, subdivision surfaces, and equation-based representations used in ray tracers. See polygon
mesh for a description of how polygonal models are represented and stored.
Many modeling programs do not strictly enforce geometric theory; for example, it is possible for two vertices to
have two distinct edges connecting them, occupying exactly the same spatial location. It is also possible for two
vertices to exist at the same spatial coordinates, or two faces to exist at the same location. Situations such as these are
usually not desired, and many packages support an auto-cleanup function. If auto-cleanup is not present, however,
such duplicates must be deleted manually.
A group of polygons which are connected by shared vertices is referred to as a mesh. In order for a mesh to appear
attractive when rendered, it is desirable that it be non-self-intersecting, meaning that no edge passes through a
polygon. Another way of looking at this is that the mesh cannot pierce itself. It is also desirable that the mesh not
contain any errors such as doubled vertices, edges, or faces. For some purposes it is important that the mesh be a
manifold, that is, that it does not contain holes or singularities (locations where two distinct sections of the mesh
are connected by a single vertex).
Primitives
Commonly used primitives include:
- Cubes
- Pyramids
- Cylinders
- 2D primitives, such as squares, triangles, and disks
- Specialized or esoteric primitives, such as the Utah Teapot or Suzanne, Blender's monkey mascot
- Spheres, commonly represented in one of two ways:
  - Icospheres: icosahedrons subdivided into a sufficient number of triangles to resemble a sphere
  - UV spheres: composed of quads, resembling the grid seen on some globes; quads are larger near the
    "equator" of the sphere and smaller near the "poles", eventually terminating in a single vertex
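The UV-sphere layout just described can be sketched in a few lines (the function name and parameters are illustrative, not any modeler's API): rings of points between two pole vertices, with the rings shrinking toward the poles.

```python
import math

def uv_sphere_vertices(n_lat, n_lon, radius=1.0):
    """Vertices of a UV sphere: two pole vertices plus n_lat - 1 rings of
    n_lon points each; adjacent rings form the quads of the sphere."""
    verts = [(0.0, 0.0, radius)]                  # "north pole"
    for i in range(1, n_lat):
        phi = math.pi * i / n_lat                 # polar angle from +z
        for j in range(n_lon):
            theta = 2.0 * math.pi * j / n_lon     # longitude
            verts.append((radius * math.sin(phi) * math.cos(theta),
                          radius * math.sin(phi) * math.sin(theta),
                          radius * math.cos(phi)))
    verts.append((0.0, 0.0, -radius))             # "south pole"
    return verts
```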
Finally, some specialized methods of constructing high- or low-detail meshes exist. Sketch-based modeling is a
user-friendly interface for constructing low-detail models quickly, while 3D scanners can be used to create
high-detail meshes based on existing real-world objects in an almost automatic way. These devices are very
expensive and are generally only used by researchers and industry professionals, but they can generate highly
accurate sub-millimetric digital representations.
171
Polygonal modeling
Operations
There are a very large number of operations which may be performed on polygonal meshes. Some of these roughly
correspond to real-world manipulations of 3D objects, while others do not.
Polygonal mesh operations:
- Creations - create new geometry from some other mathematical object
  - Loft - generate a mesh by sweeping a shape along a path
  - Extrude - same as loft, except the path is always a line
  - Revolve - generate a mesh by revolving (rotating) a shape around an axis
  - Marching cubes - algorithm to construct a mesh from an implicit function
- Binary creations - create a new mesh from a binary operation of two other meshes
  - Add - boolean addition of two meshes
  - Subtract - boolean subtraction of two meshes
  - Intersect - boolean intersection
  - Union - boolean union of two meshes
  - Attach - attach one mesh to another (removing the interior surfaces)
  - Chamfer - create a beveled surface that smoothly connects two surfaces
- Deformations - move only the vertices of a mesh
  - Deform - systematically move vertices (according to certain functions or rules)
  - Weighted deform - move vertices based on localized weights per vertex
  - Morph - move vertices smoothly between a source and a target mesh
  - Bend - move vertices to "bend" the object
  - Twist - move vertices to "twist" the object
- Manipulations - modify the geometry of the mesh, but not necessarily the topology
  - Displace - introduce additional geometry based on a "displacement map" from the surface
  - Simplify - systematically remove and average vertices
  - Subdivide - smooth a coarse mesh by subdividing it (Catmull-Clark, etc.)
  - Convex hull - generate another mesh that minimally encloses a given mesh (think shrink-wrap)
  - Cut - create a hole in a mesh surface
  - Stitch - close a hole in a mesh surface
- Measurements - compute some value of the mesh
  - Volume - compute the 3D volume of a mesh (discrete volumetric integral)
  - Surface area - compute the surface area of a mesh (discrete surface integral)
  - Collision detection - determine whether two complex meshes in motion have collided
  - Fitting - construct a parametric surface (NURBS, bicubic spline) by fitting it to a given mesh
  - Point-surface distance - compute the distance from a point to the mesh
  - Line-surface distance - compute the distance from a line to the mesh
  - Line-surface intersection - compute the intersection of a line and the mesh
  - Cross section - compute the curves created by a cross section of a plane through a mesh
  - Centroid - compute the centroid (geometric center) of the mesh
  - Center of mass - compute the center of mass (balance point) of the mesh
  - Circumcenter - compute the center of a circle or sphere enclosing an element of the mesh
  - Incenter - compute the center of a circle or sphere enclosed by an element of the mesh
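One of the measurement operations, the discrete volume integral, can be sketched as follows (an illustrative function, assuming a closed mesh with consistent outward winding): the volume is the sum of signed tetrahedra spanned by the origin and each triangle, a standard divergence-theorem construction.

```python
def mesh_volume(vertices, triangles):
    """Signed volume of a closed triangle mesh: V = (1/6) * sum over
    triangles of the scalar triple product v0 . (v1 x v2)."""
    total = 0.0
    for i, j, k in triangles:
        ax, ay, az = vertices[i]
        bx, by, bz = vertices[j]
        cx, cy, cz = vertices[k]
        # scalar triple product a . (b x c)
        total += (ax * (by * cz - bz * cy)
                  + ay * (bz * cx - bx * cz)
                  + az * (bx * cy - by * cx))
    return total / 6.0
```

Because the per-tetrahedron volumes are signed, the contributions outside the solid cancel, so the result does not depend on where the origin sits.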
Extensions
Once a polygonal mesh has been constructed, further steps must be taken before it is useful for games, animation,
etc. The model must be texture mapped to add colors and texture to the surface and it must be given a skeleton for
animation. Meshes can also be assigned weights and centers of gravity for use in physical simulation.
To display a model on a computer screen outside of the modeling environment, it is necessary to store the model in
one of the file formats listed below and then use or write a program capable of loading that format. The two main
APIs for displaying 3D polygon models are OpenGL and Direct3D, both of which can be used with or without a
3D-accelerated graphics card.
File formats
A variety of formats are available for storing 3D polygon data.
References
1. Richard S. Wright and Benjamin Lipchak, OpenGL SuperBible (3rd ed.), ISBN 0-672-32601-9
2. OpenGL Architecture Review Board, OpenGL Programming Guide: The Official Guide to Learning OpenGL,
   Version 1.4 (4th ed.), ISBN 0-321-17348-1
3. OpenGL Architecture Review Board, OpenGL Reference Manual: The Official Reference Document to OpenGL,
   Version 1.4 (4th ed.), ISBN 0-321-17383-X
4. Blender documentation: http://www.blender.org/cms/Documentation.628.0.html
5. Maya documentation: packaged with Alias Maya, http://www.alias.com/eng/index.shtml
Pre-rendering
Pre-rendering is the process in which video footage is not rendered in real time by the hardware that is outputting or
playing back the video. Instead, the video is a recording of footage that was previously rendered on different
equipment (typically equipment more powerful than the hardware used for playback). Pre-rendered assets (typically
movies) may also be outsourced by the developer to an outside production company. Such assets usually have a level
of complexity that is too great for the target platform to render in real time.
The term pre-rendered describes anything that is not rendered in real-time. This includes content that could have
been run in real-time with more effort on the part of the developer (e.g. video that covers a large number of a game's
environments without pausing to load, or video of a game in an early state of development that is rendered in
slow-motion and then played back at regular speed). The term is generally not used to describe video captures of
real-time rendered graphics despite the fact that video is technically pre-rendered by its nature. The term is also not
used to describe hand drawn assets or photographed assets (these assets not being computer rendered in the first
place).
Usage
Pre-rendered graphics are used primarily as cut scenes in modern video games, where they are also known as full
motion video. In the late 1990s and early 2000s, when most 3D game engines had pre-calculated/fixed Lightmaps
and texture mapping, developers often turned to pre-rendered graphics which had a much higher level of realism.
However this has lost favor since the mid-2000s, as advances in consumer PC and video game graphics have enabled
the use of the game's own engine to render these cinematics. For instance, the id Tech 4 engine used in Doom 3
allowed bump mapping and dynamic per-pixel lighting, previously only found in pre-rendered videos.
One of the first games to use pre-rendering was the Sharp X68000 enhanced remake of Ys I: Ancient Ys Vanished
released in 1991. It used 3D pre-rendered graphics for the boss sprites, though this ended up creating what is
considered "a bizarre contrast" with the game's mostly 2D graphics.[1] One of the first games to extensively use
pre-rendered graphics along with full motion video was The 7th Guest. Released in 1992 as one of the first PC games
exclusively on CD-ROM, the game was hugely popular, although reviews from critics were mixed. The game
featured pre-rendered video sequences that were at a resolution of 640x320 at 15 frames per second, a feat
previously thought impossible on personal computers. Shortly after, the release of Myst in 1993 made the use of
pre-rendered graphics and CD-ROMs even more popular; interestingly most of the rendered work of Myst would
later be the basis for the re-make realMyst: Interactive 3D Edition with its free-roaming real-time 3D graphics. The
most graphically advanced use of entirely pre-rendered graphics in games is often claimed to be Myst IV: Revelation,
released in 2004.
The use of pre-rendered backgrounds and movies also was made popular by the Resident Evil and Final Fantasy
franchises on the original PlayStation, both of which use pre-rendered backgrounds and movies extensively to
provide a visual presentation that is far greater than the console can provide with real-time 3D. These games include
real-time elements (characters, items, etc.) in addition to pre-rendered backgrounds to provide interactivity. Often a
game using pre-rendered backgrounds can devote additional processing power to the remaining interactive elements
resulting in a level of detail greater than the norm for the host platform. In some cases the visual quality of the
interactive elements is still far behind the pre-rendered backgrounds.
Games such as Warcraft III: Reign of Chaos have used both types of cutscenes; pre-rendered for the beginning and
end of a campaign, and the in-game engine for level briefings and character dialogue during a mission.
Some games also use 16-bit pre-rendered skyboxes, as in Half-Life (GoldSrc version only), Re-Volt, Quake II, and
others.
CG movies such as Toy Story, Shrek and Final Fantasy: The Spirits Within are entirely pre-rendered.
Other methods
Another increasingly common pre-rendering method is the generation of texture sets for 3D games, which are often
used with complex real-time algorithms to simulate extraordinarily high levels of detail. While making Doom 3, id
Software used pre-rendered models as the basis for generating normal, specular and diffuse lighting maps that
simulate the detail of the original model in real-time.
Pre-rendered lighting is a technique that is losing popularity. Processor-intensive ray tracing algorithms can be used
during a game's production to generate light textures, which are simply applied on top of the usual hand drawn
textures.
References
[1] (cf. )
References
Peter-Pike Sloan, Jan Kautz, and John Snyder. "Precomputed Radiance Transfer for Real-Time Rendering in
Dynamic, Low-Frequency Lighting Environments". ACM Transactions on Graphics, Proceedings of the 29th
Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH), pp. 527-536. New York,
NY: ACM Press, 2002. (http://www.mpi-inf.mpg.de/~jnkautz/projects/prt/prtSIG02.pdf)
Ren Ng, Ravi Ramamoorthi, and Pat Hanrahan. "All-Frequency Shadows Using Non-Linear Wavelet Lighting
Approximation". ACM Transactions on Graphics 22, 3 (2003), pp. 376-381. (http://graphics.stanford.edu/papers/
allfreq/allfreq.press.pdf)
Procedural modeling
Procedural modeling is an umbrella term for a number of techniques in computer graphics to create 3D models and
textures from sets of rules. L-Systems, fractals, and generative modeling are procedural modeling techniques since
they apply algorithms for producing scenes. The set of rules may either be embedded into the algorithm,
configurable by parameters, or kept separate from the evaluation engine. The output is called procedural content,
which can be used in computer games or films, be uploaded to the internet, or be edited manually by the user.
Procedural models often exhibit database amplification, meaning that large scenes can be generated from a much
smaller set of rules. If the employed algorithm produces the same output every time, the output need not
be stored; often, it suffices to start the algorithm with the same random seed to achieve this. Although all modeling
techniques on a computer require algorithms to manage and store data at some point, procedural modeling focuses
on creating a model from a rule set, rather than editing the model via user input. Procedural modeling is often
applied when it would be too cumbersome to create a 3D model using generic 3D modelers, or when more
specialized tools are required. This is often the case for plants, architecture or landscapes.
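A minimal sketch of rule-driven generation, and of the database amplification it yields, is Lindenmayer's classic algae L-system (the function name below is illustrative): two one-character rules expand into ever longer strings.

```python
def lsystem(axiom, rules, iterations):
    """Rewrite every symbol of the string in parallel, once per iteration;
    symbols without a rule are copied unchanged."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Lindenmayer's algae system: A -> AB, B -> A.
# String lengths grow as the Fibonacci numbers: 1, 2, 3, 5, 8, 13, ...
algae = lsystem("A", {"A": "AB", "B": "A"}, 5)
```

Only the axiom and the two rules need to be stored; re-running the deterministic rewriting reproduces the full string, which is the database-amplification property described above.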
Software packages offering procedural modeling include:
Acropora [1]
BRL-CAD
Bryce
CityEngine
Derivative TouchDesigner [2]
Generative Modelling Language
Grome
Houdini
HyperFun
Softimage
Terragen
3ds Max
External links
"Texturing and Modeling: A Procedural Approach" [3], Ebert, D., Musgrave, K., Peachey, P., Perlin, K., and
Worley, S
Procedural Inc. [4]
CityEngine [5]
"Procedural Modeling of Cities" [6], Yoav I H Parish, Pascal Müller
"Procedural Modeling of Buildings" [7], Pascal Müller, Peter Wonka, Simon Haegler, Andreas Ulmer and Luc
Van Gool
"King Kong: The Building of 1933 New York City" [8], Chris White, Weta Digital. SIGGRAPH 2006.
Tree Editors Compared:
List at Vterrain.org [9]
List at TreeGenerator [10]
Procedural texture
A procedural texture is a computer-generated image
created using an algorithm intended to create a realistic
representation of natural elements such as wood, marble,
granite, metal, stone, and others.
Usually, the natural look of the rendered result is achieved by
the usage of fractal noise and turbulence functions. These
functions are used as a numerical representation of the
randomness found in nature.
Solid texturing
Solid texturing is a process where the texture generating function is evaluated over R3 at each visible surface point
of the model. Traditionally these functions use Perlin noise as their basis function, but some simple functions may
use more trivial methods such as the sum of sinusoidal functions. Solid textures are an alternative to the traditional
2D texture images which are applied to the surfaces of a model. It is a difficult and tedious task to get multiple 2D
textures to form a consistent visual appearance on a model without it looking obviously tiled. Solid textures were
created specifically to solve this problem.
[Figure: A procedural floor grate texture generated with the texture editor Genetica.[1]]
Instead of editing images to fit a model, a function is used to evaluate the colour of the point being textured. Points
are evaluated based on their 3D position, not their 2D surface position. Consequently, solid textures are unaffected
by distortions of the surface parameter space, such as you might see near the poles of a sphere. Also, continuity
between the surface parameterization of adjacent patches isn't a concern either. Solid textures will remain consistent
and have features of constant size regardless of distortions in the surface coordinate systems.[2]
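A toy solid-texture function along these lines (a sum of sinusoids rather than Perlin noise; entirely illustrative) evaluates a colour parameter directly from a 3D position, so it needs no surface parameterization at all:

```python
import math

def solid_texture(x, y, z):
    """Evaluate a texture value in [0, 1] at a 3D point as a sum of
    sinusoids; any surface point gets a consistent, distortion-free value."""
    v = (math.sin(3.1 * x) + math.sin(5.3 * y) + math.sin(7.7 * z)) / 3.0
    return 0.5 * (v + 1.0)
```

The returned scalar would typically index a colour ramp; because it depends only on (x, y, z), two adjacent patches automatically agree along their shared boundary.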
Cellular texturing
Cellular texturing differs from the majority of other procedural texture generating techniques as it does not depend
on noise functions as its basis, although it is often used to complement the technique. Cellular textures are based on
feature points which are scattered over a three dimensional space. These points are then used to split up the space
into small, randomly tiled regions called cells. These cells often look like lizard scales, pebbles, or flagstones.
Even though these regions are discrete, the cellular basis function itself is continuous and can be evaluated anywhere
in space. [3]
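A minimal sketch of such a cellular basis function (nearest-feature-point distance, as in Worley noise; the function name is illustrative): even though the implied cells are discrete regions, the returned distance varies continuously everywhere.

```python
import math

def cellular_basis(p, feature_points):
    """Distance from point p to its nearest feature point; the level sets
    of this function outline the boundaries of the cells."""
    return min(math.dist(p, f) for f in feature_points)
```

Mapping this distance (or differences of the first and second nearest distances) through a colour ramp yields the pebble- or scale-like patterns described above.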
Genetic textures
Genetic texture generation is a highly experimental approach for generating textures. It is a highly automated
process in which a human completely moderates the eventual outcome. The flow of control usually has a computer
generate a set of texture candidates. From these, a user picks a selection. The computer then generates another set of
textures by mutating and crossing over elements of the user-selected textures.[4] For more information on exactly
how this mutation and crossover generation method is achieved, see Genetic algorithm. The process continues until
a texture suitable to the user is generated. This is not a commonly used method of generating textures, as it is very
difficult to control and direct the eventual outcome. Because of this, it is typically used for experimentation or
abstract textures only.
Self-organizing textures
Starting from simple white noise, self-organization processes lead to structured patterns, still with an element of
randomness. Reaction-diffusion systems are a good example for generating this kind of texture.
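A minimal sketch of one such process is a single explicit-Euler step of the Gray-Scott reaction-diffusion system on a periodic grid (the parameter values below are illustrative defaults, not canonical):

```python
import numpy as np

def gray_scott_step(U, V, Du=0.16, Dv=0.08, f=0.035, k=0.065, dt=1.0):
    """One explicit Euler step of the Gray-Scott reaction-diffusion system
    on a periodic grid, using a 5-point Laplacian built from np.roll."""
    def lap(A):
        return (np.roll(A, 1, axis=0) + np.roll(A, -1, axis=0)
                + np.roll(A, 1, axis=1) + np.roll(A, -1, axis=1) - 4 * A)
    uvv = U * V * V                      # the U + 2V -> 3V reaction term
    U_next = U + dt * (Du * lap(U) - uvv + f * (1 - U))
    V_next = V + dt * (Dv * lap(V) + uvv - (f + k) * V)
    return U_next, V_next
```

Iterating this step from a noisy or locally perturbed initial state grows the spotted and striped patterns typical of self-organizing textures; the V field is usually what gets mapped to colour.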
Example of a procedural marble texture
(Taken from The RenderMan Companion by Steve Upstill.)

surface
blue_marble(
    float Ks        = .4,
          Kd        = .6,
          Ka        = .1,
          roughness = .1,
          txtscale  = 1;
    color specularcolor = 1)
{
    point PP;                    /* scaled point in shader space     */
    float csp;                   /* color spline parameter           */
    point Nf;                    /* forward-facing normal            */
    point V;                     /* for specular()                   */
    float pixelsize, twice, scale, weight, turbulence;

    /* Obtain a forward-facing normal for lighting calculations. */
    Nf = faceforward(normalize(N), I);
    V = -normalize(I);

    /*
     * Compute "turbulence" a la [PERLIN85]. Turbulence is a sum of
     * "noise" components with a "fractal" 1/f power spectrum. It gives the
     * visual impression of turbulent fluid flow (for example, as in the
     * formation of blue_marble from molten color splines!). Use the
     * surface element area in texture space to control the number of
     * noise components so that the frequency content is appropriate
     * to the scale. This prevents aliasing of the texture.
     */
    PP = transform("shader", P) * txtscale;
    pixelsize = sqrt(area(PP));
    twice = 2 * pixelsize;
    turbulence = 0;
    for (scale = 1; scale > twice; scale /= 2)
        turbulence += scale * noise(PP/scale);

    /* Gradual fade out of highest-frequency component near limit */
    if (scale > pixelsize) {
        weight = (scale / pixelsize) - 1;
        weight = clamp(weight, 0, 1);
        turbulence += weight * scale * noise(PP/scale);
    }

    /*
     * Magnify the upper part of the turbulence range 0.75:1
     * to fill the range 0:1 and use it as the parameter of
     * a color spline through various shades of blue.
     */
    csp = clamp(4 * turbulence - 3, 0, 1);
    Ci = color spline(csp,
        color (0.25, 0.25, 0.35),    /* pale blue        */
        color (0.25, 0.25, 0.35),    /* pale blue        */
        color (0.20, 0.20, 0.30),    /* medium blue      */
        color (0.20, 0.20, 0.30),    /* medium blue      */
        color (0.20, 0.20, 0.30),    /* medium blue      */
        color (0.25, 0.25, 0.35),    /* pale blue        */
        color (0.25, 0.25, 0.35),    /* pale blue        */
        color (0.15, 0.15, 0.26),    /* medium dark blue */
        color (0.15, 0.15, 0.26),    /* medium dark blue */
        color (0.10, 0.10, 0.20),    /* dark blue        */
        color (0.10, 0.10, 0.20),    /* dark blue        */
        color (0.25, 0.25, 0.35),    /* pale blue        */
        color (0.10, 0.10, 0.20)     /* dark blue        */
    );

    /* Multiply this color by the diffusely reflected light. */
    Ci *= Ka*ambient() + Kd*diffuse(Nf);

    /* Adjust for opacity and add in the specular highlights. */
    Oi = Os;
    Ci = Os * (Ci + specularcolor * Ks * specular(Nf, V, roughness));
}
Progressive meshes
Progressive meshes is a technique of dynamic level of detail (LOD), introduced by Hugues Hoppe in 1996. The
method stores a model in a structure, the progressive mesh, which allows a smooth choice of detail level depending
on the current view. Practically, this means that it is possible to display the whole model at the lowest level of detail
at once and then gradually reveal more detail. Among the disadvantages is considerable memory consumption; the
advantage is that it can work in real time. Progressive meshes can also be used in other areas of computer
technology, such as gradual transfer of data through the Internet, or compression.[1]
Basic principle
A progressive mesh is a data structure created by simplifying the original, full-quality model with a suitable
decimation algorithm, which removes edges from the model step by step (the edge collapse operation). The
simplification is repeated as many times as needed to reach the minimal base model. The model in full quality is
then represented by this minimal model together with the sequence of operations inverse to the simplifications (the
vertex split operation). This forms a hierarchical structure from which a model at any chosen level of detail can be
created.
Edge collapse
This simplification operation, ecol, takes two vertices connected by an edge and replaces them with a single vertex.
The two triangles {vs, vt, vl} and {vt, vs, vr} that shared the edge are also removed during this operation.
Vertex split
Vertex split (vsplit) is the inverse operation to the edge collapse; it divides a vertex into two new vertices.
A new edge {vs, vt} and two new triangles {vs, vt, vl} and {vt, vs, vr} thereby arise.
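The edge collapse can be sketched on a bare triangle index list (the function name is illustrative; real implementations also update vertex positions and record the data a vertex split needs to invert the operation):

```python
def edge_collapse(triangles, vs, vt):
    """Collapse edge (vs, vt): every reference to vt becomes vs; triangles
    left with a repeated vertex (i.e. those sharing the edge) are removed.
    Returns (remaining triangles, removed triangles)."""
    kept, removed = [], []
    for tri in triangles:
        mapped = tuple(vs if v == vt else v for v in tri)
        if len(set(mapped)) < 3:
            removed.append(tri)          # degenerate: it shared the edge
        else:
            kept.append(mapped)
    return kept, removed
```

The `removed` list is exactly the pair {vs, vt, vl}, {vt, vs, vr} from the description above; replaying the collapses in reverse, re-splitting vertices and re-adding those triangles, reconstructs the full-quality mesh.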
References
[1] D. Luebke, M. Reddy, J. D. Cohen, A. Varshney, B. Watson, R. Huebner: Level of Detail for 3D Graphics, Morgan Kaufmann, 2002, ISBN
0-321-19496-9
3D projection
3D projection is any method of mapping three-dimensional points to a two-dimensional plane. As most current
methods for displaying graphical data are based on planar two-dimensional media, the use of this type of projection
is widespread, especially in computer graphics, engineering and drafting.
Orthographic projection
When the human eye looks at a scene, objects in the distance appear smaller than objects close by. Orthographic
projection ignores this effect to allow the creation of to-scale drawings for construction and engineering.
Orthographic projections are a small set of transforms often used to show profile, detail or precise measurements of a
three dimensional object. Common names for orthographic projections include plane, cross-section, bird's-eye, and
elevation.
If the normal of the viewing plane (the camera direction) is parallel to one of the primary axes (the x, y, or z
axis), the mathematical transformation is as follows. To project the 3D point a_x, a_y, a_z onto the 2D point
b_x, b_y using an orthographic projection parallel to the y axis (profile view), the following equations can be used:

    b_x = s_x * a_x + c_x
    b_y = s_z * a_z + c_z

where the vector s is an arbitrary scale factor and c is an arbitrary offset. These constants are optional, and can be
used to properly align the viewport. Using matrix multiplication, the equations become:

    [b_x]   [s_x   0 ] [a_x]   [c_x]
    [b_y] = [ 0   s_z] [a_z] + [c_z].
While orthographically projected images represent the three dimensional nature of the object projected, they do not
represent the object as it would be recorded photographically or perceived by a viewer observing it directly. In
particular, parallel lengths at all points in an orthographically projected image are of the same scale regardless of
whether they are far away or near to the virtual viewer. As a result, lengths near to the viewer are not foreshortened
as they would be in a perspective projection.
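The profile-view projection just described can be sketched in plain Python (the function name is illustrative):

```python
def ortho_project(a, s=(1.0, 1.0), c=(0.0, 0.0)):
    """Orthographic projection parallel to the y axis:
    b_x = s_x * a_x + c_x,  b_y = s_z * a_z + c_z."""
    ax, ay, az = a
    return (s[0] * ax + c[0], s[1] * az + c[1])
```

Note that a_y, the depth along the viewing direction, never enters the result, which is exactly why parallel lengths keep the same scale at every distance.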
Perspective projection
When the human eye views a scene, objects in the distance appear smaller than objects close by - this is known as
perspective. While orthographic projection ignores this effect to allow accurate measurements, perspective
projection shows distant objects as smaller to provide additional realism.
The perspective projection requires a more involved definition as compared to orthographic projections. A
conceptual aid to understanding the mechanics of this projection is to imagine the 2D projection as though the
object(s) are being viewed through a camera viewfinder. The camera's position, orientation, and field of view control
the behavior of the projection transformation. The following variables are defined to describe this transformation:
    a_{x,y,z} - the 3D position of a point A that is to be projected.
    c_{x,y,z} - the 3D position of the camera C.
    θ_{x,y,z} - the orientation of the camera (represented by Tait-Bryan angles).
    e_{x,y,z} - the display surface's position relative to the camera pinhole.
    b_{x,y}   - the 2D projection of a.

When c = <0,0,0> and θ = <0,0,0>, the 3D vector <1,2,0> is projected to the 2D vector <1,2>.

Otherwise, to compute b, first define a vector d as the position of point A with respect to the coordinate system
defined by the camera (with origin at C and rotated by θ with respect to the initial coordinate system). This is
called a camera transform, and can be expressed as follows, expressing the rotation in terms of rotations about the
x, y, and z axes (these calculations assume that the axes are ordered as a left-handed system of axes):

    [d_x]                                   [a_x - c_x]
    [d_y] = R_x(θ_x) R_y(θ_y) R_z(θ_z) [a_y - c_y]
    [d_z]                                   [a_z - c_z]

with

    R_x(θ_x) = [1, 0, 0;  0, cos θ_x, sin θ_x;  0, -sin θ_x, cos θ_x]
    R_y(θ_y) = [cos θ_y, 0, -sin θ_y;  0, 1, 0;  sin θ_y, 0, cos θ_y]
    R_z(θ_z) = [cos θ_z, sin θ_z, 0;  -sin θ_z, cos θ_z, 0;  0, 0, 1].

This representation corresponds to rotating by three Euler angles (more properly, Tait-Bryan angles), using the xyz
convention, which can be interpreted either as "rotate about the extrinsic axes (axes of the scene) in the order z, y, x
(reading right-to-left)" or "rotate about the intrinsic axes (axes of the camera) in the order x, y, z (reading
left-to-right)". Note that if the camera is not rotated (θ = <0,0,0>), then the matrices drop out (as identities), and
this reduces to simply a shift: d = a - c.

Alternatively, without using matrices (replacing a_x - c_x with x and so on, and abbreviating cos θ to c and
sin θ to s):

    d_x = c_y (s_z y + c_z x) - s_y z
    d_y = s_x (c_y z + s_y (s_z y + c_z x)) + c_x (c_z y - s_z x)
    d_z = c_x (c_y z + s_y (s_z y + c_z x)) - s_x (c_z y - s_z x)

This transformed point can then be projected onto the 2D plane using the formula (here, x/y is used as the projection
plane; literature also may use x/z):

    b_x = (e_z / d_z) d_x + e_x
    b_y = (e_z / d_z) d_y + e_y

An argument using similar triangles shows why this amounts to division by the depth coordinate d_z: the display
surface sits at distance e_z from the pinhole, so a point at depth d_z is scaled by the ratio e_z / d_z.
Diagram
The diagram relates the projected screen coordinate to the model coordinate by similar triangles:

    b_x = (b_z / a_z) a_x

where
    b_x is the screen x coordinate,
    a_x is the model x coordinate,
    b_z is the focal length, the axial distance from the camera center to the image plane,
    a_z is the subject distance.
Because the camera is in 3D, the same formula works for the screen y coordinate, substituting y for x in the above
equation.
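The camera transform and projection described above can be sketched in plain Python (the function name is illustrative; angles are in radians):

```python
import math

def perspective_project(a, camera, orientation, e):
    """Transform world point a into camera space (d), then project onto
    the x/y plane: b = (e_z / d_z) * (d_x, d_y) + (e_x, e_y)."""
    x, y, z = (a[i] - camera[i] for i in range(3))
    sx, sy, sz = (math.sin(t) for t in orientation)
    cx, cy, cz = (math.cos(t) for t in orientation)
    # camera transform (the matrix-free form)
    dx = cy * (sz * y + cz * x) - sy * z
    dy = sx * (cy * z + sy * (sz * y + cz * x)) + cx * (cz * y - sz * x)
    dz = cx * (cy * z + sy * (sz * y + cz * x)) - sx * (cz * y - sz * x)
    # perspective divide by the depth d_z
    return (e[2] / dz * dx + e[0], e[2] / dz * dy + e[1])
```

With the camera at the origin, no rotation, and the display surface one unit in front of the pinhole, the point (1, 2, 5) projects to (0.2, 0.4): five times farther away, five times smaller.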
References
External links
A case study in camera projection (http://nccasymposium.bmth.ac.uk/2007/muhittin_bilginer/index.html)
Creating 3D Environments from Digital Photographs (http://nccasymposium.bmth.ac.uk/2009/McLaughlin_Chris/
McLaughlin_C_WebBasedNotes.pdf)
Further reading
Kenneth C. Finney (2004). 3D Game Programming All in One (http://books.google.com/
?id=cknGqaHwPFkC&pg=PA93&dq="3D+projection"). Thomson Course. p. 93. ISBN 978-1-59200-136-1.
Ralph Koehler. 2D/3D Graphics and Splines with Source Code. ISBN 0759611874.
Caveats
In both of the proposed approaches there are two small problems, which can be trivially solved and which come
from the different conventions used by eye space and texture space. Defining the properties of those spaces is
beyond the scope of this article, but it is well known that textures should usually be addressed in the range [0..1],
while eye space coordinates are addressed in the range [-1..1]. Depending on the texture wrap mode used, various
artifacts may occur, but it is clear that a shift and scale operation is necessary to get the expected result.
The other problem is actually a mathematical issue. It is well known that the matrix math used also produces a back
projection. This artifact has historically been avoided by using a special black and white texture to cut away the
unnecessary projecting contributions. Using pixel shaders, a different approach can be taken: a coordinate check is
sufficient to discriminate between forward (correct) contributions and backward (wrong, to be avoided) ones.
References
1. ^ The original paper [4] from the nVIDIA developer web site [5] includes all the needed documentation on this
issue. The same site also contains additional hints [6].
2. ^ Texture coordinate generation is covered in section 2.11.4 "Generating Texture Coordinates" from the OpenGL
2.0 specification [7]. Eye linear texture coordinate generation is a special case.
3. ^ Texture matrix is introduced in section 2.11.2 "Matrices" of the OpenGL 2.0 specification [7].
External links
http://www.3dkingdoms.com/weekly/weekly.php?a=20 A tutorial showing how to implement projective
texturing using the programmable pipeline approach in OpenGL.
Pyramid of vision
Pyramid of vision is a 3D computer graphics term: the infinite pyramid extending into the real world, with its apex
at the observer's eye and its faces passing through the edges of the viewport ("window").
Quantitative Invisibility
In CAD/CAM, quantitative invisibility (QI) is the number of solid bodies that obscure a point in space as projected
onto a plane. Often, CAD engineers project a model into a plane (a 2D drawing) in order to denote edges that are
visible with a solid line, and those that are hidden with dashed or dimmed lines.
Algorithm
Tracking the number of obscuring bodies gave rise to an algorithm that propagates the quantitative invisibility
throughout the model. This technique uses edge coherence to speed the calculations. However, QI only works well
when the bodies are large solids, non-interpenetrating, and not transparent.
A technique like this falls apart when applied to soft organic tissue as found in the human body, because there is not
always a clear delineation of structures. Also, when images become too cluttered and intertwined, the contribution of
this algorithm is marginal. Arthur Appel of the graphics group at IBM Watson Research coined the term quantitative
invisibility and used it in several of his papers.
External links
Vector Hidden Line Removal and Fractional Quantitative Invisibility [1]
References
Appel, A., "The Notion of Quantitative Invisibility and the Machine Rendering of Solids," Proceedings ACM
National Conference, Thompson Books, Washington, DC, 1967, pp. 387-393, 214-220.
References
[1] http://wheger.tripod.com/vhl/vhl.htm
The rotation is clockwise if our line of sight points in the same direction as u.
It can be shown that this rotation can be applied to an ordinary vector p = (p_x, p_y, p_z) in 3-dimensional space,
considered as a quaternion with a real coordinate equal to zero, by evaluating the conjugation of p by q:

    p' = q p q^-1

using the Hamilton product, where p' = (p'_x, p'_y, p'_z) is the new position vector of the point after the rotation.
In this instance, q is a unit quaternion, so its inverse q^-1 is simply its conjugate.
It follows that conjugation by the product of two quaternions is the composition of conjugations by these
quaternions. If p and q are unit quaternions, then rotation (conjugation) by pq is

    p q v (p q)^-1 = p q v q^-1 p^-1 = p (q v q^-1) p^-1,

which is the same as rotating (conjugating) by q and then by p. The scalar component of the result is necessarily
zero.
The quaternion inverse of a rotation is the opposite rotation, since q^-1 (q v q^-1) q = v. The square of a
quaternion rotation is a rotation by twice the angle around the same axis. More generally, q^n is a rotation by n
times the angle around the same axis as q. This can be extended to arbitrary real n, allowing for smooth
interpolation between spatial orientations; see Slerp.
Two rotation quaternions can be combined into one equivalent quaternion by the relation

    q' = q_2 q_1,

in which q' corresponds to the rotation q_1 followed by the rotation q_2. (Note that quaternion multiplication is
not commutative.) Thus, an arbitrary number of rotations can be composed together and then applied as a single
rotation.
Example
The conjugation operation
Conjugating p by q refers to the operation p ↦ q p q^-1.

Consider the rotation f around the axis v = i + j + k, with a rotation angle of 120°, or 2π/3 radians. The length
of v is √3, the half angle is π/3 (60°) with cosine 1/2 (cos 60° = 0.5) and sine √3/2 (sin 60° ≈ 0.866). We are
therefore dealing with a conjugation by the unit quaternion

    q = cos(π/3) + sin(π/3) · (i + j + k)/√3 = (1 + i + j + k)/2.
It can be proved that the inverse of a unit quaternion is obtained simply by changing the sign of its imaginary
components. As a consequence,

    q^-1 = (1 - i - j - k)/2

and

    f(p) = q p q^-1.

This can be simplified, using the ordinary rules for quaternion arithmetic, to

    f(a i + b j + c k) = c i + a j + b k.

As expected, the rotation corresponds to keeping a cube held fixed at one point, and rotating it 120° about the long
diagonal through the fixed point (observe how the three axes are permuted cyclically).
Quaternion arithmetic in practice
Let us show how we reached the previous result. Expanding the expression for f in two stages, first computing
q p and then multiplying the result by q^-1, and applying the rules of quaternion arithmetic at each step, gives

    f(a i + b j + c k) = q (a i + b j + c k) q^-1 = c i + a j + b k,

which is the expected result. As we can see, such computations are relatively long and tedious if done manually;
however, in a computer program, this amounts to calling the quaternion multiplication routine twice.
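Those two calls to the multiplication routine can be sketched in a few lines of plain Python (quaternions as (w, x, y, z) tuples; the function names are illustrative). Conjugating the basis vector i by q = (1 + i + j + k)/2 indeed permutes the axes cyclically:

```python
def qmul(a, b):
    """Hamilton product of quaternions given as (w, x, y, z) tuples."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw * bw - ax * bx - ay * by - az * bz,
            aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw)

def rotate(q, v):
    """Conjugate the pure quaternion (0, v) by the unit quaternion q:
    exactly two calls to the multiplication routine."""
    q_inv = (q[0], -q[1], -q[2], -q[3])   # inverse of a *unit* quaternion
    return qmul(qmul(q, (0.0,) + tuple(v)), q_inv)[1:]

# q = (1 + i + j + k)/2: the 120-degree rotation about (1, 1, 1)/sqrt(3)
q = (0.5, 0.5, 0.5, 0.5)
```

Here `rotate(q, (1, 0, 0))` yields (0, 1, 0) and `rotate(q, (0, 1, 0))` yields (0, 0, 1), the cyclic permutation x → y → z → x derived above.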
where s and c are shorthand for sin and cos , respectively. Although care should be taken (due to degeneracy as
the quaternion approaches the identity quaternion(1) or the sine of the angle approaches zero) the axis and angle can
be extracted via:
Note that the equality holds only when the square root of the sum of the squared imaginary terms takes the same
sign as qr.
As with other schemes to apply rotations, the centre of rotation must be translated to the origin before the rotation is
applied and translated back to its original position afterwards.
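A hedged sketch of this axis/angle extraction, including a guard for the degenerate near-identity case, might look like the following (the function name is illustrative):

```python
import math

def quat_to_axis_angle(q, eps=1e-9):
    """Extract (axis, angle) from a unit quaternion given as (w, x, y, z)."""
    w, x, y, z = q
    s = math.sqrt(x*x + y*y + z*z)       # |sin(angle/2)| for a unit quaternion
    if s < eps:
        # Degenerate case: near the identity the axis is arbitrary.
        return (1.0, 0.0, 0.0), 0.0
    angle = 2.0 * math.atan2(s, w)       # atan2 is better conditioned than acos here
    return (x / s, y / s, z / s), angle

# A rotation of 0.6 rad about the x axis, encoded and decoded:
axis, angle = quat_to_axis_angle((math.cos(0.3), math.sin(0.3), 0.0, 0.0))
# axis is approximately (1, 0, 0) and angle approximately 0.6
```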
Explanation
Quaternions briefly
The complex numbers can be defined by introducing an abstract symbol i which satisfies the usual rules of algebra
and additionally the rule i² = −1. This is sufficient to reproduce all of the rules of complex number arithmetic: for
example,

(a + b i)(c + d i) = ac + ad i + bc i + bd i² = (ac − bd) + (ad + bc) i.
In the same way the quaternions can be defined by introducing abstract symbols i, j, k which satisfy the rules i² = j²
= k² = i j k = −1 and the usual algebraic rules except the commutative law of multiplication (a familiar example of
such a noncommutative multiplication is matrix multiplication). From this all of the rules of quaternion arithmetic
follow: for example, one can show that

i j = k,  j k = i,  k i = j,  and  j i = −k.
The imaginary part b i + c j + d k of a quaternion behaves like a vector v = (b, c, d) in three-dimensional vector
space, and the real part a behaves like a scalar in R. When quaternions are used in geometry, it is more convenient to
define them as a scalar plus a vector:

a + b i + c j + d k = a + v.
Those who have studied vectors at school might find it strange to add a number to a vector, as they are objects of
very different natures, or to multiply two vectors together, as this operation is usually undefined. However, if one
remembers that it is a mere notation for the real and imaginary parts of a quaternion, it becomes more legitimate. In
other words, the correct reasoning is the addition of two quaternions, one with zero vector/imaginary part, and
another one with zero scalar/real part:
a + v = (a, 0) + (0, v).
We can express quaternion multiplication in the modern language of vector cross and dot products (which were
actually inspired by the quaternions in the first place [citation needed]). In place of the rules i² = j² = k² = ijk = −1 we
have the vector multiplication rule

v w = v × w − v · w

where "×" is the vector cross product and "·" is the vector dot product.
Quaternion multiplication is noncommutative (because of the cross product, which anti-commutes), while
scalar-scalar and scalar-vector multiplications commute. From these rules it follows immediately that (see details):

(s + v)(t + w) = (st − v · w) + (s w + t v + v × w).
The (left and right) multiplicative inverse or reciprocal of a nonzero quaternion is given by the conjugate-to-norm
ratio (see details):

q⁻¹ = q* / ||q||²,

as can be verified by direct calculation.
where v⊥ and v∥ are the components of v perpendicular and parallel to u respectively, and v⊥ is rotated by an angle
θ. This is the formula of a rotation by θ around the u axis.
Orientation
The vector cross product, used to define the axis-angle representation, does confer an orientation ("handedness") to
space: in a three-dimensional vector space, the three vectors in the equation a × b = c will always form a
right-handed set (or a left-handed set, depending on how the cross product is defined), thus fixing an orientation in
the vector space. Alternatively, the dependence on orientation is expressed in referring to such a u that specifies a
rotation as an axial vector. In quaternionic formalism the choice of an orientation of the space corresponds to the order
of multiplication: i j = k but j i = −k. If one reverses the orientation, then the formula above becomes p ↦ q⁻¹ p q,
i.e. a unit q is replaced with the conjugate quaternion, the same behaviour as that of axial vectors.
and find the eigenvector (x, y, z, w) corresponding to the largest eigenvalue (that value will be 1 if and only if Q is a
pure rotation). The quaternion so obtained will correspond to the rotation closest to the original matrix Q.
Performance comparisons
This section discusses the performance implications of using quaternions versus other methods (axis/angle or
rotation matrices) to perform rotations in 3D.
Results
Storage requirements
Method           Storage
Rotation matrix  9
Quaternion       4
Angle/axis       4 (or 3*)
* Note: angle/axis can be stored as 3 elements by multiplying the unit rotation axis by half of the rotation angle,
forming the logarithm of the quaternion, at the cost of additional calculations.
Performance comparison of rotation chaining operations:

Method             Multiplies   Add/subtracts   Total operations
Rotation matrices  27           18              45
Quaternions        16           12              28

Performance comparison of vector rotating operations:

Method           Multiplies   Add/subtracts   sin/cos   Total operations
Rotation matrix  9            6               0         15
Quaternions      15           15              0         30
Angle/axis       23           16              2         41
Used methods
There are three basic approaches to rotating a vector v:
1. Compute the matrix product of a 3 × 3 rotation matrix R and the original 3 × 1 column matrix representing v.
This requires 3 × (3 multiplications + 2 additions) = 9 multiplications and 6 additions, the most efficient method
for rotating a vector.
2. A rotation can be represented by a unit-length quaternion q = (w, r) with scalar (real) part w and vector
(imaginary) part r. The rotation can be applied to a 3D vector v via the formula v′ = v + 2r × (r × v + w v).
This requires only 15 multiplications and 15 additions to evaluate (or 18 multiplications and 12 additions if the
factor of 2 is done via multiplication). This yields the same result as the less efficient but more compact formula
v′ = q v q⁻¹ (treating v as a quaternion with zero scalar part).
3. Use the angle/axis formula to convert an angle/axis to a rotation matrix R, then multiply with a vector.
Converting the angle/axis to R using common subexpression elimination costs 14 multiplies, 2 function calls (sin,
cos), and 10 add/subtracts; from item 1, rotating using R adds an additional 9 multiplications and 6 additions for a
total of 23 multiplies, 16 add/subtracts, and 2 function calls (sin, cos).
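Method 2 above can be sketched as follows. This is an illustrative implementation, not authoritative; the doubling is done via multiplication, so the operation count is the 18-multiply, 12-add variant.

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def rotate_by_quat(w, r, v):
    """Rotate v by the unit quaternion q = (w, r): v' = v + 2 r x (r x v + w v)."""
    t = cross(r, v)
    t = (2.0*t[0], 2.0*t[1], 2.0*t[2])   # t = 2 (r x v)
    u = cross(r, t)                      # u = r x t
    return (v[0] + w*t[0] + u[0],
            v[1] + w*t[1] + u[1],
            v[2] + w*t[2] + u[2])

# 90 degrees about the z axis maps the x axis onto the y axis:
s = math.sqrt(0.5)                       # cos(45 deg) = sin(45 deg)
print(rotate_by_quat(s, (0.0, 0.0, s), (1.0, 0.0, 0.0)))   # approximately (0, 1, 0)
```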
It is straightforward to check that for each matrix M Mᵀ = I, that is, that each matrix (and hence both matrices
together) represents a rotation. Note that since quaternion multiplication is associative, (q_L p) q_R = q_L (p q_R),
so the two matrices must commute. Therefore, there are two commuting subgroups of the set of four-dimensional
rotations. Arbitrary four-dimensional rotations have 6 degrees of freedom, and each matrix represents 3 of those
6 degrees of freedom.
Since an infinitesimal four-dimensional rotation can be represented by a pair of quaternions (as follows), all
(non-infinitesimal) four-dimensional rotations can also be represented.
References
[1] Amnon Katz (1996). Computational Rigid Vehicle Dynamics. Krieger Publishing Co. ISBN 978-1575240169.
[2] J. B. Kuipers (1999). Quaternions and Rotation Sequences: A Primer with Applications to Orbits, Aerospace, and Virtual Reality. Princeton University Press. ISBN 978-0-691-10298-6.
[3] Simon L. Altmann (1986). Rotations, Quaternions, and Double Groups. Dover Publications (see especially Ch. 12).
E. P. Battey-Pratt & T. J. Racey (1980). "Geometric Model for Fundamental Particles". International Journal of Theoretical Physics, Vol. 19, No. 6.
Andreas Raab
Andreas Raab
Dr. Andreas Raab
Citizenship
German
Fields
Computer science
Institutions
Squeak
Croquet Project
OpenQwaq
Tweak programming environment
Etoys
Spouse
Kathleen Raab
Children
Andreas Raab (November 24, 1968 - January 14, 2013) was a German computer scientist who developed new
concepts and applications in 3D graphics. Raab was a key contributor to the Squeak platform and the Croquet virtual
world project. He was an early and longstanding member of the Squeak Central team headed by Alan Kay, and later
an elected member of the Squeak Oversight Board. He authored the initial Windows port of the Squeak virtual
machine, and created the Tweak programming environment used in virtual world applications.
Andreas Raab
Accomplishments
Andreas Raab was a key contributor and participant in the Squeak community. He was the largest contributor to the
code base. Colleagues consider him to have been a brilliant and artistic coder known for his solid design and lack of
bugs.
He ported the Squeak virtual machine to Windows while he was a Ph.D. student at Magdeburg University in 1997.
The Squeak Central team at Walt Disney Imagineering, led by Alan Kay, was very much impressed with his talent.
When Raab graduated, Kay hired him and brought him to California. It did not take long for him to become a
productive member of the core team.
In 2001, it became clear that the Etoys architecture in Squeak had reached limits in the capabilities of its Morphic
interface infrastructure. Andreas Raab proposed defining a "script process" and providing a default scheduling
mechanism that avoids several more general problems. The result was a new user interface, proposed to replace the
Squeak Morphic user interface in the future. Tweak provides mechanisms of islands, asynchronous messaging,
players and costumes, language extensions, projects, and tile scripting.[3] Its underlying object system is class-based,
but to users (during programming) it acts like it is prototype-based. Tweak objects are created and run in Tweak
project windows. Tweak was used extensively in version 1.0 of the Sophie project under the direction of Robert
Stein.
At Alan Kay's Viewpoints Research Institute, Kay and Raab worked with David P. Reed and David A. Smith,
implementing the concepts of David Reed's Ph.D. thesis by creating the first working model of Croquet.
In 2007, Smith and Raab started Qwaq, an immersive collaboration company, which further developed the Croquet
prototype for business applications, such as simulations for the United States Department of Defense. Qwaq was
later renamed to Teleplace and then became 3D Immersive Collaboration Consulting.
In 2009, Raab proposed and implemented a special event-driven version of Squeak VM which does not contain an
event loop, but instead acts as a handler for an externally-provided queue of events, and returns to the caller once all
the events have been processed. Such a modification of the VM makes it very convenient to embed in any
other runtime (e.g. another language's runtime). Squeak on Android is one example of such embedding, with the
Java/Dalvik VM.
Articles
"Coherent Zooming of Illustrations with 3D-Graphics and Text" - Proceedings of the conference on Graphics
interface '97 [4]
"User-centred design in the development of a navigational aid for blind travellers" - Proceedings of the IFIP
TC13 International Conference on Human-Computer Interaction '97, pages 220-227 [5]
TinLizzie WysiWiki and WikiPhone: Alternative approaches to asynchronous and synchronous collaboration on
the Web [6]
Croquet: a menagerie of new user interfaces [7]
Filters and tasks in Croquet [8]
Wouldn't you like to have your own studio in Croquet? [9]
The media messenger [10] - About a new messaging system which sends media (video, presentations, animations,
audio, interactive games, 3D spaces) to other users on the Internet.
Scalability of Collaborative Environments [11]
External links
Andreas' Blog Squeaking Along [12]
References
[1] Press release about his dissertation (German): http://idw-online.de/de/news6763
[2] (archived at Squeak Wiki)
[3] Tweak: Whitepapers (http://web.archive.org/web/20070323064400/http://tweakproject.org/TECHNOLOGY/Whitepapers/)
[4] http://www.vismd.de/lib/exe/fetch.php?media=files:hci:preim_1997_gi.pdf
[5] http://dl.acm.org/citation.cfm?id=647403.723524&coll=DL&dl=GUIDE&CFID=256678779&CFTOKEN=38905774
[6] http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=4144932&url=http%3A%2F%2Fieeexplore.ieee.org%2Fstamp%2Fstamp.jsp%3Ftp%3D%26arnumber%3D4144932
[7] http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=9189
[8] http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=9721
[9] http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=4019389&contentType=Conference+Publications&searchWithin%3Dp_Authors%3A.QT.Raab%2C+A..QT.
[10] http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=1419794&url=http%3A%2F%2Fieeexplore.ieee.org%2Fstamp%2Fstamp.jsp%3Ftp%3D%26arnumber%3D1419794
[11] http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=4019391&contentType=Conference+Publications&searchWithin%3Dp_Authors%3A.QT.Raab%2C+A..QT.
[12] http://squeakingalong.wordpress.com/
RealityEngine
RealityEngine refers to a 3D graphics hardware architecture, and a family of graphics systems that implemented
this architecture, developed and manufactured by Silicon Graphics during the early to mid-1990s. The
RealityEngine was positioned as Silicon Graphics' high-end visualization hardware for their MIPS/IRIX platform
and was used exclusively in their Crimson and Onyx families of visualization systems, which are sometimes
referred to as "graphics supercomputers" or "visualization supercomputers". The RealityEngine was marketed to
and used by large organizations such as companies and universities involved in computer simulation, digital
content creation, engineering and research.
It was succeeded by the InfiniteReality in early 1996, but coexisted
with it for a time as an entry-level option for older systems.
RealityEngine
The RealityEngine was a board set comprising a Geometry Engine board, one to four Raster Memory board(s), and a
DG2 Display Generator board. These boards plugged into a midplane on the host system.
The Geometry Engine was based around the 50 MHz Intel i860XP.
VTX
The VTX was a cost-reduced RealityEngine and, as a consequence, its features and performance were below those
of the RealityEngine. It should not be mistaken for the VGX or VGXT board sets.
RealityEngine2
The RealityEngine2 is an upgraded RealityEngine with twelve instead of eight Geometry Engines, introduced
towards the end of the RealityEngine's life. It uses the GE10 Geometry Engine board, RM4 Raster Memory board
and DG2 Display Generator board. It was succeeded by the InfiniteReality in early 1996.
References
Akeley, Kurt. "RealityEngine Graphics" [1]. Proceedings of SIGGRAPH '93, pp. 109-116.
References
[1] http://www1.cs.columbia.edu/~ravir/6160/papers/p109-akeley.pdf
Reflection (computer graphics)
Examples
Polished or Mirror reflection
Mirrors are usually almost 100% reflective.
Metallic Reflection
Normal (nonmetallic) objects reflect light and colors in the original color of the object being reflected. Metallic
objects reflect lights and colors altered by the color of the metallic object itself.
The large sphere on the left is blue with its reflection marked as metallic. The large sphere
on the right is the same color but does not have the metallic property selected.
Blurry Reflection
Many materials are imperfect reflectors, where the reflections are blurred to various degrees due to surface
roughness that scatters the rays of the reflections.
The large sphere on the left has sharpness set to 100%. The sphere on the right has
sharpness set to 50% which creates a blurry reflection.
Glossy Reflection
Fully glossy reflection shows highlights from light sources, but does not show a clear reflection from objects.
The sphere on the left has normal, metallic reflection. The sphere on the right has the
same parameters, except that the reflection is marked as "glossy".
References
External links
Manuel's Relief texture mapping (http://www.inf.ufrgs.br/~oliveira/RTM.html)
Retained mode
In computing, retained mode rendering is a style for application programming interfaces of graphics libraries, in
which the libraries retain a complete model of the objects to be rendered.[1]
Overview
By using a "retained mode" approach, client calls do not directly cause actual rendering, but instead update an
internal model (typically a list of objects) which is maintained within the library's data space. This allows the library
to optimize when actual rendering takes place along with the processing of related objects.
Some techniques to optimize rendering include:[citation needed]
managing double buffering
performing occlusion culling
only transferring data that has changed from one frame to the next from the application to the library
Immediate mode is an alternative approach; the two styles can coexist in the same library and are not necessarily
exclusionary in practice. For example, OpenGL has immediate mode functions that can use previously defined server
side objects (textures, vertex and index buffers, shaders, etc.) without resending unchanged data.[citation needed]
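The idea can be illustrated with a toy, hypothetical retained-mode API (not modeled on any real library): the client edits a scene model owned by the library, and the library decides what to redraw.

```python
class Scene:
    """A toy retained-mode library: it owns the scene model and tracks changes."""
    def __init__(self):
        self._objects = {}       # retained model, kept in the library's data space
        self._dirty = set()      # only changed objects need re-rendering

    def add(self, name, shape):
        self._objects[name] = shape
        self._dirty.add(name)

    def move(self, name, position):
        # The client updates the model, not the screen.
        self._objects[name] = (self._objects[name][0], position)
        self._dirty.add(name)

    def render(self):
        # The library chooses when to draw, and draws only what changed.
        drawn = sorted(self._dirty)
        self._dirty.clear()
        return drawn

scene = Scene()
scene.add("cube", ("box", (0, 0, 0)))
scene.add("ball", ("sphere", (1, 0, 0)))
scene.render()                  # both objects drawn once
scene.move("cube", (2, 0, 0))
print(scene.render())           # only the changed object: ['cube']
```

The point of the sketch is the division of labor: client calls mutate the retained model, and rendering work is deferred and minimized by the library, as described above.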
References
[1] Retained Mode Versus Immediate Mode (http:/ / msdn. microsoft. com/ en-us/ library/ windows/ desktop/ ff684178(v=vs. 85). aspx)
Example
POV-Ray
#include "colors.inc"   // standard include that defines the named color "Red"
#declare the_angle = 0;
#while (the_angle < 360)
  box {
    <-0.5, -0.5, -0.5>
    <0.5, 0.5, 0.5>
    texture {
      pigment { color Red }
      finish { specular 0.6 }
      normal { agate 0.25 scale 1/2 }
    }
    rotate the_angle
  }
  #declare the_angle = the_angle + 45;
#end
3DMLW
<?xml version="1.0" standalone="no"?>
<document>
<content2d>
<area width="200" height="100" color="#C0C0C0FF" texture="flower.png" />
</content2d>
<content3d id="content" camera="{#cam}">
<camera id="cam" class="cam_rotation" y="10" z="40" viewy="10"/>
<box name="ground" width="100" height="2" depth="100" color="green" class="ground" />
<box name="dynamic" y="20" width="10" height="10" depth="10" color="blue" />
</content3d>
</document>
X3D
<?xml version="1.0" encoding="UTF-8"?>
<X3D>
  <Scene>
    <Shape>
      <IndexedFaceSet>
      </IndexedFaceSet>
    </Shape>
  </Scene>
</X3D>
Tao Presentations
clear_color 0, 0, 0, 1
light 0
light_position 1000, 1000, 1000
rotatey 0.05 * mouse_x
text_box 0, 0, 800, 600,
    extrude_depth 25
    extrude_radius 5
    align_center
    vertical_align_center
    font "Arial", 300
    color "white"
    text "3D"
    line_break
    font_size 80
    text zero hours & ":" & zero minutes & ":" & zero seconds
draw_sphere N ->
    locally
        color_hsv 20 * N, 0.3, 1
        translate 300*cos(N*0.1+time), 300*sin(N*0.17+time), 500*sin(N*0.23+time)
        sphere 50
zero N -> if N < 10 then "0" & text N else text N
Schlick's approximation
In 3D computer graphics, Schlick's approximation is a formula for approximating the contribution of the Fresnel
term in the specular reflection of light from a non-conducting interface (surface) between two media.
According to Schlick's model, the specular reflection coefficient R can be approximated by:

R(θ) = R0 + (1 − R0)(1 − cos θ)⁵

where

R0 = ((n1 − n2) / (n1 + n2))²

and θ is the angle between the viewing direction and the half-angle direction, which is halfway between the
incident light direction and the viewing direction; n1 and n2 are the indices of refraction of the two media at the
interface.
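The formula translates directly into code. In the sketch below, the default refractive indices (n1 = 1.0, n2 = 1.5) are illustrative values for an air-glass interface, not part of Schlick's model itself.

```python
def schlick(cos_theta, n1=1.0, n2=1.5):
    """Schlick's approximation of the Fresnel reflectance.

    cos_theta is cos(theta) as defined above; n1 and n2 are the refractive
    indices of the two media (defaults sketch an air-glass interface).
    """
    r0 = ((n1 - n2) / (n1 + n2)) ** 2    # reflectance at normal incidence
    return r0 + (1.0 - r0) * (1.0 - cos_theta) ** 5

# At normal incidence (cos_theta = 1) the result is just R0, about 0.04 for
# air-glass; at grazing incidence (cos_theta = 0) the reflectance approaches 1.
normal_r = schlick(1.0)
grazing_r = schlick(0.0)
```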
References
Schlick, C. (1994). "An Inexpensive BRDF Model for Physically-based Rendering". Computer Graphics Forum
13 (3): 233. doi:10.1111/1467-8659.1330233 [1].
References
[1] http://dx.doi.org/10.1111%2F1467-8659.1330233
Sculpted prim
A sculpted prim(itive) (or sculpty, sculptie, or just sculpt) is a
Second Life 3D parametric object whose 3D shape is determined by a
texture. These textures are UV maps that form the rendered 3D
sculpted prim. Sculpted prims can be used to create more complex,
organic shapes that are not possible with Second Life's primitive
system.
Technical details
External links
Free sculpted prim creation software
ROKURO (lathe) - Sculpted Prim Maker [16] - A simple NURBS curve maker and revolver (a "lathed" object is one
that is perfectly symmetrical around the center axis)
Sculptie-O-matic [17] - Transforms in-world linksets into sculpties directly
Strata 3D [18]
The Black Box - Sculpt Studio [19]
TATARA - Sculpted Prim Previewer and Editor [20] - Kanae Project's top-of-the-line tool; sculpted prims can be
edited in five modes: ROKURO/TOKOROTEN/MAGE/WAPPA/TSUCHI (contains the ROKURO tool (above) and
the TOKOROTEN tool (below)). ~L$5000
TOKOROTEN (extruder) [21] - Makes a sculpted prim texture TGA file of pushed-out/extruded objects (makes
cookie-cutter style shapes). As of mid-2008 the maker of ROKURO ceased making either available for free; it is
now L$2500 (~$8 USD).
External links
References
[1] http://wiki.secondlife.com/wiki/Sculpted_Prim_Explanation
[2] http://dominodesigns.info/second_life/blender_scripts.html
[3] http://www.wings3d.com/
[4] http://pkpounceworks.sljoint.com/index.php?option=com_remository&Itemid=28&func=fileinfo&id=119
[5] http://www.slexchange.com/modules.php?name=Marketplace&file=item&ItemID=266428
[6] http://wiki.secondlife.com/wiki/Sculpted_Prims:_3d_Software_Guide
[7] http://www.curvy3d.com/
[8] http://www.alias.com/
[9] http://www.autodesk.com/3dsmax
[10] http://artzone.daz3d.com/wiki/doku.php/pub/software/hexagon/start/
[11] http://www.inivis.com/
[12] http://wiki.secondlife.com/wiki/3dm2sculpt
[13] http://www.newtek.com/lightwave/
[14] http://www.xs4all.nl/~elout/sculptpaint/
[15] http://www.pixologic.com/
[16] http://www.kanae.net/secondlife/
[17] http://slurl.com/secondlife/Sri%20Syadasti/21/85/37
[18] http://www.strata.com/
[19] http://www.slexchange.com/modules.php?name=Marketplace&file=item&ItemID=278458
[20] http://kanae.net/secondlife/tatara.html
[21] http://kanae.net/secondlife/tokoroten.html
[22] http://wiki.secondlife.com/wiki/Sculpted_Prims
[23] http://wiki.secondlife.com/wiki/Sculpted_Prims:_FAQ
[24] http://wiki.secondlife.com/wiki/Talk:Sculpted_Prims
[25] http://amandalevitsky.googlepages.com/sculptedprims
[26] http://blog.machinimatrix.org/video-tutorials
[27] http://www.blendernation.com/2007/05/21/blender-and-second-life/
[28] http://independentdeveloper.com/archive/2007/09/27/sculpted_prims_from_existing_3
Silhouette edge
In computer graphics, a silhouette edge on a 3D body projected onto a 2D plane (display plane) is the collection of
points whose outwards surface normal is perpendicular to the view vector. Due to discontinuities in the surface
normal, a silhouette edge is also an edge which separates a front facing face from a back facing face. Without loss of
generality, this edge is usually chosen to be the closest one on a face, so that in parallel view this edge corresponds to
the same one in a perspective view. Hence, if there is an edge between a front facing face and a side facing face, and
another edge between a side facing face and a back facing face, the closer one is chosen. A simple example is looking
at a cube in the direction where the face normal is collinear with the view vector.
The first type of silhouette edge is sometimes troublesome to handle because it does not necessarily correspond to a
physical edge in the CAD model. The reason that this can be an issue is that a programmer might corrupt the original
model by introducing the new silhouette edge into the problem. Also, given that the edge strongly depends upon the
orientation of the model and view vector, this can introduce numerical instabilities into the algorithm (such as when
a trick like dilution of precision is considered).
Computation
To determine the silhouette edge of an object, we first have to know the plane equation of all faces. Then, by
examining the sign of the point-plane distance from the light source to each face, we can determine whether the
face is front- or back-facing.
The silhouette edge(s) consist of all edges separating a front facing face from a back facing face.
Similar Technique
A convenient and practical implementation of front/back facing detection is to use the unit normal of the plane
(which is commonly precomputed for lighting effects anyway), then simply taking the dot product of the light
position with the plane's unit normal and adding the D component of the plane equation (a scalar value):

indicator = (L · n) + plane_D

where plane_D is easily calculated as the negated dot product of a point on the plane with the unit normal of the
plane (i.e. the d term of the plane equation n · x + d = 0):

plane_D = −(p · n)

Note: The homogeneous coordinates, L_w and d, are not always needed for this computation.
After doing this calculation, you may notice that the indicator is actually the signed distance from the plane to the
light position. This distance indicator will be negative if the light is behind the face, and positive if it is in front of
the face.
This is also the technique used in the 2002 SIGGRAPH paper, "Practical and Robust Stenciled Shadow Volumes for
Hardware-Accelerated Rendering"
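The computation described above can be sketched for a triangle mesh as follows. The helper names and mesh layout are illustrative; the point L plays the role of the light source (or viewpoint).

```python
def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def silhouette_edges(vertices, faces, L):
    """Edges separating front-facing from back-facing triangles, as seen from L."""
    front = []
    for i, j, k in faces:
        n = cross(sub(vertices[j], vertices[i]), sub(vertices[k], vertices[i]))
        # Sign of the point-plane distance from L classifies the face.
        front.append(dot(n, sub(L, vertices[i])) > 0)
    edge_faces = {}
    for f, (i, j, k) in enumerate(faces):
        for e in ((i, j), (j, k), (k, i)):
            edge_faces.setdefault(tuple(sorted(e)), []).append(f)
    return [e for e, fs in edge_faces.items()
            if len(fs) == 2 and front[fs[0]] != front[fs[1]]]

# Two triangles sharing edge (0, 1): one faces the point L, the other faces away.
vertices = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.5, 1.0, 0.0), (0.5, -1.0, 1.0)]
faces = [(0, 1, 2), (0, 1, 3)]
print(silhouette_edges(vertices, faces, (0.0, 0.0, 10.0)))   # [(0, 1)]
```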
External links
http://wheger.tripod.com/vhl/vhl.htm
Skeletal animation
Skeletal animation is a technique in computer animation in which a
character is represented in two parts: a surface representation used to
draw the character (called skin or mesh) and a hierarchical set of
interconnected bones (called the skeleton or rig) used to animate (pose
and keyframe) the mesh. While this technique is often used to animate
humans or more generally for organic modeling, it only serves to make
the animation process more intuitive and the same technique can be
used to control the deformation of any object: a door, a spoon, a
building, or a galaxy.
This technique is used in virtually all animation systems where
simplified user interfaces allow animators to control often complex
algorithms and a huge amount of geometry; most notably through
inverse kinematics and other "goal-oriented" techniques. In principle,
however, the intention of the technique is never to imitate real anatomy
or physical processes, but only to control the deformation of the mesh data.
Technique
"Rigging is making our characters able to move. The process of rigging is we take that digital sculpture, and we start building the
skeleton, the muscles, and we attach the skin to the character, and we also create a set of animation controls, which our animators use
to push and pull the body around."
Frank Hanner, character CG supervisor of the Walt Disney Animation Studios, provided a basic understanding on the technique
of character rigging.
This technique is used by constructing a series of 'bones,' sometimes referred to as rigging. Each bone has a three
dimensional transformation (which includes its position, scale and orientation), and an optional parent bone. The
bones therefore form a hierarchy. The full transform of a child node is the product of its parent transform and its own
transform. So moving a thigh-bone will move the lower leg too. As the character is animated, the bones change their
transformation over time, under the influence of some animation controller. A rig is generally composed of both
forward kinematics and inverse kinematics parts that may interact with each other. Skeletal animation refers to
the forward kinematics part of the rig, where a complete set of bone configurations identifies a unique pose.
Each bone in the skeleton is associated with some portion of the character's visual representation. Skinning is the
process of creating this association. In the most common case of a polygonal mesh character, the bone is associated
with a group of vertices; for example, in a model of a human being, the 'thigh' bone would be associated with the
vertices making up the polygons in the model's thigh. Portions of the character's skin can normally be associated
with multiple bones, each one having a scaling factor called a vertex weight, or blend weight. The movement of
skin near the joints of two bones, can therefore be influenced by both bones. In most state-of-the-art graphical
engines, the skinning process is done on the GPU thanks to a shader program.
For a polygonal mesh, each vertex can have a blend weight for each bone. To calculate the final position of the
vertex, each bone transformation is applied to the vertex position, scaled by its corresponding weight. This algorithm
is called matrix palette skinning, because the set of bone transformations (stored as transform matrices) form a
palette for the skin vertex to choose from.
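The blending step can be sketched as follows. For brevity this illustrative example uses translation-only bone transforms; a real implementation would store full 4 × 4 matrices in the palette, but the weighted-sum structure is the same.

```python
def skin_vertex(v, palette, weights):
    """Matrix palette skinning (sketch): blend bone-transformed copies of v.

    weights is a list of (bone_index, weight) pairs for this vertex;
    palette[i] is bone i's current transform (here, a pure translation).
    """
    out = [0.0, 0.0, 0.0]
    for bone, w in weights:
        t = palette[bone]                # this bone's current transform
        for i in range(3):
            out[i] += w * (v[i] + t[i])  # "matrix" applied to v, scaled by weight
    return tuple(out)

# Two bones: bone 0 stays put, bone 1 moves up by 2 units along z.
palette = [(0.0, 0.0, 0.0), (0.0, 0.0, 2.0)]
# A vertex near the joint, influenced half by each bone, moves up by 1 unit:
print(skin_vertex((1.0, 0.0, 0.0), palette, [(0, 0.5), (1, 0.5)]))
# -> (1.0, 0.0, 1.0)
```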
Applications
Skeletal animation is the standard way to animate characters or mechanical objects for a prolonged period of time
(usually over 100 frames). It is commonly used by video game artists and in the movie industry, and can also be
applied to mechanical objects and any other object made up of rigid elements and joints.
Performance capture (or motion capture) can speed up development time of skeletal animation, as well as increasing
the level of realism.
For motion that is too dangerous for performance capture, there are computer simulations that automatically
calculate physics of motion and resistance with skeletal frames. Virtual anatomy properties such as weight of limbs,
muscle reaction, bone strength and joint constraints may be added for realistic bouncing, buckling, fracture and
tumbling effects known as virtual stunts. Virtual stunts are controversial[citation needed] due to their potential to
replace stunt performers. However, there are other applications of virtual anatomy simulations such as military and
emergency response. Virtual soldiers, rescue workers, patients, passengers and pedestrians can be used for training,
virtual engineering and virtual testing of equipment. Virtual anatomy technology may be combined with artificial
intelligence for further enhancement of animation and simulation technology.
References
Sketch-based modeling
Sketch-based modeling is a method of creating 3D models for use in 3D computer graphics applications.
Sketch-based modeling is differentiated from other types of 3D modeling by its interface - instead of creating a 3D
model by directly editing polygons, the user draws a 2D shape which is converted to 3D automatically by the
application.
Purpose
Many computer users think that traditional 3D modeling programs such as Blender or Maya have a steep learning
curve. Novice users often have difficulty creating models in traditional modeling programs without first completing
a lengthy series of tutorials. Sketch-based modeling tools aim to solve this problem by providing a user interface
similar to drawing, with which most users are familiar.
Uses
Sketch-based modeling is primarily designed for use by persons with artistic ability, but no experience with 3D
modeling programs. Curvy3D and Teddy, below, have largely been designed for this purpose. However,
sketch-based modeling is also used for other applications. One popular application is rapid modeling of low-detail
objects for use in prototyping and design work.
Operation
There are two main types of sketch-based modeling. In the first, the user draws a shape in the workspace using a
mouse or a tablet. The system then interprets this shape as a 3D object. Users can then alter the object by cutting off
or adding sections. The process of adding sections to a model is generally referred to as overdrawing. The user is
never required to interact directly with the vertices or NURBS control points.
In the second type of sketch-based modeling, the user draws one or more images on paper, then scans in the images.
The system then automatically converts the sketches to a 3D model.
Examples
Aartform Curvy 3D - http://www.curvy3d.com
Alias Studio Tools
Teddy - http://www-ui.is.s.u-tokyo.ac.jp/~takeo/teddy/teddy.htm
Paint3D - http://www.paint3d.net
ShapeShop - http://www.shapeshop3d.com
Research
A great deal of research is currently being done on sketch-based modeling. A number of papers on this topic are
presented each year at the ACM Siggraph conference. The European graphics conference Eurographics has held four
special conferences on sketch-based modeling:
2007 [1]
2006 [2]
2005 [3]
2004 [4]
References
[1]
[2]
[3]
[4]
Smoothing group
In 3D computer graphics, a smoothing group is a group of polygons in a polygon mesh which should appear to form
a smooth surface. Smoothing groups are useful for describing shapes where some polygons are connected smoothly
to their neighbors, and some are not. For example, in a mesh representing a cylinder, all of the polygons are
smoothly connected except along the edges of the end caps. One could make a smoothing group containing all of the
polygons in one end cap, another containing the polygons in the other end cap, and a last group containing the
polygons in the tube shape between the end caps.
By identifying the polygons in a mesh that should appear to be smoothly connected, smoothing groups allow 3D
modeling software to estimate the surface normal at any point on the mesh, by averaging the surface normals or
vertex normals in the mesh data that describes the mesh. The software can use this data to determine how light
interacts with the model. If each polygon lies in a plane, the software could calculate a polygon's surface normal by
calculating the normal of the polygon's plane, meaning this data would not have to be stored in the mesh. Thus, early
3D modeling software like 3D Studio Max DOS used smoothing groups as a way to avoid having to store accurate
vertex normals for each vertex of the mesh, as a strategy for computer representation of surfaces.
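The averaging step can be sketched as follows. The data layout is illustrative: each face carries its vertex indices, a face normal, and a smoothing group id, and a vertex shared by two smoothing groups gets one normal per group, so hard edges stay hard.

```python
import math

def vertex_normals(faces):
    """faces: list of (vertex_ids, face_normal, smoothing_group) tuples."""
    acc = {}
    for verts, normal, group in faces:
        for v in verts:
            key = (v, group)     # vertices are split between smoothing groups
            a = acc.setdefault(key, [0.0, 0.0, 0.0])
            for i in range(3):
                a[i] += normal[i]
    # Renormalize the accumulated (averaged) normals.
    out = {}
    for key, (x, y, z) in acc.items():
        n = math.sqrt(x*x + y*y + z*z)
        out[key] = (x / n, y / n, z / n)
    return out

# Two faces in the same smoothing group meet at vertices 0 and 2:
faces = [((0, 1, 2), (0.0, 0.0, 1.0), 1),
         ((0, 2, 3), (0.0, 1.0, 0.0), 1)]
normals = vertex_normals(faces)
# vertex 0 in group 1 gets the averaged, renormalized normal (0, 0.707..., 0.707...)
```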
References
Deformable solids
The simulation of volumetric solid soft bodies can be realised by using a variety of approaches.
Spring/mass models
In this approach, the body is modeled as a set of point masses (nodes) connected by ideal weightless elastic springs obeying some variant of Hooke's law. The nodes may either derive from the edges of a two-dimensional polygonal mesh representation of the surface of the object, or from a three-dimensional network of nodes and edges modeling the internal structure of the object (or even a one-dimensional system of links, if for example a rope or hair strand is being simulated). Additional springs between nodes can be added, or the force law of the springs modified, to achieve desired effects. Applying Newton's second law to the point masses, including the forces applied by the springs and any external forces (due to contact, gravity, air resistance, wind, and so on), gives a system of differential equations for the motion of the nodes, which is solved by standard numerical schemes for solving ODEs. Rendering of a three-dimensional mass-spring lattice is often done using free-form deformation, in which the rendered mesh is embedded in the lattice and distorted to conform to the shape of the lattice as it evolves. Setting all point masses to zero yields the stretched grid method, aimed at solving several engineering problems involving elastic grid behavior.

Two nodes as mass points connected by a parallel circuit of a spring and a damper.
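The scheme described above can be sketched as follows. This is a minimal illustration (semi-implicit Euler integration, a single Hooke spring with axial damping), not production soft-body code; all names and constants are chosen for the example.

```python
import numpy as np

def step(pos, vel, springs, mass, dt, g=9.81, k=100.0, damping=1.0):
    # One semi-implicit (symplectic) Euler step of a mass-spring system.
    # springs: list of (i, j, rest_length); pinned nodes have mass = inf.
    force = np.zeros_like(pos)
    force[:, 1] -= g * np.where(np.isfinite(mass), mass, 0.0)  # gravity
    for i, j, rest in springs:
        d = pos[j] - pos[i]
        length = np.linalg.norm(d)
        direction = d / length
        # Hooke's law plus damping along the spring axis
        f = (k * (length - rest)
             + damping * np.dot(vel[j] - vel[i], direction)) * direction
        force[i] += f
        force[j] -= f
    acc = force / mass[:, None]        # infinite mass -> zero acceleration
    vel = vel + dt * acc               # update velocity first...
    pos = pos + dt * vel               # ...then position (semi-implicit)
    return pos, vel

# A single mass hanging from a pinned anchor by one spring.
pos = np.array([[0.0, 0.0], [0.0, -1.0]])
vel = np.zeros_like(pos)
mass = np.array([np.inf, 1.0])
for _ in range(2000):                  # simulate 20 s at dt = 0.01
    pos, vel = step(pos, vel, [(0, 1, 1.0)], mass, dt=0.01)
# The mass settles near y = -(1 + m*g/k) = -1.0981
```

Semi-implicit Euler is a common default for such systems because it is as cheap as forward Euler but conserves energy far better over long runs.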
Shape matching
In this scheme, penalty forces or constraints are applied to the model to drive it towards its original shape (i.e. the
material behaves as if it has shape memory). To conserve momentum the rotation of the body must be estimated
properly, for example via polar decomposition. To approximate finite element simulation, shape matching can be
applied to three dimensional lattices and multiple shape matching constraints blended.
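A minimal sketch of the rotation estimate described above, using an SVD in place of an explicit polar decomposition (the rotation factor is the same); the function name and the SVD route are assumptions of this example.

```python
import numpy as np

def shape_match_goals(rest, current):
    # Goal positions that rigidly fit the rest shape to the current one:
    # translate to the centroid, rotate by the rotation part of the
    # shape covariance (polar decomposition, computed here via SVD).
    centroid_rest = rest.mean(axis=0)
    centroid_cur = current.mean(axis=0)
    P = rest - centroid_rest
    Q = current - centroid_cur
    A = Q.T @ P                        # covariance between the two shapes
    U, _, Vt = np.linalg.svd(A)
    R = U @ Vt                         # keep rotation, discard stretch
    if np.linalg.det(R) < 0.0:         # guard against reflections
        U[:, -1] *= -1.0
        R = U @ Vt
    return centroid_cur + P @ R.T

# A rigidly moved square is matched exactly (goals == current positions);
# a deformed body would instead be pulled toward this rigid fit.
rest = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
rot90 = np.array([[0.0, -1.0], [1.0, 0.0]])
moved = rest @ rot90.T + np.array([2.0, 0.0])
goals = shape_match_goals(rest, moved)
```

Penalty forces or position corrections then drive each node toward its goal position, which is what gives the material its shape memory.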
Cloth simulation
In the context of computer graphics, cloth simulation refers to the simulation of soft bodies in the form of two
dimensional continuum elastic membranes, that is, for this purpose, the actual structure of real cloth on the yarn level
can be ignored (though modeling cloth on the yarn level has been tried). Via rendering effects, this can produce a
visually plausible emulation of textiles and clothing, used in a variety of contexts in video games, animation, and
film. It can also be used to simulate two dimensional sheets of materials other than textiles, such as deformable metal
panels or vegetation. In video games it is often used to enhance the realism of clothed characters, which otherwise would have to be entirely animated by hand.
Cloth simulators are generally based on mass-spring models, but a distinction must be made between force-based
and position-based solvers.
Force-based cloth
The mass-spring model (obtained from a polygonal mesh representation of the cloth) determines the internal spring
forces acting on the nodes at each timestep (in combination with gravity and applied forces). Newton's second law
gives equations of motion which can be solved via standard ODE solvers. Creating high-resolution cloth with realistic stiffness is not possible, however, with simple explicit solvers (such as forward Euler integration), unless the timestep is made too small for interactive applications, since explicit integrators are numerically unstable for sufficiently stiff systems. Therefore implicit solvers must be used, which require the solution of a large sparse matrix system (via e.g. the conjugate gradient method); this itself may be difficult to achieve at interactive frame rates. An alternative is to use an explicit method with low stiffness, with ad hoc methods to avoid instability and excessive stretching (e.g. strain-limiting corrections).
Position-based dynamics
To avoid needing to do an expensive implicit solution of a system of ODEs, many real-time cloth simulators (notably
PhysX, Havok Cloth, and Maya nCloth) use position based dynamics (PBD), an approach based on constraint
relaxation. The mass-spring model is converted into a system of constraints, which demands that the distance
between the connected nodes be equal to the initial distance. This system is solved sequentially and iteratively, by
directly moving nodes to satisfy each constraint, until sufficiently stiff cloth is obtained. This is similar to a
Gauss-Seidel solution of the implicit matrix system for the mass-spring model. Care must be taken though to solve
the constraints in the same sequence each timestep, to avoid spurious oscillations, and to make sure that the
constraints do not violate linear and angular momentum conservation. Additional position constraints can be applied,
for example to keep the nodes within desired regions of space (sufficiently close to an animated model for example),
or to maintain the body's overall shape via shape matching.
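The constraint-projection step described above can be sketched as follows. This is an illustrative distance-constraint solver in the spirit of PBD, not the API of PhysX, Havok Cloth, or nCloth; all names are invented for the example.

```python
import numpy as np

def project_distance(pos, inv_mass, i, j, rest):
    # Move the two nodes directly so their distance approaches `rest`,
    # weighting corrections by inverse mass to conserve momentum; a
    # pinned node has inverse mass 0 and never moves.
    d = pos[j] - pos[i]
    length = np.linalg.norm(d)
    w = inv_mass[i] + inv_mass[j]
    if length == 0.0 or w == 0.0:
        return
    correction = (length - rest) / (length * w) * d
    pos[i] += inv_mass[i] * correction
    pos[j] -= inv_mass[j] * correction

# Three nodes in a line; the first is pinned. Repeated Gauss-Seidel
# sweeps in a FIXED constraint order pull both rest lengths to 1.
pos = np.array([[0.0, 0.0], [1.5, 0.0], [3.0, 0.0]])
inv_mass = np.array([0.0, 1.0, 1.0])
for _ in range(50):
    for i, j in [(0, 1), (1, 2)]:
        project_distance(pos, inv_mass, i, j, rest=1.0)
```

Fewer iterations give a visibly stretchier cloth, which is why the iteration count effectively acts as a stiffness control in PBD solvers.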
Other applications
Other effects which may be simulated via the methods of soft-body dynamics are:
Destructible materials: fracture of brittle solids, cutting of soft bodies, and tearing of cloth. The finite element
method is especially suited to modelling fracture as it includes a realistic model of the distribution of internal
stresses in the material, which physically is what determines when fracture occurs, according to fracture
mechanics.
Plasticity (permanent deformation) and melting
Simulated hair, fur, and feathers
Simulated organs for biomedical applications
Simulating fluids in the context of computer graphics would not normally be considered soft-body dynamics, which
is usually restricted to mean simulation of materials which have a tendency to retain their shape and form. In
contrast, a fluid assumes the shape of whatever vessel contains it, as the particles are bound together by relatively
weak forces.
External links
"The Animation of Natural Phenomena", CMU course on physically based animation, including deformable
bodies (http://graphics.cs.cmu.edu/courses/15-869/)
Soft body dynamics video example (http://youtube.com/watch?v=gbXCGpuJI7w)
Introductory article (http://vizproto.prism.asu.edu/classes/sp03/wyman_g/Soft Body Dynamics.htm)
Article by Thomas Jakobsen which explains the basics of the PBD method (http://www.teknikus.dk/tj/
gdc2001.htm)
Solid modeling
Solid modeling (or modelling) is a consistent set of principles for
mathematical and computer modeling of three-dimensional solids.
Solid modeling is distinguished from related areas of geometric
modeling and computer graphics by its emphasis on physical fidelity.
Together, the principles of geometric and solid modeling form the
foundation of computer-aided design and in general support the
creation, exchange, visualization, animation, interrogation, and
annotation of digital models of physical objects.
Overview
The use of solid modeling techniques allows for the automation of several difficult engineering calculations that are carried out as a part of the design process. Simulation, planning, and verification of processes such as machining and assembly were among the main catalysts for the development of solid modeling. More recently, the range of supported manufacturing applications has been greatly expanded to include sheet metal manufacturing, injection molding, welding, pipe routing, etc. Beyond traditional manufacturing, solid modeling techniques serve as the foundation for rapid prototyping, digital data archival and reverse engineering (by reconstructing solids from sampled points on physical objects), mechanical analysis using finite elements, motion planning and NC path verification, kinematic and dynamic analysis of mechanisms, and so on. A central problem in all these applications is the ability to effectively represent and manipulate three-dimensional geometry in a fashion that is consistent with the physical behavior of real artifacts. Solid modeling research and development has effectively addressed many of these issues and continues to be a central focus of computer-aided engineering.

The geometry in solid modeling is fully described in 3D space; objects can be viewed from any angle.
Mathematical foundations
The notion of solid modeling as practiced today relies on the specific need for informational completeness in
mechanical geometric modeling systems, in the sense that any computer model should support all geometric queries
that may be asked of its corresponding physical object. The requirement implicitly recognizes the possibility of
several computer representations of the same physical object as long as any two such representations are consistent.
It is impossible to computationally verify informational completeness of a representation unless the notion of a
physical object is defined in terms of computable mathematical properties and independent of any particular
representation. Such reasoning led to the development of the modeling paradigm that has shaped the field of solid
modeling as we know it today.
All manufactured components have finite size and well behaved boundaries, so initially the focus was on
mathematically modeling rigid parts made of homogeneous isotropic material that could be added or removed. These
postulated properties can be translated into properties of subsets of three-dimensional Euclidean space. The two
common approaches to define solidity rely on point-set topology and algebraic topology respectively. Both models
specify how solids can be built from simple pieces or cells.
No single representation scheme is optimal for all of these tasks, which forces modern geometric modeling systems to maintain several representation schemes of solids and also to facilitate efficient conversion between representation schemes.
Below is a list of common techniques used to create or represent solid models. Modern modeling software may use a
combination of these schemes to represent a solid.
Cell decomposition
This scheme follows from the combinatoric (algebraic topological) descriptions of solids detailed above. A solid can
be represented by its decomposition into several cells. Spatial occupancy enumeration schemes are a particular case
of cell decompositions where all the cells are cubical and lie in a regular grid. Cell decompositions provide convenient ways for computing certain topological properties of a solid, such as its connectedness (number of pieces) and genus (number of holes). Cell decompositions in the form of triangulations are the representations used in 3D finite elements for the numerical solution of partial differential equations. Other cell decompositions, such as a Whitney regular stratification or Morse decompositions, may be used for applications in robot motion planning.
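As a toy illustration of computing a topological property from a cell decomposition, the following counts the connected pieces of a solid given as a spatial occupancy enumeration; the set-of-cells representation and the function name are assumptions of the sketch.

```python
from collections import deque

def count_pieces(cells):
    # Connectedness of a solid given as a set of occupied grid cells:
    # flood-fill with 6-connectivity and count how many separate fills
    # are needed to exhaust the set.
    remaining = set(cells)
    pieces = 0
    while remaining:
        pieces += 1
        queue = deque([remaining.pop()])
        while queue:
            x, y, z = queue.popleft()
            for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                neighbor = (x + dx, y + dy, z + dz)
                if neighbor in remaining:
                    remaining.remove(neighbor)
                    queue.append(neighbor)
    return pieces

# Two separate 2x1x1 blocks -> two pieces
solid = {(0, 0, 0), (1, 0, 0), (5, 0, 0), (6, 0, 0)}
```

Computing genus from a cell decomposition is similar in spirit but requires counting cells of each dimension rather than a single flood fill.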
Boundary representation
In this scheme a solid is represented by the cellular decomposition of its boundary. Since the boundaries of solids
have the distinguishing property that they separate space into regions defined by the interior of the solid and the
complementary exterior according to the Jordan-Brouwer theorem discussed above, every point in space can
unambiguously be tested against the solid by testing the point against the boundary of the solid. Recall that the ability to test every point in the solid provides a guarantee of solidity. Using ray casting it is possible to count the number of intersections of a cast ray against the boundary of the solid: an even number of intersections corresponds to an exterior point, and an odd number to an interior point. The assumption of boundaries as manifold cell
complexes forces any boundary representation to obey disjointedness of distinct primitives, i.e. there are no
self-intersections that cause non-manifold points. In particular, the manifoldness condition implies all pairs of
vertices are disjoint, pairs of edges are either disjoint or intersect at one vertex, and pairs of faces are disjoint or
intersect at a common edge. Several data structures that are combinatorial maps have been developed to store
boundary representations of solids. In addition to planar faces, modern systems provide the ability to store quadrics
and NURBS surfaces as a part of the boundary representation. Boundary representations have evolved into a
ubiquitous representation scheme of solids in most commercial geometric modelers because of their flexibility in
representing solids exhibiting a high level of geometric complexity.
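The parity argument above is easiest to see in two dimensions, where the boundary is a polygon; the following even-odd test is a standard sketch of the same idea, with names chosen for the example.

```python
def inside(point, polygon):
    # Even-odd test: cast a ray from the point toward +x and count
    # crossings with the boundary; odd -> interior, even -> exterior.
    x, y = point
    crossings = 0
    n = len(polygon)
    for k in range(n):
        (x1, y1), (x2, y2) = polygon[k], polygon[(k + 1) % n]
        if (y1 > y) != (y2 > y):          # edge straddles the ray's height
            x_hit = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_hit > x:                 # crossing lies on the +x side
                crossings += 1
    return crossings % 2 == 1

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
```

The three-dimensional version intersects the ray with the boundary's faces instead of polygon edges, but the even/odd counting is identical.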
Sweeping
The basic notion embodied in sweeping schemes is simple. A set moving through space may trace or sweep out a volume (a solid) that may be represented by the moving set and its trajectory. Such a representation is important in
the context of applications such as detecting the material removed from a cutter as it moves along a specified
trajectory, computing dynamic interference of two solids undergoing relative motion, motion planning, and even in
computer graphics applications such as tracing the motions of a brush moved on a canvas. Most commercial CAD
systems provide (limited) functionality for constructing swept solids mostly in the form of a two dimensional cross
section moving on a space trajectory transversal to the section. However, current research has shown several
approximations of three dimensional shapes moving across one parameter, and even multi-parameter motions.
Implicit representation
A very general method of defining a set of points X is to specify a predicate that can be evaluated at any point in
space. In other words, X is defined implicitly to consist of all the points that satisfy the condition specified by the
predicate. The simplest form of a predicate is the condition on the sign of a real valued function resulting in the
familiar representation of sets by equalities and inequalities. For example, if f(x, y, z) = x² + y² + z² − R², then the conditions f < 0, f = 0, and f > 0 describe, respectively, the interior, the boundary, and the exterior of a ball of radius R centered at the origin.
More complex functional primitives may be defined by boolean combinations of simpler predicates. Furthermore, the theory of R-functions allows conversion of such representations into a single function inequality for any closed semi-analytic set. Such a representation can be converted to a boundary representation using polygonization algorithms, for example the marching cubes algorithm.
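A minimal sketch of implicit representation: a sphere as a sign predicate, with min/max standing in for the boolean combinations mentioned above (a crude special case of what R-functions do smoothly); all names are illustrative.

```python
def sphere(radius):
    # Implicit predicate: f(x, y, z) <= 0 exactly on and inside the sphere
    return lambda x, y, z: x * x + y * y + z * z - radius * radius

def intersection(f, g):
    # max(f, g) <= 0 only where BOTH predicates hold (boolean AND)
    return lambda x, y, z: max(f(x, y, z), g(x, y, z))

def union(f, g):
    # min(f, g) <= 0 where EITHER predicate holds (boolean OR)
    return lambda x, y, z: min(f(x, y, z), g(x, y, z))

ball = sphere(1.0)
shifted = lambda x, y, z: ball(x - 1.0, y, z)   # same ball centered at (1, 0, 0)
lens = intersection(ball, shifted)              # lens-shaped overlap region
```

A polygonizer such as marching cubes would sample a predicate like `lens` on a grid and extract the f = 0 isosurface as a boundary mesh.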
Computer-aided design
The modeling of solids is only the minimum requirement of a CAD system's capabilities. Solid modelers have become commonplace in engineering departments in the last ten years due to faster computers and competitive software pricing. Solid modeling software
creates a virtual 3D representation of components for machine design and analysis. A typical graphical user interface
includes programmable macros, keyboard shortcuts and dynamic model manipulation. The ability to dynamically
re-orient the model, in real-time shaded 3-D, is emphasized and helps the designer maintain a mental 3-D image.
A solid part model generally consists of a group of features, added one at a time, until the model is complete.
Engineering solid models are built mostly with sketcher-based features; 2-D sketches that are swept along a path to
become 3-D. These may be cuts, or extrusions for example. Design work on components is usually done within the
context of the whole product using assembly modeling methods. An assembly model incorporates references to
individual part models that comprise the product.
Another type of modeling technique is 'surfacing' (freeform surface modeling). Here, surfaces are defined, trimmed, merged, and filled to make the model solid. The surfaces are usually defined with datum curves in space and a variety of
complex commands. Surfacing is more difficult, but better applicable to some manufacturing techniques, like
injection molding. Solid models for injection molded parts usually have both surfacing and sketcher based features.
Engineering drawings can be created semi-automatically and reference the solid models.
Parametric modeling
Parametric modeling uses parameters to define a model (dimensions, for example). Examples of parameters are:
dimensions used to create model features, material density, formulas to describe swept features, imported data (that
describe a reference surface, for example). The parameter may be modified later, and the model will update to reflect
the modification. Typically, there is a relationship between parts, assemblies, and drawings. A part consists of
multiple features, and an assembly consists of multiple parts. Drawings can be made from either parts or assemblies.
Example: A shaft is created by extruding a circle 100mm. A hub is assembled to the end of the shaft. Later, the shaft
is modified to be 200mm long (click on the shaft, select the length dimension, modify to 200). When the model is
updated the shaft will be 200mm long, the hub will relocate to the end of the shaft to which it was assembled, and
the engineering drawings and mass properties will reflect all changes automatically.
Related to parameters, but slightly different are constraints. Constraints are relationships between entities that make
up a particular shape. For a window, the sides might be defined as being parallel, and of the same length. Parametric
modeling is obvious and intuitive. But for the first three decades of CAD this was not the case. Modification meant
re-draw, or add a new cut or protrusion on top of old ones. Dimensions on engineering drawings were created,
instead of shown. Parametric modeling is very powerful, but requires more skill in model creation. A complicated
model for an injection molded part may have a thousand features, and modifying an early feature may cause later
features to fail. Skillfully created parametric models are easier to maintain and modify. Parametric modeling also
lends itself to data re-use. A whole family of capscrews can be contained in one model, for example.
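The shaft-and-hub behavior above can be mimicked in a toy model where geometry is derived from parameters on demand; this is a sketch of the dependency idea only, not how any CAD kernel is implemented, and both classes are invented for the example.

```python
class Shaft:
    # Toy parametric part: derived geometry is computed from the driving
    # dimensions on demand, so edits propagate automatically.
    def __init__(self, length, diameter):
        self.length = length
        self.diameter = diameter

    @property
    def end(self):
        # Reference geometry other components can attach to
        return (self.length, 0.0)

class Hub:
    # Assembly constraint: the hub always sits at the shaft's end.
    def __init__(self, shaft):
        self.shaft = shaft

    @property
    def position(self):
        return self.shaft.end

shaft = Shaft(length=100.0, diameter=20.0)
hub = Hub(shaft)
shaft.length = 200.0   # modify the driving dimension; the hub relocates
```

Because the hub stores a reference to the shaft rather than a copied coordinate, changing the driving dimension updates everything downstream, which is the essence of parametric, history-based modeling.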
Engineering
Because CAD programs running on computers understand the true geometry comprising complex shapes, many attributes of a 3D solid, such as its center of gravity, volume, and mass, can be quickly calculated. For instance, the cube shown at the top of this article measures 8.4 mm from flat to flat. Despite its many radii and the shallow pyramid on each of its six faces, its properties are readily calculated for the designer, as shown in the screenshot at right.
External links
sgCore C++/C# library (http://www.geometros.com)
The Solid Modeling Association (http://solidmodeling.org/)
Specularity
Specularity is the visual appearance of specular reflections. In computer graphics, it refers to the quantity used in three-dimensional (3D) rendering that represents the amount of specular reflectivity a surface has. It is a key component in determining the brightness of specular highlights, along with shininess, which determines the size of the highlights.
It is frequently used in real-time computer graphics where the
mirror-like specular reflection of light from other surfaces is often
ignored (due to the more intensive computations required to calculate
it), and the specular reflection of light directly from point light sources
is modelled as specular highlights.
Specular highlights on a pair of spheres
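The role of the two quantities can be illustrated with the classic Phong specular term, one common way (among several) of modeling such highlights; the function and parameter names are chosen for the sketch.

```python
import math

def phong_specular(light_dir, view_dir, normal, specularity, shininess):
    # Phong specular term: `specularity` scales highlight brightness,
    # `shininess` (the exponent) controls how tight the highlight is.
    def unit(v):
        length = math.sqrt(sum(c * c for c in v))
        return tuple(c / length for c in v)
    l, v, n = unit(light_dir), unit(view_dir), unit(normal)
    n_dot_l = sum(a * b for a, b in zip(n, l))
    # Reflect the light direction about the surface normal
    r = tuple(2.0 * n_dot_l * nc - lc for nc, lc in zip(n, l))
    r_dot_v = max(0.0, sum(a * b for a, b in zip(r, v)))
    return specularity * r_dot_v ** shininess

# Brightest case: the viewer sits exactly on the reflection direction
peak = phong_specular((0, 0, 1), (0, 0, 1), (0, 0, 1),
                      specularity=0.8, shininess=32)
```

Raising the shininess exponent makes the term fall off faster as the viewer moves off the reflection direction, shrinking the highlight without changing its peak brightness.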
Static mesh
Static meshes are polygon meshes which constitute a major part of map architecture in many game engines, including Unreal Engine, Source, and Unity. The word "static" refers only to the fact that static meshes cannot be vertex-animated; they can still be moved, scaled, or reskinned in real time.
Static meshes can form more complex shapes than CSG (the other major part of map architecture) and are faster to render per triangle.
Characteristics
A Static Mesh contains information about its shape (vertices, edges and sides), a reference to the textures to be used,
and optionally a collision model (see the simple collision section below).
Collision
There are three ways for a Static Mesh to collide:
No collision: a static mesh can be set not to block anything. This is often used for small decoration like grass.
Per-polygon collision (default): individual polygons collide with actors. Each material (i.e. each part of the Static
Mesh using a separate texture) can be set to collide or not independently from the rest. The advantage of this
method is that one part of the Static Mesh can collide while another doesn't (a common example: a tree's trunk
collides, but its leaves don't). The disadvantage is that for complex meshes this can take a lot of processing power.
Simple collision: the static mesh doesn't collide itself, but has built-in blocking volumes that collide instead.
Usually, the blocking volumes will have a simpler shape than the Static Mesh, resulting in faster collision
calculation.
Texturing
Although Static Meshes have built-in information on what textures to use, this can be overridden by adding a new
skin in the Static Mesh's properties. Alternatively, the Static Mesh itself can be modified to use different textures by
default.
Usage
In maps, Static Meshes are very common, as they are used for anything more complex than basic architecture (in
which case CSG is used) or terrain.
Additionally, static meshes sometimes represent other objects, including weapon projectiles and destroyed vehicles. For example, after a rendered cutscene in which a tank is destroyed, the tank's hull may be added to the in-game world as a static mesh.
External links
UnrealWiki: Static Mesh [1]
References
[1] http://wiki.beyondunreal.com/wiki/Static_Mesh
Stereoscopic acuity
Stereoscopic acuity, also stereoacuity, is the smallest detectable depth difference that can be seen in binocular
vision.
dγ = c a dz / z²

where a is the interocular separation of the observer, z the distance of the fixated peg from the eye, and dγ the difference in binocular disparity produced by a depth interval dz. To express dγ in the usual unit of minutes of arc, a multiplicative constant c is inserted whose value is 3437.75 (1 radian in arcminutes). In the calculation, a, dz and z must be in the same units, say, feet, inches, cm or meters. For the average interocular distance of 6.5 cm, a target distance of 6 m and a typical stereoacuity of 0.5 minute of arc, the just detectable depth interval is 8 cm. As targets come closer, this interval shrinks by the inverse square of the distance, so that the equivalent detectable depth interval at ¼ meter is 0.01 cm, about the depth of the impression of the head on a coin. These very small values of normal stereoacuity, expressed as differences of either object distances or angles of disparity, make it a hyperacuity.
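Rearranging the formula for dz reproduces the article's numbers. This is a small illustrative calculation (the helper name is invented); all lengths are in centimeters.

```python
def depth_interval(acuity_arcmin, interocular, distance):
    # Smallest detectable depth difference: dz = dgamma * z^2 / (c * a),
    # with c = 3437.75 arcminutes per radian; all lengths in one unit.
    c = 3437.75
    return acuity_arcmin * distance**2 / (c * interocular)

# The article's example: a = 6.5 cm, z = 6 m = 600 cm, acuity 0.5 arcmin
dz_far = depth_interval(0.5, interocular=6.5, distance=600.0)   # about 8 cm
# Inverse-square scaling: the same acuity at 1/4 m
dz_near = depth_interval(0.5, interocular=6.5, distance=25.0)   # about 0.01 cm
```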
Tests of stereoacuity
Since the two-peg device, named Howard-Dolman after its inventors,[2]
is cumbersome, stereoacuity is usually measured using a stereogram in
which separate panels are shown to each eye by superimposing them in
a stereoscope using prisms or goggles with color or polarizing filters or
alternating occlusion (for a review see [3]). A good procedure is a chart, analogous to the familiar Snellen visual acuity chart, in which one letter in each row differs in depth (in front or behind), sequentially increasing in difficulty.

Example of a Snellen-like depth test

For children the fly test is ideal: the image of a fly is transilluminated by polarized light; wearing polarizing glasses, the wing appears at a different depth and allows stereopsis to be demonstrated by trying to pull on it.
Expected performance
There is no equivalent in stereoacuity of the normal 20/20 visual acuity standard. In every case, the numerical score,
even if expressed in disparity angle, depends to some extent on the test being used. Superior observers under ideal
conditions can achieve 0.1 arc min or even better.
The distinction between screening for the presence of stereopsis and measuring stereoacuity is valuable. To ascertain that depth can be seen in a binocular view, a test must be easily administered and not subject to deception. The random-dot stereogram is widely used for this purpose and has the advantage that, for the uninitiated, the object shape is unknown. It is made of random small pattern elements; depth can be created only in multiples of these elements and therefore may not reach the small threshold disparity which is the purpose of stereoacuity measurements.
A population study revealed a surprisingly high incidence of good stereoacuity.[4] Out of 188 biology students, 97.3% could perform at 2.3 minutes of arc or better.
References
[1] Howard IP, Rogers BJ (2002) Seeing in Depth. Vol. II, Chapter 19. Porteous, Toronto
[2] Howard HJ (1919) A test for the judgment of distance. Amer. J. Ophthalmol., 2, 656-675
[3] http://rspb.royalsocietypublishing.org/content/early/2011/04/09/rspb.2010.2777.long
[4] Coutant BE (1993) Population distribution of stereoscopic ability. Ophthalmic Physiol Opt, 13, 3-7
[5] Westheimer G, Pettet MW (1990) Contrast and duration of exposure differentially affect vernier and stereoscopic acuity. Proc R Soc Lond B Biol Sci, 241, 42-6
[6] The Ferrier Lecture (1994) Seeing depth with two eyes: stereopsis. Proc R Soc Lond B Biol Sci, 257, 205-14
[7] Fendick M, Westheimer G. (1983) Effects of practice and the separation of test targets on foveal and peripheral stereoacuity. Vision Research,
23, 145-50
[8] McKee SP, Taylor DG (2010) The precision of binocular and monocular depth judgments in natural settings. J. Vision, 10, 5
[9] Harwerth RS, Rawlings SC (1977) Viewing time and stereoscopic threshold with random-dot stereograms. Am J Optom Physiol Opt, 54,
452-457.
External links
Review of 3D displays and stereo vision (http://rspb.royalsocietypublishing.org/content/278/1716/2241.
long)
Subdivision surface
A subdivision surface, in the field of 3D computer graphics, is a method of representing a smooth surface via the
specification of a coarser piecewise linear polygon mesh. The smooth surface can be calculated from the coarse mesh
as the limit of a recursive process of subdividing each polygonal face into smaller faces that better approximate the
smooth surface.
Overview
Subdivision surfaces are defined recursively. The process starts with a
given polygonal mesh. A refinement scheme is then applied to this
mesh. This process takes that mesh and subdivides it, creating new
vertices and new faces. The positions of the new vertices in the mesh
are computed based on the positions of nearby old vertices. In some
refinement schemes, the positions of old vertices might also be altered
(possibly based on the positions of new vertices).
This process produces a denser mesh than the original one, containing
more polygonal faces. This resulting mesh can be passed through the
same refinement scheme again and so on.
The limit subdivision surface is the surface produced from this process
being iteratively applied infinitely many times. In practical use
however, this algorithm is only applied a limited number of times. The
limit surface can also be calculated directly for most subdivision
surfaces using the technique of Jos Stam, which eliminates the need for
recursive refinement. Subdivision surfaces and T-Splines are
competing technologies. Mathematically, subdivision surfaces are
spline surfaces with singularities.
Refinement schemes
Subdivision surface refinement schemes can be broadly classified into two categories: interpolating and
approximating. Interpolating schemes are required to match the original position of vertices in the original mesh.
Approximating schemes are not; they can and will adjust these positions as needed. In general, approximating
schemes have greater smoothness, but editing applications that allow users to set exact surface constraints require an
optimization step. This is analogous to spline surfaces and curves, where Bézier splines are required to interpolate certain control points (namely the two end-points), while B-splines are not.
There is another division in subdivision surface schemes as well, the type of polygon that they operate on. Some
function for quadrilaterals (quads), while others operate on triangles.
Approximating schemes
Approximating means that the limit surfaces approximate the initial meshes and that after subdivision, the newly
generated control points are not in the limit surfaces. Examples of approximating subdivision schemes are:
Catmull-Clark (1978) - generalized bi-cubic uniform B-splines to produce their subdivision scheme. For arbitrary initial meshes, this scheme generates limit surfaces that are C2 continuous everywhere except at extraordinary vertices, where they are C1 continuous (Peters and Reif 1998).
Doo-Sabin - The second subdivision scheme was developed by Doo and Sabin (1978), who successfully extended
Chaikin's corner-cutting method for curves to surfaces. They used the analytical expression of bi-quadratic
uniform B-spline surface to generate their subdivision procedure to produce C1 limit surfaces with arbitrary
topology for arbitrary initial meshes.
Loop, Triangles - Loop (1987) proposed his subdivision scheme based on a quartic box-spline of six direction
vectors to provide a rule to generate C2 continuous limit surfaces everywhere except at extraordinary vertices
where they are C1 continuous.
Mid-Edge subdivision scheme - The mid-edge subdivision scheme was proposed independently by Peters-Reif (1997) and Habib-Warren (1999). The former used the midpoint of each edge to build the new mesh. The latter
used a four-directional box spline to build the scheme. This scheme generates C1 continuous limit surfaces on
initial meshes with arbitrary topology.
√3 subdivision scheme - This scheme was developed by Kobbelt (2000): it handles arbitrary triangular meshes, it is C2 continuous everywhere except at extraordinary vertices where it is C1 continuous, and it offers natural adaptive refinement when required. It has at least two distinctive features: it is a dual scheme for triangle meshes, and it has a slower refinement rate than primal ones.
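Chaikin's corner-cutting method, which Doo and Sabin extended from curves to surfaces, is easy to state in the curve case; the following one-round sketch is illustrative (the standard 1/4-3/4 rule), with names chosen for the example.

```python
def chaikin(points, closed=True):
    # One round of Chaikin corner cutting: each edge contributes the
    # points at 1/4 and 3/4 along its length. Iterating this process
    # converges to a quadratic B-spline curve.
    n = len(points)
    edges = range(n) if closed else range(n - 1)
    refined = []
    for k in edges:
        (x1, y1), (x2, y2) = points[k], points[(k + 1) % n]
        refined.append((0.75 * x1 + 0.25 * x2, 0.75 * y1 + 0.25 * y2))
        refined.append((0.25 * x1 + 0.75 * x2, 0.25 * y1 + 0.75 * y2))
    return refined

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
once = chaikin(square)   # 8 points; every corner is cut
```

This is an approximating scheme in the sense used above: the original corner vertices are discarded rather than kept on the limit curve.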
Interpolating schemes
After subdivision, the control points of the original mesh and the new generated control points are interpolated on the
limit surface. The earliest work was the butterfly scheme by Dyn, Levin and Gregory (1990), who extended the
four-point interpolatory subdivision scheme for curves to a subdivision scheme for surfaces. Zorin, Schröder and Sweldens (1996) noticed that the butterfly scheme cannot generate smooth surfaces for irregular triangle meshes and
thus modified this scheme. Kobbelt (1996) further generalized the four-point interpolatory subdivision scheme for
curves to the tensor product subdivision scheme for surfaces.
Butterfly, Triangles - named after the scheme's shape
Midedge, Quads
Kobbelt, Quads - a variational subdivision method that tries to overcome uniform subdivision drawbacks
Key developments
1978: Subdivision surfaces were discovered simultaneously by Edwin Catmull and Jim Clark (see Catmull-Clark subdivision surface). In the same year, Daniel Doo and Malcolm Sabin published a paper building on this work (see Doo-Sabin subdivision surface).
1995: Ulrich Reif characterized subdivision surfaces near extraordinary vertices by treating them as splines with
singularities.
1998: Jos Stam contributed a method for exact evaluation of Catmull-Clark and Loop subdivision surfaces under
arbitrary parameter values.[3]
References
Peters, J.; Reif, U. (October 1997). "The simplest subdivision scheme for smoothing polyhedra". ACM Transactions on Graphics 16 (4): 420-431. doi:10.1145/263834.263851 (http://dx.doi.org/10.1145/263834.263851).
Habib, A.; Warren, J. (May 1999). "Edge and vertex insertion for a class of C1 subdivision surfaces". Computer Aided Geometric Design 16 (4): 223-247. doi:10.1016/S0167-8396(98)00045-4 (http://dx.doi.org/10.1016/S0167-8396(98)00045-4).
Kobbelt, L. (2000). "√3-subdivision". Proceedings of the 27th annual conference on Computer graphics and interactive techniques - SIGGRAPH '00. pp. 103-112. doi:10.1145/344779.344835 (http://dx.doi.org/10.1145/344779.344835). ISBN 1-58113-208-5.
External links
Resources about Subdivisions (http://www.subdivision.org)
Geri's Game (http://www.pixar.com/shorts/gg/theater/index.html) : Oscar winning animation by Pixar
completed in 1997 that introduced subdivision surfaces (along with cloth simulation)
Subdivision for Modeling and Animation (http://www.multires.caltech.edu/pubs/sig99notes.pdf) tutorial,
SIGGRAPH 1999 course notes
Subdivision for Modeling and Animation (http://www.mrl.nyu.edu/dzorin/sig00course/) tutorial,
SIGGRAPH 2000 course notes
Subdivision of Surface and Volumetric Meshes (http://www.hakenberg.de/subdivision/ultimate_consumer.
htm), software to perform subdivision using the most popular schemes
Surface Subdivision Methods in CGAL, the Computational Geometry Algorithms Library (http://www.cgal.
org/Pkg/SurfaceSubdivisionMethods3)
Surface and Volumetric Subdivision Meshes, hierarchical/multiresolution data structures in CGoGN (http://
cgogn.unistra.fr)
Modified Butterfly method implementation in C++ (https://bitbucket.org/rukletsov/b)
233
Supinfocom
Established: 1988
Locations: Valenciennes, Arles, Pune
Website: Official website [1]
Curriculum
The curriculum includes:
Two years of preparatory courses (design and applied art, perspective, film analysis, video, color, 2D animation,
art history, sculpture, communication, English);
Three years of specialization in computer graphics (3D software, screenplay, storyboards, animation,
compositing, 3D production, sound, editing).
The final year of study is devoted to the team-based production of a short film in CG.
Until the class of 2007 entered, there were only two years of specialization courses; there are now three.
External links
Official site of Supinfocom Valenciennes [2]
Official site of DSK Supinfocom Pune [3]
References
[1] http://www.supinfocom.fr
[2] http://supinfocom.rubika-edu.com
[3] http://www.dsksic.com/animation/
Surface caching
Surface caching is a computer graphics technique pioneered by John Carmack, first used in the computer game
Quake, to apply lightmaps to level geometry. Carmack's technique was to combine lighting information with surface
textures in texture-space when primitives became visible (at the appropriate mipmap level), exploiting temporal
coherence for those calculations. As hardware capable of blended multi-texture rendering (and later pixel shaders)
became more commonplace, the technique became less common, being replaced with screenspace combination of
lightmaps in rendering hardware.
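The essence of the technique can be sketched as a texture-space cache keyed by surface. This is an illustrative reconstruction of the idea only, not Quake's actual code; all names are hypothetical. When a surface first becomes visible, its texture is modulated by its lightmap once, and the combined result is reused on later frames (the temporal coherence mentioned above):

```python
# Hypothetical sketch of surface caching: modulate texels by the lightmap
# once, then reuse the cached result on subsequent frames.
_cache = {}

def get_lit_surface(surface_id, texels, lightmap):
    """Return texture * lightmap for a surface, computing it at most once."""
    if surface_id not in _cache:
        # combine lighting with the surface texture in texture space
        _cache[surface_id] = [t * l for t, l in zip(texels, lightmap)]
    return _cache[surface_id]
```

A real implementation would evict entries when surfaces leave view or when cache memory runs out, and would keep one entry per mipmap level.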
Surface caching contributed greatly to the visual quality of Quake's software rasterized 3D engine on Pentium
microprocessors, which lacked dedicated graphics instructions. [citation needed].
Surface caching could be considered a precursor to the more recent MegaTexture technique in which lighting and
surface decals and other procedural texture effects are combined for rich visuals devoid of unnatural repeating
artifacts.
External links
Quake's Lighting Model: Surface Caching [1] - an in-depth explanation by Michael Abrash
References
[1] http://www.bluesnews.com/abrash/chap68.shtml
Surfel
Surfel is an abbreviation of "surface element". In 3D computer graphics, the use of surfels is an alternative to
polygonal modeling. An object is represented by a dense set of points or viewer-facing discs holding lighting
information. Surfels are well suited to modeling dynamic geometry, because there is no need to compute topology
information such as adjacency lists. Common applications are medical scanner data representation, real time
rendering of particle systems, and more generally, rendering surfaces of volumetric data by first extracting the
isosurface.[1]
Notes
[1] H. Pfister, M. Zwicker, J. van Baar, M. Gross, Surfels: Surface Elements as Rendering Primitives, SIGGRAPH 2000. Available from http://graphics.ethz.ch/research/past_projects/surfels/surfels/index.html.
Suzanne Award
The Suzanne Award has been presented annually to animators using Blender since the second Blender Conference, held in Amsterdam in 2003. The categories of the Suzanne Awards have changed repeatedly. In the following lists, the people and works printed in bold were the winners of their particular category for that particular year; the remaining entries were nominated.
Best Animation
Chicken Chair by Bassam Kurdali
Colin Levy
X-Warrior & Nayman
MakeHuman Team
Alan Dennis (RipSting)
Jean-Michel Soler (jms)
Anthony D'Agostino (Scorpius)
Stefano Selleri (S68)
Kester Maddock
Alfredo de Greef
Brecht van Lommel/Jens Ole Wund
Matt Ebb
Nathan Letwory
Night of the Living Dead Pixels by Jussi Saarelma, Jere Virta & Peter Schulman
Blood by Yu Yonghai (Harrisyu)
Out of the Box by Andy Dolphin (AndyD)
Alchemy by Jason Pierce (Sketchy)
Giants by Thomas Kristov (Thomislav86)
Best Animation
Assembly: Life in Macrospace by Jonathan Lax and Ben Simonds from Gecko Animation Ltd.
Dikta-Goodbye by Hjalti Hjalmarsson and Bjorn Daniel Svavarson
Vacui Spacii- par IX: Infra by Martin Eschoyez
Concrete Babylon- Pilot by Peter Hertzberg
Rooms by Jakub Szczesniak
External links
2005 Suzanne Awards (http://archive.blender.org/community/blender-conference/blender-conference-2005/
animation-festival-2005/)
2006 Suzanne Awards (http://archive.blender.org/community/blender-conference/blender-conference-2006/
festival-2006/)
2007 Suzanne Awards (http://archive.blender.org/community/blender-conference/blender-conference-2007/
festival/)
2008 Suzanne Awards (http://archive.blender.org/community/blender-conference/blender-conference-2008/
festival/)
2009 Suzanne Awards (http://archive.blender.org/community/blender-conference/blender-conference-2009/
festival/)
2010 Suzanne Awards (http://archive.blender.org/community/blender-conference/blender-conference-2010/
suzanne-nominations/)
2011 Suzanne Awards (http://archive.blender.org/community/blender-conference/blender-conference-2010/
suzanne-nominations/)
2012 Suzanne Awards (http://suzanne.myblender.org/results/)
2013 Suzanne Awards (http://www.blender.org/conference/2013/suzanne-awards/)
Time-varying mesh
A time-varying mesh (TVM) is a sequence of polygonal mesh models that reproduces a dynamic 3D scene. TVMs can be generated from synchronized videos captured by multiple cameras in a studio.
Each mesh model (or frame) carries three kinds of information: vertex positions in Cartesian coordinates, vertex connectivity forming triangles, and a color attached to each vertex. However, no structural information or explicit correspondence is shared between frames; both the number of vertices and the topology change from frame to frame.
Because a TVM allows free choice of viewpoint, it has potential applications in education, CAD, heritage documentation, broadcasting, and gaming.
Timewarps
A timewarp is a tool for manipulating the temporal dimension in a hierarchically described 3D computer animation
system. The term was coined by Jeff Smith and Karen Drewery in 1991.[1] Continuous curves that are normally
applied to parametric modeling and rendering attributes are instead applied to the local clock value, which
effectively remaps the flow of global time within the context of the subsection of the model to which the curves are
applied. The tool was first developed to assist animators in making minor adjustments to subsections of animated
scenes that might employ dozens of related interpolation curves. Rather than adjust the timing of every curve within
the subsection, a timewarp curve can be applied to the model section in question, adjusting the flow of time itself for
that element, with respect to the timing of the other, unaffected elements.
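The core mechanism, composing a warp curve with the element's existing interpolation curves, can be sketched as follows. The function names are illustrative, not taken from the Smith and Drewery paper:

```python
def warped_value(anim_curve, warp_curve, global_t):
    """Evaluate an animation channel through a timewarp.

    warp_curve remaps global time to the element's local clock; the
    original interpolation curve is then sampled at the warped time,
    so none of the underlying curves need to be edited.
    """
    local_t = warp_curve(global_t)
    return anim_curve(local_t)

# a warp that delays an entire model subsection by half a second
delay = lambda t: t - 0.5
```

One warp curve applied to a model section retimes every channel beneath it at once, which is exactly the convenience described above.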
Originally, the tool was used to achieve minor adjustments, moving a motion forward or back in time, or to alter the
speed of a movement. Subsequent experiments with the technique moved beyond these simpler timing adjustments
and began to employ the timing curves to create more complex effects, such as continuous animation cycles and
simulating more natural movements of large collections of models, such as flocks or crowds, by creating numerous
identical copies of a single animated model and then applying small random perturbation timewarps to each of the
copies, giving the impression of a less robotic precision to the group's movements.
The tool has since become common in both 3D animation and video editing software systems.
References
[1] "Timewarps: A Temporal Reparameterization Paradigm for Parametric Animation", Jeff Smith, Karen Drewery (http://www.citeulike.org/user/Jefficus/article/4350388)
Triangle mesh
A triangle mesh is a type of polygon mesh in computer
graphics. It comprises a set of triangles (typically in
three dimensions) that are connected by their common
edges or corners.
Many graphics software packages and hardware devices can operate more efficiently on triangles that are grouped into meshes than on a similar number of triangles presented individually. This is typically because computer graphics systems operate on the vertices at the corners of triangles. With individual triangles, the system has to operate on three vertices for every triangle. In a large mesh, eight or more triangles may meet at a single vertex; by processing those vertices just once, it is possible to do a fraction of the work and achieve an identical effect.
Example of a triangle mesh representing a dolphin.
Representation
Various methods of storing and working with a mesh in computer memory are possible. With the OpenGL and DirectX APIs there are two primary ways of passing a triangle mesh to the graphics hardware: triangle strips and index arrays.
Triangle strip
One way of sharing vertex data between triangles is the triangle strip. In a strip, each triangle shares one complete edge with one neighbour and another with the next. Another way is the triangle fan, which is a set of connected triangles sharing one central vertex. With these methods vertices are handled efficiently: only N + 2 vertices need to be processed in order to draw N triangles.
Triangle strips are efficient, but it may be neither obvious nor convenient to translate an arbitrary triangle mesh into strips.
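The N + 2 relationship can be illustrated by expanding a strip back into explicit triangles; a minimal sketch (the function name is mine):

```python
def strip_to_triangles(strip):
    """Expand a triangle strip into a list of explicit triangles.

    N triangles are encoded by N + 2 vertices: each new vertex forms a
    triangle with the previous two. Winding alternates so that every
    triangle keeps a consistent orientation.
    """
    tris = []
    for i in range(len(strip) - 2):
        a, b, c = strip[i], strip[i + 1], strip[i + 2]
        # every other triangle is flipped to preserve winding order
        tris.append((a, b, c) if i % 2 == 0 else (b, a, c))
    return tris
```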
Index array
See also: Face-vertex meshes
With index arrays, a mesh is represented by two separate arrays: one array holding the vertices, and another holding sets of three indices into that array which define a triangle. The graphics system processes the vertices first, then renders the triangles from the index sets, working on the transformed data. In OpenGL, this is supported by the glDrawElements() primitive when using vertex buffer objects (VBOs).
With this method, any arbitrary set of triangles sharing any arbitrary number of vertices can be stored, manipulated,
and passed to the graphics API, without any intermediary processing.
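For two triangles sharing an edge, the two-array layout might look like this (a sketch; the array names are illustrative, and a real application would upload both arrays to GPU buffers):

```python
# Index-array (face-vertex) representation of a unit quad built from
# two triangles that share the diagonal edge between vertices 0 and 2.
vertices = [
    (0.0, 0.0, 0.0),   # index 0
    (1.0, 0.0, 0.0),   # index 1
    (1.0, 1.0, 0.0),   # index 2
    (0.0, 1.0, 0.0),   # index 3
]
# each consecutive triple of indices defines one triangle
indices = [0, 1, 2,
           0, 2, 3]

def triangle(k):
    """Return the k-th triangle as explicit vertex positions."""
    i, j, l = indices[3 * k: 3 * k + 3]
    return vertices[i], vertices[j], vertices[l]
```

Note that only four vertices are stored for two triangles, instead of the six that individual triangles would require.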
Vector slime
In the demoscene, vector slime refers to a class of visual effects achieved by procedural deformation of geometric shapes.
Synopsis
A geometric object exposed to vector slime is usually defined by vertices and faces in two or three dimensions. In the process of deformation, each vertex in the original shape undergoes one or more linear transformations (usually rotation or translation), defined as a function of the vertex's position in space (usually a function of the magnitude of the vector) and of time. The desired result is an animated geometric object behaving in a harmonic way, creating some degree of illusion of physical realism.
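One plausible sketch of such a deformation scales each vertex by a sinusoid of time delayed by the vertex's distance from the origin. This is an illustration of the general idea, not any particular demo's code, and all parameters are made up:

```python
import math

def slime(vertices, t, amplitude=0.2, speed=3.0):
    """Deform a shape by scaling each vertex as a function of time minus
    the vertex's distance from the origin. The distance-based delay makes
    the deformation appear to ripple outward through a soft body.
    """
    out = []
    for (x, y, z) in vertices:
        r = math.sqrt(x * x + y * y + z * z)  # magnitude of the vertex vector
        s = 1.0 + amplitude * math.sin(speed * (t - r))
        out.append((x * s, y * s, z * s))
    return out
```

Calling this once per frame with increasing t animates the mesh; vertices far from the origin lag behind those near it, giving the jelly-like look described below.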
Older vector slime implementations kept old copies of the rendering result from simple vector objects in RAM, and
selected scan-lines from the different buffers in order to make a time-displacement illusion over the y-axis.
Appearance
Depending on variances in implementation, vector slime can approximate an array of physical properties. A traditional approach is to let the linear transformation vary as a smooth function of time minus the magnitude of the vector in question. This creates the illusion that a force is applied at the origin of the object space (where the object is usually centered), and that the rest of the object's body reacts as a soft body, each vertex responding to a change in the force delayed by its distance from the origin. Applied to a spikeball (a sphere with extruded arms), the object could resemble the behaviour of a soft squid-like animal. Applied to a cube, the object would appear as a cubic piece of jelly propelled by a gyroscopic force from the inside.
Areas of Application
Although the classical vector slime algorithms are far from correct physical modelling, the result can, under certain conditions, trick the viewer into believing that some sophisticated physical simulation is involved. The effect has therefore grown quite popular in the demoscene as a way to create impressive visuals at relatively low computational cost. Interactive vector slime implementations can occasionally be found in computer games as a substitute for a more correct physical simulation algorithm.
Vertex (geometry)
In geometry, a vertex (plural vertices) is a special kind of point that describes the corners or intersections of
geometric shapes.
Definitions
Of an angle
The vertex of an angle is the point where two rays begin or meet,
where two line segments join or meet, where two lines intersect
(cross), or any appropriate combination of rays, segments and lines that
result in two straight "sides" meeting at one place.
Of a polytope
A vertex is a corner point of a polygon, polyhedron, or other higher
dimensional polytope, formed by the intersection of edges, faces or
facets of the object.
Of a plane tiling
A vertex of a plane tiling or tessellation is a point where three or more tiles meet; generally, but not always, the tiles
of a tessellation are polygons and the vertices of the tessellation are also vertices of its tiles. More generally, a
tessellation can be viewed as a kind of topological cell complex, as can the faces of a polyhedron or polytope; the
vertices of other kinds of complexes such as simplicial complexes are its zero-dimensional faces.
Principal vertex
A vertex x_i of a simple polygon P is a principal polygon vertex if the diagonal [x_(i-1), x_(i+1)] intersects the boundary of P only at x_(i-1) and x_(i+1). There are two types of principal vertices: ears and mouths.
Ears
A principal vertex x_i of a simple polygon P is called an ear if the diagonal [x_(i-1), x_(i+1)] that bridges x_i lies entirely in P. (See also convex polygon.)
Mouths
A principal vertex x_i of a simple polygon P is called a mouth if the diagonal [x_(i-1), x_(i+1)] lies outside the boundary of P.
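A common computational ear test, equivalent in spirit to the diagonal definition above for simple polygons, checks that the vertex is convex and that no other polygon vertex lies inside the triangle it spans. A sketch, assuming counter-clockwise orientation and no collinear vertices:

```python
def cross(o, a, b):
    """2-D cross product; positive when o -> a -> b turns counter-clockwise."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def is_ear(poly, i):
    """Ear test for vertex i of a simple CCW polygon: the vertex must be
    convex and no other vertex may lie inside triangle (prev, i, next)."""
    n = len(poly)
    p, q, r = poly[i - 1], poly[i], poly[(i + 1) % n]
    if cross(p, q, r) <= 0:          # reflex vertex: cannot be an ear
        return False
    for j in range(n):
        if j in ((i - 1) % n, i, (i + 1) % n):
            continue
        v = poly[j]
        # v is inside if it lies strictly left of all three triangle edges
        if cross(p, q, v) > 0 and cross(q, r, v) > 0 and cross(r, p, v) > 0:
            return False
    return True
```

The two-ears theorem guarantees that every simple polygon with more than three vertices has at least two ears, which is what makes ear-clipping triangulation terminate.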
External links
Weisstein, Eric W., "Polygon Vertex [1]", MathWorld.
Weisstein, Eric W., "Polyhedron Vertex [2]", MathWorld.
Weisstein, Eric W., "Principal Vertex [3]", MathWorld.
References
[1] http://mathworld.wolfram.com/PolygonVertex.html
[2] http://mathworld.wolfram.com/PolyhedronVertex.html
[3] http://mathworld.wolfram.com/PrincipalVertex.html
/* These pointers will receive the contents of our shader source code files */
GLchar *vertexSource, *fragmentSource;
References
[1] http://www.opengl.org/about/arb/
External links
Vertex Buffer Object Whitepaper (http://www.opengl.org/registry/specs/ARB/vertex_buffer_object.txt)
Vertex attributes
Most attributes of a vertex represent vectors in the space to be rendered. Vectors can be 1 (x), 2 (x, y), or 3 (x, y, z)
dimensional and can include a fourth homogeneous coordinate (w).
The following is a table of built-in vertex attributes in the OpenGL standard:
gl_Vertex: position (vec4)
gl_Normal: normal (vec3)
gl_Color: primary color (vec4)
Vertex pipeline
The function of the vertex pipeline in any GPU is to take geometry data (usually supplied as vector points), process it if needed with either fixed-function operations (earlier DirectX) or a vertex shader program (later DirectX), and project all of the 3D points in a scene onto a 2D plane for display on a computer monitor.
Unneeded data can be kept from passing through the rendering pipeline to cut out extraneous work (view-volume clipping and backface culling). After the vertex engine has finished with the geometry, all the calculated 2D data is sent to the pixel engine for further processing, such as texturing and fragment shading.
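The projection step performed by the vertex stage can be sketched as a perspective divide of camera-space points. This is a minimal illustration; a real pipeline uses 4x4 homogeneous matrices and clip-space coordinates, and the focal length f here is arbitrary:

```python
def project(point, f=1.0):
    """Project a camera-space point onto the z = f image plane by
    perspective divide, the essential 3D-to-2D step of the vertex stage."""
    x, y, z = point
    if z <= 0:
        return None          # behind the camera: clipped, never rasterized
    return (f * x / z, f * y / z)
```

Points twice as far from the camera land half as far from the image centre, which is what produces perspective foreshortening.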
As of DirectX 9.0c, the vertex processor is able to do the following when vertex processing is programmed under the DirectX API:
Tessellation
Displacement mapping
Geometry blending
Higher-order primitives
Point sprites
Matrix stacks
External links
Anandtech Article [1]
References
[1] http://www.anandtech.com/video/showdoc.aspx?i=2044&p=3
Viewing frustum
In 3D computer graphics, the viewing frustum or view frustum is the
region of space in the modeled world that may appear on the screen; it
is the field of view of the notional camera.[1]
The exact shape of this region varies depending on what kind of
camera lens is being simulated, but typically it is a frustum of a
rectangular pyramid (hence the name). The planes that cut the frustum
perpendicular to the viewing direction are called the near plane and the
far plane. Objects closer to the camera than the near plane or beyond
the far plane are not drawn. Sometimes, the far plane is placed
infinitely far away from the camera so all objects within the frustum
are drawn regardless of their distance from the camera.
A view frustum.
Viewing frustum culling or view frustum culling is the process of removing objects that lie completely outside the
viewing frustum from the rendering process. Rendering these objects would be a waste of time since they are not
directly visible. To make culling fast, it is usually done using bounding volumes surrounding the objects rather than
the objects themselves.
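Culling with bounding spheres can be sketched as a plane-by-plane test. This is a simplified illustration assuming the frustum planes are stored as (nx, ny, nz, d) with unit normals pointing into the frustum:

```python
def sphere_in_frustum(center, radius, planes):
    """Conservative bounding-sphere test: the object is culled only when
    its sphere lies entirely on the outside of some frustum plane."""
    cx, cy, cz = center
    for nx, ny, nz, d in planes:
        # signed distance from the sphere centre to the plane
        if nx * cx + ny * cy + nz * cz + d < -radius:
            return False     # completely outside this plane: cull
    return True              # inside or intersecting the frustum
```

The test is conservative: a sphere straddling a corner may pass even though the object is invisible, but it never culls anything that could appear on screen.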
Definitions
VPN
the view-plane normal: a normal to the view plane.
VUV
the view-up vector: the vector on the view plane that indicates the upward direction.
VRP
the viewing reference point: a point located on the view plane, and the origin of the VRC.
PRP
the projection reference point: the point from which the image is projected; for parallel projection, the PRP is at infinity.
VRC
the viewing-reference coordinate system.
The geometry is defined by a field of view angle (in the 'y' direction), as well as an aspect ratio. Further, a set of
z-planes define the near and far bounds of the frustum.
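Given these parameters, the extents of the near plane follow from basic trigonometry; a sketch assuming a symmetric frustum (the function name is mine):

```python
import math

def near_plane_extent(fov_y_deg, aspect, near):
    """Half-height and half-width of the near plane of a symmetric
    perspective frustum, from the vertical field of view and the
    aspect ratio (width / height)."""
    half_h = near * math.tan(math.radians(fov_y_deg) / 2.0)
    half_w = half_h * aspect
    return half_w, half_h
```

The far plane's extents scale by far / near, since the frustum's sides are straight lines through the camera.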
References
[1] http://msdn.microsoft.com/en-us/library/ff634570.aspx Microsoft, "What Is a View Frustum?"
Viewport
A viewport is a polygon viewing region in computer graphics; the term is also used for optical components. It has several definitions in different contexts:
Computing
In 3D computer graphics it refers to the 2D rectangle used to project the 3D scene to the position of a virtual camera.
A viewport is a region of the screen used to display a portion of the total image to be shown.[1]
In virtual desktops, the viewport is the visible portion of a 2D area which is larger than the visualization device. In
web browsers, the viewport is the visible portion of the canvas element.
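The 3D-graphics sense above amounts to a fixed coordinate transform: projected points in normalized device coordinates are mapped into the viewport rectangle on screen. A simplified sketch (the function name is mine):

```python
def ndc_to_viewport(x_ndc, y_ndc, vx, vy, vw, vh):
    """Map normalized device coordinates in [-1, 1] to pixel coordinates
    inside a viewport rectangle with origin (vx, vy) and size (vw, vh)."""
    px = vx + (x_ndc + 1.0) * 0.5 * vw
    py = vy + (y_ndc + 1.0) * 0.5 * vh
    return px, py
```

This is the same transform that glViewport configures in OpenGL: changing the viewport rectangle moves and scales the rendered image without touching the projection itself.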
Optical components
In manufacturing, the term refers to hermetically sealed optical components, typically used for visual or broad-band energy transmission into and out of vacuum systems. Single- and multi-layer coatings can be added to viewports to optimize transmission performance. They describe where the selected portion of an object resides inside a window.
References
[1] http://msdn.microsoft.com/en-us/library/ff634571.aspx Microsoft, "What Is a Viewport?"
External links
List of viewport sizes for mobile and tablet devices (http://i-skool.co.uk/mobile-development/web-design-for-mobiles-and-tablets-viewport-sizes/)
Virtual actor
A virtual human or digital clone is the creation or re-creation of a human being in image and voice using computer-generated imagery and sound, often indistinguishable from the real actor. The idea was first portrayed in the 1981 film Looker, in which models had their bodies scanned digitally to create 3D computer-generated images, which were then animated for use in TV commercials. Two 1992 books used this concept: Fools by Pat Cadigan, and Et Tu, Babe by Mark Leyner.
In general, virtual humans employed in movies are known as synthespians, virtual actors, vactors, cyberstars, or
"silicentric" actors. There are several legal ramifications for the digital cloning of human actors, relating to
copyright and personality rights. People who have already been digitally cloned as simulations include Bill Clinton,
Marilyn Monroe, Fred Astaire, Ed Sullivan, Elvis Presley, Bruce Lee, Audrey Hepburn, Anna Marie Goddard, and
George Burns. Ironically, data sets of Arnold Schwarzenegger for the creation of a virtual Arnold (head, at least)
have already been made.
The name Schwarzeneggerization comes from the 1992 book Et Tu, Babe by Mark Leyner. In one scene, on pages 50–51, a character asks the shop assistant at a video store to have Arnold Schwarzenegger digitally substituted for
existing actors into various works, including (amongst others) Rain Man (to replace both Tom Cruise and Dustin
Hoffman), My Fair Lady (to replace Rex Harrison), Amadeus (to replace F. Murray Abraham), The Diary of Anne
Frank (as Anne Frank), Gandhi (to replace Ben Kingsley), and It's a Wonderful Life (to replace James Stewart).
Schwarzeneggerization is the name that Leyner gives to this process. Only 10 years later, Schwarzeneggerization
was close to being reality.
By 2002, Schwarzenegger, Jim Carrey, Kate Mulgrew, Michelle Pfeiffer, Denzel Washington, Gillian Anderson, and
David Duchovny had all had their heads laser scanned to create digital computer models thereof.
Early history
Early computer-generated animated faces include the 1985 film Tony de Peltrie and the music video for Mick Jagger's song "Hard Woman" (from She's the Boss). The first actual human beings to be digitally duplicated were Marilyn Monroe and Humphrey Bogart, in a March 1987 film created by Nadia Magnenat Thalmann and Daniel Thalmann for the 100th anniversary of the Engineering Society of Canada. The film was created by six people over a year, and had Monroe and Bogart meeting in a café in Montreal. The characters were rendered in three dimensions, and were capable of speaking, showing emotion, and shaking hands.
In 1987, the Kleiser-Walczak Construction Company began its Synthespian ("synthetic thespian") Project, with the aim of creating "life-like figures based on the digital animation of clay models".
In 1988, Tin Toy was the first entirely computer-generated movie to win an Academy Award (Best Animated Short
Film). In the same year, Mike the Talking Head, an animated head whose facial expression and head posture were
controlled in real time by a puppeteer using a custom-built controller, was developed by Silicon Graphics, and
performed live at SIGGRAPH. In 1989, The Abyss, directed by James Cameron included a computer-generated face
placed onto a watery pseudopod.
In 1991, Terminator 2, also directed by Cameron, confident in the abilities of computer-generated effects from his
experience with The Abyss, included a mixture of synthetic actors with live animation, including computer models of
Robert Patrick's face. The Abyss contained just one scene with photo-realistic computer graphics. Terminator 2
contained over forty shots throughout the film.
In 1997, Industrial Light and Magic worked on creating a virtual actor that was a composite of the bodily parts of
several real actors.
By the 21st century, virtual actors had become a reality. The face of Brandon Lee, who had died partway through the
shooting of The Crow in 1994, had been digitally superimposed over the top of a body-double in order to complete
those parts of the movie that had yet to be filmed. By 2001, three-dimensional computer-generated realistic humans
had been used in Final Fantasy: The Spirits Within, and by 2004, a synthetic Laurence Olivier co-starred in Sky
Captain and the World of Tomorrow.
Legal issues
Critics such as Stuart Klawans in the New York Times expressed worry about the loss of "the very thing that art was
supposedly preserving: our point of contact with the irreplaceable, finite person". And even more problematic are the
issues of copyright and personality rights. Actors have little legal control over a digital clone of themselves. In the
United States, for instance, they must resort to database protection laws in order to exercise what control they have
(The proposed Database and Collections of Information Misappropriation Act would strengthen such laws). An actor
does not own the copyright on his digital clones, unless they were created by him. Robert Patrick, for example,
would not have any legal control over the liquid metal digital clone of himself that was created for Terminator 2.
The use of digital clones in the movie industry, to replicate the acting performances of a cloned person, represents a controversial aspect of these implications: it may cause real actors to land fewer roles, and it puts them at a disadvantage in contract negotiations, since a clone could always be used by the producers at potentially lower cost.
It is also a career difficulty, since a clone could be used in roles that a real actor would never accept for various reasons. A poor identification of an actor's image with a certain type of role could harm his career, and real actors, conscious of this, pick and choose what roles they play (Bela Lugosi and Margaret Hamilton became typecast with
their roles as Count Dracula and the Wicked Witch of the West, whereas Anthony Hopkins and Dustin Hoffman
have played a diverse range of parts). A digital clone could be used to play the parts of (for examples) an axe
murderer or a prostitute, which would affect the actor's public image, and in turn affect what future casting
opportunities were given to that actor. Both Tom Waits and Bette Midler have won actions for damages against
people who employed their images in advertisements that they had refused to take part in themselves.
In the USA, the use of a digital clone in advertisements is required to be accurate and truthful (under section 43(a) of the Lanham Act, which makes deliberate confusion unlawful). The use of a celebrity's image would be an implied
endorsement. The New York District Court held that an advertisement employing a Woody Allen impersonator
would violate the Act unless it contained a disclaimer stating that Allen did not endorse the product.
Other concerns include posthumous use of digital clones. Barbara Creed states that "Arnold's famous threat, 'I'll be
back', may take on a new meaning". Even before Brandon Lee was digitally reanimated, the California Senate drew
up the Astaire Bill, in response to lobbying from Fred Astaire's widow and the Screen Actors Guild, who were
seeking to restrict the use of digital clones of Astaire. Movie studios opposed the legislation, and as of 2002 it had
yet to be finalized and enacted. Several companies, including Virtual Celebrity Productions, have purchased the
rights to create and use digital clones of various dead celebrities, such as Marlene Dietrich[1] and Vincent Price.
In fiction
S1m0ne, a 2002 science fiction drama film written, produced and directed by Andrew Niccol, starring Al Pacino.
In business
A Virtual Actor can also be a person who performs a role in real-time when logged into a Virtual World or
Collaborative On-Line Environment. One who represents, via an avatar, a character in a simulation or training event.
One who behaves as if acting a part through the use of an avatar.
Vactor Studio LLC is a New York-based company, but its "Vactors" (virtual actors) are located all across the US and
Canada. The Vactors log into virtual world applications from their homes or offices to participate in exercises
covering an extensive range of markets including: Medical, Military, First Responder, Corporate, Government,
Entertainment, and Retail. Through their own computers, they become doctors, soldiers, EMTs, customer service
reps, victims for Mass Casualty Response training, or whatever the demonstration requires. Since 2005, Vactor
Studios role-players have delivered thousands of hours of professional virtual world demonstrations, training
exercises, and event management services.
References
[1] Los Angeles Times / Digital Elite Inc. (http://articles.latimes.com/1999/aug/09/business/fi-64043)
Further reading
Michael D. Scott and James N. Talbott (1997). "Titles and Characters". Scott on Multimedia Law. Aspen Publishers Online. ISBN 1-56706-333-0. A detailed discussion of the law, as it stood in 1997, relating to virtual humans and the rights held over them by real humans.
Richard Raysman (2002). "Trademark Law". Emerging Technologies and the Law: Forms and Analysis. Law Journal Press. pp. 6–15. ISBN 1-58852-107-9. How trademark law affects digital clones of celebrities who have trademarked their person.
External links
Vactor Studio (http://www.vactorstudio.com/)
Virtual environment software
Uses
Virtual environment software can be put to many uses, from advanced military training in a virtual environment simulator to virtual classrooms. Many virtual environments are being used as branding channels for products and services by enterprise corporations and non-profit groups.
Virtual events and virtual trade shows have been the early accepted uses of virtual event services. More recently, virtual environment software platforms have offered enterprises the ability to connect people across the Internet. Virtual environment software enables organizations to extend their market and industry reach while reducing travel-related costs and time.
Background
Providers of virtual environments have tended to focus on the early marketplace adoption of virtual events. These
providers are typically software as a service (SaaS)-based. Most have evolved from the streaming media/gaming
arena and social networking applications.
This early virtual event marketplace is now moving towards 3D persistent environments, in which enterprises combine e-commerce and social media as core operating systems, evolving into virtual environments for branding, customer acquisition, and service centers. A persistent environment enables users, visitors and administrators to revisit part or all of an event or session. Information gathered by attendees and end users, typically contact information and marketing materials, is stored in a virtual briefcase.
Potential advantages
Virtual environment software has the potential to combine the benefits of both online and on-premises environments. A flexible platform allows companies to deploy the software in both environments while retaining the ability to run reports on data in both locations from a centralized interface. The advent of persistent environments also lends itself to rich integration with existing enterprise technology assets.
Virtual environment software can be applied to virtual learning environments (also called learning management systems, or LMS). In the US, universities, colleges, and similar higher education institutions have adopted virtual learning environments to save time and resources and to improve course effectiveness.
Future
Virtual events, trade shows, and environments are not projected to replace physical events and interactions. Instead, they are seen as extensions and enhancements of those physical events and environments, increasing lead generation and reaching a wider audience while decreasing expenses. The virtual environments industry has been projected to reach a market size in the billions of dollars.
Market availability
Virtual environment software is an alternative to bundled services. Companies known to provide virtual environment software include UBIVENT [1], Unisfair [2], and vcopious [3].
References
[1] http://www.ubivent.com/
[2] http://www.unisfair.com/
[3] http://vcopious.com/
Virtual replay
Virtual Replay is a technology that allows people to view 3D animations of sporting events. The technology was widely used during the 2006 FIFA World Cup: bbcnews.com posted highlights on its website soon after matches concluded, and users could view the 3D renderings from multiple points of view.
External links
A page on bbcnews.com using virtual replay technology [1]
References
[1] http://news.bbc.co.uk/sport2/hi/football/world_cup_2006/5148780.stm?goalid=500251
Volume mesh
A volumetric mesh is a polyhedral representation of the interior volume of an object. Unlike polygon meshes, which represent only the surface, volumetric meshes also discretize the interior structure of the object. One application of volumetric meshes is finite element analysis, which may use regular or irregular volumetric meshes to compute internal stresses and forces throughout the entire volume of an object.
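As a minimal sketch (not tied to any particular FEA package), a tetrahedral volume mesh can be stored as a vertex array plus an element array of vertex indices; the volume of each tetrahedral cell then follows from a determinant:

```python
import numpy as np

# A minimal volumetric (tetrahedral) mesh: unlike a surface mesh, each
# element is a solid cell, so the interior of the object is discretized.
vertices = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
])
# One tetrahedron, given as four vertex indices -- the basic volume
# element used in finite element analysis.
tets = np.array([[0, 1, 2, 3]])

# Volume of a tetrahedron: |det([v1-v0, v2-v0, v3-v0])| / 6
v0, v1, v2, v3 = vertices[tets[0]]
vol = abs(np.linalg.det(np.stack([v1 - v0, v2 - v0, v3 - v0]))) / 6.0
print(vol)  # 0.16666... (one sixth of the unit cube)
```

A real FEA mesh would contain thousands of such cells, but each is handled exactly like this single one.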
Voxel
A voxel (volume element) represents a value on a regular grid in three-dimensional space. The word voxel is a combination of "volume" and "pixel", where pixel is a combination of "picture" and "element".[1] This is
analogous to a texel, which represents 2D image data in a bitmap
(which is sometimes referred to as a pixmap). As with pixels in a
bitmap, voxels themselves do not typically have their position (their
coordinates) explicitly encoded along with their values. Instead, the
position of a voxel is inferred based upon its position relative to other
voxels (i.e., its position in the data structure that makes up a single
volumetric image). In contrast to pixels and voxels, points and
polygons are often explicitly represented by the coordinates of their
vertices. A direct consequence of this difference is that polygons are
able to efficiently represent simple 3D structures with lots of empty or
homogeneously filled space, while voxels are good at representing
regularly sampled spaces that are non-homogeneously filled.
Voxels are frequently used in the visualization and analysis of medical and scientific data. Some volumetric displays
use voxels to describe their resolution. For example, a display might be able to show 512×512×512 voxels.
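The implicit addressing described above can be sketched with a dense NumPy array (a hypothetical illustration; the sizes are arbitrary): a voxel's coordinates are simply its indices in the array, not data stored alongside its value.

```python
import numpy as np

# A 4x4x4 volume: values live in a dense array, and each voxel's
# coordinates are implied by its array index rather than stored with it.
volume = np.zeros((4, 4, 4), dtype=np.uint8)
volume[1, 2, 3] = 255   # "set" one voxel; its position *is* the index

# The coordinates of every non-empty voxel can be recovered purely
# from the array layout -- nothing was stored except the values.
coords = np.argwhere(volume > 0)
print(coords.tolist())  # [[1, 2, 3]]
```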
Rasterization
Another technique for rendering voxels is related to raster graphics: each pixel column of the display is effectively ray-traced into the scene. A typical implementation traces each column starting at the bottom of the screen, using what is known as a y-buffer. When a voxel is reached that projects to a higher position on the display than the current y-buffer value, it replaces that value and is connected to the previously drawn y-value on screen by interpolating the color values.
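A simplified sketch of the y-buffer idea for a single screen column (the parameter names cam_h, horizon, and scale are illustrative assumptions, not taken from any particular engine): samples along the ray are processed front to back, and a vertical span is drawn only when it rises above everything drawn so far.

```python
import numpy as np

def render_column(heights, colors, screen_h, cam_h=80.0, horizon=50.0, scale=1.0):
    """Render one screen column of a heightmap front to back using a y-buffer.

    heights/colors: samples along this column's ray, nearest first.
    Returns the column as an array of color indices (0 = sky).
    """
    column = np.zeros(screen_h, dtype=np.uint32)
    y_buffer = screen_h  # lowest screen row drawn so far (y grows downward)
    for dist, (h, c) in enumerate(zip(heights, colors), start=1):
        # Perspective: higher terrain and nearer samples map to smaller rows.
        y = max(int((cam_h - h) / dist * scale + horizon), 0)
        if y < y_buffer:           # visible: fill the newly exposed span
            column[y:y_buffer] = c
            y_buffer = y           # everything below is now occluded
    return column

# Three samples: the far sample (height 60) is hidden behind the ridge at 100.
column = render_column([50, 100, 60], [1, 2, 3], screen_h=100)
```

A full renderer repeats this for every column of the display, sampling the heightmap along a ray per column.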
Outcast and other 1990's video games employed this graphics technique for effects such as reflection and
bump-mapping and usually for terrain rendering. Outcast's graphics engine was mainly a combination of a ray
casting (heightmap) engine, used to render the landscape, and a texture mapping polygon engine used to render
objects. The "Engine Programming" section of the games credits in the manual has several subsections related to
graphics, among them: "Landscape Engine", "Polygon Engine", "Water & Shadows Engine" and "Special effects
Engine". Although Outcast is often cited as a forerunner of voxel technology, this is somewhat misleading. The
game does not actually model three-dimensional volumes of voxels. Instead, it models the ground as a surface,
which may be seen as being made up of voxels. The ground is decorated with objects that are modeled using
texture-mapped polygons. When Outcast was developed, the term "voxel engine", when applied to computer games,
commonly referred to a ray casting engine (for example the VoxelSpace engine). On the engine technology page of
the game's website, the landscape engine is also referred to as the "Voxels engine".[2] The engine is purely
software-based; it does not rely on hardware-acceleration via a 3D graphics card.[3]
John Carmack also experimented with voxels for the Quake III engine.[4] One problem cited by Carmack was the lack of graphics cards designed specifically for such rendering, which forces voxels to be rendered in software; this remains an issue for the technology to this day.
Comanche was also the first commercial flight simulator based on voxel technology. NovaLogic used the proprietary Voxel Space engine, developed for the company by Kyle Freeman [5] (written entirely in assembly language), to create open landscapes.[6] This rendering technique allowed for much more detailed and realistic terrain than contemporary simulations based on vector graphics.
Voxel data
A voxel represents a single sample, or data point, on a regularly
spaced, three-dimensional grid. This data point can consist of a single
piece of data, such as an opacity, or multiple pieces of data, such as a
color in addition to opacity. A voxel represents only a single point on
this grid, not a volume; the space between each voxel is not
represented in a voxel-based dataset. Depending on the type of data
and the intended use for the dataset, this missing information may be
reconstructed and/or approximated, e.g. via interpolation.
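The interpolation mentioned above is commonly trilinear: the value at a fractional position is a blend of the eight surrounding voxels. A minimal NumPy sketch (not an optimized implementation, and valid only away from the grid boundary):

```python
import numpy as np

def trilinear(volume, x, y, z):
    """Sample a voxel grid at a fractional position by trilinear interpolation.

    The grid stores values only at integer coordinates; values in between
    are reconstructed by blending the 8 surrounding voxels.
    """
    x0, y0, z0 = int(x), int(y), int(z)
    fx, fy, fz = x - x0, y - y0, z - z0
    c = volume[x0:x0 + 2, y0:y0 + 2, z0:z0 + 2].astype(float)
    # Collapse one axis at a time: x, then y, then z.
    c = c[0] * (1 - fx) + c[1] * fx
    c = c[0] * (1 - fy) + c[1] * fy
    return c[0] * (1 - fz) + c[1] * fz

grid = np.zeros((2, 2, 2))
grid[1, 1, 1] = 8.0
print(trilinear(grid, 0.5, 0.5, 0.5))  # 1.0 (the centre blends 8 corners equally)
```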
The value of a voxel may represent various properties. In CT scans, the
values are Hounsfield units, giving the opacity of material to
X-rays.[7]:29 Different types of value are acquired from MRI or
ultrasound.
While voxels provide precision and depth of detail, the resulting data sets are typically large and unwieldy to manage given the bandwidth of common computers. However, through efficient compression and manipulation of large data files, interactive visualization can be enabled on consumer-market computers.
Other values may be useful for immediate 3D rendering, such as a surface normal vector and color.
Uses
Common uses of voxels include volumetric imaging in medicine and representation of terrain in games and
simulations. Voxel terrain is used instead of a heightmap because of its ability to represent overhangs, caves, arches,
and other 3D terrain features. Such concave features cannot be represented in a heightmap, because only the top 'layer' of data is stored and everything below it is implicitly filled (including the volume that would otherwise form the inside of caves or the underside of arches and overhangs).
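The distinction can be illustrated with a toy occupancy grid (a hedged sketch; the sizes and indices are arbitrary):

```python
import numpy as np

# A heightmap stores one surface height per column; everything below
# that height is implicitly solid, so a cave underneath cannot exist.
heightmap = np.full((8, 8), 5)

# A voxel occupancy grid stores solidity per cell, so a cave can be
# carved *inside* the terrain -- something the heightmap cannot express.
voxels = np.zeros((8, 8, 8), dtype=bool)   # axes: x, y (up), z
voxels[:, :5, :] = True                    # solid ground up to height 5
voxels[3:6, 1:3, 3:6] = False              # hollow out a cave below the surface

# Column (x=4, z=4): solid near the surface, empty inside the cave.
print(bool(voxels[4, 4, 4]), bool(voxels[4, 2, 4]))  # True False
```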
Visualization
A volume containing voxels can be visualized either by direct volume rendering or by the extraction of polygon
isosurfaces that follow the contours of given threshold values. The marching cubes algorithm is often used for isosurface extraction; however, other methods exist as well.
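One of the simplest forms of direct volume rendering is maximum intensity projection, sketched here on a synthetic volume (axis-aligned rays only; a real renderer would cast arbitrary rays and composite opacity along them):

```python
import numpy as np

# Build a synthetic volume: a bright sphere in a 32^3 grid.
n = 32
x, y, z = np.indices((n, n, n))
r = np.sqrt((x - 16) ** 2 + (y - 16) ** 2 + (z - 16) ** 2)
volume = np.clip(1.0 - r / 10.0, 0.0, 1.0)   # intensity falls off with radius

# Maximum intensity projection: project the brightest voxel along each
# viewing ray. With axis-aligned rays this is just a max along one axis.
image = volume.max(axis=2)
print(image.shape, image.max())  # (32, 32) 1.0
```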
Computer gaming
Planet Explorers is a 3D building game that uses voxels for rendering equipment, buildings, and terrain. Using a voxel editor, players can create their own models for weapons and buildings, and terrain can be modified in a manner similar to other building games.
C4 Engine is a game engine that uses voxels for in-game terrain and has a voxel editor in its built-in level editor.
Miner Wars 2081 uses its own Voxel Rage engine to let the user deform the terrain of asteroids allowing tunnels
to be formed.
Many NovaLogic games have used voxel-based rendering technology, including the Delta Force, Armored Fist
and Comanche series.
Westwood Studios' Command & Conquer: Tiberian Sun and Command & Conquer: Red Alert 2 use voxels to
render most vehicles.
Westwood Studios' Blade Runner video game used voxels to render characters and artifacts.
Outcast, a game made by Belgian developer Appeal, sports outdoor landscapes that are rendered by a voxel
engine.
The Comanche series, made by NovaLogic, used voxel rasterization for terrain rendering.[8]
The videogame Amok for the Sega Saturn makes use of voxels in its scenarios.
The computer game Vangers uses voxels for its two-level terrain system.
Master of Orion III uses voxel graphics to render space battles and solar systems. Battles displaying 1,000 ships at a time were rendered slowly on computers without hardware graphics acceleration.
Sid Meier's Alpha Centauri uses voxel models to render units.
Shattered Steel featured deforming landscapes using voxel technology.
Build engine first-person shooter games Shadow Warrior and Blood use voxels instead of sprites as an option for many of the item pickups and scenery. Duke Nukem 3D has a fan-created pack in a similar style.
Crysis, as well as the Cryengine 2 and 3, use a combination of heightmaps and voxels for its terrain system.
Worms 4: Mayhem uses a voxel-based engine to simulate land deformation similar to the older 2D Worms games.
The multi-player role playing game Hexplore uses a voxel engine allowing the player to rotate the isometric
rendered playfield.
The computer game Voxatron, produced by Lexaloffle, is composed and generated fully using voxels.
Ace of Spades used Ken Silverman's Voxlap engine before being rewritten in a bespoke OpenGL engine.
3D Dot Game Heroes uses voxels to present retro-looking graphics.
Vox, an upcoming voxel-based exploration/RPG game focusing on player-generated content.
ScrumbleShip, a block-building MMO space simulator game in development, renders each in-game component
and damage to those components using dozens to thousands of voxels.
Castle Story, a castle building Real Time Strategy game in development, has terrain consisting of smoothed
voxels
Block Ops, a voxel based First Person Shooter game.
Cube World, an indie voxel-based game with RPG elements inspired by games such as Terraria, Diablo, The Legend of Zelda, Monster Hunter, World of Warcraft, Secret of Mana, and many others.
EverQuest Next and EverQuest Next: Landmark, upcoming MMORPGs by Sony Online Entertainment, make extensive use of voxels for world creation as well as player-generated content.
7 Days to Die, a voxel-based open-world survival horror game developed by The Fun Pimps Entertainment.
Brutal Nature, a voxel-based survival FPS that uses surface net relaxation to render voxels as a smooth mesh.
Voxel editors
While scientific volume visualization does not require modifying the actual voxel data, voxel editors can be used to create art (especially 3D pixel art) and models for voxel-based games. Some editors focus on a single approach to voxel editing while others mix various approaches. Some common approaches are:
Slice based: The volume is sliced in one or more axes and the user can edit each image individually using 2D
raster editor tools. These generally store color information in voxels.
Sculpture: Similar to the vector counterpart but with no topology constraints. These usually store density
information in voxels and lack color information.
Building blocks: The user can add and remove blocks just like a construction set toy.
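A toy sketch of the slice-based approach listed above (the array shapes and RGB layout are arbitrary assumptions): one z-slice is pulled out as an ordinary 2D image, edited with raster operations, and written back into the volume.

```python
import numpy as np

# Slice-based editing: treat one z-slice of the volume as a 2D image,
# edit it with ordinary raster tools, and write it back.
volume = np.zeros((16, 16, 16, 3), dtype=np.uint8)  # RGB color per voxel

z = 8
slice_img = volume[:, :, z].copy()      # extract a 16x16 RGB image
slice_img[4:12, 4:12] = (255, 0, 0)     # "paint" a red square in 2D
volume[:, :, z] = slice_img             # write the edited slice back

# Only the edited slice changed; its neighbour is untouched.
print(volume[5, 5, 8].tolist(), volume[5, 5, 9].tolist())  # [255, 0, 0] [0, 0, 0]
```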
Extensions
A generalization of a voxel is the doxel, or dynamic voxel. This is used in the case of a 4D dataset, for example an image sequence that represents 3D space together with another dimension such as time. In this way, an image could contain 100×100×100×100 doxels, which could be seen as a series of 100 frames of a 100×100×100 volume image (the equivalent for a 3D image would be showing a 2D cross-section of the image in each frame). Although storage
and manipulation of such data requires large amounts of memory, it allows the representation and analysis of
spacetime systems.
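The memory demand is easy to estimate; for instance, at one byte per doxel (an assumed minimal encoding, e.g. a uint8 opacity value):

```python
# Storage for a 100x100x100x100 doxel dataset at one byte per doxel.
doxels = 100 ** 4                     # total number of doxels
bytes_total = doxels * 1              # 1 byte each (assumed uint8 value)
print(doxels, round(bytes_total / 2**20, 1))  # 100000000 95.4 (MiB)
```

Richer per-doxel data (color plus opacity, floating-point samples) multiplies this figure accordingly, which is why compression matters for such datasets.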
References
[1] http://www.tomshardware.com/reviews/voxel-ray-casting,2423-3.html
[2] Engine Technology (http://web.archive.org/web/20060507235618/http://www.outcast-thegame.com/tech/paradise.htm)
[3] "Voxel terrain engine (http://www.codermind.com/articles/Voxel-terrain-engine-building-the-terrain.html)", introduction. In a coder's mind, 2005.
[4] http://www.tomshardware.com/reviews/voxel-ray-casting,2423-2.html
[5] http://patents.justia.com/inventor/kyle-g-freeman
[6] http://www.flightsim.com/vbfs/content.php?2994-NovaLogic-Awarded-Patent-For-Voxel-Space-Graphics-Engine
[7] Novelline, Robert. Squire's Fundamentals of Radiology. Harvard University Press. 5th edition. 1997. ISBN 0-674-83339-2.
[8] http://projectorgames.net/blog/?p=168
External links
Games with voxel graphics (http://www.mobygames.com/game-group/visual-technique-style-voxel-graphics)
at MobyGames
Fundamentals of voxelization (http://labs.cs.sunysb.edu/labs/projects/volume/Papers/Voxel/)
Web3D
Web3D initially referred to the idea of fully displaying and navigating Web sites using 3D. By extension, the term now refers to all interactive 3D content embedded into web pages that can be viewed through a web browser. Web3D technologies usually require installing a Web3D viewer (plugin) to see this kind of content.
Today, many formats and tools are available:
3DMLW
Adobe Shockwave
Altadyn
Burster (Web plugin to play Blender content)
Cult3D
FancyEngine
Java 3D
JOGL
LWJGL
O3D
Oak3D
ShiVa
TurnTool
Unity
Virtools
VRML
Viewpoint
Web3D Consortium
WebGL
WireFusion
X3D (extension of VRML)
External links
Lateral Visions [1], a 3D Web specialist and platform developer.
TDT3D [2], a European 3D community specializing in computer graphics and real-time 3D rendering (updated regularly).
Web3D and Web3D Framework: Web3D software company and Web3D developers [3]
Walkthrough Web3D online galleries, museums, and fairs [4]
Paul Festa (2002-02-26). "Bringing 3D to the Web" [5]. CNET News.
Canvas3D [6]
Altadyn [7], 3D online collaborative platforms
References
[1]
[2]
[3]
[4]
[5]
[6]
[7]
License
Creative Commons Attribution-Share Alike 3.0
//creativecommons.org/licenses/by-sa/3.0/