
Accessible and Assistive ICT

VERITAS
Virtual and Augmented Environments and Realistic User Interactions
To achieve Embedded Accessibility DesignS
247765

Core Simulation Platform

Deliverable No. D2.1.1


SubProject No.: SP2        SubProject Title: Innovative VR models, tools and simulation environments
Workpackage No.: W2.1      Workpackage Title: VERITAS Open Simulation Platform
Activity No.: A2.1.1       Activity Title: Core simulation platform
Authors: Panagiotis Moschonas (CERTH/ITI), Athanasios Tsakiris (CERTH/ITI), Dimitrios Tzovaras (CERTH/ITI)
Status: F (Final)
Dissemination level: Pu (Public)
File Name: D2.1.1 Core Simulation Platform
Project start date and duration: 01 January 2010, 48 Months

Version History Table


Version no.   Dates and comments
1             Draft version created and sent for peer review. October 2011.
2             Final version created. December 2011.


Table of Contents
Version History Table............................................................................................i
List of Figures.......................................................................................................v
List of Tables........................................................................................................ix
List of Abbreviations............................................................................................ix
Executive Summary.............................................................................................1
1 Simulation Engine..............................................................................................2
1.1 Requirements from the Simulation Engine...........................................................2
1.2 Evaluation of proposed engines...........................................................................3
1.2.1 OGRE...................................................................................................................3
1.2.2 OpenSceneGraph................................................................................................3
1.2.3 Delta3D................................................................................................................4
1.2.4 Verdict..................................................................................................................5
1.3 Chosen engine ...................................................................................................7
1.4 Integration...........................................................................................................7
1.5 Simulation Parameters........................................................................................8
1.5.1 Simulation levels..................................................................................................8
1.5.2 World time-step....................................................................................................9
1.5.3 Collision related parameters................................................................................9
2 Architecture Analysis........................................................................................10
2.1 Overview ........................................................................................................... 10
2.1.1 Input Information................................................................................................12
2.1.2 Simulation components initialization..................................................................13
2.2 Core simulation modules...................................................................................14
2.3 Secondary humanoid modules..........................................................................16
2.3.1 Generic supportive modules..............................................................................16
2.3.2 Motor simulation modules..................................................................................17
2.3.2.1 Basic motor simulation modules......................................................................17
2.3.2.2 Advanced motor simulation modules...............................................................18
2.3.3 Vision simulation modules..................................................................................18
2.3.3.1 Hearing simulation sub-module.......................................................................19
2.3.3.2 Cognition simulation sub-module.....................................................................20
2.4 The simulation cycle..........................................................................................20
2.5 The Simulation Dependency Tree......................................................................23
3 Human Simulation............................................................................................23
3.1 Avatar Representation.......................................................................................24
3.2 Motor Simulation................................................................................................25
3.2.1 Basic Elements..................................................................................................26
3.2.1.1 Bones.............................................................................................................. 26
3.2.1.2 Joints............................................................................................................... 29
3.2.1.3 Points of Interest..............................................................................................34
3.2.2 Forward Kinematics...........................................................................................36
3.2.2.1 Avatar Configurations......................................................................................37
3.2.2.2 Motion Interpolation.........................................................................................38
3.2.3 Inverse Kinematics (IK)......................................................................................39
3.2.3.1 Inverse Kinematics Chains..............................................................................41
3.2.3.2 Numerical IK.................................................................................................... 43
3.2.3.3 Agility factor and comfort angle........................................................................45
3.2.3.4 Analytical IK..................................................................................................... 46
3.2.4 Dynamics...........................................................................................................47


3.2.4.1 Forward Dynamics........................................................................... 47
3.2.4.2 Inverse Dynamics............................................................................ 48
3.2.4.3 Dynamics Trainer.............................................................................................49
3.3 Advanced Motor Simulation...............................................................................50
3.3.1 Motion Planning Algorithms...............................................................................50
3.3.1.1 Configuration Spaces......................................................................................50
3.3.1.2 Configuration Distance....................................................................................51
3.3.1.3 Graph-based Motion Planning..........................................................51
3.3.1.4 Rapidly-exploring Random Trees (RRT)..........................................................52
3.3.1.5 Comparison between Graph-based and RRT motion planning.......................55
3.3.1.6 Collision Groups.............................................................................................. 56
3.3.2 Gait Planning......................................................................................................58
3.3.2.1 Gait Path-finding.............................................................................................. 58
3.3.2.2 Gait Cycles...................................................................................................... 62
3.3.2.3 Avatar locomotion............................................................................................ 63
3.3.3 Grasp Planning..................................................................................................67
3.4 Vision Simulation...............................................................................................69
3.4.1 Head & Eyes Coordination.................................................................................69
3.4.2 Vision Model.......................................................................................................70
3.5 Hearing Simulation............................................................................................72
3.6 Cognitive/Mental Simulation..............................................................................74
4 Scene Objects Simulation................................................................................75
4.1 Static and moveable objects..............................................................................75
4.2 Object Points of Interest (PoI)............................................................................76
4.3 Object DoF-Chains............................................................................................76
4.4 Object Configurations........................................................................................78
4.5 Scene Rules......................................................................................................79
5 Task Simulation................................................................................................81
5.1 The Task Tree....................................................................................................81
5.2 Task Rules......................................................................................................... 82
5.2.1 Task Rule Conditions.........................................................................................82
5.2.2 Task Rule Results..............................................................................................84
5.2.3 A Simple Task Example......................................................................................85
5.2.4 Task Parallelism.................................................................................................87
5.2.5 Task Duration.....................................................................................................87
6 Accessibility Assessment.................................................................................87
6.1 Metrics and Human Factors...............................................................................87
6.1.1 Generic Metrics..................................................................................................88
6.1.1.1 Success-Failure............................................................................................... 88
6.1.1.2 Task Duration................................................................................................... 88
6.1.2 Physical Factors.................................................................................................88
6.1.2.1 Torque.............................................................................................................. 88
6.1.2.2 Angular Impulse...............................................................................................89
6.1.2.3 Energy consumption........................................................................................89
6.1.3 Anthropometry Factors.......................................................................................90
6.1.3.1 RoM Comfort Factor........................................................................................ 90
6.1.3.2 RoM-Torque Comfort Factor............................................................................91
6.2 Experiments.......................................................................................................91
6.2.1 Automotive.........................................................................................................92
6.2.2 Workplace..........................................................................................................97
6.2.3 Verdict..............................................................................................................101
Future Work.....................................................................................................101


References.......................................................................................................102


List of Figures
Figure 1: Simulation core platform architecture diagram. The data flow is also presented.........11
Figure 2: The core simulation platform input files. Several files from several external applications
are needed to run a simulation session. The simulation session must first be initialised. Then the
simulation cycle is repeated until all the tasks (described in the scenario model file) become
successful or at least one fails............................................................................................... 13
Figure 3: The simulation platform's core modules. The four simulation modules, as well as their
basic internal data, are displayed. The Simulation Module is basically an inspector of the
other three modules. The arrows display the data flow between the components.................14
Figure 4: The motor simulation modules. Advanced motor simulation (top line) is based on the
four basic modules (middle line)............................................................................................17
Figure 5: Modules taking part in vision simulation. The LookAt Module must cooperate
with the motor modules and the Task Manager in order to move the avatar's head and eyes. The
Vision Model applies image filtering and sends the output to the Simulation Module............19
Figure 6: The Hearing Model acts as a filter that receives audio data from the Simulation Module and
sends it back filtered according to the virtual user's audiogram..............................................19
Figure 7: The Cognition Module takes as input the cognitive attributes of the avatar and creates
delay factors for the Motion Manager and Task Manager modules.......................................20
Figure 9: The Humanoid Module and its components. A short description of the functionality of each
component is also presented.................................................................................................24
Figure 10: The humanoid visual representation is dependent on the loaded avatar model. Here
two different models are presented: a man's (left) and a woman's (middle). The collision
mesh (right) is calculated automatically based on the adaptation process of the loaded 3d
mesh. As shown, the collision mesh consists of primitive shapes, such as boxes, spheres
and capsules......................................................................................................................... 25
Figure 11: The shoulder joint and its degrees of freedom. There are three possible axes about which the
arm can be rotated, thus the joint's DoF number is equal to three........................................29
Figure 12: Avatar joints and their degrees of freedom. Each joint's location is represented by a
red sphere. Here, the joints regarding the movement of the limbs and the head have been
annotated.............................................................................................................................. 30
Figure 13: There are four torso joints. It is also shown that the root bone is located between the
LowerTorso joint and the Pelvis joint.....................................................................................30
Figure 14: The left hand's joints. Each hand has a total of 19 degrees of freedom....................31
Figure 15: The skeleton hierarchy. A child bone is connected to its parent bone by a joint
(represented by an arrow). The only bone that does not have a parent bone is the root bone. For
illustration purposes the right arm and leg components have been hidden...........................32
Figure 16: Head Points of Interest. The arrow vectors declare the "LookAt" direction................34
Figure 17: Feet Points of Interest. These points are used by the gait planning algorithms.........34
Figure 18: The hands points of interest. Used primarily by the inverse kinematics and grasp
planning algorithms............................................................................................................... 35
Figure 19: Probability density functions of several normal distributions. The red line
represents the standard normal distribution..........................................................................39
Figure 20: Left image presents an inverse kinematics chain (green dashed line). The ik-chain is
formed by the nine joints (yellow circled) and has two end effectors (on points A & B). The
LowerTorso_joint is considered the ik-chain's root. Both end effectors happen to have the
same parent, i.e. the RightIndexC_joint. Right image displays which bones are affected
directly by the ik-chain........................................................................................................... 42


Figure 21: Using a numerical-based ik-chain with one position-only end effector (red sphere):
the ik-chain tries to match the location of the end effector to the point of interest target (red
cross). As shown in the three right images, the orientation of the end-effector cannot be
controlled properly: the inverse kinematics algorithm produces different palm-orientation
solutions each time................................................................................................................ 43
Figure 22: Using a numerical-based ik-chain with two position-only end effectors (red and green
spheres) results in loose end-effector orientation control: the two targets (red and green
crosses) define an axis (in white), around which the controlled body part can be placed......43
Figure 23: Using a numerical-based ik-chain with three position-only end effectors (red, green
and blue sphere): each of the spheres must be matched to its target (red, green, blue cross
respectively), thus resulting in a unique orientation of the palm............................................43
Figure 24: Numerical-based IK resulting posture: on the left image, the IK was computed
without the agility factors, resulting in an unnatural posture, while on the right the
consideration of the agility factors produced a satisfactory result. The target is represented
as a red box and the end effector is the right hand's index-fingertip. Both chains start from
the lower torso joint............................................................................................................... 46
Figure 25: A 2d schematic representation of the 2-RRT method. As shown, the first tree's root
(in green) is the start configuration and the second tree (in orange) has the goal
configuration as its root. The numbers declare the expansion sequence of the two trees. After
the 31st iteration, a connection has been found between the two trees.................................54
Figure 26: After the two trees are connected (via nodes 31 & 30), the solution can be provided.
The path is represented as the sequence of the red-circled nodes.......................................55
Figure 27: A 2D representation of the 3d graph's elements used in the gait path-finding algorithm.
The green nodes are the floor “open” nodes and the nodes in red denote the “closed” nodes.
The left image represents a voxel with its 8 neighbours, all of which are “open”. In the middle
image, 3 of the neighbours do not contain floor elements, thus the transition to them is
discarded (red arrows). The result is shown in the right image..............................................59
Figure 28: The floor graph and its voxels (shown with white boxes). The walls and any other
scene objects that are not masked as “floor” are represented with red semi-transparent
colour..................................................................................................................................... 60
Figure 29: Floor graph path-finding example: initially the voxels that collide with the scene
objects are marked as “closed” (top left). Then the path is computed via the A* algorithm (top
right). Any intermediate path elements that are not crucial are discarded (bottom left). Finally
a Catmull-Rom spline is fitted to the remaining elements (bottom right). ..............................61
Figure 30: Result of the path-finding algorithm on a complex scene. The walls are shown in red
semitransparent colour. The blue cones represent the locations of the floor-graph nodes that
were used, while the green cone shows the target location. The final path is represented by
a green curve (Catmull-Rom spline)......................................................................................62
Figure 31: The average gait cycle. The stance and swing phases for the right leg are displayed.
In normal gait, the stance phase is around 60% of the gait cycle..........................................63
Figure 32: Path deviation error comprehension. The path to be followed is symbolised by a
green curve, while the red arrows are the step-progress vectors. The yellow areas (Sx)
denote the deviation from the path. If the error exceeds the threshold (left image), the
progress steps are decreased (right image) in order to decrease the avatar's deviation from
the path................................................................................................................................. 65
Figure 33: Turn angle comprehension. If the turn angle is too sharp (left image), the next step is
decreased (right image)......................................................................................................... 65
Figure 34: Gait sequence generated by the Gait Module. For clarity, only
the lower half of the skeleton is presented (red). The spline path (green) is computed based on the
floor-graph path (yellow). The step-progress vectors are shown in white. The steps are
decreased for abrupt turns.................................................................................................... 66


Figure 35: Gait sequence generated for a user model with decreased step length....................67
Figure 36: Grasp Module surface points generation. Initially, a point of interest (yellow box) and
a sphere (yellow transparent colour) must be defined. Then by applying ray casting various
target points are generated................................................................................................... 68
Figure 37: The Grasp Module generates various configurations (left). Some of them are not
valid and are rejected (middle). From the remaining, one is chosen based on criteria, such as
number of collisions, hand placement, etc.............................................................................69
Figure 38: Head and eye coordination. The LookAt Module target has been “locked” on the
purple box. The avatar's line of sight is represented by two green intersecting lines.............70
Figure 39: Normal vision, full-colour palette................................................................................71
Figure 40: Protanopia results in red-green colour-blindness.......................................................71
Figure 41: Deuteranopia: perception of medium wavelengths is minimal. ..................................71
Figure 42: Tritanopia results in the inability to perceive short wavelengths.................................71
Figure 43: Glaucoma: a large proportion of the visual field is lost...............................................72
Figure 44: Macular degeneration results in loss of the visual field's central region.....................72
Figure 45: Demonstration of the glare sensitivity symptom simulation: compared to the normal
vision case (left image), areas near bright colours and lights have lost their contours (right
image)................................................................................................................................... 72
Figure 46: Otitis, only middle frequency bands are retained. .....................................................73
Figure 47: Otosclerosis, hearing loss of low frequency bands....................................................73
Figure 48: Presbycusis, mild hearing loss..................................................................................73
Figure 49: Presbycusis, moderate hearing loss..........................................................................73
Figure 50: Noise induced hearing loss of high frequency bands (mild case)..............................74
Figure 51: Noise induced hearing loss of high frequency bands (severe case)..........................74
Figure 52: Profound hearing loss with residual low frequency hearing.......................................74
Figure 53: Hearing loss of low frequency bands, rising configuration.........................................74
Figure 54: A simple DoF-Chain consists of two elements: the start element (elem. 1), which must
always be a moveable object, and the end element (elem. n), which can be either moveable or
static. Several degree-of-freedom elements (elem. 2 ~ n-1) can be used in between to
connect the two objects. Each degree of freedom can be either rotational or translational....77
Figure 55: Two or more DoF-Chains can be connected in order to allow complex object
functionality. Here a complex object consisting of five primitive objects is presented. This
complex object consists of four moving parts and one static part..........................................78
Figure 56: A car's storage compartment example. The compartment consists of three connected
parts: the dashboard (static, red wireframe), the door (moveable, green wireframe) and the
door handle (moveable, in black). The red box denotes an object-PoI that can be used by the
avatar in order to open the door. In State A, the handle is in its initial position and the door is
locked. In State B, the handle has been pulled by an external force. If the handle rotates
enough degrees, the door unlocks (State C).........................................................................79
Figure 57: Task tree example. The root task, Task A, is analysed into three subtasks. If any of
them fails, then the whole scenario fails. The sequence that is followed here is Task A →
Task B → Task B.1 → Task B.2 → Task C → Task D → Task D.1 → Task D.2......................81
Figure 58: Simple task example. The task contains three rules. One results in action and the
other two are used for success and failure checks................................................................86


Figure 59: Automotive scenario, testing a storage compartment with handle. The compartment's
door opens when the handle is rotated/pulled enough degrees. The green lines indicate the
avatar's line of sight and the yellow arrow indicates the current subtask's target..................92
Figure 60: Automotive scenario, testing a storage compartment that can be opened by pushing
its door................................................................................................................................... 93
Figure 61: Summed time durations for all the automotive task repetitions (in sec). Each duration
refers to ten repetitions of the same task sequence..............................................................94
Figure 62: The mean torques (Eq.53) at six body locations for the automotive scenario. All
values are in Nm.................................................................................................................... 94
Figure 63: The automotive task's angular impulses (Eq. 54). Values in Nm·s. It is worth mentioning
that the inclusion of the time component into the torque discriminates better between the two designs.
.............................................................................................................................................. 95
Figure 64: The total estimated energy consumed (Eq.58), by each body region for the
automotive scenario. Values declared in Joules....................................................................95
Figure 65: The RoM Comfort factor (Eq.62) distributions for the automotive tasks. Higher values
indicate more comfortable situations for the examined body region......................................96
Figure 66: The RoM-Torque Comfort factor (Eq.64) distributions for the automotive tasks. Values
near unity indicate more comfortable and less torque-demanding body postures.................96
Figure 67: Workplace scenario, two different office designs: one having the drawer above its
table (left image), and one having it below (right image).......................................................97
Figure 68: Summed durations for all the workplace task repetitions (in sec). Each duration refers
to ten repetitions of the same task sequence........................................................................97
Figure 69: The mean torques (Eq.53) at six body locations for the workplace scenario. Values
are in Nm............................................................................................................................... 98
Figure 70: The workplace task's angular impulses (Eq. 54). Values in Nm·s...............................99
Figure 71: The total estimated consumed energy (Eq.58) by each body region for performing the
workplace tasks. Values in Joules.........................................................................................99
Figure 72: The RoM Comfort factor (Eq.62) distributions for the workplace tasks. Higher values
indicate more comfortable situations for the examined body region....................................100
Figure 73: The RoM-torque comfort factor (Eq.64) distributions for the workplace tasks. Values
near unity indicate more comfortable and less torque-demanding body postures...............100


List of Tables
Table 1: Main engine features for each proposed candidate........................................................6
Table 2: The bone groups and their respective bones................................................................27
Table 3: The humanoid's Points of Interest.................................................................................36
Table 4: Comparison of analytical and numerical inverse kinematics methods..........................41
Table 5: Analytical IK Chains...................................................................................................... 47
Table 6: Comparison between the Graph-based and Rapidly-exploring Random Tree-based
motion planners..................................................................................................................... 56
Table 7: Collision Groups and their elements.............................................................................57
Table 8: The supported scene rule conditions. Each condition element, besides its type, may
need some extra data parameters to be defined (Param. A/B/1)...........................................80
Table 9: Scene rule result elements and their respective parameters........................................80

List of Abbreviations
Abbreviation Explanation
DoF Degree of Freedom
GUI Graphical User Interface
IK Inverse Kinematics
LGPL GNU Lesser General Public License
ODE Open Dynamics Engine
OGRE Object-Oriented Graphics Rendering Engine
OS Operating System
OSG OpenSceneGraph
PoI Point of Interest
RoM Range of Motion
RRT Rapidly-exploring Random Tree
SP Sub-Project
VR Virtual Reality


Executive Summary
The main objective of this manuscript is to analyse the development process and
algorithms that are used in the Core Simulation Platform. The Core Simulation Platform
forms the basis of the VERITAS platform. More specifically, the Platform is the
infrastructure for all simulations that take place within VERITAS, and its purpose is to
provide all the necessary aids and functionality for reproducing the simulation
scenarios realistically. Support of all the intelligent avatar's structures (W1.6) and of its
interaction with the virtual objects/products is also part of the Core Simulation Platform.
Finally, in order to recognize necessary design changes in the software or hardware
setup, the Core Simulation Platform contains all the needed elements that will provide
feedback to the product designer at every simulation step of the application scenario.
This document shall provide all the necessary information regarding the Core
Simulation Platform. In this manuscript, basic implementation aspects are described,
such as the simulation engine on which the platform has been based and the
architecture that has been followed; the reasoning of “why this and not something else”
is also explained. Additionally, this document describes more complex aspects, like the
algorithms and structures which have been created for supporting the virtual human
simulation, the scene objects manipulation and the task sequence management.
This document is split into six sections. Each section deals with one important
aspect of the implementation of the core simulation platform.
Section 1 presents a thorough analysis and evaluation of the available engines and
their characteristics. The selected engine and the reasoning behind this choice are
also presented.
The simulation core architecture is presented in Section 2. A detailed description of the
simulation platform's basic modules, i.e. the human model, the scene model and the Task
Manager, and of the various sub-modules that support them is also given in
the same section.
Section 3 contains all the information related to the human simulation. The
necessary modules that have been implemented for supporting the intelligent avatar
are also described in this section, where the three-level simulation system is also
mentioned. The human model capabilities are categorized into four
main domains: motor simulation, vision simulation, hearing simulation and cognitive
simulation, each studied in its respective subsection.
The simulation of the 3D scene and its objects is presented in Section 4, covering both
simple object manipulation and more complex object simulation aspects.
The whole simulation process is supervised by a sophisticated task management
system that is described in Section 5. The task structures, their internal organization
and the overall management are supervised by the Task Manager Module which is
analysed in the same section.
Section 6 contains information about the metrics and the human factors that are used
by the core simulation platform in order to perform the product's accessibility
assessment. Two example cases are also presented, one regarding the automotive
area and one that concerns the workplace area.


1 Simulation Engine
In this section, several aspects of the selection of the simulation engine for the
VERITAS Simulation Core Platform are studied. Starting from the Platform
requirements, moving on to the comparison of engine candidates and finally selecting the
most suitable simulation engine is a difficult task, and it therefore needs to be
described thoroughly.

1.1 Requirements from the Simulation Engine


The most important decision regarding the core functionality and the foundation of
VERITAS software development is the choice of the most suitable simulation engine to
use in order to provide the most versatile, easy to use and full-featured basis for further
development.
After careful consideration, correspondence, in-person meetings and teleconferences,
the partners involved in SP2 have agreed on the following general requirements for
candidate solutions:
• Non-commercial license but with the ability to derive commercial applications
without the need to provide the source code (LGPL – MIT).
• The ability to be used both in the core simulation platform and the VR
Immersive platform of VERITAS without the need to convert the codebase save
for the specific needs of the Immersive platform (stereoscopic display support,
cluster rendering support, VR peripheral support).
• Multiple 3D and 2D object file format support.
• Integration of needed libraries and tools to facilitate physical simulation,
character animation and object collisions.
• Multiple OS platform support.
• Must support or be built in the C++ language. An implementation in this
programming language provides faster 3D graphics manipulation and
cooperates better with the hardware acceleration of the computer's graphics
card.
Furthermore, through the process described earlier, the partners involved in SP2 have
identified a number of additional technical requirements for the architecture of
VERITAS Simulation Core which are listed below:
• Robust, extendible and versatile skeletal animation library in order to support
the simulation of motor skills for the virtual user.
• Ability to integrate an image processing library such as OpenCV [1], in order to
simulate the vision of the virtual user and perform analysis on the visibility of
elements that affect task completion
• Ability to integrate an audio processing library such as OpenAL [2], in order to
simulate the hearing of the virtual user and perform analysis on the audibility of
elements that affect task completion

• Ability to integrate cognition simulation algorithms already implemented in other
open source projects, such as the CogTool project [3].

1.2 Evaluation of proposed engines


The most prominent simulation engines that inherently cover the graphical and physics
simulation aspect as well as handle general I/O, positional sound, support multiple GUI
solutions and satisfy the requirements above are: OGRE [4], OpenSceneGraph [5] and
Delta3D [6]. In the following paragraphs, the main features of each of the main
candidates will be presented, in order to provide justification for the final simulation
platform selection.

1.2.1 OGRE
OGRE (Object-Oriented Graphics Rendering Engine) is a scene-oriented, flexible 3D
engine written in C++ designed to make it more intuitive for developers to produce
applications utilising hardware-accelerated 3D graphics. The class library abstracts all
the details of using the underlying system libraries like Direct3D and OpenGL and
provides an interface based on world objects and other intuitive classes. OGRE offers a
simple and easy-to-use object-oriented interface designed to minimise the effort
required to render 3D scenes and to be independent of the underlying 3D implementation, i.e.
Direct3D/OpenGL. Common requirements like render state management, spatial
culling and dealing with transparency are handled automatically. Full documentation exists for
all engine classes. So far it has proven to be a stable engine used in several
commercial products.
Concerning the 3D API support, the main advantage of OGRE is that it supports both
Direct3D and OpenGL. The support of Direct3D enables a faster and
more advanced graphics rendering process. However, there are some things to
be taken into account: Direct3D is exclusive to Microsoft Windows and is not
available for other operating systems, like Linux and MacOS; in these operating systems
hardware acceleration can only be provided through the OpenGL API. By
default OGRE lacks stereoscopic display support and does not contain a physics
engine. However, these shortcomings can be addressed by applying plug-ins to the OGRE
library.

1.2.2 OpenSceneGraph
The OpenSceneGraph (or OSG) is an open source high performance 3D graphics tool-
kit, used by application developers in fields such as visual simulation, games, virtual
reality, scientific visualization and modelling. Written entirely in Standard C++ and
OpenGL it runs on all Windows platforms, OSX, GNU/Linux, IRIX, Solaris, HP-Ux, AIX
and FreeBSD operating systems. The OpenSceneGraph is now well established as the
world's leading scene graph technology, used widely in the vis-sim, space, scientific,
oil-gas, games and virtual reality industries.
All OpenSceneGraph structures are represented as nodes on a graph, resulting in
very easy manipulation. Moreover, the support of smart pointers prevents memory
leaks and makes the code debugging process easier. Another advantage of OSG is
its support for loading numerous 3D graphics file formats. For reading and writing databases,
the database library (osgDB) adds support for a wide variety of database formats via
an extensible dynamic plug-in mechanism; the distribution now includes 55 separate
plug-ins for loading various 3D database and image formats. 3D database loaders
include COLLADA, LightWave (.lwo), Alias Wavefront (.obj), OpenFlight (.flt),
TerraPage (.txp) including multi-threaded paging support, Carbon Graphics GEO
(.geo), 3D Studio MAX (.3ds), Performer (.pfb), AutoCAD (.dxf), Quake Character
Models (.md2), DirectX (.x), Inventor ASCII 2.0 (.iv) / VRML 1.0 (.wrl), Designer
Workshop (.dw), AC3D (.ac) and the native .osg ASCII format.
The scene graph runs on everything from portables all the way up to high-end multi-core,
multi-GPU systems and clusters. This is possible because the core scene graph
supports multiple graphics contexts for both OpenGL display lists and texture objects,
and the cull and draw traversals have been designed to cache rendering data locally
and use the scene graph almost entirely as a read-only operation. This allows multiple
cull-draw pairs to run on multiple CPUs which are bound to multiple graphics
subsystems. Support for multiple graphics contexts and multi-threading is available
out of the box via osgViewer; all the examples in the distribution can run multi-
threaded and multi-GPU.
OpenSceneGraph supports OpenGL, but lacks any implementation of the Direct3D
API. Another drawback is the lack of physics support, which is needed for the VERITAS
simulations. Moreover, OSG does not have any embedded sound libraries.
In general, OpenSceneGraph provides the programmer with advanced, state-of-the-art
3D graphics and animation functions, but lacks support for core elements, like physics
and sound.

1.2.3 Delta3D
Delta3D [6] is an Open Source engine which can be used for games, simulations, or
other graphical applications. Its modular design integrates well-known Open Source
projects such as Open Scene Graph (OSG), Open Dynamics Engine (ODE [7]),
Character Animation Library (CAL3D [8]), and OpenAL as well as projects such as
Trolltech’s Qt, Crazy Eddie’s GUI (CEGUI), Xerces-C, Producer, InterSense Tracker
Drivers, HawkNL, and the Game Networking Engine (GNE). Rather than bury the
underlying modules, Delta3D integrates them together in an easy-to-use API which
allows direct access to important underlying behaviour when needed. Delta3D renders
using OpenSceneGraph and OpenGL.
The primary goal of Delta3D is to provide a single, flexible API with the basic elements
needed by all visualization applications. In addition to the underlying components,
Delta3D provides a variety of tools such as the Simulation, Training, and Game Editor
(STAGE), the BSP Compiler, the particle editor, a stand-alone model viewer, and a HLA
Stealth Viewer. Further, Delta3D has an extensive architectural suite that is integrated
throughout the engine. This suite includes frameworks such as the Application Base
Classes (ABC) for getting started; the Dynamic Actor Layer (DAL) for Actor Proxies and
Properties; signal/slot support for direct method linking; the Game Manager (GM) for
actor management; pluggable terrain tools for reading, rendering, and decorating
terrain; and high-level messaging for actor communication.
As mentioned earlier, Delta3D is a full game/simulation engine that is built on top of
OpenSceneGraph. Therefore, all the features of OSG are supported. Moreover,
physics and rigid body collisions are fully supported. Advanced audio playback is also
provided via the integration of OpenAL (the dtAudio module). Another advantage of
Delta3D is the full support of VRPN (Virtual Reality Peripheral Network), a
library that offers extensible, convenient I/O access to a multitude of VR peripherals
such as trackers, controllers, displays etc. The VRPN integration into Delta3D has
already been utilized by CERTH/ITI to provide I/O support for our ART infrared tracker
and the 6DoF wand that are part of our IC:IDO IC-1 single-wall VR system.

1.2.4 Verdict
The main features of each engine are presented in Table 1. Several partners
working on 3D applications have had experience with both OGRE and
Delta3D/OpenSceneGraph; the following list of observations is of
importance to the final choice:
• OGRE is more oriented towards games and stereoscopic support is not a
priority for the developer community of that engine; therefore it is not as
thoroughly supported as required. OpenSceneGraph and Delta3D however are
targeted more to simulation and VR and have stereoscopic support out of the
box.
• Cluster support for OGRE exists only under a Linux environment through DMX.
OpenSceneGraph and Delta3D have integrated support for cluster rendering.
• Tools and editors in OGRE are sparse and mostly revolve around mesh, particle
and animation viewers without an official world creation tool that integrates
everything (along with physics support) into a single application. On the other
hand, the STAGE editor provided with Delta3D is a full featured world editor that
is extensible to include any new type of object declared through the modular
Actor architecture supported by Delta3D.
• Fraunhofer is already using OSG in its Lightning platform and is therefore
familiar with OSG development for immersive VR.
• OSG supports more 3D/2D content formats than OGRE out of the box making it
easier to develop the necessary tools for VERITAS that will provide system
integration as stated in A2.3.5.
The aforementioned facts, along with the inherent support for cluster rendering and
stereoscopic display, indicate that Delta3D is better suited as the VERITAS simulation
engine of choice.


Feature | OGRE | Delta3D (+OpenSceneGraph)
Stereoscopic display | Through patch | Integrated
Cluster support | Only under Linux | Integrated
OS support | Windows, Linux, OSX | Windows, Linux, OSX
Physics Engines Integration | Plugin: Newton, PhysX, ODE, Bullet | Integrated: ODE; Plugin: Newton, PhysX
GUI Support | Integrated: CEGUI; Plugin: WxWidgets, Gtk, Qt, MFC, Flash | Integrated: CEGUI, Qt; Plugin: WxWidgets, Gtk, MFC
3D Format Support | Integrated: .mesh / .skeleton (OGRE specific); Plugin: .bsp | Integrated: .3dc, .3ds, .ac, .dw, .flt, .geo, .ive, .logo, .lwo, .lws, .md2, .obj, .osg, .tgz, .x
Spatial Audio | Plugin: FMOD, OpenAL | Integrated: OpenAL
VR peripheral support | Custom code | Integrated: VRPN
Graphics API support | DirectX, OpenGL | OpenGL
Animation Support | Frame-based, FK support, internal code for skeletal animation and animation blending, IK through OpenTissue | Frame-based, skeletal animation using integrated Cal3D/ReplicantBody, integrated IK
License | LGPL/MIT | LGPL
Scene Editor | External developer: Ogitor v.0.3.2 | Integrated: STAGE 2.3.0
Physics support in editor | No | Yes
AI support | External | Integrated: Waypoint following, FSM, Scripting
Community Support | Excellent | OpenSceneGraph: Excellent; Delta3D: Average

Table 1: Main engine features for each proposed candidate.


1.3 Chosen engine


Through the requirements and comparison of features described in the previous
paragraphs, a consensus was reached among the SP2 developers to use Delta3D as
the basis for the development of the VERITAS Simulation Core. The work carried out so far
using the Delta3D platform has been very effective and the initial results were
showcased in the ICT 2010 event. The main dependency components that form the
codebase of the simulation engine include:
• Delta3D 2.3: main simulation engine.
• OpenSceneGraph 2.8.2: 3d graphics and animation support.
• Open Dynamics Engine 0.11.1: physics and rigid body collision support.
• VRPN 7.26: Virtual reality peripheral network.
• Open-DIS 2.5: Open source implementation of the distributed interactive
simulation protocol.
• Qt 4.5.3: 2d graphical user interface libraries.
• CEGUI 0.6.2b: graphics user interfaces support based on 3d texture rendering.
• CAL3D 0.11.0: skeletal animation supporting seamless meshes.
• GNE 0.75 & HawkNL 1.68: needed for networking capabilities of the engine.
• GDAL 1.6.2: adds support of virtual terrains.
• OpenAL 1.1 & ALUT 1.1.0: sound playback support.
• XERCES 2.8.0: xml parsing.
• Boost Python 1.38.0 & Python 2.5: multi-paradigm programming language
support.
Additionally, the need to support image filtering and processing for the vision
simulation made the inclusion of the OpenCV library necessary. The version
embedded in the simulation is 2.3.1, which also adds support for mobile devices.
It must be noted that several new components have been implemented from scratch,
such as the inverse kinematics modules, the collision avoidance algorithms and the
advanced human and scene modelling structures, in order to fully provide the
necessary functionality to the VERITAS simulation.

1.4 Integration
The Core Simulation Platform has been implemented as a C++ library (Dynamic Link
Library). Code is encapsulated in a single library file which contains a set of classes
and methods. This approach is one of the most effective in terms of performance (no
overheads) and ease of integration (direct use of the exposed API).
Library integration allows an effective form of integration between software
components, both in terms of performance and productivity. The library API itself
specifies a «communication protocol» between the user and the library, avoiding the
need to specify further communication protocols and data formats. Performance is the
best achievable, since the calls are native procedure calls within a single process.
The integration of libraries requires the use of a compatible development environment.
This means that the library to be integrated must have been packed with the same (or
a compatible) development environment as the one used to develop the
target application.
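As a purely illustrative sketch of what such in-process use looks like, the C++ fragment below mimics a client application driving the simulation through an exposed API. The class and method names (veritas::CoreSimulation, loadScenario, step, finished) are hypothetical placeholders rather than the platform's actual interface, and the method bodies are stubs standing in for the work the DLL would do.

```cpp
// Purely illustrative: the class and method names below are hypothetical
// placeholders, not the actual API exposed by the Core Simulation Platform DLL.
// In a real integration the declarations would come from the library's public
// header and the definitions from the DLL itself.
#include <iostream>
#include <string>

namespace veritas {
class CoreSimulation {
public:
    bool loadScenario(const std::string& scene, const std::string& scenario) {
        std::cout << "Loading " << scene << " and " << scenario << "\n";
        return true;                               // stubbed here; the DLL does the real work
    }
    bool step() { return ++steps_ < 3; }           // stub: pretend three world time-steps pass
    bool finished() const { return steps_ >= 3; }  // all tasks succeeded or one failed
private:
    int steps_ = 0;
};
}

int main() {
    veritas::CoreSimulation sim;                          // direct, in-process use of the exposed API
    if (!sim.loadScenario("scene.xml", "scenario.xml"))   // files produced by the Interaction Adaptor
        return 1;
    while (!sim.finished())
        sim.step();
    std::cout << "Simulation session completed\n";
}
```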

1.5 Simulation Parameters


The core simulation platform is fully parametrisable. Several parameters can be altered
in order to customise a simulation session. The simulation parameters are
split into three categories:
• Simulation levels
• World time-step
• Collision related parameters
In the following paragraphs each of the above parameters will be analysed.

1.5.1 Simulation levels


There are three possible simulation levels at which a session can run:
• Level 0: data comparison based simulation.
• Level 1: kinematic simulation.
• Level 2: dynamic simulation.
When a simulation session runs at level 0, every task validation is done by simple data
comparisons. For example, if the avatar has an upper limb length equal to x meters
and the object is unreachable, i.e. its distance from the shoulder is more than x meters,
then the task fails. The level 0 simulation is the fastest one (of the three levels), but it
does not contain any algorithmic computation – it just compares numbers. It can be
used only to exclude the feasibility of some basic actions, such as the reach action
described above.
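A minimal sketch of such a level-0 check is given below, built around the reach example above; the data structures and function names are illustrative and do not correspond to the platform's actual code.

```cpp
// Minimal sketch of a level-0 (data comparison) feasibility check for a reach
// action: no kinematics is computed, only numbers are compared.
#include <cmath>
#include <iostream>

struct Vec3 { double x, y, z; };

double distance(const Vec3& a, const Vec3& b) {
    return std::sqrt((a.x - b.x) * (a.x - b.x) +
                     (a.y - b.y) * (a.y - b.y) +
                     (a.z - b.z) * (a.z - b.z));
}

// The task is feasible only if the target lies within the upper limb length.
bool reachTaskFeasible(const Vec3& shoulder, const Vec3& target, double upperLimbLength) {
    return distance(shoulder, target) <= upperLimbLength;
}

int main() {
    Vec3 shoulder{0.0, 1.4, 0.0};
    Vec3 target{0.5, 1.2, 0.3};
    bool ok = reachTaskFeasible(shoulder, target, 0.65);  // 0.65 m arm length (example value)
    std::cout << (ok ? "task feasible" : "task fails") << "\n";
}
```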
The Level 1 simulation session can be described as a “kinematic session”. Every
algorithm that involves kinematic computations can be activated. All the kinematic
parameters of the virtual user models are used, such as the body joints' ranges of
motion and gait velocities. Regarding the avatar's motor functionality, forward and inverse
kinematics can be applied when performing the accessibility assessment. However,
level 1 does not contain any dynamics algorithms, and thus cannot involve any kind of
forces or torques.
The Level 2 based simulation is the most advanced simulation. It involves both
kinematics and dynamics. It expands the algorithms of level 1, by adding forward and
inverse dynamics. In level 2 simulations, forces and torques are present, thus concepts
such as the avatar's strength capabilities can now be tested. It must be mentioned that
level 2 simulations are slower when compared to kinematic simulations, due to the
extra computation of the dynamics.


1.5.2 World time-step


The world time-step parameter defines the duration (in seconds) of the time-step of the
simulation session. Before describing anything else, it must be clarified that:
• the simulation time is not the same as the application time.
The application time describes time that passes when the application, i.e. the core
platform, is running. The simulation time is the internal simulated scenario time and
passes in constant discrete steps. These steps are defined here as the “world time-
steps”.
The world time-step is the most crucial parameter in the simulation regarding levels 1
and 2. Large time-step values make a simulation session end faster, while smaller time-
step values result in longer simulation sessions. However, when the time-step value
is too large, inaccuracy and instability issues may occur. This should be taken
into consideration, especially when the application is running in level-2 mode.
The default value of the simulation world time-step is 0.005sec. If the hardware
specifications can allow it, this value can be decreased in order to increase accuracy
and stability of the simulated session.
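The sketch below illustrates the distinction between application time and simulation time using a common fixed time-step accumulator loop. It is an assumption-laden illustration, not the platform's actual main loop; the 1-second stop condition exists only to keep the demo finite.

```cpp
// Sketch of a fixed world time-step loop: simulation time advances in constant
// discrete steps, decoupled from the application (wall-clock) time.
#include <chrono>
#include <iostream>

int main() {
    const double worldTimeStep = 0.005;   // default world time-step (seconds)
    double simulationTime = 0.0;          // internal, simulated scenario time
    double accumulator = 0.0;             // unconsumed application (wall-clock) time

    auto previous = std::chrono::steady_clock::now();
    bool running = true;
    while (running) {
        auto now = std::chrono::steady_clock::now();
        accumulator += std::chrono::duration<double>(now - previous).count();
        previous = now;

        // Consume the elapsed application time in constant world time-steps.
        while (accumulator >= worldTimeStep) {
            simulationTime += worldTimeStep;   // advance the simulated scenario time
            accumulator -= worldTimeStep;
            // ... kinematic/dynamic updates would happen here ...
            if (simulationTime >= 1.0) {       // stop after 1 s of simulated time (demo only)
                running = false;
                break;
            }
        }
    }
    std::cout << "Simulated " << simulationTime << " s of scenario time\n";
}
```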

1.5.3 Collision related parameters


The collision parameters are applied only in level-1 and level-2 simulations. There are
two groups of collision parameters (a configuration sketch is given after this list):
• the parameters that are related to the collision avoidance: they define the
collision avoidance behaviour and can be altered to:
◦ enable or disable all the collision avoidance algorithms.
◦ enable or disable the collision avoidance of the avatar to the static scene
objects, i.e. objects that cannot be moved.
◦ enable or disable the collision avoidance of the avatar to the moveable
scene objects.
◦ enable or disable the avatar inter-collision avoidance, i.e. avoidance of
collisions between the body parts.
• the parameters related to the collision response: these parameters apply only to
level-2 simulations and are ignored in levels 0 and 1. If collision response is
activated, forces are generated every time a body part collides with an object.
Enabling these parameters can make the simulation run very slowly. The user can:
◦ enable or disable the collision response between the body parts and the
static environment.
◦ enable or disable the generation of collision reaction between the body parts
and the moveable objects.
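As an illustration of how these switches could be grouped, the sketch below collects them in a single configuration structure; the struct, field names and default values are assumptions for demonstration, not the platform's actual configuration interface.

```cpp
// Illustrative grouping of the collision-related parameters described above.
struct CollisionParameters {
    // Collision avoidance (levels 1 and 2)
    bool avoidanceEnabled         = true;   // master switch for all avoidance algorithms
    bool avoidStaticObjects       = true;   // avatar vs. static scene objects
    bool avoidMoveableObjects     = true;   // avatar vs. moveable scene objects
    bool avoidSelfCollisions      = true;   // avatar body part vs. body part

    // Collision response (level 2 only; ignored at levels 0 and 1)
    bool respondToStaticObjects   = false;  // generate forces on contact with the static environment
    bool respondToMoveableObjects = false;  // generate forces on contact with moveable objects
};

int main() {
    CollisionParameters params;
    params.respondToStaticObjects = true;   // enabling response can slow the simulation considerably
    return 0;
}
```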


2 Architecture Analysis

2.1 Overview
The basic operation of the simulation core platform can be summarized in the following
three primary steps:
a) Import into the simulation core platform all the information and parameters that
constitute the simulation scenario.
b) Update all the internal simulation components and their structures so as to match
the simulation scenario specification.
c) Run the simulation cycle, determine task feasibility and report whether the task
was successful or not.
These steps can be analysed to more secondary steps, but in general they describe all
the basis of the data flow. In this section, every one of these three steps will be studied.
In the following subsections more about the components (modules) and several special
parameters that consist the simulation platform will be described.
The simulation core platform architecture and the simulation data flow is shown in
Figure 1.


Figure 1: Simulation core platform architecture diagram. The data flow is also presented.


2.1.1 Input Information


As shown in Figure 1, the Interaction Adaptor (the main output of activity A2.1.5)
creates the necessary scenario information and stores it into several files. These files
contain all the simulation scenario information needed by the core platform in order to
perform a simulation session. There are two basic files exported by the Interaction
Adaptor:
• the scene model file and
• the simulation scenario file.
The scene model file is an xml file that describes all the virtual objects of the scene,
their binding to their meshes, as well as their special characteristics, such as their
weight, degrees of freedom and functionality. The scene model file is needed by the
simulation core platform in order to construct the virtual scene and add the objects to
it. It is also needed to assign collision geometry to the objects, as well as to add
physical characteristics to each one. An analysis of how this process is applied is given
in section 4.
The simulation scenario file is an xml file that describes in detail the avatar action
sequence. It is based on the task model and contains special information for basic
actions, e.g. "move", "reach", "grasp", "look", as well as more advanced actions, i.e.
"walk". This information is needed by the simulation core platform in order to "guide"
the avatar on what to do during the session. This file also adds some success/failure
conditions to the task sequence, for example: "if the hand is further than 0.5cm from
the target, then fail". More about the task management and the structures that support
it can be found in section 5.
It must be clarified that almost all motion actions are computed dynamically by the
simulation platform, using algorithms specially built for this purpose and which are not
predefined in any case in the simulation scenario file. This means that the simulation
scenario file does not contain any predefined animation or simulation steps. It is just a
definition of the task and action that needs to be taken by the intelligent avatar.
Besides these two files, the simulation needs the definition of additional files containing
information about the avatar parameters. These can be split into two categories:
• the avatar files and
• the virtual user model file.
The avatar files contain the visual representation of the avatar, i.e. the avatar 3d
meshes, its skeletal topology and specific declarations of the locations of special points
of interest, like the position of the eyes, fingertips etc.
The virtual user model file contains the information on the capabilities of the avatar.
Specifically, the virtual user model file contains information about:
• the motion characteristics of the avatar, e.g. range of motion, etc.
• dynamics parameters, e.g. maximum forces applied by its limbs, etc.
• its hearing characteristics, e.g. audiogram specification, etc.
• vision characteristics, e.g. glare sensitivity parameters, etc.
• cognitive characteristics: mental characteristics of the avatar.


The avatar files are generated as an output of the tools provided by the activities
A1.6.3 "Intelligent avatar" and A2.1.6 "Virtual User and Simulation models adaptor".
The virtual user models are provided through the tool of A.1.6.5 "Integrated user model
generator". The whole process is shown in Figure 2.

Figure 2: The core simulation platform input files. Several files from several external
applications are needed to run a simulation session. The simulation session must first
be initialised; then the simulation cycle is repeated until all the tasks (described in the
scenario model file) complete successfully or at least one fails.

2.1.2 Simulation components initialization


Having all the above structures, the Core Simulation Platform has to initialize its
internal components, before running the first simulation step. First the avatar collision
meshes need to be adapted to the avatar mesh. Then, all the capability parameters,
which are defined in the virtual user model file, are passed into the simulation core's
avatar structures and change its motor, vision, hearing and cognitive capabilities. By
this time, the avatar model has adopted all the specifications needed to run the
simulation session. The responsible algorithms for this adaptation are described in
section 3.
After the avatar/human model adaptation, it is time for the scene to be initialized. The
information contained in the scene model file is used. All objects defined in it are
created and passed into the simulated environment. Collision geometry for every object


is also created and the functionality of each one is generated. The responsible
structures for the scene model are described in section 4.
The last step is the implementation of all the simulation scenario file structures that will
take place in a simulation session. From these the internal task sequence is generated.
The simulation core platform has internal structures for supporting each task. It also
has an advanced module that is responsible for managing this task sequence. The
task management is described in section 5.

2.2 Core simulation modules


The Core Simulation Platform contains four major entities. Each entity cooperates with
the others when running a simulation session. We will refer to these entities by the
term "core modules". The core modules and their data flow are presented in Figure 3.

Figure 3: The simulation platform's core modules. The four core modules, as well as
their basic internal data, are displayed. The Simulation Module is basically an inspector
of the other three modules. The arrows display the data flow between the components.

The four core modules are:


• the Simulation Module, which is responsible for managing the whole simulation
and providing the user feedback.
• the Humanoid Module (or human model module), which is responsible for
simulating the avatar's capabilities and performing its actions.
• the Scene Module (or scene model module), which is responsible for managing
the scene objects and their functionality.
• the Task Manager Module, which has the purpose of managing the tasks that
need to be done, i.e. the sequence of the avatar's actions and their results
on the scene objects.
As it is shown, the Simulation Module encapsulates the other three components. In
terms of hierarchy, the Simulation Module is above any other structure in the VERITAS
simulation platform. It is the main component that supervises the whole simulation and
organizes the communication between the human, scene and Task Manager modules.
If anything goes wrong, from simple matters, like wrong input to the platform, to more
advanced ones, like physics instability issues, it is reported by the Simulation Module.
The Humanoid Module's primary responsibility is managing the avatar structures
during the simulation. More precisely, it is responsible for moving the avatar based on
the virtual user model motor capabilities, and for applying various visual and hearing
filters, based on the virtual user model vision and hearing capabilities. Moreover, it is
responsible for applying basic alterations to the avatar's functions based on the
provided cognitive parameters.
Besides all these, the Scene Module organizes and manages the attributes of the
scene objects. After parsing the scene model file, the Scene Module is responsible for
constructing all the scene objects and defining their functionality. Simple rules define
the functionality of an object. A simple rule example would be: "if the object-button is
translated, i.e. pressed, more than 1cm, then the object-compartment opens". This rule
system, along with other elements such as the object collisions, physical attributes
(e.g. weight) and environmental attributes (e.g. surface mesh, gravitational forces), are
all parts of the Scene Module.
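As a purely illustrative sketch of such a rule system (types and names are hypothetical, not the Scene Module's actual interface), a rule can be reduced to a condition and an effect:

// Hypothetical condition/effect rule, in the spirit of "if the button is pressed
// more than 1 cm, the compartment opens". Not the platform's real rule structures.
#include <functional>
#include <vector>

struct SceneObjectRule
{
    std::function<bool()> condition;   // e.g. "button translated more than 0.01 m"
    std::function<void()> effect;      // e.g. "open the compartment"
};

inline void evaluateRules(const std::vector<SceneObjectRule>& rules)
{
    for (const auto& rule : rules)
        if (rule.condition())
            rule.effect();
}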
The Task Manager Module manages everything related to the scenario tasks. When a
task is to be performed, the Task Manager informs the Humanoid Module to act. Then
the Task Manager supervises the humanoid at every simulation step and reports to the
Simulation Module if anything went wrong. The Task Manager Module organizes its
tasks in sets of rules or rulesets. Each rule is activated when its condition part is met.
More about the task rulesets can be found in section 5.
The reason why the simulation core platform has been split into four core modules is
because:
• It provides better organization, as each core module has to cope with specific
problems and entities.
• It provides better data flow coordination, as each module sends and receives
data only in the time-frame this is needed.
• It provides better code implementation scalability, meaning that:
◦ each core module can be given extra functionality easily if needed.
◦ A core module can even be used alone, e.g. only the humanoid classes
could be easily integrated into another platform/application.


• Optimization techniques can be better applied: e.g. multi-threading. Each core


module could run in a separate thread in a multi-core system in order to run the
simulations faster. Although this feature has not been implemented, it is very
feasible as the basic structures are already separated.
• Finally, this segmentation makes it easier to understand what happens
underneath the code classes.

2.3 Secondary humanoid modules


Although the four core modules provide the basic functionality of the core simulation
platform, in order to cope with more specific needs, several secondary modules
needed to be implemented. These secondary modules are part of the Humanoid
Module. The Humanoid Module has the responsibility of managing them and
coordinating their data exchange.
The secondary modules are classified into five categories:
• Generic modules: these are several multi-purpose modules needed by the
Humanoid Module but are not related directly to the simulation process, e.g.
graphics representation of the meshes.
• Motor simulation modules: these sub-modules provide the humanoid with
basic or advanced motor functions.
• Vision simulation modules: their purpose is to coordinate the vision system of
the avatar towards the target and apply advanced filtering to the input images.
• Hearing simulation module: its purpose is to calibrate the humanoid auditory
system and apply audio filters to it.
• Cognition simulation module: this module has the purpose of applying delays
that emulate the thinking process of the avatar.
In the following paragraphs, each secondary module will be described briefly.

2.3.1 Generic supportive modules


This category contains two components: the Cal3D module and the SimMath library.
The Cal3D module is responsible for importing the avatar mesh and computing the
bounding boxes of each limb and torso part. The output of the Cal3D module is
provided to the humanoid for skeletal adaptation and weight distribution. Besides this,
the Cal3D module is responsible for the visual representation of the avatar meshes. All
needed transformations from the humanoid skeleton to the cal3d model are performed
by the Cal3D module.
The SimMath library is basically a collection of extra mathematical functions that are
added to the existing mathematical functions of Delta3D and OpenSceneGraph. The
SimMath module is used by all components of the simulation platform.


2.3.2 Motor simulation modules


The motor simulation modules can be split into two subcategories: the ones that
provide the avatar's basic motor functionality and the ones that are based on them to
give more complex motor functions. The basic and advanced motor modules are
depicted in Figure 4.

Figure 4: The motor simulation modules. Advanced motor simulation (top line) is based
on the four basic modules (middle line).

2.3.2.1 Basic motor simulation modules


Basic motor functionality is provided by the following four managers:
• Bone Manager
• Joint Manager
• Motion Manager
• Inverse Kinematics Manager (IK Manager)
At every time step, the Bone Manager gathers information from every humanoid's bone
part. This information contains the position, orientation, velocity, acceleration, etc. of
each bone. The Bone Manager also inspects the skeletal dynamics.
The Joint Manager is responsible for managing the state of the bone joints (a bone joint
is a structure that connects two bones). Relative bone rotational velocity tracking and
inverse dynamics of each body joint are main responsibilities of the Joint Manager.
Another responsibility is the assignment of the range of motion parameters to each joint
and the proper configuration of the ODE parameters. The Joint Manager cooperates
directly with the Bone Manager when the humanoid's body moves.
The Motion Manager's main responsibility is the proper coordination of the Bone
Manager and the Joint Manager. The Motion Manager generates the target body
postures and passes them to the two managers, which then move their respective
sub-elements, i.e. bones and joints. The Motion Manager communicates with the Task
Manager Module at every simulation step. When there is a motion action that has to be
performed by the humanoid, the task module passes the request to the Motion
Manager.


The Inverse Kinematics Manager (or IK Manager) contains all the structures that are
needed for computing the avatar's inverse kinematics. The IK Manager gets
information from the current task and generates a desired posture. It then passes this
"target" posture to the Motion Manager to move the respective body parts.

2.3.2.2 Advanced motor simulation modules


In order to keep the basic motor modules' complexity at low levels and add extra
motion features to the humanoid model several more advanced modules needed to be
implemented. These are:
• Collision Avoidance Manager
• Gait Module
• Grasp Module
The Collision Avoidance Manager is responsible for coordinating the body parts in
order to avoid any collisions with the environment. The Collision Avoidance Manager
has access to the scene topology and its objects via the Scene Module. After
computing a proper path (in this case a path is a series of body postures), it passes it
to the Motion Manager.
The Gait Module has a twofold role: it first computes the path to the target location,
avoiding stepping through the scene objects, and then generates all the postural
information and passes it to the Motion Manager. The Gait Module communicates with
the Scene Module in order to provide a collision-free path. The postural data are
generated by taking into account all the gait parameters of the loaded virtual user
model (this is done via communication with the Humanoid Module). The target
location is provided to the Gait Module by the Task Manager Module.
The Grasp Module is responsible for the humanoid motions that involve grasping
actions. Several grasping hand configurations are predefined and stored inside the
Grasp Module and are applied when a grasping action needs to take place. The Grasp
Module cooperates with both the Task Manager Module and the Motion Manager.

2.3.3 Vision simulation modules


The vision simulation architecture is depicted in Figure 5. As it's shown, it is the result
of the cooperation of two components:
• Vision Model
• LookAt Module
The Vision Model contains all the virtual user model information concerning vision and
is responsible for filtering the input frames based on the virtual user capabilities.
For this purpose, the Vision Model cooperates with the Humanoid Module, from
which it receives the vision parameters. The filtered stereoscopic result (a set of two
filtered images, one for each eye) is provided as an output to the end-user via the
Simulation Module.


Figure 5: Modules that take part in vision simulation. The LookAt Module must
cooperate with the motor modules and the Task Manager in order to move the avatar's
head and eyes. The Vision Model applies image filtering and sends the output to the
Simulation Module.

The LookAt Module is used for coordinating the head and rotating the eyeballs. This
coordination is based on a target point set by the task module. The LookAt Module
also provides the Humanoid Module with the necessary transformations needed to
visualize the avatar's line of sight.

2.3.4 Hearing simulation module

Figure 6: The Hearing Model acts as a filter that receives audio data from the Simulation
Module and sends them back filtered based on the virtual user's audiogram.

The hearing simulation is performed by one sub-module which is called the Hearing
Model. The Hearing Model contains several parameters that are based on the virtual
user model's audiogram. Initially it receives audio data from either the Simulation
Module or the external application and then applies audio filters to them. Finally it plays
the sound or sends it back to the external application. The whole process is displayed
in Figure 6.
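The following sketch only illustrates the general idea of audiogram-based filtering, i.e. attenuating per-frequency-band levels by the virtual user's hearing loss; the band layout and names are assumptions, not the Hearing Model's actual implementation.

// Illustrative sketch only: attenuating per-band audio levels according to an
// audiogram-like hearing-loss specification (in dB per band).
#include <cstddef>
#include <vector>

// hearingLossDb[i] is the hearing loss (in dB) of the virtual user at band i.
std::vector<double> applyAudiogram(const std::vector<double>& bandLevelsDb,
                                   const std::vector<double>& hearingLossDb)
{
    std::vector<double> filtered(bandLevelsDb.size());
    for (std::size_t i = 0; i < bandLevelsDb.size(); ++i)
    {
        const double loss = (i < hearingLossDb.size()) ? hearingLossDb[i] : 0.0;
        filtered[i] = bandLevelsDb[i] - loss;   // perceived level after hearing loss
    }
    return filtered;
}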


2.3.5 Cognition simulation module

Figure 7: The Cognition Module takes as input the cognitive attributes of the avatar and
creates delay factors for the Motion Manager and Task Manager modules.

In its current state the Cognition Module cooperates with both the Motion Manager and
the Task Manager. The Motion Manager contains elements which support delaying
some of the movements performed by the humanoid, based on the virtual user model's
cognitive parameters. The delay factors are also applied to the task actions, e.g. to
simulate the thinking process; for this reason the Cognition Module communicates with
the Task Manager. The Virtual User Model cognition parameters are passed through the
Humanoid Module (Figure 7).

2.4 The simulation cycle


After the initialization of the simulation components, the core simulation platform enters
a loop. This loop is called the simulation cycle. The cycle contains several smaller
steps which include the cooperation of the core and secondary modules, mentioned in
sections 2.2 & 2.3. Each simulation cycle, has a predefined time duration equal to the
world time-step parameter (1.5.2). Inside this time-frame a series of checks and
computations need to be done. The data flow of the simulation cycle is shown in Figure
8.
The steps in the simulation cycle are:
1. Initially, the Simulation Module sends a signal to the Task Manager Module to
update its state.
2. The Task Manager checks if there is at least one task that is running or waiting
to run.
a) If there is no task running or waiting to be run, the Task Manager checks if
there was any failure in its tasks.
• If there was a failure, the simulation cycle stops and the Simulation
Module reports that the task failed.
• Otherwise, all the tasks have been completed successfully.
b) If there is at least one task that needs to be started the Task Manager
initiates it.
3. If there is a task that is running, the Task Manager checks if that task concerns
the humanoid or the Scene Module.


4. Then the respective module is instructed to run the task.
a) If it is a task concerning the scene objects, the Scene Module gets the task
information and passes it to them.
b) If it is a task concerning the Humanoid Module, the respective humanoid
sub-module is configured.
5. After the configuration of the scene and the Humanoid Module, the Simulation
Module moves to the next cycle.
The cycle repeats until all tasks are completed or until at least one task fails. Analytical
information about the internal task structures and an analytical description of the task
management is presented in Section 5.
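A compact, hedged sketch of the decision logic of one cycle follows; the boolean inputs stand in for queries to the Task Manager, and all names are hypothetical rather than the platform's real interfaces.

// Self-contained sketch of the decision logic of one simulation cycle (steps 1-5).
#include <iostream>

enum class CycleResult { Continue, AllTasksSucceeded, TaskFailed };

CycleResult simulationCycleStep(bool taskRunningOrWaiting,
                                bool anyTaskFailed,
                                bool runningTaskConcernsScene)
{
    // Step 2: nothing left to run -> report overall success or failure.
    if (!taskRunningOrWaiting)
        return anyTaskFailed ? CycleResult::TaskFailed : CycleResult::AllTasksSucceeded;

    // Steps 3-4: dispatch the running task to the responsible module.
    if (runningTaskConcernsScene)
        std::cout << "configure scene objects for the task\n";
    else
        std::cout << "configure the humanoid sub-modules for the task\n";

    // Step 5: the Simulation Module moves on to the next cycle.
    return CycleResult::Continue;
}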


Figure 8: The simulation cycle diagram. The core modules are displayed with gray
boxes. The simulation session fails if the red-lined boxes are activated.


2.5 The Simulation Dependency Tree


The main feature of the organizational structure of the core simulation platform is that
every simulated component is part of an advanced tree structure.
In the tree structure, every component is represented by a tree node. Each child node
is dependent on its parent, meaning that when a parent node is removed from the
simulation, every descendant node is also deleted. Additionally, each tree node stores
information regarding its dependencies on other tree nodes.
The dependency tree allows the developer to easily manipulate the simulated
components, because:
• The insertion of an extra component into the simulation can be done easily and
quickly.
• When deleting an existing component, the core platform can check which other
components may be affected and can inform the user.
• An intuitive visual representation of the simulation components is much easier
to construct.
Considering the above, the dependency tree ensures smooth data exchange between
the simulation core platform and any external application, such as the Interaction
Adaptor.
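A minimal sketch of such a dependency-tree node, under the assumption that children are owned by (and deleted with) their parent while dependency links are simple references; all names are hypothetical.

// Hypothetical dependency-tree node. Deleting a node releases its whole subtree,
// while "dependencies" records which other nodes this component relies on.
#include <memory>
#include <string>
#include <vector>

struct SimNode
{
    std::string name;
    std::vector<std::unique_ptr<SimNode>> children;   // removed together with this node
    std::vector<SimNode*> dependencies;               // non-owning links to related nodes

    SimNode* addChild(std::string childName)
    {
        children.push_back(std::make_unique<SimNode>());
        children.back()->name = std::move(childName);
        return children.back().get();
    }
};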

3 Human Simulation
The human simulation is carried out through the Humanoid Module. The Humanoid
Module organises and coordinates a set of other sub-modules. These are shown in
Figure 9. Besides the sub-modules, several more components are included in the
humanoid structure in order to make its manipulation more convenient. In the following
sub-sections, the functionality of these components is going to be analysed.


Figure 9: The Humanoid Module and its components (Cal3d Module, Bone Manager,
Joint Manager, Motion Manager, IK Manager, Collision Avoidance Manager, Gait
Module and Grasp Module). A short description of the functionality of each component
is also presented.

3.1 Avatar Representation


The avatar is represented by two triangle meshes:
• a mesh that is used for visual representation: this mesh can be a visual
representation of a man, woman or child.
• a mesh that is used for the representation of the collision rigid bodies. The
collision primitive bodies can be spheres, capsules or boxes and are used by the
Open Dynamics Engine.


An example of such meshes is depicted in Figure 10.

Figure 10: The humanoid visual representation is dependent on the loaded avatar
model. Here two different models are presented: a man's (left) and a woman's (middle).
The collision mesh (right) is calculated automatically based on the adaptation process
of the loaded 3d mesh. As shown, the collision mesh consists of primitive shapes, such
as boxes, spheres and capsules.

3.2 Motor Simulation


In this sub-section the elements regarding motor simulation are presented.


3.2.1 Basic Elements


The humanoid's skeletal system consists of bones and joints. Additionally, some bones
have special points of interest (PoI) defined on them. In the following paragraphs,
these elements are described.

3.2.1.1 Bones
The bone is a basic construction unit of the humanoid skeleton. When using the term
"bone", it must not be assumed that it represents an actual human bone; instead, the
term "bone" is used to describe a specific body part.
The humanoid consists of 51 such bones. The number of bones is chosen having in
mind two facts:
a) the parameters that were gathered from the activities of SP1 (either from
the bibliography or the multisensorial platform) did not refer to any body
parts other than the ones defined here.
b) any extra bones would decrease the performance of the simulation platform
without providing any additional functionality and may even cause instability
issues in the dynamics simulation.
For better manipulation, the bone elements are organized in groups, which are called
“bone groups”. There are 10 bone groups, presented in Table 2. The bone groups are
structures that are manipulated by the Bone Manager.


Bone Group Bones


Torso UpperTorso, MiddleTorso, LowerTorso, RootBone, Pelvis
Head Head, Neck
Left Arm LeftUpperArm, LeftLowerArm
Right Arm RightUpperArm, RightLowerArm
Left Hand LeftHand,
LeftThumbFingerA, LeftThumbFingerB, LeftThumbFingerC,
LeftIndexFingerA, LeftIndexFingerB, LeftIndexFingerC,
LeftMiddleFingerA, LeftMiddleFingerB, LeftMiddleFingerC,
LeftRingFingerA, LeftRingFingerB, LeftRingFingerC,
LeftPinkyFingerA, LeftPinkyFingerB, LeftPinkyFingerC
Right Hand RightHand,
RightThumbFingerA, RightThumbFingerB, RightThumbFingerC,
RightIndexFingerA, RightIndexFingerB, RightIndexFingerC,
RightMiddleFingerA, RightMiddleFingerB, RightMiddleFingerC,
RightRingFingerA, RightRingFingerB, RightRingFingerC,
RightPinkyFingerA, RightPinkyFingerB, RightPinkyFingerC
Left Leg LeftUpperLeg, LeftLowerLeg
Right Leg RightUpperLeg, RightLowerLeg
Left Foot LeftFoot, LeftToes
Right Foot RightFoot, RightToes

Table 2: The bone groups and their respective bones.

Each bone has a rigid body and is defined by the following attributes:
• Mass: the mass definition includes the bone mass, measured in kg, along with a
3x3 matrix
I = \begin{bmatrix} I_{1,1} & I_{1,2} & I_{1,3} \\ I_{2,1} & I_{2,2} & I_{2,3} \\ I_{3,1} & I_{3,2} & I_{3,3} \end{bmatrix}
which is called the moment of inertia tensor and is a measure of an object's
resistance to changes to its rotation. The tensor matrix is used only in dynamics
sessions.
• Volume (m³): the volume is defined by the type and size of the primitive shape
that is representing the bone.
• Position (m): a 3d vector p = (x, y, z) which states the current position, in
absolute coordinates, of the bone's centre of mass.
• Orientation: this attribute stores the global orientation of the bone's volume.
There are three different ways to define it:
◦ Via a quaternion q = (q_w, q_x, q_y, q_z). This is the fastest and most
preferable method of the three. The inverse rotation is also derived very fast.
◦ Via an angle and axis (a, (x, y, z)). This is as powerful as the quaternion and
it is generally a more intuitive representation. Axis angles are converted into
quaternions after their declaration.
◦ Via a set of Euler angles (a_x, a_y, a_z). The Euler angles are used only for
checking the limits in range of motion restrictions and in general must be
avoided because they suffer from the gimbal lock problem.

• Linear velocity (m/s): defined by a triple of float numbers
u = (dx/dt, dy/dt, dz/dt)
that declares the absolute linear velocity of the bone's centre of mass.
• Angular velocity (rad/s): defined by a triple of float numbers
ω = (dr_x/dt, dr_y/dt, dr_z/dt)
which declares the angular velocity of the bone's volume with respect to the
three global axes.


• Linear acceleration (m/s²): this attribute has a different meaning when used in
kinematics and when used in dynamics sessions:
◦ in purely kinematic sessions (i.e. level-1 simulations), the linear acceleration
can only be taken into consideration as the 3d vector
a_linear = du/dt = (du_x/dt, du_y/dt, du_z/dt) = (d²x/dt², d²y/dt², d²z/dt²),
where u is the linear velocity.
◦ in dynamics sessions it is better to consider that the linear acceleration is
equal to the net force applied to the centre of mass divided by the mass,
i.e. a = F_net / m.
• Angular acceleration (rad/s²): respectively, as in the linear acceleration case:
◦ in purely kinematic sessions, the angular acceleration is given by
a_angular = dω/dt = (dω_x/dt, dω_y/dt, dω_z/dt).
◦ in dynamics sessions the angular acceleration is better described by taking
into consideration the bone's net torque and the inertia tensor applied to the
bone, i.e. a_angular = I⁻¹ τ_net, where I⁻¹ is the inverse of the inertia tensor
and τ_net is the net torque applied to the bone.
• Collision mesh: represented by an ODE geometry. Basically, it is an abstract
geometry object having the shape of the bone's primitive shape.
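Purely as an illustration of how the per-bone attributes listed above could be grouped, here is a hedged sketch with hypothetical type and field names (the real Bone Manager structures are not shown in this document):

// Hypothetical per-bone state mirroring the attribute list above.
struct Vec3 { double x = 0, y = 0, z = 0; };
struct Quat { double w = 1, x = 0, y = 0, z = 0; };   // stored as (w, x, y, z)
struct Mat3 { double m[3][3] = {}; };                 // e.g. the inertia tensor I

struct BoneState
{
    double mass = 0.0;          // kg (ignored while the bone is in kinematic state)
    Mat3   inertiaTensor;       // used only in dynamics (level-2) sessions
    double volume = 0.0;        // m^3, from the fitted primitive shape
    Vec3   position;            // centre of mass, absolute coordinates (m)
    Quat   orientation;         // global orientation of the bone's volume
    Vec3   linearVelocity;      // m/s
    Vec3   angularVelocity;     // rad/s
    Vec3   linearAcceleration;  // m/s^2 (or F_net/m in dynamics)
    Vec3   angularAcceleration; // rad/s^2 (or I^-1 * tau_net in dynamics)
    // The ODE collision geometry handle is omitted in this sketch.
};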
Initially, the avatar has a predefined weight (in kg), which is given as input to the
Humanoid Module by the Cal3D module's parser. The weight is distributed among the
bones' masses depending on their volumes (a sketch of this distribution is given after
the list below). The bone volume is automatically computed based on the visual
representation (mesh) of the avatar 3d model. More specifically, this is achieved by:
1. First, converting the cal3d model into a set of oriented bounding boxes. Each
bounding box is oriented with respect to the cal3d bone part's mesh.


2. Then, a primitive 3d shape is fitted to each bounding box. This process is
performed by the cooperation of the Cal3d module and the Bone Manager,
where the first creates the bounding boxes and the second fits a primitive
shape into them. The primitive shapes can be boxes, spheres or capsules1.
These primitive shapes are easier for the dynamics engine to manipulate than
the original triangle mesh. Another advantage is the lower computational cost
of the collision algorithms.
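Assuming that "depending on their volumes" means the mass is split in proportion to each bone's volume, the distribution could be sketched as follows (illustrative names only):

// Volume-proportional split of the avatar's total weight among the bones.
#include <cstddef>
#include <vector>

std::vector<double> distributeMass(double totalMassKg, const std::vector<double>& boneVolumes)
{
    double totalVolume = 0.0;
    for (double v : boneVolumes) totalVolume += v;

    std::vector<double> boneMasses(boneVolumes.size(), 0.0);
    if (totalVolume <= 0.0) return boneMasses;   // degenerate input, nothing to distribute

    for (std::size_t i = 0; i < boneVolumes.size(); ++i)
        boneMasses[i] = totalMassKg * boneVolumes[i] / totalVolume;
    return boneMasses;
}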
The bone can take part in the simulation in two available states: kinematic or dynamic.
When in the kinematic state, the mass property of the bone is ignored. The kinematic
state is activated when the simulation is running in level-1 mode. When in the dynamic
state, forces and torques can be applied to the bone, e.g. to move it, rotate it, etc. The
dynamic state of the bone is active when the simulation is running in level 2.
At this point it must be mentioned that there is a special bone in the skeleton that
needs to be analysed further. This is the Root Bone, which is positioned in the pelvis
region. The root bone is unique because it defines the centre of the avatar and it is the
manipulation reference for changing the avatar's global location and/or orientation. By
default, the root bone is not allowed to have any mass attributes assigned to it and its
volume is equal to zero, meaning that it is a point. However, it keeps all the other
attributes, such as the position, orientation, velocities and accelerations.

3.2.1.2 Joints
A joint is an element that keeps two bones connected. There are a total of 50 joints
supporting the skeleton. Each joint permits its attached bones to rotate:
a) by either setting the bones' orientations directly (in level-1 simulations) or
b) by generating and applying torques (in level-2 simulations) to them.
The number of rotational degrees of freedom (DoF) is predefined by the Joint Manager
for each joint and can vary from 1 DoF to 3 DoF. A 1-DoF joint can apply rotation to its
two bones around one axis only (most of the finger joints are such 1-DoF joints). Joints
having 3 DoF permit rotations around all three axes. Such a joint is the shoulder joint
shown in Figure 11.

Figure 11: The shoulder joint and its degrees of freedom. There are three possible axes
around which the arm can be rotated; thus the joint's DoF number is equal to three.

1 A capsule is a cylinder capped with hemispheres. The capsules are preferred over cylinders
because of their better dynamics behaviour.


Figure 12: Avatar joints and their degrees of freedom. Each joint's location is
represented by a red sphere. Here, the joints regarding the movement of the limbs and
the head have been annotated.
Figure 13: There are four torso joints. It is also shown that the root bone is located
between the LowerTorso joint and the Pelvis joint.


Figure 14: The left hand's joints. Each hand has a total of 19
degrees of freedom.
The joints' names along with their locations and DoF are presented in figures 12, 13
and 14. The joints follow a hierarchical system, i.e. each joint is connected to another
one in a specific manner (Figure 15). A joint is permitted to have only one parent joint,
or none at all – the two joints without a parent are the ones that are attached to the
root bone, i.e. the "LowerTorso_joint" and the "Pelvis_joint". Having this in mind, a joint
may have one or more child joints, which inherit their parent's transformation. Using
this hierarchical skeletal system the rotations can be applied very fast. The two bones
that are attached to a joint are declared as its parent and child bone.


Figure 15: The skeleton hierarchy. A child bone is connected to its parent bone by a
joint (represented by an arrow). The only bone that does not have a parent bone is the
root bone. For illustration purposes the right arm and leg components have been hidden.


There are several attributes that define the behaviour of each joint. These are:
• The joint's anchor: declares the current joint position. It is basically a 3d vector
p_joint = (x, y, z). The joint's anchor can be expressed in three possible ways:
◦ as an absolute set of coordinates,
◦ as a relative translation to its parent and
◦ as a relative translation to the root bone's position.
• The joint's local axis/axes of rotation: the local axes define the rotational axes
along which the child bone is permitted to be rotated relative to the parent
bone. The number of axes is the same as the joint's DoF number, thus varying
from one to three. It is a requirement of the core simulation platform that in the
3-DoF joints all the axes must be perpendicular.
• The range of motion of the joint's degrees of freedom: each degree of freedom
is defined by an angle value. All the possible angle values define the DoF's
range of motion. The upper and lower angle limits are stored inside the joint's
structure and are parametrisable, depending on the virtual user model's motor
capabilities.
• The comfort angle: the comfort angle is the angle at which the joint's DoF
results in comfortable bone postures. Each joint can be assigned from one to
three comfort angles, depending on its number of DoF. Every comfort angle
must be inside the respective range of motion limits.
• The agility factor: this factor declares the kinematic behaviour of the joint. The
agility factor is used by the inverse kinematics, resulting in more natural body
postures.
• The torque definitions (apply only to dynamics): while in dynamics sessions,
each joint can apply torques to its child bone in order to rotate it. Every joint
can apply a limited torque based on the formula:
\tau_{max} = \tau_{min} + f_\tau \cdot \tau_{base}    (1)
where:
◦ τ_max is the maximum torque that can be generated in the joint.
◦ τ_min is the joint's minimum torque. This value should always be greater than
zero: although zero values are permitted, they can lead to instability issues
and thus must be avoided.
◦ τ_base is the joint's base torque: the base torque measures how strong a
joint is compared to the rest of the body joints.
◦ f_τ is the joint's torque factor and depends on the strength capabilities of
the loaded virtual user model.
All the torque values are measured in Nm.
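A small worked illustration of Eq. 1, with made-up values (the function name and the numbers are not taken from the platform):

// Eq. 1: tau_max = tau_min + f_tau * tau_base (all torques in Nm).
double maxJointTorque(double tauMinNm, double torqueFactor, double tauBaseNm)
{
    return tauMinNm + torqueFactor * tauBaseNm;
}
// Example with made-up values: a joint with a minimum torque of 2 Nm and a base
// torque of 40 Nm, loaded with a virtual user whose strength factor is 0.6, can
// generate at most maxJointTorque(2.0, 0.6, 40.0) == 26 Nm.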


3.2.1.3 Points of Interest


Besides the bone and joint elements, in order to properly define some specific regions
on the skeleton, another set of basic elements had to be introduced, the Points of
Interest or PoI. The PoI are special points on the humanoid's body that are needed so
as to properly define the higher level modules' structures.
Each PoI has three primary attributes:
• Name: which is unique and defines the identification of the PoI.
• Parent bone: i.e. the name of the bone that acts as parent of the PoI. The PoI
will inherit all transformations of its parent bone.
• Relative position to the parent bone: thus the point's absolute position can be
found by adding the relative translation to its parent's transformation.
There are also two other optional attributes in the PoI's definition:
• "Look At" direction: a 3d vector pointing in a "front" direction.
• "Up Vector" direction: another 3d vector pointing in the "up" direction.
There are a total of 26 PoI defined. Table 3 summarises the humanoid's PoI. Several of
the PoI locations are shown in figures 16, 17 and 18.
Figure 16: Head Points of Interest (the eye PoI are used by the LookAt Module, the ear
PoI by the Hearing Module, plus a mouth PoI). The arrow vectors declare the "LookAt"
direction.
Figure 17: Feet Points of Interest (heel PoI and toes-oriented PoI). These points are
used by the gait planning algorithms (Gait Module).


Figure 18: The hand's points of interest (thumb, index, middle, ring and pinky
fingertips, plus the palm's oriented PoI, whose orientation is used in grasp planning by
the Grasp Module). These points are used primarily by the inverse kinematics and
grasp planning algorithms.


Region PoI Parent Bone Look At Up Vector


LeftEye Head √ √
LeftEar Head √
Head RightEye Head √ √
RightEar Head √
Mouth Head √
LeftPalm LeftHand √ √
LeftIndexFingertip LeftIndexC
LeftMiddleFingertip LeftMiddleC
Left Hand
LeftRingFingertip LeftRingC
LeftPinkyFingertip LeftPinkyC
LeftThumbFingertip LeftThumbC
RightPalm RightHand √ √
RightIndexFingertip RightIndexC
RightMiddleFingertip RightMiddleC
Right Hand
RightRingFingertip RightRingC
RightPinkyFingertip RightPinkyC
RightThumbFingertip RightThumbC
Left Foot LeftHeel LeftFoot
LeftFootCenter LeftFoot √ √
LeftToes LeftFoot √ √
Right Foot RightHeel RightFoot
RightFootCenter RightFoot √ √
RightToes RightFoot √ √
Torso LeftButtock Pelvis √
RightButtock Pelvis √
BodyCenter RootBone √ √

Table 3: The humanoid's Points of Interest

3.2.2 Forward Kinematics


Forward kinematics is the computation process of the position and orientation of a
humanoid's “end effector” as a function of its joint angles. The term “end effector”
originates from the robotics domain and it declares the “end” of the humanoid's arm or
leg, designed to interact with the environment. This “end” can be either be a fingertip,
the hand's palm, or the foot centre. The end effector has a maximum of six degrees of
freedom:


• three regarding the position, represented by a 3d vector p_ef = (x, y, z)
• three regarding the rotation: intuitively these can be represented by a set of three
Euler angles (a_x, a_y, a_z), but it is better to describe them by a quaternion
q = (q_w, q_x, q_y, q_z).
Having set the angles of each joint of the skeleton, the end effector transformation can
be derived by computing the following product sequence of the respective
transformations:
T_e = T_n(q_n) \cdot T_{n-1}(q_{n-1}) \cdot \ldots \cdot T_1(q_1) = \prod_{i=1}^{n} T_i(q_i)    (2)
where T_e is the global transformation of the end effector, T_i is the local
transformation of the i-th joint, q_i is the current quaternion of the joint and n is the
number of joints that are included in the path from the root bone to the end effector.
The joint's local transformation, besides the rotational information, also contains the
local translation of the joint relative to its parent joint, so it can be analysed into:
T_i(q_i) = R_i(q_i) \cdot P_i    (3)
where R_i is the local rotational matrix of the joint and P_i is its local translational
matrix containing its local translation from its parent joint. P_i remains constant
throughout the simulation and changes only when the humanoid avatar model is
changed. R_i can be analysed further into:
R_i(q_i) = \prod_{d=1}^{D} R_{i,d}(q_{i,d}), \quad D \in \mathbb{N}, \; D \in [1, 3]    (4)
where D is the number of degrees of freedom of the joint and R_{i,d}(q_{i,d}) denotes
the rotational matrix of the d-th degree of freedom of the i-th joint.
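A hedged sketch of Eq. 2 follows, chaining 4x4 homogeneous joint transforms from the end effector's joint down to the joint attached to the root bone; the matrix type and function names are illustrative, not the platform's math utilities.

// Forward kinematics as a product of homogeneous joint transforms.
#include <array>
#include <vector>

using Mat4 = std::array<std::array<double, 4>, 4>;

Mat4 identity()
{
    Mat4 m{};
    for (int i = 0; i < 4; ++i) m[i][i] = 1.0;
    return m;
}

Mat4 multiply(const Mat4& a, const Mat4& b)
{
    Mat4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                r[i][j] += a[i][k] * b[k][j];
    return r;
}

// jointTransforms[i] corresponds to T_i(q_i) = R_i(q_i) * P_i, ordered from the end
// effector's joint (T_n) down to the joint attached to the root bone (T_1).
Mat4 endEffectorTransform(const std::vector<Mat4>& jointTransforms)
{
    Mat4 t = identity();
    for (const Mat4& ti : jointTransforms)
        t = multiply(t, ti);             // T_e = T_n * T_{n-1} * ... * T_1
    return t;
}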

3.2.2.1 Avatar Configurations


In order to animate the avatar, a quaternion for each degree of freedom must be first
computed. This procedure is repeated at every simulation step. So, a set of
quaternions must be computed in order to change the avatar's current posture. This
postural information of the avatar, along with some extra information, is called the
configuration of the avatar.
Each avatar configuration must contain the following elements:
• each joint's translation relative to its parent joint (remains constant for
configurations of the same avatar model).
• each joint local rotation to its parent joint (changes through the simulation, when
the respective bones are rotated).
• the root transformation, i.e. the root bone's current global position and
orientation.


3.2.2.2 Motion Interpolation


Animating the avatar is performed by computing the transition from one configuration to
another, i.e.:
f : C_a \to C_b    (5)
where C_a is the start configuration and C_b the final configuration. By including the
simulation time-step component in the function, the above can be expressed as:
f(C_a, C_b, t) \to C(t), \quad t \in [0.0, 1.0]    (6)
where t is the normalized time between the two configurations and C(t) is the resulting
configuration at time t.
The function of Eq. 6 is the motion interpolation function and is part of the Motion
Manager. There are several ways to support transitioning from a current configuration
to another one. The factor that needs to be considered is the angular velocity of the
joint rotation. The Motion Manager supports the following interpolation techniques:
• Linear interpolation: this is the simplest interpolation technique and produces
constant angular velocities. However, it cannot be used in dynamics because it
will cause instabilities when t → 1. Linear interpolation is given by the formula:
C(t) = (1-t) \cdot C_a + t \cdot C_b    (7)
The above can be analysed into two different types of interpolation. The first
and simpler one is Euler interpolation:
\theta_d(t) = (1-t)\,\theta_{a,d} + t\,\theta_{b,d}, \quad \forall d \in D_i, \; \forall i \in C(t)    (8)
where each Euler angle θ_d(t) of each degree of freedom is the result of the
linear interpolation of the start and end angles, θ_{a,d} and θ_{b,d}, and D_i is
the DoF set of the i-th joint of the configuration. The Euler angle interpolation
produces linear angular velocities per degree of freedom. However, this does not
mean that the resulting joint angular velocity will also be linear.
A linear joint angular velocity can be achieved by applying spherical linear
interpolation, or SLERP (a code sketch is given after this list):
q(t) = \frac{q_a \sin((1-t)\Omega) + q_b \sin(t\Omega)}{\sin\Omega}    (9)
where q(t) is the quaternion that describes the joint's current rotation, q_a
contains the initial rotation of the joint, q_b the final rotation and Ω is the half
angle (in rad) between the two quaternions, i.e. the inverse cosine of the
quaternions' dot product:
\Omega = \arccos(q_a \cdot q_b)    (10)
Although slower, the SLERP function is the only way to ensure that the joint will
rotate at a constant angular velocity.
• Bell-shaped interpolation: the linear interpolation, which has been described
in the previous paragraph, produces constant angular velocities. However, there
are cases, especially in dynamics sessions, where the velocity must gradually
increase, reach a maximum and then decrease in order to ensure dynamics
stability. The kinds of functions that produce such curves are called bell-shaped
functions. The best known is the normal/Gaussian probability density function
(pdf) (Figure 19):
\phi_{\mu,\sigma}(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}    (11)
Figure 19: Probability density functions of several normal distributions. The red line
represents the standard normal distribution.
By replacing the uniform distribution of the time parameter values of Eq. 8 & 9
with the distribution defined in Eq. 11, the linear angular velocities can easily be
converted to bell-shaped velocities.
• Fade-in/out variations: applying only the left half of the bell-shaped function to
the time parameter results in an angular velocity that gradually increases. This is
useful for "fade-in" motion effects, i.e. at the start of a movement series. The
same can be applied for "fade-out" effects, i.e. when ending the movement of
the avatar. Besides being necessary for computing target postures in dynamics
sessions, these kinds of motion also produce more natural animation.
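The sketch below illustrates Eq. 9-10 (SLERP) on (w, x, y, z) quaternions and one possible bell-shaped reshaping of the time parameter; note that the smoothstep-style curve used here is a stand-in for the Gaussian-based mapping of Eq. 11, and all names are hypothetical.

// SLERP between two unit quaternions, plus a simple bell-shaped time warp.
// Edge handling (e.g. opposite quaternions, renormalization) is kept minimal.
#include <cmath>

struct Quat { double w, x, y, z; };

Quat slerp(const Quat& qa, const Quat& qb, double t)
{
    const double dot = qa.w * qb.w + qa.x * qb.x + qa.y * qb.y + qa.z * qb.z;
    if (std::abs(dot) > 0.9995)           // nearly identical rotations: fall back to lerp
        return {qa.w + t * (qb.w - qa.w), qa.x + t * (qb.x - qa.x),
                qa.y + t * (qb.y - qa.y), qa.z + t * (qb.z - qa.z)};

    const double omega = std::acos(dot);                           // Eq. 10
    const double sa = std::sin((1.0 - t) * omega) / std::sin(omega);
    const double sb = std::sin(t * omega) / std::sin(omega);
    return {sa * qa.w + sb * qb.w, sa * qa.x + sb * qb.x,           // Eq. 9
            sa * qa.y + sb * qb.y, sa * qa.z + sb * qb.z};
}

// A smoothstep-like reshaping of t: the resulting angular velocity ramps up,
// peaks mid-motion and ramps down, i.e. a bell-shaped velocity profile.
double bellShapedTime(double t) { return t * t * (3.0 - 2.0 * t); }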

3.2.3 Inverse Kinematics (IK)


In contrast to forward kinematics, which calculate the position of a body and its end
effector after applying a series of joint rotations, inverse kinematics calculate the
rotations necessary to achieve a desired end effector's position and orientation. So, the
input parameters here are the desired end effector's position and orientation, and the
output is a series of joint angles.
The inverse kinematics algorithms are managed by the Inverse Kinematics Manager.


The IK Manager supports two methods for solving IK problems:


• analytical methods: which attempt to compute an exact solution mathematically by
directly inverting the forward kinematics equations. This is only possible on
relatively simple chains.
• numerical methods: which use approximation and iteration to converge on a
solution. They tend to be more expensive, but far more general purpose.
For each method there are advantages and disadvantages which are summarised in
Table 4.


• Number of calls needed for providing a solution
Analytical IK: only one call is needed, i.e. the solution is provided instantly.
Numerical IK: needs a series of repetitions, depending on various factors.
• Number of joints' degrees of freedom supported
Analytical IK: the IK joints must have a total of 7 degrees of freedom. The format is
strict: 3 DoF for the first joint, 2 DoF for the middle joint, 2 DoF for the end joint.
Numerical IK: user configurable; can support any number of joints having any number
of degrees of freedom.
• Number of end effectors supported
Analytical IK: only one, having 6 degrees of freedom.
Numerical IK: "at least one" end effector systems are supported, meaning that more
than one end effector is permitted in each IK system.
• End effectors' degrees of freedom specifications
Analytical IK: the (one and only) end effector must have six degrees of freedom (3 for
its position and 3 for its orientation).
Numerical IK: although 6-DoF end effectors are supported, a model of position-only
end effectors (3-DoF) is preferred because it produces better results in less time. A
specific orientation can be achieved by using extra end effector elements (explained
in 3.2.3.1).
• Prone to local minimum solution traps?
Analytical IK: no, the analytical IK provides the optimal solution, if one exists.
Numerical IK: yes. Extra algorithm actions are necessary when the IK is trapped in a
local minimum, in order to provide the optimal solution.
• Speed
Analytical IK: almost zero time needed for providing the solution.
Numerical IK: around 2.25 microseconds (1 microsecond = 10⁻⁶ seconds) per iteration
on a Pentium 4 CPU at 3.2GHz, for a 7-DoF IK system.

Table 4: Comparison of analytical and numerical inverse kinematics methods.

3.2.3.1 Inverse Kinematics Chains


An inverse kinematics chain is a system of elements that is composed of a) a series of
body joints and b) a set of end effectors. In this case the joints are called ik-joints.
Generally, the end effectors' set is much smaller than the joints' set, and in most
chains consists of only one end effector. There are a number of rules that must apply
at the construction of an IK chain:
• The ik-chain must have only one root. The ik-root is basically a joint and is
considered the "start" of the chain. Chains having more than one root are
invalid and are not supported by the IK Manager.
• All the joints that take part in an ik-chain must be connected. For example, the
ik-chain shown in Figure 20 could not have the “Head_joint” as part of it,
because it is not connected directly to any of its joints.
• The skeleton hierarchy is maintained into the ik-chain, meaning that each ik-
joint acts as parent of another ik-joint(s).
• The end effector elements cannot have any children and their parent must
always be an ik-joint.

Figure 20: Left image presents an inverse kinematics chain (green dashed line). The
ik-chain is formed by the nine joints (yellow circled) and has two end effectors (on
points A & B). The LowerTorso_joint is considered the ik-chain's root. Both end
effectors happens to have the same parent, i.e. the RightIndexC_joint. Right image
displays which bones are affected directly by the ik-chain.
Also, regarding the numerical IK only:
• An IK chain with only one end-effector point cannot be used for computation of
effector orientations (Figure 21).


• By adding more than one end effector point to the same ik-joint, a desired
orientation can be set. Two end-effectors can be used for loose orientation
control (Figure 22) and three end-effectors are needed for a more strict control
of orientation (Figure 23).

Figure 21: Using a numerical-based ik-chain with one position-only end effector (red
sphere): the ik-chain tries to match the location of the end effector to the point of
interest target (red cross). As it's shown in the three right images, the orientation of the
end-effector cannot be controlled properly: the inverse kinematics algorithm produces
different palm-orientation solutions each time.

Figure 22: Using a numerical-based ik-chain with two position-only end effectors (red
and green spheres), results in loose end-effector orientation control: the two targets
(red and green crosses) define an axis (in white), around which the controlled body
part can be placed.

Figure 23: Using a numerical-based ik-chain with three position-only end effectors (red,
green and blue sphere): each of the spheres must be matched to its target (red, green,
blue cross respectively), thus resulting in a unique orientation of the palm.

3.2.3.2 Numerical IK
The numerical IK method uses a Jacobian algorithm for computing the joint angles.
The Jacobian is a “vector derivative with respect to another vector”. The two vectors in
the ik case are:


\Phi = [\phi_1, \phi_2, \ldots, \phi_N]^T    (12)
E = [e_1, e_2, \ldots, e_M]^T    (13)
where Φ is the vector containing the N degrees of freedom parameters of the ik-chain,
i.e. the joint angles, and E contains the M degrees of freedom parameters of the end
effectors. Because the end effectors contain position-only information, Eq. 13 may be
rewritten as a matrix:
E = \begin{bmatrix} e_{1,x} & e_{1,y} & e_{1,z} \\ e_{2,x} & e_{2,y} & e_{2,z} \\ \vdots & \vdots & \vdots \\ e_{K,x} & e_{K,y} & e_{K,z} \end{bmatrix}    (14)
where e_{i,x} denotes the x translation of the i-th end effector. It is easily derived that:
K = \frac{1}{3} M    (15)
The Jacobian can then be given by the matrix:
J(E, \Phi) = \frac{dE}{d\Phi} = \begin{bmatrix} \frac{\partial e_{1,x}}{\partial \phi_1} & \frac{\partial e_{1,x}}{\partial \phi_2} & \cdots & \frac{\partial e_{1,x}}{\partial \phi_N} \\ \frac{\partial e_{1,y}}{\partial \phi_1} & \frac{\partial e_{1,y}}{\partial \phi_2} & \cdots & \frac{\partial e_{1,y}}{\partial \phi_N} \\ \frac{\partial e_{1,z}}{\partial \phi_1} & \frac{\partial e_{1,z}}{\partial \phi_2} & \cdots & \frac{\partial e_{1,z}}{\partial \phi_N} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial e_{K,z}}{\partial \phi_1} & \frac{\partial e_{K,z}}{\partial \phi_2} & \cdots & \frac{\partial e_{K,z}}{\partial \phi_N} \end{bmatrix}    (16)
where ∂e_{k,x}/∂φ_n is the partial derivative of the x component of the k-th end effector
with respect to the n-th degree of freedom angle. The derivative is a measure of "how
much an end effector moves along an axis when a joint rotates".
Having a vector ΔΦ that represents a small change in the joint DoF values, i.e.:
\Delta\Phi \approx \Phi(t) - \Phi(t-1)    (17)
then the change of the end effectors can be estimated by:
\Delta E \approx \frac{dE}{d\Phi}\,\Delta\Phi = J(E, \Phi) \cdot \Delta\Phi    (18)
which can be inverted into:
\Delta\Phi \approx J^{-1}(E, \Phi) \cdot \Delta E    (19)
So, having ΔE (i.e. the end effector to target difference) and inverting the Jacobian
matrix, the change of the DoF angles can be found and added to the avatar's


current configuration. Two problems arise here:
1. the computation of the Jacobian matrix, i.e. how the partial derivatives are
computed.
2. the computation of its inverse matrix: it must be noted that the Jacobian matrix
is, most of the time, a non-square matrix, because the number of joint DoF is
not equal to the number of end-effector DoF.
The solution to the first problem is simple. The partial derivative of an end effector position with respect to a joint's DoF angle is given by:

$\frac{\partial \vec{e}_k}{\partial \phi_n} = \vec{a}_n \times (\vec{e}_k - \vec{p}_n) \Leftrightarrow$ (20)

$\begin{bmatrix} \frac{\partial e_{k,x}}{\partial \phi_n} \\ \frac{\partial e_{k,y}}{\partial \phi_n} \\ \frac{\partial e_{k,z}}{\partial \phi_n} \end{bmatrix} = \begin{bmatrix} a_{n,x} \\ a_{n,y} \\ a_{n,z} \end{bmatrix} \times \begin{bmatrix} e_{k,x} - p_{n,x} \\ e_{k,y} - p_{n,y} \\ e_{k,z} - p_{n,z} \end{bmatrix}$ (21)

where $\vec{a}_n$ denotes the global rotation axis of the joint DoF, $\vec{e}_k$ the end effector's global position, $\vec{p}_n$ the joint's global position and "$\times$" denotes the cross product operation. By computing Eq. 21 for every end-effector and joint-DoF pair, the Jacobian matrix is derived.
The solution to the second problem is the application of the damped least squares method. First the product of the Jacobian with its transpose is computed:

$U = J J^{T}$ (22)

Then, a small value is added to the diagonal elements of $U$ in order to make the square matrix $U$ more stable for inversion:

$U \leftarrow U + \lambda^{2} I$ (23)

By inverting $U$, multiplying it by $\Delta E$ and applying the result to Eq. 19:

$\Delta\Phi \approx J^{T}(E, \Phi)\cdot U^{-1}\cdot\Delta E$ (24)

the needed change in the DoF angles can be estimated.
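For illustration, Eqs. 21–24 can be transcribed into a short numerical sketch, assuming the ik-chain is described by the global rotation axes and positions of its joint DoF and by the end-effector positions; the function and variable names below are illustrative and do not correspond to the platform's actual classes.

```python
import numpy as np

def jacobian(axes, joint_pos, effectors):
    """Build the (3K x N) Jacobian of Eq. 16 using the cross-product rule of Eq. 21.

    axes:       (N, 3) global rotation axis of every joint DoF  (a_n)
    joint_pos:  (N, 3) global position of every joint DoF       (p_n)
    effectors:  (K, 3) global position of every end effector    (e_k)
    """
    K, N = effectors.shape[0], axes.shape[0]
    J = np.zeros((3 * K, N))
    for k in range(K):
        for n in range(N):
            # d e_k / d phi_n = a_n x (e_k - p_n)
            J[3 * k:3 * k + 3, n] = np.cross(axes[n], effectors[k] - joint_pos[n])
    return J

def dls_step(J, delta_e, damping=0.1):
    """Damped least squares step of Eqs. 22-24: dPhi ~= J^T (J J^T + lambda^2 I)^-1 dE."""
    U = J @ J.T + damping ** 2 * np.eye(J.shape[0])   # Eqs. 22-23
    return J.T @ np.linalg.solve(U, delta_e)          # Eq. 24 (solve instead of explicit inverse)

# toy usage: a 2-DoF planar chain with a single end effector
axes     = np.array([[0, 0, 1], [0, 0, 1]], float)
joints   = np.array([[0, 0, 0], [1, 0, 0]], float)
effector = np.array([[2, 0, 0]], float)
target   = np.array([1.5, 1.0, 0.0])
dphi = dls_step(jacobian(axes, joints, effector), target - effector[0])
print(dphi)  # small joint-angle update (rad) that moves the effector towards the target
```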

3.2.3.3 Agility factor and comfort angle


In order to properly simulate the kinematics of an ik-chain and provide more natural results, a weighting system for each joint has been implemented. These weights are called agility factors and are assigned to every ik-joint of the ik-chain. A higher agility value means that the joint can rotate with greater velocity. The agility factors have small values around unity and are applied directly to the Jacobian's partial derivatives, i.e. the cross product of Eq. 20 is converted into:

December 2011 45 CERTH/ITI


VERITAS D2.1.1 PU Grant Agreement # 247765

$\frac{\partial \vec{e}_k}{\partial \phi_n} = \begin{cases} s_n\,\vec{a}_n \times (\vec{e}_k - \vec{p}_n), & \text{if } (\phi_n(t) - \phi_n(t-1))(\psi_n - \phi_n(t-1)) > 0 \\ \vec{a}_n \times (\vec{e}_k - \vec{p}_n), & \text{otherwise} \end{cases}$ (25)

where $s_n$ is the agility factor, $\phi_n(t)$ is the current DoF angle, $\phi_n(t-1)$ is its previous angle and $\psi_n$ is the n-th DoF's comfort angle. The comfort angle is a predefined angle at which the joint produces a comfortable posture. The conditional term in Eq. 25 removes the agility factor when the joint is rotated away from its comfort angle. The comfort angles can be set as parameters to the system in order to simulate different comfort-related avatar motion behaviours. An example of the application of the agility factors can be seen in Figure 24.
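As an illustration only, the conditional weighting of Eq. 25 could be applied while filling each Jacobian column, as in the following sketch (the names are assumptions, not the platform's API).

```python
import numpy as np

def weighted_derivative(a_n, p_n, e_k, s_n, phi_now, phi_prev, comfort):
    """Eq. 25: apply the agility factor s_n only while the joint moves towards its
    comfort angle psi_n; otherwise use the plain cross product of Eq. 20."""
    base = np.cross(a_n, e_k - p_n)                              # Eq. 20
    moving_towards_comfort = (phi_now - phi_prev) * (comfort - phi_prev) > 0.0
    return s_n * base if moving_towards_comfort else base
```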

Figure 24: Posture resulting from numerical IK: in the left image, the IK was computed without the agility factors, resulting in an unnatural posture, while on the right the consideration of the agility factors produced a satisfactory result. The target is represented as a red box and the end effector is the right hand's index fingertip. Both chains start from the lower torso joint.

3.2.3.4 Analytical IK
The implemented analytical inverse kinematics algorithms are based on the method
described in [9]. More specifically, the analytical inverse kinematics algorithm has a
number of particularities:
• The ik-chain must have three joints, no more, no less.
• The number of DoF is: three for the root joint and two for each of the remaining joints.
• Only one end effector is supported and it has six degrees of freedom.
The model described above will be referred to as the "3-2-2-DoF model". Despite its restrictions, the 3-2-2-DoF structure is present in four crucial body regions of the avatar: the left/right arm and the left/right leg (Table 5).
Analytical IK Chain Joints
Left Arm LeftShoulder (3), LeftElbow (2), LeftWrist (2)
Right Arm RightShoulder (3), RightElbow (2), RightWrist (2)
Left Leg LeftUpperLeg (3), LeftKnee (2), LeftAnkle (2)
Right Leg RightUpperLeg (3), RightKnee (2), RightAnkle (2)

Table 5: Analytical IK Chains

3.2.4 Dynamics
Each body joint is able to generate torques to move its attached bones. There are two methods provided for applying the dynamics: either by directly setting the exact torque, i.e. forward dynamics, or by setting the desired relative angular velocity, i.e. inverse dynamics. It should be noted that all the dynamics described in this document refer only to rigid body dynamics [10].

3.2.4.1 Forward Dynamics


In forward dynamics the torques are set directly to each of the avatar's joints and the
result is motion. The child bone is accelerated via the equation:

$\vec{\tau} = I\,\vec{a}_r \Leftrightarrow \vec{a}_r = I^{-1}\vec{\tau} \Leftrightarrow \begin{bmatrix} a_{r,x} \\ a_{r,y} \\ a_{r,z} \end{bmatrix} = \begin{bmatrix} I_{xx} & I_{xy} & I_{xz} \\ I_{yx} & I_{yy} & I_{yz} \\ I_{zx} & I_{zy} & I_{zz} \end{bmatrix}^{-1} \begin{bmatrix} \tau_x \\ \tau_y \\ \tau_z \end{bmatrix}$ (26)

where $\vec{\tau}$ is the 3D torque vector expressed in the coordinate system relative to the parent bone, $\vec{a}_r$ is the angular acceleration of the child bone and $I$ is a 3x3 matrix called the moment of inertia tensor. The moment of inertia tensor is a measure of a rigid body's resistance to changes to its rotation. Each bone element has its own moment of inertia tensor, which depends on the bone's mass and its shape: the same mass distributed in different shapes results in different tensors. The mass distribution and the computation of the bone elements are performed by the Bone Manager.
By using Eq. 26, after the application of torque, the child bone is accelerated. The joint is responsible for this angular acceleration. However, besides the torque, the joint needs to apply a force to its child bone in order to keep it connected to its parent. This force is computed internally by the Open Dynamics Engine. So, besides the angular acceleration, the force changes the linear acceleration of the child bone, via:

$\vec{f}_{internal} = m\,\vec{a}_{internal} \Leftrightarrow \vec{a}_{internal} = m^{-1}\,\vec{f}_{internal}$ (27)
where $\vec{f}_{internal}$ is the internal force vector, $\vec{a}_{internal}$ is the linear acceleration of the child bone's centre of mass due to this force, and $m$ is a scalar that declares the bone's mass.
Finally, in order to compensate for the gravitational forces, the joint adds an extra anti-gravity force which is opposite to its weight:

$\vec{f}_{anti\text{-}gravity} = -\vec{\gamma}\,m \approx \begin{bmatrix} 0 \\ 0 \\ 9.81\cdot m \end{bmatrix}$ (28)
where $\vec{f}_{anti\text{-}gravity}$ is the force that the joint needs to apply in order to negate its weight, and $\vec{\gamma}$ denotes the Earth's constant gravitational acceleration.
Forward dynamics is used only for testing the avatar's dynamic capabilities. It should be avoided when running simulations, because its usage may lead to unnatural motions. For proper simulations, the inverse dynamics approach must be followed; otherwise, the user must calculate and apply all the forces and torques to the rigid bodies.
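A compact sketch of Eqs. 26 and 28 is given below, under the simplifying assumption that the inertia tensor, the torque and gravity are all expressed in the same frame; the internal constraint force of Eq. 27, which ODE computes on its own, is not reproduced.

```python
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])  # assumed z-up world frame

def angular_acceleration(inertia_tensor, torque):
    """Eq. 26: a_r = I^-1 * tau (3x3 inertia tensor, 3d torque vector)."""
    return np.linalg.solve(inertia_tensor, torque)

def anti_gravity_force(mass):
    """Eq. 28: force that cancels the bone's weight."""
    return -mass * GRAVITY  # = [0, 0, 9.81*m]

# toy usage: a 1 kg bone with a diagonal inertia tensor, accelerated by a small torque about z
I = np.diag([0.02, 0.02, 0.01])
print(angular_acceleration(I, np.array([0.0, 0.0, 0.005])))  # rad/s^2
print(anti_gravity_force(1.0))                               # N
```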

3.2.4.2 Inverse Dynamics


Inverse dynamics is a method for computing forces and moments of force (torques) based on the kinematics (motion) of a body and the body's inertial properties (mass and moment of inertia). The method can be simply stated as: "having a target body motion, compute the torques and forces that must be applied by each joint to its child bone".
Inverse dynamics are computed and applied only in level-2 simulation sessions. The
module responsible for the management of the inverse dynamics is the Motion
Manager. The algorithm that is followed can be described in the following abstract way:
1. Get the avatar's current configuration C current , and pass it to the Motion
Manager.
2. Compute the target configuration C target by using the IK Module or any other
higher level module and pass it to the Motion Manager.
3. Compute the series of the in-between “target” postures by using a specific
interpolation function (more in 3.2.2.2). There is one such in-between “target”
posture for every simulation step.
4. Having the series of postures, try to follow them by applying inverse dynamics.
The 4th step is further analysed in a series of actions. First, each transition from the
current posture to the next, gives a target angular velocity for every joint:
$\vec{v}_{angular} = \frac{d\vec{\omega}}{dt}$ (29)

where $\vec{v}_{angular}$ denotes the angular velocity that the joint must follow in order to reach the targeted configuration in time $dt$, where $dt$ is the simulation step. If the joint stores its rotation in Euler angles, then Eq. 29 is rewritten as:

$\vec{v}_{angular} = \frac{1}{dt} \begin{bmatrix} \omega_x(t+1) - \omega_x(t) \\ \omega_y(t+1) - \omega_y(t) \\ \omega_z(t+1) - \omega_z(t) \end{bmatrix}$ (30)
where $\omega_x(t)$ denotes the x-axis Euler angle of the joint's relative rotation at the current time step and $\omega_x(t+1)$ the same Euler angle in the target frame. If the target configuration contains relative rotations declared in quaternions, the relative rotation change is given by:

$q_{delta} = q(t+1)\cdot q^{-1}(t)$ (31)

where $q(t+1)$ is the quaternion that contains the target frame's rotation and $q^{-1}(t)$ is the inverse quaternion of the current frame. Quaternion $q_{delta}$ is then converted into an axis-angle representation:

$q_{delta} \rightarrow \begin{bmatrix} d\omega \\ \vec{r} \end{bmatrix} \rightarrow \begin{bmatrix} d\omega\cdot r_x \\ d\omega\cdot r_y \\ d\omega\cdot r_z \end{bmatrix}$ (32)

where $\vec{r}$ is the unit axis around which the child bone must be rotated by $d\omega$ rad in order to reach its target configuration. Dividing Eq. 32 by $dt$, the $\vec{v}_{angular}$ can be retrieved:

$\vec{v}_{angular} = \frac{1}{dt} \begin{bmatrix} d\omega\cdot r_x \\ d\omega\cdot r_y \\ d\omega\cdot r_z \end{bmatrix}$ (33)
Having the angular velocity (either from Eq. 30 or Eq. 33), the next step is to compute the forces and torques that need to be applied at the joints in order to move the respective bones. This problem is solved automatically by the Open Dynamics Engine (ODE), and the only requirement is to input the joint's target angular velocity $\vec{v}_{angular}$. ODE uses a Lagrange multipliers technique to provide the inverse dynamics solution. Besides the target velocity, the Lagrange multipliers take into consideration a number of constraints, such as the maximum allowable torque of the joint, the joint's range of motion, etc., and provide a solution with the minimum required torque.
Here it must be noted that, depending on the avatar's strength capabilities, i.e. its
maximum joint torques, the target configuration may not be achieved in the time that
was given. The Motion Manager takes this into consideration and can extend the joint's
given time if needed. Of course, this feature can be disabled for more “strict” simulation
sessions.
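The per-step computation of the target angular velocity from two relative rotations (Eqs. 31–33) can be sketched as follows, assuming unit quaternions stored as (w, x, y, z); the helper names are illustrative.

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of two quaternions given as (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return np.array([aw*bw - ax*bx - ay*by - az*bz,
                     aw*bx + ax*bw + ay*bz - az*by,
                     aw*by - ax*bz + ay*bw + az*bx,
                     aw*bz + ax*by - ay*bx + az*bw])

def target_angular_velocity(q_now, q_next, dt):
    """Eqs. 31-33: q_delta = q(t+1) * q(t)^-1, converted to axis-angle and divided by dt."""
    q_inv = q_now * np.array([1.0, -1.0, -1.0, -1.0])      # conjugate == inverse for unit quaternions
    q_delta = quat_mul(q_next, q_inv)
    w, v = q_delta[0], q_delta[1:]
    norm = np.linalg.norm(v)
    if norm < 1e-9:                                         # no rotation change requested
        return np.zeros(3)
    angle = 2.0 * np.arctan2(norm, w)                       # d_omega
    axis = v / norm                                         # unit axis r
    return angle * axis / dt                                # Eq. 33

# toy usage: a 90 degree rotation about z requested over one 0.02 s simulation step
q0 = np.array([1.0, 0.0, 0.0, 0.0])
q1 = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
print(target_angular_velocity(q0, q1, 0.02))  # ~[0, 0, 78.5] rad/s
```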

3.2.4.3 Dynamics Trainer


It has been declared that each virtual user model may have different strength
capabilities, thus the maximum allowable torque has to be different for each joint.
However, there is a problem here: it is almost impossible to measure the maximum
torque for each body joint and simply transfer the information into the core simulation
platform's joint elements. In most of the cases the strength capabilities for a person are
formalized in phrases like “the maximum weight this person can lift by using the left
arm is 20kg”.
The dynamics trainer has been created in order to cope with this problem. It contains a series of tests that are applied to the current avatar model in order to assign the proper maximum torque values to each joint. Each dynamics test contains the following elements:
• An avatar's configuration/posture.
• An abstract mass object – it is defined as a point-mass.
• A specific body part, i.e. bone, to which the mass object will be attached.
Having the above declarations, the dynamics trainer applies the posture to the avatar and attaches the mass object to the declared bone. Then it computes the torques needed to hold this weight stable. The final output of the dynamics trainer is the computation of all the torque factors $f_\tau$ (Eq. 1).
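As a hedged illustration of the kind of computation the dynamics trainer performs, the static torque each chain joint must counteract in order to hold a point mass can be estimated from the lever arm and the weight force; the data layout below is an assumption, not the trainer's actual interface.

```python
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])

def holding_torques(joint_positions, mass_point, mass_kg):
    """Gravitational moment each joint of the chain must counteract to keep a point mass still.

    joint_positions: list of 3d joint positions, ordered from the chain root to the
                     bone carrying the mass (e.g. shoulder, elbow, wrist).
    mass_point:      3d position where the point mass is attached.
    """
    weight = mass_kg * GRAVITY                       # downward force of the test mass
    torques = {}
    for i, p in enumerate(joint_positions):
        lever = np.asarray(mass_point) - np.asarray(p)
        torques[i] = np.cross(lever, weight)         # moment of the weight about this joint
    return torques

# toy usage: a 20 kg mass held 0.65 m in front of the shoulder
chain = [np.array([0.0, 0.0, 1.4]),                  # shoulder
         np.array([0.3, 0.0, 1.4]),                  # elbow
         np.array([0.6, 0.0, 1.4])]                  # wrist
print(holding_torques(chain, np.array([0.65, 0.0, 1.4]), 20.0))
```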

3.3 Advanced Motor Simulation


In this sub-section the algorithms that are used for motion planning, gait and grasp
planning are described.

3.3.1 Motion Planning Algorithms


The motion planning algorithms and their respective classes are part of the Collision
Avoidance Manager. The algorithms used are sampling-based [11]. There are two
sampling-based motion planners supported: a multi-query graph based approach and
an exploring-tree based approach. Both are explained in the following paragraphs, but
first an introduction to the main concepts of the motion planning is presented.

3.3.1.1 Configuration Spaces


The core element of the motion planning of the avatar is the configuration space. As
already explained in 3.2.2.1, an avatar configuration contains its postural information,
such as joints' local and global rotations and translations. The configuration space is a
set of such configurations, i.e.:

$C = \{c_1, c_2, \ldots, c_N\}$ (34)

where $C$ denotes the configuration space containing $N$ configurations ($c$). These configurations are called "samples". The sampling process is based on the construction of randomly generated postures of the avatar; configurations that contain self-colliding postures are discarded during the process. It must be mentioned that the "real" configuration space has an infinite number of avatar configurations and that Eq. 34 contains a representative subset of it.
After loading the scene, each configuration $c$ is tested for collisions with the scene objects. The whole process is performed by the Collision Avoidance Manager and is inspected by the Scene and Humanoid modules. Let the boolean function that decides whether a configuration is collision-free be denoted by $f_{free}$. If the configuration does not collide with the scene objects it is flagged as "free"³ or "true", otherwise it is flagged as "closed" or "false":

$f_{free}(c) = \begin{cases} 1, & \text{if } c \text{ is collision-free} \\ 0, & \text{otherwise} \end{cases}$ (35)

3 The adjectives “free” and “closed” are derived from the robotics domain.


So $C$ can be split into two sets:

$C = \{C_{free} \cup C_{closed} \mid C_{free} \cap C_{closed} = \emptyset\}$ (36)

$C_{free} = \{c \in C \mid f_{free}(c) = 1\}$ (37)

$C_{closed} = \{c \in C \mid f_{free}(c) = 0\}$ (38)

Let the avatar's initial configuration be denoted as $c_0 \in C_{free}$ and the goal configuration as $c_G \in C_{free}$. The motion planning algorithm then tries to find a non-colliding path from $c_0$ to $c_G$, i.e.:

$P = \{c(t) \in C_{free} \mid t \in [t_0, t_G],\ c(t_0) = c_0,\ c(t_G) = c_G\}$ (39)


There are two main approaches for providing the solution of Eq. 39: multi-query and single-query planners. The main difference lies in the sampling process of the $C_{free}$ space. For multi-query motion planning a graph-based technique has been implemented, described in 3.3.1.3. The single-query motion planner is based on rapidly-exploring random tree structures (analysed in 3.3.1.4).

3.3.1.2 Configuration Distance


Avatar configurations are complex structures: they contain various heterogeneous information, such as joints' positions, angles, quaternions, etc. Thus, a proper distance definition between two configurations is not obvious.
First, the data to be compared must be of the same type, i.e. position-only, angle-only, quaternion-only, etc. With this in mind, two basic distance metrics are defined:
• the relative DoF angle distance metric: this distance is computed by averaging the absolute difference (in rad) of every joint DoF pair between the two configurations:

$d_{ang}(c_a, c_b) = d_{ang}(c_b, c_a) = \frac{1}{D}\sum_{d=1}^{D} \left|\theta_{a,d} - \theta_{b,d}\right|$ (40)

where $c_a$, $c_b$ are the two configurations, each containing $D$ degrees of freedom, and $\theta_{a,d}$, $\theta_{b,d}$ denote the d-th DoF Euler angles of $c_a$ and $c_b$ respectively.
• the joint position distance metric: this distance is computed by averaging the relative-to-the-avatar-root distances of all the joints:

$d_{pos}(c_a, c_b) = d_{pos}(c_b, c_a) = \frac{1}{N}\sum_{i=1}^{N} \left|\vec{p}_{a,i} - \vec{p}_{b,i}\right|$ (41)

Both of the above distances can be computed very fast and are used extensively in the
collision avoidance algorithms.
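The two metrics translate directly into code; the sketch below assumes a configuration is stored as a flat array of DoF Euler angles plus an array of root-relative joint positions (an illustrative layout, not the platform's internal one).

```python
import numpy as np

def d_ang(theta_a, theta_b):
    """Eq. 40: mean absolute DoF-angle difference (rad) between two configurations."""
    return float(np.mean(np.abs(np.asarray(theta_a) - np.asarray(theta_b))))

def d_pos(joints_a, joints_b):
    """Eq. 41: mean distance between corresponding root-relative joint positions."""
    diff = np.asarray(joints_a) - np.asarray(joints_b)
    return float(np.mean(np.linalg.norm(diff, axis=1)))
```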

3.3.1.3 Graph-based Motion Planning


The graph-based motion planning is a multi-query technique that uses a graph to store and manage the $C_{free}$ samples. Let the graph be denoted as $G = \{V, E\}$, where the set $V$ contains the graph nodes and the set $E$ contains the edges/links that connect its nodes.
After loading the scene, a set of configuration samples is generated by the Collision
Avoidance Manager. At each sample-generation step, the sample is checked for
collisions with the scene. If it is collision-free, the sample is placed as a node into the
graph, otherwise it is discarded. So the set V is actually the free configuration space,
i.e.:
$V \equiv C_{free}$ (42)
It is preferable that $V$ is as dense as possible, i.e. that there are enough samples occupying the 3D scene space; otherwise the graph will not be capable of providing satisfactory results. After numerous tests, a few thousand configurations proved to be a good rule of thumb for providing satisfactory results in a 2x2x2 m³ region.
When a sample $c$ is placed into the graph, it is checked whether it can connect with any of the existing graph nodes. A connection between two nodes, $c_a$ and $c_b$, declares that the transition $c_a \rightarrow c_b$ is possible, i.e. that the avatar can move from configuration $c_a$ to $c_b$ without colliding. This check is achieved by using the $f_{free}$ function (Eq. 35) in conjunction with the motion interpolation described in 3.2.2.2. So, the link set $E$ is:

$E = \{(c_a, c_b) \mid c_a, c_b \in V,\ f_{free}(f_{interpolation}(c_a, c_b, t)) = 1,\ \forall t \in [0.0, 1.0]\}$ (43)

where $f_{interpolation}$ is the function of Eq. 6.
At this point a clarification needs to be made: when a new sample $c_a$ is added into the graph, the Collision Avoidance Manager checks and creates the new links only for a small neighbourhood around $c_a$. This neighbourhood is defined by setting a maximum allowable distance (Eqs. 40, 41).
This approach ensures that transitions between distant nodes are not checked, which
results in:
• less graph construction time,
• small number of neighbours per node, i.e. less memory size is needed,
• faster graph traversal.
Having the graph defined, the motion planning solution (Eq. 39) can be provided by using fast graph traversal techniques. First, it is ensured that both the $c_0$ and $c_G$ configurations have been added to the graph; otherwise they are added and their connections are created. Then, by using a cost metric, the graph is traversed in order to provide the optimal path $c_0 \rightarrow c_G$. The graph is traversed very quickly and provides the solution almost instantly, because the whole process is based on a binary heap implementation [12].
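The multi-query planner can be summarised by the following sketch: keep the collision-free samples as graph nodes, link close and mutually reachable pairs (Eq. 43), and answer queries with a heap-based shortest-path search. The collision and interpolation checks are passed in as callbacks, since their real implementations live in the Collision Avoidance Manager; everything shown is an illustrative simplification.

```python
import heapq

def build_graph(samples, is_free_path, distance, max_link_dist):
    """Keep the free samples as nodes and link near, mutually reachable pairs (Eq. 43)."""
    nodes = list(samples)
    edges = {i: [] for i in range(len(nodes))}
    for i in range(len(nodes)):
        for j in range(i + 1, len(nodes)):
            d = distance(nodes[i], nodes[j])
            if d <= max_link_dist and is_free_path(nodes[i], nodes[j]):
                edges[i].append((j, d))
                edges[j].append((i, d))
    return nodes, edges

def shortest_path(edges, start, goal):
    """Dijkstra search over the configuration graph using a binary heap."""
    best = {start: 0.0}
    came = {start: None}
    heap = [(0.0, start)]
    while heap:
        cost, node = heapq.heappop(heap)
        if node == goal:                        # reconstruct the node sequence c_0 -> c_G
            path = []
            while node is not None:
                path.append(node)
                node = came[node]
            return list(reversed(path))
        if cost > best.get(node, float("inf")):
            continue
        for nxt, w in edges[node]:
            new_cost = cost + w
            if new_cost < best.get(nxt, float("inf")):
                best[nxt], came[nxt] = new_cost, node
                heapq.heappush(heap, (new_cost, nxt))
    return None                                 # no path between the start and goal nodes
```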

3.3.1.4 Rapidly-exploring Random Trees (RRT)


The rapidly-exploring random tree, or RRT, is a motion planning method that explores the configuration space $C$ incrementally [11][13]. Let the RRT be denoted as $R = \{V, E\}$, where $V$ is the set of tree nodes and $E$ is the set of their connections. The avatar's initial configuration $c_0$ is stored at the tree's root ($c_0 \in V$) and the avatar's goal configuration is denoted as $c_G$. Both $c_0$ and $c_G$ must belong to the $C_{free}$ space, otherwise the algorithm will return an error.
At every iteration, the algorithm runs a series of steps in order to expand the tree until the path $c_0 \rightarrow c_G$ is found. These steps are:

Step 1: Select pseudo-randomly an existing tree node. When there are no other nodes except the root $c_0$, select $c_0$. Let this selected node be denoted as $c_{expand}$. The selection of the $c_{expand}$ node is based on some criteria and is not purely random: the idea is that the $c_{expand}$ candidate will most of the time be the most distant and isolated of the tree nodes. This helps the tree cover most of the $C_{free}$ space very quickly.

Step 2: The algorithm tries to expand the $c_{expand}$ node by generating another configuration, $c_{new}$, that is close to $c_{expand}$. The generation process adds random angle deviations to the $c_{expand}$ angles. Instead of generating a new $c_{new}$, every $k$ iterations $c_G$ itself is used as $c_{new}$.

Step 3: Check whether the path from the $c_{expand}$ to the $c_{new}$ configuration exists, i.e.:

$f_{free}(f_{interpolation}(c_a, c_b, t)) = 1,\ \forall t \in [0.0, 1.0]$ (44)

◦ If the path does not exist, then try to compute another $c_{new}$, i.e. go to Step 2. Here a check is made to see whether the algorithm has been trapped, i.e. whether Step 2 has been called too many times. If so, the algorithm returns to Step 1 and selects another node for expansion.
◦ If $c_{new} = c_G$ then the algorithm has found a solution.

Step 4: Connect the $c_{new}$ node to the $c_{expand}$ node, by setting the first as a child of the second. With this step the tree has been expanded and the algorithm returns to Step 1.
The RRT method does not require any preprocessing, because it explores the configuration space incrementally. The algorithm keeps expanding the tree until a solution is found. However, if the path $c_0 \rightarrow c_G$ does not exist, the algorithm would run indefinitely; it must therefore be stopped after a predefined number of iterations.
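A compact single-tree version of the four steps above is sketched below; the node-selection bias of Step 1 and the local expansion of Step 2 are simplified to plain random choices, and the interpolated collision check is again a callback (an illustrative sketch, not the implemented planner).

```python
import random
import numpy as np

def rrt(c0, cG, is_free_path, max_iters=5000, step=0.1, goal_every=20):
    """Grow a tree from c0 until cG can be attached; configurations are angle vectors."""
    parents = {0: None}
    nodes = [np.asarray(c0, float)]
    for it in range(max_iters):
        idx = random.randrange(len(nodes))                    # Step 1 (simplified: uniform pick)
        c_expand = nodes[idx]
        if it % goal_every == 0:                              # Step 2: goal bias every k iterations
            c_new = np.asarray(cG, float)
        else:
            c_new = c_expand + np.random.uniform(-step, step, size=c_expand.shape)
        if not is_free_path(c_expand, c_new):                 # Step 3: interpolated path must be free
            continue
        nodes.append(c_new)                                   # Step 4: attach the new node
        parents[len(nodes) - 1] = idx
        if np.allclose(c_new, cG):                            # solution found: walk back to the root
            path, i = [], len(nodes) - 1
            while i is not None:
                path.append(nodes[i])
                i = parents[i]
            return list(reversed(path))
    return None  # stopped after the predefined number of iterations
```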
The main drawback of the algorithm is that convergence to the solution is slow, especially in scenes with complex objects. So, in addition to the above method, the collision avoidance module supports an RRT method based on two trees, where:
• The root of the first tree is the c 0 configuration.
• The root of the second tree is the c G configuration.
• The trees are expanded one after another.
• After the expansion of the first tree, the algorithm searches for the second tree's node $c_b$ that is closest to $c_{new}$. If the two nodes can be connected, i.e. $c_{new} \rightarrow c_b$, the two trees can also be connected, and the solution is the combination of their respective sub-paths, i.e. $c_0 \rightarrow c_{new}$ and $c_b \rightarrow c_G$.
The method with two RRTs converges much faster than the single-tree RRT, because it cannot easily be trapped near the closed/obstacle space $C_{closed}$. A schematic representation of the implemented 2-RRT method is presented in Figures 25 and 26.


Figure 25: A 2D schematic representation of the 2-RRT method. The first tree (in green) has the start configuration $c_0$ as its root, while the second tree (in orange) has the goal configuration $c_G$ as its root. The numbers declare the expansion sequence of the two trees. After the 31st iteration, a connection has been found between the two trees.



Figure 26: After the two trees are connected (via nodes 31 & 30), the solution can be
provided. The path is represented as the sequence of the red-circled nodes.

3.3.1.5 Comparison between Graph-based and RRT motion planning


There are advantages and disadvantages to each motion planning method. The biggest difference between the graph-based and RRT techniques is that the first pre-computes the sampling of the $C_{free}$ space, while the second explores it incrementally. In simulation sessions that involve tests of different tasks in the same environment, the graph-based algorithm is preferable, because:
• the free-space graph is computed only once and can be reused many times, with different start and goal postures, making it suitable for simulation sessions that involve the same scene objects but different scenarios;
• the graph-based solution is provided rapidly, much faster than with the RRT;
• it can report very quickly that there is no path from the start to the goal configuration.
However, a different scene model requires re-computation of the configuration graph, a process that consumes a lot of time. In these cases, the RRT is preferred. Another big difference is that the RRT method is based on stochastic procedures, so the solutions it provides differ between runs (for the same $c_0$, $c_G$). In contrast, the configuration graph always provides the same solution for the same pair $c_0$, $c_G$. A comparison between the two methods is shown in Table 6.


Initialisation process
• Graph-based Motion Planner: initialisation is needed only when either the scene or the avatar model changes; otherwise, the graph is reusable for many motion planning tests.
• RRT Motion Planner: initialisation is needed every time, for every motion planning request.

Initialisation time
• Graph-based Motion Planner: depends on the scene and the avatar's number of degrees of freedom; for very dense graphs it may take a long time (e.g. more than an hour for generating a graph with 2×10⁴ configurations).
• RRT Motion Planner: the initialisation process is very fast (order of milliseconds).

Processing time
• Graph-based Motion Planner: provides the solution very fast, almost instantly, even for high-density graphs. The algorithm is capable of determining whether a solution exists or not.
• RRT Motion Planner: the time depends on the distance between the start and goal configurations and on the obstacles in between; it may take several minutes for very complex scenes. The algorithm cannot determine whether a solution exists, so it must be stopped after a predefined number of iterations.

Optimal solution
• Graph-based Motion Planner: the solution provided is near-optimal, i.e. with the lowest cost. The path is always the same for the same pair $c_0$, $c_G$.
• RRT Motion Planner: the solution provided is not optimal and needs post-processing in order to achieve satisfactory results. The solution is different every time the algorithm is called, due to its stochastic procedures.

Table 6: Comparison between the Graph-based and Rapidly-exploring Random Tree-based motion planners

3.3.1.6 Collision Groups


The collision groups are a categorisation of the various elements that take part in a scenario, such as the avatar's bones and the scene objects, in order to define their collisions properly. The Collision Avoidance Manager needs to know which elements can collide and which cannot. In order to properly describe the collision "rules", a system of collision categories was created. The main components of this system are the collision groups, which are presented in Table 7.
Collision Group – Description – Can Collide With
• Null – Used for elements that are not subject to collisions. – Cannot collide with anything.
• Floor – Used for scene objects that can act as floor (e.g. ramps, stairs). – Can cause collisions with avatar body parts and moveable objects, but the Floor group's elements cannot be moved.
• Static Object – Used for scene objects that cannot be moved. – Can cause collisions with avatar body parts and moveable objects, but the Static Object group's elements cannot be moved.
• Moveable Object – Used for moveable scene objects. – Static Object, Floor, Left/Right Arm/Hand/Leg/Foot, Head, Torso, other Moveable Objects.
• Left Arm – Used for the left arm (excluding the hand and fingers).
• Right Arm – Used for the right arm (excluding the hand and fingers).
• Left Hand – Used for the left hand.
• Right Hand – Used for the right hand.
• Left Leg – Used for the left leg (excluding the foot).
• Right Leg – Used for the right leg (excluding the foot).
• Left Foot – Used for the left foot and toes.
• Right Foot – Used for the right foot and toes.
• Torso – Used for the torso body parts (including the pelvis).
• Head – Used for the head and neck.
All avatar body-part groups (Left/Right Arm, Hand, Leg, Foot, Torso and Head) can collide with: Moveable Object, Static Object, Floor.

Table 7: Collision Groups and their elements

The user can fully parametrise the dynamic behaviour of the simulation collisions by
enabling or disabling a set of flags that define the properties of the collision groups.
The supported flags are:
• Flag “Null”: no collision.
• Flag “Moveable with Static”: if true, it enables the collision between the
moveable and static objects.
• Flag “Moveable with Moveable”: if true, it enables the collision between the
moveable objects.


• Flag ”Moveable With Floor”: if true, it enables the collision between the
moveable objects and the floor objects.
• Flag "Avatar Self Collision": if true, it enables collisions between the groups that make up the avatar's body.
• Flag "Avatar with Static": if true, the avatar can collide with the static objects.
• Flag "Avatar with Moveable": if true, the avatar can collide with the moveable objects.
• Flag "Avatar with Floor": if true, the avatar can collide with the floor.
Several flags (or all of them) can be enabled simultaneously in order to allow specific collision behaviour. However, it must be noted that enabling all the flags may lead to instabilities in level-2 sessions if the scene complexity is very high.
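As an illustration of how these flags could gate a pair-wise collision test, consider the following sketch; the group names mirror Table 7, but the function and flag keys are hypothetical.

```python
AVATAR_GROUPS = {"LeftArm", "RightArm", "LeftHand", "RightHand", "LeftLeg", "RightLeg",
                 "LeftFoot", "RightFoot", "Torso", "Head"}

def can_collide(group_a, group_b, flags):
    """Decide whether two collision groups may generate contacts, given the user flags."""
    pair = {group_a, group_b}
    if "Null" in pair:
        return False
    if pair == {"Moveable", "Static"}:
        return flags.get("MoveableWithStatic", False)
    if pair == {"Moveable"}:                       # moveable vs. moveable
        return flags.get("MoveableWithMoveable", False)
    if pair == {"Moveable", "Floor"}:
        return flags.get("MoveableWithFloor", False)
    if pair <= AVATAR_GROUPS:                      # two avatar body parts
        return flags.get("AvatarSelfCollision", False)
    if pair & AVATAR_GROUPS:                       # one avatar body part vs. a scene group
        other = (pair - AVATAR_GROUPS).pop()
        return flags.get({"Static": "AvatarWithStatic",
                          "Moveable": "AvatarWithMoveable",
                          "Floor": "AvatarWithFloor"}[other], False)
    return False

# example: allow everything except avatar self-collision
flags = {"MoveableWithStatic": True, "MoveableWithMoveable": True, "MoveableWithFloor": True,
         "AvatarSelfCollision": False, "AvatarWithStatic": True,
         "AvatarWithMoveable": True, "AvatarWithFloor": True}
print(can_collide("LeftHand", "Moveable", flags))   # True
print(can_collide("LeftHand", "RightHand", flags))  # False
```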

3.3.2 Gait Planning


The Gait Module is responsible for the locomotion of the avatar. Its role is twofold: first, it provides the shortest path between two points; then, it generates the gait data. The algorithm that solves the first problem is described in 3.3.2.1, while the second is presented in 3.3.2.2. The algorithms described in the following paragraphs are based on the ones used in [14][15][16].

3.3.2.1 Gait Path-finding


The gait path-finding algorithm is responsible for providing a valid path to be followed by the avatar when performing gait tasks. The path is described by a series of 3D points and the algorithm takes into account the bounding volume of the avatar. The algorithm is based on the A* search algorithm and provides the optimal solution very quickly [17].
Before any path computation, the scene must be transformed into a graph that permits the application of the A* algorithm. This pre-processing is described by the following steps:
Step 1: Define a 3D grid of the scene and its objects (without the avatar). Any grid voxel that does not contain any part of the scene geometry is discarded. The default grid voxel is a cube with side length equal to 10 cm; however, the grid density is fully parametrisable (voxels can also have a non-cubic form).
Step 2: The remaining voxels will contain part of the scene geometry and are
labelled as:
◦ floor voxels, if they contain part of the floor, i.e. if the contained geometry is
representing a floor surface.
◦ object voxels, if they contain any other static or moveable object, such as
walls, furniture, etc.
Step 3: A collision graph is created based on the floor and object voxels. If a graph node contains a floor voxel it is declared "free", otherwise it is a "closed" node.
Step 4: All the "free" nodes establish links to their "free" neighbour nodes. A cost is also assigned to each link, analogous to the distance between the two voxel centres. Any connection between a "free" (i.e. floor) and a "closed" (i.e. wall, object, etc.) node is not valid and is discarded. A link between two nodes declares that the avatar's transition, i.e. step, from one voxel to another is permitted (Figure 27).

Figure 27: A 2D representation of the 3D graph elements used in the gait path-finding algorithm. The green nodes are the floor "open" nodes and the nodes in red denote the "closed" nodes. The left image represents a voxel with its 8 neighbours, all of which are "open". In the middle image, 3 of the neighbours do not contain floor elements, thus the transition to them is discarded (red arrows). The result is shown in the right image.

After the creation of the floor graph, the floor is represented by a set of voxels. A floor-graph example is shown in Figure 28. It must be noted that any graph elements that are near the walls are labelled as "closed" by the Gait Module, taking into account the bounding volume of the avatar. Additionally, the floor graph takes into consideration the height of the avatar and rejects any floor elements that have other geometry too close above them, e.g. the ground below the first steps of a set of stairs.


Figure 28: The floor graph and its voxels (shown as white boxes). The walls and any other scene objects that are not marked as "floor" are represented in red semi-transparent colour.

After various tests, the best avatar bounding geometry (in terms of both accuracy and efficiency) for rejecting the aforementioned elements proved to be the capsule. Bounding boxes suffer from inaccuracies around corners, while the cylinder performed poorly on ramps and stairs. More complex bounding volume representations are not recommended, because the floor graph's creation process would require too much time. Using the capsule, the graph is generated in less than 0.5 s for the scene shown in Figure 28.
The shortest path can be found by applying the A* algorithm to the floor graph. The solution (whether it exists or not) is provided rapidly. However, because of the scene segmentation process, the raw result is not satisfactory; thus, a series of post-processing steps is applied. First, any intermediate path nodes that are not needed are discarded, and then a Catmull-Rom spline [18] is fitted through the remaining nodes (Figure 29). A path-finding example is shown in Figure 30.
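The search itself is a standard A* over the floor voxels; the sketch below reduces the problem to a 2D grid of free cells with 8-connected neighbours and Euclidean costs, which is an illustrative simplification of the 3D voxel graph described above.

```python
import heapq
import math

def a_star(free_cells, start, goal):
    """A* over 'free' floor cells; free_cells is a set of (x, y) grid coordinates."""
    def h(c):                                  # straight-line heuristic to the goal
        return math.dist(c, goal)
    open_heap = [(h(start), 0.0, start)]
    came, best = {start: None}, {start: 0.0}
    neighbours = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
    while open_heap:
        _, g, cell = heapq.heappop(open_heap)
        if cell == goal:                       # reconstruct the voxel path
            path = []
            while cell is not None:
                path.append(cell)
                cell = came[cell]
            return list(reversed(path))
        for dx, dy in neighbours:
            nxt = (cell[0] + dx, cell[1] + dy)
            if nxt not in free_cells:          # "closed" voxel: wall, object or missing floor
                continue
            ng = g + math.hypot(dx, dy)        # cost proportional to voxel-centre distance
            if ng < best.get(nxt, float("inf")):
                best[nxt], came[nxt] = ng, cell
                heapq.heappush(open_heap, (ng + h(nxt), ng, nxt))
    return None

# toy usage: a 5x5 floor with a wall column at x == 2, y < 4
floor = {(x, y) for x in range(5) for y in range(5)} - {(2, y) for y in range(4)}
print(a_star(floor, (0, 0), (4, 0)))
```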



Figure 29: Floor-graph path-finding example: initially the voxels that collide with the scene objects are marked as "closed" (top left). Then the path is computed via the A* algorithm (top right). Any intermediate path elements that are not crucial are discarded (bottom left). Finally, a Catmull-Rom spline is fitted to the remaining elements (bottom right).


Figure 30: Result of the path-finding algorithm on a complex scene. The walls are
shown in red semitransparent colour. The blue cones represent the locations of the
floor-graph nodes that were used, while the green cone shows the target location. The
final path is represented by a green curve (Catmull-Rom spline).

3.3.2.2 Gait Cycles


The gait cycle is used to describe the complex activity of walking. More specifically, it encodes the walking pattern of the avatar and models a full gait stride, i.e. the time interval between two successive points of initial contact of the same foot. A gait cycle is periodic during the gait sequence. In every period, each leg passes through two phases (Figure 31):
• Stance phase: starts when the foot heel touches the ground and ends when the foot lifts off.
• Swing phase: starts when the foot lifts off and ends when it touches the ground.
As shown in Figure 31, the time needed for the two phases is not equal. In normal gait, the stance phase is around 60% of the gait cycle [19].


Figure 31: The average gait cycle. The stance and swing phases for the right leg are displayed. In normal gait, the stance phase is around 60% of the gait cycle.

In order to model a gait cycle properly, the following information is needed:

• The angle of every degree of freedom of the leg at the cycle-normalised time: $\phi_d(t),\ t \in [0.0, 1.0]$, where $\phi_d(t)$ represents the angle of the DoF $d$ at time $t$. The time is normalised, so when $t = 0.0$ the gait cycle is at its start and when $t = 1.0$ the cycle ends. For a proper definition, at least the angle information for the hip, knee and ankle joints is needed. The default angle data are taken from [19][20][21].
• The instances in time (time-stamps) when the left and right feet touch the ground and when they lift off. This is required because some motor-impaired users do not follow the 60%-40% pattern.
• The cadence, i.e. the number of full gait cycles per minute. This is needed because it encodes how fast the avatar walks.

Having the above information, the gait cycle can compute the avatar's left and right step length and its stride. However, if needed, e.g. due to an avatar's motor impairment, the step length can change (it can be either decreased or increased; asymmetric stepping is also supported) through the cooperation of the Gait Module and the IK Manager. The Gait Module's cycles do not represent only flat-ground walking: several gait cycles can be used and combined in order to support advanced gait behaviours, such as stair climbing and stair descending.
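The gait-cycle information listed above could be held in a small structure like the one below, which interpolates the per-DoF angle curves over the normalised cycle time and derives the cycle duration from the cadence; the layout and the sample numbers are illustrative, not the platform's data.

```python
import numpy as np

class GaitCycle:
    def __init__(self, angle_curves, stance_fraction=0.6, cadence=100.0):
        """angle_curves: {dof_name: angles (rad) sampled over normalised time t in [0, 1]}.
        stance_fraction: portion of the cycle spent in stance (about 0.6 for normal gait).
        cadence: number of full gait cycles per minute."""
        self.curves = {k: np.asarray(v, float) for k, v in angle_curves.items()}
        self.stance_fraction = stance_fraction
        self.cadence = cadence

    def angle(self, dof, t):
        """phi_d(t) for normalised cycle time t in [0, 1], linearly interpolated."""
        samples = self.curves[dof]
        x = np.linspace(0.0, 1.0, len(samples))
        return float(np.interp(t % 1.0, x, samples))

    def cycle_duration(self):
        """Seconds needed for one full stride, derived from the cadence."""
        return 60.0 / self.cadence

    def phase(self, t):
        return "stance" if (t % 1.0) < self.stance_fraction else "swing"

# toy usage with a coarse, made-up knee flexion curve (rad)
cycle = GaitCycle({"knee_flexion": [0.1, 0.3, 0.15, 0.05, 0.9, 1.1, 0.3, 0.1]})
print(cycle.angle("knee_flexion", 0.25), cycle.phase(0.25), cycle.cycle_duration())
```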

3.3.2.3 Avatar locomotion


The gait cycle structures produce the avatar's configuration sequences and pass them to the Gait Module. The Gait Module then coordinates the avatar to walk on the path that has been computed by the path planning algorithm. The coordination takes into account the maximum allowable deviation surface from the path and the maximum allowed turn angle.
If the avatar steps away from the computed path, a collision with a wall or any other obstacle may occur. Thus, it is very important that the avatar stays as close as possible to the path. This is achieved by using the following algorithm (Figure 32):
Step 1: Compute the 6-dimensional step-progress vector $\vec{v}_p$, which symbolises the transition from the current step to the next, taking into consideration the step length and the ground deviation. Let:

$\vec{v}_p = \begin{bmatrix} \vec{p} \\ \vec{v} \end{bmatrix} = \begin{bmatrix} p_x \\ p_y \\ p_z \\ v_x \\ v_y \\ v_z \end{bmatrix}$ (45)

where $\vec{p}$ is the current projection of the avatar's centre onto the ground surface and $\vec{v}$ is the vector that denotes the direction of the next step. The length of $\vec{v}_p$ is equal to the respective leg's step of the used gait cycle, i.e.:

$|\vec{v}_p| = s_{LR}$ (46)

where $s_{LR}$ denotes the respective leg's step. The direction $\vec{v}$ is computed in such a way that the point $\vec{p} + \vec{v}$ intersects the path, i.e.:

$\vec{p} + \vec{v} \in T$ (47)

where $T$ denotes the path's curve.

Step 2: Compute the path deviation error. The error is given as the surface $S(\vec{v}_p, T)$ that is defined by $\vec{v}_p$ and the path's curve (Figure 32).

Step 3: If $S(\vec{v}_p, T)$ exceeds a predefined threshold, the algorithm returns to Step 1 and decreases the step length, i.e. Eq. 46 becomes:

$|\vec{v}_p| = s'_{LR} = a\cdot s_{LR},\quad a \in (0, 1)$ (48)

Otherwise, the algorithm advances to the computation of the next progress vector.
The avatar is allowed to turn while walking along a path. Extreme turns must be avoided because they lead to unnatural gait simulation. The Gait Module has a maximum allowable turn angle threshold, above which it decreases the next step's length in order to decrease the turn angle (Figure 33). The algorithm is similar to the one described above, but in Step 3 it checks whether the angle threshold has been exceeded.
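A rough sketch of Steps 1–3 follows: pick the point on a poly-line approximation of the path one step length ahead, estimate the deviation, and shrink the step when a threshold is exceeded. The helper functions and the crude area approximation are assumptions made only for the sketch.

```python
import numpy as np

def point_on_path_at_distance(path, start, distance):
    """Walk along a poly-line path and return the first point 'distance' away from start."""
    for q in path:
        if np.linalg.norm(np.asarray(q) - np.asarray(start)) >= distance:
            return np.asarray(q, float)
    return np.asarray(path[-1], float)

def deviation_area(p, v, path):
    """Crude stand-in for S(v_p, T): distance of the step's midpoint from the path,
    multiplied by the step length."""
    mid = np.asarray(p) + 0.5 * np.asarray(v)
    gap = min(np.linalg.norm(mid - np.asarray(q)) for q in path)
    return gap * np.linalg.norm(v)

def next_progress_vector(p, path, step_length, max_area, shrink=0.5, min_step=0.05):
    """Eqs. 45-48: shorten the step until the path-deviation error is acceptable."""
    s = step_length
    v = point_on_path_at_distance(path, p, s) - np.asarray(p)   # p + v lies on the path
    while deviation_area(p, v, path) > max_area and s > min_step:
        s *= shrink                                              # Eq. 48: s' = a * s
        v = point_on_path_at_distance(path, p, s) - np.asarray(p)
    return v

# toy usage: a gently curving path sampled every 10 cm
path = [np.array([0.1 * i, 0.02 * i ** 2, 0.0]) for i in range(40)]
print(next_progress_vector(np.array([0.0, 0.0, 0.0]), path, step_length=0.7, max_area=0.02))
```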



Figure 32: Path deviation error illustration. The path to be followed is symbolised by a green curve, while the red arrows ($\vec{v}_x$) are the step-progress vectors. The yellow areas ($S_x$) denote the deviation from the path. If the error exceeds the threshold (left image), the progress steps are decreased (right image) in order to decrease the avatar's deviation from the path.


Figure 33: Turn angle illustration. If the turn angle is too sharp (left image), the next step is decreased (right image).
After the computation of the next step-progress vector, the Gait Module requests the avatar configurations from the gait cycle and transforms them accordingly. The configurations are then passed to the Motion Manager and the avatar moves. The Gait Module stops the avatar's movement when it has reached the path's end or when the gait task has failed. It must be mentioned that the Gait Module in its current form does not support gait generation based on dynamics; however, the dynamics metrics can still be approximated by using the kinematics. Two examples of the Gait Module data sequences are shown in Figures 34 and 35.



Figure 34: Gait sequence generated by the Gait Module. For better representation, only the lower half of the skeleton is presented (red). The spline path (green) is computed based on the floor-graph path (yellow). The step-progress vectors are shown in white. The steps are decreased for abrupt turns.



Figure 35: Gait sequence generated for a user model with decreased step length.

3.3.3 Grasp Planning


For the purposes of grasping objects a special module has been implemented. The Grasp Module analyses the 3D surface of a scene object and produces a valid arm configuration for the avatar. The implemented method is based on the algorithm described in [22].
Initially, the Grasp Module takes as input a user-defined sphere near the object's surface. The sphere is defined by a point of interest, or PoI (the sphere's centre), and an area of influence (denoted by the sphere's radius). The algorithm generates various points on the object's surface based on ray-casting. More specifically, rays are emitted from the location of the sphere's centre towards the sphere's surface. Every ray intersection with the object's triangle mesh results in a new point. The sphere radius and the density of the generated points are configurable. An example is shown in Figure 36, where a number of surface points has been generated for the steering-wheel grasping case.


Figure 36: Grasp Module surface point generation. Initially, a point of interest (yellow box) and a sphere (yellow transparent colour) must be defined. Then, by applying ray casting, various target points are generated.

Each of the generated points is going to be used as a target for the avatar's palm. By applying inverse kinematics, the Grasp Module generates various arm configurations. The grasp planner checks for hand-object penetration and discards the configurations that are not valid. From the remaining configurations, it chooses the best one based on the following criteria:
• palm placement: arm configurations where the palm's centre is closer to the sphere's centre are preferable.
• hand orientation: configurations whose orientation enfolds the object are preferred.
• finger collisions: grasp configurations that result in many finger collisions with the object's surface are favoured over the ones that do not.
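The candidate-selection logic can be summarised as in the sketch below: cast rays from the PoI, keep the non-penetrating arm configurations and rank them by the three criteria. The ray-casting, IK and penetration checks are left as callbacks because they belong to other modules, and the weights are illustrative.

```python
import numpy as np

def generate_surface_points(poi, radius, cast_ray, n_rays=64):
    """Cast rays from the PoI (sphere centre) towards random directions on the sphere;
    cast_ray(origin, direction) returns the first mesh intersection point or None."""
    points = []
    for _ in range(n_rays):
        d = np.random.normal(size=3)
        d /= np.linalg.norm(d)
        hit = cast_ray(np.asarray(poi, float), d)
        if hit is not None and np.linalg.norm(hit - poi) <= radius:
            points.append(hit)
    return points

def best_grasp(candidates, poi, w_place=1.0, w_orient=1.0, w_contact=0.5):
    """candidates: dicts with 'palm_centre', 'enfold_score' in [0, 1], 'finger_contacts'
    and 'penetrates'; pick the valid configuration with the best weighted score."""
    valid = [c for c in candidates if not c["penetrates"]]
    if not valid:
        return None
    def score(c):
        placement = -w_place * np.linalg.norm(np.asarray(c["palm_centre"]) - np.asarray(poi))
        return placement + w_orient * c["enfold_score"] + w_contact * c["finger_contacts"]
    return max(valid, key=score)
```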


Figure 37: The Grasp Module generates various configurations (left). Some of them are not valid and are rejected (middle). From the remaining ones, one is chosen based on criteria such as the number of collisions, hand placement, etc.

An example of the grasp configurations generation steps is shown in Figure 37.

3.4 Vision Simulation


The avatar's vision simulation is performed by the cooperation of two main entities: the
“lookAt” module and the Vision Model. The first is responsible for the avatar's head and
eye coordination, while the second applies various filters on the eye-input image
depending on the avatar's impairments.

3.4.1 Head & Eyes Coordination


The head and eye coordination is managed by the LookAt Module. The input to the algorithm is a 3D point representing the tracked object. The LookAt Module manipulates the degrees of freedom of the head and neck joints and rotates the avatar's eyeballs.
avatar's eyeballs.
The LookAt Module takes into account various parameters in order to achieve natural head motions. Several restrictions are applied in the computation of each joint rotation. The parameters that define this behaviour are:
• maximum allowable head and neck joint angular velocity,
• maximum allowable eyeball angular velocity,
• respect for the range of motion of the head and neck degrees of freedom.

Figure 38: Head and eye coordination. The LookAt Module target has been “locked”
on the purple box. The avatar's line of sight is represented by two green intersecting
lines.

As shown in Figure 38, the LookAt Module uses two lines to inform the user of the avatar-target line of sight. The line of sight is updated at each simulation step. If the avatar's target is out of the line of sight, the lines change their colour from green to red.

3.4.2 Vision Model


The Vision Model is responsible for filtering the eye input in order to simulate the avatar's vision impairments. Various vision impairment symptoms are supported. These are:
• Protanopia (Figure 40): a severe form of red-green color-blindness, in which
there is impairment in perception of very long wavelengths, such as reds. To
these individuals, reds are "perceived" as beige or gray and greens tend to
"look" beige or gray like reds [23][24].
• Deuteranopia (Figure 41): consists of an impairment in perceiving medium
wavelengths, such as greens [23][24].
• Tritanopia (Figure 42): a rarer form of color blindness, in which there is an inability to perceive short wavelengths, such as blues. Sufferers have trouble distinguishing between yellow and blue. They tend to confuse greens and blues, and yellow can "appear" pink [23][24].
• Protanomaly: a less severe version of protanopia [23][24].
• Deuteranomaly: a less severe form of deuteranopia. Those with deuteranomaly
cannot see reds and greens like those without this condition; however, they can
still distinguish them in most cases [23][24].
• Tritanomaly: a less severe version of tritanopia [23][24].
• Glaucoma (Figure 43): the main symptom is visual field loss, especially of the peripheral vision. Rarely, halos around lights appear in the visual field [25].
• Macular degeneration (Figure 44): results in loss of vision in the centre of the visual field. Another symptom is the loss of contrast sensitivity [26].
• Glare & contrast sensitivity (Figure 45): results in loss of contours and contrast. The symptoms are more severe when the visual input has bright colours or lights. The glare sensitivity symptom is present in cataract disease [27].

Figure 39: Normal vision, full-colour palette.
Figure 40: Protanopia results in red-green colour-blindness.
Figure 41: Deuteranopia, the perception of medium wavelengths is minimal.
Figure 42: Tritanopia results in the inability to perceive short wavelengths.


Figure 43: Glaucoma. A large proportion of the visual field is lost.
Figure 44: Macular degeneration results in loss of the visual field's central region.

Figure 45: Demonstration of the glare sensitivity symptom simulation: compared to the
normal vision case (left image), areas near bright colours and lights have lost their
contours (right image).

The vision impairment simulation is based on image filters. The filters are applied to the input frame stored in the Vision Model's buffer, and many filters can be combined in order to simulate a set of symptoms. Both global and per-pixel filters have been developed for this purpose. Global filters are used for the generation of the glaucoma, macular degeneration and glare sensitivity symptoms, while the per-pixel filters are used for the simulation of protanopia, deuteranopia, tritanopia and their less severe variations, i.e. protanomaly, deuteranomaly and tritanomaly. The Vision Model filtering is performed using OpenCV functions.
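As an illustration of a per-pixel filter, a dichromacy simulation can be expressed as a single 3x3 matrix applied to every RGB pixel. The protanopia coefficients below are a commonly quoted approximation and are shown only as an example; they are not necessarily the matrices used by the Vision Model.

```python
import numpy as np

# One commonly quoted 3x3 approximation for simulating protanopia in linear RGB
# (illustrative values; the platform's own filter coefficients may differ).
PROTANOPIA = np.array([[ 0.152286, 1.052583, -0.204868],
                       [ 0.114503, 0.786281,  0.099216],
                       [-0.003882, -0.048116, 1.051998]])

def simulate_dichromacy(image_rgb, matrix=PROTANOPIA):
    """Apply a per-pixel linear colour transform to an HxWx3 float RGB image in [0, 1]."""
    flat = image_rgb.reshape(-1, 3) @ matrix.T
    return np.clip(flat, 0.0, 1.0).reshape(image_rgb.shape)

# toy usage: a pure red patch loses most of its red component and becomes dark and desaturated
patch = np.zeros((2, 2, 3))
patch[..., 0] = 1.0
print(simulate_dichromacy(patch)[0, 0])
```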

3.5 Hearing Simulation


Hearing simulation is based on the analysis of the virtual user model's audiogram
parameters. An audiogram [28] is a standard way of representing a person's hearing
loss. Initially, the audiogram parameters, stored inside the virtual user model
specification, are passed to the hearing model. Then, the hearing model constructs the audiogram of the avatar. Using the audiogram, the hearing model generates the audio filter and applies it to all input sounds.
Using the above method, several hearing impairment symptoms can be simulated, such as otitis [29], otosclerosis [30], noise-induced hearing loss, presbycusis [31], etc. Audiograms for these impairments are presented in Figures 46 to 53.
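A hedged sketch of such an audiogram-driven filter is given below: the hearing-loss values (in dB) at the standard audiogram frequencies are interpolated across the spectrum of the input sound and applied as attenuation. The buffer layout and the frequency set are assumptions.

```python
import numpy as np

AUDIOGRAM_FREQS = np.array([125, 250, 500, 1000, 2000, 4000, 8000], float)  # Hz

def apply_audiogram(signal, sample_rate, loss_db):
    """Attenuate each frequency bin of 'signal' by the interpolated hearing loss (dB)."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    loss = np.interp(freqs, AUDIOGRAM_FREQS, loss_db)       # dB of loss per frequency bin
    gain = 10.0 ** (-loss / 20.0)                           # dB -> linear attenuation
    return np.fft.irfft(spectrum * gain, n=len(signal))

# toy usage: a presbycusis-like sloping loss applied to a 1 kHz + 6 kHz tone mix
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 1000 * t) + np.sin(2 * np.pi * 6000 * t)
loss = [5, 5, 10, 15, 30, 50, 70]                           # dB of loss per audiogram frequency
filtered = apply_audiogram(tone, sr, loss)
print(np.max(np.abs(filtered)))
```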

Figure 46: Otitis, only middle frequency bands are retained.
Figure 47: Otosclerosis, hearing loss of low frequency bands.
Figure 48: Presbycusis, mild hearing loss.
Figure 49: Presbycusis, moderate hearing loss.


Figure 50: Noise-induced hearing loss of high frequency bands (mild case).
Figure 51: Noise-induced hearing loss of high frequency bands (severe case).
Figure 52: Profound hearing loss with residual low frequency hearing.
Figure 53: Hearing loss of low frequency bands, rising configuration.

3.6 Cognitive/Mental Simulation


Cognitive simulation is performed by the cognition module. In its current state, the cognition module cooperates with the Motion Manager and applies delays to the avatar motions, based on the avatar's cognitive parameters. There are two kinds of time delay that are used to emulate the avatar's thinking process:
• Pre-action delay: this delay is applied before the avatar's action. It simulates the thinking process of the task. Complex tasks result in greater durations than simple ones. The complexity of a task is given by the path of the end effector: long curved paths that avoid many obstacles increase the pre-action delay.


• In-action delay factor: the in-action delay factor is applied while the avatar is in motion. The target time is increased by this factor, and the whole process takes longer. This is achieved by increasing the interpolation duration between the key postures.
By applying the pre-action and in-action delay factors, the final task time is given by Eq. 49:

$t_{task} = t_{pre} + f_{in}\cdot t_{target},\quad f_{in} \geq 1.0$ (49)

where $t_{task}$ is the resulting duration of the action, $t_{pre}$ is the pre-action delay, $f_{in}$ is the in-action delay factor, and $t_{target}$ is the target time duration. Both $t_{pre}$ and $f_{in}$ are given as pre-computed parameters to the cognition module.

4 Scene Objects Simulation


The simulation of the 3D scene and its objects is presented in this section, along with simple object manipulation and more complex object simulation aspects.

4.1 Static and moveable objects


The virtual environment of the core simulation platform is represented by a series of
objects. The coordination and manipulation of these objects is performed by the Scene
Module. There are two kinds of objects:
• Static objects: objects that cannot be moved, like walls, floors, etc. This category also includes objects that are moveable in real life but are better kept motionless in the virtual world for stability reasons, like a car chassis, desk tables, etc. Static objects are defined only by their geometry and their position (and orientation) in the virtual space. Static objects do not have any mass properties, and therefore any force or torque applied to them is negated.
• Moveable objects: represent all the other objects that the avatar can interact with and apply forces/torques to in order to move them. Unlike static objects, they have mass and inertia properties. Kinematic properties, like velocity and acceleration, define the current state of this kind of object. Dynamic object properties, like net force and net torque, can be retrieved at any time by the Scene Module.
Both moveable and static objects have attributes that define their position, orientation, visual representation and collision mesh. Here it must be mentioned that either the collision mesh or the visual representation mesh can be omitted in order to make simulation sessions faster. In particular, omitting the visual representation meshes greatly decreases the duration of simulation sessions that use high-polygon environments.
In level-2 simulations, collisions between static objects have no meaning, because they are massless and thus will not result in any object acceleration. However, collisions between moveable objects or between a moveable and a static object are permitted and generate forces. In the latter case the generated forces are applied only to the moveable object.


In most cases a simulated object represents only a part of its real-life counterpart. Thus, several objects must be defined in order to create the virtual representation of a real-life object. This is achieved by connecting several primitive objects and creating "chains" of objects, which are called "DoF-Chains". The DoF-Chains are analysed in Subsection 4.3.

4.2 Object Points of Interest (PoI)


Modelling the interaction of the avatar with the virtual environment is not an easy task. Even simple actions, like reaching an object, have to be defined properly, otherwise a series of ambiguities may arise, like "where exactly does the hand have to touch the object?". For such interaction purposes, special points, the Points of Interest (or PoI), need to be set manually on some objects. A PoI declares the location of the interaction, and PoI elements can be used as guides for various avatar actions.
The IK Manager and the Grasp Module make use of object PoI elements in order to coordinate the avatar during its interactions. For example, a PoI must be defined for a grasp task in order to guide the Grasp Module to compute the avatar's arm posture (Figure 36). Moreover, the LookAt Module uses PoI elements in order to coordinate the head and eye gaze properly towards the task's target.
The PoI elements are set manually via the Veritas Interaction Adaptor. A PoI is defined by its position relative to its parent object; each transformation of the parent object is also applied to the PoI. An object may have more than one PoI element in order to allow special avatar interaction with it. For example, in a reach task, an arm ik-chain with one end-effector (on the avatar's palm) needs only one object PoI. However, the computed avatar configurations will result in random end-effector orientations (Figure 21). In order to constrain the palm's orientation, an ik-chain with two or three end-effectors and the respective number of object PoI must be used (Figures 22, 23). For such purposes, more than one PoI can be grouped to create PoI-sets, making it easier for the IK Manager to guide its ik-chains, because the ik-chain's end-effectors are matched directly to the PoI-set points.

4.3 Object DoF-Chains


There are tasks where the avatar needs to interact with a complex object that has more than one moving part. Such an object cannot be simulated by a simple moveable object; a series of moveable objects must be connected in order to allow such complex functionality. This series results in a chain of objects. Each object of the chain may be connected to the next one, permitting motion in one or more degrees of freedom (DoF). This chain is called a "DoF-Chain" and is used for simulating complex virtual objects.
The simplest DoF-Chain consists of two object elements connected by a series of DoF elements (Figure 54). Proper definition of a DoF-Chain requires the following:
• The chain elements follow a specific order. Each chain element must have one parent; the only parent-less element is the last chain element.
• The first chain element, i.e. the start element, must always be a moveable object.
• The last chain element, i.e. the end element, can be either a moveable or a static object.
• At least one DoF element must be declared. However, there is no restriction on the maximum number of DoF elements that can be used. It must be noted, though, that more than three degrees of freedom may lead to unpredictable system behaviour.

Figure 54: A simple DoF-Chain consists of two elements: the start element (elem. 1), which must always be a moveable object, and the end element (elem. n), which can be either moveable or static. Several degree-of-freedom elements (elem. 2 to n-1) can be used in between to connect the two objects. Each degree of freedom can be either rotational or translational.

There are two kinds of DoF elements that can be used for connecting the two objects: rotational DoF and translational DoF. A rotational DoF allows the objects to rotate around a specific axis, while a translational DoF allows them to move in a specific direction. Several rotational and translational DoF elements can be combined in order to allow complex movements.
Each DoF element, depending on its nature (rotational or translational) has a number
of special attributes. A translational DoF has the following attributes:
• axis of translation: defined by a 3d vector. The translation between the two
objects is parallel to this axis.
• range of motion: defined by the minimum and maximum translation between the
two objects. The two objects are allowed to move only inside these limits.
• spring attributes: defined by a desired relative linear velocity between the
objects and the maximum force that can be applied in order to achieve it. The
spring attributes can be used to emulate springs or motor behaviours. The
spring force and velocity direction is always parallel to the the axis of
translation.
The rotational DoF has the following attributes:
• axis of rotation: defined by a 3d vector an a 3d point.
• range of motion: defined by the minimum and maximum angles of rotation. The
zero angle is the angle that the two objects have in their initial states.

December 2011 77 CERTH/ITI


VERITAS D2.1.1 PU Grant Agreement # 247765

• spring attributes: same as in the translational DoF. However the velocity here is
angular and it is achieved by the application of torque (instead of force).
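The attribute lists above can be summarised in a small data-structure sketch. The following Python fragment is only illustrative; the class names, fields and units are assumptions and not the Scene Module's real interface.

```python
# Illustrative sketch of the two DoF element kinds and their attributes.
from dataclasses import dataclass
from typing import Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class TranslationalDoF:
    axis: Vec3                 # translation is parallel to this axis
    range_min: float           # minimum allowed translation (m)
    range_max: float           # maximum allowed translation (m)
    spring_velocity: float     # desired relative linear velocity (m/s)
    spring_max_force: float    # maximum force applied to reach that velocity (N)

@dataclass
class RotationalDoF:
    axis: Vec3                 # rotation axis direction
    point: Vec3                # a point the axis passes through
    range_min_deg: float       # minimum rotation angle (0 = initial relative pose)
    range_max_deg: float       # maximum rotation angle
    spring_velocity: float     # desired angular velocity (rad/s)
    spring_max_torque: float   # maximum torque applied to reach it (Nm)

# Example: a drawer sliding 0-15 cm along x, and a handle hinge rotating 0-30 degrees.
drawer_dof = TranslationalDoF((1, 0, 0), 0.0, 0.15, 0.0, 0.0)
handle_dof = RotationalDoF((0, 0, 1), (0, 0, 0), 0.0, 30.0, 0.0, 0.0)
```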

Figure 55: Two or more DoF-Chains can be connected in order to allow complex object functionality. The depicted complex object consists of five primitive objects: four moving parts and one static part.

The Scene Module allows the connection of two or more DoF-Chains in order to create complex virtual objects, i.e. objects with more than one moving part (Figure 55). One such example is shown in Figure 56. In this example a car's storage compartment has been created. The compartment consists of three primitive objects: the dashboard (static), the compartment's door (moveable) and its handle (moveable). The objects are connected via two rotational DoF-Chains: one connecting the door to the dashboard and one connecting the handle to the door.
The Scene Module allows the assignment of more than one DoF-Chain to the same pair of objects. In these cases, however, only one DoF-Chain can be active – the others are deactivated automatically by the Scene Module. This activation/deactivation function can be used in order to simulate objects that change their DoF characteristics.

4.4 Object Configurations


Just like the avatar's configurations, the scene objects can have their own configurations. An object configuration denotes a state of the DoF elements of the object's parent DoF-Chain. Object configurations are stored inside the configuration pool of the DoF-Chains. Several configurations can be defined by the user (via the Interaction Adaptor) in order to define special states of an object.
The object configurations are used by the Task Manager in order to check the state of the current task, or as target object configurations for specific sub-tasks. For example, concerning the storage compartment of Figure 56, three states, i.e. three configurations, can be declared: a) the initial configuration, b) one with the handle fully open and c) one with the door open. These three configurations can be given as input to the Task Manager in order to coordinate the avatar to open the storage compartment.

4.5 Scene Rules


The Scene Module supports a sophisticated rule system that allows the user to fully
configure the functionality of the virtual objects. Several rules can be added to the
scene in order to simulate the functionalities of its objects.
Each scene rule consists of two kinds of elements: its conditions and its results. At each simulation step, the Scene Module checks whether the condition part is satisfied and, if so, activates the result part. Proper definition of a scene rule requires the declaration of at least one condition element and at least one result element. It must be clarified that, when the condition part contains more than one condition element, all of them must be satisfied for the result part to be activated.

Figure 56: A car's storage compartment example, shown in three states (State A, State B, State C). The compartment consists of three connected parts: the dashboard (static, red wireframe), the door (moveable, green wireframe) and the door handle (moveable, in black). The red box denotes an object PoI that can be used by the avatar in order to open the door. In State A, the handle is in its initial position and the door is locked. In State B, the handle has been pulled by an external force. If the handle is rotated far enough, the door unlocks (State C).

There are several types of scene conditions:
• conditions that check the current value of a specific DoF element,
• conditions that check if a DoF-Chain is active or not, and
• dummy conditions that are always true, used for initialisation purposes.
Each of the above types needs some extra data parameters to be defined. The assignment of these parameters is performed in the Interaction Adaptor. The supported scene rule condition types, along with their data and parameters, are presented in Table 8.
• DofValueBiggerThanParam1, DofValueSmallerThanParam1, DofValueBiggerOrEqualToParam1, DofValueSmallerOrEqualToParam1 (Param. A: DoF-Chain, Param. B: DoF-index4, Param. 1: specific angle or distance): compare the DoF element's current value to Param. 1.
• DofChainIsActive, DofChainIsNotActive (Param. A: DoF-Chain): check if the DoF-Chain is active or not.
• AlwaysTrue (no parameters): dummy condition, used at the initialisation of the scene.

Table 8: The supported scene rule conditions. Each condition element, besides its type, may need some extra data parameters to be defined (Param. A/B/1).

Concerning the scene rule result elements, each can belong to one of the following categories:
• results that activate or deactivate DoF-Chains,
• results that enable or disable the collision between objects,
• results that change the attributes of the DoF-Chains.
The supported result elements along with their parameters are displayed in Table 9.
• DofChainActivate, DofChainDeactivate (Param. A: DoF-Chain): enable or disable the DoF-Chain.
• MoveableCannotCollideWithMoveable (Param. A: moveable object, Param. B: moveable object) and MoveableCannotCollideWithStatic (Param. A: moveable object, Param. B: static object): prevent collision between the two elements; useful for objects which have too coarse geometry.
• DofSetParam_rangeMin (Param. 1: new range minimum), DofSetParam_rangeMax (Param. 1: new range maximum), DofSetParam_fResist (Param. 1: new resistance force/torque), DofSetParam_velocSpring (Param. 1: new desired velocity), all with Param. A: DoF-Chain and Param. B: DoF-index: change the attribute of the specified DoF element to the one declared in Param. 1.

Table 9: Scene rule result elements and their respective parameters.

4 The DoF-index denotes the identifier of a specific DoF element inside its parent chain.

Using the aforementioned scene rule conditions and results, special object functionalities, like that of the storage compartment in Figure 56, can be defined. In this example, a scene rule needs to be created with one condition that checks the state of the “handle → door” DoF-Chain. If the rotational DoF exceeds the predefined angle parameter, then the “dashboard → door” DoF-Chain increases its maximum angle limit in order to let the door open.
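As an illustration, the following Python sketch expresses such a rule with the condition and result types of Tables 8 and 9, together with a minimal evaluation step. The dictionary layout, the threshold values and the callback signatures are assumptions made for this example, not the Scene Module's actual format.

```python
# Hedged sketch of the storage-compartment rule described above.
door_unlock_rule = {
    "conditions": [
        # true once the handle has rotated past 25 degrees (threshold is assumed)
        {"type": "DofValueBiggerThanParam1",
         "dof_chain": "handle_to_door", "dof_index": 0, "param1": 25.0},
    ],
    "results": [
        # widen the door hinge limit so the door can swing open (90 deg is assumed)
        {"type": "DofSetParam_rangeMax",
         "dof_chain": "dashboard_to_door", "dof_index": 0, "param1": 90.0},
    ],
}

def evaluate_scene_rule(rule, dof_value, apply_result):
    """All conditions must hold for the result part to be activated."""
    def holds(cond):
        if cond["type"] == "DofValueBiggerThanParam1":
            return dof_value(cond["dof_chain"], cond["dof_index"]) > cond["param1"]
        if cond["type"] == "AlwaysTrue":
            return True
        return False   # other condition types are omitted in this sketch
    if all(holds(c) for c in rule["conditions"]):
        for res in rule["results"]:
            apply_result(res)
```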

5 Task Simulation
The whole simulation process is supervised by a sophisticated task management system that is described in this section. The task structures, their internal organisation and the overall management are handled by the Task Manager Module, which is analysed in the following sub-sections.

5.1 The Task Tree


The avatar's actions within a simulation scenario are organised into tasks. The Task Manager Module is responsible for coordinating the various tasks and provides feedback to the Simulation Module if something goes wrong and a task has failed.
Initially, the Task Manager reads the scenario model file that has been generated by the Interaction Adaptor and creates the various tasks that need to be performed by the avatar. The tasks are organised in a tree structure, the task tree. In the task tree, every task is a tree node and may have one or more children. The tree root is a complex task that represents the whole scenario process. If the root task fails, then the Task Manager reports failure to the user. By default, if a child task fails, then its parent fails too, and this propagates upwards until the root task fails.
The Task Manager advances to the next sibling task node only if the current task's children (if any) are successful.

Figure 57: Task tree example. The root task, Task A, is analysed into three subtasks. If any of them fails, then the whole scenario fails. The sequence followed here is Task A → Task B → Task B.1 → Task B.2 → Task C → Task D → Task D.1 → Task D.2.

An example of a tree structure is shown in Figure 57. In this example, Task A is the root task. It has three children tasks: Tasks B, C and D. Task B is further analysed into two sub-tasks (B.1 and B.2). Task D also has two sub-tasks (D.1 and D.2). The sequence that will be followed is:

1. Task A
2. Task B
3. Task B.1
4. Task B.2
5. Task B: if both B.1 and B.2 were successful, then B is also successful.
6. Task C
7. Task D
8. Task D.1
9. Task D.2
10. Task D: if both D.1 and D.2 were successful, then D is also successful.
11. Task A: if B, C and D were all successful, then Task A is also successful and the scenario has been performed successfully by the avatar.

A task may have four possible states:

• Not started: the task has not been started yet.
• Running: it is the current task.
• Succeeded: the task has finished successfully.
• Failed: the target avatar action could not be achieved.
Initially, all tasks are flagged as “Not started”. Every task that has been activated and has not finished or failed is flagged as “Running”. It must be noted that more than one task can carry the “Running” flag at the same time. For example, a parent task is considered “Running” if any of its children tasks is running. The simulation session is considered successful only if all the tasks are flagged “Succeeded”.
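The ordering and state propagation described above can be sketched as a small depth-first traversal. The structures and function below are simplified assumptions (parent tasks are treated as pure containers) and do not reproduce the Task Manager's real implementation.

```python
# Minimal sketch: a parent succeeds only if all its children succeed, and a
# child failure propagates upwards towards the root.
from dataclasses import dataclass, field
from typing import List

@dataclass
class TaskNode:
    name: str
    children: List["TaskNode"] = field(default_factory=list)
    state: str = "NotStarted"          # NotStarted / Running / Succeeded / Failed

def run_task(task, perform) -> bool:
    """perform(task) runs a leaf task and returns True on success."""
    task.state = "Running"
    if not task.children:
        ok = perform(task)
    else:
        # visit the children in order; stop at the first failure
        ok = all(run_task(child, perform) for child in task.children)
    task.state = "Succeeded" if ok else "Failed"
    return ok

# Example mirroring Figure 57: A -> (B -> B.1, B.2), C, (D -> D.1, D.2).
root = TaskNode("A", [
    TaskNode("B", [TaskNode("B.1"), TaskNode("B.2")]),
    TaskNode("C"),
    TaskNode("D", [TaskNode("D.1"), TaskNode("D.2")]),
])
scenario_ok = run_task(root, perform=lambda t: True)   # dummy leaf executor
```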

5.2 Task Rules


Internally, a task consists of a set of rules. Each rule has a condition part and a result part (just like the scene rules in Section 4.5). The condition part contains a set of conditions that, when satisfied, make the result part active. There are various types of task condition elements and various types of result elements.

5.2.1 Task Rule Conditions


The Task Manager supports various task rule condition types, which are described below:
• “TaskJustStarted”: becomes true if the rule's parent task has just started, i.e. has been called only once.

• “TaskNotStarted”: true, if the rule's parent task has not started yet. The condition is false if the task has already failed or completed successfully.
• “TimeSinceRuleStarted_GreaterThan”: true, if the time since the specified rule
has started is greater than the seconds declared.
• “TimeSinceTaskStarted_GreaterThan”: true, if the time passed since the parent
task has started, is greater than the seconds declared.
• “TimesRuleHasBeenCalled”: true, if the number of times that the specified rule has been called is equal to the integer declared in the condition. This type is very useful when a rule needs to be called only once (the integer equals zero).
• “TimesRuleHasBeenCalled_GreaterThan”: same as the previous type, but now
the condition becomes true only when the number of times exceeds the
declared threshold.
• “TimesTaskHasBeenCalled_GreaterThan”: same manner as the previous type,
but now it counts the times the parent task has been called.
• “DistanceAverage_EndEffectorsFromPoiSet_GreaterThan”: this condition checks if the average distance of the end effectors of an ik-chain from the specified object PoI-set is greater than the defined value (in meters). The current ik-chain data are applied to the humanoid joints and then the distance from the PoI-set is computed. The rule expects that the ik-effector names and the PoI names are matched 1-1, i.e. they have the same set of names. This type of condition is usually used to declare a task as failed.
• “DistanceAverage_EndEffectorsFromPoiSet_LessThan”: same as previous, but
now the condition becomes true if the distance is lower than the one specified.
This type of condition is usually used to declare a task as successful.
• “RuleStateIs”: checks if the rule's state is equal to the defined state (NotStarted,
Running, Succeeded, Failed).
• “DistanceAll_CurrentAnglesToSpecificPose_LessThan”: checks if the humanoid's current pose is close enough to a predefined one. It checks every angle of the predefined pose and, if the maximum difference is less than a threshold, returns true.
• “DistanceAll_CurrentAnglesToSpecific_GreaterThan”: same as the previous type, but now returns true if the maximum difference is greater than the predefined threshold.
• “OnLoadScenario”: this condition becomes true when the scenario model file is
loaded. In order for this to work correctly, the parent task must be the root task.
• “DofValue_GreaterThan”: true, if the specified degree of freedom of a scene DoF-Chain has a current value greater than the defined threshold (meters or degrees, depending on the nature of the DoF).
• “DofValue_LessThan”: works in the opposite manner of the previous type.
• “HumanoidIsFree”: returns true if the humanoid has no action to perform.
All the above condition types can be used in both kinematic (level-1) and dynamic (level-2) simulations. However, when the level-0 simulation is selected, some of these types cannot be applied. Level-0 simulation tasks are supported by the following condition types:
• “L0_PassAsTrue”: Always true condition. Used for passing to the next task
without making any comparisons.
• “L0_ComparePoseValid”: Compares the joint capabilities of the current user
model to a predefined pose and returns true if the pose is feasible.
• “L0_ComparePoseInvalid”: Compares the joint capabilities of the current user
model to a predefined pose and returns true if the pose is not feasible.

5.2.2 Task Rule Results


As already mentioned, if a task rule is satisfied, a set of actions is performed. These actions are defined in the result elements. A task rule must have at least one result element. There are several result types that can be used for various actions. The result types are:
• “DisplayMessage”: simply displays a message.
• “SetTaskState”: changes the parent task's state. The available options are:
“NotStarted”, “Running”, “Succeeded” and “Failed”.
• “SetRuleState”: changes the state of a specific rule. The possible values for the
new state are the same as above. For task failure only the “SetTaskState” type
must be used. Setting a rule as “failed” does not result in its parent task failure!
• “MoveEndEffectorsToPoiSet”: first, computes the inverse kinematics of an ik-
chain, so that its end effectors are matched to a target PoI-set. Then, it sets the
humanoid in motion to reach the computed body configuration. The preferred
time duration of the move is also declared. Each end effector must be matched
1-1 to each PoI. It is necessary that all effector↔PoI pairs have the same
names.
• “ApplyPoseInstantly”: sets a predefined pose to the humanoid. The pose is
applied instantly. This result can be used in the first task, when setting the initial
pose to the avatar.
• “ApplyPose”: gradually applies a predefined pose to the humanoid over the specified duration.
• “FollowPoiSetPath”: is used when there is a humanoid interaction with a moveable object. Basically, it instructs the ik-chain to follow a specific PoI-set of a specific object's DoF-Chain from its current configuration to a target object configuration. It computes all the intermediate poses and sets the humanoid in motion. The target time duration of the action must also be set. This result type can be used in conjunction with the “AttachBoneToMoveable” type, in order to simulate interaction with the scene's moveable objects, e.g. object pushing, pulling etc.
• “AttachBoneToMoveable”: creates a link that connects a moveable object to a
body bone part. The moveable-to-bone link tries to maintain the relative
translation that the two structures initially have. However, the relative orientation
can be changed, depending on the bone's movement. This type is used to
emulate grab actions.

• “DeAttachBoneFromMoveable”: removes a moveable-to-bone link that has already been created using the previous type.
• “DeAttachAllMoveables”: removes all moveable-to-bone links.
• “ReportFailure_CannotReach”: used to report a task failure message, more specifically when the humanoid could not reach an object. It also reports the final distance to the target.
• “ReportFailure_CannotPull”: it is used to report a pull action failure message.
The distance between the moveable's current configuration and its predefined
target configuration is also reported.
• “ReportFailure_CannotPush”: same as previous, but for push actions.

5.2.3 A Simple Task Example


In Figure 58, a simple abstract task structure is shown, where the avatar must reach the handle of a storage compartment. It contains three rules. Rule 1 is the “action rule” and contains the definition of the action that needs to be performed by the avatar. The other two rules are “check” rules: they check whether the action can be performed or not. Rule 2 results in success and Rule 3 in failure. A task must contain at least one success-rule5 and at least one fail-rule, otherwise the task is not defined properly and the Task Manager reports an error to the user.
Task Rule 1 checks if the rule has already been called and, if not, the result part is activated. The result part computes, via the IK Manager, the configuration path of the right hand IK-Chain that has the storage compartment's handle as its target. It then passes the result to the Motion Manager.
Task Rules 2 and 3 check if dt seconds have passed since the task started and then check the hand's distance to the target. If it is less than 1 cm, then Rule 2 flags the task as successful, otherwise Rule 3 flags the task as failed.

5 A rule that contains a “SetTaskState” element setting the state to “Succeeded” is called a “success rule”, while a rule that contains a “SetTaskState” element setting the state to “Failed” is called a “failure rule”.


Task: Reach storage compartment's handle with right hand

Task Rule 1 (action rule: results in action)
  Condition set:
    a. Number of times "Task Rule 1" called = 0 ("TimesRuleHasBeenCalled")
  Result set:
    a. Move right hand to handle in dt secs max. ("MoveEndEffectorsToPoiSet")

Task Rule 2 (success rule: results in a successful task)
  Condition set:
    a. Time since current task started > dt secs ("TimeSinceTaskStarted_GreaterThan")
    b. Distance of right hand to handle ≤ 0.01 m ("DistanceAverage_EndEffectorsFromPoiSet_LessThan")
  Result set:
    a. Set current task state = succeeded ("SetTaskState")

Task Rule 3 (failure rule: results in a failed task)
  Condition set:
    a. Time since current task started > dt secs ("TimeSinceTaskStarted_GreaterThan")
    b. Distance of right hand to handle > 0.01 m ("DistanceAverage_EndEffectorsFromPoiSet_GreaterThan")
  Result set:
    a. Set current task state = failed ("SetTaskState")

Figure 58: Simple task example. The task contains three rules. One results in action and the other two are used for success and failure checks.
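For illustration, the following Python sketch shows how rules such as those of Figure 58 could be represented as data and evaluated once per simulation step. All names, the dictionary layout and the callback signatures are assumptions made for this sketch, not the Task Manager's actual format.

```python
dt = 2.0   # assumed target duration in seconds (the figure only calls it "dt")

# The example task of Figure 58, written as data (layout is an assumption).
reach_handle_rules = [
    {"conditions": [{"type": "TimesRuleHasBeenCalled", "rule": 1, "value": 0}],
     "results":    [{"type": "MoveEndEffectorsToPoiSet",
                     "ik_chain": "right_arm", "poi_set": "handle_grip", "duration": dt}]},
    {"conditions": [{"type": "TimeSinceTaskStarted_GreaterThan", "seconds": dt},
                    {"type": "DistanceAverage_EndEffectorsFromPoiSet_LessThan", "meters": 0.01}],
     "results":    [{"type": "SetTaskState", "state": "Succeeded"}]},
    {"conditions": [{"type": "TimeSinceTaskStarted_GreaterThan", "seconds": dt},
                    {"type": "DistanceAverage_EndEffectorsFromPoiSet_GreaterThan", "meters": 0.01}],
     "results":    [{"type": "SetTaskState", "state": "Failed"}]},
]

def evaluate_task_rules(task, rules, check_condition, apply_result):
    # Called once per simulation step while the task is Running: a rule's
    # results fire only when every one of its conditions holds.
    for rule in rules:
        if all(check_condition(task, cond) for cond in rule["conditions"]):
            for result in rule["results"]:
                apply_result(task, result)
```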


5.2.4 Task Parallelism


The task example presented in 5.2.3, with one action-rule, one fail-rule and one success-rule, has the minimum set of rules for a proper task definition. More complex tasks can be defined, containing more than one action rule and their respective success/failure counterparts.
Two or more actions per task result in task-action parallelism. The Task Manager is constructed in such a way that it fully supports parallel task actions. The Motion Manager, in cooperation with the rest of the Humanoid Modules, can manage various avatar motions at the same time, e.g. the coordination of both arms, via the IK Manager, towards different targets.

5.2.5 Task Duration


Every task action that takes time needs a user-declared duration. The fact that the user inputs a target time for an action does not necessarily mean that this action will be performed in exactly the declared period of time. Especially in dynamic sessions, the limited torque-generation capabilities of the avatar may lead to slower movement than desired. This is acceptable, because the target time is just a target and the main interest remains whether or not the task is feasible.
In order to cope with the above problem, the Motion Manager supports a “retry” system for primitive motions that fail to reach the target posture in the predefined time. The retry system checks whether the moving bones have reached the desired configuration and generates a new series of configurations if needed. This can be repeated as many times as necessary until the posture is reached, but a retry threshold must be set at some point.
The more retries an action needs, the greater the deviation from the target time. Thus, in most cases, the target time is just an abstract way to declare how fast the task is going to be performed. Additionally, the extra delays from the Cognition Module will result in further deviations from the target time.
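A minimal sketch of such a retry loop is given below; the function names and the retry threshold are assumptions for illustration only.

```python
# Sketch of the retry behaviour: if the target posture is not reached within
# the target time, new configuration series are generated until a retry
# threshold is hit.
def perform_with_retries(generate_motion, posture_reached, max_retries=3):
    """Returns True if the desired posture was eventually reached."""
    for attempt in range(1 + max_retries):
        generate_motion()              # plan and play one series of configurations
        if posture_reached():          # compare the moving bones to the desired configuration
            return True
    return False                       # report failure after exhausting the retries
```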

6 Accessibility Assessment
This section contains information about the metrics and the human factors that are used by the Core Simulation Platform in order to perform the product's accessibility assessment. Two example cases are also presented: one regarding the automotive area and one concerning the workplace area.

6.1 Metrics and Human Factors


The Core Simulation Platform is able to measure various statistics. In this subsection the nature of these metrics is analysed. There are three categories:
• Generic metrics: general feedback about the simulation session, like its duration and the success/failure result.
• Physical factors: physics-based metrics, like body torque distribution, angular impulse and energy consumption. The physical factors require level-2 simulations.
• Anthropometry factors: measure how comfortable the avatar postures are during the simulation.
The physical and anthropometry factors are, in fact, human factors. Human factors can provide important accessibility assessment criteria to the end user. Moreover, the human factors go one step further and can be used to evaluate the ergonomics of the virtual products.

6.1.1 Generic Metrics


The generic metrics provide general information about the simulated task.

6.1.1.1 Success-Failure
The task result, i.e. success or failure, is the minimum information required for the accessibility assessment of the tested product. Additional information comes with a failure result, e.g. the distance of the hand to the target when a reach task has failed.

6.1.1.2 Task Duration


The task duration counts the time needed for performing the task actions. At the creation step of the scenario model file, using the Interaction Adaptor, the user must insert a target duration for every scenario action. However, as described in 5.2.5, this target duration is only used as a guide and is not the final, actual task duration. The final task duration is based on the capabilities of the current avatar.
The task duration feedback increases the assessment quality of succeeded tasks among different virtual user models, as two success results with different task durations cannot be considered equal.

6.1.2 Physical Factors


Three metrics from the physics domain are used as physical factors: torque, angular
impulse and energy consumption.

6.1.2.1 Torque
When a body part is moved, or more precisely rotated, torques are generated by the musculoskeletal system. These torques, which are 3-dimensional vectors, can easily be measured at each humanoid joint. The joint's net torque is given by:

\vec{\tau}_{net} = I \vec{\alpha}    (50)

where I is the moment of inertia and \vec{\alpha} is the angular acceleration of the rotated body part. This means that when a user motor task needs to be completed very fast, it requires higher body accelerations, which result in higher torques. Besides this, if the body part is interacting with an object, extra torque is needed in order to cope with the object's added moment of inertia. Thus, this \vec{\tau}_{net} could be used as a factor measuring strength.
However, there are some things to be reconsidered: every body joint generates forces, besides torques, which try to keep its body parts connected. Higher forces mean that the body is stressed or pressured, which results in discomfort. So, it is wise to take these forces into account too. These forces can be converted into torques:

\vec{\tau}_{force} = \vec{r} \times \vec{F}    (51)

where \vec{r} is the vector from the joint's centre to the body part's centre of mass and \times denotes the cross product. Thus, the final joint torque that is generated at a time instant is now given by:

\vec{\tau}(t) = \vec{\tau}_{net}(t) + \vec{\tau}_{force}(t) = I \vec{\alpha}(t) + \vec{r}(t) \times \vec{F}(t)    (52)
By computing each joint torque and averaging it through time, a measure of “how strength demanding a human task is” can be constructed. This mean torque factor is given by:

\tau_{mean} = \frac{1}{n} \sum_{t=1}^{n} |\vec{\tau}(t)|    (53)

where n is the number of time frames passed since the task's start, and \vec{\tau}(t) is the torque generated by the joint at time t. At this point, it must be mentioned that all simulation time steps are considered equal.

6.1.2.2 Angular Impulse


By taking into consideration only the mean torque factor, a metric of fatigue cannot be established correctly. Fatigue contains a sense of time duration that is not included in the definition of torque. Thus, by computing the time duration of a task and combining it with \tau_{mean}, a more sophisticated metric is produced. This metric is given as the product of the mean joint torque and the total duration of the task:

J_{ang} = \tau_{mean} \, dt_{task}    (54)

where dt_{task} is the task duration. This metric is the angular impulse that is generated at each joint. The angular impulse enhances the torque measure of “how strength demanding a human task is” to “how strength demanding a human task is and how long it lasts” and will be used as a fatigue measure.
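A small numerical sketch of Eqs. 50 to 54 is given below. The variable names and the example values are assumptions; the torque is computed per joint and per frame, averaged over the task and multiplied by the task duration to obtain the angular impulse.

```python
import numpy as np

def joint_torque(I, alpha, r, F):
    # Eq. 52: tau(t) = I*alpha(t) + r(t) x F(t); I is treated as a scalar
    # moment of inertia here, as in Eq. 50.
    return I * np.asarray(alpha, dtype=float) + np.cross(r, F)

def mean_torque(torques):
    # Eq. 53: average magnitude of the joint torque over the recorded frames.
    return float(np.mean([np.linalg.norm(t) for t in torques]))

def angular_impulse(tau_mean, dt_task):
    # Eq. 54: J_ang = tau_mean * dt_task.
    return tau_mean * dt_task

# Example with made-up values for one frame of an elbow-like joint.
tau = joint_torque(I=0.02,                       # kg*m^2 (assumed)
                   alpha=[1.0, 0.0, 0.0],        # rad/s^2
                   r=[0.15, 0.0, 0.0],           # m, joint centre to centre of mass
                   F=[0.0, -9.81, 0.0])          # N
J = angular_impulse(mean_torque([tau]), dt_task=2.0)   # Nm*s
```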

6.1.2.3 Energy consumption


Another aspect that can be considered when evaluating ergonomics is the energy consumption. Two types of energy can be measured by the Core Simulation Platform: kinetic and potential energy. The kinetic energy produced by a joint in a very small time duration dt at time step t is given by:

E_{kinetic}(t) = \frac{1}{2} I \left( \omega^2(t) - \omega^2(t-1) \right)    (55)

E_{kinetic}(t) \simeq \frac{1}{2} \tau(t) \left( \omega(t) + \omega(t-1) \right) dt    (56)

where E_{kinetic}(t) is the kinetic energy produced at time t, \omega(t) is the angular velocity of the rotated body part and dt is the size of the simulation step. Eq. 56 is an approximation that stems from Eq. 55 assuming that dt \to 0. The formula of Eq. 56 is easier to compute because of the absence of the factor I.


The potential energy is given by the equation:

E_{potential}(t) = m g \, dz = m g \left( z(t) - z(t-1) \right)    (57)

where m is the body part mass, g is the gravitational acceleration and dz is the translation of the rigid body's centre of mass in the vertical direction during this time step.
By adding Eq. 56 to Eq. 57 and accumulating over time:

E_{total} \simeq \sum_{t=1}^{n} \left( \frac{1}{2} \tau(t) \left( \omega(t) + \omega(t-1) \right) dt + m g \left( z(t) - z(t-1) \right) \right)    (58)

an estimation of the total chemical energy consumed by the joint and its body parts can be made.
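The accumulation of Eqs. 56 to 58 can be sketched as follows; the function name, argument layout and example values are assumptions made for illustration.

```python
def total_energy(tau, omega, z, m, dt, g=9.81):
    # tau, omega, z are per-frame sequences of equal length (frame 0 is the
    # initial state); the loop accumulates Eq. 56 + Eq. 57 over the later frames.
    E = 0.0
    for t in range(1, len(tau)):
        e_kinetic = 0.5 * tau[t] * (omega[t] + omega[t - 1]) * dt   # Eq. 56
        e_potential = m * g * (z[t] - z[t - 1])                     # Eq. 57
        E += e_kinetic + e_potential                                # Eq. 58
    return E

# Example with a tiny made-up trajectory; dt = 0.005 s as in the experiments of 6.2.
E = total_energy(tau=[0.0, 0.4, 0.5], omega=[0.0, 0.2, 0.3],
                 z=[1.000, 1.001, 1.002], m=1.5, dt=0.005)
```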

6.1.3 Anthropometry Factors


Each body joint can have one to three rotational degrees of freedom (DoF). Each DoF has a specific range of motion (RoM), whose minimum and maximum values change according to the properties of the active user model. In general, angles near the extremities of the RoM are considered to produce discomfort, while values near the middle of the RoM result in more comfortable postures. This assumption led to the introduction of two “comfort” factors: a) a RoM comfort factor and b) a RoM & Torque factor [32], which are explained in the following paragraphs.

6.1.3.1 RoM Comfort Factor


A possible comfort metric for a specific degree of freedom (DoF) of a joint would be the distance of its current angle to the nearest limit (minimum or maximum) of its total range:

C_d(t) = \min \left( |\theta_{d,max} - \theta_d(t)|, \, |\theta_{d,min} - \theta_d(t)| \right)    (59)

where d is the id of the DoF, and \theta_{d,max}, \theta_{d,min}, \theta_d(t) are the maximum, minimum and current angle of this DoF respectively. In order to compute a comfort factor per joint, a simple addition of its respective DoF comfort values is proposed:

C(t) \leftarrow \sum_{d=1}^{m} C_d(t)    (60)

where m is the total number of the joint's DoF. The C(t) factor can be normalized to the interval [0,1] by using the following equation:

C(t) = \frac{2 \sum_{d=1}^{m} C_d(t)}{\sum_{d=1}^{m} (\theta_{d,max} - \theta_{d,min})}    (61)

Values near unity are considered more “comfortable” than values near zero. The C(t) factor changes only when the joint rotates. By taking the average of these C(t) values over time, the “RoM Comfort Factor” can be derived:

C = \frac{1}{n} \sum_{t=1}^{n} C(t)    (62)

By computing Eqs. 61 and 62, an estimation of the humanoid's overall posture comfort can be given for a specific task.
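A small sketch of Eqs. 59, 61 and 62 follows; the function names and the example joint limits are assumptions for illustration.

```python
def rom_comfort_frame(angles, limits):
    # angles: current angle per DoF of one joint; limits: (min, max) per DoF.
    # Eqs. 59-61: distance to the nearest RoM limit, summed over the joint's
    # DoF and normalized to [0, 1].
    c_sum = sum(min(abs(hi - a), abs(lo - a)) for a, (lo, hi) in zip(angles, limits))
    rom_sum = sum(hi - lo for lo, hi in limits)
    return 2.0 * c_sum / rom_sum

def rom_comfort_factor(frames, limits):
    # Eq. 62: average of C(t) over all recorded frames of the task.
    return sum(rom_comfort_frame(f, limits) for f in frames) / len(frames)

# Example: a single-DoF joint with a 0-150 degree RoM, held near mid-range and
# then near its limit; the factor drops accordingly.
limits = [(0.0, 150.0)]
C = rom_comfort_factor(frames=[[75.0], [140.0]], limits=limits)   # about 0.57
```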

6.1.3.2 RoM-Torque Comfort Factor


The RoM Comfort Factor, introduced in 6.1.3.1, can be enriched by including the magnitude of the joint's generated torque. This results in a factor that includes the joint's dynamic properties besides the kinematics and takes into consideration the fact that a torque in an uncomfortable posture does not have the same fatigue impact as the same torque in a more comfortable posture. An initial proposal for this factor would be:

C_\tau(t) \leftarrow \frac{C(t)}{|\vec{\tau}(t)|}    (63)

where C(t) is given by Eq. 61 and \vec{\tau}(t) by Eq. 52. According to this proposed formula, higher values would be considered more “comfortable” than values near zero.
However, this formula has the irregularity that it tends to infinity when |\vec{\tau}(t)| \to 0. Thus, it needs to be replaced by a more sophisticated one, which takes into account the whole task duration (and not just a single time frame):

C_\tau = 1 - \frac{1}{n} \sum_{t=1}^{n} \frac{ \left( 1 - C(t) \right) |\vec{\tau}(t)| }{ |\vec{\tau}_{max}| }    (64)

where \vec{\tau}_{max} is the joint torque with the maximum magnitude that has been recorded in the task. It is worth mentioning that the factor C_\tau is already normalized to the interval [0,1]. The C_\tau will be referred to as the “RoM-Torque” comfort factor and will be used to evaluate the overall body posture comfort considering the dynamic demands of the task.
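Finally, Eq. 64 can be sketched as below, reusing the rom_comfort_frame() helper of the previous sketch; again, the names are assumptions and not part of the platform.

```python
def rom_torque_comfort(frames, limits, torque_magnitudes):
    # Eq. 64: discomfort (1 - C(t)) is weighted by the relative torque
    # magnitude and subtracted from 1; the result stays in [0, 1].
    tau_max = max(torque_magnitudes) or 1.0      # guard against division by zero
    n = len(frames)
    penalty = sum((1.0 - rom_comfort_frame(f, limits)) * tau / tau_max
                  for f, tau in zip(frames, torque_magnitudes)) / n
    return 1.0 - penalty

C_tau = rom_torque_comfort(frames=[[75.0], [140.0]], limits=[(0.0, 150.0)],
                           torque_magnitudes=[0.2, 1.0])
```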

6.2 Experiments
The factors presented in 6.1 were applied to two different realistic scenarios: one coming from the automotive area and one concerning a typical workplace. In each scenario two different designs were tested. A common car interior was evaluated in the first scenario, while a common workplace design was examined in the second.
In each scenario, the human factors were computed for five different virtual user
models:
• fully capable user: normal values of RoM in all body joints, full strength
capabilities.
• elderly user (aged 60~84): decreased strength capabilities, mild decrease of RoM in several body joints.
• user with spinal cord injury: heavy decrease of shoulder's RoM.
• user with rheumatoid arthritis: severe decrease of shoulder's RoM.

• user with hemiparesis: decreased flexion in elbow and shoulder joints.


The proposed factors were recorded at six body locations: lower torso, middle torso, upper torso, shoulder, elbow and wrist. The reference arm was the right one for the automotive scenario and the left one for the workplace scenario. In all cases the simulation time step was set to 0.005 sec.

6.2.1 Automotive
The automotive scenario concerned the design of the storage compartment in a common car interior. The scenario contained three subtasks: a) locate the storage compartment, b) reach it with the right hand's index finger and c) open it. The accessibility of two different storage compartments was evaluated. The first compartment tested had a handle, which had to be pulled (Figure 59). The second did not include any handle and, in order to open it, the user simply had to push it (Figure 60). The compartment's resistance torque was the same in both cases, around 5·10⁻⁴ Nm.

Figure 59: Automotive scenario, testing a storage compartment with a handle. The compartment's door opens when the handle is rotated/pulled far enough. The green lines indicate the avatar's line of sight and the yellow arrow indicates the current subtask's target.


Figure 60: Automotive scenario, testing a storage compartment that can be opened by
pushing its door.

All users, except the one with rheumatoid arthritis, were capable of successfully performing all the tasks. The user with rheumatoid arthritis, due to his/her heavy restriction of the shoulder RoM, could not reach the compartment. Therefore, this user's seated location was moved slightly towards the storage compartment, until it was within his/her reach.
Each subject was given a target time of 2 sec to complete each task. However, this amount of time could change, depending on the different user model capabilities and the loaded scene setup. Each test was repeated ten times in order to mitigate the randomness of the inverse kinematics and motion planning. For the motion planning the RRT method (3.3.1.4) was used.
The results are shown in Figures 61 to 66 and indicate that the design of the
compartment with the pushable door is more accessible than the one with the handle. A
detailed discussion about the results can be found in [32].


Figure 61: Summed time durations for all the automotive task repetitions
(in sec). Each duration refers to ten repetitions of the same task sequence.

Figure 62: The mean torques (Eq.53) at six body locations for the automotive scenario.
All values are in Nm.


Figure 63: The automotive task's angular impulses (Eq. 54). Values in Nm·sec. It is worth mentioning that the inclusion of the time component into the torque discriminates between the two designs more clearly.

Figure 64: The total estimated energy consumed (Eq. 58) by each body region for the automotive scenario. Values in Joules.


Figure 65: The RoM Comfort factor (Eq. 62) distributions for the automotive tasks. Higher values indicate more comfortable situations for the examined body region.

Figure 66: The RoM-Torque Comfort factor (Eq. 64) distributions for the automotive tasks. Values near unity indicate more comfortable and less torque-demanding body postures.


6.2.2 Workplace
In the workplace scenario, an office desk was the evaluated product. Two different designs were tested: one having its drawer above the desk's table and one having it below (Figure 67). The task sequence contained three subtasks: a) reach the drawer, b) grab its handle and c) pull it. The users' left hand was used. The task was considered successful when the drawer was opened at least 15 cm. The drawer weight was set to 300 g and its resistance force was very small, at 0.01 N.

Figure 67: Workplace scenario, two different office designs: one having the drawer
above its table (left image), and one having it below (right image).

Figure 68: Summed durations for all the workplace task repetitions (in sec). Each duration refers to ten repetitions of the same task sequence.


Each task sequence was repeated ten times in order to mitigate the randomness of the motion planning algorithms. The target time duration given to each user model was 3.5 sec per repetition. All user models were capable of performing all subtasks. The results are shown in Figures 68 to 73. As shown, the drawer below the desk's table has overall proven more accessible than the one above, both in terms of strength and comfort requirements. A more detailed report can be found in [32].

Figure 69: The mean torques (Eq.53) at six body locations for the workplace scenario.
Values are in Nm.


Figure 70: The workplace task's angular impulses (Eq. 54). Values in Nm·sec.

Figure 71: The total estimated consumed energy (Eq.58) by each body region for
performing the workplace tasks. Values in Joules.


Figure 72: The RoM Comfort factor (Eq. 62) distributions for the workplace tasks. Higher values indicate more comfortable situations for the examined body region.

Figure 73: The RoM-Torque comfort factor (Eq. 64) distributions for the workplace tasks. Values near unity indicate more comfortable and less torque-demanding body postures.


6.2.3 Verdict
The experiments indicated that considering only the task durations and the success-failure result cannot discriminate between product designs among different user populations. Accessibility assessment can be improved by measuring special physical and anthropometric factors. The integration of these factors into the VERITAS Core Simulation Platform increases the fidelity of the accessibility results and helps the end user determine the population's response to the designed product.

Future Work
In its current state, the Core Simulation Platform supports a large number of humanoid and environment simulation algorithms. However, some features are still missing and are needed in order to achieve a complete simulation framework. This lack of features is going to be filled by the work that will be implemented as part of the Veritas D2.1.4.
The Core Simulation Platform is able to perform simulations of 3D environments. However, there is also the need to perform accessibility assessment for graphical user interfaces (GUIs), which in most cases are 2D. The Vision Model filters can easily be applied to 2D surfaces and there is no need for changes to the Hearing Model. However, several of the motor-related modules need to be reinforced with the implementation of their 2D counterpart algorithms.
Concerning the cognitive support, it is currently simulated by applying delay factors to the various humanoid actions. This behaviour can be enriched with extra cognition factors related to context awareness aspects.
Regarding the motor simulation, the gait module needs to gain fully dynamic functionality, as it is currently based on the reproduction of gait cycles. Moreover, the implementation of a Sitting Module would provide the necessary feedback for actions related to sitting down and standing up.
Testing the same virtual product with many different Virtual User Models currently requires many user actions in order to load and parse the different user models. Support for automatically loading the various VUMs (in a cascaded manner) would greatly shorten the evaluation process.
Finally, the various session data that are gathered by the Simulation Module must somehow be presented to the user via statistical graphs. This will greatly help the accessibility assessment process of the virtual product.


References
[1] OpenCV, Open Source Computer Vision, http://opencv.willowgarage.com/wiki/
[2] OpenAL, http://connect.creativelabs.com/openal/default.aspx
[3] CogTool, http://cogtool.hcii.cs.cmu.edu/research/research-publications
[4] OGRE, Open Source 3D Graphics Engine, http://www.ogre3d.org/
[5] OpenSceneGraph, Open source high performance 3D graphics toolkit,
http://www.openscenegraph.org/projects/osg
[6] Delta3D, Open Source Game and Simulation Engine, http://www.delta3d.org/
[7] ODE, Open Dynamics Engine, http://www.ode.org/
[8] CAL3D, 3D Character Animation Library, http://gna.org/projects/cal3d/
[9] Marcelo Kallmann, Analytical inverse kinematics with body posture control, 2008
[10] Roy Featherstone, Rigid Body Dynamics Algorithms, Springer, 2008
[11] Bruno Siciliano, Oussama Khatib, Springer Handbook of Robotics, 2008
[12] Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, Clifford Stein,
Introduction to Algorithms, 2009
[13] Steven M. LaValle and James J. Kuffner, Jr., Rapidly-Exploring Random Trees: Progress and Prospects, Algorithmic and Computational Robotics: New Directions, 2000, pp. 293-308.
[14] Harold C. Sun and Dimitris N. Metaxas. 2001. Automating gait generation. In
Proceedings of the 28th annual conference on Computer graphics and interactive
techniques (SIGGRAPH '01). ACM, New York, NY, USA, 261-270.
[15] KangKang Yin, Kevin Loken, and Michiel van de Panne. 2007. SIMBICON: simple
biped locomotion control. ACM Trans. Graph. 26, 3, Article 105 (July 2007).
[16] Xiaoyue Guo, Shibiao Xu, Wujun Che, and Xiaopeng Zhang. 2010. Automatic
motion generation based on path editing from motion capture data. In
Transactions on edutainment IV, Lecture Notes In Computer Science, Vol. 6250.
Springer-Verlag, Berlin, Heidelberg 91-104.
[17] P.E. Hart, N.J. Nilsson, B. Raphael, A Formal Basis for the Heuristic
Determination of Minimum Cost Paths, 1968
[18] Catmull, E., and Rom, R. A class of local interpolating splines. In Computer Aided Geometric Design, R. E. Barnhill and R. F. Riesenfeld, Eds. Academic Press, New York, 1974, pp. 317-326.
[19] Jacquelin Perry, Gait Analysis: Normal and Pathological Function, 1992
[20] Riener R, Rabuffetti M, Frigo C: Stair ascent and descent at different inclinations. Gait & Posture 2002, 15:32-44.
[21] Anastasia Protopapadaki, Wendy I. Drechsler, Mary C. Cramp, Fiona J. Coutts,
Oona M. Scott, Hip, knee, ankle kinematics and kinetics during stair ascent and
descent in healthy young individuals, Clinical Biomechanics, Volume 22, Issue 2,
February 2007, Pages 203-210, ISSN 0268-0033,
10.1016/j.clinbiomech.2006.09.010.


[22] Berenson, D.; Diankov, R.; Nishiwaki, K.; Kagami, S.; Kuffner, J.; , "Grasp
planning in complex scenes," Humanoid Robots, 2007 7th IEEE-RAS
International Conference on , vol., no., pp.42-48, Nov. 29 2007-Dec. 1 2007
[23] Kaiser, Peter K.; Boynton, R.M. (1996). Human Color Vision (2nd ed.).
Washington, DC: Optical Society of America. ISBN 1-55752-461-0.
[24] McIntyre, Donald (2002). Colour Blindness: Causes and Effects. UK: Dalton
Publishing. ISBN 0-9541886-0-8.
[25] Merck Manual Home Edition, "Glaucoma". Merck.com,
http://www.merckmanuals.com/home/eye_disorders/glaucoma/glaucoma.html
[26] Bradley, D T; Zipfel, P F; Hughes, A E (2011). "Complement in age-related
macular degeneration: a focus on function". Eye 25 (6): 683–693.
[27] David A. Quillen, Common Causes of Vision Loss in Elderly Patients, Pennsylvania State University College of Medicine, Hershey, Pennsylvania. Am Fam Physician. 1999 Jul 1; 60 (1): 99-108.
[28] Audiogram, http://en.wikipedia.org/wiki/Audiogram
[29] Neff M.J. (June 2004). "AAP, AAFP, AAO-HNS release guideline on diagnosis and
management of otitis media with effusion". Am Fam Physician 69 (12): 2929–31.
[30] House J.W., Cunningham C.D. III. Otosclerosis. In: Cummings CW, Flint PW,
Haughey BH, et al, eds. Otolaryngology: Head & Neck Surgery. 5th ed.
Philadelphia, Pa: Mosby Elsevier; 2010:chap 144.
[31] D.W. Robinson and G.J. Sutton "Age effect in hearing - a comparative analysis of
published threshold data." Audiology 1979; 18(4): 320-334.
[32] P. Moschonas, N. Kaklanis, and D. Tzovaras. 2011. Novel human factors for
ergonomy evaluation in virtual environments using virtual user models. In
Proceedings of the 10th International Conference on Virtual Reality Continuum
and Its Applications in Industry (VRCAI '11). ACM, New York, NY, USA, 31-40.
