OGRE Manual v1.7 (’Cthugha’)
Steve Streeting
Copyright
© Torus Knot Software Ltd
Permission is granted to make and distribute verbatim copies of this manual provided the copy-
right notice and this permission notice are preserved on all copies.
Permission is granted to copy and distribute modified versions of this manual under the condi-
tions for verbatim copying, provided that the entire resulting derived work is distributed under
the terms of a permission notice identical to this one.
Table of Contents
OGRE Manual . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.1 Object Orientation - more than just a buzzword . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Multi-everything . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
3 Scripts. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
3.1 Material Scripts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
3.1.1 Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
3.1.2 Passes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
3.1.3 Texture Units . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
3.1.4 Declaring Vertex/Geometry/Fragment Programs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
3.1.5 Cg programs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
3.1.6 DirectX9 HLSL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
3.1.7 OpenGL GLSL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
3.1.8 Unified High-level Programs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
3.1.9 Using Vertex/Geometry/Fragment Programs in a Pass . . . . . . . . . . . . . . . . . . . . . . . 69
3.1.10 Vertex Texture Fetch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
3.1.11 Script Inheritence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
3.1.12 Texture Aliases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
3.1.13 Script Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
3.1.14 Script Import Directive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
3.2 Compositor Scripts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
3.2.1 Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
3.2.2 Target Passes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
3.2.3 Compositor Passes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
3.2.4 Applying a Compositor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
3.3 Particle Scripts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
3.3.1 Particle System Attributes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
3.3.2 Particle Emitters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
3.3.3 Particle Emitter Attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
3.3.4 Standard Particle Emitters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
3.3.5 Particle Affectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
3.3.6 Standard Particle Affectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
3.4 Overlay Scripts. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
3.4.1 OverlayElement Attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
3.4.2 Standard OverlayElements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
3.5 Font Definition Scripts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
7 Shadows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
7.1 Stencil Shadows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
7.2 Texture-based Shadows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
7.3 Modulative Shadows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
7.4 Additive Light Masking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
8 Animation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
8.1 Skeletal Animation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
8.2 Animation State . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
8.3 Vertex Animation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
8.3.1 Morph Animation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
8.3.2 Pose Animation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
8.3.3 Combining Skeletal and Vertex Animation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
8.4 SceneNode Animation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
8.5 Numeric Value Animation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
OGRE Manual
Copyright
© The OGRE Team
This work is licenced under the Creative Commons Attribution-ShareAlike 2.5 License. To
view a copy of this licence, visit http://creativecommons.org/licenses/by-sa/2.5/ or send
a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA.
1 Introduction
This chapter is intended to give you an overview of the main components of OGRE and why
they have been put together that way.
1.1 Object Orientation - more than just a buzzword
Nowadays, graphics engines are like any other large software system: they start small, but soon
balloon into monstrously complex beasts which can’t all be understood at once. It’s pretty hard
to manage systems of this size, and even harder to make changes to them reliably, and that’s
important in a field where new techniques and approaches seem to appear every other week.
Designing systems around huge files full of C function calls just doesn’t cut it any more; even if
the whole thing is written by one person (not likely), they will find it hard to locate that elusive
bit of code after a few months, and even harder to work out how it all fits together.
Object orientation is a very popular approach to addressing this complexity. It’s a step up
from decomposing your code into separate functions: it groups function and state data together
in classes designed to represent real concepts. It allows you to hide complexity inside easily
recognised packages with a conceptually simple interface, giving you ’building blocks’ which
you can plug together again later. You can also organise these blocks so that some of them
look the same on the outside but achieve their objectives in very different ways on the inside,
again reducing the complexity for developers because they only have to learn one interface.
I’m not going to teach you OO here, that’s a subject for many other books, but suffice to
say I’d seen enough benefits of OO in business systems that I was surprised most graphics code
seemed to be written in C function style. I was interested to see whether I could apply my
design experience in other types of software to an area which has long held a place in my heart
- 3D graphics engines. Some people I spoke to were of the opinion that using full C++ wouldn’t
be fast enough for a real-time graphics engine, but others (including me) were of the opinion
that, with care, an object-oriented framework can be performant. We were right.
In summary, here are the benefits an object-oriented approach brings to OGRE:
Abstraction
Common interfaces hide the differences between implementations for different 3D
APIs and operating systems
Encapsulation
There is a lot of state management and context-specific actions to be done in a
graphics engine - encapsulation allows me to put the code and data nearest to
where it is used which makes the code cleaner and easier to understand, and more
reliable because duplication is avoided
Polymorphism
The behaviour of methods changes depending on the type of object you are using,
even if you only learn one interface, e.g. a class specialised for managing indoor
levels behaves completely differently from the standard scene manager, but looks
identical to other classes in the system and has the same methods called on it
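As a concrete (and deliberately tiny) illustration of that last point, here is a sketch in C++. The class and method names here are invented for this example and are not OGRE’s real API:

```cpp
#include <string>

// Illustrative sketch only: these class names are hypothetical, not OGRE's.
class SceneManagerBase {
public:
    virtual ~SceneManagerBase() = default;
    // Decide how to organise the scene; specialised subclasses override this.
    virtual std::string organiseScene() const { return "generic: draw everything"; }
};

class IndoorSceneManager : public SceneManagerBase {
public:
    // Same method signature, completely different strategy under the surface.
    std::string organiseScene() const override {
        return "indoor: walk BSP tree, draw visible leaves";
    }
};

// Application code is written once, against the base interface only.
std::string renderWith(const SceneManagerBase& sm) { return sm.organiseScene(); }
```

The caller only ever learns the `renderWith`/`organiseScene` interface; which behaviour it gets depends on the object passed in.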
1.2 Multi-everything
I wanted to do more than create a 3D engine that ran on one 3D API, on one platform, with
one type of scene (indoor levels are most popular). I wanted OGRE to be able to extend to
any kind of scene (but yet still implement scene-specific optimisations under the surface), any
platform and any 3D API.
Therefore all the ’visible’ parts of OGRE are completely independent of platform, 3D API
and scene type. There are no dependencies on Windows types, no assumptions about the type
of scene you are creating, and the principles of the 3D aspects are based on core maths texts
rather than one particular API implementation.
Now of course somewhere OGRE has to get down to the nitty-gritty of the specifics of the
platform, API and scene, but it does this in subclasses specially designed for the environment
in question, but which still expose the same interface as the abstract versions.
For example, there is a ’Win32Window’ class which handles all the details about rendering
windows on a Win32 platform - however the application designer only has to manipulate it via
the superclass interface ’RenderWindow’, which will be the same across all platforms.
Similarly the ’SceneManager’ class looks after the arrangement of objects in the scene and
their rendering sequence. Applications only have to use this interface, but there is a ’BspScene-
Manager’ class which optimises the scene management for indoor levels, meaning you get both
performance and an easy to learn interface. All applications have to do is hint about the kind
of scene they will be creating and let OGRE choose the most appropriate implementation - this
is covered in a later tutorial.
OGRE’s object-oriented nature makes all this possible. Currently OGRE runs on Windows,
Linux and Mac OSX using plugins to drive the underlying rendering API (currently Direct3D or
OpenGL). Applications use OGRE at the abstract level, thus ensuring that they automatically
operate on all platforms and rendering subsystems that OGRE provides without any need for
platform or API specific code.
2 The Core Objects
Introduction
This tutorial gives you a quick summary of the core objects that you will use in OGRE and
what they are used for.
At the very top of the diagram is the Root object. This is your ’way in’ to the OGRE system,
and it’s where you tend to create the top-level objects that you need to deal with, like scene
managers, rendering systems and render windows, loading plugins, all the fundamental stuff.
If you don’t know where to start, Root is it for almost everything, although often it will just
give you another object which will actually do the detail work, since Root itself is more of an
organiser and facilitator object.
The majority of the rest of OGRE’s classes fall into one of three roles:
Scene Management
This is about the contents of your scene, how it’s structured, how it’s viewed from
cameras, etc. Objects in this area are responsible for giving you a natural declarative
interface to the world you’re building; i.e. you don’t tell OGRE "set these render
states and then render 3 polygons", you tell it "I want an object here, here and
here, with these materials on them, rendered from this view", and let it get on with
it.
Resource Management
All rendering needs resources, whether it’s geometry, textures, fonts, whatever. It’s
important to manage the loading, re-use and unloading of these things carefully, so
that’s what classes in this area do.
Rendering
Finally, there’s getting the visuals on the screen - this is about the lower-level end
of the rendering pipeline: the specific rendering system API objects like buffers
and render states, and pushing it all down the pipeline. Classes in the Scene
Management subsystem use this to get their higher-level scene information onto the
screen.
You’ll notice that scattered around the edge are a number of plugins. OGRE is designed
to be extended, and plugins are the usual way to go about it. Many of the classes in OGRE
can be subclassed and extended, whether it’s changing the scene organisation through a custom
SceneManager, adding a new render system implementation (e.g. Direct3D or OpenGL), or
providing a way to load resources from another source (say from a web location or a database).
Again this is just a small smattering of the kinds of things plugins can do, but as you can see
they can plug in to almost any aspect of the system. This way, OGRE isn’t just a solution for
one narrowly defined problem, it can extend to pretty much anything you need it to do.
The root object lets you configure the system, for example through the showConfigDialog()
method which is an extremely handy method which performs all render system options detection
and shows a dialog for the user to customise resolution, colour depth, full screen options etc. It
also sets the options the user selects so that you can initialise the system directly afterwards.
The root object is also your method for obtaining pointers to other objects in the system,
such as the SceneManager, RenderSystem and various other resource managers. See below for
details.
Finally, if you run OGRE in continuous rendering mode, i.e. you want to always refresh all
the rendering targets as fast as possible (the norm for games and demos, but not for windowed
utilities), the root object has a method called startRendering, which when called will enter
a continuous rendering loop which will only end when all rendering windows are closed, or
any FrameListener objects indicate that they want to stop the cycle (see below for details of
FrameListener objects).
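The shape of that loop can be sketched as follows. `MiniRoot` and `StopAfter` are hypothetical stand-ins for Root and a FrameListener implementation, reduced to just the control flow described above:

```cpp
#include <vector>

// Hypothetical miniature of the continuous rendering loop; not OGRE's real API.
struct FrameListener {
    virtual ~FrameListener() = default;
    virtual bool frameStarted() = 0;  // return false to stop the cycle
};

struct MiniRoot {
    std::vector<FrameListener*> listeners;
    int framesRendered = 0;

    // Loop until any listener asks to stop (the windows-closed check is omitted).
    void startRendering() {
        bool keepGoing = true;
        while (keepGoing) {
            for (FrameListener* l : listeners)
                if (!l->frameStarted()) keepGoing = false;
            if (keepGoing) ++framesRendered;  // stands in for rendering one frame
        }
    }
};

// Example listener that stops the cycle after a fixed number of frames.
struct StopAfter : FrameListener {
    int remaining;
    explicit StopAfter(int n) : remaining(n) {}
    bool frameStarted() override { return remaining-- > 0; }
};
```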
However, a typical application should not normally need to manipulate the RenderSystem
object directly - everything you need for rendering objects and customising settings should be
available on the SceneManager, Material and other scene-oriented classes. It’s only if you want
to create multiple rendering windows (completely separate windows in this case, not multiple
viewports like a split-screen effect which is done via the RenderWindow class) or access other
advanced features that you need access to the RenderSystem object.
For this reason I will not discuss the RenderSystem object further in these tutorials. You
can assume the SceneManager handles the calls to the RenderSystem at the appropriate times.
It is to the SceneManager that you go when you want to create a camera for the scene. It’s
also where you go to retrieve or to remove a light from the scene. There is no need for your
application to keep lists of objects, the SceneManager keeps a named set of all of the scene
objects for you to access, should you need them. Look in the main documentation under the
getCamera, getLight, getEntity etc methods.
The SceneManager also sends the scene to the RenderSystem object when it is time to render
the scene. You never have to call the SceneManager::renderScene method directly though - it
is called automatically whenever a rendering target is asked to update.
So most of your interaction with the SceneManager is during scene setup. You’re likely to
call a great number of methods (perhaps driven by some input file containing the scene data) in
order to set up your scene. You can also modify the contents of the scene dynamically during
the rendering cycle if you create your own FrameListener object (see later).
Because different scene types require very different algorithmic approaches to deciding which
objects get sent to the RenderSystem in order to attain good rendering performance, the Scene-
Manager class is designed to be subclassed for different scene types. The default SceneManager
object will render a scene, but it does little or no scene organisation and you should not expect
the results to be high performance in the case of large scenes. The intention is that special-
isations will be created for each type of scene such that under the surface the subclass will
optimise the scene organisation for best performance given assumptions which can be made for
that scene type. An example is the BspSceneManager which optimises rendering for large indoor
levels based on a Binary Space Partition (BSP) tree.
The application using OGRE does not have to know which subclasses are available. The
application simply calls Root::createSceneManager(..) passing as a parameter one of a number
of scene types (e.g. ST_GENERIC, ST_INTERIOR etc). OGRE will automatically use the
best SceneManager subclass available for that scene type, or default to the basic SceneManager
if a specialist one is not available. This allows the developers of OGRE to add new scene
specialisations later and thus optimise previously unoptimised scene types without the user
applications having to change any code.
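A minimal sketch of that selection scheme follows. The names here are invented, and OGRE’s real factory machinery is considerably richer; this only shows the specialist-or-fallback decision:

```cpp
#include <functional>
#include <map>
#include <memory>
#include <string>

// Hypothetical sketch only; these are not OGRE's real classes.
enum SceneType { ST_GENERIC, ST_INTERIOR };

struct SceneManager {
    virtual ~SceneManager() = default;
    virtual std::string name() const { return "default"; }
};
struct BspSceneManager : SceneManager {
    std::string name() const override { return "bsp"; }
};

struct ToyRoot {
    // Plugins register a creator for each scene type they specialise in.
    std::map<SceneType, std::function<std::unique_ptr<SceneManager>()>> factories;

    std::unique_ptr<SceneManager> createSceneManager(SceneType t) {
        auto it = factories.find(t);
        if (it != factories.end()) return it->second();  // best specialist available
        return std::make_unique<SceneManager>();         // fall back to the basic one
    }
};
```

Because the application only asks for a scene type, new specialisations can be registered later without any application code changing.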
ResourceManagers ensure that resources are only loaded once and shared throughout the
OGRE engine. They also manage the memory requirements of the resources they look after.
They can also search in a number of locations for the resources they need, including multiple
search paths and compressed archives (ZIP files).
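The load-once behaviour boils down to a cache keyed by resource name. A toy sketch, with hypothetical names (the real managers also handle memory budgets and archive searching):

```cpp
#include <map>
#include <memory>
#include <string>

// Illustrative only: a toy resource manager that loads each resource once
// and hands back the shared copy thereafter.
struct Texture { std::string source; };

class ToyTextureManager {
public:
    int loadsPerformed = 0;  // exposed here only so the behaviour is visible

    std::shared_ptr<Texture> getByName(const std::string& name) {
        auto it = cache.find(name);
        if (it != cache.end()) return it->second;  // already loaded: share it
        ++loadsPerformed;                          // the expensive disk work happens here
        auto tex = std::make_shared<Texture>(Texture{name});
        cache[name] = tex;
        return tex;
    }

private:
    std::map<std::string, std::shared_ptr<Texture>> cache;
};
```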
Most of the time you won’t interact with resource managers directly. Resource managers will
be called by other parts of the OGRE system as required, for example when you request for a
texture to be added to a Material, the TextureManager will be called for you. If you like, you
can call the appropriate resource manager directly to preload resources (if for example you want
to prevent disk access later on) but most of the time it’s ok to let OGRE decide when to do it.
One thing you will want to do is to tell the resource managers where to look for resources. You
do this via Root::getSingleton().addResourceLocation, which actually passes the information on
to ResourceGroupManager.
Because there is only ever 1 instance of each resource manager in the engine, if you do want
to get a reference to a resource manager use the following syntax:
TextureManager::getSingleton().someMethod()
MeshManager::getSingleton().someMethod()
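A common C++ shape for this one-instance-per-manager pattern is the function-local static (a ‘Meyers singleton’). The sketch below is illustrative only and differs from OGRE’s actual Singleton template:

```cpp
// Hypothetical toy manager demonstrating the getSingleton() pattern.
class ToyMeshManager {
public:
    static ToyMeshManager& getSingleton() {
        static ToyMeshManager instance;  // constructed once, on first use
        return instance;
    }
    int meshesCreated = 0;

private:
    ToyMeshManager() = default;                        // no public construction
    ToyMeshManager(const ToyMeshManager&) = delete;    // no copies
    ToyMeshManager& operator=(const ToyMeshManager&) = delete;
};
```

Every call to `ToyMeshManager::getSingleton()` returns a reference to the same object, which is why state set through one call is visible through all later calls.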
Mesh objects are a type of resource, and are managed by the MeshManager resource manager.
They are typically loaded from OGRE’s custom object format, the ’.mesh’ format. Mesh files
are typically created by exporting from a modelling tool (see Section 4.1 [Exporters], page 140)
and can be manipulated through various tools (see Chapter 4 [Mesh Tools], page 140).
You can also create Mesh objects manually by calling the MeshManager::createManual
method. This way you can define the geometry yourself, but this is outside the scope of this
manual.
Mesh objects are the basis for the individual movable objects in the world, which are called
entities (see Section 2.6 [Entities], page 8).
Mesh objects can also be animated using skeletal animation (see Section 8.1 [Skeletal
Animation], page 175).
2.6 Entities
An entity is an instance of a movable object in the scene. It could be a car, a person, a dog, a
shuriken, whatever. The only assumption is that it does not necessarily have a fixed position in
the world.
Entities are based on discrete meshes, i.e. collections of geometry which are self-contained
and typically fairly small on a world scale, which are represented by the Mesh object. Multiple
entities can be based on the same mesh, since often you want to create multiple copies of the
same object at once.
You create an entity by calling the SceneManager::createEntity method, giving it a name and
specifying the name of the mesh object which it will be based on (e.g. ’muscleboundhero.mesh’).
The SceneManager will ensure that the mesh is loaded by calling the MeshManager resource
manager for you. Only one copy of the Mesh will be loaded.
Entities are not deemed to be a part of the scene until you attach them to a SceneNode (see
the section below). By attaching entities to SceneNodes, you can create complex hierarchical
relationships between the positions and orientations of entities. You then modify the positions
of the nodes to indirectly affect the entity positions.
To understand how this works, you have to know that all Mesh objects are actually composed
of SubMesh objects, each of which represents a part of the mesh using one Material. If a Mesh
uses only one Material, it will only have one SubMesh.
When an Entity is created based on this Mesh, it is composed of (possibly) multiple SubEntity
objects, each matching 1 for 1 with the SubMesh objects from the original Mesh. You can access
the SubEntity objects using the Entity::getSubEntity method. Once you have a reference to a
SubEntity, you can change the material it uses by calling its setMaterialName method. In this
way you can make an Entity deviate from the default materials and thus create an individual
looking version of it.
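The 1-for-1 relationship can be sketched like this. The structs are hypothetical miniatures of Mesh/SubMesh and Entity/SubEntity, not the real classes:

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Hypothetical miniatures of the Mesh/SubMesh and Entity/SubEntity split.
struct SubMesh { std::string materialName; };
struct Mesh { std::vector<SubMesh> subMeshes; };

struct SubEntity {
    std::string materialName;
    void setMaterialName(const std::string& m) { materialName = m; }
};

struct Entity {
    std::vector<SubEntity> subEntities;
    // One SubEntity per SubMesh, each starting with the mesh's own material.
    explicit Entity(const Mesh& mesh) {
        for (const SubMesh& sm : mesh.subMeshes)
            subEntities.push_back(SubEntity{sm.materialName});
    }
    SubEntity& getSubEntity(std::size_t i) { return subEntities[i]; }
};
```

Overriding one SubEntity’s material changes that entity alone; the shared Mesh, and any other entities based on it, are untouched.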
2.7 Materials
The Material object controls how objects in the scene are rendered. It specifies what basic
surface properties objects have such as reflectance of colours, shininess etc, how many texture
layers are present, what images are on them and how they are blended together, what special
effects are applied such as environment mapping, what culling mode is used, how the textures
are filtered etc.
Basically everything about the appearance of an object apart from its shape is controlled by
the Material class.
The SceneManager class manages the master list of materials available to the scene. The
list can be added to by the application by calling SceneManager::createMaterial, or by loading
a Mesh (which will in turn load material properties). Whenever materials are added to the
SceneManager, they start off with a default set of properties defined by OGRE.
2.8 Overlays
Overlays allow you to render 2D and 3D elements on top of the normal scene contents to create
effects like heads-up displays (HUDs), menu systems, status panels etc. The frame rate statistics
panel which comes as standard with OGRE is an example of an overlay. Overlays can contain
2D or 3D elements. 2D elements are used for HUDs, and 3D elements can be used to create
cockpits or any other 3D object which you wish to be rendered on top of the rest of the scene.
You can create overlays either through the SceneManager::createOverlay method, or you can
define them in an .overlay script. In reality the latter is likely to be the most practical because
it is easier to tweak (without the need to recompile the code). Note that you can define as many
overlays as you like: they all start off life hidden, and you display them by calling their ’show()’
method. You can also show multiple overlays at once, and their Z order is determined by the
Overlay::setZOrder() method.
Creating 2D Elements
The OverlayElement class abstracts the details of 2D elements which are added to overlays. All
items which can be added to overlays are derived from this class. It is possible (and encouraged)
for users of OGRE to define their own custom subclasses of OverlayElement in order to provide
their own user controls. The key common features of all OverlayElements are things like size,
position, basic material name etc. Subclasses extend this behaviour to include more complex
properties and behaviour.
If you wish to add child elements to a container, call OverlayContainer::addChild. Child
elements can be OverlayElements or OverlayContainer instances themselves. Remember that
the position of a child element is relative to the top-left corner of its parent.
Pixel Mode
This mode is useful when you want to specify an exact size for your overlay items,
and you don’t mind if those items get smaller on the screen if you increase the screen
resolution (in fact you might want this). In this mode the only way to put something
in the middle or at the right or bottom of the screen reliably in any resolution is to
use the aligning options, whilst in relative mode you can do it just by using the right
relative coordinates. This mode is very simple, the top-left of the screen is (0,0) and
the bottom-right of the screen depends on the resolution. As mentioned above, you
can use the aligning options to make the horizontal and vertical coordinate origins
the right, bottom or center of the screen if you want to place pixel items in these
locations without knowing the resolution.
Relative Mode
This mode is useful when you want items in the overlay to be the same size on the
screen no matter what the resolution. In relative mode, the top-left of the screen
is (0,0) and the bottom-right is (1,1). So if you place an element at (0.5, 0.5),
its top-left corner is placed exactly in the center of the screen, no matter what
resolution the application is running in. The same principle applies to sizes; if you
set the width of an element to 0.5, it covers half the width of the screen. Note that
because the aspect ratio of the screen is typically 1.3333 : 1 (width : height), an
element with dimensions (0.25, 0.25) will not be square, but it will take up exactly
1/16th of the screen in area terms. If you want square-looking areas you will have
to compensate using the typical aspect ratio e.g. use (0.1875, 0.25) instead.
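The compensation is simple arithmetic: to look square, an element’s relative width must be its relative height divided by the screen’s aspect ratio (0.25 ÷ 4/3 = 0.1875; and a (0.25, 0.25) element covers 0.25 × 0.25 = 1/16 of the screen area, as stated above). As a one-line sketch:

```cpp
#include <cmath>

// width = height / aspectRatio gives a square-looking overlay element in
// relative mode; e.g. height 0.25 on a 4:3 screen needs width 0.1875.
double squareLookingWidth(double relativeHeight, double aspectRatio) {
    return relativeHeight / aspectRatio;
}
```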
Transforming Overlays
Another nice feature of overlays is being able to rotate, scroll and scale them as a whole. You
can use this for zooming in / out menu systems, dropping them in from off screen and other nice
effects. See the Overlay::scroll, Overlay::rotate and Overlay::scale methods for more information.
Scripting overlays
Overlays can also be defined in scripts. See Section 3.4 [Overlay Scripts], page 127 for details.
GUI systems
Overlays are only really designed for non-interactive screen elements, although you can
use them as a crude GUI. For a far more complete GUI solution, we recommend CEGui
(http://www.cegui.org.uk), as demonstrated in the sample Demo Gui.
3 Scripts
OGRE drives many of its features through scripts in order to make it easier to set up. The scripts
are simply plain text files which can be edited in any standard text editor, and modifying them
immediately takes effect on your OGRE-based applications, without any need to recompile. This
makes prototyping a lot faster. Here are the items that OGRE lets you script:
• Section 3.1 [Material Scripts], page 13
• Section 3.2 [Compositor Scripts], page 92
• Section 3.3 [Particle Scripts], page 106
• Section 3.4 [Overlay Scripts], page 127
• Section 3.5 [Font Definition Scripts], page 137
Loading scripts
Material scripts are loaded when resource groups are initialised: OGRE looks in all resource loca-
tions associated with the group (see Root::addResourceLocation) for files with the ’.material’ ex-
tension and parses them. If you want to parse files manually, use MaterialSerializer::parseScript.
It’s important to realise that materials are not loaded completely by this parsing process:
only the definition is loaded, no textures or other resources are loaded. This is because it is
common to have a large library of materials, but only use a relatively small subset of them
in any one scene. To load every material completely in every script would therefore cause
unnecessary memory overhead. You can access a ’deferred load’ Material in the normal way
(MaterialManager::getSingleton().getByName()), but you must call the ’load’ method before
trying to use it. Ogre does this for you when using the normal material assignment methods of
entities etc.
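That two-stage life cycle can be sketched as below. `ToyMaterial` and `parseScriptEntry` are invented stand-ins for the real parsing and Material loading machinery:

```cpp
#include <string>
#include <vector>

// Toy version of the deferred-load life cycle: parsing creates the definition
// only; load() is where the heavyweight resources would be pulled in.
struct ToyMaterial {
    std::string name;
    std::vector<std::string> textureNames;  // known from the script definition
    bool loaded = false;

    // Called explicitly to preload, or for you when the material is first used.
    void load() {
        if (loaded) return;
        // ... a real engine would load each texture from disk here ...
        loaded = true;
    }
};

ToyMaterial parseScriptEntry(const std::string& name,
                             const std::vector<std::string>& textures) {
    return ToyMaterial{name, textures, /*loaded=*/false};  // definition only
}
```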
Another important factor is that material names must be unique throughout ALL scripts
loaded by the system, since materials are always identified by name.
Format
Several materials may be defined in a single script. The script format is pseudo-C++, with
sections delimited by curly braces (’{’, ’}’) and comments indicated by starting a line with ’//’
(nested comments are not allowed). The general format is shown in the example below (note
that to start with, we only consider fixed-function materials which don’t use vertex, geometry
or fragment programs; these are covered later):
// This is a comment
material walls/funkywall1
{
// first, preferred technique
technique
{
// first pass
pass
{
ambient 0.5 0.5 0.5
diffuse 1.0 1.0 1.0
// Texture unit 0
texture_unit
{
texture wibbly.jpg
scroll_anim 0.1 0.0
wave_xform scale sine 0.0 0.7 0.0 1.0
}
// Texture unit 1 (this is a multitexture pass)
texture_unit
{
texture wobbly.png
rotate_anim 0.25
colour_op add
}
}
}
}
Every material in the script must be given a name, which is the line ’material <blah>’ before
the first opening ’{’. This name must be globally unique. It can include path characters (as in
the example) to logically divide up your materials, and also to avoid duplicate names, but the
engine does not treat the name as hierarchical, just as a string. If you include spaces in the
name, it must be enclosed in double quotes.
NOTE: ’:’ is the delimiter for specifying material copy in the script so it can’t be used as
part of the material name.
A material can inherit from a previously defined material by using a colon ':' after the material
name, followed by the name of the reference material to inherit from. You can in fact even inherit
just parts of a material from others; all this is covered in Section 3.1.11 [Script Inheritence],
page 84. You can also use variables in your script which can be replaced in inheriting versions.
A material can be made up of many techniques (See Section 3.1.1 [Techniques], page 17) -
a technique is one way of achieving the effect you are looking for. You can supply more than
one technique in order to provide fallback approaches where a card does not have the ability to
render the preferred technique, or where you wish to define lower level of detail versions of the
material in order to conserve rendering power when objects are more distant.
Each technique can be made up of many passes (See Section 3.1.2 [Passes], page 20), that
is a complete render of the object can be performed multiple times with different settings in
order to produce composite effects. Ogre may also split the passes you have defined into many
passes at runtime, if you define a pass which uses too many texture units for the card you are
currently running on (note that it can only do this if you are not using a fragment program).
Each pass has a number of top-level attributes such as ’ambient’ to set the amount & colour of
the ambient light reflected by the material. Some of these options do not apply if you are using
vertex programs, See Section 3.1.2 [Passes], page 20 for more details.
Within each pass, there can be zero or many texture units in use (See Section 3.1.3 [Texture
Units], page 39). These define the texture to be used, and optionally some blending operations
(which use multitexturing) and texture effects.
You can also reference vertex and fragment programs (or vertex and pixel shaders, if
you want to use that terminology) in a pass with a given set of parameters. Programs
themselves are declared in separate .program scripts (See Section 3.1.4 [Declaring
Vertex/Geometry/Fragment Programs], page 54) and are used as described in Section 3.1.9
[Using Vertex/Geometry/Fragment Programs in a Pass], page 69.
lod_strategy
Sets the name of the LOD strategy to use. Defaults to 'Distance', which means LOD changes
based on distance from the camera. Also supported is 'PixelCount', which changes LOD based
on an estimate of the screen-space pixels affected.
lod_values
This attribute defines the values used to control the LOD transition for this material. By
setting this attribute, you indicate that you want this material to alter the Technique that it
uses based on some metric, such as the distance from the camera, or the approximate screen
space coverage. The exact meaning of these values is determined by the option you select for
[lod_strategy], page 15 - it is a list of distances for the 'Distance' strategy, and a list of pixel
counts for the 'PixelCount' strategy, for example. You must give it a list of values, in order from
highest LOD value to lowest LOD value, each one indicating the point at which the material
will switch to the next LOD. Implicitly, all materials activate LOD index 0 for values less than
the first entry, so you do not have to specify '0' at the start of the list. You must ensure that
there is at least one Technique with a [lod_index], page 18 value for each value in the list (so if
you specify 3 values, you must have techniques for LOD indexes 0, 1, 2 and 3). Note you must
always have at least one Technique at lod_index 0.
Example:
lod_strategy Distance
lod_values 300.0 600.5 1200
The above example would cause the material to use the best Technique at lod_index 0 up to
a distance of 300 world units, the best from lod_index 1 from 300 up to 600, lod_index 2 from
600 to 1200, and lod_index 3 from 1200 upwards.
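Tying this together, a sketch (hypothetical name and values) of a material providing a technique for each LOD index implied by three lod_values entries:

material Hills/Grass
{
    lod_values 300.0 600.5 1200
    technique
    {
        lod_index 0
        pass
        {
            diffuse 1.0 1.0 1.0
        }
    }
    technique
    {
        lod_index 1
        pass
        {
            diffuse 0.9 0.9 0.9
        }
    }
    // ...techniques for lod_index 2 and 3 follow the same pattern
}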
receive_shadows
This attribute controls whether objects using this material can have shadows cast upon them.
Whether or not an object receives a shadow is the combination of a number of factors, see
Chapter 7 [Shadows], page 160 for full details; however this allows you to make a material
opt out of receiving shadows if required. Note that transparent materials never receive shadows,
so this option only has an effect on solid materials.
transparency_casts_shadows
Whether or not an object casts a shadow is the combination of a number of factors, see Chapter 7
[Shadows], page 160 for full details; however this allows you to make a transparent material cast
shadows, when it would otherwise not. For example, when using texture shadows, transparent
materials are normally not rendered into the shadow texture because they should not block light.
This flag overrides that.
set_texture_alias
This attribute can be used to set the textures used in texture unit states that were inherited
from another material (See Section 3.1.12 [Texture Aliases], page 87).
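As a sketch (names hypothetical), assuming the base material declared a texture alias called 'DiffuseMap' inside one of its texture units:

material Crates/Wooden : Crates/Base
{
    set_texture_alias DiffuseMap wooden_crate.jpg
}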
3.1.1 Techniques
A "technique" section in your material script encapsulates a single method of rendering an
object. The simplest of material definitions only contains a single technique, however since PC
hardware varies quite greatly in its capabilities, you can only do this if you are sure that every
card for which you intend to target your application will support the capabilities which your
technique requires. In addition, it can be useful to define simpler ways to render a material if
you wish to use material LOD, such that more distant objects use a simpler, less performance-
hungry technique.
When a material is used for the first time, it is ’compiled’. That involves scanning the
techniques which have been defined, and marking which of them are supportable using the
current rendering API and graphics card. If no techniques are supportable, your material will
render as blank white. The compilation examines a number of things, such as:
• The number of texture unit entries in each pass
Note that if the number of texture unit entries exceeds the number of texture units in the
current graphics card, the technique may still be supportable so long as a fragment program
is not being used. In this case, Ogre will split the pass which has too many entries into
multiple passes for the less capable card, and the multitexture blend will be turned into a
multipass blend (See [colour_op_multipass_fallback], page 51).
• Whether vertex, geometry or fragment programs are used, and if so which syntax they use
(e.g. vs_1_1, ps_2_x, arbfp1 etc.)
• Other effects like cube mapping and dot3 blending
• Whether the vendor or device name of the current graphics card matches some user-specified
rules
In a material script, techniques must be listed in order of preference, i.e. the earlier techniques
are preferred over the later techniques. This normally means you will list your most advanced,
most demanding techniques first in the script, and list fallbacks afterwards.
To help clearly identify what each technique is used for, the technique can be named, but this
is optional. Techniques not named within the script will take on a name that is the technique
index number. For example: the first technique in a material is index 0, so its name would be
"0" if it was not given a name in the script. The technique name must be unique within the
material, or else the final technique is the resulting merge of all techniques with the same name
in the material. A warning message is posted in the Ogre.log if this occurs. Named techniques
can help when inheriting a material and modifying an existing technique (See Section 3.1.11
[Script Inheritence], page 84).
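For instance (hypothetical names), a named technique lets a derived material override just that technique:

material Base
{
    technique Main
    {
        pass
        {
            diffuse 1.0 1.0 1.0
        }
    }
}

material Derived : Base
{
    technique Main
    {
        pass
        {
            diffuse 0.5 0.5 0.5
        }
    }
}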
scheme
Sets the ’scheme’ this Technique belongs to. Material schemes are used to control top-level
switching from one set of techniques to another. For example, you might use this to define
’high’, ’medium’ and ’low’ complexity levels on materials to allow a user to pick a performance /
quality ratio. Another possibility is that you have a fully HDR-enabled pipeline for top machines,
rendering all objects using unclamped shaders, and a simpler pipeline for others; this can be
implemented using schemes. The active scheme is typically controlled at a viewport level, and
the active one defaults to ’Default’.
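For example (the scheme names are arbitrary), a material might offer one technique per scheme:

material Examples/Water
{
    technique
    {
        scheme hilevel
        pass
        {
            // advanced shader passes would go here
        }
    }
    technique
    {
        scheme lowlevel
        pass
        {
        }
    }
}

The active scheme would then be selected in code, e.g. viewport->setMaterialScheme("hilevel") in C++.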
lod_index
Sets the level-of-detail (LOD) index this Technique belongs to.
All techniques must belong to a LOD index; by default they all belong to index 0, i.e. the
highest LOD. Increasing indexes denote lower levels of detail. You can (and often will) assign
more than one technique to the same LOD index, what this means is that OGRE will pick the
best technique of the ones listed at the same LOD index. For readability, it is advised that
you list your techniques in order of LOD, then in order of preference, although the latter is the
only prerequisite (OGRE determines which one is 'best' by which one is listed first). You must
always have at least one Technique at lod_index 0.
The distance at which a LOD level is applied is determined by the lod_values attribute of
the containing material, See [lod_values], page 15 for details.
Techniques also contain one or more passes (and there must be at least one), See Section 3.1.2
[Passes], page 20.
An 'include' rule means that the technique will only be supported if one of the include rules is
matched (if no include rules are provided, anything will pass). An 'exclude' rule means that the
technique is considered unsupported if any of the exclude rules are matched. You can provide
as many rules as you like, although <vendor_name> and <device_pattern> must obviously be
unique. The valid list of <vendor_name> values is currently 'nvidia', 'ati', 'intel', 's3', 'matrox'
and '3dlabs'. <device_pattern> can be any string, and you can use wildcards ('*') if you need to
match variants. Here's an example:
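A set of rules matching the description below might look like this (sketch):

gpu_vendor_rule include nvidia
gpu_vendor_rule include intel
gpu_device_rule exclude *950*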
These rules, if all included in one technique, will mean that the technique will only be considered
supported on graphics cards made by NVIDIA and Intel, and so long as the device name doesn’t
have ’950’ in it.
Note that these rules can only mark a technique ’unsupported’ when it would otherwise be
considered ’supported’ judging by the hardware capabilities. Even if a technique passes these
rules, it is still subject to the usual hardware support tests.
3.1.2 Passes
A pass is a single render of the geometry in question; a single call to the rendering API with a
certain set of rendering properties. A technique can have between one and 16 passes, although
clearly the more passes you use, the more expensive the technique will be to render.
To help clearly identify what each pass is used for, the pass can be named, but this is optional.
Passes not named within the script will take on a name that is the pass index number. For
example: the first pass in a technique is index 0, so its name would be "0" if it was not given a
name in the script. The pass name must be unique within the technique, or else the final pass
is the resulting merge of all passes with the same name in the technique. A warning message
is posted in the Ogre.log if this occurs. Named passes can help when inheriting a material and
modifying an existing pass (See Section 3.1.11 [Script Inheritence], page 84).
Passes have a set of global attributes (described below), zero or more nested texture unit
entries (See Section 3.1.3 [Texture Units], page 39), and optionally a reference to a vertex and
/ or a fragment program (See Section 3.1.9 [Using Vertex/Geometry/Fragment Programs in a
Pass], page 69).
Here are the attributes you can use in a ’pass’ section of a .material script:
• [ambient], page 21
• [diffuse], page 22
• [specular], page 22
• [emissive], page 23
• [scene_blend], page 23
Attribute Descriptions
ambient
Sets the ambient colour reflectance properties of this pass. This attribute has no effect if an asm,
CG, or HLSL shader program is used. With GLSL, the shader can read the OpenGL material
state.
The base colour of a pass is determined by how much red, green and blue light it reflects
at each vertex. This property determines how much ambient light (directionless global light) is
reflected. It is also possible to make the ambient reflectance track the vertex colour as defined
in the mesh by using the keyword vertexcolour instead of the colour values. The default is
full white, meaning objects are completely globally illuminated. Reduce this if you want to see
diffuse or specular light effects, or change the blend of colours to make the object have a base
colour other than white. This setting has no effect if dynamic lighting is disabled using the
'lighting off' attribute, or if any texture layer has a 'colour_op replace' attribute.
diffuse
Sets the diffuse colour reflectance properties of this pass. This attribute has no effect if an asm,
CG, or HLSL shader program is used. With GLSL, the shader can read the OpenGL material
state.
The base colour of a pass is determined by how much red, green and blue light it reflects
at each vertex. This property determines how much diffuse light (light from instances of the
Light class in the scene) is reflected. It is also possible to make the diffuse reflectance track the
vertex colour as defined in the mesh by using the keyword vertexcolour instead of the colour
values. The default is full white, meaning objects reflect the maximum white light they can
from Light objects. This setting has no effect if dynamic lighting is disabled using the 'lighting
off' attribute, or if any texture layer has a 'colour_op replace' attribute.
specular
Sets the specular colour reflectance properties of this pass. This attribute has no effect if an asm,
CG, or HLSL shader program is used. With GLSL, the shader can read the OpenGL material
state.
The base colour of a pass is determined by how much red, green and blue light it reflects
at each vertex. This property determines how much specular light (highlights from instances
of the Light class in the scene) is reflected. It is also possible to make the specular reflectance
track the vertex colour as defined in the mesh by using the keyword vertexcolour instead of the
colour values. The default is to reflect no specular light. The colour of the specular highlights
is determined by the colour parameters, and the size of the highlights by the separate shininess
parameter. The higher the value of the shininess parameter, the sharper the highlight, i.e. the
radius is smaller. Beware of using shininess values in the range of 0 to 1, since this causes the
specular colour to be applied to the whole surface that has the material applied to it. When the
viewing angle to the surface changes, ugly flickering will also occur when shininess is in the range
of 0 to 1. Shininess values between 1 and 128 work best in both DirectX and OpenGL renderers.
This setting has no effect if dynamic lighting is disabled using the 'lighting off' attribute, or if
any texture layer has a 'colour_op replace' attribute.
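For instance, white highlights with a fairly sharp falloff might be specified as (sketch):

specular 1.0 1.0 1.0 12.5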
emissive
Sets the amount of self-illumination an object has. This attribute has no effect if an asm, CG, or
HLSL shader program is used. With GLSL, the shader can read the OpenGL material state.
If an object is self-illuminating, it does not need external sources to light it, ambient or
otherwise. It's like the object has its own personal ambient light. Despite what the name suggests,
this object doesn't act as a light source for other objects in the scene (if you want it to, you
have to create a light which is centered on the object). It is also possible to make the emissive
colour track the vertex colour as defined in the mesh by using the keyword vertexcolour instead
of the colour values. This setting has no effect if dynamic lighting is disabled using the 'lighting
off' attribute, or if any texture layer has a 'colour_op replace' attribute.
scene_blend
Sets the kind of blending this pass has with the existing contents of the scene. Whereas the texture
blending operations seen in the texture unit entries are concerned with blending between texture
layers, this blending is about combining the output of this pass as a whole with the existing
contents of the rendering target. This blending therefore allows object transparency and other
special effects. There are 2 formats, one using predefined blend types, the other allowing a
roll-your-own approach using source and destination factors.
This is the simpler form, where the most commonly used blending modes are enumerated
using a single parameter. Valid <blend type> parameters are:
add        The colour of the rendering output is added to the scene. Good for explosions,
           flares, lights, ghosts etc. Equivalent to 'scene_blend one one'.
modulate   The colour of the rendering output is multiplied with the scene contents. Generally
           colours and darkens the scene, good for smoked glass, semi-transparent objects etc.
           Equivalent to 'scene_blend dest_colour zero'.
colour_blend
           Colour the scene based on the brightness of the input colours, but don't darken.
           Equivalent to 'scene_blend src_colour one_minus_src_colour'.
alpha_blend
           The alpha value of the rendering output is used as a mask. Equivalent to
           'scene_blend src_alpha one_minus_src_alpha'.
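For example, a simple transparency pass using the predefined form might look like this (sketch; the texture name is hypothetical):

pass
{
    scene_blend alpha_blend
    depth_write off
    texture_unit
    {
        texture glass.png
    }
}

Turning depth_write off is typical for transparent passes so that objects behind them still render correctly.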
This version of the method allows complete control over the blending operation, by speci-
fying the source and destination blending factors. The resulting colour which is written to the
rendering target is (texture * sourceFactor) + (scene pixel * destFactor). Valid values for both
parameters are:
one        Constant value of 1.0
zero       Constant value of 0.0
dest_colour
           The existing pixel colour
src_colour The texture pixel (texel) colour
one_minus_dest_colour
           1 - (dest_colour)
Format 1: separate_scene_blend <simple_colour_blend> <simple_alpha_blend>
Example: separate_scene_blend add modulate
This example would add colour components but multiply alpha components. The blend
modes available are as in [scene_blend], page 23. The more advanced form is also available:
Format 2: separate_scene_blend <colour_src_factor> <colour_dest_factor> <alpha_src_factor>
<alpha_dest_factor>
Example: separate_scene_blend one one_minus_dest_alpha one one
Again the options available in the second format are the same as those in the second format
of [scene_blend], page 23.
scene_blend_op
This directive changes the operation which is applied between the two components of the scene
blending equation, which by default is 'add' (sourceFactor * source + destFactor * dest). You
may change this to 'add', 'subtract', 'reverse_subtract', 'min' or 'max'.
depth_check
Sets whether this pass renders with depth-buffer checking on or not.
If depth-buffer checking is on, whenever a pixel is about to be written to the frame buffer
the depth buffer is checked to see if the pixel is in front of all other pixels written at that point.
If not, the pixel is not written. If depth checking is off, pixels are written no matter what has
been rendered before. Also see depth_func for more advanced depth check configuration.
depth_write
Sets whether this pass renders with depth-buffer writing on or not.
If depth-buffer writing is on, whenever a pixel is written to the frame buffer the depth buffer
is updated with the depth value of that new pixel, thus affecting future rendering operations if
future pixels are behind this one. If depth writing is off, pixels are written without updating the
depth buffer. Depth writing should normally be on but can be turned off when rendering static
backgrounds or when rendering a collection of transparent objects at the end of a scene so that
they overlap each other correctly.
depth_func
Sets the function used to compare depth values when depth checking is on.
If depth checking is enabled (see depth_check) a comparison occurs between the depth value
of the pixel to be written and the current contents of the buffer. This comparison is normally
less_equal, i.e. the pixel is written if it is closer (or at the same distance) than the current
contents. The possible functions are:
always_fail
           Never writes a pixel to the render target
always_pass
           Always writes a pixel to the render target
less       Write if (new Z < existing Z)
less_equal Write if (new Z <= existing Z)
equal      Write if (new Z == existing Z)
not_equal  Write if (new Z != existing Z)
greater_equal
           Write if (new Z >= existing Z)
greater    Write if (new Z > existing Z)
depth_bias
Sets the bias applied to the depth value of this pass. Can be used to make coplanar polygons
appear on top of others, e.g. for decals.
When used with iteration_depth_bias, the first iteration will get the depth_bias value, the second
time it will get depth_bias + iteration_depth_bias, the third
time it will get depth_bias + iteration_depth_bias * 2, and so on. The default is zero.
alpha_rejection
Sets the way the pass will use alpha to totally reject pixels from the pipeline.
The function parameter can be any of the options listed in the material depth_func
attribute. The value parameter can theoretically be any value between 0 and 255, but is best
limited to 0 or 128 for hardware compatibility.
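For example, a foliage-style alpha cutout might be set up as follows (sketch; the texture name is hypothetical):

pass
{
    alpha_rejection greater_equal 128
    texture_unit
    {
        texture leaves.png
    }
}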
alpha_to_coverage
Sets whether this pass will use ’alpha to coverage’, a way to multisample alpha texture edges
so they blend more seamlessly with the background. This facility is typically only available on
cards from around 2006 onwards, but it is safe to enable it anyway - Ogre will just ignore it if
the hardware does not support it. The common use for alpha to coverage is foliage rendering
and chain-link fence style textures.
light_scissor
Sets whether when rendering this pass, rendering will be limited to a screen-space scissor rectan-
gle representing the coverage of the light(s) being used in this pass, derived from their attenuation
ranges.
This option is usually only useful if this pass is an additive lighting pass, and is at least the
second one in the technique, i.e. areas which are not affected by the current light(s) will never
need to be rendered. If there is more than one light being passed to the pass, then the scissor is
defined to be the rectangle which covers all lights in screen-space. Directional lights are ignored
since they are infinite.
This option does not need to be specified if you are using a standard additive shadow mode,
i.e. SHADOWTYPE_STENCIL_ADDITIVE or SHADOWTYPE_TEXTURE_ADDITIVE,
since it is the default behaviour to use a scissor for each additive shadow pass. However, if
you're not using shadows, or you're using [Integrated Texture Shadows], page 168 where passes
are specified in a custom manner, then this could be of use to you.
light_clip_planes
This option will only function if there is a single non-directional light being used in this pass.
If there is more than one light, or only directional lights, then no clipping will occur. If there
are no lights at all then the objects won't be rendered at all.
A specific note about OpenGL: user clip planes are completely ignored when you use an ARB
vertex program. This means light clip planes won’t help much if you use ARB vertex programs
on GL, although OGRE will perform some optimisation of its own, in that if it sees that the clip
volume is completely off-screen, it won’t perform a render at all. When using GLSL, user clipping
can be used but you have to use glClipVertex in your shader, see the GLSL documentation for
more information. In Direct3D user clip planes are always respected.
illumination_stage
When using an additive lighting mode (SHADOWTYPE_STENCIL_ADDITIVE or
SHADOWTYPE_TEXTURE_ADDITIVE), the scene is rendered in 3 discrete stages: ambient (or pre-
lighting), per-light (once per light, with shadowing) and decal (or post-lighting). Usually OGRE
figures out how to categorise your passes automatically, but there are some effects you cannot
achieve without manually controlling the illumination. For example, specular effects are muted
by the typical sequence because all textures are saved until the 'decal' stage, which mutes the
specular effect. Instead, you could do texturing within the per-light stage if it's possible for
your material and thus add the specular on after the decal texturing, and have no post-light
rendering.
If you assign an illumination stage to a pass you have to assign it to all passes in the technique
otherwise it will be ignored. Also note that whilst you can have more than one pass in each
group, they cannot alternate, ie all ambient passes will be before all per-light passes, which
will also be before all decal passes. Within their categories the passes will retain their ordering
though.
Format: illumination_stage <ambient|per_light|decal>
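A sketch of a technique with manually categorised passes (the texture name is hypothetical):

technique
{
    pass
    {
        illumination_stage ambient
        diffuse 0 0 0
    }
    pass
    {
        illumination_stage per_light
        iteration once_per_light
        scene_blend add
    }
    pass
    {
        illumination_stage decal
        lighting off
        scene_blend modulate
        texture_unit
        {
            texture decal.jpg
        }
    }
}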
normalise_normals
Sets whether or not this pass renders with all vertex normals being automatically re-normalised.
Scaling objects causes normals to also change magnitude, which can throw off your lighting
calculations. By default, the SceneManager detects this and will automatically re-normalise
normals for any scaled object, but this has a cost. If you’d prefer to control this manually, call
SceneManager::setNormaliseNormalsOnScale(false) and then use this option on materials which
are sensitive to normals being resized.
transparent_sorting
Sets if transparent textures should be sorted by depth or not.
By default all transparent materials are sorted such that renderables furthest away from
the camera are rendered first. This is usually the desired behaviour, but in certain cases this
depth sorting may be unnecessary and undesirable - for example, if it is necessary to ensure the
rendering order does not change from one frame to the next. In this case you can set the value
to 'off' to prevent sorting.
You can also use the keyword ’force’ to force transparent sorting on, regardless of other
circumstances. Usually sorting is only used when the pass is also transparent, and has a depth
write or read which indicates it cannot reliably render without sorting. By using ’force’, you tell
OGRE to sort this pass no matter what other circumstances are present.
cull_hardware
Sets the hardware culling mode for this pass.
A typical way for the hardware rendering engine to cull triangles is based on the 'vertex
winding' of triangles. Vertex winding refers to the direction in which the vertices are passed
or indexed in the rendering operation as viewed from the camera, and will either be clockwise
or anticlockwise (that's 'counterclockwise' for you Americans out there ;). If the option
'cull_hardware clockwise' is set, all triangles whose vertices are viewed in clockwise order from
the camera will be culled by the hardware. 'anticlockwise' is the reverse (obviously), and 'none'
turns off hardware culling so all triangles are rendered (useful for creating 2-sided passes).
cull_software
Sets the software culling mode for this pass.
In some situations the engine will also cull geometry in software before sending it to the
hardware renderer. This setting only takes effect on SceneManager’s that use it (since it is
best used on large groups of planar world geometry rather than on movable geometry since this
would be expensive), but if used can cull geometry before it is sent to the hardware. In this case
the culling is based on whether the ’back’ or ’front’ of the triangle is facing the camera - this
definition is based on the face normal (a vector which sticks out of the front side of the polygon
perpendicular to the face). Since Ogre expects face normals to be on the anticlockwise side of the
face, 'cull_software back' is the software equivalent of the 'cull_hardware clockwise' setting, which
is why they are both the default. The naming is different to reflect the way the culling is done
though, since most of the time face normals are pre-calculated and they don't have to be the
way Ogre expects - you could set 'cull_hardware none' and completely cull in software based on
your own face normals, if you have the right SceneManager which uses them.
lighting
Sets whether or not dynamic lighting is turned on for this pass. If lighting is turned
off, all objects rendered using the pass will be fully lit. This attribute has no effect if a vertex
program is used.
Turning dynamic lighting off makes any ambient, diffuse, specular, emissive and shading
properties for this pass redundant. When lighting is turned on, objects are lit according to their
vertex normals for diffuse and specular light, and globally for ambient and emissive.
Default: lighting on
shading
Sets the kind of shading which should be used for representing dynamic lighting for this pass.
When dynamic lighting is turned on, the effect is to generate colour values at each vertex.
Whether these values are interpolated across the face (and how) depends on this setting.
flat No interpolation takes place. Each face is shaded with a single colour determined
from the first vertex in the face.
gouraud Colour at each vertex is linearly interpolated across the face.
phong Vertex normals are interpolated across the face, and these are used to determine
colour at each pixel. Gives a more natural lighting effect but is more expensive and
works better at high levels of tessellation. Not supported on all hardware.
polygon_mode
Sets how polygons should be rasterised, i.e. whether they should be filled in, or just drawn as
lines or points.
fog_override
Tells the pass whether it should override the scene fog settings, and enforce its own. Very useful
for things that you don't want to be affected by fog when the rest of the scene is fogged, or vice
versa. Note that this only affects fixed-function fog - the original scene fog parameters are still
sent to shaders which use the fog_params parameter binding (this allows you to turn off fixed-function
fog and calculate it in the shader instead; if you want to disable shader fog you can do
that through shader parameters anyway).
If you specify ’true’ for the first parameter and you supply the rest of the parameters, you
are telling the pass to use these fog settings in preference to the scene settings, whatever they
might be. If you specify ’true’ but provide no further parameters, you are telling this pass to
never use fogging no matter what the scene says. Here is an explanation of the parameters:
colour_write
Sets whether this pass renders with colour writing on or not.
If colour writing is off no visible pixels are written to the screen during this pass. You might
think this is useless, but if you render with colour writing off, and with very minimal other
settings, you can use this pass to initialise the depth buffer before subsequently rendering other
passes which fill in the colour data. This can give you significant performance boosts on some
newer cards, especially when using complex fragment programs, because if the depth check fails
then the fragment program is never run.
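The depth-priming approach described above can be sketched as two passes:

technique
{
    // pass 1: lay down depth only, no colour output
    pass
    {
        colour_write off
    }
    // pass 2: full shading, only where the depth test passes exactly
    pass
    {
        depth_func equal
        // expensive fragment program would be referenced here
    }
}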
start_light
Sets the first light which will be considered for use with this pass.
You can use this attribute to offset the starting point of the lights for this pass. In other
words, if you set start_light to 2 then the first light to be processed in that pass will be the third
actual light in the applicable list. You could use this option to use different passes to process the
first couple of lights versus the second couple of lights for example, or use it in conjunction with
the [iteration], page 35 option to start the iteration from a given point in the list (e.g. doing the
first 2 lights in the first pass, and then iterating every 2 lights from then on perhaps).
max_lights
Sets the maximum number of lights which will be considered for use with this pass.
The maximum number of lights which can be used when rendering fixed-function materials
is set by the rendering system, and is typically 8. When you are using the programmable
pipeline (See Section 3.1.9 [Using Vertex/Geometry/Fragment Programs in a Pass], page 69)
this limit is dependent on the program you are running, or, if you use 'iteration once_per_light'
or a variant (See [iteration], page 35), it is effectively bounded only by the number of passes you
are willing to use. If you are not using pass iteration, the light limit applies once for this pass. If
you are using pass iteration, the light limit applies across all iterations of this pass - for example,
if you have 12 lights in range with an 'iteration once_per_light' setup but max_lights is set
to 4 for that pass, the pass will only iterate 4 times.
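A minimal sketch of that interaction between per-light iteration and the light cap (the pass name is hypothetical):

```
pass capped_lighting // hypothetical pass name
{
    // iterate once per light, but never more than 4 times in total,
    // even if more lights are in range
    iteration once_per_light
    max_lights 4
    scene_blend add
}
```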
iteration
Sets whether or not this pass is iterated, i.e. issued more than once.
Examples:
iteration once
The pass is only executed once, which is the default behaviour.
iteration once_per_light point
The pass is executed once for each point light.
iteration 5
The render state for the pass will be set up and then the draw call will execute 5
times.
iteration 5 per_light point
The render state for the pass will be set up and then the draw call will execute 5
times. This will be done for each point light.
iteration 1 per_n_lights 2 point
The render state for the pass will be set up and the draw call executed once for every
2 point lights.
By default, passes are only issued once. However, if you use the programmable pipeline, or
you wish to exceed the normal limits on the number of lights which are supported, you might
want to use the once_per_light option. In this case, only light index 0 is ever used, and the pass
is issued multiple times, each time with a different light in light index 0. Clearly this makes
the pass more expensive, but it may be the only way to achieve certain effects, such as per-pixel
lighting effects which take into account 1..n lights.
Using a number instead of "once" instructs the pass to iterate more than once after the
render state is set up. The render state is not changed after the initial setup, so repeated draw
calls are very fast - ideal for passes using programmable shaders that must iterate more than
once with the same render state, e.g. shaders for fur, motion blur or special filtering.
If you use once_per_light, you should also add an ambient pass to the technique before this
pass, otherwise when no lights are in range of this object it will not get rendered at all; this is
important even when you have no ambient light in the scene, because you would still want the
object's silhouette to appear.
The lightType parameter to the attribute only applies if you use once_per_light, per_light,
or per_n_lights, and restricts the pass to being run for lights of a single type (either 'point',
'directional' or 'spot'). In the example, the pass will be run once per point light. This can be
useful because when you're writing a vertex / fragment program it is a lot easier if you can
assume the kind of lights you'll be dealing with. However, at least point and directional lights
can be dealt with in one way.
Example: Simple Fur shader material script that uses a second pass with 10 iterations to
grow the fur:
// GLSL simple Fur
vertex_program GLSLDemo/FurVS glsl
{
    source fur.vert
    default_params
    {
        param_named_auto lightPosition light_position_object_space 0
        param_named_auto eyePosition camera_position_object_space
        param_named_auto passNumber pass_number
        param_named_auto multiPassNumber pass_iteration_number
        param_named furLength float 0.15
    }
}

fragment_program GLSLDemo/FurFS glsl
{
    source fur.frag
    default_params
    {
        param_named Ka float 0.2
        param_named Kd float 0.5
        param_named Ks float 0.0
        param_named furTU int 0
    }
}

material Fur
{
    technique GLSL
    {
        pass base_coat
        {
            ambient 0.7 0.7 0.7
            diffuse 0.5 0.8 0.5
            specular 1.0 1.0 1.0 1.5

            vertex_program_ref GLSLDemo/FurVS
            {
            }

            fragment_program_ref GLSLDemo/FurFS
            {
            }

            texture_unit
            {
                texture Fur.tga
                tex_coord_set 0
                filtering trilinear
            }
        }

        pass grow_fur
        {
            ambient 0.7 0.7 0.7
            diffuse 0.8 1.0 0.8
            specular 1.0 1.0 1.0 64
            depth_write off

            // iterate this pass 10 times to grow the fur shells
            iteration 10

            vertex_program_ref GLSLDemo/FurVS
            {
            }

            fragment_program_ref GLSLDemo/FurFS
            {
            }

            texture_unit
            {
                texture Fur.tga
                tex_coord_set 0
                filtering trilinear
            }
        }
    }
}
Note: use the gpu program auto parameters [pass_number], page 80 and [pass_iteration_number],
page 80 to tell the vertex, geometry or fragment program the pass number and iteration number.
point_size
This setting allows you to change the size of points when rendering a point list, or a list of point
sprites. The interpretation of this command depends on the [point_size_attenuation], page 38
option - if it is off (the default), the point size is in screen pixels; if it is on, it is expressed as
normalised screen coordinates (1.0 is the height of the screen) when the point is at the origin.
NOTE: Some drivers have an upper limit on the size of points they support - this can even
vary between APIs on the same card! Don’t rely on point sizes that cause the points to get very
large on screen, since they may get clamped on some cards. Upper sizes can range from 64 to
256 pixels.
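A minimal sketch of a pass rendering fixed-size points (the material name is made up):

```
material PointCloud // hypothetical name
{
    technique
    {
        pass
        {
            // each point is drawn 8 screen pixels across, since
            // point_size_attenuation defaults to off
            point_size 8
        }
    }
}
```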
point_sprites
This setting specifies whether or not hardware point sprite rendering is enabled for this pass.
Enabling it means that a point list is rendered as a list of quads rather than a list of dots. It is
very useful to use this option if you’re using a BillboardSet and only need to use point oriented
billboards which are all of the same size. You can also use it for any other point list render.
point_size_attenuation
Defines whether point size is attenuated with view-space distance, and in what fashion.
You only have to provide the final 3 parameters if you turn attenuation on. The formula for
attenuation is that the size of the point is multiplied by 1 / (constant + linear * dist + quadratic
* dist^2); therefore turning it off is equivalent to (constant = 1, linear = 0, quadratic = 0) and
standard perspective attenuation is (constant = 0, linear = 1, quadratic = 0). The latter is
assumed if you leave out the final 3 parameters when you specify 'on'.
Note that the resulting attenuated size is clamped to the minimum and maximum point size,
see the next section.
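Putting the above together, a sketch of a point-sprite pass with standard perspective attenuation (the clamp values are illustrative):

```
pass
{
    point_sprites on
    // size is in normalised screen coordinates once attenuation is on
    point_size 1
    // constant = 0, linear = 1, quadratic = 0: standard perspective attenuation
    point_size_attenuation on 0 1 0
    // clamp the attenuated size to a sensible range
    point_size_min 0.01
    point_size_max 1
}
```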
• [filtering], page 46
• [max_anisotropy], page 47
• [mipmap_bias], page 48
• [colour_op], page 48
• [colour_op_ex], page 49
• [colour_op_multipass_fallback], page 51
• [alpha_op_ex], page 51
• [env_map], page 51
• [scroll], page 52
• [scroll_anim], page 52
• [rotate], page 52
• [rotate_anim], page 53
• [scale], page 53
• [wave_xform], page 53
• [transform], page 54
• [binding_type], page 44
• [content_type], page 45
You can also use a nested 'texture_source' section in order to use a special add-in as a source
of texture data; see Chapter 6 [External Texture Sources], page 157 for details.
Attribute Descriptions
texture_alias
Sets the alias name for this texture unit.
Setting the texture alias name is useful if this material is to be inherited by other
materials and only the textures will be changed in the new material (See Section 3.1.12 [Texture
Aliases], page 87).
Default: If a texture unit has a name then the texture alias defaults to the texture unit name.
texture
Sets the name of the static texture image this layer will use.
This setting is mutually exclusive with the anim_texture attribute. Note that the texture
file name cannot include spaces. Those of you Windows users who like spaces in filenames, please
get over it and use underscores instead.
The 'type' parameter allows you to specify the type of texture to create - the default is '2d',
but you can override this; here's the full list:
1d A 1-dimensional texture; that is, a texture which is only 1 pixel high. These kinds
of textures can be useful when you need to encode a function in a texture and use
it as a simple lookup, perhaps in a fragment program. It is important that you
use this setting when you use a fragment program which uses 1-dimensional texture
coordinates, since GL requires you to use a texture type that matches (D3D will let
you get away with it, but you ought to plan for cross-compatibility). Your texture
widths should still be a power of 2 for best compatibility and performance.
2d The default type which is assumed if you omit it, your texture has a width and a
height, both of which should preferably be powers of 2, and if you can, make them
square because this will look best on the most hardware. These can be addressed
with 2D texture coordinates.
3d A 3-dimensional texture, i.e. a volume texture. Your texture has a width, a height
and a depth, all of which should preferably be powers of 2. These can be addressed
with 3d texture coordinates, i.e. through a pixel shader.
cubic This texture is made up of 6 2D textures which are pasted around the inside of
a cube. Can be addressed with 3D texture coordinates and are useful for cubic
reflection maps and normal maps.
The ’numMipMaps’ option allows you to specify the number of mipmaps to generate for this
texture. The default is ’unlimited’ which means mips down to 1x1 size are generated. You can
specify a fixed number (even 0) if you like instead. Note that if you use the same texture in
many material scripts, the number of mipmaps generated will conform to the number specified
in the first texture unit used to load the texture - so be consistent with your usage.
The ’alpha’ option allows you to specify that a single channel (luminance) texture should be
loaded as alpha, rather than the default which is to load it into the red channel. This can be
helpful if you want to use alpha-only textures in the fixed function pipeline.
Default: none
The <PixelFormat> option allows you to specify the desired pixel format of the texture to
create, which may be different to the pixel format of the texture file being loaded. Bear in
mind that the final pixel format will be constrained by hardware capabilities so you may not
get exactly what you ask for. The available options are:
PF_L8 8-bit pixel format, all bits luminance.
PF_L16 16-bit pixel format, all bits luminance.
PF_A8 8-bit pixel format, all bits alpha.
PF_A4L4 8-bit pixel format, 4 bits alpha, 4 bits luminance.
PF_BYTE_LA
2-byte pixel format, 1 byte luminance, 1 byte alpha.
PF_R5G6B5
16-bit pixel format, 5 bits red, 6 bits green, 5 bits blue.
PF_B5G6R5
16-bit pixel format, 5 bits blue, 6 bits green, 5 bits red.
PF_R3G3B2
8-bit pixel format, 3 bits red, 3 bits green, 2 bits blue.
PF_A4R4G4B4
16-bit pixel format, 4 bits for alpha, red, green and blue.
PF_A1R5G5B5
16-bit pixel format, 1 bit for alpha, 5 bits for red, green and blue.
PF_R8G8B8
24-bit pixel format, 8 bits for red, green and blue.
PF_B8G8R8
24-bit pixel format, 8 bits for blue, green and red.
PF_A8R8G8B8
32-bit pixel format, 8 bits for alpha, red, green and blue.
PF_A8B8G8R8
32-bit pixel format, 8 bits for alpha, blue, green and red.
PF_B8G8R8A8
32-bit pixel format, 8 bits for blue, green, red and alpha.
PF_R8G8B8A8
32-bit pixel format, 8 bits for red, green, blue and alpha.
PF_X8R8G8B8
32-bit pixel format, 8 bits for red, 8 bits for green, 8 bits for blue, like PF_A8R8G8B8
but alpha will get discarded.
PF_X8B8G8R8
32-bit pixel format, 8 bits for blue, 8 bits for green, 8 bits for red, like PF_A8B8G8R8
but alpha will get discarded.
PF_A2R10G10B10
32-bit pixel format, 2 bits for alpha, 10 bits for red, green and blue.
PF_A2B10G10R10
32-bit pixel format, 2 bits for alpha, 10 bits for blue, green and red.
PF_FLOAT16_R
16-bit pixel format, 16 bits (float) for red.
PF_FLOAT16_RGB
48-bit pixel format, 16 bits (float) for red, 16 bits (float) for green, 16 bits (float)
for blue.
PF_FLOAT16_RGBA
64-bit pixel format, 16 bits (float) for red, 16 bits (float) for green, 16 bits (float)
for blue, 16 bits (float) for alpha.
PF_FLOAT32_R
32-bit pixel format, 32 bits (float) for red.
PF_FLOAT32_RGB
96-bit pixel format, 32 bits (float) for red, 32 bits (float) for green, 32 bits (float)
for blue.
PF_FLOAT32_RGBA
128-bit pixel format, 32 bits (float) for red, 32 bits (float) for green, 32 bits (float)
for blue, 32 bits (float) for alpha.
PF_SHORT_RGBA
64-bit pixel format, 16 bits for red, green, blue and alpha.
The 'gamma' option informs the renderer that you want the graphics hardware to perform
gamma correction on the texture values as they are sampled for rendering. This is only
applicable for textures which have 8-bit colour channels (e.g. PF_R8G8B8). Often, 8-bit-per-channel
textures will be stored in gamma space in order to increase the precision of the darker colours
(http://en.wikipedia.org/wiki/Gamma_correction), but this can throw off blending and
filtering calculations, since they assume linear-space colour values. For the best quality shading,
you may want to enable gamma correction so that the hardware converts the texture values
to linear space for you automatically when sampling the texture; the calculations in the
pipeline can then be done in a reliable linear colour space. When rendering to a final 8-bit-per-channel
display, you'll also want to convert back to gamma space, which can be done in your shader (by
raising to the power 1/2.2) or by enabling gamma correction on the texture being rendered
to or on the render window. Note that the 'gamma' option on textures is applied on loading the
texture, so it must be specified consistently if you use this texture in multiple places.
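Drawing the options above together, some sketched texture declarations (the file names are illustrative, not from the manual):

```
texture_unit
{
    // 2d type, mipmaps capped at 4 levels, sampled with hardware
    // gamma correction
    texture wood.jpg 2d 4 gamma
}
texture_unit
{
    // luminance image loaded into the alpha channel, requested as PF_A8
    texture glow_mask.png 2d unlimited alpha PF_A8
}
```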
anim_texture
Sets the images to be used in an animated texture layer. In this case an animated texture
layer means one which has multiple frames, each of which is a separate image file. There are 2
formats, one for implicitly determined image names, one for explicitly named images.
Format1 (short): anim_texture <base_name> <num_frames> <duration>
Example: anim_texture flame.jpg 5 2.5
This sets up an animated texture layer made up of 5 frames named flame_0.jpg, flame_1.jpg,
flame_2.jpg etc, with an animation length of 2.5 seconds (2fps). If duration is set to 0, then no
automatic transition takes place and frames must be changed manually in code.
Format2 (long): anim_texture <frame1> <frame2> ... <duration>
Example: anim_texture flame_0.jpg flame_1.jpg flame_2.jpg flame_3.jpg flame_4.jpg 2.5
This sets up the same duration animation but from 5 separately named image files. The first
format is more concise, but the second is provided if you cannot make your images conform to
the naming standard required for it.
Chapter 3: Scripts 44
Default: none
cubic_texture
Sets the images used in a cubic texture, i.e. one made up of 6 individual images making up the
faces of a cube. These kinds of textures are used for reflection maps (if hardware supports cubic
reflection maps) or skyboxes. There are 2 formats, a brief format expecting image names of a
particular format and a more flexible but longer format for arbitrarily named textures.
Format1 (short): cubic_texture <base_name> <combinedUVW|separateUV>
The base name in this format is something like 'skybox.jpg', and the system will expect
you to provide skybox_fr.jpg, skybox_bk.jpg, skybox_up.jpg, skybox_dn.jpg, skybox_lf.jpg, and
skybox_rt.jpg for the individual faces.
Format2 (long): cubic_texture <front> <back> <left> <right> <up> <down> separateUV
In this case each face is specified explicitly, in case you don't want to conform to the image
naming standards above. You can only use this for the separateUV version, since the
combinedUVW version requires a single texture name to be assigned to the combined 3D texture (see
below).
Default: none
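For example (file names illustrative), the two formats side by side:

```
// short format: expects skybox_fr.jpg, skybox_bk.jpg, skybox_lf.jpg,
// skybox_rt.jpg, skybox_up.jpg and skybox_dn.jpg on the resource path
cubic_texture skybox.jpg separateUV

// long format: each face named explicitly (separateUV only)
cubic_texture front.jpg back.jpg left.jpg right.jpg up.jpg down.jpg separateUV
```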
binding_type
Tells this texture unit to bind to either the fragment processing unit or the vertex processing
unit (for Section 3.1.10 [Vertex Texture Fetch], page 83).
Chapter 3: Scripts 45
content_type
Tells this texture unit where it should get its content from. The default is to get texture
content from a named texture, as defined with the [texture], page 40, [cubic_texture], page 44,
and [anim_texture], page 43 attributes. However, you can also pull texture information from other
automated sources. The options are:
named The default option, this derives texture content from a texture name, loaded by
ordinary means from a file or having been manually created with a given name.
shadow This option allows you to pull in a shadow texture, and is only valid when you use
texture shadows and one of the 'custom sequence' shadowing types (See Chapter 7
[Shadows], page 160). The shadow texture in question will be from the 'n'th closest
light that casts shadows, unless you use light-based pass iteration or the start_light
option, which may start the light index higher. When you use this option in multiple
texture units within the same pass, each one references the next shadow texture. The
shadow texture index is reset in the next pass, in case you want to take into account
the same shadow textures again in another pass (e.g. a separate specular / gloss
pass). By using this option, the correct light frustum projection is set up for you for
use in fixed-function; if you use shaders, just reference the texture_viewproj_matrix
auto parameter in your shader.
compositor
This option allows you to reference a texture from a compositor, and is only valid
when the pass is rendered within a compositor sequence. This can be either in a
render scene directive inside a compositor script, or in a general pass in a viewport
that has a compositor attached. Note that this is a reference only, meaning that it
does not change the render order. You must make sure that the order is reasonable
for what you are trying to achieve (for example, texture pooling might cause the
referenced texture to be overwritten by something else by the time it is referenced).
The extra parameters for the content type are only required for this type:
The first is the name of the compositor to reference. (Required)
The second is the name of the texture to reference within the compositor. (Required)
The third is the index of the texture to take, in case of an MRT. (Optional)
Format: content_type <named|shadow|compositor> [<Referenced Compositor Name>] [<Referenced
Texture Name>] [<Referenced MRT Index>]
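A sketch of the compositor content type in use (the compositor and texture names are hypothetical):

```
texture_unit
{
    // reference the first surface of a texture called 'rt0' defined in a
    // hypothetical compositor named 'PostFX'; valid only when this pass
    // runs inside that compositor's sequence
    content_type compositor PostFX rt0 0
}
```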
filtering
Sets the type of texture filtering used when magnifying or minifying a texture. There are 2
formats to this attribute, the simple format where you simply specify the name of a predefined
set of filtering options, and the complex format, where you individually set the minification,
magnification, and mip filters yourself.
Simple Format
Format: filtering <none|bilinear|trilinear|anisotropic>
Default: filtering bilinear
With this format, you only need to provide a single parameter which is one of the following:
none No filtering or mipmapping is used. This is equivalent to the complex format
'filtering point point none'.
bilinear 2x2 box filtering is performed when magnifying or reducing a texture, and a mipmap
is picked from the list but no filtering is done between the levels of the mipmaps.
This is equivalent to the complex format ’filtering linear linear point’.
trilinear 2x2 box filtering is performed when magnifying and reducing a texture, and the
closest 2 mipmaps are filtered together. This is equivalent to the complex format
’filtering linear linear linear’.
anisotropic
This is the same as 'trilinear', except the filtering algorithm takes account of the
slope of the triangle in relation to the camera rather than simply doing a 2x2 pixel
filter in all cases. This makes triangles at acute angles look less fuzzy. Equivalent
to the complex format 'filtering anisotropic anisotropic linear'. Note that in order
for this to make any difference, you must also set the [max_anisotropy], page 47
attribute too.
Complex Format
Format: filtering <minification> <magnification> <mip>
Default: filtering linear linear point
This format gives you complete control over the minification, magnification, and mip filters.
Each parameter can be one of the following:
none Nothing - this is only a valid option for the 'mip' filter, since it turns mipmapping off
completely. The lowest setting for min and mag is 'point'.
point Pick the closest pixel in min or mag modes. In mip mode, this picks the closest
matching mipmap.
linear Filter a 2x2 box of pixels around the closest one. In the 'mip' filter this enables
filtering between mipmap levels.
anisotropic
Only valid for min and mag modes; makes the filter compensate for camera-space
slope of the triangles. Note that in order for this to make any difference, you must
also set the [max_anisotropy], page 47 attribute too.
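For instance, the two formats expressing increasingly strong filtering (texture names illustrative):

```
texture_unit
{
    texture floor.jpg
    // simple format: trilinear = 'filtering linear linear linear'
    filtering trilinear
}
texture_unit
{
    texture floor.jpg
    // complex format: anisotropic min/mag; needs max_anisotropy > 1
    // to have any visible effect
    filtering anisotropic anisotropic linear
    max_anisotropy 8
}
```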
max_anisotropy
Sets the maximum degree of anisotropy that the renderer will try to compensate for when filtering
textures. The degree of anisotropy is the ratio between the height of the texture segment visible
in a screen space region versus the width - so for example a floor plane, which stretches on into
the distance and thus the vertical texture coordinates change much faster than the horizontal
ones, has a higher anisotropy than a wall which is facing you head on (which has an anisotropy
of 1 if your line of sight is perfectly perpendicular to it). You should set the max anisotropy
value to something greater than 1 to begin compensating; higher values can compensate for
more acute angles. The maximum value is determined by the hardware, but it is usually 8 or
16.
In order for this to be used, you have to set the minification and/or the magnification [filtering],
page 46 option on this texture to anisotropic.
Format: max_anisotropy <value>
Default: max_anisotropy 1
mipmap_bias
Sets the bias value applied to the mipmapping calculation, thus allowing you to alter the decision
of which level of detail of the texture to use at any distance. The bias value is applied after
the regular distance calculation, and adjusts the mipmap level by 1 level for each unit of bias.
Negative bias values force larger mip levels to be used, positive bias values force smaller mip
levels to be used. The bias is a floating point value so you can use values in between whole
numbers for fine tuning.
In order for this option to be used, your hardware has to support mipmap biasing (exposed
through the render system capabilities), and your minification [filtering], page 46 has to be set
to point or linear.
Format: mipmap_bias <value>
Default: mipmap_bias 0
colour_op
Determines how the colour of this texture layer is combined with the one below it (or the lighting
effect on the geometry if this is the first layer).
This method is the simplest way to blend texture layers, because it requires only one parameter,
gives you the most common blending types, and automatically sets up 2 blending methods:
one for when single-pass multitexturing hardware is available, and another for when it is not and
the blending must be achieved through multiple rendering passes. It is, however, quite limited
and does not expose the more flexible multitexturing operations, simply because these can't be
automatically supported in multipass fallback mode. If you want to use the fancier options, use
[colour_op_ex], page 49, but you'll either have to be sure that enough multitexturing units will
be available, or you should explicitly set a fallback using [colour_op_multipass_fallback], page 51.
colour_op_ex
This is an extended version of the [colour_op], page 48 attribute which allows extremely detailed
control over the blending applied between this and earlier layers. Multitexturing hardware can
apply more complex blending operations than multipass blending, but you are limited to the
number of texture units which are available in hardware.
See the IMPORTANT note below about the issues between multipass and multitexturing
that using this method can create. Texture colour operations determine how the final colour
of the surface appears when rendered. Texture units are used to combine colour values from
various sources (e.g. the diffuse colour of the surface from lighting calculations, combined with
the colour of the texture). This method allows you to specify the ’operation’ to be used, i.e. the
calculation such as adds or multiplies, and which values to use as arguments, such as a fixed
value or a value from a previous calculation.
Operation options
source1 Use source1 without modification.
source2 Use source2 without modification.
modulate Multiply source1 and source2 together.
modulate_x2
Multiply source1 and source2 together, then by 2 (brightening).
modulate_x4
Multiply source1 and source2 together, then by 4 (brightening).
add Add source1 and source2 together.
add_signed
Add source1 and source2, then subtract 0.5.
add_smooth
Add source1 and source2, then subtract the product.
subtract Subtract source2 from source1.
blend_diffuse_alpha
Use interpolated alpha value from vertices to scale source1, then add
source2 scaled by (1-alpha).
blend_texture_alpha
As blend_diffuse_alpha, but use alpha from the texture.
blend_current_alpha
As blend_diffuse_alpha, but use current alpha from previous stages.
blend_manual
As blend_diffuse_alpha, but use a manual constant blend value specified
as an extra parameter rather than the vertex alpha.
For example, 'modulate' takes the colour results of the previous layer and multiplies them with
the new texture being applied. Bear in mind that colours are RGB values from 0.0-1.0, so
multiplying them together will result in values in the same range, 'tinted' by the multiply. Note
however that a straight multiply normally has the effect of darkening the textures - for this
reason there are brightening operations like modulate_x2. Note that because of limitations in
some underlying APIs (Direct3D included), the 'texture' argument can only be used as the
first argument, not the second.
Note that the last parameters are only required if you decide to pass values manually into the
operation, i.e. you only need to fill them in if you use the 'blend_manual' operation.
IMPORTANT: Ogre tries to use multitexturing hardware to blend texture layers together.
However, if it runs out of texturing units (e.g. 2 on a GeForce2, 4 on a GeForce3) it has to
fall back on multipass rendering, i.e. rendering the same object multiple times with different
textures. This is both less efficient and supports a smaller range of blending operations.
For this reason, if you use this method you really should set the
colour_op_multipass_fallback attribute to specify which effect you want to fall back on if
sufficient hardware is not available (the default is just 'modulate', which is unlikely to be what you
want if you're doing swanky blending here). If you wish to avoid having to do this, use the
simpler colour_op attribute, which allows less flexible blending options but sets up the multipass
fallback automatically, since it only allows operations which have direct multipass equivalents.
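A sketch of an extended blend with an explicit fallback (the texture name is illustrative):

```
texture_unit
{
    texture detail.png
    // brightened multiply of this texture with the previous layer
    colour_op_ex modulate_x2 src_texture src_current
    // scene_blend-style fallback used if multitexturing units run out;
    // dest_colour zero is the multipass equivalent of a plain modulate
    colour_op_multipass_fallback dest_colour zero
}
```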
colour_op_multipass_fallback
Because some of the effects you can create using colour op ex are only supported under mul-
titexturing hardware, if the hardware is lacking the system must fallback on multipass rendering,
which unfortunately doesn’t support as many effects. This attribute is for you to specify the
fallback operation which most suits you.
The parameters are the same as in the scene_blend attribute; this is because multipass
rendering IS effectively scene blending, since each layer is rendered on top of the last using the
same mechanism as making an object transparent - it's just being rendered in the same place
repeatedly to get the multitexture effect. If you use the simpler (and less flexible) colour_op
attribute you don't need to set this, as the system sets up the fallback for you.
alpha_op_ex
Behaves in exactly the same way as [colour_op_ex], page 49, except that it determines how
alpha values are combined between texture layers rather than colour values. The only difference
is that the 2 manual colours at the end of colour_op_ex are just single floating-point values in
alpha_op_ex.
env_map
Turns on/off texture coordinate effect that makes this layer an environment map.
Environment maps make an object look reflective by using automatic texture coordinate
generation depending on the relationship between the object's vertices or normals and the eye.
spherical A spherical environment map. Requires a single texture which is either a fish-eye
lens view of the reflected scene, or some other texture which looks good as a spherical
map (a texture of glossy highlights is popular especially in car sims). This effect is
based on the relationship between the eye direction and the vertex normals of the
object, so works best when there are a lot of gradually changing normals, i.e. curved
objects.
planar Similar to the spherical environment map, but the effect is based on the position
of the vertices in the viewport rather than vertex normals. This effect is therefore
useful for planar geometry (where a spherical env map would not look good because
the normals are all the same) or objects without normals.
cubic_reflection
A more advanced form of reflection mapping which uses a group of 6 textures making
up the inside of a cube, each of which is a view of the scene down each axis. Works
extremely well in all cases but has a higher technical requirement from the card
than spherical mapping. Requires that you bind a [cubic_texture], page 44 to this
texture unit and use the 'combinedUVW' option.
cubic_normal
Generates 3D texture coordinates containing the camera-space normal vector from
the normal information held in the vertex data. Again, full use of this feature
requires a [cubic_texture], page 44 with the 'combinedUVW' option.
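Sketches of the two common cases (texture names illustrative):

```
texture_unit
{
    // single glossy-highlight texture used as a spherical map
    texture shine.jpg
    env_map spherical
}
texture_unit
{
    // 6-face cube map addressed with 3D coordinates
    cubic_texture scene_reflect.jpg combinedUVW
    env_map cubic_reflection
}
```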
scroll
Sets a fixed scroll offset for the texture.
This method offsets the texture in this layer by a fixed amount. Useful for small adjustments
without altering texture coordinates in models. However, if you wish to have an animated scroll
effect, see the [scroll_anim], page 52 attribute.
scroll_anim
Sets up an animated scroll for the texture layer. Useful for creating fixed-speed scrolling effects
on a texture layer (for varying scroll speeds, see [wave_xform], page 53).
rotate
Rotates a texture to a fixed angle. This attribute changes the rotational orientation of a
texture to a fixed angle, useful for fixed adjustments. If you wish to animate the rotation, see
[rotate_anim], page 53.
rotate_anim
Sets up an animated rotation effect of this layer. Useful for creating fixed-speed rotation
animations (for varying speeds, see [wave_xform], page 53).
scale
Adjusts the scaling factor applied to this texture layer. Useful for adjusting the size of textures
without making changes to geometry. This is a fixed scaling factor; if you wish to animate this,
see [wave_xform], page 53.
Valid scale values are greater than 0, with a scale factor of 2 making the texture twice as big
in that dimension etc.
wave_xform
Sets up a transformation animation based on a wave function. Useful for more advanced texture
layer transform effects. You can add multiple instances of this attribute to a single texture layer
if you wish.
Format: wave_xform <xform_type> <wave_type> <base> <frequency> <phase> <amplitude>
xform_type
scroll_x Animate the x scroll value
scroll_y Animate the y scroll value
rotate Animate the rotate value
scale_x Animate the x scale value
scale_y Animate the y scale value
wave_type
sine A typical sine wave which smoothly loops between min and max values
triangle An angled wave which increases & decreases at constant speed, changing
instantly at the extremes
square Max for half the wavelength, min for the rest with instant transition
between
sawtooth Gradual steady increase from min to max over the period with an instant
return to min at the end.
inverse sawtooth
Gradual steady decrease from max to min over the period, with an
instant return to max at the end.
base The base value, the minimum if amplitude > 0, the maximum if amplitude < 0
frequency The number of wave iterations per second, i.e. speed
phase Offset of the wave start
amplitude The size of the wave
The range of the output of the wave will be [base, base + amplitude]. For example,
'wave_xform scale_x sine 1.0 0.2 0.0 4.0' scales the texture in the x direction between 1 (normal
size) and 5 along a sine wave at one cycle every 5 seconds (0.2 waves per second).
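A further sketch combining a steady scroll with a gentle sine pulse on the scale (values and texture name illustrative):

```
texture_unit
{
    texture water.jpg
    // constant scroll of 0.1 texture units per second in u
    scroll_anim 0.1 0.0
    // scale oscillates between 1.0 and 1.2, one cycle every 2 seconds
    wave_xform scale_x sine 1.0 0.5 0.0 0.2
    wave_xform scale_y sine 1.0 0.5 0.0 0.2
}
```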
transform
This attribute allows you to specify a static 4x4 transformation matrix for the texture unit, thus
replacing the individual scroll, rotate and scale attributes mentioned above.
Format: transform m00 m01 m02 m03 m10 m11 m12 m13 m20 m21 m22 m23 m30 m31 m32
m33
The indexes of the 4x4 matrix value above are expressed as m<row><col>.
The definition of a program can either be embedded in the .material script itself (in which
case it must precede any references to it in the script), or if you wish to use the same program
across multiple .material files, you can define it in an external .program script. You define the
program in exactly the same way whether you use a .program script or a .material script, the
only difference is that all .program scripts are guaranteed to have been parsed before all .material
scripts, so you can guarantee that your program has been defined before any .material script
that might use it. Just like .material scripts, .program scripts will be read from any location
which is on your resource path, and you can define many programs in a single script.
Vertex, geometry and fragment programs can be low-level (i.e. assembler code written to the
specification of a given low-level syntax such as vs_1_1 or arbfp1) or high-level, such as DirectX9
HLSL, the OpenGL Shading Language (GLSL), or nVidia's Cg language (See [High-level Programs], page 58).
High-level languages give you a number of advantages, such as being able to write more intuitive
code, and possibly being able to target multiple architectures with a single program (for example,
the same Cg program might be able to be used in both D3D and GL, whilst the equivalent
low-level programs would require separate techniques, each targeting a different API). High-level
programs also allow you to use named parameters instead of simply indexed ones; although
parameters are not defined here, they are used in the Pass.
ps_2_x DirectX pixel shader (i.e. fragment program) assembler syntax. This is basically
ps_2_0 with a higher number of instructions.
Supported cards: ATI Radeon X series, nVidia GeForce FX and 6 series
arbfp1 This is the OpenGL standard assembler format for fragment programs. It's roughly
equivalent to ps_2_0, which means that not all cards that support basic pixel shaders
under DirectX support arbfp1 (for example neither the GeForce3 nor the GeForce4
supports arbfp1, but they do support ps_1_1).
fp20 This is an nVidia-specific OpenGL fragment syntax which is a superset of ps_1_3. It
allows you to use the 'nvparse' format for basic fragment programs. It actually uses
NV_texture_shader and NV_register_combiners to provide functionality equivalent
to DirectX's ps_1_1 under GL, but only for nVidia cards. However, since ATI
cards adopted arbfp1 a little earlier than nVidia, it is mainly nVidia cards like the
GeForce3 and GeForce4 that this will be useful for. You can find more information
about nvparse at http://developer.nvidia.com/object/nvparse.html.
fp30 Another nVidia-specific OpenGL fragment shader syntax. It is a superset of ps_2_0,
which is supported on nVidia GeForce FX 5 series and higher. ATI Radeon HD
2000+ also supports it.
You can get a definitive list of the syntaxes supported by the current card by calling
GpuProgramManager::getSingleton().getSupportedSyntax().
default_params
{
param_named_auto lightPosition light_position_object_space 0
param_named_auto eyePosition camera_position_object_space
param_named_auto worldViewProj worldviewproj_matrix
param_named shininess float 10
}
}
The syntax of the parameter definition is exactly the same as when you define parameters
when using programs, See [Program Parameter Specification], page 70. Defining default
parameters allows you to avoid rebinding common parameters repeatedly (clearly in the above
example, all but 'shininess' are unlikely to change between uses of the program), which makes
your material declarations shorter.
Format: shared_param_named <param_name> <param_type> [<[array_size]>] [<initial_values>]
The param_name must be unique within the set, and the param_type can be any one of
float, float2, float3, float4, int, int2, int3, int4, matrix2x2, matrix2x3, matrix2x4, matrix3x2,
matrix3x3, matrix3x4, matrix4x2, matrix4x3 and matrix4x4. The array_size option allows you
to define arrays of param_type should you wish, and if present must be a number enclosed in
square brackets (and note, must be separated from the param_type with whitespace). If you
wish, you can also initialise the parameters by providing a list of values.
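For example, a shared parameter set using this format might look like the following (the set and parameter names are illustrative):

```
shared_params YourSharedParamsName
{
    shared_param_named mySharedParam1 float4 0.1 0.2 0.3 0.4
    shared_param_named myArrayParam float [8] 0 0 0 0 0 0 0 0
}
```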
Once you have defined the shared parameters, you can reference them inside default_params
and params blocks using [shared_params_ref], page 81. You can also obtain a reference to
them in your code via GpuProgramManager::getSharedParameters, and update the values for
all instances using them.
High-level Programs
Support for high-level vertex and fragment programs is provided through plugins; this is to make
sure that an application using OGRE can use as little or as much of the high-level program
functionality as it likes. OGRE currently supports 3 high-level program types, Cg (Section 3.1.5
[Cg], page 60) (an API- and card-independent, high-level language which lets you write
programs for both OpenGL and DirectX for lots of cards), DirectX 9 High-Level Shader Language
(Section 3.1.6 [HLSL], page 61), and OpenGL Shading Language (Section 3.1.7 [GLSL], page 62).
HLSL can only be used with the DirectX rendersystem, and GLSL can only be used with the GL
rendersystem. Cg can be used with both, although experience has shown that more advanced
programs, particularly fragment programs which perform a lot of texture fetches, can produce
better code when written in the rendersystem-specific shader language.
One way to support both HLSL and GLSL is to include separate techniques in the material
script, each one referencing separate programs. However, if the programs are basically the same,
with the same parameters, and the techniques are complex this can bloat your material scripts
with duplication fairly quickly. Instead, if the only difference is the language of the vertex &
fragment program you can use OGRE’s Section 3.1.8 [Unified High-level Programs], page 66 to
automatically pick a program suitable for your rendersystem whilst using a single technique.
If you use stencil shadows, then any vertex programs which do vertex deformation can be
a problem, because stencil shadows are calculated on the CPU, which does not have access to
the modified vertices. If the vertex program is doing standard skeletal animation, this is ok (see
section above) because Ogre knows how to replicate the effect in software, but any other vertex
deformation cannot be replicated, and you will either have to accept that the shadow will not
reflect this deformation, or you should turn off shadows for that object.
If you use texture shadows, then vertex deformation is acceptable; however, when rendering
the object into a shadow texture (the shadow caster pass), the shadow has to be rendered in a
solid colour (linked to the ambient colour for modulative shadows, black for additive shadows).
You must therefore provide an alternative vertex program, so Ogre provides you with a way of
specifying one to use when rendering the caster, See [Shadows and Vertex Programs], page 81.
3.1.5 Cg programs
In order to define Cg programs, you have to load Plugin_CgProgramManager.so/.dll at
startup, either through plugins.cfg or through your own plugin loading code. They are very easy
to define:
fragment_program myCgFragmentProgram cg
{
source myCgFragmentProgram.cg
entry_point main
profiles ps_2_0 arbfp1
}
There are a few differences between this and the assembler program - to begin with, we declare
that the fragment program is of type ’cg’ rather than ’asm’, which indicates that it’s a high-level
program using Cg. The ’source’ parameter is the same, except this time it’s referencing a Cg
source file instead of a file of assembler.
Here is where things start to change. Firstly, we need to define an 'entry_point', which is the
name of a function in the Cg program which will be the first one called as part of the fragment
program. Unlike assembler programs, which just run top-to-bottom, Cg programs can include
multiple functions and as such you must specify the one which starts the ball rolling.
Next, instead of a fixed ’syntax’ parameter, you specify one or more ’profiles’; profiles are how
Cg compiles a program down to the low-level assembler. The profiles have the same names as
the assembler syntax codes mentioned above; the main difference is that you can list more than
one, thus allowing the program to be compiled down to more low-level syntaxes so you can write
a single high-level program which runs on both D3D and GL. You are advised to just enter the
simplest profiles under which your programs can be compiled in order to give it the maximum
compatibility. The ordering also matters; if a card supports more than one syntax then the one
listed first will be used.
Lastly, there is a final option called 'compile_arguments', where you can specify arguments
exactly as you would to the cgc command-line compiler, should you wish to.
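For instance, extending the earlier Cg fragment program definition (the extra symbol is purely illustrative):

```
fragment_program myCgFragmentProgram cg
{
    source myCgFragmentProgram.cg
    entry_point main
    profiles ps_2_0 arbfp1
    compile_arguments -DUSE_FOG
}
```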
Important Matrix Ordering Note: One thing to bear in mind is that HLSL allows you
to use 2 different ways to multiply a vector by a matrix - mul(v,m) or mul(m,v). The only
difference between them is that the matrix is effectively transposed. You should use mul(m,v)
with the matrices passed in from Ogre - this agrees with the shaders produced from tools
like RenderMonkey, and is consistent with Cg too, but disagrees with the Dx9 SDK and FX
Composer which use mul(v,m) - you will have to switch the parameters to mul() in those shaders.
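The transposition relationship can be demonstrated with a small Python sketch (purely illustrative, not Ogre code): mul(m, v) on a matrix gives the same result as mul(v, m) on its transpose.

```python
def mat_vec(m, v):
    # mul(m, v): each row of m dotted with v
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def vec_mat(v, m):
    # mul(v, m): v dotted with each column of m
    return [sum(v[r] * m[r][c] for r in range(4)) for c in range(4)]

def transpose(m):
    return [[m[c][r] for c in range(4)] for r in range(4)]

M = [[1, 2, 0, 0],
     [0, 1, 0, 3],
     [0, 0, 1, 0],
     [5, 0, 0, 1]]
v = [1, 2, 3, 1]

# Identical results: the only difference between the two forms
# is an effective transpose of the matrix.
assert mat_vec(M, v) == vec_mat(v, transpose(M))
```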
Note that if you use the float3x4 / matrix3x4 type in your shader, bound to an OGRE auto-
definition (such as bone matrices), you should use the column_major_matrices = false option
(discussed below) in your program definition. This is because OGRE passes float3x4 as row-
major to save constant space (3 float4's rather than 4 float4's with only the top 3 values used)
and this tells OGRE to pass all matrices like this, so that you can use mul(m,v) consistently
for all calculations. OGRE will also tell the shader to compile in row-major form (you don't
have to set the /Zpr compile option or the #pragma pack_matrix(row_major) directive, OGRE
does this for you). Note that passing bones in float4x3 form is not supported by OGRE, but
you don't need it given the above.
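As a sketch, a hardware-skinning HLSL program definition using this option might look like the following (source file, entry point and target are illustrative):

```
vertex_program mySkinningShader hlsl
{
    source skinning.hlsl
    entry_point main_vp
    target vs_2_0
    column_major_matrices false
}
```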
Advanced options
GLSL supports the use of modular shaders. This means you can write GLSL external func-
tions that can be used in multiple shaders.
vertex_program myExternalGLSLFunction1 glsl
{
source myExternalGLSLfunction1.txt
}
vertex_program myExternalGLSLFunction2 glsl
{
source myExternalGLSLfunction2.txt
}
vertex_program myGLSLShader glsl
{
source myGLSLfunction.txt
attach myExternalGLSLFunction1 myExternalGLSLFunction2
}
uniform sampler2D diffuseMap;
varying vec2 UV;
void main(void)
{
gl_FragColor = texture2D(diffuseMap, UV);
}
In material script:
fragment_program myFragmentShader glsl
{
source example.frag
}
material exampleGLSLTexturing
{
technique
{
pass
{
fragment_program_ref myFragmentShader
{
param_named diffuseMap int 0
}
texture_unit
{
texture myTexture.jpg 2d
}
}
}
}
An index value of 0 refers to the first texture unit in the pass, an index value of 1 refers to
the second unit in the pass and so on.
Matrix parameters
Here are some examples of passing matrices to GLSL mat2, mat3, mat4 uniforms:
material exampleGLSLmatrixUniforms
{
technique matrix_passing
{
pass examples
{
vertex_program_ref myVertexShader
{
// mat4 uniform
param_named OcclusionMatrix matrix4x4 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0
// or
param_named ViewMatrix float16 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0
// mat3
param_named TextRotMatrix float9 1 0 0 0 1 0 0 0 1
}
fragment_program_ref myFragmentShader
{
// mat2 uniform
param_named skewMatrix float4 0.5 0 -0.5 1.0
}
}
}
}
In addition to the built-in attributes described in section 7.3 of the GLSL manual, Ogre
supports a number of automatically bound custom vertex attributes. There are some drivers
that do not behave correctly when mixing built-in vertex attributes like gl_Normal and custom
vertex attributes, so for maximum compatibility you may well wish to use all custom attributes
in shaders where you need at least one (e.g. for skeletal animation).
vertex Binds VES_POSITION, declare as 'attribute vec4 vertex;'.
normal Binds VES_NORMAL, declare as 'attribute vec3 normal;'.
colour Binds VES_DIFFUSE, declare as 'attribute vec4 colour;'.
secondary_colour
Binds VES_SPECULAR, declare as 'attribute vec4 secondary_colour;'.
uv0 - uv7 Binds VES_TEXTURE_COORDINATES, declare as 'attribute vec4 uv0;'. Note
that uv6 and uv7 share attributes with tangent and binormal respectively so cannot
both be present.
tangent Binds VES_TANGENT, declare as 'attribute vec3 tangent;'.
binormal Binds VES_BINORMAL, declare as 'attribute vec3 binormal;'.
blendIndices
Binds VES_BLEND_INDICES, declare as 'attribute vec4 blendIndices;'.
blendWeights
Binds VES_BLEND_WEIGHTS, declare as 'attribute vec4 blendWeights;'.
Preprocessor definitions
GLSL supports using preprocessor definitions in your code - some are defined by the imple-
mentation, but you can also define your own, say in order to use the same source code for a
few different variants of the same technique. In order to use this feature, include preprocessor
conditions in your GLSL code, of the kind #ifdef SYMBOL, #if SYMBOL==2 etc. Then in
your program definition, use the 'preprocessor_defines' option, following it with a string of
definitions. Definitions are separated by ';' or ',' and may optionally have a '=' operator within
them to specify a definition value. Those without an '=' will implicitly have a definition of 1.
For example:
For example:
// in your GLSL
#ifdef CLEVERTECHNIQUE
// some clever stuff here
#else
// normal technique
#endif
#if NUM_THINGS==2
// Some specific code
#else
// something else
#endif
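In the program definition, the two variants could then be produced from the same source file (all names here are illustrative):

```
vertex_program myShaderClever glsl
{
    source myShader.vert
    preprocessor_defines CLEVERTECHNIQUE=1,NUM_THINGS=2
}

vertex_program myShaderNormal glsl
{
    source myShader.vert
    preprocessor_defines NUM_THINGS=2
}
```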
vertex_program myVertexProgramHLSL hlsl
{
source prog.hlsl
entry_point main_vp
target vs_2_0
}
fragment_program myFragmentProgramHLSL hlsl
{
source prog.hlsl
entry_point main_fp
target ps_2_0
}
vertex_program myVertexProgramGLSL glsl
{
source prog.vert
}
fragment_program myFragmentProgramGLSL glsl
{
source prog.frag
default_params
{
param_named tex int 0
}
}
material SupportHLSLandGLSLwithoutUnified
{
// HLSL technique
technique
{
pass
{
vertex_program_ref myVertexProgramHLSL
{
param_named_auto worldViewProj world_view_proj_matrix
param_named_auto lightColour light_diffuse_colour 0
param_named_auto lightSpecular light_specular_colour 0
param_named_auto lightAtten light_attenuation 0
}
fragment_program_ref myFragmentProgramHLSL
{
}
}
}
// GLSL technique
technique
{
pass
{
vertex_program_ref myVertexProgramGLSL
{
param_named_auto worldViewProj world_view_proj_matrix
param_named_auto lightColour light_diffuse_colour 0
param_named_auto lightSpecular light_specular_colour 0
param_named_auto lightAtten light_attenuation 0
}
fragment_program_ref myFragmentProgramGLSL
{
}
}
}
}
And that’s a really small example. Everything you added to the HLSL technique, you’d have
to duplicate in the GLSL technique too. So instead, here’s how you’d do it with unified program
definitions:
vertex_program myVertexProgramHLSL hlsl
{
source prog.hlsl
entry_point main_vp
target vs_2_0
}
fragment_program myFragmentProgramHLSL hlsl
{
source prog.hlsl
entry_point main_fp
target ps_2_0
}
vertex_program myVertexProgramGLSL glsl
{
source prog.vert
}
fragment_program myFragmentProgramGLSL glsl
{
source prog.frag
default_params
{
param_named tex int 0
}
}
// Unified definition
vertex_program myVertexProgram unified
{
delegate myVertexProgramGLSL
delegate myVertexProgramHLSL
}
fragment_program myFragmentProgram unified
{
delegate myFragmentProgramGLSL
delegate myFragmentProgramHLSL
}
material SupportHLSLandGLSLwithUnified
{
// Single technique, using unified program references
technique
{
pass
{
vertex_program_ref myVertexProgram
{
param_named_auto worldViewProj world_view_proj_matrix
param_named_auto lightColour light_diffuse_colour 0
param_named_auto lightSpecular light_specular_colour 0
param_named_auto lightAtten light_attenuation 0
}
fragment_program_ref myFragmentProgram
{
}
}
}
}
At runtime, when myVertexProgram or myFragmentProgram are used, OGRE automatically
picks a real program to delegate to based on what's supported on the current hardware /
rendersystem. If none of the delegates is supported, the entire technique referencing the unified
program is marked as unsupported and the next technique in the material is checked for fallback,
just like normal. As your materials get larger, and you find you need to support HLSL and GLSL
specifically (or need to write multiple interface-compatible versions of a program for whatever
other reason), unified programs can really help reduce duplication.
As well as naming the program in question, you can also provide parameters to it. Here’s a
simple example:
vertex_program_ref myVertexProgram
{
param_indexed_auto 0 worldviewproj_matrix
param_indexed 4 float4 10.0 0 0 0
}
In this example, we bind a vertex program called ’myVertexProgram’ (which will be defined
elsewhere) to the pass, and give it 2 parameters, one is an ’auto’ parameter, meaning we do not
have to supply a value as such, just a recognised code (in this case it’s the world/view/projection
matrix which is kept up to date automatically by Ogre). The second parameter is a manually
specified parameter, a 4-element float. The indexes are described later.
The syntax for linking to a vertex program and to a fragment or geometry program is identical,
the only difference being that 'fragment_program_ref' and 'geometry_program_ref' are used
respectively instead of 'vertex_program_ref'.
For many situations vertex, geometry and fragment programs are associated with each other
in a pass, but this is not cast in stone. You could have a vertex program that can be used by
several different fragment programs. Another situation that arises is that you can mix fixed
pipeline and programmable pipeline (shaders) together. You could use the non-programmable
vertex fixed-function pipeline and then provide a fragment_program_ref in a pass, i.e. there
would be no vertex_program_ref section in the pass. The fragment program referenced in the
pass must meet the requirements as defined in the related API in order to read from the outputs
of the vertex fixed pipeline. You could also just have a vertex program that outputs to the
fragment fixed-function pipeline.
The requirements to read from or write to the fixed-function pipeline are similar between
rendering APIs (DirectX and OpenGL), but how it's actually done in each type of shader
(vertex, geometry or fragment) depends on the shader language. For HLSL (DirectX
API) and associated asm consult MSDN at http://msdn.microsoft.com/library/.
For GLSL (OpenGL), consult section 7.6 of the GLSL spec 1.1 available at
http://developer.3dlabs.com/documents/index.htm. The built-in varying vari-
ables provided in GLSL allow your program to read/write the fixed-function pipeline
varyings. For Cg consult the Language Profiles section in CgUsersManual.pdf that comes with
the Cg Toolkit available at http://developer.nvidia.com/object/cg_toolkit.html. For
HLSL and Cg it's the varying bindings that allow your shader programs to read/write the
fixed-function pipeline varyings.
Parameter specification
Parameters can be specified using one of the commands shown below. The same syntax is
used whether you are defining a parameter just for this particular use of the program, or when
specifying the [Default Program Parameters], page 57. Parameters set in the specific use of the
program override the defaults.
• [param_indexed], page 70
• [param_indexed_auto], page 71
• [param_named], page 81
• [param_named_auto], page 81
• [shared_params_ref], page 81
param_indexed
This command sets the value of an indexed parameter.
The ’index’ is simply a number representing the position in the parameter list which the
value should be written, and you should derive this from your program definition. The index is
relative to the way constants are stored on the card, which is in 4-element blocks. For example if
you defined a float4 parameter at index 0, the next index would be 1. If you defined a matrix4x4
at index 0, the next usable index would be 4, since a 4x4 matrix takes up 4 indexes.
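The way indexes map to 4-float constant registers can be sketched in Python (illustrative only; the type sizes simply follow the 4-float-block rule described above):

```python
def registers_used(param_type):
    # Floats occupied by each type; anything smaller than a full
    # 4-float block still consumes a whole constant register.
    sizes = {'float': 1, 'float2': 2, 'float3': 3, 'float4': 4,
             'int4': 4, 'matrix4x4': 16}
    return (sizes[param_type] + 3) // 4

# A float4 at index 0 leaves index 1 free;
# a matrix4x4 at index 0 occupies indexes 0-3, so index 4 is next.
assert registers_used('float4') == 1
assert registers_used('matrix4x4') == 4
assert registers_used('float2') == 1  # padded with zeroes up to 4 floats
```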
The value of ’type’ can be float4, matrix4x4, float<n>, int4, int<n>. Note that ’int’ parameters
are only available on some more advanced program syntaxes, check the D3D or GL vertex /
fragment program documentation for full details. Typically the most useful ones will be float4
and matrix4x4. Note that if you use a type which is not a multiple of 4, then the remaining
values up to the multiple of 4 will be filled with zeroes for you (since GPUs always use banks of
4 floats per constant even if only one is used).
’value’ is simply a space or tab-delimited list of values which can be converted into the type
you have specified.
param_indexed_auto
'index' has the same meaning as [param_indexed], page 70; note this time you do not have
to specify the size of the parameter because the engine knows this already. In the example, the
world/view/projection matrix is being used so this is implicitly a matrix4x4.
world_matrix
The current world matrix.
inverse_world_matrix
The inverse of the current world matrix.
transpose_world_matrix
The transpose of the world matrix
inverse_transpose_world_matrix
The inverse transpose of the world matrix
world_matrix_array_3x4
An array of world matrices, each represented as only a 3x4 matrix (3 rows of
4 columns), usually for doing hardware skinning. You should make enough entries
available in your vertex program for the number of bones in use, i.e. an array of
numBones*3 float4's.
view_matrix
The current view matrix.
inverse_view_matrix
The inverse of the current view matrix.
transpose_view_matrix
The transpose of the view matrix
vertex_winding
Indicates what vertex winding mode the render state is in at this point; +1 for
standard, -1 for inverted (e.g. when processing reflections).
light_diffuse_colour
The diffuse colour of a given light; this requires an index in the 'extra_params' field,
and relates to the 'nth' closest light which could affect this object (i.e. 0 refers to
the closest light - note that directional lights are always first in the list and always
present). NB if there are no lights this close, then the parameter will be set to black.
light_specular_colour
The specular colour of a given light; this requires an index in the 'extra_params'
field, and relates to the 'nth' closest light which could affect this object (i.e. 0 refers
to the closest light). NB if there are no lights this close, then the parameter will be
set to black.
light_attenuation
A float4 containing the 4 light attenuation variables for a given light. This requires
an index in the 'extra_params' field, and relates to the 'nth' closest light which could
affect this object (i.e. 0 refers to the closest light). NB if there are no lights this
close, then the parameter will be set to all zeroes. The order of the parameters is
range, constant attenuation, linear attenuation, quadratic attenuation.
spotlight_params
A float4 containing the 3 spotlight parameters and a control value. The order of
the parameters is cos(inner_angle / 2), cos(outer_angle / 2), falloff, and the final w
value is 1.0f. For non-spotlights the value is float4(1,0,0,1). This requires an index
in the 'extra_params' field, and relates to the 'nth' closest light which could affect
this object (i.e. 0 refers to the closest light). If there are fewer lights than this, the
details are like a non-spotlight.
light_position
The position of a given light in world space. This requires an index in the
'extra_params' field, and relates to the 'nth' closest light which could affect this object
(i.e. 0 refers to the closest light). NB if there are no lights this close, then the
parameter will be set to all zeroes. Note that this property will work with all kinds
of lights, even directional lights, since the parameter is set as a 4D vector. Point
lights will be (pos.x, pos.y, pos.z, 1.0f) whilst directional lights will be (-dir.x, -dir.y,
-dir.z, 0.0f). Operations like dot products will work consistently on both.
light_direction
The direction of a given light in world space. This requires an index in the
'extra_params' field, and relates to the 'nth' closest light which could affect this object
(i.e. 0 refers to the closest light). NB if there are no lights this close, then the
parameter will be set to all zeroes. DEPRECATED - this property only works on
directional lights, and we recommend that you use light_position instead since that
returns a generic 4D vector.
light_position_object_space
The position of a given light in object space (i.e. when the object is at (0,0,0)). This
requires an index in the 'extra_params' field, and relates to the 'nth' closest light
which could affect this object (i.e. 0 refers to the closest light). NB if there are no
lights this close, then the parameter will be set to all zeroes. Note that this property
will work with all kinds of lights, even directional lights, since the parameter is set
as a 4D vector. Point lights will be (pos.x, pos.y, pos.z, 1.0f) whilst directional
lights will be (-dir.x, -dir.y, -dir.z, 0.0f). Operations like dot products will work
consistently on both.
light_direction_object_space
The direction of a given light in object space (i.e. when the object is at (0,0,0)).
This requires an index in the 'extra_params' field, and relates to the 'nth' closest
light which could affect this object (i.e. 0 refers to the closest light). NB if
there are no lights this close, then the parameter will be set to all zeroes. DEPRE-
CATED, except for spotlights - for directional lights we recommend that you use
light_position_object_space instead since that returns a generic 4D vector.
light_distance_object_space
The distance of a given light from the centre of the object - this is a useful approxi-
mation to per-vertex distance calculations for relatively small objects. This requires
an index in the 'extra_params' field, and relates to the 'nth' closest light which could
affect this object (i.e. 0 refers to the closest light). NB if there are no lights this
close, then the parameter will be set to all zeroes.
light_position_view_space
The position of a given light in view space (i.e. when the camera is at (0,0,0)). This
requires an index in the 'extra_params' field, and relates to the 'nth' closest light
which could affect this object (i.e. 0 refers to the closest light). NB if there are no
lights this close, then the parameter will be set to all zeroes. Note that this property
will work with all kinds of lights, even directional lights, since the parameter is set
as a 4D vector. Point lights will be (pos.x, pos.y, pos.z, 1.0f) whilst directional
lights will be (-dir.x, -dir.y, -dir.z, 0.0f). Operations like dot products will work
consistently on both.
light_direction_view_space
The direction of a given light in view space (i.e. when the camera is at (0,0,0)).
This requires an index in the 'extra_params' field, and relates to the 'nth' closest
light which could affect this object (i.e. 0 refers to the closest light). NB if
there are no lights this close, then the parameter will be set to all zeroes. DEPRE-
CATED, except for spotlights - for directional lights we recommend that you use
light_position_view_space instead since that returns a generic 4D vector.
light_power
The 'power' scaling for a given light, useful in HDR rendering. This requires an
index in the 'extra_params' field, and relates to the 'nth' closest light which could
affect this object (i.e. 0 refers to the closest light).
light_diffuse_colour_power_scaled
As light_diffuse_colour, except the RGB channels of the passed colour have been
pre-scaled by the light's power scaling as given by light_power.
light_specular_colour_power_scaled
As light_specular_colour, except the RGB channels of the passed colour have been
pre-scaled by the light's power scaling as given by light_power.
light_number
When rendering, there is generally a list of lights available for use by all of the passes
for a given object, and those lights may or may not be referenced in one or more
passes. Sometimes it can be useful to know where in that overall list a given light
(as seen from a pass) is. For example if you use iteration once_per_light, the pass
always sees the light as index 0, but in each iteration the actual light referenced is
different. This binding lets you pass through the actual index of the light in that
overall list. You just need to give it a parameter of the pass-relative light number
and it will map it to the overall list index.
light_diffuse_colour_array
As light_diffuse_colour, except that this populates an array of parameters with a
number of lights, and the 'extra_params' field refers to the number of 'nth closest'
lights to be processed. This parameter is not compatible with light-based
pass iteration options but can be used for single-pass lighting.
light_specular_colour_array
As light_specular_colour, except that this populates an array of parameters with a
number of lights, and the 'extra_params' field refers to the number of 'nth closest'
lights to be processed. This parameter is not compatible with light-based
pass iteration options but can be used for single-pass lighting.
light_diffuse_colour_power_scaled_array
As light_diffuse_colour_power_scaled, except that this populates an array of parame-
ters with a number of lights, and the 'extra_params' field refers to the number of 'nth
closest' lights to be processed. This parameter is not compatible with light-based
pass iteration options but can be used for single-pass lighting.
light_specular_colour_power_scaled_array
As light_specular_colour_power_scaled, except that this populates an array of pa-
rameters with a number of lights, and the 'extra_params' field refers to the number
of 'nth closest' lights to be processed. This parameter is not compatible with light-
based pass iteration options but can be used for single-pass lighting.
light_attenuation_array
As light_attenuation, except that this populates an array of parameters with a num-
ber of lights, and the 'extra_params' field refers to the number of 'nth closest' lights
to be processed. This parameter is not compatible with light-based pass iteration
options but can be used for single-pass lighting.
spotlight_params_array
As spotlight_params, except that this populates an array of parameters with a num-
ber of lights, and the 'extra_params' field refers to the number of 'nth closest' lights
to be processed. This parameter is not compatible with light-based pass iteration
options but can be used for single-pass lighting.
light_position_array
As light_position, except that this populates an array of parameters with a number
of lights, and the 'extra_params' field refers to the number of 'nth closest' lights
to be processed. This parameter is not compatible with light-based pass iteration
options but can be used for single-pass lighting.
light_direction_array
As light_direction, except that this populates an array of parameters with a number
of lights, and the 'extra_params' field refers to the number of 'nth closest' lights
to be processed. This parameter is not compatible with light-based pass iteration
options but can be used for single-pass lighting.
light_position_object_space_array
As light_position_object_space, except that this populates an array of parameters
with a number of lights, and the 'extra_params' field refers to the number of 'nth
closest' lights to be processed. This parameter is not compatible with light-based
pass iteration options but can be used for single-pass lighting.
time_0_x Single float time value, which repeats itself based on the "cycle time" given as an
'extra_params' field
costime_0_x
Cosine of time_0_x
sintime_0_x
Sine of time_0_x
tantime_0_x
Tangent of time_0_x
time_0_x_packed
4-element vector of time_0_x, sintime_0_x, costime_0_x, tantime_0_x
time_0_1 As time_0_x but scaled to [0..1]
costime_0_1
As costime_0_x but scaled to [0..1]
sintime_0_1
As sintime_0_x but scaled to [0..1]
tantime_0_1
As tantime_0_x but scaled to [0..1]
time_0_1_packed
As time_0_x_packed but all values scaled to [0..1]
time_0_2pi
As time_0_x but scaled to [0..2*Pi]
costime_0_2pi
As costime_0_x but scaled to [0..2*Pi]
sintime_0_2pi
As sintime_0_x but scaled to [0..2*Pi]
tantime_0_2pi
As tantime_0_x but scaled to [0..2*Pi]
time_0_2pi_packed
As time_0_x_packed but scaled to [0..2*Pi]
frame_time
The current frame time, factored by the optional parameter (or 1.0f if not supplied).
fps The current frames per second
viewport_width
The current viewport width in pixels
viewport_height
The current viewport height in pixels
inverse_viewport_width
1.0/the current viewport width in pixels
inverse_viewport_height
1.0/the current viewport height in pixels
viewport_size
4-element vector of viewport_width, viewport_height, inverse_viewport_width,
inverse_viewport_height
texel_offsets
Provides details of the rendersystem-specific texture coordinate offsets required to
map texels onto pixels. float4(horizontalOffset, verticalOffset, horizontalOffset /
viewport width, verticalOffset / viewport height).
view_direction
View direction vector in object space
view_side_vector
View local X axis
view_up_vector
View local Y axis
fov
Vertical field of view, in radians
near_clip_distance
Near clip distance, in world units
far_clip_distance
Far clip distance, in world units (may be 0 for an infinite view projection)
texture_viewproj_matrix
Applicable to vertex programs which have been specified as the 'shadow receiver'
vertex program alternative, or where a texture unit is marked as content_type shadow;
this provides details of the view/projection matrix for the current shadow projector.
The optional 'extra_params' entry specifies which light the projector refers to
(for the case of content_type shadow where more than one shadow texture may be
present in a single pass), where 0 is the default and refers to the first light referenced
in this pass.
texture_viewproj_matrix_array
As texture_viewproj_matrix, except an array of matrices is passed, up to the number
that you specify as the 'extra_params' value.
texture_worldviewproj_matrix
As texture_viewproj_matrix except it also includes the world matrix.
texture_worldviewproj_matrix_array
As texture_worldviewproj_matrix, except an array of matrices is passed, up to the
number that you specify as the 'extra_params' value.
spotlight_viewproj_matrix
Provides a view/projection matrix which matches the set up of a given spotlight
(requires an 'extra_params' entry to indicate the light index, which must be a spot-
light). Can be used to project a texture from a given spotlight.
spotlight_worldviewproj_matrix
As spotlight_viewproj_matrix except it also includes the world matrix.
scene_depth_range
Provides information about the depth range as viewed from the current camera
being used to render. Provided as float4(minDepth, maxDepth, depthRange, 1 /
depthRange).
shadow_scene_depth_range
Provides information about the depth range as viewed from the shadow camera
relating to a selected light. Requires a light index parameter. Provided as
float4(minDepth, maxDepth, depthRange, 1 / depthRange).
shadow_colour
The shadow colour (for modulative shadows) as set via
SceneManager::setShadowColour.
shadow_extrusion_distance
The shadow extrusion distance as determined by the range of a non-directional
light, or as set via SceneManager::setShadowDirectionalLightExtrusionDistance for
directional lights.
texture_size
Provides the texture size of the selected texture unit. Requires a texture unit index
parameter. Provided as float4(width, height, depth, 1). For a 2D texture, depth is
set to 1; for a 1D texture, height and depth are set to 1.
inverse_texture_size
Provides the inverse texture size of the selected texture unit. Requires a texture unit
index parameter. Provided as float4(1 / width, 1 / height, 1 / depth, 1). For a
2D texture, depth is set to 1; for a 1D texture, height and depth are set to 1.
packed_texture_size
Provides the packed texture size of the selected texture unit. Requires a texture unit
index parameter. Provided as float4(width, height, 1 / width, 1 / height). For a
3D texture, depth is ignored; for a 1D texture, height is set to 1.
pass_number
Sets the active pass index number in a gpu parameter. The first pass in a technique
has an index of 0, the second an index of 1 and so on. This is useful for multipass
shaders (i.e. fur or blur shaders) that need to know which pass they are in. By setting
up the auto parameter in a [Default Program Parameters], page 57 list in a program
definition, there is no requirement to set the pass number parameter in each pass
and lose track. (See [fur example], page 36)
pass_iteration_number
Useful for GPU programs that need to know the current pass iteration number.
The first iteration of a pass is numbered 0. The last iteration number is one less
than what is set for the pass iteration number. If a pass has its iteration attribute set
to 5 then the last iteration number (5th execution of the pass) is 4. (See [iteration],
page 35)
animation_parametric
Useful for hardware vertex animation. For morph animation, sets the parametric
value (0..1) representing the distance between the first position keyframe (bound
to positions) and the second position keyframe (bound to the first free texture
coordinate) so that the vertex program can interpolate between them. For pose
animation, indicates a group of up to 4 parametric weight values applying to a
sequence of up to 4 poses (each one bound to x, y, z and w of the constant), one for
each pose. The original positions are held in the usual position buffer, and the offsets
to take those positions to the pose where weight == 1.0 are in the first 'n' free texture
coordinates; 'n' being determined by the value passed to includes_pose_animation.
If more than 4 simultaneous poses are required, then you'll need more than 1 shader
constant to hold the parametric values, in which case you should use this binding
more than once, referencing a different constant entry; the second one will contain
the parametrics for poses 5-8, the third for poses 9-12, and so on.
custom
This allows you to map a custom parameter on an individual Renderable (see
Renderable::setCustomParameter) to a parameter on a GPU program. It requires that
you complete the 'extra_params' field with the index that was used in the
Renderable::setCustomParameter call, and this will ensure that whenever this Renderable
is used, it will have its custom parameter mapped in. It is very important that this
parameter has been defined on all Renderables that are assigned the material that
contains this automatic mapping, otherwise the process will fail.
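For illustration, a sketch of how the two sides connect (the index value and parameter name here are arbitrary): in code you might call setCustomParameter(1, Ogre::Vector4(1, 0, 0, 1)) on a Renderable such as a SubEntity, and in the material script bind that index with:

param_named_auto myCustomColour custom 1

Whenever that Renderable is rendered with this material, the GPU program parameter 'myCustomColour' will receive the Vector4 set in code.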
param_named
This is the same as param_indexed, but uses a named parameter instead of an index. This
can only be used with high-level programs which include parameter names; if you're using an
assembler program then you have no choice but to use indexes. Note that you can use indexed
parameters for high-level programs too, but it is less portable, since if you reorder your parameters
in the high-level program the indexes will change.
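For example, named parameters might be set inside a program reference like this (the parameter names here are illustrative; they must match names declared in your high-level program):

vertex_program_ref myHighLevelVertexProgram
{
    param_named shininess float 10.0
    param_named baseColour float4 0.5 0.5 0.5 1.0
}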
The type is required because the program is not compiled and loaded when the material script
is parsed, so at this stage we have no idea what types the parameters are. Programs are only
loaded and compiled when they are used, to save memory.
The allowed value codes and the meaning of extra_params are detailed in
[param_indexed_auto], page 71.
shared_params_ref
The only required parameter is a name, which must be the name of an already defined shared
parameter set. All named parameters which are present in the program that are also present in
the shared parameter set will be linked, and the shared parameters used as if you had defined
them locally. This is dependent on the definitions (type and array size) matching between the
shared set and the program.
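As a sketch (the set and parameter names are illustrative), a shared parameter set might be declared at the top level of a script and then referenced from a program:

shared_params MySharedParams
{
    shared_param_named globalAmbient float4 0.1 0.1 0.1 1.0
}

fragment_program_ref myFragmentProgram
{
    shared_params_ref MySharedParams
}

Any parameter in the program named 'globalAmbient' (with matching type) will then be linked to the shared value.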
If you use stencil shadows, then any vertex programs which do vertex deformation can be
a problem, because stencil shadows are calculated on the CPU, which does not have access to
the modified vertices. If the vertex program is doing standard skeletal animation, this is ok (see
section above) because Ogre knows how to replicate the effect in software, but any other vertex
deformation cannot be replicated, and you will either have to accept that the shadow will not
reflect this deformation, or you should turn off shadows for that object.
If you use texture shadows, then vertex deformation is acceptable; however, when rendering
the object into the shadow texture (the shadow caster pass), the shadow has to be rendered in
a solid colour (linked to the ambient colour). You must therefore provide an alternative vertex
program, so Ogre provides you with a way of specifying one to use when rendering the caster.
Basically you link an alternative vertex program, using exactly the same syntax as the original
vertex program link:
shadow_caster_vertex_program_ref myShadowCasterVertexProgram
{
param_indexed_auto 0 worldviewproj_matrix
param_indexed_auto 4 ambient_light_colour
}
When rendering a shadow caster, Ogre will automatically use the alternate program. You
can bind the same or different parameters to the program - the most important thing is that
you bind ambient_light_colour, since this determines the colour of the shadow in modulative
texture shadows. If you don't supply an alternate program, Ogre will fall back on a fixed-
function material which will not reflect any vertex deformation you do in your vertex program.
In addition, when rendering the shadow receivers with shadow textures, Ogre needs to project
the shadow texture. It does this automatically in fixed function mode, but if the receivers use
vertex programs, they need to have a shadow receiver program which does the usual vertex
deformation, but also generates projective texture coordinates. The additional program is linked
into the pass like this:
shadow_receiver_vertex_program_ref myShadowReceiverVertexProgram
{
param_indexed_auto 0 worldviewproj_matrix
param_indexed_auto 4 texture_viewproj_matrix
}
For the purposes of writing this alternate program, there is an automatic parameter binding
of 'texture_viewproj_matrix' which provides the program with texture projection parameters.
The vertex program should do its normal vertex processing, and generate texture coordinates
using this matrix and place them in texture coord sets 0 and 1, since some shadow techniques
use 2 texture units. The colour of the vertices output by this vertex program must always be
white, so as not to affect the final colour of the rendered shadow.
When using additive texture shadows, the shadow pass render is actually the lighting render,
so if you perform any fragment program lighting you also need to pull in a custom fragment
program. You use the shadow_receiver_fragment_program_ref for this:
shadow_receiver_fragment_program_ref myShadowReceiverFragmentProgram
{
param_named_auto lightDiffuse light_diffuse_colour 0
}
You should pass the projected shadow coordinates from the custom vertex program. As for
textures, texture unit 0 will always be the shadow texture. Any other textures which you bind in
your pass will be carried across too, but will be moved up by 1 unit to make room for the shadow
texture. Therefore your shadow receiver fragment program is likely to be the same as the bare
lighting pass of your normal material, except that you insert an extra texture sampler at index
0, which you will use to adjust the result by (modulating diffuse and specular components).
To reflect this, you should use the [binding_type], page 44 attribute in a texture unit to
indicate which unit you are targeting with your texture - 'fragment' (the default) or 'vertex'.
For render systems that don't have separate bindings, this actually does nothing. But for those
that do, it will ensure your texture gets bound to the right processing unit.
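For example, a texture intended for vertex texture fetch (e.g. a displacement map; the texture name is illustrative) would be declared like this:

texture_unit
{
    texture displacement_map.png
    binding_type vertex
}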
Note that whilst DirectX9 has separate bindings for the vertex and fragment pipelines, binding
a texture to the vertex processing unit still uses up a 'slot' which is then not available for
use in the fragment pipeline. I didn't manage to find this documented anywhere, but the nVidia
samples certainly avoid binding a texture to the same index on both vertex and fragment units,
and when I tried to do it, the texture did not appear correctly in the fragment unit, whilst it
did as soon as I moved it into the next unit.
Hardware limitations
As of the time of writing (early Q3 2006), ATI do not support vertex texture fetch in their current
crop of cards (Radeon X1n00). nVidia do support it in both their 6n00 and 7n00 ranges. ATI
support an alternative called 'Render to Vertex Buffer', but this is not standardised at this time
and is very different in its implementation, so it cannot be considered a drop-in replacement.
This is the case even though the Radeon X1n00 cards claim to support vs_3_0 (which requires
vertex texture fetch).
For example, to make a new material that is based on one previously defined, add a colon :
after the new material name followed by the name of the material that is to be copied.
The only caveat is that a parent material must have been defined/parsed prior to the child
material script being parsed. The easiest way to achieve this is to either place parents at the
beginning of the material script file, or to use the 'import' directive (See Section 3.1.14 [Script
Import Directive], page 92). Note that inheritance is actually a copy - after scripts are loaded
into Ogre, objects no longer maintain their copy inheritance structure. If a parent material is
modified through code at runtime, the changes have no effect on child materials that were copied
from it in the script.
Material copying within the script alleviates some drudgery from copy/paste, but having the
ability to identify specific techniques, passes, and texture units to modify makes material copying
easier. Techniques, passes and texture units can be identified directly in the child material,
without having to lay out the preceding techniques, passes and texture units, by associating a
name with them: techniques and passes can take a name, and texture units can be numbered
within the material script. You can also use variables, See Section 3.1.13 [Script Variables],
page 91.
Names become very useful in materials that copy from other materials. In order to override
values they must be in the correct technique, pass, texture unit etc. The script could be laid
out using the sequence of techniques, passes and texture units in the child material, but if only
one parameter needs to change in, say, the 5th pass, then the four passes prior to the fifth would
have to be placed in the script:
Here is an example:
material test2 : test1
{
technique
{
pass
{
}
pass
{
}
pass
{
}
pass
{
}
pass
{
ambient 0.5 0.7 0.3 1.0
}
}
}
This method is tedious for materials that only have slight variations from their parent. An
easier way is to name the pass directly without listing the previous passes:
The parent pass name must be known, and the pass must be in the correct technique, in order
for this to work correctly. Specifying the technique name and the pass name is the best method.
If the parent technique/pass are not named, then use their index values for their name, as done
in the example.
Note: if passes or techniques aren't given a name, they will take on a default name based on
their index. For example, the first pass has index 0 so its name will be 0.
material Test
{
technique
{
pass : ParentPass
{
}
}
}
Notice that the pass inherits from ParentPass. This allows for the creation of more fine-
grained inheritance hierarchies.
Along with the more generalized inheritance system comes an important new keyword:
"abstract". This keyword is used at a top-level object declaration (not inside any other object) to
denote that it is not something that the compiler should actually attempt to compile, but rather
that it is only for the purpose of inheritance. For example, a material declared with the abstract
keyword will never be turned into an actual usable material in the material framework. Objects
which cannot be at top level in the document (like a pass) but that you would like to declare
as such for inheritance purposes must be declared with the abstract keyword.
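For example (names illustrative), an abstract pass can be declared at the top level and then inherited from inside a material:

abstract pass ParentPass
{
    ambient 0.2 0.2 0.2 1.0
    diffuse 0.8 0.8 0.8 1.0
}

material Test
{
    technique
    {
        pass : ParentPass
        {
        }
    }
}

The abstract pass is never compiled into anything itself; it exists purely so that other passes can inherit its attributes.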
The final matching option is based on wildcards. Using the ’*’ character, you can make a
powerful matching scheme and override multiple objects at once, even if you don’t know exact
names or positions of those objects in the inherited object.
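For example (names illustrative), this sketch overrides an attribute in every pass of every technique inherited from the parent:

material Test : ParentMaterial
{
    technique *
    {
        pass *
        {
            diffuse 1 0 0 1
        }
    }
}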
version of Section 3.1.13 [Script Variables], page 91 which can be used to easily set other values.
material TSNormalSpecMapping
{
technique GLSL
{
pass
{
ambient 0.1 0.1 0.1
diffuse 0.7 0.7 0.7
specular 0.7 0.7 0.7 128
vertex_program_ref GLSLDemo/OffsetMappingVS
{
param_named_auto lightPosition light_position_object_space 0
param_named_auto eyePosition camera_position_object_space
param_named textureScale float 1.0
}
fragment_program_ref GLSLDemo/TSNormalSpecMappingFS
{
param_named normalMap int 0
param_named diffuseMap int 1
param_named fxMap int 2
}
// Normal map
texture_unit NormalMap
{
texture defaultNM.png
tex_coord_set 0
filtering trilinear
}
texture_unit DiffuseMap
{
texture defaultDiff.png
filtering trilinear
tex_coord_set 1
}
}
}
technique HLSL_DX9
{
pass
{
vertex_program_ref FxMap_HLSL_VS
{
param_named_auto worldViewProj_matrix worldviewproj_matrix
param_named_auto lightPosition light_position_object_space 0
param_named_auto eyePosition camera_position_object_space
}
fragment_program_ref FxMap_HLSL_PS
{
param_named ambientColor float4 0.2 0.2 0.2 0.2
}
// Normal map
texture_unit
{
texture_alias NormalMap
texture defaultNM.png
tex_coord_set 0
filtering trilinear
}
}
}
}
Note that the GLSL and HLSL techniques use the same textures. For each texture usage
type a texture alias is given that describes what the texture is used for. So the first texture unit
in the GLSL technique has the same alias as the TUS in the HLSL technique, since it's the same
texture used. The same goes for the second and third texture units.
For demonstration purposes, the GLSL technique makes use of texture unit naming, and
therefore the texture alias name does not have to be set, since it defaults to the texture unit name.
So why not use the default all the time, since it's less typing? For most situations you can. It's
when you clone a material and then want to change the alias that you must use the texture_alias
command in the script. You cannot change the name of a texture unit in a cloned material, so
texture_alias provides a facility to assign an alias name.
Now we want to clone the material but only want to change the textures used. We could copy
and paste the whole material, but if we decide to change the base material later then we also
have to update the copied material in the script. With set_texture_alias, copying a material is
very easy. set_texture_alias is specified at the top of the material definition. All techniques
using the specified texture alias will be affected by set_texture_alias.
Format:
set_texture_alias <alias name> <texture name>
The same process can be done in code, as long as you set up the texture alias names, so there
is no need to traverse technique/pass/TUS to change a texture. You just call myMaterialPtr-
>applyTextureAliases(myAliasTextureNameList) which will update all textures in all texture
units that match the alias names in the map container reference you passed as a parameter.
You don't have to supply all the textures in the copied material.
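A minimal C++ sketch of this (texture names illustrative), assuming myMaterialPtr is a loaded MaterialPtr:

// Build the alias -> texture name map and apply it
Ogre::AliasTextureNamePairList aliasList;
aliasList["DiffuseMap"] = "fxTest2Diff.png";
myMaterialPtr->applyTextureAliases(aliasList);

Only texture units whose alias appears in the map are changed; the rest keep their original textures.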
Another example:
material fxTest3 : TSNormalSpecMapping
{
set_texture_alias DiffuseMap fxTest2Diff.png
}
fxTest3 will end up with the default textures for the normal map and spec map setup in
TSNormalSpecMapping material but will have a different diffuse map. So your base material
can define the default textures to use and then the child materials can override specific textures.
material Test
{
technique
{
pass : ParentPass
{
set $diffuse_colour "1 0 0 1"
}
}
}
The ParentPass object declares a variable called "diffuse_colour" which is then overridden
in the Test material's pass. The "set" keyword is used to set the value of that variable. The
variable assignment follows lexical scoping rules, which means that the value of "1 0 0 1" is
only valid inside that pass definition. Variable assignment in outer scopes carries over into inner
scopes.
material Test
{
set $diffuse_colour "1 0 0 1"
technique
{
pass : ParentPass
{
}
}
}
The $diffuse_colour assignment carries down through the technique and into the pass.
Note, however, that importing does not actually cause objects in the imported script to
be fully parsed & created; it just makes the definitions available for inheritance. This has a
specific ramification for vertex / fragment program definitions, which must be loaded before
any parameters can be specified. You should continue to put common program definitions in
.program files to ensure they are fully parsed before being referenced in multiple .material files.
The 'import' command just makes sure you can resolve dependencies between equivalent script
definitions (e.g. material to material).
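For example, to make the definitions in another file available for inheritance (the file name is illustrative):

import * from "parent.material"

material Child : ParentMaterial
{
}

You can also import a single named definition instead of '*'.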
define them. You still need to use code to instantiate a compositor against one of your visible
viewports, but this is a much simpler process than actually defining the compositor itself.
Compositor Fundamentals
Performing post-processing effects generally involves first rendering the scene to a texture, either
in addition to or instead of the main window. Once the scene is in a texture, you can then pull
the scene image into a fragment program and perform operations on it by rendering it through
full screen quad. The target of this post processing render can be the main result (e.g. a
window), or it can be another render texture so that you can perform multi-stage convolutions
on the image. You can even ’ping-pong’ the render back and forth between a couple of render
textures to perform convolutions which require many iterations, without using a separate texture
for each stage. Eventually you’ll want to render the result to the final output, which you do
with a full screen quad. This might replace the whole window (thus the main window doesn’t
need to render the scene itself), or it might be a combinational effect.
So that we can discuss how to implement these techniques efficiently, a number of definitions
are required:
Compositor
Definition of a fullscreen effect that can be applied to a user viewport. This is what
you’re defining when writing compositor scripts as detailed in this section.
Compositor Instance
An instance of a compositor as applied to a single viewport. You create these based
on compositor definitions, See Section 3.2.4 [Applying a Compositor], page 105.
Compositor Chain
It is possible to enable more than one compositor instance on a viewport at the same
time, with one compositor taking the results of the previous one as input. This is
known as a compositor chain. Every viewport which has at least one compositor
attached to it has a compositor chain. See Section 3.2.4 [Applying a Compositor],
page 105
Target This is a RenderTarget, i.e. the place where the result of a series of render operations
is sent. A target may be the final output (and this is implicit, you don’t have to
declare it), or it may be an intermediate render texture, which you declare in your
script with the [compositor texture], page 95. A target which is not the output
target has a defined size and pixel format which you can control.
Output Target
As Target, but this is the single final result of all operations. The size and pixel
format of this target cannot be controlled by the compositor since it is defined by
the application using it, thus you don’t declare it in your script. However, you do
declare a Target Pass for it, see below.
Target Pass
A Target may be rendered to many times in the course of a composition effect. In
particular if you 'ping pong' a convolution between a couple of textures, you will
have more than one Target Pass per Target. Target passes are declared in the script
using a 'target' or 'target_output' section (see Section 3.2.2 [Compositor Target
Passes], page 98), the latter being the final output target pass, of which there can
be only one.
Pass
Within a Target Pass, there are one or more individual Section 3.2.3 [Compositor
Passes], page 100, which perform a very specific action, such as rendering the original
scene (or pulling the result from the previous compositor in the chain), rendering
a fullscreen quad, or clearing one or more buffers. Typically within a single target
pass you will use either a 'render_scene' pass or a 'render_quad' pass, not both.
Clear can be used with either type.
Loading scripts
Compositor scripts are loaded when resource groups are initialised: OGRE looks in all resource
locations associated with the group (see Root::addResourceLocation) for files with the
'.compositor' extension and parses them. If you want to parse files manually, use
CompositorSerializer::parseScript.
Format
Several compositors may be defined in a single script. The script format is pseudo-C++, with
sections delimited by curly braces ('{', '}'), and comments indicated by starting a line with '//'
(note, nested comments are not allowed). The general format is shown in the example
below:
// This is a comment
// Black and white effect
compositor B&W
{
technique
{
// Temporary textures
texture rt0 target_width target_height PF_A8R8G8B8
target rt0
{
// Render output from previous compositor (or original scene)
input previous
}
target_output
{
// Start with clear output
input none
// Draw a fullscreen quad with the black and white image
pass render_quad
{
// Renders a fullscreen quad with a material
material Ogre/Compositor/BlackAndWhite
input 0 rt0
}
}
}
}
Every compositor in the script must be given a name, which is the line 'compositor <name>'
before the first opening '{'. This name must be globally unique. It can include path characters
(as in the example) to logically divide up your compositors, and also to avoid duplicate names,
but the engine does not treat the name as hierarchical, just as a string. Names can include
spaces but must be surrounded by double quotes, i.e. compositor "My Name".
The major components of a compositor are the Section 3.2.1 [Compositor Techniques],
page 95, the Section 3.2.2 [Compositor Target Passes], page 98 and the Section 3.2.3
[Compositor Passes], page 100, which are covered in detail in the following sections.
3.2.1 Techniques
A compositor technique is much like a Section 3.1.1 [Techniques], page 17 in that it describes
one approach to achieving the effect you’re looking for. A compositor definition can have more
than one technique if you wish to provide some fallback should the hardware not support the
technique you’d prefer to use. Techniques are evaluated for hardware support based on 2 things:
Material support
All Section 3.2.3 [Compositor Passes], page 100 that render a fullscreen quad use a
material; for the technique to be supported, all of the materials referenced must have
at least one supported material technique. If they don’t, the compositor technique
is marked as unsupported and won’t be used.
Texture format support
This one is slightly more complicated. When you request a [compositor texture],
page 95 in your technique, you request a pixel format. Not all formats are natively
supported by hardware, especially the floating point formats. However, in this case
the hardware will typically downgrade the texture format requested to one that the
hardware does support - with compositor effects though, you might want to use a
different approach if this is the case. So, when evaluating techniques, the compositor
will first look for native support for the exact pixel format you’ve asked for, and will
skip onto the next technique if it is not supported, thus allowing you to define other
techniques with simpler pixel formats which use a different approach. If it doesn’t
find any techniques which are natively supported, it tries again, this time allowing
the hardware to downgrade the texture format and thus should find at least some
support for what you’ve asked for.
As with material techniques, compositor techniques are evaluated in the order you define
them in the script, so techniques declared first are preferred over those declared later.
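For example, a compositor might declare a preferred technique using a floating point texture, followed by a fallback using a plain 8-bit format (names illustrative; target sections omitted for brevity):

compositor Example/Glow
{
    // Preferred: needs native floating point texture support
    technique
    {
        texture rt0 target_width target_height PF_FLOAT16_RGBA
        ...
    }
    // Fallback: simpler format, more widely supported
    technique
    {
        texture rt0 target_width target_height PF_A8R8G8B8
        ...
    }
}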
Format: technique { }
texture
This declares a render texture for use in subsequent Section 3.2.2 [Compositor Target Passes],
page 98.
Format: texture <Name> <Width> <Height> <Pixel Format> [<MRT Pixel Format2>] [<MRT
Pixel FormatN>] [pooled] [gamma] [no_fsaa] [<scope>]
You can in fact repeat this element if you wish. If you do so, that means that this render
texture becomes a Multiple Render Target (MRT), when the GPU writes to multiple textures
at once. It is imperative that if you use MRT that the shaders that render to it render to ALL
the targets. Not doing so can cause undefined results. It is also important to note that although
you can use different pixel formats for each target in a MRT, each one should have the same
total bit depth since most cards do not support independent bit depths. If you try to use this
feature on cards that do not support the number of MRTs you’ve asked for, the technique will
be skipped (so you ought to write a fallback technique).
Example: texture mrt_output target_width target_height PF_FLOAT16_RGBA
PF_FLOAT16_RGBA chain_scope
texture_ref
This declares a reference of a texture from another compositor to be used in this compositor.
Format: texture_ref <Local Name> <Reference Compositor> <Reference Texture Name>
Here is a description of the parameters:
Local Name
A name to give the referenced texture, which must be unique within this compositor.
This name is used to reference the texture in Section 3.2.2 [Compositor Target
Passes], page 98, when the texture is rendered to, and in Section 3.2.3 [Compositor
Passes], page 100, when the texture is used as input to a material rendering a
fullscreen quad.
Reference Compositor
The name of the compositor that we are referencing a texture from
Make sure that the texture being referenced is scoped accordingly (either chain or global
scope) and placed accordingly during chain creation (if referencing a chain-scoped texture, the
compositor must be present in the chain and placed before the compositor referencing it).
Example: texture_ref GBuffer GBufferCompositor mrt_output
scheme
This gives a compositor technique a scheme name, allowing you to manually switch between
different techniques for this compositor when instantiated on a viewport by calling
CompositorInstance::setScheme.
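For example (scheme name illustrative), a fallback technique might be tagged with a scheme in the script:

technique
{
    scheme LowQuality
    texture rt0 target_width target_height PF_A8R8G8B8
    ...
}

and then selected at runtime with compositorInstance->setScheme("LowQuality").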
compositor_logic
This connects between a compositor and code that it requires in order to function correctly.
When an instance of this compositor is created, the compositor logic will be notified and
will have the chance to prepare the compositor's operation (for example, adding a listener).
There are two types of target pass, the sort that updates a render texture:
Format: target <Name> { }
... and the sort that defines the final output render:
Format: target_output { }
The contents of both are identical; the only real difference is that you can only have a single
'target_output' entry, whilst you can have many 'target' entries. Here are the attributes you
can use in a 'target' or 'target_output' section of a .compositor script:
• [compositor_target_input], page 98
• [only_initial], page 99
• [visibility_mask], page 99
• [compositor_lod_bias], page 99
• [material_scheme], page 99
• [compositor_shadows], page 99
• Section 3.2.3 [Compositor Passes], page 100
Attribute Descriptions
input
Sets input mode of the target, which tells the target pass what is pulled in before any of its own
passes are rendered.
none The target will have nothing as input, all the contents of the target must be gener-
ated using its own passes. Note this does not mean the target will be empty, just
no data will be pulled in. For it to truly be blank you’d need a ’clear’ pass within
this target.
previous The target will pull in the previous contents of the viewport. This will be either
the original scene if this is the first compositor in the chain, or it will be the output
from the previous compositor in the chain if the viewport has multiple compositors
enabled.
only_initial
If set to on, this target pass will only execute once, initially, after the effect has been enabled.
This could be useful to perform once-off renders, after which the static contents are used by the
rest of the compositor.
visibility mask
Sets the visibility mask for any render scene passes performed in this target pass. This is a bitmask (although it must be specified as decimal, not hex) and maps to SceneManager::setVisibilityMask. Format: visibility mask <mask>
lod bias
Set the scene LOD bias for any render scene passes performed in this target pass. The default is 1.0; values below 1.0 reduce detail, values above 1.0 increase it.
shadows
Sets whether shadows should be rendered during any render scene pass performed in this target
pass. The default is ’on’.
Default: shadows on
Chapter 3: Scripts 100
material scheme
If set, indicates the material scheme to use for any render scene pass. Useful for performing
special-case rendering effects.
Default: None
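Combining the target-level attributes above, a hedged sketch of a target pass might read as follows (texture name and scheme name are hypothetical; attribute names use underscores in actual scripts):

target rt0
{
    input none
    only_initial on
    visibility_mask 4294967280
    lod_bias 0.5
    material_scheme lowquality
    shadows off
}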
Format: ’pass’ (render quad | clear | stencil | render scene | render custom) [custom name]
material
For passes of type ’render quad’, sets the material used to render the quad. You will want to
use shaders in this material to perform fullscreen effects, and use the [compositor pass input],
page 101 attribute to map other texture targets into the texture bindings needed by this material.
input
For passes of type ’render quad’, this is how you map one or more local render textures (See
[compositor texture], page 95) into the material you’re using to render the fullscreen quad. To
bind more than one texture, repeat this attribute with different sampler indexes.
sampler The texture sampler to set, must be a number in the range [0, OGRE_MAX_TEXTURE_LAYERS-1].
Name The name of the local render texture to bind, as declared in [compositor texture],
page 95 and rendered to in one or more Section 3.2.2 [Compositor Target Passes],
page 98.
MRTIndex
If the local texture that you’re referencing is a Multiple Render Target (MRT), this
identifies the surface from the MRT that you wish to reference (0 is the first surface,
1 the second etc).
Example: input 0 rt0
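For instance, a render quad pass binding two local textures to the quad material’s samplers might be sketched as follows (material and texture names are hypothetical; attribute names use underscores in actual scripts):

pass render_quad
{
    material Examples/Compose
    input 0 rt0
    input 1 rt1
}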
identifier
Associates a numeric identifier with the pass. This is useful for registering a listener with the
compositor (CompositorInstance::addListener), and being able to identify which pass it is that’s
being processed when you get events regarding it. Numbers between 0 and 2^32 are allowed.
Default: identifier 0
material scheme
If set, indicates the material scheme to use for this pass only. Useful for performing special-case
rendering effects.
This overrides any scheme set at the target scope.
Default: None
Clear Section
For passes of type ’clear’, this section defines the buffer clearing parameters.
Format: clear
Here are the attributes you can use in a ’clear’ section of a .compositor script:
• [compositor clear buffers], page 102
• [compositor clear colour value], page 102
• [compositor clear depth value], page 103
• [compositor clear stencil value], page 103
buffers
Sets the buffers cleared by this pass.
colour value
Set the colour used to fill the colour buffer by this pass, if the colour buffer is being cleared
([compositor clear buffers], page 102).
depth value
Set the depth value used to fill the depth buffer by this pass, if the depth buffer is being
cleared ([compositor clear buffers], page 102).
stencil value
Set the stencil value used to fill the stencil buffer by this pass, if the stencil buffer is being
cleared ([compositor clear buffers], page 102).
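Putting the clear attributes together, a clear pass that wipes the colour and depth buffers might look like this (values are illustrative; attribute names use underscores in actual scripts):

pass clear
{
    buffers colour depth
    colour_value 0 0 0 1
    depth_value 1.0
}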
Stencil Section
For passes of type ’stencil’, this section defines the stencil operation parameters.
Format: stencil
Here are the attributes you can use in a ’stencil’ section of a .compositor script:
• [compositor stencil check], page 103
• [compositor stencil comp func], page 103
• [compositor stencil ref value], page 104
• [compositor stencil mask], page 104
• [compositor stencil fail op], page 104
• [compositor stencil depth fail op], page 105
• [compositor stencil pass op], page 105
• [compositor stencil two sided], page 105
check
Enables or disables the stencil check, thus enabling the use of the rest of the features in this section; the remaining options have no effect if the stencil check is off.
Format: check (on | off)
comp func
Sets the function used to perform the following comparison:
(ref value & mask) comp func (Stencil Buffer Value & mask)
What happens as a result of this comparison will be one of 3 actions on the stencil buffer, depending on whether the test fails, succeeds but with the depth buffer check still failing, or succeeds with the depth buffer check passing too. You set the actions with the [compositor stencil fail op], page 104, [compositor stencil depth fail op], page 105 and [compositor stencil pass op], page 105 attributes respectively. If the stencil check fails, no colour or depth are written to the frame buffer.
Format: comp func (always fail | always pass | less | less equal | not equal | greater equal
| greater)
ref value
Sets the reference value used to compare with the stencil buffer as described in
[compositor stencil comp func], page 103.
mask
Sets the mask used to compare with the stencil buffer as described in
[compositor stencil comp func], page 103.
fail op
Sets what to do with the stencil buffer value if the result of the stencil comparison
([compositor stencil comp func], page 103) and depth comparison is that both fail.
Format: fail op (keep | zero | replace | increment | decrement | increment wrap | decrement wrap | invert)
depth fail op
Sets what to do with the stencil buffer value if the result of the stencil comparison
([compositor stencil comp func], page 103) passes but the depth comparison fails.
Format: depth fail op (keep | zero | replace | increment | decrement | increment wrap | decrement wrap | invert)
pass op
Sets what to do with the stencil buffer value if the result of the stencil comparison
([compositor stencil comp func], page 103) and the depth comparison pass.
Format: pass op (keep | zero | replace | increment | decrement | increment wrap | decrement wrap | invert)
two sided
Enables or disables two-sided stencil operations, which means the inverse of the operations
applies to back-facing polygons.
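As an illustrative sketch combining the attributes above, a stencil pass that writes the reference value wherever the test passes might be set up like this (attribute names use underscores in actual scripts):

pass stencil
{
    check on
    comp_func always_pass
    ref_value 1
    mask 4294967295
    fail_op keep
    depth_fail_op keep
    pass_op replace
    two_sided off
}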
CompositorManager::getSingleton().addCompositor(viewport, compositorName);
Where viewport is a pointer to your viewport, and compositorName is the name of the compositor to create an instance of. By doing this, a new instance of a compositor will be added to a new compositor chain on that viewport. You can call the method multiple times to add further compositors to the chain on this viewport. By default, each compositor which is added is disabled, but you can change this state by calling:
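A sketch of the enabling call, assuming the CompositorManager::setCompositorEnabled API:

CompositorManager::getSingleton().setCompositorEnabled(viewport, compositorName, true);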
For more information on defining and using compositors, see Demo Compositor in the Samples
area, together with the Examples.compositor script in the media area.
Loading scripts
Particle system scripts are loaded at initialisation time by the system: by default it looks in all common resource locations (see Root::addResourceLocation) for files with the ’.particle’ extension and parses them. If you want to parse files with a different extension, use the ParticleSystemManager::getSingleton().parseAllSources method with your own extension, or if you want to parse an individual file, use ParticleSystemManager::getSingleton().parseScript.
Once scripts have been parsed, your code is free to instantiate systems based on them using
the SceneManager::createParticleSystem() method which can take both a name for the new
system, and the name of the template to base it on (this template name is in the script).
Format
Several particle systems may be defined in a single script. The script format is pseudo-C++, with sections delimited by curly braces ({}), and comments indicated by starting a line with ’//’ (note: comments cannot be nested). The general format is shown below in a typical example:
// A sparkly purple fountain
particle_system Examples/PurpleFountain
{
material Examples/Flare2
particle_width 20
particle_height 20
cull_each false
quota 10000
billboard_type oriented_self
// Point emitter
emitter Point
{
angle 15
emission_rate 75
time_to_live 3
direction 0 1 0
velocity_min 250
velocity_max 300
colour_range_start 1 0 0
colour_range_end 0 0 1
}
// Gravity
affector LinearForce
{
force_vector 0 -100 0
force_application add
}
// Fader
affector ColourFader
{
red -0.25
green -0.25
blue -0.25
}
}
Every particle system in the script must be given a name, which is the line before the first opening ’{’; in the example this is ’Examples/PurpleFountain’. This name must be globally unique. It
can include path characters (as in the example) to logically divide up your particle systems, and
also to avoid duplicate names, but the engine does not treat the name as hierarchical, just as a
string.
A system can have top-level attributes set using the scripting commands available, such as
’quota’ to set the maximum number of particles allowed in the system. Emitters (which create
particles) and affectors (which modify particles) are added as nested definitions within the script.
The parameters available in the emitter and affector sections are entirely dependent on the type
of emitter / affector.
For a detailed description of the core particle system attributes, see the list below:
quota
Sets the maximum number of particles this system is allowed to contain at one time. When
this limit is exhausted, the emitters will not be allowed to emit any more particles until some are destroyed (e.g. through their time to live running out). Note that you will almost always want
to change this, since it defaults to a very low value (particle pools are only ever increased in
size, never decreased).
material
Sets the name of the material which all particles in this system will use. All particles in a system
use the same material, although each particle can tint this material through the use of its colour property.
particle width
Sets the width of particles in world coordinates. Note that this property is absolute when billboard type (see below) is set to ’point’ or ’perpendicular self’, but is scaled by the length of the direction vector when billboard type is ’oriented common’, ’oriented self’ or ’perpendicular common’.
particle height
Sets the height of particles in world coordinates. Note that this property is absolute when billboard type (see below) is set to ’point’ or ’perpendicular self’, but is scaled by the length of the direction vector when billboard type is ’oriented common’, ’oriented self’ or ’perpendicular common’.
cull each
All particle systems are culled by the bounding box which contains all the particles in the system.
This is normally sufficient for fairly locally constrained particle systems where most particles
are either visible or not visible together. However, for those that spread particles over a wider
area (e.g. a rain system), you may want to actually cull each particle individually to save time, since it is far more likely that only a subset of the particles will be visible. You do this by
setting the cull each parameter to true.
renderer
Particle systems do not render themselves, they do it through ParticleRenderer classes. Those
classes are registered with a manager in order to provide particle systems with a particular
’look’. OGRE comes configured with a default billboard-based renderer, but more can be added
through plugins. Particle renderers are registered with a unique name, and you can use that name
in this attribute to determine the renderer to use. The default is ’billboard’.
Particle renderers can have attributes, which can be passed by setting them on the root
particle system.
sorted
By default, particles are not sorted. By setting this attribute to ’true’, the particles will be
sorted with respect to the camera, furthest first. This can make certain rendering effects look
better at a small sorting expense.
local space
By default, particles are emitted into world space, such that if you transform the node to which
the system is attached, it will not affect the particles (only the emitters). This tends to give
the normal expected behaviour, which is to model how real world particles travel independently
from the objects they are emitted from. However, to create some effects you may want the particles to remain attached to the local space of the emitter and to follow it directly. This option allows you to do that.
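As a sketch, the top-level attributes above can be combined like this (system, material and renderer values are illustrative; attribute names use underscores in actual scripts):

particle_system Examples/Swarm
{
    material Examples/Flare
    particle_width 10
    particle_height 10
    quota 3000
    cull_each true
    renderer billboard
    sorted true
    local_space false
}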
billboard type
This is actually an attribute of the ’billboard’ particle renderer (the default), and is an example of
passing attributes to a particle renderer by declaring them directly within the system declaration.
Particles using the default renderer are rendered using billboards, which are rectangles formed
by 2 triangles which rotate to face the given direction. However, there is more than 1 way to
orient a billboard. The classic approach is for the billboard to directly face the camera: this
is the default behaviour. However this arrangement only looks good for particles which are
representing something vaguely spherical like a light flare. For more linear effects like laser fire,
you actually want the particle to have an orientation of its own.
oriented common
Particles are oriented around a common direction vector (see [common direction], page 112), which acts as their local Y axis. Good for rainstorms, starfields etc where the particles will be traveling in one direction - this is slightly faster than oriented self (see below).
oriented self
Particles are oriented around their own direction vector, which acts as their local Y
axis. As the particle changes direction, so the billboard reorients itself to face this
way. Good for laser fire, fireworks and other ’streaky’ particles that should look like
they are traveling in their own direction.
perpendicular common
Particles are perpendicular to a common, typically fixed direction vector (see [common direction], page 112), which acts as their local Z axis, with their local Y axis coplanar with the common direction and the common up vector (see [common up vector], page 112). The billboard never rotates to face the camera; you might use a double-sided material to ensure particles are not culled by back-face culling. Good for aureolas, rings etc where the particles will be perpendicular to the ground - this is slightly faster than perpendicular self (see below).
perpendicular self
Particles are perpendicular to their own direction vector, which acts as their local Z axis, with their local Y axis coplanar with their own direction vector and the common up vector (see [common up vector], page 112). The billboard never rotates to face the camera; you might use a double-sided material to ensure particles are not culled by back-face culling. Good for stacked rings etc where the particles will be perpendicular to their direction of travel.
billboard origin
Specifies the point which acts as the origin point for all billboard particles; this controls the fine tuning of where a billboard particle appears in relation to its position.
format: billboard origin <top left|top center|top right|center left|center|center right|bottom left|bottom center|bottom right>
common direction
Only required if [billboard type], page 110 is set to oriented common or perpendicular common,
this vector is the common direction vector used to orient all particles in the system.
See also: Section 3.3.2 [Particle Emitters], page 114, Section 3.3.5 [Particle Affectors], page 121
common up vector
Only required if [billboard type], page 110 is set to perpendicular self or perpendicular common,
this vector is the common up vector used to orient all particles in the system.
See also: Section 3.3.2 [Particle Emitters], page 114, Section 3.3.5 [Particle Affectors], page 121
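For example, a system whose particles lie flat against the ground plane might set the following inside the particle system block (values illustrative; attribute names use underscores in actual scripts):

billboard_type perpendicular_common
common_direction 0 1 0
common_up_vector 0 0 1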
point rendering
This is actually an attribute of the ’billboard’ particle renderer (the default), and sets whether
or not the BillboardSet will use point rendering rather than manually generated quads.
Using point rendering is faster than generating quads manually, but is more restrictive. The
following restrictions apply:
• Only the ’point’ orientation type is supported
• Size and appearance of each particle is controlled by the material pass ([point size], page 38,
[point size attenuation], page 38, [point sprites], page 38)
• Per-particle size is not supported (stems from the above)
• Per-particle rotation is not supported, and this can only be controlled through texture unit
rotation in the material definition
• Only ’center’ origin is supported
• Some drivers have an upper limit on the size of points they support - this can even vary
between APIs on the same card! Don’t rely on point sizes that cause the point sprites to
get very large on screen, since they may get clamped on some cards. Upper sizes can range
from 64 to 256 pixels.
You will almost certainly want to enable in your material pass both point attenuation and
point sprites if you use this option.
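A sketch of enabling point rendering, together with the material pass attributes it relies on (material attribute names here are assumptions based on the references above; written with underscores as in actual scripts):

// In the particle system:
point_rendering true

// In the material pass used by the system:
point_sprites on
point_size 2
point_size_attenuation on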
accurate facing
This is actually an attribute of the ’billboard’ particle renderer (the default), and sets whether
or not the BillboardSet will use a slower but more accurate calculation for facing the billboard
to the camera. By default it uses the camera direction, which is faster but means the billboards
don’t stay in the same orientation as you rotate the camera. The ’accurate facing true’ option
makes the calculation based on a vector from each billboard to the camera, which means the
orientation is constant even whilst the camera rotates.
iteration interval
Usually particle systems are updated based on the frame rate; however this can give variable
results with more extreme frame rate ranges, particularly at lower frame rates. You can use this
option to make the update frequency a fixed interval, whereby at lower frame rates, the particle
update will be repeated at the fixed interval until the frame time is used up. A value of 0 means
the default frame time iteration.
nonvisible update timeout
This option lets you set a ’timeout’ on the particle system, so that if it isn’t visible for this amount of time, it will stop updating until it is next visible. A value of 0 disables the timeout and always updates.
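For instance, to step the simulation at a fixed 10ms interval and suspend updates after five seconds off-screen (the timeout attribute name is assumed to be nonvisible_update_timeout, matching the description above):

iteration_interval 0.01
nonvisible_update_timeout 5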
It is also possible to ’emit emitters’ - that is, have new emitters spawned based on the position
of particles. See [Emitting Emitters], page 120
See also: Section 3.3 [Particle Scripts], page 106, Section 3.3.5 [Particle Affectors], page 121
angle
Sets the maximum angle (in degrees) which emitted particles may deviate from the direction of
the emitter (see direction). Setting this to 10 allows particles to deviate up to 10 degrees in any
direction away from the emitter’s direction. A value of 180 means emit in any direction, whilst
0 means emit always exactly in the direction of the emitter.
colour
Sets a static colour for all particles emitted. Also see the colour range start and colour range end
attributes for setting a range of colours. The format of the colour parameter is "r g b a", where
each component is a value from 0 to 1, and the alpha value is optional (assumes 1 if not specified).
format: as colour
example (generates random colours between red and blue):
colour range start 1 0 0
colour range end 0 0 1
default: both 1 1 1 1
direction
Sets the direction of the emitter. This is relative to the SceneNode which the particle system is
attached to, meaning that as with other movable objects changing the orientation of the node
will also move the emitter.
emission rate
Sets how many particles per second should be emitted. The specific emitter does not have to
emit these in a continuous burst - this is a relative parameter and the emitter may choose to
emit all of the second’s worth of particles every half-second for example, the behaviour depends
on the emitter. The emission rate will also be limited by the particle system’s ’quota’ setting.
position
Sets the position of the emitter relative to the SceneNode the particle system is attached to.
velocity
Sets a constant velocity for all particles at emission time. See also the velocity min and velocity max attributes which allow you to set a range of velocities instead of a fixed one.
format: as velocity
example:
velocity min 50
velocity max 100
default: both 1
time to live
Sets the number of seconds each particle will ’live’ for before being destroyed. NB it is possible
for particle affectors to alter this in flight, but this is the value given to particles on emission.
See also the time to live min and time to live max attributes which let you set a lifetime range
instead of a fixed one.
duration
Sets the number of seconds the emitter is active. The emitter can be started again, see
[repeat delay], page 118. A value of 0 means infinite duration. See also the duration min
and duration max attributes which let you set a duration range instead of a fixed one.
format: as duration
example:
duration min 2
duration max 5
default: both 0
repeat delay
Sets the number of seconds to wait before the emission is repeated when stopped by a limited
[duration], page 117. See also the repeat delay min and repeat delay max attributes which
allow you to set a range of repeat delays instead of a fixed one.
See also: Section 3.3.4 [Standard Particle Emitters], page 118, Section 3.3 [Particle Scripts],
page 106, Section 3.3.5 [Particle Affectors], page 121
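Combining the common emitter attributes above, a sketch of an emitter section might read as follows (values illustrative; attribute names use underscores in actual scripts):

emitter Point
{
    angle 30
    direction 0 1 0
    emission_rate 100
    velocity_min 20
    velocity_max 40
    time_to_live_min 2
    time_to_live_max 4
    duration 3
    repeat_delay_min 1
    repeat_delay_max 2
    colour 1 0.8 0 1
}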
Point Emitter
This emitter emits particles from a single point, which is its position. This emitter has no additional attributes over and above the standard emitter attributes.
To create a point emitter, include a section like this within your particle system script:
emitter Point
{
// Settings go here
}
Box Emitter
This emitter emits particles from a random location within a 3-dimensional box. Its extra attributes are:
width Sets the width of the box (this is the size of the box along its local X axis, which is dependent on the ’direction’ attribute which forms the box’s local Z).
format: width <units>
example: width 250
default: 100
height Sets the height of the box (this is the size of the box along its local Y axis, which is dependent on the ’direction’ attribute which forms the box’s local Z).
format: height <units>
example: height 250
default: 100
depth Sets the depth of the box (this is the size of the box along its local Z axis, which is the same as the ’direction’ attribute).
format: depth <units>
example: depth 250
default: 100
To create a box emitter, include a section like this within your particle system script:
emitter Box
{
// Settings go here
}
Cylinder Emitter
This emitter emits particles in a random direction from within a cylinder area, where the cylinder is oriented along the Z-axis. This emitter has exactly the same parameters as the [Box Emitter], page 119 so there are no additional parameters to consider here - the width and height determine the shape of the cylinder along its axis (if they differ it is an elliptical cylinder), and the depth determines the length of the cylinder.
Ellipsoid Emitter
This emitter emits particles from within an ellipsoid shaped area, i.e. a sphere or squashed-
sphere area. The parameters are again identical to the [Box Emitter], page 119, except that the
dimensions describe the widest points along each of the axes.
Hollow Ellipsoid Emitter
This emitter is just like the [Ellipsoid Emitter], page 120, except that there is a hollow area in the centre of the ellipsoid from which no particles are emitted. Its extra attributes are:
inner width
The width of the inner area which does not emit any particles.
inner height
The height of the inner area which does not emit any particles.
inner depth
The depth of the inner area which does not emit any particles.
Ring Emitter
This emitter emits particles from a ring-shaped area, i.e. a little like [Hollow Ellipsoid Emitter],
page 120 except only in 2 dimensions.
inner width
The width of the inner area which does not emit any particles.
inner height
The height of the inner area which does not emit any particles.
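A sketch of a ring emitter where only the outer part of the ring emits particles (values illustrative; attribute names use underscores in actual scripts):

emitter Ring
{
    width 100
    height 100
    inner_width 0.5
    inner_height 0.5
}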
See also: Section 3.3 [Particle Scripts], page 106, Section 3.3.2 [Particle Emitters], page 114
Emitting Emitters
It is possible to spawn new emitters on the expiry of particles, for example to produce ’firework’
style effects. This is controlled via the following directives:
Particle affectors actually have no universal attributes; they are all specific to the type of
affector.
See also: Section 3.3.6 [Standard Particle Affectors], page 121, Section 3.3 [Particle Scripts],
page 106, Section 3.3.2 [Particle Emitters], page 114
force vector
Sets the vector for the force to be applied to every particle. The magnitude of this
vector determines how strong the force is.
format: force vector <x> <y> <z>
example: force vector 50 0 -50
default: 0 -100 0 (a fair gravity effect)
force application
Sets the way in which the force vector is applied to particle momentum.
format: force application <add|average>
example: force application average
default: add
The options are:
average The resulting momentum is the average of the force vector and the particle’s current motion. This is self-stabilising, but the speed at which the particle changes direction is non-linear.
add The resulting momentum is the particle’s current motion plus the force
vector. This is traditional force acceleration but can potentially result
in unlimited velocity.
To create a linear force affector, include a section like this within your particle system script:
affector LinearForce
{
// Settings go here
}
Please note that the name of the affector type (’LinearForce’) is case-sensitive.
ColourFader Affector
This affector modifies the colour of particles in flight. Its extra attributes are:
red Sets the adjustment to be made to the red component of the particle colour per
second.
format: red <delta value>
example: red -0.1
default: 0
green Sets the adjustment to be made to the green component of the particle colour per
second.
format: green <delta value>
example: green -0.1
default: 0
blue Sets the adjustment to be made to the blue component of the particle colour per
second.
format: blue <delta value>
example: blue -0.1
default: 0
alpha Sets the adjustment to be made to the alpha component of the particle colour per
second.
format: alpha <delta value>
example: alpha -0.1
default: 0
To create a colour fader affector, include a section like this within your particle system script:
affector ColourFader
{
// Settings go here
}
ColourFader2 Affector
This affector is similar to the [ColourFader Affector], page 122, except it introduces two states
of colour changes as opposed to just one. The second colour change state is activated once a specified amount of time remains in the particle’s life.
red1 Sets the adjustment to be made to the red component of the particle colour per second for the first state.
format: red1 <delta value>
example: red1 -0.1
default: 0
green1 Sets the adjustment to be made to the green component of the particle colour per second for the first state.
format: green1 <delta value>
example: green1 -0.1
default: 0
blue1 Sets the adjustment to be made to the blue component of the particle colour per second for the first state.
format: blue1 <delta value>
example: blue1 -0.1
default: 0
alpha1 Sets the adjustment to be made to the alpha component of the particle colour per second for the first state.
format: alpha1 <delta value>
example: alpha1 -0.1
default: 0
red2 Sets the adjustment to be made to the red component of the particle colour per second for the second state.
format: red2 <delta value>
example: red2 -0.1
default: 0
green2 Sets the adjustment to be made to the green component of the particle colour per second for the second state.
format: green2 <delta value>
example: green2 -0.1
default: 0
blue2 Sets the adjustment to be made to the blue component of the particle colour per second for the second state.
format: blue2 <delta value>
example: blue2 -0.1
default: 0
alpha2 Sets the adjustment to be made to the alpha component of the particle colour per second for the second state.
format: alpha2 <delta value>
example: alpha2 -0.1
default: 0
state change
When a particle has this much time left to live, it will switch to state 2.
format: state change <seconds>
example: state change 2
default: 1
To create a ColourFader2 affector, include a section like this within your particle system
script:
affector ColourFader2
{
// Settings go here
}
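A filled-in sketch, dimming the colour at first and then fading to transparent over the final two seconds of each particle’s life (values illustrative; attribute names use underscores in actual scripts):

affector ColourFader2
{
    red1 -0.25
    green1 -0.25
    blue1 -0.25
    alpha2 -0.5
    state_change 2
}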
Scaler Affector
This affector scales particles in flight. Its extra attributes are:
rate The amount by which to scale the particles in both the x and y direction per second.
To create a scale affector, include a section like this within your particle system script:
affector Scaler
{
// Settings go here
}
Rotator Affector
This affector rotates particles in flight. This is done by rotating the texture. Its extra attributes are:
rotation speed range start
The start of a range of rotation speeds to be assigned to emitted particles.
format: rotation speed range start <degrees per second>
example: rotation speed range start 90
default: 0
To create a rotate affector, include a section like this within your particle system script:
affector Rotator
{
// Settings go here
}
ColourInterpolator Affector
Similar to the ColourFader and ColourFader2 affectors, this affector modifies the colour of particles in flight, except it has a variable number of defined stages. It swaps the particle colour for several stages in the life of a particle and interpolates between them. Its extra attributes are:
time0 The point in time of stage 0.
format: time0 <0-1 based on lifetime>
example: time0 0
default: 1
[...]
The number of stages is variable. The maximum number of stages is 6, where time5 and colour5 are the last possible parameters. To create a colour interpolation affector, include a section like this within your particle system script:
affector ColourInterpolator
{
// Settings go here
}
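For example, a three-stage interpolation from red through green to blue might be sketched as follows (the colour<N> attribute names are assumed to follow the same numbering as time<N>):

affector ColourInterpolator
{
    time0 0
    colour0 1 0 0
    time1 0.5
    colour1 0 1 0
    time2 1
    colour2 0 0 1
}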
ColourImage Affector
This is another affector that modifies the colour of particles in flight, but instead of programmatically defining colours, the colours are taken from a specified image file. The range of colour values begins at the left side of the image and moves to the right over the lifetime of the particle, therefore only the horizontal dimension of the image is used. Its extra attributes are:
image The name of the image file from which to take the particle colours.
format: image <image name>
example: image rainbow.png
default: none
To create a ColourImage affector, include a section like this within your particle system
script:
affector ColourImage
{
// Settings go here
}
DeflectorPlane Affector
This affector defines a plane which deflects particles which collide with it. The attributes are:
plane point
A point on the deflector plane. Together with the normal vector it defines the plane.
default: plane point 0 0 0
plane normal
The normal vector of the deflector plane. Together with the point it defines the
plane.
default: plane normal 0 1 0
bounce The amount of bouncing when a particle is deflected. 0 means no deflection and 1
stands for 100 percent reflection.
default: bounce 1.0
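Putting the plane attributes together, a sketch of a floor plane that particles bounce off with half their incoming speed (values illustrative; attribute names use underscores in actual scripts):

affector DeflectorPlane
{
    plane_point 0 0 0
    plane_normal 0 1 0
    bounce 0.5
}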
DirectionRandomiser Affector
This affector applies randomness to the movement of the particles. Its extra attributes are:
randomness
The amount of randomness to introduce in each axial direction.
example: randomness 5
default: randomness 1
keep velocity
Determines whether the velocity of particles is unchanged.
example: keep velocity true
default: keep velocity false
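Combining the two attributes, a sketch of a randomiser that jitters particle direction while preserving speed (values illustrative; attribute names use underscores in actual scripts):

affector DirectionRandomiser
{
    randomness 5
    keep_velocity true
}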
Loading scripts
Overlay scripts are loaded at initialisation time by the system: by default it looks in all common resource locations (see Root::addResourceLocation) for files with the ’.overlay’ extension and parses them. If you want to parse files with a different extension, use the OverlayManager::getSingleton().parseAllSources method with your own extension, or if you want to parse an individual file, use OverlayManager::getSingleton().parseScript.
Format
Several overlays may be defined in a single script. The script format is pseudo-C++, with sections delimited by curly braces ({}), comments indicated by starting a line with ’//’ (note: comments cannot be nested), and inheritance through the use of templates. The general format is shown below in a typical example:
// The name of the overlay comes first
MyOverlays/ANewOverlay
{
zorder 200
container Panel(MyOverlayElements/TestPanel)
{
// Center it horizontally, put it at the top
left 0.25
top 0
width 0.5
height 0.1
material MyMaterials/APanelMaterial
// Another panel nested in this one
container Panel(MyOverlayElements/AnotherPanel)
{
left 0
top 0
width 0.1
height 0.1
material MyMaterials/NestedPanel
}
}
}
The above example defines a single overlay called ’MyOverlays/ANewOverlay’, with 2 panels
in it, one nested under the other. It uses relative metrics (the default if no metrics mode option
is found).
Every overlay in the script must be given a name, which is the line before the first opening
'{'. This name must be globally unique. It can include path characters (as in the example) to
logically divide up your overlays, and also to avoid duplicate names, but the engine does not
treat the name as hierarchical, just as a string. Within the braces are the properties of the
overlay, and any nested elements. The overlay itself only has a single property ’zorder’ which
determines how ’high’ it is in the stack of overlays if more than one is displayed at the same
time. Overlays with higher zorder values are displayed on top.
Within an overlay you can add any number of nested items, using one of two keywords:
'element' if you want to define a 2D element which cannot have children of its own
'container' if you want to define a 2D container object (which may itself have nested containers
or elements)
The element and container blocks are almost identical apart from their ability to store nested
blocks.
type name
Must resolve to the name of an OverlayElement type which has been registered with
the OverlayManager. Plugins register with the OverlayManager to advertise their
ability to create elements, and at this time advertise the name of the type. OGRE
comes preconfigured with types ’Panel’, ’BorderPanel’ and ’TextArea’.
instance name
Must be a name unique among all other elements / containers by which to identify
the element. Note that you can obtain a pointer to any named element by calling
OverlayManager::getSingleton().getOverlayElement(name).
template name
Optional template on which to base this item. See templates.
The properties which can be included within the braces depend on the custom type. However
the following are always valid:
• [metrics_mode], page 131
• [horz_align], page 131
• [vert_align], page 132
• [left], page 132
• [top], page 133
• [width], page 133
• [height], page 133
• [material], page 134
• [caption], page 134
Templates
You can use templates to create numerous elements with the same properties. A template is
an abstract element and it is not added to an overlay. It acts as a base class that elements can
inherit and get its default properties. To create a template, the keyword ’template’ must be
the first word in the element definition (before container or element). The template element is
created in the topmost scope - it is NOT specified in an Overlay. It is recommended that you
define templates in a separate script file, though this is not essential. Having templates defined in
a separate file will allow different look & feels to be easily substituted.
Elements can inherit a template in a similar way to C++ inheritance - by using the : operator
on the element definition. The : operator is placed after the closing bracket of the name
(separated by a space). The name of the template to inherit is then placed after the : operator
(also separated by a space).
A template can contain template children which are created when the template is subclassed
and instantiated. Using the template keyword for the children of a template is optional but rec-
ommended for clarity, as the children of a template are always going to be templates themselves.
MyOverlays/AnotherOverlay
{
zorder 490
container BorderPanel(MyElements/BackPanel) : MyTemplates/BasicBorderPanel
{
left 0
top 0
width 1
height 1
}
}
Also note that the instantiation of a Button needs a template name for the caption attribute.
So templates can also be used by elements that need dynamic creation of children elements (the
button creates a TextAreaElement in this case for its caption).
See Section 3.4.1 [OverlayElement Attributes], page 131, Section 3.4.2 [Standard OverlayEle-
ments], page 135
metrics_mode
Sets the units which will be used to size and position this element.
This can be used to change the way that all measurement attributes in the rest of this element
are interpreted. In relative mode, they are interpreted as being a parametric value from 0 to 1,
as a proportion of the width / height of the screen. In pixels mode, they are simply pixel offsets.
horz_align
Sets the horizontal alignment of this element, in terms of where the horizontal origin is.
This can be used to change where the origin is deemed to be for the purposes of any horizontal
positioning attributes of this element. By default the origin is deemed to be the left edge of the
screen, but if you change this you can center or right-align your elements. Note that setting the
alignment to center or right does not automatically force your elements to appear in the center
or the right edge, you just have to treat that point as the origin and adjust your coordinates
appropriately. This is more flexible because you can choose to position your element anywhere
relative to that origin. For example, if your element was 10 pixels wide, you would use a ’left’
property of -10 to align it exactly to the right edge, or -20 to leave a gap but still make it stick
to the right edge.
Note that you can use this property in both relative and pixel modes, but it is most useful
in pixel mode.
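The right-edge example above can be written as an overlay snippet like this (element and material names are hypothetical):

```
container Panel(MyElements/RightPanel)
{
    metrics_mode pixels
    horz_align right
    left -10
    top 0
    width 10
    height 10
    material MyMaterials/APanelMaterial
}
```

With horz_align right, a 'left' of -10 places this 10-pixel-wide panel flush against the right edge of the screen.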
vert_align
Sets the vertical alignment of this element, in terms of where the vertical origin is.
This can be used to change where the origin is deemed to be for the purposes of any vertical
positioning attributes of this element. By default the origin is deemed to be the top edge of
the screen, but if you change this you can center or bottom-align your elements. Note that
setting the alignment to center or bottom does not automatically force your elements to appear
in the center or the bottom edge, you just have to treat that point as the origin and adjust
your coordinates appropriately. This is more flexible because you can choose to position your
element anywhere relative to that origin. For example, if your element was 50 pixels high, you
would use a ’top’ property of -50 to align it exactly to the bottom edge, or -70 to leave a gap
but still make it stick to the bottom edge.
Note that you can use this property in both relative and pixel modes, but it is most useful
in pixel mode.
left
Sets the horizontal position of the element relative to its parent.
Positions are relative to the parent (the top-left of the screen if the parent is an overlay,
the top-left of the parent otherwise) and are expressed in terms of a proportion of screen size.
Therefore 0.5 is half-way across the screen.
Default: left 0
top
Sets the vertical position of the element relative to its parent.
Positions are relative to the parent (the top-left of the screen if the parent is an overlay,
the top-left of the parent otherwise) and are expressed in terms of a proportion of screen size.
Therefore 0.5 is half-way down the screen.
Default: top 0
width
Sets the width of the element as a proportion of the size of the screen.
Sizes are relative to the size of the screen, so 0.25 is a quarter of the screen. Sizes are not
relative to the parent; this is common in windowing systems where the top and left are relative
but the size is absolute.
Default: width 1
height
Sets the height of the element as a proportion of the size of the screen.
Sizes are relative to the size of the screen, so 0.25 is a quarter of the screen. Sizes are not
relative to the parent; this is common in windowing systems where the top and left are relative
but the size is absolute.
Default: height 1
material
Sets the name of the material to use for this element.
This sets the base material which this element will use. Each type of element may interpret
this differently; for example the OGRE element ’Panel’ treats this as the background of the
panel, whilst ’BorderPanel’ interprets this as the material for the center area only. Materials
should be defined in .material scripts.
Note that using a material in an overlay element automatically disables lighting and depth
checking on this material. Therefore you should not use the same material as is used for real
3D objects for an overlay.
Default: none
caption
Sets a text caption for the element.
Not all elements support captions, so each element is free to disregard this if it wants.
However, a general text caption is so common to many elements that it is included in the
generic interface to make it simpler to use. This is a common feature in GUI systems.
Default: blank
rotation
Sets the rotation of the element.
Format: rotation <angle in degrees> <axis x> <axis y> <axis z>
Example: rotation 30 0 0 1
Default: none
This section describes how you define these custom attributes in an .overlay script, but you can
also change these custom properties in code if you wish. You do this by calling
setParameter(paramname, value). You may wish to use the StringConverter class to convert your types
to and from strings.
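As a sketch of the code route (assuming an element named 'MyElements/TestPanel' exists and supports the 'transparent' parameter):

```cpp
// Look up a named element and change a custom property at runtime.
// setParameter takes both the parameter name and the value as strings,
// so StringConverter is used to turn a bool into a string.
Ogre::OverlayElement* elem =
    Ogre::OverlayManager::getSingleton().getOverlayElement("MyElements/TestPanel");
elem->setParameter("transparent", Ogre::StringConverter::toString(true));
```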
Panel (container)
This is the most bog-standard container you can use. It is a rectangular area which can contain
other elements (or containers) and may or may not have a background, which can be tiled
however you like. The background material is determined by the material attribute, but is only
displayed if transparency is off.
Attributes:
transparent <true | false>
If set to 'true' the panel is transparent and is not rendered itself; it is just used as
a grouping level for its children.
tiling <layer> <x_tile> <y_tile>
Sets the number of times the texture(s) of the material are tiled across the panel in
the x and y direction. <layer> is the texture layer, from 0 to the number of texture
layers in the material minus one. By setting tiling per layer you can create some
nice multitextured backdrops for your panels; this works especially well when you
animate one of the layers.
uv_coords <topleft_u> <topleft_v> <bottomright_u> <bottomright_v>
Sets the texture coordinates to use for this panel.
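Combining these attributes, a full-screen tiled backdrop panel might look like this (a sketch; the element and material names are hypothetical):

```
container Panel(MyElements/Backdrop)
{
    left 0
    top 0
    width 1
    height 1
    material MyMaterials/CloudBackground
    tiling 0 4 4
}
```

Here the texture of layer 0 is repeated four times in each direction across the panel.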
BorderPanel (container)
This is a slightly more advanced version of Panel, where instead of just a single flat panel, the
panel has a separate border which resizes with the panel. It does this by taking an approach
very similar to the use of HTML tables for bordered content: the panel is rendered as 9 square
areas, with the center area being rendered with the main material (as with Panel) and the outer
8 areas (the 4 corners and the 4 edges) rendered with a separate border material. The advantage
of rendering the corners separately from the edges is that the edge textures can be designed so
that they can be stretched without distorting them, meaning the single texture can serve any
size panel.
Attributes:
TextArea (element)
This is a generic element that you can use to render text. It uses fonts which can be defined in
code using the FontManager and Font classes, or which have been predefined in .fontdef files.
See the font definitions section for more information.
Attributes:
font_name <name>
The name of the font to use. This font must be defined in a .fontdef file to ensure
it is available at scripting time.
char_height <height>
The height of the letters as a proportion of the screen height. Character widths may
vary because OGRE supports proportional fonts, but will be based on this constant
height.
colour <red> <green> <blue>
A solid colour to render the text in. Often fonts are defined in monochrome, so this
allows you to colour them in nicely and use the same texture for multiple different
coloured text areas. The colour elements should all be expressed as values between
0 and 1. If you use predrawn fonts which are already full colour then you don’t need
this.
colour_bottom <red> <green> <blue> / colour_top <red> <green> <blue>
As an alternative to a solid colour, you can colour the text differently at the top
and bottom to create a gradient colour effect which can be very effective.
alignment <left | center | right>
Sets the horizontal alignment of the text. This is different from the horz_align
parameter.
space_width <width>
Sets the width of a space in relation to the screen.
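Putting the TextArea attributes together, a gradient-coloured caption might be declared like this (a sketch; the element and font names are hypothetical):

```
element TextArea(MyElements/Caption)
{
    metrics_mode pixels
    left 10
    top 10
    width 300
    height 40
    font_name MyFonts/BlueHighway
    char_height 24
    colour_top 1 1 1
    colour_bottom 0.5 0.5 0.5
    caption Hello World
    alignment left
}
```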
All font definitions are held in .fontdef files, which are parsed by the system at startup time.
Each .fontdef file can contain multiple font definitions. The basic format of an entry in the
.fontdef file is:
<font_name>
{
type <image | truetype>
source <image file | truetype font file>
...
... custom attributes depending on type
}
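For instance, a truetype entry might look like this (the font name and file are hypothetical; 'size' is the point size at which glyphs are rendered into the texture and 'resolution' the DPI used):

```
MyFonts/BlueHighway
{
    type truetype
    source bluehighway.ttf
    size 16
    resolution 96
}
```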
'character' is either an ASCII character for non-extended 7-bit ASCII, or, for ex-
tended glyphs, a unicode decimal value, which is identified by preceding the number
with a 'u' - e.g. 'u0546' denotes unicode value 546.
A note for Windows users: I recommend using BitmapFontBuilder
(http://www.lmnopc.com/bitmapfontbuilder/), a free tool which will generate a
texture and export character widths for you; you can find a tool for converting the binary
output from this into 'glyph' lines in the Tools folder.
To control which glyphs are generated for a truetype font (the 'code_points' attribute),
you should specify a space-separated list of inclusive code point ranges of the form
'start-end'. Numbers must be decimal.
You can also create new fonts at runtime by using the FontManager if you wish.
4 Mesh Tools
There are a number of mesh tools available with OGRE to help you manipulate your meshes.
Section 4.1 [Exporters], page 140
For getting data out of modellers and into OGRE.
Section 4.2 [XmlConverter], page 140
For converting meshes and skeletons to/from XML.
Section 4.3 [MeshUpgrader], page 141
For upgrading binary meshes from one version of OGRE to another.
4.1 Exporters
Exporters are plugins to 3D modelling tools which write meshes and skeletal animation to file
formats which OGRE can use for realtime rendering. The files the exporters write end in .mesh
and .skeleton respectively.
Each exporter has to be written specifically for the modeller in question, although they all use
a common set of facilities provided by the classes MeshSerializer and SkeletonSerializer. They
also normally require you to own the modelling tool.
All the exporters here can be built from the source code, or you can download precompiled
versions from the OGRE web site.
4.2 XmlConverter
The OgreXmlConverter tool can convert binary .mesh and .skeleton files to XML and back
again - this is a very useful tool for debugging the contents of meshes, or for exchanging mesh
data easily - many of the modeller mesh exporters export to XML because it is simpler to do,
and OgreXmlConverter can then produce a binary mesh from it. Besides simplicity, the other
advantage is that OgreXmlConverter can generate additional information for the mesh, like
bounding regions and level-of-detail reduction.
Syntax:
4.3 MeshUpgrader
This tool is provided to allow you to upgrade your meshes when the binary format changes -
sometimes we alter it to add new features and as such you need to keep your own assets up to
date. This tool has a very simple syntax:
OgreMeshUpgrade <oldmesh> <newmesh>
The OGRE release notes will notify you when this is necessary with a new release.
5 Hardware Buffers
Vertex buffers, index buffers and pixel buffers inherit most of their features from the Hardware-
Buffer class. The general premise with a hardware buffer is that it is an area of memory with
which you can do whatever you like; there is no format (vertex or otherwise) associated with
the buffer itself - that is entirely up to interpretation by the methods that use it - in that way, a
HardwareBuffer is just like an area of memory you might allocate using ’malloc’ - the difference
being that this memory is likely to be located in GPU or AGP memory.
For example:
VertexDeclaration* decl = HardwareBufferManager::getSingleton().createVertexDeclaration();
HardwareVertexBufferSharedPtr vbuf =
HardwareBufferManager::getSingleton().createVertexBuffer(
3*sizeof(Real), // size of one whole vertex
numVertices, // number of vertices
HardwareBuffer::HBU_STATIC_WRITE_ONLY, // usage
false); // no shadow buffer
Don't worry about the details of the above; we'll cover them in later sections. The
important thing to remember is to always create objects through the HardwareBufferManager;
don't use 'new' (it won't work anyway in most cases).
The most optimal type of hardware buffer is one which is not updated often, and is never
read from. The usage parameter of createVertexBuffer or createIndexBuffer can be one of the
following:
HBU_STATIC
This means you do not need to update the buffer very often, but you might occa-
sionally want to read from it.
HBU_STATIC_WRITE_ONLY
This means you do not need to update the buffer very often, and you do not need to
read from it. However, you may read from its shadow buffer if you set one up (See
Section 5.3 [Shadow Buffers], page 143). This is the optimal buffer usage setting.
HBU_DYNAMIC
This means you expect to update the buffer often, and that you may wish to read
from it. This is the least optimal buffer setting.
HBU_DYNAMIC_WRITE_ONLY
This means you expect to update the buffer often, but that you never want
to read from it. However, you may read from its shadow buffer if you set
one up (See Section 5.3 [Shadow Buffers], page 143). If you use this option,
and replace the entire contents of the buffer every frame, then you should use
HBU_DYNAMIC_WRITE_ONLY_DISCARDABLE instead, since that has better
performance characteristics on some platforms.
HBU_DYNAMIC_WRITE_ONLY_DISCARDABLE
This means that you expect to replace the entire contents of the buffer on an ex-
tremely regular basis, most likely every frame. By selecting this option, you free
the system up from having to be concerned about losing the existing contents of
the buffer at any time, because if it does lose them, you will be replacing them
next frame anyway. On some platforms this can make a significant performance
difference, so you should try to use this whenever you have a buffer you need to
update regularly. Note that if you create a buffer this way, you should use the
HBL_DISCARD flag when locking the contents of it for writing.
Choosing the usage of your buffers carefully is important to getting optimal performance out
of your geometry. If you have a situation where you need to update a vertex buffer often, consider
whether you actually need to update all the parts of it, or just some. If it’s the latter, consider
using more than one buffer, with only the data you need to modify in the HBU_DYNAMIC
buffer.
Always try to use the WRITE_ONLY forms. This just means that you cannot read directly
from the hardware buffer, which is good practice because reading from hardware buffers is very
slow. If you really need to read data back, use a shadow buffer, described in the next section.
the contents of the hardware buffer will have been copied into system memory somewhere in
order for you to get access to it. For the same reason, when you’re finished with the buffer you
must unlock it; if you locked the buffer for writing this will trigger the process of uploading the
modified information to the graphics hardware.
Lock parameters
When you lock a buffer, you call one of the following methods:
// Lock the entire buffer
pBuffer->lock(lockType);
// Lock only part of the buffer
pBuffer->lock(start, length, lockType);
The first call locks the entire buffer, the second locks only the section from ’start’ (as a
byte offset), for ’length’ bytes. This could be faster than locking the entire buffer since less
is transferred, but not if you later update the rest of the buffer too, because doing it in small
chunks like this means you cannot use HBL_DISCARD (see below).
The lockType parameter can have a large effect on the performance of your application, especially
if you are not using a shadow buffer.
HBL_NORMAL
This kind of lock allows reading and writing from the buffer - it’s also the least
optimal because basically you’re telling the card you could be doing anything at all.
If you’re not using a shadow buffer, it requires the buffer to be transferred from the
card and back again. If you’re using a shadow buffer the effect is minimal.
HBL_READ_ONLY
This means you only want to read the contents of the buffer. Best used when you
created the buffer with a shadow buffer because in that case the data does not have
to be downloaded from the card.
HBL_DISCARD
This means you are happy for the card to discard the entire current contents of the
buffer. Implicitly this means you are not going to read the data - it also means that
the card can avoid any stalls if the buffer is currently being rendered from, because
it will actually give you an entirely different one. Use this wherever possible when
you are locking a buffer which was not created with a shadow buffer. If you are
using a shadow buffer it matters less, although with a shadow buffer it’s preferable
to lock the entire buffer at once, because that allows the shadow buffer to use
HBL_DISCARD when it uploads the updated contents to the real buffer.
HBL_NO_OVERWRITE
This is useful if you are locking just part of the buffer and thus cannot use
HBL_DISCARD. It tells the card that you promise not to modify any section of
the buffer which has already been used in a rendering operation this frame. Again
this is only useful on buffers with no shadow buffer.
Once you have locked a buffer, you can use the pointer returned however you wish (just don’t
bother trying to read the data that's there if you've used HBL_DISCARD, or write the data
if you've used HBL_READ_ONLY). Modifying the contents depends on the type of buffer, See
Section 5.6 [Hardware Vertex Buffers], page 145 and See Section 5.7 [Hardware Index Buffers],
page 149
It's worth noting that you don't necessarily have to use VertexData to store your application's
geometry; all that is required is that you can build a VertexData structure when it comes to
rendering. This is pretty easy since all of VertexData’s members are pointers, so you could
maintain your vertex buffers and declarations in alternative structures if you like, so long as you
can convert them for rendering.
vertexBufferBinding
A pointer to a VertexBufferBinding object which defines which vertex buffers are
bound to which sources - again, this is created for you by VertexData. See
Section 5.6.3 [Vertex Buffer Bindings], page 147
To add an element to a VertexDeclaration, you call its addElement method. The parameters
to this method are:
source This tells the declaration which buffer the element is to be pulled from. Note
that this is just an index, which may range from 0 to one less than the number of
buffers which are being bound as sources of vertex data. See Section 5.6.3 [Vertex
Buffer Bindings], page 147 for information on how a real buffer is bound to a source
index. Storing the source of the vertex element this way (rather than using a buffer
pointer) allows you to rebind the source of a vertex very easily, without changing
the declaration of the vertex format itself.
offset Tells the declaration how far in bytes the element is offset from the start of each
whole vertex in this buffer. This will be 0 if this is the only element being sourced
from this buffer, but if other elements are there then it may be higher. A good way
of thinking of this is the size of all vertex elements which precede this element in
the buffer.
type This defines the data type of the vertex input, including its size. This is an impor-
tant element because as GPUs become more advanced, we can no longer assume that
position input will always require 3 floating point numbers, because programmable
vertex pipelines allow full control over the inputs and outputs. This part of the
element definition covers the basic type and size, e.g. VET_FLOAT3 is 3 floating
point numbers - the meaning of the data is dealt with in the next parameter.
semantic This defines the meaning of the element - the GPU will use this to determine what to
use this input for, and programmable vertex pipelines will use this to identify which
semantic to map the input to. This can identify the element as positional data,
normal data, texture coordinate data, etc. See the API reference for full details of
all the options.
index This parameter is only required when you supply more than one element of the same
semantic in one vertex declaration. For example, if you supply more than one set
of texture coordinates, you would set the first set's index to 0, and the second set's to 1.
You can repeat the call to addElement for as many elements as you have in your vertex input
structures. There are also useful methods on VertexDeclaration for locating elements within a
declaration - see the API reference for full details.
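Putting the parameters together, a declaration for interleaved position, normal and one set of 2D texture coordinates in a single buffer (source 0) could be built like this (a sketch; VertexElement::getTypeSize is used to keep the offsets consistent):

```cpp
// Build up a vertex declaration element by element, accumulating
// the byte offset within the shared buffer as we go.
Ogre::VertexDeclaration* decl =
    Ogre::HardwareBufferManager::getSingleton().createVertexDeclaration();
size_t offset = 0;
decl->addElement(0, offset, Ogre::VET_FLOAT3, Ogre::VES_POSITION);
offset += Ogre::VertexElement::getTypeSize(Ogre::VET_FLOAT3);
decl->addElement(0, offset, Ogre::VET_FLOAT3, Ogre::VES_NORMAL);
offset += Ogre::VertexElement::getTypeSize(Ogre::VET_FLOAT3);
// index 0 because this is the first (and only) set of texture coordinates
decl->addElement(0, offset, Ogre::VET_FLOAT2, Ogre::VES_TEXTURE_COORDINATES, 0);
```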
Important Considerations
Whilst in theory you have completely free rein over the format of your vertices, in reality there are
some restrictions. Older DirectX hardware imposes a fixed ordering on the elements which are
pulled from each buffer; specifically any hardware prior to DirectX 9 may impose the following
restrictions:
• VertexElements should be added in the following order, and the order of the elements within
any shared buffer should be as follows:
1. Positions
2. Blending weights
3. Normals
4. Diffuse colours
5. Specular colours
6. Texture coordinates (starting at 0, listed in order, with no gaps)
• You must not have unused gaps in your buffers which are not referenced by any VertexEle-
ment
• You must not cause the buffer & offset settings of 2 VertexElements to overlap
OpenGL and DirectX 9 compatible hardware are not required to follow these strict limitations,
so you might find, for example, that if you broke these rules your application would run
under OpenGL and under DirectX on recent cards, but it is not guaranteed to run on older
hardware under DirectX unless you stick to the above rules. For this reason you’re advised to
abide by them!
numVertices
The number of vertices in this buffer. Remember, not all the vertices have to be
used at once - it can be beneficial to create large buffers which are shared between
many chunks of geometry because changing vertex buffer bindings is a render state
switch, and those are best minimised.
usage This tells the system how you intend to use the buffer. See Section 5.2 [Buffer
Usage], page 142
useShadowBuffer
Tells the system whether you want this buffer backed by a system-memory copy.
See Section 5.3 [Shadow Buffers], page 143
There are also methods for retrieving buffers from the binding data - see the API reference for
full details.
Let's start with a very simple example. Let's say you have a buffer which only contains vertex
positions, so it only contains sets of 3 floating point numbers per vertex. In this case, all you
need to do to write data into it is:
Real* pReal = static_cast<Real*>(vbuf->lock(HardwareBuffer::HBL_DISCARD));
... then you just write positions in chunks of 3 reals. If you have other floating point data
in there, it’s a little more complex but the principle is largely the same, you just need to write
alternate elements. But what if you have elements of different types, or you need to derive how
to write the vertex data from the elements themselves? Well, there are some useful methods on
the VertexElement class to help you out.
Firstly, you lock the buffer but assign the result to an unsigned char* rather than a specific type.
Then, for each element which is sourcing from this buffer (which you can find out by calling
VertexDeclaration::findElementsBySource) you call VertexElement::baseVertexPointerToElement.
This offsets a pointer which points at the base of a vertex in a buffer to the beginning of the
element in question, and allows you to use a pointer of the right type to boot. Here’s a full
example:
// Get base pointer
unsigned char* pVert = static_cast<unsigned char*>(
    vbuf->lock(HardwareBuffer::HBL_READ_ONLY));
Real* pReal;
for (size_t v = 0; v < vertexCount; ++v)
{
    // Get the elements which are sourced from this buffer
    VertexDeclaration::VertexElementList elems = decl->findElementsBySource(bufferIdx);
    VertexDeclaration::VertexElementList::iterator i, iend;
    for (i = elems.begin(), iend = elems.end(); i != iend; ++i)
    {
        VertexElement& elem = *i;
        if (elem.getSemantic() == VES_POSITION)
        {
            // Offset the base pointer to this element, with the right type
            elem.baseVertexPointerToElement(pVert, &pReal);
            // ... read or write the position through pReal here
        }
    }
    pVert += vbuf->getVertexSize();
}
vbuf->unlock();
See the API docs for full details of all the helper methods on VertexDeclaration and Vertex-
Element to assist you in manipulating vertex buffer data pointers.
indexType There are 2 types of index; 16-bit and 32-bit. They both perform the same way,
except that the latter can address larger vertex buffers. If your buffer includes more
than 65536 vertices, then you will need to use 32-bit indexes. Note that you should
only use 32-bit indexes when you need to, since they incur more overhead than
16-bit indexes, and are not supported on some older hardware.
numIndexes
The number of indexes in the buffer. As with vertex buffers, you should consider
whether you can use a shared index buffer which is used by multiple pieces of
geometry, since there can be performance advantages to switching index buffers less
often.
usage This tells the system how you intend to use the buffer. See Section 5.2 [Buffer
Usage], page 142
useShadowBuffer
Tells the system whether you want this buffer backed by a system-memory copy.
See Section 5.3 [Shadow Buffers], page 143
5.8.1 Textures
A texture is an image that can be applied onto the surface of a three dimensional model. In
Ogre, textures are represented by the Texture resource class.
Creating a texture
Textures are created through the TextureManager. In most cases they are created from image
files directly by the Ogre resource system. If you are reading this, you most probably want to
create a texture manually so that you can provide it with image data yourself. This is done
through TextureManager::createManual:
ptex = TextureManager::getSingleton().createManual(
"MyManualTexture", // Name of texture
"General", // Name of resource group in which the texture should be created
TEX_TYPE_2D, // Texture type
256, // Width
256, // Height
1, // Depth (Must be 1 for two dimensional textures)
0, // Number of mipmaps
Texture usages
In addition to the hardware buffer usages as described in See Section 5.2 [Buffer Usage], page 142
there are some usage flags specific to textures:
TU_AUTOMIPMAP
Mipmaps for this texture will be automatically generated by the graphics hardware.
The exact algorithm used is not defined, but you can assume it to be a 2x2 box
filter.
TU_RENDERTARGET
This texture will be a render target, ie. used as a target for render to texture.
Setting this flag will ignore all other texture usages except TU_AUTOMIPMAP.
TU_DEFAULT
This is actually a combination of usage flags, and is equivalent to
TU_AUTOMIPMAP | TU_STATIC_WRITE_ONLY. The resource system uses
these flags for textures that are loaded from images.
Getting a PixelBuffer
A Texture can consist of multiple PixelBuffers, one for each combination of mipmap level and face
number. To get a PixelBuffer from a Texture object the method Texture::getBuffer(face, mipmap)
is used:
face should be zero for non-cubemap textures. For cubemap textures it identifies the face to
use, which is one of the cube faces described in See Section 5.8.3 [Texture Types], page 152.
mipmap is zero for the zeroth mipmap level, one for the first mipmap level, and so on. On
textures that have automatic mipmap generation (TU_AUTOMIPMAP) only level 0 should be
accessed, the rest will be taken care of by the rendering API.
A simple example of using getBuffer is
// Get the PixelBuffer for face 0, mipmap 0.
HardwarePixelBufferSharedPtr ptr = tex->getBuffer(0,0);
blitFromMemory
The easiest method to get an image into a PixelBuffer is to use
HardwarePixelBuffer::blitFromMemory. This takes a PixelBox object and does all necessary pixel format
conversion and scaling for you. For example, to create a manual texture and load an image into
it, all you have to do is:
// Manually loads an image and puts the contents in a manually created texture
Image img;
img.load("elephant.png", "General");
// Create RGB texture with 5 mipmaps
TexturePtr tex = TextureManager::getSingleton().createManual(
"elephant",
"General",
TEX_TYPE_2D,
img.getWidth(), img.getHeight(),
5, PF_X8R8G8B8);
// Copy face 0 mipmap 0 of the image to face 0 mipmap 0 of the texture.
tex->getBuffer(0,0)->blitFromMemory(img.getPixelBox(0,0));
TEX_TYPE_1D
One dimensional texture, used in combination with 1D texture coordinates.
TEX_TYPE_2D
Two dimensional texture, used in combination with 2D texture coordinates.
TEX_TYPE_3D
Three dimensional volume texture, used in combination with 3D texture coordinates.
Colour channels
The meaning of the channels R, G, B, A, L and X is defined as follows:
R Red colour component, usually ranging from 0.0 (no red) to 1.0 (full red).
G Green colour component, usually ranging from 0.0 (no green) to 1.0 (full green).
B Blue colour component, usually ranging from 0.0 (no blue) to 1.0 (full blue).
A Alpha component, usually ranging from 0.0 (entirely transparent) to 1.0 (opaque).
L Luminance component, usually ranging from 0.0 (black) to 1.0 (white). The lumi-
nance component is duplicated in the R, G, and B channels to achieve a greyscale
image.
X This component is completely ignored.
If the red, green and blue components, or luminance, are not defined in a format, they default to 0. The alpha channel is different: if no alpha is defined, it defaults to 1.
Introduction
This tutorial provides a brief introduction to the ExternalTextureSource and ExternalTextureSourceManager classes, their relationship, and how the PlugIns work. For those interested in developing a Texture Source Plugin, or just wanting to know more about this system, take a look at the ffmpegVideoSystem plugin, which you can find out more about on the OGRE forums.
How do external texture source plugins benefit OGRE? The main answer is: adding support for any type of texture source does not require changing OGRE to support it; all that is involved is writing a new plugin. Additionally, because the manager uses the StringInterface class to issue commands/params, no changes to the material script reader need to be made. As a result, if a plugin needs a special parameter set, it just creates a new command in its Parameter Dictionary (see the ffmpegVideoSystem plugin for an example). To make this work, two classes have been added to OGRE: ExternalTextureSource and ExternalTextureSourceManager.
ExternalTextureSource Class
The ExternalTextureSource class is the base class from which Texture Source PlugIns must be derived. It provides a generic framework (via the StringInterface class) with a very limited amount of functionality. The most common parameters can be set through the TexturePlugInSource class interface or via the StringInterface commands contained within this class. While this may seem like duplication of code, it is not: by using the string command interface, it becomes extremely easy for derived plugins to add any new parameters they may need.
ExternalTextureSourceManager Class
ExternalTextureSourceManager is responsible for keeping track of loaded Texture Source PlugIns. It also aids in the creation of texture source textures from scripts, and it is the interface you should use when dealing with texture source plugins.
Chapter 6: External Texture Sources 158
Note: The function prototypes shown below are mockups - param names are simplified to better illustrate their purpose. The steps needed to create a new texture via ExternalTextureSourceManager are:
• Obviously, the first step is to have the desired plugin included in plugin.cfg so that it is loaded.
• Set the desired PlugIn as active via AdvancedTextureManager::getSingleton().SetCurrentPlugIn( String Type ); - the type is whatever the plugin registers as handling (e.g. "video", "flash", "whatever", etc).
• Note: Consult the desired PlugIn's documentation to see what params it needs/expects. Set param/value pairs via AdvancedTextureManager::getSingleton().getCurrentPlugIn()->setParameter( String Param, String Value );
• After the required params are set, a simple call to AdvancedTextureManager::getSingleton().getCurrentPlugIn()->createDefinedTexture( sMaterialName ); will create a texture for the material name given.
The manager also provides a method for deleting a texture source material: AdvancedTextureManager::DestroyAdvancedTexture( String sTextureName );. The destroy method works by broadcasting the material name to all loaded TextureSourcePlugIns; the PlugIn which actually created the material is responsible for the deletion, while the other PlugIns will simply ignore the request. This means that you do not need to worry about which PlugIn created the material, or about activating that PlugIn yourself - just call the manager method to remove the material. Also, all texture plugins should handle cleanup when they are shut down.
Notice that the first two param/value pairs are defined in the ExternalTextureSource base class, while the third parameter/value pair is not defined in the base class; that parameter is added to the param dictionary by the ffmpegVideoPlugin. This shows that extending the functionality via plugins is extremely easy. Also, pay particular attention to the line texture_source video. This line identifies that this texture unit will come from a texture source plugin. It requires one parameter, which determines which texture plugin will be used. In the example shown, the plugin requested is the one that registered under the name "video".
7 Shadows
Shadows are clearly an important part of rendering a believable scene - they provide a more
tangible feel to the objects in the scene, and aid the viewer in understanding the spatial relation-
ship between objects. Unfortunately, shadows are also one of the most challenging aspects of 3D
rendering, and they are still very much an active area of research. Whilst there are many tech-
niques to render shadows, none is perfect and they all come with advantages and disadvantages.
For this reason, Ogre provides multiple shadow implementations, with plenty of configuration
settings, so you can choose which technique is most appropriate for your scene.
Shadow implementations fall into basically 2 broad categories: Section 7.1 [Stencil Shadows],
page 161 and Section 7.2 [Texture-based Shadows], page 164. This describes the method by
which the shape of the shadow is generated. In addition, there is more than one way to render
the shadow into the scene: Section 7.3 [Modulative Shadows], page 169, which darkens the scene
in areas of shadow, and Section 7.4 [Additive Light Masking], page 170 which by contrast builds
up light contribution in areas which are not in shadow. You also have the option of [Integrated
Texture Shadows], page 168 which gives you complete control over texture shadow application,
allowing for complex single-pass shadowing shaders. Ogre supports all these combinations.
Enabling shadows
Shadows are disabled by default; here's how to turn them on and configure them in the general sense:
1. Enable a shadow technique on the SceneManager as the first thing you do in your scene setup. It is important that this is done first because the shadow technique can alter the way meshes are loaded. Here's an example:
mSceneMgr->setShadowTechnique(SHADOWTYPE_STENCIL_ADDITIVE);
2. Create one or more lights. Note that not all light types are necessarily supported by all shadow techniques; you should check the sections about each technique. Note that if certain lights should not cast shadows, you can turn that off by calling setCastShadows(false) on the light; the default is true.
3. Disable shadow casting on objects which should not cast shadows. Call setCastShadows(false) on objects you don't want to cast shadows; the default for all objects is to cast shadows.
4. Configure shadow far distance. You can limit the distance at which shadows are considered
for performance reasons, by calling SceneManager::setShadowFarDistance.
5. Turn off the receipt of shadows on materials that should not receive them. You can turn off the receipt of shadows (note: not the casting of shadows - that is done per-object) by calling Material::setReceiveShadows or using the receive_shadows material attribute. This is useful for materials which should be considered self-illuminated, for example. Note that transparent materials are typically excluded from receiving and casting shadows, although see the [transparency casts shadows], page 16 option for exceptions.
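For step 5, a sketch of such a self-illuminated material (the material name and pass contents are placeholders; receive_shadows and emissive are the real script attributes):

```
material Examples/SelfIlluminated
{
    // This material should never have shadows cast upon it
    receive_shadows off

    technique
    {
        pass
        {
            // Self-illuminated: full emissive contribution
            emissive 1 1 1
        }
    }
}
```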
In order to generate the stencil, ’shadow volumes’ are rendered by extruding the silhouette
of the shadow caster away from the light. Where these shadow volumes intersect other objects
(or the caster, since self-shadowing is supported using this technique), the stencil is updated,
allowing subsequent operations to differentiate between light and shadow. How exactly this is
used to render the shadows depends on whether Section 7.3 [Modulative Shadows], page 169 or
Section 7.4 [Additive Light Masking], page 170 is being used. Objects can both cast and receive
stencil shadows, so self-shadowing is inbuilt.
The advantage of stencil shadows is that they can do self-shadowing simply on low-end
hardware, provided you keep your poly count under control. In contrast doing self-shadowing
with texture shadows requires a fairly modern machine (See Section 7.2 [Texture-based Shadows],
page 164). For this reason, you’re likely to pick stencil shadows if you need an accurate shadowing
solution for an application aimed at older or lower-spec machines.
The disadvantages of stencil shadows are numerous though, especially on more modern hard-
ware. Because stencil shadows are a geometric technique, they are inherently more costly the
higher the number of polygons you use, meaning you are penalized the more detailed you make
your meshes. The fillrate cost, which comes from having to render shadow volumes, also escalates
the same way. Since more modern applications are likely to use higher polygon counts, stencil
shadows can start to become a bottleneck. In addition, the visual aspects of stencil shadows are
pretty primitive - your shadows will always be hard-edged, and you have no possibility of doing
clever things with shaders since the stencil is not available for manipulation there. Therefore,
if your application is aimed at higher-end machines you should definitely consider switching to
texture shadows (See Section 7.2 [Texture-based Shadows], page 164).
There are a number of issues to consider which are specific to stencil shadows:
• [CPU Overhead], page 162
• [Extrusion distance], page 162
• [Camera far plane positioning], page 162
• [Mesh edge lists], page 163
• [The Silhouette Edge], page 163
• [Be realistic], page 163
• [Stencil Optimisations Performed By Ogre], page 163
CPU Overhead
Calculating the shadow volume for a mesh can be expensive, and it has to be done on the CPU; it is not a hardware-accelerated feature. Therefore, you can find that if you overuse this feature, you can create a CPU bottleneck for your application. Ogre quite aggressively
eliminates objects which cannot be casting shadows on the frustum, but there are limits to how
much it can do, and large, elongated shadows (e.g. representing a very low sun position) are
very difficult to cull efficiently. Try to avoid having too many shadow casters around at once,
and avoid long shadows if you can. Also, make use of the ’shadow far distance’ parameter on the
SceneManager, this can eliminate distant shadow casters from the shadow volume construction
and save you some time, at the expense of only having shadows for closer objects. Lastly,
make use of Ogre’s Level-Of-Detail (LOD) features; you can generate automatically calculated
LODs for your meshes in code (see the Mesh API docs) or when using the mesh tools such
as OgreXmlConverter and OgreMeshUpgrader. Alternatively, you can assign your own manual
LODs by providing alternative mesh files at lower detail levels. Both methods will cause the
shadow volume complexity to decrease as the object gets further away, which saves you valuable
volume calculation time.
Extrusion distance
When vertex programs are not available, Ogre can only extrude shadow volumes a finite distance
from the object. If an object gets too close to a light, any finite extrusion distance will be
inadequate to guarantee all objects will be shadowed properly by this object. Therefore, you
are advised not to let shadow casters pass too close to light sources if you can avoid it, unless
you can guarantee that your target audience will have vertex program capable hardware (in
this case, Ogre extrudes the volume to infinity using a vertex program so the problem does not
occur).
When infinite extrusion is not possible, Ogre uses finite extrusion, either derived from the
attenuation range of a light (in the case of a point light or spotlight), or a fixed extrusion
distance set in the application in the case of directional lights. To change the directional light
extrusion distance, use SceneManager::setShadowDirectionalLightExtrusionDistance.
Be realistic
Don’t expect to be able to throw any scene using any hardware at the stencil shadow algorithm
and expect to get perfect, optimum speed results. Shadows are a complex and expensive tech-
nique, so you should impose some reasonable limitations on your placing of lights and objects;
they’re not really that restricting, but you should be aware that this is not a complete free-for-all.
• Try to avoid letting objects pass very close (or even through) lights - it might look nice but
it’s one of the cases where artifacts can occur on machines not capable of running vertex
programs.
• Be aware that shadow volumes do not respect the ’solidity’ of the objects they pass through,
and if those objects do not themselves cast shadows (which would hide the effect) then the
result will be that you can see shadows on the other side of what should be an occluding
object.
• Make use of SceneManager::setShadowFarDistance to limit the number of shadow volumes
constructed
• Make use of LOD to reduce shadow volume complexity at distance
• Avoid very long (dusk and dawn) shadows - they exacerbate other issues such as volume
clipping, fillrate, and cause many more objects at a greater distance to require volume
construction.
technique simply because they are more powerful, and the increasing speed of GPUs is rapidly
amortizing the fillrate / texture access costs of using them.
The main disadvantage to texture shadows is that, because they are simply a texture, they
have a fixed resolution which means if stretched, the pixellation of the texture can become
obvious. There are ways to combat this though:
Choosing a projection basis
The simplest projection is just to render the shadow casters from the light's perspective using a regular camera setup. This can look bad though, so there are many other projections which can help to improve the quality from the main camera's perspective. OGRE supports pluggable projection bases via its ShadowCameraSetup class, and comes with several existing options - Uniform (which is the simplest), Uniform Focussed (which is still a normal camera projection, except that the camera is focussed into the area that the main viewing camera is looking at), LiSPSM (Light Space Perspective Shadow Mapping - which both focusses and distorts the shadow frustum based on the main view camera) and Plane Optimal (which seeks to optimise the shadow fidelity for a single receiver plane).
Filtering You can also sample the shadow texture multiple times rather than once to soften
the shadow edges and improve the appearance. Percentage Closer Filtering (PCF) is the most popular approach, although there are multiple variants depending on
the number and pattern of the samples you take. Our shadows demo includes a
5-tap PCF example combined with depth shadow mapping.
Using a larger texture
Again as GPUs get faster and gain more memory, you can scale up to take advantage
of this.
If you combine all 3 of these techniques you can get a very high quality shadow solution.
The other issue is with point lights. Because texture shadows require a render to texture
in the direction of the light, omnidirectional lights (point lights) would require 6 renders to
totally cover all the directions shadows might be cast. For this reason, Ogre primarily supports
directional lights and spotlights for generating texture shadows; you can use point lights but
they will only work if off-camera since they are essentially turned into a spotlight shining into
your camera frustum for the purposes of texture shadows.
Directional Lights
Directional lights in theory shadow the entire scene from an infinitely distant light. Now, since
we only have a finite texture which will look very poor quality if stretched over the entire scene,
clearly a simplification is required. Ogre places a shadow texture over the area immediately in
front of the camera, and moves it as the camera moves (although it rounds this movement to
multiples of texels so that the slight ’swimming shadow’ effect caused by moving the texture is
minimised). The range to which this shadow extends, and the offset used to move it in front
of the camera, are configurable (See [Configuring Texture Shadows], page 166). At the far edge
of the shadow, Ogre fades out the shadow based on other configurable parameters so that the
termination of the shadow is softened.
Spotlights
Spotlights are much easier to represent as renderable shadow textures than directional lights,
since they are naturally a frustum. Ogre represents spotlights directly by rendering the shadow
from the light position, in the direction of the light cone; the field-of-view of the texture camera
is adjusted based on the spotlight falloff angles. In addition, to hide the fact that the shadow
texture is square and has definite edges which could show up outside the spotlight, Ogre uses
a second texture unit when projecting the shadow onto the scene which fades out the shadow
gradually in a projected circle around the spotlight.
Point Lights
As mentioned above, to support point lights properly would require multiple renders (either 6
for a cubic render or perhaps 2 for a less precise parabolic mapping), so rather than do that we
approximate point lights as spotlights, where the configuration is changed on the fly to make
the light shine from its position over the whole of the viewing frustum. This is not an ideal
setup since it means it can only really work if the point light’s position is out of view, and in
addition the changing parameterisation can cause some ’swimming’ of the texture. Generally
we recommend avoiding making point lights cast texture shadows.
If you're not using depth shadow mapping, OGRE divides shadow casters and receivers into 2 disjoint groups. Simply by turning off shadow casting on an object, you automatically make it a shadow receiver (although this can be disabled by setting the receive_shadows option to 'false' in a material script). Similarly, if an object is set as a shadow caster, it cannot receive shadows.
You can adjust this manually by simply turning off shadow casting for lights you do not wish
to cast shadows. In addition, you can set a maximum limit on the number of shadow textures
Ogre is allowed to use by calling SceneManager::setShadowTextureCount. Each frame, Ogre
determines the lights which could be affecting the frustum, and then allocates the number of
shadow textures it is allowed to use to the lights on a first-come-first-served basis. Any additional
lights will not cast shadows that frame.
Note that you can set the number of shadow textures and their size at the same time by using
the SceneManager::setShadowTextureSettings method; this is useful because both the individual
calls require the potential creation / destruction of texture resources.
Important: if you use the GL render system, your shadow texture size can only be larger (in either dimension) than the size of your primary window surface if the hardware supports the Frame Buffer Object (FBO) or Pixel Buffer Object (PBO) extensions. Most modern cards support this now, but be careful with older cards - you can check the ability of the hardware to manage this through ogreRoot->getRenderSystem()->getCapabilities()->hasCapability(RSC_HWRENDER_TO_TEXTURE). If this returns false and you create a shadow texture larger in any dimension than the primary surface, the rest of the shadow texture will be blank.
Here is where 'integrated' texture shadows step in. Both of the texture shadow types above have alternative versions called SHADOWTYPE_TEXTURE_MODULATIVE_INTEGRATED and SHADOWTYPE_TEXTURE_ADDITIVE_INTEGRATED, where instead of rendering the shadows for you, Ogre just creates the texture shadow and then expects you to use that shadow
texture as you see fit when rendering receiver objects in the scene. The downside is that you have
to take into account shadow receipt in every one of your materials if you use this option - the
upside is that you have total control over how the shadow textures are used. The big advantage
here is that you can perform more complex shading, taking into account shadowing, than
is possible using the generalised bolt-on approaches, AND you can probably write them in a
smaller number of passes, since you know precisely what you need and can combine passes
where possible. When you use one of these shadowing approaches, the only difference between
additive and modulative is the colour of the casters in the shadow texture (the shadow colour for
modulative, black for additive) - the actual calculation of how the texture affects the receivers
is of course up to you. No separate modulative pass will be performed, and no splitting of your
materials into ambient / per-light / decal etc will occur - absolutely everything is determined
by your original material (which may have modulative passes or per-light iteration if you want
of course, but it’s not required).
You reference a shadow texture in a material which implements this approach by using the content_type shadow directive (see [content type], page 45) in your texture_unit. It implicitly references a shadow texture based on the number of times you've used this directive in the same pass, and on the light_start option or light-based pass iteration, which might start the light index higher than 0.
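A sketch of a pass using this directive (the surrounding pass contents are placeholders; content_type shadow is the real texture_unit attribute):

```
pass
{
    // ... your shadow-aware shader references here ...

    texture_unit
    {
        // Implicitly references the shadow texture for the first
        // shadow casting light affecting this pass
        content_type shadow
    }
}
```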
There are 2 modulative shadow techniques: stencil-based (See Section 7.1 [Stencil Shadows], page 161: SHADOWTYPE_STENCIL_MODULATIVE) and texture-based (See Section 7.2 [Texture-based Shadows], page 164: SHADOWTYPE_TEXTURE_MODULATIVE). Modulative shadows are an inaccurate lighting model, since they darken the areas of shadow uniformly,
irrespective of the amount of light which would have fallen on the shadow area anyway. However,
they can give fairly attractive results for a much lower overhead than more ’correct’ methods like
Section 7.4 [Additive Light Masking], page 170, and they also combine well with pre-baked static
lighting (such as pre-calculated lightmaps), which additive lighting does not. The main thing to
consider is that using multiple light sources can result in overly dark shadows where shadows overlap (which intuitively looks right, in fact, but is not physically correct), and in artifacts when using stencil shadows (See [The Silhouette Edge], page 163).
Shadow Colour
The colour which is used to darken the areas in shadow is set by SceneManager::setShadowColour; it defaults to a dark grey (so that the underlying colour still shows through a bit).
Note that if you’re using texture shadows you have the additional option of using [Integrated
Texture Shadows], page 168 rather than being forced to have a separate pass of the scene to
render shadows. In this case the ’modulative’ aspect of the shadow technique just affects the
colour of the shadow texture.
As many technical papers (and game marketing) will tell you, rendering realistic lighting like
this requires multiple passes. Being a friendly sort of engine, Ogre frees you from most of the
hard work though, and will let you use the exact same material definitions whether you use
this lighting technique or not (for the most part, see [Pass Classification and Vertex Programs],
page 171). In order to do this technique, Ogre automatically categorises the Section 3.1.2
[Passes], page 20 you define in your materials into 3 types:
1. ambient Passes categorised as ’ambient’ include any base pass which is not lit by any
particular light, i.e. it occurs even if there is no ambient light in the scene. The ambient
pass always happens first, and sets up the initial depth value of the fragments, and the
ambient colour if applicable. It also includes any emissive / self illumination contribution.
Only textures which affect ambient light (e.g. ambient occlusion maps) should be rendered
in this pass.
2. diffuse/specular Passes categorised as ’diffuse/specular’ (or ’per-light’) are rendered once
per light, and each pass contributes the diffuse and specular colour from that single light
as reflected by the diffuse / specular terms in the pass. Areas in shadow from that light
are masked and are thus not updated. The resulting masked colour is added to the existing
colour in the scene. Again, no textures are used in this pass (except for textures used for
lighting calculations such as normal maps).
3. decal Passes categorised as ’decal’ add the final texture colour to the scene, which is mod-
ulated by the accumulated light built up from all the ambient and diffuse/specular passes.
In practice, Section 3.1.2 [Passes], page 20 rarely fall nicely into just one of these categories.
For each Technique, Ogre compiles a list of ’Illumination Passes’, which are derived from the
user defined passes, but can be split, to ensure that the divisions between illumination pass
categories can be maintained. For example, if we take a very simple material definition:
material TestIllumination
{
technique
{
pass
{
ambient 0.5 0.2 0.2
diffuse 1 0 0
specular 1 0.8 0.8 15
texture_unit
{
texture grass.png
}
}
}
}
Ogre will split this into 3 illumination passes, which will be the equivalent of this:
material TestIlluminationSplitIllumination
{
technique
{
// Ambient pass
pass
{
ambient 0.5 0.2 0.2
diffuse 0 0 0
specular 0 0 0
}
// Lighting pass (the diffuse/specular components, applied once per light)
pass
{
ambient 0 0 0
diffuse 1 0 0
specular 1 0.8 0.8 15
scene_blend add
iteration once_per_light
}
// Decal pass
pass
{
scene_blend modulate
lighting off
texture_unit
{
texture grass.png
}
}
}
}
So as you can see, even a simple material requires a minimum of 3 passes when using this
shadow technique, and in fact it requires (num lights + 2) passes in the general sense. You
can use more passes in your original material and Ogre will cope with that too, but be aware
that each pass may turn into multiple ones if it uses more than one type of light contribution
(ambient vs diffuse/specular) and / or has texture units. The main nice thing is that you get the
full multipass lighting behaviour even if you don’t define your materials in terms of it, meaning
that your material definitions can remain the same no matter what lighting approach you decide
to use.
In practice this is very easy. Even though your vertex program could be doing a lot of
complex, highly customised processing, it can still be classified into one of the 3 types listed
above. All you need to do to tell Ogre what you’re doing is to use the pass attributes ambient,
diffuse, specular and self illumination, just as if you were not using a vertex program. Sure,
these attributes do nothing (as far as rendering is concerned) when you’re using vertex programs,
but it’s the easiest way to indicate to Ogre which light components you’re using in your vertex
program. Ogre will then classify and potentially split your programmable pass based on this
information - it will leave the vertex program as-is (so that any split passes will respect any
vertex modification that is being done).
Note that when classifying a diffuse/specular programmable pass, Ogre checks to see whether
you have indicated the pass can be run once per light (iteration once per light). If so, the pass
is left intact, including its vertex and fragment programs. However, if this attribute is not
included in the pass, Ogre tries to split off the per-light part, and in doing so it will disable the
fragment program, since in the absence of the ’iteration once per light’ attribute it can only
assume that the fragment program is performing decal work and hence must not be used per
light.
So clearly, when you use additive light masking as a shadow technique, you need to make sure
that programmable passes you use are properly set up so that they can be classified correctly.
However, also note that the changes you have to make to ensure the classification is correct does
not affect the way the material renders when you choose not to use additive lighting, so the
principle that you should be able to use the same material definitions for all lighting scenarios
still holds. Here is an example of a programmable material which will be classified correctly by
the illumination pass classifier:
// Per-pixel normal mapping; any number of lights, diffuse and specular
material Examples/BumpMapping/MultiLightSpecular
{
technique
{
// Base ambient pass
pass
{
// ambient only, not needed for rendering, but as information
// to lighting pass categorisation routine
ambient 1 1 1
diffuse 0 0 0
specular 0 0 0 0
// Really basic vertex program
vertex_program_ref Ogre/BasicVertexPrograms/AmbientOneTexture
{
param_named_auto worldViewProj worldviewproj_matrix
param_named_auto ambient ambient_light_colour
}
}
// Now do the lighting pass
// NB we don’t do decal texture here because this is repeated per light
pass
{
// set ambient off, not needed for rendering, but as information
// to lighting pass categorisation routine
ambient 0 0 0
// do this for each light
iteration once_per_light
scene_blend add
// Fragment program
fragment_program_ref Examples/BumpMapFPSpecular
{
param_named_auto lightDiffuse light_diffuse_colour 0
param_named_auto lightSpecular light_specular_colour 0
}
tex_coord_set 1
tex_address_mode clamp
}
}
// Decal pass
pass
{
lighting off
// Really basic vertex program
vertex_program_ref Ogre/BasicVertexPrograms/AmbientOneTexture
{
param_named_auto worldViewProj worldviewproj_matrix
param_named ambient float4 1 1 1 1
}
scene_blend dest_colour zero
texture_unit
{
texture RustedMetal.jpg
}
}
}
}
Note that if you’re using texture shadows you have the additional option of using [Integrated
Texture Shadows], page 168 rather than being forced to use this explicit sequence - allowing
you to compress the number of passes into a much smaller number at the expense of defining
an upper number of shadow casting lights. In this case the ’additive’ aspect of the shadow
technique just affects the colour of the shadow texture and it’s up to you to combine the shadow
textures in your receivers however you like.
Static Lighting
Despite their power, additive lighting techniques have an additional limitation; they do not
combine well with pre-calculated static lighting in the scene. This is because they are based on
the principle that shadow is an absence of light, but since static lighting in the scene already
includes areas of light and shadow, additive lighting cannot remove light to create new shadows.
Therefore, if you use the additive lighting technique you must either use it exclusively as your
lighting solution (and you can combine it with per-pixel lighting to create a very impressive
dynamic lighting solution), or you must use [Integrated Texture Shadows], page 168 to combine
the static lighting according to your chosen approach.
Chapter 8: Animation 175
8 Animation
OGRE supports a pretty flexible animation system that allows you to script animation for several
different purposes:
Section 8.1 [Skeletal Animation], page 175
Mesh animation using a skeletal structure to determine how the mesh deforms.
There are many grades of skeletal animation, and not all engines (or modellers for that
matter) support all of them. OGRE supports the following features:
• Each mesh can be linked to a single skeleton
• Unlimited bones per skeleton
• Hierarchical forward-kinematics on bones
• Multiple named animations per skeleton (e.g. ’Walk’, ’Run’, ’Jump’, ’Shoot’ etc)
• Unlimited keyframes per animation
• Linear or spline-based interpolation between keyframes
• A vertex can be assigned to multiple bones and assigned weightings for smoother skinning
• Multiple animations can be applied to a mesh at the same time, again with a blend weighting
Skeletons and the animations which go with them are held in .skeleton files, which are produced
by the OGRE exporters. These files are loaded automatically when you create an Entity based
on a Mesh which is linked to the skeleton in question. You then use Section 8.2 [Animation
State], page 176 to set the use of animation on the entity in question.
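The typical usage pattern (enable a named animation, then advance it every frame) can be modelled with a small self-contained stand-in; the class below is illustrative only and is not the real OGRE AnimationState, though it mirrors the setEnabled/setLoop/addTime calls you would make on one:

```cpp
#include <cassert>
#include <cmath>

// Simplified stand-in for OGRE's AnimationState: tracks a time
// position within an animation of fixed length, with optional looping.
class SimpleAnimationState {
public:
    explicit SimpleAnimationState(float length) : mLength(length) {}
    void setEnabled(bool e) { mEnabled = e; }
    void setLoop(bool l)    { mLoop = l; }
    // Advance the animation; called once per frame with the frame delta.
    void addTime(float dt) {
        if (!mEnabled) return;
        mTime += dt;
        if (mLoop) {
            mTime = std::fmod(mTime, mLength);   // wrap around
        } else if (mTime > mLength) {
            mTime = mLength;                     // clamp at the end
        }
    }
    float getTimePosition() const { return mTime; }
private:
    float mLength;
    float mTime = 0.0f;
    bool  mEnabled = false;
    bool  mLoop = true;
};
```

In OGRE itself you would obtain the state with Entity::getAnimationState("Walk") and call addTime with the frame time each render loop iteration.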
Skeletal animation can be performed in software, or implemented in shaders (hardware skin-
ning). Clearly the latter is preferable, since it takes some of the work away from the CPU
and gives it to the graphics card, and also means that the vertex data does not need to be
re-uploaded every frame. This is especially important for large, detailed models. You should try
to use hardware skinning wherever possible; this basically means assigning a material which has
a vertex program powered technique. See [Skeletal Animation in Vertex Programs] for more
details. Skeletal animation can be combined with vertex animation; see Section 8.3.3
[Combining Skeletal and Vertex Animation], page 179.
Morph animation is a simple approach where we have a whole series of snapshots of vertex
data which must be interpolated, e.g. a running animation implemented as morph targets.
Because this is based on simple snapshots, it’s quite fast to use when animating an entire mesh
because it’s a simple linear change between keyframes. However, this simplistic approach does
not support blending between multiple morph animations. If you need animation blending, you
are advised to use skeletal animation for full-mesh animation, and pose animation for animation
of subsets of meshes or where skeletal animation doesn’t fit - for example facial animation. For
animating in a vertex shader, morph animation is quite simple and just requires the 2 vertex
buffers (one the original position buffer) of absolute position data, and an interpolation factor.
Each track in a morph animation references a unique set of vertex data.
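The per-vertex work can be sketched as follows; this is a self-contained illustration of the interpolation, not OGRE's actual implementation, and it flattens each vertex to a single float for brevity:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Morph animation in a nutshell: each keyframe is an absolute snapshot
// of vertex positions, and the animated result is a plain linear
// interpolation between the two snapshots bracketing the current time.
std::vector<float> morphInterpolate(const std::vector<float>& from,
                                    const std::vector<float>& to,
                                    float t)  // parametric value in [0, 1]
{
    std::vector<float> out(from.size());
    for (std::size_t i = 0; i < from.size(); ++i)
        out[i] = from[i] + (to[i] - from[i]) * t;
    return out;
}
```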
Pose animation is more complex. Like morph animation each track references a single unique
set of vertex data, but unlike morph animation, each keyframe references 1 or more ’poses’, each
with an influence level. A pose is a series of offsets to the base vertex data, and may be sparse -
i.e. it may not reference every vertex. Because they’re offsets, they can be blended - both within
a track and between animations. This set of features is very well suited to facial animation.
For example, let’s say you modelled a face (one set of vertex data), and defined a set of
poses which represented the various phonetic positions of the face. You could then define an
animation called ’SayHello’, containing a single track which referenced the face vertex data, and
which included a series of keyframes, each of which referenced one or more of the facial positions
at different influence levels - the combination of which over time made the face form the shapes
required to say the word ’hello’. Since the poses are only stored once, but can be referenced
many times in many animations, this is a very powerful way to build up a speech system.
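A minimal sketch of the blending arithmetic, assuming one float per vertex for brevity (these are not OGRE's actual data structures):

```cpp
#include <cassert>
#include <cstddef>
#include <map>
#include <utility>
#include <vector>

// A pose stores sparse offsets: only the vertices it moves are listed.
using Pose = std::map<std::size_t, float>;  // vertex index -> offset

// Pose blending: start from the base vertex data and accumulate each
// active pose's offsets scaled by its influence. Unreferenced vertices
// are left untouched, which is what makes sparse poses cheap to blend.
std::vector<float> blendPoses(
    std::vector<float> base,
    const std::vector<std::pair<Pose, float>>& activePoses) // pose, influence
{
    for (const auto& [pose, influence] : activePoses)
        for (const auto& [index, offset] : pose)
            base[index] += offset * influence;
    return base;
}
```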
The downside of pose animation is that it can be more difficult to set up, requiring poses to be
separately defined and then referenced in the keyframes. Also, since it uses more buffers (one for
the base data, and one for each active pose), if you’re animating in hardware using vertex shaders
you need to keep an eye on how many poses you’re blending at once. You define a maximum
supported number in your vertex program definition, via the includes_pose_animation material
script entry; see [Pose Animation in Vertex Programs].
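As a sketch, a hardware pose animation program might be declared like this; the program name, source file and profiles are placeholders:

```
vertex_program Examples/PoseAnimVP cg
{
source poseanim.cg
entry_point main
profiles vs_1_1 arbvp1
// This program blends a maximum of 2 poses in hardware
includes_pose_animation 2
}
```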
So, by partitioning the vertex animation approaches into 2, we keep the simple morph tech-
nique easy to use, whilst still allowing all the powerful techniques to be used. Note that morph
animation cannot be blended with other types of vertex animation on the same vertex data
(pose animation or other morph animation); pose animation can be blended with other pose
animation though, and both types can be combined with skeletal animation. This combination
limitation applies per set of vertex data though, not globally across the mesh (see below). Also
note that all morph animation can be expressed (in a more complex fashion) as pose animation,
but not vice versa.
For example, a common set-up for a complex character which needs both skeletal and facial
animation might be to split the head into a separate SubMesh with its own geometry, then apply
skeletal animation to both submeshes, and pose animation to just the head.
To see how to apply vertex animation, see Section 8.2 [Animation State], page 176.
Because absolute positions are used, it is not possible to blend more than one morph anima-
tion on the same vertex data; you should use skeletal animation if you want to include animation
blending since it is much more efficient. If you activate more than one animation which includes
morph tracks for the same vertex data, only the last one will actually take effect. This also
means that the ’weight’ option on the animation state is not used for morph animation.
Morph animation can be combined with skeletal animation if required; see Section 8.3.3 [Com-
bining Skeletal and Vertex Animation], page 179. Morph animation can also be implemented
in hardware using vertex shaders; see [Morph Animation in Vertex Programs].
Pose animation uses a set of reference poses defined in the mesh, expressed as offsets to the
original vertex data. It does not require that every vertex have an offset; those that don’t are
left alone. When blending in software these vertices are completely skipped, while when blending
in hardware (which requires a vertex entry for every vertex), zero offsets for vertices which are
not mentioned are automatically created for you.
Once you’ve defined the poses, you can refer to them in animations. Each pose animation
track refers to a single set of geometry (either the shared geometry of the mesh, or dedicated
geometry on a submesh), and each keyframe in the track can refer to one or more poses, each
with its own influence level. The weight applied to the entire animation scales these influence
levels too. You can define many keyframes which cause the blend of poses to change over time.
The absence of a pose reference in a keyframe when it is present in a neighbouring one causes
it to be treated as an influence of 0 for interpolation.
You should be careful how many poses you apply at once. When performing pose animation
in hardware (see [Pose Animation in Vertex Programs]), every active pose requires another
vertex buffer to be added to the shader, and when animating in software it will also take longer
the more active poses you have. Bear in mind that if you have 2 poses in one keyframe, and a
different 2 in the next, that actually means there are 4 active poses when interpolating between
them.
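The bookkeeping can be sketched as follows (a self-contained illustration, not OGRE code): every pose referenced by either bracketing keyframe is active during interpolation, and a pose absent from one side simply interpolates from, or to, an influence of zero.

```cpp
#include <cassert>
#include <cstddef>
#include <map>
#include <set>

// Influence of each pose at one keyframe, keyed by pose index.
using KeyFrameInfluences = std::map<std::size_t, float>;

// Union of the poses referenced by the two bracketing keyframes:
// all of these are active while interpolating between them.
std::set<std::size_t> activePoses(const KeyFrameInfluences& a,
                                  const KeyFrameInfluences& b)
{
    std::set<std::size_t> active;
    for (const auto& entry : a) active.insert(entry.first);
    for (const auto& entry : b) active.insert(entry.first);
    return active;
}

// Interpolated influence of one pose at parametric position t in [0,1];
// a pose missing from a keyframe is treated as influence 0 there.
float influenceAt(const KeyFrameInfluences& a, const KeyFrameInfluences& b,
                  std::size_t pose, float t)
{
    auto get = [&](const KeyFrameInfluences& k) {
        auto it = k.find(pose);
        return it == k.end() ? 0.0f : it->second;
    };
    return get(a) + (get(b) - get(a)) * t;
}
```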
You can combine pose animation with skeletal animation, see Section 8.3.3 [Combining Skele-
tal and Vertex Animation], page 179, and you can also hardware accelerate the application of
the blend with a vertex shader; see [Pose Animation in Vertex Programs].
Combining the two is, from a user perspective, as simple as just enabling both animations
at the same time. When it comes to using this feature efficiently though, there are a few points
to bear in mind:
• [Combined Hardware Skinning], page 179
• [Submesh Splits], page 180
Combined Hardware Skinning
See [Skeletal Animation in Vertex Programs], [Morph Animation in Vertex Programs] and
[Pose Animation in Vertex Programs].
When combining animation types, your vertex programs must support both types of anima-
tion that the combined mesh needs, otherwise hardware skinning will be disabled. You should
implement the animation in the same way that OGRE does, i.e. perform vertex animation first,
then apply skeletal animation to the result of that. Remember that the implementation of
morph animation passes 2 absolute snapshot buffers of the ’from’ and ’to’ keyframes, along
with a single parametric value, which you have to linearly interpolate, whilst pose animation
passes the base vertex data plus ’n’ pose offset buffers, and ’n’ parametric weight values.
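The required ordering can be sketched in one dimension (illustrative only; a real vertex program works on full vertex positions and 4x4 blended bone matrices):

```cpp
#include <cassert>

// When combining animation types, OGRE applies vertex animation first
// and skeletal animation to its result. Sketched in 1D: morph-blend the
// position, then apply the bone transform (here, a scale plus offset).
float morphThenSkin(float fromPos, float toPos, float t,
                    float boneScale, float boneOffset)
{
    float morphed = fromPos + (toPos - fromPos) * t; // vertex animation
    return morphed * boneScale + boneOffset;         // then skeletal
}
```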
Submesh Splits
If you only need to combine vertex and skeletal animation for a small part of your mesh, e.g.
the face, you could split your mesh into 2 parts, one which needs the combination and one which
does not, to reduce the calculation overhead. This will also reduce vertex buffer usage, since
vertex keyframe / pose buffers will be smaller. If you use hardware skinning you should then
implement 2 separate vertex programs, one which does only skeletal animation, and the other
which does skeletal and vertex animation.
Scene Node Animation
At its heart, scene node animation is driven by mostly the same code that animates the
underlying skeleton in skeletal animation. After creating the main Animation using
SceneManager::createAnimation you can create a NodeAnimationTrack per SceneNode that you
want to animate, and create keyframes which control its position, orientation and scale, which
can be interpolated linearly or via splines. You use Section 8.2 [Animation State], page 176 in
the same way as you do for skeletal/vertex animation, except you obtain the state from
SceneManager instead of from an individual Entity. Animations are applied automatically every
frame, or the state can be applied manually in advance using the applySceneAnimations()
method on SceneManager. See the API reference for full details of the interface for configuring
scene animations.
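Conceptually, sampling a node track reduces to finding the bracketing keyframes and interpolating between them; the following is a simplified 1D model, not the real NodeAnimationTrack interface (which interpolates position, orientation and scale, linearly or with splines):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Conceptual model of a node animation track: timed keyframes holding
// a 1D "position", linearly interpolated at playback time.
struct KeyFrame { float time; float position; };

float sampleTrack(const std::vector<KeyFrame>& track, float t)
{
    if (t <= track.front().time) return track.front().position;
    if (t >= track.back().time)  return track.back().position;
    for (std::size_t i = 1; i < track.size(); ++i) {
        if (t <= track[i].time) {
            // Interpolate between the bracketing keyframes
            const KeyFrame& a = track[i - 1];
            const KeyFrame& b = track[i];
            float f = (t - a.time) / (b.time - a.time);
            return a.position + (b.position - a.position) * f;
        }
    }
    return track.back().position;
}
```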
AnimableObject
AnimableObject is an abstract interface that any class can extend in order to provide access
to a number of [AnimableValue], page 181, instances. It holds a ’dictionary’ of the available animable
properties which can be enumerated via the getAnimableValueNames method, and when its
createAnimableValue method is called, it returns a reference to a value object which forms a
bridge between the generic animation interfaces, and the underlying specific object property.
One example of this is the Light class. It extends AnimableObject and provides AnimableVal-
ues for properties such as "diffuseColour" and "attenuation". Animation tracks can be created
for these values and thus properties of the light can be scripted to change. Other objects, in-
cluding your custom objects, can extend this interface in the same way to provide animation
support to their properties.
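The dictionary-plus-bridge pattern can be sketched as follows; the class and method names here are illustrative stand-ins, not the real OGRE signatures:

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>
#include <vector>

// Conceptual sketch of the AnimableObject pattern: the object publishes
// a dictionary of animable property names, and hands out small "value"
// bridges that the animation system can drive without knowing the
// concrete class behind them.
class DemoLight {
public:
    DemoLight() {
        mValues["intensity"] = [this](float v) { mIntensity = v; };
    }
    std::vector<std::string> getAnimableValueNames() const {
        std::vector<std::string> names;
        for (const auto& entry : mValues) names.push_back(entry.first);
        return names;
    }
    // Returns a bridge an animation track can call as it plays.
    std::function<void(float)> createAnimableValue(const std::string& name) {
        return mValues.at(name);
    }
    float getIntensity() const { return mIntensity; }
private:
    std::map<std::string, std::function<void(float)>> mValues;
    float mIntensity = 0.0f;
};
```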
AnimableValue
When implementing custom animable properties, you have to also implement a number of meth-
ods on the AnimableValue interface - basically anything which has been marked as unimple-
mented. These are not pure virtual methods simply because you only have to implement the
methods required for the type of value you’re animating. Again, see the examples in Light to
see how this is done.