
UNIVERSIDAD POLITÉCNICA DE MADRID
ESCUELA TÉCNICA SUPERIOR DE INGENIEROS INDUSTRIALES


Design and Control of Intelligent
Heterogeneous Multi-configurable
Chained Microrobotic Modular
Systems
PhD Thesis
Alberto Brunete González
Ingeniero de Telecomunicación
2010
DEPARTAMENTO DE AUTOMÁTICA, INGENIERÍA ELECTRÓNICA E
INFORMÁTICA INDUSTRIAL
ESCUELA TÉCNICA SUPERIOR DE INGENIEROS INDUSTRIALES


Design and Control of Intelligent
Heterogeneous Multi-configurable
Chained Microrobotic Modular
Systems
PhD Thesis
Alberto Brunete González
Ingeniero de Telecomunicación
Supervisors
Ernesto Gambao Galán
Doctor Ingeniero Industrial
Miguel Hernando Gutiérrez
Doctor Ingeniero Industrial
2010
Title:
Design and Control of Intelligent Heterogeneous
Multi-configurable Chained Microrobotic Modular Systems
Author:
Alberto Brunete González
Ingeniero de Telecomunicación
(D-15)
Examining committee appointed by the Rector of the Universidad Politécnica
de Madrid, on the      of            2010
Chair:
Member:
Member:
Member:
Secretary:
Substitute:
Substitute:
The public reading and defense of the thesis took place on the      of           
at the E.T.S.I. / Facultad
The Chair: The Secretary:
The Members:
Dedication
Abstract
The objective of this thesis is the design and control of intelligent heterogeneous multi-
configurable chained microrobotic modular systems. That is, the development of mod-
ular microrobots composed of different types of modules, able to perform different types
of movements (gaits), that can adopt different (chained) configurations depending on the
task to be performed.
Heterogeneous is the key word in this thesis. Many designs for modular robots can be
found in the literature, but almost all of them are homogeneous: all are composed of the
same modules, except for a few designs that have two different modules, one of them
passive. In this thesis, several active modules are proposed (rotation, support, extension,
helicoidal, etc.) that can be combined to execute different gaits.
The original idea was to make the robots as small as possible, reaching a final diameter
of 27 mm. Although they are not strictly microrobots, they are in the mesoscale (from
hundreds of microns to tens of centimeters), and in the literature such robots are called
minirobots or microrobots for simplicity.
Several modules have been developed: the rotation module (in fact a double rotation
module, but called rotation module for simplicity) v1 and v2, the helicoidal module v1
and v2, the support module v1, v1.1 and v2, the extension module v1 and v2, the camera
module v1 and v2, the contact module (included in the camera module v2) and the battery
module. Some others are still in the design or conceptual phase, but they can be simulated:
the SMA-based module (a prototype already exists), the traveler module (in the design
phase) and the sensor module (in the conceptual phase). All modules have been designed
with the idea of miniaturizing them in the future, so both the electronics and the embedded
control programs are as simple as possible (while maintaining the planned functionality).
In parallel with the construction of the modules, a simulator has been developed to
provide a very efficient way of prototyping and verifying control algorithms and hardware
designs, and of exploring system deployment scenarios. It is built upon an existing open-
source implementation of rigid-body dynamics, the Open Dynamics Engine (ODE). The
simulated modules have been designed to be as simple as possible (using simple primitives)
to keep the simulation fluid, while reflecting as closely as possible the real physical
conditions and parameters of the modules, their electronics and communication buses, and
the software embedded in the modules. The simulator has been validated using information
gathered from experiments with real modules, which has helped to adjust its parameters
to obtain an accurate model.
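As an illustration of the kind of servo model such a simulator can embed, the sketch below implements a rate-limited first-order servo response in Python. The function names and parameter values are illustrative assumptions, not the servo model actually identified in this thesis.

```python
def servo_step(angle, target, max_speed, dt):
    """Advance a rate-limited servo model by one time step.

    The servo moves toward its commanded target at up to max_speed
    (rad/s): the simplest approximation of the internal position
    control loop of a hobby servomotor.
    """
    error = target - angle
    max_delta = max_speed * dt
    # Clamp the step so the servo never exceeds its slew rate.
    delta = max(-max_delta, min(max_delta, error))
    return angle + delta


def simulate(target, steps, max_speed=1.0, dt=0.01):
    """Run the servo model for `steps` time steps and return the trace."""
    angle = 0.0
    trace = []
    for _ in range(steps):
        angle = servo_step(angle, target, max_speed, dt)
        trace.append(angle)
    return trace
```

In a full physics simulation, the angle produced at each step would be fed to the joint motor of the rigid-body engine as its desired position.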
Although the first idea was to develop the microrobot for pipe inspection, the experience
acquired with the first prototypes led to the realization that the locomotion systems used
inside pipes could also be suitable outside them, and that the prototypes and the control
architecture were useful in open spaces. Research was therefore extended to open spaces,
and the ego-positioning system was added.
The EGO-positioning system is a method that allows each robot in a swarm to know
its own position and orientation, based on the projection of sequences of coded images,
composed of horizontal and vertical stripes, onto photodiodes placed on the robots. This
concept can also be applied to the modules so that they know their position and
orientation, and it can be used to send commands to all of them at the same time.
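The stripe-coding idea can be sketched as follows: if the projector shows a sequence of Gray-coded stripe images, each photodiode can recover its cell coordinates from the bit sequence of light levels it observes. The decoder below is a minimal hypothetical sketch of this principle, not the exact scheme used in the thesis.

```python
def gray_to_binary(bits):
    """Convert a Gray-code bit list (MSB first) to its integer value."""
    value = bits[0]
    out = [value]
    for b in bits[1:]:
        value ^= b  # each binary bit is the XOR of Gray bits so far
        out.append(value)
    return int("".join(map(str, out)), 2)


def decode_position(samples, n_bits):
    """Recover (x, y) cell indices from the light levels a photodiode
    sampled while 2 * n_bits stripe images were projected: first n_bits
    vertical-stripe images (encoding x), then n_bits horizontal ones
    (encoding y). A sample above the threshold reads as bit 1.
    """
    bits = [1 if s > 0.5 else 0 for s in samples]
    x = gray_to_binary(bits[:n_bits])
    y = gray_to_binary(bits[n_bits:])
    return x, y
```

Gray coding is the usual choice in structured-light schemes because adjacent cells differ in a single bit, which limits the damage of a misread stripe boundary.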
To manage all of this, a behavior-based control architecture has been developed. Since
the modules cannot carry a large processor, a central control is included in the architecture
to take over the high-level control. The central control has a model-based part and a
behavior-based part. The embedded control in the modules is entirely behavior-based.
Between the two there is a heterogeneous agent (layer) that allows the central control to
treat all modules in the same way, since it translates the central control's commands into
module-specific commands. A behavior-based architecture has been chosen because it is
especially appropriate for designing and controlling biologically inspired robots, has
proven suitable for modular systems, and integrates low-level and high-level control very
well.
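A minimal sketch of cooperative behavior fusion can make this concrete: each behavior proposes an actuator command together with an activation weight, and the fused command is the activation-weighted average. The class and function names below are illustrative, not the actual API of the architecture.

```python
class Behavior:
    """A behavior maps sensor readings to a (command, activation) pair."""

    def __init__(self, name, fn):
        self.name = name
        self.fn = fn

    def act(self, sensors):
        return self.fn(sensors)


def fuse(behaviors, sensors):
    """Cooperative fusion: commands are averaged, weighted by each
    behavior's activation level, so several behaviors can contribute
    to the final actuator command at the same time."""
    total, weight = 0.0, 0.0
    for b in behaviors:
        command, activation = b.act(sensors)
        total += command * activation
        weight += activation
    return total / weight if weight else 0.0
```

For example, a "go forward" behavior that always commands +1 and an "avoid" behavior whose activation grows with obstacle proximity cancel each other out as the obstacle gets close, without either behavior knowing about the other.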
To allow all actors (behaviors, modules and central control) to communicate, a commu-
nication protocol based on I2C has been developed. It allows messages to be sent from the
operator to the central control, from the central control to the modules, and between behaviors.
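As a hedged illustration of how such command messages might be framed on top of a serial bus, the sketch below packs and unpacks a simple address/command/payload/checksum frame. The field layout is a hypothetical example, not the actual protocol defined in this thesis.

```python
def pack_message(dest, src, command, payload=b""):
    """Build a frame: [dest, src, command, length, payload..., checksum].
    The checksum is the low byte of the sum of all preceding bytes."""
    frame = bytes([dest, src, command, len(payload)]) + payload
    checksum = sum(frame) & 0xFF
    return frame + bytes([checksum])


def unpack_message(frame):
    """Validate the checksum and split a frame back into its fields."""
    *body, checksum = frame
    if sum(body) & 0xFF != checksum:
        raise ValueError("corrupted frame")
    dest, src, command, length = body[:4]
    payload = bytes(body[4:4 + length])
    return dest, src, command, payload
```

A length field plus checksum of this kind lets small microcontrollers on a shared bus skip messages addressed to other modules and reject frames corrupted in transit.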
A Module Description Language (MDL) has been designed: a language that allows
modules to transmit their capabilities to the central control, so that it can process this
information and choose the best configuration and parameters for the microrobot.
Within the control architecture, an offline genetic algorithm has been developed in order
to, first, determine which modules to use to obtain an optimal configuration for a specific
task (configuration demand), and second, determine the optimum parameters for best
performance of a given module configuration (parameter optimization).
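The parameter-optimization step can be illustrated with a generic real-valued genetic algorithm, evolving, say, gait parameters against a fitness function. The operators and constants below are a plain textbook sketch under stated assumptions, not the thesis's actual implementation.

```python
import random


def genetic_optimize(fitness, bounds, pop_size=20, generations=50,
                     mutation=0.1, seed=0):
    """Minimal real-valued GA: truncation selection, uniform crossover,
    Gaussian mutation. `bounds` is a list of (low, high) per parameter,
    and `fitness` is maximized."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the better half as elites (elitism preserves the best).
        elite = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            # Uniform crossover: each gene comes from either parent.
            child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
            # Mutate each gene with small probability, clamped to bounds.
            child = [min(hi, max(lo, g + rng.gauss(0, mutation) * (hi - lo)))
                     if rng.random() < mutation else g
                     for g, (lo, hi) in zip(child, bounds)]
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)
```

In the thesis's setting the fitness of a candidate would come from running the simulator with those parameters, which is exactly why the algorithm is run offline.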
Thus, the main contributions of this thesis are: the design and construction of a
heterogeneous modular multi-configurable chained microrobot able to perform different
gaits (snake-like, inchworm, helicoidal, and combinations of these); the design of a common
interface for the modules; a behavior-based control architecture for heterogeneous chained
modular robots; a simulator for the physics and dynamics (including the design of a servo
model), electronics, communications and embedded software routines of the modules; and
finally, the enhancement of the ego-positioning system.
Resumen
The objective of this thesis is the design and control of intelligent heterogeneous multi-
configurable chained modular microrobots. That is, the development of modular
microrobots composed of different types of modules capable of performing different types
of movements (gaits), which can be arranged in different configurations (always chained)
depending on the task to be performed.
Heterogeneous is the key word in this thesis. Many designs for modular robots can be
found in the literature, but almost all of them are homogeneous: all are composed of the
same modules, except for some designs that have two different modules, one of them
passive. In this thesis, several active modules are proposed (rotation, support, extension,
helicoidal, etc.) that can be combined to execute different movements, in addition to
other passive ones (batteries, sensors, measurement of the distance traveled) that
complement the former.
The original idea was to make the robots as small as possible, finally reaching a diameter
of 27 mm. Although they cannot technically be considered microrobots, they are in the
mesoscale (between hundreds of microns and tens of centimeters), and in the literature
they are usually called minirobots or microrobots for simplicity.
During this thesis, several modules have been developed: the rotation module (in fact
a double rotation module, but called rotation module for simplicity) v1 and v2, the
helicoidal module v1 and v2, the support module v1, v1.1 and v2, the extension module
v1 and v2, the camera module v1 and v2, the contact module (included in the camera
module v2) and the battery module. Some others are still in the design or conceptual
phase, but they can be used in simulation: the SMA-based module (a prototype already
exists), the traveled-distance measurement module (in the design phase) and the sensor
module (in the conceptual phase). All modules have been designed with the idea of being
miniaturized in the future, so both the electronics and the embedded control programs
have been made as simple as possible (while of course maintaining the planned
functionality).
In parallel with the construction of the modules, a simulator has been developed to
provide an effective means of prototyping and of verifying control algorithms and
hardware designs, and of exploring system deployment scenarios. It is built on free,
open-source rigid-body dynamics software, the Open Dynamics Engine (ODE). The
simulated modules have been designed as simply as possible (using simple primitives) to
keep the simulation fluid, while trying to reflect as closely as possible their real conditions
and physical parameters, their electronic components and communication buses, and the
software embedded in the modules. The simulator has been validated with information
obtained in experiments with real modules, which has helped to adjust its parameters to
obtain an accurate model.
Although the first idea was to develop the microrobot for pipe inspection, the experience
acquired with the first prototypes showed that the locomotion systems used inside pipes
could also be suitable outside them, and that the prototypes and the control architecture
are useful in open spaces. Research was therefore extended to open spaces, and the
ego-positioning system was added.
The ego-positioning system is a method that allows the robots of a swarm to know their
position and orientation based on the projection of sequences of coded images, composed
of horizontal and vertical stripes, onto photodiodes placed on the robots. This concept
can also be applied to the modules of a microrobot so that they can know their position
and orientation, and to send commands to all of them at the same time.
To manage all of this, a behavior-based control architecture has been developed. Since
the modules cannot carry a high-capacity processor, a central control is included in the
architecture to provide high-level control. The central control has a model-based part and
a behavior-based part. The control embedded in the modules is entirely behavior-based.
Between the two there is a heterogeneous agent (or layer) that allows the central control
to treat all modules in the same way, since the heterogeneous layer translates its orders
into module-specific commands. This behavior-based architecture has been chosen
because it is especially suitable for the design and control of robots inspired by biological
systems, has proven suitable for modular systems, and integrates high and low levels of
control very well.
To allow all actors (behaviors, modules and central control) to communicate, a
communication protocol based on I2C has been developed. This protocol makes it
possible to send messages from the operator to the central control, from the central
control to the modules, and between behaviors.
Within the architecture, a Module Description Language (MDL) has also been
developed: a language that allows modules to transmit their capabilities to the central
control, so that it can process this information and choose the best configuration and
parameters for the microrobot.
Within the control architecture, a genetic algorithm has been developed in order to,
first, determine which modules to use to obtain an optimal configuration for a specific
task (configuration demand), and second, determine the optimum parameters for the
best performance of a module given a configuration (parameter optimization).
In summary, the main contributions of this thesis are: the design and construction of a
heterogeneous multi-configurable chained modular microrobot capable of performing
different locomotion gaits (snake-like, inchworm, helicoidal, and combinations of these);
the design of a common interface for the modules; a behavior-based control architecture
for heterogeneous chained modular robots; a simulator for the physics and dynamics
(including the design of a servo model), electronics, communications and embedded
software routines of the modules; and finally, the improvement of the ego-positioning
system.
Contents
Abstract ix
Resumen xi
Contents xiii
List of Figures xvii
List of Tables xxiii
Acknowledgements xxvii
1 Introduction 1
1.1 Motivation and framework of the thesis . . . . . . . . . . . . . . . . . . . . 1
1.2 Topics of the thesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2.1 About Microrobotics . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2.2 About Modular Robots . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2.3 About Pipe Inspection Robots . . . . . . . . . . . . . . . . . . . . . 3
1.3 Objectives of the thesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.4 Overview of the thesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2 Review on Modular, Pipe Inspection and Micro Robotic Systems 9
2.1 The origins . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.2 Modular robots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.2.1 PolyBot and PolyPod . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.2.2 M-TRAN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.2.3 CONRO . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.2.4 Molecube . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.2.5 Crystalline and Molecule robots . . . . . . . . . . . . . . . . . . . . . 22
2.2.6 Telecube and Proteo (Digital Clay) . . . . . . . . . . . . . . . . . . . 24
2.2.7 Chobie . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.2.8 ATRON . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.2.9 Active Cord Mechanism (ACM) . . . . . . . . . . . . . . . . . . . . 31
2.2.10 WormBot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
2.2.11 Others . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
2.3 Microrobots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
2.3.1 Micro size modular machine using SMAs . . . . . . . . . . . . . . . . 36
2.3.2 Denso Corporation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
2.3.3 Endoscope microrobots . . . . . . . . . . . . . . . . . . . . . . . . . 38
2.3.4 LMS, LAB and LAI microrobots . . . . . . . . . . . . . . . . . . . . 38
2.3.5 12-legged endoscopic capsular robot . . . . . . . . . . . . . . . . . . 41
2.4 Pipe Inspection robots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
2.4.1 MRInspect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
2.4.2 FosterMiller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
2.4.3 Helipipe . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
2.4.4 Theseus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
2.5 Robot Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
2.6 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
3 Review on Control Architectures for Modular Microrobots 49
3.1 Classification of control architectures . . . . . . . . . . . . . . . . . . . . . . 50
3.2 Behaviour-Based Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
3.2.1 What is a behavior? . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
3.2.2 Behavior-based systems . . . . . . . . . . . . . . . . . . . . . . . . . 55
3.2.3 Behavior representation . . . . . . . . . . . . . . . . . . . . . . . . . 55
3.2.4 Behavioral encoding . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
3.2.5 Emergent behavior . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
3.2.6 Behavior coordination . . . . . . . . . . . . . . . . . . . . . . . . . . 60
3.3 Behavior-Based Architectures . . . . . . . . . . . . . . . . . . . . . . . . . . 63
3.3.1 Subsumption Architecture . . . . . . . . . . . . . . . . . . . . . . . . 64
3.3.2 Motor Schemas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
3.3.3 Activation Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
3.3.4 DAMN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
3.3.5 CAMPOUT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
3.4 Hybrid Deliberate-Reactive Architectures . . . . . . . . . . . . . . . . . . . 72
3.4.1 3-Tiered (3T) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
3.4.2 Aura . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
3.4.3 Atlantis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
3.4.4 Saphira . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
3.4.5 DD&P . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
3.5 Modular Robot Architectures . . . . . . . . . . . . . . . . . . . . . . . . . . 78
3.5.1 CONRO . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
3.5.2 M-TRAN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
3.5.3 Polybot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
3.6 Adaptive Behavior . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
3.6.1 Reinforcement Learning . . . . . . . . . . . . . . . . . . . . . . . . . 83
3.6.2 Neural Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
3.6.3 Fuzzy Behavioral Control . . . . . . . . . . . . . . . . . . . . . . . . 84
3.6.4 Genetic Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
3.7 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
4 Electromechanical design 89
4.1 Developed modules hardware description . . . . . . . . . . . . . . . . . . . . 90
4.1.1 Rotation Module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
4.1.2 Support and Extension modules . . . . . . . . . . . . . . . . . . . . 95
4.1.3 Helicoidal drive module . . . . . . . . . . . . . . . . . . . . . . . . . 101
4.1.4 Camera module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
4.1.5 Batteries module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
4.2 Other modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
4.2.1 SMA-based module . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
4.2.2 Traveler module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
4.2.3 Sensor module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
4.3 Embedded electronics description . . . . . . . . . . . . . . . . . . . . . . . . 108
4.3.1 Common interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
4.3.2 Actuator control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
4.3.3 Sensor management . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
4.3.4 I2C communication . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
4.3.5 Synchronism lines communication . . . . . . . . . . . . . . . . . . . 109
4.3.6 Auto protection and adaptable motion . . . . . . . . . . . . . . . . . 110
4.3.7 Self orientation detection . . . . . . . . . . . . . . . . . . . . . . . . 111
4.4 Chained configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
4.4.1 Homogeneous configurations . . . . . . . . . . . . . . . . . . . . . . 113
4.4.2 Heterogeneous configurations . . . . . . . . . . . . . . . . . . . . . . 121
4.5 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
5 Simulation Environment 125
5.1 Physics and dynamics simulator . . . . . . . . . . . . . . . . . . . . . . . . . 126
5.1.1 Open Dynamics Engine (ODE) . . . . . . . . . . . . . . . . . . . . . 126
5.1.2 Servomotor model . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
5.1.3 Modules physical model . . . . . . . . . . . . . . . . . . . . . . . . . 129
5.1.4 Environment model . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
5.2 Electronic and control simulator . . . . . . . . . . . . . . . . . . . . . . . . 133
5.2.1 Software description . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
5.2.2 Actuator control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
5.2.3 Sensor management . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
5.2.4 I2C communication . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
5.2.5 Synchronism lines communication . . . . . . . . . . . . . . . . . . . 136
5.2.6 Simulation of the power consumption . . . . . . . . . . . . . . . . . 136
5.3 Class implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
5.3.1 I2C classes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
5.3.2 Servo class . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
5.3.3 Module classes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
5.3.4 Central Control class . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
5.3.5 Robot class . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
5.3.6 Graphical User Interface classes . . . . . . . . . . . . . . . . . . . . . 141
5.4 Heterogeneous modular robot . . . . . . . . . . . . . . . . . . . . . . . . . 141
5.5 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
6 Positioning System for Mobile Robots: Ego-Positioning 147
6.1 Brief on Positioning Systems for Mobile Robots . . . . . . . . . . . . . . . . 147
6.1.1 IR light emission-detection . . . . . . . . . . . . . . . . . . . . . . . 148
6.1.2 Electrical fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
6.1.3 Wireless Ethernet . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
6.1.4 Ultrasound systems . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
6.1.5 Electromagnetic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
6.1.6 Pressure sensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
6.1.7 Visual systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
6.2 Introduction to EGO-positioning . . . . . . . . . . . . . . . . . . . . . . . . 154
6.3 Hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
6.3.1 Sensing devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
6.3.2 Beamer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
6.4 Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
6.4.1 EGO-positioning procedures: theory and performances . . . . . . . . 163
6.4.2 I-Swarm considerations . . . . . . . . . . . . . . . . . . . . . . . . . 165
6.4.3 Image Sequence Programming . . . . . . . . . . . . . . . . . . . . . 166
6.4.4 Alice software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
6.4.5 I-Swarm software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
6.5 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
6.5.1 Transmission of commands . . . . . . . . . . . . . . . . . . . . . . . 168
6.5.2 Programming robots . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
6.6 Results and conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
7 Control Architecture 173
7.1 Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
7.2 Communication protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
7.2.1 Layer structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
7.2.2 Command messages structure . . . . . . . . . . . . . . . . . . . . . . 176
7.2.3 Low level commands (LLC) . . . . . . . . . . . . . . . . . . . . . . . 178
7.2.4 High level commands (HLC) . . . . . . . . . . . . . . . . . . . . . . 180
7.3 Module Description Language (MDL) . . . . . . . . . . . . . . . . . . . . . 182
7.4 Working modes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
7.5 Onboard control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
7.5.1 Embedded Behaviors . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
7.5.2 Behavior fusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
7.6 Heterogeneous layer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
7.6.1 Communications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
7.6.2 Configuration check . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
7.6.3 MDL phase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
7.7 Central control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
7.7.1 Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
7.7.2 Inference Engine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
7.7.3 Central control Behaviors . . . . . . . . . . . . . . . . . . . . . . . . 200
7.7.4 Behavior fusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
7.8 Offline Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
7.8.1 Brief on genetic algorithms . . . . . . . . . . . . . . . . . . . . . . . 206
7.8.2 Codification and set-up . . . . . . . . . . . . . . . . . . . . . . . . . 209
7.8.3 Phases of the GAs . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
7.9 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
8 Test and Results 219
8.1 Real tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
8.1.1 Camera/Contact Module . . . . . . . . . . . . . . . . . . . . . . . . 220
8.1.2 Helicoidal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
8.1.3 Worm-like . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
8.1.4 Snake-like . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
8.2 Validation tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
8.2.1 Servomotor tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
8.2.2 Inchworm tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
8.2.3 Helicoidal module test . . . . . . . . . . . . . . . . . . . . . . . . . . 232
8.2.4 Snake-like gait tests . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
8.3 Simulation tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
8.3.1 Locomotion tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
8.3.2 Control tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
9 Conclusions and Future Work 247
9.1 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
9.2 Main contributions of the thesis . . . . . . . . . . . . . . . . . . . . . . . . . 248
9.3 Publications and Merits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
9.3.1 Publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
9.3.2 Merits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
9.4 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
A Fabrication technologies 253
A.1 Stereolithography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
A.1.1 Part generation mechanics . . . . . . . . . . . . . . . . . . . . . . . . 253
A.1.2 Images from real work process . . . . . . . . . . . . . . . . . . . . . 254
A.1.3 Advantages, drawbacks and limitations . . . . . . . . . . . . . . . . 255
A.2 Micro-milling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
B Terms and Concepts 261
C Equipment used 267
C.1 Hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
C.2 Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
C.2.1 Modelling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
C.2.2 Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
C.2.3 Microchip programming . . . . . . . . . . . . . . . . . . . . . . . . . 270
C.2.4 Editing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
Glossary 273
Bibliography 275
List of Figures
2.1 Tetrobot: a parallel Stewart platform. . . . . . . . . . . . . . . . . . . . . . 10
2.2 Real picture of CEBOT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.3 Fracta robot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.4 Metamorphic robot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.5 Polypod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.6 Different configurations of PolyBot . . . . . . . . . . . . . . . . . . . . . . . 15
2.7 Different versions of PolyBot main modules . . . . . . . . . . . . . . . . . . 15
2.8 Overview of M-TRAN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.9 M-TRAN main module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.10 Different configurations of M-TRAN . . . . . . . . . . . . . . . . . . . . . . 17
2.11 Main module of CONRO . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.12 Different configurations of CONRO . . . . . . . . . . . . . . . . . . . . . . 19
2.13 Example of reconfiguration in Molecube . . . . . . . . . . . . . . . . . . . . 20
2.14 Molecubes new design (2007) . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.15 Crystalline robot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.16 Molecule robot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.17 Telecube . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.18 Digital Clay Modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.19 Slide motion mechanism of Chobie II . . . . . . . . . . . . . . . . . . . . . . 28
2.20 Chobie reconguration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.21 ATRON . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.22 Active Cord Mechanism (ACM): version III (a), R3 (b), R4 (c) and R5 (d) 31
2.23 WormBot: CPG-driven Autonomous Robot . . . . . . . . . . . . . . . . . . 33
2.24 Prototype from the University of Canberra . . . . . . . . . . . . . . . . . . 34
2.25 Superbot modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
2.26 MAAM and Vertical Modules . . . . . . . . . . . . . . . . . . . . . . . . . . 36
2.27 I-Cubes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
2.28 Basic motion of Micro SMA . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
2.29 Structure and real module of Micro SMA . . . . . . . . . . . . . . . . . . . 38
2.30 Denso microrobot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
2.31 Endoscope microrobots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
2.32 LAI, LMS and LAB microrobots . . . . . . . . . . . . . . . . . . . . . . . . 40
2.33 12-legged endoscopic capsular robot . . . . . . . . . . . . . . . . . . . . . . 41
2.34 MRInspect pipe inspection robot . . . . . . . . . . . . . . . . . . . . . . . . 42
2.35 Foster Miller pipe inspection robot . . . . . . . . . . . . . . . . . . . . . . . 43
2.36 Helipipe . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
2.37 Thes-I pipe inspection robot . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
2.38 Thes-III pipe inspection robot . . . . . . . . . . . . . . . . . . . . . . . . . . 45
3.1 AI models: a) Deliberative b) Reactive c) Hybrid d) Behavior-based . . . . 50
3.2 NASREM architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
3.3 Example of stimulus response diagram . . . . . . . . . . . . . . . . . . . . . 56
3.4 FSA encoding a door traversal mechanism . . . . . . . . . . . . . . . . . . . 57
3.5 Potential fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
3.6 Basic block in subsumption architecture . . . . . . . . . . . . . . . . . . . . 61
3.7 Fuzzy command fusion example . . . . . . . . . . . . . . . . . . . . . . . . . 63
3.8 Example of structure in subsumption architecture . . . . . . . . . . . . . . . 64
3.9 Subsumption AFSM of a Three Layered Robot . . . . . . . . . . . . . . . . 65
3.10 Structure of Motor Schemas . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
3.11 Activation Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
3.12 DAMN architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
3.13 CAMPOUT: block diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
3.14 3T intelligent control architecture . . . . . . . . . . . . . . . . . . . . . . . 74
3.15 Aura Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
3.16 Atlantis Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
3.17 Saphira system architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
3.18 DD&P Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
3.19 Control Architecture of M-TRAN . . . . . . . . . . . . . . . . . . . . . . . . 81
3.20 Polybot control scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
3.21 Neural Networks Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
3.22 Fuzzy Logic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
3.23 GA scheme in M-TRAN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
4.1 Detail of a wheel of the helicoidal module . . . . . . . . . . . . . . . . . . . 90
4.2 Gearhead design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
4.3 Rotation module V1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
4.4 Rotation module v2 plus camera . . . . . . . . . . . . . . . . . . . . . . . . 92
4.5 Snake configuration plus camera . . . . . . . . . . . . . . . . . . . . . . . . 93
4.6 Reference system for Denavit-Hartenberg . . . . . . . . . . . . . . . . . . . 94
4.7 Worm-like microrobot V1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
4.8 Support module 1.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
4.9 Support module v2.0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
4.10 Inchworm configuration based on v2.1 modules plus camera . . . . . . . . . 98
4.11 Extension module detailed mechanism . . . . . . . . . . . . . . . . . . . . . 99
4.12 Coordinate system for the kinematics of the support module . . . . . . . . . 100
4.13 Kinematics diagrams of the extension module . . . . . . . . . . . . . . . . . 101
4.14 Helicoidal module v1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
4.15 Helicoidal module V2 plus camera . . . . . . . . . . . . . . . . . . . . . . . 103
4.16 Camera module v1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
4.17 Camera module v2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
4.18 Batteries Module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
4.19 SMA-based modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
4.20 Traveler Module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
4.21 Common interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
4.22 Camera electronic circuits . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
4.23 Auto-protection control scheme . . . . . . . . . . . . . . . . . . . . . . . . . 110
4.24 Auto-protection circuits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
4.25 Consumption output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
4.26 Accelerometer tests: still module . . . . . . . . . . . . . . . . . . . . . . . . 114
4.27 Module moving along a linear trajectory in the XY plane . . . . . . . . . . 115
4.28 Servo moving from 30° to 150° with no load . . . . . . . . . . . . . . . . . 115
4.29 Servo moving from 150° to 30° loaded . . . . . . . . . . . . . . . . . . . . . 116
4.30 Snake-like configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
4.31 Snake movements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
4.32 Snake-like configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
4.33 Snake-like microrobot inside pipes . . . . . . . . . . . . . . . . . . . . . . . 119
4.34 Graphical User Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
4.35 Worm-like module: Sequence of movement . . . . . . . . . . . . . . . . . . . 121
4.36 Helicoidal configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
4.37 Multi-modular configuration . . . . . . . . . . . . . . . . . . . . . . . . . . 122
5.1 Simulation Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
5.2 Mathematical model of the servomotor . . . . . . . . . . . . . . . . . . . . . 127
5.3 Rotation Module and Helicoidal Module . . . . . . . . . . . . . . . . . . . . 130
5.4 Inchworm Modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
5.5 Touch Module and Traveler Module . . . . . . . . . . . . . . . . . . . . . . 133
5.6 Accelerometer axis sketch . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
5.7 Class diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
5.8 Class interaction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
5.9 Elbow Negotiation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
6.1 Experimental setup of iGPS . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
6.2 Behavior of the system for irregular floors . . . . . . . . . . . . . . . . . . . 149
6.3 NorthStar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
6.4 Indoor positioning network . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
6.5 Illustration of time difference of arrival (TDOA) localization . . . . . . . . 151
6.6 Example of wireless ethernet distribution of five base stations (enumerated
small circles) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
6.7 MotionStar system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
6.8 Smart Floor plate (left) and load cell (right) . . . . . . . . . . . . . . . . . . 154
6.9 Ego-positioning system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
6.10 Position and orientation calculation (a) and Alice robot (b) . . . . . . . . 155
6.11 Ego-positioning extension to chained modular robots . . . . . . . . . . . . . 156
6.12 BPW34 main features (a) and photodiodes board (b) . . . . . . . . . . . . . 157
6.13 Optimal RC Filter (a) and Spectral sensitivity of aSi:H (b) . . . . . . . . . 158
6.14 Current comparator for I-SWARM . . . . . . . . . . . . . . . . . . . . . . . 158
6.15 Color wheel of the DLP beamer . . . . . . . . . . . . . . . . . . . . . . . . . 159
6.16 Response of the beamer to a white image . . . . . . . . . . . . . . . . . . . 159
6.17 Response of the beamer (without color wheel) to a white image . . . . . . . 160
6.18 Response of the photodiode to a red image (a) and a yellow image (b) . . . 160
6.19 Response of the photodiode to a projection of sequences of black and white
images at 60 Hz (a) and 85 Hz (b) . . . . . . . . . . . . . . . . . . . . . . . 161
6.20 Response of the photodiode to a grey image . . . . . . . . . . . . . . . . . . 161
6.21 Response of the photodiode to a projection of sequences of 3 (a) and 4 (b)
different grey scale images at 60 Hz . . . . . . . . . . . . . . . . . . . . . . . 162
6.22 Distribution of intensity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
6.23 Output voltage for a black and white sequence at the point of higher (a)
and lower (b) illumination . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
6.24 Binary (a) and Gray (b) code . . . . . . . . . . . . . . . . . . . . . . . . . . 164
6.25 Sampling time to get the RGB values of the projected image . . . . . . . . 165
6.26 Interruption Service Routine Photodiodes (a) and function SequenceTest
(b) pseudocode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
6.27 Sampling procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
6.28 Function EGO Position (a) and Main program (b) pseudocode . . . . . . 170
6.29 Gray to Binary conversion scheme . . . . . . . . . . . . . . . . . . . . . . . 171
6.30 Success - error rate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
7.1 Control Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
7.2 Control Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
7.3 Behavior sketch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
7.4 HLC and LLC commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
7.5 Communication Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
7.6 I2C frames . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
7.7 Behavior scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
7.8 Heat dissipation sketch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
7.9 Maximum servomotor consumption with blocking . . . . . . . . . . . . . . . 189
7.10 Extension module at its higher and lower position . . . . . . . . . . . . . . 190
7.11 Behavior fusion scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
7.12 Configuration check sequence diagram . . . . . . . . . . . . . . . . . . . . . 196
7.13 Ext / Contraction capabilities: a) grade 3 and b) grade 1 . . . . . . . . . . 198
7.14 Behavior fusion scheme for Central Control behaviors . . . . . . . . . . . . 205
7.15 Roulette probability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
7.16 Single point crossover example . . . . . . . . . . . . . . . . . . . . . . . . . 214
7.17 Mutation example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
8.1 Images taken from the camera inside a pipe . . . . . . . . . . . . . . . . . . 220
8.2 Camera Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
8.3 Helicoidal module inside a pipe . . . . . . . . . . . . . . . . . . . . . . . . . 221
8.4 Worm module tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
8.5 Snake-like movement over undulated terrain . . . . . . . . . . . . . . . . . . 223
8.6 Corner negotiation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
8.7 30° to 120° unloaded: rotation angle . . . . . . . . . . . . . . . . . . . . . . 225
8.8 30° to 120° unloaded: intensity . . . . . . . . . . . . . . . . . . . . . . . . . 225
8.9 30° to 120° unloaded: torque . . . . . . . . . . . . . . . . . . . . . . . . . . 226
8.10 30° to 120° loaded: rotation angle . . . . . . . . . . . . . . . . . . . . . . . 226
8.11 30° to 120° loaded: intensity . . . . . . . . . . . . . . . . . . . . . . . . . . 227
8.12 30° to 120° loaded: tau . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
8.13 90° to 30° unloaded: rotation angle . . . . . . . . . . . . . . . . . . . . . . 228
8.14 90° to 30° unloaded: intensity . . . . . . . . . . . . . . . . . . . . . . . . . 228
8.15 90° to 30° unloaded: tau . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
8.16 90° to 30° unloaded: rotation angle . . . . . . . . . . . . . . . . . . . . . . 229
8.17 90° to 30° unloaded: intensity . . . . . . . . . . . . . . . . . . . . . . . . . 230
8.18 90° to 30° unloaded: tau . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
8.19 Rotation module v1 torque test . . . . . . . . . . . . . . . . . . . . . . . . . 231
8.20 1D sinusoidal movement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
8.21 Turning movement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
8.22 Rolling movement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
8.23 Rotating movement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
8.24 Lateral shifting movement . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
8.25 R+H elbow negotiation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
8.26 R+H elbow negotiation depending on pipe diameter . . . . . . . . . . . . . 238
8.27 Rotation + passive modules in a vertical sinusoidal movement . . . . . . . . 239
8.28 Rotation + passive modules negotiating an elbow with and without helicoidal module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
8.29 Inchworm locomotion composed of several extension and support modules . 241
8.30 Example of heterogeneous configuration . . . . . . . . . . . . . . . . . . . . 242
8.31 Configuration check example . . . . . . . . . . . . . . . . . . . . . . . . . . 243
8.32 Example of orientation behavior . . . . . . . . . . . . . . . . . . . . . . . . 244
8.33 Contact, Rotation, Helicoidal and Passive . . . . . . . . . . . . . . . . . . . 244
8.34 Contact and rotation modules . . . . . . . . . . . . . . . . . . . . . . . . . . 245
8.35 Example of chain splitting . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
A.1 Stereolithography process . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
A.2 Support columns removal . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
A.3 Laser trajectory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
A.4 Solidication process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
A.5 Post-cure oven . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
A.6 Detail of some parts of the rotation module v1 . . . . . . . . . . . . . . . . 257
A.7 Micro-milling system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
A.8 Fixation System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
A.9 Contouring machining . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
A.10 Helicoidal module leg generated by micromachining . . . . . . . . . . . . . . 260
C.1 U2C-12 card . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
C.2 Communication box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
List of Tables
1.1 Use Cases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2.1 3-D Robots summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
2.2 2-D Robots summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
2.3 1-D Robots summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
3.1 Subsumption Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
3.2 Motor Schemas Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
3.3 Activation Networks Architecture . . . . . . . . . . . . . . . . . . . . . . . . 69
3.4 DAMN Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
3.5 Control Architecture for Multi-robot Planetary Outposts (CAMPOUT) Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
4.1 Modules main characteristics . . . . . . . . . . . . . . . . . . . . . . . . . . 90
4.2 Denavit-Hartenberg parameters . . . . . . . . . . . . . . . . . . . . . . . . . 93
4.3 Velocity in a 30cm pipe at different angles (helicoidal module) . . . . . . . 103
4.4 Velocity in a 30cm pipe at different angles (2nd helicoidal module) . . . . . 103
4.5 Power Consumption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
6.1 Setup description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
6.2 Color coding table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
6.3 Programming time and speed . . . . . . . . . . . . . . . . . . . . . . . . . . 170
7.1 LLC1 commands: sending . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
7.2 LLC1 commands: answering . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
7.3 LLC2 commands: sending . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
7.4 LLC2 commands: answering . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
7.5 HLC commands: sending . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
7.6 HLC commands: answering . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
7.7 Behavior encoding: Avoid overheating . . . . . . . . . . . . . . . . . . . . . 188
7.8 Behavior encoding: Avoid actuator damage . . . . . . . . . . . . . . . . . . 188
7.9 Behavior encoding: Avoid mechanical damages . . . . . . . . . . . . . . . . 190
7.10 Behavior encoding: Self diagnostic . . . . . . . . . . . . . . . . . . . . . . . 191
7.11 Behavior encoding: Situation awareness . . . . . . . . . . . . . . . . . . . . 191
7.12 Behavior encoding: Environment diagnostic . . . . . . . . . . . . . . . . . . 192
7.13 Behavior encoding: Vertical sinusoidal movement . . . . . . . . . . . . . . . 193
7.14 Behavior encoding: Horizontal sinusoidal movement . . . . . . . . . . . . . 193
7.15 Behavior encoding: Worm-like movement . . . . . . . . . . . . . . . . . . . 194
7.16 Behavior encoding: Push-Forward movement . . . . . . . . . . . . . . . . . 194
7.17 Table of Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
7.18 Behavior encoding: Balance / Stability . . . . . . . . . . . . . . . . . . . . . 201
7.19 Behavior encoding: Straight forward / backwards . . . . . . . . . . . . . . . 202
7.20 Behavior encoding: Edge Following . . . . . . . . . . . . . . . . . . . . . . . 202
7.21 Behavior encoding: Pipe Following . . . . . . . . . . . . . . . . . . . . . . . 203
7.22 Behavior encoding: Obstacle negotiation . . . . . . . . . . . . . . . . . . . . 203
7.23 GA Configuration demand genes value range . . . . . . . . . . . . . . . . . 209
7.24 GA Configuration demand parameters . . . . . . . . . . . . . . . . . . . . . 210
7.25 GA Parameter optimization genes value range . . . . . . . . . . . . . . . . . 211
8.1 Speed and slope for different configurations . . . . . . . . . . . . . . . . . . 220
8.2 Parameters for the servomotor tests . . . . . . . . . . . . . . . . . . . . . . 224
8.3 Speed test of the inchworm configuration . . . . . . . . . . . . . . . . . . . 231
8.4 Speed test of helicoidal module . . . . . . . . . . . . . . . . . . . . . . . . . 232
Acknowledgements
Chapter 1
Introduction
"When I read a book I seem to read it with my eyes only, but now and then I come
across a passage, perhaps only a phrase, which has a meaning for me, and it becomes
part of me."
W. Somerset Maugham
1.1 Motivation and framework of the thesis
The idea that gave rise to this thesis is the lack of multiconfigurable heterogeneous
microrobotic systems able to inspect the inner part of narrow pipes. There are many robots
for pipe inspection, but they are too big. There are many modular systems (both lattice
and chain), but they are homogeneous and also too wide and box-shaped, which makes
them unsuitable for pipes. And there are microrobots for colonoscopy, but they are too
slow for pipe inspection. In summary, the idea of the thesis is to bring together the
advantages of modular, micro and pipe inspection robots in an Intelligent Heterogeneous
Multi-configurable Chained Microrobotic Modular System.
After a rigorous study of the state of the art, it was decided that this thesis should lie
at the intersection of three fields: micro-robotics, modular robots and pipe-inspection robots. There
are many robots and studies in each of these fields, but none that combines all
of them. This thesis aims to create a model for developing microrobots capable of moving through
narrow pipes in order to explore them. The purpose is to do so using modular robotic principles.
Once the basics of the research were clear, a control scheme had to be built upon
the mechanical system. The selected approach was behavior-based control, for many
reasons that will be described in chapter 7.
After some time of research, it was necessary to increase the dimensions of the prototypes
in order to facilitate their fabrication, so the target pipe diameter
moved to 40mm. This made it possible to build more robust prototypes and to add
other functionalities. This is also why this thesis still speaks of microrobots:
although the dimensions of the prototypes are slightly large for a microrobot,
the concept was created to be applicable to a microrobot.
Although the first idea was to develop the microrobot for pipe inspection, the experience
acquired with the first prototypes made it clear that the locomotion systems used
inside pipes were also suitable outside them, and that the prototypes and the control
architecture were useful in open spaces. That is why the research was extended to open spaces
and the ego-positioning system was added.
This thesis has been developed within three projects: MICROROB (TAMAI), MICROMULT (MICROTEC) and I-SWARM.
The purpose of the MICROTUB project is the design and construction of a microrobot
able to move in pipes and tubes (straight or not) of about 26mm diameter. The
development of this micro-robot will lead to the automation of inspection and maintenance
of pipes and tubes at a lower cost in, for example, sewer systems, gas pipelines, and water,
gas and heating pipes in buildings.
MICROMULT stands for Multi-configurable Micro-robotic Systems. It is subproject
1 of the MICROTEC project (Integration of Micromanufacturing, Microassembly and
Microrobotics technologies).
The main goals of MICROMULT are:
- the design and construction of a multi-configurable heterogeneous modular micro-robotic
system able to move in narrow environments.
- the design and construction of a micro-assembly robotic station to develop micro-assembly,
micro-gripping and micro-machining techniques.
The I-SWARM project intends to lead the way towards the development of an artificial
ant and thus make a significant step forward in robotics research by bringing together
expertise in micro-robotics, in distributed and adaptive systems, and in self-organising
biological swarm systems. Building on the expertise of two EC-funded projects, MINIMAN
and MiCRoN, this project will produce technological advances to facilitate the mass-production
of micro-robots, which can then be employed as a real swarm consisting
of up to 1,000 robot clients. These clients will all be equipped with limited on-board
intelligence. Such a robot swarm can perform a variety of applications, including micro-assembly,
and biological, medical or cleaning tasks.
1.2 Topics of the thesis
1.2.1 About Microrobotics
Microrobotics (or microbotics) is the field of miniature robotics, in particular mobile robots
with characteristic dimensions of less than 1 mm. The term can also be used for robots
capable of handling micrometer-size components, which is the case of the robots developed
in this thesis, in which some components are smaller than 1 mm. Generally speaking, the
term microrobot is used to describe very small robots.
The earliest research and conceptual design of such small robots was conducted in
the early 1970s in (then) classified research for U.S. intelligence agencies. Applications
envisioned at that time included prisoner-of-war rescue assistance and electronic intercept
missions. The underlying miniaturization support technologies were not fully developed
at that time, so progress in prototype development was not immediately forthcoming
from this early set of calculations and concept designs.
The concept of building very small robots, benefiting from recent advances in
Micro Electro Mechanical Systems (MEMS), was publicly introduced in the seminal paper
by Anita M. Flynn, "Gnat Robots (and How They Will Change Robotics)" [Flynn, 1987].
Microbots were born thanks to the appearance of the microcontroller in the last
decade of the 20th century, and the appearance of miniature mechanical systems on silicon
(MEMS), although many microbots do not use silicon for mechanical components other
than sensors.
One of the major challenges in developing a microrobot is to achieve motion using a
very limited power supply. The microrobots in this thesis require a power supply cable to work.
1.2.2 About Modular Robots
Modular robotics is an approach to building robots for various complex tasks. Instead
of designing a new and different mechanical robot for each task, many copies of one
simple module are built. The module cannot do much by itself, but when many of them
are connected together, the result is a system that can do complicated things. In fact, a
modular robot can even reconfigure itself (change its shape by moving its modules around)
to meet the demands of different tasks or different working environments.
What are the limitations on the number of modules for a useful modular robotic
system? How does the number of modules affect:
- Versatility (different shapes)
- Robustness (self-repair and redundancy)
- Cost (economies of scale)
These are very important questions that should be answered by each project.
Scientific papers point out the importance of modular design as a complementary
direction to integral design. The main benefits of this design method are: minimizing
design time, increasing the number of configurations, easier maintenance, lower
prices, etc.
Modularity refers to the user's ability to reconfigure the robot, in both hardware and
software, by combining several hardware modules as well as by redefining the architecture
of the control program using software modules.
1.2.3 About Pipe Inspection Robots
Pipelines increasingly need to be inspected, maintained, and/or repaired in a wide range
of industries, such as the petroleum, chemical, nuclear, space/aeronautic, and waste fields.
                     Tests        Basic        General        Configuration  Surveillance
                                                              Demand
Robot Configuration  Known        Known        Known          Unknown        Known
Homogeneity          Homogeneous  Homogeneous  Heterogeneous  Heterogeneous  Heterogeneous
Environment          Known        Unknown      Unknown        Known          Unknown
Task                 Known        Known        Known          Known          Unknown

Table 1.1: Use Cases
Pipe inspection is important not only for optimizing flow efficiency, but is also critical
to preventing failure. The effects of time, corrosion, and damage make pipeline failure an
increasing concern, with some pipelines having been in use for 30 to 40 years. In-pipe inspection
robots are needed with a smaller size, longer range, and increased maneuverability.
Pipes in heating, water and gas systems, placed in homes, buildings or installations
(like swimming pools, tanks, etc.), are not usually accessible because they are either hidden
or cannot be dismantled for inspection. In addition, some of these pipes are quite narrow,
and most commercial robots cannot get into them.
As an example, the inspection of gas transmission mains requires the innovative marriage
of a highly adaptable/flexible robotic platform with advanced sensor technologies
operating as an autonomous inspection system in a live natural gas environment. Working
with New York GAS and the Department of Energy, Foster-Miller has developed and
is using a unique robotic system called Pipe Mouse to meet the demanding requirements
of gas pipe inspection.
1.3 Objectives of the thesis
The main objective of this thesis is the design of a multiconfigurable modular heterogeneous
microrobot that gathers the advantages of microrobots, modular robots and
pipe inspection robots. This includes the design and fabrication of modules, the design of
the control architecture and the development of a simulator.
The main objectives are explained in the following sections.
Electromechanical design and construction of a heterogeneous multi-configurable chained microrobot
In order to develop a heterogeneous modular robot, several heterogeneous modules have to
be built: rotation (2 dof), support, extension, helicoidal, camera plus contact detection,
and batteries.
Modules can be arranged in two kinds of configurations: homogeneous (worm-like,
snake-like, helicoidal drive) and heterogeneous (a composition of all of them).
The use cases that the microrobot has been conceived for are shown in table 1.1.
Tests The robot will be able to move through tubes between 30 and 50mm in diameter,
consisting of the following parts:
- horizontal straight sections
- vertical straight sections
- bends of up to 90 degrees, both horizontal and vertical
- bifurcations of up to 90 degrees, both horizontal and vertical
- moving from one section to another of a different diameter.
For each of these parts, the best configuration and the best sequence of moves will be
explored. The robot will also be able to move along the ground (crawl), but only in settings
that allow it. The configurations capable of doing so, for example the snake type, will be
determined experimentally.
Preconditions: the robot must be configured.
Normal course: the robot will be able to travel the corresponding segment.
Basic The robot will be able to move through tubes between 30 and 50mm in diameter that
are composed of unknown segments.
Preconditions: the robot must be configured.
Normal course: the robot will be able to travel the corresponding segment.
General The operator places the robot at the entrance of the pipe and gives the
order to proceed until further notice. The system verifies the configuration (through the
synchronous line) and optimizes the sequence of movements to be carried out.
Preconditions: The different modules will already be assembled and ready.
Normal course: the robot will move forward, adapting to the shape of the pipe and
overcoming any unforeseen obstacles.
Configuration Demand The operator will specify the path to be traveled or
the mission to be undertaken, and the system will output the appropriate modules
and their position in the chain.
Postconditions: the robot will be prepared for a mission.
Surveillance A utopian goal. The robot will move through an unfamiliar environment,
monitoring it and managing the repair and/or surveillance tasks for which it has
been designed.
Postconditions: the robot will return to the base station for recharging batteries and
/ or downloading of audio-visual material (photos, video, etc).
Development of a control architecture for heterogeneous modular chain-type microrobots
Regarding the control scheme, the microrobot will be a semi-distributed autonomous robot. The control scheme will be divided into three layers:
Low level: embedded in each module. It will control the movements of the module and the response to unexpected external stimuli. Easy to implement in small modules with limited microcontrollers.
Heterogeneous layer: the interpreter from the high-level control to the low-level control of each module.
High level: central control and planning. It considers the microrobot as a whole, not each module individually.
The control architecture will be enhanced with an offline genetic algorithm aimed at improving the configuration of the microrobotic modular chain and optimizing its locomotion parameters.
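The offline optimization step described above can be sketched as a standard genetic loop over gait parameters. The genome layout, the placeholder fitness function and all names below are illustrative assumptions; in the real system the fitness of a genome would be the distance traveled by the simulated chain.

```python
import random

# Hypothetical gait genome: one (amplitude, frequency, phase) tuple per joint.
N_JOINTS = 4

def random_genome():
    return [(random.uniform(0, 45),      # amplitude (degrees)
             random.uniform(0.1, 2.0),   # frequency (Hz)
             random.uniform(0, 6.28))    # phase offset (rad)
            for _ in range(N_JOINTS)]

def fitness(genome):
    # Placeholder stand-in for the physics simulator: rewards amplitudes
    # near 30 degrees. A real run would measure the distance traveled.
    return -sum(abs(a - 30) for a, _, _ in genome)

def crossover(a, b):
    cut = random.randrange(1, N_JOINTS)
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.1):
    return [(a + random.gauss(0, 2), f, p) if random.random() < rate else (a, f, p)
            for a, f, p in genome]

def evolve(generations=50, pop_size=20):
    pop = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:pop_size // 2]       # keep the best half
        pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=fitness)

best = evolve()
```

Because the algorithm runs offline, an expensive simulator call per fitness evaluation is acceptable; only the winning genome is downloaded to the hardware.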
Development of a simulator for the previous microrobotic
systems
Due to the limitations of the fabrication process and its high cost, a simulation environment will be created with several purposes: to develop the control architecture without damaging the modules, and to develop new prototypes and test them before fabricating them.
The physical simulator will include an electronic simulator that emulates the microcontroller program running on the modules, including physical signals (synchronization signal), I2C communications, etc. To maintain the independence of each module, each control program will run in a different thread.
This design facilitates the transfer of the code from the simulator to real modules.
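The one-thread-per-module scheme can be sketched as follows, with a `threading.Event` standing in for the hardware synchronization signal and a `Queue` for the I2C bus. Class and signal names are illustrative, not the simulator's actual API.

```python
import threading
import queue

class SimModule(threading.Thread):
    """One simulated module: its control program runs in its own thread."""

    def __init__(self, addr, sync, bus, cycles=3):
        super().__init__()
        self.addr, self.sync, self.bus, self.cycles = addr, sync, bus, cycles
        self.log = []

    def run(self):
        for step in range(self.cycles):
            self.sync.wait()                 # block until the sync line is raised
            self.log.append(step)            # "execute" one control step
            self.bus.put((self.addr, step))  # report state on the shared bus

sync = threading.Event()                     # emulated synchronization signal
bus = queue.Queue()                          # emulated I2C bus
modules = [SimModule(addr, sync, bus) for addr in range(3)]
for m in modules:
    m.start()
sync.set()                                   # raise the sync line: all modules run
for m in modules:
    m.join()
```

Keeping each control program in its own thread mirrors the independence of the real microcontrollers, which is what makes the later transfer of code to hardware straightforward.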
Development of systems for position measurement and traveled distance measurement
A system will be developed and integrated in the robot that allows it to know its position in open spaces and the distance traveled inside pipes.
1.4 Overview of the thesis
Chapter 2 and Chapter 3 will give an overview of the state of the art in Modular, Pipe Inspection and Micro Robotic Systems, and in Control Architectures for Modular Microrobots.
Chapter 4 will present the modules developed and their different versions, how they have evolved and the problems that have appeared during their construction.
The simulation environment that has been created will be described in Chapter 5. It will explain the physical dynamics engine, the control and electronics simulation and the programming structure.
Chapter 6 will be dedicated to a positioning system that allows the robot to know its position in open space, based on the emission of coded images and their reception via photodiodes.
The control architecture will be explained in Chapter 7: the behavior-based architecture, the communication system and the Module Description Language (MDL), the layers with the high- and low-level controls, and the offline genetic algorithm for optimization.
Chapter 8 will show the tests that have been performed and their results, with real modules and in the simulator.
Finally, Chapter 9 will present the conclusions, some remarks about the main contributions of the thesis, related publications and future work.
Chapter 2
Review on Modular, Pipe
Inspection and Micro Robotic
Systems
Everything should be made as simple as possible, but not one bit simpler
Albert Einstein
The key word in modular robot is module. But what is a module? In this thesis the following definition will be used (1): A module is a piece or a set of pieces that are repeated in a construction of any kind, to make it easier, regular and economic. Thus, a robotic module would be: A module that performs totally or partially typical tasks of a robot, and that has the possibility to interact with other modules. Finally, a modular robot is a robot composed of modules, i.e., a robot composed of parts that have independent functionalities but that are able to interact with each other in one way or another, giving as a result an entity with new capabilities.
What are the advantages of using modular robots? Some of the main advantages are:
Provide the system with configurability: multiconfigurability, reconfigurability and autoconfigurability.
Increase fault tolerance: a module can fail without compromising the whole system.
Make the system scalable: new modules can be added without reconfiguring the whole system.
Reduce the cost of large-scale production, because only one or a few module types have to be mass-produced and no assembly is needed between parts.
(1) From the Real Academia Española (RAE)
Figure 2.1: Tetrobot: a parallel Stewart platform.
It is possible to classify modular robots according to their configurability capabilities as reconfigurable (multiconfigurable), autoconfigurable, metamorphic or self-replicating. Multiconfigurability or reconfigurability refers to the property of a system that can be configured in different ways, no matter how. Autoconfigurable robots are able to change their configuration by their own means, while in multiconfigurable robots the reconfiguration has to be done externally (i.e. by the operator).
Metamorphic robots are those composed of one repeated module that are able to change their shape. Most reconfigurable robots are also metamorphic. Self-replicating robots are able to make a copy of themselves (provided they have the necessary modules) by their own means.
The state of the art for the type of robot described in the first part of this thesis includes several fields: modular robots (lattice and chain) regarding design and concept, microrobots regarding size, and pipe inspection robots regarding purpose. In the next sections the state of the art in these fields will be shown, with special emphasis on the features related to this thesis.
2.1 The origins
In this section some of the first prototypes that have inspired the development of modular robots are mentioned, as a reference to understand the evolution of this kind of robot.
Figure 2.2: Real picture of CEBOT
TETROBOT [Hamlin and Sanderson, 1996], from the Rensselaer Polytechnic Institute, is a modular system for the design, implementation and control of a class of highly redundant parallel robotic mechanisms, developed in 1996 (figure 2.1). It is an actuated robotic structure which may be reassembled into many different configurations while still being controlled by the same hardware and software architecture. Some implementations that can be obtained are a double octahedral platform, a tetrahedral arm and a six-legged walker.
Main researchers: G.J. Hamlin and A.C. Sanderson
Web: http://www.rpi.edu/dept/cie/faculty_sanderson.html
CEBOT (Cellular Robotic System) [Fukuda and Kawauchi, 1990], from Nagoya University, is a dynamically configurable robot that has the capability of self-organization, self-evolution and functional amplification (the ability of a system to coordinate together to accomplish tasks that cannot be performed by the individual units themselves).
The CEBOT (figure 2.2) consists of many robotic units with a simple function, named cells. The CEBOT can reconfigure the whole system depending on the given tasks and environments and organize collective or swarm intelligence. The concept of the CEBOT is based on biological organization constructed from enormous numbers of natural cells. This research project includes mutual communication between cells, the optimum dynamic knowledge allocation among cells, the reconfiguration strategy of the system and artificial life, such as the cooperative behavior modeling of ants. This invokes many interesting research problems, such as dynamic decentralized planning, dynamic distribution and coordinated control systems, as well as hardware systems. Experiments in automated reconfiguration were carried out, but the robot did not self-reconfigure, because a manipulator arm was required for this.
Main researcher: T. Fukuda.
Web: http://www.mein.nagoya-u.ac.jp/staff/fukuda-e.html
Fracta was created at the Murata Laboratory. The Murata Lab has been one of the first to research modular reconfigurable robots. There, from 1998 onwards, it has developed
(a) 2D (b) 3D Universal Structure
Figure 2.3: Fracta robot
the 2D and 3D versions of Fracta [Murata et al., 1998] (fig. 2.3). In the 3D design, it has three symmetric axes with twelve degrees of freedom. A unit is composed of a 265 mm cube weighing 7 kg with connecting arms attached to each face. Self-reconfiguration is performed by means of rotating the arms and an automatic connection mechanism. Each unit has an on-board microprocessor and communication system. The drawback of this approach is that each module is quite big and heavy. The connection mechanism uses six sensors and encoders, further increasing system complexity. However, this is one of the few systems that can achieve 3D self-reconfiguration. This system perfectly illustrates the problems with a homogeneous design: the modules become big and cumbersome.
The 2D design [Tomita et al., 1999] has six arms: three electromagnet male arms and three permanent magnet female arms. Connection is based on simple magnetics: a male arm connects to a neighboring female arm of matching permanent-magnet polarity, while reversing the polarity of the electromagnets causes disconnection. A unit has three ball wheels under its body, its own processor and optical communication.
Main researcher: S. Murata
Web: http://www.mrt.dis.titech.ac.jp/english.htm
The Metamorphic robot [Chirikjian, 1994] was created at the Robot and Protein
Kinematics Lab, Johns Hopkins University.
The Metamorphic robot (figure 2.4), developed in 1994, is a collection of mechatronic modules, each of which has the ability to connect, disconnect, and climb over adjacent modules. It is used to examine the near-optimal reconfiguration of a metamorphic robot from an arbitrary initial configuration to a desired final configuration. Concepts of distance between metamorphic robot configurations are defined, and shown to satisfy the formal properties of a metric. These metrics, called configuration metrics, are then applied to the automatic self-reconfiguration of metamorphic systems in the case when one module is allowed to move at a time. There is no simple method for computing the optimal sequence of moves required to reconfigure. As a result, heuristics which can give a near-
Figure 2.4: Metamorphic robot
optimal solution must be used. The technique of simulated annealing is used to drive the reconfiguration process, with configuration metrics as cost functions.
Main researcher: G. Chirikjian
Web: http://caesar.me.jhu.edu/research/metamorphic_robot.html
2.2 Modular robots
In this section the most important modular robot designs are described. Most of them are chain, lattice or hybrid modular robots. Lattice architectures have units that are arranged and connected in some regular, space-filling three-dimensional pattern, such as a cubical or hexagonal grid. Control and motion are executed in parallel. Lattice architectures usually offer a simpler computational representation that can be more easily scaled to complex systems.
Chain/tree architectures have units that are connected together in a string or tree topology. This chain or tree can fold up to become space-filling, but the underlying architecture is serial. Chain architectures can reach any point in space, and are therefore more versatile, but more computationally difficult to represent and analyze.
2.2.1 PolyBot and PolyPod
PARC - Palo Alto Research Center.
Systems and Practices Laboratory. Modular Robotics Lab.
Main Researcher: Mark Yim.
http://www2.parc.com/spl/projects/modrobots/
Polypod is a bi-unit modular robot developed in 1994. This means that the robot is built up of exactly two types of modules that are repeated many times. This repetition makes manufacturing easier and cheaper. Dynamic reconfigurability [Yim, 1994] allows the robot to be highly versatile, reconfiguring itself to whatever shape best suits the current
(a) Main modules (b) Different configurations
Figure 2.5: Polypod
task. To study this versatility, locomotion was chosen as the class of tasks for examination.
Polypod (fig. 2.5) is made up of two types of modules, called Segments and Nodes. Segments are two-degree-of-freedom parallel mechanisms composed of 10 links. The kinematics of the resulting mechanism is similar to two prismatic joints joined together by a revolute joint, where the prismatic joints are constrained to have the same length. The structure is essentially two four-bar linkages attached by two other links, with an added sliding bar constraining four joints in the two four-bars to remain collinear. The two degrees of freedom are not exactly a prismatic degree of freedom and a revolute degree of freedom, but it is easy to intuitively think of them that way. The revolute degree of freedom has a range of motion of +45 to -45 degrees, and the prismatic degree of freedom can change the length of the module from about 1 inch to 2.5 inches tall.
Each segment module contains all the components to be a stand-alone robot in itself (except for power): a processor (Motorola XC68HC11E2), two DC motors, IR proximity sensing, crude force/torque sensing, joint angle position sensing (potentiometers) and inter-module communication: SPI plus local IR communication between adjacent modules.
Power is supplied by the second type of module, called Nodes. Nodes are rigid cube-shaped modules roughly 5 cm x 5 cm x 5 cm with 6 connection ports, whose main purpose is to hold gel-cell batteries and to allow for non-serial chain robots.
PolyBot [Yim et al., 2000] [Yim et al., 2001] [Yim et al., 2007] is the evolution of PolyPod, from 1997 (fig. 2.6). It is made up of many repeated modules. Each module is virtually a robot itself, having a computer, a motor, sensors and the ability to attach to other modules. In some cases, power is supplied off-board and passed from module to module. These modules attach together to form chains, which can be used like an arm, a leg or a finger depending on the task at hand.
PolyBot has gone through many variations, with three basic generations. The evolution of the main module can be seen in fig. 2.7.
The first generation of PolyBot shares the basic ideas of all the generations: repeated modules about 5 cm on a side. The modules are built up from simple hobby RC servos; power and computation are supplied off-board. The modules are manually screwed together, so they do not self-reconfigure.
Later versions integrated more robust servos, connection plates, a power supply
Figure 2.6: Different configurations of PolyBot
Figure 2.7: Different versions of PolyBot main modules
(NiMH batteries) and electronics (on-board control with a PIC16F877). The modules may run either fully autonomously or under supervisory control from a PC sending commands through a wired link or a wireless radio link.
Generation II of PolyBot includes on-board computing (PowerPC 555) as well as the ability to reconfigure automatically via shape memory alloy actuated latches. Docking of the chains is aided by infrared emitters and detectors.
The last version, v.III, is 5 cm x 5 cm x 5 cm and weighs 70 grams. It has a DC motor with Hall effect sensor, a potentiometer for angle measurement, 4 accelerometers, contact sensors and 4 IR LEDs for inter-module communications. The main processing unit is still a PowerPC, and the communications amongst the modules are done via a CAN bus.
2.2.2 M-TRAN
Intelligent Systems Institute.
National Institute of Advanced Industrial Science and Technology (AIST).
Main researchers: H. Kurokawa and S. Murata
http://unit.aist.go.jp/is/frrg/dsysd/mtran3/index.htm
Figure 2.8: Overview of M-TRAN
M-TRAN (Modular TRANsformer) [Murata et al., 2002] [Kurokawa et al., 2003] [Murata and Kurokawa, 2007] [Yoshida et al., 2003] is a self-reconfigurable modular robot that has been developed by AIST and Tokyo-Tech since 1998. A number of M-TRAN modules can form:
a 3-D structure which changes its own configuration
a 3-D structure which generates smaller robots
a multi-DOF robot which flexibly locomotes
a robot which metamorphoses
The M-TRAN system can change its 3-D structure and its motion in order to adapt itself to the environment. In a small-sized configuration, it walks in the form of a legged robot, then metamorphoses into a snake-like robot to enter narrow spaces (fig. 2.8). A large structure can gradually change its configuration to make a flow-like motion, climb a step by transporting modules one by one, and produce a tower structure to look down. It can also generate multiple walkers. Possible applications of M-TRAN are autonomous exploration in unknown environments, such as planetary exploration, or search and rescue operations in disaster areas.
The design of M-TRAN has the advantages of two types of modular robots, the lattice type and the chain (linear) type. This hybrid design, the unique 3-D shape of the block parts, and the parallel joint axes are all keys to realizing a flexible self-reconfigurable robotic system. An M-TRAN module is composed of two blocks (1/2 cubic and 1/2 cylindrical) and a link (fig. 2.9). Each of the three flat surfaces of each block can mechanically connect and couple with a surface of another module. All the connection surfaces have a gender, and an active (male) surface can couple with a passive (female) surface in four possible relative orientations. The connection is controlled by the module itself.
As M-TRAN I and II used permanent magnets and SMA actuators for their connection mechanism, controlling module connection was time- and energy-consuming. In order to achieve a faster and more power-effective connection, a mechanical connector was designed for M-TRAN III.
(a) Description (b) Evolution
Figure 2.9: M-TRAN main module
Figure 2.10: Different configurations of M-TRAN
Each M-TRAN module has four microcomputers, one master and three slaves. All the master computers of the connected modules are linked by a CAN bus, over which they communicate, synchronize their motions, and cooperate. Previous versions used asynchronous serial communications for local communications and LonWorks for module communications. Version III also incorporates Bluetooth, proximity sensors and an inclinometer.
M-TRAN also has the following features:
Dimensions: 65 mm x 65 mm x 130 mm; weight: 420 g
Wireless operation on battery power: lithium polymer battery.
Automatic generation of locomotion patterns: coordinated motions are generated for various multi-module structures (four-legged, six-legged and snake-like) by a program using a GA. These are downloaded to the hardware and verified by experiments.
Figure 2.11: Main module of CONRO
Distributed control: the robot motion is controlled by all the modules' CPUs.
Motion generation: locomotion for several robot structures is automatically generated on the host PC using CPG techniques and a GA. Such locomotion patterns are then played back by the hardware.
M-TRAN can achieve quadruped walker, H-shape, snake and caterpillar configurations, amongst others (fig. 2.10).
M-TRAN's concept is very similar to the one presented in this thesis, with the exception that M-TRAN is homogeneous and Microtub is heterogeneous.
2.2.3 CONRO
Polymorphic Robotics Laboratory. Information Science Institute.
University of Southern California.
Main researcher: P. Will.
http://www.isi.edu/robots/conro/
The CONRO self-reconfigurable robot [Shen et al., 2000] [Shen et al., 2002] [Salemi et al., 2004] is made of a set of connectable modules (fig. 2.11). Each module is an autonomous unit that contains two batteries, one STAMP II micro-controller, two motors, four pairs of IR transmitters/receivers and four docking connectors to allow connections with other modules.
Modules can be connected together by their docking connectors, located at either end of each module. Male connectors consist of two pins. Female connectors have an SMA-triggered locking/releasing mechanism. Each module has two degrees of freedom: DOF1 for pitch (up and down) and DOF2 for yaw (left and right). With these two DOFs, a single module can wiggle its body but cannot change its location. However, when two or more modules connect to form a structure, they can accomplish many different types of locomotion. For example, a body with six legs can perform hexapod gaits, while a chain of modules can mimic a snake or a caterpillar motion (fig. 2.12). To make an n-module caterpillar move forward, each module's DOF1 goes through a series of positions, and
Figure 2.12: Different configurations of CONRO
the synchronized global effect of these local motions is a forward movement of the whole caterpillar.
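A gait of this kind can be sketched as a traveling wave of joint set-points, with each module's pitch joint (DOF1) phase-shifted according to its index in the chain. Parameter names and values are illustrative, not CONRO's actual interface.

```python
import math

def caterpillar_angles(n_modules, t, amplitude=30.0, period=2.0):
    """Pitch angle (degrees) for each module at time t (seconds).

    Each module follows the same oscillation, shifted in phase by its
    position, so one full wave travels along the body per period.
    """
    phase_step = 2 * math.pi / n_modules
    return [amplitude * math.sin(2 * math.pi * t / period - i * phase_step)
            for i in range(n_modules)]

# Module i at time t equals module 0 at time t - i * period / n_modules:
# the same local motion, replayed with a delay, which is exactly the
# "series of positions" each module steps through.
angles = caterpillar_angles(6, t=0.5)
```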
CONRO modules communicate with one another using IR transmitters and receivers. When a module is connected to another module via a connector, the two pairs of IR transmitters/receivers at the docked connectors are aligned to form a bi-directional communication link. Since each module has four connectors, each module can have up to four communication links. The IR transmitters/receivers can also be used as docking proximity sensors for guiding two modules to dock with each other during a reconfiguration action. A self-reconfigurable robot can be viewed as a network of autonomous systems with communication links between modules. The topology of this network is dynamic, because a robot may choose to reconfigure itself at any time.
To increase the flexibility of controlling self-reconfigurable robots, a distributed control mechanism based on the biological concept of hormones has been designed and implemented. Similar to a content-based message, a hormone is a signal that triggers different actions at different subsystems, and yet leaves the execution and coordination of these actions to the local subsystems. For example, when a human experiences sudden fear, a hormone released by the brain causes different actions, e.g., the mouth opens and the legs jump. Using this property, a distributed control mechanism has been designed that reduces the communication cost for locomotion control, yet maintains global synchronization and execution monitoring.
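The hormone idea can be sketched as a single content-based message that each module interprets with a purely local rule and forwards unchanged. The module types and reactions below are invented for illustration; they are not CONRO's actual hormone protocol.

```python
# Local reaction rules: each module type decides for itself what a
# hormone means. These rules and type names are hypothetical.
REACTIONS = {
    "pitch": lambda hormone: f"bend {hormone['intensity']} deg",
    "yaw":   lambda hormone: f"turn {hormone['intensity']} deg",
}

def propagate(chain, hormone):
    """Send one hormone down a chain of module types; collect local actions."""
    actions = []
    for module_type in chain:
        react = REACTIONS.get(module_type)
        if react:                 # each module decides locally, no central control
            actions.append(react(hormone))
        # the message is then forwarded unchanged to the next module
    return actions

acts = propagate(["pitch", "yaw", "pitch"], {"name": "move", "intensity": 15})
```

One message thus produces different local actions at different subsystems, which is what keeps the communication cost low.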
In parallel with the hardware implementation of the CONRO robot, there is a Newtonian-mechanics-based simulator, Working Model 3D, used to develop the hormone-based control theory, with the objective that the theory and its related algorithms will eventually be migrated to the real robots. This control theory is explained in chapter 3.
2.2.4 Molecube
Computational Synthesis Laboratory (CCSL)
Cornell University.
Main researchers: V. Zykov and H. Lipson
http://ccsl.mae.cornell.edu/self_replication
http://www.molecubes.org/
Figure 2.13: Example of reconfiguration in Molecube
Molecubes [Zykov et al., 2005] are made up of a series of modular cubes, each containing identical machinery and the complete computer program for replication. The cubes have electromagnets on their faces that allow them to selectively attach to and detach from one another, and a complete robot consists of several cubes linked together. Each cube is divided in half along a diagonal, which allows a robot composed of many cubes to bend, reconfigure and manipulate other cubes. For example, a tower of cubes can bend itself over at a right angle to pick up another cube (fig. 2.13).
Each module of the self-replicating robot is a cube about 10 cm on a side, able to swivel along a diagonal. To begin replication, the stack of cubes bends over and sets its top cube on the table. Then it bends to one side or the other to pick up a new cube and deposit it on top of the first. By repeating the process, one robot made up of a stack of cubes can create another just like itself. Since one robot cannot reach across another robot of the same height, the robot being built assists in completing its own construction.
A physical system is self-reproducing if it can construct a detached, functional copy of itself. Self-reproduction differs from self-assembly, in which the resulting system is not able to make, catalyze or in some other way induce more copies of itself.
In its second version, the Molecubes design has been miniaturized, simplified, and ruggedized [Zykov et al., 2007] (fig. 2.14). Each module has the shape of a cube with rounded corners and comprises approximately two triangular pyramidal halves connected at their bases
Figure 2.14: Molecubes new design (2007)
so that their main axes are coincident. These cube halves are rotated by the robot motor about a common axis relative to each other. Each of the six faces of a robot module is equipped with an electromechanical connector that can be used to join two modules together. The symmetric connector design allows 4 possible relative orientations of two connected module interfaces, each resulting in different robot kinematics. Each of the two halves of every robotic module is equipped with one Atmel Mega16 microprocessor. Both microprocessors are connected through an RS232 bus, to which all other joined actuator, controller, and other add-on robotic modules are connected.
Every cube in the automata also has an associated software controller, which determines the next state of the cube magnets, which halves of a cube should swivel, and whether the cube should overwrite the controllers of its neighbors.
Every iteration, a molecube controller receives four binary bits of input, indicating which of the four neighboring cells are filled by a cube (von Neumann neighborhood), and produces binary output to control the cube. The von Neumann neighborhood was chosen because real molecubes are much simpler to build with inputs on cube faces. For clarity, the cube controller is broken up into three logical sections: magnet controller, swivel controller, and overwrite controller. The magnet controller outputs four bits indicating the new on/off state of each magnet A, B, C, and D. A cube can be flipped and moved in the automata, so the magnets are labeled with letters to emphasize that a particular magnet is not always pointing in a particular direction. The swivel controller outputs four bits indicating whether each of the cube halves should swivel. Because there are two possible swivel cuts through the cube, only two of these bits are used in a particular cube. The other two bits are used when the controller is copied to a cube with a different direction swivel cut, allowing the rules to specify independent behavior for the two cube
Figure 2.15: Crystalline robot
types. Finally, the overwrite controller outputs four bits indicating which neighbors A, B,
C, or D to overwrite.
The function of the cube controller is essentially to map binary strings to other binary strings. For simplicity, a binary tree that can emit symbols on its branches is used. A controller decision tree consists of a tree of nodes containing values indicating which input bit to use when deciding the next branch. Between nodes, <output value, output bit position> pairs may be emitted, and when a leaf node is reached all output values must have been specified. This particular implementation is useful because it is able to represent any symbolic input-output mapping, is easy to use when generating randomized controllers, and lends itself well to testing, because it can be readily understood as a nested series of if-statements. Also, in the future, controllers could easily be combined with one another by merging trees together at random nodes.
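Such a decision tree can be sketched as follows. Only the mechanism follows the description above (branch on an input bit, emit <output value, output bit position> pairs along the way, all outputs fixed by the time a leaf is reached); the tree contents themselves are invented.

```python
# A hypothetical controller tree: "test" names the input bit to branch on,
# "emit" carries (output bit position, value) pairs attached to a branch.
TREE = {
    "test": 0,
    "0": {"emit": [(0, 1)], "leaf": True},
    "1": {"test": 2,
          "0": {"emit": [(0, 0), (1, 1)], "leaf": True},
          "1": {"emit": [(0, 1), (1, 1)], "leaf": True}},
}

def run_controller(tree, inputs, n_out=2):
    """Map a list of input bits to a list of output bits via the tree."""
    out = [0] * n_out
    node = tree
    while True:
        for pos, val in node.get("emit", []):   # emit pairs seen on the way down
            out[pos] = val
        if node.get("leaf"):                    # leaf reached: outputs complete
            return out
        node = node[str(inputs[node["test"]])]  # branch on the named input bit
```

Read top-down, the traversal is exactly the nested series of if-statements the text mentions, which is what makes randomized trees easy to generate and inspect.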
2.2.5 Crystalline and Molecule robots
Rus Robotic Lab.
Dartmouth College and MIT.
Main researchers: Daniela Rus, Zack Butler and Keith Kotay.
http://groups.csail.mit.edu/drl/modular_robots/crystal/crystal.html
http://groups.csail.mit.edu/drl/modular_robots/molecule/molecule.html
The Crystalline Robot [Rus and Vona, 2000] (2000) is one of the first two-dimensional lattice self-reconfigurable modular robot systems. It is composed of Atoms, square modules that actuate by expanding and contracting by a factor of two in each dimension.
The Crystalline Atom (see figure 2.15) has a square (cubic in 3D) shape with connectors to other modules in the middle of each face. It is actuated by three binary actuators,
one to permit the side length of the square to shrink and expand, and two to make or break connections to other Atoms. This actuation scheme allows an individual module to relocate to arbitrary positions on the surface of a structure of modules in constant time.
The Atom uses complementary rack-and-pinion mechanisms to implement the contraction and expansion actuation, a mechanism similar to the one used in the support module v1.1.
Each Atom contains an on-board processor (Atmel AT89C2051 microcontroller), a power supply (five 2/3 A lithium batteries), and support circuitry, which allows both fully untethered and tethered operation. Atoms are connected by a wired serial link to a host computer to download programs. For untethered operation, an experiment-specific operating program, specified as a state sequence, is first downloaded over a tether. When the tether is removed, an on-board IR receiver is used to detect synchronization beacons from the host.
Crystalline robot systems are dynamic structures: they can move using sequences of reconfigurations to implement locomotion gaits, and they can undergo shape metamorphosis. The dynamic nature of these systems is supported by the ability of individual modules to move globally relative to the structure. The basic operations in a Crystalline robot system are:
(expand <atom>, <dimension>) - expand a compressed Atom in the desired dimension (x, y, or z)
(contract <atom>, <dimension>) - compress an expanded Atom in the desired dimension
(bond <atom>, <dimension>) - activate one of the Atom's connectors to bond with a neighboring Atom in the structure
(free <atom>, <dimension>) - deactivate one of the Atom's connectors to break a bond with a neighboring Atom in the structure
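The four primitives above can be sketched as methods on a toy Atom object that only tracks expansion and connector state per dimension. The state model is an illustrative assumption and ignores the geometry of the lattice.

```python
class Atom:
    """Toy Crystalline Atom: per-dimension expansion and bond state only."""

    def __init__(self):
        self.expanded = {"x": True, "y": True}   # starts fully expanded
        self.bonded = {"x": False, "y": False}   # no neighbors attached yet

    def expand(self, dim):
        self.expanded[dim] = True                # restore full side length

    def contract(self, dim):
        self.expanded[dim] = False               # halve the side length

    def bond(self, dim):
        self.bonded[dim] = True                  # activate connector to a neighbor

    def free(self, dim):
        self.bonded[dim] = False                 # break the bond in that dimension

# A relocation step is then a short script over these primitives, e.g.:
a = Atom()
a.free("x")        # detach from the neighbor behind
a.contract("x")    # shrink to slide past it
a.expand("x")      # re-expand in the new position
a.bond("x")        # reattach to the structure
```

Sequencing such scripts over many Atoms is what yields the locomotion gaits and shape metamorphosis described in the text.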
The Molecule Robot [Kotay et al., 1998] [Kotay and Rus, 2005] is a self-reconfiguring robot consisting of a set of identical modules that can dynamically and autonomously reconfigure into a variety of shapes, to best fit the terrain, environment, and task. Self-reconfiguration leads to versatile robots that can support multiple modalities of locomotion and manipulation. For example, a self-reconfiguring robot can aggregate as a snake to traverse a tunnel and then reconfigure as a six-legged robot to traverse rough terrain, such as a lunar surface, and change shape again to climb stairs and enter a building.
A Molecule robot consists of two atoms linked by a rigid connection called a bond. Each atom has five inter-Molecule connection points and two degrees of freedom. One degree of freedom allows the atom to rotate 180 degrees relative to its bond connection, and the other allows the atom (and thus the entire Molecule) to rotate 180 degrees relative to one of the inter-Molecule connectors at a right angle to the bond connection.
The Molecule is controlled by two types of software: low-level assembly code in the
onboard processor(s), and high-level code on a workstation.
Molecule control is focused on locomotion gait development. As an example, self-reconfiguring robots can climb stairs even in the absence of models of the height, width,
Figure 2.16: Molecule robot
and length of the stairway. The robot is given the command to move forward. It proceeds
with a translation motion until the front sensors mounted on the forward modules detect
an obstacle (e.g., the first step). At this point the robot changes the locomotion modality
from translation to stacking. When the top modules detect free space again (that is, after
the first step has been cleared), the robot changes locomotion modality again to unstacking
and then translation. These capabilities lead to on-line algorithms for navigation that take
advantage of self-reconfiguring capabilities to create a water-flow-like locomotion.
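The modality-switching logic described above can be sketched as a simple transition function. The modality names and sensor flags are hypothetical labels matching the description:

```python
# Hypothetical sketch of the on-line modality switching described above:
# translate until the front sensors detect an obstacle, stack until the
# top modules detect free space, then unstack and resume translation.
def next_modality(current, front_obstacle, top_free):
    if current == "translation" and front_obstacle:
        return "stacking"
    if current == "stacking" and top_free:
        return "unstacking"
    if current == "unstacking":
        return "translation"
    return current  # otherwise keep the current modality
```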
A generic algorithm [Butler et al., 2004] is used in the Molecule system to implement
the water-flow locomotion gait using the self-reconfiguration property. In this gait a group
of modules tumble on top of each other to make forward progress. The efficiency of
this gait is analyzed in terms of the number of actuations, and this result helps to develop
a rolling gait which is dynamically but not statically stable. Using self-reconfiguration, a
group of modules can actively change their center of mass to generate forward motion.
The dynamically stable algorithm is shown to be more efficient, and the tumbling
locomotion gait was demonstrated on a four-module Molecule robot.
2.2.6 Telecube and Proteo (Digital Clay)
PARC - Palo Alto Research Center.
Systems and Practices Laboratory. Modular Robotics Lab.
Main Researcher: Mark Yim.
http://www2.parc.com/spl/projects/modrobots/
Telecube modules [Suh et al., 2002] are cube-shaped modules with faces that can
(a) Module (b) Example of module reconfiguration in a net
Figure 2.17: Telecube
extend out, doubling the length in any dimension. Each face telescopes out, hence the
name. Each face also has a latching mechanism to attach to or detach from any face
of a neighboring module. Shape memory alloy and permanent switching magnet
technologies have been tried in various versions of this system. This work builds
on the Crystalline robot by Marty Vona and Daniela Rus, started at Dartmouth. Their
initial Crystalline modules are 2D squares with one degree of freedom (all faces expand
at the same time). The Telecube modules are 3D, with every face able to extend or
contract independently. A module reconfigures from one site on a virtual grid by detaching
from all modules except one. Then, by extending (or contracting) the faces that remain
attached, the module moves to the neighboring site.
The target size for the module is a cube that is 5 centimeters on a side. Packing
actuators, electronics and structure into that small size to get the needed functionality is
one of the more difficult parts of developing the module. There are two main mechanical
functions:
1. Latch/unlatch from neighboring faces and
2. Telescope the faces (expand/collapse)
Each of the faces, called a connection plate, has a remotely controllable means to
reversibly clamp onto and to transmit power and data to the neighboring module. The
devices which produce the linear extension/contraction and module to module clamps are
called the telescoping-tube linear actuator and the switching permanent magnet devices,
respectively.
Each module is also given simple sensing and communication abilities. Modules can
send messages through their faceplates to their immediate neighbors using a low bandwidth
IR link. Each module can also gauge the extension of each faceplate, read the contact
sensor on each of the faces, and determine whether it is latched to a neighboring module.
Locomotion control is similar to the Crystalline robot. It has the same low-level
primitives (extend arm, connect, etc.), over which more complicated actions have been
built, like Move(direction).
To achieve completeness of reconfiguration, meta-modules (groups of 8 individual
Telecubes) are created. The cubes are arranged in a tight cube with their arms fully retracted.
Figure 2.18: Digital Clay Modules
Three locomotion primitives are defined: Move, Roll and S-Roll. Move is the explicit
sequence of actions that allows a module to move along a given direction. For example,
Move(EAST) causes a meta-module at (x, y, z) to move to position (x + 1, y, z).
Roll allows one meta-module to roll around a corner of another meta-module. For
example, Roll(EAST, SOUTH) causes a meta-module at (x, y, z) to move to position
(x + 1, y - 1, z). An S-Roll is similar to Roll but traces an S shape.
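The coordinate effect of these primitives can be sketched as follows. The direction vectors (e.g. EAST = +x, SOUTH = -y) are assumptions chosen to be consistent with the examples in the text:

```python
# Sketch of the coordinate effect of the Move and Roll locomotion
# primitives on a meta-module position in the virtual grid.
DIRECTIONS = {"EAST": (1, 0, 0), "WEST": (-1, 0, 0),
              "NORTH": (0, 1, 0), "SOUTH": (0, -1, 0),
              "UP": (0, 0, 1), "DOWN": (0, 0, -1)}

def move(pos, direction):
    # Move(direction): translate one lattice site along the direction
    dx, dy, dz = DIRECTIONS[direction]
    x, y, z = pos
    return (x + dx, y + dy, z + dz)

def roll(pos, d1, d2):
    # Roll(d1, d2): rolling around a corner combines both displacements
    return move(move(pos, d1), d2)
```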
Proteo (Digital Clay) is the continuation of Telecubes. There are two approaches
to understanding digital clay. The first approach sees it as a stripped-down version of a
modular robot. That is, there is:
1. no active coupling
2. no actuation for producing module-to-module motions.
Changes to an assembly of modules are made by a user. But it embodies one very
important aspect: the modules have some capacity to sense or know their own orientation
in space with respect to other modules. As such, it may be a useful hardware system
for testing software, communications and power distribution for physically modular and
reconfigurable systems.
The other approach sees it as a three-dimensional human-computer interface: a structure
where physical changes made to the structure are represented in a computer model.
As with clay, the user shapes the material into some form or orientation. The orientation
or distribution of the inherently regular structure is sensed and directly represented in the
computer. It is a kind of smart material or structure that a user (or designer) can actually
experience in real 3D space, and yet has a direct representation in a CAD program.
There are two kinds of structures one can imagine. The first is made up of permanently
connected modules: tetrahedral nodes connected by several right-angle links which are
free to rotate. The overall topology is similar to the molecular structure of diamond.
The resulting structure can be freely molded. Angle sensors at each joint could provide
the information necessary for a computer model of the structure. For such a structure to
be useful, a large number of nodes and a very large number of calculations would be
required.
The second type is made up of an assembly of individual modules. Each individual
module must be able to sense which other modules it is connected to as well as its
orientation with respect to the other modules, i.e. which face is connected to which. Thus each
module must have its own identity within the assembly, and each of the 12 faces must
have a unique identity within a module.
Each module is made of a flex circuit with 12 rigid backer boards and a single
component sheet which folds into the center of the module. Each module has an 8 MHz PIC
processor with 6 serial ports communicating in pairs with each of the twelve faces. Each face
has 4 sets of three connection pads: power, ground and communication. Each group of
connection pads is backed by a NdFeB magnet (2 north, 2 south), one of which is attached
to the center of a spiral cut in the Kapton to ensure a good connection.
2.2.7 Chobie
Biomechanics laboratory. Department of Mechanical and Control Engineering.
Graduate School of Science and Engineering.
Tokyo Institute of Technology.
Main researcher: N. Inou
http://www.mech.titech.ac.jp/~inouhp/index.html
CHOBIE (Cooperative Hexahedral Objects for Building with Intelligent Enhancement)
[Inou et al., 2003] [Suzuki et al., 2007] [Suzuki et al., 2006] is a cellular robot designed to
support large external forces, whose cells can cooperatively transform the mechanical
structure they form by reconfiguration. Each cellular robot communicates with adjacent
robots and determines the behavior according to where it is positioned. They form the
structure by successive cooperative movements. CHOBIE has slide motion mechanisms
with some mechanical constraints for large stiffness even in movement.
Figure 2.19 shows the slide motion mechanism. It consists of two lateral boards and a
central board. The central board is sandwiched by the two lateral boards and all the boards
are tightly connected. The two lateral boards include symmetrical motion mechanisms
that consist of two sets of wheels, allocated in vertical and horizontal directions, which
enable the two directional motions of the cellular robots. A single DC motor is embedded
in each lateral board, and jointly drives 4 wheels placed on the same plane through a
drive shaft in the central board.
To endow the robot with autonomy, several devices were integrated into each robot:
sensors, an electronic controller and a battery. The width of the central board is 50 mm.
Photo sensors that communicate with neighboring robots are embedded on the surface of
the frame, and force sensors are attached at the corners of the portions that experience
large strain from external forces. The controller chosen was a PIC16F84.
Locomotion of CHOBIE II is achieved by a succession of structural transformations. In
each transformation process, some robots drive their motors and the others do not; the
former are called "D" (driving robots) and the latter "R" (resting robots).
Each robot communicates with surrounding robots and acquires information about
the state of the structure. But it cannot process complicated data because it does not
have powerful computation capabilities.
Figure 2.19: Slide motion mechanism of Chobie II
In [Suzuki et al., 2006] a scheme is proposed to accomplish the cooperative movements,
focusing on a characteristic position that enables simultaneous driving. This position
is suitable as the starting point of a command and can be pinpointed by local
communication. A robot located at this position becomes a temporary leader, and sends
a drive command to the line that should drive. Using this technique, it is possible to
determine a leader by local communication and to specify the robots on the line to drive
with a simple algorithm, independently of the number of robots. This communication
system is very interesting and similar to the one used in Microtub.
Here, it is important that the leader is temporary and is newly decided after each
transformation, because the robots should form an autonomous distributed system. One
might think that a permanent leader could operate more easily; however, in order to
treat the large amount of information, it would require high intelligence in each robot.
The prime feature of the temporary leader scheme is the fulfillment of transformation
using only local communication based on a simple rule. Of course, superior communication
devices and microcomputers could perform an equivalent task using global information
and more complicated rules, but this would be less flexible with respect to the scale of
the structure. In contrast, the temporary leader scheme is independent of the scale because
it follows a simple local rule. The scheme is also applicable to other systems if they are
composed of autonomous, distributed and synchronous units.
As an example of the performance of robots using the temporary leader scheme, the
leader selection in the crawl motion is described (fig. 2.20 b)):
1. All robots send signals in all directions.
2. If a robot receives a signal from top or bottom, it stops sending signals in the left
and right directions.
3. If a robot has received signals from both the vertical and horizontal directions, it
becomes the temporary leader for the present configuration of the robots.
(a) Crawl motion with ve robots (b) Procedure of determining a Leader
Figure 2.20: Chobie reconguration
(a) Several modules in different configurations (b) ATRON module without electronics and batteries
Figure 2.21: ATRON
In figure 2.20 a) it is possible to see the crawl motion with five robots.
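The temporary-leader selection rule above can be sketched on a 2D grid of module coordinates. The grid representation and the `leaders` function are illustrative assumptions: a module receives a vertical signal if it has a neighbor directly above or below, and, since such modules stop sending horizontally, a horizontal signal can only arrive from a left or right neighbor with no vertical neighbor of its own.

```python
# Sketch of the temporary-leader selection rule described above,
# applied to a set of (x, y) module coordinates.
def leaders(modules):
    modules = set(modules)

    def has_vertical(m):
        # does this module receive a signal from top or bottom?
        x, y = m
        return (x, y + 1) in modules or (x, y - 1) in modules

    result = set()
    for x, y in modules:
        if not has_vertical((x, y)):
            continue  # no vertical signal received: cannot be leader
        # horizontal signals come only from neighbors still sending
        senders = [(x - 1, y), (x + 1, y)]
        if any(n in modules and not has_vertical(n) for n in senders):
            result.add((x, y))
    return result
```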
2.2.8 ATRON
The Maersk Mc-Kinney Moeller Institute for Production Technology.
University of Southern Denmark.
Project Coordinator: H. Hautop LUND
The HYDRA consortium also includes the Mobile Robots Group of the University of
Edinburgh, the AI Lab of the University of Zurich and LEGO Platform Development.
Former member: EVALife, University of Aarhus.
Web-site of Project: http://hydra.mip.sdu.dk
ATRON is a lattice-based self-reconfigurable robot [Jorgensen et al., 2004]. The
ATRON system consists of several fully self-contained robot modules, each having their
own processing power, power supply, sensors and actuators. The ATRON modules are
roughly spheres with equatorial rotation. ATRON has the following characteristics:
Self-assembling robots (i.e. shape-changing robots)
Self-repairing algorithms and cell-biology-inspired control
Power sharing to modules with low energy
Sphere-shaped (maximum diameter 11.4 cm), total weight 825 grams
Connection/disconnection time 2 seconds, 90-degree centre rotation time 3 seconds
Typical operation time per charge 150 minutes
1 degree of freedom, 8 connectors (4 active and 4 passive)
IR inter-modular communication between each connector pair
Wired (through gold-plated slip ring) intra-modular communication (I2C within the
hemispheres and RS-485 between the two hemispheres)
Tilt and proximity sensors
Dual axis accelerometer for orientation awareness
Fully self-contained (batteries, sensors, actuators, processing)
Each module is equipped with four microcontrollers (one pair in each hemisphere,
ATmega 8 and 128)
As a lattice-based system, modules are arranged in a subset of a surface-centered cubic
lattice. In this lattice, modules are placed so that their rotation axis is parallel to the
x, y or z axis, and so that two connected modules have perpendicular rotation axes.
The basic motion primitive for ATRON is a 90 deg rotation around the equator, while
one hemisphere is rigidly attached to one or two other modules and the other hemisphere
is rigidly attached to the main part of the structure. This causes the attached module(s)
to be rotated around the rotation axis of the active module. This design is a compromise
between many mechanical, electronic and control considerations.
Connectors in the ATRON system use a male-female design for mechanical reasons. The
connectors are arranged so that, on each hemisphere, male and female connectors alternate.
Self-reconfiguration is realized by having a module connect to its neighbor, rotate a
multiple of 90 deg, let the rotated module connect to a new neighbor and release the
initial connection.
In order to realize self-reconfiguration with the ATRON system, the module is required
to:
Be able to connect and disconnect with its neighbors.
Have neighbor to neighbor communication.
Figure 2.22: Active Cord Mechanism (ACM): version III (a), R3 (b), R4 (c) and R5 (d)
Be able to sense the state of its connectors.
Perform 360 deg rotation around the equator.
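The self-reconfiguration step described earlier (connect to a neighbor, rotate a multiple of 90 deg, re-attach, release) can be sketched as an ordered action sequence. The action names are hypothetical labels for illustration:

```python
# Sketch of one ATRON self-reconfiguration step as an ordered list of
# primitive actions; rotation is restricted to multiples of 90 degrees.
def reconfiguration_step(quarter_turns):
    assert isinstance(quarter_turns, int)
    return [
        ("connect", "carried_neighbor"),      # grab the module to move
        ("rotate", 90 * quarter_turns),       # rotate around the equator
        ("connect", "new_neighbor"),          # rotated module re-attaches
        ("disconnect", "initial_connection"), # release the initial bond
    ]
```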
Like the M-TRAN module, the ATRON has two parts connected by an actuated
joint. Where the M-TRAN is actuated around two parallel axes, the ATRON is actuated
around the axis perpendicular to the equatorial plane. The ATRON module, shown in
figure 2.21, is built mainly from aluminum with some brass (gearing for the center motor)
and steel (passive connectors and needle bearing in the center). The ATRON combines
some interesting properties from M-TRAN and CONRO.
2.2.9 Active Cord Mechanism (ACM)
Hirose and Yoneda Lab. Dept. of Mechanical and Aerospace Engineering.
Tokyo Institute of Technology.
Main Researchers: S. Hirose and K. Yoneda.
ACM [Hirose, 1993] was the first robot to use the principles of serpentine movement,
the same as that of actual snakes (fig. 2.22 a)). It was created in 1972, and it could
move at a speed of approximately 40 cm/s. The entire length of the device is 2 m,
and it has 20 joints. Each joint consists of servo-mechanisms that can bend to the left
and right. To make contact with the ground, casters were installed along the direction of
the body, adding characteristics that make it easy to slide in the direction of the torso
and difficult to slide in the normal direction. The propulsion motion was produced by
inputting command values which impart sinusoidal bending motions to the head joint
servo-mechanism; that bending signal was then shifted at a fixed speed to the following
joint servo-mechanisms. When this is done, the body as a whole begins to move by sending
a wave to the rear, and as the torso slides over the floor surface on the casters, all of the
torso joints produce a serpentine movement, like the flow of water, which traces the same
loci. This principle of propulsion corresponds to the swimming motion of an eel.
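The propulsion scheme described above, a sinusoidal bending command applied at the head and shifted at a fixed speed along the following joints so that all joints trace the same loci, can be sketched as follows. Amplitude, per-joint phase shift and speed values are illustrative assumptions:

```python
import math

# Sketch of the serpentine joint command: each joint replays the head
# command delayed by a fixed phase per joint, producing a traveling wave.
def joint_angles(t, n_joints=20, amplitude=0.6, phase_per_joint=0.5, speed=2.0):
    return [amplitude * math.sin(speed * t - i * phase_per_joint)
            for i in range(n_joints)]
```

With these parameters, joint i + 1 at time t + phase_per_joint / speed takes exactly the angle joint i had at time t, which is the "same loci" property described in the text.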
Tactile sensors based on limit switches are installed on the sides of all the joints.
It is indispensable to know the tactile conditions between the torso and the environment
in an Active Cord Mechanism, but it is not enough to simply bend the joint that is touched
by an object: it is best that both neighboring joints also bend at half the speed in
the opposite direction at the same time. This control closely corresponds to the kind
of lateral inhibition neural net seen in the nervous system. Using lateral inhibition
controls, the ACM III is capable of smooth movement, autonomously coiling around
arbitrarily shaped objects, and of propulsion along a labyrinth, following its shape, by
combining this lateral inhibition control with angular information shift control.
The version R3 of ACM (fig. 2.22 b)) is composed of 20 modules and has 2 dof (3D
movement). The size is 1755 x 110 x 110 mm and the weight 12.1 kg. The movable angle
is 62.5 deg and the output torque 19.1 Nm, with a moderate speed secured in serpentine
locomotion. The most characteristic part is the large passive wheels, which can be easily
detached and attached, and which cover the whole body. It can perform conventional
serpentine locomotion, lateral rolling, parallel translation, sinus-lifting and pedal wave
propulsion.
ACM-R4 (fig. 2.22 c)) has the following 3 characteristics: active wheels, dust- and
water-proofing, and overload protection.
ACM-R5 (fig. 2.22 d)) can operate both on the ground and in water, undulating its long
body. It is equipped with paddles and passive wheels around the body. To generate
propulsive force by undulation, the robot needs an anisotropic resistance property: it
must glide freely in the tangential direction but not in the normal direction. Thanks to
the paddles and passive wheels, ACM-R5 obtains that property both in water and on
the ground.
The control system of ACM-R5 is an advanced one. Each joint unit has a CPU,
battery and motors, so it can operate independently. Through communication lines, each
unit exchanges signals and automatically recognizes its number from the head and how
many units join the system. Thanks to this system, operators can remove, add and
exchange units freely, and they can operate ACM-R5 flexibly according to the situation.
2.2.10 WormBot
Institute of Neuromorphic Engineering and Institute of Neuroinformatics.
ETH (University of Zurich)
Figure 2.23: WormBot: CPG-driven Autonomous Robot
Main researchers: Rodney Douglas and Jorg Conradt
http://www.ini.ethz.ch/
It is an autonomous mobile robotic worm [Conradt and Varshavskaya, 2003] designed
to explore motion principles based on neural Central Pattern Generator (CPG) circuits
in a truly distributed system. The main aim of the project is to demonstrate elegant
motion on a robot with a large number of degrees of freedom under the control of a simple
distributed neural system, as found in many animals' spinal cords. At this moment, the
robot consists of up to 60 individual segments that all run a local CPG. Sparse adjustable
short- and long-range coupling between these CPGs synchronizes all segments, thus
generating overall motion. A wireless connection between a host computer and the robot
allows changing parameters during operation (e.g. individual coupling coefficients,
traveling speed and motion amplitude). The robot can demonstrate various motion patterns
based on extremely simple neural algorithms.
In the second design (fig. 2.23), each segment is provided with its own re-programmable
Atmel Mega8 microcontroller, several sensors and a communications interface. Each
segment microcontroller runs a local individual CPG, biased by current position and torque
stimuli, and actuates the corresponding motor using PWM signals. The sensors are three
light sensors in orthogonal directions, a temperature sensor and sensors for the segment's
internal states (rotary position, applied motor torque, available voltage of the power supply
battery). A two-wire communication interface connecting all segments allows fast and
flexible information exchange within the robot. Segments communicate all sensor readings
and internal states to all other segments, such that individual short- and long-range
coupling between segments can be adjusted in software. The software coupling allows
flexible adaptation during operation, e.g. for changing gait or direction of motion. The
head segment in the second prototype is also connected to the communication bus, and
exchanges data with a PC over a wireless connection. Thus, users can interface with the
robot at runtime to adjust CPG parameters (e.g. coupling strengths, motion amplitude
and phase-shifts) during otherwise autonomous operation.
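The distributed CPG coupling described above can be sketched with a Kuramoto-style phase-oscillator model. This is an illustrative choice, not the actual WormBot CPG dynamics: each segment's phase is nudged toward its neighbors' phases with an adjustable coupling strength and phase lag, which is what lets a software change of coupling parameters alter the gait.

```python
import math

# One Euler integration step of a chain of phase oscillators, one per
# segment, with nearest-neighbor coupling (Kuramoto-style sketch).
def cpg_step(phases, dt=0.01, freq=1.0, coupling=0.5, phase_lag=0.3):
    new = []
    for i, p in enumerate(phases):
        dp = 2 * math.pi * freq  # intrinsic oscillation
        if i > 0:  # coupling to the previous segment
            dp += coupling * math.sin(phases[i - 1] - p - phase_lag)
        if i < len(phases) - 1:  # coupling to the next segment
            dp += coupling * math.sin(phases[i + 1] - p + phase_lag)
        new.append(p + dp * dt)
    return new
```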
Figure 2.24: Prototype from the University of Canberra
2.2.11 Others
In order to keep record of other robotic designs related to this thesis although not as
relevant as the other ones, in this section they will be briey mentioned.
David Austin, from the Robotic Systems Lab (RSL) of the Australian National Uni-
versity, tried to develop a project investigating one form of self-reconguring robots [Jan-
tapremjit and Austin, 2001] that can assemble themselves and recongure their hardware
to take whatever shape is required for the current task. In g. 2.24 some modules are
shown: joint(a), power(b) and wheel(c) units. Unfortunately, due to the diculty of build-
ing this type of mechanism, they had to abandon the project.
The SuperBot modules [Salemi et al., 2006] (fig. 2.25) are a design based on two
previous systems: CONRO (by the same research group) and M-TRAN. It falls into the
chain/tree architecture. The modules have three degrees of freedom each. Each module
can connect to another module through one of its six dock connectors, and modules can
communicate and share power through these connectors. Several locomotion gaits have
been developed for different arrangements of modules. For high-level communication the
modules use hormone-based control, a distributed, scalable protocol that does not require
the modules to have unique IDs.
MAAM (Molecule = Atom | Atom + Molecule) is a project whose objective
is to define, specify, realize and develop a set of robotic atoms [Brener et al., 2004] able to
assemble themselves into a molecule that will be able to carry out a given task by progressive
reconfiguration. The atom is a mechanical structure with six legs, each of which can
join legs of other atoms. Each leg can perform two rotations and one translation.
It is developed at the Laboratoire de Recherche en Informatique et ses Applications,
Universite de Bretagne-Sud.
http://www-valoria.univ-ubs.fr/Dominique.Duhaut/maam/index.htm
Figure 2.25: Superbot modules
I-Cubes [Unsal and Khosla, 2000] is a class of modular self-reconfigurable bipartite
robotic systems developed in 2000 in the Advanced Mechatronics Laboratory, Carnegie
Mellon University. It is of interest because it is a heterogeneous system composed of
independently controlled mechatronic modules (links) and passive connection elements
(cubes). A link has the ability to connect to and disconnect from the face of a cube.
While attached to a cube on one end, links are also capable of moving themselves and
another cube attached to the other end. All active (link) and passive (cube) modules are
capable of allowing power and information flow to their neighboring modules.
http://www-2.cs.cmu.edu/~unsal/research/ices/cubes/
M. Chen, from the Modular Robotics and Robot Locomotion Group, School of MPE,
NTU (Singapore), has also been researching modular robotics [Chen, 1994], but this
work was focused on robotic arms.
T. Fujii and K. Hosokawa, from the Institute of Industrial Science, Tokyo University,
were also working on the Vertical Modules (fig. 2.26(b)), a kind of reconfigurable
modular robot.
2.3 Microrobots
This section is dedicated to robots that have miniaturization as their main characteristic,
i.e. microrobots, a microrobot being a miniaturized, sophisticated machine designed to
perform a specific task or tasks repeatedly and with precision. Microrobots typically have
dimensions ranging from a fraction of a millimeter up to several millimeters.
(a) Atom of MAAM (b) Vertical Modules
Figure 2.26: MAAM and Vertical Modules
2.3.1 Micro size modular machine using SMAs
Intelligent Systems Institute.
National Institute of Advanced Industrial Science and Technology (AIST).
Main researchers: E. Yoshida, H. Kurokawa and S. Murata
http://unit.aist.go.jp/is/dsysd/index.html
http://www.mrt.dis.titech.ac.jp/english.htm (MURATA)
The microrobot developed at the AIST center is an example of an SMA-based actuator.
It is a miniaturized self-reconfigurable modular robotic system using shape memory alloy
(SMA) [Yoshida et al., 1999]. The system is designed so that various shapes can be actively
formed by a group of identical mechanical units. The unit can make rotational motion by
using an actuator mechanism composed of two SMA torsion coil springs which generate
sufficient motion range and torque for reconfiguration. Applicability of the developed unit
model to a 3D self-reconfigurable system is also under study.
The actuator mechanism uses SMA torsion springs: two SMA torsion springs are
pre-loaded by twisting each of them in opposite directions by 180 degrees (fig. 2.29).
The rotation takes place when one of the springs is heated (usually by electric current).
By using a Ti-Ni-Cu SMA, whose stiffness increases drastically when heated, a large
torque can be generated even at small size. The SMA keeps a relatively high power/weight
ratio even at the micro-scale and is thus more advantageous than conventional
electromagnetic motors, which have limitations in miniaturization since their power/weight
ratio decreases significantly at the micro-scale.
An example of movement can be seen in fig. 2.28.
2.3.2 Denso Corporation
Research Laboratories, Denso Corporation, Nisshin, 470-0111 Japan
Figure 2.27: I-Cubes
It is an in-pipe microrobot ([Shibata et al., 2001], [Nishikawa et al., 1999], [Kawahara
et al., 1999]) which moves at 10 mm/s in a pipe of 15 mm diameter without any power
supply wires (fig. 2.30 a)). The robot consists of a microwave energy supply device,
a locomotive mechanism using a piezoelectric bimorph actuator, a control circuit and a
camera module. The energy supply device consists of rectifying circuits and a compact
receiving antenna. The required energy of 200 mW is supplied via microwaves, without
wires: a 14 GHz microwave is rectified into DC electric energy at a high conversion
efficiency of 52%. The locomotive device, a multi-layered bimorph actuator, consumes only
50 mW. The control circuit consists of a sawtooth generator and a programmable logic
device, and controls the direction of the robot's motion by an external light signal.
The locomotive mechanism consists of 8 layered bimorphs, a center shaft that connects
the centers of the bimorphs, and 4 clamps that connect the edges of each bimorph. When
the actuator is operated at 15 V, it deforms approximately 6 μm between the center shaft
and the clamps.
The principle of motion is shown in figure 2.30 b). The mechanism moves according
to the inertia drive method. Initially, the locomotive device is held in tension against the
pipe wall by the clamps. The locomotive mechanism is driven by a saw-toothed voltage
wave. When the voltage is slowly increased, the actuator slowly deforms. The mass then
moves upward, but the clamps do not move because the limiting frictional force between
the clamps and the pipe wall exceeds the inertial force of the mass. When the voltage is
quickly decreased, the actuator quickly recovers. The clamps then slip upward, but the
mass does not slip because the inertial force of the mass exceeds the limiting frictional
force between the clamps and the pipe wall. The combination of clamp slippage and mass
movement creates an upward motion. In contrast, when the voltage is quickly increased
and slowly decreased, the mechanism moves downward.
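The saw-toothed drive described above (slow rise then quick fall for upward motion, the inverted waveform for downward motion) can be sketched as a waveform generator. The voltage level and period are illustrative assumptions:

```python
# Sketch of the saw-toothed drive voltage: a slow linear ramp that
# resets quickly at the end of each period. Inverting the ramp swaps
# the slow and fast flanks, reversing the direction of motion.
def sawtooth(t, period=1.0, v_max=15.0, upward=True):
    phase = (t % period) / period            # 0..1 within one period
    ramp = phase * v_max                     # slow linear change
    return ramp if upward else v_max - ramp  # inverted for downward motion
```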
Figure 2.28: Basic motion of Micro SMA
Figure 2.29: Structure and real module of Micro SMA
2.3.3 Endoscope microrobots
A broad background in microrobots can be found in endoscope robots. Although their
size is the smallest of all microrobots, their capacities regarding movement and degrees of
freedom are very limited. The average mechanism allows only bending (the forward
movement has to be performed by the operator), as in figure 2.31(a), and the use of a
gripper ([Maeda et al., 1996], [Ikuta et al., 1988]), as in figure 2.31(b).
On the contrary, a few others are capable of self-propulsion, most of them performing
a worm-like movement [Peirs et al., 2001], [Kim et al., 2002], as in figure 2.31(b).
2.3.4 LMS, LAB and LAI microrobots
Laboratoire d'Automatique Industrielle (LAI) - INSA de Lyon
Figure 2.30: Denso microrobot
(a) Active endoscope with SMA coil springs (b) Example of worm-like endoscope microrobot
Figure 2.31: Endoscope microrobots
Laboratoire de Mecanique des Solides (LMS) - Universite de Poitiers
Laboratoire d'Automatique de Besançon (LAB)
The microrobots from LAI, LMS and LAB are the result of the investigations of 3
laboratories involved in the micro-robotics workgroup of the French National Centre for
Scientific Research (CNRS) [Anthierens et al., 2000]. They have been conceived to
address the locomotion problem inside industrial tubes of small diameter.
The LMS (fig. 2.32 b)) polymodular flexible microrobot is able to progress inside
empty man-made pipes presenting bends. The first prototype has a diameter of about
30 mm, but work is being done to improve design and machining techniques in order to
realize a microrobot capable of inspecting pipes of less than 10 mm diameter.
This robot is constituted of five juxtaposed identical modules, called locomotion
modules. The locomotion modules are joined together by passive elastic links (nearly jointed
coil springs), which present a resistive behavior to push or pull solicitations but a
compliance to bending ones.
Figure 2.32: LAI, LMS and LAB microrobots
A locomotion module is obtained by mounting a flexible frame on a rigid skeleton of
smaller dimension, which causes the frame to post-buckle. Thus, this module presents two
states of stable equilibrium: the first one provides the support of the robot inside the tube,
while the second one generates the advance movement of the robot, as it is associated with
the other modules in the sequence of a locomotion cycle. The global movement can be
compared to that of an earthworm.
The LAB inchworm in-pipe microrobot (fig. 2.32 c)) is able to move inside an
unspecified network of pipes of 10 mm diameter. The microrobot must be able to support
its own weight.
In order to actuate the support units, shape memory alloy wires are used. The central
unit is actuated by an SMA spring. Each support unit is constituted by three legs
positioned at 120 degrees around the central unit. Every leg is actuated by one SMA
wire. The leg shape was
chosen in order to amplify the displacement induced by the SMA wire contraction. When
the three SMA wires of a support unit are actuated at the same time, the leg structures
bend, and the contact between the support unit and the pipe side breaks. When the SMA
wire heating is stopped, the structure of the leg unbends back to its original shape and
makes the contact again with the pipe side.
The SMA spring is actuated using the Joule effect; an elongation of about 3 mm is
obtained. Another spring is used to contract the SMA spring.
Finally, the LAI pneumatic microrobot (fig. 2.32 a) is designed to move inside the
rectilinear part of the 17 mm diameter vertical pipes of vapour generators. The robot has
to carry modules for repair or inspection (sensors, video camera, etc.) that represent a
heavy load. In order to satisfy these requirements an inchworm locomotion mode has been
chosen. The modules are independent, allowing the support function and the stepping
function to be separated.
Figure 2.33: 12-legged endoscopic capsular robot
The central actuator is composed of a flexible part (metal bellows) that is stressed
under pressure and a rod acting as a pneumatic jack. When the pressure in the chamber
(outside the metal bellows) falls, the metal bellows act like a spring and extend back to
their initial length, so that the actuator rod moves back.
2.3.5 12-legged endoscopic capsular robot
This microrobot (figure 2.33) belongs to the mesoscale (from hundreds of microns to
tens of centimeters). It has a robotic legged locomotion mechanism [Valdastri et al., 2009]
that is compact and strikes a balance between conflicting design objectives, exhibiting high
foot forces and low power consumption. It enables a small robot to traverse a compliant,
slippery, tubular environment, even while climbing against gravity. This mechanism is
useful for many mesoscale locomotion tasks, including endoscopic capsule robot locomotion
in the gastrointestinal tract. It has enabled fabrication of the first legged endoscopic
capsule robot whose mechanical components match the dimensions of commercial pill
cameras (11 mm diameter by 25 mm long). A novel slot-follower mechanism driven via a
lead screw enables the mechanical components of the capsule robot to remain this small
while simultaneously generating 0.63 N average propulsive force at each leg tip.
It has been tested in a series of ex vivo experiments demonstrating the ability to traverse
the intestine in a manner suitable for inspection of the colon in a time period equivalent
to standard colonoscopy (about 30 min) at an average speed of about 50 mm/min.
2.4 Pipe Inspection robots
Pipe inspection is the main task at which the results of this thesis are aimed. Nowadays
there are several robots capable of performing this mission, amongst which it is possible to
find MRInspect (III and IV) [Roh and Choi, 2004], Pipe Mouse by Foster-Miller [fos, ], and
GMD-Snake2 [Worst and Linnemann, 1996] [Klaassen and Paap, 1999]. Although their
Figure 2.34: MRInspect pipe inspection robot
dimensions are beyond the limits pursued in this thesis (these robots are designed to
fit in pipes of 88 mm (Foster-Miller), 100 mm (MRInspect) and 135 mm (GMD) diameter),
they present very interesting features that, once miniaturized, could be included in
the prototype described in this thesis.
2.4.1 MRInspect
MRInspect IV (Multifunctional Robotic crawler for INpipe inSPECTion) [Roh and Choi,
2004] [Roh et al., 2008] has been developed for the inspection of urban gas pipelines with a
nominal 4-inch inside diameter. Its steering capability, based on a three-dimensional
differential drive method, provides outstanding mobility for navigation, and the new
steering mechanism can easily adjust itself to most pipelines and fittings in which other
former in-pipe robots can hardly travel.
2.4.2 Foster-Miller
Another robot that is able to adapt to changes in gas piping is Pipe Mouse [fos, ], an
autonomous inspection system for a live natural gas environment developed by Foster-
Miller together with New York GAS and the Department of Energy. Pipe Mouse is a
train-like robotic platform. Both front and rear drive cars propel the train forwards and
backwards inside the pipeline. Like a train, the platform includes additional cars to carry
the required payloads. The cars are used for various purposes including the installation
and positioning of sensor modules, the system power supply, data acquisition/storage
components, location/position devices and onboard micro-processors/electronics.
2.4.3 Helipipe
The architecture is based on the helicoidal motion of the driving body (the rotor),
directly actuated by a DC motor with a built-in gear reducer, fixed on a driven body (the
stator).
Figure 2.35: Foster-Miller pipe inspection robot
Both bodies use wheeled structures on elastic suspensions. One or two universal joints
between the bodies are used, especially for curved pipes. Friction is used to produce the
active force (there are no directly actuated wheels) and self-guiding along the way (curved,
horizontal or vertical parts). Three different prototypes (locomotion systems) have been
built for pipes of 70 mm and 40 mm diameter. Two of them are completely autonomous,
with on-board power supply and wireless control. Because of its high reliability (thanks
to a very simple kinematics) the robot can be used for inspection or to carry out tasks.
See [Horodinca et al., 2002] for more information.
2.4.4 Theseus
Theseus [Hirose et al., 1999] is a set of in-pipe inspection vehicles for pipes of 25, 50 and
100 mm diameter.
Thes-I (figure 2.37) is designed for gas pipes of 50 mm in diameter. The rolling wheel
of the robot has four spring wires radially attached around it, and free rollers are installed
at the end of each spring with some inclination angle. As the free rollers are pressed against
the inside wall of the pipe and driven in the circumferential direction, a screw motion is
generated and the robot moves along the pipe. Thes-I has a pair of rolling wheels rotating
in opposite directions to cancel the reaction torque. The main feature of the mechanism is
that, as the free rollers are supported by spring wires and their inclination angle changes
according to the magnitude of the resistance force, it behaves as a load-sensitive
transmission, automatically reducing the velocity and increasing the thrusting force when
it encounters a large resistance to propulsion.
Thes-III (figure 2.38) is designed for gas pipes of 150 mm in diameter. Thes-III
introduced a layout with the active wheels arrayed radially in a wheel plane, driving the
wheels while pressing them against the inside of the pipe with spring force. If the wheels
are driven like this, however, the wheel plane tends to incline and cannot maintain a
posture perpendicular to the pipeline axis. Thes-III therefore introduced a detection wheel
for each active wheel, in order to detect the inclination angle of the active wheel with
respect to the pipeline axis, and at the same time feedback control was executed to
maintain the perpendicular posture. Thanks to this, Thes-III can easily follow the bends
of the pipeline and smoothly makes tight turns at the elbow joints.
Figure 2.36: Helipipe
2.5 Robot Summary
Tables 2.1, 2.2 and 2.3 summarize the main properties of the modular robots presented
earlier.²
2.6 Conclusions
Throughout this chapter, many different designs of chain and lattice modular robots,
microrobots and pipe inspection robots have been shown. Each of them has been
included in this review for its special characteristics in a specific field.
First of all, it is important to remark that, regarding the types of modules the
robots are composed of, most modular designs are homogeneous, at least in a locomotion
sense (Polypod and I-Cubes have two types of modules, but one of them is passive; its
function is mainly to carry the power supply). There is a lack of heterogeneous drive
module combinations, which is one of the objectives of this thesis. Thus, there is no clear
state of the art the microrobot proposed in this thesis can be compared to, only several
fields such as those mentioned in this chapter: chain and lattice modular robots,
microrobots and pipe inspection robots.
Regarding chain and lattice robots, they have two drawbacks: they are medium-sized,
not suitable for narrow pipes, and they are homogeneous. Most of the reconfigurable
² Characteristics left blank were not available from the publications or websites of the authors
at the moment of publication.
Figure 2.37: Thes-I pipe inspection robot
Figure 2.38: Thes-III pipe inspection robot
robots overcome this problem by reconfiguring themselves into a configuration in which
they are able to move in the new terrain. But in pipes there is no room for reconfiguration.
This is where a robot provided with several locomotion modes becomes important.
However, chain and lattice robots form a field in which a lot of research is being done,
and consequently these robots have very interesting features regarding mechanical design,
sensors, control mechanisms, etc., that can be applied to the design of Microtub.
Amongst the homogeneous ones, Polybot and M-TRAN are the most complete regarding
the locomotion gaits they can perform, sensor fusion, embedded control, etc. Together with
CONRO they have a very interesting control architecture (that will be shown in chapter
3). M-TRAN is very interesting for its generation of locomotion patterns via genetic
algorithms, and CONRO for its hormone-based control mechanism. Molecule presents the
use of low-level assembly code on board and high-level code on a workstation, as well as
a very interesting gait control to achieve movement cycles such as climbing.
3D Class Homo/Hetero DOF Self-reconfig Battery Control Year
Polypod Chain Hetero(2 types) 2 No Yes C 1993
Tetrobot Chain Homo 1 x Yes C 1996
Fracta3D Lattice Homo 6 Yes ? ? 1995
Molecule Lattice Homo 4 Yes x C 1998
CONRO Chain Homo 2 No x C/D 1998
Polybot Chain Homo 1 Yes No C/D 1998
Telecube Lattice Homo 2 Yes Yes C 1998
I-Cube Lattice Hetero(2 types) 3 Yes x C 1999
M-TRANIII Hybrid Homo 2 Yes Yes D/C 2005
ATRON Lattice Homo 1 Yes Yes D 2003
Superbot Hybrid Homo 3 Yes Yes C/D 2005
Molecube Chain Homo 1 Yes ? No 2005
Table 2.1: 3-D Robots summary
2D Class Homo/Hetero DOF Self-reconfig Battery Control Year
CEBOT Mobile Hetero 1-3 Yes x C 1988
Metamorphic Lattice Homo 3 Yes x C 1993
Fracta2D ? Homo 0 Yes x D 1994
Chobie Lattice Homo 1 Yes Yes C 2003
Micro-module ? Homo 2 Yes x C
Crystalline Lattice Homo 2 Yes No C 1999
Table 2.2: 2-D Robots summary
1D Class Homo/Hetero DOF Self-reconfig Battery Control Year
ACM Chain Homo 1 No No C 1972
ACM R5 Chain Homo 1 No Yes C/D 2007
Wormbot Chain Homo 1 No No C 2003
Table 2.3: 1-D Robots summary
Regarding intermodule local communication, Telecube shows the use of contact sensor
faces as a communication mechanism and as a way to know whether other modules are
connected, very similar to the concept of the synchronism line that will be presented later
on. A similar concept is found in Chobie, which presents a control algorithm in which a
leader (that controls the transformation phase) is determined by local communications
and changed at every transformation. ACM also uses a communication line to know the
configuration of the chain robot.
On the electronics side, Digital Clay stands out for its use of flexible circuit boards,
similar to the ones used in the modules of this thesis. ATRON presents I2C communication
and power sharing between modules.
Other interesting features can be found in Molecube, which introduces the concept of
self-replication, the ability to create another robot similar to itself from separate modules,
and Crystalline, which presents the use of extension-contraction mechanisms in lattice
robots.
Regarding microrobots, there is a lack of designs for the inspection of pipes of small
diameter. Due to the characteristics of these small-diameter pipes, the robots have to be
linear (chain type) and small, and normally no reconfiguration can be done. The MicroSMA,
Denso, LMS, LAB and LAI microrobots are mainly based on SMAs and piezoelectric
actuators, which are very slow and power-demanding.
The case of the 12-legged endoscopic capsular robot is different: it uses an electrical
motor to drive a lead screw that moves its legs. It was conceived for endoscopic purposes,
but its mechanical concept could be applied to pipe inspection. Its speed is quite low (less
than 1 mm/s), but only because movement in the intestine has to be smooth; in a pipe,
the speed could probably be much higher.
Regarding pipe inspection robots, navigation through pipes of different diameters is an
issue covered in MRInspect, Theseus and Helipipe through different concepts. This concept
is also applied to Microtub in some modules that will be described further on, such as the
helicoidal module (whose wheels are able to expand and contract to adjust to smooth
changes of diameter) and the worm-like drive module (which can travel along pipes of 22
to 35 mm diameter).
In Foster-Miller's robot, some modules are active and others are passive. This is the
same idea as in MICROTUB: some modules will act as drive modules while others will
carry cargo: power supply, communications, camera, etc.
Chapter 3
Review on Control Architectures
for Modular Microrobots
It has yet to be proven that intelligence has any survival value
Arthur C. Clarke
In order to perform tasks, microrobots, and robots in general, need a brain. In the
case of modular robots, each module needs a brain. There are many theories (control
architectures) on how to build that brain and how to interconnect brains in order to build
a bigger one.
Control architectures can be classified, in a first step, into deliberative and reactive
architectures. The deliberative or planner-based architecture is the classic model-based
artificial intelligence (AI), also called Good Old-Fashioned AI in the tradition of McCarthy
[McCarthy, 1958]. In the 80s, purely reactive architectures appeared with the Subsumption
architecture of Brooks [Brooks, 1986], giving rise to the new behavior-based AI, based on
the direct connection between sensors and actuators (figure 3.1). After that, hybrid and
behavior-based architectures were born as a mixture of the former ones, using reactive
layers for fast reaction to unforeseen events and low-level control, and deliberative layers
for planning and high-level control.
Another important classification is between centralized and distributed control. In
centralized control the decisions are taken by one agent (module, computer, etc.), while in
distributed control decisions are taken among several agents. It is also possible to combine
both, for example having distributed control for simple or individual tasks and central
control for tasks that require planning or cooperation between agents. Each has its
advantages and disadvantages. Centralized control is easier to implement, but depends
on a single agent: if it fails, the whole system fails. In distributed systems, if one agent
fails the rest can keep working. Distributed systems, on the other hand, have the problem
of synchronizing and coordinating the agents, which does not arise in centralized control.
This chapter is dedicated to describing the control architectures this thesis is based
on. The first section will explain the difference between deliberative, reactive, hybrid
Figure 3.1: AI models: a) Deliberative b) Reactive c) Hybrid d) Behavior-based
and behavior-based architectures. The following sections will be dedicated to the study of
behavior-based systems and architectures, since this is the type of control chosen for the
MICROTUB robot. Hybrid architectures will also be covered, and some interesting control
designs of state-of-the-art robots will be described. In the last section, a brief summary
on adaptive control (used for high-level control) is presented.
3.1 Classification of control architectures
Control architectures for autonomous agents can thus be classified into four main types:
Deliberative or planner-based
Purely reactive
Hybrid
Behavior-based
Deliberative (Planning): The architecture of robots built using this approach (see
figure 3.1) consists of a set of functional blocks forming a closed loop through which
information flows from the robot's environment, via sensing, through the robot and
back to the environment, via actuator control, closing the feedback loop (sense > plan
> act). Thus, it is called a top-down architecture. For the most part these systems
execute each process in sequential order: sensing, building a representation (model)
of the state of the world, planning (presumably based on an a priori model and the built
model) and then actuator control. This traditional approach has proved suitable for
high-level activities such as global planning and the scheduling of activities, but due to
the inherent sequential ordering of the blocks it has proved inappropriate for dynamic
environments which require timely responses. As a summary:
Top-down
Sense > Plan > Act
Rely on a centralized (symbolic) world model
Information in the world model is used by the planner to produce the most appropriate
actions
Estimation of performance
Uncertainty in sensing/action and changes in the environment require re-planning
Poor scaling with complexity
Poor timeliness of responses
Some examples of deliberative architectures are: NASREM [Albus et al., 1988], VINAV
[Andersen et al., 1992], Stanford Cart [Cox and Wilfong, 1990].
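As an illustration of the sense > plan > act loop described above, the sketch below shows one iteration in Python. All names (sense, plan, act, the sensor keys and commands) are hypothetical, invented for illustration, and not taken from any of the architectures cited here.

```python
# Minimal sketch of one deliberative sense -> plan -> act iteration.
# The "world model" is a symbolic dictionary built from raw readings.

def sense(readings):
    """Build a symbolic world model from raw sensor readings."""
    return {"obstacle_ahead": readings.get("front_distance", 999) < 10}

def plan(world_model, goal):
    """Produce an action sequence from the centralized world model."""
    if world_model["obstacle_ahead"]:
        return ["turn_left", "move_forward"]
    return ["move_forward"]

def act(actions):
    """Send the planned actions to the actuators (here, just return them)."""
    return actions

# One iteration of the closed loop: sense -> plan -> act
readings = {"front_distance": 5}
model = sense(readings)
commands = act(plan(model, goal="reach_end_of_pipe"))
print(commands)  # ['turn_left', 'move_forward']
```

Note that the blocks run strictly in sequence: if the environment changes after sense() has run, the plan is stale until the next full cycle, which is exactly the timeliness problem listed above.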
Traditional planning architectures were the first robotic architectures to appear,
and one of the most used has been NASREM [Albus et al., 1988].
The NASREM architecture (NASA/NBS Standard Reference Model for Telerobot
Control System Architecture), proposed by Albus, is represented in figure 3.2. The
perceived information passes through several processing stages until a coherent view of the
current situation is obtained. After that, a plan is adopted and successively decomposed
by other modules until the desired actions can be directly executed by the actuators. It
is composed of six levels:
Servo: servo control
Primitive: generates smooth trajectories
Elemental move: collision-free paths
Task: converts actions into elemental moves
Service bay: converts group actions into single-object actions
Service mission: decomposes missions into service bay commands
Reactive: In reactive systems, intelligent behavior is achieved through the combination
of a set of rules, each connecting perception to action. Systems consist of concurrently
executed modules achieving specific tasks, e.g. avoid obstacle, follow wall, etc.
Representative of such systems is Brooks' subsumption architecture, in which the robot
consists of a set of hardwired (behavioral) reactive modules. As a summary:
Bottom-up: it starts with a relatively simple abstract set of rules that is built to
learn by itself
Sense <-> Act
Control as pre-programmed condition-action pairs with minimal internal state
Parallel processing
No internal models
Direct connection between stimulus and response, e.g. table, rules, circuit, etc.
Rely on:
direct coupling between sensing and action
fast feedback from the environment
Figure 3.2: NASREM architecture
Good if completely specified at design time
Similar to a planner with all plans computed offline beforehand
Poor scaling with complexity
Some examples of reactive architectures are: Subsumption architecture [Brooks, 1986],
Activation Networks [Maes, 1990]
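The condition-action coupling described above can be sketched as a fixed rule table evaluated on every control cycle. The sensor names, actions and the first-match arbitration below are purely illustrative assumptions, not drawn from any of the cited systems.

```python
# Hedged sketch of a purely reactive controller: a fixed table of
# condition-action pairs with no internal state and no world model.
# The first matching condition wins (a crude form of prioritization).

RULES = [
    (lambda s: s["bumper"],      "reverse"),    # highest priority: contact
    (lambda s: s["front"] < 10,  "turn_left"),  # obstacle close ahead
    (lambda s: True,             "forward"),    # default action
]

def react(sensors):
    # Direct stimulus -> response coupling: no planning, no model.
    for condition, action in RULES:
        if condition(sensors):
            return action

print(react({"bumper": False, "front": 5}))   # turn_left
print(react({"bumper": True,  "front": 50}))  # reverse
```

Because the whole mapping is fixed at design time, this controller is fast and predictable, but it can only cope with situations its designer anticipated, which is the scaling limitation noted above.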
Hybrid: takes advantage of the potential strengths of both schools (traditional and
reactive). Hybrid system architectures can be characterized by combining the high-level
deliberative activities of the traditional planning approach with the low-level reactive
behaviors of the reactive approach. The reactive behaviors ensure safe navigation, enabling
the robot to handle run-time contingencies and emergency situations, while the deliberative
high-level part of the system is committed to achieving the overall task. By guiding the
reactive behaviors to achieve specific goals, the planning component ensures efficient use
of system resources.
Compromise between reactive and deliberative
Low-level reactive control
Higher-level decision-making planner
Intermediate level(s)
52
3.1. Classication of control architectures
Separate the control system into two or more communicating but otherwise independent
parts (with different updating functions)
Low level > safety
Highest level > select action sequences
Usually called Three-Layer Architectures
Some examples of hybrid architectures are: Aura [Arkin and Balch, 1997], Atlantis
[Gat, 1992], Saphira [Konolige et al., 1997], 3T [Bonasso et al., 1995], DDP [Schonherr
and Hertzberg, 2002].
Behavior-based: an extension of reactive architectures, but lying between reactive
and deliberative; it is theoretically different from a reactive architecture. It is a
methodology for designing and controlling (autonomous) artificial systems/agents (robots)
based on biological systems. Behaviors are seen as control laws. Behaviors can store state
and information, and integrate both low- and high-level control. Behaviors operate in
parallel and are as simple as possible. Basic reactions to stimuli are combined to generate
resultant (emergent) behaviors. As a summary:
Started with R. Brooks and his Subsumption architecture
Extension of reactive architectures, but between reactive and deliberative; different
from reactive
Methodology for designing and controlling (autonomous) artificial systems/agents
(robots) based on biological systems
Behavior = control law
Behaviors can store state and information
Integrate both low- and high-level control
An approach to modularity
Behaviors operate in parallel
Behaviors are as simple as possible
Designed at different levels
The key difference between behavior-based and hybrid systems is in the way
representation and time-scale are handled. Hybrid systems typically employ a low-level
reactive system that operates on a short time-scale, and a high-level planner that operates
on a long time-scale. The two interact through a middle layer. Behavior-based systems
attempt to make the representation, and thus the time-scale, of the system uniform.
Behavior-based representations are parallel, distributed, and active, in order to
accommodate the real-time demands of other parts of the system. They are implemented
using the behavior structure, much like the rest of the system.
In general terms, it is said that reactive systems, in comparison to behavior-based and
hybrid systems, are:
Less powerful
Good for well-defined tasks and environments and a well-equipped robot
Great run-time efficiency but poor flexibility
Trade-off between built-in information and on-line computation
and behavior-based systems, in comparison to planner-based ones, are:
Less clear-cut
Dependent on the behaviors, their definition and application.
Some examples of behavior-based architectures are: DAMN [Rosenblatt, 1995], Motor
Schemas [Arkin, 1987], CAMPOUT [Pirjanian et al., 2000].
3.2 Behaviour-Based Systems
3.2.1 What is a behavior?
In Webster's dictionary, three definitions of the word behavior can be found:
1. the manner of conducting oneself
2. anything that an organism does involving action and response to stimulation
3. the response of an individual, group, or species to its environment
Simply put, a behavior is a reaction to a stimulus [Arkin, 1998]. Another definition
could be: anything observable that the system or robot does. It is clearly quite an
intuitive concept, and there is no precise definition.
M. Mataric [Mataric, 1994] defines behaviors as processes or control laws that achieve
and/or maintain goals. For example, avoid-obstacles maintains the goal of preventing
collisions, and go-home achieves the goal of reaching some home destination.
Behaviors can be implemented either in software or hardware, as a processing element
or a procedure. Each behavior can take inputs from the robot's sensors (e.g., camera,
ultrasound, infra-red, tactile) and/or from other behaviors in the system, and send outputs
to the robot's effectors (e.g., wheels, grippers, arm, speech) and/or to other behaviors.
Thus, a behavior-based controller is a structured network of such interacting behaviors.
She also states that behaviors themselves can have state, and can form representations
when networked together. Thus, unlike reactive systems, behavior-based systems are not
limited in their expressive and learning capabilities.
It is important to differentiate between behavior and action. A behavior is based
on dynamic processes operating in parallel under no central control, acting as fast
couplings between sensors and motors. It exploits emergence by using properties of the
environment and side-effects from combined processes. And finally, it is reactive.
An action is discrete in time (well-defined start and end points, allowing pre- and
post-conditions), avoids side-effects (only one or a few actions at a time; conflicts are
undesired and avoided) and is deliberative. Actions are the building blocks for behaviors.
Some important questions that can arise are: how do we distinguish internal behaviors
(components of a BBS) from externally observable behaviors? Should we distinguish them?
Behaviors are tightly connected to reactive robots. Reactive robots display desired
external behaviors, e.g. avoiding obstacles, collecting cans, walking, etc.
3.2.2 Behavior-based systems
Behavior-based systems consist of concurrently executed modules achieving independent
functions. Abstract representation is usually avoided in these systems. Behaviors are the
building blocks of the system. They have their basis in biological studies, so biology is
an inspiration for design. Example: generate a motor response from a given perceptual
stimulus.
The main properties of behavior-based systems are:
Ability to act in real time
Ability to use representations to generate efficient (not only reactive) behavior
Ability to use a uniform structure and representation throughout the system (so no
intermediate layer)
Other important properties are:
Achieve specific tasks/goals: avoid others, find friend, go home
Typically executed concurrently
Can store state and be used to construct world models/representations
Can directly connect sensors to effectors
Can take inputs from other behaviors and send outputs to other behaviors (connection
in networks)
Typically higher-level than actions (go home, whereas an action would be turn left
45 degrees)
They can be inhibited by other behaviors or agents
They can be prioritized
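The last two properties, inhibition and prioritization, can be sketched as a simple arbitration scheme: every behavior is evaluated concurrently, each proposes an output, and the highest-priority active behavior wins. The behavior names, priorities and sensor keys below are invented for illustration and do not come from any cited system.

```python
# Sketch of prioritized behaviors with winner-take-all arbitration.
# Higher priority numbers effectively inhibit lower-priority behaviors.

class Behavior:
    def __init__(self, name, priority, applies, output):
        self.name = name
        self.priority = priority   # larger number = higher priority
        self.applies = applies     # predicate over the sensor dictionary
        self.output = output       # proposed motor command

def arbitrate(behaviors, sensors):
    # Evaluate every behavior, then let the highest-priority active one win.
    active = [b for b in behaviors if b.applies(sensors)]
    winner = max(active, key=lambda b: b.priority)
    return winner.name, winner.output

behaviors = [
    Behavior("go-home",        1, lambda s: True,             "head_home"),
    Behavior("avoid-obstacle", 2, lambda s: s["front"] < 10,  "turn_away"),
    Behavior("find-friend",    1, lambda s: s["friend_seen"], "approach"),
]

print(arbitrate(behaviors, {"front": 5, "friend_seen": False}))
# ('avoid-obstacle', 'turn_away')
```

Here go-home is always active as a default goal, but avoid-obstacle inhibits it whenever an obstacle is close, which mirrors the subsumption idea of higher layers suppressing lower ones.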
3.2.3 Behavior representation
Behaviors can be expressed by different representations. When a control system is being
designed, the task is broken down into desired external behaviors.
There are several ways to express a behavior; there is no universal method. Some
of the most common are:
Functional notation
Stimulus response (SR) diagrams
Finite state machines/automata (FSA)
Rule-based representations
Formal methods: RS (Robot Schema) and Situated Automata
Stimulus response (SR) diagrams
These are the most intuitive and the least formal method of expression. Any behavior
can be represented as a response to a given stimulus, computed by a specific
behavior (figure 3.3).
Figure 3.3: Example of stimulus response diagram
Functional notation
Mathematical methods can be used to describe the same relationships using a functional
notation:
b(s) = r (3.1)
Where:
s = stimulus
r = range of response
b = behavioral mapping between S and R
Example for a navigational task of getting to a classroom:
coordinate-behaviors (
move-to-classroom ( detect-classroom-location ),
avoid-objects ( detect-objects ),
dodge-students ( detect-students ),
stay-to-right-on-path ( detect-path ),
defer-to-elders ( detect-elders )
) = motor-response
It is easily convertible to functional languages like LISP [McCarthy, 1960].
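Beyond LISP, the same notation maps directly onto function composition in any language. The sketch below re-expresses the classroom example in Python; every detector returns a stub stimulus and each behavior maps stimulus to response, so the names mirror the text above rather than any real API.

```python
# Functional-notation sketch: b(s) = r, composed into a motor response.
# Detectors produce stimuli (stubbed here); behaviors map them to responses.

def detect_classroom_location(): return "north"
def detect_objects():            return ["chair"]
def detect_students():           return []

def move_to_classroom(loc):  return ("heading", loc)
def avoid_objects(objs):     return ("avoid", objs)
def dodge_students(studs):   return ("dodge", studs)

def coordinate_behaviors(*responses):
    # Trivial coordinator: collect every behavioral response into one
    # motor response (a real system would blend or prioritize them).
    return list(responses)

motor_response = coordinate_behaviors(
    move_to_classroom(detect_classroom_location()),
    avoid_objects(detect_objects()),
    dodge_students(detect_students()),
)
print(motor_response)
# [('heading', 'north'), ('avoid', ['chair']), ('dodge', [])]
```

The coordinator is deliberately naive; the point is only that each behavior is a pure mapping from stimulus to response, exactly as in equation (3.1).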
Finite State Automata (FSA)
FSA have very useful properties when describing aggregations and sequences of behaviors.
They make explicit the behaviors active at any given time and the transitions between
them. They are less useful when encoding a single behavior.
A finite state automaton is set up with sensor events on the arcs and actions in each
state (see figure 3.4). The idea is that the robot stays in one state doing one action until
the transition condition on one of the arcs leading from this state is satisfied, resulting in
a change to the state at the other end of the arc. One problem with this approach is that
it becomes increasingly difficult for a programmer to maintain an overview of the behavior
of the system as the number of states grows.
Figure 3.4: FSA encoding a door-traversal mechanism
Rule Based Representation
The mapping from sensor state to motor state can also be performed by a list of rules
expressed by traditional boolean conditioning, like if..then statements, where the rules
have different priorities, as in [Mat95]. An example of a rule-based control program for
line-following is:
if (true) then leftSpeed=2; rightSpeed=2;
if (leftSensor>threshold) then rightSpeed=0;
if (rightSensor>threshold) then leftSpeed=0;
if (leftSensor>threshold) and (rightSensor>threshold) then
leftSpeed=-2; rightSpeed=-2;
The computational complexity of rule-based behavior representation is equivalent to
that of feed-forward neural networks, in the sense that both use a direct mapping from
sensor space to motor space without internal state. However, the rule-based approach
gives more abrupt changes in the output, which occur when the activation of a sensor
crosses a threshold.
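The rule list above translates directly into a short runnable function. Rules are applied in order, with later rules overriding earlier ones just as in the listing; the threshold value is an arbitrary illustrative choice.

```python
# Runnable transcription of the line-following rule list above.
# Priorities emerge from rule order: later assignments override earlier ones.

THRESHOLD = 512  # hypothetical line-sensor activation threshold

def line_follower(left_sensor, right_sensor):
    left_speed, right_speed = 2, 2                    # rule 1: go forward
    if left_sensor > THRESHOLD:
        right_speed = 0                               # rule 2: turn right
    if right_sensor > THRESHOLD:
        left_speed = 0                                # rule 3: turn left
    if left_sensor > THRESHOLD and right_sensor > THRESHOLD:
        left_speed, right_speed = -2, -2              # rule 4: back up
    return left_speed, right_speed

print(line_follower(0, 0))      # (2, 2)   straight ahead
print(line_follower(600, 0))    # (2, 0)   turn right
print(line_follower(600, 700))  # (-2, -2) reverse
```

The abrupt output changes noted above are visible here: an infinitesimal sensor change across THRESHOLD flips a wheel speed from 2 to 0.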
Formal methods
They can potentially provide a set of very useful properties to the robot programmer:
They can be used to verify designer intentions
They can facilitate the automatic generation of robotic control systems
They provide a complete common language for the expression of robot behavior
They provide a framework for conducting formal analysis of a specific program's
properties, adequacy, and/or completeness.
They provide support for high level programming language design
Two types of formal methods are Robot Schemas (RS) and Situated Automata.
Although they are not going to be used in this thesis, as an example, the robot schema
representation for the navigation example is:
Class-going-robot = (Start-up ; (done? , Journey) : At-classroom)
Journey = (move-to-classroom , avoid-objects , dodge-students ,
stay-to-right-on-path , defer-to-elders)
and the situated automata representation is:
(defgoalr (ach in-classroom)
(if (not start-up)
(maint (and (maint move-to-classroom)
(maint avoid-objects)
(maint dodge-students)
(maint stay-to-right-on-path)
(maint defer-to-elders)))))
3.2.4 Behavioral encoding
A behavior can be expressed as a triple (S, R, β), where S denotes the domain of all
interpretable stimuli, R denotes the range of possible responses and β denotes the mapping
β : S → R.
The behavior encoding can be divided firstly into discrete and continuous encodings.
In discrete encodings, β consists of a finite set of (situation, response) pairs. Sensing
provides the index for finding the appropriate situation. Another strategy is to use a
collection of if-then rules.
Continuous response allows a robot to have an infinite space of potential reactions to its world. Instead of having an enumerated set of responses that discretizes the way in which the robot can move (e.g. forward, backward, left, right, etc.), a mathematical function transforms the sensory input into a behavioral reaction. One of the most common methods for implementing continuous response is based on a technique referred to as the potential fields method.
In this approach to motion planning, the robot is represented as a point under the influence of an artificial potential field produced by an attractive force at the goal configuration and repulsive forces at the obstacles.
3.2. Behaviour-Based Systems
Figure 3.5: Potential fields
U(q) = U_att(q) + U_rep(q)    (3.2)
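The two terms of equation (3.2) can be sketched as gradient-descent steps over the combined field; the gains, influence radius and step size below are illustrative assumptions, not values from the thesis:

```python
import numpy as np

def attractive_force(q, goal, k_att=1.0):
    """Negative gradient of U_att: pulls the point robot toward the goal."""
    return -k_att * (q - goal)

def repulsive_force(q, obstacle, k_rep=1.0, d0=1.0):
    """Negative gradient of U_rep: pushes away inside the influence radius d0."""
    diff = q - obstacle
    d = np.linalg.norm(diff)
    if d >= d0 or d == 0.0:
        return np.zeros_like(q)
    return k_rep * (1.0 / d - 1.0 / d0) / d**2 * (diff / d)

def field_step(q, goal, obstacles, step=0.05):
    """One gradient-descent step over U(q) = U_att(q) + U_rep(q)."""
    force = attractive_force(q, goal)
    for obs in obstacles:
        force = force + repulsive_force(q, obs)
    return q + step * force
```

Iterating `field_step` moves the robot toward the goal while obstacles within the influence radius deflect it, which is the continuous-response behavior described above.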
3.2.5 Emergent behavior
The cathedral termite, found in parts of Australia, is capable of creating mounds for the colony well over 10 feet high. Individual cathedral termites are just standard-looking bugs with a tiny, primitive brain. But when combined with others of its species, the cathedral termite is capable of constructing a huge, complex hive to house the colony. Unlike human building projects, however, there is no foreman, no plan, and it is unlikely that any termite even knows what it is helping to create. [Emergent Behavior: Thriving at the Edge of Chaos, by Chris Rollins]
How is this possible?
The answer lies in the fact that, sometimes, a system can exhibit more complexity than the sum of its parts, leading to what scientists call emergent behavior. Emergent behavior can be defined as a behavior of a system that is not explicitly described by the behavior of the components of the system, and that is therefore unexpected to a designer or observer.
Emergent behavior is an important but not well-understood phenomenon. Robot behaviors emerge from:
- interactions of rules
- interactions of behaviors
- interactions of either with the environment
There is a coded behavior in the programming scheme and an observed behavior in the eyes of the observer, but there is no one-to-one mapping between the two.
Example: emergent flocking. When multiple robots are programmed under the following premises:
- don't run into any other robot
- don't get too far from other robots
- keep moving if you can
and these rules run in parallel on many robots, the result is flocking.
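A minimal sketch of those three premises, run identically on every robot; the gains and distance thresholds are illustrative assumptions:

```python
import numpy as np

def flock_step(i, positions, velocities, too_close=1.0, too_far=5.0, gain=0.1):
    """One update of robot i's heading from the three flocking premises."""
    p, v = positions[i], velocities[i].copy()
    for j, q in enumerate(positions):
        if j == i:
            continue
        offset = q - p
        d = np.linalg.norm(offset)
        if d < too_close:            # don't run into any other robot
            v -= gain * offset
        elif d > too_far:            # don't get too far from other robots
            v += gain * offset
    n = np.linalg.norm(v)
    # keep moving if you can: normalize to unit speed, default heading if stalled
    return v / n if n > 0 else np.array([1.0, 0.0])
```

No robot stores a notion of "flock"; the group pattern is visible only to an external observer, which is the point of the example.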
Another example: if a robot is programmed to do:
- if too far, move closer
- if too close, move away
- otherwise, keep going
then, over time, in an environment with walls, this will result in wall-following.
So, is this really emergent behavior? It is argued that it is, because the robot itself is not aware of a wall; it only reacts to distance readings. The concepts of "wall" and "following" are not stored in the robot's controller.
Emergent behaviors depend on two aspects:
- the existence of an external observer, to observe and describe the behavior of the system
- verification that the behavior is not explicitly specified
3.2.6 Behavior coordination
Coordination can be competitive, cooperative or a combination of the two. The main problem to solve is deciding what to do next, i.e. the action-selection problem [Pirjanian, 1999]: how a behavior, or an agent in general, can select the most appropriate or most relevant action to take next at a particular moment, when facing a particular situation. This leads to the behavior-arbitration problem.
Basically, action-selection mechanisms can be divided into competitive and cooperative.
- Competitive coordination (arbitration): perform arbitration, selecting one behavior amongst a set of candidates
  - Priority-based: subsumption architecture (Brooks)
  - State-based: discrete event systems (Kosecka), reinforcement learning (Q-learning, W-learning)
  - Winner-take-all: activation networks (Maes)
- Cooperative coordination (command fusion): combine the outputs of multiple behaviors
  - Voting: DAMN (Rosenblatt & Payton), SAMBA (Riekki & Roning)
  - Fuzzy: formalized voting
  - Superposition (vector addition): potential fields (Khatib), motor schemas (Arkin)
Competitive arbitration mechanisms select one behavior and give it total control until the next behavior is selected. In priority-based arbitration, behaviors are ranked and higher-priority behaviors can override lower-priority ones. State-based arbitration selects a behavior that is associated with a given state of the agent (often modeled as an FSA, Finite State Automaton), and winner-take-all arbitration allows behaviors to compete for control of the agent.
Figure 3.6: Basic block in subsumption architecture
Command-fusion ASMs combine recommendations from several behaviors to form a consensus control action, aggregating one or more behaviors according to some rule; essentially, the difference between different command-fusion architectures resides in the function used to aggregate behaviors.
Priority-based
The most well-known implementation of a priority-based architecture is the subsumption architecture, described by Brooks [Brooks, 1986]. In this architecture, the agent acquires levels of competence in a layered format; this also allows modularity and upgrading with more complex higher-level behaviors. Higher-level layers subsume lower levels when they wish to assume control; this is done through signals sent from the higher-level behaviors to suppress or inhibit the lower-level behaviors (figure 3.6). More complex behavioral patterns are obtained using priorities and subsumption relations between the behaviors, such that certain behaviors can override others, or behaviors can operate in parallel when two or more behaviors are signaled by a stimulus. This architecture is described in section 3.3.1.
State-based
Behavior selection is done using state transitions: upon detection of a certain event, a shift is made to a new state and thus to a new behavior. Using this formalism, systems are modeled in terms of finite state automata (FSA), where states correspond to the execution of actions/behaviors, and events, which correspond to observations and actions, cause transitions between the states. See the example of an FSA in figure 3.4.
State-based arbitration architectures include Discrete Event Systems [Kosecka and Bajcsy, 1993] and Bayesian Decision Analysis [Kristensen, 1997]. In Discrete Event Systems (DES), agents are modeled in terms of finite state automata (FSA) where states correspond to the execution of actions/behaviors, and events, corresponding to observations and actions, cause transitions between the states.
Bayesian Decision Analysis is based on sensor selection. The objective is to choose the action that maximizes the expected utility of the agent; selection of an action is therefore based on maximizing the expected utility, computed with Bayesian probability. Each sensing action is associated with a certain cost or expense, and a benefit is associated with the information provided by the sensing action. It is a probabilistic method. Example: going through the doorway when the door is open/closed.
Winner-take-all
Here the behaviors actively compete with each other based on sensory information and on the agent's goals and intentionality.
Activation Networks, from Maes [Maes, 1990], are based on an engine in which a community of behaviors works to reduce the difference between the present state and the desired state. Each behavior is specified in terms of pre- and post-conditions, and an activation level, which gives a real-valued indication of the relevance of the behavior in a particular situation. The higher the activation level of a behavior, the more likely it is that this behavior will influence the output of the agent. Once specified, a set of competence behaviors is compiled into a spreading activation network, in which the modules are linked to one another in ways defined by their pre- and post-conditions.
Behaviors are activated by their activation energy reaching a specified level; activation energy is added to and removed from the network of behaviors by external (goal, state, inhibition) and internal (predecessor, successor, inhibition) sources (figure 3.11). When the activation level of an executable behavior exceeds a specified threshold, it is selected and executes the most appropriate action from its point of view.
Voting
Many voting architectures exist, all sharing the common feature that a polling mechanism is employed to select between competing behaviors. Each active behavior has a certain number of votes to give to a previously defined set of behavioral responses, and the response with the most votes is the action selected.
An example of a voting command-fusion architecture is the DAMN architecture [Rosenblatt, 1995], covered in section 3.3.4.
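A voting arbiter of this kind can be sketched as a weighted tally over a shared action set; the action names and weights below are illustrative, not DAMN's actual interface:

```python
# Shared, enumerated set of candidate actions every behavior votes over.
ACTIONS = ["hard_left", "left", "straight", "right", "hard_right"]

def arbitrate(behavior_votes, weights):
    """behavior_votes: {behavior: {action: vote}}; weights: {behavior: weight}.
    Returns the action with the maximum weighted sum of votes."""
    totals = {a: 0.0 for a in ACTIONS}
    for name, votes in behavior_votes.items():
        for action, vote in votes.items():
            totals[action] += weights[name] * vote
    return max(totals, key=totals.get)
```

Raising a behavior's weight shifts the consensus toward its preferred action, which is how context-dependent priorities are expressed in voting schemes.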
Fuzzy
Fuzzy architectures use fuzzy inference and behavior rules that are combined into a multivalued output. Behaviors that compete for control of the robot are then coordinated to resolve potential conflicts. Fuzzy behavior coordination is performed by combining the fuzzy outputs of the behaviors using an appropriate operator; they are then defuzzified at the end to provide a clean final control action (figure 3.7).
Each behavior is synthesized by a rule base controlled by an inference engine to produce a multivalued output that encodes the desirability of each action from the behavior's point of view. Example:
IF obstacle is close THEN avoid collision
IF NOT (obstacle is close) THEN follow target
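For the two rules above, a minimal command-fusion sketch might weight each rule's recommendation by its firing strength and defuzzify with a weighted average; the membership shape and turn commands are assumptions:

```python
def mu_close(distance, near=0.2, far=1.0):
    """Membership degree of "obstacle is close" (shoulder-shaped, in [0, 1])."""
    if distance <= near:
        return 1.0
    if distance >= far:
        return 0.0
    return (far - distance) / (far - near)

def fused_turn(distance, avoid_turn=1.0, follow_turn=0.0):
    """Weight each rule's recommended turn by its firing strength,
    then defuzzify with a weighted average."""
    w_avoid = mu_close(distance)     # IF obstacle is close THEN avoid collision
    w_follow = 1.0 - w_avoid         # IF NOT (obstacle is close) THEN follow target
    return (w_avoid * avoid_turn + w_follow * follow_turn) / (w_avoid + w_follow)
```

Because the memberships are continuous, the command blends smoothly between avoiding and following instead of switching abruptly at a threshold.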
Figure 3.7: Fuzzy command fusion example
Superposition
The most straightforward type of behavior superposition is a simple linear combination of behaviors, with behaviors being weighted and combined. The most popular is the potential field approach, which has been used extensively: agents move under the influence of a simulated potential field which treats goals as attractors and obstacles as repellers (figure 3.5).
U(q) = U_att(q) + U_rep(q)    (3.3)
The system moves towards its lowest-energy configuration. There can be problems with local minima (the usual formulation of potential fields does not preclude the occurrence of local minima other than the goal). An example of this approach is Potential Fields [Khatib, 1986]. Another such architecture is Motor Schemas [Arkin, 1987], which will be covered further on.
3.3 Behavior-Based Architectures
The concept of behavior appeared with reactive architectures. Some common characteristics of these architectures are:
- emphasis on the importance of coupling sensing and action tightly
- avoidance of representational symbolic knowledge
- decomposition into contextually meaningful units (behaviors)
Figure 3.8: Example of structure in subsumption architecture
Behavior-based architectures were based on reactive architectures, but they are not the same, as explained before. In the following sections, some of these architectures are described.
3.3.1 Subsumption Architecture
Brooks' subsumption architecture [Brooks, 1986] is one of the best-known implementations of behavior that does not include a world model. Subsumption architecture systems are incrementally built up of competences, where a new competence can subsume behavior generated by the already existing competences, thereby altering the behavior of the system. The competences all run simultaneously, and each can react to the present sensor state, which gives a reactive behavior. The subsumption architecture is illustrated in figure 3.6. This figure shows how the higher levels can overwrite (subsume) the output of the lower levels; it does not show that the higher levels can also change the behavior generated by the lower levels. The system can be partitioned at any level, and the layers below form a complete operational control system.
Several different robotic systems have been built using this approach. However, adding new competences to a subsumption architecture system becomes increasingly difficult as the system gets larger, because of the many different possibilities for connecting the new competence, and because the generated behavior is an emergent property of the interaction between all the competences.
At the lowest level, each behavior is represented using an augmented finite state machine (AFSM), as shown in figure 3.8. Stimulus or response signals can be suppressed or inhibited by other active behaviors. There is no global memory, bus or clock: each behavioral layer is mapped onto its own processor, and there are no central world models. Figure 3.9 shows an example of a three-layered robot.
Coordination in subsumption has two primary mechanisms:
- Inhibition: prevents a signal from reaching the actuators
- Suppression: prevents a signal from being transmitted and replaces it with a suppressing message
The lowest-level layer of control makes sure that the robot does not come into contact with other objects: if something approaches the robot, it will move away. The first-level layer of control, when combined with the zeroth, imbues the robot with the ability to
Figure 3.9: Subsumption AFSM of a Three Layered Robot
Subsumption
Background: Well-known early reactive architecture
Precursors: Braitenberg 1984; Walter 1953; Ashby 1952
Principal design method: Experimental
Developer: Rodney Brooks (MIT)
Response encoding: Predominantly discrete (rule-based)
Coordination method: Competitive (priority-based, with inhibition and suppression)
Programming method: AFSMs, Behavior Language
Robots fielded: Allen, Genghis, Squirt, Toto, Seymour, Polly ...
Table 3.1: Subsumption Architecture
wander around aimlessly without hitting obstacles. Level 2 is meant to add an exploratory mode of behavior to the robot, using visual observations to select interesting places to visit.
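In the spirit of priority-based arbitration, the three levels just described can be sketched as behaviors ranked by priority, where the highest-ranked behavior with an opinion suppresses the rest. This simplifies the actual AFSM wiring of subsumption (where suppression happens on individual signal lines); the layer functions and commands are hypothetical:

```python
def avoid_contact(sensors):
    """Highest priority: back away from anything touching the robot."""
    return (-2, -2) if sensors.get("bump") else None

def explore(sensors):
    """Head toward an interesting place when one is seen."""
    return (2, 2) if sensors.get("interesting") else None

def wander(sensors):
    """Default aimless motion; always has an opinion."""
    return (2, 1)

def arbitrate(layers, sensors):
    """Layers ordered from highest to lowest priority; the first layer
    producing a command suppresses all the ones below it."""
    for layer in layers:
        command = layer(sensors)
        if command is not None:
            return command
    return (0, 0)

LAYERS = [avoid_contact, explore, wander]
```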
3.3.2 Motor Schemas
Schemas are parameterized potential functions that give a generic specification of independent processes, specialized for a specific task and domain. There are two types of schemas:
- motor schemas: concerned with the control of actuators
- perceptual schemas: concerned with goal-directed sensing of features in the environment
The agent-based structure of Arkin's schema-based architecture [Arkin, 1987] (figure 3.10) allows each element to be instantiated or killed at any time, according to the task at hand and the environment state, which endows the architecture with an interesting run-time flexibility. Each behavior is implemented as a motor schema, which acts according to the information provided by a set of perceptual schemas.
Figure 3.10: Structure of Motor Schemas
Motor schemas are similar to animal behaviors. They may have internal parameters that provide additional flexibility. Each of them has an output (an action vector) that defines the way the robot should move. Perceptual schemas are embedded in each motor schema and provide the environmental information specific to that particular behavior. They are recursively defined, that is, there can be perceptual subschemas providing information that will be processed by the perceptual schema.
Many different motor schemas have been defined, including:
- Move-ahead: move in a particular compass direction
- Move-to-goal: move towards a detected goal object (two versions of this schema exist: ballistic and controlled)
- Avoid-static-obstacle: move away from passive or non-threatening navigational barriers
- Dodge: sidestep an approaching ballistic projectile
- Escape: move away from the projected intercept point between the robot and an approaching predator
- Stay-on-path: move toward the center of a path, road, or hallway (for three-dimensional navigation, this becomes the stay-in-channel schema)
- Noise: move in a random direction for a certain amount of time
- Follow-the-leader: move to a particular location displaced somewhat from a possibly moving object (the robot acts as if it is leashed invisibly to the moving object)
- Probe: move toward open areas
Motor Schemas
Background: Reactive component of the AuRA architecture
Precursors: Arbib 1981; Khatib 1985
Principal design method: Ethologically guided
Developer: Ronald Arkin (Georgia Tech)
Response encoding: Continuous, using a potential field analog
Coordination method: Cooperative, via vector summation and normalization
Programming method: Parameterized behavioral libraries
Robots fielded: HARV, George, Ren and Stimpy, Buzz ...
Table 3.2: Motor Schemas Architecture
- Dock: approach an object from a particular direction
- Avoid-past: move away from areas recently visited
- Move-up, move-down, maintain-altitude: move upward or downward, or follow an isocontour in rough terrain
- Teleautonomy: allows a human operator to provide internal bias to the control system at the same level as another schema
The cooperative coordination mechanism is based on a weighted vector sum, where weights allow motor schemas to be distinguished in terms of priority. Modularity is provided by the agent-based nature of the architecture and by the standard vector form of the motor schema outputs. Along this line, a set of motor schemas and their respective coordination nodes can be aggregated into a single motor schema, also called a behavioral assemblage. An external entity, a sequencer, can select which behavioral assemblages are active according to the task/environment; in figure 3.10 it would be situated between the VECTOR node and the robot motors.
Arkin: "The task of robot programming is fundamentally simplified through the use of a divide-and-conquer strategy."
Example: an obstacle-avoidance motor schema can steer the robot away from an obstacle. In order to detect obstacles, several obstacle-detection perceptual schemas can be instantiated to keep track of obstacles and feed the obstacle positions to the associated obstacle-avoidance motor schemas.
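The cooperative coordination just described (weighted vector summation followed by normalization) can be sketched as follows; the two schemas, their gains and the obstacle radius are illustrative assumptions:

```python
import numpy as np

def move_to_goal(pos, goal):
    """Unit action vector toward the goal."""
    v = goal - pos
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def avoid_static_obstacle(pos, obstacle, radius=1.0):
    """Action vector away from the obstacle, fading out at the given radius."""
    v = pos - obstacle
    d = np.linalg.norm(v)
    if d == 0.0 or d > radius:
        return np.zeros_like(pos)
    return (v / d) * (radius - d) / radius

def combine(vectors, gains):
    """Weighted vector summation followed by normalization."""
    total = sum(g * v for g, v in zip(gains, vectors))
    n = np.linalg.norm(total)
    return total / n if n > 0 else total
```

Raising a schema's gain raises its priority in the blend without any schema knowing about the others, which is the modularity argument made above.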
3.3.3 Activation Networks
It is a combination of traditional planners and reactive systems, developed by Maes [Maes, 1990]. The architecture consists of a set of behaviors, or competence modules, which are connected to form a network (figure 3.11). Action selection is modeled as an emergent property of an activation/inhibition dynamics among these modules. The set of behaviors reduces the difference between the system's present state and a goal state. Arbitration among modules is a run-time process, which changes according to the goals and the current situation.
In the network, each behavior is represented by a tuple (c_i, a_i, d_i, α_i) describing:
1. the preconditions c_i under which it is executable (i.e. can be applied)
Figure 3.11: Activation Networks
2. the effects after successful execution, in the form of an add-list a_i and a delete-list d_i
3. the activation level α_i, which is a measure of the applicability of the behavior
External sources of activation are: activation by the state, activation by the goals, and inhibition by protected goals. Internal sources of activation are: activation of successors, activation of predecessors, and inhibition of conflictors.
This ASM deals only with the selection of behaviors and not with motor actions. When a behavior is selected, it performs the most appropriate action from its point of view (it is a winner-take-all mechanism).
The internal mechanisms of a competence module are independent of the architecture. A competence module has a condition list, which aggregates all the conditions to be met before the module becomes executable, and an add list and a delete list, which are the effects of the module. Successor links connect add-list items of one competence to the condition list of another competence. A predecessor link from module X to module Y exists for each successor link from Y to X. A conflicter link connects delete-list items of one competence to the condition list of another competence. Provided that a competence is executable, that its activation value is above a certain threshold, and that this value is greater than the activation of all other competence modules, the competence is allowed to perform real actions (i.e. to actuate directly on the actuators). Briefly, activation is increased every time an item in the condition list is met and by the achievement of a global goal. Activation then flows between competence modules via successor, predecessor, and conflicter links. A decay function ensures that the overall activation level remains constant.
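A much-simplified sketch of this selection cycle, keeping only the injection of activation from the state and the goals followed by winner-take-all selection among executable modules above a threshold (link spreading and decay are omitted, and all numbers are assumptions):

```python
def select(modules, state, goals, threshold=0.5):
    """modules: list of dicts with keys 'name', 'preconditions' (set),
    'adds' (set) and 'activation' (float). Mutates activations in place
    and returns the name of the winning module, or None."""
    for m in modules:
        m["activation"] += 0.1 * len(state & m["preconditions"])  # from the state
        m["activation"] += 0.2 * len(goals & m["adds"])           # from the goals
    executable = [m for m in modules if m["preconditions"] <= state]
    candidates = [m for m in executable if m["activation"] >= threshold]
    return max(candidates, key=lambda m: m["activation"])["name"] if candidates else None
```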
Activation Networks
Background: Dynamic competition system
Precursors: Minsky 1986; Hillis 1988
Principal design method: Experimental
Developer: Pattie Maes (MIT)
Response encoding: Discrete
Coordination method: Arbitration via action selection
Programming method: Competence modules
Robots fielded: Simulation only
Table 3.3: Activation Networks Architecture
Figure 3.12: DAMN architecture
3.3.4 DAMN
DAMN is a Distributed Architecture for Mobile Navigation [Rosenblatt, 1995] that consists of a set of asynchronous behaviors pursuing the system goals based on the current state of the environment. Each behavior votes for or against each of the actions constituting the possible action set of the agent, and the best action is the one with the maximum weighted sum of the received votes. Each behavior is assigned a weight; these weights reflect the relative importance or priority of the behavior in a given context. The arbiter is then responsible for combining the behaviors' votes and generating actions that reflect their objectives and priorities. A behavior can be a reactive behavior as well as a planning module, so the architecture is well suited for integrating high-level deliberative planners with low-level reactive behaviors.
3.3.5 CAMPOUT
CAMPOUT is a Control Architecture for Multi-robot Planetary Outposts [Pirjanian et al., 2000] [Pirjanian et al., 2001] by the Jet Propulsion Laboratory in Pasadena, CA. It is a three-layer behavior-based system:
DAMN (Distributed Architecture for Mobile Navigation)
Background: Fine-grained subsumption-style architecture
Precursors: Brooks 1986; Zadeh 1973
Principal design method: Experimental
Developer: Julio Rosenblatt (CMU)
Response encoding: Discrete vote sets
Coordination method: Multiple winner-take-all arbiters
Programming method: Custom
Robots fielded: DARPA ALV and UGV vehicles
Table 3.4: DAMN Architecture
- Low-level control routines
- A middle behavior layer that uses either the BISMARC (Biologically Inspired System for Map-based Autonomous Rover Control) or the MOBC (Multi-Objective Behavior Control) action-selection mechanism
- Hierarchical task planning, allocation and monitoring
CAMPOUT includes the necessary group behaviors and communication mechanisms for the coordinated and cooperative control of heterogeneous robotic platforms. It is a distributed, hybrid, behavior-based system, because it couples reactive and local deliberative behaviors without the need for a centralized planner. The control architectures closest to CAMPOUT are ALLIANCE, DAMN, BISMARC and MOBC.
Its main characteristics are:
- Cognizant of failure and fault-tolerant
- Distributed control
- Scalable and easy to integrate
- Rational decision making
- Explicit knowledge
- Uncertainty handling
- Adaptivity and learning capabilities
- Hybrid
- Formal framework
- Small overhead
Behaviors are divided into the following classes:
- Primitive Behavior Library: behavior-producing modules (behaviors). A behavior is a perception-to-action mapping module that, based on selective sensory information, produces actions in order to maintain or achieve a given, well-specified task objective. Examples: AvoidObstacle and GotoTarget
- Composite Behaviors: combinations of lower-level behaviors. Example: SafeNavigation
Figure 3.13: CAMPOUT: block diagram
- Communication Behaviors: interaction with other robots
- Shadow Behaviors (s-behaviors): remote behaviors (including, for example, state information) running on a separate robot
- Cooperation/Coordination Behaviors: coordination between c-behaviors and s-behaviors
CAMPOUT uses the following control mechanisms:
- Arbitration mechanisms, suitable for arbitrating between the set of active behaviors in accordance with the system's changing objectives and requirements. CAMPOUT implements:
  - priority-based arbitration, where behaviors with higher priorities are allowed to suppress the output of behaviors with lower priorities;
  - state-based arbitration, which is based on the discrete event systems (DES) formalism and is suitable for behavior sequencing.
- Command fusion mechanisms:
  - Voting: the output of each behavior is interpreted as votes for or against possible actions, and the action with the maximum weighted sum of votes is selected (DAMN-style, based on BISMARC);
  - Fuzzy command fusion mechanisms, which use fuzzy logic and inference to formalize the action-selection process;
  - Multiple-objective behavior fusion mechanisms, which select the action with the best trade-off between the task objectives and which satisfies the behavioral objectives as much as possible, based on multiple-objective decision theory.
CAMPOUT
Background: ALLIANCE, DAMN, BISMARC and MOBC
Precursors: DAMN, ALLIANCE
Principal design method: Experimental
Developer: JPL
Response encoding: Both discrete and continuous
Coordination method: Several: priority-based, state-based, voting, fuzzy, etc.
Programming method: Distributed
Robots fielded: CAMPOUT rover
Table 3.5: Control Architecture for Multi-robot Planetary Outposts (CAMPOUT) Architecture
Communications infrastructure: provides a set of tools and functions for interconnecting a set of robots and/or behaviors for sharing resources (e.g. sensors or actuators), exchanging information (e.g. state, percepts), synchronization, rendezvous, etc.
- Synchronization: Signal (destination, sig) and Wait (source, sig) can be used to send, and to wait for, a signal to and from a given robot
- Data exchange: SendEvent (destination, event) and GetEvent (source, event) can be used to send and receive an event structure
- Behavior exchange: SendObjective (destination, objective) and GetObjective (source, objective) can be used to send and receive objective functions (multivalued behavior outputs)
3.4 Hybrid Deliberate-Reactive Architectures
"Everybody's got plans... until they get hit" (M. Tyson)
Hybrid architectures arise from the need to overcome the drawbacks of purely reactive and purely deliberative systems. Some strategies for combining the two are:
- Selection: planning is viewed as configuration; the planner determines the behavioral composition
- Advising: planning is viewed as advice giving; the planner makes suggestions that the reactive level may or may not use
- Adaptation: planning is viewed as adaptation; the planner modifies the reactive components depending on environment changes
- Postponing: planning is viewed as a least-commitment process; plans are elaborated only as necessary
Modern robot control architectures are hybrid, i.e. they contain different layers for the reactive and the deliberative control components. Typically, a middle layer (sometimes called the sequencing layer) mediates between the reactive and the deliberative components, resulting in a three-layer architecture.
3.4.1 3-Tiered (3T)
3T is a three-layered (tiered) architecture [Bonasso et al., 1995] with skills, sequencing and planning layers (figure 3.14).
- The Planner constructs partially ordered plans, listing tasks for the robot to perform according to some goals. It can reason in depth about goals, resources and timing constraints by using a planning system, the Adversarial Planner (AP). It is the Planner's job to select a RAP (Reactive Action Package) to execute each task.
- The Sequencing tier includes the RAPs; each of the tasks constructed by the Planner corresponds to one or more sets of sequenced actions, or RAPs. The job of the Sequencing tier is to decompose a selected RAP into other RAPs and, when it is indivisible, to activate the corresponding set of skills in the Skills tier. Additionally, a set of event monitors is activated in the Skills tier to notify the sequencing layer of the occurrence of certain conditions.
- The Skills (reactive) tier includes a dynamically reprogrammable set of reactive skills coordinated by a skill manager. The Sequencing tier will terminate or replace actions according to enabled event monitors or timeouts.
Skills form the robot-specific interface with the world, handling the real-time transformation of desired states into continuous control of motors and interpretation of sensors. Skill development should be robot-independent, because the physical properties of robots change while the interface between the Skills and Sequencing tiers should remain the same. Skills should be capable of being enabled and disabled in any combination from the Sequencing tier; to provide this, a skill manager is employed.
The Sequencer is the RAP interpreter, where a RAP is simply a description of how to accomplish a task in the world, under a variety of conditions, using discrete steps. Some statements may cause the RAP interpreter to block a branch of the task execution (while expanding it) until a reply is received from the skill manager. Replies are produced by special skills called events (event monitors).
The Planner should operate at the highest level of abstraction possible, so as to make its problem space as small as possible. Thus, it should not have to deal with tasks that can be routinely specified as sequences of common robotic skills.
Applications of 3T include: a mobile robot that recognizes people; a trash-collecting robot without any planner, in which recovery mechanisms and memory in the RAPs prevent the robot from getting stuck in any situation; and a mobile robot that navigates office buildings, where the planner is used, for instance, to find a path to the elevator, to re-plan the robot's path if a doorway is blocked, and to reevaluate the plan if no deadlines are violated.
3.4.2 Aura
AuRA stands for Autonomous Robot Architecture and was developed by Arkin in 1986 [Arkin and Balch, 1997]. It is based on hierarchical components:
- Mission planner: interface to the human commander
Figure 3.14: 3T intelligent control architecture
- Spatial reasoner (navigator): cartographic knowledge stored in memory, used to construct paths (A*)
- Plan sequencer (pilot): translates the path into motor behaviors
AuRA uses schemas as its reactive component. Once reactive execution begins, the deliberative component is not reactivated until failure (lack of progress). Some of its principles are modularity, flexibility and generalizability.
3.4.3 Atlantis
Atlantis (Three-Layer Architecture for Navigating Through Intricate Situations) [Gat, 1992] is, like the subsumption architecture, built in layers, as shown in figure 3.16. In Atlantis, however, all instantiations of the architecture have the same three layers, each of which always performs the same duty. This architecture is both asynchronous and heterogeneous. None of the layers is in charge of the others, and activity is spread throughout
Figure 3.15: Aura Architecture
the architecture.
The Control Layer directly reads the sensors and sends reactive commands to the effectors based on the readings. The stimulus-response mapping is given to it by the sequencing layer. It is implemented in ALFA, a LISP-based programming language.
The Sequencing Layer has a higher-level view of the robot's goals than the control layer. It tells the control layer below it when to start and stop actions.
The Deliberative Layer responds to requests from the sequencing layer to perform deliberative computations. It consists of traditional LISP-based AI planning algorithms specific to the task at hand. The planner's output is viewed only as advice to the sequencing layer: it is not necessarily followed or implemented verbatim.
3.4.4 Saphira
The Saphira architecture [Konolige et al., 1997] is an integrated sensing and control system for robotics applications (figure 3.17). Perceptual routines are on the left, action routines on the right. The vertical dimension gives an indication of the cognitive level of processing, with high-level behaviors and perceptual routines at the top. Control is coordinated by PRS-Lite, which instantiates routines for navigation, planning, execution monitoring, and perceptual coordination. At the center is the LPS (Local Perception Space), a geometric representation of the space around the robot. Because different tasks demand different representations, the LPS is designed to accommodate various levels of interpretation of sensor information, as well as a priori information from sources such as maps. The LPS gives the robot an awareness of its immediate environment, and is critical in the tasks of fusing sensor information, planning local movement, and integrating map information.
At its center, the Saphira architecture thus includes the Local Perception Space (LPS),
containing different levels of representation (from occupancy grids, to geometric
representations, to high-level artifacts of the world). Internal
Figure 3.16: Atlantis Architecture
artifacts (such as a chair or a doorway) are viewed as the robot's beliefs about the
environment. Perception and action modules, organized in levels of complexity, all interact
with the LPS. As another central module, the Procedural Reasoning System (PRS) is used
for more complex behaviors and in interaction with modules such as speech input, the
schema library and the topological planner.
At the control level, Saphira is behavior-based; behaviors are written and combined
using techniques based on fuzzy logic. These rules produce a desirability function for each
behavior, the fuzzy connectives are used to combine different behaviors based on their
contexts (context-dependent blending), and defuzzification is used to choose the preferred
control among the selected behaviors, generally by taking an average.
Basic behaviors take their inputs from the LPS and use information like occupancy
data. Some behaviors, like goal-seeking behaviors, take input from artifacts. For example,
the behavior cross-door uses the coordinates of a door artifact in the LPS as an input.
Basic behaviors are combined to form complex behaviors, where the outputs of the desirability
functions for the behaviors are combined (for example through a minimum operation, as defined
by context-dependent blending and defuzzification). In this way, for example, if there
is an obstacle ahead and there are two choices, turn left or right, the selection is done
according to the overall goal position.
The coordination mechanism is very similar to the potential fields method.
PRS-Lite: management involves determining when to activate/deactivate behaviors
as part of the execution of a task, as well as coordinating them with other activities
in the system. It provides the smooth integration of goal-driven and event-driven activity,
while remaining responsive to unexpected changes in the world. The representational basis
of PRS-Lite is the activity schema, a parameterized finite-state machine whose arcs are
labelled with goals to be achieved. Each schema embodies procedural knowledge of how
to attain some objective via a sequence of subgoals, perceptual checks, primitive actions,
and behaviors.
Figure 3.17: Saphira system architecture
Reactive behaviors take their inputs directly from sensor readings, while more
goal-directed behaviors such as wall-following can often benefit from using artifacts (e.g. a
corridor artifact). This is especially true when sensors give only sporadic and uncertain information.
3.4.5 DD&P
DD&P [Schonherr and Hertzberg, 2002] is a hybrid, two-layered architecture, composed
of a reactive layer based on DD (Dual Dynamics), which is a set of conceptually independent
behaviors, and a planning component. The planning component gives directions to behaviors
located at different levels; it also defines the way in which an operator chosen from
the currently active plan influences the current working of the DD part, and how
information from DD and the sensors goes into the planner's world model.
In Dual Dynamics, behaviors are leveled and interact through shared variables. Every
individual behavior is regulated by its activation dynamics, which describes its degree of
activation and is calculated from some sensor values, some other behaviors and the influence
from planning (to a greater or lesser degree). Only behaviors at the bottom level are allowed to
directly influence actuators through the output of their target dynamics. For every control variable
of some actuator, a product term combines the target and activation dynamics of each level-0
behavior. Activation of a motor is done by summation of these control variables, where the
product term is used as a gain. Direct influence from a higher level is restricted to the next lower
level; behaviors are regulated by input from peers and the next-higher level.
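The level-0 combination rule just described can be sketched numerically (an illustration only; the function name and values are not from DD&P):

```python
# Sketch of the Dual Dynamics combination rule described above (illustrative,
# not the original DD&P code): each level-0 behavior has an activation
# dynamics value alpha in [0, 1] and a target dynamics value tau (what it
# wants the actuator to do). The product alpha * tau acts as a gain-weighted
# contribution, and the motor control variable is the sum over all
# level-0 behaviors.

def dd_motor_output(behaviors):
    """behaviors: list of (activation, target) pairs of level-0 behaviors."""
    return sum(activation * target for activation, target in behaviors)

# Example: a fully active 'advance' behavior wants speed 1.0, while a weakly
# activated 'avoid' behavior wants to back off (-0.5); the result is a
# smooth blend rather than a hard switch between the two behaviors.
speed = dd_motor_output([(1.0, 1.0), (0.2, -0.5)])
```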
In DD&P, plan modules can affect any behavior at any level. Even the highest behaviors
must obey the activation/target dynamics structure, but there is no such restriction for
Figure 3.18: DD&P Controller
planning. An off-the-shelf propositional planner is used here: IPP. Given a set of mission
goals by a human user, the planner is supposed to generate and keep updated an ordered
set of abstract actions as the current plan, and to determine at each of its time cycles
the operator of that plan that it proposes to execute, given its current knowledge about
the environment (in the KB, Knowledge Base). Executing an operator means biasing
behaviors rather than exerting hard control. An operator chosen for execution stimulates
(++) or mutes (−−) behaviors. Information flows from the DD part to the deliberative part in
the form of the activation history of the behaviors, yielding an image of the environment as
perceived through the eyes of the useful behavior.
Figure 3.18 shows the schema of a DD&P controller. The left part is a two-level DD
behavior set. Arrows among the behaviors represent (possible) activation flow; current
activation is represented by the blankness level of the behaviors. Arrows meeting behaviors
from the left represent sensor input or input from behaviors of the same level.
3.5 Modular Robot Architectures
3.5.1 CONRO
From the University of Southern California, by P. Will.
CONRO [Shen et al., 2000] [Shen et al., 2002] [Salemi et al., 2004] presents a distributed
control mechanism inspired by the concept of hormones in biological systems.
Hormones are special messages that can trigger different actions in different modules.
They can be used to coordinate motions and reconfiguration in the context of limited
communications and dynamic network topologies.
A self-reconfigurable robot can be viewed as a network of autonomous systems with
communication links between modules.
Communication description
Each module has a unique ID and maintains a set of active communication links. To send
a message from a module P to a module Q in this network, P sends the message to all
of its active links. Upon receiving a message, a module will either keep the message or
relay it to all of its active links except the link through which the message was
received. Loops must be prevented.
Master vs masterless
Master-controlled systems are characterized by:
Advantage: synchronization
Disadvantage: communication cost
And masterless control systems by:
Advantage: free from communication, scalable
Disadvantage: loss of robustness; synchronization based on an internal clock
The CONRO control lies between the two. It reduces the cost of communication
while keeping some degree of synchronization.
Hormones
Formally, a hormone message is a special type of message that has three important
properties:
1. a hormone has no particular destination but floats in a distributed system
2. a hormone has a lifetime
3. the same hormone may trigger different actions at different receiving sites, for
example: modification and relay of the hormone, execution of certain local actions,
or destruction of the received hormone.
A hormone is terminated in three possible ways:
when it reaches its destination
when its lifetime expires
when it has nowhere to go (e.g., it arrives at a module that has no outlinks).
Since no hormone can live forever, this prevents them from circulating in the network
indefinitely.
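These propagation and termination rules can be sketched as follows (a simplified illustration; the module names and the `propagate` helper are hypothetical, and the example topology is a loop-free chain, side-stepping CONRO's loop-prevention issue):

```python
# Minimal sketch of hormone-style message flooding over a module network,
# following the properties above: the hormone has no fixed destination,
# carries a lifetime decremented at every hop, and each module acts on it
# locally before relaying it on all active links except the one it arrived
# through. It terminates when the lifetime expires or it has nowhere to go.

def propagate(topology, start, lifetime, on_receive):
    """topology: dict mapping each module to its active links (neighbours)."""
    queue = [(start, None, lifetime)]          # (module, came_from, ttl)
    while queue:
        module, came_from, ttl = queue.pop(0)
        on_receive(module)                     # local action at this site
        if ttl == 0:
            continue                           # lifetime expired: terminate
        for neighbour in topology[module]:
            if neighbour != came_from:         # never resend on the in-link
                queue.append((neighbour, module, ttl - 1))

# Chain of four modules; a hormone with lifetime 2 reaches m0..m2 but not m3.
links = {"m0": ["m1"], "m1": ["m0", "m2"], "m2": ["m1", "m3"], "m3": ["m2"]}
visited = []
propagate(links, "m0", 2, visited.append)
```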
Hormone classes
Hormone messages are classified into three classes:
1. Hormones for action specification, e.g. a hormone h(x) that will cause a receiving module to
relay its current DOF1 position to the next module and then move its DOF1 to the
x position.
2. Hormones for synchronization: since hormones can wait at a site for the occurrence
of certain events before traveling further, they can be used as tokens for synchronizing
events between modules. They can be designed to ensure that all modules finish their
job before the next step begins.
3. Hormones for dynamic grouping of modules. In any distributed system, it is often
useful to define a set of entities dynamically as a special group for certain operations.
Hormones can be used to define sets on the fly. Each module in the self-reconfigurable
robot has a set of local variables mi that can be marked dynamically by the module
itself.
Hormone management
No module can be the generator of two or more hormone sequences simultaneously. A
module can become the generator of a hormone sequence in two ways:
Self-promoted (e.g. triggered by sensors)
Instructed (e.g. by a hormone)
3.5.2 M-TRAN
From the National Institute of Advanced Industrial Science and Technology (AIST) and
Tokyo Institute of Technology, by S. Murata [Murata et al., 2002] [Kurokawa et al., 2003]
[Kamimura et al., 2004] [Kurokawa et al., 2005] [Yoshida et al., 2003].
The M-TRAN II controller is distributed, i.e., the controllers are distributed among all the
modules. However, self-reconfiguration motion is achieved by global synchronization. Each
module has its fixed role and its own sequence data. One module is selected as a master
and the others work as slaves. Global synchronization is maintained by the master's polling.
The system admits various types of controllers, centralized/decentralized
and synchronous/asynchronous, and consists of three layers.
1. The bottom layer contains several functions of the slave controllers directly related
to the hardware and an interface between the master and slaves. They include
PID/trajectory control of motors, connection control, and data acquisition by several
sensors.
2. The middle layer is for communication among modules and realizes mainly two
functions: remote control of other modules and a shared memory, over CAN communication.
3. In the upper layer, a sequence program designed by the kinematics simulator is
interpreted and executed.
Figure 3.19: Control Architecture of M-TRAN
3.5.3 Polybot
From the Xerox Palo Alto Research Center, by M. Yim [Yim et al., 2000] [Yim et al.,
2001].
It describes a software architecture that features a multi-master/multi-slave structure
in a multithreaded environment, with three layers of communication protocol.
1. The first layer conforms to the data-link communication on the physical media.
2. The second layer provides higher-level data integrity between any two addressable
nodes, with network routing.
3. The third layer defines the application middleware components and protocol, based
on an attribute/service model.
MDCN
MDCN stands for Massively Distributed Control Nets. It is a CAN-bus-based protocol,
which means low price, multiple sources, highly robust performance and already
widespread acceptance. Its main features are:
Addressing of up to 254 nodes and groups in standard CAN format and up to 100,000
in extended format.
Three types of communication: individual, group and broadcast, with eight priority
levels.
I/O (node-to-node) and port (point/process-to-point/process) communications, where the
I/O type is mostly reserved for system processes with high priorities and short message
sizes that can be encoded in one data frame, and the port type is for user applications,
with lower priorities and possibly large message sizes encoded in many data
frames.
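As a rough illustration of these two communication types (this is not the actual MDCN implementation; the frame size and API are invented), a priority queue can model CAN-style arbitration, with a short high-priority I/O message overtaking a multi-frame port message:

```python
# Illustrative sketch of MDCN-style traffic: a high-priority single-frame
# I/O message and a lower-priority port message split across many 8-byte
# frames. A heap orders pending frames so that, as in CAN bus arbitration,
# the lowest priority number is transmitted first.
import heapq

def send(bus, priority, kind, payload, frame_size=8):
    """Split payload into frames and queue them with the given priority."""
    frames = [payload[i:i + frame_size]
              for i in range(0, len(payload), frame_size)] or [b""]
    for seq, frame in enumerate(frames):
        heapq.heappush(bus, (priority, seq, kind, frame))

bus = []
send(bus, 5, "port", b"a long user-application message")  # four data frames
send(bus, 0, "io", b"stop")                               # one frame, wins
first = heapq.heappop(bus)                                # the I/O frame
```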
Figure 3.20: Polybot control scheme. (a) Nodes and Segments in Polybot; (b) Services (squares) and Attributes (ellipses)
The CAN bus has a limitation on the number of CAN controllers on one network, so MDCN
bridging has been implemented to transfer messages between multiple CAN buses.
Attribute/Service Model
Multi-threading is essential for efficient handling of multiple hardware requests and
computation in real time. Global tasks such as locomotion and reconfiguration require
communication between different modules.
The Attribute/Service model is a general and simple framework for applications that
require programming with multiple tasks/threads on multiple processors. It is a
component-based architecture, where components are either attributes or services
distributed over the communication network.
Attributes are abstractions for shared memory/resources among multiple threads
located in one or more processors (e.g. the desired joint angle). Services are abstractions
of hardware or software routines. In general, hardware services correspond to settings in
registers controlling hardware peripherals, and software services are threads that run for
particular tasks. An example of a hardware service is actuating a latch for docking.
Both attributes and services are accessible either locally or remotely.
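The distinction can be sketched in a few lines (class and function names are illustrative, not PolyBot's actual API): an attribute as a lock-protected shared value, and a software service as a thread that reads and writes it:

```python
# Rough sketch of the Attribute/Service idea (all names hypothetical): an
# attribute is a lock-protected shared value that several threads can read
# and write; a software service is a routine, here a thread that drives a
# joint toward the 'desired_angle' attribute one step per control cycle.
import threading

class Attribute:
    def __init__(self, value):
        self._value = value
        self._lock = threading.Lock()

    def get(self):
        with self._lock:
            return self._value

    def set(self, value):
        with self._lock:
            self._value = value

def joint_service(desired, actual, steps):
    """Software service: step the joint halfway to the goal each cycle."""
    for _ in range(steps):
        actual.set(actual.get() + 0.5 * (desired.get() - actual.get()))

desired_angle = Attribute(90.0)   # shared attribute, e.g. set by a master
actual_angle = Attribute(0.0)
worker = threading.Thread(target=joint_service,
                          args=(desired_angle, actual_angle, 20))
worker.start()
worker.join()                     # actual_angle is now very close to 90.0
```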
Example
In PolyBot G3, masters are running on nodes and slaves are running on segments. Both
masters and slaves are multi-threaded.
Masters and slaves run some common components, such as Attribute/Service servers,
IR ranging and other local sensing attached to the module.
Masters also run MDCN routers, global computation such as planning and inverse
kinematics, global environment sensing, etc.
Slaves run motor control and local gait-table generation.
PolyBot tasks are divided into three categories: locomotion, manipulation and
reconfiguration, where locomotion is essentially a dual of reconfiguration.
3.6 Adaptive Behavior
3.6.1 Reinforcement Learning
Reinforcement learning (RL) is a class of learning algorithms where a scalar evaluation
(reward) of the performance of the algorithm is available from the interaction with the
environment. The goal of an RL algorithm is to maximize the expected reward by adjusting
some value functions. This adjustment determines the control policy that is being
applied. The evaluation is generated by a reinforcement function which is located in the
environment.
In modular Q-learning, the learning problem is divided into a set of simpler problems, each
learned separately by a Q-learning module. A Q-learning Action Selection Mechanism
arbitrates among the modules.
In W-learning (W = weight), each module/behavior recommends an action with some
weight. The action with the highest weight is selected and executed. The W-values are
then modified based on the difference between the winning action and the action desired
by the behavior.
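A minimal tabular Q-learning update can be sketched on an invented toy task (the one-dimensional environment and all parameter values below are illustrative only):

```python
# Tabular Q-learning sketch: an agent on a 5-state line learns to walk
# right toward a rewarded goal state. Each step, Q(s, a) is nudged toward
# the received reward plus the discounted best Q-value of the next state.
import random

def q_learning(n_states=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.1):
    actions = (-1, +1)
    q = {(s, a): 0.0 for s in range(n_states) for a in actions}
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy action selection
            if random.random() < eps:
                a = random.choice(actions)
            else:
                a = max(actions, key=lambda act: q[(s, act)])
            s2 = min(max(s + a, 0), n_states - 1)
            r = 1.0 if s2 == n_states - 1 else 0.0   # scalar reward signal
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in actions)
                                  - q[(s, a)])
            s = s2
    return q

random.seed(0)
q = q_learning()   # the learned values now prefer moving right everywhere
```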
3.6.2 Neural Networks
Many roboticists have experimented with neural networks for controlling robots [Mad00]
[FN98] [CM96]. The behavior produced by a neural network is an emergent property of
the weights of the connections and the positions of motors and sensors on the real robot.
One advantage of neural networks is that a robust solution can often be found due to
the neurons' natural way of handling noise. One disadvantage is that it is difficult to
set the weights by hand, and that simulation time of the network increases linearly with
the number of connections between nodes, that is O(n²) with the number of nodes, and
therefore does not scale too well. Another disadvantage is that, since the output of a
neural network is produced by an interplay of the weights of many different connections,
it can be very difficult to read the network, as well as to write a network by hand.
Because of this, different machine learning techniques are often used to set the weights,
such as back-propagation and artificial evolution. When using neural networks to control
a robot, two different methods can be used: the neural network can be directly connected
to the motors, or it can be used to select a preprogrammed action.
Direct control. A feed-forward neural network can be set up to connect the sensors
of a robot with the motors, as shown in figure 3.21(a). In this approach, the speeds of
the motors are determined by the activation of the output nodes of the neural network.
Some simple transformation is applied to the activation of the neural network, to give
appropriate values for the speeds of the motors, but otherwise the motors are directly
controlled by the neural network. This approach gives a purely reactive behavior by a
one-to-one mapping from sensor space to motor space, and can be used to implement simple
Braitenberg vehicles.
Figure 3.21: Neural Networks Scheme
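The direct-control idea can be sketched with a hand-weighted single-layer network (the weights and sensor values are invented; the networks in the figure are more general and would normally have their weights learned):

```python
# Single-layer feed-forward mapping from two light sensors to two motor
# speeds. With crossed excitatory weights this reproduces a Braitenberg-
# style vehicle that turns toward a light source.

def forward(sensors, weights):
    """Each motor speed is a weighted sum of all sensor activations."""
    return [sum(w * s for w, s in zip(row, sensors)) for row in weights]

weights = [[0.2, 1.0],   # left motor driven mainly by the right sensor
           [1.0, 0.2]]   # right motor driven mainly by the left sensor

# Light mostly on the left sensor: the right motor spins faster, so the
# robot turns left, toward the light. No planning or state is involved.
left_motor, right_motor = forward([0.9, 0.1], weights)
```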
Action selection. Instead of using the output of the neural network to set the
speeds of the motors directly, the output can also be used to decide which of a set of
pre-programmed actions should be taken, as shown in figure 3.21(b). The action
corresponding to the output node with the highest activation gets to control the robot. In this
approach, the role of the neural network is not to control the motors directly, but to select
which action is to control the robot.
Multilayer/recurrent networks. Both neural networks with recurrent connections
and multi-layered neural networks can be used for both direct control and action selection
instead of feed-forward neural networks. A multi-layered neural network with a recurrent
connection is shown in figure 3.21(c). Using multi-layered networks increases the
computing capability of the network, meaning that more complex reactive behaviors can be
represented. Recurrent connections enable the control system to have some internal state,
giving some sort of memory to the neural network.
3.6.3 Fuzzy Behavioral Control
Fuzzy logic eliminates some of the problems of rule-based behavior (a set of if-then
statements), such as abrupt changes in the output. The general method is to use fuzzy rules
to produce a fuzzy result. The result produced by the rules is then defuzzified by an
algorithm, producing a non-fuzzy output. The effect is that the resulting output is
smoother than if a normal rule-based approach were used. Fuzzy logic can be seen as a hybrid
between feed-forward neural networks and rule-based behavior, since the output is a fusion
of the outputs of several rules at one time, potentially giving a smoother change in
motor output as the input is changed. Figure 3.22 shows a fuzzy logic system architecture.
The input (A) is processed by an array of fuzzy rules, which produces a fuzzy output (B),
which is defuzzified into an ordinary (crisp) output. The fuzzy output (B) is calculated
from the fuzzy outputs of each of the fuzzy rules.
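A toy version of this pipeline (the rules and membership functions are invented for illustration) shows the smoothing effect: two overlapping triangular rules map an obstacle distance to a speed, and the crisp output is the membership-weighted average of the rule outputs:

```python
# Minimal fuzzy controller in the spirit of figure 3.22: fuzzify the input
# with triangular membership functions, apply two rules, then defuzzify by
# a weighted average, so the output varies smoothly with the input instead
# of jumping at a crisp threshold.

def triangle(x, left, peak, right):
    """Triangular membership degree of x over [left, peak, right]."""
    if x <= left or x >= right:
        return 0.0
    return (x - left) / (peak - left) if x < peak else (right - x) / (right - peak)

def fuzzy_speed(distance):
    rules = [
        (triangle(distance, -1.0, 0.0, 1.0), 0.0),   # "near" -> stop
        (triangle(distance, 0.0, 1.0, 2.0), 1.0),    # "far"  -> full speed
    ]
    total = sum(degree for degree, _ in rules)
    # Defuzzification: membership-weighted average of the rule outputs
    return sum(degree * out for degree, out in rules) / total if total else 0.0
```

For a distance halfway between "near" and "far", both rules fire with equal strength and the blended speed is halfway between the two rule outputs.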
Figure 3.22: Fuzzy Logic
3.6.4 Genetic Algorithms
Genetic Algorithms (GAs) form a class of stochastic search methods in which a
high-quality solution is found by applying a set of biologically inspired operators to individual
points within a search space, yielding better generations of solutions over an evolutionary
timescale. The fitness of each member of the population is computed using an evaluation
function, called the fitness function, that measures how well each individual performs
with respect to the task. The population's best members are rewarded according to their
fitness, and poorly performing individuals are punished or deleted.
It is important to note that this method does not guarantee an optimal global solution,
but it generally produces high-quality solutions.
Two elements are required for any problem before a genetic algorithm can be used to
search for a solution. First, there must be a method of representing a solution in a manner
that can be manipulated by the algorithm; traditionally, a solution can be represented by
a string of bits, numbers or characters. Second, there must be some method of measuring
the quality of any proposed solution: the fitness function.
A GA is composed of the initialization, selection, reproduction and termination steps.
Initialization: Initially, many individual solutions are randomly generated to form an
initial population. The population size depends on the nature of the problem, but typically
ranges from several hundred to several thousand possible solutions. Traditionally, the population
is generated randomly, covering the entire range of possible solutions (the search space).
Occasionally, the solutions may be seeded in areas where optimal solutions are likely to
be found.
Selection: During each successive epoch, a proportion of the existing population is
selected to breed a new generation. Individual solutions are selected through a
fitness-based process, where fitter solutions (as measured by a fitness function) are typically
more likely to be selected. Certain selection methods rate the fitness of each solution and
preferentially select the best solutions. Other methods rate only a random sample of the
population, as the former process may be very time-consuming.
Figure 3.23: GA scheme in M-TRAN
Most selection functions are stochastic and designed so that a small proportion of less fit
solutions are selected. This helps keep the diversity of the population large, preventing
premature convergence on poor solutions. Popular and well-studied selection methods
include roulette-wheel selection and tournament selection.
Reproduction: The next step is to generate a second-generation population of
solutions from those selected, through genetic operators: crossover (or recombination) and
mutation.
For each new solution to be produced, a pair of parent solutions is selected for
breeding from the pool selected previously. By producing a child solution using the
above methods of crossover and mutation, a new solution is created which typically shares
many of the characteristics of its parents. New parents are selected for each child, and
the process continues until a new population of solutions of appropriate size is generated.
These processes ultimately result in a next-generation population of chromosomes
that is different from the initial generation. Generally the average fitness will have
increased by this procedure, since only the best organisms from the first
generation are selected for breeding.
Termination: This generational process is repeated until a termination condition has
been reached. Common terminating conditions are:
A solution is found that satisfies minimum criteria
A fixed number of generations is reached
The allocated budget (computation time/money) is reached
The highest-ranking solution's fitness has reached a plateau, such that
successive iterations no longer produce better results
Manual inspection
Combinations of the above
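The four steps can be put together in a compact sketch (the bit-string encoding and the "count the ones" fitness function are invented for illustration, far simpler than the controllers evolved for real modular robots):

```python
# Compact GA following the steps above: random initialization, tournament
# selection, one-point crossover with bit-flip mutation, and termination
# after a fixed number of generations. Fitness = number of 1-bits.
import random

def genetic_algorithm(bits=20, pop_size=40, generations=60,
                      mutation_rate=0.02, seed=1):
    rng = random.Random(seed)
    fitness = sum   # each individual is a list of 0/1 genes
    # Initialization: random population covering the search space
    pop = [[rng.randint(0, 1) for _ in range(bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def select():
            # Selection: tournament of two, the fitter individual wins
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        next_pop = []
        while len(next_pop) < pop_size:
            # Reproduction: one-point crossover of two parents...
            p1, p2 = select(), select()
            cut = rng.randrange(1, bits)
            child = p1[:cut] + p2[cut:]
            # ...plus bit-flip mutation on each gene
            child = [1 - g if rng.random() < mutation_rate else g
                     for g in child]
            next_pop.append(child)
        pop = next_pop
    # Termination: fixed number of generations; return the best individual
    return max(pop, key=fitness)

best = genetic_algorithm()
```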
GAs can be found in many robots, used to develop their controllers or algorithms. In M-TRAN
[Kamimura et al., 2003], an automatic locomotion generation method (called
ALPG) is used, aimed at producing locomotion for arbitrary module configurations using a neural
oscillator as a model of a CPG (Central Pattern Generator) and a Genetic Algorithm for evolving
its parameters (figure 3.23).
ATRON also makes use of GAs, to see how well the collective of modules performs
a given task when artificial evolution is applied to develop the individual controllers
[Ostergaard and Lund, 2003]. The evolutionary algorithm was a simple GA working on
a string of bytes. Each byte is considered one gene, and mutation operations can either
replace the byte with a random byte, flip a bit, or add a truncated Gaussian-distributed
random byte to the gene. A population size of 500 is used, with the best 50% of the
individuals used as candidates for reproduction (rank-based), a mutation rate of 5% per
gene, two-parent 10-point crossover and one elite individual.
The fitness function (for maximization) was simply the sum of the x-coordinates of the
modules integrated over time (200 time steps), to make the modules perform a simple
locomotion task along the x-axis.
3.7 Conclusions
This chapter has been dedicated to describing control systems and algorithms suitable for
modular microrobots. Several control architectures have been presented, with
special emphasis on behavior-based architectures. Their main characteristics have been
explained and the differences with respect to deliberative, reactive and hybrid architectures
have been pointed out.
Behavior-based architectures are especially suited for microrobots because they offer
the possibility to react in real time to the unforeseen (very important in field robotics), can
be coded in simple procedures that don't need powerful hardware to run (very important in
a microrobot), and are still able to perform high-level control.
A review of the most important behavior-based architectures has been given, from
purely reactive architectures like subsumption to more hybrid ones like motor schemas,
activation networks or DAMN. State-of-the-art architectures like CAMPOUT have also been
reviewed.
The subsumption architecture is very interesting from the point of view of fast response
to external events, since it is a purely reactive architecture. Motor schemas introduce the
concepts of motor and perceptual schemas, separating actuator-related functions (motor
schemas) from sensor-related functions (perceptual schemas), and presenting the use of
perceptual schemas as a recursive information-generation process that may lead to a kind of
high-level control. Activation networks present the concept of activation preconditions
of behaviors, that is, the conditions that have to be fulfilled for a behavior to take
control. DAMN introduces the concept of an arbiter between the behaviors' votes that takes
into consideration the objectives and priorities. Thus, it is well suited for the integration
of high-level deliberative control with low-level reactive behaviors. Finally, CAMPOUT
is a very interesting architecture because it integrates different types of behaviors
(primitive, composite, communication, coordination, etc.) and different arbitration mechanisms
(priority-based, state-based, voting, fuzzy, etc.), a clear example of the great flexibility
provided by the use of behavior-based architectures.
Most of the hybrid architectures included in this section are based on three layers:
reactive, middle (sequencer), and deliberative. There is a clear tendency in control
design towards this three-layer scheme. This underlines the importance of a layer that links the
reactive and deliberative parts of the architecture. Although the architecture proposed
in this thesis does not follow a defined three-layer scheme, it also has
a middle layer that interconnects (translates between) the embedded control and the central
control.
Regarding modular robot architectures, three of them have been reviewed: CONRO,
Polybot and M-TRAN. CONRO presents the concept of hormones, special messages that
can trigger different actions in different modules. This is similar in Microtub, because
some commands are sent to all modules, but each of them adapts the command to its
characteristics. M-TRAN shows two interesting features: one is that although it uses
distributed control, for some tasks like reconfigurations central control is used, showing
how complicated (or even impossible) it is to have a purely distributed mechanism. The
other feature is a three-layer architecture similar to Microtub's, with low-level control, a middle
layer for communication, and high-level control. Polybot presents the attribute/service
model, especially designed for complex tasks that require communication between different
modules.
Finally, a brief summary of adaptive-behavior techniques has been given, including
reinforcement learning, neural networks and fuzzy control. An extended description of
genetic algorithms (with examples of their use in modular robots like M-TRAN and ATRON)
has been given, since they are used in this thesis.
Chapter 4
Electromechanical design
Design can be art. Design can be aesthetics. Design is so simple, that's why it is so
complicated.
Paul Rand
In this chapter, the microrobotic modules used in this thesis will be described. Their
mechanical principles, their electronics, their concepts of work, the different versions they
have been through and the reasons why they have been designed will be covered in the
next sections.
Some modules have been designed and built. They are the rotation module (indeed it
is a double rotation module, but for simplicity it will be called rotation module) v1 and
v2, the helicoidal module v1 and v2, the support module v1, v1.1 and v2, the extension
module v1 and v2, the camera module v1 and v2, the contact module (it is included
in the camera module v2) and the battery module. Some others are still in the design
or conceptual phase: they have been designed but not built yet. These
are the SMA-based module (for which there is already a prototype), the traveler module (in the
design phase) and the sensor module (in a conceptual phase). Table 4.1 shows the main
characteristics of the developed modules.
The most important characteristic of all the modules is their tiny diameter, 27 mm, the
smallest diameter found in a robot of this kind. Some parts are thinner than 1 mm.
As an example, a detail of one of the wheels made for the helicoidal modules can be seen
in fig. 4.1.
The reason why so many different modules have been built is to implement
different locomotion gaits, depending on the environment the robot is moving in. These
gaits are: helicoidal (with the helicoidal module), snake-like (with the rotation module),
inchworm (with the support and extension modules) and a combination of all or some of
them.
Due to the narrowness of the pipes, it is not possible to rearrange the position of
the modules, so it is important that the microrobot can choose amongst different gaits
depending on the stretch of pipe.
Module           Length [mm]   Diameter [mm]   Weight [g]
Camera v2            25            27              6.5
Support v1           23.7          27             10.5
Support v2.1         27            27             12.5
Extension v2.1       30            27             16
Rotation v1          47            27             13
Rotation v2          64            27             27
Helicoidal v1        45            27             25
Helicoidal v2        28            27             15
Batteries            19.5          27             16.4
Table 4.1: Modules' main characteristics
Figure 4.1: Detail of a wheel of the helicoidal module
In the next sections the modules will be described: first the hardware, then the
electronics, and finally the configurations in which they can work.
4.1 Developed modules hardware description
4.1.1 Rotation Module
The rotation module has been designed with two purposes: the first one is to be used as
a rotation module for chained multi-configurable robots; the second one is for snake-like
robots.
Each module is composed of two servomotors (Cirrus CS-4.4), two connectors (one
male and one female) and the electronics for control, sensing and communication. Each
motor provides one degree of freedom; both together provide rotation in two perpendicular
planes.
Rotation module V1
In this first version (fig. 4.3(a)), the servomotors are derived from commercial ones but have
been redesigned to have a more compact size. The gearset of the servomotors has been
rearranged (see fig. 4.2) and placed in a new cover to save space. The torque given for
Figure 4.2: Gearhead design. (a) Default configuration; (b) Rearranged configuration
each degree of freedom is 0.43 kg·cm, down-shifting the torque given by the servomotors
(1.3 kg·cm) by 50%, an acceptable result. Each module is able to lift up to two other
modules of the same weight.
The working area of this module is shown in fig. 4.3(b).
One of the requirements in the design of the rotation module was to be light. Its parts
have been made in resin by stereolithography (that is enough resistant for a prototype)
and will be fabricated in a more resistant material in the future. The weight of each
module is about 57g.
The diameter of the module is less than 27mm and the total length, including connec-
tors is 46mm.
Rotation module V2
In order to make the rotation module as robust as possible, the second version uses a commercial servomotor, as opposed to previous modules in which a modified gearset was used (the modification made the module more compact but caused a lack of torque [Brunete et al., 2005]) (fig. 4.4). It is a CS-101 servomotor with a torque of 0.7 kg·cm at 4.8 V. The chassis protects the electronics, improving the robustness of the module.
The concepts and work area of this module are similar to those of the previous one.
Several of these modules put together can emulate the movement of a snake (fig. 4.5). The principles of motion will be described later in this chapter.
For more information about these modules see [Torres, 2006].
Kinematics
The homogeneous transformation matrix of the module has been defined following the Denavit-Hartenberg convention [Denavit and Hartenberg, 1955] [Sciavicco and Siciliano, 1996] (see eqs. 4.1 to 4.3), according to the reference system shown in fig. 4.6 and the parameters defined in table 4.2.

(a) Model (b) Work area
Figure 4.3: Rotation module V1

Figure 4.4: Rotation module v2 plus camera
$$
A^0_1(\theta_1) =
\begin{bmatrix}
\cos\theta_1 & 0 & \sin\theta_1 & L_2\cos\theta_1 \\
\sin\theta_1 & 0 & -\cos\theta_1 & L_2\sin\theta_1 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
\tag{4.1}
$$

$$
A^1_2(\theta_2) =
\begin{bmatrix}
\cos\theta_2 & 0 & -\sin\theta_2 & L_1\cos\theta_2 \\
\sin\theta_2 & 0 & +\cos\theta_2 & L_1\sin\theta_2 \\
0 & -1 & 0 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
\tag{4.2}
$$
Figure 4.5: Snake configuration plus camera
Table 4.2: Denavit-Hartenberg parameters

      a_i   d_i   α_i     θ_i
q1    L2    0      π/2    θ1
q2    L1    0     -π/2    θ2
$$
A^0_2 = A^0_1(\theta_1)\,A^1_2(\theta_2) =
\begin{bmatrix}
c_1 c_2 & -s_1 & -c_1 s_2 & L_1 c_1 c_2 + L_2 c_1 \\
s_1 c_2 & c_1 & -s_1 s_2 & L_1 s_1 c_2 + L_2 s_1 \\
s_2 & 0 & c_2 & L_1 s_2 \\
0 & 0 & 0 & 1
\end{bmatrix}
\tag{4.3}
$$
To refer the system to the coordinate system XYZ situated at the origin, it suffices to apply a translation along the X axis, obtaining matrix (4.4):
Figure 4.6: Reference system for Denavit-Hartenberg
$$
T\,A^0_2 =
\begin{bmatrix}
1 & 0 & 0 & -L_1 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
A^0_2 =
\begin{bmatrix}
c_1 c_2 & -s_1 & -c_1 s_2 & L_1 c_1 c_2 + L_2 c_1 - L_1 \\
s_1 c_2 & c_1 & -s_1 s_2 & L_1 s_1 c_2 + L_2 s_1 \\
s_2 & 0 & c_2 & L_1 s_2 \\
0 & 0 & 0 & 1
\end{bmatrix}
\tag{4.4}
$$
Thus, the coordinates of the end-effector (connector) would be:

$$x = L_2\cos\theta_1 + L_1\cos\theta_1\cos\theta_2 - L_1 \tag{4.5}$$
$$y = L_2\sin\theta_1 + L_1\sin\theta_1\cos\theta_2 \tag{4.6}$$
$$z = L_1\sin\theta_2 \tag{4.7}$$

from which it is possible to easily obtain the inverse kinematics equations:

$$\theta_2 = \arcsin(z/L_1) \tag{4.8}$$
$$\theta_1 = \arccos\!\left(\frac{x + L_1}{L_2 + L_1\cos\theta_2}\right) \tag{4.9}$$
The coordinate systems have been chosen so as to have the same orientation in the end-effector and in the reference system. In this way, if several modules are connected together, the homogeneous transformation matrix of the whole system can be computed by multiplying the homogeneous transformation matrices of the individual modules (eq. 4.10):

$$T^0_n = T^0_1\,T^1_2\cdots T^{n-1}_n \tag{4.10}$$
4.1.2 Support and Extension modules
Although these modules can be used separately, the extension and support modules (figures 4.7 and 4.10) were designed to work together to simulate the movement of an inchworm. The support module is used to fix the microrobot to the pipe, preventing it from sliding, while the extension module is used to extend the robot (make it go forward) and to turn right and left. A drive unit is composed of two support modules and one extension module (fig. 4.7). An advantage of this kind of motion is that the robot maintains a firm grip on the surface at all times, whereas other types of motion, for example helicoidal motion, can show a tendency to slip as the slope increases.
The support module can also be used together with other modules, for example the helicoidal and rotation modules, to provide a firm grip while the robot is turning.
Support and Extension modules V1
The first prototypes were designed to test the locomotion principle and how small the microrobot could be made. In this prototype all modules use a 21x13x9 mm linear microservomotor (Cirrus LS-3.0). It weighs 3.0 g, has a maximum deflection of 14 mm in 0.15 s and provides a maximum output force of 200 g. The support modules use one servo and the extension module uses two.
The support module consists of three rubber bands positioned around the module at 120° from each other, which are bent when the servomotor is activated, exerting a force against the walls of the pipe that keeps the module still.
The extension module consists of two arms (each of them driven by a servo) that allow expansion-contraction movements, as well as turns, depending on the relative position of the arms (fig. 4.7 b)).
When designing the mechanism, the first idea was to use two perpendicular bars leaning on a base panel on which they can rotate. If either bar pushes the panel, it turns. The panel can turn around an axis that is the perpendicular bisector of the segment joining the two contact points of the bars. If both bars push, the panel goes forward, and if both pull, it goes back.
However, there is a problem with this design: if the bar ends are not rigidly connected at the servo end, the system has one free degree of freedom; and if they are, the system breaks, because the distance between the bars would need to shorten when the panel turns, which is not possible with a rigid connection.
Thus, the first idea was changed into the mechanism shown in fig. 4.7 b). Having four bars makes it possible. There are two straight bars and two driver bars. All the joints have one degree of freedom (rotation). Together they give the base panel two degrees of freedom: one rotational and one translational. The straight bars are used to move and turn the base panel. The driver bars are used to avoid lateral displacement of the base panel.
The module has been tested in different pipes. The results obtained are:

Minimum pipe diameter: 22 mm
Maximum pipe diameter: 35 mm
Maximum angle of rotation of the extension module: 40°
Maximum lengthening: 7.5 mm
Average speed at 0°-90°: 2 mm/s
The main advantages of the servomotor used in these prototypes are its small size and its good torque. On the other hand, it is very fragile and some parts are weak, breaking easily under stress. Also, the position sent by PWM to the servo remains saved until a new one is sent, even after the power is switched off. This is a problem if the servo gets stuck, because switching off the power is not enough; it is necessary to send a new PWM position command. Finally, it does not have a cover, so the gears can touch the walls or other parts and break.
Support module V1.1
In order to improve the robustness of the 1.0 support module, a new support module was created (fig. 4.8). The linear servomotor was replaced by a rotational one (Cirrus CS-4.4) linked to two racks fixed to two plates. By turning the servomotor, the two plates can be expanded or contracted.
The mechanism works properly, but it slides inside the pipe because of the resin the plates are made of. The plates must be covered with a non-slippery material in order to use them.
The module was not used because of its large size. Since it is a support module with no self-propulsion or turning capability, it was necessary to make it smaller.
Support and Extension modules V2
In order to solve the problems of previous versions, another prototype was created with a new design and parts, but the same principle of movement. The second prototypes incorporate rotational servomotors instead of linear ones, increasing the robustness.

(a) Worm-like drive module (b) Detail of servomotor and parallel arms
Figure 4.7: Worm-like microrobot V1

Figure 4.8: Support module 1.1

First the support and extension modules 2.0 were created, and afterwards the support module 2.1.
The support module 2.0 (fig. 4.9) is based on a camera diaphragm mechanism. It is composed of three legs that can be folded or unfolded, as seen in fig. 4.9 a). This module presented one problem: the force that the legs exert against the wall was not uniform (one of them was linked directly to the servo and had more power), so the grip was not enough to keep the module fixed to the pipe and the module slid.
Thus a new support module was needed. The support module 2.1 (fig. 4.10) consists of four rubber bands positioned around the module at 90° from each other. In order to expand, the servomotor of the module pushes a ring where all the bands end; as a result the bands are bent and pressed against the pipe, so that the module grips it. This module was the most satisfactory of all, and thanks to the rubber bands the grip was really good.
The extension module 2.1 (see fig. 4.11) is based on a parallel robot composed of a four-bar linkage (two crank-connecting-rod mechanisms with a common slide bar) between the base and the top end, with a sliding bar in the center to eliminate one degree of freedom, the lateral displacement, as in the previous module. The relative movement of each arm (driven by a servomotor) changes the length of the module and the orientation of the top end. Consequently the module can extend and also turn. It is a design similar to the previous one but with rotational servos instead of linear ones.

Figure 4.9: Support module v2.0

Figure 4.10: Inchworm configuration based on v2.1 modules plus camera

For more information about these modules see [Santos, 2007].
Kinematics
The kinematics of the support module are very simple, but it is useful to know them in order to calculate the kinematics of the whole robot chain. They can be calculated with eq. 4.11, according to the axes shown in fig. 4.12, L3 being the length of the module.

$$
T =
\begin{bmatrix}
1 & 0 & 0 & L_3 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
\tag{4.11}
$$
(a) Model (b) Work area
Figure 4.11: Extension module detailed mechanism
The kinematics of the extension module are somewhat complex to calculate using Denavit-Hartenberg, so they have been calculated using a geometrical approach based on figure 4.13.
First of all, two new links are defined as the sum of the links of each arm, obtaining Y_A and Y_B. The coordinates of the first link (on the left, Y_A) are:
$$A_0 = (A_{0x}, A_{0y}) = \left(-\frac{L_6}{2},\; 0\right) \tag{4.12}$$
$$A_1 = (A_{1x}, A_{1y}) = \left(-\frac{L_5\cos\theta_F}{2},\; Y_F + \frac{L_5\sin\theta_F}{2}\right) \tag{4.13}$$

and the coordinates of the second link (on the right, Y_B) are:

$$B_0 = (B_{0x}, B_{0y}) = \left(\frac{L_6}{2},\; 0\right) \tag{4.14}$$
$$B_1 = (B_{1x}, B_{1y}) = \left(\frac{L_5\cos\theta_F}{2},\; Y_F - \frac{L_5\sin\theta_F}{2}\right) \tag{4.15}$$
Thus it is possible to calculate the modulus and argument of each vector:

$$|Y_A| = \sqrt{(A_{1x}-A_{0x})^2 + (A_{1y}-A_{0y})^2} \tag{4.16}$$
$$\theta_A = \arctan\frac{A_{1y}-A_{0y}}{A_{1x}-A_{0x}} \tag{4.17}$$
$$|Y_B| = \sqrt{(B_{1x}-B_{0x})^2 + (B_{1y}-B_{0y})^2} \tag{4.18}$$
$$\theta_B = \arctan\frac{B_{1y}-B_{0y}}{B_{1x}-B_{0x}} \tag{4.19}$$
Figure 4.12: Coordinate system for the kinematics of the support module
It is important to remember that the central point of the end connector (C in fig. 4.13 a)) always moves along a line, due to the mechanism to which it is attached (a bar that slides between the two servos, so it can only move along its axis). It holds that:

$$|Y_A|\cos\theta_A = |Y_B|\cos\theta_B \tag{4.20}$$
And finally, the inverse kinematic equations can be obtained by applying the law of cosines in the triangles (L_1, L_2, Y_A) and (L_3, L_4, Y_B):

$$q_1 = \theta_A - \arccos\frac{L_1^2 + |Y_A|^2 - L_2^2}{2\,L_1\,|Y_A|} \tag{4.21}$$
$$q_2 = \theta_B - \arccos\frac{L_3^2 + |Y_B|^2 - L_4^2}{2\,L_3\,|Y_B|} \tag{4.22}$$
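The inverse-kinematics chain for one arm (eqs. 4.12, 4.13, 4.16, 4.17 and 4.21) can be sketched as below. The link lengths are hypothetical values, not the module's real dimensions, and `atan2` is used instead of the plain arctangent of eq. 4.17 to keep the quadrant unambiguous. The check at the end verifies internal consistency: by construction of eq. 4.21, the cosine of the angle between L_1 and Y_A must reproduce the law-of-cosines ratio.

```python
import math

def arm_ik(theta_F, Y_F, L1, L2, L5, L6):
    """Inverse kinematics of the left arm of the extension module.

    Returns (q1, theta_A, mod_YA). Lengths must form a valid triangle
    (L1, L2, |Y_A|); the values used below are illustrative only.
    """
    # End points of the combined link Y_A (eqs. 4.12-4.13)
    A0 = (-L6 / 2.0, 0.0)
    A1 = (-L5 * math.cos(theta_F) / 2.0,
          Y_F + L5 * math.sin(theta_F) / 2.0)
    # Modulus and argument of Y_A (eqs. 4.16-4.17)
    mod_YA = math.hypot(A1[0] - A0[0], A1[1] - A0[1])
    theta_A = math.atan2(A1[1] - A0[1], A1[0] - A0[0])
    # Law of cosines in the triangle (L1, L2, Y_A) -- eq. 4.21
    q1 = theta_A - math.acos(
        (L1**2 + mod_YA**2 - L2**2) / (2.0 * L1 * mod_YA))
    return q1, theta_A, mod_YA

# Consistency check: cos(theta_A - q1) must equal the ratio in eq. 4.21.
q1, tA, m = arm_ik(math.radians(5), 20.0, 12.0, 14.0, 10.0, 16.0)
ratio = (12.0**2 + m**2 - 14.0**2) / (2.0 * 12.0 * m)
assert abs(math.cos(tA - q1) - ratio) < 1e-9
```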
Direct kinematics are a little more complicated to obtain. In order to make computation faster, an approximation has been made: the argument of Y_A and Y_B is considered to be always 90°. This is not true when θ_F ≠ 0, but the difference is negligible. Thus it is possible to write:
$$Y_A = L_1\cos(90° - q_1) + \sqrt{L_2^2 - L_1^2\sin^2(90° - q_1)} \tag{4.23}$$
$$Y_B = L_3\cos(90° - q_2) + \sqrt{L_4^2 - L_3^2\sin^2(90° - q_2)} \tag{4.24}$$
Figure 4.13: Kinematics diagrams of the extension module
Making use of equation 4.20, we obtain:

$$Y_A\cos\theta_A + L_5\cos\theta_F + Y_B\cos\theta_B = L_6 \tag{4.25}$$
$$Y_A\sin\theta_A + L_5\sin\theta_F - Y_B\sin\theta_B = 0 \tag{4.26}$$
And thus the direct kinematic equations are obtained:

$$\theta_F = \tan^{-1}\frac{Y_B\sin\theta_B - Y_A\sin\theta_A}{L_6 - Y_B\cos\theta_B - Y_A\cos\theta_A} \tag{4.27}$$
$$Y_F = \frac{Y_B\sin\theta_B + Y_A\sin\theta_A}{2} \tag{4.28}$$
4.1.3 Helicoidal drive module
The helicoidal module is so called because of the placement of its front wheels, which form a helix (fig. 4.14 a)). This module was designed to be a fast drive module. It is composed of two parts: the body and the rotating head. The wheels on the rotating head are distributed along the crown at a 15° angle with the vertical. When the head turns, it advances in a helicoidal movement that pulls the body of the microrobot. The wheels of the body help to keep the module centered in the pipe.

Figure 4.14: Helicoidal module v1

The fact that the head of the robot rotates around the robot axis makes it necessary to design a channel for electrical wires that runs through the entire robot to interconnect the front and rear parts of the robot.
Helicoidal module V1
In the first design (fig. 4.14 b)), the head is linked to a 3-phase brushless Maxon micromotor (model EC20, 20 mm) through a gearhead (fig. 4.14 c)), which has been designed to obtain the appropriate reduction (eq. 4.30, showing the combination of reduction stages) and speed. The wheels, their axes and the support system have been manufactured by micromachining, and the other parts (except for the gears) have been made using stereolithography.

$$ratio = ratio_{stage1} \cdot ratio_{stage2} \cdot ratio_{stage3} \tag{4.29}$$
$$ratio = \frac{26}{14} \cdot \frac{27}{9} \cdot \frac{44 + 13}{13} = 24.43 \tag{4.30}$$
The control of the motor was performed by a Maxon motor control board AECS 35/3. This board supplies the 3-phase signals that the motor demands.
The advantages of the motor are its shape, which fits perfectly into the cylindrical body, its small size and its high torque. One of its main problems is power consumption: it operates in a range of 8 to 30 V and requires up to 5 A. This is a huge power demand, too high for the purpose of autonomous robots. Also, the need for a special control board with a 4-wire cable makes it unsuitable for interconnection with other modules. Thus, another module was created later on.
This module was tested in a 30 cm straight pipe at different slope angles. The microrobot was able to go forward even when the pipe was set to a 90° vertical position. The helicoidal approach shows itself to be a very interesting means of locomotion for microrobots.
Angle (°)         0    15   30   45   60   75   90
Velocity (mm/s)   30   27   22   19   15   13   12

Table 4.3: Velocity in a 30 cm pipe at different angles (helicoidal module)
Figure 4.15: Helicoidal module V2 plus camera
The results obtained for this module are shown in table 4.3.
For more information about this module see [del Monte Garrido, 2004].
Helicoidal module V2
A second prototype was developed to solve the problems of the previous one (especially the consumption) (fig. 4.15 a)). The design principles were the same, but two modifications were made: the Maxon motor was replaced by a servomotor (Cirrus CS-3) and the gearset was simplified (eq. 4.31). Although the torque was reduced by the motor change, so was the gear reduction (fig. 4.15 b) and c)), which compensated for it. Thus the torque was enough to go forward, while the consumption was considerably decreased. The size was also smaller due to the reduction of the motor and the gearset.

$$ratio = \frac{44 + 13}{13} = 4.38 \tag{4.31}$$
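The two reductions can be verified arithmetically; a quick check:

```python
# Arithmetic check of the gearhead reductions. V1 (eq. 4.30) chains
# three stages; V2 (eq. 4.31) keeps only the internal-gear stage.
stage1 = 26 / 14
stage2 = 27 / 9
stage3 = (44 + 13) / 13

ratio_v1 = stage1 * stage2 * stage3
ratio_v2 = stage3

assert round(ratio_v1, 2) == 24.43
assert round(ratio_v2, 2) == 4.38
```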
Table 4.4 shows the results of the second prototype. They are slower than for the previous prototype, but this is due not only to the new motor and gearset but also to some assembly problems that left some parts a bit loose, probably because of the tolerance of the stereolithography process.

Angle (°)         0    90
Velocity (mm/s)   10   4

Table 4.4: Velocity in a 30 cm pipe at different angles (2nd helicoidal module)
(a) Detail of camera parts (b) Detail of the rotation axis
Figure 4.16: Camera module v1
(a) Detail of module (b) Detail of the pin-hole camera
Figure 4.17: Camera module v2
Kinematics
The kinematics of the helicoidal module are very simple and similar to those of the support module. They can be calculated from eq. 4.11, according to the axes shown in fig. 4.12, L3 being the length of the module.
4.1.4 Camera module
The camera module plays a very important part in environment information acquisition, in order to detect holes, breakages or cracks in the pipes. The module is provided with a CMOS B&W camera which allows visualizing the inner part of the pipe and, in the second version, with contact sensors which allow detecting obstacles (i.e. turns) inside the pipe. A B&W camera was used because it was the smallest one available at the time.
Camera module V1
This module (fig. 4.16) is a 2-degree-of-freedom structure composed of two servomotors (Cirrus CS3), a camera (FR220 CMOS B&W) and two LEDs for illumination. Thanks to the common interface, it can be assembled to any of the previous modules. Ideally the two rotation axes should be aligned with the centre of mass, but this was not possible due to the small size of the module, and one of the rotation axes had to be moved. This is the reason why the camera is not symmetric.
Figure 4.18: Batteries Module
The camera used is an 8x8x20 mm CMOS black-and-white camera, whose main characteristics are: 320x240 pixels, composite video, 20 mA at 9 V, 7-12 V supply. The main characteristics of the servomotors are: 6.3 mm x 22.25 mm x 10.10 mm, 2.85 g, 400 g·cm and 0.18 s/60° at 4.8 V.
For more information about this module see [Leñero, 2004].
Camera module V2
This second module (fig. 4.17) presents some new features with respect to the previous one. The two degrees of freedom have been suppressed to make the module shorter, and a bumper detection mechanism has been incorporated (three contact sensors). Also, the camera has been replaced by a similar one of smaller dimensions (8.5 mm x 8.5 mm x 10 mm), the FR220P. The number of LEDs has been increased from 2 to 4 to increase the luminosity.
The camera is switched on and off through a MOSFET. Additionally, the 4 illumination LEDs are controlled by 2 MOSFETs (each one controls two LEDs), allowing the micro-controller to vary the light intensity by means of a PWM signal.
For more information about these modules see [Santos, 2007].
4.1.5 Batteries module
The purpose of this module is to act as the power supply of the microrobot. It is based on 6 V watch batteries giving 640 mAh. They were the most powerful batteries that could be found given the size restriction of 27 mm diameter and the light-weight requirement.
The module was designed simply to hold the batteries (fig. 4.18). Its length is 19.5 mm and its weight is 16.4 g.
The tests were not very successful due to the discharge curve, which dropped to 2 V at the demanded current, a voltage that was not enough to move the servomotors (although good enough to keep the microcontroller on).
Due to the problem mentioned above, the module has to be redesigned, and a proper power supply unit is still being sought.
Kinematics
The kinematics of this module are very simple and, as for all pig-type modules, similar to those of the support module, but it is useful to know them in order to calculate the kinematics of the whole robot chain. They can be calculated with eq. 4.11, according to the axes shown in fig. 4.12, with L3 = 19.5 mm.
4.2 Other modules
This section describes some modules that are still in a conceptual phase and have not been built yet (the traveler module and the sensor module), as well as the SMA-based module.
4.2.1 SMA-based module
This module uses Shape Memory Alloys (SMA) to achieve a worm-like system of locomotion based on contraction and expansion of the SMAs. Each module is composed of a support board, a control board (a support board that additionally holds the electronics), SMA wires to produce the contraction, and springs to produce the expansion when the SMAs release (fig. 4.19).
The onboard electronics for this microrobot are based on an SMD PIC of just 5x5 mm. It is possible to put up to 32 modules together. Each module has three degrees of freedom. There are also 4 wires that run through the entire microrobot carrying the control signals.
The main advantages of this module are a simple electronic circuit and great versatility (it can both contract-expand and rotate). The main disadvantages are the power consumption (too high), the assembly difficulty and the lack of robustness. Thus this type of microrobot has not been used as a locomotion module for the heterogeneous robot. However, it could be interesting as a manipulator for an end-effector module that does not need to handle heavy weights. For it to be used in combination with other modules it would be necessary to add the common connector to its ends.
4.2.2 Traveler module
The purpose of this module is to measure the distance traveled inside a pipe.
The traveler module (fig. 4.20) is still a concept and has not been manufactured yet. It has been designed and tested in the simulation environment. It is composed of three wheels provided with encoders that measure the distance the microrobot has traveled.
The module is designed so that at least one wheel is in contact with the pipe at all times. But it is possible that two or three wheels are in contact at the same time, so the encoder measurements may differ at the end of the trajectory. That is why algorithms must be applied in order to obtain useful data. Data coming from other sensors of the microrobot (like accelerometers) can also be integrated in order to get useful information.
Figure 4.19: SMA-based modules
4.2.3 Sensor module
The sensor module is conceived to carry several types of sensors: proximity, accelerometer, humidity and temperature. It is still in a conceptual phase.
Accelerometers have already been placed in some of the modules and will be described in section 4.3.7. Proximity sensors will be used to navigate, by detecting bifurcations. Temperature and humidity measurement and logging can be obtained by incorporating a chip like the Maxim DS1923.
Figure 4.20: Traveler Module
Figure 4.21: Common interface
4.3 Embedded electronics description
The electrical design of the modules has been carried out under two premises: simplicity and low consumption. For that reason a low-consumption microcontroller has been chosen (NanoWatt technology).
Every module is provided with an electronic control board (with a low-consumption PIC microcontroller, PIC16F767) which is able to perform the following tasks:
1. Control of actuators (servomotors)
2. Communications via I2C
3. Communications with adjacent modules via synchronism lines
4. Management of several types of sensors
5. Auto-protection and adaptable motion
6. Self-orientation detection
7. Low-level embedded control
The low-level control will be described in chapter 7. The remaining features are described next.
4.3.1 Common interface
A common interface (fig. 4.21) has been designed to connect all modules and to allow a bus carrying all necessary wires and signals to go from one module to the next. This electrical bus carries 8 wires:
Power (5 V) and ground
I2C communication: data and clock
2 synchronism lines
2 auxiliary lines (for the video signal, for example)
4.3.2 Actuator control
The electronic board is ready to control different types of actuators, such as servomotors and LEDs. The servomotors are controlled by PWM signals sent from the microcontroller (fig. 4.23).
(a) LED control circuit (b) Bump detection circuit
Figure 4.22: Camera electronic circuits
The LEDs in the camera module can also be controlled through the electronic circuit shown in fig. 4.22(a). In the camera module there are two such circuits, each of them able to control 2 LEDs.
4.3.3 Sensor management
The electronic board is capable of managing different types of sensors, for example bumpers, accelerometers and power consumption sensing.
The bumper detection system is implemented in the circuit of the camera module shown in fig. 4.22(b). Thanks to this circuit the microcontroller can read the output of each of the three bumpers placed on the front part of the camera module. It is a basic circuit composed of a button and a filter to avoid signal bounce.
Accelerometers are covered in section 4.3.7.
4.3.4 I2C communication
I2C has been chosen over other protocols because, amongst other reasons, it is already integrated in small microcontrollers, only two bus lines are required, and no terminators are needed. I2C is a very well-known bus and information can be found on the internet. A brief summary is included in Annex ??.
4.3.5 Synchronism lines communication
The synchronism lines are used for low-level communication between adjacent modules. It is a kind of peer-to-peer communication, unidirectional on each line; since there are two lines, the communication is bidirectional. Communication along the microrobot goes from module to module, like passing a baton. Thanks to these lines, every module can be aware of which other modules are next to it, and the central control of the robot is able to know the configuration of the microrobot.
Synchronism lines are connected from a digital output of one module to a digital input of the next.
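The baton-passing idea can be sketched as a small simulation. The module types and the reporting format below are illustrative assumptions, not the thesis firmware: the point is only that, as the token travels head to tail, each module learns its position in the chain and the central control accumulates the full configuration.

```python
# Hedged sketch of configuration discovery over the synchronism lines:
# the token travels head -> tail; each module that receives it learns
# its position and reports (position, type) back to the central control.

def discover_configuration(chain):
    """chain: module-type strings ordered head to tail (simulated)."""
    reports = []
    for position, module_type in enumerate(chain):
        # One hop on the synchronism line = handing the baton onwards.
        reports.append((position, module_type))
    return reports

config = discover_configuration(
    ["camera", "rotation", "support", "extension", "support"])
assert config[0] == (0, "camera")
assert config[-1] == (4, "support")
```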
Figure 4.23: Auto-protection control scheme
(a) Rotation modules (b) Other modules (support and extension)
Figure 4.24: Auto-protection circuits
4.3.6 Auto protection and adaptable motion
The auto-protection control is based on the control scheme shown in fig. 4.23.
Actuator control is based on two feedback loops: position and consumption. This allows the module to prevent damage to its servomotors if they try to reach an impossible position, for example due to obstacles. Additionally, thanks to these feedback loops, it is possible to implement torque regulation to avoid high consumption when it is not needed. This is very useful, since the modules require more energy when climbing a vertical pipe than when moving horizontally.
Rotation module
To sense the current position of the servomotor, the servomotor's own potentiometer is connected to the microcontroller by means of a cable running from the variable terminal of the potentiometer to the analog-to-digital converter. It is very important that the potentiometer be linear so that the current position can be obtained from the measured voltage.
A small circuit has been designed to sense the consumption of the servomotor by means of a low-value resistor (1 Ω) with a capacitor (470 µF) in parallel to stabilize the voltage. The voltage at the resistor is measured through the analog-to-digital converter (see fig. 4.24(a)) [Torres, 2006].
If for any reason a servomotor gets stuck, the consumption remains at its top value for a long period, as shown in fig. 4.25(a), as opposed to fig. 4.25(b), which shows a normal output. Thus, it is possible to detect these problems and send the servomotor to a safe position, or stop sending position commands (so the servomotor turns loose), to avoid damaging it.

                             Single Module   Single Module Loaded
At rest (mA)                     10-15              30-35
Peak (moving 1 servo) (mA)        500                500
Peak (moving 2 servos) (mA)      1000               1000
Average (1 servo) (mA)            200                250
Average (2 servos) (mA)           400                500

Table 4.5: Power consumption
Power consumption for rotation module v1 is shown in table 4.5. It is very important to have low consumption in order to make robots more autonomous and to avoid overheating. As can be seen, the consumption of the module at rest is very low.
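The stuck-servo criterion described above (consumption pinned at its top value for a long period) can be sketched as a simple watchdog over the sampled current. The threshold and window values below are illustrative assumptions, not the thesis firmware settings:

```python
def servo_stuck(current_samples_mA, threshold_mA=450, window=20):
    """Return True if consumption stays near its peak for a long period
    (the blocked-servo signature of fig. 4.25 a): `window` consecutive
    samples at or above `threshold_mA`."""
    over = 0
    for sample in current_samples_mA:
        over = over + 1 if sample >= threshold_mA else 0
        if over >= window:
            return True
    return False

# A normal move: short peak, then the current settles (no fault).
normal = [500] * 5 + [200] * 40
# A blocked servo: current pinned at its peak value.
blocked = [500] * 30
assert not servo_stuck(normal)
assert servo_stuck(blocked)
```

On detection, the firmware would then command a safe position or stop sending PWM commands, as described above.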
Other modules
Newer modules, starting from the support and extension v2, were equipped with the MAX4372 IC, used to sense the current. The connection diagram is shown in fig. 4.24(b). The concept is the same as before, but the resistor and capacitor were replaced by the MAX4372. The output of the IC is taken to a low-pass filter to remove the noise produced by the PWM of the servomotor, and then to the A/D converter of the microcontroller. The filter has a cutoff frequency of f_c = 0.33 Hz [Santos, 2007].
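A digital equivalent of that analog filter is a single-pole IIR low-pass with the same 0.33 Hz cutoff; this is a hedged sketch, not the module's actual implementation, and the sample rate is an assumed value:

```python
import math

def lowpass(samples, fc=0.33, fs=50.0):
    """Single-pole IIR low-pass, the digital analogue of the RC filter
    after the MAX4372 (fc = 0.33 Hz; fs is an assumed sample rate)."""
    rc = 1.0 / (2.0 * math.pi * fc)   # RC time constant from the cutoff
    dt = 1.0 / fs
    alpha = dt / (rc + dt)            # smoothing factor
    y, out = samples[0], []
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

# PWM ripple around a 200 mA mean is strongly attenuated.
noisy = [200 + (25 if i % 2 else -25) for i in range(500)]
filtered = lowpass(noisy)
assert abs(filtered[-1] - 200) < 5
```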
4.3.7 Self orientation detection
Modules are equipped with three-axis accelerometers. With these sensors inside the micro-robot, it is possible to know how the robot is oriented in relation to the ground (measuring the acceleration vector of gravity) and in which direction it is moving (and how fast).
The three-axis accelerometer used is the MXR9150. This sensor can measure ±5 g with a sensitivity of 150 mV/g at 3.0 V. It is able to detect both dynamic (e.g. movement) and static (e.g. gravity) accelerations. The MXR9150 provides three ratiometric analog outputs set to 50% of the power supply voltage at 0 g, 1.5 V in this case [Torres, 2006] [Santos, 2007].
Figure 4.26 shows the measurement of gravity when the module is placed with its Z axis down (a) and X axis down (b).
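Converting the ratiometric output to acceleration, and then the static gravity vector to an orientation, follows directly from the figures above (1.5 V at 0 g, 150 mV/g). The tilt formulas below are a standard accelerometer-to-angle sketch, not the exact computation used on the PIC:

```python
import math

ZERO_G_V = 1.5         # 50 % of the 3.0 V supply (0 g output)
SENS_V_PER_G = 0.150   # 150 mV/g sensitivity

def volts_to_g(v):
    """Convert one MXR9150 axis output voltage to acceleration in g."""
    return (v - ZERO_G_V) / SENS_V_PER_G

def tilt_deg(ax, ay, az):
    """Pitch and roll of a still module from the gravity vector (deg)."""
    pitch = math.degrees(math.atan2(ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, math.hypot(ax, az)))
    return pitch, roll

# Z axis pointing down (fig. 4.26 a): Z reads 1 g, X and Y read 0 g.
gx, gy, gz = volts_to_g(1.5), volts_to_g(1.5), volts_to_g(1.65)
assert abs(gz - 1.0) < 1e-9
pitch, roll = tilt_deg(gx, gy, gz)
assert abs(pitch) < 1e-6 and abs(roll) < 1e-6
```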
Figures 4.27, 4.28 and 4.29 show the results of some experiments. Figure 4.27 shows the output of the accelerometers when the module is moving along a linear trajectory in the XY plane, forward and backwards. The signals are very clear. In the Z axis there is no variation, while in the X and Y axes the signals rise and fall as the module moves forward and backward.

(a) With servo blocking
(b) Normal consumption (non-blocking)
Figure 4.25: Consumption output

In figure 4.28 the rotation module moves one servomotor from 30° to 150° with no load (about 0.35 kg·cm). The top plot shows the rotated angle. In the middle plot it is possible to see that the consumption increases (peaks) every time the servo moves, but is stable when the servo is still. In the bottom plot the output of each axis of the accelerometer is drawn, showing a transition every time the servo moves. Thus the direction of movement can be computed.
Figure 4.29 shows the results obtained moving the rotation module from 150° to 30° loaded with the camera module. Apart from some stabilization problems at the beginning, it can be observed that the results are similar to the test without load.
In both figures 4.28 and 4.29 the period of the PWM signals varies (decreases) each time the servo moves from the start position to the end position. Results show that the lower the period, the higher the torque, but also the higher the consumption and the noise.
4.4 Chained configurations
There are two main types of configurations in which the modules can be attached: homogeneous (if there is only one locomotion gait) and heterogeneous (if there are several locomotion gaits that the robot can implement).
4.4.1 Homogeneous configurations
Homogeneous configurations are those composed of one type of module. In this thesis, configurations composed of only one drive unit (meaning that the robot is able to perform only one locomotion gait) will be considered homogeneous. There are three main types of homogeneous configuration that the robot can implement: helicoidal, inchworm and snake-like.
Snake-like configuration
A snake-like or serpentine conguration (g. 4.30) can be obtained by connecting several
rotation modules together. The dierence between snake-like and serpentine robots is that
in serpentine robots the propulsion is made out wheels or tracks, while in snake-like it is
made out of own body motions. For a detail classication the reader can consult [Gonzalez
et al., 2006]. Snake-like and serpentine robots oer a variety of advantages over mobile
robots with wheels or legs, apart from their adaptability to the environment. They are
robust to mechanical failure because they are modular and highly redundant. They could
even perform as manipulator arms when part of the multilink body is xed to a platform.
On the other hand, one of the main drawbacks is their poor power eciency for surface
locomotion. Another is the diculty in analyzing and synthesizing snake-like locomotion
mechanisms, which are not as simple as wheeled mechanisms (but nowadays a lot of
research has been done in this eld [Sato et al., 2002]). For big diameter pipes, wheeled
robots are much more convenient. But for narrow pipes with curves and bends, snake-like
robots can be a very interesting solution.
Snake-like movements are mainly based on CPGs (Central Pattern Generators): sinusoidal waves that travel along the modules. The position of each actuator follows a sinusoidal wave, and by changing its parameters different movements can be achieved (see [Gonzalez et al., 2006]).
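As an illustration of this scheme, a minimal CPG-style generator can be sketched as follows. The function name and all parameter values are illustrative, not taken from the thesis' firmware: each joint follows the same sinusoid with a per-joint phase lag, so a wave travels along the body.

```cpp
#include <cmath>
#include <vector>

// Minimal CPG-style gait generator (illustrative sketch): joint i follows
// amplitude * sin(2*pi*f*t + i*phaseLag) + offset, producing a traveling wave.
std::vector<double> cpgSetpoints(int nJoints, double t,
                                 double amplitude,  // [deg]
                                 double frequency,  // [Hz]
                                 double phaseLag,   // [deg] lag between joints
                                 double offset) {   // [deg] steering bias
    const double PI = std::acos(-1.0);
    const double deg2rad = PI / 180.0;
    std::vector<double> angles(nJoints);
    for (int i = 0; i < nJoints; ++i)
        angles[i] = amplitude * std::sin(2.0 * PI * frequency * t
                                         + i * phaseLag * deg2rad) + offset;
    return angles;
}
```

Varying the amplitude, frequency, phase lag and offset of this single function is what produces the different gaits described below.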
(a) Z axis pointing down
(b) X axis pointing down
Figure 4.26: Accelerometer tests: still module
Figure 4.27: Module moving along a linear trajectory in the XY plane
Figure 4.28: Servo moving from 30° to 150° with no load
Figure 4.29: Servo moving from 150° to 30° loaded
Figure 4.30: Snake-like configuration
(a) Turning (1), rotating (2) and rolling (3)
(b) Serpentine, side-winding, concertina and rectilinear
Figure 4.31: Snake movements
With CPGs it is possible to simulate some of the movements of real snakes (see fig. 4.31(b)):
Serpentine locomotion: this is the most common method of travel used by snakes. Each point of the body follows the S-shaped path established by the head and neck, much like the cars of a train following the track. The key property of snakes in achieving serpentine locomotion is the difference in the friction coefficients for the tangential and the normal directions with respect to the body. In particular, the normal friction tends to be much larger than the tangential friction, which prevents side slipping.
Caterpillar (vertical serpentine or rectilinear): this slower technique also contracts the body into curves, but these waves are much smaller and curve up and down rather than side to side. When a snake uses caterpillar movement, the tops of each curve are lifted above the ground while the ventral scales on the bottoms push against the ground, creating a rippling effect similar to a walking caterpillar. The friction parameters are not so important here.
Sidewinding: in environments with few resistance points, snakes may use a variation of serpentine motion to get around. Contracting their muscles and flinging their bodies, sidewinders create an S-shape that has only two points of contact with the ground; when they push off, they move laterally. Using this gait, the robot moves parallel to its body axis.
(a) Configuration 1 (b) Configuration 2
(c) Configuration 3 (d) Configuration 4
Figure 4.32: Snake-like configurations
Another common mode (gait) of locomotion in snakes that cannot be achieved by CPGs is concertina [Gray and Lissmann, 1950] [Lissmann, 1950] (see figure 4.31(b)). Concertina is the method used to climb. The snake extends its head and the front of its body along the vertical surface and then finds a place to grip with its ventral scales. To get a good hold, it bunches up the middle of its body into tight curves that grip the surface while it pulls its back end up; it then springs forward again to find a new place to grip with its scales. This movement can be achieved by special sequences of preprogrammed movements.
There are some other movements that can be created that are not inspired by real snakes (figure 4.31(a)):
Rolling: the robot can roll around its body axis. The same sinusoidal signal is applied to all the vertical joints, and a 90° out-of-phase sinusoidal signal is applied to the horizontal joints.
Turning: the robot can move along an arc, turning left or right. The vertical joints move as in the 1D sinusoidal gait and the horizontal joints stay at a fixed position all the time, so the robot has the shape of an arc. The radius of curvature of the trajectory can be modified by changing the offset of the horizontal joints.
Rotating: the robot can also rotate parallel to the ground, clockwise or anti-clockwise, changing its orientation in the plane. A phase difference of 5° is applied to the horizontal joints and 120° to the vertical ones.
(a) Inside a 40mm pipe
(b) Negotiating an elbow in a 50mm pipe
Figure 4.33: Snake-like microrobot inside pipes
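Since these gaits differ only in the parameters fed to the vertical and horizontal joint sinusoids, they can be summarized in a parameter table. The sketch below is hypothetical: the phase and offset figures (90°, 5°, 120°) follow the text above, while the amplitudes and the turning offset are placeholders.

```cpp
#include <map>
#include <string>

// Illustrative gait parameter table (all angles in degrees). Only the phase
// relations come from the gait descriptions; amplitudes are placeholders.
struct GaitParams {
    double vAmp, vPhaseLag;  // vertical joints: amplitude, inter-joint phase lag
    double hAmp, hPhaseLag;  // horizontal joints: amplitude, inter-joint phase lag
    double hOffset;          // fixed horizontal offset (sets the turning radius)
    double hvShift;          // phase shift between horizontal and vertical waves
};

std::map<std::string, GaitParams> gaitTable() {
    return {
        // rolling: same sinusoid on all joints, horizontal 90 deg out of phase
        {"rolling",  {30.0, 0.0,   30.0, 0.0, 0.0,  90.0}},
        // turning: 1D sinusoidal gait on vertical joints, horizontal joints fixed
        {"turning",  {30.0, 120.0, 0.0,  0.0, 20.0, 0.0}},
        // rotating: 5 deg phase difference horizontal, 120 deg vertical
        {"rotating", {30.0, 120.0, 30.0, 5.0, 0.0,  0.0}},
    };
}
```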
The most suitable locomotion gaits for pipes turn out to be rectilinear and concertina (for climbing): inside the pipe there is not much space for sidewinding. Serpentine locomotion is more suitable for negotiating bends and for straight stretches when the friction between the robot and the pipe is strong enough. If the friction is small, or to climb pipes, rectilinear and concertina locomotion are more appropriate.
The snake-like configuration is a very versatile robot which can adopt several shapes. In fig. 4.32 different configurations are shown: caterpillar (fig. 4.32(a)), serpentine (fig. 4.32(b)), circle (fig. 4.32(c)) and helix (fig. 4.32(d)). Due to the 2 DOF of each module, the robot can adopt many 3D configurations.
The microrobot fits in pipes of 40 mm diameter (fig. 4.33(a)) and is able to negotiate 90° angles (fig. 4.33(b)) in 50 mm diameter pipes.
A specific GUI has been implemented for the control of snake-like microrobots (fig. 4.34). With it, it is possible to:
simulate movements
telecontrol the robot
record sets of movements and send them to the robot for later execution.
Figure 4.34: Graphical User Interface
Worm-like configuration
The inchworm strategy is simple yet extraordinarily powerful. Having a body with extension capabilities and many small foot pads placed at either end of it, the inchworm's mode of locomotion is to firmly attach the rear portion of its body to a surface via its foot pads, extend the remainder of its body forward, attach it to the surface, and bring the rear part of its body up to meet the forward part. In this way, the inchworm always has at least one portion of its body firmly attached to a surface.
This type of movement is particularly suited to unstructured or even hostile environments. As an inchworm moves forward it has the opportunity to sense what is in front of it without having to commit to attaching to an inappropriate surface. At the same time, the system's low silhouette and centre of gravity provide the animal with a high degree of stability.
An inchworm configuration (fig. 4.21) can be obtained by connecting two support modules and one extension module together (support - extension - support). The sequence of movement in an inchworm robot is as follows (fig. 4.35):
1. The rear module (3) expands (pressing against the pipe) and the front one (1) releases.
2. The central module (2) expands, straight or at an angle.
3. The front module expands and the rear one releases.
4. The central module contracts.
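The four-step sequence above can be sketched as a cyclic state machine (a hypothetical sketch for illustration, not the module firmware; step names follow fig. 4.35).

```cpp
// Cyclic state machine for the inchworm gait: rear grip -> extend ->
// front grip -> contract -> rear grip -> ... (modules: 1 = front support,
// 2 = extension, 3 = rear support).
enum class Step { RearGripFrontRelease, CentralExtend,
                  FrontGripRearRelease, CentralContract };

Step nextStep(Step s) {
    switch (s) {
        case Step::RearGripFrontRelease: return Step::CentralExtend;
        case Step::CentralExtend:        return Step::FrontGripRearRelease;
        case Step::FrontGripRearRelease: return Step::CentralContract;
        case Step::CentralContract:      return Step::RearGripFrontRelease;
    }
    return Step::RearGripFrontRelease;  // unreachable
}
```

One full cycle of four steps advances the robot by roughly the stroke of the extension module.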
This way of locomotion requires two different types of modules, and thus it could be considered heterogeneous. But since it provides a single locomotion gait, it is included in the homogeneous section.
(a) Concept (b) Real movement
Figure 4.35: Worm-like module: Sequence of movement
Figure 4.36: Helicoidal configuration
Helicoidal configuration
The helicoidal configuration is the simplest of all, because it is composed of one helicoidal module and, optionally, any number of pig (passive) modules. It can only be used in straight pipes, so it is of little use unless combined with other modules in a heterogeneous configuration.
4.4.2 Heterogeneous configurations
By heterogeneous modular robot it is understood, by definition, a robot composed of different types of modules, either passive or active (active meaning drive modules, or modules with the capacity to move). But, as mentioned in section 4.4.1, in this thesis a heterogeneous configuration will be understood as one that is able to perform different types of locomotion gaits.
Figure 4.37: Multi-modular conguration
Although there are some developments including heterogeneous modules, they only have one active module (the others are passive), as shown in chapter 2; there is not a single one that combines several drive units.
The heterogeneous modular microrobot considered here can be any combination of the previous modules. An example can be found in figure 4.37. Of course, some configurations will work better than others, and this will be studied in this thesis. The control layer of the microrobot is able to detect what kind of modules it is composed of and to select the optimum locomotion gait at every moment. It is also possible to reconfigure the microrobot depending on the task being performed, in order to adapt to the wide variety of pipes that can be found.
For example, a microrobot composed only of rotation modules is very slow in a narrow pipe, but combined with one or two helicoidal modules it can move much faster and still negotiate turns.
Another example: the helicoidal module is very fast in pipes of a specific diameter, but combined with the worm-like modules it is able to pass sections of the pipe with a different diameter or with broken parts.
The list of examples is quite long, and it can be extended by adding new modules with new locomotion modes in the future.
The heterogeneous configurations will be described in depth in chapter 5.
4.5 Conclusions
In this chapter the modules that have been designed and built have been described, on both the hardware and the electronic sides. The different versions of the helicoidal, support, extension, rotation, camera and battery modules have been presented, and the reasons to build them and to evolve from one prototype to another have been explained. Some other modules that are under development have also been mentioned, like the traveler and the sensor modules. The SMA-based module has also been described, although it is not going to be used for the moment because of its high consumption and mounting difficulty.
Although in the beginning it was a priority to make the modules as small as possible (i.e. in the rotation module v1 the gearset was rearranged to gain space), there was a tendency afterwards to make the modules bigger in order to improve the robustness and
stability of the modules. Thus, although the robot should be referred to as a minirobot, the term microrobot is still used, since it was the original design and, indeed, in the literature the term microrobot is used for small robots of about tens of millimeters [Caprari, 2003] [Kawahara et al., 1999] [Xiao et al., 2004] [Yoshida et al., 2002].
Each type of module is followed by its kinematic description, which helps to understand the module and facilitates its future use.
All modules have been designed as small as possible, and finally a diameter of 27 mm has been selected as the default for all of them. Some of them could have been smaller but, in order to keep the same connector, the 27 mm diameter has been respected. Thus, they are able to travel in pipes of 40 mm diameter, although in order to make turns a bigger one is necessary (a 50 mm diameter pipe is enough).
There is a couple of modules that have not been built yet: the traveler and the sensor modules. The traveler module has nevertheless been used in the simulator and will be described in chapter 5.
The electronics of the modules have also been described. Although each module has different electronics, all of them share the same concept, and thus they can work together, sharing a common interface and bus (I2C). In general, apart from accelerometers and position and consumption control, the modules lack sensor integration (IR, temperature, humidity sensors, etc.), which was not possible in the current versions of the modules. However, some of these sensors are simulated in chapter 5 and used in the simulated microrobot (for example the IR sensors).
While many other prototypes are composed of homogeneous modules, in this research it has been sought to have different drive modules and locomotion gaits: helicoidal, worm-like and snake-like (in its different variants, like serpentine, rectilinear, side-winding, etc.). Their use in homogeneous configurations has been described. How to use and coordinate them in heterogeneous configurations will be treated in chapter 5.
It is important to remark how difficult, time-consuming and expensive it is to have so many different prototypes working together, which is one of the reasons why some similar research projects have been cancelled [Jantapremjit and Austin, 2001].
Chapter 5
Simulation Environment
"Imagination is the beginning of creation. You imagine what you desire, you will what you imagine and at last you create what you will."
George Bernard Shaw
A physically accurate simulation of robotic systems provides a very efficient way of prototyping and verifying control algorithms and hardware designs, and of exploring system deployment scenarios. It can also be used to verify the feasibility of system behaviors using realistic morphology, body mass and torque specifications for the servos.
A simulator has been developed to create modules and testing environments as realistically as possible. It contains collision detection and rigid body dynamics algorithms for all modules. It is built upon an existing open source implementation of rigid body dynamics, the Open Dynamics Engine (ODE). ODE was selected for its popular open-source physics simulation API, its online simulation of rigid body dynamics, and its ability to define a wide variety of experimental environments and actuated models.
Simulated modules have been designed as simply as possible (using simple primitives) to make the simulation fluid, while trying to reflect the real physical conditions and parameters as closely as possible, leaving aesthetic features in the background.
The physical simulator has been enhanced with an electronic simulator that emulates the microcontroller program running on the modules, including physical signals (synchronization signal), I2C communications, etc. To maintain the independence of each module, its control programs run in different threads. This facilitates the transfer of the code from the simulator to the real modules.
The simulator has been validated using the information gathered from experiments with real modules, which has helped to adjust its parameters to obtain an accurate model of the motors (including servomotor torque and consumption), of the inchworm and helicoidal speeds and ways of movement, and of the snake-like movements and gaits. Thus, in the last section of the chapter, several configurations of the robot that were not possible to test with the real modules were tested in the simulator.
The control architecture that will be presented in section 7 includes a model concept. For the system to be able to calculate and validate its own possibilities, the inclusion of a dynamic model in this type of system is absolutely necessary; the simulator provides the tool to build and develop this model.
Figure 5.1: Simulation Environment
The simulator has been developed using C++, and a brief description of the programming (classes, variables, etc.) is given in section 5.3.
5.1 Physics and dynamics simulator
5.1.1 Open Dynamics Engine (ODE)
ODE is an open source, high performance library for simulating rigid body dynamics. It is fully featured, stable, mature and platform independent, with an easy to use C/C++ API. It has advanced joint types and integrated collision detection with friction. ODE is useful for simulating vehicles, objects in virtual reality environments and virtual creatures. It has been used in many computer games, 3D authoring tools and simulation tools since 2000.
It is very flexible in many respects. It allows the user to control many parameters of the simulation, such as gravity, the Constraint Force Mixing (CFM) and the Error Reduction Parameter (ERP). ODE also does not impose any fixed system of measurement units, and therefore accommodates systems of different scales and ratios that may be more appropriate for a particular setup. This flexibility, however, makes it quite difficult to come up with a set of parameters that results in a stable and adequate simulation environment. A considerable amount of time has
Figure 5.2: Mathematical model of the servomotor
been spent testing different combinations of these settings, and the experience has been used to produce a tuned simulation that models as accurately as possible the real settings of the modules.
For more information about ODE, the reader can consult its webpage¹.
5.1.2 Servomotor model
Although ODE provides a model for a motor, a more accurate model was needed in order to simulate the servomotor used in the modules. Thus, a real servomotor model has been developed (fig. 5.2). This model is built upon the existing motor model provided by the ODE library, adding a simulation of its parameters.
The parameters are the typical ones in motors:
K_t [N·m/A] is the torque constant
K_m [V/(rad/s)] is the counter-electromotive force constant
K_p [V/rad] is the proportional servo control constant
L_m [H] and R [Ω] are the electrical parameters of the motor, inductance and resistance
J_m [N·m/(rad/s²)] is the inertia of the motor itself
B_m [N·m/(rad/s)] is the friction coefficient of the motor itself
Some variables are also used, whose meaning is:
θ_m [rad] is the actual angle (obtained from ODE)
θ_r [rad] is the desired angle
ω [rad/s] is the angular velocity
e_a [V] is the voltage of the stator
¹ http://www.ode.org/ and http://opende.sourceforge.net/wiki/index.php/Main_Page
i [A] is the intensity of the current
e_m [V] is the induced voltage
τ [N·m] is the electromechanical torque of the motor
τ_loss [N·m] is the torque loss due to all intrinsic factors
τ_effective [N·m] is the effective torque sent to ODE to move the servomotor to the desired position
i_f [A] is the intensity measured after the low-pass RC filter that is used in the real modules to filter the noise (fig. 4.24). Although it has no purpose for the servomotor model, it is necessary in order to compare the signals from the real and the simulated modules.
The equations used by the simulation to compute the torque are presented below in continuous time:

ω(t) = dθ(t)/dt    (5.1)
e_a(t) = K_p (θ_r(t) − θ_m(t))    (5.2)
e_m(t) = K_m ω(t)    (5.3)
e_a(t) − e_m(t) = L_m di(t)/dt + i(t) R    (5.4)
τ(t) = K_t i(t)    (5.5)
τ_loss(t) = J_m α(t) + B_m ω(t) = J_m d²θ(t)/dt² + B_m dθ(t)/dt    (5.6)
τ_effective(t) = τ(t) − τ_loss(t)    (5.7)

where α [rad/s²] is the angular acceleration.
It is necessary to transform these equations in order to compute them. Starting from 5.1, it can be expressed in the Laplace domain as:

Ω(s) = Θ(s) · s    (5.8)

where s ∈ ℂ. Applying the transformation s = (1 − z⁻¹)/T, where T is the sampling period, the expression in the Z domain is obtained:

Ω(z) = Θ(z) · (1 − z⁻¹)/T    (5.9)

where z ∈ ℂ. Thus:

Ω(z) = (Θ(z) − Θ(z) z⁻¹)/T    (5.10)

and, applying the inverse transform, the discrete-time equation is obtained:

ω[n] = (θ[n] − θ[n−1])/T    (5.11)

where n is the discrete time. In short, the process is the following:

x(t) → X(s) → X(z) → x[n]    (5.12)
Applying the same process to all the equations, the following discrete-time equations are obtained:

e_a[n] = K_p (θ_r[n] − θ_m[n])    (5.13)
e_m[n] = K_m ω[n]    (5.14)
i[n] = ((e_a[n] − e_m[n]) T + L_m i[n−1]) / (L_m + R T)    (5.15)
τ[n] = K_t i[n]    (5.16)
τ_loss[n] = (J_m/T²)(θ[n] − 2θ[n−1] + θ[n−2]) + (B_m/T)(θ[n] − θ[n−1])    (5.17)
τ_effective[n] = τ[n] − τ_loss[n]    (5.18)
e_a must be limited to 5 V, because that is the maximum voltage provided by the power supply; in fig. 5.2 this is the block before e_a. In the real modules, the control of the servomotor is done by PWM.
There is a range [0..I_threshold] in which the current does not produce any torque, due to the static friction coefficient. This is represented in fig. 5.2 by the block before K_t.
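Putting equations 5.11 and 5.13-5.18 together with the saturation and dead-zone blocks, one update step of the discrete model can be sketched as follows. All parameter values are illustrative placeholders, not the calibrated ones from the thesis.

```cpp
#include <algorithm>
#include <cmath>

// Discrete-time servomotor model (eqs. 5.11 and 5.13-5.18) with the 5 V supply
// saturation and the static-friction dead zone [0..I_threshold].
// Parameter values are placeholders, not the thesis' calibration.
struct ServoModel {
    double Kt = 0.01;          // torque constant [N*m/A]
    double Km = 0.01;          // counter-electromotive force constant [V/(rad/s)]
    double Kp = 10.0;          // proportional control constant [V/rad]
    double Lm = 1e-3;          // inductance [H]
    double R  = 2.0;           // resistance [ohm]
    double Jm = 1e-5;          // rotor inertia
    double Bm = 1e-4;          // rotor friction coefficient
    double T  = 5e-4;          // sampling period [s]
    double EaMax = 5.0;        // supply-voltage saturation [V]
    double Ithreshold = 0.05;  // static-friction dead zone [A]

    double iPrev = 0.0, thPrev = 0.0, thPrev2 = 0.0;

    // thetaR: desired angle [rad]; thetaM: actual angle from the physics engine.
    // Returns the effective torque to apply to the simulated joint.
    double step(double thetaR, double thetaM) {
        double ea = std::clamp(Kp * (thetaR - thetaM), -EaMax, EaMax); // (5.13) + 5 V limit
        double omega = (thetaM - thPrev) / T;                          // (5.11)
        double em = Km * omega;                                        // (5.14)
        double i = ((ea - em) * T + Lm * iPrev) / (Lm + R * T);        // (5.15)
        double tau = (std::fabs(i) > Ithreshold) ? Kt * i : 0.0;       // (5.16) + dead zone
        double tauLoss = Jm / (T * T) * (thetaM - 2.0 * thPrev + thPrev2)
                       + Bm / T * (thetaM - thPrev);                   // (5.17)
        iPrev = i; thPrev2 = thPrev; thPrev = thetaM;
        return tau - tauLoss;                                          // (5.18)
    }
};
```

Called once per integration step with the desired and measured angles, the returned effective torque is what would be handed to the physics engine.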
5.1.3 Modules physical model
For better performance and stability, the models of the modules were simplified to a set of standard geometrical primitives (spheres, cubes, cylinders, etc.) connected by degrees of freedom, which were defined as (powered) joints. This simplification, replacing odd shapes with standard ones, was necessary to make the simulation scalable (collision detection with odd shapes is very expensive in ODE). However, dimensions and masses were set to the values of the real modules.
The resulting geometric morphology model was assigned dynamic properties that correspond to the module design specifications. Masses for each body part were assigned real values. Degrees of freedom were limited by the maximum torque and speed available from the specifications of the servomotors selected for each module. To ensure proper interaction of the modules with the simulated environment, friction coefficients were set to values estimated for the materials to be used for module manufacturing and for the possible surface materials. These values were adjusted and validated experimentally in a final step, as will be shown in section 8.2.
(a) Rotation Module (b) Helicoidal Module
Figure 5.3: Rotation Module and Helicoidal Module
To be capable of producing behaviors with different functionalities, modules should be able to dock to each other, forming different configuration shapes. In the design specification, modules have two docking faces, one on each side. In the simulated environment, the docking capability was implemented by using a fixed joint created between the sides of two different modules. This allows the modules to be attached to each other and keeps their relative positions fixed.
All module models try to keep as many similarities as possible with the real ones, both in mechanics (joints, DOF, shape, mass, etc.) and in electronics.
Each module model will be described in more detail in the following sections.
Rotation Module
The rotation module (fig. 5.3 a)) is simulated by a capsule and two cylinders as connectors. It has two servomotors to provide the two degrees of freedom. Each servomotor is limited to 180°, as real servomotors are.
Helicoidal Module
The helicoidal module (fig. 5.3 b)) is simulated by a pig (passive) module upon which a force is applied in the direction of movement, in order to simulate the driving force of the rotating head of the module. This is a simplified model of the real module, intended to make the simulation faster and less expensive in terms of CPU consumption.
Support Module
The support module (fig. 5.4 a)) is simulated by three cubes that represent the arms, with three servomotors, and two cylinders as connectors. The real module has only one servomotor, but this arrangement simplifies the simulation. One servomotor is the active one, the one that can be accessed and modified; the other two simply copy the position of the main one.
To make the simulation more accurate, the passive servomotors should have a smaller torque than the main one, because in the real module there is a single servomotor that transmits more torque to one arm than to the other two.
(a) Support Module (b) Extension Module
Figure 5.4: Inchworm Modules
Extension Module
The extension module (fig. 5.4 b)) is simulated by two cubes that can slide one over the other in order to simulate the elongation of the module. A control law has been implemented to simulate a linear servomotor (equations 5.19 and 5.20). A circular servomotor at the front can simulate the rotation DOF of the real module.

F_max = F_maxservo    (5.19)
V = l_0 (Pos_ref − Pos_servo)    (5.20)

where F_maxservo is the maximum force of the servomotor, l_0 a proportional coefficient, Pos_ref the desired position and Pos_servo the actual position of the linear servomotor.
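Equations 5.19 and 5.20 can be sketched as a small control structure; the values of l_0 and F_maxservo below are assumed placeholders, not the simulator's tuned ones.

```cpp
// Linear-servomotor control of the extension module (eqs. 5.19 and 5.20):
// the commanded velocity is proportional to the position error, while the
// applied force is capped at the servo maximum. Values are placeholders.
struct LinearServo {
    double FmaxServo = 3.0;  // F_maxservo: maximum servo force [N] (assumed)
    double l0        = 4.0;  // l_0: proportional coefficient (assumed)

    double forceLimit() const {                 // (5.19): F_max = F_maxservo
        return FmaxServo;
    }
    double velocityCommand(double posRef, double posServo) const {
        return l0 * (posRef - posServo);        // (5.20): V = l_0 (Pos_ref - Pos_servo)
    }
};
```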
Touch Module
The touch module (fig. 5.5 a)) simulates the camera and the touch sensor. The camera is not simulated in any way other than by including its weight in the total weight of the module. The touch sensor is simulated by means of a cylinder that detects collisions.
The collision detection is simulated by detecting contact between the surface of the cylinder (any part of it) and another object (e.g. the pipe), which is quite accurate, because the real module has a cover stuck to three contact sensors: when the cover touches anything, it is detected by the sensors.
Traveler Module
The traveler module (fig. 5.5 b)) is still a concept; there is no real module yet. It has been designed for the simulation, not for reality. It is composed of three wheels provided with encoders that measure the distance that the microrobot has traveled.
Each encoder is simulated by calling a function, provided by the ODE API, that gives the rotation (in degrees) of the wheel.
Since there are three encoders and each of them can measure a different distance depending on whether its wheel is in contact with the surface or not, an algorithm is necessary to extract an accurate value from the individual measurements. This algorithm is embedded in the control program of the module.
Encoders take measurements continuously. At every step of the control algorithm (every 15 ms approx.), the module reads each of the three encoders and takes the maximum of the three values. This value is added to the total, which is the distance traveled by the microrobot.
In order to simulate real wheels with encoders it is necessary to add some extra friction to the wheels so they do not keep turning due to inertia. For each wheel, a torque proportional to its angular velocity is applied in the opposite turning direction.
% Pseudocode for traveled-distance measurement and wheel damping
repeat {
    m1 = rotation measured by encoder 1 since the last step
    m2 = rotation measured by encoder 2 since the last step
    m3 = rotation measured by encoder 3 since the last step
    mtotal = mtotal + max(m1, m2, m3)   % the wheel in contact advances most

    av1 = angular velocity of wheel 1
    av2 = angular velocity of wheel 2
    av3 = angular velocity of wheel 3
    torque1 = -Kforce * av1             % damping torque opposes rotation
    torque2 = -Kforce * av2
    torque3 = -Kforce * av3
    apply torque1, torque2, torque3
}
(a) Touch Module (b) Traveler Module
Figure 5.5: Touch Module and Traveler Module
5.1.4 Environment model
The simulated environment tries to be as similar to the real world as possible. Physical parameters such as gravity, the masses and dimensions of the modules, the shapes and dimensions of the pipes, friction coefficients, bouncing coefficients, etc., are set to be as close to reality as possible.
Pipes have been designed in Autodesk Inventor to be as similar to real ones as possible and then imported into the simulation as trimesh objects. The microrobot may collide with the pipe, and it is also possible to define the friction coefficients of the pipe.
The attachment of modules has also been simulated. In reality, modules have to be attached manually, male connector to female connector. In the simulation, after the modules are created one next to the other, they are attached by pressing a button.
The procedure to run the simulation is also similar to running the real microrobot: the modules have to be connected together, attached and then powered on. From that moment on, the microrobot is ready to receive commands or to act autonomously.
5.2 Electronic and control simulator
5.2.1 Software description
The core of every robot behavior is the control algorithm that determines how the modules coordinate their actions to perform the behavior's functionality. In reality, each module has an independent processor running a nearly identical control program and exchanging messages through the common bus. However, the physics-based simulation runs on a single computer and executes the control programs for each simulated module along with solving the dynamics equations. Thus, to achieve realistic results, the simulation environment has to emulate the concurrent execution of the control programs of the different modules and the resulting communication issues. Ideally this emulation should be microprocessor-specific, that is to say, the simulated execution time of a particular program instruction should be equivalent to the real time it takes the module processors to process that instruction.
This approach, however, introduces another level of simulation fidelity, and therefore
considerable overhead. It was decided to follow a simpler route and use the concurrency mechanisms provided by the operating system, namely threads, to emulate simultaneously running modules. Each simulated module control program has its own independent thread of execution, which runs in an infinite loop. The physics simulation engine spawns all the module threads in the setup routine and then proceeds to the simulation loop. Each module thread yields execution control at the end of its program loop to give control to the simulation thread, which thus has the highest execution priority. This helps to make the simulation smooth and reduces the CPU load.
In order to simulate the existence of independent microcontrollers (processors), several threads run on the same machine:
1 thread for each module
1 thread for the central control
1 thread for the simulation, in charge of iterating the world and the physical parameters of the modules (i.e. servos)
1 thread for offline genetic algorithm computation (it only runs when the GA needs to be computed)
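The thread-per-module layout can be sketched with standard C++ threads. This is a simplified illustration, not the simulator's code: the counter increment stands in for one module control step, and the yield mirrors how the simulator's module threads hand control back.

```cpp
#include <atomic>
#include <thread>
#include <vector>

// One thread per module, each yielding at the end of every loop iteration,
// as the simulator's module threads do.
std::atomic<int> controlSteps{0};

void moduleLoop(int iterations) {
    for (int i = 0; i < iterations; ++i) {
        ++controlSteps;             // placeholder for one control-program step
        std::this_thread::yield();  // hand control back to the scheduler
    }
}

int runModules(int nModules, int iterations) {
    std::vector<std::thread> threads;
    for (int m = 0; m < nModules; ++m)
        threads.emplace_back(moduleLoop, iterations);
    for (std::thread& t : threads) t.join();
    return controlSteps.load();
}
```

The atomic counter plays the role that semaphores (critical sections) play in the real simulator: shared data touched by several threads must be protected.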
The emulated concurrency also enforces discipline in control program development. The fact that each simulated module runs an independent piece of code requires careful consideration of synchronization and of sensor data propagation among the modules in a configuration. Thus semaphores (critical sections) have been used to protect data that is accessed by several threads at the same time. This realistic approach makes the developed control algorithms much more suitable for transfer onto real modules, and makes it easier to move the code from simulation algorithms to embedded routines running in the modules' microcontrollers.
The simulator is divided into four parts:
An OS-dependent part that governs the inputs (mouse, buttons, etc.) and outputs (messages)
The physics simulation (ODE)
The central control
The control of every module
Simulation parameters
The main application has a timer that executes two tasks every 20 ms: the simulationloop routine and the drawing routines. Thus the simulation is painted every 20 ms. The simulationloop routine is in charge of iterating the world by the defined step, usually 0.0005 s (i.e. 0.5 ms), as many times as possible (i.e. 40 times: 20 / 0.5).
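The arithmetic above can be written as a tiny helper (assumed for illustration, not taken from the simulator): the frame period divided by the integration step gives the number of world iterations per painted frame.

```cpp
#include <cmath>

// Number of physics iterations executed per rendered frame: the timer period
// divided by the integration step, rounded to the nearest integer.
int substepsPerFrame(double framePeriodS, double worldStepS) {
    return static_cast<int>(std::lround(framePeriodS / worldStepS));
}
```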
5.2.2 Actuator control
The position to which the servo has to move is normally sent through a PWM signal (from the microcontroller to the servo). In the simulation this is done by simulating the behavior of the motors as shown in section 5.1.2. A function with one parameter (the desired position of the servomotor), setspangle(spangle), is used to set the position of the motors: in the real modules this function sends the PWM signal, while in the simulator it updates the variable spangle that the servomotor uses as its set point.
Figure 5.6: Accelerometer axis sketch
5.2.3 Sensor management
Sensors are a very important part of modules and are simulated in dierent ways.
Servo position
The servo position sensor is used in many cases to decide whether an action has finished or which action should be selected next. This sensor is easily implemented by accessing the current state of the modeled servo and retrieving its angle parameter.
Accelerometer
A gravity sensor or accelerometer is often used for dynamic locomotion and for detecting abnormal configuration positions. Real modules are equipped with a three-dimensional accelerometer, whose readings are accumulated over time and filtered to determine the direction of acceleration and gravity.
Accelerometers output a vector [a_x, a_y, a_z] giving the direction of the acceleration they are undergoing. From this vector it is possible to determine the orientation.
In the simulation this is emulated by directly accessing the orientation vector of every element of the simulation.
When the module is stopped, it is sometimes possible to calculate its orientation from the output of the accelerometers, a vector [a_x, a_y, a_z].
For example, if the module is in the position shown in figure 5.6, the pitch (rotation about the X axis) can be calculated as in eq. 5.21 and the roll (rotation about the Y axis) as in eq. 5.22. For the yaw, extra computation is needed.

θ = arctan(a_y / a_z)   (5.21)

φ = arctan(a_z / a_x)   (5.22)
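The two expressions above can be implemented directly; atan2 is used instead of a plain arctangent so the quadrant is resolved. This is an illustrative helper, not the simulator's own code.

```cpp
#include <cassert>
#include <cmath>

// Pitch and roll from a static accelerometer reading [ax, ay, az],
// following eqs. 5.21 and 5.22.
struct Angles { double pitch, roll; };

Angles anglesFromAccel(double ax, double ay, double az) {
    Angles a;
    a.pitch = std::atan2(ay, az);  // rotation about the X axis (eq. 5.21)
    a.roll  = std::atan2(az, ax);  // rotation about the Y axis (eq. 5.22)
    return a;
}
```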
Encoders
The traveler module will be equipped with encoders in each of its three wheels in order to
evaluate the distance it has traveled.
This is simulated by a function that reads the rotation of each wheel at fixed intervals and calculates how much the wheel has rotated in that period.
Once the information from all the wheels is available, it is processed to compute the distance traveled by the module.
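The odometry step just described can be sketched as follows; the function names and the averaging of the three wheels are illustrative assumptions consistent with the text.

```cpp
#include <cassert>
#include <cmath>

// Encoder-based odometry for the traveler module: every sampling period the
// wheel angle is read, the increment is converted to arc length, and the
// three per-wheel distances are combined.
double wheelDistance(double prevAngleRad, double currAngleRad, double wheelRadius) {
    return (currAngleRad - prevAngleRad) * wheelRadius;  // arc length this period
}

double moduleDistance(double d0, double d1, double d2) {  // three wheels
    return (d0 + d1 + d2) / 3.0;
}
```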
5.2.4 I2C communication
I2C is simulated through different classes: message, bus, and message queue. In reality, a message sent on the bus is heard by every device connected to it; this holds by construction, because all modules are connected to the bus and detect the voltage differences on the wires. In the simulation, however, it has to be implemented through a function that delivers the message to all modules connected to the bus.
5.2.5 Synchronism lines communication
The synchronism lines are emulated by two internal variables of the modules, one for the S_in signal and one for the S_out signal.
5.2.6 Simulation of the power consumption
As shown in section 4.3.6, the auto-protection mechanism is based on measurements of the consumption and position of the servomotors.
The servo position sensor is read by accessing the current state of the modeled servo and retrieving its angle parameter.
For the consumption control, a model that simulates the consumption has been developed following the real design used in the control boards of the modules. This model has been included in the servomotor model and calculates the current that the motor is drawing. It is an experimental model derived from tests with real modules.
If the consumption is increasing but the servo is not moving, there is almost certainly a problem (i.e. the servo is stuck).
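The stuck-servo rule can be stated as a simple predicate. The thresholds below are illustrative placeholders, not values from the thesis.

```cpp
#include <cassert>
#include <cmath>

// Auto-protection check: rising current while the measured servo angle is
// not changing indicates a blocked servo.
bool servoStuck(double currentNow, double currentPrev,
                double angleNow, double anglePrev,
                double minCurrentRise = 0.01,   // A, hypothetical threshold
                double minAngleChange = 0.5) {  // deg, hypothetical threshold
    bool currentRising = (currentNow - currentPrev) > minCurrentRise;
    bool notMoving     = std::fabs(angleNow - anglePrev) < minAngleChange;
    return currentRising && notMoving;
}
```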
Figure 5.7: Class diagram
5.3 Class implementation
The whole simulation has been built on C++ classes (figure 5.7). Each class represents a part of the system. There are classes to simulate the I2C communication protocol (bus, messages, message queue), a class to simulate the servomotor, a class for the whole robot, a general class for a module, a specific class for each module, etc. The interaction between classes can be seen in figure 5.8. Detailed information on the classes used in this thesis can be found in an annexed document.
5.3.1 I2C classes
This section refers to the classes aimed at simulating the I2C data bus. It includes three classes: the I2C message, the bus (how to send and read information) and the message queue.
In order to make the simulator as realistic as possible it was very important to reproduce the structure of the I2C protocol. Also, as the programs written in the simulator are meant to be downloaded in the future into the modules' microcontrollers, it was necessary to keep the same structures and functions that would be used in real communications.
Class CI2CMessage
The CI2CMessage class is used to handle the I2C messages internally. An I2C message is composed of the following fields: address, param1, param2 and instruction. The meaning of these fields will be explained in section 7.2.
Class CBusI2C
It emulates the behavior of the I2C bus and sends messages to/from modules and the PC. It provides two functions used to send I2C messages: sendi2cmessage (send an I2C message to a specific address) and forwardI2Cmsg (forward an I2C message on the bus).
The first function is used to send an I2C message (one of its parameters). The body of this function will be substituted by the specific code of the module (it depends on the libraries it uses), in C, assembler or whatever the target requires.
The second function is used to simulate the behavior of the bus: a message sent to the bus is heard by every device connected to it. In real life this holds by construction, because all modules are connected to the bus and detect the voltage differences on the wires, but in the simulation it has to be implemented through a function. This function calls the getI2Cmsg function of each module to deliver the message.
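The broadcast behavior of forwardI2Cmsg can be sketched as a loop over the connected modules. The struct and function names mirror the text; the bodies are illustrative stand-ins for the thesis classes.

```cpp
#include <cassert>
#include <vector>

struct I2CMessage { int address, instruction, param1, param2; };

struct Module {
    int received = 0;
    void getI2Cmsg(const I2CMessage&) { ++received; }  // every module listens
};

// The simulated "wire": hand the message to every connected module, which
// then decides on its own whether the address matches.
void forwardI2Cmsg(std::vector<Module>& bus, const I2CMessage& msg) {
    for (Module& m : bus)
        m.getI2Cmsg(msg);
}
```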
The structure of the I2C frames will be shown in figure 7.6.
Class CMessageQueue
A class that simulates a queue of messages for each module. When a message arrives it is copied into the queue so that it can be handled while other messages keep arriving.
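The idea behind CMessageQueue can be sketched with a std::queue-based FIFO; the thesis class itself is described in the annexed document, so this is an illustrative approximation.

```cpp
#include <cassert>
#include <queue>

struct Msg { int instruction; };

// Per-module FIFO: arriving messages are copied in so the control thread
// can consume them while new ones keep arriving.
class MessageQueue {
public:
    void push(const Msg& m) { q.push(m); }  // called on message arrival
    bool pop(Msg& out) {                    // called by the control loop
        if (q.empty()) return false;
        out = q.front();
        q.pop();
        return true;
    }
private:
    std::queue<Msg> q;
};
```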
5.3.2 Servo class
This class is aimed at simulating the control of the servomechanism. This control is usually performed by the electronics of the servomotor, but since ODE provides only a simple motor, an additional control layer was necessary for accuracy.
The class is called CServo and it mainly implements the procedure described in section 5.1.2.
The position the servo has to move to is normally sent through a PWM signal (from the microcontroller to the servo). In the simulation this is done by the function setspangle, used by each module to set the position of the servo. In the real modules this function is replaced by the PWM control function.
5.3.3 Module classes
These classes are intended to simulate the electronics inside the modules. They comprise a common class (CModule) that includes everything common to all modules and a different class for each type of module: rotation (CRotationModule), support (CSupportModule), extension (CExtensionModule), helicoidal (CHelicoidalModule), touch and camera (CTouchModule) and traveler (CTravelerModule).
Class CModule
It is a general class from which the modules inherit. It includes all common characteristics of the modules:
C++ functions (create, delete, iterate, etc.)
I2C communication
common sensors (accelerometer)
attach (to simulate that two modules are linked) and detach
control function

Figure 5.8: Class interaction
The control function is very important, because it simulates the routine that runs in the microcontroller. It is an independent thread that is created when the module is created and killed when the module is removed. It is designed to be as similar as possible to the code that will be embedded in the microcontroller.
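The shape of such a control function can be sketched as a loop body; the loop is bounded here so the sketch terminates, whereas on the module it would run indefinitely. The step comments summarize the text and are not the thesis code.

```cpp
#include <atomic>
#include <cassert>

// Per-module control function: the body of the thread created with the
// module and killed when the module is removed.
int controlFunction(std::atomic<bool>& alive, int maxIters) {
    int iters = 0;
    while (alive && iters < maxIters) {
        // 1) pop and handle pending I2C messages from this module's queue
        // 2) advance the active behavior one step
        // 3) update the actuator set point (setspangle)
        ++iters;
    }
    return iters;
}
```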
All the specific classes for each type of module provide some common functions:
Create the physical model: bodies, geoms, joints, motors, I2C address
Launch the thread
Drawing
Eliminate the module and kill the thread
Module control
Heterogeneous layer (communications, configuration check, etc.)
Behavior handling (the different behaviors will be described in section ??)
Class CRotationModule
The class represents the rotation module. Besides the features described previously, it
provides the following functions:
Control of two servomotors
Iterate servomotors
Central pattern generation (CPG)
Class CSupportModule
This class represents the support module. Besides the features described previously, it
provides the following functions:
Control of one servomotor
Iterate the servomotor
Expansion and contraction movements
Class CExtensionModule
This class represents the extension module. Besides the features described previously, it
provides the following functions:
Control of one servomotor and a linear servomotor
Iterate the servomotors
Extension and contraction movements
Class CHelicoidalModule
This class represents the helicoidal module. Besides the features described previously, it
provides the following functions:
Control of the simulated motor that pushes the module forward
Class CTouchModule
This class represents the touch/camera module. Besides the features described previously,
it provides the following functions:
Control of touch sensor
Control of IR sensors
Class CTravelerModule
This class represents the traveler module. Besides the features described previously, it
provides the following functions:
Control of simulated encoder
Control of odometry
5.3.4 Central Control class
The CCentralControl class is used to simulate the central control. As with the module classes, an independent thread is created for its object upon creation.
This thread runs all the central control behaviors. The class communicates with the modules via the I2C bus, and the behaviors are member functions of the class.
5.3.5 Robot class
This class (CRobot) represents the whole robot, and it is used to keep a record of the modules that the microrobot is composed of, their positions, etc.
It is a linked list of the modules, used for iterations, I2C communications, drawing the modules, attachment, etc.
5.3.6 Graphical User Interface classes
These are the classes that implement the structure of the simulator and the graphical user interface: the classes of the main program, the dialogs, the different views and the classes for drawing (with the OpenGL commands).
Class CMicrotubApp, CMainFrame and CChildView These are the classes created by default by the Visual Studio environment. They refer to the main application, the main frame and the drawing environment.
Class CAboutDialog A dialog with information about the version of the application.
Class CCentralControlDialog The dialog with the operator / central control commands: create modules, attach modules, set gravity, run the simulator, clear, etc.
Class CDrawWindow This is the class in charge of drawing the ODE environment.
5.4 Heterogeneous modular robot
Thanks to the simulator it is possible to develop movement algorithms for a microrobot composed of different types of drive modules. The simulator helps to detect problems before the real modules are built, but it also helps to identify bad and optimum configurations. It is a test bench where it is easier and faster to try different configurations.
The simulator has helped to identify the minimum number of modules needed in order to use a helicoidal module (figure 5.9(a)): a contact module, two rotation modules and one helicoidal module. This is because, using only one rotation module, the microrobot may get stuck in the pipe when negotiating an elbow.
Starting from the locomotion gaits presented in chapter 4, and combining them and/or adding other modules, it is possible to obtain better configurations, in the sense of being faster, more robust, or able to reach different places.
Several rotation plus helicoidal
Here it is possible to determine the optimal position for the helicoidal module within the rotation module chain, or the optimum number of helicoidal modules.
Figure 5.9(b) shows the microrobot in an exploration task. This includes going forward and negotiating an elbow when a bifurcation is detected by the contact module. The microrobot is composed of the following modules: one contact, two rotation, one helicoidal, two rotation and one passive. The main drive force comes from the helicoidal module. The rotation modules help to go forward with a snake-like movement, but their main task is to turn.
Several support plus several extension modules
The inchworm gait can be improved by adding more modules. The homogeneous inchworm configuration is composed of support + extension + support. Instead of one module of each type it is possible to use more than one, obtaining the following advantages:
more grip, since there are several support modules grasping the pipe
more velocity, because the total extension is the number of modules times the extension of one module in the same time
Rotation plus helicoidal plus support
The problem with the helicoidal module is that, in order to have grip, all its wheels must be in touch with the pipe. In bifurcations this is not always possible. Adding the rotation and support modules allows the microrobot to turn: the support module holds the microrobot while the rotation module turns, placing the helicoidal module in the next stretch of the pipe so that it can continue moving forward.
Several rotation plus support plus extension plus helicoidal
In this combination all the locomotion gaits are available together: snake-like, worm-like and helicoidal. The microrobot can change from one to another depending on the situation.
(a) Minimal configuration: contact, rotation and helicoidal
(b) Contact, two rotation, one helicoidal, two rotation and one passive
Figure 5.9: Elbow Negotiation
The extension module, since it has one rotation DOF, can take part in some of the snake-like movements. Other modules can act as passive modules in the snake-like movements without affecting the overall movement.
5.5 Conclusions
This chapter has been dedicated to explaining the simulator that has been used in this thesis. The main reasons why this simulator has been built are:
it is a fast way to test prototypes before building them, because it is too expensive to build a module without reasonable certainty that it will work more or less as expected
it provides a very efficient way of prototyping and verifying control algorithms and hardware
it provides a tool to build and develop the models for the algorithms that will be used in the modules
The simulator has been built upon an existing open-source implementation of rigid body dynamics, the Open Dynamics Engine (ODE). ODE was selected for its popular open-source physics simulation API, its online simulation of rigid body dynamics, and its ability to define a wide variety of experimental environments and actuated models.
On top of ODE a complex system has been built to emulate the behavior of the microrobot. Since most of the modules use the same servomotor, an accurate model of the servomotor has been built. Regarding the hardware, modules have been designed as simply as possible (using simple primitives) to keep the simulation fluid, while reflecting their real physical conditions and parameters as much as possible and leaving aesthetic features in the background. The morphology, body mass and torque specifications have been respected as much as possible.
The environment has also been simulated with special regard to friction, collisions and interactions between objects.
On top of all of this sits the electronic and control simulator. The simulated control program emulates the behavior of the modules through concurrent execution of control programs for each module, including the resulting communication issues. Each simulated module control program has its own independent thread of execution which runs in an infinite loop. There is another thread for the central control and one for the GUI.
The actuator control, the management of the sensors (accelerometers, encoders), the I2C communication, the synchronism lines and the power consumption have also been simulated.
Everything has been developed in C++ and structured in classes for the modules, the robot, the I2C communications, etc. All classes have been described, and the reader can find more information in an annexed document that describes the code in detail.
Finally, the last section describes how the simulator can be used to test different heterogeneous configurations, yielding interesting conclusions about their locomotion and behavior.
This simulator has been validated by comparing its results with those obtained from real modules (see section 8.2), with very satisfactory outcomes. It has proved to be a valid tool for testing configurations and developing prototypes. It helps to obtain results much faster than with real modules and prevents the modules from breaking during tests.
Chapter 6
Positioning System for Mobile
Robots: Ego-Positioning
By knowing where you are, you will know where you are going
Anonymous
In open spaces it is very important to know the orientation of the robot or module. The EGO-positioning system is a method that allows all individual robots of a swarm (or all modules in a modular robot) to know their own positions and orientations based on the projection of sequences of coded images composed of horizontal and vertical stripes.
Thanks to several photodiodes placed in specific positions, modules or robots are able to determine their position and orientation from the images projected onto them.
As opposed to the previous chapters, the ego-positioning system has been developed within the framework of the I-SWARM project and has been tested on the robot Alice (described in the following sections). Although it has not been applied to the modular microrobot described in the previous chapters, it is a very interesting system that complements the work already done and can be perfectly integrated into this microrobot, as will be explained later on.
In the following section, a brief review of positioning systems will be given.
6.1 Brief on Positioning Systems for Mobile Robots
The field of global positioning systems for autonomous robots or swarms of robots has been researched in recent years. Most of these systems are focused on indoor ubiquitous computing and indoor localization of autonomous robots. Most of them rely on infrastructure, multi-mode ranging technologies (RF, ultrasonic and IR) and centralized or powerful processing. There are not many which propose a positioning system in which the robot can itself calculate its position based on the information provided by the system.
Another important point is that none of the systems described in this chapter are designed for micro-robots, nor have they used an optical system similar to the one used in ego-positioning, which seems to be very innovative.
Figure 6.1: Experimental setup of iGPS
Some descriptions of the systems have been obtained from [Hightower and Boriello, 2001].
The following sections show a review of some systems, divided by the type of sensor/detection that they use.
6.1.1 IR light emission-detection
The iGPS (indoor GPS) described in [Hada and Takase, 2001] is a system for multiple mobile robot localization inside office buildings. It is based on the detection, with a camera, of the IR light emitted by the robots.
In [Hernandez et al., 2003] a low-cost system for mobile robot indoor localization is presented. The system is composed of an emitter located on a wall and a receptor at the top of the robot. The emitter is a laser pointer acting as a beacon, and the receptor is a cylinder made of 32 independent photovoltaic cells. The robot's position and orientation are obtained from the times of impact of the laser on each cell.
The NorthStar system [nor, ] uses triangulation to measure position and heading in relation to IR light spots that can be projected onto the ceiling (or another visible surface). Because each IR light spot has a unique signature, the detector can instantly and unambiguously localize itself. Because the NorthStar detector directly measures position and heading, a localization result is intrinsically robust. A NorthStar-enabled product does not require prior training or mapping to measure its position, and there is no need for expensive computational capabilities.
Figure 6.2: Behavior of the system for irregular floors
Figure 6.3: NorthStar
Figure 6.4: Indoor positioning network
6.1.2 Electrical fields
These systems are based on Ultra-Wideband (UWB) radio impulses. [Eltaher et al., 2005] is a self-positioning system based on UWB that uses electric field polarization together with received signal level to auto-detect the position. Due to its large bandwidth it can reach sub-centimetre accuracy.
[Zhang and Zhao, 2005] is also a UWB impulse radio system. It is particularly suitable for indoor localization using multiple antennas. It is based on time difference of arrival (TDOA) estimation techniques applied to a time-hopping impulse radio system and the signals at the receiver. The error is in the range of centimetres.
The SpotON system [Hightower et al., 2000] implements ad hoc lateration with low-cost tags. SpotON tags use radio signal attenuation to estimate inter-tag distance. They exploit the density of tags and the correlation of multiple measurements to improve both accuracy and precision.
From my point of view, one of the problems of these wireless systems is that they cannot maintain a fixed signal level at a specific location. Thus, the accuracy is not sufficient to use them in small systems (micro-environments).
6.1.3 Wireless Ethernet
IEEE 802.11 wireless Ethernet is becoming the standard for indoor wireless communication. Many papers propose the use of the measured signal strength of Ethernet packets as a sensor for a localization system. [Ladd et al., 2004] is one of them. It states that off-the-shelf hardware can accurately be used for location sensing and real-time tracking by applying a Bayesian localization framework.
Figure 6.5: Illustration of time difference of arrival (TDOA) localization
Figure 6.6: Example of wireless ethernet distribution of five base stations (enumerated small circles)
[Dardari and Conti, 2004] addresses indoor localization techniques through ad-hoc wireless networks, where anchors and unknown nodes are randomly positioned in a square area. The position of unknown nodes is estimated from received signal strength (RSSI) measurements, since nodes are assumed not to be equipped with specialized localization hardware.
[Haeberlen et al., 2004] allows for remarkably accurate localization across an entire office building using nothing more than the built-in signal intensity meter supplied by standard 802.11 cards. The system can be trained in less than one minute per office or region, walking around with a laptop and recording the observed signal intensities of the building's unmodified base stations. It is possible to localize a user correctly in over 95% of cases.
[Serrano et al., 2004] describes a method to estimate the position of a mobile robot in an indoor scenario using odometric calculations and the WiFi energy received from the wireless infrastructure. This energy is measured by the wireless network card on board the mobile robot, and is used as another regular sensor to improve the position estimate.
RADAR [Bahl and Padmanabhan, 2000] has been developed by the Microsoft Research group. It is a building-wide tracking system based on the IEEE 802.11 WaveLAN wireless networking technology. RADAR measures, at the base station, the signal strength and signal-to-noise ratio of the signals that wireless devices send, and then uses this data to compute the 2D position within a building. Microsoft has developed two RADAR implementations: one using scene analysis and the other using lateration. Several commercial companies such as WhereNet (http://www.widata.com) and Pinpoint (http://www.pinpointco.com) sell wireless asset-tracking packages, which are similar in form to RADAR.
Again, these systems present the same problems as those in the previous section.
6.1.4 Ultrasound systems
The Cricket Location Support System [Priyantha et al., 2000] uses ultrasound emitters to create the infrastructure and embeds receivers in the object being located. This approach forces the objects to perform all their own triangulation computations. Cricket uses the radio frequency signal not only for synchronization of the time measurement, but also to delineate the time region during which the receiver should consider the sounds it receives. Cricket uses ultrasonic time-of-flight data and a radio frequency control signal, and implements both the lateration and proximity techniques. Receiving multiple beacons lets receivers triangulate their position. Receiving only one beacon still provides useful proximity information when combined with the semantic string the beacon transmits on the radio. Cricket's advantages include privacy and decentralized scalability, while its disadvantages include a lack of centralized management or monitoring and the computational burden (and consequently power burden) that timing and processing both the ultrasound pulses and RF data place on the mobile receivers.
6.1.5 Electromagnetic
Electromagnetic sensing offers a classic position tracking method [Raab et al., 1979] [Paperno et al., 2001].

Figure 6.7: MotionStar system

Tracking systems such as MotionStar (http://www.ascension-tech.com/products/motionstar.php) sense precise physical positions relative to the magnetic transmitting antenna. These systems offer the advantage of very high precision and accuracy, on the order of less than 1 mm spatial resolution, 1 ms time resolution, and 0.1° orientation capability. Disadvantages include steep implementation costs and the need to tether the tracked object to a control unit. Further, the sensors must remain within 1 to 3 meters of the transmitter, and accuracy degrades with the presence of metallic objects in the environment.
6.1.6 Pressure sensors
In Georgia Tech's Smart Floor [Orr and Abowd, 2000] proximity location system, embedded pressure sensors capture footfalls, and the system uses the data for position tracking and pedestrian recognition. This unobtrusive, direct physical contact system does not require people to carry a device or wear a tag. However, the system has the disadvantages of poor scalability and high incremental cost, because the floor of each building in which Smart Floor is deployed must be physically altered to install the pressure sensor grids.
6.1.7 Visual systems
Microsoft Research's Easy Living [Krumm et al., 2000] uses real-time 3D cameras to provide stereo-vision positioning capability in a home environment. Although Easy Living uses high-performance cameras, vision systems typically use substantial amounts of processing power to analyze frames captured with comparatively low-complexity hardware. State-of-the-art integrated systems [Darrell et al., 1998] demonstrate that multimodal processing (silhouette, skin color, and face pattern) can significantly enhance accuracy.

Figure 6.8: Smart Floor plate (left) and load cell (right)
Figure 6.9: Ego-positioning system

Vision location systems must, however, constantly struggle to maintain analysis accuracy as scene complexity increases and more occlusive motion occurs. The dependence on infrastructural processing power, along with public wariness of ubiquitous cameras, can limit the scalability or suitability of vision location systems in many applications.
An important drawback of visual systems is the need for a direct line-of-sight.
6.2 Introduction to EGO-positioning
The EGO-positioning system is a method conceived for robotic swarms to allow all individual robots of the swarm to know their own positions and orientations based on the projection of sequences of coded images composed of horizontal and vertical stripes (fig. 6.9).
Thanks to two photodiodes in opposite corners (figure 6.10 a)), robots are able to determine their position and orientation from the images projected over them, according to the following expressions (6.1 to 6.4):
x_r = (x_1 − x_2) / 2   (6.1)

y_r = (y_1 − y_2) / 2   (6.2)

=   (6.3)

θ = arctan(ΔX / ΔY)   (6.4)

Figure 6.10: Position and orientation calculation (a) and Alice robot (b)
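The heading computation of eq. 6.4 can be sketched as follows, with atan2 resolving the quadrant that a plain arctangent loses. Here ΔX and ΔY are taken as the differences between the coordinates decoded by the two corner photodiodes; the function name and sign conventions are illustrative assumptions.

```cpp
#include <cassert>
#include <cmath>

// Robot heading from the two photodiode readings (cf. eq. 6.4):
// theta = arctan(dX / dY), computed with atan2 to keep the quadrant.
double headingRad(double x1, double y1, double x2, double y2) {
    double dx = x1 - x2;  // coordinate differences between the two diodes
    double dy = y1 - y2;
    return std::atan2(dx, dy);
}
```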
The idea of EGO-positioning can also be used in the modules described in chapter 4, extending the 2D arrangement of the photodiodes to a 3D scenario, as shown in figure 6.11.
Projected images can also be divided into regions to transmit different information to groups of robots.
In order to test the ego-positioning concept, experiments have been performed on the robot Alice (see fig. 6.10 b)). In some of the next paragraphs there are separate notes for the Alice and I-SWARM robots.
The setup used for Alice is shown in table 6.1, compared to the setup that will probably be used for I-SWARM.
Although for Alice a different stripe width has been chosen for the x and y axes, in I-SWARM it is the same, which implies having a different number of stripes for each axis. The minimum resolution that can be chosen is the pixel size, which is 0.29 mm.
Figure 6.11: Ego-positioning extension to chained modular robots
6.3 Hardware
6.3.1 Sensing devices
Photodiode BPW34 on Alice
The light sensor used on Alice is a high-speed, high-sensitivity PIN photodiode (BPW34), sensitive to visible and infrared radiation (see fig. 6.12).
For reasons that will be explained in further paragraphs, it is necessary to use a filter between the photodiode and the input of the microcontroller. This filter performs the following tasks:
Transform the current coming from the photodiode into a voltage that can be read by the analog-to-digital converter
Polarize the photodiode
Low-pass filter the signal to get rid of glitches and to stabilize it

Table 6.1: Setup description
                        I-SWARM            Alice
Beamer resolution       1024x768 pixels    1024x768 pixels
Arena size              297 x 223 mm² (1)  512 x 385 mm²
Photodiode size         0.0625-0.09 mm²    2.65 x 2.65 mm²
Micro-robot size        4-9 mm²            2 x 2 cm²
Stripe width (x)        0.29 mm            4 mm
Stripe width (y)        0.29 mm            3 mm
Number of stripes (x)   1024               128
Number of stripes (y)   768                128
Pixel size              0.29 mm            0.5 mm

(1) The final size of the arena will be approximately 297x223 mm², corresponding to an A4 format. For such surfaces the image cannot be focused, and a special optic might be necessary.

Figure 6.12: BPW34 main features (a) and photodiodes board (b)
The proposed filter is shown in fig. 6.13 a). The first resistor allows setting the output reference voltage to any value (so it is possible to saturate the signal to avoid the effects of the beamer). Assuming the input impedance of the ADC is very high, the cut-off frequency is given by:

F_c = 1 / ((R1 + R2) C)   (6.5)
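Eq. 6.5 can be checked numerically. The sketch below evaluates the expression as written, 1/((R1 + R2)C); note that this is the angular cut-off in rad/s, and dividing by 2π gives the corresponding frequency in Hz. The component values in the test are illustrative, not the thesis design.

```cpp
#include <cassert>
#include <cmath>

// Cut-off of the first-order RC filter from eq. 6.5, assuming the ADC
// input impedance is high enough to ignore loading.
double cutoffRadPerS(double r1, double r2, double c) {
    return 1.0 / ((r1 + r2) * c);  // eq. 6.5 as written (rad/s)
}

double cutoffHz(double r1, double r2, double c) {
    return cutoffRadPerS(r1, r2, c) / (2.0 * 3.14159265358979323846);
}
```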
Solar cell on I-SWARM (aSi:H)
One possible solution for I-SWARM is to use amorphous silicon photodiodes (aSi:H), whose spectral sensitivity is shown in figure 6.13 b).
The robots that will be used in the I-SWARM project will be as small as possible. This means that the integration of resistors will be avoided if possible in order to save space, due to the large size of the required resistors.
Because of this, a current comparator will be used as a signal conditioning stage (see fig. 6.14). It will compare the current coming from the photodiode with a reference current and will output a voltage level (logical 0 or 1) corresponding to the level of intensity of the image projected over the photodiode (normally a black or white image).
An estimation of the maximum current that the solar cell can give is I
sc
= 45A/mm
2
(for more information see Deliverable D.5.2.Powering). For a total surface between 0.0625
CHAPTER 6. Positioning System for Mobile Robots: Ego-Positioning

Figure 6.13: Optimal RC Filter (a) and Spectral sensitivity of aSi:H (b)

Figure 6.14: Current comparator for I-SWARM

mm² and 0.09 mm², this gives a maximum current of 2.81 µA to 4.05 µA.
The reference current should be adapted, depending on the threshold, to a percentage
of this value. For example, if the threshold is set to 50%, the reference current will be
between 1.405 µA and 2.025 µA.
If there is a second source of light (to increase the power received by the robots), the
threshold and the reference current should be increased and adapted to the new powering.
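The reference-current calculation above can be sketched as follows (a minimal illustration of the 50% threshold over the 0.0625-0.09 mm² cell range):

```python
# Reference current for the I-SWARM current comparator: a percentage (threshold)
# of the maximum photocurrent, Isc = 45 uA/mm2, for cell areas of 0.0625-0.09 mm2.

ISC_PER_MM2 = 45e-6  # A per mm2

def reference_current(area_mm2: float, threshold: float) -> float:
    """Reference current in amperes for a given cell area and threshold fraction."""
    return ISC_PER_MM2 * area_mm2 * threshold

for area in (0.0625, 0.09):
    print(f"{area} mm2 -> {reference_current(area, 0.5) * 1e6:.3f} uA")
```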
6.3.2 Beamer
Beamer characterization
The color of the images is produced by a color disk (see fig. 6.15) in the DLP beamer (Digital
Light Processing beamer). This color disk is divided in 4 regions: red, green, blue and
transparent. The white color is achieved by the combination of all the colors.

Figure 6.15: Color wheel of the DLP beamer

Figure 6.16: Response of the beamer to a white image: (a) not saturated, (b) saturated
When a white image is projected over the photodiode, the output obtained is shown
in figure 6.16 a).
However, due to the polarization circuit of the photodiode (3.3 V for the maximum
level of intensity), the output of the photodiode is usually saturated, as shown in
figure 6.16 b).
It is possible to see that all color levels are saturated (except for the blue). With a
higher resistance it would also be saturated; with a smaller value the previous figure is
obtained.
Without the color wheel, fig. 6.17 is obtained for a white image.
In any case, the signal has to be filtered in order to obtain a stable signal at the
entrance of the analog-to-digital converter of the microcontroller and to avoid the glitches
observed in the pictures. These glitches are repetitive at a frequency of 60 Hz and their
typical width is about 40 µs.
For the red, green and blue colors, fig. 6.18 a) is obtained.
For their combinations two by two (yellow, purple and cyan), which give a pulse of double
width (half for each color that forms it), the output is shown in fig. 6.18 b).
Figure 6.17: Response of the beamer (without color wheel) to a white image

Figure 6.18: Response of the photodiode to a red image (a) and a yellow image (b)
Maximum frequency

According to the beamer specifications, the maximum rate at which images can be pro-
jected at a resolution of 1024 x 768 pixels is 85 Hz. In order to test this, a sequence of black
and white images was projected over the photodiode at 60 and 85 Hz. The output of the
photodiode (after the filter with resistors of 68 kΩ and 56 kΩ and a capacitor of 47 nF) is
shown in figure 6.19. It is possible to see that at 60 Hz the output signal is as expected,
while at 85 Hz the signal is corrupted. This implies that the beamer is not capable of
sending sequences of images at 85 Hz and that the maximum frame rate it is possible to
achieve is 60 Hz.
Grayscale

Some experiments have been carried out regarding the possibility of using not only black
and white images, but also grey images. The output of the photodiode (without any filter)
when a 50% grey image is projected is shown in figure 6.20.

Figure 6.19: Response of the photodiode to a projection of sequences of black and white
images at 60 Hz (a) and 85 Hz (b)

Figure 6.20: Response of the photodiode to a grey image

This means that the only possibility to detect it is to filter the signal via a low-pass
filter and detect the mean value of the signal.
Figure 6.21 shows the output (with the same filter as before) of the photodiode when
sequences of images composed of Black-Grey-White (3 levels) and Black-Grey1-Grey2-
White (4 levels) are projected.
It is possible to see that up to 4 levels of grey could be detected, but in practice the
outputs of the analog-to-digital converter for 4 levels of grey are too close and sometimes
they overlap. Thus, the recommended number of levels that can be detected is 3.
- Maximum number of levels that could be detected: 4
- Recommended number of levels: 3
Figure 6.21: Response of the photodiode to a projection of sequences of 3 (a) and 4 (b)
different grey scale images at 60 Hz

Figure 6.22: Distribution of intensity
Intensity distribution of the emitted light

The intensity of the light received by the photodiodes varies with the position. In figure
6.22 it is possible to see a gradient of the intensity.
For a sequence of black and white images, the maximum level at the output of the
photodiode goes from 3.3 V at the point of maximum intensity to 2 V at the lowest (figure
6.23).
To overcome this problem, the first white image in the sequence of images is used to
determine the voltage level of white at that position, and it is used as a reference for the
rest of the measurements. All other values are referred to that one.
Figure 6.23: Output voltage for a black and white sequence at the point of higher (a) and
lower (b) illumination
6.4 Software
6.4.1 EGO-positioning procedures: theory and performances
Binary code

This is the simplest code. The arena is divided into 128 stripes in both the horizontal and
vertical planes, and each stripe is then codified with 7 bits, where 0 is equal to black
and 1 to white. See fig. 6.24 a).

Gray code

A Gray code is a binary numeral system where two successive values differ in only one
digit. In our particular case, as can be seen in the figure, this means that there are
always (except at the ends) two stripes of the same color together (in the binary code they
always alternate), which reduces the error rate to almost half when the photodiode is in
the middle of two stripes. See fig. 6.24 b).
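The binary/Gray conversion behind these stripe codes (performed in hardware on I-SWARM, see fig. 6.29) can be sketched as:

```python
# Binary <-> Gray conversion for the 7-bit stripe codes (128 stripes per axis).

def binary_to_gray(n: int) -> int:
    """Gray code of n: adjacent values differ in exactly one bit."""
    return n ^ (n >> 1)

def gray_to_binary(g: int) -> int:
    """Invert the Gray coding by a prefix XOR."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# Adjacent stripes differ in a single bit, which is what halves the error rate
# when a photodiode straddles two stripes.
assert all(bin(binary_to_gray(i) ^ binary_to_gray(i + 1)).count("1") == 1 for i in range(127))
assert all(gray_to_binary(binary_to_gray(i)) == i for i in range(128))
```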
Performance

For transmission at 60 Hz with sequences of black and white images:
- Start bits: 3 (B-B-W for binary, B-W-W for Gray, etc.)
- Position bits: 14 (7+7)
- Stop bits: 0
- Time (at 60 Hz): 0.28 s
- Data rate: 60 bits/s (1 bit = 0.017 s, 1 kB = 133.3 s)
Figure 6.24: Binary (a) and Gray (b) code
For transmission at 60 Hz with 3-level images (Black, White and Grey (RGB = (130,130,130))):
- Start bits: 3 (B-W-B)
- Position symbols: 10 (5+5)
- Stop bits: 0
- Time (at 60 Hz): 0.22 s
- Data rate: 60 symbols/s (90 bits/s) (1 bit = 0.011 s, 1 kB = 88.8 s)
Color-based transmission

It is possible to use the way the beamer produces the colors (seen in section 6.3.2) to send
data at a higher rate.
If the signal is sampled at the right points, the values of red, blue and green of the
emitted color can be obtained (fig. 6.25). Thus, it is possible to send three bits with every
image, and the data rate is multiplied by 3.
Thus, the sequence of bits to be sent can be divided into pieces of three and
codified according to table 6.2.
On reception, the image has to be sampled to get the three values of blue, red and
green and reassemble the original sequence of bits.
For example, if the sequence to be sent is

    01000110111001010 (startcode, xpos, ypos)     (6.6)

the sequence is divided into
Figure 6.25: Sampling time to get the RGB values of the projected image

BLUE  RED  GREEN
 0     0     0    Black
 0     0     1    Green    (suitable
 0     1     0    Red       for
 0     1     1    Yellow    startcode)
 1     0     0    Blue
 1     0     1    Cyan
 1     1     0    Purple
 1     1     1    White
Table 6.2: Color coding table

    010 001 101 110 010 10 + 0 to complete     (6.7)

and then the images

    Green Blue Purple Yellow Green Red     (6.8)

will be projected by the beamer. On reception, the microcontroller will take three
samples for each image, corresponding to the three bits it encodes.
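The encoding into color images can be sketched as below; reading each triple as (BLUE, RED, GREEN) is an assumption for illustration, and the zero padding follows (6.7):

```python
# Map bit triples to projected colors following Table 6.2, reading each triple
# as (BLUE, RED, GREEN). The triple ordering is an assumption for illustration.

COLORS = ["Black", "Green", "Red", "Yellow", "Blue", "Cyan", "Purple", "White"]
# index = blue*4 + red*2 + green, i.e. the row number in Table 6.2

def encode_colors(bits: str) -> list:
    """Split a bit string into triples (zero-padded) and map each to a color."""
    bits += "0" * (-len(bits) % 3)  # "+ 0 to complete", as in (6.7)
    return [COLORS[int(bits[i:i + 3], 2)] for i in range(0, len(bits), 3)]

print(encode_colors("010"))  # -> ['Red']
```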
6.4.2 I-Swarm considerations

In Alice, 7 bits (meaning 7 images) are used to send the position in the horizontal axis
and 7 for the vertical one. That is because the stripe resolution (i.e. the size of the
photodiodes) needed is 4 mm and 3 mm.
In the I-SWARM project, the stripe resolution that is required is about 0.25 to 0.3 mm.
In this document, 0.29 mm is taken as the reference value. This means that 1024
stripes for X and 768 stripes for Y have to be used to cover the entire arena. Thus, 10 bits
must be used to transmit the position (2^9 = 512 and 2^10 = 1024).
An estimation of the performance that could be achieved for I-SWARM for
transmission at 60 Hz with sequences of black and white images is:
- Start bits: 3 (B-B-W for binary, B-W-W for Gray, etc.)
- Position bits: 20 (10+10)
- Stop bits: 0
- Number of images to send: 23
- Time (at 60 Hz): 0.38 s
- Data rate: 60 bits/s (1 bit = 0.017 s, 1 kB = 133.3 s)
If using the color-based transmission technique described in the previous section, at
60 Hz, and for I-SWARM, the results would be:
- Start bits: 3 (B-B-W for binary, B-W-W for Gray, etc.)
- Position bits: 20 (10+10)
- Stop bits: 0
- Number of color images to send: 8 (23 / 3, rounded up)
- Time (at 60 Hz): 0.13 s (8/60)
- Data rate: 180 bits/s (1 bit = 0.0055 s, 1 kB = 44.4 s)
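These timing and data-rate figures follow directly from the 60 Hz frame rate; a minimal sketch reproducing them (the bit counts are those listed above):

```python
import math

# Transmission-time and data-rate estimates at a 60 Hz frame rate, reproducing
# the figures above for black-and-white and color-based coding.

FRAME_RATE = 60  # images per second

def tx_time(n_images: int) -> float:
    """Seconds needed to project n_images frames."""
    return n_images / FRAME_RATE

# Black and white: 3 start bits + 20 position bits = 23 images, 1 bit per image
print(f"B/W: {tx_time(23):.2f} s, 1 kB in {8000 / FRAME_RATE:.1f} s")
# Color-based: 3 bits per image -> ceil(23 / 3) = 8 images, 180 bits/s
print(f"Color: {tx_time(math.ceil(23 / 3)):.2f} s, 1 kB in {8000 / (3 * FRAME_RATE):.1f} s")
```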
6.4.3 Image Sequence Programming

The images that are projected over the arena are drawn using the DirectX library. This
library allows accessing the graphics card directly, which is much faster than using a video.
In addition, it gives the following advantages:
- The frame rate is guaranteed.
- It does not need a fast processor, just a good graphics card.
- It is possible to change the sequence easily.
- It is also possible to send individual information to the robots (unidirectional
communication). Application: robot programming.
- A user interface can be implemented.
There are two possible ways to generate the sequence:
- Draw the images and show them in real time
- Preload the images in memory and switch from one to another
The second one is a little bit faster because the images are already in memory, but has
the inconvenience that every time the sequence is to be changed, new images have to be
made. On the contrary, the first approach lets the user draw anything on the screen and
change the sequence even in real time.
6.4.4 Alice software

The software for Alice is divided mainly into the following parts:
Interrupt Service Routine Photodiodes: reads the value from the analog-
to-digital converter (fig. 6.26 a)).
For projection at 60 Hz, the microprocessor has to sample every 1/60 = 16.666 ms. For
some microprocessors (like Alice's) it is not feasible to sample at exactly that rate.
To overcome this problem, what has been done for Alice is to sample the signal
coming from the photodiode with an alternating period: every 17-16-17 ms. Thus, it is
synchronized every 3 samples (50 ms).
Function SequenceTest: continuously checks for an EGO-position sequence and
processes it (fig. 6.26 b)).
In order to take samples when the signal is stable (in the middle of the pulse) and not
on the rising or falling edges, a procedure has been developed (see fig. 6.27).
Normally a white image is projected by default. All the EGO-positioning sequences
start with a black image (part of the start code) to mark the beginning of the sequence.
As shown in the figure, the signal is sampled every 1 ms. When a black image is detected,
the center of the pulse is calculated, and from then on one sample is taken every 1/60 seconds
approximately.
With this procedure, the rate of successfully received sequences has reached 100%
(at 60 Hz).
Function EGO Position: decodes the sequence of images received and calculates
the position and orientation (fig. 6.28 a)).
Main program: an infinite loop that calls the functions SequenceTest and
EGO-position (fig. 6.28 b)).
The size of the program as it is right now is 6.44 KB of Flash memory and 118 bytes
of RAM. Just the EGO-position procedure would be about 2 KB of Flash memory and 80
bytes of RAM.
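The alternating 17-16-17 ms sampling period described above can be sketched as follows, showing that an integer-millisecond timer stays within a fraction of a millisecond of the ideal 1/60 s frame boundaries:

```python
# The 17-16-17 ms alternating period resynchronizes every 3 samples (50 ms),
# so an integer-ms timer never drifts far from the ideal 1/60 s frame boundaries.

PATTERN_MS = [17, 16, 17]  # repeats every 50 ms

def sample_times(n: int) -> list:
    """First n sample instants (ms) using the alternating period."""
    times, t = [], 0
    for i in range(n):
        t += PATTERN_MS[i % 3]
        times.append(t)
    return times

ideal = [1000 / 60 * (k + 1) for k in range(9)]
drift = max(abs(a - b) for a, b in zip(sample_times(9), ideal))
print(f"max drift = {drift:.2f} ms")  # bounded by ~0.33 ms
```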
Figure 6.26: Interrupt Service Routine Photodiodes (a) and function SequenceTest
(b) pseudocode
6.4.5 I-Swarm software

In the I-SWARM robots some of the software will be implemented in hardware in order
to make the programs faster and smaller. An example of what can be done in hardware
is the conversion from binary to Gray code and vice versa (fig. 6.29).
In Alice, the subroutine checks every 1 ms whether there is a start of sequence (i.e. a
transition from white to black). In order to save energy in I-SWARM, the scanning for a
start of sequence will be done only when the robot wants to know its position and
orientation.
6.5 Applications

6.5.1 Transmission of commands

Using the same principle of image transmission as in EGO-positioning, it is possible to
send data, i.e. to codify commands and send them to the robots.
For example, a "go to a position" command can be codified. For that, a new start
code is needed. Then the target position is sent, codified in the same way as the images for

Figure 6.27: Sampling procedure

the EGO-position:

    startcode (3 bits) + Xtarget (7 bits) + Ytarget (7 bits)     (6.9)

To send longer sequences of data it would also be necessary to use a stop code.
The main application of this procedure is parallel robot programming.
6.5.2 Programming robots

Data rate

For black and white it takes 133.3 seconds to send 1 KB. For 1000 robots at the same
time, this gives an aggregate rate of 7.5 KB/s.
For black, white and grey it takes 88.8 seconds. For 1000 robots at the same time, this
gives 11.3 KB/s.
Time to fill a 4 KB memory for 1000 robots:
- 4 x 133.3 = 533.2 sec for black and white
- 4 x 88.8 = 355.2 sec for black, white and grey
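The aggregate throughput figures come from the broadcast nature of the projection (every robot receives the same 1 KB simultaneously); a minimal sketch:

```python
# Aggregate programming throughput: all robots receive the broadcast at once,
# so sending 1 KB to 1000 robots in 133.3 s amounts to 7.5 KB/s overall.

def aggregate_rate(kb: float, seconds: float, robots: int) -> float:
    """Aggregate throughput in KB/s when `robots` robots receive `kb` in `seconds`."""
    return kb * robots / seconds

print(f"{aggregate_rate(1, 133.3, 1000):.1f} KB/s")  # black and white
print(f"{aggregate_rate(1, 88.8, 1000):.1f} KB/s")   # black, white and grey
```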
Advantages:
- Program all robots at the same time.
- Selective programming: program the robots in groups.
Table 6.3 summarizes these results:
6.6 Results and conclusions

To test the reliability of the EGO-positioning system, it has been tested in three different
positions: high, low and medium illumination. Series of 100 Gray-coded sequences have

Figure 6.28: Function EGO Position (a) and Main program (b) pseudocode

                                             Black and white   Black, white and grey
Time to send 1 KB                            133.3 sec         88.8 sec
Data rate for 1000 robots                    7.5 KB/s          11.3 KB/s
Time to fill memory (4 KB) for 1000 robots   533.2 sec         355.2 sec
Table 6.3: Programming time and speed

been projected at 60 Hz every 2 seconds, and the position and orientation given by the
algorithm have been recorded. The results are shown in figure 6.30.
The results are very similar in the three positions, achieving success rates of 98-99%.
The 1-2% of errors are mainly due to:
- Oscillations in the beamer
- Placement of the photodiode between two stripes. This may cause the received
signal to have a mean value similar to the threshold, making oscillations and
noise more significant
- Threshold upset
The adjustment of the threshold is a very important issue, and it has a great influence
on the success rate. For the measurements taken above it was set to 68% of the maximum
(173 out of 255 in the A/D converter).
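A sketch of the threshold computation; scaling the 68% fraction by the per-position white reference (section 6.3.2) is an illustrative assumption:

```python
# Detection threshold as a fraction of the reference white level. With the 68%
# setting used above, a full-scale white (255 ADC counts) gives 173.

def threshold_counts(white_ref: int, fraction: float = 0.68) -> int:
    """ADC threshold derived from the measured white reference level."""
    return int(white_ref * fraction)

print(threshold_counts(255))  # point of maximum illumination -> 173
print(threshold_counts(155))  # a dimmer spot: the threshold adapts with the reference
```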
Figure 6.29: Gray to Binary conversion scheme

It is also important to highlight that all the sequences were received, so the lost-
sequence rate is 0 and the stability of the system is high. The wrong sequences are
easily detectable because there is usually a big difference between the measurements of
the two photodiodes.
Regarding the demonstrator described in deliverable D2.1-1, a great improvement
has been achieved in reliability and in speed, going from 20 Hz transmission to 60 Hz and
minimizing the error rate.
To prove the reliability of the EGO-positioning system, a demo has been made. In this
demo, using the principle explained in section 6.5.1, the robot Alice receives a command to
go to a position (to pick up something, for example) and, based only on the information
provided by the EGO-positioning system, navigates to the target. Once there, it receives
another command to go to a different position (to place what it picked up before) and again
it navigates to that new position with the only help of the EGO-positioning system. The
experiment has been very successful and the video is available on demand.
Figure 6.30: Success - error rate
Chapter 7
Control Architecture
"To repeat what others have said requires education; to challenge it requires brains."
Mary Pettibone Poole
The control architecture to be described is a general architecture aimed at chained
modular robots composed of different types of modules (heterogeneous modules) that
can be arranged in different configurations, which is called multi-configurability. Thus,
the robot can be manually assembled in different configurations depending on the chosen
task. It is important to note that the architecture is not limited to the modules and their
capabilities described in chapter 4, but it can be extended to many other modules and
configurations.
Since it is not desirable to reprogram each module every time a new configuration or a
new task is chosen, the control architecture provides a mechanism for the central control
(CC) and the modules to determine the configuration of the microrobot and behave
according to this configuration. Thanks to this control architecture, the microrobot is able
to receive simple and complex commands and execute them no matter which configuration
it may have (examples: go, stop, turn, explore, etc.).
The proposed control architecture is based on behaviors and is divided into three layers:
a low control layer embedded in the modules that takes decisions for the modules, a
high control layer that takes decisions concerning the whole robot, and a heterogeneous
middle layer that acts as an interpreter between the central control and the modules. The
heterogeneous layer is of high importance because it makes the modules appear
homogeneous to the CC, facilitating their control.
A Module Description Language (MDL) has been defined to describe the capabilities
(both driving and sensorial) of the modules. Thanks to MDL, each module is able to
report to the CC what it is able to do (its capabilities, i.e. rotate, push forward, measure
temperature, measure distance, etc.) and the central control can set up actions for the
whole robot.
The different modules of the microrobot have to be manually assembled at the begin-
ning, due to the characteristics of the mechanical connectors. But future modules could
be able to attach and detach by themselves, via electromechanical latches or magnets, as
in [Murata et al., 2002] [Yim et al., 2000].
Figure 7.1: Control Scheme

This chapter is divided into several sections related to the control architecture. The
last one, called "offline control", refers to a set of algorithms and tests aimed at the
optimization of the control architecture.
7.1 Description

Online control refers to the coordination amongst the different modules to achieve tasks
and objectives while the robot is running. It covers embedded control, communication
module to module and CC to modules, etc. To this end, this section includes the hardware
description architecture as well as the physical and logical description of the different
elements. Figure 7.1 shows the setup: modules hold an embedded control board and are
connected via the I2C bus and the synchronism lines. A PC holding the CC is connected
through an interface board to the I2C bus.
For the proposed architecture, a semi-distributed control has been chosen. It has a
behavior-based control planner that takes decisions for the whole robot and an embedded
behavior-based control in every module, capable of reacting in real time to unpredicted
events. There is also an interpreter acting between the central control and the behaviors:
the heterogeneous agent (1). The heterogeneous agents of all modules form the
heterogeneous layer. It is called a middle layer because it acts between the CC
(highest-level layer) and the onboard control. Regarding the physical layout, control is
divided in (fig. 7.2):
- Central Control (CC): It could be a PC or one of the modules. Nowadays it is a PC.
In the future it will be one of the modules in order to make the robot autonomous.
It includes the layer:
  - High Control Layer: Controls the robot as a whole. It collects information
from the modules, processes it, and sends back to the modules information on
the situation and state of the robot, and commands with the objectives. It
also helps the modules to take decisions and coordinates them. It is also
in charge of planning. It is composed of several parts, amongst which are an
inference engine and a behavior-based control.

(1) This interpreter was first thought to be also a behavior, but in order to make things
clearer it was renamed as interpreter.

Figure 7.2: Control Layers

Figure 7.3: Behavior sketch
- Onboard Control: it is embedded in each module and is based on behaviors. It
includes the layers:
  - Heterogeneous (Middle) Layer: an agent that translates commands coming from
the CC into specific module commands. For example, it translates the command
"extend" into movements of the servomotors.
  - Low Control Layer: composed of behaviors. It allows the modules to react in
real time (for example to sense external and internal stimuli, such as overheating,
unreachable positions, or adapting to the pipe shape) and to perform tasks that
do not need the CC (movements, communication with adjacent modules, simple
tasks, etc.).
According to section 3.2.1, "behavior" has several meanings. Within the framework of
this thesis, a behavior is considered to be an independent procedure or function that is
in charge of a specific task. Behaviors may have states or be influenced by the state of
the module (fig. 7.3).
The robot is controlled as a whole, taking into account the current configuration of
the microrobot (all modules). There is no need to send specific commands to each module
Figure 7.4: HLC and LLC commands

every time (only when a specific order is to be sent to a specific module, for example to
retrieve specific data from it). Every module can perform individual actions and behave
in a different way to the same commands.
Considering the modules plus the central control as a whole, the microrobot can behave
autonomously (from the point of view of control; it still needs to be power-supplied),
without any need of human intervention.
7.2 Communication protocol

The communication protocol is used for communication between modules and the CC. It
is based on I2C, upon which the message structure is built.

7.2.1 Layer structure

The communication protocol can be divided into layers, as shown in figure 7.5. The two
bottom levels are directly the physical and data link layers of the I2C protocol. Over
these two levels the application data level is built, which is responsible for forming the
messages that are sent amongst the modules and the central control.
Messages can be divided in (see figure 7.4):
- High level commands (HLC): messages sent from the operator to the central control
- Low level commands (LLC): messages sent from the CC to the modules. According
to the processing of the messages in the module, LLC messages can be divided in:
  - LLC level 2 (LLC2), if they do not have to be translated by the heterogeneous
layer
  - LLC level 1 (LLC1), if they have to be translated by the heterogeneous layer
7.2.2 Command messages structure

I2C messages have the structure shown in figure 7.6.

Figure 7.5: Communication Layers

In simulation, I2C messages are structures composed of three fields: address,
instruction and parameters (depending on the instruction, a message may have none, one
or several parameters):

    Address + Instruction + Parameter (+ Parameter2 + ... + etc.)
When they have to be transmitted through the real I2C bus, messages have to be
formatted into the I2C data link format. I2C frames are composed of:
- a start condition (S)
- address (7 bits)
- read/write flag
- data (1 byte) + 1 bit (acknowledgement) (repeated as many times as necessary to
transmit all the necessary data)
- a stop condition (P)
Addresses are natural numbers starting from 0 up to 63, within the 7-bit I2C address
space:
- Address 63 is for the CC
- Address 0 is for broadcast messages
- Addresses from 1 to 62 are for the modules
Each module has a pre-defined address assigned when it is programmed.
Parameters are codified in the following way: the first byte indicates the type of parameter
(it is also used to know the length of the bytes coming afterwards) and the following bytes
carry the information:
- angle ([-90°..90°]): 1 byte (degrees)
- enum: 1 byte (natural numbers)
- string: 1 byte per character
- integer: 2 bytes
- value: 2 bytes
- bool: 1 bit

Figure 7.6: I2C frames
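A sketch of how such a message could be packed into bytes; the opcode value is a hypothetical placeholder, and only the address/read-write convention follows the standard I2C framing described above:

```python
# Packing a command message into I2C-style bytes: 7-bit address plus R/W flag,
# then the instruction and a typed parameter (type byte + payload).
# The opcode and type constants are hypothetical placeholders.

MS1 = 0x01        # hypothetical opcode for "Move Servo 1"
TYPE_ANGLE = 0x01 # hypothetical parameter-type byte for an angle

def pack_message(address: int, instruction: int, angle_deg: int) -> bytes:
    """Build the byte sequence for an LLC1 angle command (write direction)."""
    assert 0 <= address <= 63
    addr_byte = (address << 1) | 0   # write flag = 0
    angle_byte = angle_deg & 0xFF    # [-90..90] fits in one signed byte
    return bytes([addr_byte, instruction, TYPE_ANGLE, angle_byte])

msg = pack_message(5, MS1, 45)
print(msg.hex())  # -> 0a01012d
```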
7.2.3 Low level commands (LLC)

Low level commands are the commands sent by the CC to the modules and the answers
to these messages. LLC1 commands are shown in table 7.1 and their answer messages in
table 7.2; LLC2 commands are shown in table 7.3 and their answer messages in table 7.4.
The parameters of SIM and AIM are:
1: Average consumption
2: Consumption peaks
3: Number of working motors
4: Orientation
5: Distance covered
6: State of the batteries
7: State of contraction/extension

(2) The expansion and contraction instructions have no parameters because the module
itself knows how much it has to extend or contract.
Acronym  Instruction          Description/Remarks                              Parameters
MS1      Move Servo 1         Indicates the position of servo 1                Angle
MS2      Move Servo 2         Indicates the position of servo 2                Angle
GS1      Get Value Servo 1    Demands the position of servo 1                  None
GS2      Get Value Servo 2    Demands the position of servo 2                  None
EX       Expansion            For extension/support module                     None (2)
CT       Contraction          For extension/support module                     None (2)
INH      Inchworm position    Indicates the position in the inchworm gait:
                              first support (1), extension (2) or second
                              support (3)                                      [1..3]
GP       Get Position         To demand what is the position in the chain      None
GPS      Get Position Start   Chain identification phase starts                None
GPF      Get Position Finish  Chain identification phase ends                  None
SPT      Split                Detach from the previous module                  None
ATT      Attach               Attach to the previous module                    None
Table 7.1: LLC1 commands: sending

Acronym  Instruction                     Description/Remarks                  Data
SS1      Send Value Servo 1              Sends the value of servo 1           Angle
SS2      Send Value Servo 2              Sends the value of servo 2           Angle
TS       Touch Sensor                    Points out that the touch sensor
                                         has been activated                   Enum
TSF      Touch Sensor Final              The elbow mode is over               None
PC1      My Position in Chain is First   Answer from the first module         None
PCM      My Position in Chain is Middle  Answer from the modules except
                                         the first and last                   None
PML      My Position in Chain is Last    Answer from the last module          None
Table 7.2: LLC1 commands: answering
Acronym  Instruction               Description/Remarks                   Parameters
MO1      1D Sinusoidal gait        Vertical sinusoidal movement          None
MOS      Serpentine movement       Horizontal sinusoidal movement        None
MRO      Rolling                   Lateral movement                      None
MSW      Sidewinding               Lateral movement                      None
TUR      Turning                   Arc movement                          None
TUP      Turning in pipe           Pushing against the wall              None
MWO      Move inchWOrm             Inchworm gait                         None
MHE      Move HElicoidal           Move pushing forward                  None
RTC      Reset Time Counter        For synchronization                   None
STP      Stop                      Stop the module                       None
RST      Restart                   Restart the module                    None
CM       Change Mode               To change the working mode            Enum
PO       Polling                   Anybody has something to say?         None
SIE      Send information of the   The information demanded will be
         environment               specified by the parameters           Enum
SIM      Send info of the module   Consumption, orientation, etc.        None
SYC      Send your capabilities    Say what you can do: MDL              None
Table 7.3: LLC2 commands: sending

Acronym  Instruction                  Description/Remarks              Data
AMC      Answer: My Capabilities      MDL specific capabilities        String
AIE      Answer information of the    Sends the information
         environment                  demanded                         Enum + Value
AIM      Answer: info of the module   Consumption, orientation, etc.   String
-        Answer to the polling        It depends on the module         -
         message
Table 7.4: LLC2 commands: answering
The parameters of SIE and AIE are:
1: Temperature
2: Humidity
3: Picture
7.2.4 High level commands (HLC)

High level commands are commands that can be sent to the CC by the operator in order
to perform a specific task. The commands are specified in tables 7.5 and 7.6.
Although they are currently sent from the GUI (PC) to the CC (PC), and could have been
implemented directly over TCP/IP or other protocols, they are also implemented over
I2C, since they are intended to be sent from the GUI (PC) to a CC embedded in a module.
The parameters of RPL are:
Acronym  Instruction               Description/Remarks                      Parameters
STP      Stop                      Refers to the whole robot                None
RST      Restart                   Refers to the whole robot                None
RPL      Reach a place             Go to the end of a part of the pipe,
                                   go to the next bifurcation, go to a
                                   specific coordinate, etc. The place
                                   will be specified by the parameters      Enum + Value
DO       Do a task                 Repair, make a hole, etc. The task
                                   is specified by the parameter            Enum
EXP      Explore                                                            None
SIR      Send information of the   The information demanded will be
         robot                     specified by the parameters              Enum
SIE      Send information of the   The information demanded will be
         environment               specified by the parameters              Enum
Table 7.5: HLC commands: sending

Acronym  Instruction                Description/Remarks      Data
AIR      Answer information of      Sends the information
         the robot                  demanded                 Enum + Value
AIE      Answer information of      Sends the information
         the environment            demanded                 Enum + Value
Table 7.6: HLC commands: answering
A first value to indicate the type of position:
- 0: Coordinates (x, y, z), in millimeters (3 integers)
- 1: The end of this part of the pipe
- 2: The next bifurcation
- 3: Until you touch something
- 4: Home
Coordinates (x, y, z)
The parameters of DO are:
1: Repair
2: Make a hole
The parameters of SIR and AIR are:
1: Average consumption, mA (1 integer)
2: Consumption peaks, mA (1 integer)
3: Working modules, array of module IDs (string)
4: Orientation, degrees (3 angles)
5: Distance covered, millimeters (1 integer)
6: State of the batteries, ok or not ok (bool)
The parameters of SIE and AIE are:
1: Temperature (1 integer)
2: Humidity (1 integer)
3: Picture
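Packing the RPL coordinate parameters (a type byte plus three 2-byte integers in millimeters) can be sketched as below; the big-endian byte order is an assumption for illustration:

```python
import struct

# Packing the RPL "reach a place" parameters: a type byte (0 = coordinates)
# followed by x, y, z in millimeters as three 2-byte signed integers.
# Big-endian byte order is an assumption for illustration.

POS_COORDINATES = 0

def pack_rpl_params(x_mm: int, y_mm: int, z_mm: int) -> bytes:
    """Encode a go-to-coordinates RPL parameter block (7 bytes)."""
    return struct.pack(">Bhhh", POS_COORDINATES, x_mm, y_mm, z_mm)

params = pack_rpl_params(120, -40, 0)
print(params.hex())  # -> 000078ffd80000
```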
7.3 Module Description Language (MDL)

The Module Description Language (MDL) is a language created to describe the capabilities
of a module to the CC and other modules, in order to create units (groups of modules)
that are able to perform more complicated tasks.
MDL is based on a series of indicators that describe in a general way the tasks that the
module is able to do:
- Ext: Extend/Contract
- Sup: Get fixed to the pipe
- Push pipe: Push in pipe (3)
- Push flat: Push in open air
- RotX: Rotate around its x axis

(3) For simplicity, no distinction is made between pushing forwards and backwards,
because most of the systems that are able to push forwards can also push backwards.
7.4. Working modes
RotY: Rotate in its y axis
RotZ: Rotate in its z axis
Att: Attach / Detach to / from other modules
Sense proximity front
Sense proximity backwards
Sense proximity lateral
Sense temperature
Sense humidity
Sense gravity
Grab
Repair pipe
Drill
Power supply
Each parameter is associated to a value indicating the level in which the module can
perform such task. This value is divided in four levels:
0: no competence for that skill
1: little competence
2: medium competence
3: good competence
All the values referring the tasks are packed into a single structure, an array of values
from 0 to 3. For example, for the rotation module it would be:
Rot_mod(MDL) = [000033000000300000]
and the helicoidal module:
Heli_mod(MDL) = [00310000000000000]
Every time the module is asked about its capabilities, it will send this array corresponding to the tasks that it can or cannot do.
Then, if a rotation module is next to other modules that can rotate in the same axis as it does, they can form a unit that moves as a snake. If an extension module is preceded and followed by modules that have the ability to expand/contract, they can form a unit that moves as a worm, and so on.
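The MDL matching just described can be sketched as follows. The indicator order and field names below are assumptions for illustration, not the exact encoding used on the modules:

```python
# Sketch of how the CC could match MDL strings to form units.
# The 18-indicator order below is an assumption; the real encoding may differ.
MDL_FIELDS = [
    "Ext", "Sup", "Push_pipe", "Push_flat", "RotX", "RotY", "RotZ", "Att",
    "Prox_front", "Prox_back", "Prox_lateral", "Temp", "Humidity",
    "Gravity", "Grab", "Repair", "Drill", "Power",
]

def parse_mdl(mdl):
    """Turn an MDL digit string (one 0-3 grade per indicator) into a dict."""
    return dict(zip(MDL_FIELDS, (int(c) for c in mdl)))

def can_form_snake_unit(modules, axis="RotY"):
    """A group of adjacent modules can move as a snake if every one of them
    can rotate in the same axis (grade > 0)."""
    return len(modules) >= 3 and all(parse_mdl(m)[axis] > 0 for m in modules)

rot_mod = "000033000000300000"   # rotation module string from the text
heli_mod = "003100000000000000"  # helicoidal module: pushes in pipe / open air

snake_ok = can_form_snake_unit([rot_mod, rot_mod, rot_mod])
```

Three rotation modules in a row satisfy the snake rule; replacing one with a helicoidal module breaks it, since that module has no rotation grade.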
7.4 Working modes
The working mode (WM), or simply the mode, of a module refers to the situation in which the module or the robot is. It is information that the CC sends to the modules after processing the data that the modules previously compiled from their sensors and sent to the CC.
This working mode can be:
CHAPTER 7. Control Architecture
1. Inside a pipe
Straight pipe
Elbow/Bifurcation
Horizontal pipe
Vertical pipe
Upwards pipe
Downwards pipe
Obstacle detected in coordinates (x, y, z)
2. Open air [4]
Plain
Uphill
Downhill
Obstacle detected in coordinates (x, y, z)
3. General
Low consumption mode
Fast mode
Silence mode
It is information that has to be known by every behavior of the module in order to
perform its tasks.
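A compact way to represent this working mode in software could look like the following sketch; the type and field names are assumptions, not the thesis encoding:

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional, Tuple

class Situation(Enum):
    """Where the CC believes the robot is (illustrative subset)."""
    STRAIGHT_PIPE = auto()
    ELBOW_OR_BIFURCATION = auto()
    OPEN_AIR_PLAIN = auto()
    OPEN_AIR_UPHILL = auto()
    OPEN_AIR_DOWNHILL = auto()

@dataclass
class WorkingMode:
    situation: Situation
    low_consumption: bool = False
    fast: bool = False
    silent: bool = False
    obstacle_at: Optional[Tuple[int, int, int]] = None  # (x, y, z) if detected

# The CC would broadcast something like this after fusing the sensor reports:
wm = WorkingMode(Situation.STRAIGHT_PIPE, low_consumption=True,
                 obstacle_at=(120, 0, 35))
```

Every behavior can then read the same structure instead of re-deriving the situation from raw sensor data.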
7.5 Onboard control
Onboard control refers to the control programs running on each of the modules. It is mainly based on behaviors, as has already been stated. The behaviors will first be enumerated and then described.
All behaviors share some common characteristics:
Their goal can be:
To perform an activity
To attain a goal
To maintain some state
They are encoded to be relatively simple
They are introduced into the system incrementally, from the simple to the more complex
[4] Open air can also be considered a wide pipe (a pipe with a big diameter) in which the microrobot can perform movements as if it were in the open air.
Figure 7.7: Behavior scheme
They can be executed concurrently
They encode time-extended processes, not atomic actions
Their inputs may come from sensors or from other behaviors, and their outputs may go to actuators or to other behaviors
Generally, behaviors can be described as in fig. 7.7. The activation conditions are the only conditions for the behavior to run. If these conditions are fulfilled, the behavior will run. Some behaviors don't have activation conditions, and they are always running. Examples of activation conditions are: a command (from the CC or the operator), enable signals from other behaviors, low battery, etc.
Stimuli are the inputs of the behaviors, for example: position of the module, position of the actuators, internal variables (intensity, torque), state of the module, etc. Actions are the output of the behavior, what it wants to do: position of the module, orientation, block motor x, module state, actuator position, etc. The outputs of behaviors have to be coordinated, as will be explained in the following sections.
Behavioral mapping refers to the algorithm that relates the inputs and the outputs. As shown in chapter 3, it can be discrete or continuous, and it can be expressed by pseudocode, finite state machines, etc.
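The scheme of fig. 7.7 (activation conditions gating a behavioral mapping from stimuli to actions) could be captured in code roughly as follows; the class layout is a sketch, not the onboard implementation:

```python
class Behavior:
    """Generic behavior following the scheme of fig. 7.7: activation
    conditions gate a behavioral mapping from stimuli to actions."""

    def __init__(self, name, activation=None, mapping=lambda s: {}):
        self.name = name
        self.activation = activation  # callable(stimuli) -> bool; None = always on
        self.mapping = mapping        # callable(stimuli) -> dict of actions

    def step(self, stimuli):
        if self.activation is not None and not self.activation(stimuli):
            return {}                 # conditions not fulfilled: no actions
        return self.mapping(stimuli)

# Example: a survival behavior that blocks motor 1 when its current
# exceeds the experimental 120 mA limit mentioned later in the chapter.
overheat = Behavior(
    "avoid_overheating",
    activation=None,  # always running
    mapping=lambda s: {"block_M1": True} if s["I1"] > 120 else {},
)
actions = overheat.step({"I1": 130})
```

A behavior without activation conditions, like this one, simply runs its mapping on every control cycle.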
7.5.1 Embedded Behaviors
There are several types of behaviors, which have been classified in several categories (as described in [Arkin, 1998]), according to the type and complexity of the tasks they perform. Some behaviors perform simple tasks, while others rely on other behaviors to perform more complex tasks.
The behaviors that have been defined are:
1. Survival behaviors: try to maintain the integrity of the module.
Avoid overheating
Avoid actuator damage
Avoid mechanical damages (singular points, stress positions, etc.)
2. Perceptual behaviors: try to gather information about the module and its environment
Self diagnostic (to check if the module is working properly)
Situation awareness (to check if it is in a pipe, open air, etc.)
Environment diagnostic (temperature, humidity, images, etc.)
3. Walking behaviors
Vertical sinusoidal movement
Horizontal sinusoidal movement
Worm-like movement
Push-Forward movement
The execution of each behavior is independent of the others and is influenced by the situation and state at that particular moment. Not all behaviors can act at the same time, so they have to be coordinated.
A description of the implemented behaviors is given next, followed by the coordination
mechanisms.
Avoid overheating
The purpose of this behavior is to keep the accumulated heat below certain limits that don't damage the circuits. Excessive heat produced by the electric current, for example in the coils of the motors, may burn them.
To avoid overheating of the circuits, the current (which is the main source of heat, mainly due to the consumption of the motors/servomotors) has to be limited.
The thermal power dissipated in the wires of the motor is proportional to the square of the current and to the electrical resistance. Part of this power is transmitted to the environment and part is absorbed by the wire (this is the cause of the overheating), as shown in figure 7.8 and equation 7.1. Equation 7.1 leads to equation 7.2, where:
R is the electrical resistance in [Ω]
I is the electrical current in [A]
T_m is the temperature of the motor in [°C]
T_e is the temperature of the environment in [°C]
R_th is the thermal resistance in [°C/W]
C_th is the thermal capacitance in [W·s/°C]
Figure 7.8: Heat dissipation sketch
P_dissipated = P_transmitted + P_absorbed    (7.1)
R · I²(t) = (T_m(t) - T_e(t)) / R_th + C_th · dT_m(t)/dt    (7.2)
In the Laplace domain, eq. 7.2 is expressed as eq. 7.3. In order to do the Laplace transform, a variable change has been made, being Ψ(t) = I²(t):
R · Ψ(s) = (T_m(s) - T_e(s)) / R_th + C_th · T_m(s) · s    (7.3)
Thus T_m, the temperature of the motor, which is the variable that should be under supervision, can be obtained as in eq. 7.4:
T_m(s) = (T_e(s) + R_th · R · Ψ(s)) / (1 + R_th · C_th · s)    (7.4)
Applying the transformation s = (1 - z⁻¹)/T, where T is the sampling period, the following is obtained in the Z domain:
T_m(z) = (T_e(z) + R_th · R · Ψ(z)) / (1 + R_th · C_th · (1 - z⁻¹)/T)    (7.5)
And solving equation 7.5 for T_m(z):
T_m(z) = (T · T_e(z) + T · R_th · R · Ψ(z) + R_th · C_th · T_m(z) · z⁻¹) / (T + R_th · C_th)    (7.6)
Applying the inverse transform, the discrete time equation is obtained:
Behavior: Avoid overheating
Activation conditions: None (always running)
Inputs (Stimuli): Internal variables: I_1, I_2, etc.
Outputs (Actions): Block M_1, Block M_2, etc.
Table 7.7: Behavior encoding: Avoid overheating
Behavior: Avoid actuator damage
Activation conditions: None (always running)
Inputs (Stimuli): Internal variables: I_1, τ_1, I_2, τ_2, etc.
Outputs (Actions): Block M_1, Block M_2, etc.
Table 7.8: Behavior encoding: Avoid actuator damage
T_m[n] = (T · T_e[n] + T · R_th · R · Ψ[n] + R_th · C_th · T_m[n-1]) / (T + R_th · C_th)    (7.7)
T_m[n] = (T · T_e[n] + T · R_th · R · I²[n] + R_th · C_th · T_m[n-1]) / (T + R_th · C_th)    (7.8)
The temperature of the environment is measured by a temperature sensor, the electrical current is calculated as in equation 5.15, and the electrical resistance, thermal resistance and thermal capacitance are constants.
The behavior continuously monitors the temperature and current and, if overheating of the servomotors is detected, they are stopped immediately.
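Equation 7.8 is a first-order digital filter that can be implemented directly. In this sketch the numeric constants are placeholders, not the real motor parameters:

```python
def motor_temperature(Te, I, T=0.1, R=8.0, Rth=30.0, Cth=5.0, Tm0=25.0):
    """Iterate eq. 7.8:
    T_m[n] = (T*T_e[n] + T*R_th*R*I^2[n] + R_th*C_th*T_m[n-1]) / (T + R_th*C_th)
    Te: environment temperatures [deg C], I: currents [A],
    T: sampling period [s], R: electrical resistance [Ohm],
    Rth: thermal resistance [deg C/W], Cth: thermal capacitance [W*s/deg C]."""
    Tm = Tm0
    history = []
    for te, i in zip(Te, I):
        Tm = (T * te + T * Rth * R * i ** 2 + Rth * Cth * Tm) / (T + Rth * Cth)
        history.append(Tm)
    return history

# With zero current, a hot motor estimate decays towards the environment
# temperature; with a constant current it rises towards Te + Rth*R*I^2.
cooling = motor_temperature([25.0] * 100, [0.0] * 100, Tm0=60.0)
heating = motor_temperature([25.0] * 100, [0.5] * 100, Tm0=25.0)
```

The behavior would compare each new estimate against a threshold and block the servomotors when it is exceeded.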
Avoid actuator damage
The purpose of this behavior is to keep the torque of the motors under certain limits that don't damage the motors / actuators. If it is too high, the servomotors are stopped immediately.
This is achieved by keeping the current intensity under a certain limit, which has been obtained experimentally. In figure 7.9, the servomotor is trying to move from 100° to 180°, but it is blocked at 135°. The consumption saturates at around 120 mA.
Avoid mechanical damages
This behavior is in charge of the mechanical security of the module, taking care of any possible danger it may suffer from wrong use of the actuators.
One of the tasks this behavior is in charge of is to avoid singular points. Singularities have to be avoided because they produce unexpected results, since joints are forced by the actuators to move at impossible speeds, or to places that cannot be reached. There are two possible singular points:
in the limits of the workspace of the robot
inside the workspace of the robot
(a) Intensity
(b) Angle
Figure 7.9: Maximum servomotor consumption with blocking
In the extension module, it avoids singular points in the two crank connecting mechanisms. Singular points are produced, for example, when the links of each arm are aligned. They are produced at the angles of 25.8° (inside the workspace) and 147.4° (limit of the workspace). The smaller value cannot be reached because of the mechanical configuration (see fig. 7.10 a)), but sending that position to the servomotor has to be avoided, because it could break itself or the module trying to reach it. The higher value can only be avoided by software (see fig. 7.10 b)).
Since the movement of the end connector of the extension module is limited by the sliding bar placed at the center, there are several combinations of the angles of the two actuators that are not physically reachable. These positions should also be avoided.
The same thing happens in the support module: singular points occur when the two links are aligned. And as in the extension module, the mechanical design prevents reaching that position, but it should also be avoided by software.
(a) Lower position (b) Higher position
Figure 7.10: Extension module at its higher and lower position
Behavior: Avoid mechanical damages
Activation conditions: None (always running)
Inputs (Stimuli): Internal variables: θ_1, θ_2, etc.
Outputs (Actions): Block M_1, Block M_2, etc.; θ_1, θ_2, etc.
Table 7.9: Behavior encoding: Avoid mechanical damages
In other modules, such as the rotation module, it avoids high-stress positions that may break some parts.
Self diagnostic
The purpose of this behavior is to check that everything in the module is working fine: whether the actuators can move, whether the levels of intensity and torque are OK, whether the communication bus is OK, whether the synchronism line is OK, whether the sensors are working fine, etc.
It records the setpoints (desired positions) of the actuators and compares them with their real positions. If they are not approaching (and there is no problem with the torque and intensity, meaning the actuator is not blocked), there may be a problem with the actuator and an alarm is sent.
To verify the synchronism lines, during the configuration check phase it checks whether the signals S_in and S_out have been activated at any time. If not, there may be a problem and an alarm is sent.
Behavior: Self diagnostic
Activation conditions: No low battery
Inputs (Stimuli): Internal variables: θ_1, θ_2, etc.; I²C communications state; synchronism line communications state; sensors state
Outputs (Actions): Module state [OK, PROBLEM]
Table 7.10: Behavior encoding: Self diagnostic
Behavior: Situation awareness
Activation conditions: No low battery
Inputs (Stimuli): Contact sensor; IR sensors; internal variables: θ_1, θ_2, etc.
Outputs (Actions): Module situation [narrow pipe, wide pipe, open space]; touch detected
Table 7.11: Behavior encoding: Situation awareness
Situation awareness
This behavior tries to determine where the module/microrobot is: inside a narrow pipe, inside a wide pipe, or in the open air. It makes use of the contact sensors, the IR sensors, and the intensity and torque control system of the servomotors, amongst other sensors.
Thanks to the IR sensors, the module is capable of knowing whether it is in the open environment or inside a pipe. If it is inside a pipe, it can detect whether it is a wide pipe or a narrow one. Thanks to the contact sensor it can detect whether it has crashed into something, so that other behaviors may act accordingly: if it is in an open environment, it may go around the obstacle; if it is inside a pipe, the IR will tell whether it is an elbow or a bifurcation, and it will be able to negotiate the elbow or choose a path in the bifurcation.
The touch (and camera) module plays a very important role because it is the one that has touch sensors to detect obstacles, in this case elbows and bifurcations. When the touch module detects an obstacle, it sends a message to the CC, and the CC distributes the information to all the modules.
Environment diagnostic
This behavior is in charge of gathering information from the sensors (contact, IR, temperature, humidity, etc.) and taking images. It is continuously processing the data coming from its sensors (whatever they are) and storing them.
It keeps a historical record of these values to use them whenever necessary, for example to detect:
if the temperature or the humidity is increasing dangerously
if it is approaching the end of a pipe, by analyzing the pictures taken by the camera
If the module is running low on battery, it may stop working.
Behavior: Environment diagnostic
Activation conditions: No low battery
Inputs (Stimuli): Data from sensors: temperature, humidity, pictures
Outputs (Actions): Temperature alert; humidity alert; historical record of temperature, humidity, etc.; image processing data
Table 7.12: Behavior encoding: Environment diagnostic
It can detect whether the module has touched something, whether it is close to an object (wall, pipe), etc.
Vertical sinusoidal movement
This behavior is found in modules with one rotation DOF. By moving the rotation actuator following a sinusoidal wave, the module helps the microrobot to perform a snake-like movement:
Pos = A · sin(ω · t + φ)    (7.9)
All parameters are constant except the time t. There are two ways to synchronize t:
t is reset at the same time for all modules when a start sequence message is received from the CC. This is the easiest way, but the drawback is that the modules can lose synchronization. To avoid this, a synchronization message is sent every 2 seconds.
Using the synchronism line:
the first module steps t and activates the output synchronism line
the second detects the input line activated, steps t and activates the output line
and so on, until the last module detects the input line activated, steps t, and activates its input line
the penultimate module detects the output line activated, steps t and activates the input line
etc.
Both ways have been implemented. In chapter 8 they will be analyzed and some results
will be given.
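The first synchronization scheme (a shared t reset by the CC, driving eq. 7.9 on each module) can be sketched as follows; the amplitude, frequency and phase values are illustrative only:

```python
import math

class SinusoidalJoint:
    """One module's vertical sinusoidal behavior: Pos = A*sin(w*t + phi).
    Parameter values here are illustrative, not the real gait constants."""

    def __init__(self, A=30.0, w=2.0, phi=0.0):
        self.A, self.w, self.phi = A, w, phi
        self.t = 0.0

    def resync(self):
        """Called when the CC broadcasts the start/resynchronization message."""
        self.t = 0.0

    def step(self, dt=0.05):
        """Advance local time and return the commanded joint position."""
        self.t += dt
        return self.A * math.sin(self.w * self.t + self.phi)

# A travelling wave: each module along the chain gets a phase offset.
chain = [SinusoidalJoint(phi=k * math.pi / 4) for k in range(6)]
positions = [joint.step() for joint in chain]
```

Periodic resync messages keep the local clocks from drifting apart, which is the drawback mentioned above.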
Horizontal sinusoidal movement
This behavior is in charge of two things: movement with horizontal joints and turning movements.
Forward movement can be achieved by performing, for example, serpentine movements, and it is similar to the one described in the Vertical sinusoidal movement section.
Behavior: Vertical sinusoidal movement
Activation conditions: Command; Mission
Inputs (Stimuli): Synchronism lines; θ_1, θ_2, etc.
Outputs (Actions): θ_1, θ_2, etc.
Table 7.13: Behavior encoding: Vertical sinusoidal movement
Behavior: Horizontal sinusoidal movement
Activation conditions: Command; Mission
Inputs (Stimuli): Synchronism lines; θ_1, θ_2, etc.
Outputs (Actions): θ_1, θ_2, etc.
Table 7.14: Behavior encoding: Horizontal sinusoidal movement
Regarding turns: in open spaces, turns are achieved by keeping the horizontal joints at a fixed position all the time, so that the robot has the shape of an arc. The radius of curvature of the trajectory can be modified by changing the degree of rotation of the horizontal joints.
Inside pipes, it is also possible to go forward by pushing against the pipe walls. The sequence is as follows (fig. 8.34):
The first module (M1) turns 90 degrees.
M1 turns up the synchronism line with module M2.
When M2 detects the synchronism line up, M2 turns 90 degrees.
When M2 has turned a predetermined angle (about 60 degrees), M2 turns down the synchronism line with M1.
M1 gets back to the initial 0 degrees position.
When M2 has turned 90 degrees, it turns up the synchronism line with M3, and so on.
Passive modules have nothing to do but pass the token to the next module (through the synchronism line).
This solution requires P2P communication between adjacent modules besides the I²C communication. A slave module can only communicate with adjacent modules.
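The hand-off above can be simulated with a simple event log. This is a deliberate simplification of the sequence in the text (the release at about 60 degrees is folded into a single step per module), not the firmware itself:

```python
def pipe_turn_sequence(n_modules, full_angle=90):
    """Simulate the in-pipe turning hand-off: each module turns when it
    receives the token on the synchronism line, and the previous module
    returns to 0 degrees once the token has moved on."""
    log = []
    for k in range(n_modules):
        log.append((k, "turn", full_angle))   # module k turns 90 degrees
        if k > 0:
            log.append((k - 1, "return", 0))  # previous module back to 0
    return log

events = pipe_turn_sequence(3)
```

The log preserves the key property of the gait: a module only returns to 0 after its successor has started turning.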
Worm-like movement
This behavior can be found in modules with extension-contraction capabilities. Each module knows whether it has support or extension capabilities. Worm-like movement is performed by a combination of extension-contraction mechanisms.
The mechanism is the following:
the first module with support capabilities (S1) expands, and activates the output synchronism line
Behavior: Worm-like movement
Activation conditions: Command; Mission
Inputs (Stimuli): Synchronism lines; ε_1, ε_2 (extension/contraction state), etc.
Outputs (Actions): ε_1, ε_2 (extension/contraction), etc.
Table 7.15: Behavior encoding: Worm-like movement
Behavior: Push-Forward movement
Activation conditions: Command; Mission
Inputs (Stimuli): Synchronism lines
Outputs (Actions): Push, Stop, etc.
Table 7.16: Behavior encoding: Push-Forward movement
the next module with extension capabilities (E) contracts and activates the output
line
the next module with support capabilities (S2) expands and activates the input line
E activates the input line
S1 releases and activates the output line
E expands and activates the input line
The whole sequence is repeated
A sketch of this procedure can be seen in fig. 4.35, shown earlier.
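The sequence above can be written down as a cyclic schedule for a Sup + Ext + Sup unit; the closing release step is an assumption added here to make the cycle repeatable:

```python
def inchworm_cycle():
    """One cycle of the worm-like gait for a Sup + Ext + Sup chain
    (S1, E, S2), following the hand-offs described in the text."""
    return [
        ("S1", "expand"),    # S1 grips the pipe wall
        ("E", "contract"),   # E pulls the rear of the chain forward
        ("S2", "expand"),    # S2 grips
        ("S1", "release"),   # S1 lets go
        ("E", "expand"),     # E pushes S1 forward
        ("S2", "release"),   # assumed step: free S2 before the next cycle
    ]

steps = inchworm_cycle()
```

On the real robot, each hand-off corresponds to one activation of the synchronism line between neighbours.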
Push-Forward movement
This behavior can be found in modules which have self-propulsion capabilities, like the helicoidal module. It is in charge of activating the actuator to move forwards or backwards as demanded.
7.5.2 Behavior fusion
Behavior coordination is a complex task. Some behaviors collaborate to achieve their goals (cooperation), some others compete (competition), and some others act independently from each other. The scheme is explained in figure 7.11.
Behaviors are divided into sets according to priorities and tasks.
Walking behaviors (vertical sinusoidal, horizontal sinusoidal, etc.) control the actuators of the module; some of them control one actuator, others two, etc. They may be complementary, and thus their outputs are combined to achieve their goal. Their output is subject to LLC1 commands coming directly from the CC.
Perceptual behaviors act independently, since they only inform and have no actuator control. But their output feeds the other behaviors back with information regarding broken actuators, the current situation, etc.
Figure 7.11: Behavior fusion scheme
Survival behaviors have the highest priority (if they compete with other behaviors, they will win), since they try to keep the module up and running. They can inhibit the output of the other behaviors if it puts the integrity of the module in danger. For example, if the output of a behavior is to move a servomotor to a specific position and in this position the consumption is too high, the position is released.
7.6 Heterogeneous layer
The heterogeneous layer is in charge of several tasks that take place between the module and the CC and/or other modules, among which is communication. Every time a command is received by the module, it is processed by the heterogeneous layer and translated into specific instructions for the module. Conversely, when the module needs to send a message, it is done through the heterogeneous layer.
For example, when an action has to be done (e.g. go straight forward), the CC sends an I²C message to every module with the command to follow. The heterogeneous layer of each module translates this message into proper commands for the module. It is important to remark that all messages are the same, no matter which module they are aimed at, and thus it is the module that knows what actions it has to perform.
The heterogeneous layer is in charge of the following tasks:
1. Communications
2. Configuration check
Figure 7.12: Configuration check sequence diagram
3. MDL phase
7.6.1 Communications
The heterogeneous layer receives commands, and sends messages when the CC asks whether there is something to say (polling).
Every certain time, the CC sends a message to all the modules asking whether they have something to say (polling). That is the way in which the modules can communicate with the CC or with other modules. This is the inverse procedure: the module sends a command to the CC and, if necessary, the heterogeneous layer translates the message.
7.6.2 Configuration check
The purpose of this task is to know the configuration of the microrobot and which position in the chain the module is in. Normally, the first time this behavior acts is after the mechanical connection of the modules and power-up, when the phase of awareness starts: every module gets to know its position in the modular chain. After that, the behavior can act any time
it is necessary to know the configuration (after a split-up, if a module is broken, etc.).
This procedure is as follows (see fig. 7.12):
The CC sends a GPS message to all modules.
All modules activate their synchronism lines.
The first module (it knows that it is the first because its S_in synchronism line is down) replies with a PC1 message (this message is sent to the CC and includes the ID of the module: r for rotation, s for support, etc.).
The first module puts its S_out synchronism line down, so the second module knows it goes next (because now its S_in synchronism line is down).
The second module sends a PC1 message and puts its S_in synchronism line down, so the first module knows it has finished.
The CC keeps collecting all the messages.
It goes the same way for all modules.
When it is the turn of the last module (it knows it is the last because its S_out synchronism line is down), it sends a PCL message.
The CC sends a GPF message, so the last module knows it has finished.
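The net effect of the GPS/PC1/PCL exchange is that the CC collects the module IDs in chain order. A sketch of what it ends up with (the message tuple layout is an assumption):

```python
def configuration_check(module_ids):
    """Simulate the discovery round: modules answer head to tail as the
    synchronism lines go down, and the last one answers with PCL."""
    collected = []
    for position, module_id in enumerate(module_ids):
        is_last = position == len(module_ids) - 1
        collected.append(("PCL" if is_last else "PC1", module_id, position))
    return collected  # what the CC has gathered, in chain order

# Chain: rotation, support, extension, helicoidal
messages = configuration_check(["r", "s", "e", "h"])
```

The PCL message both identifies the last module and tells the CC that the round is complete.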
7.6.3 MDL phase
The MDL phase follows a mechanism similar to the configuration check phase, but instead of sending the ID, the module sends the MDL string describing its capabilities.
7.7 Central control
The central control (CC) is in charge of the most complicated calculations, as it runs on a more powerful processor (either on an external PC, at present, or on a specific module in the future).
Central control represents the high control layer in the control architecture. It takes care of the main decisions of the robot, what it is going to do and how, independently of the module composition of the microrobot.
Central control is also based on behaviors, but in this case the behaviors have the whole microrobot as their target.
In order to know what the robot is capable of, the central control makes use of a rule-based expert system that takes the MDL commands coming from each of the modules and outputs a set of capabilities of the whole microrobot.
Each module has several features that define what it can do. But a set of modules together can have new features. Modules can be grouped into units to have different capabilities; units can in turn be grouped into super-units to have yet more capabilities. For
Figure 7.13: Extension / Contraction capabilities: a) grade 3 and b) grade 1
example, the rotation module doesn't have extension/contraction capabilities, but a unit composed of three rotation modules together does have that feature (eq. 7.10 and 7.11).
Rot_mod + Rot_mod + Rot_mod + Open air => Extension/Contraction (grade 3)    (7.10)
Rot_mod + Rot_mod + Rot_mod + Pipe => Extension/Contraction (grade 1)    (7.11)
It is possible to see that the same combination of modules may have different results depending on whether the microrobot is inside a pipe or in the open air (fig. 7.13).
This expert system is based on a set of rules and an inference engine. The rules are pre-loaded, but new rules could be added by learning.
7.7.1 Rules
The capabilities of the whole microrobot are the consequence of the combination of the capabilities of all the modules and their positions in the chain. It is not the same to have an extension module between two support modules as to have the extension module beside two support modules in a row: in the first case the chain can perform an inchworm movement, while in the second it cannot.
As we have seen, the functionalities of the modules can be useful or not for a given task depending on where the modules are placed. This importance is linked to three possible locations of the modules:
Anywhere | Sequential | Adjacent | Robot
Bat | | Rot + Rot + Rot | Ext
Bat | | Sup + Ext + Sup | Forward/Backward Movement (inchworm)
Bat | Sup + .. + Sup | | Sup Unit
Bat | Ext + .. + Ext | | Ext Unit
Bat | | Sup Unit + Ext Unit + Sup Unit | Forward/Backward Movement (inchworm)
Bat + Rot + Push pipe | | | Turning
Bat + Push pipe | | | Forward Movement
Bat + Push pipe | | | Backward Movement
Bat | | Rot + Rot + Rot | Forward Movement (snake)
Table 7.17: Table of Rules
Anywhere: they can develop their capabilities independently of where they are placed
Sequential: in a given order, but not necessarily adjacent
Adjacent: one after the other
To know what the capabilities of the microrobot are, a set of rules has been implemented. These rules can be extended, either by writing new rules when new features appear or by developing new rules by learning.
In a general way, rules can be described as:
Σ MDL(left) + Σ MDL(right) + Σ MDL(anywhere) => Σ MDL(robot)    (7.12)
The set of rules is shown in table 7.17.
7.7.2 Inference Engine
The inference engine has two functions:
It can deduce or infer what the robot is capable of. It goes through all the rules and selects those which are fulfilled. Then the procedure is repeated, including in the premises the conclusions obtained previously, and so on until no new rules are fulfilled in a cycle.
It can deduce or infer which modules are needed for a specific task. For example, if the robot needs to split, it can decide which is the optimal point to split at, so that each part keeps the necessary modules to accomplish the task under execution.
As an example, the iterative process allows the CC to infer that a configuration like:
Sup mod + Rot mod + Rot mod + Rot mod + Sup mod
can move as a worm.
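The iterative procedure of the inference engine is plain forward chaining. A minimal sketch over simplified rules (the fact names and rule encoding are assumptions; the real rules also encode the anywhere/sequential/adjacent distinction):

```python
def infer(facts, rules):
    """Forward chaining: apply rules until no new conclusion appears.
    facts: set of capability names; rules: list of (premises, conclusion)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    ({"Rot1", "Rot2", "Rot3"}, "Ext_unit"),       # three adjacent rotation modules
    ({"Sup1", "Ext_unit", "Sup2"}, "Worm_gait"),  # support + extension + support
]
capabilities = infer({"Sup1", "Rot1", "Rot2", "Rot3", "Sup2"}, rules)
```

Here the Sup + Rot + Rot + Rot + Sup chain yields the worm gait in two passes: the first pass groups the rotation modules into an extension unit, and the second matches the support-extension-support pattern.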
7.7.3 Central control Behaviors
Continuing with the classification made in section 7.5.1, the behaviors that have been defined for the central control are shown in the following bullets. As explained before, some behaviors perform simpler tasks, while others are based on them to perform more complex tasks.
1. Postural behaviors
Balance / Stability
2. Walking behaviors
Move straight forward/backwards
Turn to left/right
Move laterally
Rotate
3. Path following behaviors
Edge following
Pipe following
Stripe following
4. Protective behaviors
Obstacle negotiation
5. Exploration behaviors
Wandering
6. Goal Oriented behaviors
Reach a landmark
Reach a place
Find a pipe break
Repair
Behavior: Balance / Stability
Activation conditions: No low battery; Mission; Recharge possible
Inputs (Stimuli): Orientation [a_x, a_y, a_z]; θ_1, θ_2, etc.
Outputs (Actions): θ_1, θ_2, etc.
Table 7.18: Behavior encoding: Balance / Stability
Balance / Stability
For some tasks it is very important to be in the right position, in order to know where the right and left sides are: to turn, to interpret data from the IR sensors, etc. Thanks to this behavior it is possible to know the orientation. This behavior is also in charge of changing the orientation when necessary, for example to face upward.
The information about the orientation of the module is taken from the accelerometer, from the servomotors, or from information received from other modules or the CC.
For example, in a module with two rotational DOF, if it wants to turn to the right, depending on its orientation it will use one DOF or the other. If none of them is in the right position, the behavior will make the necessary movements to put the module in the right position.
Move straight forward/backwards
This behavior is in charge of making the microrobot go forwards or backwards. There are several types of movements it can perform, like serpentine, caterpillar, inchworm, etc. The use of one or another depends on which types of modules the robot is composed of, which are the predominant modules, which environment it is moving in, and the state of the modules (in terms of power supply, mechanical viability, etc.).
If the predominant modules are rotation modules, a snake-like gait is performed. If the sinusoidal wave is propagated in a horizontal plane it is called serpentine locomotion, while in a vertical one it is called caterpillar locomotion. Serpentine is more suitable for open spaces, while caterpillar is for pipes. Other possible gaits in open spaces are rolling and sidewinding, but first it is necessary to change the orientation of the microrobot.
If the predominant modules are the support and the extension modules, an inchworm gait is performed.
If the predominant modules are the rotation modules, it is also possible to perform inchworm locomotion. A group of three of them has contraction-extension capabilities and can act as a unit similar to a support or extension module, following the previous procedure.
The helicoidal module has only one degree of freedom. It is able to go forwards or backwards pushing other modules. Thus, a helicoidal module can be added to other modules and its push will be added to the other modules' push. If the other modules' locomotion is not possible or desired, the modules would acquire a configuration of minimum friction that would ease the straight-forward movement.
Behavior: Straight forward / backwards
Activation conditions: CC command; Operator command
Inputs (Stimuli): Orientation [a_x, a_y, a_z]; desired position [x, y, z]; global state; number and type of modules
Outputs (Actions): θ_1, θ_2, etc.
Table 7.19: Behavior encoding: Straight forward / backwards
Behavior: Edge Following
Activation conditions: IR sensor; Touch sensor
Inputs (Stimuli): Orientation [a_x, a_y, a_z]; global state
Outputs (Actions): Direction [x, y, z]
Table 7.20: Behavior encoding: Edge Following
Other modules that have no actuators act as pig modules: they are carried along by the drive modules. They only have to pass on the signals coming from the synchronism line.
Move laterally
It is possible to move laterally with the sidewinding and rolling gaits. For these movements, at least some of the modules need to have two DOF.
Rotate
Rotation can only be performed with modules that have one rotation DOF, by performing the rotation gait described in section .
Turn left/right
In order to turn, there are several possibilities:
turning gait: caterpillar locomotion combined with rotation in the other DOF actuator
stop, rotate first, and then go forward
Edge Following
This behavior makes use of the distance (IR) sensors and the touch sensor. It tries to keep the microrobot from getting too close to a wall or object. Depending on the IR sensor measurements received from the modules, the behavior will output the coordinates the robot should go to.
Behavior: Pipe Following
Activation conditions: IR sensor; Touch sensor
Inputs (Stimuli): Orientation [a_x, a_y, a_z]; global state
Outputs (Actions): Direction [x, y, z]
Table 7.21: Behavior encoding: Pipe Following
Behavior: Obstacle negotiation
Activation conditions: Touch sensor; IR sensor
Inputs (Stimuli): Orientation [a_x, a_y, a_z]; global state
Outputs (Actions): Direction [x, y, z]
Table 7.22: Behavior encoding: Obstacle negotiation
Pipe Following
This behavior governs the movement of the robot inside a pipe, trying to keep the best
movement gait, and negotiating elbows and bifurcations.
Obstacle negotiation
Obstacle negotiation is one of the most important behaviors, and also one of the most complex. When something is detected in the path of the microrobot, this behavior is in charge of selecting the appropriate actions to get around it.
If the robot is in a pipe, the obstacle is probably an elbow or a bifurcation. The behavior then selects the actions to negotiate the turn.
In the open air, it is a little bit more complicated because there are many options. The easiest way is to go back, then turn a little bit and go forward. If the object is detected again, the same algorithm is performed.
Wandering
This behavior controls the movement of the robot when there is no specific task selected. It is especially indicated for pipes.
The robot moves around looking for possible damage, trying not to collide. It may also follow the pipes, making a map of the path using the traveled-distance measuring system.
Goal Oriented behaviors
These behaviors are the highest level ones. They make use of other behaviors in order to
complete their tasks.
The behaviors "reach a place" and "reach a landmark" work in a similar way. Starting
CHAPTER 7. Control Architecture
from its own position, the robot estimates where the objective is and moves in that
direction. In a pipe, the objective could be:
go to the next bifurcation
go forward/backwards 2 meters
go up/down
In open air, objectives can be:
go to position (x,y)
go to the next corner
get into the pipe in front
The behavior "find a pipe break" makes use of the wandering behavior to move inside
the pipe while looking for breaks or holes with the camera and IR sensors.
The repair behavior is not implemented, but it is an example of what will be possible
when repairing tools are developed and added to the robot. This behavior would be
in charge of the movement of the robot while repairing the damaged pipe.
7.7.4 Behavior fusion
A behavior fusion scheme for the CC algorithms can be found in figure 7.14. Higher
level behaviors (i.e. path following, obstacle negotiation, exploration (wandering) and
goal oriented) follow a subsumption-like procedure to coordinate. If no behavior wants to take
control, wandering is the active behavior, but it can be subsumed by path following,
which in its turn can be subsumed by goal oriented, and finally this one by obstacle
negotiation. Thus, obstacle negotiation is the behavior with the highest priority.
The path following and goal oriented behaviors each contribute to the selection of
the place to go. Thus, the bottom part of figure 7.14 shows a summation of
all the individual outputs.
The output of all the previous behaviors is the coordinates or directions where they
want to go. This output is received by the walking behaviors, which compete amongst
themselves for the control of the modules. The output of the action selection mechanism can
be suppressed by the balance/stability behavior, which is in charge of keeping the mi-
crorobot in the most appropriate position.
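As an illustration, the subsumption-like coordination among the four higher-level behaviors can be sketched as follows. This is a simplified sketch: the function, the behavior outputs and the default direction are illustrative assumptions, not the thesis implementation.

```python
# Subsumption-like coordination: priority order (low to high) is
# wandering, path following, goal oriented, obstacle negotiation.
# Each behavior returns a target direction [x, y, z], or None when it
# does not want to take control. The highest-priority non-None output
# subsumes all lower ones.

def fuse(behavior_outputs):
    """behavior_outputs: list of (name, output) ordered from lowest to
    highest priority; returns the winning (name, direction) pair."""
    winner = ("wandering", [0, 0, 0])  # default active behavior
    for name, output in behavior_outputs:
        if output is not None:
            winner = (name, output)  # higher priority subsumes lower
    return winner

# Example: path following and obstacle negotiation both want control;
# obstacle negotiation has the highest priority and subsumes the rest.
outputs = [
    ("wandering", [1, 0, 0]),
    ("path following", [0, 1, 0]),
    ("goal oriented", None),
    ("obstacle negotiation", [0, 0, 1]),
]
print(fuse(outputs))  # -> ('obstacle negotiation', [0, 0, 1])
```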
Action Selection Mechanism
The outputs of the four walking behaviors (go forward, turn, move laterally and rotate)
have to be merged into a unique output. Since they are all competing behaviors, there
should be a winner whose decision is followed. The selection criteria depend on
two factors: the situation and the place the microrobot is going to.
The situation is very important because moving inside a pipe is not the same as moving
in the open air, and moving on flat terrain / a horizontal pipe is not the same as moving
on uphill terrain / a vertical pipe.
Figure 7.14: Behavior fusion scheme for Central Control behaviors
The place where the robot has to go is the most important factor: whether the target is
in front, to the left / right or diagonal, and whether it is near or far, etc.
Depending on all of this, one behavior or another will be chosen to take control.
7.8 Offline Control
Offline control refers to the control algorithms that take place when the microrobot is
not running. They are aimed at selecting the best configuration of the modules (regarding
both module positioning and parameters) for later use in the online control.
One of the proposed use cases is the configuration demand, in which, for a specific
mission, the CC selects the modules to use and their positions. This is not done in real time,
and so it is referred to as offline control.
This task is achieved by using a genetic algorithm (GA), as described in the next
sections.
GAs can also be used to optimize parameters in the microrobot. For example, in a
snake-like configuration, the microrobot is composed of rotation modules that move one
of their dof following a sinusoidal wave:

Pos = A sin(ωt + φ) (7.13)

All the parameters (A, ω and φ) can be optimized to make the microrobot faster, to
have lower consumption or anything else.
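As a sketch of how a sinusoidal wave like equation (7.13) would drive a chain of rotation modules, the following computes each module's commanded position at a time t. The per-module phase shift (to obtain a travelling wave) and the example parameter values are illustrative assumptions.

```python
import math

def module_positions(t, n_modules, A, w, phi, offset=0.0):
    """Position of each rotation module following Pos = A*sin(w*t + phi),
    with the phase shifted by phi between consecutive modules so that a
    travelling wave propagates along the body (illustrative choice)."""
    return [A * math.sin(w * t + i * phi) + offset for i in range(n_modules)]

# Example: 6 modules, amplitude 60 degrees, angular velocity 1 rad/s,
# inter-module phase 2*pi/3, no offset.
pos = module_positions(t=0.0, n_modules=6, A=60.0, w=1.0, phi=2 * math.pi / 3)
```

Feeding each module the position for its index at every control tick produces the snake-like gait; the GA then searches over (A, w, phi, offset).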
For the rest of this section, two uses of the GA will be considered:
configuration demand: in heterogeneous configurations, for a given task, the GA has
to determine the modules to use to obtain an optimal configuration.
parameter optimization: for a given configuration, the GA has to determine the opti-
mum parameters for the best performance. This is especially useful in homogeneous
configurations when the microrobot is performing a snake or inchworm movement.
7.8.1 Brief on genetic algorithms
A GA is a search technique used in computing to find exact or approximate solutions to
optimization and search problems. Genetic algorithms are categorized as global search
heuristics. Genetic algorithms are a particular class of evolutionary algorithms (EA) that
use techniques inspired by evolutionary biology such as inheritance, mutation, selection,
and crossover.
Genetic algorithms are implemented in a computer simulation in which a population of
abstract representations (called chromosomes or the genotype of the genome) of candidate
solutions (called individuals, creatures, or phenotypes) to an optimization problem evolves
toward better solutions. Traditionally, solutions are represented in binary as strings of 0s
and 1s, but other encodings are also possible. The evolution usually starts from a popu-
lation of randomly generated individuals and happens in generations. In each generation,
the fitness of every individual in the population is evaluated, multiple individuals are
stochastically selected from the current population (based on their fitness), and modi-
fied (recombined and possibly randomly mutated) to form a new population. The new
population is then used in the next iteration of the algorithm. Commonly, the algorithm
terminates when either a maximum number of generations has been produced, or a satis-
factory fitness level has been reached for the population. If the algorithm has terminated
due to a maximum number of generations, a satisfactory solution may or may not have
been reached.
A typical GA requires:
1. a genetic representation of the solution domain
2. a fitness function to evaluate the solution domain
A standard representation of the solution is as an array of bits. Arrays of other types
and structures can be used in essentially the same way. The main property that makes
these genetic representations convenient is that their parts are easily aligned due to their
fixed size, which facilitates simple crossover operations. Variable length representations
may also be used, but crossover implementation is more complex in this case.
The fitness function is defined over the genetic representation and measures the qual-
ity of the represented solution. The fitness function is always problem dependent. For
instance, in the knapsack problem one wants to maximize the total value of objects that
can be put in a knapsack of some fixed capacity. A representation of a solution might be
an array of bits, where each bit represents a different object, and the value of the bit (0 or
1) represents whether or not the object is in the knapsack. Not every such representation
is valid, as the size of objects may exceed the capacity of the knapsack. The fitness of the
solution is the sum of values of all objects in the knapsack if the representation is valid,
or 0 otherwise.
Once the genetic representation is chosen and the fitness function defined, the GA proceeds
to initialize a population of solutions randomly, and then improves it through repetitive
application of mutation, crossover, inversion and selection operators.
Example of a simple generational genetic algorithm
A simple GA can be described with the following pseudocode:
BEGIN /*Simple GA*/
Generate initial population
Evaluate the fitness of each individual in that population
WHILE NOT finished DO
BEGIN /*Produce new generation*/
FOR (population_size / 2) DO
BEGIN /*Reproduction cycle*/
Select 2 individuals (based on fitness function probability)
Crossover the 2 individuals with a probability
Mutate the 2 individuals with a probability
Evaluate the new individuals
Insert new individuals into population
END
IF population_has_converged THEN /*time limit, convergence, etc.*/
finished:=true
END
END
The algorithm proposed in this thesis starts from this algorithm and builds a more com-
plex and evolved GA.
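The pseudocode above can be made concrete with a minimal runnable sketch. The toy fitness function here (counting ones in a bit string) merely stands in for the simulator-based evaluation used in this thesis; the parameter values match those given later in this section.

```python
import random

def simple_ga(fitness, chrom_len=10, pop_size=16, p_cross=0.8,
              p_mut=0.05, max_gens=50, seed=1):
    """Minimal generational GA: roulette selection, single point
    crossover and per-gene mutation over binary chromosomes."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(chrom_len)]
           for _ in range(pop_size)]
    for _ in range(max_gens):
        scores = [fitness(c) for c in pop]
        total = sum(scores) or 1

        def select():
            # Roulette wheel: probability proportional to fitness
            r, acc = rng.random() * total, 0.0
            for c, s in zip(pop, scores):
                acc += s
                if r <= acc:
                    return c
            return pop[-1]

        new_pop = []
        for _ in range(pop_size // 2):          # reproduction cycle
            p1, p2 = select(), select()
            if rng.random() < p_cross:          # single point crossover
                cut = rng.randint(1, chrom_len - 1)
                c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            else:
                c1, c2 = p1[:], p2[:]
            for child in (c1, c2):              # per-gene mutation
                for i in range(chrom_len):
                    if rng.random() < p_mut:
                        child[i] = 1 - child[i]
            new_pop += [c1, c2]
        pop = new_pop
    return max(pop, key=fitness)

best = simple_ga(fitness=sum)  # toy objective: maximize number of ones
```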
Phases of a GA
More advanced GAs are divided into different phases:
1. Initialization: individual solutions are randomly generated to form the initial pop-
ulation. The population is generated randomly, covering the entire range of possible
solutions (the search space).
2. Evaluation: consists of applying the fitness function to every individual of the
population. The result of the evaluation will help to select the best individuals for
reproduction.
3. Selection: during each generation, a proportion of the existing population is selected
to breed a new generation; this proportion is called the mating pool. Individual solutions are
selected through a fitness-based process, where fitter solutions are more likely to
be selected. Most selection functions are stochastic and designed so that a small proportion
of less fit solutions are also selected. This helps keep the diversity of the population
large, preventing premature convergence on poor solutions. Popular and well-studied
selection methods include roulette wheel selection and tournament selection.
4. Reproduction: aims to generate a new population through the application
of genetic operators: single/two/uniform/arithmetic point crossover, and mutation (bit
inversion, order changing, adding a number).
It is very important to define the probability of crossover and mutation to find
reasonable settings for the problem being worked on. A very small mutation rate may
lead to genetic drift. A recombination rate that is too high may lead to premature
convergence of the GA. A mutation rate that is too high may lead to loss of good
solutions unless there is elitist selection.
5. Termination: when the terminating condition has been reached, the GA ends.
Common terminating conditions are:
A solution is found that satisfies minimum criteria
A fixed number of generations has been reached
The allocated budget (computation time/money) has been reached
The highest ranking solution's fitness is reaching or has reached a plateau such
that successive iterations no longer produce better results
Manual inspection
Combinations of the above
Remarks
One problem that may appear is that GAs may have a tendency to converge towards
local optima or even arbitrary points rather than the global optimum of the problem.
The likelihood of this occurring depends on the shape of the fitness landscape: certain
problems may provide an easy ascent towards a global optimum, while others may make it
easier for the function to find local optima.
This problem may be alleviated by using a different fitness function, increasing the
rate of mutation (called triggered hypermutation), occasionally introducing entirely new,
randomly generated elements into the gene pool (called random immigrants) or by using
selection techniques that maintain a diverse population of solutions. Diversity is important
in GAs because crossing over a homogeneous population does not yield new solutions.
It is very important to choose a good codification of the genotype and a good fitness
function to obtain satisfactory results.

Module type Gene value
Rotation module 1
Helicoidal module 2
Support module 3
Extension module 4
Touch/Camera module 5
Traveler module 6
Table 7.23: GA Configuration demand genes value range
7.8.2 Codification and set up
Due to the different nature of the two problems considered, the implementation of the
algorithm distinguishes how each phase / operator is resolved. As a reminder, the two
purposes the algorithm is used for are:
Parameter optimization: to find the optimum parameters to perform a specific
movement: amplitude, phase or frequency in snake movements, times of extension /
contraction in inchworm.
Configuration demand: to find the best combination of modules to perform a
specific task: to cover a stretch of pipe, to negotiate an elbow, to have the lowest
consumption.
The first thing to do in the GA is to define the parameters (codification): chromosomes,
population, number of generations, fitness function, termination condition, etc.
The chromosome is one of the most important parts to define. If the chromosome is
not well chosen, it will be impossible to achieve good results.
The codification for each of the GAs explained before is completely different, and
thus each will be explained separately.
Configuration demand
For this case, the chromosome is an array of the types of modules of the robot (i.e.
genes, parameter numgenes). If the robot has 6 modules, the chromosome will be an
array of 6 elements, each of them representing a module type (rotation, helicoidal,
support, extension, touch/camera, traveler), according to table 7.23.
For example, for a microrobot composed of 1 touch module, 5 rotation modules and 1
helicoidal module, the chromosome would be 5111112. For a microrobot composed of
1 touch, 1 support, 1 extension, 1 support, 1 rotation, 1 support, 1 extension, 1 support,
Parameter Value
Population 16
Number of genes 6
Maximum number of generations 20
Maximum number of chromosomes selected for reproduction 16
Maximum number of times that a chromosome can be selected not defined
Crossover probability 0.8
Mutation probability 0.05
Table 7.24: GA Configuration demand parameters
1 rotation, it would be 534313431.
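A minimal sketch of this encoding, following the type-to-gene mapping of table 7.23 (the helper names are illustrative, not from the thesis implementation):

```python
# Encoding of an ordered module arrangement as a configuration-demand
# chromosome, using the gene values of table 7.23.
MODULE_CODES = {"rotation": 1, "helicoidal": 2, "support": 3,
                "extension": 4, "touch/camera": 5, "traveler": 6}

def encode(modules):
    """Map an ordered list of module types to a chromosome string."""
    return "".join(str(MODULE_CODES[m]) for m in modules)

# The first example above: 1 touch, 5 rotation, 1 helicoidal module
robot = ["touch/camera", "rotation", "rotation", "rotation",
         "rotation", "rotation", "helicoidal"]
print(encode(robot))  # -> 5111112
```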
The population is the set of chromosomes. It may vary from 16 to 500, depending
on the experiments (as will be shown in chapter 8).
The fitness function may vary. It can be related to the time the microrobot takes
to perform a task (i.e. to cover a part of the pipe; the smaller, the better) or to the
distance covered in an amount of time (the bigger, the better).
The probabilities experimentally chosen for this algorithm are:
Crossover probability = 0.8
Mutation probability = 0.05
A standard set of parameters is shown in table 7.24.
Parameter Optimization
For this case, the chromosome is an array of parameters. If the robot is composed of
rotation modules to perform a snake-like movement, these parameters could be:
Amplitude (A)
Angular velocity (V)
Phase (P)
Offset (O)
Phase between vertical and horizontal modules (D)
The range of the values for these parameters can be seen in table 7.25.
The chromosome will be composed of AVPOD. For example it could be: 60.0; 1.0; 2π/3; 0; 0.
Since each gene has a different value range, it is neither possible to exchange the
genes inside the chromosome nor to mix them. Thus, the values have to be converted to a
common range, which has been selected as [0..63]. Each value is then converted
into binary code (7 digits for each value, 2^7) and it is ready to be used.
The previous example would turn into:
21; 6; 21; 0; 0 (all values now in [0..63])
Parameter Value
Amplitude [−90°..90°]
Angular velocity [0..10]
Phase [0..2π]
Offset [0..90°]
Phase between vertical and horizontal modules [0..2π]
Table 7.25: GA Parameter optimization genes value range
that in binary is: 0010101; 0000110; 0010101; 0000000; 0000000
And so the individual is: 00101010000110001010100000000000000
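The conversion above can be sketched as follows. The scaling rule (each value divided by the width of its range, then rounded into [0..63]) is inferred from the worked example and is an assumption, not necessarily the exact rule used in the thesis.

```python
import math

def to_gene(value, lo, hi, bits=7):
    """Scale a value by the width of its range into [0..63], then encode
    it as a fixed-width binary gene. The scaling rule is inferred from
    the worked example above (an assumption)."""
    q = round(value / (hi - lo) * 63)
    return format(q, "0{}b".format(bits))

# The thesis example: A=60.0, V=1.0, P=2*pi/3, O=0, D=0
genes = [to_gene(60.0, -90, 90),
         to_gene(1.0, 0, 10),
         to_gene(2 * math.pi / 3, 0, 2 * math.pi),
         to_gene(0, 0, 90),
         to_gene(0, 0, 2 * math.pi)]
individual = "".join(genes)
print(individual)  # -> 00101010000110001010100000000000000
```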
If the robot is composed of support and extension modules to perform an inchworm
movement, the parameters will be:
Extension time (T)
Expansion time (P)
Extension length (L)
Support servo angle (S)
The chromosome is TPLS, and it follows the same transformation as before.
7.8.3 Phases of the GAs
Initialization
Initially, many individual solutions are randomly generated to form the initial population
specified previously. The population size depends on the nature of the problem, but
typically contains several hundreds or thousands of possible solutions. The population
is generated randomly, covering the entire range of possible solutions (the search space).
Occasionally, the solutions may be seeded in areas where optimal solutions are likely to
be found, but this is not the case here.
In the configuration demand, genes are random numbers between 0 and 6. In the
parameter optimization, genes are random numbers selected from the possible values, as
seen in table 7.25.
Evaluation
The evaluation consists of applying the fitness function to every chromosome of the
population.
The fitness function evaluates the performance of a chromosome. For that purpose,
it transforms the chromosome into the modules it represents and runs
the simulator with these modules.
The fitness function is a procedure that follows these steps:
starts the simulation
creates the modules specified by the genes
runs the simulation (faster than a normal simulation) to achieve the specified ob-
jective (several objectives have been specified)
terminates the simulation when:
1. either the objectives are completed
2. or the maximum number of iterations has been reached
returns a value (depending on the goal):
cover a part of the pipe in as little time as possible: the fitness function returns
time [s].
cover a part of the pipe with the lowest possible consumption: the fitness
function returns intensity [A].
negotiate an elbow: the fitness function returns time [s].
cover a distance in open air: the fitness function returns time [s].
The values returned by the fitness function are stored for later use in the selection phase.
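The procedure can be sketched as follows, with a stub standing in for the actual simulator; the Simulator class and its API are placeholders for illustration, not the thesis code.

```python
# Sketch of the fitness procedure: build the modules encoded by the
# genes, run the simulation, and return a goal-dependent score.
# The Simulator class here is a stand-in stub, not the real simulator.
class Simulator:
    def __init__(self, genes):
        self.genes = genes            # module types to instantiate
        self.time = 0.0               # elapsed simulated time [s]
        self.done = False

    def step(self):
        self.time += 0.1              # advance the (stub) simulation
        self.done = self.time >= 2.0  # pretend the objective is met

def fitness(genes, max_iterations=1000):
    sim = Simulator(genes)            # start simulation, create modules
    for _ in range(max_iterations):   # run until objective or iteration cap
        sim.step()
        if sim.done:
            break
    return sim.time                   # e.g. time [s] to cover the pipe

elapsed = fitness([1, 1, 1, 1, 1, 1])
```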
For example, let's suppose that we have 6 chromosomes composed of 6 modules:
C1:RRRRRR=111111
C2:CRRRRH=511112
C3:RRRHHH=111222
C4:CRSEST=513436
C5:SESSES=343343
C6:RSRSRS=131313
and the fitness function calculates the distance covered in a straight pipe in 20 s. The
following results are obtained:
C1:0.4m
C2:0.6m
C3:1m
C4:0.45m
C5:0.8m
C6:0.2m
Selection
After the whole population has been evaluated, the selection phase starts. During each
generation, a proportion of the existing population is selected to breed a new generation;
this is called the mating pool. Individual solutions are selected through a fitness-based
process, where fitter solutions are more likely to be selected.
Figure 7.15: Roulette probability
The basic part of the selection process is to stochastically select individuals from one generation
to create the basis of the next generation. The requirement is that the fittest chromosomes
have a greater chance of transmitting their genetic information than weaker ones. This
replicates nature, in that fitter individuals tend to have a better probability of survival
and go forward to form the mating pool for the next generation. Weaker individuals
are not without a chance: in nature, such individuals may have genetic coding that may
prove useful to future generations.
In this thesis, roulette wheel selection, a stochastic sampling done with replacement,
has been used.
This sampling method selects parents according to a spin of a weighted roulette wheel
(fig. 7.15). The roulette wheel is weighted according to the fitness values obtained pre-
viously. A high fitness value will have more area assigned to it on the wheel and hence a
higher probability of ending up as the choice when the biased roulette wheel is spun.
Roulette wheel selection is a high-variance process with a fair amount of scatter between
the expected and actual number of copies.
Taking the example of the distance covered in 20 s, the previous chromosomes have the
following selection probabilities:
C1: 0.12, 12%
C2: 0.17, 17%
C3: 0.29, 29%
C4: 0.13, 13%
Figure 7.16: Single point crossover example
C5: 0.23, 23%
C6: 0.06, 6%
Thus, selecting a random number from 0 to 1, the chromosomes would be selected as follows:
C1: from 0 to 0.12, [0, 0.12]
C2: from 0.12 to 0.29, (0.12, 0.29]
C3: from 0.29 to 0.58, (0.29, 0.58]
C4: from 0.58 to 0.71, (0.58, 0.71]
C5: from 0.71 to 0.94, (0.71, 0.94]
C6: from 0.94 to 1, (0.94, 1]
Obtaining the random numbers 0.08, 0.4, 0.68, 0.45, 0.15 and 0.9, C3 is selected twice,
C6 none, and the rest one time each for reproduction.
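A small sketch reproducing these intervals and draws (the helper names are illustrative, and the boundary handling only approximates the half-open intervals above):

```python
import bisect

def roulette_intervals(fitnesses):
    """Cumulative probability boundaries for roulette wheel selection."""
    total = sum(fitnesses)
    cum, acc = [], 0.0
    for f in fitnesses:
        acc += f / total
        cum.append(acc)
    return cum

def spin(cum, r):
    """Index of the chromosome whose interval contains r in [0, 1)."""
    return bisect.bisect_left(cum, r)

# Distances covered by C1..C6 in 20 s (the example above)
distances = [0.4, 0.6, 1.0, 0.45, 0.8, 0.2]
cum = roulette_intervals(distances)
picks = [spin(cum, r) for r in [0.08, 0.4, 0.68, 0.45, 0.15, 0.9]]
# picks -> [0, 2, 3, 2, 1, 4], i.e. C1, C3, C4, C3, C2, C5:
# C3 twice, C6 never, the rest once each.
```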
Reproduction
The next step is to generate a second generation population of solutions from those selected
in the selection phase, through genetic operators: crossover (also called recombination)
and/or mutation.
For each new solution to be produced, a pair of parent solutions is selected for
breeding from the mating pool obtained previously. By producing a child solution using
crossover and mutation, a new solution is created which typically
shares many of the characteristics of its parents. New parents are selected for each new
child, and the process continues until a new population of solutions of appropriate size is
generated.
Figure 7.17: Mutation example
These processes ultimately result in a next generation population of chromosomes
that is different from the initial generation. Generally, the average fitness of the population
will have increased by this procedure, since only the best organisms from the first
generation are selected for breeding, along with a small proportion of less fit solutions, for
the reasons already mentioned.
In this thesis, single point crossover is used (figure 7.16): one crossover point is
selected, the genes from the beginning of the chromosome to the crossover point are copied
from one parent, and the rest are copied from the second parent.
The crossover point is selected randomly and can be any number between 1 and the
length of the chromosome minus 1.
Afterwards, mutation is performed. Each gene of each chromosome may be
changed based on the mutation probability. For each gene a random number is selected,
and if it is smaller than the mutation probability, the gene is changed to another randomly
obtained number (figure 7.17).
In the same example as before, we take two parents for the crossover, 111222 and
343343. Then a number from 1 to 5 is selected randomly, obtaining 3, so the first 3
genes of parent 1 go to the first offspring, and the last 3 genes go to the second offspring;
the opposite happens with parent 2.
In the case of mutation, if the chosen parent is 513436, a random number from 0 to
1 is obtained for each gene. If it is smaller than the mutation probability, the gene changes.
In this case, the only gene that changes is the fourth one. Another random number is
selected from 1 to 6 (the number of module types) to replace the gene; in this case it is 1.
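Both operators can be sketched as follows; the helper names are illustrative, and the mutation draw uses a fixed seed so the behavior is repeatable.

```python
import random

def single_point_crossover(p1, p2, cut):
    """Swap the tails of two parents at the (randomly chosen) cut point."""
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

def mutate(chrom, p_mut, rng, n_types=6):
    """Replace each gene, with probability p_mut, by a random module type."""
    return [rng.randint(1, n_types) if rng.random() < p_mut else g
            for g in chrom]

# The crossover example above: parents 111222 and 343343, cut point 3
o1, o2 = single_point_crossover([1, 1, 1, 2, 2, 2], [3, 4, 3, 3, 4, 3], 3)
# o1 -> [1, 1, 1, 3, 4, 3], o2 -> [3, 4, 3, 2, 2, 2]

# Mutation of the parent 513436 with probability 0.05
mutated = mutate([5, 1, 3, 4, 3, 6], p_mut=0.05, rng=random.Random(0))
```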
Termination
This generational process is repeated until a termination condition has been reached.
In our case, the conditions used are that a fixed number of generations has been
reached (50) or that the highest ranking solution's fitness is reaching or has reached a plateau
such that successive iterations no longer produce better results.
7.9 Conclusions
In this chapter a control architecture for chained modular robots composed of hetero-
geneous modules has been presented. This architecture is not limited to the modules
developed in chapter 4, but applies to any kind of chained module able to work and interact
with the others.
Amongst all the possible choices stated in 3.1, a behavior-based architecture has been
chosen because:
it is specifically appropriate for designing and controlling semi-autonomous artificial
microrobots based on biological systems
it is suitable for modular systems
it integrates both low and high level control
The control architecture is structured in three levels. It is similar to a hybrid archi-
tecture, and indeed it has many of their features, but behaviors can be found in both the low
and high level control layers, not only in the reactive layer.
The lower level is entirely behavior-based and includes behaviors related to the
module, such as reactive behaviors that take care of the health of the module,
walking behaviors in charge of the movement of the robot, and perceptual behaviors in
charge of gathering information about the module and its environment.
On the contrary, the higher level has two main parts: one is also behavior-based,
composed of behaviors related to the whole microrobot. The other is an inference engine
in charge of taking decisions based on the information provided by the modules. Behaviors
in this layer take care of the stability of the robot, its movement, reaching goals,
avoiding obstacles, etc.
Behavior fusion is a very important part at both low and high levels. Coordination
and competition are both used, depending on the behaviors. Also noteworthy
is the coordination of the walking behaviors, since the combination of heterogeneous drive
movements is one of the distinguishing elements of this thesis.
It is important to highlight the role of the intermediate layer, which allows the central
control to treat all modules in the same way, since the heterogeneous layer translates its
commands into module-specific commands.
In order to communicate between all actors, a communication protocol based on I2C has been
developed. It allows messages to be sent from the operator to the central control, from
the central control to the modules, and between behaviors.
Another important part of the control architecture is the Module Description Language
(MDL), a language that has been developed to allow modules to transmit their capabilities
to the central control, so that it can process this information and choose the best configuration
and parameters for the microrobot.
The architecture also includes an offline genetic algorithm aimed at optimizing the
configuration of the modules and their locomotion parameters, in order to achieve the best
configuration for a set of modules and the best locomotion gait for a configuration.
To conclude, the control architecture described in this chapter presents a new solution
to control chained modular microrobots composed of several drive units, contributing
a new research line to the world of modular robots, which is mainly composed of homogeneous
modules.
Chapter 8
Test and Results
"Don't be afraid to give your best to what seemingly are small jobs. Every time you
conquer one it makes you that much stronger. If you do the little jobs well, the big ones
tend to take care of themselves."
Dale Carnegie
In this chapter some of the tests performed are presented. It is divided into three parts:
the first part shows experiments with the real modules. Due to the limitations of the real
modules, these experiments cover locomotion tests of snake-like, worm-like and helicoidal
configurations separately.
The next section is dedicated to validation tests aimed at proving the suitability of
the simulated modules with respect to the real ones. Several tests have been performed
regarding consumption, torque and speed.
The goal of the final section is to show the experiments carried out in the simulator to
prove the concepts presented in this thesis regarding heterogeneous modular robots that
could not be tested on real modules.
8.1 Real tests
It is not possible to perform the same tests with real modules as with simulated ones,
for several reasons: not all modules have been built, some modules are not robust
enough to perform some movements, some module features do not work as expected, etc.
Nevertheless, many tests can be done to prove some of the concepts of the hardware
and software design.
In order to compare the characteristics of each type of drive module, several tests
have been performed to prove each type of locomotion: helicoidal, inchworm (two support
modules plus one extension module) and snake-like. Table 8.1 shows the speed of the
robot at different angles and with different configurations.
Slope Speed [cm/s] Modules involved
0° 2.5 Inchworm
30° 2 Inchworm
45° 1.5 Inchworm & Camera
90° 1.3 Inchworm & Camera
90° 1 Inchworm, Rotation & Camera
0° 3 Helicoidal v1.0
90° 1.2 Helicoidal v1.0
Table 8.1: Speed and slope for different configurations
Figure 8.1: Images taken from the camera inside a pipe
8.1.1 Camera/Contact Module
The camera/contact module is provided with a camera for exploration tasks. As shown
in fig. 8.1, with the camera it is possible to distinguish objects stuck in the pipe
(subfigures b) and c)), the way out (a)) and breakages, in case there were any. It is also
shown (subfigure d)) that it provides enough illumination in the open air.
In case the microrobot is teleoperated, a GUI has been developed to visualize and
control the camera and its LEDs (fig. 8.2).
8.1.2 Helicoidal
The helicoidal module has proved to be the fastest module inside pipes (the only environ-
ment where it can be used, due to its configuration with a rotating head that needs the
pipe to push against to move forward).
In figure 8.3 it is possible to see the module going forward and up in the pipes. In
order to turn, it needs other modules to help, e.g. a rotation module.
8.1.3 Worm-like
The worm-like configuration is slower than the helicoidal module, but it is more versatile. It
can perform turns and it can adapt to pipes of different diameters. In fig. 8.4 it is possible
to see the microrobot going forward in a pipe at two different slopes, 0 and 30 degrees,
and also negotiating an elbow.
Figure 8.2: Camera Interface
Figure 8.3: Helicoidal module inside a pipe
(a) Worm module at an elbow (b) Worm module at 30° (c) Worm module at 0°
Figure 8.4: Worm module tests
Figure 8.5: Snake-like movement over undulated terrain
8.1.4 Snake-like
The snake-like configuration is the most versatile configuration of all. Its main feature is
that it is able to move in free space, i.e. outside pipes. As an example, in fig. 8.5
the microrobot is negotiating undulated terrain.
Another advantage of this configuration is that the robot can use obstacles (like an
elbow in a pipe or a corner) to help it go forward, as in figure 8.6.
8.2 Validation tests
This section shows some experiments aimed at validating the simulation environment by
comparison with the real modules that have already been developed. The experiments
cover mainly position, intensity and torque tests.
8.2.1 Servomotor tests
A complete model of the servomotors used has been developed, as described in section
5.1.2. In order to validate its operation, two tests have been performed with rotation
modules v2: one raising its own weight and another raising the batteries module, of mass
13.2 g. In both cases the servo has moved from 30° to 120° and from 90° to 30°.
Position and intensity have been measured through the A/D converter. The maximum
Figure 8.6: Corner negotiation
static torque at 0° can be calculated from the following equations¹:

Torque_noload = (L_2 / 2) · m_rotmod = 0.0406 kg·cm (8.1)

Torque_loaded = (L_2 / 2) · m_rotmod + (L_1 / 2 + L_2) · m_load = 0.1060 kg·cm (8.2)

The results are shown in the following sections, with the configuration shown in table
8.2. These values were obtained from the theoretical model (starting from the real values
given in the catalog) after an iterative adjustment process. All the parameters Kp, Km and
Kt have been calculated taking the gearset of the servomotor into account.

Parameter Kp [V/rad] Km [V/rad/s] Kt [Nm/A] R [Ω] L [H] B [Nm/rad/s] J [Nm/rad/s²]
Values 12 0.14 0.14 12 0.0075 0.00000035 0.0000007
Table 8.2: Parameters for the servomotor tests

¹ Values obtained from [Torres, 2008]
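The parameters of table 8.2 correspond to the standard position-controlled DC motor model. A minimal Euler-integration sketch follows; the proportional control law V = Kp·(θ_ref − θ), the step size and the final time are assumptions for illustration, not the thesis' exact model.

```python
# Minimal DC servomotor model with proportional position control,
# using the parameter values of table 8.2.
Kp, Km, Kt = 12.0, 0.14, 0.14      # gains [V/rad, V/rad/s, Nm/A]
R, L = 12.0, 0.0075                # resistance [Ohm], inductance [H]
B, J = 0.00000035, 0.0000007       # friction and inertia (table 8.2)

def simulate(theta_ref, t_end=0.5, dt=1e-5):
    """Euler integration of the electrical and mechanical equations."""
    theta = omega = i = 0.0
    for _ in range(int(t_end / dt)):
        v = Kp * (theta_ref - theta)          # control voltage (assumed law)
        di = (v - R * i - Km * omega) / L     # electrical equation
        domega = (Kt * i - B * omega) / J     # mechanical equation
        i += di * dt
        omega += domega * dt
        theta += omega * dt
    return theta

final = simulate(theta_ref=1.0)  # settles near the 1 rad reference
```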
224
8.2. Validation tests
30° to 120° unloaded
Figure 8.7: 30° to 120° unloaded: rotation angle
Figure 8.8: 30° to 120° unloaded: intensity
Figure 8.9: 30° to 120° unloaded: torque
30° to 120° loaded
Figure 8.10: 30° to 120° loaded: rotation angle
Figure 8.11: 30° to 120° loaded: intensity
Figure 8.12: 30° to 120° loaded: tau
90° to 30° unloaded
Figure 8.13: 90° to 30° unloaded: rotation angle
Figure 8.14: 90

to 30

unloaded: intensity
Figure 8.15: 90° to 30° unloaded: tau

90° to 30° loaded

Figure 8.16: 90° to 30° loaded: rotation angle
Figure 8.17: 90° to 30° loaded: intensity

Figure 8.18: 90° to 30° loaded: tau
An additional test has been performed for the rotation module v1. The test consists of moving the maximum number of similar modules with only one servomotor. With the real rotation module v1 it was possible to move two modules, as shown in figure 8.19. With more than two, it is not able to move. The same results have been obtained in the simulation.
(a) Real (b) Simulated
Figure 8.19: Rotation module v1 torque test
8.2.2 Inchworm tests
The inchworm configuration (two support modules and one extension module) was tested as a drive unit. A comparison between the real modules and the simulated ones is shown in table 8.3. The real tests are obtained from [Santos, 2007].
It can be noted that the simulation achieves results similar to those obtained in reality.
The reason why the module is slower in sloped pipes (apart from the gravity force) is that expansion and contraction of the front and rear modules cannot be done at the same time (because the module would slip down), while in horizontal pipes this is possible.
Angle (°)                  0    30   90   45²
Speed (cm/s) (Real)        2.5  1.5  0.6  0.5
Speed (cm/s) (Simulation)  1.5  1.3  0.3  0.8

Table 8.3: Speed test of the inchworm configuration

² Carrying the camera module
8.2.3 Helicoidal module test
The helicoidal module was tested on different slopes, with the results shown in table 8.4.

Angle (°)                  0    30   60   90
Speed (cm/s) (Real)        3    2.1  1.5  1.2
Speed (cm/s) (Simulation)  3    -    -    -

Table 8.4: Speed test of the helicoidal module
8.2.4 Snake-like gait tests
When the locomotion is performed by rotation modules, the movements are similar to those of a snake (section 4.5). They are based on a CPG (Central Pattern Generator). The position of the actuators follows a sinusoidal wave (eq. 7.9):

θ_i = A sin(ωt + (i-1)Δφ) + O    (8.3)

Since rotation modules have two degrees of freedom, there will be two sinusoidal waves, vertical and horizontal:

v_i = A_v sin(ωt + (i-1)Δφ_v) + O_v    (8.4)

h_i = A_h sin(ωt + (i-1)Δφ_h + Δφ_vh) + O_h    (8.5)
By adjusting the parameters of eqs. 8.4 and 8.5, different movements can be achieved (as covered in [Gonzalez et al., 2006]). These movements can be fully implemented in the simulation environment. In the next paragraphs some of these movements are described. These experiments proved the reliability of the simulator.

Going forward/backwards (1D sinusoidal gait)

For locomotion in 1D, forward and backward movements are achieved by means of variations only in the vertical joints. The horizontal modules are kept in their home position all the time.
v_i = A_v sin((2π/0.5) t + (i-1)(2π/3))    (8.6)

h_i = 0    (8.7)
Figure 8.20: 1D sinusoidal movement
Figure 8.21: Turning movement
Turning
The robot can move along an arc, turning left or right. The vertical joints move as in the 1D sinusoidal gait, and the horizontal joints are kept at a fixed position all the time, so the robot takes the shape of an arc. The radius of curvature of the trajectory can be modified by changing the offset of the horizontal joints.
v_i = 60 sin((2π/0.5) t + (i-1)(2π/3))    (8.8)

h_i = O_h    (8.9)
In the example shown in figure 8.21, a value h_i = 30° has been used. Experimentally, it has been proven that this configuration is stable for h_i > 24°; if h_i < 24°, the microrobot falls down.
Rolling
The robot can roll around its body axis. The same sinusoidal signal is applied to all the vertical joints, and a sinusoidal signal ninety degrees out of phase is applied to the horizontal joints.
Figure 8.22: Rolling movement
v_i = 30 sin((2π/0.5) t)    (8.10)

h_i = 30 sin((2π/0.5) t + π/2)    (8.11)
Rotating gait
The robot can also rotate parallel to the ground, clockwise or anti-clockwise, changing its orientation in the plane. This is achieved by using two sinusoidal signals with different phases.
v_i = 30 sin((2π/0.5) t + (i-1)(2π/3))    (8.12)

h_i = 30 sin((2π/0.5) t + (i-1)(2π/7.2))    (8.13)
Lateral shift
Using this gait, the robot moves parallel to its body axis. A phase difference of 100 degrees is applied both to the horizontal and to the vertical joints. The orientation of the body axis does not change while the robot is moving.
v_i = 30 sin((2π/0.5) t + (i-1)(2π/3.6))    (8.14)
Figure 8.23: Rotating movement
Figure 8.24: Lateral shifting movement
h_i = 30 sin((2π/0.5) t + (i-1)(2π/3.6))    (8.15)
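The CPG gait equations of this section can all be generated by one parametric function. The sketch below uses the published parameter sets (T = 0.5 s, phase differences 2π/3, 2π/7.2 and 2π/3.6), with the module index i starting at 0 so that the factor (i-1) of the equations becomes i; the amplitude chosen for the forward gait is illustrative, since eq. 8.6 leaves A_v generic.

```python
import math

def cpg_angles(n_modules, t, A_v, A_h, T, dphi_v, dphi_h,
               O_v=0.0, O_h=0.0, dphi_vh=0.0):
    """Vertical/horizontal joint angles of eqs. 8.4-8.5 (degrees, seconds)."""
    w = 2 * math.pi / T
    v = [A_v * math.sin(w * t + i * dphi_v) + O_v for i in range(n_modules)]
    h = [A_h * math.sin(w * t + i * dphi_h + dphi_vh) + O_h
         for i in range(n_modules)]
    return v, h

# Parameter sets for the gaits of section 8.2.4 (T = 0.5 s throughout)
GAITS = {
    "forward":  dict(A_v=60, A_h=0,  dphi_v=2 * math.pi / 3,   dphi_h=0.0),
    "turning":  dict(A_v=60, A_h=0,  dphi_v=2 * math.pi / 3,   dphi_h=0.0,
                     O_h=30),                        # fixed horizontal offset
    "rolling":  dict(A_v=30, A_h=30, dphi_v=0.0,    dphi_h=0.0,
                     dphi_vh=math.pi / 2),           # 90 deg out of phase
    "rotating": dict(A_v=30, A_h=30, dphi_v=2 * math.pi / 3,
                     dphi_h=2 * math.pi / 7.2),
    "lateral":  dict(A_v=30, A_h=30, dphi_v=2 * math.pi / 3.6,
                     dphi_h=2 * math.pi / 3.6),      # 100 deg phase difference
}

v, h = cpg_angles(5, 0.1, T=0.5, **GAITS["rolling"])
```

Sampling this function at each control tick and sending the angles to the vertical/horizontal servos reproduces the gaits above.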
8.3 Simulation tests
The simulator has been used to carry out several experiments concerning, amongst others, new locomotion gaits combining different types of modules, the performance of the different behaviors used, the control algorithms, and the evolution through the genetic algorithms. In the following subsections these tests are presented.
8.3.1 Locomotion tests
In section 8.2 it has been proven that the simulator is able to successfully simulate the snake-like, inchworm and helicoidal gaits. In this section, new types of movements developed using the simulator are described. These new movements have been achieved by combining all modules in different ways.
The experiments mention the use of passive modules, meaning modules without drive capabilities. Passive modules are represented by battery or traveler modules.
Rotation plus helicoidal modules
Rotation modules can perform several types of snake-like movement. Although these movements can be quite fast in the open air, inside pipes they can be quite slow. By combining rotation modules with helicoidal modules, it is still possible to perform snake-like movements while increasing the speed of movement.
The helicoidal modules push forward, trying to make the robot advance, while the rotation modules perform a snake-like gait that also helps to move forward, reduces friction (fewer parts touch the pipe) and allows rotations.
The inner diameter chosen for the experiments is usually 36 mm. This is enough for rotation modules to negotiate elbows. But when helicoidal modules are included in the chain, the robot will or will not be able to negotiate the elbow, depending on the position of the helicoidal modules. In figure 8.26 it is possible to see that for the 36 mm diameter the robot gets stuck in the elbow (a) and b)), while with a larger diameter of 40 mm it negotiates the elbow without problems (c) and d)).
But if the helicoidal modules are placed at the front and the back, the robot is able to negotiate the 36 mm elbow, as shown in figure 8.25.
The simulator is very useful to identify the dimensions the modules need in order to fit in a specific pipe.
Rotation plus passive modules
In section 8.2.4, several movements achieved with the rotation modules have been shown. These movements can still be performed if there are other modules placed between the
Figure 8.25: R+H elbow negotiation
Figure 8.26: R+H elbow negotiation depending on pipe diameter
Figure 8.27: Rotation + passive modules in a vertical sinusoidal movement
rotation modules.
In figure 8.27, the importance of the position of the passive modules in the chain can be observed. In figures a) and b), the movement of the microrobot is almost negligible. However, if the passive modules are placed symmetrically, the robot can perform the same movement. This extends to many snake-like movements.
In figure 8.28 it can be observed that the negotiation of elbows depends strongly on the overall drive force of the microrobot. In figures a) and b), the robot, composed of touch, rotation and passive modules, gets stuck in an elbow. With the help of a helicoidal module, in figures c) and d), the microrobot is able to negotiate the elbow.
Several support plus extension modules
It is possible to combine several support modules to create support units, and several extension modules to create extension units. After the MDL phase, the CC is able to detect the support and extension modules and to identify support and extension units. These units can be composed of a different number of modules, from one to several.
In figure 8.29 it is possible to see examples of units composed of two modules (a) and b)), three modules (c) and d)), and a combination (e) and f)). In g), an example of two different inchworm drive units working together shows that modules are aware of their position (although this configuration does not work, because the support modules of one unit block the movement of the extension module of the other unit).
Figure 8.28: Rotation + passive modules negotiating an elbow with and without helicoidal
module
Figure 8.29: Inchworm locomotion composed of several extension and support modules
Figure 8.30: Example of heterogenous conguration
Rotation plus support plus extension plus helicoidal modules
By combining several types of modules, several types of movements work together: snake-like, worm-like and helicoidal. Each of them fits better in a particular situation, in pipes or in the open air.
Figure 8.30 shows an example of the touch, rotation, helicoidal, extension and support modules working together and simultaneously performing vertical sinusoidal, helicoidal and worm-like movements.
8.3.2 Control tests
Configuration check

The configuration check phase is used to determine which modules are connected and in which order, as explained in section 7.6.2. This information is used by the modules, and especially by the CC, to specify types of movements and patterns.
This phase starts when the PowerUp button is pressed in the simulator, or when the modules are powered up in real life. An example of the results can be seen in figure 8.31.
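The configuration check can be pictured, in deliberately simplified form, as a discovery token travelling down the chain, with each module appending its type so that the reply carries both membership and order. The real protocol of section 7.6.2 is not reproduced in this excerpt, so the message format and the Module class below are hypothetical.

```python
# Hypothetical sketch of the configuration-check phase: a discovery token
# travels down the chain; each module appends its type, so the returned
# list gives both the set of modules and their order in the chain.

class Module:
    def __init__(self, mtype, next_module=None):
        self.mtype = mtype
        self.next = next_module

    def configuration_check(self, token):
        token.append(self.mtype)          # announce own type, in position
        if self.next is not None:
            return self.next.configuration_check(token)
        return token                      # tail reached: report back to CC

# chain: contact -> rotation -> rotation -> helicoidal -> battery
chain = Module("contact",
         Module("rotation",
          Module("rotation",
           Module("helicoidal",
            Module("battery")))))

print(chain.configuration_check([]))
# -> ['contact', 'rotation', 'rotation', 'helicoidal', 'battery']
```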
Figure 8.31: Configuration check example
Orientation
For certain movements it is important to keep a specific posture. For example, in the vertical sinusoidal wave movement, if the robot lies down, it is necessary to recover the posture before continuing with the vertical sinusoidal movement.
In figure 8.32 an example of the performance of the orientation behavior is shown. In a) it is possible to see that the first degree of freedom is horizontal. In b) the robot makes an arc, and consequently it falls down, as shown in c). Then it straightens itself back up, leaving the first degree of freedom vertical.
Wandering
Figures 8.34 and 8.33 show an example of the microrobot executing the wandering behavior. This includes going forward and negotiating an elbow when a bifurcation is detected by the contact module.
In the first case (figure 8.34), the micro-robot is composed of one contact and several rotation modules. The micro-robot goes forward in a snake-like movement (using one of the DOFs). When it reaches the elbow, the micro-robot uses the other DOF of the modules to make the turn.
In the second case (figure 8.33), the micro-robot is composed of the following modules: one contact, two rotation, one helicoidal, two rotation and one passive. The main drive force is provided by the helicoidal module. The rotation modules help a little in going forward, but their main task is to turn.
Split
If the microrobot is composed of enough modules, it may split (at a bifurcation, for example) in order to explore several stretches at the same time.
Figure 8.32: Example of orientation behavior
Figure 8.33: Contact, Rotation, Helicoidal and Passive
Figure 8.34: Contact and rotation modules
Figure 8.35: Example of chain splitting
Figure 8.35 shows an example of splitting. In a) the robot is a chain composed of 6 modules. In b) it splits into two parts, and in c) each part, composed of three modules, moves as an independent unit.
Chapter 9
Conclusions and Future Works
Un libro, como un viaje, se comienza con inquietud y se termina con melancolía
(A book, like a journey, begins with concern and ends with melancholy)
José de Vasconcelos
9.1 Conclusions
In the previous chapters, a description has been given of the work done designing and constructing a heterogeneous modular multi-configurable chain-type microrobot.
Starting from an analysis of the state of the art in modular, pipe-inspection and microrobotic systems, the lack of a microrobotic system like the one described in this thesis was identified. This is the reason that motivated the start of this thesis.
Several modules have been developed to perform different types of movements (some of them new) and, what is more important, a combination of all of them.
A simulator has been developed to go beyond the limits of the mechanical modules and to develop modules with more capacities and abilities. This simulator has been built on top of a physics dynamics engine (ODE) to maintain fidelity in all the experiments.
The simulator has been tested and validated through comparison with the tests on the real modules, and some examples have been given. New locomotion gaits have been presented and explained.
On top of all of this, a behavior-based control architecture specifically designed for heterogeneous chain-type modular robots has been developed. While inspired by the philosophy of reactive control, behavior-based systems are fundamentally more expressive and powerful, enabling representation, planning, and learning capabilities. Distributed behaviors are used as the underlying building blocks for these capabilities, allowing behavior-based systems to take advantage of dynamic interactions with the environment rather than rely solely on explicit reasoning and planning. As the complexity of robots continues to increase, behavior-based principles and their applications in robot architectures and deployed systems evolve as well, demonstrating increasingly higher levels of situated intelligence and autonomy.
Although the architecture is generally behavior-based, it also has a central control that is model-based, takes decisions for the whole robot and provides the behaviors with useful information.
In order to control the modules already designed, and others that may come in the future, a Module Description Language (MDL) has been developed for the modules to communicate their capabilities (push, rotate, extend, measure temperature, sense proximity, etc.).
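A deliberately simplified picture of what such capability announcements enable: once every module has reported what it can do, the CC can query the chain by capability rather than by module type. The concrete MDL syntax is not given in this excerpt, so the registry and helper below are hypothetical, using only the capability names mentioned in the text.

```python
# Hypothetical sketch of MDL capability announcements. Each module type
# reports its capabilities as plain strings; the CC queries by capability.

MDL_REGISTRY = {
    "rotation":   ["rotate"],
    "extension":  ["extend"],
    "support":    ["push"],
    "helicoidal": ["push", "rotate"],
    "sensor":     ["measure_temperature", "sense_proximity"],
}

def modules_with(capability, chain):
    """Return the chain positions whose module offers a capability."""
    return [i for i, mtype in enumerate(chain)
            if capability in MDL_REGISTRY.get(mtype, [])]

chain = ["support", "extension", "support", "sensor"]
print(modules_with("push", chain))   # -> [0, 2]
```

This is how a global command such as "push" could be translated by the heterogeneous agent into per-module commands for exactly the modules that declared that capability.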
Another important point in the control architecture is the offline genetic algorithm, which allows the microrobot to optimize the module layout and its locomotion parameters for specific tasks.
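The offline genetic algorithm can be sketched as follows. In the real system the fitness of an individual would be measured by running the simulator (e.g. the distance travelled with a given layout and gait parameters); here a toy analytic fitness stands in, and the encoding, operators and constants are assumptions.

```python
import random

# Hypothetical sketch of the offline genetic algorithm: individuals encode
# locomotion parameters (amplitude, phase difference). Fitness would normally
# be a simulator run; a toy analytic fitness stands in here.

def toy_fitness(genome):
    A, dphi = genome
    # stand-in for a simulator run: prefers A near 30 deg, dphi near 2*pi/3
    return -((A - 30.0) ** 2 + (dphi - 2.094) ** 2)

def evolve(fitness, pop_size=30, generations=60, seed=1):
    rng = random.Random(seed)
    pop = [(rng.uniform(0, 90), rng.uniform(0, 3.14)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]               # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            child = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)   # crossover
            child = (child[0] + rng.gauss(0, 2.0),           # mutation
                     child[1] + rng.gauss(0, 0.1))
            children.append(child)
        pop = parents + children                     # elitist survivors
    return max(pop, key=fitness)

best = evolve(toy_fitness)
```

Replacing `toy_fitness` with a call that launches a simulation episode and returns travelled distance gives the offline optimization loop described in the text.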
Finally, some tests presenting the results and the feasibility of the algorithms have been included.
9.2 Main contributions of the thesis
This thesis presents the following original contributions:
Electromechanical design and construction of a heterogeneous multi-configurable chain-type microrobot
The main contribution of this thesis lies in its contribution to chain-type modular microrobots with the inclusion of heterogeneous multi-configurable modules to achieve the combination of different types of movements. As opposed to the literature reviewed in chapter 2, where most of the designs are homogeneous, a heterogeneous configuration has been chosen to allow the inclusion of different types of modules.
A common mechanical interface has been designed to physically connect the modules and to carry the control signals and the power supply to all modules.
An original mechanism has been presented for the extension module v1.
Control architecture for chain-type heterogeneous modular robots
The control architecture is itself an original contribution, because there is no architecture for this kind of heterogeneous robots. One of the most familiar is the CONRO control, based on hormones, but it is designed for homogeneous modules, not for heterogeneous ones.
The heterogeneous agent placed between the embedded control and the central control is a contribution. Although it is similar to the medium layer in three-layer architectures, it is new in the sense that it acts as an interpreter between the CC and the different modules. Thus the CC can send global commands to all modules, and the heterogeneous agent will translate them for each module.
The Module Description Language (MDL) has been especially designed for this architecture, allowing modules to send their capabilities to the CC.
Several behaviors have been designed, both for the CC and for the embedded control. Especially new are the behaviors related to gait control, since modules can perform several types of movements.
An offline genetic algorithm has been developed to optimize the configuration of the layout of the modules and its parameters. It has been integrated with the simulator.
Simulation environment for chain-type heterogeneous modular robots
The simulation has been a very important part of the thesis, and although similar developments can be found (like in [Salemi et al., 2006]), this one presents a very powerful servomotor model integrated in the dynamic physics simulator based on ODE. As in the previous points, it is unique in the sense that it allows the simulation of the combination of heterogeneous gaits.
The electronic and control simulation, together with the fact that the code written in the simulator for the modules is ready to be transferred to the real microprocessors (with minor changes), are very important contributions, although not original, and may inspire future designs.
A traveled-distance measurement system for chain-type heterogeneous robots has been designed (the traveler module), based on the combination of several encoders.
Enhancement of the ego-positioning system
The ego-positioning concept developed for the I-SWARM robots has been enhanced in order to use different codes (binary, Gray) and scales (levels of intensity or grey scales).
Although first conceived for self-detection of position and rotation, the ego-positioning system has been enhanced to allow the transmission of commands and the programming of the robots.
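For the codes mentioned above, the standard reflected-binary (Gray) conversion is worth recalling: adjacent positions differ in exactly one bit, so a robot crossing a cell boundary never observes an ambiguous multi-bit transition. The functions below are the textbook conversions, not the thesis implementation.

```python
# Standard binary <-> Gray code conversions. In a Gray-coded floor pattern,
# adjacent cells differ in a single bit, avoiding multi-bit read errors at
# cell boundaries.

def binary_to_gray(n: int) -> int:
    return n ^ (n >> 1)

def gray_to_binary(g: int) -> int:
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

codes = [binary_to_gray(i) for i in range(8)]
# codes -> [0, 1, 3, 2, 6, 7, 5, 4]: each neighbor pair differs in one bit
```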
9.3 Publications and Merits
Throughout these years of research, the following publications related to the thesis have been produced.
9.3.1 Publications
Journal Articles
A Proposal for a Multi-Drive Heterogeneous Modular Pipe-Inspection Micro-robot. Brunete, A.; Torres, J.; Hernando, M. and Gambao, E. (2008). International Journal of Information Acquisition, Vol. 5, Issue 2, pp. 111-126.
Book Chapters
Arquitectura para robots modulares multiconfigurables heterogéneos de tipo cadena. Brunete, A.; Torres, J.; Hernando, M. and Gambao, E. (2007). Arquitecturas de control para robots. Escuela Técnica Superior de Ingenieros Industriales (ETSII), Universidad Politécnica de Madrid (UPM). pp. 151-167. ISBN: 978-84-7484-196-1.
Conference Proceedings
Multi-Drive Control for In-Pipe Snakelike Heterogeneous Modular Micro-Robots. Brunete, A.; Torres, J.; Hernando, M. and Gambao, E. (2007). Proceedings of the 2007 IEEE International Conference on Robotics and Biomimetics (ROBIO 2007).

A 2 DoF Servomotor-based Module for Pipe Inspection Modular Micro-robots. Brunete, A.; Torres, J.; Hernando, M. and Gambao, E. (2006). Proceedings of the 2006 IEEE International Conference on Intelligent Robots and Systems (IROS).

Solar Powering with Integrated Global Positioning System for mm3 Size Robots. Boletis, A.; Brunete, A.; Driesen, W. and Breguet, J. M. (2006). Proceedings of the 2005 IEEE International Conference on Intelligent Robots and Systems (IROS).

Multiconfigurable Inspection Robots for Low Diameter Canalizations. Gambao, E.; Brunete, A. and Hernando, M. (2005). International Symposium on Automation and Robotics in Construction (ISARC).

Modular Multiconfigurable Architecture for Low Diameter Pipe Inspection Microrobots. Brunete, A.; Hernando, M. and Gambao, E. (2005). Proceedings of the 2005 IEEE International Conference on Robotics and Automation (ICRA).

Drive Modules for Pipe Inspection Microrobots. Brunete, A.; Hernando, M. and Gambao, E. (2004). Proceedings of the 2004 IEEE International Conference on Mechatronics and Robotics (MECHROB), pp. 925-930.
Conference Video Proceedings
Drive Modules for Low Diameter Pipe Inspection Multiconfigurable Micro-robots. Brunete, A.; Hernando, M. and Gambao, E. (2006). Video Proceedings of the 2006 IEEE International Conference on Robotics and Automation (ICRA).
Conference Poster Sessions
Drive Modules for Low Diameter Pipe Inspection Multiconfigurable Micro-robots. Brunete, A.; Hernando, M. and Gambao, E. (2006). Proceedings of the 2006 IEEE International Conference on Robotics and Automation (ICRA). (Poster)
9.3.2 Merits
Nominations to best paper in a conference
Multi-Drive Control for In-Pipe Snakelike Heterogeneous Modular Micro-Robots. Brunete, A.; Torres, J.; Hernando, M. and Gambao, E. (2007). Proceedings of the 2007 IEEE International Conference on Robotics and Biomimetics (ROBIO 2007).
Campus de Excelencia
Selected for participation in the 2005 Campus de Excelencia by the ANECA (Agencia Nacional de Evaluación de la Calidad y Acreditación), based on the interests and merits of the thesis.
http://www.campusdeexcelencia.info/index.php
9.4 Future Work
Several ideas that will be researched in the future are:
Embedding of the Central Control in a specific module
Instead of having the central control in a PC, it can be embedded in a specific module with a more powerful processor.
Development of new modules
Some ideas regarding new modules are already being considered:
- Centipede: legs can be simulated with piezoelectric actuators (as in I-SWARM) or in a way similar to [Valdastri et al., 2009]
- Module with two drive wheels (especially for open spaces)
- Newer, more robust versions of the modules
- Construction of the traveler and sensor modules
Learning of new rules by the Central Control based on the experience acquired
The design of the central control and its inference engine makes it possible in the future to include learning algorithms that develop new rules online, i.e. while the robot is moving, and not only in simulation (offline).
Coordination between several microrobots
Moving a little towards swarm robotics, an interesting line of research would be to have several microrobots explore the environment and coordinate to share tasks out.
Split and rejoin tasks would be of great interest for those purposes.
Visual control
The inclusion of visual control would give the camera module another very interesting sensorial capability: the auto-detection of obstacles.
Appendix A
Fabrication technologies
A.1 Stereolithography
The stereolithography process was first developed in the field of rapid prototyping, and is capable of generating physical parts with features and refinements that make it attractive and useful as an aid in the development of new products. The generation system discussed here was patented in 1986 by the company 3D Systems.
Basically, the system relies on the ability of certain resins, especially designed for this purpose, to solidify when attacked by a laser beam of very specific frequency and power.
As usual in the various rapid prototyping techniques currently existing, aimed at the generation of physical parts, the parts are made by horizontal lamination of the theoretical geometry produced by 3D design software. Together, all the layers should lead to the desired piece. The starting point in all cases is a file in STL format.
A.1.1 Part generation mechanics
The work area consists of a vat containing liquid resin; an elevation plate on which the supports for the parts, and the parts themselves, are generated; a stabilizing bar; the laser transmitter; and a set of mirrors that project the laser beam precisely on the top sheet of the resin tray, where it draws the outline of the different horizontal slices of the part, as well as the inner padding.
The tray that supports the columns (required so that the overhangs of the part do not collapse to the bottom of the tray) and the part descends into the resin, immersing the whole set a distance of typically 0.15 mm, which defines the layer hop of the part to generate. To avoid surface deformations of the top layer that could be caused by the surface tension of the liquid resin, a stabilizing bar is moved over the surface to ensure it is completely flat and smooth.
At this point, the laser beam is directed by the mirrors to the surface of the resin, drawing the outline and the inner padding to be solidified. This process will be more or less laborious depending on the total area that the laser has to cover. It may be necessary to stop the process a few seconds to ensure that all the resin attacked by the beam
(a) General description (b) Laser
Figure A.1: Stereolithography process
Figure A.2: Support columns removal
is sufficiently solidified.
If there are new layers to do, the tray submerges again by a new layer hop, and the process is repeated until the upper bound of the piece is reached. (Note that the geometry is generated from bottom to top.)
Once the last layer of the piece is done, the tray rises, so the piece can be easily removed. At this point the support columns are mechanically removed in order to leave the part clean. As a final process, a post-cure of the part is performed in a light furnace in order to cure the resin and achieve better mechanical properties.
A.1.2 Images from real work process
This sequence of photos illustrates some of the most important stages in the process of generating stereolithography parts.
The first photograph, taken at the beginning of a work tray, shows the path the laser follows to generate the support columns that will support the various parts. In this case the beam does not sweep the inside of the outlines, because it is intended to
Figure A.3: Laser trajectory
get a fragile support that is easy to remove later.
In the second picture, the solidification process of the resin has finished, and therefore the tray is located in its uppermost position to proceed with the removal.
The next image shows the column removal process, carried out in order to release the parts. They have to be scrupulously cleaned to remove any residue of liquid resin.
Finally, the following pictures show the post-curing furnace and the parts inside it.
A.1.3 Advantages, drawbacks and limitations
The following advantages may be noted:
1. It is one of the most dimensionally accurate rapid prototyping techniques, making it particularly suitable for parts in which this feature has special relevance. As a rule, it is generally considered a good technique for small parts with many details.
2. The parts obtained are pleasant to touch and sight, and can be polished and/or painted with ease, resulting in excellent surface finishes. They are therefore very suitable to be used as models when silicone molds are to be created afterwards, followed by vacuum casting in materials with characteristics similar to the final ones.
3. The final material is translucent, making it particularly suitable for certain assemblies where internal interferences should be appreciated (or at least implied).
As for drawbacks, it can be pointed out that, although progress has been made in the development of new photosensitive resins with better mechanical properties, the most

Figure A.4: Solidification process

Figure A.5: Post-cure oven

commonly used ones are fragile and inflexible and, once cured, the parts are very sensitive to both humidity (including atmospheric humidity) and temperature. These two parameters may easily cause the loss of their mechanical characteristics and lead to dimensional changes over time.
As for limitations, it is possible to say:
1. The stereolithography work process causes the different layers of the solidified part to require support columns, to avoid collapsing and ending up at the bottom of the tray. These columns should be generated on areas of the part from which they are easy to extract, so the orientation of the piece in the tray cannot be chosen freely. If this is not taken into account, geometries may be generated that are affected by interior columns that cannot be removed, making them partially or completely lose their usefulness.
Figure A.6: Detail of some parts of the rotation module v1
2. Within the working area of the tray it is possible to place different pieces, but always at the same level. It is not possible, or at least not appropriate, to try to nest parts on top of each other.
3. Although the level of detail is established by the precision set by the layer hop (which allows for extrusions or indentations on the surfaces), it is generally not desirable to make wall thicknesses below 1 mm because of the fragility of the material. In extreme cases, depending on the size of the potential overhang, it is possible to obtain vertical walls of 0.6 mm and horizontal walls of 0.8 mm thickness.
A.2 Micro-milling
Micro-milling is a process that becomes more important as sectors such as medicine, telecommunications and aerospace demand increasingly smaller parts. Unlike conventional milling, micro-milling lacks recommendations for process operation and cutting-condition selection. One of the tasks on which a lot of research is being done is the development of recommendations obtained by several procedures based on empirical methods, or contrasted with those provided by the limited literature in this field. Most of the work carried out in this area comes from extrapolating the knowledge gained in conventional and high-speed milling, with the use of very small diameter tools on small pieces, as well as from defining the needs of measurement and verification procedures for this type of parts.

Figure A.7: Micro-milling system

Figure A.8: Fixation System
Micro-milling imposes restrictions on the machines and tooling used:
1. The machines should be equipped with high-speed cutting spindles to reach the appropriate cutting speed with the small-dimension tools used.
2. Due to the small part dimensions, very accurate axis positioning is required.
3. The fastening of the part is a new challenge to be solved, given the lack of fastening systems adapted to the new sizes, similar to the conventional ones.
The achievable dimensional characteristics will be limited by the minimum size of the available tool. In known commercial tools (sintered carbide micrograin), those dimensions are on the order of 20 µm.
Figure A.9: Contouring machining
The system developed in the manufacturing division of the Department of Mechanical and Manufacturing Engineering of the ETSII of Madrid, in the framework of its project INDUS-MST, to perform the micro-milling process consists of a diabase bed with a specification of 1 µm, which supports units designed to achieve a displacement repeatability of less than 1 µm in each axis. The displacement units have been mounted with a perpendicularity specification of 50 µrad.
One of the great advantages of using this manufacturing process is that you can get
steel parts much stronger than the pieces obtained by stereolithography. The precision
obtained is very high. In some trials have been achieved walls as thin as 25 m.
The drawbacks that can be pointed out are the formation of burrs (dicult to elim-
inate), the need to use dierent tools depending on the material in which you want to
machine and the precision required, and the high cost of micro-milling tool.
The main limitation is that the parts generated by micro-milling should be relatively
simple geometric parts. Compared with stereolithography, the latter offers more freedom
when designing parts. In this project, micro-machining has been used mainly in the
manufacture of axles and wheels.
The dimensional quality of the parts is closely related to the machining material, as
is the surface finish, on which the feed rate used also has a determining influence.
There are applications that are complicated to conduct due to the difficulty of holding
the piece in the micro-milling system. In particular, several parts broke during the
experiments while being machined.
Figure A.10: Helicoidal module leg generated by micromachining
Appendix B
Terms and Concepts
Robustness
The ability to handle imperfect inputs, unexpected events and sudden malfunctions.
Reliability
The ability to operate without failures or performance degradation over a certain period.
Modularity
The ability of the control system of an autonomous vehicle to be divided into smaller
subsystems (or modules) that can be separately and incrementally designed, implemented,
debugged and maintained.
Flexibility
Experimental robotics requires continuous changes in the design during the implementation
phase. Therefore, flexible control structures are required to allow the design to be guided
by the success or failure of the individual elements.
Expandability
A long time is required to design, build and test the individual components of a robot.
Therefore, an expandable architecture is desirable in order to be able to build the system
incrementally.
Adaptability
As the state of the world changes very rapidly and unpredictably, the control system must
be adaptable in order to switch smoothly and rapidly between dierent control strategies.
Classication of Modular Robots
Modular robotic systems can be generally classified into several architectural groups by
the geometric arrangement of their units (lattice vs. chain). Several systems exhibit hybrid
properties.
Lattice architectures have units that are arranged and connected in some regular,
space-filling three-dimensional pattern, such as a cubical or hexagonal grid. Control
and motion are executed in parallel. Lattice architectures usually offer a simpler
computational representation that can be more easily scaled to complex systems.
Chain/tree architectures have units that are connected together in a string or tree
topology. This chain or tree can fold up to become space-filling, but the underlying
architecture is serial. Chain architectures can reach any point in space, and are
therefore more versatile, but more computationally difficult to represent and analyze.
Tree architectures may resemble a bush robot.
Modularity and Recongurability
Modularity is a general systems concept, typically defined as a continuum describing the
degree to which a system's components may be separated and recombined. It refers both to
the tightness of coupling between components and to the degree to which the rules of the
system architecture enable (or prohibit) the mixing and matching of components.
Modular robots are composed of multiple copies of simple modules. Modules cannot
do much by themselves, but when many of them are connected together, a system that
can do complicated things emerges. In fact, a modular robot can even be reconfigured in
different ways to meet the demands of different tasks or different working environments.
Each module is virtually a robot in itself, having a computer, a motor, sensors and the
ability to attach to other modules.
Multiconfigurability vs Self-reconfigurability
Reconfigurable robots present the ability to change their configuration either manually or
autonomously. If the reconfiguration is done autonomously, it is called self-reconfiguration.
On the other hand, if the reconfiguration has to be done manually, we talk about multi-
configuration. Modules attach together to form chains (which can be used like an arm, a
leg or a finger), caterpillars, double-thread caterpillars, wheels, 4/6-leg walkers, sidewinders,
spiders, etc.
In the development of in-pipe robots, self-reconfigurability is not an essential charac-
teristic due to the lack of space inside the tube to change configuration. It is better
to talk about multiconfiguration: the robot adopts different configurations prior to task
execution. Once the task is started, the configuration must be kept.
Homogeneous vs Heterogeneous modules
Depending on the type of modules the robot is composed of, robots can be classified
as homogeneous (all the modules are the same) or heterogeneous (different modules).
Modular robots can be defined, as in [Yim et al., 2001], as n-modular robots,
where n is the number of different modules. The main advantage of homogeneous robots
is that they are easy to build. On the other hand, they are limited in their movement gaits.
Heterogeneous robots are more versatile and can perform other tasks and several movement
gaits.
Miniaturization
The term microrobot appears nowadays in many articles referring to mini-robots, robots
of very small dimensions (millimeters). This is because we are still far from seeing a
real micro robot (µm). Keeping in mind that for most research groups it is not pos-
sible to build a real microrobot, it is necessary to miniaturize its components and to
carry out the mechanical and electronic design together to minimize the space (mechatronics).
This is the work that has been carried out, and it is what makes the design so expensive.
Miniaturization can be seen in many microrobots too, as in the microrobot of DENSO
Corporation [Nishikawa et al., 1999], the Micro Modular Robot of AIST [Yoshida et al.,
2002], and the three microrobots of the French CNRS: LMS, LAB and LAI [Anthierens
et al., 2000]. SMAs are extensively used in microrobots because they provide good torque in
small displacements; they are used, for example, in the aforementioned robots. PolyBot and
M-TRAN use shape memory alloys as latches for docking modules, aided by infrared emitters
and detectors. SMAs could also be very useful for grippers and connecting pads. Their
main disadvantage is their high power consumption.
Energy supply
Energy supply is a big problem in mobile microrobots because the available supplied
power is very limited. Most developers adopt batteries or a cable as the solution to
transfer power to the robot. In autonomous micro-robots the solution is limited
to onboard batteries. A very innovative solution is presented by DENSO Corporation,
which has solved this problem in its microrobot [Nishikawa et al., 1999] by developing a
wireless energy supply system (together with a low-power actuator, a highly efficient
energy-conversion device and a power-management system). The microrobot functions as a
complete wireless link system, traveling in small pipes at 10 mm per second with wireless
data communication of 2.5 Mbps and a wireless energy supply of 480 mW. It includes devices
such as a CCD camera, locomotive actuator, control circuit, wireless energy supply device,
and RF circuit installed into a small body of 10 mm diameter and 50 mm length. Sending
energy through radio frequency is a very interesting solution, but it is limited to low-power
devices.
Centralized vs Distributed control
Generally, most robots use centralized control: one agent (a PC, or one of the modules)
tells every module what it has to do at every moment. A distributed system is a collection
of (possibly heterogeneous) automata whose distribution is transparent to the user, so
that the system appears as one local machine. It is possible to consider the microrobot as
a distributed system, in which every module does its job but the robot looks like a single
entity to an external observer. This is the case of M-TRAN: the robot motion is controlled
by all the modules' CPUs. Whether it is a PC (as it is now) or one of the modules (in the
future), a highly intelligent centralized controller makes the control much more powerful
and easier to implement.
Statically Stable Locomotion
Locomotion is defined to be the act or power of moving from place to place. Statically
stable locomotion has the added constraint that the moving body be stable at all times.
In other words, if the body were to instantaneously stop all motion, the body would still
be standing. More specifically, the vertical projection of the center of gravity will be
contained within the convex hull of the body's points of contact with the ground at all
times.
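The criterion above reduces to a 2-D geometric test: project the center of gravity onto the ground plane and check that it lies inside the convex hull of the contact points. A minimal sketch of such a check (plain Python, no external libraries; the function and point names are illustrative, not part of the thesis software):

```python
def cross(o, a, b):
    """z-component of the cross product (a - o) x (b - o)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain: counterclockwise hull of 2-D points."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def statically_stable(cog_xy, contact_points):
    """True if the ground projection of the center of gravity lies inside
    (or on the boundary of) the convex hull of the contact points."""
    hull = convex_hull(contact_points)
    if len(hull) < 3:
        return False  # fewer than 3 contacts: no support polygon
    n = len(hull)
    return all(cross(hull[i], hull[(i + 1) % n], cog_xy) >= 0 for i in range(n))
```

For example, a four-legged stance with feet at the corners of a 2 x 2 square is stable for a centered CoG but not for one projected outside the square.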
Gaits
A gait is defined to be one cycle of a repeated pattern of motion that is used to move from
one place to another. Simple gaits are those which cannot be broken down into separate
gaits, as opposed to compound gaits, which are combinations of simple gaits. One
example of two simple gaits being combined into a compound gait is (1) a person walking,
(2) a small toy 4-wheeled car, (1+2) a person roller skating.
Servomotors
Servomotors are a special type of motor characterized by their ability to move immediately
to any position within their operating range.
Servos are composed of an electric motor mechanically linked to a potentiometer.
Pulse-width modulation (PWM) signals sent to the servo are translated into position com-
mands by electronics inside the servo. When the servo is commanded to rotate, the motor
is powered until the potentiometer reaches the value corresponding to the commanded
position.
Due to their affordability, reliability, and simplicity of control by microprocessors, RC
servos are often used in small-scale robotics applications.
The servo is controlled by three wires: ground (usually black/orange), power (red) and
control (brown/other color). This wiring sequence is not true for all servos; for example, the
S03NXF Std. Servo is wired as brown (negative), red (positive) and orange (signal). The
servo will move based on the pulses sent over the control wire, which set the angle of the
actuator arm. The servo expects a pulse every 20 ms in order to gain correct information
about the angle. The width of the servo pulse dictates the range of the servo's angular
motion.
A servo pulse of 1.5 ms width will set the servo to its neutral position, or 90°. For
example, a servo pulse of 1.25 ms could set the servo to 0° and a pulse of 1.75 ms could set
the servo to 180°. The physical limits and timings of the servo hardware vary between
brands and models, but a typical servo's angular motion will travel somewhere in the
range of 180°-210°, and the neutral position is almost always at 1.5 ms.
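The pulse-to-angle relation described above can be sketched as a simple linear mapping. The 1.25-1.75 ms endpoints follow the example in the text; real servos differ between brands and models, so these defaults are assumptions to be calibrated per device:

```python
def servo_pulse_ms(angle_deg, min_ms=1.25, max_ms=1.75, travel_deg=180.0):
    """Map a commanded angle to a PWM pulse width in milliseconds.

    The pulse is repeated every 20 ms. min_ms/max_ms are assumed
    endpoints taken from the example in the text; a real servo's
    limits must be measured for the specific brand and model.
    """
    if not 0.0 <= angle_deg <= travel_deg:
        raise ValueError("angle outside servo travel")
    return min_ms + (angle_deg / travel_deg) * (max_ms - min_ms)
```

With these endpoints, `servo_pulse_ms(90)` returns 1.5 ms, the neutral position.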
I2C
I2C (Inter-Integrated Circuit) is a multi-master serial computer bus invented by Philips
that is used to attach low-speed peripherals to a motherboard, embedded system, or
cellphone.
I2C uses only two bidirectional open-drain lines, Serial Data (SDA) and Serial Clock
(SCL), pulled up with resistors. Typical voltages used are +5 V or +3.3 V, although
systems with other, higher or lower, voltages are permitted.
The I2C reference design has a 7-bit address space with 16 reserved addresses, so a
maximum of 112 nodes can communicate on the same bus. The most common I2C bus
modes are the 100 kbit/s standard mode and the 10 kbit/s low-speed mode, but clock
frequencies down to DC are also allowed. Recent revisions of I2C can host more nodes
and run faster (400 kbit/s Fast mode, 1 Mbit/s Fast mode plus or Fm+, and 3.4 Mbit/s
High-Speed mode), and also support other extended features, such as 10-bit addressing.
The reference design, as mentioned above, is a bus with clock (SCL) and data (SDA)
lines and 7-bit addressing. The bus defines two roles for nodes: master and slave:
Master node: node that issues the clock and addresses slaves.
Slave node: node that receives the clock and is addressed by the master.
The bus is a multi-master bus, which means any number of master nodes can be present.
Additionally, master and slave roles may be changed between messages (after a STOP is
sent).
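The 7-bit addressing described above can be illustrated with a short sketch: the usable node count follows from the address space minus the reserved addresses, and the first byte of every transfer packs the 7-bit address with the read/write bit. The function names are illustrative only:

```python
ADDRESS_BITS = 7
RESERVED_ADDRESSES = 16

def usable_nodes():
    """Nodes that can share one bus: 2^7 addresses minus the 16 reserved."""
    return 2 ** ADDRESS_BITS - RESERVED_ADDRESSES

def address_byte(addr7, read):
    """First byte of an I2C transfer: the 7-bit slave address shifted left,
    followed by the R/W bit (1 = read, 0 = write)."""
    if not 0 <= addr7 < 2 ** ADDRESS_BITS:
        raise ValueError("address must fit in 7 bits")
    return (addr7 << 1) | (1 if read else 0)
```

For instance, a write to a device at address 0x50 starts with the byte 0xA0, and a read from the same device with 0xA1.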
Appendix C
Equipment used
C.1 Hardware
Dimax U2C-12 card
The Dimax U2C-12 (fig. C.1), an all-in-one USB-I2C, USB-SPI and USB-GPIO bridge
device, converts PC USB transactions into I2C master and SPI master transactions and
GPIO functions. The U2C-12 turns a PC running Windows, Linux or MacOS into a
comprehensive I2C/SPI bus master.
GTP USB Lite Programmer
GTP USB Lite is a simple USB-based PIC programmer capable of programming
almost any type of PIC to date, with good software support (WinPic800) on the PC side.
Communication Box
The communication box (fig. C.2) is a device that has been built in order to integrate all
the equipment necessary to run the microrobot, including:
5 V power supply
Dimax U2C-12 card
GTP USB Lite Programmer
A box has been designed integrating all of these elements, so it is easy to connect to
the robot, download software and send commands.
Figure C.1: U2C-12 card
Figure C.2: Communication box
C.2 Software
C.2.1 Modelling
Autodesk Inventor
Autodesk Inventor, developed by U.S.-based software company Autodesk, is 3D parametric
solid modeling software for creating 3D mechanical models. With Inventor, it is possible
to create digital objects that simulate physical objects. Inventor models are accurate 3D
digital prototypes.
http://usa.autodesk.com/adsk/servlet/pc/index?siteID=123112&id=13717655
Rheingold 3D
Rheingold3D is a standalone 3D polygon modeller that greatly speeds up creation and
manipulation of 3D polygon models. It offers a rich set of tools to deal with polygon meshes,
starting with strong import/export capabilities, over many tools to generate/manipulate
polygon-based objects and multiple UV mapping creation/editing features, and ending
with powerful Low Polygon commands.
http://www.tb-software.com/products_1.html
Meshlab
MeshLab is an open source, portable, and extensible system for the processing and editing
of unstructured 3D triangular meshes. The system is aimed at helping the processing of the
typical not-so-small unstructured models arising in 3D scanning, providing a set of tools
for editing, cleaning, healing, inspecting, rendering and converting these kinds of meshes.
http://meshlab.sourceforge.net/
C.2.2 Simulation
Microsoft Visual C++
Microsoft Visual C++ (often abbreviated as MSVC) is a commercial integrated development
environment (IDE) product engineered by Microsoft for the C, C++, and C++/CLI
programming languages. It has tools for developing and debugging C++ code, especially
code written for the Microsoft Windows API, the DirectX API, and the Microsoft .NET
Framework.
http://msdn.microsoft.com/en-us/visualc/default.aspx
ODE
ODE (Open Dynamics Engine) is an open source, high performance library for simulating
rigid body dynamics. It is fully featured, stable, mature and platform independent with
an easy to use C/C++ API. It has advanced joint types and integrated collision detection
with friction. ODE is useful for simulating vehicles, objects in virtual reality environments
and virtual creatures. It is currently used in many computer games, 3D authoring tools
and simulation tools.
http://www.ode.org/ and http://opende.sourceforge.net/wiki
MATLAB
MATLAB is a numerical computing environment and fourth-generation programming
language. Developed by The MathWorks, MATLAB allows matrix manipulation, plotting of
functions and data, implementation of algorithms, creation of user interfaces, and
interfacing with programs in other languages. Although it is numeric only, an optional
toolbox uses the MuPAD symbolic engine, allowing access to computer algebra capabilities.
An additional package, Simulink, adds graphical multidomain simulation and Model-Based
Design for dynamic and embedded systems.
It has been used in this thesis for the validation part, for data management and for
generating figures of the modules' servomotor variables: current, torque and angular
position.
http://www.mathworks.com/products/matlab/
C.2.3 Microchip programming
MPLAB
MPLAB Integrated Development Environment (IDE) is a free, integrated toolset for the
development of embedded applications employing Microchip's PIC and dsPIC micro-
controllers. MPLAB IDE runs as a 32-bit application on MS Windows, is easy to use and
includes a host of free software components for fast application development and super-
charged debugging. MPLAB IDE also serves as a single, unified graphical user interface
for additional Microchip and third-party software and hardware development tools. Moving
between tools is a snap, and upgrading from the free software simulator to hardware
debug and programming tools is done in a flash because MPLAB IDE has the same user
interface for all tools.
The PIC-C Compiler has been used as the C compiler.
http://www.microchip.com
C.2.4 Editing
LaTeX
LaTeX is a document markup language and document preparation system for the TeX
typesetting program. Within the typesetting system, its name is styled as LaTeX.
LaTeX is most widely used by mathematicians, scientists, engineers, philosophers,
economists and other scholars in academia and the commercial world, and other profes-
sionals. As a primary or intermediate format, LaTeX is used because of the high quality
of typesetting achievable by TeX. The typesetting system offers programmable desktop
publishing features and extensive facilities for automating most aspects of typesetting and
desktop publishing, including numbering and cross-referencing, tables and figures, page
layout and bibliographies.
LaTeX is intended to provide a high-level language that accesses the power of TeX.
LaTeX essentially comprises a collection of TeX macros and a program to process LaTeX
documents. Because the TeX formatting commands are very low-level, it is usually much
simpler for end-users to use LaTeX.
LaTeX was originally written in the early 1980s by Leslie Lamport at SRI International.
It has become the dominant method for using TeX; relatively few people write in plain
TeX anymore.
The term LaTeX refers only to the language in which documents are written, not to
the editor used to write those documents. In order to create a document in LaTeX, a
.tex file must be created using some form of text editor. While most text editors can be
used to create a LaTeX document, a number of editors have been created specifically for
working with LaTeX.
For the writing of this thesis, two toolchains have been used: TeXShop and MacTeX (for
Mac), and TeXnicCenter and MiKTeX (for Windows).
http://www.uoregon.edu/~koch/texshop/
http://www.tug.org/mactex/2009/
http://www.texniccenter.org/
http://miktex.org/
JabRef
JabRef is an open source bibliography reference manager. The native file format used by
JabRef is BibTeX, the standard LaTeX bibliography format. JabRef runs on the Java VM
(version 1.5 or newer), and should work equally well on Windows, Linux and Mac OS X.
BibTeX is an application and a bibliography le format written by Oren Patashnik
and Leslie Lamport for the LaTeX document preparation system.
Bibliographies generated by LaTeX and BibTeX from a BibTeX file can be formatted
to suit any reference list specifications through the use of different BibTeX style files.
http://jabref.sourceforge.net/
CamStudio
CamStudio is a screencasting program for Microsoft Windows released as free software.
The software renders videos in AVI format. It can also convert these AVIs into Flash
Video format, embedded in SWF files. CamStudio is coded in Microsoft Visual C++.
It has been mainly used for recording videos from the simulator environment.
http://camstudio.org/
Glossary
ACM Active Cord Mechanism, 35
ASM Action-Selection Mechanisms, 55
CAMPOUT Control Architecture for Multi-robot Planetary Outposts, 66
CAN Controller Area Network, 80
CC Central Control, 117
CEBOT Cellular Robotic System, 10
CHOBIE Cooperative Hexahedral Objects for Building with Intelligent Enhancement, 24
CPG Central Pattern Generator, 37
DAMN Distributed Architecture for Mobile Navigation, 64
DD&P Dual Dynamics & Planning, 76
dof Degree of Freedom, 94
EGO-positioning Auto-positioning System, 163
FSA Finite State Automata, 52
GA Genetic Algorithms, 82, 128
HLC High Level Commands, 120
I-SWARM Intelligent Small-World Autonomous Robots for Micro-manipulation, 164
I2C Inter-Integrated Circuit, 109, 110, 117, 119, 135, 141, 205
Inference Engine Inference Engine, 123
LLC Low Level Commands, 119
M-TRAN Modular TRANsformer, 14
MAAM Molecule Atom Atom Molecule, 26
MDCN Massively Distributed Control Nets, 80
MDL Module Description Language, 117
ODE Open Dynamics Engine, 135
RL Reinforcement Learning, 81
SMA Shape Memory Alloy, 30, 31, 34
SR Stimulus response, 51
Bibliography
[fos, ] Foster-Miller: http://www.foster-miler.com/.
[nor, ] Northstar: http://www.evolution.com/products/northstar/.
[Albus et al., 1988] Albus, J., Lumia, R., and McCain, H. (1988). Hierarchical control
of intelligent machines applied to space station telerobots. IEEE Transactions on
Aerospace and Electronic Systems, 24(5):535-541.
[Andersen et al., 1992] Andersen, C. S., Christensen, H. I., Kirkeby, N. O. S., Knudsen,
L. F., and Madsen, C. B. (1992). Vinav, a system for vision supported navigation.
In Christensen, H. I., editor, Proceedings Nordic Summer School on Active Vision and
Geometric Modeling, Aalborg, 1992, pages 251-257. Laboratory of Image Analysis.
[Anthierens et al., 2000] Anthierens, C., Libersa, C., Touaibia, M., Betemps, M., Arsi-
cault, M., and Chaillet, N. (2000). Micro robots dedicated to small diameter canaliza-
tion exploration. In Proceedings of the 2000 IEEE/RSJ International Conference on
Intelligent Robots and Systems, pages 480-485.
[Arkin, 1987] Arkin, R. (1987). Motor schema based navigation for a mobile robot: An
approach to programming by behavior. In IEEE International Conference on Robotics
and Automation, volume 4, pages 264-271.
[Arkin, 1998] Arkin, R. C. (1998). Behavior-Based Robotics. MIT Press.
[Arkin and Balch, 1997] Arkin, R. C. and Balch, T. (1997). AuRA: Principles and practice
in review. Journal of Experimental and Theoretical Artificial Intelligence, 9:175-189.
[Bahl and Padmanabhan, 2000] Bahl, P. and Padmanabhan, V. (2000). RADAR: An in-
building RF-based user location and tracking system. In IEEE Proceedings of Infocom,
pages 775-784, IEEE CS Press, Los Alamitos, Calif.
[Bonasso et al., 1995] Bonasso, R. P., Kortenkamp, D., Miller, D. P., and Slack, M. (1995).
Experiences with an architecture for intelligent, reactive agents. Journal of Experimental
and Theoretical Artificial Intelligence, 9:237-256.
[Brener et al., 2004] Brener, N., BenAmar, F., and Bidaud, P. (2004). Analysis of self-
reconfigurable modular systems: a design proposal for multi-modes locomotion. In IEEE
International Conference on Robotics and Automation, volume 1, pages 996-1001.
[Brooks, 1986] Brooks, R. A. (1986). A robust layered control system for a mobile robot.
IEEE Journal of Robotics and Automation, 2(1):14-23.
[Brunete et al., 2005] Brunete, A., Hernando, M., and Gambao, E. (2005). Modular mul-
ticonfigurable architecture for low diameter pipe inspection microrobots. In Proceed-
ings of the 2005 IEEE International Conference on Robotics and Automation (ICRA),
Barcelona, Spain.
[Butler et al., 2004] Butler, Z., Kotay, K., Rus, D., and Tomita, K. (2004). Generic de-
centralized locomotion control for lattice-based self-reconfigurable robots. Intl. Journal
of Robotics Research, 23(9):919-937.
[Caprari, 2003] Caprari, G. (2003). Autonomous Microrobots: Applications and Limita-
tions. PhD thesis, École Polytechnique Fédérale de Lausanne.
[Chen, 1994] Chen, M. (1994). Theory and Applications of Modular Reconfigurable Robotic
Systems. PhD thesis, Division of Engineering and Applied Science, California Institute
of Technology, Pasadena, CA, USA.
[Chirikjian, 1994] Chirikjian, G. S. (1994). Kinematics of a metamorphic robotic system.
In Proceedings of the 1994 IEEE International Conference on Robotics and Automation,
pages 449-455.
[Conradt and Varshavskaya, 2003] Conradt, J. and Varshavskaya, P. (2003). Distributed
central pattern generator control for a serpentine robot. In Proceedings of the Interna-
tional Conference on Artificial Neural Networks (ICANN), pages 338-341, Istanbul,
Turkey.
[Cox and Wilfong, 1990] Cox, I. J. and Wilfong, G. T. (1990). The Stanford Cart and the
CMU Rover. Autonomous Robot Vehicles, 1:407-419.
[Dardari and Conti, 2004] Dardari, D. and Conti, A. (2004). A sub-optimal hierarchical
maximum likelihood algorithm for collaborative localization in ad-hoc networks. In
First Annual IEEE Communications Society Conference on Sensor and Ad Hoc Com-
munications and Networks, pages 425-429.
[Darrell et al., 1998] Darrell, T., Gordon, G., Harville, M., and Woodfill, J. (1998). Inte-
grated person tracking using stereo, color, and pattern detection. In Proceedings of the
Conference on Computer Vision and Pattern Recognition, pages 601-609.
[del Monte Garrido, 2004] del Monte Garrido, S. (2004). Diseño y construcción del módulo
tractor de un microrobot para inspección de tuberías. Master's thesis, E.T.S.I.I. - UPM.
[Denavit and Hartenberg, 1955] Denavit, J. and Hartenberg, R. (1955). A kinematic no-
tation for lower-pair mechanisms based on matrices. Transactions of the ASME Journal
of Applied Mechanics, 23:215-221.
[Eltaher et al., 2005] Eltaher, A., Ghalayini, I., and Kaiser, T. (2005). Towards UWB self-
positioning systems for indoor environments based on electric field polarization, signal
strength and multiple antennas. In 2nd International Symposium on Wireless
Communication Systems, pages 389-393.
[Flynn, 1987] Flynn, A. M. (1987). Gnat robots (and how they will change robotics). In
Proceedings of the IEEE Micro Robots and Teleoperators Workshop, Hyannis, MA.
[Fukuda and Kawauchi, 1990] Fukuda, T. and Kawauchi, Y. (1990). Cellular robotic sys-
tem (CEBOT) as one of the realization of self-organizing intelligent universal manipulator.
In Proceedings of the 1990 IEEE International Conference on Robotics and Automation,
pages 662-667.
[Gat, 1992] Gat, E. (1992). Integrating planning and reacting in a heterogeneous asyn-
chronous architecture for controlling real-world mobile robots. In Proceedings of the
National Conference on Artificial Intelligence (AAAI), pages 809-815.
[Gonzalez et al., 2006] Gonzalez, J., Zhang, H., Boemo, E., and Zhang, J. (2006). Locomo-
tion of a modular robot with eight pitch-yaw-connecting modules. In 9th International
Conference on Climbing and Walking Robots.
[Gray and Lissmann, 1950] Gray, J. and Lissmann, H. (1950). The kinetics of locomotion
of the grass-snake. J. Exp. Biology, 26:354-367.
[Hada and Takase, 2001] Hada, Y. and Takase, K. (2001). Multiple mobile robot naviga-
tion using the indoor global positioning system (iGPS). In Proceedings 2001 IEEE/RSJ
International Conference on Intelligent Robots and Systems, volume 2, pages 1005-1010.
[Haeberlen et al., 2004] Haeberlen, A., Flannery, E., Ladd, A. M., Rudys, A., Wallach,
D. S., and Kavraki, L. E. (2004). Practical robust localization over large-scale 802.11
wireless networks. In Proceedings of the Tenth ACM International Conference on Mobile
Computing and Networking (MOBICOM'04).
[Hamlin and Sanderson, 1996] Hamlin, G. J. and Sanderson, A. C. (1996). Tetrobot mod-
ular robotics: prototype and experiments. In Proceedings of the 1996 IEEE International
Conference on Intelligent Robots and Systems, volume 2, pages 390-395.
[Hernandez et al., 2003] Hernandez, S., Morales, C., Torres, J., and Acosta, L. (2003).
A new localization system for autonomous robots. In Proceedings ICRA '03, IEEE
International Conference on Robotics and Automation, volume 2, pages 1588-1593.
[Hightower and Boriello, 2001] Hightower, J. and Boriello, G. (2001). Localization sys-
tems for ubiquitous computing. IEEE Computer, 34(8):57-66.
[Hightower et al., 2000] Hightower, J., Want, R., and Borriello, G. (2000). SpotON: An
indoor 3D location sensing technology based on RF signal strength. Technical Report
2000-02-02, University of Washington, Computer Science and Engineering.
[Hirose, 1993] Hirose, S. (1993). Biologically Inspired Robots: Snake-Like Locomotors and
Manipulators. Oxford University Press, New York, USA.
[Hirose et al., 1999] Hirose, S., Ohno, H., Mitsui, T., and Suyama, K. (1999). Design
of in-pipe inspection vehicles for 25, 50, 150 mm pipes. In Proceedings 1999 IEEE
International Conference on Robotics and Automation, volume 3, pages 2309-2314.
[Horodinca et al., 2002] Horodinca, M., Doroftei, I., Mignon, E., and Preumont, A.
(2002). A simple architecture for in-pipe inspection robots. In International Collo-
quium on Mobile and Autonomous Systems, Magdeburg.
[Ikuta et al., 1988] Ikuta, K., Tsukamoto, M., and Hirose, S. (1988). Shape memory
alloy servo actuator system with electric resistance feedback and application for active
endoscope. In Robotics and Automation, 1988. Proceedings, 1988 IEEE International
Conference on, pages 427-430, vol. 1.
[Inou et al., 2003] Inou, N., Minami, K., and Koseki, M. (2003). Group robots forming a
mechanical structure. In Proceedings 2003 IEEE International Symposium on Compu-
tational Intelligence in Robotics and Automation, Kobe, Japan.
[Jantapremjit and Austin, 2001] Jantapremjit, P. and Austin, D. (2001). Design of a mod-
ular self-reconfigurable robot. In Australian Conference on Robotics and Automation.
[Jorgensen et al., 2004] Jorgensen, M., Ostergaard, E., and Lund, H. (2004). Modular
ATRON: Modules for a self-reconfigurable robot. In Proceedings of the 2004 IEEE/RSJ
International Conference on Intelligent Robots and Systems, Japan.
[Kamimura et al., 2003] Kamimura, A., Kurokawa, H., Yoshida, E., Tomita, K., Murata,
S., and Kokaji, S. (2003). Automatic locomotion pattern generation for modular robots.
In IEEE International Conference on Robotics and Automation, 2003. Proceedings
ICRA '03, volume 1, pages 714-720.
[Kamimura et al., 2004] Kamimura, A., Kurokawa, H., Yoshida, E., Tomita, K., Kokaji,
S., and Murata, S. (2004). Distributed adaptive locomotion by a modular robotic
system, M-TRAN II. In Proceedings of IEEE/RSJ International Conference on Intelligent
Robots and Systems, pages 2370-2377.
[Kawahara et al., 1999] Kawahara, N., Shibata, T., and Sasaya, T. (1999). In-pipe wireless
microrobot. Proc. SPIE, Microrobotics and Microassembly, 3834:166-171.
[Khatib, 1986] Khatib, O. (1986). Real-time obstacle avoidance for manipulators and
mobile robots. The International Journal of Robotics Research, 5:90-98.
[Kim et al., 2002] Kim, B., Jeong, Y., Lim, H., Kim, T. S., Park, J., Dario, P., Menci-
assi, A., and Choi, H. (2002). Smart colonoscope system. In Proceedings of the 2002
IEEE/RSJ Int. Conf. on Intelligent Robots and Systems.
[Klaassen and Paap, 1999] Klaassen, B. and Paap, K. (1999). GMD-Snake2: a snake-like
robot driven by wheels and a method for motion control. In Proceedings of the 1999
IEEE International Conference on Robotics and Automation, volume 4, pages 3014-3019.
[Konolige et al., 1997] Konolige, K., Myers, K., Ruspini, E., and Saffiotti, A. (1997). The
Saphira architecture: A design for autonomy. Journal of Experimental and Theoretical
Artificial Intelligence, 9:215-235.
[Kosecka and Bajsy, 1993] Kosecka, J. and Bajsy, R. (1993). Discrete event systems for
autonomous mobile agents. In Proceedings of the Intelligent Robotic Systems Conference,
pages 21-31.
[Kotay and Rus, 2005] Kotay, K. and Rus, D. (2005). Efficient locomotion for a self-
reconfiguring robot. In Proc. of IEEE Intl. Conf. on Robotics and Automation,
Barcelona, Spain.
[Kotay et al., 1998] Kotay, K., Rus, D., Vona, M., and McGray, C. (1998). The self-
reconguring robotic molecule. In Proceedings of the 1998 IEEE International Confer-
ence on Robotics and Automation, pages 424431.
[Kristensen, 1997] Kristensen, S. (1997). Sensor planning with bayesian decision theory.
Robotics and Autonomous Systems, 19:273286.
[Krumm et al., 2000] Krumm, J., Harris, S., Meyers, B., Brumitt, B., Hale, M., and
Shafer, S. (2000). Multi-camera multi-person tracking for easy living. In Third IEEE
International Workshop on Visual Surveillance, pages 3–10. IEEE Press, Piscataway,
N.J.
[Kurokawa et al., 2003] Kurokawa, H., Kamimura, A., Yoshida, E., Tomita, K., Kokaji,
S., and Murata, S. (2003). M-TRAN II: metamorphosis from a four-legged walker to a
caterpillar. In Proceedings of the 2003 IEEE/RSJ International Conference on
Intelligent Robots and Systems, volume 3, pages 2454–2459.
[Kurokawa et al., 2005] Kurokawa, H., Tomita, K., Kamimura, A., Yoshida, E., Kokaji,
S., and Murata, S. (2005). Distributed self-reconfiguration control of modular robot
M-TRAN. In IEEE International Conference on Mechatronics and Automation, volume 1,
pages 254–259.
[Ladd et al., 2004] Ladd, A., Bekris, K., Rudys, A. P., Wallach, D., and Kavraki, L.
(2004). On the feasibility of using wireless ethernet for indoor localization. IEEE
Transactions on Robotics and Automation, 20:555–559.
[Leñero, 2004] Leñero, M. (2004). Sistema de control para robot de inspección de tuberías.
Master's thesis, E.T.S.I.I. - UPM.
[Lissmann, 1950] Lissmann, H. (1950). Rectilinear locomotion in a snake (Boa occiden-
talis). J. Exp. Biol., 26:368–379.
[Maeda et al., 1996] Maeda, S., Abe, K., Yamamoto, K., Tohyama, O., and Ito, H. (1996).
Active endoscope with SMA (shape memory alloy) coil springs. In Micro Electro Me-
chanical Systems, 1996, MEMS '96, Proceedings. An Investigation of Micro Structures,
Sensors, Actuators, Machines and Systems. IEEE, The Ninth Annual International
Workshop on, pages 290–295.
[Maes, 1990] Maes, P. (1990). Situated agents can have goals. In Maes, P., editor, De-
signing Autonomous Agents, pages 49–70. MIT Press.
[Mataric, 1994] Mataric, M. J. (1994). Interaction and Intelligent Behavior. PhD thesis,
Massachusetts Institute of Technology (MIT).
[McCarthy, 1958] McCarthy, J. (1958). Programs with common sense. In Proceedings of
the Symposium on the Mechanization of Thought Processes, National Physical Labora-
tory, Teddington, England. H. M. Stationery Office.
[McCarthy, 1960] McCarthy, J. (1960). Recursive functions of symbolic expressions and
their computation by machine, Part I. Commun. ACM, 3(4):184–195.
[Murata and Kurokawa, 2007] Murata, S. and Kurokawa, H. (2007). Self-reconfigurable
robots. Robotics & Automation Magazine, IEEE, 14(1):71–78.
[Murata et al., 1998] Murata, S., Kurokawa, H., Yoshida, E., Tomita, K., and Kokaji, S.
(1998). A 3-D self-reconfigurable structure. In Proceedings of the 1998 IEEE Interna-
tional Conference on Robotics and Automation, pages 432–439.
[Murata et al., 2002] Murata, S., Yoshida, E., Kamimura, A., Kurokawa, H., Tomita,
K., and Kokaji, S. (2002). M-TRAN: self-reconfigurable modular robotic system.
IEEE/ASME Transactions on Mechatronics, volume 7.
[Nishikawa et al., 1999] Nishikawa, H., Sasaya, T., Shibata, T., Kaneko, T., Mitumoto, N.,
Kawakita, S., and Kawahara, N. (1999). In-pipe wireless micro locomotive system. In
Micromechatronics and Human Science, Proceedings of 1999 International Symposium
on, pages 141–147, Japan.
[Orr and Abowd, 2000] Orr, R. and Abowd, G. (2000). The smart floor: A mechanism
for natural user identification and tracking. In Proceedings of the 2000 Conference on
Human Factors in Computing Systems. ACM Press, New York.
[Ostergaard and Lund, 2003] Ostergaard, E. H. and Lund, H. H. (2003). Evolving control
for modular robotic units. In Proceedings of the 2003 IEEE International Symposium on
Computational Intelligence in Robotics and Automation, volume 2, pages 886–892.
[Paperno et al., 2001] Paperno, E., Sasada, I., and Leonovich, E. (2001). A new method
for magnetic position and orientation tracking. IEEE Transactions on Magnetics,
37:1938–1940.
[Peirs et al., 2001] Peirs, J., Reynaerts, D., and Brussel, H. V. (2001). A miniature manip-
ulator for integration in a self-propelling endoscope. Sensors and Actuators A,
92:343–349.
[Pirjanian, 1999] Pirjanian, P. (1999). Behaviour coordination mechanisms. Technical
report, University of Southern California.
[Pirjanian et al., 2001] Pirjanian, P., Huntsberger, T., and Schenker, P. (2001). De-
velopment of CAMPOUT and its further applications to planetary rover operations: A
multirobot control architecture. In Proc. SPIE Sensor Fusion and Decentralized
Control in Robotic Systems.
[Pirjanian et al., 2000] Pirjanian, P., Huntsberger, T. L., Trebi-Ollennu, A., Aghazarian,
H., Das, H., Joshi, S. S., and Schenker, P. S. (2000). CAMPOUT: a control architecture
for multirobot planetary outposts. In Proceedings of SPIE, pages 221–230.
[Priyantha et al., 2000] Priyantha, N., Chakraborty, A., and Balakrishnan, H. (2000). The
Cricket location-support system. In Proceedings of the 6th International Conference on
Mobile Computing and Networking, pages 32–43. ACM Press, New York.
[Raab et al., 1979] Raab, F., Blood, E., Steiner, T., and Jones, H. (1979). Magnetic posi-
tion and orientation tracking system. IEEE Transactions on Aerospace and Electronic
Systems, AES-15(5):709–717.
[Roh and Choi, 2004] Roh, S. and Choi, H. (2004). Differential-drive in-pipe robot for
moving inside urban gas pipelines. IEEE Transactions on Robotics.
[Roh et al., 2008] Roh, S., Choi, H., Lee, J., Kim, D., and Moon, H. (2008). Modularized
in-pipe robot capable of selective navigation inside of pipelines. In Proceedings of the
2008 IEEE International Conference on Intelligent Robots and Systems.
[Rosenblatt, 1995] Rosenblatt, J. K. (1995). DAMN: A distributed architecture for mobile
navigation. In AAAI Spring Symposium on Lessons Learned from Implemented Software
Architectures for Physical Agents, Menlo Park, CA. AAAI Press.
[Rus and Vona, 2000] Rus, D. and Vona, M. (2000). A physical implementation of the
self-reconfiguring crystalline robot. In IEEE International Conference on Robotics and
Automation, pages 1726–1733.
[Salemi et al., 2006] Salemi, B., Moll, M., and Shen, W.-M. (2006). SuperBot: A deploy-
able, multi-functional, and modular self-reconfigurable robotic system. In Intelligent
Robots and Systems, 2006 IEEE/RSJ International Conference on, pages 3636–3641.
[Salemi et al., 2004] Salemi, B., Will, P., and Shen, W.-M. (2004). Autonomous discovery
and functional response to topology change in self-reconfigurable robots. In Intelligent
Robots and Systems, 2004 (IROS 2004), Proceedings of the 2004 IEEE/RSJ International
Conference on, volume 3, pages 2667–2672.
[Santos, 2007] Santos, L. (2007). Diseño y construcción de un micro-robot modular mul-
ticonfigurable. Master's thesis, E.T.S.I.I. - UPM.
[Sato et al., 2002] Sato, M., Fukaya, M., and Iwasaki, T. (2002). Serpentine locomotion
with robotic snakes. Control Systems Magazine, IEEE, 22:64–81.
[Schonherr and Hertzberg, 2002] Schonherr, F. and Hertzberg, J. (2002). The DD&P robot
control architecture (a preliminary report). In Revised Papers from the International
Seminar on Advances in Plan-Based Control of Robotic Agents, pages 249–269, London,
UK. Springer-Verlag.
[Sciavicco and Siciliano, 1996] Sciavicco, L. and Siciliano, B. (1996). Modelling and Con-
trol of Robot Manipulators. McGraw-Hill.
[Serrano et al., 2004] Serrano, O., Cañas, J. M., Matellán, V., and Rodero, L. (2004). Robot
localization using wifi signal without intensity map. In Proceedings of V Workshop de
Agentes Físicos, Universitat de Girona.
[Shen et al., 2000] Shen, W.-M., Lu, Y., and Will, P. (2000). Hormone-based control
for self-reconfigurable robots. In Proceedings of the International Conference on Au-
tonomous Agents, Barcelona, Spain.
[Shen et al., 2002] Shen, W.-M., Salemi, B., and Will, P. (2002). Hormone-inspired adap-
tive communication and distributed control for CONRO self-reconfigurable robots. IEEE
Transactions on Robotics and Automation, 18(5):700–712.
[Shibata et al., 2001] Shibata, T., Sasaya, T., and Kawahara, N. (2001). Development of
in-pipe microrobot using microwave energy transmission. Electronics and Communica-
tions in Japan (Part II: Electronics), 84:1–8.
[Suh et al., 2002] Suh, J., Homans, S., and Yim, M. (2002). Telecubes: mechanical de-
sign of a module for self-reconfigurable robotics. In IEEE International Conference on
Robotics and Automation, volume 4, pages 4095–4101.
[Suzuki et al., 2006] Suzuki, Y., Inou, N., Kimura, H., and Koseki, M. (2006). Reconfig-
urable group robots adaptively transforming a mechanical structure (crawl motion and
adaptive transformation with new algorithms). In Proceedings of the 2006 IEEE/RSJ
International Conference on Intelligent Robots and Systems, pages 2200–2205.
[Suzuki et al., 2007] Suzuki, Y., Inou, N., Kimura, H., and Koseki, M. (2007). Reconfig-
urable group robots adaptively transforming a mechanical structure (numerical expres-
sion of criteria for structural transformation and automatic motion planning method).
In Proceedings of the 2007 IEEE/RSJ International Conference on Intelligent Robots
and Systems, pages 2361–2367.
[Tomita et al., 1999] Tomita, K., Murata, S., Kurokawa, H., Yoshida, E., and Kokaji, S.
(1999). Self-assembly and self-repair method for a distributed mechanical system. IEEE
Transactions on Robotics and Automation, pages 1035–1045.
[Torres, 2006] Torres, J. (2006). Diseño mecatrónico de un micro-robot serpiente modular.
Master's thesis, E.T.S.I.I. - UPM.
[Torres, 2008] Torres, J. (2008). Diseño mecatrónico de un sistema micro-robótico. Mas-
ter's thesis, Universidad Politécnica de Madrid (UPM).
[Unsal and Khosla, 2000] Unsal, C. and Khosla, P. K. (2000). Mechatronic design of
a modular self-reconfigurable robotics system. In IEEE International Conference on
Intelligent Robots and Systems, pages 1742–1747.
[Valdastri et al., 2009] Valdastri, P., Webster, R., Quaglia, C., Quirini, M., Menciassi, A.,
and Dario, P. (2009). A new mechanism for mesoscale legged locomotion in compliant
tubular environments. IEEE Transactions on Robotics, 25:1047–1057.
[Worst and Linnemann, 1996] Worst, R. and Linnemann, R. (1996). Construction and
operation of a snake-like robot. In IEEE International Joint Symposium on Intelligence
and Systems, pages 164–169, Japan.
[Xiao et al., 2004] Xiao, J., Xiao, J., Xi, N., Tummala, R. L., and Mukherjee, R. (2004).
Fuzzy controller for wall-climbing microrobots. IEEE Transactions on Fuzzy Systems,
12(4):466–480.
[Yim, 1994] Yim, M. (1994). New locomotion gaits. In Proceedings of the 1994 IEEE
International Conference on Robotics and Automation, pages 2508–2514.
[Yim et al., 2000] Yim, M., Duff, D., and Roufas, K. (2000). PolyBot: A modular recon-
figurable robot. In Proceedings of the 2000 IEEE International Conference on Robotics
and Automation, pages 514–520.
[Yim et al., 2001] Yim, M., Duff, D., and Roufas, K. (2001). Evolution of PolyBot: A mod-
ular reconfigurable robot. In COE/Super-Mechano-Systems Workshop, Tokyo, Japan.
[Yim et al., 2007] Yim, M., Shen, W.-M., Salemi, B., Rus, D., Moll, M., Lipson, H.,
Klavins, E., and Chirikjian, G. S. (2007). Modular self-reconfigurable robot systems
[grand challenges of robotics]. Robotics & Automation Magazine, IEEE, 14(1):43–52.
[Yoshida et al., 1999] Yoshida, E., Kokaji, S., Murata, S., Kurokawa, H., and Tomita, K.
(1999). Miniaturised self-reconfigurable system using shape memory alloy. In Proceedings
of the 1999 IEEE International Conference on Intelligent Robots and Systems, pages
1579–1585.
[Yoshida et al., 2002] Yoshida, E., Murata, S., Kamimura, A., Kokaji, S., Tomita, K.,
and Kurokawa, H. (2002). Get back in shape! A hardware prototype self-reconfigurable
modular microrobot that uses shape memory alloy. IEEE Robotics and Automation
Magazine, 9(4):54–60.
[Yoshida et al., 2003] Yoshida, E., Murata, S., Kamimura, A., Tomita, K., Kurokawa, H.,
and Kokaji, S. (2003). Evolutionary synthesis of dynamic motion and reconfiguration
process for a modular robot M-TRAN. In Proceedings of the 2003 IEEE International
Symposium on Computational Intelligence in Robotics and Automation, volume 2,
pages 1004–1010.
[Zhang and Zhao, 2005] Zhang, Y. and Zhao, J. (2005). Indoor localization using time
difference of arrival and time-hopping impulse radio. In 2005 IEEE International Sym-
posium on Communications and Information Technology ISCIT, volume 2, pages
964–967.
[Zykov et al., 2007] Zykov, V., Chan, A., and Lipson, H. (2007). Molecubes: An open-
source modular robotics kit. In IROS.
[Zykov et al., 2005] Zykov, V., Mytilinaios, E., Adams, B., and Lipson, H. (2005). Self-
reproducing machines. Nature, 435:163–164.