3D-Position Tracking and Control for All-Terrain Robots
Springer Tracts in Advanced Robotics, Volume 43
Professor Bruno Siciliano, Dipartimento di Informatica e Sistemistica, Università di Napoli Federico II, Via
Claudio 21, 80125 Napoli, Italy, E-mail: siciliano@unina.it
Professor Oussama Khatib, Robotics Laboratory, Department of Computer Science, Stanford University,
Stanford, CA 94305-9010, USA, E-mail: khatib@cs.stanford.edu
Professor Frans Groen, Department of Computer Science, Universiteit van Amsterdam, Kruislaan 403, 1098
SJ Amsterdam, The Netherlands, E-mail: groen@science.uva.nl
Author: Pierre Lamon
DOI 10.1007/978-3-540-78287-2
© 2008 Springer-Verlag Berlin Heidelberg
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is
concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting,
reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or
parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in
its current version, and permission for use must always be obtained from Springer. Violations are liable for
prosecution under the German Copyright Law.
The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply,
even in the absence of a specific statement, that such names are exempt from the relevant protective laws and
regulations and therefore free for general use.
STAR (Springer Tracts in Advanced Robotics) has been promoted under the auspices of EURON (European Robotics Research Network).
Preface

During the first year of my doctoral thesis I went to Carnegie Mellon University
for an internship. The goal was to implement the Mars Autonomy software¹ on
Shrimp, a six-wheeled rover with extended climbing capabilities. At that time,
Shrimp had limited sensing capabilities and was only able to travel autonomously
for about ten meters in a flat and simple environment. The robot couldn’t reach
farther goals because it rapidly lost track of its position. Thus, I decided to focus
my research on position tracking in order to extend the range of autonomous
navigation.
At that time, most of the research work in all-terrain rover navigation assumed
flat environments and addressed 2D localization. Even though 2D localization
is sufficient for many applications, the extension to 3D is necessary in
cluttered environments, where the rover has to climb over the encountered ob-
stacles. In such conditions, accurate 3D position tracking is crucial for both
autonomous navigation and obstacle negotiation. The subject of my thesis and,
a fortiori this book, was clear: 3D-position tracking and control for all-terrain
robots.
Methodology
Autonomous mobile robotics is a fascinating field of research that involves tech-
nical and scientific domains. Thus, the research community working in this field
is composed of people with very different backgrounds. One finds for exam-
ple mathematicians, physicists, computer scientists, engineers and biologists. It
is interesting to note that each researcher has his or her own definition of an
autonomous mobile robot and has an individual way to address a given prob-
lem. However, two main categories of approaches can be distinguished i.e., the
top-down and the bottom-up. One or the other is favored depending on the re-
searcher’s scientific and technical background. The top-down approach consists
in developing a theoretical formulation of the problem and proposing a solution
based on mathematical models. Although such a solution can be analyzed with
respect to, e.g., optimality and mathematical complexity, it does not necessarily
¹ The Mars Autonomy project: http://www.frc.ri.cmu.edu/projects/mars
work for the real application. Indeed, it often occurs that the models do not fully
capture the reality or that they cannot be applied because they make use of un-
known parameters. A failure to apply a model happens when the abstraction
level is too high and when the technical constraints are not fully considered dur-
ing the development phase. On the other hand, the bottom-up approach starts
from a real application and proposes pragmatic solutions to given problems. The
risk with such an approach is to tailor solutions in an incremental way, that is,
to patch the system as problems arise. Such a reactive development favors the
use of heuristics that may limit the system’s performance and reliability.
The methodology used throughout this book is an attempt to reconcile the
top-down and the bottom-up approaches and to avoid their respective traps.
Even though the development was driven by the application, the bottom-up ap-
proach was not particularly favored. Thus, simple but valid heuristics were pro-
posed only when modeling was not applicable. On the other hand, the technical
constraints were considered during the modeling phases to avoid the generation
of inapplicable models. In other words, the methodology used in this book tries
to make the best tradeoff between scientific and technical solutions.
Acknowledgment
This work is an adventure during which I met many different people ready to
help and to act as a source of inspiration. First of all, I am grateful to my advisor,
Roland Siegwart, for convincing me to do a doctoral thesis at the Autonomous
Systems Laboratory. All this has been possible thanks to his positive attitude,
mentorship and support.
During the thesis, I had the chance to spend several months in other labs.
Each time, the experience was very positive and stimulating. The first exchange
was at CMU, where I discovered the world of Linux and autonomy applied to rough
terrain rovers. I would like to thank Reid Simmons for agreeing to supervise my
work, Sanjiv Singh and Dennis Strelow for their help related to visual motion
estimation, and Bart Nabbe and Jianbo Shi for their good advice. The next
two exchanges took place at the Laboratoire d’Analyse et d’Architecture des
Systèmes (LAAS-CNRS). In particular, I would like to thank Simon Lacroix,
Anthony Mallet and Raja Chatila for their help and for having hosted me in
excellent conditions.
Most of the student projects related to this research provided very good re-
sults, which helped to validate the theory through experiments with SOLERO.
I would like to thank Ambroise Krebs for his excellent master's thesis, which en-
abled the development of a new approach to slip minimization in rough terrain.
Also, I am grateful to Stéphane Michaud for the development of the mechanical
structure of SOLERO, Martin Nyffenegger for the nice remote control interface,
Benoît Dagon for the mechanical design of the panoramic vision system and
Gabriel Paciotti for the stereovision rig.
The help of my colleagues was invaluable, and enabled me to develop the var-
ious systems discussed in this book. In particular, I would like to thank Grégoire
Terrien and Michel Lauria for their expert advice related to the mechanical
aspects, Agostino Martinelli for the mathematics, Daniel Burnier, Ralph Piguet
and Gilles Caprari for the electronics and finally Rolf Jordi and Frédéric Pont
for the questions related to informatics. The positive atmosphere in the lab pro-
vided favorable conditions for efficient and constructive work. Special thanks to
Marie-José Pellaud, Nicola Tomatis, Daniel Burnier and my office-mate Gilles
Caprari for their psychological support. I’m grateful to everybody in the lab for
the great time I’ve spent during these four years.
Thanks also to the members of my thesis committee, Simon Lacroix, Paolo
Fiorini and Bertrand Merminod, for their careful reading of the thesis and for
their constructive feedback.
Contents

1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Autonomy in Rough Terrain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 The Open Challenges of Rough Terrain Navigation . . . . . . . . . . . 2
1.2.1 Lack of Prior Information . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2.2 Perception . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2.3 Locomotion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 Research Context and Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.4 Structure of the Book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
3 3D-Odometry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3.1 3D-Odometry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3.1.1 Bogie Displacement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.1.2 3D Displacement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.1.3 Contact Angles Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.2 Experimental Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.3 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
4 Control in Rough-Terrain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
4.1 Quasi-static Model of a Wheeled Rover . . . . . . . . . . . . . . . . . . . . . . 34
4.1.1 Mobility Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
4.1.2 A 3D Static Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
4.2 Torque Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
4.2.1 Wheel Slip Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
4.2.2 Optimization Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
4.2.3 Torque Optimization for SOLERO . . . . . . . . . . . . . . . . . . . . 41
6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
B Linearized Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
B.1 Accelerometers Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
B.2 Gyroscopes State Transition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
1 Introduction
Researchers have worked for years toward making mobile robots move by them-
selves and make their own decisions. Thanks to the efforts of a large research
community and the evolution of technology, fully autonomous robots are today
ready for applications in structured environments. However, their level of au-
tonomy is still limited and the environments in which they are deployed are
often engineered in order to guarantee reliability. Most successful applications
are limited to indoor, office and industrial environments.
Outdoors, the operation of mobile robots is more complex, and a lot of effort
has been deployed recently toward enabling a greater level of autonomy for out-
door vehicles. Such robots find their application in scientific exploration of hostile
environments like deserts, volcanoes, the Antarctic, or other planets. There is also
a high level of interest in such robots for search and rescue operations after natural
or industrial disasters. Two examples of such applications are:
• The NASA project “Life in Atacama” aims to search autonomously for life
in the Atacama desert in Chile. The first results are very promising but
the following extract illustrates the difficulties encountered: “The farthest
Zoë ran autonomously was 3.3 kilometers but on average a traverse would
terminate after just over 200 meters” [2].
• The recent NASA “Mars Exploration Rover” mission (MER) aimed to un-
derstand how past water activity on Mars has influenced the red planet’s
environment over time. Because of the narrow bandwidth of communication
and time delay, the tele-operation of the rovers was a slow process. For safety
reasons, autonomous navigation was enabled only on relatively easy terrain
and for short traverses.
These examples show that human supervision is still required for operating rovers
in rough terrain and that further effort is required in order to enable fully au-
tonomous operation.
1.2.2 Perception
Perception is a difficult task in mobile robotics because sensors are error-prone
and their measurements are uncertain. In comparison with indoor environments,
perception in natural scenes is even more complex and the acquired measure-
ments are more difficult to analyze and interpret. For example, changing lighting
conditions strongly affect the quality of acquired images and the vibrations due
to uneven soils yield noisy signals.
In order to illustrate the problems involved in interpreting data, let us con-
sider scans acquired by a 2D laser range finder with a 360◦ field-of-view. In an
office environment, two scans taken from the same spot but with different head-
ing contain the same information. A simple scan-matching algorithm confirms
that both scans have been taken from the same position. On the other hand, a
small heading change can lead to a large change in the rover’s attitude in rough
terrain. Even if the scans have been taken from the same position they can
be completely different. This example demonstrates the substantial increase of
perception complexity in 3D and shows the importance of choosing appropriate
sensor configurations for outdoor environments.
Another problem of sensing in cluttered terrains lies in the limited field-of-
view. The occlusions caused by the obstacles and terrain slope changes force the
robot to adopt a complex motion strategy: the maneuvers must maximize the
information gain while limiting the risk of being trapped.
1.2.3 Locomotion
Indoors, the environment can be modeled as obstacle and obstacle-free areas and
it is assumed that the robot can move freely within the obstacle-free regions.
1.3 Research Context and Scope
once per sol using a pre-scheduled sequence of specified commands (e.g., turn on
the spot 30◦ , drive forward for 1 meter, etc.). So, having an accurate position esti-
mate during the operation of the rovers was of critical importance [22]. The maxi-
mum range the rovers could reach within a sol was limited by the positioning error
accumulated by dead reckoning. Because the position error grows as a function of
the traveled distance, any method for reducing this error accumulation will con-
siderably increase the traveled distance per sol.
Thus, the intent of this research work is to combine different approaches
to reduce as much as possible the errors of dead reckoning. In particular, we
show how 3D position tracking can be improved by considering rover locomotion
in challenging environments as a holistic problem. Most of the research work
focuses on a particular aspect of position tracking and/or is limited to relatively
flat and smooth terrains. This work proposes to extend position tracking to
environments comprising sharp edges such as steps and to consider not only
sensing but also mechanical aspects (chassis configuration) and motion control
(slip minimization).
In rough terrain, it is crucial to carefully select appropriate sensors and use
their complementarity to produce reliable position estimates. Each sensor has its
own advantages and drawbacks depending on the situation and thus relying on
only one sensor can lead to catastrophic results. Although the choice of sensors is
important, it is not the only aspect to consider. Indeed, the specific locomotion
characteristics of the rover and the way it is driven also have a strong impact on
pose estimation. The sensor signals might not be usable if an unadapted chassis
and controller are used in rough terrain. For example, the signal-to-noise ratio
is poor for an inertial measurement unit mounted on a four-wheel drive rover
with stiff suspensions. Likewise, odometry provides bad estimates if the wheel
controller does not include slip minimization, or if the kinematics of the rover
is not taken into account in the algorithms. The aim of this research work is to
reduce the position error growth by considering all these aspects together.
2 The SOLERO Rover
Fig. 2.1. (a) SOLERO mechanical structure without sensors; (b) prototype equipped with a solar panel, power management board and scientific payload
Fig. 2.4. Parallel mechanisms of SOLERO: (a) front fork kinematics. Because the
instantaneous rotation center is placed below the wheel axis, the fork passively folds
for climbing an obstacle; (b) parallel bogie with its virtual rotation axis placed between
the wheels
that a horizontal force applied toward the wheel center translates into an eleva-
tion motion of the fork. The equations of the wheel center trajectory together
with the fork parameters are presented in Appendix A.1.3
The bogies provide lateral stability. To ensure similarly good ground clearance
and climbing capabilities, their virtual center of rotation is set to the height of
the wheel axis using the parallel configuration shown in Fig. 2.4b.
Finally, the steering of the rover is realized by synchronizing the rotation of the
front and rear wheels and the speed difference of the bogie wheels (skid-steering).
The I²C Modules
The use of an I²C bus allows for the easy integration of various types of sensors
and actuators (up to 127 modules can be connected). Because each module
contains its own processing unit, the computational load on the master processor
is substantially reduced; the main processor can read pre-processed data directly
and send commands to the modules.
EPFL developed various I²C modules implementing interfaces for different
kinds of sensors and actuators, including infrared and ultrasound distance sensors,
linear cameras, accelerometers, inclinometers, GPS, servo controllers and DC
motor controllers. For SOLERO, two new types of I²C modules have been devel-
oped: an absolute angular sensor and an energy management module.
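To make the module concept concrete, the sketch below polls one such module from the master processor. This is a minimal sketch assuming a Linux SMBus interface (the smbus2 Python package); the slave address, register number and 16-bit angle encoding are hypothetical, since the firmware protocol of the EPFL modules is not documented here.

```python
# Minimal sketch of polling an I2C module from the master processor.
# Assumptions: a Linux SMBus interface (smbus2 package); the module
# address, register number and 16-bit angle encoding are hypothetical.
from smbus2 import SMBus

ANGULAR_SENSOR_ADDR = 0x49  # hypothetical slave address
ANGLE_REG = 0x00            # hypothetical register with the pre-processed angle

def read_angle(bus: SMBus) -> float:
    """Read a pre-processed absolute angle (in degrees) from the module."""
    raw = bus.read_word_data(ANGULAR_SENSOR_ADDR, ANGLE_REG)
    return raw * 360.0 / 65536.0  # assumed 16-bit full-circle encoding

with SMBus(1) as bus:           # bus number depends on the host computer
    print(f"fork angle: {read_angle(bus):.2f} deg")
```

Because each module pre-processes its own data, the master only performs such short reads, which keeps the bus traffic and the load on the main processor low.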
[Figure: SOLERO computer and communication architecture. The central onboard computer (soleropc104) communicates with the sensors over the IEEE1394 bus, serial ports and the I²C bus, and with the ground station over wireless Ethernet. I²C slave modules and addresses: front wheel (0x40), rear wheel (0x41), right front (0x42), right rear (0x43), left front (0x44), left rear (0x45), front steering (0x46), rear steering (0x47), stereo servo (0x48), energy board (0x03).]
Fig. 2.6. Sensors, actuators and electronics of SOLERO: (a) steering servo mechanism; the same design is used for the rear wheel; (b) passively articulated bogie and spring-suspended front fork (equipped with absolute angular sensors); (c) 6 motorized wheels (DC motors); (d) omnidirectional vision system; (e) stereovision module, which can be oriented around the tilt axis; (f) laptop (solerovaio); (g) micro-computer (soleropc104); (h) energy management board; (i) batteries (NiMH 7000 mAh); (j) I²C modules (motor controllers, angular sensor module, servo controllers, etc.); (k) inertial measurement unit
Angular Sensor
In order to know the state of the chassis, the angular positions of the fork and the
bogies have to be measured. The angle of a pin joint is obtained by measuring
the direction of the field of a magnet fixed to the rotating axle. The magnetic
field direction is measured by means of a magneto-resistive bridge that is fixed to
the main body. This contactless sensing mechanism is robust against drift, com-
pact, and provides angle measurements with a precision of less than 0.2◦ . This
accuracy is much higher than that obtained using a potentiometer-based ap-
proach. Furthermore, our solution provides absolute angles and does not require
initialization each time the system is started (initialization would be required if
a relative encoder were to be used). The components used to measure the front
fork angle are depicted in Fig. 2.7.
Fig. 2.7. Angular sensor for the front fork: (a) magneto-resistive bridge; (b) communication and power bus; (c) linearization chip; (d) magnet; (e) magnet holder (fixed to the axle); (f) lower arm of the front fork
• Currents and voltages of all the sources and consumers are measured and
monitored.
• The battery voltage is monitored and the board warns the user by means of
a blinking LED and an acoustic signal. Below a certain voltage, the system is
turned off automatically in order to protect the battery and the system.
• The battery status can be read from a seven-segment display.
• All the functions of the board can be accessed through I²C commands, i.e.,
switch on/off, battery status, currents and voltages, etc.
Stereovision
The stereovision rig is a MegaD module from VidereDesign, which can acquire
grayscale images up to 1280 × 960 pixels at 15 Hz. The combination of 2/3”
CMOS chips and lenses with 4.8 mm focal length results in a field-of-view of
85◦ × 69◦ . The rig is mounted on top of a mast and can be oriented around the
tilt axis (Fig. 2.9). This allows the rig to keep ground features in the field-of-view
of the cameras even if the rover is tilted upward or downward. In order to keep
the center of gravity as low as possible, the motor is mounted close to the rover’s
main body. The rotational motion is transmitted to the stereovision module by
means of a traction pole.
A lot of effort has been deployed to design a system with high stiffness and
low mechanical play between the parts. This is important because the relative
position between the camera coordinate system and the rover coordinate system
has to be known precisely. Inaccuracies in the estimation of this transformation
can lead to significant localization errors.
Omnicam
The omnicam depicted in Fig. 2.10 has been especially designed for SOLERO. Its
main components are a camera and a panoramic mirror. A transparent cylinder
Fig. 2.8. The energy management board and the rear panel: (a) battery connector; (b) external charger connector; (c) external power supply connector; (d) I²C bus connector; (e) external display connector; (f) regulated 5–12 V supplies; (g) source switch; (h) battery status display; (i) system state display; (j) low battery warning buzzer; (k) main system switch; (l) rear panel connector (used for both external power supply and charger)
Fig. 2.9. Stereovision support with tilt mechanism: (a) global view; (b) enlarged view
of the motor transmission. The motor is equipped with an optical encoder, used for
measuring the tilt angle.
Fig. 2.10. Omnicam hardware with a transparent cylinder for dust protection

Fig. 2.11. Equiangular mirror and vision system parameters (focal point F; angle θ of the ray; distances r and r₀ from F to the mirror)
is used to hold the mirror and to protect the vision system against dust. The
camera has been chosen for its compactness and low-power consumption. These
properties make it suitable for mobile applications. Grayscale and color images
up to 640 × 480 pixels can be acquired through firewire at a rate of 20 Hz. The
characteristic equation of the mirror is given by
$$\cos\left(\frac{1+\alpha}{2}\,\theta\right) = \left(\frac{r}{r_0}\right)^{-\frac{1+\alpha}{2}} \tag{2.1}$$
with r0 the distance from the focal point F to the mirror, α the intrinsic param-
eter of the mirror, θ the angle of the ray, and r the distance from F to the mirror
along the ray (see Fig. 2.11). Such a mirror belongs to the family of equiangular
mirrors: each image pixel covers the same solid radial angle. Thus, when moving
radially in the image, the shape of an image feature is less distorted than it
would be by using other mirror shapes (such as spherical or parabolic). This
facilitates feature tracking and data association between two consecutive images.
More information about this kind of mirror can be found in [21, 52].
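To show how (2.1) is used in practice, the sketch below solves it for the mirror radius along a ray, r = r₀ · cos((1+α)θ/2)^(−2/(1+α)). The parameter values r₀ and α are illustrative assumptions, not those of the SOLERO mirror.

```python
# Sketch of evaluating the equiangular mirror profile from (2.1):
# solving cos((1+alpha)/2 * theta) = (r/r0)^(-(1+alpha)/2) for r.
# r0 and alpha below are illustrative values, not SOLERO's.
import numpy as np

def mirror_radius(theta, r0=0.1, alpha=11.0):
    """Distance from the focal point F to the mirror along a ray at angle theta."""
    c = np.cos(0.5 * (1.0 + alpha) * theta)
    return r0 * c ** (-2.0 / (1.0 + alpha))

# Equiangular property: equal steps in theta map to equal radial pixel steps,
# so feature shapes distort little when moving radially in the image.
print(mirror_radius(np.linspace(0.0, np.deg2rad(14.0), 5)))
```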
Fig. 2.12. The main window (left) and the data browser (right): (a) data replay slider.
By manipulating this slider the scene is updated with the corresponding robot state;
(b) 3D rendering area; (c) variable selection lists one and two; (d) plot areas one and
two; (e) data browser slider
Fig. 2.13. The remote control interface for SOLERO: (a) robot's position and attitude; (b) text box for warning messages, e.g., the pitch angle exceeds a predefined value; (c) 3D representation of the robot state and 3D cloud of stereo points. The operator can interactively change the perspective; (d) rover control area. The motion commands are given by clicking and moving the mouse cursor. The robot can also be driven with a joystick; (e) first image area. The user can select the left, right, stereo or omnicam image. All the imagers' settings can be modified; (f) second image area; (g) panoramic view area.
acquisition, it performs image processing and sends the results to the onboard
computer for sensor fusion. The two remaining modules, Solero3D and SoleroGUI,
are described in more detail in the following sections.
Solero3D
This program has been developed for visualizing and logging data acquired by the
robot during an experiment. Its main use is for testing and debugging algorithms
offline. Figure 2.12 shows the main window (left) and the data browser (right).
The main window integrates a full 3D rendering area, allowing the user to
change views. By manipulating a slider, the set of data stored during an experi-
ment can be replayed step by step. All the variables such as the robot position,
the internal links’ angles and the attitude angles are plotted in the data browser.
This module is an important tool to test the system, debug and compare algo-
rithm performance.
SoleroGUI
The remote control interface displays the images taken onboard the rover together with a 3D view
of the current rover state. Furthermore, stereoscopic information is displayed
in the 3D scene, allowing the operator to have a better understanding of the
environment in front of the rover and thus properly avoid hazardous areas. A
panoramic image is displayed at the bottom of the main window. It provides a
wide field-of-view and helps the operator to navigate through the scene. Finally,
all parameters of the imagers are accessible through dialog boxes.
Other features of the GUI (Graphical User Interface) are listed here:
• Warning messages such as “low battery” and “dangerous rover posture” are
printed on the screen in order to avoid critical situations, which could damage
the rover.
• The operator can control the rover with a game pad or the mouse. Smooth
trajectories are generated using nonlinear optimization accounting for both
maximal wheel acceleration and speed.
• Two operation modes are available. The first one is called “coordinated” and
allows the robot to drive on any circular arc. The second mode is called “non-
coordinated” and only straight-line and point-turning motions are allowed.
• A watchdog timer is implemented to detect communication problems. The
GUI sends a signal every second and the rover stops in case the signal is not
received (a minimal sketch of this mechanism follows the list).
• The GUI uses cross-platform libraries and can be compiled and run on
different operating systems.
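The watchdog idea mentioned above is simple enough to sketch. The transport (UDP), the timeout value and the stop_all_wheels() call are assumptions for illustration; the actual implementation on SOLERO may differ.

```python
# Minimal sketch of the heartbeat watchdog described in the list above.
# Assumptions: heartbeats arrive over UDP once per second; the timeout
# value and stop_all_wheels() are hypothetical.
import socket

TIMEOUT_S = 2.0  # assumed: stop after two missed heartbeats

def rover_watchdog(port: int = 4242) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))
    sock.settimeout(TIMEOUT_S)
    while True:
        try:
            sock.recv(16)      # heartbeat received: keep driving
        except socket.timeout:
            stop_all_wheels()  # hypothetical call to the motor controllers
```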
2.3 Summary
Because no generic hardware setup exists for such rovers, a lot of effort had to
be deployed to make SOLERO a well-performing platform for research. Further-
more, a set of powerful tools has been developed for speeding up the process of
debugging the algorithms and analyzing the data stored during the experiments.
The modularity and portability of the system allows easy adaptation of new ac-
tuators and sensors. For example, a GPS can be easily added to the I2 C bus and
more cameras attached to the firewire or USB buses. The possibility to access
all the schematics and firmware of the I²C sensors allows low-level control of
the data transmission timing and thus improves the reactivity of the system.
Off-the-shelf components are rarely well documented, especially concerning the
time-stamping of data, which is of high importance in robotics. The energetic
autonomy of the system running on batteries depends on the intensity and du-
ration of the driving phases. On average, the autonomy is around three hours,
allowing long sessions of experiments. This autonomy can be doubled by
replacing the NiMH batteries with LiPo batteries, while keeping the same weight.
3 3D-Odometry
Until recently, autonomous mobile robots were mostly designed to run in indoor
environments that are partly structured and flat. Many problems arise in rough
terrain and position tracking is more difficult. Although odometry is widely
used indoors (2D), its application is limited in natural environments (3D). The
wheels are more likely to slip because of the rough structure of the soil, and the
error in the position estimation can grow quickly. For these reasons, odometry is
generally avoided in challenging terrains. However, we can look at the problem
differently and ask: “How could we limit wheel slip in rough terrain?”
Two different aspects can be improved to minimize wheel slip. The first one is
to develop a controller that guarantees good balance of wheel torques and speeds
to optimize the robot’s motion. A torque controller minimizing slip and maxi-
mizing traction is presented in Chapter 4. The second aspect is to improve the
mechanical structure of the robot. Indeed, a well-designed mechanical structure
yields a smooth trajectory and thus enables limited wheel slip. As described in
Chapter 2, SOLERO can adapt passively to a large range of obstacles and al-
lows limited wheel slip in comparison with rigid structures such as four-wheel
drive rovers. Thus, the odometric information is usable even in rough terrain.
A new technique called 3D-Odometry, which provides 3D motion estimates of
SOLERO, is presented in this chapter.
3.1 3D-Odometry
Odometry is widely used to track the position (x, y) and the heading ψ of a
robot in a plane π [19]. The vector [x y ψ]^T_π is updated by integrating small
motion increments sensed by the wheel encoders between two time-steps. When
combined with the robot's attitude angles φ (roll) and θ (pitch), this technique
allows the estimation of the six degrees of freedom in a global coordinate system,
i.e., [x y z φ θ ψ]^T_W. The orientation of the plane π on which the robot moves
is determined using an inclinometer and the motion increment is obtained by
projecting the robot displacements measured in the plane π into the global co-
ordinate system. This method, used in [40], is referred to later as the standard
method. The approach works well only if the environment does not comprise too
many slope discontinuities. Indeed, the pose estimation error grows quickly dur-
ing transitions because it is assumed that the rover is locally moving in a plane.
In rough terrain, this assumption is not appropriate and the transition problem
must be addressed properly. Odometry based on vehicle kinematics and wheel
encoders is addressed in [34, 9]. However, the presented results focus mainly on
straight trajectories and smooth terrains. In this work, we present an approach
that deals with sharp obstacles and full 3D trajectories.
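For reference, the standard method described above reduces to a short update rule. A minimal sketch, assuming a Z-Y-X Euler convention for the attitude (the book does not state the exact convention here):

```python
# Sketch of the "standard" odometry update described above: the planar
# encoder increment d is rotated into the world frame using the attitude
# (roll phi, pitch theta) from the inclinometer and the integrated heading
# psi. The Z-Y-X Euler convention used here is an assumption.
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def standard_update(pose, d, dpsi, phi, theta):
    """pose = [x, y, z, psi]; d, dpsi = encoder increments for one time-step."""
    x, y, z, psi = pose
    psi += dpsi
    step = rot_z(psi) @ rot_y(theta) @ rot_x(phi) @ np.array([d, 0.0, 0.0])
    return [x + step[0], y + step[1], z + step[2], psi]
```

Because this update assumes the rover moves locally in a plane, its error grows quickly at slope transitions, which is exactly what 3D-Odometry addresses.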
The 3D-Odometry computation is divided into two steps: a) the computation
of the left and right sides of the robot (Sect. 3.1.1), and b) the computation of the
resulting 3D displacement of the robot’s center (Sect. 3.1.2). The main reference
frames and variables used for 3D-Odometry are defined in Fig. 3.1.
[Fig. 3.1. The main reference frames and variables used for 3D-Odometry: world frame (X_w, Y_w, Z_w), robot frame (X_r, Y_r, Z_r) with origin O, and bogie frame o_b (x_b, z_b); Δ and η denote the displacement of a bogie and its direction.]
These equations can be solved for φw and ρw , which are the wheel-ground
contact angles. However, this equation system can be inconsistent in some patho-
logical cases. For example, if ε equals zero then ER must be equal to EF because
the distance between the wheels remains constant. In practice, ER and EF can
be slightly different because the wheels are subject to slip. When the equation
Fig. 3.2. Displacement of B between state t and t + 1. The final position of the
rear/front wheel is on a circle of radius ER/EF centered at R/F , respectively.
system is inconsistent, we set the bogie displacement as being equal to the average
of the displacements of the two wheels. In the general case, valid solutions
are found and the sine theorem is applied in the RR″R′ triangle to compute Δx′
and Δz′, which are the coordinates of the displacement of B expressed in the
bogie's coordinate system O_b xz:

$$\Delta x' = t_1 + \frac{h_b}{2} - t_2\cos\epsilon, \qquad \Delta z' = -\frac{h_b}{2} - t_2\sin\epsilon$$

with

$$t_1 = \frac{E_R\,|\sin(\pi - \rho_w - \epsilon)|}{|\sin\epsilon|} - \frac{h_b}{2} \quad\text{and}\quad t_2 = \frac{E_R\,|\sin\rho_w|}{|\sin\epsilon|}$$
Fig. 3.3 defines the parameters for computing the displacement of L consid-
ering the displacement of B and the mechanical structure of the bogie. The
effective bogie angle change between state t and t + 1 is given by
$$\epsilon = \theta_2 + \phi_2 - (\theta_1 + \phi_1) \tag{3.2}$$
Fig. 3.3. Bogie’s configuration change between two time steps. The distance between
the states has been artificially increased for the purpose of clarity.
$$c_x = -k\,(\sin\phi_2 - \sin\phi_1), \qquad c_z = k\,(\cos\phi_2 - \cos\phi_1) \tag{3.3}$$

$$\Delta x = \Delta x' + c_x \tag{3.4}$$

$$\Delta z = \Delta z' + c_z \tag{3.5}$$
Finally, the norm of the displacement Δ and the motion angle η expressed in
the robot's frame are given by

$$\Delta = \sqrt{\Delta x^2 + \Delta z^2}, \qquad \eta = \phi_1 + \mu$$

where μ is defined as

$$\mu = -\arctan\frac{\Delta z}{\Delta x} \tag{3.6}$$
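The computation above transcribes directly into code. A minimal sketch, assuming sin ε ≠ 0 (in the degenerate case the bogie displacement falls back to the average of the two wheel displacements, as described earlier):

```python
# Direct transcription of (3.2)-(3.6), assuming sin(eps) != 0 (otherwise the
# bogie displacement is set to the average of the two wheel displacements,
# as described in the text).
import numpy as np

def bogie_displacement(E_R, rho_w, eps, h_b, k, phi1, phi2):
    """Return the norm Delta and direction eta of one bogie's displacement."""
    t1 = E_R * abs(np.sin(np.pi - rho_w - eps)) / abs(np.sin(eps)) - h_b / 2.0
    t2 = E_R * abs(np.sin(rho_w)) / abs(np.sin(eps))
    dx_p = t1 + h_b / 2.0 - t2 * np.cos(eps)   # Delta x' in the bogie frame
    dz_p = -h_b / 2.0 - t2 * np.sin(eps)       # Delta z' in the bogie frame
    c_x = -k * (np.sin(phi2) - np.sin(phi1))   # (3.3): correction for point L
    c_z = k * (np.cos(phi2) - np.cos(phi1))
    dx, dz = dx_p + c_x, dz_p + c_z            # (3.4), (3.5)
    delta = np.hypot(dx, dz)
    mu = -np.arctan2(dz, dx)                   # (3.6)
    return delta, phi1 + mu                    # Delta and eta
```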
3.1.2 3D Displacement
The previous section described the computation of the norm Δ and the direction
of motion η of one bogie. The aim of this section is to derive the equations
for computing the 3D displacement of the robot’s center O using the left and
right bogie translations. The main schematic for the 3D-Odometry is depicted in
Fig. 3.4. In the latter, the subscripts l and r are used to denote variables related
to the left and right bogies, respectively. The angles η_r and η_l define the tilt of
the planes π_r and π_l containing C′ and L′. C′ and L′ are situated on circles
centered at C and L with radii Δ_r and Δ_l, respectively. These constraints lead
to the following equations:

$$\vec{n}_r \cdot \overrightarrow{CC'} = 0 \tag{3.7}$$

$$\vec{n}_l \cdot \overrightarrow{LL'} = 0 \tag{3.8}$$

$$\Delta_r^2 = x_r^2 + \left(y_r + \frac{W_b}{2}\right)^2 + z_r^2 \tag{3.9}$$

$$\Delta_l^2 = x_l^2 + \left(y_l - \frac{W_b}{2}\right)^2 + z_l^2 \tag{3.10}$$
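Numerically, (3.7)–(3.10) together with (3.11) below and a constant-wheelbase condition form a system of six equations in the six coordinates of C′ and L′. A sketch using a generic root finder; the explicit wheelbase residual is an assumption standing in for the constraint equation that falls outside this excerpt:

```python
# Sketch of solving (3.7)-(3.11) for the endpoints C' and L' with a generic
# root finder. The sixth residual (constant wheelbase |L'C'| = W_b) is an
# assumption standing in for the constraint equation missing from this excerpt.
import numpy as np
from scipy.optimize import fsolve

def solve_displacement(n_r, n_l, delta_r, delta_l, W_b):
    C = np.array([0.0, -W_b / 2.0, 0.0])   # right bogie joint at time t
    L = np.array([0.0, +W_b / 2.0, 0.0])   # left bogie joint at time t

    def residuals(u):
        Cp, Lp = u[:3], u[3:]
        return [
            np.dot(n_r, Cp - C),                              # (3.7)
            np.dot(n_l, Lp - L),                              # (3.8)
            np.sum((Cp - C) ** 2) - delta_r ** 2,             # (3.9)
            np.sum((Lp - L) ** 2) - delta_l ** 2,             # (3.10)
            np.sum(Cp ** 2) - (W_b / 2) ** 2 - delta_r ** 2,  # (3.11), delta_r < delta_l
            np.linalg.norm(Lp - Cp) - W_b,                    # assumed wheelbase constraint
        ]

    u0 = np.concatenate([C + np.array([delta_r, 0, 0]),
                         L + np.array([delta_l, 0, 0])])
    return fsolve(residuals, u0)  # returns the stacked coordinates of C' and L'
```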
The infinitesimal displacements of the left and right sides of the robot mainly
occur in the bogie planes. When the robot is turning, however, the norm of the
displacement of one side is larger than the other and that forces a fraction of the
motion to occur out of the bogie planes (along Yr ). Because of the nonholonomic
constraint, this displacement cannot be measured directly. Thus, we assume that
the smallest displacement among Δr and Δl takes place in the corresponding bogie
plane, giving the other side an additional degree of freedom along Y_r. In the
[Fig. 3.4. Main schematic of the 3D-Odometry: the displacements of the left and right sides, L′(x_l, y_l, z_l) and C′(x_r, y_r, z_r), with norms Δ_l and Δ_r, lie in the tilted planes π_l and π_r (normals n_l, n_r, tilt angles η_l, η_r); O′(x_o, y_o, z_o) is the new position of the robot's center and W_b the wheelbase.]
example of Fig. 3.4, Δ_r is smaller than Δ_l. Therefore the right displacement vector
is constrained to remain in the bogie plane π_b. This constraint is expressed by

$$\left(\frac{W_b}{2}\right)^2 + \Delta_r^2 = x_r^2 + y_r^2 + z_r^2 \tag{3.11}$$
In case Δ_r is bigger than Δ_l, one substitutes the subscript r with l in (3.11). Because
the wheelbase W_b remains constant, one can write the additional constraint
3.2 Experimental Results

The experimental setup allowed us to determine the true trajectory of the rover
using its kinematic model and the geometric dimensions of the obstacles. The
rover was remotely controlled several times over the obstacles and the trajectories
were computed online with both 3D-Odometry and the standard method. For each
run, we measured the rover's final position and computed the absolute and relative
errors. We tested the system for different obstacle configurations such as those
depicted in Figs. 3.6, 3.7 and 3.9. The obstacles are made of wood and the ground
is a mixture of concrete and small stones.
Fig. 3.5. The Shrimp breadboard. This robot is a small-scale version of SOLERO.
For all experiments, the position estimation error of the standard method
grows quickly. This is due to the fact that the method does not account for the
kinematic model of the robot and only considers the attitude of the main body.
Whereas the consequences of this approximation are less relevant for smooth
terrains (Fig. 3.6), they become disastrous while climbing sharp-shaped obstacles
(Fig. 3.7).
On the other hand, the 3D-Odometry estimates the angle ε (3.2), which corre-
sponds to the actual motion direction of the bogie center B. This angle, together
with the wheel encoders’ data allows computation of the wheel-ground contact
angles and the norm and direction of motion of a bogie (point L or C). This di-
rection corresponds neither to the pitch of the main body nor to the mechanical
angle of the bogie: it is the actual direction of motion of the point L or C in
the global coordinate system. Thus, the motion increment Δz (3.5) is computed
and used to correct the trajectory along zr during the slope transitions. The
corrections corresponding to the first experiment are plotted in Fig. 3.8. They
are positive when the first wheel starts climbing the obstacle, zero when there is
no slope change and become negative when the bogie finally reaches the top of
the obstacle. The slope transitions are well detected and this allows correction of
Fig. 3.6. Steep slope experimental setup. The robot starts in front of the obstacle,
climbs a 35◦ slope and stops on top after traveling a distance of 870 mm along the x
direction. The height of the obstacle is 175 mm. The slope changes are relatively smooth
for this experiment.
the z-coordinate to thus better track the robot’s position. The position estima-
tions computed by 3D-Odometry are more accurate and the numerical figures
presented in Tables 3.1 and 3.2 confirm these qualitative results.
In order to correctly analyze the error sources of the 3D-Odometry, the main
mechanical parameters of the robot were calibrated before the experiments. That
way, the remaining errors are only due to nonsystematic errors and errors in-
troduced by our approach. The wheel diameters and the wheelbase of the robot
have been calibrated using [19]. The pitch offset of the IMU has been estimated
using the trajectory computed by the standard approach on flat ground. After
the robot completes a full loop, the final height shall be equal to the initial
height. The pitch offset of the IMU, Oimu , is obtained using
$$O_{imu} = \arcsin(\Delta h / s) \tag{3.15}$$
where Δh is the error accumulated along the z axis and s the total path length.
After the calibration of the offsets, the remaining errors related to the 3D-
Odometry shall be due to nonsystematic errors and approximation of the
algorithm. The first source of error we might think about is wheel slip. In case
of slip the calculated distance is bigger than the measured one and the results
presented in Tables 3.1 and 3.2 can be interpreted that way. However, wheel slip
is not the biggest source of error in these experiments. The errors are mainly
along the z direction and they are due to nonlinearities, variation of the wheel
Fig. 3.7. Sharp edges experimental setup. The robot goes over a 300 by 70 mm ob-
stacle and stops after 1 m travel. Even with sharp slope transitions, the 3D-Odometry
trajectory accurately follows the obstacle shape.
Fig. 3.8. z-coordinate correction for the first experiment. For clarity, both Pitch and
Bogie Angle curves have been scaled by a factor of 12 and 8, respectively. The time-
step between two samples is about 60 ms (computation time for the 3D-Odometry).
Table 3.1. Relative error for the steep slope experiment (870 mm)
Table 3.2. Relative error for the sharp edges experiment (1000 mm)
diameters, and inaccuracy in the mechanical dimension. For the steep slope
experiment, the final averaged error of 11 mm can be explained by a remaining
angular offset. Indeed, an offset of 1◦ leads to an error of around 15 mm in the
z direction for an 870-mm horizontal motion. For the sharp edges experiment,
these errors canceled out because of the symmetry of the obstacle.
Fig. 3.9. Full 3D experiment performed with the Shrimp. Only the right bogie wheels climbed obstacle (a). Then, the rover was driven over obstacle (b) (with an incident angle of approximately 20◦).
The experimental setup used to test the full 3D capability of the 3D-Odometry
is depicted in Fig. 3.9. As in the first two experiments, the real robot was
used. This time, the rover followed a sequence of preprogrammed commands
consisting of a) going straight for 1 m, b) turning to the right by 70◦, c) going
straight for 1 m. Only the right bogie wheels climbed the first obstacle (a) whereas
the other wheels kept ground contact. Then, the rover was driven over the second
obstacle (b) with an incident angle of approximately 20◦. The value of such
an experiment is that it forced the chassis to adopt asymmetric configurations
and allowed us to test the full 3D-capability of the 3D-Odometry. The true
final position and orientation of the rover was hand-measured and compared
with the computed final position. The average error at the goal was only x =
0.02 m, y = 0.02 m, z = 0.005 m, ψ = 3◦ for a total path length of around 2 m.
This corresponds to a relative error of 1.4, 2, 2.8 and 4%, respectively.
3.3 Summary
This chapter described a new approach called 3D-Odometry that estimates the
3D displacement of a rover in rough terrain. The sensory inputs are the wheel
encoders of the four bogie wheels, the bogie angular sensors and an inclinometer.
It has been shown that this approach performs better than the standard method
used traditionally. The position estimation is significantly improved when the
rover overcomes sharp-shaped obstacles because the method accounts for the
kinematic model of the rover.
Thanks to its nonhyperstatic mechanical structure, SOLERO overcomes obsta-
cles in a smooth manner, with limited wheel slip. When applying 3D-Odometry,
such a design enables the use of odometry as a means to estimate the rover motion
in rough terrain. Moreover, the accuracy of odometry can be further improved
using a controller minimizing wheel slip. The description of such a controller is
presented in the next chapter.
4 Control in Rough-Terrain
mobility of the bogies and the fork is one and the final mobility can be calculated
using (4.1) to produce

$$MO = 6 \times 18 - 5 \times 14 - 4 \times 1 - 3 \times 7 - 2 \times 6 = 1 \tag{4.2}$$
Model of SOLERO
SOLERO has 18 parts and is characterized by 6×18 = 108 independent equations
describing the static equilibrium of each part and involving 14 external ground
forces, 6 internal wheel torques and 93 internal forces and torques for a total of
113 unknowns. The weight of the fork and the bogies’ link has been neglected,
whereas the weight of the main body and the wheels is considered. Of course,
it is possible to reduce this set of independent equations because we have no
interest in implicitly calculating the internal forces of the system. The variables
of interest are the three ground contact forces on the front and the rear wheels,
the two ground contact forces on each wheel of the bogies and the six wheel
torques. This makes 20 unknowns of interest and the system can be reduced to
20 − (113 − 108) = 15 equations. This leads to the following matrix equation

$$M\,U = R$$

where M is the model matrix depending on the geometric parameters and the
state of the robot, U a vector containing the unknowns and R a constant vector.
The details of the model together with the mechanical parameters are described
in Appendix A.
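Since the reduced system has 15 equations for 20 unknowns, the force/torque distribution retains degrees of freedom that the optimization exploits. A sketch of characterizing this solution space numerically; M and r stand in for the model of Appendix A.

```python
# Sketch of characterizing the solutions of the reduced model M u = r
# (15 equations, 20 unknowns): every solution is a particular solution plus
# a null-space component, and the optimizer picks the combination that
# minimizes slip. M and r are placeholders for the model of Appendix A.
import numpy as np

def solution_space(M, r):
    """Return (u_p, N) such that u = u_p + N @ w for any free vector w."""
    u_p, *_ = np.linalg.lstsq(M, r, rcond=None)  # particular solution
    _, s, Vt = np.linalg.svd(M)
    rank = int(np.count_nonzero(s > 1e-10))
    N = Vt[rank:].T                              # basis of the null space
    return u_p, N
```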
minimizing slip. Other functions, such as energy, can be used instead. In this
chapter, we focus on slip minimization and this section describes the concepts
and the optimization algorithms.
The wheel is balanced if the friction force fulfils (4.5). This case represents static
friction. If the static friction force cannot balance the system, the wheel slips and
the friction force is set by (4.6).

$$F_{static} \le \mu_0 N \tag{4.5} \qquad\qquad F_{dynamic} = \mu N \tag{4.6}$$
In order to avoid wheel slip, the friction force that depends directly on the
motor torque M should satisfy
$$T = \frac{M}{R} \le \mu_0 N \tag{4.7}$$
The above equations suggest that there are two ways to reduce wheel slip.
First, assume that μ0 is known and set
$$T \le \mu_0 N \tag{4.8}$$
$$m_1 = c_1\, m_2 + c_2\, m_3 + \delta \tag{4.11}$$

$$N_i = \alpha_{1i}\, m_2 + \beta_{1i}\, m_3 + \gamma_{1i} \tag{4.12}$$

$$T_i = \alpha_{2i}\, m_2 + \beta_{2i}\, m_3 + \gamma_{2i} \tag{4.13}$$

with $i = 1 \ldots 3$
Fig. 4.3. The ThreeWheels 2D model. This rover belongs to the passively suspended
robots family. m4 is an uncontrolled torque generated by a torsion spring with known
characteristics.
Fig. 4.4. Function f to minimize, for the ThreeWheels rover in a given state. The op-
timal solution (circled ), minimizing slip and fulfilling the physical constraints, is found
using numerical optimization. The optimal solution corresponds to the intersection of
μ1 , μ2 and μ3 .
Fig. 4.5. Optimization algorithm. The execution times for the algorithms A, B and
C are 6, 5 and 20 ms, respectively (on a 1.5 GHz processor). The worst case is about
31 ms but most cases (71%) are handled by A, which takes only 6 ms.
A. Fixed-point Optimization
This optimization method is based on the fixed-point algorithm. The aim of this
algorithm is to numerically find the intersection of functions when an analytical
solution is difficult or impossible to obtain. In our case, the optimal solution
corresponds to the intersection of μ1 , μ2 and μ3 . The corresponding flow chart
of the algorithm is presented in Fig. 4.6.
This algorithm is computationally light and provides good results for most cases.
Nevertheless, it diverges sometimes and does not account for the aforementioned
constraints. This can lead to torques that cannot be produced by the motors.
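A sketch of the iteration of Fig. 4.6; solve_model() is an assumed helper that evaluates the quasi-static model for a set of torques and returns the per-wheel normal and traction forces:

```python
# Sketch of the fixed-point iteration of Fig. 4.6. solve_model(m) is an
# assumed helper that solves the quasi-static model for torques m and
# returns the arrays (N, T) of normal and traction forces per wheel.
import numpy as np

def fixed_point_torques(m0, solve_model, wheel_radius, iters=20):
    m = np.asarray(m0, dtype=float)        # (1) initial set of torques
    for _ in range(iters):
        N, T = solve_model(m)              # (2) solve the quasi-static model
        mu_bar = np.mean(T / N)            # (3) average of mu_1..mu_3
        m = mu_bar * N * wheel_radius      # (4) torques realizing the average
    return m                               # may diverge; constraints unchecked
```

As noted above, the iteration is cheap but unguarded: it can diverge and ignores the torque bounds, which is why the simplex and gradient stages exist.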
Fig. 4.6. Fixed-point based algorithm. The quasi-static model (2) is solved with an
initial set of torques (1). Then the average of μ1 , μ2 and μ3 is computed by (3) (the
forces provided by (2) are used to compute μ1 , μ2 and μ3 ). The torques corresponding
to this average are computed in (4) and fed back into block (2). Twenty iterations are
generally sufficient for convergence.
B. Simplex Method
This method is based on the simplex algorithm which solves linear programs in
a constrained solution space. The simplex method tries to maximize an objective
function considering a set of constraints on the variables. In our case, the algorithm
is only used to provide a valid initial solution, and thus, many objective functions
can be used. However, in order to get closer to the final optimal solution, we choose
the function h defined in (4.15), which tends to minimize the ratio $T_i/N_i$:

$$h = \sum_{i=1}^{n} N_i \tag{4.15}$$
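A sketch of this initial-solution step as a linear program; the linear map from the free torques to the normal forces, N = A·w + g (cf. (4.12)), is assumed to be supplied by the model:

```python
# Sketch of the initial-solution step with a linear program: maximize
# h = sum_i N_i from (4.15) over the free torques w, subject to N_i > 0 and
# |w_j| < MaxTrq. A and g (the linear map N = A @ w + g, cf. (4.12)) are
# assumed to be supplied by the model.
import numpy as np
from scipy.optimize import linprog

def initial_torques(A, g, max_trq):
    c = -A.sum(axis=0)   # maximize sum(N) = sum(A @ w + g): minimize -sum
    # N = A @ w + g >= 0  is equivalent to  -A @ w <= g
    res = linprog(c, A_ub=-A, b_ub=g,
                  bounds=[(-max_trq, max_trq)] * A.shape[1])
    return res.x         # a valid starting point for the gradient search
```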
C. Gradient optimization
This algorithm seeks an optimum in the constrained solution space given a
known valid initial solution. Gradient optimization is similar to the potential
field method: at each step, the gradient is computed and the next solution is
generated following the maximum slope.
$$N_i > 0 \tag{4.16}$$

$$|m_i| < MaxTrq \tag{4.17}$$
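A minimal sketch of this constrained descent; grad_f is assumed to give the gradient of the objective f of Fig. 4.4 and is_valid to check (4.16)–(4.17):

```python
# Sketch of the constrained gradient descent: starting from a valid
# solution, follow the gradient of the objective f while (4.16)-(4.17)
# remain satisfied. grad_f and is_valid are assumed to come from the model.
import numpy as np

def gradient_optimize(w0, grad_f, is_valid, step=1e-3, iters=100):
    w = np.asarray(w0, dtype=float)
    for _ in range(iters):
        w_next = w - step * grad_f(w)  # move along the maximum slope
        if not is_valid(w_next):       # would violate (4.16) or (4.17)
            break
        w = w_next
    return w
```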
Fig. 4.7. Snapshot of the graphical user interface used to test our optimization proce-
dure. The user can interactively modify the state of the robot, i.e., the wheel-ground
contact angles, roll angle, pitch angle and the bogies and fork angles. The optimal
torques and forces are computed when the state changes (the forces are expressed in
the global reference frame): (a) side view; (b) right bogie view; (c) decomposed view
from behind. In this view, the arrows represent the projections of the reaction forces
in the global frame of reference.
Fig. 4.8. Histogram of μm for about 20,000 different robot postures. 80% of the states
correspond to a friction coefficient smaller than 0.6 and 65% to a friction coefficient
smaller than 0.3.
Fig. 4.9. Images of the Mars surface taken by the NASA rover Spirit next to the
Bonneville Crater
Fig. 4.10. Rover motion control loop. The global loop is a speed-control loop whereas
the controllers for the wheels are torque controllers. The rover state vector s includes
the wheel-ground contact angles, the internal link angles and the roll and pitch angles.
The kernel of the control loop is a PID controller. It enables the estimation of
the additional torques to apply to the wheels in order to reach the rover’s desired
velocity Vd . The PID minimizes the velocity error Vd −Vr , where Vr is the rover’s
actual velocity measured using onboard sensors¹. M_c is actually the estimate of

¹ The rover velocity is estimated using the sensor fusion algorithm presented in Chapter 5.
$$s_k^i = \Delta w_{(k-1,k)}^i - \Delta\theta_{(k-1,k)}^i \cdot R \tag{4.19}$$
where $\Delta w_{(k-1,k)}^i$ is the true wheel displacement, $\Delta\theta_{(k-1,k)}^i$ the angular change
and R the wheel radius. The total slip of the rover integrated during an experiment
is defined as

$$S = \sum_k \sum_{i=1}^{6} s_k^i \tag{4.20}$$
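The metric is a two-line computation; a minimal sketch over logged simulation data:

```python
# Sketch of the slip metric (4.19)-(4.20): per-wheel slip per time-step is
# the difference between the true wheel displacement and the distance
# rolled according to the encoder; the total S sums over wheels and steps.
import numpy as np

def total_slip(dw, dtheta, R):
    """dw, dtheta: arrays of shape (steps, 6); R: the wheel radius."""
    s = dw - dtheta * R   # (4.19): slip s_k^i of wheel i during step k
    return s.sum()        # (4.20): integrated slip S
```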
4.4.2 Experiments
We set the nominal rover speed for all the simulations to 0.1 m/s and the fric-
tion coefficient to 0.6. Different terrain shapes, such as those depicted in Figs. 4.11
and 4.12, were generated to perform the experiments in a wide range of condi-
tions. The terrain of Fig. 4.11 is a simple 2D profile (in the plane x-z), whereas
Fig. 4.11. Trajectory of the center of gravity and terrain profile for experiment 1. This
kind of terrain is difficult for a wheeled rover because it comprises sharp edges.
Fig. 4.12. Terrain for experiment 2. This kind of terrain is challenging for a wheeled
rover because much side slip occurs.
the terrain of Fig. 4.12 is a full 3D mesh. The latter was generated randomly
using step, sinusoidal, circle and particle deposition functions.
Because both controllers behave differently in a given situation, the trajec-
tories of the rover can differ significantly. Indeed, the slip of the wheels is not
the same for both controllers, and this can cause the rover to take different
routes. To allow performance comparison, we consider an experiment as valid if
the distance between the final positions of both paths is smaller than a certain
threshold (for a path length of about 3.5 m). A distance threshold of 0.1 m is
small enough to consider the routes as similar.
For all the valid experiments, predictive control showed better performance
than reactive control. In some cases, the rover was unable to even climb some
obstacles and to reach the final position when driven using the reactive approach.
The results for the two examples are presented in Table 4.1. It can be seen that
the integrated wheel slip is smaller if predictive control is used.
All the results obtained during the experiments are similar to the ones depicted
in Figs. 4.13 and 4.14. On flat terrain, the performances of both controllers are
identical and wheel slip is zero. As soon as bumps and slopes are introduced, the
instantaneous slip for each wheel is reduced when using predictive control: the
[Plot for experiment 1: curves 1 and 3 show reactive control, curves 2 and 4 predictive control; x axis in meters.]
Fig. 4.13. Total slip and rear-wheel slip for experiment 1 (total slip is scaled by a
factor of 500). Locally, wheel slip can be bigger with predictive control but the total
slip remains always smaller: 20% better than reactive control.
[Plot for experiment 2: curves 1 and 3 show reactive control, curves 2 and 4 predictive control; x axis in meters.]
Fig. 4.14. Total slip and front wheel slip for experiment 2 (total slip is scaled by a
factor of 800). The difference gets bigger as the rover encounters rough terrain. At the
end of the experiment, the predictive controller performs 16% better than the reactive
controller.
peaks are at the same places for both controllers, but the amplitude is smaller for
the predictive controller. That means that when a wheel slips at a given place, it
slips less when predictive control is used. This major behavior can be observed
in Figs. 4.13 and 4.14 when looking closely at curves 1 and 2. The peaks may not
be aligned but that is mainly due to slight differences in the trajectories. The
improvement is difficult to quantify because it strongly depends on the kind of
terrain. However, because this behavior has been observed systematically for all
the experiments we performed, it validates the benefit of using a predictive con-
troller. As noted above, the focus was on the comparison of the two controllers,
not on the absolute figures.
Another important result of the simulations is that they allowed us to test the
approach against violations of the strong assumptions that were made during
the development of the controller, i.e., the wheels always have ground contact,
there is no slip and there is no dynamic effect. During the experiments, these
assumptions were all violated but no instability was observed and the controller
performed well in all the tested configurations.
Finally, the simulations showed good results and promising perspectives. Fur-
thermore, they allowed us to detect potential problems and address implemen-
tation details, bringing us a step closer to the real application.
Fig. 4.15. The tactile wheel (developed at EPFL by Michel Lauria). (a) Sixteen
infrared proximity sensors measure the tire deflection all around the wheel; (b) picture
of the front wheel of the robot Octopus, equipped with tactile wheels.
With such a wheel, the contact angle is estimated simply using a weighted
mean of the proximity sensor signals. In this way a smooth transition of the
measured angle is obtained even when sharp slope changes are encountered. In
Fig. 4.15b, the force on the tire is transferred from vertical to horizontal as the
wheel climbs the step in a continuous manner. The weighted mean translates
this behavior into the wheel-ground contact estimation. There are at least two
other advantages of including deflection sensors in the wheels:
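A sketch of the weighted-mean estimate described above, assuming sixteen evenly spaced sensors; averaging deflection-weighted unit vectors (rather than raw angles) is a design choice here that avoids wrap-around at ±π:

```python
# Sketch of the weighted-mean contact angle estimate, assuming sixteen
# proximity sensors evenly spaced around the wheel. Averaging unit vectors
# (rather than raw angles) avoids wrap-around problems at +/- pi.
import numpy as np

SENSOR_ANGLES = np.arange(16) * 2.0 * np.pi / 16.0

def contact_angle(deflections):
    """deflections: 16 tire-deflection readings (larger = more compressed)."""
    z = np.sum(np.asarray(deflections) * np.exp(1j * SENSOR_ANGLES))
    return float(np.angle(z))  # deflection-weighted mean direction
```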
4.6 Summary
Unlike other control strategies, the proposed method does not require the use of
soil models; instead, it estimates the rolling resistance as the robot moves. As a
consequence, the
rover is able to operate on different types of soil, which is the main requirement
for exploration missions. Furthermore, our approach can be adapted to any kind
of wheeled rover and the needed processing power remains low, which enables
online computation. The simulations showed that such a controller performs
better than a reactive controller and that the system is mature enough to be
implemented on the rover for real experiments.
Another interesting aspect of our controller is that the normal forces are
explicitly computed and can be used to estimate a slip probability for each
wheel: the less pressure on the wheel, the more likely the wheel is to slip. The
slip probability can then be propagated through the 3D-Odometry equations to
finally obtain the covariance matrix for the robot’s displacement. This covariance
is valuable information for probabilistic multi-sensor fusion that is presented in
the next chapter.
5 Position Tracking in Rough-Terrain
The aim of this section is to give a survey of sensors that can be used for position
tracking and emphasize the difficulties of motion perception in unstructured
outdoor environments.
The family of 1D/2D distance sensors such as ultrasound and 2D laser scan-
ners are commonly used indoors (structured environments), but are not well
adapted for outdoors (unstructured environments). In structured environments,
it is generally assumed that a rover moves on flat ground and that its working
space is delimited by walls, perpendicular to the ground. This strong assumption
Sun and star sensors provide absolute heading, and the star sensors also provide latitude and longitude. These
sensors can only be used when the rover is perfectly still, and require good
meteorological conditions. Furthermore, a sun sensor can be utilized only during
the day, whereas the measurements of a star sensor are only available during the
night. However, they are of high interest for global re-localization: the absolute
position acquired by a star sensor during the night can be used to relocate the
rover globally after a long traverse. Thus, three conclusions can be drawn from
the above discussion:
1. No single sensor can provide all the required information. All have their own
drawbacks and advantages. In general, the value of the provided information
is inversely proportional to the sampling rate, e.g., an inertial measurement
unit can provide data at 100 Hz but the heading estimation diverges quickly,
whereas a sun sensor provides absolute heading but requires the rover to
remain at rest over a longer measurement time.
2. Since the data provided by absolute sensors contain no drift, they have more
value than those acquired using dead reckoning sensors.
3. Because a small angle error (e.g., heading error) leads to a large position
error, it is more important to have precise information about angles than
about distances.
These conclusions reinforce the fact that the use of complementary sensors is required for robust position tracking. In this chapter, we use three different sources
of information for sensor fusion, though our approach can be easily extended to
more sources:
• Wheel encoders: as discussed in Chapter 3, odometry can be used to predict
the robot’s displacement and a reasonably accurate motion estimation can be
obtained in rough terrain. Moreover, Chapter 4 proposes an algorithm to limit
wheel slip and enhance the accuracy of odometry. This motion estimation
technique provides data at a relatively high rate (10 Hz).
• IMU: an inertial measurement unit enables the direct measurement of sys-
tem dynamics. When a gravity field is present, the attitude can be estimated
without any drift, which is very valuable information. Furthermore, the mea-
surements are available at a high frequency (75 Hz).
• Stereovision: a method similar to [48, 53] is used to estimate the six degrees
of freedom of the rover. Instead of using pixel tracking, interest points are
extracted for each frame and are matched using the algorithm presented in
[35, 36]. This technique, called Visual Motion Estimation (VME), usually
provides better pose estimation than the other sensors but at a much lower
rate (0.5 Hz).
For this research, absolute sensors such as GPS have not been considered
because the rover is supposed to track its position in an unknown environment
without the help of any artificial beacons (e.g., exploration of Mars).
56 Position Tracking in Rough-Terrain
where s and c denote, respectively, the sin(·) and cos(·) functions. Such a matrix includes both rotation and translation in three dimensions. The rotation is a three-by-three direction cosine matrix expressed in terms of the angles $\phi$, $\theta$, $\psi$, and the components of the translation are $x_t$, $y_t$ and $z_t$. Thus, such a transformation
is parametrized by six parameters that are the six degrees of freedom of a body in
a 3D space. It is interesting to note that a homogeneous matrix can also be used
to describe incremental motion (in a time interval t, t + 1) or even to describe
the robot’s pose as illustrated in Fig. 5.1. Between times t and t + 1, the sensor
moves from point S to S′. The 3D transformation reflecting the sensor motion is given by $H_{S'S}$. Thus, the six parameters defining $H_{S'S}$ include all the motion information between t and t + 1 (expressed in the sensor frame). We define $p_S$, the vector of the parameters defining $H_{S'S}$

$$p_S = \begin{bmatrix} \phi_S & \theta_S & \psi_S & x_S & y_S & z_S \end{bmatrix}^T \quad (5.2)$$
Fig. 5.1. Transformations between the different coordinate systems. W, R and S are,
respectively, the centers of the coordinate systems associated with the World, the Robot
and the Sensor. The homogeneous matrix $H_{ij}$ allows the transformation of coordinates expressed in reference frame i into that of frame j. "Prime" signs added to variable names (e.g., $p'$) denote values related to time-step t + 1.
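To make this concrete, the following minimal sketch (our code; the Z-Y-X Euler-angle convention is our assumption) builds a homogeneous matrix from the six parameters of 5.2 and chains the transformations of Fig. 5.1:

import numpy as np

def homogeneous(phi, theta, psi, x, y, z):
    """4x4 homogeneous matrix from six pose parameters (assumed Z-Y-X
    Euler angles psi, theta, phi and translation x, y, z)."""
    cf, sf = np.cos(phi), np.sin(phi)
    ct, st = np.cos(theta), np.sin(theta)
    cp, sp = np.cos(psi), np.sin(psi)
    R = np.array([[cp * ct, cp * st * sf - sp * cf, cp * st * cf + sp * sf],
                  [sp * ct, sp * st * sf + cp * cf, sp * st * cf - cp * sf],
                  [-st,     ct * sf,                ct * cf]])
    H = np.eye(4)
    H[:3, :3] = R
    H[:3, 3] = [x, y, z]
    return H

# Pose of the robot in the world at time t, and incremental motion R' -> R:
H_RW = homogeneous(0.0, 0.05, 0.1, 1.0, 2.0, 0.0)
H_RpR = homogeneous(0.0, 0.02, 0.0, 0.1, 0.0, 0.01)
# Pose at time t+1: coordinates in R' are mapped to W through R.
H_RpW = H_RW @ H_RpR
print(H_RpW[:3, 3])  # updated position of the robot in the world frame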
Using a similar approach, the pose of the robot at time t + 1, expressed in the world coordinate system, is given by $H_{R'W} = H_{RW}\, H_{R'R}$.
Finally, we define $p_W$ and $p'_W$ as the parameter vectors of $H_{RW}$ and $H_{R'W}$

$$p_W = \begin{bmatrix} \phi_W & \theta_W & \psi_W & x_W & y_W & z_W \end{bmatrix}^T \quad (5.7)$$

$$p'_W = \begin{bmatrix} \phi'_W & \theta'_W & \psi'_W & x'_W & y'_W & z'_W \end{bmatrix}^T \quad (5.8)$$
$H_{R'R}$. The uncertainties associated with the transformation $H_{SR}$ are neglected because $H_{SR}$ can be calibrated accurately. In what follows, we define $C_S$ and $C_R$ as the covariance matrices associated with the vectors $p_S$ and $p_R$ respectively. In order to propagate these covariances, the function q, expressing $p_R$ as a function of $p_S$, has to be derived. q is determined using 5.4 and the properties of the homogeneous transformation matrices

$$J_R = \frac{\partial q}{\partial p_R} \qquad J_W = \frac{\partial q}{\partial p_W} \quad (5.13)$$
Methods such as Partially Observable Markov Decision Processes or the Kalman filter are derived from Bayes' rule and provide a framework to fuse uncertain data [10]. The choice of one or the other depends on the application. For continuous state spaces, the Kalman filter is the preferred choice for sensor fusion. Even though this method can be applied to fuse the measurements acquired by any number of sensors, most of the applications found in the literature use only two sensors. The most commonly used pairs are odometry/laser scanner, odometry/inertial measurement unit [19], inertial measurement unit/vision [63, 56, 71], compass/inertial measurement unit [55] and inertial/GPS [51]. Furthermore, even for rough-terrain rovers, only the 2D case (x, y, ψ) is generally considered.
In this section, a method to estimate the six degrees of freedom of the rover
(i.e. x, y, z, φ, θ, ψ) using the measurements of three different sensors is presented.
Its principle is presented in Fig. 5.2. Our approach uses an extended information
filter (EIF) to combine the information acquired by the sensors. This formulation
of the Kalman filter has very interesting features: its mathematical expression is
well suited to implement a distributed sensor fusion scheme and allows for easy
extension of the system in order to accommodate any number of sensors, of any
kind [49]. In this application, the observation and transition equations are not
linear and a nonlinear information filter is required. The observation equation
assumes additive zero-mean Gaussian noise and is written

$$z_j(k) = h_j[k, x(k)] + \nu_j(k) \quad (5.14)$$

where $z_j$ is the measurement vector of sensor j and $h_j$ the nonlinear observation model transforming the state vector $x(k)$ from the state space to the observation space. We define the measurement covariance matrix as the expectation of the measurement noise: $R_j = E[\nu_j \nu_j^T]$. Similarly, the nonlinear state transition equation can be written as

$$x(k) = f[k, x(k-1)] + \omega(k) \quad (5.15)$$

where f is the nonlinear state transition model describing the transition of the state from one time-step to another as a nonlinear function of the state. The covariance matrix of the state transition is defined as $Q = E[\omega\,\omega^T]$. The first-order nonlinear information filter is similar to the linear information filter if the following substitutions are made

$$\nabla_x h_j[k, \hat{x}(k)] \equiv H_j(k) \quad (5.16)$$
Fig. 5.2. Sensor fusion scheme. The Inertial Sensor (IS) is divided into two logical
sensors: an inclinometer (inc) and an inertial measurement unit (imu).
with S = {imu, inc, odo, vme} the set of sensors and $\bar{z}_j(k) = z_j(k)$ for the linear filter; otherwise

$$\bar{z}_j(k) = z_j(k) - \big(h_j[k, \hat{x}(k|k-1)] - \nabla_x h_j[k, \hat{x}(k|k-1)]\, \hat{x}(k|k-1)\big) \quad (5.20)$$
sensors used in the experiments. The measurement vectors and matrices for all
the sensors are defined in Fig. 5.2.
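For illustration, here is a minimal extended-information-filter update step in the spirit of 5.14–5.20 (our sketch with generic observation models; the actual models for S are those of Fig. 5.2):

import numpy as np

def eif_update(Y, y, sensors, x_pred):
    """One EIF update. Each sensor j adds H_j^T R_j^-1 H_j to the information
    matrix and H_j^T R_j^-1 zbar_j to the information vector, so adding or
    dropping a sensor amounts to one extra term in the sums.

    Y, y    : prior information matrix and vector (Y = P^-1, y = Y @ x).
    sensors : list of tuples (z, h, H, R) with z the measurement, h(x) the
              observation model, H its Jacobian at x_pred, R its covariance.
    """
    for z, h, H, R in sensors:
        Ri = np.linalg.inv(R)
        zbar = z - (h(x_pred) - H @ x_pred)  # linearized measurement (cf. 5.20)
        Y = Y + H.T @ Ri @ H
        y = y + H.T @ Ri @ zbar
    x_hat = np.linalg.solve(Y, y)            # back to state space
    return Y, y, x_hat

# Toy two-state example with two sensors, each observing one component:
x_pred = np.array([0.0, 0.0])
Y0 = np.eye(2) * 1e2
s1 = (np.array([0.1]), lambda x: x[:1], np.array([[1.0, 0.0]]), np.eye(1) * 1e-2)
s2 = (np.array([0.2]), lambda x: x[1:], np.array([[0.0, 1.0]]), np.eye(1) * 1e-2)
Y, y, x_hat = eif_update(Y0, Y0 @ x_pred, [s1, s2], x_pred)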
in the camera coordinate system, is first converted into the robot's coordinate system using 5.4. Then the same method as presented in the previous section is applied to derive $H_{vme}$.
with

$$x = \begin{bmatrix} x_x & x_y & x_z & x_{b_a} & x_\omega & x_{b_\omega} & x_\Delta \end{bmatrix}^T$$
The angular rates, biases, scaling errors and accelerations are random processes that are affected by the motion commands of the rover, by time and by other unmodeled parameters. However, they cannot be modeled as pure white noise because they are highly time-correlated. To illustrate this statement, let us assume that the rover is subject to an acceleration of 2g at time t. At time t + 1, the acceleration cannot reach −2g, because the rover has a certain inertia and the elapsed time between t and t + 1 is short. Thus, these signals are time-correlated and cannot be considered as white noise. Instead, they can be modeled as first-order Gauss–Markov processes² whose auto-correlation function is

$$R_p(t) = \sigma^2 e^{-\tau |t|} \quad (5.33)$$

where $1/\tau$ is the time constant defining the correlation time and $\sigma^2$ is the variance of the process. Such a process can also be considered as a low-pass filter, with $\tau$
² The detailed derivation of the equations related to the first and second integrals of a Gauss–Markov process is presented in Appendix C. In particular, the covariance matrix associated with such a process is developed.
being the time constant. The discrete difference equations of the first and second integrals of such a process are computed using the inverse Laplace operator

$$\begin{bmatrix} p_1 \\ p_2 \\ p_3 \end{bmatrix}_{k+1} = \begin{bmatrix} e^{-\tau h} & 0 & 0 \\ (1 - e^{-\tau h})/\tau & 1 & 0 \\ (\tau h - 1 + e^{-\tau h})/\tau^2 & h & 1 \end{bmatrix} \begin{bmatrix} p_1 \\ p_2 \\ p_3 \end{bmatrix}_k = \Phi(\tau, h) \begin{bmatrix} p_1 \\ p_2 \\ p_3 \end{bmatrix}_k \quad (5.34)$$

where $p_2$ and $p_3$ are, respectively, the first and second integrals of the Gauss–Markov process $p_1$, and h is the sampling time. It is interesting to note that if τ tends toward zero, and if $p_1$ corresponds to an acceleration, then 5.34 is written as

$$\begin{bmatrix} \ddot{x} \\ \dot{x} \\ x \end{bmatrix}_{k+1} = \begin{bmatrix} 1 & 0 & 0 \\ h & 1 & 0 \\ h^2/2 & h & 1 \end{bmatrix} \begin{bmatrix} \ddot{x} \\ \dot{x} \\ x \end{bmatrix}_k \quad (5.35)$$
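A minimal numerical sketch (ours) of the propagation matrix $\Phi(\tau, h)$ of 5.34:

import numpy as np

def phi(tau, h):
    """Discrete transition matrix of a first-order Gauss-Markov process
    and its first and second integrals (eq. 5.34)."""
    e = np.exp(-tau * h)
    return np.array([[e,                              0.0, 0.0],
                     [(1.0 - e) / tau,                1.0, 0.0],
                     [(tau * h - 1.0 + e) / tau**2,   h,   1.0]])

h = 0.01                         # sampling time [s]
p = np.array([0.5, 0.0, 0.0])    # [process, first integral, second integral]
p_next = phi(1.0 / 20.0, h) @ p  # correlation time 1/tau = 20 s
# As tau -> 0, phi(tau, h) tends to [[1,0,0],[h,1,0],[h**2/2,h,1]], i.e. 5.35.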
The biases and scaling factors are simply propagated through a low-pass filter using

$$F_{b_a} = \mathrm{diag}\big(e^{-\tau_{b_{ax}} h},\ e^{-\tau_{b_{ay}} h},\ e^{-\tau_{b_{az}} h}\big)$$
The chosen time constants and variances might not be the optimal parameters, but they have proven to give good filter performance.
The variances of the roll and pitch angles were set to the square of half the value given in the IMU datasheet (2σ = 1°). These values correspond to the diagonal elements of $R_{inc}$.
The uncertainty model of the odometry is difficult to assess because it depends
on the type of terrain. As it is not possible to classify all terrain types, we set the
uncertainty of the odometry as being proportional to the acceleration undergone
by the rover. Indeed, slip mostly occurs when negotiating an obstacle, while the
robot is subject to acceleration. Furthermore, at constant speed, the acceleration
is zero and does not bring much information about the rover’s motion. In this
particular case, motion estimation can only rely on odometry. For the same
reasons, the variance for the yaw angle was set proportional to the angular rate
ωz . Thus, the covariance matrix associated with 3D-Odometry is written as
$$C_R = \begin{bmatrix}
k_x(1 + \ddot{x}_R - g_x) & 0 & 0 & 0 \\
0 & k_y(1 + \ddot{y}_R - g_y) & 0 & 0 \\
0 & 0 & k_z(1 + \ddot{z}_R - g_z) & 0 \\
0 & 0 & 0 & k_\psi(1 + \omega_z)
\end{bmatrix} \quad (5.41)$$
where $k_x$, $k_y$, $k_z$ and $k_\psi$ are proportional gains and $g_x$, $g_y$ and $g_z$ are the gravitational components in the rover-fixed frame. The gains are set depending on the detected wheel slip, which is obtained using exteroceptive sensors such as visual odometry. This approach proved to work well and was validated during the experiments. The same equations as presented in Sect. 5.2.2 are applied to propagate the covariance matrix $C_R$ through the coordinate system transformation and to finally obtain $R_{odo}$.
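As an illustration of 5.41, a minimal sketch (ours; the gains are placeholder values and the absolute values are our addition to keep the variances positive):

import numpy as np

def odometry_covariance(acc_body, g_body, omega_z,
                        k=(1e-4, 1e-4, 1e-4, 1e-5)):
    """Diagonal 3D-Odometry covariance in the rover frame (cf. 5.41): the
    uncertainty grows with the specific force sensed by the rover, since
    slip mostly occurs while the robot accelerates over obstacles."""
    kx, ky, kz, kpsi = k
    ax, ay, az = acc_body
    gx, gy, gz = g_body
    return np.diag([kx * (1.0 + abs(ax - gx)),
                    ky * (1.0 + abs(ay - gy)),
                    kz * (1.0 + abs(az - gz)),
                    kpsi * (1.0 + abs(omega_z))])

# Rover climbing an obstacle: accelerations and gravity in the body frame.
C_R = odometry_covariance([0.3, 0.0, 9.9], [0.0, 0.0, 9.81], 0.05)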
Experimental Validation
In order to test the position-tracking algorithm, the robot was driven across sev-
eral experimental setups for a predefined distance. For each of five test runs per
setup, the 3D-Odometry trajectory and the filtered trajectory³ were compared.
In this section, we present the results corresponding to the setup depicted in
Fig. 5.3, which is the most difficult obstacle the rover had to climb during the
experiments.
The experimental setup of Fig. 5.3 is very difficult to negotiate for a wheeled rover because of the sharp edges and the low friction coefficient. A lot of slip occurred during the step climb and the robot bounced on the ground when the rear bogie wheels descended from the obstacle. The different events occurring during

³ The filtered trajectory is the trajectory estimated by the EIF.
Fig. 5.3. Experimental setup along with the corresponding 3D model used to analyze
the results. The maximum height of the obstacle is 135 mm and the final height is 45 mm.
Fig. 5.4. Accelerometers’ raw signals for a run performed on the setup depicted in
Fig. 5.3: (a) front wheels climbing; (b) rear wheels climbing; (c) front wheels descend-
ing; (d) rear wheels descending from the concrete blocks
the obstacle negotiation are easily identified when looking at the z-accelerometer
signal in Fig. 5.4.
Table 5.2 reports the estimation errors of the final position of the trajectories computed with the 3D-Odometry and the filter. The third run, in bold in the table, is used as the reference experiment for the next two figures.
Table 5.2. Experimental results for the setup depicted in Fig. 5.3 (in mm)
The error along the x-axis is nearly the same for both the filtered and the 3D-
Odometry trajectories. This is mainly due to the fact that there are situations
where the rover’s velocity change is not detected by the accelerometers. Such
situations occur in case of wheel slip and when the signal-to-noise ratio of the
accelerometers is poor. A typical example is when the rover starts climbing the
obstacle. In this case, a lot of wheel slip occurs and the acceleration along x
is low. Thus, less weight is given to the acceleration and the filter mainly accounts for the odometry measurements. As a consequence, the 3D-Odometry estimate and the filtered estimate do not differ.
On the other hand, when the rover descends from the obstacle, the accelerations are significant (Fig. 5.4c and d) and the uncertainty of the odometry increases according to 5.41. Thus, more weight is given to the acceleration measurements, which allows for trajectory corrections such as those highlighted in Fig. 5.5. For all the experiments, the filtered final z-coordinate is always closer to the true height of 45 mm: the error along the z-axis is only 5 mm, instead of 8 mm for the 3D-Odometry.
The position error along the y-axis is mainly due to the heading (yaw) error.
Indeed, a small heading error translates into large position errors as the traveled
distance increases. The error of the heading computed with odometry is mainly
due to asymmetric wheel slip, which occurs frequently during this experiment.
The heading estimation is significantly improved when the z-gyroscope measure-
ments are integrated. These measurements contain valuable information about
the heading rate that is used to filter asymmetric slip and thus to produce better
estimates as shown in Fig. 5.6. This results in a significant reduction of the error
along the y-axis (see Table 5.2).
The position error in the y-z plane is plotted in Fig. 5.7. It clearly appears
that the average error of the filtered estimate is closer to the ground-truth (the
origin) than the 3D-Odometry estimate. The total error in the plane y-z is 20 mm
for the filtered and 52 mm for the 3D-Odometry. Thus, there is an improvement
of 44% when using sensor fusion.
For testing the filter in a more general case, the rover was driven several times
across the scene depicted in Fig. 5.8. Each time, the operator remote-controlled
the rover in order to close the loop. For each run, the final error of the filtered
trajectory was smaller than that estimated using odometry only. For the particular
Fig. 5.5. z-coordinate of the trajectory. The acceleration measurements are integrated to correct the trajectory; the circled areas correspond to the events labeled (c) and (d) in Fig. 5.4.
Fig. 5.6. Yaw-angle estimate (the true final angle is close to zero). The yaw gyroscope,
measuring the angular rate around z, allows filtering asymmetric slip.
experiment of Fig. 5.8, the final error in [x, y, z, ψ] was [0.16 m, 0.142 m, 0.014 m, 18°] for 3D-Odometry and only [0.06 m, 0.029 m, 0.012 m, 1.2°] when using sensor fusion.
Fig. 5.7. Position error in the y-z plane. The 3D-Odometry estimates are represented
by the circles and the filtered estimates by the triangles.
a
b
Fig. 5.8. Comparison between (a) 3D-Odometry and (b) the filtered trajectory. This experiment was performed with the real rover.
Discussion
The experimental results show that the inertial measurement unit helps to cor-
rect odometric errors and significantly improves the pose estimate. The main
contributions occur locally when the robot overcomes sharp-shaped obstacles
and during asymmetric wheel slip. In all the experiments, the fusion of odom-
etry and inertial sensors provided better motion estimates than those obtained
using odometry only. The improvement brought by the sensor fusion process
becomes more and more significant as the total path length increases. In com-
parison with other works integrating inertial sensors on ground vehicles, this
work distinguishes itself with the following features:
• This work addresses the full 3D case.
• An error model for the 3D-Odometry is established based on the IMU mea-
surements according to 5.41.
• The IMU is used in rough terrain, where the signal-to-noise ratio is unfavor-
able.
• It was shown that the integration of the accelerations can be used to correct
the robot’s position locally.
In the previous section, only proprioceptive sensors were integrated to track the robot's position. Even if the inertial sensors complement the odometric information, there are situations where the measurements do not provide enough information about the rover's motion. For example, the situation where all the wheels slip, e.g., on ice, is not correctly handled by the filter. In this case, the acceleration is close to zero and the filter integrates only odometric information, producing erroneous position estimates. Thus, in order to increase the robustness of the position tracking and to limit the error growth, it is necessary to incorporate measurements from exteroceptive sensors. Such sensors allow computing the rover's ego-motion by tracking features in the environment. This way, wheel slip is detected and correctly handled by the filter.
Here we use the visual motion estimation technique presented in Appendix D to compute the rover's ego-motion. This new motion information is integrated
with the filter using the scheme depicted in Fig. 5.2. The uncertainty model of
VME, presented in [36], is based on an error model of stereovision. The covariance
matrix Rvme expressed in the robot’s frame is obtained using the equations in
Sect. 5.2.
Experimental Results
One of the experimental setups used to test our position-tracking algorithm
is depicted in Fig. 5.9. The use of obstacles of known shape enables the pre-
calculation of reference trajectories (ground-truth trajectories), that are used to
assess the accuracy of our approach. The experiment consisted of driving the rover on top of the obstacle while computing the rover's trajectory using the
sensor fusion algorithm. A sequence of snapshots taken during the experiment
is shown in Fig. 5.10.
The graph of Fig. 5.11 plots four trajectories, i.e. 3D-Odometry, VME, Esti-
mated and the Reference trajectory. The Estimated trajectory is the result of
the sensor fusion of all three sensors. The Reference trajectory was computed
considering the kinematics of SOLERO and the geometric dimensions of the
experimental setup (the final x coordinate was hand measured). As the rover
Fig. 5.9. Experimental setup for sensor fusion using VME, 3D-Odometry and IMU:
(a) side view; (b) image taken by the left camera of the stereovision system. The dots
represent the extracted Harris features. In comparison with a natural scene, only a few
features are detected.
did not deviate significantly from straight motion, the computed trajectory is
considered as the ground-truth.
The graph is divided into three zones, characterizing three different phases of
the trajectory as depicted in Fig. 5.10. In zone A, the VME trajectory is very
close to the Reference trajectory, whereas the Estimated trajectory is slightly
Fig. 5.11. x-z trajectories for the experimental setup of Fig. 5.9
offset. This offset is mainly due to the large amount of wheel slip occurring in zone A. Because the amplitude of the acceleration is low in that part of the trajectory, the weight given to the odometry is not significantly decreased. Thus, the integration of the odometry measurements pushes the Estimated trajectory slightly away from the Reference.
In zone B, the Estimated trajectory is closer to the Reference trajectory than
VME (see Fig. 5.12). When climbing the obstacle, the motion estimates provided
by VME are less accurate because of the lower number of features matched be-
tween successive frames. The low number of matches also increases the uncer-
tainty associated with the VME estimates. This explains why the jagged steps of
the VME trajectory are not propagated to the level of the Estimated trajectory.
Zone C corresponds to the phase of the trajectory where the rear wheel passes
from the sloped surface to the top plane. This transition causes the rover to tilt
forward rapidly and thus creates a high discrepancy between two successively
acquired images. In such conditions, feature matching is more difficult and only
a few pairings are established. The worst case occurs between images number 31
and 32 where only 29 pairings are found. As a consequence, the VME provided
a very bad motion estimate with a high uncertainty (see Figs. 5.13 and 5.14). In
this case, less weight is given to VME and the sensor fusion successfully filters out this poor information to produce a reasonably good estimate using the odometry and the IMU instead. Finally, the estimated final position is very close to the
measured final position. A final error of four millimeters for a trajectory longer
than one meter (0.4%) is very satisfactory, given the difficulty of the terrain
(high wheel slip and field-of-view changes).
It is interesting to analyze the plot of the variance of the estimated x coordinate as a function of time. As shown in Fig. 5.15, the variance increases globally over time and decreases significantly each time a VME measurement is available.
Fig. 5.12. Enlarged view of zone B. The Estimated trajectory is closer to the Reference
trajectory than VME.
Fig. 5.13. Enlarged view of zone C. Twenty-nine features were matched between
images 31 and 32. This yields a bad motion estimate of VME, associated with a large
uncertainty. Thanks to sensor fusion, the system successfully filtered this information
to provide a good position estimation.
Fig. 5.14. Variance of the motion estimations of VME along x (in meters). The uncer-
tainty increases suddenly at image 32 because only a few features were matched (only
29 pairings).
The corrections at the lower level are due to the odometry updating the position estimated on the sole basis of the IMU measurements. In other words, the motion estimates of VME have
the biggest weight, followed, respectively, by the 3D-Odometry and the IMU. In
Fig. 5.15, we can also observe the effects of the filter updates when uncertain
VME estimations are provided. When the uncertainty of VME is large (see
Fig. 5.14), the estimations have less weight and the variance along x remains high.
Such filter behaviors are expected and prove that the filter provides consistent
estimates.
Fig. 5.15. The variance of the position estimates along the x-axis. Because no ab-
solute positioning mechanism is available to reset the position, the variance globally
increases over time. The variance significantly decreases each time a VME measure-
ment is available. At a lower level (circled ), the odometry periodically corrects the
position estimates computed using IMU measurements only.
5.5 Summary
In this chapter, a robust method for combining different sensor measurements to
track the rover’s position has been presented. The sensor fusion scheme is flex-
ible and can accommodate any number of sensors of any kind. In order to test
the approach, a set of experiments was performed. Three different types of sen-
sors were used, i.e., 3D-Odometry, inertial sensors and a visual motion estimation technique.

6 Conclusion
This book is about 3D position tracking and control for all-terrain robots. Its
intent is to contribute toward extending the range of possible areas a robot can
explore. The work covers a wide range of problems, from system integration
aspects up to the development of high-level control algorithms and models. It
is shown that mastering both the technological and the scientific aspects leads to better system performance. Improvement of the accuracy of 3D position
tracking is obtained by considering rover locomotion in challenging terrains as a
holistic problem. Thus, most of the aspects having an influence on pose tracking
are addressed, i.e., mechanical design, locomotion control and sensing.
In Chapter 2, the development of a new all-terrain rover has been presented.
The platform, called SOLERO, is equipped with two computers, a stereovision
module, an omnidirectional vision system, an inertial measurement unit, numer-
ous sensors and actuators and dedicated electronics for power management. The
mechanical structure of SOLERO allows smooth motion across rough terrain
with limited wheel slip and vibration. This yields good signal-to-noise ratios for
the sensors and, in particular, enables the use of inertial sensors in rough terrain.
Thus, the intrinsic performance of such a chassis directly contributes to better
position tracking.
A method called 3D-Odometry that enables the computation of the rover’s
motion increments based on wheel encoders and chassis state sensors is presented
in Chapter 3. Because it accounts for the rover’s kinematic model, this technique
provides better motion estimates than the standard approach. The improvement
brought by 3D-Odometry becomes particularly significant when considering ob-
stacles comprising sharp edges. Another interesting feature of the approach is
that it internally computes the wheel-ground contact angles, which are required
inputs for traction control.
To further improve the accuracy of odometry, a predictive controller mini-
mizing wheel slip and maximizing traction has been developed. The approach,
presented in Chapter 4, can be applied to various types of terrain because it does
not explicitly rely on complex wheel-ground interaction models, whose param-
eters are generally unknown. Instead of a model-based approach, the controller
uses the sensing capabilities of the rover to estimate rolling resistance as the
robot moves. The approach can be adapted to any kind of passive-wheeled rover
and runs in real time.
A sensor fusion involving inertial sensors, 3D-Odometry and visual motion
estimation is presented in Chapter 5. The experiments demonstrated how each
sensor contributes to increase the accuracy and robustness of the 3D pose esti-
mation. In particular, the inertial measurement unit helps to correct odometric
errors locally, when the robot overcomes sharp-shaped obstacles and when asym-
metric wheel slip occurs. Furthermore, the use of complementary sensors allows
increasing the robustness of the tracking algorithm against sensor failure: the
rover was able to keep track of its position even in the presence of inaccurate
visual motion information.
Because position error grows as a function of the traveled distance, the benefit
of combining all these approaches increases as the rover drives longer distances.
A Kinematic and Quasi-static Model of
SOLERO
This appendix describes the kinematic model of SOLERO and the quasi-static model that supports the slip minimization algorithm presented in Chapter 4.
In the figures below, the subscript r refers to the robot’s frame whereas W
refers to the global frame (world frame). The label N is used to denote a normal
force, T a traction force and Tω a wheel torque. In order to increase the read-
ability of the figures, the forces between the mechanical parts, i.e., the internal
forces, have been omitted. Only the relevant forces and torques are depicted.
Fig. A.2. Variables and dimensions of a bogie. Subscripts l and r denote the left and
right bogie respectively. The bogies are attached to the main body through pin joints
placed at points J and K.
Fig. A.4. Parameters of the front fork. The fork is attached to the main body through
points A and C1 .
Trajectory of Point P

The position of the front wheel center P is a function of the angle $\alpha_f$. Its coordinates are obtained using the parametric equations of the angles $\alpha$ and $\psi'$

$$\alpha(\alpha_f) = \frac{\pi}{2} - \alpha_f + \phi \quad (A.1)$$

$$\psi'(\alpha_f) = \arccos\left(\frac{c - b\cos\alpha}{\sqrt{b^2 + c^2 - 2bc\cos\alpha}}\right) + \arccos\left(\frac{b^2 + c^2 - 2bc\cos\alpha + d^2 - e^2}{2d\sqrt{b^2 + c^2 - 2bc\cos\alpha}}\right) \quad (A.2)$$
$$R_x = \cos(\pi - \alpha_f - \rho_c)\, R \qquad R_z = -\sin(\pi - \alpha_f - \rho_c)\, R$$
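A minimal sketch (ours; the link lengths below are placeholder values, not SOLERO's dimensions) evaluating A.1, A.2 and the wheel-center offset:

import numpy as np

def fork_angles(alpha_f, phi, b, c, d, e):
    """Fork linkage angles of eqs. A.1 and A.2."""
    alpha = np.pi / 2.0 - alpha_f + phi                        # (A.1)
    diag = np.sqrt(b**2 + c**2 - 2.0 * b * c * np.cos(alpha))  # linkage diagonal
    psi_p = (np.arccos((c - b * np.cos(alpha)) / diag)
             + np.arccos((diag**2 + d**2 - e**2) / (2.0 * d * diag)))  # (A.2)
    return alpha, psi_p

def wheel_center_offset(alpha_f, rho_c, R):
    """Coordinates of the front wheel center P for a fork angle alpha_f."""
    return (np.cos(np.pi - alpha_f - rho_c) * R,
            -np.sin(np.pi - alpha_f - rho_c) * R)

# Placeholder geometry [m] and angles [rad]:
alpha, psi_p = fork_angles(alpha_f=0.4, phi=0.1, b=0.10, c=0.15, d=0.12, e=0.08)
Rx, Rz = wheel_center_offset(alpha_f=0.4, rho_c=0.2, R=0.07)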
$$\begin{aligned}
m_{xF} &= m_{w+s}\, g \sin\theta & m_{yF} &= -m_{w+s}\, g \sin\phi\cos\theta & m_{zF} &= -m_{w+s}\, g \cos\phi\cos\theta \\
m_{xR} &= m_{w+s}\, g \sin\theta & m_{yR} &= -m_{w+s}\, g \sin\phi\cos\theta & m_{zR} &= -m_{w+s}\, g \cos\phi\cos\theta \\
m_{xBR} &= m_w\, g \sin\theta & m_{yBR} &= -m_w\, g \sin\phi\cos\theta & m_{zBR} &= -m_w\, g \cos\phi\cos\theta \\
m_{xBL} &= m_w\, g \sin\theta & m_{yBL} &= -m_w\, g \sin\phi\cos\theta & m_{zBL} &= -m_w\, g \cos\phi\cos\theta \\
M_x &= (M - 4m_w - 2m_{w+s})\, g \sin\theta & M_y &= -(M - 4m_w - 2m_{w+s})\, g \sin\phi\cos\theta & M_z &= -(M - 4m_w - 2m_{w+s})\, g \cos\phi\cos\theta
\end{aligned}$$
$$-R_1 T_{13} + T_{w13} = 0 \qquad -R_2 T_9 + T_{w9} = 0 \qquad -R_3 T_2 + T_{w2} = 0$$
$$-R_4 T_3 + T_{w3} = 0 \qquad -R_5 T_{14} + T_{w14} = 0 \qquad -R_6 T_{15} + T_{w15} = 0$$

$$-\big(M_z + 2 m_{zBL} + 2 m_{zBR} + m_{zF} + m_{zR} + T_{13}\cos\tau_1 + T_9\cos\tau_2 + T_2\cos\tau_3 + T_3\cos\tau_4 + T_{14}\cos\tau_5 + T_{15}\cos\tau_6\big) + N_{13}\sin\tau_1 + N_9\sin\tau_2 + N_2\sin\tau_3 + N_3\sin\tau_4 + N_{14}\sin\tau_5 + N_{15}\sin\tau_6 = 0$$

$$2B\, m_{zBR} + T_{8,10x} + M_y(c_b + z_g) + L_9 R_2 \sin\tau_2 + B\big(T_2\cos\tau_3 + T_3\cos\tau_4 + N_{14}\sin\tau_5 + N_{15}\sin\tau_6\big) - \big(2k(m_{yBL} + m_{yBR}) + 2B\, m_{zBL} + M_z y_g + B(T_{14}\cos\tau_5 + T_{15}\cos\tau_6 + N_2\sin\tau_3 + N_3\sin\tau_4) + d_j(L_9 + m_{yR})\sin\omega_j\big) = 0$$

$$-\big(b\, F_{8,11x} + b\, F_{8,11z} + d_{ja}\, m_{zF} + F_{8,19x}\, s + T_{w9} + M_z x_g + d_{ja}\, T_{13}\cos\tau_1 + d_j\, N_9\sin(\tau_2 - \omega_j) + d_j\, m_{xR}\sin\omega_j\big) + F_{7,8x}\, s + M_x(c_b + z_g) + d_j\, T_9\cos(\tau_2 - \omega_j) + d_j\, m_{zR}\cos\omega_j + d_{ja}\, N_{13}\sin\tau_1 = 0$$
This set of equations still contains the internal forces $F_{7,8x}$, $F_{8,10x}$, $F_{8,10z}$, $F_{8,11x}$, $F_{8,11z}$ and $F_{8,19x}$. These unknowns are removed using Gauss–Jordan elimination.
with F a vector containing the unknown forces and M the vector containing the
torques. The matrices Q, A and B contain the information about the gravity,
the rover’s geometry and state. Equation A.4 is rewritten to express the forces
as a function of the wheel torques
with pinv(Q), the pseudo-inverse of Q. Now the linear dependence of the torques
is proven using the null space property.
with
Equation A.11 proves that the torques are linearly dependent. Furthermore,
this confirms that the solution space is of dimension m−1 where m is the number
of wheels.
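This property can be checked numerically; the following minimal sketch (ours, with random stand-ins for the actual A and B matrices) verifies that the torque solution space has dimension m − 1 = 5:

import numpy as np

rng = np.random.default_rng(0)
B = rng.normal(size=(1, 6))   # stand-in for B_1x6 (geometry and state)
A = rng.normal(size=(1,))     # stand-in for A_1x1 (gravity terms)

# One particular torque vector satisfying the constraint B @ M = A:
M0 = np.linalg.lstsq(B, A, rcond=None)[0]

# Null space of B: directions along which the six wheel torques can vary
# while still satisfying the quasi-static equilibrium.
_, s, _ = np.linalg.svd(B)
null_dim = 6 - np.sum(s > 1e-12)
print(null_dim)               # 5, i.e. m - 1 degrees of freedom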
$$E_{6\times 1} = \frac{A_{1\times 1}}{\sum_{i=1}^{6} B_{1\times 6}(1, i)}\,\begin{bmatrix} 1 & 1 & 1 & 1 & 1 & 1 \end{bmatrix}^T \quad (A.12)$$
B Linearized Models
The linearized forms of the nonlinear equations of Chap. 5 are developed in this
appendix. In what follows, c and s correspond to the cosine and sine functions. A
variable over-lined with a bar denotes an operating point value and the variable
h corresponds to the sampling time.
$$\begin{bmatrix}
-(\bar{\ddot{z}} - g)\, c\bar\theta - \bar{\ddot{x}}\, c\bar\psi\, s\bar\theta - \bar{\ddot{y}}\, s\bar\theta\, s\bar\psi & \bar{\ddot{y}}\, c\bar\theta\, c\bar\psi - \bar{\ddot{x}}\, c\bar\theta\, s\bar\psi \\
\bar{\ddot{x}}\, c\bar\theta\, c\bar\psi\, s\bar\phi - (\bar{\ddot{z}} - g)\, s\bar\theta\, s\bar\phi + \bar{\ddot{y}}\, c\bar\theta\, s\bar\phi\, s\bar\psi & \bar{\ddot{y}}(c\bar\psi\, s\bar\theta\, s\bar\phi - c\bar\phi\, s\bar\psi) + \bar{\ddot{x}}(-c\bar\phi\, c\bar\psi - s\bar\theta\, s\bar\phi\, s\bar\psi) \\
\bar{\ddot{x}}\, c\bar\theta\, c\bar\phi\, c\bar\psi - (\bar{\ddot{z}} - g)\, c\bar\phi\, s\bar\theta + \bar{\ddot{y}}\, c\bar\theta\, c\bar\phi\, s\bar\psi & \bar{\ddot{x}}(c\bar\psi\, s\bar\phi - c\bar\phi\, s\bar\theta\, s\bar\psi) + \bar{\ddot{y}}(c\bar\phi\, c\bar\psi\, s\bar\theta + s\bar\phi\, s\bar\psi)
\end{bmatrix} \quad (B.2)$$
where

$$F_\omega = \begin{bmatrix}
e^{-\tau_{\omega x} h} & 0 & 0 & 0 & 0 & 0 \\
h & 1 + h(\bar\omega_y c\bar\phi - \bar\omega_z s\bar\phi)\tan\bar\theta & h\, s\bar\phi\tan\bar\theta & \dfrac{h}{c^2\bar\theta}(\bar\omega_z c\bar\phi + \bar\omega_y s\bar\phi) & h\, c\bar\phi\tan\bar\theta & 0 \\
0 & 0 & e^{-\tau_{\omega y} h} & 0 & 0 & 0 \\
0 & h(-\bar\omega_z c\bar\phi - \bar\omega_y s\bar\phi) & h\, c\bar\phi & 1 & -h\, s\bar\phi & 0 \\
0 & 0 & 0 & 0 & e^{-\tau_{\omega z} h} & 0 \\
0 & \dfrac{h}{c\bar\theta}(\bar\omega_y c\bar\phi - \bar\omega_z s\bar\phi) & h\,\dfrac{s\bar\phi}{c\bar\theta} & \dfrac{h}{c\bar\theta}(\bar\omega_z c\bar\phi + \bar\omega_y s\bar\phi)\tan\bar\theta & h\,\dfrac{c\bar\phi}{c\bar\theta} & 1
\end{bmatrix} \quad (B.4)$$
C The Gauss–Markov Process
A first-order Gauss–Markov process P is characterized by its auto-correlation function

$$R_P(t) = \sigma^2 e^{-\tau |t|} \quad (C.1)$$

where $1/\tau$ is the time constant of the process and $\sigma^2$ its variance. The power spectral density function of P is given by

$$S_P(j\omega) = \int_{-\infty}^{\infty} R_P(t)\, e^{-j\omega t}\, dt = \frac{2\sigma^2\tau}{\omega^2 + \tau^2} \quad (C.2)$$

$$S_P(j\omega) = |H(j\omega)|^2\, S_U(j\omega) \quad (C.4)$$
where $S_U(j\omega)$ is a unity white-noise signal. Figure C.1 depicts the double integration of a Gauss–Markov process $p_1$. The signal u(t) is a unity white noise and $p_2$ and $p_3$ are, respectively, the first and second integrals of $p_1$. The transfer functions between u and $p_1$, $p_2$ and $p_3$ are, respectively,

$$H_1(s) = \frac{\sqrt{2\sigma^2\tau}}{s + \tau} \qquad H_2(s) = \frac{\sqrt{2\sigma^2\tau}}{s(s + \tau)} \qquad H_3(s) = \frac{\sqrt{2\sigma^2\tau}}{s^2(s + \tau)} \quad (C.5)$$
Fig. C.1. Double integration of a Gauss–Markov process: $u(t) \rightarrow \sqrt{2\sigma^2\tau}/(s+\tau) \rightarrow p_1(t) \rightarrow s^{-1} \rightarrow p_2(t) \rightarrow s^{-1} \rightarrow p_3(t)$
where h is the sampling time. The covariance matrix of the process, Q, is obtained by computing the expectations of $p_1$, $p_2$ and $p_3$. The covariance of two sequences $p_i$ and $p_j$ is written as

$$E\{p_i p_j\} = \int_0^h \int_0^h h_i(\lambda)\, h_j(\varepsilon)\, E[u(\lambda) u(\varepsilon)]\, d\lambda\, d\varepsilon \quad (C.9)$$
$$E\{p_i p_j\} = \int_0^h \int_0^h h_i(\lambda)\, h_j(\varepsilon)\, \delta(\lambda - \varepsilon)\, d\lambda\, d\varepsilon = \int_0^h h_i(\lambda)\, h_j(\lambda)\, d\lambda \quad (C.10)$$
All the elements of the covariance matrix Q can be computed using C.10. Only $E\{p_2 p_2\}$ and $E\{p_2 p_3\}$ are derived here; the other terms are obtained in a similar way
$$E\{p_2 p_2\} = \int_0^h h_2(\lambda)\, h_2(\lambda)\, d\lambda = \frac{2\sigma^2}{\tau}\int_0^h \left(1 - e^{-\tau\lambda}\right)^2 d\lambda = \frac{2\sigma^2}{\tau}\left[h - \frac{2}{\tau}\left(1 - e^{-\tau h}\right) + \frac{1}{2\tau}\left(1 - e^{-2\tau h}\right)\right] \quad (C.11)$$

$$E\{p_2 p_3\} = \int_0^h h_2(\lambda)\, h_3(\lambda)\, d\lambda = \int_0^h \sqrt{\frac{2\sigma^2}{\tau}}\left(1 - e^{-\tau\lambda}\right)\cdot\sqrt{\frac{2\sigma^2}{\tau^3}}\left(\tau\lambda - 1 + e^{-\tau\lambda}\right) d\lambda = \frac{2\sigma^2}{\tau^2}\left[\frac{\tau h^2}{2} - h\left(1 - e^{-\tau h}\right) + \frac{1 - e^{-\tau h}}{\tau} - \frac{1 - e^{-2\tau h}}{2\tau}\right] \quad (C.12)$$
Finally, Q becomes

$$Q = \begin{bmatrix} E\{p_1 p_1\} & E\{p_1 p_2\} & E\{p_1 p_3\} \\ E\{p_2 p_1\} & E\{p_2 p_2\} & E\{p_2 p_3\} \\ E\{p_3 p_1\} & E\{p_3 p_2\} & E\{p_3 p_3\} \end{bmatrix} \quad (C.13)$$
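The closed-form entries can be cross-checked against C.10 by numerical integration; a minimal sketch (ours):

import numpy as np
from scipy.integrate import quad

sigma2, tau, h = 0.2, 0.5, 0.1  # variance, inverse correlation time, step

# Impulse responses of H2(s) and H3(s) from eq. C.5:
h2 = lambda t: np.sqrt(2 * sigma2 / tau) * (1 - np.exp(-tau * t))
h3 = lambda t: np.sqrt(2 * sigma2 / tau**3) * (tau * t - 1 + np.exp(-tau * t))

# E{p2 p2} from C.10 by quadrature, compared with the closed form C.11:
numeric, _ = quad(lambda t: h2(t) * h2(t), 0.0, h)
closed = (2 * sigma2 / tau) * (h - 2 * (1 - np.exp(-tau * h)) / tau
                               + (1 - np.exp(-2 * tau * h)) / (2 * tau))
print(numeric, closed)          # the two values agree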
D Visual Motion Estimation
This appendix presents the principle of the visual motion estimation technique used for SOLERO. The technique computes an estimate of the 3D camera motion between two stereo-pair acquisitions¹. The six displacement parameters are computed on the basis of a set of 3D point-to-point matches established by tracking pixels in the image sequence acquired by the robot. The following figure summarizes the approach:
1. At time t, a stereo pair is acquired and the Harris corner detector [29] is ap-
plied to extract interest points from both images. Then, the point-matching
algorithm presented in [35] is used to find correspondences between the in-
terest points in both images. Finally, the cloud of 3D points is obtained
using stereovision and a second outlier rejection cycle is performed [36].
Stereovision is only computed for the interest points in order to reduce the
computation time.
2. At time t+1, a new stereo pair is acquired and the Harris points are again extracted from both images. Then, the correspondences between the interest points extracted in the left images (acquired at times t and t+1) are searched for using the same technique as in step 1.
3. Stereovision is used to compute the cloud of 3D points at time t+1.
4. Finally, the six displacement parameters between t and t+1 are computed using the least-squares minimization technique presented in [28]; a sketch of this alignment step follows the list.
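Step 4 is a 3D–3D alignment problem; the following minimal sketch (ours, an SVD-based least-squares solution in the spirit of [28]) recovers a rotation and translation from matched point clouds:

import numpy as np

def rigid_motion(P, Q):
    """Least-squares rotation R and translation t such that Q ~ R @ P + t,
    for matched 3D point sets P, Q of shape (3, n) (SVD-based method)."""
    cp = P.mean(axis=1, keepdims=True)
    cq = Q.mean(axis=1, keepdims=True)
    U, _, Vt = np.linalg.svd((Q - cq) @ (P - cp).T)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])  # guard against reflections
    R = U @ D @ Vt
    t = (cq - R @ cp).ravel()
    return R, t

# Synthetic check with 29 matched interest points:
P = np.random.default_rng(1).normal(size=(3, 29))
a = 0.1
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
Q = R_true @ P + np.array([[0.2], [0.0], [0.05]])
R, t = rigid_motion(P, Q)  # recovers the camera motion between t and t+1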
The approach is robust and provides accurate motion estimates. A precision
of 1% can be attained after a 100-m traverse. More details about the visual
motion estimation technique can be found in [36]. In particular, the error model
for the six displacement parameters is presented. This uncertainty model is used
in Chapter 5 for sensor fusion.
¹ The algorithm used in this project was developed at LAAS (Laboratoire d'Analyse et d'Architecture des Systèmes).
Fig. D.1. Principle of the visual motion estimation (illustration from LAAS)
References
1. IPC, http://www.cs.cmu.edu/afs/cs/project/tca/www/ipc/index.html
2. Life in the Atacama project, http://www.frc.ri.cmu.edu/atacama
3. NTP, http://www.ntp.org
4. Open Dynamics Engine, http://www.ode.org
5. Telemax, http://www.telerob.de
6. Andrade, G., Amar, F.B., Bidaud, P., Chatila, R.: Modeling robot-soil interaction
for planetary rover motion control. In: IEEE/RSJ International Conference on
Intelligent Robots and Systems, Victoria, Canada (1998)
7. Andrade-Cetto, J., Sanfeliu, A.: Environment Learning for Indoor Mobile Robots.
In: A Stochastic State Estimation Approach to Simultaneous Localization and
Map Building. Springer Tracts in Advanced Robotics, vol. 23, Springer, Heidelberg
(2006)
8. Arras, K.O.: An introduction to error propagation: Derivation, meaning and examples of equation $C_Y = F_X C_X F_X^T$. Technical Report EPFL-ASL-TR-98-01 R3, EPFL (2003)
9. Balaram, J.: Kinematic observers for articulated rovers. In: Proceedings of the 2000
IEEE International Conference on Robotics and Automation, San Francisco, CA
(2000)
10. Bar-Shalom, Y., Li, X.R.: Estimation and Tracking: Principles, Techniques, and
Software. Artech House, Boston, MA (1993)
11. Bares, J., Wettergreen, D.: Lessons from the development and deployment of Dante II. In: Proceedings of the 1997 Field and Service Robotics Conference (1997)
12. Barshan, B., Durrant-Whyte, H.F.: Inertial navigation systems for mobile robots. IEEE Transactions on Robotics and Automation (1995)
13. Baumgartner, E.T., Aghazarian, H., Trebi-Ollennu, A., Huntsberger, T.L., Garrett, M.S.: State estimation and vehicle localization for the FIDO rover. In: SPIE Proceedings of Sensor Fusion and Decentralized Control in Autonomous Robotic Systems III, Boston, MA, vol. 4196 (2000)
14. Bekker, G.: Theory of Land Locomotion, University of Michigan, Ann Arbor (1956)
15. Bekker, G.: Introduction to Terrain-Vehicle Systems. University of Michigan Press
(1969)
16. Bertrand, R., Lamon, P., Michaud, S., Schiele, A., Siegwart, R.: The solero rover
for regional exploration of planetary surfaces. European Geophysical Society, Geo-
physical Research Abstracts 5 (2003)
17. Bevly, D.M., Sheridan, R., Gerdes, J.C.: Integrating INS sensors with GPS velocity measurements for continuous estimation of vehicle sideslip and tire cornering stiffness. In: Proceedings of the American Control Conference, Arlington (June 2001)
18. Bonnafous, D., Lacroix, S., Simeon, T.: Motion generation for a rover on rough
terrains. In: Proceedings of the IEEE/RSJ International Conference on Intelligent
Robots and Systems (2001)
19. Borenstein, J., Feng, L.: Measurement and correction of systematic odometry errors
in mobile robots. IEEE Journal of Robotics and Automation 12(6), 869–880 (1996)
20. Brooks, C., Iagnemma, K., Dubowsky, S.: Vibration-based terrain analysis for
mobile robots. In: IEEE International Conference on Robotics and Automation,
Barcelona, Spain (2005)
21. Chahl, J.S., Srinivasan, M.V.: Reflective surfaces for panoramic imaging. Applied
Optics 36, 8275–8285 (1997)
22. Cheng, Y., Maimone, M.W., Matthies, L.: Visual odometry on the Mars Exploration Rovers. IEEE Robotics and Automation Magazine (June 2006)
23. Cozman, F., Krotkov, E., Guestrin, C.: Outdoor visual position estimation for
planetary rovers. Autonomous Robots 9(2) (2000)
24. Dissanayake, G., Sukkarieh, S., Nebot, E., Durrant-Whyte, H.: The aiding of a
low-cost strapdown inertial measurement unit using vehicle model constraints for
land vehicle applications. IEEE Transactions on Robotics and Automation 17(5),
731–747 (2001)
25. Estier, T., Crausaz, Y., Merminod, B., Lauria, M., Piguet, R., Siegwart, R.: An
innovative space rover with extended climbing abilities. In: Proceedings of Space
& Robotics, the Fourth International Conference and Exposition on Robotics in
Challenging Environments, Albuquerque, USA (2000)
26. Gancet, J., Lacroix, S.: PG2P: a perception-guided path planning approach for long range autonomous navigation in unknown natural environments. In: IEEE/RSJ International Conference on Intelligent Robots and Systems, USA (2003)
27. Goldberg, S.B., Maimone, M.W., Matthies, L.H.: Stereo vision and rover navigation
software for planetary exploration. In: IEEE Aerospace conference proceedings, Big
Sky, Montana, USA, March 2002, vol. 5, pp. 2025–2036 (2002)
28. Haralick, R., Joo, H., Lee, C.N., Zhuang, X., Vaidya, V.G., Kim, M.B.: Pose esti-
mation from corresponding point data. IEEE Transactions on Systems, Man and
Cybernetics 19(6), 1426–1445 (1989)
29. Harris, C., Stephens, M.: A combined corner and edge detector. In: Proceedings of the 4th Alvey Vision Conference, Manchester, August 1988, pp. 147–151 (1988)
30. Iagnemma, K., Dubowsky, S.: Mobile robot rough-terrain control (rtc) for planetary
exploration. In: Proceedings of the ASME Design Engineering Technical Confer-
ence, Baltimore, USA (2000)
31. Iagnemma, K., Dubowsky, S.: Vehicle wheel-ground contact angle estimation: with application to mobile robot traction control. In: 7th International Symposium on Advances in Robot Kinematics, ARK, Piran-Portoroz (June 2000)
32. Iagnemma, K., Dubowsky, S.: Mobile Robots in Rough Terrain: Estimation, Motion
Planning, and Control with application to Planetary Rovers, June 2004. Springer
Tracts in Advanced Robotics (STAR), vol. 12 (2004)
33. Iagnemma, K., Shibley, H., Dubowsky, S.: On-line terrain parameter estimation for
planetary rovers. In: IEEE International Conference on Robotics and Automation,
Washington DC, USA (2002)
34. Iagnemma, K., Shibly, H., Rzepniewski, A., Dubowsky, S.: Planning and control
algorithms for enhanced rough-terrain rover mobility. In: Proceedings of the Sixth
International Symposium on Artificial Intelligence, Robotics and Automation in
Space, i-SAIRAS, Montreal, Canada (June 2001)
35. Jung, I.-K., Lacroix, S.: A robust interest point matching algorithm. In: Interna-
tional Conference on Computer Vision, Vancouver, Canada (2001)
36. Jung, I.-K., Lacroix, S.: Simultaneous localization and mapping with stereovision.
In: International Symposium on Robotics Research, Siena, Italy (2003)
37. Kalker, J.J.: Three dimensional elastic bodies in rolling contact. Kluwer Academic
Publishers, Dordrecht (1990)
38. Kemurdjian, A.L., Gromov, V., Mishkinyuk, V., Kucherenko, V., Sologub, P.: Small Marsokhod configuration. In: International Conference on Robotics and Automation, Nice (1992)
39. Kubota, T., Kuroda, Y., Kunii, Y., Nakatani, I.: Micro planetary rover Micro5. In: Proceedings of the Fifth International Symposium on Artificial Intelligence, Robotics and Automation in Space (ESA SP-440), Noordwijk, pp. 373–378 (1999)
40. Lacroix, S., Mallet, A., Bonnafous, D., Bauzil, G., Fleury, S., Herrb, M.: Au-
tonomous rover navigation on unknown terrains: functions and integration. In-
ternational Journal of Robotics Research 21(10–11), 913–942 (2002)
41. Lauria, M.: Nouveaux concepts de locomotion pour véhicules tout-terrain robotisés. Thesis nr. 2833, EPFL, Lausanne (2003)
42. Lauria, M., Conti, F., Maeusli, P.-A., Van Winnendael, M., Bertrand, R., Sieg-
wart, R.: Design and control of an innovative micro-rover. In: Proceedings of 5th
ESA Workshop on Advanced Space Technologies for Robotics and Automation,
Noordwijk (1998)
43. Lauria, M., Piguet, Y., Siegwart, R.: Octopus: an autonomous wheeled climbing robot. In: Proceedings of the Fifth International Conference on Climbing and Walking Robots, Bury St Edmunds and London, UK (2002)
44. Lauria, M., Shooter, S., Siegwart, R.: Topological analysis of robotic n-wheeled
ground vehicles. In: Proceedings of the 4th International Conference on Field and
Service Robotics, Yamanashi, Japan (2003)
45. Lemaire, T., Berger, C., Jung, I.K., Lacroix, S.: Vision-based slam: Stereo and
monocular approaches. International Journal of Computer Vision 74(3) (September
2007)
46. Leppanen, I., Salmi, S., Halme, A.: WorkPartner, HUT Automation's new hybrid walking machine. In: CLAWAR 1998 First International Symposium, Brussels (1998)
47. Mabie, H.H., Reinholtz, C.F.: Mechanisms and Dynamics of Machinery, 4th edn. John Wiley and Sons, New York (1987)
48. Mallet, A., Lacroix, S., Gallo, L.: Position estimation in outdoor environments us-
ing pixel tracking and stereovision. In: IEEE International Conference on Robotics
and Automation, San Francisco, USA (2000)
49. Manyika, J., Durrant-Whyte, H.: Data Fusion and Sensor Management: A Decentralized Information-Theoretic Approach (1994)
50. Michaud, S., Schneider, A., Bertrand, R., Lamon, P., Siegwart, R., Van Winnendael, M., Schiele, A.: SOLERO: Solar-powered exploration rover. In: Proceedings of the 7th ESA Workshop on Advanced Space Technologies for Robotics and Automation, Noordwijk, The Netherlands (2002)
51. Nebot, E., Sukkarieh, S., Durrant-Whyte, H.: Inertial navigation aided with gps
information. In: Proceedings of the Fourth Annual Conference of Mechatronics and
Machine Vision in Practice (1997)
52. Ollis, M., Hermann, H., Singh, S.: Analysis and design of panoramic stereo vi-
sion using equi-angular pixel cameras. Technical report CMU-RI-TR-99-04, CMU
(1999)
53. Olson, C.F., Matthies, L.H., Schoppers, M., Maimone, M.W.: Stereo ego-motion improvements for robust rover navigation. In: IEEE International Conference on Robotics and Automation, Seoul, Korea (2001)
54. Peynot, T., Lacroix, S.: Enhanced locomotion control for a planetary rover. In:
Proceedings of the 2003 IEEE/RSJ Intl. Conference on Intelligent Robots and
Systems, Las Vegas, USA (2003)
55. Roumeliotis, S.I., Bekey, G.A.: 3D localization for a Mars rover prototype. In: 5th International Symposium on Artificial Intelligence, Robotics and Automation in Space (i-SAIRAS 1999), ESTEC, The Netherlands (1999)
56. Roumeliotis, S.I., Johnson, A.E., Montgomery, J.F.: Augmenting inertial naviga-
tion with image-based motion estimation. In: IEEE International Conference on
Robotics and Automation, Washington, USA (2002)
57. Scheding, S., Dissanayake, G., Nebot, E.M., Durrant-Whyte, H.: An experiment
in autonomous navigation of an underground mining vehicle. IEEE Transactions
on Robotics and Automation 15(1), 85–95 (1999)
58. Siegwart, R., Estier, T., Crausaz, Y., Merminod, B., Lauria, M., Piguet, R.: In-
novative concept for wheeled locomotion in rough terrain. In: Proceedings of the
Sixth International Conference on Intelligent Autonomous Systems, Venice, Italy
(2000)
59. Siegwart, R., Lamon, P., Estier, T., Lauria, M., Piguet, R.: Innovative design for
wheeled locomotion in rough terrain. Journal of Robotics and Autonomous Sys-
tems 40(2–3), 151–162 (2002)
60. Singh, S., Simmons, R., Smith, T., Stentz, A., Verma, V., Yahja, A., Schwehr, K.: Recent progress in local and global traversability for planetary rovers. In: IEEE International Conference on Robotics and Automation, San Francisco, USA (2000)
61. Stone, H.W.: Mars Pathfinder Microrover: a low-cost, low-power spacecraft. In: Proceedings of the 1996 AIAA Forum on Advanced Developments in Space Robotics, Madison, WI (1996)
62. Strelow, D., Mishler, J., Singh, S., Herman, H.: Extending shape-from-motion to noncentral omnidirectional cameras. In: IEEE/RSJ International Conference on Intelligent Robots and Systems, Hawaii, USA (2001)
63. Strelow, D., Singh, S.: Online motion estimation from image and inertial measure-
ments. In: 11th International Conference on Advanced Robotics, Portugal (2003)
64. Thueer, T., Lamon, P., Krebs, A., Siegwart, R.: CRAB: exploration rover with advanced obstacle negotiation. In: Proceedings of the 9th ESA Workshop on Advanced Space Technologies for Robotics (2006)
65. Titterton, D.H., Weston, J.L.: Strapdown inertial navigation technology. In: Nav-
igation and Avionics, Stevenage (1997)
66. Tomatis, N.: Hybrid, Metric - Topological, Mobile Robot Navigation. Thesis nr.
2444, EPFL (2001)
67. Trebi-Ollennu, A., Huntsberger, T., Yang, C., Baumgartner, E.T., Kennedy, B., Schenker, P.: Design and analysis of a sun sensor for planetary rover absolute heading detection. IEEE Transactions on Robotics and Automation 17(6), 939–947 (2001)
68. Tunstel, E.: Evolution of autonomous self-righting behaviors for articulated nanorovers. In: Proceedings of the Fifth International Symposium on Artificial Intelligence, Robotics and Automation in Space (ESA SP-440), Noordwijk, pp. 341–346 (1999)
69. van der Burg, J., Blazevic, P.: Anti-lock braking and traction control concept for all-
terrain robotic vehicles. In: Proceedings of the 1997 IEEE International Conference
on Robotics and Automation, Albuquerque, USA (April 1997)
70. Van Winnendael, M., Visenti, G., Bertrand, R., Rieder, R.: Nanokhod microrover
heading towards mars. In: Proceedings of Fifth International Symposium on Ar-
tificial Intelligence Robotics and Automation in Space (ESA SP-440), Noordwijk,
pp. 69–76 (1999)
71. Vieville, T., Romann, F., Hotz, B., Mathieu, H., Buffa, M., Robert, L., Facao,
P.E.D.S., Faugeras, O.D., Audren, J.T.: Autonomous navigation of a mobile robot
using inertial and visual cues. In: IEEE/RSJ International Conference on Intelli-
gent Robots and Systems, Tokyo, Japan (1993)
72. Volpe, R., Balaram, J., Ohm, T., Ivlev, R.: Rocky 7: A next generation mars rover
prototype. Journal of Advanced Robotics 11(4) (1997)
73. Wada, M., Kang, S.Y., Hashimoto, H.: High accuracy multisensor road vehicle
state estimation. In: 26th Annual Conference of the IEEE Industrial Electronics
Society, IECON (2000)
74. Yoshida, K., Hamano, H., Watanabe, T.: Slip-based traction control of a plane-
tary rover. In: Proceedings of the 8th International Symposium on Experimental
Robotics, ISER, Italy (2002)
Index
ABS, 33
accelerometers, 16, 61, 70, 91
algorithm
    fixed-point, 40
    gradient, 41
    simplex, 41
Atacama, 1
bogie, 10, 22, 84
contact angles, 27, 34, 42, 50
control
    predictive, 46, 48
    reactive, 46
error propagation, 56, 57
fork, 10, 85
friction coefficient, 8, 38, 42, 68
hyperstatic, 34
IMU, 16, 55, 61, 65
inertial sensor, 54, 65
information filter, 59, 60
joint
    pin, 35
    spherical, 34
Kalman filter, 16, 59
locomotion, 3, 4, 7
magneto-resistive, 13
MER, 1, 3, 54, 78
mobility, 34
nonholonomic, 26, 66
omnicam, 14
parallel mechanism, 10
quasi-static, 34, 43, 86
rolling resistance, 43, 46
tactile wheel, 47
virtual center of rotation, 11
VME, 55, 62, 73, 97
wheel encoders, 21, 28, 54