Metodologias para a Locomoção de Robots
Humanóides
Carlos Rui Neves
Dissertação para obtenção do Grau de Mestre em
Engenharia Electrotécnica e de Computadores
Júri
Presidente: Doutor Carlos Silvestre
Orientador: Doutor Rodrigo Ventura
Co-Orientador: Doutor João Sequeira
Vogais: Doutor Vítor Santos
Dezembro de 2010
Acknowledgments
Over the course of this dissertation, many were the people who supported me, some of them
directly, others simply by being there. First, I would like to thank my family for all the opportunities
I have been granted and that eventually led me to this point. My parents have always taught me to
do my best in every aspect of life and they have helped me every time I needed. I am deeply thankful
to both. Also, to my girlfriend, who has patiently supported me and cheered me up whenever it was needed.
Secondly, many thanks to my advisor, Doutor Rodrigo Ventura, and my co-advisor, Doutor João
Sequeira, for all they have put into this work, all the ideas and the platforms I was provided to achieve
my goals. I would also like to thank Limor Schweitzer and the users of Robosavvy Forums, who have
shared their time and knowledge in order to improve the Bioloid platform.
A special acknowledgement has to be made to all the users in Robosavvy Forums, who have
contributed with much information related not only to the robot but also about all of its systems,
many useful tips and possible improvements, and to Ricardo Oliveira, who has been working on the
Bioloid in order to implement many improvements that will make the robot able to run the controller
proposed in this work.
Finally, a big cheers for all my colleagues and friends who have helped me through many classes
and, ultimately, through this dissertation. They made my stay at IST a great experience. A special
thanks to Ricardo Malhado, Ricardo Preguiça, Dário Figueira, Bruno Dias, Sérgio Brás and Tiago
Gaspar, who have put up with me the most!
Obviously, none of this would have been possible without Instituto Superior Técnico and Institute
for Systems and Robotics, where I have been studying for the past six years. For that, I thank all
those responsible for both institutions.
Abstract
The controlled locomotion of a legged robot presents several challenges, mainly due to the instability
inherent to legged locomotion and to its sensitivity to slope changes and other obstacles.
Complex sensory systems and demanding processing abilities are often required to maintain balance,
which, together with precise servo control, leads to high energy consumption. This dissertation
deals with some of these problems by creating a simple controller and implementing a learning system
that allows a Bioloid Humanoid robot to adapt to different slopes while walking and to maintain its
balance.
In order to reach this goal, a simulation of this robot on different platforms was built in the
Webots simulator and a controller was developed in Python. The controller is centered around a
quaternion-based Extended Kalman Filter that uses the measurements of a six-axis inertial measurement
unit as inputs to estimate the attitude of the torso. Using this estimate, it is possible to
create a controller that adjusts to new ground slopes and changes the walking gait. The walking gaits
are defined in motion files and, using interpolation, it is possible to adapt them to different ground slopes.
Finally, a learning system was developed with the goal of improving the performance of the
controller by identifying the ground slope without resorting to the position controller, thereby
increasing the speed of the algorithm. Several simulations were run in Webots and the results presented
show the success of the algorithm.
A Bioloid Comprehensive Kit has been purchased and the Humanoid model has been assembled and
tested. This has allowed the creation of a better virtual model and will allow a future implementation
of the developed controller on the robot.
Keywords: Humanoid robot, Legged Locomotion, Bioloid, Inverted Pendulum, Quaternions, Ex-
tended Kalman Filter, Webots.
Resumo
O controlo da locomoção de um robot humanóide apresenta diversos desafios, essencialmente devido
à instabilidade inerente a um movimento de marcha e devido ao facto de ser muito sensível a alterações
de declives e outros obstáculos. De modo a que se consiga manter o equilíbrio de um robot humanóide
e torná-lo adaptável a alterações são necessários um sistema sensorial complexo e uma boa capacidade
de processamento. Para além disso, a manutenção das forças de torque em todos os servos e a
alimentação dos sensores aumenta o consumo de energia. Esta dissertação lida com alguns destes
problemas criando um controlador simples e implementando um sistema de aprendizagem que permite
a um robot humanóide Bioloid adaptar-se a diferentes inclinações de terreno enquanto anda e manter
o equilíbrio.
Para atingir este objectivo, foi criado um modelo de simulação do robot humanóide em diversas
plataformas no programa Webots e desenvolveu-se um controlador em Python. Este controlador está
centrado em torno de um Filtro de Kalman Estendido baseado em quaterniões, que utiliza leituras de
uma unidade inercial de seis eixos como entrada para estimar a atitude do torso do robot. Utilizando
esta estimativa, é possível criar um controlador para se ajustar a novas inclinações do chão e alterar
o porte de marcha. Estes portes estão definidos em ficheiros de movimento e é possível adaptar os
ficheiros para diferentes declives utilizando métodos de interpolação.
Para terminar, um sistema de aprendizagem foi desenvolvido com o objectivo de melhorar o
desempenho do controlador, permitindo a identificação da inclinação do chão sem recorrer ao controlador
e melhorar a velocidade do algoritmo. Diversas simulações foram corridas no Webots e os resultados
apresentados demonstram o sucesso do algoritmo.
Um Bioloid Comprehensive Kit foi adquirido e o modelo de um humanóide foi montado e testado.
Isto permitiu o desenvolvimento de um bom modelo virtual e permite uma futura implementação do
controlador no robot humanóide.
Palavras-chave: Robot Humanóide, Marcha, Bioloid, Pêndulo Invertido, Quaterniões, Filtro de
Kalman Estendido, Webots.
Contents
1 Introduction 1
1.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 Main contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.4 Dissertation outline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
2 State of the art 5
3 Webots Simulator 9
3.1 Building the virtual model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
3.2 Developing the controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
3.3 Simulation and Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
3.4 Validation of the Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
4 Static Balance 15
4.1 General Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
4.2 Quaternion Based Extended Kalman Filter (EKF) . . . . . . . . . . . . . . . . . . . . 16
4.2.1 Attitude representation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
4.2.2 3D Linear Inverted Pendulum Model . . . . . . . . . . . . . . . . . . . . . . . . 18
4.2.3 Matrices and Operators Definitions . . . . . . . . . . . . . . . . . . . . . . . . . 19
4.2.4 Attitude Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
4.3 Error Determination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
4.4 Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
4.5 Other approaches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
4.5.1 Jacobian Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
4.5.2 Zero Moment Point and Center of Pressure . . . . . . . . . . . . . . . . . . . . 27
4.6 Simulation Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
4.7 Results and Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
5 Walking Balance 35
5.1 General Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
5.2 The learning table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
5.3 Slope Interpolator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
5.4 Slope Detector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
5.5 Gait Generator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
5.5.1 Frontal Slope . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
5.5.2 Lateral Slope . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
5.5.3 Combination of Slopes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
5.6 Simulation Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
5.7 Results and Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
6 Conclusions 49
Bibliography 51
List of Figures
2.1 The Honda humanoids . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.2 The Sony QRIO . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
3.1 Real vs. Simulated Bioloid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
3.2 The HUVRobotics 6-Axis IMU . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
3.3 Simulation Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
3.4 Adjusted measurements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
4.1 Static Balance Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
4.2 The Bioloid Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
4.3 Static Balance Error Determination . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
4.4 Compensating Slopes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
4.5 Jacobian Controller Diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
4.6 Static Balance Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
4.7 Rotation around x . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
4.8 xCoM positions vs. xEKF estimations . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
4.9 Faster Rotation around x . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
4.10 xCoM positions vs. xEKF estimations . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
4.11 Rotation around z . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
4.12 xCoM positions vs. xEKF estimations . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
4.13 Rotation around x and z . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
4.14 xCoM positions vs. xEKF estimations . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
4.15 Ramp rotation and Robot attitude, with σ²max = 50σ²real . . . . . . . . . . . . . . . . 33
4.16 xCoM positions vs. xEKF estimations with σ²max = 50σ²real . . . . . . . . . . . . . . . 34
5.1 Walking Balance Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
5.2 Example of a multiple frontal slope platform . . . . . . . . . . . . . . . . . . . . . . . 41
5.3 xEKF (t) in assisted mode for ψx = 0 and ψz = 0 . . . . . . . . . . . . . . . . . . . . . 41
5.4 xEKF (t) in assisted mode for ψx = 0 and ψz = 0 . . . . . . . . . . . . . . . . . . . . . 42
5.5 xEKF (t) in assisted mode when playing different gaits on a ψx = 4 slope . . . . . . . 43
5.6 xEKF (t) in assisted mode when playing different gaits on a ψz = 4 slope . . . . . . . 44
5.7 xEKF (t) in autonomous mode on single slope ramp . . . . . . . . . . . . . . . . . . . . 44
5.8 xEKF (t) with high noise variance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
5.9 xEKF (t) in autonomous mode, without learning table . . . . . . . . . . . . . . . . . . 46
5.10 xEKF (t) in autonomous mode, with learning table . . . . . . . . . . . . . . . . . . . . 47
List of Tables
3.1 Tipping Point angles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
3.2 Variance of the 6-axis IMU . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
5.1 Example of a populated learning table for a frontal balance . . . . . . . . . . . . . . . 37
5.2 Variance of the noise models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
1 Introduction
Contents
1.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 Main contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.4 Dissertation outline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
The importance of robotics in today's society is undeniable. Autonomous machines, with greater
strength, precision and speed than humans, are able to perform a multitude of tasks, such as exploring
hazardous environments, handling dangerous chemicals, assisting doctors in surgery and performing
assembly line tasks, that would be too difficult or expensive for a human to do.
In the past few years, thanks to the technological advances in the fields of microprocessors, sensors
and image handling, a wave of new robots is surfacing: small and cheap robots that work together
to accomplish a task; large and complex robots, with many sensing and acting systems that are fully
autonomous in different environments; and robots that are able to emulate emotions and are more
easily accepted by the general population.
The word robot first appeared back in the 1920s, in a play by Karel Čapek named
Rossum's Universal Robots [1], which dealt with what we now call androids, or humanoid robots.
While this word now stands for most autonomous machines, the original idea is still alive: to create
anthropomorphic robots. These robots are able to blend into our human-oriented world more easily,
and most people feel more comfortable dealing with anthropomorphic robots. With the advances
mentioned before, we are now able to build humanoid robots with a multitude of sensing abilities and
with good adaptation to the world around them.
1.1 Motivation
For humanoid robots to be able to blend in, there are numerous problems that need to be
handled. One of the most complex challenges with humanoid robots is locomotion. Legged locomotion
on humans is extremely complex, not only thanks to the many muscles of the leg and foot, but also to
the timing of certain movements, like the heel strike and the swing leg movement. In spite of this, human
locomotion is automatic, does not require much concentration or energy, and we do not need
precise knowledge of each muscle.
When implementing a walking gait on a robot, the resources required for keeping it in balance,
precisely controlling each servo and adapting to different terrains are enormous. Heavy computation
and communication are usually required to evaluate the positions of the servos and the center of mass,
most approaches require image processing to predict changes in the terrain, and precise knowledge of the
characteristics of the humanoid is needed to keep it balanced. Another problem is the energy consumption needed
to keep all the systems running and controlling all the servos.
While a passive walker humanoid is a good solution to reduce the algorithm complexity and
consumption, the design and construction of a robot with this ability is extremely complicated. In
addition to that, both passive and active walkers share a common problem: the capacity to
adapt when facing different terrain conditions and slopes.
1.2 Objectives
This dissertation addresses the implementation of an adaptive walking gait using a simplified
model for humanoid robots by combining previously developed motion files to match the ground slope
estimates. These estimates are obtained from the measurements of inertial sensors and do not involve
heavy computations, which allows robots to dedicate fewer resources to walking, enabling them to
improve other systems, like task or path planning.
1.3 Main contributions
This work addresses two situations: static balance and an adaptive walking gait. The static
balance consists of a humanoid robot keeping its balance while the platform on which it is standing
tilts at an arbitrary speed along an arbitrary horizontal axis. This objective is achieved by using
a quaternion based Kalman Filter, fed by the readings of an Inertial Measuring Unit, comprising a
tri-axial gyroscope and a tri-axial accelerometer, that estimates the slope of the floor and a simple
position controller to maintain balance.
The adaptive walking gait situation uses the same attitude estimation system as above, while the
robot walks on ramps of different slopes. A new control system is implemented using generated
open-loop walking gait sequences, which are created from user-made gaits using linear interpolation,
and a slope detector, which uses the knowledge of the current gait being used and the estimated
position of the center of mass to identify a change of slope.
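As a minimal illustration of the accelerometer half of the attitude estimation just described, the sketch below recovers pitch and roll from a single static accelerometer reading. The axis convention (z pointing up, readings in m/s²) is an assumption for this example, not necessarily the convention used in this work.

```python
import math

def tilt_from_accelerometer(ax, ay, az):
    """Pitch and roll (radians) from one accelerometer sample,
    assuming the sensor is static so only gravity is measured.
    Axis convention (z up) is an assumption for this sketch."""
    pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    roll = math.atan2(ay, az)
    return pitch, roll

# On flat ground only gravity along z is sensed: zero pitch and roll.
print(tilt_from_accelerometer(0.0, 0.0, 9.81))
```

In a quaternion-based Extended Kalman Filter, an estimate of this kind acts as the drift-free (but noisy) correction to the integrated gyroscope readings.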
1.4 Dissertation outline
Many solutions are being studied and have been presented by companies and research groups
addressing the problem of humanoid modelling and locomotion, ranging from highly complex systems
to passive walkers. Some of these solutions are briefly reviewed in the second chapter of this work.
In the third chapter, the Webots simulation environment is analysed and the development of a
virtual model of the Bioloid Humanoid is presented, based on a previous work by the Mechanics
Department of Instituto Superior Técnico.
Chapters four and five deal with two simulation setups. In chapter four, a controller is imple-
mented so that the Bioloid Humanoid can be kept in balance while standing on a moving platform,
featuring an Extended Kalman Filter and a simple position error correction controller. In chapter
five, this controller is used to estimate slopes in order to generate gaits from pre-programmed gaits.
An unsupervised learning system is also implemented for a faster recognition of slopes.
The conclusions of this work and our proposals for future work are presented in the sixth and final
chapter.
2 State of the art
In the field of legged locomotion, two main areas can be identified. On one hand, we have robots
with controlled walking gaits, where the trajectory of the center of mass is calculated in advance and
the trajectory of each servo is obtained by Inverse Kinematics. We see these robots in the media,
since they are the most advanced and complex robots, able to run, climb and descend stairs, and
interact with humans. On the other hand, we have passive walkers. These robots are mostly built at
research centers and their goal is to create a more energy efficient and passive walking gait, similar to
the human gait. However, these methodologies are recent and these constructions are usually unable
to perform any other task.
In order to produce a stable walking gait, the joints must be able to track trajectories, which
can be calculated offline or in real time. If the robot is designed to adapt to different slopes, avoid
collisions and handle other unexpected disturbances in the environment, then a real-time trajectory
planning is required. However, most anthropomorphic robots have a high number of joints that need
to be controlled, making this planning in joint space a very complex task, with computational needs
that exceed the available resources. Therefore, simplifications must be found in order to reduce the
dimensional complexity. Several ground reference points [2], like Zero Moment Point (ZMP) [3], Foot
Rotation Indicator (FRI) [4], Center of Pressure (CoP) and Centroidal Moment Pivot (CMP) [5] can
be used to plan trajectories for the center of mass (CoM) of a robot, and the position of the CoM can
be used to map joint positions in joint space using Inverse Kinematics.
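As a concrete example of one such ground reference point: under the linear inverted pendulum model (CoM held at a constant height), the ZMP along a horizontal axis follows directly from the CoM position and acceleration. The sketch below is a generic illustration of that relation, not code from any of the works cited here.

```python
G = 9.81  # gravitational acceleration [m/s^2]

def zmp_from_com(x_com, x_com_accel, z_com):
    """ZMP along one horizontal axis under the linear inverted
    pendulum model, with the CoM held at constant height z_com."""
    return x_com - (z_com / G) * x_com_accel

# A stationary CoM places the ZMP directly below it; accelerating
# the CoM forward shifts the ZMP backward.
print(zmp_from_com(0.05, 0.0, 0.25))
print(zmp_from_com(0.05, 1.0, 0.25) < 0.05)
```

Trajectory planners keep this point inside the support polygon of the feet, which is what makes it a useful target for CoM planning.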
As examples for active controllers of legged locomotion, we have the Honda robots, that have been
under development since 1986, when the E series of humanoids was developed. In the 90s, Honda
released the P series [6], consisting of three models, and in the past few years the ASIMO series [7].
As shown in Figure 2.1, the evolution is impressive¹. The first models were simply two legs with
6 degrees of freedom and no upper body, capable of taking one step in 5 seconds and of walking only in a
straight line and on terrain with no slope. The E6, launched in 1993, had 12 DoF and was able to
autonomously balance, avoid obstacles and climb stairs, reaching speeds of around 5 km/h. However,
it weighed 150 kg and had a heavy energy consumption. The P3 model was slightly lighter, walked
at 2 km/h and was able to carry weights, having an autonomy of 25 minutes. The latest ASIMO
model is the most anthropomorphic, weighing only 50 kg, capable of running at 6 km/h
and carrying up to 1 kg in hand. The energy consumption was greatly improved, reaching 1 hour of
autonomy with a 51.8 V battery.
Another example of a dynamically controlled humanoid is the SURALP², developed at Sabancı
University in Turkey. It features 29 DoF, and all of its systems were built for this project. It uses a variety of
controllers for landing impact reduction, body inclination and Zero Moment Point (ZMP) regulation
[3], early landing trajectory modification, and foot-ground orientation compliance and independent
¹ Honda robots at http://world.honda.com/ASIMO/history/history.html - Accessed at 20 October 2010
² SURALP website at http://people.sabanciuniv.edu/∼erbatur/humanoid%20robot%20project.html - Accessed at 20 October 2010
6
(a) Honda E Series (b) The Honda P3 and ASIMO
Figure 2.1: The Honda humanoids
joint position controllers.
As for passive walker robots, the concept first appeared in the late 1980s and the idea is to use the
momentum of the previous movement for the next one, maximizing energy efficiency and making the
gait more human-like [8, 9]. This approach requires greater knowledge of the physics of the system,
since the movement is based on a critical balance in order to create momentum, and it can lead to the
fall of the humanoid. One center that does research on this subject is the Biorobotics and Locomotion
Lab³, at Cornell University. They study biped and quadruped locomotion [10, 11, 12], striving to
optimize the specific cost of transport (energy per weight per distance) and achieve larger travelling
distances. In Wisse's PhD thesis [13], the basic models are explained and more detailed experiments
are presented.
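The specific cost of transport mentioned above has a simple dimensionless form, c = E / (m g d); a quick sketch, with illustrative numbers only:

```python
def cost_of_transport(energy_j, mass_kg, distance_m, g=9.81):
    """Dimensionless specific cost of transport: energy spent per
    unit weight per unit distance travelled."""
    return energy_j / (mass_kg * g * distance_m)

# Hypothetical figures: a 10 kg walker spending 200 J over 10 m.
print(round(cost_of_transport(200.0, 10.0, 10.0), 3))
```

Because the measure is dimensionless, it allows fair comparisons between walkers of very different sizes and actuation schemes.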
Sony’s QRIO (Quest for cuRIOsity) humanoid robot [14, 15], shown in Figure 2.2, is a platform
designed to support several innovative systems. It is 58 cm tall, weighs approximately 7 kg and has 38
degrees of freedom. It features several distance sensors, a stereo camera, microphone and speakers and
several other sensors. The three CPUs have distinct tasks: the first deals with audio recognition and
speech synthesis; the second with image handling, short and long term memory and behaviour control;
the third is responsible for motion control. The Emotionally Grounded (EGO) architecture can be
divided into five main blocks [16]. The perception block handles the information gathered from the
sensory system. The internal model block receives the sensory feedback and acts upon internal state
variables and the emotion model. The memory block deals with the short term memory (the most
recent feedbacks to certain states, object detection) and long term memory (recognition and association
³ Biorobotics and Locomotion Lab site at http://ruina.tam.cornell.edu/research/ - Accessed at 20 October 2010
of voice and image, for example). The behaviour control uses an activation level (AL) to select certain
behaviours (or sub-behaviours). As for the motion module, the researchers opted for a
biologically inspired approach to control a stable and robust walking pattern by implementing neural
oscillators, which, unlike ground reference approaches, do not need precise modelling of the platform
and reduce energy consumption [17, 18]. However, Sony has announced the termination of the
project before the robot was available for purchase.
Figure 2.2: The Sony QRIO
This chapter could not be completed without focusing on another theme: sensing and actuating.
Some robots are designed specifically for human interaction, others to aid on dangerous or repetitive
tasks. The new Robonaut2⁴ [19] is a good example. This robot was designed by NASA and GM and
will be aiding astronauts in future voyages. It can hold heavy weights and maintain the necessary
dexterity for sensitive jobs, while handling the same tools as humans. It can be fully autonomous or
be remotely controlled by a human wearing sensory gloves and a vision helmet, and is able to learn
tasks through teleoperation.
For a more entertainment-related robot, we have TOPIO⁵ ⁶, built by TOSY, a table tennis playing
robot. The third version of this humanoid player was presented in 2009, and features 2 high-speed
cameras, 39 DoF (7 in each arm, 5 in each hand, 2 in the head, 1 in the torso and 6 in each leg),
artificial intelligence, balanced bipedal walking and accurate movement control software.
⁴ Robonaut2 site at http://www.nasa.gov/topics/technology/features/robonaut.html - Accessed at 20 October 2010
⁵ TOSY website at http://www.tosy.com/ - Accessed at 20 October 2010
⁶ TOPIO wikipage at http://en.wikipedia.org/wiki/TOPIO - Accessed at 20 October 2010
3 Webots Simulator
Contents
3.1 Building the virtual model . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
3.2 Developing the controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
3.3 Simulation and Implementation . . . . . . . . . . . . . . . . . . . . . . . . 12
3.4 Validation of the Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
The humanoid chosen for this work is the Bioloid Humanoid. It comes as one of the
assembly options of the Bioloid Comprehensive Kit, by Robotis. In this chapter, a brief review
of the Bioloid platform is presented and the implementation of the virtual model is described.
The virtual model of the humanoid was built using the Webots software, a program designed for
fast prototyping and simulation of mobile robots, used by several universities and research companies
worldwide. The program allows the user to design new robots and environments, create control
algorithms in multiple coding languages and test new ideas.
The Webots documentation [20, 21, 22] divides the process into four stages: modelling,
programming, simulating and transferring.
3.1 Building the virtual model
The first stage deals with building the physical model of the robot and of the environment. This
is done in a tree-like form: each robot and item in the world is a node and each component of these
is a new node. This allows us to build, for example, a wheeled robot by creating a new node in the
world file, and adding nodes for each sensor, for each actuator and for the main body of the robot.
There are also several parameters for each device that allow us to add noise or range limits to sensors
and to limit the torque each servo can apply. Advanced options also exist for creating physics plug-ins,
that can be used to model water environments, space environments or even other planets.
In our case, a very realistic model of the Bioloid Humanoid was built, in order to ensure that the
simulation results can be applied to the real robot. The virtual model was built using .vrml models
of the blocks that make the robot, which were created by Jean-Christophe Fillion-Robin, author of
[23], and available at the Biologically Inspired Robotics Group (BIRG) website¹. In Figure 3.1 the
Webots model of the Bioloid Humanoid can be seen, in contrast with the real robot. As for the
physical properties of this model, we used the properties obtained in a previous research published by
the Humanoid Robot Lab² in [24], where the dimensions, total mass and position of the center of mass
of each block are detailed.
In addition to these characteristics, the virtual model is also equipped with 18 rotational servos,
with similar characteristics to the AX-12+ servos of the real Bioloid, a gyroscope and an accelerometer,
which have also been added to the Bioloid robot. For testing some of the algorithms, we also added
a GPS system, that was used to obtain ground truth of the torso position, but there are no plans for
adding one to the Bioloid. In addition to the robot, some simulations also have a ramp or a moving
platform, as shown in Chapters 4 and 5.
In order to be able to estimate the attitude, an Inertial Measuring Unit (IMU) was added to the
original Bioloid robot. To facilitate the programming and communication of the devices in
¹ BIRG website at http://birg.epfl.ch/page66584.html - Accessed at 20 October 2010
² HRL website at http://humanoids.dem.ist.utl.pt/ - Accessed at 20 October 2010
(a) Real Bioloid (b) Simulated Bioloid
Figure 3.1: Real vs. Simulated Bioloid
the robot, we chose the 6-Axis IMU from HuvRobotics, shown in Figure 3.2, which features a 3-axis
accelerometer and a 3-axis gyroscope. This IMU outputs the projection of the linear acceleration
on the 3 axes and the angular velocity along the 3 axes, with an update rate of 100 Hz. While the real
IMU requires a linear calibration of its output, the values given by the simulated IMU are already in
SI units. The datasheets of the devices used in this IMU can be found in [25],
[26], [27] and in the wikipage³.
Figure 3.2: The HUVRobotics 6-Axis IMU
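The linear calibration mentioned above can be sketched as follows. The two reference readings used to fit the parameters are hypothetical values, chosen only to illustrate the procedure, not measurements from the actual device.

```python
def calibrate(raw, a, b0):
    """Map a raw 10-bit reading (0-1023) to SI units using the
    linear model value = a * raw + b0."""
    return a * raw + b0

def fit_calibration(raw_lo, val_lo, raw_hi, val_hi):
    """Recover (a, b0) from two known reference points, e.g. an
    accelerometer axis held at -1 g and +1 g (hypothetical readings)."""
    a = (val_hi - val_lo) / (raw_hi - raw_lo)
    return a, val_lo - a * raw_lo

a, b0 = fit_calibration(300, -9.81, 700, 9.81)
print(calibrate(500, a, b0))  # zero acceleration at the midpoint
```

Once (a, b0) are known for each axis, every raw sample can be converted before being fed to the filter.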
3.2 Developing the controller
The second stage is the programming of the robots present in the simulation, which can be done
in C, C++, Java, Python or URBI, and by adding a few specific Webots commands. Each robot must
have a controller, and different controllers can be programmed in different languages. A controller is
³ http://www.bioloid.info/tiki/tiki-index.php?page=6-Axis+Bus+IMU+Documentation - Accessed at 20 October 2010
usually divided into three parts, which cycle at user-defined intervals: reading the sensors, processing
the gathered information and sending orders to the servos.
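That three-part cycle can be sketched as a plain sense-compute-act loop. The stub functions below stand in for the actual Webots device calls and are not part of its API; they exist only to show the structure of one controller iteration.

```python
def read_sensors():
    # Stand-in for querying the simulated gyroscope and accelerometer.
    return {"gyro": (0.0, 0.0, 0.0), "accel": (0.0, 0.0, 9.81)}

def compute_commands(readings):
    # Stand-in for the estimation and control logic.
    return {"servo_0": 0.0}

def run_controller(cycles):
    """One sense-compute-act iteration per simulation time step."""
    sent = []
    for _ in range(cycles):
        commands = compute_commands(read_sensors())
        sent.append(commands)  # stand-in for setting servo targets
    return sent

print(len(run_controller(3)))
```

In the real controller, the loop body is driven by the simulator's time step rather than a fixed iteration count.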
With the Bioloid simulation, two approaches were attempted. The first used the URBI
language, which was also a first approach for programming the Bioloid. URBI is an event-based
programming language, targeted at controlling robots, that greatly simplifies waiting for a specific
event to trigger an action (additional information can be found in [28] and [29]). However, at the time
this was attempted, the URBI language had just been released for the Bioloid, and was still in
beta testing and quite buggy. As such, this approach was set aside. In an effort to keep the simulation
as close as possible to reality, it was decided to create a Python controller, since the Gumstix board
can be programmed using this language. Also, since different modules were created to handle different
parts of the gathered data, these modules can be transfered to the Gumstix board and be used in a
future implementation.
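The three-part controller cycle described in this section can be sketched as a plain Python loop. The `Sensor` and `Servo` classes below are illustrative stand-ins, not the actual Webots device API, and the 18-servo count follows the Bioloid setup:

```python
# Schematic sense-compute-act cycle of a robot controller. The Sensor
# and Servo stubs stand in for the simulator's device classes; the
# names and values here are illustrative, not the Webots API.

class Sensor:
    def read(self):
        return [0.0, 9.8, 0.0]  # stub IMU sample (gravity on Y)

class Servo:
    def __init__(self):
        self.position = 0.0

    def set_position(self, value):
        self.position = value

def control_cycle(sensor, servos, compute):
    """One controller iteration: read sensors, compute, command servos."""
    measurement = sensor.read()
    targets = compute(measurement)
    for servo, target in zip(servos, targets):
        servo.set_position(target)
    return targets

sensor = Sensor()
servos = [Servo() for _ in range(18)]  # the Bioloid's 18 joints
targets = control_cycle(sensor, servos, lambda m: [0.0] * 18)
```

In the real controller, `compute` would hold the balance algorithm of Chapter 4, and the loop would repeat at the chosen timestep.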
Figure 3.3: Simulation Environment
3.3 Simulation and Implementation
The third step is the actual simulation, in which one can observe the behaviour of the controlled
robot. The user can also interact with several objects of the simulation while it is running, change the
viewing position or even speed up the visualizer (the simulation will still run at the user-defined time
step). The user can also view output data from the controller, the servo axes and contact points.
This visualization was extremely helpful for us, since the first approach to qualitatively evaluating a
walking gait is visual.

The fourth step works only on a few specific robots and allows a direct transfer of the controller
to the real robot. Since this was not used in this work, we will not add details and advise reading
the Webots manual [20].
3.4 Validation of the Model
In order to ascertain the accuracy of the virtual model of the Bioloid, a few trials were conducted.
The first trial consisted of placing the robot on a platform and increasing its slope until the humanoid
reached the tipping point, both in the real world and in simulation. The results are shown in Table
3.1. The conclusion is that the tipping point of the real robot is always smaller than the simulated one,
by approximately 1°, probably due to small errors introduced when building the simulation model.
Table 3.1: Angles (in degrees) at which the robot reaches the tipping point, both in real and simulated situations

            Simulation            |              Real world
  Front   Back   Left   Right     |  Iter.   Front   Back   Left   Right
  11.5    16.6   19.2   19.2      |  1       11      15     18     17
                                  |  2       10      15     17     17
                                  |  3       10      16     18     18
                                  |  4       11      16     16     18
                                  |  5       11      15     18     17
It is also required to compute the variance of the accelerometer and gyroscope measurements
for the Extended Kalman Filter model of the sensors. Since the output of these devices ranges from
0 to 1023, a calibration was required to determine the parameters a and b0 of the linear function in
Equation (3.1) that relates the output O to the measurement in metric units M.

M = F(O) = a·O + b0    (3.1)
For the accelerometer, the device was rotated so that each of its axes in turn was aligned with the
gravitational acceleration; the expected value of the measurements on that axis is therefore 9.82 m·s−2, while the
other two axes provide values of 0 m·s−2. As for the gyroscope, it was mounted on a servo rotating
at a constant speed around one axis. As with the accelerometer, the gyroscope was rotated in order
to keep one of its axes aligned with the rotation axis of the servo; the measurements of the aligned
axis therefore have an expected value of π rad·s−1. From these
data we were able to adjust the measurements to SI units and to compute the variance of
the measurements, shown in Table 3.2. Figure 3.4(a) depicts the adjusted values of
the accelerometer when the Y axis is aligned with gravity, and Figure 3.4(b) the adjusted
values of the gyroscope for each axis, when aligned with the rotation axis of the servo.
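The linear calibration of Equation (3.1) can be recovered from recorded samples with an ordinary least-squares fit. A minimal sketch follows; the ADC counts below are hypothetical values for illustration, not the data actually recorded from the IMU:

```python
import numpy as np

# Least-squares fit of the linear calibration M = a*O + b0 of Equation
# (3.1). The raw ADC readings here are hypothetical, not the actual
# calibration data recorded from the HUVRobotics IMU.

def fit_calibration(raw, metric):
    """Return (a, b0) minimizing ||a*raw + b0 - metric||^2."""
    A = np.column_stack([raw, np.ones_like(raw)])
    (a, b0), *_ = np.linalg.lstsq(A, metric, rcond=None)
    return a, b0

# One axis aligned with gravity should read 9.82 m/s^2, the others 0.
raw = np.array([512.0, 512.0, 718.0, 718.0])   # hypothetical ADC counts
metric = np.array([0.0, 0.0, 9.82, 9.82])      # expected accelerations
a, b0 = fit_calibration(raw, metric)
```

The same fit applies to the gyroscope axes, with π rad·s−1 as the expected reading on the axis aligned with the servo's rotation.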
Table 3.2: Variance values for each axis of the accelerometer and of the gyroscope

  Accelerometer ((m·s−2)²)   |   Gyroscope ((rad·s−1)²)
  X axis    0.0042           |   Pitch    0.0261
  Y axis    0.0059           |   Roll     0.0194
  Z axis    0.0038           |   Yaw      0.0390
(a) Accelerometer adjusted measurements
(b) Gyroscope adjusted measurements
Figure 3.4: Adjusted measurements of the accelerometer, when Y is aligned with gravity, and of the gyroscope, when it rotates at π rad·s−1 around each axis
4 Static Balance
Contents

4.1 General Overview
4.2 Quaternion Based Extended Kalman Filter (EKF)
4.3 Error Determination
4.4 Controller
4.5 Other approaches
4.6 Simulation Environment
4.7 Results and Discussion
4.1 General Overview
The goal of this setup is to make the Bioloid maintain its balance while standing on a moving
platform. This situation also allows the implementation and test of some systems that will be used
on the next Chapter. The block diagram for the developed controller is shown in Figure 4.1. The
system is controlled by using the measurements from the gyroscope and accelerometer of the IMU
block to update the Extended Kalman Filter attitude estimation, which is represented in the form of
a quaternion. The Error Determination block receives the state estimation and produces a variable to
represent a slope, that is transformed into positions for the 18 Servos by the Controller block. Each
block will now be analysed in greater detail.
Figure 4.1: Static Balance Controller
4.2 Quaternion Based Extended Kalman Filter (EKF)
The Kalman Filter is a recursive process in which the current state of the system is estimated at
each timestep, using the previous state estimate, the inputs of the system, the system's dynamics
and the sensors' noisy measurements. The filter estimates the system state taking into account the
uncertainties of the system dynamics and of the measurements (the sensors have noise and drift, the
theoretical dynamics suffers from approximations and external factors are not accounted for) and
weights the various contributions in order to minimize the variance of the estimates.

The Kalman Filter works in two phases: prediction and update. In the prediction phase, the
current state mean and covariance of the system are predicted from the previous state, the inputs
given to the system and its known dynamics. In the update phase, the filter computes what the
sensor measurements should be in the predicted state and compares this value
with the real measurements. Using this residual, the covariance is updated in order to determine how
these measurements are used to update the current state estimate.
For this work we use an adaptation of the original Kalman Filter that allows estimation of
the attitude of the robot using the measurements of the gyroscope for the prediction phase, instead
of the inputs sent to the servos. Following the work in [30], [31] and [32], we used a quaternion based
Extended Kalman Filter for 3D attitude estimation. Before explaining the process, a few notions must
be given.
4.2.1 Attitude representation
The attitude of a body is the representation of its orientation relative to a given reference frame.
This representation can take multiple forms and can be parameterized using different methods.
Rotation Matrix
A rotation matrix is a linear transformation used to rotate vectors, where each column is the
projection of an axis of the current reference frame on the axes of the new frame. In R3, this matrix has
9 elements, defined by
dim(R) = 3× 3 (4.1)
RTR = I3×3 (4.2)
det (R) = 1 (4.3)
and while this is fairly simple for a user to read, a more compact parameterization of the rotation
matrix is often preferred.
Euler Angles
The Euler angles are a 3-element parameterization of a rotation matrix, each element being a rotation
of the current reference frame around one of its axes, in a given sequence, so that it coincides with
the new frame. The sequence of rotations is given in the name of the representation, up to a total
of 12 possible sequences. Given the Z-Y-X Euler Angles α, β and γ, we can obtain the
corresponding rotation matrix by applying (4.4), where cα = cos(α), sα = sin(α), cβ = cos(β),
sβ = sin(β), cγ = cos(γ) and sγ = sin(γ). In order to obtain the Euler Angles from the rotation
matrix, we apply (4.5) to (4.7), where rij is the element at row i and column j of the rotation matrix. This
parameterization has the inconvenience of having singularities at β = ±π/2.
A_B R_ZYX = [ cα·cβ    cα·sβ·sγ − sα·cγ    cα·sβ·cγ + sα·sγ
              sα·cβ    sα·sβ·sγ + cα·cγ    sα·sβ·cγ − cα·sγ
              −sβ      cβ·sγ               cβ·cγ ]    (4.4)

β = arctan2( −r31, √(r11² + r21²) )    (4.5)

α = arctan2( r21/cβ, r11/cβ )    (4.6)

γ = arctan2( r32/cβ, r33/cβ )    (4.7)
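The round trip between Z-Y-X Euler angles and the rotation matrix, following (4.4) to (4.7), can be sketched in a few lines. Since dividing both arguments of arctan2 by a positive cβ does not change the result, the explicit division by cβ is omitted here:

```python
import numpy as np

# Z-Y-X Euler angles -> rotation matrix (4.4), and the recovery of the
# angles with (4.5)-(4.7). Valid away from the singularity beta = +-pi/2.

def euler_zyx_to_matrix(alpha, beta, gamma):
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    return np.array([
        [ca*cb, ca*sb*sg - sa*cg, ca*sb*cg + sa*sg],
        [sa*cb, sa*sb*sg + ca*cg, sa*sb*cg - ca*sg],
        [-sb,   cb*sg,            cb*cg],
    ])

def matrix_to_euler_zyx(R):
    # arctan2(x/cb, y/cb) == arctan2(x, y) for cb > 0, so cb cancels
    beta = np.arctan2(-R[2, 0], np.hypot(R[0, 0], R[1, 0]))
    alpha = np.arctan2(R[1, 0], R[0, 0])
    gamma = np.arctan2(R[2, 1], R[2, 2])
    return alpha, beta, gamma

angles = (0.3, -0.5, 1.1)
recovered = matrix_to_euler_zyx(euler_zyx_to_matrix(*angles))
```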
Rotation Vector
An attitude can also be represented as a rotation around a unit vector normal to the
rotation plane. While more compact than a rotation matrix, in order to rotate vectors one still needs a
rotation matrix to perform the calculations; the angle θ and axis k are extracted from a rotation matrix
using (4.8) and (4.9).
θ = arccos( (r11 + r22 + r33 − 1) / 2 )    (4.8)

A_k_B = (1 / (2·sin(θ))) · [ r32 − r23
                             r13 − r31
                             r21 − r12 ]    (4.9)
This parameterization has two major setbacks. The first is that it becomes ill-conditioned for
small rotations and the second is that it shows singularities at θ = 0 and θ = π.
Quaternions
The quaternions form a mathematical system, described by Sir William Rowan Hamilton in 1843,
with very specific algebraic operations, which are analysed in [31]. This parameterization is more
compact than rotation matrices, has no singularities and, since it is homomorphic to the space of
rotation matrices, it is not necessary to convert a quaternion to a rotation matrix in order to apply a rotation to
a vector. However, it is far less intuitive.

A quaternion q is an extension of the complex numbers, formed by the linear combination of four
dimensions, represented by the symbols 1 (usually omitted), i, j and k. The scalar part q4 of a
quaternion is the coefficient of the 1 dimension and the vector part is defined as q = i·q1 + j·q2 + k·q3. It
can also be represented as an R4 vector of the four qi, which is very useful for matrix algebra. Given
the rotation vector and angle of rotation, a quaternion can be obtained by applying (4.10), where k
is the rotation vector and θ is the rotation angle.
q = [q; q4] = [k·sin(θ/2); cos(θ/2)] = [kx·sin(θ/2), ky·sin(θ/2), kz·sin(θ/2), cos(θ/2)]ᵀ    (4.10)
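Equation (4.10) can be checked with a few lines of Python, using the scalar-last convention adopted in this chapter:

```python
import numpy as np

# Build a unit quaternion [k*sin(theta/2); cos(theta/2)] from a rotation
# axis and angle, as in Equation (4.10). Scalar part stored last.

def axis_angle_to_quat(k, theta):
    k = np.asarray(k, dtype=float)
    k = k / np.linalg.norm(k)          # ensure a unit rotation axis
    return np.concatenate([k * np.sin(theta / 2.0),
                           [np.cos(theta / 2.0)]])

# Quarter turn about z
q = axis_angle_to_quat([0.0, 0.0, 1.0], np.pi / 2)
```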
4.2.2 3D Linear Inverted Pendulum Model
The Inverted Pendulum (IP) is a model where the dynamics of the humanoid robot are represented
by a single mass, located at the center of mass of the robot, connected to the point of contact on the
floor by a telescopic leg. The physics resemble those of an inverted pendulum, hence the name,
where the equilibrium point is unstable. Following the work presented in [33], [34] and [35], we use
the 3D version of this model to interpret and predict readings from the IMU sensors. In Figure 4.2
the Inverted Pendulum model is shown on the left, with the acting force of gravity, and the rotation
angles are shown on the right, for future reference.
This model is especially useful when knowledge of the robot dynamics is limited, since the
complexity of the system is strongly reduced. A humanoid robot can be represented by a single IP or
by a series of IPs (for example, one representing each leg and one representing the torso). In our case,
the sensory feedback of the IMU only provides information about the center of mass of the robot,
which leads us to choose the single IP model. While this model is ideal for representing the robot
when it is in single support phase (only one foot on the ground), it is also used to represent the
robot in double support phase, taking the point of contact on the floor to be the midpoint between
the feet. However, this simplification is only valid under certain conditions. For starters, the distance
between the support surface and the center of mass should be kept within certain values in
order for the telescopic leg to accurately represent the robot. Also, the control of the joints must ensure
that the positions do not suffer rapid changes or discontinuities.
(a) The 3D LIP model (b) Lateral Slope Diagram
Figure 4.2: The Bioloid Model
4.2.3 Matrices and Operators Definitions
The following definitions will be given without explanation on how to obtain them. If needed, one
may check [30] or [31], where all the necessary steps are explained.
The skew-symmetric matrix operator ⌊ω×⌋ is used for the cross product of vectors and is defined as

⌊ω×⌋ = [  0    −ωz    ωy
          ωz    0    −ωx
         −ωy    ωx    0 ]    (4.11)
The matrix Ω is used in quaternion integration and is defined as

Ω(ω) = [ −⌊ω×⌋  ω
          −ωᵀ   0 ] = [  0     ωz   −ωy    ωx
                        −ωz    0     ωx    ωy
                         ωy   −ωx    0     ωz
                        −ωx   −ωy   −ωz    0 ]    (4.12)
The integration of a quaternion between tk and tk+1 can be obtained using a first order integrator,
as long as we are given the turn rates at both timestamps. This can be done by writing the Taylor
series expansion of L_G q(tk+1) around tk and assuming the turn rate evolves linearly, so we can
define ω̄ = (ω(tk+1) + ω(tk))/2. The full procedure is described in [31] and Equation (4.13) is obtained,
where L_G q is the quaternion that relates the Local frame L to a Global frame G at a given timestamp
tk.
L_G q(tk+1) = ( exp(½·Ω(ω̄)·Δt) + (1/48)·( Ω(ω(tk+1))·Ω(ω(tk)) − Ω(ω(tk))·Ω(ω(tk+1)) )·Δt² ) · L_G q(tk)    (4.13)
The error between two quaternions is defined as a small rotation δq that relates the estimated
frame L̂ with the actual frame L. Therefore, instead of using the arithmetic difference between two
quaternions, we define the error as in Equation (4.15).

L_G q = δq ⊗ L̂_G q̂    (4.14)

δq = L_G q ⊗ (L̂_G q̂)⁻¹    (4.15)
Since the error is defined as a small rotation, one can apply the small angle approximation and
use the definition of Equation (4.16)

δq = [δq; δq4] = [k·sin(δθ/2); cos(δθ/2)] ≈ [½·δθ; 1]    (4.16)
The Kalman Filter uses the state transition matrix Φ and the discrete time noise covariance Qd
in the propagation phase. The Φ matrix is defined as

Φ(t+Δt, t) = [ Θ      Ψ
               03×3   I3×3 ]    (4.17)

where

Θ = cos(|ω|Δt)·I3×3 − sin(|ω|Δt)·⌊(ω/|ω|)×⌋ + (1 − cos(|ω|Δt))·(ω/|ω|)·(ω/|ω|)ᵀ    (4.18)

and

Ψ = −I3×3·Δt + ((1 − cos(|ω|Δt))/|ω|²)·⌊ω×⌋ − ((|ω|Δt − sin(|ω|Δt))/|ω|³)·⌊ω×⌋²    (4.19)
As for the discrete time noise covariance Qd, it is defined as

Qd = [ Q11    Q12
       Q12ᵀ   Q22 ]    (4.20)

where each entry is defined as

Q11 = σr²·Δt·I3×3 + σw²·( I3×3·(Δt³/3) + (((|ω|Δt)³/3 + 2·sin(|ω|Δt) − 2·|ω|Δt)/|ω|⁵)·⌊ω×⌋² )    (4.21)

Q12 = −σw²·( I3×3·(Δt²/2) − ((|ω|Δt − sin(|ω|Δt))/|ω|³)·⌊ω×⌋ + (((|ω|Δt)²/2 + cos(|ω|Δt) − 1)/|ω|⁴)·⌊ω×⌋² )    (4.22)

Q22 = σw²·Δt·I3×3    (4.23)
However, when the values of ω are small, it is possible to take the limit and apply L'Hôpital's
rule to simplify both matrices. The error committed by using this simplification is of the
order of Δtⁿ⁺²·|ω|². The simplified definitions are
lim(|ω|→0) Θ = I3×3 − Δt·⌊ω×⌋ + (Δt²/2)·⌊ω×⌋²    (4.24)

lim(|ω|→0) Ψ = −I3×3·Δt + (Δt²/2)·⌊ω×⌋ − (Δt³/6)·⌊ω×⌋²    (4.25)

lim(|ω|→0) Q11 = σr²·Δt·I3×3 + σw²·( I3×3·(Δt³/3) + (2·Δt⁵/5!)·⌊ω×⌋² )    (4.26)

lim(|ω|→0) Q12 = −σw²·( I3×3·(Δt²/2) − (Δt³/3!)·⌊ω×⌋ + (Δt⁴/4!)·⌊ω×⌋² )    (4.27)
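The transition matrix (4.17), with the small-|ω| fallback of (4.24) and (4.25), can be sketched as follows; the switching threshold `eps` is an arbitrary choice for illustration, not a value taken from this work:

```python
import numpy as np

def skew(w):
    """The operator of Equation (4.11)."""
    wx, wy, wz = w
    return np.array([[0.0, -wz, wy],
                     [wz, 0.0, -wx],
                     [-wy, wx, 0.0]])

def transition_matrix(w, dt, eps=1e-6):
    """State transition Phi of (4.17), using the limits (4.24)-(4.25)
    when |w| is close to zero."""
    w = np.asarray(w, dtype=float)
    n = np.linalg.norm(w)
    Wx = skew(w)
    I3 = np.eye(3)
    if n > eps:
        k = w / n
        Theta = (np.cos(n * dt) * I3 - np.sin(n * dt) * skew(k)
                 + (1.0 - np.cos(n * dt)) * np.outer(k, k))       # (4.18)
        Psi = (-I3 * dt + (1.0 - np.cos(n * dt)) / n**2 * Wx
               - (n * dt - np.sin(n * dt)) / n**3 * (Wx @ Wx))    # (4.19)
    else:
        Theta = I3 - dt * Wx + dt**2 / 2.0 * (Wx @ Wx)            # (4.24)
        Psi = -I3 * dt + dt**2 / 2.0 * Wx - dt**3 / 6.0 * (Wx @ Wx)  # (4.25)
    return np.block([[Theta, Psi], [np.zeros((3, 3)), I3]])

Phi = transition_matrix([0.0, 0.0, 0.0], 0.02)   # 100 Hz timestep
```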
4.2.4 Attitude Estimation
In order to estimate the attitude of the humanoid robot, a quaternion based Extended Kalman
Filter was implemented, as stated before. The notation used in this chapter follows the one
used in [31]. This implementation does not require precise knowledge of the dynamics of the body;
instead, it replaces the dynamic model by using the readings of the gyroscope as inputs for the
prediction phase.
Gyroscope Noise Model
Given the measurement ωm made by the gyroscope, we relate it to the actual angular velocity ω, the
gyroscope bias b and white noise nr using

ωm = ω + b + nr    (4.28)

In this model, b is considered a random walk process that follows

ḃ = nw    (4.29)

and both nw and nr are assumed to be Gaussian white noise, where

E[nr] = E[nw] = 03×1    (4.30)

E[nr(t+τ)·nrᵀ(t)] = Nr·δ(τ)    (4.31)

E[nw(t+τ)·nwᵀ(t)] = Nw·δ(τ)    (4.32)
In both cases, it will be assumed that the noise is independent of the spatial direction and equal
in all three dimensions, but a discretization of the standard deviation is still necessary. In (4.33)
to (4.36), the subscript c indicates a continuous variable, d indicates a discrete one and Δt is the
simulation time step.

Nr = σrc²·I3×3    (4.33)

Nw = σwc²·I3×3    (4.34)

σrd = σrc / √Δt    (4.35)

σwd = σwc · √Δt    (4.36)
State Equation
The next step is to define the state that will be estimated and how it will be predicted. The state
x is a 7×1 vector, consisting of the attitude quaternion and the gyroscope bias, as in (4.37).

x(t) = [q(t); b(t)]    (4.37)

The state is governed by the following differential equations, obtained from quaternion differentiation
algebra and the gyroscope bias characteristics, where L_G q is the quaternion that relates the Local frame L
to a Global frame G and Ω is the matrix defined in (4.12)

L_G q̇(t) = ½·Ω(ωm − b − nr)·L_G q(t)    (4.38)

ḃ = nw    (4.39)

Taking the expected values of the above equations, we obtain the equations for the state prediction

d/dt L_G q̂(t) = ½·Ω(ωm − b̂)·L_G q̂(t)    (4.40)

d/dt b̂ = 03×1    (4.41)
Prediction Phase
The prediction or propagation phase consists of five steps in order to obtain the a priori estimate
of the current state.

First, we compute the a priori estimate of the bias for the current timestep k by using

b̂k+1|k = b̂k|k    (4.42)

Then, the a priori estimate of the current turn rate is computed according to

ω̂k+1|k = ωm,k+1 − b̂k+1|k    (4.43)

The attitude prediction q̂k+1|k is obtained by using the first order integrator defined in (4.13), and
the state transition matrix and the discrete time noise covariance matrix are computed as defined in
(4.17) and (4.20).

The final step of the prediction phase is to propagate the state covariance matrix Pk+1|k by using
(4.44).

Pk+1|k = Φ·Pk|k·Φᵀ + Qd    (4.44)
Update Phase
In this phase, the a priori state estimate is used to predict the measurements of other sensors
and these predictions are compared to the actual measurements. The residual error is then used to
update the initial state estimate.

The residual error r depends on the actual readings z and the expected readings ẑ that we compute
using the Inverted Pendulum model and the estimation of the EKF, and is defined as

r = z − ẑ = H·[δθ; Δb] + nm    (4.45)

The definition of the measurement matrix H, which relates the expected error of the state with the
residual error r, in [31] can be simplified for this work, since the calculations needed to obtain an
estimate of the sensor readings do not require the same rotations and projections as in the original
work, and we obtain

H = [ ⌊p̂×⌋   03×3 ]    (4.46)

where p̂ are the estimated readings from the accelerometer, based on the Inverted Pendulum model
and the a priori state estimate. The matrix R(q̂k+1|k) is defined as the rotation matrix between the
World frame and the local frame corresponding to the a priori estimated state, and is used to compute
the estimated readings, as in

p̂ = R(q̂k+1|k) · [0; g; 0]    (4.47)
The covariance of the residual is obtained by

S = H·P·Hᵀ + R    (4.48)

The Kalman gain is computed using

K = P·Hᵀ·S⁻¹    (4.49)

The correction used in the update is a 6×1 vector, consisting of the multiplicative quaternion vector
error and the additive bias error, and is computed by

Δx(+) = [δθ(+); Δb(+)] = [2·δq(+); Δb(+)] = K·r    (4.50)

Knowing the correction, the update of the quaternion estimate is done by applying the multiplicative
error model to the quaternion prediction as

q̂k+1|k+1 = δq̂ ⊗ q̂k+1|k    (4.51)

where the quaternion correction is, for δq(+)ᵀ·δq(+) ≤ 1,

δq̂ = [δq(+); √(1 − δq(+)ᵀ·δq(+))]    (4.52)
or, for δq(+)ᵀ·δq(+) > 1,

δq̂ = (1/√(1 + δq(+)ᵀ·δq(+))) · [δq(+); 1]    (4.53)
In order to update the bias estimate, we use the additive error Δb(+) to obtain

b̂k+1|k+1 = b̂k+1|k + Δb(+)    (4.54)

The turn rate and the covariance matrix are also updated, to be used in the prediction phase of
the next iteration, following equations (4.55) and (4.56).

ω̂k+1|k+1 = ωm,k+1 − b̂k+1|k+1    (4.55)

Pk+1|k+1 = (I6×6 − K·H)·Pk+1|k·(I6×6 − K·H)ᵀ + K·R·Kᵀ    (4.56)
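The update phase (4.45) to (4.56) can be put together as a short sketch. The measurement covariance `R` and the example numbers below are illustrative, and the quaternion product assumes the scalar-last convention used in this chapter:

```python
import numpy as np

def skew(v):
    x, y, z = v
    return np.array([[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]])

def quat_mult(p, q):
    """Quaternion product, scalar-last convention."""
    pv, pw = np.asarray(p[:3]), p[3]
    qv, qw = np.asarray(q[:3]), q[3]
    v = pw * qv + qw * pv + np.cross(pv, qv)
    return np.concatenate([v, [pw * qw - pv @ qv]])

def ekf_update(q, b, P, z, p_hat, R):
    """Measurement update (4.45)-(4.56) of the quaternion EKF sketch."""
    H = np.hstack([skew(p_hat), np.zeros((3, 3))])   # (4.46)
    r = z - p_hat                                    # residual (4.45)
    S = H @ P @ H.T + R                              # (4.48)
    K = P @ H.T @ np.linalg.inv(S)                   # (4.49)
    dx = K @ r                                       # (4.50)
    dq_vec = 0.5 * dx[:3]                            # delta_q = delta_theta/2
    if dq_vec @ dq_vec <= 1.0:                       # (4.52)
        dq = np.concatenate([dq_vec, [np.sqrt(1.0 - dq_vec @ dq_vec)]])
    else:                                            # (4.53)
        dq = np.concatenate([dq_vec, [1.0]]) / np.sqrt(1.0 + dq_vec @ dq_vec)
    q_new = quat_mult(dq, q)                         # (4.51)
    b_new = b + dx[3:]                               # (4.54)
    IKH = np.eye(6) - K @ H
    P_new = IKH @ P @ IKH.T + K @ R @ K.T            # (4.56)
    return q_new, b_new, P_new

# Illustrative values: upright robot, measurement equal to the prediction.
q0 = np.array([0.0, 0.0, 0.0, 1.0])
b0 = np.zeros(3)
P0 = 0.01 * np.eye(6)
p_hat = np.array([0.0, 9.82, 0.0])        # predicted gravity reading
q1, b1, P1 = ekf_update(q0, b0, P0, p_hat.copy(), p_hat, 0.005 * np.eye(3))
```

With a zero residual the correction vanishes and the a priori estimate is kept, as expected.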
4.3 Error Determination
The Error Determination block receives the quaternion representing the attitude of the robot and
the desired attitude ϕd and outputs γ, a representation of the deviation that the servo positions should
cause on the robot to keep it in balance, as is depicted in Figure 4.3. Using the notation depicted in
Figure 4.2(b), ψ represents the rotation between the World frame U and the Floor frame F, caused
by the slope of the platform, γ represents the rotation between the Floor frame F and the Bioloid
frame B, caused by the position of the servos, and ϕ represents the rotation between the World frame
U and the Bioloid frame B, that is the combination of the previous rotations.
Figure 4.3: Static Balance Error Determination
From the quaternion estimated by the Extended Kalman Filter it is possible to compute the center
of mass position in the World frame using

U_xCoM = [xCoM; yCoM; zCoM] = R⁻¹(q̂k+1|k) · B[0; h; 0]    (4.57)

where h is the length of the telescopic leg used in the Inverted Pendulum model, which represents the
height of the Bioloid. The ϕ angle of the Inverted Pendulum is computed using

ϕ = [ϕz; ϕx] = [ −arctan(xCoM/yCoM); arctan(zCoM/yCoM) ]    (4.58)
The γ angles that represent the robot's leaning are obtained by adding the leaning γ0, measured when the
controller starts running, to an incremental variable obtained by integrating the error between
the current ϕ angles and the given reference ϕ0. The increment of γ depends on a diagonal gain
matrix G = diag(g1, g2) and is obtained by

Δγ = G · (ϕ0 − ϕ)    (4.59)
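The Error Determination chain (4.57) to (4.59) reduces to a few lines of Python. The quaternion-to-matrix convention and the gain values below are assumptions made for illustration:

```python
import numpy as np

def quat_to_matrix(q):
    """Rotation matrix of a unit quaternion [qx, qy, qz, qw] (assumed
    convention; only used here for illustration)."""
    x, y, z, w = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)],
        [2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)],
        [2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y)],
    ])

def error_determination(q, h, phi_ref, G):
    """CoM position (4.57), slope angles (4.58), correction (4.59)."""
    R = quat_to_matrix(q)
    x, y, z = R.T @ np.array([0.0, h, 0.0])   # R^-1 = R^T for rotations
    # arctan2(x, y) == arctan(x/y) for y > 0 (robot above the floor)
    phi = np.array([-np.arctan2(x, y), np.arctan2(z, y)])
    return G @ (phi_ref - phi)

# Upright robot: CoM directly above the contact point, so phi = 0 and
# the correction is zero for a zero reference. Gains are illustrative.
dgamma = error_determination(np.array([0.0, 0.0, 0.0, 1.0]), 0.2,
                             np.zeros(2), np.diag([0.5, 0.5]))
```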
4.4 Controller
The controller takes the Δγ values, computes the adjustment γt = γt−1 + Δγ and outputs servo
positions. θi represents the position of servo i and θi0 represents the position of servo i before
the controller is run; θi must be given in radians. In general, each servo position is a function of γx
and γz, but in order to simplify this model each servo is only responsible for frontal or lateral
adjustment. For instance, servos 1 and 2 correspond to lateral rotation of the arms and 15 and 16
to lateral rotation of the ankles; therefore, the positions of these servos should be
updated in order to minimize ϕz. As for the servos that correspond to frontal rotation of the arms
(servos 3 and 4) and to frontal rotation of the ankles (servos 17 and 18), these are updated with Δγ0
in order to minimize ϕx. Servos 5 to 10 (the forearm and hip servos) are not
relevant for the balance algorithm, so they all maintain their initial positions.
θ = [θ1; θ2; θ3; θ4; θ11; θ12; θ13; θ14; θ15; θ16; θ17; θ18] =
    [ θ10 + 3Δγ1;
      θ20 − 3Δγ1;
      θ30 + 3Δγ0;
      θ40 + 3Δγ0;
      θ110 − ρ1;
      θ120 + ρ0;
      θ130 − 2ρ1;
      θ140 + 2ρ0;
      θ150 + Δγ1 + ρ1;
      θ160 − Δγ1 − ρ0;
      θ170 + Δγ0;
      θ180 + Δγ0 ]    (4.60)
where ρ is a non-linear term, resulting from the need to bend the knee servos to compensate for
lateral slopes, as shown in Figure 4.4(b), which is computed using (4.61). If the robot needs to compensate for
the slope by leaning to the right then ρ0 = ρ and ρ1 = 0; if the robot leans to the left, then ρ0 = 0
and ρ1 = ρ.

ρ = −11.159·|θ17|² + 3.855·|θ17|    (4.61)
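The update law (4.60) with the knee term (4.61) can be sketched as below. Which side the robot leans to (and hence whether ρ0 or ρ1 is active), and the fact that ρ is evaluated at the updated ankle angle, are modelling choices assumed here for illustration, since the text does not pin them down:

```python
def rho(theta17):
    """Knee compensation term of Equation (4.61)."""
    return -11.159 * abs(theta17) ** 2 + 3.855 * abs(theta17)

def servo_targets(theta0, dg0, dg1, lean_right):
    """Servo update law of Equation (4.60). theta0[i] is servo i's
    initial position in radians; dg0/dg1 are the Delta-gamma components."""
    t = dict(theta0)                    # servos 5-10 keep their positions
    t[17] = theta0[17] + dg0
    t[18] = theta0[18] + dg0
    r = rho(t[17])                      # assumption: rho from updated ankle
    r0, r1 = (r, 0.0) if lean_right else (0.0, r)
    t[1], t[2] = theta0[1] + 3 * dg1, theta0[2] - 3 * dg1     # arms, lateral
    t[3], t[4] = theta0[3] + 3 * dg0, theta0[4] + 3 * dg0     # arms, frontal
    t[11], t[12] = theta0[11] - r1, theta0[12] + r0           # knees
    t[13], t[14] = theta0[13] - 2 * r1, theta0[14] + 2 * r0
    t[15] = theta0[15] + dg1 + r1                             # ankles, lateral
    t[16] = theta0[16] - dg1 - r0
    return t

theta0 = {i: 0.0 for i in range(1, 19)}        # illustrative rest pose
targets = servo_targets(theta0, dg0=0.1, dg1=0.0, lean_right=True)
```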
4.5 Other approaches
4.5.1 Jacobian Controller
Figure 4.5 depicts the controller proposed in this Section.
(a) Frontal Slope (b) Lateral Slope
Figure 4.4: Compensating Slopes
Figure 4.5: Jacobian Controller Diagram - Matrix J is computed using an estimation of the slope and servo positions (either the real ones, represented by the dashed line, or the expected ones, represented by the full line)
Using the Inverted Pendulum model, the measurements given by the accelerometer are computed
by

p = B_U R_ϕ · [0; g; 0] = [ cos(ϕ0)   −sin(ϕ0)·cos(ϕ1)   sin(ϕ0)·sin(ϕ1)
                            sin(ϕ0)    cos(ϕ0)·cos(ϕ1)  −cos(ϕ0)·sin(ϕ1)
                            0          sin(ϕ1)            cos(ϕ1) ] · [0; g; 0]    (4.62)
The rotation matrix that relates the Bioloid and World reference frames can be obtained in
two ways. The first is by using the ϕ angles in the rotation matrix defined in (4.62). The other
is by using the γ angles of the robot and the estimated ψ angles of the ramp. Therefore, we can state
that

B_U R_ϕ = B_F R_γ · F_U R_ψ    (4.63)

where F is the Floor frame. While the calculation of F_U R_ψ is fairly simple, the complexity of calculating
B_F R_γ is much greater, since it depends on the joint positions of 18 servos and several translations.
However, after calculating these matrices, one can state

p = g · [B_U R(1,2); B_U R(3,2)]  ⇒  ∂p = g · [ ∂B_U R(1,2)/∂γ0   ∂B_U R(1,2)/∂γ1
                                                ∂B_U R(3,2)/∂γ0   ∂B_U R(3,2)/∂γ1 ] · [∂γ0; ∂γ1]

⇔  ∂p = g·J·∂γ  ⇔  ∂γ = (1/g)·J⁻¹·(pref − p)    (4.64)
In order to obtain the new joint positions, one can use the method described in [36], in which the
error ∂γ is mapped into the joint space using an inverted Jacobian matrix. In our case, one needs to
integrate ∂γ over a defined timestep, and the same Controller as the one used in the simulations can
be implemented, as in

θk+1 = F(γk + K·∂γk)    (4.65)
This method was not used because the Jacobian approach has two setbacks. In order to construct the
Jacobian, we need to build the rotation matrix depending on the 18 servo positions and the slope of
the platform. This means that the slope of the platform must be estimated at each step, increasing the
complexity of the problem, and that we need feedback from the servos (represented
as a dashed line in Figure 4.5), which is only possible in simulation. Also, after some experiments,
the Jacobian proved to be highly unstable, probably because the large number of variables caused
divisions by numbers close to 0, leading to very large variations of the servo positions and the tipping
over of the robot.
4.5.2 Zero Moment Point and Center of Pressure
This method is widely used to evaluate the tipping point of a walking robot and to create walking
gaits, as in the previously cited [6] and [35], but also in [37] and [38]; a set of interpretations
of the term can be found in [3].

One of the interpretations given in [3] is: ZMP is the point on the floor at which the moments
around the x and y axes (the horizontal axes) generated by the reaction force and moment are zero. Calculating
the ZMP for a gait provides a tool for evaluating the balance of the robot, since the gait is balanced
as long as the ZMP stays inside the supporting polygon at all times. If the ZMP reaches the border
of the polygon, the gait is said to be critically balanced, and once it is outside the polygon, the gait is
unbalanced. It is possible to use unbalanced and critically balanced gaits in order to take advantage
of the momentum caused by this situation.

It is also possible to use the concept of the Center of Pressure (CoP), which is defined in the same
article as: CoP represents the point on the support foot polygon at which the resultant of the distributed
foot-ground reaction forces acts. The CoP exists only within the supporting polygon, where it coincides
with the ZMP if the robot is balanced. When the ZMP goes outside the support polygon (also called
imaginary ZMP), the CoP becomes the tipping point of the robot.

However, this method requires heavy computation and precise knowledge of the dynamics of the robot and
of the walking terrain, both reasons for which it was not implemented.
4.6 Simulation Environment
This simulation has two object nodes, both with distinct physical properties, as shown in Figure
4.6: the Bioloid humanoid and a platform. The platform is a Solid box node on two rotational servos,
located at the center of the solid, and it rotates around the x and z axes. The positions of the servos, which
dictate the slope of the platform, are defined as ψ = [ψx ψz]ᵀ. The platform controller ensures
that both angles stay within ψx ∈ [−11.5°, 11.5°] and ψz ∈ [−11.5°, 11.5°] and randomly changes
the direction and speed of the rotation.

Figure 4.6: Static Balance Simulation
The virtual Bioloid is equipped with a tri-axial gyroscope and a tri-axial accelerometer, both with
update frequencies of 100 Hz, and white noise was added to the measurements, following the results
obtained in Section 3.4, to increase the accuracy of the simulation and the chances of a
successful implementation on the real robot. In order to filter the white noise added to the measurements
and to obtain accurate estimates from the EKF, the noise model for each sensor must reflect
the expected variance of the measurements. If the expected variance is higher, the estimates are
less affected by noise, but the evolution of the state is slower. On the other hand, if the
expected variance is smaller, the estimates can change faster, but are also more susceptible to
noisy measurements. Knowing the values of σrc² and applying Equation (4.35), this setup uses
a gyroscope noise model with

σrd² = σrc²/Δt = [0.0261; 0.0194; 0.0390] · (1/0.02)  (rad·s⁻¹)²    (4.66)

and a bias rate σwd = 10⁻⁶ rad·s⁻¹. As for the accelerometer noise model, the simulations in this
chapter assume the same variance values obtained in Section 3.4, except when indicated otherwise.
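The discretization behind (4.66) is a one-liner worth sanity-checking, in either the standard-deviation form of (4.35) or the equivalent variance form:

```python
import numpy as np

# Discretizing the continuous-time gyroscope noise for the 100 Hz
# simulation: Equation (4.35) in standard-deviation form, and the
# equivalent variance form used in Equation (4.66).

var_rc = np.array([0.0261, 0.0194, 0.0390])  # continuous variances, Table 3.2
dt = 0.02                                    # 100 Hz simulation timestep

sigma_rd = np.sqrt(var_rc) / np.sqrt(dt)     # (4.35): sigma_rd = sigma_rc/sqrt(dt)
var_rd = var_rc / dt                         # variance form, as in (4.66)
```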
The feet of the Bioloid have infinite friction with the floor, so the robot does not slide when the
platform is rotated.
The main goals of this simulation are to evaluate the estimates of the Extended Kalman Filter
and to test whether the controller is able to correct the servos in such a way that the robot is kept in
balance. In order to evaluate the estimates from the EKF, a comparison is made between the positions of the
center of mass of the robot given by the simulator, xCoM = (xCoM, zCoM), and the coordinates
xEKF = (xEKF, zEKF) obtained from the EKF estimates as defined in (4.57). The evaluation of the
controller is done by comparing the values of ψ taken from the platform servos with the correction γ,
and by the magnitude of the error between xCoM and the origin (0,0).
4.7 Results and Discussion
The simulation results presented in this Section deal with the evolution of the angular position of
the platform and of the robot with reference to the World frame. The slope of the platform is defined
by the platform controller and its evolution is depicted in the top left plot of each simulation. The bottom
left plot depicts the angular position of the robot, which the controller aims to minimize. The figure on the
right shows the final position of the robot. The bottom plot shows the comparison between the
(xCoM, zCoM) coordinates given by the simulator and the coordinates (xEKF, zEKF) obtained from
the Extended Kalman Filter estimates.
Rotation around x
(a) Ramp rotation and Robot attitude (b) Robot final position
Figure 4.7: Rotation around x
For this first experiment, the platform is rotated only around the x axis, starting at ψx = 0 and turning
until reaching ψx = 0.15 rad over a period of 14 seconds, while maintaining ψz = 0. This evolution is
depicted in Figure 4.7(a). The γx angle of the robot is able to correct this change of slope, but
maintains a tracking error of 10⁻² rad while the platform is moving, which is only compensated after
2.8 seconds. As for the ψz coordinate, the slope estimates show only slight variations from the real
Figure 4.8: xCoM positions vs. xEKF estimations
slope, induced by the noisy measurements. The stance of the Bioloid when the platform is tilted forward is depicted in Figure 4.7(b).
By analysing Figure 4.8, we can conclude that the robot is kept in balance, since xCoM indicates that the projection of the center of mass is kept near the origin, even with an error of 10⁻³ m in the z coordinate at the end of the experiment. One can also conclude that the Extended Kalman Filter is working correctly, since it is able to filter the noisy measurements and produce estimations very close to the real position of the center of mass.
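The filtering behaviour described above can be illustrated with a scalar Kalman filter. This is only a simplified sketch with hypothetical variances and a one-dimensional random-walk state, not the quaternion-based EKF used in this work:

```python
import random

def kalman_1d(measurements, q=1e-5, r=1e-2):
    """Scalar Kalman filter: random-walk state, noisy measurements.
    q = process noise variance, r = measurement noise variance (assumed)."""
    x, p = measurements[0], 1.0  # initial estimate and covariance
    estimates = []
    for z in measurements:
        p = p + q                  # predict: state assumed (nearly) constant
        k = p / (p + r)            # Kalman gain
        x = x + k * (z - x)        # update with the measurement residual
        p = (1 - k) * p
        estimates.append(x)
    return estimates

random.seed(0)
true_x = 0.0
noisy = [true_x + random.gauss(0, 0.1) for _ in range(200)]
est = kalman_1d(noisy)
# the filtered estimate ends much closer to the true value than a raw sample
print(abs(est[-1] - true_x))
```

The same principle, extended to quaternions and IMU measurements, is what keeps the xEKF trace close to xCoM in Figure 4.8 despite the sensor noise.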
Faster Rotation around x
[plots: rotation of the ramp around x and z (rad) and rotation error of the robot (rad), vs. time (s)]
Figure 4.9: Faster Rotation around x - Ramp rotation and Robot attitude
This experiment is very similar to the previous one, with rotation only around the x axis. However, the rotation is done faster, in only 4 seconds, and in the opposite direction. We can see in Figure 4.9 that the controller is able to correct the robot's pose, but the error while the platform is moving has increased to 4 × 10⁻³ rad. This is expected, since the controller corrects the γ angles based on the position error and does not take into account the speed at which the rotation occurs.
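This rate-dependent lag can be reproduced with a toy position-error-only tracker. The gain and sample time below are hypothetical and the sketch is not the actual controller; it only illustrates that a proportional correction of a ramp reference settles at an error proportional to the slew rate:

```python
def track_ramp(rate, gain=0.1, dt=0.05, t_end=20.0):
    """Proportional tracking of a ramp reference psi(t) = rate * t.
    Returns the tracking error at the end of the run (hypothetical gain/dt)."""
    gamma, t, psi = 0.0, 0.0, 0.0
    while t < t_end:
        psi = rate * t                 # the reference keeps moving
        gamma += gain * (psi - gamma)  # position-error-only correction
        t += dt
    return psi - gamma

slow = track_ramp(rate=0.15 / 14)  # 0.15 rad over 14 s, as in Figure 4.7
fast = track_ramp(rate=0.15 / 4)   # same amplitude in 4 s, as in Figure 4.9
# the faster ramp leaves a larger steady-state tracking error
print(slow, fast)
```

For this update law the steady-state error is rate·dt/gain, which is why the faster experiment shows the larger lag even though the controller eventually converges once the platform stops.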
4.7 Results and Discussion
[plots: Xcom vs. Xekf and Zcom vs. Zekf (m), vs. time (s)]
Figure 4.10: xCoM positions vs. xEKF estimations
The results shown in Figure 4.10 support those shown in Figure 4.9: the xCoM coordinates are maintained around the origin, and the error induced by the rotation of the platform is greater than in the previous experiment. Again, the noisy measurements are well handled by the EKF, which produces estimations very similar to xCoM.
Rotation around z
[plots: rotation of the ramp around x and z (rad) and rotation error of the robot (rad), vs. time (s)]
(a) Ramp rotation and Robot attitude (b) Robot final position
Figure 4.11: Rotation around z
In the third experiment, the platform only rotates around the z axis, starting at ψz = 0 and turning until reaching ψz = 0.15 rad over a period of 14 seconds, while maintaining ψx = 0. As before, the controller maintains a tracking error of 10⁻² rad between the ψ angles of the ramp and the γ angles of the robot, shown in Figure 4.11(a). The main difference between this result and the previous ones is that the slope of the platform is not completely corrected by the controller, which ends with an error of 10⁻³ rad. The stance of the Bioloid when the platform is tilted left is depicted in Figure 4.11(b).
[plots: Xcom vs. Xekf and Zcom vs. Zekf (m), vs. time (s)]
Figure 4.12: xCoM positions vs. xEKF estimations
In Figure 4.12 one can see that the xCoM coordinates indicate the robot is in balance and that the xEKF coordinates have an error of 5 × 10⁻⁴ m, caused by noisy measurements. However, such a small error should have no influence on the balance of the robot.
Rotation around x and z
[plots: rotation of the ramp around x and z (rad) and rotation error of the robot (rad), vs. time (s)]
(a) Ramp rotation and Robot attitude (b) Robot final position
Figure 4.13: Rotation around x and z
For this experiment, the platform rotates around both axes at different rates. The rotation ψz goes from 0 to 0.15 rad in 12 seconds. The rotation ψx has the same amplitude, but each time it reaches the maximum amplitude the direction is changed and the speed of rotation is increased, as shown in Figure 4.13. The controller is able to maintain the robot in balance, but the error between ψx and γx increases from 3 × 10⁻² rad up to 6 × 10⁻² rad. This was expected, since the turn rate of the platform is higher than in the previous experiments and there is no settling time. The error between ψz and γz is much smaller, since the rotation is slower and there is a settling period.
[plots: Xcom vs. Xekf and Zcom vs. Zekf (m), vs. time (s)]
Figure 4.14: xCoM positions vs. xEKF estimations
When comparing the xCoM coordinates and the xEKF estimations, there are multiple aspects to note. While the x position of the center of mass is kept near 0, the z coordinate shows the tracking error noticed in Figure 4.13. However, even under these conditions, the error induced by the constant motion of the platform never exceeds 3 × 10⁻² m.
Robustness to Noise
[plots: rotation of the ramp around x and z (rad) and rotation error of the robot (rad), vs. time (s)]
Figure 4.15: Ramp rotation and Robot attitude, with σ²max = 50σ²real
Using the variance of the sensors measured in Section 3.4, the controller was able to handle the noise of the sensors and produce good simulation results. However, it is expected that the real robot has higher noise rates, caused by the movement of the several servos, and therefore it is important to establish the maximum noise variance the controller is able to cope with.
By repeating the previous simulation and increasing the variance of the noise until the robot was unable to keep its balance, a value of σ²max = 50σ²real resulted in the plots presented in Figures 4.15 and 4.16. Higher values of noise do not cause the robot to fall, but rather cause the EKF estimations
[plots: Xcom vs. Xekf and Zcom vs. Zekf (m), vs. time (s)]
Figure 4.16: xCoM positions vs. xEKF estimations with σ²max = 50σ²real
to fluctuate, making them unusable for any other application. As for the plots shown, Figure 4.15 shows that the robot is unable to maintain a steady evolution of its balancing, even though the error between the rotation of the ramp and the correction of the robot stays within reasonable limits. As for the plot in Figure 4.16, the noisy measurements result in noisy estimations from the EKF, but the true position of xCoM shows that the robot maintains its balance. This is a result of how the inclination of the robot is computed: the small increments never correct the error in a single iteration, so the correction tends toward a mean value.
Discussion
Analysing these results, we can conclude that, despite the simplifications made in these models, the results were satisfactory. The Extended Kalman Filter is able to deal with the noisy measurements and produce trustworthy estimations, very close to the real positions of the center of mass of the robot. The controller shows some lag when the platform rotates at higher rates, but this is expected since the controller only uses the position error to determine the action it takes. The final simulation also shows that, in this setup, the controller is able to handle higher amounts of noise than those predicted. This controller could be improved by estimating the turn rate of the platform and using this estimation to compute the correction needed, but this would bring no benefit for the main goal of this work, since the platforms used for the walking balance do not turn.
5. Walking Balance
Contents
5.1 General Overview
5.2 The learning table
5.3 Slope Interpolator
5.4 Slope Detector
5.5 Gait Generator
5.6 Simulation Environment
5.7 Results and Discussion
5.1 General Overview
This section deals with the problem of creating an adaptive walking gait for different slopes. A block diagram is shown in Figure 5.1, where the main components are identified.
Using the controller shown in Figure 5.1(a), the slope of the ramp and the gait to be used are given by the user, and the platform has a constant slope. This method serves two purposes: on one hand, it allows us to test and improve pre-designed gaits; on the other hand, it serves as a teacher, filling the learning table, explained in Section 5.2, that is used in the autonomous mode. Due to the need for human supervision, this is called the assisted mode.
The controller depicted in Figure 5.1(b) automatically detects the slope of the platform, using the balancing algorithm of the previous chapter, and updates the learning table without external help, hence the name autonomous mode. Using the state estimations, a walking gait is generated after each leg swing, using linear interpolations of pre-designed gaits. The learning table is used to speed up this process and is also updated at each step.
(a) Assisted Mode (b) Autonomous Mode
Figure 5.1: Walking Balance Controller
5.2 The learning table
On a first approach, only the autonomous method was used. This method consists in detecting the slope of the platform on which the robot is standing at each step and creating a new gait at each step, through the Gait Generator block. This process was successfully implemented, but it ran very slowly due to the need to rebalance the robot many times. In order to speed up the process, a learning algorithm was implemented, based on a data table with discrete entries of slopes and gaits. In this table, each column represents a slope of the ramp, each line represents the slope for which a gait was created, and each cell contains the mean of 25 values of the estimated position of the center of mass of the robot when standing on a single support.
If the table is complete, one can use the knowledge of the slope for which the current gait was generated and search the corresponding line for the closest estimation of the center of mass recorded in the table. If a close enough value is found, then the new slope is the column where the recorded value is found. If there is no close enough value, an interpolation between the two closest center of mass values is made and the new slope is estimated using this interpolation. If an interpolation is not possible, then the Slope Detector needs to be run.
The more values the table has, the more effective this process is. While the autonomous method allows data to be saved online, the process takes some time. In order to improve the start of the process, the assisted method was developed. In this method, the motion files created by the generator are played and the estimations of the EKF are saved into the learning table, without needing to balance the robot.
An example of the learning table is shown in Table 5.1. This table is populated with values of zCoM, obtained using (4.57), that are used for balancing the robot around the x axis. As shown in the table, the values of zCoM are monotonic as a function of the slope of the platform for a given gait, and this is what makes possible the interpolation used to estimate changes of slope.
Table 5.1: Example of a populated learning table for a frontal balance - each column is a slope, each line a gait and each cell is the zCoM mean value

Gait\Slope      0.0        2.0        4.0        8.0        10.0       12.0       15.0
0.0          -0.0056m   -0.0072m      *          *          *          *          *
2.0           0.0151m    0.0004m   -0.0195m      *          *          *          *
4.0              *       0.0098m   -0.0064m      *          *          *          *
6.0              *          *       0.0112m   -0.0106m      *          *          *
8.0              *          *          *       -0.0019m  -0.0140m      *          *
10.0             *          *          *        0.0049m  -0.0058m   -0.0136m      *
12.0             *          *          *          *        0.0131m    0.0022m     *
15.0             *          *          *          *          *        0.0124m  -0.0013m
5.3 Slope Interpolator
The Slope Interpolator block is responsible for identifying the current slope of the platform at the beginning of the algorithm and after each leg swing. This block needs an estimation of the center of mass position, which is obtained by reducing the value of σmd, defined in Section 4.6, to 10⁻⁶ m·s⁻² and running a few iterations of the EKF, in order to increase the noise filtering and reduce the error caused by the walking movement.
This process estimates the current slope in order to bypass the Slope Detector and speed up the control loop. For this process to be run, it is required that the current gait is recorded in the learning table and that there are usable values of zCoM or xCoM to approximate or estimate a slope. If there is an entry whose error relative to the estimated coordinate from the EKF is smaller than 10⁻³ m, the corresponding slope is passed to the Gait Generator. Otherwise, the controller searches for a value higher and a value lower than the estimated coordinate and applies a linear interpolation to estimate the current slope, which is then passed to the Gait Generator. Only when the approximation or estimation of a slope is not possible is the Slope Detector called.
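A minimal sketch of this lookup logic, assuming the learning-table row for the current gait is available as a mapping from slope to mean zCoM (function and variable names are hypothetical):

```python
def interpolate_slope(row, z_est, tol=1e-3):
    """Estimate the platform slope from the learning-table row of the
    current gait (slope -> mean zCoM, in m). Hypothetical sketch: returns
    None when the Slope Detector must be run instead."""
    # direct hit: a recorded value close enough to the EKF estimate
    for slope, z in row.items():
        if abs(z - z_est) < tol:
            return slope
    # otherwise find one recorded value below and one above the estimate
    lower = [(z, s) for s, z in row.items() if z < z_est]
    upper = [(z, s) for s, z in row.items() if z > z_est]
    if not lower or not upper:
        return None  # interpolation impossible -> run the Slope Detector
    (z0, s0), (z1, s1) = max(lower), min(upper)
    # linear interpolation, possible because zCoM is monotonic in the slope
    return s0 + (z_est - z0) * (s1 - s0) / (z1 - z0)

# row for the 2.0-degree gait from Table 5.1 (slope in degrees -> zCoM in m)
row = {0.0: 0.0151, 2.0: 0.0004, 4.0: -0.0195}
print(interpolate_slope(row, 0.0005))   # within tolerance of the 2.0 entry
print(interpolate_slope(row, -0.0100))  # interpolated between 2.0 and 4.0
print(interpolate_slope(row, 0.0300))   # no bracketing values -> None
```

The `None` case corresponds to the fallback described above, where the full balancing algorithm of Chapter 4 has to be run.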
5.4 Slope Detector
This block implements the algorithm developed in Chapter 4 to balance the robot and to estimate the current slope of the platform. Since the Slope Detector is run while the robot is in a single support phase, the reference ϕd changes depending on the supporting foot, in order to maintain balance: if the foot on the ground is the right one, the reference is ϕd = [0.15 rad, 0]ᵀ, and if it is the left one, the reference is ϕd = [−0.15 rad, 0]ᵀ.
This block returns not only the new slope, for the gait generator to create a more adequate gait if necessary, but also the error between the detected slope and the closest gait present in the learning table.
A remark should be made about saving the data obtained by running the Slope Detector. This method allows us to save two pairs of data, [oldGait][newSlope] and [newGait][newSlope], since the position of the center of mass can be measured before and after balancing the robot.
5.5 Gait Generator
There are several methods that can be used to plan the trajectory of the center of mass so that the robot keeps its balance while walking, but in order to obtain a walking gait from this trajectory one must use Inverse Kinematics, which is a slow process considering the robot has 18 servos.
Using the Webots Motion Editor feature, several motion files were created by hand for walking on different slopes. A motion file is a set of poses, or servo positions, with a timestamp, that can be edited and played by Webots. This also resembles the motion files created by the Motion Editor software from Robotis. The motion files were created for the robot to walk on a plane surface, on a slope of 0.175 rad (10°) around x, and on a slope of 0.175 rad (10°) around z.
When inserted in the control loop, the generator will only create a new gait if (1) the slope has changed and (2) the difference between the slope for which the current gait was created and the current slope, on any rotation axis, is higher than 0.3 rad.
5.5.1 Frontal Slope
When supplied with the new slope the robot is standing on, the gait generator uses two pre-designed motion files to apply a linear interpolation to the position of each servo for each pose of the motion. One of the pre-designed gaits is for a plain platform and the other should be for a slope as high as possible, which improves the interpolation and creates balanced gaits. Using this method, if a servo position is independent of the frontal slope, for example the arm servos, the new gait will keep that position. The first step is to determine the equation of the linear function relating the two positions, using Equations (5.1) and (5.2), where θio is the position of servo i for γxo and θif is the position of servo i for γxf.
m = (θif − θio) / (γxf − γxo)   (5.1)

b = θif − m · γxf   (5.2)
In order to obtain the new position, Equation (5.3) is used, where θid is the position of servo i for
the desired slope γxd.
θid = m · γxd + b (5.3)
By using different world simulations, this method was used to create gaits for slopes where γx ∈ [0, 20] degrees.
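Equations (5.1)–(5.3) amount to the following small helper; the numeric servo positions in the usage lines are hypothetical:

```python
def servo_for_slope(theta_o, theta_f, gamma_o, gamma_f, gamma_d):
    """Equations (5.1)-(5.3): linear interpolation of one servo position
    between two pre-designed gaits (angles in rad)."""
    m = (theta_f - theta_o) / (gamma_f - gamma_o)  # (5.1)
    b = theta_f - m * gamma_f                      # (5.2)
    return m * gamma_d + b                         # (5.3)

# hypothetical positions of one servo for the flat gait and the 0.175 rad gait
print(servo_for_slope(0.20, 0.32, 0.0, 0.175, 0.0875))  # halfway slope -> 0.26
# a slope-independent servo (e.g. an arm) keeps its position: m = 0
print(servo_for_slope(1.0, 1.0, 0.0, 0.175, 0.1))       # -> 1.0
```

The second call shows the property mentioned above: when the two pre-designed gaits agree on a servo, the interpolation leaves it untouched.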
5.5.2 Lateral Slope
The creation of gaits for lateral slopes is very similar to the previous case. By using two pre-designed motion files for the cases where the slope is lateral and using the same linear interpolation as before, it was possible to generate new gaits for lateral slopes. However, in order to deal with the bending of the knee, as seen in Figure 4.2(b), the position of the leg servos results from the sum of two components. The first component is related to the walking movement and is obtained by the same interpolation method used in the previous section. The second component is ρ, as defined in Equation (4.61), which is independent of the interpolation and is used to compensate the lateral inclination by leaning the robot.
For example, if we have θ14 = −0.4 rad for γz = 0 and θ14 = −1.08 rad for γz = 0.175 rad, we compute ρ(0) = 0 in the first case and ρ(0.175) ≈ 0.34 in the second case. Assuming we want a new gait for γz = 0.087 rad, we interpolate between θ14a = −0.4 − 0 = −0.4 and θ14b = −1.08 + 2 × 0.34 = −0.4, obtaining θ14new = −0.4. After the interpolation, the new value of ρ(0.087) = 0.25 is added, yielding θ14new = −0.4 − 2 × 0.25 = −0.9 rad.
This gait generation method was also tested on different slopes where γz ∈ [0, 0.21] rad.
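The θ14 example can be checked with a small helper. This is a sketch under the assumption that the lean term enters the servo position as −2ρ, matching the factor of 2 used in the worked example:

```python
def lateral_servo(theta_o, theta_f, rho_o, rho_f, gamma_f, gamma_d, rho_d):
    """Lateral-slope gait generation: interpolate the walking component
    (servo angle with the lean term 2*rho removed), then add back the
    compensation -2*rho for the desired slope. Sketch; the factor 2 is
    assumed from the knee-bending geometry of Equation (4.61)."""
    walk_o = theta_o + 2 * rho_o      # remove the lean from the flat gait
    walk_f = theta_f + 2 * rho_f      # remove the lean from the sloped gait
    m = (walk_f - walk_o) / gamma_f   # linear fit, with gamma_o = 0
    walk_d = m * gamma_d + walk_o     # interpolated walking component
    return walk_d - 2 * rho_d         # lean back for the desired slope

# the theta_14 example from the text: gaits for gamma_z = 0 and 0.175 rad
theta_new = lateral_servo(-0.4, -1.08, 0.0, 0.34, 0.175, 0.087, 0.25)
print(theta_new)  # reproduces the worked example: -0.4 - 2*0.25 = -0.9 rad
```

Separating the two components this way is what keeps the walking interpolation of Subsection 5.5.1 reusable while the lean compensation ρ is applied independently.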
5.5.3 Combination of Slopes
For the two previous gait generation methods, the rotations were always around a single axis. The interpolations used to create new gaits can be seen as corrections to the walking gait for a plain platform, depending only on one variable. If the platform is rotated around both axes, a better mapping of the correction as a function of (ψx, ψz) needs to be done. Until this mapping is available, an approximate method was implemented.
To create a gait, a composition of the previous two methods was developed. Taking the original gait, a temporary motion file is created for the case where ψx ≠ 0 and ψz = 0, using the interpolation method detailed in Subsection 5.5.1. Then the method developed in Subsection 5.5.2 is applied to the temporary motion file. The gaits created in this way have worked for several rotation combinations as long as

√(ψx² + ψz²) < 0.175   (5.4)
However, these gaits were not as successful when used on a platform with varying slopes: on several occasions, the step taken before the controller identified a change of slope would lead the robot to fall, since the rotation around two axes creates a more unstable balance that can easily be broken.
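A sketch of the composition, with the validity check of Equation (5.4); the gait representation and the two correction functions are stand-in placeholders, not the actual motion-file processing:

```python
import math

def combined_gait(gait, psi_x, psi_z, apply_frontal, apply_lateral):
    """Compose the two single-axis methods: first correct for psi_x
    (Subsection 5.5.1), then for psi_z (Subsection 5.5.2). Hypothetical
    sketch; `gait` and the correction functions are placeholders."""
    if math.hypot(psi_x, psi_z) >= 0.175:   # Equation (5.4) limit
        raise ValueError("combined slope outside the validated region")
    temporary = apply_frontal(gait, psi_x)  # handles psi_x with psi_z = 0
    return apply_lateral(temporary, psi_z)  # then the lateral correction

# toy placeholder corrections acting on a single servo angle
frontal = lambda g, s: g + 0.5 * s
lateral = lambda g, s: g - 0.3 * s
print(combined_gait(0.2, 0.1, 0.1, frontal, lateral))
```

The hard limit mirrors the empirical validity region quoted above; outside it, a proper two-variable mapping of the corrections would be required, as the text notes.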
5.6 Simulation Environment
Several simulation worlds were created for the experiments, featuring static platforms with different slopes around both the x and z axes, and one world with several platforms of different slopes to test the algorithm. The definitions given in Section 4.6 of the previous chapter are maintained, and the Bioloid structure is unchanged.
In these simulations, the Bioloid alternately walks and balances itself, and the movements involved require that the noise model of the EKF be adjusted to cope with a faster evolution of the state and the need for more accurate estimations. In order to maintain a notion of quantity, the co-variances used in the noise models of the EKF are proportional to the ones established in Section 3.4, so that σ²i = α · σ²real, where i is either the accelerometer or the gyroscope. With that in mind, the noise models follow the values shown in Table 5.2.
Table 5.2: Co-variance values α for each sensor noise model of the EKF

            Accelerometer   Gyroscope
Walking     10⁷             10⁷
Balancing   10⁻³            10⁻⁶
While these settings result in high noise rates while the robot is walking, they also ensure that the estimations are more accurate when the robot is balancing itself. Most simulations provide sensor readings with the variance of the real sensors, obtained in Section 3.4; when a simulation uses different values, it is explicitly stated.
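The switching of Table 5.2 can be written down directly; the σ²real value in the usage line is a hypothetical placeholder for the variances measured in Section 3.4:

```python
# proportionality factors alpha from Table 5.2 (sigma_i^2 = alpha * sigma_real^2)
ALPHA = {
    "walking":   {"accelerometer": 1e7,  "gyroscope": 1e7},
    "balancing": {"accelerometer": 1e-3, "gyroscope": 1e-6},
}

def ekf_variance(mode, sensor, sigma2_real):
    """Co-variance used in the EKF noise model for the given mode and sensor.
    sigma2_real stands for the variance measured in Section 3.4 (sketch)."""
    return ALPHA[mode][sensor] * sigma2_real

print(ekf_variance("balancing", "gyroscope", 2.5e-5))  # hypothetical sigma2_real
```

The large walking factors deliberately de-weight the sensors while the gait is being played, and the small balancing factors tighten the estimates exactly when the Slope Detector needs them.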
The simulations presented in the next section show that the gaits generated by the controller are well balanced, as well as the effects of using a gait on a different slope. It is also shown how the usage of the learning table speeds up the process of walking on a platform with multiple slopes, and how the controller handles noisy measurements.
5.7 Results and Discussion
Figure 5.2: Example of a multiple frontal slope platform
[plots: Position Estimations of the EKF for frontal slope - x(m) and z(m) vs. time (s), for slopes of 0°, 2°, 4°, 8° and 12°]
Figure 5.3: xEKF(t) in assisted mode for ψx ≠ 0 and ψz = 0
The effectiveness of this work can be evaluated according to several performance metrics. Not only is it important that the robot is able to adapt to a change of slope, but the controller must also be robust enough to handle deviations in the starting point and to ensure the robot keeps walking in a straight line. In addition, the controller should also show an improvement in speed thanks to the learning table. The plots shown in this section represent xEKF = (xEKF, zEKF), the coordinates of the center of mass obtained from the EKF estimations defined in (4.57), while running the controller in assisted and autonomous modes. When taking a step, the robot goes through the following phases:
• Balancing - The robot balances itself by reducing the error between ϕ and ϕd. In the first run, ϕd = (0, 0), because the robot has both feet on the floor (double support phase). Afterwards, ϕd is as defined in Section 5.4, since the balance is achieved while on single support.
• Leg Swing - During this phase the robot leans forward and advances the leg that is suspended. In the plots, this phase is represented by an increase in the value of zEKF, while xEKF goes to zero.
• Double Support - In this phase the robot needs to shift the supporting point from the initial supporting foot to the other one. It is very important that the momentum gained in the previous stage is used, in order to save energy. In this phase, zEKF is at its maximum and xEKF stays around zero.
• Leg Swing - When the robot changes the supporting foot, it can suspend the other leg and move to a more stable stage. This phase is identified when zEKF decreases to values close to zero and xEKF strays away from zero.
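The four signatures above can be condensed into a rough classifier. The thresholds are entirely hypothetical (the plot amplitudes are only around ±0.1 m), and the function is merely an illustration of how the phases show up in the (xEKF, zEKF) traces:

```python
def step_phase(x_ekf, z_ekf, z_max=0.1, eps=0.02):
    """Classify the walking phase from the CoM estimates, following the
    qualitative signatures above. z_max and eps are hypothetical thresholds."""
    if abs(x_ekf) < eps and z_ekf > z_max - eps:
        return "double support"       # z at its maximum, x around zero
    if abs(x_ekf) < eps and z_ekf > eps:
        return "leg swing (forward)"  # leaning forward, x going to zero
    if z_ekf < eps and abs(x_ekf) >= eps:
        return "leg swing (recover)"  # z near zero, x strays from zero
    return "balancing"                # steady single/double support

print(step_phase(0.0, 0.095))
print(step_phase(0.05, 0.0))
print(step_phase(0.01, 0.05))
```

Reading the plots below with these signatures in mind makes the trapezoidal xEKF wave form and the zEKF peaks easier to interpret.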
Evaluating the generated gaits
[plots: Position Estimations of the EKF for lateral slope - x(m) and z(m) vs. time (s), for slopes of 0°, 2°, 4°, 8° and 10°]
Figure 5.4: xEKF(t) in assisted mode for ψx = 0 and ψz ≠ 0
The plots in Figures 5.3 and 5.4 show the evolution of xEKF and zEKF as a function of time when using different gaits on the slopes for which they were designed. The first plots were taken on platforms with slopes around x and the second on platforms with slopes around z.
The evolution of both variables follows the expected behaviour: xEKF has a trapezoidal wave form with zero mean, where the steady zones correspond to the stable phase, which is where the balance controller acts in the autonomous mode, and the fast-changing zones correspond to the leg swings and the double support phase; zEKF takes mostly positive values when the robot leans forward to gain momentum and values close to zero when in a more stable stage. After the initial balancing movement,
the gaits overlap, in general, showing that the corrections made to the original gait in order to handle new slopes are adequate. The leg swing stages also show rapid changes of the EKF estimations, due to the noise of the sensors. Instead of filtering out this noise, it was preferred to allow these changes to occur, since they also enable a faster stabilization of the estimations when the robot finishes the leg swing movement.
A remark should also be made about the fact that the initial heading of the robot is maintained while executing these gaits. This means that the gaits handle the momentum gained when taking a step and that the supporting foot is well placed on the floor. Since the feet of the robot are very rigid, a misplaced foot could cause the robot to rotate around itself.
Using the learning table
[plots: x(m) and z(m) vs. time (s), for gaits of 0°, 2°, 4° and 8°]
Figure 5.5: xEKF(t) in assisted mode when playing different gaits on a ψx = 4° slope
In order to populate the learning table, the assisted method can be used to run a given gait on different slopes. Figures 5.5 and 5.6 respectively display the xEKF and zEKF coordinates when the robot uses different gaits on a slope of ψx = 4° and ψz = 4°. The displacement created by the mismatch between gait and slope is what enables us to use the learning table to speed up the estimation of the slope, since it creates a monotonic function. Also of note is the increased instability for lateral slopes, which can be seen in Figure 5.6. While for ψx = 4° gaits designed for slopes of 0° up to 8° are good enough for the robot to keep walking, for ψz = 4° there is only a 1° margin before the walking gaits start failing. Looking at x(t) in this figure, we see that the gaits for 2° and 6° do not allow the robot to complete the leg swing and regain balance. This limits the adaptability of the robot to changes of lateral slope larger than 1° per step.
[plots: x(m) and z(m) vs. time (s), for gaits of 2°, 3°, 4°, 5° and 6°]
Figure 5.6: xEKF(t) in assisted mode when playing different gaits on a ψz = 4° slope
[plots: x(m) and z(m) vs. time (s)]
(a) First Run (b) Second Run
Figure 5.7: xEKF(t) in autonomous mode on a single slope ramp - The red trajectories represent the running of the Slope Detector
Using the Autonomous controller
In this simulation, the controller is run in autonomous mode on a platform with ψx = 10° and ψz = 0. The plot in Figure 5.7(a) represents the evolution of xEKF when the learning table is not used. The controller starts by balancing the robot for the first 6 seconds. After the initial evaluation of the slope, there are 6 more moments where the evaluation is repeated, taking approximately 1 second each time, since the center of mass is already close to the goal. The total time of this run is 33 seconds.
The plot in Figure 5.7(b) represents the evolution of xEKF using the learning table, which has been previously filled with values. After the initial evaluation of the slope, the slope estimation is made by interpolating the values in the table, and the Slope Detector is run only twice, when the noise of the measurements does not allow the controller to recognize the slope. This run is completed in 29 seconds, 4 seconds faster than the previous experiment, and this difference is even greater when the platform has multiple slopes.
Robustness to Noise
[plots: x(m) and z(m) vs. time (s)]
(a) Variance values of σ²real (b) Variance values of σ² = 20 · σ²real
Figure 5.8: xEKF(t) when the robot adjusts to a new slope, both with normal noise variances and high noise variances
In this case, the robot is placed on the floor, with no slope, and walks up to a platform with a 3° slope. The plots in Figure 5.8(a) show its progress when using normal noise values, and the transition from the plain floor to the platform is clear at the 31st second, when it takes the robot longer to balance. As in the previous chapter, it is important to know how the controller handles higher noise rates. In this case, σ²max = 20σ²real was the highest value ensuring the robot did not stumble or lose its walking direction. The estimation results are shown in Figure 5.8(b), where it can be seen that they are much more unstable than in the previous plots. It is also of note that the estimations tend to stray from the real positions of xCoM after the first 40 seconds of walking.
Using the Autonomous controller on a multiple slope platform
This simulation places the robot at the bottom of a platform with increasing slope, up to 0.35 rad (20°), as shown in Figures 5.9 and 5.10. The first run is done without the aid of the learning table, and the slope has to be re-evaluated after each of the 20 steps. The second run uses the data collected during the first run and saved in the learning table.
A comparison between Figures 5.9 and 5.10 shows that the robot reaches the final position much faster when it uses the learning ability than when it does not, taking 31.5 seconds less. While in the upper plot all steps take approximately the same time, in the plot of Figure 5.10 it is possible to distinguish the times when the controller needs to run the Slope Detector from those when it is able to use interpolation to estimate the slope: the large intervals between peaks of zEKF occur when
[plots: x(m), z(m) and slope estimation (rad) vs. time (s)]
Figure 5.9: xEKF(t) in autonomous mode on a multiple slope ramp, without using the learning table. The red trajectories represent the running of the Slope Detector
the robot is balancing itself. It is also clear that each step takes, in general, less time when using the learning table. Around the 100th second, a disturbance can be noted in the plot, due to a bad evaluation of the change of slope, but it is well handled and corrected by the controller.
It should also be noted that the third plot in Figures 5.9 and 5.10, which represents the slope estimations made by the controller, shows that the estimations of the last steps display signs of instability. Indeed, if the robot takes many steps, the error at each evaluation of the EKF accumulates, causing instability and the fall of the robot. This could be fixed by using an external fixed point, a beacon that allows the controller to correct the estimations by measuring the distance and orientation between the two.
[plots: x(m), z(m) and slope estimation (rad) vs. time (s)]
Figure 5.10: xEKF(t) in autonomous mode on a multiple slope ramp, using the learning table. The red trajectories represent the running of the Slope Detector
6. Conclusions
The controller proposed in this work enables the Bioloid humanoid robot to have an adaptive walking gait in the presence of sloped terrains, while using a simple sensory system consisting only of an accelerometer and a gyroscope. This controller uses a quaternion-based Extended Kalman Filter with these sensors as inputs for attitude estimation, with good estimation results, and the Inverted Pendulum model for a humanoid robot allows the simplification of the involved physics without adding any significant error, as shown in Section 4.7.
As for the adaptive walking, the generation of walking gaits by linearly interpolating previously
made gaits is also applicable to other humanoid robots, since no kinematic model of the robot is
required. Unfortunately, the creation of gaits for slopes consisting of rotations around two axes still
needs to be implemented. Also, as shown in Section 5.7, the use of the learning table improves
the initial controller, making it faster and more adaptable should it be used on other humanoid robots.
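The interpolation scheme can be sketched as follows. The keyframes and joint angles below are invented for illustration; the real gaits are tables of servo positions recorded for specific slopes:

```python
def interpolate_gait(gait_low, gait_high, slope_low, slope_high, slope):
    """Linearly interpolate two pre-made gaits, keyframe by keyframe,
    to obtain a gait for an intermediate slope."""
    assert slope_low <= slope <= slope_high
    t = (slope - slope_low) / (slope_high - slope_low)
    return [
        [(1.0 - t) * a + t * b for a, b in zip(frame_lo, frame_hi)]
        for frame_lo, frame_hi in zip(gait_low, gait_high)
    ]

# Two hypothetical gaits: lists of keyframes, each a list of joint angles (rad).
gait_flat  = [[0.00, 0.10], [0.05, 0.20]]   # recorded on flat ground
gait_steep = [[0.10, 0.30], [0.15, 0.40]]   # recorded on a 0.4 rad ramp

# A slope of 0.2 rad yields a gait halfway between the two source gaits.
gait_mid = interpolate_gait(gait_flat, gait_steep, 0.0, 0.4, 0.2)
print(gait_mid)
```

Because the interpolation acts directly on recorded joint positions, it never needs the forward or inverse kinematics of the robot, which is what makes the approach portable to other platforms.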
The simulations also show that the controller can handle greater amounts of measurement noise
than the levels obtained while validating the virtual model in Section 3.4. Due to its low hardware
requirements, this system can be applied to other robots to simplify the walking process.
Future work
In order to implement this controller on the real Bioloid, a few improvements need to be made to
the platform. A new circuit board is under development so that the humanoid can run the controller
autonomously, and adding pressure sensors to the feet would also be very useful, since they would
provide an early warning whenever a foot touches the ground sooner or later than expected.
As for the controller, there are two main features open for improvement. On one hand, the
generation of walking gaits for slopes where ψx ≠ 0 and ψz ≠ 0 must be worked on, and a good mapping
should be devised to replace linear interpolation in cases where the slope changes in both directions.
On the other hand, the learning system can be upgraded from learning tables to function approximation,
so that fewer training values become necessary for the same level of performance. The simulation should
also be run with different noise models, in order to experiment with non-white noise.
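As a sketch of what such an upgrade might look like: a least-squares line fitted to a few (slope, parameter) training pairs generalises between training points, whereas a table can only look up the entries it stores. All numbers below are invented:

```python
def fit_line(samples):
    """Ordinary least-squares fit of y = a*x + b to a list of (x, y) pairs."""
    n = len(samples)
    sx = sum(x for x, _ in samples)
    sy = sum(y for _, y in samples)
    sxx = sum(x * x for x, _ in samples)
    sxy = sum(x * y for x, y in samples)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Hypothetical learning-table entries: slope (rad) -> step-time scaling factor.
table = [(0.0, 1.00), (0.1, 1.05), (0.2, 1.10), (0.3, 1.15)]

a, b = fit_line(table)
# Unlike a table lookup, the fit answers queries between training slopes.
print(a * 0.15 + b)
```

More expressive approximators (polynomials, radial basis functions, small neural networks) follow the same pattern: a handful of trained parameters replaces a dense grid of table entries.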
The controller in this project works with internal measurements from the IMU, and the error
induced by noise accumulates in each iteration, making it hard to maintain good slope estimates on
longer walks. This error could be reduced with an additional input system. For example, pressure
sensors on the feet could prevent the robot from taking a step that would make it stumble and fall.
Knowing the distance and orientation to an external feature would also enable the controller to correct
the estimates from the EKF and handle even higher noise levels.
In addition, future work on this subject should also focus on developing trajectory planning and
creating gaits that allow the robot to turn instead of simply walking forward.