A current review of open-ended evolution in evolutionary robotics: “Embodied Artificial Life at an Impasse: Can Evolutionary Robotics Methods Be Scaled?” Proceedings of the 2014 IEEE Symposium Series on Computational Intelligence (IEEE SSCI’14), Orlando, FL, Dec. 9-12, 2014.
A review of the first two decades of evolutionary robotics research: “Fitness functions in evolutionary robotics: A survey and analysis,” Robotics and Autonomous Systems, vol. 57, no. 4, pp. 345-370, Apr. 2009.
A very well-written, forward-looking analysis of evolutionary robotics: Doncieux and Mouret, “Beyond black-box optimization: a review of selective pressures for evolutionary robotics,” Evolutionary Intelligence, vol. 7, no. 2, pp. 71-93, 2014.
What is Evolutionary Robotics?
Evolutionary robotics (ER) is a focus of research within the much larger fields of artificial life (ALife) and fully autonomous robotics.
One of the primary goals of evolutionary robotics is to develop
automatic methods for creating intelligent autonomous robot
controllers, and to do so in a way that does not require direct
programming by humans. The primary advantage of robot design
methods that do not require hand coding or in-depth human
knowledge is that they might one day be used to produce
controllers or even whole robots that are capable of functioning
in environments that humans do not understand well.
Possibly the most difficult aspect of this research is the
development of an evolutionary system in which truly open-ended
evolution can be harnessed. Note: This site has been live for
over a decade and is currently being updated to include a
discussion of recent research and references to many additional
current ER and ALife researchers (as of 2015). A very partial
list of these includes: (C. Ampatzis, N Bredeche, D. M. Bryson,
Clune, A. L. Christensen, S. Doncieux, M. S. Duarte, A. E.
Eiben, D. Fussell, J. Gomes, Hickinbotham, Joachimczak, J.
Lehman, D. Lessin, J.-M. Montanier, J.-B. Mouret, G. S.
Nitschke, B. Østman, Shao, F. Silva, K. O. Stanley, L. B. Soros,
V. Trianni, E. Tuci, P. Urbano). In addition, some very
early work in computational artificial evolution still holds
relevance to prominent current issues such as how to implement
the open-ended evolution of complexity (Friedman-1956,
Fogel-1966, Von Neumann-1949).
Evolutionary robotics uses population-based artificial evolution (fogel-1966, holland-1975, friedman-1956) to evolve autonomous robot controllers (i.e., robot brains) and sometimes robot morphologies (i.e., robot bodies) (lipson-n-2000). Generally, the robots are evolved to perform tasks requiring some level of intelligence, for example moving around in an environment without running into things.
The process of
controller evolution consists of repeating cycles of controller
fitness testing and selection that are roughly analogous to generations in natural evolution. Evolution is initialized by
creating a population of randomly configured robots (or robot
controllers). During each subsequent cycle, or generation, each
of the robot controllers competes in an environment to perform
the task for which the robots are being evolved. This process
involves placing each controller into a robot and then allowing
the robot to interact with its environment for a period of time.
Following this, each controller’s performance is evaluated using
a fitness selection function (objective function) that measures
how well the task was performed. The controllers in the better
performing robots are selected, altered and propagated in a
repeating process that mimics natural evolution. The alteration process is also inspired by natural evolution and may include mutation and the exchange of genetic material (crossover). Cycles are repeated
for many generations to train populations of robot controllers
to perform a given task.
Figure 1. An overview of a typical evolutionary robotics training cycle
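To make this cycle concrete, the following Python sketch implements a minimal generation loop of the kind just described. The genome length, population size, selection scheme, and the evaluate_in_environment stub are all illustrative assumptions; in an actual ER experiment the evaluation step would run each controller in a physical or simulated robot.

    import random

    GENOME_LENGTH = 20      # e.g., 20 connection weights (illustrative)
    POPULATION_SIZE = 50
    GENERATIONS = 100
    MUTATION_RATE = 0.1     # per-gene probability of mutation

    def random_genome():
        """A controller genome: a flat list of real-valued parameters."""
        return [random.uniform(-1.0, 1.0) for _ in range(GENOME_LENGTH)]

    def evaluate_in_environment(genome):
        """Stand-in for fitness testing: in real ER the genome would be
        decoded into a controller, run in a robot (or a simulation) for
        a period of time, and scored on task performance."""
        return -sum(g * g for g in genome)  # toy objective only

    def tournament_select(population, fitnesses, k=3):
        """Pick the best of k randomly chosen individuals."""
        contenders = random.sample(range(len(population)), k)
        best = max(contenders, key=lambda i: fitnesses[i])
        return population[best]

    def crossover(parent_a, parent_b):
        """One-point crossover: exchange genetic material between parents."""
        point = random.randrange(1, GENOME_LENGTH)
        return parent_a[:point] + parent_b[point:]

    def mutate(genome):
        """Perturb each gene with small probability."""
        return [g + random.gauss(0.0, 0.2) if random.random() < MUTATION_RATE
                else g for g in genome]

    population = [random_genome() for _ in range(POPULATION_SIZE)]
    for generation in range(GENERATIONS):
        fitnesses = [evaluate_in_environment(g) for g in population]
        population = [mutate(crossover(tournament_select(population, fitnesses),
                                       tournament_select(population, fitnesses)))
                      for _ in range(POPULATION_SIZE)]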
What is Created During Evolution?
In the majority of evolutionary robotics work, only the control programs are created and configured by the evolutionary process. These controllers come in a variety of forms including neural networks, genetic programming structures (koza-ecal-1992), fuzzy logic controllers (hoffmann-ipmu-1996), and simple look-up and parameter tables that relate sensor inputs to motor outputs (augustsson-gecco-2002). There have also been several examples of evolvable hardware circuits being evolved for robot control.
Neural networks
are by far the most common type of controllers used in
evolutionary robotics. These can be encoded for the process of
evolution in a variety of ways. For instance, a neural
controller can be represented as a set of connection weights. In
this case it is the weights of the network that are actually
evolved. The majority of neural networks used in evolutionary robotics are small and accommodate fewer than ten sensor inputs (nolfi-iwal-1994, quinn-iwbir-2002). These networks usually have fewer than ten neurons and between ten and fifty weighted connections. In such cases just the set of weights, represented by ten to fifty numbers, would be evolved. The
largest networks in ER have about 150 inputs and about 5000
connections (nelson-jras-2006).
For these large networks the set of weights and neuron
configuration are evolved in the form of a variable sized matrix
of numbers.
Figure 2. Robot Brains: Example neural network robot controllers
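As a sketch of how a fixed-topology neural controller reduces to an evolvable list of numbers, the fragment below (a hypothetical illustration, not the encoding from any particular study) decodes a flat weight vector into a small fully connected feedforward network with three sensor inputs, four hidden neurons, and two motor outputs.

    import math

    N_SENSORS, N_HIDDEN, N_MOTORS = 3, 4, 2   # illustrative sizes

    def decode_and_run(genome, sensor_values):
        """Interpret a flat genome as the weights of a 3-4-2 feedforward
        network and compute motor outputs from sensor inputs."""
        assert len(genome) == N_SENSORS * N_HIDDEN + N_HIDDEN * N_MOTORS
        w_in = genome[:N_SENSORS * N_HIDDEN]     # sensor-to-hidden weights
        w_out = genome[N_SENSORS * N_HIDDEN:]    # hidden-to-motor weights
        hidden = [math.tanh(sum(sensor_values[s] * w_in[s * N_HIDDEN + h]
                                for s in range(N_SENSORS)))
                  for h in range(N_HIDDEN)]
        motors = [math.tanh(sum(hidden[h] * w_out[h * N_MOTORS + m]
                                for h in range(N_HIDDEN)))
                  for m in range(N_MOTORS)]
        return motors  # e.g., left and right wheel speeds

With this decoding, the genome is just a list of 3*4 + 4*2 = 20 weights, and mutation and crossover operate directly on that list, never on the network structure itself.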
Not only controllers can be evolved. It is possible to encode the physical structure of a robot and evolve that as well. Although there were attempts to do this in the early years of ER research, it has only been in the past 10 to 15 years that such methods have led to robots able to function in the real world. These recent results were accomplished by formulating a set of modular building units that could be easily simulated and fabricated, but that could also be configured and combined into an almost infinite variety of non-trivial robot bodies (lipson-n-2000, hornby-icra-2001, macinnes-al-2004, bongard-2010).
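Robot bodies can be encoded in a similar spirit. The representation sketched below, a list of modular units and their attachment points, is a deliberately simplified, hypothetical stand-in for the modular-unit encodings used in the work cited above.

    import random
    from dataclasses import dataclass, field

    @dataclass
    class Module:
        """One modular building unit: a bar with an actuated joint."""
        length: float          # physical size of the unit
        joint_angle: float     # resting angle of the actuated joint
        parent: int            # index of the module it attaches to (-1 = root)

    @dataclass
    class BodyGenome:
        modules: list = field(default_factory=list)

    def mutate_body(genome):
        """Structural mutation: add a new module or perturb an existing one."""
        if not genome.modules or random.random() < 0.3:
            parent = random.randrange(len(genome.modules)) if genome.modules else -1
            genome.modules.append(Module(length=random.uniform(0.1, 1.0),
                                         joint_angle=random.uniform(-1.0, 1.0),
                                         parent=parent))
        else:
            target = random.choice(genome.modules)
            target.joint_angle += random.gauss(0.0, 0.1)
        return genome

Because the genome is variable in length, structural mutations can grow bodies of arbitrary complexity, which is what allows an almost infinite variety of morphologies to emerge.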
What do the Robots Actually Do?
Much of the research done to date has evolved robots capable of only very simple behaviors. Common benchmark tasks that have been
studied include simple locomotion, locomotion with object
avoidance (cliff-spie-1993,
grefenstette-mlwrl-1994),
phototaxis (moving toward light sources), and learning how to
walk in the case of legged robots (beer-ab-1992,
jakobi-1998).
There are only a handful of experiments that have investigated
tasks of any significant degree of difficulty. In one example,
robots learned to visit three goal locations in a specific order
(capi-ab-2005).
In
another example of a relatively difficult task, teams of robots
were evolved to compete against one another to find goal objects
in very large complicated environments (nelson-jras-2006).
In order to perform these tasks the robots had to learn to see,
and then to discriminate and react to several different types of
objects in their environment. Several tasks that required robots
to perform sequential movements have also been studied (floreano-nn-2000).
In
these tasks robots typically must move to an initial goal
position before traveling to another final home position. In
another sequential task, robots were evolved to search for and
pick up objects in an arena and then to drop the objects outside
the border of the arena (beer-ab-1992,
jakobi-1998).
Recent work demonstrating up to an order of magnitude greater
task complexity has been reported (Eiben, Christensen, Silva,
among others).
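Many of the navigation-with-obstacle-avoidance experiments mentioned above scored robots with a per-time-step product of terms rewarding speed, straight-line motion, and obstacle clearance. The sketch below follows that widely cited form (in the spirit of floreano-nn-2000); the exact normalizations chosen here are assumptions.

    import math

    def step_fitness(v_left, v_right, proximity):
        """Per-time-step fitness for navigation with obstacle avoidance.
        v_left, v_right: wheel speeds normalized to [0, 1].
        proximity: activation of the most active proximity sensor in
        [0, 1], where 1.0 means an obstacle is very close."""
        speed = (v_left + v_right) / 2.0                       # reward moving fast
        straightness = 1.0 - math.sqrt(abs(v_left - v_right))  # reward straight motion
        clearance = 1.0 - proximity                            # reward avoiding objects
        return speed * straightness * clearance

    def evaluation_fitness(steps):
        """Average per-step fitness over one evaluation period.
        `steps` is a list of (v_left, v_right, proximity) tuples."""
        return sum(step_fitness(*s) for s in steps) / len(steps)

Because each term lies in [0, 1], a robot scores well only by doing all three things at once: a robot that spins in place, or sits safely in a corner, earns almost nothing.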
How and Where are the Robots Actually Evolved?
The robots and
their controllers can be evolved in a variety of ways. Early work dating from the 1990s generally employed either embodied evolution or evolution in simulation with transfer to real robots after the evolutionary process was complete. More recent research has made use of more complex methods that may use simulation for one phase of the evolution and real robots for another. In addition, work done in the
last five years has co-evolved controllers and morphologies in
simulation in a way that allowed physical robots to be
fabricated after evolution.
In the case of
embodied evolution, physical robots are used during the
evolutionary process (nolfi-iwal-1994,
mondada-jras-1995, watson-cec-1999). In the
simplest cases controllers are loaded into robots, the robots
are tested, and the associated controllers’ fitnesses are evaluated based on the
performance of the real robots. Although this procedure ensures that the controllers can function in real robots (as opposed to simulated ones), the process is slow because it runs in real time. An additional
and more serious problem is that even the worst controllers
cannot be allowed to damage the real robots during testing,
because this would put a stop to the evolutionary process, at
least until the robots could be repaired or new ones built. What
this really means is that embodied evolution can’t make use of
fitness measures that measure the true survivability of robots.
Designers must instead decide what behaviors a robot is likely to need to perform the task at hand without damaging itself. In order to do this, the designers must have a reasonably good idea of how to perform the given task, and of how to constrain the robot’s training environment so that the robots won’t be damaged. This is a problem when the goal is to get the
robots to learn how to do something that the designers don’t
know how to do.
Evolution in Simulation with Transfer to Reality
An alternative
to embodied evolution is to evolve the controllers in simulated
robots living in simulated environments. Now robots can be
destroyed during testing and fitness can be based more directly
on actual survival. In the long term this is quite important.
However, we should point out that for the current state of
evolutionary robotics research, robots generally simply succeed
or fail to perform their given tasks and do not face mortal
challenges. For instance, in the case of an object avoidance and
navigation task, poorly performing robots will likely just bump
into objects and become immobilized rather than actually being
damaged. Evolution in simulation can proceed much faster than
evolution using only real robots. Care must be taken in
designing the simulation environments so that the controllers
evolved in simulation can function in real robots. A large
proportion of current ER research uses evolution in simulation
with transfer to reality. One of the most sophisticated simulation environments allowed robots that rely on video to see their environment to be evolved in simulation and then transferred to real robots (nelson-iros-2003).
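A common way to help controllers cross the gap from simulation to reality is to corrupt the simulated sensor readings with noise, so that evolution cannot exploit regularities peculiar to the simulator (an idea developed at length in jakobi-1998). The sketch below shows the general shape of the technique; the noise magnitudes are illustrative assumptions.

    import random

    def make_sensor_model(gaussian_sd=0.05, bias_range=0.1):
        """Build a noisy sensor model for one evaluation trial: a fixed
        random bias per trial plus independent Gaussian noise at every
        time step, so evolved controllers cannot rely on exact values."""
        bias = random.uniform(-bias_range, bias_range)  # sampled once per trial
        def read(true_value):
            return true_value + bias + random.gauss(0.0, gaussian_sd)
        return read

    # In the simulation loop, building one model per trial keeps evolution
    # from exploiting any single, exact simulator configuration
    # (true_distance_to_wall is a hypothetical simulator query):
    # sensor = make_sensor_model()
    # for step in range(trial_length):
    #     reading = sensor(true_distance_to_wall(step))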
Open-ended evolution is now a prominent focus of ER and ALife theory and research. Currently, the natural biosphere of Earth is the only known example of a system capable of truly open-ended evolution of complexity. This section (currently under construction) will discuss the work of (Doncieux, Lehman, Mouret, Standish, Stanley) among others, as well as more recent work of prominent researchers in the field (Adami, Nolfi, Ray, Miikkulainen, Ofria, Watson and others). In particular, current evolutionary theory is beginning to play a much larger role in evolutionary robotics, due in part to the unsolved problems related to open-ended evolution. (McShea-2001, Pattee-1987, Lynch-2007, Ruiz-Mirazo-2004, Lichocki-2012, Dawkins-2003, Joachimczak-2012, Fontana-2010, Kauffman-1993 and many others)
This page is maintained by A. Nelson
All artwork © 1990-2010 A. L. Nelson, All rights reserved
Site administrator contact: alnelson @ ieee dot org