Machines with a human touch
The Economist Sep 20th 2001

Instead of using the ones and zeros of digital electronics to simulate the way
the brain functions, “neuromorphic” engineering relies on nature's biological
short-cuts to make robots that are smaller, smarter and vastly more energy-efficient

PEOPLE have become accustomed to thinking of artificial
intelligence and natural intelligence as being completely
different—both in the way they work and in what they are
made of. Artificial intelligence (AI) conjures up images of
silicon chips in boxes, running software that has been
written using human expertise as a guide. Natural
intelligence gives the impression of “wetware”—cells
interacting biologically with one another and with the
environment, so that the whole organism can learn
through experience. But that is not the only way to look at
intelligence, as a group of electronics engineers,
neuroscientists, roboticists and biologists demonstrated
recently at a three-week workshop held in Telluride, Colorado.

What distinguished the group at Telluride was that they
shared a wholly different vision of AI. Rather than write a
computer program from the top down to simulate brain functions, such as object
recognition or navigation, this new breed of “neuromorphic engineers” builds machines
that work (it is thought) in the same way as the brain. Neuromorphic engineers look
at brain structures such as the retina and the cortex, and then devise chips that
contain neurons, axons and a primitive rendition of brain chemistry. Also, unlike conventional
AI, the intelligence of many neuromorphic systems comes from the physical properties
of the analog devices that are used inside them, and not from the manipulation of 1s
and 0s according to some modelling formula. In short, they are wholly analog
machines, not digital ones.

The payoff for this “biological validity” comes in size, speed and low power
consumption. Millions of years of evolution have allowed nature to come up with some
extremely efficient ways of extracting information from the environment. Thus, good
short-cuts are inherent in the neuromorphic approach.

At the same time, the electronic devices used to implement neuromorphic systems are
crucial. Back in the 1940s, when computers were first starting to take shape, both
analog and digital circuits were used. But the analog devices were eventually
abandoned because most of the applications at the time needed equipment that was
more flexible. Analog devices are notoriously difficult to design and reprogram. And
while they are good at giving trends, they are poor at determining exact values.

In analog circuits, numbers are represented qualitatively: 0.5 reflecting, say, a voltage
that has been halved by increasing the value of a resistor; 0.25 as a quarter the
voltage following a further increase in resistance, etc. Such values can be added to
give the right answer, but not exactly. It is like taking two identical chocolate bars,
snapping both in half, and then swapping one half from each. It is unlikely that either
of the bars will then be exactly the weight that the
manufacturer delivered.
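
To see why, consider a toy numerical sketch in Python (an illustration only, assuming a typical 5% component tolerance, not a model of any real chip): two voltage dividers produce values near 0.5 and 0.25, and their sum lands near, but almost never exactly on, 0.75.

```python
import random

def divider_output(v_in, r_top, r_bottom, tolerance=0.05):
    """A voltage divider whose resistors are drawn from a +/-5%
    tolerance band, mimicking real-world component variation."""
    r1 = r_top * random.uniform(1 - tolerance, 1 + tolerance)
    r2 = r_bottom * random.uniform(1 - tolerance, 1 + tolerance)
    return v_in * r2 / (r1 + r2)

half = divider_output(1.0, 10e3, 10e3)     # nominally 0.5
quarter = divider_output(1.0, 30e3, 10e3)  # nominally 0.25
print(half + quarter)  # close to 0.75, but almost never exactly 0.75
```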

One of the contributions of the father of the field—Carver Mead, professor
emeritus at the California Institute of Technology in Pasadena—was to show that
this kind of precision was not important in neural systems, because the eventual
output was not a number but a behaviour. The crucial thing, he argued, was that
the response of the electronic circuits should be qualitatively similar to the
structures they were supposed to be emulating. That way, each circuit of a few
transistors and capacitors could “compute” its reaction (by simply responding as
dictated by its own physical properties) instantly. To do the same thing, a
digital computer would have to perform many operations and have enough logic
gates (circuits that recognise a 1 or a 0) for the computation. That would make
the device not only slow and power-hungry, but also huge and expensive. For a
fuller account of Carver Mead and his unique contribution to the whole of
information technology, see this article.

Another advantage of the analog approach is that, partly because of their speed, such
systems are much better at using feedback than their digital counterparts. This allows
neuromorphically designed machines to be far more responsive to their environment
than conventional robots. In short, they are much more like the biological creatures
they are seeking to emulate.

Going straight

One of the many projects demonstrating this concept at the Telluride meeting was a
robot that could drive in straight lines—thanks to electronics modelled on the optic
lobe in a fly's brain. The vision chip, built by Reid Harrison at the University of Utah in
Salt Lake City, is a “pixellated” light sensor that reads an image using an array of
individual cells, with additional circuitry built locally into each cell to process the
incoming signals. The fact that these processing circuits are local and analog is crucial
to the device's operation—and is a feature that is borrowed from the biological model.

Dr Harrison and Christof Koch, his supervisor at Caltech and a co-founder of the
Telluride summer school, identified the various processes taking place in the so-called
lamina, medulla and lobular-plate cells in a fly's brain as being worth implementing in
silicon. These cells form a system that allows the fly to detect motion throughout most
of its visual field—letting the insect avoid obstacles and predators while compensating
for its own motion.

In the chip, special filters cut out any constant or ambient illumination, as well as very
high frequencies that can be the source of electronic noise in the system. The purpose
is to let the device concentrate on what is actually changing. In a fly's brain, this
filtering role is played by the lamina cells.

In a fly's medulla, adjacent photodetectors are paired together, a time delay is
introduced between the signals, and the two are then multiplied together. The length
of the delay is crucial, because it sets the speed of motion that the detector is looking
for. In the chip, since the delay and the distance between the two adjacent photo-
diodes are known, the speed of an image moving over the two detectors can be
determined from the multiplier output. Large numbers of these “elementary motion
detectors” are then added together in the final processing stage. This spatial
integration, which is similar to that performed in a fly's large lobular plate cells,
ensures that the broad sweep of the motion is measured, and not just local variations.
The same kind of mechanism for detecting motion is seen in the brains of cats,
monkeys and even humans.
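
The delay-and-multiply arrangement is simple enough to sketch in a few lines of Python. This is a minimal illustration of the principle, not Dr Harrison's circuit: the square-edge stimulus, the 10-millisecond delay and the 1mm diode spacing are all invented for the example.

```python
import numpy as np

def emd_response(left, right, delay):
    """Elementary motion detector (delay-and-multiply): delay one
    photodiode's signal, multiply it with its neighbour's, and subtract
    the mirror-image pair so the sign of the output gives the direction."""
    return np.roll(left, delay) * right - np.roll(right, delay) * left

# An edge sweeping across two photodiodes. With a 10-sample (10 ms) delay
# and, say, a 1 mm diode spacing, the detector is tuned to a speed of
# spacing / delay = 1 mm / 10 ms = 0.1 m/s.
t = np.arange(0.0, 1.0, 0.001)           # 1 ms samples
left = np.where(t > 0.50, 1.0, 0.0)      # the edge reaches the left diode...
right = np.where(t > 0.51, 1.0, 0.0)     # ...and the right one 10 ms later
response = emd_response(left, right, delay=10)
print(response.sum())  # positive: motion in the preferred direction
```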

To prove that the chip not only worked, but could be useful, Dr Harrison attached it to
a robot that had one of its wheels replaced by a larger-than-normal one, making it
move in circles. When the robot was instructed to move in a straight line, feedback
from the vision chip—which computed the unexpected sideways motion of the scenery—was
fed into the drive mechanism, causing the larger wheel to compensate by turning more
slowly. The result was a robot that could move in a straight line, thanks to a vision
chip that consumed a mere five millionths of a watt of power.
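
In control terms, the demonstration amounts to a simple proportional feedback loop. Here is a sketch of the idea, with invented wheel radii, track width and gain (the real chip reports optic flow rather than a ready-made rotation estimate):

```python
r_small, r_large = 1.0, 1.3      # mismatched wheel radii (illustrative)
track = 2.0                      # distance between the wheels
gain = 2.0                       # feedback gain from the vision chip

speed_small = speed_large = 1.0  # commanded wheel speeds
heading_rate = 0.0
for _ in range(50):
    # ground speed of each side; unequal radii make the robot curve
    heading_rate = (speed_large * r_large - speed_small * r_small) / track
    # the vision chip "sees" the scenery sweep sideways at heading_rate,
    # and the controller slows the larger wheel to cancel the sweep
    speed_large -= gain * heading_rate * 0.1
print(round(heading_rate, 4))    # close to zero: the robot tracks a straight line
```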

For comparison, the imaging device on NASA's little Sojourner Rover that explored a
few square metres of the Martian surface in 1997 consumed three-quarters of a watt—
a sizeable fraction of the robot's total power. The imaging system that helps make the
“Marble” trackball developed by Logitech of Fremont, California, a handy replacement
for a conventional computer mouse, likewise takes its cue from a fly's vision system.
In this case, the engineering was done mainly by the Swiss Centre for Electronics and
Microtechnology in Neuchâtel and Lausanne.

The concept of sensory feedback is a key part of another project shown at the
Telluride workshop. In this case, a biologist, robotics engineer and analog-chip
designer collaborated on a walking robot that used the principle of a “central pattern
generator” (CPG)—a kind of flexible pacemaker that humans and other animals use for
locomotion. (It is a chicken's CPG that allows it to continue running around after losing
its head.) Unlike most conventional robots, CPG-based machines can learn to walk and
avoid obstacles without an explicit map of their environment, or even of their own bodies.

The biological model on which the walking robot is based was developed in part by
Avis Cohen of the University of Maryland at College Park. Dr Cohen had been studying
the way that neural activity in the spinal cord of the lamprey (an eel-shaped jawless
fish) allowed it to move, with the sequential contraction of muscles propelling it
forward in a wave motion. The findings helped her develop a CPG model that treated
the different spinal segments as individual oscillators that are coupled together to
produce an overall pattern of activity. Tony Lewis, president and chief executive of
Iguana Robotics in Mahomet, Illinois, developed this CPG model further, using it as the
basis for controlling artificial creatures.

In the walking robot, the body is mainly a small pair of legs (the whole thing is just
14cm tall) driven at the hip; the knees are left to move freely, swinging forward under
their own momentum like pendulums until they hit a stop when the leg is straight. To
make the robot walk, the hips are driven forwards and backwards by “spikes” (bursts)
of electrical energy triggered by the CPG. This robot has sensors that let it feel and
respond to the ground and its own body. Because outputs from these sensors are fed
directly back to the CPG, the robot can literally learn to walk.

The CPG works by charging and discharging an electrical capacitor. When an additional
set of sensors detects the extreme positions of the hips, it sends electrical spikes to
the CPG's capacitor, charging it up faster or letting it discharge more slowly, depending
on where the hips are in the walking cycle. As the robot lurches forward, like a toddler
taking its first steps, the next set of “extreme spikes” charge or discharge the
capacitor at different parts of the cycle. Eventually, after a bit of stumbling around,
the pattern of the CPG's charging and discharging and the pattern of the electrical
spikes from the sensors at the robot's hip joints begin to converge in a process known
as “entrainment”. At that point, the robot is walking like a human, but with a gait that
matches the physical properties of its own legs.
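
A crude way to watch entrainment happen is to reduce the CPG to a single charging capacitor and let each hip-extreme spike dump a fixed packet of charge onto it. The sketch below does just that; the numbers (an unaided period of 0.6 seconds, a 0.5-second leg swing, the size of each kick) are invented, and the slow-discharge pathway described above is left out for brevity.

```python
dt = 0.001                     # 1 ms simulation step
T_cpg = 0.6                    # unaided CPG period: the capacitor alone
T_leg = 0.5                    # natural swing period of the legs
kick = 0.25                    # charge added by each hip-sensor spike
spike_every = int(T_leg / dt)  # a sensor spike at every hip extreme

v = 0.0                        # capacitor "voltage"
fire_times = []
for step in range(20000):                # 20 simulated seconds
    v += dt / T_cpg                      # steady charging of the capacitor
    if step % spike_every == 0:          # hip at an extreme: spike arrives
        v += kick                        # the spike charges the capacitor faster
    if v >= 1.0:                         # threshold reached: the CPG fires...
        fire_times.append(step * dt)
        v = 0.0                          # ...and the capacitor discharges

periods = [b - a for a, b in zip(fire_times, fire_times[1:])]
print(periods[:2], periods[-2:])  # early cycles are off; later ones lock at 0.5 s
```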

Walking is only the start. Mr Lewis has endowed his robot with an ability to learn how
to step over obstacles (see photo, top). It does this by changing the length of the
three strides before the object, using miniature cameras as eyes, and the same kind
of interaction with the CPG that it uses to synchronise its hip movement for normal
walking.

The interesting thing is that the obstacle does not have to be defined in any way. It
appears simply as an unexpected change in the flow of visual information from the
cameras that the robot uses to see with. This makes the technique extremely
powerful: in theory, it could be applied to lots of other forms of sensory input. Another
factor that makes this project impressive is that its key component—the CPG chip,
designed by Ralph Etienne-Cummings of Johns Hopkins University in Baltimore,
Maryland—consumes less than a millionth of a watt of power.

The efficiency of CPG-based systems for locomotion has captured commercial attention.
For the first time, parents can now buy their children analog “creatures”, thanks to
Mark Tilden, a robotics expert at Los Alamos National Laboratory in New Mexico.
Hasbro, one of America's largest toy makers, is marketing a product called BIO Bugs
based on Dr Tilden's biomechanical machines.

Robot bugs invade the toy market

After teaching robots to walk and scramble over obstacles that they have never
met before, how about giving them the means for paying attention? Giacomo
Indiveri, a researcher at the Federal Institute of Technology (ETH) in Zurich,
has been using a network of “silicon neurons” to produce a simple kind of
selective visual attention. Instead of working with purely analog devices, the
ETH group uses electrical circuits to simulate brain cells (neurons) that have
many similarities with biological systems—displaying both analog and digital
characteristics simultaneously, yet retaining all the advantages of being
analog.

As in the locomotion work, the silicon neurons in the ETH system work with electrical
spikes—with the number of spikes transmitted by a neuron indicating, as in an animal
brain, just how active it is. Initially, this is determined by how much light (or other
stimulus) the neuron receives. This simple situation, however, does not last long.
Soon, interactions with the rest of the network begin to have an effect. The system is
set up with a central neuron that is connected to a further 32 neurons surrounding it
in a ring. The outer neurons, each connected to its partners on either side, are the
ones that receive input from the outside world.

The neural network has two parameters that can be tweaked independently: global
inhibition (in which the central neuron suppresses the firing of all the others); and
local excitation (in which the firing of one neuron triggers firing in its nearest
neighbours). By varying these two factors, the system can perform a variety of
different tasks.

The most obvious is the “winner-takes-all” function, which occurs when global
inhibition is turned up high. In this case, the firing of one neuron suppresses
firing in the rest of the network. However, global inhibition can also produce
a subtler effect. If several neurons fire at the same time, then they stimulate
the central neuron to suppress the whole network, but only after they have
fired. The inhibition is only temporary, because the electrical activities of
all the neurons have natural cycles that wax and wane. So the synchronised
neurons now have some time to recover before firing again, without other
neurons having much chance to suppress them.
In this situation, the important thing to note is that the synchronised signals
tend to come from the same source. Consequently, if one can find a way of
allowing all the signals from a single object to cause firing at once, then it
is possible to separate an individual object from the visual scene. Local
excitation improves this situation further, since the synchronised neurons are
likely to be next to each other.
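
The winner-takes-all regime, at least, is straightforward to caricature in simulation. In the sketch below (made-up weights and leak rates, not the ETH chip's parameters), 32 leaky neurons sit in a ring, each spike excites its two neighbours, and a central neuron pools every spike and inhibits the whole ring whenever it fires. The neuron receiving the strongest input ends up doing virtually all the firing.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 32                      # outer neurons arranged in a ring
leak = 0.9                  # leaky integration factor per time step
w_exc = 0.2                 # local excitation to the two ring neighbours
w_inh = 0.6                 # global inhibition from the central neuron
threshold = 1.0

inputs = rng.uniform(0.05, 0.10, N)   # background stimulus on the ring
inputs[12] = 0.14                     # one stronger stimulus: the "object"

v = np.zeros(N)             # outer-neuron membrane potentials
v_c = 0.0                   # central inhibitory neuron
spikes = np.zeros(N, dtype=int)

for _ in range(2000):
    v = leak * v + inputs                  # integrate the external input
    fired = v >= threshold
    v[fired] = 0.0                         # fire and reset
    spikes += fired
    v += w_exc * (np.roll(fired, 1) + np.roll(fired, -1))  # local excitation
    v_c = leak * v_c + fired.sum()         # the central neuron pools all spikes
    if v_c >= threshold:                   # global inhibition: one neuron's
        v -= w_inh                         # firing suppresses all the others
        v_c = 0.0
    v = np.maximum(v, 0.0)

print(spikes.argmax())       # -> 12: the strongest input wins
print(spikes[11:14], spikes.sum())  # the winner does nearly all the firing
```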

This combination of local excitation and global inhibition is a feature of the human
brain's cerebral cortex. The combination of winner-takes-all and synchronisation
produces a mechanism for visual attention, because it allows one object—and one
only—to be considered. Importantly, the global inhibition makes it difficult for other
objects to break in, so the attention is stable. The ETH team is thinking of building a
more advanced version of its attention-getter, in which the focus of attention can be
switched, depending on the novelty and importance of a fresh stimulus.

Neuromorphic engineering is likely to change the face of artificial intelligence because
it seeks to mimic what nature does well, rather than what it does badly. For centuries,
engineers have concentrated on developing machines that were stronger, faster and more
precise than people. Whether tractors, sewing machines or computer accounting
software, such automata have simply been tools for overcoming some human
weakness. But the essential thing has been that they always needed human
intelligence to function. What neuromorphic engineering seeks to do is build tools that
think for themselves—making decisions the way humans do.

But the neuromorphic route will not be an easy one. The highly efficient analog
systems described above are far more difficult to design than their conventional
counterparts. Also, billions of dollars have been invested in digital technology—
especially in CAD (computer-aided design) tools—that makes analog tools look, in
comparison, like something from the stone age. More troubling still, almost all
neuromorphic chips developed to date have been designed to do one job, albeit
remarkably well. It has not been possible to reprogram them (like a digital device) to
do many things even adequately.

However, as work advances, neuromorphic chips will doubtless evolve to be general
purpose in a different sense. Instead of using, say, a camera or a microphone to give
a machine some limited sense of sight and hearing, tool makers of tomorrow will be
buying silicon retinas or cochleas off the shelf and plugging them into their circuit
boards. At the other extreme, the combination of biological short-cuts and efficient processing
could lead to a whole family of extremely cheap—albeit limited—smart sensors that do
anything from detecting changes in the sound of a car engine to seeing when toast is
the right colour.

In fact, the neuromorphic approach may be the only way of achieving the goal that
has eluded engineers trying to build efficient “adaptive intelligent” control systems for
years. Neuromorphic chips are going to have enormous implications, especially in
applications where compactness and power consumption are at a premium—as, say,
for replacement parts within the human body. This is slowly being recognised. For the
first time in the Telluride workshop's history, one of the participants was a venture
capitalist. After genomics, perhaps the next stockmarket buzz will be neuromorphics.
