Neural Interfaces

Brain-Computer Interfaces for Wearable Computers

Thesis guidance: Bart Barnard
Student: David Morgan
In communication among people, a 'thought' is encoded into an error-prone
format, like spoken language, written text, or an oil painting. How pure it
would be, if one could communicate thoughts directly.

1      Illustrations

2      Abstract

3      Introduction

4      Wearable Computers

    4.1      What are wearable computers?
    4.2      Why is there demand for wearable computing?
    4.3      Desirable characteristics for wearable computers
    4.4      Conventional Input and Output Devices for Wearable Computers
       4.4.1       Output
       4.4.2       Input
    4.5      Why make use of the brain computer interface for wearable
    computers?
5      Brain-Computer Interfaces

    5.1      What is a Brain-Computer Interface?
    5.2      Technology of Brain Computer Interfaces
    5.3      Examples of BCI
       5.3.1       BCI Example 1: Monkeys Adapt Robot Arm as Their Own
       5.3.2       BCI Example 2: Cyberkinetics
       5.3.3       BCI Example 3: Non-Invasive Brain-Actuated Control of a
       Mobile Robot
    5.4      Neural/BIO Feedback
6      Ethical and Social Implications of Brain-Computer Interfaces

7      Conclusion

8      References

1 Illustrations

Figure number   Caption

Figure 4-1      Heads-up display
Figure 4-2      KITTY
Figure 4-3      Burton iPod controller
Figure 4-4      Xybernaut
Figure 5-1      Monkey BCI control setup
Figure 5-2      electrode array
Figure 5-3      array position

2 Abstract

Wearable computers are on the verge of becoming more widespread. Based
on desirable characteristics for wearable computers, an evaluation of the
interfaces currently used for wearable computers gives very disappointing
results. In theory a brain-computer interface could be an ideal interface
technology for wearable computers. Brain-computer interfaces used in
medical situations mainly make use of surgically implanted chips to interface
with the brain. This method is very precise, yet not acceptable in a consumer
setting, as it requires brain surgery. A most promising method of interfacing
with the brain is that using nanotechnology. This method is, however, in too
early a stage in its development to be considered a feasible interface method
for wearable computers. The other method of interfacing with the brain, to be
considered, is a non-invasive method, by sensing brain waves externally, and
deciphering patterns with software. This external method is the least precise
and has the slowest response time, but these factors are constantly
improving. In the short term, this type of interface should be practicable for
wearable computers for simple menu navigation, and perhaps even for cursor
movement. It will not do for text input; conventional tactile interfaces are still
best used for that. Development in the field of brain-computer interfaces is
complicated by the fact that it is such a multi-disciplinary field, ranging from
neurobiology to interaction design.
3 Introduction

“The next care to be taken, in respect of the Senses, is a supplying of their
infirmities with Instruments, and as it were, the adding of artificial Organs to
the natural…. And as glasses have highly promoted our seeing… there may
be found many mechanical inventions to improve our other senses of
hearing, smelling, tasting, touching” (Hooke, 1665)

As shown by this 1665 quote from Robert Hooke, the idea of creating
wearable devices to enhance human information processing capabilities is
very old, but not until recently have the developments in the field of devices
been moving at such a rapid rate. The last decade has seen the introduction
of many portable (or mobile) devices such as mobile phones, iPods, Personal
Digital Assistants (PDAs) and digital cameras. It is clear that, as technology
advances, devices that are now portable will, in time, become wearable. To
demonstrate the difference between portable and wearable devices, one can
think of the difference between the pocket watch and the wristwatch. The
pocket watch needs to be taken out of the pocket to be used, whereas the
wristwatch is always available to fulfil its function. Many devices are making
the transition from portable to wearable. In the process their functions are
also becoming more and more complicated and diverse, increasing the need
for effective and flexible user interfaces. The ideal interface for wearable
devices is such that the use of hands is not required. The interfaces currently
used for wearable computers, such as voice control and tactile interfaces do
not provide the desired control for a wearable computer.
       In the medical sector we find a technology that could provide the
characteristics required for an interface for wearable devices. This
technology is 'Neural Interface Technology' or 'brain-computer interface
technology'. These brain-computer interfaces (BCIs) are currently being used
to help paraplegics and amputees by giving them control over the physical
world directly from their brains. For example, a robotic arm controlled by the
brain as though it were a real arm, could replace the missing limb of an
amputee.

The aim of this thesis is to research the following question: “Are brain-
computer interfaces a feasible interaction method for wearable
computers?” This is done by first defining a wearable computer and the
desirable characteristics of interfaces for these wearable computers; this
shows the reason for researching brain-computer interfaces. Secondly, brain-
computer interfaces are defined and explored, by giving an understanding of
the technology behind brain-computer interfaces and presenting case studies
of functioning brain-computer interfaces. The different types of brain-
computer interfaces are evaluated according to the defined ideal
characteristics for a wearable computer. Thirdly, ethical and social issues are
taken into account resulting finally in a conclusion stating the feasibility of
using brain-computer interfaces to interact with wearable computers.

4 Wearable Computers

In this chapter an overview will be given of wearable computers and their
relation to this thesis.

4.1 What are wearable computers?

The term wearable computer might, at first glance, seem self-explanatory, yet
in practice, the term is used very broadly. In this thesis, what is meant by a
'wearable computer' is a data processing system attached to the body, with
one or more output devices and means of input providing for interaction with
the data processing system by use of one or no hands (Mann, 1997). The
output is perceptible instantly, independent of the activity of the user. The
wearable computer should offer largely the same functionality common in
personal computers, such as storing and playing media files, sending and
receiving e-mail, and browsing the web.

In this document the intended target group for wearable computers is that of
consumers and professionals who regularly carry more than one mobile
device and frequently communicate through the use of e-mail and/or instant
messaging.

4.2 Why is there demand for wearable computing?

Computers are becoming more and more ubiquitous (Starner, 2001); a large
part of communication is done via personal computers (e-mail and chatting)
and mobile phones. Most people carry one or more mobile electronic
devices, such as phones, cameras, mp3 players and videogames. These
devices often share common components, such as a microprocessor,
memory, power, and a display. It is therefore quite natural that one sees the
emergence of multipurpose devices, such as mobile phones with a built-in
camera, able to play music and record sounds and also containing PDA
functionality such as word processing, web browsing and e-mail applications.
Sharing these common components reduces cost and weight, resulting in
more portability for the same functionality. For most of the functions of these
multipurpose devices, however, it is necessary to take the device out of one's
pocket with one hand and control it with the other hand, making it impossible
to perform a great number of tasks at the same time.
     As Robert Heinlein (1907 - 1988) said: “Progress isn't made by early
risers. It's made by lazy men trying to find easier ways to do something.”
People want to make their life easier, and transforming a portable device into
a wearable one makes that device less burdensome and therefore more
comfortable to use. This has happened to many devices in the past
(Pentland, 2001): The pocket watch became the wristwatch, the monocle
became spectacles and even contact lenses, and mobile phones have gained
handsfree functionality. A multifunctional wearable computer is a next step in
making appliances 'easier' and more suited to the user's desires.
       Apart from making life 'easier', there are many practical situations in
which one is using both hands, but still desires the aid of a computer. For
instance a surgeon could be aided by seeing the vitals of the patient he is
operating on, or a mechanic who needs to refer to a set of diagrams while
using tools.

4.3 Desirable characteristics for wearable computers

Steve Mann (1997) compiled a list of “eight attributes for defining a wearable
computer”. The list is as follows.

   1. CONSTANT: Always ready. May have 'sleep modes' but never 'dead'.
       Unlike a laptop computer which must be opened up, switched on and
       booted up before use. Always on, always running.
   2. UNRESTRICTIVE to the user: ambulatory, mobile, roving, “you can do
       other things while using it”, e.g. you can dial a phone number or
       respond to e-mail while jogging, etc.
   3. UNMONOPOLIZING of the user's attention: it does not cut you off
       from the outside world like a virtual reality game or the like, so that
       you can attend to other matters while using the apparatus. In fact,
       ideally, it will provide enhanced sensory capabilities. It may, however,
       mediate (augment, alter, or diminish-with-purpose) the sensory input.
   4. OBSERVABLE by the user: It can get your attention continuously if
       you want it to (e.g. it can notify you of an incoming call or message).
   5. CONTROLLABLE by the user: Responsive. You can grab control of it
       at any time you wish. Even in automated processes you can override
       manually to break open the control loop and become part of the loop
       at any time you want to (example: a big 'Halt' button to deal with the
       situation arising when an application opens all 50 documents that were
       highlighted when you accidentally pressed 'Enter').
   6. ATTENTIVE to the environment: Environmentally aware, multimodal,
       multi-sensory. (As a result this ultimately gives the user increased
       situational awareness).
   7. COMMUNICATIVE to others: Can be used as a medium of expression
       or as a communications medium any time you want.
   8. PERSONAL: Human and computer are inextricably intertwined.

A list, such as that of Mann (1997), is an excellent tool to help evaluate
wearable devices and their interfaces as being truly wearable and
satisfactory. A new list of desirable characteristics for wearable computers is
compiled from an interaction design point of view. This new list is largely
based on that compiled by Mann (1997) and takes into account the “ideal
attributes for wearable computers” posed by Starner (2001).

Desirable characteristics for wearable computers

A wearable computer should be instantly accessible, regardless of the
actions of the wearer. No actions should be necessary to bring the wearable
computer into an operable condition. This is a characteristic that is unique to
the wearable device as opposed to, for instance, a mobile device that needs
to be brought to an active state. It is an important factor in the demand for
wearable computers (Chapter 4.2).

A wearable computer should in no way restrict the wearer. It should not
obstruct any physical movement nor be a source of any form of discomfort.

Usage of a wearable computer should not exclude other actions. It is
important that the wearer is not fully occupied by the use of the computer,
but is able to perform other tasks, such as riding a bicycle or interacting with
another person.

In order for wearable computers to be used in practice, it is important that the
technology used causes no social imposition and that it has the possibility of
being shaped into inconspicuous or conventional forms. This means that the
technology used must be either very small or mouldable, but also that it
should not make a noise or smell bad etc.

The form of interaction with a wearable computer should vary depending on
the requirements of task and/or environment.

It should be possible to keep private all the wearer's interaction with a
wearable computer.

4.4 Conventional Input and Output Devices for Wearable Computers

To interact with a system, an interface providing for both input and output is
required (MacKenzie, 1995). Input and output methods for wearable
computers have specific requirements, as defined in the previous chapter.
This chapter provides an overview of interface technology currently used for
interaction with wearable computers. The listed methods are evaluated on
the basis of the desirable characteristics for wearable computers defined in
the previous chapter.

4.4.1 Output
Output can be given in several different ways: by means of sound, by visual
output, and by augmented visual output.

Firstly, output can be given by means of sound; the user can receive feedback
by the use of sound. In the case of an audio output device, the choice of
sound feedback makes a lot of sense, as the main functionality is making
sound. Other sound feedback can be a sound confirming the pressing of a
button. Sound feedback is instant; a user can be constantly wearing a
headset for personal audio, or a speaker can generate audio for the direct
surroundings. This could be the kind of sound a mobile phone makes.
       Sound output can also be non-restrictive; headphones and speakers
are small and comfortable. Sound can be non-monopolizing: one can play
music and continue a conversation. However, headphones can override all
surrounding audio. Sound output is potentially obtrusive, as surrounding people
can dislike the sound created; also it is often considered anti-social to wear
headphones while interacting with another person. Sound output can be
either very private or non-private: in-ear headphones are private, yet a
loudspeaker is audible to anyone around.

A second output method is visual feedback. The most important feedback
users get from computers is the visual feedback given by the use of a screen.
Visual feedback is a very efficient method of output for wearable computers.

Visual feedback can be used for showing the time on a clock, the name of a
friend in a phone book, or the results of a web search with a mere glance.
       There are two types of screens for wearable computers. Firstly there is
the arm-mounted display, known from watch-like devices. Secondly there are
heads-up displays (HUD), in which a small display is placed in front of the
eye (Figure 4-1).

Figure 4-1    Heads-up display

       In particular the heads-up displays
are instant, but both arm-mounted and heads-up displays are non-restrictive.
An arm-mounted display is more monopolizing than a heads-up display, as it
requires the attention of the wearer to be directed to a specific location.

In addition to 'conventional' visual feedback, it is important to mention
'augmented visual feedback', which superimposes information on the actual
view seen by the eye. This is achieved by the use of a heads-up display. The
visual output is created by projecting a partially opaque image in front of an
eye; this can be done either by placing an opaque display in front of an eye
and merging this with a video feed recorded by a camera in front of the eye,
or by using a transparent display. The advantage of augmented visual
feedback over conventional visual feedback is mainly the fact that it is far
less monopolizing; the wearer maintains regular stereo vision with only
necessary information displayed over it.

4.4.2 Input
As with output, input for wearable computers has several options and
requires a unique selection of input devices meeting the requirements for use
with wearable computers.
One-handed tactile input is one of the input methods for wearable computers.
The most common tactile input methods are those of buttons, sliders, knobs,
and keyboards. Such input devices are controlled by use of one hand.
       One example is that of the one-handed 'keyboard': KITTY (Mehring et
al., 2004). KITTY stands for Keyboard Independent Touch Typing and is a full
alphabetic keyboard that is controlled by making contact combinations
between the thumb and different parts of the fingers (Figure 4-2).
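The chording principle described above can be sketched in a few lines of code. Note that the actual KITTY chord-to-letter assignments are not given in this thesis, so the mapping below is purely hypothetical and serves only to illustrate the idea of decoding thumb-finger contact combinations into characters.

```python
# Hypothetical chord-keyboard decoder in the style of KITTY.
# A "chord" is the set of finger segments the thumb touches, encoded as
# (finger, segment) pairs: fingers 1-4 (index..little finger), segments
# 1-3 (tip, middle, base). The letter assignments are invented.
CHORD_TO_LETTER = {
    frozenset({(1, 1)}): "a",          # thumb touches index fingertip
    frozenset({(2, 1)}): "b",          # thumb touches middle fingertip
    frozenset({(1, 1), (2, 1)}): "c",  # thumb bridges both fingertips
}

def decode_chords(chords):
    """Translate a sequence of contact combinations into text."""
    return "".join(CHORD_TO_LETTER.get(frozenset(c), "?") for c in chords)

print(decode_chords([{(1, 1)}, {(2, 1)}, {(1, 1), (2, 1)}]))  # -> abc
```

Because each chord is a set of simultaneous contacts rather than a single key, a full alphabet fits on one hand, which is what allows KITTY to dispense with a physical keyboard.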

       Another example is that of the functions of
an iPod integrated into a sleeve; a selection of
Burton Snowboard jackets (Figure 4-3) is
produced with this functionality.
       A drawback of tactile input is the requirement
for the use of hands. Tasks that require the use of
two hands need to be interrupted in order to use this
interaction method.

Figure 4-2   KITTY

A second method of input for wearable computers is
that of command gestures. This method is
applicable only in an augmented reality system. A
camera on a headset can enable the system to
recognize certain predefined gestures made by the
hands (Moeslund, Störring & Granum, 2004).
Gestures that can be recognized are, for example,
3D pointing gestures and click gestures.

Figure 4-3   Burton iPod controller

Thirdly, input can be given by voice commands to
control a device. Voice control has been used in
practice for years with mobile phones; a handsfree
set enables a phone number to be selected from the phonebook by using the
correct voice command. Speech recognition software for personal computers
enables text input by speech instead of by typing. A drawback of spoken
input is the total lack of privacy. Since voice recognition systems require
clearly articulated speech, any person close to the user can hear every input
made. These spoken command inputs can also be very obtrusive for the
surroundings. The advantage is that giving a voice command does not
require any muscle action, apart from the speaking itself, making it very
non-restrictive and non-monopolizing.

4.5 Why make use of the brain computer interface for
     wearable computers?

When current wearable computers are
considered in relation to the desirable
characteristics of wearable computers, the
interfaces used are not adequate. For
example Xybernaut, the leading wearable
computer company (Schwartz, 2004), relies on
spoken input and wrist-mounted keyboard
control for its line of mobile assistants
(Figure 4-4).

Figure 4-4   Xybernaut

As shown in chapter 4.4.2, both
spoken input and keyboard (or tactile) input
have undesirable characteristics for wearable
computers, even though the combination of
spoken and tactile input does cover the basic
requirements for a wearable interface. An ideal interface would be such that
the wearer is able to control a wearable device naturally, as though it were
actually part of the body. The writer theorises that a brain-computer interface
could be just that. This will be further explored in the next chapter.

5 Brain-Computer Interfaces

In this chapter an overview is given of the current state of the art in brain-
computer interfaces. This is done by laying out the technology used for brain-
computer interfaces and by reviewing several case studies.

5.1 What is a Brain-Computer Interface?

A lot of research has been done on brain-computer interfaces and many
definitions have been given. Wolpaw et al. (2002) define a brain-computer
interface as “a device that provides the brain with a new, non-muscular,
communication and control channel”. Levine (2000) says “A direct brain-
computer interface accepts voluntary commands directly from the human
brain without requiring physical movement and can be used to operate a
computer or other technologies.” Kleber and Birbaumer (2005) say that “A
brain-computer interface provides users with the possibility of sending
messages and commands to the external world without using their muscles”.
       The following is the definition for brain-computer interfaces that will be
used in this thesis.
       A brain-computer interface is a direct technological interface
between a brain and a computer that does not require any motor output
from the user. Neural impulses in the brain are intercepted and used to
control an electronic device.

To explain and justify the reasons for researching brain-computer interfaces
as an input method for wearable computers, a description of a 'perfect' brain-
computer interface is given as follows.
A perfect brain-computer interface is such that any interaction a user wants
with a computer is understood by that computer, at the moment the user
wants that interaction to take place. This should cover any desirable
interaction a user could want with a computer, such as menu navigation, text
input, and pointer control. This interaction should cause no straining or
monopolization of the mind and should be performed as though it were as
natural as moving an arm or a leg.
          It is understood that the described 'perfect' brain-computer interface is
beyond current technology, but is meant as a model against which brain-
computer interfaces can be evaluated.

5.2 Technology of Brain Computer Interfaces

Since the discovery of the electroencephalogram or EEG 1 in 1929, by Hans
Berger, the brain-computer interface has been a possibility. The ability to
register brain activity electronically enables one to use this electronic output
to control any electronic device.

The study of brain activity has shown that when a person performs a certain
task such as stretching the right arm, a „signal‟ is created in the brain and is
sent through the nervous system to the muscles. Research has shown that
when the same person moves his arm in the same way ten times, there is
clearly a pattern to the neural activity. Scientists (Carmena et al., 2004) have
therefore concluded that if one is able to read the brain activity and scan for
certain specific patterns, this information can then be used for interaction.
The more specific the registering of the neural activity, the more precise and
detailed the possible interaction.
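The pattern-scanning idea above can be sketched as a minimal template matcher: repeated trials of the same imagined action are averaged into a template, and a new recording is assigned to whichever template it correlates with best. This is an illustrative toy, not the method used by the cited researchers; the synthetic sinusoidal "brain patterns" stand in for real neural recordings.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_template(epochs):
    """Average repeated trials of the same imagined action into a template."""
    return np.mean(epochs, axis=0)

def classify(epoch, templates):
    """Assign an epoch to the action whose template correlates with it best."""
    scores = {name: np.corrcoef(epoch, tmpl)[0, 1]
              for name, tmpl in templates.items()}
    return max(scores, key=scores.get)

# Two synthetic "patterns": 100-sample epochs of noisy sinusoids at
# different frequencies, ten repetitions each (as in the repeated-arm-
# movement experiment described above).
t = np.linspace(0, 1, 100)
left = [np.sin(8 * np.pi * t) + 0.3 * rng.standard_normal(100) for _ in range(10)]
right = [np.sin(20 * np.pi * t) + 0.3 * rng.standard_normal(100) for _ in range(10)]

templates = {"left": make_template(left), "right": make_template(right)}
probe = np.sin(20 * np.pi * t) + 0.3 * rng.standard_normal(100)
print(classify(probe, templates))  # -> right
```

The more repetitions that are averaged, the cleaner the template becomes, which mirrors the observation in the text that more specific registration of neural activity yields more precise interaction.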

There are several methods of monitoring brain activity. These methods can
be divided into three distinct groups, namely external, surgical, and
nanotechnological.

    1 “Electroencephalography is the neurophysiologic measurement of the electrical activity of
the brain by recording from electrodes placed on the scalp, or, in special cases, on the
cortex. The resulting traces are known as an electroencephalogram (EEG) and represent so-
called brainwaves. This device is used to assess brain damage, epilepsy, and other
problems. In some jurisdictions it is used to assess brain death. EEG can also be used in
conjunction with other types of brain imaging.” (www.wikipedia.org)

Firstly there are several external or non-invasive methods of monitoring brain
activity including Positron Emission Tomography (PET) 2, functional Magnetic
Resonance Imaging (fMRI) 3, Magnetoencephalography (MEG) 4, and
Electroencephalography (EEG) techniques, which all have advantages and
disadvantages.
          In practice only EEG yields data that is easily recorded with relatively
inexpensive equipment, is rather well studied, and provides high temporal
resolution (Krepki et al., 2003). EEG is therefore the only external method to
be researched in this thesis.
          In order to use EEG as an interface method, electrodes need to be
placed over the scalp of a user. These electrodes pick up the neural activity
in the regions of the electrodes. The readings made by an EEG, however, are
considered very 'fuzzy' (Millan et al., 2003). Because the electrodes are
relatively far from the brain, it is not possible to pick up the activity of
individual brain cells. Lack of accuracy is therefore the main shortcoming of
the EEG method, but this is counterbalanced by the non-invasive nature
of EEG. The electrodes that are commonly used are placed on the scalp with
a paste to ensure conductivity. This fact makes EEG a very obtrusive and
restrictive method for use with a wearable computer. The electrodes are both
socially unacceptable and physically uncomfortable.

    2 Positron emission tomography (PET) is a nuclear medicine imaging technique
which produces a three-dimensional image or map of functional processes in the body.
(www.wikipedia.org)

    3 Functional Magnetic Resonance Imaging (or fMRI) describes the use of MRI to measure
hemodynamic signals related to neural activity in the brain or spinal cord of humans or other
animals. (www.wikipedia.org)

    4 Magnetoencephalography (MEG) is the measurement of the magnetic fields produced by
electrical activity in the brain, usually conducted externally, using extremely sensitive
devices such as SQUIDs. Because the magnetic signals emitted by the brain are in the order
of a few femtotesla (1 fT = 10^-15 T), shielding from external magnetic signals, including the
Earth's magnetic field, is necessary. An appropriate magnetically shielded room can be
constructed from Mu-metal, which is effective at reducing high-frequency noise, while noise
cancellation algorithms reduce low-frequency common mode signals. (www.wikipedia.org)

Secondly, brain activity can be measured by surgically implanting computer
chips with microelectrodes onto the brain, thus enabling the measurement of
activity in certain areas of the brain (Nicolelis 2003). This technology allows
for the reading of the activity of individual neurons, making it a very precise
method. There are certain risks involved with applying these chips to the
brain. In practice, this method is used only on laboratory animals and people
who are completely paralyzed.

Finally nanotechnology 5 offers a method of reading brain activity.
Nanotechnology, being the young field of science that it is, is also the least
developed method of registering neural activity. However, the technology is
so promising that it is worth considering in this thesis. Neuroscientist Llinas
(2005) and his colleagues describe a method of creating platinum nanowires
100 times thinner than a human hair and using blood vessels as conduits to
guide the wires to monitor individual brain cells.
          Llinas (2005) is working on creating a method for feeding an entire
array of nanowires through a catheter tube, so that the wires could be guided
through the circulatory system to the brain. The nanowires would then spread
out into a kind of bouquet, branching out into thinner and thinner blood
vessels, to reach specific neurons.
          These wires are far thinner than even the smallest blood vessels, so
they could be guided to any point in the body without blocking blood flow.
          In a proof-of-principle experiment in which they first guided platinum
nanowires into the vascular system of tissue samples, they successfully
detected the activity of adjacent neurons.

    5 “Nanotechnology comprises technological developments on the nanometer scale,
usually 0.1 to 100 nm. (One nanometer equals 10^-9 m/one thousandth of a
micrometer or one millionth of a millimeter.) The term has sometimes been applied
to microscopic technology.” (www.wikipedia.org)

5.3 Examples of Brain-Computer Interfaces

In order to clarify the working of brain-computer interfaces, some case
studies are given.

5.3.1 BCI Example 1: Monkeys Adapt Robot Arm as Their Own
At Duke University Medical Center, neurobiologists have been teaching
monkeys to control external devices with their brain signals. Unexpectedly,
the monkeys' brain structures are adapting to treat the robotic arm as one of
their own appendages.

This finding has important implications for understanding the adaptability of
the primate brain and promises great possibilities for giving paraplegics
physical control over their environment.

Led by neurobiologist Miguel Nicolelis (2002) of Duke's Center for
Neuroengineering, the experiments consisted of implanting an array of
microelectrodes, thinner than a human hair, into the frontal and parietal lobes
of the brains of two female rhesus macaque monkeys. A specially developed
computer system analyzed the faint signals from the electrode arrays and
recognized patterns that represented particular movements by a monkey's
arm.

Initially the monkeys were taught to use a joystick to position a cursor over a
target on a video screen and to grasp the joystick with a specified force
(Figure 5-1).

Figure 5-1   Monkey BCI control setup

After this initial training, the researchers made the cursor more than a simple
display, incorporating the dynamics, such as inertia and momentum, of a
robot arm functioning in another room. The performance worsened initially,
but the monkeys soon learned to deal with these new dynamics and became
capable of controlling the cursor that reflected the movements of the robot
arm.

Following this, the researchers removed the joystick and the monkeys
continued moving their arms in the air where the joystick used to be, still
controlling the robot arm in the other room. After a few days, the monkeys
realized they did not need to move their own arms in order to move the
cursor. They kept their arms at their sides and controlled the robot arm with
only their brain and visual feedback.

The extensive data drawn from these experiments showed that a large
percentage of the neurons become more 'entrained'. In other words, their
firing becomes more correlated to the operation of the robot arm than to the
monkey's own arm.

According to Nicolelis (2002), this showed that the brain cells originally used
for the movement of their own arm had now shifted to controlling the robot
arm. The monkeys could still move their own arms, but the control for that
had been shifted to other brain cells.

Further analysis by Lebedev (2002) showed that the monkey could
simultaneously be doing one thing with its real arm and another thing with
the robot arm. He said: "So, our hypothesis is that the adaptation of brain
structures allows the expansion of capability to use an artificial appendage
with no loss of function, because the animal can flip back and forth between
using the two. Depending on the goal, the animal could use its own arm or
the robotic arm, and in some cases both." "This finding supports our theory
that the brain has extraordinary abilities to adapt to incorporate artificial
tools, whether directly controlled by the brain or through the appendages,"
said Nicolelis (2002).

The scientists suggest that it is, in fact, a fundamental trait of higher
primates that their brains have the adaptability to incorporate new tools
into the very structure of the brain.

The conclusion drawn from these experiments is that a brain-computer
interface becomes a 'natural' extension of the brain.

5.3.2 BCI Example 2: Cyberkinetics
The company Cyberkinetics is leading research on brain-computer interfaces
in the private sector. In 2004 the company took in its first patient, Matthew
Nagle, a quadriplegic paralyzed from the neck down in a stabbing three years
earlier. He has been participating in a clinical trial to test Cyberkinetics'
BrainGate system. Sitting in his wheelchair, Nagle can now open e-mail,
change TV channels, turn on lights, play video games such as Tetris, and
even move a robotic hand, just by thinking.

The device, which is implanted underneath the skull in the motor cortex,
consists of a computer chip that is essentially a 2 mm by 2 mm array of 100
electrodes (Figure 5-2). Surgeons attached the electrode array like Velcro to
neurons in Nagle's motor cortex (Figure 5-3), which is located in the brain
just above the right ear. The array is attached by a wire to a plug that
protrudes from the top of Nagle's head.

Figure 5-2   Electrode array

Richard Martin (2005) visited Nagle for an article in Wired Magazine. After
his accident in 2001, Nagle begged to be Cyberkinetics' first patient. With
his young age and his strong will to walk again, he turned out to be an
ideal patient. The chip was surgically implanted, and, after a period of
recovery, the tests could start. Nagle had to think 'left' and 'right', the
way he was able to move his hand before being paralysed. After only a few
days he succeeded in controlling the cursor on a computer.

Figure 5-3   Array position

       When asked what he was thinking, he replied: "For a while I was
thinking about moving the mouse with my hand. Now, I just imagine moving
the cursor from place to place." His brain has assimilated the system. Nagle
is now able to play Pong (and even win), read e-mail, control a television set
and control a robot hand.

5.3.3 BCI Example 3: Non-Invasive Brain-Actuated Control of a
       Mobile Robot
Krepki et al. (2003) researched brain control of a mobile robot, using
non-invasive EEG as the interface method.
       Two subjects wore a commercial EEG cap and, during an initial training
period, learned to perform three mental tasks reliably. These tasks were
chosen from the following: "relax", imagination of "left" and "right" hand
(or arm) movements, "cube rotation", "subtraction", and "word association".
       The three chosen commands were used to steer a robot forwards,
right, and left. This was trained over a period of days. After the training, the
subjects were able to steer the robot through a maze. Correct recognition of
the commands was above 60%, whereas errors were below 5%.
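
The combination of a recognition rate above 60% with an error rate below 5%
implies that uncertain trials produce no command at all. A minimal sketch of
a classifier with such a rejection option follows; the feature values,
centroids, and threshold are invented for illustration and are not the
BBCI's actual method.

```python
import numpy as np

# Nearest-centroid classifier with a rejection option: trials whose EEG
# features are not close enough to any class produce no command, so the
# system can be mostly correct while rarely being wrong.
COMMANDS = ["forward", "left", "right"]
centroids = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]])  # per-task features

def classify(features, reject_distance=0.6):
    """Return a steering command, or None when no centroid is close enough."""
    distances = np.linalg.norm(centroids - features, axis=1)
    best = int(np.argmin(distances))
    if distances[best] > reject_distance:
        return None  # uncertain trial: better no command than a wrong one
    return COMMANDS[best]

print(classify(np.array([0.9, 0.1])))   # near the 'forward' centroid
print(classify(np.array([0.5, 0.5])))   # ambiguous features are rejected
```

Tuning the rejection threshold trades recognition rate against error rate,
which is exactly the trade-off the reported figures reflect.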

5.4 Neural/BIO Feedback

In much the same way as you can directly control machines with your brain,
you can also receive feedback directly into the brain. Nicolelis (2004) is
currently working on an artificial sense of touch for robotic arms.
Successful experiments have also been carried out with robotic eyes
connected to the brain. Veraart et al. (2002) have developed a system
whereby a video feed is sent to electrodes activating the optic nerve. This
currently gives blind people rudimentary vision.
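
The video-to-electrode idea can be sketched as a simple downsampling step.
The grid size and normalisation below are illustrative assumptions, not
details of Veraart's actual system.

```python
import numpy as np

# Reduce a grayscale camera frame to a coarse grid whose cells could drive
# individual stimulation electrodes. The 4x4 grid is an arbitrary choice.
def frame_to_stimulation(frame, grid=(4, 4)):
    """Average-pool a grayscale frame down to an electrode activation grid."""
    h, w = frame.shape
    gh, gw = grid
    trimmed = frame[: h - h % gh, : w - w % gw]         # trim to a multiple
    pooled = trimmed.reshape(gh, h // gh, gw, w // gw).mean(axis=(1, 3))
    return pooled / 255.0                               # normalise to [0, 1]

frame = np.zeros((64, 64))
frame[:, 32:] = 255                 # bright right half of the scene
stim = frame_to_stimulation(frame)
print(stim)                         # left columns dark, right columns bright
```

Each cell of the resulting grid stands for one electrode's activation level,
which is why even a small array can convey rudimentary scene structure.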

6 Ethical and Social Implications of Brain-
    Computer Interfaces

Controlling a system directly with one's brain raises certain ethical issues.
When you can control external systems in the same way as you control your
own body, the question arises of whether this system becomes part of you.
Moreover, the idea of integrating technology with their bodies can frighten
people off. These issues are considered in this chapter.

As seen in the BCI examples of the research done by Nicolelis (2002) and
the BrainGate system developed by Cyberkinetics (2005), the subjects using
the brain-computer interfaces assimilate its functionality into their brains.
In fact, their brains reorganise the functions of specific neurons to improve
the functionality of the interface, at the same time moving the control of the
original appendages to other areas of the brain. The brain adapts as though
the body has an extra limb. "From a philosophical point of view, we are
saying that the sense of self is not limited to our capability for
introspection, our sense of our body limits, and to the experiences we've
accumulated," said Nicolelis (2002). "It really incorporates every external
device that we use to deal with the environment." One could indeed ask: "Is
it right to alter one's self?" But does a blind man not alter himself by
using a visual prosthesis?

In these cases too, a physical connection is made between the body and
technology, introducing the „cyborg‟ into reality. The term „cyborg‟ was first
mentioned by Clynes and Kline in 1960. “A cyborg is a combination of human
and machine in which the interface becomes a „natural‟ extension that does
not require much conscious attention, such as when a person rides a
bicycle.” (Starner 1999)
        Warwick (2003) suggests that, once the technology to directly
interface with the brain exists, technology becomes part of the 'self'.
People could then in fact be 'enhanced', and this raises the issue of
whether cyborg morals would be the same as human morals. Warwick also
raises a murkier view: cyborgs are likely to be networked, and one could
ask whether it is morally acceptable for cyborgs to give up their
individuality.
       Other questions arise. Should all humans have the right to be
„upgraded‟ to cyborgs? How will the relationship between humans and
cyborgs develop?

These are not current issues, but one should consider them when working
with brain-computer interfaces.

7 Conclusion

At the present time, computer technology is mature enough for wearable
computing. A computer with all the capabilities of a desktop computer can be
fitted into a package no larger than a pack of cards. There are heads-up
displays that look not much different from a pair of spectacles, and
wireless networks are widespread. Yet most wearable systems rely on speech
control as an input method. When judged against the desirable
characteristics for wearable computers defined in chapter 4, the speech
interface does not perform well at all. It is socially very obtrusive: one
feels uncomfortable speaking commands aloud in public, and at the same time
it intrudes on the surrounding space. The lack of privacy in the interaction
is also significant.
       The tactile interfaces, too, do not deliver all the desirable
characteristics, as the existing interfaces are fairly restrictive: a device
either needs to be held in the hand, or the hands are covered in interface
technology.
       It can therefore be concluded that there is demand for an interface
technology that delivers more of the desirable characteristics for wearable
computers than the technologies in use at present.

The 'perfect' brain-computer interface would indeed fulfil all the desirable
characteristics for wearable computers. However, in the current state of the
art, the 'perfect' brain-computer interface does not exist, in spite of the
promising advances currently being made. The surgically implanted devices
show the best results for successful brain-computer interfaces, but the risk
and social antipathy are too great for them to be considered for non-medical
use. For paraplegics, the life-enhancement value is so great that the risks
and fears associated with an operation do not matter.
       The nanotechnological method promises the most precise method of
interaction. Even though it is unknown how the general public would perceive
such technology, it could be a far 'cleaner' and safer method than surgical
implants. Its development still has a long way to go, however, and it can
therefore not currently be considered 'the way to go' for interfacing with
wearable computers.

       This leaves the non-invasive method of EEG interfaces. The current
level of precision of such interfaces is sufficient for simple interfaces,
such as four-way menus. The reaction speed is still very slow, but, with
smart software algorithms, researchers are finding ever more efficient and
precise interaction solutions. The electrodes required for EEG reading also
need further development before people will feel comfortable walking around
with them.
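
A four-way menu of the kind such a slow, low-bandwidth interface could drive
can be sketched as a small state machine; the menu items and command names
below are invented for illustration.

```python
# Each recognised EEG command moves a highlight or selects an item, so even
# a few commands per minute are enough for navigation.
MENU = ["Mail", "Web", "Music", "Settings"]

def navigate(commands):
    """Apply a sequence of 'up'/'down'/'select' commands; return selection."""
    cursor = 0
    for cmd in commands:
        if cmd == "down":
            cursor = (cursor + 1) % len(MENU)
        elif cmd == "up":
            cursor = (cursor - 1) % len(MENU)
        elif cmd == "select":
            return MENU[cursor]
    return None  # sequence ended without a selection

print(navigate(["down", "down", "select"]))  # → Music
```

Because each step needs only one of a handful of commands, a classifier
with the modest accuracy reported above is already usable for this task.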

An EEG-based brain-computer interface is quite feasible in the short term.
It could not replace the tactile interface for text input, but it could be
an excellent interface for menu, mail, or web navigation. The most promising
approach to constructing an interface for wearable computers involves
combining interface technologies, and in such a combination the addition of
an EEG-based interface would be an improvement over current interfaces for
wearable computers.
       Further development could, however, greatly improve the current state
of the art. A great challenge for the development of brain-computer
interfaces is that the field is exceptionally multi-disciplinary, covering
neurobiology, nanotechnology, engineering, mathematics, computer science,
and interaction design. And the current technology remains a long way from
the 'perfect' brain-computer interface.

8 References

 1. Boahen, K. (2005), Neuromorphic Microchips. Scientific American,
    May, pp. 39-44.
 2. Carmena, J.M., Lebedev, M.A., Crist, R.E., O'Doherty, J.E., Santucci,
    D.M., Dimitrov, D.F., Patil, P.G., Henriquez, C.S. & Nicolelis,
    M.A.L. (2002), Learning to Control a Brain–Machine Interface for
    Reaching and Grasping by Primates. PLoS Biology, Vol. 1, No. 2, e42.
 3. Clynes, M. & Kline, N. (1960), Cyborgs and space. Astronautics, vol
    14, pp. 26-27.
 4. Ditlea, S. (2000), The PC goes ready-to-wear. IEEE Spectrum Online,
    Vol. 37, No. 10.
 5. Hooke, R. (1665), Micrographia, preface.
 6. Kleber, B. & Birbaumer N. (2005), Direct brain communication:
    neuroelectric and metabolic approaches at Tübingen. Cognitive
    Processes, vol. 6, pp. 65-74.
 7. Kortuem, G., Segall, Z. & Bauer, M. (1998), Context-Aware, Adaptive
    Wearable Computers as Remote Interfaces to 'Intelligent'
    Environments. Proceedings of the 2nd IEEE International Symposium
    on Wearable Computers, pp. 58.
 8. Krepki, R., Blankertz, B., Curio, G. & Müller, K.-R. (2004), The Berlin
    Brain-Computer Interface (BBCI): towards a new communication
    channel for online control in gaming applications. Journal of
    Multimedia Tools and Applications.
 9. Levine, S.P., Huggins, J.E., BeMent, S.L., Kushwaha, R.K., Schuh,
    L.A., Rohde, M.M., Passaro, E.A., Ross, D.A., Elisevich, K.V. &
    Smith, B.J. (2000), A direct brain interface based on event-related
    potentials. IEEE Transactions on Rehabilitation Engineering, vol. 8,
    pp. 180-185.
 10. MacIntyre, B. & Feiner, S. (1996), Future Multimedia User Interface.
    Multimedia Systems, vol. 4, pp. 250-268.
 11. MacKenzie, I. S. (1995), Input devices and interaction techniques for
    advanced computing. W. Barfield, & T. A. Furness III (Eds.), Virtual
    environments and advanced interface design, pp. 437-470.

12. Mann, S. (1996), Special issue on Wearable Computing and Personal
   Imaging. Personal Technologies Journal, Introduction to the Special Issue.
13. Mann, S. (1997), Wearable Computing: A First Step Toward Personal
   Imaging. Cybersquare Computer, vol. 30, No. 2.
14. Mehring, C., Kuester, F., Singh, K.D.& Chen, M. (2004), KITTY:
   Keyboard Independent Touch Typing in VR. Proceedings of the IEEE
   Virtual Reality 2004 (VR'04), Volume 00.
15. Moeslund, T.B., Störring, M., & Granum, E. (2004), Pointing and
   Command Gestures for Augmented Reality. ICPR workshop on Visual
   Observation of Deictic Gestures (Pointing'04).
16. Pentland, A. (2001), Wearable Information Devices, IEEE Micro, May-
   June 2001, pp. 12-15.
17. Schwartz, E. (2004), Dressed for Success. InfoWorld vol. 26 nr.23, pp.
   18 (1).
18. Starner, T. (1999), Wearable Computing and Contextual Awareness.
   Massachusetts Institute of Technology.
19. Starner, T. (2001). The Challenges of Wearable Computing: Part 1.
   IEEE Micro, vol. 21, pp. 44 (9).
20. Starner, T. (2001). The Challenges of Wearable Computing: Part 2.
   IEEE Micro, vol. 21, pp. 54 (14).
21. Veraart, C., Raftopoulos, C., Mortimer, J.T., Delbeke, J., Pins, D.,
   Michaux, G., Vanlierde, A., Parrini, S. & Wanet-Defalque, M.C. (1998),
   Visual sensations produced by optic nerve stimulation using an
   implanted self-sizing spiral cuff electrode. Brain Research, vol. 813,
   pp. 181-186.
22. Wolpaw, J.R., Birbaumer, N., McFarland, D.J., Pfurtscheller, G. &
   Vaughan, T.M. (2002). Brain-computer interfaces for communication
   and control. Clinical neurophysiology, 113. pp. 767-791.