Lecture Series in Mobile Telecommunications and Networks

					The Royal Academy
of Engineering




      Lecture Series in Mobile Telecommunications
                      and Networks




             FROM WIRELESS NETWORKS
         TO SENSOR NETWORKS AND ONWARD
         TO NETWORKED EMBEDDED CONTROL




                      Wednesday, 26 March 2008

                       7 Carlton House Terrace
                          London SW1Y 5AG



                         Speaker: Professor PR Kumar
      Franklin W. Woeltge Professor of Electrical and Computer Engineering
                              University of Illinois




                        Chair: Professor Michael Walker


                          FROM WIRELESS NETWORKS
                            TO SENSOR NETWORKS
                 AND ONWARD TO NETWORKED EMBEDDED CONTROL


                                 Wednesday, 26 March 2008


                              Speaker: Professor PR Kumar

                              Chair:     Professor Michael Walker, FREng




               Professor Michael Walker: Good evening, ladies and gentlemen. For those
of you who do not know me, I am the Research and Development Director for the Vodafone
Group, and I would like to welcome you here this evening to the first lecture in the third series
of lectures in mobile telecommunications and networks. These lectures are hosted by the
Royal Academy of Engineering and sponsored by Vodafone.

[Housekeeping notices: mute mobile phones; emergency exits]

       For those of you who are attending these lectures for the first time, let me just say a
little about their purpose and history. To put things in perspective, at the moment more than
3 billion people in the world carry a mobile phone: more than half the population of the planet
now have a mobile phone. Eighty per cent of those phones use one technology, GSM, which
was invented in Europe and the UK played a very significant role not just in the invention of
that technology but also in its commercialisation. This is a huge achievement.

       It has transformed people’s lives. I do not know any business that does not now rely
on its employees having mobile phones for all sorts of things. Pretty well everybody in the
Western world who wants a mobile phone has one – if people do not have one, it is because
they choose not to and not for any other reason.          Perhaps more importantly, in many
developing countries, the mobile phone has been a force for total social change. Rural
populations, for instance, can keep in contact with each other and the markets in which they
sell their products. A few years ago, Kenyan farmers or Kenyan fishermen had no idea of
the price of their product on the market: now, they know the price of their product on the
market before they start harvesting, and that is all down to mobile phones and what you can
do with them. This is a tremendous development in something under 20 years.



       The purpose of these lectures, when we instigated them three years ago, was to
celebrate the tremendous engineering innovation that is still behind mobile systems and still
contributing to their development today.

       Before we start the third series, I should just mention that the second series is
available in a brochure which contains each lecture. Each series consists of three lectures,
the first of which is always hosted here at the Royal Society, with the subsequent two taking
place within the Royal Academy itself. For the first lecture, we always invite a world famous
figure in the subject of mobile communications and its applications and tonight it is my
pleasure to introduce Professor Kumar. Before I invite him to take the lectern and deliver his
lecture, let me say a few words about his distinguished career. I hope you will forgive me if I
keep referring to my notes here, because his career has been exceptionally long and
distinguished and I could not possibly remember everything.

       Professor Kumar started with his first degree in Electrical Engineering at the IIT
Madras in India. He then went to the US and read both Systems Science and Mathematics
at Washington University, St Louis.        He became a member of the Department of
Mathematics at the University of Maryland and then, since 1985, he has been at the
University of Illinois in Urbana, where currently he is the Franklin W Woeltge Professor of
Electrical and Computer Engineering.

       Professor Kumar has received the Donald P. Eckman Award of the American
Automatic Control Council; the IEEE Field Award in Control Systems, and the Fred Ellersick
Prize of the IEEE Communications Society, which was awarded in 2007. He is a fellow of
the IEEE and a member of the US National Academy of Engineering.

       He has worked on problems in a huge number of areas, including game theory; adaptive
control; stochastic systems; simulated annealing; neural networks; machine learning;
queuing networks; manufacturing systems; scheduling and wafer fabrication. The list goes
on to sensor networks and the subject about which he will talk tonight.

       The title of Professor Kumar’s lecture this evening is From Wireless Networks to
Sensor Networks and Onward to Networked Embedded Control. Professor Kumar, I invite
you to take the podium and address us.




                  FROM WIRELESS NETWORKS TO SENSOR NETWORKS
                AND ONWARD TO NETWORKED EMBEDDED CONTROL


                                   Professor PR Kumar
        Franklin W. Woeltge Professor of Electrical and Computer Engineering
                                University of Illinois




       Thank you, Professor Walker, for that kind introduction. I must say that it is a great
honour to be present in such an illustrious place in front of such a distinguished audience.
Thank you for inviting me.

       I shall be talking about three things: wireless networks, sensor networks and network
embedded control.

The oncoming wireless era: from communication to sensing control

       The underlying key is that we may be on the cusp of a wireless era. Let me sketch
the elements of what that era could be. As Professor Walker pointed out, we are very much
in the cellular systems era and countries like India and China are adding 8 million or so
phones per month. I will look a little into the future, to see what else may be coming down
the road.

       I will be talking about wireless networks where there is no need for any infrastructure.
In cellular systems, there is a wired infrastructure and your telephone makes one wireless
hop to the infrastructure and then it travels with the wired network. In the future, however,
we may all be talking to each other without any infrastructure.

       It is not just communications. Already, we have these small gadgets. This is what is
called a mote and it comes out of the University of California, Berkeley. You can connect
your favourite sensor to that – it could be a temperature sensor, or a light sensor, or a
magnetic sensor or whatever.      That gives you the ability to sense the environment, in
addition to communication and computing – but it does not stop there. None of us is content
with sensing and, the moment we sense something, we want to take action. I am not content
with just knowing the speed of my car, but I want to go faster or slower. Once we have
sensing and acting, that is called control and we may be exercising control over networks.

       In the US, since last year people have been talking about a new phrase,
‘cyberphysical systems’, and these are computers interacting with the environment, the cyber
and the physical worlds coming together. This is really the convergence of communication,
computation and control, and that is what I want to talk about.



Ad hoc wireless networks

         There are several themes, the first of which is wireless networks.         The type of
wireless networks I shall be talking about are what are called ‘ad hoc wireless networks’.
These are things that you can set up spontaneously, anywhere. A bunch of us could open
up our laptops in this room or on a campus and then we could start interacting with each
other.

         The current proposal for operating such wireless networks is what is called multi-hop
relaying.   How does that work?        Let us say that this node wants to send packets of
information to that node. This thing here says, ‘I want to talk’, and the nodes which hear it
agree to keep quiet. Then this node says, ‘Okay, go ahead and talk to me’, and any node
which hears that also keeps quiet. At this point, the neighbours of both these nodes have
kept quiet and that facilitates a packet from here being sent to there, and being received
without nearby interference.      We can also send an acknowledgement back, saying ‘I
received your packet’.      So this four-phase handshake takes place and the packet of
information makes one hop. It then continues on in a similar fashion until it reaches its
destination, and that is a multi-hop wireless network.
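
[A minimal Python sketch of this four-phase handshake, using a toy model in which each node keeps a set of neighbours and a 'quiet' flag; the node names and structure are illustrative only:]

    # Toy model of the four-phase handshake (request / clear / data / ack)
    # used for one hop of multi-hop relaying. Purely illustrative.
    class Node:
        def __init__(self, name):
            self.name = name
            self.neighbours = set()
            self.quiet = False      # True while deferring to another conversation

    def connect(a, b):
        a.neighbours.add(b)
        b.neighbours.add(a)

    def one_hop(sender, receiver, packet):
        # Phase 1: sender announces 'I want to talk'; nodes that hear it keep quiet.
        for n in sender.neighbours - {receiver}:
            n.quiet = True
        # Phase 2: receiver answers 'go ahead and talk to me'; its neighbours keep quiet too.
        for n in receiver.neighbours - {sender}:
            n.quiet = True
        # Phase 3: the packet makes one hop without nearby interference.
        delivered = packet
        # Phase 4: the receiver acknowledges: 'I received your packet'.
        ack = "ACK(" + packet + ")"
        # The conversation is over, so the silenced neighbours may talk again.
        for n in sender.neighbours | receiver.neighbours:
            n.quiet = False
        return delivered, ack

    a, b, c = Node("A"), Node("B"), Node("C")
    connect(a, b)
    connect(b, c)
    print(one_hop(a, b, "pkt-1"))   # first hop of the route A -> B -> C
    print(one_hop(b, c, "pkt-1"))   # second hop, and so on to the destination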

         All of these operations can be mapped onto an architecture, which is reminiscent of
an OSI stack. If you look a little more closely at this, what is really happening is that this
node sends a packet, which is basically a radio signal, to that node. However, that is not the
only thing that this receiver is hearing because there is also concurrent interference and
noise. The thing about the wireless world is that it is a shared medium and so, if two people
are talking to you simultaneously, perhaps you cannot understand either one of them. That
is why there is this four-phase handshake, which attempts to keep your neighbours quiet, so
that you reduce the interference.

         Then, the signal is decoded in the presence of interference and noise – interference
from different faraway sources, and digitally regenerated, so that you are buying into the
digital revolution at this point, and then regenerated and re-transmitted to the
next node which, again, decodes it in the presence of interference plus noise, forwards it to
the next node, and so on.

         The point is that wireless transmissions interfere with each other, so there is a great
deal of handshaking and so on to mitigate the interference. In fact, that is what lies behind
the notion of spatial re-use of frequency. In the cellular world, for example, supposing you
use your blue frequency in your local cell, then it is not used in an adjoining cell but it may be
used further away. So frequency is re-used at a different point in space, further away from
where this conversation is going on.



       One question you could ask is, how much traffic can wireless networks carry when
we treat interference as noise? Is it possible that the entire world can become wireless and
that we can just get rid of all wires altogether?

Scaling law for wireless networks

       We really want to study the scalability of wireless networks and how large they can
get. This slide shows a model and let us suppose that I have some domain and, in that
domain, let us suppose that there are n nodes, randomly located. You never know where
your users will be, so suppose that they are randomly located in this domain. Let us suppose
that every node wants to talk to some other random destination and that it wants to send
traffic at the rate of lambda bits per second throughput, to the destination. Similarly, all the
other nodes also want to send lambda bits per second.

       The question you can then ask is, what is the largest lambda you can support? What
is the largest throughput that you can furnish to each user in a large wireless network with n
nodes? Here is the result. It says that the probability that you can support a multiple of [1
over square root n log n], converges to 1 as n goes to infinity. There is a larger multiple,
whose probability of being supported can resist to zero. This is what is called a sharp cut-off
phenomenon and it tells you that a wireless network essentially can support [1 over square
root of n log ] in bits per second per user. This means that there is a law of diminishing
returns: as the number of users increases, what you can provide to each user decreases and
so we cannot get rid of wires with this technology. As you try to accommodate more and
more people, each of us will have to give up some of our own throughput.
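
[Written out, the scaling law stated above is, per node,]

    \[
        \lambda(n) \;=\; \Theta\!\left(\frac{1}{\sqrt{n \log n}}\right) \ \text{bits per second},
    \]

[that is, there are constants c < c' such that a throughput of c/\sqrt{n \log n} per node is supportable with probability tending to one, while c'/\sqrt{n \log n} is not, as n grows.]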

       We also understand architecture when we treat interference as noise. Here is an
order optimal architecture, so here are your random nodes. It turns out that you can operate
it in a cellular fashion, which means that you can divide up groups of nodes into cells, very
much like the cellular systems. All nodes choose a power level which is sufficient to reach
nodes in neighbouring cells. Basically, you can have nearest neighbour conversations.

       As far as the routing of packets is concerned, it turns out that you can pretty much
follow a straight line path, and such straight line paths will average out the load over the
network so that there are no hotspots. That is an order optimal architecture. That is what we
will get if we treat interference as noise and there is a law of diminishing returns, as I have
pointed out.

But is spatial reuse the right design principle?

       However, the fundamental question to ask is, is spatial reuse the right design
principle? Why do I want to challenge that? As I pointed out, with spatial reuse a frequency
is not used in an adjoining cell but only further away. If you really believe in spatial reuse of
frequency, then you should believe that a sharper path loss is better for wireless networks.
What do I mean? As radio signals travel, their amplitude attenuates. This is a gradual
attenuation [on slide] and that is a more rapid attenuation. The philosophy behind spatial
reuse is that if you do not want to have interference from people further away then you
should prefer a sharper attenuation. If you believe in that, then you should believe that this
[on slide] is better than that. Or, to put it more colourfully, you should believe that jungles are
better than deserts for wireless networks, if that is true.

Is spatial reuse really the right way to operate wireless networks?

         The problem is that wireless networks are formed by nodes with radios, not with
wires. Therefore, to draw a picture like this is wrong, and it is reminiscent of wires. Actually,
you cannot see it here, but we have a whole bunch of antennae which are just radiating
energy. That is the picture of a network that you should have, with everybody talking away.
There is no a priori notion of links – nodes simply radiate energy and so the network is
Maxwellian rather than Kirchhoffian. In the Maxwellian world, strange things can happen. Nodes
can actually co-operate in much more complicated ways than they could in a wire-line
network.

       For example, it turns out that if somebody shouts in your ear at the same time as
somebody else is whispering softly, Shannon would actually say that was fantastic. In fact,
the louder this person shouts, the better. Why? Because, when this person shouts really
loudly, you can decode that person perfectly and subtract out that person’s signal if you know
the channel attenuation, and pick up the whisper. If the two signals arrive at similar strength, it
is actually more difficult and you may be able to decode neither of them – so when one is much louder, that is actually better for
you.   The point is that interference is not interference: everything is information – even
interference – and the only question is how you deal with it.

       Here is another bizarre notion. We have this notion of signal to interference plus
noise ratio. Signal is the good guy and interference and noise are the bad guys, and this
ratio tells you how good the signal is, compared to what is interfering. Usually, we try to
mitigate interference but perhaps we can try to cancel interference actively, just as in your
acoustic sound-cancelling headphones. For example, this node could transmit something
which cancels the effect of what this node is transmitting at this point, so you can have co-
operative cancellation and perhaps you should try to reduce the denominator.
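
[The ratio in question is simply]

    \[
        \mathrm{SINR} \;=\; \frac{S}{I + N},
    \]

[so the conventional approach keeps the interference term I small by silencing neighbours, while co-operative cancellation tries to reduce the denominator directly.]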

       Alternatively, perhaps you should not even buy into the digital revolution. Instead of
re-generating the packet at each intermediate node, why not just amplify what you heard,
without bothering to decode? In fact, the information is only intended for the final destination,



so why should you insist that intermediate nodes should be able to decode information? In
fact, if you are sending this information over multiple paths to your destination, then perhaps
it is only the case that the end-point has sufficient signal strength to decode the information,
and not the intermediate points. Thus, this architecture of decoding and treating interference
as noise limits itself from the start. As Shakespeare said, through Hamlet, the world is
extremely complicated.       The fundamental question is, what is the best architecture for
wireless networks? The point is that the design space is infinite-dimensional and there are
so many sophisticated things we could do. To get to the bottom of this, we have to seek
answers with information theory.

Network information theory

        As many of you know, information theory was invented by Claude Shannon in 1948,
about 60 years ago. A very celebrated formula, for example, is the capacity for Gaussian
channel which says that, if you have a spectrum of bandwidth (B), signal strength (S) and
noise strength (N), then this is the absolute limit on the number of bits per second of
information that you can transmit. So, for example, if you have a telephone wire and you tell
me how much noise there is on it, Shannon tells us exactly how much throughput that wire
can take. That kind of fundamental result allows for any mode of operation: no matter what
you do, you cannot beat this formula. This is fundamentally good because, once you have
some kind of law of thermodynamics – some limit – then, when you know you are
close to it, you can quit trying.
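
[The celebrated formula referred to here is the capacity of the Gaussian channel,]

    \[
        C \;=\; B \log_2\!\left(1 + \frac{S}{N}\right) \ \text{bits per second}.
    \]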

        Information theory has had much success in point-to-point communication: point-to-
point means one person talking to another. However, we are in the world of networks and
not just one transmitter and one receiver, but we have a whole bunch of people co-operating
to transmit information to each other. So how should we do it?

        Here is the result, which is a theoretical one which says the following, under some
assumptions. It says that if your path loss is sufficiently sharp, then multi-hopping is indeed the
order-optimal architecture. That is the way that the present design efforts have been going, and that
is indeed the right choice. So, out of this whole complicated space, what we have been
doing is the right thing but, unfortunately, it also means that there is a law of diminishing
returns. We cannot beat this limit, even if you throw over the digital revolution and even if
you decide to have these other strategies.

In-network information processing

        Let me move on to another theme: in-network information processing. I am now
talking about what are called ‘sensor networks’.        Instead of just having the facility to




communicate and perhaps to do local computations on your processor, nodes can also sense
their environment, so you can plug in your favourite sensor there.

       What tasks might a sensor network be deployed to perform? There is
environmental monitoring and so, for example, in some domain you may throw a whole
bunch of nodes, all of which measure the temperature. To be extremely simplistic, let us
suppose that there are n nodes which take the temperatures X1 to Xn, and that perhaps there is
some collector node or a sink – then what the sink is interested in is the average temperature
of the domain. You just want to monitor the average temperature.

       Or, in an alarm network, a collector node may perhaps be interested in the
maximum temperature. Is there a fire, or isn’t there? The point is that sensor networks are
not just data networks.      You should not think of sensor networks as just good old
communication networks where you simply replace files by sensor measurements, for the
following reasons. In the internet, one never looks inside another person’s packet. For
example, I do not look at your packet and say, ‘This is not interesting information – I will drop
it.’ However, in a sensor network I may do that. If I see a high temperature, then I may drop
a low temperature. So, depending on what I have heard, I may say that this information is
uninteresting, or I may fuse information and combine information and so on. The point is that
in sensor networks, the nodes do not just forward information but they also compute. In
other words, they process information in the network – the network itself processes
information, and so this whole thing is like a Maxwellian computer, if you will.

       There are many interesting questions that you can ask and I will say something really
simplistic. Let us say that I want to compute a symmetric function in a sensor network. What
is a symmetric function? It is one where, if you change the identity of which node has which
temperature, the result does not change.            For example, the average is invariant to
permutations. In fact, most statistical quantities are symmetric functions.
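
[Formally, a function f of the readings is symmetric if, for every permutation \pi of the node labels,]

    \[
        f(x_1, x_2, \ldots, x_n) \;=\; f(x_{\pi(1)}, x_{\pi(2)}, \ldots, x_{\pi(n)}),
    \]

[which is why the average, the maximum and most other statistics qualify.]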

Computing symmetric functions: the mean versus max

       It turns out that we are very much at the beginning point of developing theories for
how to operate such networks. What I would like to illustrate for you is a kind of dichotomy
about how you treat different functions. It turns out that the rate at which you can exfiltrate
average temperature readings from sensor networks is [1 over log n]. The
architecture for that is the commonsense architecture. If you have a bunch of nodes, you
break them up into cells and you tessellate them. In each cell, you add up the temperatures,
you sum the temperatures, and then you propagate all the sums along a tree rooted at the
collector node, and that is the commonsense thing that anybody would do.




          However, it turns out that if you want to compute the max temperature, not the
average, then you can do it exponentially faster. The rate at which you can do it is [1 over
log log n]. I just want to show you how you could take advantage of what you are computing.
You can take advantage of what is called block coding. In other words, I do not compute the
maximum temperature every day but I gather together a bunch of temperatures and then
spew out a bunch of maximum temperatures.

          Just to show you the idea, let us suppose that all temperatures are binary, 0 or 1. So
anybody who has a temperature of 1, has a maximum temperature automatically. Let us
also suppose that we are all collocated so that, when I talk, everybody hears, and whenever
anybody talks, everybody hears. Then the first node can simply announce the set of times at
which it has a max temperature – so at times 10, 15 and 20, it has a temperature of 1. Then
the second node can butt in and say, ‘Okay, I have a maximum temperature at these three
times.’    Such information can be very efficiently compacted and therefore you can get
exponential speed-ups. The way you operate these networks can be quite sophisticated.
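
[A minimal Python sketch of the block-coding idea for the binary maximum described above, assuming collocated nodes that all hear one another; the readings are made up for illustration:]

    # Block coding for the max of binary readings: instead of announcing its
    # reading in every time slot, each node announces only the (compact) set of
    # slots in which its own reading was 1. The union of those sets gives the
    # whole block of maxima at once.
    readings = {
        "node-1": [0, 1, 0, 0, 1],   # one row of 0/1 temperatures per node,
        "node-2": [0, 0, 0, 1, 1],   # indexed by time slot within the block
        "node-3": [1, 0, 0, 0, 0],
    }

    block_length = 5
    slots_with_a_one = set()
    for node, row in readings.items():
        announcement = {t for t, x in enumerate(row) if x == 1}
        # Each node only needs to announce slots not already claimed by others.
        new_slots = announcement - slots_with_a_one
        print(node, "announces max at slots", sorted(new_slots))
        slots_with_a_one |= new_slots

    block_max = [1 if t in slots_with_a_one else 0 for t in range(block_length)]
    print("block of maxima:", block_max)   # [1, 1, 0, 1, 1]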

           In computer science, beginning with the work of Cook in Canada, there arose this
whole theory of complexity, and we need similar theories of complexity for sensor networks –
in fact, for all these cyberphysical systems.

Clock synchronization in distributed systems

          Let me turn to another theme, that of time and clocks. It turns out that a knowledge of
time is important in cyberphysical systems. If computers are just talking to each other, their
conversations could be purely event-based – time is irrelevant. However, when you are
interacting with physics and physics-based systems, time is important because if two of us
were at the same spot at the same time, we would collide. Knowledge of time is therefore
important in cyberphysical systems. What these things do is that they presage a movement
towards not just event-based computing, but time-cum-event-based computing.

          However, no two clocks in the world agree and the question is, how do you
synchronise clocks? What are the limits to synchronisability? The traditional approach to
clock synchronisation is really simple. Let us say that this is your root clock, and here is
your network, and then these two nodes talk to each other. Essentially, this clock finds out
how much ahead of that clock it is, the offset, and then these two clocks talk to each other,
with this one saying, ‘I’m this much ahead of you’, and so on. This node [on slide] then finally
adds up all these offsets and gets its estimate of the time at the root.




Spatial smoothing in random networks

       It turns out that each of these conversations is a little noisy and so a certain error is
introduced. Basically, the error is the sum of the errors along the
diameter of the graph, if you will, and its standard deviation grows like the square root of the diameter. If there
are n nodes arranged in a plane, then the diameter is [square root of n] and so basically the
error is growing like [n to the one-fourth].       This means that, in a large network, your
synchronisation error will increase and that is not good if you want to build applications. So
can we do better?
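
[In rough terms, if each pairwise exchange contributes an independent error of standard deviation \sigma, then adding offsets along a path of d hops gives]

    \[
        \sigma_{\text{path}} \;=\; \sigma\sqrt{d},
        \qquad d \approx \sqrt{n}
        \;\;\Rightarrow\;\;
        \sigma_{\text{path}} \sim n^{1/4},
    \]

[which is the n-to-the-one-fourth growth mentioned above.]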

       It turns out that you can, and I just want to show you an idea. Let us suppose that I
have a random network, and suppose that all nodes choose some range such that the
network is connected. In a connected graph, there is a multiplicity of paths from one node to
another and you can average out the errors over all these paths and actually build very
simple distributed algorithms so that – and this is a fundamental result – the standard
deviation of error is bounded. We can actually prove that we can keep synchronisation
errors bounded in large networks of arbitrary size. That lends support to the feasibility of
time-based computation in wireless networks.

Object tracking by directional sensors

       Let me show you one application of tracking an object purely through measurements
of time. We have a domain and let us suppose that, in that domain, I throw down a highly
directional sensor which could be like a laser beam. Whenever anybody trips the laser
beam, I record the time when they crossed the laser beam, so I have time measurements of
crossing – but I do not know where the object crossed, in what direction or at what speed. I
do not know any of those things – just the time.

       Let us suppose that I randomly throw down a bunch of such directional sensors, and
that objects cross the domain at constant speed. So this object crosses and, as it crosses, I
get the measurements of the times of crossing. Then there is another object that crosses
and, again, I have measurements of time of crossing and so on. These objects would be
moving at different speeds but each object has a constant velocity. Let us suppose that
everything about this network is unknown – I do not know the sensor locations or directions, I
just ‘air drop’ them. I do not know about object locations, tracks, speeds – none of that, but
only the times of crossings are known. However, I want to estimate everything – I want to
estimate the positions of all sensors and of all objects and everything.

       Of course, it is impossible to solve this problem because we have to agree on a co-
ordinate system – so we can solve it up to a co-ordinate system, but all these co-ordinate
systems are equivalent.


Implementation with laser pointers, motes and Lego car

         I want to show you an application and there is another reason why I have introduced
it.   It turns out that these kinds of technologies will have a fundamental impact on our
university curricula.    This is a particularly simple experiment which requires minimal
hardware to set it up – in fact, it uses paper cups, a Lego car and these laser beams, and a
few of these Berkeley motes, as you see.

[Shows video – model vehicles in lab]

       You have these Berkeley motes and these paper cups and you have a small Lego
car with a little flag. When the flag is up, it trips these laser beams and when the flag is down
it does not. You just move it up and down the domain. Initially, I was a little sceptical about
whether this experiment would work, given the possibilities for numerical error, but I was
surprised. If this is the actual path of the object, these black dots indicate the estimates.
Also, the system picks up how the laser beams are pointed. I believe that similar things
should find their way into our curricula at universities.

The third generation of control systems

         Let me move on to control, and set the historical stage here. We are at the cusp of
the third generation of control systems and so, if we think of the first generation of control
systems as analogue control systems, the technology for that was electronic feedback
amplifiers. The technology created a great deal of need for theory, which was admirably met
by people like Bode, Evans, Nyquist and so on. In fact, there is a beautiful book on the
history of technology by a professor at MIT, David Mindell, which shows how the fields of
computing, communication and control all arose together about 60 or so years ago.

         Starting in about 1960, we had the second generation, which is digital control. This
was when digital computers came along, and you could do a little computation before you
closed the loop. I believe that people like Rudy Kalman and so on happened to come along
at the right time which needed a certain theory, which was a states based theory, and that
was supplied by several researchers.

         At the computer science end, this technology was supported by developments in real-
time scheduling which took place in some pioneering work in Illinois, with the work of
Liu and Layland. However, things have changed over the last 40 years and computers have
become much more powerful and embedded in all kinds of devices. The whole fields of
wireless and wire LAN networking have come about and software has also become much
more powerful. This is actually leading to a revolution which I call network embedded control
systems.



Challenge of abstractions and architecture

       I believe that the first and most important challenge for this dual technology is to
decide what are the appropriate abstractions and what is the right architecture. Let me make
the case for why abstractions and architecture are important for the evolution of technology.

       Let me start with the architecture of the internet – and many of you may be familiar
with this. There is a layered hierarchy. If you are a person in communication theory who
knows a great deal about modulation, then you would work on the physical layer problems
here. If you were a graph theorist, you would work on the networking layer over here, and if
you were working on HTTP or something, then you would work up here. So there is a
hierarchical segregation of tasks.

       This architecture, along with what is called peer-to-peer protocols – I claim that the
architecture has been more important for the proliferation of networking, in comparison to the
algorithms, even though the algorithms are very important. Why do I say that? By the
segregation of tasks, we guaranteed that each of us can work on our little specialty and,
when we put them all together and compose them, the interfaces have been worked out so
that the whole thing works.     Not only that, but this layering also gives this technology
longevity. Over the course of time, therefore, you may have a different and better idea on
one of the layers – let us say TCP. Then, you do not need to replace the whole stack but
you just replace that little sliver of it and the rest of it can work. That longevity allows this
technology to evolve and the longevity also facilitates proliferation. Proliferation drives down
the cost per implementation.

       The point I want to make is that there is always a tension between architecture and
performance people, because performance people always want to bust architecture. They
want to take short cuts – they say, ‘Gee, if I could just expose parameters of this layer to this
layer, then I could improve this by 10 per cent.’ However, at the back of the room there could
be another person saying, ‘Wait a second! If I expose these parameters to this one, I could
do 15 per cent.’ If you start implementing all of these short cuts, however, in the end you will
have a spaghetti architecture – which will mean no architecture at all, and it will be hard to
maintain and to upgrade and so on. You may have a faster system, but you will not get a
million of them. So, even though there is an apparent tension between architecture and
performance, I contend that architecture is also performance oriented, while keeping in mind
the long time horizon.

       Another example that I like very much is due to Les Valiant at Harvard. He claims
that the success of serial computation is due to the von Neumann Bridge. The way to think
about it is that you have Microsoft and Intel and they do not need to talk to each other but, as



long as they conform to the von Neumann abstraction on either side, their products by and large
will inter-operate. That has been the reason why serial computation has been phenomenally
successful.

       In contrast, parallel computation has no von Neumann Bridge. It is very hard to
separate architecture from algorithms and that is the reason why there has not been
proliferation. Similarly, in communication, there is the separation between source coding and
channel coding – source coding can be done in your JPEG, or you may do it in software,
whereas channel coding may be done in your network interface card, and so on. This is
Shannon’s result, and so on.

       The point is that we are now getting into very complicated systems where we have
communication control, serial, parallel, everything. What are the appropriate abstractions
and what is the architecture? What is the goal here? The critical resource is not the cost of
the equipment but the critical resource is the designer’s time – your time and my time. When
projects go into over-runs, it is not because something is a little too expensive but it is usually
because there is something wrong in the whole design. Our goal therefore is to make it very
easy to enable rapid design and deployment. Standardised abstractions and architecture
help, so we can just build these things like levels and therefore enable proliferation.

Information technology convergence lab: the systems

       Let me show you some efforts that we have in our lab. We have these model cars
running around on this plywood sheet, with these cameras up in the sky. There is image
processing going on here. There are all the levels of decision making – set points, tracking,
scheduling, planning, re-planning, re-scheduling and so on. You may not be able to see it on
the slide, but there is a wireless ad hoc network here of laptops, and all these laptops are
controlling the cars. This is a completely closed-loop system - it is closed over vision and it is
closed over wireless networking, but what I want to focus on is the fact that it is closed over
middleware. I will explain what I mean.

IT convergence lab

       Let me first show you what we can do in this lab.

[Video shown – model vehicles in lab]

       That is the interaction between logical dynamics and differential dynamics. This is a
pursuit evasion scenario, OJ Simpson style. There is a car being driven by my student and
these other two cars have to follow automatically in formation, and he is trying to confuse
those two. [Video continues]




Abstraction of virtual collocation

       The first abstraction I want to propose is what I call ‘virtual collocation’. What do I
mean by that? Here, we have the car and the actuator, which could be the gas pedal,
steering wheel or whatever. Then we have all the layers of decision-making. Then there are
the sensors, and you have more than one sensor, so you have a server and data fusion
because you have lots of cars. Then, of course, you need to supervise all of these and so
on. If you ask a control designer to design directly for this very complicated view of the
system, it is a difficult task because the notion of time is different at different nodes, and the
notion of IP addresses is different.

       There is a great deal of detail which someone needs to keep track of. What I want to do
is to reduce this complicated system to an input/output view. If you go back and ask an
electrical engineer what architecture is, they will probably say block diagrams -
interconnecting by lines and block diagrams and that is architecture. It is the whole notion of
signal flow graphs. What we want to do therefore is to reduce these complicated software
systems to such input/output loops.

The abstraction layers

       How do we do that? We do that through middleware – let me explain. Here, we have
the system and the first abstraction that we have is that of a node – everything we call a
node. Then the link layer creates the illusion of links. The networking layer clears the
illusion of a graph – the notion of connectivity, enter and connectivity. The transport layer, for
those of you who know communication, creates the illusion of pipes, so if you drop a file
here, it shows up there and you do not need to worry about the graph any more.

       We are moving from nodes to graphs to pipes and so on, and then the next natural thing I
contend is just to think of the whole system in its entirety, as a collocated system, not
vulnerable to the complexities of interconnection, and the way you would design applications for
it is through component logic. If you are a Kalman filtering expert, you write a few lines of
equations, which sit as a Kalman filter. If you are an image processing expert, you write an
algorithm for image processing or deadlock avoidance, or whatever. The middleware, which
I shall explain, will take care of how all these components are operated at these nodes. So
you try to hide the complexity of the system from the designer and you allow specialisation –
and the middleware manages the components. If the network layer is created by some
version of distributed Bellman-Ford, and the transport layer is created by TCP, the virtual
collocation layer is created by Etherware, which is the middleware that we
have developed.




       I should say one other thing. Clock, for example, could be a service, and so an
application will use the facilities of a virtual collocation layer and also specialised services.
For example, the set of cars on a street could be a service which is composed out of other
parameters.

Component migration

       Let me show you an application – component migration. Let us suppose that we
have a camera here which is generating pixels. You could ship all these pixels from this
camera to this computer but then you may stress the communication link. On the other
hand, you might ask, why not just compute the latitude and longitude at the camera and
then ship those co-ordinates over, thus reducing the data bandwidth? However, that would
stress the relay processor, so which should you do?

       That is exactly the kind of detail that you do not want the designer to worry about
because, after all, it can change. If you replace your camera with a more sophisticated
camera, or your network with a better network, the decision may change. For example, if
your Kalman filter is running on this computer and generating excessive overhead, then you
would like the Kalman filter as a component automatically to take its state and migrate over.
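
[A hypothetical Python sketch of component migration in a middleware of this kind; the Component and Node interfaces below are illustrative inventions, not the actual Etherware API:]

    # Hypothetical illustration of component migration: a component's state is
    # checkpointed, the component is recreated on another node, and its state is
    # restored, so the control designer never handles these details directly.
    import pickle

    class Component:
        """A middleware-managed unit of application logic (e.g. a Kalman filter)."""
        def __init__(self, name, state=None):
            self.name = name
            self.state = state if state is not None else {}

        def checkpoint(self):
            return pickle.dumps(self.state)

        def restore(self, blob):
            self.state = pickle.loads(blob)

    class ComputeNode:
        def __init__(self, name):
            self.name = name
            self.components = {}

    def migrate(component, src, dst):
        """Move a running component from src to dst, carrying its state along."""
        blob = component.checkpoint()           # freeze the current state
        del src.components[component.name]      # remove it from the loaded node
        clone = Component(component.name)
        clone.restore(blob)                     # recreate it elsewhere
        dst.components[clone.name] = clone
        return clone

    camera_pc, relay_pc = ComputeNode("camera-pc"), ComputeNode("relay-pc")
    kf = Component("kalman-filter", {"x": [0.0, 0.0], "P": [[1, 0], [0, 1]]})
    camera_pc.components[kf.name] = kf
    kf = migrate(kf, camera_pc, relay_pc)       # 'changing the engine while running'
    print(list(relay_pc.components), kf.state["x"])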

[Video shown – models in lab, middleware migration]

       So basically, we are changing the engine of the car when it is running. This kind of
facility is very useful because the steady state of all large systems is that there are failures in
some way or another. You want the system to operate and function despite those failures and so,
for reliability, you need these kinds of mechanisms, which we are providing easily to the
control designer.

System-wide safety and liveness: automatic traffic control

       What about theory?       It turns out that we need theory.        This is a traffic control
example. There are these traffic lights in streets that you cannot see and the cars are
supposed to obey those traffic lights politely, which they do on occasion.

[Video – models in lab]

       There is a traffic light there, and that car is in a rush.        So that is the kind of
application.

       If you want to design that kind of an application, there is a great deal of complexity
underneath it. For example, this is a graph of concurrency at the traffic light. All things can
happen at the traffic light and, to reason about this entire system, we have to discretise the
entire domain, and that adds a great deal of discrete complexity. In computer science, we



are dealing with these discrete calculations, interfacing with the physics of the car, so those
are called hybrid systems. We ultimately need proof of safety of these hybrid systems.

         This is a Mickey Mouse answer of a kind of theorem, just to illustrate the point.
Imagine a directed graph, and here are some properties of the graph. Imagine a number of
cars, some properties of the environment, road width and so on, and the angles of
intersections.     There are also some models of the car and some notion of real-time
scheduling. We can then guarantee that all these cars could be operated without collisions,
safely, and without gridlocks, liveness.

         This is a very simple, hand-crafted proof for this application but, as we design very
complicated systems, we will have to automate these kinds of proofs of safety and so on,
and that is a huge challenge for theorists because complexity is a huge barrier.

Collision avoidance

         I want to show you collision avoidance. It turns out that, in the US and elsewhere,
hundreds of millions of dollars are wasted on collisions, and injury from collisions.

[Video – model vehicles in lab]

         In the town of Champaign, where I come from, people pay money to see these kinds
of events. The point I want to make is that most accidents are caused by human error. In
fact, if you could just have someone tap you on your shoulder two seconds before, you could
prevent many accidents.       In this day and age, we should not be having these kinds of
accidents. That is the first point.

         Secondly, it turns out that designing systems with the human being in them is actually
more difficult than designing automatic systems. If you automate things, you can actually do
a little better.

Intelligent intersections

         The next topic is intelligent intersections. I realised, after spending a day here, that I
am probably talking about the wrong topic in the wrong place because you already have
better technology with roundabouts – you do not have these intersections. It turns out that
stop lights and traffic lights are very wasteful.         For example, in suburban roads at
intersections you often stop when there is no need for you to stop – and in some countries
they do not – and this also applies at night. The idea therefore is that we should just get rid
of these altogether. We can then probably lower fuel consumption, reduce delays, create
greater safety and so on. This is an application and I want you to decide for yourself whether
you would sit in one of these cars – there are no lights, but they just negotiate packets and




cross the intersection. [Animated diagram] There are obviously significant legal and social
challenges here, and perhaps psychological ones too.

The oncoming pedagogical challenges

       This is my last slide. It turns out that the convergence of all these technologies is
giving rise to pedagogical challenges which we, at universities, have had to confront. We live
in this post-Maxwell, von Neumann, Shannon, Bardeen-Brattain world, and we can do rather
sophisticated things. I believe that the 21st century is the age of large system building. As
we recognise the era of limitations – environmental, energy and so on – we will be building
smart transportation systems, smart energy grids and so on.

       If you look back over the last 50 years or so – and perhaps you should never trust an
account of history so soon after the event – you could argue that the last 50 years was the
age of building strong individual disciplines. For example, computation is about 60 years old,
and modern control is about 45 years old. Communication – with Shannon – is about 60
years old, and signal processing, with Cooley and Tukey, is about 40 years old, and so on.
However, we are now seeing a unification of all these things. For example, even the simple
problem of calculating the average temperature in a sensor network – is that a
communication problem? No, not completely, because you also need to do computations. Is
it a computation problem? No, it is communications also.

       You can see that all these fields are coming together and somehow we need to
communicate the totality of all of this to undergraduate students. At the same time, if you
want to do research, you need a deep knowledge of these individual fields and the question
is, how will we meet all these challenges?

       I will stop there. Thank you. [Applause]




                                   Questions & Answers



               Mike Walker: Thank you, Professor Kumar, for that fascinating talk which
took us from ad hoc networks, radio networks, through cars changing their engine while still
moving, to traffic control and collision avoidance systems.

       Professor Kumar has agreed to take some questions. Let me start because, in our
business, which is based on cellular radio systems, they are all very old-fashioned. It is that
very old spatial diversity planning scenario that you painted right at the beginning. You



showed that in fact Maxwellian networks are the way we ought to look at things, and this is the
best possible. However, we have seen many attempts to build ad hoc networks with direct
communication from one mobile device to another.         These have all more or less failed
commercially. Do you have a feeling about why that is so, and whether there will be a
turning point so that we will see some of these networks being deployed commercially for
real?

                PR Kumar: There are two answers to that. If we look at the pace of evolution
of communication technology, it is just amazing.       Telephones are about 100 years old,
cellular phones are about 35 years old and the internet is about 25 years old. Who knows
what it will be like in 20 years from now? There could be an information fabric connecting
individuals and so on. One cannot rule out such things, and we may see them in the future.

        One of the issues has been what is the application of these ad hoc networks. Of
course, the military is always hungry for any application or any technology but what about in
the civilian world? There have been some applications and, for example, when Hurricane
Katrina struck New Orleans, there was actually a team from Champaign-Urbana Wireless
who went down there to set up the emergency network without any infrastructure, so you
have seen those kinds of things.

        One big way it could potentially take off would be in vehicular networks. In the US,
there has already been spectrum set aside for vehicular networks. Those are systems with
some structure, when cars go along the road and so on, so that it is not completely
unstructured. This actually facilitates people who are trying to design systems, because at
least they have a target in mind. People are thinking of safety applications, infotainment
applications, high-speed tolling and so on. I think vehicular networks could be a domain that
you will see being realised sooner.



                Professor Lajos Hanzo (University of Southampton): I very much enjoyed
your lecture and you set out a number of interesting challenges for us. One of these is in the
field of cross-layer optimisation. We set up the seven-layer OSI architecture and it is very
convenient, as you said, to have our little systems optimised on a layer by layer basis, but of
course mobile communications does not quite fit into that mould. For example, we have
power control and all the related functions which are stuck to the side of the seven-layer
architecture.

        You then went beyond that and spoke about these complex systems where we now
integrate control, communications and even mechanical systems. You also provided a very
interesting architecture for that. However, in a way, you are still also suggesting that perhaps


even these complex structures will have to use some form of cross-layer optimisation as well
as some form of logical ordering of the different functions into a greater entity. Do you have
any comments on this?

                PR Kumar: Some of the theory that I talked about earlier clearly ignores
multiplicative constants – so that factors of 200 or 300 are irrelevant in these theories. You are
searching for an architecture for a Maxwellian network in this infinite dimensional space,
whereas there are all kinds of possibilities. The theory gives you some guidance on what the
architecture is but of course in the real world we want to improve performance by factors of
two, three and so on. A great deal of design work is being done and so, in that sense, there
is a little bit of a disconnect between the architecture that this theory suggests, and the real world.

        A very good example, as you mentioned, is power control. Power control is, how
loudly should you talk? When I send a packet, I can decide at what power level to transmit it,
and that has all kinds of implications. For example, if you think that the power level is
affecting signal quality, then that is a physical layer issue that classical communication
engineers are worried about.         On the other hand, when I talk really loudly, it causes
interference to somebody else, and interference is treated at the congestion control layer of the
transport layer, which is somewhat higher up. On the other hand, power control also affects
connectivity and so, when I talk loudly, I can get to my final destination in three hops rather
than in 30 hops if I were to talk softly. That affects the network layer.

        Power control affects all the layers and the question is, where should you address it?
That is actually one of the most difficult problems there. It is not sufficient to say that you will
address it all over the place because then different layers would be fighting with each other
and turning knobs.      We still do not have any satisfactory answer on that and, even in
communication networks, we are still in research mode.

        Turning to these more sophisticated systems, they are in the very beginning stages of
this and what I have just spelled out is one proposal. I think there will be further evolution of
thinking and competing proposals and so on. However, it is true that I have suggested a kind
of layered hierarchy, if you will.



                Sreebhusan Ghosh (AT Consultancy): I have one question for Professor
Kumar with regard to the practical application in poor countries. He mentioned earlier that
mobile phones have taken over like fireflies in India and China – I think that in China, 60 per
cent of the population have mobile phones and it is rapidly progressing in India. I was very
surprised at how quickly it has taken off because five years ago there were hardly any but




now there is the better part of nearly 20 per cent, for which Mr Sarin of Vodafone spent a
great deal of money for a network in India.

         The point in question is this. Seeing that the mobile phone technology works so well,
despite the relative level of ignorance and lack of technological development, do you see any
particular application with this networking, also without any land line involved, which will help
the traffic system in major cities like Calcutta or Bombay, where at any given time 50 per cent
of the traffic lights do not work because of communications alone? Or likewise, in big cities
in China or elsewhere?       Or do you think that the technology itself is so complicated,
comparing from the mobile phone to mobile networking, that it is a long time before this takes
place?

                  PR Kumar: Interestingly, these new technologies that we are seeing allow
countries to leapfrog the industrial revolution.    Actually it is a paradox: less developed
countries probably have more wireless infrastructure than developed countries, because
developed countries have so many wires in the ground and the capital is already there and it
is hard to compete with that.

         There are many applications, for example in remote medicine. In the villages in India,
as you mentioned, if you can transmit the image of your skin or whatever, then a doctor could
at least do some preliminary diagnosis. There is greater justification for using these kinds of
technologies in less developed countries than in developed countries where there is an
infrastructure.

         There are also other applications, which I should have mentioned in answer to
Professor Walker’s question.      In hospitals, every time you are hooked up to all these
instruments in the emergency room, with all these wires trailing around, we can start to
interconnect things wirelessly. Those kinds of applications, regardless of whether it is the US
or India or wherever, mean that there is a great deal of potential. For traffic, the answer
would probably have to lie in mass transit.



                  Professor Ralph Benjamin (University of Bristol and UCL): Early on in
your talk, you pointed out that, in a cellular network, frequency reuse with minimum
interference benefits from a rapid increase of attenuation with range. The scope for doing
this is rather limited but one can do a little by choice of frequency. Do you have any views on
an appropriate combination of frequency reuse and time reuse, in order to minimise
interference?




               PR Kumar: At high levels, there is not much difference between the two, or
even CDMA. These are all ways of orthogonalising your channel, that is, breaking up your
overall channel into pieces – whether you package them in blocks of frequency, blocks of
time or blocks of codes or whatever. At a high level you are just partitioning your resources
and so there is not that much of a difference. Of course, there may be differences in other
kinds of ways, for example perhaps CDMA may allow softer entry into and exit from the
system and so on. At a high level, however, there is not much of a difference between the
two, fundamentally.



               Mike Walker: Your traffic control is fascinating. Do you think we will ever
reach the stage where people will really trust systems that will enable them to zoom across
crossroads, seemingly without paying any attention whatsoever?

               PR Kumar: Very often, what we are familiar with is very comfortable, while
we think that what we are not familiar with is just impossible. We seem to be on that edge all
the time.    However, the technology is already coming where you have given up the
longitudinal motion of your car with cruise control – so you already do such things. I guess
that lateral motion is the next step.

        Human beings will also evolve and we will have to adapt to these new technologies.
Of course, there are all kinds of challenges and not just us adapting – there are also legal
challenges and so on. I do not want to minimise the challenges that exist, but there is the
potential.

               Mike Walker: They make superb games anyway. Your students must love
them.



               Professor Fu-Chung Zheng (University of Reading): I would be interested
in the clock synchronisation progress that you mentioned. Some of us know this
phenomenon, which happens in the forest in some Far Eastern countries, which is that
hundreds upon thousands of fireflies will flash together during the night,
and they have a really, really intricate synchronisation mechanism between them while we,
as human beings, have to use other tricks as you have just mentioned – averaging and so
on. How are we doing compared with those fireflies? The real question is about the accuracy
that we can achieve now – is it milliseconds or nanoseconds?

               PR Kumar: The last question is always easier. The best we can achieve is 6
microseconds. There are many things that biology does which we cannot begin to get close



to, and vision is a good example of that.        Even the way that bats can spatially locate
something that is less than the resolution of the neural spike – I do not know. We still have
much to learn from biology.



                Dr Graham Woodward (Toshiba): [Without microphone] This relates back
to the shared bridge network. So much of our feedback system is motivated by underlying
information theory and you made an interesting observation on capacity by taking a Maxwell
view rather than a virtual view of the network. I have seen the result of experiments based
on unicast users collaborations. Network coding shows that you can get enormous capacity
by collaborative coding across the network, especially when there is unicasting and
broadcast. Is there a network coding theory emerging for Maxwell’s view of the network, and
what does that tell us about networks taking a Maxwell view rather than a virtual view?

                PR Kumar:      That is a good question.        As far as unicast systems are
concerned, the information theory allows for network coding or anything else. It is an
absolute theory, but it is only up to these constants, as I mentioned. On the other hand, there
are simple examples where, when you are relaying packets in opposite directions, then you
can use network coding to give a certain factor improvement and so on. So network coding
definitely buys you constant factor improvements.
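
[A minimal Python sketch of the opposite-direction relaying example: instead of forwarding the two packets separately, the relay broadcasts their XOR once, and each end recovers the other's packet from what it already knows; the packet contents are made up:]

    # Two-way relay network coding: A and B each send a packet to the relay;
    # the relay broadcasts a single XOR-combined packet, saving one transmission.
    def xor_bytes(p, q):
        return bytes(a ^ b for a, b in zip(p, q))

    packet_from_a = b"hello-from-A"
    packet_from_b = b"hello-from-B"

    # The relay combines both packets and broadcasts once.
    coded = xor_bytes(packet_from_a, packet_from_b)

    # Each end node XORs the broadcast with its own packet to recover the other's.
    recovered_at_a = xor_bytes(coded, packet_from_a)   # == packet_from_b
    recovered_at_b = xor_bytes(coded, packet_from_b)   # == packet_from_a
    assert recovered_at_a == packet_from_b and recovered_at_b == packet_from_a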

        The one thing I did not touch upon is multicast, where network coding has a great
deal of potential.



                Ralph Benjamin:      Concerning the intelligent crossroad junction, to operate
intelligently, clearly you have to monitor the traffic approaching from all sides, and have
distinct lanes for turning and keeping straight on, or other indication of that. Subject to that,
you could give each block of traffic a target velocity with tolerances, which normally allows
continuous flow and, in a limited addition, these target velocities can go down to .… In the
present system, the lower limit of intelligence can no longer improve things.

                PR Kumar: Our system works on something like that, with the added feature
that there is a proof of safety. The proof of safety is impervious to whatever the car in front of
you does, subject to the limitations of Newtonian mechanics. It is that kind of system.



                Mike Walker: If there are no further questions, I suggest that we move into
the Marble Room for drinks.




       I would like to thank Professor Kumar for his excellent presentation and the very interesting
discussion that followed. [Applause]



                                          -------



