

Software Challenges in Wireless Sensor Networks

Jeremy Elson
Microsoft Research
IPSN/SPOTS Tutorial
Wednesday, April 27, 2005
Outrageous(?) Opinion Tutorial
Outrageous opinion survey
– A dozen contributors
– Ideas that deserve credit are theirs, blame for
  dumb ideas is mine
More questions than answers
Basic question: Why is writing software for
sensor networks so hard?
Some examples of ongoing research
Is it hard?
– Cost of new devices is really just the device
– Pressure is on hardware designers – smaller,
  cheaper, faster
– We can network every node that exists
Far more motes exist than we can network
– 100,000 (?) exist
– Biggest networks, ~100 or maybe ~1000
– Why is this?
Can we just blame hardware?
Energy is “non-negotiable”
– Un-tethered sensing is the reason we’re here
– Will always limit radio ranges, link qualities

“Architecture challenges” come somewhere in
between hardware and software
– e.g., how do we structure hierarchies?
– Billions of PCs can’t be wrong!

Even with the right structure, it’s harder
– Why?
Worst of All Worlds
[Figure: software classes plotted on two axes – system uncertainty (single- or few-threaded → multi-threaded → distributed, timing-dependent) and data inputs (“text” → timing-dependent → real-world sensor inputs). cat and MS Word sit at the easy corner; OS kernels/device drivers and TCP in the middle; robotics and distributed robotics near the hard corner; sensor networks occupy the extreme of both axes.]
Big Challenge 1: Visibility, Visibility, Visibility!
Of a dozen sensor network researchers polled, 10 listed “debugging support” in some form as a core challenge
Visibility, visibility, visibility…
Ground truth is hard to capture, even with unlimited-capacity systems – damn you, real world!
– We want to interpret responses to stimuli
– The stimuli, not the sensor outputs, are ground truths
The lab is not the same as the field
– Matt’s office is not the same as a Redwood forest
– Things that worked in the lab don’t work during deployment
– Bugs in MAC layers, positioning errors, non-Gaussian noise
It’s also hard to model
– Models are appearing (e.g., Cerpa, Whitehouse)
– But notice that models are highly environment-specific
Visibility, visibility, visibility…
The differences have substantial effects
– If only I had $100 for every routing algorithm designed with a circular radio model
– I still couldn’t afford to buy a circular radio

Observing reality concurrently with design should be mandatory
– Brings us back to needing visibility
Visibility, visibility, visibility…

Even traditionally easy-to-observe things (e.g., messages, states) become hard to capture, because…
Debugging information is now huge compared to data, instead of the opposite (vs. the Internet)
You can’t store it on mote-sized devices, and shipping it all over the radio is prohibitively expensive
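One coping strategy (my illustration, not something the tutorial prescribes) is to keep only a bounded window of recent debug events on the node and ship it out on demand. A minimal C sketch; the event layout and window depth are assumptions:

```c
#include <stdint.h>
#include <stdio.h>

#define LOG_DEPTH 32   /* keep only the most recent 32 events */

typedef struct {
    uint32_t timestamp;   /* local clock ticks */
    uint8_t  module;      /* which component logged this */
    uint8_t  code;        /* event identifier */
    uint16_t arg;         /* one word of context */
} log_event_t;

static log_event_t log_buf[LOG_DEPTH];
static uint32_t log_next = 0;

/* Record one event; oldest entries are silently overwritten, so
   storage stays bounded no matter how chatty the instrumentation is. */
static void log_event(uint32_t ts, uint8_t module, uint8_t code, uint16_t arg) {
    log_buf[log_next % LOG_DEPTH] = (log_event_t){ ts, module, code, arg };
    log_next++;
}

/* Dump the retained window, e.g. over the radio after an assert fires. */
static void log_dump(void) {
    uint32_t count = log_next < LOG_DEPTH ? log_next : LOG_DEPTH;
    uint32_t start = log_next - count;
    for (uint32_t i = 0; i < count; i++) {
        const log_event_t *e = &log_buf[(start + i) % LOG_DEPTH];
        printf("%lu mod=%u code=%u arg=%u\n", (unsigned long)e->timestamp,
               (unsigned)e->module, (unsigned)e->code, (unsigned)e->arg);
    }
}

int main(void) {
    for (uint32_t t = 0; t < 100; t++)
        log_event(t, 1, 7, (uint16_t)(t * 3u));
    log_dump();   /* prints only the most recent 32 of the 100 events */
    return 0;
}
```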
Simulators, Emulators, Testbeds
TOSSIM (Levis)
– Real-code simulator for motes

Avrora (Titzer, Palsberg)
– Simulates down to microcode level

Motelab (Welsh, et al.)
– Always-on testbed; lowers the massive systems effort required for deployments

EmStar (Girod, Elson, et al.)
– Runtime environments spanning pure simulation to deployed arrays (next slide)
Simulators, Emulators, Testbeds
EmStar’s runtime environments allow high-visibility debugging before jumping into low-visibility deployment:
Pure Simulation → Data Replay → Ceiling Array → Portable Array

Visibility for Design
Even understanding the behavior of a big parallel process is hard
– How do you know what’s going on?

But even if you do…

Designing local rules to cause global behavior is hard (Culler, Liu, Welsh)
– How do you control what’s going on?
Visibility for Management
The need for visibility doesn’t end when the
design is done (Madden, Polastre)

Which sensors have failed?
– For repair purposes
– Because it tells you about the data

Doesn’t obviate the need for statistical
elimination of sensors we think are bad
– “Shut up, you’re confusing everyone!”
Another take: Predictability
Instead of observing what happened, predict what will happen (static analysis)
“Giving Up”
Tenet (Kohler) – motes can only do the most
basic tasks (e.g., thresholding), and route back
to a microserver
– In the low-visibility nodes, complexity is limited to
  reasoning about one node’s data
– “Hard part” happens at microservers
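To make “most basic tasks” concrete, here is a minimal thresholding sketch in C; the sensor and uplink hooks are hypothetical stand-ins, not Tenet’s actual tasking API:

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Stand-ins for the mote's sensor and uplink (hypothetical names;
   a real node would call its ADC and routing layer here). */
static uint16_t adc_read_light(void) { return (uint16_t)(rand() % 1024); }
static void send_to_microserver(uint16_t v) { printf("report: %u\n", v); }

/* Tenet-style division of labor: the mote runs only this trivial local
   rule (one comparison over its own data); any reasoning that spans
   nodes happens at the microserver. */
#define LIGHT_THRESHOLD 700

static void on_sample_timer(void) {
    uint16_t sample = adc_read_light();
    if (sample > LIGHT_THRESHOLD)
        send_to_microserver(sample);
}

int main(void) {
    for (int i = 0; i < 20; i++)   /* simulate 20 timer firings */
        on_sample_timer();
    return 0;
}
```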

Mechanism vs. policy separation for routing
– Motes send link states to microserver
– Microserver computes routes; installs them on motes
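A sketch of what the mote side of that split might look like, in C; the message layouts and function names are invented for illustration, not Tenet’s or SP’s real formats. The mote’s only mechanisms are “report link state” and “install a forwarding entry”; the policy (route computation) lives entirely on the microserver:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical message layouts -- illustrative, not a real wire format. */
typedef struct {          /* mote -> microserver: one link observation */
    uint16_t neighbor_id;
    uint8_t  quality;     /* e.g., recent delivery ratio scaled to 0-255 */
} link_state_t;

typedef struct {          /* microserver -> mote: one computed route */
    uint16_t dest_id;
    uint16_t next_hop;
} route_entry_t;

#define MAX_ROUTES 8
static route_entry_t routing_table[MAX_ROUTES];
static int n_routes = 0;

/* Mechanism only: report what we observe... */
static void report_link_state(const link_state_t *ls) {
    printf("uplink: neighbor %u quality %u\n",
           (unsigned)ls->neighbor_id, (unsigned)ls->quality);
}

/* ...and install whatever the microserver's policy decided. */
static void handle_route_install(const route_entry_t *e) {
    if (n_routes < MAX_ROUTES)
        routing_table[n_routes++] = *e;
}

static uint16_t next_hop_for(uint16_t dest) {
    for (int i = 0; i < n_routes; i++)
        if (routing_table[i].dest_id == dest)
            return routing_table[i].next_hop;
    return 0xFFFF;        /* no route installed */
}

int main(void) {
    link_state_t ls = { .neighbor_id = 7, .quality = 212 };
    report_link_state(&ls);                    /* as if sent over the radio */
    route_entry_t e = { .dest_id = 42, .next_hop = 7 };
    handle_route_install(&e);                  /* as if received from the microserver */
    printf("next hop to 42: %u\n", (unsigned)next_hop_for(42));
    return 0;
}
```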
Big Challenge 2: Higher Level Abstractions

How long can we keep on doing it this way?
Why are abstractions important?
They let you reason about software at a
higher level
– Right now we manually script every packet sent and received, most timers… (see the sketch below)

They (can) let software interoperate better
– Applications can share the underlying building
  blocks; system is smaller, more consistent
– Services like TCP port numbers
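To make the hand-scripting point concrete, here is a caricature of that status quo in C (the radio hook and message layout are hypothetical stand-ins, not any real mote API): the application itself owns sequence numbers, acks, and retransmission timers, which is exactly the layer an abstraction should absorb:

```c
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical radio hook -- a stand-in for a real mote's radio stack. */
static void radio_send(const uint8_t *buf, uint8_t len) {
    (void)buf;
    printf("tx %u bytes\n", (unsigned)len);
}

/* The status quo this slide complains about: the *application* hand-
   manages sequence numbers, acks, and retransmission for every message. */
static uint8_t seq = 0;
static bool    awaiting_ack = false;
static uint8_t retries_left = 0;
static uint8_t pending[3];

static void app_send_reading(uint16_t value) {
    pending[0] = seq++;
    pending[1] = (uint8_t)(value >> 8);
    pending[2] = (uint8_t)(value & 0xFF);
    awaiting_ack = true;
    retries_left = 3;
    radio_send(pending, (uint8_t)sizeof pending);  /* ...and now we babysit a timer */
}

static void on_retry_timer(void) {   /* imagine the OS fires this periodically */
    if (awaiting_ack && retries_left > 0) {
        retries_left--;
        radio_send(pending, (uint8_t)sizeof pending);  /* manual retransmission */
    }
}

static void on_ack_received(uint8_t acked_seq) {
    if (acked_seq == (uint8_t)(seq - 1))
        awaiting_ack = false;        /* manual bookkeeping, per message */
}

int main(void) {
    app_send_reading(512);
    on_retry_timer();       /* no ack yet: resend by hand */
    on_ack_received(0);     /* ack for seq 0 finally arrives */
    return 0;
}
```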
Higher Level Abstractions
Consider the compiler arc:
– First, “compiled code is too slow”
– Second, “But computers are now fast”
– Third, “The compiler does it better anyway”
We’re still at Step 1
– Not with CPU cycles (we use compilers)…
– … but bandwidth, energy, and memory.
Unfortunately there may not be a Step 2
What if there’s no Step 3?
The Internet has TCP, which is well-behaved,
and which many apps use
– Nice model: fixed data len, variable time
Some (minority?) apps don’t fit this model, adapt
at the app layer (e.g. video quality)
Sensor network congestion control (e.g. Woo,
Hull): good first steps but still has a focus on
collision avoidance
Root of the problem: rate-adaptive sensor apps must be the common case; they aren’t!
The common case is no longer the “fixed data size, transport it when you can” model – it’s effectively infinite data, like streaming video
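A sketch of what rate adaptation as the common case could look like at a node, in C; the AIMD rule and the per-packet congestion flag are my assumptions, not any published sensor-net transport. The point is that the app adjusts its sampling period to whatever the network will bear, rather than shipping a fixed payload:

```c
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

/* AIMD-style adaptation of the *sampling rate* itself: the sensor app,
   not just the transport, backs off when the network pushes back. */
static uint32_t sample_period_ms = 1000;   /* current inter-sample gap */
#define MIN_PERIOD_MS   100                /* fastest we ever sample  */
#define MAX_PERIOD_MS 60000                /* slowest we ever sample  */

static void on_send_feedback(bool congested) {
    if (congested)
        sample_period_ms *= 2;             /* multiplicative backoff  */
    else
        sample_period_ms -= 100;           /* additive speed-up       */
    if (sample_period_ms < MIN_PERIOD_MS) sample_period_ms = MIN_PERIOD_MS;
    if (sample_period_ms > MAX_PERIOD_MS) sample_period_ms = MAX_PERIOD_MS;
}

int main(void) {
    /* Simulated congestion signals from five consecutive sends. */
    bool signal[] = { false, false, true, true, false };
    for (int i = 0; i < 5; i++) {
        on_send_feedback(signal[i]);
        printf("period now %u ms\n", (unsigned)sample_period_ms);
    }
    return 0;
}
```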
Some Abstractions
TinyDB (Madden)
– Among the first; programming interface is queries
Abstract regions (Welsh)
– Program collections of nodes at a higher layer than individual packet sends
Reliable Multi-Hop State Sync (Girod)
– Publish and update structs over lossy nets (see the sketch after this list)
This week, we’ve seen State-Machines (Kasten)
and new intermediate languages (Newton)
But the real question: do they work across a
diversity of applications? Only time will tell.
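To give a flavor of “publish and update structs” (the function names below are hypothetical, not Girod’s actual API), a minimal C sketch: the application publishes versioned structs and receivers converge on the newest version, so losing any individual packet only delays convergence rather than corrupting state:

```c
#include <stdint.h>
#include <stdio.h>

/* Application state we want every interested node to converge on. */
typedef struct {
    uint16_t node_id;
    uint16_t battery_mv;
    uint8_t  neighbors;
} node_status_t;

/* Hypothetical publish-side API: keep only the latest version; a
   (not shown) transport keeps retrying until receivers hold it. */
static node_status_t published;
static uint32_t      version = 0;

static void statesync_publish(const node_status_t *s) {
    published = *s;
    version++;            /* receivers pull anything newer than theirs */
}

/* Receiver-side merge: last-writer-wins on version number. */
static void statesync_on_update(uint32_t v, const node_status_t *s,
                                uint32_t *my_v, node_status_t *mine) {
    if (v > *my_v) { *my_v = v; *mine = *s; }
}

int main(void) {
    node_status_t s = { .node_id = 3, .battery_mv = 2950, .neighbors = 4 };
    statesync_publish(&s);

    node_status_t remote_copy = {0};
    uint32_t remote_version = 0;
    statesync_on_update(version, &published, &remote_version, &remote_copy);
    printf("remote now at version %u, node %u\n",
           (unsigned)remote_version, (unsigned)remote_copy.node_id);
    return 0;
}
```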
Sub-Challenge: Re-Usable Software
TinyOS, EmStar, etc. are modular, yet
reuse isn’t as pervasive as it “should be”
One part software engineering, one part
Big Problem (as in congestion control)
An encouraging first step: SP (Sensor
Protocol) by Polastre, Culler, et al.
– Standardized interface to MAC, with some
  basic feedback in both directions
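The slide describes SP only at a high level; the C sketch below is a guess at the shape of such a boundary (types and fields invented for illustration, not SP’s real interface). Protocols hand messages plus hints downward; the MAC hands congestion and link feedback upward:

```c
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

/* A hypothetical SP-flavored boundary: network protocols above, any
   MAC below, with feedback crossing in both directions. */
typedef struct {
    const uint8_t *payload;
    uint8_t        len;
    uint16_t       next_hop;
    bool           urgent;       /* downward hint to the MAC */
} sp_message_t;

typedef struct {
    bool    congested;           /* upward feedback from the MAC */
    uint8_t link_quality;        /* 0-255 estimate for next_hop  */
} sp_feedback_t;

/* Any MAC implements this one function; any protocol calls it. */
typedef sp_feedback_t (*sp_send_fn)(const sp_message_t *msg);

/* A trivial stand-in MAC for demonstration. */
static sp_feedback_t demo_mac_send(const sp_message_t *msg) {
    printf("MAC: %u bytes to %u%s\n", (unsigned)msg->len,
           (unsigned)msg->next_hop, msg->urgent ? " (urgent)" : "");
    return (sp_feedback_t){ .congested = false, .link_quality = 200 };
}

int main(void) {
    sp_send_fn send = demo_mac_send;   /* swap MACs without touching protocols */
    uint8_t data[] = { 1, 2, 3 };
    sp_message_t m = { data, sizeof data, 42, true };
    sp_feedback_t fb = send(&m);
    printf("feedback: congested=%d quality=%u\n",
           fb.congested, (unsigned)fb.link_quality);
    return 0;
}
```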
Meta-Challenge: Applications That
  Do More Than Web Cameras
1999 “Grand Challenges” paper:

    “Data processing must be in-network”

Where are we now?
Many (most?) applications are “bring all
the data back”
– Some notable exceptions, including sniper tracking (Vanderbilt), magneto-car tracking (Berkeley), and self-healing networks (Sensoria)
Self-Healing Networks
Sensoria Corp, under contract from DARPA
Goal: Nodes localize themselves within 1 m, MOVE to fill in gaps
Network completely autonomous at many levels
          Closing the Loop
20 Nodes; 10 MOBILE. Then, network partitions…
Ultimately we want to get to systems that
do amazing things
We just keep building on ideas; we have to build on each other’s systems
Abstractions are needed, so we can build additive systems instead of just more systems
None of this will happen without visibility
And above all…
 What the heck
     is the
David Culler
Henri Dubois-Ferrier
Lew Girod
Richard Guy
Bill Kaiser
Eddie Kohler
Jie Liu
Sam Madden
Andrew Parker
Joe Polastre
Matt Welsh
Alec Woo
Yan Yu
Feng Zhao
Thank you!

Questions? Comments?
