Posted: 4/27/2013, Public Domain
Distributed Localization using Noisy Distance and Angle Information
Jie Gao
Joint work with Amitabh Basu*, Joseph Mitchell, Girishkumar Sabhnani* @ Stony Brook
To appear in ACM MobiHoc 2006

Localization in sensor networks
• Given local measurements
– Connectivity
– Distance measurements
– Angle measurements
• Find
– Relative positions
– Absolute positions

Localization in sensor networks
• Location info is important for
– Integrity of sensor readings
– Many basic network functions
• Topology control
• Geographical routing
• Clustering and self-organization

Localization problem
• Extensively studied.
• Anchor-based methods
– Anchors know their positions, e.g., via GPS.
– Triangulation-type methods, e.g., [Savvides et al.]
• Anchor-free methods
– Local measurements → global layout.
– We use this approach.

Anchor-free localization
• Distance information only
– Global optimization
• MDS [Shang 03], SDP [Biswas & Ye 04]
– Localized, distributed algorithms
• Mass-spring optimization, robust quadrilateral [Moore 04], etc.
• Graph rigidity!

Our approach
• Distance + angle information.
• Measurements are noisy.
• Assume a global north.
• Upper/lower bounds on the distance and direction of neighbors.
• Goal: find an embedding that satisfies all the constraints.

Our results
• Finding a feasible solution with noisy distance + angle is NP-hard.
• A distributed, iterative algorithm for a relaxation.

Hardness results
• Accurate distance + angle: trivial.
• Infinite noise, non-neighbors > 1 = unit disk graph embedding: NP-hard [Breu & Kirkpatrick].
• Accurate angle, infinite noise in distance, non-neighbors > 1: NP-hard [Bruck 05].
• Accurate distance, infinite noise in angle, non-neighbors > 1: NP-hard [Aspnes et al. 04].

This paper
1. ε noise in distance, δ noise in angle, for arbitrarily small ε, δ: finding a feasible solution is NP-hard.
2. Accurate distance, relative angle, non-neighbors > 1: NP-hard.
• Reduction from 3SAT.
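The constraint model above (distance within an interval, bearing relative to a global north within an angular interval) can be made concrete with a small verifier. This is an illustrative sketch, not code from the paper; the function name, the constraint tuple format, and the choice of the +x axis as "north" are assumptions.

```python
import math

def satisfies_constraints(pos, constraints):
    """Check whether a candidate embedding satisfies every noisy
    distance + angle constraint.  pos maps sensor id -> (x, y).
    Each constraint (i, j, d_lo, d_hi, a_lo, a_hi) requires:
      - the distance from i to j lies in [d_lo, d_hi], and
      - the bearing of j as seen from i (radians, measured from the
        global north, here taken as the +x axis) lies in [a_lo, a_hi].
    """
    for i, j, d_lo, d_hi, a_lo, a_hi in constraints:
        (xi, yi), (xj, yj) = pos[i], pos[j]
        d = math.hypot(xj - xi, yj - yi)
        if not (d_lo <= d <= d_hi):
            return False
        bearing = math.atan2(yj - yi, xj - xi)
        if not (a_lo <= bearing <= a_hi):
            return False
    return True
```

Note that the feasible set per edge is an annulus sector (the "frustum"), which is non-convex; this check only verifies a given embedding, it does not search for one, which is exactly where the hardness results bite. (Angle intervals here are assumed not to wrap around ±π.)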
Solve a relaxation
• Use a convex approximation to the non-convex frustum, e.g., a trapezoid.
• All the constraints are linear.
• Use linear programming to solve for an embedding.
• The solution is not unique; compute all of them.

Weak deployment regions
• We solve for Regions of Deployment.
• Weak deployment
– All feasible solutions. Upper bound.
– Fix a sensor; there is a feasible solution for the other sensors.

Strong deployment regions
• We solve for Regions of Deployment.
• Strong deployment
– Inherent uncertainty. Lower bound.
– Pick any point within each region independently → a feasible solution.

Linear programming
• We can also solve weak and strong deployment by LP.
• Let's look at weak deployment first.

Weak deployment and LP
• LP for feasibility of embedding.
• n sensors, m edges.
• Variables: (xi, yi) for each sensor i.
• # variables: 2n; # constraints: 8m.
• A valid embedding is a point in R^2n.
• The feasible polytope P in R^2n: the collection of all feasible solutions.
• Weak deployment region for sensor i = projection of P onto the (xi, yi) plane.

Theory of convex polytopes
• The feasible polytope P has 8m faces.
• In general, the complexity of P (# vertices), and of its projection, can be exponential in 8m.

Solve for weak deployment
Our problem has special structure:
• The weak deployment region has O(m) complexity in the worst case.
• We can solve it in polynomial time by linear programming.
• There is a distributed algorithm that finds the same solution as the global LP.

What next?
• A distributed, iterative algorithm for the weak deployment problem.
• Show why the complexity of the weak deployment region is O(m).
• Simulation results.
• Strong deployment.

Forward constraint propagation
• Each node keeps a current feasible region Ri.
• Region Ri shrinks region Rj through the edge constraint Fij:
• Rj ← Rj ∩ (Ri ⊕ Fij).
• Minkowski sum: X ⊕ Y = {p + q | p ∈ X, q ∈ Y}.

Backward constraint propagation
• When Rj shrinks, Ri can also shrink:
• Ri ← Ri ∩ (Rj ⊕ (−Fij)).

Iterative algorithm
• Pin down one node at the origin.
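The forward and backward propagation rules use only Minkowski sums and intersections of convex regions. Under the rectangle relaxation of the constraints, both operations reduce to a few lines of interval arithmetic. A minimal sketch, assuming an axis-aligned box encoded as (xlo, xhi, ylo, yhi); the helper names are illustrative, not from the paper:

```python
def mink_sum(a, b):
    """Minkowski sum of two axis-aligned boxes (xlo, xhi, ylo, yhi)."""
    return (a[0] + b[0], a[1] + b[1], a[2] + b[2], a[3] + b[3])

def negate(f):
    """Reflect a box through the origin: -F = {-p : p in F}."""
    return (-f[1], -f[0], -f[3], -f[2])

def intersect(a, b):
    """Intersection of two boxes (empty when xlo > xhi or ylo > yhi)."""
    return (max(a[0], b[0]), min(a[1], b[1]),
            max(a[2], b[2]), min(a[3], b[3]))

def forward(Ri, Rj, Fij):
    """Forward propagation: Rj <- Rj ∩ (Ri ⊕ Fij)."""
    return intersect(Rj, mink_sum(Ri, Fij))

def backward(Ri, Rj, Fij):
    """Backward propagation: Ri <- Ri ∩ (Rj ⊕ (−Fij))."""
    return intersect(Ri, mink_sum(Rj, negate(Fij)))
```

In the general (trapezoid) case the same identities hold, but the regions are convex polygons whose boundary slopes are drawn from the original 8m constraint slopes, which is the source of the O(m) complexity bound.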
• Initialize all other regions as R^2.
• Until all regions stabilize:
– For each sensor, compute new regions from all neighbors' regions.
• Both forward & backward propagation.
– Shrink its current region to the common intersection.

Iterative algorithm: correctness
• The iterative algorithm computes the weak deployment regions.
• Proof sketch:
– Regions always shrink.
– It converges to the weak deployment regions when shrinking stops.
– The algorithm stops after a finite number of steps.

Convergence
• Prove by contradiction. Assume a point p ∉ Ri* for sensor i.
• For every sensor j, propagate the constraints from i to j along all possible paths.
• For each sensor j, take the common intersection of these propagated regions, say Pj.

Convergence
• Recall p ∉ Ri*. Thus either:
1. Some region Pj is empty, or
2. The pinned node k (the origin) lies outside Pk.
• Case 1 is not possible:
– The shape of Pj doesn't depend on p.
– Starting from a point in Ri*, the LP would be infeasible.

Convergence
• Recall p ∉ Ri*. Thus either:
1. Some region Pj is empty, or
2. The pinned node k (the origin) lies outside Pk.
• If case 2 happens:
– Reverse the paths from k to i.
– The point p will be eliminated.
– The algorithm hasn't converged.

Why are the regions O(m)?
• All the operations are Minkowski sums and intersections.
• Minkowski sum X ⊕ Y: its boundary comes from the boundaries of X and Y.

Why are the regions O(m)?
• All the operations are Minkowski sums and intersections.
• Slopes of the region boundary come from the original constraints.
• There are only 8m different slopes.
• If we use rectangle constraints, then all the deployment regions are rectangles.

Convergence rate
• Nodes randomly deployed.
• Communication graph: unit disk graph.

Robustness to link variation
• Links switch on ↔ off with probability p ∈ [0, 1].
• The deployment regions are stable.

Robustness to link variation
• Links switch on ↔ off with probability p ∈ [0, 1].
• Outliers are due to network disconnection: when p is small, it is slow to get re-connected.

Comparison to SDP [Biswas & Ye]
• SDP only uses noisy distance measurements.
• We use angle range π/4.
• Less dependency on # anchors.

Comparison to SDP [Biswas & Ye]
• SDP only uses noisy distance measurements.
• We choose angle range π/4.
• Two metrics:
– Center
– Furthest point.
• WD: weak deployment; SD: strong deployment.

Strong deployment
• Strong deployment
– Inherent uncertainty. Lower bound.
– Pick any point within each region independently → a feasible solution.

Strong deployment
• More subtle!
• One can shrink the region for one node to get a larger region for the others.
• We propose to find the same-shaped region for every node, e.g., a square, as large as possible.
• Formulate as an LP? Infinite # constraints?

Strong deployment
• By convexity, if the constraints are satisfied for every pair of corners of the deployment regions, then they are satisfied for every pair of internal points.
• Formulate an LP with constraints on all pairs of corners.
• Maximize the size of the region r.

Strong deployment
• Reduce to weak deployment.
• Distributed algorithm:
– Guess the size r.
– Solve for the center of the strong deployment region.
– Binary search on r.

Conclusion
• Localization with noisy distance + angle measurements.
• Complete the hardness results.
• Upper/lower bound: weak/strong deployment regions.
• Linear programming and distributed implementation.

Future work
• Convergence rate of the distributed iterative algorithm.
• Bound the approximation through the relaxation of the non-convex constraints.
• Generalize the noise model to probabilistic distributions.

Questions?
• Thank you!
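Backup: an end-to-end sketch of the iterative weak-deployment algorithm under the rectangle relaxation. One node is pinned at the origin, all other regions start as (a large box standing in for) R^2, and forward/backward propagation repeats until the regions stabilize. The box encoding, function names, and fixed round limit are illustrative assumptions, not the paper's implementation.

```python
INF = 1e9  # stands in for an unbounded region

def mink_sum(a, b):
    """Minkowski sum of two axis-aligned boxes (xlo, xhi, ylo, yhi)."""
    return (a[0] + b[0], a[1] + b[1], a[2] + b[2], a[3] + b[3])

def negate(f):
    """Reflect a box through the origin: -F = {-p : p in F}."""
    return (-f[1], -f[0], -f[3], -f[2])

def intersect(a, b):
    """Intersection of two boxes."""
    return (max(a[0], b[0]), min(a[1], b[1]),
            max(a[2], b[2]), min(a[3], b[3]))

def weak_deployment(n, edges, max_rounds=100):
    """edges maps (i, j) -> box Fij constraining the offset p_j - p_i.
    Returns one deployment region (a box) per sensor."""
    R = {i: (-INF, INF, -INF, INF) for i in range(n)}
    R[0] = (0.0, 0.0, 0.0, 0.0)          # pin node 0 at the origin
    for _ in range(max_rounds):           # until all regions stabilize
        changed = False
        for (i, j), F in edges.items():
            new_j = intersect(R[j], mink_sum(R[i], F))          # forward
            new_i = intersect(R[i], mink_sum(R[j], negate(F)))  # backward
            if new_j != R[j] or new_i != R[i]:
                R[j], R[i], changed = new_j, new_i, True
        if not changed:
            break
    return R
```

Regions only ever shrink, so on a chain 0-1-2 with each offset constrained to [1, 2] x [-1, 1], node 2's region settles at [2, 4] x [-2, 2], the sum of the two edge boxes.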