

GroundCam: A Tracking Modality for Mobile Mixed Reality
Stephen DiVerdi, Tobias Höllerer
Four Eyes Lab
Department of Computer Science
University of California, Santa Barbara
http://ilab.cs.ucsb.edu/
     Anywhere Augmentation
• High quality MR is expensive to set up
  – calibration, scene modeling, instrumentation
• Significant barrier to entry
• Goal is to reduce startup cost
  – online algorithms, ubiquitous data sources,
    cheap hardware




                 Overview
• GroundCam tracking modality
  – uses a camera like an optical mouse
  – similar to inertial and pedometer sensors
  – high resolution, high frequency
  – indoors and outdoors
• Hybrid GroundCam / GPS tracker

                 Outline
•   Motivation
•   Implementation
•   Error Analysis
•   Slip Compensation
•   Hybrid Tracking
•   Conclusions


        Mobile Mixed Reality
• e.g. Outdoor architectural visualization
• Tracking requirements
  – high-resolution
  – high-frequency
  – outdoors
  – wide-area



            Available Technologies

technology            range (m)  setup (hr)  resolution (mm)  time (s)  environ
magnetic                   1          1              1            ∞      in/out
ultrasound                10          1             10            ∞      in
inertial                   1          0              1           10      in/out
pedometer               1000          0            100         1000      in/out
optical, beacons          10          1              1            ∞      in
optical, passive          10         10             10            ∞      in
optical, markerless       10          0             10            ∞      in/out
optical, hybrid           10         10              1            ∞      in
GPS                        ∞          0           1000            ∞      out
beacons                  100         10           1000            ∞      in/out
WiFi / RF                100         10           1000            ∞      in/out
           The GroundCam
• Vision-based, 2DOF person tracking
• Inspired by desktop optical mouse
  technology, robot odometry
• High resolution, high frequency, indoor /
  outdoor, untethered, <1min setup




                 Outline
•   Motivation
•   Implementation
•   Error Analysis
•   Slip Compensation
•   Hybrid Tracking
•   Conclusions


                Hardware
•   1394 Camera
•   InertiaCube 2
•   GPS Receiver




                   Algorithm
• per-frame:
    •   undistort image
    •   find new features
    •   compute optical flow
    •   find inlier features
    •   compute velocity in camera coords
    •   get orientation
    •   compute velocity in world coords
    •   integrate into position
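Taken together, these steps form a per-frame dead-reckoning loop. Below is a minimal runnable sketch of that loop with OpenCV; the intrinsics, distortion coefficients, and fixed yaw are placeholder assumptions (the real yaw comes from the InertiaCube2), and RANSAC inlier rejection is omitted here and sketched on the Inlier Detection slide.

import cv2
import numpy as np

K = np.array([[800.0, 0.0, 320.0],         # placeholder intrinsics
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
dist = np.zeros(5)                         # placeholder distortion coefficients
scale = 0.00037                            # m/pixel (see Motion Computation)
fps = 20.0
yaw = 0.0                                  # placeholder; read from InertiaCube2
pos = np.zeros(2)

cap = cv2.VideoCapture(0)
ok, prev = cap.read()
prev = cv2.cvtColor(cv2.undistort(prev, K, dist), cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(cv2.undistort(frame, K, dist), cv2.COLOR_BGR2GRAY)
    pts = cv2.goodFeaturesToTrack(prev, 100, 0.01, 8)   # find new features
    if pts is not None:
        new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev, gray, pts, None)
        good = status.ravel() == 1
        flow = (new_pts[good] - pts[good]).reshape(-1, 2)
        if len(flow):
            # ground flow in the image is opposite to camera motion
            v_cam = -flow.mean(axis=0) * scale * fps    # camera-frame velocity
            c, s = np.cos(yaw), np.sin(yaw)             # rotate into world frame
            v_world = np.array([c * v_cam[0] - s * v_cam[1],
                                s * v_cam[0] + c * v_cam[1]])
            pos += v_world / fps                        # integrate into position
    prev = gray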

             Undistortion
• Camera’s intrinsic parameters calibrated
  once offline
  – Zhang’s algorithm, OpenCV
• Distortion coefficients are used to unwarp each frame
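A hedged sketch of this step with OpenCV; the intrinsic matrix, distortion coefficients, and image path below are placeholders standing in for the values the offline calibration would produce.

import cv2
import numpy as np

# Placeholder values; real ones come from offline calibration
# (e.g. cv2.calibrateCamera on chessboard views, per Zhang's method)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
dist = np.array([-0.3, 0.1, 0.0, 0.0, 0.0])

frame = cv2.imread("frame.png")            # hypothetical input frame
undistorted = cv2.undistort(frame, K, dist)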




            Feature Tracking
•   Shi and Tomasi corner features
•   Lucas and Kanade pyramidal optical flow
•   Using OpenCV
•   Maintains ~100 features
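A sketch of these two OpenCV calls; the window size, pyramid depth, corner-quality parameters, and image paths are illustrative assumptions, not the authors' settings.

import cv2

prev_gray = cv2.imread("prev.png", cv2.IMREAD_GRAYSCALE)   # placeholder frames
gray = cv2.imread("curr.png", cv2.IMREAD_GRAYSCALE)

# Shi and Tomasi corners, topped up to ~100 features per frame
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
                              qualityLevel=0.01, minDistance=8)

# Lucas and Kanade pyramidal optical flow from the previous frame to this one
new_pts, status, err = cv2.calcOpticalFlowPyrLK(
    prev_gray, gray, pts, None, winSize=(21, 21), maxLevel=3)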




            Inlier Detection
• Uses RANSAC
• Each candidate motion estimate comes from a
  single feature
• Translation distance and direction are
  thresholded separately against the candidate
• Inliers are averaged for the final value
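A minimal NumPy sketch of this one-feature RANSAC; the iteration count and both thresholds are assumed values for illustration.

import numpy as np

def ransac_flow(flow, iters=50, dist_tol=2.0, ang_tol=0.2):
    """flow: (N, 2) per-feature pixel displacements, N >= 1.
    Returns the mean of the largest inlier set found."""
    rng = np.random.default_rng()
    mags = np.linalg.norm(flow, axis=1)
    angs = np.arctan2(flow[:, 1], flow[:, 0])
    best = None
    for _ in range(iters):
        h = rng.integers(len(flow))                  # one-feature hypothesis
        # match translation distance and direction thresholds separately
        ok = (np.abs(mags - mags[h]) < dist_tol) & \
             (np.abs(np.angle(np.exp(1j * (angs - angs[h])))) < ang_tol)
        if best is None or ok.sum() > best.sum():
            best = ok
    return flow[best].mean(axis=0)                   # average inliers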

        Motion Computation
• Convert pixel motion to camera motion

  Scale factor:  s = (2D / P) tan(F / 2)

  F = field of view, D = distance to ground,
  P = image resolution (pixels) along the same axis

  F = 12°, D = 1.1m, P = 640 (640x480)  =>  s ≈ 0.37mm/pixel
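A quick numeric check of that example:

import math

F = math.radians(12.0)             # field of view
D = 1.1                            # camera-to-ground distance (m)
P = 640                            # pixels along that axis
s = (2 * D / P) * math.tan(F / 2)
print(f"{s * 1000:.2f} mm/pixel")  # ~0.36 mm/pixel, in line with the slide's 0.37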

           World Transform
• Use InertiaCube2 for absolute orientation
• Calibrated at startup, ~30sec
  – acquire readings at world’s cardinal directions
  – linearly interpolate between them
• Orients the GroundCam velocity vector
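A minimal sketch of this step, assuming the tracker's yaw is already expressed in the world frame; the velocity and angle below are example values.

import numpy as np

def to_world(v_cam, yaw):
    """Rotate a 2D camera-frame velocity into world coordinates."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([c * v_cam[0] - s * v_cam[1],
                     s * v_cam[0] + c * v_cam[1]])

v_world = to_world(np.array([0.0, 1.3]), np.radians(45.0))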




Demo




                 Outline
•   Motivation
•   Implementation
•   Error Analysis
•   Slip Compensation
•   Hybrid Tracking
•   Conclusions


         Position Integration
• Drifts over time
• Favorable compared to inertial sensor
  – single integration vs. double
• Compared to pedometer
  – fewer assumptions about motion




        Motion Computation
• Assumes constant distance
  – not actually correct
  – 12° FOV => up to 0.5% error (ground at the frustum
    edge is 1/cos(6°) ≈ 1.006 times farther than at the
    image center)



  [Diagram: ground distance varies between D and D + ε]



• The walking stride also varies the camera-to-ground distance
        Motion Computation
• Max speed limited by FOV
  – frame covers 0.18m x 0.24m of ground (portrait orientation)
  – features must stay in view across frames, so at 20fps
    the limit is ~2.3m/s (average walking speed is 1.3m/s)
  – also limited by motion blur and ground texture




Ground Texture




        Tracking Distractions
• User’s legs, feet
  – proper mounting and FOV
• Height differences
  – RANSAC handles debris
  – systematic changes unaddressed
• Illumination changes
  – recovers automatically
        Camera Parameters
• Wide FOV
  – faster motion okay
  – needs larger features
  – more distractions
• Narrow FOV
  – fewer distractions
  – distractions more damaging

          Camera Mounting
• Perpendicular to ground
  – minimizes frustum volume
  – simplifies calculations
• May become misaligned
  – 5° off => up to 2% error




             Error Summary
• Many small sources of error
  – some systematic, some temporary
• Effects overall negligible
• Most significant error from RANSAC
  failure
  – insufficient ground texture



                 Outline
•   Motivation
•   Implementation
•   Error Analysis
•   Slip Compensation
•   Hybrid Tracking
•   Conclusions


          Slip Compensation
• RANSAC failure similar to wheel slipping
  in robot odometry
  – the result is a shortened path
• Compensate by scaling good estimates up
  to account for slips
                s = (1 − r)^(−1)

     slip rate r = 0.8  =>  scale s = 5.0
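As a quick sketch in plain arithmetic, matching the slide's example:

def slip_scale(r):
    """Scale factor for good estimates when a fraction r of frames slip."""
    return 1.0 / (1.0 - r)

assert abs(slip_scale(0.8) - 5.0) < 1e-9   # r = 0.8  =>  s = 5.0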

Slip Compensation




                 Outline
•   Motivation
•   Implementation
•   Error Analysis
•   Slip Compensation
•   Hybrid Tracking
•   Conclusions


           Hybrid Tracking
• Need periodic correction
• Outdoors, GPS
  – low frequency, medium resolution
  – ubiquitously available
• Loosely coupled with a complementary
  Kalman filter



       Complementary Filter
• Filters error between two sensors
• Low CPU cost for fast updates
  [Block diagram: the error between the two sensor signals is
   filtered and fed back into the position estimate]
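A minimal sketch of such a loosely coupled filter; this is a plain first-order complementary blend with an assumed gain, standing in for the full complementary Kalman formulation.

import numpy as np

class ComplementaryFilter:
    def __init__(self, gain=0.05):
        self.pos = np.zeros(2)
        self.gain = gain                    # how strongly GPS pulls the estimate

    def predict(self, v_groundcam, dt):
        self.pos += v_groundcam * dt        # dead-reckon at camera frame rate

    def correct(self, gps_pos):
        # filter the error between the two sensors; cheap enough for fast updates
        self.pos += self.gain * (gps_pos - self.pos)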




Demo




Example Output




Example Output




         Alternate Couplings
• GPS good outdoors
• Need something else indoors
  – indoor AR, wide-area VR
• Beacon systems are easy to set up
  – IR, RFID beacons
  – sparse fiducial markers



Simulation




                 Outline
•   Motivation
•   Implementation
•   Error Analysis
•   Slip Compensation
•   Hybrid Tracking
•   Conclusions


             Conclusions
• Cheap, easy to set up, and useful for many
  mixed reality application scenarios
• Performs favorably compared to
  alternatives
• Easily coupled with other sensors for more
  robust, wide-area tracking
• An important technology in the Anywhere
  Augmentation toolbox
                Future Work
•   Remove orientation tracker dependency
•   Improve robustness of feature tracking
•   Integrate the GroundCam into more
    wearable MR systems
    – source code will be available on the lab website
                http://ilab.cs.ucsb.edu/



          Acknowledgements
•   This research was supported in part by
    NSF grant #IIS-0635492, NSF IGERT
    grant #DGE-0221713 in Interactive Digital
    Multimedia, and a research contract with
    the Korea Institute of Science and
    Technology (KIST) through the Tangible
    Space Initiative project.


