INCAPS
An Intelligent Camera-Projection System Framework



Roberto Valenti
Jurjen Pieters

Overview

• The State of the Art
• What are we going to do?
• The Framework
• First Year(s) Project Plan
• Examples
• References
• Equipment Cost Estimation
State of The Art
The Situation

[Diagram: the current landscape is fragmented — separate companies, each with their own hardware, algorithms, databases, and software.]
What are we going to do?

• The scene needs a framework
  – An easier way to implement applications
  – Interoperability
  – New possibilities (more on this later)

The framework will extract information from "the world" and give users the opportunity to select what is relevant.
The Framework

[Architecture diagram: input devices (camera) feed an input-analysis stage; INCAPS maintains an internal world representation, linked to a database and to modules used by users or companies; a picture-processing stage drives the output devices (projector).]
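The data flow in the diagram could be sketched roughly as follows. This is purely an illustrative assumption: every class and method name here is hypothetical, not part of INCAPS itself.

```python
# Hypothetical sketch of the INCAPS pipeline: input devices feed an
# analysis stage that updates an internal world representation, which
# picture-processing modules consume before results reach the output
# device (e.g. a projector). All names are illustrative.

class WorldRepresentation:
    """Internal model of the observed scene."""
    def __init__(self):
        self.objects = {}          # object id -> attribute dict

    def update(self, observations):
        for obj_id, attrs in observations.items():
            self.objects.setdefault(obj_id, {}).update(attrs)


class IncapsPipeline:
    def __init__(self, analyzers, processors):
        self.world = WorldRepresentation()
        self.analyzers = analyzers      # input-analysis stages
        self.processors = processors    # picture-processing modules

    def step(self, frame):
        """One real-time iteration: analyze input, update the world,
        and produce data for the output device."""
        for analyze in self.analyzers:
            self.world.update(analyze(frame))
        output = {}
        for process in self.processors:
            output.update(process(self.world))
        return output


# Toy usage: one analyzer that "detects" a pedestrian in the frame.
pipeline = IncapsPipeline(
    analyzers=[lambda frame: {"ped-1": {"kind": "pedestrian"}}],
    processors=[lambda world: {"highlight": list(world.objects)}],
)
print(pipeline.step(frame=None))   # {'highlight': ['ped-1']}
```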
The World as a Source

• We have to represent the world internally to understand it better
• Single components are not enough
• All the input devices are used to sample different aspects of the world (there is a lot of information out there!)
• REAL TIME!
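Sampling the world "on different aspects" amounts to fusing per-device observations into one map. A minimal sketch, assuming a dictionary-based world map and invented device names:

```python
# Illustrative sketch (all names assumed): several input devices each
# sample a different aspect of the same scene, and their readings are
# fused into a single internal world map keyed by object id.

def fuse(samples):
    """Merge per-device observations into one world map."""
    world = {}
    for device, observations in samples.items():
        for obj_id, attrs in observations.items():
            entry = world.setdefault(obj_id, {})
            entry.update(attrs)        # later devices refine earlier ones
    return world

samples = {
    "stereo_camera": {"car-1": {"position": (12.0, 3.5), "size": "large"}},
    "radar":         {"car-1": {"velocity": 14.2}},
    "ir_viewer":     {"ped-1": {"heat_signature": True}},
}
print(fuse(samples))
# {'car-1': {'position': (12.0, 3.5), 'size': 'large', 'velocity': 14.2},
#  'ped-1': {'heat_signature': True}}
```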
The Project Planning

• First year: research and testing of algorithms on multiple input devices; setup and tuning.
• Second year: an efficient internal world representation.
• Third year: addition of multiple input/output devices; creation of APIs.
Reality 2 Digital
3D Information Extraction

• We have an internal world map, so we can specify rules (e.g. pedestrians)
• Companies/users can decide what is important (tumors, broken bones, roads, traffic signs, components)
• Possibility to apply further algorithms to subsets of the world data
• Depending on the application, we can recognize and label objects, matching them against databases and ad-hoc algorithms
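The idea of letting each application declare what is relevant could look like this sketch, where rules are plain predicates over labeled world objects; the object format and rule shape are assumptions, not part of INCAPS:

```python
# Hypothetical sketch: companies/users declare relevance as simple
# rules, and further algorithms then run only on the matching subset
# of the world data.

def select(world, rule):
    """Return the subset of world objects matching a relevance rule."""
    return {oid: obj for oid, obj in world.items() if rule(obj)}

world = {
    "obj-1": {"label": "pedestrian", "distance": 8.0},
    "obj-2": {"label": "road_sign", "distance": 30.0},
    "obj-3": {"label": "pedestrian", "distance": 55.0},
}

# A traffic application might only care about nearby pedestrians;
# a medical one would use entirely different rules (e.g. tumors).
nearby_pedestrians = select(
    world, lambda o: o["label"] == "pedestrian" and o["distance"] < 20.0
)
print(sorted(nearby_pedestrians))   # ['obj-1']
```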
Digital 2 Reality

• The extracted information is displayed to the user in the most useful way, depending on the application.
Digital 2 Reality: Examples

[Images: night vision; pedestrian detection]
Equipment Cost Estimation

A very powerful computer, a car, and:

• Input devices:
  – Stereo camera
  – Radars
  – IR night-viewer
  – Database system
  – Wireless communication
  – GPS
• Output devices (one or two of):
  – Projector
  – VR glasses
  – Translucent screen
  – HUD

Total: ~30.000 €
Questions?
				