
Photorealistic Augmented Reality


Augmented reality (AR) combines the physical world with computer-generated imagery to produce scenes richer than what the senses alone perceive. As mobile phones have gained better multimedia hardware, and with the large amounts of data available on the Internet, increasingly compelling AR applications have been developed, such as Sekai Camera, Layar, GraffitiGeo, and Yelp (which became well known after declining acquisition by Google).

Photorealistic Augmented Reality


Didier Stricker, Fraunhofer Institute for Computer Graphics (IGD), Germany
Javier-Flavio Vigueras-Gomez, Inria-Loria, France
Simon Gibson, University of Manchester, England
Patrick Ledda, University of Bristol, England

    Timetable of the tutorial

10:00 Introduction: the ARIS project (Stricker)
10:15 Camera calibration/scene reconstruction (Stricker)
10:45 Coffee break
11:00 Markerless real-time tracking (Vigueras)
12:00 Lunch
13:30 Illumination reconstruction (Gibson)
14:00 Image generation (Gibson)
15:00 Coffee break
15:30 Assessing image quality (Ledda)
16:30 End
      Photorealistic Augmented Reality

The presented work has been achieved within the European project ARIS:

          Augmented Reality Image Synthesis
          through Illumination Reconstruction and its Integration
          in Interactive and Shared Mobile AR-systems
          for E-(motion)-Commerce Applications

European IST Research Project IST-2000-28707
Consortium

C1   Fraunhofer Institute IGD

P2   Intracom S.A
P3   Inria-Loria
P4   University of Manchester


P5   University of Bristol

P6   Athens Technology Center


P7   House Market S.A. (IKEA)
      Photorealistic Augmented Reality

The goal is to achieve seamless integration of the virtual objects in the real scene:
no difference between a real and an added object.
Solution
   Reconstruction of the lighting conditions for a given image
   Light simulation with these data
   Consistent synthesis of the new augmented image

   Cannot be done by hand (e.g. in Photoshop)
   Makes it possible to compute highlights and reflections of the
   surroundings on the virtual object
  Light simulation

[Figure: the same scene rendered without light simulation (left) and with light simulation (right)]
Seamless integration of virtual objects in real images

[Figure: addition of a virtual lamp, which projects a virtual shadow on the real wall; the virtual lamp can be switched on]
Seamless integration of virtual objects in real images
    Scientific activities


Geometry reconstruction

Illumination reconstruction


Combined light simulation


Perceptual evaluation
        Illumination reconstruction (University of Manchester)

1. Geometric model built from images of the scene
2. Light probe is located within the model
3. Illumination data is mapped from the light probe onto the model
    Visual perception


Which table is the real one?
    Applications



E(motion)-commerce


  Interactive Web-system

  Mobile collaborative shared system
e-(motion)-commerce
e-(motion)-commerce
Mobile Unit & Collaboration
Camera calibration from a single view
    Basics

Pinhole camera model: a scene point M projects to the image point m.
  R : rotation matrix
  t : translation vector
  K : intrinsic parameter matrix

          | f.s   0   c_x |
    K  =  |  0    f   c_y |
          |  0    0    1  |

    m ~ P M
    m ~ K (R M + t)
    P = K [R | t]
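A minimal sketch of the pinhole model above, in Python with NumPy; the focal length, principal point, and pose values are illustrative, not taken from the slides:

```python
import numpy as np

# Intrinsics: focal length f, aspect/skew factor s, principal point (cx, cy).
f, s, cx, cy = 800.0, 1.0, 320.0, 240.0
K = np.array([[f * s, 0.0, cx],
              [0.0,   f,   cy],
              [0.0,   0.0, 1.0]])

# Extrinsics: rotation R and translation t (identity rotation, camera
# shifted so the scene origin lies in front of it).
R = np.eye(3)
t = np.array([0.0, 0.0, 5.0])

# Projection matrix P = K [R | t]
P = K @ np.hstack([R, t[:, None]])

def project(M):
    """Project a 3D scene point M to pixel coordinates: m ~ K(RM + t)."""
    m = K @ (R @ M + t)
    return m[:2] / m[2]          # dehomogenize

# The scene origin projects onto the principal point in this configuration.
print(project(np.array([0.0, 0.0, 0.0])))
```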
    Goals

To develop a camera calibration solution usable by end-users:
  Determine all the camera parameters in as simple a way as possible
  Do not require 3D knowledge of the scene
  Use a single view, rather than multiple images (higher usability & simplicity)
    Camera calibration with vanishing points

v and w are two vanishing points of lines with orthogonal directions.

For v and w, we have:   v^T K^-T K^-1 w = 0

  1 linear equation in the parameters of:
  K^-T K^-1 = ω   (Image of the Absolute Conic)
    Camera calibration with vanishing points

2 orthogonal vanishing points
      gives the focal length f

3 pair-wise orthogonal vanishing points
      gives f and (x0, y0)
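The two-vanishing-point case can be sketched as follows; this assumes zero skew, unit aspect ratio, and a known principal point c, under which the constraint v^T ω w = 0 reduces to f² = -(v - c)·(w - c):

```python
import numpy as np

def focal_from_vps(v, w, principal_point):
    """Focal length from two vanishing points of orthogonal directions.
    Assumes zero skew, unit aspect ratio, known principal point c; then
    v^T (K^-T K^-1) w = 0 reduces to f^2 = -(v - c) . (w - c)."""
    c = np.asarray(principal_point, dtype=float)
    d = np.dot(np.asarray(v, float) - c, np.asarray(w, float) - c)
    if d >= 0:
        raise ValueError("vanishing points inconsistent with orthogonal directions")
    return np.sqrt(-d)
```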
    Camera calibration with vanishing points

Intersection point "u" of parallel lines in the image

[Figure: image lines converging at the point u]
    Camera calibration with vanishing points

Least-squares linear method
The best-fit intersection point "v" is the point that minimizes the sum
∑_i e_i² of the squared perpendicular distances to the lines.

[Figure: lines l1, l2 and the best-fit intersection point v]
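A sketch of this least-squares fit with NumPy; representing each line as (a, b, c) for a·x + b·y + c = 0 is an assumption of this sketch, not a convention from the slides:

```python
import numpy as np

def best_fit_intersection(lines):
    """Least-squares intersection point of 2D lines a*x + b*y + c = 0.
    After normalizing (a, b) to unit length, the residual a*x + b*y + c
    is the perpendicular distance to the line, so minimizing the sum of
    squared residuals is an ordinary linear least-squares problem."""
    L = np.asarray(lines, dtype=float)
    n = np.linalg.norm(L[:, :2], axis=1, keepdims=True)
    L = L / n                                  # residuals become true distances
    A, c = L[:, :2], L[:, 2]
    v, *_ = np.linalg.lstsq(A, -c, rcond=None)
    return v
```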
      Camera calibration with vanishing points

Optimisation method: Maximum Likelihood Estimate
[Liebowitz-Zisserman-99]

   C(v̂, l̂_1, l̂_2, ..., l̂_n) = ∑_i ( d(l̂_i, a_i)² + d(l̂_i, b_i)² )

where each fitted line l̂_i passes through the estimated vanishing point v̂,
and a_i, b_i are the endpoints of measured line segment i.

Non-linear equations
Levenberg-Marquardt
    Method II: Camera calibration with
    homographies

A linear transformation of a plane in projective space is defined by a
3x3 matrix H called a "homography".

[Figure: two planes related by the homography H]

H is completely defined through four 2D/2D point correspondences.
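A minimal DLT-style sketch of estimating H from four correspondences (NumPy; the function names are illustrative):

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate the 3x3 homography H mapping src[i] -> dst[i].
    Needs at least four 2D/2D correspondences, as stated above: each
    pair (x, y) -> (u, v) gives two linear equations in the entries of H,
    and the solution is the null vector of the stacked system (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1.0, 0.0, 0.0, 0.0, u * x, u * y, u])
        A.append([0.0, 0.0, 0.0, -x, -y, -1.0, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H, p):
    """Apply a homography to a 2D point and dehomogenize."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]
```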
      Method II: Camera calibration with
      panorama images (pure rotation)

A mosaic is constructed with the help of overlapping images.
The camera motion is a pure rotation.

The mapping between two images is characterised by a homography H.
      Camera calibration with
      panorama images (pure rotation)

Calibration for panorama images [Hartley-99]

H_i is the mapping from image i to the reference image "0".

We have the following equation:
     H_i^-T ω H_i^-1 = ω    with:  ω = K^-T K^-1

  6 linear equations in the parameters of ω

3 images are required to solve the system.
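The linear system above can be sketched as follows (NumPy). The det-normalization of each homography and the Cholesky-based recovery of K from ω are standard steps assumed here, not spelled out on the slide:

```python
import numpy as np

def omega_constraints(H):
    """Rows of a linear system in the 6 parameters of the symmetric matrix
    omega = K^-T K^-1, from the pure-rotation constraint H^T omega H = omega
    (equivalent to the H^-T omega H^-1 = omega form above). H must be
    normalized to det(H) = 1."""
    # index the symmetric omega as [w00, w01, w02, w11, w12, w22]
    idx = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (0, 2): 2, (2, 0): 2,
           (1, 1): 3, (1, 2): 4, (2, 1): 4, (2, 2): 5}
    rows = []
    for a in range(3):
        for b in range(a, 3):                # 6 equations per homography
            row = np.zeros(6)
            for i in range(3):
                for j in range(3):
                    row[idx[(i, j)]] += H[i, a] * H[j, b]
            row[idx[(a, b)]] -= 1.0          # ... minus omega itself
            rows.append(row)
    return rows

def calibrate_from_rotations(Hs):
    """Recover K from inter-image homographies of a purely rotating camera."""
    rows = []
    for H in Hs:
        Hn = H / np.cbrt(np.linalg.det(H))   # remove the arbitrary scale
        rows.extend(omega_constraints(Hn))
    _, _, Vt = np.linalg.svd(np.vstack(rows))
    w = Vt[-1]                               # null vector = parameters of omega
    W = np.array([[w[0], w[1], w[2]],
                  [w[1], w[3], w[4]],
                  [w[2], w[4], w[5]]])
    if W[0, 0] < 0:                          # fix the overall sign
        W = -W
    L = np.linalg.cholesky(W)                # omega = (K^-T)(K^-1), K^-T lower
    K = np.linalg.inv(L.T)
    return K / K[2, 2]
```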
             Camera calibration


Image        Ground Truth      Panorama         Vanishing points
1 (Venice)   f  = 1128.69      f  = 1124.20     f  = 1189.12
             x0 = 512          x0 = 525.8       x0 = 524.05
             y0 = 384          y0 = 380.57      y0 = 352.09

2 (Bridge)   f  = 1228.79      f  = 1255.40     f  = 1237.10
             x0 = 512          x0 = 491.39      x0 = 498.55
             y0 = 384          y0 = 361.18      y0 = 381.75

3 (Towers)   f  = 1638.40      f  = 1637.23     f  = 1585.30
             x0 = 512          x0 = 516.47      x0 = 501.82
             y0 = 384          y0 = 358.08      y0 = 373.34
    Camera calibration with homographies

If the plane is defined by z = 0, the matrix H^T ω H has the following form:

              | λ  0  × |
    H^T ω H = | 0  λ  × |
              | ×  ×  × |

   2 linear equations containing the parameters of:  K^-T K^-1 = ω
    Camera calibration

1 homography
      gives the focal lengths fx, fy

2 homographies
      give fx, fy, (x0, y0)

3 homographies
      give all parameters of the K matrix
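The two constraints per homography can be sketched in this Zhang-style form, consistent with the λ/0 pattern of H^T ω H shown above (NumPy; names are illustrative, and the Cholesky recovery of K is an assumed standard step):

```python
import numpy as np

def _v(H, i, j):
    """Row expressing h_i^T omega h_j in the 6 symmetric parameters of omega,
    ordered [w00, w01, w02, w11, w12, w22]."""
    hi, hj = H[:, i], H[:, j]
    return np.array([hi[0] * hj[0],
                     hi[0] * hj[1] + hi[1] * hj[0],
                     hi[0] * hj[2] + hi[2] * hj[0],
                     hi[1] * hj[1],
                     hi[1] * hj[2] + hi[2] * hj[1],
                     hi[2] * hj[2]])

def calibrate_from_planes(Hs):
    """K from plane-to-image homographies H ~ K [r1 r2 t].
    Each H yields 2 linear equations: h1^T omega h2 = 0 and
    h1^T omega h1 = h2^T omega h2; 3 homographies determine all
    parameters of omega = K^-T K^-1."""
    A = []
    for H in Hs:
        A.append(_v(H, 0, 1))
        A.append(_v(H, 0, 0) - _v(H, 1, 1))
    _, _, Vt = np.linalg.svd(np.asarray(A))
    w = Vt[-1]
    W = np.array([[w[0], w[1], w[2]],
                  [w[1], w[3], w[4]],
                  [w[2], w[4], w[5]]])
    if W[0, 0] < 0:                      # fix the overall sign
        W = -W
    L = np.linalg.cholesky(W)            # omega = (K^-T)(K^-1), K^-T lower
    K = np.linalg.inv(L.T)
    return K / K[2, 2]
```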
Determination of the position and orientation of the camera
     Camera position and orientation

The following methods exist:

  From the vanishing points
  From known planes in the image and the 3D scene
  From 2D/3D correspondences
     Camera position and orientation
Rotation
   If v is the vanishing point of the x direction (1, 0, 0, 0), then:

       v ≈ P (1, 0, 0, 0)^T ≈ K [R | t] (1, 0, 0, 0)^T ≈ K r1    →    r1 ≈ K^-1 v

   For the y direction u = (0, 1, 0, 0):

       u ≈ P (0, 1, 0, 0)^T ≈ K [R | t] (0, 1, 0, 0)^T ≈ K r2    →    r2 ≈ K^-1 u

   And: r3 = r1 × r2
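The rotation recovery above can be sketched directly (NumPy; this sketch assumes the vanishing points are given as homogeneous image vectors whose signs match the axis directions):

```python
import numpy as np

def rotation_from_vps(K, v_x, v_y):
    """Camera rotation from the vanishing points of the x and y axes,
    given as homogeneous image points: r1 ~ K^-1 v, r2 ~ K^-1 u,
    r3 = r1 x r2, as in the derivation above."""
    Kinv = np.linalg.inv(K)
    r1 = Kinv @ np.asarray(v_x, float)
    r1 /= np.linalg.norm(r1)             # rotation columns are unit vectors
    r2 = Kinv @ np.asarray(v_y, float)
    r2 /= np.linalg.norm(r2)
    return np.column_stack([r1, r2, np.cross(r1, r2)])
```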
    Camera position and orientation

Translation:

 If o is the projection of the 3D point O with coordinates (0, 0, 0, 1):

       o ≈ P (0, 0, 0, 1)^T ≈ K [R | t] (0, 0, 0, 1)^T ≈ K t    →    t ≈ K^-1 o
      Camera position and orientation knowing a 2D plane

For the plane z = 0, the matrix H is defined as follows:

                 H ≅ K [r1   r2   t]

Knowing K, the columns r1, r2 and t can be determined from K^-1 H.

r3 = r1 × r2
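A sketch of pose recovery from a planar homography as described above (NumPy). Fixing the unknown scale by ||r1|| = 1 and choosing the sign so the plane lies in front of the camera are conventions assumed here, not stated on the slide:

```python
import numpy as np

def pose_from_homography(K, H):
    """R and t from a plane-to-image homography H ~ K [r1 r2 t] (plane z = 0).
    Sketch for an exact, noise-free H: the scale is fixed by ||r1|| = 1 and
    the sign chosen so t_z > 0; the last column is r3 = r1 x r2."""
    A = np.linalg.inv(K) @ H
    A = A / np.linalg.norm(A[:, 0])      # normalize: rotation columns are unit
    if A[2, 2] < 0:                      # keep the plane in front of the camera
        A = -A
    r1, r2, t = A[:, 0], A[:, 1], A[:, 2]
    R = np.column_stack([r1, r2, np.cross(r1, r2)])
    return R, t
```

With noisy homographies, the resulting [r1 r2 r3] is only approximately orthonormal and is usually re-projected onto the nearest rotation matrix.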
Example
Thank you!
