Appeared in Int'l Conf. on Computer Vision, Vol. 1, pp. 2-9, 2001
© 2001 IEEE

                        High Dynamic Range Panoramic Imaging
                              Manoj Aggarwal                   Narendra Ahuja
                                University of Illinois at Urbana-Champaign
                               405 N. Mathews Ave, Urbana, IL 61801, USA

                      Abstract

   Most imaging sensors have a limited dynamic range and hence can satisfactorily respond to only a part of the illumination levels present in a scene. This is particularly disadvantageous for omnidirectional and panoramic cameras since larger fields of view have larger brightness ranges. We propose a simple modification to existing high resolution omnidirectional/panoramic cameras in which the process of increasing the dynamic range is coupled with the process of increasing the field of view. This is achieved by placing a graded transparency (mask) in front of the sensor, which allows every scene point to be imaged under multiple exposure settings as the camera pans, a process anyway required to capture large fields of view at high resolution. The sequence of images is then mosaiced to construct a high resolution, high dynamic range panoramic/omnidirectional image. Our method is robust to alignment errors between the mask and the sensor grid and does not require the mask to be placed on the sensing surface. We have designed a panoramic camera with the proposed modifications and discuss various theoretical and practical issues encountered in obtaining a robust design. We show an example of a high resolution, high dynamic range panoramic image obtained from the camera we designed.

1. Introduction

   In many imaging applications including surveillance, navigation and photography, there is a need to capture high resolution images of a scene over large fields of view. In addition, it is desirable that the large brightness variation present in a real-world scene is captured without many areas being too dark (clipped) or too bright (saturated). The range of brightness levels that can be recorded by a sensor without clipping or saturation is often referred to as the dynamic range. A conventional digital camera, however, is limited in all three aspects, namely, resolution, field of view and dynamic range. The field of view is constrained by the sensor size and focal length of the lens (e.g. 30° × 22° on a 16mm lens with a 2/3" CCD sensor), the resolution is determined by the number of pixels on the sensor (640 × 480 for NTSC cameras), and the sensor and camera electronics determine the dynamic range. Most CCD sensors provide only 8 bits of brightness information, which are usually insufficient to capture the entire brightness variation in real scenes. The three limitations of a conventional imaging system have been examined individually and in pairs in various contexts in the literature. However, the problem of acquiring images with high resolution, large field of view and high dynamic range is still open and is the subject of this paper. We will first review the existing techniques which individually overcome the limitations on field of view and dynamic range, with or without high resolution, and then present the proposed approach.

Large field of view: There are two main classes of large field of view cameras: panoramic and omnidirectional. Omnidirectional cameras are capable of covering a field of view up to 360° × 360°, for instance over a sphere or a hemisphere. Most of the omnidirectional cameras proposed in the literature use fisheye lenses [16, 23, 26] or hyperbolic/parabolic mirrors [4, 8, 18, 24, 25] to image a large field of view onto a single sensor. These cameras are capable of delivering images at video rate; however, the resolution of the images is constrained by that of the sensor. Recently, Coorg et al. [5] and Nayar [19] proposed high resolution omnidirectional cameras. The higher resolution is obtained by sequentially acquiring multiple images covering different parts of the visual field and then stitching these images together, a process known as mosaicing. These designs trade the ability to get video-rate images for higher resolution images, and are better suited for stationary scenes.
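The sequential-capture, mosaicing approach reviewed above can be sketched in code: with the camera rotating about its lens center through known pan angles, each sensor column sees a ray at a known absolute angle, so successive images can be pasted onto a shared cylindrical grid. The following is a minimal sketch under an assumed pinhole model; the function names, the averaging in overlaps, and the parameters (focal length in pixels, pan angles in degrees) are illustrative assumptions, not the design of any specific camera reviewed here.

```python
import numpy as np

def stitch_cylindrical(images, pan_deg, f_pix):
    """Paste grayscale images taken at known pan angles (rotation about
    the lens center) onto a shared cylindrical panorama grid.

    images: list of equal-size 2-D arrays; pan_deg: pan angle of each
    image in degrees; f_pix: focal length in pixels (assumed pinhole model).
    """
    h, w = images[0].shape
    cx = (w - 1) / 2.0
    # Viewing-ray angle of each sensor column under the pinhole model.
    col_ang = np.degrees(np.arctan2(np.arange(w) - cx, f_pix))
    res = col_ang[w // 2 + 1] - col_ang[w // 2]   # degrees per panorama column
    n_cols = int(round(360.0 / res))              # full 360-degree strip
    pano = np.zeros((h, n_cols))
    count = np.zeros(n_cols)
    for img, pan in zip(images, pan_deg):
        ang = (pan + col_ang) % 360.0             # absolute ray angle per column
        idx = np.round(ang / res).astype(int) % n_cols
        pano[:, idx] += img                       # paste columns by ray angle
        count[idx] += 1
    return pano / np.maximum(count, 1)            # average where images overlap

# Example: a single uniform frame at pan 0 fills only the columns its rays hit.
pano = stitch_cylindrical([np.full((2, 11), 5.0)], [0.0], 10.0)
```

With real images, the pan step would be chosen so that consecutive frames overlap, and a subpixel interpolation scheme would replace the nearest-column rounding used here.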

   Panoramic cameras cover up to a 360 degree cylindrical strip of the scene. The existing high resolution panoramic cameras, just as their omnidirectional counterparts, are based on mosaicing a sequence of images. The images can be obtained either in parallel using multiple cameras [17] or sequentially by panning the camera [1, 10, 14, 21, 22]. The panoramic cameras based on multiple sensors can deliver images at video rate; however, they require precise arrangement of sensors and other optical elements. The panoramic cameras based on sequential capture require a single camera, but are not video-rate and hence are suitable for only stationary scenes.

Dynamic Range: Ordinarily, the dynamic range of any camera is given by that of the sensor used. This is particularly disadvantageous for omnidirectional and panoramic cameras since larger fields of view have larger brightness ranges. The dynamic range of a sensor can be improved through a number of techniques. The basic idea is to acquire multiple images using different exposure settings, thus capturing different sections of the illumination range. These multiple exposure images are then combined (intensity space mosaicing) to cover a larger brightness range which is a union of those covered by the individual images. There are two main types of high dynamic range cameras, depending on whether they have video-rate capability or not. The first type sequentially acquires multiple exposure images, each of which has a resolution identical to that of the sensor. These cameras do not have video-rate capability and are suited for only stationary scenes [3, 6, 12, 13, 15]. The second type acquires images at video rate, which can be achieved using several techniques, each having different tradeoffs. A number of special cameras have been disclosed in patents which employ a single lens but multiple sensors, such that the same scene gets simultaneously imaged on different sensors preset to different exposure settings. Many of these designs typically require precise alignment of various optical elements and the sensors. In another approach, a special sensor is reported which has multiple sensing elements with different sensitivities for recording light at every pixel on the sensor. This approach uses a single sensor, but requires specialized and expensive hardware design. A special high dynamic range sensor has been proposed in [2] that, instead of measuring the amount of charge accumulated in a pixel, measures the time it takes to reach saturation. The recorded times are then converted into a high dynamic range image. All the approaches discussed above generate high dynamic range images with resolution that of the sensor. A technique to trade off spatial resolution and quality for dynamic range is presented in [20]. N images with different exposures are captured on a single sensor at 1/N th the resolution of the sensor. Interpolation is used to construct a full-resolution image. The field of view of all the camera designs reviewed above is that of the individual images.

2. High dynamic range panoramic imaging

   A straightforward approach to obtain a high resolution panoramic image with high dynamic range would be to construct a high dynamic range image of the (narrow) field of view at each angular position of the camera using intensity space mosaicing, and then stitch together the individual high dynamic range images using spatial mosaicing. Alternatively, panoramic images generated under multiple exposure settings can be fused together by intensity space mosaicing to obtain a high dynamic range panoramic image. Either variant of this simple approach requires two controlling mechanisms, one for camera rotation and the other for altering exposure settings.

   We describe a simple modification to any existing panoramic/omnidirectional camera in which the process of improving the dynamic range is coupled with the process of increasing the field of view. Our method requires only a single control mechanism for camera rotation, with no need for expensive hardware modifications or additional processing requirements. The basic idea is to capture every scene point under different exposure settings as the camera rotates. The desired change in exposure setting has been achieved using a filter with spatially varying transmittance (mask), as also done in [20]. However, the spatial variations in exposure suggested in this paper are different both in form and functionality from that in [20].

2.1. Proposed Approach

   Consider a mask with spatial variation in transmittance as shown in Fig. 1(a), placed adjacent to the sensor of a high resolution panoramic camera based on rotation about the lens center and mosaicing [14] (Fig. 1(c)). The darker regions of the mask represent low transmittance, or equivalently a low exposure setting for the underlying portion of the sensor, and brighter regions of the mask represent higher exposure. We will refer to the regions of common transmittance on the mask as bands. As the camera rotates, the image of a scene point falls at a different location on the sensor as shown in Fig. 2. Since the transmittance of the mask varies along the direction of camera rotation, every scene point visible during the camera scan gets imaged under different exposure settings as the camera rotates. The sequence of images obtained for each scene point can

then be combined to construct a high dynamic range panoramic image. The design of this camera parallels the design of the Non-frontal imaging camera (Nicam) [11]. In Nicam, the sensor is tilted, which allows a scene point to be imaged under different focus settings as the camera rotates. The resulting images can be used to construct omnifocus panoramic images. In our method, we instead use a graded filter to allow a scene point to be imaged under different exposure settings as the camera rotates.

   The key idea in this approach is to exploit the rotation of the camera, which is anyway necessary to obtain a large field of view (with high resolution), to obtain images of a scene point under different exposure settings as well. Depending on the field of view to be covered, a cylindrical strip (panoramic camera) or a hemisphere/sphere (omnidirectional cameras), the required variation in exposure can be realized by a suitable mask. For example, for an omnidirectional camera in which the camera is panned and tilted to cover a sphere [5], or the high resolution omnidirectional camera proposed in [19], a circularly symmetric mask might be more suitable, such as the one shown in Fig. 1(b).

Figure 1. Spatially varying masks for large field of view high dynamic range imaging and their placement. (a) Mask with transmittance varying along the rotation direction, suitable for panoramic cameras. (b) A circularly symmetric mask, better suited for omnidirectional high resolution imaging. (c) Top and side view of the proposed camera with the mask and the motor.

Figure 2. The rays from an object point O intersect the sensor at different locations for different angular positions (P1 - P4) of the camera. Since the mask adjacent to the sensor has spatially varying transmittance, any visible object point gets imaged under multiple exposure settings.

3. Design of a high dynamic range panoramic camera

   There are a number of practical and theoretical issues involved in the complete design of the camera, which we will discuss in the various subsections.

3.1. Preparation and placement of mask

   A mask can be accurately implemented using several techniques summarized in [20]. These include placing a suitable mask adjacent to the detector array or using special sensors with different sensing elements having sensitivities corresponding to the mask. These techniques require very special handling and could be difficult to implement. For example, most solid state sensors have a protective glass cover, which is difficult to remove without destroying the sensor, and hence a mask cannot be easily placed adjacent to the sensor. Our approach does not require placing the mask adjacent to the sensor. It is also robust to irregularities in the band boundaries, and to misalignments of the bands with the sensor grid. A mask can be simply implemented by manually attaching a combination of neutral density thin-film filters on the sensor's protective surface. For example, if only a two band mask is required, with transmittance values 100% and 25%, a filter with 25% transmittance covering approximately half of the sensor's glass surface would suffice.

   A consequence of such a simple implementation of the mask is that the effective spatial transmittance is not identical to that on the mask, but is in fact a blurred version of the mask. The phenomenon is explained in

Fig. 3. It shows cones of light from three scene points undergoing refraction in the lens, passing through the mask and then converging on the sensor. The intercept of the cone with the mask varies with the angular position of the scene point. For example, for scene point O1, the area of intersection lies entirely in band A. The cone of scene point O2 barely misses intercepting the mask at the band boundary, while the cone corresponding to point O3 intersects the mask on the boundary of two bands. This implies that the amount of light reaching the sensor is given by a weighted combination of the intercept areas in band A and band B. The resulting amount of light that reaches the sensor as a function of the angular position of the scene point is plotted in Fig. 3(d). We will refer to the transition regions in the effective transmittance as forbidden zones. The size of the forbidden zones depends on the aperture size and the distance of the mask from the sensor. As we will show in Sec. 3.2, the angular step size of the motor can be suitably chosen to ensure each scene point gets imaged through each band entirely at least once, which is sufficient to create a high dynamic range mosaic.

   Another practical issue in the construction of the mask is that the transition between bands may be quite rough on a micron scale (the typical pitch of pixels on a CCD sensor is 5-10 microns). This edge irregularity will affect the effective spatial transmittance near the band boundaries and may result in a slightly larger forbidden zone.

Figure 3. The effect of placing the mask at a distance from the sensor. (a)-(c) show how the light cone from the object point intercepts the mask after refraction. In (a) and (b), the intercept is confined to a single band, but in (c), it straddles both bands. (d) A plot of the resulting amount of light that reaches the sensor as a function of angular position of the object point.

3.2. Angular step size of the motor

   Consider a mask and camera configuration in which each forbidden zone spans at most k columns on the sensor. If we choose the angular step of the motor such that every scene point gets imaged at least once in a non-forbidden zone of each band, then it is possible to reconstruct a high dynamic range image which is not affected by the irregularities or blur in the transmittance of the mask.

   Let the horizontal resolution of the sensor be Rh and the number of exposure settings (bands) required to cover the desired dynamic range be n. It is clear from Fig. 4 that, in order to ensure that every scene point gets imaged at least once for each of the exposures, the distance between the images of the same point for two successive positions of the camera must not exceed smax, where

      smax = Rh/n - k.

Since the image of a point shifts the most near the edge of the sensor, the maximum angular step size or speed for the motor must be small enough to ensure that an edge pixel does not shift by more than smax units. This constraint can be used to easily determine the motor step size. From Fig. 4(b) we have

      tan θ1 = Rh / (2v)   and   tan θ2 = (Rh - 2 smax) / (2v).

Then the difference (θ1 - θ2) gives the maximum angular step for the motor. For all step sizes smaller than the maximum step, it is ensured that every scene point gets imaged at least once for each desired exposure (band).

3.3. Spatial variation in the sensitivity of the sensor

   Photometric sensitivity refers to the ratio of image irradiance to the radiance of a scene point towards the entrance pupil. Even in the absence of the graded mask, the sensitivity of a lens is spatially variant and decays in directions off the optical axis. In real lenses, various factors contribute to this decay including lens foreshortening, lens aberrations and vignetting, which are difficult to estimate theoretically without any knowledge of the internal structure of the imaging lens [9]. The impact of spatially varying sensitivity is that the irradiance at the sensor from the same scene point at different angular positions of the camera is different. A correction factor which is an inverse of the sensitivity variation is thus necessary in order to normalize the recorded intensity values to a common scale. The sensitivity is further altered by the presence of the mask, and the final sensitivity is a product of the (blurred) transmittance of

the mask and the inherent sensitivity of the lens and sensor combination. Since the dependence of irradiance from different directions is a complex function of lens design and imaging parameters, we chose to estimate the net variation in sensitivity empirically through a simple experiment described below.

   We used a spatially uniform light source as the object and observed its image on the sensor equipped with a mask. The variation in the image gives the desired (net) variation in the sensitivity of the sensor. We used a light box (KLV7000) as the light source. The image of the light source was observed to change on repositioning the light box, which implied that the light source was not as uniform as we desired. We added light diffusing elements (plexiglass and opal sheets) to the front of the light box, re-tested it for uniformity and used the combination as the light source.

   Since the mask creates a significant variation in the amount of light reaching the sensor, a single image of the uniform light source may not be adequate to accurately estimate the net spatial variation in the sensitivity of the entire sensor. To overcome this problem, we acquire multiple images with different shutter speed settings (provided on the camera) such that each pixel on the sensor gets imaged within the reliable intensity range in at least one of the images. We define the reliable range to be the range of intensity/luminance levels whose minimum is above the noise levels and whose maximum is below the saturation level minus the noise variance of the camera. We then acquire an image of a dark scene (with lens cap on) and label it as the "darkimage". For CCD cameras, this darkimage usually has non-zero mean and is a consequence of dark current present in CCDs [7]. We subtract the darkimage from each of the images and select the reliable portions. For each image, the intensities in the selected regions are scaled by dividing them by the shutter time for that image. Since each pixel falls in the reliable intensity range in at least one of the images, at the end of the procedure we will have a number associated with each pixel on the sensor. This array of numbers, after dividing by the maximum value among them, gives the variation in the sensitivity across the sensor. The scaling factor required in the mosaicing process is obtained by taking an inverse of the sensitivity values.

   The technique suggested above is applicable for linear cameras. In general, the relationship between the sensor irradiance and recorded intensity is nonlinear. A number of techniques have been suggested in the literature to determine this nonlinearity [6, 15] and these can be adapted to estimate the scaling function required to account for the nonlinearity and the spatial variation in the photometric sensitivity.

Figure 4. Forbidden and valid zones on the sensor and estimation of the angular step for the camera.

Figure 5. The spherical grid and coordinate system.

3.4. Imaging geometry, calibration and image acquisition

   The camera-lens system we used was perspective with slight distortion. The camera was mounted on a motor with the rotation axis passing through the lens center, to ensure a common viewpoint at all angular positions of the motor. The camera rotates with a maximum angular step size as determined in Sec. 3.2, and the manner in which the camera covers the field of view is shown in Fig. 2.

   We chose a spherical coordinate system (φ, θ) as shown in Fig. 5. Let the ray joining an object point to the lens center be denoted as R. In order to construct

the mosaic, we need to nd the intersection of the ray           The ray Ri j may not intersect the images at sensor grid
Ri j from object at ( ) with the sensor for any given           points, therefore, the intensity value for Mi j needs to
angular position ( ) of the camera. If = 0, the prob-           be determined using interpolation. In any single im-
lem is identical to regular camera calibration which has        age not all pixels in the neighborhood around a point
been addressed by several researchers, even for lenses          of intercept may be within reliable intensity range. To
with distortion. For = 0, let the relationship between          avoid using clipped or saturated neighborhood values
the coordinates (x y) of the image of an object point           to determine Mi j we interpolate from only the reliable
at angular position ( ) be given by x = f ( ) and               neighborhood pixels but from all selected images where
y = g( ). Since, the camera is rotating about the               Ri j intercepts, instead of any single best image.
lens center, the ray joining the object point to the lens          Interpolation weights for reliable reliable neighbor-
center is invariant and rotating the camera amounts to          hood pixels are computed as fractions representing the
rotating the coordinate system. Therefore, for non-zero         relative distance of the intersection point to the pixels.
gamma, x = f ( ; ) and y = g( ; ). The                          The intensities for the reliable pixels are scaled using the
relationship between the angular position in degrees            brightness correction functions determined in Sec. 3.3.
to the motor count is known and documented in the               An interpolation scheme, such as cubic or bilinear inter-
manuals for the motor.                                          polation can then used to determine the intensity value
   Using the above geometric relationship, we can com-          corresponding to grid point Mi j . This procedure is re-
pute the position of the object point in the image for          peated for all grid points on the mosaic surface yielding
di erent positions of the camera. This eliminates the           the desired mosaic. It is worth noting that the inter-
need to track a scene point in an image sequence.               polation is a necessary step for mosaicing a panoramic
                                                                image on a spherical grid and is not a consequence of
3.5.    Constructing the      high    dynamic     range         our method.
       panoramic mosaic

   Let us label the sequence of images obtained at reg-
                                                                4. Experiments
ular angular intervals ( k ) of the camera by Ik . We              We designed a grayscale panoramic camera with a
construct a spherical mosaic surface aligned with the object-space coordinate system. We refer to the mosaic surface as M, which is sampled uniformly along both the θ and φ axes. The sampling rate is chosen based on the sensor resolution.
   Let us assume that the sensor resolution in the horizontal and vertical directions is Rh and Rv, respectively, and that the field of view of the individual images is Fh × Fv. Then the sampling rates along the θ and φ axes are chosen to be Rh/Fh and Rv/Fv, respectively. The range along the θ axis is given by the total field of view covered by the rotating camera and can be as large as 360°. The range along the φ axis is given by Fv. Consider a point (θi, φj) on the spherical grid and the ray Rij joining that point to the lens center. The ray Rij intersects the sensor only in some of the images. Further, the intercept on the sensor varies as a function of the angular position of the camera (Fig. 2), and can be computed using the projection equations obtained by camera calibration as discussed in Sec. 3.4.
   The intensity for point Mij is obtained as follows. We find the points of intersection of ray Rij in all the images. Images in which Rij does not intersect the sensor, or intersects it within the forbidden bands, are disregarded. We note that the rotation step size for the camera motor has been chosen such that each scene point gets imaged once for each exposure, not counting the situations in which the point lands in the forbidden zones.
   … mask as shown in Fig. 6. The mask consisted of a 25% transmittance filter (a neutral-density Kodak Wratten gelatin filter) covering the left half of the sensor. The camera used was a Pulnix TM720 with a 16mm Navitar lens (DO 1614) focused to infinity at aperture setting 5.6. There was one forbidden zone, 100 columns wide (we chose to be conservative). The rotation step used for the experiments was approximately 3 degrees, which was equivalent to a 100-pixel shift along the θ axis on the spherical grid.
   We present sample high dynamic range panoramic images acquired with the camera. Fig. 7 displays results for an experiment on an indoor scene. The scene had significant light variation: the bricks were generally darker, while the regions around the staircase and the poster boards were much brighter. The first four images (a)-(d) display a subset of the images in the acquired sequence. The left halves of these images faithfully capture the brighter areas of the scene, while the right halves capture the darker areas. Fig. 7(e) displays a panoramic image of the same scene under identical camera settings, but without the mask. The high dynamic range panoramic image constructed from the sequence is shown in Fig. 7(f). The image has been histogram equalized to compress the dynamic range to 256 gray levels for the purpose of display. Figs. 7(g)-(l) show some sections of the high dynamic range panoramic image, magnified and shown under different photometric scales to demonstrate the detail captured by the camera in both dark and bright regions.
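The grid construction and intensity-gathering procedure described above can be sketched as follows. This is a minimal illustration rather than the authors' implementation: the simple pinhole projection, the parameter values, the forbidden-zone band, and the merging rule (scaling samples seen through the filter by the inverse transmittance and averaging) are assumptions made for the sketch; the paper instead computes sensor intercepts from projection equations obtained by camera calibration (Sec. 3.4).

```python
import numpy as np

# Illustrative parameters (not the calibrated values from the paper)
Rh, Rv = 640, 480               # sensor resolution
Fh, Fv = 30.0, 22.0             # per-image field of view, degrees
t = 0.25                        # transmittance of the neutral-density filter
f = Rh / (2 * np.tan(np.radians(Fh / 2)))   # focal length in pixels
poses = np.arange(0.0, 360.0, 3.0)          # motor positions, 3-degree steps
forbidden = [(270, 370)]        # forbidden column band around the mask edge

# Spherical mosaic grid: Rh/Fh samples per degree along theta,
# Rv/Fv samples per degree along phi.
thetas = np.arange(0.0, 360.0, Fh / Rh)
phis = np.arange(-Fv / 2, Fv / 2, Fv / Rv)

def project(theta, phi, pose):
    """Project the ray (theta, phi) into the camera rotated to `pose`.
    Returns sensor (col, row) or None if the ray misses the sensor.
    (Azimuth wrap-around is omitted for brevity.)"""
    a = np.radians(theta - pose)            # azimuth relative to optical axis
    b = np.radians(phi)
    if abs(a) >= np.radians(Fh / 2) or abs(b) >= np.radians(Fv / 2):
        return None
    col = Rh / 2 + f * np.tan(a)            # pinhole model, principal point at center
    row = Rv / 2 + f * np.tan(b)
    if not (0 <= col < Rh and 0 <= row < Rv):
        return None
    return col, row

def mosaic_value(theta, phi, images):
    """Gather every valid sample of grid point (theta, phi) across the
    rotation sequence and merge them into one radiance estimate."""
    samples = []
    for pose, img in zip(poses, images):
        hit = project(theta, phi, pose)
        if hit is None:
            continue                        # ray misses the sensor in this image
        col, row = hit
        if any(lo <= col < hi for lo, hi in forbidden):
            continue                        # skip the transmittance transition band
        v = img[int(row), int(col)]
        if not (5 <= v <= 250):
            continue                        # discard under/over-exposed samples
        exposure = t if col < Rh / 2 else 1.0   # left half sits behind the filter
        samples.append(v / exposure)        # scale back to a common radiance scale
    return np.mean(samples) if samples else 0.0
```

Because the rotation step (3 degrees, about 100 pixels) is smaller than either sensor half, each grid point is seen both through the filter and unfiltered, which is what lets the merge recover a radiance range wider than either exposure alone.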

Figure 6. (a) The mask used in the prototype camera. The darker band was a neutral-density filter with 25% transmittance; the lighter portion represents the absence of a filter. (b) The resulting transmittance variation as observed on the sensor.

References

[1] R. Benosman, E. Deforas, and J. Devars. A new catadioptric sensor for the panoramic vision of mobile robots. In Workshop on Omnidirectional Vision, pages 112-116, 2000.
[2] V. Brajovic and T. Kanade. A sorting image sensor: An example of massively parallel intensity-to-time processing for low latency computational sensors. In IEEE Conference on Robotics and Automation, pages 1638-1643, April 1996.
[3] P. J. Burt and R. J. Kolczynski. Enhanced image capture through fusion. In International Conference on Computer Vision, pages 173-182, 1993.
[4] J. S. Chahl and M. V. Srinivasan. A complete panoramic vision system, incorporating imaging, ranging, and three dimensional navigation. In Workshop on Omnidirectional Vision, pages 104-111, 2000.
[5] S. Coorg, N. Master, and S. Teller. Acquisition of a large pose-mosaic dataset. In Conference on Computer Vision and Pattern Recognition, pages 872-878, 1998.
[6] P. E. Debevec and J. Malik. Recovering high dynamic range radiance maps from photographs. In Proceedings of ACM SIGGRAPH, pages 369-378, 1997.
[7] G. E. Healey and R. Kondepudy. Radiometric CCD camera calibration and noise estimation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 16(3):267-276, March 1994.
[8] R. A. Hicks and R. Bajcsy. Catadioptric sensors that approximate wide-angle perspective projections. In Workshop on Omnidirectional Vision, pages 97-103, 2000.
[9] C. Kolb, D. Mitchell, and P. Hanrahan. A realistic camera model for computer graphics. In Proceedings of SIGGRAPH, pages 317-324, 1995.
[10] A. Krishnan and N. Ahuja. Panoramic image acquisition. In Conference on Computer Vision and Pattern Recognition, pages 379-384, 1996.
[11] A. Krishnan and N. Ahuja. Range estimation from focus using a non-frontal imaging camera. International Journal of Computer Vision, 20(3):169-185, December 1996.
[12] B. C. Madden. Extended intensity range imaging. Technical Report MS-CIS-93-96, Grasp Lab, University of Pennsylvania, 1996.
[13] S. Mann and R. W. Picard. On being `undigital' with digital cameras: Extending dynamic range by combining differently exposed pictures. In Proceedings of IS&T 46th Annual Conference, pages 422-428, May 1995.
[14] L. McMillan and G. Bishop. Plenoptic modeling: An image based rendering system. In Proceedings of SIGGRAPH, pages 36-46, August 1995.
[15] T. Mitsunaga and S. K. Nayar. Radiometric self calibration. In Conference on Computer Vision and Pattern Recognition, volume 1, pages 374-380, June 1999.
[16] K. Miyamoto. Fish eye lens. Journal of the Optical Society of America, 64:1060-1061, August 1964.
[17] V. Nalwa. A true omnidirectional viewer. Technical report, Bell Laboratories, February 1996.
[18] S. K. Nayar. Catadioptric omnidirectional camera. In Conference on Computer Vision and Pattern Recognition, pages 482-488, 1997.
[19] S. K. Nayar and A. Karmarkar. 360 × 360 mosaics. In Conference on Computer Vision and Pattern Recognition, volume 2, pages 388-393, June 2000.
[20] S. K. Nayar and T. Mitsunaga. High dynamic range imaging: Spatially varying pixel exposures. In Conference on Computer Vision and Pattern Recognition, volume 1, pages 472-479, June 2000.
[21] S. Peleg. Panoramic mosaics by manifold projection. In Conference on Computer Vision and Pattern Recognition, pages 338-343, June 1997.
[22] H. S. Sawhney and S. Ayer. Compact representations of videos through dominant and multiple motion estimation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 18(8), August 1996.
[23] Y. Xiong and K. Turkowski. Creating image-based VR using a self-calibrating fisheye lens. In Conference on Computer Vision and Pattern Recognition, pages 237-243, 1997.
[24] Y. Yagi and S. Kawato. Panorama scene analysis with conic projection. In International Workshop on Intelligent Robots and Systems, pages 181-187, 1990.
[25] K. Yamazawa, Y. Yagi, and M. Yachida. Omnidirectional imaging with hyperboloidal projection. In International Conference on Intelligent Robots and Systems, pages 1029-1034, July 1993.
[26] S. Zimmerman and D. Kuban. A video pan/tilt/magnify/rotate system with no moving parts. In IEEE/AIAA Digital Avionics Systems Conference, pages 523-531, 1992.

Figure 7. Experimental results of high dynamic range panoramic imaging of an indoor scene. (a)-(d) Four images from the sequence acquired by a panoramic camera with the mask shown in Fig. 6. (e) A panoramic image constructed using a camera without the mask, showing many areas overexposed. (f) The high dynamic range panoramic image constructed using the sequence obtained from a camera with the mask. The image has been histogram equalized to show that the entire range of scene radiance was successfully captured. (g)-(l) Magnified results for some sections of the resulting panoramic image. The brightness range within each individual section has been appropriately compressed to 0-255 gray levels to best display the intensity variations captured in that section.

