Chapter 2
Digital Image Fundamentals

  Those who wish to succeed must ask the
  right preliminary questions.
                               Aristotle


                                           1
      Contents in Chapter 2
2.1 Elements of Human Visual Perception
2.2 Light and Electromagnetic Spectrum
2.3 Image Sensing and Acquisition
2.4 Image Sampling and Quantization
2.5 Some Basic Relationships Between Pixels
2.6 Linear and Nonlinear Operations

                                       2
2.1 Elements of Visual Perception
The importance of visual perception in DIP:
Although digital image processing is founded on
mathematical and probabilistic formulations,
human subjective visual judgment plays a central
role in the choice of one technique over another.
We are interested in the mechanics and parameters
related to how images are formed in our eyes.


                                                3
2.1.1 Structure of the Human Eye
Figure 2.1 shows a simplified diagram of a
cross section of the human eye.
The eye is a sphere with an average diameter
of about 20 mm.

Fig. 2.1

                                     4
2.1.1 Structure of the Human Eye
Three membranes enclose the eye:
Layer 1 (outer cover): Cornea and Sclera
Layer 2: Choroid
  Front portion: Ciliary body, which controls the focusing of the lens (accommodation)
  Iris diaphragm: controls the amount of incoming light
    Pupil: the central opening of the iris, whose diameter varies from 2 to 8 mm




                                                        5
2.1.1 Structure of the Human Eye
Lens:
  contains 60-70% water, about 6% fat, and the remainder protein.
  Cataract may cause poor vision.
  The proteins absorb about 8% of visible light and most infrared and
  ultraviolet; excessive light may damage the lens.
Layer 3: Retina
Contains a distribution of two classes of discrete light
receptors: (1) cones and (2) rods (see Fig. 2.2)

                                                   6
2.1.1 Structure of the Human Eye
Cones: 6-7 million per eye (see Fig. 2.2 for their distribution)
> Distribution centered at the fovea
> Highly sensitive to color
> Can resolve fine detail because each cone has its own nerve ending
> Called photopic or bright-light vision
Rods: 75-150 million per eye
> Lower resolution because several rods share a single nerve ending
> Give a general, overall picture of the field of view
> Not involved in color vision, but sensitive to low levels of light;
  called scotopic or dim-light vision
                                                                  7
2.1.2 Image Formation in the Eye
Figure 2.3
The thickness and curvature of the lens of the human eye are controlled by
the tension of the ciliary muscles, which adjusts the focal length.
 Viewing a distant object: the lens is relatively flat (relaxed), maximum focal length ~17 mm.

 Viewing a near object: the lens is thicker and more curved (tensed), minimum focal length ~14 mm.

 The outside object forms an image on the retina as
shown in Fig. 2.3.
 The process of "seeing":
   Light → receptors (cones/rods) → electrochemical reaction →
   electrical impulses → nerves → brain
                                                8
   2.1.3 Brightness Adaptation and
          Discrimination (Fig. 2.4)
Light intensity I (objective; measured in millilamberts, mL) and
subjective brightness B (the brightness as perceived by the eye).
Approximately: B ≈ K log(I), over a range of I from about 10^-6 to 10^4 mL.

How is the perception of such a long range of
intensities achieved? Brightness adaptation!!
 The human visual system cannot perceive the range 10^-6 to 10^4 simultaneously.
 Example: walking into a movie theater on a bright day.
 Ba: the current brightness adaptation level; Bb: the level below which
   everything is perceived as black.
                                                     9
JND (Just Noticeable Difference)

A uniform field of intensity I, with a small increment ΔI
briefly flashed at its center (so the center is at I + ΔI).

ΔIc: the smallest increment that a subject can notice 50% of
the time (e.g., detected in two out of four observations).
(Brightness discrimination)

Weber ratio = ΔIc / I
    A small ratio: good brightness discrimination at intensity I.
    A large ratio: poor brightness discrimination at intensity I.
                                                              10
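For illustration (numbers assumed, not from the slides): if at a background
intensity I = 100 mL the smallest increment noticed half of the time is
ΔIc = 2 mL, the Weber ratio is 2/100 = 0.02 (good discrimination); if at
I = 0.1 mL the required increment is 0.02 mL, the ratio is 0.02/0.1 = 0.2
(much poorer discrimination).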
JND (Just Noticeable Difference)
Typical human brightness discrimination ability.
How should this curve be read?

(Plot: Weber ratio ΔIc/I versus intensity I; the branch at low
intensities corresponds to the rods, the branch at higher
intensities to the cones.)
                                    11
Experiment: JND
  Measure your own JND curve (use a MATLAB GUI)
  Parameters to consider:
     surrounding brightness
     viewing distance
  Measure one JND value every 10 gray levels (see the sketch below)

(Sketch: ΔIc as a function of gray level, from 0 to 255)

                             12
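A minimal Python sketch of such a measurement (an assumption on my part;
the slide suggests a MATLAB GUI). It shows a uniform background at each
tested gray level with a brighter central square, and raises the increment
until the observer reports seeing it:

import numpy as np
import matplotlib.pyplot as plt

def patch_visible(background, delta):
    """Show a background at gray level `background` with a central square at
    background + delta; return True if the observer reports seeing it."""
    img = np.full((256, 256), background, dtype=np.uint8)
    img[96:160, 96:160] = min(background + delta, 255)
    plt.imshow(img, cmap='gray', vmin=0, vmax=255)
    plt.axis('off')
    plt.show(block=False)
    plt.pause(0.5)
    seen = input("Do you see the central square? [y/n] ").strip().lower() == 'y'
    plt.close()
    return seen

jnd = {}
for g in range(0, 256, 10):          # one JND estimate every 10 gray levels
    delta = 1
    while delta < 64 and not patch_visible(g, delta):
        delta += 1                   # simple ascending staircase
    jnd[g] = delta                   # crude estimate of delta-I_c at level g
print(jnd)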
        (Figure 2.7a)
Perceived Brightness is not a
simple function of Intensity!!




                                 13
    (Figure 2.8) Example of
     Simultaneous Contrast




Fig. 2.9
Some Well-known Optical Illusions
                               14
      Fig. 2.9 Some Well-known
           Optical Illusions
                                15
         2.2 More on Light and
             EM Spectrum
Fig. 2.10 The EM spectrum
Wavelength = (speed of light) / (frequency in Hz),
where the speed of light is a constant (2.998 x 10^8 m/s).
The energy of a photon:
E = (Planck's constant) x (its frequency), measured in electron volts.

 Visible light: EM radiation with wavelengths of (0.43~0.79) x 10^-6 m
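For example (an illustrative calculation, not from the slides): green light
with wavelength ≈ 0.55 x 10^-6 m has frequency c/λ ≈ 2.998 x 10^8 / 0.55 x 10^-6
≈ 5.5 x 10^14 Hz, so its photon energy is E ≈ 4.14 x 10^-15 eV·s x 5.5 x 10^14 Hz
≈ 2.3 eV.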

                                                     16
       2.2 More on Light and
        EM Spectrum (color)
Color: the perceived color of an object is determined
by the light reflected from the surface of the object.

For example: an object is perceived as white (void of
color, i.e., achromatic) because it reflects light that is
relatively balanced across all visible wavelengths.

An object is perceived as green because ……
                                                  17
        2.2 More on Light and
         EM Spectrum (color)
Monochrome light has only one attribute:
intensity (described by a gray level).
Dark → Gray → White
Chromatic light has three attributes:
 (1) Radiance (watts): the total energy that flows from the light source.
 (2) Luminance (lumens): a measure of the energy an observer perceives.
         (Infrared has high radiance in watts but very low luminance in lumens.)
 (3) Brightness: a subjective descriptor of light perception
                that is practically impossible to measure.

                                                     18
      2.2 More on Light and
          EM Spectrum
Applications of non-visible light (see page
45, 2nd paragraph).

The wavelength of the EM wave used to “see” an
object must be the same size as, or smaller
than, the object.
  This is a limitation of image sensors.

                                          19
2.3 Image Sensing and Acquisition
  “Illumination” falls on a “scene”, and we detect
   the energy reflected from or transmitted through
  the scene.
   Possible illumination sources: visible light, other EM
  bands, sound, or even a computer-generated
  illumination pattern, etc.

   Possible scenes: molecules, buried rock
  formations, the human brain, etc.

                                                  20
2.3 Image Sensing and Acquisition
  Sensor: converts the detected energy into an
          electrical voltage signal.

  Fig 2.12a Single image sensor (Section 2.3.1)
  Fig 2.12b Line sensor (Section 2.3.2)
  Fig 2.12c Array sensor (Section 2.3.3)

                                          21
2.3.4 A Simple Image model
Image: a 2-D light-intensity function
   f(x,y),  0 < f(x,y) < ∞
   f(x,y) = i(x,y) · r(x,y),  where 0 < i(x,y) < ∞ and 0 < r(x,y) < 1
   i(x,y): illumination, the intensity of the light incident at (x,y)
   r(x,y): reflectance, the fraction of the incident light reflected at (x,y)


                                                                 22
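A minimal NumPy sketch of the image model f(x,y) = i(x,y) · r(x,y), with
illustrative illumination and reflectance values (assumed, not from the slides):

import numpy as np

M, N = 256, 256
# Illumination i(x,y): a smooth horizontal gradient from 100 to 1000 units.
i = np.tile(np.linspace(100.0, 1000.0, N), (M, 1))
# Reflectance r(x,y) in [0, 1]: a dark square (0.05) on a bright wall (0.80).
r = np.full((M, N), 0.80)
r[96:160, 96:160] = 0.05

f = i * r                 # image model: f(x,y) = i(x,y) * r(x,y)
print(f.min(), f.max())   # finite, positive intensities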
     2.3.4 A Simple Image model

Typical values (illumination in foot-candles; reflectance is dimensionless):

   i(x,y)                             r(x,y)
   Clear sunny day   ~9000            Black velvet       0.01
   Cloudy day        ~1000            Stainless steel    0.65
   Full moon         ~0.01            Flat white wall    0.80
   Office            ~100             Snow               0.93

                                           23
     2.3.4 A Simple Image model
 Gray level (l)
   The intensity of a monochrome image f(x,y)
   Lmin ≤ l ≤ Lmax

 imin·rmin ≈ 10 (office)    imax·rmax ≈ 1000 (office)
 Gray scale
   Shift the interval [Lmin, Lmax] → [0, L-1]
   0: dark, L-1: white, values in between: shades of gray

                                                      24
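A one-line sketch of this shift/rescale (assuming measured intensities l in
[Lmin, Lmax] and L = 256 output levels; the numbers are illustrative):

import numpy as np

Lmin, Lmax, L = 10.0, 1000.0, 256
l = np.array([10.0, 100.0, 1000.0])                  # measured intensities
g = np.round((l - Lmin) / (Lmax - Lmin) * (L - 1))   # rescaled to [0, L-1]
print(g)                                             # [  0.  23. 255.]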
 2.4 Sampling and Quantization
An image f(x,y)
Sampling: digitization of the 2 spatial coordinates.
Quantization: digitization of the amplitude (gray level).
(Spatial sampling and intensity quantization.)
Fig. 2.16 Generating a digital image:
(a) Continuous image
(b) A scan line from A to B in (a)
(c) Sampling
(d) Quantization
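A minimal NumPy sketch of the two steps, assuming a "continuous" image is
approximated by a fine grid with amplitudes in [0, 1]:

import numpy as np

def sample_and_quantize(f, k, L):
    """Spatially sample f by keeping every k-th row and column, then
    quantize its amplitudes (assumed in [0, 1]) to L gray levels."""
    sampled = f[::k, ::k]                       # sampling: spatial coordinates
    quantized = np.floor(sampled * L)           # quantization: amplitude
    return np.clip(quantized, 0, L - 1).astype(np.uint8)

x = np.linspace(0.0, 1.0, 1024)
f = np.outer(x, x)                              # synthetic fine-grid "scene"
g = sample_and_quantize(f, k=4, L=8)
print(g.shape, g.min(), g.max())                # (256, 256) 0 7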
                                          25
2.4.2 Representation of Digital Images
  Fig. 2.18 shows the coordinate convention used
  in this book.
  A digital image can be viewed as a matrix.


              | f(0,0)     f(0,1)    ...  f(0,N-1)   |
              | f(1,0)     f(1,1)    ...  f(1,N-1)   |
  f(x,y)  =   | ...        ...            ...        |
              | f(M-1,0)   f(M-1,1)  ...  f(M-1,N-1) |

  N, M, L are positive integers; L is typically an integer power of 2.
  Dynamic range: [0, L-1]

                                                         26
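A small NumPy illustration of the matrix view above (the row/column indexing
shown is an assumption consistent with the coordinate convention of Fig. 2.18):

import numpy as np

M, N, L = 4, 5, 256                        # M rows, N columns, L gray levels
f = np.random.randint(0, L, size=(M, N))   # a digital image as an M x N matrix
x, y = 2, 3                                # x indexes rows, y indexes columns
print(f.shape, f[x, y])                    # f(x, y) is a single matrix element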
2.3.1 Uniform Sampling and Quantization

 Each element of the digital image is called:
   an image element, or
   a picture element, or
   a pixel, or
   a pel.

 Terminology:
   Image: a digital image
   Pixel: the basic element of a digital image


                                            27
    2.4.2 Representation of Digital Images
    How many bits are required to store a
    digital image?

    An N x M image, with f(x,y) ∈ [Lmin, Lmax] mapped to [0, L-1] (e.g., [0, 255]).

    * Common practice in D.I.P.: L = 2^m gray levels, i.e., m bits per pixel.

    * An integer power of 2 is easier to handle in D.I.P.
    Ans: b = (N x M) x m bits are needed per image.
                                                              28
2.4.2 Storage Required to Store a
           Digital Image
Example
 N = M = 128, with 64 gray levels [0, 63]

 m = log2(64) = 6 bits

 b = (N x M) x m = (128 x 128) x 6 bits
   = 98304 bits
   = 12288 bytes
 For other values of N and m, refer to Table 2.1 on page 56.

                                         29
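A small Python sketch of the storage formula b = (N x M) x m, reproducing
the numbers above:

import math

def bits_required(N, M, L):
    """Bits needed for an N x M image with L = 2**m gray levels."""
    m = int(math.log2(L))
    return N * M * m

b = bits_required(128, 128, 64)
print(b, "bits =", b // 8, "bytes")   # 98304 bits = 12288 bytes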
2.4.3 Spatial and Gray-Level Resolution

  Spatial resolution: the smallest discernible
  detail, commonly expressed as the number of
  discernible line pairs per unit distance.
  Example: 100 line pairs per millimeter.
  Gray-level resolution: 8 bits (mostly);
  also 10, 12, or 16 bits.
  Commonly we refer to an L-level image
  of size MxN (see Fig. 2.19).
                                             30
        2.4.3 Spatial and Gray-level
         Resolution (Figure 2.19a)




                      Fig 2.19 A 1024x1024 (MxN), 8-bit
                      (L=256) image sub-sampled down to
                      size 32x32 pixels.

@ Also see Fig. 2.20 for the effects of low resolution.
                                                     31
Typical Effects of varying the number
  of gray levels in a digital image
                   See Figure 2.21a-h: a 452x374
                   CAT projection image.
                   A rough rule of thumb:
                   an image of 256x256 pixels with
                   64 gray levels is about the
                   smallest image that is free of
                   objectionable sampling
                   checkerboards and false
                   contouring.
                                         32
   N and m for three types of contents




Fig. 2.22 (a) Image with a low level of detail
          (b) Image with a medium level of detail
          (c) Image with a relatively large amount of detail
                                                               33
  N and m for three types of contents
Images dominated by low spatial-frequency components:
  a low N with a high m is preferred.

Images dominated by high spatial-frequency components:
  a high N with a low m is acceptable.

                                   34
      2.4.5 Zooming and Shrinking
             Digital Images
  Two steps in zooming (enlarging):
For example: 500x500 → 750x750
Step 1: Lay a 750x750 grid over the 500x500 image.
Step 2: Assign each of the 750x750 pixels a value by
         nearest-neighbor interpolation (NNI).
   Pixel replication is a special case of NNI for enlarging the
  image by an integer number of times;
   it may cause checkerboard effects (Fig. 2.20e-f).
A more sophisticated interpolation is bilinear
   interpolation (see Fig. 2.25).
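A minimal NumPy sketch of the two zooming steps using nearest-neighbor
interpolation (a sketch, not the book's implementation):

import numpy as np

def zoom_nearest(img, new_h, new_w):
    """Step 1: lay a new_h x new_w grid over img; Step 2: give each new
    pixel the value of its nearest pixel in the original grid."""
    h, w = img.shape
    rows = np.round(np.linspace(0, h - 1, new_h)).astype(int)
    cols = np.round(np.linspace(0, w - 1, new_w)).astype(int)
    return img[np.ix_(rows, cols)]

small = np.arange(16, dtype=np.uint8).reshape(4, 4)
print(zoom_nearest(small, 6, 6))          # 4x4 image zoomed to 6x6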
                                                   35
       2.4.5 Zooming and Shrinking
              Digital Images
   Shrinking (reduction) (the same process as zooming):
For example: 500x500 → 350x350
Step 1: Lay a 350x350 grid over the 500x500 image.
Step 2: Assign each of the 350x350 pixels a value by
         nearest-neighbor interpolation (NNI).
   Pixel deletion is a special case of NNI for shrinking the
  image by an integer number of times.
   Smooth the image before deletion to avoid aliasing.
   Using more neighbors for interpolation is important in 3-D
   interpolation, at the cost of more computation.
   Bilinear interpolation is usually the best choice (a sketch follows below).
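A sketch of bilinear interpolation for resizing (works for both zooming and
shrinking; an illustrative implementation, not the book's code):

import numpy as np

def resize_bilinear(img, new_h, new_w):
    """Each output pixel is a distance-weighted average of its four
    nearest neighbors in the original grid."""
    h, w = img.shape
    img = img.astype(float)
    ys = np.linspace(0, h - 1, new_h)
    xs = np.linspace(0, w - 1, new_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]
    wx = (xs - x0)[None, :]
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return (1 - wy) * top + wy * bot

small = np.arange(16, dtype=np.uint8).reshape(4, 4)
print(resize_bilinear(small, 6, 6))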
                                                           36
   2.5 Some basic Relationships
         Between Pixels
 2.5.1 Neighbors of a pixel P at location (x,y)
The 4 neighbors of pixel P, denoted N4(P), are the pixels at
  (x-1,y), (x+1,y), (x,y-1), (x,y+1).
The 4 diagonal neighbors of pixel P, denoted ND(P), are the pixels at
  (x-1,y-1), (x-1,y+1), (x+1,y-1), (x+1,y+1).
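A tiny Python sketch of these neighborhood definitions:

def neighbors_4(x, y):
    """N4(P): the 4 horizontal and vertical neighbors of P at (x, y)."""
    return [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]

def neighbors_diag(x, y):
    """ND(P): the 4 diagonal neighbors of P at (x, y)."""
    return [(x - 1, y - 1), (x - 1, y + 1), (x + 1, y - 1), (x + 1, y + 1)]

# N8(P) is the union of the two sets; for pixels on the image border,
# some of these coordinates fall outside the image.
print(neighbors_4(2, 3) + neighbors_diag(2, 3))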




                                                    37

				