
									          Chapter 8


Implementation of a Renderer



                               1
Rendering as a Black Box




                           2
Object-Oriented vs. Image-Oriented
  for(each_object) render (object);




  for(each_pixel) assign_a_color(pixel);

                                           3
           Four Major Tasks
   Modeling
   Geometric processing
   Rasterization
   Display




                              4
                 Modeling
   Chapter 6: modeling of a sphere
   Chapter 9: hierarchical modeling
   Chapter 10: curves and surfaces
   Chapter 11: procedural modeling
   Can be combined with clipping to reduce
    the burden of the renderer


                                              5
         Geometric Processing
   Normalization
   Clipping
   Hidden-Surface Removal
    (visible surface determination)
   Shading (normals and lighting information are combined to compute the color at each vertex)

                                                6
                Rasterization
   Also called scan-conversion
   Texture value is not needed until
    rasterization




                                        7
                   Display
   Usually this is not the concern of the
    application program
   Dealing with aliasing is one possible task
    at this stage
   Half-toning (dithering)
   Color-correction


                                                 8
Implementation of Transformation
   Object (world) coordinates
   Eye (camera) coordinates
   Clip coordinates
   Normalized device coordinates
   Window (screen) coordinates




                                    9
Viewport Transformation




 $$x_p = xv_{\min} + (x - x_{\min})\,\frac{xv_{\max} - xv_{\min}}{x_{\max} - x_{\min}},\qquad
   y_p = yv_{\min} + (y - y_{\min})\,\frac{yv_{\max} - yv_{\min}}{y_{\max} - y_{\min}}$$
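A minimal C sketch of this mapping (the function and parameter names are illustrative, not from the slides):

    /* Map (x, y) from the clipping window [xmin, xmax] x [ymin, ymax]
       to the viewport [xvmin, xvmax] x [yvmin, yvmax]. */
    void viewport_transform(double x, double y,
                            double xmin, double xmax,
                            double ymin, double ymax,
                            double xvmin, double xvmax,
                            double yvmin, double yvmax,
                            double *xp, double *yp)
    {
        *xp = xvmin + (x - xmin) * (xvmax - xvmin) / (xmax - xmin);
        *yp = yvmin + (y - ymin) * (yvmax - yvmin) / (ymax - ymin);
    }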
                                                 10
     Line-Segment Clipping




Primitives that pass through the clipper are accepted;
otherwise they are rejected or culled.
                                                    11
     Cohen-Sutherland Clipping
   Replace most of the expensive floating-
    point multiplications and divisions with a
    combination of floating-point subtractions
    and bit operations




                                                 12
                   Breaking Up Spaces




          Each region is represented by a 4-bit outcode $b_0 b_1 b_2 b_3$:
$$b_0 = \begin{cases}1 & \text{if } y > y_{\max}\\0 & \text{otherwise}\end{cases}\quad
b_1 = \begin{cases}1 & \text{if } y < y_{\min}\\0 & \text{otherwise}\end{cases}\quad
b_2 = \begin{cases}1 & \text{if } x > x_{\max}\\0 & \text{otherwise}\end{cases}\quad
b_3 = \begin{cases}1 & \text{if } x < x_{\min}\\0 & \text{otherwise}\end{cases}$$

                                                                                   13
                 Four Possible Cases
    Given a line segment, let
     o1=outcode(x1,y1), o2=outcode(x2, y2)
1.   (o1 = o2 = 0): the segment lies entirely inside the clipping window; accept it (AB)
2.   (o1 ≠ 0 and o2 = 0, or vice versa): one or two intersections must be computed, and the outcode of the intersection point is re-examined (CD)
3.   (o1 & o2 ≠ 0): both endpoints lie on the same outside side of the window; reject it (EF)
4.   (o1 & o2 = 0, with o1 ≠ 0 and o2 ≠ 0): cannot tell; compute the outcode of one intersection point and re-examine (GH, IJ); see the outcode sketch below
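A minimal C sketch of the outcode computation and the trivial accept/reject tests (the mask values and function names are an illustrative choice, not from the slides):

    /* Outcode bits corresponding to the four half-planes. */
    enum { OC_TOP = 8, OC_BOTTOM = 4, OC_RIGHT = 2, OC_LEFT = 1 };

    static int outcode(double x, double y,
                       double xmin, double xmax, double ymin, double ymax)
    {
        int code = 0;
        if (y > ymax) code |= OC_TOP;
        if (y < ymin) code |= OC_BOTTOM;
        if (x > xmax) code |= OC_RIGHT;
        if (x < xmin) code |= OC_LEFT;
        return code;
    }

    /* Returns 1 if trivially accepted (case 1), 0 if trivially rejected
       (case 3), and -1 if intersections must be computed (cases 2 and 4). */
    static int trivial_test(double x1, double y1, double x2, double y2,
                            double xmin, double xmax, double ymin, double ymax)
    {
        int o1 = outcode(x1, y1, xmin, xmax, ymin, ymax);
        int o2 = outcode(x2, y2, xmin, xmax, ymin, ymax);
        if ((o1 | o2) == 0) return 1;    /* both endpoints inside            */
        if ((o1 & o2) != 0) return 0;    /* both outside on the same side    */
        return -1;                       /* outcodes of intersections needed */
    }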




                                                                 14
                 Discussion
   The Cohen-Sutherland algorithm works best when there are many line segments but few are actually displayed
   The main disadvantage is that it must be applied recursively
   How do we compute the intersections? The form y = mx + h cannot represent a vertical line

                                                15
           Liang-Barsky Clipping
   Represent a line segment with endpoints $\mathbf{p}_1 = (x_1, y_1)^T$ and $\mathbf{p}_2 = (x_2, y_2)^T$ parametrically by
    $$\mathbf{p}(\alpha) = (1 - \alpha)\,\mathbf{p}_1 + \alpha\,\mathbf{p}_2,$$
    or as two scalar equations,
    $$x(\alpha) = (1 - \alpha)\,x_1 + \alpha\,x_2,\qquad y(\alpha) = (1 - \alpha)\,y_1 + \alpha\,y_2$$

   Note that this form is robust and needs no
    changes for horizontal or vertical lines
                                                           16
               Examples
[Figure: two example segments and a clipping window, with the four intersection parameters ordered $1 > \alpha_4 > \alpha_3 > \alpha_2 > \alpha_1 > 0$ in one case and $1 > \alpha_4 > \alpha_2 > \alpha_3 > \alpha_1 > 0$ in the other; the ordering of the $\alpha$ values determines which part of the segment, if any, lies inside the window.]
                                                      17
Avoid Computing Intersections
For an intersection with the top of the window, set $y(\alpha) = (1-\alpha)\,y_1 + \alpha\,y_2 = y_{\max}$, which gives
$$\alpha = \frac{y_{\max} - y_1}{y_2 - y_1}$$

All the tests required by the algorithm can be done by
comparing ymax and y. Only if an intersection is
needed, because a segment has to be shortened, is the
division done. This way, we could avoid multiple
shortening of line segments and the re-execution of
the clipping algorithm.
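A C sketch of this parametric clipping follows. The helper name clip_test() and its organization are illustrative; in this classic formulation a division is still done once per non-parallel boundary, and the optimization described above would rearrange these tests as comparisons so the division happens only when the segment actually has to be shortened.

    /* Update the entering/leaving parameters for one window boundary.
       The boundary constraint is written as p * alpha <= q.            */
    static int clip_test(double p, double q, double *a_in, double *a_out)
    {
        if (p == 0.0) return q >= 0.0;     /* segment parallel to boundary */
        double a = q / p;
        if (p < 0.0) {                     /* potentially entering */
            if (a > *a_out) return 0;
            if (a > *a_in)  *a_in = a;
        } else {                           /* potentially leaving */
            if (a < *a_in)  return 0;
            if (a < *a_out) *a_out = a;
        }
        return 1;
    }

    /* Clip the segment p1->p2 against the window; returns 0 if rejected,
       otherwise shortens the endpoints in place.                        */
    int liang_barsky(double *x1, double *y1, double *x2, double *y2,
                     double xmin, double xmax, double ymin, double ymax)
    {
        double dx = *x2 - *x1, dy = *y2 - *y1;
        double a_in = 0.0, a_out = 1.0;
        if (clip_test(-dx, *x1 - xmin, &a_in, &a_out) &&
            clip_test( dx, xmax - *x1, &a_in, &a_out) &&
            clip_test(-dy, *y1 - ymin, &a_in, &a_out) &&
            clip_test( dy, ymax - *y1, &a_in, &a_out)) {
            *x2 = *x1 + a_out * dx;  *y2 = *y1 + a_out * dy;
            *x1 = *x1 + a_in  * dx;  *y1 = *y1 + a_in  * dy;
            return 1;
        }
        return 0;
    }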

                                                         18
          Polygon Clipping




Creation of a single polygon 

                                 19
Dealing with Concave Polygons
 Forbid the use of concave polygons or tessellate them.




                                                          20
Sutherland-Hodgman Algorithm
   A line-segment clipper can be envisioned
    as a black box




                                               21
  Clipping Against the Four Sides




$$x_3 = x_1 + (y_{\max} - y_1)\,\frac{x_2 - x_1}{y_2 - y_1},\qquad y_3 = y_{\max}$$
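A minimal C sketch of one stage of such a pipeline, clipping a polygon against the top boundary $y = y_{\max}$ (the type and function names are illustrative; a complete clipper chains four such stages, one per side):

    typedef struct { double x, y; } vertex2;

    /* Clip the polygon in[0..n-1] against the half-plane y <= ymax and
       write the result to out[]; returns the number of output vertices. */
    int clip_top(const vertex2 *in, int n, double ymax, vertex2 *out)
    {
        int m = 0;
        for (int i = 0; i < n; i++) {
            vertex2 a = in[i], b = in[(i + 1) % n];
            int a_in = (a.y <= ymax), b_in = (b.y <= ymax);
            if (a_in) out[m++] = a;          /* keep vertices inside        */
            if (a_in != b_in) {              /* edge crosses the boundary   */
                vertex2 p;                   /* the (x3, y3) of the slide   */
                p.x = a.x + (ymax - a.y) * (b.x - a.x) / (b.y - a.y);
                p.y = ymax;
                out[m++] = p;
            }
        }
        return m;
    }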

                                    22
Example 1




            23
Example 2




            24
    Clipping of Other Primitives
   Bounding Boxes and Volumes
   Curves, Surfaces, and Text
   Clipping in the Frame Buffer




                                   25
       Bounding Boxes and Volumes
Axis-aligned bounding box
(Extent)

Can be used in collision detection!




                                      26
     Clipping for Curves and Surfaces
   Avoid complex intersection computations by approximating curves with line segments and surfaces with planar polygons, and perform the exact calculation only when it is necessary




                                                  27
             Clipping for Text
   Text can be treated as bitmaps and dealt
    with in the frame buffer
   Or defined as any other geometric object,
    and processed through the standard
    viewing pipeline
   OpenGL allows both
       Pixel operations on bitmapped characters
       Standard primitives for stroke characters
                                                    28
      Clipping the Frame Buffer
   It’s usually known as scissoring
   It’s usually better to clip geometric entities
    before the vertices reach the frame buffer
   Thus clipping within the frame buffer is
    only required for raster objects (blocks of
    pixels)


                                                     29
Clipping in Three Dimensions




               $$x_{\min} \le x \le x_{\max},\qquad y_{\min} \le y \le y_{\max},\qquad z_{\min} \le z \le z_{\max}$$
                                    30
      Cohen-Sutherland 3D Clipping
   Replace the 4-bit outcode with a 6-bit outcode




                                                     31
Liang-Barsky and Pipe-line Clipper
    Liang-Barsky: add the equation
       $z(\alpha) = (1 - \alpha)\,z_1 + \alpha\,z_2$
    Pipe-line Clipper: add the clippers for the
     front and back faces




                                                   32
Intersections in 3D
               $$\mathbf{p}(\alpha) = (1 - \alpha)\,\mathbf{p}_1 + \alpha\,\mathbf{p}_2$$
               $$\mathbf{n} \cdot (\mathbf{p}(\alpha) - \mathbf{p}_0) = 0$$
               $$\alpha = \frac{\mathbf{n} \cdot (\mathbf{p}_0 - \mathbf{p}_1)}{\mathbf{n} \cdot (\mathbf{p}_2 - \mathbf{p}_1)}$$

  Requires six multiplications and one division
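A minimal C sketch of this computation (type and function names are illustrative; it assumes the segment is not parallel to the plane):

    typedef struct { double x, y, z; } vec3;

    static double dot(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    static vec3   sub(vec3 a, vec3 b) { vec3 r = { a.x-b.x, a.y-b.y, a.z-b.z }; return r; }

    /* Parameter of the intersection of segment p1->p2 with the plane through
       p0 with normal n: two dot products (six multiplications) and one division. */
    double plane_intersection(vec3 p1, vec3 p2, vec3 p0, vec3 n)
    {
        return dot(n, sub(p0, p1)) / dot(n, sub(p2, p1));
    }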


                                              33
Clipping for Different Viewings




Orthographic Viewing       Oblique Viewing

Only need six divisions!


                                             34
OpenGL Normalization




                      35
      Hidden-Surface Removal
   Object-Space Approaches
   Image-Space Approaches




                               36
        Object-Space Approach
1.   A completely obscures B from the camera; we
     display only A
2.   B obscures A; we display only B
3.   A and B both are completely visible; we display
     both A and B
4.   A and B partially obscure each other; we must
     calculate the visible parts of each polygon

                                             O(k²)!

                                                       37
         Image-Space Approach
   Assuming an n × m display, the z-buffer algorithm takes at most n·m·k time for k polygons, which is O(k)
   May produce a more jagged rendering result




                                                 38
                Back-Face Removal
A polygon faces the camera when the angle θ between its normal n and the viewing direction v satisfies
$$-90^\circ \le \theta \le 90^\circ \;\Leftrightarrow\; \cos\theta \ge 0 \;\Leftrightarrow\; \mathbf{n} \cdot \mathbf{v} \ge 0$$
In normalized device coordinates,
$$\mathbf{v} = \begin{pmatrix} 0\\ 0\\ 1\\ 0 \end{pmatrix}$$
If the polygon lies in the plane ax + by + cz + d = 0, we just need to check the sign of c. In OpenGL, use glCullFace() to turn on back-face removal.
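A minimal C sketch of the test in normalized device coordinates (the struct and function names are illustrative; in practice glCullFace() does this in the pipeline):

    /* Plane ax + by + cz + d = 0 of the polygon in normalized device
       coordinates; with v = (0, 0, 1, 0)^T the dot product n . v is just c. */
    typedef struct { double a, b, c, d; } plane_eq;

    int is_back_facing(plane_eq p)
    {
        return p.c < 0.0;   /* front-facing when c >= 0, i.e. n . v >= 0 */
    }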
                                                                 39
            The z-Buffer Algorithm




The frame buffer is initialized to the background color.
The depth buffer is initialized to the farthest distance.
Normalization may affect the depth accuracy.
Use glDepthFunc() to determine what to do if distances are equal.
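A minimal C sketch of the per-fragment depth test (buffer layout and names are illustrative; the comparison direction is what glDepthFunc() selects):

    /* depth[] starts at the farthest distance and frame[] at the background
       color; a fragment is written only if it is nearer than what is stored. */
    void z_buffer_write(int x, int y, double z, unsigned color,
                        double *depth, unsigned *frame, int width)
    {
        int i = y * width + x;
        if (z < depth[i]) {        /* nearer than current contents (GL_LESS) */
            depth[i] = z;
            frame[i] = color;
        }
    }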
                                                                    40
         Incremental z-Buffer Algorithm
Suppose that $(x_1, y_1, z_1)$ and $(x_2, y_2, z_2)$ are two points on the polygon. If
$$\Delta x = x_2 - x_1,\qquad \Delta y = y_2 - y_1,\qquad \Delta z = z_2 - z_1,$$
then the equation of the plane, $ax + by + cz + d = 0$, can be written in a differential form as
$$a\,\Delta x + b\,\Delta y + c\,\Delta z = 0$$
For moving along a scan line, $\Delta y = 0$, so
$$\Delta z = -\frac{a}{c}\,\Delta x$$
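A minimal C sketch of this incremental update while filling one scan line, using the z_buffer_write() helper sketched above (all names are illustrative):

    /* Fill pixels x_start..x_end of scan line y for a polygon lying in the
       plane ax + by + cz + d = 0, stepping z incrementally instead of
       evaluating the plane equation at every pixel. */
    void scan_line_depths(int x_start, int x_end, int y, double z_start,
                          double a, double c, unsigned color,
                          double *depth, unsigned *frame, int width)
    {
        double z  = z_start;
        double dz = -a / c;                 /* delta z per unit step in x */
        for (int x = x_start; x <= x_end; x++) {
            z_buffer_write(x, y, z, color, depth, frame, width);
            z += dz;
        }
    }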


                                               41
Painter’s Algorithm




 Back-to-front rendering
                           42
Depth Sorting – 1/2




                      43
Depth Sorting – 2/2




                      44
Two Troublesome Cases for
     Depth Sorting




May resolve these cases by partitioning/clipping

                                                   45
  The Scan-Line Algorithm




Scan-line by scan-line or polygon by polygon?
                                                46
                DDA
(digital differential analyzer) Algo.
               $$m = \frac{y_2 - y_1}{x_2 - x_1} = \frac{\Delta y}{\Delta x}$$
               We assume that $0 \le m \le 1$.
               Then $\Delta y = m\,\Delta x$, and since $\Delta x = 1$, $\Delta y = m$.
               Pseudocode (m and y are floats, ix is an int):
               float y = y1;
               for (int ix = x1; ix <= x2; ix++)
               {
                 write_pixel(ix, round(y), line_color);
                 y += m;
               }
                                                    47
              Using Symmetry




                         Without using symmetry vs. with symmetry to handle the case where m > 1
                                                   48
    Bresenham’s Algorithm – 1/4
   The DDA algorithm, although simple, still
    requires floating point addition for each
    pixel generated
   Bresenham derived a line-rasterization
    algorithm that avoids all floating-point
    calculation and has become the standard
    algorithm used in hardware and software
    rasterizers
                                                49
       Bresenham’s Algorithm – 2/4
   Assume $0 \le m \le 1$
   And assume we have placed a pixel at $(i + 1/2,\ j + 1/2)$
   Assume the line is $y = mx + h$
   At $x = i + 1/2$, this line must pass within one-half the length of a pixel of $(i + 1/2,\ j + 1/2)$
                                     50
        Bresenham’s Algorithm – 3/4
Define a decision variable $d = a - b$, where $a$ is the distance from the line to the candidate pixel above it and $b$ the distance to the candidate pixel below it. If $d > 0$, the line is closer to the lower pixel; otherwise, it is closer to the upper pixel. However, this still requires a floating-point comparison. Define a new decision variable
$$d = (x_2 - x_1)(a - b) = \Delta x\,(a - b).$$
Using $m = \frac{y_2 - y_1}{x_2 - x_1} = \frac{\Delta y}{\Delta x}$ and $h = y_2 - m x_2$, we can prove that such a $d$ is always an integer.
                                                                       51
              Bresenham’s Algorithm – 4/4
Define $d_k$ to be the value of $d$ at $x = k + \tfrac{1}{2}$; we would like to compute $d_{k+1}$ incrementally from $d_k$.
Observe that $a$ decreases by $m$ or increases by $1 - m$ when we increment $x$; likewise, $b$ either increases by $m$ or decreases by $1 - m$.
Multiplying by $\Delta x$, if the lower pixel was chosen ($d_k > 0$),
$$d_{k+1} = \big((a - m) - (b + m)\big)\Delta x = (a - b)\,\Delta x - 2m\,\Delta x = d_k - 2\Delta y,$$
and if the upper pixel was chosen,
$$d_{k+1} = \big((a + (1 - m)) - (b - (1 - m))\big)\Delta x = \big(a - b + 2(1 - m)\big)\Delta x = d_k - (2\Delta y - 2\Delta x).$$
In summary,
$$d_{k+1} = d_k - \begin{cases} 2\Delta y & \text{if } d_k > 0,\\ 2(\Delta y - \Delta x) & \text{otherwise.}\end{cases}$$
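A minimal C sketch of the resulting integer-only loop for $0 \le m \le 1$, written with integer pixel centers and the common textbook sign convention (which is the negative of the $d$ derived above); write_pixel() is the same illustrative helper used for the DDA:

    void bresenham_line(int x1, int y1, int x2, int y2, unsigned color)
    {
        int dx = x2 - x1, dy = y2 - y1;   /* assumes x2 > x1 and 0 <= m <= 1 */
        int d  = 2 * dy - dx;             /* initial decision variable       */
        int y  = y1;
        for (int x = x1; x <= x2; x++) {
            write_pixel(x, y, color);
            if (d > 0) {                  /* line is closer to the upper pixel */
                y += 1;
                d += 2 * (dy - dx);
            } else {                      /* stay on the lower pixel           */
                d += 2 * dy;
            }
        }
    }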
                                                                                                             52
     Scan Conversion of Polygons
   One of the major advantages that the first raster systems brought to users was the ability to display filled polygons.
   Traditionally, rasterizing a polygon (polygon scan conversion) has meant filling the polygon with a single color



                                                 53
             Inside-Outside Testing




Crossing (odd-even) test: draw a semi-infinite ray starting from the point and count
the number of intersections with the polygon's edges; an odd count means the point is inside.
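A minimal C sketch of the crossing test with a horizontal ray in the +x direction (the vertex-array layout is an illustrative choice):

    /* Odd-even test: returns 1 if (px, py) is inside the polygon whose n
       vertices are (vx[i], vy[i]); each boundary crossing toggles the flag. */
    int point_in_polygon(const double *vx, const double *vy, int n,
                         double px, double py)
    {
        int inside = 0;
        for (int i = 0, j = n - 1; i < n; j = i++) {
            if (((vy[i] > py) != (vy[j] > py)) &&
                (px < vx[i] + (py - vy[i]) * (vx[j] - vx[i]) / (vy[j] - vy[i])))
                inside = !inside;
        }
        return inside;
    }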
                                                          54
                Winding Number




Color a region if its winding number
is not zero.
                                       55
 OpenGL and Concave Polygons
Declare a tessellator object, register callbacks, then feed it the contour:

GLUtesselator *mytess = gluNewTess();
gluTessCallback(mytess, GLU_TESS_BEGIN,  (GLvoid (*)()) glBegin);
gluTessCallback(mytess, GLU_TESS_VERTEX, (GLvoid (*)()) glVertex3dv);
gluTessCallback(mytess, GLU_TESS_END,    (GLvoid (*)()) glEnd);
gluTessBeginPolygon(mytess, NULL);
gluTessBeginContour(mytess);
for (i = 0; i < nvertices; i++)
   gluTessVertex(mytess, vertex[i], vertex[i]);
gluTessEndContour(mytess);
gluTessEndPolygon(mytess);                        56
Polygon Tessellation




                       57
      Scan Conversions with the
              Z Buffer
   We process each polygon, one scan line at
    a time
   We use the normalized-device-coordinate
    line to determine depths incrementally




                                                58
     Polygon Filling Algorithms
   Flood fill
   Scan-line fill
   Odd-even fill




                                  59
                          Flood Fill
   First find a seed point
   void flood_fill(int x, int y)
    {
       if (read_pixel(x, y) == WHITE)
       {
                write_pixel(x, y, BLACK);
                flood_fill(x - 1, y);
                flood_fill(x + 1, y);
                flood_fill(x, y - 1);
                flood_fill(x, y + 1);
       }
    }
   The recursion can be removed by working on one scan line at a time.
                                                               60
Scan-Line Algorithms




           Generating the intersections
           for each edge.




                                         61
Y-X Algorithm
       bucket sorting for each line




                                      62
                       Singularities
   We can rule such singularities out by ensuring that
     no vertex has an integer y value:
       Perturb its location slightly
       Consider a virtual frame buffer
        of twice the resolution of the real
        frame buffer. In the virtual frame
        buffer, pixels are located at only
        even values of y, and all vertices are
        located at only odd values of y
        Placing pixel centers half way between
        integers, as does OpenGL, is equivalent
        to using this approach.
                                                  63
Antialiasing of Lines




 Antialiasing by area averaging   64
         Antialiasing of Polygons




Assign a color based on an area-weighted average of the colors
of the three triangles. (Use accumulation buffer as in Chapter 7)
                                                                 65
Time-domain (Temporal) Aliasing




   Solution: use more than one ray for each pixel.
   This is often done off-line, as such antialiasing is
   computationally intensive.
                                                        66
                 Color Systems
   The same color values may produce different impressions on two displays
   If $C_1 = [R_1, G_1, B_1]^T$ and $C_2 = [R_2, G_2, B_2]^T$ describe the same color in two such RGB systems, then there is a color-conversion matrix M such that $C_2 = M C_1$
   The printing industry usually uses the CMYK color system rather than RGB
   The distance between colors in the color cube is not a measure of how far apart the colors are perceptually. For example, humans are more sensitive to color shifts in blue. (Hence perceptually motivated spaces such as YUV and Lab)
                                                      67
       Chromaticity Coordinates
   For tristimulus values T1, T2, T3, for a
    particular RGB color, its chromaticity
    coordinates are
     $$t_1 = \frac{T_1}{T_1 + T_2 + T_3},\qquad
     t_2 = \frac{T_2}{T_1 + T_2 + T_3},\qquad
     t_3 = \frac{T_3}{T_1 + T_2 + T_3}$$
                                               68
Visible Colors and Color Gamut
          of a Display




                                 69
The HLS Color System




 Hue, Lightness and Saturation
                                 70
                    The Color Matrix
It can be viewed as part of the pipeline that converts a color, rgba, to a new color, r'g'b'a', by the matrix multiplication
$$\begin{pmatrix} r'\\ g'\\ b'\\ a' \end{pmatrix} = \mathbf{C} \begin{pmatrix} r\\ g\\ b\\ a \end{pmatrix}$$
For example, if we define
$$\mathbf{C} = \begin{pmatrix} -1 & 0 & 0 & 1\\ 0 & -1 & 0 & 1\\ 0 & 0 & -1 & 1\\ 0 & 0 & 0 & 1 \end{pmatrix},$$
then it converts the additive representation of a color to its subtractive representation.
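As a quick check (assuming the incoming alpha is 1, the usual value for an opaque color):
$$\mathbf{C}\begin{pmatrix} r\\ g\\ b\\ 1 \end{pmatrix} = \begin{pmatrix} 1 - r\\ 1 - g\\ 1 - b\\ 1 \end{pmatrix},$$
which is the CMY complement of the RGB color.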
                                                                      71
            Gamma Correction – 1/2
   The human visual system perceives intensity in a logarithmic manner
   If we want the brightness steps to appear uniformly spaced, the intensities that we assign to pixels should increase exponentially




                                           72
         Gamma Correction – 2/2
   The intensity I of a CRT is related to the applied voltage V by
     $$I \propto V^\gamma,$$
     or
     $$\log I = c_0 + \gamma \log V,$$
     where the constants $\gamma$ and $c_0$ are properties of the particular CRT
   Two CRTs may have different values for these constants. We can use a lookup table to correct for the difference.
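A minimal C sketch of building such a lookup table for an 8-bit frame buffer (the gamma value a caller passes, e.g. 2.2, would only be a typical example):

    #include <math.h>

    /* Fill a 256-entry table mapping linear intensities to gamma-corrected
       pixel values for a display with the given gamma. */
    void build_gamma_lut(unsigned char lut[256], double gamma)
    {
        for (int i = 0; i < 256; i++) {
            double linear = i / 255.0;
            lut[i] = (unsigned char)(255.0 * pow(linear, 1.0 / gamma) + 0.5);
        }
    }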

                                                         73
             Dithering and Halftoning
   Trade spatial resolution for gray-scale or color resolution.
   For a 4x4 group of 1-bit pixels, there are 17 useful dither patterns (gray levels), instead of the 2^16 possible bit patterns.
   We should avoid always using the same pattern, which may cause beats or moiré patterns.
   glEnable(GL_DITHER) (normally it is enabled)
    Using this may cause the pixels to return different
    values than the ones that were written
                                                            74

								