Introduction To Ray Tracing

Greg Prisament
Stanford ESP Spring 2010

Pebbles by Jonathan Hunt
What Is Ray Tracing?
Ray Tracing is a rendering technique that
simulates light rays as they bounce around a
scene.

It can produce realistic (and surrealistic) images.

It’s fun to study because it brings together
math, physics, programming & art to produce
amazing images.
What Is A Ray Tracer?

A ray tracer is a renderer: a program that takes a 3D scene description as input and produces a 2D image as output.

The scene description can come from a 3D modeler (such as Bryce 3D) or from a text-based description (as with POV-Ray).
Course Outline
Part 1: POV-Ray
Part 2: Background Math

--- 5 min break ---

Part 3: Ray Tracing Algorithm
What Is POV-Ray?
POV-Ray is a free and open-source ray tracer.

It uses text-based scene descriptions as input.

POV stands for "Persistence of Vision" but in
graphics it usually stands for "Point of View".

Get it here: http://www.povray.org/
Simple POV-Ray Scene

background { blue 1 }

camera {
  location <0.0, 0.5, -4.0>
  look_at  <0.0, 0.0, 0.0>
}

light_source {
  <-30, 30, -30>
  color rgb <1, 1, 1>
}

plane {
  <0, 1, 0>, -1
  pigment { color rgb <0, 1, 0> }
}

sphere {
  <0, 0, 0>, 1
  texture {
    pigment { color rgb <1, 1, 0> }
    finish { specular 0.6 }
  }
}
Simple POV-Ray Scene

All scenes have:

Camera: the camera block.
Light Source(s): the light_source block.
Object(s): here, the plane and the sphere.
Simple POV-Ray Scene

camera {
  location <0.0, 0.5, -4.0>   // <X, Y, Z>
  look_at  <0.0, 0.0, 0.0>
}

The camera can be specified with two points:

location: position of the camera.
look_at: point to look at.

There are also other options for tilt, field-of-view, etc.
Simple POV-Ray Scene

Coordinates are specified with three numbers x, y, z inside
"triangle brackets" <>.

If we place the camera on the -Z axis and face positive +Z then:

-X = Screen Left
+X = Screen Right
-Y = Screen Bottom
+Y = Screen Top
-Z = Out of the screen, towards your face.
+Z = Into the screen, away from your face.

POV-Ray uses a left-handed coordinate system.
Simple POV-Ray Scene

Numbers in POV-Ray are unitless. If you're rendering molecules
they could represent nanometers. If you're rendering a city they
might represent feet. If you're rendering the galaxy they could
represent light-years.

But POV-Ray doesn't care what they represent.
Simple POV-Ray Scene

light_source {
  <-30, 30, -30>        // position
  color rgb <1, 1, 1>   // color <R, G, B>
}

A basic "point light" source is specified with a position and a color.

It emits light in all directions.

The intensity of the light does not fall off with distance (unrealistic).
Simple POV-Ray Scene

This scene has two objects: a plane and a sphere.

All objects have a texture which describes pigment, normal, and
finish properties. If any of these are missing then default values
are used.
Simple POV-Ray Scene

plane {
  <0, 1, 0>, -1   // normal vector, offset
  pigment { color rgb <0, 1, 0> }
}

A "plane" is a flat surface that goes on infinitely in two dimensions.
It is specified with a normal vector and an offset.

The normal vector is the direction the plane faces. For example, a
room's floor faces "up" <0, 1, 0> and a room's ceiling faces down
<0, -1, 0>.

The offset is the plane's displacement from the origin.
Simple POV-Ray Scene

sphere {
  <0, 0, 0>, 1   // center point, radius
  texture {
    pigment { color rgb <1, 1, 0> }
    finish { specular 0.6 }
  }
}

A sphere is specified with a center point and a radius. This sphere
is centered on the origin and has a radius of 1.
Other Shapes
Torus

torus {
  1.65, 0.3               // major radius, minor radius
  translate <0, -.7, 1>   // position
  texture {
    pigment { color rgb <1, 1, 0> }
    finish { ambient 0.2 specular 0.6 }
  }
}
Other Shapes
Box

box {
  <-0.5, -1, -0.5>, <0.5, 0, 0.5>
  texture {
    pigment { color rgb <1, 1, 0> }
    finish { ambient 0.2 specular 0.8 }
  }
}

The two points are the bottom-left-near corner position and the
top-right-far corner position.
Other Shapes
Cylinder

cylinder {
  <0, -1, 0>, <0, 0, 0>, 1   // endpoint 1, endpoint 2, radius
  texture {
    pigment { color rgb <1, 1, 0> }
    finish { ambient 0.2 specular 0.8 }
  }
}
Other Shapes
Cone

cone {
  <0, -1, 0>, 1, <0, 1, 0>, 0   // endpoint 1, radius 1, endpoint 2, radius 2
  texture {
    pigment { color rgb <1, 1, 0> }
    finish { ambient 0.2 specular 0.6 }
  }
}
Image-Based Height Field

height_field {
  jpeg "wedding.jpeg"
  texture {
    pigment {
      image_map {
        jpeg "wedding.jpeg"
        map_type 0
        interpolate 2
        once
      }
      rotate x*90
    }
    finish {
      specular 0.5
      ambient 0.2
      reflection 0.1
    }
  }
  translate <-0.5, 0, -0.5>
  scale 1.2*<4, 1, 3>
  scale <1, 0.4, 1>
  translate <0, -.99, 0>
}

Photo by David Zaveloff
Text

text {
  ttf "arial.ttf",
  "SPLASH!",
  0.5,
  0
  texture {
    pigment { color rgb <1, 1, 0> }
    finish { ambient 0.2 specular 0.6 }
  }
  translate <-2, -1, 0>
}

Blobs
#declare RadiusVal   = 1.0;
#declare StrengthVal = 1.0;
blob {
  threshold 0.6
  sphere { < 0.75,   0,    0>, RadiusVal, StrengthVal }
  sphere { <-0.375,  0.65, 0>, RadiusVal, StrengthVal }
  sphere { <-0.375, -0.65, 0>, RadiusVal, StrengthVal }
  scale 1
  texture {
    pigment { color rgb <1, 1, 0> }
    finish { ambient 0.2 specular 0.6 }
  }
}
Textures – Overview

Pigment: describes the surface color.

Normal: simulates coarse surface imperfections.

Finish: describes how the surface interacts with light.
Textures - Pigments

pigment { color rgb <1, 0, 0> }

pigment { granite }

pigment { agate }

pigment { wood scale 0.1 turbulence 0.1 }

pigment { ripples scale 0.1 }

pigment { ripples scale 0.1 turbulence 0.5 }

pigment {
  ripples scale 0.1 turbulence 0.5
  color_map {
    [0.0 color rgb <0, 0, 0.5>]
    [1.0 color rgb <0.7, 0.7, 1>]
  }
}
Textures - Normals

No normal specified.

normal { granite scale 0.2 }

normal { agate }

normal { wood scale 0.1 turbulence 0.1 }

normal { ripples scale 0.1 }

normal { granite 0.1 scale 0.2 }

normal { agate 0.2 }

normal { dents 2 scale 0.1 }
Textures - Finishes

Ambient Light:
finish {
  ambient 0.5
  diffuse 0.0
  specular 0.0
  reflection 0.0
}
The ambient component gives a little bit of color to shadow areas,
which otherwise would be completely black. It approximates
indirect light.

Diffuse Light:
finish {
  ambient 0.0
  diffuse 1.0
  specular 0.0
  reflection 0.0
}
The diffuse component approximates light that comes directly from
a light source and scatters in all directions.

Specular Highlight:
finish {
  ambient 0.0
  diffuse 0.0
  specular 1.0
  reflection 0.0
}
The specular component approximates light that comes directly
from a light source and is reflected.

Reflected Light:
finish {
  ambient 0.0
  diffuse 0.0
  specular 0.0
  reflection 0.75
}
The reflection component simulates a mirror-like surface that
reflects light coming from other objects.
Textures - Finishes

finish {
  ambient 0.1
  diffuse 0.75
  specular 0.75
  reflection 0.2
}

We combine the contributions of these various types of light to get
the surface finish we desire.
Textures - Examples
#declare DentedChrome = texture {
  pigment { color rgb <0.9, 0.95, 1.0> }
  normal { dents 0.2 scale 0.05 }
  finish {
    ambient 0.1
    diffuse 0.8
    specular 1.0 roughness 0.001
    reflection 0.2
  }
}

torus {
  1.65, 0.3
  translate <0, -.7, 1>
  texture { DentedChrome }
}
sphere {
  <0, 1, 0>, 0.75
  texture { DentedChrome }
}
Textures - Examples
#declare RedFrostedGlass = texture {
  pigment {
    color rgb <1.0, 0.9, 0.9>
    filter 0.8
  }
  normal { agate 0.03 }
  finish {
    ambient 0.2
    diffuse 1.0
    specular 1.0
    ior 1.4
  }
}

difference {
  cone {
    <0, -1, 0>, 0.8, <0, 1, 0>, 1.0
  }
  cone {
    <0, -0.8, 0>, 0.7, <0, 1.1, 0>, 0.9
  }
  texture { RedFrostedGlass }
}
Transformations

All shapes can be rotated, translated, and scaled, as many times as
you want.

#declare MyTexture = texture {
  pigment { color rgb <1, 1, 0> }
  finish { specular 0.6 }
}

sphere {
  <0, 0, 0>, 1
  scale <0.5, 1, 0.5>
  rotate -45*z
  translate <-1.5, 0.5, 0>
  texture { MyTexture }
}

sphere {
  <0, 0, 0>, 1
  scale <1, 0.2, 1>
  translate <0, -.9, 0>
  texture { MyTexture }
}

sphere {
  <0, 0, 0>, 1
  scale <0.75, 1.0, 0.5>
  rotate 60*x
  translate <1.5, 0.5, 0>
  texture { MyTexture }
}
Transformations

Be careful of the order you do your transformations in!

Rotate then Scale:

sphere {
  <0, 0, 0>, 1
  rotate -45*z
  scale <0.3, 1, 0.3>
  texture { MyTexture }
}

Scale then Rotate:

sphere {
  <0, 0, 0>, 1
  scale <0.3, 1, 0.3>
  rotate -45*z
  texture { MyTexture }
}
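The difference between the two orders can be seen numerically as well. Here is a minimal sketch in Python (an aside, not part of the original slides; the helper functions are ours) that runs the same point through both orders:

```python
import math

def rotate_z(p, degrees):
    """Rotate point p = (x, y, z) about the z axis."""
    x, y, z = p
    c = math.cos(math.radians(degrees))
    s = math.sin(math.radians(degrees))
    return (c * x - s * y, s * x + c * y, z)

def scale(p, sx, sy, sz):
    """Scale point p component-wise."""
    x, y, z = p
    return (x * sx, y * sy, z * sz)

p = (1.0, 0.0, 0.0)
a = scale(rotate_z(p, -45), 0.3, 1.0, 0.3)   # rotate then scale
b = rotate_z(scale(p, 0.3, 1.0, 0.3), -45)   # scale then rotate
print(a)
print(b)
```

The rotated-then-scaled point keeps its full y extent, while scaling first shrinks the point before rotation spreads it out, so the results differ.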
POV-Ray has many more types of shapes, textures, light sources,
and settings.

Experiment with what it can do.
POV-Ray “Hall of Fame” Images
Boreal by Norbert Kern
The Wet Bird by Gilles Tran
Family and Main Street by Gilles Tran
Glasses by Gilles Tran
Course Outline
Part 1: POV-Ray
Part 2: Background Math

--- 5 min break ---

Part 3: Ray Tracing Algorithm
3D Vectors and Scalars
A 3D vector is a triplet of numbers:

v = (x, y, z)

Example:

v = (1, -3, 2)

Geometrically, think of it as an arrow.

A scalar is just a plain-ol' number, like 15.3.
Magnitude and Direction
All vectors have a length (magnitude) and direction.

Magnitude:

v = (x, y, z)
|v| = sqrt(x^2 + y^2 + z^2)

Example:

v = (1, -3, 2)
|v| = sqrt(1^2 + (-3)^2 + 2^2) = sqrt(14)


v1  ( x1 , y1 , z1 )

v2  ( x2 , y 2 , z 2 )
 
v1  v2  ( x1  x2 , y1  y2 , z1  z2 )

                 
v1                v2               
v1
                         
v2
 
(v1  v2 )
Vector Subtraction

Vectors can be subtracted:

v1 = (x1, y1, z1)
v2 = (x2, y2, z2)
v1 - v2 = (x1 - x2, y1 - y2, z1 - z2)

Geometrically, v1 - v2 is the arrow from the head of v2 to the
head of v1.
4 Types of Vector Multiplication

There are 4 types of vector multiplication:

• Scalar Multiplication

• Component-wise Multiplication

• Dot Product

• Cross Product
Scalar*Vector Multiplication

The scalar "distributes" through the parentheses.

v = (x, y, z)
s*v = s(x, y, z) = (sx, sy, sz)

Magnitude is "scaled" by |s|.

Direction reverses if s < 0.
Component-wise Multiplication

Warning! Don't do this in Math class: it's not considered the correct way to multiply vectors.

v1 = (x1, y1, z1)
v2 = (x2, y2, z2)
v1 * v2 = (x1*x2, y1*y2, z1*z2)

Ray tracers use component-wise multiplication when multiplying colors together.

Basically, it treats the vectors as independent scalars packed into vector form.
Dot Product

v1 = (x1, y1, z1)
v2 = (x2, y2, z2)
v1 · v2 = x1*x2 + y1*y2 + z1*z2

The dot product takes two vectors and produces a scalar.

The dot product is related to the angle between the vectors and their magnitudes:

v1 · v2 = x1*x2 + y1*y2 + z1*z2 = |v1| |v2| cos(θ)

where θ is the angle between v1 and v2.
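To make the formula concrete, here is a small Python sketch (an aside, not part of the original slides) of the dot product and its cos(θ) relationship:

```python
import math

def dot(v1, v2):
    """v1 . v2 = x1*x2 + y1*y2 + z1*z2"""
    x1, y1, z1 = v1
    x2, y2, z2 = v2
    return x1 * x2 + y1 * y2 + z1 * z2

def magnitude(v):
    """|v| = sqrt(v . v)"""
    return math.sqrt(dot(v, v))

a = (1.0, 0.0, 0.0)
b = (1.0, 1.0, 0.0)   # 45 degrees away from a
# Both sides of v1 . v2 = |v1||v2|cos(theta) should agree:
print(dot(a, b), magnitude(a) * magnitude(b) * math.cos(math.pi / 4))
```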
Dot Product - Properties

1) If v1 and v2 point in the same direction then v1 · v2 = |v1| |v2|.

2) If v1 and v2 are perpendicular then v1 · v2 = 0.

3) If v1 and v2 are normalized, meaning |v1| = |v2| = 1, then v1 · v2 = cos(θ).
Cross Product

v1 = (x1, y1, z1)
v2 = (x2, y2, z2)
v1 × v2 = (y1*z2 - y2*z1, x2*z1 - x1*z2, x1*y2 - x2*y1)
Cross Product - Properties
1) The magnitude of the cross product equals the area A of the parallelogram the vectors make: A = |v1 × v2|.

2) The direction of the cross product is perpendicular to both v1 and v2.

3) If v1 and v2 point in the same direction then v1 × v2 = (0, 0, 0).

4) If v1 and v2 are perpendicular then |v1 × v2| = |v1| |v2|.

5) If v1 and v2 are normalized then |v1 × v2| = sin(θ).
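A Python sketch (an aside, not part of the original slides) of the component formula, with a couple of the properties above checked:

```python
def cross(v1, v2):
    """v1 x v2 = (y1*z2 - y2*z1, x2*z1 - x1*z2, x1*y2 - x2*y1)"""
    x1, y1, z1 = v1
    x2, y2, z2 = v2
    return (y1 * z2 - y2 * z1,
            x2 * z1 - x1 * z2,
            x1 * y2 - x2 * y1)

def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

a, b = (1, 0, 0), (0, 1, 0)
c = cross(a, b)
# Property 2: the result is perpendicular to both inputs.
print(c, dot(c, a), dot(c, b))
```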
Exercises

Complete the “Vector Math Exercises” worksheet now.
Course Outline
Part 1: POV-Ray
Part 2: Background Math

--- 5 min break ---

Part 3: Ray Tracing Algorithm
Forward Ray Tracing
Goal: Simulate light rays coming from the light
sources, bouncing off objects, and entering the
camera.
Forward Ray Tracing
Problem: The majority of light rays never reach the
camera. These rays do not contribute to the final
image and so we should not waste time simulating
them.
Backwards Ray Tracing
Solution: Do the simulation in reverse.

This is called backwards ray-tracing and this is how
ray tracers like POV-Ray work.
Backwards Ray Tracing
Fire rays from the camera. When a ray hits an object,
split it into several rays that go towards each light
source.
This way, only the rays that contribute to the final
image are simulated.
Algorithm Overview
1) Fire “primary” ray from the camera.

2) Determine closest object it hits and where
the intersection occurs.

3) Perform shading, including firing “secondary”
rays from intersection point:
a) towards each light source.
b) in the reflection direction.
c) in the refraction direction.
Algorithm Overview
In pseudo-code:

Color FireRay(Ray r, Scene s):
    (obj, t, N, I) = RayGetClosestIntersection(s, r)
    color = Shade(obj, t, N, I, r.dir)
    return color

RenderScene(Scene s, Camera c, Image img):
    for x from 0 to (X_RES-1)
        for y from 0 to (Y_RES-1)
            r = GenCameraRay(c, x, y, X_RES, Y_RES)
            img.pixel[x,y] = FireRay(r, s)
        endfor
    endfor
Algorithm Overview
1) Fire “primary” ray from the camera.

2) Determine closest object it hits and where
the intersection occurs.

3) Perform shading, including firing “secondary”
rays from intersection point:
a) towards each light source.
b) in the reflection direction.
c) in the refraction direction.
Ray Definition
A ray is half a line. It starts at a point and goes
off to infinity in one direction.

Mathematically it can be written as a function:

R(t) = r0 + t*rd,  t ≥ 0

where r0 is the ray's origination point and rd is
the direction it goes in.
Camera and Viewing Frustum
The camera defines a pyramid-shaped viewing frustum that
determines which section of the 3D scene is seen.

You can imagine sticking a grid of pixels into this viewing
frustum near the camera.

We will fire a ray through each of these pixels.
Generating Camera Rays
The shape and location of the viewing frustum is
determined by several camera properties:

θ = Horizontal viewing angle: field of view.
a = Aspect ratio: ratio of image width to height.
f = Normalized forward vector: direction camera is facing.
u = Normalized up vector: direction which is up; should be perpendicular to f.
r = Normalized right vector: computed as f × u.
p = Camera position.
Generating Camera Rays
It is convenient to scale the up and right vectors
based on the aspect ratio and horizontal viewing
angle:

r' = r * tan(θ/2)
u' = (1/a) * u * tan(θ/2)

(Viewed from above, the frustum's half-width at unit distance
along f is tan(θ/2), which is where these scale factors come from.)
Generating Camera Rays
If the output image is W by H pixels large, we can
generate the ray Rx,y(t) for pixel (x, y) as follows:

Rx,y(t) = p + t * ( f + (1 - 2y/H)*u' + (2x/W - 1)*r' )

Questions:
1) What does this reduce to for pixel (0, 0)?
2) What does this reduce to for pixel (W, H)?
3) What does this reduce to for pixel (W/2, H/2)?
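The pixel-ray formula can be sketched in a few lines of Python (an aside, not part of the original slides; it assumes pixel (0, 0) is the top-left corner, which is what the slide's signs suggest):

```python
import math

def camera_ray_dir(x, y, W, H, f, u, r, fov_deg, aspect):
    """Un-normalized direction of the primary ray through pixel (x, y)."""
    half = math.tan(math.radians(fov_deg) / 2.0)
    rp = tuple(c * half for c in r)           # r' = r * tan(theta/2)
    up = tuple(c * half / aspect for c in u)  # u' = (1/a) * u * tan(theta/2)
    ky = 1.0 - 2.0 * y / H
    kx = 2.0 * x / W - 1.0
    return tuple(fc + ky * uc + kx * rc for fc, uc, rc in zip(f, up, rp))

f, u, r = (0, 0, 1), (0, 1, 0), (1, 0, 0)
# Question 3: the center pixel's ray points straight along the forward vector.
print(camera_ray_dir(50, 50, 100, 100, f, u, r, 90.0, 1.0))
```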
Algorithm Overview
1) Fire “primary” ray from the camera.

2) Determine closest object it hits and where
the intersection occurs.

3) Perform shading, including firing “secondary”
rays from intersection point:
a) towards each light source.
b) in the reflection direction.
c) in the refraction direction.
Ray-Sphere Intersection
A sphere at position (sx, sy, sz) with radius r is
mathematically defined as follows:

(x - sx)^2 + (y - sy)^2 + (z - sz)^2 = r^2

How to tell if this sphere and ray intersect?

R(t) = p + t*d,  t ≥ 0
Ray-Sphere Intersection
To determine if a ray and sphere intersect, we
plug in for x, y, z using the ray's equation, and
solve for t.

(x - sx)^2 + (y - sy)^2 + (z - sz)^2 = r^2

R(t) = p + t*d = (px, py, pz) + t(dx, dy, dz) = (px + dx*t, py + dy*t, pz + dz*t)

Plugging in:

(px + dx*t - sx)^2 + (py + dy*t - sy)^2 + (pz + dz*t - sz)^2 = r^2

(dx*t + (px - sx))^2 + (dy*t + (py - sy))^2 + (dz*t + (pz - sz))^2 = r^2
Ray-Sphere Intersection
(dx*t + (px - sx))^2 + (dy*t + (py - sy))^2 + (dz*t + (pz - sz))^2 = r^2

Substitute kx = (px - sx), ky = (py - sy), kz = (pz - sz):

(dx*t + kx)^2 + (dy*t + ky)^2 + (dz*t + kz)^2 = r^2

Expand squared binomials:

dx^2*t^2 + 2*dx*kx*t + kx^2 + dy^2*t^2 + 2*dy*ky*t + ky^2 + dz^2*t^2 + 2*dz*kz*t + kz^2 = r^2
Ray-Sphere Intersection
dx^2*t^2 + 2*dx*kx*t + kx^2 + dy^2*t^2 + 2*dy*ky*t + ky^2 + dz^2*t^2 + 2*dz*kz*t + kz^2 = r^2

Combine like terms:

(dx^2 + dy^2 + dz^2)*t^2 + 2*(dx*kx + dy*ky + dz*kz)*t + (kx^2 + ky^2 + kz^2 - r^2) = 0

This is a quadratic in t with:

a = dx^2 + dy^2 + dz^2
b = 2*(dx*kx + dy*ky + dz*kz)
c = kx^2 + ky^2 + kz^2 - r^2

Use the quadratic formula to solve for t!

t = (-b ± sqrt(b^2 - 4ac)) / 2a
Ray-Sphere Intersection
t = (-b ± sqrt(b^2 - 4ac)) / 2a

# of real roots / What it means:

0 real roots: R(t)'s line does not intersect the sphere.
1 real root: R(t)'s line is tangent to the sphere.
2 real roots: R(t)'s line intersects the sphere twice.

But what about negative values of t?

# of positive real roots / What it means:

0 positive real roots: Ray R(t) does not intersect the sphere.
1 positive real root: Ray R(t) is tangent to the sphere or originates inside it.
2 positive real roots: Ray R(t) intersects the sphere twice. The closest intersection is the smaller t value.
Ray-Sphere Intersection
Assume ray and sphere intersect.

Let t be the smallest positive real root.

The point of intersection I is given by:

I = R(t) = p + t*d

The surface normal N at I is determined by:

N = I - (sx, sy, sz)
Other Shapes
A similar approach can be used to intersect rays
with other shapes.

For example, the equation for a torus (with major
radius R and minor radius r) is:

(x^2 + y^2 + z^2 + R^2 - r^2)^2 - 4*R^2*(x^2 + z^2) = 0

We can plug in the ray equation and get a
quartic in terms of t. Using the quartic formula
we can determine the closest positive real value
of t (if there are any).
Other Shapes

Exercise: How would you compute ray-plane intersections?
Closest Intersection In Scene
When we fire a ray into the scene, we must
determine which object it hits first. This is just a
matter of looping over all objects in the scene and
performing intersection tests:

RayGetClosestIntersection(Ray r, Scene s):
    tmin = INFINITY
    objmin = NONE
    foreach Object obj in s:
        t = RayIntersectObject(r, obj)
        if (t < tmin)
            tmin = t
            objmin = obj
            I = RayEval(r, t)
            N = IntersectionNormal(obj, I)
        endif
    endfor
    return (objmin, tmin, I, N)
Closest Intersection In Scene
However, when there are many objects in the scene,
this can become very slow. To address this, we can
use a hierarchy of bounding volumes.
Closest Intersection In Scene
For this ray, only the blue items are tested for collision.
Algorithm Overview
1) Fire “primary” ray from the camera.

2) Determine closest object it hits and where
the intersection occurs.

3) Perform shading, including firing “secondary”
rays from intersection point:
a) towards each light source.
b) in the reflection direction.
c) in the refraction direction.
We've fired a ray and know the object it hits and the
position and normal at the point of intersection.

We will use this information to determine the color of the
ray.
As we saw earlier, the resulting color depends on the
contribution of several types of light (Ambient, Diffuse,
Specular Highlight, Reflected) and is affected by the
surface's pigment, "bump map" (POV-Ray "normals"), and finish:

ray color = wt_ambient * Ambient(I, pigment)
          + Σ over lightsources of wt_diffuse * Diffuse(I, N, lightsource, pigment)
          + Σ over lightsources of wt_specular * Specular(I, N, V, lightsource)
          + wt_reflection * Reflection(I, N)
          + wt_refraction * Refraction(I, N, ior)

I = Intersection Point
N = Surface Normal at I
V = Incoming Ray Direction
Ambient Contribution

wt_ambient * Ambient(I, pigment)

Ambient light only depends on the
surface's pigment color at the point of
intersection:

Ambient(I, pigment) = pigment(I)

Example: for a "ripple" effect:

r = sqrt(Ix^2 + Iy^2)
pigment(I) = sin(r) * (1.0, 1.0, 1.0)

I = Intersection Point
Diffuse Contribution

Σ over lightsources of wt_diffuse * Diffuse(I, N, lightsource, pigment)

Diffuse lighting is affected by all light sources
that are visible from I.

How can we determine if a light is visible?

I = Intersection Point
N = Surface Normal at I
V = Incoming Ray Direction
Diffuse Contribution

Diffuse lighting is affected by all light sources
that are visible from I.

IsLightVisible(Scene s, Vec I, Light light):
    ray.pos = I
    ray.dir = Normalize(light.pos - I)
    t = RayGetClosestIntersection(s, ray).t
    if t < |light.pos - I|
        return False    (an object blocks the light)
    else
        return True
    endif
Diffuse Contribution

Then for each lightsource that is visible,
compute the diffuse contribution as:

D = (Lpos - I) / |Lpos - I|

Diffuse(I, N, lightsource, pigment) = (D · N) * Lcolor * pigment(I)

I = Intersection Point
N = Surface Normal at I
V = Incoming Ray Direction
Lpos = Light Position
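A Python sketch of the diffuse term (an aside, not part of the original slides; we also clamp the dot product at zero so surfaces facing away from the light get no contribution, which the formula leaves implicit):

```python
import math

def diffuse(i, n, light_pos, light_color, pigment_color):
    """(D . N) * Lcolor * pigment(I), with D the unit vector from I to the light."""
    d = tuple(lc - ic for lc, ic in zip(light_pos, i))
    m = math.sqrt(sum(c * c for c in d))
    d = tuple(c / m for c in d)                       # D = (Lpos - I)/|Lpos - I|
    lambert = max(0.0, sum(dc * nc for dc, nc in zip(d, n)))
    return tuple(lambert * lc * pc for lc, pc in zip(light_color, pigment_color))

# Light directly above a surface that faces up: full contribution.
print(diffuse((0, 0, 0), (0, 1, 0), (0, 10, 0), (1, 1, 1), (1, 1, 0)))
```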
Specular Contribution

Σ over lightsources of wt_specular * Specular(I, N, V, lightsource)

Specular is the result of light being
reflected off a surface and spreading
narrowly.

I = Intersection Point
N = Surface Normal at I
V = Incoming Ray Direction
D = Direction From Light
R = Reflection of D about N
Specular Contribution

The closer you look towards the reflection vector R, the more
highlight you see: a view direction near R sees lots of specular,
while a view direction far from R sees almost none.
Specular Contribution

R = D - 2N(D · N)

Specular(I, N, V, lightsource) = (R · V)^α * Lcolor

α controls how "tight" or wide the highlights are.

I = Intersection Point
N = Surface Normal at I
V = Incoming Ray Direction
D = Direction From Light
R = Reflection of D about N
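A Python sketch of the highlight term (an aside, not part of the original slides). One caveat: for (R · V) to peak when the viewer lines up with the reflected ray, V is taken here as the unit direction from the surface point back toward the viewer, and the dot product is clamped at zero:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def specular(d, n, v, light_color, alpha):
    """(R . V)^alpha * Lcolor, with R = D - 2N(D . N)."""
    r = tuple(dc - 2.0 * nc * dot(d, n) for dc, nc in zip(d, n))
    s = max(0.0, dot(r, v)) ** alpha
    return tuple(s * lc for lc in light_color)

# Light straight down onto an up-facing surface, viewer straight above:
# the reflected ray points right back at the viewer, so the highlight is full.
print(specular((0, -1, 0), (0, 1, 0), (0, 1, 0), (1, 1, 1), 20))
```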
Reflection Contribution

wt_reflection * Reflection(I, N)

We can easily determine the color
contribution due to mirror-like reflection.

How?

I = Intersection Point
N = Surface Normal at I
V = Incoming Ray Direction
R = Reflection of V about N
Reflection Contribution

Fire a ray in the reflection direction!

ray.pos = I
ray.dir = R
reflectionColor = FireRay(ray, scene)

where R = V - 2N(V · N).
Refraction Contribution

wt_refraction * Refraction(I, N, ior)

Same idea for refraction. With η = η1/η2 (the ratio of the
indices of refraction) and c1 = -(V · N), the refraction
vector is:

T = η*V + (η*c1 - sqrt(1 - η^2*(1 - c1^2))) * N

I = Intersection Point
N = Surface Normal at I
V = Incoming Ray Direction
T = Refraction Vector
η1 = Index of Refraction 1
η2 = Index of Refraction 2

For the derivation using Snell's law see
http://www.flipcode.com/archives/reflection_transmission.pdf
Refraction Contribution

Fire a ray in the refraction direction:

ray.pos = I
ray.dir = T
refractionColor = FireRay(ray, scene)

ray color = wt_ambient * Ambient(I, pigment)
          + Σ over lightsources of wt_diffuse * Diffuse(I, N, lightsource, pigment)
          + Σ over lightsources of wt_specular * Specular(I, N, V, lightsource)
          + wt_reflection * Reflection(I, N)
          + wt_refraction * Refraction(I, N, ior)

I = Intersection Point
N = Surface Normal at I
V = Incoming Ray Direction
Review: Algorithm Overview
1) Fire “primary” ray from the camera.

2) Determine closest object it hits and where
the intersection occurs.

3) Perform shading, including firing “secondary”
rays from intersection point:
a) towards each light source.
b) in the reflection direction.
c) in the refraction direction.
Limitations of Ray Tracing
1) Speed. It’s really slow. Too slow for games.

2) Poor approximation of indirect light.
• Ambient contribution does not take into
account light “bleeding” from one object to
another.
• Diffuse and Specular only work for “direct”
light, not other bright objects in the scene.
Alternative Rendering Techniques
Triangle Rasterization
Used for interactive graphics. Less
realistic but very fast.

Photon Mapping
Slower than normal ray tracing,
but handles indirect light and
color bleeding better. Uses many
of the concepts you learned
today.
Thanks!

Slides will be posted here by the end of the week:

http://www.lycheesoftware.com/splash
