
#14: Ray Tracing II & Antialiasing
CSE167: Computer Graphics
Instructor: Ronen Barzel
UCSD, Winter 2006

Outline for today
 



- Speeding up Ray Tracing
- Antialiasing
- Stochastic Ray Tracing

Where we are now


- Ray tracing:
  • cast primary rays from the eye through the pixels
  • intersect rays with objects
  • cast rays towards lights to determine shadowing
  • recursively cast reflection and refraction rays


Need for acceleration structures


- Lots of rays:
  • Scenes can contain millions or billions of primitives
  • Ray tracers need to trace millions of rays
  • This means zillions of potential ray-object intersections
- Infeasible to test every object for intersection
  • Just looping through all objects × rays would take days
  • Not even counting the time to do the intersection testing or illumination
- Major goal: minimize the number of intersection tests
  • Tests that would return false (no intersection)
  • Tests that would return an intersection that's not closest to the ray origin

Acceleration structures
- Core approach: hierarchical subdivision of space
  • Can reduce O(N) tests to O(log N) tests
- (Other acceleration techniques too: beam tracing, cone tracing, photon maps, …)

Bounding Volume Hierarchies


- Enclose objects with a hierarchy of simple shapes
  • Same idea as for frustum culling
- Test the ray against the outermost bounding volume
  • If the ray misses the bounding volume, can reject the entire object
  • If the ray intersects the volume, recurse to the child bounding volumes
  • When reaching the leaves of the hierarchy, intersect with the primitives
- Can keep track of the current nearest intersection along the ray
  • If a bounding volume is farther away than that, no need to test its contents
- (A traversal sketch follows below.)

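Below is a minimal traversal sketch of this idea, assuming axis-aligned bounding boxes, a standard slab test, and an intersect() helper that updates the nearest hit distance; all types here are illustrative, not the course's actual code.

    #include <algorithm>
    #include <vector>

    struct Vec3 { double v[3]; };
    struct Ray  { Vec3 o, d; };   // assume nonzero direction components
    struct AABB { Vec3 lo, hi; };

    // Slab test: entry distance along the ray, or -1 on a miss.
    double hitDistance(const AABB& b, const Ray& r) {
        double t0 = 0, t1 = 1e30;
        for (int a = 0; a < 3; ++a) {
            double ta = (b.lo.v[a] - r.o.v[a]) / r.d.v[a];
            double tb = (b.hi.v[a] - r.o.v[a]) / r.d.v[a];
            t0 = std::max(t0, std::min(ta, tb));
            t1 = std::min(t1, std::max(ta, tb));
        }
        return t0 <= t1 ? t0 : -1.0;
    }

    struct Primitive;  // assumed elsewhere; intersect() updates tNearest
    bool intersect(const Ray&, const Primitive&, double& tNearest);

    struct BVHNode {
        AABB bounds;
        std::vector<BVHNode*> children;   // empty at leaves
        std::vector<Primitive*> prims;    // filled only at leaves
    };

    void traverse(const BVHNode& n, const Ray& ray, double& tNearest) {
        double d = hitDistance(n.bounds, ray);
        if (d < 0) return;            // ray misses volume: reject subtree
        if (d > tNearest) return;     // volume is beyond current nearest hit
        if (n.children.empty())
            for (Primitive* p : n.prims) intersect(ray, *p, tNearest);
        else
            for (BVHNode* c : n.children) traverse(*c, ray, tNearest);
    }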

Culling complex objects or groups




- If an object is big and complex, it's possible that only parts of it will be in view
- If we have groups of objects, it's possible that entire groups will be out of view
  • Want to be able to cull the whole group quickly
  • But if the group is partly in and partly out, want to be able to cull individual objects

E.g. Sphere Hierarchy


- Test for intersection against the outermost sphere

E.g. Sphere Hierarchy


- Outer sphere hits: test children
  • ignore child spheres that don't intersect the ray

E.g. Sphere Hierarchy


- Test the contents of the bottom-most spheres
- (Actually, the hierarchy would probably go down a few more levels)

Bounding Volume Hierarchies
 

- Spheres are a good example of the concept
- Spheres not used much in practice:
  • No great techniques to automatically construct a good sphere hierarchy
  • Spheres tend to overlap, so we would do redundant intersection tests
- Other bounding volumes:
  • Axis-aligned bounding boxes (AABBs)
  • Oriented bounding boxes (OBBs)
  • Can be good for individual models
  • Not great for organizing entire scenes

Octrees
 

- Start by placing a cube around the entire scene
- If the cube contains “too many” primitives (say, 10):
  • split it equally into 8 nested cubes
  • recursively test and possibly subdivide each of those cubes
- More regular structure than the sphere tree
  • Provides a clear rule for subdivision and no overlap between cells
  • This usually makes it a better choice than a sphere hierarchy
- But still not ideal; lots of empty cubes
- (A construction sketch follows below.)

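A minimal construction sketch along these lines; AABB, Primitive, and the overlaps() test are illustrative assumptions, and the depth cap is just a safety stop.

    #include <vector>

    struct Vec3 { double x, y, z; };
    struct AABB { Vec3 lo, hi; };
    struct Primitive;                              // assumed elsewhere
    bool overlaps(const AABB&, const Primitive&);  // assumed elsewhere

    struct OctreeNode {
        AABB cube;
        std::vector<const Primitive*> prims;
        OctreeNode* children[8] = {};
    };

    void build(OctreeNode& node, int maxPrims, int maxDepth) {
        if ((int)node.prims.size() <= maxPrims || maxDepth == 0)
            return;                                // few enough: stop
        Vec3 mid = {(node.cube.lo.x + node.cube.hi.x) / 2,
                    (node.cube.lo.y + node.cube.hi.y) / 2,
                    (node.cube.lo.z + node.cube.hi.z) / 2};
        for (int i = 0; i < 8; ++i) {
            // Child cube i keeps the low or high half along each axis.
            AABB c = node.cube;
            (i & 1 ? c.lo.x : c.hi.x) = mid.x;
            (i & 2 ? c.lo.y : c.hi.y) = mid.y;
            (i & 4 ? c.lo.z : c.hi.z) = mid.z;
            node.children[i] = new OctreeNode{c};
            for (const Primitive* p : node.prims)
                if (overlaps(c, *p))
                    node.children[i]->prims.push_back(p);
            build(*node.children[i], maxPrims, maxDepth - 1);
        }
        node.prims.clear();  // primitives now live in the children
    }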

Octree
(2D illustration is a quadtree)


KD Trees
 

- Place a box (not necessarily a cube) around the entire scene
- If the box contains too many primitives:
  • split it into two boxes; the boxes need not be equal
  • split in the x, y, or z direction, at some arbitrary position within the box
  • heuristics to choose the split direction & position
- Tighter fit than an octree
  • Adapts to irregular geometry
- Pretty good for ray tracing
- Main drawback: the tree can get deep
  • Lots of time traversing the tree itself
- (Called “KD” because it works the same in any number of dimensions)
- (A build sketch follows below.)

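A minimal build sketch, assuming a crude cycle-the-axes median split; real builders use smarter heuristics (e.g. the surface area heuristic) and must also handle primitives that straddle the splitting plane, which is ignored here for brevity.

    #include <algorithm>
    #include <utility>
    #include <vector>

    struct Primitive { double centroid[3]; /* ... geometry ... */ };

    struct KDNode {
        int axis = -1;                  // -1 marks a leaf
        double split = 0;               // plane position along the axis
        KDNode* child[2] = {};          // below / above the plane
        std::vector<Primitive*> prims;  // filled only at leaves
    };

    KDNode* build(std::vector<Primitive*> prims, int depth, int maxPrims) {
        KDNode* node = new KDNode;
        if ((int)prims.size() <= maxPrims) {   // few enough: make a leaf
            node->prims = std::move(prims);
            return node;
        }
        int axis = depth % 3;                  // crude heuristic: cycle x, y, z
        std::size_t mid = prims.size() / 2;    // split at the median centroid
        std::nth_element(prims.begin(), prims.begin() + mid, prims.end(),
            [axis](const Primitive* a, const Primitive* b) {
                return a->centroid[axis] < b->centroid[axis]; });
        node->axis = axis;
        node->split = prims[mid]->centroid[axis];
        node->child[0] = build(std::vector<Primitive*>(prims.begin(),
                               prims.begin() + mid), depth + 1, maxPrims);
        node->child[1] = build(std::vector<Primitive*>(prims.begin() + mid,
                               prims.end()), depth + 1, maxPrims);
        return node;
    }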

KD Tree


BSP Trees
  

- Binary Space Partitioning (BSP) tree
- Start with all of space
- If there are too many objects, split into two subspaces:
  • choose a plane to divide the space in two
  • the plane can be placed anywhere and oriented in any direction
  • heuristics to choose a good plane
  • recurse to children
- Similar to a KD tree: recursively splits space into two (unequal) parts
  • Potential to more tightly bound objects
  • Harder to choose the splitting plane
  • Harder to work with arbitrary-shaped regions
- In practice, BSP trees tend to perform well for ray tracing

BSP Tree


Uniform Grids
 

- Divide space into a uniform grid, instead of hierarchically
- Use ray marching to test the cells
  • Don't need to test intersection against each grid cell
  • Find the cell where the ray enters the grid
  • Test all objects in the current cell
  • If we intersected an object, we're done
  • Else, move to the next cell the ray passes through
- Uniform grids can be very fast, or can be slow and a waste of memory
  • Depends on the distribution of objects into cells
  • Need to choose the grid size properly
  • No good distribution if the scene has large variation in object size and location
- Uniform grids not a practical general-purpose solution
- (A marching sketch follows below.)

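A minimal sketch of that marching loop, the standard 3D DDA; the grid is assumed to cover the unit cube with n cells per axis, the ray origin is assumed to start inside the grid with nonzero direction components, and testObjectsInCell() is an assumed callback.

    #include <cmath>

    struct Vec3 { double x, y, z; };
    struct Ray  { Vec3 o, d; };

    bool testObjectsInCell(int ix, int iy, int iz);  // assumed elsewhere

    // March an n x n x n grid over [0,1)^3, visiting cells in the order
    // the ray passes through them; returns true as soon as a cell
    // reports an intersection.
    bool march(const Ray& r, int n) {
        double cell = 1.0 / n;
        int ix = (int)(r.o.x / cell), iy = (int)(r.o.y / cell),
            iz = (int)(r.o.z / cell);
        int sx = r.d.x > 0 ? 1 : -1, sy = r.d.y > 0 ? 1 : -1,
            sz = r.d.z > 0 ? 1 : -1;
        // Ray parameter t at the next cell boundary on each axis:
        double tx = ((ix + (sx > 0)) * cell - r.o.x) / r.d.x;
        double ty = ((iy + (sy > 0)) * cell - r.o.y) / r.d.y;
        double tz = ((iz + (sz > 0)) * cell - r.o.z) / r.d.z;
        // Parameter distance between successive boundary crossings:
        double dx = cell / std::fabs(r.d.x), dy = cell / std::fabs(r.d.y),
               dz = cell / std::fabs(r.d.z);
        while (ix >= 0 && ix < n && iy >= 0 && iy < n && iz >= 0 && iz < n) {
            if (testObjectsInCell(ix, iy, iz)) return true;  // hit: done
            // Step into whichever neighboring cell the ray enters first.
            if (tx <= ty && tx <= tz) { ix += sx; tx += dx; }
            else if (ty <= tz)        { iy += sy; ty += dy; }
            else                      { iz += sz; tz += dz; }
        }
        return false;  // ray left the grid without hitting anything
    }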

Uniform Grid


Ray Marching

(Figure: grid cells numbered 1 through 6, in the order the ray passes through them.)

Hierarchical Grids
 

- Start with a uniform grid
- If any cell has too many primitives:
  • subdivide that cell into a grid
  • the subgrid can have any number of cells
  • recurse if needed
- (An octree is a hierarchical grid limited to 2×2×2 subdivision.)
- Hierarchical grids can perform very well

Acceleration Structures
   

- Ray tracers always use acceleration structures to make the algorithm feasible
- No one “best” structure
- Ongoing research into new structures and new ways of using existing structures
- Considerations include:
  • Memory overhead of the data structure
  • Preprocessing time to construct the data structure
  • Ability to optimize well, given the machine architecture
  • For animation: ability to update the data structure as objects move

Outline for today
 



- Speeding up Ray Tracing
- Antialiasing
- Stochastic Ray Tracing

Texture Minification
 

- Remember texture minification
  • a texture-mapped triangle is small or far away
  • many texels land in a single pixel
- Point-sample the texture?
  • i.e., take the texel at the center of the pixel
  • misses lots of detail
  • causes “shimmering” or “buzzing”, especially noticeable if the object or view moves
- Solution was to filter
- Texture buzzing is an example of aliasing
  • Filtering the texture is an example of antialiasing

Small Triangles


- Aliasing when triangles are very small
  • About the size of a pixel, or smaller
- Scan conversion: point sampling
  • Pixel color due to the triangle that hits the center of the pixel
  • Can miss triangles
  • Can have gaps if a narrow triangle misses the pixel centers
  • If the view or object moves, can flicker as triangles cross pixel centers

Jaggies


- Aliasing when drawing a diagonal line on a square grid:
  • stairstepping, AKA jaggies
- Especially noticeable:
  • on high-contrast edges
  • near-horizontal or near-vertical edges
- As the line rotates (in 2D):
  • steps change length
  • corners of steps slide along the edge
  • known as crawlies

Moiré Patterns


- Aliasing when rendering high-detail regular patterns
  • can see concentric curve patterns, known as Moiré patterns
  • caused by interference between the pattern and the pixel grid
- Also in real life: hold two window screens in front of each other

Strobing
 

- Consider a 30 frame-per-second animation of a spinning propeller
- If the propeller is spinning at 1 rotation per second:
  • each frame shows the propeller rotated 12 degrees more than the previous
  • looks OK
- If the propeller is spinning at 30 rotations per second:
  • each image shows the propeller rotated 360 degrees, i.e. in the same place as the previous frame
  • i.e. the propeller appears to stand still
- If the propeller is spinning at:
  • 31 rotations per second: will appear to rotate slowly forwards
  • 29 rotations per second: will appear to rotate slowly backwards
- Example of strobing problems
  • temporal aliasing, caused by point-sampling the motion in time

Aliasing


- These examples cover a wide range of problems…
- …but they all result from essentially the same thing
- The image we are making is trying to represent a continuous signal
  • The “true” image color is a function that varies with continuous X & Y (and time) values
- For digital computation, our standard approach is to:
  • sample the original signal at discrete points (pixel centers, texels, or wherever)
  • use the samples to reconstruct a new signal, which we present to the audience
- Want the audience to perceive the new signal the same as they would the original
  • Unfortunately, the sampling/reconstruction process causes some data to be misrepresented
  • Hence the term alias: some part of the signal masquerading as something else
  • Often refer to instances of these problems as artifacts or aliasing artifacts
- Antialiasing: trying to avoid aliasing problems. Three basic approaches:
  • Modify the original data so that it won't have properties that cause aliasing
  • Use more sophisticated sampling/reconstruction techniques
  • Clean up the artifacts after the fact

Signal Analysis


- Signal analysis: the field that studies these problems in pure form
  • Applies also to digital audio, electrical engineering, radio, …
  • Artifacts are different, but the theory is the same
  • Includes a variety of mathematical and engineering methods for working with signals: Fourier analysis, sampling theory, filtering, digital signal processing (DSP), …
- Kinds of signals:
  • electrical: a voltage changing over time. 1D signal: e = f(t)
  • audio: sound pressure changing over time. 1D signal: a = f(t)
  • computer graphics image: color changing over space. 2D signal: c = f(x, y)
  • computer graphics animation: color changing over space & time. 3D signal: c = f(x, y, t)
- Examples and concepts typically shown for a scalar 1D signal
  • but they extend to more dimensions for the signal parameters
  • and to more dimensions for the signal value

Sampling


- Think of the ideal image as perfect triangles in continuous (floating-point) device space
  • Then we are thinking of our image as a continuous signal
  • The continuous image has infinite resolution
  • Edges of triangles are perfect straight lines
- To render this image onto a regular grid of pixels:
  • We employ some sort of discrete sampling technique
  • Examine the original continuous image and sample it onto a finite-resolution grid of pixels
- If the signal represents the red intensity of our virtual scene along some horizontal line, the sampled version consists of a row of discrete 8-bit red values
- This is similar to what happens when a continuous analog sound signal is digitally sampled onto a CD

Reconstruction
 

- Once we have our sampled signal, we then reconstruct it
- In the case of computer graphics, this reconstruction takes place as a bunch of colored pixels on a monitor
- In the case of CD audio, the reconstruction happens in a DAC (digital-to-analog converter) and then finally in the physical movements of the speaker itself

Reconstruction Filters


- Filtering happens at the reconstruction phase:
  • raw sample data isn't used as-is
  • the real world isn't discrete
- Some filtering is due to the device, medium, and observer
  • Pixels of a monitor aren't perfect squares or points of uniform color; they have some shape and distribution over space
  • The human eye filters so that a grid of pixels appears to be a continuous image
  • In audio, the loudspeaker has physical limitations on its movement
- But we also introduce more filtering to help get the right result
  • In audio: digital processing or analog circuitry
  • In computer graphics: techniques such as bilinear or bicubic filtering

Low Frequency Signals



(Figure: an original signal; point sampled at a relatively high frequency; the reconstructed signal.)

High Frequency Signals


(Figure: an original signal; point sampled at a relatively low frequency; the reconstructed signal.)

Regular Signals



(Figure: an original repeating signal; point sampled at a relatively low frequency; the reconstructed signal repeats at an incorrect frequency: the result frequency aliases the original frequency.)

Nyquist Limit


- Any signal can be considered as a sum of signals with varying frequencies
  • That's what an equalizer or spectrum display on an audio device shows
- In order to correctly reconstruct a signal whose highest frequency is x:
  • the sampling rate must have frequency at least 2x
  • This is known as the Sampling Theorem (AKA the Nyquist Sampling Theorem, AKA the Nyquist-Shannon Sampling Theorem)
  • The 2x sampling frequency is known as the Nyquist frequency or Nyquist limit
- Frequencies below the Nyquist limit come through OK
- Frequencies above the Nyquist limit come through as lower-frequency aliases, mixed in with the data
- (A small numeric example follows below.)

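A tiny numeric check of this rule, not from the slides: a 29 Hz sinusoid point-sampled at 30 samples per second yields exactly the samples of a (sign-flipped) 1 Hz sinusoid, mirroring the propeller example.

    #include <cmath>
    #include <cstdio>

    int main() {
        const double kPi = 3.141592653589793;
        const double fs = 30.0;  // sampling rate (Hz); Nyquist limit = 15 Hz
        for (int n = 0; n < 8; ++n) {
            double t = n / fs;
            // 29 Hz is above the limit; at these sample times its values
            // coincide (up to sign) with a 1 Hz signal: the alias.
            std::printf("t=%.3f  29Hz=%+.4f  1Hz=%+.4f\n", t,
                        std::sin(2 * kPi * 29.0 * t),
                        std::sin(2 * kPi * 1.0 * t));
        }
    }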

Nyquist Limit


- In images, having high (spatial) frequencies means:
  • having lots of detail
  • having sharp edges
- Basic way to avoid aliasing: choose a sampling rate higher than the Nyquist limit
  • This assumes we are doing idealized sampling and reconstruction
  • In practice, better to sample at least 4x
- But in practice, we don't always know the highest frequency
  • In fact, we might not have an upper limit!
  • E.g. a checkerboard pattern receding to the horizon in perspective
  • Spatial frequency is infinite
  • Must use antialiasing techniques

Aliasing Problems, summary


- Shimmering / buzzing: rapid pixel color changes (flickering) caused by high-detail textures or high-detail geometry. Ultimately due to point sampling of high-frequency color changes at low-frequency pixel intervals.
- Stairstepping / jaggies: noticeable stairstep edges on high-contrast edges that are nearly horizontal or vertical. Due to point sampling of effectively infinite-frequency color changes (the step gradient at the edge of a triangle).
- Moiré patterns: strange swimming patterns that show up on regular patterns. Due to sampling of regular patterns on a regular pixel grid.
- Strobing: incorrect or discontinuous motion in fast-moving animated objects. Due to low-frequency sampling of regular motion at regular time intervals. (Temporal aliasing.)

Point Sampling
  

- The aliasing problems we've seen are due to low-frequency point sampling of high-frequency information
- With point sampling, we sample the original signal at precise points (pixel centers, etc.)
- Is there a better way to sample continuous signals?

Box Sampling
  

- We could also do a hypothetical box sampling of our image
- Each triangle contributes to the pixel color based on the area of the triangle within the pixel
- The area is equally weighted across the pixel

Pyramid Sampling
  

- Alternately, we could use a weighted sampling filter such as a pyramid filter
- The pyramid filter considers the area of triangles in the pixel, but weights them according to how close they are to the center of the pixel
- The pyramid base can be wider than a pixel
  • neighboring values influence the pixel
  • minimizes abrupt changes

Sampling Filters
 

- We could potentially use any one of several different sampling filters
  • Common options include the point, box, pyramid, cone, and Gaussian filters
  • Different filters perform differently in different situations
  • The best all-around sampling filters tend to be Gaussian in shape
- The filters aren't necessarily limited to cover only one pixel
  • Commonly extend slightly outside, overlapping the neighboring pixels
  • If the filter covers less than the square pixel, we will have problems like point sampling
- Trying to strike a balance between:
  • eliminating unwanted alias frequencies (antialiasing)
  • eliminating wanted frequencies (blurring)

Edge Antialiasing


Pixel Coverage


- Various antialiasing algorithms exist to color the pixel based on the exact area of the pixel that a triangle covers
- But, without storing a lot of additional information per pixel, it is very hard (or impossible) to properly handle the case of several triangle edges in a single pixel
  • Impractical to make a coverage-based scheme compatible with z-buffering
  • Can do better if triangles are sorted back to front
- Coverage approaches not generally used in practice for rendering
  • Still apply to things such as font filtering

Supersampling


- A more popular method (although less elegant) is supersampling:
  • Point sample the pixel at several locations
  • Combine the results into the final pixel color
- By sampling more times per pixel:
  • Raises the sampling rate
  • Raises the frequencies we can capture
- Commonly use 16 or more samples per pixel
  • Requires the frame buffer and z-buffer to be 16 times as large
  • Requires potentially 16 times as much work to generate the image
- A brute-force approach
  • But straightforward to implement
  • Very powerful

Uniform Sampling
  

- Divide each pixel into a uniform grid of subpixels
- Sample at the center of each subpixel
- Generates better quality images than single point sampling
  • Filters out some higher-than-one-pixel-frequency data
  • Nicely smooths lines and edges
- But frequencies higher than the Nyquist limit will still alias
  • Regular high-frequency signals will still show Moiré patterns

Random Sampling
 

- Supersample at several randomly located points
- Breaks up repeating signals
  • Eliminates Moiré patterns
  • Instead of aliasing, frequencies greater than one per pixel appear as noise in the image
  • The human eye is pretty good at filtering out noise
  • Noise tends to be less objectionable to the viewer than jaggies or Moiré patterns
- But suffers from potential clustering and gaps of samples
  • The result is not necessarily accurate
  • Too much noise

Jittered Sampling
 

- AKA stratified sampling
- Divide the pixel into a grid of subpixels
- Sample each subpixel at a random location within it
- Combines the advantages of both uniform and random sampling
  • filters high frequencies
  • frequencies greater than the subpixel sampling rate are turned into noise
- Commonly used
- (A sample-generation sketch follows below.)

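A minimal sketch of generating one pixel's jittered samples; the Vec2 type is an illustrative assumption.

    #include <random>
    #include <vector>

    struct Vec2 { double x, y; };

    // Generate n*n jittered samples in the unit pixel [0,1)x[0,1):
    // one uniformly random sample inside each 1/n x 1/n subpixel cell.
    std::vector<Vec2> jitteredSamples(int n, std::mt19937& rng) {
        std::uniform_real_distribution<double> u(0.0, 1.0);
        std::vector<Vec2> samples;
        samples.reserve(n * n);
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < n; ++j)
                samples.push_back({(i + u(rng)) / n, (j + u(rng)) / n});
        return samples;
    }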

Reconstruction filter
 

- Take the average of all samples: box filter
- Take a weighted average of samples: other filters
  • weight according to a box, cone, pyramid, Gaussian, etc.
- Can apply weighting to uniform, random, or jittered supersamples
  • little additional work
- (A weighted-average sketch follows below.)

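A minimal weighted-average sketch using a Gaussian filter over one pixel's supersamples; the types and the default sigma are illustrative assumptions.

    #include <cmath>
    #include <vector>

    struct Vec2  { double x, y; };
    struct Color { double r, g, b; };

    // Weighted average of one pixel's samples, with weights taken from
    // a Gaussian centered on the pixel center (0.5, 0.5) in pixel-local
    // coordinates; sigma controls how far the filter's influence reaches.
    Color reconstruct(const std::vector<Vec2>& pos,
                      const std::vector<Color>& col, double sigma = 0.5) {
        Color sum{0, 0, 0};
        double wsum = 0;
        for (int k = 0; k < (int)pos.size(); ++k) {
            double dx = pos[k].x - 0.5, dy = pos[k].y - 0.5;
            double w = std::exp(-(dx * dx + dy * dy) / (2 * sigma * sigma));
            sum.r += w * col[k].r; sum.g += w * col[k].g; sum.b += w * col[k].b;
            wsum += w;
        }
        return {sum.r / wsum, sum.g / wsum, sum.b / wsum};
    }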

Weighted Distribution
  

- Jittered supersampling with Gaussian filtering does well
- Because of the filter weights, some samples have more influence on the image than others
  • e.g. with 16 samples, the 4 samples in the center can have higher total weight than the 12 others
- But:
  • We're paying the same computational price for samples that don't contribute much
  • We're giving as much attention to the regions that don't contribute much
- Instead, adjust the distribution
  • Put more samples in the areas that contribute more highly
  • Get more accuracy for the same amount of work
  • Known as importance sampling

Adaptive Sampling
  

- A more sophisticated option is to perform adaptive sampling
- Start with a small number of samples
- Analyze their statistical variation:
  • If the colors are all similar, we accept that we have an accurate sampling
  • If the colors have a large variation, take more samples
  • Continue until the statistical error is within an acceptable tolerance
- Varying amount of work per pixel
  • Concentrates work where the image is “hard”
- Tricky to add samples while keeping a good distribution
  • But possible!
- Used in practice, especially in research renderers
- (A per-pixel sketch follows below.)
 

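A minimal per-pixel sketch of the idea, assuming a hypothetical shade() callback and a simple variance-of-the-mean stopping rule; the batch size, tolerance, and sample cap are all illustrative choices.

    #include <functional>
    #include <random>

    // Adaptively sample one pixel's luminance: keep taking small batches
    // of random samples until the variance of the running mean is within
    // tolerance. shade(x, y) returns the luminance at a subpixel point.
    double adaptiveLuminance(const std::function<double(double, double)>& shade,
                             std::mt19937& rng,
                             double tol = 0.01, int maxSamples = 64) {
        std::uniform_real_distribution<double> u(0.0, 1.0);
        double sum = 0, sumSq = 0;
        int n = 0;
        while (n < maxSamples) {
            for (int k = 0; k < 4; ++k, ++n) {      // batch of 4 samples
                double v = shade(u(rng), u(rng));
                sum += v; sumSq += v * v;
            }
            double mean = sum / n;
            double var = sumSq / n - mean * mean;   // sample variance
            if (var / n < tol * tol) break;         // error small enough
        }
        return sum / n;
    }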

Semi-Jittered Sampling
  

- Can apply a unique jittering pattern for each pixel (fully jittered) or re-use the pattern for all of the pixels (semi-jittered)
- Both are used in practice
- Semi-jittering advantages:
  • potential performance advantages
  • can preselect a good distribution
  • straight edges look cleaner
- Semi-jittering disadvantages:
  • re-admits subtle Moiré patterns because of the semi-regularity of the grid

Mipmapping & Pixel Antialiasing
 

- Mipmapping and other texture filtering techniques reduce texture aliasing problems
- Combine mipmapping with pixel supersampling:
  • Choose mipmap levels based on subpixel size
  • Gets better edge-on behavior than mipmapping alone
  • But it's expensive to compute shading at every supersample
- Hybrid approach:
  • Assume that mipmapping and filters in procedural shaders minimize aliasing at pixel scale
  • Compute only a single shading sample per pixel
  • Still supersample the scan conversion and z-buffer
  • Gives the edge antialiasing of supersampling and the texture filtering of mipmapping
  • Doesn't require the cost of full supersampling
- GPU hardware often does this:
  • Requires increased framebuffer/z-buffer memory
  • But doesn't slow down performance much
  • Works pretty well

Motion Blur
 

- Generally speaking: the eye normally blurs moving objects
  • Looks cool in static images
  • Improves perceived quality of animation
  • Details depend on display technology (film vs. CRT vs. LCD vs. …)
- Animation is a sequence of still frames
  • A sequence of unblurred still frames looks strangely unnatural
  • E.g. old Sinbad movies with stop-motion monsters
- If objects in each frame are blurred in the direction of motion, it is easier for the brain to reconstruct the continuous object
  • In Dragonslayer (1981), go-motion was introduced: the model moved with the camera shutter open
  • Noticeably better quality, even if most people didn't know why
  • In CG special effects, motion blur is always computed

Motion Blur


- Spatial antialiasing:
  • Increase the spatial resolution and filter the results
  • Pixels slightly blurred where there are spatially-varying parts
- Temporal antialiasing:
  • Increase the temporal resolution and filter the results
  • Image blurred where there are temporally-varying parts
- Brute force: supersample the entire image in time
  • For each frame of animation:
  • render several (say 16) images spaced over the frame time
  • combine them into the final image
- Techniques also exist to do this per-sample…

Outline for today
 



- Speeding up Ray Tracing
- Antialiasing
- Stochastic Ray Tracing

Stochastic Ray Tracing
 

- Introduced in 1984 (Cook, Porter, Carpenter)
- AKA distributed ray tracing, AKA distribution ray tracing
  • (originally called “distributed”, but now that term refers to parallel processing)
- The basic idea is to shoot more rays
  • with values having an appropriate random distribution
  • i.e., stochastically
- Technique for achieving various fancy effects:
  • Antialiasing
  • Motion Blur
  • Soft Shadows / Area Lights
  • Blurry Reflections
  • Camera Focus / Depth of Field
  • …

Antialiasing


- Supersampling can easily be implemented in ray tracing
  • we're creating whatever rays we want
  • we can create as many as we want and aim them wherever we want
  • can easily implement an area-weighted jittered Gaussian distribution
- (Jittered sampling was actually introduced to computer graphics by the 1984 Cook et al. distributed ray tracing paper)
- (A per-pixel loop sketch follows below.)

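A minimal sketch of such a per-pixel loop; Camera, Scene, and trace() are assumed interfaces, not the course's code, and the samples are combined here with a plain box filter (a Gaussian weighting like the one sketched earlier could be swapped in).

    #include <random>

    struct Vec3  { double x, y, z; };
    struct Ray   { Vec3 origin, dir; };
    struct Color { double r, g, b; };
    struct Scene;                                       // assumed elsewhere
    struct Camera { Ray generateRay(double sx, double sy) const; }; // assumed
    Color trace(const Scene& scene, const Ray& ray);    // assumed elsewhere

    Color renderPixel(const Camera& cam, const Scene& scene,
                      int px, int py, int n, std::mt19937& rng) {
        std::uniform_real_distribution<double> u(0.0, 1.0);
        Color sum{0, 0, 0};
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < n; ++j) {
                // Jitter each sample within its subpixel cell.
                Color c = trace(scene, cam.generateRay(px + (i + u(rng)) / n,
                                                       py + (j + u(rng)) / n));
                sum.r += c.r; sum.g += c.g; sum.b += c.b;
            }
        double inv = 1.0 / (n * n);  // box filter: plain average
        return {sum.r * inv, sum.g * inv, sum.b * inv};
    }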

Motion Blur


- Assume we know the motion of our objects as a function of time
  • Given a value of time, we can look up the position of each object
  • (At least within the current frame)
- Distribute rays in time
  • Give each ray a time value
  • E.g., a jittered time distribution during the “shutter open” interval
- For intersection testing, use the object's position at the ray's time
- Combining the ray colors:
  • if the object is moving, the result is motion blur
  • if the object isn't moving, all values will be the same: no blur
  • seems like this case is a waste of effort, but turns out OK…
- (A time-sampling sketch follows below.)

(Figure: the first CG image with motion blur, from the 1984 Cook et al. paper.)

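A minimal sketch of giving rays jittered time values; sampleTime() and the positionAt() interface mentioned in the comments are illustrative names, not an actual API.

    #include <random>

    struct Ray {
        // ... origin and direction as usual ...
        double time;  // sampled within [shutterOpen, shutterClose]
    };

    // Jittered (stratified) time for sample i of nSamples: sample i
    // lands somewhere in the i-th sub-interval of the shutter interval.
    double sampleTime(int i, int nSamples, double shutterOpen,
                      double shutterClose, std::mt19937& rng) {
        std::uniform_real_distribution<double> u(0.0, 1.0);
        double s = (i + u(rng)) / nSamples;
        return shutterOpen + s * (shutterClose - shutterOpen);
    }

    // During intersection testing, a moving object is queried at the
    // ray's time, e.g. (assumed interface):
    //   Transform xf = object.positionAt(ray.time);
    //   hit = intersect(xf.toLocal(ray), object.geometry);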

Area Lights


- Traditional CG point light sources are unrealistic:
  • Harsh lighting
  • Sharp highlights
  • Hard shadows
- Real lights have some shape to them
  • Light emitted from some area
  • Softens the lighting on objects
  • Gives shape to highlights
  • Creates soft shadows
  • (CG researchers talk mostly about soft shadows; the other features are subtle but do affect lighting quality)

(Images: www.imagearts.ryerson.ca)

Area Lights
 

- Instead of having a single direction vector for a light source, send rays distributed across the surface of the light
  • Each ray may be blocked by an intervening object
  • Otherwise, compute the illumination based on that ray's direction
- Each ray contributes to the total lighting on the surface point
  • If all rays are blocked, we won't get any light: full shadow (umbra)
  • If some rays are blocked, we will get some light: penumbra
  • If no rays are blocked, fully lit
- Notes:
  • The ray distribution should cover the surface of the light evenly (though it can be jittered)
  • Hard to create distributions for arbitrary shapes; typically use lines, disks, rectangles, etc.
  • Can need lots of samples to avoid noise in the penumbra or in specular highlights
- (A soft-shadow sampling sketch follows below.)


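A minimal soft-shadow sketch over a rectangular area light; Scene, occluded(), and the light's corner/edge vectors are illustrative assumptions, and the samples are jittered across the light's surface.

    #include <random>

    struct Vec3 { double x, y, z; };
    static Vec3 operator*(Vec3 a, double s) { return {a.x*s, a.y*s, a.z*s}; }
    static Vec3 operator+(Vec3 a, Vec3 b) { return {a.x+b.x, a.y+b.y, a.z+b.z}; }

    struct Scene;                                    // assumed elsewhere
    bool occluded(const Scene&, Vec3 from, Vec3 to); // assumed shadow-ray test

    // Fraction of the light visible from point p:
    // 0 = umbra, 1 = fully lit, in between = penumbra.
    double lightVisibility(const Scene& scene, Vec3 p, Vec3 corner,
                           Vec3 edgeU, Vec3 edgeV, int n, std::mt19937& rng) {
        std::uniform_real_distribution<double> u(0.0, 1.0);
        int unblocked = 0;
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < n; ++j) {
                // Jittered point on the rectangular light's surface.
                Vec3 q = corner + edgeU * ((i + u(rng)) / n)
                                + edgeV * ((j + u(rng)) / n);
                if (!occluded(scene, p, q)) ++unblocked;
            }
        return double(unblocked) / (n * n);
    }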

Glossy Reflections


- Distribute rays about the ideal reflection direction
  • Blurry surfaces will have a wider distribution (and will need more rays)
  • Polished surfaces will have a narrower distribution
- Combine rays weighted according to the BRDF (e.g. Phong)
- (A direction-sampling sketch follows below.)

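A minimal sketch of sampling directions around the mirror direction with a Phong-style lobe (a standard sampling formula: cos θ = u^(1/(n+1))); the Vec3 helpers are inlined for self-containment, and all names are illustrative.

    #include <cmath>
    #include <random>

    struct Vec3 { double x, y, z; };
    static Vec3 operator*(Vec3 a, double s) { return {a.x*s, a.y*s, a.z*s}; }
    static Vec3 operator+(Vec3 a, Vec3 b) { return {a.x+b.x, a.y+b.y, a.z+b.z}; }
    static Vec3 cross(Vec3 a, Vec3 b) {
        return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
    }
    static Vec3 normalize(Vec3 a) {
        double l = std::sqrt(a.x*a.x + a.y*a.y + a.z*a.z);
        return {a.x/l, a.y/l, a.z/l};
    }

    // Sample a direction around mirrorDir with a cos^exponent lobe:
    // higher exponent = narrower lobe = more polished surface.
    Vec3 sampleGlossy(Vec3 mirrorDir, double exponent, std::mt19937& rng) {
        const double kPi = 3.141592653589793;
        std::uniform_real_distribution<double> u(0.0, 1.0);
        double cosT = std::pow(u(rng), 1.0 / (exponent + 1.0));
        double sinT = std::sqrt(1.0 - cosT * cosT);
        double phi = 2.0 * kPi * u(rng);
        // Orthonormal basis (t, b, mirrorDir) around the lobe axis.
        Vec3 up = std::fabs(mirrorDir.x) < 0.9 ? Vec3{1, 0, 0} : Vec3{0, 1, 0};
        Vec3 t = normalize(cross(up, mirrorDir));
        Vec3 b = cross(mirrorDir, t);
        return normalize(t * (sinT * std::cos(phi)) +
                         b * (sinT * std::sin(phi)) + mirrorDir * cosT);
    }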

Translucency
 

- Like glossy reflection, but for refraction
- Distribute rays about the ideal refraction direction

Depth of Field


- With a camera lens, only objects at the focal distance are sharp
  • those closer or farther are blurred
  • depth of field refers to the zone of acceptable sharpness
  • In CG, “depth of field” refers to rendering that includes the lens focus/blurring effect
- Amount of blurring depends on the aperture (how wide open the lens is)
  • With a pinhole camera, there's no blurring
  • With a wider aperture, blurring increases
- Distribute rays across the aperture
  • Can trace them through a real lens model, or something simpler
- For an object at the focal distance, whatever paths the rays take, all will reach the same spot on the object
  • all rays will have the same color value (specular highlights might blur slightly, since they depend on eye direction)
  • the object will be sharp
- For an object outside the depth of field, the different rays will hit different spots on the object
  • combining the rays will yield a blur
- (A lens-sampling sketch follows below.)

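A minimal thin-lens sketch of this: jitter the ray origin across a disk-shaped aperture while keeping the focal point fixed, so objects at the focal distance stay sharp. The camera frame and all names are illustrative assumptions.

    #include <cmath>
    #include <random>

    struct Vec3 { double x, y, z; };
    struct Ray  { Vec3 origin, dir; };
    static Vec3 operator*(Vec3 a, double s) { return {a.x*s, a.y*s, a.z*s}; }
    static Vec3 operator+(Vec3 a, Vec3 b) { return {a.x+b.x, a.y+b.y, a.z+b.z}; }
    static Vec3 operator-(Vec3 a, Vec3 b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
    static Vec3 normalize(Vec3 a) {
        double l = std::sqrt(a.x*a.x + a.y*a.y + a.z*a.z);
        return {a.x/l, a.y/l, a.z/l};
    }

    // pixelDir: normalized direction from the eye through the pixel.
    Ray sampleLensRay(Vec3 eye, Vec3 pixelDir, double focalDist,
                      double apertureRadius, Vec3 camRight, Vec3 camUp,
                      std::mt19937& rng) {
        const double kPi = 3.141592653589793;
        std::uniform_real_distribution<double> u(0.0, 1.0);
        // Uniform point on the disk-shaped aperture (sqrt: uniform by area).
        double r = apertureRadius * std::sqrt(u(rng));
        double phi = 2.0 * kPi * u(rng);
        Vec3 origin = eye + camRight * (r * std::cos(phi))
                          + camUp * (r * std::sin(phi));
        // Every lens ray for this pixel passes through the focal point, so
        // geometry at the focal distance stays sharp; everything else blurs.
        Vec3 focal = eye + pixelDir * focalDist;
        return {origin, normalize(focal - origin)};
    }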

Stochastic Ray Tracing






- Ray tracing had a big impact on computer graphics in 1980, with the first images of accurate reflections and refractions from curved surfaces
- Distribution ray tracing had an even bigger impact in 1984, as it reaffirmed the power of the basic ray tracing technique and added a whole bunch of sophisticated effects, all within a consistent framework
- Previously, techniques such as depth of field, motion blur, and soft shadows had only been achieved individually, using a variety of complex, hacky algorithms

Stochastic Ray Tracing


- Many more rays!
  • 16 samples for antialiasing × 16 samples for motion blur × 16 samples for depth of field × 16 rays for glossy reflections × … ?
  • Exponential explosion of the number of rays
- Good news: don't need extra primary rays per pixel
  • Can combine distributions
  • E.g. 16 rays in a 4×4 jittered supersampling pattern
  • Give each ray a different time and position in the aperture
- OK news: can get by with relatively few secondary rays
  • For area lights or glossy reflection/refraction
  • The 16 primary rays will be combined; each can get by with only a few secondary rays
- Still, we need more rays:
  • Slower
  • Insufficient sampling leads to noise
  • Particularly noticeable for soft or blurry features
  • Techniques such as importance sampling help minimize the noise

Global illumination
  

- Conceptually simple extension to ray tracing:
  • Take into account bouncing from diffuse objects: every surface is a light source!
  • Take into account light passing through objects: caustics
  • Send secondary rays in all directions, accumulate all contributions
  • In practice that would take too many rays and be very noisy
- Monte Carlo integration / path tracing:
  • Find multi-step paths from light sources through the scene to the camera
  • Numerical techniques to randomly choose rays/paths
  • Weighting/importance sampling to minimize noise, maximize efficiency
- Photon maps:
  • Optimize by storing an intermediate distribution of light energy
- (Also, radiosity computation:
  • diffuse light bouncing between all objects
  • uses numerical simultaneous-equation solvers)

Done
 

- Next class: Final project discussion
- Upcoming classes: Guest lectures! Cool demos!

