
                                                                           Ningshan (Simon) Chen
                                                                                   Andrew Vite
                                                                                     Tsoi Chan
                                                                                     CAP 3027
                                                                            September 24, 2007
                                                                     Group Exercise #2 (revised)




Honesty Pledge

I hereby agree that the work I am submitting is my own and that the Honor Code was neither

bent nor broken.                      __________________________________

                                     ___________________________________

                                     ___________________________________



Learning Experience

The easiest part of the assignment was developing the five topics. The hardest part was reading
the technical aspects of some of the rendering techniques and understanding them enough to
provide a clear and concise explanation. Some of the explanations were too lengthy and
complicated to understand, so in those instances, a general description of the idea was provided
instead. The slightly more thorough research done on this project (as opposed to the last)
provided us with more insight into the nature of our topic and how complicated it actually is.
                                                                           Ningshan (Simon) Chen
                                                                                     Andrew Vite
                                                                                       Tsoi Chan
                                                                                       CAP 3027
                                                                              September 12, 2007
                                                                                Group Exercise #2



One major area of Digital Arts & Sciences today is Real-Time Rendering: graphics rendered
and displayed in response to changes as they occur, usually driven by an outside source such
as a person. Our topics of discussion pertaining to real-time rendering include polygons and
NURBS, ray tracing, high-dynamic range lighting, motion blur and depth of field, and the piece
of hardware that makes the rendering possible, the graphics processing unit.

The Backbone of 3D Graphics: Polygons and NURBS (Non-Uniform Rational B-Splines)

The basic structure of a three-dimensional object is built from polygons: flat surfaces defined
by linked vertices, which together form the shape of the object. Real-time rendering is often
polygonal, and such rendering utilizes the computer’s GPU. One technique of polygonal
rendering is known as ‘scanline rendering’:
        “Scanline rendering works on a row-by-row basis rather than a polygon-by-polygon or
        pixel-by-pixel basis. All of the polygons to be rendered are first sorted by the top y
        coordinate at which they first appear, then each row or scan line of the image is computed
        using the intersection of a scan line with the polygons on the front of the sorted list, while
        the sorted list is updated to discard no-longer-visible polygons as the active scan line is
        advanced down the picture.”
The popular Nintendo DS uses this technique to render its 3D environments. The benefit of this
technique is that each vertex is accessed only once, minimizing rendering time.
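
To make the row-by-row idea concrete, the following is a minimal Python sketch of scanline
filling for a single polygon. It leaves out the sorted active-polygon list described in the
quotation above, and the triangle, grid size, and fill character are illustrative choices rather
than anything from an actual renderer.

    def scanline_fill(vertices, width, height):
        """Rasterize a polygon row by row: for each scan line, find where it
        crosses the polygon's edges, sort the crossings, and fill between pairs."""
        grid = [[' '] * width for _ in range(height)]
        n = len(vertices)
        for y in range(height):
            crossings = []
            for i in range(n):
                (x0, y0), (x1, y1) = vertices[i], vertices[(i + 1) % n]
                # Does this edge span the current scan line?
                if (y0 <= y < y1) or (y1 <= y < y0):
                    # Linear interpolation gives the x of the crossing.
                    crossings.append(x0 + (y - y0) * (x1 - x0) / (y1 - y0))
            crossings.sort()
            # Fill between successive pairs of crossings.
            for j in range(0, len(crossings) - 1, 2):
                for x in range(int(crossings[j]), int(crossings[j + 1]) + 1):
                    if 0 <= x < width:
                        grid[y][x] = '#'
        return grid

    triangle = [(2, 1), (25, 5), (8, 12)]
    for row in scanline_fill(triangle, 30, 14):
        print(''.join(row))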

For generating and rendering extremely smooth surfaces, a type of mathematical model called a
“non-uniform rational B-spline” (NURBS) is used. A spline is a piecewise polynomial function
whose purpose is to represent data smoothly. The shape of a NURBS curve is governed by a set
of “control points” or “control vertices.” Smooth surfaces are possible with NURBS because the
curve does not interpolate the control points themselves but only approximates them. This
provides smoothness not possible through polygons, and allows for unlimited control over
Level of Detail.
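
The following is a minimal sketch of how a point on a NURBS curve can be evaluated with the
standard Cox-de Boor recursion. The control points, weights, and knot vector are illustrative
values, not from any real model; note how the curve passes near, not through, the weighted
control points.

    def basis(i, k, t, knots):
        """Cox-de Boor recursion: the B-spline basis function N_{i,k}(t)."""
        if k == 0:
            return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
        left = right = 0.0
        if knots[i + k] != knots[i]:
            left = (t - knots[i]) / (knots[i + k] - knots[i]) * basis(i, k - 1, t, knots)
        if knots[i + k + 1] != knots[i + 1]:
            right = ((knots[i + k + 1] - t) / (knots[i + k + 1] - knots[i + 1])
                     * basis(i + 1, k - 1, t, knots))
        return left + right

    def nurbs_point(t, ctrl, weights, knots, degree=3):
        """The curve approximates (not interpolates) the control points: each
        point pulls the curve toward it in proportion to its weight."""
        num, den = [0.0, 0.0], 0.0
        for i, (p, w) in enumerate(zip(ctrl, weights)):
            b = basis(i, degree, t, knots) * w
            num[0] += b * p[0]
            num[1] += b * p[1]
            den += b
        return (num[0] / den, num[1] / den)

    ctrl = [(0, 0), (1, 2), (3, 3), (5, 1), (6, 0)]
    weights = [1, 1, 2, 1, 1]                # weight 2 pulls the curve toward (3, 3)
    knots = [0, 0, 0, 0, 0.5, 1, 1, 1, 1]    # clamped cubic knot vector
    for t in [0.0, 0.25, 0.5, 0.75, 0.999]:
        print(nurbs_point(t, ctrl, weights, knots))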

Rendering Technique: Ray Tracing

After the establishment of a three-dimensional object and a three-dimensional environment, the
next step is to render lighting and shadows. One technique commonly used today in realistic
light simulation is Ray Tracing. Ray tracing produces 3D images in computer graphics with the
purpose of achieving a more photo-realistic result than techniques like “ray casting” or
“scanline rendering,” producing more visually appealing images, though at a greater
computational cost.
Ray tracing describes the “tracing” of light paths backwards, from the viewer’s eye toward the
objects in sight, as opposed to following light from its source toward the viewer, which is how
light actually travels in nature. Tracing light along its reverse path avoids a great deal of
wasted computation on reflection, refraction, and absorption: a real light wave may bounce off
several objects, lose intensity to absorption, and refract through water without ever reaching
the viewer, so computing only the paths that start at the eye greatly reduces the total number
of calculations.
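
As a rough illustration of backward tracing, the sketch below casts a single ray from the eye
into a one-sphere scene and shades the hit point with a single light. All scene values are
illustrative, and real ray tracers recurse on reflection and refraction rays, which this
sketch omits.

    import math

    def ray_sphere(origin, direction, center, radius):
        """Return the distance along the ray to the nearest sphere hit, or None."""
        oc = [origin[i] - center[i] for i in range(3)]
        b = 2 * sum(direction[i] * oc[i] for i in range(3))
        c = sum(x * x for x in oc) - radius * radius
        disc = b * b - 4 * c              # direction is unit length, so a = 1
        if disc < 0:
            return None
        t = (-b - math.sqrt(disc)) / 2
        return t if t > 0 else None

    eye = (0.0, 0.0, 0.0)
    sphere_center, sphere_radius = (0.0, 0.0, -5.0), 1.0
    light = (5.0, 5.0, 0.0)

    # Trace from the eye toward the scene (the reverse of light's real path),
    # so only rays that actually reach the viewer are ever computed.
    direction = (0.0, 0.0, -1.0)          # a ray through the center pixel
    t = ray_sphere(eye, direction, sphere_center, sphere_radius)
    if t is not None:
        hit = tuple(eye[i] + t * direction[i] for i in range(3))
        normal = tuple((hit[i] - sphere_center[i]) / sphere_radius for i in range(3))
        to_light = [light[i] - hit[i] for i in range(3)]
        norm = math.sqrt(sum(x * x for x in to_light))
        to_light = [x / norm for x in to_light]
        # Lambertian shading: brightness follows the angle to the light.
        brightness = max(0.0, sum(normal[i] * to_light[i] for i in range(3)))
        print(f"hit at {hit}, brightness {brightness:.2f}")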

Ray tracing is considered the norm when it comes to realistic simulation of lighting. Other
natural byproducts of ray tracing are reflections and shadows of objects created by any given
light source. Like other rendering methods that increase realism, ray tracing also greatly
reduces overall performance, since a ray must be traced from the eye toward every object
within view.

High-Dynamic Range Lighting (HDR Lighting)

A recent boom in graphical technology has led to the development of a lighting technology
known as High-Dynamic Range Lighting. The use of high-dynamic range lighting (HDR
Lighting, also known as HDR Rendering) pushes the level of realism of computer-generated
scenery to its limits. One effect created by HDR Lighting is sun flares: the visible streaks and
bloom of light rays when looking at a light source, e.g., a computer-generated sun, or even a
light bulb. Another effect of HDR Lighting is continuously moving lights; in videogames, these
can range from spotlights to swinging street lights that cast real-time shadows. Another major
element of HDR Lighting is the change in one’s perspective view during extreme changes in
environmental lighting. When moving from a dark area to a bright area within the computer-
generated world, the viewer’s vision will seem extremely bright for a moment until the eyes
“adapt” to the brightness, after which the brightness dims back down.

Graphics processor company NVIDIA summarizes HDR Lighting’s capability in three points:
       i.     Bright things can be really bright
       ii.    Dark things can be really dark
       iii.   And details can be seen in both
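
As a rough sketch of this idea, the Python snippet below stores luminances far beyond the
0-to-1 display range and compresses them with the simple Reinhard tone-mapping operator, an
assumption for illustration rather than what any particular engine uses; the drifting exposure
value mimics the eye adapting after a dark-to-bright transition.

    def reinhard(luminance, exposure):
        """Map an unbounded HDR luminance into [0, 1) for display."""
        v = luminance * exposure
        return v / (1.0 + v)

    # Luminances spanning a huge range: dark and bright detail coexist.
    scene = {"shadow": 0.05, "wall": 1.0, "sky": 50.0, "sun": 5000.0}

    adaptation = 1.0                      # exposure while adapted to a dark area
    for step in range(4):
        print(f"step {step}, exposure {adaptation:.3f}:",
              {k: round(reinhard(v, adaptation), 3) for k, v in scene.items()})
        # Drift the exposure toward the bright scene, like the eye adapting:
        adaptation += (0.02 - adaptation) * 0.5   # 0.02: illustrative target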

The game engine Unreal Engine 3 is known for its effective use of HDR Lighting. The use of
this engine allows games such as Gears of War and Unreal Tournament to enhance attention to
detail when it comes to certain elements like lens flares and halos, light blooms, and depth of
field. With HDR Lighting, the engine is able to incorporate moving lights that generate accurate
soft shadows at an incredibly small calculation expense.

Motion Blur and Depth of Field Renderings

Other visual concepts that enhance realism for rendering real-time graphics are Motion Blur and
Depth of Field. Motion blur is the blurring of rapidly moving objects in an image. In general,
motion blur is very hard to model in real-time applications, since it can cut the frame rate by
a large factor; often the only practical way to achieve it in real time is to render the image
with much less detail. The method for creating smooth motion in animations is known as
Temporal Anti-aliasing, meaning smoothing over time: the animation is rendered at many more
instants than it normally would be (without motion blur), and the samples are combined so that
each displayed frame accounts for a uniform span of time.
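
A minimal sketch of temporal anti-aliasing follows: the scene is sampled at several instants
within one frame interval and the samples are averaged, smearing a fast-moving object into a
blur streak. The one-dimensional “image” and the object’s speed are illustrative.

    WIDTH, SUBFRAMES = 20, 5
    SPEED = 4.0                           # pixels the object moves per frame

    def render(position):
        """Render one sub-frame: a single bright pixel at the object."""
        row = [0.0] * WIDTH
        row[int(position) % WIDTH] = 1.0
        return row

    frame_start = 6.0
    accum = [0.0] * WIDTH
    for s in range(SUBFRAMES):
        # Sample the object at evenly spaced times within the frame.
        pos = frame_start + SPEED * (s / SUBFRAMES)
        accum = [a + b / SUBFRAMES for a, b in zip(accum, render(pos))]

    # Each pixel now holds a fraction of the exposure: a motion-blur streak.
    print([round(v, 2) for v in accum])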

Depth of field also incorporates blur to enhance an image’s realism. Depth of field is the distance
in front of and beyond the subject that is in focus. In order to calculate the exact boundaries
of the depth of field, a series of calculations is required involving the lens’s f-stop number
and the diameter of its aperture. Generally, creating a depth of field requires
more calculations, thus increasing the time it takes to render the scene. The effect, however,
provides for a more realistic view of computer-generated scenery because not all objects in view
are in focus.
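
The boundary calculation can be made concrete with the standard thin-lens formulas, sketched
below. Besides the f-stop number, they need the focal length and a “circle of confusion” (the
largest blur spot still perceived as sharp); all numeric values are illustrative.

    def depth_of_field(focal_mm, f_stop, coc_mm, subject_mm):
        """Return (near, far) limits of acceptable focus, in millimetres."""
        # Hyperfocal distance: focus here and everything to infinity is sharp.
        h = focal_mm ** 2 / (f_stop * coc_mm) + focal_mm
        near = subject_mm * (h - focal_mm) / (h + subject_mm - 2 * focal_mm)
        if subject_mm >= h:
            return near, float("inf")
        far = subject_mm * (h - focal_mm) / (h - subject_mm)
        return near, far

    # A 50 mm lens at f/2.8, 0.03 mm circle of confusion, subject at 3 m:
    near, far = depth_of_field(50, 2.8, 0.03, 3000)
    print(f"in focus from {near / 1000:.2f} m to {far / 1000:.2f} m")
    # Stopping down to f/11 widens the zone of sharpness:
    near, far = depth_of_field(50, 11, 0.03, 3000)
    print(f"in focus from {near / 1000:.2f} m to {far / 1000:.2f} m")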

As with all elements that contribute to making real-time rendering more realistic, implementation
of motion blur and depth of field greatly reduces performance.

Rendering Graphics: The Graphics Processing Unit

The primary piece of hardware that makes the previously-mentioned rendering techniques
possible is the Graphics Processing Unit (GPU). This device is the primary device on the
graphics card (or video card) that performs all the necessary calculations required to render
polygons and motion-blur effects, create lighting through Ray Tracing and HDR Lighting, and
other various graphic enhancements.

GPUs today evolved from the graphics chips of the early 1980s, which could not even draw
shapes. In the late 1980s and early 1990s, high-end graphics processing was often implemented
as part of the CPU. The first capable graphics chips separate from the CPU were “2D
accelerators” that were cheap to produce but could not compete with the graphics capabilities
of high-end CPUs.

With the implementation of pixel and vertex shaders on GPUs, lengthier loops and floating-
point math calculations could be performed. The function of vertex shaders is to “translate a
3D image formed by vertices and lines into a 2D image formed by pixels.” Pixel shading allows
rendering techniques like bump mapping to create effects that let an image appear more
realistic and three-dimensional without the excess computation of additional polygons.
Recently, technology has emerged that allows multiple cards to render a single image. The two
companies that have developed this technology are the two graphics card powerhouses NVIDIA
and ATI. NVIDIA’s technology is known as Scalable Link Interface (SLI), while ATI’s is known
as ATI CrossFire. The concept is primarily the same for both: the image is split in half, either
vertically or horizontally, and each card renders its respective piece, resulting in better
performance (e.g., higher frame rates) in most cases.
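
As a rough illustration of the vertex-shader step quoted above, the sketch below projects 3D
camera-space vertices into 2D pixel coordinates with a simple perspective divide. The screen
size and camera parameters are illustrative, and real shaders work with 4x4 matrices and run
on the GPU itself.

    WIDTH, HEIGHT = 640, 480
    FOCAL = 400.0                         # controls the field of view

    def project(vertex):
        """Map a camera-space (x, y, z) vertex to screen pixels."""
        x, y, z = vertex
        # Perspective divide: farther points land nearer the screen center.
        sx = WIDTH / 2 + FOCAL * x / -z
        sy = HEIGHT / 2 - FOCAL * y / -z
        return int(sx), int(sy)

    # A triangle in camera space (negative z: in front of the camera).
    triangle = [(-1.0, -1.0, -4.0), (1.0, -1.0, -4.0), (0.0, 1.0, -8.0)]
    print([project(v) for v in triangle])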

The future for GPUs is the development of a new technique called “stream processing” and the
development of a General Purpose Graphics Processing Unit (GPGPU). Stream processing
facilitates parallel processing, increasing efficiency with minimal effort, while a GPGPU turns
its shader pipeline into a general-purpose computational resource. The combined implementation
of stream processing and the GPGPU allows for severalfold performance increases compared
with a regular CPU. This directly results in graphics being rendered with better frame rates
and overall smoothness, yielding unlimited potential for creating ever-more realistic 3D
models and environments, lighting effects, and other visual sensations.
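
As a rough sketch of the stream-processing idea, the snippet below applies one small “kernel”
independently to every element of a data stream, the property that makes the work easy to
parallelize; here CPU processes stand in for the GPU’s parallel units, and the kernel and data
are illustrative.

    from multiprocessing import Pool

    def kernel(x):
        """The same operation runs on every stream element, with no element
        depending on any other -- the property GPUs exploit."""
        return x * x + 1.0

    if __name__ == "__main__":
        stream = [float(i) for i in range(16)]
        with Pool(4) as pool:
            result = pool.map(kernel, stream)   # elements processed in parallel
        print(result)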
