

                         PROJECTIVE VIDEO TEXTURE MAPPING

                                     Chaim Sanders

                             Solomon Schechter High School

                                  New York, NY 10024

                           Email: Chaim.Sanders@Gmail.com


       Projective texture mapping is a technique that allows a digital image to be
displayed on a virtual three-dimensional object. This is accomplished by taking the
texture coordinates of an image and assigning them to vertices. Since its inception, the
technique has been supported by the OpenGL library. Although its original use was to
quickly calculate lighting, the technology is now used for creating shadows. The current
project builds on this earlier technology to simulate an LCD projector, projecting a video
image onto a three-dimensional model. The projected video is altered in several ways to
more accurately simulate an LCD projector. The code for this project was developed in
the C++ programming language, based on earlier generations of projective texture
mapping techniques and computer graphics programming. This new technology has
implications for a wide range of fields, including medical simulation, 3D animation,
computer game design, and architecture.
1. Introduction

       Projective Texture Mapping was created to simulate a slide projector under
perfect conditions, which eliminated the need for depth-of-field or shadow calculations.
In contrast, Projective Video Texture Mapping allows for the real-time, realistic
simulation of an LCD video projector. Projective Video Texture Mapping works by
breaking a video file into frames, which are then loaded into the computer's video
memory for fast access. When a frame is required by the program, it is fetched from
memory, and filters are applied to simulate blur and other projector characteristics. Once
the filters have been applied, the frame is loaded into the program's double buffer. When
the frame is needed for display, it is loaded into the light source and projected onto a 3D
surface. The video is reassembled at a frame rate of approximately 30 frames per second.

      Figure 1. A duplicate shot of a scene, the left with lights on and the right with lights off.
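
The frame-selection step of the pipeline described above can be sketched in a few lines of C++. This is a minimal illustration, not the project's actual code; the function name `frameForTime` and the fixed 30 frames-per-second default are assumptions for the example.

```cpp
// Map elapsed playback time to the index of the frame that should be
// fetched from video memory, assuming a fixed playback rate of
// approximately 30 frames per second, as described in the text.
int frameForTime(double elapsedSeconds, int frameCount, double fps = 30.0) {
    int index = static_cast<int>(elapsedSeconds * fps);
    if (index < 0) index = 0;
    if (index >= frameCount) index = frameCount - 1;  // hold the last frame
    return index;
}
```

A renderer would call this once per display refresh and upload only the returned frame to the light source's texture.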

       Projective Texture Mapping was first conceived by Segal (1992). The technology
only recently became part of OpenGL, when it was included in the Nvidia Software
Development Kit (SDK). Nvidia released a whitepaper (Cass, 1995) in association with
the release of its SDK. Projective Texture Mapping has provided a semi-realistic-looking
light source that projects a single texture onto a surface. Because previously only one
texture could be used with this technology, its only realistic application was a canned
light source (see Heidrich et al., 1999). Projective Video Texture Mapping differs in that
both depth of field and blur are included. The depth-of-field implementation was based
on the work of Tin-Tin Yu (1992). The blur was based on Intel's computer vision library
and is used for pixel interpolation based on lighting effects. These advances, among
others, produce a more realistic-looking image.
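
To illustrate the kind of blur filter involved, the sketch below applies a 3x3 box blur to a grayscale image buffer. This is only a stand-in: the project used Intel's computer vision library, while this self-contained function mimics just the basic pixel-averaging idea. The name `boxBlur3x3` and the clamped-border policy are assumptions for the example.

```cpp
#include <vector>
#include <algorithm>

// Average each pixel with its 3x3 neighborhood to soften the image,
// roughly approximating the defocus of a projected frame.
// Pixels outside the image are clamped to the nearest edge pixel.
std::vector<float> boxBlur3x3(const std::vector<float>& src, int w, int h) {
    std::vector<float> dst(src.size());
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            float sum = 0.0f;
            for (int dy = -1; dy <= 1; ++dy) {
                for (int dx = -1; dx <= 1; ++dx) {
                    int sx = std::clamp(x + dx, 0, w - 1);
                    int sy = std::clamp(y + dy, 0, h - 1);
                    sum += src[sy * w + sx];
                }
            }
            dst[y * w + x] = sum / 9.0f;  // 9 samples in the 3x3 window
        }
    }
    return dst;
}
```

A real implementation would likely use a separable or hardware-accelerated filter for speed, but the averaging principle is the same.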

2. Materials and Methods

       Projective texture mapping technology simulates the display of a photo as though

the light source is a slide projector. Projective texture mapping technology works by

taking an initial picture of a scene through either one or multiple cameras. Based on this

image, a depth map of the scene is created, which produces accurate calculations of how

far objects are away from the point of origin on the z-axis. Recent development of

projective texture mapping has led to the use of this as a way of showing shadow. This

shadow effect is produced by a texture that looks like a shadow, and is commonly called

a shadow map. Although the technology has been refined for the use of still images,

before this project it had not been used for displaying video or automatic shadows.
  Figure 2. On the left, a shadow map used for shadows in the original Projective Texture Mapping
   (Source: ATI Developer Resources); on the right, automatic shadow detection incorporated into
                                Projective Video Texture Mapping.
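
The core coordinate transform behind projective texture mapping can be sketched as follows: a world-space point is transformed by the projector's combined view-projection matrix, divided by w, and remapped from [-1, 1] clip space into [0, 1] texture space (the "bias" step that OpenGL's texture matrix usually handles). This is a minimal sketch, assuming a row-major 4x4 matrix; the names `mul` and `projectorTexCoord` are illustrative, not from the project's code.

```cpp
#include <array>

struct Vec4 { float x, y, z, w; };

// Multiply a column vector by a row-major 4x4 matrix (here standing in
// for the projector's combined view-projection matrix).
Vec4 mul(const std::array<float, 16>& m, const Vec4& v) {
    return {
        m[0] * v.x + m[1] * v.y + m[2]  * v.z + m[3]  * v.w,
        m[4] * v.x + m[5] * v.y + m[6]  * v.z + m[7]  * v.w,
        m[8] * v.x + m[9] * v.y + m[10] * v.z + m[11] * v.w,
        m[12] * v.x + m[13] * v.y + m[14] * v.z + m[15] * v.w,
    };
}

// Projective texture lookup: transform into the projector's clip space,
// perform the perspective divide, then remap [-1,1] -> [0,1].
// Returns false when the point lies behind the projector.
bool projectorTexCoord(const std::array<float, 16>& viewProj,
                       const Vec4& worldPos, float& s, float& t) {
    Vec4 clip = mul(viewProj, worldPos);
    if (clip.w <= 0.0f) return false;  // behind the projector's lens
    s = 0.5f * (clip.x / clip.w) + 0.5f;
    t = 0.5f * (clip.y / clip.w) + 0.5f;
    return true;
}
```

The depth map mentioned above serves the same pipeline: comparing a point's projector-space depth against the stored depth decides whether the projected texel actually reaches that point or is shadowed.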

       The method by which I created this program was straightforward. First, I set a
goal for an aspect I wanted to add in order to make my program more realistic. I then
took measurements using a real projector to ensure that my projection simulation settings
were accurate (e.g., radiosity, falloff). Next, I reviewed any previous code, if available,
that simulated the aspect. If the calculations in that code matched the measurements I had
obtained, the code was integrated into my own. If no such code was available (which was
almost always the case), I wrote my own code to perform the function. After each
function was created to my specification, I ran the program and looked for any bugs that
might occur. If none occurred, I moved on to the next aspect of the program I needed to
include.
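
The falloff measurements mentioned above could be fitted to the standard constant/linear/quadratic attenuation model used in real-time lighting. The sketch below shows that model; the default coefficients are hypothetical placeholders, not the values actually measured from the real projector.

```cpp
// Light attenuation as a function of distance from the projector lens.
// In the project, the three coefficients would be fitted so the curve
// matches brightness measurements taken from a physical projector;
// the defaults here (pure inverse-square) are placeholder assumptions.
float falloff(float distance,
              float constant = 1.0f,
              float linear = 0.0f,
              float quadratic = 1.0f) {
    return 1.0f / (constant + linear * distance + quadratic * distance * distance);
}
```

Multiplying each projected texel's intensity by this factor dims the image realistically as the surface moves away from the projector.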
3. Results

The resulting program allows for the realistic simulation of an LCD projector. Using the
current level of computer graphics capability, I was able to create a three-dimensional
scene onto which to project a video. The projected video was broken down by my
program into individual frames and then stored in memory. When a specified frame of
the movie needed to be played, it was brought from memory and had the appropriate
filters applied. The program allows for the manipulation of objects within the simulated
environment and has a graphical user interface for ease of use.

       Figure 3. Original Projective Texture Mapping vs. Projective Video Texture Mapping

4. Conclusion

The program was designed to test the effectiveness of modern computer graphics in
simulating an LCD projector. It was hypothesized that because light could be projected
onto a surface and a texture could be assigned to an object, a light could project a texture
onto a surface. Since the technology to translate a pixel's position already existed in the
form of projective texture mapping, all that remained was to make the light source the
origin of all the pixels. The working program supports the idea that it is possible to
simulate an LCD projector.

         Future improvements to this technology include full support for imported
models and a new interface for loading any movie regardless of compression format.
This version lacks a frame-rate counter, although it does have a frame reader counter.
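
A frame-rate counter of the kind noted as missing could be sketched as follows. This is only a possible design, not part of the project: the struct name and the one-second reporting interval are assumptions.

```cpp
// Accumulate rendered-frame durations and report the average rate once
// roughly every second of accumulated frame time.
struct FpsCounter {
    int frames = 0;
    double accumulated = 0.0;  // seconds since the last report
    double lastFps = 0.0;      // most recently computed frame rate

    // Call once per rendered frame with that frame's duration in seconds.
    void tick(double frameSeconds) {
        ++frames;
        accumulated += frameSeconds;
        if (accumulated >= 1.0) {
            lastFps = frames / accumulated;
            frames = 0;
            accumulated = 0.0;
        }
    }
};
```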

5. Bibliography

B. Arnaldi, X. Pueyo, and J. Vilaplana. On the division of environments by virtual walls

for radiosity computation. In Photorealistic Rendering in Computer Graphics, pages 198–

205, May 1991.

I. Ashdown. Near-Field Photometry: A New Approach. Journal of the Illuminating

Engineering Society, 22(1):163–180, Winter 1993.

Ulf Assarsson, Michael Dougherty, Michael Mounier, Tomas Akenine-Möller, An

Optimized Soft Shadow Volume Algorithm with Real-Time Performance, 2003

R. Bastos. Efficient radiosity rendering using textures and bicubic reconstruction. In

Symposium on Interactive 3D Graphics. ACM Siggraph, 1997.

G. Drettakis and F. Sillion. Interactive update of global illumination using a line-space

hierarchy. Computer Graphics (SIGGRAPH ’97 Proceedings), pages 57–64, August

1997.

Craig Duttweiler. Mapping Texels to Pixels in Direct3D.

Cass Everitt. Projective Texture Mapping, 1995.

Cass Everitt, Ashu Rege, Cem Cebenoyan. Hardware Shadow Mapping, 1996
Cass Everitt. Interactive Order-Independent Transparency. Whitepaper.

S. J. Gortler, R. Grzeszczuk, R. Szeliski, and M. F. Cohen. The Lumigraph. In

Computer Graphics (SIGGRAPH ’96 Proceedings), pages 43–54, August 1996.

G. Greger, P. Shirley, P. Hubbard, and D. P. Greenberg. The irradiance volume. IEEE

Computer Graphics and Applications, 18(2):32–43, March 1998.


Wolfgang Heidrich, Jan Kautz, Philipp Slusallek, and Hans-Peter Seidel. “Canned

Lightsources” (1999).

A. Keller. Instant radiosity. Computer Graphics (SIGGRAPH ’97 Proceedings), pages

49–56, August 1997.

Mark Kilgard. Shadow Mapping with Today’s Hardware. Technical presentation.

Tin-Tin Yu. “Depth of field implementation with OpenGL” (1992).

R. Lewis and A. Fournier. Light-driven global illumination with a wavelet representation

of light transport. In Rendering Techniques ’96, pages 11–20, June 1996.


M. Segal, C. Korobkin, R. van Widenfelt, J. Foran, and P. Haeberli. Fast shadow and

lighting effects using texture mapping. Computer Graphics (SIGGRAPH ’92

Proceedings), 26(2):249–252, July 1992.

F. Sillion and C. Puech. Radiosity & Global Illumination. Morgan Kaufmann, 1994.
P. Slusallek, M. Stamminger, W. Heidrich, J.-C. Popp, and H.-P. Seidel. Composite

lighting simulations with lighting networks. IEEE Computer Graphics and Applications,

18(2):22–31, March 1998.

W. Stürzlinger and R. Bastos. Interactive rendering of globally illuminated glossy

scenes. In Rendering Techniques ’97, pages 93–102, 1997.

B. Walter, G. Alppay, E. Lafortune, S. Fernandez, and D. P. Greenberg. Fitting virtual

lights for non-diffuse walkthroughs. Computer Graphics (SIGGRAPH ’97 Proceedings),

pages 45–48, August 1997.

Yulan Wang and Steven Molnar. Second-Depth Shadow Mapping. UNC-CS

Technical Report TR94-019, 1994.

G. Ward. The RADIANCE lighting simulation and rendering system. Computer Graphics

(SIGGRAPH ’94 Proceedings), pages 459–472, July 1994.

Lance Williams. Casting curved shadows on curved surfaces. In Proceedings of

SIGGRAPH ’78, pages 270-274, 1978.

6. Acknowledgments

I would like to acknowledge the help of Dr. Michael Grossberg and Matt Johnson, whose

guidance on all things throughout the course of this project was extremely helpful.
