A NOVEL SEE-THROUGH SCREEN BASED ON WEAVE FABRICS
Cha Zhang†, Ruigang Yang‡, Tim Large†, Zhengyou Zhang†
†Microsoft Research, One Microsoft Way, Redmond, WA 98075
‡University of Kentucky, 1 Quality Street, Lexington, KY 40507
ABSTRACT

See-through screens (STS) have found important applications in remote collaboration systems to enhance non-verbal communication and gaze awareness. Existing STS designs often sacrifice the display quality significantly, rendering low-contrast images that discount the overall user experience. In this paper, we present a novel see-through screen solution based on weave fabrics. Such fabrics are known to be acoustically transparent and are used to build professional projection screens for Hollywood studios. We place a camera immediately behind the screen and synchronize it with a 120 Hz projector to perform time-multiplexed display and video capture. By focusing the camera at the user 4-5 feet away from the screen, the image of the weave fabric is severely blurred. We present the imaging principle of the setup, and derive image processing techniques to enhance the quality of the captured video. The overall system is low cost, has much better display quality than existing systems, and can be used to build wall-size see-through screens.

Index Terms— See-through screen, gaze, remote collaboration

Fig. 1. The eye contact issue in typical videoconferencing systems. The ideal camera shall point to the user from behind the screen.

1. INTRODUCTION

As globalization continues to spread throughout the world economy, it is increasingly common to find projects where team members are widely distributed across continents. Videoconferencing has long been considered a critical technology to reduce high travel expenses for distributed workforces. Nevertheless, even with high-end teleconferencing solutions such as HP’s Halo system and Cisco’s Telepresence system, a face-to-face meeting is usually still a better experience than a remote meeting.

One of the factors known to be essential for face-to-face communication is eye contact. As Simmel remarked [1], eye contact “represents the most perfect reciprocity in the entire field of human relationship”. It instills trust and fosters an environment of collaboration and partnership. Lack of eye contact, on the other hand, may generate feelings of distrust and discomfort. Unfortunately, eye contact is usually not preserved in typical videoconferencing systems. As shown in Fig. 1, the user often looks at the remote party displayed on the screen, while the local camera often captures the user from above the screen, creating gaze disparity. To preserve eye contact, the ideal camera shall be placed behind the screen.

Creating a see-through screen has attracted research studies for decades. Early approaches such as Teleprompter [2] and Gazecam [3] used half-silvered mirrors. Since half-silvered mirrors need to be angled by around 45 degrees from the display, the footprint of such systems is usually inconveniently large. In addition, stray reflections caused by the mirror may also be distracting. The Clearboard system [4] designed a clever “Drafter-Mirror” architecture which improved upon simple half-silvered mirror implementations. They placed the display surface at a 45 degree angle with respect to the ground, and further adopted polarization films on the screen and the camera to avoid the displayed image being captured by the camera. However, polarization films cut down light significantly, leading to poor display quality.
Fig. 2. The overall system setup. (a) System configuration. (b) Front side of the system during an ongoing teleconferencing session. (c) The camera hidden behind the screen.
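The configuration in Fig. 2 relies on the time-multiplexing scheme detailed in Section 2: on alternate V-sync ticks the projector outputs a black frame while the camera opens its shutter. A minimal sketch of that schedule follows; the function and event names are hypothetical, not a real driver API.

```python
# Sketch of the time-multiplexed display/capture schedule (Section 2).
# A 120 Hz projector alternates between showing the remote video and
# outputting a black frame during which the camera captures, so both
# display and capture effectively run at 60 Hz.

def schedule(n_vsync_ticks):
    """Return (action, frame_index) pairs, one per V-sync tick."""
    events = []
    for tick in range(n_vsync_ticks):
        if tick % 2 == 0:
            events.append(("display", tick // 2))   # projector shows remote frame
        else:
            events.append(("capture", tick // 2))   # projector black, shutter opens
    return events

if __name__ == "__main__":
    events = schedule(120)  # one second of V-sync ticks at 120 Hz
    n_display = sum(1 for e in events if e[0] == "display")
    n_capture = sum(1 for e in events if e[0] == "capture")
    print(n_display, n_capture)  # 60 displayed frames, 60 captured frames
```

In a real implementation the loop body would be driven by the projector's V-sync interrupt rather than a counter, but the alternation logic is the same.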
Another popular scheme to solve the eye contact issue is through switchable liquid crystal diffusers (SLCD), such as the system demonstrated by Shiwa and Ishibashi at NTT [5] and blue-c [6]. An SLCD can switch between transparent and diffusing states based on the voltage applied to the film. When the diffuser, the camera and the projector are all synchronized, during the transparent state the camera behind the diffuser can take images; during the diffusing state the projector can render images of the remote user. A known limitation of SLCD is the speed of state switching, which limits the update rate of the rendering, causing flickering images. In addition, the rendered images of SLCD tend to have poor contrast under ambient lighting conditions, making it unsuitable for many applications.

Recently, holographic projection screens (DNP HoloScreen) have received a lot of attention. These screens diffuse only light incident from pre-specified angles, and allow light to pass through otherwise. They do not require special synchronized cameras and projectors, thus offering greater freedom for designers. However, the HoloScreen usually has severe backscatter, which shall be handled carefully. The TouchLight system [7] used infrared cameras to avoid such difficulties, which is not suitable for teleconferencing. HoloPort [8] proposed to synchronize the projector and the camera to counter the backscatter problem. In the recent ConnectBoard system [9], Tan et al. proposed to use wavelength multiplexing to remove the backscatter. Additional processing is needed to pre-distort the projected image and color-correct the captured image in order to capture satisfactory images. Besides backscatter, the HoloScreen only diffuses part of the incident light from the pre-specified angle, and a crisp image can also be seen on the ceiling or the floor of the room. This means there are many angles for which the user is looking right into the beam, which can be quite painful. The display image quality of the HoloScreen is similar to SLCD, which leaves room for improvement.

Another work that is closely related to ours is the MAJIC system by Okada et al. [10]. MAJIC adopted a screen that is a thin transparent film with a large number of small hexagons printed on both sides. The front side is printed with white hexagons, thus it can be used as the projection screen. The back side has black hexagons, and one can see the other side through the screen. The MAJIC prototype used a 40% transmissibility screen, thus the displayed images had very poor quality, both in brightness and resolution.

In this paper, we present a novel see-through screen design using weave fabrics. Such fabrics are known to be an excellent projection surface, and are widely used in professional movie studios for their high display quality and acoustical transparency. We place a video camera right behind the screen, and capture videos by seeing through the small holes in the weave fabrics. We then design image processing algorithms to enhance the quality of the captured video. This results in a low-cost, high display quality see-through screen that can be used in many applications, such as teleconferencing, virtual reality, public advertisement, etc.

The rest of the paper is organized as follows. The hardware setup is explained in Section 2. The imaging model of the camera behind the screen is presented in Section 3. Video processing based on the imaging model and some experimental results are detailed in Section 4. Conclusions and future work are given in Section 5.

2. HARDWARE

The configuration of the proposed see-through screen is shown in Fig. 2 (a). We adopt a frontal projection scheme. The user is in the projection space, making it more space efficient for large systems. The camera is placed behind the screen (Fig. 2 (c)). As shown in Sections 3 and 4, it is advantageous to place the camera as close to the screen as possible. The projector and the camera are time-synchronized using the V-sync signal of the projector input. For every other frame the projector will output a black image, allowing the camera to open its shutter to capture an image. To avoid flickering, we use a 120 Hz projector (DepthQ HD 3D projector), effectively refreshing the displayed image at 60 Hz.

The camera is a Flea3 FL3-FW-03S3C made by Point Grey Research Inc. It is capable of capturing 640×480 pixel
Fig. 3. The weave fabric screen. (a) Texture of the Phifer SheerWeave solar shade screen with 10% openness. (b) The back side of the screen is painted black to prevent backscatter.

Fig. 4. Using the thin lens model to analyze the blurring diameter of the weave fabric screen.

images at 76 frames per second. Since the holes on the weave fabric screen are very small, we tested two lenses with large apertures, the Fujinon DF6HA-1B 6mm f/1.2 lens and the Pentax 12mm f/1.2 lens.

The weave fabric screen is obviously the most important component of our system. We started with samples of the CineWeave HD screen from SMX Cinema Solutions [11], which has about 5% openness. Later we found low cost alternatives by using the SheerWeave solar shade made by Phifer. We chose the 2100 P02 white shade with 10% openness as our screen for a good compromise between display quality and see-through video quality. The texture of the screen is shown in Fig. 3. We painted the back side of the screen black to prevent backscatter.

3. IMAGING MODEL

We first apply the thin lens model to analyze the blurring diameter of the defocused weave fabric, as shown in Fig. 4. Let the focal length of the lens be f. Assume the camera focuses on the object at distance u with the imaging plane at v. According to the thin lens equation, we have:

  1/u + 1/v = 1/f. (1)

Similarly, since the screen is at distance u_s, it will be focused at v_s, where:

  1/u_s + 1/v_s = 1/f. (2)

The blurring diameter b satisfies:

  b = A (v_s − v) / v_s, (3)

where A is the aperture of the lens. Consequently, we obtain:

  b = f² (u − u_s) / (N u_s (u − f)), (4)

where N = f/A is the f-number of the lens. It can be seen from Eq. (4) that increasing the focal length f, reducing the f-number N and decreasing the screen distance u_s will all enlarge the blurring diameter b.

We next present an idealized model for the image formation process. Let the captured image be denoted as I(x), where x is the pixel index. Inspired by Aydin and Akgul [12], we represent:

  I(x) = g(x) [α(x) O(x) + ((1 − α) S ∗ h_b)(x)] + n(x), (5)

where O(x) is the object radiance observed by pixel x when there is no screen occlusion. α(x) is the see-through ratio, which will be detailed later. S is the radiance of the back side of the screen, and h_b is the blurring filter with diameter b, which is given by Eq. (4). Since we have painted the back side of the screen black, S ≈ 0. Therefore, the term ((1 − α) S ∗ h_b)(x) can be safely ignored in our application. g(x) is used to model scene independent effects such as vignetting, which happens often in low-cost lenses. n(x) is the sensor noise. Dropping the screen term, the model simplifies to:

  I(x) = g(x) α(x) O(x) + n(x). (6)

Fig. 5. Illustration of the see-through ratio in our system.

Fig. 5 illustrates the computation of the see-through ratio α(x). Since the object surface can only be seen through the openings of the fabric, we have:

  α(x) = (1/A_x) Σ_i a_i(x), (7)

where a_i(x) is the opening area of hole i, and A_x is the overall area of light at the screen distance that enters the lens and reaches x when there is no occlusion.
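To get a feel for the blurring diameter relation of Eq. (4), b = f²(u − u_s) / (N·u_s·(u − f)) with focal length f, f-number N, object distance u and screen distance u_s, the following sketch plugs in the 12mm f/1.2 lens with illustrative distances (the exact screen and user distances are assumptions, not measurements from the paper):

```python
def blur_diameter(f, n_stop, u, u_s):
    """Blurring diameter b (Eq. (4)) of a screen at distance u_s when a lens
    of focal length f and f-number n_stop is focused at distance u.
    All lengths in meters; returns b in meters on the imaging plane."""
    return f * f * (u - u_s) / (n_stop * u_s * (u - f))

# Pentax 12mm f/1.2 lens, screen assumed 10 cm in front of the lens,
# camera focused on a user at 1.5 m (roughly 5 feet).
b = blur_diameter(0.012, 1.2, 1.5, 0.10)
print(round(b * 1000, 2), "mm")  # ≈ 1.13 mm: the fabric is severely defocused
```

Consistent with the text, making u_s smaller (camera closer to the screen), f longer, or N smaller all increase b, which is why the camera is placed right behind the screen with a large-aperture lens.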
Fig. 6. Images of a pure white paper in front of the lens. (a) 6mm lens. (b) 12mm lens.

Let A denote the area of the aperture. Since the screen is severely defocused, if the object is at a constant depth, A_x will be a constant proportional to the aperture area A. However, since the holes of the weave fabric have finite sizes, different openings will be seen when varying x. Hence α(x) varies depending on x. For a piece of weave fabric with average openness ρ, the fluctuation of α(x) around ρ becomes negligible when the holes are small relative to the aperture:

  α(x) ≈ ρ, if a_i ≪ A. (8)

In other words, to reduce the fluctuation of the see-through ratio, we either increase the lens aperture, or reduce the size of the holes in the fabric. In Section 4.1, we will further discuss how to correct the unevenness of the see-through ratio digitally.

For a real-world lens, the see-through ratio may be affected by other factors, such as lens distortions, edge effects, etc. The weave fabrics also have unevenness in the size and distribution of holes. Fig. 6 (a) and (b) shows two images of a pure white paper (O(x) is a constant) captured by the 6mm and 12mm lenses, respectively. Note the see-through ratio of the 12mm lens fluctuates much less than that of the 6mm lens. This is consistent with Eq. (8), since the 12mm lens has a larger aperture.

4. VIDEO PROCESSING

Due to the small openness of the weave fabric used in our system and the short camera exposure time (7 ms per frame), the captured images are often dark and noisy (see Fig. 7 (a) and Fig. 8 (a) for some examples). In this section, we present a two-step video enhancement process to improve the video quality: recovering the object radiance and video denoising.

4.1. Recovering the Object Radiance

Fig. 7. Recovering the object radiance for video enhancement. (a) Raw image captured by the camera. (b) After radiance recovery using Eq. (12). Note no further image enhancement schemes such as histogram equalization were used to produce these images.

In Section 3, we presented an imaging model for the camera seeing through the weave fabric. Although the see-through ratio is hard to predict, we can still recover the object radiance as follows.

We first capture a video of a static, pure white object (e.g., a white paper), as was shown in Fig. 6. According to Eq. (5), the received images can be represented as:

  I_t(x) = g(x) α(x) O_w + n_t(x), (9)

where O_w is the constant radiance of the white object and t indexes the frames. Note the term ((1 − α) S ∗ h_b)(x) in Eq. (5) has been ignored since S ≈ 0. By averaging these video frames to obtain a mean image Ī(x), we effectively remove the sensor noise, obtaining:

  Ī(x) = g(x) α(x) O_w. (10)

That is, the mean image Ī(x) captures the fluctuation of α(x) and the scene independent effects g(x). For an arbitrary scene captured by the same camera, we have:

  I(x) = g(x) α(x) O(x) + n(x). (11)

Combining Eq. (10) and (11), we have:

  Ô(x) = O_w I(x) / Ī(x), when Ī(x) > 0, (12)

which can be computed very efficiently for each frame. The condition Ī(x) > 0 is usually satisfied if a large aperture lens is placed right behind the screen and focused on objects a few feet away from the screen.

Fig. 7 shows some results for object radiance recovery using Eq. (12). It can be seen that the images are improved significantly compared with the original images captured by the camera.

It is worth mentioning that if the aperture of the lens is very large or the weave fabric holes are very small (assuming the same openness), α(x) will be near constant, and the raw image captured by the camera will be a very good approximation of the radiance after typical scene independent processes such as vignetting removal.
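The radiance recovery of Section 4.1 reduces to two steps: average many frames of a white calibration target to estimate the per-pixel term g(x)·α(x) (Eq. (10)), then divide each live frame by that mean image (Eq. (12)). A minimal sketch on synthetic one-dimensional "images" follows; the function names, the gain values, and the Gaussian noise model are all illustrative assumptions.

```python
import random

def mean_image(frames):
    """Pixel-wise average of equal-length frames (Eq. (10)): averaging
    suppresses the zero-mean sensor noise, leaving g(x)*alpha(x)*O_w."""
    n = len(frames)
    return [sum(f[i] for f in frames) / n for i in range(len(frames[0]))]

def recover_radiance(frame, white_mean, o_white=1.0, eps=1e-6):
    """Per-pixel recovery following Eq. (12): O(x) ~= O_w * I(x) / I_bar(x),
    guarded against near-zero pixels of the calibration image."""
    return [o_white * v / w if w > eps else 0.0
            for v, w in zip(frame, white_mean)]

random.seed(0)
# Synthetic per-pixel gain g(x)*alpha(x): uneven see-through ratio + vignetting.
gain = [0.10, 0.30, 0.05, 0.20, 0.25]
true_scene = [0.8, 0.5, 0.9, 0.2, 0.6]

# Calibration: many frames of a pure white object (radiance 1.0) plus noise.
white_frames = [[g * 1.0 + random.gauss(0, 0.01) for g in gain]
                for _ in range(500)]
white_mean = mean_image(white_frames)

# A live frame of the scene, degraded by the same per-pixel gain.
live = [g * s for g, s in zip(gain, true_scene)]
recovered = recover_radiance(live, white_mean)
print([round(v, 2) for v in recovered])  # close to true_scene
```

On real data the live frame also carries noise, which is why the separate denoising step of Section 4.2 is still needed after this division.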
4.2. Video Denoising

One may notice from Fig. 7 that the images captured by the camera are very noisy. These noises remain in the processed images after radiance recovery. It is necessary to perform video denoising to further improve the video quality.

Fig. 8. Video denoising results. (a) Raw image captured by the camera (cropped). (b) After radiance recovery using Eq. (12). (c) After noise removal.

Video denoising has been an active research topic for many decades. A few novel and effective video denoising approaches have been proposed recently, such as non-local means [13], wavelet domain filters [14], bilateral filters [15], etc. Unfortunately, most of these algorithms are very computationally expensive. In our system, the videos are captured at 60 frames per second. In order to perform real-time video denoising, we adopted the simple scheme of temporal denoising. The optical flow between neighboring frames is first estimated using the well-known Lucas-Kanade method [16]. Corresponding pixels based on the optical flow are then averaged to obtain the denoised image. The algorithm is implemented with multiple threads, each handling a subregion of the input video frame. In the future we plan to look into GPU-based implementations to further speed up the denoising process.

Fig. 8 shows two examples of video denoising results. The images are cropped from the original 640×480 images to demonstrate the denoising effect. It can be seen clearly that the algorithm is very effective in removing sensor noise.

5. CONCLUSIONS AND FUTURE WORK

In this paper, we presented a novel see-through screen based on weave fabrics. It has better display quality than most of the existing see-through screens. The see-through video quality is also very good, thanks to our radiance recovery algorithm based on the camera’s imaging model and real-time video denoising. The screen can be used for teleconferencing systems that maintain ideal eye contact between attendees. Since weave fabrics are manufactured with mature technology, they can be used to build low-cost, wall-size screens with multiple see-through cameras.

When designing see-through screens, there has always been a tradeoff between the display quality and the see-through video quality. For weave fabrics, the openness of the screen is a key factor determining the tradeoff. Fortunately, it is very easy to change the openness of weave fabrics during manufacture. We have tested fabrics with 10% openness, though the optimal openness is still to be explored.

In addition, due to the small openness of weave fabrics, the aperture of the camera behind the screen is usually large in order to receive sufficient light and reduce the adverse impact of screen occlusion. This can reduce the depth of field of the camera. An interesting future direction is to enable adaptive focusing, such that the camera can always focus on the objects in front of the screen.

REFERENCES

[1] G. Simmel, “Sociology of the Senses: Visual Interaction,” in R. Park and E. Burgess, editors, Introduction to the Science of Sociology, University of Chicago Press, 1921.

[2] J. Oppenheimer, “Prompting apparatus,” US Patent 2883902, 1959.

[3] S. Acker and S. Levitt, “Designing videoconference facilities for improved eye contact,” Journal of Broadcasting and Electronic Media, Vol. 31, No. 2, pp. 181–191, 1987.

[4] H. Ishii and M. Kobayashi, “Clearboard: a seamless medium for shared drawing and conversation with eye contact,” ACM SIGCHI 1992.
[5] S. Shiwa and M. Ishibashi, “A large-screen visual telecommunication device enabling eye contact,” SID Digest, Vol. 22, pp. 327–328, 1991.
[6] M. Gross, S. Würmlin, M. Naef, E. Lamboray, C. Spagno, A. Kunz, E. Koller-Meier, T. Svoboda, L. Van Gool, S. Lang, K. Strehlke, A. V. Moere, and O. Staadt, “blue-c: a spatially immersive display and 3d video portal for telepresence,” ACM SIGGRAPH 2003.
[7] A. Wilson, “TouchLight: an imaging touch screen and display for gesture-based interaction,” ICMI 2004.
[8] M. Kuechler and A. Kunz, “HoloPort - a device for simultaneous video and data conferencing featuring gaze awareness,” ICVR 2006.
[9] K. Tan, I. Robinson, R. Samadani, B. Lee, D. Gelb, A. Vorbau, B. Culbertson and J. Apostolopoulos, “ConnectBoard: a remote collaboration system that supports gaze-aware interaction and sharing,” MMSP 2009.
[10] K. Okada, F. Maeda, Y. Ichikawa and Y. Matsushita, “Multiparty videoconferencing at virtual social distance: MAJIC design,” CSCW 1994.
[11] SMX Cinema Solutions, http://www.smxscreen.com/.
[12] T. Aydin and Y. Akgul, “An occlusion insensitive adaptive focus measurement method,” Optics Express, Vol. 18, No. 13, June 2010.
[13] A. Buades, B. Coll and J.-M. Morel, “Nonlocal image and movie denoising,” International Journal of Computer Vision, Vol. 76, No. 2, pp. 123–139, 2008.
[14] I. Selesnick and K. Li, “Video denoising using 2D and 3D dual-tree complex wavelet transforms,” Wavelets: Applications in Signal and Image Processing X, SPIE, Vol. 5207, pp. 607–618, 2003.
[15] M. Zhang and B. Gunturk, “Multiresolution bilateral filtering for image denoising,” IEEE Trans. on Image Processing, Vol. 17, No. 12, pp. 2324–2333, 2008.
[16] B. Lucas and T. Kanade, “An iterative image registration technique with an application to stereo vision,” Proc. of DARPA Image Understanding Workshop, pp. 121–130, 1981.