
									Deliverable N. D1.A_6                Dissemination Level : CO           Contract N. IST-NMP-1-507248-2


          INFORMATION SOCIETY TECHNOLOGIES (IST)
                       PROGRAMME




                                                  INTUITION

                                      IST-NMP-1-507248-2

              Update of the Synthesis of VR/VE skills
               in Europe and worldwide (D1.A_6)
Deliverable No.                                     D1.A_6

Cluster No.                  SP1                    Cluster Title             Integrating and Structuring activities

Workpackage No.              WP1.A.2                Workpackage Title         State of the art and literature review

Authors                                             Dr. Patrick BOURDOT (CNRS/LIMSI)
                                                    Prof. Abderrahmane KHEDDAR (CNRS/IBISC)
                                                    Dr. Jean-Marc VEZIEN (CNRS/LIMSI)
                                                    Dr. Hichem ARIOUI (CNRS/IBISC)
                                                    Dr. Roland BLACH (FhG/IAO)
                                                    Dr. Jörg FROHNMAYER (FhG/IAO)
                                                    Dr. Massimo VOTA (ALENIA)
                                                    Dr. Dorin-Mircea POPOVICI (OVIDIUS)
                                                    Prof. Peter VINK (TNO)

Status (F: final; D: draft; RD: revised draft):     F
File Name:                                          INTUITION-CNRS-D-WP1.A.2-R6-V2-Report D1.A_6

Project start date and duration                     01 September 2004, 48 Months




April 2007         1                   CNRS/LIMSI – CNRS/IBISC – FhG/IAO – ALENIA – OVIDIUS – TNO




List of abbreviations
3DI Group: 3D Interaction Group, Virginia Tech (USA)

Abilene: Advanced networking for leading-edge research and education (USA)

AMMI Lab: Advanced Man Machine Interface Laboratory (Alberta, Canada)

APAN: Asia Pacific Advanced Network

ATR: Advanced Telecommunication Research Institute International (Japan)

ARToolKit: a software library designed for the rapid development of Augmented Reality applications (see HIT
Lab)

CAD: Computer Aided Design

CAMTech: Center for Advanced Media Technology (Nanyang Technological University, Singapore)

CAVE™: Cave Automatic Virtual Environment (designed in 1991 at EVL by C. Cruz-Neira et al.)

CAVE Lab: Cognitive Agent in Virtual Environment Laboratory (Hanyang University, Korea)

CAVERN: CAVE Research Network (see EVL)

CAIP: Center for Advanced Information Processing (Rutgers University, USA)

CANARIE Inc.: Canada's advanced Internet development organization (Ottawa, Ontario)

CFD: Computational Fluid Dynamics

CET: Cue Exposure Treatment

CGRL: Computer Graphics Research Laboratory (National University of Singapore)

CoI: Committee of Inquiry

COSMOS: Japanese CAVE-like system (designed in 1996 by Professor Takeo Ojika)

DoE: Department of Energy (USA)

DARPA: Defense Advanced Research Projects Agency (USA)

DISCOVER: Distributed & Collaborative Virtual Environments Research Laboratory (Ottawa, Canada)

DIVERSE: Device Independent Virtual Environments - Reconfigurable, Scalable and Extensible

DMD: Digital Micromirror Device displays

EMERGE: ESnet/MREN Regional Grid Experimental NGI Testbed (EVL project)

Euro-Link: project of the HPIIS program funded by NSF; its goal was to manage the connection of European
and Israeli National Research Networks to the high-performance vBNS and Abilene networks of North America

EVL: Electronic Visualization Laboratory (University of Illinois at Chicago, USA)





FE: Finite Element

FEELEX: a family of haptic devices designed to enable touching interactions. Two versions of such devices
have been designed: one for two-handed interaction using the whole palms, the other for touching a surface
using three fingers (Iwata Laboratory, University of Tsukuba, Japan)

FLC: Ferro-electric Liquid Crystal

FTTH: Fiber To The Home

HCI: Human-Computer Interaction

HIT Lab: Human Interface Technology Laboratory (University of Washington at Seattle, USA)

HIT Lab NZ: Human Interface Technology Laboratory New Zealand (University of Canterbury in
Christchurch, New Zealand)

HI-SPACE system: a new generation of VR device developed by the HIT Lab, whose goal is to create a
continuum between real and virtual worlds by using the objects of our physical space (walls, tables,
books, and other surfaces…) in such a way that they become the material supports for viewing electronic
information

HPIIS: High-Performance International Internet Services (program funded by NSF)

iCAIR: International Center for Advanced Internet Research (Northwestern University)

ICT: Institute for Creative Technologies (University of Southern California at Los Angeles, USA)

I-DRIVE: Inserting Dynamic Real Objects into Interactive Virtual Environments (UNC project)

IEBook: Immersive Electronic Book (UNC project)

IMRC: Imaging Media Research Center (KIST, Korea)

IMS: I-cubed Media Systems (CAVE-like system)

IMSC: Integrated Media Systems Center (University of Southern California at Los Angeles, USA)

INI-GraphicsNet: International Network of Institutions for advanced education, training and R&D in Computer
Graphics technology, systems and applications

IUCC: Israeli network, member of the Euro-Link consortium

ISI: Information Sciences Institute (University of Southern California at Los Angeles, USA)

KAIST: Korea Advanced Institute of Science and Technology

KIST: Korea Institute of Science & Technology

KJIST: Kwang Ju Institute of Science & Technology (Korea)

LCD: Liquid Crystal Display

LDAP: Lightweight Directory Access Protocol

LED: Light Emitting Diode

LITE: Louisiana Immersive Technologies Enterprise (University of Louisiana Lafayette, USA)





MagicBook: an AR and collaboration project allowing several users to look at a book from different
viewpoints (see HIT Lab)

MATES: Multimodal Approach to Tele-Experts Systems (CAIP project)

MIC-ART: ATR Media Integration & Communications Research Laboratories (see ATR)

MIRnet: member of the HPIIS program; a joint US-Russian project to provide next-generation Internet
services to collaborating US and Russian scientists and educators

MRE: Mission Rehearsal Exercise (ICT project)

MREN: Metropolitan Research and Education Network (see EVL)

MRT Circle Line: the fourth Mass Rapid Transit (or Metropolitan) line of Singapore

NAP: Network Access Point

NASA: National Aeronautics and Space Administration (USA)

NCSA: National Computational Science Alliance (USA)

NGI: Next Generation Internet

NIST: National Institute of Standards and Technology (USA)

NORDUnet: R&E network of Sweden, Denmark, Finland, Iceland and Norway, member of the Euro-Link
consortium

NPACI: National Partnership for Advanced Computing Infrastructure (USA)

NSF: National Science Foundation (USA)

PACI: NSF Partnership for Advanced Computational Infrastructure

PARIS™: Personal Augmented Reality Immersive System (EVL project)

PERF-RV: first French National Research Program on VR

QoS: Quality of Service

RCAST: Research Center for Advanced Science and Technology (University of Tokyo, Japan)

RENATER2: French network, member of the Euro-Link consortium

RTP: Real-Time Transport Protocol

RTCP: Real-Time Transport Control Protocol

PTSD: Post-Traumatic Stress Disorder

SAVG: Scientific Applications and Visualization Group (NIST team)

STAR TAP: Science, Technology, And Research Transit Access Point (see EVL)

STAR: Science and Technology of Artificial Reality lab (University of Tokyo, Japan)

STAR LIGHT: a project aiming to provide optical facilities for STAR TAP





STEVE: a virtual agent developed by the ISI laboratory for training applications in Virtual Environments (see the
VET and MRE projects)

SURFnet: Research and Education Network of the Netherlands, member of the Euro-Link consortium

TransPAC: member of the HPIIS program; a high-bandwidth international Internet connection from the
vBNS to APAN through the STAR TAP

UMDNJ: University of Medicine and Dentistry of New Jersey

UNC: University of North Carolina (USA)

U-VR: Ubiquitous Computing & Virtual Reality Lab (KJIST, Korea)

vBNS: very high performance Backbone Network Service

VCVR: Visual Computing & Virtual Reality Lab (Ewha Womans University, Korea)

VET: Virtual Environment for Training (ISI project)

Virtual Worlds Consortium: a HIT Lab consortium of companies for disseminating HIT Lab research results to
industry and for launching strategic research projects with industrial support

VisionStation (Elumens product): a display with a field of view (FOV) of 160°, whereas a standard flat
screen allows no more than 60°. This display creates a strong sense of space and depth without the need for
goggles or glasses. The large size of the VisionStation screen (1.5 meters) also helps promote an excellent sense
of 3D immersion

VOLFLEX: a haptic interface that provides a clay-like physical surface; clay is a very popular tool for
shape design and plastic art (Iwata Laboratory, University of Tsukuba, Japan)

VPS: Voxmap PointShell™, a voxel-based approach for collision detection designed and developed by Boeing,
and distributed by SensAble Technologies

VRAC: Virtual Reality Application Center (Iowa State University at Ames, USA)

VRD: Virtual Retinal Display (see HIT Lab)

VRHIT: Virtual Reality & Human Interface Technology Lab (Tsinghua University, China)

VRSJ: Virtual Reality Society of Japan








Table of contents
LIST OF ABBREVIATIONS................................................................................................................................2

TABLE OF CONTENTS.......................................................................................................................................6

EXECUTIVE SUMMARY....................................................................................................................................8

INTRODUCTION..................................................................................................................................................9

PART 1: BENCHMARKING OF VR/VE RESEARCH (MAIN ACTORS) IN THE WORLD...................10

1      VR/VE RESEARCH IN THE US ...............................................................................................................10
    1.1     CAIP CENTER, RUTGERS UNIVERSITY, PISCATAWAY............................................................................ 10
      1.1.1      Team overview .............................................................................................................................. 10
      1.1.2      Main research topics and/or demos .............................................................................................. 11
    1.2     SCIENTIFIC APPLICATIONS AND VISUALIZATION GROUP, NIST, WASHINGTON ..................................... 12
      1.2.1      Team overview .............................................................................................................................. 13
      1.2.2      Main research topics and/or demos .............................................................................................. 13
    1.3     DEPARTMENT OF COMPUTER SCIENCES, UNC (UNIVERSITY OF NORTH CAROLINA), CHAPEL HILL...... 14
      1.3.1      Team overview .............................................................................................................................. 14
      1.3.2      Main research topics and/or demos: hardware ............................................................................ 15
      1.3.3      Main research topics and/or demos: software .............................................................................. 16
    1.4     ELECTRONIC VISUALIZATION LABORATORY (EVL), UNIVERSITY OF ILLINOIS AT CHICAGO ................. 20
      1.4.1      Team overview .............................................................................................................................. 20
      1.4.2      Main research topics and/or demos .............................................................................................. 21
    1.5     VIRTUAL REALITY APPLICATION CENTER (VRAC), IOWA STATE UNIVERSITY AT AMES. .................... 25
      1.5.1      Team overview .............................................................................................................................. 25
      1.5.2      Main research topics and/or demos .............................................................................................. 26
    1.6     ADVANCED DESIGN SYSTEMS GROUP AT BOEING, SEATTLE............................................................... 27
      1.6.1      Team overview .............................................................................................................................. 27
      1.6.2      Main research topics and/or demos .............................................................................................. 27
    1.7     HIT LAB (HUMAN INTERFACE TECHNOLOGY LABORATORY), UNIVERSITY OF WASHINGTON, SEATTLE. ...... 29
      1.7.1      Team overview .............................................................................................................................. 29
      1.7.2      Main research topics and/or demos: hardware ............................................................................ 30
      1.7.3      Main research topics and/or demos: software .............................................................................. 30
    1.8     INFORMATION SCIENCES INSTITUTE (ISI) AT UNIVERSITY OF SOUTHERN CALIFORNIA (USC); LOS
    ANGELES. .......................................................................................................................................................... 32
      1.8.1      Team overview .............................................................................................................................. 32
      1.8.2      Main research topics and/or demos .............................................................................................. 33
    1.9     INSTITUTE FOR CREATIVE TECHNOLOGIES (ICT) AT UNIVERSITY OF SOUTHERN CALIFORNIA (USC);
    LOS ANGELES. ................................................................................................................................................... 34
      1.9.1      Team overview .............................................................................................................................. 34
      1.9.2      Main research topics and/or demos .............................................................................................. 34
    1.10 INTEGRATED MEDIA SYSTEMS CENTER (IMSC) AT UNIVERSITY OF SOUTHERN CALIFORNIA (USC); LOS
    ANGELES. .......................................................................................................................................................... 35
      1.10.1 Team overview .............................................................................................................................. 35
      1.10.2 Main research topics and/or demos .............................................................................................. 36
    1.11 BROWN UNIVERSITY, COMPUTER GRAPHICS LAB. ................................................................................ 36
      1.11.1 Team overview .............................................................................................................................. 36
      1.11.2 Main research topics and/or demos .............................................................................................. 36
      1.11.3 Links .............................................................................................................................................. 37
    1.12 LOUISIANA IMMERSIVE TECHNOLOGIES ENTERPRISE (LITE), UNIVERSITY OF LOUISIANA LAFAYETTE 38
      1.12.1 Team overview .............................................................................................................. 38
      1.12.2 The general research focuses ........................................................................................................ 38
      1.12.3 Main research topics and/or demos .............................................................................................. 38



      1.12.4 Links .............................................................................................................................................. 39
    1.13 VIRGINIA TECH (VT), 3DI GROUP. ........................................................................................................ 39
      1.13.1 Team overview .............................................................................................................................. 39
      1.13.2 Main research topics and/or demos .............................................................................................. 40
      1.13.3 Links .............................................................................................................................................. 40
2      VR/VE RESEARCH IN ASIA ....................................................................................................................41
    2.1     CHINESE LABORATORIES........................................................................................................................ 41
      2.1.1     Virtual Reality & Human Interface Technology Lab, Tsinghua University .................................. 41
      2.1.2     Virtual Reality Lab (VRLab), Wuhan University........................................................................... 42
      2.1.3     State Key Lab of CAD&CG, Zhejiang University ......................................................................... 43
    2.2     KOREAN LABORATORIES ........................................................................................................................ 45
      2.2.1     POSTECH, Virtual Reality Lab, Pohang University of Science and Technology ......................... 45
      2.2.2     Virtual Reality Lab of KAIST (Korea Advanced Institute of Science and Technology) ................ 46
      2.2.3     Cognitive Agent in Virtual Environment Laboratory Overview (CAVE), Hanyang University .... 47
      2.2.4     Visual Computing & Virtual Reality Lab (VCVR), Ewha Womans University ............................. 48
      2.2.5     Ubiquitous Computing & Virtual Reality Lab (U-VR), Kwang Ju Institute of Science &
      Technology (KJIST) ...................................................................................................................................... 49
      2.2.6     Imaging Media Research Center (IMRC), Korea Institute of Science & Technology (KIST)....... 50
    2.3     SINGAPORE LABORATORIES ................................................................................................................... 51
      2.3.1     Center for Advanced Media Technology (CAMTech), Nanyang Technological University.......... 51
      2.3.2     Computer Graphics Research Laboratory (CGRL), National University of Singapore................ 52
    2.4     JAPANESE LABORATORIES ...................................................................................................................... 53
      2.4.1     The University of Tokyo ................................................................................................................ 54
      2.4.2     Iwata Laboratory, University of Tsukuba ..................................................................................... 56
      2.4.3     Sato-Koike group, Tokyo Institute of Technology ......................................................................... 57
      2.4.4     Ikei laboratory, Tokyo Metropolitan Institute of Technology ....................................................... 58
      2.4.5     Human interface Engineering lab, The Osaka University............................................................. 58
      2.4.6     The Gifu Region ............................................................................................................................ 59
      2.4.7     ATR Media Integration & Communications Research Laboratories (MIC-ART), Advanced
      Telecommunication Research Institute International (ATR)......................................................................... 60
3      VR/VE RESEARCH IN SOME OTHER WORLD REGIONS...............................................................62
    3.1     CANADA ................................................................................................................................................ 62
      3.1.1     AMMI Lab (Advanced Man Machine Interface Laboratory), Department of Computing Science,
      University of Alberta ..................................................................................................................... 62
      3.1.2     DISCOVER (Distributed & Collaborative Virtual Environments Research Laboratory) School of
      Information technology, University of Ottawa.............................................................................................. 65
    3.2     NEW ZEALAND LABS ............................................................................................................................. 72
      3.2.1     University of Otago, Dunedin, New Zealand, Department of Information Science, Human-
      Computer Interaction (HCI) ......................................................................................................................... 72
      3.2.2     Human Interface Technology New Zealand (HIT Lab NZ) ........................................................... 73
    3.3     BRAZIL .................................................................................................................................................. 75
      3.3.1     Laboratory of Integrated Systems Polytechnic School - University of São Paulo – Brazil........... 75
      3.3.2     Tecgraf/PUC-Rio .......................................................................................................................... 78
PART 2: WORLDWIDE ANALYSIS OF VR/VE RESEARCH.....................................................................82

4 COMPARISON BETWEEN THE US AND EUROPE ON VR/VE RESEARCH, AND
POTENTIAL EUROPEAN POLICIES.............................................................................................................82

5 COMPARISON BETWEEN ASIA AND EUROPE ON VR/VE RESEARCH, AND
POTENTIAL EUROPEAN POLICIES.............................................................................................................86

6      SOME APPLICATION PRIORITIES FOR VR/VE RESEARCH IN EUROPE ..................................88

CONCLUSION ....................................................................................................................................................89








Executive Summary

Deliverable D1.A_6, entitled “Update of the Synthesis of VR/VE skills in Europe and
worldwide”, is an upgrade of the previous D1.A_2 report delivered at M18 by CNRS.

Expected at M32, the main objectives of the D1.A_6 document were:
   1. to integrate into the previous D1.A_2 report additional inputs from the INTUITION
      partners;
   2. to obtain global approval from the partners on the previous D1.A_2 report regarding:
          o the analysis of the main VR/VE research teams in the world outside Europe;
          o the proposed actions and recommendations for a European research policy.

In addition to CNRS labs, the other INTUITION partners that contributed to this D1.A_6
report are: ALENIA, FhG/IAO, OVIDIUS, and TNO.






Introduction
The present Deliverable discusses the VR human resources and capabilities of the
INTUITION partners in comparison with facilities available worldwide. It seeks to identify
shortcomings, synergies and overlaps, and then sketches proposals for structuring the
ERA in terms of research capabilities and skills, and ways to define competitive
orientations.
Deliverable D1.A_6, entitled “Update of the Synthesis of VR/VE skills in Europe and
worldwide”, is an upgrade of the D1.A_2 report, whose genesis was itself an internal report
based on WP1.2 partners’ contributions, entitled “Inventory of the past and ongoing VR
research activities in Europe” (milestone M1.2_1), delivered at M18 by CNRS. Today, a
large amount of the information gathered in the M1.2_1 report is accessible through the
INTUITION middleware, while an upgraded version of that internal report has been
annexed to the D1.A_3 report. Indeed, within chapter 5 of that previous WP1.A.2
Deliverable, CNRS had provided a “Synthetic analysis on the Inventory of past and
ongoing research activities in Europe”. Of course, this analysis could not be exhaustive,
for several reasons (limited investigation time, confidentiality of the information, difficulty
in obtaining detailed information from institutions outside Europe, and so on). However, it
already identified a set of problems, synergies and overlaps between the European research
teams of INTUITION.
Consequently, the present report focuses on benchmarking VR/VE research in the world
outside Europe (Part 1), providing an overview of some main actors not only in the US and
Asia, but also in Canada, New Zealand and Brazil, which can be considered as US
satellites. It then proposes a global analysis of US and Asian VR/VE research teams, and
underlines some application priorities for VR/VE research in Europe (Part 2). Based on these
analyses, the conclusion of the D1.A_6 report proposes not only a set of priorities (neither
exhaustive nor ordered) for VR/VE research in Europe, but also some fundamental
recommendations for a European research policy in the VR/VE domain. These priorities and
recommendations should feed the global INTUITION roadmap on research aspects.
The upgrade process was launched on 20 March 2007 by a call for contributions addressed
to all INTUITION partners (with a reminder e-mail on 29 March) and a deadline of 6 April
2007. The goal of that call was to prepare the D1.A_6 report from three types of
contributions:
   1. collecting important research teams outside Europe that were missing, because Part 1
      of the D1.A_2 report (called “Benchmarking of VR/VE research in the world”) was
      based only on an overview of the main VR laboratories in the US and Asia;
   2. enhancing and approving (or rejecting) the content of Part 2 of the D1.A_2 report,
      previously called “Plan and roadmap toward structuring the ERA in VR/VE” and
      renamed “Worldwide analysis of VR/VE research” in the current D1.A_6 report;
   3. enhancing and approving (or rejecting) the content of the Conclusion of the D1.A_2
      report, where we expected sufficient feedback from INTUITION partners to define
      priorities within the “action list for VR/VE research in Europe”.
In spite of the call and the reminder addressed to the entire INTUITION consortium, only five
partners contributed to this report, namely CNRS, ALENIA, FhG/IAO, OVIDIUS and
TNO.






Part 1: Benchmarking of VR/VE research (main
actors) in the world
The benchmarking of VR/VE research in the world mainly consists of an overview of the main
actors in the US and Asia, as VR research outside Europe is mostly concentrated in these two
areas. However, the VR research activities of some labs in a small set of US
satellite countries (Canada, New Zealand and Brazil) are also reported.


1 VR/VE research in the US
This section is an overview of the research activities of thirteen US labs in the field of Virtual and
Augmented Reality. This overview is based on:
   -   an analysis of ten VR labs, which were visited during a US trip organized in June
       2003 by the French Ministry of Research for some representative members of PERF-
       RV (http://www.perfrv.org/)
   -   an update of the information collected on these previously visited labs, plus an
       analysis of three additional labs, necessary because of their importance and
       visibility; these enhancements were mainly based on web research
In this synthetic presentation of the research activities of these US labs, we have tried to
underline some aspects of their success stories.


1.1 CAIP Center, Rutgers University, Piscataway
Contact: Pr. G. Burdea et al.
Website: http://www.caip.rutgers.edu/aboutus/research.html

1.1.1 Team overview
Rutgers University is presented as “inquiry oriented”: a committee is in charge of specifying
research policies according to the needs of member companies, which the CAIP (Center for
Advanced Information Processing) then tries to meet. The scope of CAIP research is
broader than VR. It accommodates approximately 75 students (not only PhD candidates) in a
university that counts more than 650 students. Human resources (teachers, PhD students, etc.)
are financially supported by the university, but equipment must in general be financed by
projects on external funds. An exception is their interesting concept of equipped rooms for VR
training courses (the VR teaching lab), composed of a set of low-cost VR workstations (about $5k each):
   -   a PC with graphics card, active stereoscopy and shutter glasses
   -   a network-shared tracking system
   -   low-cost interactive VR devices: data glove (5DT), joystick, space-ball
   -   a game-controller haptic device (SideWinder)
   -   software developed in the lab, based on VRML and Java3D
   -   a course tutorial: a book by Professor G. Burdea
On the other hand, to finance its research projects, CAIP maintains an average of 25 industrial co-
operations (the CAIP member companies).




The laboratory appears more oriented towards basic teaching and research than towards large
national VR programmes (military, transport, education). The experiments are technologically
mature without having reached the application stage. The haptic demonstrations presented were
recent at the application level (rehabilitation), but the haptic device technology itself is classical.
In the following sections we report some of the main research activities or demos.

1.1.2 Main research topics and/or demos

       1.1.2.1 Rutgers Ankle Rehabilitation Interface
An ankle haptic interface, coupled to a visual display system on a standard workstation, is
designed for the functional rehabilitation of the lower limbs. Designed for at-home use and
Internet-based remote monitoring by therapists, this interface allows patients to interact with
motivating virtual environments as they exercise. The ankle force-feedback device is used to
control, with the foot, a video game representing a plane on a screen. Concerning the
visual feedback of this application, one can notice that fog is exploited to facilitate scene
comprehension at the cognitive level.
According to the researchers, a notable advantage of this system is that it allows rehabilitation
even a few years after a trauma. The project is developed in cooperation
with the University of Medicine and Dentistry of New Jersey (UMDNJ), aiming at low-cost
products for a large market and a durable benefit for patients.

       1.1.2.2 RM II Hand Master and Haptic Control Interface
The RM II Hand Master is a light (100 g) haptic glove based on pneumatic technology,
which does not yet seem to be marketed, while a previous version of the device (RM-I) was
evaluated at NASA.
The demonstration of the Haptic Control Interface of this device primarily implemented
stiffness simulation, although generally only a low stiffness is attainable with this technology.
The prototype is being improved: progress has been made on the noise of the pneumatic
control system. A noted disadvantage is that the hand can no longer be closed completely.
A current research focus for the Haptic Control Interface seems to be large-volume
displays with force feedback.
Several applications of this device have been developed:
   -    Airport Luggage Inspection Simulation, supported by the Federal Aviation
        Administration;
   -    Knee palpation for medical training and Liver palpation for Tumor/Cyst detection
   -    Telerehabilitation with force feedback (remote rehabilitation) with Stanford.

       1.1.2.3 Augmented Reality and Collaborative Interface
A demonstration presented a game allowing the manipulation and combination of real and
virtual cubes on a lightweight system (a laptop and a see-through display).
A real application of this work is the MATES project (Multimodal Approach to Tele-Expert
Systems) for Remote Collaboration in Real and Virtual Worlds. The key research issues are
synchronization and registration of real and virtual worlds, intelligent software agents, speech





and image processing, seamless remote collaboration, etc. DARPA and Avaya
Communications (a CAIP member company) support this project.

      1.1.2.4 Multimodal interaction for VR/VE applications
The CAIP center has several research activities in the field of multimodality, in input as well
as output modalities, applied to VR or AR interactive contexts. For instance, under the NSF
STIMULATE program (Flanagan et al., 1999), CAIP was conducting research to establish,
quantify and evaluate techniques for designing synergistic combinations of human-machine
communication modalities. The modalities under study were sight, sound and touch
interacting in collaborative multi-user environments. Several demonstrators are already
available.
A first demonstrator deals with multimodal interaction on PDAs, integrating speech and pen
inputs. It is hard to assess what lies behind the demonstrations in terms of semantic
processing and multimodality. However, the CAIP Center has a long background in Speech
Recognition and Language Processing. The innovative aspect of this work is the multimodal
combination of vocal events and graphical interaction for the use of a PDA. According to the
researchers, portable applications targeted by this new PDA interface include reporting
after car accidents and house-retailing activities. Industrial partners do not yet seem
interested in such applications, but the project appears to be supported by DARPA for
voice encoding on low-bandwidth channels. Nevertheless, portable systems (such as
organizers and PDAs) already allow a strong HCI integration, so the usability of speech
recognition is not obvious in the presented applications. But clearly, multimodal interaction
including speech recognition seems to have real prospects for interaction in Virtual
Environments, for many applications.
A second demonstrator focuses on a multimodal user interface based on gaze tracking and
speech recognition (without keyboard interaction). Gaze tracking uses the ISCAN system,
comprising a CCD video camera and an infrared light emitter. The multimodal fusion is
centralized on one CPU, without the distributed software architecture that is common in a
VR context and that requires time synchronization of computers if dated events are needed.
The demonstration did not reveal whether the fusion of events was simply syntactical (i.e.
fusion is decided on typical sequences of events) or whether time coherency was also taken
into account (i.e. events are timestamped, but only merged if they happen within a
certain time range).
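The distinction raised here can be made concrete: a fusion engine that merges timestamped events only when they fall within a given time window enforces temporal coherency, whereas a purely syntactical engine would ignore the timestamps. A minimal illustrative sketch (the `Event` and `fuse` names are hypothetical, not CAIP's actual API):

```python
from dataclasses import dataclass

@dataclass
class Event:
    modality: str     # e.g. "speech" or "gaze"
    payload: str
    timestamp: float  # seconds

def fuse(events, max_gap=0.5):
    """Merge each speech event with the temporally nearest gaze event,
    but only if their timestamps lie within max_gap seconds of each
    other (temporal coherency).  Returns (speech, gaze) payload pairs."""
    speech = [e for e in events if e.modality == "speech"]
    gaze = [e for e in events if e.modality == "gaze"]
    fused = []
    for s in speech:
        if not gaze:
            continue
        nearest = min(gaze, key=lambda g: abs(g.timestamp - s.timestamp))
        if abs(nearest.timestamp - s.timestamp) <= max_gap:
            fused.append((s.payload, nearest.payload))
    return fused
```

A syntactical-only engine would drop the `max_gap` test and merge any speech/gaze pair that matches an expected sequence, regardless of when the events occurred.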
At a more generic level, CAIP has designed “A Framework for Rapid Development of
Multimodal Interfaces” (Frans Flippo et al., 2003). Using this framework, a multimodal
interface was created for Flatscape, a collaborative situation-map application for planning
military missions, using icons representing units on a map overlay. Additionally, the
application can track moving robotic vehicles on the same map and give them direct control
commands or assign them higher-level “missions”. Camera feedback from the robots is
available and can be viewed on screen. Camera images are also exploited by the robots for
target recognition. However, this application does not really use the immersive aspects of
VR/VE technologies, even though the proposed software framework could apparently
already support them.


1.2 Scientific Applications and Visualization Group, NIST,
    Washington
Contact: Dr. Judith E. Devaney, John Kelso et al.




Website: http://math.nist.gov/mcsd/savg/

1.2.1 Team overview
The SAVG team is part of the Mathematical and Computational Sciences Division of the
Information Technology Laboratory of NIST (National Institute of Standards and
Technology). Out of a staff of 12 people, 5 work specifically on the visualization of scientific
data in VR. Created in 1901, NIST is a federal agency under the US Department of
Commerce. Its missions, directly oriented towards industry, are "to develop and promote
standards that increase productivity, facilitate trade and contribute to improving the quality
of life". The principal mission of the SAVG team is to provide the whole of NIST and
industry with a hardware platform and software tools for parallel computation and the
interactive 3D visualization of scientific data. By vocation, this team is oriented towards
support for companies. However, it also has a research and development activity on VR
software.

1.2.2 Main research topics and/or demos
While still a VR actor, NIST activity is nowadays more focused on internal services. It does
not have a large budget, and its immersive environment IVL (Immersive Visualization
Laboratory) is composed of:
   -   2 rear-projected vertical screens, which seem to have been the prototype modules of
       the Fakespace RAVE system
   -   a MotionStar tracking system (fixed to the ceiling with nylon bolts and threaded
       rods)
   -   active stereoscopy (CrystalEyes shutter glasses)
   -   pointing and navigation control using a Wanda device
   -   computer: a Silicon Graphics Onyx 3400
The software platform used is DIVERSE (Device Independent Virtual Environments -
Reconfigurable, Scalable, Extensible), an open-source VR library developed at Virginia Tech;
one of its authors, John Kelso, is today a member of the staff of the SAVG team.
In this context, the group has developed around this software core a large set of
complementary tools making it possible to quickly visualize scientific data in the NIST
immersive environment with little or no specific graphics development. Thanks to these
tools, 7 types of scientific applications using VR are currently operational within the
SAVG group of NIST:
        -    Smart Gels,
        -    High-Performance Concrete,
        -    Tissue Engineering,
        -    Dielectric Breakdown,
        -    Electromagnetic fields of Photonic Crystals,
        -    Nanostructures,
        -    3D reconstruction from Chemical Imaging at the Nanoscale.
Some of these tools aim at graphical and immersive data mining. This activity has been built
upon several previous software development efforts, most notably the Glyph ToolBox (GTB),
a set of tools for creating three-dimensional glyphs: a glyph is a symbol that conveys
information visually. In GTB, a glyph most commonly refers to an object with various
attributes that can be changed to represent various properties of the data. GTB contains both




primitive and meta commands for creating a large number of glyphs. Each GTB function is
written as a simple command designed to do one small task well, and the GTB file format is an
ASCII text file. These glyphs are viewable in the NIST RAVE, the GTB file format being
handled by viewing programs such as diversifly (a DIVERSE display utility for interactive
viewing within the immersive environment), or by VRML and Open Inventor viewers.
However, this software platform for VR scientific applications is apparently not coupled with
tools helping the analysis and interpretation of the data resulting from these various
application domains. This type of processing is done upstream, with methods specific to each
field of expertise. In other words, the VR work of this team mainly aims at developing
data-presentation paradigms, paradigms which moreover are designed for immersion only in
the visual modality.


1.3 Department of Computer Sciences, UNC (University of
    North Carolina), Chapel Hill
Contact: Prof Henry Fuchs et al.
Website: http://www.cs.unc.edu/Research/

1.3.1 Team overview
The keywords of UNC are: multidisciplinary teams and transversal research activities. Its
flexible operating mode is based on mixing "students + ideas + equipment" and on abolishing
barriers both between scientific disciplines and between laboratories. It is with this
philosophy that the Department of Computer Science was created in 1978; it does not have a
specific VR/AR laboratory, but nevertheless develops a very rich research activity in this
field.
Indeed, of the several Graphics and Image Analysis research groups and projects of this
department, ~70% focus directly on VR/AR topics, some directed towards generic
problems:
        -    Advanced Real-Time Rendering and Avatar Reconstruction to make virtual
             environment systems more effective for doing real work;
        -    Design and implementation of algebraic, geometric, and numeric algorithms for
             Collision Detection, Haptics, Motion Planning;
        -    Tracking for Augmented Reality Research and Wide-Area Tracking;
        -    Collision Detection, Massive Model Rendering, Visibility Culling;
the others aiming at high-level development or very original design of VR/AR applications:
        -    Surgical Intervention with Augmented Reality
        -    Real-time 3D Tele-Immersion for the “Office of the Future”
        -    Instruments and 3D force interfaces that enable scientists to both see and interact
             with the world on the nano scale.
        -    Virtual painting, and so on.
The key technical skills of the Department of Computer Science are hardware and software
architectures, high-performance graphics visualization and vision analysis. Its application
fields are medical imaging, molecular modelling, and the computation and graphical
modelling of complex geometries.




April 2007     14               CNRS/LIMSI – CNRS/IBISC – FhG/IAO –ALENIA – OVIDIUS – TNO
Deliverable N. D1.A_6         Dissemination Level : CO       Contract N. IST-NMP-1-507248-2


Overall, the Department of Computer Science is a public university structure; its primary
vocation is teaching, and it comprises a permanent staff of approximately 25 members. Two
thirds of its financing comes from outside contracts (research funds), the remainder from
state support. The department counts approximately 140 graduate students and has many
sponsors: Honda, Intel, NIH (including the National Institutes of Health Center for Research
Resources in Biomedical Technology), NSF, the Office of Naval Research, the US Army,
DARPA, the Sloan Foundation, the US Department of Energy, PIE Medical Equipment,
ARO, the LINK Foundation, the National Physical Science Foundation, and Reddi-Foam
Inc.
In the VR/AR domain, their activity covers hardware aspects as well as software ones. Their
principal fields of expertise are:
        -    haptics,
        -    multiresolution modelling,
        -    3D painting,
        -    human-computer interaction,
        -    collision detection,
        -    real-time massive model visualization,
        -    real-time global illumination.
They use a large quantity of very diversified equipment. They do not have any large
immersive display such as a CAVE or RAVE, but rather HMDs and see-through devices,
which allow them to develop a great diversity of demonstrators reflecting the richness of
their research activities.

1.3.2 Main research topics and/or demos: hardware

      1.3.2.1 High resolution tiled displays
The prototype of this graphic display is based on 8 projection areas with overlapping image
boundaries, therefore 8 projectors each one driven by a PC. UNC is specialized in frontal
projection (a camera calibration is necessary) whereas others teams use retro-projection. The
prototype makes a geometrical correction of images, but not photometric one. With circular
polarizing filters, they can make stereoscopy with this graphic device. The applications, which
use this graphic display, are developed with the Chromiun library (two pass, one for
rendering, the other for texture mapping), derived from WireGL, with soft genlocking.
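For tiled displays with overlapping projection areas, a standard way to treat the overlap photometrically is to apply complementary linear intensity ramps so that the summed brightness stays constant across the seam. A minimal sketch of such a ramp, illustrative only (function names are not from the UNC system, which the text notes does not yet perform photometric correction):

```python
def ramp_weights(width, overlap):
    """Per-column intensity weights for one projector whose right edge
    overlaps its neighbour by `overlap` pixels: weight 1.0 outside the
    overlap band, falling linearly towards 0 at the extreme edge."""
    w = [1.0] * width
    for i in range(overlap):
        w[width - overlap + i] = 1.0 - (i + 1) / (overlap + 1)
    return w

def mirrored(w):
    """The neighbour's left-edge ramp is simply the mirror image, so
    that left weight + right weight == 1.0 everywhere in the band."""
    return list(reversed(w))
```

Multiplying each projector's framebuffer by its ramp makes the two contributions sum to a constant brightness in the overlap region.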

      1.3.2.2 Acquisition of 3D models
They have a room for acquisition of 3D models, which is equipped with many cameras and a
large variety of variable light sources.
Their pitch, for medical applications, is that temporal manipulation of the lights is crucial, and
their approach is to synchronize the lighting condition with the phase of acquisition and work
on the studied model (real-time captures).
They have also a laser system for 3D acquisitions, which they used in particular for scanning
models of working scenes (offices, rooms) used for remote visualization of collaborative
environments to design solutions in the field of the “offices of the future”.






      1.3.2.3 Microscopic palpation
Haptic feedback, using PHANToM devices, on a 3D model acquired by laser microscopy
(e.g. circuit epitaxy). Applications: nanotechnology manipulation.

1.3.3 Main research topics and/or demos: software

      1.3.3.1 Haptics for Art applications
One of the studied paradigms uses haptics for Virtual Painting. The proposed method uses
a PHANToM device for painting on a virtual two-dimensional support (drawing plane),
allowing intuitive colour choice, colour mixing, and changing the brush width according to
hand pressure. This project is based on an analysis of the artistic process, to make
artist-machine interfaces more natural and the visual effects as realistic as possible.
Other issues: painting on 3D objects (already available according to their web site) and 3D
virtual painting (e.g. knife painting).

      1.3.3.2 Visual rendering for CAD massive models
Visual rendering of massive CAD models is a problem for naval design, avionics, industrial
facilities, and urban environments. Three topics are mainly investigated:
        - Hierarchical LoDs for interactive walkthroughs within complex environments:
        On this topic the approach is based on the development and combination of a set of
        techniques (geometric approximation, hierarchical aggregation, software
        distribution of tasks over hardware, occlusion-culling algorithms) for optimizing
        the interactive visualization (virtual visit) of large CAD models. The demonstrators
        of this approach ran on simple laptops.
        - Real-time illumination and interactive shadow generation in complex
            environments:
        This topic focuses on increasing the realism of virtual scenes by developing
        solutions for real-time shading. The goal is to enable a better understanding of a
        complex 3D environment (a good apprehension of the perspective view and of the
        occlusions between 3D objects). To do so, they combine shadow maps and shadow
        volumes. With 3 synchronized PCs equipped with GeForce4 graphics cards, they
        obtained realistic shading at 7 to 25 frames/sec for 3D models of several million
        triangles.
        - Path planning for real-time visualization of massive 3D models:
        To obtain interactive navigation, they use path planning to drive several scene-
        management techniques. Some are well-known techniques, such as model
        partitioning or the substitution of object geometry by images mapped on billboards
        (impostors). In addition, they developed an original “pipeline system”, which
        uses an integrated, dedicated management of the CAD database to distribute and
        synchronize data across the CPUs of the computers and the graphics and texture
        memories of the graphics cards.
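The substitution of geometry by billboard impostors, like LoD selection in general, is typically driven by a screen-space test: the projected size of an object decides which representation is rendered. A hedged sketch of such a selection heuristic (thresholds and names are illustrative, not taken from the UNC pipeline):

```python
import math

def select_representation(object_radius, distance,
                          pixels_per_radian=1000.0,
                          lod_threshold=10.0, impostor_threshold=2.0):
    """Pick a representation from the object's projected size on screen
    (in pixels): full geometry when large, a coarser LoD when smaller,
    and a billboard impostor when tiny."""
    projected = 2.0 * math.atan2(object_radius, distance) * pixels_per_radian
    if projected < impostor_threshold:
        return "impostor"
    if projected < lod_threshold:
        return "coarse_lod"
    return "full_geometry"
```

Run once per object per frame, such a test keeps the triangle budget roughly constant as the viewer navigates the model.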
For benchmarks, the data sets on which they tested the performance of their prototypes
are:
        -     visualization of CAD models:
             o 13 million triangles for a Powerplant building
             o 82 million triangles for a Tanker ship




             o 470 million triangles for the Boeing 777 plane
         -    visualization of simulation results:
             o 27,000 time steps, on a data grid of size 2048×2048×1920, providing
                 3 TB of multivariable data
             o 500 million triangles for each isosurface

      1.3.3.3 Offices of the future: 3D teleimmersion
UNC has an installation with two superimposed projectors (front projection), a 2 m × 1 m flat
screen, and head tracking (rows of sensors on the ceiling), used to test situations of remote
3D collaboration (with stereoscopic visualization).
The objective of the project is to develop a system for remote collaborative work that
supports real-time transmission of both video signals and graphics information, by
combining panoramic images, “tiled display” projection systems, image-based modelling
techniques, and immersive environment technologies.
A demonstration shows the stereoscopic image of the distant interlocutor, refreshed at only 2
frames/sec, while the static background (an office reconstructed from a 3D laser scan) is
rendered at 20-30 Hz.

      1.3.3.4 AR for ultrasound Biopsy
This is an AR application assisting the removal of cancerous breast tissue. Real images
coming from the patient are combined with a 3D virtual model, to guide the user of the
system. Visual feedback is displayed on a stereoscopic HMD device. The system integrates
an extraction probe (laparoscope) and an exploration probe (ultrasound imaging), which
enables two-handed work. To simulate the effective haptic feedback of the breast, a specific
prop is used. The graphics machine running this application is an SGI Reality Monster. The
tracking of the HMD and the two probes is done by an optoelectronic system.
The prototype is based on the UNC expertise in model building for surgery, where there is
no texture and where fluids are present (little contrast), i.e. surfaces sensitive to specular
light that are very difficult to reconstruct by traditional methods. In the HMD device, the
operator perceives a hybrid stereoscopic image: the 2D ultrasound image, and the image of
the 3D model of the reconstructed breast.

      1.3.3.5 Mixed Reality labyrinth for psychophysics experiments
This Mixed Reality application, carried out with researchers in cognitive science, makes it
possible to compare walking behaviour in real and virtual environments. It consists in
experiencing travel in a Mixed Reality world, for both immersed and non-immersed
subjects. Performance measurements (speed, hesitations, etc.) allow the behaviours of the
two groups to be compared.
The Mixed Reality scene is composed of a real labyrinth built with polystyrene walls (real
tactile perception) combined with its virtual reproduction (stereoscopic perception through a
3rdTech HMD). Thanks to wide-angle optical sensors (3rdTech tracking, emitters at the
ceiling, triangulation via infrared cameras) and to the HMD, the user can really move in the
virtual scene, with a perfect coincidence between the physical walls and the image projected
in the HMD.






The limits of the experiment are that the HMD is not wireless and that the immersed user does
not see his arms when they touch the walls.

      1.3.3.6 Immersive Electronic Book (IEBook)
Tele-immersion will provide a dramatic new medium for groups of people separated from
each other to work and share experiences together in an immersive 3D virtual environment,
much as if they were co-located in a shared physical space. Immersive electronic books,
which in effect blend a "time machine" with 3D hypermedia, will add an important
additional dimension: the ability to record experiences in which a viewer, immersed in the
3D reconstruction, can literally walk through the scene or move backward and forward in
time. There are many potential application areas for such novel technologies: design and
virtual prototyping, maintenance and repair, palaeontological and archaeological
reconstruction, and so on. The IEBook prototype of the Department of Computer Science at
UNC focuses on a societally important and technologically challenging driving application:
teaching the surgical management of difficult, potentially lethal injuries. The prototype is
based on 4 components: acquisition, reconstruction, authoring, and immersive visualization.
The acquisition equipment is a camera cube: a unit constructed from rigid modular
aluminium frames (one metre per side), which carries four fluorescent high-frequency linear
lights and eight colour cameras. Such lights reduce specular reflections and provide even
illumination. The cameras support synchronization, which ensures that all eight cameras
image simultaneously, a prerequisite for the reconstruction of dynamic 3D models from the
2D images. To accommodate the bandwidth needed for 15-frames-per-second acquisition,
the cameras are configured into four camera-pair groups. Each group includes two cameras,
a hub, a sync unit, and a rack-mounted server.
The first step in creating an IEBook is capturing the event of interest using the camera cube.
The basic process involves calibrating the cameras, capturing synchronized video, and
converting the raw images into the RGB colour space for reconstruction. The 3D
reconstruction process then involves two major steps: a reconstruction of 3D points from the
2D images using view-dependent pixel colouring, and a reconstruction of 3D surfaces from
the 3D points using application-specific point filtering and triangulation to create 3D
meshes.
Their approach to authoring combines 2D and 3D interaction techniques. The motivation for
this hybrid 2D/3D approach is to provide a familiar and tangible means of sketching (the
notepad paradigm) while simultaneously offering a natural and immersive means of viewing
the dynamic 3D data and the evolving IEBook. The principle is that an author annotates an
IEBook in the VR-Cube using an authoring interface on a Tablet PC.
Using VCR-like time controls, the author navigates through time in the captured sequence,
looking for an interesting or important event. The author moves to a viewpoint where he/she
has a good view of the surgeon’s actions at that moment, and takes a snapshot of it using a
button on the Tablet PC. Using the same Tablet PC interface, the author can highlight
features, annotate the snapshot, and save the results to a virtual gallery in the IEBook. The
author can arrange the snapshots hierarchically by dragging their titles in the Tablet PC
application.
About immersive visualization, they mainly developed a hybrid display system that combines
a head-mounted display and projector-based displays to simultaneously provide a high-quality




stereoscopic view of the procedure and a lower-quality monoscopic view for peripheral
awareness. The HMD used has no baffling material around the displays that would block the
wearer’s peripheral view of the real world. In addition, four projectors render view-dependent
monoscopic imagery on synthetic materials that match the operating table and nearby walls.

      1.3.3.7 Inserting Dynamic Real Objects into Interactive Virtual Environments (I-
              DRIVE)
The challenge of this project is to address two areas: visually faithful user representations
(avatars), and natural interaction with the environment. Typically, when users move their
arms into the field of view, the objective is to show accurately lit, pigmented, and clothed
arms. When users carry real objects into the environment, those objects are expected to
affect the virtual environment. In other words, the field of interest of the I-DRIVE project is
Mixed Reality (namely “hybrid reality”), where real and virtual objects can dynamically
interact.
This work is based on several levels of algorithms. A first level deals with generating virtual
representations (avatars) of dynamic real objects at interactive rates. A second level allows
virtual objects to interact with and respond to the real-object avatars. Dynamic real objects
(user, tools, etc.) can then be visually and physically incorporated into the VE. The system
uses image-based object reconstruction and a volume-querying mechanism to detect
collisions and to determine plausible collision responses between virtual objects and the
real-time avatars.
The devices used for the I-DRIVE project are very common: a set of cameras, a black
background, and an HMD. The principal contribution of the system lies in 3D
reconstruction, which uses a new visual-hull technique that exploits recent advances in
graphics hardware. The pixels that make up the real objects are extracted from the images of
each camera. A volume-querying technique asks which 3D points are within the visual hull
of the real objects; using projected textures of the camera images accelerates this
computation. The system bypasses an explicit 3D modelling stage and generates results in
real time. Collision detection with the reconstructed objects tests for intersections between
the real objects’ visual hull and the virtual object geometry.
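The volume query at the heart of this approach can be illustrated in its simplest form: a 3D point belongs to the visual hull exactly when it projects inside the object silhouette of every camera. A minimal CPU sketch (the UNC system performs this on graphics hardware with projected textures; the names here are hypothetical):

```python
def in_visual_hull(point, cameras):
    """Volume query: a 3D point lies inside the visual hull iff its
    projection falls inside the object silhouette seen by every camera.
    Each camera is a (project, mask) pair, where project maps a 3D
    point to integer pixel coordinates (u, v) and mask is a 2D boolean
    silhouette image indexed as mask[v][u]."""
    for project, mask in cameras:
        u, v = project(point)
        if not (0 <= v < len(mask) and 0 <= u < len(mask[0]) and mask[v][u]):
            return False  # outside this camera's silhouette, so outside the hull
    return True
```

Testing such queries against the vertices of a virtual object gives a simple (conservative) collision test between the virtual geometry and the real-object avatar.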
In a demonstrator, the user interacts with a virtual cloth simulation, just as he/she would with
a real curtain. The system renders the virtual curtain and the avatar of the user with properly
resolved depth, and the curtain reacts to collisions with the avatar.

      1.3.3.8 M-rep for modelling anatomic objects
UNC has a large research activity on M-reps (namely Medial representations).
An M-rep of a 3D object captures the object interior as a locus of medial atoms, each atom
being two vectors of equal length joined at the tail at the medial point. Some of the main
properties of such a model are:
        -    it provides an object-intrinsic coordinate system, which thus allows
             correspondence between instances of the object in and near the object(s);
        -    it captures the object interior, and thus is very suitable for deformation;
        -    it provides the basis for an intuitive object-based multiscale sequence, leading
             to efficient segmentation algorithms and trainable statistical
             characterizations with limited training sets.
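The geometry of a single medial atom can be made concrete: two spokes of equal length joined at the tail at the medial point imply two boundary points. A small illustrative sketch in 2D (not UNC's actual M-rep code; names are hypothetical):

```python
import math

def atom_boundary_points(p, r, u1, u2):
    """The two boundary points implied by one medial atom: two spokes
    of equal length r, joined tail-to-tail at the medial point p,
    pointing along the (unnormalized) directions u1 and u2."""
    def unit(u):
        n = math.hypot(u[0], u[1])
        return (u[0] / n, u[1] / n)
    u1, u2 = unit(u1), unit(u2)
    b1 = (p[0] + r * u1[0], p[1] + r * u1[1])
    b2 = (p[0] + r * u2[0], p[1] + r * u2[1])
    return b1, b2
```

Because both boundary points sit at the same distance r from the medial point, a chain of such atoms traces the object boundary on both sides of its medial axis at once.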




M-reps are a multiscale approach to the modelling and rendering of 3D solid geometry.
Traditional geometric models (cf. B-reps or CSG) are represented at infinitesimal spatial
scale and then require simplification to meet needs at coarser scales or with smaller data
sets. UNC has developed a model that is designed at successively smaller scales and
supports a coarse-to-fine hierarchy in design, rendering, physical deformation, and other
graphics operations. They base their representation on figural models, defined at coarse
scale by a hierarchy of figures (protrusions, indentations, corners, neighbouring figures, and
included figures), which simultaneously represent solid regions and their boundaries. To
capture local shape at scale, and thus local zoom invariance, the figural components imply a
fuzzy, i.e. probabilistically described, boundary position with a width- and scale-proportional
tolerance.
As a result of these properties, medial representation is particularly suitable for:
        -    segmentation of objects and object complexes via deformable models;
        -    segmentation of tubular trees, e.g., of blood vessels, by following height ridges of
             measures of fit of medial atoms to target images;
        -    object-based image registration via medial loci of such blood vessel trees;
        -    statistical characterization of shape differences between control and pathological
             classes of structures.
Consequently, M-reps are particularly well suited to the modelling of anatomic objects,
producing models that can effectively capture prior geometric information in
deformable-model segmentation approaches.
Although this model does not directly address Virtual or Augmented Reality, one can easily
imagine its use in medical applications based on such advanced user interfaces. The capacity of
M-reps to generate an average, parameterized morphology from the statistics of a standard
population, and thus to detect deviating cases, will contribute to the
development of VR applications aiming, in the medium term, at operation planning and, in
the long term, at automated pre-diagnosis for medical experts on the basis of traditional
imaging data (CT scans, MRI, etc.).


1.4 Electronic Visualization Laboratory (EVL), University of
    Illinois at Chicago
Contact: Tom DeFanti, Maxine Brown et al.
Website: http://www.evl.uic.edu/core.php

1.4.1 Team overview
After building the famous CAVETM in 1991 (designed by C. Cruz-Neira, D.J. Sandin and T.A.
DeFanti) and the ImmersaDeskTM in 1995, EVL is now conducting research on a new
generation of devices: high-quality, variable-resolution, desktop/office-sized
displays. More generally, its main current efforts focus on the
development of VR devices, software libraries and applications for collaborative exploration
of data over national and global high-speed networks, an approach called "tele-immersion".
EVL defines "tele-immersion" as collaborative VR over networks, based on a
human/computer/human collaboration paradigm in which the computer provides real-time data in
shared, collaborative environments, enabling computational science and engineering
researchers to interact with each other (via "tele-conferencing") as well as with their computational
models, over distance. Consequently, a large part of EVL's activity aims to provide easy access
to integrated, heterogeneous distributed computing environments, whether supercomputers,
remote instrumentation, networks, or mass storage devices, through advanced real-time 3D
immersive interfaces. In this context, the core of EVL's research today is interconnectivity:
high-speed, advanced networking with educational, government, research and industrial
partners fosters collaborative application development, as well as the networking
underpinnings themselves.
In addition, EVL continues to develop and refine a robust and VR-device-independent
software library, as well as the software tools for building tele-immersion applications. This
software infrastructure supports collaboration in design, training, scientific visualization, and
computational steering in VR.

1.4.2 Main research topics and/or demos

      1.4.2.1 New generation of VR devices
EVL's current research on VR devices targets a "third generation" of displays, with several
emphasis points: variable resolution, desktop/office-sized displays, auto-stereoscopy and
unencumbered tracking.
To this end, after the CAVETM and ImmersaDeskTM prototypes, EVL has since 1998 been designing
the PARISTM system (Personal Augmented Reality Immersive System), whose projection
technology is optimized to let users interact with the environment using their hands or a
variety of input devices, such as haptic ones. The PARISTM display lets users see their hands
immersed in the virtual world. It uses a DLP projector, shutter glasses and a double folded
mirror to compactly and brightly illuminate the overhead black screen. This configuration is
suitable for the lighting conditions of a typical office environment.
An application of the PARISTM device is a collaborative augmented reality environment with
haptic feedback to provide a precise method to construct and revise pre-surgical cranial
implant designs. This process can correct traumatic head injuries and restore normal facial
appearance. Using this device, a surgeon and medical modeller can view a three-dimensional
model of a patient's CT data, and collaboratively review, sculpt and virtually build an implant
using their hands. A sensation of the mold sculpting process is achieved using a force-
feedback device. In addition, as we will see below, EVL can provide high-speed advanced
network capabilities for collaborative work, allowing surgeons and medical modellers in
geographically distant locations to review and revise the implants over advanced networks.
Patients can also view and discuss the post surgical results beforehand. This digital,
networked, collaborative modelling environment aims to eliminate many time-consuming
steps and make this surgical procedure widely available. The digital model of the implant can
be exported directly to a variety of rapid-prototyping manufacturing modalities making
classical sculpting, molding, and casting steps unnecessary, and allowing the use of new,
state-of-the-art implant materials.
On the more general user-interface side, a multimodal approach is being explored on the
PARISTM system. Gesture recognition can come from either tracking the user's movements or
processing them using video camera input. Gaze direction, or eye tracking, using camera
input is also possible. Audio support can be used for voice recognition and generation, as well
as used in conjunction with recording tele-immersive sessions. The combination of these
modalities enables free human tracking and unencumbered hand movements for improved
augmented reality interaction.




In addition, EVL is continuing the development of the GeoWall. Initially targeted at
affordable 3D stereoscopic visualization of small to modest-sized Geoscience
datasets, the team recently developed the GeoWall2, designed to cost-effectively serve
Geoscience applications that require greater display resolution and visualization capacity. The
classical GeoWall2 consists of 15 LCD panels tiled in a 5×3 array, for a total
resolution of 8000×3600 pixels. Each LCD panel is driven by a single PC with a high-end
graphics card such as Nvidia's Quadro FX3000, at least 250GBytes of disk space, 2.5-3GHz
CPU, and Gigabit Ethernet networking. GeoWall2 is scalable in that smaller or even larger
versions can be built by adjusting the number of LCDs and computers.
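The tiling arithmetic behind the GeoWall2 figures above can be sketched directly. This is a hypothetical helper, not GeoWall2 software: it just derives the per-panel resolution from the numbers quoted in the text and maps a global pixel to the panel (and local pixel) that must render it.

```python
# GeoWall2 geometry as described in the text: a 5x3 array of LCD panels,
# total resolution 8000x3600, each panel driven by its own PC.
COLS, ROWS = 5, 3
TOTAL_W, TOTAL_H = 8000, 3600
PANEL_W, PANEL_H = TOTAL_W // COLS, TOTAL_H // ROWS   # 1600 x 1200 per panel

def locate_pixel(x, y):
    """Map a global pixel to (panel column, panel row, local x, local y)."""
    return x // PANEL_W, y // PANEL_H, x % PANEL_W, y % PANEL_H

# Global pixel (4000, 1800) lands on panel column 2, row 1, local (800, 600).
panel = locate_pixel(4000, 1800)
```

Scaling the wall up or down, as the text notes, amounts to changing `COLS` and `ROWS` and adding or removing panel/PC pairs.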
In addition, EVL purchases, evaluates and integrates a variety
of emerging display devices, such as large color plasma displays, Liquid Crystal Display
(LCD) projectors, Light Emitting Diode (LED) panels, Ferro-electric Liquid Crystal (FLC)
displays, and Digital Micro Mirror Displays (DMD). EVL also adopts the latest generation
graphics engines on both UNIX and NT-based graphics platforms in order to reach and
service the largest possible scientific and engineering user base.

         1.4.2.2 Interconnectivity via high-speed advanced networks
This research activity focuses on EVL's contribution to the evolution of network
technologies from electronic to photonic solutions, through the worldwide deployment of
multimode optical network infrastructures with very large bandwidth (10
to 100 Gbits/s). One goal of this research is to develop better practices and real
usability of remote immersive systems for collaborative work on applications from the most
varied sectors. To this end, EVL leads or partners in several significant networking research
and development efforts.
A first contribution is the Metropolitan Research and Education Network (MREN), a regional
network connecting research institutions in the Chicago area, whose mission is to create
advanced, innovative networking architecture and to provide for a wide range of advanced
digital communication services in support of leading-edge research and educational
applications. One example of this activity is the EMERGE project (ESnet/MREN Regional
Grid Experimental Next Generation Internet (NGI) Testbed), whose purpose is to achieve
and demonstrate Differentiated Services (DiffServ) over an IP/ATM GigaPoP1 regional
network for applications in combustion, climate and high-energy physics. Supported by the
Department of Energy (DoE), NSF and NASA, this research and development project will
provide DiffServ capabilities as a vital part of advanced Grid Services. The main targeted
capabilities are:
            -   access control (identification, authorization, authentication, and resource
                utilization),
            -   directory services via the Lightweight Directory Access Protocol (LDAP),
            -   delivery of multimedia data through sequence numbering, time stamping, and
                contents identification using Real-Time Transport Protocol (RTP),
            -   Real-Time Control Protocol (RTCP) to control RTP data transfers,
            -   network management including instrumentation.
This project will concentrate on facilitating advanced data flows poorly served by best-effort
delivery on current networks: extremely large computed datasets, ultra-high resolution
rendered imagery, and real-time unicast/multicast.
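At the application level, DiffServ marking of the kind EMERGE demonstrates in the network reduces to setting a DSCP code point on outgoing packets. A minimal sketch, assuming a Linux/POSIX socket API (this is not EMERGE code; the testbed enforced DiffServ in the routers):

```python
import socket

# DSCP "Expedited Forwarding" (EF, value 46) is commonly used for
# latency-sensitive flows; the DSCP occupies the upper 6 bits of the
# former IP TOS byte, hence the shift by 2.
DSCP_EF = 46
tos = DSCP_EF << 2

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
# Datagrams sent on this socket now carry the EF code point, so
# DiffServ-aware routers can give them priority treatment.
applied = sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)
sock.close()
```

Whether the marking is honored is, of course, up to the network path, which is exactly the per-hop behavior a DiffServ testbed like EMERGE exists to provide.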

1 GigaPoP: a network aggregation point.




Another important contribution of EVL is the management of the STAR TAP and Euro-Link
networks. STAR TAP is connected with the Ameritech NAP in Chicago, with the vBNS, and
with other high-speed research networks. It enables traffic to flow to international
collaborators from over 100 leading-edge US research universities and supercomputer centers
that are, or will be, attached to the vBNS or other high-performance US research networks.
Euro-Link, for its part, facilitates the connection of European and Israeli National
Research Networks to the high-performance vBNS and Abilene networks. It is an NGI
initiative that supports international research collaboration. The Euro-Link consortium includes
IUCC (Israel), NORDUnet (Sweden, Denmark, Finland, Iceland and Norway), SURFnet (The
Netherlands), RENATER2 (France) and CERN; along with TransPAC and MIRnet (projects
supported by the HPIIS program), all connect to STAR TAP.
A clear overview of STAR TAP and all the networks concerned by the HPIIS program is
available at http://www.indiana.edu/~ovpit/presentations/beijing/, within slides 14 to 29 of
Michael McRobbie's presentation. EVL is also developing STAR LIGHT in cooperation with
iCAIR at Northwestern University, the Mathematics and Computer Science Division at
Argonne National Laboratory, Canada's CANARIE and the Netherlands' SURFnet. STAR
LIGHT is a 1GigE and 10GigE switch/router facility for high-performance access to
participating networks, and a true optical switching facility for wavelengths. The NSF-funded
Euro-Link initiative currently provides a 10Gbps link between STAR LIGHT and the
Netherlands. In addition, SURFnet has a 10Gbps connection to STAR LIGHT. SURFnet
also manages NetherLight, STAR LIGHT's sister facility in Europe, and carries the 10Gbps
international traffic of CESNET (the national R&E network of the Czech Republic) and of
NORDUnet.
Based on the STAR LIGHT infrastructure, EVL is developing the OptIPuter project. The goal of
this project is to develop advanced data mining, visualization and collaboration technologies
that maximize the use of such high-speed Gigabit and Terabit international optical networks
to help researchers accelerate scientific discovery. OptIPuter is designed as a “virtual” parallel
computer in which the individual “processors” are widely distributed clusters, the “memory”
is in the form of large distributed data repositories, “peripherals” are very-large scientific
instruments, visualization displays and/or sensor arrays, and the “motherboard” uses standard
IP delivered over multiple dedicated lambdas2. The project has two application
drivers, the NIH Biomedical Informatics Research Network and the NSF EarthScope, where
scientists generate multi-gigabyte 3D volumetric data objects, residing on
distributed archives, that they want to correlate, analyze and visualize.
EVL is also contributing to the National Computational Science Alliance (Alliance), one of
two partnerships in the new NSF PACI program. The Alliance partnership of more than 50
academic partners from across the nation is building a prototype of the country's next-
generation information and computational infrastructure, the PACI Grid, to enable scientific
discovery and increasingly complex engineering design. The Grid is creating a powerful,
seamless, integrated computational problem-solving environment for collaborative,
multidisciplinary work on a national scale. NCSA leads the Alliance in its mission to maintain
American pre-eminence in science and technology. In this context, EVL works with other
Alliance teams in an effort to realize the vision of a PACI Grid that is an integrated set of
parallel and distributed computing, data management, and collaboration tools. The team's
initiatives have included object technologies, end-to-end QoS, visual supercomputing,
collaborative technologies, high-performance distributed data management, and scalable

2 Lambdas are wavelengths of laser light used to send parallel streams of data over a single optical fibre.




clustered systems. The research emphasis for EVL is on enhancing collaboration, specifically:
high-modality, immersive data exploration and collaboration so that researchers can
manipulate and explore data simultaneously via VR environments. For instance, the Virtual
Director paradigm3 is used to steer, edit, and record the researchers’ navigation through a
large dataset.

      1.4.2.3 VR-device-independent software activities for tele-immersion
One of the software contributions of EVL is the design of CAVELibTM, which is today
available commercially from VRCo. It is an API that provides the software environment and
toolkit for developing virtual reality applications. It currently supports a wide range of devices
- CAVEs, ImmersaDesks, PARIS, Infinity Walls, HMDs, etc. It handles hardware specific
tasks so that details of the hardware being used are transparent to applications. It reads various
tracking devices (InterSense, Ascension Flock of Birds, SpacePad and PC Bird, Polhemus,
Logitech, etc.); computes accurate, viewer-centered stereo perspective for arbitrary display
screens; takes care of multi-processing, synchronization, shared memory, and provides
general utility functions.
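The "viewer-centered stereo perspective for arbitrary display screens" mentioned above is essentially an off-axis (asymmetric) frustum computed from the tracked eye position and the physical screen corners. The sketch below follows the well-known generalized perspective projection formulation; it is not CAVELib's actual code, and all names are illustrative.

```python
import math

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])
def normalize(v):
    n = math.sqrt(dot(v, v))
    return tuple(x / n for x in v)

def off_axis_frustum(pa, pb, pc, pe, near):
    """Frustum extents (left, right, bottom, top) at the near plane.

    pa, pb, pc: lower-left, lower-right, upper-left screen corners (world
    coordinates); pe: tracked eye position.
    """
    vr = normalize(sub(pb, pa))        # screen right axis
    vu = normalize(sub(pc, pa))        # screen up axis
    vn = normalize(cross(vr, vu))      # screen normal, toward the viewer
    va, vb, vc = sub(pa, pe), sub(pb, pe), sub(pc, pe)
    d = -dot(va, vn)                   # eye-to-screen distance
    scale = near / d
    return (dot(vr, va) * scale,       # left
            dot(vr, vb) * scale,       # right
            dot(vu, va) * scale,       # bottom
            dot(vu, vc) * scale)       # top

# An eye centered in front of a 2x2 screen yields a symmetric frustum;
# offsetting the eye (e.g. for each stereo eye) makes it asymmetric,
# without any change to the screen definition.
l, r, b, t = off_axis_frustum((-1, -1, 0), (1, -1, 0), (-1, 1, 0),
                              (0.0, 0.0, 2.0), 0.1)
```

Computing this per eye, per screen, per frame is precisely what lets a library drive CAVEs, workbenches and walls from the same application code.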
Several other software sub-projects are oriented toward the OptIPuter project
(see above). These include:
         -   LambdaCam: a real-time screen capture and distribution utility for cluster driven
             tiled-displays. It acts akin to a web-cam pointed at the tiled display. This allows
             users to remotely monitor or present the output of a tiled-display even on a
             laptop.
         -   LambdaRAM: a Network Memory abstraction that collects the memory in a
             cluster and allocates it as a cache to minimize the effects of latency over long-
             distance, high-speed networks. It takes advantage of multiple-gigabit networks to
             pre-fetch information before an application is likely to need it (similar to how
             RAM caches work in computers today).
         -   LambdaVision: an ultra-high-resolution visualization and networking instrument
             designed to support collaboration among co-located and remote experts requiring
             interactive ultra-high-resolution imagery, up to 60 trillion bytes (or 60 Terabytes).
         -    OptiStore: a universal distributed data management system for very large
              time-varying spatial information. It provides an interface to query
              heterogeneous data repositories, access the spatial data, and maintain the
              systems. Collaborative and/or immersive applications are independent of the
              data location, the data structure, the data access mechanism, the portion of
              the data that is to be cropped, etc. They simply request the data through the
              OptiStore client API, and OptiStore handles the rest of the work. In
              addition, it should also have the capability of data modelling and data
              mining, which can discover inner relations within the original raw data.
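The LambdaRAM idea listed above, hiding wide-area latency by prefetching the blocks an application is likely to need next, can be illustrated with a toy cache. This is a sketch only: the fetch callback, block granularity and LRU policy are assumptions, and the real system pools memory across an entire cluster.

```python
from collections import OrderedDict

class PrefetchCache:
    """Toy LambdaRAM-style cache: on a miss, fetch the missed block plus a
    few lookahead blocks, betting that a fat network pipe is cheaper than
    a round-trip stall."""

    def __init__(self, fetch, capacity=64, lookahead=2):
        self.fetch = fetch              # callable: block index -> data
        self.capacity = capacity
        self.lookahead = lookahead
        self.cache = OrderedDict()      # block index -> data, in LRU order

    def _store(self, idx):
        if idx not in self.cache:
            self.cache[idx] = self.fetch(idx)
            while len(self.cache) > self.capacity:
                self.cache.popitem(last=False)   # evict least recently used

    def read(self, idx):
        if idx not in self.cache:
            # Fetch the missed block and the next `lookahead` blocks.
            for i in range(idx, idx + 1 + self.lookahead):
                self._store(i)
        self.cache.move_to_end(idx)     # mark as recently used
        return self.cache[idx]

fetched = []
cache = PrefetchCache(fetch=lambda i: (fetched.append(i) or b"block%d" % i))
cache.read(0)          # miss: fetches blocks 0, 1 and 2
cache.read(1)          # hit: already prefetched, no network round-trip
```

Sequential scans through a remote dataset then pay roughly one stall per lookahead window instead of one per block, which is the effect the text describes.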
Another software effort of EVL and CAVERN members is CAVERNsoft. Initially,
CAVERN was an alliance of industrial and research institutions equipped with immersive
displays and high-performance computing resources, all interconnected by high-speed
networks to support collaboration in design, training, scientific visualization, and
computational steering in virtual reality. Consequently, the goals of CAVERNsoft were to
support the rapid creation of new tele-immersive applications, the retrofitting of previously
3 Virtual Director paradigm: originally developed between 1992 and 1995 to help create the "Cosmic Voyage" IMAX movie, it is a VR interface which enables gestural motion capture and voice control of navigation, editing, and recording in the CAVE, ImmersaDesk, and Infinity Wall immersive systems.




non-collaborative VR applications with tele-immersive capabilities, the integration of
collaborative VR with supercomputers and terabyte data-stores that are connected over high
speed nation-wide and world-wide networks (for instance vBNS, STAR TAP, TransPAC, etc.;
see above). The new generation of this software is now released as the QUANTA toolkit (see
above) to support TeraNode applications over optical networks. Called CAVERNsoft G2, it
is an Open Source C++ toolkit for building collaborative networked applications.
CAVERNsoft's main strength is in providing networking capabilities for supporting high
throughput collaborative applications. These applications need not be CAVE applications. In
addition CAVERNsoft provides modules for accelerating the construction of Tele-Immersion
applications.

      1.4.2.4 Applications & Demos
Many demos and applications have been developed and are reported on the EVL web site. In
addition to the scientific applications (Energy, Biomedical, Earth, and so on) evoked above, a
number of applications and demos deal with artistic domains, or with training and education. In
these fields we highlight only two well-known past projects:
        -    CityCluster “From the Renaissance to the Gigabits Networking Age” (F.
             Fischnaller et al., in collaboration with F.A.B.R.I.CATORS, Italy).
             It is a virtual reality and high-speed networking project that juxtaposes the
             Renaissance and the emerging Gigabits Networking Age, represented
             metaphorically by Florence and Chicago. The project aims to explore the
             opportunities offered by advanced information technology in order to make this
             digital tool a more humanistic instrument for communicating and experiencing
             artistic content.
        -    NICE “Learning Together in a Virtual World” (A. Johnson, M. Roussos et al.).
             This project aims to create a virtual learning environment that is based on current
             educational theories of constructionism, narrative, and collaboration within a
             motivating and engaging context. Designed to work in the CAVETM, and related
             projection-based VR hardware, NICE allows groups of children to learn together
             both in the same physical location, as well as from remotely located sites.
             Additionally, a web-based component allows other children to participate from
             less expensive desktop hardware.

1.5 Virtual Reality Application Center (VRAC), Iowa State
    University at Ames.
Contact: James Oliver, et al.
Website: http://www.vrac.iastate.edu/

1.5.1 Team overview
Previously co-directed by Dr. Carolina Cruz-Neira after she moved from EVL, this team was
created in 1990 and has been open since 1991. With an average of 40 faculty members and 200
graduate and undergraduate students, its funding comes from NSF, the US Army, and industry
(Ford, General Motors, John Deere, Boeing, Fuel Tech…). In June 2003, this lab had an
average of 35 contracts from industry and government, representing $10M of
funding.
The general research focuses of VRAC are:





        -    Human Computer Interaction
        -    Virtual Prototyping
        -    Real-time Simulation
        -    CFD applications
VRAC also has an impressive set of VR equipment:
        -    many Silicon Graphics and Sun workstations;
        -    several clusters totalling more than 30 CPUs;
        -    a 6-sided CAVETM from MechDyne using BARCO projectors with retro-
             projected floor and ceiling, a RAVE-like system with 4 faces (2 mobile walls and
             1 floor), a BARONTM (the BARCO workbench), and several HMDs (VGA
             Datavisor, Virtual Research V8…);
        -    many tracking systems (Logitech, Intersense IS-900, Flock of Birds, Polhemus
             Isotrak II…) and a wireless motion capture system (MotionStar);
        -    a VR auditorium for 244 people equipped with passive stereoscopy, using 4
             projectors on a large 29-foot screen and an Intersense tracking system on
             the platform;
        -    2 aircraft prototypes, one with a fixed wing and one with a mobile wing;
        -    several robotic manipulators (PUMA 500, Mitsubishi R3…);
        -    several haptic devices (PHANToM, Fakespace “Pinch Gloves”…);
        -    and a large number of classical VR devices (Virtual Technologies “CyberTouch”,
             Power Ball, Logitech 3D mouse…).
In addition, VRAC carries out technology transfer via several spin-offs.
In 2003, VRAC obtained the opening of the HCI initiative, a graduate program on Human
Computer Interaction, which was at that time the only teaching project in Computer Science
appointed by Iowa State University, with substantial financing including the recruitment of
three faculty positions. This illustrates the importance VRAC has gained over the last
fifteen years within Iowa State University.


1.5.2 Main research topics and/or demos

      1.5.2.1 VR software activity
After having contributed to the design and development of CAVELibTM at the EVL lab, Carolina
Cruz-Neira arranged for VRAC to release the first Open Source code authorized by Iowa State
University, namely the well-known VRJuggler software. This platform for developing VR
applications is managed by permanent staff and students of VRAC, but it is also developed in
collaboration with several approved external partners (in the U.K., Brazil, Pennsylvania, Chicago,
and at SGI). These external contributions are mainly directed towards application fields (such as
Biology). Some partnerships with France have also developed (with Compiègne, and with
Orléans on their NetJuggler work).


      1.5.2.2 VR hardware activity
VRAC developed a "low cost" CAVE based on the design of a small, inexpensive
projection module. Each module is composed of a screen, two LCD projectors for passive
stereoscopy (circular polarization, 1280×1024 resolution), and a PC with its graphics card.
The cost of a module is $30,000 (excluding the tracking system), including $3,000 per projector.





      1.5.2.3 Some applications and demos
For automotive simulation, VRAC has a mechanical motion platform (hexapod with hydraulic
jacks) for immersive driving scenarios applied to car design.
For control centres, they have a demonstration in their RAVE-like system, which couples
the VR display with the logistic or organizational aspects of pre-existing information
systems. The operator has a wireless physical organizer on which he gives orders that he
then visualizes within the immersive environment. The applications presented address the
loading of a cargo aircraft and the military supervision of missions.
In addition, in their 6-sided CAVETM they study the problems of navigation in large and
complex urban spaces (e.g., the city of Beijing, China), developing solutions for the
management of LoDs to provide faster virtual navigation with fewer jerk effects on such
massive data. This demo clearly demonstrates the very good stereoscopic adjustment
provided by their VR software platform. Regarding the use of their CAVETM, however, one
can observe a perceptual drawback from the protection film (in several pieces) they had to
put on the retro-projected floor.



1.6 Advanced Design Systems Group at BOEING, Seattle.
Contact: Marie O. Murray, et al.
Website: Unknown

1.6.1 Team overview
The Advanced Design Systems Group is not a VR research lab, but Boeing was the first
airframe manufacturer in the world to have tested VR. Although the group's registered office
was recently transferred to Chicago, Seattle remains the main technological center, and
Virtual Reality is among the fields with strong potential identified by this company.
Active since 1995, Boeing's R&D organization has a staff of 214 scientists, 46% holding
doctorates and 27% masters degrees, overseen by 25 managers. It is composed of 25 departments
located at 25 sites worldwide (5 in the US), all connected by a protected network
infrastructure. Located in Seattle, VR represents only one of these R&D departments of
Boeing, and the staff of this specific department comprises only 5 researchers.


1.6.2 Main research topics and/or demos
The Virtual Environments (VE) activity is organized around several focuses of interest,
differentiated along the product life cycle:
        -    marketing and business design, for an early product definition,
        -    the manufacturing properties of the product, for studies of assembly and
             disassembly capabilities,
        -    support and documentation at all stages of the product life, from design
             reviews to exploitation and maintenance, including the training
             of manufacturing technicians,
        -    the training and certification of pilots and aircraft.




In practice, Boeing seeks to industrialize and integrate VR solutions based on
commercial products rather than to carry out its own fundamental developments on the
subject: the number of researchers involved in VR at Boeing is today too small to
develop VR solutions entirely in-house. However, the work presented, although specific
to the aerospace industry, is rather original. In this context, their principal R&D topics are:
        -    Force feedback
        -    Fly-through
        -    Augmented Reality for training and maintenance
        -    Flight simulation
        -    3D navigation
        -    Collision detection (cf. VPS, see below)
The VR technologies used are classical: graphic workstations, HMDs, 6-DoF PHANToMs,
wearable computers, speech recognition systems. Notably, they are not interested in
CAVE-like systems.

      1.6.2.1 Maintainability, Assembly and Disassembly studies of aircraft
              components
A first application addresses the maintainability, assembly and disassembly of
components during the design phase of complex aircraft subassemblies. With an approach
similar to EVL's PARISTM system, a stereoscopic projection on a semi-transparent
screen is coupled with haptic interaction via a 6-DoF PHANToM, so that the user's hands are
co-located with the virtual scene (at 1:1 scale). In Boeing's case, the
application uses the EASY54 software from the MSC Company for dynamic modelling and
simulation of the systems. For haptic feedback, they designed the well-known VPS
software, a voxel-based approach to collision detection, which enables 6-DoF haptic
rendering of a modestly sized rigid object within an arbitrarily complex environment of static
objects. This system is stable and convincing for haptic simulations, with very few surface
interpenetration events. This level of performance is suitable for maintenance and assembly
task simulations that can tolerate voxel-scale minimum separation distances.
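The voxel-based test underlying a system like VPS can be sketched in a few lines. This is a minimal illustration, not Boeing's VPS: the static environment is voxelized once into a set of occupied cells, the moving object is sampled as a "point shell", and each frame every shell point is quantized and looked up. All names and sizes here are assumptions.

```python
VOXEL = 0.01   # voxel edge length (metres); sets the separation tolerance

def voxelize(points, size=VOXEL):
    """Quantize sample points of the static environment into occupied cells."""
    return {tuple(int(c // size) for c in p) for p in points}

def collides(occupied, shell_points, offset, size=VOXEL):
    """Test a translated point shell against the static voxel map."""
    for p in shell_points:
        cell = tuple(int((c + o) // size) for c, o in zip(p, offset))
        if cell in occupied:
            return True
    return False

# Two occupied cells near the origin; a one-point shell touches them
# when untranslated, and reports no contact once moved well away.
static = voxelize([(0.0, 0.0, 0.0), (0.0, 0.0, 0.01)])
shell = [(0.0, 0.0, 0.0)]
hit_near = collides(static, shell, offset=(0.0, 0.0, 0.0))
hit_far = collides(static, shell, offset=(0.5, 0.0, 0.0))
```

Because every query is a constant-time set lookup, the cost scales with the number of shell points, not with the complexity of the static environment, which is what makes the approach attractive for the 1 kHz update rates haptic rendering requires.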

      1.6.2.2 AR for training or execution of complex tasks
A second application is the assistance of operators in the training for, or execution of, tasks
via an Augmented Reality system. Although promising, this approach has had some difficulty
spreading; the example of assistance for electrical wiring seems the most advanced. The system
is composed of position-tracking hardware and a VGA-resolution see-through HMD, which
displays computer-generated images superimposed over real objects, while the user interacts
with the application via a speech recognition system (currently, a multi-speaker system with a
vocabulary of about a hundred words). The demonstrator presents the execution of a procedure
(start-up, check, test) by a wiring technician of electrical networks assisted by this AR system.
The operator does not need to know the procedures, which have been modelled in a tree
structure and are traversed by the application. Alstom has a similar approach to provide a new
support solution partially supplanting call centers.




4 EASY5 is schematic-based simulation software that allows modeling and simulating dynamic systems containing hydraulic, pneumatic, mechanical, thermal, electrical, and digital subsystems.




      1.6.2.3 Low cost VR workstation for engineering and design
The third example application is a "low cost" engineering and design workstation. At the
technological level, it uses a VisionStation display distributed by the Elumens Company. The
objective is to have an economical device allowing the perception of working volumes in a
complex environment. The demonstrator's example task is checking the pockets of
free space between a tangled beam and a complex network of pipes (to verify the design
principles and to check whether clearance standards are respected, which is not easy with
objects of such complexity).


1.7 HIT Lab (Human Interface Technology Laboratory),
    University of Washington, Seattle.
Contact: S. Weghorst, et al.
Website: http://www.hitl.washington.edu/home/

1.7.1 Team overview
Founded by Prof. Thomas A. Furness in September 1989, the HIT Lab is a multi-disciplinary
research and development lab on human interface technology. Its general research aims at
“increasing the bandwidth of the brain-machine interface”. But the lab also orients its activity
towards an industrial transfer strategy, namely:
        -    create a bridge between the University and industry (cf. BARCO, MICROVISION),
        -    give students the opportunity to train on projects,
        -    become an economic actor in the Seattle area.
In line with its multi-disciplinary activity, the HIT Lab is connected to several departments
(Colleges) of the University of Washington, including engineering, medicine, education,
social sciences, architecture and the design arts. Its staff numbers between 50 and 100 people,
including PhD, MSc and undergraduate students. In 2003 it had a budget of $37M. In
addition, the HIT Lab has developed a policy of spinning off companies: from the beginning,
it has contributed to launching almost 25 spin-offs (including 5 networks and the
MICROVISION company), representing the creation of more than 500 jobs.
The lab also pursues worldwide expansion, by proposing its name and its organization as a
label and a model of research and development in the Human Interface Technology field. For
instance, the University of Washington is one-third owner of the HIT Lab NZ (University of
Canterbury, New Zealand), while several countries (Taiwan, Korea, Japan, Australia,
Singapore, and a couple of countries in Europe) have approached the HIT Lab US to study
the possible creation of other HIT sister labs.
Finally, the HIT Lab developed the Virtual Worlds Consortium concept:
        -    to help support and guide the HIT Lab’s research and development and to give
             the lab an entrepreneurial and industrial flavor,
        -    to serve as the primary mechanism through which strategic projects and
             partnerships are formed between the HIT Lab, industry and academia.
Created in 1990, this consortium provides a forum for the dissemination of information to
industry, allows strategic research to be launched, and serves as an educational and
professional resource for member companies. Consortium members have privileged access
to advanced interface technologies developed at the HIT Lab. These new interface
applications can be transferred to the commercial sector by member companies, giving them a
competitive advantage. Consortium meetings are held to showcase technology and students,
to keep members informed of the latest developments, to identify possible future employees,
and to interact with universities. The HIT Lab Virtual Worlds Consortium in the US is said to
gather almost 50 companies, including multinationals such as Microsoft, Nike, Eastman
Kodak, Chevron and Boeing. The consortium in New Zealand is said to comprise close to
30 companies.

1.7.2 Main research topics and/or demos: hardware

      1.7.2.1 HI-SPACE system
HI-SPACE is a new-generation VR device concept whose goal is to make it possible to use
our physical information space (walls, tables, books, and other surfaces) as the material
support for viewing electronic information. People perform physical interactions with
information every day, by picking up a book, building a model, or writing notes on a page.
Similar interactions need to be developed for electronic information. The proof-of-concept of
the HI-SPACE system is based on the following features:
        -    sensors (camera, radio-frequency tagging antenna, and microphone array) are
             placed over the table to capture user interactions with a table display;
        -    the table display itself is a rear-projection screen fed by a standard LCD
             projector.
In the future, the full HI-SPACE concept is expected to exploit the redundancy of multimodal
input, and must support direct interaction, which allows a more natural interface. It must also
support groups of people interacting with the same data at the same time and in the same
space, and enable users in different physical locations to interact with each other and with the
same data set. But the main problems will be supporting the fluid transfer of information and
interaction between the physical and electronic spaces, and maintaining an unencumbered
work environment.

      1.7.2.2 VRD (Virtual Retinal Display)
Invented in 1991 at the HIT Lab, with development beginning in November 1993, the VRD
aims to produce a full-color, wide field-of-view, high-resolution, high-brightness, low-cost
virtual display. The VRD projects a modulated beam of light (from an electronic source)
directly onto the retina of the eye, producing a rasterized image. The viewer has the illusion of
seeing the source image as if he/she were standing two feet away from a 14-inch monitor. In
reality, the image is formed on the retina of his/her eye and not on a screen. The quality of the
image is excellent, with stereo view, full color, wide field of view and no flicker. This display
scans intensity-modulated laser light pixels directly onto the retina. Each pixel is modulated in
short pulses of 30 ns to 40 ns. The input light is a combined beam from three different
wavelengths of laser light, which produces a color gamut exceeding that of a conventional
CRT. The commercial applications of the VRD system are developed by MICROVISION
Inc., which has the exclusive license to commercialize this technology. This technology has
many potential applications, from HMDs for military or aerospace applications to medical
ones.

1.7.3 Main research topics and/or demos: software



      1.7.3.1 Multimodal Interface software
A set of libraries has been developed for incorporating multimodal input into human-computer
interfaces. These libraries combine natural language and artificial intelligence techniques to
allow human-computer interaction with an intuitive mix of voice, gesture, gaze and body
motion. The main current activities of the HIT Lab in this field are focused on the Intelligent
Conversational Avatar (an expert system and natural language module to parse emotive
expressions from textual input), and a Hand Motion Gesture Recognition system using Hidden
Markov Models.
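As an illustration of the Hidden Markov Model approach mentioned above (a generic sketch, not the HIT Lab's actual recognizer), a gesture classifier can score a sequence of quantized motion observations under one HMM per gesture and pick the best-scoring model. All gesture names and parameters below are hypothetical toy values:

```python
import numpy as np

def viterbi_log_likelihood(obs, start_p, trans_p, emit_p):
    """Log-likelihood of the best state path explaining a discrete
    observation sequence under one HMM (one model per gesture class)."""
    delta = np.log(start_p) + np.log(emit_p[:, obs[0]])
    for o in obs[1:]:
        # best predecessor for each current state, then emission cost
        delta = np.max(delta[:, None] + np.log(trans_p), axis=0) + np.log(emit_p[:, o])
    return delta.max()

def classify(obs, models):
    """Pick the gesture whose HMM best explains the observations."""
    return max(models, key=lambda g: viterbi_log_likelihood(obs, *models[g]))

# Two hypothetical gesture classes over a 2-symbol motion codebook:
# (start, transition, emission) probabilities are purely illustrative.
wave = (np.array([0.9, 0.1]),
        np.array([[0.7, 0.3], [0.3, 0.7]]),
        np.array([[0.9, 0.1], [0.1, 0.9]]))
push = (np.array([0.9, 0.1]),
        np.array([[0.7, 0.3], [0.3, 0.7]]),
        np.array([[0.1, 0.9], [0.9, 0.1]]))
models = {"wave": wave, "push": push}
```

In a real recognizer, the transition and emission tables would be trained (e.g. by Baum-Welch) on recorded hand-motion sequences rather than set by hand.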

      1.7.3.2 MagicBook project
The MagicBook is an AR and collaborative concept designed by Dr. Mark Billinghurst (today
director of the HIT Lab NZ, New Zealand). It aims at the transition between real and virtual
worlds. By using goggles that look like opera glasses, a reader can focus on a square on a
page, press a button and watch a 3D object pop up. The user can then move around inside the
story and even become part of it. The MagicBook technology also allows several users to look
at the book (and a particular 3D object it creates) from different viewpoints. When one person
enters the virtual world, others also using the glasses can see this person as part of it. This
kind of immersion and collaboration makes this HCI concept interesting for education,
architecture and entertainment. The MagicBook has influenced several derived projects
around the world. One application prototype is the molecule book, a basic introduction to
protein structures.

      1.7.3.3 ARToolKit software
The ARToolKit, designed for the rapid development of Augmented Reality applications, is a
library which provides computer vision techniques to calculate a camera's position and
orientation relative to marked cards, so that virtual 3D objects can be overlaid precisely on the
markers. This library was designed and implemented by Prof. Hirokazu Kato (Department of
Information Machines and Interfaces at Hiroshima City University, Japan) and Dr. Mark
Billinghurst (previously a member of the HIT Lab).
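The ARToolKit API itself is not reproduced here; the sketch below illustrates the underlying principle (recovering the camera pose of a known square marker from its four detected corners via a planar homography) in plain NumPy. The camera matrix and corner pixel values are hypothetical:

```python
import numpy as np

def homography_dlt(src, dst):
    """Direct Linear Transform: 3x3 homography mapping planar points
    src (x, y) onto image points dst (u, v); needs >= 4 correspondences,
    here the four detected marker corners."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def marker_pose(corners_px, marker_size, K):
    """Rotation R and translation t of a square marker (side length
    marker_size, lying in its own z = 0 plane) in camera coordinates,
    given its detected corner pixels and the camera matrix K."""
    s = marker_size / 2.0
    model = [(-s, s), (s, s), (s, -s), (-s, -s)]     # corner order as detected
    Hn = np.linalg.inv(K) @ homography_dlt(model, corners_px)
    lam = 1.0 / np.linalg.norm(Hn[:, 0])             # rotation columns are unit length
    r1, r2, t = lam * Hn[:, 0], lam * Hn[:, 1], lam * Hn[:, 2]
    R = np.column_stack([r1, r2, np.cross(r1, r2)])  # not re-orthonormalized here
    return R, t

# Synthetic check: an 80 mm marker seen head-on, 500 mm from the camera.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
corners = np.array([[256.0, 304.0], [384.0, 304.0], [384.0, 176.0], [256.0, 176.0]])
R, t = marker_pose(corners, 80.0, K)
```

A production tracker (as in ARToolKit) adds lens-distortion correction, re-orthonormalization of R, and iterative refinement of the pose.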

      1.7.3.4 Fast Finite Element Modelling for Surgical Simulation
Because of the highly computational nature of FE analysis, its direct application to real-time
force-feedback and visualization of structural deformation has not been practical for most
applications. However, this limitation is primarily due to the overabundance of information
provided by standard FE approaches. With some mathematical optimisations, through
pre-processing that yields only the information essential for a task, computation time can be
drastically reduced at run-time. For instance, a fast finite element analysis package has been
developed for real-time analysis in engineering design. To this end, the current effort at the
HIT Lab is designing implicit models from medical images, which will allow for the creation
of patient-specific models that can be used in surgery simulation. The final focus of this
project is the development of a real-time skin surgery simulator that will allow the user to
perform soft-tissue deformation, soft-tissue cutting, skin undermining and suture placement.
The goal is to develop a simulator that allows accurate sutures, such that a user can easily
learn the basic surgical techniques.




      1.7.3.5 VR application
The VRRV project (Virtual Reality Roving Vehicle) makes it possible to bring virtual
environments for training into schools, for the benefit of young pupils. It is a mobile solution
to familiarize young people with virtual reality. This approach also aims at determining how
children can acquire knowledge from virtual reality. Between 1995 and 2003, 350 models of
virtual environments were developed and 8000 children tried VR thanks to the VRRV.

      1.7.3.6 Virtual Puget Sound application
The Virtual Puget Sound project is an educational VR application. It aims to make secondary
school pupils understand the water movements in the Puget Sound around Seattle, according
to the salinity of the water, the thermal variations and the tides. But for the HIT Lab, the main
goal is to study the mechanisms of human learning with virtual environments. This
application uses an HMD and a Wanda-like device in order to move in the virtual
environment.

      1.7.3.7 SpiderWorld application
SpiderWorld is a VR application designed at the HIT Lab for spider phobia therapy.
Successful phobia treatment requires the elicitation of an anxiety response during treatment.
With VR therapy, the patient and the therapist have complete control over the feared object.
VR allows patients to confront fears that are not easy to simulate. It also offers complete
confidentiality, since the fears need not be confronted where others might be watching. In
addition, VR treatment tends to be more attractive to patients because they do not have to
actually face the feared object, such as a live spider. VR is therefore likely to increase the
proportion of phobia sufferers who seek treatment.

      1.7.3.8 SnowWorld application
SnowWorld is a VR application, also designed at the HIT Lab, for burn pain control.
Regarding pain perception, the same incoming pain signal can be interpreted as painful or not,
depending on what the patient is thinking. Pain requires conscious attention. If the patient is
occupied by navigating in another world, it is possible to drain a lot of attentional resources,
leaving less attention available to process pain signals. Rather than having pain as the focus of
their attention, for many patients using the VR pain control system the wound care becomes
more of an annoyance, distracting them from their primary goal of exploring the virtual world.


1.8 Information Sciences Institute (ISI) at University of
    Southern California (USC); Los Angeles.
Contact: L. Johnson et al.
Website: http://www.isi.edu/research/

1.8.1 Team overview
ISI is a research center in computer science with a staff of 400 people (faculty members,
researchers, programmers and students). This team brings together computer scientists,
researchers specialized in artificial intelligence, multimedia developers, psychologists,
knowledge engineers and researchers in pedagogy.




ISI's work aims at applications in education and training. It is particularly interested in virtual
teaching agents and intelligent assistants. It seeks to enable the transfer of these technologies
into practice. To that end, they work on projects meeting real needs, and do so with the
teaching staffs responsible for their realisation.

1.8.2 Main research topics and/or demos
This team has a number of research topics:
        -    Knowledge-based systems,
        -    Natural language processing and automatic translation,
        -    Intelligent agents and robotics,
        -    Machine learning,
        -    Virtual Reality,
        -    Polymorphic robotics,
        -    Data mining,
        -    Information integration,
        -    Human Machine Interfaces,
        -    Interactive Training…
ISI is a reference in many fields of computer science. Its advanced research on virtual agents
for educational and training purposes remains a reference in the field. This team contributed
to the creation of various international conferences, such as IVA (Intelligent Virtual Agents).
Virtual Reality is not the main field of research at ISI. It is through applications in the field of
artificial intelligence that this team became interested in VR. Their research group working on
VR primarily deals with three issues:
        -    intelligent agents,
        -    knowledge management and data processing,
        -    human language processing.
We present below two main applications developed by the ISI lab in the VR field, or which
can be applied to it.

      1.8.2.1 Carmen's Bright Ideas (CBI)
The CBI application has been developed by ISI with the participation of 6 American
paediatric cancer treatment centers. The goal is to reinforce the success factors of young
children's cancer therapies by allowing their mothers to acquire more suitable behaviour. The
approach is to create a training scheme for the mothers based on a problem-solving
methodology called "Bright Ideas", in order to help them manage the stress generated by their
child's cancer. In this application, Carmen is the mother of a 9-year-old child with acute
leukaemia. She also has a 5-year-old girl. Gina, a medical welfare officer, accompanies her. It
is the story and testimony of Carmen that structure the situation. The unfolding is interactive
and instructive, based on the agent technology of ISI/USC. The social and emotional
interactions are modelled. The pedagogical use of this application is the following: the user
can assign emotional states to Carmen; then, interacting with Gina, the user can seek suitable
solutions in order to help Carmen. Although developed on a multimedia platform (a medium
chosen with the target audience in mind), the approach of this application would also have
allowed a virtual reality implementation.



      1.8.2.2 Virtual Environment for Training (VET)
The VET application was the starting point of the research work on the STEVE virtual agent
(Soar Training Expert for Virtual Environments) developed since 1997 by ISI. The other
partners of the VET project are the Lockheed AI Center and the Behavioral Technology
Laboratory of USC. The end-users are shipboard technical staff of the US Navy. The goal of
the VET application is to teach the control and maintenance of compressors. The problem to
be solved was to provide a system for training assistance. The principle of this application is
the following: the learner is immersed in a virtual environment reproducing an engine room
(compressors) of a US Navy ship, and must learn how to carry out monitoring and
maintenance actions. A virtual agent, STEVE, acts as a tutor whose pedagogy must be
conclusive. The learner must correctly replay what STEVE has shown him.


1.9 Institute for Creative Technologies (ICT) at University of
    Southern California (USC); Los Angeles.
Contact: J. Douglas et al.
Website: http://www.ict.usc.edu/

1.9.1 Team overview
ICT collaborates with other departments of USC, such as the School of Cinema-TV, the
School of Engineering, the Annenberg School of Communication, ISI and IMSC.
ICT is a university-affiliated research center, and thus a pure research laboratory. In August
1999, the American Army signed a five-year contract with USC (50 million dollars) in order
to create a place where more immersive environments for simulation training could be
developed. ICT was created for that purpose and gathers computer scientists and
entertainment specialists coming from Hollywood. This team develops an Experience
Learning System (ELS), applied to various types of military needs. The entertainment
industry brings its expertise in games, narration, character creation and special effects. The
researchers in computer science bring their competences in artificial intelligence, networks,
and VR.

1.9.2 Main research topics and/or demos
ICT develops basic research (Audio Immersion, Automated Story Director, Graphics and
Animation, Sensory Environments Evaluation, Spatial Cognition, Story Representation and
Management, Virtual Humans, Mixed Reality), which is applied to many military software
applications and prototypes.
ICT research is focused on intelligent environments with virtual agents that have rich
emotional behaviours and interact with the human participants. It is based on 3 axes:
        -    natural language integration, for understanding as well as producing it,
        -    narrative constructions (in the form of stories, with scenarios), in order to keep
             the user at the center of the story, where he/she will have a more or less heroic
             role,
        -    the expression of emotions, with all that can technically be used to produce them
             (3D audio, special effects, hybrid technical solutions).
The selected approach for emotion rendering is founded on work in psychology, and on the
links between behaviour and cognition. A virtual agent evaluates each event at the same time
in comparison with the way in which it impacts its goals and its action plans. This results in a
modification of its emotional state: its beliefs, its desires and its intentions can be modified.
At the same time, the emotional state is expressed on the verbal plane (voice, intonation –
actors' voices are used for some feedback) and on the nonverbal one (gaze, facial expression,
gestures). In addition, these emotions induce effects on actions in the external world
(problem-focused coping) and on beliefs or attention (emotion-focused coping).
We present below one example of a VR application developed by the ICT lab and based on
the approach previously described.

Mission Rehearsal Exercise (MRE)
The end-users of the MRE project are officers of the American army. The application is for
preparing peace missions. The research goal is to put officers in situation in a credible way,
i.e. in complex situations, with emotions and under stress. The principle of this application is
the following: the learner is immersed in a scene in Bosnia; he/she has to intervene in a crisis;
he/she does not have the means necessary to manage the situation, but must make choices and
prevent the situation from degenerating. The media platform is a real-time interactive virtual
system with a multi-speaker voice recognition system. A scenario has been carefully built to
describe the training situation (by an expert coming from the entertainment field) and many
special effects are used in order to create the atmosphere. The virtual agents integrate
emotional dimensions. The scenario can also change, taking into account the acts as well as
the emotional states of the actors. The virtual agent STEVE (see the works of the ISI team) is
used as a basis for all the virtual agents of the MRE application. However, it has been
modified by adding an explicit task model and a mechanism for evaluating an event against
the realization of its goals. The relations between events and behaviours are characterized by
processes (for example: valence, intensity, responsibility), which are applied to emotions.
Thus, an action in the real world which blocks the realization of goals will be felt as
undesirable. If it is an event which can occur in the future, it will be evaluated as uncertain
and may give rise to fear. If the action has already happened, that can become distress.
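The appraisal rules just described (goal-blocking events are undesirable; a future one elicits fear, a past one distress) can be encoded in a few lines. This is a toy reading of the text, not ICT's actual appraisal model:

```python
def appraise(blocks_goal, has_happened):
    """Toy appraisal rule: an event that blocks the agent's goals is
    undesirable; if it still lies in the future it elicits fear, and
    once it has happened the emotion becomes distress."""
    if not blocks_goal:
        return "neutral"
    return "distress" if has_happened else "fear"
```

A full appraisal model would additionally weight the emotion by the processes named above (valence, intensity, responsibility) rather than returning a discrete label.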


1.10 Integrated Media Systems Center (IMSC) at University
    of Southern California (USC); Los Angeles.
Contact: A. Rizzo et al.
Website: http://imsc.usc.edu/

1.10.1 Team overview
IMSC is a research center of USC financed by the NSF. It is a multi-disciplinary lab which
develops projects in the fields of the Internet, multimedia and VR applied to education,
industry, and technology transfer.
IMSC has industrial partners such as Boeing, Eastman Kodak, FX Palo Alto, Hewlett-
Packard, IBM, INTEL, Litton, Lockheed Martin, Microsoft, Motorola, NCR, Raytheon, Sun
Microsystems, and TRW. Partnerships also exist with NASA, the Defense Advanced
Research Projects Agency, and USTRANSCOM.
IMSC has a staff of 34 faculty investigators, 77 graduate research assistants, 32 undergraduate
research assistants, 12 administrative staff and 4 consultants. Its annual operational budget is
about 11 million dollars.



1.10.2 Main research topics and/or demos
IMSC research fields are various:
        -    sculpture and 3D modelling,
        -    panoramic video technology,
        -    haptics,
        -    3D audio,
        -    wireless technologies,
        -    Augmented Reality…
For instance, the team works on a 360° video technology and on Immersipresence, a 3D
immersive concept for the Internet and television. Designed as a new standard for the Internet,
Immersipresence makes it possible to transform a set of panoramic 2D images into a 3D
immersive environment. IMSC covers a broad computer science domain, from algorithms to
interfaces, with a semantic approach and advanced technologies.
We present below one example of a VR application developed by the IMSC.

Virtual Classroom
Virtual Reality is used here in order to diagnose attention disorders in children. One of the
goals of this work is the creation of a new generation of tools (with reference to the
psychotechnical tools of the middle of the 20th century, which are however still in use), both
for diagnosis and for therapy or rehabilitation. Its goal is the evaluation and rehabilitation of
attention disorders (Attention Deficit Hyperactivity Disorder) in children. The end-users are
children at school. Taking the attention problems from a cognitive point of view, the principle
is to use VR, because existing methods in real environments are not satisfactory. Wearing an
HMD, a child is immersed in a virtual environment representing a classroom. He is given a
task to perform (locating occurrences in lists of letters), and is subjected in parallel to sounds
and visual stimuli constituting distractions (agitation of other pupils, passage of a car in front
of the window, noise of a plane). His rate of distraction is measured at the same time, by the
degradation of the task scores and by the movements of the head (e.g. looking at the car
through the window).
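A distraction index of the kind described, combining task-score degradation with head movements away from the board, might be sketched as follows; the equal weighting and the yaw threshold are hypothetical modelling choices, not figures from the source:

```python
def distraction_rate(hits, total_targets, head_yaw_deg, window_deg=30.0):
    """Hypothetical distraction index combining the two measures named
    above: task-score degradation and head movement away from the board.

    hits / total_targets : performance on the letter-detection task
    head_yaw_deg         : sampled head yaw angles (0 = facing the board)
    window_deg           : yaw beyond which the child is 'looking away'
    """
    miss_rate = 1.0 - hits / total_targets
    away = sum(1 for a in head_yaw_deg if abs(a) > window_deg)
    away_fraction = away / len(head_yaw_deg)
    # equal weighting of the two measures is an arbitrary choice here
    return 0.5 * miss_rate + 0.5 * away_fraction

# Example: 8 of 10 targets found, half the head samples off-board.
score = distraction_rate(8, 10, [0.0, 5.0, 40.0, -50.0])
```

In a real study the two measures would more likely be reported separately and calibrated against a control group rather than merged into one scalar.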


1.11 Brown University, Computer Graphics Lab.
Address:       Brown University, Providence, RI 02912
Website:       http://graphics.cs.brown.edu/home.html

1.11.1 Team overview
The faculty of the Computer Graphics Group are David Laidlaw, John Hughes and Andy van
Dam. Andries van Dam (Andy) has been on Brown's faculty since 1965; he was one of the
Computer Science Department's co-founders and its first Chairman, from 1979 to 1985. The
Lab additionally has six researchers, one postdoc, thirteen graduate students and four support
staff.

1.11.2 Main research topics and/or demos
The general research focuses of the Computer Graphics Lab are:
      - 3D graphics tools for modeling
      - scientific visualization
      - telecollaboration
      - interactive illustrations
      - Education
      - Games Technology Research
      - Gestural Interfaces and Tablet PC
      - Visualization
      - Software Libraries
Sponsors are:
      - Department of Energy
      - IBM, Microsoft
      - NSF
      - Sun Microsystems
      - Taco

The technical system of Brown University is a CAVE environment.

The long-term research goal of the Brown University Graphics Group is to develop human-
centered, powerful, and interactive 3D graphics tools for modeling, scientific visualization,
telecollaboration, and interactive illustrations.
Driving applications include the need for rapid prototyping tools for 3D modeling, the need
for more sophisticated scientific visualization tools that can present concepts, techniques, and
algorithms as well as data, and the need for geographically-separated groups to work more
effectively in a shared visual, spatial, and auditory environment.
Currently the Graphics Group focus includes two major research efforts: the Visualization
Research Lab and the Microsoft Center for Research on Pen-Centric Computing.

Some applications and demos are:
   • Art-Inspired Visualization Synthesis
   • Blood Flow Visualization
   • CavePainting
   • Color Rapid Prototyping
   • Evolving Visualizations
   • Pottery Sherd Reassembly
   • User Performance in VR Environments
   • Virtual Environments for Archaeologists
   • Visualization of Brain Structures
   • Visualization of Wrist Bones and Ligaments
   • Volume Rendering

1.11.3 Links
http://www.cs.brown.edu/stc/resea/interaction/research_I5.html
http://www.cs.brown.edu/stc/home.html
http://graphics.cs.brown.edu/home.html
http://vis.cs.brown.edu/areas/projects/vrperformance.html
http://vis.cs.brown.edu/results/images.html

1.12 Louisiana Immersive Technologies Enterprise (LITE),
     University of Louisiana Lafayette
Contact:      Dr. Carolina Cruz-Neira
Address:      Louisiana Immersive Technologies Enterprise (LITE)
              537 Cajundome Boulevard
              Lafayette, LA 70506
Website:      http://www.thelite.org/site.php

Founded in 2005/2006

1.12.1 Team overview
Dr. Carolina Cruz-Neira is the Executive Director and Chief Scientist of the Louisiana
Immersive Technologies Enterprise (LITE™). She is also an Endowed Chair in the College of
Engineering at the University of Louisiana at Lafayette. Dirk Reiners, Margaret Watson,
Dr. Young Ho Chai and Dr. Andreas Gerndt are Research Scientists and/or Software
Developers; Paul C. Cutt is the Chief Operating Officer; Laurie Dineen is the Director of
Business Development and Marketing; Johnny Lawson is the Manager of Visualization
Systems; Trevor Antczak is the Senior System Administrator; Madeline Broussard is the
LITE Administrator; and Kristine Antczak is the Executive Assistant.

1.12.2 The general research focuses
   •   real-time visualization and interaction
   •   large-scale data and simulations
   •   real-time remote visualization and computing
   •   high performance computing

The Technical Systems of LITE:
   • Eleven Altix 350 machines, with 32 Intel Itanium processors and 32 GB of RAM
   • one single system with 160 Intel Itanium II processing cores and 4.1TB of RAM
   • Two Prism with 16 Intel Itanium processors, 16GB of RAM and 6 graphics pipes
      each.
   • In addition to these, there is one other Prism system which has 4 Intel Itanium II
      processors, 4GB of RAM and 2 graphics pipes.
   • They currently boast 22 TB of usable high-speed Storage Area Network (SAN)
      storage with an Infinite Storage 6700 system.
   • High-speed fiber-optic network.
   • The LITE Cube, one of the world's few six-sided immersive environments.
   • One of the world's largest immersive theatres, with 175 seats facing a 37-foot fully
      immersive curved screen.
   • One immersive conference room for audiences of up to 25-30 people. The conference
      room's visualization is provided by a three-projector screen system that supports a 3-D
      environment, with a tracking system available.
   • Immersive Collaboration Teleconference Room: a 120-degree flat screen
      accommodating up to 20-25 people.

1.12.3 Main research topics and/or demos



The LITE is a world-class research facility that brings together a dynamic group of faculty,
scientists, students, and industry collaborators to perform advanced research in the integration
of visualization and supercomputing. LITE’s research activities address the challenges
involved with the real-time visualization and interaction with large-scale data and simulations.
LITE goes beyond traditional university research centers by having a strong emphasis on
finding practical solutions to complex problems, by putting theory into practice, and by
creating innovation driven by industry needs.

LITE’s research teams are developing the new infrastructure to support interactive immersive
environments powered by large supercomputers; they are working on innovative network
tools to enable real-time remote visualization and computing; they are performing ground-
breaking work on practical uses of immersive technologies tightly integrated with high
performance computing.

The collaboration between the University of Louisiana at Lafayette and the Lafayette
Economic Development Authority enables LITE to provide an exciting and flexible research
environment. Their research teams are always open to welcoming new collaborators and to
addressing new challenges in interactive visualization.

1.12.4 Links
http://www.louisiana.edu/Advancement/PRNS/news/2006/723.shtml
http://www.thelite.org/site.php
http://www.thelite.org/site38.php       (researchers)
5-year strategic plan:
http://instres.louisiana.edu/strategic-plan/StrategicPlan2005-2010.html

1.13 Virginia Tech (VT), 3DI Group.
Contact:       Doug A. Bowman
Address:       Dept. of Computer Science (0106)
               660 McBryde Hall
               Virginia Tech
               Blacksburg, VA 24061
Website:       http://research.cs.vt.edu/3di/

1.13.1 Team overview
Doug Bowman, Director of the 3DI Group, joined the faculty of the Department of Computer
Science at VT in August 1999. He heads the 3DI (3D Interaction) research group, and
participates in the University Visualization and Animation Group (UVAG) and the Center for
Human-Computer Interaction. Faculty members are Joe Gabbard (Computer Science), Debby
Hix (Computer Science), Tom Ollendick (Psychology), Walid Thabet (Building Construction),
Marte Gutierrez (Civil & Environmental Engineering), Chris North (Computer Science),
Mehdi Setareh (Architecture) and Dennis Gracanin (Computer Science). The group also has
six PhD students, two Master's students and twenty-two alumni.

The general research foci of the VT group are:
   •   3D user interfaces and interaction techniques
   •   3D applications, especially in the area of virtual environments
   •   immersive education


   •   scientific visualization
   •   immersive design

The technical systems of the VT group include:
   • Virtual Research v8 HMD
   • i-Visor HMD
   • CAVE (10'x10'x10')
   • Apu
   • 5DT data glove (a pair)
   • Pinch glove (a pair)
   • Measurand Shapetape
   • Twiddler keyboard
   • InterSense IS900 Tracking system
   • Polhemus magnetic tracking system
   • Maggie (movable)

1.13.2 Main research topics and/or demos
Some applications and demos are:
   • iDesign (supported by NSF-IIS-0237412)
   • Domain-specific Interaction Techniques (supported by NSF-IIS-0237412)
   • Information-Rich Virtual Environments
   • Immersive Virtual Environments for Clinical Assessment and Rehabilitation
      (Sponsored by Carilion Biomedical Institute 2005)
   • VE for Design of underground space (supported by ITR-O324889)
   • Large display UI design and evaluation
   • Benefits of Immersion
   • Home Design Tool
   • Spatial Collaboration
   •   Text and number entry techniques for VEs
   •   VEs for undergraduate science and engineering education
   • Empirical comparison of VE display devices
   • TULIP menus
   •   Nuance-oriented interfaces for VEs
   • Usability approaches for VEs

1.13.3 Links
http://research.cs.vt.edu/3di/
http://www.hci.vt.edu/
http://infovis.cs.vt.edu/gigapixel/index.html (high-resolution display)








2 VR/VE research in Asia
This section is based on an extensive Web investigation, the report of a CNRS mission to
Japan, and formal visits to the most-cited laboratories. For each laboratory, the synthesis
presents an overview, the current projects and research competences, the human resources,
and the VR equipment, if any.
We concentrated our survey on four leading VR research nations in Asia: China, Korea,
Singapore and Japan. There are obviously other laboratories in Asian countries not cited here.



2.1 Chinese laboratories
2.1.1 Virtual Reality & Human Interface Technology Lab,
      Tsinghua University
Contact: Prof. Heng L.
Website: http://www.ie.tsinghua.edu.cn

       2.1.1.1 Team overview
The Virtual Reality & Human Interface Technology Lab (VRHIT) is located in Shunde
Building, Room 524D, Tsinghua University. The main task of VRHIT is to conduct applied
research on virtual reality technology for the analysis of human-machine systems, in order to
enhance both the credibility and the efficiency of the analysis, leading to improved system
design and to greater efficiency, safety and usability of the resulting systems.
Meanwhile, given the high cost of virtual reality systems relative to typical research budgets
in China, the lab also aims at the research and development of low-cost virtual reality
integration systems suitable for education and research at most universities and research
institutes, providing the whole system-integration scheme, the main human-computer
interaction facilities, and simulation and analysis software.
Virtual Reality (VR) technology is a highly integrated interdisciplinary field, drawing on
computer technology, human-computer interaction, and measurement and control technology.
Immersion, interaction and imagination are its three defining characteristics. Applying VR
technologies, the researchers of VRHIT are able to simulate many interactive processes of
human-machine systems, resulting in a more credible evaluation of system designs.
Human resources: the laboratory now has one professor, one associate professor, one engineer
and several PhD candidates.
Equipment: Many virtual reality materials are available in VRHIT:
   •   Three-channel screen projection system: based on PCs with professional display
       cards; multi-screen projection with edge blending between adjacent channels is
       realized by network-based collaboration among the PCs.
   •   Space ball: by operating the space ball, the operator can manipulate virtual objects in
       the virtual environment with six degrees of freedom.






   •   Driving Simulation System: They installed sensors at the places of accelerator, brake,
       clutch, steering wheel and gear shifting. The signals are sent to computer through
       serial port after processing. All the signals can be utilized for controlling the virtual
       environment.
   •   Five-axis sculpting machine: an ordinary 3-axis sculpting machine improved by
       adding two extra rotary axes, on which the lab’s special NC sculpting code can be
       executed. The NC control system was developed entirely by this lab.
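The adjacent-channel edge blending mentioned above is commonly implemented as an intensity ramp across the overlap zone of two projectors, so that their combined light stays constant. The following is a minimal sketch of that common approach; the gamma value and the linear-ramp shape are illustrative assumptions, not documented details of the lab's system:

```python
def blend_weight(x, overlap_start, overlap_end, gamma=2.2):
    """Brightness weight for pixel column x of the LEFT channel.

    The right channel uses the mirrored weight, so that after the
    display's gamma the two projected contributions sum to a constant
    brightness across the overlap zone.
    """
    if x <= overlap_start:
        return 1.0          # fully inside the left channel
    if x >= overlap_end:
        return 0.0          # fully inside the right channel
    t = (x - overlap_start) / (overlap_end - overlap_start)
    # Linear cross-fade in light space, converted to pixel-value space.
    return (1.0 - t) ** (1.0 / gamma)
```

At the midpoint of the overlap, each channel outputs a pixel value of `0.5 ** (1/gamma)`, which the display's gamma maps back to half brightness in light space, so the two channels together reproduce full brightness.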

       2.1.1.2 Main research topics and/or demos
Currently there are several research projects in VRHIT, including a National Natural Science
Foundation of China (NSFC) project and cooperative research projects with several
international corporations and institutes such as Procter & Gamble (P&G), Mitsubishi Heavy
Industries (MHI) and the Liberty Mutual Research Institute for Safety. The lab has achieved
results in multi-input and feedback technology, a five degree-of-freedom interactive virtual
sculpting and rapid prototyping system, a human motion tracking and simulation system, and
more, and intends to contribute further to the application of virtual reality technology in
industry.
Virtual Driving Simulator
The first project addresses driving simulation, jointly with the University of Missouri-Rolla.
Working together, Tsinghua University and the University of Missouri-Rolla have developed
a low-cost, high-fidelity wide-screen driving simulation system aimed at driving-safety studies
and driver training. The system consists of three fused channels, each with a networked,
synchronized workstation and a high-lumen projector. The fused wide screen realizes a very
wide viewing angle, while the rear-view and side-view mirrors are rendered by sharing the
three workstations. A real Ford light truck, modified as a driving cockpit on a fixed base, is
fitted with numerous sophisticated sensors. The simulation software is developed with the
MultiGen Creator and Vega packages from Paradigm. The open structure of this software
allows the researchers to collect all driver operating data, including the operation of the gas
pedal, brake, clutch, gear shift and lights. The researchers can also design specific driving
scenarios to train novice drivers, especially teenage drivers.
Virtual Sculpting
The second project’s objective is to study how to apply virtual reality technology, based on
the principle of “virtual sculpting”, to the creative design of product geometry and to the
realization of real models by rapid prototyping. During the “virtual sculpting” process, a
virtual design environment is created in real time with 3D stereo display, realistic multi-
texture display, cutting-sound feedback and, to a certain extent, tactile feedback, making the
design process natural and lifelike. Meanwhile, the interactive device records the motion
track of the tool and translates the data into a special kind of NC code, which is then executed
by a five-axis milling machine modified by the lab to create the real object, completing the
rapid prototyping process.
For more information, we invite the reader to consult the laboratory website for explanations
and several demonstrations.
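The translation of a recorded tool motion track into NC code for a five-axis machine can be sketched roughly as below. The G-code dialect, the axis letters and the feed rate are generic illustrative assumptions; the lab's special NC format is not documented here:

```python
def track_to_nc(track, feed=500):
    """Convert recorded tool poses (x, y, z in mm; a, b in degrees)
    into a generic five-axis G-code program."""
    lines = ["G21",   # metric units
             "G90"]   # absolute positioning
    for x, y, z, a, b in track:
        lines.append(
            f"G01 X{x:.3f} Y{y:.3f} Z{z:.3f} A{a:.3f} B{b:.3f} F{feed}")
    lines.append("M30")  # end of program
    return "\n".join(lines)
```

For example, `track_to_nc([(0, 0, 0, 0, 0), (1.5, 0, -0.25, 10, 0)])` yields a two-move program between the `G21`/`G90` header and the `M30` footer.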


2.1.2 Virtual Reality Lab (VRLab), Wuhan University
Contact: Prof. Zhu Q.




Website: http://vrlab.whu.edu.cn

      2.1.2.1 Team overview
The Virtual Reality Lab (VRLab) is located at 129 LuoYu Road, Wuhan University. The
main task of VRLab is to assist in the innovation, application and dissemination of theories
and technologies, through teaching, research and public service, across a comprehensive
range of virtual geographic environments, thereby serving the needs of Wuhan University,
China as a whole, and the wider world community.
Human resources: the laboratory now has one professor, two associate professors, and twenty
doctoral candidates.
Equipment: given the computer-science nature of the research at VRLab, hardware equipment
is essentially limited to powerful computers (an SGI Onyx3 InfiniteReality graphics system
and a BARCO visualisation system).

      2.1.2.2 Main research topics and/or demos
At VRLab, the research is based essentially on the VGEGI software family, developed for 3D
city modelling and virtual geographic environments. The 3DCM module provides automatic
3D modelling functions based on coded photogrammetric data and/or GIS data, and can also
import 3D CAD models in 3DS format directly. The 3DDB module supports integrated
database management of massive 3D city models, while the 3DViewer module provides
interactive, seamless real-time 3D visualization; regular GIS functions such as multimedia
query, spatial analysis, and multi-scale and multidimensional representations are also
included and extended.
Among other uses, this software supports the visualization of collaborative design and
sunlight analysis based on the 3D city models.


2.1.3 State Key Lab of CAD&CG, Zhejiang University
Contact: Prof. Hujun Bao.
Website: http://www.cad.zju.edu.cn

      2.1.3.1 Team overview
The State Key Lab’s main research is based on Computer Aided Design (CAD) and Computer
Graphics (CG), considered a cross-disciplinary high-tech research area. The State Key
Laboratory of CAD&CG at Zhejiang University was funded under the National Seventh
Five-Year Plan. Its construction began in 1989, and it was approved for public opening by the
State Education Commission in 1990.
The Lab focuses its research on the fundamental theories and algorithms of CAD&CG as well
as their applications in industry. It aims to stand at the frontier of the international academic
field and to strive for original, innovative results. The Lab is being built into a leading
research institution in the CAD&CG area with international influence, and a base for fostering
young talent, promoting academic exchange and spreading high technology.
The research team of the Lab is creative and industrious. The Lab maintains close links with
related CAD&CG institutions in several countries, including the USA, Germany, the United Kingdom




and Japan. It was ranked among the top-level State Key Labs of China by the international
journal Science.
The Lab offers a good research environment with various facilities and a lively academic
atmosphere. Both domestic and foreign experts are welcome to carry out advanced research
in the Lab. Prof. Yunhe Pan, Fellow of the Chinese Academy of Engineering, is the current
chairman of the Laboratory Academic Committee. Prof. Hujun Bao is the current director of
the Lab.
Human resources: the virtual reality group of the lab has one professor, two associate
professors, and several PhD candidates.
Equipment: given the computer-science nature of the research at the State Key Lab, the
hardware equipment is essentially limited to a set of powerful computers.


       2.1.3.2 Main research topics and/or demos
The Lab is the leader in Computer Graphics in China. It is mainly supported by the National
973 Fundamental Science Program (973 program), the National High-Tech Program (863
program) and the National Natural Science Foundation (NSFC). The virtual reality research
project granted under the 973 program was funded with 3 million USD and started in January
2003; the research team, led by Prof. Hujun Bao, consists of about thirty senior members from
top-level universities and institutes in computer graphics around China, including Zhejiang
University, the Chinese Academy of Sciences, Tsinghua University, etc.
State key lab of CAD&CG has a diverse collection of research directions given as:
   •   Virtual Reality
   •   Visual Computing
   •   Computer Aided Design
   •   Geometry Computing
   •   Computer Animation

The Interactive Graphics & Virtual Reality Group of the lab studies the core technologies of
graphics and spatial-information acquisition, representation, processing and visualization in a
harmonious human-computer environment, as well as various practical techniques and
utilities applicable to the relevant industries. The main research subjects are:
   •   Agile Modelling: concerns the methodology and systems of rapid modelling 3D
       geometry, material, motion and behaviour for diverse simulation and/or virtual reality
       applications.
   •   Real-time Rendering: concerns the study on the real-time rendering of 3D dynamic
       graphics, and the reusable architecture of simulation engine that is capable of handling
       extremely complex 3D scene.
   •   Spatial Information Processing: developing innovative techniques and integrated
       systems for spatial-information storage, transport, search and visualization.
   •   Augmented Reality: investigation of theories and techniques for seamlessly blending
       computer-generated virtual objects with the physical world.






2.2 Korean laboratories
2.2.1 POSTECH, Virtual Reality Lab, Pohang University of
      Science and Technology
Contact: Prof. Seungmoon C.
Website: http://vr.postech.ac.kr

       2.2.1.1 Team overview
The POSTECH VR Laboratory is interested in pursuing research in any aspect of perceptive
media, but has so far concentrated mainly on the following topics: the study of presence, 3D
multimodal interaction, and new, structured approaches to mixed-reality content authoring.
The VR Lab collaborates closely with many laboratories at POSTECH: the Computer
Graphics Laboratory, the Software Engineering Laboratory and the UI Laboratory.
Human resources: the laboratory now has three professors, two associate professors, and
several PhD candidates.
Equipment: the lab’s equipment is varied. Worth mentioning is a tactile display device (V2,
with 60 vibrators, easy to program and handle) recently developed by local researchers. A
vibro-tactile glove has also been developed in the lab. The laboratory holds:
             o   HMD: Virtual Research (VR4 and V8) with dual channel
             o   3D glasses
             o   Haptics Joystick: MS Sidewinder ForceFeedback Pro.: DirectX SDK.
             o   3D mouse: space ball, Magellan Driver …
             o   Tracker,
             o   Etc.


       2.2.1.2 Main research topics and/or demos
In Virtual Reality Lab at POSTECH, there are four major research directions:
   •   Presence:
           Relative contribution of visual presence elements: most research on presence has
           been directed toward formulating definitions of presence and, based on them,
           identifying the key elements that affect it. This research investigates the relative
           benefits (or contributions) of those presence elements toward creating presence.
   •   3D Multimodal Interaction
          o Interactive Music System
           o VR-based Motion Training (Just Follow Me): training is one of the typical
               application domains of VR. There are numerous training domains where the
               trainee must exactly follow the motion of a teacher.
           o Image-based Interaction: image-based rendering is one of the promising
               platforms for realizing realistic VR environments, but one of its current
               disadvantages is the lack of interaction. The lab is designing a hybrid platform
               in which both image-based objects and objects with 3D geometry can be
               rendered together, and various interaction methods can be specified in an
               intuitive manner.




          o Multimodal Interaction for CAD
          o Body based Interaction: this research explores different ways to use features of
              one’s own body for interacting with computers
           o Tactile Interface and Sensory Compensation: this research presents a vibro-
               tactile display system to increase the “presence” of the target interaction object
               and to support human interaction with moving objects at close range.
   •   New and Structured Methods to Building Virtual Worlds
   •   Haptics

2.2.2 Virtual Reality Lab of KAIST (Korea Advanced Institute of
      Science and Technology)
Contact: Prof. Kyung S. P.
Website: http://ie.kaist.ac.kr

       2.2.2.1 Team overview
The Virtual Reality Laboratory (VR Lab) is a laboratory of the Computer Science Division at
KAIST, located in the EECS Building. It was established in 1991 by Professor Kwangyun
Wohn.
Human resources: the laboratory now has one professor, two associate professors, and several
PhD candidates.
Equipment: the Virtual Reality Lab at KAIST also has numerous pieces of equipment; a
partial list:
             o 5 HMD: Leep Boston Cyberface2, MicroOptical CO-3/1 and TekGear M1
               Viewer
              o 1 DataGlove from VPL, 1 PowerGlove, 1 Pinch Glove and 1 Fifth Glove.
             o 1 CrystalEyes from Stereo Graphics
             o 2 i-Glass
             o 1 Cyberscope
             o Matrox meteor II MC/4 frame grabbers
             o Sony XC-75 CCD cameras
             o IR Leds / Filters
             o Camera calibration frame
             o Retro-reflective markers
             o Etc.


       2.2.2.2 Main research topics and/or demos
Research in the VR laboratory spans various areas of virtual reality, human-computer
interaction, computer vision, computer graphics, and computer music. The research topics of
interest are:
   •   Time-critical Rendering
       o Visualization: visibility culling algorithms try to reduce the number of primitives
          sent to the graphics pipeline based on occlusion with respect to the current view.
        o Cluster Approaches for Visual Immersion: high frame rates even with very
           complex scenes, and visual simulation of participants’ movement.
        o Level of Detail: investigation of approaches using multiresolution representations,
           the so-called level-of-detail method. Several methods are being developed for simplifying
           a single complex object while preserving some of its visual features, along with
           effective mechanisms for selecting among multiple detail models. Mechanisms for
           managing large-scale terrain are also under study.
    •   Digital Human:
            o Motion capture: Motion capture is the recording of human body movement for
                immediate or delayed analysis and playback.
            o Garment simulation: Interactive garment simulation for e-commerce.
    •   Interactive Computing: Virtual City Planning, Wearable computer.
    •   Culture Technology: Paradigm.
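The detail-selection mechanism described in the list above — picking one of several precomputed models by the object's projected size on screen — can be sketched as follows. The pixel thresholds and camera parameters are illustrative assumptions, not values from the lab:

```python
import math

def select_lod(distance, radius, fov_deg=60.0, screen_px=1080,
               thresholds=(200, 50, 10)):
    """Return a level-of-detail index (0 = most detailed) based on the
    approximate projected diameter, in pixels, of the object's bounding
    sphere; anything below the last threshold gets the coarsest model."""
    # Perspective scale: pixels per unit of (size / distance).
    focal_px = screen_px / (2 * math.tan(math.radians(fov_deg) / 2))
    pixels = 2 * radius / max(distance, 1e-6) * focal_px
    for lod, limit in enumerate(thresholds):
        if pixels >= limit:
            return lod
    return len(thresholds)  # coarsest model (candidate for culling)
```

A nearby object (large projected size) gets index 0; as it recedes, the index increases and a simpler model is drawn, which is what keeps the frame rate stable with complex scenes.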

2.2.3 Cognitive Agent in Virtual Environment Laboratory
      Overview (CAVE), Hanyang University
Contact: Prof. Sun I. K.
Website: http://cave.hanyang.ac.kr/

        2.2.3.1 Team overview
The CAVE Laboratory holds a unique position in the area of virtual reality psychotherapy. It
was founded at Hanyang University by Professors Sun I. Kim and Jang Han Lee, who were
the first to apply virtual reality to psychotherapy in Korea.
Since 1999, the CAVE Laboratory has pursued an educational and research mission that helps
treat patients with acrophobia, agoraphobia, claustrophobia, and fear of public speaking, by
means of advanced virtual reality technology.
Human resources: the laboratory now has two professors, one associate professor, and several
PhD candidates.
Equipment: the CAVE laboratory has several kinds of equipment, which we present here in
general terms (no details are made available):
    •   Various type of HMD
    •   A complete Driving Simulator
    •   A BARCO systems for visualisation (three channel)
    •   Appropriate Computer Graphics systems


        2.2.3.2 Main research topics and/or demos
The laboratory is pursuing its efforts in the VR psychotherapy field. The research is now
exploring new frontiers, such as intelligent virtual reality and autonomous avatars. These
themes outline a future where patients interact with virtual avatars in a virtual world, and
where such avatars not only respond to a patient’s reactions but also understand the patient’s
emotions and thinking. The Laboratory begins the 21st century with plans to establish itself
as one of the world’s best VR therapy laboratories.
The research topics include:
    •   Activities of Daily Living
        o The Virtual reality assessment and training system for unilateral spatial neglect:
                    Construction of Virtual Environment
                  Verification of the Virtual Environment for Estimating of Hemi-Neglect
                  Patients
                  Construction of Contents for Estimation of Patients' Ability
     o Supermarket: in this study, the researchers developed a virtual supermarket, and
         examined the possibility of using the VR system to assess and train the cognitive
         abilities of brain injured patients with respect to the ADL by having them perform
         tasks in a virtual supermarket.
   • Cue Exposure Treatment (CET): the objective is to design a virtual reality system to
      create a desire for nicotine, based on the responses to a nicotine-craving questionnaire.
      The virtual environment is composed of craving environments, craving objects and
      smoking avatars. They compared this virtual reality system with picture-based cues
      and investigated the potential value of using a virtual reality system for CET.
   • Driving: development of a driving simulator for traffic-accident PTSD patients
      o Assessment of cognitive and motor abilities
      o Reduction of fear during driving
   • Etc.

2.2.4 Visual Computing & Virtual Reality Lab (VCVR), Ewha
      Womans University
Contact: Prof Kim J. H.
Website: http://chanel.ewha.ac.kr/

       2.2.4.1 Team overview
The Visual Computing and Virtual Reality Lab develops various application systems to
provide efficient visualization of multi-dimensional information and interaction techniques
using new media.
The purpose of the VCVR Laboratory is also the development of medical-image visual
computing technologies that can be applied to actual medical practice, such as diagnosis aid,
treatment support, treatment planning, and medical education and training.
It has been designated a National Research Laboratory (NRL) by the Ministry of Science &
Technology and an Information Technology Research Center (ITRC) by the Ministry of
Information and Communication.
Human resources: the laboratory now has one professor, four assistant and associate
professors, and several PhD candidates.
Equipment: because the research at VCVR is software-oriented, the hardware is limited to
basic VR equipment, apart from the powerful computers.


       2.2.4.2 Main research topics and/or demos
The Visual Computing and Virtual Reality Lab is interested in computer graphics and virtual
reality technologies. The major research topics are:
   •   Collaborative Virtual Reality: networked virtual reality environments make it possible
       for users in different places to collaborate. They carry out research on
       providing immersive tele-presence and high efficiency in networked virtual reality
       environment.
   •   Mixed Reality: they research the issues involved in constructing mixed-reality
       environments for a human-centred digital life in people’s usual living space. For user
       interaction in this environment, the lab develops interaction methods based on mobile
       tools and sensors.
   •   Human Computer Interaction: the lab researches efficient and intuitive interaction
       between humans and the virtual world. The goal is to make the virtual environment
       feel friendlier to users and to build easy interfaces that maximize efficiency at work.
   •   Visualization and Rendering: 3D modelling technique reconstructs a model from 2D
       medical images to offer realistic view. In addition physics based modelling technique
       simulates the actual organs. These algorithms provide a realistic view of human
       anatomy and help to diagnose the patient.
   •   Image Segmentation and Registration
   •   Surgery Planning: Planning coronary artery bypass grafts.
One of the current projects in this lab centres on “Mixed Reality in Moving Space”. The aim
is to create a new form of human-centred digital living environment through mixed-reality
environments that respond to the movements of the user in everyday living space. In addition,
research is conducted on integrated interaction techniques using mobile media, so that the
user can perform appropriate interactions in such mixed-reality environments.


2.2.5 Ubiquitous Computing & Virtual Reality Lab (U-VR),
      Kwang Ju Institute of Science & Technology (KJIST)
Contact: Prof. Woontack W.
Website: http://uvr.kjist.ac.kr

       2.2.5.1 Team overview
The Ubiquitous Computing & Virtual Reality Lab (U-VR) was formed in February 2001. The
mission of U-VR is to study and develop tools, algorithms and paradigms for “smart
computing environments” that process multimodal input, perceive the user’s intention and
emotion, and respond to the user’s requests.
Human resources: the laboratory has one professor, three associate professors, and several
PhD candidates.
Equipment: the laboratory has several devices (for example, a stereo vision system and many
HMD systems). We direct the reader to the lab’s website for an updated list of its hardware
and software facilities.


       2.2.5.2 Main research topics and/or demos
The research in the U-VR Lab aims to develop a “holistic smart computing environment” that
will become ubiquitous in the next 10 years. The major field of U-VR research is future
computing, which is highly interdisciplinary; the principal areas of research in the Lab are
therefore diverse, ranging from the highly theoretical to practical systems, and from the
mathematical to the intuitive. Specifically, four main research axes are planned for the U-VR Lab:




   •   Ubiquitous Computing for Smart Environment:
          o Context-aware for smart environments (home/office/car, etc.)
          o Location-aware & pervasive sensing (RF, IR, Ultrasound, etc.)
           o Wearable computing for user-centered smart environments (WearComp)
   •   Mixed and Augmented Reality for Immersive Media: 3D Reconstruction, HCI
          o 3D Vision for Scene Reconstruction (Panoramic VE)
          o HCI for Mixed/Augmented Reality environment (AR Keyboard / TUI / Hand-
              based Interface)
          o Multiview Camera Calibration (Self-Calibration)
          o Image/Video-based photorealistic interactive virtual environment
   •   U-VR systems (I-Media) for VR Applications: I-cubed Media Systems for Art,
       Culture, Edutainment:
          o I-cubed HCI: comfortable 3D interface, emotional intelligence, natural
              interaction
          o Image Processing for 3DTV
          o 3D Visual Computing and its Applications
          o Immersive 3D media delivery & display for VRGrid
          o Networked Virtual Reality, networked I-cubed media systems (IMS)
          o Perceptual User Interface: People Activity (e.g. dynamic gesture) Recognition
          o Art & Culture-Technology: Culture Technology Research Center
   •   Media Delivery.
Research in the U-VR Lab is not restricted to these areas.
The aforementioned areas are illustrated by current projects. The following list gives an idea
of the axes pursued:
   •   3D Display Technique for Multimodal Interaction
   •   Development of 3D modelling techniques and Realistic Contents Creation Platform
       for FTTH Services
   •   Development of Context Aware Technology for Physiological signal Responsible
       Agents (Context Representation and Context Integration)
More information is available only in the local language (see the Lab’s website).


2.2.6 Imaging Media Research Center (IMRC), Korea Institute of
      Science & Technology (KIST)
Contact: Prof. Hyoung-gon K.
Website: http://www.imrc.kist.re.kr/

       2.2.6.1 Team overview
The Imaging Media Research Center (IMRC) was established in 1997 to conduct human-
media interaction research, and has been active in developing new reality-media technology
for the 21st century.
Over the last seven years the center has pioneered immersive virtual-reality interaction
spaces, including a reality studio with a 9-metre-wide spherical screen projected by 3 BARCO
stereo projectors driven by a 3-pipe SGI ONYX2 graphics supercomputer, a 4-sided KIST
CAVE driven by PC clusters, a Smart Blue Studio with perceptual interaction, a Ubiquitous
Media Interaction Room with a smart floor, and a Gigabit network infrastructure for tele-
interaction with Japan and the EU.
Human resources & equipment: no information is communicated on these two points.


       2.2.6.2 Main research topics and/or demos
The research in the IMRC Lab centres on the challenge of “human-media interaction”.
Specifically, four main research axes are planned for the IMRC Lab:
   •   Interactive Reality Display Technology
           o   Large-screen multi-channel immersive visual display systems, 3D visual
               display devices, and haptic and motion interaction technologies are being
               studied toward comprehensive multimodal interaction with 3D virtual
               environments.
   •   Tangible Media Technology
           o   The 3D virtual environment is becoming more tangible, while the physical
               environment surrounding the user is becoming more responsive and
               ubiquitous. Tangible media technology is being developed to integrate
               virtual and physical environments seamlessly.
   •   Reality Sensing and Imaging Technology
           o   Microwave imaging based on electromagnetic scattering and diffraction,
               antenna design, electromagnetic inverse scattering, and EMI/EMC
               technologies are being developed for sensing and imaging hard-to-see
               objects, such as those buried underground, as well as for tracking 3D
               objects.
   •   Trans-continental Digital Culture Technology
           o Tele-immersive presentation of the virtual heritage via Trans-Eurasian
               Information Network (TEIN) is being developed through Digital Heritage
               Exchange (DHX) consortium with 6 EU members.


2.3 Singapore laboratories
2.3.1 Center for Advanced Media Technology (CAMTech),
      Nanyang Technological University
Contact: Prof. Chan K. Y.
Website: http://www.camtech.ntu.edu.sg/

       2.3.1.1 Team overview
Overview of the lab: established in 1998, the Center for Advanced Media Technology
(CAMTech) is a joint research and development centre between the Fraunhofer Institute for
Computer Graphics (IGD) of Darmstadt, Germany and the Nanyang Technological University
(NTU) of Singapore. CAMTech is a member of the INI-GraphicsNet.
CAMTech places significant emphasis on technology transfer and manpower development,
hosting undergraduate and postgraduate students who participate in the R&D projects and
Industrial Attachments carried out at the centre.
CAMTech's products and services are used widely by various MNCs and local SMEs,
governmental bodies and their affiliated companies, and educational and research institutions.


April 2007    51              CNRS/LIMSI – CNRS/IBISC – FhG/IAO –ALENIA – OVIDIUS – TNO
Deliverable N. D1.A_6        Dissemination Level : CO       Contract N. IST-NMP-1-507248-2


CAMTech primarily addresses the Asian market, but also handles international projects within
the INI-GraphicsNet.
Human resources: the laboratory currently has two professors, seven assistant and associate
professors, and several PhD candidates.


       2.3.1.2 Main research topics and/or demos
CAMTech's most notable research areas are the following:
   •   Real-time Rendering
   •   Virtual Reality
   •   Augmented Reality
CAMTech was established to conduct R&D activities in the broad technological area of
advanced media. The three research areas cited above find several fields of application at
CAMTech, for example:
   •   Scientific and Medical Visualisation:
               The use of highly interactive visualisation of patient-specific characteristics to
               detect tumour regions, as well as for precise diagnostics and pre-operative
               planning.
   •   Next Generation Learning Environments for Life Sciences:
              Creating new learning environments in the field of life sciences by integrating
              scientific knowledge, information technology and innovative pedagogies.
   •   Virtual Engineering and Manufacturing:
              This work is based on the Augmented Reality Book concept, a technology that
              allows additional visual information to be viewed on a semi-transparent display
              while flipping the pages of a book.
   •   Virtual and Augmented Environments for Medical Application:
              With the use of computer-based models, doctors can interact through new
              human-machine interfaces to explore and better understand the human
              anatomy. Surgical skills can be refined by applying these techniques on virtual
              humans, with precision tracking devices to monitor the progress and provide
              recommendations.
   •   New Media for Education and Cultural Heritage:
              Explore a digitally recreated heritage environment with a virtual avatar as a
              tour guide. Visit and interact with preserved heritage sites that are closed to the
              public.
   •   3-D Modelling and Reconstruction of Incident Scenes.
Current project: CAMTech provides technical support for the Nicoll Highway Inquiry.
The Ministry of Manpower requested CAMTech to provide a 3D computer visualisation.
CAMTech subsequently developed a 3D model of the accident scene at the MRT Circle Line
worksite. This virtual model of the overall wall systems serves the general purpose of
illustration and ground orientation, in order to facilitate the Committee of Inquiry (CoI)
hearing. Using a highly interactive in-house visualisation system, the 3D scene was first
presented to the public at the opening of the hearing on 2 August 2004, and will be used
throughout the eight-week inquiry.


2.3.2 Computer Graphics Research Laboratory (CGRL), National
      University of Singapore



Contact: Prof. Chionh E. W.
Website: http://www.comp.nus.edu.sg

       2.3.2.1 Team overview
Computer graphics has been a topic of research in the School of Computing. The Computer
Graphics Research Laboratory (CGRL) was formed a few years ago to house researchers
working on computer-graphics-related topics. The objectives of CGRL: the Computer
Graphics Research Group pursues research into techniques for achieving realistic rendering of
computer models and virtual environments, and into ways of interacting with such
environments. These are relevant to city and urban planning, architectural visualization,
computer-aided design, simulations and entertainment.
Human resources: at present the laboratory has no full professor, four associate professors,
and several PhD candidates. This information may not be up to date.


       2.3.2.2 Main research topics and/or demos
The areas of focus of the CGRL are the following:
   •   Real Virtual Reality:
               Considers efficiency issues encountered in interactive virtual reality
               applications. These include new data structures and algorithms for object
               culling, object simplification, collision detection and virtual environment
               manipulation; illumination and interaction with natural and artificial lighting;
               and distributed VR computing.
   •   Rapid Reality Modelling:
              Examines techniques and tools for creating virtual worlds. Subjects of interest
              include image and geometry based rendering, design methodology, data
              collection methodology, and plug-in solutions for commercial software.
   •   Easy Augmented Reality:
              It concerns the merging of computer-generated images with "live" video
              sequences. Topics under study include automatic deduction of object visibility,
              viewing parameters, and illumination conditions from video sequences or
              photographs.
   •   Vivid Virtual Environment:
               A relatively new topic for the lab, concerning the creation of autonomous
               virtual environments. Current work of relevance is on human animation.


2.4 Japanese laboratories
It is impossible to give an exhaustive list of all the Japanese laboratories and institutes
involved in virtual reality research. VR research started in Japan about 20 years ago, and at
the international level Japanese VR research can be ranked just after that of the US.
Innovation is a general research policy in Japan covering all aspects of science and
technology; it is also a crucial stake in remaining competitive with neighbouring emerging
Asian countries such as Korea, China and India. The Japanese government therefore invests
heavily in research and innovation.
By tradition, VR is nearly a cultural heritage in Japan. Nevertheless, compared with Europe or
the US, research and innovation are more oriented toward the technological aspects of VR
human-machine interfaces than toward software. For instance, Japan seems relatively less
innovative in physically based modelling, collision detection, and computer graphics software
in general. This seems to be linked to the educational programmes of schools and technical
universities.
Hereafter, a list of some universities and institutes involved in VR research is given. Note that
in Japan, almost all laboratories are identified by the name of the head professor. VR
research, industry and development are organized within the Virtual Reality Society of Japan
(VRSJ), whose webpage is http://www.vrsj.org/main.html/ (entirely in Japanese). To the best
of the contributors’ knowledge, no such society or organization currently exists in other
Asian countries.


2.4.1 The University of Tokyo
The University of Tokyo is the outstanding university in Japan and includes several
laboratories devoted totally or partially to virtual reality research. We selected some of them
(those for which VR is the main research stream).


       2.4.1.1 Tachi’s lab, also known as STAR (Science and Technology of
               Artificial Reality) lab
Contact: Prof. Tachi S. (tachi@star.t.u-tokyo.ac.jp)
Website: http://www.star.t.u-tokyo.ac.jp/index.php

2.4.1.1.1 Team overview

Tachi’s laboratory has a background in robotics and teleoperation. Its VR research started
from the well-known Telexistence system and grew to cover a wide area of VR. The
laboratory now focuses mainly on visualization technology, or rather visuo-haptic
visualization technology.
Human resources: one professor, one lecturer and one research associate as permanent staff,
plus one researcher, several PhD students, and many master’s course students.

2.4.1.1.2 Main research topics and/or demos
   •   Telexistence systems:
               The original system is a master-slave anthropomorphic teleoperation system
               allowing the operator’s own projection into a remote or virtual environment.
               Several research efforts have built on this concept; recently a more advanced
               version, called TeleSar 2, was demonstrated. It is a mutual telexistence
               master-slave system in which a humanoid slave robot, with the same size and
               structure as a human body, reflects the ‘master’ human operator at the remote
               location. An exoskeleton-type master cockpit controls the slave humanoid
               robot. The original idea here is that humanoid technology (the most advanced
               in the world) is used as a whole interactive display.
   •   Haptic display technologies:
               Several force-reflecting and tactile display technologies have been developed.
               Among them, the SmartTouch tactile display is composed of a thin
               electro-tactile display and a sensor mounted on the skin. The sensed
               information is converted to tactile sensation through electrical stimulation.
               Thus, the wearer can not only make physical contact with an object, but also
               “touch” surface information of any modality, even information that is
               ordinarily non-touchable. The SmartTouch prototype uses optical sensors.
   •   Haptic sensing technology:
               To relay haptic information, it is necessary to have haptic sensors serving as
               recording devices. GelForce is one remarkable outcome of this research: it
               visualises the stress/strain fields of a deforming surface by video processing
               of a texture pattern embedded in a transparent gel. Another system is
               SmartTool, a new “information haptisation” technology composed of
               real-time sensing devices and a haptic display. The sensor receives stimuli
               that change dynamically in a real environment, and displays the information
               to the user through haptic sensation; SmartTool therefore makes it possible to
               “touch” the dynamic information of real environments in real time. It is
               worth mentioning that this technology has been commercialized through the
               NITA Co. company.
   •   Visualization technology:
               Several very original systems have been proposed and demonstrated through
               advanced prototypes. For instance, the optical active camouflage concept is
               one of the most appealing. The idea is simple: the operator, or whatever
               needs to be hidden, is covered by a flexible screen onto which the
               background image is projected, so that one observes the masked object as if
               it were virtually transparent. The Seelinder display technology is a
               cylindrical 3D display that allows multiple viewers to see 3D images from
               any angle without wearing 3D glasses; each viewer sees the appropriate
               images for his viewpoint (even while the viewpoint is moving). Many other
               concepts have been proposed to improve the idea, which seems to be finding
               its way to military markets.
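The GelForce principle described above (recovering a force field from the displacement of a texture pattern embedded in a transparent gel) can be sketched in a few lines. The marker coordinates, the linear-elastic model and the stiffness constant below are illustrative assumptions, not the actual GelForce calibration:

```python
import numpy as np

def estimate_displacements(markers_ref, markers_now):
    """Per-marker displacement vectors (pixels) between a reference
    frame and the current frame; both arrays have shape (N, 2)."""
    return markers_now - markers_ref

def force_field(displacements, stiffness=0.8):
    """Linear-elastic approximation: surface traction is taken to be
    proportional to marker displacement (the stiffness value here is
    an illustrative constant, not a calibrated one)."""
    return stiffness * displacements

# Two dot-pattern snapshots: an undeformed reference grid and a
# deformed observation, as a video-tracking stage would provide.
ref = np.array([[10.0, 10.0], [20.0, 10.0], [10.0, 20.0]])
now = np.array([[10.5, 10.0], [20.0, 11.0], [10.0, 20.0]])
forces = force_field(estimate_displacements(ref, now))
```

A real implementation would track the dots by computer vision and invert a measured elastic response of the gel rather than apply a single scalar stiffness.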

       2.4.1.2 Hirose, Hirota and Tanikawa’s lab, RCAST (Research Center
               for Advanced Science and Technology)
Contact: Prof. Hirao I.
Website: http://www.rcast.u-tokyo.ac.jp/

2.4.1.2.1 Team overview

This research laboratory focuses on building high-level, intuitive, synergetic and user-friendly
user interfaces. This covers a wide scope of problems, including multimodal interfaces, mixed
reality (MR) research, wearable computers, and contents for ubiquitous computing research.
Techniques for generating high-quality visualization and real-time computation are also
researched. In addition, tele-immersive communication systems over the large-bandwidth
gigabit communication network are investigated. Unlike current video conferencing
technology, which only displays live video images in 2D, this laboratory proposes
high-quality video avatars of the remote participant based on live video images: many
cameras are placed in the real environment (ubiquitous cameras) to capture live video of the
human, and the captured images are used to construct a 3D video avatar, which is projected in
display systems such as the distributed wide-screen system.






Human resources: 5 permanent faculty members, 11 researchers, several PhD students, many
master students.

2.4.1.2.2 Main research topics and/or demos

The research conducted in RCAST can be summarized in the following points:
        -    Visualization and rigid body simulation.
        -    Wearable computers: this research is very active and several original concepts
             have been proposed and advanced prototypes have been realized.
        -    Sensing systems: mainly avatar position tracking systems and algorithms. This
             research serves the visualization and video avatar research.
        -    Digital art: concerns using virtual reality techniques to design innovative and
             interactive digital art visualizations, for example the controllable water particle
             display and the virtual visualization of the Maya civilization heritage
             (combining real images with high-fidelity virtual complements).
        -    Video avatar: this research deals with developing interaction paradigms between
             virtual avatars and humans, distributed telepresence systems, etc.
        -    Augmented Reality projection.

2.4.2 Iwata Laboratory, University of Tsukuba
Contact: Prof Hiroo Iwata
Website: http://intron.kz.tsukuba.ac.jp/

       2.4.2.1 Team overview
At the University of Tsukuba, the most important laboratory strongly involved in VR research
is the Iwata Laboratory, which was started in 1986.
Research in the Iwata lab deals mainly with the development of various VR interface devices,
including visual, force, and more recently sound displays. Various locomotion display
interfaces have also been proposed, targeting interaction in immersive visual environments.
Among the Iwata lab’s areas of interest are haptic interfaces, locomotion interfaces, spatial
immersive displays and their applications. The overall objective is to develop methods that
provide users with an ultimate multimodal human interface using virtual reality.
Human resources: the present team includes one professor (Dr. Hiroo Iwata), one assistant
professor (Dr. Hiroaki Yano), thirteen graduate students and six undergraduate students. The
background of the team is human interface technology and mechanical engineering.


       2.4.2.2 Main research topics and/or demos
   •   Haptics interfaces:
               Several interesting concepts and designs have been developed. For instance,
               the Haptic Master is a parallel-link haptic interface for desktop use that
               generates reaction forces at the user’s fingertips (independently through each
               finger attachment, or through a spherical handle in the case of a pinch grasp).
               The user can feel the rigidity or weight of virtual objects. The FEELEX
               family is designed to enable two-handed interaction using the whole palms.
               It consists of a deformable front-projection screen connected to a linear
               actuator array that deforms its shape. Using HapticScreen, the user can see
               the images of virtual objects as well as tap, touch, and grasp them. The
               VOLFLEX family is a volumetric haptic display composed of a group of air
               balloons driven by computer-controlled air cylinders. Each air cylinder is
               equipped with a pressure sensor that detects the force applied by the user,
               and deformation occurs according to the actual hardness of the virtual clay.
               Haptics applications concern: haptisation of information, surgical simulators,
               3D shape modelling, cooperative work in VEs, and haptic interactions such as
               touching, grasping and moving virtual objects.
   •   Locomotion interfaces:
               This is another strength of this laboratory. Several locomotion interfaces
               have been conceived that allow the operator to travel within immersive
               environments; realizing natural navigation is one of the major challenges in
               virtual reality research. Examples of proposed concepts include the Torus
               Treadmill, a locomotion interface equipped with a specially arranged
               treadmill that provides an infinite plane for creating the sensation of walking.
               The GaitMaster is another locomotion interface, which generates an
               omni-directional uneven surface: the walker stands on top of a plate on a
               motion base, each motion base is controlled so that it traces the position of a
               foot, and a turntable traces the orientation of the walker. The CirculaFloor is
               a locomotion interface using a group of movable tiles, which employ a
               holonomic mechanism that achieves omni-directional motion; circulation of
               the tiles enables the user to walk in an arbitrary direction in the virtual
               environment while his/her position is maintained. Finally, PowerShoes is a
               concept of active sliding shoes allowing large-scale navigation within a
               small restricted space.
   •   Immersive display:
               An image display system with a wide-angle spherical screen, named
               Ensphered Vision, has been proposed. In this system, a large screen is used
               for the virtual environment as an alternative to a classical HMD; the sphere is
               an ideal shape for a screen that covers the human visual field. A single
               projector and a convex mirror are used to display a seamless image: the
               spherical convex mirror conveys the light from the projector onto the
               spherical screen, and the obtained image totally surrounds the viewer.
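The single-projector optics described above can be illustrated with a toy ray computation: intersect a projector ray with the convex spherical mirror and reflect it toward the screen. The projector position, ray direction and mirror radius below are illustrative values, not the actual Ensphered Vision geometry:

```python
import numpy as np

def reflect(d, n):
    """Reflect a direction d about a unit surface normal n."""
    return d - 2.0 * np.dot(d, n) * n

def mirror_bounce(projector_pos, ray_dir, mirror_center, mirror_radius):
    """Intersect a projector ray with a spherical convex mirror and
    return the hit point and the reflected direction."""
    o = projector_pos - mirror_center
    d = ray_dir / np.linalg.norm(ray_dir)
    b = np.dot(o, d)
    disc = b * b - (np.dot(o, o) - mirror_radius ** 2)
    s = -b - np.sqrt(disc)                     # nearest intersection
    hit = projector_pos + s * d
    n = (hit - mirror_center) / mirror_radius  # outward unit normal
    return hit, reflect(d, n)

# A ray from a projector above a unit-radius mirror centred at the
# origin, aimed straight down: it hits the top of the mirror and
# bounces straight back up.
hit, out = mirror_bounce(np.array([0.0, 0.0, 2.0]),
                         np.array([0.0, 0.0, -1.0]),
                         np.zeros(3), 1.0)
```

Tracing one such ray per projector pixel is the standard way to precompute the image warp that makes the projected picture appear undistorted on the spherical screen.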


2.4.3 Sato-Koike group, Tokyo Institute of Technology
Contact: Prof Sato & Prof Koike
Website: http://sklab-www.pi.titech.ac.jp/

       2.4.3.1 Team overview
This laboratory comprises two teams. Research deals mainly with human interface
technology, virtual reality techniques, image processing, pattern recognition and brain
interfaces. Prof. Sato’s team works on virtual worlds and human interfaces and on developing
mathematical models of virtual information processing. Prof. Koike’s team deals with the
communication functions of the brain, including motor control and visual perception, from
the standpoint of computational theory. This research is based on mathematical analysis,
computer simulation of neural networks, and psychophysical experiments.





Human resources: 4 faculty permanent staff (one professor), 2 research associates, several
PhD students and many master students.


        2.4.3.2 Main research topics and/or demos
    •   Development of haptic display:
               These devices are dedicated to manipulating computer-generated virtual
               objects with the operator’s fingers, and allow one to feel shape, texture,
               collision, weight, inertia, etc., so that the user can operate virtual objects as
               naturally as in the real world. The best-known technology is the SPIDAR
               family of lightweight string-based haptic devices, which have been
               successfully ported to several applications, including large-scale collocated
               visualisation displays (workbench, CAVE).
    •   Human-centered virtual environments:
               This work deals with human-scale VR system technologies, which give
               human operators immersive projected images and haptic interaction over a
               large range of motion.
    •   Computer vision and human interface:
               The aim of this research is to build computer vision that performs robust
               recognition with a small amount of computation, and to apply it to
               human-computer interaction. The proposed approach is to construct a
               mathematical model based on the low-level animal visual system and to
               develop a computer vision system based on that model.
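A core computation in string-based devices such as SPIDAR is resolving a desired fingertip force into non-negative string tensions, since strings can only pull. The sketch below assumes a hypothetical symmetric four-string layout (pulleys at alternate cube corners) and a made-up minimum tension; these are illustrative assumptions, not actual SPIDAR parameters:

```python
import numpy as np

# Hypothetical symmetric layout: four pulleys at alternate corners of a
# cube, strings converging on a finger grip at the centre.
pulleys = np.array([[ 1.0,  1.0,  1.0],
                    [-1.0,  1.0, -1.0],
                    [ 1.0, -1.0, -1.0],
                    [-1.0, -1.0,  1.0]])
U = pulleys / np.linalg.norm(pulleys, axis=1, keepdims=True)  # unit vectors, (4, 3)

def tensions_for_force(f_desired, t_min=0.1):
    """String tensions t >= t_min such that U.T @ t equals f_desired."""
    # Minimum-norm solution; may contain negative (infeasible) tensions.
    t, *_ = np.linalg.lstsq(U.T, f_desired, rcond=None)
    # For this symmetric layout the uniform vector (1,1,1,1) lies in the
    # null space of U.T, so adding the same bias to every string changes
    # no net force; use it to raise the smallest tension to t_min.
    return t + (t_min - t.min())

t = tensions_for_force(np.array([0.0, 0.0, 1.0]))  # 1 N pulling the grip upward
```

Keeping all strings under a small positive tension in this way is also what keeps a real tensioned-string device taut and responsive.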

2.4.4 Ikei laboratory, Tokyo Metropolitan Institute of Technology
Contact: Unknown
Website: http://mgikta.tmit.ac.jp/ikeilab/

        2.4.4.1 Team overview
Human resources: 1 tenure permanent professor, some PhD candidates and master students.

        2.4.4.2 Main research topics and/or demos
The ongoing research deals mainly with two themes:
         -   wearable computers,
         -   virtual reality, with a particular focus on haptics.
In particular, several texture feedback devices have been developed and advanced concepts
achieved, such as the tactile display mouse and the texture explorer system, which combines
tactile and force reflection in a single device.


2.4.5 Human interface Engineering lab, The Osaka University
Contact: Prof. Fumio K.
Website: http://www-human.ist.osaka-u.ac.jp

        2.4.5.1 Team overview
The research conducted deals mainly with three problem areas:





         -    Human interfaces through integrated multimedia: exploring novel media such
              as tactile and haptic sensation, and establishing an integration scheme for these
              diverse media in order to apply it to the modelling or representation of human
              intentions and concepts.
         -    Human interfaces in virtual environments: here, the research targets helping an
              operator work with various things in virtual space; hence various methods by
              which the computer can understand a human’s intention and provide better
              support for various tasks are studied (e.g. recognition of gaze and facial
              expression, recognition and understanding of hand gestures, generation of
              virtual agents that deliver the computer’s messages to users in an easily
              understood way, construction schemes for virtual 3D objects utilizing
              human-intention recognition techniques…).
         -    Recognition engineering and human interfaces: this part deals with integrating
              cognitive engineering into the area of computer-based information systems.
Human resources: 1 professor, 1 associate professor, 3 assistant professors, 2 researchers, 9
PhD students, 17 master students.


       2.4.5.2 Main research topics and/or demos
   •   Virtual tools, virtual hand:
               The goal is to provide users with appropriate metaphors in interaction systems
               that allow intuitive and user-friendly manipulation of virtual representations.
               Research also concerns constructing intelligent and advanced user interfaces
               by utilizing knowledge of kinematics in computer-human interaction,
               obtained from careful experiments and investigation.
   •   Collaborative interaction:
               Research on interfacing technology facilitating face-to-face meetings,
               collaborative work, information exchange, etc.
   •   Agents from reality:
               Concerns exploring new approaches to the interactive simulation of
               ecosystems through an augmented reality technique that integrates virtual
               parts into autonomous agents and adds interactivity.
   •   3D interactions using images from multiple viewpoints
   •   Integrated 3D modelling (classical basic research on 3D modelling)
   •   Motion characteristic control: capturing details from motion capture that allow
       natural-looking animation of virtual avatars
   •   Communication in networked symbiosis environment.

2.4.6 The Gifu Region
In 1995, the Gifu region launched a strong policy on virtual reality research, and since then
several initiatives have been founded. The University of Gifu comprises two main groups
dealing with VR research.


       2.4.6.1 Institute of Dream Systems Development, Techno-Plaza
Contact: Prof Takeo Ojika.
Website: http://www.vsl.gifu-u.ac.jp






2.4.6.1.1 Team overview

Among VR research centers, one should not miss the Techno Plaza, and in particular the
Institute of Dream Systems Development, which was created by Professor Takeo Ojika.

2.4.6.1.2 Main research topics and/or demos

In 1996, Professor Takeo Ojika invented the COSMOS system, a six-walled box forecast to
be used as an industrial tool.
This institute focuses its VR research mainly on two applications:
   1) Advanced cultural heritage reconstruction and visualisation
   2) Advanced VR health systems

       2.4.6.2 Virtual Reality Lab
Contact: Prof Kijima.
Website: http://www.vsl.gifu-u.ac.jp/~kijima/

2.4.6.2.1 Team overview

This group is headed by Professor Kijima. It is a collaborative center between research
academia and industry (Satellite venture business laboratory).

2.4.6.2.2 Main research topics and/or demos

The main stream of research deals with the following items:
   •    Visualization:
                This concerns basic research and the development of novel prototype
                visualisation systems. Several visualisation concepts are demonstrated in
                medical applications, e.g. the projection head-mounted display allowing
                stereovision without glasses. The team also investigates reducing rendering
                latency through eye-tracking technology, and the visualization of
                high-dimensional spaces.
   •    Motion capture and restitution:
                A 3D scanner that allows tracking objects the size of humans and animals
                has been developed. Motion restitution is demonstrated through a horse
                feedback device coupled with a real-time visualization screen, which
                reproduces a horse-riding experience with high fidelity.
   •    Haptic devices mainly for surgical applications

2.4.7 ATR Media Integration & Communications Research
      Laboratories (MIC-ATR), Advanced Telecommunication
      Research Institute International (ATR)
Contact: Prof. Nobuji Tetsutani
Website: http://www.mic.atr.co.jp/






       2.4.7.1 Team overview
ATR Media Integration & Communications Research Laboratories (MIC-ATR) were created
by ATR about fifteen years ago. The major goal is to explore basic technologies for both
realistic multimedia communications and hyper-realistic communication environments, for
better sharing of thoughts and images between humans and machines.
MIC-ATR is composed of six departments, organized according to the research areas
described in the following section.
Human resources: the laboratory has six professors (project heads) and about thirty-three
other members, between assistant professors and PhD candidates.
Equipment: no precise inventory of material (hardware and software) is available for the
MIC-ATR lab. However, from its scientific papers one can readily appreciate the substantial
equipment available at this laboratory.


       2.4.7.2 Main research topics and/or demos
The main research areas of this laboratory are based on the items described below:
   •   Technologies for Generating Communication Environments (Department 1)
               To enable distant people to communicate with each other through virtual
               environments, by developing technologies for generating communication
               scenes and for reproducing and synthesizing human images in those scenes.
   •   Agent Interface (Department 2)
              The objective here is to develop intelligent interface agents which support and
              enhance human-to-human communication and human-machine interaction.
   •   Research into Communications by Mental Images (Department 3)
               Study of techniques that can help non-experts express their mental images
               freely and flexibly using multimedia such as video and audio.
   •   Human Communications Science (Department 4)
              Investigations into rich and natural human-to-human communication processes
              guide technological developments toward friendly and natural conversational
              exchanges in communications with computational agents and other new
              electronic media.
   •   Research on Virtual Reality (Department 5)
              Research on Virtual Reality is being conducted to create suitable environments
              for better communications.
   •   Research on Art & Technology (Department 6)
               Artists and engineers conduct joint research to provide communication
               systems with greater sensitivity.








3 VR/VE research in some other world regions
Besides the countries discussed above, other interesting ones include Canada, New Zealand
and Brazil (Australia is unfortunately missing from this survey). One may ask whether these
labs are satellites of the US or research communities in their own right. These VR labs
outside the US, Asia and Europe do not all have a large impact on the international
research community. Nevertheless, they host interesting research personalities, and they are
connected worldwide, which seems to be essential to their survival. From that point of view,
it is worth examining here how their cooperation models work.

3.1 Canada
This section is based on a Web investigation and on email contact with the AMMI and
DISCOVER labs.


3.1.1 AMMI Lab (Advanced Man Machine Interface Laboratory),
      Department of Computing Science, University of Alberta
Contact: Prof. Pierre Boulanger
Website: http://www.cs.ualberta.ca/ammi/

       3.1.1.1 Team overview
The AMMI lab is composed of 7 faculty members and 14 researchers. Its research activity
centers on the development of new man-machine interfaces that allow computer systems to
enhance human abilities by adapting to their needs. The main topics of this lab are:
   •   Collaborative Virtual Reality Environments
   •   Immersive Visualization Spaces (the VizRoom)
   •   Collaborative Computational Physics Over WestGrid Infrastructure
   •   Virtual Reality and Human Perception
   •   Responsive Avatars and Environments
   •   Music Visualization
   •   Tele-Immersive Systems
   •   Reverse Engineering and Dimensional Validation
   •   Virtual Wind Tunnel
The AMMI lab has many industrial partners and sponsors, such as: Alias Systems, and
Autodesk (Toronto), Bioware, Syncrude Canada, Telus World of Science, and TRLabs
(Edmonton), Bombardier (Montreal), Creaform 3D and Innovmetric (Quebec), Hewlett
Packard Research (Palo Alto, USA), MD Robotics (Brampton), Netera Alliance, and Tanberg
(Calgary), Silicon Graphics (Vancouver).
Its research projects are funded by local, national or US agencies, such as:
   •   Government of Alberta : ASRA, ASRIP, Innovation & Science, Informatics Centre for
       Research Excellence (iCORE);
   •   Government of Canada : Canadian Fund for Innovation (CFI), Heritage Canada,
       NSERC, Western Economics Diversification Program (WED);




   •   USA: National Science Foundation (NSF).

       3.1.1.2 Main equipment
Visualization displays:
   •   The VizRoom, a three-walled stereoscopic immersion room
   •   Large passive stereoscopic display based on Disney screen technology
   •   Dimension Technology auto-stereoscopic display
Devices:
   •   Polhemus electromagnetic 3D trackers
   •   Intersense inertial 3D trackers
   •   Phoenix system optical 3D tracker
   •   Microscribe mechanical 3D digitizer
   •   High Precision 3D laser scanner from Kreon
   •   Photogrammetric 3D digitizer from 3DMD
   •   Two Phantom Omni haptic interfaces
   •   Digitizing gloves
   •   Various video digitizers including a 25 camera system from Herodion
   •   Voice and musical instrument digitizer
   •   MIDI keyboard console
   •   Electroencephalographic recorder
Computing Equipment:
   •   SGI Onyx 2 with six CPUs and three graphics pipes
   •   Soon to be replaced this year by a 32-CPU, 16-pipe SGI machine with 64 GB of
       memory
   •   Cluster of eight Dell PCs with 1 Gb/s connection and high-end graphics cards
   •   20 high-end graphics PCs running under Linux and Windows
   •   Macintosh G5 Machine
   •   Westgrid HPC infrastructure (http://www.westgrid.ca)
Communication Equipment:
   •   Three Access Grid videoconferencing environments
   •   Five fully optical switches from BigBangWidth
   •   Access to a wireless network running at 10 Mb/s
   •   Access to the normal campus network at 100 Mb/s
   •   Two Tanberg dual channel video conferencing encoder/decoder units

       3.1.1.3 Main research topics and/or demos
Virtualized Reality: this project is a generalization of the standard visual simulation
paradigm where the model and the actions used in the simulated world are extracted from
various sensors and information retrieval systems. The resulting visual simulation aims at an
exact representation of the real world.
Remote Medical Training Using VR: The focus of this project is to develop shared hapto-
visual-audio-virtual environments (HAVE) with advanced multi-point video conferencing,
new display and interface technologies, and distributed latency compensated haptic
technologies that will be used for collaborative medical research and training in
ophthalmology.





Virtual Wind Tunnel: The goal of this project is to explore how advanced computing
technologies and networking could allow scientists and engineers to solve complex
Computational Fluid Dynamics (CFD) problems, creating in effect an advanced collaborative
Virtual Wind Tunnel (VWT).
Image-Based Tele-Immersion: The AMMI lab is developing, with HP Research in Palo Alto,
California, a new generation of tele-communication technology which aims at giving
participants in a meeting (local and remote) the illusion that they are in the same room.
Contrary to current video conference technologies, tele-immersive systems will allow
meeting participants to express non-verbal communication cues: a critical element in
creating a real sense of remote presence. HP has released a commercial prototype of this
system called HALO. In this context, the AMMI lab is working on the use of auto-stereo
displays to give participants the illusion of depth perception without having to wear glasses.
In addition, they also work with HP on basic image processing algorithms such as
background removal and multi-camera interpolation.
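As a concrete illustration of the background-removal step, a minimal per-pixel difference test could look as follows (the threshold value and array layout are illustrative assumptions; the actual HP/AMMI algorithms are not detailed in this report):

```python
import numpy as np

def remove_background(frame, background, threshold=30.0):
    """Mask out pixels that differ little from a reference background frame.

    frame, background: HxWx3 uint8 arrays. Returns the frame with
    background pixels zeroed out (a basic per-pixel difference test;
    the threshold is an assumption for this sketch).
    """
    diff = np.abs(frame.astype(np.float32) - background.astype(np.float32))
    mask = diff.sum(axis=2) > threshold          # foreground where change is large
    return frame * mask[:, :, None].astype(frame.dtype)

# Toy example: a uniform background with a bright square "participant"
bg = np.full((4, 4, 3), 100, dtype=np.uint8)
fr = bg.copy()
fr[1:3, 1:3] = 250
out = remove_background(fr, bg)
```

Real systems would add temporal averaging of the background model and morphological cleanup of the mask, but the basic principle is the same.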
Spatial Navigation Studies: This project studies how people learn to navigate through a set
of mazes, initially guided by arrows. In some conditions, participants are provided with many
perceptual cues (motion, disparity, texture, etc.); in others they navigate in perceptually
impoverished environments. Does this affect the ease with which they can navigate these
environments? Difficulty of navigation is assessed in several ways: first, with behavioural
measures (e.g. time to navigate, number of navigational errors), and second, with
electroencephalographic (EEG) recordings.
Image-based Rendering: This project focuses on video-based methods that allow quick and
non-intrusive capture using inexpensive commodity cameras (e.g. webcams, digital cameras or
camcorders); a consumer PC is sufficient for the processing and rendering. Uncalibrated
video from different views of an object or scene is compiled into a geometry and texture
model. The geometric modeling is based on recent results in non-Euclidean multiview
geometry, and the texture is represented using a novel texture basis capturing fine-scale
geometric and light variation over the surface. During rendering, view-dependent textures are
blended from this basis and warped onto the geometric model to generate new views.
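The final blending step can be sketched with a small example, assuming a simple angular-proximity weighting of the basis textures (the project's actual texture basis and warping are more sophisticated and are not specified in this report):

```python
import numpy as np

def blend_textures(basis_textures, basis_dirs, view_dir):
    """Blend basis textures by the angular proximity of their capture
    directions to the current view direction (softmax-style weights;
    the sharpening factor 5.0 is an assumption for this sketch).

    basis_textures: list of HxW arrays; basis_dirs: list of unit 3-vectors.
    """
    view = np.asarray(view_dir, dtype=np.float64)
    view /= np.linalg.norm(view)
    cosines = np.array([np.dot(view, d) for d in basis_dirs])
    weights = np.exp(5.0 * cosines)              # favour views near the viewer
    weights /= weights.sum()
    textures = np.asarray(basis_textures, dtype=np.float64)
    return sum(w * t for w, t in zip(weights, textures))

# Two basis views: the blend favours the texture captured closest to the viewer
t0 = np.zeros((2, 2))
t1 = np.ones((2, 2))
blended = blend_textures([t0, t1],
                         [np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0])],
                         view_dir=[0.9, 0.0, 0.1])
```

With the viewer nearly aligned with the second capture direction, the blended result is dominated by `t1`.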
ANIMUS Project: The goal of this project is to develop a novel framework allowing artists
and designers to define artificial characters that can interact with humans in a natural and
fruitful way in interactive media such as virtual reality simulations and video games.
Music Visualization: This project is developing a framework used to generate real-time
music visualizations that can be used to augment a live musical performance. A musical
feature extraction system has been implemented in the music processing environment,
Max/MSP. The perceptual parameters of pitch, loudness and timbre are extracted from a
vocalist's performance, and chords played on a keyboard are identified. These extracted
musical features are then mapped to responsive visual metaphors.
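A toy sketch of such a feature-to-visual mapping is given below (the parameter names and ranges are illustrative assumptions; the lab's actual Max/MSP mappings are not described in this report):

```python
def features_to_visuals(pitch_hz, loudness_db, brightness):
    """Map perceptual audio features to visual metaphor parameters.

    Assumed mapping for this sketch: pitch -> hue (low notes red, high
    notes blue), loudness -> size, timbral brightness (0..1) -> saturation.
    """
    # Clamp inputs to plausible ranges (assumed bounds)
    pitch_hz = min(max(pitch_hz, 80.0), 1000.0)
    loudness_db = min(max(loudness_db, -60.0), 0.0)
    hue = 240.0 * (pitch_hz - 80.0) / (1000.0 - 80.0)   # 0 (red) .. 240 (blue)
    size = 10.0 + 90.0 * (loudness_db + 60.0) / 60.0    # 10 .. 100 px
    saturation = min(max(brightness, 0.0), 1.0)
    return {"hue": hue, "size": size, "saturation": saturation}

# A4 at moderate level maps to a mid-range hue and a fairly large shape
params = features_to_visuals(pitch_hz=440.0, loudness_db=-12.0, brightness=0.7)
```

In a live setting these parameters would be updated every analysis frame and drive the rendered visual metaphors.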
Three-Dimensional Gesture Recognition: The interpretation of human motion and gestures
has been widely studied in the fields of computer vision and biomechanics. Its possible
applications range from autonomous virtual character interaction, automatic synthetic character
animation, gesture recognition, sign language recognition and sports performance analysis to
military applications, among others. With a few exceptions outside the field of biomechanics,
there are few algorithms capable of performing a full analysis of three-dimensional
kinematics, and even fewer for the automatic analysis and recognition of kinematic systems.
The goal of this project is to develop new algorithms to solve many of these issues.




Canadian Design Research Network: The AMMI Lab is an active member of the Canadian
Design Research Network, a consortium of academics and partners from the private, public
and non-governmental sectors working together to improve design outcomes in Canadian
society through research and design.


3.1.2 DISCOVER (Distributed & Collaborative Virtual
      Environments Research Laboratory) School of Information
      technology, University of Ottawa
Contact: Prof. Nicolas D. Georganas and Emil M. Petriu.
Website: http://www.discover.uottawa.ca/

       3.1.2.1 Team overview
DISCOVER was established by Professors Nicolas D. Georganas and Emil M. Petriu at the
School of Information Technology, University of Ottawa. Its staff is composed of 7 professors,
18 PhD students, 18 MASc students and 2 researchers.
Research in this area was started by the two colleagues in 1997 and has resulted in many
projects funded by Ontario (CITO, ORDCF) and federal Networks of Centres of Excellence
(CITR, TL), NSERC, CANARIE, government labs (CRC), industry (Newbridge/Alcatel,
INCO, QNX), the Canada Research Chairs Program, the Canada Foundation for Innovation,
the National Capital Institute of Telecommunications (NCIT), the University of Ottawa and
several other sources.
Two major applications of Collaborative Virtual Environments were developed: vCOM
(virtual e-commerce) and industrial training.

       3.1.2.2 Main equipment
One principal feature of DISCOVER is a Silicon Graphics InfiniteReality3 System based on
an ONYX 3400 computer with 12 processors and three graphic pipes, controlling a Mechdyne
MD Flex 30 ft screen that can fold and form a CAVE. It is composed of the following
equipment:
   •   SGI Onyx 3400 (12x500MHz CPU, 3GB RAM, 360GB HD, 3 Graphic Pipes)
   •   Mechdyne MD Flex, 3 screens of 10x12 ft, max configured resolution 1280x1024
       @ 96 Hz stereo x 3 pipes
   •   5.1 surround sound system
   •   StereoGraphics CrystalEyes x 15
   •   Ascension Flock of Bird (2 sensors) with extended range transmitter
   •   Fakespace Wanda
Another powerful VR system available in the DISCOVER Lab is the custom-built
D.I.V.I.N.E. system. It integrates the functionalities of the following equipment into two
complete collaborative VR workdesks:
   •   14 x Dual Xeon 3 GHz each with 1GB of memory and nVidia Quadro FX 3000 video
   •   7 x Passive Stereo projection screen surfaces arranged to provide two independent 3D
       display volumes
   •   Virtools Dev VR authoring software with Physics pack and VR pack




   •   16 x Dragonfly color firewire cameras 640x480 @ 30 fps with sync units
   •   16 x Dragonfly HiRes color cameras 1024x768 @ 15 fps with sync units
   •   Ascension Flock of Bird tracker (4 sensors) with extended range transmitter
   •   5.1 surround sound system
   •   Foundry FastIron FES X424 Gigabit switch linking up this equipment to NCCT
       laboratory and the NCIT network
Other VR equipment includes:
   •   Ascension MotionStar 6 sensors with extended range transmitter
   •   InterTrax2 USB
   •   Ascension mini-bird
   •   Contact Precision Instruments EEG8
   •   Immersion Cyberglove + Cybergrasp + CyberForce
   •   Immersion CyberTouch left and right
   •   Immersion Cybergrasp with cyberglove left and right
   •   Sensable Phantom Omni (x2)
   •   Sensable Phantom 1.0
   •   MPB Freedom-6s hand controllers (2 left hand, 2 right hand)
   •   FCS HapticMaster
   •   Reachin display 2A with Sensable Phantom Desktop
   •   Kaiser Electro-Optics Cyberview XL35 HMD
   •   Kaiser Electro-Optics Cyberview XL50 HMD
   •   MicroVision Nomad see-through HMD
   •   Prospecta holographic display
   •   Stereographics SynthaGram SG222D auto-stereo display
   •   Octane2 v12 graphics dual R12000 400MHz CPU
   •   Octane2 v10 graphics R12000 400MHz CPU (x4)
   •   Zaxel free-viewpoint 3D video capture system

       3.1.2.3 Main research topics and/or demos
3.1.2.3.1 3D Physical Modelling and Animation
       Physical Modelling by Human-Object Interactions [NSERC Discovery]
       Dr. Lang's research is directed towards the creation of physical models of real-world
       objects for computer graphics and virtual environments. In computer graphics the
       quality of the model determines how realistic a virtual environment may appear to a
       visitor. For a realistic impression virtual objects should have many geometric and
       physical properties including shape, colour, weight, softness and tactile texture. In the
       past, research has focused mainly on the acquisition and representation of realistic
       shape and colour of virtual objects. Shape and colour are important for the visual
       display of objects but for interactive environments to become truly realistic, i.e.,
       physical, other physical properties of objects need to be acquired and represented
       faithfully as well. Dr. Lang's goal is to develop a new approach to acquire these
       physical models in a simpler and more natural way. His specific approach to the
       acquisition of physical properties of objects is based on observing the probing of an
       object by a human. The human is observed by cameras which use advanced computer
       vision techniques to track the human-object interactions. The human modeller will be
       able to explore an object's behaviour in a natural way by using a force-sensing pen for
       the probing. The research will initially focus on elastic deformation of objects but will
       explore additional behaviour in the future, e.g., haptic surface texture or impact sound.





       Modelling Consumer Products for Low-End Haptic Displays [ORNEC]
       In haptic environments users can interact with objects through their visual as well as
       tactile sense. Unfortunately, the average consumer cannot afford the expensive
       equipment required for high-fidelity haptic modelling and display. Low-end haptic
       devices, as well as novel and simple haptic modelling approaches will enable new
       applications. This project explores how to employ low-end haptic devices for the
       rendition of consumer products in on-line retailing and how to model these products.

       Research for this project is concerned with haptic parameter acquisition,
       generalization of haptic parameters for object modelling and display of consumer
       products for e-commerce web applications.

3.1.2.3.2 Distributed and Collaborative Virtual Environments
       LORNET (Theme 5): Creation, Search and Delivery of Advanced Multimedia
       Learning Objects [NSERC Research Network Project]
       Currently, learning object repositories embed minimal advanced multimedia
       (including 3D or virtual reality learning objects). Multimedia “objects” are composed
       of a spatial and/or temporal synthesis of time-dependent (audio, video, animations,
       virtual reality) and time-independent media (text, images, data). Simulations, such as
       virtual reality, undoubtedly create a very rich learning environment. The creation of
       these objects requires advanced authoring tools, particularly when they may include
       VR scenes and augmented reality annotations, or have to be stored in different
       versions, or appropriately coded, to accommodate user preferences, context situations
       and network quality-of-service (QoS) conditions. The multimedia learning objects will
       be protected, in terms of intellectual property, by digital watermarking techniques. To
       fully use the capacities of broadband networks for learning, content-based and
       context-based search and delivery methods for these advanced multimedia learning
       objects are needed.

       Federation Grid [CANARIE Project]
       The project, known as FedGrid (Federation Grid), will leverage existing military
       standards and technologies to build a scalable platform for a collaborative grid that can
       support a wide range of distributed simulations, including emergency management
       training, military training, high-end image processing and cost-effective computer
       game development. The new tools and architecture built in the project will make it
       easier for software designers and domain experts across the country to collaborate
       online to build software and to tailor simulations better, quicker and cheaper.
       MDA, an information solutions firm in Richmond, BC, has teamed up with
       Vancouver-based software developer Magnetar Games Corp to develop the
       infrastructure and simulation technology for building the emergency management
       trainer. The university partners - Carleton University, the University of Ottawa, and
       the University of British Columbia - are supporting the project with infrastructure
       research and facilities to host an emergency management demonstration. The National
       Defence research partner, DRDC-Valcartier, is providing the data sets for the
       demonstration and is responsible for evaluating how well the simulation showcases the
       technology.






       Personalized Privacy Preserved Collaborative e-Commerce Environments
       [ORNEC]
       Although e-Commerce systems have progressed over the past few years, they lack
       important aspects such as building long-term and profitable relationships with
       customers and facilitating an environment that encourages buyers to buy more. The
       actual execution of e-Commerce today is very different from its real-life counterpart;
       for the most part it is a “Web page” with a listing of items and prices. Social
       aspects such as personalization, collaboration and interactivity are lacking. This
       project proposes to investigate, design and implement extensions
       to current e-Commerce systems in order to include collaboration between buyers
       and/or vendors, near-real-world interactivity such as item manipulation using 3D and
       Virtual Presence technology (including “touching” using haptics), and personalization
       using intelligent agents that collect shoppers’ behaviour for customization. Such
       environments obviously contain a huge amount of information about people. Hence,
       relevant privacy policies must be adhered to. Privacy management will therefore be
       integrated as a crucial component of the system, and research will be conducted on
       how a value-centered design process can be created so that important policy and legal
       values are preserved; recognizing that respecting end-user privacy in fact makes good
       business sense.

       Collaborative Virtual Presence Modelling, Communications, and Applications
       [New Opportunities]
       Research on Collaborative Virtual Presence, and Multimedia in general, is inherently
       interdisciplinary. Interactions between researchers in Multimedia and computer
       graphics have a long tradition because Multimedia research started within the field of
       computer graphics. One of the major challenges in computer graphics and Multimedia
       is content generation which is addressed by the confluence of video processing and
       computer graphics. Additionally, research on computer haptics is gaining momentum
       while only “the first steps into a vast field” have been taken. Research on the creation
       of multimedia content including computer graphics, video processing and haptic
       rendering will benefit from the experience of the multimedia community with
       synchronization issues and networking. The proposed research on Collaborative
       Virtual Presence aims to exploit the potential of these interdisciplinary fields; the
       applicants have a background in all the corresponding areas. Computer-generated
       worlds in general have been a subject of interest for many years, yet many challenges
       remain. Recently, the ACM Special Interest Group on Multimedia agreed that there
       are three “grand challenges” in this field for the decades to come. One of those
       challenges is “to make interactions with remote people and environments nearly the
       same as interactions with local people and environments”. This challenge incorporates
       two problems: distributed collaboration and interactive, immersive three-dimensional
       environments. It also suggests that in addition to solving the distributed collaboration
       problem inherent in the challenge, research must also be conducted to address the
       promise of interactions with remote people, places, and virtual environments, using
       sensors (e.g., touch, smell, taste, motion, etc.) and output devices (e.g., large
       immersive displays and personal displays integrated with eye glasses) that offer the
       opportunity for more intimate and sensitive interaction with a remote environment.

3.1.2.3.3 Intelligent Sensor Networks and Ubiquitous Computing
       Tele-monitoring and Mobile Intelligent Sensor Agent Networks [NSERC Strategic
       Project]




       The objective of the project is to develop a new generation of wireless Intelligent
       Sensor Agents (ISA) and of ISA network architectures for complex environment
       monitoring.
       Some of the specific objectives of this research are:
       • Study of a distributed architecture for sensor networks integrating large numbers of
       both stationary and mobile wireless ISA’s deployed in the field.
       • Study of task-directed information gathering features to allow for a more efficient
       use of the inherently limited sensing and processing capabilities of each ISA. Sensor
       agents should be capable of selective environment perception that focuses on
       parameters that are important for the specific task and avoid wasting effort on
       processing irrelevant data.
       • Development of a multi-sensor fusion system able to integrate a large variety of
       sensors. This model-based system architecture will provide superior modularity, plug-
       and-play capability and transparency, allowing for easier sensor fusion and knowledge
       extraction.
       • Study of machine learning techniques for the real-time reactive behavior of the
       sensor agents. In order to find more efficient sensor fusion techniques, we will study
       the combination of the intrinsic reactive sensor behavior with higher-order world
       model representations of the environment.
       • Development of an agent-based resource management framework allowing
       seamless interaction and cooperation between a large diversity of distributed
       intelligent sensors.
       • Study of ad-hoc wireless sensor networks and development of new sensor
       communication protocols for multi-modality inter-sensor communication with
       appropriate quality of service.

       Ambient Multimedia Intelligent Systems (AMIS) [NSERC Discovery Project]
       The research program on Ambient Multimedia Intelligent System (AMIS) will
       focus on the design, implementation, testing and evaluation of multimedia
       technologies and applications running in ambient intelligence environments.
       Multimedia applications will be composed of combinations of media such as audio,
       video, text, graphics, animation, Virtual Reality, Augmented Reality and Haptics,
       where the media have spatial, temporal and/or semantic relationships among them.
       Natural interfaces, such as human gestures and body movements, will be at the core of
       this research program. User-centric computing, employing the user profile,
       preferences, emotions, behavior and context (of the user, the object and the situation),
       will be a main point of research interest. Multimedia digital content adaptation and
       delivery, according to user, situation and/or object context, will be carried out following
       standards such as MPEG-7 (for multimedia indexing and metadata creation), MPEG-4
       (for content coding, transport and rendering) and MPEG-21 (for digital item
       adaptation and stream control). The role of artificial intelligence algorithms and ambient
       “smart spaces” in this research is essential because they constitute key enabling factors
       for realizing natural interaction with multimedia content. The applications of this
       research are plentiful, ranging from the smart-home, smart-factory and smart-
       workplace of the future, to entertainment, digital media in health applications and
       digital media in the arts.

3.1.2.3.4 Multimedia Computing and Communications
       Advanced Protocols for Multi-Participant Multimedia Communications [NSERC
       Discovery]




       The aim of this research initiative is to produce the architecture, design and
       implementation of both new communication protocols and new multimedia tools and
       applications, based on the application-layer multicast paradigm, with a focus on the
       transfer of multimedia data in multi-user multimedia sessions with a very large number
       of participants, solving attendant issues such as routing, reliability, delay and
       bandwidth.

       An Infrastructure of Research for Audio-Visual Quality Evaluation, Security,
       Coding, and Communications [NSERC RTI]
       The main research program to be undertaken by the applicants using the
       proposed facilities concerns quality evaluation of multimedia content using digital
       watermarking. The equipment will also allow them to pursue other related research of
       common interest to all three labs, such as digital watermarking, audio/speech
       coding and multimedia communications. DISCOVER’s objective is not only to
       develop practical and effective watermarking-based quality evaluation systems, but
       also to compare their performance against subjective evaluation, and to test their
       applicability on the Internet and in other environments. Speech quality evaluation is a
       very important research topic. The Mean Opinion Score (MOS) is reliable, but the
       listening test is very expensive, time-consuming and sometimes impractical. The
       existing objective quality assessment methods require either the original speech or
       complicated computational models, which makes some applications of quality
       evaluation impossible. They propose to use digital audio watermarking to evaluate the
       quality of speech. The basis of the method is that a carefully embedded watermark in
       a speech signal will suffer the same distortions as the speech does. Their method does
       not need the original signal or a computational model. Experimental results show that
       the method yields accurate quality scores, very close to the results of PESQ
       (Perceptual Evaluation of Speech Quality), under the effects of MP3 compression,
       Gaussian noise and low-pass filtering. DISCOVER intends to continue this research to
       propose advanced quality evaluation methods that can measure the effects of packet
       loss and network fluctuation, which would be extremely useful for Voice over IP.
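The principle behind this watermark-based evaluation can be sketched as follows, assuming a simple additive spread-spectrum watermark with an exaggerated embedding strength so the toy detector is reliable (the team's actual embedding and detection schemes are not specified in this report):

```python
import numpy as np

def embed_watermark(signal, watermark, alpha=1.0):
    """Additively embed a known pseudo-random watermark.

    alpha is exaggerated for this demo; real systems use inaudible
    strengths with perceptual shaping.
    """
    return signal + alpha * watermark

def quality_score(received, watermark, alpha=1.0):
    """Blind detection statistic: correlate the received signal with the
    known watermark. Close to 1.0 for an undistorted signal; it drops as
    the channel distorts the speech (and the watermark with it)."""
    return float(np.dot(received, watermark) / (alpha * np.dot(watermark, watermark)))

rng = np.random.default_rng(0)
n = 10_000
speech = rng.standard_normal(n)                  # stand-in for a speech frame
wm = rng.choice([-1.0, 1.0], size=n)             # known pseudo-random watermark
clean = embed_watermark(speech, wm)
distorted = 0.5 * clean + 0.3 * rng.standard_normal(n)   # attenuating, noisy channel

q_clean = quality_score(clean, wm)
q_distorted = quality_score(distorted, wm)
```

Because the watermark undergoes the same channel as the speech, the drop in the detection statistic serves as a proxy for the quality degradation, without access to the original signal.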

       Automatic Word Choice and Recognition in Natural Language Processing
       [NSERC Discovery]
       The goal of this research is to develop ways to use more lexical semantics in
       combination with other techniques to improve the performance of natural language
       processing applications. Lexical semantics is the study of the meaning of words for
       use in a computational system.

       Management, Access, and Visualization of Archived Meeting Transcripts
       [NSERC CRD]
       The goal is to allow information retrieval from teleconference recordings. The
       DISCOVER team is working on filtering mistranscribed words in speech transcripts
       using statistical semantics approaches. They extended a previous approach for filtering
       mistranscribed keyphrases, and are able to filter out part of the mistranscribed words
       while retaining most of the properly transcribed ones. They are also investigating
       several word similarity measures (based on word distributions in large collections of
       text and on dictionaries), exploring several evaluation measures. The next
       challenge is to improve precision by applying their method to word lattices and by
       combining their semantic confidence scores with the acoustic confidence scores
       internal to the speech recognition engine.
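The filtering idea can be illustrated with a toy sketch that flags words whose average distributional similarity to the rest of the transcript is low (the word vectors and threshold below are invented for illustration; the team's actual similarity measures and corpora are not specified in this report):

```python
import numpy as np

# Toy distributional word vectors (hypothetical; real systems derive them
# from large text collections or from dictionaries).
VECTORS = {
    "meeting": np.array([0.9, 0.1, 0.0]),
    "agenda":  np.array([0.8, 0.2, 0.1]),
    "minutes": np.array([0.7, 0.3, 0.0]),
    "walrus":  np.array([0.0, 0.1, 0.9]),   # a likely mistranscription here
}

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def filter_mistranscriptions(words, threshold=0.3):
    """Keep words whose average similarity to the rest of the transcript
    exceeds a threshold (a crude stand-in for a semantic confidence score)."""
    kept = []
    for i, w in enumerate(words):
        others = [VECTORS[o] for j, o in enumerate(words) if j != i]
        score = np.mean([cosine(VECTORS[w], o) for o in others])
        if score >= threshold:
            kept.append(w)
    return kept

kept = filter_mistranscriptions(["meeting", "agenda", "minutes", "walrus"])
```

In the full system these semantic scores would be combined with the recognizer's acoustic confidence rather than used alone.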





3.1.2.3.5 Tele-Haptics
       Haptic Sensors and Interfaces for Robotic Telemanipulation and Interactive
       Virtual Environments [NSERC Discovery]
       The main objective of proposed research is the development of new haptic robot
       sensors and interfaces, composite geometric & haptic object models, and robust real-
       time computing techniques for robotic telemanipulation and interactive virtual
       environments. An experimental haptic interactive virtual environment will be
       developed to evaluate the new concepts.
       The short-term objectives are:
       (i)    Development of more efficient haptic robot sensors for robotic telemanipulation
       systems;
       (ii)    Study of new concepts for the development of haptic interfaces for object
       manipulation in interactive virtual environments and robotic telemanipulation systems;
       (iii) Development of new instrumentation and sensor data fusion techniques for the
       production of composite geometric & haptic object models which are conformal
       representations of real physical objects, accounting for their geometric shape and
       elastic behaviour while interacting through direct contact with other objects;
       (iv) Study of new computing techniques for the storage and real-time rendering of
       the composite geometric & haptic object models.
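A common, generic building block behind real-time rendering of such composite geometric and haptic models is penalty-based force computation: when the haptic probe penetrates an object's surface, a restoring force proportional to the penetration depth is returned along the surface normal. The sketch below (a sphere obstacle and an arbitrary stiffness gain, both invented for illustration) shows that textbook scheme, not the project's actual method.

```python
import math

def penalty_force(probe_pos, center, radius, k=800.0):
    """Penalty-based haptic force against a sphere: F = k * depth * n,
    where depth is the penetration depth and n the outward surface normal.
    Returns the zero vector when the probe is outside the object."""
    dx = [p - c for p, c in zip(probe_pos, center)]
    dist = math.sqrt(sum(d * d for d in dx))
    depth = radius - dist
    if depth <= 0.0 or dist == 0.0:       # outside, or exactly at the centre
        return (0.0, 0.0, 0.0)
    n = [d / dist for d in dx]            # outward surface normal
    return tuple(k * depth * c for c in n)

# Probe halfway into a unit sphere: the force pushes it back out along +x.
force = penalty_force((0.5, 0.0, 0.0), (0.0, 0.0, 0.0), 1.0)
```

In a real haptic loop this would run at roughly 1 kHz, with a damping term added for stability and the sphere replaced by the composite geometric/haptic object model.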

       HAVE: Hapto-Audio-Visual Environments for Collaborative Tele-Surgery
       Training over Photonic Networking [CANARIE Project]
       The main goal of this project is to develop shared hapto-virtual environments with
       advanced multi-point 3D video conferencing, new display and interface technologies,
       and solution servers technologies that will be used for collaborative medical research
       and training. Specifically, it is proposed to develop, in collaboration with Medical and
       Cognitive researchers, a realistic HAVE immersive collaborative virtual environment
       application for the training of ophthalmic residents in cataract surgery, linking the
       Virtual Reality CAVE systems at UofO and UofA over Photonic Networking, and
       through the regional high-speed networks (NCIT*net and ORION in Ontario and
       NeteraNet in Alberta) and CA*net4. The goal is to provide a good environment for the
       early training of the residents so they acquire the minimum skills required to safely
       continue their training in a clinical setting. The application will be evaluated by the
       Cognitive Scientist (Dr. Whalen of CRC) through human testing.

       This goal furthers CA*net4's objective, and specifically the objectives of the
       Advanced Applications Program, by developing an application that may deliver
       considerable benefits to the economy and quality of life, and that requires the
       bandwidth and networking capabilities being developed for CA*net4, particularly
       the use of end-to-end lightpaths to support a collaborative environment. The
       application and network-control methods developed in the project will contribute
       to maintaining Canada's leadership position in research networking.

       The overall objectives of this University/Industry/Government project are to:
       • Develop and research new enabling technology and multimedia applications for
         Hapto-Audio-Visual Distributed Virtual Environments (HAVE), allowing multiple
         globally-situated participants to perform collaborative medical training sessions
         over high-speed networks connected to heterogeneous computing resources;



       • Develop new real-time interfaces and new ways to explore the complex multi-
             dimensional space produced by large simulation programs, including haptics;
       •     Create an infrastructure of equipment and software for experimentation in
             Collaborative Medical Simulation for Training;
       •     Utilize as much as possible current infrastructures provided by the CA*net4, and
             the NCIT research network in the Ottawa area;
       •     Investigate the performance of networking technology for these applications;
       •     Investigate the use of end-to-end lightpaths for the communication between the
             remote sites and develop a method for establishing lightpaths with minimum
             propagation delays.

       HARVEST: A Framework for HAptic, inteRactive Virtual EnvironmentS for
       Tele-presence [NSERC Strategic]
       The main goal of this research project is to design and implement a framework,
       called HARVEST, that allows interactivity and tele-presence in Haptic Audio
       Visual Environments (HAVE) to be efficiently supported and managed over the
       Internet. These network-intensive applications require specific support at the
       transport and application layers, and the project's goal is to provide such support.
       The research proposed by DISCOVER falls within the Advanced Communications
       and Management of Information target area, covering Telepresence,
       Haptic-Audio-Visual Interfaces, and Distributed Interactive Environments under
       the Network Intensive Applications topic.

3.2 New Zealand Labs
3.2.1 University of Otago, Dunedin, New Zealand, Department of
      Information Science, Human-Computer Interaction (HCI)
Contact:         Dr. Holger Regenbrecht
Address:         P.O. Box 56
                 Dunedin / New Zealand
Website:         http://www.hci.otago.ac.nz/default.html

      3.2.1.1 Team overview
The Department of Information Science has ten members. Those working in this area include:
   • Holger Regenbrecht, who works in the field of Virtual and Augmented Reality, 3D
     Teleconferencing and Embodied Interaction.
   • Christoph John, PhD student: hand recognition and visualisation in conferencing
     environments, supervised by Holger Regenbrecht together with Geoff Wyvill (CS).
   • Sabine Lukaschik, Diploma (Masters) student: gaming environment for (e)Learning,
     with a focus on input modalities.
   • Katrin Frank, Diploma (Masters) student: visualisation of spatial and temporal jaw
     movement data, together with Physiotherapy (Gill Johnson).
   • Cameron Teoh, PhD student: task performance in teleconferencing environments,
     supervised by Holger Regenbrecht together with David O'Hare (Psychology).
   • Andrew Long, PhD student: collaborative learning environments, supervised by
     Holger Regenbrecht together with George Benwell (Dean SoB).
   • Greg Scown, postgraduate student: research thesis on non-verbal communication in
     teleconferencing.
   • Richard Barrington, postgraduate and summer student: research thesis on
     visualisation and interaction in a cultural landscape project for Central Otago.
   • Joerg Hauber, PhD student at the University of Canterbury (HIT Lab NZ): 3D user
     interfaces and social presence, with Holger Regenbrecht as associate supervisor.
   • Ralph Schoenfelder, PhD candidate at the University of Graz (Austria): immersive
     modeling in VR/AR, with Holger Regenbrecht as associate supervisor.
The general research focuses of the Department of Information Science are:
   • ubiquitous computing
   • social computing
   • convergent information and communication technology (ICT)
   • devices and pervasive computing
   • Three-dimensional Teleconferencing
   • Virtual and Augmented Reality
   • Interfaces to complex information
   • Three-Dimensional User Interfaces (3DUI)
   • Psychological Aspects of Mixed Reality
   • Embodied Interaction Theory
   • Telepresence
   • Ubiquitous, Tangible, Perceptual Interfaces
   • Usability Evaluation
   • Multi-Media Interface Design

      3.2.1.2 Main research topics and/or demos
Middle- and long-term research in information science will be confronted with two main
challenges:
   1. an increasing complexity of information in terms of quantity and quality and
   2. an increasing need for collaboration and communication on all geographical levels
      (co-located, local, regional, national, international).
      3.2.1.3 Links
http://www.hci.otago.ac.nz/default.html
http://www.igroup.org/regenbre/cv.html

3.2.2 Human Interface Technology New Zealand (HIT Lab NZ)
Address:      University of Canterbury
              Private Bag 4800
              Christchurch 8104
              New Zealand
Website:      http://www.hitlabnz.org/route.php?r=home

      3.2.2.1 Team overview
The HIT Lab NZ is a partner of the world-leading HIT Lab US based at the University of
Washington in Seattle, and shares its goals of developing revolutionary interfaces that
transform the way people interact with computers.
Its staff is composed of thirty-one members. The Director, Dr Billinghurst, has a wealth of
knowledge and expertise in human-computer interface technology, particularly in the area of
Augmented Reality (the overlay of three-dimensional images on the real world). The staff
also includes the International Director Prof. Thomas Furness, the General Manager Richard
Bishop, the HIT Lab NZ Faculty Member and Project Leader Dr Richard Green, Larry
Podmore of the Canterbury Development Corporation Ltd, the Marketing & Communications
Consultant Deborah Parker, the Multimedia Consultant Eric Woods, the Administration &
Finance Manager Anna-lee Mason, the Software Engineer David Sickinger, the Hardware
Engineer Dr Marilyn Lim, the IT Manager Nathan Gardiner, the Administration Assistants
Daniela Achatz and Donna Salaiau, and the Media System Designer Tobias Gefken.
Additionally there are three Post-Doctoral Fellows, one Adjunct Fellow, eleven PhD
Students, two Masters Students, two Interns, one Volunteer, two Research Associates, two
Casual Employees and two Visiting Researchers.
The general goals and research focuses of the HIT Lab NZ are:
   •   Develop and transition to industry leading-edge human-computer interfaces to
       accelerate economic development in New Zealand.
   •   Build a world centre of excellence in human interface technology.
   •   Provide multi-disciplinary project-based learning experiences for students.
   •   Act as a bridge between academia and industry.
   •   Medicine: The Virtual Retinal Display (VRD) which scans images directly onto the
       retina of the eye has been found to work well on the eyes of some people with low
       vision. Further research is being pursued which may allow blind people to see. Other
       augmented reality (the ability to see virtual objects and the real world simultaneously)
       eyeglasses were also found to enhance the motor skills of people with Parkinson’s
       disease - allowing them to walk more readily.
   •   Simulation & Training: Virtual reality and augmented reality technologies are being
       used to train airline pilots and surgeons to prepare them for a range of potential
       outcomes.
   •   Entertainment: New interfaces are being developed so that entertainment technologies
       such as game consoles can be used to immerse players in the games they are playing,
       rather than staring at a flat two-dimensional screen.
   •   Scientific Visualisation: Visualisation using virtual and augmented reality aids
       scientists to analyse and manipulate large data sets.
Sponsors and partners are:
   •   University of Canterbury
   •   The University of Washington
   •   Canterbury Development Corporation Ltd (CDC)
   •   32 industry partners
   •   17 academic partners

The technical systems of the HIT Lab NZ include:
   • VisionSpace Theatre facility (120-degree screen space)
   • VisionSpace portable systems

       3.2.2.2 Main research topics and/or demos
The HIT Lab NZ mission is to empower people through the invention, development,
transition and commercialisation of technologies that unlock the power of human intelligence
and link minds globally.
Some applications and demos are:
   • AR Jam
   • AR Multimodal Interface



   •    AR Tank War
   •    AR Tennis
   •    Auditory Interface for Mobile Devices
   •    Bluetooth Health Monitor
   •    Human perception in Teleoperation
   •    Hybrid User Interfaces
   •    Intelligent Vision Algorithm Selection
   •    Interactive Handheld Display for AR
   •    IPhedron
   •    Large Scale Display Interaction Techniques
   •    Low Cost Robotics Platform supporting AR
   •    Magic Book (Authoring Tools, Mixed Reality Book, Natural 3D Interaction,
        Transitional Interface)
   •    Magic Lens
   •    Mobile Social Collaboration
   •    Multi-Modal Interaction in AR
   •    Occlusion-Based Interactions
   •    Office of Tomorrow
   •    Optimising Marker Based Technology
   •    Realism of Virtual Characters
   •    Refractive AR
   •    Robust Dynamic Inertial Head Tracking
   •    Robust Hybrid Tracking for AR
   •    Smash Palace
   •    Social Presence in Virtual Environments
   •    Spatial Video Conferencing
   •    Tangible Tiles
   •    Touch-Sensitive Board
   •    Unconstrained 3D Environment Mapping
   •    Wireless EEG

       3.2.2.3 Links
http://www.hitlabnz.org/route.php?r=home
http://www.hitlabnz.org/visionspace/route.php?r=vspace-home

3.3 Brazil
In Brazil, two institutions have been active over the last decade. Both are linked to the
Petrobras oil company, which shows the importance of an industrial client/sponsor for
VR/AR technology in its early phases.


3.3.1 Laboratory of Integrated Systems Polytechnic School -
      University of São Paulo – Brazil
Contact:      Prof. Dr. Marcelo Knörich Zuffo
              Prof. Dra. Roseli de Deus Lopes



       3.3.1.1 Team overview
This work group has earned Brazil international recognition in technological innovation. The
research of the Virtual Reality Nucleus focuses on high-performance computing systems for
visualization, simulation and interaction.
A large part of the systems used by the Nucleus was developed in-house, giving the team
deep expertise in building and using the most modern virtual reality technologies.
Among the Nucleus's main lines of research, the most prominent is its pioneering use of
computer clusters for high-quality graphical image synthesis in immersive environments.
The Nucleus also hosts the Digital CAVE, an important infrastructure for research on
multi-projection and immersive environments. This system has made the Nucleus a pioneer
on several research fronts and earned it international recognition.
Currently, the Nucleus has a team working on the optimization of clusters of commodity
computers and on the development of applications for the real-time visualization of complex
3D data structures.
Partners:
   •   Itautec
   •   FINEP
   •   INTEL
   •   Petrobras

       3.3.1.2 Main research topics and/or demos
   •   Sirius - Virtual Reality Systems Management based on computer clusters
       Immersive multi-projection systems are complex environments that became even more
       sophisticated with the use of computer clusters.
       Sirius provides a clear interface for managing the applications and devices of the
       Virtual Reality Nucleus infrastructure. It is a graphical interface, implemented in
       Java, designed for a heterogeneous cluster-node system. Currently, Sirius can control
       the light, video, tracking and audio systems, the robotized cameras and the
       demonstrations. All operations and problems are logged for later debugging and
       usage control.
   •   JINX - 3D Virtual Environment Production
       JINX is a fully distributed virtual-environment browser with special support for
       commodity computer clusters and immersive visualization devices. It aims to make
       virtual reality applications based on the X3D format fast and easy to develop,
       offering great flexibility in displays and interaction devices and allowing users to
       concentrate on content creation. JINX provides support for node synchronization and
       resource sharing, from framelock to datalock.
   •   The Cathedral Project – a virtual tour at Sibenik
       The project provides a virtual tour through Sibenik Cathedral. The cathedral model
       was converted from 3DMAX to the ASE format and is then loaded into memory;
       OpenGL structures are used for rendering.
       The renderings are distributed among the graphics cluster nodes and directed to the
       Digital CAVE. Currently, navigation can be done with wireless devices from inside
       the CAVE, without the need for a second person to control the navigation.


   •   Hang-gliding – Virtual Hang-gliding over Rio de Janeiro
       From the perspective of a hang-glider, the user can take a virtual aerial tour over the
       city of Rio, visiting the most traditional tourist spots. To complete the sensation of
       immersion provided by an HMD (head-mounted display) 3D visualization system,
       the user is surrounded by environmental music, as well as sounds geographically
       positioned over the Sambódromo and the Maracanã.
       The system has since been ported to the Digital CAVE.
   •   Glass - Library for distributed computing
       Glass's objective is to improve the development of applications based on graphics
       clusters. The library was developed in C++ and offers data abstraction and
       independence from other libraries: common and user-defined data types can be
       created and manipulated, and already available solutions can be reused. Reusing
       other libraries is the developer's choice; for example, the OpenGL graphics library or
       OpenGL Performer can be used to implement applications. A Java binding is under
       development.
   •   OP_ERA – Artmedia at the Digital CAVE
       OP_ERA is an immersive-interactive environment created by Daniela Kutschat &
       Rejane Cantoni, in which the visitor and the surrounding space are treated as
       mutually integrated fields. Produced by the Virtual Reality Nucleus, the environment
       was installed at the Digital CAVE. OP_ERA includes:
             o research and development of scientific and artistic space models;
             o human-machine interfaces (hardware and software) specially designed for
               immersive-interactive environments, in which the human and the artificial
               (computer) agent are symbiotically interconnected;
             o alternative ways of spatial perception and cognition through
               multidimensional experimentation with conceptual space models.

   •   Immersive Audio for Complete Virtual Reality Systems
       A research and implementation project for a flexible and scalable three-dimensional
       audio reproduction system at EP-USP's Digital CAVE. Its objectives range from the
       implementation of decoder systems for traditional surround formats (such as 5.1, 7.1,
       10.2, DTS, Dolby Surround) to the development and provision of more complete
       formats for 3D audio generation and reproduction, still restricted to high-fidelity and
       research environments, where not only surround reverberation and envelopment are
       important, but mainly the location of sound objects distributed in a three-dimensional
       scene and the synthesis of the acoustic environment.
   •   Wireless Stereoscopic Video Transmission
       The prototype built at the Digital CAVE allows stereoscopic video capture,
       transmission and reception. It consists of a helmet with two attached cameras whose
       video capture is synchronized. The video is encoded and transmitted, via WiFi
       (802.11) technology, to another computer on the network. The whole system is
       mounted in a backpack, making it lightweight and portable and allowing the user to
       move freely within the WiFi coverage area.



   •   Paulista entrance flags and symbols in a three-dimensional view of the public
       works of Victor Brecheret, from sculpture to architecture

       The research contributes to the consolidation of the Paulista and Brazilian cultural
       patrimony, helping to maintain part of the Paulista cultural identity through the study
       of Brecheret's works in sculpture and architecture. It seeks to identify these works,
       with the assistance of the Victor Brecheret Institute, in private and public collections.
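The framelock/datalock node synchronization mentioned above for JINX can be illustrated with a software swap barrier: no cluster node may present frame N until every node has finished rendering it, so the projection screens stay in lockstep. This is only a schematic, thread-based sketch with invented names, not JINX code.

```python
import threading

NUM_NODES = 4   # render nodes driving one screen each
FRAMES = 5

# All nodes must reach the barrier before any of them "swaps" its buffer,
# so no node can run ahead of the others by even one frame.
swap_barrier = threading.Barrier(NUM_NODES)
swap_log = []                 # (frame, node) pairs, in swap order
log_lock = threading.Lock()

def render_node(node_id):
    for frame in range(FRAMES):
        # ... render the frame into the back buffer here (omitted) ...
        swap_barrier.wait()   # software framelock: wait for every node
        with log_lock:
            swap_log.append((frame, node_id))

threads = [threading.Thread(target=render_node, args=(n,)) for n in range(NUM_NODES)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Because of the barrier, every group of NUM_NODES consecutive swaps
# belongs to the same frame number.
frames_in_order = [frame for frame, _ in swap_log]
```

Real cluster systems do this across machines (for instance with network barriers or genlock hardware); datalock additionally guarantees that all nodes see the same application state for the frame.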

       3.3.1.3 Technical Equipment
The Digital CAVE is a (Integrable Systems Laboratory) LSI’s Virtual Reality Nucleus’s
infrastructure, tied to the USP’s Polytechnical School. Developed by LSI-EPUSP’s
researchers, this system is known at United States as Cave (Cave Automatic Virtual
Environment) and at Europe as Cube.
The CAVE started to be constructed in 2000, with (Projects and Study Financier) FINESP’s
financing, and it was inaugurated in April of 2001. Much beyond 3D immersive projections
allowed by 5 screens of 3x3meters that form it, the Digital CAVE also may receive interfaces
that stimulate audition and tactile sense, like stereo sound boxes and force feedback devices;
that’s why it’s a virtual reality system that allows a high evolvement from the user.
To generate all these virtual worlds, 24 computers, the called clusters, work together,
producing a performance equivalent to the graphical super machines used at the Caves at
great centers of research of the world, but with very inferior costs. The development of this
technology is the result of years of research of the LSI’s team, and it is already available to
the Brazilian industrial market, bringing the country for the international vanguard of the area.


3.3.2 Tecgraf/PUC-Rio
Contact:       Marcelo Gattass
Address:       Rua Marquês de São Vicente, 225
               Prédio Belisário Velloso
               22.453-900 Rio de Janeiro/RJ
               Brazil
Website:       http://www.tecgraf.puc-rio.br

       3.3.2.1 Team overview
Tecgraf - Computer Graphics Technology Group - was created in May 1987 in a partnership
with PETROBRAS' Research and Development Center - CENPES - at the Pontifical Catholic
University of Rio de Janeiro - PUC-Rio. Its purpose is to develop, establish, and maintain
computer graphics and user interface software for technical and scientific applications.
Tecgraf is one of the laboratories of PUC-Rio's Computer Science Department. Its projects
are developed in cooperation with PUC-Rio's Civil Engineering Department, the Institute of
Pure and Applied Mathematics (IMPA), the National Institute for Space Research (INPE),
and the Department of Naval and Oceanic Engineering at the University of São Paulo,
mediated by the Padre Leonel Franca Foundation.
Tecgraf is coordinated by Prof. Marcelo Gattass, from the Computer Science Department. He
directs a team of consultants, researchers, engineers, analysts, and students at PUC-Rio.
Research developed here is part of the academic activities of PUC-Rio's Computer Science
and Civil Engineering Departments. As a consequence of its research activities, the Group
publishes a large number of works, graduates several doctors and masters, and interacts with
research groups both in Brazil and abroad. Tecgraf is one of the laboratories taking part in the
Inter-institutional Post-graduation Program on CAD and Computer Graphics (PiCG),
organized by ICAD, created by PUC-Rio and IMPA, and involving several departments.
Coordinator             Marcelo Gattass
General Manager         Albino J. Tavares
Consultant Professors

   •   Modeling and Visualization: Luiz Fernando Campos Ramos Martha

   •   GIS and Data Base:      Marco Antônio Casanova

   •   Distributed Systems: Renato Fontoura de Gusmão Cerqueira

   •   Real-Time Visualization:       Waldemar Celes Filho

   •   Computational Geometry:        Luiz Henrique de Figueiredo

   •   Computer Vision, Optimization:        Paulo Cezar Pinto de Carvalho

   •   Training Games:         Roberto de Beauclair Seixas
Area Coordinators

   •   Anchoring Systems: Ivan Fábio Mota de Menezes

   •   Virtual Reality and Collaboration:    Alberto Barbosa Raposo

   •   Geographic Information Systems and Environment Modeling and Stability of Floating
       Systems:      Luiz Cristovão Gomes Coelho

   •   Simulation, Control and Automation:           Carlos Cassino


       3.3.2.2 Technical Equipment
The current facilities of the Laboratory cover an area of 300 m2, including a 3D Visualization
Room, twelve general-use laboratory rooms (one of them on the 2nd floor of the Padre Leonel
Franca building), and six rooms reserved for Coordination, Administration and Support.
This space is interconnected by an Ethernet network certified at 100 Mbit/s, operating with
layer-3 and layer-2 switches. At the moment all machines run at 100 Mbit/s; some servers
will soon be upgraded to 1 Gbit/s. The network on the 2nd floor of the Padre Leonel Franca
building and that of ITS are connected by a 1 Gbit/s optical-fibre link.


       3.3.2.3 Partners and Sponsors
Tecgraf's network is structured so that the equipment connected to it can run several
operating systems while sharing drive and printer resources.



Throughout the years, Tecgraf's primary partner has been PETROBRAS. The core purpose of
this partnership is to develop computer systems that support activities in Engineering,
Geology and Seismics carried out by PETROBRAS. An important emphasis is the use of
PETROBRAS' technical expertise in the programs developed. Tecgraf deals only with
computer technologies that are not part of PETROBRAS' core activity.
By the beginning of 2002, this project had produced over 60 products and 200 scientific
papers, and had graduated 62 masters and 21 doctors. Many of the products generated by the
project have crossed the borders of the PUC/PETROBRAS partnership and are being used in
leading Brazilian universities and in some universities abroad.
With CEPEL, even though the work volume is smaller, Tecgraf has established a solid
partnership. Its main purpose, and its major products, are the graphic interfaces developed for
the transmission-network planning and analysis systems SINTRA, ANAREDE and SAGE.
Another goal of the CEPEL cooperation is to develop an interface for the Center of
Distributed Controls.
More recently, Tecgraf has set up an important partnership with the Brazilian Navy through
the Navy's Instruction Center. Its main purpose is to provide support to the Center's computer
science activities.
Tecgraf has also established other important partnerships. With CBPO, its role was to support
the introduction of CAD in the company's international plan. With MARKO, the partnership's
purpose was to develop a design system for the RollOn roofs that supported the activities of a
worldwide network of distributors. Finally, with NATRON, the project led to three important
products: ESTAT, NORTAN and METAL. The first two are programs for the analysis of
structural and piping stresses; the last one verifies safety according to the 1978 AISC
steel-structures standard.
Ever since its creation, Tecgraf has maintained frequent joint projects with:
   o IMPA, Institute of Pure and Applied Mathematics, through Project Visgraf, with the
     participation of researchers Paulo Cezar Carvalho, Luiz Velho, Luiz Henrique de
      Figueiredo and Roberto de Beauclair Seixas. This cooperation takes place on several
      levels, from the academic environment, through graduate courses, to the actual
      development of computer graphics technologies in the areas of volume visualization
      and distributed computing.
   o INPE, the National Institute for Space Research, through a project named "Computer
      Graphics and Image Processing", sponsored by CNPq, coordinated by Prof. Marcelo
      K. Zuffo (LSI/USP), with Prof. Gilberto Câmara (DPI/INPE) as rapporteur.
   o the Cornell Fracture Group and the Cornell Program of Computer Graphics, both at
      Cornell University, with professors Anthony Ingraffea and Donald Greenberg,
      respectively. This cooperation began in the 1980s during Prof. Marcelo Gattass'
      doctorate at Cornell. Nowadays, Tecgraf researchers take part in one- or two-year
      exchange programs at Cornell, where joint research is carried out in the areas of
      computer graphics and computational mechanics.
   o EPUSP/LMC and UNICAMP, with Prof. Túlio Nogueira Bittencourt, from EPUSP's
      Department of Structures and Foundations, and Prof. José Luis Antunes de Souza,
      from the Civil Engineering Faculty, in the project "Two-dimensional Simulation of
      Processes of Continuous Fracturing Mechanics", sponsored by FINEP, within the
      scope of Project RECOPE.


       3.3.2.4 Main research topics and/or demos
The list is very generic and shows that VR/AR is embedded in a computer graphics context,
where it is employed when appropriate.

   •   Program Development Tools

   •   Computational Mechanics

   •   Naval Projects

   •   Reservoirs and Geology

   •   GIS and the Environment

   •   Computer-Assisted Projects and Supervision

   •   Geometric Modeling and Scientific Visualization




Part 2: Worldwide analysis of VR/VE research
The present section was initially intended to underline some potential synergies in Europe
and future research activities that take advantage of existing results, sketching solutions that
add value to the distributed activities. However, a "Synthetic analysis on the Inventory of
past and undergoing research activities Europe" is provided in chapter 5 of the D1.A_3 report
(another deliverable of WP1.A.2). Based on the first version of the M1.2_1 report (called
"Inventory of the past and undergoing VR research activities in Europe"), that chapter already
identifies a set of problems, synergies and overlaps between the European research teams of
INTUITION.
Finding redundancies in the templates of the M1.2_1 internal report was not surprising, since
some INTUITION teams had worked together in previous European projects. For instance,
this is the case for some templates about the POEMS, TOUCH-HapSys and CREATE
projects, and probably for other cases as well. On the other hand, while the material of this
inventory is understandably not fully homogeneous, we observe that some templates are
missing because the WP1.2 partners did not include all the INTUITION partners. For
instance (and we suppose there are many such cases), no template was delivered on the
tracking technologies developed by the ARTrack company, which is actually an INTUITION
partner. A similar observation can be made about the R&D activities of companies such as
BARCO and HAPTION.
Building on the content of part 1, we underline in the current part 2 the main features of
worldwide research policies in VR, mainly in the US and Asia (Canada, New Zealand and
Brazil did not show really significant specificities), and features that seem missing or not
developed enough in Europe. Then, at the end of this worldwide analysis, we underline in the
last section of part 2 some application fields where VR/VE research in Europe should be
increased.
Consequently, the action list and general recommendations proposed later in the
Conclusion of this report are actually a synthesis of the analysis presented in the current
section. Our aim is to provide a set of possible orientations which should be useful for the
global INTUITION roadmap on research aspects.


4 Comparison between the US and Europe on
  VR/VE research, and potential European policies
The first and global analysis of the US VR labs we presented above shows:
         - they are in full activity, have reached a critical size and are richly equipped;
         - a large part of the VR research developed by these labs is software oriented, and
             makes full use of existing commercial equipment;
         - however, the US is still at the leading edge of hardware development: for
             instance, several new visualisation techniques or configurations are under
             development, such as retinal projection and the HI-SPACE concept (HIT Lab),
             self-adjusting high-resolution image walls and the Office of the Future (UNC),
             and modular, low-cost CAVE-like systems (VRAC);





        -    the importance of the engineering and development activities on VR, and their
             coexistence and complementarity with the VR research activities;
         - the multi-disciplinary nature of the staff of these VR labs.
Beyond these first comments, three levels of analysis are possible: (a) the main VR/VE
technologies, which drive fundamental research; (b) the main VR applications, which drive
the applied research; (c) the VR community structure in the US.

From the reported US labs, the main VR/VE technologies seem to be:
       - Augmented Reality, in the direction of facilitating the capture of reality for the
            integration of various sources of information in immersive and interactive
            environments. In this field the main US research focuses are:
           o 3D acquisition by triangulation or laser interferometry
           o electron microscopy
           o ultrasonic echography and MRI
           o automatic surface reconstruction
       - Improving ubiquity and remote work, in the direction of increasing hardware &
            software performance:
           o by developing high-throughput optical networks up to 100 Gbits/second (EVL),
           o by increasing rendering and simulation performance up to 100 mega
               entities/second (UNC)
       - Freeing users from HCI constraints:
           o video/graphic overlay and retinal interfaces,
           o tangible and direct interaction with objects,
           o wearable computing and portability of VR applications to laptops,
               organizers, and glasses (see-through or not),
           o natural and multimodal interaction (gesture and speech recognition using
               commercial solutions)
           o rendering and immersive interaction in physical environments (the HI-SPACE
               concept of the HIT Lab)

As for the VR applications developed by these US labs, the main focuses are:
        - health prevention, treatment, rehabilitation, and the societal aspects of
            increasing lifespans:
           o functional rehabilitation by haptic means (Rutgers),
           o prosthetics (EVL),
           o biopsy (UNC),
           o surgery, pain control, phobia therapy (HIT Lab);
        - support for decision chains, control, maintenance and training, for military or
            safety applications:
           o crisis operational centers, command centers (VRAC),
           o collaborative work (Rutgers, UNC, HIT Lab),
           o procedure training (USC);
        - education, training, culture, management of knowledge and behaviours:
           o virtual classroom (USC),
           o virtual teleportation and historical visits (EVL, VRAC)
           o education for children (Virtual Puget Sound, HIT Lab)
However, many specific applications can arise from the three fields identified above. They
relate to the whole population and thus have large potential markets. The cost structure of
such solutions will depend closely on the type of VR devices necessary for their exploitation,
and the broader the market is, the more significant the scale factor will be.




On the other hand, although the engineering and design of products are still the focus of many
companies, one observes that VR/VE applications for CAD/CAM are seldom reported by the
US labs. Two combined reasons probably explain this: the industrial confidentiality of
CAD/CAM activities, and the small market for such products.
Another remark concerns the success stories of Augmented Reality, Augmented Virtuality and
Mixed Reality in the US. Research in these topics is not generalised across all the US labs we
visited, but the labs that do develop such research activities have very advanced results.
Actually, the approaches developed by US labs are fundamentally pragmatic, more so in these
topics than in all the other VR domains. In addition to relying on multi-disciplinary teams, all
the research projects in AR, AV or MR are always based on very well-defined tasks for the
improvement of existing critical protocols, such as command centers, crisis cells, operational
support, telemedicine and remote surgery.

Concerning the VR community structure in the US, it seems clear that each state, mainly via
its state university, has structured its own VR community. But, looking at the above report on
the main VR labs in the US, one cannot determine whether real coordination exists at the
federal level. For instance, DARPA and NSF finance several US labs on very close VR subjects.
The federal structures (NSF, DoE) recurrently finance many laboratories on VR
topics. Actually, the current geopolitical situation justifies the continuation of several
important programs on military or security topics, with substantial DARPA funding. State
university funding and private industrial contracts always complement federal funding. It is
however impossible to establish a meaningful ratio between these different funding sources,
because of the variable parameters US labs use when presenting their teams (including or not:
personnel costs, equipment, material, shared resources, sponsoring).
While US labs enjoy significant state and federal financial support, it is also nothing new that
American universities have many success stories in their industrial partnerships. Actually,
industrial collaboration holds a prominent place in each university's organisation chart (with a
dedicated directorate). While private companies develop the results of research labs, these
companies are also often closely associated with the laboratories beyond the strict perimeter
of the project itself, which motivates both partners reciprocally. Joint ventures and royalties
are integrated into the processes of exchange and spin-off creation. Regarding the internal
working procedures of US labs, the transversal structure of projects bringing into play
different research departments of the university (for example "graphic computing" &
"medicine") facilitates the dissemination of the laboratories' solutions towards the end-user
markets or commercial companies.

Here we would like to point out that in terms of commercialisation the US is not as far ahead
as in other fields (if it is ahead at all). Europe (foremost France and Germany) also has
many small companies which provide VR/AR equipment, applications and consulting to
industry. In the US there is also a concentration into a few companies, such as the
Mechdyne/Fakespace/VRCo complex, Immersion, and WorldViz. Here too the exploitation of
hardware seems to play a major role. Software solutions and applications seem to be
stronger in Europe (e.g. Dassault/Virtools, Mercury, RTT, IcIdo, vrcom…). These are all
small companies, but they nevertheless exist and cover a wide range from development
toolkits to turnkey applications. Also, the strong interest of automotive companies in Europe
seems to focus much more on engineering applications than in the US.

Consequently, VR research in the US is highly diversified, as much as the European one. In
several fields, Europe and the US are in competition, but we also observe some
fundamental differences.





One fundamental difference compared to Europe is that the US attaches great importance to
the correlation between VR technologies and high-speed Gigabit and Terabit optical
networks. The research on the Next Generation Internet (NGI) is justified by the fact that
optical bandwidth and storage capacity are growing much faster than processing power. From
a processor-centric approach, computer science needs to migrate towards distributed
computation massively using the optical bandwidth, because the networks will be faster than
the computational resources. The application of such NGI research to VR is not only oriented
towards developing truly Collaborative VR systems. It more generally addresses the problem
of sharing and computing massive data, while, on the other hand, multi-sensorial VR devices
seem to be the best user interfaces to exploit such data or results, via multimodal exploration
and collaborative work.
On the NGI topic and its interest for VR, European research seems out of the global
competition, and it is necessary that INTUITION develop contacts with existing NGI European
projects. In addition, INTUITION should ask the EC to profile some future IST calls around
the typical needs of VR applications on high-speed optical networking, while acceptable
proposals on such themes should imperatively include some European VR teams as partners.

On the technological aspects, European and US VR research teams are really in
competition. The competition is not only in visual display, where it is stronger than in haptic
rendering, but also in tracking and motion capture technologies. On 3D audio, European
countries seem more advanced than the US.
Conversely, probably for historical reasons, the US has real world leadership in
networking for VR applications (as previously explained) and in computing (including
graphics hardware). Consequently, while we observe in Europe some interesting software
research on clustering and distributed architectures, we are frequently one generation behind
on hardware components, because new hardware products usually reach North American and
Asian markets several months before they reach European customers (including our
VR research labs).
In this context, a real question is whether Europe should decide to have its own policy
in hardware manufacturing, by designing a new generation of computers, and by applying its
scientific and technological competencies to develop its own networking solutions for high-
speed optical networks.

On VR software aspects, US and European VR research teams are also in competition. But
the difference is that European software solutions are not products that are really distributed
worldwide; indeed, these products are poorly distributed even within the European
countries. Two possible answers to this critical problem are to limit overlapping research
activities in Europe, and to promote standardisation of European VR solutions. We discuss
below some advantages and limits of such policies.

Avoiding overlaps in VR research between European teams is probably necessary, but the US
also has many overlapping VR research activities, resulting from the research policies of each
state university. Of course, the US has a global policy imposed by research institutes managed
by the federal government. However, overlapping research is not typically tracked in the US.
In other words, US federal organizations do not manage the effect (the overlaps); they only
act on its causes, and not because they politically want to avoid
it. Actually, the US defines scientific orientations based on federal needs and US objectives to
maintain their world leadership.
Consequently, state universities do all they can to be in the consortiums of the big federal
projects (led by NSF, NCSA, NASA, the US Army, DoE, and so on). Additionally, the US



teams generally have a set of company partners, which sponsor or fund some of the
research activities of these labs, and/or which distribute their resulting products. This implies
that US teams try to build research staff with sufficient critical mass. In
some way, that can contribute to limiting the overlaps. But, as VR research is also a multi-
disciplinary field, many US labs share the same competencies and still develop overlapping
activities.
In the end, in the US, the main question does not seem to be the overlaps in activity between
VR research teams, but whether there is strong enough competition between VR teams
of sufficient size. The goal of this research competition is to increase the development of
new VR products, while the main excellence criterion of this research activity is to contribute
to maintaining US world leadership, more than to really develop universal
scientific knowledge.

Another question is whether the standardisation of European VR production is a pertinent
policy. This question seems fundamental for many European end-users of VR technologies. In
some way, it is probably a solution to develop the European market in this field. But the US
does not seem to have the same strategy.
In Computer Science, several standardisation efforts have been attempted in the US, frequently
from state university initiatives (VRML, MPEG, and so on), but only a few of them are really
enduring success stories. On VR, the US has not (to date) made a political decision to encourage
standardisation. Actually, in the US, it is mainly the market, through the quality of a product but
also the financial power and distribution network of a company, that determines whether a
product becomes a standard (cf. OpenGL, Performer, CAVElib, and so on). The same holds
for some open-source solutions, where completeness of functionality, good user-
guide packaging, endorsement by well-known computer scientists and active web-service
management have allowed some successes (UNIX, VRjuggler, and so on).
On this topic, Europe probably cannot have the same policy as the US. In many domains, the
production cost of US products is generally amortised in the North American market before
they are distributed in the rest of the world. This is generally not the case for many European
products, including European VR ones.
Consequently, a standardisation of European VR production seems necessary. But the main
question is how to launch and set up VR standardisation in the European scientific
community. A very critical point in designing such a policy is that we have to find
standardisation features which maintain sufficient degrees of freedom to develop research
activities, and which make possible an interesting competition between European VR teams,
in a way that these teams continue to provide research excellence in this field.
However, the success of national or international activities promoted by public bodies is still
questionable. Most of the successful standards are imposed as de facto standards driven by
industry interest.
    • Examples: Adobe PDF, PostScript, DXF, VRML (SGI), JT, COLLADA.
    • Counterexamples: STEP, IGES, X3D (which slowed down significantly after being
        taken over by public bodies, even if it is now part of MPEG-4).
On the other hand, MPEG is an example of a compromise which seems to work. The question
is now whether Europe has a strong impact via companies which are willing to open up their
technology to such an extent that it can be standardized.

5 Comparison between Asia and Europe on VR/VE
  research, and potential European policies



Many of the points and lines of reasoning from the previous section apply here as well. This is
mainly for historical reasons: Japan and South Korea, for instance, have a
strong connection with the US. Therefore, we will focus on important issues such as the
dominant concerns of the EC relative to Asia and vice-versa.

It is clear, from the review of the Asian laboratories presented in this document, that Asian
research in VR is very active. All laboratories seem to have enough financial support and
human resources; they are nicely equipped and have synergistic relations with
industry. Overall, we can claim without any doubt that European VR research is
competitive in many aspects and shows limitations in others (namely, in interface technology).
Nevertheless, in Japan for instance, VR research is also considered a priority in the
government's important research investment policy, to maintain its leadership in
innovation.

From this comparison, it also appears clearly that, globally, Asian research in VR is
technology-oriented: new visualization concepts, interactive interfaces, etc. are not found with
similar weight in Europe. On the contrary, software development related to
APIs, computational real-time artificial physics, cognitive interfacing or simulation is
the strength of European research in VR. Many Asian colleagues clearly mentioned this
aspect. Some quick investigations show that this tendency will be maintained in the future,
for several reasons:
    1) the education system in most of these countries, especially in Japan, does not orient
         master's courses and PhDs towards software-related fundamental issues;
    2) laboratories educate students according to the research background and
         orientation of the professors present;
    3) Asian institutions are somewhat conservative and do not recruit many foreign
         researchers and young talents who would settle in their countries (this tendency is
         being reversed, because many outstanding researchers who settled in
         Europe or the US are being offered good opportunities to return to their respective
         countries; this is very noticeable in Korea, Japan and recently China).

In Europe, technological aspects are dealt with by engineering, while researchers are implicitly
asked to focus on fundamental aspects. That does not mean researchers do not have to prove
the usefulness of their work and its applicability to convince sponsors. But this
dichotomy between technology and fundamental aspects consequently creates space for
complementary research programs between Europe and Asia in the frame of VR and
subsequent applications and technology.

Concerning applications, Asian developments in VR are oriented towards gaming, interactive
communication, cyberspaces, medical applications, wearable computers, ubiquitous interfaces,
sensing/recording technology, etc. Surprisingly, few applications are CAD/CAM oriented,
even in haptics (this is also paradoxical: the US, and recently Europe, lead in the
commercialization of haptic interfaces, whereas no such devices are made in Japan, Korea or
even China). Nevertheless, concerning interface technology we have to admit that the
Asian laboratories are very innovative, and this is likely to be maintained in the future. On the
other hand, the traditional link with industry facilitates a quick transfer of ideas and
prototypes into commercial products. There is more flexibility in technology transfer in these
countries relative to Europe.






Concerning the "structural issue", Asia has not yet established a structuring policy, research
programme or institution dedicated to VR. We noticed that a VR society does exist in Japan
but not in other countries. Entire poles seem to emerge in Japan and Korea, but
they do not drive national programmes and research. In Japan, the VR society neither really
drives research nor catalyzes it: it rather plays the role of a "meeting point" where information
of various kinds is posted and exchanged.

Another important consideration to be mentioned about Asian VR research is its acceptance by
society. The Manga/Anime (comic and movie) and video gaming culture in Asian
countries, Japan for instance, drives cyberspace and artificial life research with fewer ethical
restrictions than in Europe.

6 Some application priorities for VR/VE research in
  Europe
Concerning industrial applications, in order to minimize the time to market and optimize
costs, the main tool for Product Design must increasingly be the Virtual Mockup.
Real prototypes should be very few, used as a final check of the whole project before series
production. This is most applicable for Aerospace and Automotive applications, where
physical mockups are so complex and costly.

Safety is one of the major features of new cars and aircraft: Augmented Reality
could be one factor in improving it.
While in motion, virtual elements could appear in front of the driver/pilot to improve
perception of the surrounding environment. These additional elements could be
shown on the windscreen or a Helmet Mounted Display, mainly during bad weather.
The Virtual Motion Predictor could be a useful feature to keep an eye on autonomous
driving/flying: the vehicle track could be evaluated in advance in order to avoid car/plane
crashes, both by the autonomous drive/fly controller and by the driver/pilot.

Home Automation could be a way to introduce VR as a familiar technology to the general
public.
A Virtual Home Assistant could be shown as a hologram (or visualized on a wall screen) with
human looks (and voice) of one's own choosing (a VIP, a historical personality, etc.). It could
be an interactive tool to manage the house: lights, automatic curtains and rolling shutters,
heating, telephone calls and so on.
An intelligent, human-thinking Virtual Home Assistant could be used as a teacher, to keep
the elderly company, or as a psychological aid.








Conclusion
The outcome of the INTUITION project, as defined in this Deliverable, can only be a
proposal which has to be merged with the bigger picture of research and research politics at
an overall European level. If we accept that a large part of VR/AR research is dedicated to
interfaces and interaction which can be employed in many application fields and industry
domains, the roadmap of VR/AR research is a cross-section of the necessary technology in
these and other fields. This fact makes it somewhat difficult to come up with a clear roadmap
beyond some purely technology-driven advances, e.g. display resolution or tracking accuracy.
The question remains: how can we promote the benefits of VR/AR to other fields so that these
areas exploit the technology for their own enhancement, for the benefit of all?

One task to be done is probably a thorough investigation of how VR research is
actually structured (or not) in each European country: for instance, collecting, where they
exist, the VR academic and industrial associations with their operation mode, role, etc. In a
second stage, a comparison with international structures, if any, would be made, ending up
with a structuring proposal that would lead to the creation of a VR society in Europe (that
would be the role of INTUITION), with a clear role and mission to prepare calls and catalyze
research at the European level.

Consequently, the conclusion of this report is only a "set of priorities for VR/VE research in
Europe" towards structuring this ERA. In other words, we propose here a synthetic list of
possible actions where the European VR community seems not yet competitive, and/or has
important research challenges. This proposed action list is based on the analysis we developed
above of the main US and Asian research labs in the field of VR. The authors of the present
report therefore do not justify their choices here. The reader is left to infer the genesis of
these possible actions from sections 4, 5 and 6 of part 2.


Set of priorities for VR/VE research in Europe5
1) Launch a European research initiative on High-speed Terabit Optical networks also
   for VR:
   a) specification of VR needs in terms of Sharing massive data, and Distributed
       computing
   b) policy for the set up of a European research cooperation in this field
   c) design and development of Collaborative Virtual Environments solutions in such new
       network infrastructure

2) Launch a European research initiative on Computers and Hardware (e.g. graphics)
   boards for VR:
   a) specification of future VR needs in Computers and Hardware boards
   b) policy for an independence (or relative independence) of Europe in this field
   c) design, development and manufacturing of the target Computers and Hardware
      equipment

5
    the numbering of these actions does not reflect any order of priority between them




3) Increase the European research effort on interoperability of VR software components
   a) multiply European VR/VE open-source and commercial products for VR
   b) support European development of de facto standards with a policy
   c) provide support for implementation
   d) simplify the administrative procedures for distribution

4) Increase the European research effort on Natural VR interaction
   a) multiply solutions on Tangible interfaces and Multimodal VR interfaces
   b) define future IST calls for such topic

5) Launch a European research on New VR environments for a continuum between the
   real physical working space and the numerical information space.
   a) design and develop European solutions on the grounds of the HI-SPACE or “Office of
      the future” concepts
   b) define future IST calls for such topic

6) Increase the European research effort on Scene management for VR applications
   a) find new generic and specific solutions for the software management of massive data
       in VR applications
   b) define future IST calls for such topic
   c) prepare standardization or interoperability schemes

7) Increase the European research cooperation on New design methodologies of VR
   interfaces based on knowledge of Cognitive and Social Sciences
   a) create a multi-disciplinary network of researchers in this field, which has to gather
       replicable psychophysical, cognitive and social behaviour experiments, and define,
       from this body of knowledge, design procedures to develop advanced VR interfaces for
       existing and future applications
   b) define future IST calls for such topic

8) Increase the European research effort on Virtual Mockup as the main tool for Product
   Design in industrial applications:
   a) minimize the Time to Market and optimize costs
   b) mainly involve Aerospace and Automotive field

9) Consolidate the European research on Augmented Reality, Augmented Virtuality and
   Mixed Reality
   a) consider AR as one VR method among many to access computer data
   b) combine VR and AR with all other access research (all fields can benefit from each
      other)
   c) multiply applied research on concrete AR applications
   d) define an IST program for such a topic, open to a large set of concrete applications,
      according to the success story of this pragmatic approach observed in the US

10) Increase the European research effort on AR to increase safety while driving/flying:
    a) use virtual elements to improve perception of the surrounding environment
    b) use the Virtual Motion Predictor to keep an eye on autonomous driving/flying

11) Launch a European research on VR, AR and Haptics customized for Home
   Automation:




    a) introduce VR as a familiar technology to the general public
    b) use a Virtual Home Assistant to manage the house
    c) use an intelligent, human-thinking Virtual Home Assistant as a teacher, to keep the
       elderly company, or as a psychological aid

12) On several issues, Europe, the US and Asia seem to be complementary rather than
    competitors; we would suggest to:
    a) extend some national exchange programmes to the European level (for instance,
       each country has its own exchange programme, some of which could be extended
       to a European level), with dedicated VR Marie Curie fellowships
    b) facilitate and explicitly launch joint research programmes with the US and Asia

13) Create a European onsite (Asia, the US) science observatory for quick transfers of
    new technologies related to hardware.


Additional fundamental recommendations
In addition to the above "Set of priorities for VR/VE research in Europe", it also seems
necessary, at the end of this report, to synthetically recall some fundamental
recommendations, which have been discussed in sections 4 and 5 of part 2:
        - A European policy for VR that would center too much on removing overlaps between
            European VR research teams, and/or promoting an integrative process exclusively
            driven by financial criteria, could be counterproductive, mainly for two reasons:
           o first, because research activities in the VR field need multi-disciplinary teams,
               which induces overlapping;
           o second, because research excellence needs a sufficient level of competition
               between VR research teams, which also supposes overlapping.
        - Conversely, a European policy for VR standardisation, elaborated with all the
            European VR research teams so as to maintain sufficient degrees of freedom for
            research activities, is probably a good strategy to increase the dissemination of
            European VR solutions and products in the world.
        - Last but not least, the creation of a large European VR consortium, similar in its
            features to the Virtual Worlds Consortium concept that the HIT Lab has
            developed, for durably promoting European VR research products towards
            commercial companies and VR end-users, should probably be included within the
            current projects of the INTUITION Business Office, or more generally, within the
            solutions that WP1.F has to design for the sustainability of the INTUITION
            network after the end of the current EC funding.

Beyond the structural recommendations above, it is also necessary, on a more practical
level, to develop the proposed VR/VE research priorities with a clear market vision for
end users such as SMEs.
For instance, in one Internal Project6 of INTUITION it was established that a blue screen
system is very frequently used by SMEs. A workshop with INTUITION partners discussed why
this system is used so frequently and what can be learned from it for applying VR more
broadly. SMEs and other companies whose core business is not designing for human
operations will rely on the VR facilities of other institutes. As it is not their core
6
 See the report of the Internal Project “Requirements for low cost SME VR” (TNO, COAT,
Mitsubishi Caterpillar, TU Twente, ICCS, Un. Transylvania) supported by INTUITION.




business, their willingness to invest is low. Often these companies do not have the time
to wait for studies on the effects of design changes, which is why they want rapid design
and quick results. They also need to be convinced of the purpose of the change, and
communication between the various groups within the company (management, marketing,
engineers and designers) is needed.
The conclusion was that there is a need to bridge the gap between SME demands and the VR
systems currently available.



