A Mexican-Spanish Talking Head

                  Author 1            Author 2            Author 3
                          University of Sheffield
                      Department of Computer Science
          Regent Court, 211 Portobello Street, Sheffield, S1 DP, U.K.
                           +44 (0)111 111 1111
                      {a.1, a.2, a.3}@dcs.shef.ac.uk



ABSTRACT
Write a short abstract of your paper here.

Categories and Subject Descriptors
I.3.7 [Three-Dimensional Graphics and Realism]: Computer Facial Animation

General Terms
Algorithms, Human Factors.

Keywords
Computer facial animation.

1. INTRODUCTION
There are a number of approaches to producing visual speech and general facial movements, such as pose-based interpolation, concatenation of dynamic units, and physically-based modeling (see [13] for a review).

2. CONSTRAINT-BASED VISUAL SPEECH
A posture (viseme) for a phoneme varies within and between speakers. It is affected by context (the so-called coarticulation effect), as well as by factors such as mood and tiredness.

    X_{j+1} = X_j + λ(X_obj − X_cstr)                    (2.1)

3. INPUT DATA FOR THE RANGE CONSTRAINTS
To produce specific values for the range constraints described in the previous section, we need to define the visemes that are to be used and measure their visual shapes on real speakers.

4. RESULTS
The Mexican-Spanish talking head was tested with the sentence "hola, ¿cómo estás?". Figure 9 shows the results when the global acceleration constraint is varied. For the left column the global constraint is set at 0.03, whereas in the right column it is set at 0.004. Differences in the mouth opening can be observed between the two columns.

          Table 1. Mexican-Spanish viseme definition

          Phoneme                      Viseme name

          Figure 1. Front and side view of the viseme

5. CONCLUSIONS
We have produced a Mexican-Spanish talking head that uses a constraint-based approach to create realistic-looking speech trajectories.

6. ACKNOWLEDGMENTS
Thanks to XYZ, who helped us with the production of the real mouth pictures.

7. REFERENCES
[1] Black, A., Taylor, P., and Caley, R., 2007. Festival Speech Synthesis System. http://www.cstr.ed.ac.uk/projects/festival/
[2] Benguerel, A. and Cowan, H., 1974. Coarticulation of upper lip protrusion in French. Phonetica, 30:41–55.
[3] Cohen, M. and Massaro, D., 1993. Modeling coarticulation in synthetic visual speech. In Proceedings of Computer Animation '93, pp. 139–156.
[4] Dodd, B. and Campbell, R. (Eds.), 1987. Hearing by Eye: The Psychology of Lipreading. London: Lawrence Erlbaum.
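As an illustration of the constraint-based trajectory generation described in Section 2, a minimal sketch of one possible reading of the update in Eq. (2.1): each frame's mouth parameter moves toward its objective (viseme) value and is then projected back onto its range constraint. The function names, the step size `lam`, and the sample values below are illustrative assumptions, not the paper's actual implementation.

```python
def clamp(v, lo, hi):
    """Project a value onto the closed interval [lo, hi]."""
    return max(lo, min(hi, v))

def relax_trajectory(x, x_obj, ranges, lam=0.5, iters=100):
    """Relax each frame toward its objective value, then enforce its
    range constraint (one possible reading of Eq. (2.1))."""
    for _ in range(iters):
        new_x = []
        for xj, xo, (lo, hi) in zip(x, x_obj, ranges):
            xj = xj + lam * (xo - xj)        # move toward the objective value
            new_x.append(clamp(xj, lo, hi))  # project onto the range constraint
        x = new_x
    return x

# Hypothetical example: five frames of a mouth-opening parameter.  The middle
# objective (0.8) exceeds its allowed range (0.0 to 0.5), so the constraint
# caps the mouth opening for that frame.
y = relax_trajectory([0.0] * 5,
                     [0.0, 0.2, 0.8, 0.2, 0.0],
                     [(0.0, 0.5)] * 5)
print([round(v, 2) for v in y])  # -> [0.0, 0.2, 0.5, 0.2, 0.0]
```

As in the experiment of Section 4, tightening or loosening a constraint changes the mouth opening the resulting trajectory can reach.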

								