Animating Virtual Speakers or Singers from Audio: Lip-Synching Facial Animation
Citation: EURASIP Journal on Audio, Speech, and Music Processing 2009:826091 (2010)
SynFace—Speech-Driven Facial Animation for Virtual Speech-Reading Support
This paper describes SynFace, a supportive technology that aims at enhancing audio-based spoken communication in adverse acoustic conditions by providing the missing visual information in the form of an animat...
Citation: EURASIP Journal on Audio, Speech, and Music Processing 2009:191940 (2009)
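SynFace drives an animated talking face from the audio signal alone, so the core step is turning recognized phonetic content into articulatory targets for the face. The following is a minimal illustrative sketch of that idea, not SynFace's actual implementation: the phoneme labels, the viseme table, and the 25 fps frame rate are all assumptions.

```python
# Sketch: map a recognized phoneme sequence to viseme targets and
# interpolate a lip-opening trajectory for an animated face.
import numpy as np

PHONEME_TO_VISEME = {          # assumed, highly simplified mapping
    "p": "bilabial", "b": "bilabial", "m": "bilabial",
    "f": "labiodental", "v": "labiodental",
    "a": "open", "i": "spread", "u": "rounded", "sil": "rest",
}

VISEME_LIP_OPENING = {         # assumed target values in [0, 1]
    "bilabial": 0.0, "labiodental": 0.15, "open": 1.0,
    "spread": 0.4, "rounded": 0.3, "rest": 0.1,
}

def lip_trajectory(phones, fps=25.0):
    """phones: list of (label, start_sec, end_sec) from a phoneme recognizer.
    Returns per-frame lip-opening values, linearly interpolated between
    viseme targets placed at segment midpoints."""
    midpoints, targets = [], []
    for label, start, end in phones:
        viseme = PHONEME_TO_VISEME.get(label, "rest")
        midpoints.append(0.5 * (start + end))
        targets.append(VISEME_LIP_OPENING[viseme])
    duration = phones[-1][2]
    frame_times = np.arange(0.0, duration, 1.0 / fps)
    return np.interp(frame_times, midpoints, targets)

# Example: "ma" preceded and followed by silence.
print(lip_trajectory([("sil", 0.0, 0.1), ("m", 0.1, 0.2),
                      ("a", 0.2, 0.4), ("sil", 0.4, 0.5)]))
```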
Lip-Synching Using Speaker-Specific Articulation, Shape and Appearance Models
We describe here the control, shape and appearance models that are built using an original photogrammetric method to capture characteristics of speaker-specific facial articulation, anatomy, and texture. Two o...
Citation: EURASIP Journal on Audio, Speech, and Music Processing 2009:769494 (2009)
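Speaker-specific shape and articulation models of this kind are typically linear: vertex positions are a captured mean geometry plus a weighted sum of articulation basis vectors (jaw opening, lip rounding, and so on). The sketch below illustrates only that general structure, not the paper's photogrammetric pipeline; the mesh size, number of modes, and random stand-in data are assumptions.

```python
# Sketch: linear articulation/shape model driving a face mesh.
import numpy as np

rng = np.random.default_rng(0)
n_vertices = 500                                     # assumed mesh size
mean_shape = rng.normal(size=(n_vertices, 3))        # stand-in for captured mean geometry
basis = rng.normal(size=(4, n_vertices, 3)) * 0.01   # stand-in articulation modes

def synthesize_shape(params):
    """params: array of articulation parameters, one per basis mode.
    Returns an (n_vertices, 3) array of deformed vertex positions."""
    return mean_shape + np.tensordot(params, basis, axes=1)

# Example: combine mode 0 (e.g. jaw opening) with mode 3 (e.g. lip rounding).
deformed = synthesize_shape(np.array([1.2, 0.0, 0.0, 0.8]))
print(deformed.shape)   # (500, 3)
```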
Model-Based Synthesis of Visual Speech Movements from 3D Video
We describe a method for the synthesis of visual speech movements using a hybrid unit selection/model-based approach. Speech lip movements are captured using a 3D stereo face capture system and split up into p...
Citation: EURASIP Journal on Audio, Speech, and Music Processing 2009:597267 (2009)
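The unit-selection half of such a hybrid approach picks captured lip-movement segments that match the target phoneme sequence while joining smoothly at the boundaries. The sketch below shows only that selection principle under simplifying assumptions (a greedy search and an invented feature layout); real systems typically run a Viterbi search over the full unit lattice.

```python
# Sketch: greedy unit selection for visual speech by concatenation cost.
import numpy as np

def select_units(targets, database):
    """targets: list of phoneme labels.
    database: list of dicts {"label": str, "frames": (T, D) array of
    lip-shape parameters}. Returns the chosen units in order."""
    chosen = []
    prev_last_frame = None
    for label in targets:
        candidates = [u for u in database if u["label"] == label]
        if prev_last_frame is None:
            best = candidates[0]
        else:
            # Concatenation cost: distance between the previous unit's
            # last frame and the candidate's first frame.
            best = min(candidates,
                       key=lambda u: np.linalg.norm(u["frames"][0] - prev_last_frame))
        chosen.append(best)
        prev_last_frame = best["frames"][-1]
    return chosen
```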
Optimization of an Image-Based Talking Head System
This paper presents an image-based talking head system, which includes two parts: analysis and synthesis. The audiovisual analysis part creates a face model of a recorded human subject, which is composed of a ...
Citation: EURASIP Journal on Audio, Speech, and Music Processing 2009:174192 (2009)
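In image-based talking heads, the synthesis part typically picks recorded mouth images from the analysed database and composites them onto a background face sequence. The sketch below illustrates that general scheme only; the feature vectors, nearest-neighbour lookup, and fixed mouth region are assumptions rather than the paper's method.

```python
# Sketch: image-based synthesis by nearest-neighbour mouth-image lookup.
import numpy as np

def synthesize_frames(target_features, mouth_db, background, region):
    """target_features: (N, D) desired per-frame feature vectors.
    mouth_db: list of (feature_vector (D,), mouth_image (h, w, 3)).
    background: (H, W, 3) neutral face image; region: (top, left).
    Returns a list of N composited frames."""
    top, left = region
    frames = []
    for feat in target_features:
        # Nearest-neighbour lookup in the analysed mouth-image database.
        _, mouth = min(mouth_db, key=lambda e: np.linalg.norm(e[0] - feat))
        frame = background.copy()
        h, w, _ = mouth.shape
        frame[top:top + h, left:left + w] = mouth
        frames.append(frame)
    return frames
```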
On the Importance of Audiovisual Coherence for the Perceived Quality of Synthesized Visual Speech
Audiovisual text-to-speech systems convert a written text into an audiovisual speech signal. Typically, the visual mode of the synthetic speech is synthesized separately from the audio, the latter being either...
Citation: EURASIP Journal on Audio, Speech, and Music Processing 2009:169819 (2009)
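When the visual mode is synthesized separately from the audio, one simple objective proxy for how well the two agree is the correlation between the audio energy envelope and the face's mouth-opening trajectory. The sketch below computes that proxy; it is an illustrative assumption, not the evaluation method used in the paper, and the sample rate and frame rate are placeholders.

```python
# Sketch: a simple audiovisual coherence proxy via envelope correlation.
import numpy as np

def av_coherence(audio, sr, mouth_opening, fps=25.0):
    """audio: 1-D waveform at sample rate sr.
    mouth_opening: per-frame mouth-opening values at fps frames/second.
    Returns the Pearson correlation between the two trajectories."""
    hop = int(sr / fps)                      # one analysis window per video frame
    n = min(len(mouth_opening), len(audio) // hop)
    energy = np.array([np.sqrt(np.mean(audio[i * hop:(i + 1) * hop] ** 2))
                       for i in range(n)])
    mouth = np.asarray(mouth_opening[:n])
    return float(np.corrcoef(energy, mouth)[0, 1])
```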