Animating Virtual Speakers or Singers from Audio: Lip-Synching Facial Animation

  1. Research Article

    SynFace—Speech-Driven Facial Animation for Virtual Speech-Reading Support

    This paper describes SynFace, a supportive technology that aims at enhancing audio-based spoken communication in adverse acoustic conditions by providing the missing visual information in the form of an animat...

    Giampiero Salvi, Jonas Beskow, Samer Al Moubayed and Björn Granström

    EURASIP Journal on Audio, Speech, and Music Processing 2009 2009:191940

    Published on: 16 November 2009

  2. Research Article

    Model-Based Synthesis of Visual Speech Movements from 3D Video

    We describe a method for the synthesis of visual speech movements using a hybrid unit selection/model-based approach. Speech lip movements are captured using a 3D stereo face capture system and split up into p...

James D. Edge, Adrian Hilton and Philip Jackson

    EURASIP Journal on Audio, Speech, and Music Processing 2009 2009:597267

    Published on: 15 November 2009

  3. Research Article

    Lip-Synching Using Speaker-Specific Articulation, Shape and Appearance Models

    We describe here the control, shape and appearance models that are built using an original photogrammetric method to capture characteristics of speaker-specific facial articulation, anatomy, and texture. Two o...

    Gérard Bailly, Oxana Govokhina, Frédéric Elisei and Gaspard Breton

    EURASIP Journal on Audio, Speech, and Music Processing 2009 2009:769494

    Published on: 15 November 2009

  4. Research Article

    Optimization of an Image-Based Talking Head System

    This paper presents an image-based talking head system, which includes two parts: analysis and synthesis. The audiovisual analysis part creates a face model of a recorded human subject, which is composed of a ...

    Kang Liu and Joern Ostermann

    EURASIP Journal on Audio, Speech, and Music Processing 2009 2009:174192

    Published on: 30 September 2009