Animating Virtual Speakers or Singers from Audio: Lip-Synching Facial Animation

  1. Content type: Research Article

    This paper describes SynFace, a supportive technology that aims at enhancing audio-based spoken communication in adverse acoustic conditions by providing the missing visual information in the form of an animat...

    Authors: Giampiero Salvi, Jonas Beskow, Samer Al Moubayed and Björn Granström

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2009 2009:191940

  2. Content type: Research Article

    We describe a method for the synthesis of visual speech movements using a hybrid unit selection/model-based approach. Speech lip movements are captured using a 3D stereo face capture system and split up into p...

    Authors: James D. Edge, Adrian Hilton and Philip Jackson

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2009 2009:597267

  3. Content type: Research Article

    We describe here the control, shape and appearance models that are built using an original photogrammetric method to capture characteristics of speaker-specific facial articulation, anatomy, and texture. Two o...

    Authors: Gérard Bailly, Oxana Govokhina, Frédéric Elisei and Gaspard Breton

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2009 2009:769494

  4. Content type: Research Article

    This paper presents an image-based talking head system, which includes two parts: analysis and synthesis. The audiovisual analysis part creates a face model of a recorded human subject, which is composed of a ...

    Authors: Kang Liu and Joern Ostermann

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2009 2009:174192

  5. Content type: Research Article

    Audiovisual text-to-speech systems convert a written text into an audiovisual speech signal. Typically, the visual mode of the synthetic speech is synthesized separately from the audio, the latter being either...

    Authors: Wesley Mattheyses, Lukas Latacz and Werner Verhelst

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2009 2009:169819