Animating Virtual Speakers or Singers from Audio: Lip-Synching Facial Animation

  1. This paper describes SynFace, a supportive technology that aims at enhancing audio-based spoken communication in adverse acoustic conditions by providing the missing visual information in the form of an animat...

    Authors: Giampiero Salvi, Jonas Beskow, Samer Al Moubayed and Björn Granström

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2009 2009:191940

    Content type: Research Article

  2. We describe here the control, shape and appearance models that are built using an original photogrammetric method to capture characteristics of speaker-specific facial articulation, anatomy, and texture. Two o...

    Authors: Gérard Bailly, Oxana Govokhina, Frédéric Elisei and Gaspard Breton

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2009 2009:769494

    Content type: Research Article

  3. We describe a method for the synthesis of visual speech movements using a hybrid unit selection/model-based approach. Speech lip movements are captured using a 3D stereo face capture system and split up into p...

    Authors: James D. Edge, Adrian Hilton and Philip Jackson

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2009 2009:597267

    Content type: Research Article

  4. This paper presents an image-based talking head system, which includes two parts: analysis and synthesis. The audiovisual analysis part creates a face model of a recorded human subject, which is composed of a ...

    Authors: Kang Liu and Joern Ostermann

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2009 2009:174192

    Content type: Research Article

  5. Audiovisual text-to-speech systems convert a written text into an audiovisual speech signal. Typically, the visual mode of the synthetic speech is synthesized separately from the audio, the latter being either...

    Authors: Wesley Mattheyses, Lukas Latacz and Werner Verhelst

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2009 2009:169819

    Content type: Research Article