Perceptual Models for Speech, Audio, and Music Processing

  1. This paper deals with continuous-time filter transfer functions that resemble tuning curves at a particular set of places on the basilar membrane of the biological cochlea and that are suitable for practical VLS... (a gammatone-style filter sketch follows this listing)

    Authors: AG Katsiamis, EM Drakakis and RF Lyon

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2007 2007:063685

    Content type: Research Article


  2. This work is the result of an interdisciplinary collaboration between scientists from the fields of audio signal processing, phonetics, and cognitive neuroscience, aiming to study the perception of modificati...

    Authors: Sølvi Ystad, Cyrille Magne, Snorre Farner, Gregory Pallone, Mitsuko Aramaki, Mireille Besson and Richard Kronland-Martinet

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2007 2007:030194

    Content type: Research Article


  3. Variability of speaker accent is a challenge for effective human communication as well as for speech technology, including automatic speech recognition and accent identification. The motivation of this study is to ...

    Authors: Ayako Ikeno and John HL Hansen

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2007 2007:076030

    Content type: Research Article


  4. A noise suppression algorithm is proposed, based on filtering the spectrotemporal modulations of noisy signals. The modulations are estimated from a multiscale representation of the signal spectrogram generated... (a simplified modulation-filtering sketch follows this listing)

    Authors: Nima Mesgarani and Shihab Shamma

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2007 2007:042357

    Content type: Research Article


  5. This paper experimentally demonstrates the importance of perceptual continuity of expressive strength in vocal timbre for natural changes in vocal expression. In order to synthesize various and continuous expressi...

    Authors: Tomoko Yonezawa, Noriko Suzuki, Shinji Abe, Kenji Mase and Kiyoshi Kogure

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2007 2007:023807

    Content type: Research Article


  6. Many modern speech bandwidth extension techniques predict the high-frequency band based on features extracted from the lower band. While this method works for certain types of speech, problems arise when the c... (a toy lowband-to-highband mapping sketch follows this listing)

    Authors: Visar Berisha and Andreas Spanias

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2007 2007:016816

    Content type: Research Article


  7. We describe an FFT-based companding algorithm for preprocessing speech before recognition. The algorithm mimics tone-to-tone suppression and masking in the auditory system to improve automatic speech recogniti... (a rough spectral-enhancement sketch follows this listing)

    Authors: Bhiksha Raj, Lorenzo Turicchia, Bent Schmidt-Nielsen and Rahul Sarpeshkar

    Citation: EURASIP Journal on Audio, Speech, and Music Processing 2007 2007:065420

    Content type: Research Article

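The gammatone-like cochlear filters behind article 1 can be illustrated numerically. This is a minimal sketch, assuming the standard fourth-order gammatone impulse response and the Glasberg-Moore ERB bandwidth formula; the article's own VLSI-oriented transfer-function variants are not reproduced, and the function names and parameter values here are illustrative only.

```python
# Minimal sketch of a gammatone-style auditory filter and its tuning-curve-like
# magnitude response. Assumes the standard gammatone form, not the specific
# transfer functions analyzed in the article.
import numpy as np

def erb(fc_hz):
    # Equivalent rectangular bandwidth (Glasberg & Moore, 1990).
    return 24.7 * (4.37 * fc_hz / 1000.0 + 1.0)

def gammatone_ir(fc_hz, fs_hz, order=4, dur_s=0.05):
    # g(t) = t^(n-1) * exp(-2*pi*b*t) * cos(2*pi*fc*t), with b tied to ERB(fc).
    t = np.arange(int(dur_s * fs_hz)) / fs_hz
    b = 1.019 * erb(fc_hz)
    g = t**(order - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * fc_hz * t)
    return g / np.max(np.abs(g))

fs = 16000
h = gammatone_ir(fc_hz=1000.0, fs_hz=fs)
H = np.abs(np.fft.rfft(h, 4096))
freqs = np.fft.rfftfreq(4096, 1.0 / fs)
print(f"peak response near {freqs[np.argmax(H)]:.0f} Hz")  # close to the 1 kHz centre
```
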
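Article 4's idea of suppressing noise by filtering spectrotemporal modulations can be approximated crudely in the STFT domain. The sketch below is a stand-in only: it uses a plain log-magnitude spectrogram rather than the multiscale auditory representation the authors describe, and the window sizes, modulation cutoffs, and function name are assumptions.

```python
# Crude sketch of noise suppression by low-pass filtering the spectrotemporal
# modulations of a log spectrogram. The cutoffs and STFT settings are arbitrary.
import numpy as np
from scipy.signal import stft, istft

def suppress_modulations(x, fs, rate_cut_hz=16.0, scale_cut_cyc_per_bin=0.25):
    f, t, X = stft(x, fs, nperseg=512, noverlap=384)
    logmag = np.log1p(np.abs(X))
    # 2-D FFT of the spectrogram: axis 0 -> spectral modulations ("scale"),
    # axis 1 -> temporal modulations ("rate").
    M = np.fft.fft2(logmag)
    rates = np.fft.fftfreq(logmag.shape[1], d=t[1] - t[0])   # temporal modulation, Hz
    scales = np.fft.fftfreq(logmag.shape[0], d=1.0)          # spectral modulation, cycles/bin
    keep = (np.abs(scales)[:, None] <= scale_cut_cyc_per_bin) & \
           (np.abs(rates)[None, :] <= rate_cut_hz)
    smoothed = np.real(np.fft.ifft2(M * keep))
    # Use the modulation-filtered spectrogram as a (<= 1) gain on the noisy STFT.
    gain = np.clip(np.expm1(np.clip(smoothed, 0.0, None)) / (np.abs(X) + 1e-8), 0.0, 1.0)
    _, y = istft(X * gain, fs, nperseg=512, noverlap=384)
    return y

# Example: enhanced = suppress_modulations(noisy_signal, 16000)
```

Speech energy concentrates at slow temporal modulation rates and coarse spectral scales, which is why a low-pass modulation mask is a plausible, if rough, proxy for the article's approach.
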
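Article 6 rests on mapping lowband features to the missing high band. The toy sketch below assumes a least-squares linear regression from lowband log spectra to a highband log envelope and a 4 kHz split; none of this is the estimator actually proposed in the article.

```python
# Toy bandwidth-extension mapping: learn a linear map from lowband log spectra
# to the highband log envelope. Split frequency, features, and regressor are
# illustrative assumptions.
import numpy as np

def frame_log_spectra(x, fs, nfft=512, hop=256):
    frames = np.lib.stride_tricks.sliding_window_view(x, nfft)[::hop]
    spec = np.abs(np.fft.rfft(frames * np.hanning(nfft), axis=1))
    return np.log(spec + 1e-8), np.fft.rfftfreq(nfft, 1.0 / fs)

def train_lowband_to_highband(wideband_signals, fs=16000, split_hz=4000):
    X, Y = [], []
    for x in wideband_signals:
        logspec, freqs = frame_log_spectra(x, fs)
        X.append(logspec[:, freqs < split_hz])
        Y.append(logspec[:, freqs >= split_hz])
    X, Y = np.vstack(X), np.vstack(Y)
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])   # add a bias column
    W, *_ = np.linalg.lstsq(Xb, Y, rcond=None)
    return W

def predict_highband(lowband_logspec, W):
    Xb = np.hstack([lowband_logspec, np.ones((lowband_logspec.shape[0], 1))])
    return Xb @ W
```

Any such mapping inherits the limitation the abstract alludes to: the high band can only be recovered as well as the lowband features constrain it.
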
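Article 7's companding chain (analysis filterbank, per-channel compression and expansion, resynthesis) is not reproduced here; the sketch below only mimics its net spectral-enhancement effect by attenuating FFT bins that fall below a locally smoothed envelope, a rough analogue of two-tone suppression. The smoothing width, exponent, and function name are assumptions.

```python
# Rough spectral-enhancement stand-in for the companding preprocessor: bins
# below a locally smoothed spectral envelope are attenuated, dominant bins pass.
import numpy as np
from scipy.ndimage import uniform_filter1d
from scipy.signal import stft, istft

def suppress_weak_neighbors(x, fs, smooth_bins=9, alpha=0.5):
    f, t, X = stft(x, fs, nperseg=512, noverlap=384)
    mag = np.abs(X)
    # Broadly smoothed envelope along the frequency axis.
    env = uniform_filter1d(mag, size=smooth_bins, axis=0) + 1e-8
    # Components below the local envelope are pushed down (masking-like behavior).
    gain = np.minimum((mag / env) ** alpha, 1.0)
    _, y = istft(X * gain, fs, nperseg=512, noverlap=384)
    return y

# Example: preprocessed = suppress_weak_neighbors(speech, 16000)
```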