
Music Information Retrieval Based on Signal Processing

  1. We report our findings on using MIDI files and audio features from MIDI, separately and combined together, for MIDI music genre classification. We use McKay and Fujinaga's 3-root and 9-leaf genre data set. In ...

    Authors: Zehra Cataltepe, Yusuf Yaslan and Abdullah Sonmez
    Citation: EURASIP Journal on Advances in Signal Processing 2007 2007:036409
  2. This paper approaches, under a unified framework, several algorithms for the spectral analysis of musical signals. Such algorithms include the fast Fourier transform (FFT), the fast filter bank (FFB), the cons...

    Authors: Filipe C. C. B. Diniz, Iuri Kothe, Sergio L. Netto and Luiz W. P. Biscainho
    Citation: EURASIP Journal on Advances in Signal Processing 2006 2007:094704
  3. One major goal of structural analysis of an audio recording is to automatically extract the repetitive structure or, more generally, the musical form of the underlying piece of music. Recent approaches to this...

    Authors: Meinard Müller and Frank Kurth
    Citation: EURASIP Journal on Advances in Signal Processing 2006 2007:089686
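
    A generic illustration of one building block often used in this kind of structure analysis is a self-similarity matrix over frame-wise features, in which repeated sections appear as stripes parallel to the main diagonal. A minimal sketch follows; the feature type (e.g., chroma) and cosine similarity are illustrative assumptions, not the specific method of this paper.

    ```python
    # Sketch: cosine self-similarity matrix over frame-wise features.
    # Repetitions show up as stripes parallel to the main diagonal.
    import numpy as np

    def self_similarity(features):
        """features: (n_frames, n_dims) array, e.g. chroma vectors."""
        norms = np.linalg.norm(features, axis=1, keepdims=True) + 1e-12
        unit = features / norms
        return unit @ unit.T  # (n_frames, n_frames) similarity matrix
    ```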
  4. Recent work in blind source separation applied to anechoic mixtures of speech allows for improved reconstruction of sources that rarely overlap in a time-frequency representation. While the assumption that speech...

    Authors: John Woodruff and Bryan Pardo
    Citation: EURASIP Journal on Advances in Signal Processing 2006 2007:086369
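
    The entry above rests on the assumption that sources rarely overlap in a time-frequency representation. A minimal sketch of that idea, not the authors' algorithm, is an ideal binary mask: assign each time-frequency bin of the mixture to whichever source dominates it (the true sources are used here, purely to illustrate why nearly disjoint sources are recoverable).

    ```python
    # Illustration of the "rarely overlapping" assumption via an ideal
    # binary time-frequency mask (uses the true sources, so it is not
    # blind; it only shows why disjoint sources can be recovered).
    import numpy as np
    from scipy.signal import stft, istft

    def ideal_binary_mask_separation(s1, s2, fs=16000, nperseg=1024):
        mix = s1 + s2
        _, _, S1 = stft(s1, fs=fs, nperseg=nperseg)
        _, _, S2 = stft(s2, fs=fs, nperseg=nperseg)
        _, _, M = stft(mix, fs=fs, nperseg=nperseg)
        mask1 = np.abs(S1) >= np.abs(S2)          # bins dominated by source 1
        _, est1 = istft(M * mask1, fs=fs, nperseg=nperseg)
        _, est2 = istft(M * ~mask1, fs=fs, nperseg=nperseg)
        return est1, est2
    ```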
  5. We present an innovative tempo estimation system that processes acoustic audio signals and does not use any high-level musical knowledge. Our proposal relies on a harmonic + noise decomposition of the audio si...

    Authors: Miguel Alonso, Gaël Richard and Bertrand David
    Citation: EURASIP Journal on Advances in Signal Processing 2006 2007:082795
  6. This paper describes a system for optical music recognition (OMR) in case of monophonic typeset scores. After clarifying the difficulties specific to this domain, we propose appropriate solutions at both image...

    Authors: Florence Rossant and Isabelle Bloch
    Citation: EURASIP Journal on Advances in Signal Processing 2006 2007:081541
  7. The segmentation of music into intro-chorus-verse-outro, and similar segments, is a difficult topic. A method for performing automatic segmentation based on features related to rhythm, timbre, and harmony is p...

    Authors: Kristoffer Jensen
    Citation: EURASIP Journal on Advances in Signal Processing 2006 2007:073205
  8. We present a novel approach to automatic estimation of tempo over time. This method aims at detecting tempo at the tactus level for percussive and nonpercussive audio. The front-end of our system is based on a...

    Authors: Geoffroy Peeters
    Citation: EURASIP Journal on Advances in Signal Processing 2006 2007:067215
  9. We present a strategy to perform automatic genre classification of musical signals. The technique divides the signals into 21.3 milliseconds frames, from which 4 features are extracted. The values of each feat...

    Authors: Jayme Garcia Arnal Barbedo and Amauri Lopes
    Citation: EURASIP Journal on Advances in Signal Processing 2006 2007:064960
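
    A hedged sketch of the frame-based front end this entry describes: split the signal into 21.3-millisecond frames and compute four features per frame. The particular features below (energy, zero-crossing rate, spectral centroid, spectral rolloff) and the sample rate are illustrative assumptions, since the abstract is truncated before naming them.

    ```python
    # Sketch: 21.3 ms frames, four illustrative features per frame.
    import numpy as np

    def frame_features(x, sr=44100, frame_ms=21.3):
        frame_len = int(round(sr * frame_ms / 1000.0))
        feats = []
        for i in range(len(x) // frame_len):
            frame = x[i * frame_len:(i + 1) * frame_len]
            mag = np.abs(np.fft.rfft(frame))
            freqs = np.fft.rfftfreq(frame_len, d=1.0 / sr)
            energy = float(np.mean(frame ** 2))
            zcr = float(np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0)
            centroid = float(np.sum(freqs * mag) / (np.sum(mag) + 1e-12))
            cum = np.cumsum(mag)
            rolloff = float(freqs[np.searchsorted(cum, 0.85 * cum[-1])])
            feats.append([energy, zcr, centroid, rolloff])
        return np.asarray(feats)                  # shape (n_frames, 4)
    ```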
  10. We systematically analyze audio key finding to determine factors important to system design, and the selection and evaluation of solutions. First, we present a basic system, fuzzy analysis spiral array center ...

    Authors: Ching-Hua Chuan and Elaine Chew
    Citation: EURASIP Journal on Advances in Signal Processing 2006 2007:056561
  11. We provide a new solution to the problem of feature variations caused by the overlapping of sounds in instrument identification in polyphonic music. When multiple instruments simultaneously play, partials (har...

    Authors: Tetsuro Kitahara, Masataka Goto, Kazunori Komatani, Tetsuya Ogata and Hiroshi G. Okuno
    Citation: EURASIP Journal on Advances in Signal Processing 2006 2007:051979
  12. We present a discriminative model for polyphonic piano transcription. Support vector machines trained on spectral features are used to classify frame-level note instances. The classifier outputs are temporally...

    Authors: Graham E. Poliner and Daniel P. W. Ellis
    Citation: EURASIP Journal on Advances in Signal Processing 2006 2007:048317
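
    A minimal sketch of the frame-level classification stage this entry describes, assuming one binary SVM per pitch trained on spectral feature vectors, with a median filter standing in for the temporal smoothing; the library (scikit-learn/SciPy), features, and post-processing choices are assumptions for illustration, not the authors' exact pipeline.

    ```python
    # Sketch: per-pitch binary SVMs on spectral features, then simple
    # temporal smoothing of the frame-wise predictions.
    import numpy as np
    from sklearn.svm import SVC
    from scipy.signal import medfilt

    def train_note_classifiers(X_train, Y_train):
        """X_train: (n_frames, n_features); Y_train: (n_frames, n_pitches) binary piano roll."""
        return [SVC(kernel='rbf', C=1.0).fit(X_train, Y_train[:, p])
                for p in range(Y_train.shape[1])]

    def transcribe(models, X):
        raw = np.stack([m.predict(X) for m in models], axis=1)     # frame-wise 0/1
        return np.stack([medfilt(raw[:, p].astype(float), kernel_size=5)
                         for p in range(raw.shape[1])], axis=1)    # smoothed piano roll
    ```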
  13. This paper presents a novel approach to detecting onsets in music audio files. We use a supervised learning algorithm to classify spectrogram frames extracted from digital audio as being onsets or nononsets. F...

    Authors: Alexandre Lacoste and Douglas Eck
    Citation: EURASIP Journal on Advances in Signal Processing 2006 2007:043745
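
    The stage that typically follows such frame-wise onset/non-onset classification is peak picking over the predicted onset probabilities. A small sketch is below; the threshold and hop size are illustrative assumptions, not values from the paper.

    ```python
    # Sketch: turn per-frame onset probabilities into onset times by
    # thresholded local-maximum picking.
    import numpy as np

    def pick_onsets(onset_prob, hop_s=0.010, threshold=0.5):
        onsets = []
        for i in range(1, len(onset_prob) - 1):
            is_peak = (onset_prob[i] > onset_prob[i - 1]
                       and onset_prob[i] >= onset_prob[i + 1])
            if is_peak and onset_prob[i] >= threshold:
                onsets.append(i * hop_s)
        return np.asarray(onsets)
    ```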
  14. The Internet enables us to freely access music as recorded sound and even music scores. For the visually impaired, music scores must be transcribed from computer-based musical formats to Braille music notation...

    Authors: D. Goto, T. Gotoh, R. Minamikawa-Tachino and N. Tamura
    Citation: EURASIP Journal on Advances in Signal Processing 2006 2007:042498
  15. Recently, several music information retrieval (MIR) systems which retrieve musical pieces by the user's singing voice have been developed. All of these systems use only melody information for retrieval, althou...

    Authors: Motoyuki Suzuki, Toru Hosoya, Akinori Ito and Shozo Makino
    Citation: EURASIP Journal on Advances in Signal Processing 2006 2007:038727
  16. Several authors have presented systems that estimate the audio similarity of two pieces of music through the calculation of a distance metric, such as the Euclidean distance, between spectral features calculat...

    Authors: Kris West and Paul Lamere
    Citation: EURASIP Journal on Advances in Signal Processing 2006 2007:024602
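
    In its simplest form, the baseline this entry refers to summarizes each track by a single spectral feature vector and compares tracks with a Euclidean distance. A minimal sketch, assuming mean log-magnitude spectra as the feature (the actual features and framing used in the paper may differ):

    ```python
    # Sketch: mean log-magnitude spectrum per track, Euclidean distance
    # between tracks as an audio similarity estimate.
    import numpy as np
    from scipy.signal import stft

    def spectral_signature(x, sr=44100, nperseg=2048):
        _, _, S = stft(x, fs=sr, nperseg=nperseg)
        return np.mean(np.log1p(np.abs(S)), axis=1)

    def audio_distance(x1, x2, sr=44100):
        return float(np.linalg.norm(spectral_signature(x1, sr)
                                    - spectral_signature(x2, sr)))
    ```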