Audio Classification in Speech and Music: A Comparison between a Statistical and a Neural Approach

Abstract

We focus on the problem of classifying audio into speech and music for multimedia applications. In particular, we present a comparison between two different techniques for speech/music discrimination. The first method is based on the zero-crossing rate and Bayesian classification. It is computationally very simple and gives good results for pure music or pure speech. The simulation results show, however, that performance degrades when the segment contains speech superimposed on music, or music with strong rhythmic components. To overcome these problems, we propose a second method that uses a larger feature set and is based on neural networks (specifically, a multi-layer perceptron). This method achieves better performance at the expense of a limited increase in computational complexity. In practice, the proposed neural network is simple to implement if a suitable polynomial is used as the activation function, and a real-time implementation is possible even on low-cost embedded systems.
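The zero-crossing rate mentioned above can be sketched as follows. This is a minimal illustration of the feature itself, not the authors' implementation; the frame length, hop size, and test signals are arbitrary assumptions. The intuition is that unvoiced speech produces many sign changes per frame, while tonal (musical) content at low frequencies produces few.

```python
import numpy as np

def zero_crossing_rate(frame):
    """Fraction of adjacent-sample pairs whose signs differ."""
    signs = np.sign(frame)
    signs[signs == 0] = 1  # treat exact zeros as positive to avoid spurious crossings
    return np.mean(signs[:-1] != signs[1:])

def frame_zcr(signal, frame_len=512, hop=256):
    """ZCR computed over overlapping frames (frame_len and hop are assumptions)."""
    return np.array([
        zero_crossing_rate(signal[i:i + frame_len])
        for i in range(0, len(signal) - frame_len + 1, hop)
    ])

# Illustration: broadband noise (speech-like) changes sign often, so its ZCR
# is high; a 100 Hz tone crosses zero only ~200 times per second, so its ZCR
# is low at an 8 kHz sampling rate.
fs = 8000
t = np.arange(fs) / fs
noise = np.random.default_rng(0).standard_normal(fs)  # high ZCR
tone = np.sin(2 * np.pi * 100 * t)                    # low ZCR
```

A classifier in the spirit of the first method would then threshold (or apply a Bayesian decision rule to) statistics of this per-frame ZCR sequence, such as its mean and variance over a segment.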

Author information

Correspondence to Alessandro Bugatti.

About this article

Cite this article

Bugatti, A., Flammini, A. & Migliorati, P. Audio Classification in Speech and Music: A Comparison between a Statistical and a Neural Approach. EURASIP J. Adv. Signal Process. 2002, 980905 (2002). doi:10.1155/S1110865702000720

Keywords

  • speech/music discrimination
  • indexing of audio-visual documents
  • neural networks
  • multimedia applications