Open Access

Audio Classification in Speech and Music: A Comparison between a Statistical and a Neural Approach

  • Alessandro Bugatti1,
  • Alessandra Flammini1 and
  • Pierangelo Migliorati1
EURASIP Journal on Advances in Signal Processing 2002, 2002:980905

https://doi.org/10.1155/S1110865702000720

Received: 27 July 2001

Published: 30 April 2002

Abstract

We focus on the problem of audio classification into speech and music for multimedia applications. In particular, we present a comparison between two different techniques for speech/music discrimination. The first method is based on the zero-crossing rate and Bayesian classification. It is very simple from a computational point of view and gives good results for pure music or pure speech. The simulation results show that some performance degradation arises when the segment contains speech superimposed on music, or strong rhythmic components. To overcome these problems, we propose a second method that uses more features and is based on neural networks (specifically, a multi-layer perceptron). In this case we obtain better performance at the expense of a limited increase in computational complexity. In practice, the proposed neural network is simple to implement if a suitable polynomial is used as the activation function, and a real-time implementation is possible even on low-cost embedded systems.
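As an illustration only (the abstract does not give implementation details; the frame length, hop size, and the use of ZCR variance as a discriminating statistic below are assumptions), the frame-wise zero-crossing rate that drives the first method might be computed along these lines:

```python
import numpy as np

def zero_crossing_rate(signal, frame_len=1024, hop_len=512):
    """Frame-wise zero-crossing rate: fraction of adjacent sample
    pairs within each frame whose signs differ."""
    zcrs = []
    for start in range(0, len(signal) - frame_len + 1, hop_len):
        frame = signal[start:start + frame_len]
        # Count sign changes between consecutive samples.
        crossings = np.sum(np.abs(np.diff(np.sign(frame))) > 0)
        zcrs.append(crossings / (frame_len - 1))
    return np.array(zcrs)

if __name__ == "__main__":
    # Toy example: speech alternates voiced/unvoiced segments, so its
    # ZCR sequence tends to fluctuate more than that of sustained music.
    # Statistics of this sequence could then feed a Bayesian classifier.
    sr = 16000
    t = np.arange(sr) / sr
    tone = np.sin(2 * np.pi * 440 * t)                      # "music"-like
    bursts = np.random.randn(sr) * (t % 0.2 < 0.1)          # "speech"-like
    print("ZCR variance, tone  :", zero_crossing_rate(tone).var())
    print("ZCR variance, bursts:", zero_crossing_rate(bursts).var())
```

This is a minimal sketch of the feature, not the authors' classifier; the paper itself should be consulted for the actual feature statistics and decision rules used.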

Keywords

speech/music discrimination; indexing of audio-visual documents; neural networks; multimedia applications

Authors’ Affiliations

(1) Department of Electronics for Automation, University of Brescia

Copyright

© Bugatti et al. 2002