  • Research Article
  • Open Access

Sound Classification in Hearing Aids Inspired by Auditory Scene Analysis

Michael Büchler et al.
EURASIP Journal on Advances in Signal Processing 2005, 2005:387845

https://doi.org/10.1155/ASP.2005.2991

  • Received: 29 April 2004
  • Published:

Abstract

A sound classification system for the automatic recognition of the acoustic environment in a hearing aid is discussed. The system distinguishes the four sound classes "clean speech," "speech in noise," "noise," and "music." A number of features inspired by auditory scene analysis are extracted from the sound signal. These features describe amplitude modulations, spectral profile, harmonicity, amplitude onsets, and rhythm. They are evaluated together with different pattern classifiers. Simple classifiers, such as rule-based and minimum-distance classifiers, are compared with more complex approaches, such as a Bayes classifier, a neural network, and a hidden Markov model. Sounds from a large database are employed for both training and testing of the system. The achieved recognition rates are very high, except for the class "speech in noise." Problems arise in the classification of compressed pop music, strongly reverberated speech, and tonal or fluctuating noises.
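
To illustrate the general approach, the Python sketch below implements a minimum-distance (nearest-centroid) classifier over the four sound classes, one of the simple classifier types mentioned in the abstract. The feature vector layout and all numbers are hypothetical stand-ins for the auditory-scene-analysis features named above, not the authors' actual feature definitions or training data.

    import numpy as np

    # The four target classes from the abstract.
    CLASSES = ["clean speech", "speech in noise", "noise", "music"]

    def train_centroids(features, labels):
        """Compute one mean feature vector (centroid) per class."""
        features, labels = np.asarray(features, dtype=float), np.asarray(labels)
        return {c: features[labels == c].mean(axis=0) for c in CLASSES}

    def classify(feature_vec, centroids):
        """Minimum-distance rule: assign the class whose centroid is closest."""
        x = np.asarray(feature_vec, dtype=float)
        return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

    # Toy usage. Each row is a hypothetical frame-level feature vector:
    # [modulation depth, spectral flatness, harmonicity, onset rate, rhythm strength]
    X = [[0.8, 0.2, 0.9, 0.5, 0.1],   # clean speech
         [0.5, 0.5, 0.6, 0.4, 0.1],   # speech in noise
         [0.1, 0.8, 0.2, 0.2, 0.0],   # noise
         [0.6, 0.3, 0.8, 0.6, 0.9]]   # music
    model = train_centroids(X, CLASSES)
    print(classify([0.75, 0.25, 0.85, 0.5, 0.1], model))  # -> "clean speech"

The more complex classifiers compared in the article (Bayes classifier, neural network, hidden Markov model) would replace the nearest-centroid decision rule while operating on the same kind of feature vectors.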

Keywords and phrases

  • hearing aids
  • sound classification
  • auditory scene analysis

Authors’ Affiliations

(1) ENT Department, University Hospital Zurich, Zurich, CH-8091, Switzerland
(2) Phonak AG, Staefa, CH-8712, Switzerland

Copyright

© Michael Büchler et al. 2005

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
