
  • Research Article
  • Open Access

Source Separation with One Ear: Proposition for an Anthropomorphic Approach

EURASIP Journal on Advances in Signal Processing 2005, 2005:471801

https://doi.org/10.1155/ASP.2005.1365

  • Received: 9 December 2003
  • Published:

Abstract

We present an example of an anthropomorphic approach, in which auditory-based cues are combined with temporal correlation to implement a source separation system. The auditory features are based on spectral amplitude modulation and energy information obtained through 256 cochlear filters. Segmentation and binding of auditory objects are performed with a two-layered spiking neural network. The first layer performs the segmentation of the auditory images into objects, while the second layer binds the auditory objects belonging to the same source. The binding is further used to generate a mask (binary gain) to suppress the undesired sources from the original signal. Results are presented for a double-voiced (2 speakers) speech segment and for sentences corrupted with different noise sources. Comparative results are also given using PESQ (perceptual evaluation of speech quality) scores. The spiking neural network is fully adaptive and unsupervised.
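The abstract's final processing step, applying a binary gain (mask) over a filterbank decomposition to suppress undesired sources, can be illustrated with a minimal sketch. The code below is an assumption-laden stand-in, not the authors' implementation: it uses a small Butterworth filterbank instead of the paper's 256 cochlear filters, and a simple per-cell energy threshold in place of the mask derived from the two-layer spiking neural network's binding output. Channel count, frame size, and threshold are illustrative.

```python
# Sketch of binary-gain masking on a filterbank decomposition.
# The mask here is a placeholder energy threshold; in the paper it is
# produced by the binding layer of the spiking neural network.
import numpy as np
from scipy.signal import butter, lfilter

def filterbank(x, fs, n_channels=32, f_lo=80.0, f_hi=7000.0):
    """Decompose x into bandpass channels on a log-spaced frequency grid."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    channels = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        b, a = butter(2, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        channels.append(lfilter(b, a, x))
    return np.array(channels)              # shape: (n_channels, n_samples)

def binary_mask(channels, frame=256, threshold_db=-25.0):
    """Placeholder mask: keep (channel, frame) cells whose energy exceeds
    a global threshold, standing in for the network's binding decision."""
    n_ch, n = channels.shape
    n_frames = n // frame
    cells = channels[:, :n_frames * frame].reshape(n_ch, n_frames, frame)
    energy = (cells ** 2).mean(axis=2)
    ref = energy.max()
    return 10 * np.log10(energy / ref + 1e-12) > threshold_db

def apply_mask(channels, mask, frame=256):
    """Zero out masked cells and resynthesize by summing the channels."""
    gated = channels.copy()
    for c in range(mask.shape[0]):
        for t in range(mask.shape[1]):
            if not mask[c, t]:
                gated[c, t * frame:(t + 1) * frame] = 0.0
    return gated.sum(axis=0)

if __name__ == "__main__":
    fs = 16000
    t = np.arange(fs) / fs
    mixture = np.sin(2 * np.pi * 440 * t) + 0.3 * np.random.randn(fs)  # toy "source + noise"
    channels = filterbank(mixture, fs)
    mask = binary_mask(channels)
    enhanced = apply_mask(channels, mask)
```

Since the gain is strictly binary, cells attributed to interfering sources are removed entirely rather than attenuated, which matches the suppression scheme described in the abstract.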

Keywords and phrases

  • auditory modeling
  • source separation
  • amplitude modulation
  • auditory scene analysis
  • spiking neurons
  • temporal correlation

Authors’ Affiliations

(1)
Département de Génie Électrique et de Génie Informatique, Université de Sherbrooke, 2500 boulevard de l'Université, Sherbrooke, QC, J1K 2R1, Canada
(2)
Équipe de Recherche en Micro-électronique et Traitement Informatique des Signaux (ERMETIS), Département des Sciences Appliquées, Université du Québec à Chicoutimi, 555 boulevard de l'Université, Chicoutimi, Québec, G7H 2B1, Canada

