  • Research Article
  • Open Access

Acoustic Event Detection Based on Feature-Level Fusion of Audio and Video Modalities

EURASIP Journal on Advances in Signal Processing 2011, 2011:485738

https://doi.org/10.1155/2011/485738

  • Received: 20 May 2010
  • Accepted: 14 January 2011

Abstract

Acoustic event detection (AED) aims at determining the identity of sounds and their temporal position in audio signals. When applied to spontaneously generated acoustic events, AED based only on audio information yields a large number of errors, most of which are due to temporal overlaps. Indeed, temporal overlaps accounted for more than 70% of the errors in the real-world interactive seminar recordings used in the CLEAR 2007 evaluations. In this paper, we improve the recognition rate of acoustic events by using information from both the audio and video modalities. First, the acoustic data are processed to obtain both a set of spectrotemporal features and the 3D localization coordinates of the sound source. Second, a number of features are extracted from the video recordings by means of object detection, motion analysis, and multicamera person tracking to represent the visual counterparts of several acoustic events. A feature-level fusion strategy is used, together with a parallel structure of binary HMM-based detectors. The experimental results show that information from both the microphone array and the video cameras is useful for improving the detection rate of isolated as well as spontaneously generated acoustic events.
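The feature-level fusion described above can be sketched in a few lines: per-frame audio and video feature vectors are concatenated into a single joint observation vector before being passed to the per-class detectors. This is only an illustrative sketch; the function name and feature dimensions are invented here, and the paper's binary HMM-based detectors are omitted.

```python
def fuse_features(audio_frames, video_frames):
    """Concatenate synchronized audio and video feature vectors frame by frame.

    audio_frames, video_frames: lists of per-frame feature vectors (lists of
    floats), assumed to be time-aligned. Returns one joint vector per frame,
    which would then feed a bank of per-class binary detectors.
    """
    if len(audio_frames) != len(video_frames):
        raise ValueError("modalities must be frame-synchronized")
    # Feature-level fusion: the joint observation is the concatenation of
    # the two modality-specific vectors for each frame.
    return [a + v for a, v in zip(audio_frames, video_frames)]

# Toy example with invented numbers:
audio = [[0.1, 0.2], [0.3, 0.4]]   # e.g. spectrotemporal features per frame
video = [[1.0], [2.0]]             # e.g. a motion-based feature per frame
fused = fuse_features(audio, video)
# each fused frame has len(audio_frame) + len(video_frame) components
```

In a full system, each acoustic event class would have its own binary detector (event vs. non-event) trained on these fused vectors, and the detectors would run in parallel over the same observation stream.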

Keywords

  • Recognition Rate
  • Sound Source
  • Object Detection
  • Audio Signal
  • Fusion Strategy

Publisher note

To access the full article, please see PDF.

Authors’ Affiliations

(1)
Department of Signal Theory and Communications, TALP Research Center, Technical University of Catalonia, Campus Nord, Ed. D5, Jordi Girona 1-3, 08034 Barcelona, Spain
