
Detection and Separation of Speech Event Using Audio and Video Information Fusion and Its Application to Robust Speech Interface

  • Futoshi Asano (1),
  • Kiyoshi Yamamoto (2),
  • Isao Hara (1),
  • Jun Ogata (1),
  • Takashi Yoshimura (1),
  • Yoichi Motomura (1),
  • Naoyuki Ichimura (1) and
  • Hideki Asoh (1)
EURASIP Journal on Advances in Signal Processing 2004, 2004:324028

https://doi.org/10.1155/S1110865704402303

Received: 11 November 2003

Published: 18 September 2004

Abstract

A method of detecting speech events in a multiple-sound-source condition using audio and video information is proposed. To detect speech events, sound localization using a microphone array and human tracking by stereo vision are combined by a Bayesian network. From the inference results of the Bayesian network, information on the time and location of speech events can be obtained. This information on the detected speech events is then utilized in a robust speech interface. A maximum likelihood adaptive beamformer is employed as a preprocessor of the speech recognizer to separate the speech signal from environmental noise. The coefficients of the beamformer are updated based on the speech event information. The speech event information is also used by the speech recognizer to extract the speech segments.
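The abstract does not give the beamformer's exact formulation. As an illustration only, the sketch below implements a common ML/MVDR-style adaptive beamformer for a single frequency bin, in which the noise statistics are refreshed only while the audio-video detector reports no speech event, and the weights are re-steered to the detected event location when a speech event begins. All names here (EventDrivenBeamformer, process, steering_vector) and the specific update rule are assumptions for the sketch, not the paper's implementation.

```python
import numpy as np

C = 343.0  # speed of sound [m/s]

def steering_vector(mic_pos, src_pos, freq):
    """Near-field steering vector for one frequency bin.
    mic_pos: (M, 3) microphone coordinates [m]; src_pos: (3,) speech-event
    location (here, taken to come from the detector's inference output)."""
    delays = np.linalg.norm(mic_pos - src_pos, axis=1) / C
    d = np.exp(-2j * np.pi * freq * delays)
    return d / np.sqrt(len(d))

class EventDrivenBeamformer:
    """Single-bin MVDR/ML-style beamformer whose noise covariance is
    adapted only outside detected speech events (hypothetical sketch)."""

    def __init__(self, mic_pos, freq, alpha=0.95, loading=1e-3):
        self.mic_pos, self.freq = mic_pos, freq
        self.alpha = alpha      # forgetting factor for the noise covariance
        self.loading = loading  # diagonal loading for numerical stability
        M = len(mic_pos)
        self.Rn = np.eye(M, dtype=complex)  # running noise covariance
        self.w = np.ones(M, dtype=complex) / M

    def process(self, x, speech_event, src_pos=None):
        """x: (M,) STFT snapshot for this bin; speech_event: bool flag from
        the audio-video detector; src_pos: event location when active.
        Returns the beamformer output sample for this bin."""
        if not speech_event:
            # No speech event: update the noise statistics recursively.
            self.Rn = self.alpha * self.Rn + \
                (1 - self.alpha) * np.outer(x, x.conj())
        elif src_pos is not None:
            # Speech event: steer to the detected location and solve the
            # distortionless weights w = Rn^{-1} d / (d^H Rn^{-1} d).
            M = len(x)
            R = self.Rn + self.loading * np.trace(self.Rn).real / M * np.eye(M)
            d = steering_vector(self.mic_pos, src_pos, self.freq)
            Rinv_d = np.linalg.solve(R, d)
            self.w = Rinv_d / (d.conj() @ Rinv_d)
        return self.w.conj() @ x  # w^H x
```

In this reading, the detector's event flag plays two roles: it gates the noise-covariance adaptation (so speech does not leak into the noise estimate) and it supplies the steering location, matching the abstract's statement that the beamformer coefficients are updated based on the speech event information.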

Keywords

information fusion; sound localization; human tracking; adaptive beamformer; speech recognition

Authors’ Affiliations

(1)
Information Technology Research Institute, National Institute of Advanced Industrial Science and Technology
(2)
Department of Computer Science, University of Tsukuba

Copyright

© Asano et al. 2004

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.