Open Access

Dynamic Bayesian Networks for Audio-Visual Speech Recognition

  • Ara V. Nefian (1),
  • Luhong Liang (2),
  • Xiaobo Pi (2),
  • Xiaoxing Liu (2) and
  • Kevin Murphy (3)
EURASIP Journal on Advances in Signal Processing 2002, 2002:783042

https://doi.org/10.1155/S1110865702206083

Received: 30 November 2001

Published: 28 November 2002

Abstract

The use of visual features in audio-visual speech recognition (AVSR) is justified both by the speech generation mechanism, which is essentially bimodal in its audio and visual representations, and by the need for features that are invariant to acoustic noise perturbation. As a result, current AVSR systems demonstrate significant accuracy improvements in environments affected by acoustic noise. In this paper, we describe the use of two statistical models for audio-visual integration, the coupled HMM (CHMM) and the factorial HMM (FHMM), and compare the performance of these models with existing models used in speaker-dependent audio-visual isolated word recognition. The statistical properties of both the CHMM and the FHMM make it possible to model the state asynchrony of the audio and visual observation sequences while preserving their natural correlation over time. In our experiments, the CHMM performs best overall, outperforming both the existing models and the FHMM.
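To make the structural difference between the two models concrete, the following is a generic sketch of their standard factorizations over an audio state chain $q_t^a$, a visual state chain $q_t^v$, and the corresponding observations; the notation is illustrative and not necessarily that used in the paper. In the CHMM, each modality keeps its own observation stream, and the coupling enters through the transitions, since the next state of each chain depends on the previous states of both chains:

$$P(q_t^a, q_t^v \mid q_{t-1}^a, q_{t-1}^v) = P(q_t^a \mid q_{t-1}^a, q_{t-1}^v)\, P(q_t^v \mid q_{t-1}^a, q_{t-1}^v), \quad \text{with } P(o_t^a \mid q_t^a) \text{ and } P(o_t^v \mid q_t^v).$$

In the FHMM, by contrast, the chains evolve independently and the coupling enters through a single joint observation that depends on both hidden states:

$$P(q_t^a, q_t^v \mid q_{t-1}^a, q_{t-1}^v) = P(q_t^a \mid q_{t-1}^a)\, P(q_t^v \mid q_{t-1}^v), \quad \text{with } P(o_t \mid q_t^a, q_t^v).$$

In both cases the two chains may occupy different states at the same time, which is what permits audio-visual state asynchrony while their dependence (through coupled transitions or a joint observation) preserves the correlation between the modalities.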

Keywords

audio-visual speech recognition; hidden Markov models; coupled hidden Markov models; factorial hidden Markov models; dynamic Bayesian networks

Authors’ Affiliations

(1)
Intel Corporation, Microprocessor Research Labs
(2)
Intel Corporation, Microcomputer Research Labs
(3)
Computer Science Division, University of California, Berkeley

Copyright

© Nefian et al. 2002