
Noise Adaptive Stream Weighting in Audio-Visual Speech Recognition


It has been shown that integrating acoustic and visual information improves speech recognition results, especially in noisy conditions. This raises the question of how to weight the two modalities under different noise conditions. In this paper, we develop a weighting process that adapts to various background noise situations. In the presented recognition system, audio and video data are combined following a Separate Integration (SI) architecture. A hybrid Artificial Neural Network/Hidden Markov Model (ANN/HMM) system is used for the experiments; the neural networks were in all cases trained on clean data. First, we evaluate the performance of different weighting schemes in a manually controlled recognition task with different types of noise. Next, we compare different criteria for estimating the reliability of the audio stream. Based on this, a mapping between these measurements and the free parameter of the fusion process is derived, and its applicability is demonstrated. Finally, the possibilities and limitations of adaptive weighting are compared and discussed.
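The fusion idea described in the abstract can be sketched as follows. In a Separate Integration scheme, per-frame class posteriors from the audio and video streams are typically combined via a weighted product, i.e. a convex combination of their log-scores, governed by a single free parameter. The sketch below is illustrative only, assuming hypothetical names (`fuse_streams`, `snr_to_weight`) and a simple linear SNR-to-weight ramp; the paper derives its actual mapping from measured stream reliability.

```python
import numpy as np

def fuse_streams(p_audio, p_video, lam):
    """Weighted product fusion of per-frame audio/video posteriors.

    Combined score: p_av ∝ p_audio**lam * p_video**(1 - lam),
    a convex combination of the two streams' log-scores.
    lam = 1 trusts audio only; lam = 0 trusts video only.
    """
    log_fused = lam * np.log(p_audio) + (1.0 - lam) * np.log(p_video)
    fused = np.exp(log_fused)
    # Renormalize so the fused scores form a distribution per frame.
    return fused / fused.sum(axis=-1, keepdims=True)

def snr_to_weight(snr_db, snr_min=-5.0, snr_max=25.0):
    """Hypothetical mapping from an estimated SNR (in dB) to the
    audio stream weight lam: a linear ramp clipped to [0, 1].
    The thresholds here are illustrative assumptions, not the
    values derived in the paper.
    """
    return float(np.clip((snr_db - snr_min) / (snr_max - snr_min), 0.0, 1.0))
```

With this convention, clean audio (high SNR) drives `lam` toward 1, so recognition relies mostly on the acoustic stream, while heavy noise drives `lam` toward 0 and shifts the decision to the visual stream.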

Author information



Corresponding author

Correspondence to Martin Heckmann.


About this article

Cite this article

Heckmann, M., Berthommier, F. & Kroschel, K. Noise Adaptive Stream Weighting in Audio-Visual Speech Recognition. EURASIP J. Adv. Signal Process. 2002, 720764 (2002).



Keywords

  • audio-visual speech recognition
  • adaptive weighting
  • robust recognition
  • multi-stream recognition