
  • Research Article
  • Open Access

An Iterative Decoding Algorithm for Fusion of Multimodal Information

EURASIP Journal on Advances in Signal Processing 2007, 2008:478396

https://doi.org/10.1155/2008/478396

  • Received: 16 February 2007
  • Accepted: 26 October 2007
  • Published:

Abstract

Human activity analysis in an intelligent space is typically based on multimodal informational cues. Using multiple modalities offers many advantages, but fusing the information from the different sources is a problem that must be addressed. In this paper, we propose an iterative algorithm to fuse information from multimodal sources, drawing inspiration from the theory of turbo codes. We draw an analogy between the redundant parity bits of the constituent codes of a turbo code and the information from the different sensors in a multimodal system. A hidden Markov model is used to model the sequence of observations of each individual modality. The decoded state likelihoods from one modality are used as additional information in decoding the states of the other modalities, and this procedure is repeated until a convergence criterion is met. The resulting iterative algorithm is shown to have lower error rates than the individual models alone. The algorithm is then applied to a real-world problem of speech segmentation using audio and visual cues.
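
The abstract outlines the core loop: each modality is modeled by its own hidden Markov model, and the decoded state posteriors of one modality are fed back as extra per-frame evidence when re-decoding the others, in the spirit of turbo decoding. The sketch below is a minimal Python illustration of that idea under stated assumptions, not the authors' implementation: it assumes all modalities share the same hidden-state space and frame rate, and the function names (`forward_backward`, `iterative_fusion`) and the `weight` parameter are hypothetical.

```python
import numpy as np
from scipy.special import logsumexp

def forward_backward(log_pi, log_A, log_B):
    """Standard HMM forward-backward pass.

    log_pi : (S,)   log initial-state probabilities
    log_A  : (S, S) log transition matrix
    log_B  : (T, S) per-frame log observation likelihoods
    Returns a (T, S) array of per-frame state log-posteriors.
    """
    T, S = log_B.shape
    alpha = np.zeros((T, S))
    beta = np.zeros((T, S))
    alpha[0] = log_pi + log_B[0]
    for t in range(1, T):
        alpha[t] = log_B[t] + logsumexp(alpha[t - 1][:, None] + log_A, axis=0)
    for t in range(T - 2, -1, -1):
        beta[t] = logsumexp(log_A + log_B[t + 1] + beta[t + 1], axis=1)
    gamma = alpha + beta
    return gamma - logsumexp(gamma, axis=1, keepdims=True)

def iterative_fusion(hmms, log_likes, n_iters=5, weight=1.0):
    """Turbo-style fusion across modalities (hypothetical sketch).

    hmms      : list of (log_pi, log_A) pairs, one per modality
    log_likes : list of (T, S) per-frame log observation likelihoods
    weight    : how strongly the other modalities' posteriors are trusted
    Each modality is re-decoded with the other modalities' current state
    posteriors added as extra per-frame evidence; this is repeated n_iters times.
    """
    n_mod = len(hmms)
    T, S = log_likes[0].shape
    # start from uninformative (uniform) posteriors
    posteriors = [np.full((T, S), -np.log(S)) for _ in range(n_mod)]
    for _ in range(n_iters):
        new_posteriors = []
        for m, (log_pi, log_A) in enumerate(hmms):
            # evidence passed in from the other modalities
            extrinsic = sum(posteriors[k] for k in range(n_mod) if k != m)
            new_posteriors.append(
                forward_backward(log_pi, log_A, log_likes[m] + weight * extrinsic)
            )
        posteriors = new_posteriors
    return posteriors
```

A true turbo scheme exchanges only extrinsic information (the posterior with the locally available evidence subtracted out) rather than the full posterior, and stops when the posteriors change by less than a tolerance; the sketch omits those refinements for brevity.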

Keywords

  • Markov Model
  • Hidden Markov Model
  • Iterative Algorithm
  • Turbo Code
  • Information Fusion

Publisher note

To access the full article, please see PDF.

Authors’ Affiliations

(1)
Department of Electrical and Computer Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA 92093, USA
