An Iterative Decoding Algorithm for Fusion of Multimodal Information


Human activity analysis in an intelligent space is typically based on multimodal informational cues. Using multiple modalities offers several advantages, but fusing information from the different sources is a problem that must be addressed. In this paper, we propose an iterative algorithm to fuse information from multimodal sources. We draw inspiration from the theory of turbo codes, establishing an analogy between the redundant parity bits of the constituent codes of a turbo code and the information from different sensors in a multimodal system. A hidden Markov model is used to model the sequence of observations of each individual modality. The decoded state likelihoods from one modality are used as additional information in decoding the states of the other modalities, and this procedure is repeated until a convergence criterion is met. The resulting iterative algorithm is shown to have lower error rates than the individual models alone. The algorithm is then applied to a real-world problem of speech segmentation using audio and visual cues.
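The exchange of decoded state likelihoods described in the abstract can be sketched as follows. This is a simplified illustration, not the paper's exact algorithm: it assumes two modalities that share a common hidden state space and transition model, runs a standard forward-backward pass per modality, and feeds each modality's per-frame state posteriors back as extra per-state evidence for the other (the paper's turbo-style scheme would pass only the extrinsic part of this information). All function and variable names here are illustrative.

```python
import numpy as np

def forward_backward(A, B_obs, pi, extra_lik=None):
    """Per-frame state posteriors (gamma) for a discrete-state HMM.

    A:        (S, S) state transition matrix
    B_obs:    (T, S) observation likelihoods p(o_t | s) per frame
    pi:       (S,)   initial state distribution
    extra_lik: optional (T, S) additional per-state evidence -- here,
               the posteriors decoded from the other modality, playing
               the role of the turbo decoder's side information.
    """
    T, S = B_obs.shape
    lik = B_obs if extra_lik is None else B_obs * extra_lik
    alpha = np.zeros((T, S))
    beta = np.zeros((T, S))
    # forward pass (normalised per frame for numerical stability)
    alpha[0] = pi * lik[0]
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * lik[t]
        alpha[t] /= alpha[t].sum()
    # backward pass
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (lik[t + 1] * beta[t + 1])
        beta[t] /= beta[t].sum()
    gamma = alpha * beta
    return gamma / gamma.sum(axis=1, keepdims=True)

def iterative_fusion(A, B1, B2, pi, n_iter=5):
    """Alternate decoding: each pass reuses the other modality's
    state posteriors as additional per-frame evidence, iterating a
    fixed number of times (a convergence test could be used instead)."""
    g2 = None
    for _ in range(n_iter):
        g1 = forward_backward(A, B1, pi, extra_lik=g2)
        g2 = forward_backward(A, B2, pi, extra_lik=g1)
    return g1, g2
```

In this toy form, frames where one modality is ambiguous are pulled toward the states that the other modality decodes confidently, which is the intuition behind the reported error-rate reduction over either model alone.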

Author information

Authors and Affiliations


Corresponding author

Correspondence to Shankar T. Shivappa.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Reprints and Permissions

About this article

Cite this article

Shivappa, S.T., Rao, B.D. & Trivedi, M.M. An Iterative Decoding Algorithm for Fusion of Multimodal Information. EURASIP J. Adv. Signal Process. 2008, 478396 (2007).


  • Markov Model
  • Hidden Markov Model
  • Iterative Algorithm
  • Turbo Code
  • Fuse Information