- Research Article
- Open access
An Iterative Decoding Algorithm for Fusion of Multimodal Information
EURASIP Journal on Advances in Signal Processing volume 2008, Article number: 478396 (2007)
Abstract
Human activity analysis in an intelligent space is typically based on multimodal cues. Using multiple modalities offers several advantages, but fusing information from different sources is a problem that must be addressed. In this paper, we propose an iterative algorithm for fusing information from multimodal sources, drawing inspiration from the theory of turbo codes: the redundant parity bits of a turbo code's constituent codes are analogous to the information supplied by the different sensors in a multimodal system. A hidden Markov model is used to model the observation sequence of each modality. The decoded state likelihoods from one modality serve as additional information when decoding the states of the other modalities, and this procedure is repeated until a convergence criterion is met. The resulting iterative algorithm is shown to achieve lower error rates than the individual models alone. The algorithm is then applied to the real-world problem of speech segmentation using audio and visual cues.
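The sketch below illustrates the iterative fusion idea described in the abstract, assuming two HMMs that share the same hidden states but observe different modalities: each pass feeds one modality's per-frame state posteriors back into the other modality's forward-backward decoding as an extra evidence term, turbo-style, until the fused posteriors stop changing. All function names, toy parameters, and the convergence test are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of turbo-style iterative fusion of two HMM-modeled modalities.
# Illustrative only; parameters and convergence test are assumptions.
import numpy as np

def forward_backward(A, pi, obs_lik):
    """Standard HMM forward-backward; obs_lik[t, s] = p(o_t | state s)."""
    T, S = obs_lik.shape
    alpha = np.zeros((T, S))
    beta = np.ones((T, S))
    alpha[0] = pi * obs_lik[0]
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * obs_lik[t]
        alpha[t] /= alpha[t].sum()
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (obs_lik[t + 1] * beta[t + 1])
        beta[t] /= beta[t].sum()
    gamma = alpha * beta
    return gamma / gamma.sum(axis=1, keepdims=True)  # per-frame state posteriors

def iterative_fusion(A, pi, lik_audio, lik_video, n_iter=10, tol=1e-4):
    """Exchange per-frame state posteriors between modalities as extra evidence."""
    extrinsic_a = np.ones_like(lik_audio)   # information fed INTO the audio HMM
    extrinsic_v = np.ones_like(lik_video)   # information fed INTO the video HMM
    prev = None
    for _ in range(n_iter):
        gamma_a = forward_backward(A, pi, lik_audio * extrinsic_a)
        gamma_v = forward_backward(A, pi, lik_video * extrinsic_v)
        # Each modality's posteriors become the other's extrinsic term next pass.
        extrinsic_a, extrinsic_v = gamma_v, gamma_a
        fused = gamma_a * gamma_v
        fused /= fused.sum(axis=1, keepdims=True)
        if prev is not None and np.max(np.abs(fused - prev)) < tol:
            break
        prev = fused
    return fused

if __name__ == "__main__":
    # Toy 2-state example (e.g., speech vs. non-speech) with random frame likelihoods.
    rng = np.random.default_rng(0)
    A = np.array([[0.9, 0.1], [0.2, 0.8]])
    pi = np.array([0.5, 0.5])
    T = 50
    lik_audio = rng.uniform(0.1, 1.0, size=(T, 2))
    lik_video = rng.uniform(0.1, 1.0, size=(T, 2))
    posteriors = iterative_fusion(A, pi, lik_audio, lik_video)
    print(posteriors.argmax(axis=1))  # fused per-frame state decisions
```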
Publisher note
To access the full article, please see PDF.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Shivappa, S.T., Rao, B.D. & Trivedi, M.M. An Iterative Decoding Algorithm for Fusion of Multimodal Information. EURASIP J. Adv. Signal Process. 2008, 478396 (2007). https://doi.org/10.1155/2008/478396
DOI: https://doi.org/10.1155/2008/478396