Semantic Indexing of Multimedia Content Using Visual, Audio, and Text Cues

Abstract

We present a learning-based approach to the semantic indexing of multimedia content using cues derived from audio, visual, and text features. We approach the problem by developing a set of statistical models for a predefined lexicon; novel concepts are then expressed in terms of the concepts in this lexicon. To achieve robust detection of concepts, we exploit features from multiple modalities, namely audio, visual, and text. Concept representations are modeled using Gaussian mixture models (GMMs), hidden Markov models (HMMs), and support vector machines (SVMs). Models such as Bayesian networks and SVMs are used in a late-fusion approach to detect concepts that are not explicitly modeled in terms of low-level features. Our experiments indicate promise in the proposed classification and fusion methodologies: the proposed fusion scheme achieves more than a 10% relative improvement over the best unimodal concept detector.
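The late-fusion step summarized above can be pictured with a small sketch: each unimodal concept detector (audio, visual, text) emits a confidence score per video shot, and an SVM is trained on the stacked scores to produce the fused decision. The data, feature layout, and scikit-learn pipeline below are illustrative assumptions, not the authors' implementation.

    # Hypothetical illustration of SVM-based late fusion over unimodal
    # concept-detector scores; data, shapes, and names are assumptions,
    # not the pipeline used in the paper.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)

    # Toy per-shot confidence scores from three unimodal detectors for one
    # concept: columns = [audio_score, visual_score, text_score].
    n_shots = 500
    scores = rng.random((n_shots, 3))

    # Toy ground truth: the concept is "present" when the (noisy) mean of
    # the unimodal scores is high.
    labels = (scores.mean(axis=1) + 0.1 * rng.standard_normal(n_shots) > 0.5).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(
        scores, labels, test_size=0.3, random_state=0)

    # Late fusion: a single SVM classifier over the stacked unimodal scores.
    fusion_svm = SVC(kernel="rbf", probability=True)
    fusion_svm.fit(X_train, y_train)
    print("fused-detector accuracy:", fusion_svm.score(X_test, y_test))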

Author information

Corresponding author

Correspondence to W. H. Adams.

About this article

Cite this article

Adams, W.H., Iyengar, G., Lin, C. et al. Semantic Indexing of Multimedia Content Using Visual, Audio, and Text Cues. EURASIP J. Adv. Signal Process. 2003, 987184 (2003). https://doi.org/10.1155/S1110865703211173

Keywords

  • query by keywords
  • multimodal information fusion
  • statistical modeling of multimedia
  • video indexing and retrieval
  • SVM
  • GMM
  • HMM
  • spoken document retrieval
  • video event detection
  • video TREC