- Research Article
- Open Access
A System for the Semantic Multimodal Analysis of News Audio-Visual Content
https://doi.org/10.1155/2010/645052
© Vasileios Mezaris et al. 2010
- Received: 24 July 2009
- Accepted: 21 February 2010
- Published: 11 April 2010
Abstract
News-related content is among the most popular types of content consumed by users in everyday applications. Although the generation and distribution of news content have become commonplace, owing to inexpensive media capture devices and media sharing services that target both professional and user-generated news content, the automatic analysis and annotation required to support intelligent search and delivery of this content remains an open issue. In this paper, a complete architecture for knowledge-assisted multimodal analysis of news-related multimedia content is presented, along with its constituent components. The proposed architecture employs state-of-the-art methods for analyzing each individual modality (visual, audio, text) separately and introduces a novel fusion technique, tailored to the particular characteristics of news-related content, for combining the individual modality analysis results. Experimental results on news broadcast video illustrate the usefulness of the proposed techniques in the automatic generation of semantic annotations.
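To make the fusion idea concrete, the following is a minimal late-fusion sketch: it assumes each modality analyzer (visual, audio, text) outputs per-concept confidence scores for a news segment, which are then combined into a single set of semantic annotations. The weighted-sum combination, the weights, and the concept names are illustrative assumptions, not the specific fusion technique proposed in the paper.

```python
# Illustrative late fusion of per-modality analysis results.
# The weights and concept names below are hypothetical, not taken from the paper.
MODALITY_WEIGHTS = {"visual": 0.5, "audio": 0.2, "text": 0.3}


def fuse_modalities(scores_per_modality: dict[str, dict[str, float]]) -> dict[str, float]:
    """Combine per-modality concept confidences into fused annotation scores."""
    fused: dict[str, float] = {}
    for modality, concept_scores in scores_per_modality.items():
        weight = MODALITY_WEIGHTS.get(modality, 0.0)
        for concept, score in concept_scores.items():
            fused[concept] = fused.get(concept, 0.0) + weight * score
    return fused


if __name__ == "__main__":
    # Hypothetical per-modality confidences for one news video shot.
    shot_scores = {
        "visual": {"politics": 0.7, "sports": 0.1},
        "audio": {"politics": 0.6, "sports": 0.2},
        "text": {"politics": 0.9, "sports": 0.05},
    }
    annotations = fuse_modalities(shot_scores)
    print(sorted(annotations.items(), key=lambda kv: -kv[1]))
```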
Keywords
- Automatic Generation
- Individual Modality
- Multimedia Content
- Fusion Technique
- Semantic Annotation
Copyright
This article is published under license to BioMed Central Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.