Open Access

An Attention-Information-Based Spatial Adaptation Framework for Browsing Videos via Mobile Devices

EURASIP Journal on Advances in Signal Processing 2007, 2007:025415

https://doi.org/10.1155/2007/25415

Received: 1 September 2006

Accepted: 3 May 2007

Published: 26 June 2007

Abstract

With the growing popularity of personal digital assistants and smart phones, more and more consumers want to watch videos on mobile devices. However, the limited display size of these devices remains a significant barrier to browsing high-resolution videos. In this paper, we present an attention-information-based spatial adaptation framework to address this problem. The framework consists of two major parts: video content generation and a video adaptation system. During video compression, the attention information in a video sequence is detected with an attention model and embedded into the bitstream using the proposed supplemental enhancement information (SEI) structure. Furthermore, we develop a scheme that adaptively adjusts quantization parameters to simultaneously improve the quality of the overall encoding and the quality of the transcoded attention areas. When the high-resolution bitstream is to be delivered to mobile users, a fast transcoding algorithm we developed earlier is applied to generate a new bitstream covering the attention areas of each frame. This low-resolution bitstream, which contains mostly attention information, is sent to users for display on their mobile devices instead of the high-resolution one. Experimental results show that the proposed spatial adaptation scheme improves both subjective and objective video quality.
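
The abstract does not detail the proposed SEI structure, so the following is a minimal, illustrative sketch only: assuming the attention area of a frame can be summarized as a single rectangle with a saliency weight, it packs that hypothetical record into a standard H.264 user_data_unregistered SEI message (payload type 5). The function name attention_sei_nal, the UUID, and the field layout are assumptions made for illustration, not the authors' actual syntax.

```python
"""Sketch: carry a hypothetical per-frame attention rectangle in an H.264
user_data_unregistered SEI message. Layout and names are assumptions."""
import struct

# Hypothetical 16-byte UUID identifying this private "attention info" payload.
ATTENTION_UUID = bytes.fromhex("6a1f0c3d8e254b7cae0912d45566f788")

def attention_sei_nal(x, y, width, height, saliency=255):
    """Build an SEI NAL unit carrying one attention rectangle (assumed layout)."""
    # Assumed payload: UUID + four 16-bit coordinates + one 8-bit saliency value.
    payload = ATTENTION_UUID + struct.pack(">HHHHB", x, y, width, height, saliency)

    sei = bytearray()
    sei.append(5)                      # last_payload_type_byte: user_data_unregistered
    size = len(payload)
    while size >= 255:                 # payload size is coded in 255-byte chunks
        sei.append(255)
        size -= 255
    sei.append(size)
    sei += payload
    sei.append(0x80)                   # rbsp_trailing_bits (stop bit + alignment)

    # NAL header: forbidden_zero_bit=0, nal_ref_idc=0, nal_unit_type=6 (SEI).
    # Real bitstream insertion would also add emulation-prevention (0x03) bytes.
    return b"\x00\x00\x00\x01" + bytes([0x06]) + bytes(sei)

if __name__ == "__main__":
    print(attention_sei_nal(x=320, y=180, width=352, height=288).hex())
```

A transcoder in the adaptation system could then parse such a message to locate the attention rectangle before cropping and re-encoding the frame for a small display.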

[1–15]

Authors’ Affiliations

(1)
Department of Electronic Engineering and Information Science (EEIS), University of Science and Technology of China

References

  1. Kim J-G, Wang Y, Chang S-F, Kim H-M: An optimal framework of video adaptation and its application to rate adaptation transcoding. ETRI Journal 2005, 27(4): 341-354. doi:10.4218/etrij.05.0105.0019
  2. Chang S-F, Vetro A: Video adaptation: concepts, technologies, and open issues. Proceedings of the IEEE 2005, 93(1): 148-158.
  3. Xin J, Lin C-W, Sun M-T: Digital video transcoding. Proceedings of the IEEE 2005, 93(1): 84-97.
  4. Vetro A, Christopoulos C, Sun H: Video transcoding architectures and techniques: an overview. IEEE Signal Processing Magazine 2003, 20(2): 18-29. doi:10.1109/MSP.2003.1184336
  5. Sinha A, Agarwal G, Anbu A: Region-of-interest based compressed domain video transcoding scheme. Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '04), May 2004, Montreal, Canada, 3: 161-164.
  6. Agarwal G, Anbu A, Sinha A: A fast algorithm to find the region-of-interest in the compressed MPEG domain. Proceedings of the International Conference on Multimedia and Expo (ICME '03), July 2003, Baltimore, Md, USA, 2: 133-136.
  7. Vetro A, Sun H, Wang Y: Object-based transcoding for adaptable video content delivery. IEEE Transactions on Circuits and Systems for Video Technology 2001, 11(3): 387-401. doi:10.1109/76.911163
  8. Shimoga KB: Region of interest based video image transcoding for heterogeneous client displays. Proceedings of the 12th International Packetvideo Workshop (PV '02), April 2002, Pittsburgh, Pa, USA.
  9. Draft ITU-T Recommendation and Final Draft International Standard of Joint Video Specification (ITU-T Rec. H.264 | ISO/IEC 14496-10 AVC), Joint Video Team (JVT) of ISO/IEC MPEG and ITU-T VCEG, JVT-G050, 2003.
  10. Wang Y, Li H, Fan X, Chen CW: An attention based spatial adaptation scheme for H.264 videos on mobiles. International Journal of Pattern Recognition and Artificial Intelligence 2006, 20(4): 565-584. Special issue on Intelligent Mobile and Embedded Systems. doi:10.1142/S0218001406004843
  11. Chen L-Q, Xie X, Fan X, Ma W-Y, Zhang H-J, Zhou H-Q: A visual attention model for adapting images on small displays. Multimedia Systems 2003, 9(4): 353-364. doi:10.1007/s00530-003-0105-4
  12. Ma Y-F, Zhang H-J: Contrast-based image attention analysis by using fuzzy growing. Proceedings of the 11th ACM International Multimedia Conference (MM '03), November 2003, Berkeley, Calif, USA, 374-381.
  13. Hua X-S, Chen X-R, Liu W, Zhang H-J: Automatic location of text in video frames. Proceedings of the ACM International Multimedia Information Retrieval Conference (MIR '01), October 2001, Ottawa, Canada, 24-27.
  14. Fan X, Xie X, Zhou H-Q, Ma W-Y: Looking into video frames on small displays. Proceedings of the 11th ACM International Multimedia Conference (MM '03), November 2003, Berkeley, Calif, USA, 247-250.
  15. JVT reference software (official version), Image Processing Homepage, http://bs.hhi.de/~suehring/tml/

Copyright

© Houqiang Li et al. 2007

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.