
An Attention-Information-Based Spatial Adaptation Framework for Browsing Videos via Mobile Devices

Abstract

With the growing popularity of personal digital assistants and smartphones, more and more consumers are eager to watch videos on mobile devices. However, the limited display size of these devices remains a significant barrier to browsing high-resolution videos. In this paper, we present an attention-information-based spatial adaptation framework to address this problem. The framework comprises two major parts: video content generation and a video adaptation system. During video compression, the attention information in a video sequence is detected using an attention model and embedded into the bitstream with the proposed supplemental enhancement information (SEI) structure. We also develop a scheme that adaptively adjusts quantization parameters to simultaneously improve the overall encoding quality and the transcoding quality of the attention areas. When the high-resolution bitstream is transmitted to mobile users, a fast transcoding algorithm we developed earlier is applied to generate a new bitstream covering the attention areas of the frames. This low-resolution bitstream, containing mostly attention information, is sent to users for display on their mobile devices instead of the high-resolution one. Experimental results show that the proposed spatial adaptation scheme improves both subjective and objective video quality.
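The adaptation pipeline described above can be sketched in two steps: locate a bounding box around the high-attention pixels of a frame, and bias the encoder's quantization toward that area. The sketch below is a minimal, hypothetical illustration of those two steps; the function names, the saliency threshold, and the fixed QP offset are assumptions for illustration, not the authors' actual algorithm. The only fact taken from the standard is that H.264 quantization parameters lie in the range 0 to 51.

```python
# Hypothetical sketch of attention-based spatial adaptation:
# (1) find the attention region of a frame from a per-pixel saliency map,
# (2) spend more bits on it by lowering its quantization parameter (QP).
# Threshold and QP offset values are illustrative assumptions.

def attention_bounding_box(saliency, threshold=0.5):
    """Return (top, left, bottom, right) covering every pixel whose
    attention value meets `threshold`; fall back to the full frame
    when no pixel qualifies.  `saliency` is a 2D list of values in [0, 1]."""
    rows = [r for r, row in enumerate(saliency)
            if any(v >= threshold for v in row)]
    cols = [c for c in range(len(saliency[0]))
            if any(row[c] >= threshold for row in saliency)]
    if not rows or not cols:                 # no attention area detected
        return 0, 0, len(saliency), len(saliency[0])
    return rows[0], cols[0], rows[-1] + 1, cols[-1] + 1


def macroblock_qp(base_qp, in_attention_area, delta=4):
    """Lower QP (finer quantization) inside the attention area and raise
    it outside, clamped to H.264's valid QP range of 0..51."""
    qp = base_qp - delta if in_attention_area else base_qp + delta
    return max(0, min(51, qp))
```

For example, a 4x4 saliency map with high values only in its centre yields a bounding box around those centre pixels, and a base QP of 26 becomes 22 for attention macroblocks and 30 elsewhere, roughly balancing the overall bit budget while favouring the region the viewer is expected to watch.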


Author information


Corresponding author

Correspondence to Houqiang Li.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.



Cite this article

Li, H., Wang, Y. & Chen, C.W. An Attention-Information-Based Spatial Adaptation Framework for Browsing Videos via Mobile Devices. EURASIP J. Adv. Signal Process. 2007, 025415 (2007). https://doi.org/10.1155/2007/25415


Keywords

  • Mobile Device
  • Video Quality
  • Display Size
  • Quantization Parameter
  • Video Compression