3DTV: Capture, Transmission, and Display of 3D Video

Editorial

Extension of visual communications to the third dimension (3DTV) has been a dream for decades: capturing the three-dimensional visual information of a real-life scene and creating an exact optical duplicate of it (except for scale) at a remote site, instantaneously or at a later time, is the ultimate goal in visual communications. However, limitations in visual quality and user acceptance have so far prevented the development of relevant mass markets. Recent achievements in research and development, together with application-driven demand, have triggered increasing interest in 3DTV technologies.

The main functional components of 3DTV are "capture and representation of 3D scene information," "complete definition of the digital 3DTV signal," "storage and transmission of this signal," and finally "display of the reproduced 3D scene." For a successful, consumer-accepted 3DTV service, all of these components must be carefully designed in an integrated fashion, taking into account the interactions among them. Such large-scale integration naturally involves a large group of researchers with diverse backgrounds and is therefore highly multidisciplinary in nature.

In this context, the objective of this special issue is to present, in a well-coordinated fashion, the work of researchers with rather diverse experience in distinct yet related and complementary areas, all aimed at achieving full-scale 3DTV. This effort is a continuation of the 3DTV-CON conference series (http://www.3dtv-con.org) and of the European 3DTV Project within the 6th Framework Information Society Technologies Programme of the European Community.

The first three papers deal with the representation of 3D scene information. The first paper, entitled "Modulating the shape and size of backprojection surfaces to improve accuracy in volumetric stereo" by X. Zabulis and G. Floros, focuses on the 3D reconstruction of imaged scenes and, in particular, on the cue to 3D scene geometry provided by the assumption of texture uniqueness, which gives rise to volumetric stereo approaches. In such approaches, the acquired images are backprojected onto a hypothetical backprojection surface prior to the establishment of stereo correspondences. Methods are proposed to increase the accuracy of the estimated surface location and orientation, the essential information for reconstructing the imaged scene, by applying spatial normalizations in the comparison of backprojected image segments.
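As a rough illustration of the photo-consistency test underlying such volumetric approaches (not the method of the paper itself), the following Python sketch scores two image patches backprojected from different cameras onto the same hypothesized surface element using zero-mean normalized cross-correlation; the patch sampling and the toy example data are assumptions.

```python
import numpy as np

def photo_consistency(patch_a: np.ndarray, patch_b: np.ndarray) -> float:
    """Zero-mean normalized cross-correlation between two backprojected patches.

    Both patches are assumed to sample the same hypothesized surface element
    from two different cameras; a value near 1 indicates the surface
    hypothesis is photo-consistent.
    """
    a = patch_a.astype(float).ravel()
    b = patch_b.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    if denom == 0.0:
        return 0.0  # textureless patches give no evidence either way
    return float(np.dot(a, b) / denom)

# Example: a surface hypothesis would be kept if its score is high enough.
rng = np.random.default_rng(0)
patch = rng.random((8, 8))
noisy = patch + 0.01 * rng.standard_normal((8, 8))
print(photo_consistency(patch, noisy))  # close to 1.0
```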

The second paper in this group, entitled "Motion segmentation for time-varying mesh sequences based on spherical registration" by T. Yamasaki and K. Aizawa, presents a robust motion segmentation and retrieval technique for time-varying meshes. In conventional approaches, the motion of the objects is analyzed using shape feature vectors extracted from the time-varying mesh frames. Here, an algorithm is developed that analyzes object motion directly in 3D space using spherical registration based on the iterative closest point (ICP) algorithm: rough motion tracking is conducted, and the degree of motion is robustly calculated. The approach is straightforward and achieves considerably better results than the conventional methods.
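For readers unfamiliar with the underlying machinery, the following is a minimal, generic ICP sketch in Python that rigidly aligns the vertex sets of two successive frames; it is not the spherical registration of the paper, and the fixed iteration count and the k-d tree correspondence search are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_rigid(src: np.ndarray, dst: np.ndarray, iters: int = 20):
    """Rough rigid alignment of two vertex sets (N x 3 and M x 3) by ICP.

    Returns a rotation R and translation t such that src @ R.T + t
    approximately matches dst. The magnitude of the estimated motion can then
    serve as a simple per-frame "degree of motion" measure.
    """
    R = np.eye(3)
    t = np.zeros(3)
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(iters):
        # 1) closest-point correspondences
        _, idx = tree.query(cur)
        matched = dst[idx]
        # 2) best rigid transform for these correspondences (Kabsch)
        mu_s, mu_d = cur.mean(axis=0), matched.mean(axis=0)
        H = (cur - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ D @ U.T
        t_step = mu_d - R_step @ mu_s
        # 3) apply the incremental transform and accumulate it
        cur = cur @ R_step.T + t_step
        R = R_step @ R
        t = R_step @ t + t_step
    return R, t
```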

The third paper of the 3D representation group, entitled "Motion editing for time-varying mesh" by J. Xu et al., presents a system for motion editing of a time-varying mesh that reuses the original data and reorganizes the captured motions. This is achieved by constructing a motion graph that connects motions with smooth transitions and by applying a modified Dijkstra algorithm to obtain a new sequence satisfying the user's requirements. The expensive and time-consuming generation of new time-varying mesh sequences is thereby avoided.
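The following Python sketch illustrates only the graph-search step: a plain Dijkstra search over a toy motion graph whose edge weights stand for transition costs. The paper's modifications and user constraints are not reproduced, and the clip names and costs are made up for illustration.

```python
import heapq

def shortest_motion_path(graph, start, goal):
    """Standard Dijkstra search over a motion graph.

    `graph` maps a clip id to a list of (next_clip, transition_cost) pairs,
    where edges exist only between clips that can be blended smoothly.
    Returns (total_cost, list_of_clips) or (inf, []) if no path exists.
    """
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    visited = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        if node == goal:
            path = [node]
            while node != start:
                node = prev[node]
                path.append(node)
            return d, path[::-1]
        for nxt, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                prev[nxt] = node
                heapq.heappush(heap, (nd, nxt))
    return float("inf"), []

# Toy motion graph: clips A..D with hypothetical transition costs.
motion_graph = {
    "A": [("B", 1.0), ("C", 4.0)],
    "B": [("C", 1.5), ("D", 5.0)],
    "C": [("D", 1.0)],
}
print(shortest_motion_path(motion_graph, "A", "D"))  # (3.5, ['A', 'B', 'C', 'D'])
```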

The second group of three papers is related to the coding and transmission of the 3D video signal. The first paper of this group, entitled "Motion vector sharing and bitrate allocation for 3D video-plus-depth coding" by I. Daribo et al., analyzes the coding efficiency of 3D video-plus-depth by exploiting the fact that the texture video and the depth map sequences are strongly correlated. In this context, the amount of information needed to describe the motion of the texture video and of the depth map sequence is reduced by sharing a single common motion vector field. A new bitrate allocation strategy between the texture and its associated per-pixel depth information is also proposed.
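To make the allocation idea concrete, the sketch below performs a brute-force search for the texture/depth rate split that minimizes a combined distortion estimate under a total-rate constraint; the 1/R distortion models and the weighting are placeholders for illustration, not the models used in the paper.

```python
def allocate_bitrate(total_kbps, texture_distortion, depth_distortion, steps=99):
    """Brute-force search for the texture/depth bitrate split that minimizes
    a combined distortion estimate under a total-rate constraint.

    `texture_distortion` and `depth_distortion` are caller-supplied models
    mapping a bitrate to an estimated distortion (e.g. MSE of the decoded
    texture and of the view synthesized from the depth map).
    """
    best = None
    for i in range(1, steps + 1):
        r_tex = total_kbps * i / (steps + 1)
        r_dep = total_kbps - r_tex
        d = texture_distortion(r_tex) + depth_distortion(r_dep)
        if best is None or d < best[0]:
            best = (d, r_tex, r_dep)
    return best

# Toy 1/R distortion models just to exercise the search.
split = allocate_bitrate(1000.0,
                         texture_distortion=lambda r: 4000.0 / r,
                         depth_distortion=lambda r: 1000.0 / r)
print(split)  # texture receives roughly two thirds of the budget
```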

The next paper dealing with 3D coding, entitled "The emerging MVC standard for 3D video services" by Y. Chen et al., elaborates on key aspects of the system, transport interface, and decoder design of the multiview video coding (MVC) approach. Techniques needed to meet the requirements of typical 3D services and system architectures are also introduced. These solutions focus on two aspects: features that facilitate the storage and transport of MVC bitstreams, and features that minimize decoder resource consumption.

The third paper of this group, by A. S. Tan et al., deals with 3D video transmission and addresses the problem of "rate-distortion optimization for stereoscopic video streaming with unequal error protection." A methodology is proposed for modeling the end-to-end rate-distortion behavior and for dynamically adjusting the source compression ratio in response to channel conditions so that the overall distortion is minimized.
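The following sketch illustrates the general flavor of such joint source-channel optimization: candidate protection overheads are swept and the one minimizing a modeled end-to-end distortion is kept. The residual-loss model, penalty weight, and encoder model below are deliberately crude placeholders and are not taken from the paper.

```python
def optimize_uep(total_kbps, channel_loss, source_distortion, overheads):
    """Sweep candidate FEC overheads and keep the one minimizing a modeled
    end-to-end distortion: encoder distortion at the remaining source rate,
    plus a penalty for packets the chosen protection fails to recover.
    """
    penalty = 200.0  # assumed concealment distortion per unit residual loss
    best = None
    for overhead in overheads:
        source_rate = total_kbps * (1.0 - overhead)
        residual_loss = max(0.0, channel_loss - overhead)  # toy residual-loss model
        d = source_distortion(source_rate) + penalty * residual_loss
        if best is None or d < best[0]:
            best = (d, overhead, source_rate)
    return best

# Example with a toy 1/R encoder model and 5% packet loss.
result = optimize_uep(2000.0, 0.05,
                      source_distortion=lambda r: 8000.0 / r,
                      overheads=[0.0, 0.02, 0.05, 0.10, 0.20])
print(result)  # a 5% overhead wins under these assumed models
```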

The next group of three papers elaborates on 3D display issues. The first paper of this group, entitled "Pattern projection with a sinusoidal phase grating" by E. Stoykova et al., presents a pattern-projection profilometric system that exploits the diffractive properties of a sinusoidal phase grating used as the projection element in a multisource, multicamera phase-shifting setting. The challenges to the successful operation of such a system, which stem from inherent limitations of the phase-shifting algorithm, are identified.

The next paper of the 3D display group, entitled "Basic holographic characteristics of a panchromatic light sensitive material for reflective auto stereoscopic 3D display" by Ts. Petrova et al., presents recently obtained results on the development of an ultra-fine-grain panchromatic silver halide emulsion for high-quality recording of RGB reflection holograms for autostereoscopic video displays. The average grain size in the emulsion is less than 10 nm, which ensures high resolution, high diffraction efficiency, and a high signal-to-noise ratio over a large dynamic range for RGB reflective holographic recording.

The third paper dealing with 3D displays, entitled "Dynamic resolution in GPU-accelerated volume rendering to autostereoscopic multiview lenticular displays" by D. Ruijters, presents a cost-effective and easy-to-use method for accelerated direct volume rendering on multiview lenticular displays. Thanks to GPU acceleration, combined with adaptive adjustment of the resolution of the intermediate views, interactive frame rates that allow virtual manipulation of the rendered scene can be reached.
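A simple way to picture the dynamic-resolution idea (not the paper's actual scheme) is a feedback rule that scales the intermediate-view resolution according to the measured frame time; the target frame time, proportional correction, and bounds below are assumptions for illustration.

```python
def adjust_scale(scale, frame_ms, target_ms=40.0, lo=0.25, hi=1.0):
    """Feedback rule for the intermediate-view resolution scale.

    If the last frame took longer than the target frame time, render the
    intermediate views at a lower resolution (and upscale on the display);
    if there is headroom, raise the resolution again, up to full resolution.
    """
    scale *= target_ms / frame_ms      # proportional correction
    return max(lo, min(hi, scale))

# Example: interaction makes a frame take 80 ms, so resolution is halved;
# once frames take only 20 ms, full resolution is restored.
print(adjust_scale(1.0, frame_ms=80.0))   # 0.5
print(adjust_scale(0.5, frame_ms=20.0))   # 1.0
```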

Finally, the last paper of the special issue, entitled "A flexible client-driven 3DTV system for real-time acquisition, transmission, and display of dynamic scenes" by X. Cao et al., presents an end-to-end 3DTV system in which multiview video streams are captured, compressed, transmitted, and finally converted to high-quality 3D or free-viewpoint video in real time. The system consists of a camera array, 16 producer PCs, a streaming server, multiple clients, and several autostereoscopic displays. The entire system is implemented over an IP network and provides multiple users with interactive 2D/3D switching, viewpoint control, and dynamic scene synthesis.

Interest in 3D has never been greater, and research and development on 3D imaging systems is gaining significant momentum. 3D visual information is expected to be used increasingly thanks to advances in 3DTV-related technologies, which are the subject of this special issue.

G. A. Triantafyllidis

A. Enis Çetin

Aljoscha Smolic

Levent Onural

Thomas Sikora

John Watson

Corresponding author

Correspondence to G. A. Triantafyllidis.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Cite this article

Triantafyllidis, G.A., Enis Çetin, A., Smolic, A. et al. 3DTV: Capture, Transmission, and Display of 3D Video. EURASIP J. Adv. Signal Process. 2009, 585216 (2008). https://doi.org/10.1155/2009/585216
