Perspective transform motion modeling for improved side information creation
EURASIP Journal on Advances in Signal Processing volume 2013, Article number: 189 (2013)
Abstract
The distributed video coding (DVC) paradigm is based on two well-known information theory results: the Slepian-Wolf and Wyner-Ziv theorems. In a DVC codec, the video signal correlation is mostly exploited at the decoder, providing a flexible distribution of the computational complexity between the encoder and the decoder, as well as robustness to channel errors. To exploit the temporal correlation, an estimate of the original frame to code, well known as side information, is typically created at the decoder. One popular approach to side information creation is to perform frame interpolation using a translational motion model derived from already decoded frames. However, this translational model fails to estimate complex camera motions, such as zooms and rotations, and is not accurate enough to estimate the true trajectories of scene objects. In this paper, a new side information creation framework integrating perspective transform motion modeling is proposed. This solution is able to better track, locally, the trajectories and deformations of each object and to increase the accuracy of the overall side information estimation process. Experimental results show peak signal-to-noise ratio gains of up to 1 dB in side information quality and up to 0.5 dB in rate-distortion performance for some video sequences when compared with state-of-the-art alternative solutions.
1. Introduction
Nowadays, image, video, and audio digital coding technologies are widely used by a significant part of the world population. This leads to a huge volume of data being transmitted and stored, especially when video information is involved. The key objective of digital audiovisual coding techniques is to compress the original information into the minimum number of bits for a target decoded signal quality, eventually also fulfilling other relevant requirements such as error resilience, random access, and scalability. Nowadays, most digital video-enabled services and devices use the popular H.264/AVC (Advanced Video Coding) standard, a joint ITU-T and ISO/IEC MPEG effort, which typically provides up to 50% compression efficiency gain (this means about half the rate for the same perceptual quality) compared to the previously available standards [1]. However, the state of the art on predictive video coding has been evolving, and in January 2013, another milestone on predictive video coding emerged, the so-called High Efficiency Video Coding (HEVC) standard [2], which again brings about 50% additional compression compared to the H.264/AVC High profile solution [3] while increasing the encoding complexity. On the one hand, the new HEVC standard has a higher complexity encoder, several times more complex than H.264/AVC encoders, and a real-time implementation will be a subject of research in the near future [4]. On the other hand, an HEVC decoder has a rather similar complexity to an H.264/AVC decoder, which means that the standardization trend of developing rather high encoder complexity when compared to the decoder complexity is still present. This type of complexity budget suits well the downlink broadcast model, where few powerful encoders provide coded content to many simpler and cheaper decoders.
However, some emerging video applications are not well characterized by the downlink model but rather follow an uplink model, where some simple devices deliver information to a central, eventually rather complex, receiver. Examples of these applications are wireless low-power video surveillance, visual sensor networks, mobile video communications, and deep-space applications. Typically, these emerging applications require light encoding, or at least a flexible distribution of the video codec complexity, robustness to packet losses, high compression efficiency, and, often, low latency/delay as well. These novel requirements and needs led to the emergence of a new video coding paradigm based on the Slepian-Wolf and Wyner-Ziv (WZ) Information Theory theorems [5, 6], well known as distributed video coding (DVC). DVC targets light video encoding systems while theoretically achieving the same compression efficiency as the best predictive video coding schemes available (for specific conditions defined by the theorems), improved error resilience, and codec-independent scalability [7]. Moreover, DVC allows exploiting the inter-view correlation in multiview video scenarios with the architectural advantage that the encoders do not need to communicate among them. In the DVC paradigm, it is crucial that the correlation between the video frames can be efficiently exploited at the decoder side (as the encoder no longer exploits this correlation as in predictive video coding). Therefore, the decoder module responsible for this task, the side information (SI) creation module, is rather critical, as its performance strongly impacts the overall DVC codec performance. Typically, to create each side information frame, a translational motion model, representing the motion of each block with a motion vector, is used in most DVC codecs [8].
However, this motion representation is not powerful and accurate enough to efficiently estimate complex motions, like rotations, zooms, and object deformations, and may lead to motion field discontinuities and inaccuracies. To overcome the translational motion model limitations, it is necessary to exploit the capabilities of more advanced motion models to estimate the SI frame. In addition, SI creation techniques can also be used to enhance the performance of a predictive video decoder when errors corrupt the video bitstream; for example, when an entire video frame is lost (a plausible occurrence in packet loss networks), SI creation techniques can conceal this frame rather efficiently. Another possible use is frame rate up-conversion, notably when the encoder drops some frames to save bitrate to meet the constraints of a bandwidth-limited channel; in this case, the decoder can still obtain a reliable estimate of the lost frame, minimizing the error propagation in a predictive group of pictures (GOP) structure.
In this context, this paper proposes a novel side information creation framework that exploits a perspective transform motion model to more accurately represent the temporal correlation between the video frames, thus obtaining better SI quality when compared with the typical translational motion model SI creation solutions. The proposed SI creation solution obtains two estimations for each SI block: one using the popular translational motion-compensated frame interpolation (MCFI) method [9] and another using the proposed perspective transform motion model; then, for each block, the best approach is selected and the final SI frame is created by appropriately combining, at the block level, the two preliminary SI estimations. In addition, to efficiently deal with occlusions, motion models are created for both temporal directions between reference frames (this means backward to forward and forward to backward) and the results are appropriately fused.
The rest of this paper is organized as follows: in Section 2, some relevant SI creation techniques and DVC solutions using advanced motion models available in the literature are reviewed; next, Section 3 presents the architecture and walkthrough of the proposed SI creation solution, while Section 4 provides a detailed description of the proposed techniques; in Section 5, the performance of the proposed SI creation solution is assessed in terms of both SI quality, through a peak signal-to-noise ratio (PSNR) metric, and rate-distortion (RD) performance in the context of a state-of-the-art DVC solution; and finally, Section 6 presents some conclusions and future work.
2. Reviewing side information creation techniques and advanced motion models
The early Stanford DVC solution is characterized by frame-based Slepian-Wolf coding, at the beginning using turbo codes and later low-density parity-check (LDPC) codes, and assumes the availability of a feedback channel to perform rate control at the decoder [10]. The DISCOVER codec adopts the same architecture and includes state-of-the-art solutions for most of its modules, such as LDPC codes, advanced (translational) motion-compensated frame interpolation, online estimation of the correlation noise model, and a cyclic redundancy check (CRC) error detection code [8]. Although all DVC decoder techniques are important to reach the best RD performance, SI creation has a critical impact on the distributed video codec compression efficiency. In the past, several SI creation techniques using only past decoded frames to create the SI have been proposed. In [9], a translational SI creation framework using a regularization criterion for motion estimation, an adaptive search range, two block sizes, and a spatial motion vector median filter is proposed. In [11], mesh-based motion estimation and interpolation aims to better represent the motion field, especially for scenes composed of large objects and/or scenes with dominant camera motion. In [12], the motion between the K previously decoded frames (i − K, …, i − 1) is tracked to extrapolate the motion field for frame i using a Kalman filtering approach. In [13], a method called high-order motion interpolation (HOMI) was proposed where the SI is created with two reference frames from the past and two reference frames from the future. The motion trajectory is interpolated using information obtained from four frames instead of the (typical) two-frame scenario as in the MCFI method, thus allowing the adoption of more complex motion models, such as nonlinear motion models. Bjontegaard delta PSNR (BD-PSNR) improvements of 0.044 to 0.141 dB are achieved when compared to the usual DISCOVER MCFI approach.
In [14], an autoregressive (AR) model is used to generate the SI frames targeting a low-delay DVC scenario. Each SI pixel is calculated as a linear weighted summation of pixels within a window in the previous reconstructed frames. An accurate AR model must be estimated to obtain high-quality SI; in such a case, two weighting coefficient sets are computed for each SI block, which allow creating two SI frames that are fused together with a simple extrapolation SI frame according to a probability model. The RD performance results are rather promising when compared to other state-of-the-art motion extrapolation results available in the literature. In [15], an SI creation approach is proposed where the first steps include forward and bidirectional YUV motion estimation algorithms with variable block size. Then, spatial motion smoothing and motion refinement are performed, and an adaptive overlapped block motion compensation algorithm is applied to a few selected neighboring motion vectors. The results show up to 0.4 dB improvements when compared to the DISCOVER MCFI-based method. In [16], an optical flow-based frame interpolation method was proposed and combined with an overlapped block motion compensation (OBMC)-based frame interpolation method. A multi-hypothesis transform-domain DVC decoder exploits the SI frames interpolated by both schemes with the help of a weighted joint distribution obtained from the (individual) correlation noise distribution associated with each SI frame. The total variation-L1 (TV-L1) norm optical flow algorithm was used to compute two flow fields, the backward and the forward flow, leading to two SI frame estimations which are then combined. This type of approach was also followed in [17], where a dense motion field was computed with a pixel-recursive Cafforio-Rocca algorithm adapted to the frame interpolation context. The RD performance improvements reach up to 2% gains over the DISCOVER MCFI scheme.
In [18], it is proposed to combine global and local (MCFI-based) motion compensation estimation at the decoder side to improve the SI quality. To create a global motion-compensated (GMC) estimation, scale-invariant feature transform (SIFT) features are extracted and a global affine motion model is computed at the encoder side. Then, the global affine motion model parameters are transmitted to the decoder to generate the GMC estimation, which is fused with the MCFI estimation, at the block level, to obtain the final SI frame. With the proposed SI creation method, it is possible to systematically outperform the standard H.264/AVC Intra and H.264/AVC zero-motion coding solutions for all the video sequences evaluated. However, despite the RD performance improvements shown, this approach significantly increases the encoding complexity through the calculation of the SIFT features, the matching, and the global motion model estimation, which is not desirable in a DVC scenario where low encoder complexity is targeted. Thus, it has become clear that, to further improve the SI quality produced by motion-compensated frame interpolation schemes (and thus obtain further compression efficiency gains), more accurate and reliable motion models must be used. A promising approach is to estimate the deformation of objects along time with higher-order motion models that are able to more accurately describe the geometric transformations between reference frames when compared to the simpler translational motion models. With more advanced motion models, it is possible to obtain better predictions in predictive video coding schemes (and lower bitrates for the residual signals) and higher-quality SI frames in distributed video coding schemes (and fewer SI errors to correct with lower bitrates).
These motion models allow warping (transforming) a quadrilateral of any size and shape from one reference frame to a block (square) of the current frame, thus obtaining a more general representation of camera and object motions, as described next.
There are several different motion models that can be used to warp blocks between frames, such as the affine, projective, and bilinear motion models [19]. In the past, these motion models have been used in the context of predictive video codecs to obtain more accurate predictions, especially when non-translational motion is present. For example, in [20], block matching with several first-order geometric transforms was proposed, and significant RD performance improvements were obtained when compared to simple translational block matching. More recently, a parametric motion representation was proposed [21] where a Kanade-Lucas-Tomasi (KLT) tracking algorithm is adopted to match correspondence points and a motion segmentation algorithm is used to compute several parameter sets for a perspective transform; then, several warped reference pictures are generated and the best one is selected for the coding process. In [22], a parametric Skip mode is proposed using a parametric motion estimation algorithm with cubic spline interpolation to obtain a set of motion model parameters for each frame; these parameters describe the global motion between the current and last decoded frames. Then, a new zero-residue Skip mode makes use of the estimated parameters to obtain a new prediction that can be selected for each block (although it is more typically used for background areas that may be well described with global motion). In the past, advanced motion models have brought significant advances in terms of coding performance for predictive video codecs at the cost of higher encoding and decoding complexity. However, these advanced motion models have not yet been exploited in the context of distributed video codecs, i.e., SI creation solutions employing advanced motion models based on geometric transforms to perform the SI frame interpolation are not available in the literature.
3. Bidirectional perspective side information creation: basics and architecture
The first task of a distributed video encoder based on the popular Stanford architecture is to classify the video frames into WZ frames and key frames; typically, the key frames are periodically inserted, determining the GOP size. While the key frames are Intra-encoded, this means without exploiting the temporal redundancy, the WZ frames are encoded in a distributed way by using error-correcting codes; for the WZ frames, the decoder generates the SI with the help of already decoded WZ and key frames, the so-called reference frames. In this paper, a novel bidirectional perspective side information (BPSI) creation framework is proposed to model the motion between reference frames and obtain SI frames with improved quality. The proposed BPSI creation solution makes use of the perspective motion model by warping quadrilaterals from both (backward and forward) reference frames to estimate each SI block.
3.1 Perspective transform
Among the several motion models available in the literature, the eight-parameter perspective motion model was selected in this paper, due to its popularity [20–23] and also its ability to model complex motions, like zooms, rotations, and perspective deformations. The perspective transform provides a quadrilateral-to-quadrilateral mapping with the following representation:
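In the standard homogeneous-coordinate form:

$$
\begin{bmatrix} x' \\ y' \\ w \end{bmatrix}
=
\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}
\begin{bmatrix} u \\ v \\ 1 \end{bmatrix},
\qquad
x = \frac{x'}{w}, \quad y = \frac{y'}{w}
\tag{1}
$$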
where {a_{11}, a_{12}, a_{13}, a_{21}, a_{22}, a_{23}, a_{31}, a_{32}, a_{33}} are the perspective transform parameters, (u,v) is an input quadrilateral vertex, and (x,y) is the corresponding output warped quadrilateral vertex. To compute the eight perspective motion parameters, four pairs of corresponding vertices, (u,v) and (x,y), are needed. Each pair of corresponding vertices defines a perspective transform vector, which is then used for the estimation of the best perspective transform model.
As usual, it is assumed that a_{33} = 1, while the remaining eight parameters are determined by solving the linear system in (2), using the four vertices of the input quadrilateral and the four corresponding vertices of the output quadrilateral:
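This is the standard system for fitting a projective mapping to four vertex correspondences (u_i, v_i) → (x_i, y_i), i = 1, …, 4:

$$
\begin{bmatrix}
u_1 & v_1 & 1 & 0 & 0 & 0 & -u_1 x_1 & -v_1 x_1 \\
u_2 & v_2 & 1 & 0 & 0 & 0 & -u_2 x_2 & -v_2 x_2 \\
u_3 & v_3 & 1 & 0 & 0 & 0 & -u_3 x_3 & -v_3 x_3 \\
u_4 & v_4 & 1 & 0 & 0 & 0 & -u_4 x_4 & -v_4 x_4 \\
0 & 0 & 0 & u_1 & v_1 & 1 & -u_1 y_1 & -v_1 y_1 \\
0 & 0 & 0 & u_2 & v_2 & 1 & -u_2 y_2 & -v_2 y_2 \\
0 & 0 & 0 & u_3 & v_3 & 1 & -u_3 y_3 & -v_3 y_3 \\
0 & 0 & 0 & u_4 & v_4 & 1 & -u_4 y_4 & -v_4 y_4
\end{bmatrix}
\begin{bmatrix}
a_{11} \\ a_{12} \\ a_{13} \\ a_{21} \\ a_{22} \\ a_{23} \\ a_{31} \\ a_{32}
\end{bmatrix}
=
\begin{bmatrix}
x_1 \\ x_2 \\ x_3 \\ x_4 \\ y_1 \\ y_2 \\ y_3 \\ y_4
\end{bmatrix}
\tag{2}
$$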
When square-to-quadrilateral mappings occur, as shown in Figure 1, the linear system in (2) can be simplified, leading to a lower-complexity process.
After the perspective transform parameters a are found, any (u,v) point can be warped into an (x,y) point according to the perspective motion model using:
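With a_{33} = 1, the mapping takes the standard rational form:

$$
x = \frac{a_{11} u + a_{12} v + a_{13}}{a_{31} u + a_{32} v + 1},
\qquad
y = \frac{a_{21} u + a_{22} v + a_{23}}{a_{31} u + a_{32} v + 1}
\tag{3}
$$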
With (3), the warped coordinates (x,y) associated with the corresponding (u,v) coordinates can be calculated. Notice that motion models with fewer parameters are frequently employed in predictive video codecs, since the motion parameters have to be transmitted to the decoder, in many cases with a significant impact on the final bitrate and, thus, on the overall RD performance. However, in this paper, the motion model parameters are not transmitted, since they are created and used at the decoder to create the SI frame, thus allowing more powerful motion representations without any bitrate penalty.
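As a concrete illustration of (2) and (3), the parameter estimation and point warping just described can be sketched in a few lines of Python (a minimal, dependency-free sketch; the function names are ours, not from the codec described here):

```python
def perspective_params(src, dst):
    # src, dst: four (u, v) -> (x, y) vertex correspondences.
    # Build the 8x8 linear system of (2), with a33 fixed to 1.
    A, b = [], []
    for (u, v), (x, y) in zip(src, dst):
        A.append([u, v, 1, 0, 0, 0, -u * x, -v * x]); b.append(x)
        A.append([0, 0, 0, u, v, 1, -u * y, -v * y]); b.append(y)
    # Plain Gaussian elimination with partial pivoting (no external deps).
    n = 8
    for i in range(n):
        piv = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[piv] = A[piv], A[i]
        b[i], b[piv] = b[piv], b[i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for c in range(i, n):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    p = [0.0] * n
    for i in range(n - 1, -1, -1):
        p[i] = (b[i] - sum(A[i][c] * p[c] for c in range(i + 1, n))) / A[i][i]
    return p  # [a11, a12, a13, a21, a22, a23, a31, a32]

def warp_point(p, u, v):
    # Apply the perspective mapping of (3) to a single (u, v) point.
    w = p[6] * u + p[7] * v + 1.0
    return ((p[0] * u + p[1] * v + p[2]) / w,
            (p[3] * u + p[4] * v + p[5]) / w)
```

For instance, fitting the transform that maps the unit square onto the same square translated by (2, 3) recovers a pure translation, and warping the square's center lands on the translated center.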
3.2 Walkthrough
The high-level architecture of the proposed BPSI creation solution is shown in Figure 2. To achieve the best SI quality and RD performance, the BPSI architecture includes two SI estimation branches: a conventional translational SI creation branch, using the popular SI creation solution proposed in [9] and adopted in the DISCOVER DVC solution [8], and a novel advanced motion modeling branch exploiting the perspective transform characteristics.
Before describing the BPSI architectural modules in detail (see the next section), this subsection presents a walkthrough of the full BPSI process to first explain, in a concise way, the overall high-level SI creation processing flow. In Figure 2, the shaded blocks correspond to the novel techniques proposed in this paper, notably with respect to the popular translational frame interpolation solution [9]. The BPSI creation framework still includes several translational techniques, notably backward motion estimation (ME), forward ME, bidirectional ME, and spatial motion filtering, which are implemented using conventional solutions [9]. The proposed BPSI architecture intends to exploit the best of the translational and perspective SI creation approaches by selecting, at the block level, one of the approaches to generate the SI frame, thus adapting to the specific local content characteristics.
The conventional BPSI translational motion branch targets the creation of blocklevel SI candidates as follows:

1. Backward ME - Using both (previously decoded) reference frames, motion estimation with a 16 × 16 block size and full-pixel accuracy is performed in the backward direction, i.e., from the forward/future reference frame, X_f, to the backward/past reference frame, X_b. The motion estimation is performed using a weighted mean absolute difference (MAD) criterion [9] and provides a good starting point for the backward perspective transform search performed in step 7 and for the bidirectional ME technique described next. This process generates a backward translational motion field.

2. Bidirectional ME (16 × 16 and 8 × 8) - This step targets the refinement of the backward translational motion field already computed. With this purpose, the bidirectional translational ME is performed twice, first for 16 × 16 blocks and then for 8 × 8 blocks, always starting from the backward translational motion field, to replicate the MCFI approach [9]. This technique incorporates several additional constraints in the translational motion field refinement, notably (1) all motion vectors must cross the center of each block in the SI frame and (2) the motion trajectories are restricted with an adaptive search range technique that defines the search windows by using information from neighboring blocks.

3. Spatial motion filtering - This technique spatially filters the noisy backward translational motion field obtained after the refinement in the previous step to reduce the number of 'incorrect' motion vectors when compared to the true motion field. The weighted vector median filter used [24] improves the motion field spatial coherence by identifying, for each SI block, the neighboring blocks' motion vectors which can better represent the motion trajectory.
By performing the pure translational techniques in steps 1 to 3, followed by the motion compensation step that creates the SI block at the end (step 12), the translational frame interpolation approach proposed in [9] is replicated. Naturally, for some SI frame blocks, the translational motion model 'fails' (in the sense that it provides poor SI quality), and a warped block estimated with a perspective motion model can provide higher SI quality.
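The weighted vector median underlying the spatial motion filtering of step 3 can be sketched as follows (a simplified illustration: uniform weights are the default here, whereas the weights in [24] are content-adaptive):

```python
def vector_median(candidates, weights=None):
    # Weighted vector median: pick the candidate motion vector that
    # minimizes the weighted sum of L1 distances to all candidates.
    # candidates: list of (dx, dy) motion vectors from neighboring blocks.
    if weights is None:
        weights = [1.0] * len(candidates)  # uniform weights (assumption)

    def cost(v):
        return sum(w * (abs(v[0] - u[0]) + abs(v[1] - u[1]))
                   for u, w in zip(candidates, weights))

    return min(candidates, key=cost)
```

An outlier vector far from its neighbors is thus replaced by a neighboring vector more consistent with the local motion field.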
The BPSI perspective motion branch targets the creation of SI candidate blocks as follows:

4. Forward ME - Here, step 1 is repeated for the forward direction, i.e., from X_b to X_f. Thus, a forward translational motion field is obtained to provide the starting point for the forward perspective transform search performed in step 6. Note that some of the motion vectors obtained in this step are not correlated in any way with the motion vectors from step 1, especially when occlusions or illumination changes occur, thus justifying the adoption of both the forward ME and backward ME steps.

5. Quarter-pel upsampling - To provide a more accurate estimation of the possible perspective deformations, the backward and forward reference frames are first upsampled with the H.264/AVC quarter-pel upsampling filter [1]; this is performed so that the next steps can benefit from increased-precision reference frames.

6. Forward perspective ME - This step receives as input the forward translational motion field estimated in step 4 and generates a forward perspective motion field. This motion field is a more complete representation of the motion between the reference frames, as it includes, for each block, a set of four vectors, referred to here as perspective transform vectors (PTVs), one for each vertex of the block; these vectors allow representing a whole range of deformations that cannot be obtained with a single motion vector as in the pure translational approach. Then, the upsampled reference frames are used to estimate the best (in terms of distortion) perspective transform for each 16 × 16 block by searching for the deformation leading to the highest-quality warped block; in this case, half-pel accuracy is used for the perspective transform vectors.

7. Backward perspective ME - Then, the process performed in the previous step is repeated for the opposite direction (from X_f to X_b), thus obtaining the perspective deformations (defined by the associated PTVs) from the future reference frame, X_f, to the backward reference frame, X_b. In this case, the backward translational motion field estimated in step 1 is used as input.

8. Perspective transform selection - This step aims at obtaining a reliable perspective transform for each SI frame block while avoiding the holes and block overlaps that may occur in the SI frame. This module receives as input both the forward and backward perspective transforms (defined by their associated PTVs) and generates a unified perspective motion field for the SI frame, after eliminating the PTVs classified as unreliable (see Section 4 for more details).

9. Bidirectional perspective ME - Similarly to the translational SI creation solutions, bidirectional perspective motion estimation is performed with the perspective transforms selected in the previous step to refine the perspective deformations already obtained. This bidirectional perspective ME procedure is performed between the two reference frames while taking the SI frame as reference; it is performed twice, first with 16 × 16 blocks, using as input the PTVs estimated in the previous step, and then with 8 × 8 blocks, after performing the adaptation described in the next step.

10. Block size adaptation - This step estimates a perspective transform for 8 × 8 blocks using as input the perspective transforms obtained in the previous step for 16 × 16 blocks, i.e., the deformations of the 16 × 16 blocks are used to obtain the 8 × 8 block deformations. This hierarchical approach is adopted because it was found that more accurate and coherent 8 × 8 block perspective transforms may be obtained from the corresponding 16 × 16 block perspective transforms than by directly estimating the perspective transforms for 8 × 8 blocks.
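One plausible realization of this hierarchical adaptation derives the initial PTVs of each 8 × 8 sub-block by warping its corners with the parent 16 × 16 perspective transform; the sketch below follows that assumption (the exact adaptation rule used by the codec may differ):

```python
def split_ptvs(parent_params, x0, y0, size=16):
    # parent_params: [a11, a12, a13, a21, a22, a23, a31, a32] of the
    # 16x16 block whose top-left corner is (x0, y0), with a33 = 1.
    def warp(u, v):
        a11, a12, a13, a21, a22, a23, a31, a32 = parent_params
        w = a31 * u + a32 * v + 1.0
        return ((a11 * u + a12 * v + a13) / w,
                (a21 * u + a22 * v + a23) / w)

    half = size // 2
    sub_blocks = {}
    for oy in (0, half):
        for ox in (0, half):
            corners = [(x0 + ox, y0 + oy),
                       (x0 + ox + half, y0 + oy),
                       (x0 + ox + half, y0 + oy + half),
                       (x0 + ox, y0 + oy + half)]
            # Each PTV is the displacement of the warped sub-block corner.
            sub_blocks[(ox, oy)] = [
                (wx - cx, wy - cy)
                for (cx, cy), (wx, wy) in ((c, warp(*c)) for c in corners)]
    return sub_blocks
```

Under this rule, the four 8 × 8 transforms are, by construction, mutually consistent along the shared sub-block edges, which is the coherence property motivating the hierarchical approach.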
After the two motion modeling branches have provided their best estimations, a motion model decision (step 11) selects between the perspective and translational motion models, followed by the final motion compensation and warping that fuses the available SI estimations (step 12). Thus, the final SI frame is created by the following steps:

11. Motion model decision - This step aims to choose, for each SI block, the best motion model between the translational model (characterized by motion vectors) and the novel perspective transform model (characterized by perspective transform vectors). This is a challenging task, since the original frame is not available at the decoder to help assess the real quality (e.g., using a mean squared error) of each of these SI estimations.

12. SI creation - Finally, the final SI frame is created following the decisions taken for each SI block in the previous step. Thus, for the blocks where the translational motion model was selected, the SI frame is created by motion compensation, while for the blocks where the perspective motion model was selected, the quadrilaterals in both reference frames are warped (and then averaged) according to the perspective transforms obtained in step 9.
The perspective motion model used for SI creation (in a distributed video decoder) enables a more accurate characterization of the complex motion that might occur in a video sequence, such as zooms, rotations, and other affine and perspective deformations, without transmitting any parameters or vectors from the encoder to the decoder, as in predictive video coding. However, since the original frame is not available, this is a challenging task, and several tools are necessary to compute and regularize the perspective transforms. In the next section, the proposed techniques for the novel modules of the BPSI framework, notably those exploiting the perspective motion model, are presented in detail.
4. Bidirectional perspective side information creation: techniques
The novel algorithms proposed for the several modules of the BPSI architecture presented in Figure 2 are now described in detail in the following subsections. Naturally, more detail is provided for the algorithms related to the perspective motion modeling, as they constitute the major technical contributions of this paper.
4.1 Backward and forward motion estimation
This module receives as input the two (decoded) reference frames and estimates the translational motion field between those reference frames, without any information about the original frame. While the backward ME is part of the translational MCFI technique [9], both the backward and forward ME motion vectors are used for the estimation of the best perspective motion model, hence the reason to explain them in detail here. However, since this technique is equivalent for the backward and forward directions, only the backward motion estimation process is described. The backward ME proceeds as follows:

1. Identification of the reference frames - Initially, the two relevant reference frames associated with the SI frame under estimation are identified. For a GOP size of 2, the reference frames are the two neighboring key frames of the interpolated SI frame, one in the past and another in the future. If a larger GOP size is used, previously decoded WZ frames are also used as reference frames, while still using only the two neighboring reference frames; these neighboring frames are defined as proposed in [9].

2. Motion estimation - First, the two reference frames are low-pass filtered to obtain a more spatially coherent motion vector field. Motion estimation is then performed from X_f to X_b, i.e., in the backward direction. For this, a block matching algorithm employing a modified matching criterion is used [9]; the modified criterion adds weights to the MAD criterion to favor translational motion vectors closer to the block center, thus regularizing the translational motion field. The block size is 16 × 16, and full-pixel accuracy is used.
This backward motion vector field is later refined with bidirectional motion estimation and spatial motion smoothing techniques; please refer to [9] for more details.
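The regularized block matching described above can be sketched as follows (an illustrative sketch only: the exact weighting of the MAD criterion in [9] is not reproduced here; a simple linear distance penalty, controlled by the hypothetical parameter `lam`, is assumed instead):

```python
def backward_me(ref_f, ref_b, block=16, search=16, lam=0.05):
    # Block matching from the forward (ref_f) to the backward (ref_b)
    # reference frame: for each block of ref_f, find the displacement
    # (dx, dy) into ref_b minimizing a center-weighted MAD cost.
    h, w = len(ref_f), len(ref_f[0])
    field = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            best, best_cost = (0, 0), float("inf")
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y0, x0 = by + dy, bx + dx
                    if not (0 <= y0 <= h - block and 0 <= x0 <= w - block):
                        continue  # candidate block falls outside ref_b
                    mad = sum(abs(ref_f[by + r][bx + c] - ref_b[y0 + r][x0 + c])
                              for r in range(block)
                              for c in range(block)) / block ** 2
                    # Penalize candidates far from the block center
                    # (illustrative regularization, not the one in [9]).
                    cost = mad * (1.0 + lam * (abs(dx) + abs(dy)))
                    if cost < best_cost:
                        best_cost, best = cost, (dx, dy)
            field[(bx, by)] = best
    return field
```

The returned dictionary maps each block's top-left coordinates to its backward motion vector, i.e., the backward translational motion field.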
4.2 Backward and forward perspective motion estimation
The main objective of this technique is to estimate the perspective motion locally, notably for each block of the backward and forward reference frames, using as a starting point the translational motion vectors obtained from the backward and forward (translational) motion estimation processes previously described. Only the backward perspective motion estimation, performed between the X_b and X_f decoded reference frames, is described here, as the forward estimation is equivalent. The architecture of the proposed perspective motion estimation technique is shown in Figure 3; several key tasks are performed here, namely the estimation of the perspective motion model parameters and the selection of the best perspective transform, i.e., the perspective transform that creates (by warping) the highest-quality SI block.
The following steps are performed to obtain the best backward perspective transform for each 16 × 16 X_f block, to be characterized by the selected PTVs:

1. Perspective transform initialization - In this step, the perspective transform estimation is initialized by simply assigning to the four corners of each block in the forward reference frame the (same) motion vector calculated with the translational backward motion estimation algorithm, thus obtaining the initial perspective transform vectors. With this deformation, the warped quadrilateral corresponds to a square block (see Figure 4a) with the same position and size as the displaced block calculated by (translational) motion estimation.

2.
Vertex selection – A block vertex is selected for processing, starting with the upper left vertex and following a clockwise direction. To estimate the best perspective transform, a possible solution would be to evaluate every possible combination of PTVs; for example, with a 32 × 32 pixel search window located at each vertex, it would be necessary to estimate a prohibitive number of transforms for each block. Since the full-search complexity is rather high, a novel search algorithm is proposed to find the best PTVs for each block, providing a good trade-off between SI quality and perspective modeling complexity. Thus, the 'best' perspective transform is estimated by evaluating the PTV associated to each block vertex individually while keeping the remaining three PTVs (associated to the remaining three vertices) in a fixed position, as illustrated in Figure 4b.

3.
Perspective transform search – In this step, the perspective transform of each X _{ f } block is found by evaluating several possible deformations, i.e., by evaluating the quality of several warped blocks. For each X _{ f } block, the search for the best perspective transform in the past X _{ b } frame proceeds as follows:

a.
Transform parameter computation: After selecting a vertex and setting its initial search point within the search window, the eight perspective parameters for the quadrilateral deformation in the backward reference frame are obtained considering each (square) block in the forward reference frame. In such case, the parameters are obtained by simply solving the linear system in (2) using the four vertices of the quadrilateral and the four vertices of the corresponding square block.

b.
Block warping: In this step, a warped square block in frame X _{ f } is generated (using the corresponding quadrilateral in frame X _{ b }) with the perspective parameters calculated in the previous step. In this case, (3) can be used to obtain the warped coordinates (x,y) associated to the corresponding (u,v) points inside each square block. This procedure is shown in Figure 5, where the quadrilateral in frame X _{ b } is warped to a square block in the current frame X _{ f } (left square block), where a reference square block (right block) is already available. The mapping of a regular square grid of pixels (X _{ f } block) into a quadrilateral (in X _{ b }) usually leads to positions with non-integer coordinates. Thus, a reliable method is needed to estimate the pixel values at these warped positions. To achieve this target, the first step is to upsample the reference frames with the quarter-pel H.264/AVC motion compensation interpolation filter to provide more precise interpolated values for the warped positions. However, since positions with arbitrary (real-valued) precision may be obtained, the quarter-pel samples alone are not enough. Therefore, a bilinear interpolation method is used to estimate the pixel value at any warped position based on the quarter-pel samples; this interpolation filter is formalized in (4) and illustrated in Figure 6:
P(x,y) = (1 − L_x)(1 − L_y)P_a + L_x(1 − L_y)P_b + (1 − L_x)L_y P_d + L_x L_y P_c  (4)
As shown in Figure 6, the estimation for a pixel value P in any arbitrary position (x,y) is based on L _{ x } and L _{ y }, which represent the horizontal and vertical distances to the quarter-pel positions in the square grid. In (4), P(x,y) is obtained by averaging the four neighboring quarter-pel pixel values P _{ a }, P _{ b }, P _{ c }, and P _{ d }, weighted according to their distances to P(x,y). After some manipulation, the following expression is obtained:
P(x,y) = P_a + L_x(P_b − P_a) + L_y(P_d − P_a) + L_x L_y(P_a − P_b + P_c − P_d)  (5)
For a perspective transform under evaluation, all the X _{ f } block pixel positions can be projected into frame X _{ b } by computing (5), and their interpolated values can be obtained, i.e., the warped candidate block (block W (X _{ b }) in Figure 5) can be created.

c.
Residual error calculation: To evaluate the quality of the warped block calculated in the previous step, a MAD metric is adopted. In this case, the residual error calculation is performed between the warped block W _{ b } (from reference frame X _{ b }) and the corresponding reference block in frame X _{ f } (represented by two square blocks in Figure 5):
MAD(j) = (1/(N × N)) Σ_{(x,y)∈B_j} |X_f(x,y) − W_b(x,y)|  (6)
where N is the block size and B _{ j } is the j th block in the reference frame X _{ f }.

d.
PTV regularization: Since the original frame is not available, a regularization technique must be applied to obtain a perspective transform that is closer to the true motion of the objects/blocks in the video sequence [9]. Thus, it is not enough to minimize the MAD residual error as in (6), as many motion estimation solutions do; it is also necessary to avoid large deviations between neighboring motion models. In this case, a simple regularization criterion is used, favoring the PTVs closer to the origin and avoiding PTVs with large magnitudes (which most likely do not represent true motion). This PTV regularization consists of applying a penalty to the MAD obtained in the previous step, directly proportional to the distance between the current and initial PTVs, i.e., the PTV available as input to this module. The proposed process includes two steps: first, a local/vertex weighting regularization and, afterwards, a global/block weighting regularization. The proposed local weighting regularization technique computes the distortion D _{ l } associated to each warped block as:
δ_l(x,y) = √((x − x_c)² + (y − y_c)²)  (7)
D_l = MAD(1 + kδ_l)  (8)
where (x _{ c },y _{ c }) represents the initial PTV position for a given vertex, δ _{ l } represents the distance between the current PTV position (x,y) and (x _{ c },y _{ c }) for the local weighting approach, and k is a scaling factor. In this case, the distortion D _{ l } is regularized with a cost that is directly proportional to the distance between the current PTV and the initial PTV of that vertex.
After an estimate of the four PTVs (one for each vertex) for the block is available, a global weighting regularization is applied. In this approach, the PTVs obtained for all vertices are globally refined, i.e., the penalty applied to the MAD is directly proportional to the sum of the distances between the current PTV positions and the corresponding initial PTV positions for the four vertices. For the global weighting approach, the distortion D _{ g } of each block is computed as:
δ_g = Σ_{i=1}^{4} δ_{l,i}  (9)
D_g = MAD(1 + kδ_g)  (10)
where δ _{ g } represents the sum of the distances of the PTVs obtained after local weighting regularization to (x _{ c },y _{ c }), calculated independently for each vertex. It was found experimentally that k could be the same for both the global and local weighting regularization approaches as no benefits were obtained with different k values. Finally, steps a to d are repeated until all positions inside the PTV search range are tested.

e.
PTV decision: From all the PTVs evaluated inside the search window, the PTV leading to the minimum D _{ l } (when at least one vertex has not yet been processed for regularization) or D _{ g } is selected. Then, steps 2 and 3 are repeated for each of the remaining three block vertices, following a clockwise rotation. When the four vertices have been processed and the corresponding PTVs obtained, a full refinement iteration is completed, and the algorithm proceeds to step 4.

4.
Refinement stopping criteria – If the PTVs do not change during a complete refinement iteration, the PTV search process stops, implying that the search algorithm has successfully converged to a solution, i.e., the best PTVs have been found for the block under processing. Otherwise, the number of iterations is incremented and the algorithm goes back to step 2. In most cases, the computational complexity is reduced when compared to an approach where a fixed number of iterations is executed. In any case, to stop the search for the cases where convergence is difficult to obtain, the maximum number of iterations is set to five.
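The core numerical machinery of steps a and b above, i.e., fitting the eight perspective parameters from the four vertex correspondences (the linear system in (2)), projecting block coordinates as in (3), and the bilinear interpolation of (4), can be sketched as below. The standard homography parameterization is assumed, and for simplicity the interpolation is applied directly on the full-pel grid rather than on the quarter-pel upsampled references used in the actual scheme:

```python
import numpy as np

def perspective_params(src, dst):
    """Solve the eight-parameter perspective model from 4 point pairs.

    src: the four (u, v) block vertices; dst: the four (x, y) quadrilateral
    vertices. The model maps (u, v) -> (x, y) with
      x = (a0*u + a1*v + a2) / (a6*u + a7*v + 1)
      y = (a3*u + a4*v + a5) / (a6*u + a7*v + 1)
    This is the standard homography fit; the exact arrangement of (2) in the
    paper may differ.
    """
    A, b = [], []
    for (u, v), (x, y) in zip(src, dst):
        A.append([u, v, 1, 0, 0, 0, -u * x, -v * x]); b.append(x)
        A.append([0, 0, 0, u, v, 1, -u * y, -v * y]); b.append(y)
    return np.linalg.solve(np.array(A, float), np.array(b, float))

def warp_point(a, u, v):
    """Project block coordinates (u, v) into the reference frame, as in (3)."""
    d = a[6] * u + a[7] * v + 1.0
    return (a[0] * u + a[1] * v + a[2]) / d, (a[3] * u + a[4] * v + a[5]) / d

def bilinear(img, x, y):
    """Bilinear interpolation at a non-integer position, in the spirit of (4)."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    lx, ly = x - x0, y - y0
    pa, pb = img[y0, x0], img[y0, x0 + 1]      # top-left, top-right
    pd, pc = img[y0 + 1, x0], img[y0 + 1, x0 + 1]  # bottom-left, bottom-right
    return ((1 - lx) * (1 - ly) * pa + lx * (1 - ly) * pb
            + (1 - lx) * ly * pd + lx * ly * pc)
```

Evaluating `warp_point` for every (u,v) inside a square block and sampling the reference with `bilinear` yields the warped candidate block whose MAD against the reference block is then computed.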
4.3 Perspective transform selection
The perspective transforms obtained with the technique proposed in the previous section should represent well the motion of each X _{ b } or X _{ f } block; however, the frame to estimate is Y _{ i }, for which no perspective transform is yet available. Thus, the perspective transform selection technique aims to obtain reliable perspective transforms for the SI frame using the perspective transforms previously estimated, i.e., for both the forward and backward directions, involving only the reference frames X _{ b } and X _{ f }. In the past, the exploitation of two SI estimations according to the direction (backward vs. forward) was also adopted to handle sequences where occlusions and complex motion occur [18]. This process includes three main steps: first, the backward and forward perspective transforms considered unreliable are eliminated; second, for each SI block, two perspective transforms (one from each direction) are selected; and, third, only one transform is chosen as the final perspective transform for the SI block after evaluating both available transforms.
Considering T as a group of four PTVs, one for each vertex, representing the perspective deformation of a given X _{ b } or X _{ f } block, the process to obtain a perspective transform for each SI block proceeds as follows:

1.
Perspective transform filtering – This step compares the two residual (MAD) errors obtained with the forward perspective transform, T _{ f }, and the backward perspective transform, T _{ b }, obtained for the same block position but in their respective reference frames. This comparison is performed to decide whether both perspective transforms are kept or one of them is excluded, trying to filter out perspective transforms that will likely not lead to good SI quality. The filtering decision performed for each block follows the rules:
No filtering   if |MAD_b − MAD_f| < τ
T_f removed   if |MAD_b − MAD_f| ≥ τ ∧ MAD_b < MAD_f
T_b removed   if |MAD_b − MAD_f| ≥ τ ∧ MAD_f < MAD_b  (11)
where MAD_{ b } is the residual error calculated between a given block in X _{ f } and the respective block obtained with the backward transform T _{ b } (similarly for MAD_{ f }), and τ is a threshold. If |MAD_{ b } − MAD_{ f }| < τ, both the backward transform, T _{ b }, and the forward transform, T _{ f }, are considered reliable; thus, no transform is eliminated. When |MAD_{ b } − MAD_{ f }| is larger than or equal to τ, one of the transforms is considered more reliable than the other and two cases are considered: (1) if MAD_{ b } < MAD_{ f }, T _{ b } is kept (MAD_{ b } has the lowest value) and T _{ f } is dropped; and (2) if MAD_{ f } < MAD_{ b }, T _{ f } is kept (MAD_{ f } has the lowest value) and T _{ b } is dropped.

2.
Perspective transform selection – Now, the best transform for each SI block has to be selected from all the transforms T _{ b } and T _{ f } considered reliable in the previous step, i.e., the T _{ b } and T _{ f } perspective transforms that were not eliminated. Note that these transforms do not characterize the perspective deformation associated to a SI block but rather the deformations of blocks in the backward and forward reference frames. For this selection, a simple criterion based on the distance between the position where each PTV intersects the SI frame and the corresponding SI block vertex is used. First, two perspective transforms, {\dot{T}}_{b} and {\dot{T}}_{f}, are selected for each SI block, one for each (backward/forward) direction; then, both are evaluated in terms of residual error and the best transform is selected. From all the backward perspective transforms, T _{ b }, only one is selected for each SI block (denoted as {\dot{T}}_{b}); the following selection procedure is applied for each SI block:

a.
For each PTV associated to a given transform, T _{ b }, the distance d _{ i } between the corresponding SI block vertex and the point where the PTV intersects the SI frame (see Figure 7) can be calculated as:
d_i = √((x_{b,i} + (x_{f,i} − x_{b,i})·d̄_b − u_i)² + (y_{b,i} + (y_{f,i} − y_{b,i})·d̄_b − v_i)²)  (12)
d̄_b = d_b/(d_b + d_f)  (13)
In Figure 7, (x _{ b,i },y _{ b,i }) and (x _{ f,i },y _{ f,i }) are the PTV positions of vertex i in X _{ b } and X _{ f }, respectively, {\overline{d}}_{b} is the normalized distance of the SI frame to the backward frame defined in (13), and (u _{ i },v _{ i }) are the vertices of the SI block under consideration.

b.
Then, the overall distance, {d}_{T}^{b}, considering the four vertices of each SI block, is computed for T _{ b } using (14) and defines the overall distance between the positions where the transform T _{ b } intersects the SI frame and the corresponding SI block for which no perspective transform is yet available.
d_T^b = Σ_{i=1}^{4} d_i  (14)
c.
After obtaining the overall distance {d}_{T}^{b} for each T _{ b } transform, the T _{ b } leading to the minimum overall {d}_{T}^{b} distance is selected, obtaining the perspective transform {\dot{T}}_{b} for the SI block under consideration. Then, the previous steps a to c are repeated for the forward direction to find the best forward transform, {\dot{T}}_{f}, for the same SI block.

3.
SI perspective transform creation – In this step, the deformation of each SI block (and no longer of the blocks in the reference frames) is found by selecting just one perspective transform. Thus, the PTVs of the two transforms, {\dot{T}}_{b} and {\dot{T}}_{f}, obtained in the previous step for each SI block, are displaced so that the PTVs cross the respective vertex of the SI block. With the selected perspective transforms, {\dot{T}}_{b} and {\dot{T}}_{f}, the procedure illustrated in Figure 8 is applied for each SI block:

a.
First, a block warping procedure is applied where two warped blocks are generated using two sets of perspective transform parameters, derived (by solving (2)) from each of the perspective transforms {\dot{T}}_{b} and {\dot{T}}_{f}, one for each upsampled reference frame (backward and forward). The two warpings, W, obtained for each transform correspond to two sets of perspective transform vectors, one set pointing from the WZ frame to X _{ b } and another pointing from the WZ frame to X _{ f }. Then, the MAD metric in (6) is applied to calculate the residual error, as for the backward and forward perspective ME; the only difference is that the MAD (represented by the minus sign in Figure 8) is calculated between the two warped blocks, W _{ b }^{1} and W _{ b }^{0} for transform {\dot{T}}_{b}, and W _{ f }^{1} and W _{ f }^{0} for transform {\dot{T}}_{f}.

b.
Then, the perspective transform T _{ s } leading to the minimum MAD is selected, i.e., T _{ s } is made equal to {\dot{T}}_{b} or {\dot{T}}_{f} depending on which of these transforms leads to the minimum MAD. The new perspective transform, T _{ s }, represents the motion between the SI block and the corresponding reference frames, X _{ b } and X _{ f }.
For the blocks at the SI frame border, a unidirectional motion compensation mode is adopted if the corresponding quadrilateral has more than 25% of the block area outside the reference frame border. In such case, the estimation obtained from the reference is considered unreliable, and block warping is only performed with the remaining reference frame. In the rare case where both quadrilaterals have more than 25% of the corresponding area outside the reference frame, the bidirectional mode, as for the other nonborder blocks, is still used.
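A minimal sketch of the filtering rule in (11) and of the distance criterion in (12) to (14); as in the text, the PTV trajectory is assumed to be linear between the backward and forward PTV positions:

```python
import numpy as np

def filter_transforms(mad_b, mad_f, tau):
    """Reliability filtering as in (11): keep both transforms when their MADs
    are close; otherwise drop the worse one. Returns (keep_Tb, keep_Tf)."""
    if abs(mad_b - mad_f) < tau:
        return True, True
    return (mad_b < mad_f), (mad_f < mad_b)

def overall_distance(ptv_b, ptv_f, si_vertices, db, df):
    """Overall distance of (12)-(14): sum, over the four vertices, of the
    distance between each SI block vertex (u_i, v_i) and the point where the
    linear PTV trajectory from X_b to X_f crosses the SI frame."""
    dbar = db / (db + df)  # normalized temporal distance, eq. (13)
    total = 0.0
    for (xb, yb), (xf, yf), (u, v) in zip(ptv_b, ptv_f, si_vertices):
        xi = xb + (xf - xb) * dbar  # trajectory crossing point in the SI frame
        yi = yb + (yf - yb) * dbar
        total += np.hypot(xi - u, yi - v)  # per-vertex distance, eq. (12)
    return total  # eq. (14)
```

The backward transform minimizing this overall distance over all candidates becomes the selected transform for the SI block; the forward direction is handled identically.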
4.4 Bidirectional perspective transform estimation
This module is applied twice, first with a 16 × 16 block size and then with an 8 × 8 block size, and aims to refine the (single) perspective transform obtained for each SI block in the previous step. A hierarchical approach was adopted so that the 16 × 16 perspective transforms can provide a good starting point for the final 8 × 8 perspective transforms. The architecture of this module is similar to the backward and forward perspective ME technique presented in Section 4.2. The major difference is that this module receives as input a perspective transform for each SI block, and no longer translational motion vectors (as in the forward and backward perspective transform estimation modules), and refines the initial transforms to obtain better SI quality. This module is also able to correct some of the errors and inaccuracies made by the algorithm presented in the previous section, which selects and creates perspective transforms for the SI blocks based on the perspective transforms obtained between the reference frames. To obtain a refined set of four PTVs for each SI block, the algorithm presented in Section 4.2 is applied. The major difference is that the linear trajectory of the PTVs for each block vertex must be preserved, i.e., the forward PTV position needs to be symmetric to the backward PTV position with respect to the (u _{ i },v _{ i }) vertex in the SI frame, and the PTV always has to cross the vertex (u _{ i },v _{ i }) in the interpolated SI frame. This constraint is similar to the constraint considered in the translational bidirectional motion estimation algorithm already proposed in [9]. In addition, the constrained block warping procedure described in the previous section is also applied: from a candidate SI perspective transform, two sets of transform parameters are calculated to describe the motion between the SI frame and both the backward and forward reference frames.
Then, two warped blocks are obtained for each SI block and the MAD residual error is computed to evaluate the quality of each candidate perspective transform. By displacing the PTVs, other candidate perspective transforms can be tested with an iterative search procedure to find the best perspective transform (as in Section 4.2).
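The symmetric trajectory constraint used in this bidirectional refinement can be illustrated as follows, assuming a GOP size of 2 so that the SI frame lies temporally halfway between the two references:

```python
def symmetric_ptv_pair(vertex, disp):
    """Given an SI block vertex (u, v) and a candidate backward displacement
    (dx, dy), return the (backward, forward) PTV positions satisfying the
    symmetry constraint of Section 4.4: the linear trajectory between the two
    PTVs must cross the vertex in the SI frame (GOP size 2 assumed here, so
    the SI frame lies halfway between the two references)."""
    u, v = vertex
    dx, dy = disp
    ptv_b = (u + dx, v + dy)  # displaced position in the backward reference
    ptv_f = (u - dx, v - dy)  # mirrored position in the forward reference
    return ptv_b, ptv_f
```

Because the forward PTV mirrors the backward one, a single displacement parameterizes each candidate, which is what makes the iterative search of Section 4.2 directly reusable here.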
4.5 Block size adaptation
The main objective of this module is to obtain more precise PTVs (finer scale) based on the PTVs already obtained for a coarser scale. Thus, the PTVs for the four 8 × 8 blocks in each 16 × 16 block are computed by applying the perspective transform of the corresponding 16 × 16 block. This procedure is performed for each SI block and for both the backward and forward PTVs. The final result for one of the directions is shown in Figure 9.
Thus, the perspective model parameters, a, already computed for the 16 × 16 blocks by solving the linear system in (2), are used to obtain the PTVs for a finer 8 × 8 scale. More precisely, for every new vertex (in the 8 × 8 block), the corresponding projection in the reference frame is found by using (3), thus obtaining four sets of PTVs corresponding to the four 8 × 8 blocks in each 16 × 16 block.
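The block size adaptation step can be sketched as below: the vertices of the four 8 × 8 sub-blocks are projected through the 16 × 16 block's perspective model, as in (3). The parameter ordering a0 to a7 is an assumption matching the homography form

x = (a0·u + a1·v + a2)/(a6·u + a7·v + 1), y = (a3·u + a4·v + a5)/(a6·u + a7·v + 1):

```python
def subblock_ptvs(a, origin, bs=16):
    """Block size adaptation (Section 4.5, sketch): project the vertices of
    the four 8x8 sub-blocks of a 16x16 block through the block's perspective
    model a = (a0..a7), yielding four finer-scale PTV sets."""
    u0, v0 = origin
    half = bs // 2

    def proj(u, v):
        # Perspective mapping of (3) under the assumed parameter ordering.
        d = a[6] * u + a[7] * v + 1.0
        return ((a[0] * u + a[1] * v + a[2]) / d,
                (a[3] * u + a[4] * v + a[5]) / d)

    sub = []
    for dv in (0, half):          # top row, then bottom row of sub-blocks
        for du in (0, half):      # left column, then right column
            us, vs = u0 + du, v0 + dv
            sub.append([proj(us + eu, vs + ev)
                        for eu, ev in ((0, 0), (half, 0),
                                       (half, half), (0, half))])
    return sub
```

With the identity model, each 8 × 8 sub-block simply inherits its own square vertices, as expected.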
4.6 Motion model decision
The motion model decision algorithm selects, for each SI block, the best motion model (translational or perspective) using a MAD criterion, i.e., the motion model with the minimum MAD residual error for a given SI block is chosen. The MAD_{ t } residual error for the translational mode has already been obtained when performing the bidirectional translational ME for the 8 × 8 blocks [9], while the MAD_{ p } residual error for the perspective mode was obtained when performing the bidirectional perspective ME algorithm for the 8 × 8 blocks. Regarding this decision, it was observed that, for a significant number of SI blocks, the perspective mode could have a slightly better MAD value than the translational mode without leading to a better final SI estimation. Based on these observations, it is proposed to apply a penalty offset to the perspective mode MAD to ensure that this mode is only selected when it is reasonably better than the translational mode, thus increasing the probability of obtaining better SI quality. For each SI block, the motion modeling mode, ϕ _{si}, is calculated according to:
ϕ_si = \mathcal{P} if MAD_p + α < MAD_t; ϕ_si = \mathcal{T} otherwise  (15)
where α is the penalty offset, ϕ _{si} represents the selected motion modeling mode for each SI block, and \mathcal{T}, \mathcal{P} represent the translational and perspective motion modeling modes, respectively.
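A minimal sketch of the penalized mode decision; the exact comparison rule of (15) is assumed to take this form, with the perspective MAD handicapped by the offset α:

```python
def motion_model_decision(mad_t, mad_p, alpha=1.0):
    """Mode decision of Section 4.6 (sketch): the perspective mode is chosen
    only when its MAD, penalized by the offset alpha, still beats the
    translational MAD. The exact tie-breaking rule is an assumption."""
    return 'P' if mad_p + alpha < mad_t else 'T'
```

With α = 1, a perspective MAD that is only marginally lower than the translational one no longer wins, which is exactly the behavior motivated in the text.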
4.7 Motion compensation and warping
Finally, the SI frame is created according to the motion model previously selected for each SI block. When the perspective mode is selected, the calculated perspective motion model parameters, the bilinear interpolation method, and the upsampled reference frames are used to obtain the warped pixel values for the relevant SI block. When the translational mode is used, only the motion vector previously calculated (i.e., after spatial motion filtering) is used to obtain the SI block. In both cases, two SI estimations are available, one using each of the backward and forward reference frames, and the final SI block is obtained by averaging, followed by rounding, the warped or motion compensated SI estimations according to:
Y_{si}(x,y) = ⌊(1 − d̄_b)·W_b(x,y) + d̄_b·W_f(x,y) + 0.5⌋ (perspective mode)
Y_{si}(x,y) = ⌊(1 − d̄_b)·P_b(x,y) + d̄_b·P_f(x,y) + 0.5⌋ (translational mode)  (16)
where W _{ b } and W _{ f } are the warped blocks obtained from the backward and forward reference frames, P _{ b } and P _{ f } the motion compensated blocks obtained from the backward and forward reference frames, {\overline{d}}_{b} the normalized temporal distance already defined in (13), and ⌊⌋ the floor operator.
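The final blending step can be sketched as follows; the temporal weighting and the +0.5 rounding before the floor are assumptions consistent with the 'averaging, followed by rounding' description and the normalized distance of (13):

```python
import numpy as np

def blend_si(block_b, block_f, db=1, df=1):
    """Final SI creation (Section 4.7, sketch): combine the backward and
    forward estimations (warped blocks in perspective mode, motion
    compensated blocks in translational mode), weighted by the normalized
    temporal distance of (13), then round via the floor operator.
    The exact weighting of (16) is an assumption; with a GOP size of 2
    (db == df) it reduces to a plain average."""
    dbar = db / (db + df)  # normalized temporal distance, eq. (13)
    return np.floor((1 - dbar) * block_b + dbar * block_f + 0.5)
```

For a GOP size of 2 the weights are both 0.5, so each SI pixel is simply the rounded mean of the two estimations.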
5. Bidirectional perspective side information creation: performance evaluation
After presenting the proposed perspective transform motion modeling-based SI creation solution, the performance of the BPSI framework is now assessed in terms of both SI quality and RD performance in the context of a state-of-the-art DVC codec. The next subsection first provides a brief description of the DVC codec employed for RD performance evaluation and its encoding and decoding tools.
5.1 DVCBPSI video codec
The proposed BPSI creation solution is used in the context of the DVC-BPSI codec, a state-of-the-art DVC solution following the Stanford DVC architecture originally proposed in [18]. A simplified version of the DVC-BPSI codec architecture is shown in Figure 10.
The proposed DVC-BPSI codec corresponds to the DVC codec proposed in [25] with the proposed BPSI creation framework as the SI creation solution. To obtain a powerful DVC solution, the DVC-BPSI codec includes state-of-the-art DVC coding techniques available in the literature. In summary, the DVC-BPSI encoder includes the H.264/AVC 4 × 4 DCT transform, a uniform scalar quantizer, and an LDPC syndrome code as the Slepian-Wolf codec. The DVC-BPSI decoder uses a CRC code for error detection, a minimum mean square error reconstruction method, and an offline Laplacian correlation noise model at band level [26]. In the following, the DVC-BPSI codec is compared to a DVC-MCFI codec corresponding to the same DVC solution but using the popular MCFI SI creation solution [9] instead of BPSI, i.e., using a pure translational motion model for SI creation.
5.2 Test conditions
To evaluate the SI quality and the overall RD performance, meaningful and precise test conditions must first be defined. The DISCOVER project [8] has provided a detailed, clear, and complete set of test conditions that are currently widely used in the DVC literature; thus, the test conditions used for the evaluation of the proposed SI creation solution are similar to the DISCOVER test conditions.
5.2.1 Video sequences
To evaluate the proposed DVC-BPSI solution, four video sequences with different characteristics, notably in terms of motion and texture, were selected, leading to a rather complete and meaningful set of sequences and results. The selected video sequences are Hall Monitor, Mobile and Calendar, Bus, and Stefan. In Figure 11, a representative frame of each sequence is presented.
In Table 1, the characteristics of each video sequence are presented, notably the spatial and temporal resolutions as well as the total number of frames for each sequence.
5.2.2 Coding parameters
The coding parameters and configurations used to evaluate the DVC-BPSI performance are as follows:

GOP size – As common in the DVC literature, a fixed GOP size of 2 was adopted.

Key frame coding – The key frames are coded with H.264/AVC Intra in the Main profile since this is one of the best Intra coding schemes in terms of RD performance.

Quantization parameters – To perform the experimental evaluation, eight RD points (Q _{ i }) were defined in terms of the H.264/AVC Intra key frame quantization parameter (QP_{ I }) and the quantization level matrix for the WZ frames. The WZ frame quantization level matrices define for which DCT bands parity bits are transmitted, with each matrix entry indicating the number of quantization levels for the respective DCT band (0 means that no parity bits are transmitted for that DCT band). The eight quantization level matrices considered here for RD performance evaluation are presented in Figure 12. The decoded quality (after reconstruction) depends on the chosen quantization level matrix and the corresponding key frame quantization step.
The key frame QP_{ I } values were defined using an iterative process, which stopped when the average WZ frame PSNR was similar to the average key frame PSNR to avoid significant temporal quality variations which may have a negative user impact. The QP _{ I } value selection procedure for the key frames assures an almost constant decoded video quality for the full set of frames (key frames and WZ frames) which is essential from the subjective quality point of view. Notice that distributing the same total bitrate in a different way between WZ and key frames may even lead to better RD performance, for example, by investing more bits in the key frames at the cost of a less stable video quality, but the resulting strong quality variations along time are highly undesirable. In Table 2, the QP_{ I } values used for each RD point and each video sequence are presented.

MCFI parameters – In the backward ME and bidirectional ME modules, the parameters are chosen according to [9], namely a scaling factor k = 0.05 and a 32 × 32 search window.

BPSI parameters – The forward and backward perspective ME is performed with a 9 × 9 search window for each vertex. The bidirectional perspective ME for the 16 × 16 blocks is performed with half pixel accuracy PTVs and a 7 × 7 search window for each vertex, while the 8 × 8 blocks use quarter pixel accuracy PTVs and a 5 × 5 search window for each vertex. For the bidirectional perspective ME, the scaling factor for the 16 × 16 blocks, k _{ 16 }, is 0.05, while for the 8 × 8 blocks, k _{ 8 }, is 0.21. For the motion model decision, a penalty offset α equal to 1 was applied to the MAD obtained with the perspective motion model. The BPSI parameters were defined following extensive experiments using the maximization of the SI quality gains as criterion. The video sequences used to compute the suggested parameter values are Mobile and Calendar, Hall Monitor, Stefan, Bus, Soccer, Container, Table Tennis, Coastguard, and Foreman at Quarter Common Intermediate Format (QCIF) spatial resolution.
5.2.3 Coding benchmarks
To evaluate the performance of the proposed DVC-BPSI video codec, its RD performance is compared to some relevant benchmarking solutions, notably the H.264/AVC Intra, H.264/AVC Zero Motion, and DVC-MCFI codecs. These video coding solutions share an important characteristic: all the encoders under evaluation have rather low encoding complexity (although not necessarily precisely the same) as they do not use motion estimation at the encoder. More precisely, the video coding solutions used as benchmarks are as follows:

H.264/AVC Intra – H.264/AVC Main profile video codec using only the Intra mode, as this is one of the most powerful and efficient Intra video codecs.

H.264/AVC Zero Motion – H.264/AVC video codec exploiting only some temporal redundancy as no motion estimation is performed at the encoder. The same prediction structure adopted for the DVC-BPSI codec is used, i.e., a GOP size of 2, with an IBI GOP structure and two reference frames, one in the past and another in the future.

DVC-MCFI – To also assess the DVC-BPSI performance against an alternative state-of-the-art DVC codec, the DVC-MCFI codec performance is also used as benchmark. The only difference between the DVC-MCFI video codec and the proposed DVC-BPSI codec is the technique used in the SI creation module. The DVC-MCFI codec is rather similar to the DISCOVER DVC codec: the only differences regard the LDPC syndrome codec [27] adopted in the DVC-MCFI codec and the sub-pel motion vector accuracy adopted in the DISCOVER DVC codec.
For all codecs under evaluation, only the luminance component was coded, meaning that the SI quality and the RD performance consider only the luminance rate and quality (naturally, for both the key frames and WZ frames). For the DVC-MCFI and DVC-BPSI key frame coding and the H.264/AVC Intra and H.264/AVC Zero Motion codecs, the Main profile was selected (as typical in the DVC literature) since it allows high RD performance, even if with some encoding complexity cost associated to the CABAC entropy encoder and Intra coding modes.
5.3 Side information performance evaluation
The main target of this section is to evaluate the SI quality obtained with the proposed BPSI solution standalone, i.e., before integration in any DVC codec. Thus, the BPSI SI quality is only compared to the state-of-the-art MCFI SI quality for all the test sequences. The average BPSI and MCFI SI qualities over the whole sequences are shown in Tables 3 and 4 for two RD points, where ∆_{BPSI} expresses the average BPSI SI PSNR gain regarding the MCFI solution (the value in italics corresponds to the highest PSNR gain). This evaluation also allows comparing the SI PSNR gains with the complete RD performance gains, which are presented in the next section for the proposed DVC-BPSI codec.
From the results in Tables 3 and 4, the following conclusions may be derived:

The most significant SI quality gains are obtained when the key frame quality is better, i.e., when a lower QP is used (corresponding to the RD point Q _{7}). In fact, the lower gains obtained for the RD point Q _{2} can be easily explained by the poorer quality of the key frames, which limits the SI gains, as it is difficult to obtain accurate perspective transforms when the reference frame quantization error is too high.

The proposed BPSI solution shows more significant gains for the Mobile and Calendar video sequence. This can be explained by the camera motion present in this sequence, which contains a zoom out and a slow left pan, and the specific BPSI capabilities to handle complex camera motions. In addition, this sequence also has high contrast, which benefits the search for the optimal perspective deformation.

For the Hall Monitor sequence in QCIF, gains up to 0.4 dB are obtained in terms of SI PSNR quality. These gains can be explained by the complex object motion associated to the two persons walking in the corridor. For the Bus QCIF sequence, gains up to 0.66 dB are obtained, while the Stefan QCIF sequence shows gains up to 0.37 dB. For the Stefan sequence, the BPSI framework is not able to estimate the motion model parameters as efficiently as for some other sequences since high camera motion occurs. Thus, considering the 15 Hz frame rate (for which the key frames are less correlated) and the rather high motion, lower SI quality gains are obtained when compared to the results obtained for the other video sequences.

For the Common Intermediate Format (CIF) spatial resolution with a 30 Hz frame rate, the SI quality gains are comparable to the QCIF results. For the Mobile and Calendar sequence, gains up to 0.89 dB are achieved, while the Bus sequence obtains 0.82 dB, the Stefan sequence 0.26 dB, and the Hall Monitor sequence 0.19 dB. For the sequences with high camera motion (such as Mobile and Calendar, and Bus) and complex deformations (such as Stefan), the perspective motion model can better characterize the true motion than the translational model.
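The advantage of the perspective model over the translational one can be illustrated with a small sketch (illustrative code, not the paper's implementation): a perspective transform is a 3×3 homography applied to points in homogeneous coordinates, of which pure translation is the special case with an identity upper-left 2×2 block. Zooms and rotations modify that block and therefore have no translational equivalent:

```python
import numpy as np

def warp_point(H, x, y):
    """Apply a 3x3 perspective transform (homography) to a 2D point."""
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]

# Pure translation: identity 2x2 block, last row [0, 0, 1].
H_translation = np.array([[1.0, 0.0,  3.0],
                          [0.0, 1.0, -2.0],
                          [0.0, 0.0,  1.0]])

# Zoom out (scale 0.5 about the origin): changes the 2x2 block,
# so no translational model can reproduce it.
H_zoom = np.array([[0.5, 0.0, 0.0],
                   [0.0, 0.5, 0.0],
                   [0.0, 0.0, 1.0]])
```

Warping the corners of a block with such a homography yields the deformed quadrilateral used for motion-compensated interpolation, whereas a translational model can only shift the block rigidly.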
In Figure 13, the SI quality temporal evolution is shown for the Mobile and Calendar, and Stefan sequences. The quantization parameters adopted for the H.264/AVC key frame codec in this experiment correspond to the RD point Q7. The SI quality results for the Mobile and Calendar sequence (see Figure 13a) show average gains of around 1 dB for the BPSI solution over the MCFI solution. This sequence includes a zoom out from the beginning until frame 74 and then a slow camera pan until the end of the sequence. As shown in Figure 13a, gains up to 1.8 dB are obtained while the zoom-out camera motion occurs, i.e., when the translational model cannot accurately capture the camera motion. From the SI quality results for the Stefan sequence (Figure 13b), it is also possible to conclude that significant BPSI gains over the MCFI solution are obtained for some parts of the sequence, notably when more complex motions occur. These gains are consistent with the overall gains presented in Table 3.
Figure 14 shows examples of the SI created for regions of two frames of the Hall Monitor sequence using the proposed BPSI and the benchmark MCFI techniques; for reference, the corresponding original frame regions are also included. As shown, the proposed method can bring relevant perceptual gains for a rather important object of the video sequence; in this case, the MCFI estimation leads to ghosting artifacts which are largely removed by the proposed BPSI technique.
5.4 RD performance evaluation
The RD performance for the proposed DVC-BPSI solution and the adopted benchmarks is presented in Figure 15 for the eight RD points defined in the adopted test conditions. Moreover, Table 5 shows the DVC-BPSI RD performance gains over DVC-MCFI using the Bjontegaard delta bitrate (BD-Rate) [28] and BD-PSNR metrics for four quantization parameter sets corresponding to the Q1, Q4, Q7, and Q8 RD points. The Bjontegaard metrics enable the comparison of RD curves in terms of the average PSNR improvement or the average bitrate savings (positive values mean gains).
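The BD-PSNR metric of [28] can be sketched as follows (a minimal numpy implementation; the function name and interface are illustrative): each RD curve is fitted with a cubic polynomial of PSNR as a function of log10(bitrate), and the average gap between the two fitted curves is computed over the overlapping rate interval.

```python
import numpy as np

def bd_psnr(rate_a, psnr_a, rate_b, psnr_b):
    """Bjontegaard delta PSNR of curve B over curve A, in dB
    (positive values mean B is better)."""
    la = np.log10(np.asarray(rate_a, dtype=float))
    lb = np.log10(np.asarray(rate_b, dtype=float))
    # Cubic fit of PSNR vs. log10(bitrate) for each curve.
    fit_a = np.polyfit(la, np.asarray(psnr_a, dtype=float), 3)
    fit_b = np.polyfit(lb, np.asarray(psnr_b, dtype=float), 3)
    # Overlapping log-rate interval.
    lo, hi = max(la.min(), lb.min()), min(la.max(), lb.max())
    # Average each fitted curve over [lo, hi] via the antiderivative.
    int_a, int_b = np.polyint(fit_a), np.polyint(fit_b)
    mean_a = (np.polyval(int_a, hi) - np.polyval(int_a, lo)) / (hi - lo)
    mean_b = (np.polyval(int_b, hi) - np.polyval(int_b, lo)) / (hi - lo)
    return mean_b - mean_a
```

BD-Rate is computed analogously with the roles of rate and PSNR exchanged, yielding the average bitrate saving at equal quality.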
From the RD performance results obtained, the following conclusions may be drawn:

DVC-BPSI vs. DVC-MCFI: For the video sequences adopted, the proposed DVC-BPSI codec always outperforms the state-of-the-art DVC-MCFI codec. In fact, the DVC-BPSI codec shows gains up to 0.66 dB (maximum) and 0.49 dB (in terms of BD-PSNR) over DVC-MCFI for the Mobile and Calendar video sequence. Even for the low-motion Hall Monitor video sequence, where no significant deformations occur, some RD gains were obtained, notably 0.12 dB in terms of BD-PSNR. The perspective motion model used in the BPSI creation process is the main factor responsible for the RD performance gains since all the remaining techniques are kept the same. Thus, these RD performance results in the context of DVC codecs validate the assumption that perspective motion models can more accurately represent the motion in a video sequence. Generally, video sequences with complex camera motion (such as Mobile and Calendar, and Bus) can be better characterized with more complex motion models, and thus higher RD performance gains can be obtained.

DVC-BPSI vs. H.264/AVC Intra: As observed, the DVC-BPSI codec is able to outperform the H.264/AVC Intra codec for all video sequences except the Stefan sequence; in fact, BD-PSNR gains up to 3.75 dB (see Table 5) are obtained for the Mobile and Calendar sequence, and a similar behavior can be observed for the Hall Monitor video sequence, with gains up to 3.56 dB. The complex and high motion in the Stefan sequence leads to a significant amount of 'estimation errors' in the SI frame, which causes lower RD performance. In such cases, better RD performance may be obtained with learning DVC solutions, which exploit already decoded data, or hint DVC solutions, which exploit auxiliary information coded and transmitted by the encoder, e.g., block hashes.

DVC-BPSI vs. H.264/AVC Zero Motion: The DVC-BPSI codec shows BD-PSNR gains up to 0.3 dB for the Bus sequence and up to 1.4 dB for the Mobile and Calendar sequence over the H.264/AVC Zero Motion codec. The impressive Mobile and Calendar RD performance gains can be explained by the fact that the DVC-BPSI codec can properly estimate (at the decoder) the slow panning (and zooming) while the H.264/AVC Zero Motion codec cannot, since no motion estimation is performed at all. However, the DVC-BPSI codec does not outperform the H.264/AVC Zero Motion codec for the Hall Monitor and Stefan sequences, with losses up to 1.3 dB. For the Hall Monitor sequence, the H.264/AVC Zero Motion video codec is able to efficiently characterize the static areas with the Skip mode (where no residual or MV data are transmitted), while for the Stefan sequence, the BPSI framework is still unable to create SI with competitive quality. However, the H.264/AVC Zero Motion codec has a higher encoding complexity than the DVC-BPSI codec, as shown in the encoding complexity evaluation presented in [25].
Finally, it is important to stress that the DVC-BPSI encoding complexity is kept low since no changes were made at the encoder side. The proposed BPSI framework can also be used for other purposes, such as error concealment in the context of a predictive video decoder, notably when an entire frame is lost (which happens frequently in networks with packet losses), and in frame-rate up-conversion scenarios, when a low frame rate video is transmitted but a higher display rate is desired.
6. Conclusions
The main objective of this paper was to improve the overall RD performance of a state-of-the-art DVC codec using a more powerful SI creation framework, notably exploiting perspective transform motion modeling to better characterize more complex video motions. With the proposed DVC-BPSI codec, gains up to 1 dB were obtained in terms of SI quality and up to 0.5 dB in terms of RD performance when compared to the previous state-of-the-art MCFI and DVC-MCFI solutions, respectively. Regarding H.264/AVC Intra, RD gains of up to 4 dB are obtained for low- and medium-motion sequences; when compared to H.264/AVC Zero Motion, RD performance gains up to 1.5 dB can be obtained for sequences with strong global camera motion. Future work shall consider the development of an algorithm to merge neighboring perspective transforms according to some similarity criterion. In this way, it should be possible to eliminate some wrongly chosen deformations, thus providing a more coherent set of perspective transforms.
References
Wiegand T, Sullivan GJ, Bjøntegaard G, Luthra A: Overview of the H.264/AVC video coding standard. IEEE Trans. Circuits Syst. Video Technol. 2003, 13(7):560–576.
Sullivan GJ, Ohm J, Han W-J, Wiegand T: Overview of the high efficiency video coding (HEVC) standard. IEEE Trans. Circuits Syst. Video Technol. 2012, 22(12):1649–1668.
Ohm J, Sullivan GJ, Schwarz H, Tan TK, Wiegand T: Comparison of the coding efficiency of video coding standards—including high efficiency video coding (HEVC). IEEE Trans. Circuits Syst. Video Technol. 2012, 22(12):1669–1684.
Bossen F, Bross B, Suhring K, Flynn D: HEVC complexity and implementation analysis. IEEE Trans. Circuits Syst. Video Technol. 2012, 22(12):1685–1696.
Slepian D, Wolf J: Noiseless coding of correlated information sources. IEEE Trans. Info. Theory 1973, 19(4):471–480. 10.1109/TIT.1973.1055037
Wyner A, Ziv J: The rate-distortion function for source coding with side information at the decoder. IEEE Trans. Info. Theory 1976, 22(1):1–10. 10.1109/TIT.1976.1055508
Pereira F, Torres L, Guillemot C, Ebrahimi T, Leonardi R, Klomp S: Distributed video coding: selecting the most promising application scenarios. Signal Processing Image Comm. 2008, 23(5):339–352. 10.1016/j.image.2008.04.002
Artigas X, Ascenso J, Dalai M, Klomp S, Kubasov D, Ouaret M: The DISCOVER codec: architecture, techniques and evaluation. Picture Coding Symposium, Lisbon, Portugal, 7–9 Nov 2007
Ascenso J, Brites C, Pereira F: Content adaptive Wyner-Ziv video coding driven by motion activity. IEEE International Conference on Image Processing, Atlanta, USA, 16–19 Sept 2007
Girod B, Aaron A, Rane S, Rebollo-Monedero D: Distributed video coding. Proc. IEEE 2005, 93(1):71–83.
Kubasov D, Guillemot C: Mesh-based motion-compensated interpolation for side information extraction in distributed video coding. IEEE International Conference on Image Processing, Atlanta, USA, 8–11 Oct 2006
Tagliasacchi M, Tubaro S, Sarti A: On the modeling of motion in Wyner-Ziv video coding. IEEE International Conference on Image Processing, Atlanta, USA, 8–11 Oct 2006
Petrazzuoli G, Cagnazzo M, Pesquet-Popescu B: High order motion interpolation for side information improvement in DVC. IEEE International Conference on Acoustics, Speech and Signal Processing, Dallas, USA, 15–19 Mar 2010
Zhang Y, Zhao D, Liu H, Li Y, Ma S, Gao W: Side information generation with auto regressive model for low-delay distributed video coding. J. Visual Comm. Image Represent. 2012, 23(1):229–236. 10.1016/j.jvcir.2011.10.001
Huang X, Forchhammer S: Improved side information generation for distributed video coding. IEEE International Workshop on Multimedia Signal Processing, Cairns, Australia, 8–10 Oct 2008
Huang X, Raket LL, Luong HV, Nielsen M, Lauze F, Forchhammer S: Multi-hypothesis transform domain Wyner-Ziv video coding including optical flow. IEEE International Workshop on Multimedia Signal Processing, Hangzhou, China, 17–19 Oct 2011
Cagnazzo M, Maugey T, Pesquet-Popescu B: A differential motion estimation method for image interpolation in distributed video coding. IEEE International Conference on Acoustics, Speech, and Signal Processing, Taipei, Taiwan, 19–24 Apr 2009
Abou-El-Ailah A, Dufaux F, Farah J, Cagnazzo M, Pesquet-Popescu B: Fusion of global and local motion estimation for distributed video coding. IEEE Trans. Circuits Syst. Video Technol. 2013, 23(1):158–172.
Pratt WK: Digital Image Processing. 4th edition. New York: Wiley Interscience; 2007.
Faria S: Very low bit rate video coding using geometric transform motion compensation. PhD thesis, University of Essex; 1996.
Sung J, Park S, Park J, Jeon B: Picture-level parametric motion representation for efficient motion compensation. IEEE International Conference on Image Processing, Brussels, Belgium, 11–14 Sept 2011
Glantz A, Tok M, Krutz A, Sikora T: A block-adaptive skip mode for inter prediction based on parametric motion models. IEEE International Conference on Image Processing, Brussels, Belgium, 11–14 Sept 2011
Glantz A, Krutz A, Sikora T: Adaptive global motion temporal prediction for video coding. Picture Coding Symposium, Nagoya, Japan, 7–10 Dec 2010
Ascenso J, Brites C, Pereira F: Improving frame interpolation with spatial motion smoothing for pixel domain distributed video coding. 5th EURASIP Conference on Speech and Image Processing, Multimedia Communications and Services, Smolenice, Slovak Republic, 29 June–2 July 2005
Ascenso J: Improving compression efficiency in distributed video coding systems. PhD dissertation, Instituto Superior Técnico, Universidade Técnica de Lisboa, November 2010. http://www.img.lx.it.pt/publications_ig.html. Accessed 19 Dec 2013
Brites C, Pereira F: Correlation noise modeling for efficient pixel and transform domain Wyner–Ziv video coding. IEEE Trans. Circuits Syst. Video Technol. 2008, 18(9):1177–1190.
Ascenso J, Brites C, Pereira F: Design and performance of a novel low-density parity-check code for distributed video coding. IEEE International Conference on Image Processing, San Diego, USA, 12–15 Oct 2008
Bjontegaard G: Calculation of average PSNR differences between RD curves. 13th ITU-T VCEG Meeting, Austin, USA, 2–4 Apr 2001
Acknowledgements
This work was supported by the Fundação para a Ciência e a Tecnologia (FCT) through the project Multiview Video Disparity Compensation using Geometric Transforms (MuViDisCo) with reference IT/LA/P01082/2011.
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Monteiro, P., Ascenso, J. & Pereira, F. Perspective transform motion modeling for improved side information creation. EURASIP J. Adv. Signal Process. 2013, 189 (2013). https://doi.org/10.1186/1687-6180-2013-189