 Research
 Open Access
Video coding using arbitrarily shaped block partitions in globally optimal perspective
EURASIP Journal on Advances in Signal Processing volume 2011, Article number: 16 (2011)
Abstract
Algorithms using content-based patterns to segment moving regions at the macroblock (MB) level have exhibited good potential for improved coding efficiency when embedded into the H.264 standard as an extra mode. The content-based pattern generation (CPG) algorithm provides a locally optimal result, as only one pattern can be optimally generated from a given set of moving regions; it fails to provide optimal results for multiple patterns over the entire set. Obviously, a globally optimal solution that clusters the set and then generates multiple patterns would enhance the performance further, but such a solution is not achievable due to the NP-complete nature of the clustering problem. In this paper, we propose a near-optimal content-based pattern generation (OCPG) algorithm which outperforms the existing approach. Coupling OCPG, which generates a set of patterns after clustering the MBs into several disjoint sets, with a direct pattern selection algorithm that admits all the MBs into the multiple pattern modes outperforms the existing pattern-based coding when embedded into the H.264.
1. Introduction
Video coding standards such as H.263 [1] and MPEG-2 [2] introduced block-based motion estimation (ME) and motion compensation (MC) to improve coding performance by capturing various motions in a small area (for example, an 8 × 8 block). However, they are inefficient when coding at low bit rates due to their inability to exploit intra-block temporal redundancy (ITR). Figure 1 shows that objects can partly cover a block, leaving highly redundant information in successive frames as the background is almost static in co-located blocks. The inability to exploit ITR results in the entire 16 × 16-pixel macroblock (MB) being coded with ME&MC regardless of whether there are moving objects in the MB.
The latest video coding standard H.264 [3] has introduced tree-structured variable block size ME&MC, from 16 × 16 pixels down to 4 × 4 pixels, to approximate various motions more accurately within a MB. We empirically observed in [4] that, while coding head-and-shoulder type video sequences at low bit rates, more than 70% of the MBs were never partitioned into smaller blocks by the H.264, whereas they would be at a high bit rate. In [5], it has been further demonstrated that the partitioning actually depends on the extent of motion and the quantization parameter (QP): for low motion video, 67% (with low QP) to 85% (with high QP) of MBs are not further partitioned; for high motion video, the range is 26–64%. It can be easily observed that the possibility of choosing smaller block sizes diminishes as the target bit rate is lowered. Consequently, the coding efficiency improvement due to variable blocks can no longer be realized at a low bit rate, as larger blocks have to be chosen in most cases to keep the bit rate in check, at the expense of inferior shape and motion approximation.
Recently, many researchers [6–12] have successfully introduced other forms of block partitioning to approximate the shape of a moving region more accurately and so improve compression efficiency. Chen et al. [6] extended the variable block size ME&MC method to include four additional partitions, each with one L-shaped and one square segment, to improve picture quality. One limitation of segmenting MBs with rectangular/square building blocks, as done with variable block size and in [6], is that the partitioning boundaries cannot always approximate the arbitrary shapes of moving objects efficiently.
Hung et al. [7] and Divorra et al. [8, 9] independently addressed this limitation of variable block size ME&MC by introducing additional wedge-like partitions, where a MB is segmented using a straight line modelled by two parameters: orientation angle θ and distance ρ from the centre of the MB. A very limited case with only four partitions (θ ∈ {0°, 45°, 90°, 135°} and ρ = 0) was reported by Fukuhara et al. [10] even before the introduction of variable block size ME&MC for low bit rate video coding. Chen et al. [11] and Kim et al. [12] improved compression efficiency further with implicit block segmentation (IBS), thus avoiding explicit encoding of the segmentation information. In both cases, the segmentation of the current MB can be generated by the encoder and decoder using previously coded frames only.
But none of these techniques, including the H.264 standard, allows a block-partitioned segment to be encoded by skipping ME&MC. Consequently, they use unnecessary bits to encode almost-zero motion vectors with perceptually insignificant residual errors for the background segment. These bits are quite valuable at low bit rates and could otherwise be spent wisely on encoding residual errors in perceptually significant segments. Note that the H.264 standard acknowledges the penalty of extra bits used by motion vectors by imposing rate-distortion optimisation in the motion search to keep the motion vector length small, and by disallowing B-frames, which require two motion vectors, in the Baseline profile widely used in video conferencing and mobile applications.
Pattern-based video coding (PVC), initially proposed by Wong et al. [13] and later extended by Paul et al. [14, 15], used 8 and 32 predefined regular-shaped binary rectangular and non-rectangular pattern templates, respectively, to segment the moving region in a MB to exploit the ITR. Note that a pattern template is a 16 × 16 grid of positions (i.e., the size of a MB) with 64 '1's and 192 '0's. After the moving region of a MB is best matched with a pattern template (see Figure 1) through an efficient similarity measure, the motion is estimated and the residual error compensated using only the pattern-covered region (i.e., only 64 of the 256 pixels), while the remaining region of the MB is copied from the reference block without signalling any bits for motion vectors or residual errors. Successful pattern matching can, therefore, theoretically attain a maximum compression ratio of 4:1 for a MB, as the size of the pattern is 64 pixels. The actual compression will, however, be lower due to the overheads of identifying this special type of MB as well as the best-matched pattern for it, and the matching error in approximating the moving region using the pattern. An example of pattern approximation using the thirty-two predefined patterns of [14] for the Miss America video sequence is shown in Figure 2.
As the objects in video sequences vary widely, the moving region is not necessarily well matched by any predefined regular-shaped pattern template. Intuitively, more efficient coding is possible if the moving region is encoded using pattern templates generated from the content of the video sequences. Very recently, Paul and Murshed [16] proposed a content-based pattern generation (CPG) algorithm to generate eight patterns from the given moving regions. The PVC using those generated patterns outperformed the H.264 (Baseline profile) and the existing PVC by 1.0 and 0.5 dB, respectively [16], for head-and-shoulder type video sequences. They also mathematically proved that this pattern generation technique is optimal if only one pattern is generated for a given division of moving regions. Thus, their solution is only locally optimal, as a single pattern rather than multiple patterns is generated optimally. But for efficient coding, multiple patterns are necessary for the different shapes of moving regions.
It is obvious that a globally optimal solution would improve the pattern generation process for multiple patterns and, hence, eventually the coding efficiency. A globally optimal solution could be achieved if we were able to divide the entire set of moving regions optimally. But this clustering problem is NP-complete, and no known clustering technique guarantees optimal clusters. In this paper, we propose a heuristic to find near-optimal clusters and apply the locally optimal CPG algorithm on each cluster to get a near-globally optimal solution.
Moreover, the existing PVC used a predefined threshold to reduce the number of MBs coded using patterns, to control the computational complexity since the pattern mode requires extra ME cost. It has been experimentally observed that any fixed threshold may overlook some potential MBs for the pattern mode across different video sequences [15]. Obviously, eliminating this threshold, by allowing all MBs to be motion estimated and compensated using patterns and finally selected by the Lagrangian optimization function, provides better rate-distortion performance at the cost of increased computational time. To reduce the computational complexity, we reuse the already known motion vector of the H.264 in the pattern mode, which may degrade the performance slightly, but the net performance gain outweighs this.
As the best pattern selection process relies solely on the similarity measure, the best pattern is not guaranteed to always yield maximum compression and better quality, which also depend on the residual errors after quantization and the Lagrangian multiplier. This paper therefore introduces additional pattern modes that select patterns in order of similarity ranking. Furthermore, a new Lagrangian multiplier is determined, as the pattern modes produce relatively fewer bits and slightly higher distortion compared to the other modes of the H.264. The experimental results confirm that this new scheme successfully improves the rate-distortion performance compared to the existing PVC as well as the H.264.
The rest of the paper is organized as follows: Section 2 provides the background of the content-based PVC techniques, including the collection of moving regions and generation of pattern templates, and the encoding and decoding of PVC using content-based patterns. Section 3 illustrates the proposed approach, including the optimal pattern generation technique and its parameter settings. Section 4 discusses the computational complexity of the proposed technique. Section 5 presents the experimental setup along with comparative performance results. Section 6 concludes the paper.
2. Content-based PVC algorithm
The PVC with a set of content-based patterns, termed a pattern codebook, operates in two phases. In the first phase, moving regions are collected from a given number of frames and the pattern codebook is generated from those moving regions using the CPG algorithm [15]. In the second phase, the actual coding takes place using the generated pattern codebook.
2.1. Collection of moving regions and generation of pattern codebook
The moving region in a current MB is defined based on the number of pixels whose intensities differ from those of the corresponding pixels in the reference MB. The moving region M of a MB Ω in the current frame is obtained using the co-located MB ω in the reference frame [13] as follows:

M = T(|Ω − ω|) • Θ (1)

where Θ is a 3 × 3 unit matrix for the morphological closing operation denoted by • [17], which is applied to reduce noise, and the thresholding function T(v) = 1 if v > 2 (i.e., the said pixel intensity difference is bigger than two grey levels) and 0 otherwise. Let M_{1} be the total number of 1's in the matrix M. If 8 ≤ M_{1} < 2QP/3 + 64, where QP is the quantization parameter, the corresponding MB Ω will participate in the pattern generation process, as it has a reasonable number of moving pixels to be covered by the 64 '1's of a pattern so that a high matching error is avoided.
The binary moving region map of Ω is used in the pattern generation process as the representative of Ω. A MB with a moving region is termed a candidate region-active MB (CRMB). We have assumed that if the number of '1's in a CRMB is too low or too high, the corresponding MB is not suitable for encoding by the pattern mode, and thus we do not include such CRMBs in the pattern generation process described next. In the proposed technique, if the number of '1's is less than 8 (as in [13]), the MB has very little movement and can be encoded as a skipped block. On the other hand, if the total number of '1's is more than 64 + 2QP/3, the MB has very high motion and can be encoded using the standard H.264 modes. Obviously, more MBs are encoded using the pattern mode at low bit rates than at high bit rates; thus, we relate the upper-bound threshold to QP to regulate the number of CRMBs at different bit rates.
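As a concrete illustration, the moving-region extraction above can be sketched in Python with NumPy (a simplified sketch under our reading of the definitions; the 3 × 3 binary closing is hand-rolled and all function names are illustrative):

```python
import numpy as np

def binary_closing_3x3(a):
    """Morphological closing (dilation then erosion) with a 3x3 structuring element."""
    def dilate(x):
        p = np.pad(x, 1)
        out = np.zeros_like(x)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                out |= p[1 + dy:1 + dy + x.shape[0], 1 + dx:1 + dx + x.shape[1]]
        return out
    def erode(x):
        p = np.pad(x, 1, constant_values=1)
        out = np.ones_like(x)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                out &= p[1 + dy:1 + dy + x.shape[0], 1 + dx:1 + dx + x.shape[1]]
        return out
    return erode(dilate(a))

def moving_region(current_mb, reference_mb, qp):
    """M = T(|Omega - omega|) closed with Theta; returns (map, is_CRMB)."""
    diff = np.abs(current_mb.astype(int) - reference_mb.astype(int))
    m = binary_closing_3x3((diff > 2).astype(np.uint8))   # T(v)=1 if v>2
    ones = int(m.sum())                                    # M_1: number of '1's
    is_crmb = 8 <= ones < 2 * qp / 3 + 64                  # QP-dependent upper bound
    return m, is_crmb
```

A static co-located block produces an all-zero map and is excluded; a solid small moving patch survives the closing and qualifies as a CRMB.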
Once all such CRMBs are collected for a certain number of consecutive frames, decided by the rate-distortion optimizer [18] when the rate-distortion gain outweighs the overhead of encoding the shape of new patterns, they are divided into α sets to generate patterns. In order to generate patterns with minimal overlapping, a simple greedy heuristic is employed in which the CRMBs are divided into α clusters such that the average distance among the gravitational centres of CRMBs within a cluster is small, while that among the centres of CRMBs taken from different clusters is large. The CPG algorithm then generates a μ-pixel pattern for each cluster from the μ most frequent pixel positions among all the CRMBs in the cluster.
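The μ-most-frequent-pixel rule of the CPG algorithm can be sketched as follows (an illustrative sketch, assuming each CRMB map is a 16 × 16 binary NumPy array):

```python
import numpy as np

def cpg(cluster, mu=64):
    """Locally optimal pattern for one cluster: mark the mu positions
    where '1's occur most frequently across the cluster's CRMB maps."""
    freq = np.sum(cluster, axis=0)                 # per-position frequency of '1's
    top = np.argsort(freq.ravel())[::-1][:mu]      # mu most frequent positions
    pattern = np.zeros(freq.size, dtype=np.uint8)
    pattern[top] = 1
    return pattern.reshape(freq.shape)
```

When every CRMB in a cluster shares the same moving region, the generated pattern coincides with that region exactly.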
2.2. Encoding and decoding of PVC using a content-based pattern codebook
The Lagrangian multiplier [19, 20] is used to trade off the quality of the compressed video against the bit rate generated by the different modes. In this method, the Lagrangian multiplier λ is calculated for every MB with an empirical formula using the selected QP, as in the H.264 [18]:

λ = 0.85 × 2^{(QP−12)/3} (2)
During the encoding process, all possible modes including the pattern mode are first motion estimated and compensated for each MB, and the resultant rates and distortions are determined. The final mode m is selected as follows:

m = arg min_{m_{ i }} {D(m_{ i }) + λB(m_{ i })} (3)

where B(m_{ i }) is the total bits for mode m_{ i }, including the mode type, motion vectors, the extra pattern index code for the pattern mode, and the residual error after quantization, while D(m_{ i }) is measured as the sum of square differences between the original MB and the corresponding reconstructed MB for mode m_{ i }.
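The Lagrangian mode decision can be sketched as below; the λ formula is the widely used empirical H.264 one, and the candidate rate/distortion figures in the usage example are hypothetical:

```python
def lagrange_multiplier(qp):
    """Empirical H.264 mode-decision multiplier: 0.85 * 2^((QP - 12) / 3)."""
    return 0.85 * 2 ** ((qp - 12) / 3.0)

def select_mode(candidates, qp):
    """candidates: (mode_name, distortion_SSD, bits) tuples.
    Returns the mode minimizing the Lagrangian cost J = D + lambda * B."""
    lam = lagrange_multiplier(qp)
    return min(candidates, key=lambda c: c[1] + lam * c[2])[0]
```

A cheap pattern mode with moderate distortion can beat both a low-rate, high-distortion SKIP and a low-distortion, high-rate full mode.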
3. Proposed algorithm
As mentioned earlier, the CPG algorithm can generate an optimal pattern from given moving regions, but there is no guarantee of generating optimal multiple patterns from the entire given set of moving regions. For simplicity, it uses a clustering technique which divides the moving regions into α clusters to generate α patterns. Thus, it is obvious that the performance of the CPG also depends on the efficiency of the clustering technique. As aforementioned, clustering is an NP-complete problem, and thus a global optimization algorithm would be computationally unworkable. We propose a heuristic which solves this problem near-optimally.
3.1. Optimal content-based pattern generation algorithm
Without loss of generality, we can assume that an optimal clustering technique combined with the CPG algorithm provides an optimal pattern codebook. We define a codebook as optimal if each moving region is best matched by the pattern generated from that moving region's own cluster. Suppose that an optimal clustering technique divides the CRMBs into clusters C_{1}, C_{2},...,C_{ α }. If the pattern P_{ i } is generated from C_{ i }, i.e.,

P_{ i } = CPG(C_{ i }) (4)

and the pattern P_{ j } is selected as the best-matched pattern for the moving region M ∈ C_{ i } as

P_{ j } = arg max_{1 ≤ n ≤ α} |M ∧ P_{ n }|_{1} (5)

then P_{ i } and P_{ j } will be the same for an optimal pattern codebook, where ∧ represents the AND operation and |·|_{1} counts the number of '1's.
In the actual coding phase, a CRMB of a cluster can be approximated by either of two approaches: the pattern generated from its own cluster, or the best-matched pattern from the whole pattern codebook irrespective of cluster. The first approach is termed direct pattern selection and the latter exhaustive pattern selection.
The correct classification rate, τ, can be defined as the fraction of CRMBs that are matched by the pattern obtained through direct pattern selection, out of all CRMBs. Due to the overlapping regions of the patterns, there is a probability that a CRMB is better approximated by a pattern generated from a cluster other than its own. Obviously, τ will increase with the number of patterns in a codebook due to the better similarity between a moving region and its corresponding pattern. Moreover, a small number of patterns cannot approximate the CRMBs well; as a result, there is always a possibility of excluding a CRMB from the pattern mode if only the pattern extracted from a cluster is used to match against the CRMBs of that cluster. Thus, the system requires a reasonable number of patterns. On the other hand, we can call a CPG algorithm globally optimal if it produces a pattern set in such a way that each CRMB is best-similarity-matched by the pattern generated from its own cluster, i.e., the value of τ is 100%. We can define τ as follows, where |CRMBs| indicates the total number of CRMBs:

τ = |{M : P_{ i } = P_{ j }}| / |CRMBs| × 100% (6)

where P_{ i } and P_{ j } are selected from Equations 4 and 5, respectively. When τ = 100%, we get the optimal solution using clustering and the CPG algorithm. To achieve this, we modify the CPG algorithm so that a generic clustering technique using a pattern similarity metric becomes part of the algorithm. The dissimilarity of a CRMB against a pattern P_{ n } is defined as:

ψ(M, P_{ n }) = |M|_{1} − |M ∧ P_{ n }|_{1} (7)
where M and P_{ n } are the CRMB and the n-th pattern, respectively. The best-matched pattern is selected using Equation 5.
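Under these definitions, the dissimilarity measure, best-pattern selection, and correct classification rate τ can be sketched as follows (an illustrative sketch; `labels[i]` holds the cluster index of `crmbs[i]`):

```python
import numpy as np

def dissimilarity(m, p):
    """psi(M, P_n): moving pixels of M left uncovered by pattern P_n."""
    return int(np.sum(m)) - int(np.sum(m & p))

def best_pattern(m, codebook):
    """Index of the best-matched (minimum-dissimilarity) pattern."""
    return min(range(len(codebook)), key=lambda n: dissimilarity(m, codebook[n]))

def classification_rate(crmbs, labels, codebook):
    """tau: percentage of CRMBs best-matched by their own cluster's pattern."""
    hits = sum(best_pattern(m, codebook) == l for m, l in zip(crmbs, labels))
    return 100.0 * hits / len(crmbs)
```

With a codebook whose patterns coincide with the clusters' moving regions, every CRMB is matched by its own cluster's pattern and τ reaches 100%.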
Unlike the CPG, the optimal CPG (OCPG) (detailed in Figure 3) performs clustering and pattern formation in each iteration until τ is 100%. For a seed pattern codebook, it ensures that each CRMB is best matched by a pattern generated from its own cluster, i.e., the clustering process is optimal. However, this does not guarantee globally optimal clustering, as the process may be trapped in a local optimum. To approach global optimality, we determine the average dissimilarity ψ_{avg} over the pattern codebooks generated across iterations. The final pattern codebook is selected based on the minimum ψ_{avg} over a given number of iterations with random starts, as the minimum ψ_{avg} indicates the best pattern codebook. We can define ψ_{avg} as follows, where C indicates the total set of CRMBs, C_{ i } indicates the i-th subset of CRMBs clustered using the i-th pattern, |C_{ i }| indicates the total number of CRMBs in C_{ i }, and ψ^{i}(C_{ i }(j)) indicates the dissimilarity between the i-th pattern and the j-th CRMB in the subset C_{ i }:

ψ_{avg} = (1/|C|) Σ_{i=1}^{α} Σ_{j=1}^{|C_{ i }|} ψ^{i}(C_{ i }(j)) (8)
For one random start, we get a candidate global solution for a seed codebook; there may be multiple such solutions for the given moving regions. When the search space is very large and there is no suitable algorithm to find the optimum solution, a k-change neighbourhood search may be used to obtain a k-optimal solution [21]. Lin and Kernighan [22] empirically found that a 3-optimal solution for the travelling salesman problem has a probability of about 0.05 of not being optimal, and hence 100 random starts yield the optimum with a probability of about 0.99. Lin and Kernighan also demonstrated that a 3-optimal solution is much better than a 2-optimal solution, whereas a 4-optimal solution is not sufficiently superior to a 3-optimal one to justify the additional computational cost. In our approach, we likewise use 100 random starts and replace 3 pixels in each pattern to approach the optimal solution. We terminate each random start when either the average dissimilarity is not reduced in successive iterations or τ = 100%. Thus, OCPG ensures convergence while providing near-optimal solutions.
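The overall OCPG loop, i.e., random starts that alternate pattern regeneration with re-clustering until the average dissimilarity stops improving, can be sketched as below. This is a simplified sketch: it seeds each start with random labels rather than a K-means seed codebook, and it omits the explicit 3-pixel change move.

```python
import random
import numpy as np

def ocpg(crmbs, alpha=8, mu=64, starts=5, max_iter=50, seed=0):
    """Near-optimal pattern generation: several random starts, each alternating
    (regenerate patterns from clusters) and (re-cluster CRMBs by best pattern)
    until the average dissimilarity psi_avg stops improving."""
    rng = random.Random(seed)
    shape = crmbs[0].shape

    def diss(m, p):                                  # uncovered moving pixels
        return int(np.sum(m)) - int(np.sum(m & p))

    def gen(cluster):                                # mu-most-frequent-pixel pattern
        freq = np.sum(cluster, axis=0).ravel()
        pat = np.zeros(freq.size, dtype=np.uint8)
        pat[np.argsort(freq)[::-1][:mu]] = 1
        return pat.reshape(shape)

    best_codebook, best_avg = None, float("inf")
    for _ in range(starts):
        labels = [rng.randrange(alpha) for _ in crmbs]   # random seed clustering
        prev_avg = float("inf")
        codebook = None
        for _ in range(max_iter):
            codebook = [gen([m for m, l in zip(crmbs, labels) if l == i] or crmbs)
                        for i in range(alpha)]
            labels = [min(range(alpha), key=lambda n: diss(m, codebook[n]))
                      for m in crmbs]
            avg = sum(diss(m, codebook[l]) for m, l in zip(crmbs, labels)) / len(crmbs)
            if avg >= prev_avg:                      # converged for this start
                break
            prev_avg = avg
        if prev_avg < best_avg:
            best_codebook, best_avg = codebook, prev_avg
    return best_codebook, best_avg
```

An empty cluster falls back to the whole CRMB set, and the codebook with the lowest converged ψ_avg over all starts is kept, mirroring the minimum-ψ_avg selection described above.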
The main advantage of this global OCPG approach over the local CPG approach is that it uses the whole moving region information to cluster a CRMB against a pattern (instead of the gravitational centre of the CRMB [15]). Moreover, multiple iterations ensure the quality of the pattern codebook in representing the CRMBs, and the approach does not require exhaustive pattern matching, which reduces the computational time needed to select the best-matched pattern from the codebook for each CRMB.
Figure 4 shows how a pattern is generated using the proposed OCPG algorithm. Figure 4a shows a 3D representation of the total moving regions per pixel position, calculated by summing all CRMBs' '1's in a cluster in the first iteration. This 3D representation indicates the most significant moving area (where the frequency is high) in a cluster. Figure 4d shows the same after the final iteration. Note that Figure 4d has a more concentrated high-frequency area compared to Figure 4a, which suggests the necessity of global optimization for pattern generation. Figures 4b, e show the 2D cluster views. The final patterns are shown in Figures 4c, f, where the latter is obviously the more desirable pattern due to its compactness.
3.2. Impact of OCPG algorithm on correct classification rate τ, dissimilarity ψ, and number of iterations
Figure 5 shows the average number of iterations needed for each random start to reach τ = 100% using ten standard QCIF video sequences. The average is 9.73 iterations per random start; it would be much lower if we used seed patterns for each start, but seed patterns may bias the result towards the seed pattern shape.
Figure 6 shows the 32 patterns used in [14, 15]. To generate an arbitrary number of patterns by definition, certain features are assumed for each 64-pixel pattern: each is regular (i.e., bounded by straight lines), clustered (i.e., the pixels are connected), and boundary-adjoined. Since the moving region of a MB is normally part of a rigid object, the clustered and boundary-adjoined features of a pattern are easily justified, while the regularity feature is added to limit the pattern codebook size.
Figure 7 shows some example patterns from the seven test sequences. It is interesting to note the lack of similarity between the pattern sets of the different sequences. The patterns cover different regions of a MB to maximise the number of pattern-coded MBs and hence the compression. It should also be noted that of the three fundamental pixel-based assumptions, which apply to any predefined codebook, only regularity has been relaxed, while the clustered and boundary-adjoined conditions are adhered to in most cases. This relaxation is one of the main reasons for the superior coding efficiency achieved by the arbitrary-shaped patterns.
Figure 8 shows how the proposed OCPG algorithm generates the optimal codebook. For each random start, the OCPG algorithm reduces the dissimilarity ψ_{avg} (see Figure 8b) by reclassifying the CRMBs (Line 15 of the proposed OCPG algorithm) using the best-matched pattern. Thus, it increases τ (see Figure 8a) and also ensures the convergence of the OCPG algorithm.
It is clear that the coding performance decreases as the group of frames participating in the pattern formation process of the proposed OCPG algorithm grows, since the generated pattern codebook then only approximately captures the shapes of the CRMBs. This imposes a restriction on the group-of-frames size. Thus, we need to refresh the pattern codebook at regular intervals. As shown by experiments, the group of pictures (GOP) boundary is a good candidate point at which to test whether the codebook needs refreshing. The detailed procedure of pattern codebook refreshing and transmission is described in Section 3.4.
3.3. Clustering techniques
The CPG algorithm uses the K-means clustering technique [23], clustering the CRMBs by their gravitational centres. The average value of τ is at best 70% using the CPG algorithm, because the gravitational centre represents all 256 pixels with a single point. We also investigated the Fuzzy C-means clustering technique [24, 25], but the results are almost the same. A neural network is not a good candidate due to its computational complexity. It is interesting to note that the performance of the proposed OCPG algorithm does not depend on any specific clustering algorithm: whatever clustering algorithm is used merely generates the seed codebook, and subsequently the process converges quickly with our pattern similarity matching.
3.4. Pattern codebook refreshing and coding
For content-based pattern generation, we need to transmit the pattern codebook at certain intervals. To determine whether to transmit the newly generated codebook or continue with the current one, we consider the bits and distortions generated with both the current and the previous pattern codebooks. The GOP [26] may be the best choice of interval, as after a GOP we need to send a fresh intra picture in the bitstream. Note that this GOP may be different from the group of frames used for codebook generation. To trade off bitstream size against quality, we use the Lagrangian optimization function, as it is already used to control the rate-distortion performance. Here we consider the average distortion and bits per MB in both cases, and select the new codebook if it provides a lower Lagrangian cost than the previous one.
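The refresh decision can be sketched as a straightforward Lagrangian cost comparison; amortizing the codebook transmission bits over the GOP's MBs is our assumption about the accounting:

```python
def refresh_codebook(d_new, b_new, d_old, b_old, lam, codebook_bits, mbs_per_gop):
    """Keep the newly generated codebook only if its per-MB Lagrangian cost,
    with the codebook transmission overhead amortized over the GOP's MBs,
    beats the cost of continuing with the current codebook."""
    j_new = d_new + lam * (b_new + codebook_bits / mbs_per_gop)
    j_old = d_old + lam * b_old
    return j_new < j_old
```

At low bit rates the fixed codebook overhead dominates, so refreshes happen less often, consistent with the behaviour reported in Figure 9.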
From the experimental results, we observe that the arbitrary patterns need to be refreshed around 2 to 4 times over the first 100 frames of the seven standard QCIF video sequences (the same as those used in Figure 7), as illustrated in Figure 9. The figure also shows that the number of transmissions increases with the bit rate, because the almost fixed number of bits for pattern transmission contributes significantly to the rate-distortion optimization at low bit rates but insignificantly at relatively high bit rates. Note that a refresh count of five means that the pattern codebook is refreshed in every GOP in our experiments.
For pattern codebook transmission, we divide each pattern (i.e., a 16 × 16-pixel binary MB) into four 8 × 8 blocks and then apply zero-run-length coding. The zero-run length ranges from 0 to 63, as the total number of elements in a block is 64. We use Huffman coding to assign a variable-length code to each run length; the code lengths vary from 2 to 14 bits for run lengths of 0–63. We treat a run length of 64 (i.e., an all-zero block) as a special case and assign it a two-bit code as well. As the variable-length codes can easily be generated from the frequencies of the zero-run lengths using Huffman coding, we do not include the whole table in this paper. From the experimental results, eight patterns, each with 64 '1's, require 518 bits on average. On the other hand, if we used fixed-length coding for the positions of the '1's in a pattern, we would need 4,096 bits to transmit eight patterns with 64 '1's each (i.e., 8 × 64 × 8 = 4,096 bits).
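The block splitting and zero-run extraction can be sketched as follows (the Huffman table itself is omitted, as in the paper; treating the run of zeros preceding each '1' as the coded symbol is our reading of the scheme):

```python
import numpy as np

def zero_runs(block):
    """Zero-run lengths preceding each '1' in a raster-scanned 8x8 block;
    an all-zero block is signalled by the special run length 64."""
    runs, run = [], 0
    for v in block.ravel():
        if v:
            runs.append(run)
            run = 0
        else:
            run += 1
    return runs if runs else [64]

def pattern_to_runs(pattern):
    """Split a 16x16 binary pattern into four 8x8 blocks, run-length coding each."""
    return [zero_runs(pattern[r:r + 8, c:c + 8]) for r in (0, 8) for c in (0, 8)]
```

The resulting run-length symbols would then be fed to the Huffman coder described above.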
3.5. Multiple pattern modes and allowance of all MBs as CRMBs
As mentioned in the Introduction, a best pattern selection process relying on the similarity measure does not guarantee that the best pattern always yields the best coding efficiency, because of the residual errors after quantization and the choice of Lagrangian multiplier. To address this, we use multiple pattern modes that select patterns in order of similarity. Since the similarity measure is a good estimator, we only consider the higher-ranked patterns. Eliminating the CRMB classification threshold 8 ≤ M_{1} < 2QP/3 + 64, by allowing all MBs to be motion estimated and compensated using the pattern modes and finally selected by the Lagrangian optimization function, provides better rate-distortion performance. Obviously, this increases the computational complexity, which is kept in check by reusing the motion vector already determined by the 16 × 16 mode.
3.6. Encoding and decoding in the proposed technique
The proposed technique, near-globally-optimal arbitrarily shaped PVC (ASPVC-Global), uses a pattern codebook comprising eight patterns with 64 pixels marked as '1'. Note that a pattern is a 16 × 16 binary MB in which 64 positions are marked '1' and the rest '0'. The proposed OCPG technique is generic and can form patterns of any pixel size (for example 64 as used in the experiments, 128, or 192) with any number of patterns in a codebook (for example 2, 4, 8 as used in the experiments, 16, or 32). We investigated different combinations of pixel sizes and pattern counts, and found that eight 64-pixel patterns form the best pattern codebook in terms of rate-distortion and computational performance across different video sequences. We use fixed-length codes (i.e., 3 bits) to identify each pattern in the proposed technique. Note that we also encode the pattern mode using finer quantization: in the implementation, we use QP_{pattern} = QP − 2, where QP is used for the other standard modes. The rationale for the finer quantization is that, as the pattern mode requires fewer bits than the other modes, we can easily spend more bits on coding residual errors by lowering the quantization. The final mode decision is made by the Lagrangian optimizer.
Before encoding a GOP, a new pattern codebook is generated using all frames of the GOP. That GOP is then encoded using both the new codebook and the previous codebook (if there is one; for the first GOP there is no previous codebook). We select the bitstream with the minimum cost function (using Equation 3 with the new Lagrangian multiplier (see Section 5.1)) based on the average bits and distortion (sum of square differences) per MB.
As mentioned earlier, we use the motion vector of the 16 × 16 mode as the pattern mode motion vector to avoid the computational cost of ME. Only the pattern-covered residual error (i.e., the region marked '1' in the pattern template) is encoded; the rest of the region is copied from the motion-translated region of the reference frame. To encode a pattern-covered region, we need four 4 × 4-pixel sub-blocks (as there are 64 '1's in a pattern) for the DCT transformation. With the existing shape of a pattern (for example, the first pattern for the Miss America video sequence in Figure 7), we might need more than four 4 × 4-pixel blocks for the DCT. To avoid this, we rearrange the 64 positions before the transformation so that no more than four blocks are needed. The inverse arrangement is performed in the decoder using the corresponding pattern index, and thus no information is lost.
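The rearrangement can be sketched as a simple gather/scatter; scanning the pattern-covered samples in raster order is an assumption, as the paper does not specify the scan:

```python
import numpy as np

def pack_pattern_residual(residual, pattern):
    """Encoder side: gather the 64 pattern-covered residual samples (raster
    order) into four 4x4 blocks ready for the DCT."""
    vals = residual[pattern.astype(bool)]        # 64 samples in raster order
    return vals.reshape(4, 4, 4)                 # four 4x4 blocks

def unpack_pattern_residual(blocks, pattern):
    """Decoder side: scatter the 64 samples back to the pattern's '1' positions."""
    out = np.zeros(pattern.shape, dtype=blocks.dtype)
    out[pattern.astype(bool)] = blocks.reshape(-1)
    return out
```

Because the scatter is the exact inverse of the gather for a given pattern index, the round trip is lossless, as the text requires.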
In the decoder, we can determine the pattern mode and the particular pattern from the MB type and pattern index codes respectively. From the transmitted pattern codebook, we also know the shape of the patterns i.e., the positions of '1's and '0's. After inversely arranging the residual errors according to the pattern, we reconstruct the MB of the current frame by adding residual error with the motion translated MB of the reference frame.
4. Computational complexity of OCPG algorithm
In order to determine the computational complexity of the proposed ASPVC-Global algorithm, let us compare it with the H.264 standard. From now on, the previous content-based PVC is referred to as ASPVC-Local [16]. The H.264 encodes each MB with a motion search for each mode. When the proposed ASPVC-Global scheme is embedded into the H.264 as an extra mode, an additional one-fourth motion search is required per MB, as the pattern size is a quarter of a macroblock. Each macroblock takes part in the proposed OCPG algorithm and the best pattern is selected at the end. A detailed analysis of the proposed OCPG algorithm follows.
We can divide the entire process into (i) binary matrix calculation, (ii) clustering and correct classification rate τ calculation, (iii) pixel frequency calculation for each cluster, and (iv) sorting the pixels based on frequency. Let N, α, M^{2}, k, and I be the total number of MBs, the total number of clusters, the block size, the total number of random starts, and the number of iterations until τ = 100%, respectively. Then:

(i) Each binary matrix calculation requires one subtraction, one absolute value, and one comparison, for 3NM^{2} operations in total.

(ii) Each clustering step requires one comparison and one addition, for 2αNM^{2} operations in total; each correct classification rate calculation requires one comparison, for αN operations.

(iii) Each pixel frequency calculation requires one addition, for NM^{2} operations in total.

(iv) Sorting the pixel frequencies requires 2αM^{2} ln M operations.
Therefore, the proposed OCPG algorithm requires 3NM^{2} + kI(2αNM^{2} + αN + NM^{2} + 2αM^{2} ln M) operations. If we assume that N >> α and N >> M, the required operations reduce to NM^{2}(4 + 16K), where K is the total number of iterations, i.e., the number of random starts times the associated inner-loop iterations. On the other hand, a motion search for any mode requires 3(2d + 1)^{2}NM^{2} operations, where d is the motion search range. Thus, the proposed ASPVC-Global with 100 random starts and 9.73 inner-loop iterations on average (according to Figure 5) until τ = 100% requires no more than 5.4 times the operations of a full motion search for one mode with a search range of 15.
Compared to fractional as well as multi-mode motion search, this extra computation does not preclude real-time operation. The experimental results also show that the maximum dissimilarity is within 7% of the minimum dissimilarity over 100 random starts; that is, with only one start we lose at most 7% of the clustering accuracy. Thus, depending on the available computing power or hardware, the proposed OCPG can be made more efficient by reducing the number of random starts. The experimental results show that with only five random starts we achieve performance very close to the optimal one and much better than the existing approach. The OCPG with five random starts and 9.73 iterations until τ = 100% requires no more than 30% more operations compared to a full motion search for one mode with a search range of 15 pixels.
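The operation counts above can be checked numerically. The sketch below simply plugs the simplified OCPG count NM^{2}(4 + 16K) and the full-search count 3(2d + 1)^{2}NM^{2} into Python; the function names are illustrative.

```python
def ocpg_ops(K):
    """Simplified OCPG operation count per MB, in units of N*M^2:
    4 + 16K, where K is the total number of iterations over all
    random starts."""
    return 4 + 16 * K

def full_search_ops(d=15):
    """Full-search ME operation count per MB, in units of N*M^2:
    3(2d + 1)^2 for search range d."""
    return 3 * (2 * d + 1) ** 2

# 100 random starts x 9.73 average inner-loop iterations
ratio_100 = ocpg_ops(100 * 9.73) / full_search_ops()
# 5 random starts x 9.73 average inner-loop iterations
ratio_5 = ocpg_ops(5 * 9.73) / full_search_ops()
print(f"100 starts: {ratio_100:.2f}x full search; 5 starts: {ratio_5:.2f}x")
```

These reproduce the figures quoted in the text: roughly 5.4 times a one-mode full search for 100 random starts, and under 30% of it for five random starts.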
For multiple pattern modes, the ASPVC-Global needs only bit and distortion calculation, without ME. The ME, irrespective of a scene's complexity, typically comprises more than 60% of the computational overhead required to encode an inter picture with a software DCT-based codec [27, 28] when full search is used. Thus, at most 10% of the operations are needed for one pattern mode, as each pattern mode processes one-fourth of the MB. As a result, the ASPVC-Global algorithm using five random starts and up to four pattern modes may require an extra 0.58 of one mode's ME&MC operations compared to the H.264, which is not a problem for real-time processing.
5. Experimental set up and simulation results
5.1. Integration with H.264 coder
To accommodate extra pattern modes in the H.264 video coding standard for testing, we need to modify its bitstream structure and Lagrangian multiplier. For the inclusion of the pattern mode, we change the header information for the MB type, the pattern identification code, and the shape of the patterns. Inclusion of the pattern mode also demands modification of the Lagrangian multiplier, as the pattern mode is biased toward bits rather than distortion.
The H.264 recommendation document [3] provides binarization for MB and sub-MB types in P and SP slices. Experimental results show that in most cases the 8 × 8 mode is less frequent than the larger modes. Thus, we use the first part of the MB type header ('001') to signal this group and then assign variable-length codes to the pattern mode and the 8 × 8, 8 × 4, 4 × 8, and 4 × 4 modes. Based on the frequency of each MB type, we assigned the pattern mode, 8 × 8, 8 × 4, 4 × 8, and 4 × 4 the codes '0', '10', '111', '1100', and '1101', respectively. After the MB type header, we send the pattern type using fixed-length codes of log_{2}(number of pattern templates) bits; for example, with eight patterns in a codebook, we use 3 bits for the pattern code, which identifies the particular pattern. At the beginning of a GOP, we transmit the codebook if necessary, using one bit to indicate whether a new codebook is being transmitted.
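As an illustration of the header coding just described, the following sketch emits the MB-type code followed by a fixed-length pattern index. The VLC table is taken from the text; the function name and the bit-string representation are assumptions.

```python
import math

# VLC table for the modified MB-type group described in the text.
MB_TYPE_VLC = {
    "pattern": "0",
    "8x8": "10",
    "8x4": "111",
    "4x8": "1100",
    "4x4": "1101",
}

def encode_mb_header(mode, pattern_index=None, codebook_size=8):
    """Emit the MB-type code; when the pattern mode is chosen, append
    a fixed-length pattern index of ceil(log2(codebook_size)) bits."""
    bits = MB_TYPE_VLC[mode]
    if mode == "pattern":
        n = max(1, math.ceil(math.log2(codebook_size)))
        bits += format(pattern_index, f"0{n}b")
    return bits

# Pattern 5 from an 8-entry codebook: '0' followed by '101'.
print(encode_mb_header("pattern", 5))
```

Note that the five codes form a prefix-free set, so the decoder can parse the MB type unambiguously before reading the fixed-length pattern index.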
We also revisit the Lagrangian multiplier after embedding the new pattern modes in the H.264 coder. As mentioned earlier, a pattern mode yields fewer bits and sometimes higher distortion compared to the standard H.264 modes. To be fair to the other modes, the value of the multiplier is reduced to λ = 0.4 × 2^{(QP-12)/3}. The experimental results on the Lagrangian multiplier and rate-distortion performance justify this new value. As the pattern modes require fewer bits than the 16 × 16 mode, the reduced λ gives bits less weight relative to distortion in the minimization of the Lagrangian cost function. We have also observed that, for a given λ, the generated QP is slightly larger for high-motion sequences than for smooth-motion sequences.
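The reduced multiplier and its role in the mode decision can be sketched as follows. The value λ = 0.4 × 2^{(QP-12)/3} is from the text; the comparison scale 0.85 (the value commonly used in the H.264 reference model) and the simple cost form J = D + λR are standard but stated here as assumptions.

```python
def lagrange_multiplier(qp, scale=0.4):
    """Reduced multiplier for the pattern-extended coder:
    lambda = 0.4 * 2^((QP - 12) / 3). The H.264 reference model
    commonly uses scale ~0.85 (assumption for comparison)."""
    return scale * 2 ** ((qp - 12) / 3)

def rd_cost(distortion, rate_bits, qp):
    """Lagrangian mode-decision cost J = D + lambda * R."""
    return distortion + lagrange_multiplier(qp) * rate_bits

# At QP = 28, the reduced lambda weights bits less than the standard one,
# so a mode that saves bits (as the pattern mode does) is penalized less.
print(lagrange_multiplier(28), lagrange_multiplier(28, scale=0.85))
```

With the smaller λ, two modes with equal distortion but different rates differ less in cost, which matches the stated intent of shifting the trade-off toward distortion.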
5.2. Experiments and results
In this paper, experimental results are presented using nine standard video sequences with a wide range of motion (smooth to high) and resolutions (QCIF to 4CIF) [26]. Among them, three (Miss America, Foreman, and Table Tennis) are QCIF (176 × 144), one (Football) is SIF (352 × 240), two (Paris and Silent) are CIF (352 × 288), and two (Susie and Popple) are 4CIF (720 × 576). Full-search ME with a search range of 15 and fractional-pel accuracy has been employed. We have selected a number of existing techniques to compare against the proposed one: the H.264 (the state-of-the-art video coding standard), the ASPVC-Local [16] (the latest block partitioning coding technique with arbitrarily shaped patterns), the IBS [12] (the latest block partitioning video coding technique), and the PVC [15] (the latest block partitioning technique using predefined patterns).
Figure 10 shows some decoded frames for visual comparison among the H.264, the IBS [12], the ASPVC-Local [16], the PVC [15], and the proposed technique. The 21st frame of the Silent sequence is shown as an example. The frames are encoded using 0.171, 0.171, 0.160, 0.136, and 0.136 bits per pixel (bpp), resulting in 32.77, 32.77, 32.75, 34.57, and 35.07 dB in Y-PSNR, respectively. Better visual quality can be observed in the frame decoded by the proposed technique, especially around the fingers. Besides the best PSNR result, subjective viewing has also confirmed the quality improvement: in viewing tests with 10 people, the video decoded by the proposed scheme was rated best in subjective quality. This is because the proposed method performs well in the pattern-covered moving areas and saves bits on partially skipped blocks (i.e., it exploits more of the intra-block temporal redundancy) compared to the other methods. Thus, the quality of the moving areas (i.e., the areas comprising objects) is better in the proposed method.
Table 1 shows the rate-distortion performance at a fixed bit rate for different algorithms and video sequences. The table reveals that the proposed algorithm outperforms the relevant existing algorithms, namely the H.264, the IBS [12], the ASPVC-Local [16], and the PVC [15], by 2.2, 2.0, 1.5, and 0.5 dB, respectively.
Figure 11 shows the overall rate-distortion performance over a wide range of bit rates for different types of video sequences (in terms of motion and resolution) using the H.264, the IBS [12], the ASPVC-Local [16], the PVC [15], and the proposed technique. In all cases, the proposed technique outperforms the state-of-the-art techniques; in particular, it outperforms the most recent PVC technique [15] by at least 0.5 dB for almost all video sequences over a wide range of bit rates. The proposed technique performs better due to the global optimization, the inclusion of all MBs in multiple pattern generation and pattern modes, and the extra bits spent in the pattern mode.
The proposed technique, like other pattern-based video coding, may not significantly outperform the H.264 at high bit rates, as the number of MBs encoded in the pattern mode may diminish due to the dominance of the smaller variable-block-size modes over the pattern mode. It may also fail for video sequences with extremely high motion, because less intra-block temporal redundancy is then available in the MBs. In summary, the proposed technique is, by the nature of its theoretical grounding, best suited to low bit rates, and it has been demonstrated above that its objectives have been achieved.
6. Conclusions
In this paper, we have proposed an efficient video coding technique for low bit rates using arbitrarily shaped block partitions in a globally optimal perspective. The proposed scheme uses a content-based pattern generation strategy in the globally optimal perspective, based upon multiple pattern modes. A Lagrangian multiplier has been derived to embed the pattern mode into the H.264. We have verified the effectiveness of the proposed technique by comparing it with other contemporary and relevant algorithms. The experimental results show that the new scheme improves video quality by 0.5 and 1.5 dB compared to the latest existing pattern-based video coding and the H.264 standard, respectively.
Authors' information
Additional email address for Professor Paul: mpaul@csu.edu.au
Abbreviations
 ASPVC:

arbitrarily shaped pattern-based video coding
 BPP:

bits per pixel
 CPG:

content-based pattern generation
 CRMB:

candidate region-active macroblock
 GOP:

group of pictures
 IBS:

implicit block segmentation
 ITR:

intra-block temporal redundancy
 MB:

macroblock
 MC:

motion compensation
 ME:

motion estimation
 NP:

non-polynomial
 OCPG:

optimal content-based pattern generation
 PVC:

pattern-based video coding
 QP:

quantization parameter.
References
 1.
ITU-T Recommendation H.263: Video coding for low bit rate communication, version 2, 1998.
 2.
ISO/IEC 13818, MPEG-2 International Standard, 1995.
 3.
ITU-T Rec. H.264/ISO/IEC 14496-10 AVC, Joint Video Team (JVT) of ISO MPEG and ITU-T VCEG, JVT-G050, 2003.
 4.
Paul M, Murshed MM: Superior VLBR video coding using pattern templates for moving objects instead of variable-block size in H.264. In the 7th IEEE International Conference on Signal Processing (ICSP'04). Beijing, China; 2004:717-720.
 5.
Li P, Lin W, Yang XK: Analysis of H.264/AVC and an associated rate control scheme. J Electron Imaging 2008, 17(4):043023. 10.1117/1.3036181
 6.
Chen S, Sun Q, Wu X, Yu L: L-shaped segmentations in motion-compensated prediction of H.264. IEEE International Symposium on Circuits and Systems (ISCAS'08) 2008.
 7.
Hung EM, de Queiroz RL, Mukherjee D: On macroblock partition for motion compensation. IEEE International Conference on Image Processing (ICIP'06) 2006, 1697-1700.
 8.
Divorra Escoda O, Yin P, Dai C, Li X: Geometry-adaptive block partitioning for video coding. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'07) 2007, I-657-I-660.
 9.
Divorra Escoda O, Yin P, Gomila C: Hierarchical B-frame results on geometry-adaptive block partitioning. In VCEG-AH16 Proposal, ITU/SG16/Q6/VCEG. Antalya, Turkey; 2008.
 10.
Fukuhara T, Asai K, Murakami T: Very low bit-rate video coding with block partitioning and adaptive selection of two time-differential frame memories. IEEE Trans Circ Syst Video Technol 1997, 7:212-220.
 11.
Chen J, Lee S, Lee KH, Han WJ: Object boundary based motion partition for video coding. Picture Coding Symposium 2007.
 12.
Kim JH, Ortega A, Yin P, Pandit P, Gomila C: Motion compensation based on implicit block segmentation. IEEE International Conference on Image Processing (ICIP'08) 2008.
 13.
Wong KW, Lam KM, Siu WC: An efficient low bit-rate video-coding algorithm focusing on moving regions. IEEE Trans Circ Syst Video Technol 2001, 11(10):1128-1134. 10.1109/76.954499
 14.
Paul M, Murshed M, Dooley L: A real-time pattern selection algorithm for very low bit-rate video coding using relevance and similarity metrics. IEEE Trans Circ Syst Video Technol 2005, 15(6):753-761.
 15.
Paul M, Murshed M: Video coding focusing on block partitioning and occlusions. IEEE Trans Image Process 2010, 19(3):691-701.
 16.
Paul M, Murshed M: An optimal content-based pattern generation algorithm. IEEE Signal Process Lett 2007, 14(12):904-907.
 17.
Maragos P: Tutorial on advances in morphological image processing and analysis. Opt Eng 1987, 26(7):623-632.
 18.
Wiegand T, Schwarz H, Joch A, Kossentini F: Rate-constrained coder control and comparison of video coding standards. IEEE Trans Circ Syst Video Technol 2003, 13(7):688-702. 10.1109/TCSVT.2003.815168
 19.
Sullivan GJ, Wiegand T: Rate-distortion optimization for video compression. IEEE Signal Process Mag 1998, 15:74-90. 10.1109/79.733497
 20.
Wiegand T, Girod B: Lagrange multiplier selection in hybrid video coder control. IEEE International Conference on Image Processing (ICIP'01) 2001, 542-545.
 21.
Papadimitriou CH, Steiglitz K: Combinatorial Optimization: Algorithms and Complexity. Prentice-Hall; 1982.
 22.
Lin S, Kernighan BW: An effective heuristic procedure for the traveling-salesman problem. Oper Res 1973, 21:498-516. 10.1287/opre.21.2.498
 23.
MacQueen JB: Some methods for classification and analysis of multivariate observations. In Proceedings of the 5th Berkeley Symposium on Mathematical Statistics and Probability. Volume 1. University of California Press; 1967:281-297.
 24.
Dunn JC: A fuzzy relative of the ISODATA process and its use in detecting compact well-separated clusters. J Cybern 1973, 3:32-57. 10.1080/01969727308546046
 25.
Bezdek JC: Pattern Recognition with Fuzzy Objective Function Algorithms. Plenum Press, New York; 1981.
 26.
Richardson IEG: H.264 and MPEG-4 Video Compression. Wiley; 2003.
 27.
Shanableh T, Ghanbari M: Heterogeneous video transcoding to lower spatio-temporal resolutions and different encoding formats. IEEE Trans Multimedia 2000, 2(2):101-110. 10.1109/6046.845014
 28.
Paul M, Lin W, Lau CT, Lee BS: Direct intermode selection for H.264 video coding using phase correlation. IEEE Trans Image Process 2011, 20(2):461-473.
Additional information
Competing interests
A significant portion of this research was done when I was a PhD student and research fellow at Monash University under the supervision of Manzur Murshed. I wrote this paper after leaving Monash University, and I submitted and revised it while a Lecturer at Charles Sturt University. The article processing fee was provided by Charles Sturt University.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Paul, M., Murshed, M. Video coding using arbitrarily shaped block partitions in globally optimal perspective. EURASIP J. Adv. Signal Process. 2011, 16 (2011). https://doi.org/10.1186/1687-6180-2011-16
Keywords
 video coding
 block partitioning
 H.264
 motion estimation
 low bitrate coding
 occlusion