An efficient interpolation filter VLSI architecture for HEVC standard
EURASIP Journal on Advances in Signal Processing volume 2015, Article number: 95 (2015)
Abstract
The next-generation video coding standard of High-Efficiency Video Coding (HEVC) is especially efficient for coding high-resolution video such as 8K ultra-high-definition (UHD) video. Fractional motion estimation in HEVC presents a significant challenge in clock latency and area cost, as it consumes more than 40 % of the total encoding time and thus results in high computational complexity. With the aim of supporting 8K-UHD video applications, an efficient interpolation filter VLSI architecture for HEVC is proposed in this paper. Firstly, a new interpolation filter algorithm based on the 8-pixel interpolation unit is proposed. It can save 19.7 % processing time on average with acceptable coding quality degradation. Based on the proposed algorithm, an efficient interpolation filter VLSI architecture, composed of a reused interpolation data path, an efficient memory organization, and a reconfigurable pipeline interpolation filter engine, is presented to reduce the hardware implementation area and achieve high throughput. The final VLSI implementation requires only 37.2k gates in a standard 90-nm CMOS technology at an operating frequency of 240 MHz. The proposed architecture can be reused for either half-pixel interpolation or quarter-pixel interpolation, which reduces the area cost by about 131,040 bits of RAM. The processing latency of our proposed VLSI architecture can support the real-time processing of 4:2:0 format 7680 × 4320@78fps video sequences.
1 Introduction
The Joint Collaborative Team on Video Coding (JCT-VC) is currently developing the next-generation video coding standard, called High-Efficiency Video Coding (HEVC) [1, 2]. It provides a significant rate-distortion improvement over its predecessor H.264/AVC and can save 40–50 % bit rate compared to H.264/AVC, especially for 4K (3840 × 2160)/8K (7680 × 4320) ultra-high-definition (UHD) video applications [3]. A number of new algorithmic tools have been proposed, covering many aspects of video compression technology, such as larger coding units and more complex prediction schemes.
Motion compensation (MC) is a key factor for efficient video compression. Compensation for motion with fractional-pel accuracy requires interpolation of reference pixels. Therefore, in order to increase the performance of integer pixel motion estimation, sub-pixel (i.e., half and quarter) accurate variable block size motion estimation is applied in both H.264/AVC and HEVC. The H.264/AVC standard uses six-tap finite impulse response (FIR) luma filtering at half-pixel positions followed by a linear interpolation at quarter-pixel positions. Chroma samples are computed by the weighted interpolation of the four closest integer pixel samples. In the HEVC standard, three different eight-tap or seven-tap FIR filters are used for the luma interpolation of half-pixel and quarter-pixel positions, respectively. Chroma samples are computed using four-tap filters. Sub-pixel interpolation is one of the most computationally intensive parts of the HEVC video encoder and decoder. In the high-efficiency and low-complexity configurations of the HEVC decoder, 37 and 50 % of the decoder complexity, respectively, is caused by sub-pixel interpolation on average [4]. On the other hand, compared with the six-tap filters used in the H.264/AVC standard, the seven-tap and eight-tap filters cost more area in hardware implementation and occupy 37~50 % of the total complexity for DRAM access and filtering. Therefore, it is necessary to design a dedicated hardware architecture for the MC interpolation filter to realize real-time processing of high-resolution videos.
Some high-throughput interpolators have been proposed for H.264/AVC in the literature [5–10]. Usually, they are embedded in the fractional motion estimation pipeline stage that follows the integer-pel motion estimation. Their scheduling assumes two successive steps for half and quarter interpolations. A two-step approach is natural in terms of the specification of quarter-pel computations, which refer to the results of half-pel computations. This data flow cannot be applied directly in HEVC since the quarter-pel samples are computed using separate filters. In particular, more filters are needed in the second step. Furthermore, the hardware cost increases due to the larger number of filter taps, and thus, much higher throughputs are required (i.e., more partitioning modes).
There have been many previous works focusing on designing efficient architectures for HEVC MC interpolation [11–18]. Huang proposed a high-throughput interpolation filter architecture with a prediction unit (PU)-adaptive filtering flow and a unified filter combining the eight-tap luma and four-tap chroma filters [11], but its hardware area is larger than that of the design proposed in this paper. In [12], a dedicated hardware accelerator for interpolation was presented. Although it could read 8 input samples and produce 64 output samples at each clock cycle, its area cost was huge. An efficient VLSI design composed of a reconfigurable filter, an optimized pipeline engine organization, and a filter reuse scheme for HEVC interpolation was proposed in [13]. This hardware is slower than the architecture proposed in this paper because it has restricted reconfigurability for filter data paths. In [14], a simplified fractional motion estimation (FME) architecture for field-programmable gate arrays (FPGAs) is presented that processes only 8 × 8-sized blocks at the cost of a bit rate increase of 13 %. In [15], reconfigurable acceleration engines were developed in the interpolation filter hardware architecture to adapt to different filter types. In [16], a low-energy HEVC sub-pixel interpolation hardware for all PU sizes was proposed, in which the Hcub multiplierless constant multiplication algorithm was used. However, the focus of [14–16] is on developing FPGA-based reconfigurable hardware architectures, and block random access memories (BRAMs) are usually embedded in the FPGA platform.
To overcome the obstacles of the previous work, we proposed a fast interpolation filter algorithm and the corresponding hardware architecture in [18], which can save encoding time and reduce the computational complexity of fractional motion estimation in HEVC. In the algorithm aspect, we speed up the encoding process by skipping all the 4 × 8, 4 × 16, and 12 × 16 prediction units in the queue. Based on the algorithm, we designed an interpolation filter VLSI architecture with a reconfigurable configuration and cell block reuse to reduce the hardware implementation area.
In this paper, we extend our earlier work [18] in three ways. First, on the basis of our original algorithm, we skip the 4 × 8, 4 × 16, 8 × 4, 16 × 4, 16 × 12, and 12 × 16 sub-PU blocks in the interpolation process instead of only the 4 × 8, 4 × 16, and 12 × 16 PUs, which further saves encoding time and a large hardware implementation area by skipping the 4× sizes with acceptable coding quality degradation. Second, an efficient memory organization method is proposed to reduce the data access of SRAM and save power in the VLSI architecture. Third, a five-step pipeline interpolation filter engine is proposed to shorten the critical path of the filter and improve the working speed.
Another obvious limitation of our earlier work [18] and the above progress is that they only target video applications with resolutions up to HD or 4K. For the higher throughput of 8K-UHD 7680 × 4320, a more efficient hardware architecture is desirable. Although the power-efficient FME VLSI architecture proposed in [17] can realize 8K-UHD video real-time encoding, its focus is on the search pattern and hardware architecture of the fractional search module. With the aim of supporting 8K-UHD video applications, an efficient interpolation filter VLSI architecture is proposed in this paper.
The main contributions of this paper are summarized as follows:

(1)
A fast and implementation-friendly interpolation algorithm is proposed, which skips the interpolation process of the 4 × 8, 4 × 16, 8 × 4, 16 × 4, 16 × 12, and 12 × 16 sub-PU blocks to reduce the encoding time and hardware complexity.

(2)
A reused three-level interpolation filter architecture is adopted for the half-pixel and quarter-pixel interpolations, which avoids storing the intermediate results and thus reduces the hardware cost.

(3)
An efficient memory organization method is proposed to reduce the data access of SRAM and save power in the VLSI architecture.

(4)
A five-step pipeline interpolation filter engine is proposed. It shortens the critical path of the filter and improves the working speed.

(5)
A reconfigurable interpolation unit is developed, in which two types of filters can be carried out with the same hardware architecture by only reversing the order of the input reference pixels. As a result, the proposed reconfigurable filter reduces the area of the whole architecture.
The rest of this paper is organized as follows. Section 2 presents an overview of the interpolation filter algorithm in the HEVC test model (HM). Section 3 describes the improved fast interpolation filter algorithm. The proposed efficient interpolation filter VLSI architecture is presented in Section 4 in detail. The implementation results are analyzed in Section 5. Finally, Section 6 concludes this paper.
2 Overview of interpolation algorithm
In the HEVC standard, three different eight-tap and seven-tap interpolation FIR filters are used for the half-pixel and quarter-pixel interpolations. These three FIR filters, i.e., type A, type B, and type C, are shown in (1), (2), and (3), respectively.
Integer pixels (A_{x,y}), half pixels (b_{x,y}, h_{x,y}, j_{x,y}), and quarter pixels (a_{x,y}, c_{x,y}, d_{x,y}, e_{x,y}, f_{x,y}, g_{x,y}, i_{x,y}, k_{x,y}, p_{x,y}, q_{x,y}, r_{x,y}, n_{x,y}) in a PU are shown in Fig. 1. The half pixels are interpolated from the nearest integer pixels in either the horizontal or the vertical direction. The quarter pixels are interpolated from the nearest half pixels in the horizontal and vertical directions, respectively, using the type A, type B, or type C filter. One interpolation filter is chosen according to which fractional pixel is to be computed. Since a quarter-pixel point lies close to an integer pixel, a seven-tap interpolation filter is sufficient, but for the more distant half-pixel point, an eight-tap filter is required.
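For reference, the luma filter coefficients defined by the HEVC standard for (1)–(3) can be written down directly; the sketch below uses a plain >> 6 normalization and a flat-signal check for illustration only, since the standard additionally applies rounding offsets and clipping that are omitted here.

```python
# HEVC luma interpolation filter taps (type A = 1/4-pel, type B = 1/2-pel,
# type C = 3/4-pel). Each coefficient set sums to 64, hence the >> 6 scaling.
FILTER_A = (-1, 4, -10, 58, 17, -5, 1)       # seven-tap, quarter-pel
FILTER_B = (-1, 4, -11, 40, 40, -11, 4, -1)  # eight-tap, half-pel
FILTER_C = tuple(reversed(FILTER_A))         # seven-tap, mirror of type A

def filter_1d(samples, taps):
    """Apply one FIR filter to a window of integer samples (one output pixel)."""
    assert len(samples) == len(taps)
    return sum(s * t for s, t in zip(samples, taps)) >> 6

# A flat row of value 100 must interpolate back to 100 at every sub-pel position.
print(filter_1d([100] * 8, FILTER_B))  # half-pel position b
print(filter_1d([100] * 7, FILTER_A))  # quarter-pel position a
```

The mirror relation between FILTER_A and FILTER_C is what the reconfigurable filter of Section 4.3.2 exploits by reversing the input order.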
For the half-pixel interpolation, b_{0,0} can be calculated by applying (2) to A_{0,0} in the horizontal direction. Then, h_{0,0} and j_{0,0} can be calculated from A_{0,0} and b_{0,0}, respectively, in the vertical direction.
After that, the motion vector (MV) can be obtained by using the SATD algorithm [1] to evaluate the half-pixel points. For the quarter-pixel interpolation, a_{0,0} and c_{0,0} can be calculated by applying (1) and (3) to A_{0,0} in the horizontal direction. Then, e_{0,0}, p_{0,0}, g_{0,0}, and r_{0,0} can be calculated from a_{0,0} and c_{0,0}, respectively, in the vertical direction, while the other points are filtered according to the MV.
If the horizontal component of the MV is equal to zero, d_{0,0} and n_{0,0} can be calculated by applying (1) and (3) to A_{0,0}; otherwise, f_{0,0} and q_{0,0} are calculated by applying (1) and (3) to b_{0,0}. If the vertical component of the MV is equal to zero, a_{0,0} and c_{0,0} are used directly; otherwise, i_{0,0} and k_{0,0} are calculated by applying (2) to a_{0,0} and c_{0,0}.
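The MV-dependent selection above amounts to a lookup from the fractional MV components to one of the sub-pel positions of Fig. 1. A minimal sketch, where the table follows the paper's position labels and the function name is ours:

```python
# Fractional MV components (in quarter-pel units) -> sub-pel position of Fig. 1.
# Row 0 holds A/a/b/c, row 1 d..g, row 2 h..k, row 3 n/p/q/r.
POSITIONS = {
    (0, 0): 'A', (1, 0): 'a', (2, 0): 'b', (3, 0): 'c',
    (0, 1): 'd', (1, 1): 'e', (2, 1): 'f', (3, 1): 'g',
    (0, 2): 'h', (1, 2): 'i', (2, 2): 'j', (3, 2): 'k',
    (0, 3): 'n', (1, 3): 'p', (2, 3): 'q', (3, 3): 'r',
}

def subpel_position(mv_x, mv_y):
    """Map the fractional parts of a quarter-pel MV to a Fig. 1 label."""
    return POSITIONS[(mv_x & 3, mv_y & 3)]

print(subpel_position(2, 0))  # b: half-pel, horizontal only
print(subpel_position(0, 1))  # d: horizontal MV component is zero
print(subpel_position(2, 1))  # f: filtered from the half pixel b
```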
For 8 × 8 sub-block predictions, the reference pixel values of a 16 × 16 prediction block are required in the worst-case scenario. Compared to the six-tap interpolation filter in H.264/AVC, the interpolation filter in HEVC costs a larger area. Therefore, it is very important to design an efficient luma interpolation VLSI architecture to realize real-time video coding and to reduce the implementation area.
3 The fast interpolation filter algorithm
3.1 Fast interpolation filter algorithm
As in H.264/AVC, mode decisions with motion estimation (ME) remain among the most time-consuming computations in HEVC. In the initial HEVC design, there are four different possible partition modes for inter prediction: two square partition modes (2N × 2N and N × N) and two symmetric motion partition (SMP) modes (2N × N and N × 2N). As a complement to the square-shaped or non-square symmetrically partitioned prediction blocks, asymmetric motion partitioning (AMP) was introduced in HEVC. AMP includes four partition modes, 2N × nU, 2N × nD, nR × 2N, and nL × 2N, which divide a coding block into two asymmetric prediction blocks along the horizontal or vertical direction. In HEVC, the size of the largest PU is 64 × 64, so it can be split into a total of 21 different sizes of sub-PUs, as shown in Table 1 (there is no 4 × 4 mode in the interpolation filtering operation). All possible prediction modes are traversed, and the one having the minimum rate-distortion (RD) cost is used.
In an inter-prediction mode decision, a full-search algorithm searches every possible block size and refines the results from integer-pel to quarter-pel resolution. Thus, a full-search algorithm guarantees the highest level of compression performance. However, the considerable computational complexity of the mode decision is critical for the encoding speed. Moreover, the main target resolution of HEVC is full HD (1920 × 1080) and beyond. For a hardware implementation of an HEVC encoder, the area cost will be very high if the hardware executes the interpolation filter for all possible prediction modes. In the VLSI architecture design, therefore, the interpolation filtering of larger blocks should be achieved by reusing the smallest unit.
According to the eight different possible splittings of PUs, a 4-pixel interpolation unit and an 8-pixel interpolation unit are used in the proposed architecture. The splitting modes for the 4-pixel interpolation unit include the 4 × 8, 4 × 16, 8 × 4, and 16 × 4 modes. Both the 4-pixel and the 8-pixel interpolation units are needed for the 12 × 16 and 16 × 12 modes, so the 4-pixel interpolation unit mentioned in this paper also covers the 12 × 16 and 16 × 12 modes. The 4-pixel interpolation unit is capable of processing every sub-block in a coding unit (CU), but it costs more hardware area and clock cycles. It is therefore very difficult for a 4-pixel interpolation unit to achieve real-time interpolation filtering with reasonable computing power. The statistics of the possible splittings of PUs in HM 13.0 with the low-delay configuration are shown in Table 2. The size ranges from 64× to 4×. The 64× size includes the 64 × 32 and 64 × 64 modes. The 32× size includes the 32 × 8, 32 × 16, 32 × 24, 24 × 32, and 32 × 32 modes. The 16× size includes the 16 × 8, 16 × 16, and 16 × 32 modes. The 8× size includes the 8 × 8, 8 × 16, and 8 × 32 modes. The 4× size includes the 4 × 8, 4 × 16, 8 × 4, 16 × 4, 12 × 16, and 16 × 12 modes.
It can be observed from Table 2 that the splitting modes for the 4-pixel interpolation unit (the 4× size) account for only about 3.52 % of all possible splittings of PUs, so skipping them causes only a small decrease in image quality. In addition, a data width of 4 + 8 = 12 pixels is needed to process the 4-pixel interpolation unit, and the valid data percentage is only 33 %. If we skip the 4× size and use the 8-pixel interpolation unit as the basic unit, the valid data account for 50 %. Therefore, a large area and many clock cycles can be saved in the hardware implementation by skipping the 4× size.
Therefore, we propose a fast and implementation-friendly interpolation algorithm in which the interpolation processing with the 4-pixel interpolation unit is skipped. Using the 8-pixel interpolation unit, we skip the interpolation operation of the 4× basic blocks (i.e., 4 × 8, 4 × 16, 8 × 4, 16 × 4, 16 × 12, and 12 × 16) in HEVC. Figure 2 illustrates the top-level block diagram of our proposed fast interpolation algorithm. Compared to the original algorithm, the interpolation process of the 4 × 8, 4 × 16, 8 × 4, 16 × 4, 16 × 12, and 12 × 16 sub-PU blocks is skipped.
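The skip rule reduces to a simple membership test on the sub-PU shape; a minimal sketch, with an illustrative function name:

```python
# Sub-PU shapes served only by the 4-pixel interpolation unit; the proposed
# fast algorithm excludes them from fractional interpolation.
SKIPPED_MODES = {(4, 8), (4, 16), (8, 4), (16, 4), (16, 12), (12, 16)}

def interpolate_pu(width, height):
    """Return True if the (width x height) sub-PU undergoes interpolation."""
    return (width, height) not in SKIPPED_MODES

print(interpolate_pu(8, 8))   # True: handled by the 8-pixel unit
print(interpolate_pu(4, 16))  # False: skipped by the fast algorithm
```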
Based on the proposed fast interpolation algorithm, we rearrange the classification of PU splitting modes, as shown in Table 3. According to the new splitting modes and the proposed fast algorithm, we can group the minimum 8× PU modes together to realize the interpolation of larger blocks in the VLSI design.
3.2 Experiment results
In order to evaluate the performance of the proposed fast interpolation algorithm, we implement it in the recent HEVC reference software (HM 13.0). A PC with an Intel Core™ i5-2400K CPU @ 3.1 GHz and 4 GB RAM is used in the experiments. We compare the proposed algorithm and our previous work [18] in a low-complexity configuration with the original algorithm in the HM 13.0 encoder. The performance of the proposed algorithm is shown in Table 4.
A set of experiments is carried out on IPPP frame sequences in which CABAC is used as the entropy coder. The proposed algorithm is evaluated with QPs 22, 27, 32, and 37 using 13 typical sequences recommended by the JCT-VC in five resolutions [19]. In terms of resolution, the video sequences are classified into five classes: class A (2560 × 1600), class B (1920 × 1080), class C (832 × 480), class D (416 × 240), and class E (1280 × 720). Coding efficiency is measured in terms of peak signal-to-noise ratio (PSNR) and bit rate. Computational complexity is measured by the consumed coding time. Bjøntegaard delta PSNR (BD-PSNR) (dB) and Bjøntegaard delta bit rate (BD-BR) (%) are used to represent the average PSNR and bit rate differences [20]. "Time save (%)" is used to represent the percentage change in the coding time of motion estimation. Positive and negative values represent increments and decrements, respectively.
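The Bjøntegaard metrics of [20] interpolate each RD curve by a cubic polynomial in the logarithm of the bit rate and integrate over the common rate range. A minimal pure-Python sketch of BD-PSNR, simplified to exact cubic interpolation through the four QP points (which coincides with a least-squares cubic fit when exactly four points are given):

```python
import math

def _cubic_fit(xs, ys):
    # Solve the 4x4 Vandermonde system by Gaussian elimination with pivoting
    a = [[x**3, x**2, x, 1.0, y] for x, y in zip(xs, ys)]
    n = 4
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        for r in range(col + 1, n):
            f = a[r][col] / a[col][col]
            for c in range(col, n + 1):
                a[r][c] -= f * a[col][c]
    coef = [0.0] * n
    for r in range(n - 1, -1, -1):
        coef[r] = (a[r][n] - sum(a[r][c] * coef[c] for c in range(r + 1, n))) / a[r][r]
    return coef  # [c3, c2, c1, c0]

def _integral(coef, lo, hi):
    c3, c2, c1, c0 = coef
    F = lambda x: c3 * x**4 / 4 + c2 * x**3 / 3 + c1 * x**2 / 2 + c0 * x
    return F(hi) - F(lo)

def bd_psnr(curve_ref, curve_test):
    """curve_*: four (bitrate, psnr) points; returns the average PSNR gap in dB."""
    fits, ranges = [], []
    for curve in (curve_ref, curve_test):
        xs = [math.log10(r) for r, _ in curve]
        ys = [p for _, p in curve]
        fits.append(_cubic_fit(xs, ys))
        ranges.append((min(xs), max(xs)))
    lo = max(r[0] for r in ranges)   # integrate only over the overlap
    hi = min(r[1] for r in ranges)
    avg = [_integral(f, lo, hi) / (hi - lo) for f in fits]
    return avg[1] - avg[0]
```

As a sanity check, a curve shifted up by a constant 0.5 dB against itself yields a BD-PSNR of exactly 0.5 dB.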
Table 4 shows the comparison of our previous work [18] and the proposed new interpolation algorithm against the original algorithm in the HM 13.0 encoder. For the five classes (A, B, C, D, and E) of test sequences, the proposed new algorithm greatly reduces the coding time of motion estimation. The proposed algorithm achieves about 19.7 % motion estimation time reduction, with a maximum of 21.9 % in "PeopleOnStreet (2560 × 1600)" and a minimum of 17.1 % in "Flowervase (832 × 480)". Our previous algorithm in [18] can only achieve about 10.0 % total coding time reduction, with a maximum of 11.1 % in "Racehorses (832 × 480)" and a minimum of 9.0 % in "Traffic (2560 × 1600)". Compared with our previous algorithm in [18], in which only the interpolation processes of the 4 × 8, 4 × 16, and 12 × 16 blocks are skipped, the new algorithm achieves a higher time saving: a further 8.1–11.5 % of the encoding time is saved, while the average coding efficiency loss is small, less than a 0.26 % BD-BR increase. For sequences with higher resolutions (such as 1920 × 1080 and 2560 × 1600), the proposed algorithm shows an impressive improvement, with more than 19.9 % of the coding time saved. Therefore, it is especially efficient for coding higher-resolution video. In [21], a simplified HEVC FME interpolation unit targeting a low-cost and high-throughput hardware design was proposed, which causes a bit rate loss of about 13.18 % and a quality loss of about 0.45 dB. Compared to the work in [21], our algorithm provides a significant improvement in terms of PSNR and bit rate.
The gain of our algorithm is high because unnecessary small PU size decisions have been skipped. For sequences with large smooth texture areas like "PeopleOnStreet", the algorithm saves more than 20 % of the coding time. On the other hand, the coding efficiency loss in Table 4 is acceptable: the average BD-BR increase is 1.38 %, with a minimum of 0.3 % in "BasketballDrive", and the average BD-PSNR decrease is 0.0594 dB, with a minimum of 0.0132 dB in "BQTerrace". The above experimental results indicate that the proposed new algorithm is efficient for all types of video sequences and outperforms our previous algorithm [18] for HEVC encoders. Therefore, the proposed algorithm can efficiently reduce the coding time while keeping nearly the same RD performance as the original HM encoder. Moreover, it also reduces the implementation area cost in the VLSI design.
4 The efficient interpolation filter VLSI architecture
4.1 The reused data path of interpolation
Fractional motion estimation performs a half-pixel refinement around the integer search positions, and then a quarter-pixel refinement is performed around the best half-pixel position. From the interpolation algorithm described in Section 2, it is known that the quarter-pixel interpolation processor needs to filter the results of the half-pixel horizontal interpolation in the vertical direction. To carry out the interpolation process of a 64 × 64 CU, 2 × (64 + 1) × (64 + 8) × (8 + 6) = 131,040 bits of RAM would be required in total, a huge area cost for a hardware implementation. In our design, a reused three-level architecture is proposed for the half-pixel and quarter-pixel interpolations. With this structure, we do not need to store the intermediate results and can thus reduce the area cost by about 131,040 bits of RAM.
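The 131,040-bit figure can be spelled out as arithmetic; the interpretation of each factor in the comments is our assumption, but the product matches the paper's expression.

```python
# Arithmetic behind the 131,040-bit intermediate RAM that the reused
# three-level data path avoids storing.
rounds = 2         # half-pel refinement round and quarter-pel refinement round
columns = 64 + 1   # filtered columns per row for a 64-wide CU
rows = 64 + 8      # rows, including the eight-tap filter margin
bits = 8 + 6       # intermediate sample width after one filter pass
print(rounds * columns * rows * bits)  # 131040
```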
Figure 3 shows the data path of the interpolation processor. There are three horizontal filters (H_F1/4, H_F2/4, H_F3/4 in level 1) and eight vertical filters (V_F1/4, V_F2/4, V_F3/4 in levels 2 and 3) in the proposed three-level reused architecture.
There are three horizontal filters in the first level (level 1). For the half-pixel interpolation shown in Fig. 3a, the horizontal filter H_F2/4 is enabled and the other two are disabled in the first round. The half pixel b_{0,0} (as seen in Fig. 1) is calculated by H_F2/4 from the integer pixel A_{0,0} in the horizontal direction.
For the quarter-pixel interpolation in the second round, shown in Fig. 3b, the filtered results of the pixels a_{0,0}, b_{0,0}, and c_{0,0} are calculated by the three horizontal filters in level 1 from the integer position A_{0,0}.
The second level (level 2) contains four vertical filters. They operate only in the second round of the quarter-pixel interpolation process. The quarter pixels e_{0,0} and p_{0,0} are interpolated by the filters V_F1/4 and V_F3/4, respectively, from the pixel a_{0,0} in the vertical direction. Similarly, the quarter pixels g_{0,0} and r_{0,0} are interpolated, respectively, by the filters V_F1/4 and V_F3/4 from the pixel c_{0,0} in the vertical direction.
The last level (level 3) also contains four vertical filters. The difference between the four vertical filters in level 2 and those in level 3 is that the data inputs of the vertical filters in level 3 are not fixed. The filtered results of the half pixels h_{0,0} and j_{0,0} are calculated by the two vertical filters V_F2/4 from the pixels A_{0,0} and b_{0,0} in the first round of the half-pixel interpolation process. During the second round, the quarter pixels i_{0,0} and k_{0,0} are interpolated by the same two vertical filters from the pixels a_{0,0} and c_{0,0} when the vertical component of the best half-pel MV is not equal to zero. The interpolated results of the quarter pixels d_{0,0} and n_{0,0} are calculated by the other two vertical filters, V_F1/4 and V_F3/4, from the integer pixel A_{0,0} when the horizontal component of the MV is equal to zero; otherwise, the quarter pixels f_{0,0} and q_{0,0} are interpolated by the same vertical filters V_F1/4 and V_F3/4 from the half pixel b_{0,0}.
From the above data path of the proposed interpolation filter architecture, it can be seen that all the horizontal and vertical filters used in the half-pixel interpolation can be reused in the quarter-pixel interpolation. Moreover, some interpolation filter units can be reused for different quarter-pixel positions. This reused architecture greatly reduces the area cost in hardware implementation.
4.2 Memory organization
In the VLSI design, an 8-pixel interpolation unit is applied to balance the processing time and the hardware efficiency. Because every PU can be split into multiple 8× blocks, the 8-pixel interpolation unit can deal with every sub-block in the processing unit of inter prediction.
Four extra pixels around every 8 × 8 block are used as the input pixels for the 8-tap interpolation filter. So the filter window is (4 + 8 + 4) × (4 + 8 + 4) = 16 × 16, and the width of the input data is 16 pixels. The scan order is vertical, and adjacent 8 × 8 blocks adopt similar operations to reuse the interpolation data and reduce the memory access.
The basic filter unit is the 8× block, which is reused many times for 8 × 8 blocks and larger PU blocks. The reference data inputs should be reasonably stored before the sub-pixel interpolation process to reduce the memory access, and SRAM is used to store the input reference pixels. The largest processing unit (LPU) is a 64 × 64 block, with four extra reference pixels around the processing unit, so the actual reference pixel matrix is 72 × 72. As the width of the processing unit ranges from 8 to 64, the 72 × 72 pixel matrix is stored in separate 9-pixel-wide segments, as shown in Fig. 4. The width of every SRAM word is 9 × 8 bits = 72 bits, and each word address corresponds to one line. Eight SRAMs are used to realize the storage of the 72 × 72 pixel matrix. Based on this organization, only SRAM0 and SRAM1 are enabled for an 8×-wide processing unit, while the others are disabled with no data access. Only when the width of the processing unit is 64 are all the SRAMs used to store and read the input reference pixels.
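The SRAM striping can be sketched as an address-mapping function; the exact mapping and names are our assumptions based on the description of Fig. 4.

```python
# The 72x72 reference matrix is striped across eight SRAMs; each word holds
# 9 pixels (72 bits) of one line, so a column index selects the bank.
PIXELS_PER_WORD = 9
NUM_SRAMS = 8

def sram_location(col, row):
    """Map a pixel coordinate in the 72x72 matrix to (bank, word address, offset)."""
    assert 0 <= col < PIXELS_PER_WORD * NUM_SRAMS
    return col // PIXELS_PER_WORD, row, col % PIXELS_PER_WORD

# An 8x block plus its 4-pixel margins spans 16 columns, which touch only
# banks 0 and 1, so SRAM2..SRAM7 can stay disabled with no data access.
banks = {sram_location(c, 0)[0] for c in range(16)}
print(sorted(banks))  # [0, 1]
```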
4.3 The reconfigurable interpolation filter architecture
4.3.1 The pipeline interpolation filter engine
According to the analysis in Section 3, the 8-pixel interpolation unit is chosen as the basic unit in the proposed interpolation processor. The proposed pipeline interpolator architecture is shown in Fig. 5, where the 8× block module is the basic reused block. This interpolator supports 8-pixel interpolation, which can adapt to most of the variable block sizes. One 16 × 16 block is split into two 8 × 16 blocks, and a 16 × 8 block is split into two 8 × 8 blocks. For the interpolation process of a 64 × 64 CU, the 8× block module can be reused eight times.
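The reuse count of the 8× module can be sketched as follows, under this sketch's assumption that one pass of the module covers an 8-pixel-wide vertical strip of the block:

```python
# Sketch of 8x-module reuse: a width x height block is processed as
# 8-pixel-wide strips, so the reuse count is width // 8.
def count_8x_passes(width, height):
    """Number of 8x-module invocations for a width x height block."""
    assert width % 8 == 0, "4x shapes are skipped by the fast algorithm"
    return width // 8

print(count_8x_passes(16, 16))  # 2: one 16 x 16 block = two 8 x 16 blocks
print(count_8x_passes(16, 8))   # 2: one 16 x 8 block = two 8 x 8 blocks
print(count_8x_passes(64, 64))  # 8: the module is reused eight times
```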
As shown in Fig. 5, h_f is the 8-tap horizontal interpolation filter and v_f is the 8-tap vertical interpolation filter. The h_f can support 8-pixel interpolation. There are nine 8-tap horizontal interpolation filters (h_f0~h_f8), and only eight filtered results among them are selected as the predicted outputs according to the distribution of half pixels around the integer pixels. After a horizontal interpolation filtering, the vertical interpolation filter reads the horizontal outputs. There are eight shift registers in the vertical interpolation filter, and the output data from the horizontal filter are stored in these registers sequentially. When the eight registers are filled with the predicted outputs from the horizontal interpolation filter, the vertical interpolation filter starts to work.
There are five steps in the operation of the interpolation filter pipeline.

Step 1:
The interpolation filter reads the reference integer pixels of the first line; as a result, there are 16 reference data inputs, numbered 0~15.

Step 2:
The horizontal interpolation filter h_f0 reads the integer pixels 0~7 of line 1, the filter h_f1 reads the integer pixels 1~8, and so on. These 16 pixels are interpolated by the corresponding horizontal interpolation filters.

Step 3:
The filtered data from the horizontal interpolation filter of line 1 are written into the registers of the vertical interpolation filter v_f. By repeating the same operations as in step 1 and step 2, the filtered data of the following lines are written into the registers.

Step 4:
When the registers of v_f are filled with eight pixels, the 8-tap vertical interpolation starts and the filtered results of line 1 are obtained.

Step 5:
While v_f executes filtering from line 1 to line 8, the input reference pixels of line 9 are interpolated by the horizontal interpolation filter h_f simultaneously. After the filtered data of line 9 are written into the registers of v_f and the filtered results of line 1 are released by the registers, the vertical interpolation filter starts to execute the filtering operation on the reference pixels from line 2 to line 9.
Following the five steps above, the 8× block interpolation engine performs the pipelined filtering operations, and once the pipeline is filled, a final interpolation result is obtained every clock cycle.
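The five steps can be modeled behaviorally in software. This sketch applies the half-pel (type B) taps in both directions (the j position) and uses a software line buffer in place of the eight shift registers:

```python
# Behavioral model of the five-step pipeline: each incoming line is
# horizontally filtered (steps 1-2), pushed into an 8-deep line register
# (step 3), and once eight filtered lines are present the vertical filter
# produces one output line per new input line (steps 4-5).
from collections import deque

HALF = (-1, 4, -11, 40, 40, -11, 4, -1)  # type B taps, sum 64

def hfilter(line):
    # one horizontal pass: the 8-tap window slides over the padded input line
    return [sum(c * line[i + k] for k, c in enumerate(HALF)) >> 6
            for i in range(len(line) - 7)]

def pipeline(lines):
    regs, out = deque(maxlen=8), []
    for line in lines:
        regs.append(hfilter(line))       # steps 1-3: read, h-filter, store
        if len(regs) == 8:               # steps 4-5: v-filter once full
            out.append([sum(c * v for c, v in zip(HALF, col)) >> 6
                        for col in zip(*regs)])
    return out

# 15 flat lines of value 50 -> 8 output lines, every sample still 50
result = pipeline([[50] * 15] * 15)
print(len(result), result[0][0])  # 8 50
```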
4.3.2 The reconfigurable interpolation unit
Table 5 shows the coefficients of the 8-tap filters. It can be observed that the coefficients of the A and C type filters are symmetric: one is the reverse of the other. Therefore, the interpolation of the A and C type filters can be carried out with the same hardware architecture by only reversing the order of the input reference pixels.
The interpolation performed by the seven-tap or eight-tap interpolation filter (h_f or v_f in Fig. 5) is implemented with shift, addition, and subtraction operations. Based on (1)–(3), a direct implementation needs 33 adders (12 adders for the A type, 9 for the B type, and 12 for the C type) and 14 shifters (5 shifters for the A type, 4 for the B type, and 5 for the C type) in hardware.
The proposed optimized architectures of the A and B type filters are shown in Fig. 6, where A, B, C, D, E, F, G, and H are the eight input reference pixels. The structures of the horizontal and vertical filters are identical. Compared to the above 33 adders and 14 shifters, the proposed architecture of the A and B type filters needs only 19 adders (10 for the A type and 9 for the B type) and 8 shifters (4 for the A type and 4 for the B type). Since only one of the three filter types is used at a time, the interpolation of the A and C type filters can be carried out with the same hardware architecture by only reversing the order of the input reference pixels. As a result, the proposed reconfigurable filter reduces the area of the whole architecture.
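A multiplier-free realization along these lines can be sketched with shifts and adds; the decompositions used here (40 = 32 + 8, 11 = 8 + 2 + 1, 58 = 64 − 8 + 2, 17 = 16 + 1, 10 = 8 + 2, 5 = 4 + 1) are one possible choice and need not match Fig. 6 exactly.

```python
# Type B (half-pel) filter: pair the symmetric taps first, then replace each
# constant multiplication by shifts and adds. Variable names are ours.
def type_b_filter(A, B, C, D, E, F, G, H):
    p1 = A + H    # taps with coefficient -1
    p4 = B + G    # coefficient  4 -> p4 << 2
    p11 = C + F   # coefficient -11 -> (p11 << 3) + (p11 << 1) + p11
    p40 = D + E   # coefficient 40 -> (p40 << 5) + (p40 << 3)
    acc = ((p40 << 5) + (p40 << 3)
           - ((p11 << 3) + (p11 << 1) + p11)
           + (p4 << 2) - p1)
    return acc >> 6

# Type A (quarter-pel) filter, coefficients (-1, 4, -10, 58, 17, -5, 1);
# type C is obtained by reversing the input order, so one data path serves both.
def type_a_filter(A, B, C, D, E, F, G):
    return (-A + (B << 2) - ((C << 3) + (C << 1))
            + (D << 6) - (D << 3) + (D << 1)
            + (E << 4) + E - ((F << 2) + F) + G) >> 6

print(type_b_filter(*[100] * 8))  # flat input stays 100
print(type_a_filter(*[100] * 7))  # flat input stays 100
```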
5 Implementation results
The proposed interpolation filter architecture is implemented in Verilog HDL and synthesized using the SMIC 90-nm cell library. Table 6 shows the implementation comparison between the proposed design, state-of-the-art designs, and our previous work [18]. When synthesized with the 90-nm CMOS standard library, the total gate count of this design is 37.2k, supporting real-time processing of 7680 × 4320@78fps (4:2:0 format) videos at a working clock speed of 240 MHz.
In terms of hardware resources, the proposed architecture reduces the area by about 18 % compared to the work in [11]. Although the work in [12] has eight times greater parallelism and can work at higher frequencies than the design in this paper, its amount of logic resources is also six times greater. The proposed architecture also allows the use of a reduced input buffer, so the memory cost can be reduced by 131,040 bits. Compared with the works in [13, 16], our design consumes more hardware due to the different targeted video specifications. Although the hardware cost in [16] is only 28.5k, memories are not included in the gate area. The reconfigurable hardware accelerator engines in [14, 15] are synthesized for FPGA devices and BRAMs are used, so it is difficult to make a fair hardware cost comparison with the works in [14, 15].
In terms of performance, the throughput of the proposed architecture is 13.4 pixels/cycle, which is almost 18 times that of the work in [13] with 0.73 pixel/cycle, 6 times that of the work in [11] with 2.58 pixels/cycle, and 2 times that of the work in [15] with 8.5 pixels/cycle. The hardware implementation can therefore better adapt to the HEVC standard with its larger LCU size. Consequently, the hardware implementation cost of our architecture is comparable to that of H.264/AVC designs.
At a 0.9 V power supply, the proposed hardware dissipates only 4.7 mW for 7680 × 4320@78fps video processing when running at 240 MHz. The power consumptions reported in [14, 15] are measured on an FPGA-based evaluation system, and the figure in [17] also includes the energy of the fractional search module, so a direct power comparison with them would not be fair. The power consumption of the interpolation hardware in [11–13] is not reported.
From Table 6, it can also be seen that only the proposed architecture and the work in [17] support 8K-UHD real-time video encoding. The hardware cost of the work in [17] is 1183k gates, which also includes the fractional search module, so a fair comparison of hardware resources is difficult. In terms of processing speed, the throughput of the proposed architecture is almost 2.5 times that of the work in [17] with 5.26 pixels/cycle. Our design delivers a maximum throughput of 2588 Mpixels/s for 7680 × 4320 78 frames/s video, whereas the work in [17] achieves only 995 Mpixels/s for 7680 × 4320 30 frames/s video coding. Compared to [17], our architecture thus provides a significant speed improvement.
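The quoted Mpixels/s figures follow directly from the luma sample rate of each video format. The sanity check below (a sketch assuming throughput counts luma samples only, which matches the numbers above) reproduces both values:

```python
def luma_throughput_mpixels(width, height, fps):
    """Luma samples processed per second, in Mpixels/s."""
    return width * height * fps / 1e6

# 8K-UHD at 78 fps (this work) vs. 30 fps (the work in [17])
ours = luma_throughput_mpixels(7680, 4320, 78)   # ~2588 Mpixels/s
ref17 = luma_throughput_mpixels(7680, 4320, 30)  # ~995 Mpixels/s
```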
Table 6 also compares the proposed architecture with our previous work in [18], which supports only 4K-UHD real-time video encoding at a frame rate of 47 fps, whereas the proposed architecture supports 8K-UHD real-time encoding at 78 fps. The proposed architecture also achieves about 42 % area reduction and 40 % power reduction compared to [18]. These speed and cost improvements come from both the new fast interpolation filter algorithm and the improved hardware efficiency.
6 Conclusions
In this paper, a high-performance VLSI architecture for luma interpolation in HEVC is proposed and implemented with 37.2k gates at an operating frequency of 240 MHz. It supports 8K-UHD (7680 × 4320)@78fps (4:2:0 format) real-time video processing. The proposed architecture can be reused for both half-pixel and quarter-pixel interpolation, which reduces the memory cost by about 131,040 bits of RAM. It achieves a high throughput for real-time encoding of ultra-high-resolution video with reduced hardware resources and is especially suitable for 8K-UHD real-time encoding.
References
GJ Sullivan, JR Ohm, WJ Han et al., Overview of the high efficiency video coding (HEVC) standard. IEEE Trans. Circuits Syst. Video Technol. 22(12), 1649–1668 (2012)
J Ohm, GJ Sullivan, High efficiency video coding: the next frontier in video compression [standards in a nutshell]. IEEE Signal Process. Mag. 30(1), 152–158 (2013)
JR Ohm, GJ Sullivan, H Schwarz, TK Tan, T Wiegand, Comparison of the coding efficiency of video coding standards—including high efficiency video coding (HEVC). IEEE Trans. Circuits Syst. Video Technol. 22(12), 1669–1684 (2012)
Y.J. Ahn, W.J. Han, D.G. Sim, Study of decoder complexity for HEVC and AVC standards based on tool-by-tool comparison. SPIE Appl. Digital Image Process. XXXV 8499, 84990X-1–84990X-10 (2012)
TC Chen, SY Chien, YW Huang, CH Tsai, CY Chen, TW Chen, LG Chen, Analysis and architecture design of an HDTV720p 30 frames/s H.264/AVC encoder. IEEE Trans. Circuits Syst. Video Technol. 16(6), 673–688 (2006)
Z Liu, Y Song, M Shao, S Li, L Li, S Ishiwata, M Nakagawa, S Goto, T Ikenaga, HDTV 1080p H.264/AVC encoder chip design and performance analysis. IEEE J. Solid-State Circuits 44(2), 594–608 (2009)
C Yang, S. Goto, T. Ikenaga, in Proc. IEEE International Symposium on Circuits and Systems (ISCAS). High performance VLSI architecture of fractional motion estimation in H.264 for HDTV (IEEE, Kos, Greece, 2006)
S. Oktem, I. Hamzaoglu, in Proc. 10th Euromicro Conference on Digital System Design. An efficient hardware architecture for quarter-pixel accurate H.264 motion estimation (IEEE, Luebeck, Germany, 2007)
G Pastuszak, M Jakubowski, Adaptive computationally scalable motion estimation for the hardware H.264/AVC encoder. IEEE Trans. Circuits Syst. Video Technol. 23(5), 802–812 (2013)
D. Zhou, P. Liu, in Proc. IEEE International Symposium on Circuits and Systems. A hardware-efficient dual-standard VLSI architecture for MC interpolation in AVS and H.264 (IEEE, New Orleans, Louisiana, 2007)
ChaoTsung Huang, Chiraag Juvekar, Mehul Tikekar, Anantha P. Chandrakasan, in Proc. IEEE Conference on Visual Communications and Image Processing (VCIP). HEVC interpolation filter architecture for quad full HD decoding (IEEE, Kuching, Sarawak, 2013)
G. Pastuszak, M. Trochimiuk, in Proc. 16th Euromicro Conference on Digital System Design. Architecture design and efficiency evaluation for the high-throughput interpolation in the HEVC encoder (IEEE, Santander, Spain, 2013)
Z. Guo, D. Zhou, S. Goto, in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). An optimized MC interpolation architecture for HEVC (IEEE, Kyoto, Japan, 2012)
V. Afonso, H. Maich, L. Agostini, and D. Franco, in Proc. IEEE Lat. Amer. Symp. Circuits Syst. (LASCAS). Low cost and high throughput FME interpolation for the HEVC emerging video coding standard (IEEE, Cusco, Peru, 2013)
CM Cláudio, M Shafique, S Bampi, J Henkel, A reconfigurable hardware architecture for fractional pixel interpolation in high efficiency video coding. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 34(2), 238–251 (2015)
E. Kalali, I. Hamzaoglu, in Proc. IEEE International Conference on Image Processing (ICIP). A low energy HEVC sub-pixel interpolation hardware (IEEE, Paris, France, 2014)
G. He, D. Zhou, Y. Li, Z. Chen, T. Zhang, S. Goto, High-throughput power-efficient VLSI architecture of fractional motion estimation for ultra-HD HEVC video encoding. IEEE Trans. Very Large Scale Integr. VLSI Syst. (2015). doi:10.1109/TVLSI.2014.2386897
X Lian, W Zhou, Z Duan, R Li, in Proc. 2nd IEEE China Summit and International Conference on Signal and Information Processing (ChinaSIP). An efficient interpolation filter VLSI architecture for HEVC standard (IEEE, Xi’an, China, 2014)
F. Bossen, Common test conditions and software reference configurations, document JCTVC-H1100, ITU-T/ISO/IEC Joint Collaborative Team on Video Coding (JCT-VC) (ITU-T/ISO/IEC, San Jose, USA, 2012)
G. Bjontegaard, Calculation of average PSNR difference between RD-curves, document VCEG-M33 (ITU-T, Austin, USA, 2001)
V Afonso, H Maich, L Agostini, D Franco, in Proc. 2013 Data Compression Conference. Simplified HEVC FME interpolation unit targeting a low cost and high throughput hardware design (Snowbird, Utah, 2013)
Acknowledgements
This work was supported in part by the National Natural Science Foundation of China (60902101), New Century Excellent Talents in University of Ministry of Education of China (NCET110824), and Fundamental Research Funds for the Central Universities (3102014JCQ01057).
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Cite this article
Zhou, W., Zhou, X., Lian, X. et al. An efficient interpolation filter VLSI architecture for HEVC standard. EURASIP J. Adv. Signal Process. 2015, 95 (2015). https://doi.org/10.1186/s13634-015-0284-0