Robust depth enhancement and optimization based on advanced multilateral filters
- Ting-An Chang^{1},
- Yang-Ting Chou^{1} and
- Jar-Ferr Yang^{1}
https://doi.org/10.1186/s13634-017-0487-7
© The Author(s). 2017
Received: 12 December 2016
Accepted: 26 June 2017
Published: 10 July 2017
Abstract
Stereo matching between two spaced cameras and structured-light RGB-D cameras are the two common ways to capture a depth map, which conveys the per-pixel depth information of an image. However, mismatched and occluded pixels in the results prevent the depth and image information from being accurately matched. Such mismatched depth-image relations seriously degrade the performance of view synthesis in modern three-dimensional video applications. Therefore, effectively using the image and the depth map to enhance each other becomes increasingly important. In this paper, we propose an advanced multilateral filter (AMF), which draws on spatial, range, depth, and credibility information to achieve this mutual enhancement. The AMF can sharpen the image, suppress depth noise, fill depth holes, and sharpen depth edges simultaneously. Experimental results demonstrate that the proposed method provides superior performance, especially around object boundaries.
Keywords
Depth enhancement, Multilateral filter, Hole filling, Mold matching, Depth refinement

1 Introduction
In general, three-dimensional (3D) video is widely recognized as a visual media technique that enables viewers to perceive depth in a scene without special glasses. Driven by users who wish to experience extended visual sensations, developments in 3D video technologies have initiated the commercialization of 3D services in consumer products such as 3D TVs [1], tablet PCs, mobile devices, and computer gaming devices. At the same time, the multi-view video plus depth (MVD) format has emerged as a potent technique for 3D video applications [2, 3]. To produce virtual views at desired viewpoints with low processing costs, the MVD format uses depth-image-based rendering (DIBR) techniques [4–6]. The DIBR technique synthesizes images at the desired viewpoint by using the color image and its corresponding depth map, so the MVD format can be treated as an efficient data format for 3D video. The depth map is an image that represents the range information of the captured scene; it is important because it directly affects the quality of the synthesized images.
The acquisition of depth information can be categorized into two approaches: the indirect estimation approach based on stereo matching of two images taken at different locations and the direct measurement approach based on time-of-flight depth sensors. Stereo matching with visual computation estimates the depth map from two-view images [7–9]. However, its computational complexity is high, and its estimation tends to fail in texture-less and occluded regions. Recently, low-cost structured-light RGB-D cameras have been used to capture high-resolution color images and low-resolution depth maps [10]. Thus, depth map upsampling [11, 12] followed by enhancement [13] becomes an inevitable task because the quality of the DIBR process heavily depends on the accuracy of depth information. To improve the depth map of RGB-D cameras, the following problems should be solved. First, the boundary of an object in the depth map may not be well matched with that of its corresponding color image; the region near the object boundary is commonly referred to as the mismatched region. Second, holes with no depth information often occur in the depth map because the infrared (IR) light can be absorbed or obstructed by objects. Third, the depth map suffers from optical noise caused by multiple reflections or scattering of the IR light.
In general, the image usually has better quality but may not be well matched with the depth map. Thus, it is reasonable to assume that the depth map usually has much worse quality, with noisy, mismatched, and hole pixels. To overcome these problems, the joint bilateral filters (JBF) proposed in [14–16] use color and spatial similarity between corresponding pixels in the image to enhance the depth map. The iterative joint multilateral filtering (IJMF) suggested in [17] then achieves the best unsharp masking structure through parameter training. This iterative method not only enhances the sharpness of the image but also smooths the corresponding video pixel values; however, it requires a complex training process to obtain the parameters. To overcome the drawbacks of the IJMF method, the adaptive joint trilateral filter (AJTF) was proposed in [18], using differently designed patterns to test the differences between the image and its corresponding depth map. The depth map is then sharpened along object boundaries and made suitable for the practical DIBR process [19–21]. However, this method is easily affected by complex image texture, and it suffers from serious blocking effects in the depth map for highly textured objects.
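For concreteness, the following is a minimal sketch of the joint bilateral weighting idea behind JBF-style methods [14–16]: spatial closeness and color similarity in the guidance image jointly weight the depth samples, so depth edges follow image edges. It assumes a single-channel (grayscale) guidance image and generic Gaussian kernels; it illustrates the idea, not the cited authors' implementations.

```python
import numpy as np

def joint_bilateral_filter(depth, guide, radius=5, sigma_s=2.0, sigma_g=10.0):
    """Smooth `depth` with weights drawn from spatial distance and from
    color similarity in `guide` (the registered color image)."""
    h, w = depth.shape
    out = np.zeros((h, w), dtype=np.float64)
    # Precompute the spatial Gaussian kernel once.
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    spatial = np.exp(-(xx**2 + yy**2) / (2 * sigma_s**2))
    pad_d = np.pad(depth.astype(np.float64), radius, mode='edge')
    pad_g = np.pad(guide.astype(np.float64), radius, mode='edge')
    for y in range(h):
        for x in range(w):
            d_win = pad_d[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            g_win = pad_g[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            # The range weight is taken from the guidance image, not the depth.
            rng = np.exp(-(g_win - guide[y, x])**2 / (2 * sigma_g**2))
            wgt = spatial * rng
            out[y, x] = np.sum(wgt * d_win) / np.sum(wgt)
    return out
```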
In summary, the above methods cannot accurately enhance noisy depth maps when the color image and depth map are unmatched, which results in distortion of the synthesized 3D image. Therefore, in this paper, we propose an advanced multilateral filter (AMF) for effective depth enhancement. The AMF approach considers the similarities of spatial, range, depth, and credibility information and can successfully suppress noise, fill holes, and sharpen object edges simultaneously. The rest of this paper is organized as follows. In Section 2, the background and motivations are explained. In Section 3, the proposed advanced multilateral filter is addressed in detail. Comparisons of objective SSIM and PSNR performances and of subjective viewing quality are presented in Section 4. Finally, conclusions are drawn in Section 5.
2 Background and motivations
Generally, the source depth map is either generated by a fast stereo matching technique from subsampled stereo images or captured by an RGB-D camera at a lower resolution than the color image. In both cases, the source depth map usually has a lower resolution than the corresponding color image and contains many noisy pixels, including unknown pixels due to occlusions. Thus, the source depth map must first be upsampled before depth enhancement. In this paper, traditional bicubic interpolation [22] is applied to bring the resolution of the source depth map up to that of the corresponding color image. After upsampling, we assume that the original texture and the corresponding depth map, expressed by g(x, y) and d(x, y) respectively, have the same spatial resolution and contain some undesired noisy pixels in both the depth map and the image. Specifically, the foreground boundaries do not match the corresponding color image well, and jagged boundaries are produced in the interpolated depth map by the bicubic interpolation. If the depth is estimated by stereo matching algorithms, mismatched and occluded pixels exist due to singularity properties. On the other hand, we cannot generate the virtual image precisely by depth-image-based rendering (DIBR) if the image and its corresponding depth map cannot be matched successfully, owing to the noise and holes in the depth map. Hence, the enhancement of the image and its depth map becomes very important in 3D visualization.
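As a small illustration of this preprocessing step, the bicubic upsampling can be done with a standard library call. The paper only specifies bicubic interpolation [22]; the use of OpenCV and the variable names `d_lr` and `g` below are assumptions for the sketch.

```python
import cv2  # OpenCV; any bicubic resampler would serve equally well

# d_lr: hypothetical low-resolution source depth map (numpy array)
# g:    hypothetical full-resolution color image
h, w = g.shape[:2]
d = cv2.resize(d_lr, (w, h), interpolation=cv2.INTER_CUBIC)  # d now matches g
```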
The upsampled depth map d(x, y) is thus degraded by three major factors: noisy, blurred, and missing pixels [23]. The noisy pixels are caused by the distortion of the capture devices, resulting in unmatched depth; the blurred pixels are produced by interpolation filters, mostly along object boundaries; and the missing pixels mainly originate from object occlusions and concave objects. Improving the quality of the upsampled depth map d(x, y) therefore becomes an important task in 3D visualization applications. In [23], the traditional enhancement processing generally contains two stages, noise suppression and image-depth enhancement. However, these stages incur high computational complexity and long computation times.
Therefore, a robust filter that solves both the existing hole and flatness problems is needed to improve performance and reduce computational complexity at the same time. In this paper, we propose a new algorithm, called the advanced multilateral filter (AMF), to jointly fill the holes and enhance the sharpness of the upsampled depth map d(x, y) while also sharpening the image g(x, y). Moreover, the parameters of the AMF can be determined according to the accuracy of the depth map and image. The proposed AMF does not require complicated parameter training and is applicable to practical DIBR applications, which require robustness against any deformation of images or depth maps. In the AMF process, the image and the corresponding depth map are first classified based on designed binary molds. Excluding the hole regions, the image and the corresponding depth map are then smoothed to reduce the noisy and blurred pixels; this smooth enhancement suppresses high-frequency noise. Next, the holes are filled from surrounding neighbors. Finally, after the AMF, the rolling guidance refinement (RCR) method is used to sharpen the object edges.
3 Proposed AMF algorithm
3.1 Advanced multilateral filter
In (10), the horizontal and vertical direction gradients, \( G_x(x, y) \) and \( G_y(x, y) \), are computed with Sobel operators [24]. According to (9), the credibility map determines whether a pixel lies in a smooth region or an edge region. If the corresponding candidate of d(i, j) is in an edge region, c(i, j) = 1, and the corresponding candidate depth d(i, j) is strengthened with the weight controlled by (7); the AMF then applies a strong weight through \( J_c \) to enhance d(x, y). On the other hand, if the corresponding candidate of d(i, j) is in a smooth region, c(i, j) = 0, and the corresponding candidate depth d(i, j) is weakened with the weight controlled by (7).
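Since (9) and (10) are not reproduced in this excerpt, the sketch below shows one plausible form of the credibility map that is consistent with the description: Sobel gradients of the image, thresholded by ϕ (set to 240 in Section 4). The thresholding form itself is an assumption.

```python
import numpy as np
from scipy import ndimage

def credibility_map(g, phi=240.0):
    """c(i, j) = 1 on edge pixels and 0 on smooth pixels, derived from
    Sobel gradients of the image g. The exact form of (9) is not shown
    in the text, so this gradient-magnitude threshold is an assumption."""
    gx = ndimage.sobel(g.astype(np.float64), axis=1)  # horizontal gradient G_x
    gy = ndimage.sobel(g.astype(np.float64), axis=0)  # vertical gradient G_y
    mag = np.hypot(gx, gy)                            # gradient magnitude
    return (mag > phi).astype(np.uint8)               # 1 on edges, 0 elsewhere
```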
It is noted that we need to determine four standard deviations, \( \sigma_s \), \( \sigma_d \), \( \sigma_g \), and \( \sigma_c \), to achieve the best enhancement of the depth map, where the mold matching technique is used for the selection of the AMF parameters.
3.2 Mold matching for image and depth map
In (18), \( a_n \) denotes the matching error between the nth mold and the image block; the minimum of \( a_n \) thus identifies the best-matching mold for the image block among all candidates. \( N_R = 56 \) is the total number of molds, and \( {R}_n^0 \) and \( {R}_n^1 \) respectively represent the black and white regions in the nth mold, as shown in Fig. 2. With k = 0 or 1, \( {\eta}_n^k \) is the average of the texture values in \( {R}_n^k \), and \( \left|{R}_n^k\right| \) denotes the number of elements in \( {R}_n^k \). In (19), we use the least-squares error method to select the best mold \( {M}_{I_g} \) for the image block. To find the best mold for the depth block, \( {M}_{I_d} \), we simply replace \( g(x_m) \) with \( d(x_m) \) in (19) and (20). In addition, if the block variance is less than a given threshold, e.g., 1, we assume that the corresponding block belongs to a smooth region; in this case, a uniform binary mold whose elements are all 1's or all 0's, denoted \( M_0 \), is assigned instead.
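The following sketch illustrates the least-squares mold matching described around (18)–(20). Here `molds` stands for the 56 binary patterns of Fig. 2; the helper is an illustration under those assumptions, not the authors' code.

```python
import numpy as np

def best_mold(block, molds, var_thresh=1.0):
    """Pick the binary mold whose two-region, two-mean approximation of
    `block` gives the smallest least-squares error (the a_n of (18))."""
    if np.var(block) < var_thresh:
        return None  # smooth block: the uniform mold M_0 is used instead
    errors = []
    for m in molds:  # each m is a 0/1 array the same size as `block`
        a_n = 0.0
        for k in (0, 1):  # k = 0: black region R_n^0, k = 1: white region R_n^1
            region = block[m == k]
            eta = region.mean()                 # eta_n^k, mean texture in R_n^k
            a_n += np.sum((region - eta) ** 2)  # squared residual over R_n^k
        errors.append(a_n)
    return int(np.argmin(errors))  # index of the best-matching mold
```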
The SAD between two binary molds represents the total number of mismatched pixels. The SADs of the depth mold and of its binary inversion against the image mold, namely for the pairs {\( {M}_{I_d},{M}_{I_g} \)} and {\( {\overline{M}}_{I_d},{M}_{I_g} \)}, are computed, and the minimum of the two is used to represent the mold similarity. It is worth noting that \( SAD\left(\overline{M_{I_d}},{M}_{I_g}\right) \) is necessary because a binary mold only classifies the block pixels into two groups: if we invert the bits of all molds, the mold matching processes defined in (18), (19), and (20) yield the same results. In other words, swapping the black and white regions of a binary mold does not change the similarity between the texture and depth molds. Let \( D_{max} \) denote the largest SAD between any two molds; normalizing the minimum SAD by \( D_{max} \) yields the mold similarity \( D_{pm} \), whose value lies between 0 and 1. By comparing the molds in Fig. 2 one by one, we found that the maximum SAD is \( D_{max} = 83 \).
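A compact sketch of this similarity computation, under the assumptions above (0/1 binary molds of equal size, and \( D_{max} = 83 \) from Fig. 2):

```python
import numpy as np

def mold_similarity(m_d, m_g, d_max=83.0):
    """D_pm: the smaller of SAD(M_d, M_g) and SAD(inverted M_d, M_g),
    normalized by the largest possible mold-to-mold SAD, D_max."""
    m_d = m_d.astype(int)
    m_g = m_g.astype(int)
    sad = np.abs(m_d - m_g).sum()            # SAD(M_d, M_g)
    sad_inv = np.abs((1 - m_d) - m_g).sum()  # SAD with black/white swapped
    return min(sad, sad_inv) / d_max         # D_pm, between 0 and 1
```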
3.3 Rolling guidance refinement for depth map
4 Experimental results
4.1 Performance evaluation of depth enhancement
In the experiments, the weighting factors are empirically set as \( k_1 = 15 \), \( k_2 = 15 \), and \( k_3 = 18 \) in (23)–(25), and ϕ is set to 240 in (9). Decreasing or increasing these factors results in stronger or weaker enhancement, respectively. For comparison, the proposed method without any hole filling is compared with the joint bilateral filter (JBF) [16], intensity guided depth superresolution (IGDS) [39], compressive sensing based depth upsampling (CSDU) [40], and adaptive joint trilateral filter (AJTF) [18] methods, all coupled with cross-based hole filling (CHF). The enhanced depth maps produced by the proposed AMF process, as well as by the JBF and AJTF methods with CHF, are shown in Fig. 7. The simulation results show that the proposed AMF method effectively removes the noise and hole pixels, but the enhanced depth map still exhibits tiny jagged edges; therefore, depth refinement along object edges is another important step.
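For reference, the PSNR values reported in the tables below follow the standard definition against a ground-truth depth map; the sketch assumes 8-bit data and is not code from the paper.

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Standard PSNR in dB between a ground-truth depth map `ref`
    and an enhanced depth map `test`, assuming 8-bit range."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak**2 / mse)
```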
PSNR comparisons of different approaches on the Middlebury and RGBD datasets
Methods\image data | Art | Books | Doily | Moebius | RGBD_1 | RGBD_2 |
---|---|---|---|---|---|---|
JBF [16] with CHF | 32.4373 | 33.3130 | 34.6370 | 33.5519 | 26.3400 | 31.1924 |
IGDS [39] with CHF | 37.9567 | 41.2789 | 43.3450 | 41.4567 | 30.3135 | 32.8103 |
CSDU [40] with CHF | 37.6181 | 40.6110 | 41.3045 | 42.0501 | 30.3520 | 33.6959 |
AJTF [18] with CHF | 32.4128 | 29.2324 | 29.5552 | 29.3657 | 27.6347 | 31.4258 |
Proposed method | 39.0004 | 41.3019 | 41.7411 | 42.5622 | 31.6245 | 33.7466 |
PSNR comparisons showing parameter sensitivity on the Middlebury and RGBD datasets
Depth map type | Image data | σ_s = 2, k_1 = 15, k_2 = 15, k_3 = 18 | σ_s = 2, k_1 = 10, k_2 = 10, k_3 = 12 |
---|---|---|---|
Virtual depth maps | Art | 39.0004 | 38.8735 |
Virtual depth maps | Moebius | 42.5622 | 41.6602 |
Natural depth maps | RGBD_1 | 31.6245 | 31.1052 |
Natural depth maps | RGBD_2 | 33.7466 | 32.9218 |
4.2 Depth enhancement with RCR process
PSNR performance achieved by the combined AMF and RCR methods on the Middlebury and RGBD datasets
Image data | Art | Books | Doily | Moebius | RGBD_1 | RGBD_2 |
---|---|---|---|---|---|---|
PSNR | 40.6890 | 40.8794 | 43.6993 | 41.0097 | 33.2462 | 34.8825 |
Execution time of the main stages in the proposed method
Stage | AMF | RCR |
---|---|---|
Execution time (s) | 178.5513 | 6.4113 |
4.3 Performance evaluation with Middlebury datasets
SSIM and PSNR performance achieved by different depth enhancement methods on the Middlebury and RGBD datasets
Methods | Art SSIM | Art PSNR | Books SSIM | Books PSNR | Doily SSIM | Doily PSNR | Moebius SSIM | Moebius PSNR | RGBD_1 SSIM | RGBD_1 PSNR | RGBD_2 SSIM | RGBD_2 PSNR |
---|---|---|---|---|---|---|---|---|---|---|---|---|
JBF [16] + CHF | 0.9880 | 36.0083 | 0.9754 | 33.8966 | 0.9913 | 44.6455 | 0.9912 | 39.7306 | 0.9561 | 29.8534 | 0.9686 | 33.6112 |
IGDS [39] + CHF | 0.9911 | 37.4257 | 0.9913 | 37.2058 | 0.9951 | 47.7032 | 0.9887 | 38.1115 | 0.9610 | 29.9120 | 0.9677 | 33.5204 |
CSDU [40] + CHF | 0.9895 | 37.2223 | 0.9902 | 37.1560 | 0.9947 | 47.8200 | 0.9881 | 37.0544 | 0.9612 | 30.4155 | 0.9691 | 33.8235 |
AJTF [18] + CHF | 0.9865 | 35.1333 | 0.9744 | 34.1137 | 0.9967 | 47.8452 | 0.9876 | 36.4653 | 0.9582 | 30.2524 | 0.9674 | 33.2741 |
AMF | 0.9945 | 38.9374 | 0.9940 | 40.4719 | 0.9962 | 47.7070 | 0.9956 | 40.5725 | 0.9633 | 31.1112 | 0.9741 | 34.6881 |
AMF + RCR | 0.9946 | 38.9546 | 0.9935 | 37.3574 | 0.9976 | 49.6378 | 0.9979 | 42.2108 | 0.9664 | 31.2685 | 0.9756 | 34.7168 |
5 Conclusions
Image and depth enhancement play an important role in today's 3D video technologies, and many approaches have been proposed to deal with different situations. Building on the adaptive joint trilateral filter (AJTF), we have presented a new robust method to enhance the image and noisy depth maps. In this paper, we proposed an advanced multilateral filter (AMF) that considers the similarities of spatial, range, depth, and credibility information. The AMF is used for depth enhancement by suppressing noise, filling holes, and sharpening object edges simultaneously. The experiments show that the proposed method achieves better results than the other methods.
The proposed AMF without hole filling outperforms the AJTF and the JBF with CHF. The proposed AMF produces sharper object edges and removes overshoot and undershoot artifacts; moreover, it can remove hole regions and sharpen edges simultaneously. The proposed method also replaces the exponential function with its second-order Taylor expansion, which saves 12.69% of the computing time on the MATLAB platform. Comparing the proposed AMF method with different depth enhancement algorithms, the AMF exhibits better performance in both subjective and objective evaluations.
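This substitution can be illustrated as follows. The exact kernel argument u used inside the AMF weights is not given in this excerpt, so the sketch only shows the generic approximation that the text credits with the speedup.

```python
import numpy as np

def weight_exact(u):
    """Original exponential kernel weight, w = exp(-u)."""
    return np.exp(-u)

def weight_taylor2(u):
    """Second-order Taylor expansion of exp(-u) about u = 0.
    Note that 1 - u + u^2/2 tracks exp(-u) well only for small u and
    grows again for large u, so u should be kept in a limited range."""
    return 1.0 - u + 0.5 * u * u
```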
As future work, implementing the AMF in hardware VLSI circuits should be considered. In conjunction with DIBR techniques, which require extremely accurate depth maps, edge detection around small objects should be made more accurate. Finally, the proposed method can be extended to depth video enhancement by exploiting the temporal depth information between successive frames.
Declarations
Funding
This work was supported in part by the National Science Council of Taiwan, under Grant NSC 105-2221-E-006-065-MY3.
Authors’ contributions
TAC carried out the image processing studies, participated in the proposed system design, and drafted the manuscript. YTC carried out the mold design and adjustment parameters. JFY conceived of the study, and participated in its design and coordination and helped to draft the manuscript. All authors read and approved the final manuscript.
Competing interests
The authors declare that they have no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
References
1. L Zhang, C Vazquez, S Knorr, 3D-TV content creation: automatic 2D-to-3D video conversion. IEEE Trans. Broadcast. 57(2), 372–383 (2011)
2. A Smolic, D McCutchen, 3DAV exploration of video-based rendering technology in MPEG. IEEE Trans. Circuits Syst. Video Technol. 14(3), 348–356 (2004)
3. HM Wang, CH Huang, JF Yang, Depth maps interpolation from existing pairs of keyframes and depth maps for 3D video generation. IEEE Circuits and Systems (ISCAS) Conf., 2010, pp. 3248–3251
4. M Schmeing, X Jiang, Faithful disocclusion filling in depth image based rendering using superpixel-based inpainting. IEEE Trans. Multimedia PP(99), 1 (2015)
5. F Shao, M Yu, G Jiang, F Li, Z Peng, Depth map compression and depth-aided view rendering for a three-dimensional video system. IET Signal Process. 6(3), 247–254 (2012)
6. TC Yang, PC Kuo, BD Liu, JF Yang, Depth image-based rendering with edge-oriented hole filling for multiview synthesis. Communications, Circuits and Systems (ICCCAS) Conf. 1, 50–53 (2013)
7. YS Heo, KM Lee, SU Lee, Robust stereo matching using adaptive normalized cross-correlation. IEEE Trans. Pattern Anal. Mach. Intell. 33(4), 807–822 (2011)
8. H Hirschmuller, D Scharstein, Evaluation of cost functions for stereo matching. IEEE Computer Vision and Pattern Recognition (CVPR) Conf., 2007, pp. 1–8
9. D Scharstein, C Pal, Learning conditional random fields for stereo. IEEE Computer Vision and Pattern Recognition (CVPR) Conf., 2007, pp. 1–8
10. F Garcia, D Aouada, T Solignac, B Mirbach, B Ottersten, Real-time depth enhancement by fusion for RGB-D cameras. IET Comput. Vis. 7(5), 1–11 (2013)
11. J Xie, RS Feris, MT Sun, Edge-guided single depth image super resolution. IEEE Trans. Image Process. 25(1), 428–438 (2016)
12. MY Liu, O Tuzel, Y Taguchi, Joint geodesic upsampling of depth images. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2013, pp. 169–176
13. J Yang, X Ye, K Li, C Hou, Y Wang, Color-guided depth recovery from RGB-D data using an adaptive autoregressive model. IEEE Trans. Image Process. 23(8), 3443–3458 (2014)
14. Y Shen, J Li, C Lu, Depth map enhancement method based on joint bilateral filter. IEEE Image and Signal Process. Conf., 2014, pp. 153–158
15. Y Wang, A Ortega, D Tian, A Vetro, A graph-based joint bilateral approach for depth enhancement. IEEE Speech and Signal Process. Conf., 2014, pp. 885–889
16. J Kopf, MF Cohen, D Lischinski, M Uyttendaele, Joint bilateral upsampling. ACM Trans. Graph. 26(3), 96 (2007)
17. P Lai, D Tian, P Lopez, Depth map processing with iterative joint multilateral filtering. IEEE Picture Coding Symposium (PCS), 2010, pp. 9–12
18. SW Jung, Enhancement of image and depth map using adaptive joint trilateral filter. IEEE Trans. Circuits Syst. Video Technol. 23, 258–269 (2013)
19. H Shan, WD Chien, HM Wang, JF Yang, A homography-based inpainting algorithm for effective depth-image-based rendering. IEEE Image Processing (ICIP) Conf., 2014, pp. 5412–5416
20. CH Hsia, Improved depth image-based rendering using an adaptive compensation method on an autostereoscopic 3-D display for a Kinect sensor. IEEE Sensors J. 15(2), 994–1002 (2015)
21. P Ndjiki-Nya, M Koppel, D Doshkov, H Lakshman, P Merkle, K Muller, T Wiegand, Depth image-based rendering with advanced texture synthesis for 3-D video. IEEE Trans. Multimedia 13(3), 453–465 (2011)
22. H Chang, DY Yeung, Y Xiong, Super-resolution through neighbor embedding. IEEE Computer Vision and Pattern Recognition (CVPR) Conf. 1, I (2004)
23. NE Yang, YG Kim, RH Park, Depth hole filling using the depth distribution of neighboring regions of depth holes in the Kinect sensor. IEEE Signal Process., Communication and Computing Conf., 2012, pp. 658–661
24. ME Sobel, Asymptotic confidence intervals for indirect effects in structural equation models. Sociol. Methodol. 13, 290–312 (1982)
25. TA Chang, JF Yang, Enhancement of depth map using texture and depth consistency. IEEE Conf. (TENCON), 2016, pp. 1139–1142
26. P Perona, J Malik, Scale-space and edge detection using anisotropic diffusion. IEEE Trans. Pattern Anal. Mach. Intell. 12(7), 629–639 (1990)
27. K He, J Sun, X Tang, Guided image filtering. IEEE Trans. Pattern Anal. Mach. Intell. 35(6), 1397–1409 (2013)
28. CP Cuong, WJ Jae, Efficient image sharpening and denoising using adaptive guided image filtering. IET Image Process. 9(1), 71–79 (2015)
29. A Criminisi, T Sharp, C Rother, P Perez, Geodesic image and video editing. ACM Trans. Graph. 29(5), 134 (2010)
30. Q Yang, D Li, LH Wang, M Zhang, A novel guided image filter using orthogonal geodesic distance weight. IEEE Image Process. (ICIP) Conf., 2013, pp. 1207–1211
31. SJ Ko, YH Lee, Center weighted median filters and their applications to image enhancement. IEEE Trans. Circuits Syst. 38(9), 984–993 (1991)
32. Z Ma, K He, Y Wei, J Sun, E Wu, Constant time weighted median filtering for stereo matching and beyond. IEEE Computer Vision (ICCV) Conf., 2013, pp. 49–56
33. G Dong, ST Acton, On the convergence of bilateral filter for edge-preserving image smoothing. IEEE Signal Process. Lett. 14(9), 617–620 (2007)
34. G Guarnieri, S Marsi, G Ramponi, Fast bilateral filter for edge-preserving smoothing. Electron. Lett. 42(7), 396–397 (2006)
35. Z Su, X Luo, Z Deng, Y Liang, Z Ji, Edge-preserving texture suppression filter based on joint filtering schemes. IEEE Trans. Multimedia 15(3), 535–548 (2013)
36. Q Zhang, L Xu, J Jia, Rolling guidance filter. European Conf. on Computer Vision (ECCV), 2014
37. D Scharstein, R Szeliski, A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. Int. J. Comput. Vis. 47(1-3), 7–42 (2002)
38. D Scharstein, R Szeliski, High-accuracy stereo depth maps using structured light. IEEE Computer Vision and Pattern Recognition (CVPR) Conf. 1, I-195–I-202 (2003)
39. B Ham, D Min, K Sohn, Depth superresolution by transduction. IEEE Trans. Image Process. 24(5), 1524–1535 (2015)
40. L Dai, H Wang, X Mei, X Zhang, Depth map upsampling via compressive sensing. IEEE Asian Conference on Pattern Recognition (ACPR), 2013, pp. 90–94
41. D Kim, D Min, J Oh, S Jeon, K Sohn, Depth map quality metric for three-dimensional video. IS&T/SPIE Electronic Imaging Conf., 2009, p. 723719