SSIM-inspired image restoration using sparse representation
Abdul Rehman^{1}, Mohammad Rostami^{1}, Zhou Wang^{1}, Dominique Brunet^{2} and Edward R. Vrscay^{2}
https://doi.org/10.1186/1687-6180-2012-16
© Rehman et al; licensee Springer. 2012
Received: 6 June 2011
Accepted: 20 January 2012
Published: 20 January 2012
Abstract
Recently, sparse-representation-based methods have proven successful at solving image restoration problems. The objective of these methods is to exploit the sparsity prior of the underlying signal in terms of some dictionary and achieve optimal performance in terms of mean-squared error, a metric that has been widely criticized in the literature for its poor performance as a visual quality predictor. In this work, we make one of the first attempts to employ the structural similarity (SSIM) index, a more accurate perceptual image measure, by incorporating it into the framework of sparse signal representation and approximation. Specifically, the proposed optimization problem solves for coefficients with minimum ${\mathcal{L}}_{0}$ norm and maximum SSIM index value. Furthermore, a gradient descent algorithm is developed to achieve the SSIM-optimal compromise in combining the input and sparse-dictionary-reconstructed images. We demonstrate the performance of the proposed method using image denoising and super-resolution as examples. Our experimental results show that the proposed SSIM-based sparse representation algorithm achieves better SSIM performance and better visual quality than the corresponding least-square-based method.
1 Introduction
The SSIM index and its extensions have found a wide variety of applications, ranging from image/video coding (e.g., the H.264 video coding standard implementation [9]), image classification [10], and restoration and fusion [11], to watermarking, denoising and biometrics (see [7] for a complete list of references). In most existing works, however, SSIM has been used only for quality evaluation and algorithm comparison. SSIM possesses a number of desirable mathematical properties that make it easier to employ in optimization tasks than other state-of-the-art perceptual IQA measures [12]. Yet much less has been done on using SSIM as an optimization criterion in the design and optimization of image processing algorithms and systems [13–19].
Image restoration problems are of particular interest to image processing researchers, not only for their practical value, but also because they provide an excellent test bed for image modeling, representation and estimation theories. When addressing general image restoration problems with a Bayesian approach, an image prior model is required. Traditionally, the problem of determining suitable image priors has been based on close observation of natural images, which leads to simplifying assumptions such as spatial smoothness, low/max-entropy, or sparsity in some basis set. Recently, a new approach has been developed for learning the prior based on sparse representations. A dictionary is learned either from the corrupted image or from a high-quality set of images, under the assumption that it can sparsely represent any natural image; this learned dictionary thus encapsulates the prior information about the set of natural images. Such methods have proven quite successful at image restoration tasks such as image denoising [3] and image super-resolution [5, 20]. More specifically, an image is divided into overlapping blocks with the help of a sliding window, and each block is then sparsely coded with the help of the dictionary. The dictionary, ideally, models the prior of natural images and is therefore free from all kinds of distortions. As a result, the reconstructed blocks, obtained by linear combination of the atoms of the dictionary, are distortion free. Finally, the blocks are put back into their places and combined in light of a global constraint for which a minimum-MSE solution is reached. The accumulation of many blocks at each pixel location might affect the sharpness of the image. Therefore, the distorted image must be considered as well in order to reach the best compromise between sharpness and admissible distortions.
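The sliding-window decomposition and minimum-MSE merge described above can be sketched as follows (a minimal NumPy illustration; `extract_patches` and `average_patches` are hypothetical helper names, not from the paper):

```python
import numpy as np

def extract_patches(img, p):
    """All overlapping p x p patches of img (sliding window, stride 1)."""
    H, W = img.shape
    return [img[i:i + p, j:j + p].copy()
            for i in range(H - p + 1) for j in range(W - p + 1)]

def average_patches(patches, shape, p):
    """Place (possibly modified) patches back in position and average
    the overlaps, which is the minimum-MSE way to merge them."""
    acc = np.zeros(shape)
    cnt = np.zeros(shape)
    H, W = shape
    idx = 0
    for i in range(H - p + 1):
        for j in range(W - p + 1):
            acc[i:i + p, j:j + p] += patches[idx]
            cnt[i:i + p, j:j + p] += 1
            idx += 1
    return acc / cnt
```

In a restoration pipeline each extracted patch would be sparse-coded and reconstructed before the averaging step; with unmodified patches, the round trip reproduces the input exactly.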
Since MSE is employed as the optimization criterion, the resulting output image might not have the best perceptual quality. This motivated us to replace the role of MSE with SSIM in the framework. The solution of this novel optimization problem is not trivial because SSIM is non-convex in nature. Two key problems must be resolved before effective SSIM-based optimization can be performed: first, how to optimally decompose an image as a linear combination of basis functions in the maximal-SSIM, as opposed to minimal-MSE, sense; second, how to estimate the best compromise between the distorted and sparse-dictionary-reconstructed images for maximal SSIM. In this article, we provide solutions to these problems and use image denoising and image super-resolution as applications to demonstrate the proposed framework for image restoration problems.
We formulate the problem in Section 2.1 and provide our solutions to the issues discussed above in Sections 2.2 and 2.3. Section 3.1 describes our approach to image denoising, Section 3.2 describes the proposed method for image super-resolution, and Section 4 concludes.
2 The proposed method
In this section we incorporate SSIM as the quality measure, particularly for sparse representation. Perhaps surprisingly, we show that a sparse representation that is optimal in the minimal-${\mathcal{L}}_{2}$-norm sense can be easily converted to one that is optimal in the maximal-SSIM sense. We also use a gradient descent approach to solve a global optimization problem in the maximal-SSIM sense. Our framework can be applied to a wide class of problems dealing with sparse representation to improve visual quality.
2.1 Image restoration from sparsity
$$\mathbf{y} = \Phi \mathbf{x} + \mathbf{n} \qquad (1)$$
where x ∈ ℝ^{n}, y ∈ ℝ^{m}, n ∈ ℝ^{m}, and Φ ∈ ℝ^{m×n}. Here x and y are vectorized versions, by column stacking, of the original and distorted 2D images, respectively, and n is the noise term, commonly assumed to be zero-mean, additive, and independent Gaussian. Generally m < n, so the problem is ill-posed, and a prior on the original image must be asserted to solve it. Early approaches used least squares (LS) [21] and Tikhonov regularization [22] as priors; later, the minimal total variation (TV) solution [23] and sparse priors [3] were applied successfully to this problem. Our focus in the current work is to improve, in terms of visual quality, algorithms that assert a sparsity prior on the solution in terms of a dictionary domain.
The sparsity prior has been used successfully to solve different inverse problems in image processing [3, 5, 24, 25]. If our desired signal x is sparse enough, it has been shown that the solution to (1) is the maximally sparse one, which is unique (within some ϵ-ball around x) [26, 27]. It can be found efficiently by solving a linear programming problem or by orthogonal matching pursuit (OMP). Not all natural signals are sparse, but a wide range of them can be represented sparsely in terms of a dictionary, which makes the sparsity prior applicable to a wide range of inverse problems. One major difficulty is that image signals are high-dimensional, so solving (1) directly is computationally expensive. To tackle this, we assume local sparsity on image patches: every image patch is assumed to have a sparse representation in terms of a dictionary, which can be trained over a set of patches [28].
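As a concrete illustration of the greedy local sparse-coding step, here is a minimal ${\mathcal{L}}_{2}$-based orthogonal matching pursuit routine (a textbook sketch; the function name `omp` and its parameters are our own, not from the paper):

```python
import numpy as np

def omp(D, y, sparsity, tol=1e-6):
    """Orthogonal matching pursuit: greedily approximate y as a sparse
    linear combination of the columns (atoms) of dictionary D."""
    residual = y.astype(float).copy()
    support = []
    coeffs = np.zeros(D.shape[1])
    for _ in range(sparsity):
        # Select the atom most correlated with the current residual.
        k = int(np.argmax(np.abs(D.T @ residual)))
        if k not in support:
            support.append(k)
        # Least-squares refit of y over the selected support.
        sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ sol
        if np.linalg.norm(residual) < tol:
            break
    coeffs[support] = sol
    return coeffs
```

With an orthonormal dictionary this recovers an exactly sparse signal in as many iterations as it has nonzero coefficients.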
In (3), we have assumed that the distortion operator Φ in (1) may be represented by the product DH, where H is a blurring filter and D is the downsampling operator. We have further assumed that each non-overlapping patch of the image can be represented sparsely in the domain of Ψ. Asserting this prior on each patch, (2) refers to the sparse coding of local image patches with a bounded prior, hence building a local model from sparse representations. This enables us to restore individual patches by solving (2) for each patch. Doing so, however, produces blockiness at the patch boundaries when the denoised non-overlapping patches are placed back in the image. To remove these artifacts, overlapping patches are extracted from the noisy image and combined with the help of (3). The solution of (3) demands proximity between the noisy image Y and the output image X, thus enforcing the global reconstruction constraint. The ${\mathcal{L}}_{2}$-optimal solution is to average the overlapping patches [3], which eliminates the blockiness in the denoised image.
$$S(\mathbf{a},\mathbf{y}) = \frac{(2\mu_{\mathbf{a}}\mu_{\mathbf{y}} + C_1)(2\sigma_{\mathbf{ay}} + C_2)}{(\mu_{\mathbf{a}}^2 + \mu_{\mathbf{y}}^2 + C_1)(\sigma_{\mathbf{a}}^2 + \sigma_{\mathbf{y}}^2 + C_2)} \qquad (4)$$
with μ_{a} and μ_{y} the means of a and y, respectively, ${\sigma}_{\mathbf{a}}^{2}$ and ${\sigma}_{\mathbf{y}}^{2}$ the sample variances of a and y, respectively, and σ_{ay} the covariance between a and y. The constants C_{1} and C_{2} are stabilizing constants that account for the saturation effect of the HVS.
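Computed directly from these patch statistics, the SSIM index can be sketched as follows (the default constants correspond to the common choice (0.01·255)² and (0.03·255)² for 8-bit images, an assumption on our part; any positive constants work):

```python
import numpy as np

def ssim_patch(a, y, C1=6.5025, C2=58.5225):
    """SSIM index between two vectorized patches, per the standard
    definition: luminance, contrast, and structure terms combined."""
    mu_a, mu_y = a.mean(), y.mean()
    var_a, var_y = a.var(), y.var()          # sample variances
    cov = ((a - mu_a) * (y - mu_y)).mean()   # sample covariance
    num = (2 * mu_a * mu_y + C1) * (2 * cov + C2)
    den = (mu_a**2 + mu_y**2 + C1) * (var_a + var_y + C2)
    return num / den
```

The index equals 1 only for identical patches; a pure luminance shift lowers it, but far less than a structural change of the same energy would.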
Equation (5) aims to provide the best approximation of a local patch, in the SSIM sense, with the minimum possible number of atoms. The process is performed locally for each block in the image, and the blocks are then combined by simple averaging to construct W. Equation (6) applies a global constraint and outputs the image that is the best compromise, in the SSIM sense, between the noisy image Y and W. This step is vital because the image W has been observed to lack sharpness in the structures present in the image. Due to the masking effect of the HVS, the same level of noise does not distort different visual content equally. The noisy image is therefore used to borrow content from those of its regions that are not severely corrupted by noise. SSIM is well suited to such a task, compared with MSE, because it accounts for the masking effect of the HVS and allows us to recover improved structural details with the help of the noisy image. Note the use of 1 − S(·, ·) in (5), which is motivated by the fact that 1 − S(·, ·) is a squared variance-normalized ${\mathcal{L}}_{2}$ distance [30]. Solutions to the optimization problems in (5) and (6) are given in Sections 2.2 and 2.3, respectively.
2.2 SSIMoptimal local model from sparse representation
(12)
where C_{2} is the constant originally used in the SSIM index expression [8] and ${\sigma}_{\mathbf{a}}^{2}$ is calculated based on the current approximation of the block, given by a := Ψα.
It has already been shown that the main difference between SSIM and MSE is divisive normalization [30, 31]. This normalization is conceptually consistent with the light adaptation (also called luminance masking) and contrast masking effects of the HVS. It has been recognized as a statistically and perceptually motivated nonlinear image representation model [32, 33], and shown to be a useful framework that accounts for the masking effect in the human visual system, i.e., the reduction in visibility of an image component in the presence of large neighboring components [34, 35]. It has also been found to be powerful in modeling neuronal responses in the visual cortex [36, 37]. Divisive normalization has been successfully applied in IQA [38, 39], image coding [40], video coding [31] and image denoising [41].
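A toy illustration of the idea (a deliberately simplified stand-in for the neighborhood-based gain-control models cited above; the function and its σ parameter are our own):

```python
import numpy as np

def divisive_normalize(coeffs, sigma=0.1):
    """Toy divisive normalization: divide each coefficient by the
    pooled energy of the coefficient vector, compressing large
    responses relative to their neighborhood."""
    energy = np.sqrt(sigma**2 + np.mean(coeffs**2))
    return coeffs / energy
```

After normalization the response vector has (approximately) unit mean energy, which is the gain-control behavior the cited models formalize per local neighborhood.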
We denote the inner product of a signal with the constant signal (1/n, 1/n, ..., 1/n) of length n by ⟨ψ⟩ := ⟨ψ, 1/n⟩, where ⟨·, ·⟩ represents the inner product.
Now we have all the tools required for an OMP algorithm that performs the sparse coding stage in the optimal-SSIM sense. The modified pursuit is explained in Algorithm 1. There are two main differences between the original OMP algorithm [29] and the one proposed in this work. First, the stopping criterion is based on SSIM. Unlike MSE, SSIM adapts to the reference image: if the distortion is consistent with the underlying reference (e.g., contrast enhancement), it is non-structural and much less objectionable than a structural distortion. Defining the stopping criterion according to SSIM essentially means that we are modifying the set of accepted points (image patches) around the noisy image patch that can be represented as a linear combination of dictionary atoms. In the space of image patches, we thereby exclude patches in the direction of structural distortion and include those in the same direction as the original patch, so we can expect to see more structure in the image constructed using sparsity as a prior. Second, we calculate the SSIM-optimal coefficients from the optimal coefficients in the ${\mathcal{L}}_{2}$ sense using the derivation in Section 2.2; they are scalar multiples of the optimal ${\mathcal{L}}_{2}$-based coefficients.
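The modified pursuit can be sketched as follows. This is an illustrative simplification: atoms are still selected in the ${\mathcal{L}}_{2}$ sense and we keep plain least-squares coefficients, but the loop stops once the SSIM between the approximation and the target exceeds a threshold; the paper's SSIM-optimal rescaling of the coefficients via its Equations (27) and (31) is omitted here:

```python
import numpy as np

def ssim(a, y, C1=6.5025, C2=58.5225):
    """Standard SSIM index between two vectorized patches."""
    ma, my = a.mean(), y.mean()
    cov = ((a - ma) * (y - my)).mean()
    return ((2 * ma * my + C1) * (2 * cov + C2)) / \
           ((ma**2 + my**2 + C1) * (a.var() + y.var() + C2))

def ssim_omp(D, y, t_ssim=0.98, max_atoms=None):
    """OMP with an SSIM-based stopping rule (sketch of Algorithm 1)."""
    max_atoms = max_atoms or D.shape[1]
    r = y.astype(float).copy()
    support = []
    coeffs = np.zeros(D.shape[1])
    for _ in range(max_atoms):
        # Greedy L2 atom selection, as in plain OMP.
        k = int(np.argmax(np.abs(D.T @ r)))
        if k not in support:
            support.append(k)
        sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        approx = D[:, support] @ sol
        r = y - approx
        # SSIM-based stopping criterion instead of an MSE one.
        if ssim(approx, y) >= t_ssim:
            break
    coeffs[support] = sol
    return coeffs
```

Because the stopping rule is perceptual, patches whose residual is mostly non-structural terminate earlier than an MSE rule would allow.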
2.3 SSIMbased global reconstruction
where tr(·) denotes the trace of a matrix.
where N_{w} is the number of pixels in the local image patch, and μ_{x}, ${\sigma}_{\mathbf{x}}^{2}$ and σ_{xy} represent the sample mean of x, the sample variance of x, and the sample covariance of x and y, respectively. Equation (34) suggests that the gradients of the local patches are averaged in order to obtain the global SSIM gradient, and thus the direction and distance of the k-th update of $\widehat{\mathbf{X}}$. More details regarding the computation of the SSIM gradient can be found in [42]. In our experiments, we found this gradient-based approach to be well-behaved: it takes only a few iterations for $\widehat{\mathbf{X}}$ to converge to a stationary point. Having the gradient of SSIM, we follow an iterative procedure to solve (6), initializing $\widehat{\mathbf{X}}$ with the minimal-MSE solution.
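The ascent can be illustrated with a numerical-gradient version (the paper uses the closed-form gradient of its Equation (35); the finite-difference `ssim_grad` and the greedy step-acceptance rule below are our own simplifications):

```python
import numpy as np

def ssim(a, y, C1=6.5025, C2=58.5225):
    """Standard SSIM index between two vectorized patches."""
    ma, my = a.mean(), y.mean()
    cov = ((a - ma) * (y - my)).mean()
    return ((2 * ma * my + C1) * (2 * cov + C2)) / \
           ((ma**2 + my**2 + C1) * (a.var() + y.var() + C2))

def ssim_grad(x, y, eps=1e-4):
    """Central-difference gradient of SSIM(x, y) with respect to x."""
    g = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x)
        d[i] = eps
        g[i] = (ssim(x + d, y) - ssim(x - d, y)) / (2 * eps)
    return g

def ascend(x0, y, steps=100, lr=5.0):
    """Gradient ascent on SSIM; a step is kept only if it improves,
    otherwise the step size is halved (a simple safeguard)."""
    x = x0.copy()
    for _ in range(steps):
        cand = x + lr * ssim_grad(x, y)
        if ssim(cand, y) > ssim(x, y):
            x = cand
        else:
            lr *= 0.5
    return x
```

Starting from an MSE-style initialization, a few accepted steps already raise the SSIM of the iterate, mirroring the fast convergence reported in the text.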
3 Applications
The proposed framework is general and can be used for different applications. To show its effectiveness, we present two: image denoising and image super-resolution.
3.1 Image denoising
The observation model is Y = X + N, where Y is the observed distorted image, X is the noise-free image, and N is additive Gaussian noise. Our goal is to remove the noise from the distorted image. We train a dictionary Ψ in whose domain the original image can be represented sparsely, using the K-SVD method [28]. In this method, the dictionary is trained directly over the noisy image while denoising is performed in parallel. For a fixed number of iterations J, we initialize with the discrete cosine transform (DCT) dictionary. In each step we update the image and then the dictionary: first, based on the current dictionary, sparse coding is performed for each patch; then K-SVD is used to update the dictionary (the interested reader can refer to [28] for details of the dictionary update). Finally, after repeating this procedure J times, we execute a global reconstruction stage following the gradient descent procedure. The proposed image denoising algorithm is summarized in Algorithm 2.
SSIM and PSNR comparisons of image denoising results
PSNR comparison (in dB)

| Image | Method | σ = 20 | σ = 25 | σ = 50 | σ = 100 |
|---|---|---|---|---|---|
| Barbara | Noisy | 22.11 | 20.17 | 14.15 | 8.13 |
| Barbara | K-SVD | 30.85 | 29.55 | 25.44 | 21.65 |
| Barbara | Proposed | 30.88 | 29.53 | 25.50 | 21.74 |
| Lena | Noisy | 22.11 | 20.17 | 14.15 | 8.13 |
| Lena | K-SVD | 32.38 | 31.32 | 27.79 | 24.46 |
| Lena | Proposed | 32.26 | 31.28 | 27.80 | 24.53 |
| Peppers | Noisy | 22.11 | 20.17 | 14.15 | 8.13 |
| Peppers | K-SVD | 30.80 | 29.72 | 26.10 | 21.84 |
| Peppers | Proposed | 30.84 | 29.84 | 26.25 | 21.98 |
| House | Noisy | 22.11 | 20.17 | 14.15 | 8.13 |
| House | K-SVD | 33.16 | 32.12 | 28.08 | 23.54 |
| House | Proposed | 33.04 | 32.09 | 28.13 | 23.59 |

SSIM comparison

| Image | Method | σ = 20 | σ = 25 | σ = 50 | σ = 100 |
|---|---|---|---|---|---|
| Barbara | Noisy | 0.593 | 0.503 | 0.241 | 0.084 |
| Barbara | K-SVD | 0.894 | 0.859 | 0.708 | 0.519 |
| Barbara | Proposed | 0.906 | 0.875 | 0.733 | 0.526 |
| Lena | Noisy | 0.531 | 0.443 | 0.204 | 0.074 |
| Lena | K-SVD | 0.903 | 0.877 | 0.733 | 0.550 |
| Lena | Proposed | 0.913 | 0.888 | 0.754 | 0.573 |
| Peppers | Noisy | 0.529 | 0.442 | 0.212 | 0.076 |
| Peppers | K-SVD | 0.905 | 0.883 | 0.782 | 0.601 |
| Peppers | Proposed | 0.913 | 0.894 | 0.797 | 0.627 |
| House | Noisy | 0.452 | 0.368 | 0.166 | 0.057 |
| House | K-SVD | 0.909 | 0.890 | 0.779 | 0.549 |
| House | Proposed | 0.915 | 0.901 | 0.795 | 0.574 |

(σ denotes the noise standard deviation.)
3.2 Image super-resolution
A patch is extracted from each location of the low-resolution image to be scaled up and sparsely coded with the SSIM-optimal Algorithm 1. Once the sparse coefficients α are obtained, the high-resolution patches y are computed using (39) and finally merged, by averaging over the overlap areas, to create the resulting image. The proposed image super-resolution algorithm is summarized in Algorithm 3.
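The per-patch step can be sketched as follows, with identity matrices standing in for the trained coupled dictionaries Ψ_l and Ψ_h, and a plain ${\mathcal{L}}_{2}$ OMP standing in for the SSIM-optimal coder of Algorithm 1 (all names and shapes here are our own illustrative assumptions):

```python
import numpy as np

def sr_patch(y_low, Psi_l, Psi_h, n_atoms=3):
    """Coupled-dictionary super-resolution of one patch: sparse-code the
    low-resolution patch over Psi_l, then synthesize the high-resolution
    patch as Psi_h @ alpha with the same coefficients."""
    r = y_low.astype(float).copy()
    support = []
    sol = np.zeros(0)
    for _ in range(n_atoms):
        k = int(np.argmax(np.abs(Psi_l.T @ r)))
        if k not in support:
            support.append(k)
        sol, *_ = np.linalg.lstsq(Psi_l[:, support], y_low, rcond=None)
        r = y_low - Psi_l[:, support] @ sol
    alpha = np.zeros(Psi_l.shape[1])
    alpha[support] = sol
    # The shared coefficients drive the high-resolution synthesis.
    return Psi_h @ alpha
```

In practice Ψ_h has more rows than Ψ_l (high-resolution patches are larger), and the resulting patches are merged by averaging over their overlaps.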
SSIM and PSNR comparisons of image superresolution results
PSNR comparison (in dB)

| Method | Barbara | Lena | Baboon | House | Raccoon | Zebra | Parthenon | Desk | Aeroplane | Man | Moon | Bridge |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Yang et al. | 30.3 | 33.4 | 25.3 | 34.1 | 34.0 | 24.6 | 28.4 | 31.9 | 34.2 | 33.2 | 32.2 | 28.0 |
| Zeyde et al. | 31.3 | 33.8 | 25.5 | 35.4 | 36.5 | 25.0 | 28.8 | 33.8 | 36.1 | 34.4 | 33.3 | 28.5 |
| Proposed | 31.4 | 33.9 | 25.6 | 35.5 | 37.0 | 25.1 | 28.9 | 33.9 | 36.4 | 34.6 | 33.4 | 28.6 |

SSIM comparison

| Method | Barbara | Lena | Baboon | House | Raccoon | Zebra | Parthenon | Desk | Aeroplane | Man | Moon | Bridge |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Yang et al. | 0.843 | 0.888 | 0.680 | 0.876 | 0.880 | 0.760 | 0.773 | 0.871 | 0.829 | 0.857 | 0.746 | 0.754 |
| Zeyde et al. | 0.874 | 0.909 | 0.710 | 0.904 | 0.934 | 0.789 | 0.811 | 0.918 | 0.860 | 0.896 | 0.803 | 0.783 |
| Proposed | 0.877 | 0.912 | 0.720 | 0.906 | 0.942 | 0.794 | 0.815 | 0.922 | 0.862 | 0.900 | 0.808 | 0.792 |
4 Conclusions
In this article, we combine perceptual image fidelity measurement with optimal sparse signal representation in the context of image denoising and image super-resolution, improving two state-of-the-art algorithms in these areas. We proposed an algorithm that solves for the optimal coefficients of a sparse and redundant dictionary representation in the maximal-SSIM sense, and developed a gradient descent approach to achieve the best compromise between the distorted image and the image reconstructed from the sparse representation. Our simulations demonstrate promising results and indicate the potential of SSIM to replace the ubiquitous PSNR/MSE as the optimization criterion in image processing applications. This is only an early attempt along a new but promising direction; the main contribution of the current work lies in the general framework and theoretical development. Significant improvement in visual quality can be expected from making the dictionary learning process itself SSIM-based, since the dictionary encapsulates the prior knowledge about the image to be restored: an SSIM-optimal dictionary would better capture the structures contained in the image, and the restoration task would produce a sharper output image. Further improvement is also expected in the future when some of the advanced mathematical properties of SSIM and normalized metrics [12] are incorporated into the optimization framework.
Algorithm 1: SSIM-inspired OMP

Initialize: D = {} (set of selected atoms), S_opt = 0, r = y
while S_opt < T_ssim do
    1. Add the next best atom, in the ${\mathcal{L}}_{2}$ sense, to D
    2. Find the optimal ${\mathcal{L}}_{2}$-based coefficient(s) using (15)
    3. Find the optimal SSIM-based coefficient(s) using (27) and (31)
    4. Update the residual r
    5. Find the SSIM-based approximation a
    6. Calculate S_opt = S(a, y)
end while
Algorithm 2: SSIM-inspired image denoising

1. Initialize: X = Y, Ψ = overcomplete DCT dictionary
2. Repeat J times:
   (a) Sparse coding stage: use SSIM-optimal OMP to compute the representation vectors α_{ij} for each patch.
   (b) Dictionary update stage: use K-SVD [28] to calculate the updated dictionary and coefficients, then calculate the SSIM-optimal coefficients using (27) and (31).
3. Global reconstruction: use the gradient descent algorithm to optimize (6), where the SSIM gradient is given by (35).
Algorithm 3: SSIM-inspired image super-resolution

1. Dictionary training phase: train the low- and high-resolution dictionaries Ψ_{l} and Ψ_{h} [20].
2. Reconstruction phase:
   (a) Sparse coding stage: use SSIM-optimal OMP to compute the representation vectors α_{ij} for all patches of the low-resolution image.
   (b) High-resolution patch reconstruction: reconstruct each high-resolution patch as Ψ_{h} α_{ij}.
3. Global reconstruction: merge the high-resolution patches by averaging over the overlapped regions to create the high-resolution image.
Declarations
Acknowledgements
This work was supported in part by the Natural Sciences and Engineering Research Council of Canada and in part by the Ontario Early Researcher Award program, which are gratefully acknowledged.
References
Dabov K, Foi A, Katkovnik V, Egiazarian K: Image denoising by sparse 3D transform-domain collaborative filtering. IEEE Trans Image Process 2007, 16:2080-2095.
Buades A, Coll B, Morel JM: A review of image denoising algorithms, with a new one. Multiscale Model Simul 2005, 4(2):490-530. 10.1137/040616024
Elad M, Aharon M: Image denoising via sparse and redundant representations over learned dictionaries. IEEE Trans Image Process 2006, 15(12):3736-3745.
Hou H, Andrews H: Cubic splines for image interpolation and digital filtering. IEEE Trans Acoust Speech Signal Process 1978, 26:508-517. 10.1109/TASSP.1978.1163154
Yang J, Wright J, Huang T, Ma Y: Image super-resolution via sparse representation. IEEE Trans Image Process 2010, 19(11):2861-2873.
Yang J, Wright J, Huang TS, Ma Y: Image super-resolution as sparse representation of raw image patches. Proc IEEE Conf Comput Vis Pattern Recognit 2008, 1-8.
Wang Z, Bovik AC: Mean squared error: love it or leave it? A new look at signal fidelity measures. IEEE Signal Process Mag 2009, 26:98-117.
Wang Z, Bovik AC, Sheikh HR, Simoncelli EP: Image quality assessment: from error visibility to structural similarity. IEEE Trans Image Process 2004, 13(4):600-612. 10.1109/TIP.2003.819861
Joint Video Team (JVT) Reference Software. [Online]. http://iphome.hhi.de/suehring/tml/download/old_jm
Gao Y, Rehman A, Wang Z: CW-SSIM based image classification. In IEEE International Conference on Image Processing (ICIP). Brussels, Belgium; 2011:1249-1252.
Piella G, Heijmans H: A new quality metric for image fusion. In IEEE International Conference on Image Processing (ICIP). Volume 3. Barcelona, Spain; 2003:173-176.
Brunet D, Vrscay ER, Wang Z: On the Mathematical Properties of the Structural Similarity Index (Preprint). University of Waterloo, Waterloo; 2011. http://www.math.uwaterloo.ca/~dbrunet/
Channappayya SS, Bovik AC, Caramanis C, Heath R: Design of linear equalizers optimized for the structural similarity index. IEEE Trans Image Process 2008, 17(6):857-872.
Wang Z, Li Q, Shang X: Perceptual image coding based on a maximum of minimal structural similarity criterion. IEEE Int Conf Image Process 2007, 2:II-121-II-124.
Rehman A, Wang Z: SSIM-based non-local means image denoising. In IEEE International Conference on Image Processing (ICIP). Brussels, Belgium; 2011:1-4.
Wang S, Rehman A, Wang Z, Ma S, Gao W: Rate-SSIM optimization for video coding. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Prague, Czech Republic; 2011:833-836.
Ou T, Huang Y, Chen H: A perceptual-based approach to bit allocation for H.264 encoder. SPIE Visual Communications and Image Processing 2010, 77441B.
Mai Z, Yang C, Kuang K, Po L: A novel motion estimation method based on structural similarity for H.264 inter prediction. In IEEE Int Conf Acoust Speech Signal Process. Volume 2. Toulouse; 2006:913-916.
Yang C, Wang H, Po L: Improved inter prediction based on structural similarity in H.264. In IEEE Int Conf Signal Process Commun. Volume 2. Dubai; 2007:340-343.
Zeyde R, Elad M, Protter M: On single image scale-up using sparse-representations. In Curves & Surfaces. Avignon, France; 2010:711-730.
Savitzky A, Golay MJE: Smoothing and differentiation of data by simplified least squares procedures. Anal Chem 1964, 36:1627-1639. 10.1021/ac60214a047
Tikhonov AN, Arsenin VY: Solutions of Ill-Posed Problems. V. H. Winston, Washington, DC; 1977.
Rudin LI, Osher S, Fatemi E: Nonlinear total variation based noise removal algorithms. Physica D 1992, 60:259-268. 10.1016/0167-2789(92)90242-F
Protter M, Elad M: Image sequence denoising via sparse and redundant representations. IEEE Trans Image Process 2009, 18:27-35.
Mairal J, Sapiro G, Elad M: Learning multiscale sparse representations for image and video restoration. Multiscale Model Simul 2008, 7:214-241. 10.1137/070697653
Candès EJ, Romberg J, Tao T: Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans Inf Theory 2006, 52(2):489-509.
Donoho DL: Compressed sensing. IEEE Trans Inf Theory 2006, 52(4):1289-1306.
Aharon M, Elad M, Bruckstein A: K-SVD: an algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans Signal Process 2006, 54(11):4311-4322.
Pati Y, Rezaiifar R, Krishnaprasad P: Orthogonal matching pursuit: recursive function approximation with applications to wavelet decomposition. In Twenty-Seventh Asilomar Conference on Signals, Systems and Computers. Volume 1. Pacific Grove, CA; 1993:40-44.
Brunet D, Vrscay ER, Wang Z: Structural similarity-based approximation of signals and images using orthogonal bases. In Proc Int Conf on Image Analysis and Recognition. Edited by: Kamel M, Campilho A. Volume 6111 of LNCS. Springer, Heidelberg; 2010:11-22.
Wang S, Rehman A, Wang Z, Ma S, Gao W: SSIM-inspired divisive normalization for perceptual video coding. In IEEE International Conference on Image Processing (ICIP). Brussels, Belgium; 2011:1657-1660.
Wainwright MJ, Simoncelli EP: Scale mixtures of Gaussians and the statistics of natural images. Adv Neural Inf Process Syst 2000, 12:855-861.
Lyu S, Simoncelli EP: Statistically and perceptually motivated nonlinear image representation. In Proc SPIE Conf Human Vision and Electronic Imaging XII. Volume 6492. San Jose, CA; 2007:649207.
Foley J: Human luminance pattern mechanisms: masking experiments require a new model. J Opt Soc Am A 1994, 11:1710-1719. 10.1364/JOSAA.11.001710
Watson AB, Solomon JA: Model of visual contrast gain control and pattern masking. J Opt Soc Am A 1997, 14:2379-2391. 10.1364/JOSAA.14.002379
Heeger DJ: Normalization of cell responses in cat striate cortex. Vis Neurosci 1992, 9:181-198.
Simoncelli EP, Heeger DJ: A model of neuronal responses in visual area MT. Vis Res 1998, 38:743-761. 10.1016/S0042-6989(97)00183-1
Li Q, Wang Z: Reduced-reference image quality assessment using divisive normalization-based image representation. IEEE J Sel Top Signal Process 2009, 3:202-211.
Rehman A, Wang Z: Reduced-reference SSIM estimation. In IEEE International Conference on Image Processing (ICIP). Hong Kong, China; 2010:289-292.
Malo J, Epifanio I, Navarro R, Simoncelli EP: Nonlinear image representation for efficient perceptual coding. IEEE Trans Image Process 2006, 15:68-80.
Portilla J, Strela V, Wainwright MJ, Simoncelli EP: Image denoising using scale mixtures of Gaussians in the wavelet domain. IEEE Trans Image Process 2003, 12:1338-1351. 10.1109/TIP.2003.818640
Wang Z, Simoncelli EP: Maximum differentiation (MAD) competition: a methodology for comparing computational models of perceptual quantities. J Vis 2008, 8(12):1-13. 10.1167/8.12.1
Yang J, Wang Z, Lin Z, Huang T: Coupled dictionary training for image super-resolution. 2011. http://www.ifp.illinois.edu/~jyang29/
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.