

  • Research
  • Open Access

Hyperspectral imagery super-resolution by sparse representation and spectral regularization

EURASIP Journal on Advances in Signal Processing 2011, 2011:87

  • Received: 31 March 2011
  • Accepted: 12 October 2011


Abstract

Owing to instrument limitations and imperfect imaging optics, it is difficult to acquire hyperspectral imagery with high spatial resolution. Low spatial resolution produces many mixed pixels, greatly degrades detection and recognition performance, and limits related applications in civil and military fields. As a powerful statistical image modeling technique, sparse representation can be used to analyze hyperspectral images efficiently. Hyperspectral imagery is intrinsically sparse in both the spatial and spectral domains, and super-resolution quality depends largely on whether this prior knowledge is utilized properly. In this article, we propose a novel hyperspectral imagery super-resolution method based on sparse representation and a spectral mixing model. Combining the sparse representation model with the hyperspectral image acquisition model, small patches of hyperspectral observations from different wavelengths can be represented as weighted linear combinations of a small number of atoms in a pre-trained dictionary. Super-resolution is then treated as a least squares problem with sparsity constraints. To maintain spectral consistency, we further introduce an adaptive regularization term into the sparse representation framework using the linear spectral mixing model. Extensive experiments validate that the proposed method achieves markedly better results than competing approaches.


Keywords

  • hyperspectral
  • sparse representation
  • super-resolution
  • linear mixing model

1. Introduction

Hyperspectral sensors acquire imagery in many contiguous and very narrow (e.g., 10 nm) spectral bands that typically span the visible, near-infrared, and mid-infrared portions of the spectrum (0.4-2.5 μm) [1]. A continuous radiance spectrum can be constructed for every pixel of a hyperspectral image, which makes it possible to identify land covers of interest by their spectral signatures [1, 2]. Hyperspectral remote sensing has been widely used in aerial and space imaging applications, including land use analysis, pollution monitoring, wide-area reconnaissance, and battlefield surveillance [3].

Although hyperspectral sensors acquire information with high spectral resolution, instrument limitations and imperfect imaging optics make it difficult to acquire imagery with high spatial resolution. Spatial resolution is a key parameter in many applications of space images (object detection and precise location, to name a few), so any improvement here is important [3-5]. Low spatial resolution produces many mixed pixels and greatly degrades detection and recognition performance, affecting related applications in civil and military fields; enhancing the spatial resolution of hyperspectral imagery is therefore of significant value. In practice, modifying the imaging optics or the sensor array is not a good option, and resolution enhancement by post-processing is a better way. Super-resolution image reconstruction offers the promise of overcoming the inherent resolution limitation of imaging sensors [4, 5]. Conventional approaches to generating a super-resolution image normally require as input multiple low-resolution images of the same scene, registered with sub-pixel accuracy [6], which is difficult to obtain in hyperspectral aerial and space remote sensing. Based on the hyperspectral imaging model [5], the super-resolution task is instead cast as the inverse problem of recovering the original high-resolution images from reasonable assumptions or prior knowledge about the observation model that maps the high-resolution image to the low-resolution ones [7]. For hyperspectral images, the assumptions and prior knowledge are not limited to the spatial domain but extend to the spectral domain, and how these priors are represented and utilized affects super-resolution performance. For example, to enforce spatial constraints, the total variation (TV) model has been used as a prior to regularize the super-resolution problem [8]. Guo et al. [9] proposed a hyperspectral super-resolution algorithm using spectral unmixing information and the TV model. Piecewise autoregressive (AR) models built on pixel neighborhoods can be used to enhance restoration performance [10], and the priors can also be represented in the Fourier-wavelet domain [11].

As an emerging image modeling technique, sparse representation has been used successfully in various image super-resolution applications. Its success owes to the development of l1-norm optimization techniques and to the fact that natural images are intrinsically sparse in some domain [7]. Natural images can be sparsely represented using a dictionary of atoms, which can be obtained by DCT/wavelet transforms or by learning. It has been shown that the sparsity of the coefficients can serve as a good prior for panchromatic image super-resolution [7, 12-15]. For hyperspectral images, however, the contents of different bands are tightly related. The correlation among bands can therefore be used as a prior, and, more importantly, spatial super-resolution should not change the endmembers of the scene. Based on these ideas, a novel hyperspectral image super-resolution method is proposed in this article. By analyzing the hyperspectral and panchromatic imaging processes, we show that a dictionary learned from panchromatic images can be used to represent hyperspectral images. The spectral mixing model is then used to test spectral consistency, and it is taken as a regularization term in the super-resolution process. Finally, we present results of experiments carried out on two datasets: a 118-band hyperspectral image captured under controlled illumination in the laboratory and a 224-band airborne visible/infrared imaging spectrometer (AVIRIS) image.

The rest of the article is organized as follows. Section 2 introduces the related works. Section 3 presents the super-resolution algorithm based on spatial sparsity and endmember regularization. Section 4 presents experimental results and Section 5 concludes the article.

2. Related studies

2.1. Sparse representation

It has been found that natural images can generally be coded by structured primitives, e.g., edges and line segments [7], and these primitives are qualitatively similar in form to simple cell receptive fields. Olshausen and Field [16] proposed to represent a natural image using a small number of basis functions chosen out of an over-complete code set. The sparse representation of a signal over an over-complete dictionary is achieved by optimizing an objective function that includes two terms: one measures the signal reconstruction error and the other measures the signal sparsity.

Suppose that the data (an image patch) x ∈ ℝ^n admits a sparse approximation over an over-complete dictionary Φ ∈ ℝ^{n×m} with m atoms (m > n). Then x can be approximately represented as a linear combination of a few atoms from Φ. The over-complete dictionary Φ and the sparse coefficients α are obtained by solving the following optimization problem:

$$\{\hat{\Phi}, \hat{\alpha}\} = \arg\min_{\Phi, \alpha} \|x - \Phi\alpha\|_2^2 + \lambda \|\alpha\|_1 \quad (1)$$

where $\|\cdot\|_2$ is the l2 norm. Each atom (column) of Φ is a unit vector in the l2 norm, and the atoms are learned by solving the minimization problem above. The l1-norm regularization constraint guarantees the sparseness of α, where $\|\alpha\|_1 = \sum_i |\alpha_i|$ and α = [α_1; ...; α_m]. The positive constant λ controls the trade-off between reconstruction accuracy and the sparseness of α. The cost function above is non-convex with respect to Φ and α jointly, but convex when either one is fixed. The problem can therefore be solved by alternating between learning Φ, using K-SVD [17], MOD [18], or gradient descent [19] with α fixed, and inferring α, using orthogonal matching pursuit (OMP) [20] with Φ fixed.
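As a concrete illustration of the sparse coding step, the following is a minimal OMP sketch in NumPy; the random dictionary, its size, and the sparsity level are illustrative choices, not the paper's trained dictionary:

```python
import numpy as np

def omp(phi, x, n_nonzero):
    """Greedy orthogonal matching pursuit: pick the atom most correlated
    with the residual, then re-fit least squares on the selected atoms."""
    residual = x.copy()
    support = []
    alpha = np.zeros(phi.shape[1])
    for _ in range(n_nonzero):
        # Atom with the largest absolute correlation to the current residual.
        idx = int(np.argmax(np.abs(phi.T @ residual)))
        if idx not in support:
            support.append(idx)
        coef, *_ = np.linalg.lstsq(phi[:, support], x, rcond=None)
        alpha[:] = 0.0
        alpha[support] = coef
        residual = x - phi @ alpha
    return alpha

rng = np.random.default_rng(0)
phi = rng.standard_normal((64, 256))
phi /= np.linalg.norm(phi, axis=0)          # unit-norm atoms, as in Eq. (1)
true = np.zeros(256)
true[[3, 40, 100]] = [1.5, -2.0, 0.7]       # a 3-sparse coefficient vector
x = phi @ true
alpha = omp(phi, x, n_nonzero=3)
```

For a signal that truly is a combination of a few atoms, the greedy search recovers the support and the least-squares refit drives the residual to (numerically) zero.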

2.2. Hyperspectral and panchromatic images representation

Hyperspectral imaging sensors divide a wavelength span (such as 0.4-1.2 μm) into a series of contiguous and narrow (such as 10 nm) spectral bands and measure the radiance of every band. Panchromatic imaging sensors measure all the radiance in the wavelength span (such as 0.4-1.2 μm) at once. For an arbitrary pixel in a hyperspectral or panchromatic image, the digital number (gray level) of the image between wavelengths λ1 and λ2 can be written as

$$DN = \int_{\lambda_1}^{\lambda_2} L(\lambda)\, K_i\, g_i(\lambda)\, d\lambda + B(\lambda_1 \sim \lambda_2) \quad (2)$$

where L(λ) is the spectral radiance at the sensor's entrance pupil and g_i(λ) is the spectral response function of the sensor between wavelengths λ1 and λ2. K_i is a sensor-related constant determined by the electronic gain, the detector saturation electrons, the quantization levels, the area of the entrance aperture, and so on. B(λ1 ~ λ2) is the noise caused by the dark signal. For the same scene imaged by different sensors, L(λ) is the same. Figure 1 shows the principle of the hyperspectral and panchromatic imaging processes.
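To make Eq. (2) concrete, the toy sketch below numerically integrates an assumed scene radiance against ideal box-shaped band responses: a panchromatic DN over 0.4-1.2 μm aggregates what the narrow 10 nm hyperspectral bands measure individually. The radiance L(λ), the constant K_i, and the dark term B are made-up values for illustration:

```python
import numpy as np

# Wavelength grid (micrometers) over the 0.4-1.2 um span from the text.
lam = np.linspace(0.4, 1.2, 801)
dlam = lam[1] - lam[0]
L = 1.0 + 0.5 * np.sin(6 * np.pi * lam)   # toy scene radiance L(lambda)
K = 1.0                                   # sensor constant K_i (gain etc.)
B = 0.0                                   # dark-signal term, ignored here

def dn(lo, hi):
    """DN = integral of L(lam) * K * g(lam) over [lo, hi) for an ideal
    box response g, approximated by a Riemann sum on the grid."""
    g = ((lam >= lo) & (lam < hi)).astype(float)
    return np.sum(L * K * g) * dlam + B

pan_dn = dn(0.4, 1.2)                                   # one wide pan band
hyper_dns = [dn(l, l + 0.01) for l in np.arange(0.4, 1.2, 0.01)]  # 10 nm bands
```

Because the box responses tile the panchromatic span, the sum of the narrow-band DNs approximates the panchromatic DN; this is the sense in which both sensor types observe the same L(λ).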
Figure 1

The principle of hyperspectral and panchromatic imaging.

Although image contents can vary greatly from image to image, it has been found that the micro-structures of images can be represented by a small number of structural primitives (e.g., edges, line segments, and other elementary features) [7]. Here, we assume that the spatial micro-structure information in hyperspectral bands can be derived from a series of panchromatic images. Based on this assumption, we can use panchromatic images to enhance the spatial resolution of hyperspectral images.

In the sparse representation scheme, a natural image can be coded using a small number of basis functions chosen out of an over-complete code set. We can train dictionaries using patches extracted from several training panchromatic images that are rich in edges and textures. Here, we train two dictionaries, using panchromatic image sets and hyperspectral image sets, as shown in Figures 2 and 3. The redundant DCT dictionary is shown on the left side of Figure 2, each of its atoms displayed as an 8 × 8 pixel image; this dictionary was also used as the initialization for all the training algorithms that follow. The globally trained dictionary is shown on the right side of Figure 2. It was produced by the K-SVD algorithm (180 iterations, using OMP for sparse coding), trained on a dataset of 100,000 8 × 8 patches. Those patches were taken from an arbitrary set of natural images (unrelated to the test images), some of which are shown in Figure 4.
Figure 2

Left: Overcomplete DCT dictionary. Right: Globally trained dictionary.

Figure 3

Dictionary trained from hyperspectral images.

Figure 4

Sample from the images used for training the global dictionary.

3. Hyperspectral imagery super-resolution (HISR) with sparsity based regularization

HISR aims to reconstruct a high-quality image X from its degraded measurement Y. The hyperspectral imagery acquisition process can be modeled as

$$Y = WHX + \upsilon \quad (3)$$

where W represents the down-sampling operator, H represents a blurring filter, and υ is additive noise [2]. In this article, we consider only the spatial down-sampling and blurring operators. An estimate of the high-quality image, $\hat{X}$, is obtained by solving the inverse problem of (3):

$$\hat{X} = \arg\min_X \|Y - WHX\|_F^2 \quad (4)$$

where $\|\cdot\|_F$ is the Frobenius norm. Estimating $\hat{X}$ by formula (4) is an ill-posed inverse problem, since for a given low-resolution input Y, infinitely many high-resolution images X satisfy the reconstruction constraint. To find a better solution, prior knowledge of hyperspectral imagery can be used to regularize the HISR problem. We regularize the problem via the following priors on small patches x of X.
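The degradation model of Eq. (3) can be sketched as follows, with a Gaussian kernel standing in for the blurring filter H and plain decimation for the down-sampling operator W; the kernel width, the factor of 3, and the test cube are illustrative choices:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(x, factor=3, sigma=1.0, noise_std=0.0, rng=None):
    """Model Y = W H X + v: blur each band spatially (H), decimate (W),
    then add Gaussian noise (v). x has shape (rows, cols, bands)."""
    blurred = gaussian_filter(x, sigma=(sigma, sigma, 0))  # spatial-only blur
    y = blurred[::factor, ::factor, :]                     # down-sampling W
    if noise_std > 0:
        rng = rng or np.random.default_rng(0)
        y = y + rng.normal(0.0, noise_std, y.shape)
    return y

x = np.random.default_rng(1).random((30, 30, 8))  # toy high-resolution cube
y = degrade(x, factor=3)
```

The forward model is easy to apply; the super-resolution problem is the hard inverse direction, which is why the priors below are needed.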

3.1. Spatial sparsity regularization

A panchromatic image is fused with a hyperspectral image to enhance the spatial resolution: structure information is extracted from the panchromatic image and injected into the hyperspectral image in a sparse representation framework. Here, we use a dictionary Φ = [ϕ_1, ..., ϕ_m] ∈ ℝ^{n×m}, trained from high-resolution images, to model the structure of panchromatic images. Based on this dictionary Φ, the hyperspectral imagery acquisition process can be modeled as

$$Y \approx WH\Phi\Lambda + \upsilon \quad (5)$$

where Λ = [α_1, ..., α_N] is the m × N coefficient matrix (N is the number of bands) in which most elements of each α_n are close to zero. The coefficients are estimated by

$$\hat{\Lambda} = \arg\min_{\Lambda} \|Y - WH\Phi\Lambda\|_F^2 + \lambda \|\Lambda\|_1 \quad (6)$$

That is, for the n-th band we have

$$\hat{\alpha}_n = \arg\min_{\alpha_n} \|y - wh\phi\alpha_n\|_2^2 + \lambda \|\alpha_n\|_1 \quad (7)$$

3.2. Spectral regularization

Solving (7) individually for each band does not guarantee compatibility between adjacent bands, so we enforce it through a spectral regularization. For each given pixel, a linear spectral mixing model can be used to regularize the solution space, which is very helpful for preserving spectral consistency and suppressing noise. Let m_i (1 ≤ i ≤ N) denote the set of N endmember material signals, arranged as the columns of the endmember matrix M. Let a_i be the abundance of endmember m_i; the abundances satisfy two constraints, non-negativity and normality, and we write the abundance values a_1, ..., a_N as a column vector a. For mathematical simplicity, it is common to assume a linear mixing model:

$$f = Ma + n \quad (8)$$

where f is the given spectral signal and n is noise. The most straightforward approach to solving the linear problem (8) is constrained least squares minimization:

$$\hat{a} = \arg\min_a \|f - Ma\|_2^2 \quad \text{subject to} \quad a_i \ge 0, \; \sum_i a_i = 1 \quad (9)$$
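Equation (9) is a small quadratic program; a sketch using SciPy's SLSQP solver illustrates the fully constrained abundance estimate. The endmember matrix M, the abundances, and the mixed pixel f are made-up values:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
M = rng.random((50, 4))                    # 4 toy endmember spectra, 50 bands
a_true = np.array([0.6, 0.3, 0.1, 0.0])   # ground-truth abundances
f = M @ a_true                             # noiseless mixed pixel, Eq. (8)

# Fully constrained least squares: a >= 0 and sum(a) = 1, as in Eq. (9).
res = minimize(lambda a: np.sum((f - M @ a) ** 2),
               x0=np.full(4, 0.25),        # start from a uniform mixture
               bounds=[(0.0, None)] * 4,
               constraints={"type": "eq", "fun": lambda a: a.sum() - 1.0},
               method="SLSQP")
a_hat = res.x
```

On noiseless data the solver recovers the true abundances up to solver tolerance; with noise, the simplex constraints keep the estimate physically meaningful.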
By incorporating the spectral regularization term into the sparse representation, we have

$$\hat{\alpha}_n = \arg\min_{\alpha_n} \|y - wh\phi\alpha_n\|_2^2 + \lambda \|\alpha_n\|_1 + \gamma \sum_{x_i \in x} \|x_{i,n} - Ma\|_2^2 \quad (10)$$

where γ is a constant balancing the contribution of the spectral regularization term. For a small patch, we write the third term $\sum_{x_i \in x} \|x_{i,n} - Ma\|_2^2$ as $\|(I - B)\phi\alpha_n\|_2^2$, where I is the identity matrix and

$$B(i, j) = \begin{cases} Ma_i, & \text{if } x_j \text{ is an element of } a \\ 0, & \text{otherwise} \end{cases} \quad (11)$$

Then, formula (10) can be rewritten as

$$\hat{\alpha}_n = \arg\min_{\alpha_n} f(\alpha_n) = \arg\min_{\alpha_n} \|y - wh\phi\alpha_n\|_2^2 + \lambda \|\alpha_n\|_1 + \gamma \|(I - B)\phi\alpha_n\|_2^2 \quad (12)$$

By letting

$$\tilde{y} = \begin{bmatrix} y \\ 0 \end{bmatrix}, \qquad L = \begin{bmatrix} wh \\ \sqrt{\gamma}\,(I - B) \end{bmatrix} \quad (13)$$

formula (12) can be rewritten as

$$\hat{\alpha}_n = \arg\min_{\alpha_n} \|\tilde{y} - L\phi\alpha_n\|_2^2 + \lambda \|\alpha_n\|_1 \quad (14)$$

This is a reweighted l1-minimization problem, which can be solved effectively by the iterative shrinkage algorithm [21].
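A minimal iterative shrinkage (ISTA-style) solver for problems of the form of Eq. (14) can be sketched as follows; the synthetic dictionary, the signal, λ, and the iteration count are illustrative choices:

```python
import numpy as np

def ista(A, y, lam=0.01, n_iter=1000):
    """Iterative shrinkage-thresholding for min ||y - A a||_2^2 + lam*||a||_1:
    a gradient step on the quadratic term followed by soft-thresholding."""
    t = 1.0 / (2 * np.linalg.norm(A, 2) ** 2)   # step size <= 1/Lipschitz
    a = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = a - t * 2 * A.T @ (A @ a - y)       # gradient of ||y - A a||^2
        a = np.sign(z) * np.maximum(np.abs(z) - lam * t, 0.0)  # shrinkage
    return a

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 120))
A /= np.linalg.norm(A, axis=0)                  # unit-norm columns
true = np.zeros(120)
true[[5, 30, 77]] = [1.0, -1.2, 0.8]            # sparse ground truth
y = A @ true
alpha = ista(A, y)
```

Each iteration is just a matrix-vector product plus an elementwise threshold, which is why shrinkage methods scale well to the patch-wise problems used here.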

4. Experiment results and analysis

To test the performance of the super-resolution reconstruction algorithm, two kinds of experiments are designed. In the first, the proposed algorithm is tested on data collected under controlled illumination; the spectral span of the images used for training and testing is the same in this scene. In the second, the algorithm is tested on the AVIRIS hyperspectral datasets. The super-resolution results of the proposed algorithm are compared with the interpolation technique and with the hyperspectral super-resolution method using the endmember-based TV model [9].

4.1. Evaluation measures

The most common measure to quantitatively assess reconstruction performance is the peak signal-to-noise ratio (PSNR). Because the peak signal value of each band can change significantly, the standard measure is biased toward bands with higher energy. To compensate for this, the standard PSNR definition is changed as follows:

$$\mathrm{PSNR} = 20 \log_{10} \frac{\sum_{b=1}^{K} S_{\mathrm{peak},b}}{\mathrm{MSE}} \quad (15)$$

where S_peak,b is the peak signal value in the b-th band and MSE is the mean square error between the ground truth and the estimated high-resolution signal.
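As one reasonable reading of the band-compensated definition above (the exact normalization of Eq. (15) is ambiguous in the source), a sketch that pools the per-band peaks before taking the log might look like this; it is not a verbatim re-implementation of the paper's measure:

```python
import numpy as np

def hyper_psnr(x, y):
    """Band-compensated PSNR: the average per-band peak of the reference
    cube x is compared against a single MSE over all bands. One plausible
    reading of Eq. (15), not the paper's exact formula."""
    mse = np.mean((x - y) ** 2)
    peak = np.mean(x.max(axis=(0, 1)))   # mean peak signal over bands
    return 20.0 * np.log10(peak / np.sqrt(mse))

rng = np.random.default_rng(0)
x = rng.random((32, 32, 8))              # toy reference cube
```

Averaging the per-band peaks keeps a single high-energy band from dominating the score, which is the stated motivation for the modified definition.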

Wang et al. [22] proposed the structural similarity measure SSIM for two panchromatic images. For hyperspectral images, the means and variances become vectors. The SSIM index between hyperspectral images x and y can be defined as [22]

$$\mathrm{SSIM}(x, y) = \frac{(2\mu_x^T \mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^T \mu_x + \mu_y^T \mu_y + C_1)(\sigma_x^T \sigma_x + \sigma_y^T \sigma_y + C_2)} \quad (16)$$
Zhang et al. [23] proposed a feature-similarity (FSIM) index for full-reference image quality assessment, in which phase congruency (PC) and gradient magnitude (GM) are used jointly. The FSIM index between images x and y can be defined as [9]

$$\mathrm{FSIM}(x, y) = \frac{\sum_{z \in \Omega} S_L(z)\, PC_m(z)}{\sum_{z \in \Omega} PC_m(z)} \quad (17)$$

where Ω is the whole image spatial domain and PC_m(z) = max{PC_x(z), PC_y(z)}, with PC_x(z) the phase congruency at a given position z of image x. S_L(z) is the gradient magnitude term at position z.

4.2. Indoor experiments

The data used in this experiment are a hyperspectral image captured at the Instrumentation and Sensing Laboratory (ISL) at the Beltsville Agricultural Research Center (16-bit BIL, 307 rows × 307 columns × 118 bands, wavelength range 426.9-853.0 nm, bandwidth 4 nm) under controlled illumination. The spectral range of the panchromatic images used to train the dictionary is 400-900 nm. Some of these panchromatic images come from the website of [24], some come from Google Earth, and some were captured by our research group with a Retiga EXi camera [25]. Figure 5 shows one scene and its corresponding spectral reflectance.
Figure 5

The scene of interest and spectral characteristics of the four materials in the scene.

In this experiment, the original ISL hyperspectral data are first down-sampled band-by-band with a nearest-neighbor filter. Linear spectral unmixing is then performed on the down-sampled images; a geometry-based N-FINDR algorithm is adopted for automatic extraction of endmembers (see [26] for details). Figure 6 shows the super-resolution results of bicubic interpolation, the endmember-based TV model, and the proposed algorithm, together with the corresponding PSNR, SSIM, and FSIM. From the images and evaluation measures in Figure 6, we conclude that the proposed algorithm achieves better PSNR and maintains more structure information. Figure 7 shows the spectral differences of the algorithms in different regions. Region 1 is a smooth background, where the three algorithms recover almost the same reflectance. Region 2 is an edge with sharp texture variation, and only the proposed algorithm recovers almost the same reflectance as the original high-resolution data; the reflectance from the endmember-based TV model deviates noticeably from the original data, and the reflectance error of bicubic interpolation is large. Region 3 is a relatively smooth edge region, where the reflectance differences are less obvious, but bicubic interpolation still shows a clear deviation. The pre-trained dictionary adapts to local image structure, and the spectral regularization maintains the spectral information well. Although endmember information is used in the endmember-based TV model, it performs poorly in edge regions: the TV model favors piecewise-constant image structures and tends to smooth out the fine details of an image.
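The N-FINDR endmember search used above can be sketched in miniature: greedily swap pixels into the endmember set whenever the swap grows the simplex volume. The 2-D toy data and sweep count below are illustrative; the actual algorithm of [26] runs on dimensionality-reduced hyperspectral pixels:

```python
import numpy as np

def nfindr(pixels, p, n_sweeps=3, seed=0):
    """Toy N-FINDR: pixels is (num_pixels, p-1), i.e., data already reduced
    to p-1 dimensions; returns indices of p volume-maximizing pixels."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(pixels), size=p, replace=False)

    def volume(ix):
        # |det [1; E^T]| is proportional to the simplex volume spanned by E.
        E = np.vstack([np.ones(p), pixels[ix].T])
        return abs(np.linalg.det(E))

    for _ in range(n_sweeps):                 # a few replacement sweeps
        for j in range(p):
            for i in range(len(pixels)):
                trial = idx.copy()
                trial[j] = i
                if volume(trial) > volume(idx):
                    idx = trial
    return np.sort(idx)

# Toy scene: 3 pure endmembers (triangle corners) plus 60 mixed pixels.
rng = np.random.default_rng(2)
corners = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
w = rng.dirichlet(np.ones(3), size=60)        # abundances: >= 0, sum to 1
pixels = np.vstack([corners, w @ corners])
endmembers = nfindr(pixels, p=3)
```

Because mixed pixels lie strictly inside the simplex of pure materials, the volume-maximizing set is exactly the set of pure pixels, which is what the greedy sweeps converge to here.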
Figure 6

The results for a 118-band indoor hyperspectral database. (a) The original high-resolution data at band 70, (b) low-resolution image of (a) (downsampled in the spatial domain by a factor of 3), (c) bicubic interpolation (PSNR = 12.10, SSIM = 0.343364, FSIM = 0.4745), (d) super-resolution by endmember-based TV model (PSNR = 13.30, SSIM = 0.366734, FSIM = 0.5822), (e) super-resolution by proposed algorithm (PSNR = 16.61, SSIM = 0.393864, FSIM = 0.6433).

Figure 7

Spectral difference by different super-resolution algorithms. (a) Different region location, (b) spectral difference at region 1, (c) spectral difference at region 2, (d) spectral difference at region 3.

4.3. Outdoor experiments

The dataset used in this experiment is the hyperspectral image of the Indian Pines test site (200 spectral bands in the 400-2500 nm range, 145 × 145 pixels), obtained by the AVIRIS sensor. Two dictionaries are trained for super-resolution of this kind of data: the first using panchromatic images covering 400-900 nm, the second using multispectral images covering 400-2400 nm, drawn from different remote sensing satellites such as LANDSAT and SPOT. Figure 8 shows the super-resolution results of bicubic interpolation, the endmember-based TV model, and the proposed algorithm, together with the corresponding PSNR, SSIM, and FSIM; the results again show that the proposed algorithm performs better. To illustrate the robustness of the proposed method to the training dataset, we train the two dictionaries from panchromatic and hyperspectral image sets (some of the panchromatic images are shown in Figure 4). Figure 8e, f shows the super-resolution results of the proposed algorithm with the different dictionaries. Comparing Figure 8e, f, we conclude that almost the same super-resolution results are obtained with either dictionary. This can be explained by the fact that both dictionaries are trained from sufficiently large sets of remote sensing images, so almost all micro-structures of natural scenes are captured in both, as supported by Figures 2 and 3. Figure 9 shows the spectral differences of the algorithms. In smooth regions with no land-cover variation, the three algorithms recover almost the same reflectance, but in regions with obvious land-cover variation, only the proposed algorithm recovers reflectance close to the original high-resolution data.
Figure 8

The results for a 220-band Indian Pines test site of the AVIRIS database. (a) The original high-resolution data at band 70, (b) low-resolution image of (a) (downsampled in the spatial domain by a factor of three), (c) bicubic interpolation (PSNR = 19.97, SSIM = 0.550082, FSIM = 0.5822), (d) super-resolution by endmember-based TV model (PSNR = 20.30, SSIM = 0.666734, FSIM = 0.6230), (e) super-resolution by the proposed algorithm using the first dictionary (PSNR = 23.17, SSIM = 0.760108, FSIM = 0.6730), (f) super-resolution by the proposed algorithm using the second dictionary (PSNR = 23.19, SSIM = 0.760148, FSIM = 0.6737).

Figure 9

Spectral difference by different super-resolution algorithms. (a) Spectral difference at smooth region, (b) spectral difference at edge region of different land covers.

5. Conclusion

We propose a novel hyperspectral super-resolution algorithm based on sparse representation and a spectral mixing model. Single hyperspectral image super-resolution is a typical ill-posed inverse problem, and prior knowledge of the data can be used to regularize it. Because the micro-structures of images can be represented as linear combinations of atoms from pre-trained dictionaries, we use the sparsity of the combination coefficients to solve the inverse problem. To further improve the spectral quality of the reconstructed images, we introduce a spectral-mixing-model-based regularization into the restoration framework; the mixing models are learned from the training dataset and used to regularize local image smoothness. Experimental results on two hyperspectral images show that the proposed approach outperforms state-of-the-art methods in PSNR as well as in visual and spectral quality.



Acknowledgements

This work was supported by the Natural Science Foundation of China under grants 61071172, 60602056, and 60634030, the Aviation Science Funds under grant 20105153022, and the Sciences Foundation of Northwestern Polytechnical University under grant JC200941.

Authors’ Affiliations

College of Automation, Northwestern Polytechnical University, Xi'an, 710072, China


References

  1. Gu Y, Zheng Y, Zhang J: Integration of spatial-spectral information for resolution enhancement in hyperspectral images. IEEE Trans Geosci Remote Sens 2008, 46(5):1347-1357.
  2. Akgun T, Altunbasak Y, Mersereau RM: Super-resolution reconstruction of hyperspectral images. IEEE Trans Image Process 2005, 14(11):1860-1875.
  3. Zhao Y, Zhang L, Kong SG: Band-subset-based clustering and fusion for hyperspectral imagery classification. IEEE Trans Geosci Remote Sens 2011, 49(2):747-756.
  4. Mianji FA, Zhang Y, Sulehria HK: Super-resolution challenges in hyperspectral imagery. Inf Technol J 2008, 7(7):1030-1036.
  5. Akgun T, Altunbasak Y, Mersereau RM: Super-resolution reconstruction of hyperspectral images. IEEE Trans Image Process 2005, 14(11):1860-1873.
  6. Choi M, Kim R, Nam M, Kim HO: Fusion of multispectral and panchromatic satellite images using the curvelet transform. IEEE Geosci Remote Sens Lett 2005, 2(2):136-139.
  7. Dong W, Zhang L, Shi G, Wu X: Image deblurring and super-resolution by adaptive sparse domain selection and adaptive regularization. IEEE Trans Image Process 2011, 20(7):1838-1857.
  8. Chan T, Esedoglu S, Park F, Yip A: Recent developments in total variation image restoration. In Mathematical Models of Computer Vision. Edited by Paragios N, Chen Y, Faugeras O. Springer, New York; 2005.
  9. Guo Z, Wittman T, Osher S: L1 unmixing and its application to hyperspectral image enhancement. In Proc SPIE Conference on Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XV. Orlando, Florida; 2009.
  10. Minami K, Kawata S, Minami S: Superresolution of Fourier transform spectra by autoregressive model fitting with singular value decomposition. Appl Optics 1985, 24(2):162-167.
  11. Robinson MD, Toth CA, Lo JY, Farsiu S: Efficient Fourier-wavelet super-resolution. IEEE Trans Image Process 2010, 19(10):2669-2681.
  12. Yang J, Wright J, Huang TS, Ma Y: Image super-resolution via sparse representation. IEEE Trans Image Process 2010, 19(11):2861-2873.
  13. Dong W, Shi G, Zhang L, Wu X: Super-resolution with nonlocal-regularized sparse representation. SPIE VCIP 2010, 7744.
  14. Dong W, Zhang L, Shi G, Wu X: Nonlocal back-projection for adaptive image enlargement. ICIP 2009.
  15. Dong W, Zhang L, Shi G: Centralized sparse representation for image restoration. ICCV 2011.
  16. Olshausen BA, Field DJ: Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature 1996, 381:607-609.
  17. Aharon M, Elad M, Bruckstein A: The K-SVD: an algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans Signal Process 2006, 54(11):4311-4322.
  18. Lee H, Battle A, Raina R, Ng AY: Efficient sparse coding algorithms. NIPS 2007, 19:801-808.
  19. Mairal J, Bach F, Ponce J, Sapiro G: Online learning for matrix factorization and sparse coding. J Mach Learn Res 2010, 11(1):19-60.
  20. Rubinstein R, Zibulevsky M, Elad M: Efficient implementation of the K-SVD algorithm using batch orthogonal matching pursuit. Technical Report, CS Technion; 2008.
  21. Daubechies I, Defrise M, De Mol C: An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Commun Pure Appl Math 2004, 57:1413-1457.
  22. Wang Z, Bovik AC, Sheikh HR, Simoncelli EP: Image quality assessment: from error visibility to structural similarity. IEEE Trans Image Process 2004, 13(4):600-612.
  23. Zhang L, Zhang L, Mou X, Zhang D: FSIM: a feature similarity index for image quality assessment. IEEE Trans Image Process 2011.
  24. Foster DH, Nascimento SMC, Amano K: Information limits on neural identification of coloured surfaces in natural scenes. Visual Neurosci 2004, 21:331-336.
  25. Zhao Y, Gong P, Pan Q: Object detection by spectropolarimetric imagery fusion. IEEE Trans Geosci Remote Sens 2008, 46(10):3337-3345.
  26. Plaza A, Martinez P, Perez R, Plaza J: A quantitative and comparative analysis of endmember extraction algorithms from hyperspectral data. IEEE Trans Geosci Remote Sens 2004, 42(3):650-663.


© Zhao et al; licensee Springer. 2011

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.