
# Fusion of hyperspectral and panchromatic images using multiresolution analysis and nonlinear PCA band reduction

Giorgio Antonino Licciardi^{1}, Muhammad Murtaza Khan^{2}, Jocelyn Chanussot^{1}, Annick Montanvert^{1}, Laurent Condat^{1} and Christian Jutten^{1}

*EURASIP Journal on Advances in Signal Processing* **2012**:207

https://doi.org/10.1186/1687-6180-2012-207

© Licciardi et al; licensee Springer. 2012

**Received:** 15 July 2011 | **Accepted:** 26 August 2012 | **Published:** 25 September 2012

## Abstract

This article presents a novel method for the enhancement of the spatial quality of hyperspectral (HS) images through the use of a high resolution panchromatic (PAN) image. Due to the high number of bands, the application of a pan-sharpening technique to HS images may result in an increase of the computational load and complexity. Thus, a dimensionality reduction preprocessing step, compressing the original number of measurements into a lower dimensional space, becomes mandatory. To address this problem, we propose a pan-sharpening technique combining dimensionality reduction and fusion, making use of non-linear principal component analysis (NLPCA) and Indusion, respectively, to enhance the spatial resolution of a HS image. We have tested the proposed algorithm on HS images obtained from the CHRIS-Proba sensor and a PAN image obtained from WorldView-2, and demonstrated that a reduction using NLPCA does not result in any significant degradation of the pan-sharpening results.

## Keywords

- Dimensionality Reduction
- Spectral Distortion
- Spectral Angle Mapper
- Minimum Noise Fraction
- Reduction Constraint

## Introduction

Generally, for satellite images, the highest spatial resolution is provided by the panchromatic (PAN) image. However, the drawback of the PAN image is that it carries no spectral information beyond that which is averaged within its bandpass. Unlike a PAN image, multispectral (MS) and in particular hyperspectral (HS) satellite images cover a wider spectral range with moderate to high spectral resolution. Compared to MS images, HS images have a better spectral resolution, which may translate into a very high number of bands with low spatial resolution. For better utilization and interpretation, HS images having both high spectral and high spatial resolution are desired. This can be achieved by making use of a high spatial resolution PAN image along with low resolution HS images in the context of pan-sharpening.

Pan-sharpening, or image fusion, is the process of improving the spatial quality of a low spatial resolution image (HS or MS) by fusing it with a high resolution PAN image[1, 2]. One of the main challenges in image fusion is to improve the spatial resolution, i.e., the spatial details, while preserving the original spectral information. This requires adding spatial details to each band of the image. Due to the high number of bands, the pan-sharpening of HS images results in an increased computational load and complexity. Thus, a dimensionality reduction preprocessing step, compressing the original number of measurements into a lower dimensional space, becomes mandatory[3]. Among image fusion methods, the most popular are those based on the substitution approach, such as the intensity-hue-saturation (IHS) transformation and principal component analysis (PCA)[4]. Of these, PCA approaches are commonly applied to HS images. PCA approaches are based on the assumption that the first principal component (PC) collects the information that is common to all the bands. The fusion is achieved by substituting the first PC with the PAN image, whose histogram has previously been matched with that of the first PC. In this way the spatial information is encapsulated in the histogram-matched PAN image, while the spectral information that is specific to each spectral band is contained in the other principal components. In this case the dimensionality reduction is performed by discarding the less relevant principal components. This means that the image resulting from the inverse transformation will not have the same information content as the original one, resulting in a strong spectral distortion. In this article, we propose a new approach to enhance the spatial resolution of a HS image combining non-linear principal component analysis (NLPCA) and Indusion for dimensionality reduction and fusion, respectively.
In particular, NLPCA is applied to reproject the original data into a lower-dimensional space; the derived nonlinear principal components are then enhanced according to the Indusion process. Finally, the inverse NLPCA reprojects the enhanced components back to the original dimensionality, resulting in a spatially enhanced HS image with spectral characteristics similar to those of the original one.

The article is organized as follows. In Sections “Dimensionality reduction” and “Image fusion” the NLPCA and Indusion are described, respectively. Section “Experimental results” presents experimental results obtained by applying the proposed approach to three different datasets, while conclusions are drawn in Section “Conclusion”.

## Dimensionality reduction

The main difficulty in processing HS images is that the number of bands can vary from several tens to several hundreds. Applying a pan-sharpening technique to each band of the HS image can lead to a dramatic increase in the computational time of the entire process. Hence, while enhancing the spatial resolution of a HS image, it is generally desirable to reduce the number of bands. Another important property is that the reduction method should allow a complete reconstruction of the original spectral information content. Consequently, the dimensionality reduction should avoid losing relevant spectral information from the original dataset.

In the literature, many methods have been developed to tackle the issue of the high dimensionality of HS data[5]. Summarizing, we may say that dimensionality reduction methods can be grouped into two classes: “feature selection” algorithms (which suitably select a sub-optimal subset of the original set of features while discarding the remaining ones) and “feature extraction” by data transformation (which projects the original feature space onto a lower-dimensional subspace that preserves most of the information)[6].

Feature selection techniques can generally be considered as a combination of a search algorithm and a criterion function[7–11]. The solution to the feature selection problem is provided by the search algorithm, which generates subsets of features and compares them on the basis of the criterion function. On the other hand, feature extraction techniques seek to reduce the dimensionality of the data by mapping the feature space onto a new lower-dimensional space. While feature selection is a simpler and more direct approach, feature extraction methods can be more effective in representing the information content in a lower-dimensional domain. Moreover, the loss of information inherent in a feature selection approach does not allow a good reconstruction of the original dataset. For this reason, it is not recommended to integrate feature selection methods into pan-sharpening processes.

The most common techniques to reduce the number of bands are the minimum noise fraction (MNF) transform, in which a set of transformed features is computed according to a signal-to-noise ratio optimization criterion; PCA, in which a set of uncorrelated transformed features is generated; and independent component analysis (ICA), a computational method for separating a multivariate signal into additive subcomponents under the assumption of mutual statistical independence of the non-Gaussian source signals[12, 13]. For these techniques, the dimensionality reduction is performed by discarding the components with the lowest information content. The components obtained are linearly uncorrelated, but the physical representation of the image may be lost. Moreover, being linear methods, ICA, PCA and MNF assume that the observed data set is composed of linear combinations of certain basis vectors. In[14, 15], it has been demonstrated that a nonlinear version of the common PCA, namely kernel PCA (KPCA), is capable of capturing part of the higher-order statistics, thus extracting more information from the original data set than PCA. In this case, the dimensionality reduction is once again performed by discarding the less relevant components. Other approaches are based on the characteristic of HS images that adjacent bands are spectrally highly correlated[7, 8].
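As an illustration of the transform-and-discard reduction performed by the linear methods above, the following minimal numpy sketch (with toy data and a hypothetical band count, not the article's datasets) reduces a synthetic HS matrix to its first principal components and reconstructs it; the discarded components translate directly into a residual reconstruction error:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy HS cube flattened to pixels x bands: 100 pixels, 20 bands,
# intrinsically rank-3 plus a small noise term.
X = rng.random((100, 3)) @ rng.random((3, 20)) + 0.01 * rng.random((100, 20))

mu = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
k = 3                                # keep only the first k components
pcs = (X - mu) @ Vt[:k].T            # reduced representation (the "PCs")
X_rec = pcs @ Vt[:k] + mu            # inverse transform back to 20 bands
rmse = np.sqrt(np.mean((X - X_rec) ** 2))
print(rmse)                          # small for near-rank-3 data, never zero
```

The same discard-and-invert pattern is what causes the spectral distortion discussed above when the retained components do not capture all band-to-band structure.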

In this article, we propose to perform the dimensionality reduction by using NLPCA, commonly referred to as nonlinear generalization of standard PCA.

NLPCA can be implemented through an autoassociative neural network (AANN)[16], a multilayer perceptron trained so that the input *x* is reproduced at the output $\widehat{x}$. Training an AANN is not an easy task because of the bottleneck layer, where the data have to be projected, or compressed, into a lower dimensional space *Z*. Since there are fewer units in the bottleneck layer than in the output layer, the bottleneck nodes must represent, or encode, the information obtained from the inputs so that the subsequent layers can reconstruct the input. The first part of the network represents the extraction function *F*_{encode}: *X* → *Z*, while the second part represents the inverse function, called the decoding function *F*_{decode}: *Z* → $\widehat{X}$. After the training of the AANN, the NLPCs can be extracted from the extraction subnet, while the reconstruction can be performed by the decoding subnet.

A topology with three hidden layers enables the AANN to perform non-linear mapping functions. In fact, if we designed an AANN with only one hidden layer of linear nodes, the projection onto the *Z*-dimensional subspace would correspond exactly to linear PCA. Even if the activation functions in the bottleneck nodes were sigmoidal, the projection onto the subspace would still be severely constrained: only linear combinations of the inputs, compressed by the sigmoid into the range [−1,1], could be represented. Therefore, the performance of an AANN with only one internal layer of sigmoidal nodes is often no better than that of linear PCA[17]. The proposed AANN can be trained by minimizing a sum-of-squares error of the form:

$$E = \frac{1}{2}\sum_{n=1}^{N}\sum_{k=1}^{d}\left\{ y_{k}\left(x^{(n)}\right) - x_{k}^{(n)} \right\}^{2}$$

where *y*_{ k } (*k* = 1, 2, …, *d*) is the output vector and *x*^{(n)} is the *n*-th training sample. The non-linear activation function *σ*(*x*) can be any continuous and monotonically increasing function with *σ*(*x*) → 1 as *x* → +*∞* and *σ*(*x*) → 0 as *x* → −*∞*. In this article the chosen function is the sigmoid, applied elementwise:

$$\sigma(x) = \frac{1}{1 + e^{-x}}$$

One of the main tasks in designing the AANN is the selection of the number of nodes in the hidden layers, and in particular in the bottleneck layer, that minimizes the loss of information of the entire network. This problem has been solved by a grid search algorithm that recursively varies the number of nodes and evaluates the respective reconstruction error. The topology with the lowest error is then selected. A general evaluation of the performance of this method, in terms of time and memory, cannot be given because it mainly depends on the number of nodes of the network considered at each iteration. However, each iteration of the grid search usually required less than a minute on average, resulting in a total computational time of less than 10 min for each of the experiments described in the following sections.
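The grid search over topologies can be sketched as follows. This is a minimal illustration under simplifying assumptions, not the authors' implementation: a tiny AANN (sigmoid mapping and demapping layers, linear bottleneck and output) trained by plain gradient descent on synthetic rank-3 "spectra", with the bottleneck width varied and the topology with the lowest sum-of-squares error retained:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def train_aann(X, z, hidden=8, epochs=400, lr=0.3, seed=0):
    """Tiny AANN (input -> hidden -> bottleneck z -> hidden -> output)
    trained by gradient descent; returns the sum-of-squares error."""
    rng = np.random.default_rng(seed)
    d, n = X.shape[1], X.shape[0]
    sizes = [d, hidden, z, hidden, d]
    W = [rng.normal(0, 0.1, (sizes[i], sizes[i + 1])) for i in range(4)]
    b = [np.zeros(sizes[i + 1]) for i in range(4)]
    for _ in range(epochs):
        h1 = sigmoid(X @ W[0] + b[0])    # mapping layer (sigmoid)
        zc = h1 @ W[1] + b[1]            # linear bottleneck: the NLPCs
        h2 = sigmoid(zc @ W[2] + b[2])   # demapping layer (sigmoid)
        out = h2 @ W[3] + b[3]           # linear output; target is X itself
        # Backpropagation of the sum-of-squares error.
        d_out = (out - X) / n
        d_h2 = (d_out @ W[3].T) * h2 * (1 - h2)
        d_zc = d_h2 @ W[2].T
        d_h1 = (d_zc @ W[1].T) * h1 * (1 - h1)
        grads = [(X.T @ d_h1, d_h1), (h1.T @ d_zc, d_zc),
                 (zc.T @ d_h2, d_h2), (h2.T @ d_out, d_out)]
        for i, (gW, gb) in enumerate(grads):
            W[i] -= lr * gW
            b[i] -= lr * gb.sum(axis=0)
    return 0.5 * np.sum((out - X) ** 2)

# Grid search: vary the bottleneck width, keep the lowest-error topology.
rng = np.random.default_rng(1)
X = rng.random((150, 3)) @ rng.random((3, 10))   # rank-3 data, 10 "bands"
X = (X - X.min()) / (X.max() - X.min())          # scale into [0, 1]
errors = {z: train_aann(X, z) for z in (1, 2, 3, 4)}
best = min(errors, key=errors.get)
print(best, errors[best])
```

A real search would also vary the mapping-layer width and use a proper optimizer; the loop structure (train each candidate, keep the lowest error) is the point being illustrated.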

Compared to linear reduction techniques, NLPCA has several advantages. First of all, while PCA and MNF can detect and discard only linear correlations among spectral bands, NLPCA detects both linear and nonlinear correlations. For the linear methods, the dimensionality reduction is performed by discarding the less relevant components, meaning that the reconstruction of the original image is strongly affected in terms of loss of information. Moreover, the first principal component usually retains information from all the original spectral bands, meaning that it may include features within the spectral bandpass of the PAN image and will thus result in strong spectral distortions. On the other hand, the information content of the NLPCs tends to be focused on relevant spectral signatures. For instance, in a HS image acquired over a landscape with water, vegetation and manmade structures, one of the derived NLPCs will contain all the information related to the vegetation, while another one will have only information about water surfaces. This allows NLPCA to be significantly more effective than PCA or MNF in the inverse operation of reconstructing the original spectral information[19]. Results similar to NLPCA can be obtained through the use of KPCA; however, as for the linear approaches, in this case too the dimensionality reduction is performed by discarding the less relevant components. This makes NLPCA preferable for the dimensionality reduction step[15].

In this article we propose the use of NLPCA for dimensionality reduction. The NLPCs obtained from the extraction function will be used as input for the pan-sharpening task. After enhancing the NLPCs, the enhanced HS image is eventually obtained with the decoding subnet of the AANN.

## Image fusion

The fusion of HS and PAN images is a useful technique for enhancing the spatial quality of low-resolution HS images. Generally, the fusion process can be subdivided into two steps. In the first step, the low resolution image is scaled up to the same size as the PAN image. Next, fusion is achieved by adding high-frequency content of the PAN image to the HS image. The literature on pan-sharpening methods is rich and diverse, encompassing methods based upon the discrete wavelet transform (DWT)[20, 21], Laplacian pyramids[22], the PCA transform[4], and the IHS transform[23]. The latter two fall into the category of component substitution methods and produce fused images with high spatial quality but suffering from spectral distortions[24, 25]. The images fused using DWT or Laplacian pyramid based methods are not as sharp as those obtained with component substitution methods, but they are spectrally consistent[24].

A desirable property for the upscaling step is the *reduction constraint*. Assuming *I* is the original image and *R* the reduction filter, with a reduction ratio *a* so that the upscaled image is *I*^{1/a}, the reduction constraint can be written as:

$$R\left(I^{1/a}\right) = I$$

i.e., reducing the magnified image by the same ratio must give back the original image. The set of all magnified images satisfying this condition is called the induced set, derived from the *reduction constraint*, and is defined as:

$$\Omega = \left\{ K : R(K) = I \right\}$$

The induction process[27] projects an upscaled image *J*, not adhering to the reduction constraint, onto the induced set *Ω*, so as to obtain an induced image *K* that belongs to the induced set. The Indusion process, deriving its name from Induction and Fusion, defines the induced image as:

$$K = J + A\left(I - R(J)\right)$$

where *R* and *A* are the Cohen–Daubechies–Feauveau (CDF) 9/7 tap bi-orthogonal reduction and magnification filter pair, respectively[28], and *J* is the upscaled version of the initial image *I* that does not adhere to the reduction constraint.
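The projection onto the induced set can be illustrated with simple stand-in operators. In this sketch, the reduction filter *R* is a block average and the magnification filter *A* is nearest-neighbour upsampling (assumptions replacing the CDF 9/7 pair), chosen so that *R*(*A*(*x*)) = *x*; the induced image then satisfies the reduction constraint exactly:

```python
import numpy as np

a = 4                                         # reduction ratio
R = lambda img: img.reshape(img.shape[0] // a, a,
                            img.shape[1] // a, a).mean(axis=(1, 3))
A = lambda img: np.kron(img, np.ones((a, a)))  # satisfies R(A(x)) == x

rng = np.random.default_rng(0)
I = rng.random((16, 16))                      # original low-resolution image
J = A(I) + rng.normal(0, 0.1, (64, 64))       # some magnification of I
K = J + A(I - R(J))                           # projection onto the induced set
print(np.allclose(R(K), I))                   # True: K satisfies R(K) = I
```

With the actual CDF 9/7 pair, *R*(*A*(*x*)) = *x* holds by construction of the bi-orthogonal filters, and the same one-line projection applies.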

The Indusion algorithm has been successfully tested on true and simulated images[26]. The main intent of this article is to evaluate the effectiveness of the Indusion approach combined with NLPCA dimensionality reduction applied to HS images. In particular, the original HS image is reduced in dimensionality by means of NLPCA, resulting in a few nonlinear principal components. Then, according to the Indusion method described above, the high-resolution spatial details contained in the PAN image are injected into the nonlinear components. Finally, the spatially enhanced components are reprojected back to the original space using the inverse NLPCA. The result is a spatially enhanced HS image having the same spectral characteristics as the original one.

In summary, the proposed scheme consists of the following steps:

1. Design/training of the AANN (selection of the best topology);
2. Extraction of the NLPCs through the encoding function;
3. Downscaling of the PAN image using the CDF 9 filter to fit the size of the NLPCs;
4. Histogram matching between the downscaled PAN and the NLPCs;
5. Upscaling of the NLPCs and of the histogram-matched PAN using the CDF 7 filter;
6. Histogram matching between the original PAN and the upscaled NLPCs;
7. Computation of the difference between the histogram-matched original PAN and the histogram-matched upscaled PAN;
8. Addition of the previously obtained difference to the upscaled NLPCs;
9. Reconstruction of the original spectral bands through the decoding function.
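Steps 3–8 above can be sketched for a single NLPC as follows. This is a hedged illustration: the histogram matching is standard CDF matching, while block averaging and nearest-neighbour upsampling are simplifying stand-ins for the CDF 9/7 filter pair used by the authors:

```python
import numpy as np

def hist_match(src, ref):
    """Match the histogram of `src` to that of `ref` (grayscale arrays)."""
    s_vals, s_idx, s_cnt = np.unique(src.ravel(), return_inverse=True,
                                     return_counts=True)
    r_vals, r_cnt = np.unique(ref.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_cnt) / src.size
    r_cdf = np.cumsum(r_cnt) / ref.size
    matched = np.interp(s_cdf, r_cdf, r_vals)
    return matched[s_idx].reshape(src.shape)

def indusion_band(nlpc, pan, a=4):
    """Steps 3-8 for one NLPC, with naive filters standing in for CDF 9/7."""
    h, w = pan.shape
    # 3. Downscale PAN to the NLPC size (block average as a stand-in).
    pan_low = pan.reshape(h // a, a, w // a, a).mean(axis=(1, 3))
    # 4. Histogram-match the downscaled PAN to the NLPC.
    pan_low_m = hist_match(pan_low, nlpc)
    # 5. Upscale the NLPC and the matched PAN (nearest-neighbour stand-in).
    up = lambda img: np.kron(img, np.ones((a, a)))
    nlpc_up, pan_up = up(nlpc), up(pan_low_m)
    # 6. Histogram-match the original PAN to the upscaled NLPC.
    pan_m = hist_match(pan, nlpc_up)
    # 7-8. Inject the high-frequency difference into the upscaled NLPC.
    return nlpc_up + (pan_m - pan_up)

rng = np.random.default_rng(1)
pan = rng.random((64, 64))     # toy PAN image
nlpc = rng.random((16, 16))    # toy nonlinear principal component
fused = indusion_band(nlpc, pan)
print(fused.shape)             # (64, 64)
```

Steps 1-2 and 9 (AANN training, encoding and decoding) wrap around this per-component loop.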

## Experimental results

In this article the proposed method was applied to three different images of increasing complexity. In Section “WorldView-2 dataset”, we discuss the results of applying the Indusion approach to a WorldView-2 dataset to assess the accuracy of the fusion method. In Section “CHRIS-Proba dataset”, the NLPCA + Indusion approach is discussed in the context of the fusion of a CHRIS-Proba dataset and a QuickBird PAN image. The last section evaluates the reduction-fusion algorithm on a Hyperion image. While the enlargement of an MS image, such as that provided by the WorldView-2 satellite, can be a relatively easy task, pan-sharpening of HS imagery can be more complex, not only because of the large number of HS bands, but also because the spectral coverage of the PAN image does not match the wavelength acquisition range of the HS bands. Moreover, a difference between the WorldView-2 dataset and the CHRIS-Proba and Hyperion datasets is that for the latter two the PAN image has been acquired by a different satellite sensor, i.e., QuickBird, with different acquisition dates and geometries. This introduces a test of the spectral fidelity of the fusion process with PAN images that do not cover the spectral range of the HS bands, verified by pan-sharpening the CHRIS-Proba and Hyperion images using a PAN image from the QuickBird sensor.

Once the pansharpened images have been obtained, the next phase is their quality assessment. Evaluating the quality of a fusion process is not a trivial task. For the quantitative quality assessment, it is generally recommended to make use of the synthesis property proposed by Wald[1]. This means that both the HS and the PAN images are degraded to a lower resolution before pan-sharpening is performed, so that the resultant pansharpened image is at the same resolution as the original reference and statistical comparisons can be made between the reference and the pansharpened images. If the reference image is at the same resolution as the fused image, we can compute the universal image quality index (UIQI), the relative dimensionless global error (ERGAS) and the spectral angle mapper (SAM) between the pansharpened and the reference image[1, 29]. UIQI can be seen as a similarity measure between the original image and the enhanced one, with 1 as the ideal value. ERGAS and SAM both produce positive values with an ideal value of 0; ERGAS values around 3 or lower are generally regarded as indicating a good enhancement. SAM is a useful measure of the spectral quality of the fusion process, while ERGAS and UIQI measure both spectral and spatial quality. On the other hand, if we reduce our images to too low a resolution, little significant information is left in the images, and the pansharpened images would not be at a useful resolution. For this reason, a qualitative analysis through visual inspection will also be discussed, without reducing the spatial resolution of the images before the fusion process.
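The three indexes can be computed as follows. This sketch uses their standard global definitions; note that UIQI is usually computed over sliding windows and averaged, which is omitted here for brevity (an assumption of the sketch):

```python
import numpy as np

def uiqi(ref, fus):
    """Universal image quality index of Wang & Bovik (global version)."""
    x, y = ref.ravel(), fus.ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 4 * cov * mx * my / ((vx + vy) * (mx**2 + my**2))

def sam_deg(ref, fus):
    """Mean spectral angle in degrees; images are (rows, cols, bands)."""
    r = ref.reshape(-1, ref.shape[-1])
    f = fus.reshape(-1, fus.shape[-1])
    cos = (r * f).sum(1) / (np.linalg.norm(r, axis=1) *
                            np.linalg.norm(f, axis=1))
    return np.degrees(np.arccos(np.clip(cos, -1, 1))).mean()

def ergas(ref, fus, ratio=4):
    """Relative dimensionless global error in synthesis."""
    err = 0.0
    for b in range(ref.shape[-1]):
        rmse = np.sqrt(np.mean((ref[..., b] - fus[..., b]) ** 2))
        err += (rmse / ref[..., b].mean()) ** 2
    return 100.0 / ratio * np.sqrt(err / ref.shape[-1])

ref = np.random.default_rng(2).random((32, 32, 5)) + 0.5
print(uiqi(ref, ref), sam_deg(ref, ref), ergas(ref, ref))  # ideal: 1, 0, 0
```

Comparing an image against itself returns the ideal values (1, 0, 0), matching the "Reference" rows in the tables below.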

### WorldView-2 dataset

WorldView-2, launched in October 2009, is a high-resolution 8-band MS satellite. WorldView-2 provides PAN and MS images at 50 cm and 2 m spatial resolution, respectively. The spectral coverages are 450–800 nm for the PAN image and 400–1,040 nm for the MS image. The image used in this experiment was collected over the Tor Vergata area, in the south-east part of Rome, Italy, in February 2010. The landscape represented is diverse, with large pastures, industrial areas and dense urban areas.

**Quality indexes for the fusion process applied on WorldView-2**

| | UIQI | ERGAS | SAM |
|---|---|---|---|
| Reference | 1 | 0 | 0 |
| Indusion | 0.9891 | 2.6298 | 2.0500 |

### CHRIS-Proba dataset

In the second test, a CHRIS-Proba dataset and a QuickBird PAN image at 1 m resolution have been used to test the proposed fusion technique. The two datasets were acquired during different periods of 2006 over the Tor Vergata area, south-east of Rome, Italy.

The PRoject for On Board Autonomy (PROBA) spacecraft is a mini-satellite operated by the European Space Agency. The main payload of PROBA is the Compact High Resolution Imaging Spectrometer (CHRIS), which can acquire up to 63 spectral bands (400–1,050 nm) with nominal spatial resolutions of 17 or 34 m at nadir, depending on the acquisition mode. The CHRIS sensor acquires data in five different modes (including aerosol, land cover, vegetation and coastal zones), which vary the number and location of spectral bands, the spatial resolution and the width of the swath. In addition, the CHRIS-PROBA system acquires HS images of the same scene with five different view angles during a single overpass: +55°, +36°, 0°, −36° and −55°. To test the proposed approach, a 0° image acquired in Land-Cover mode (18 bands) was used. The spectral ranges of the two images are very similar: 438–1,035 nm for CHRIS and 405–1,053 nm for the QuickBird PAN, respectively. The view angles of the two images are also very similar, thus avoiding geometric or registration distortions between the two images. The CHRIS image was atmospherically corrected and accurately co-registered to the PAN image for testing. In particular, the atmospheric correction consists of estimating the contribution of the atmospheric effects in terms of the total irradiance, the direct and diffuse transmittance, and the radiance due to scattering. The estimates were produced starting from simulations carried out with the libRadtran suite of libraries for the simulation of the radiative transfer balance[30]. A subset of 216×216 pixels of the CHRIS image and 864×864 pixels of the PAN image was selected according to the overlapping areas of the two acquisitions.
To obtain an enlargement ratio of 4, which, according to the description of the Indusion method, offers the best tradeoff between spatial enhancement and spectral distortion[26], the CHRIS image and the PAN image have been degraded to spatial resolutions of 20 and 5 m, respectively, adhering to the consistency criterion of quality assessment proposed by Wald[1].

**Quality indexes for the proposed fusion process applied on the entire CHRIS-Proba image (complete image) and on three different land cover types (pasture, industrial and dense urban fabric)**

| | UIQI | ERGAS | SAM |
|---|---|---|---|
| Reference | 1 | 0 | 0 |
| Indusion | 0.9627 | 1.6798 | 2.3751 |
| Complete image | 0.9229 | 2.6797 | 2.7413 |
| Pasture | 0.9373 | 2.2180 | 2.1511 |
| Industrial | 0.9313 | 2.2978 | 2.3871 |
| Dense urban fabric | 0.8616 | 4.1971 | 3.9812 |

**Quality indexes values for the reconstruction of the CHRIS-Proba image obtained by PCA and NLPCA, respectively**

| | UIQI | ERGAS | SAM |
|---|---|---|---|
| Reference | 1 | 0 | 0 |
| NLPCA | 0.9945 | 0.7953 | 0.8317 |
| PCA | 0.9903 | 1.0487 | 0.7467 |

**RMSE, mean difference and Standard deviation computed between the values of original and enhanced spectra for three different test areas (Grassland: 20 samples; Industrial: 8 samples; Residential: 12 samples)**

| | Grassland | Industrial | Residential |
|---|---|---|---|
| RMSE | 0.0539 | 0.1084 | 0.1474 |
| Mean | 0.1613 | 0.4193 | 0.2137 |
| Std. Dev. | 0.1437 | 0.0166 | 0.1658 |

### Hyperion dataset

The last experiment shows the results of the proposed technique applied to a Hyperion dataset acquired in 2002. Hyperion is a grating imaging spectrometer providing 220 HS bands (from 0.4 to 2.5 *μ*m) with a 30 m spatial resolution. Each image covers a 7.5 km by 100 km land area and provides detailed spectral mapping across all 220 channels with high radiometric accuracy. The test area was selected over the Rome city centre, with a landscape characterized mainly by dense urban areas and sparse vegetation. Before the extraction of the NLPCs, noisy bands not containing relevant information were discarded from the original dataset, leaving 168 spectrally unique, good-quality bands[32]. In this experiment, two different PAN images were used in order to evaluate the spectral distortion they introduce. First, we fused the Hyperion image with a QuickBird PAN image.

**Quality indexes for the fusion process applied on Hyperion**

| | UIQI | ERGAS | SAM |
|---|---|---|---|
| Reference | 1 | 0 | 0 |
| NLPCA | 0.9759 | 3.0622 | 1.3400 |
| Indusion | 0.9627 | 1.6798 | 2.3751 |
| Hyperion + QuickBird | 0.7941 | 4.7472 | 6.3233 |
| Hyperion + ALI-PAN | 0.9001 | 3.6562 | 1.4861 |

In the second case, the Hyperion image was fused with the PAN image acquired by the Advanced Land Imager (ALI), carried on the same EO-1 platform as Hyperion (0.48–0.69 *μ*m), with a spatial resolution of 10 m. Since this PAN image was acquired on a different date than the HS image, it contains isolated clouds that are not present in the HS image. This does not allow selecting the same reference area, so only a subset of the area is selected, as depicted in Figure 21. Moreover, to respect the algorithm requirement of a ratio of 4, we have degraded the HS image resolution to 40 m.

## Conclusion

In this article, we have presented a novel approach combining dimensionality reduction and a pan-sharpening technique to improve the spatial quality of HS images while preserving the spectral quality of the original HS image. The proposed method introduces a dimensionality reduction of HS images based on the nonlinear generalization of standard PCA, where the nonlinear principal components are obtained by an AANN. The use of the Indusion technique has been investigated in the framework of pan-sharpening. The innovation proposed by this technique lies in fusing the NLPCs, instead of the spectral bands, with the PAN image, in order to reduce the computational load of the pan-sharpening process. In the experimental section, Indusion was first tested on a WorldView-2 image to assess the performance of the fusion algorithm, while Indusion combined with NLPCA was tested on two HS images, a CHRIS-Proba and a Hyperion dataset, respectively. These latter two experiments were carried out by fusing NLPCs with PAN images collected under different geometries and acquired on different dates. Moreover, in both cases the PAN image does not cover the same spectral range as the original HS bands. A further experiment was made by fusing a Hyperion image with a PAN image acquired by the same satellite and having the same geometry as the HS image. Since the proposed method is applied to real data, it is important to consider that many sources of spectral distortion are introduced, such as errors in the registration phase and differences in the angles of view. Moreover, there are further negative contributions from objects that are detected in the PAN image but not in the HS image, where their spectral signature is mixed with the signatures of the surrounding objects. Finally, there are also some objects in one image that are not present in the other due to the different acquisition dates.
However, apart from these negative contributions, the results demonstrated a good behavior of the proposed method in mitigating the spectral distortions. Aside from the benefits of using Indusion for image fusion, the use of NLPCA for dimensionality reduction results in a better reconstruction of the original HS image. One main drawback of this technique, however, is the need to perform a grid search to find the neural network topology that leads to the best performance. In any case, the UIQI and ERGAS quality indexes quantitatively demonstrate the good performance of the proposed method on the CHRIS-Proba and Hyperion images. Visually, Indusion produced sharp and spectrally consistent images, while NLPCA reduced the dimensionality of the original dataset, minimizing the introduction of further spectral distortions while speeding up the pan-sharpening process.


## References

1. Wald L: *Data Fusion. Definitions and Architectures—Fusion of Images of Different Spatial Resolutions*. Presses de l'Ecole, Ecole des Mines de Paris, Paris, France; 2002.
2. Palsson F, Sveinsson JR, Benediktsson JA, Aanaes H: Classification of pansharpened urban satellite images. *IEEE J. Sel. Topics Appl. Earth Observations Remote Sens.* 2012, 5:281-297.
3. Eismann MT, Hardie RC: Hyperspectral resolution enhancement using high resolution multispectral imagery with arbitrary response function. *IEEE Trans. Geosci. Remote Sens.* 2005, 43(3).
4. Chavez PS, Sides SC, Anderson JA: Comparison of three different methods to merge multiresolution and multispectral data: Landsat TM and SPOT panchromatic. *Photogram. Eng. Remote Sens.* 1991, 57(3):295-303.
5. Serpico B, D'Inca M, Melgani F, Moser G: Comparison of feature reduction techniques for classification of hyperspectral remote sensing data. *Proc. SPIE Image Signal Process. Remote Sens.* 2003, 4885(8):347-358.
6. Lee C, Landgrebe DA: Feature extraction based on decision boundaries. *IEEE Trans. Pattern Anal. Mach. Intell.* 1993, 15(4):388-400.
7. Mitra P, Murthy CA, Pal SK: Unsupervised feature selection using feature similarity. *IEEE Trans. Pattern Anal. Mach. Intell.* 2002, 24(3):301-312.
8. Plaza A, Martinez P, Plaza J, Perez R: Dimensionality reduction and classification of hyperspectral image data using sequences of extended morphological transformations. *IEEE Trans. Geosci. Remote Sens.* 2005, 43(3):466-479.
9. He Y, Du Q, Chen G: Unsupervised hyperspectral band selection using graphics processing units. *IEEE J. Sel. Topics Appl. Earth Observations Remote Sens.* 2011, 4:660-668.
10. Sen J, Zhen J, Yuntao Q, Linlin S: Unsupervised band selection for hyperspectral imagery classification without manual band removal. *IEEE J. Sel. Topics Appl. Earth Observations Remote Sens.* 2012, 5:531-543.
11. Chein-I C, Su W, Keng-Hao L, Mann-Li C, Chinsu L: Progressive band dimensionality expansion and reduction via band prioritization for hyperspectral imagery. *IEEE J. Sel. Topics Appl. Earth Observations Remote Sens.* 2011, 4:591-614.
12. Jutten C, Herault J: Blind separation of sources, part I: an adaptive algorithm based on neuromimetic architecture. *Signal Process.* 1991, 24:1-10.
13. Green AA, Berman M, Switzer P, Craig MD: A transformation for ordering multispectral data in terms of image quality with implications for noise removal. *IEEE Trans. Geosci. Remote Sens.* 1988, 26:65-74.
14. Fauvel M, Chanussot J, Benediktsson JA: Kernel principal component analysis for the classification of hyperspectral remote sensing data over urban areas. *EURASIP J. Adv. Signal Process.* 2009, 2009:1-14.
15. Licciardi G, Marpu PR, Chanussot J, Benediktsson JA: Linear versus nonlinear PCA for the classification of hyperspectral data based on the extended morphological profiles. *IEEE Geosci. Remote Sens. Lett.* 2012, 9:447-451.
16. Kramer MA: Nonlinear principal component analysis using autoassociative neural networks. *AIChE J.* 1991, 37:233-243.
17. Bishop C: *Neural Networks for Pattern Recognition*. Oxford University Press, London; 1995.
18. Scholz M, Kaplan F, Guy L, Kopka J, Selbig J: Non-linear PCA: a missing data approach. *Bioinformatics* 2005, 21:3887-3895.
19. Licciardi G, Frate FD: Pixel unmixing in hyperspectral data by means of neural networks. *IEEE Trans. Geosci. Remote Sens.* 2011, 49:4163-4172.
20. Nunez J, Otazu X, Fors O, Prades A, Pala V, Arbiol R: Multiresolution-based image fusion with additive wavelet decomposition. *IEEE Trans. Geosci. Remote Sens.* 1999, 37(3):1204-1211.
21. Ranchin T, Wald L: Fusion of high spatial and spectral resolution images: the ARSIS concept and its implementation. *Photogramm. Eng. Remote Sens.* 2000, 66:49-61.
22. Aiazzi B, Alparone L, Baronti S, Garzelli A: Context-driven fusion of high spatial and spectral resolution images based on oversampled multiresolution analysis. *IEEE Trans. Geosci. Remote Sens.* 2002, 40:2300-2312.
23. Tu TM, Huang PS, Hung CL, Chang CP: A fast intensity-hue-saturation fusion technique with spectral adjustment for IKONOS imagery. *IEEE Geosci. Remote Sens. Lett.* 2004, 1(4):309-312.
24. Alparone L, Wald L, Chanussot J, Thomas C, Gamba P, Bruce LM: Comparison of pansharpening algorithms: outcome of the 2006 GRS-S data fusion contest. *IEEE Trans. Geosci. Remote Sens.* 2007, 45(10):3012-3021.
25. Thomas C, Ranchin T, Wald L, Chanussot J: Synthesis of multispectral images to high spatial resolution: a critical review of fusion methods based on remote sensing physics. *IEEE Trans. Geosci. Remote Sens.* 2008, 46:1301-1312.
26. Khan MM, Chanussot J, Condat L, Montanvert A: Indusion: fusion of multispectral and panchromatic images using the induction scaling technique. *IEEE Geosci. Remote Sens. Lett.* 2008, 5:98-102.
27. Condat L, Montanvert A: A framework for image magnification: induction revisited. *Proc. ICASSP* 2005, pp. 845-848.
28. Cohen A, Daubechies I, Feauveau JC: Biorthogonal bases of compactly supported wavelets. *Commun. Pure Appl. Math.* 1992, 45(5):485-560.
29. Wang Z, Bovik AC: A universal image quality index. *IEEE Signal Process. Lett.* 2002, 9(3):81-84.
30. Mayer B, Kylling A: Technical note: the libRadtran software package for radiative transfer calculations—description and example of use. *Atmos. Chem. Phys.* 2005, 5:1855-1877.
31. Villa A, Chanussot J, Benediktsson JA, Jutten C: Spectral unmixing for the classification of hyperspectral images at a finer spatial resolution. *IEEE J. Sel. Top. Signal Process.* 2011, 5(3):521-533.
32. Datt B, McVicar TR, Van Niel TG, Jupp DLB, Pearlman JS: Preprocessing EO-1 Hyperion hyperspectral data to support the application of agricultural indexes. *IEEE Trans. Geosci. Remote Sens.* 2003, 41(6):1246-1259.

## Copyright

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.