- Review
- Open Access
Survey of hyperspectral image denoising methods based on tensor decompositions
- Tao Lin^{1} and
- Salah Bourennane^{1}
https://doi.org/10.1186/1687-6180-2013-186
© Lin and Bourennane; licensee Springer. 2013
- Received: 16 August 2013
- Accepted: 2 December 2013
- Published: 17 December 2013
Abstract
A hyperspectral image (HSI) is always modeled as a three-dimensional tensor, with the first two dimensions indicating the spatial domain and the third dimension indicating the spectral domain. The classical matrix-based denoising methods require rearranging the tensor into a matrix, filtering noise in the column space, and finally rebuilding the tensor. To avoid the rearranging and rebuilding steps, tensor-based denoising methods can process the HSI directly by employing multilinear algebra. This paper presents a survey of three recently proposed HSI denoising methods and shows their performance in reducing noise. The first method is the multiway Wiener filter (MWF), an extension of the Wiener filter to data tensors based on the TUCKER3 decomposition. The second is the PARAFAC filter, which removes noise by truncating the PARAFAC decomposition at a lower rank K. The third is the combination of the multidimensional wavelet packet transform (MWPT) and MWF (MWPT-MWF), which models each coefficient set as a tensor and then filters each tensor by applying MWF. MWPT-MWF was proposed to preserve rare signals in the denoising process, which cannot be preserved well by the MWF or PARAFAC filters. Real-world HYDICE HSI data are used in the experiments to assess these three tensor-based denoising methods, and the performance of each method is analyzed in two aspects: signal-to-noise ratio and improvement of subsequent target detection results.
Keywords
- Mean Square Error
- Hyperspectral Image
- Wavelet Packet
- Small Target
- Multilinear Algebra
1 Review
1.1 Introduction
Hyperspectral images (HSI) have attracted increasing interest in recent years in different domains, such as geography, agriculture, and the military [1–3], where they are used for target detection [4] or classification [5] to find objects or materials of interest on the ground. Unfortunately, during acquisition, an HSI is usually impaired by several types of noise, such as thermal noise [6], photonic noise [7], and stripe noise [8]. Therefore, denoising methods [9–13] have become a critical step for improving subsequent target detection and classification in remote sensing imaging applications [14].
In HSI processing, images are modeled as a three-dimensional tensor, i.e., two spatial dimensions and one spectral dimension. The classical denoising methods [15–18] rearrange the HSI into a matrix whose columns contain the spectral signatures of all the pixels, then estimate the signal subspace by methods based on the analysis of second-order statistics, and finally rebuild the original HSI structure after processing.
Since matrix-based techniques cannot take full advantage of the joint spatial-spectral structure of hyperspectral images, new techniques were developed to treat the HSI as a whole entity. For example, an HSI was treated as a hypercube in order to take into account the correlation among different bands [19, 20], and tensor algebra was brought in to jointly analyze the 3D HSI. In this paper, we mainly focus on applying tensor algebra to reduce noise in HSIs. Unlike the matrix-based denoising methods, which rely on matrix algebra, the recently proposed tensor-based denoising methods use multilinear algebra to analyze the HSI tensor directly. It is well known that the singular value decomposition (SVD) is central to matrix analysis. Similarly, two important tensor decompositions, TUCKER3 and PARAFAC, play significant roles in analyzing tensors. Therefore, for the sake of coherence with the recently developed method that combines the multidimensional wavelet packet transform and the TUCKER3 decomposition, we focus on comparative methods based on multilinear algebra: each of the three methods involves either the TUCKER3 or the PARAFAC decomposition.
TUCKER3 decomposition, also known as lower rank-(K _{1},…,K _{ N }) tensor approximation (LRTA-(K _{1},…,K _{ N })), was first used as a multimode PCA, which uses the first K _{ n } PCA components in mode n, n=1,…,N, to restore the multidimensional signal. The LRTA-(K _{1},…,K _{ N }) has been employed for seismic wave separation [21], face recognition [22], and color image denoising [23]. Although the LRTA-(K _{1},…,K _{ N }) can obtain good denoising results, it is not an optimal solution in terms of the mean squared error (MSE). The multidimensional Wiener filter (MWF) has been proposed to overcome this drawback of LRTA-(K _{1},…,K _{ N }) [24]. MWF calculates the filter in each mode under the criterion of minimizing the MSE between the desired signal and the estimated signal; it can therefore be understood as an optimal LRTA-(K _{1},…,K _{ N }). Moreover, MWF can also be understood as an extension of the classical matrix-based Wiener filter to the tensor model by using multilinear algebra tools. MWF has been used in seismic wave denoising [24] and HSI denoising [12, 25] with good results. Recently, a statistical criterion has been adapted to estimate the rank of the signal subspace in each mode [13], which makes MWF an automatic method for reducing noise in the data.
Apart from TUCKER3, the PARAFAC [26] decomposition, also known as CANDECOMP [27], is another way to decompose a tensor into lower rank factors. Unlike TUCKER3, PARAFAC decomposes a tensor into a sum of rank-one tensors, and only one rank K needs to be estimated for the tensor. Moreover, the PARAFAC decomposition is unique when the rank K is greater than one, whereas the TUCKER3 decomposition is not. PARAFAC decomposition has recently been applied to the chemical sciences [28], array processing [29], telecommunications [30], and HSI denoising [14]. In comparison with MWF, reference [31] shows the potential of PARAFAC in HSI denoising. However, there is no efficient way to estimate the PARAFAC rank, which limits its use in automatic denoising.
In an HSI, a rare signal is one that is represented by only a small number of pixels, while an abundant signal is one that covers a large number of pixels compared to a rare signal [17]. MWF and PARAFAC treat an HSI as a whole entity in the denoising operation; therefore, the abundant signals and the rare signals are processed together, which introduces a drawback: the rare signals may be unintentionally removed. In fact, the energy of a rare signal is so weak compared to that of the abundant signal that the estimated signal subspace cannot include the rare signal, and as a result, the rare signal is removed. MWPT-MWF (multidimensional wavelet packet transform (MWPT) with multiway Wiener filter) has been proposed to overcome this drawback of MWF and PARAFAC [32]. Instead of treating the HSI as a whole entity, MWPT-MWF first decomposes the HSI into several coefficient sets, also called components, by employing MWPT, so that the abundant signal and the rare signal can be separated. After this step, each component is filtered by MWF automatically. Because the rare signal and the abundant signal are separated into different components, the signal subspace in each component can be estimated more exactly.
The goal of this paper is to present a survey of the tensor-based denoising methods applied to filtering HSIs. Some recent simulations and comparative results on a real-world HYDICE HSI are also presented. The remainder of this paper is organized as follows: Section 1.2 briefly introduces some basic knowledge about multilinear algebra. Section 1.3 introduces the signal model used in this paper. Sections 1.4, 1.5, and 1.6 present the recently proposed denoising methods MWF, PARAFAC, and MWPT-MWF, respectively. Section 1.7 supplies some comparative denoising and detection results. Finally, Section 2 concludes this paper.
1.2 Basics on tensor tools and multilinear algebra
1.2.1 Tensor model
where ∘ indicates the outer product [34].
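The outer product is the building block of the rank-one tensors used throughout this survey. A minimal numpy sketch (the array values are illustrative only):

```python
import numpy as np

# A rank-one third-order tensor is the outer product of three vectors:
# T[i, j, k] = a[i] * b[j] * c[k].
a = np.array([1.0, 2.0])
b = np.array([3.0, 4.0])
c = np.array([5.0, 6.0])

# einsum expresses the outer product a o b o c directly.
T = np.einsum('i,j,k->ijk', a, b, c)

print(T.shape)      # (2, 2, 2)
print(T[1, 0, 1])   # a[1]*b[0]*c[1] = 2*3*6 = 36.0
```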
1.2.2 Multilinear algebra tools
1.2.2.1 n-mode unfolding
denotes the n-mode unfolding matrix of a tensor $\mathcal{X}\in {\mathbb{R}}^{{I}_{1}\times {I}_{2}\times \dots \times {I}_{N}}$, where M _{ n }=I _{ n+1}⋯I _{ N } I _{1}⋯I _{ n−1}. The columns of X _{ n } are the I _{ n }-dimensional vectors obtained from $\mathcal{X}$ by varying index i _{ n } while keeping the other indices fixed. Here, we define the n-mode rank K _{ n } as the rank of the n-mode unfolding matrix, i.e., K _{ n }=rank (X _{ n }).
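The unfolding and the n-mode rank can be sketched with numpy. Note that the exact column ordering convention varies in the literature; this sketch uses numpy's default C ordering, which does not affect the n-mode rank:

```python
import numpy as np

def unfold(X, n):
    """n-mode unfolding: the mode-n fibers of X become the columns,
    giving a matrix of shape (I_n, product of the remaining sizes)."""
    return np.moveaxis(X, n, 0).reshape(X.shape[n], -1)

X = np.arange(24.0).reshape(2, 3, 4)
X1 = unfold(X, 1)
print(X1.shape)  # (3, 8): 3 rows, 2*4 = 8 columns

# The n-mode rank K_n is the rank of the n-mode unfolding matrix.
K1 = np.linalg.matrix_rank(X1)
print(K1)  # 2 for this linearly progressing test tensor
```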
1.2.2.2 n-mode product
where $\mathcal{C}\in {\mathbb{R}}^{{I}_{1}\times {I}_{2}\times \dots \times {I}_{n-1}\times J\times {I}_{n+1}\times \dots \times {I}_{N}}$.
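The n-mode product $\mathcal{C}=\mathcal{X}{\times}_{n}\mathbf{A}$ can be sketched by bringing mode n to the front, multiplying, and restoring the axis order (a minimal numpy implementation under the same C-ordering assumption as above):

```python
import numpy as np

def nmode_product(X, A, n):
    """C = X x_n A: multiply every mode-n fiber of X by the matrix A,
    changing the size of mode n from I_n to J = A.shape[0]."""
    Y = np.moveaxis(X, n, 0)               # bring mode n to the front
    Y = np.tensordot(A, Y, axes=(1, 0))    # apply A to each mode-n fiber
    return np.moveaxis(Y, 0, n)            # restore the original mode order

X = np.random.rand(2, 3, 4)
A = np.random.rand(5, 3)
C = nmode_product(X, A, 1)
print(C.shape)  # (2, 5, 4): mode 1 grew from 3 to 5

# Sanity check against the index formula C[i,j,k] = sum_m A[j,m] X[i,m,k].
assert np.allclose(C, np.einsum('jm,imk->ijk', A, X))
```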
1.3 Problem formulation and signal modeling
In this paper, we assume that the noise $\mathcal{N}$ is zero-mean white Gaussian noise, independent of the signal $\mathcal{X}$. The aim is to estimate the desired signal $\mathcal{X}$ from the noisy HSI $\mathcal{R}$.
1.4 Multiway Wiener filtering
1.4.1 Denoising model
From the signal processing point of view, the n-mode product is an n-mode filtering of the data tensor; therefore, H _{ n } is called the n-mode filter.
Then, the optimal n-mode filters are the ones which can minimize the MSE given in (6).
1.4.2 Calculation of H _{ n }
1.4.3 Estimation of K _{ n }
where $\{{\lambda}_{i}^{\gamma},\phantom{\rule{1em}{0ex}}i=1,\dots ,{I}_{n}\}$ are the eigenvalues of ${\gamma}_{\mathit{\text{RR}}}^{\left(n\right)}$, M _{ n } is the number of columns of ${\gamma}_{\mathit{\text{RR}}}^{\left(n\right)}$, and k _{ n } ranges over {1,…,I _{ n }−1}. The estimated n-mode rank K _{ n } is the value of k _{ n } that minimizes the AIC criterion.
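The survey does not reproduce the full AIC expression. A common choice in this line of work is the Wax-Kailath form, sketched below under that assumption, taking the eigenvalues and the column count M _{ n } defined above as inputs:

```python
import numpy as np

def aic_rank(eigvals, M):
    """Estimate the n-mode signal-subspace rank K_n by minimizing AIC(k)
    over k = 1, ..., I_n - 1. This uses the Wax-Kailath form of AIC,
    which is an assumption here: the survey only states that an AIC
    criterion is minimized."""
    lam = np.sort(np.asarray(eigvals))[::-1]
    I = len(lam)
    aic = np.empty(I - 1)
    for k in range(1, I):
        tail = lam[k:]                        # presumed noise eigenvalues
        g = np.exp(np.mean(np.log(tail)))     # geometric mean
        a = np.mean(tail)                     # arithmetic mean
        aic[k - 1] = -2 * M * (I - k) * np.log(g / a) + 2 * k * (2 * I - k)
    return int(np.argmin(aic)) + 1

# Two dominant eigenvalues over a flat noise floor -> estimated rank 2.
eigs = [10.0, 8.0, 1.0, 1.0, 1.0, 1.0]
print(aic_rank(eigs, M=100))  # 2
```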
1.4.4 ALS algorithm
- 1. Input: data tensor $\mathcal{R}$.
- 2. Initialization, k=0: ${\mathcal{X}}^{0}=\mathcal{R}\iff {\mathbf{H}}_{n}={\mathbf{I}}_{{I}_{n}},\ \forall n=1,2,3$, where ${\mathbf{I}}_{{I}_{n}}$ is the I _{ n }×I _{ n } identity matrix.
- 3. ALS loop: repeat until convergence, that is, for example, while $\parallel {\mathcal{X}}^{k+1}-{\mathcal{X}}^{k}\parallel >\epsilon $:
  - (a) Estimation of K _{ n }, n=1,2,3: ${K}_{n}={argmin}_{{k}_{n}}\left[\text{AIC}\left({k}_{n}\right)\right],{k}_{n}=1,\dots ,{I}_{n}-1.$
  - (b) Estimation of ${\mathbf{H}}_{n}^{k+1}$ for n=1,2,3:
    - (i) ${\mathcal{X}}_{n}^{k}=\mathcal{R}{\times}_{p}{\mathbf{H}}_{p}^{k+1}{\times}_{q}{\mathbf{H}}_{q}^{k}$, with p,q=1,2,3, p,q≠n and p<q;
    - (ii) ${\mathbf{H}}_{n}^{k+1}=\underset{{\mathbf{Z}}_{n}}{argmin}{\parallel \mathcal{X}-{\mathcal{X}}_{n}^{k}{\times}_{n}{\mathbf{Z}}_{n}\parallel}^{2}$ subject to ${\mathbf{\text{Z}}}_{n}\in {\mathbb{R}}^{{I}_{n}\times {I}_{n}}$.
  - (c) Multidimensional Wiener filtering: ${\mathcal{X}}^{k+1}=\mathcal{R}{\times}_{1}{\mathbf{H}}_{1}^{k+1}{\times}_{2}{\mathbf{H}}_{2}^{k+1}{\times}_{3}{\mathbf{H}}_{3}^{k+1}$.
  - (d) $k\leftarrow k+1$.
- 4. Output: estimated signal tensor $\widehat{\mathcal{X}}=\mathcal{R}{\times}_{1}{\mathbf{H}}_{1}^{{k}_{c}}{\times}_{2}{\mathbf{H}}_{2}^{{k}_{c}}{\times}_{3}{\mathbf{H}}_{3}^{{k}_{c}}$, where k _{ c } is the convergence iteration index.
As the calculation of the n-mode filter H _{ n } in step 3(b) utilizes the filters in the other modes {H _{ i }, 1≤i≤3 and i≠n}, the MWF takes into account the relationships between elements in all modes of the data set.
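The structure of this ALS loop can be illustrated with a simplified stand-in: replacing each Wiener filter H _{ n } by a rank-K _{ n } orthogonal projector turns the loop into the LRTA-(K _{1},K _{2},K _{3}) approximation discussed in the introduction. The full MWF additionally weights the projection by the estimated signal and noise powers; this sketch deliberately omits that part:

```python
import numpy as np

def unfold(X, n):
    return np.moveaxis(X, n, 0).reshape(X.shape[n], -1)

def nmode_product(X, A, n):
    return np.moveaxis(np.tensordot(A, np.moveaxis(X, n, 0), axes=(1, 0)), 0, n)

def als_lrta(R, ranks, iters=5):
    """ALS loop with each n-mode filter replaced by a projector onto the
    K_n leading left singular vectors of the n-mode unfolding. A simplified
    illustration of the loop structure, not the full Wiener solution."""
    X = R.copy()
    for _ in range(iters):
        H = []
        for n, K in enumerate(ranks):
            U, _, _ = np.linalg.svd(unfold(X, n), full_matrices=False)
            H.append(U[:, :K] @ U[:, :K].T)   # rank-K_n projector for mode n
        X = R
        for n, Hn in enumerate(H):            # X^{k+1} = R x_1 H_1 x_2 H_2 x_3 H_3
            X = nmode_product(X, Hn, n)
    return X

# Low multilinear-rank signal plus white Gaussian noise.
rng = np.random.default_rng(0)
signal = rng.standard_normal((2, 2, 2))
for n, s in enumerate((8, 8, 8)):
    signal = nmode_product(signal, rng.standard_normal((s, 2)), n)
noisy = signal + 0.05 * rng.standard_normal(signal.shape)
denoised = als_lrta(noisy, ranks=(2, 2, 2))
print(np.linalg.norm(denoised - signal) < np.linalg.norm(noisy - signal))  # True
```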
1.5 PARAFAC filtering
1.5.1 Denoising model
Nonetheless, it is worth noting that the criterion of PARAFAC is the squared error between the estimate $\widehat{\mathcal{X}}$ and the noisy HSI $\mathcal{R}$, while that of MWF is the mean squared error between the estimate $\widehat{\mathcal{X}}$ and the desired signal $\mathcal{X}$ (see (6)). For a given rank K, minimizing (17) means removing as little signal as possible in the denoising process.
1.5.2 Calculation of A _{ n }
Obviously, the estimation of A _{ n } requires A _{ p } and A _{ q }, which are unknown. In this situation, an ALS algorithm is employed to calculate the optimal A _{ n }.
1.5.3 PARAFAC ALS algorithm
- 1. Input: data tensor $\mathcal{R}$.
- 2. Initialization: set k=0 and e _{ k }=0. Randomly initialize ${\mathbf{A}}_{n}^{0}\in {\mathbb{R}}^{{I}_{n}\times K}$, n=1,2,3.
- 3. Loop:
  - (a) Estimate ${\mathbf{A}}_{n}^{k+1}$, where ⊙ denotes the Khatri-Rao product and ${\mathbf{R}}_{n}$ the n-mode unfolding of $\mathcal{R}$:
    - (i) ${\mathbf{U}}_{1}^{k+1}={\mathbf{A}}_{3}^{k}\odot {\mathbf{A}}_{2}^{k}$, ${\mathbf{A}}_{1}^{k+1}={\mathbf{R}}_{1}{\mathbf{U}}_{1}^{k+1}{\left({{\mathbf{U}}_{1}^{k+1}}^{T}{\mathbf{U}}_{1}^{k+1}\right)}^{-1}$;
    - (ii) ${\mathbf{U}}_{2}^{k+1}={\mathbf{A}}_{3}^{k}\odot {\mathbf{A}}_{1}^{k}$, ${\mathbf{A}}_{2}^{k+1}={\mathbf{R}}_{2}{\mathbf{U}}_{2}^{k+1}{\left({{\mathbf{U}}_{2}^{k+1}}^{T}{\mathbf{U}}_{2}^{k+1}\right)}^{-1}$;
    - (iii) ${\mathbf{U}}_{3}^{k+1}={\mathbf{A}}_{2}^{k}\odot {\mathbf{A}}_{1}^{k}$, ${\mathbf{A}}_{3}^{k+1}={\mathbf{R}}_{3}{\mathbf{U}}_{3}^{k+1}{\left({{\mathbf{U}}_{3}^{k+1}}^{T}{\mathbf{U}}_{3}^{k+1}\right)}^{-1}$.
  - (b) Compute ${\widehat{\mathbf{X}}}_{3}^{k+1}={\mathbf{A}}_{3}^{k+1}{{\mathbf{U}}_{3}^{k+1}}^{T}$.
  - (c) Compute ${e}_{k+1}={\parallel {\mathbf{R}}_{3}-{\widehat{\mathbf{X}}}_{3}^{k+1}\parallel}^{2}$. If |e _{ k+1}−e _{ k }|>ε and k is less than the maximum number of iterations, set $k\leftarrow k+1$ and go back to step 3(a); otherwise, break the loop.
- 4. Output: return ${\mathbf{A}}_{n}={\mathbf{A}}_{n}^{k+1}$, n=1,2,3.
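A compact numpy sketch of the PARAFAC-ALS update: each factor is the least-squares solution with the other two held fixed. The Khatri-Rao pairing below follows numpy's C-order unfolding convention, so the factor order differs superficially from the algorithm's notation, and a fixed iteration count replaces the stopping test for brevity:

```python
import numpy as np

def unfold(X, n):
    return np.moveaxis(X, n, 0).reshape(X.shape[n], -1)

def khatri_rao(B, C):
    """Column-wise Kronecker (Khatri-Rao) product of B and C."""
    K = B.shape[1]
    return np.einsum('ik,jk->ijk', B, C).reshape(-1, K)

def parafac_als(R, K, iters=100, seed=0):
    """Alternately solve the linear least-squares problem for each factor
    A_n with the other two fixed; a sketch of the ALS loop above."""
    rng = np.random.default_rng(seed)
    A = [rng.standard_normal((s, K)) for s in R.shape]
    for _ in range(iters):
        for n in range(3):
            p, q = [m for m in range(3) if m != n]
            U = khatri_rao(A[p], A[q])                    # (I_p * I_q, K)
            A[n] = unfold(R, n) @ U @ np.linalg.pinv(U.T @ U)
    return np.einsum('ir,jr,kr->ijk', A[0], A[1], A[2]), A

# Exact rank-2 tensor: ALS should recover it almost perfectly.
rng = np.random.default_rng(1)
F = [rng.standard_normal((s, 2)) for s in (4, 5, 6)]
R = np.einsum('ir,jr,kr->ijk', F[0], F[1], F[2])
Rhat, _ = parafac_als(R, K=2)
print(np.linalg.norm(Rhat - R) / np.linalg.norm(R))  # small: near-exact fit
```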
1.5.4 Rank estimation
- 1. Input: data tensor $\mathcal{R}$.
- 2. Initialization: set i=1 and set the rank-searching set K-SCOPE.
- 3. Loop:
  - (a) Set K=K-SCOPE[i].
  - (b) Do the PARAFAC decomposition: $\mathcal{R}=\sum _{k=1}^{K}{\mathbf{a}}_{1}^{k}\circ {\mathbf{a}}_{2}^{k}\circ {\mathbf{a}}_{3}^{k}+\widehat{\mathcal{N}}$.
  - (c) For n=1,2,3, calculate the covariance matrix C _{ n } of ${\widehat{\mathbf{N}}}_{n}$, the n-mode unfolding matrix of $\widehat{\mathcal{N}}$.
  - (d) If the two conditions
    - (i) ${s}_{\text{diag}}^{2}=\frac{1}{{I}_{n}}\sum _{i=1}^{{I}_{n}}{\left({c}_{i,i}-\frac{1}{{I}_{n}}\sum _{i=1}^{{I}_{n}}{c}_{i,i}\right)}^{2}<{d}_{1}$, where the c _{ i,i } are the diagonal elements of C _{ n };
    - (ii) $\left|{\parallel {\mathbf{C}}_{n}\parallel}^{2}-\sum _{i=1}^{{I}_{n}}{c}_{i,i}^{2}\right|<{d}_{2}$
    are satisfied for all n=1,2,3 at the same time, break the loop. Otherwise, $i\leftarrow i+1$.
- 4. Output: return the rank K.
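The two residual-whiteness conditions in step 3(d) can be sketched as follows; the thresholds d _{1} and d _{2} below are illustrative values, not ones specified in the survey:

```python
import numpy as np

def residual_is_white(N_unf, d1=0.1, d2=0.5):
    """Check one n-mode unfolding of the PARAFAC residual:
    (i)  the diagonal of its covariance matrix is nearly constant
         (small variance s_diag^2 of the c_ii), and
    (ii) the off-diagonal energy ||C_n||^2 - sum_i c_ii^2 is nearly zero.
    Both hold for white noise; at least one fails for structured residuals."""
    C = np.cov(N_unf)                  # I_n x I_n covariance, rows = variables
    diag = np.diag(C)
    s_diag2 = np.mean((diag - diag.mean()) ** 2)
    off_energy = abs(np.sum(C ** 2) - np.sum(diag ** 2))
    return s_diag2 < d1 and off_energy < d2

rng = np.random.default_rng(0)
white = rng.standard_normal((10, 5000))             # unit-variance white noise
shared = rng.standard_normal((1, 5000))
correlated = np.tile(shared, (10, 1)) + 0.01 * rng.standard_normal((10, 5000))

print(residual_is_white(white))        # True
print(residual_is_white(correlated))   # False
```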
1.6 MWPT-MWF
1.6.1 Denoising model
MWF and PARAFAC treat the HSI as a whole entity in the denoising process. This works well when there are only abundant signals or when the rare signals can be neglected. However, when the rare signals cannot be neglected, as in target detection, MWF and PARAFAC might remove them during denoising.
Unlike MWF and PARAFAC, MWPT-MWF reduces noise by jointly filtering the wavelet packet coefficient sets. The details of MWPT-MWF are described in the following subsections.
1.6.2 Multidimensional wavelet packet transform
where 0 _{1} is a zero matrix with size $\frac{{I}_{n}}{{2}^{{l}_{n}}}\times \frac{{m}_{n}{I}_{n}}{{2}^{{l}_{n}}}$ and 0 _{2} is a zero matrix with size $\frac{{I}_{n}}{{2}^{{l}_{n}}}\times \frac{({2}^{{l}_{n}}-1-m){I}_{n}}{{2}^{{l}_{n}}}$.
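A one-level MWPT can be sketched by applying an orthonormal analysis matrix along each mode via the n-mode product. The Haar basis is used here purely for illustration (the experiments in Section 1.7 use db3):

```python
import numpy as np

def haar_matrix(I):
    """Orthonormal one-level Haar analysis matrix (I must be even): the
    first I/2 rows compute lowpass averages, the last I/2 highpass details."""
    H = np.zeros((I, I))
    for i in range(I // 2):
        H[i, 2 * i] = H[i, 2 * i + 1] = 1 / np.sqrt(2)
        H[I // 2 + i, 2 * i] = 1 / np.sqrt(2)
        H[I // 2 + i, 2 * i + 1] = -1 / np.sqrt(2)
    return H

def nmode_product(X, A, n):
    return np.moveaxis(np.tensordot(A, np.moveaxis(X, n, 0), axes=(1, 0)), 0, n)

R = np.random.rand(8, 8, 16)
W = [haar_matrix(s) for s in R.shape]

# Forward transform: C = R x_1 W1 x_2 W2 x_3 W3 (levels l1 = l2 = l3 = 1).
C = R
for n in range(3):
    C = nmode_product(C, W[n], n)

# Each coefficient set (m1, m2, m3) is a block of C, e.g. the all-lowpass set:
C_000 = C[:4, :4, :8]

# Orthonormality makes the inverse exact: R = C x_1 W1^T x_2 W2^T x_3 W3^T.
R_back = C
for n in range(3):
    R_back = nmode_product(R_back, W[n].T, n)
print(np.allclose(R_back, R))  # True
```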
1.6.3 Multiway Wiener filter in multidimensional wavelet packet domain
1.6.4 Best transform level and basis selection
- 1. Level of transform: the performance of the algorithm is affected by the level of transform, which depends on the size of the tensor. The maximum level can be calculated by ${N}_{{L}_{k}}=\lceil {\log}_{2}{I}_{k}\rceil -5,\phantom{\rule{1em}{0ex}}k=1,2,3,$ (36)
where ⌈·⌉ rounds a number up to its nearest integer, and the constant 5 is subtracted from $\lceil {\log}_{2}{I}_{k}\rceil$ to make sure there are enough elements in each mode so that the transform is meaningful.
Then, the set of possible transform levels can be expressed as ${L}_{k}=\{0,1,\cdots \phantom{\rule{0.3em}{0ex}},{N}_{{L}_{k}}\},\phantom{\rule{1em}{0ex}}k=1,2,3,$ (37) where {·} denotes a set.
- 2.Basis of transform: there are many wavelet bases designed for different cases. For the simplicity of expression, we define$W=\{{\mathrm{w}}_{1},\phantom{\rule{1em}{0ex}}{\mathrm{w}}_{2},\cdots \phantom{\rule{0.3em}{0ex}},{\mathrm{w}}_{{N}_{W}}\}$(38)
to denote the set of possible wavelet bases, where N _{ W } is the number of wavelets in this set.
1.6.5 Summary of the MWPT-MWF
- 1.
Input:
Data tensor $\mathcal{R}$.
- 2.
Initialization:
Set $L=\{0,1,\dots ,{N}_{{L}_{k}}\}$, $W=\{{\mathrm{w}}_{1},\dots ,{\mathrm{w}}_{{N}_{W}}\}$ and the risk threshold ε.
- 3.
Loop:
For each l _{1},l _{2},l _{3}∈L and w∈W:
- (a)
Decompose the whitened data by MWPT: ${\mathcal{C}}_{\mathbf{l}}^{\mathcal{R}}=\mathcal{R}{\times}_{1}{\mathbf{W}}_{1}{\times}_{2}{\mathbf{W}}_{2}{\times}_{3}{\mathbf{W}}_{3}$.
- (b)
Extract component ${\mathcal{C}}_{\mathbf{l},\mathbf{m}}^{\mathcal{R}}$ from ${\mathcal{C}}_{\mathbf{l}}^{\mathcal{R}}$ by (24), for m=[m _{1},m _{2},m _{3}]^{ T }, where $0\le {m}_{k}\le {2}^{{l}_{k}}-1$, k=1,2,3.
- (c)
Filter component ${\mathcal{C}}_{\mathbf{l},\mathbf{m}}^{\mathcal{R}}$ by MWF: ${\widehat{\mathcal{C}}}_{\mathbf{l},\mathbf{m}}^{\mathcal{X}}={\mathcal{C}}_{\mathbf{l},\mathbf{m}}^{\mathcal{R}}{\times}_{1}{\mathbf{H}}_{1,\mathbf{m}}{\times}_{2}{\mathbf{H}}_{2,\mathbf{m}}{\times}_{3}{\mathbf{H}}_{3,\mathbf{m}}$.
- (d)
Calculate the risk $\widehat{{R}_{c}}=\sum _{\mathbf{m}}\parallel {\widehat{\mathcal{C}}}_{\mathbf{l},m}^{\mathcal{X}}\left[d\right]-{\widehat{\mathcal{C}}}_{\mathbf{l},m}^{\mathcal{X}}[\phantom{\rule{0.3em}{0ex}}d-1]{\parallel}^{2}$. If $\widehat{{R}_{c}}$ reaches a fixed threshold ε, return the optimal l _{1},l _{2},l _{3},w and ${\widehat{\mathcal{C}}}_{\mathbf{l},\mathbf{m}}^{\mathcal{X}}$.
- 4.
Output: Concatenate the ${\widehat{\mathcal{C}}}_{\mathbf{l},\mathbf{m}}^{\mathcal{X}}$ to obtain ${\widehat{\mathcal{C}}}_{\mathbf{l}}^{\mathcal{X}}$ and perform the inverse MWPT: $\widehat{\mathcal{X}}={\widehat{\mathcal{C}}}_{\mathbf{l}}^{\mathcal{X}}{\times}_{1}{\mathbf{W}}_{1}^{T}{\times}_{2}{\mathbf{W}}_{2}^{T}{\times}_{3}{\mathbf{W}}_{3}^{T}$.
1.7 Experimental results
White Gaussian noise is added to the HSI with signal-to-noise ratio (SNR) ranging from 15 to 30 dB (in steps of 5 dB) to reproduce different simulation scenarios. MWF, PARAFAC, and MWPT-MWF are used to reduce noise in the HSI. The rank-searching set of PARAFAC is set to [51,101,151,201], and the db3 wavelet is selected for MWPT with transform levels [ l _{1},l _{2},l _{3}]=[ 1,1,0].
1.7.1 Denoising performance evaluation and comparison
If SNR_{OUTPUT} is greater than SNR_{INPUT}, we can conclude that the algorithm improves the SNR of the image.
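The exact SNR expression did not survive extraction; the usual definition, assumed here, is 10·log10 of the signal energy over the residual-error energy:

```python
import numpy as np

def snr_db(X, Xhat):
    """Output SNR in dB: 10*log10(||X||^2 / ||Xhat - X||^2), with X the
    clean signal tensor and Xhat the denoised estimate (assumed definition;
    the survey's own formula was lost in extraction)."""
    return 10 * np.log10(np.sum(X ** 2) / np.sum((Xhat - X) ** 2))

X = np.ones((4, 4, 4))
noisy = X + 0.1                    # uniform error of 0.1 per element
print(round(snr_db(X, noisy), 1))  # 20.0 = 10*log10(1 / 0.01)
```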
SNR _{ OUTPUT } vs. SNR _{ INPUT } obtained after denoising by methods MWF, PARAFAC, and MWPT-MWF
| SNR _{ INPUT } (dB) | MWF | PARAFAC | MWPT-MWF |
|---|---|---|---|
| 15 | 23.55 | 28.80 | 30.27 |
| 20 | 29.68 | 32.04 | 33.58 |
| 25 | 35.54 | 35.19 | 36.60 |
| 30 | 38.35 | 38.07 | 39.19 |
1.7.2 Target detection performance evaluation and comparison
In the last subsection, we compared the denoising performance of the different methods in terms of SNR_{OUTPUT}. However, SNR_{OUTPUT} does not always reflect the denoising quality we care about, especially when the goal is to preserve small targets in the HSI while removing noise. Hence, in this subsection, we compare the target detection performance after denoising by MWF, PARAFAC, and MWPT-MWF.
where s is the reference spectrum and x is the pixel spectrum.
where n _{ s } is the number of spectral signatures, N _{ i } the number of pixels with spectral signature i, ${N}_{i}^{\mathit{\text{rd}}}$ the number of correctly detected pixels, and ${N}_{i}^{\mathit{\text{fd}}}$ the number of false-alarm pixels.
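Neither the detector statistic nor the Pd expression survived extraction. As an illustration only, the sketch below uses a spectral-angle score between s and x and one plausible pooled reading of the Pd definition; both are assumptions, not the paper's exact formulas:

```python
import numpy as np

def spectral_angle(s, x):
    """Illustrative detector score: the angle between reference spectrum s
    and pixel spectrum x; 0 means a perfect spectral match, independent of
    illumination scale."""
    cos = np.dot(s, x) / (np.linalg.norm(s) * np.linalg.norm(x))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def detection_rate(N, N_rd):
    """One plausible pooled reading of Pd: correctly detected pixels
    divided by total target pixels over the n_s spectral signatures."""
    return sum(N_rd) / sum(N)

s = np.array([1.0, 2.0, 3.0])
print(spectral_angle(s, 2 * s))                        # 0.0: scale-invariant
print(round(detection_rate([100, 50], [90, 40]), 3))   # 0.867
```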
SNR _{ INPUT } vs. Pd obtained after denoising by methods MWF, PARAFAC, and MWPT-MWF
| SNR _{ INPUT } (dB) | MWF | PARAFAC | MWPT-MWF |
|---|---|---|---|
| 15 | 0.724 | 0.878 | 0.922 |
| 20 | 0.972 | 0.998 | 0.998 |
| 25 | 1 | 1 | 1 |
| 30 | 1 | 1 | 1 |
It is obvious that the detection results after denoising by MWPT-MWF outperform those of the two other methods. Comparing Table 2 with Table 1 shows that the denoising process can improve the target detection performance.
2 Conclusion
In this paper, a survey has been presented on three recently proposed tensor filtering methods: MWF, PARAFAC, and MWPT-MWF. They utilize multilinear algebra in analyzing a multidimensional data cube to jointly filter it in each mode.
The MWF extends the classical Wiener filter to the multidimensional case by using the TUCKER3 decomposition while minimizing the MSE between the desired signal tensor and the estimated signal tensor. As the filter in one mode relies on the filters in the other modes, the ALS algorithm is used to jointly calculate the MWF filters. In the filtering process, the signal subspace rank in mode n needs to be known to remove the noise in the orthogonal complement of the signal subspace. For this reason, the AIC criterion is used to estimate the rank in mode n, so that the MWF can reduce noise automatically.
The PARAFAC filtering method was proposed to reduce the number of rank values to be estimated. As mentioned above, the rank in each mode must be estimated in MWF, while only one rank must be estimated in PARAFAC filtering. Moreover, the low-rank PARAFAC decomposition is unique for rank values higher than one, whereas the TUCKER3 decomposition is not. However, there is no efficient way to estimate the PARAFAC rank automatically. Though we have shown a rank estimation method in this paper, it is a time-consuming brute-force search.
The MWF and PARAFAC were proposed to process the HSI as a whole entity, but this may remove the small targets in an HSI during denoising. Unlike MWF and PARAFAC, MWPT-MWF first transforms the HSI into different wavelet packet sets, also called components in this paper, and then filters each component as a whole entity. As the small targets are separated from the large ones, the former can be well preserved in the denoising process.
A real-world HYDICE HSI is used in the comparative study. Quantitative and visual evaluation of the three methods is shown. From the experimental results, we can conclude that MWPT-MWF is a suitable tool for denoising especially when there exist small targets in the HSI.
Declarations
Acknowledgements
The authors would like to thank the reviewers for their careful reading and helpful comments, which improved the quality of this paper.
References
- Kotwal K, Chaudhuri S: Visualization of hyperspectral images using bilateral filtering. IEEE Trans. Geosci. Remote Sens. 2010, 48(5):2308-2316.
- Lewis S, Hudak A, Ottmar R, Robichaud P, Lentile L, Hood S, Cronan J, Morgan P: Using hyperspectral imagery to estimate forest floor consumption from wildfire in boreal forests of Alaska, USA. Int. J. Wildland Fire 2011, 20(2):255-271. 10.1071/WF09081
- Tiwari K, Arora M, Singh D: An assessment of independent component analysis for detection of military targets from hyperspectral images. Int. J. Appl. Earth Obs. Geoinf. 2011, 13(5):730-740. 10.1016/j.jag.2011.03.007
- Veracini T, Matteoli S, Diani M, Corsini G: Nonparametric framework for detecting spectral anomalies in hyperspectral images. IEEE Geosci. Remote Sens. Lett. 2011, 8(4):666-670.
- Prasad S, Li W, Fowler JE, Bruce LM: Information fusion in the redundant-wavelet-transform domain for noise-robust hyperspectral classification. IEEE Trans. Geosci. Remote Sens. 2012, 50(9):3474-3486.
- Kerekes J, Baum J: Full-spectrum spectral imaging system analytical model. IEEE Trans. Geosci. Remote Sens. 2005, 43(3):571-580.
- Uss ML, Vozel B, Lukin VV, Chehdi K: Local signal-dependent noise variance estimation from hyperspectral textural images. IEEE J. Sel. Topics Signal Process. 2011, 5(3):469-486.
- Acito N, Diani M, Corsini G: Subspace-based striping noise reduction in hyperspectral images. IEEE Trans. Geosci. Remote Sens. 2011, 49(4):1325-1342.
- Shao L, Yan R, Li X, Liu Y: From heuristic optimization to dictionary learning: a review and comprehensive comparison of image denoising algorithms. IEEE Trans. Cybernet. 2013, in press.
- Yan R, Shao L, Liu Y: Nonlocal hierarchical dictionary learning using wavelets for image denoising. IEEE Trans. Image Process. 2013, 22(12):4689-4698.
- Yan R, Shao L, Cvetković S, Klijn J: Improved nonlocal means based on pre-classification and invariant block matching. J. Display Technol. 2012, 8(4):212-218.
- Letexier D, Bourennane S: Noise removal from hyperspectral images by multidimensional filtering. IEEE Trans. Geosci. Remote Sens. 2008, 46(7):2061-2069.
- Renard N, Bourennane S: Improvement of target detection methods by multiway filtering. IEEE Trans. Geosci. Remote Sens. 2008, 46(8):2407-2417.
- Liu X, Bourennane S, Fossati C: Denoising of hyperspectral images using the PARAFAC model and statistical performance analysis. IEEE Trans. Geosci. Remote Sens. 2012, 50(10):3717-3724.
- Richards JA: Remote sensing digital image analysis: an introduction. Berlin Heidelberg: Springer; 2012.
- Chein IC, Qian D: Estimation of number of spectrally distinct signal sources in hyperspectral imagery. IEEE Trans. Geosci. Remote Sens. 2004, 42(3):608-619. 10.1109/TGRS.2003.819189
- Kuybeda O, Malah D, Barzohar M: Rank estimation and redundancy reduction of high-dimensional noisy signals with preservation of rare vectors. IEEE Trans. Signal Process. 2007, 55(12):5579-5592.
- Acito N, Diani M, Corsini G: A new algorithm for robust estimation of the signal subspace in hyperspectral images in the presence of rare signal components. IEEE Trans. Geosci. Remote Sens. 2009, 47(11):3844-3856.
- Martin-Herrero J: Anisotropic diffusion in the hypercube. IEEE Trans. Geosci. Remote Sens. 2007, 45(5):1386-1398.
- Mendez-Rial R, Calvino-Cancela M, Martin-Herrero J: Accurate implementation of anisotropic diffusion in the hypercube. IEEE Geosci. Remote Sens. Lett. 2010, 7(4):870-874.
- Le Bihan N, Ginolhac G: Three-mode data set analysis using higher order subspace method: application to sonar and seismo-acoustic signal processing. Signal Process. 2004, 84(5):919-942. 10.1016/j.sigpro.2004.02.003
- Vasilescu MAO, Terzopoulos D: Multilinear image analysis for facial recognition. In International Association of Pattern Recognition (IAPR). Quebec City; August 2002:511-514.
- Muti D, Bourennane S: Multidimensional signal processing using lower-rank tensor approximation. In IEEE ICASSP. Hong Kong; 6–10 April 2003:457-460.
- Muti D, Bourennane S: Multidimensional filtering based on a tensor approach. Signal Process. 2005, 85(12):2338-2353. 10.1016/j.sigpro.2004.11.029
- Letexier D, Bourennane S, Talon J: Nonorthogonal tensor matricization for hyperspectral image filtering. IEEE Geosci. Remote Sens. Lett. 2008, 5:3-7.
- Harshman RA, Lundy ME: The PARAFAC model for three-way factor analysis and multidimensional scaling. In Research methods for multimode data analysis. New York: Praeger; 1984:122-215.
- Carroll JD, Chang JJ: Analysis of individual differences in multidimensional scaling via an N-way generalization of Eckart-Young decomposition. Psychometrika 1970, 35(3):283-319. 10.1007/BF02310791
- Smilde A, Bro R, Geladi P: Multi-way analysis: applications in the chemical sciences. Hoboken: Wiley; 2005.
- Guo X, Miron S, Brie D, Zhu S, Liao X: A CANDECOMP/PARAFAC perspective on uniqueness of DOA estimation using a vector sensor array. IEEE Trans. Signal Process. 2011, 59(7):3475-3481.
- De Almeida AL, Favier G, Mota JCM: PARAFAC-based unified tensor modeling for wireless communication systems with application to blind multiuser equalization. Signal Process. 2007, 87(2):337-351. 10.1016/j.sigpro.2005.12.014
- Liu X, Bourennane S, Fossati C: Nonwhite noise reduction in hyperspectral images. IEEE Geosci. Remote Sens. Lett. 2012, 9(3):368-372.
- Lin T, Bourennane S: Hyperspectral image processing by jointly filtering wavelet component tensor. IEEE Trans. Geosci. Remote Sens. 2013, 51(6):3529-3541.
- Kolda TG, Bader BW: Tensor decompositions and applications. SIAM Rev. 2009, 51(3):455-500. 10.1137/07070111X
- Cichocki A, Zdunek R, Phan A, Amari S: Nonnegative matrix and tensor factorizations: applications to exploratory multi-way data analysis and blind source separation. Hoboken: Wiley; 2009.
- Muti D, Bourennane S, Marot J: Lower-rank tensor approximation and multiway filtering. SIAM J. Matrix Anal. Appl. 2008, 30(3):1172-1204. 10.1137/060653263
- Donoho D, Johnstone I: Ideal denoising in an orthonormal basis chosen from a library of bases. Comptes Rendus de l'Academie des Sciences-Serie I-Mathematique 1994, 319(12):1317-1322.
- Jin X, Paswaters S, Cline H: A comparative study of target detection algorithms for hyperspectral imagery. In SPIE Defense, Security, and Sensing. Orlando, FL; 13–17 April 2009.
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.