 Research
 Open Access
A learning-based target decomposition method using Kernel K-SVD for polarimetric SAR image classification
EURASIP Journal on Advances in Signal Processing volume 2012, Article number: 159 (2012)
Abstract
In this article, a learning-based target decomposition method based on the Kernel K-singular value decomposition (Kernel K-SVD) algorithm is proposed for polarimetric synthetic aperture radar (PolSAR) image classification. With new methods offering increased resolution, more details (structures and objects) can be exploited in SAR images, which invalidates the traditional decompositions based on specific scattering mechanisms designed for low-resolution SAR image classification. Instead of adopting fixed bases corresponding to the known scattering mechanisms, we propose a learning-based decomposition method that generates adaptive bases by developing a nonlinear extension of the K-SVD algorithm in a nonlinear feature space, called Kernel K-SVD. It is an iterative method that alternates between sparse coding in the kernel feature space based on the nonlinear dictionary and a process of updating each atom in the dictionary. The Kernel K-SVD-based decomposition not only generates a stable and adaptive representation of the images but also establishes a curvilinear coordinate system that follows the flow of nonlinear polarimetric features. The proposed approach was verified on two sets of SAR data and found to outperform traditional decompositions based on scattering mechanisms.
Introduction
Synthetic Aperture Radar (SAR) [1] has become an important tool for a wide range of applications, including military exploration, resource exploration, urban development planning and marine research. Compared with single-polarized SAR, polarimetric SAR (PolSAR) can work under different polarimetric combinations of transmitting and receiving antennas. Since the combinations of electromagnetic waves from the antennas are sensitive to the dielectric constant, physical characteristics and geometric shape of targets, PolSAR greatly enhances the capabilities of data application and obtains rich target information through the identification and separation of full-polarized scattering mechanisms. As an important component of PolSAR image interpretation, target decomposition [2] expresses the average scattering mechanism as the sum of independent elements in order to associate a physical mechanism with each pixel, which allows the identification and separation of scattering mechanisms for purposes of classification [3, 4].
Many methods for target decomposition have been proposed for the identification of scattering characteristics based on the study of polarimetric matrices. At present, two main camps of decompositions are identified, namely coherent decompositions and incoherent decompositions. The coherent decompositions express the scattering matrix measured by the radar as a combination of simpler responses, mainly the Pauli, the Krogager and the Cameron decompositions. These decompositions are possible only if the scatterers are point or pure targets. When a particular pixel belongs to distributed scatterers with the presence of speckle noise, incoherent approaches must be chosen for data post-processing in order to use traditional averaging and statistical methods. Incoherent decompositions deal with the polarimetric coherency matrix or covariance matrix, such as the Freeman, the OEC, the Four-Component, the Huynen, the Barnes and the Cloude decompositions. However, these traditional methods aim to associate each decomposition component with a specific scattering mechanism, which limits their application to different kinds of PolSAR images. For instance, one component of the Pauli decomposition denotes water capacity, so only crops, which contain water, can be targets, and decomposition on such a basis represents how much water the targets comprise. The four-component scattering model proposed by Yamaguchi et al. [5] often appears in complex urban areas whereas it disappears in almost all natural distributed scenarios. In addition, with the improved resolution of SAR images, targets in the images become clearer and clearer, and a pixel no longer purely consists of several kinds of scattering mechanisms; the limited scattering mechanisms explored currently may be unable to satisfy this pluralism.
In recent years, there has been a growing interest in the study of learning-based representations of signals, which approximate an input signal y as a linear combination of adaptive atoms d_{ i } instead of adopting bases corresponding to known scattering mechanisms. Several methods are available for searching sparse codes efficiently, including the efficient sparse coding algorithm [6], the K-SVD algorithm [7] and online dictionary learning [8]. The K-SVD algorithm shows stable performance in dictionary learning as an iterative method that alternates between the sparse coding of signal samples based on the learned dictionary and the process of updating the atoms in the dictionary. Although the K-SVD algorithm has been widely used for linear problems with good performance, for the nonlinear case, which widely exists in actual problems, it has the limitation that a nonlinearly clustered structure is not easy to capture. It is empirically found that, in order to achieve good performance in classification, such sparse representations generally need to be combined with nonlinear classifiers, which leads to high computational complexity. In order to make K-SVD applicable to nonlinearly structured data, kernel methods [9] are introduced in this article. The main idea of kernel methods is to map the input data into a high-dimensional space in order to nonlinearly divide the samples into arbitrary clusters, without knowing the nonlinear mapping explicitly or increasing the computational complexity. Combinations of kernel functions with other methods have also given birth to various kernel-based algorithms, including Kernel Principal Component Analysis (KPCA) [10], Kernel Independent Component Analysis (KICA) [11] and Kernel Fisher Discriminant Analysis (KFDA) [12].
Towards a general nonlinear analysis, we propose a learning-based target decomposition algorithm for the classification of SAR images, called Kernel K-singular value decomposition (Kernel K-SVD). The presented algorithm not only maintains the adaptivity of the K-SVD algorithm for dictionary learning but also exploits the nonlinearity of SAR images in a kernel feature space. The Kernel K-SVD-based target decomposition method has been tested in experiments on PolSAR image classification and demonstrated better performance than traditional decomposition strategies based on scattering mechanisms.
The remainder of the article is organized as follows. We describe the current target decompositions based on scattering mechanisms in Section “Target decomposition based on scattering mechanisms” and present the framework of our proposed Kernel K-SVD algorithm for the learning-based decomposition in Section “A novel learning-based target decomposition method based on Kernel K-SVD for PolSAR image”. Then, we show experimental results in comparison with traditional decomposition methods in Section “Experiment”. Finally, we conclude the article in Section “Conclusion”.
Target decomposition based on scattering mechanisms
Many target decomposition methods have been proposed for the identification of scattering characteristics based on polarimetric matrices, including the scattering matrix S, the covariance matrix C and the coherency matrix T. A PolSAR measures microwave reflectivity at the linear quad-polarizations HH, HV, VH, and VV to form a 2×2 scattering matrix:
$$\mathbf{S}=\left[\begin{array}{cc}{S}_{HH}& {S}_{HV}\\ {S}_{VH}& {S}_{VV}\end{array}\right]$$(1)
where S_{ ab } represents the complex scattering amplitude for transmitting a and receiving b, where a and b denote horizontal or vertical polarization, respectively. S_{ HH } and S_{ VV } are the co-polarized complex scattering amplitudes, while S_{ HV } and S_{ VH } are the cross-polarized complex scattering amplitudes. For a reciprocal medium in the monostatic case, the reciprocity theorem [13] ensures that S_{ HV } equals S_{ VH }, so the matrix S is symmetric. In the general situation it is difficult to analyze the scattering matrix directly, so it is usually expressed as a combination of the scattering responses [S]_{ i } of simpler objects:
$$\mathbf{S}=\sum _{i=1}^{k}{c}_{i}{\left[\mathbf{S}\right]}_{i}$$(2)
where c_{ i } indicates the weight of [S]_{ i } in the combination. The decomposition proposed in (2) is not unique, in the sense that it is possible to find an infinite number of sets {[S]_{ i }, i = 1,…,k} into which the matrix S can be decomposed. However, only in some of these sets is it convenient to interpret the polarimetric information contained in matrix S, for instance, the Pauli, the Krogager and the Cameron decompositions.
Decompositions of the scattering matrix can only be employed for characterizing coherent or pure scatterers. For distributed scatterers, due to the presence of speckle noise, only second-order polarimetric representations or incoherent decompositions based on the covariance or coherency matrix can be employed. The objective of incoherent decompositions is to separate the matrix [C] or [T] into a combination of second-order descriptors [C]_{ i } or [T]_{ i } corresponding to simpler objects:
$$\left[\mathbf{C}\right]=\sum _{i=1}^{k}{p}_{i}{\left[\mathbf{C}\right]}_{i}$$(3)
$$\left[\mathbf{T}\right]=\sum _{i=1}^{k}{q}_{i}{\left[\mathbf{T}\right]}_{i}$$(4)
where p_{ i } and q_{ i } denote the corresponding coefficients for [C]_{ i } and [T]_{ i }. Since the bases {[C]_{ i }, i = 1,…,k} and {[T]_{ i }, i = 1,…,k} are not unique, different decompositions can be presented, such as the Freeman, the Four-Component, the OEC, the Barnes, the Holm, the Huynen and the Cloude decompositions.
Put simply, the ultimate objective of target decomposition is to decompose a radar matrix into a weighted sum of several specific components, which can be used to characterize the target scattering process or geometric information, as shown in (2)–(4).
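As a concrete illustration of the weighted-sum form in (2), the coherent Pauli decomposition projects a reciprocal scattering matrix onto three fixed orthonormal basis matrices. The following sketch is our own illustration (not code from the article); it computes the Pauli coefficients by trace projection and verifies the reconstruction:

```python
import numpy as np

# Pauli basis matrices (orthonormal under <A, B> = Tr(A @ B^H))
PAULI = [
    np.array([[1, 0], [0, 1]]) / np.sqrt(2),   # odd-bounce (surface) scattering
    np.array([[1, 0], [0, -1]]) / np.sqrt(2),  # even-bounce (double-bounce) scattering
    np.array([[0, 1], [1, 0]]) / np.sqrt(2),   # 45-degree-oriented dihedral scattering
]

def pauli_coefficients(S):
    """Project S onto the Pauli basis: c_i = Tr(S @ P_i^H)."""
    return [np.trace(S @ P.conj().T) for P in PAULI]

# Example: a symmetric (reciprocal) scattering matrix, S_HV == S_VH
S = np.array([[1.0 + 0.2j, 0.1j],
              [0.1j, 0.5 - 0.1j]])
c = pauli_coefficients(S)
# Reconstruction: S == sum_i c_i * P_i, the form of equation (2)
S_rec = sum(ci * P for ci, P in zip(c, PAULI))
```

The weights c_i here are the fixed-basis analogue of the adaptive coefficients that the learning-based method of the next section replaces with learned atoms.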
A novel learning-based target decomposition method based on Kernel K-SVD for PolSAR image
This section reviews the decomposition based on the K-SVD algorithm and introduces our Kernel K-SVD method in the kernel feature space.
K-SVD algorithm
Let Y be a set of N-dimensional samples extracted from an image, $Y=[{y}_{1},{y}_{2},\dots ,{y}_{M}]\in {\mathbf{\text{R}}}^{N\times M}$, used to train an overcomplete dictionary $D=[{d}_{1},{d}_{2},\dots ,{d}_{K}]\in {\mathbf{\text{R}}}^{N\times K}$ (K > N), in which each element d_{ i } is called an atom. The purpose of the K-SVD algorithm is to solve the following objective function:
$$\underset{D,X}{min}\parallel Y-DX{\parallel}_{F}^{2}\phantom{\rule{1em}{0ex}}\text{s.t.}\parallel {x}_{i}{\parallel}_{0}\le {T}_{0},\phantom{\rule{1em}{0ex}}\forall i=1,\dots ,M$$(5)
where $\parallel \cdot {\parallel}_{F}^{2}$ denotes the reconstruction error. $X=[{x}_{1},{x}_{2},\dots ,{x}_{M}]$ is the set of sparse codes representing the input samples Y in terms of the columns of the learned dictionary D. The given sparsity level T_{0} restricts each sample to fewer than T_{0} terms in its decomposition. The K-SVD algorithm is divided into two stages:

(1)
Sparse coding stage: Using the learned overcomplete dictionary D, the given signal Y can be represented as a linear combination of atoms under the constraint of (5). This is often done by greedy algorithms such as matching pursuit (MP) [14] and orthogonal matching pursuit (OMP) [15]. In this article, we choose the OMP algorithm due to its fast convergence.

(2)
Dictionary updating stage: Given the sparse codes, the second stage is performed to minimize the reconstruction error and search for a new atom d_{ j } under the sparsity constraint. The performance of sparse representation depends on the quality of the learned dictionary. The K-SVD algorithm is an iterative approach for improving the approximation performance of sparse coding. It initializes the dictionary through a K-means clustering process and updates the dictionary atoms assuming known coefficients until the sparsity level is satisfied. The updating of atoms and sparse coefficients is done jointly, leading to accelerated convergence. Despite its popularity, the K-SVD algorithm generates a linear coordinate system that cannot guarantee good performance when applied to nonlinear input.
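The two stages above can be sketched in a compact form. This is a minimal illustration of one linear K-SVD iteration under our own simplifications: a bare-bones OMP, and no K-means initialization or atom-replacement heuristics:

```python
import numpy as np

def omp(D, y, T0):
    """Orthogonal matching pursuit: greedily select at most T0 atoms of D for y."""
    residual, idx = y.copy(), []
    x = np.zeros(D.shape[1])
    for _ in range(T0):
        j = int(np.argmax(np.abs(D.T @ residual)))   # most correlated atom
        idx.append(j)
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        residual = y - D[:, idx] @ coef              # re-orthogonalized residual
    x[idx] = coef
    return x

def ksvd_step(Y, D, T0):
    """One K-SVD iteration: sparse coding, then atom-by-atom SVD update."""
    X = np.column_stack([omp(D, Y[:, m], T0) for m in range(Y.shape[1])])
    for p in range(D.shape[1]):
        users = np.nonzero(X[p, :])[0]               # samples that use atom p
        if len(users) == 0:
            continue
        # error of the users without atom p's contribution
        E = Y[:, users] - D @ X[:, users] + np.outer(D[:, p], X[p, users])
        U, s, Vt = np.linalg.svd(E, full_matrices=False)
        D[:, p] = U[:, 0]                            # new atom: leading left singular vector
        X[p, users] = s[0] * Vt[0, :]                # new coefficients for that atom
    return D, X

rng = np.random.default_rng(0)
Y = rng.standard_normal((8, 20))
D = rng.standard_normal((8, 12))
D /= np.linalg.norm(D, axis=0)                       # unit-norm initial atoms
D, X = ksvd_step(Y, D, T0=3)
```

In practice the two stages are repeated until the reconstruction error in (5) stops decreasing.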
Kernel K-SVD in the kernel feature space
Let $X=[{x}_{1},{x}_{2},\dots ,{x}_{M}]\in {\mathbf{\text{R}}}^{K\times M}$ denote nonlinear samples, with M the number of samples. The K-dimensional space that x_{ i } belongs to is called the ‘input space’. Accomplishing classification on such samples with a nonlinear classifier requires high computational complexity. Assuming the x_{ i } to be almost always linearly separable in another F-dimensional space, called the ‘feature space’, new linear samples ${x}_{i}^{F}=\phi \left({x}_{i}\right)\in {\mathbf{\text{R}}}^{F}$ can be generated by a nonlinear mapping function φ. With such a nonlinear transform, the original samples can be linearly divided into arbitrary clusters without increasing the computational complexity. However, the dimension F of the required space is generally very high or possibly infinite, and it is difficult to compute inner products in such a high-dimensional space directly.
The main idea of kernel methods is that, without knowing the nonlinear feature mapping function or the mapped feature space explicitly, we can work in the feature space through kernel functions, as long as two properties are satisfied: (1) the process is formulated in terms of dot products of sample points in the input space; (2) the kernel function satisfies Mercer's condition, so that the alternative algorithm can be obtained by replacing each dot product with a kernel function κ. The kernel function can be written as:
$$\kappa ({x}_{i},{x}_{j})=\langle \phi \left({x}_{i}\right),\phi \left({x}_{j}\right)\rangle $$(6)
where 〈·,·〉 is an inner product in the feature space induced by φ. By replacing inner products with kernel functions in linear algorithms, we can obtain very flexible representations of nonlinear data. Choosing the kernel function κ is equivalent to choosing the mapping function φ. Several kernel functions are widely used in practice:

(1)
Polynomial:
$$\kappa ({x}_{i},{x}_{j})={({x}_{i}\cdot {x}_{j}+b)}^{d},\phantom{\rule{1em}{0ex}}d>0,\phantom{\rule{1em}{0ex}}b\in \mathbf{\text{R}}$$(7) 
(2)
Gaussian radial basis function (GRBF):
$$\kappa ({x}_{i},{x}_{j})=\text{exp}\left(-\frac{\parallel {x}_{i}-{x}_{j}{\parallel}^{2}}{2{\delta}^{2}}\right),\phantom{\rule{1em}{0ex}}\delta \in \mathbf{\text{R}}$$(8) 
(3)
Hyperbolic tangent:
$$\kappa ({x}_{1},{x}_{2})=\text{tanh}\left(v({x}_{1}\cdot {x}_{2})+c\right),\phantom{\rule{1em}{0ex}}v>0,\phantom{\rule{1em}{0ex}}c<0$$(9)
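As an illustration of the GRBF kernel in (8), the full M×M Gram matrix for a sample set can be evaluated at once. This sketch is our own (not from the article) and stores one sample per column:

```python
import numpy as np

def gaussian_kernel_matrix(X, delta=0.5):
    """Gram matrix K[i, j] = exp(-||x_i - x_j||^2 / (2 * delta^2)),
    where x_i are the columns of X (equation (8))."""
    sq = np.sum(X**2, axis=0)
    # pairwise squared distances via the expansion ||a-b||^2 = ||a||^2 + ||b||^2 - 2 a.b
    d2 = sq[:, None] + sq[None, :] - 2 * X.T @ X
    return np.exp(-np.maximum(d2, 0) / (2 * delta**2))

X = np.random.default_rng(1).standard_normal((5, 10))  # 10 samples, 5-dimensional
K = gaussian_kernel_matrix(X)
```

A valid Mercer kernel yields a symmetric positive semi-definite Gram matrix; for the GRBF it additionally has a unit diagonal, since κ(x, x) = 1.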
In this article, we introduce the kernel function into the K-SVD algorithm, and the adaptive dictionary is learned in the feature space F instead of the original space. Let Y be a set of N-dimensional samples extracted from an image, $Y=[{y}_{1},{y}_{2},\dots ,{y}_{M}]\in {\mathbf{\text{R}}}^{N\times M}$, used to train an initial overcomplete dictionary D∈R^{N×K} (K > N). Assuming $X=[{x}_{1},{x}_{2},\dots ,{x}_{M}]\in {\mathbf{\text{R}}}^{K\times M}$ to be the sparse matrix obtained via the OMP algorithm, the kernel trick is based on the mapping f : x_{ i } → F : κ(x,·), which maps each element of the input space to the kernel feature space. The replacement of the inner product 〈x_{ i },x_{ j }〉 by the kernel function κ(x_{ i },x_{ j }) is equivalent to changing a nonlinear problem in the original space into a linear one in a high-dimensional space and looking for the dictionary in the converted space. The construction of a kernel feature space based on the sparse codes of training samples provides a promising implementation of a curvilinear coordinate system along the flow of nonlinear features. Let K = φ(X)^{T}φ(X) be the corresponding kernel matrix:
$${K}_{ij}=\kappa ({x}_{i},{x}_{j}),\phantom{\rule{1em}{0ex}}i,j=1,\dots ,M$$(10)
By performing a linear algorithm on the kernel matrix, we obtain a new sparse matrix $\stackrel{~}{X}=[{\stackrel{~}{x}}_{1},{\stackrel{~}{x}}_{2},\dots ,{\stackrel{~}{x}}_{M}]\in {\mathbf{\text{R}}}^{P\times M}$. Then, the objective function of the Kernel K-SVD algorithm is described as follows:
where $\stackrel{~}{D}\in {\mathbf{\text{R}}}^{N\times P}(P>N)$ is the new dictionary in the feature space, and T(κ(x_{ i },x_{ j })) represents a linear transform on the kernel function κ(x_{ i },x_{ j }).
The construction of a kernel feature space can be summarized in the following steps:

(1)
Normalize the input data.

(2)
Map the nonlinear features to a high-dimensional space and compute the dot product between each pair of features.

(3)
Choose or construct a proper kernel function to replace the dot products.

(4)
Translate the data into the kernel matrix according to the kernel function.

(5)
Perform a linear algorithm on the kernel matrix in the feature space.

(6)
Generate the nonlinear model of the input space.
The flow of the above steps is shown in Figure 1.
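The six steps above can be sketched end to end. This is our own minimal illustration using the GRBF kernel (the paper's choice for KPCA) and feature-space centering, which is implicit in standard KPCA:

```python
import numpy as np

def kpca_codes(X, n_components, delta=0.5):
    """Steps (1)-(6): normalize, build the GRBF kernel matrix, run PCA on it,
    and return the projections as new codes.  X holds one sample per column."""
    X = X / (np.linalg.norm(X, axis=0, keepdims=True) + 1e-12)  # (1) normalize
    sq = np.sum(X**2, axis=0)                                   # (2)-(4) kernel matrix
    K = np.exp(-(sq[:, None] + sq[None, :] - 2 * X.T @ X) / (2 * delta**2))
    M = K.shape[0]
    J = np.ones((M, M)) / M
    Kc = K - J @ K - K @ J + J @ K @ J        # center the data in feature space
    lam, alpha = np.linalg.eigh(Kc)           # (5) linear algorithm (PCA) on K
    lam, alpha = lam[::-1], alpha[:, ::-1]    # eigenvalues in descending order
    keep = lam > 1e-10                        # discard numerically zero directions
    alpha = alpha[:, keep] / np.sqrt(lam[keep])   # normalize: alpha_i^T alpha_i = 1/lambda_i
    return (Kc @ alpha[:, :n_components]).T   # (6) nonlinear model of the input space

X = np.random.default_rng(0).standard_normal((6, 15))  # 15 samples, 6-dimensional
Z = kpca_codes(X, n_components=4)
```

The dimension of Kc depends only on the number of samples M, which is the property exploited in the discussion below.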
Given the sparse matrix $\stackrel{~}{X}$, the process of dictionary learning is described as follows.
Let ${E}_{p}=Y-\sum _{j\ne p}^{P}{\stackrel{~}{d}}_{j}{\stackrel{~}{x}}_{R}^{j}$ denote the representation error of the samples after removing the p-th atom, where ${\stackrel{~}{x}}_{R}^{p}$ denotes the p-th row of $\stackrel{~}{X}$.
Once E_{ p } is computed, the SVD is used to decompose ${E}_{p}=U\Delta {V}^{T}$; the updated p-th atom ${\stackrel{~}{d}}_{p}$ is taken as the first column of U, and the corresponding sparse coefficients ${\stackrel{~}{x}}_{R}^{p}$ are taken as the first column of V multiplied by Δ(1,1).
In kernel methods, the dimension of the kernel matrix is determined by the number of training samples instead of the dimension of the input samples. Hence, the kernel function enables efficient operations in a high-dimensional linear space and avoids the ‘curse of dimensionality’ found in traditional pattern analysis algorithms. As a result, the proposed Kernel K-SVD approach deals with nonlinearity without having to know the concrete form of the nonlinear mapping function. As shown in Figure 2, we analyzed the target decomposition performance based on (a) scattering mechanisms, (b) K-SVD and (c) Kernel K-SVD.
Flowchart of the proposed learning-based target decomposition method using the Kernel K-SVD algorithm
The framework of learning-based target decomposition using the Kernel K-SVD algorithm is shown in Figure 3. In this article, we apply the KPCA algorithm to deal with nonlinearity in the proposed algorithm. The basic idea of KPCA is to map the original dataset into a high-dimensional feature space where PCA is used to establish a linear relationship. The kernel function used in KPCA is the GRBF, and the corresponding feature space becomes a Hilbert space of infinite dimension. As shown in Figure 3, we perform the proposed algorithm on three polarimetric matrices, namely the scattering matrix S, covariance matrix C and coherency matrix T, to generate the respective sparse codes. The codes of each matrix are pooled by Spatial Pyramid Matching (SPM) [16], and a linear SVM classifier [17] is finally used to give the classification accuracy.
Task
Find the nonlinear dictionary and sparse compositions to represent the data samples Y ∈ R^{N×M} in the kernel feature space.
Set J = 1. Repeat until convergence:
Sparse coding in the kernel feature space:
(1)
Perform the OMP algorithm.
(2)
Compute the kernel matrix ${K}_{ij}=\left\{\kappa ({x}_{i},{x}_{j}),i,j=1,2,\dots ,M\right\}$, where $\kappa ({x}_{i},{x}_{j})=\langle \phi \left({x}_{i}\right),\phi \left({x}_{j}\right)\rangle $.
(3)
Compute the eigenvalue decomposition of the kernel matrix, Kα = λα, where λ is an eigenvalue of the matrix and α is the corresponding eigenvector.
(4)
Normalize the eigenvectors so that ${\alpha}_{i}^{T}{\alpha}_{i}=\frac{1}{{\lambda}_{i}}$; all eigenvalues are sorted in descending order, and λ is the minimum nonzero eigenvalue of the matrix K that is retained.
(5)
Extract the principal components of the test point, ${\stackrel{~}{x}}_{p}=\sum _{j=1}^{M}{\alpha}_{p,j}\kappa ({x}_{i},{x}_{j}),p=1,\dots ,P$, where α_{p,j} is the j-th element of the p-th eigenvector, and generate the sparse matrix $\stackrel{~}{X}=[{\stackrel{~}{x}}_{1},{\stackrel{~}{x}}_{2},\dots ,{\stackrel{~}{x}}_{M}]$.
Dictionary update: For each atom ${\stackrel{~}{d}}_{p}$, update it by solving
$$\underset{\stackrel{~}{D}}{min}\parallel Y-\stackrel{~}{D}\stackrel{~}{X}{\parallel}_{F}^{2}\phantom{\rule{1em}{0ex}}\text{s.t.}\parallel {\stackrel{~}{x}}_{i}{\parallel}_{0}\le {T}_{0},\phantom{\rule{1em}{0ex}}\forall i=1,\dots ,M$$(16)
(1)
Compute the overall representation error matrix ${E}_{p}=Y-\sum _{j\ne p}^{P}{\stackrel{~}{d}}_{j}{\stackrel{~}{x}}_{R}^{j}$.
(2)
Apply the SVD decomposition ${E}_{p}=U\Delta {V}^{T}$, update the p-th atom ${\stackrel{~}{d}}_{p}$, and compute the corresponding sparse coefficients ${\stackrel{~}{x}}_{R}^{p}$.
Set J = J + 1.
Experiment
Experimental setup
The two sets of experimental data were derived from the airborne X-Band single-track PolSAR provided by the 38th Institute of China Electronics Technology Company.
Rice data
The photograph (Figure 4a) is an image of a rice field in Lingshui County, Hainan Province, China. The original picture is 2,048×2,048 pixels at 1×1 resolution. We manually labeled the corresponding ground-truth image (Figure 4b) using ArcGIS software with five classes, namely rice 1, rice 2, rice 3, rice 4 and rice 5, according to different growth periods determined after investigation by the authors. In our experiment, we sampled the data at 683×683 pixels.
Orchard data
The photograph (Figure 5a) is an image of an orchard in Lingshui County, Hainan Province, China. The original picture is 2,200×2,400 pixels. The ground objects we are interested in are mango 1, mango 2, mango 3, betelnut and longan, which are identified by different colors in the labeled image (Figure 5b). The three different types of mango represent different growth periods. In our experiment, we sampled the data at 440×480 pixels.
Experimental process
The proposed learning-based algorithm aims to perform the Kernel K-SVD decomposition on the scattering matrix [S], covariance matrix [C] and coherency matrix [T]. In general, the scattering intensities of four different channels, namely HH, HV, VH and VV, are treated as the components of the scattering matrix [S]. Since the reciprocity theorem ensures that the scattering intensity of the HV channel equals that of VH, we can represent the scattering matrix of each pixel as a three-dimensional vector. The covariance matrix [C] and coherency matrix [T] can also each be represented as a nine-dimensional vector under the reciprocity theorem. In this article, we only take the amplitude information of the matrices [S], [C] and [T] into consideration, owing to the complexity of the proposed decomposition method.
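The per-pixel vectorization of [S] described above can be sketched as follows; this is our own illustration of how reciprocity collapses the four channels into three amplitude features:

```python
import numpy as np

def pixel_vector_from_S(S):
    """Three-dimensional amplitude vector for one pixel under the reciprocity
    theorem (S_HV == S_VH): keep only |S_HH|, |S_HV| and |S_VV|."""
    return np.abs(np.array([S[0, 0], S[0, 1], S[1, 1]]))

# Example pixel: a symmetric scattering matrix
S = np.array([[1.0 + 0.2j, 0.1j],
              [0.1j, 0.5 - 0.1j]])
v = pixel_vector_from_S(S)
```

Stacking these vectors over all pixels yields the sample matrix Y that the Kernel K-SVD decomposition operates on; the [C] and [T] cases are analogous with nine amplitude entries.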
In the experiment, we first treat each pixel of the image as a vector of three or nine elements, based on which the proposed Kernel K-SVD algorithm performs decomposition and generates an overcomplete dictionary of a certain size. Then, we combine the sparse codes with spatial information using a three-level SPM to get the final spatial pyramid features of the image. Finally, a simple linear SVM classifier is used to test the classification performance. The grid sizes of the SPM are 1×1, 2×2 and 4×4. In each region of the spatial pyramid, the sparse codes are pooled together to form a new feature. There are three kinds of pooling methods, namely max pooling (Max) [18], the square root of mean squared statistics (Sqrt), and the mean of absolute values (Abs). Due to the presence of speckle noise in SAR images, this article chooses Abs as the pooling function.
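The three-level pyramid with Abs pooling can be sketched as follows; this is our own minimal illustration, assuming the per-pixel sparse codes are arranged as an (h, w, K) array:

```python
import numpy as np

def spm_pool_abs(codes):
    """Three-level spatial pyramid (1x1, 2x2, 4x4 grids) with mean-of-absolute-
    values (Abs) pooling over each cell, concatenated into one feature vector."""
    h, w, K = codes.shape
    feats = []
    for grid in (1, 2, 4):                    # the three pyramid levels
        for i in range(grid):
            for j in range(grid):
                cell = codes[i*h//grid:(i+1)*h//grid,
                             j*w//grid:(j+1)*w//grid, :]
                feats.append(np.mean(np.abs(cell), axis=(0, 1)))  # Abs pooling
    return np.concatenate(feats)              # length K * (1 + 4 + 16)

codes = np.random.default_rng(2).standard_normal((8, 8, 32))
f = spm_pool_abs(codes)
```

The resulting vector (21 cells × K dictionary atoms) is what is fed to the linear SVM; swapping the pooling line for a max or root-mean-square reduction gives the Max and Sqrt variants.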
The proposed Kernel K-SVD algorithm is a nonlinear extension of the K-SVD algorithm, obtained by introducing a kernel method between the sparse coding and atom updating stages. We choose the KPCA approach with a Gaussian kernel function as our method. The width parameter of the kernel function is 0.5, and the training ratio in KPCA is 10% of the sparse coefficient matrix. All the experiments are averaged over a ratio of 10% training and 90% testing in the linear SVM.
Comparison experiment
To illustrate the efficiency of the Kernel K-SVD algorithm, we devised a comparison experiment of SAR target decompositions based on different scattering mechanisms, including the Pauli, the Krogager, the Cameron, the Freeman, the Four-Component, the OEC, the Barnes, the Holm, the Huynen and the Cloude decompositions. In the comparison experiment, the decomposition coefficients of each polarimetric matrix are processed with SPM, and a linear SVM classifier is also used to generate the classification result. The comparison of features between Kernel K-SVD and the other physical decompositions is shown in Table 1.
Experimental results
Experimental results on rice data
We followed the common experimental setup for the rice data. Table 2 gives the detailed comparison results of target decomposition based on Kernel K-SVD and other scattering mechanisms under different polarimetric matrices. Figure 4c,d shows the classification results based on the Barnes and Kernel K-SVD_{[S]} decompositions. As shown in Table 2, for matrix [S], the improvements of Kernel K-SVD over the other three traditional decompositions are 20.9, 3.6 and 1.5%. For matrix [C], Kernel K-SVD cannot achieve a higher classification result than the Four-Component and the Freeman decompositions, particularly for rice 2 and rice 5. For matrix [T], the classification accuracy of Kernel K-SVD is 0.4% lower than that of the Barnes decomposition.
Experimental results on orchard data
We also tested our algorithm on the orchard data. Table 3 reports the classification accuracies based on Kernel K-SVD and scattering mechanisms under the different matrices, and Figure 5c,d shows the classification results based on the Four-Component and Kernel K-SVD_{[S]} decompositions. From Table 3, the decomposition based on Kernel K-SVD again achieves much better performance than the decompositions based on scattering mechanisms under matrices [S], [C] and [T], respectively. Compared with the best physical decomposition on each polarimetric matrix, the improvements of Kernel K-SVD are 7.3, 7.6 and 6.1%, respectively. From Tables 2 and 3, we find that the Coherency and Cloude decompositions are not able to achieve a satisfactory classification for either set of data. The reason may be that the corresponding scattering mechanisms are not associated with the categories in the rice and orchard data. As we can see, it is necessary to take such an association into account in traditional decompositions. However, Kernel K-SVD can always show acceptable accuracy for different ground objects without considering this relationship, owing to its adaptive learning-based nature. In addition, we can also find that the classification based on the proposed algorithm achieves better results on matrix [S] than on matrices [C] and [T].
Conclusion
This article presents a learning-based target decomposition method based on the Kernel K-SVD model for the classification of SAR images. Experimental results on two sets of SAR data indicate that the proposed method outperforms traditional decompositions based on scattering mechanisms in the classification of SAR images.
The success of the proposed Kernel K-SVD algorithm is largely due to the following reasons: first, Kernel K-SVD is an extension of the K-SVD method that inherits its adaptive characteristics for dictionary learning; second, KPCA is used to capture nonlinearity by projecting the sparse coefficients into a kernel feature space, in which the zero coefficients are eliminated through the inner product; finally, Kernel K-SVD constructs a curvilinear coordinate system for target decomposition that follows the flow of nonlinear feature points. We will further apply this method to the classification of different land covers in future work.
References
 1.
Maitre H: Traitement des Images de Radar à Synthèse d’Ouverture. (Hermès Science Publication, Lavoisier, 2001)
 2.
Pottier E, Saillard J: On radar polarization target decomposition theorems with application to target classification by using network method. Proceedings of the International Conference on Antennas and Propagation, vol. 1, (York, England, 1991), pp. 265–268
 3.
Hoekman DH, Vissers MAM, Tran TN: Unsupervised full-polarimetric SAR data segmentation as a tool for classification of agricultural areas. IEEE J. Sel. Top. Appl. Earth Obser. Remote Sens 2011, 4(2):402–411.
 4.
Skriver H, Mattia F, Satalino G, Balenzano A, Pauwels VRN, Verhoest NEC, Davidson M: Crop classification using short-revisit multi-temporal SAR data. IEEE J. Sel. Top. Appl. Earth Obser. Remote Sens 2011, 4(2):423–431.
 5.
Yamaguchi Y, Yajima Y, Yamada H: A four-component decomposition of PolSAR images based on the coherency matrix. IEEE Geosci. Remote Sens. Lett 2006, 2(2):292–296.
 6.
Lee H, Battle A, Raina R, Ng A: Efficient sparse coding algorithms. Proceedings of the Neural Information Processing Systems, (Vancouver, B. C., Canada, 2006), pp. 4–9
 7.
Aharon M, Elad M, Bruckstein A: K-SVD: an algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans. Signal Process 2006, 54(11):4311–4322.
 8.
Bottou L: Online Learning and Stochastic Approximations. Online Learning in Neural Networks. (Cambridge University Press, Cambridge, 1998), pp. 9–42
 9.
Scholkopf B, Mika S, Burges CJC, Knirsch P, Muller KR, Ratsch G, Smola AJ: Input space versus feature space in kernel-based methods. IEEE Trans. Neural Netw 1999, 10(5):1000–1017. 10.1109/72.788641
 10.
Smola A, Scholkopf B, Muller KR: Nonlinear component analysis as a kernel eigenvalue problem. Neural Comput 1998, 10(6):1299–1319.
 11.
Bach FR, Jordan MI: Kernel independent component analysis. J. Mach. Learn. Res 2002, 3: 1–48.
 12.
Liu Q, Lu H, Ma S: Improving kernel Fisher discriminant analysis for face recognition. IEEE Trans. Circ. Syst. Video Technol 2004, 14(1):42–49. 10.1109/TCSVT.2003.818352
 13.
Ulaby FT, Elachi C: Radar Polarimetry for Geoscience Applications. (Artech House, Norwood, 1990)
 14.
Mallat S, Zhang Z: Matching pursuits with time-frequency dictionaries. IEEE Trans. Signal Process 1993, 41(12):3397–3415. 10.1109/78.258082
 15.
Pati YC, Rezaiifar R, Krishnaprasad PS: Orthogonal matching pursuit: recursive function approximation with applications to wavelet decomposition. Proceedings of Asilomar Conference on Signals, Systems and Computers, vol. 1, (California, USA, 1993), pp. 40–44
 16.
Lazebnik S, Schmid C, Ponce J: Beyond bags of features: spatial pyramid matching for recognizing natural scene categories. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, (New York, 2006), pp. 2169–2178
 17.
Chang C-C: LIBSVM: a library for support vector machines. 2011. Available: http://www.csie.ntu.edu.tw/cjlin/libsvm
 18.
Yang J, Yu K, Gong Y, Huang T: Linear spatial pyramid matching using sparse coding for image classification. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, (Miami, 2009), pp. 1794–1801
Acknowledgements
This study was supported by the National Basic Research Program of China (973 program) under Grant No. 2011CB707102, NSFC grant (No. 60702041, 41174120), the China Postdoctoral Science Foundation funded project and the LIESMARS Special Research Funding.
Additional information
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
He, C., Liu, M., Liao, Z.-x. et al. A learning-based target decomposition method using Kernel K-SVD for polarimetric SAR image classification. EURASIP J. Adv. Signal Process. 2012, 159 (2012). https://doi.org/10.1186/1687-6180-2012-159
Keywords
 Synthetic Aperture Radar
 Synthetic Aperture Radar Image
 Sparse Code
 Kernel Matrix
 Kernel Principal Component Analysis