 Research
 Open Access
Kernel generalized neighbor discriminant embedding for SAR automatic target recognition
EURASIP Journal on Advances in Signal Processing volume 2014, Article number: 72 (2014)
Abstract
In this paper, we propose a new supervised feature extraction algorithm for synthetic aperture radar automatic target recognition (SAR ATR), called generalized neighbor discriminant embedding (GNDE). Based on manifold learning, GNDE integrates class and neighborhood information to enhance the discriminative power of the extracted features. In addition, a kernelized counterpart of this algorithm is proposed, called kernel GNDE (KGNDE). Experiments in this paper show that the proposed algorithms achieve better recognition performance than PCA and KPCA.
1 Introduction
Synthetic aperture radar (SAR) has been widely used in many fields, such as terrain surveying, marine monitoring, and earth observation, because of its day-and-night, all-weather operation, penetrating ability, and high resolution. SAR automatic target recognition (ATR) is an essential technology in SAR image interpretation and analysis.
Generally, the procedure of SAR ATR can be divided into four major steps: detection, discrimination, feature extraction, and recognition. The goal of detection is to locate the potential region of interest. In the discrimination phase, the region of interest is processed to remove the false alarms. The feature extraction is one of the crucial steps for SAR ATR, which can reduce the dimensionality of SAR images greatly and improve recognition efficiency. Finally, the extracted features of the target clips are recognized in the last stage of SAR ATR system.
Many feature extraction techniques have been proposed. Principal component analysis (PCA) and linear discriminant analysis (LDA) have been used for SAR image feature extraction [1, 2] because of their simplicity and effectiveness. Both are based on a global linear structure and need to transform a two-dimensional image into a one-dimensional vector. This causes a large computational burden, since feature extraction is then implemented in a very high-dimensional vector space.
In addition, the kernel trick [3, 4] has been applied to extend linear feature extraction algorithms to nonlinear ones. These methods map the input space to a higher- or even infinite-dimensional inner product space through a nonlinear operator, implemented via a kernel function. Kernel PCA (KPCA) [5] and kernel LDA (KLDA) [6] are representative examples.
Recently, the manifold learning algorithm locality preserving projection (LPP) was proposed [7]. However, it may not be suitable for SAR ATR, because its minimization formulation discards the larger principal components.
Based on the manifold learning method, we design a neighborhood geometry and an objective function using the average within-class dispersion of the dataset, and then calculate the linear embedding map according to category information. We name this method generalized neighbor discriminant embedding (GNDE). To reduce the computational burden, a kernel function is employed to replace the high-dimensional vector inner products; this yields the kernel GNDE (KGNDE) method mainly discussed in this paper. It is expected to handle the nonlinear problem better and improve the target recognition rate in SAR ATR.
The rest of this paper is organized as follows: We introduce GNDE in Section 2, and KGNDE is proposed in Section 3. In Section 4, we verify GNDE and KGNDE on the MSTAR database. Finally, we conclude the paper in Section 5.
2 Generalized neighbor discriminant embedding
Assume that M is a manifold embedded in the Euclidean space ℝ^{m}. Given a training set {x_{ i } ∈ ℝ^{m}, i = 1, 2, …, N} ⊂ M and their corresponding labels {y_{ i } ∈ {1, 2, …, c}, i = 1, 2, …, N}, where N denotes the total number of training samples and c is the number of classes in the training set. By integrating class and neighborhood information, GNDE aims to find a linear embedding map V ∈ ℝ^{m × l} : x_{ i } ∈ ℝ^{m} → z_{ i } = V^{T}x_{ i } ∈ ℝ^{l} (i = 1, 2, …, N), l ≪ m, such that samples in the same class keep their neighborhood relations while samples in different classes are kept apart. The objective function of GNDE is as follows:
W = [w_{ ij }] ∈ ℝ^{N × N} is the affinity weight matrix [8], which is defined as
where t_{1} and t_{2} are constants, and ϵ_{1} and ϵ_{2} define the radii of the local neighborhoods.
Equation 1 shows that maximizing J_{ V } pushes samples from different classes apart while keeping samples of the same class close in the feature space, which is helpful for discrimination.
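Since the explicit form of (2) is not reproduced here, the following sketch assumes a common heat-kernel weighting in which same-class neighbors receive positive weights and different-class neighbors negative ones; the parameters `t1`, `t2`, `eps1`, and `eps2` mirror the constants above. This is an illustrative assumption, not the paper's exact definition:

```python
import numpy as np

def affinity_matrix(X, y, t1=1.0, t2=1.0, eps1=np.inf, eps2=np.inf):
    """Sketch of a class-aware affinity weight matrix W.

    X: m x N data matrix (columns are samples); y: length-N label array.
    Assumed form: same-class neighbors within radius eps1 attract,
    different-class neighbors within radius eps2 repel.
    """
    N = X.shape[1]
    W = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            d2 = np.sum((X[:, i] - X[:, j]) ** 2)  # squared distance
            if y[i] == y[j] and d2 < eps1:
                W[i, j] = np.exp(-d2 / t1)          # attract same-class neighbors
            elif y[i] != y[j] and d2 < eps2:
                W[i, j] = -np.exp(-d2 / t2)         # repel different-class neighbors
    return W
```

Because the weighting depends only on the pairwise distance and the label pair, the resulting W is symmetric, as required for the graph construction below.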
Referring to (1) and (2), we can infer that
where
$$\mathbf{S}=\begin{bmatrix}w_{11} & w_{12} & \cdots & w_{1N}\\ 0 & w_{22} & \ddots & \vdots\\ 0 & \ddots & \ddots & w_{(N-1)N}\\ 0 & 0 & \cdots & w_{NN}\end{bmatrix}\in\mathbb{R}^{N\times N},\quad D_{ii}=\sum_{i\le j}w_{ij},\quad \mathbf{X}=\left[\mathbf{x}_{1},\mathbf{x}_{2},\dots,\mathbf{x}_{N}\right]\in\mathbb{R}^{m\times N},$$
D = diag(D_{11}, D_{22}, ⋯, D_{ NN }) ∈ ℝ^{N × N}, and L = D − S ∈ ℝ^{N × N} is a Laplacian matrix. We define an objective matrix M_{ V }
Then
Impose an additional orthogonality constraint, V^{T}V = E_{ l×l }, where E_{ l×l } is the l × l identity matrix. Finally, the optimization problem reduces to finding:
Therefore, the optimal embedding map V = [v_{1}, v_{2}, …, v_{ l }] is the set of orthogonal eigenvectors of M_{ V } corresponding to the l largest eigenvalues.
GNDE is formally stated as follows:

1)
Compute affinity weight matrix W according to (2).

2)
According to (3) and (4), compute the objective matrix M _{ V }, solve the maximization problem in (7), and obtain the optimal embedding map V.

3)
Feature extraction: given a testing sample x _{ T }, the extracted feature is z _{ T } = V ^{T}x _{ T }.
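The steps above can be sketched in NumPy as follows. The construction of S, D, and L follows the definitions after (3); the explicit symmetrization of M_V is a numerical convenience for the eigensolver, not part of the paper's derivation:

```python
import numpy as np

def gnde_fit(X, W, l):
    """Sketch of GNDE steps 2-3: build the Laplacian from W, form
    M_V = X L X^T, and keep the l leading eigenvectors."""
    S = np.triu(W)                            # upper-triangular S (Eq. 3)
    D = np.diag(S.sum(axis=1))                # D_ii = sum_{i<=j} w_ij
    L = D - S                                 # Laplacian matrix
    M = X @ L @ X.T                           # objective matrix M_V
    M = (M + M.T) / 2                         # symmetrize for a stable eigensolve
    vals, vecs = np.linalg.eigh(M)
    idx = np.argsort(vals)[::-1][:l]          # l largest eigenvalues
    return vecs[:, idx]                       # V: m x l, orthonormal columns

def gnde_transform(V, x):
    """Extract the feature of one sample: z = V^T x."""
    return V.T @ x
```

Because `np.linalg.eigh` returns orthonormal eigenvectors, the constraint V^{T}V = E is satisfied automatically.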
3 Kernel generalized neighbor discriminant embedding
The kernel trick is widely used to enhance the classification power of linear dimensionality reduction methods. GNDE can be further improved by a kernel function; the resulting method is named KGNDE. Assume that a nonlinear mapping φ : x_{ i } ∈ ℝ^{m} → φ(x_{ i }) ∈ ℝ^{H} is introduced, where H is a certain high-dimensional feature space.
The main purpose of KGNDE is to find an embedding map Φ ∈ ℝ^{H × l} : x_{ i } ∈ ℝ^{m} → k(z_{ i }) = Φ^{T}φ(x_{ i }) ∈ ℝ^{l} (i = 1, 2, …, N), l ≪ m. According to the kernel trick, Φ = [Φ_{1}, Φ_{2}, ⋯, Φ_{ l }], where
$$\mathbf{\Phi}_{k}=\sum_{p=1}^{N}\alpha_{p}^{k}\,\varphi\left(\mathbf{x}_{p}\right),\quad \alpha_{p}^{k}\in\mathbb{R}.$$
The objective function of KGNDE is as follows:
where w_{ ij } is defined as (2).
Based on kernel theory, each element of kernel matrix K = [k_{ ij }] ∈ ℝ^{N × N} is as follows:
Sometimes, a Gaussian or polynomial kernel function is used instead of (9). Furthermore, we can rewrite k(z_{ i }):
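As a sketch, the kernel matrix can be computed in one matrix product over the m × N data matrix; the polynomial variant follows the form used later in the experiments (its exponent `mu` is a parameter):

```python
import numpy as np

def linear_kernel_matrix(X):
    """Kernel matrix under the identity feature map: k_ij = x_i^T x_j."""
    return X.T @ X

def polynomial_kernel_matrix(X, mu=5):
    """Polynomial alternative: k_ij = (x_i^T x_j + 1)^mu."""
    return (X.T @ X + 1.0) ** mu
```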
where A = [α_{1}, α_{2}, …, α_{ l }] ∈ ℝ^{N × l}, α_{ i } = [α_{1}^{i}, α_{2}^{i}, …, α_{ N }^{i}]^{T}, and K_{•i} = [k_{1i}, k_{2i}, ⋯, k_{ Ni }]^{T}.
According to (8) and (10), we can get
Define an objective matrix M_{ K },
We can infer that
Impose the additional constraint:
Finally, the objective function can be written as:
Therefore, the optimal embedding map A = [α_{1}, α_{2}, …, α_{ l }] is the set of orthogonal eigenvectors of M_{ K } corresponding to the l largest eigenvalues.
KGNDE is formally stated as follows:

1)
Compute the affinity weight matrix W according to (2), and the kernel matrix K according to (9).

2)
According to (11) and (12), compute the objective matrix M _{ K }, solve the maximization problem in (15), and obtain the optimal embedding map A.

3)
Feature extraction: given a testing sample x _{ T }, the extracted feature is k(z _{ T }) = A ^{T}K _{• T}.
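These steps mirror the GNDE sketch with the kernel matrix K in place of X. Since the exact form of M_K from (11) and (12) is not reproduced in the text, the product K L K^T below is an assumption based on the symmetry with the linear case:

```python
import numpy as np

def kgnde_fit(K, W, l):
    """Sketch of KGNDE steps 2-3 on an N x N kernel matrix K.
    Assumes M_K = K L K^T by analogy with M_V = X L X^T."""
    S = np.triu(W)                            # upper-triangular S
    D = np.diag(S.sum(axis=1))                # D_ii = sum_{i<=j} w_ij
    L = D - S
    M = K @ L @ K.T                           # assumed objective matrix M_K
    M = (M + M.T) / 2                         # symmetrize for a stable eigensolve
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, np.argsort(vals)[::-1][:l]]  # A: N x l, orthonormal columns

def kgnde_transform(A, k_col):
    """k_col: kernel evaluations of a test sample against all N
    training samples; returns the extracted feature A^T k_col."""
    return A.T @ k_col
```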
Now, we consider the computational complexity of the proposed algorithms. In most cases, the number of training samples is much smaller than the dimension of each sample (N ≪ m). Therefore, like most other feature extraction methods, the computational bottlenecks of GNDE and KGNDE are the generalized eigenvalue problems, whose computational complexities are O(m^{3}) and O(N^{3}), respectively.
4 Experiment
We use the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset to evaluate GNDE and KGNDE. The training dataset contains SAR images at a 17° depression angle, and the testing dataset contains SAR images at a 15° depression angle. Both datasets cover the full 0° ~ 360° aspect range. Table 1 lists detailed information about the types and numbers of targets in the training and testing datasets [9].
4.1 Experiment steps

1)
Image preprocessing: Speckle suppression and target segmentation are used to remove speckle and background clutter, respectively. Then we use gray enhancement based on a power function to enhance the information in the dataset. Finally, we obtain the dataset {x_{ i } ∈ ℝ^{m}, i = 1, 2, …, N}, called DATA, where x_{ i } denotes each SAR image vector and its dimension is m = 61 × 61 = 3,721. The optical images and the corresponding SAR images of the three targets in the MSTAR dataset are shown in Figures 1 and 2. Images of the targets after processing are shown in Figure 3.

2)
Feature extraction: Both GNDE and KGNDE are utilized to extract features from DATA. To examine the recognition performance of these methods, PCA and KPCA are also used for feature extraction. In this paper, both KPCA and KGNDE use a polynomial kernel function, as shown in (16):
$$k_{ij}=\left(\mathbf{x}_{i}^{T}\mathbf{x}_{j}+1\right)^{\mu}\qquad(16)$$
where μ is the kernel parameter. In this paper, we choose μ = 5.

3)
Classification: A nearest neighbor classifier (NNC) [10] is utilized to classify the extracted features.
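The classification step above amounts to assigning each test feature the label of its closest training feature; a minimal sketch using Euclidean distance (the distance metric is an assumption, as the paper does not specify one):

```python
import numpy as np

def nnc_classify(Z_train, y_train, z_test):
    """Nearest neighbor classifier: Z_train is l x N (columns are
    training features), y_train holds their labels; returns the label
    of the training feature closest to z_test."""
    d = np.linalg.norm(Z_train - z_test[:, None], axis=0)  # distance to each column
    return y_train[int(np.argmin(d))]
```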
4.2 Experiment results
Firstly, we compare GNDE with PCA. As shown in Figure 4, GNDE performs better than PCA. PCA is an unsupervised method based on a linear structure, while GNDE is a supervised method based on a manifold structure. A global linear structure is not well suited to high-dimensional datasets, whereas a manifold structure is. Moreover, the supervised method encourages clustering, which makes classification easier. Therefore, GNDE is superior to PCA.
Secondly, KGNDE is compared with KPCA. From Figure 5, we can see that KGNDE also performs better than KPCA. The kernel trick can handle nonlinear problems in a high-dimensional dataset, but KGNDE is a supervised method based not only on the kernel trick but also on the manifold structure. The manifold structure can fit the real structure of the dataset, and the supervised method benefits classification. So, KGNDE performs better than KPCA.
Finally, as shown in Table 2, KGNDE performs slightly worse than GNDE. The recognition performance of the kernel trick is closely related to the kernel function; a polynomial function may not be suitable for feature extraction on DATA.
5 Conclusions
Feature extraction is the key step in SAR ATR. In this paper, a new feature extraction algorithm and its kernel counterpart are proposed. Based on the manifold structure, both GNDE and KGNDE learn a linear transformation that achieves a low-dimensional embedding of the dataset. Compared with a global linear structure, manifold methods can detect the underlying nonlinear structure and preserve local information, which makes them more robust. In addition, GNDE and KGNDE are supervised methods, so the extracted features cluster better than those of unsupervised methods, which is helpful for classification.
References
 1.
Mishra AK, Mulgrew B: Bistatic SAR ATR using PCA-based features. In Proceedings of SPIE 6234, Automatic Target Recognition XVI; 18 May 2006. doi:10.1117/12.664117
 2.
Mishra AK: Validation of PCA and LDA for SAR ATR. Paper presented at IEEE Region 10 Conference. Guwahati: IIT Guwahati; 19–21 Nov 2008
 3.
Gunn SR: Support Vector Machines for Classification and Regression. Technical Report, University of Southampton; 1998
 4.
Zhao Q, Principe JC: Support vector machines for SAR automatic target recognition. IEEE Trans. Aerosp. Electron. Syst. 2001, 37(2): 643-654.
 5.
Li Y, Zhang XQ, Bai BD, Zhang YN: Information Compression and Speckle Reduction for Multifrequency Polarimetric SAR Imagery using KPCA. Paper presented at the 2007 International Conference on Machine Learning and Cybernetics. Xi’an: Northwest Polytechnical University; 19–22 Aug 2007
 6.
Han P, Wu RB, Wang YH, Wang ZH: An efficient SAR ATR approach. Paper presented at the 2003 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP). Tianjin: Tianjin University; 6–10 April 2003
 7.
He X, Niyogi P: Locality preserving projections. In Advances in Neural Information Processing Systems 16 (NIPS); 2003
 8.
Yan S, Xu D, Zhang BY, Zhang HJ, Yang Q, Li S: Graph embedding and extensions: a general framework for dimensionality reduction. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29(1): 40-51. doi:10.1109/TPAMI.2007.250598
 9.
Ross T, Worrell S, Velten V, Mossing J, Bryant M: Standard SAR ATR evaluation experiments using the MSTAR public release data set. In Proceedings of SPIE, Algorithms for Synthetic Aperture Radar Imagery V; 15 September 1998
 10.
Cover TM: Estimation by the nearest neighbor rule. IEEE Trans. Inf. Theory 1968, 14(1): 50-55.
Acknowledgement
This research was supported by the National Natural Science Foundation of China (No. 61201272).
Competing interests
The authors declare that they have no competing interests.
Huang, Y., Pei, J., Yang, J. et al. Kernel generalized neighbor discriminant embedding for SAR automatic target recognition. EURASIP J. Adv. Signal Process. 2014, 72 (2014). https://doi.org/10.1186/16876180201472
Keywords
 Synthetic aperture radar
 Automatic target recognition
 Feature extraction
 Manifold learning