# Kernel generalized neighbor discriminant embedding for SAR automatic target recognition

Yulin Huang^{1}, Jifang Pei^{1} (email author), Jianyu Yang^{1}, Tao Wang^{1}, Haiguang Yang^{1} and Bing Wang^{1}

*EURASIP Journal on Advances in Signal Processing* **2014**:72

https://doi.org/10.1186/1687-6180-2014-72

© Huang et al.; licensee Springer. 2014

**Received: **21 February 2014

**Accepted: **1 May 2014

**Published: **19 May 2014

## Abstract

In this paper, we propose a new supervised feature extraction algorithm for synthetic aperture radar automatic target recognition (SAR ATR), called generalized neighbor discriminant embedding (GNDE). Based on manifold learning, GNDE integrates class and neighborhood information to enhance the discriminative power of the extracted features. In addition, a kernelized counterpart of this algorithm, kernel GNDE (KGNDE), is also proposed. The experiments in this paper show that the proposed algorithms achieve better recognition performance than PCA and KPCA.


## 1 Introduction

Synthetic aperture radar (SAR) has been widely used in many fields, such as terrain surveying, marine monitoring, and earth observation, because of its day-and-night, all-weather operation, penetrating ability, and high resolution. SAR automatic target recognition (ATR) is the essential technology in SAR image interpretation and analysis.

Generally, the procedure of SAR ATR can be divided into four major steps: detection, discrimination, feature extraction, and recognition. The goal of detection is to locate the potential regions of interest. In the discrimination phase, the regions of interest are processed to remove false alarms. Feature extraction is one of the crucial steps for SAR ATR; it can greatly reduce the dimensionality of SAR images and improve recognition efficiency. Finally, the extracted features of the target clips are recognized in the last stage of the SAR ATR system.

Many feature extraction techniques have been proposed. Principal component analysis (PCA) and linear discriminant analysis (LDA) were used for SAR image feature extraction [1, 2] because of their simplicity and effectiveness. Both of them are based on a global linear structure and need to transform a two-dimensional image into a one-dimensional vector. This causes a large calculation burden, since feature extraction is then implemented in a very high-dimensional vector space.

In addition, the kernel trick [3, 4] has been applied to extend linear feature extraction algorithms to nonlinear ones. These methods transform the input space into a higher- or even infinite-dimensional inner product space using a nonlinear mapping, implemented implicitly through a kernel function. Kernel PCA (KPCA) [5] and kernel LDA (KLDA) [6] describe this in detail.
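To make the kernel trick concrete, the following sketch (not from the paper) verifies for a degree-2 polynomial kernel that a kernel evaluation in the input space equals an inner product under an explicit nonlinear map φ, so the high-dimensional mapping never needs to be computed:

```python
import numpy as np

# The kernel trick: k(x, y) = (x^T y)^2 equals an inner product in a
# higher-dimensional feature space without mapping the data explicitly.
# For 2-D inputs, the explicit map is phi(x) = [x1^2, sqrt(2)*x1*x2, x2^2].

def phi(x):
    """Explicit feature map for the homogeneous degree-2 polynomial kernel."""
    return np.array([x[0] ** 2, np.sqrt(2) * x[0] * x[1], x[1] ** 2])

x = np.array([1.0, 2.0])
y = np.array([3.0, 0.5])

k_trick = (x @ y) ** 2          # kernel evaluation in input space
k_explicit = phi(x) @ phi(y)    # inner product in feature space

print(k_trick, k_explicit)      # both equal 16 up to floating point
```

The same identity is what lets KPCA, KLDA, and the KGNDE of section 3 work entirely with an N × N kernel matrix instead of explicit feature vectors.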

Recently, the manifold learning algorithm locality preserving projection (LPP) was proposed [7]. However, it may not be suitable for SAR ATR, because it solves a minimization problem, which results in discarding the larger principal components.

Based on the manifold learning method, we design a neighborhood geometry and an objective function using the within-class dispersion of the dataset, and then calculate the linear embedding map according to the category information. We name this method generalized neighbor discriminant embedding (GNDE). To reduce the calculation burden, a kernel function is employed to replace the high-dimensional inner products; this yields the kernel GNDE (KGNDE) method mainly discussed in this paper. It is expected to solve the nonlinear problem better and improve the target recognition rate in SAR ATR.

The rest of this paper is organized as follows: Section 2 introduces GNDE, and Section 3 proposes KGNDE. In Section 4, we verify GNDE and KGNDE on the MSTAR database. Finally, Section 5 concludes the paper.

## 2 Generalized neighbor discriminant embedding

Let *M* be a manifold embedded in the Euclidean space *ℝ*^{m}. Given a training set {**x**_{i} ∈ *ℝ*^{m}, *i* = 1, 2, …, *N*} ⊂ *M* and the corresponding labels {*y*_{i} ∈ [1, 2, …, *c*], *i* = 1, 2, …, *N*}, where *N* denotes the total number of training samples and *c* is the total number of classes in the training set. By integrating class and neighborhood information, GNDE aims at finding a linear embedding map **V** ∈ *ℝ*^{m × l}: **x**_{i} ∈ *ℝ*^{m} → **z**_{i} = **V**^{T}**x**_{i} ∈ *ℝ*^{l} (*i* = 1, 2, …, *N*), *l* ≪ *m*, such that samples in the same class keep their neighborhood information while samples in different classes stay apart from each other. The objective function of GNDE is as follows:

$$J_{\mathbf{V}} = \sum_{i,j} \left\| \mathbf{V}^{T}\mathbf{x}_{i} - \mathbf{V}^{T}\mathbf{x}_{j} \right\|^{2} w_{ij} \qquad (1)$$

where **W** = [*w*_{ij}] ∈ *ℝ*^{N × N} is the affinity weight matrix [8], which is defined as

$$w_{ij} = \begin{cases} -\, e^{-\left\| \mathbf{x}_{i} - \mathbf{x}_{j} \right\|^{2}/t_{1}}, & y_{i} = y_{j} \ \text{and} \ \left\| \mathbf{x}_{i} - \mathbf{x}_{j} \right\|^{2} < \epsilon_{1} \\ e^{-\left\| \mathbf{x}_{i} - \mathbf{x}_{j} \right\|^{2}/t_{2}}, & y_{i} \neq y_{j} \ \text{and} \ \left\| \mathbf{x}_{i} - \mathbf{x}_{j} \right\|^{2} < \epsilon_{2} \\ 0, & \text{otherwise} \end{cases} \qquad (2)$$

where *t*_{1} and *t*_{2} are constants, and *ϵ*_{1} and *ϵ*_{2} define the radii of the local neighborhoods.

Equation 1 shows that maximizing *J*_{V} pushes samples from different classes apart from each other while keeping samples of the same class close in the feature space, which is helpful for discrimination.
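As an illustration, here is a minimal numpy sketch of one plausible reading of the weights in (2): heat-kernel weights that are negative for same-class neighbors (pulled together when maximizing *J*_{V}) and positive for different-class neighbors (pushed apart). The function name and the default parameter values are illustrative, not from the paper:

```python
import numpy as np

def affinity_matrix(X, y, t1=1.0, t2=1.0, eps1=1.0, eps2=1.0):
    """Sketch of the affinity weight matrix W of (2).
    X: N x m array (one sample per row); y: length-N label array.
    t1, t2 are the heat-kernel constants; eps1, eps2 the neighborhood radii."""
    N = X.shape[0]
    W = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            d2 = np.sum((X[i] - X[j]) ** 2)   # squared Euclidean distance
            if y[i] == y[j] and d2 < eps1:
                W[i, j] = -np.exp(-d2 / t1)   # same class: keep close
            elif y[i] != y[j] and d2 < eps2:
                W[i, j] = np.exp(-d2 / t2)    # different class: push apart
    return W
```

The matrix is symmetric by construction, since the distance and the class comparison are symmetric in *i* and *j*.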

Let **D** = diag(*D*_{11}, *D*_{22}, ⋯, *D*_{NN}) ∈ *ℝ*^{N × N} with ${D}_{ii}={\sum}_{j}{w}_{ij}$, so that **L** = **D** − **W** ∈ *ℝ*^{N × N} is a Laplacian matrix. With **X** = [**x**_{1}, **x**_{2}, …, **x**_{N}], the objective function can be written as

$$J_{\mathbf{V}} = 2\,\mathrm{tr}\left( \mathbf{V}^{T}\mathbf{X}\mathbf{L}\mathbf{X}^{T}\mathbf{V} \right) \qquad (3)$$

We define an object matrix **M**_{V}

$$\mathbf{M}_{\mathbf{V}} = \mathbf{X}\mathbf{L}\mathbf{X}^{T} \qquad (4)$$

Under the constraint **V**^{T}**V** = **I**_{l×l}, where **I**_{l×l} is the *l* × *l* unit matrix, the optimization problem finally reduces to finding

$$\mathbf{V}^{\ast} = \arg\underset{\mathbf{V}^{T}\mathbf{V} = \mathbf{I}_{l \times l}}{\max}\ \mathrm{tr}\left( \mathbf{V}^{T}\mathbf{M}_{\mathbf{V}}\mathbf{V} \right) \qquad (7)$$

Therefore, the optimal embedding map **V** = [**v**_{1}, **v**_{2}, …, **v**_{l}] is the set of orthogonal eigenvectors of **M**_{V} corresponding to the *l* largest eigenvalues.

The GNDE algorithm is summarized as follows:

- 1) Compute the affinity weight matrix **W** according to (2).
- 2) According to (3) and (4), compute the object matrix **M**_{V}, solve the maximization problem (7), and obtain the optimal embedding map **V**.
- 3) Feature extraction: given a testing sample **x**_{T}, the extracted feature is **z**_{T} = **V**^{T}**x**_{T}.
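The steps above can be sketched with numpy. This is a sketch under the assumptions already stated, i.e. **M**_{V} = **XLX**^{T} with **L** = **D** − **W**; the function name `gnde` and the column-per-sample data layout are illustrative:

```python
import numpy as np

def gnde(X, W, l):
    """Sketch of GNDE: X is m x N (one training sample per column),
    W the N x N affinity weight matrix of (2), l the embedding dimension.
    Returns the m x l embedding map V, whose columns are the orthogonal
    eigenvectors of M_V = X L X^T for the l largest eigenvalues."""
    D = np.diag(W.sum(axis=1))            # degree matrix
    L = D - W                             # graph Laplacian
    M_V = X @ L @ X.T                     # object matrix of (4)
    # eigh assumes a symmetric matrix and returns eigenvalues ascending.
    vals, vecs = np.linalg.eigh((M_V + M_V.T) / 2)
    return vecs[:, ::-1][:, :l]           # eigenvectors of l largest eigenvalues

# Step 3, feature extraction for a test sample x_T:
#   V = gnde(X, W, l);  z_T = V.T @ x_T
```

Because **V** is built from eigenvectors of a symmetric matrix, it automatically satisfies the orthogonality constraint **V**^{T}**V** = **I**_{l×l}.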

## 3 Kernel generalized neighbor discriminant embedding

The kernel function is widely used to enhance the classification ability of linear dimensionality reduction methods. GNDE can be further improved by a kernel function, yielding KGNDE. Assume that a nonlinear mapping *φ* : **x**_{i} ∈ *ℝ*^{m} → *φ*(**x**_{i}) ∈ *ℝ*^{H} is introduced, where *H* is the dimension of a certain high-dimensional feature space.

KGNDE aims at finding an embedding map **Φ** ∈ *ℝ*^{H × l}: **x**_{i} ∈ *ℝ*^{m} → *k*(**z**_{i}) = **Φ**^{T}*φ*(**x**_{i}) ∈ *ℝ*^{l} (*i* = 1, 2, …, *N*), *l* ≪ *m*. According to the kernel trick, **Φ** = [**Φ**_{1}, **Φ**_{2}, ⋯, **Φ**_{l}], where ${\mathbf{\Phi}}_{k}={\sum}_{p=1}^{N}{\alpha}_{p}^{k}\varphi\left({\mathbf{x}}_{p}\right)$, ${\alpha}_{p}^{k}\in \mathbb{R}$. The objective function of KGNDE is as follows:

$$J_{\mathbf{\Phi}} = \sum_{i,j} \left\| \mathbf{\Phi}^{T}\varphi(\mathbf{x}_{i}) - \mathbf{\Phi}^{T}\varphi(\mathbf{x}_{j}) \right\|^{2} w_{ij} \qquad (8)$$

where *w*_{ij} is defined as in (2).

The kernel matrix **K** = [*k*_{ij}] ∈ *ℝ*^{N × N} is given by

$$k_{ij} = \varphi(\mathbf{x}_{i})^{T}\varphi(\mathbf{x}_{j}) \qquad (9)$$

Then the extracted feature *k*(**z**_{i}) can be written as

$$k(\mathbf{z}_{i}) = \mathbf{A}^{T}\mathbf{K}_{\bullet i} \qquad (10)$$

where $\mathbf{A}=\left[{\boldsymbol{\alpha}}_{1},{\boldsymbol{\alpha}}_{2},\dots ,{\boldsymbol{\alpha}}_{l}\right]\in {\mathbb{R}}^{N\times l}$, ${\boldsymbol{\alpha}}_{i}={\left[{\alpha}_{1}^{i},{\alpha}_{2}^{i},\dots ,{\alpha}_{N}^{i}\right]}^{T}$, and ${\mathbf{K}}_{\bullet i}={\left[{k}_{1i},{k}_{2i},\cdots ,{k}_{Ni}\right]}^{T}$.

Substituting (10) into (8), the objective function becomes

$$J_{\mathbf{\Phi}} = 2\,\mathrm{tr}\left( \mathbf{A}^{T}\mathbf{K}\mathbf{L}\mathbf{K}\mathbf{A} \right) \qquad (11)$$

We define an object matrix **M**_{K}

$$\mathbf{M}_{\mathbf{K}} = \mathbf{K}\mathbf{L}\mathbf{K} \qquad (12)$$

Under the constraint **A**^{T}**A** = **I**_{l×l}, the optimization problem finally reduces to finding

$$\mathbf{A}^{\ast} = \arg\underset{\mathbf{A}^{T}\mathbf{A} = \mathbf{I}_{l \times l}}{\max}\ \mathrm{tr}\left( \mathbf{A}^{T}\mathbf{M}_{\mathbf{K}}\mathbf{A} \right) \qquad (15)$$

Therefore, the optimal embedding map **A** = [**α**_{1}, **α**_{2}, …, **α**_{l}] is the set of orthogonal eigenvectors of **M**_{K} corresponding to the *l* largest eigenvalues.

The KGNDE algorithm is summarized as follows:

- 1) Compute the affinity weight matrix **W** according to (2) and the kernel matrix **K** according to (9).
- 2) According to (11) and (12), compute the object matrix **M**_{K}, solve the maximization problem (15), and obtain the optimal embedding map **A**.
- 3) Feature extraction: given a testing sample **x**_{T}, the extracted feature is *k*(**z**_{T}) = **A**^{T}**K**_{•T}, where **K**_{•T} = [*k*_{1T}, *k*_{2T}, ⋯, *k*_{NT}]^{T} collects the kernel evaluations between the training samples and **x**_{T}.
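A corresponding numpy sketch for KGNDE, under the assumption that the object matrix is **M**_{K} = **KLK** as in (12); the function names are illustrative:

```python
import numpy as np

def kgnde(K, W, l):
    """Sketch of KGNDE: K is the N x N kernel matrix of (9), W the affinity
    weight matrix of (2), l the embedding dimension. Returns the N x l
    coefficient matrix A whose columns are the orthogonal eigenvectors of
    M_K = K L K for the l largest eigenvalues."""
    D = np.diag(W.sum(axis=1))            # degree matrix
    L = D - W                             # graph Laplacian
    M_K = K @ L @ K                       # object matrix of (12)
    vals, vecs = np.linalg.eigh((M_K + M_K.T) / 2)
    return vecs[:, ::-1][:, :l]           # eigenvectors of l largest eigenvalues

def kgnde_feature(A, k_col):
    """Step 3: feature of a test sample x_T from its kernel column
    k_col = [k(x_1, x_T), ..., k(x_N, x_T)]^T."""
    return A.T @ k_col
```

Note that only the N × N matrices **K** and **W** are needed; the high-dimensional mapping *φ* never appears explicitly, which is exactly the point of the kernel trick.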

Now we consider the computational complexity of the proposed algorithms. In most cases, the number of training samples is much smaller than the dimension of the training samples (*N* ≪ *m*). Therefore, like most other feature extraction methods, the computational bottleneck of GNDE and KGNDE is solving the generalized eigenvalue problem, whose computational complexity is *O*(*m*^{3}) and *O*(*N*^{3}), respectively.

## 4 Experiment

The experiments are conducted on the MSTAR public release database [9]. The training and testing datasets are listed in the following table.

**Training and testing datasets**

| Training dataset | Size | Testing dataset | Size |
|---|---|---|---|
| BMP2sn_c21 | 233 | BMP2sn_9563 | 195 |
| | | BMP2sn_9566 | 196 |
| | | BMP2sn_c21 | 196 |
| BTR70sn_c71 | 233 | BTR70sn_c71 | 196 |
| T72sn_132 | 232 | T72sn_132 | 196 |
| | | T72sn_812 | 195 |
| | | T72sn_s7 | 191 |

### 4.1 Experiment steps

- 1) Image pre-processing: speckle suppression and target segmentation are used for removing speckles and background clutter, respectively. Then gray enhancement based on a power function is used to enhance the information in the dataset. Finally, we get the dataset {**x**_{i} ∈ *ℝ*^{m}, *i* = 1, 2, …, *N*}, called DATA, where **x**_{i} denotes each SAR image vector and its dimension is *m* = 61 × 61 = 3,721. The optical images and the corresponding SAR images of the three targets in the MSTAR dataset are shown in Figures 1 and 2. Images of the targets after processing are shown in Figure 3.
- 2) Feature extraction: both GNDE and KGNDE are utilized to extract features from DATA. In order to examine the recognition performance of these methods, PCA and KPCA are also used to extract features. In this paper, both KPCA and KGNDE use the polynomial function as the kernel function, as shown in (16): $${k}_{ij}={\left({\mathbf{x}_{i}}^{T}{\mathbf{x}}_{j}+1\right)}^{\mu} \qquad (16)$$ where *μ* is the function parameter. In this paper, we choose *μ* = 5.
- 3) Classification: the nearest neighbor classifier (NNC) [10] is utilized to classify the extracted features.
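The polynomial kernel of (16) and the nearest neighbor classification of step 3 can be sketched as follows (function names are illustrative):

```python
import numpy as np

def poly_kernel(X1, X2, mu=5):
    """Polynomial kernel of (16): k_ij = (x_i^T x_j + 1)^mu, with mu = 5
    as chosen in the experiment. Rows of X1 and X2 are samples."""
    return (X1 @ X2.T + 1.0) ** mu

def nnc(train_feats, train_labels, test_feat):
    """Nearest neighbor classifier [10]: assign the label of the training
    feature closest to test_feat in Euclidean distance."""
    d = np.linalg.norm(train_feats - test_feat, axis=1)
    return train_labels[int(np.argmin(d))]
```

In the experiment pipeline, `poly_kernel` would supply the kernel matrix **K** used by KPCA and KGNDE, and `nnc` would classify the extracted low-dimensional features.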

### 4.2 Experiment results

**Best recognition performance by various algorithms**

| Algorithms | Best recognition rate (%) | Feature dimensions |
|---|---|---|
| PCA | 84.67 | 70 |
| GNDE | 94.18 | 60 |
| KPCA | 90.13 | 120 |
| KGNDE | 92.42 | 70 |

## 5 Conclusions

Feature extraction is the key step in SAR ATR. In this paper, a new feature extraction algorithm and its kernel counterpart are proposed. Based on the manifold structure, both GNDE and KGNDE obtain a linear transformation that achieves a low-dimensional embedding of the dataset. Compared with global linear methods, the manifold-based methods can detect the underlying nonlinear structure and preserve local information, so they are more robust. In addition, GNDE and KGNDE are supervised methods, so the extracted features achieve a better clustering effect than those of unsupervised methods, which is helpful for classification.

## Declarations

### Acknowledgement

This research was supported by the National Natural Science Foundation of China (No. 61201272).


## References

- Mishra AK, Mulgrew B: Bistatic SAR ATR using PCA-based features. In *Proceedings of SPIE 6234, Automatic Target Recognition XVI*, 18 May 2006. doi:10.1117/12.664117
- Mishra AK: Validation of PCA and LDA for SAR ATR. Paper presented at the IEEE Region 10 Conference (TENCON), Guwahati, 19–21 Nov 2008
- Gunn SR: *Support Vector Machines for Classification and Regression*. Technical report, University of Southampton, 1998
- Zhao Q, Principe JC: Support vector machines for SAR automatic target recognition. *IEEE Trans. Aerosp. Electron. Syst.* 2001, 37(2): 643–654
- Li Y, Zhang XQ, Bai BD, Zhang YN: Information compression and speckle reduction for multifrequency polarimetric SAR imagery using KPCA. Paper presented at the 2007 International Conference on Machine Learning and Cybernetics, Xi'an, 19–22 Aug 2007
- Han P, Wu RB, Wang YH, Wang ZH: An efficient SAR ATR approach. Paper presented at the 2003 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Tianjin, 6–10 April 2003
- He X, Niyogi P: Locality preserving projections. Paper presented at the 16th Conference on Neural Information Processing Systems (NIPS), 2003
- Yan S, Xu D, Zhang B, Zhang HJ, Yang Q, Lin S: Graph embedding and extensions: a general framework for dimensionality reduction. *IEEE Trans. Pattern Anal. Mach. Intell.* 2007, 29(1): 40–51. doi:10.1109/TPAMI.2007.250598
- Ross T, Worrell S, Velten V, Mossing J, Bryant M: Standard SAR ATR evaluation experiments using the MSTAR public release data set. Paper presented at the SPIE Conference on Algorithms for Synthetic Aperture Radar Imagery V, 15 Sep 1998
- Cover TM: Estimation by the nearest neighbor rule. *IEEE Trans. Inf. Theory* 1968, 14(1): 50–55

## Copyright

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.