Radar signal recognition based on triplet convolutional neural network

Abstract

Recently, due to the wide application of low probability of intercept (LPI) radar, many approaches for recognizing LPI radar signal modulations have been proposed. However, facing an increasingly complex electromagnetic environment, most existing methods perform poorly in identifying different modulation types at low signal-to-noise ratio (SNR). This paper proposes an automatic recognition method for different LPI radar signal modulations. First, time-domain signals are converted into time-frequency images (TFIs) by the smooth pseudo-Wigner–Ville distribution (SPWVD). These TFIs are then fed into a designed triplet convolutional neural network (TCNN) to obtain high-dimensional feature vectors. In essence, TCNN is a CNN whose parameters are optimized with triplet loss during training. Triplet loss ensures that the distance between samples of different classes is greater than the distance between samples with the same label, improving the discriminability of TCNN. Finally, a fully connected neural network (FCNN) is employed as the classifier to recognize different modulation types. Simulations show that the overall recognition success rate reaches 94% at − 10 dB, which demonstrates that the proposed method has a strong discriminating capability for different LPI radar signal modulations, even at low SNR.

1 Introduction

LPI radar prevents non-cooperative receivers from intercepting and detecting its signals by transmitting special waveforms [1, 2]. Due to properties such as low power, high resolution, large bandwidth, and frequency agility [3, 4], it is difficult for traditional electronic reconnaissance methods to estimate the parameters of received signals exactly, which means that different modulation types of LPI radar signals cannot be recognized accurately. To improve the cognition ability of reconnaissance equipment, precisely identifying LPI radar signals in a harsh electromagnetic environment has become a hot spot in electronic warfare systems.

Most existing methods for LPI radar signal modulation recognition involve two key processes: feature extraction and signal classification [5, 6]. From the perspective of features, most methods can be grouped into four classes: time-domain methods [7], frequency-domain methods [8, 9], time-frequency domain methods [5, 10], and transform-domain methods [11]. As for the classifier, both traditional machine learning (ML) [12, 13] and deep learning (DL) [14, 15] are widely applied. DL methods in particular have received increasing attention recently because of their superb performance. It has already been shown that, compared with other DL models such as the stacked autoencoder (SAE) [16, 17] and the deep belief network (DBN) [18, 19], the CNN performs better in many areas, including time series prediction [20], target detection [21], and object identification [22, 23]. The method of combining TFIs and a CNN therefore stands out among these approaches: compared with any single-domain method mentioned above, the time-frequency technique is robust to noise [24], and employing a CNN as the encoder module removes the need for manual intervention, which makes the recognition process more reasonable and reliable.

Existing recognition methods for LPI radar signal modulations are mostly based on time-frequency analysis and DL. Lunden and Koivunen [5] presented a large set of features extracted from TFIs of radar signals and fed them into an MLP classifier to perform the classification. However, selecting these features requires prior information and human intervention. Zhang et al. [25] first explored an automatic recognition system for radar waveforms based on the Choi–Williams distribution (CWD) and a CNN. The system needed no prior information or manual intervention to recognize radar waveforms. For 8 kinds of radar waveforms (LFM, BPSK, Costas, Frank, and T1–T4), the overall recognition success rate (RSR) was more than 93.7% when the SNR was greater than − 2 dB. In [26], the authors also chose the CWD to process received radar signals. Features extracted from TFIs were fed into an Elman neural network (ENN) for classification. Different from [25], they took the P1, P2, P3, and P4 polyphase codes into account, expanding the set of recognized waveforms. The overall RSR for 8 radar waveforms (LFM, BPSK, Costas, Frank, P1–P4) was 94.7% at an SNR of − 2 dB. However, the feature extraction process in their research still required manual design and was cumbersome to handle. Guo and Chen [27] used an improved AlexNet to classify LPI radar signals. They successfully classified 10 types of radar signals at − 6 dB, including CW, NLFM, LFM, BPSK, Costas, Frank, and T1–T4. Their research not only expanded the set of recognized types but also achieved better results at a lower SNR. With the rise of transfer learning, more new methods have been explored in radar waveform recognition. Guo et al. [28] adopted a transferred CNN to recognize the TFIs of radar signals. By virtue of transfer learning, their system recognized radar waveforms with a small number of training samples, providing a new method for circumstances with insufficient training data. In addition, Xiao et al. [29] took advantage of a feature fusion algorithm and transfer learning to achieve good recognition results at − 4 dB. These methods have gradually improved the ability of radar signal recognition, and their progress encourages more methods of radar signal identification to be explored and applied in electronic reconnaissance.

To improve classification accuracy at lower SNR, this paper proposes an automatic approach for accurate recognition of LPI radar signal modulations. The approach involves analyzing radar signals in the time-frequency domain, designing a feature encoder named TCNN, and constructing a fully connected neural network (FCNN) as the classifier. For simplicity, TCNN-FCNN will be used to denote the proposed method in the following. The specific contributions of this paper can be summarized as follows:

  • This paper proposes a TCNN-FCNN structure to address the problem of LPI radar signal modulation recognition at low SNR. As an end-to-end model, TCNN-FCNN can identify different modulation types accurately even when the SNR is − 10 dB, providing a solid basis for further research on modulation recognition in complicated electromagnetic environments.

  • The proposed method employs triplet loss in the process of LPI radar modulation identification. By enforcing a margin between each positive pair and negative pair, triplet loss minimizes the distance between samples with the same label and maximizes the distance between samples with different labels. Experiments show that the discriminability of a model trained with triplet loss is effectively enhanced.

  • Different from other existing approaches, the proposed method emphasizes the role of the objective function in the training process. To some extent, it provides a novel idea for LPI radar signal modulation recognition.

The rest of this paper is organized as follows. The overall structure of the recognition system is presented in Sect. 2. Section 3 briefly introduces the foundations of the proposed method, including the signal model and the SPWVD technique. The main methods of the system are introduced in Sect. 4, covering the specific model structures, triplet loss, and the t-distributed stochastic neighbor embedding (t-SNE) technique. Section 5 presents and analyzes the performance of the proposed recognition system. Finally, conclusions are drawn in Sect. 6.

2 System overview

In this section, the automatic recognition method for LPI radar signal modulations is described in detail. The structure of the system is shown in Fig. 1. First, all received LPI radar signals are converted into TFIs by the SPWVD. Since the SPWVD describes the distribution of signal energy over time and frequency on a two-dimensional plane, TFIs can reflect the distinctions between different modulation types of LPI radar signals, even at low SNR. Next, the signal dataset is separated into training and testing datasets. Then, a CNN is designed as the feature encoder to extract features automatically; triplet loss plays an important role in its iterative training. With the assistance of triplet loss, features extracted from the same class are distributed more compactly in the high-dimensional space than features from different classes. Finally, as the classifier, the FCNN is tuned with cross-entropy loss to achieve accurate multi-class classification. In addition, the t-SNE technique is adopted to visualize the 2-D distribution of the high-dimensional feature vectors produced by the designed TCNN, confirming that triplet loss actually works in the identification process and providing visual support for the classification results.

Fig. 1 Framework of the proposed method

It is noteworthy that triplet loss is employed to optimize the parameters of the CNN encoder, while cross-entropy loss is used to update the parameters of the FCNN classifier. As a metric loss function, triplet loss aims to maximize within-class similarity and minimize between-class similarity: it narrows the distance between intra-class samples and increases the distance between inter-class samples in the high-dimensional space. Accordingly, it is calculated in the embedding space based on the feature vectors extracted by the CNN encoder. Cross-entropy loss, in contrast, is a common classification loss function, calculated by comparing target labels with the predicted outputs of the last dense layer. By minimizing cross-entropy loss, the FCNN can tag each training sample with the corresponding label. Consequently, we choose triplet loss and cross-entropy loss to update the parameters of the CNN encoder and the FCNN classifier, respectively. More details about both losses are given in Sect. 4.2.
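To make the separate optimization concrete, the following is a minimal PyTorch sketch of the two-stage update. The `encoder` and `classifier` here are stand-ins for the networks detailed in Sect. 4.1, and the margin value and optimizer settings are illustrative assumptions rather than the paper's reported configuration.

```python
import torch
import torch.nn as nn

# Stand-ins for the CNN encoder and FCNN classifier of Sect. 4.1.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 100 * 100, 128))
classifier = nn.Linear(128, 10)

triplet = nn.TripletMarginLoss(margin=0.2)   # margin is an assumed value
ce = nn.CrossEntropyLoss()
opt_enc = torch.optim.Adam(encoder.parameters())
opt_cls = torch.optim.Adam(classifier.parameters())

def train_step(anchor, pos, neg, x, y):
    # Stage 1: triplet loss updates only the encoder parameters.
    opt_enc.zero_grad()
    loss_t = triplet(encoder(anchor), encoder(pos), encoder(neg))
    loss_t.backward()
    opt_enc.step()
    # Stage 2: cross-entropy updates only the classifier,
    # on embeddings detached from the encoder's graph.
    opt_cls.zero_grad()
    loss_c = ce(classifier(encoder(x).detach()), y)
    loss_c.backward()
    opt_cls.step()
```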

3 Time-frequency analysis of radar signals

3.1 Signal model

In general, the filtered radar signal \(r\left( t \right)\) consists of the radar modulated signal \(s\left( t \right)\) and additive white Gaussian noise (AWGN) \(n\left( t \right)\) [26]. The corresponding signal model can be expressed as

$$\begin{aligned} r\left( t \right) =s\left( t \right) +n\left( t \right) =A{{\text {e}}^{j\varphi \left( t \right) }}+n\left( t \right) \end{aligned}$$
(1)

where A is the amplitude and \(\varphi \left( t \right)\) represents the modulation phase. For simplicity, we assume \(A=1\). Different values of SNR are used to mimic the complexity of the actual application environment. The SNR in this paper is defined as

$$\begin{aligned} \hbox{SNR}=10{{\log }_{10}}\frac{E\left( \left\| s\left( t \right) \right\| _{2}^{2} \right) }{E\left( \left\| n\left( t \right) \right\| _{2}^{2} \right) } \end{aligned}$$
(2)

where \({{\left\| \cdot \right\| }_{2}}\) is the L2-norm, and \(E\left( \left\| s\left( t \right) \right\| _{2}^{2} \right)\) and \(E\left( \left\| n\left( t \right) \right\| _{2}^{2} \right)\) denote the means of \(\left\| s\left( t \right) \right\| _{2}^{2}\) and \(\left\| n\left( t \right) \right\| _{2}^{2}\), respectively.
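As an illustration of Eqs. 1 and 2, the following NumPy sketch generates a unit-amplitude phase-modulated signal and scales complex white Gaussian noise to a target SNR. The sampling parameters and the LFM phase law are assumptions made for the example only.

```python
import numpy as np

def add_awgn(s, snr_db, seed=0):
    """Add complex white Gaussian noise so that Eq. (2) yields snr_db."""
    rng = np.random.default_rng(seed)
    p_signal = np.mean(np.abs(s) ** 2)
    p_noise = p_signal / 10 ** (snr_db / 10)
    # Complex noise: half the power in each of the real/imaginary parts.
    n = np.sqrt(p_noise / 2) * (rng.standard_normal(len(s))
                                + 1j * rng.standard_normal(len(s)))
    return s + n

fs, T = 100e6, 10e-6                          # assumed sampling setup
t = np.arange(0, T, 1 / fs)
phi = 2 * np.pi * (5e6 * t + 1e12 * t ** 2)   # example LFM phase law
r = add_awgn(np.exp(1j * phi), snr_db=-10)    # A = 1, SNR = -10 dB
```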

3.2 Smooth pseudo-Wigner–Ville distribution

As a Cohen-class time-frequency distribution, the SPWVD applies smoothing operations in both the frequency and time domains. It can therefore suppress the cross-term interference distributed along both the time axis and the frequency axis [30]. The SPWVD of the received signal \(r\left( t \right)\) is defined as

$$\begin{aligned} \mathrm{SPWVD}_{r}(t,f)=\iint {r}(t-v+\tau /2)\,{{r}^{*}}(t-v-\tau /2)\,g(\tau )\,h(v)\,{{e}^{-j2\pi f\tau }}\,\hbox {d}v\,\hbox {d}\tau \end{aligned}$$
(3)

where \(*\) denotes the complex conjugate, \(r\left( t \right)\) is the received complex signal given in Eq. 1, and t and f are the time and frequency variables, respectively. \(\phi \left( \tau ,v \right) =g\left( \tau \right) h\left( v \right)\) is the kernel function of the SPWVD, where \(g\left( \tau \right)\) and \(h\left( v \right)\) are independent low-pass windows that smooth over the time lag \(\tau\) and the time variable v (i.e., in the frequency and time directions), respectively.

For TFIs generated by the SPWVD, cross-term interference is eliminated at the cost of decreased time-frequency concentration. That is, the smoothing operations of the SPWVD reduce the time-frequency resolution, resulting in a loss of some useful information. To increase the time-frequency concentration and improve the resolution of TFIs, a proper window function must be selected. In this paper, we choose the Gaussian window as the smoothing filter, since it has no negative sidelobes and no sidelobe fluctuation, which suppresses spectral energy leakage to a certain extent.

Fig. 2 Different TFIs produced by SPWVD with different combinations of window lengths

Another critical parameter of the SPWVD is the window length. There have been several related works [31,32,33] on parameter selection for time-frequency distributions. Inspired by [31], we define three levels of window length: small, medium, and large, represented concretely by 33, 133, and 233, respectively. Figure 2 shows TFIs generated by the SPWVD under different combinations of window lengths, where \(L_g\) and \(L_h\) denote the lengths of the Gaussian windows g and h. As shown in Fig. 2a, d, g, severe energy leakage appears in the TFIs when \(L_h\) is small. Meanwhile, as \(L_g\) increases, the time resolution deteriorates, so some useful information is no longer visible in the TFIs; this is verified by Fig. 2e–i. Likewise, \(L_h\) in Fig. 2c is larger than in Fig. 2b, which results in a lower frequency resolution in Fig. 2c. Therefore, to trade off low energy leakage against high resolution, the combination of \(L_g = 33\) and \(L_h = 133\) is chosen in this paper. In fact, the selection of window length is not strictly restricted here: any parameters that allow the transient features of the signals to be fully reflected without severe spectral energy leakage in the TFIs are acceptable.
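For reference, a deliberately simple discrete-time sketch of Eq. 3 with the chosen Gaussian windows is given below. Integer lags stand in for the half-integer \(\tau /2\), the window standard deviations are assumptions, and the triple loop (kept for clarity) would be vectorized in practice.

```python
import numpy as np
from scipy.signal.windows import gaussian

def spwvd(x, lg=33, lh=133, nfft=256):
    """Simplified discrete SPWVD of a complex signal x (Eq. 3)."""
    g = gaussian(lg, std=lg / 6); g /= g.sum()    # lag (frequency) smoothing
    h = gaussian(lh, std=lh / 6); h /= h.sum()    # time smoothing
    n = len(x)
    taumax, vmax = lg // 2, lh // 2
    k = np.zeros((nfft, n), dtype=complex)        # smoothed autocorrelation
    for ti in range(n):
        for tau in range(-taumax, taumax + 1):
            acc = 0j
            for v in range(-vmax, vmax + 1):
                i1, i2 = ti - v + tau, ti - v - tau
                if 0 <= i1 < n and 0 <= i2 < n:
                    acc += h[v + vmax] * x[i1] * np.conj(x[i2])
            k[tau % nfft, ti] = g[tau + taumax] * acc   # wrap negative lags
    return np.abs(np.fft.fft(k, axis=0))          # (nfft, n) TFI magnitude

x = np.exp(1j * 2 * np.pi * 0.1 * np.arange(256))  # toy tone for a quick check
tfi = spwvd(x[:128])                               # (256, 128) image
```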

3.3 Different TFIs of radar signals based on SPWVD

The SPWVD transformation results for 10 LPI radar signals at 8 dB are presented in Fig. 3, including Costas, Frank, LFM, NS, BPSK, NLFM, and T1–T4.

Fig. 3 TFIs of different modulation types

As shown in Fig. 3, each TFI clearly describes how the instantaneous frequency of the signal changes with time. Different TFIs intuitively reflect different signal modulation types. Therefore, it is feasible and dependable to recognize different modulation types from TFIs.

4 Classification method based on proposed TCNN-FCNN

4.1 Structure of designed models

In this section, the architecture of the encoder module TCNN, presented in Fig. 4a, and of the classifier FCNN, shown in Fig. 4b, is introduced in detail.

Fig. 4 Architecture of the proposed TCNN-FCNN

As shown in Fig. 4a, the encoder module has 2 convolutional blocks, each comprising a convolutional layer, a batch normalization layer, an activation function, and a pooling layer. The convolutional layer in Conv Block 1 has 128 kernels of size \(3\times 3\), which extract feature maps from the TFIs. To reduce internal covariate shift, avoid vanishing gradients, and accelerate convergence, a batch normalization layer [34] is added, which keeps the inputs of the activation units approximately Gaussian distributed. The rectified linear unit (ReLU) is adopted as the nonlinear activation function to provide nonlinearity and alleviate overfitting. To retain the major features and reduce the complexity of the network, a max-pooling layer with a kernel size of \(2\times 2\) is employed. The structure of Conv Block 2 is the same as that of Conv Block 1, except that its convolutional layer has 64 kernels. A dense layer with ReLU activation integrates the learned distributed features. After the forward propagation, a 128-dimensional feature vector of the input TFI is obtained. Triplet loss is employed as the objective function during back propagation and is explained in detail in Sect. 4.2. We choose Adam [35] rather than traditional stochastic gradient descent (SGD) as the optimization algorithm to minimize the triplet loss, because it only needs first-order gradients, with high computational efficiency and low memory requirements.

As illustrated in Fig. 4b, the FCNN model is composed entirely of dense layers. The numbers of neurons in dense 1 and dense 2 are 128 and 10, respectively. Dense 1 uses ReLU as the nonlinear activation function, while dense 2 uses the softmax function to achieve multi-class classification. The FCNN is also optimized with Adam, except that cross-entropy loss is employed as the objective function during back propagation.
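A PyTorch sketch of the two networks in Fig. 4 follows. The kernel counts, pooling sizes, and layer widths are taken from the text; the padding scheme and the flattened dimension for \(100\times 100\) RGB TFIs are our assumptions.

```python
import torch
import torch.nn as nn

class TCNNEncoder(nn.Module):
    """CNN encoder of Fig. 4a: two conv blocks, then a 128-D embedding."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            # Conv Block 1: 128 kernels of 3x3, BN, ReLU, 2x2 max-pooling.
            nn.Conv2d(3, 128, 3, padding=1), nn.BatchNorm2d(128),
            nn.ReLU(), nn.MaxPool2d(2),              # 100x100 -> 50x50
            # Conv Block 2: identical structure with 64 kernels.
            nn.Conv2d(128, 64, 3, padding=1), nn.BatchNorm2d(64),
            nn.ReLU(), nn.MaxPool2d(2),              # 50x50 -> 25x25
        )
        self.fc = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * 25 * 25, embed_dim), nn.ReLU())

    def forward(self, x):                   # x: (B, 3, 100, 100) TFIs
        return self.fc(self.features(x))    # (B, 128) feature vectors

class FCNNClassifier(nn.Module):
    """FCNN of Fig. 4b: dense 128 (ReLU) then dense 10 (softmax)."""
    def __init__(self, embed_dim=128, n_classes=10):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(embed_dim, 128), nn.ReLU(),
                                 nn.Linear(128, n_classes))

    def forward(self, f):
        return self.net(f)   # logits; softmax is folded into the CE loss
```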

4.2 Triplet loss

Two different objective functions are mentioned in Sect. 4.1. Both triplet loss [36,37,38,39] and cross-entropy loss are widely used in deep neural networks. Cross-entropy loss is usually employed in multi-class classification tasks [40]. In the high-dimensional embedding space, cross-entropy loss aims to project samples with the same label to the same place and map samples with different labels to other places. However, it does not take the distances between different classes into account [41]. This may lead to the unsatisfactory situation in which the distance between samples with the same label, \({{d}_{\rm intra}}\), is larger than the distance between samples of different classes, \({{d}_{\rm inter}}\). The discrepancy between triplet loss and cross-entropy loss is shown in Fig. 5, where the same shape represents the same class and different colors represent different samples of each class. Figure 5a illustrates the spatial distribution of the initial samples in the embedding space, while Fig. 5b, c show the distributions of samples trained with cross-entropy loss and triplet loss, respectively.

Fig. 5 Discrepancy between triplet loss and cross-entropy loss

This unsatisfactory situation can be addressed by using triplet loss as the objective function. The effect of triplet loss is displayed in Fig. 5c. Triplet loss updates the parameters of the model by enforcing a margin between each sample of one class and all samples of the other classes [36]. Not only can it minimize \({{d}_{\rm intra}}\), but it can also maximize \({{d}_{\rm inter}}\).

More specifically, as shown in Fig. 4a, TCNN maps the initial TFIs into a high-dimensional Euclidean space; the embedding function can be represented by \(\mathcal {M}_{\theta }: \mathbb {R}^{H \times W \times 3} \rightarrow \mathbb {R}^{D}\), where \(\theta\) denotes the parameters of the encoder module. Each TFI of size \(H\times W\) is represented as a D-dimensional feature vector \({{f}_{i}}\in {{\mathbb {R}}^{D}},\ i=1,2,\ldots ,m\), where \({{f}_{i}}\) is the output of TCNN.

Among all these \({f}_{i}\), an anchor feature vector \(f_{i}^{a}\) is chosen randomly. Then, a positive feature vector \(f_{i}^{p}\), which has the same label as \(f_{i}^{a}\), and a negative feature vector \(f_{i}^{n}\), whose label differs from that of \(f_{i}^{a}\), are needed to construct a valid triplet. For each given \(f_{i}^{a}\), triplet loss must ensure that \(f_{i}^{a}\) is closer to every positive feature vector \(f_{i}^{p}\) than to any negative feature vector \(f_{i}^{n}\). The main purpose of triplet loss is to satisfy the following condition:

$$\begin{aligned} \left\| f_{i}^{a}-f_{i}^{p} \right\| _{2}^{2}+\hbox{margin}<\left\| f_{i}^{a}-f_{i}^{n} \right\| _{2}^{2} \end{aligned}$$
(4)

The objective function of triplet loss can be written as:

$$\begin{aligned} {{L}_{\rm triplet}}=\sum \limits _{i}^{m}{\max \left( \left\| f_{i}^{a}-f_{i}^{p} \right\| _{2}^{2}-\left\| f_{i}^{a}-f_{i}^{n} \right\| _{2}^{2}+\hbox{margin},\ 0 \right) } \end{aligned}$$
(5)

In summary, with the assistance of triplet loss, the discriminative ability of the encoder module is efficiently enhanced during training.
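A minimal PyTorch sketch of Eq. 5 is given below; the margin value is an assumption. For comparison, PyTorch's built-in torch.nn.TripletMarginLoss implements a closely related criterion using the unsquared L2 distance.

```python
import torch

def triplet_loss(fa, fp, fn, margin=0.2):
    """Eq. 5 with the hinge applied per triplet; fa/fp/fn are (m, D).
    The margin value 0.2 is an assumed setting, not the paper's."""
    d_pos = (fa - fp).pow(2).sum(dim=1)   # squared L2, anchor-positive
    d_neg = (fa - fn).pow(2).sum(dim=1)   # squared L2, anchor-negative
    return torch.clamp(d_pos - d_neg + margin, min=0).sum()
```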

4.3 Visualization by t-SNE

To further demonstrate the effect of triplet loss and provide an intuitive explanation for the classification results in the subsequent experiments, the t-SNE technique is adopted as the visualization tool in this paper. Its basic theory is discussed in this section.

t-SNE [42,43,44] is a variation of the stochastic neighbor embedding (SNE) technique [45, 46]. It visualizes high-dimensional data by assigning each datapoint a location in a two- or three-dimensional space [42].

Compared with SNE, t-SNE employs a Student t-distribution in the low-dimensional space instead of a Gaussian distribution. Since the Student t-distribution is closely related to the Gaussian distribution but has much heavier tails, it alleviates the crowding problem to some extent. The principle of t-SNE is as follows:

  • For high-dimensional feature vectors \({f}_{1},{{f}_{2}},\ldots ,{{f}_{m}}\), t-SNE converts the Euclidean distance between \({{f}_{i}}\) and \({{f}_{j}}\) into a joint probability \({{p}_{ij}}\) based on a Gaussian distribution:

    $$\begin{aligned} {{p}_{ij}}=\frac{\exp \left( {-{{\left\| {{f}_{i}}-{{f}_{j}} \right\| }^{2}}}/{2\sigma ^{2}}\; \right) }{\sum \nolimits _{k\ne l}{\exp \left( {-{{\left\| {{f}_{k}}-{{f}_{l}} \right\| }^{2}}}/{2\sigma ^{2}}\; \right) }} \end{aligned}$$
    (6)

    where \(\sigma\) denotes the standard deviation of the Gaussian distribution.

  • In the low-dimensional space, a similar probability \({{q}_{ij}}\) is computed using a Student t-distribution with a single degree of freedom:

    $$\begin{aligned} {{q}_{ij}}=\frac{{{\left( 1+{{\left\| {{m}_{i}}-{{m}_{j}} \right\| }^{2}} \right) }^{-1}}}{\sum \nolimits _{k\ne l}{{{\left( 1+{{\left\| {{m}_{k}}-{{m}_{l}} \right\| }^{2}} \right) }^{-1}}}} \end{aligned}$$
    (7)

    where \({{m}_{i}}\) and \({{m}_{j}}\) are the low-dimensional mapping points of high-dimensional feature vectors \({{f}_{i}}\) and \({{f}_{j}}\).

  • t-SNE tries to find an optimal low-dimensional representation that matches \({{p}_{ij}}\) and \({{q}_{ij}}\) as well as possible. The objective function of t-SNE is shown in Eq. 8.

    $$\begin{aligned} C=KL\left( P\left\| Q \right. \right) =\sum \limits _{i}{\sum \limits _{j}{{{p}_{ij}}\log \frac{{{p}_{ij}}}{{{q}_{ij}}}}} \end{aligned}$$
    (8)

    where \(KL\left( P\left\| Q \right. \right)\) denotes the Kullback–Leibler divergence between P which is the joint probability distribution over high-dimensional feature vectors and Q which represents the joint probability distribution over low-dimensional mapping points.

  • By minimizing Eq. 8 with gradient descent, t-SNE finds the optimal low-dimensional representation. The gradient of Eq. 8 is given by

    $$\begin{aligned} \frac{\delta C}{\delta {{m}_{i}}}=4\sum \limits _{j}{\left( {{p}_{ij}}-{{q}_{ij}} \right) \left( {{m}_{i}}-{{m}_{j}} \right) {{\left( 1+{{\left\| {{m}_{i}}-{{m}_{j}} \right\| }^{2}} \right) }^{-1}}}. \end{aligned}$$
    (9)
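In practice, this optimization need not be implemented by hand. A hedged sketch using scikit-learn's TSNE on stand-in embeddings is shown below, with the perplexity and iteration count matching the settings reported later in Sect. 5.2 (recent scikit-learn releases rename n_iter to max_iter).

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
feats = rng.standard_normal((200, 128))   # stand-in for TCNN embeddings
labels = np.repeat(np.arange(10), 20)     # 20 samples per class, as in Fig. 7

emb = TSNE(n_components=2, perplexity=30, n_iter=5000,
           init="pca", random_state=0).fit_transform(feats)
plt.scatter(emb[:, 0], emb[:, 1], c=labels, cmap="tab10", s=10)
plt.show()
```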

5 Experiments and analysis

To evaluate the performance of the proposed TCNN-FCNN method, some experiments and analyses are presented in this section.

5.1 Dataset

The dataset includes the 10 kinds of LPI radar signal modulations mentioned in Sect. 3.3. The parameters of the simulated signals are set within dynamic ranges to verify the generalization performance of the designed framework. The corresponding parameters are shown in Table 1.

Table 1 Signal parameters and simulation conditions

For each class, there are 1000 samples in the dataset. We randomly choose 800 samples from each class as the training dataset \({{D}_{\rm train}}\) and use the rest as the testing dataset \({{D}_{\rm test}}\). Besides, 11 values of SNR, ranging from − 12 to 8 dB at intervals of 2 dB, are designed to mimic different situations. In total, 110,000 simulated signals are provided for the subsequent training and testing processes.
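The dataset bookkeeping can be summarized with a short sketch; the random seed is arbitrary.

```python
import numpy as np

snrs = np.arange(-12, 9, 2)                    # 11 SNR levels, -12 to 8 dB
n_classes, n_per_class = 10, 1000
assert n_classes * n_per_class * len(snrs) == 110_000

rng = np.random.default_rng(0)
idx = rng.permutation(n_per_class)
train_idx, test_idx = idx[:800], idx[800:]     # per-class 800/200 split
```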

5.2 Feasibility experiments

The feasibility and validity of the proposed TCNN-FCNN method are verified by several experiments in this section. First, to determine whether the encoder module can extract representative features of the input TFIs, we randomly choose a single TFI of each modulation type and display several of their feature maps in Fig. 6. Most intermediate feature maps generated by TCNN are highly similar to the input TFIs, so these features are sufficient to identify different LPI radar signal modulations. This further demonstrates that the TCNN encoder module is effective and convincing.

Fig. 6 Visualization of feature maps in different CNN layers. a Input TFIs of size \(100\times 100\). b, c Output features of Conv Block 1 and Conv Block 2; the feature maps in b are \(50\times 50\) and those in c are \(25\times 25\)

Fig. 7 Visualization of feature distribution

To show the difference between triplet loss and cross-entropy loss more intuitively, we employ the t-SNE technique to visualize the distribution of the 128-D feature vectors in 2-D space. The visualization is displayed in Fig. 7. Under the condition of SNR = 8 dB, 200 samples of the 10 LPI radar modulations are shown in Fig. 7a, b, with 20 samples per class. The parameters of t-SNE are set as follows: the perplexity is 30 and the number of iterations is 5000.

Compared with Fig. 7a, the samples with the same label are highly aggregated and different labels are far from each other in Fig. 7b. This means that a CNN trained with triplet loss is more discriminative than one trained with cross-entropy loss, and it also proves that triplet loss is feasible for the recognition of LPI radar signal modulations.

5.3 Results and discussions

To assess the performance of the proposed approach, several methods are compared in the following experiments. Figure 8 presents the RSR-versus-SNR curves of these methods. In the legend, TCNN-FCNN (red curve) represents the proposed method. CNN-FCNN (blue curve) has the same structure as TCNN-FCNN, except that cross-entropy loss is the only loss function used to update the parameters of both the CNN encoder and the FCNN classifier. In addition, three other methods are included: Lunden (dashed magenta curve) proposed in [5], Zhang (dotted green curve) proposed in [25], and Guo (dash-dot cyan curve) proposed in [28].

Fig. 8 Recognition accuracy of LPI radar signals under different SNR

Figure 8 delivers some important messages. Firstly, compared with Lunden, TCNN-FCNN has a striking advantage, which indicates that the features designed in Lunden's method are not applicable to all signal classes; in other words, using the TCNN encoder to extract features automatically is more reliable. Secondly, both TCNN-FCNN and CNN-FCNN are superior to Zhang. Since TCNN-FCNN and CNN-FCNN share the same network structure, this implies that the model structure designed in this paper is proper and effective. Thirdly, Guo loses its advantage when the SNR drops below − 4 dB, which means that the transferred network cannot perform well at lower SNR. Lastly, the gap between CNN-FCNN and TCNN-FCNN widens as the SNR decreases from − 2 to − 10 dB, showing that the strength of triplet loss is most evident at low SNR. In summary, compared with the other methods, the proposed TCNN-FCNN performs better, especially at lower SNR.

Table 2 Macro \(F_1\)-score

Besides, some additional experiments are provided for an in-depth analysis of the overall RSR shown in Fig. 8. Since Zhang and Lunden perform poorly, they are omitted from the following experiments. Table 2 adopts the macro \(F_1\)-score to evaluate the performance of TCNN-FCNN, CNN-FCNN, and Guo. On the basis of Fig. 8, we focus on the cases in which the SNR drops below 0 dB, because the three methods perform almost identically when the SNR is above 0 dB. According to Table 2, the results of Guo deteriorate steadily from − 4 to − 12 dB; considering both the RSR and the macro \(F_1\)-score, Guo is therefore better suited to situations in which the SNR is above − 4 dB. In contrast, the macro \(F_1\)-score of TCNN-FCNN exceeds 0.9 whenever the SNR is above − 10 dB. The gap between TCNN-FCNN and CNN-FCNN grows wider, especially at − 8 dB and − 10 dB. To investigate this phenomenon, confusion matrices are displayed in Fig. 9 to examine the classification details of TCNN-FCNN and CNN-FCNN.
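For completeness, the macro \(F_1\)-score and a row-normalized confusion matrix can be computed with scikit-learn as sketched below; the placeholder predictions merely stand in for the outputs of the trained models over the test set.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score

y_true = np.repeat(np.arange(10), 200)   # 200 test samples per class
y_pred = y_true.copy()                   # placeholder model predictions
macro_f1 = f1_score(y_true, y_pred, average="macro")
cm = confusion_matrix(y_true, y_pred, normalize="true")  # rows sum to 1
```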

Fig. 9 Confusion matrix of cross-entropy loss and triplet loss at − 8 dB and − 10 dB

Figure 9 shows the confusion matrices of TCNN-FCNN and CNN-FCNN at − 8 dB and − 10 dB. Since the SNR discussed here is outside Guo's best range of application, that method is not analyzed further. According to Fig. 9, CNN-FCNN does not perform well on the recognition of BPSK at − 8 dB and − 10 dB. Moreover, it fails completely on T1 at − 10 dB: most samples of T1 are classified as T3 or other classes. For TCNN-FCNN, although the RSR is reduced at − 8 dB and − 10 dB, most signals can still be identified correctly, which means that TCNN-FCNN remains effective for all classes, even at an SNR of − 10 dB. This is demonstrated more clearly by visualizing the feature vectors with the t-SNE technique in Fig. 10.

Fig. 10 Visualization of feature distribution

Figure 10 not only explains the classification results in a more intuitive way, but also emphasizes the effectiveness of triplet loss through the comparison between TCNN-FCNN and CNN-FCNN at − 10 dB. In Fig. 10a, most samples of T1 and T3 are mixed and difficult to distinguish, consistent with the result in Fig. 9c, and some samples of NS and BPSK form a new cluster, which increases the probability of misjudgment. In contrast, the boundaries between every two classes are clear in Fig. 10b, so most testing samples are recognized correctly. Given the SNR, it is acceptable that a few samples fall in the wrong place; the slight aliasing between LFM and Frank, and between T1 and T3, also matches their recognition results in Fig. 9d. Overall, the features extracted by TCNN (Fig. 10b) show more within-class similarity and less between-class similarity, verifying that triplet loss outperforms cross-entropy loss in optimizing the parameters of the encoder module.

To sum up, the TCNN-FCNN proposed in this paper has a strong discriminative ability even in a harsh low-SNR environment. This is proved not only by the RSR and macro \(F_1\)-score from the data perspective, but also in an intuitive way through the confusion matrices and t-SNE visualizations.

6 Conclusion

In this paper, an automatic recognition method named TCNN-FCNN is proposed to recognize 10 different modulations of LPI radar signals. Different from other existing methods, more attention is paid to the objective function used in optimization, which provides a new avenue for the recognition of LPI radar signal modulations. Simulation results show that the RSR is 0.94 at − 10 dB and nearly 1 when the SNR is greater than − 4 dB, demonstrating that the presented TCNN-FCNN method performs remarkably well, especially at low SNR. The results also show that triplet loss has better discriminative ability than cross-entropy loss, which improves classification performance in the recognition of different LPI radar modulations, particularly in severe conditions. Successful recognition of LPI radar signal modulations lays the groundwork for subsequent tracking, locating, and jamming. Therefore, the proposed method has vital application value in electronic reconnaissance systems.

Availability of data and materials

The data are not available online. Please contact the corresponding author for data requests.

Abbreviations

DL: Deep learning
ML: Machine learning
CWD: Choi–Williams distribution
DBN: Deep belief network
LPI: Low probability of intercept
SAE: Stacked autoencoder
SGD: Stochastic gradient descent
SNE: Stochastic neighbor embedding
SNR: Signal-to-noise ratio
TFI: Time–frequency image
RSR: Recognition success rate
AWGN: Additive white Gaussian noise
TCNN: Triplet convolutional neural network
FCNN: Fully connected neural network
t-SNE: t-Distributed stochastic neighbor embedding
SPWVD: Smooth pseudo-Wigner–Ville distribution

References

  1. M. Gupta, G. Hareesh, A.K. Mahla, Electronic warfare: issues and challenges for emitter classification. Def. Sci. J. 61(3), 228–234 (2011). https://doi.org/10.14429/dsj.61.529

  2. R.G. Wiley, I. Ebrary, Elint: The Interception and Analysis of Radar Signals (Artech House, Boston, 2006)

  3. D. Schleher, Low probability of intercept radar, in International Radar Conference, pp. 346–349 (1985)

  4. R. Wiley, Electronic Intelligence: The Interception of Radar Signals (Artech House, Inc, Dedham, 1985)

  5. J. Lunden, V. Koivunen, Automatic radar waveform recognition. IEEE J. Sel. Top. Signal Process. 1(1), 124–136 (2007). https://doi.org/10.1109/JSTSP.2007.897055

  6. W. Si, C. Wan, C. Zhang, Towards an accurate radar waveform recognition algorithm based on dense CNN. Multimed. Tools Appl. (2020). https://doi.org/10.1007/s11042-020-09490-5

  7. A. Amar, A. Leshem, A. van der Veen, A low complexity blind estimator of narrowband polynomial phase signals. IEEE Trans. Signal Process. 58(9), 4674–4683 (2010). https://doi.org/10.1109/TSP.2010.2050202

  8. R. Cao, J. Cao, J.P. Mei, C. Yin, X. Huang, Radar emitter identification with bispectrum and hierarchical extreme learning machine. Multimed. Tools Appl. 78(20), 28953–28970 (2019). https://doi.org/10.1007/s11042-018-6134-y

  9. J. Li, Y. Ying, Radar signal recognition algorithm based on entropy theory, in The 2014 2nd International Conference on Systems and Informatics (ICSAI 2014), pp. 718–723 (2014). https://doi.org/10.1109/ICSAI.2014.7009379

  10. K. Assaleh, K. Farrell, R.J. Mammone, A new method of modulation classification for digitally modulated signals, in MILCOM 92 Conference Record, vol. 2, pp. 712–716 (1992). https://doi.org/10.1109/MILCOM.1992.244137

  11. K.C. Ho, W. Prokopiw, Y.T. Chan, Modulation identification by the wavelet transform, in Proceedings of MILCOM ’95, vol. 2, pp. 886–890 (1995). https://doi.org/10.1109/MILCOM.1995.483654

  12. L. Lutao, W. Shuang, Z. Zhongkai, Radar waveform recognition based on time-frequency analysis and artificial bee colony-support vector machine. Electronics 7(5), 59 (2018). https://doi.org/10.3390/electronics7050059

  13. G. Vanhoy, T. Schucker, T. Bose, Classification of LPI radar signals using spectral correlation and support vector machines. Analog Integr. Circuits Signal Process. 91(2), 305–313 (2017). https://doi.org/10.1007/s10470-017-0944-0

  14. W. Gongming, C. Shiwen, H. Jie, H. Donghua, Radar signal sorting and recognition based on transferred deep learning. Comput. Sci. Appl. 09, 1761–1778 (2019). https://doi.org/10.12677/CSA.2019.99198

  15. S. Kong, M. Kim, L.M. Hoang, E. Kim, Automatic LPI radar waveform recognition using CNN. IEEE Access 6, 4207–4219 (2018). https://doi.org/10.1109/ACCESS.2017.2788942

  16. J. Wang, B. Hou, L. Jiao, S. Wang, PolSAR image classification based on modified stacked autoencoder network and data distribution. IEEE Trans. Geosci. Remote Sens. 58(3), 1678–1695 (2020). https://doi.org/10.1109/TGRS.2019.2947633

  17. S. Liu, Y. Liu, Y. Gu, X. Xu, Method of extracting gear fault feature based on stacked autoencoder. J. Eng. 2019(23), 8765–8769 (2019). https://doi.org/10.1049/joe.2018.9101

  18. J. Ying, J. Dutta, N. Guo, C. Hu, D. Zhou, A. Sitek, Q. Li, Classification of exacerbation frequency in the COPDGene cohort using deep learning with deep belief networks. IEEE J. Biomed. Health Inform. 24(6), 1805–1813 (2020). https://doi.org/10.1109/JBHI.2016.2642944

  19. A. Mughees, L. Tao, Multiple deep-belief-network-based spectral-spatial classification of hyperspectral images. Tsinghua Sci. Technol. 24(2), 183–194 (2019). https://doi.org/10.26599/TST.2018.9010043

  20. Y. Xiao, H. Yin, Y. Zhang, H. Qi, Y. Zhang, Z. Liu, A dual-stage attention-based conv-lstm network for spatio-temporal correlation and multivariate time series prediction. Int. J. Intell. Syst. 36(5), 2036–2057 (2021). https://doi.org/10.1002/int.22370

  21. R. Girshick, J. Donahue, T. Darrell, J. Malik, in Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation, pp. 580–587 (2014). https://doi.org/10.1109/CVPR.2014.81

  22. C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, A. Rabinovich, in Going Deeper with Convolutions, pp. 1–9 (2015). https://doi.org/10.1109/CVPR.2015.7298594

  23. Z. Zhang, C. Wang, C. Gan, S. Sun, M. Wang, Automatic modulation classification using convolutional neural network with features fusion of SPWVD and BJD. IEEE Trans. Signal Inf. Process. Netw. 5(3), 469–478 (2019). https://doi.org/10.1109/TSIPN.2019.2900201

  24. L. Cohen, Time-frequency distributions—a review. Proc. IEEE 77(7), 941–981 (1989). https://doi.org/10.1109/5.30749

  25. M. Zhang, M. Diao, L. Guo, Convolutional neural networks for automatic cognitive radio waveform recognition. IEEE Access 5, 11074–11082 (2017). https://doi.org/10.1109/ACCESS.2017.2716191

  26. Z. Ming, L. Lutao, D. Ming, LPI radar waveform recognition based on time-frequency distribution. Sensors 16(10), 1682 (2016). https://doi.org/10.3390/s16101682

  27. L. Guo, X. Chen, in Low Probability of Intercept Radar Signal Recognition Based on the Improved AlexNet Model, Tokyo, Japan, pp. 119–124 (2018). https://doi.org/10.1145/3193025.3193037

  28. Q. Guo, X. Yu, G. Ruan, LPI radar waveform recognition based on deep convolutional neural network transfer learning. Symmetry (2019). https://doi.org/10.3390/sym11040540

  29. Y. Xiao, W. Liu, L. Gao, Radar signal recognition based on transfer learning and feature fusion. Mob. Netw. Appl. 25, 1563–1571 (2020). https://doi.org/10.1007/s11036-019-01360-1

  30. M. Jiang, Comparison and application of some time-frequency distributions belonging to cohen class. Chin. J. Mech. Eng. 39(8), 129–134 (2003)

  31. L. Stanković, A measure of some time-frequency distributions concentration. Signal Process. 81(3), 621–631 (2001). https://doi.org/10.1016/S0165-1684(00)00236-X

  32. V. Sicic, B. Boashash, Parameter selection for optimising time-frequency distributions and measurements of time-frequency characteristics of non-stationary signals. in 2001 IEEE International Conference on Acoustics, Speech, and Signal Processing. Proceedings (Cat. No.01CH37221), vol. 6, pp. 3557–3560 (2001). https://doi.org/10.1109/ICASSP.2001.940610

  33. G.-h. Wang, H.-c. Wang, M.-z. Zhu, A time-frequency concentration criterion using grayscale erosion, in 2016 IEEE International Conference on Signal and Image Processing (ICSIP), pp. 398–401 (2016). https://doi.org/10.1109/SIPROCESS.2016.7888292

  34. S. Ioffe, C. Szegedy, in Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift (2015). arXiv:1502.03167

  35. D.P. Kingma, J. Ba, in Adam: A Method for Stochastic Optimization (2017). arXiv:1412.6980

  36. F. Schroff, D. Kalenichenko, J. Philbin, in Facenet: A Unified Embedding for Face Recognition and Clustering, pp. 815–823 (2015). https://doi.org/10.1109/CVPR.2015.7298682

  37. H. Bredin, Tristounet: Triplet Loss for Speaker Turn Embedding, pp. 5430–5434 (2017). https://doi.org/10.1109/ICASSP.2017.7953194

  38. X. Zhao, H. Qi, R. Luo, L. Davis, in A Weakly Supervised Adaptive Triplet Loss for Deep Metric Learning, pp. 3177–3180 (2019). https://doi.org/10.1109/ICCVW.2019.00393

  39. J. Yu, C. Zhu, J. Zhang, Q. Huang, D. Tao, Spatial pyramid-enhanced netvlad with weighted triplet loss for place recognition. IEEE Trans. Neural Netw. Learn. Syst. 31(2), 661–674 (2020). https://doi.org/10.1109/TNNLS.2019.2908982

  40. A. Bahri, S. Ghofrani Majelan, S. Mohammadi, M. Noori, K. Mohammadi, Remote sensing image classification via improved cross-entropy loss and transfer learning strategy based on deep convolutional neural networks. IEEE Geosci. Remote Sens. Lett. 17(6), 1087–1091 (2020). https://doi.org/10.1109/LGRS.2019.2937872

  41. Z. Hu, H. Wu, S. Liao, H. Hu, S. Liu, B. Li, in Person Re-identification with Hybrid Loss and Hard Triplets Mining, pp. 1–5 (2018). https://doi.org/10.1109/BigMM.2018.8499463

  42. L. Van der Maaten, G. Hinton, Visualizing data using t-SNE. J. Mach. Learn. Res. 9, 2579–2605 (2008)

  43. M. Pan, J. Jiang, Q. Kong, J. Shi, Q. Sheng, T. Zhou, Radar HRRP target recognition based on t-SNE segmentation and discriminant deep belief network. IEEE Geosci. Remote Sens. Lett. 14(9), 1609–1613 (2017). https://doi.org/10.1109/LGRS.2017.2726098

  44. D.M. Chan, R. Rao, F. Huang, J.F. Canny, in t-SNE-CUDA: GPU-Accelerated t-SNE and Its Applications to Modern Data, pp. 330–338 (2018). https://doi.org/10.1109/CAHPC.2018.8645912

  45. G. Hinton, S. Roweis, Stochastic neighbor embedding. Adv. Neural. Inf. Process. Syst. 15(4), 833–840 (2003)

  46. U. Shaham, S. Steinerberger, in Stochastic Neighbor Embedding Separates Well-separated Clusters (2017). arXiv:1702.02670

Acknowledgements

The authors would like to thank the editors and the reviewers for their comments on the manuscript of this article.

Funding

This work was supported by the National Natural Science Foundation of China (No. 62071137) and the Fundamental Research Funds for the Central Universities in Key Laboratory of Advanced Marine Communication and Information Technology (No. 3072020CF0815).

Author information

Authors and Affiliations

Authors

Contributions

LTL and XYL conceived and designed the experiments; XYL performed the experiments; XYL and LTL analyzed the data; XYL wrote the paper. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Xinyu Li.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Liu, L., Li, X. Radar signal recognition based on triplet convolutional neural network. EURASIP J. Adv. Signal Process. 2021, 112 (2021). https://doi.org/10.1186/s13634-021-00821-8
