
Independently convolutional gated recurrent neural unit for space-based ADS-B signal separation with single antenna


Automatic Dependent Surveillance-Broadcast (ADS-B) is a critical technology that is transforming aircraft navigation by improving safety and overall effectiveness in the aviation industry. However, the overlapping of ADS-B signals is a major challenge, especially for space-based ADS-B systems. Existing traditional methods are not effective when the overlapped signals to be separated differ only slightly (e.g., in power or carrier frequency). To achieve effective separation of ADS-B signals by exploiting their temporal relationships, an Independently Convolutional Gated Recurrent Neural Unit (Ind-CGRU) is presented for constructing an encoder–decoder network. Experimental results on the SR-ADSB dataset demonstrate that the proposed Ind-CGRU achieves good performance.

1 Introduction

Automatic Dependent Surveillance-Broadcast (ADS-B) is a modern air traffic management technology that sends the position, speed, altitude and other necessary information of an aircraft to the ground control center in real time [1,2,3,4]. It can improve the safety and efficiency of air traffic. In addition, ADS-B signals can provide pilots with real-time traffic information to help them better avoid hazards during flight. In a space-based ADS-B system, ADS-B receivers are placed on low-orbiting satellites instead of on the ground [5], which avoids the limitations of ground deployment in complex environments such as deserts and oceans. However, because of the large number of aircraft, ADS-B signals easily overlap, so the location information of a targeted aircraft cannot be obtained efficiently [6]. Separating mixed space-based ADS-B signals is therefore a vital and challenging task. To solve this problem, many methods [7,8,9,10] based on array antennas have been proposed for ADS-B signal separation. For example, the projection algorithm (PA) [8] was developed to separate multiple secondary surveillance radar signals. Nevertheless, when the direction angle difference between overlapping signals is smaller than the array resolution, array-antenna-based methods cannot separate the overlapping signals efficiently. In this case, single-antenna-based methods must be used and become increasingly important. However, single-antenna-based signal separation remains an open problem.

To date, there are two mainstream directions for ADS-B signal separation: traditional separation methods and deep learning-based separation methods. Depending on the algorithm principle, traditional single-antenna ADS-B separation methods can be classified into three categories. The first approach [11,12,13] separates ADS-B signals based on their power difference. However, this approach fails when the power difference between overlapping signals is small. The second approach [14,15,16] uses the carrier frequency difference to separate overlapped signals. Note that most of these methods require a large carrier frequency difference, of at least 200 kHz. The third approach [17, 18] uses other ADS-B signal characteristics, such as pulse position modulation (PPM) [18], to separate overlapped signals. But these methods also cannot effectively separate overlapping signals with a small power difference, small carrier frequency difference or small relative time delay.

With the development of deep learning in signal processing [19, 20], many methods have been proposed for ADS-B signal separation, owing to the powerful feature extraction capability of deep learning. Existing methods use complex-valued neural networks [21] and temporal convolutional networks [22, 23] to extract valid features for single-antenna ADS-B signal separation. However, these methods rely on convolutional networks and cannot extract effective temporal information. To capture the long-term relationships between overlapping signals, a recurrent neural network (RNN) is more suitable for ADS-B signal separation. Unfortunately, an RNN cannot directly extract deep features from the original ADS-B sequence because the sequence is very long. In general, the original ADS-B sequence is segmented to construct a 2D matrix, and how to use an RNN to effectively extract spatio-temporal information from this 2D matrix remains an open problem. In this paper, we explore an RNN-based method for ADS-B signal separation.

In this work, we improve the independently recurrent neural network (Ind-RNN) [24] to design an independently convolutional recurrent neural network (Ind-CRNN). Compared with a traditional RNN, the Ind-CRNN can better exploit temporal context and spatial information. The Ind-CRNN is then used as the basic component to develop a deep Independently Convolutional Gated Recurrent Neural Unit (Ind-CGRU). We follow [25] to construct an encoder–decoder architecture. In the separation network, the Ind-CGRU is used to build a dual-path network that captures intra- and inter-chunk temporal information and generates separation masks effectively. Extensive experiments have been performed on the SR-ADSB dataset [23]. The average decoding accuracy is 90.40% and the bit error rate is 0.26%, which illustrates that the proposed method performs better than a traditional RNN.

The rest of this article is structured as follows. Section 2 provides a review of the relevant literature. The Ind-CGRU is proposed in Sect. 3. Extensive experiments and detailed analysis are given in Sect. 4. Section 5 concludes the paper.

2 Related work

This section introduces some related and representative works on single-antenna ADS-B signal separation. They fall into two approaches: traditional feature-based and deep learning-based.

2.1 Traditional feature-based ADS-B signal separation

As described in the introduction, traditional feature-based ADS-B signal separation can be divided into three categories. The first category utilizes the power difference between overlapping signals. However, this category can only separate two overlapping signals and is sensitive to the size of the power difference. For instance, Wu et al. [11] proposed an accumulation-and-classification algorithm that uses k-means clustering to estimate, as thresholds, the signal amplitudes under different bit combinations of the strong and weak signals. The bit values of the strong and weak signals corresponding to each pulse of the overlapping signal are then judged against these thresholds to separate the signals. This method can effectively separate overlapping signals with a power difference of more than 3 dB, but its performance degrades when the power difference is less than 3 dB. Yu et al. [12] proposed a reconstruction cancellation method. First, a common receiver decodes the overlapping signals to obtain the strong-signal information, and the amplitude of the strong signal is estimated to recover its waveform. Second, the strong-signal waveform is subtracted from the overlapping signal to obtain the weak signal. However, the weak signal cannot be recovered when the input signal-to-noise ratio (SNR) of the overlapping signals is low or the power difference between the two signals is large. Li et al. [13] proposed a blind separation algorithm in the time domain. The amplitudes of the first and last signals are first estimated as thresholds, two signals are then obtained by differencing the overlapping signals, and finally the bit values of the two signals are estimated using the thresholds. However, the method requires a certain relative time delay between the overlapping signals and is invalid when the signals overlap completely.

The second category uses carrier frequency difference information to separate overlapping signals, but requires a large carrier frequency difference. For example, Galati et al. [14] proposed the projection algorithm single antenna (PASA). It was the first to reconstruct a single-antenna signal into a multi-antenna form for array processing: the multi-antenna deinterlacing algorithm PA is used to separate the ADS-B signals, and the separated signals are transformed back into single-antenna form by an inverse matrix transformation. However, this method needs a relative time delay and a carrier frequency difference of at least 240 kHz. To remove the dependence on relative time delay, Shao et al. [15] proposed an improved PASA that modifies the matrix reconstruction used to transform the single-antenna signal into a multi-antenna one, using only a 0.5 μs data reconstruction matrix to estimate the signal steering vector; it still requires a 300 kHz carrier frequency difference. Lu et al. [16] proposed a self-detection and separation algorithm based on empirical mode decomposition (EMD). The EMD method decomposes the single-antenna signal into multiple intrinsic mode functions (IMFs), which together with the overlapping signal form a multichannel signal, and the FastICA algorithm is then used to separate the multichannel signals. At least a 300 kHz carrier frequency difference is required for this method. Wang et al. [26] proposed a single-antenna ADS-B overlapping signal separation method based on the MVDR algorithm. The samples received by a single antenna within every 0.5 μs are reconstructed using the same matrix form as in [15] to obtain a virtual multichannel data structure. The Zoom-FFT algorithm is then used to accurately estimate the carrier frequency of each signal and compute its virtual steering vector, and finally the MVDR algorithm separates the signals. However, this method can only separate overlapping signals with a carrier frequency difference of more than 400 kHz.

The third approach uses other features of ADS-B signals. Luo et al. [17] exploited the sparsity of ADS-B signals to separate them with a compressed sensing method. First, an ADS-B signal dataset is built, and a sparse dictionary of ADS-B signals is obtained using the K-SVD algorithm. Then, the separated signals are obtained using the OMP algorithm, including the calculation of the sparse basis and the initial value of the mixing matrix. The algorithm is insensitive to relative delay and signal power difference, but its accuracy is determined by the choice of dataset and residual threshold. Li et al. [18] proposed a single-antenna MDA algorithm that uses the PPM coding characteristics of ADS-B to separate signals. The single-antenna signal is first converted into a multi-antenna reconstruction matrix form, the MDA algorithm is then used to separate the signals, and finally the multi-antenna signals are converted back into single-antenna form. The algorithm can separate multiple overlapping signals, but its accuracy decreases when the relative delay between signals is greater than 30 μs. In general, traditional feature-based ADS-B signal separation methods work only under certain conditions, such as a large carrier frequency difference.

2.2 Deep learning-based ADS-B signal separation

Currently, some works based on convolutional networks have been proposed for ADS-B signal separation. For example, Yang et al. [21] applied the Hilbert transform to overlapping signals and used a complex-valued neural network to generate deep features for signal separation. However, this method ignores temporal information and has a relatively low decoding accuracy. To address this problem, Wang et al. [22] used a TCN network to build an encoder–decoder framework similar to Conv-TasNet [25], but long-range temporal information is not sufficiently captured by this method. Bi et al. [23] proposed a multi-scale convolutional separation (MCS) network for ADS-B overlapping signals, in which convolutional layers with different strides capture long-range temporal information and improve decoding accuracy. These comparisons show that deep learning-based separation methods outperform traditional feature-based methods. However, CNNs cannot effectively explore long-range temporal information, whereas RNNs are good at capturing temporal relationships. To the best of our knowledge, this is the first work that explores RNNs for ADS-B signal separation.

3 Proposed method

We follow Conv-TasNet [25] to build our ADS-B signal separation network, as shown in Fig. 1. The proposed network consists of three modules: the encoder, the separation network, and the decoder. First, the ADS-B overlapping signals are fed into the encoder to generate an adaptive 2-D representation. A segmentation operation then splits the 2-D representation into a 3-D representation, from which the separation network generates effective masks. Inside the separation network, an overlap-add operation transforms the 3-D features back into a 2-D representation of the same size. Finally, the decoder recovers the overlapped signals from the encoder output and the separation masks.

Fig. 1

The framework of the signal separation network. It consists of an encoder, a separation network and a decoder. The encoder is a feature extractor, and the separation network generates effective masks. Each mask selects the vital features of one separated signal from the encoder feature, which are fed into the decoder to generate the separated ADS-B signals

3.1 Encoder module

In the encoder module, a 1D convolution with a large kernel is used as a feature extractor to generate an adaptive 2D feature. An ADS-B overlapping-signal sequence \(\mathbf{x} \in \mathbb{R}^{1 \times L}\) is transformed into a 2D feature \(\mathbf{x}_k \in \mathbb{R}^{C \times M}\) by a 1D convolution with kernel size K, where L is the length of the overlapping-signal sequence, C is the number of convolutional kernels and M is the feature length. To reduce computation, the convolutional stride is set to K/2. After the 1D convolution, M is still large, and an RNN cannot process the sequence directly because of exploding gradients. To solve this problem, a segmentation operation splits \(\mathbf{x}_k\) into a 3D feature \(\mathbf{x}_s \in \mathbb{R}^{C \times N \times P}\), where N is the chunk length and P is the number of chunks. The chunk stride is N/2; if the length is not divisible, zero padding is applied.
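The framing and segmentation steps above can be sketched in NumPy. This is a minimal illustration with toy sizes (C = 8 filters, K = 4, N = 6 rather than the paper's 256/4/180) and random encoder weights; the real encoder is a learned 1D convolution.

```python
import numpy as np

def encode(x, weights, K):
    """1-D convolution with stride K/2: returns a C x M feature map."""
    stride = K // 2
    M = (len(x) - K) // stride + 1
    frames = np.stack([x[i * stride : i * stride + K] for i in range(M)])  # M x K
    return weights @ frames.T                                              # C x M

def segment(feat, N):
    """Split a C x M feature into overlapping chunks of length N (stride N/2),
    zero-padding the tail, giving a C x N x P tensor."""
    C, M = feat.shape
    hop = N // 2
    P = int(np.ceil(max(M - N, 0) / hop)) + 1
    pad = (P - 1) * hop + N - M
    feat = np.pad(feat, ((0, 0), (0, pad)))
    return np.stack([feat[:, p * hop : p * hop + N] for p in range(P)], axis=-1)

rng = np.random.default_rng(0)
x = rng.standard_normal(100)          # toy overlapping-signal sequence (L = 100)
W = rng.standard_normal((8, 4))       # C = 8 filters of size K = 4 (random stand-in)
feat = encode(x, W, K=4)              # shape (8, 49): the 2D feature x_k
chunks = segment(feat, N=6)           # shape (8, 6, 16): the 3D feature x_s
```

With stride K/2 = 2, M = (100 - 4)/2 + 1 = 49, and chunking with stride N/2 = 3 yields P = 16 chunks after 2 samples of zero padding.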

3.2 Separation network

The separation network is a vital part of the whole separation pipeline, and the quality of the feature masks determines the precision of ADS-B signal separation. As Fig. 1 shows, it consists of six dual-path Independently Convolutional Gated Recurrent Unit (Ind-CGRU) modules, one overlap-add module and one 1D convolution. Compared with a traditional RNN, the dual-path Ind-CGRU can explore more spatial-temporal information to generate effective feature masks.

To capture intra- and inter-chunk features, the dual-path Ind-CGRU contains two sub-blocks, an intra-Ind-CGRU block and an inter-Ind-CGRU block, as shown in Fig. 2. The two blocks share the same structure, consisting of one Ind-CGRU, one 1D convolution and one LayerNorm layer, and explore intra- and inter-chunk temporal information, respectively. In this paper, the Ind-CGRU is made bidirectional to capture more temporal information. A 1D convolution with a 1×1 kernel then compresses the features, and the compressed features are normalized and added to the input as a residual output.
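The dual-path pattern itself (process along the intra-chunk axis, then along the inter-chunk axis, each with normalization and a residual connection) can be sketched independently of the sequence model. In the sketch below, `running_mean` is a deliberately trivial placeholder for the Ind-CGRU, and the global LayerNorm is a simplification; only the axis-swapping and residual structure reflect the text.

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # Simplified global normalization standing in for LayerNorm
    return (x - x.mean()) / (x.std() + eps)

def running_mean(x, axis):
    """Toy causal sequence model: cumulative mean along the given axis."""
    counts = (np.arange(x.shape[axis]) + 1).reshape(
        [-1 if a == axis else 1 for a in range(x.ndim)])
    return np.cumsum(x, axis=axis) / counts

def dual_path_block(x, seq_model):
    """x: C x N x P tensor produced by the segmentation step."""
    intra = layer_norm(seq_model(x, axis=1)) + x          # within each chunk (axis N)
    inter = layer_norm(seq_model(intra, axis=2)) + intra  # across chunks (axis P)
    return inter

rng = np.random.default_rng(1)
x = rng.standard_normal((4, 6, 5))    # toy C x N x P feature
out = dual_path_block(x, running_mean)  # same shape as the input
```

In the actual network, six such blocks are stacked and `seq_model` is a bidirectional Ind-CGRU followed by a 1×1 convolution.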

Fig. 2

The structure of dual-path Ind-CGRU. It consists of intra-Ind-CGRU and inter-Ind-CGRU

In this paper, we design an independently convolutional recurrent neural network (Ind-CRNN) as the basic component of the Ind-CGRU, as shown in Fig. 3. To keep the spatial structure and reduce the temporal sensitivity to the starting point, the matrix multiplication in the Ind-RNN is replaced by a convolution operation in the Ind-CRNN. To explore more spatial-temporal information, Ind-CRNNs are used to construct the Ind-CGRU, which can be expressed as:

$$\begin{aligned} \mathbf{z}_t &= \sigma (\mathbf{w}_z * \mathbf{x}_t + \mathbf{u}_z \odot \mathbf{h}_{t-1} + \mathbf{b}_z) \\ \mathbf{r}_t &= \mathrm{ReLU}(\mathbf{w}_r * \mathbf{x}_t + \mathbf{u}_r \odot \mathbf{h}_{t-1} + \mathbf{b}_r) \\ \tilde{\mathbf{h}}_t &= \sigma (\mathbf{w}_h * \mathbf{x}_t + \mathbf{r}_t \odot \mathbf{h}_{t-1} + \mathbf{b}_h) \\ \mathbf{h}_t &= (1 - \mathbf{z}_t) \odot \mathbf{h}_{t-1} + \mathbf{z}_t \odot \tilde{\mathbf{h}}_t \end{aligned}$$
Fig. 3

The structure of Ind-CGRU

where \(\mathbf{w}\) and \(\mathbf{u}\) denote the convolution weights and the recurrent weights, respectively, and \(*\) and \(\odot\) represent the convolution operator and the Hadamard product, respectively.
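A single Ind-CGRU step following the four equations can be sketched as below. This is a per-channel 1-D sketch: the convolution (*) is a per-channel `np.convolve` with assumed kernel width 3, and the recurrent weights act by Hadamard product with one independent weight per unit, as in Ind-RNN; the real module operates on the 3-D chunk tensor and its exact kernel shapes are not specified in the text.

```python
import numpy as np

def conv1d_same(x, w):
    """Per-channel 1-D convolution with 'same' padding (C rows, width D)."""
    return np.stack([np.convolve(xc, wc, mode="same") for xc, wc in zip(x, w)])

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def ind_cgru_step(x_t, h_prev, p):
    """One step of Eqs. (1)-(4)."""
    z = sigmoid(conv1d_same(x_t, p["wz"]) + p["uz"] * h_prev + p["bz"])          # update gate
    r = np.maximum(conv1d_same(x_t, p["wr"]) + p["ur"] * h_prev + p["br"], 0.0)  # ReLU reset gate
    h_cand = sigmoid(conv1d_same(x_t, p["wh"]) + r * h_prev + p["bh"])           # candidate state
    return (1.0 - z) * h_prev + z * h_cand                                       # gated update

rng = np.random.default_rng(2)
C, D = 4, 10
p = {k: rng.standard_normal((C, 3)) for k in ("wz", "wr", "wh")}            # conv kernels
p.update({k: rng.standard_normal((C, 1)) for k in ("uz", "ur", "bz", "br", "bh")})
x_t = rng.standard_normal((C, D))
h = ind_cgru_step(x_t, np.zeros((C, D)), p)   # new hidden state, shape (C, D)
```

Starting from a zero hidden state, the output is a convex combination of 0 and a sigmoid candidate, so every entry lies in (0, 1).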

The 3D feature \(\mathbf{x}_s \in \mathbb{R}^{C \times N \times P}\) is fed into the six-layer dual-path Ind-CGRU, yielding a new 3D feature \(\mathbf{x}'_s \in \mathbb{R}^{C' \times N \times P}\), where \(C'\) is the number of hidden units of the dual-path Ind-CGRU. To transform it back into a 2D sequence, \(\mathbf{x}'_s\) is fed into the overlap-add module, the reverse of the segmentation operation, producing a 2D feature \(\mathbf{x}_t \in \mathbb{R}^{C' \times M}\). To match the size of the encoder feature, a 1D convolution with a 1×1 kernel transforms \(\mathbf{x}_t\) into the feature mask \(\mathbf{x}'_t \in \mathbb{R}^{i \times C \times M}\), where i is the number of separated signals.
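The overlap-add step that inverts segmentation can be sketched as summing the overlapping chunks (stride N/2) back into a C × M sequence; the toy sizes below are illustrative, not the paper's N = 180.

```python
import numpy as np

def overlap_add(chunks, M):
    """Inverse of segmentation: sum overlapping chunks back into a C x M feature."""
    C, N, P = chunks.shape
    hop = N // 2
    out = np.zeros((C, (P - 1) * hop + N))
    for p in range(P):
        out[:, p * hop : p * hop + N] += chunks[:, :, p]  # accumulate overlapping halves
    return out[:, :M]  # trim any zero padding added during segmentation

rng = np.random.default_rng(3)
chunks = rng.standard_normal((4, 6, 5))  # toy C x N x P tensor
seq = overlap_add(chunks, M=16)          # back to a C x M sequence
```

The first hop of the output is covered by only the first chunk, so those samples are copied through unchanged; overlapped regions are sums of two chunks.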

3.3 Decoder module

After the separation network, an effective mask \(\mathbf{x}'_t\) is generated, and a sigmoid activation normalizes the mask for each separated signal. The Hadamard product of each mask with the encoder output then yields the feature representation of each non-overlapping ADS-B signal. Finally, a 1D deconvolution maps each feature back to a one-dimensional signal \(\ddot{\mathbf{x}} \in \mathbb{R}^{i \times L}\). The whole process can be expressed as:

$$\begin{aligned} \ddot{\mathbf{x}} = \mathrm{Deconv}(\mathbf{x}_k \odot \mathrm{Sigmoid}(\mathbf{x}'_t)) \end{aligned}$$

where Deconv is the 1D deconvolution and \(\mathbf{x}_k\) is the encoder output.
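The decoder equation can be sketched as follows: each raw mask is passed through a sigmoid, gates the encoder feature via the Hadamard product, and a 1-D transposed convolution with stride K/2 maps the gated feature back to a waveform. The shared deconvolution weight `w` and the toy sizes are assumptions for illustration.

```python
import numpy as np

def deconv1d(feat, w, K):
    """Transposed 1-D convolution: C x M feature -> length (M-1)*K/2 + K signal."""
    C, M = feat.shape
    stride = K // 2
    out = np.zeros((M - 1) * stride + K)
    for m in range(M):
        out[m * stride : m * stride + K] += w.T @ feat[:, m]  # w: C x K basis filters
    return out

def decode(enc_feat, masks, w, K):
    """Sigmoid-normalize each mask, gate the encoder feature, deconvolve."""
    gates = 1.0 / (1.0 + np.exp(-masks))                      # i x C x M masks in (0, 1)
    return np.stack([deconv1d(enc_feat * g, w, K) for g in gates])

rng = np.random.default_rng(4)
enc_feat = rng.standard_normal((8, 49))   # encoder output x_k (C = 8, M = 49)
masks = rng.standard_normal((2, 8, 49))   # raw masks for i = 2 separated signals
w = rng.standard_normal((8, 4))           # deconvolution weight (K = 4)
sep = decode(enc_feat, masks, w, K=4)     # i x L separated waveforms
```

With M = 49, K = 4 and stride 2, each recovered waveform has length (49 − 1)·2 + 4 = 100, matching the encoder input length in the earlier sketch.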

4 Experiment

The paper [23] provides a detailed comparison between a deep learning-based method and various traditional methods, showing that the deep learning-based method largely outperforms the traditional ones. We therefore do not discuss traditional methods further in this paper.

4.1 Implementation details

Our proposed Ind-CGRU is evaluated on the SR-ADSB dataset. All experiments run on two RTX 6000 GPUs using PyTorch. We train the model with the Adam optimizer and an initial learning rate of 0.001; when the validation loss has not declined for three epochs, the learning rate is divided by 10. The convolution kernel size K and the number of channels C of the encoder are set to 4 and 256, respectively. The chunk length N is set to 180, the number of hidden units \(C'\) to 128 and the batch size to 6. Following [25], we use the negative scale-invariant source-to-noise ratio as the loss function.
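The negative scale-invariant source-to-noise ratio loss from Conv-TasNet [25] can be sketched as below: the zero-mean estimate is projected onto the zero-mean reference, and the loss is the negated SNR (in dB) of that projection versus the residual.

```python
import numpy as np

def neg_si_snr(est, ref, eps=1e-8):
    """Negative scale-invariant SNR (dB) between an estimate and a reference."""
    est = est - est.mean()
    ref = ref - ref.mean()
    s_target = (est @ ref) / (ref @ ref + eps) * ref  # projection onto the reference
    e_noise = est - s_target                          # residual error
    return -10.0 * np.log10((s_target @ s_target) / (e_noise @ e_noise + eps) + eps)

rng = np.random.default_rng(5)
ref = rng.standard_normal(1000)
perfect = neg_si_snr(3.0 * ref, ref)   # rescaling the estimate does not change the loss
noisy = neg_si_snr(ref + 0.1 * rng.standard_normal(1000), ref)
```

Because the loss is scale-invariant, a perfectly separated but rescaled estimate still drives the loss strongly negative, while an estimate with additive noise yields a higher (less negative) loss.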

4.2 Dataset

Semi-real ADS-B dataset (SR-ADSB) [23]: To the best of our knowledge, SR-ADSB is the largest dataset for ADS-B signal separation. It consists of 360,000 samples, two-thirds of which are used for model training and the rest for testing. The samples cover different signal-to-noise ratios (SNRs), carrier frequencies and relative delays. The SNR takes values of 5 dB, 10 dB, 15 dB, 20 dB and 25 dB, and for each SNR the power difference between the two signals is set to 0 dB, 1 dB and 2 dB. The carrier frequency of the signals is set to 9 MHz, 9.5 MHz, 10 MHz, 10.5 MHz and 11 MHz, with carrier frequency differences of 0 Hz, 500 Hz, 1000 Hz and 1500 Hz between the two signals. Relative time delays of 0 μs, 5 μs, 10 μs and 20.3 μs are used.

4.3 Ablation study

4.3.1 Comparison with different network parameters

In this section, the effectiveness of the proposed Ind-CGRU is validated across different network parameters, namely K, C, N and \(C'\); the comparison results in terms of decoding accuracy are listed in Table 1. The decoding accuracy is the percentage of correctly decoded signals among the total number of received signals. In each comparison, one parameter is varied while the others are fixed. The table shows that the proposed Ind-CGRU performs best when K is set to 4 on the SR-ADSB dataset. The decoding accuracy is lower for smaller C because the encoder feature is not fully exploited, but once C is large enough, further increases bring little improvement; hence, C is set to 256 in this paper. For the chunk length N, the results are similar across different values, and the network performs best when N is about the same as P, namely N = 180. For \(C'\), smaller values give lower decoding accuracy, while 128 and 256 give similar results; since 256 incurs higher computational complexity, \(C'\) is set to 128 in our experiments.

Table 1 Comparison of different parameters on the SR-ADSB dataset

4.3.2 Comparison of different RNN modules

Table 2 Comparison of different RNN modules on the SR-ADSB dataset

To verify the effectiveness of the proposed Ind-CGRU, we compare different RNN modules; the results are listed in Table 2. The table shows that the bidirectional operation exploits more temporal information and improves decoding accuracy, and that the Ind-RNN performs better than the RNN, GRU and LSTM in terms of decoding accuracy and bit-error rate. Here, the bit-error rate is the ratio of the number of erroneous bits to the total number of signal bits. Our proposed Ind-CRNN outperforms the bidirectional Ind-RNN, and the Ind-CGRU further improves the decoding accuracy, which verifies the effectiveness of the proposed method.

4.4 Analysis of different signal parameters

To analyze the influence of different signal parameters on the proposed Ind-CGRU, we report results for power difference, carrier frequency difference and relative time delay in Figs. 4, 5 and 6, respectively. Figure 4 shows that the proposed Ind-CGRU is affected by low SNR and small power difference; as the SNR and power difference increase, the decoding accuracy improves rapidly. For the carrier frequency difference, the method performs well in most cases, except for signals with low SNR and a 0 Hz carrier frequency difference. Compared with the power difference and carrier frequency difference, the relative time delay has the least influence, and the method performs better at low SNR in this setting. The three figures show that the results are most easily affected by low SNR, which indicates a clear direction for improving deep learning-based ADS-B signal separation.

Fig. 4

The decoding accuracy of Ind-CGRU on signal power difference

Fig. 5

The decoding accuracy of Ind-CGRU on carrier frequency difference

Fig. 6

The decoding accuracy of Ind-CGRU on relative time delay

4.5 Results on the SR-ADSB dataset

Table 3 compares the existing deep learning-based methods and the proposed method on the SR-ADSB dataset. The 1-DAMRAE is a lightweight network and does not perform well on a large dataset because it lacks model capacity. The bidirectional LSTM-based separation method is worse than the temporal convolutional network-based method because it loses spatial information. The Transformer-based separation method is better than these methods, but MConv-TasNet, which uses temporal information at different scales, outperforms the TCN-, bidirectional LSTM- and Transformer-based methods. Our proposed Ind-CGRU performs better than MConv-TasNet. We also show the original and separated ADS-B signals in Fig. 7, where overlapping signals with a 20.3 μs relative time delay, 2 dB power difference and 0 Hz carrier frequency difference are used. The figure shows that the separated signals are highly consistent with the original ADS-B signals.

Table 3 Comparison of different methods on the SR-ADSB dataset
Fig. 7

The visualization of the separated signals

5 Conclusion

This paper proposes a novel Ind-CGRU for space-based ADS-B signal separation with a single antenna. The Ind-CGRU effectively combines the advantages of the GRU in exploiting temporal information and of convolutional models in mining spatial information to improve separation performance. The efficacy of the proposed method has been verified on the SR-ADSB dataset: compared with previous deep learning-based methods, the proposed Ind-CGRU performs better. However, it has higher computational complexity than CNN-based methods. In the future, we will optimize the Ind-CGRU to design a lightweight model and improve its robustness to low SNR.

Data availability

The data presented in this study are available on request from the corresponding author.


  1. B.S. Ali, W. Schuster, W.Y. Ochieng, Evaluation of the capability of automatic dependent surveillance broadcast to meet the requirements of future airborne surveillance applications. J. Navig. 70, 49–66 (2017)
  2. X. Zhang, J. Zhang, S.-F. Wu, Q. Cheng, R. Zhu, Aircraft monitoring by the fusion of satellite and ground ADS-B data. Acta Astronaut. 143, 398–405 (2018)
  3. S. Yu et al., Integrated antenna and receiver system with self-calibrating digital beamforming for space-based ADS-B. Acta Astronaut. 170, 480–486 (2020)
  4. Z. Shen, X. Cheng, Q. Wang, A cooperative construction method for the measurement matrix and sensing dictionary used in compression sensing. EURASIP J. Adv. Sign. Process. 2020, 1–8 (2020)
  5. K. Baker, Space-based ADS-B: performance, architecture and market (2019), p. 1–10
  6. G. Michael, D. John, H. Ben, H. Andy, D. Dennis, A compilation of measured ADS-B performance characteristics from Aireon's orbit test program (2018), p. 18–19
  7. C. Zhang, T. Zhang, H. Zhang, Overlapping ADS-B signals separation algorithm based on MUSIC. in 6th International Conference on Information Science and Control Engineering (ICISCE) (2019), p. 1094–1098
  8. N. Petrochilos, G. Galati, E.G. Piracci, Separation of SSR signals by array processing in multilateration systems. IEEE Trans. Aerosp. Electron. Syst. 45, 965–982 (2009)
  9. W. Wang, R. Wu, J. Liang, ADS-B signal separation based on blind adaptive beamforming. IEEE Trans. Veh. Technol. 68, 6547–6556 (2019)
  10. Y. Zhao, N. Wang, Q. Chen, S. Yu, X. Chen, Satellite coverage traffic volume prediction using a new surrogate model. Acta Astronaut. 193, 357–369 (2022)
  11. R. Wu, C. Wu, W. Wang, A method of overlapped ADS-B signal processing based on accumulation and classification. J. Sign. Process. 33, 572–576 (2017)
  12. Y. Sunquan, C. Lihu, L. Songting, L. Lanmin, Separation of space-based ADS-B signals with single channel for small satellite. in IEEE 3rd International Conference on Signal and Image Processing (ICSIP) (2018), p. 315–321
  13. K. Li, J. Kang, H. Ren, Q. Wu, A reliable separation algorithm of ADS-B signal based on time domain. IEEE Access 9, 88019–88026 (2021)
  14. G. Galati, N. Petrochilos, E.G. Piracci, Degarbling Mode S replies received in single channel stations with a digital incremental improvement. IET Radar Sonar Navig. 9, 681–691 (2015)
  15. W. Wang, Y. Shao, Signal separation for automatic dependent surveillance-broadcast using improved single antenna projection algorithm. J. Electron. Inf. Technol. 42, 2721–2728 (2020)
  16. D. Lu, T. Chen, Single-antenna overlapped ADS-B signal self-detection and separation algorithm based on EMD. J. Sign. Process. 35, 1681–1690 (2019)
  17. A. Luo, L. Wu, L. Chen, S. Yu, J. Ni, Single channel signals separation of space-based ADS-B based on compressed sensing. in 4th International Conference on Information Communication and Signal Processing (ICICSP) (2021), p. 116–123
  18. C. Li, Y. Zhang, B. Tang, Secondary surveillance radar replies received in single channel based on Manchester decoding algorithm. J. Detect. Control 40, 66–69 (2018)
  19. L. Pang, Y. Tang, Q. Tan, Y. Liu, B. Yang, A MLE-based blind signal separation method for time-frequency overlapped signal using neural network. EURASIP J. Adv. Sign. Process. 2022, 1–25 (2022)
  20. Y. Wang, J. Han, T. Zhang, D. Qing, Speech enhancement from fused features based on deep neural network and gated recurrent unit network. EURASIP J. Adv. Sign. Process. 2021, 1–19 (2021)
  21. Y. Yang, H. Zhang, H. Zha, R. Song, ADS-B signal separation via complex neural network. Mob. Multimed. Commun. 394, 679–688 (2021)
  22. W. Wang, J. Liu, J. Liang, Single antenna ADS-B overlapping signals separation based on deep learning. Digit. Sign. Process. 132, 103804 (2022)
  23. Y. Bi, C. Li, Multi-scale convolutional network for space-based ADS-B signal separation with single antenna. Appl. Sci. 12, 1–12 (2022)
  24. S. Li, W. Li, C. Cook, C. Zhu, Y. Gao, Independently recurrent neural network (IndRNN): building a longer and deeper RNN. in IEEE/CVF Conference on Computer Vision and Pattern Recognition (2018), p. 5457–5466
  25. Y. Luo, N. Mesgarani, Conv-TasNet: surpassing ideal time-frequency magnitude masking for speech separation. IEEE/ACM Trans. Audio Speech Lang. Process. 27, 1256–1266 (2019)
  26. W. Wang, Y. Shao, Single-antenna overlapped ADS-B signal separation based on MVDR algorithm. J. Sign. Process. 36, 686–694 (2020)
  27. K. Chen, J. Zhang, S. Chen, S. Zhang, H. Zhao, Active jamming mitigation for short-range detection system. IEEE Trans. Veh. Technol. 72, 11446–11457 (2023)
  28. Y. Luo, Z. Chen, T. Yoshioka, Dual-path RNN: efficient long sequence modeling for time-domain single-channel speech separation. in ICASSP 2020—2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (2020), p. 46–50
  29. J. Chen, Q. Mao, D. Liu, Dual-path transformer network: direct context-aware modeling for end-to-end monaural speech separation. in INTERSPEECH (2020), p. 1–5



This work was partly supported by the National Natural Science Foundation of China (Nos. 62101512, 62271453), the National Key R&D Program of China (2018YFE0203900), the Fundamental Research Program of Shanxi Province (20210302124031, 202203021212123), the Shanxi Scholarship Council of China (2023-131) and the Foundation of the State Key Laboratory of Dynamic Measurement Technology (2022-SYSJJ-08).

Author information



Corresponding author

Correspondence to Yan Bi.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit

About this article

Cite this article

Li, C., Bi, Y. Independently convolutional gated recurrent neural unit for space-based ADS-B signal separation with single antenna. EURASIP J. Adv. Signal Process. 2023, 127 (2023).
