
A neural network framework for binary classification of radar detections

Abstract

A central objective in radar target detection is to satisfy two contradictory requirements: a high probability of detection together with a low false alarm rate. In this paper, we propose the use of artificial neural networks for binary classification of targets returned by a deliberately relaxed detection process. It is shown that trained neural networks can identify false detections with considerable accuracy and, to this end, can exploit information present in guard cells and Doppler profiles. This allows the false alarm rate to be reduced with only a moderate loss in the probability of detection. With an appropriately designed neural network, an overall improved system performance can be achieved when compared against traditional constant false alarm rate detectors for the specific trained scenarios.

1 Introduction

Discriminating targets from background noise and interference is a fundamental task of all radar systems. Targets need to be differentiated with a high probability of detection (\(P_{\mathrm{D}}\)) while the detection methodology simultaneously offers a low false alarm rate (\(P_{\mathrm{FA}}\)). The radar detection problem is complicated by factors such as multiple closely spaced targets and the presence of clutter and clutter edges in the vicinity of targets. Target detection has been studied heavily over the years, and a large number of techniques have been suggested. One particular class of algorithms, which also satisfy the constant false alarm rate (CFAR) property, includes the cell averaging (CA), greatest of (GO) and smallest of (SO) CFAR sliding window methods with several proposed variants [1,2,3,4,5,6,7,8]. Importantly, these detectors aim to provide an adaptive means of calculating the detection threshold, as fixed thresholds are inadequate in complex and dynamic surroundings. In the literature, a wide variety of alternative detection methods have also been proposed for specific environmental conditions, where the detectors are tailored with respect to assorted target and clutter distributions and applicable secondary data are made use of [9,10,11,12,13]. These methods often rely on estimation of distribution parameters and covariance matrices and depend upon accurate estimation of these quantities.

The use of machine learning has gained much attention over the years, and these techniques have also been discussed in radar contexts for target detection [14,15,16,17,18,19,20,21,22,23,24,25,26]. In particular, Wang et al. [21] investigated the use of deep convolutional networks to improve target detection, while [23] employed autoencoders for the same task. In [19, 20, 27], the authors proposed training feedforward neural networks (NN) to emulate various CFAR detectors while concurrently aiming to reduce the number of false detections. The trained neural networks were demonstrated to be effective in reproducing the detection algorithms and in reducing the false alarm rate with some loss in probability of detection. Nevertheless, the proposed approaches left several questions open: what features do the networks require to distinguish between true and false detections, are there circumstances where they do not operate well, how many layers should a network have, and what is the maximum potential of such a trained neural network? Several disadvantages of the previous techniques have also been observed. For instance, the neural networks were required to learn and implement the CFAR detection process as well as to discriminate between true and false detections, resulting in an interwoven training procedure with a lesser degree of specialization. Another drawback was that the strategy was found to be incapable of further generalization to more sophisticated detectors which, for example, require sorting of entries. Finally, processing every sliding window sample through a full neural network is a computationally expensive procedure compared to a standard CFAR test, and alternative strategies are therefore of great practical interest.

This paper builds upon the previous works and presents a cohesive, generic methodology for training and performing a detection process in combination with feedforward neural networks. The initial detection process is proposed to be carried out conventionally by a modified, rudimentary, sliding window CFAR detector, and only positive detections are processed by the neural network. The neural network thus acts as a specialized binary classifier between true and false detections. The leading detection strategy is based on a modified version of censored mean SO-CFAR with the objective of forcing the detector to provide a high probability of detection. In the suggested SO-CFAR construction, a number of the largest elements in the sliding window are censored, which makes it viable to detect in complicated conditions such as multiple closely spaced targets as well as targets enclosed in clutter edges. The downside is an exceedingly high false alarm rate, which is where the neural network steps in and aims to curtail it to acceptable levels. In order to achieve a high probability of correct target detection, we show that simply using the CFAR sliding window samples is not sufficient; rather, the target spread in Doppler contains important discriminatory information and contributes positively if integrated by the neural network. By linking these strategies together in an appropriate training session, it is shown that a high level of \(P_{\mathrm{D}}\) can be achieved with a satisfactorily low \(P_{\mathrm{FA}}\) for the type of scenarios the network has been trained on. On the other hand, if only elementary CFAR window data, excluding guard cells, are used to train a neural network, then in a noise-only situation the network can converge toward a traditional CFAR detection strategy. To this end, simulations over a convoluted scene with multiple closely spaced fluctuating targets, with and without K-distributed clutter, are performed and evaluated under various parameters and sliding window structures. The contributions of this paper permit a radar sensor to adapt the detection process to its surroundings based on learned collected or simulated data, and it is shown how established detection methods may be coupled with machine learning concepts.

2 Radar and signal model

This section briefly reviews a generic structure for a pulsed radar system upon which a sliding window detector can be modeled. The radar is assumed to emit a burst of M waveforms in a coherent processing interval (CPI). The targets are assumed to be slowly fluctuating with a distribution, such as Swerling 1, where the values vary randomly across different dwells but with a given mean signal-to-noise ratio (SNR) and signal-to-clutter ratio (SCR). For each CPI, the radar processing unit performs a tapered Fourier transform over each range bin to construct a range-Doppler map represented by an \(M \times R\) complex matrix \({\mathbf{D}}(t,\omega )\). \(t=1,2,\ldots ,R\) is the discrete fast-time parameter with respect to different time delays (range cells) while \(\omega =1,\ldots ,M\) represents the Doppler cells.
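
As a concrete illustration of the map construction described above, a minimal sketch follows; the function name, array layout and the use of an FFT shift are illustrative assumptions rather than details prescribed by the paper.

```python
import numpy as np

def range_doppler_map(pulses, taper=np.hamming):
    """Minimal sketch: pulses is an M x R complex slow-time/range matrix
    (M pulses in a CPI, R range bins); a tapered FFT over slow time gives
    the M x R complex range-Doppler map D(t, omega)."""
    M = pulses.shape[0]
    return np.fft.fftshift(np.fft.fft(taper(M)[:, None] * pulses, axis=0), axes=0)

# Example usage with noise-only data for M = 16 pulses and R = 300 range bins
rng = np.random.default_rng(0)
pulses = (rng.standard_normal((16, 300)) + 1j * rng.standard_normal((16, 300))) / np.sqrt(2)
D = range_doppler_map(pulses)
D_sq = np.abs(D) ** 2   # square-law map used by the sliding window detectors
```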

To perform a detection, each individual cell of the range-Doppler map is evaluated one by one. The detector takes the square-law range samples of the map \({\mathbf{D}}\), \({\hat{\mathbf{D}}}(t,\omega ) = |{\mathbf{D}}(t,\omega )|^2 \; \forall \; t,\omega\), and a sliding window of size \(2N+2G+1\) is moved across, \({\hat{t}}=1+N+G,\ldots ,R-N-G\) and \({\hat{\omega }}=1,\ldots ,M\), excluding the edges. The \(2N+2G+1\) samples in range specified by the window are extracted as \(x(u) = {\hat{\mathbf{D}}} ({{\hat{t}}-N-G:}\;{{\hat{t}}+N+G},{\hat{\omega }}), \; u=1,2,\ldots ,2N+2G+1\), and the cell in the middle of the window, \(x(N+G+1)\), the cell under test (CUT), is compared against a scaled average, \(\gamma\), to determine whether the conditions for declaring a detection are satisfied. The G guard (gap) cells immediately to the right and left of the CUT are discarded when computing the average to avoid target leakage and to neutralize the impact of sidelobes. A detection is declared if

$$\begin{aligned} x(u)_{|u={\mathrm{CUT}}} > \gamma \; K, \end{aligned}$$
(1)

where K is a specified threshold factor.

The background average, \(\gamma\), plays a central role in the detection process and may be computed through a variety of methods. In cell averaging (CA) CFAR, the average is composed of all available 2N reference cells,

$$\begin{aligned} \gamma = \frac{1}{2N} \left( \sum _{k=1}^N x(k) + \sum _{k=N+2G+2}^{2N+2G+1} x(k)\right) , \end{aligned}$$
(2)

which corresponds to a maximum likelihood estimate under homogeneous assumptions [5, 28] but may also be applied under other conditions. As alternatives, greatest of (GO) CFAR and smallest of (SO) CFAR can be implemented, where two averages are computed based on N reference values each from the left and right side of the CUT,

$$\begin{aligned} \gamma _1 = \frac{1}{N} \sum _{k=1}^N x(k), \;\;\; \gamma _2 = \frac{1}{N} \sum _{k=N+2G+2}^{2N+2G+1} x(k). \end{aligned}$$
(3)

In GO-CFAR, a conservative approach is taken as the maximum of the two is selected as the comparison value,

$$\begin{aligned} \gamma = \max \; \left( \gamma _1, \gamma _2\right) , \end{aligned}$$
(4)

typically resulting in a slightly lower \(P_{\mathrm{D}}\) alongside a reduced false alarm rate. In SO-CFAR, the smallest value is instead chosen,

$$\begin{aligned} \gamma = \min \; \left( \gamma _1, \gamma _2\right) , \end{aligned}$$
(5)

which improves detection in the case of a clutter edge or in the presence of an additional target on one side of the sliding reference window [29]. This results in good detection performance; however, as the interference level is always underestimated, a high \(P_{\mathrm{FA}}\) is to be expected in non-homogeneous conditions. There exist a large number of other methods, including censored mean level (CML-CFAR) detectors [30, 31], where the largest samples in the reference windows are excluded before the background mean is computed. Removing the larger values improves detection in the presence of dense targets but otherwise comes at the expense of an increased \(P_{\mathrm{FA}}\) due to an undervalued \(\gamma\) estimate. In previous papers [19, 20], it was demonstrated that a neural network could be trained to mimic the classical CFAR detectors with a reduced number of incorrect detections, but the \(P_{\mathrm{D}}\) could not be increased. Building upon this capability, one can move to a new type of detection process where the detector is altered to provide a very high level of \(P_{\mathrm{D}}\) alongside a much enlarged \(P_{\mathrm{FA}}\). This can be viewed as the first step in a two-part cascaded classification process, where in the first stage the designated detector synthesizes a very coarse decision. In the second step, only positive detections are evaluated by a trained neural network to determine whether the bin qualifies for a detection or not. The procedure implemented by the secondary network classifier is not contingent upon a particular detector, but the first-stage detector upper-bounds the \(P_{\mathrm{D}}\) performance of the system, and the network is only taught to identify false detections with respect to a given detector. In this text, we limit ourselves to the modified version of the classical SO-CFAR detector as it can be used to demonstrate the applicability to both noise-only and clutter-based scenarios. This detector may readily be replaced by other types of detectors, preferably ones that can be tuned to yield a larger or smaller number of correct and incorrect detections.
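
To make the above threshold tests concrete, a minimal Python sketch of Eqs. (1)-(5) is given below; the function name and window layout are illustrative and assume the indexing conventions of Sect. 2.

```python
import numpy as np

def cfar_decision(x, N, G, K, mode="CA"):
    """Sketch of the threshold tests in Eqs. (1)-(5) for one sliding window.

    x : square-law samples of the window, length 2N+2G+1 (0-based array)
    N : reference cells per side, G : guard cells per side, K : threshold factor
    """
    x = np.asarray(x)
    cut = x[N + G]                         # cell under test, x(N+G+1) in the text
    g1 = x[:N].mean()                      # leading reference cells, Eq. (3)
    g2 = x[N + 2 * G + 1:].mean()          # lagging reference cells, Eq. (3)
    if mode == "CA":                       # Eq. (2)
        gamma = 0.5 * (g1 + g2)
    elif mode == "GO":                     # Eq. (4)
        gamma = max(g1, g2)
    elif mode == "SO":                     # Eq. (5)
        gamma = min(g1, g2)
    else:
        raise ValueError("unknown mode")
    return cut > gamma * K                 # detection test, Eq. (1)
```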

3 Methods

The methodology of the two-step detection and classification process is described next.

3.1 Step 1: CMSO-CFAR detector

As the initial first-step detector, we utilize a modified version of the censored mean level detector combined with SO-CFAR, denoted herein as CMSO-CFAR. In this detector, the N elements on the right and left sides of the sliding window x(u) are sorted in two blocks in increasing order

$$\begin{aligned} {\hat{x}}_1(u) = {\text{sort}}\left( x(1) \; \ldots \; x(N) \right) \end{aligned}$$
(6)

and

$$\begin{aligned} {\hat{x}}_2(u) = {\text{sort}}( x(N+2G+2) \; \ldots \; x(2N+2G+1) ). \end{aligned}$$
(7)

From each block, only the P lowest-valued samples, \(1 \le P \le N\), are taken into consideration when estimating the two mean averages,

$$\begin{aligned} \gamma _1 = \frac{1}{P} \sum _{k=1}^P {\hat{x}}_1(k), \;\;\;\gamma _2 = \frac{1}{P} \sum _{k=1}^{P} {\hat{x}}_2(k). \end{aligned}$$
(8)

The smallest of these two averages, \(\gamma = \min \; (\gamma _1, \gamma _2)\), is selected as the estimate to be applied in (1). The performance of this detector is contingent on the choice of P. A selection of \(P=N\) leads to the standard case of SO-CFAR, while at the other extreme \(P=1\) implies a very crude estimate of the background noise or clutter. A small choice of P is, nevertheless, well suited for detection of multiple closely spaced targets and detection in non-homogeneous settings. The reference level \(\gamma\) can be rather forgiving in CMSO-CFAR, but it operates as a regulator of when it is justifiable to train and evaluate a cell with the neural network. A complementary objective of the detector is to ensure that the complete range-Doppler map does not require inspection by a neural network, which would be a more computationally expensive operation.
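
A minimal sketch of the CMSO-CFAR estimate of Eqs. (6)-(8), reusing the window layout from the earlier CFAR sketch, follows; function and variable names are illustrative.

```python
import numpy as np

def cmso_cfar_gamma(x, N, G, P):
    """Censored mean SO estimate: sort each reference block, average the P
    smallest values per side (Eq. (8)) and take the smallest-of result."""
    x = np.asarray(x)
    left = np.sort(x[:N])                  # Eq. (6)
    right = np.sort(x[N + 2 * G + 1:])     # Eq. (7)
    gamma_1 = left[:P].mean()
    gamma_2 = right[:P].mean()
    return min(gamma_1, gamma_2)           # applied in the test of Eq. (1)

# P = N reduces to standard SO-CFAR; a small P censors strong interferers.
```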

3.2 Step 2: Neural network classifier

In the second step, all positive outcomes from the initial detector are processed by a neural network. Only if the detector returns a positive decision are selected range-Doppler data from the neighborhood of the CUT supplied to the network. The output of the neural network then determines whether a target detection at the CUT is declared or not.

The aforementioned CMSO-CFAR test is formulated to operate in the range dimension only; however, for the neural network several choices can be made regarding what data from the range-Doppler map it should be supplied with. We consider three different options, namely O1, O2 and O3, where the data fed into the network are specified by \({\mathbf{r}}\) as follows:

  • O1: \({\mathbf{r}}= (x(1) \;\ldots \; x(N), \; x(N+G+1), \; x(N+2G+2) \; \ldots \; x(2N+2G+1))\), \(\;2N+1\) values corresponding to the sliding window reference cells and the value in the CUT

  • O2: \({\mathbf{r}}= (x(1) \;\ldots \; x(2N+2G+1))\), \(\;2N+2G+1\) values corresponding to the sliding window reference cells, including guard cells, and the value in CUT

  • O3: \({\mathbf{r}}= (x(1) \;\ldots \; x(2N+2G+1) \; {\hat{\mathbf{D }}} (t, 1 \;\ldots \; M))\), \(\;2N+2G+1+M\) values corresponding to the sliding window reference cells, including guard cells, the value in CUT and the M values from the Doppler profile for the particular range-bin.

Figure 1 provides an illustration of range-Doppler sliding window data connected to a fully connected neural network. With the first option (O1), the network is only fed the same information as utilized by a standard SO-CFAR test, while with the second option (O2) the guard cell data are also incorporated. Guard cells can potentially contain useful information for target identification due to possible range walk and range sidelobes originating from the pulse spreading function. Similarly, targets tend to spread out in Doppler, subject to the applied tapering window, and the statistical information from a target impacting neighboring cells can be taken into account by a neural network. With the third option (O3), the neural network is fed all M samples from the Doppler profile in addition to the \(2N+2G+1\) values from the CFAR range window. The auxiliary data of O2 and O3 have always been available for detection purposes, but it is not analytically discernible how they can be utilized to improve the target detection process; this is nevertheless a task ideally suited for neural networks, which can internally construct intricate models. In all cases, the input samples to the neural network are normalized by min-max normalization, \({\hat{{\mathbf{r}}}} = \frac{ {\mathbf{r}} - \min ({\mathbf{r}})}{\max ({\mathbf{r}}) - \min ({\mathbf{r}})}\). The output from the last layer of the neural network, \(\kappa =f_{\mathrm{NN}}({\hat{{\mathbf{r}}}})\), returns a detection estimate, where \(f_{\mathrm{NN}}({\hat{{\mathbf{r}}}})\) represents the neural network modeled as a function of the normalized data \({\hat{{\mathbf{r}}}}\). A threshold test is applied to \(\kappa\), and if exceeded, a detection is declared.
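
The three data options and the min-max normalization can be sketched as follows; the array layout (range by Doppler), the function name and the index handling are illustrative assumptions.

```python
import numpy as np

def extract_features(D_sq, t_hat, w_hat, N, G, option="O3"):
    """Build the normalized input vector r-hat for one CUT at (t_hat, w_hat).

    D_sq is the square-law range-Doppler map, stored here as R x M
    (range bins by Doppler cells) for convenience."""
    window = D_sq[t_hat - N - G : t_hat + N + G + 1, w_hat]      # 2N+2G+1 range cells
    if option == "O1":                                           # reference cells + CUT only
        keep = np.r_[0:N, N + G, N + 2 * G + 1 : 2 * N + 2 * G + 1]
        r = window[keep]
    elif option == "O2":                                         # full window incl. guard cells
        r = window
    else:                                                        # O3: window + Doppler profile
        r = np.concatenate([window, D_sq[t_hat, :]])
    return (r - r.min()) / (r.max() - r.min())                   # min-max normalization
```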

Fig. 1: Schematic description of the detection process and data linked with a neural network

For supervised training of the proposed network, we assume that a collection of L independent range-Doppler maps \(\mathbf{D }_1(t,\omega ),\ldots ,\mathbf{D }_L(t,\omega )\) has been acquired wherein the targets and their positions in range and Doppler are known precisely (i.e., ground truth). A database is then constructed based on realizations of CMSO-CFAR tests and window samples which lead to positive detections. Samples are collected both when the detection process returns a correct decision and when the detector returns a false positive. The training objective of the neural network is to distinguish between these two cases and to return either 0 or 1:

$$\begin{aligned} \hat{\kappa} = \left\{ \begin{array}{ll} 1, & \text{CMSO-CFAR: correct decision} \\ 0, & \text{CMSO-CFAR: false positive}, \end{array} \right. \end{aligned}$$

where \(\hat{\kappa }\) is the desired output from the neural network. This training process forces the network to evolve a statistical mechanism to separate the two types of detections. We remark that the condition for a correct decision must also include neighboring cells if a target spreads out in range and/or Doppler due to sidelobes and the initial detector returns a positive outcome. The objective of the neural network training is not necessarily to retain the same \(P_{\mathrm{D}}\) as CMSO-CFAR but to exploit it to maximize the detection capability with a satisfactorily low false alarm rate.

For a generic type of network training, the training database should contain a wide variety of samples in order for the network to learn to recognize different situations. This is particularly important if the network is trained on data including guard cells and/or Doppler profiles, as the network can become highly specialized with respect to these inherent features. To attain better control over this, the samples within the training database may be split into multiple categories when the initial detector returns a positive result. Possible categories for positive and correct CMSO-CFAR decisions are: target in a noise-only environment, target in a clutter region, target with the presence of another target in the reference cells, target with a clutter edge on the right or left side of the reference cells, and so forth. Similarly, different categories can be established for incorrect decisions, such as a false positive in a noise-only environment and a false positive due to the presence of clutter. A fair balance between these groups is important if the network is expected to perform equally well in all situations. In a simulated environment, a good balance can be achieved by making certain that the various situations all occur with equal likelihood. Neural network training based on the above criteria is in principle an optimization process with the objective of minimizing the overall error of the network, which can be decomposed as

$$\begin{aligned} \min f_{{\mathrm{NN}}|{\hat{\mathbf{x}}}} = \sum _{k=1}^{A_N} \left( |f_{\mathrm{NN}}({\hat{x}}_{A,k})| - 1\right) ^2 + \sum _{k=1}^{B_N} \left| f_{\mathrm{NN}}\left( {\hat{x}}_{B,k}\right) \right| ^2 \end{aligned}$$

where \(A_N\) and \(B_N\) refer to the number of samples for correct and false positive detections, with the sliding window samples denoted by \({\hat{x}}_{A,k}, \; k=1,\ldots ,A_N\) and \({\hat{x}}_{B,k}, \; k=1,\ldots ,B_N\) for the two categories.
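
The decomposed training error above can be expressed directly in code; the sketch below assumes f_nn is any callable mapping a normalized window to a scalar output (names are illustrative).

```python
def training_error(f_nn, samples_correct, samples_false):
    """Squared-error objective: correct CMSO-CFAR decisions are pushed toward 1,
    false positives toward 0 (labels as defined in the text)."""
    err_correct = sum((abs(f_nn(x)) - 1.0) ** 2 for x in samples_correct)
    err_false = sum(abs(f_nn(x)) ** 2 for x in samples_false)
    return err_correct + err_false
```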

3.3 Choice of neural network

The number of input parameters to the network ranges from \(2N+1\) entries (O1) up to \(2N+2G+1+M\) values in the case of O3. As the amount of input data is rather limited, we recommend using standard fully connected feedforward networks for the machine learning parts, as such networks are able to approximate any arbitrary operator [32]. The output from a node of the network is therefore connected to every node in the next layer. The network consists of an input layer, an output layer with a linear transfer function and one or two hidden layers with hyperbolic tangent sigmoid activation functions. The output layer contains a single node as a binary detection estimate is desired. The number of hidden layers and nodes may be varied and is discussed in the next section, though very large networks are not desirable as they can potentially lead to overtraining. Conversely, very small networks may not be able to distinguish well between true targets and false detections (Footnote 1).
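
As an illustration, such a 50x1 or 50x2 fully connected network could be defined as below; PyTorch is used only as an example framework and is not prescribed by the paper.

```python
import torch.nn as nn

def build_classifier(n_inputs, hidden_layers=2, width=50):
    """Fully connected feedforward classifier: tanh hidden layers and a
    single linear output node producing the detection estimate kappa."""
    layers, n_prev = [], n_inputs
    for _ in range(hidden_layers):
        layers += [nn.Linear(n_prev, width), nn.Tanh()]
        n_prev = width
    layers.append(nn.Linear(n_prev, 1))          # linear output layer, one node
    return nn.Sequential(*layers)

# O3 input with N=9, G=3, M=16 gives 2N+2G+1+M = 41 entries (Sect. 4)
net_50x2 = build_classifier(41, hidden_layers=2)
net_50x1 = build_classifier(41, hidden_layers=1)
```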

4 Results and discussion

Neural network training is a highly data-driven approach, and the results may depend on the type of input data and the constructed scenario. This section exemplifies how the presented framework can be put to use in both noise-only and clutter-oriented scenarios. Parts of a pulsed radar system are simulated in slow-time to train neural networks under the proposed methodology, and the performance is evaluated against traditional SO-CFAR and GO-CFAR detectors alongside CMSO-CFAR. The radar is assumed to transmit and receive \(M=16\) pulses over \(R=300\) range bins. In total, 11 independently fluctuating targets are modeled and placed at various range bins. The targets' reflectivity is assumed to follow a standard Swerling 1 model where the mean is varied randomly during training to mirror different power levels. The clutter shape parameter is selected uniformly at random for each dwell in the range between \(v=0.05\) (spiky) and \(v=10\) (Rayleigh distributed) [33]. The clutter values are then randomly generated through a K-distribution [18] to cover the first half of the range bins and are additionally shaped with a propagation factor to provide a dip in the clutter region. A random process additionally scales the clutter up or down to implement a greater variation in signal-to-clutter ratio from dwell to dwell. Noise is modeled as white Gaussian noise, and to simulate noise-only scenarios, the clutter modeling aspects are discarded. For construction of the range-Doppler map, a Hamming window is applied.
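
A hedged sketch of the target and clutter fluctuation models mentioned above is given below; the gamma-times-exponential construction is one common way to draw K-distributed intensity samples, and the parameter names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def swerling1_power(mean_power, n_dwells):
    """Swerling 1: one exponentially distributed power value per dwell."""
    return rng.exponential(mean_power, size=n_dwells)

def k_clutter_power(shape_v, mean_power, size):
    """K-distributed clutter intensity drawn as a gamma-distributed texture
    multiplied by exponential speckle (compound-Gaussian construction)."""
    texture = rng.gamma(shape_v, mean_power / shape_v, size=size)
    speckle = rng.exponential(1.0, size=size)
    return texture * speckle

# Shape parameter per dwell drawn uniformly between spiky and near-Rayleigh
v = rng.uniform(0.05, 10.0)
clutter = k_clutter_power(v, mean_power=1.0, size=150)   # first half of 300 range bins
```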

We refer to Fig. 2 for an example range-Doppler map where all 11 targets stand out and are designated A to K. In the figure, the clutter component is included and can be seen on the left side. Targets A, B, C, H, I, J can be considered to be in a clutter-dominated region, and their velocity is randomly selected within \(-45\) m/s to 45 m/s. For the other targets, the velocity range is set between \(-65\) and 65 m/s. Targets A, C and H are within the vicinity of the clutter edge, and their range placement is randomly determined to be between 0 and 9 bins to the left or right of range bins 60 or 160, respectively, at the start and end of the second clutter region. Positive detection samples will thus experience cases of clutter edges at different distances from the CUT on both sides. Targets D, E, F and I, J are closely spaced with identical velocities, but the distance between them is randomly set between 3 and 10 range cells. With a probability of 0.5, these closely spaced targets have equal power levels, while otherwise they all fluctuate randomly. Closely spaced targets are thus modeled with both matching and dissimilar reflectivity values. We further remark that the target pairs A, H and B, J and G, K are placed on the same range bins. This is important to make certain that when a network is trained incorporating the Doppler profile (O3), it does not erroneously assume that only a single target can be encountered on a given range bin. The simulated scenario is set up so that the different types of targets in noise-only and clutter regions, targets in clutter edges, and single or closely spaced targets are each accounted for approximately once in a proportional manner. The few exceptions are related to targets that occur twice in the same range bin, such as targets G and K, which are both single targets in the noise-only region. All targets are moreover modeled to have a single range sidelobe of \(-23\) dB on adjacent bins, and with a probability of 0.5, range walk is simulated with a target spreading across two range cells. If the target is designated to spread out in range, then the neighboring cell instead draws an independent Swerling 1 value from the same distribution. Simulating target spread does not alter the performance of a standard CFAR detector with guard cells; however, a neural network can then not simply recognize a target by an expected fixed sidelobe level in bordering cells. The noise floor during simulations is also not kept fixed but ranges between \(-80\) and \(-115\) dB following a uniform distribution between CPIs. The variation in the above setup, particularly from dwell to dwell, captures a broad range of both simple and challenging detection conditions, which should be suitable for training and evaluating a generic type of detector.

Fig. 2: Simulated range-Doppler map

After formation of a range-Doppler map, a detection process is performed with the CFAR parameters set to \(G=3\) guard cells and \(N=9\) averaging cells on each side and a threshold factor of 14 dB. Two values, \(P=3\) and \(P=6\), were selected for CMSO for two independent realizations of training and evaluation. To construct the training database, true and false positive detections were taken as encountered sequentially by CMSO-CFAR, and new random range-Doppler maps were generated for as long as required. A total of \(A_N=100{,}000\) positive detections were collected, while the number of collected false positive detections was set to \(B_N=1{,}000{,}000\), resulting in 1.1 million entries for the training set. The ratio between incorrect and correct detections reflects the much larger number of false detections which arise with the application of CMSO-CFAR. For clutter scenarios, the false detections were split into two groups, half occurring in the clutter region and the other half in the noise-only region. The target SNRs over all dwells ranged from \(-40\) to 70 dB, while the SCR varied between \(-60\) and 60 dB when clutter was included.

For neural network training, all three data options were considered with two different network sizes: a small 50x1 network with one hidden layer of 50 nodes and a bigger 50x2 network with two hidden layers. The selection of 50 nodes per layer was made as it roughly corresponds to the O3 input of 41 entries. For a fair comparison, the same network sizes were also kept for the O2 and O1 selections. Even bigger and deeper networks were also investigated; however, their performance was found to be generally very comparable to the 50x2 networks. Training was carried out using the scaled conjugate gradient algorithm over both noise-only and clutter-plus-noise scenarios. The full data were used for training, without any division into separate training and validation sets, over a total of 1 million epochs. To inspect how training actually transfers to detection performance on untrained data, a larger set of 6500 range-Doppler maps was constructed following the previously described principles but with a set mean power value for the targets. Each map was evaluated in full through the different detection methods and the trained neural networks to build up statistics. This process was repeated with varying average target power levels to obtain \(P_{\mathrm{D}}\) and \(P_{\mathrm{FA}}\) curves with respect to mean target SNR or SCR. \(P_{\mathrm{D}}\) was calculated as the number of correctly detected targets relative to the total number of simulated targets, while \(P_{\mathrm{FA}}\) was calculated as the number of incorrectly detected targets relative to the total number of tests (26.2 million per SNR/SCR). The network threshold was fixed at \(\kappa > 0.8\), which represents an outcome with a high degree of certainty. Other values of \(\kappa\) can be chosen to shift the curves upward or downward [27].
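
The evaluation statistics described above can be summarized in a short helper; variable names are illustrative, and the mapping between positive detections and true targets is assumed to be available from the ground truth.

```python
import numpy as np

def pd_pfa(kappa, is_target, n_targets, n_tests, thr=0.8):
    """P_D: correctly detected targets over all simulated targets;
    P_FA: incorrect detections over the total number of CFAR tests.
    kappa holds the network outputs for all positive first-stage detections."""
    declared = np.asarray(kappa) > thr
    is_target = np.asarray(is_target, dtype=bool)
    p_d = np.sum(declared & is_target) / n_targets
    p_fa = np.sum(declared & ~is_target) / n_tests
    return p_d, p_fa
```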

Table 1 Neural network training errors, noise-only training

4.1 Noise-only scenario

If a sensor operates mainly in a homogeneous environment, then the described clutter modeling aspects can be eliminated to detect single and dense targets in noise. Training on noise only provides an opportunity to understand the behavior of neural networks and the classification process in a simpler context. Later, the performance can be compared against a network trained on both noise and clutter. The network convergence error rates after a completed training process are given in Table 1 for the noise-only case. Training over the O2 dataset gives a several-fold improvement over O1, while a further enhancement is attainable by using O3. The error rates for \(P=6\) are lower than for \(P=3\), pointing toward the fact that a low selection of P can result in a very large number of incorrect detections which are more difficult to classify. Two-layer networks generally yield lower error rates, though the improvement could stem from either the ability to detect more targets correctly or to identify more false detections.

Following training, results from the evaluation process executed over the untrained dataset of range-Doppler maps are given in Figs. 3 and 4, where the top plot depicts the \(P_{\mathrm{D}}\) curves, while the \(P_{\mathrm{FA}}\) curves are on the lower plot. The x-axis follows the average SNR in dB with fluctuating Swerling 1 targets. The best \(P_{\mathrm{D}}\) performance stems from CMSO-CFAR (dashed magenta), which also yields the highest \(P_{\mathrm{FA}}\). At the other extreme, GO-CFAR (magenta with diamonds) gives the lowest false alarm rate at the expense of the lowest detection capability. This is related to this detector's inability to identify closely spaced targets. The standard SO-CFAR (solid black) performs in-between these two extremes. Results of classification of CMSO-CFAR detections by the neural network trained only on reference cells (O1) for 50x1 (blue starred) show a very close convergence toward standard SO-CFAR for both \(P_{\mathrm{D}}\) and \(P_{\mathrm{FA}}\). Although this solution does not lead to the detection of more than two closely spaced targets, it still becomes the best fit for the dataset and demonstrates the network's ability to converge toward a classical solution if no alternatives can be found.

Fig. 3: \(P_{\mathrm{D}}\) and \(P_{\mathrm{FA}}\), noise-only scenario, 50x1 network, \(P=3\)

Fig. 4: \(P_{\mathrm{D}}\) and \(P_{\mathrm{FA}}\), noise-only scenario, 50x2 network, \(P=3\)

The other networks in Fig. 3, which utilize the guard cell information (O2, red) or guard cells combined with the Doppler profile (O3, black starred), provide a clear \(P_{\mathrm{D}}\) advantage over standard SO-CFAR, and at high SNR the curves can be seen converging toward CMSO-CFAR. The false alarm rate is well below that of CMSO- or SO-CFAR though remains above GO-CFAR. This validates the basic claim that there is useful information present in the guard cells and the Doppler cells. The false alarm rate for the O3 50x1 trained neural network stands out as it does not follow the CFAR property but rather behaves in an adaptive manner. At low SNR, the \(P_{\mathrm{FA}}\) performance is more similar to GO-CFAR but attains a higher floor level at larger SNRs. Otherwise, comparing the smaller 50x1 networks with the deeper 50x2 networks (Fig. 4), one notices that the \(P_{\mathrm{D}}\) is always better with the bigger networks, though the false alarm rates are higher. The \(P_{\mathrm{D}}\) for the O1 50x2 network is marginally better than SO-CFAR and improves further with data options O2 and O3. For the 50x2 cases, the \(P_{\mathrm{FA}}\) fluctuates around that of SO-CFAR. Although these networks evaluate positive detections coming from a CMSO-CFAR detector, all of them are clearly approximating an SO-CFAR type of detection approach and revising it based on the available extra information. With O3, it is possible to attain the CMSO-CFAR detection level for high-SNR targets, but the choice of O2 is also quite beneficial as an overall improvement on the traditional SO-CFAR detector. Recognizing and detecting a target correctly, as opposed to labeling it as a false detection, is evidently a more demanding task and must be based on information only available in the CUT, the few guard cells and/or the Doppler cells adjacent to the CUT.

The selection of \(P=3\) is useful for obtaining a high probability of detection, but the false alarm rates, except for the single case of 50x1 O3, remain relatively large compared to GO-CFAR. To further curtail the \(P_{\mathrm{FA}}\), a greater value of P can be used, which will reduce the \(P_{\mathrm{D}}\) but still be able to handle many complex situations. The results for the training and evaluation process with \(P=6\) are depicted in Figs. 5 and 6.

Fig. 5: \(P_{\mathrm{D}}\) and \(P_{\mathrm{FA}}\), noise-only scenario, 50x1 network, \(P=6\)

Fig. 6: \(P_{\mathrm{D}}\) and \(P_{\mathrm{FA}}\), noise-only scenario, 50x2 network, \(P=6\)

As one would predict, increasing from \(P=3\) to \(P=6\) reduces the detection capability of CMSO-CFAR; however, the reduction is quite small at medium to high SNR values. At low SNR, the fixed detection capability, which can be attributed to randomness, is now eliminated. To some extent, the results mirror the previous cases of \(P=3\) but with a decreased false alarm rate, particularly for the smaller 50x1 networks (Fig. 5), which are now closer to the GO-CFAR level. In the case of O1, the \(P_{\mathrm{D}}\) and \(P_{\mathrm{FA}}\) are both lower than SO-CFAR but with the same curve characteristic for \(P_{\mathrm{D}}\); this solution is thus similar to a traditional SO-CFAR detector where a different trade-off is being made between \(P_{\mathrm{D}}\) and \(P_{\mathrm{FA}}\). The use of O2 (red curve) gives a comparable \(P_{\mathrm{D}}\) to SO-CFAR with a low false alarm rate closer to GO-CFAR. The application of O3 (black starred) gives an even lower \(P_{\mathrm{FA}} < 1 \times 10^{-8}\), though the detection capability at medium SNR values is also hampered. Using Doppler information, the network manages to identify false detections very well, and the training weighting is such that a reduction in \(P_{\mathrm{FA}}\) is preferred over \(P_{\mathrm{D}}\). For the bigger 50x2 networks (Fig. 6), the false alarm rates are not as low but are distributed around \(1 \times 10^{-6}\), doing better than both CMSO- and SO-CFAR, with the O3 method having a slight advantage. That a neural network trained only on noise and utilizing the same limited information (O1) as a traditional CFAR detector can offer roughly identical \(P_{\mathrm{D}}\) performance to SO-CFAR but with a lower false alarm rate is an important finding here and is discussed further in Sect. 4.3.

Reviewing the results for the noise-only setup, we observe that with the proposed training strategy, even small-sized networks are very capable of identifying and reducing the number of false detections even though the starting premise may be a very coarse CMSO-CFAR detector. The detection performance is nevertheless strongly dependent upon how much data the network is fed. Deeper networks manage to reduce the number of incorrect detections, though they tend to place greater emphasis on the detection aspects and, regardless of the data option, end up offering a very similar false alarm rate. This explains the results of [19, 20], as the improvements demonstrated there can now be linked to the application of the O2 method. If no adjoining cell information is provided, then the trained network may converge to a standard solution (as shown for \(P=3\)), while a lower false alarm rate can still be achieved if the initial detector is more restrained (as demonstrated for \(P=6\)).

The training of a neural network as described commences from a fixed detection threshold. This is in contrast to a traditional setup, where a desired \(P_{\mathrm{FA}}\) is first determined and one then aims to maximize the \(P_{\mathrm{D}}\). By varying the threshold level K and training multiple networks, keeping other parameters fixed, receiver operating characteristic (ROC) curves can still be generated where \(P_{\mathrm{D}}\) and \(P_{\mathrm{FA}}\) values can be compared against each other. Figure 7 provides examples of such curves over the defined scenario for medium average SNR with \(P=6\) and both 50x1 and 50x2 networks, complementing Figs. 5 and 6.

Fig. 7: Receiver operating characteristic curves, noise-only scenario

As the curves demonstrate, the O3 data option provides the leading \(P_{\mathrm{D}}\)/\(P_{\mathrm{FA}}\) ratios in both cases. The deeper network (right) can deliver higher quotients, noticeably at lower \(P_{\mathrm{FA}}\) levels, whereas the 50x1 network (left) starts off at a smaller \(P_{\mathrm{FA}}\) as it is more effective at reducing false detections when there are many of them. O2 is also able to yield outcomes better than both GO- and SO-CFAR, though there are certain intervals where the effectiveness of the 50x2 network can be similar to the CMSO-CFAR detector; a trait the smaller network does not exhibit. The O1 selection is most applicable at medium to high \(P_{\mathrm{FA}}\) levels, i.e., when the number of false detections from the initial detector is small, the network can distinguish between the two classes with an advantage and follows or exceeds the performance of traditional detectors. The choice between 50x1 and 50x2 then depends primarily upon the targeted \(P_{\mathrm{FA}}\). The presence of multiple closely spaced targets makes it difficult to obtain a high \(P_{\mathrm{D}}\), as observed in the previous figures; however, the curves demonstrate how different amounts of data can be utilized by a neural network for leverage, while the initial detector remains oblivious to it.

4.2 Clutter scenario

The preceding section established some important reference points for neural network target detection in noise-only surroundings. In a more convoluted environment with the presence of sea or ground clutter, target detection becomes more difficult; however, the use of CMSO-CFAR is a viable option as it can offer a high detection capability, though with an inflated false alarm rate. We do note that false detections arising from clutter are likely to exhibit certain properties and have a statistical structure which a neural network may be able to recognize with more ease than false detections stemming from noise. The convergence error rates after the training session, for the combined noise and clutter setup of Fig. 2, are provided in Table 2. The error level decreases as the network is given more data and as P and the size of the network are increased; nonetheless, the networks do not adapt to the data as well as in the noise-only case (Table 1).

Table 2 Neural network training errors, clutter and noise

From the table, one may conclude that a bigger network and more input data are the most appropriate options for neural network training, but to determine the impact on \(P_{\mathrm{D}}\) and \(P_{\mathrm{FA}}\), simulations need to be carried out on an untrained data set as outlined previously but now with clutter incorporated. The results from the subsequent evaluation for the case of \(P=3\) and the 50x1 networks are given in Fig. 8, while Fig. 9 demonstrates the use of the larger 50x2 networks. The lower x-axis provides the average SNR of the targets in the noise-only region, while the average SCR for the targets in the clutter region is given on the upper x-axis, the resulting signal-to-noise-plus-clutter ratio being in the interval from \(-21\) to 40 dB.

Fig. 8: \(P_{\mathrm{D}}\) and \(P_{\mathrm{FA}}\), clutter and noise scenario, 50x1 network, \(P=3\)

Fig. 9: \(P_{\mathrm{D}}\) and \(P_{\mathrm{FA}}\), clutter and noise scenario, 50x2 network, \(P=3\)

Introducing clutter in the first half of the range-Doppler map lowers the detection capability slightly; however, the major impact is on the false alarm rates for CMSO-CFAR and SO-CFAR, which are now much higher. The trained 50x1 networks (Fig. 8) reduce the false alarm rates substantially, though this also comes with a large reduction in the \(P_{\mathrm{D}}\)s. Only at high SNR/SCR do the detection rates for O2 and O3 exceed standard SO-CFAR; otherwise, the \(P_{\mathrm{D}}\) remains below SO-CFAR. Even though the starting point is a CMSO-CFAR detector, the O1 method is mainly interesting as a GO-CFAR replacement since it does better than GO-CFAR with respect to detection, while the \(P_{\mathrm{FA}}\) remains similar to GO-CFAR. The deeper 50x2 networks (Fig. 9) are much more successful in terms of \(P_{\mathrm{D}}\), which is well above that of SO-CFAR depending on whether the O2 or O3 method is used, while the false alarm rates are all comparable to GO-CFAR. The case of O1 is an exception, as the detection capability is just beneath SO-CFAR though the \(P_{\mathrm{FA}}\) is significantly lower. As the false alarm rates for O1 and O2 are similar across 50x1 and 50x2, one can conclude that the smaller networks lack the necessary capacity to reliably discriminate between the various types of targets in complicated environments.

The choice of \(P=3\) represents a challenging situation, and the results for \(P=6\) are given in Figs. 10 and 11. The smaller 50x1 networks are, as in the previous case, able to reduce the false alarm rates greatly but with a marked decrease in the detection capability, which is only on par with GO-CFAR at low to medium SNRs. At high SNR/SCR levels, the performance approaches that of SO-CFAR with either O2 or O3. The deeper 50x2 networks of Fig. 11 yield better outcomes, and although the reduction in the average \(P_{\mathrm{FA}}\) is smaller, it is still significantly curtailed from the original CMSO- or SO-CFAR and floats around that of GO-CFAR. The detection rates are all between SO-CFAR and CMSO-CFAR, where the highest detection capability arises from O3, with a reduction of a couple of dB if O2 is employed. Both of these methods exhibit the same curve gradient as CMSO-CFAR. The O1 method also performs better with 50x2 compared to 50x1, although the \(P_{\mathrm{D}}\) remains below that of standard SO-CFAR.

Fig. 10: \(P_{\mathrm{D}}\) and \(P_{\mathrm{FA}}\), clutter and noise scenario, 50x1 network, \(P=6\)

Fig. 11: \(P_{\mathrm{D}}\) and \(P_{\mathrm{FA}}\), clutter and noise scenario, 50x2 network, \(P=6\)

The provided figures have established some capabilities of trained neural networks for binary classification. The \(P_{\mathrm{FA}}\), as shown in all the figures, is essentially constant, satisfying the CFAR property in both the low- and high-SNR regimes. The only exception is the case of O3 training with the smaller 50x1 network for \(P=3\), where the small choice of P leads to a very coarse estimation of the noise level.

To consolidate Figs. 10 and 11, Fig. 12 displays two ROC curves for \(P=6\) and 50x2 networks trained for different thresholds under two different SNR and SCR settings. Not all combinations of \(P_{\mathrm{D}}\)/\(P_{\mathrm{FA}}\) are achievable, as the trained networks will always try to lower the initial false alarm rate. The best gain is obtained from the combined CMSO-CFAR and neural network detector with the O3 data strategy, while O2 also surpasses the traditional detectors. The O2 method in the low-SCR plot on the left side exhibits an adaptive behavior where it converges toward standard GO-CFAR as the \(P_{\mathrm{FA}}\) increases. As in the noise-only case of Fig. 7, O1 can result in improved outcomes but only at higher \(P_{\mathrm{FA}}\) values, which shows the importance of the sidelobe information present in guard cells and Doppler.

Fig. 12: Receiver operating characteristic curves, clutter and noise scenario

4.3 Characteristic evaluation

The curves in the depicted plots demonstrate the average performance of the trained networks for the defined scenario at various SNR/SCR levels. To further investigate how these networks perform under specific target conditions, more detailed evaluations were carried out for \(P=6\). Four different simulated setups were considered. Table 3 provides the numerical outcomes for the first two situations: only a single (S) target (target C in Fig. 2) in a noise-only environment, or only three closely spaced multiple (M) targets (D, E and F) in a noise-only environment. In Table 4, the results are given for detection in a mixed noise and clutter environment with either only a single (S) target in a clutter edge (target C) or only two multiple (M) targets in the clutter region (targets I and J). In all cases, the tables provide the mean detection and false alarm rates over varying SNR as described for the training stage, evaluated across 300 untrained range-Doppler maps with approximately 103 million CFAR tests. A 0 in the tables refers to a \(P_{\mathrm{FA}} < 9\times 10^{-9}\), and the three bottom rows represent the results from the traditional CFAR detection schemes.

Table 3 Performance comparison, single target (S) and multiple close targets (M) in noise, \(P=6\)
Table 4 Performance comparison, single target in clutter edge (S) and multiple targets in clutter (M), \(P=6\)

The tables confirm the plots in establishing the progression in \(P_{\mathrm{D}}\) as one moves from feeding less data to more data, i.e., from O1 to O3. Comparing the networks trained in noise-only mode against those trained on noise and clutter (top six rows against the six bottom NN rows in both tables), those trained on noise only yield the best performance in clutter-free environments. When the networks trained on noise only are evaluated in clutter surroundings (top six rows of Table 4), the corresponding \(P_{\mathrm{FA}}\) increases, up to the level of SO-CFAR. This is still better than CMSO-CFAR considering that these networks have not been provided any clutter data for training. Networks trained on both noise and clutter (six bottom NN rows in both tables) offer a low \(P_{\mathrm{FA}}\) regardless of scenario; however, they also yield a reduced \(P_{\mathrm{D}}\) when executed on noise-only setups. The loss is more significant for O1 than for O3: for example, comparing row 4 with row 10, a \(P_{\mathrm{D}}\) reduction of 4–6% can be seen, while for O3, row 6 against row 12, the disadvantage is 0–2%. Networks trained on a combination of backgrounds thus exhibit a loss against more specialized networks, but this can to some extent be mitigated by a training process based on more input information. As established in the previous subsections, employing single-layer 50x1 networks generally yields a lower \(P_{\mathrm{FA}}\) compared to a 50x2 network, at the expense of \(P_{\mathrm{D}}\). Nevertheless, by training on both noise and clutter and employing a 50x2 neural network with the O2 or O3 data strategy (rows 11 and 12), one can obtain a \(P_{\mathrm{D}}\) which for all evaluated cases in the tables is higher than that of SO-CFAR, while the false alarm rates are comparable to GO-CFAR in clutter-based scenarios and just marginally higher in noise-only conditions.

The networks trained in noise-only mode notably provide exceptionally good noise-limited detection (top six rows of Table 3). The \(P_{\mathrm{D}}\) performance is as good as SO-CFAR, while the \(P_{\mathrm{FA}}\) is at the level of GO-CFAR. The exceptions to this are the smaller 50x1 O1 neural networks trained on noise only (row 1) or on noise and clutter (row 7) in both tables. Their single-target \(P_{\mathrm{D}}\) is closer to GO-CFAR, and these two smaller networks clearly sacrifice the \(P_{\mathrm{D}}\) performance for multiple targets in order to yield an overall lower false alarm rate. This strategy, on average, works well for the considered scenario taking into account the limited capacity of the neural network. The bigger 50x2 O1 networks (rows 4 and 10) balance this out much better and can detect dense targets with more ease.

The 50x2 O1 network (row 4 in Table 3) does not use any more information than standard SO-CFAR and decreases the \(P_{\mathrm{D}}\) by 1%, but with a markedly lower \(P_{\mathrm{FA}}\), demonstrating that certain types of false detections can systematically be curtailed by utilizing only the reference cell information. To illustrate this visually, Fig. 13 shows two randomly collected examples of sliding window data where the traditional CMSO/SO-CFAR detectors generate a false positive while the network classifies both detections as false. In both cases, the ratio between the CUT and the noise floor is only marginally satisfied, and the only reason why SO-CFAR returns a positive detection is the presence of a few dips which aid in satisfying the detection criteria. These types of situations can be analyzed and later taken into consideration by a neural network during the evaluation process.

Fig. 13: Two sliding window examples with incorrect SO-CFAR detection but correct NN classification

5 Conclusion

This paper proposed an implementation of artificial neural networks to identify false detections. It was suggested to utilize a modified version of SO-CFAR to increase the number of detections at the cost of an increased false alarm rate. The objective of the subsequent neural network is to analyze only the positive detections and reduce the false alarm rate to an acceptably low level. Various strategies for how neural network training can be accomplished were investigated in detail, and it was shown that a reduction of false alarms can typically be made with only moderate loss in the probability of detection with respect to traditional CFAR detectors. In this regard, incorporating the guard cell or Doppler profile information is very constructive for a neural network, assuming that the target and environmental specifications can be taken into account. Different trade-offs can be achieved by adjusting different parameters, where smaller fully connected feedforward networks are particularly well suited for a significant reduction of false alarm rates, while deeper networks tend to place greater emphasis on target detection.

Availability of data and materials

Trained neural networks are available from http://dx.doi.org/10.6084/m9.figshare.14252663.

Notes

  1. All of the trained neural networks described here are available for download from: http://dx.doi.org/10.6084/m9.figshare.14252663.

Abbreviations

CA: Cell averaging

CFAR: Constant false alarm rate

CMSO: Censored mean smallest of

CPI: Coherent processing interval

GO: Greatest of

NN: Neural network

\(P_{\mathrm{D}}\): Probability of detection

\(P_{\mathrm{FA}}\): False alarm rate

ROC: Receiver operating characteristic

SCR: Signal-to-clutter ratio

SNR: Signal-to-noise ratio

SO: Smallest of

References

  1. G.V. Trunk, Range resolution of targets using automatic detectors. IEEE Trans. Aerosp. Electron. Syst. 14(5), 750–755 (1978)

  2. M. Weiss, Analysis of some modified cell-averaging CFAR processors in multiple-target situations. IEEE Trans. Aerosp. Electron. Syst. 18(1), 102–114 (1982)

  3. H. Rohling, Radar CFAR thresholding in clutter and multiple target situations. IEEE Trans. Aerosp. Electron. Syst. 19(4), 608–621 (1983)

  4. V. Anastassopoulos, G.A. Lampropoulos, Optimal CFAR detection in Weibull clutter. IEEE Trans. Aerosp. Electron. Syst. 31(1), 52–64 (1995)

  5. S. Watts, Cell-averaging CFAR gain in spatially correlated K-distributed clutter. IET Radar Sonar Navig. 143(5), 321–327 (1996)

  6. S. Erfanian, V.T. Vakili, Introducing switching ordered statistic CFAR type I in different radar environments. EURASIP J. Adv. Signal Process. 525704 (2009)

  7. O.B. Daho, J. Khamlichi, O. Chappe, B. Lescalier, A. Gaugue, M. Menard, Using CFAR algorithm to further improve a combined through-wall imaging method, in European Signal Processing Conference (2012), pp. 2521–2525

  8. G.V. Weinberg, Coherent CFAR detection in compound gaussian clutter with inverse gamma texture. EURASIP J. Adv. Signal Process. 105 (2013)

  9. F. Gini, M. Greco, Suboptimum approach to adaptive coherent radar detection in compound-Gaussian clutter. IEEE Trans. Aerosp. Electron. Syst. 35(3), 1095–1104 (1999)

  10. Y. Abramovich, O. Besson, On the expected likelihood approach for assessment of regularization covariance matrix. IEEE Signal Process. Lett. 22, 777–781 (2015)

  11. E. Aboutanios, L. Rosenberg, Single snapshot coherent detection in sea clutter, in IEEE Radar Conference (2019)

  12. J. Liu, D. Massaro, D. Orlando, A. Farina, Radar adaptive detection architectures for heterogeneous environments. IEEE Trans. Signal Process. 68, 4307–4319 (2020)

  13. S. Yan, F. Lotfi, S. Chen, C. Hao, D. Orlando, Innovative two-stage radar detection architectures in adverse scenarios using two training data sets. IEEE Signal Process. Lett. 28, 1165–1169 (2021)

  14. F. Amoozegar, M. Sundareshan, A robust neural network scheme for constant false alarm rate processing for target detection in clutter environment, in Proceedings of the American Control Conference (1994)

  15. P.P. Gandhi, V. Ramamurti, Neural networks for signal detection in non-Gaussian noise. IEEE Trans. Signal Process. 45(11), 2846–2851 (1997)

  16. G.L. Risueño, J. Grajal, S. Haykin, R. Díaz-Oliver, Convolutional neural networks for radar detection, in International Conference on Artificial Neural Networks (2002), pp. 1150–1155

  17. N. Galvez, J. Pasciaroni, O. Agamennoni, J. Cousseau, Radar signal detector implemented with artificial neural networks, in Proceedings of the XIX Congreso Argentino de Control Automatico (2004)

  18. K. Cheikh, F. Soltani, Application of neural networks to radar signal detection in K-distributed clutter. IET Radar Sonar Navig. 153(5), 460–466 (2006)

  19. J. Akhtar, K.E. Olsen, A neural network target detector with partial CA-CFAR supervised training, in Proceedings of the International Conference on Radar (2018)

  20. J. Akhtar, K.E. Olsen, GO-CFAR trained neural network target detectors, in Proceedings of IEEE Radar Conference (2019)

  21. L. Wang, J. Tang, Q. Liao, A study on radar target detection based on deep neural networks. IEEE Sens. Lett. 3(2), 1–4 (2019)

  22. M. Carretero, R. Harmanny, R. Trommel, Smart-CFAR, a machine learning approach to floating level detection in radar, in Proceedings of the 16th European Radar Conference (2019)

  23. S. Wagner, W. Johannes, Target detection using autoencoders in a radar surveillance system, in Proceedings of International of Radar Conference (2019)

  24. W. Ng, G.S. Wang, Z. Lin, B. Dutta, Range-Doppler detection in automotive radar with deep learning, in Proceedings of the International Joint Conference on Neural Networks (2020)

  25. D. Gusland, S. Rolfsjord, B. Torvik, Deep temporal detection—a machine learning approach to multiple-dwell target detection, in Proceedings of IEEE Radar Conference (2020)

  26. A. Bhattacharya, R. Vaugha, Deep learning radar design for breathing and fall detection. IEEE Sens. J. 20(9), 5072–5085 (2020)

  27. J. Akhtar, Training of neural network target detectors mentored by SO-CFAR, in European Signal Processing Conference (2020), pp. 1522–1526

  28. P.P. Gandhi, S.A. Kassam, Optimality of the cell averaging CFAR detector. IEEE Trans. Inf. Theory 40(4), 1226–1228 (1994)

  29. J.T. Rickard, G.M. Dillard, Adaptive detection algorithms for multiple-target situations. IEEE Trans. Aerosp. Electron. Syst. 13(4), 338–343 (1977)

  30. S.D. Himonas, M. Barkat, Automatic censored CFAR detection for nonhomogeneous environments. IEEE Trans. Aerosp. Electron. Syst. 28(1), 286–304 (1992)

  31. M.B. Mashade, Analysis of the censored-mean level CFAR processor in multiple target and nonuniform clutter. IEE Proc. Radar Sonar Navig. 142(5), 259–266 (1995)

  32. H. Huttunen, Deep neural networks: a signal processing perspective, in Handbook of Signal Processing Systems, 3rd edn., ed. by S.S. Bhattacharyya, E.F. Deprettere, R. Leupers, J. Takala (Springer, Berlin, 2019)

  33. M.M. Horst, F.B. Dyer, M. Tuley, Radar sea clutter model, in Proceedings of the International IEEE AP/S URSI Symposium (1978), pp. 6–10


Acknowledgements

None others.

Funding

The work is not funded by any private grant.

Author information

Contributions

The author is solely responsible for this work. The author read and approved the final manuscript.

Corresponding author

Correspondence to Jabran Akhtar.

Ethics declarations

Ethics approval and consent to participate

No trials on humans.

Consent for publication

It does not contain any individual person’s data.

Competing interests

The author declares no competing interests.



Cite this article

Akhtar, J. A neural network framework for binary classification of radar detections. EURASIP J. Adv. Signal Process. 2021, 90 (2021). https://doi.org/10.1186/s13634-021-00801-y
