
Target detection based on generalized Bures–Wasserstein distance


Radar target detection with few echo pulses in a non-Gaussian clutter background is a challenging problem. In this setting, conventional detectors based on coherent accumulation are not very satisfactory. In contrast, matrix detectors based on Riemannian manifolds have shown potential on this issue, since the covariance matrix of the radar echo data during one coherent processing interval (CPI) has a smooth manifold structure. The Affine Invariant (AI) Riemannian distance between the cell under test (CUT) and the reference cells has been used as a test statistic to achieve improved detection performance. This paper uses the Bures–Wasserstein (BW) distance and the Generalized Bures–Wasserstein (GBW) distance on Riemannian manifolds as test statistics of matrix detectors and proposes the corresponding target detection methods. Maximizing the GBW distance is formulated as an optimization problem and solved by the Riemannian trust-region (RTR) method to achieve enhanced discrimination for target detection. Our evaluation on simulated and measured data shows that the matrix detector based on the GBW distance yields a significant performance gain over existing methods.

1 Introduction

The moving sea surface, with its time-varying and non-stationary properties, is a realistic scenario for target detection with high-resolution marine radar. Conventional algorithms using coherent accumulation are not very satisfactory. With only a few pulses, the classical constant false alarm rate (CFAR) detector based on the fast Fourier transform (FFT) is suboptimal because of poor Doppler resolution and energy spread. Adaptive detection strategies [1, 2] rely on clutter information to achieve a performance improvement and can ensure the CFAR property under compound-Gaussian clutter. These improved detectors are superior to the classical ones in most cases. Unfortunately, clutter characteristics, such as sea state, grazing angle, wind speed, polarization, and radio frequency [3, 4], are often unavailable, depending on the collection conditions and the radar system.

In recent years, matrix detectors based on information geometry have been developed [5,6,7], involving the geometric theory of probability distributions and its applications. The data processed by the matrix detector are the covariance matrices of the echo pulse train in each range cell. The test statistic of the matrix detector is the Riemannian distance or divergence between two covariance matrices on the manifold. The Riemannian distance between the CUT and the reference cells can be calculated according to different Riemannian metrics on the manifold. Whether a target is present is determined by comparing the Riemannian distance with a given threshold value. The matrix CFAR detector based on the Riemannian manifold significantly outperforms the conventional FFT cell-averaging CFAR and adaptive matched filtering detectors [8,9,10] for short pulse trains and heterogeneous clutter backgrounds [6, 11, 12].

The matrix detector based on Riemannian manifolds utilizes the geometric structure of the manifold to distinguish target and clutter. The curvature usually reflects the structure of manifolds [6], which varies with different metrics. The matrix detector with the AI Riemannian metric was applied in the monitoring of wake vortex turbulence [4, 13], drone detection [14], and target detection in high-frequency surface wave radar [10]. The other metrics are used in the matrix detector to improve the performance, such as Log-Euclidean [11], Jeffrey divergence (JD), Kullback–Leibler divergence (KLD) [15, 16], sKLD, tKLD [6, 17], Riemannian-Brauer and the angle-based hybrid Brauer [10].

It is exciting that the performance of the matrix detector still has the potential to be improved, since the metric determines the Riemannian distance. The straightforward idea is to explore alternative metrics on the Riemannian manifold that can increase the discrimination power between data. In this paper, motivated by optimal transport theory, we generalize the BW distance into the GBW distance on Riemannian manifolds and propose a matrix CFAR detector based on the BW and GBW distances. Maximizing the GBW distance is formulated as an optimization problem and solved by the Riemannian trust-region (RTR) method to achieve enhanced discrimination for target detection. Finally, the detection performance improvement of the proposed method is verified by tests on simulated data and measured data.

The rest of this paper is organized as follows. Section II briefly introduces the signal model, the primary hypotheses of radar target detection, and the matrix detection framework based on Riemannian manifolds. Section III introduces the BW distance from optimal transport theory as a metric on the Riemannian manifold, derives its generalization, the GBW distance, and its connection with the AI metric in detail, and formulates the radar target detection problem with the GBW distance. Section IV discusses the GBW distance optimization for radar target detection. Section V presents the computational complexity analysis and experimental results, which illustrate the improvements based on the GBW distance for radar target detection. Finally, Section VI provides the brief conclusions of this paper.

Some notation must be explained in advance: boldface x denotes a vector, and an uppercase letter A denotes a matrix. The superscript (·)T indicates matrix or vector transpose. tr(·) represents the trace of a matrix, and det(·) its determinant. vec(·) indicates the vectorization of a matrix. The symbols \({\mathbb{R}}^{n}\) and \({\mathbb{R}}^{n \times m}\) represent real n-vectors and n × m real matrices. \(\left\| \cdot \right\|_{{\text{F}}}\) is the Frobenius norm, \(\otimes\) denotes the Kronecker product, E[·] implies the statistical expectation, and I denotes the identity matrix.

2 Problem formulation

2.1 Signal model

Multiple samples are generally collected along the range dimension within one pulse repetition interval. The observed complex radar data of N pulses in each range cell can be written as \({\varvec{x}} = \{ x_{0} ,x_{1} , \cdots ,x_{N - 1} \}^{T}\), modeled as a circular complex multivariate stationary Gaussian process with covariance matrix R

$$P({\varvec{x}}|R) = \frac{1}{{\pi^{N} |R|}}\exp ( - {\varvec{x}}^{H} R^{ - 1} {\varvec{x}})$$

The target detection is a binary hypothesis problem as follows

$$\left\{ {\begin{array}{*{20}c} {\begin{array}{*{20}c} {{\mathcal{H}}_{0} :x = c} & {} \\ \end{array} } \\ {{\mathcal{H}}_{1} :x = s + c} \\ \end{array} } \right.$$

\({\mathcal{H}}_{0}\) and \({\mathcal{H}}_{1}\) represent the null and alternative hypotheses, indicating clutter only and target superimposed on clutter, respectively. x is the data of the CUT, s is the target echo, and c is the clutter plus noise. The target signal during one CPI is

$$s = a \cdot \left[ {1,{\text{e}}^{{( - j2\pi f_{d} )}} , \cdots ,{\text{e}}^{{( - j2\pi f_{d} (N - 1))}} } \right]^{T}$$

where a is the amplitude and the remaining entries form the Doppler steering vector, fd is the Doppler shift, and N is the number of pulses within one CPI.

The likelihood function \(P({\varvec{x}}|R)\) can also represent the hypothetical model of observed data, which forms a statistical manifold \({\mathcal{M}} = \{ p({\varvec{x}}|R)\}\). Therefore, the covariance matrix can be used as the parameter of test statistics. The detection problem in the additive clutter-dominated environment can be formulated as the following binary hypotheses:

$$\left\{ {\begin{array}{*{20}c} {{\mathcal{H}}_{0} :{\varvec{x}}\sim {\mathcal{N}}(0,R_{c} )\begin{array}{*{20}c} {} & {} \\ \end{array} } \\ {{\mathcal{H}}_{1} :{\varvec{x}}\sim {\mathcal{N}}(0,R_{s} + R_{c} )} \\ \end{array} } \right.$$

where Rc and Rs are the covariance matrices of clutter and target signal, respectively.

The covariance matrix of observed data in each range cell can be written as

$$R = E\left( {{\varvec{xx}}^{H} } \right) = E\left( {\begin{array}{*{20}c} {x_{0} x_{0}^{ * } } & {x_{0} x_{1}^{ * } } & \cdots & {x_{0} x_{N - 1}^{ * } } \\ {x_{1} x_{0}^{ * } } & {x_{1} x_{1}^{ * } } & \cdots & {x_{1} x_{N - 1}^{ * } } \\ \vdots & \vdots & \ddots & \vdots \\ {x_{N - 1} x_{0}^{ * } } & {x_{N - 1} x_{1}^{ * } } & \cdots & {x_{N - 1} x_{N - 1}^{ * } } \\ \end{array} } \right){ = }\left( {\begin{array}{*{20}c} {a_{0} } & {a_{1}^{*} } & \cdots & {a_{N - 1}^{*} } \\ {a_{1} } & {a_{0} } & \cdots & {a_{N - 2}^{*} } \\ \vdots & \vdots & \ddots & \vdots \\ {a_{N - 1} } & {a_{N - 2} } & \cdots & {a_{0} } \\ \end{array} } \right)$$

where ai is the correlation coefficient \(a_{i} = E(x_{m} x_{m - i}^{*} )(i = 0,1...N - 1;m = 0,1...N - 1;m > i)\). The averaging over time can be substituted for the statistical expectation since the stationary Gaussian processes have ergodicity. The correlation coefficient can be estimated as

$$a_{i} = \frac{1}{N}\sum\limits_{j = 0}^{N - 1 - i} {x_{j} } x_{j + i}^{ * } ,\left( {i = 0,1...N - 1} \right)$$
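The estimator above and the Toeplitz structure of R can be implemented directly; a minimal NumPy/SciPy sketch (the function name is ours):

```python
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_covariance(x):
    """Estimate the Hermitian Toeplitz covariance matrix of a pulse train
    x of length N from the biased correlation estimates
    a_i = (1/N) * sum_j x_j * conj(x_{j+i})."""
    N = len(x)
    a = np.array([np.dot(x[:N - i], np.conj(x[i:])) / N for i in range(N)])
    # First column a, first row conj(a): a Hermitian Toeplitz matrix.
    return toeplitz(a, np.conj(a))
```

Because the biased correlation estimate is a positive semidefinite sequence, the resulting Toeplitz matrix is positive semidefinite.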

2.2 Detection framework

The matrix detection method based on Riemannian manifolds is illustrated in Fig. 1.

Fig. 1
figure 1

Matrix CFAR detection framework

The covariance matrix of the complex echo pulse train is a symmetric positive definite (SPD) matrix. In Euclidean space, the set of SPD matrices forms a convex cone [18] that naturally carries the structure of a Riemannian manifold [19], i.e., it can be endowed with a Riemannian metric. Therefore, target and clutter can be discriminated by calculating the distance between points on the curved Riemannian manifold with a proper distance metric. The test statistic used to determine whether the target exists can be written as

$$d\left( {R_{T} ,\overline{R}} \right)\mathop { \gtrless }\limits_{{{\mathcal{H}}_{0} }}^{{{\mathcal{H}}_{1} }} \gamma$$

where RT and \(\overline{R}\) represent the covariance matrix of the CUT and the mean covariance matrix of the reference cells, respectively; the reference cells are assumed to contain no signal components. The test statistic d is the distance between the CUT and the mean of the reference cells, and \(\gamma\) is a given threshold. Target detection can thus be understood as discriminating between RT and \(\overline{R}\) on the Riemannian manifold: if the distance between RT and \(\overline{R}\) is greater than the detection threshold, a target is declared present; otherwise, it is declared absent.

2.3 Geometric distance and mean

On a Riemannian manifold, a distance can be induced from the inner product g, also known as the metric, through which we can measure the dissimilarity between SPD matrices. The set of SPD matrices is defined as

$${\mathcal{M}}{ = }\{ X:X \in R^{n \times n} ,X = X^{T} ,v^{T} Xv > 0,v \in R^{n} \}$$

From the viewpoint of differential geometry, the set of SPD matrices is known to be a smooth manifold, and a manifold equipped with a continuous and smooth inner product is referred to as a Riemannian manifold [20]. The length of a curve \(l:[x,y] \to {\mathcal{M}}\) connecting two points on the manifold is obtained by integration:

$${\text{Len}}\;(l) = \int_{x}^{y} {\sqrt {g(\dot{l}(t),\dot{l}(t))} } {\text{d}}t$$

The Riemannian distance is defined as the following infimum

$${\text{dist(}}x,y{)} = \mathop {{\text{inf}}}\limits_{\Gamma } {\text{Len}}\;{(}l{)}$$

where \(\Gamma\) indicates curve sets passing through x and y on the manifold.

There are many well-known Riemannian metrics studied in the relevant literature, such as the AI, Log-Euclidean (LE) [4, 11, 21], and Log-Cholesky [22] metrics. The AI metric can be written as

$$g_{AI} \left( {P,Q} \right) = {\text{vec}}\left( P \right)^{T} \left( {X \otimes X} \right)^{ - 1} {\text{vec}}\left( Q \right)$$

where \(X \in {\mathcal{M}}\) is a point on the manifold,\(P,Q \in T_{X} {\mathcal{M}}\) are tangent vectors in the neighborhood around X. The distance between two points \(R_{1} ,R_{2} \in {\mathcal{M}}\) induced by this metric is given

$$\begin{gathered} d_{AI}^{2} \left( {R_{1} ,R_{2} } \right) = \left\| {\log \left( {R_{1}^{ - 1/2} R_{2} R_{1}^{ - 1/2} } \right)} \right\|_{F}^{2} \hfill \\ = Tr\left[ {\log^{2} \left( {R_{1}^{ - 1/2} R_{2} R_{1}^{ - 1/2} } \right)} \right] \hfill \\ \end{gathered}$$

The AI distance is also called the geodesic distance, representing the actual distance between two points on the manifold. Under the AI metric, the geometry is a Riemannian manifold with non-positive curvature [23,24,25].
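The AI distance can be computed from the generalized eigenvalues of the pair (R2, R1), since the eigenvalues of \(R_{1}^{-1/2} R_{2} R_{1}^{-1/2}\) coincide with those of \(R_{1}^{-1} R_{2}\). A sketch (the function name is ours):

```python
import numpy as np
from scipy.linalg import eigvalsh

def ai_distance(R1, R2):
    """Affine-invariant (geodesic) distance between two SPD matrices:
    sqrt of the sum of squared logs of the generalized eigenvalues."""
    lam = eigvalsh(R2, R1)  # solves R2 v = lam * R1 v
    return np.sqrt(np.sum(np.log(lam) ** 2))
```

A useful property is the affine invariance itself: d(A R1 Aᵀ, A R2 Aᵀ) = d(R1, R2) for any invertible A.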

Other metrics, such as divergence, have also been studied in some works [6, 26, 27]. The Jeffrey divergence and Kullback–Leibler divergence of two matrices \(R_{1} ,R_{2} \in {\mathcal{M}}\) are given as

$$d_{J}^{2} \left( {R_{1} ,R_{2} } \right) = \frac{1}{2}Tr\left( {R_{1}^{ - 1} R_{2} } \right) + \frac{1}{2}Tr\left( {R_{2}^{ - 1} R_{1} } \right) - n$$
$$d_{KL} \left( {R_{1} ,R_{2} } \right) = tr\left( {R_{1}^{ - 1} R_{2} - I} \right) - \log \left| {R_{1}^{ - 1} R_{2} } \right|$$

where n denotes the dimension of the matrices. It should be noted that a divergence can be used to distinguish points on the matrix manifold; however, it is not a true "distance" because it does not satisfy the symmetry property (the KLD, for example, is asymmetric).
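These divergences are straightforward to implement; a sketch with our own function names, using `numpy.linalg.solve` to avoid explicit inverses:

```python
import numpy as np

def jeffrey_div2(R1, R2):
    """Squared Jeffrey divergence d_J^2(R1, R2); symmetric in its arguments."""
    n = R1.shape[0]
    return (0.5 * np.trace(np.linalg.solve(R1, R2))
            + 0.5 * np.trace(np.linalg.solve(R2, R1)) - n)

def kl_div(R1, R2):
    """Kullback-Leibler divergence tr(R1^{-1} R2 - I) - log|R1^{-1} R2|."""
    S = np.linalg.solve(R1, R2)
    sign, logdet = np.linalg.slogdet(S)  # stable log-determinant
    return np.trace(S) - R1.shape[0] - logdet
```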

The mean of several matrices can be defined in the same way as the arithmetic mean of several real numbers in Euclidean space. In the sense of the Karcher barycenter [28], the geometric mean of SPD covariance matrices can be determined by solving the following problem

$$\overline{R}: = \arg \mathop {\min }\limits_{{\overline{R} \in {\mathcal{M}}}} \sum\limits_{i = 1}^{N} {d^{2} \left( {\overline{R},R_{i} } \right)}$$

The above problem shows that the geometric mean is defined through the distance induced by the metric g. It should be noted that the geometric mean varies with the metric. To solve the optimization problem, one can use a gradient algorithm to find a closed-form solution or use a fixed-point iteration [5, 29,30,31].

The AI geometric mean can be derived using the fixed-point method

$$\overline{R}_{t + 1} = \overline{R}_{t}^{1/2} \exp \left( {\frac{1}{N}\sum\limits_{i = 1}^{N} {\log \left( {\overline{R}_{t}^{ - 1/2} R_{i} \overline{R}_{t}^{ - 1/2} } \right)} } \right)\overline{R}_{t}^{1/2}$$

where t is the iteration index. It can be initialized using the arithmetic mean of the matrices.
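The fixed-point iteration above can be sketched as follows, with matrix square roots, logarithms, and exponentials computed by eigendecomposition (a sketch; the helper names are ours):

```python
import numpy as np

def sym_fun(R, fun):
    """Apply a scalar function to a symmetric matrix via eigendecomposition."""
    w, V = np.linalg.eigh(R)
    return (V * fun(w)) @ V.T

def ai_mean(Rs, iters=30):
    """Fixed-point iteration for the AI (Karcher) mean of SPD matrices,
    initialized at the arithmetic mean."""
    Rbar = sum(Rs) / len(Rs)
    for _ in range(iters):
        s = sym_fun(Rbar, np.sqrt)
        si = sym_fun(Rbar, lambda w: 1.0 / np.sqrt(w))
        T = sum(sym_fun(si @ R @ si, np.log) for R in Rs) / len(Rs)
        Rbar = s @ sym_fun(T, np.exp) @ s
    return Rbar
```

For two matrices the iteration converges to the geodesic midpoint \(R_{1}^{1/2}(R_{1}^{-1/2}R_{2}R_{1}^{-1/2})^{1/2}R_{1}^{1/2}\), which gives a convenient correctness check.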

The JD and KLD mean of SPD matrices are given as

$$\overline{R}_{JD} { = }P^{{{ - }1/2}} \left( {P^{1/2} QP^{1/2} } \right)^{1/2} P^{ - 1/2}$$
$$P = \sum\limits_{i} {R_{i}^{ - 1} } ,Q = \sum\limits_{i} {R_{i} }$$
$$\overline{R}_{KLD} = \left( {\frac{1}{N}\sum\limits_{i = 1}^{N} {R_{i}^{ - 1} } } \right)^{ - 1}$$

3 BW and GBW geometry

3.1 BW geometry

One central issue in statistics is how to measure the difference between random variables in a probability space [32]. Following the latest developments on this topic, the Wasserstein metric is used to quantify the dissimilarity of probability distributions. The metric on the probability space can be written as [33]:

$$d^{2} (U,V) = \inf E\left\| {A - B} \right\|_{2}^{2}$$

where A and B are random variables with distributions U and V, respectively. The Fréchet distance between μ and v is the Wasserstein distance W2(μ, v) [33]

$$W_{2} (\mu ,v) = \mathop {\inf }\limits_{A\sim \mu ,B\sim v} \left\{ {E\left\| {A - B} \right\|_{2}^{2} } \right\}^{1/2} = \left\{ {\mathop {\inf }\limits_{\zeta \sim \Gamma (\mu ,v)} \int_{{{\mathbb{R}}^{n} \times {\mathbb{R}}^{n} }} {\left\| {a - b} \right\|_{2}^{2} } d\zeta (a,b)} \right\}^{1/2}$$

where \(\Gamma (\mu ,v)\) denotes the set of probability measures with marginals μ and v. In quantum mechanics, this metric is named the Bures distance, and it is known as the Wasserstein distance in statistics [34]. Indeed, the BW metric is a Riemannian metric.

In many practical applications, the observation data X and Y are square-integrable following U and V with zeros means and SPD covariance matrices RX and RY. The BW distance between RX and RY is given by [9, 34]

$$d_{BW} \left( {R_{X} ,R_{Y} } \right) = \left( {{\text{tr}} (R_{X} ) + {\text{tr}} (R_{Y} ) - 2{\text{tr}} (R_{X}^{1/2} R_{Y} R_{X}^{1/2} )^{1/2} } \right)^{1/2}$$
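A direct implementation of this distance (a sketch; `spd_power` computes matrix powers by eigendecomposition, and the function names are ours):

```python
import numpy as np

def spd_power(R, p):
    """Matrix power of a symmetric positive (semi)definite matrix."""
    w, V = np.linalg.eigh(R)
    return (V * np.maximum(w, 0.0) ** p) @ V.T

def bw_distance(RX, RY):
    """Bures-Wasserstein distance between SPD matrices RX and RY."""
    sX = spd_power(RX, 0.5)
    cross = np.trace(spd_power(sX @ RY @ sX, 0.5))
    d2 = np.trace(RX) + np.trace(RY) - 2.0 * cross
    return np.sqrt(max(d2, 0.0))  # clamp tiny negative rounding error
```

For commuting (e.g. diagonal) matrices the distance reduces to the Euclidean distance of the matrix square roots, which is easy to verify by hand.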

The probability distribution is regarded as a differentiable manifold in the theory of information geometry; while, the covariance matrix of random variables is considered as points on these manifolds. We noted that the Wasserstein distance of order 2 between two Gaussian variables with zero mean is the BW distance of covariance matrices [34,35,36].

When endowed with a metric \(g\) (a smooth inner product), the SPD matrix set becomes a Riemannian manifold [24]. The BW metric [34, 37] is defined, for \(P,Q \in T_{X} {\mathcal{M}}\), as

$$g_{BW} \left( {P,Q} \right) = \left\langle {P,Q} \right\rangle_{BW} = \frac{1}{2}{\text{vec}}\left( P \right)^{T} \left( {X \otimes I + I \otimes X} \right)^{ - 1} {\text{vec}}\left( Q \right)$$

The BW metric has a linear dependence on X, which makes it a more suitable and robust choice for some problems, especially for modeling time-varying data. The primary characteristic of \({\mathcal{M}}_{BW}\) is non-negative sectional curvature [24], whereas \({\mathcal{M}}_{AI}\) is a nonpositively curved space [38]. Nonnegatively curved spaces can be applied to distinguishing the covariance matrices of radar echoes.

3.2 GBW geometry

The BW metric can be generalized by a parameter, yielding the GBW metric. The GBW metric is obtained by replacing the identity matrix with an arbitrary SPD matrix Z as follows

$$g_{GBW} \left( {P,Q} \right) = \left\langle {P,Q} \right\rangle_{GBW} = \frac{1}{2}{\text{vec}}\left( P \right)^{T} \left( {X \otimes Z + Z \otimes X} \right)^{ - 1} {\text{vec}}\left( Q \right)$$

The above formula shows that the GBW metric reduces to the BW metric when Z = I and is equivalent to the AI metric when Z = X. In other words, the AI and BW metrics are special cases of the GBW metric, which connects them through the parameter Z. The geometry generated by the GBW metric has been proved to have a Riemannian structure, denoted \({\mathcal{M}}_{GBW}\). The Riemannian distance induced by the GBW metric between RX and RY is [18]

$$d_{GBW} \left( {R_{X} ,R_{Y} } \right) = \left( {tr(Z^{ - 1} R_{X} ) + tr(Z^{ - 1} R_{Y} ) - 2tr(R_{X}^{1/2} Z^{ - 1} R_{Y} Z^{ - 1} R_{X}^{1/2} )^{1/2} } \right)^{1/2}$$

The Riemannian distance on \({\mathcal{M}}_{GBW}\) is equal to the Euclidean distance on the general linear group \({\mathcal{M}}_{GL}\) through a Riemannian submersion. A. Han [18] has proved that \({\mathcal{M}}_{GBW}\) has non-negative sectional curvature, while \({\mathcal{M}}_{GL}\) has zero curvature. The minimum curvature of the GBW geometry is the same as that of the BW geometry [39], while the maximum curvature is determined by Z. According to matrix theory, we can obtain the following identities

$$\left\{ {\begin{array}{*{20}l} {tr(Z^{ - 1} R_{X} ) = tr(Z^{ - 1/2} R_{X} Z^{ - 1/2} )} \hfill \\ {tr(Z^{ - 1} R_{Y} ) = tr(Z^{ - 1/2} R_{Y} Z^{ - 1/2} )} \hfill \\ {tr(R_{X}^{1/2} Z^{ - 1} R_{Y} Z^{ - 1} R_{X}^{1/2} )^{1/2} = tr(Z^{ - 1} R_{Y} Z^{ - 1} R_{X} )^{1/2} } \hfill \\ { = tr\left( {Z^{ - 1/2} R_{Y} Z^{ - 1/2} \cdot Z^{ - 1/2} R_{X} Z^{ - 1/2} } \right)^{1/2} } \hfill \\ { = tr\left( {(Z^{ - 1/2} R_{X} Z^{ - 1/2} )^{1/2} \cdot (Z^{ - 1/2} R_{Y} Z^{ - 1/2} ) \cdot (Z^{ - 1/2} R_{X} Z^{ - 1/2} )^{1/2} } \right)^{1/2} } \hfill \\ \end{array} } \right.$$

Therefore, the GBW distance between RX and RY can be regarded as the BW distance between Z−1/2RXZ−1/2 and Z−1/2RYZ−1/2. This shows that the parameterized metric is equivalent to mapping the original matrices into another manifold via a positive definite matrix. This curvature property can be used to increase the spread of the data on the manifold, which enhances the dissimilarity of data in various application fields.
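The whitening identity above can be checked numerically: the GBW distance with parameter Z equals the BW distance of the Z-whitened matrices (a sketch; the function names are ours):

```python
import numpy as np

def spd_power(R, p):
    """Matrix power of an SPD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(R)
    return (V * w ** p) @ V.T

def bw_distance(RX, RY):
    sX = spd_power(RX, 0.5)
    cross = np.trace(spd_power(sX @ RY @ sX, 0.5))
    return np.sqrt(max(np.trace(RX) + np.trace(RY) - 2.0 * cross, 0.0))

def gbw_distance(RX, RY, Z):
    """GBW distance; reduces to BW for Z = I and equals the BW distance
    of the Z-whitened matrices Z^{-1/2} R Z^{-1/2}."""
    Zi = np.linalg.inv(Z)
    sX = spd_power(RX, 0.5)
    cross = np.trace(spd_power(sX @ Zi @ RY @ Zi @ sX, 0.5))
    return np.sqrt(max(np.trace(Zi @ RX) + np.trace(Zi @ RY) - 2.0 * cross, 0.0))
```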

As in general geometry, the geometric mean of two SPD matrices under the GBW geometry is the midpoint of the geodesic connecting them. The definition of the GBW mean (barycenter) of several SPD matrices is similar to that in Euclidean space. Finding the GBW mean of N SPD matrices is equivalent to the following optimization problem

$$\overline{R}: = \arg \mathop {\min }\limits_{{\overline{R} \in {\mathbb{S}}_{ + + }^{n} }} \sum\limits_{l = 1}^{N} {\omega_{l} } d_{GBW}^{2} \left( {R_{l} ,\overline{R}} \right),\quad \sum\limits_{l = 1}^{N} {\omega_{l} } = 1$$

The optimization problem amounts to solving a nonlinear equation with a unique solution in the convex cone \({\mathbb{S}}_{ + + }^{n}\). An alternative method for calculating the barycenter is the fixed-point iteration [34, 40]

$$\overline{R}_{n + 1} : = Z\overline{R}_{n}^{ - 1/2} \left( {\sum\limits_{l = 1}^{N} {\omega_{l} } \left( {\overline{R}_{n}^{1/2} Z^{ - 1} R_{l} Z^{ - 1} \overline{R}_{n}^{1/2} } \right)^{1/2} } \right)^{2} \overline{R}_{n}^{ - 1/2} Z$$

When it reduces to the BW metric, i.e., \(Z = I\) and \(\omega_{l} = 1/N\), the mean becomes

$$\overline{R}_{n + 1} : = \overline{R}_{n}^{ - 1/2} \left( {\frac{1}{N}\sum\limits_{l = 1}^{N} {\left( {\overline{R}_{n}^{1/2} R_{l} \overline{R}_{n}^{1/2} } \right)^{1/2} } } \right)^{2} \overline{R}_{n}^{ - 1/2}$$
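The BW barycenter fixed-point iteration (Z = I, equal weights) can be sketched as follows (function names are ours):

```python
import numpy as np

def spd_power(R, p):
    """Matrix power of an SPD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(R)
    return (V * w ** p) @ V.T

def bw_mean(Rs, iters=50):
    """Fixed-point iteration for the BW barycenter of SPD matrices,
    initialized at the arithmetic mean."""
    Rbar = sum(Rs) / len(Rs)
    for _ in range(iters):
        s = spd_power(Rbar, 0.5)
        si = spd_power(Rbar, -0.5)
        T = sum(spd_power(s @ R @ s, 0.5) for R in Rs) / len(Rs)
        Rbar = si @ (T @ T) @ si
    return Rbar
```

For commuting (diagonal) matrices the barycenter is the square of the average matrix square root, which gives a quick sanity check.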

3.3 Target detection based on GBW metric

Radar target detection aims to separate the target from the background as much as possible. In essence, it distinguishes two statistical models of the same type but with different parameters. Detection methods based on information geometry treat problems of probability, statistics, and information theory as geometric problems. We measure dissimilarity by the distance on the manifold formed by the covariance matrices of the range-cell echo data to determine whether a cell contains a target or clutter; the same covariance matrices have different pairwise distances under different metrics. The flow diagram of the proposed method is shown in Fig. 2.

Fig. 2
figure 2

a Flowchart of the proposed algorithm b Visual representation of the flowchart

The processing steps are described as follows:

1) Sample the data x of the radar echo signals in one CPI, which includes m pulse repetition intervals and n range cells.

2) Obtain high-resolution data through pulse compression in the range dimension.

3) Estimate the covariance matrix Ri of the compressed data in each range cell; the covariance matrix is modeled as a point on the SPD manifold.

4) Compute the geometric mean \(\overline{{\varvec{R}}}\) of the covariance matrices Ri of the reference cells using the BW distance on the SPD manifold.

5) Compute the GBW distance d(RT, \(\overline{{\varvec{R}}}\)) between RT and \(\overline{{\varvec{R}}}\) with the dimensionality-reduction idea, using the RTR optimization method.

6) Compare the test statistic d(RT, \(\overline{{\varvec{R}}}\)) with the threshold \(\gamma\) to determine whether a target is present. The threshold \(\gamma\) is estimated by Monte Carlo experiments according to the desired probability of false alarm.

In this paper, using a manifold optimization method, we apply the GBW geometry introduced above to target detection and map the high-dimensional GBW geometry to a low-dimensional space. By increasing the distance between the data covariance matrices on the manifold as much as possible, targets and clutter can be distinguished more easily. We seek a positive definite matrix Z that satisfies the following formula

$$\mathop {\max }\limits_{{Z^{ - 1} \in {\mathbb{S}}_{ + + }^{{\text{n}}} }} d_{GBW}^{2} \left( {R_{T} ,\overline{R}} \right): = \mathop {\max }\limits_{{Z^{ - 1} \in {\mathbb{S}}_{ + + }^{{\text{n}}} }} \left( {{\text{tr}} (Z^{ - 1} R_{T} ) + {\text{tr}} (Z^{ - 1} \overline{R}) - 2{\text{tr}} (R_{T}^{1/2} Z^{ - 1} \overline{R}Z^{ - 1} R_{T}^{1/2} )^{1/2} } \right)$$

where RT is the covariance matrix of CUT data and \(\overline{{\varvec{R}}}\) is the mean of the covariance matrices of the reference cells.

This optimization problem is difficult to solve directly. From matrix theory, we know that an n-dimensional SPD matrix \(A \in {\mathbb{S}}_{ + + }^{n}\) can be factorized as A = WWT with an invertible W; here, W is taken as \(W \in {\mathbb{R}}^{n \times d}\) (d < n). Letting \(Z^{ - 1} = WW^{T}\), the above formula can be written as

$$\begin{gathered} W^{*} : = \mathop {\sup }\limits_{{W^{T} W = I}} tr\left( {WW^{T} R_{T} } \right) + tr\left( {WW^{T} \overline{R}} \right) - 2tr\left( {R_{T}^{1/2} WW^{T} \overline{R}WW^{T} R_{T}^{1/2} } \right)^{1/2} \hfill \\ { } = \mathop {\sup }\limits_{{W^{T} W = I}} tr\left( {W^{T} R_{T} W} \right) + tr\left( {W^{T} \overline{R}W} \right) - 2tr\left( {W^{T} R_{T}^{1/2} WW^{T} \overline{R}WW^{T} R_{T}^{1/2} W} \right)^{1/2} \hfill \\ { } = \mathop {\sup }\limits_{{W^{T} W = I}} d_{BW} \left( {W^{T} R_{T} W,W^{T} \overline{R}W} \right) \hfill \\ \end{gathered}$$

If \(W^{*}\) is an optimal solution, then \(Z^{ - 1} = W^{*} (W^{*} )^{T}\). We construct the objective function

$$f\left( W \right) = - {\text{tr}}(W^{T} R_{T} W) - {\text{tr}}(W^{T} \overline{R}W) + 2{\text{tr}}(W^{T} R_{T} WW^{T} \overline{R}W)^{1/2}$$

Our optimization goal is the following classic problem:

$$W^{*} : = \arg \mathop {\min }\limits_{{W \in {\mathcal{M}}}} f\left( W \right)$$

In Lie group theory, the parameter W in the above formula can be taken to lie on a Grassmann manifold or a compact Stiefel manifold. This problem can be solved by existing manifold optimization methods, a promising alternative that converts constrained optimization into unconstrained optimization on a manifold [41, 42].
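For a given W with orthonormal columns (a point on the compact Stiefel manifold, e.g. the Q factor of a QR decomposition), the objective above can be evaluated directly. Note that f(W) is minus the squared BW distance between the projected matrices, so it is never positive; a sketch (the function names are ours):

```python
import numpy as np

def spd_power(R, p):
    """Matrix power of a symmetric positive semidefinite matrix."""
    w, V = np.linalg.eigh(R)
    return (V * np.maximum(w, 0.0) ** p) @ V.T

def f_obj(W, RT, Rbar):
    """f(W) = -tr(W'RT W) - tr(W'Rbar W) + 2 tr((W'RT W)(W'Rbar W))^{1/2},
    i.e. minus the squared BW distance between the projected matrices."""
    A = W.T @ RT @ W
    B = W.T @ Rbar @ W
    sA = spd_power(A, 0.5)
    cross = np.trace(spd_power(sA @ B @ sA, 0.5))  # tr((AB)^{1/2}), stable form
    return -np.trace(A) - np.trace(B) + 2.0 * cross
```

A random Stiefel point can be drawn as `W, _ = np.linalg.qr(np.random.standard_normal((n, d)))`.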

4 Optimization

Although its convergence is slow, the steepest-descent algorithm is still the simplest solution to most convex optimization problems. Superlinear convergence is achieved by using the second-order derivatives, or Hessian, of the objective function. The practical superlinearly convergent methods fall into two broad categories: trust-region and line-search strategies [41].

This paper focuses on the RTR method, which has several advantages. First, the objective function value decreases after each iteration. Second, the trust-region algorithm has been proven to converge to stationary points for all initial conditions. Finally, the trust region guides the stopping of the inner iteration, which retains the local convergence rate [41].

4.1 Some basic concepts

The RTR approach works with the concept of retraction.

Definition retraction:

A retraction R is a mapping from the tangent bundle \(T{\mathcal{M}}\) to the manifold \({\mathcal{M}}\). Rx denotes the retraction restricted to the tangent space \(T_{x}{\mathcal{M}}\) at a point x, and it has two basic properties.

(1) \(R_{x} (0_{x} ) = x\), where \(0_{x}\) denotes the zero element of the tangent space.

(2) \({\text{DR}}_{x} (0_{x} ) = id_{{T_{x} {\mathcal{M}}}}\), where \({\text{id}}_{{T_{x} {\mathcal{M}}}}\) indicates the identity mapping on Tx \({\mathcal{M}}\).

Given a function defined on the Grassmann manifold \({\mathcal{G}}\left( {p,n} \right)\), its Riemannian gradient and Hessian at X [43] are defined as

$${\text{grad }}f(X):{ = }{\mathbf{P}}_{{H_{X} {\mathcal{G}}(p,n)}} (\nabla f(X))$$
$${\text{Hess}}{ }f(X){ }[V]{ = }{\mathbf{P}}_{{H_{X} {\mathcal{G}}(p,n)}} (\nabla^{2} f(X)[V] - VX^{T} \nabla f(X)),{ }V \in T_{X} {\mathcal{G}}(p,n)$$

where \({\mathbf{P}}_{{H_{X} {\mathcal{G}}(p,n)}} (Z) = Z - XX^{T} Z\) is the horizontal projection. The horizontal space can be represented as \(H_{X} {\mathcal{G}}(p,n) = \{ Z \in {\mathbb{R}}^{n \times p} :Z^{T} X = 0\}\). The \(\nabla f(X)\) is Euclidean gradient.

To obtain the Riemannian gradient and Hessian of f(W), we first calculate the directional derivative Df(W), which is generally defined as

$$Df(W)[H] = \mathop {\lim }\limits_{t \to 0} \frac{f(W + t \cdot H) - f(W)}{t}$$
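This definition can be checked numerically with a central finite difference (more accurate than the one-sided quotient). Below it is verified against the analytic directional derivative of the simple quadratic \(f(W) = {\text{tr}}(W^{T}RW)\), which is \(2\,{\text{tr}}(H^{T}RW)\) for symmetric R; this quadratic is an illustrative example of ours, not the paper's objective:

```python
import numpy as np

def directional_derivative(f, W, H, t=1e-6):
    """Central-difference approximation of the directional derivative Df(W)[H]."""
    return (f(W + t * H) - f(W - t * H)) / (2.0 * t)
```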


Define the auxiliary quantities

$$\left\{ \begin{gathered} M = (W^{T} \overline{R}W)^{1/2} \hfill \\ P = D(M \cdot (W^{T} R_{T} W) \cdot M)^{1/2} [I_{d} ] \hfill \\ P_{1} = DM[P \cdot M \cdot W^{T} R_{T} W] \hfill \\ P_{2} = DM[W^{T} R_{T} W \cdot M \cdot P] \hfill \\ \end{gathered} \right.$$

The Euclidean gradient of the objective function can then be derived as

$$\nabla f\left( W \right) = \frac{\partial f(W)}{{\partial W}} = - 2(R_{T} W + \overline{R}W - 2R_{T} W(M \cdot P \cdot M) - 2\overline{R}WP_{1} - 2\overline{R}WP_{2} )$$

The Riemannian gradient and Hessian are then obtained by substituting this Euclidean gradient into the definitions above.

4.2 RTR algorithm

The most crucial step of the RTR algorithm in the iterative process is to solve the following subproblem at the k-th iterate \(x_{k}\)

$$\mathop {\min }\limits_{{\xi \in T_{xk} {\mathcal{M}}}} m_{k} (\xi ): = \left\langle {{\text{grad}}{ }f(x_{k} ),\xi } \right\rangle_{{x_{k} }} + \frac{1}{2}\left\langle {{\text{Hess }}f(x_{k} )[\xi ],\xi } \right\rangle_{xk} { }s.t.{ }\left\| \xi \right\|_{xk} \le \Delta_{k}$$

The trust radius at the k-th iteration is \(\Delta_{k}\). This subproblem can be solved using the truncated conjugate-gradient (TCG) algorithm [44], whose solution is a descent direction \(\xi_{k} \in T_{{x_{k} }} {\mathcal{M}}\). Whether \(z_{k} = R_{{x_{k} }} (\xi_{k} )\) is accepted is determined by the ratio

$$\rho_{k} = \frac{{f(x_{k} ) - f(z_{k} )}}{{m_{{x_{k} }} (0_{{x_{k} }} ) - m_{{x_{k} }} (\xi_{k} )}}$$

If \(\rho_{k} > \eta\) for a given reference factor \(\eta \in (0,1)\), the candidate is accepted and \(x_{k + 1} = z_{k}\); otherwise, \(z_{k}\) is rejected. In addition, the trust-region radius is updated based on \(\rho_{k}\) to prevent the algorithm from stagnating. The steps of the RTR algorithm for the matrix detector are shown in Table 1 [43]
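The acceptance test and radius update can be sketched as a single bookkeeping step. The thresholds 1/4 and 3/4 and the other constants below are common trust-region defaults assumed for illustration, not values prescribed by the paper:

```python
def tr_step(rho, radius, step_norm, radius_max=1.0,
            eta=0.1, shrink=0.25, grow=2.0):
    """One trust-region decision: shrink the radius on poor model agreement,
    grow it when the model fits well and the step hit the boundary,
    and accept the candidate point only if rho exceeds eta."""
    if rho < 0.25:
        radius = shrink * radius
    elif rho > 0.75 and step_norm >= radius - 1e-12:
        radius = min(grow * radius, radius_max)
    accept = rho > eta
    return accept, radius
```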

Table 1 RTR algorithm for matrix detector

5 Performance assessment

5.1 Computational complexity analysis

Computational complexity is an important indicator for evaluating algorithms. In this section, we briefly discuss the computational complexity of the different detection methods in this paper. The analysis includes three types of calculations: basic matrix operations, geometric distances and geometric means, and the cost of the optimization.

1) For SPD matrices R1 and R2 with dimension \(n \times n\), the computational complexity of the basic operations is shown in Table 2.

2) Based on the basic matrix operations, we can calculate the distance between two matrices and the geometric mean of m matrices under the different measures, where k is the number of fixed-point iterations. The cost of these calculations is shown in Table 3. The AI measure requires the largest amount of computation; in other words, the BW and GBW metrics achieve performance improvements without increasing the computational complexity.

3) The Riemannian manifold optimization technique is employed in this paper. The computational cost of each iteration includes three parts: the objective function, the gradient, and TCG on \({\mathcal{G}}\left( {p,n} \right)\). The computational complexity of the objective function is \(l(96n^{3} + 8n^{2} - 26n - 6)\), where l is the number of iterations (about 30–50). The computational complexity of the gradient is \(l(240n^{3} + 4n^{2} - 64n)\). The computational complexity of TCG on the Grassmann manifold comprises the Riemannian gradient and the retraction: the complexity of the Riemannian gradient is \(l(24n^{3} - 6n^{2})\), and we use the exponential retraction in this paper, with a complexity of \(l(n^{4}/2 + 24n^{3} + 3n^{2}/2 - n)\).

Table 2 Computational complexity of basic matrix operations
Table 3 Computational complexity of distance and geometric mean

5.2 Experiments

In this section, we use simulation data and measured data to verify the feasibility of the proposed matrix detection scheme based on GBW geometry. At the same time, this method is compared with some existing approaches to illustrate its advantages.

A. Simulated Data.

In the simulation experiments, we use the K-distribution to simulate the clutter data. The amplitude probability density function of K-distributed sea clutter is

$$p\left( z \right) = \frac{2}{a\Gamma (v + 1)}\left( \frac{z}{2a} \right)^{v + 1} \cdot K_{v} \left( \frac{z}{a} \right),v > - 1,a > 0$$

where \(K_v(\cdot)\) denotes the v-order modified Bessel function of the second kind, with shape parameter v = 1 and scale parameter a = 0.5 here; \(\Gamma(\cdot)\) denotes the Gamma function. A total of \(10^5\) CPIs of clutter data are generated. We assume that the number of pulses in one CPI is 7, the PRF is 1 kHz, and each pulse covers 17 range cells.
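Samples with this amplitude law can be drawn through the compound-Gaussian representation: with texture \(\tau \sim \mathrm{Gamma}(v+1,\, 4a^2)\) and unit-power complex Gaussian speckle \(u\), the amplitude \(z = \sqrt{\tau}\,|u|\) follows the density above. The parameter mapping was derived by us from the stated density, so treat this as an illustrative sketch rather than the paper's exact generator.

```python
import numpy as np

def k_clutter_amplitude(n_samples, v=1.0, a=0.5, rng=None):
    # Compound-Gaussian draw of K-distributed amplitudes:
    #   texture   tau ~ Gamma(shape v + 1, scale 4 a^2)
    #   speckle   u   ~ CN(0, 1), i.e. unit-power complex Gaussian
    #   amplitude z   = sqrt(tau) * |u|
    rng = rng or np.random.default_rng()
    tau = rng.gamma(shape=v + 1.0, scale=4.0 * a * a, size=n_samples)
    u = (rng.standard_normal(n_samples)
         + 1j * rng.standard_normal(n_samples)) / np.sqrt(2.0)
    return np.sqrt(tau) * np.abs(u)
```

With v = 1 and a = 0.5 the mean clutter power \(E[z^2] = (v+1)\,4a^2\) equals 2, which gives a quick sanity check on generated samples.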

Since an explicit expression for the detection probability (Pd) is not available, the threshold required for a given false alarm probability (Pfa) is estimated through Monte Carlo experiments. With only clutter present, the detection threshold for the desired Pfa is determined from \(10^5\) Monte Carlo simulations. A point target with 2 guard cells is injected into the clutter data and placed in the middle range cell (the 9th). The Doppler frequency is set to a constant. When the target is injected into the clutter, different signal-to-clutter ratios (SCR) are obtained by changing the target signal power while keeping the clutter power constant. For each SCR, we run \(10^4\) independent simulations to estimate Pd. Here, we compare the performance of matrix detectors under different metrics.
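The clutter-only threshold-setting step amounts to taking an empirical quantile of the detector statistic over the Monte Carlo trials. A minimal sketch (the function name and interface are ours):

```python
import numpy as np

def cfar_threshold(stats, pfa):
    # stats: detector statistic (e.g., a geometric distance between the
    # CUT covariance and the reference-cell mean) collected over
    # clutter-only Monte Carlo trials.
    # Returns the empirical (1 - pfa) quantile: the threshold exceeded
    # by roughly a fraction pfa of the clutter-only trials.
    s = np.sort(np.asarray(stats, dtype=float))
    k = int(np.ceil((1.0 - pfa) * s.size)) - 1
    return s[k]
```

Pd at each SCR is then estimated as the fraction of target-present trials whose statistic exceeds this threshold. Reliable estimation requires the number of clutter-only trials to be well above \(1/P_{fa}\), which is why \(10^5\) trials are used for \(P_{fa} = 10^{-4}\).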

Figure 3 shows Pd versus SCR under different geometric distances with Pfa = \(10^{-4}\) and \(10^{-3}\), respectively. The detection performance using the AI metric is the poorest of the matrix detectors, but it still improves by about 3 dB over the coherent processing algorithm based on the FFT. The detection performance based on JD and KLD is slightly better: compared with the AI distance, the SCR requirement is reduced by about 1.5 dB and 4 dB, respectively, at the same Pd. The proposed detector with the BW metric has the best detection performance; for the same Pd, it requires about 2 dB lower SCR than the KLD-based detector. These results show the superiority of the BW geometry for target detection.

Fig. 3
figure 3

Pd versus SCR with different metrics (simulated data)

The comparison of Pd versus SCR under GBW geometric optimization is shown in Fig. 4. The indicator GBW-n in the figure denotes mapping the matrix to the n-dimensional manifold, with the optimization parameter \(W \in \mathbb{R}^{7 \times n}\) (n < 7). The figure shows that the detection performance improvement increases slightly as n is reduced. When n = 2, the SCR requirement is reduced by about 2 dB and 1.5 dB at the same Pd for Pfa = \(10^{-4}\) and \(10^{-3}\), respectively. This clearly shows that the detection performance based on the GBW geometry is further improved by projection onto a lower-dimensional space. The simulation results demonstrate that the proposed matrix detection scheme based on GBW geometry achieves better detection performance.

Fig. 4
figure 4

Pd versus SCR with GBW geometry (simulated data)
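One natural reading of the GBW-n statistic, offered here only as a sketch under that assumption rather than the paper's exact formulation, is to compress each covariance with an orthonormal \(W \in \mathbb{R}^{7 \times n}\) (a point on the Grassmann manifold over which the RTR method optimizes) and evaluate the BW distance between the projected matrices:

```python
import numpy as np
from scipy.linalg import sqrtm

def bw_distance(R1, R2):
    # Bures-Wasserstein distance between SPD matrices.
    S = np.real(sqrtm(R1))
    cross = np.real(sqrtm(S @ R2 @ S))
    val = np.trace(R1) + np.trace(R2) - 2.0 * np.trace(cross)
    return np.sqrt(max(val, 0.0))

def projected_bw(R1, R2, W):
    # W: 7 x n with orthonormal columns (n < 7). Compare the covariances
    # after projection onto the n-dimensional subspace spanned by W.
    return bw_distance(W.T @ R1 @ W, W.T @ R2 @ W)
```

In the detector, W would be chosen by the RTR method to maximize the separation between the CUT and the reference cells, which is what enhances the discriminability reported in Figs. 4 and 6.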

B. Measured Data.

The radar measurement data collected from the Naval Aeronautical University (NAU) X-band solid-state radar is used to evaluate the introduced method's performance. The main technical indicators are shown in Table 4.

Table 4 X-band radar parameters

The available raw data are the complex data after pulse compression, preprocessed by removing the DC component and scaling down. The data file "20191012112446_01_staring.mat," collected while the radar worked in the staring state [45], is used for the experiments. The data set contains \(10^4\) pulses and 5250 range cells. We choose clutter data without a target for the experiment. In total, 17 range cells and 7 pulses are used as one CPI in each test. One moving target is injected into the clutter data, located at the 9th range cell. The results of the Monte Carlo experiments under different geometric metrics are displayed in Fig. 5.

Fig. 5
figure 5

Pd versus SCR with different metrics (measured data)

The detection probability versus SCR with different metrics is depicted in Fig. 5. As illustrated, the proposed detector with the BW metric has the best detection performance, which is consistent with the simulation results. The KLD-based and JD-based detectors follow. Specifically, at the same detection probability, the SCR required by the detector with the BW metric is reduced by about 5 dB compared with the AI-based detector. The performance of the coherent processing algorithm based on the FFT is the worst.

Figure 6 shows Pd versus SCR under GBW geometric optimization. Similar to the simulation results, the detection performance improvement increases slightly as the dimension is reduced. When n = 2, an SCR improvement of about 1.5 dB and 1 dB is obtained at the same Pd for Pfa = \(10^{-4}\) and \(10^{-3}\), respectively. The experimental results demonstrate that the detector based on the GBW geometry has better detection characteristics. The reason is that the GBW geometry can enhance the discriminability between the target and the surrounding clutter.

Fig. 6
figure 6

Pd versus SCR with GBW geometry (measured data)

6 Conclusion

The method proposed in this paper maps the covariance matrix of the data to the manifold space and uses the BW and GBW distances to distinguish differences via the intrinsic structure of Riemannian manifolds. It can be used not only for detecting sea-surface targets but also for target detection in other non-Gaussian clutter backgrounds, such as ground clutter. It uses only a small number of echo pulses, which meets the needs of fast-scanning search radar. Compared with conventional coherent processing methods, its detection performance is greatly improved in the case of few echo pulses, with only a small increase in computational complexity.

In conclusion, the BW and GBW geometries based on the Riemannian manifold and their applications in target detection have been introduced in this paper. The advantages of the GBW geometry on Riemannian manifolds include: (1) the GBW metric connects the Riemannian AI distance on the manifold with the BW distance from optimal transport theory; (2) the geometric curvature structure of GBW is conducive to target detection in clutter. The GBW optimization problem is converted to an unconstrained problem on the manifold and solved by the RTR method. The simulation and measured-data results demonstrate that the proposed strategy can work effectively in practical applications. Subsequent work includes analysis of the complexity of the proposed scheme and the design of a practical detector.

Availability of data and materials

The datasets generated and analyzed during the current study are available in the [Naval Aeronautical University] repository and are available [] with the permission of [Naval Aeronautical University].


  1. E. Conte, A.D. Maio, G. Ricci, Adaptive CFAR detection in compound-Gaussian clutter with circulant covariance matrix. IEEE Signal Process. Lett. 7(3), 63–65 (2000)


  2. E. Conte, A.D. Maio, G. Ricci, Covariance matrix estimation for adaptive CFAR detection in compound-Gaussian clutter. IEEE Trans. Aerosp. Electr. Syst. 38(2), 415–426 (2002)


  3. L. Rosenberg, Parametric Modeling of Sea Clutter Doppler Spectra. IEEE Trans. Geosci. Remote Sens. 60, 1–9 (2022)


  4. L. Rosenberg, S. Watts, M.S. Greco, Modeling the Statistics of Microwave Radar Sea Clutter. IEEE A&E Syst. Mag. 34(10), 44–75 (2019)


  5. F. Barbaresco, "Innovative tools for radar signal processing based on cartan's geometry of SPD matrices and information geometry," in Proc. IEEE Radar Conf. Rome, Italy, May 2008, pp. 1–6.

  6. M. Arnaudon, F. Barbaresco, Le. Yang, Riemannian medians and means with applications to radar signal processing. IEEE J. Selected Topics Signal Process. 7(4), 595–604 (2013)


  7. H. Chahrour, R.M. Dansereau, S. Rajan et al., Target detection through Riemannian geometric approach with application to drone detection. IEEE ACCESS 9, 123950–123963 (2021)


  8. Z. Yang, Y. Cheng, H. Wu et al., Enhanced matrix CFAR detection with dimensionality reduction of riemannian manifold. IEEE Signal Process. Lett. 27, 2084–2088 (2020)


  9. X. Hua, Y. Cheng, H. Wang et al., Matrix CFAR detectors based on symmetrized Kullback-Leibler and total Kullback-Leibler divergences. Digital Signal Process 69, 106–116 (2017)


  10. Z. Yang, Y. Cheng, H. Wu, PCA-based matrix CFAR detection for radar target. Entropy 22(7), 756 (2020)


  11. S. i. Amari, "Information Geometry and Its Applications," 1st ed. Springer Publishing Company, Incorporated, 2016.

  12. X. Hua, L. Peng, MIG median detectors with manifold filter. Signal Process. 188, 108176 (2021)


  13. F. Barbaresco, Radar monitoring of a wake vortex: Electromagnetic reflection of wake turbulence in clear air. C. R. Phys. 11(1), 54–67 (2010)


  14. L. Ye, Q. Yang, Q. Chen et al., Multidimensional joint domain localized matrix constant false alarm rate detector based on information geometry method with applications to high-frequency surface wave radar. IEEE ACCESS (2019).


  15. S. Kullback, R.A. Leibler, On information and sufficiency. Ann. Math. Stat. 22, 79–86 (1951)


  16. E. Grossi, M. Lops, "Kullback–Leibler divergence region in MIMO radar detection problems," In: International Conference on Information Fusion, 2012, pp. 896–901.

  17. H. Xiaoqiang, "Research on Radar Target Detection Method based on Matrix Information Geometry," National University of Defense Technology,2018.

  18. A. Han, B. Mishra, P. Jawanpuria, "Generalized Bures-Wasserstein Geometry for Positive Definite Matrices," arXiv:2110.10464v1 [math.FA], 20 Oct 2021

  19. M. Harandi, M. Salzmann, R. Hartley, Dimensionality Reduction on SPD Manifolds: The Emergence of Geometry-Aware Methods. IEEE Trans. Pattern Anal. Mach. Intell. 40(1), 48–62 (2018)


  20. B. Jeuris, Riemannian Optimization for Averaging Positive Definite Matrices (KU Leuven-Faculty of Engineering Science, Belgium, 2015)


  21. H. Chahrour, R.M. Dansereau, S. Rajan et al., Target detection through riemannian geometric approach with application. IEEE ACCESS (2021).


  22. Z. Lin, Riemannian geometry of symmetric positive definite matrices via Cholesky decomposition. SIAM J. Matrix Anal. Appl. 40(4), 1353–1370 (2019)


  23. K.M. Wong, J. Zhang, J. Liang, H. Jiang, Mean and median of PSD matrices on a Riemannian manifold: Application to detection of narrow-band sonar signals. IEEE Trans. Signal Process. 65(24), 6536–6550 (2017)


  24. A. Han, B. Mishra, P. Jawanpuria, "On Riemannian Optimization over Positive Definite Matrices with the Bures-Wasserstein Geometry,"In: 35th Conference on Neural Information Processing Systems (NeurIPS 2021).

  25. B. Jeuris, "Riemannian Optimization for Averaging Positive Definite Matrices," Arenberg Doctoral School, Dissertation presented in partial fulfillment of the requirements for the degree of Doctor in Engineering, June 2015

  26. V. Arsigny, P. Fillard, X. Pennec, N. Ayache, Geometric means in a novel vector space structure on symmetric positive definite matrices. SIAM J. Matrix Anal. Appl. 29(1), 328–347 (2007)


  27. S. Sra, Positive definite matrices and the S-divergence. Proc. Am. Math. Soc. 144(7), 2787–2797 (2016)


  28. M. Arnaudon, F. Barbaresco, L. Yang, Medians and Means in Riemannian Geometry: Existence Uniqueness and Computation Matrix Information Geometry (Springer, New York, NY, USA, 2012)


  29. B. Afsari, "Means and averaging on Riemannian manifolds," University of Maryland, 2009

  30. R. Bhatia, J. Holbrook, Riemannian geometry and matrix geometric means. Linear Algebra Appl. 413, 594–618 (2006)


  31. B. Balaji, F. Barbaresco, and A. Decurninge, "Information geometry and estimation of Toeplitz covariance matrices," in Proc. Int. Radar Conf., Oct. 2014, pp.1–4.

  32. F. Barbaresco, "Geometric Radar Processing based on Fréchet Distance: Information Geometry versus Optimal Transport Theory," 12th International Radar Symposium (IRS), pp. 663–668, 2011

  33. M. Fréchet, Sur la distance de deux lois de probabilité. CR Acad. Sci. Paris 244, 689–692 (1957)


  34. R. Bhatia, T. Jain, Y. Lim, "On the Bures-Wasserstein distance between positive definite matrices," arXiv:1712.01504v1 [math.FA], 5 Dec. 2017

  35. G. Peyré and M. Cuturi, "Computational optimal transport," Foundations and Trends in Machine Learning, vol. 11, pp. 355–607, 2019.

  36. Jesse van Oostrum, "Bures-Wasserstein geometry," arXiv:2001.08056, 2020.

  37. L. Malagò, L. Montrucchio, G. Pistone, Wasserstein Riemannian geometry of Gaussian densities. Inf. Geometry 1(2), 137–179 (2018)


  38. X. Pennec, "Manifold-valued image processing with SPD matrices," In Riemannian Geometric Statistics in Medical Image Analysis, pp 75–134. Elsevier, 2020.

  39. E. Massart, J.M. Hendrickx, P-A Absil, "Curvature of the manifold of fixed-rank positive-semidefinite matrices endowed with the Bures-Wasserstein metric," In International Conference on Geometric Science of Information, 2019.

  40. P.C. Álvarez-Esteban, E. Del Barrio, J.A. Cuesta-Albertos, C. Matrán, A fixed-point approach to barycenters in Wasserstein space. J. Math. Anal. Appl. 441(2), 744–762 (2016)


  41. P.-A. Absil, R. Mahony, R. Sepulchre, Optimization algorithms on matrix manifolds (Princeton University Press, Princeton, 2008)


  42. N. Boumal. "An introduction to optimization on smooth manifolds," Available online, May 2022.

  43. J. Hu, X. Liu, Z.W. Wen, Y.X. Yuan, A Brief Introduction to Manifold Optimization. J. Opera. Res. Soc. China 8, 199–248 (2020)


  44. J. Nocedal, S.J. Wright, Numerical Optimization (Springer Series in Operations Research and Financial Engineering, Springer, New York, 2006)


  45. L. Ningbo, D. Yunlong, W. Guoqing et al., Sea-detecting X-band radar and data acquisition program. J. Radars 8(5), 656–667 (2019)




This work is supported in part by the National Natural Science Foundation of China (Grant 6196607), in part by the Guangxi Natural Science Foundation (Grant 2020GXNSFAA159067), in part by the Fund of Key Laboratory of Cognitive Radio and Information Processing, Ministry of Education (Grant CRKL200105), and in part by the Middle-aged and Young Teachers' Basic Ability Promotion Project of Guangxi (No. 2020KY21021).

Author information




Z-ZH performed and wrote the manuscript and the data analyses, and designed computer programs and algorithms. LZ contributed to the conception of the study, helped perform the analysis with constructive discussions, and supervised and led research planning.

Corresponding author

Correspondence to Lin Zheng.

Ethics declarations

Competing interests

The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit


About this article


Cite this article

Huang, Z., Zheng, L. Target detection based on generalized Bures–Wasserstein distance. EURASIP J. Adv. Signal Process. 2023, 126 (2023).
