
The upper bound of multi-source DOA information in sensor array and its application in performance evaluation

Abstract

Direction of arrival (DOA) estimation has been discussed extensively in the array signal processing field. In this paper, the authors focus on multi-source DOA information, which is defined as the mutual information between the DOA and the received signal contaminated by complex additive white Gaussian noise. A theoretical expression of DOA information with multiple sources is derived for the uniform linear array. At high SNR and under the sparse-source assumption, the upper bound of the DOA information contained in K sparse sources is obtained; it can be regarded as the sum of all single-source information minus the uncertainty of the sources’ order, logK!. Moreover, because of the uncertainty of the multi-source order, the posterior probability distribution of DOA no longer obeys a single-peak Gaussian distribution, so the mean square error is unsuitable for evaluating the performance of multi-dimensional parameter estimation. Consequently, entropy error (EE) is used as a new performance evaluation metric, and its relationship with DOA information is given.

Introduction

Sensor arrays, e.g., in radar, sonar, and wireless communication, have long been used to locate far-field sources and estimate their parameters in various fields [1]. The problem of DOA estimation of multiple sources has been an active research area for decades [2–4]. Many high-resolution direction of arrival (DOA) estimation algorithms, e.g., multiple signal classification (MUSIC) [5, 6], maximum likelihood (ML) [7], and estimation of signal parameters via rotational invariance techniques (ESPRIT) [8], are designed to determine the DOAs of multiple narrowband non-coherent signals. The question that follows is how to evaluate these algorithms.

Accordingly, the estimation performance and error analysis of DOA estimation algorithms have also been studied widely [9–13]. Mean square error (MSE) is usually used to evaluate system performance, and the Cramer-Rao bound (CRB) provides a fundamental physical limit on estimation accuracy. Stoica and Nehorai [14] introduced stochastic and deterministic signal models and derived general expressions for the corresponding CRBs in the multi-source case. Comparisons of MUSIC, ML, and the CRB are presented in [15, 16]. However, the CRB is not a tight bound on the MSE in the low-SNR region [17]. When the received signal is given, the probability distribution of DOA no longer obeys a Gaussian distribution at low SNR. Thus, using MSE, a second-order statistic, to evaluate the estimation results of an actual algorithm is insufficient when the SNR is low. In this paper, we use information theory to define a new performance evaluation metric for multi-source DOA estimation algorithms.

Information theory [18] was proposed by Shannon in 1948 and plays a fundamental role in information transmission, channel coding, data compression, etc. Like the communication system, the radar system and the sensor array system are both information acquisition systems. Woodward and Davies [19, 20] utilized mutual information to investigate the measurement of a target’s range. Xu [21] also employed the ideas and methodologies of Shannon’s information theory to systematically establish an information theory for radar systems in the presence of complex Gaussian noise. However, existing investigations based on Shannon’s information theory for DOA estimation mainly focus on the enumeration of source signals. Wax [22, 23] introduced information-theoretic criteria into the problem of signal detection and proposed methods to estimate the number of sources. To the best of our knowledge, only a few researchers have employed information theory to address the performance analysis of DOA estimation. Xu and Yan [24] studied the spatial state estimation process with a sensor array from the perspective of information theory and quantified the information obtained from the sensor array. In their study, the upper bound of DOA information in the single-source scenario is derived. Furthermore, the entropy error (EE) is defined to measure estimation performance, and the relationship between EE, MSE, and CRB is presented. However, their research does not cover the multi-source scenario. In this paper, the study of DOA information is extended to the multi-source scenario.

The remainder of this paper is organized as follows. In Section 2, we review DOA information theory, including the system model and the definition of DOA information. Then, a theoretical expression of DOA information in the multi-source scenario is obtained. At high SNR and under the sparse-source assumption, the upper bound of the DOA information contained in K sparse sources is derived; it can be regarded as the sum of all single-source information minus the uncertainty of the sources’ order, logK!. Moreover, the expression of EE and its lower bound (EEB) in the multi-source scenario is obtained. We present simulation comparisons and discuss the results in Section 3: the upper bound of DOA information is compared with the single-source DOA information, and EE, EEB, MSE, and CRB are compared in the dual-source case. Section 4 concludes the paper.

Notation: We use lower-case letters to signify variables, upper-case letters to denote random variables, bold italic lower-case letters to signify column vectors, and bold italic upper-case letters to denote random column vectors. Superscripts {·}T and {·}H denote the transpose and the complex conjugate transpose of a matrix, respectively. We use \(\overline {\left \{ \cdot \right \}}\) to stand for the mean value operator and E{·} for the expectation operator. \(\mathcal {R}\left \{ \cdot \right \}\) stands for the real part of a complex number and \(\mathcal {I}\left \{ \cdot \right \}\) for the imaginary part. |a| denotes the modulus of a. ∗ denotes the Hadamard product. a∝b stands for a proportional relationship and a≃b denotes approximate equality.

Methods

System model

Suppose that there are K narrowband far-field sources impinging on a uniform linear antenna array with M elements, as shown in Fig. 1. The received signal at the mth array element is given by

$$ {x_{m}}\left(t \right) = \sum\limits_{k = 1}^{K} {{s_{k}}\left(t \right)} {e^{j{\omega_{0}}{\tau_{m}}\left({{\theta_{k}}} \right)}} + {w_{m}}\left(t \right) $$
(1)
Fig. 1

System model

where \({s_{k}}\left (t \right) = {\alpha }_{k} {e^{j{\varphi }_{k}}}\) denotes the kth (k=1,2,…,K) source signal. The source signal’s amplitude αk is constant and its phase φk is random. ω0 is the angular frequency of the carrier signal. wm(t) stands for the complex additive white Gaussian noise (CAWGN) at the mth array element, and the noise at different array elements is independent. τm(θk) represents the time delay of the kth source signal with DOA θk at the mth array element. Supposing the distance between any two adjacent elements of the uniform linear array is d, the time delay τm(θk) can be expressed as τm(θk)=md sinθk/v, where v is the propagation velocity of the signal.

Constructing a matrix equation based on (1), we have

$$ {\boldsymbol{X}}\left(t \right) = {\boldsymbol{A}}\left({\boldsymbol{\theta }} \right){\boldsymbol{S}}\left(t \right) + {\boldsymbol{W}}\left(t \right) $$
(2)

where

$$ {\boldsymbol{X}}\left(t \right) = {\left[ {{x_{1}}\left(t \right){\mathrm{ }}{x_{2}}\left(t \right){\mathrm{ }} \cdots {\mathrm{ }}{x_{M}}\left(t \right)} \right]^{\mathrm{T}}}, $$
(3)
$$ {\boldsymbol{S}}\left(t \right) = {\left[ {{s_{1}}\left(t \right){\mathrm{ }}{s_{2}}\left(t \right){\mathrm{ }} \cdots {\mathrm{ }}{{\mathrm{s}}_{K}}\left(t \right)} \right]^{\mathrm{T}}}, $$
(4)
$$ {\boldsymbol{W}}\left(t \right) = {\left[ {{w_{1}}\left(t \right){\mathrm{ }}{w_{2}}\left(t \right){\mathrm{ }} \cdots {\mathrm{ }}{w_{M}}\left(t \right)} \right]^{\mathrm{T}}}, $$
(5)
$$ {\boldsymbol{A}}\left(\boldsymbol{\theta} \right) = \left[ {{\boldsymbol{a}}\left({{\theta_{1}}} \right){\mathrm{ }}{\boldsymbol{a}}\left({{\theta_{2}}} \right){\mathrm{ }} \cdots {\mathrm{ }}{\boldsymbol{a}}\left({{\theta_{K}}} \right)} \right], $$
(6)

in which

$$ {\boldsymbol{a}}\left({{\theta_{k}}} \right) = {\left[ {{e^{j{\omega_{0}}{\tau_{1}}\left({{\theta_{k}}} \right)}}{\mathrm{ }}{e^{j{\omega_{0}}{\tau_{2}}\left({{\theta_{k}}} \right)}}{\mathrm{ }} \cdots {\mathrm{ }}{e^{j{\omega_{0}}{\tau_{M}}\left({{\theta_{k}}} \right)}}} \right]^{\mathrm{T}}} $$
(7)

where a(θk) is the so-called transfer vector between the kth source and the received signal.

Considering a single-snapshot scenario and omitting the time index t, we can rewrite (2) as

$$ {\boldsymbol{X}} = {\boldsymbol{A}}\left({\boldsymbol{\theta }} \right){\boldsymbol{S}} + {\boldsymbol{W}} $$
(8)
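The single-snapshot model (8) can be sketched numerically. The following is a minimal illustration, not the authors’ code; the array size, spacing d/λ = 1/2, DOAs, and noise level are all assumptions chosen for the example.

```python
import numpy as np

# Illustrative parameters (assumptions, not from the paper):
# M = 16 half-wavelength-spaced elements, K = 2 unit-amplitude sources.
rng = np.random.default_rng(0)
M, K = 16, 2
d_over_lambda = 0.5                       # element spacing d / wavelength
theta = np.deg2rad([-20.0, 35.0])         # true DOAs theta_k

# Steering matrix A(theta): a(theta_k)[m] = exp(j*2*pi*(d/lambda)*m*sin(theta_k)),
# i.e., omega_0 * tau_m(theta_k) with tau_m = m*d*sin(theta_k)/v.
m_idx = np.arange(M)[:, None]
A = np.exp(1j * 2 * np.pi * d_over_lambda * m_idx * np.sin(theta)[None, :])

# Source vector: s_k = alpha_k * exp(j*phi_k), constant amplitude, random phase.
alpha = np.ones(K)
phi = rng.uniform(0, 2 * np.pi, K)
s = alpha * np.exp(1j * phi)

# CAWGN with power spectral density N0 (circularly symmetric complex Gaussian).
N0 = 0.1
w = np.sqrt(N0 / 2) * (rng.standard_normal(M) + 1j * rng.standard_normal(M))

x = A @ s + w                             # single-snapshot model (8)
```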

Assuming that each source’s DOA obeys a uniform distribution within the observation interval \({\mathcal Q} = \left [ { - {\left \| \varTheta \right \| }}/2,{ {\left \| \varTheta \right \| }}/2 \right ]\), where ‖Θ‖ is the width of the observation interval, the prior probability density function (PDF) of Θ is given by

$$ p({\boldsymbol{\theta }}) = \prod\limits_{k = 1}^{K} {p({{\theta}_{k}})} = {\left(\frac{1}{{\left\| {\varTheta} \right\|}}\right)^{K}} $$
(9)

When the carrier frequency is very high, a small change in time delay leads to a large change in phase. Therefore, Φ is regarded as a random vector uniformly distributed on the interval [0,2π], so the prior PDF of Φ is given by

$$ p({\boldsymbol{\varphi }}) = \prod\limits_{k = 1}^{K} {p({\varphi_{k}})} = {(1/{\mathrm{2}}\pi)^{K}} $$
(10)

Next, note that the noise is CAWGN and obeys

$$ \begin{array}{l} {E}\left[ {{\boldsymbol{W}}{{\boldsymbol{W}}^{\mathrm{H}}}} \right] = {N_{0}}I\\ {E}\left[ {{\boldsymbol{W}}{{\boldsymbol{W}}^{\mathrm{T}}}} \right] = 0 \end{array} $$
(11)

where I is the identity matrix and E{·} denotes the expectation. N0 is the power spectral density of the noise, which represents the noise power when the bandwidth is normalized. Then, we define the signal-to-noise ratio of the kth source as

$$ {\rho_{k}}^{2} = {{\mathrm{E}\left\{ {{\alpha_{k}}^{2}}\right\}}}/{{{N_{0}}}} $$
(12)

where αk2 is the power of the kth source.

We will derive the expression of DOA information in the following section.

DOA information

In this section, we provide the theoretical expression of the DOA information, defined as the mutual information between the DOA and the received signal, i.e., I(X;Θ). We suppose the actual value of the DOA is θ0=[θ10,θ20,…,θK0]T. Considering CAWGN, the multi-dimensional PDF of X conditioned on Θ and Φ is given by (13).

$$ \begin{aligned} p\left({{\boldsymbol{x}}\left| {{\boldsymbol{\theta }},{\boldsymbol{\varphi }}} \right.} \right) &= {\left({\frac{1}{{\pi {N_{0}}}}} \right)^{M}}\exp \left[ { - \frac{1}{{{N_{0}}}}{{\left({{\boldsymbol{x}} - {\boldsymbol{A}}\left({\boldsymbol{\theta }} \right){\boldsymbol{s}}} \right)}^{\mathrm{H}}}\left({{\boldsymbol{x}} - {\boldsymbol{A}}\left({\boldsymbol{\theta }} \right){\boldsymbol{s}}} \right)} \right]\\ &= {\left({\frac{1}{{\pi {N_{0}}}}} \right)^{M}}\exp \left[ { - \frac{1}{{{N_{0}}}}\left({{{\boldsymbol{x}}^{\mathrm{H}}}{\boldsymbol{x}} - 2{\mathcal R}\left({{{\boldsymbol{s}}^{\mathrm{H}}}{{\boldsymbol{A}}^{\mathrm{H}}}\left({\boldsymbol{\theta }} \right){\boldsymbol{x}}} \right) + {{\boldsymbol{s}}^{\mathrm{H}}}{{\boldsymbol{A}}^{\mathrm{H}}}\left({\boldsymbol{\theta }} \right){\boldsymbol{A}}\left({\boldsymbol{\theta }} \right){\boldsymbol{s}}} \right)} \right] \end{aligned} $$
(13)

The joint probability density of X and Θ conditioned on Φ is given by

$$ p\left({{\boldsymbol{x}},{\boldsymbol{\theta }}\left| {\boldsymbol{\varphi }} \right.} \right) = p\left({{\boldsymbol{x}}\left| {{\boldsymbol{\theta }},{\boldsymbol{\varphi }}} \right.} \right)p({\boldsymbol{\theta }}) $$
(14)

Then, the joint probability density of X and Θ can be derived as

$$ p\left({{\boldsymbol{x}},{\boldsymbol{\theta }}} \right) = \oint {p\left({{\boldsymbol{x}},{\boldsymbol{\theta }}\left| {\boldsymbol{\varphi }} \right.} \right)p\left({\boldsymbol{\varphi }} \right)} \mathrm{d}{\boldsymbol{\varphi }} $$
(15)

Consequently, the probability density of Θ conditioned on X is given by

$$ p({\boldsymbol{\theta }}|{\boldsymbol{x}}) = \frac{{\oint {p\left({{\boldsymbol{x}}\left| {{\boldsymbol{\theta }},{\boldsymbol{\varphi }}} \right.} \right)p({\boldsymbol{\theta }})p({\boldsymbol{\varphi }})\mathrm{d}{\boldsymbol{\varphi }}} }}{{\int_{\boldsymbol{\mathcal{Q}}} {\oint {p\left({{\boldsymbol{x}}\left| {{\boldsymbol{\theta }},{\boldsymbol{\varphi }}} \right.} \right)p({\boldsymbol{\theta }})p({\boldsymbol{\varphi }})\mathrm{d}{\boldsymbol{\varphi }}} \mathrm{d}{\boldsymbol{\theta }}} }} $$
(16)

By omitting the terms independent of Θ, this expression can be simplified to

$$ p({\boldsymbol{\theta }}|{\boldsymbol{x}}){\mathrm{ = }}\frac{{\oint {g\left({{\boldsymbol{x}},{\boldsymbol{\theta }},{\boldsymbol{\varphi }}} \right)\mathrm{d}{\boldsymbol{\varphi }}} }}{{\int_{\boldsymbol{\mathcal{Q}}} {\oint {g\left({{\boldsymbol{x}},{\boldsymbol{\theta }},{\boldsymbol{\varphi }}} \right)\mathrm{d}{\boldsymbol{\varphi }}} \mathrm{d}{\boldsymbol{\theta }}} }} $$
(17)

where g(x,θ,φ) is given by

$$ \begin{aligned} &p\left({{\boldsymbol{x}}\left| {{\boldsymbol{\theta }},{\boldsymbol{\varphi }}} \right.} \right) \propto g\left({{\boldsymbol{x}},{\boldsymbol{\theta }},{\boldsymbol{\varphi }}} \right) \\ &= {\exp \left[ {\frac{2}{{{N_{0}}}}{\mathcal R}\left({{{\boldsymbol{s}}^{\mathrm{H}}}{{\boldsymbol{A}}^{\mathrm{H}}}\left({\boldsymbol{\theta }} \right){\boldsymbol{x}}} \right)} { - \frac{1}{{{N_{0}}}}{{\boldsymbol{s}}^{\mathrm{H}}}{{\boldsymbol{A}}^{\mathrm{H}}}\left({\boldsymbol{\theta }} \right){\boldsymbol{A}}\left({\boldsymbol{\theta }} \right){\boldsymbol{s}}} \right]} \end{aligned} $$
(18)

Since the posterior probability density of Θ is given, the quantity of DOA information obtained from the multiple sources is the difference between the prior entropy and the conditional entropy of Θ, i.e.,

$$ \begin{array}{l} I\left({{\boldsymbol{X}};{\boldsymbol{\varTheta }}} \right) = h\left({\boldsymbol{\varTheta }} \right) - h\left({{\boldsymbol{\varTheta }}|{\boldsymbol{X}}} \right)\\ = K\log \left\| {{\varTheta }} \right\| + E\left[ {\int_{\boldsymbol{\mathcal{Q}}} {p({\boldsymbol{\theta }}|{\boldsymbol{x}})\log p({\boldsymbol{\theta }}|{\boldsymbol{x}})\mathrm{d}{\boldsymbol{\theta }}}} \right] \end{array} $$
(19)

where h(Θ) denotes the prior entropy of Θ and h(Θ|X) denotes the conditional entropy of Θ when X is obtained. Clearly, the DOA information is algorithm-independent. It therefore provides a performance bound for any algorithm, which is of significant theoretical value.

Upper bound of DOA information

The upper bound of DOA information in the single-source scenario was obtained in previous papers. In this section, we use reasonable assumptions and approximation methods to derive the upper bound of DOA information in the multi-source scenario.

Posterior PDF of sparse multi-source

Obviously, when the DOAs of the sources are close to each other, part of the DOA information is lost because of interference between the sources. Therefore, to obtain the maximum DOA information, we suppose there are K (K≪M) independent sources with large spacing between any two of them, so that this interference is avoided; this is the sparse-source assumption.

Similar to the single-source scenario, p(θ|x) presents a Gaussian-like distribution centered on the actual source location θ0. Thus, we derive p(θ|x) in the neighborhood of θ0.

Clearly, in the case of multi-source, we have

$$ {{\boldsymbol{A}}^{\mathrm{H}}}\left({\boldsymbol{\theta }} \right){\boldsymbol{A}}\left({\boldsymbol{\theta }} \right){\mathrm{ = }}\left[ {\begin{array}{*{20}{c}} {{{\boldsymbol{a}}_{1}}^{\mathrm{H}}{{\boldsymbol{a}}_{1}}}&{{{\boldsymbol{a}}_{1}}^{\mathrm{H}}{{\boldsymbol{a}}_{2}}}& \cdots &{{{\boldsymbol{a}}_{1}}^{\mathrm{H}}{{\boldsymbol{a}}_{K}}}\\ {{{\boldsymbol{a}}_{2}}^{\mathrm{H}}{{\boldsymbol{a}}_{1}}}&{{{\boldsymbol{a}}_{2}}^{\mathrm{H}}{{\boldsymbol{a}}_{2}}}& \cdots &{{{\boldsymbol{a}}_{2}}^{\mathrm{H}}{{\boldsymbol{a}}_{K}}}\\ \vdots & \vdots & \ddots & \vdots \\ {{{\boldsymbol{a}}_{K}}^{\mathrm{H}}{{\boldsymbol{a}}_{1}}}&{{{\boldsymbol{a}}_{K}}^{\mathrm{H}}{{\boldsymbol{a}}_{2}}}& \cdots &{{{\boldsymbol{a}}_{K}}^{\mathrm{H}}{{\boldsymbol{a}}_{K}}} \end{array}} \right] $$
(20)

where

$$ \begin{aligned} {{{\boldsymbol{a}}_{i}}^{\mathrm{H}}{{\boldsymbol{a}}_{j}}} &= {\boldsymbol{a}}^{\mathrm{H}}{\left({{\theta_{i}}} \right)}{\boldsymbol{a}}\left({{\theta_{j}}} \right)\\ &= {e^{- j{\frac{\left({M - 1} \right)\left(\beta_{i} - \beta_{j}\right) }{2}}}}\frac{{\sin \left({{\frac{M\left(\beta_{i} - \beta_{j}\right)}{2}}} \right)}}{{\sin \left({\frac{\left(\beta_{i} - \beta_{j}\right)}{2}} \right)}} \end{aligned} $$
(21)

where βi=2πd sinθi/λ, βj=2πd sinθj/λ, and λ is the wavelength of the signal.

Notice that when i=j, aH(θi)a(θj)=M. Furthermore, (21) behaves like the sinc function, whose side lobes are quite small compared with the main lobe. Based on the sparse-source assumption, we have aH(θi)a(θj)≪M (i≠j) when θ is in the neighborhood of θ0, i.e., θk∈U(θk0,δ), k=1,2,…,K.
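Both the closed form (21) and the sparse-source approximation can be checked numerically. This is a minimal sketch; M, d/λ, and the two angles are illustrative assumptions.

```python
import numpy as np

M = 32
d_over_lambda = 0.5

def steer(theta):
    # a(theta)[m] = exp(j * m * beta), with beta = 2*pi*(d/lambda)*sin(theta)
    return np.exp(1j * 2*np.pi*d_over_lambda * np.arange(M) * np.sin(theta))

def closed_form(theta_i, theta_j):
    # Right-hand side of (21): a Dirichlet-kernel shape in beta_i - beta_j.
    db = 2*np.pi*d_over_lambda * (np.sin(theta_i) - np.sin(theta_j))
    if abs(db) < 1e-12:
        return complex(M)
    return np.exp(-1j*(M - 1)*db/2) * np.sin(M*db/2) / np.sin(db/2)

# Two well-separated DOAs (assumed values for illustration)
ti, tj = np.deg2rad(-20.0), np.deg2rad(35.0)
direct = np.vdot(steer(ti), steer(tj))    # a(theta_i)^H a(theta_j) directly
closed = closed_form(ti, tj)              # via (21)
```

For these angles the inner product sits deep in the side lobes, so its magnitude is a small fraction of the main-lobe value M, as the sparse-source assumption requires.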

Therefore, (20) can be approximated as

$$ {{\boldsymbol{A}}^{\mathrm{H}}}\left({\boldsymbol{\theta }} \right){\boldsymbol{A}} \left({\boldsymbol{\theta }}\right) \simeq M \cdot I $$
(22)

it follows that

$$ {\frac{1}{{{N_{0}}}}{{\boldsymbol{s}}^{\mathrm{H}}}{{\boldsymbol{A}}^{\mathrm{H}}}\left({\boldsymbol{\theta }} \right){\boldsymbol{A}}\left({\boldsymbol{\theta }} \right){\boldsymbol{s}}} \simeq {\frac{M}{{{N_{0}}}}\sum\limits_{k = 1}^{K} {\alpha_{k}^{2}} } $$
(23)

Substituting (23) in (18) results in

$$ g\left({{\boldsymbol{x}},{\boldsymbol{\theta }},{\boldsymbol{\varphi }}} \right) \propto {\exp \left({\frac{2}{{{N_{0}}}}{\mathcal R}\left({{{\boldsymbol{s}}^{\mathrm{H}}}{{\boldsymbol{A}}^{\mathrm{H}}}\left({\boldsymbol{\theta }} \right){\boldsymbol{x}}} \right)} \right)} $$
(24)

In addition, for the actual received signal, we have

$$ {\boldsymbol{x}} = {\boldsymbol{A}}({{\boldsymbol{\theta }}_{0}}){{\boldsymbol{s}}_{0}} + {{\boldsymbol{w}}_{0}} $$
(25)

where θ0 is the actual value of DOA, and

$$\begin{array}{*{20}l} &{{\boldsymbol{s}}_{0}} = \left[ {{\alpha_{1} e^{j{\varphi_{10}}}},{\alpha_{2} e^{j{\varphi_{20}}}}, \cdots,{\alpha_{K} e^{j{\varphi_{K0}}}}} \right]^{\mathrm{T}}\\ &{\boldsymbol{w}}_{0} = [w_{1},w_{2},\cdots,w_{M}]^{\mathrm{T}} \end{array} $$

Substituting the received signal x into sHAH(θ)x results in

$$ {{\boldsymbol{s}}^{\mathrm{H}}}{{\boldsymbol{A}}^{\mathrm{H}}}\left({\boldsymbol{\theta }} \right){\boldsymbol{x}} = {{\boldsymbol{s}}^{\mathrm{H}}}{{\boldsymbol{A}}^{\mathrm{H}}}\left({\boldsymbol{\theta }} \right){\boldsymbol{A}}({{\boldsymbol{\theta }}_{0}}){{\boldsymbol{s}}_{0}} + {{\boldsymbol{s}}^{\mathrm{H}}}{{\boldsymbol{A}}^{\mathrm{H}}}\left({\boldsymbol{\theta }} \right){{\boldsymbol{w}}_{0}} $$
(26)

As in (20), AH(θ)A(θ0) can be approximated when θ is in the neighborhood of θ0. In this case, aH(θk)a(θk0) is the only element left in the kth row and the rest are approximated by 0, i.e.,

$$ {{\boldsymbol{A}}^{\mathrm{H}}}\left({\boldsymbol{\theta }} \right){\boldsymbol{A}}({{\boldsymbol{\theta }}_{0}}) \simeq {{\boldsymbol{A}}^{\mathrm{H}}}\left({\boldsymbol{\theta }} \right){\boldsymbol{A}}({{\boldsymbol{\theta }}_{0}}) * I $$
(27)

where ∗ denotes the Hadamard product. Moreover, suppose that the signal amplitude of each source is equal, i.e., αk=α. It follows that (see (28)).

$$ \begin{array}{l} \oint {\exp \left({\frac{2}{{{N_{0}}}}{\mathcal R}\left({{{\boldsymbol{s}}^{\mathrm{H}}}{{\boldsymbol{A}}^{\mathrm{H}}}\left({\boldsymbol{\theta }} \right){\boldsymbol{x}}} \right)} \right)\mathrm{d}{\boldsymbol{\varphi }}} \\ \simeq \oint {\exp \left({\frac{2}{{{N_{0}}}}{\mathcal R}\left({{\boldsymbol{s}}^{\mathrm{H}}}\left({{\boldsymbol{A}}^{\mathrm{H}}}\left({\boldsymbol{\theta }} \right){\boldsymbol{A}}({{\boldsymbol{\theta }}_{0}}) * I\right){{\boldsymbol{s}}_{0}} + {{\boldsymbol{s}}^{\mathrm{H}}}{{\boldsymbol{A}}^{\mathrm{H}}}\left({\boldsymbol{\theta }} \right){{\boldsymbol{w}}_{0}} \right)} \right)\mathrm{d}{\boldsymbol{\varphi }}} \\ \simeq \int_{0}^{2\pi} { \cdots \int_{0}^{2\pi} {\!\exp\! \left(\! {\frac{{2 }}{{{N_{0}}}}{\mathcal R}\!\left({\sum\limits_{k = 1}^{K} {\alpha {e^{- j{\varphi_{k}}}}\sum\limits_{m = 0}^{M - 1} {\!\left({{e^{- j{\omega_{0}}{\tau_{m}}\left({{\theta_{k}}} \right)}}\left({{\alpha} {e^{j{\varphi_{k0}}}}{e^{j{\omega_{0}}{\tau_{m}}\left({{\theta_{k0}}} \right)}} + {w_{m}}} \right)}\! \right)}} } \!\right)} \!\right)\!\mathrm{d}{\varphi_{1}} \cdots} } \mathrm{d}{\varphi_{K}}\\ \simeq \prod\limits_{k = 1}^{K} {\int_{0}^{2\pi} {\exp \left({\frac{2{\alpha^{2} }}{{{N_{0}}}}{\mathcal R}\left({{e^{- j {{\varphi^{\prime}_{k}}} }}\sum\limits_{m = 0}^{M - 1} {\left({{e^{- j{\omega_{0}}{\tau_{m}}\left({{\theta_{k}}} \right)}}\left({{e^{j{\omega_{0}}{\tau_{m}}\left({{\theta_{k0}}} \right)}} +\frac{1}{\alpha} w^{\prime}_{m}} \right)} \right)}} \right)} \right)\mathrm{d}{\varphi^{\prime}_{k}}} } \end{array} $$
(28)

where \(w^{\prime }_{m} = {w_{m}}{e^{- j{\varphi _{k{\mathrm {0}}}}}}\), \({\varphi ^{\prime }_{k}} = {{\varphi _{k}} - {\varphi _{k{\mathrm {0}}}}}\). We further have

$$ \begin{array}{l} \oint {\exp \left({\frac{2}{{{N_{0}}}}{\mathcal R}\left({{{\boldsymbol{s}}^{\mathrm{H}}}{{\boldsymbol{A}}^{\mathrm{H}}}\left({\boldsymbol{\theta }} \right){\boldsymbol{x}}} \right)} \right)\mathrm{d}{\boldsymbol{\varphi }}} = \prod\limits_{k = 1}^{K} 2{\pi} {{I_{\mathrm{0}}}\left({\frac{{2{{\alpha^{2} }}}}{{{N_{0}}}}\left| {G\left({{\theta_{k}}} \right) + \frac{1}{\alpha}\xi({\theta_{k}},w)} \right|} \right)} \end{array} $$
(29)

where I0{·} is the zeroth-order modified Bessel function of the first kind [25], and

$$ \begin{aligned} G\left({{\theta_{k}}} \right)&= \sum\limits_{m = 0}^{M - 1} {{e^{- j{\omega_{0}}\left({{\tau_{m}}\left({{\theta_{k}}} \right) - {\tau_{m}}\left({{\theta_{k {\mathrm{0}}}}} \right)} \right)}}}\\ &= {e^{- j{\frac{\left({M - 1} \right)\left(\beta_{k} - \beta_{k0}\right) }{2}}}}\frac{{\sin \left({{\frac{M\left(\beta_{k} - \beta_{k0}\right)}{2}}} \right)}}{{\sin \left({\frac{\left(\beta_{k} - \beta_{k0}\right)}{2}} \right)}} \end{aligned} $$
(30)

where βk=2πd sinθk/λ and βk0=2πd sinθk0/λ. G(θk) can be regarded as the influence of the signal on the posterior probability density of Θ, and

$$ \xi({\theta_{k}},w) = \sum\limits_{m = 0}^{M - 1} {w{'_{m}}{e^{- j{\omega_{0}}{\tau_{m}}\left({{\theta_{k}}} \right)}}} $$
(31)

can be regarded as the influence of the noise on the posterior probability density of Θ.
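The phase integral that produces the Bessel function in (29) can be verified numerically: for any complex Z and c ≥ 0, (1/2π)∮ exp(c ℛ{Z e^(−jφ)}) dφ = I0(c|Z|). The sketch below also checks the asymptotic expansion of I0 used later in the derivation; the test values of c, Z, and x are arbitrary assumptions.

```python
import numpy as np

def bessel_i0(x, terms=80):
    # Power series I0(x) = sum_k (x/2)^(2k) / (k!)^2, accumulated recursively
    # to avoid overflowing the factorial.
    term, total = 1.0, 1.0
    for k in range(1, terms):
        term *= (x / 2)**2 / k**2
        total += term
    return total

# Identity behind (29), checked by quadrature on a uniform phase grid
# (the rectangle rule is spectrally accurate for smooth periodic integrands).
c, Z = 1.7, 2.0 - 1.5j
phi = np.linspace(0, 2*np.pi, 4096, endpoint=False)
lhs = np.mean(np.exp(c * np.real(Z * np.exp(-1j * phi))))
rhs = bessel_i0(c * abs(Z))

# Asymptotic expansion I0(x) ~ e^x / sqrt(2*pi*x) * (1 + 1/(8x)) at large x
x = 20.0
i0_asym = np.exp(x) / np.sqrt(2*np.pi*x) * (1 + 1/(8*x))
```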

Therefore, under the sparse-source condition, Eq. (17) can be rewritten as

$$ p({\boldsymbol{\theta }}|{\boldsymbol{x}}) \simeq \frac{\prod\limits_{k = 1}^{K} 2{\pi} {{I_{\mathrm{0}}}\left({\frac{{2{{\alpha^{2} }}}}{{{N_{0}}}}\left| {G\left({{\theta_{k}}} \right) + \frac{1}{\alpha}\xi({\theta_{k}},w)} \right|} \right)}}{{\int_{\boldsymbol{\mathcal{Q}}} { {\prod\limits_{k = 1}^{K} 2{\pi} {{I_{\mathrm{0}}}\left({\frac{{2{{\alpha^{2} }}}}{{{N_{0}}}}\left| {G\left({{\theta_{k}}} \right) + \frac{1}{\alpha}\xi({\theta_{k}},w)} \right|} \right)}} \mathrm{d}{\boldsymbol{\theta }}} }} $$
(32)

We know that in the single-source scenario, DOA information approaches an upper bound as the SNR increases, and the closed-form expression of this upper bound was derived under the high-SNR condition. We therefore follow the same condition to derive the upper bound of multi-source DOA information. Since the posterior PDF is composed of signal and noise components, at high SNR we can neglect the noise components to approximate p(θ|x) when θ is in the neighborhood of θ0. Moreover, p(θ|x) tends to 0 outside the neighborhood. Thus, we have

$$ p ({\boldsymbol{\theta }}|{\boldsymbol{x}}) \simeq \left\{ \begin{array}{l} {p'}({\boldsymbol{\theta }}|{\boldsymbol{x}}) \quad {\boldsymbol{\theta }}\in U({{{\boldsymbol{\theta }}_{\mathrm{0}}}},\boldsymbol{\delta});\\ 0 \qquad \qquad else. \end{array} \right. $$
(33)

in which

$$ {p'}({\boldsymbol{\theta }}|{\boldsymbol{x}}) \simeq \kappa {\prod\limits_{k = 1}^{K} 2{\pi} {{ I_{\mathrm{0}}}\left({\frac{{2{{\alpha^{2} }}}}{{{N_{0}}}}\left| {G\left({{\theta_{k}}} \right)} \right|} \right)} } $$
(34)

where κ is a normalizing constant.

To obtain the approximation of the DOA information, we approximate |G(θk)| by its second-order Taylor series expansion at θk=θk0; it follows that

$$ \left| {G\left({{\theta_{k}}} \right)} \right| \simeq M - \frac{1}{2}M{{\mathcal L}^{2}}{\cos^{2}}{\theta_{k0}}{\left({{\theta_{k}} - {\theta_{k0}}} \right)^{2}} $$
(35)

where \({{\mathcal {L}}^{2}} = {{\pi ^{2}}{L^{2}}}/3\) is the squared root-mean-square aperture width, L=Md/λ denotes the normalized aperture width, and cosθk0 is the direction cosine of the sensor array.
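The quadratic approximation (35) can be checked against the exact Dirichlet-kernel magnitude near the true DOA. The parameters below (M, d/λ, θk0, and the offset) are assumptions for the sketch.

```python
import numpy as np

M = 32
d_over_lambda = 0.5
theta0 = np.deg2rad(10.0)                 # assumed true DOA theta_k0

def G_mag(theta):
    # Exact |G(theta_k)| from (30): Dirichlet kernel in beta_k - beta_k0
    db = 2*np.pi*d_over_lambda * (np.sin(theta) - np.sin(theta0))
    if abs(db) < 1e-12:
        return float(M)
    return abs(np.sin(M*db/2) / np.sin(db/2))

L = M * d_over_lambda                     # normalized aperture width L = Md/lambda
L2 = np.pi**2 * L**2 / 3                  # squared RMS aperture width

def G_taylor(theta):
    # Quadratic approximation (35)
    return M - 0.5*M*L2*np.cos(theta0)**2 * (theta - theta0)**2

dtheta = 0.01                             # offset well inside the main lobe (rad)
exact = G_mag(theta0 + dtheta)
approx = G_taylor(theta0 + dtheta)
```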

Substituting (35) in (34) and using the expansion of the Bessel function

$$ {I_{0}}\left(x \right) \simeq \frac{{{e^{x}}}}{{\sqrt {2\pi x} }}\left\{ {1 + \frac{1}{{8x}} + o\left({\frac{1}{{{x^{2}}}}} \right)} \right\} $$
(36)

It follows that the approximation of (34) is given by

$$ \begin{aligned} &{p'}({\boldsymbol{\theta }}|{\boldsymbol{x}}) \simeq \prod\limits_{k = 1}^{K} {\frac{1}{{\sqrt {2\pi {\sigma_{k}}^{2}} }}} \exp \left({ - \frac{{{{\left({{\theta_{k}} - {\theta_{k0}}} \right)}^{2}}}}{{2{\sigma_{k}}^{2}}}} \right)\\ &= \frac{1}{\sqrt{{{\left({2\pi} \right)}^{K}}{{\left| {{C_{\boldsymbol{\theta }}}} \right|}}}}\exp \left({ - \frac{1}{2}{{\left({{\boldsymbol{\theta }} - {{\boldsymbol{\theta }}_{\mathrm{0}}}} \right)}^{\mathrm{H}}} C_{\boldsymbol{\theta }}^{- 1}\left({{\boldsymbol{\theta }} - {{\boldsymbol{\theta }}_{\mathrm{0}}}} \right)} \right) \end{aligned} $$
(37)

where \({\sigma _{k}}^{2} = {\left ({2M{\rho ^{2}}{{\mathcal L}^{2}}{{\cos }^{2}}{\theta _{k0}}} \right)^{- 1}}\), ρ2=α2/N0, and

$${C_{\boldsymbol{\theta }}} = \left[ {\begin{array}{*{20}{c}} {{\sigma_{1}}^{2}}&{}&{}\\ {}& \ddots &{}\\ {}&{}&{{\sigma_{K}}^{2}} \end{array}} \right]$$

is the covariance matrix of the K-dimensional Gaussian distribution.

Using the expression of p(θ|x), we can observe the posterior PDF through numerical calculation. Taking the dual-source scenario as an example, the actual value of the DOA is set as θ0=[θ10,θ20]T. As shown in Fig. 2, the posterior PDF presents a two-dimensional probability distribution with two peaks, located at θ=[θ10,θ20]T and θ=[θ20,θ10]T, respectively.

Fig. 2

The posterior PDF of DOA

Since the order of the sources is not determined and the K elements of θ0=[θ10,θ20,…,θK0]T have K! different permutations, the posterior probability distribution presents a K-dimensional distribution with K! peaks when there are K sources.
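The K! peaks can be reproduced by brute-force evaluation of the exact posterior (17)–(18) for a noiseless dual-source example, integrating over the phases on a grid. This is an illustrative sketch, not the authors’ simulation; M, N0, the DOAs, phases, and grid resolutions are all assumptions.

```python
import numpy as np

M, N0, alpha = 8, 0.5, 1.0
d_over_lambda = 0.5
theta0 = np.deg2rad([-25.0, 20.0])                  # true DOAs (on the grid below)
phi0 = np.array([0.7, 2.1])                         # true (unknown) source phases

m = np.arange(M)[:, None]
def steer(th):
    return np.exp(1j * 2*np.pi*d_over_lambda * m * np.sin(np.atleast_1d(th)))

x = steer(theta0) @ (alpha * np.exp(1j * phi0))     # noiseless received signal

tg = np.deg2rad(np.linspace(-60, 60, 49))           # theta grid (2.5 deg step)
pg = np.linspace(0, 2*np.pi, 36, endpoint=False)    # phase grid
P1, P2 = np.meshgrid(pg, pg, indexing="ij")

A = steer(tg)
U = alpha * (A.conj().T @ x)                        # alpha * a(theta)^H x
V = A.conj().T @ A                                  # a(theta_i)^H a(theta_j)

post = np.empty((tg.size, tg.size))
for i in range(tg.size):
    for j in range(tg.size):
        # Exponent of g(x, theta, phi) in (18) for theta = (tg[i], tg[j])
        expo = (2/N0)*np.real(np.exp(-1j*P1)*U[i] + np.exp(-1j*P2)*U[j]) \
             - (1/N0)*(2*M*alpha**2
                       + 2*alpha**2*np.real(np.exp(1j*(P2 - P1))*V[i, j]))
        post[i, j] = np.mean(np.exp(expo))          # integrate out the phases
post /= post.sum()

i1 = np.argmin(np.abs(tg - theta0[0]))              # grid index of theta_10
i2 = np.argmin(np.abs(tg - theta0[1]))              # grid index of theta_20
peak = np.unravel_index(post.argmax(), post.shape)
```

The resulting posterior is symmetric under swapping θ1 and θ2, peaks at the two permutations of θ0, and assigns negligible mass to the "both estimates on one source" point (θ10,θ10), consistent with Fig. 2.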

To facilitate further derivation, we introduce the concept of the permutation matrix. Let πl denote one of the permutations of [1,…,K], where l=1,2,…,K!. The permutation matrix of πl is written as \(P_{\pi _{l}}\). Then, the permutation of θ0 can be represented as \({P_{{\pi _{l}}}}{{\boldsymbol {\theta }}_{\mathrm {0}}}\).

According to the numerical results, the posterior probability mass is mainly located in the neighborhoods of \({P_{{\pi _{l}}}}{{\boldsymbol {\theta }}_{\mathrm {0}}}\). In each such neighborhood, \({{\boldsymbol {a}}{{\left ({{\theta _{k}}} \right)}^{\mathrm {H}}}{\boldsymbol {a}}\left ({{\theta _{\pi _{l}(k)0}}} \right)}\) is the only element left in the kth row and the rest are approximated by zero. Now, (27) can be rewritten as

$$ {{\boldsymbol{A}}^{\mathrm{H}}}\left({\boldsymbol{\theta }} \right){\boldsymbol{A}}({{\boldsymbol{\theta }}_{0}}) \simeq {{\boldsymbol{A}}^{\mathrm{H}}}\left({\boldsymbol{\theta }} \right){\boldsymbol{A}}({{P_{{\pi_{l}}}}{\boldsymbol{\theta }}_{0}}) * I $$
(38)

The subsequent derivation is the same as (28)–(37). Therefore, the corrected expression of the posterior PDF in the neighborhood of \({P_{{\pi _{l}}}}{{\boldsymbol {\theta }}_{\mathrm {0}}}\) is given by

$$ {p_{\pi_{l}}}({\boldsymbol{\theta }}|{\boldsymbol{x}}) \simeq \kappa' {p_{\pi_{l}}} $$
(39)

where κ′≈1/K! because p(θ|x) presents a K-dimensional probability distribution with K! identical peaks when αk=α. And

$$ \begin{array}{l} {p_{\pi_{l}}} = \frac{1}{\sqrt{{{\left({2\pi} \right)}^{K}}{{\left| {{C_{\boldsymbol{\theta }}}} \right|}}}}\exp \left({ - \frac{1}{2}{{\left({{\boldsymbol{\theta }} - {P_{{\pi_{l}}}}{{\boldsymbol{\theta }}_{\mathrm{0}}}} \right)}^{\mathrm{H}}} C_{\boldsymbol{\theta }_{l}}^{- 1}\left({{\boldsymbol{\theta }} - {P_{{\pi_{l}}}}{{\boldsymbol{\theta }}_{\mathrm{0}}}} \right)} \right) \\ \end{array} $$
(40)

is the PDF of a K-dimensional Gaussian distribution, in which

$$C_{\boldsymbol{\theta }_{l}}= \left[ {\begin{array}{*{20}{c}} {{\sigma_{{\pi_{l}}(1)}}^{2}}&{}&{}\\ {}& \ddots &{}\\ {}&{}&{{\sigma_{{\pi_{l}}(K)}}^{2}} \end{array}} \right] $$

is the covariance matrix.

Upper bound of DOA information

We now divide the domain of integration into K! domains centered on the peaks. The PDF in the neighborhood of each peak is given by (39). For convenience, we extend each integral domain to the whole domain when calculating the integrals; the error caused by this approximation is acceptable because the Gaussian distribution is close to zero outside the neighborhood of each peak. The calculation proceeds as

$$ \begin{aligned} &h\left({{\boldsymbol{\varTheta }}|{\boldsymbol{X}}} \right) = \sum\limits_{l = 1}^{K!} {\left({ - \int_{U({P_{{\pi_{l}}}}{{\boldsymbol{\theta }}_{\mathrm{0}}},\boldsymbol{\delta})} {\frac{1}{{K!}}{p_{\pi_{l}}}\log \frac{1}{{K!}}{p_{\pi_{l}}}\mathrm{d}{\boldsymbol{\theta }}}} \right)} \\ &\simeq K!\left({ - \frac{1}{{K!}}\int_{\boldsymbol{\mathcal{Q}}} {{p_{\pi l}}\log \frac{1}{{K!}}\mathrm{d}{\boldsymbol{\theta }}} - \frac{1}{{K!}}\int_{\boldsymbol{\mathcal{Q}}} {{p_{\pi_{l}}}\log {p_{\pi_{l}}}\mathrm{d}{\boldsymbol{\theta }}}} \right)\\ &\simeq \log K! - \int_{\boldsymbol{\mathcal{Q}}} {{p_{\pi_{l}}}\log {p_{\pi_{l}}}\mathrm{d}{\boldsymbol{\theta }}} \\ \end{aligned} $$
(41)

where

$$ \begin{aligned} &\int_{\boldsymbol{\mathcal{Q}}} {{p_{\pi_{l}}}\log {p_{\pi_{l}}}\mathrm{d}{\boldsymbol{\theta }}}\\ &= -\left(\frac{K}{2}\log\left({2\pi e} \right) + \frac{1}{2}{\mathrm{ }}\log\left| {{C_{{\boldsymbol{\theta }}_{l}}}} \right|\right)\\ &= - \frac{1}{2}{\mathrm{ }}\log\left({\prod\limits_{k = 1}^{K} {\frac{{\pi e}}{{M{\rho^{2}}{{\mathcal L}^{2}}{{\cos }^{2}}{\theta_{{\mathrm{k0}}}}}}}} \right) \end{aligned} $$
(42)
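The Gaussian entropy identity used in (42) can be verified by Monte Carlo: the differential entropy of a K-dimensional Gaussian with diagonal covariance C is (K/2)log(2πe) + (1/2)log|C|. The σk² values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
K = 2
sigma2 = np.array([0.02, 0.05])                       # assumed sigma_k^2 values
det_C = np.prod(sigma2)                               # |C_theta| (diagonal case)

# Closed form used in (42), expressed in bits
h_closed = 0.5*K*np.log2(2*np.pi*np.e) + 0.5*np.log2(det_C)

# Monte-Carlo estimate: h = -E[log2 p(theta)] over samples of the Gaussian
samples = rng.standard_normal((200_000, K)) * np.sqrt(sigma2)
log_p = (-0.5*np.sum(samples**2 / sigma2, axis=1)
         - 0.5*np.log((2*np.pi)**K * det_C))          # natural log of p
h_mc = -np.mean(log_p) / np.log(2)                    # nats -> bits
```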

Substituting (41) in (19), we obtain an approximation of the upper bound of the DOA information:

$$ \begin{array}{l} I\left({{\boldsymbol{X}};{\boldsymbol{\varTheta }}} \right) = h\left({\boldsymbol{\varTheta }} \right) - h\left({{\boldsymbol{\varTheta }}|{\boldsymbol{X}}} \right)\\ = K\log{\left\| {\varTheta} \right\|} - \log K! - \frac{1}{2}{\mathrm{ }}\log\left({\prod\limits_{k = 1}^{K} {\frac{{\pi e}}{{M{\rho^{2}}{{\mathcal L}^{2}}{{\cos }^{2}}{\theta_{{\mathrm{k0}}}}}}}} \right)\\ = \sum\limits_{k = 1}^{K} {\log\frac{{{\left\| {\varTheta} \right\|}\sqrt M \rho {\mathcal L}\cos {\theta_{{\mathrm{k0}}}}}}{{\sqrt {\pi e} }}} - \log K! \end{array} $$
(43)

where the first term of (43) is the sum of the DOA information of each single source and the second term is the loss of information due to the uncertainty of the sources’ order. This is our main result.
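The bound (43) is easy to evaluate, and it can be cross-checked against the entropy route I(X;Θ) = h(Θ) − h(Θ|X) using (41)–(42). The configuration below (M, K, SNR, ‖Θ‖, and the DOAs) is an illustrative assumption.

```python
import numpy as np
from math import factorial

M, K = 64, 2
d_over_lambda = 0.5
rho = 10**(20/20)                               # 20 dB SNR, rho = alpha/sqrt(N0)
interval = np.deg2rad(120.0)                    # observation width ||Theta||
theta0 = np.deg2rad([-30.0, 20.0])
Lcal2 = np.pi**2 * (M*d_over_lambda)**2 / 3     # squared RMS aperture width

# Closed form (43), in bits
single = np.log2(interval*np.sqrt(M)*rho*np.sqrt(Lcal2)*np.cos(theta0)
                 / np.sqrt(np.pi*np.e))
bound = single.sum() - np.log2(factorial(K))

# Same quantity via I = h(Theta) - h(Theta|X), using (41)-(42) with
# sigma_k^2 = (2 M rho^2 Lcal^2 cos^2(theta_k0))^(-1)
sigma2 = 1.0 / (2*M*rho**2*Lcal2*np.cos(theta0)**2)
h_cond = (np.log2(factorial(K)) + 0.5*K*np.log2(2*np.pi*np.e)
          + 0.5*np.log2(np.prod(sigma2)))
bound_alt = K*np.log2(interval) - h_cond
```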

Moreover, the upper bound of the DOA information of source signals with random amplitudes can be obtained by treating the SNR in (43) as random and taking the expectation of (43). For example, the upper bound of the DOA information for source signals with Rayleigh-distributed amplitudes is given by

$$ \begin{array}{l} I^{\prime}\left({{\boldsymbol{X}};{\boldsymbol{\varTheta }}} \right) = E\left\{ I\left({{\boldsymbol{X}};{\boldsymbol{\varTheta }}} \right) \right\} \\ =\sum\limits_{k = 1}^{K} {\log\frac{{{\left\| {\varTheta} \right\|}\sqrt M \rho {\mathcal L}\cos {\theta_{{\mathrm{k0}}}}}}{{\sqrt {\pi e} }}} - \frac{K \gamma }{{2\ln 2}} - \log K! \end{array} $$
(44)

where γ is the Euler-Mascheroni constant.
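The extra term −Kγ/(2 ln 2) in (44) is the per-source drop of E[log₂ρ] when the amplitude is Rayleigh-distributed with the same mean-square value as the constant-amplitude reference. A quick Monte Carlo check (an illustration under that normalization assumption, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Rayleigh amplitude normalized so that E[rho^2] = 1, matching the
# constant-amplitude reference (an assumption for this check)
rho = rng.rayleigh(scale=1 / np.sqrt(2), size=1_000_000)

# average per-source shift of the information in (43): E[log2 rho]
shift = np.log2(rho).mean()

gamma = 0.5772156649  # Euler-Mascheroni constant
print(shift, -gamma / (2 * np.log(2)))  # both approx -0.416
```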

Entropy error and MSE

We know that the conditional entropy h(Θ|X) represents the uncertainty of Θ when the received signal is given. As the SNR increases, the conditional entropy shrinks, indicating a more accurate estimation. In other words, the conditional entropy, or equivalently the DOA information (when the prior entropy of Θ is fixed), can serve as a performance metric for DOA estimation.

In the single-source case [24], EE is defined as the entropy power of the posterior probability distribution and measures the theoretical performance of DOA estimation. In this section, we discuss the relationship between EE and MSE in the multi-source case.

First, EE is defined as the entropy power of p(θ|x), which is given by

$$ \sigma_{\text{EE}}^{2} = \frac{{{2^{2 {h\left({{\boldsymbol{\varTheta }}\left| {\boldsymbol{X}} \right.} \right)} }}}}{\left(2 \pi e\right)^{K}}=\frac{1}{\left(2 \pi e\right)^{K}}\frac{{{{\left\| \varTheta \right\|}^{2K}}}}{{{2^{2I\left({{\boldsymbol{X}};{\boldsymbol{\varTheta }}} \right)}}}} $$
(45)

in the K-source case.

We can see from (45) that each additional bit of DOA information obtained by the sensor array halves the entropy deviation σEE.
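This halving follows directly from (45); a minimal numeric check (the information values and interval width below are illustrative, not taken from the paper):

```python
import numpy as np

def entropy_deviation(info_bits, K, interval_width):
    """Entropy deviation sigma_EE from eq. (45):
    sigma_EE^2 = ||Theta||^(2K) / ((2*pi*e)^K * 2^(2*I))."""
    var = interval_width ** (2 * K) / ((2 * np.pi * np.e) ** K
                                       * 2.0 ** (2 * info_bits))
    return np.sqrt(var)

# one extra bit of DOA information halves the entropy deviation
s10 = entropy_deviation(10.0, K=2, interval_width=np.deg2rad(40.0))
s11 = entropy_deviation(11.0, K=2, interval_width=np.deg2rad(40.0))
print(s11 / s10)  # 0.5
```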

In the previous section, we derived the conditional entropy of DOA. Since (41) is the approximation of the conditional entropy in the high SNR region, we can obtain an approximation of EE by substituting (41) into (45); it follows that EE’s lower bound (EEB) is given by

$$ \begin{aligned} EEB&= \frac{{{2^{2\left[ {\log K! + \frac{K}{2}\log\left({2\pi e} \right) + \frac{1}{2}\log\left| {{C_{\boldsymbol{\theta }}}} \right|}\right] }}}}{\left(2 \pi e\right)^{K}}\\ &= \left| {{C_{\boldsymbol{\theta }}}} \right| (K!)^{2} \end{aligned} $$
(46)

where K! reflects the uncertainty of sources’ order.

As mentioned in the introduction, MSE is usually used to evaluate the performance of DOA estimation algorithms. Xu and Yan [24] pointed out the limitations of MSE at medium and low SNRs. Here, we discuss the limitations of MSE in the multi-source case. The MSE of N DOA estimation trials for K sources is given by

$$ \sigma_{\text{MSE}}^{2} = \frac{1}{{KN}}\sum\limits_{n = 1}^{N} {\sum\limits_{k = 1}^{K} {{{\left({{{\hat \theta }_{kn}} - {\theta_{k0}}} \right)}^{2}}} } $$
(47)

which is calculated under the condition that the sources’ order is determined. It therefore applies only to one-dimensional matching multi-source DOA estimation.
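In simulation, "the determined sources' order" can be enforced by matching each estimate vector to the true DOAs over all K! permutations before applying (47). The helper below is a hypothetical convention for such simulations, not a procedure specified in the paper:

```python
import numpy as np
from itertools import permutations

def order_matched_mse(estimates, theta0):
    """MSE of eq. (47) with the sources' order fixed per trial by the
    best-matching permutation (a hypothetical convention for simulations).

    estimates : (N, K) array of DOA estimates over N trials
    theta0    : (K,) true DOAs
    """
    estimates = np.asarray(estimates, dtype=float)
    theta0 = np.asarray(theta0, dtype=float)
    N, K = estimates.shape
    total = 0.0
    for est in estimates:
        # resolve the K! order ambiguity before squaring the errors
        total += min(np.sum((est[list(p)] - theta0) ** 2)
                     for p in permutations(range(K)))
    return total / (K * N)

# estimates whose order is swapped between trials are still matched correctly
print(order_matched_mse([[5.1, -4.9], [-5.0, 5.2]], [-5.0, 5.0]))  # approx 0.015
```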

When multi-dimensional matching estimation is used to improve the angular resolution [26], the estimate of DOA \(\hat {\boldsymbol {\theta }}\) presents a K-dimensional probability distribution with K! peaks for K sources (similar to Fig. 2) because of the uncertainty of the sources’ order. Consequently, MSE is no longer applicable for evaluating the estimation performance.

The CRB provides the best accuracy achievable by any unbiased estimator of the signal parameters and thus a fundamental physical limit on estimation accuracy. The expression of the CRB in the multi-source scenario is given by [15]:

$$ CRB\left(\theta \right) = \frac{N_{0}}{2}{\left\{ {\mathcal{R}}\left[ {\operatorname{diag}{{({\boldsymbol{S}})}^{\mathrm{H}}}{{\boldsymbol{D}}^{\mathrm{H}}}\left({{\boldsymbol{I}} - {\boldsymbol{A}}{{\left({{{\boldsymbol{A}}^{\mathrm{H}}}{\boldsymbol{A}}} \right)}^{- 1}}{{\boldsymbol{A}}^{\mathrm{H}}}} \right){\boldsymbol{D}}\operatorname{diag}({\boldsymbol{S}})} \right] \right\}^{- 1}} $$
(48)

where

$$\begin{aligned} &{\boldsymbol{S}} = \left[\boldsymbol{s}_{1} \cdots \boldsymbol{s}_{K}\right]^{\mathrm{T}}\\ &{\boldsymbol{A}} = {\boldsymbol{A}}\left({\boldsymbol{\theta }} \right)\\ &{\boldsymbol{D}} = \left[ {{\boldsymbol{d}}\left({{\theta_{1}}} \right) \cdots {\boldsymbol{d}}\left({{\theta_{K}}} \right)} \right] \end{aligned}$$

recall that \({\boldsymbol{d}}(\theta) = \mathrm{d}{\boldsymbol{a}}(\theta)/\mathrm{d}\theta\).
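For a concrete sense of (48), the sketch below evaluates the CRB matrix for a half-wavelength uniform linear array with a single snapshot. The steering-vector convention a(θ) with elements e^{jπm sin θ} and the unit noise level are our assumptions, not taken from the paper:

```python
import numpy as np

def crb_multisource(theta, s, M, N0=1.0):
    """Multi-source CRB of eq. (48), sketched for a half-wavelength ULA
    with a single snapshot.

    theta : (K,) DOAs in radians
    s     : (K,) complex source amplitudes (S collapsed to one snapshot)
    M     : number of array elements
    N0    : noise level (assumed unit by default)
    """
    theta = np.asarray(theta, dtype=float)
    m = np.arange(M)[:, None]                       # element indices
    A = np.exp(1j * np.pi * m * np.sin(theta))      # steering matrix a(theta_k)
    D = 1j * np.pi * m * np.cos(theta) * A          # d(theta) = da/dtheta
    # projector onto the orthogonal complement of the signal subspace
    P = np.eye(M) - A @ np.linalg.solve(A.conj().T @ A, A.conj().T)
    Sd = np.diag(s)
    F = np.real(Sd.conj().T @ D.conj().T @ P @ D @ Sd)
    return (N0 / 2) * np.linalg.inv(F)
```

The diagonal entries bound the per-source variance; the determinant |CRB(θ)| is the scalar that enters (49).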

Similarly, as the theoretical lower bound of MSE, the CRB can serve as the lower bound of multi-source DOA estimation accuracy only in one-dimensional matching estimation. Moreover, the relationship between EEB and CRB is given by

$$ EEB \simeq \left| CRB\left(\theta \right) \right| (K!)^{2} $$
(49)

which is shown in Fig. 5.

Results and discussion

In this section, we provide numerical results to illustrate the theoretical results in the multi-source scenario with CAWGN. Taking the dual-source scenario as an example, we consider reflection coefficients α1=α2=1 and phases uniformly distributed over the interval [0,2π]. Since there are only two sources, the sparse-source assumption remains valid when the observation interval is small. To reduce the computation time, the observation interval of the DOA is set to [−20,20], with θ10 and θ20 located at −5 and 5, respectively. The number of array elements M is generally set to 32.

In addition, in the random source signal amplitude scenario, we consider reflection coefficients following a Rayleigh distribution. The other conditions are the same as in the constant-amplitude scenario.

Upper bound of DOA information

In this subsection, we present the simulation results of the DOA information and its upper bound in both the constant-amplitude and random-amplitude scenarios, shown in Figs. 3 and 4, respectively.

Fig. 3 DOA information in constant amplitudes scenario

Fig. 4 DOA information in random amplitudes scenario

Fig. 5 Comparison of EE, EEB, MSE, and CRB

The two figures show that the theoretical value of the DOA information matches the derived upper bound in the high SNR region, which confirms the correctness of the derivation. Numerically, the sum of the DOA information of the two single sources is 1 bit more than the joint DOA information obtained by the two-dimensional search. As explained for (43), this 1 bit loss of information is caused by the uncertainty of the sources’ order. We conclude from the simulation results that the DOA information of multiple independent sources can be computed as the sum of all single-source information minus the order uncertainty logK!.

Comparison of EE, EEB, MSE, and CRB

Next, we compare EE, EEB, MSE, and CRB for various SNRs through simulation to show their relationship in the dual-source scenario. The EE is calculated by substituting the conditional entropy obtained by simulation into (45), and EEB is obtained from (46). For comparison, we also simulate the ML algorithm. MSE is calculated by

$$ \sigma_{\text{MSE}}^{2} = \frac{1}{{KN}}\sum\limits_{n = 1}^{N} {\sum\limits_{k = 1}^{K} {{{\left({{{\hat \theta }_{kn}} - {\theta_{k0}}} \right)}^{2}}}} $$
(50)

under the condition that the sources’ order is determined. The empirical EE of the ML algorithm is obtained from the probability distribution of the DOA estimate \(\hat {\boldsymbol {\theta }}\). For ease of comparison, the CRB is further reduced to the scalar |CRB(θ)|.
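A plug-in way to obtain such an empirical EE (our illustrative sketch, not the paper's exact procedure) is to histogram the estimates, approximate the differential entropy, and convert it to entropy power as in (45):

```python
import numpy as np

def empirical_ee(estimates, bins=50):
    """Histogram-based (plug-in) estimate of the entropy error of eq. (45).

    estimates : (N, K) array of DOA estimates from repeated trials
    """
    estimates = np.asarray(estimates, dtype=float)
    N, K = estimates.shape
    counts, edges = np.histogramdd(estimates, bins=bins)
    p = counts.ravel() / N
    p = p[p > 0]
    vol = np.prod([e[1] - e[0] for e in edges])  # volume of one (uniform) cell
    h = -(p * np.log2(p)).sum() + np.log2(vol)   # differential entropy in bits
    return 2.0 ** (2 * h) / (2 * np.pi * np.e) ** K

# sanity check on a known distribution: for i.i.d. N(0, 1) samples in 2-D,
# the entropy power (and hence the EE) is 1
rng = np.random.default_rng(1)
print(empirical_ee(rng.normal(size=(200_000, 2))))  # approx 1.0
```

Unlike the order-matched MSE, this estimator needs no permutation matching: the K! peaks of the multi-modal posterior simply enter the histogram and enlarge the entropy, which is exactly the effect EE is meant to capture.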

According to the simulation results shown in Fig. 5, MSE decreases monotonically with increasing SNR and approaches the CRB in the high SNR region. Similarly, EE decreases monotonically with increasing SNR and approaches EEB. However, unlike the single-source scenario, EEB does not coincide with the CRB in the multi-source scenario; the difference is caused by the uncertainty of the sources’ order. The empirical EE of the ML algorithm is slightly larger than, but close to, the theoretical EE. Moreover, when the sources’ order cannot be determined, the CRB is unreachable and EEB is the more reasonable theoretical bound.

So far, we have pointed out two limitations of MSE:

1) The posterior probability distribution of Θ no longer obeys a Gaussian distribution at medium and low SNR, so MSE, as a second-order statistic, is invalid when the SNR is low [24]. This limitation persists in the multi-source scenario.

2) MSE cannot reflect the uncertainty of sources’ order in multi-dimensional matching multi-source DOA estimation.

EE avoids both of these limitations, which makes it a more suitable evaluation metric at medium and low SNRs and in the multi-source case.

Conclusions

One significant finding to emerge from this study is that the upper bound of the DOA information contained in K sparse sources can be regarded as the sum of all single-source information minus the order uncertainty logK!. The second major finding is that MSE is no longer applicable for evaluating the estimation performance in multi-dimensional matching multi-source DOA estimation. Specifically, owing to the uncertainty of the sources’ order, the estimate of DOA \(\hat {\boldsymbol {\theta }}\) presents a K-dimensional probability distribution with K! peaks when there are K sources. Consequently, entropy error (EE) is defined as a new performance evaluation metric, and its lower bound is given. In addition, EEB can be regarded as a generalized CRB that accounts for the sources’ order in the multi-source scenario.

The main conclusions of this paper are derived under the high-SNR and sparse-source assumptions. Nevertheless, they provide guidance for further study of multi-source DOA information and estimation performance evaluation in more general scenarios. Future work will extend the theory of DOA information to other scenarios.

Availability of data and materials

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Abbreviations

DOA:

Direction of arrival

EE:

Entropy error

MUSIC:

Multiple signal classification

ML:

Maximum-likelihood

ESPRIT:

Estimating signal parameter via rotational invariance techniques

MSE:

Mean square error

CRB:

Cramer-Rao bound

CAWGN:

Complex additive white Gaussian noise

EEB:

Entropy error’s lower bound

PDF:

Probability density function

References

  1. H. Krim, M. Viberg, Two decades of array signal processing research: the parametric approach. IEEE Signal Process. Mag. 13(4), 67–94 (1996).

  2. T. Ballal, C. J. Bleakley, in 2009 17th European Signal Processing Conference. DOA estimation of multiple sparse sources using three widely-spaced sensors (IEEE, Glasgow, 2009), pp. 1978–1982.

  3. Z. Xi, H. Yu, G. Huang, J. Lu, in Proceedings of 2011 IEEE CIE International Conference on Radar, vol. 2. DOA estimation of multiple sources based on multiset canonical correlation analysis (IEEE, Chengdu, 2011), pp. 1410–1413.

  4. M. Sekikawa, N. Hamada, in 2014 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS). DOA estimation of multiple sources using arbitrary microphone array configuration in the presence of spatial aliasing (IEEE, Kuching, 2014), pp. 80–83.

  5. R. O. Schmidt, Multiple emitter location and signal parameter estimation. IEEE Trans. Antennas Propag. 34(3), 276–280 (1986).

  6. L. Liu, J. Xu, G. Wang, X. Xia, Y. Gao, T. Long, in 2016 CIE International Conference on Radar (RADAR). An extended dimension MUSIC method for DOA estimation of multiple real-valued sources (IEEE, Guangzhou, 2016), pp. 1–5.

  7. P. Stoica, K. C. Sharman, Maximum likelihood methods for direction-of-arrival estimation. IEEE Trans. Acoust. Speech Signal Process. 38(7), 1132–1143 (1990). https://doi.org/10.1109/29.57542.

  8. R. Roy, T. Kailath, ESPRIT - estimation of signal parameters via rotational invariance techniques. IEEE Trans. Acoust. Speech Signal Process. 37(7), 984–995 (1989).

  9. A. N. Lemma, A. van der Veen, E. F. Deprettere, Analysis of joint angle-frequency estimation using ESPRIT. IEEE Trans. Signal Process. 51(5), 1264–1283 (2003). https://doi.org/10.1109/TSP.2003.810306.

  10. S. Marcos, A. Marsal, M. Benidir, in Proceedings of ICASSP '94, IEEE International Conference on Acoustics, Speech and Signal Processing, vol. 4. Performance analysis of the propagator method for source bearing estimation (1994), pp. 237–240. https://doi.org/10.1109/ICASSP.1994.389823.

  11. P. Stoica, T. Soderstrom, Statistical analysis of MUSIC and subspace rotation estimates of sinusoidal frequencies. IEEE Trans. Signal Process. 39(8), 1836–1847 (1991). https://doi.org/10.1109/78.91154.

  12. D. Khan, K. L. Bell, in 2010 IEEE Radar Conference. Analysis of DOA estimation performance of sparse linear arrays using the Ziv-Zakai bound (2010), pp. 746–751.

  13. Z. Jaafer, S. Goli, A. S. Elameer, in 2018 1st Annual International Conference on Information and Sciences (AiCIS). Best performance analysis of DOA estimation algorithms (2018), pp. 235–239.

  14. P. Stoica, A. Nehorai, Performance study of conditional and unconditional direction-of-arrival estimation. IEEE Trans. Acoust. Speech Signal Process. 38(10), 1783–1795 (1990).

  15. P. Stoica, A. Nehorai, MUSIC, maximum likelihood, and Cramer-Rao bound. IEEE Trans. Acoust. Speech Signal Process. 37(5), 720–741 (1989).

  16. P. Stoica, A. Nehorai, in International Conference on Acoustics, Speech, and Signal Processing. MUSIC, maximum likelihood and Cramer-Rao bound: further results and comparisons (1989), pp. 2605–2608, vol. 4. https://doi.org/10.1109/ICASSP.1989.267001.

  17. A. Matveyev, A. Gershman, J. Bohme, On the direction estimation Cramer-Rao bounds in the presence of uncorrelated unknown noise. Circ. Syst. Signal Process. 18(5), 479–487 (1999). https://doi.org/10.1007/BF0138746.

  18. C. E. Shannon, A mathematical theory of communication. Bell Labs Tech. J. 27(4), 379–423 (1948).

  19. P. M. Woodward, I. Davies, XCII. A theory of radar information. Lond. Edinb. Dublin Philos. Mag. J. Sci. 41(321), 1001–1017 (1950).

  20. P. M. Woodward, Information theory and the design of radar receivers. Proc. IRE 39(12), 1521–1524 (1951).

  21. S. Xu, D. Xu, H. Luo, in 2017 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT). Information theory of detection in radar systems (IEEE, Bilbao, 2017), pp. 249–254.

  22. M. Wax, T. Kailath, Detection of signals by information theoretic criteria. IEEE Trans. Acoust. Speech Signal Process. 33(2), 387–392 (1985).

  23. M. Wax, I. Ziskind, Detection of the number of coherent signals by the MDL principle. IEEE Trans. Acoust. Speech Signal Process. 37(8), 1190–1196 (1989).

  24. D. Xu, X. Yan, S. Xu, H. Luo, J. Liu, X. Zhang, Spatial information theory of sensor array and its application in performance evaluation. IET Commun. 13(15), 2304–2312 (2019). https://doi.org/10.1049/iet-com.2019.0355.

  25. I. S. Gradshteyn, I. M. Ryzhik, Table of Integrals, Series and Products. Math. Comput. 20(96), 1157–1160 (2007).

  26. Y. Zhou, D. Xu, W. Tu, C. Shi, Spatial information and angular resolution of sensor array. Signal Process. 174 (2020). https://doi.org/10.1016/j.sigpro.2020.10763.


Acknowledgements

Not applicable.

Funding

This work has been supported by the National Natural Science Foundation of China (grant No. 61971217) and Foundation of the Graduate Innovation Center, Nanjing University of Aeronautics and Astronautics (China) (grant no. kfjj20190411).

Author information

Contributions

DZ proposed the original idea of the full text. WT and YZ designed and implemented the simulation experiments. CS analyzed the results. WT drafted the manuscript and was a major contributor in writing the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Weilin Tu.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Tu, W., Xu, D., Zhou, Y. et al. The upper bound of multi-source DOA information in sensor array and its application in performance evaluation. EURASIP J. Adv. Signal Process. 2020, 42 (2020). https://doi.org/10.1186/s13634-020-00700-8


Keywords

  • DOA estimation
  • Information theory
  • Upper bound
  • Cramer-Rao bound
  • Sensor arrays