An RMT-based generalized Bayesian information criterion for signal enumeration
EURASIP Journal on Advances in Signal Processing volume 2023, Article number: 50 (2023)
Abstract
This paper provides a method for enumerating signals impinging on an array of sensors based on the generalized Bayesian information criterion (GBIC). The proposed method is motivated by a statistic for testing the sphericity of the covariance matrix when the sample size n is less than the dimension m. The statistic consists of the first four moments of the sample eigenvalue distribution and relaxes the assumption of a Gaussian distribution. Using random matrix theory, we derive the asymptotic distribution of the statistic as m and n tend to infinity at the same ratio, and propose a GBIC expression for determining the signal number. Numerical simulations demonstrate that the proposed method has a high probability of detection in both Gaussian and non-Gaussian noise, and performs better than other methods.
1 Introduction
Estimating the number of signals, also called signal enumeration, is an important problem in array signal processing. It is applied in diverse fields, such as radar and wireless communication [1, 2]. There are many different approaches to signal enumeration. Much attention has focused on information theoretic criteria (ITC), including the Akaike information criterion (AIC) [3], the Bayesian information criterion (BIC) [4], the minimum description length (MDL) principle [5], and the predictive minimum description length (PMDL) principle [6]. These conventional methods rely heavily on the assumption of normality and yield poor performance when it fails. In practical applications, e.g., interference from indoor and outdoor mobile signals, radar clutter, and underwater noise follow non-Gaussian distributions [7,8,9]. In [10], an approach is proposed to estimate the signal number by the entropy estimation of eigenvalues (EEE) under both white Gaussian and non-Gaussian assumptions. Moreover, a signal subspace identification method has been developed for heavy-tailed non-Gaussian noise [11]. The above methods work well when the sample size n tends to infinity with a fixed sensor number m. However, in certain scenarios, the available sample size may be of the same order of magnitude as the sensor number, and the scenario \(m\gg n\) can even occur in practice. For instance, in a short-time stationary process, an array containing a large number of sensors receives only a limited number of signal observations in a MIMO radar system. In these cases, these methods suffer performance degradation because the sample eigenvalues do not converge to the true eigenvalues as m and n both tend to infinity at the same ratio [12].
The traditional methods, built on information theoretic criteria, have been analyzed and developed by random matrix theory (RMT) in the large-dimensional regime. Many methods exploit the sample eigenvalue properties of large-dimensional random matrices, as discussed in [13,14,15,16,17,18,19]. Using the distribution of the largest sample eigenvalue of a Wishart matrix, the classic AIC is modified by increasing the penalty term [15]. In [16], an improved AIC is proposed for high-dimensional signals in white Gaussian noise by means of the sample moments of a Wishart-distributed random matrix. In [17], a linear shrinkage version of the MDL criterion is devised for detecting the signal number via the spherical structure of the noise subspace components under the Gaussian assumption. In [18], the BIC is reformulated by constructing a penalized likelihood function from the convergence of the largest sample eigenvalues and the corresponding eigenvectors. In [19], signal enumeration is realized based on the MDL and the sample eigenvalues of the covariance matrix for Gaussian observations. Furthermore, a generalized Bayesian information criterion (GBIC) has been derived; it is constructed by incorporating additional information, such as the probability distribution or statistics of the sample eigenvalues obtained from the available data [20]. In a large-array and finite-sample scenario, a signal enumeration method based on the GBIC with a test statistic is proposed in [21]. In [22], a new algorithm is presented to estimate the number of sources embedded in correlated complex elliptically distributed noise in the large-dimensional regime. As a whole, these methods mainly focus on signal enumeration in large-dimensional and finite-sample settings.
It is common to meet signal enumeration in non-Gaussian noise when both the sample size and the sensor number are large. The ITC-like methods are difficult to implement for non-Gaussian observations because the probability density function (pdf) is difficult to express. Moreover, the existing distribution properties of sample eigenvalues apply only to Gaussian observations.
This paper deals with estimating the signal number in the large-dimensional regime when the noise is uncorrelated and non-Gaussian. The estimator is developed for complex elliptically symmetric (CES) noise [23], which better characterizes the noise in many applications. Motivated by [20], we construct a new test statistic consisting of high-order moments of the sample eigenvalue distribution. According to [25, 26], it has higher power against the spiked covariance model of [24]. The proposed approach differs from [27] and is, from a technical point of view, analogous to the method devised in [28].
We propose an approach for signal enumeration based on the GBIC, whether the noise is Gaussian or non-Gaussian. The characteristics and advantages of our approach are as follows:

1.
The proposed approach is based on the GBIC and utilizes RMT.

2.
We consider the problem of signal enumeration in the asymptotic situation where both the sample size n and the sensor number m tend to infinity with a ratio \(m/n \rightarrow c\in (0,\infty )\).

3.
We construct an estimator that is composed of the finite moments of the spectral distribution of the sample covariance matrix.

4.
The proposed estimator is evaluated numerically in scenarios with Gaussian and non-Gaussian noise and has a higher probability of detection than some existing estimators.
The rest of this paper is organized as follows. Section 2 introduces the signal model and the GBIC. Section 3 proposes a new test statistic and derives its asymptotic normality under general moment conditions under the null hypothesis. Section 4 provides numerical simulation results demonstrating the asymptotic behavior of the proposed statistic and its performance for high-dimensional covariance matrices in Gaussian and non-Gaussian noise settings. Section 5 concludes the paper.
1.1 Notations
The notation \(\mathbb {R}\) denotes the set of real numbers. The notation \(\mathbb {C}^{m\times n}\) denotes the set of all \(m\times n\) complex matrices, and \(\mathbb {C}^m\) the set of all m-dimensional complex column vectors. The symbol E denotes mathematical expectation. The symbol \(\varvec{I}_{m}\) denotes the identity matrix of order m. For a matrix \(\varvec{A}\), \(\varvec{A}^{T}\) and \(\varvec{A}^{H}\) denote its transpose and conjugate transpose, respectively. For a square matrix \(\varvec{A}\), \(\text {tr}(\varvec{A})\) denotes its trace.
2 Signal model and the GBIC
2.1 Formulation
Consider K narrowband, spatially incoherent signals \(s_1, \ldots , s_K\) impinging on an array of m sensors from distinct directions \(\theta _1, \ldots , \theta _K\). The received sample vector at discrete time t, denoted \(\varvec{y}(t) \in \mathbb {C}^m\), can be modeled as

\(\varvec{y}(t) = \varvec{A}(\varvec{\theta })\varvec{s}(t) + \varvec{w}(t),\)   (1)
where \(\varvec{A}(\varvec{\theta }) = [\varvec{a}(\theta _1), \ldots , \varvec{a}(\theta _K)]\in \mathbb {C}^{m \times K}\) is the unknown array manifold matrix with unit-norm steering vectors \(\varvec{a}(\theta _k)\in \mathbb {C}^{m}\) corresponding to \(\theta _k, k = 1, \ldots , K\); \(\varvec{s}(t) = [s_1(t), \ldots , s_K(t)]^T \in \mathbb {C}^{K}\) is the signal vector; and \(\varvec{w}(t) \in \mathbb {C}^{m}\) is the additive white noise impinging on the sensor array at time t. Suppose that n snapshots of the sensor array signals are available. The observation matrix is denoted as

\(\varvec{Y}_n = \varvec{A}(\varvec{\theta })\varvec{S}_n + \varvec{W}_n,\)   (2)
where \(\varvec{Y}_n = [\varvec{y}(1), \ldots , \varvec{y}(n)], \varvec{S}_n = [\varvec{s}(1), \ldots , \varvec{s}(n)]\), and \(\varvec{W}_n = [\varvec{w}(1), \ldots , \varvec{w}(n)]\). We estimate the signal number K from \(\varvec{Y}_n\). Some statistical assumptions are made for the model as follows:

A1: The signal number is unknown and satisfies \(K < \min \{m,n\}\).

A2: The spatially incoherent signals \(s_1(t), \ldots , s_K(t)\) are stationary processes with 0 mean. The covariance matrix of signal vector \(\varvec{s}(t)\) is \(E\{\varvec{s}(t)\varvec{s}^H(t)\} = \text {diag}(\varvec{p}) \triangleq \varvec{P}_s\), where \(\varvec{p}= [p_1, \ldots , p_K]^T\) is the signal power vector.

A3: The noise components \(\varvec{w}(1), \ldots , \varvec{w}(n)\) are independent of \(\varvec{A}(\varvec{\theta }) \varvec{s}(1), \ldots , \varvec{A}(\varvec{\theta }) \varvec{s}(n)\) and come from a CES distribution with zero mean and covariance matrix \(\sigma ^2 \varvec{I}_m\), i.e., \(E\{\varvec{w}(t)\varvec{w}^H(t)\} = \sigma ^2 \varvec{I}_m\), where \(\sigma ^2\) is the unknown noise power. Moreover, \(s_1(t), \ldots , s_K(t)\) are mutually independent.
Under the above assumptions, the observation vector \(\varvec{y}(t)\) in (1) at discrete time t has zero mean and covariance matrix

\(\varvec{R} = E\{\varvec{y}(t)\varvec{y}^H(t)\} = \varvec{A}(\varvec{\theta })\varvec{P}_s\varvec{A}^H(\varvec{\theta }) + \sigma ^2\varvec{I}_m.\)   (3)
We denote the eigenvalues of \(\varvec{R}\) as \(\lambda _1 \ge \cdots \ge \lambda _K \ge \lambda _{K + 1} = \cdots = \lambda _m = \sigma ^2\), with corresponding eigenvectors \(\varvec{u}_1, \ldots , \varvec{u}_m\). The sample covariance matrix (SCM) is

\({\hat{\varvec{R}}} = \frac{1}{n}\sum _{t=1}^{n}\varvec{y}(t)\varvec{y}^H(t).\)   (4)
The eigenvalues and eigenvectors of \({\hat{\varvec{R}}}\) are denoted \({\hat{\lambda }}_{1} \ge \cdots \ge \hat{\lambda }_{m}\) and \(\hat{\varvec{u}}_1, \ldots , \hat{\varvec{u}}_m\), also called the sample eigenvalues and sample eigenvectors.
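As a concrete illustration of (4) and the sample eigen-decomposition, the following NumPy sketch forms the SCM from n snapshots and sorts its spectrum in descending order. The sizes and the white-noise data are our own illustrative choices, not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 8, 100  # sensors and snapshots (illustrative sizes)

# Simulated zero-mean circularly symmetric observations y(1), ..., y(n)
# stacked as columns of Y (here pure unit-power white noise).
Y = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)

# Sample covariance matrix R_hat = (1/n) * Y Y^H, as in (4).
R_hat = Y @ Y.conj().T / n

# Sample eigenvalues lambda_1 >= ... >= lambda_m with matching eigenvectors.
eigvals, eigvecs = np.linalg.eigh(R_hat)   # eigh returns ascending order
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
```

Note that `numpy.linalg.eigh` exploits the Hermitian structure of the SCM and returns real eigenvalues, which is why no separate real-part extraction is needed.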
2.2 The review of generalized Bayesian information criterion
As a conventional method, the BIC addresses model selection from a Bayesian point of view. However, the criterion does not work well in some conditions, such as small sample size and low signal-to-noise ratio (SNR). Moreover, the BIC relies heavily on the density of the observations. By incorporating the density functions of sample eigenvalues and related statistics, the GBIC was proposed to remedy these limitations of the BIC [20]. The GBIC has two expressions that relax the Gaussian assumption on the observations, denoted \(\hbox {GBIC}_1\) and \(\hbox {GBIC}_2\), respectively. \(\hbox {GBIC}_1\) is made up of two parts: the density of the observations and the densities of the sample eigenvalues or related statistics. To reduce the influence of the sample eigenvectors, \(\hbox {GBIC}_2\) drops the density of the observations from \(\hbox {GBIC}_1\), overcoming the limitation of requiring that density; this also makes it convenient for describing non-Gaussian data. Suppose that \(\mathfrak {L}\) is a statistic based on the sample eigenvalues, corresponding to an unknown parameter vector \(\Theta _z^{(k)}\) with a possible signal number k. If the pdf of the statistic \(\mathfrak {L}\) is denoted \(f(\mathfrak {L}\mid \Theta _z^{(k)})\), then the expression of \(\hbox {GBIC}_2\) is

\(\hbox {GBIC}_2(k) = -2\ln f(\mathfrak {L}\mid {\hat{\Theta }}_z^{(k)}) + \nu (k)\ln n,\)   (5)
where \({\hat{\Theta }}_z^{(k)}\) and \(\nu (k)\) are the maximum likelihood estimate of \(\Theta _z^{(k)}\) and the number of free parameters, respectively. In (5), the first term is the log-likelihood term of \(f(\mathfrak {L}\mid {\hat{\Theta }}_z^{(k)})\), and the second term is the penalty. In this paper, we propose a sphericity test statistic for estimating the signal number, based on \(\hbox {GBIC}_2\).
3 The signal enumeration method
Let \(\varvec{x}_1, \dots , \varvec{x}_n \in \mathbb {C}^p\) be an independent and identically distributed (circularly symmetric complex) sample with zero mean and covariance matrix \(\varvec{\Sigma }_p\). Consider the sphericity test for the population covariance matrix:

\(H_0: \varvec{\Sigma }_p = \sigma ^2\varvec{I}_p \quad \text {vs.}\quad H_1: \varvec{\Sigma }_p \ne \sigma ^2\varvec{I}_p,\)   (6)
where \(\sigma\) is an unknown but fixed positive constant.
3.1 Estimators of \(\text {tr}\varvec{\Sigma }_p^i/p\)
Suppose that \(H_p\) and \(F_n\) are the spectral distributions of \(\varvec{\Sigma }_p\) and \(\varvec{S}_n\), respectively, where \(\varvec{S}_n = \frac{1}{n}\sum ^n_{i=1}\varvec{x}_i \varvec{x}_i^{ H }\) is the SCM. We define the integer-order moments of \(H_p\) and \(F_n\):

\(\alpha _i = \int x^i \,\mathrm {d}H_p(x) = \frac{1}{p}\text {tr}(\varvec{\Sigma }_p^i), \qquad {\hat{\beta }}_i = \int x^i \,\mathrm {d}F_n(x) = \frac{1}{p}\text {tr}(\varvec{S}_n^i).\)
Assuming that the observations are Gaussian, the estimators \({\hat{\alpha }}_i\) of \(\alpha _i\), \(i = 1, 2, 4\), are proved to be consistent, unbiased, and asymptotically normal as \((n, p) \rightarrow \infty\), and are adopted in [25, 26, 29, 30]. These estimators can be expressed as polynomials of the \({\hat{\beta }}_i\):
where
If the population distribution is non-Gaussian, the unbiasedness no longer holds for \({\hat{\alpha }}_k, k = 2, 3, 4\). However, the consistency and asymptotic normality are retained under the suitable assumptions given in [30].
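To make the moment quantities concrete, the following sketch (our own illustration, not the paper's code) estimates the spectral moments \({\hat{\beta }}_i = \text {tr}(\varvec{S}_n^i)/p\) of an SCM with \(\varvec{\Sigma }_p = \varvec{I}_p\) and compares them with the Marcenko-Pastur predictions; the sizes p and n are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
p, n = 50, 100            # dimension and sample size (illustrative)
c = p / n                 # dimension-to-sample ratio

# i.i.d. circularly symmetric complex entries with unit variance (Sigma_p = I_p).
X = (rng.standard_normal((p, n)) + 1j * rng.standard_normal((p, n))) / np.sqrt(2)
S = X @ X.conj().T / n    # sample covariance matrix

# Moments of the empirical spectral distribution F_n: beta_i = tr(S^i)/p,
# computed from the eigenvalues of S.
lam = np.linalg.eigvalsh(S)
beta = {i: float(np.mean(lam ** i)) for i in (1, 2, 4)}
```

Under \(H_0\) with unit noise power, \(\alpha _1 = \alpha _2 = 1\), but the Marcenko-Pastur law gives \({\hat{\beta }}_2 \approx 1 + c\): the raw sample moments are biased in the large-dimensional regime, which is exactly the bias the polynomial corrections of \({\hat{\alpha }}_i\) are designed to remove.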
Assumption 1
The sample size n tends to infinity together with the dimension p at a ratio \(c_n = p / n \rightarrow c \in (0, \infty )\).
Assumption 2
There exists a doubly infinite array of independent and identically distributed random variables \(w_{ij}\) satisfying

\(E(w_{11}) = 0, \quad E|w_{11}|^2 = 1, \quad E|w_{11}|^4 < \infty .\)
Denoting \(\varvec{W}_n = (w_{ij})_{1 \le i \le p, 1 \le j \le n}\), the observation vectors can be expressed as \({\varvec{x}}_j = \varvec{\Sigma }_p^{1 / 2} \varvec{w}_{\cdot j}\), where \(\varvec{w}_{\cdot j} = (w_{ij})_{1 \le i \le p}\) is the jth column of \(\varvec{W}_n\).
Assumption 3
The spectral norm of \(\varvec{\Sigma }_p\) has a positive constant bound, and the population spectral distribution \(H_p\) weakly converges to a probability distribution H as \(p \rightarrow \infty\).
It is worth noting that it is customary to write \(E(w_{11}^4) = 3 + \Delta\), where \(\Delta\) is a finite constant that equals 0 if \(w_{ij}\) in Assumption 2 is Gaussian. Under these assumptions, the estimators \(\hat{\alpha }_k\) converge almost surely to \(\alpha _k, k=1, 2, 3, 4\) [30].
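The kurtosis parameter \(\Delta = E(w_{11}^4) - 3\) can be estimated empirically. The sketch below (our illustration; the Laplace example is our choice of a heavy-tailed law, for which \(\Delta = 3\) at unit variance) contrasts it with the Gaussian case \(\Delta = 0\).

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200_000

# Delta = E(w^4) - 3 for real, zero-mean, unit-variance entries.
g = rng.standard_normal(n)                        # Gaussian: Delta = 0
lap = rng.laplace(scale=1 / np.sqrt(2), size=n)   # Laplace with unit variance
                                                  # (Var = 2 * scale^2): Delta = 3

delta_gauss = np.mean(g ** 4) - 3
delta_laplace = np.mean(lap ** 4) - 3
```

This is why a statistic whose null distribution depends on \(\Delta\) only through an estimable constant remains usable beyond the Gaussian case.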
3.2 Test procedure
Let \(\lambda _1, \ldots , \lambda _p \in \mathbb {R}\) be the eigenvalues of \(\varvec{\Sigma }_p\). From the Cauchy-Schwarz inequality, we know that

\(\Big (\sum _{i=1}^{p}\lambda _i^a\Big )^{2} \le p\sum _{i=1}^{p}\lambda _i^{2a}, \quad a = 1, 2,\)   (8)
with equality holding if and only if \(\lambda _1, \ldots , \lambda _p\) are all equal. By a simple rearrangement of (8), we have
Therefore, the moments of the distribution \(H_p\) satisfy
The equality holds if and only if the sphericity hypothesis is true. Denote
then \(\phi = 0\) if and only if \(\varvec{\Sigma }_p = \sigma ^2 \varvec{I}_p\). Therefore, the sphericity test (6) is equivalent to the following hypothesis test:

\(H_{0a}: \phi = 0 \quad \text {vs.}\quad H_{1a}: \phi > 0.\)
For convenience, we define a new variable
The following proposition establishes the asymptotic distribution of the proposed statistic \(\gamma\) under the null hypothesis \(H_{0a}\).
Proposition 1
Under Assumptions 1–3, when the null hypothesis \(H_{0}\) in (6) holds, we have
where \(\tilde{\mu } = 2 \Delta\), \(V = 16 + 16\Delta /c\). Further, if \(E(w_{11}^4) = 3\), then
Proof
From (2) in [30], we know
where the mean vector \(\tilde{\varvec{m}}\) and the terms \(\varvec{V}_1\) and \(\varvec{V}_2\) of the covariance matrix are, respectively,
Let
It is clear that \(f(\varvec{t})\) has continuous partial derivatives at \(\varvec{t}_0 = (1, 1, 1)^T\), and the Jacobian vector is
Note that \(f(\varvec{t}_0) = 0\) and \(J({\varvec{t}_0}) = (4, 2, 0)^T\). Based on the Delta method, we have
By a direct calculation, we obtain
Namely,
Thus,
\(\square\)
Following the work in [29], consider the statistic \(T = \alpha _2/\alpha _1^2\), where \(\alpha _1 = \text {tr}(\varvec{\Sigma }_p)/p\) and \(\alpha _2= \text {tr}(\varvec{\Sigma }^2_p)/p\), for testing the sphericity of a p-dimensional positive definite covariance matrix \(\varvec{\Sigma }_p\). If \(\varvec{\Sigma }_p\) is proportional to the identity matrix, then \(\alpha _1^2 = \alpha _2\) and \(T = 1\); otherwise, \(T > 1\).
Therefore, T is a sphericity test statistic that can be employed for testing the spherical structure of a positive definite covariance matrix. In [29], \(\text {tr}(\varvec{\Sigma })\) and \(\text {tr}(\varvec{\Sigma }^2)\) are unbiasedly and consistently estimated under the asymptotic condition \(p, n \rightarrow \infty\) with \(n = O(p^\delta ), 1/2<\delta < 1\), meaning that n is of the same order of magnitude as \(p^\delta\). In contrast, Proposition 1 ensures that the proposed statistic performs well for testing the sample covariance matrix even when the dimension exceeds the sample size.
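The behavior of T described above can be checked directly on a spectrum; the eigenvalue sets below are arbitrary examples of ours, chosen only to show the two regimes.

```python
import numpy as np

def sphericity_T(eigvals):
    """T = alpha_2 / alpha_1^2 computed from a covariance spectrum."""
    a1 = np.mean(eigvals)        # tr(Sigma)/p
    a2 = np.mean(eigvals ** 2)   # tr(Sigma^2)/p
    return a2 / a1 ** 2

spherical = np.full(6, 2.0)                        # Sigma = 2 * I_6
spiked = np.array([4.0, 3.0, 2.0, 1.0, 1.0, 1.0])  # unequal eigenvalues

# T equals 1 exactly when all eigenvalues coincide (sphericity);
# any spread in the spectrum pushes T strictly above 1.
```

Note that T is scale-invariant: multiplying all eigenvalues by a constant leaves T unchanged, so the unknown noise power \(\sigma ^2\) never needs to be known for the test.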
The key problem is detecting the signal number using the proposed sphericity test statistic. We denote the presumptive covariance matrix as \(\varvec{R}^{(k)}\) with eigenvalues \(\lambda _1\ge \cdots \ge \lambda _k>\lambda _{k+1}=\cdots =\lambda _m=\sigma ^2_k\), and the unknown parameter vector as \(\varvec{\Theta }^{(k)}=[\lambda _1, \ldots , \lambda _k, \sigma _k^2]^T\), where \(\sigma _k^2\) is the presumptive noise power. The signal eigenvalues of \(\varvec{R}^{(k)}\) are \(\lambda _1, \ldots , \lambda _k\) with signal subspace \(\varvec{U}_k \triangleq [\varvec{u}_1, \ldots , \varvec{u}_k]\), and the noise eigenvalues of \(\varvec{R}^{(k)}\) are \(\lambda _{k+1}, \ldots , \lambda _m\) with noise subspace \(\varvec{U}_{m-k}\triangleq [\varvec{u}_{k+1}, \ldots , \varvec{u}_m]\). By a unitary coordinate transformation, we decompose the received data \(\varvec{y}(t)\) as

\(\varvec{y}(t) = \varvec{U}_k\varvec{y}_s^{(k)}(t) + \varvec{U}_{m-k}\varvec{y}_w^{(k)}(t),\)
where \(\varvec{y}_s^{(k)}(t)=\varvec{U}^H_k\varvec{y}(t)\in \mathbb {C}^{k}\) and \(\varvec{y}_w^{(k)}(t)=\varvec{U}^H_{m-k}\varvec{y}(t)\in \mathbb {C}^{m-k}\) are the presumptive signal and noise subspace components, respectively, as in [14, 17]. The covariance matrix of the presumptive signal subspace components is

\(\varvec{R}_s^{(k)} = E\{\varvec{y}_s^{(k)}(t)\varvec{y}_s^{(k)H}(t)\} = \varvec{U}_k^H\varvec{R}\varvec{U}_k = \text {diag}(\lambda _1, \ldots , \lambda _k).\)
For the presumptive noise subspace components, the corresponding covariance matrix is

\(\varvec{R}_w^{(k)} = E\{\varvec{y}_w^{(k)}(t)\varvec{y}_w^{(k)H}(t)\} = \varvec{U}_{m-k}^H\varvec{R}\varvec{U}_{m-k} = \text {diag}(\lambda _{k+1}, \ldots , \lambda _m).\)
If there is no signal in the presumptive noise subspace components \(\varvec{y}^{(k)}_w(t)\), then \(\varvec{R}_w^{(k)}\) is proportional to the identity matrix, i.e., \(\varvec{R}_w^{(k)}\) is spherical. Therefore, the problem of estimating the signal number becomes one of testing the sphericity of the presumptive noise subspace covariance matrices \(\varvec{R}_w^{(k)}, k=0, 1, \ldots , \min (m,n) - 1\). We define
and
where
According to Proposition 1, the statistic \({\hat{T}}^{(k)}\) follows a Gaussian distribution if we estimate \(\alpha _1^{(k)}, \alpha _2^{(k)}, \alpha _4^{(k)}\) from the presumptive noise subspace components \(\varvec{y}_w^{(k)}(t)\). If these components contain no signal, the \(m-k\) smallest eigenvalues are equal to each other and \(\varvec{R}_w^{(k)}\) is spherical. If the presumptive noise subspace components contain signals, \(\varvec{R}_w^{(k)}\) is diagonal but not spherical, and the distribution of \({\hat{T}}^{(k)}\) is non-Gaussian. Thus, \({\hat{T}}^{(k)}\), a statistic of the \(m-k\) smallest eigenvalues, can be employed as the statistic \(\mathfrak {L}\) in the second expression of the GBIC. The signal number can therefore be estimated via the GBIC with \({\hat{T}}^{(k)}\), by the principle that \({\hat{T}}^{(k)}\) tests the kth presumptive noise subspace.
3.3 Implementation of the RMTbased GBIC
We apply \({\hat{T}}^{(k)}\) to signal enumeration via the GBIC in this subsection. Through the unitary coordinate transformation with the sample eigenvalues and sample eigenvectors, we obtain the presumptive noise subspace components of the observations. The sample eigenvalues are \({\hat{\lambda }}_1>\cdots >{\hat{\lambda }}_m\) with corresponding sample eigenvectors \({\hat{\varvec{u}}}_1, \ldots , {\hat{\varvec{u}}}_m\). For k presumptive signals, the noise subspace components are estimated as

\({\hat{\varvec{y}}}_w^{(k)}(t) = {\hat{\varvec{U}}}_{m-k}^H\varvec{y}(t),\)
where \({\hat{\varvec{U}}}_{m-k}=[{\hat{\varvec{u}}}_{k+1}, \ldots , {\hat{\varvec{u}}}_{m}]\). The covariance matrix of the presumptive noise subspace components is unbiasedly estimated by
Letting
the estimators \({\hat{\alpha }}_1^{(k)}\), \({\hat{\alpha }}_2^{(k)}\), and \({\hat{\alpha }}_4^{(k)}\) are obtained unbiasedly and consistently as follows:
where \(\tau _1, \tau _2, \tau _3\) and \(\tau _4\) are shown in (7).
The statistic \(\gamma ^{(k)}\) is estimated by
If \(\varvec{R}_w^{(k)}\) is proportional to the identity matrix, the asymptotic distribution of \({\hat{\gamma }}^{(k)}\) is
The pdf \(f({\hat{T}}^{(k)}\mid {\hat{\varvec{\Theta }}}^{(k)})\) under the parameter estimate \({\hat{\varvec{\Theta }}}^{(k)}=[{\hat{\lambda }}_1, \ldots , {\hat{\lambda }}_k, {\hat{\sigma }}_k^2]^T\), with \({\hat{\sigma }}_k^2 = \frac{1}{m-k}\sum _{i=k+1}^{m}{\hat{\lambda }}_i\), is
Substituting (26) into (5) with \(f(\mathfrak {L}\mid {\hat{\varvec{\Theta }}}^{(k)}) =f({\hat{T}}^{(k)}\mid {\hat{\varvec{\Theta }}}^{(k)})\), taking the number of parameters \(\nu (k)=k+1\), and ignoring the constant term, we obtain the proposed GBIC-based function
By minimizing (27) with respect to k, the signal number is estimated as

\({\hat{K}} = \mathop {\arg \min }\limits _{k \in \{0, 1, \ldots , \min (m,n)-1\}} \text {GBIC}(k).\)
In summary, the proposed signal enumeration method is written as the following algorithm (Table 1).
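The overall workflow of this section can be sketched numerically. Because Table 1 and the exact GBIC expressions are not reproduced here, the sketch below replaces the GBIC penalty with a crude fixed threshold on the sphericity statistic of the presumptive noise spectrum; the scene (two sources at hypothetical DOAs, 10 dB SNR, half-wavelength array) is entirely our own choice, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, K = 10, 1000, 2                   # sensors, snapshots, true source number
p_s, sigma2 = 10.0, 1.0                 # signal power (10 dB SNR), noise power
theta = np.deg2rad([-20.0, 30.0])       # hypothetical DOAs

# Unit-norm uniform linear array steering vectors with d/lambda = 1/2.
idx = np.arange(m)[:, None]
A = np.exp(1j * np.pi * idx * np.sin(theta)) / np.sqrt(m)

def crandn(*shape):
    """Circularly symmetric complex Gaussian entries with unit variance."""
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

S = np.sqrt(p_s) * crandn(K, n)              # incoherent Gaussian sources
Y = A @ S + np.sqrt(sigma2) * crandn(m, n)   # observed snapshots
R_hat = Y @ Y.conj().T / n
lam = np.sort(np.linalg.eigvalsh(R_hat).real)[::-1]

def T_stat(noise_eigs):
    """Sphericity statistic alpha_2/alpha_1^2 of a presumptive noise spectrum."""
    return np.mean(noise_eigs ** 2) / np.mean(noise_eigs) ** 2

# Smallest k whose m-k trailing eigenvalues look spherical (T close to 1).
# The fixed 1.5 threshold is a stand-in for the paper's GBIC penalty.
K_hat = next((k for k in range(m - 1) if T_stat(lam[k:]) < 1.5), m - 1)
```

Any residual signal eigenvalue left in the presumptive noise spectrum inflates the statistic sharply, which is why sweeping k from 0 upward and stopping at the first near-spherical spectrum recovers the source count in this easy scenario.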
4 Numerical evaluation
This section verifies the signal enumeration performance by comparing the proposed method with several existing methods, presented in [15] (denoted BN-AIC), [16] (denoted RMT-AIC), [18] (denoted BIC-variant), [10] (denoted EEE) and [21] (denoted GBIC-SP), in the scenario of a large array in white Gaussian and non-Gaussian noise settings. The existing criterion functions are collected in Table 2.
4.1 The situations of no source signals
When no source signals are present, we mainly consider two cases in the large-dimensional regime:

Case 1: the additive noise is distributed as complex Gaussian \(\varvec{w}\sim \mathbb {C}\mathcal {N}(\varvec{0}, \varvec{I}_m)\);

Case 2: the noise is K-distributed, \(\varvec{w}(t)= \sqrt{\tau (t)}\varvec{n}(t)\) with \(\tau (t)\sim \text{ Gamma }(v, 1/v)\) and \(\varvec{n}(t)\sim \mathbb {C}\mathcal {N}(\varvec{0}, \varvec{I}_m)\).
We compute the empirical probability of correct detection based on 3000 independent numerical runs in the two cases.
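Case 2 above can be simulated directly; the sketch below (our illustration, with arbitrary sizes) generates K-distributed noise via the stated Gamma texture and checks that it keeps unit power while exhibiting heavier-than-Gaussian tails.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, v = 64, 5000, 2.0   # sensors, snapshots, Gamma shape (illustrative)

# K-distributed noise w(t) = sqrt(tau(t)) * n(t) with
# tau(t) ~ Gamma(v, 1/v) (unit mean) and n(t) ~ CN(0, I_m).
tau = rng.gamma(shape=v, scale=1.0 / v, size=n)
g = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
W = np.sqrt(tau)[None, :] * g

# Since E{tau} = 1, the covariance is still E{w w^H} = I_m, but the
# fourth moment E|w|^4 = 2 * (1 + 1/v) exceeds the complex Gaussian value 2.
power = np.mean(np.abs(W) ** 2)
fourth = np.mean(np.abs(W) ** 4)
```

Keeping the texture's mean at 1 (via scale = 1/v) is what makes the two cases directly comparable: only the tail behavior changes, not the noise power.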
Figure 1 shows the correct detection probabilities versus the number of sensors in Gaussian and K-distributed noise when no sources are present. It indicates that the proposed method detects well as the ratio c becomes large, with the probability of correct detection very close to 1.
4.2 The presence of signals
We assume that \(K=5\) spatially incoherent narrowband signals with identical power \(p_s\), from the directions of arrival (DOA) \(\varvec{\theta }= \{-55^{\circ }, -25^{\circ }, 5^{\circ }, 20^{\circ }, 40^{\circ }\}\), impinge upon a linear array of m sensors with uniform half-wavelength spacing. The signals are generated as independent complex Gaussian sequences with steering vectors \(\varvec{a}(\theta _k) = (1/\sqrt{m})\big ( 1, e^{j(2\pi d/\lambda )\sin {\theta _k}}, \ldots , e^{j(2\pi d/\lambda )(m-1)\sin {\theta _k}} \big )^{T}, k=1, \ldots , K\). The SNR is defined as \(10\log (p_s/\sigma ^2)\), where the noise power is \(\sigma ^2=1\). The additive noise again follows the two cases in Sect. 4.1.
Figure 2 shows the empirical probability of correct detection versus SNR in complex Gaussian and K-distributed noise for large sample size n and sensor number m. Figure 2a, b indicate that the empirical correct detection probabilities of all methods increase as the SNR becomes large. Meanwhile, the proposed method has the best performance.
Figure 3 shows the empirical correct detection probabilities versus the number of samples in complex Gaussian and K-distributed noise. Figure 3a, b indicate that these estimators, except for EEE, are consistent as the sample size n grows. When the SNR is low or the sample size is small, the EEE approach performs poorly.
Figure 4 shows the empirical probabilities of correct detection versus the number of sensors for the high-dimensional case with \(c = m/n = 1.5\). The proposed method is significantly better than the criteria collected in Table 2. The reason is that \({\hat{T}}^{(k)}\) is more sensitive to the structure of \({\hat{\varvec{R}}}^{(k)}\), and the accurate probabilistic model of \({\hat{T}}^{(k)}\) ensures that the proposed method reaches a higher correct detection probability. The BN-AIC and BIC-variant methods cannot work because the denominators of their criterion functions equal 0.
Figure 5 shows the correct detection probabilities versus the number of sensors in Gaussian and K-distributed noise. Figure 6 shows the correct detection probabilities versus the ratio c in Gaussian and K-distributed noise. Figures 5 and 6 imply that the proposed method performs well in most situations as the ratio c becomes large. The RMT-AIC method performs better in some places but exhibits instability.
Figure 7 shows the correct detection probabilities versus the parameter v of the Gamma distribution in K-distributed noise. The proposed method has the highest empirical correct detection probability among the methods for large n and m with \(n>m\).
From the above simulation results, we have the following conclusions:

1.
All these methods perform well at high SNR and large sample size with \(n>m\) in the above types of noise.

2.
The BN-AIC and BIC-variant methods cannot work well in the high-dimensional case with \(n<m\).

3.
The EEE approach performs well in Gaussian noise but suffers at low SNR or small sample size.

4.
The proposed method has the best detection performance in Gaussian and non-Gaussian noise in the large-dimensional case.
5 Conclusion
This paper has developed an RMT-based method for signal enumeration in Gaussian and non-Gaussian noise when the sensor number is large together with the sample size. We estimate the signal number via an RMT-based GBIC built on a new test statistic for the sphericity hypothesis of the noise subspace covariance matrix. The proposed method provides higher correct detection probabilities than the existing methods in Gaussian and non-Gaussian noise settings. Moreover, it achieves accurate signal enumeration even when the sample size is less than the sensor number.
Availability of data and materials
All data generated during this study are included in this published article.
References
J. Morales, S. Agaian, D. Akopian, Mitigating anomalous measurements for indoor wireless local area network positioning. IET Radar Sonar Navig. 10(7), 1220–1227 (2016)
L. Xu, J. Li, P. Stoica, Target detection and parameter estimation for MIMO radar systems. IEEE Trans. Aerosp. Electron. Syst. 44(3), 927–939 (2008)
H. Akaike, A new look at the statistical model identification. IEEE Trans. Autom. Control 19(6), 716–723 (1974)
G. Schwarz, Estimating the dimension of a model. Ann. Stat. 6(2), 461–464 (1978)
J. Rissanen, Modeling by shortest data description. Automatica 14(5), 465–471 (1978)
S. Valaee, P. Kabal, An information theoretic approach to source enumeration in array signal processing. IEEE Trans. Signal Process. 52(5), 1171–1178 (2004)
P.L. Brockett, M. Hinich, G.R. Wilson, Nonlinear and non-Gaussian ocean noise. J. Acoust. Soc. Am. 82(4), 1386–1394 (1987)
K.L. Blackard, T.S. Rappaport, C.W. Bostian, Measurements and models of radio frequency impulsive noise for indoor wireless communications. IEEE J. Sel. Areas Commun. 11(7), 991–1001 (1993)
E. Ollila, D.E. Tyler, V. Koivunen, H.V. Poor, Complex elliptically symmetric distributions: survey, new results and applications. IEEE Trans. Signal Process. 60(11), 5597–5625 (2012)
H. Asadi, B. Seyfe, Source number estimation via entropy estimation of eigenvalues (EEE) in Gaussian and non-Gaussian noise. arXiv preprint arXiv:1311.6051 (2013)
G. Anand, P. Nagesha, Source number estimation in non-Gaussian noise. In: IEEE European Signal Processing Conference (EUSIPCO), pp. 1711–1715 (2014)
F. BenaychGeorges, R.R. Nadakuditi, The eigenvalues and eigenvectors of finite, low rank perturbations of large random matrices. Adv. Math. 227(1), 494–521 (2011)
S. Kritchman, B. Nadler, Nonparametric detection of the number of signals: hypothesis testing and random matrix theory. IEEE Trans. Signal Process. 57(10), 3930–3941 (2009)
L. Huang, C. Qian, H.C. So, J. Fang, Source enumeration for large array using shrinkage-based detectors with small samples. IEEE Trans. Aerosp. Electron. Syst. 51(1), 344–357 (2015)
B. Nadler, Nonparametric detection of signals by information theoretic criteria: performance analysis and an improved estimator. IEEE Trans. Signal Process. 58(5), 2746–2756 (2010)
R.R. Nadakuditi, A. Edelman, Sample eigenvalue based detection of high-dimensional signals in white noise using relatively few samples. IEEE Trans. Signal Process. 56(7), 2625–2638 (2008)
L. Huang, H.C. So, Source enumeration via MDL criterion based on linear shrinkage estimation of noise subspace covariance matrix. IEEE Trans. Signal Process. 61(19), 4806–4821 (2013)
L. Huang, Y. Xiao, K. Liu, H.C. So, J. Zhang, Bayesian information criterion for source enumeration in large-scale adaptive antenna array. IEEE Trans. Veh. Technol. 65(5), 3018–3032 (2016)
E. Yazdian, S. Gazor, H. Bastani, Source enumeration in large arrays using moments of eigenvalues and relatively few samples. IET Signal Proc. 6(7), 689–696 (2012)
Z. Lu, A.M. Zoubir, Generalized Bayesian information criterion for source enumeration in array processing. IEEE Trans. Signal Process. 61(6), 1470–1480 (2013)
Y. Liu, X. Sun, S. Zhao, Source enumeration via GBIC with a statistic for sphericity test in white Gaussian and nonGaussian noise. IET Radar Sonar Navig. 11(9), 1333–1339 (2017)
E. Terreaux, J.P. Ovarlez, F. Pascal, New model order selection in large dimension regime for complex elliptically symmetric noise. In: IEEE European Signal Processing Conference (EUSIPCO), pp. 1090–1094 (2017)
D. Kelker, Distribution theory of spherical distributions and a location-scale parameter generalization. Sankhyā: Indian J. Stat. Ser. A, 419–430 (1970)
I.M. Johnstone, On the distribution of the largest eigenvalue in principal components analysis. Ann. Stat. 29, 295–327 (2001)
T.J. Fisher, X. Sun, C.M. Gallagher, A new test for sphericity of the covariance matrix for high dimensional data. J. Multivar. Anal. 101, 2554–2570 (2010)
T.J. Fisher, On testing for an identity covariance matrix when the dimensionality equals or exceeds the sample size. J. Stat. Plan. Inference 142, 312–326 (2012)
M.S. Srivastava, H. Yanagihara, T. Kubokawa, Tests for covariance matrices in high dimension with less sample size. J. Multivar. Anal. 130, 289–309 (2014)
Z.D. Bai, D.D. Jiang, J.F. Yao, S.R. Zheng, Corrections to LRT on large-dimensional covariance matrix by RMT. Ann. Stat. 37, 3822–3840 (2009)
M.S. Srivastava, Some tests concerning the covariance matrix in high dimensional data. J. Jpn. Stat. Soc. 35(2), 251–272 (2005)
X.T. Tian, Y.T. Lu, W.M. Li, A robust test for sphericity of high-dimensional covariance matrices. J. Multivar. Anal. 141, 217–227 (2015)
Acknowledgements
None.
Funding
This work was partly supported by the Innovation Team of Pu’er University (CXTD019), the Guangxi Science and Technology Planning Project (2022AC21276), and the Young Backbone Teachers Training Project of Pu’er University.
Author information
Authors and Affiliations
Contributions
SY was involved in conceptualization, methodology, software, and writing. BZ contributed to supervision and review. All authors read and approved the final manuscript.
Corresponding author
Ethics declarations
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Competing interests
The authors declare that they have no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Yuan, S., Zhang, B. An RMT-based generalized Bayesian information criterion for signal enumeration. EURASIP J. Adv. Signal Process. 2023, 50 (2023). https://doi.org/10.1186/s13634-023-01012-3
Keywords
 Signal enumeration
 Generalized Bayesian information criterion
 Sphericity test
 Random matrix theory