
Simple subspace based adaptive beamforming under Toeplitz covariances


When uncorrelated signals are incident on a uniform linear array, the array covariance matrix is of Toeplitz form. An adaptive beamforming method is proposed based on the signal-plus-interference (SI) subspace via Toeplitz rectification of the sample matrix. The rectified matrix is shown to be more accurate, in a norm sense, than the matrix modified according to the centro-Hermitian property. Since the rectified matrix is also centro-Hermitian, its eigen-decomposition can be obtained efficiently from a real matrix, and the weight vector is then computed in the estimated SI subspace. The proposed method, which is robust to pointing errors, is not only computationally efficient but also converges very quickly to the optimum performance, as demonstrated in the simulations.

1 Introduction

Adaptive beamforming, which controls the sensor array outputs with the weight vector, is performed such that reception of the desired signal is enhanced while the interferences are suppressed. In the minimum variance distortionless response (MVDR) beamformer [1], the array output power is minimized subject to the look direction constraint for the desired signal. It is desirable that an adaptive beamformer work well even with a small number of samples, that is, show a fast convergence speed. To improve convergence one can exploit known properties such as centro-Hermitian structure [2,3,4] (for the terminology, see the explanations above Sect. 2), Toeplitz covariances [5], the eigenstructure [6, 7], and unit circle roots [8, 9].

When narrowband signals that are uncorrelated with each other impinge on a uniform linear array (ULA), the array covariance matrix is Toeplitz. Using the rectified matrix, which has identical entries along each diagonal and is obtained by rectifying the sample matrix according to the Toeplitz property, one can improve the performance of adaptive beamforming [5]. Toeplitz rectification has also been exploited in the field of direction finding [10,11,12,13]. It is well known that the optimum weight vector maximizing the output signal-to-interference-plus-noise ratio (SINR) belongs to the signal-plus-interference (SI) subspace. Subspace based beamformers [6, 7, 14], in which the weight vectors are adjusted to lie in the respective estimated SI subspaces, can provide fast convergence in comparison with the sample matrix inversion (SMI).

When the information on the direction of the desired signal is inaccurate, the MVDR beamformer can suffer significant performance deterioration. The estimation of the interference-plus-noise covariance matrix (INCM) has attracted much interest for robust beamforming, e.g., [15,16,17,18,19,20]. Employing an estimated INCM, an adaptive beamformer can not only be robust against steering vector errors but also converge very quickly. However, its computational complexity is very high because, for example, matrix integrals (or summations) over some angular sectors must be performed [15,16,17,18] and/or the powers of the interferences as well as their directions must be estimated [16,17,18,19].

This paper proposes a computationally efficient beamforming method based on the SI subspace under the Toeplitz covariance. The rectified matrix is shown to be more accurate in terms of the \(l_{1}\) norm than the one modified according to the centro-Hermitian property [3]. Furthermore, it is shown that a square matrix is Toeplitz and centro-Hermitian if and only if it is Toeplitz and Hermitian, which allows us to transform the rectified matrix to a real matrix. In the proposed method, the SI subspace based weight vector is efficiently obtained from the eigen-decomposition of the transformed real matrix. Besides, the method is robust to pointing errors through direction estimation for the desired signal. Simulation results validate the effectiveness of the proposed method, showing that it outperforms the existing beamformers that use no INCM and has virtually the same performance as an INCM-based beamformer.

Superscripts *, T, and H denote complex conjugate, transpose, and complex conjugate transpose, respectively. We use \(E( \cdot )\) for expectation and \({\text{Re}} ( \cdot )\) and \({\text{Im}} ( \cdot )\) for real and imaginary parts of complex numbers. Vectors and matrices are in bold type, and \({\varvec{I}}\) and \({\varvec{J}}\) denote, respectively, an identity matrix of appropriate size and an exchange matrix with ones on the antidiagonal and zeros elsewhere. For a matrix \({\varvec{V}}\), we say “conjugate symmetric” [3, 21] if \({\varvec{JV}} = {\varvec{V}}^{*}\) and “centro-Hermitian” [21, 22] if \({\kern 1pt} {\varvec{JV}}^{*} {\varvec{J}} = {\varvec{V}}\).

2 Data model

A desired signal and \(\eta\) interferences impinge on a ULA with M sensors from \({\varvec{\theta}} = (\theta_{0} ,\;\theta_{1} ,\; \ldots ,\;\theta_{\eta } )\) where \(\theta_{0}\) and \(\theta_{k}\) are the arrival angles of the desired signal and the kth interference. The total number of directional signals is \(\eta_{0} = \eta + 1\). The steering vector for a direction \(\theta\) is designated as \({\mathbf{a}}(\theta )\). The received signal vector can be written as

$${\varvec{x}}(t) = {\varvec{A}}({\varvec{\theta}}){\varvec{s}}(t) + {\varvec{n}}(t)$$

where \({\varvec{s}}(t) = [s_{0} (t),\;s_{1} (t),\; \ldots ,\;s_{\eta } (t)]^{T}\) is the complex envelope vector, \({\varvec{n}}(t)\) is the noise vector, and \({\varvec{A}}({\varvec{\theta}})\) is the steering vector matrix, that is, \({\varvec{A}}({\varvec{\theta}}) = [{\varvec{a}}(\theta_{0} ),\;{\varvec{a}}(\theta_{1} ),\; \ldots ,\;{\varvec{a}}(\theta_{\eta } )]\). The incident signals are uncorrelated with each other so that the covariance matrix of \({\varvec{s}}(t)\) is given as a diagonal matrix. Noise is a complex Gaussian random process with zero mean and variance \(\sigma^{2}\) and is uncorrelated from sensor to sensor.

For simplicity \({\varvec{a}}_{0}\) is used instead of \({\varvec{a}}(\theta_{0} )\). The sensor outputs are weighted and combined to yield

$$y(t) = {\varvec{w}}^{H} {\varvec{x}}(t).$$

The optimum weight vector maximizing the output SINR is given by \({\varvec{w}}_{o} = \mu {\kern 1pt} {\varvec{R}}_{\varvec{x}}^{ - 1} {\varvec{a}}_{0}\) where \({\varvec{R}}_{\varvec{x}} = E[{\varvec{x}}(t){\varvec{\varvec{x}}}^{H} (t)]\) is the array covariance matrix of the received vector. Since the incoming signals are uncorrelated \({\varvec{R}}_{\varvec{x}}\) becomes a Toeplitz matrix. In the MVDR array, the constant \(\mu\) is written as \(\mu = 1/{\varvec{a}}_{0}^{H} {\kern 1pt} {\varvec{R}}_{\varvec{x}}^{ - 1} {\varvec{a}}_{0}\) according to the look direction constraint of \({\varvec{w}}^{H} {\varvec{a}}_{0} = 1\).
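As a quick numerical illustration of \({\varvec{w}}_{o} = \mu {\kern 1pt} {\varvec{R}}_{\varvec{x}}^{ - 1} {\varvec{a}}_{0}\) with the MVDR normalization, the sketch below forms the weight for a half-wavelength ULA with a centered phase reference. The array size, angles, and powers are assumed toy values, and the desired-signal term is left out of the covariance since including it changes only the scale \(\mu\):

```python
import numpy as np

M = 6
m = np.arange(M) - (M - 1) / 2                   # centered half-wavelength ULA
a0 = np.exp(1j * np.pi * m * np.sin(np.deg2rad(0.0)))    # look direction
ai = np.exp(1j * np.pi * m * np.sin(np.deg2rad(30.0)))   # one interference

# interference (power 10) plus unit-variance noise; desired term omitted
R = 10.0 * np.outer(ai, ai.conj()) + np.eye(M)

w = np.linalg.solve(R, a0)          # R^{-1} a0
w = w / (a0.conj() @ w)             # mu = 1 / (a0^H R^{-1} a0)
```

The resulting weight keeps unit gain at the look direction while placing a deep null on the strong interference at 30 degrees.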

The covariance matrix can be eigen-decomposed as

$${\varvec{R}}_{\varvec{x}} = \sum\limits_{m = 1}^{M} {\lambda_{m} {\varvec{e}}_{m} {\varvec{e}}_{m}^{H} }$$

where \(\lambda_{m}\), \(m = 1,\; \ldots ,\;M\), are the eigenvalues, arranged in decreasing order, and \({\varvec{e}}_{m}\) are the corresponding eigenvectors. The first \(\eta_{0}\) eigenvalues are larger than \(\sigma^{2}\). The eigen-decomposition of \({\varvec{R}}_{{\varvec{x}}}\) is rewritten as

$${\varvec{R}}_{\varvec{x}} = {\varvec{E}}_{s}{\varvec{\varLambda}}_{s} {\kern 1pt} {\varvec{E}}_{s}^{H} + {\varvec{E}}_{n}{\varvec{\varLambda}}_{n} {\varvec{E}}_{n}^{H}$$

where the diagonal elements of the \(\eta_{0} \times \eta_{0}\) diagonal matrix \({\varvec{\varLambda}}_{s}\) consist of the eigenvalues larger than \(\sigma^{2}\), the \(M \times \eta_{0}\) matrix \({\varvec{E}}_{s}\) is composed of the corresponding eigenvectors, and \({\varvec{\varLambda}}_{n} = \sigma^{2} {\varvec{I}}\). The columns of \({\varvec{E}}_{s}\) span the SI subspace. The steering vector \({\mathbf{a}}_{0}\) belongs to this subspace and is orthogonal to the columns of \({\varvec{E}}_{n}\). Thus the optimum vector is expressed as

$${\varvec{w}}_{o} = \mu {\varvec{E}}_{s}{\varvec{\varLambda}}_{s}^{ - 1} {\varvec{E}}_{s}^{H} {\varvec{a}}_{0} .$$

In reality, the array covariance matrix is unknown and should be estimated. With \(N\) data snapshots given, it can be estimated as

$$\hat{\varvec{R}}_{\varvec{x}} = \frac{1}{N}\sum\limits_{n = 1}^{N} {{\varvec{x}}(n){\varvec{x}}^{H} (n)} .$$
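A minimal simulation sketch of the data model and the sample matrix, under assumed parameters (half-wavelength spacing, a phase reference at the array center so that the steering vectors are conjugate symmetric, unit noise variance, and illustrative source powers):

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 10, 50                                    # sensors, snapshots
doas = np.deg2rad([0.0, -30.0, 15.0, 40.0])      # desired + three interferences
pw = np.array([1.0, 10.0, 10.0, 10.0])           # source powers (illustrative)

def steer(theta, M):
    # half-wavelength ULA, phase-centered so that J a(theta) = a(theta)*
    m = np.arange(M) - (M - 1) / 2
    return np.exp(1j * np.pi * m * np.sin(theta))

A = np.column_stack([steer(t, M) for t in doas])
# uncorrelated complex Gaussian envelopes and unit-variance sensor noise
S = (rng.standard_normal((4, N)) + 1j * rng.standard_normal((4, N))) \
    * np.sqrt(pw / 2)[:, None]
X = A @ S + (rng.standard_normal((M, N))
             + 1j * rng.standard_normal((M, N))) / np.sqrt(2)

R_hat = X @ X.conj().T / N                       # sample covariance matrix
```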

The eigenspace based (ESB) beamformer [6] utilizes the eigen-decomposition of \(\hat{\varvec{R}}_{{\varvec{x}}}\) according to (5).

3 Computationally efficient beamforming

The normal sample matrix is rectified to reflect the Toeplitz property so that the rectified matrix \(\overline{\varvec{R}}_{{\varvec{x}}}\) has the same entries along each diagonal. The operation of Toeplitz rectification is denoted by \(T( \cdot )\). Accordingly, \(\overline{\varvec{R}}_{{\varvec{x}}} = T(\hat{\varvec{R}}_{{\varvec{x}}} )\). We represent the (p,q)-elements of \({\varvec{R}}_{{\varvec{x}}}\), \(\hat{\varvec{R}}_{{\varvec{x}}}\) and \(\overline{\varvec{R}}_{{\varvec{x}}}\), respectively, by \(r_{pq}\), \(\hat{r}_{pq}\), and \(\overline{r}_{pq}\). The elements of \(\overline{\varvec{R}}_{{\varvec{x}}}\) are given as

$$\overline{r}_{1k} = \frac{1}{M + 1 - k}\sum\limits_{i = 1}^{M + 1 - k} {\hat{r}_{i(k - 1 + i)} }$$
$$\overline{r}_{1k} = \overline{r}_{2(k + 1)} = \cdots = \overline{r}_{(M + 1 - k)M}$$

where \(k = 1,\; \cdots ,\;M\). Clearly \(\overline{\varvec{R}}_{{\varvec{x}}}\) is Hermitian; thus \(\overline{r}_{qp} = \overline{r}_{pq}^{*}\). The elements in (8) constitute the kth diagonal (of the upper triangular part) of \(\overline{\varvec{R}}_{{\varvec{x}}}\).
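The diagonal averaging of (7) and (8) can be sketched as follows (a hypothetical helper; it assumes a Hermitian input so that the lower triangle follows by conjugation):

```python
import numpy as np

def toeplitz_rectify(R):
    """Average the entries of each upper diagonal of the (Hermitian)
    sample matrix and spread the mean back along that diagonal; the
    lower triangle is filled by Hermitian symmetry."""
    M = R.shape[0]
    Rbar = np.zeros(R.shape, dtype=complex)
    for k in range(M):                    # k-th super-diagonal
        m = np.mean(np.diagonal(R, k))
        i = np.arange(M - k)
        Rbar[i, i + k] = m
        Rbar[i + k, i] = np.conj(m)
    return Rbar
```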

The Toeplitz averaging of (7) leads to smaller errors in \(\overline{\varvec{R}}_{{\varvec{x}}}\), in the sense that

$$||\overline{\varvec{R}}_{{\varvec{x}}} - {\varvec{R}}_{{\varvec{x}}} || \le ||\hat{\varvec{R}}_{{\varvec{x}}} - {\varvec{R}}_{{\varvec{x}}} ||$$

where \(|| \cdot ||\) denotes the \(l_{1}\) norm, defined as the sum of the absolute values of the entries. Let \({\varvec{r}}_{k}\), \(\hat{\varvec{r}}_{k}\), and \(\overline{\varvec{r}}_{k}\) represent the kth diagonals of \({\varvec{R}}_{{\varvec{x}}}\), \(\hat{\varvec{R}}_{{\varvec{x}}}\), and \(\overline{\varvec{R}}_{{\varvec{x}}}\), respectively. Recall that \({\varvec{R}}_{{\varvec{x}}}\) is Toeplitz, so \({\varvec{r}}_{k}\) has identical entries. It is easy to see using (7) and (8) that

$$||\overline{\varvec{r}}_{k} - {\varvec{r}}_{k} || = |\sum\limits_{i = 1}^{L} {(\hat{r}_{i(k - 1 + i)} - r_{i(k - 1 + i)} )} |$$

where \(L = M + 1 - k\). Using (10), we have

$$\begin{aligned} & ||\hat{\varvec{r}}_{k} - {\varvec{r}}_{k} || = \sum\limits_{i = 1}^{L} {|\hat{r}_{i(k - 1 + i)} - r_{i(k - 1 + i)} |} \\ & \quad \quad \ge |\sum\limits_{i = 1}^{L} {(\hat{r}_{i(k - 1 + i)} - r_{i(k - 1 + i)} )} | = ||\overline{\varvec{r}}_{k} - {\varvec{r}}_{k} ||. \\ \end{aligned}$$

Equation (11) results in (9). The conjugate symmetric beamformer (CSB) in [3] exploits the centro-Hermitian matrix

$$\tilde{\varvec{R}}_{{\varvec{x}}} = \frac{{\hat{\varvec{R}}_{{\varvec{x}}} + \varvec{J\hat{R}}_{{\varvec{x}}}^{*} {\varvec{J}}}}{2}.$$

It is not difficult to see that though an entry \(\hat{r}_{pq}\) of the Hermitian matrix \(\hat{\varvec{R}}_{{\varvec{x}}}\), where \(q \ge p\), is moved by the operation \(\varvec{J\hat{R}}_{{\varvec{x}}}^{*} {\varvec{J}}\), it stays on the same \((q - p + 1){\text{st}}\) diagonal as before. Thus \(T(\hat{\varvec{R}}_{{\varvec{x}}} ) = T(\varvec{J\hat{R}}_{{\varvec{x}}}^{*} {\varvec{J}}) = T(\tilde{\varvec{R}}_{{\varvec{x}}} )\), which gives

$$||\overline{\varvec{R}}_{{\varvec{x}}} - {\varvec{R}}_{{\varvec{x}}} || \le ||\tilde{\varvec{R}}_{{\varvec{x}}} - {\varvec{R}}_{{\varvec{x}}} ||.$$

The proof of (13) is obtained by replacing \(\hat{\varvec{r}}_{k}\) in (11) with \(\tilde{\varvec{r}}_{k}\), the kth diagonal of \(\tilde{\varvec{R}}_{{\varvec{x}}}\). This indicates that \(\overline{\varvec{R}}_{{\varvec{x}}}\) is more accurate than \(\tilde{\varvec{R}}_{{\varvec{x}}}\) in terms of the \(l_{1}\) norm.
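The identity \(T(\hat{\varvec{R}}_{{\varvec{x}}} ) = T(\tilde{\varvec{R}}_{{\varvec{x}}} )\) used above is easy to confirm numerically; the sketch below uses arbitrary Hermitian sample data (sizes and seed are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
M = 6
X = rng.standard_normal((M, 20)) + 1j * rng.standard_normal((M, 20))
R_hat = X @ X.conj().T / 20                    # Hermitian sample matrix

J = np.fliplr(np.eye(M))
R_tilde = (R_hat + J @ R_hat.conj() @ J) / 2   # centro-Hermitian modification

def T(R):
    # Toeplitz rectification: diagonal averaging, Hermitian input assumed
    M = R.shape[0]
    out = np.zeros(R.shape, dtype=complex)
    for k in range(M):
        m = np.mean(np.diagonal(R, k))
        i = np.arange(M - k)
        out[i, i + k] = m
        out[i + k, i] = np.conj(m)
    return out

# the two rectifications coincide: T(R_hat) == T(R_tilde)
same = np.allclose(T(R_hat), T(R_tilde))
```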

The following theorem is presented, which enables us to find the weight vector of the proposed method via simple computation.


Theorem

An \(M \times M\) matrix V is Toeplitz and centro-Hermitian if and only if it is Toeplitz and Hermitian.


Proof

A (p,q)-entry of V is \(v_{pq}\). By definition, V is centro-Hermitian if \(v_{pq} = v_{(M + 1 - p)(M + 1 - q)}^{*}\). As V is Toeplitz and Hermitian,

$$v_{pq} = v_{(M + 1 - q)(M + 1 - p)} = v_{(M + 1 - p)(M + 1 - q)}^{*} ,$$

which proves the “if” part. If V is Toeplitz and centro-Hermitian,

$$v_{pq} = v_{(M + 1 - q)(M + 1 - p)} = v_{qp}^{*} ,$$

which proves the “only if” part.

By the theorem, the Toeplitz Hermitian matrix \(\overline{\varvec{R}}_{{\varvec{x}}}\) is also centro-Hermitian. Hence it can be transformed to a real matrix in such a way that

$$\overline{\varvec{R}}_{{\varvec{x}}}^{\prime } = {\varvec{Q}}^{H} \overline{\varvec{R}}_{{\varvec{x}}} {\kern 1pt} {\varvec{Q}}$$

where the unitary matrix Q is given by [3, 21, 22]

$${\varvec{Q}} = \left\{ {\begin{array}{*{20}c} {\frac{1}{\sqrt 2 }\left[ {\begin{array}{*{20}c} {\varvec{I}} & { - j{\varvec{J}}} \\ {\varvec{J}} & {j{\varvec{I}}} \\ \end{array} } \right]{\kern 1pt} ,} & {{\text{for}}\;{\text{even}}\;M} \\ {\frac{1}{\sqrt 2 }\left[ {\begin{array}{*{20}c} {\varvec{I}} & 0 & { - j{\varvec{J}}} \\ 0 & {\sqrt 2 } & 0 \\ {\varvec{J}} & 0 & {j{\varvec{I}}} \\ \end{array} } \right]{\kern 1pt} ,} & {{\text{for}}\;{\text{odd}}\;M} \\ \end{array} } \right..$$
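A small numerical check of Q (a sketch; note that for even M the lower-right block must be \(j{\varvec{I}}\) for Q to be both unitary and conjugate symmetric):

```python
import numpy as np

def Q_matrix(M):
    """Unitary, conjugate-symmetric transform that maps centro-Hermitian
    matrices to real ones; even-M lower-right block taken as jI."""
    m = M // 2
    I, J = np.eye(m), np.fliplr(np.eye(m))
    if M % 2 == 0:
        return np.block([[I, -1j * J], [J, 1j * I]]) / np.sqrt(2)
    z = np.zeros((m, 1))
    return np.block([[I, z, -1j * J],
                     [z.T, np.sqrt(2) * np.eye(1), z.T],
                     [J, z, 1j * I]]) / np.sqrt(2)

# any centro-Hermitian matrix becomes real under Q^H (.) Q
M = 7
rng = np.random.default_rng(5)
V = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
Jm = np.fliplr(np.eye(M))
V = (V + Jm @ V.conj() @ Jm) / 2          # force V to be centro-Hermitian
V_real = Q_matrix(M).conj().T @ V @ Q_matrix(M)
```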

The matrix Q is conjugate symmetric, i.e., \({\varvec{J}}{\kern 1pt} {\varvec{Q}} = {\varvec{Q}}^{*}\). The real matrix \(\overline{\varvec{R}}_{\varvec{x}}^{\prime }\) is eigen-decomposed as

$$\overline{\varvec{R}}_{\varvec{x}}^{\prime } = \sum\limits_{m = 1}^{M} {\overline{\lambda }_{m}^{\prime } \overline{\varvec{e}}_{m}^{\prime } \overline{\varvec{e}}_{m}^{\prime H} } = \overline{\varvec{E}}_{s}^{\prime } \overline{\varvec{\Lambda }}_{s}^{\prime } \overline{\varvec{E}}_{s}^{\prime H} + \overline{\varvec{E}}_{n}^{\prime } \overline{\varvec{\Lambda }}_{n}^{\prime } \overline{\varvec{E}}_{n}^{\prime H}$$

where \(\overline{\lambda }_{m}^{\prime }\) and \(\overline{\varvec{e}}_{m}^{\prime }\) are an eigenpair, \(\overline{\varvec{\Lambda }}_{s}^{\prime }\) is a diagonal matrix consisting of the \(\eta_{0}\) largest eigenvalues, \(\overline{\varvec{E}}_{s}^{\prime }\) is the corresponding eigenvector matrix, and the remaining eigenvalues and eigenvectors compose \(\overline{\varvec{\Lambda }}_{n}^{\prime }\) and \(\overline{\varvec{E}}_{n}^{\prime }\), respectively. Since Q is unitary, the eigenvalues and eigenvectors of \(\overline{\varvec{R}}_{{\varvec{x}}}\) corresponding to \(\overline{\varvec{\Lambda}}_{s}^{\prime }\) and \(\overline{\varvec{E}}_{s}^{\prime }\) are given by

$$\overline{\varvec{\Lambda }}_{s} = \overline{\varvec{\Lambda }}_{s}^{\prime }$$
$$\overline{\varvec{E}}_{s} = \varvec{Q}{\overline{\varvec{{E}}}}_{s}^{\prime } .$$

As the eigen-decomposition of \(\overline{\varvec{R}}_{{\varvec{x}}}\) can be obtained from the eigen-decomposition of \(\overline{\varvec{R}}_{{\varvec{x}}}^{\prime }\), the computational load becomes much less than directly eigen-decomposing \(\overline{\varvec{R}}_{{\varvec{x}}}\).

The column space of \(\overline{\varvec{E}}_{s}\) is an estimated SI subspace. In the proposed method, the weight vector is calculated as

$${\varvec{w}}_{p} = \mu \varvec{Q\overline{E}}_{s}^{\prime } \overline{\varvec{\Lambda }}_{s}^{\prime - 1} \overline{\varvec{E}}_{s}^{\prime H} {\varvec{Q}}^{H} {\varvec{a}}_{0}$$

so that it lies in the estimated SI subspace. The constant \(\mu\) can be given according to the look direction constraint of \({\varvec{w}}_{p}^{H} {\varvec{a}}_{0} = 1\). Since \({\varvec{a}}(\theta )\) is conjugate symmetric the vector \({\varvec{Q}}^{H} {\varvec{a}}(\theta )( \equiv \sqrt 2 {\varvec{\alpha}}(\theta ))\) becomes a real vector: for even M,

$${\varvec{\alpha}}(\theta ) = \left[ {\begin{array}{*{20}c} {{\text{Re}} ({\varvec{a}}_{1} (\theta ))} \\ {{\text{Im}} ({\varvec{a}}_{2} (\theta ))} \\ \end{array} } \right]{\kern 1pt}$$

where \({\varvec{a}}_{1} (\theta )\) and \({\varvec{a}}_{2} (\theta )\) are the vectors composed of, respectively, the first M/2 and last M/2 entries of \({\varvec{a}}(\theta )\). Equation (21) with the look direction constraint is rewritten as

$${\varvec{w}}_{p} = \frac{{{\varvec{Qb}}_{0} }}{{\sqrt 2 {\kern 1pt} {\varvec{b}}_{0}^{T} {\varvec{\alpha}}(\theta_{0} )}}$$

where \({\varvec{b}}_{0}\) is a real vector given as \({\varvec{b}}_{0} = \overline{\varvec{E}}_{s}^{\prime } \overline{\varvec{\Lambda }}_{s}^{\prime - 1} \overline{\varvec{E}}_{s}^{\prime H} {\varvec{\alpha}}(\theta_{0} )\). Note that the weight vector \({\varvec{w}}_{p}\) is conjugate symmetric.
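Putting the pieces together, a hedged end-to-end sketch of the proposed weight computation; the array parameters, source powers, and seed are illustrative assumptions, \(\eta_{0}\) is taken as known, and the phase reference is the array center so that the steering vectors are conjugate symmetric:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, eta0 = 8, 40, 3
thetas = np.deg2rad([0.0, -30.0, 15.0])          # desired + two interferences
pw = np.array([1.0, 10.0, 10.0])

def steer(theta, M):
    m = np.arange(M) - (M - 1) / 2               # centered phase reference
    return np.exp(1j * np.pi * m * np.sin(theta))

A = np.column_stack([steer(t, M) for t in thetas])
S = (rng.standard_normal((eta0, N)) + 1j * rng.standard_normal((eta0, N))) \
    * np.sqrt(pw / 2)[:, None]
X = A @ S + (rng.standard_normal((M, N))
             + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
R_hat = X @ X.conj().T / N                       # sample covariance

# Toeplitz rectification: diagonal averaging
R_bar = np.zeros_like(R_hat)
for k in range(M):
    mval = np.mean(np.diagonal(R_hat, k))
    i = np.arange(M - k)
    R_bar[i, i + k] = mval
    R_bar[i + k, i] = np.conj(mval)

# real transform via Q (even M)
m2 = M // 2
I2, J2 = np.eye(m2), np.fliplr(np.eye(m2))
Q = np.block([[I2, -1j * J2], [J2, 1j * I2]]) / np.sqrt(2)
Rp = (Q.conj().T @ R_bar @ Q).real               # real symmetric matrix

lam, E = np.linalg.eigh(Rp)                      # ascending eigenvalues
Es, Ls = E[:, -eta0:], lam[-eta0:]               # SI subspace of R_bar'

alpha = (Q.conj().T @ steer(thetas[0], M)).real / np.sqrt(2)
b0 = Es @ ((Es.T @ alpha) / Ls)                  # E' Lambda'^{-1} E'^H alpha
w_p = Q @ b0 / (np.sqrt(2) * (b0 @ alpha))       # weight vector of (23)
```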

The direction of the desired signal may not be correctly known a priori. The proposed method can be made robust to direction errors by introducing a direction estimation step. Using the eigen-decomposition of \(\overline{\varvec{R}}_{{\varvec{x}}}^{\prime }\), we can efficiently estimate the arrival angle based on multiple signal classification (MUSIC) [23]. Let \(\Theta_{d}\) be an angular sector for the desired signal. When an angle \(\theta_{00}\) is given as the initial direction, the sector is set as \(\Theta_{d} = [\theta_{00} - \delta \theta ,\;\theta_{00} + \delta \theta ]\) with \(\delta \theta\) a constant. According to the MUSIC principle, the estimate is obtained as

$$\hat{\theta }_{0} = \arg \mathop {\max }\limits_{{\theta \in \Theta_{d} }} g(\theta )$$

where

$$g(\theta ) = {\varvec{\alpha}}^{T} (\theta )\overline{\varvec{E}}_{s}^{\prime } \overline{\varvec{E}}_{s}^{\prime T} {\varvec{\alpha}}(\theta ).$$

Then, the weight vector is calculated as (23) with \({\varvec{\alpha}}(\hat{\theta }_{0} )\) in place of \({\varvec{\alpha}}(\theta_{0} )\).
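The sector search for \(\hat{\theta }_{0}\) can be sketched as below; the exact covariance is used so that the peak falls at the true direction, and the 3-degree offset, sector width, and search grid are assumptions for the sketch:

```python
import numpy as np

M, eta0 = 8, 3
theta_true = np.deg2rad([3.0, -30.0, 15.0])      # desired signal at 3 degrees
pw = np.array([1.0, 10.0, 10.0])

def steer(theta, M):
    m = np.arange(M) - (M - 1) / 2               # centered phase reference
    return np.exp(1j * np.pi * m * np.sin(theta))

A = np.column_stack([steer(t, M) for t in theta_true])
R = A @ np.diag(pw) @ A.conj().T + np.eye(M)     # exact Toeplitz covariance

m2 = M // 2
I2, J2 = np.eye(m2), np.fliplr(np.eye(m2))
Q = np.block([[I2, -1j * J2], [J2, 1j * I2]]) / np.sqrt(2)
lam, E = np.linalg.eigh((Q.conj().T @ R @ Q).real)
Es = E[:, -eta0:]                                # real SI-subspace basis

theta00 = 0.0                                    # initial (erroneous) pointing
grid = np.deg2rad(np.linspace(theta00 - 5, theta00 + 5, 201))
alphas = np.stack([(Q.conj().T @ steer(t, M)).real / np.sqrt(2) for t in grid])
g = np.sum((alphas @ Es) ** 2, axis=1)           # g(theta) over the sector
theta_hat = grid[np.argmax(g)]                   # lands near 3 degrees
```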

The computational cost for the estimation of \(\theta_{0}\) is \(O(N_{p} \eta_{0} M)\) real multiplications, where \(N_{p}\) is the number of search points in the sector \(\Theta_{d}\). The CSB needs the inversion of \(\tilde{\varvec{R}}_{{\varvec{x}}}\). The matrix \(\tilde{\varvec{R}}_{{\varvec{x}}}\) is centro-Hermitian, so it too can be transformed to a real matrix as in (16). The computational load for the weight vector \({\varvec{w}}_{p}\), aside from the cost of direction estimation, is similar to that of CSB. The estimation of the INCM can be employed for robustness and fast convergence. In [20], an estimate of the INCM is extracted from the sample matrix without estimating the interference powers. However, the directions of all incident signals must be estimated for the extraction of the INCM (EINCM). The proposed method, which needs direction estimation only for the desired signal, has far less computational complexity than the EINCM method.

In practice, \(\eta_{0}\) would not be known in advance. It can be estimated with the minimum description length (MDL) criterion [24]. As N tends to infinity, the sample matrix \(\hat{\varvec{R}}_{{\varvec{x}}}\) approaches the array covariance matrix \({\varvec{R}}_{{\varvec{x}}}\), and so does \(\overline{\varvec{R}}_{{\varvec{x}}}\). The MUSIC estimator then correctly finds \(\theta_{0}\) provided the true direction lies inside \(\Theta_{d}\). In that case the weight vector \({\varvec{w}}_{p}\) becomes identical to the optimum \({\varvec{w}}_{o}\) and the proposed method achieves the maximum SINR.

4 Simulation

A ULA is employed that consists of ten sensors with an interelement spacing of half wavelength, on which four uncorrelated signals impinge from \(\theta_{0} = 0^{{\text{o}}}\), \(\theta_{1} = - 30^{{\text{o}}}\), \(\theta_{2} = 15^{{\text{o}}}\), and \(\theta_{3} = 40^{{\text{o}}}\) relative to array broadside. The input interference-to-noise ratios (INRs) are equal.

First, in Fig. 1, where \(N = 20\), we investigate the errors, in terms of the \(l_{1}\) norm, in the estimated covariance matrices: the normal \(\hat{\varvec{R}}_{{\varvec{x}}}\), the centro-Hermitian (CH) \(\tilde{\varvec{R}}_{{\varvec{x}}}\), and the Toeplitz rectified (TR) \(\overline{\varvec{R}}_{{\varvec{x}}}\). The errors are normalized with respect to \({\varvec{R}}_{{\varvec{x}}}\) and averaged over 1000 independent simulation runs. The input INR is 5 dB larger than the input signal-to-noise ratio (SNR). On the whole, the errors decrease as SNR increases. The error in \(\overline{\varvec{R}}_{{\varvec{x}}}\) is the smallest, which confirms (9) and (13). The matrix actually used by the proposed method is \(\overline{\varvec{E}}_{s} \overline{\varvec{\Lambda }}_{s}^{ - 1} \overline{\varvec{E}}_{s}^{H}\). In Fig. 2, the errors, normalized with respect to \({\varvec{E}}_{s}{\varvec{\varLambda}}_{s}^{ - 1} {\varvec{E}}_{s}^{H}\), in the corresponding matrices formed from the SI components of \(\hat{\varvec{R}}_{{\varvec{x}}}\), \(\tilde{\varvec{R}}_{{\varvec{x}}}\), and \(\overline{\varvec{R}}_{{\varvec{x}}}\) are shown. As in Fig. 1, they decrease with increasing SNR. The error in \(\overline{\varvec{E}}_{s} \overline{\varvec{\Lambda }}_{s}^{ - 1} \overline{\varvec{E}}_{s}^{H}\) is smaller than the others, from which the proposed method is expected to perform better than ESB [6] and CSB [3]. Now, let us investigate the SINR performance.

Fig. 1 Normalized errors with respect to \({\varvec{R}}_{{\varvec{x}}}\) in \(\hat{\varvec{R}}_{{\varvec{x}}}\), \(\tilde{\varvec{R}}_{{\varvec{x}}}\), and \(\overline{\varvec{R}}_{{\varvec{x}}}\)

Fig. 2 Normalized errors with respect to \({\varvec{E}}_{s}{\varvec{\varLambda}}_{s}^{ - 1} {\varvec{E}}_{s}^{H}\) in the corresponding matrices of \(\hat{\varvec{R}}_{{\varvec{x}}}\), \(\tilde{\varvec{R}}_{{\varvec{x}}}\), and \(\overline{\varvec{R}}_{{\varvec{x}}}\)

The output SINR of the proposed method is compared with those of existing methods: the ESB, the CSB, the structured maximum likelihood (SML) [4], the unit circle roots constraint (UCRC) [9], and the EINCM [20]. Unless the number of snapshots is very small, the number of incident signals can be accurately detected by the MDL; \(\eta_{0}\) is assumed known in the proposed method, ESB, and EINCM. In SML, the exact noise variance \(\sigma^{2}\) is used. In addition to the root constraint, UCRC employs a null movement to preserve the mainbeam [9]. In the proposed method, \(\delta \theta = 5^{{\text{o}}}\) is used to set the angular sector \(\Theta_{d}\). The pointing error is defined as \(\theta_{e} = \theta_{00} - \theta_{0}\). In Figs. 3, 4, 5, 6, there are no pointing errors, i.e., \(\theta_{e} = 0\). Unless the number of snapshots is infinite, the SINRs are averaged over 200 independent runs.

Fig. 3 SINR versus SNR in the absence of pointing error when \(N = 30\) and INR is 5 dB larger than SNR

Fig. 4 SINR versus INR in the absence of pointing error when \(N = 50\) and \({\text{SNR}} = 5\;{\text{dB}}\)

Fig. 5 SINR against N in the absence of pointing error with \({\text{SNR}} = 5\;{\text{dB}}\) and \({\text{INR}} = 10\;{\text{dB}}\)

Fig. 6 SINR against N in the absence of pointing error with SNR = 10 dB and INR = 15 dB

Figure 3 illustrates the SINRs as functions of the input SNR when \(N = 30\). The input INR is 5 dB stronger than SNR. Though SML has a higher SINR than CSB, it is slightly inferior to ESB. Among the existing methods except EINCM, UCRC shows the best performance, but its SINR is much smaller than that of the proposed method, which is close to the optimum despite the relatively small number of snapshots, especially when SNR is not large. The SINR of EINCM at very small SNR is slightly lower than that of the proposed method; otherwise, its SINR is virtually identical to the optimum irrespective of SNR.

When \(N = 50\) and SNR = 5 dB, Fig. 4 displays the SINRs against INR. The effect of INR on the performance of the beamformers is, on the whole, small, their SINRs showing little variation with respect to INR. As in Fig. 3, SML works better than CSB but is inferior to ESB, and UCRC is superior to ESB. The SINRs of the proposed method and EINCM, which outperform the others, are so close to the optimum that they appear to overlap.

The performances are presented as functions of \(N\) with SNR = 5 dB and INR = 10 dB in Fig. 5 and with SNR = 10 dB and INR = 15 dB in Fig. 6. It is seen in the absence of pointing error that the SINRs approach the optimum as \(N\) increases. The EINCM converges so fast that its SINR for \(N \ge 10\) is within 0.2 dB of the optimum in both examples. Comparing Figs. 5 and 6, we observe that the convergence rates of the beamformers except EINCM become slower in Fig. 6 with the increased desired signal power. The proposed beamformer at SNR = 10 dB shows a slightly lower convergence rate than EINCM, while at SNR = 5 dB it has nearly the same performance as EINCM. In contrast, the SINR of UCRC, which shows the fastest convergence among the remaining beamformers, is within 1 dB of the optimum only for \(N \ge 300\) in Fig. 5 and for \(N \ge 600\) in Fig. 6.

The effect of pointing error is investigated in Figs. 7 and 8, where SNR = 5 dB and INR = 10 dB. In Fig. 7, where the number of snapshots is infinite so that \({\varvec{R}}_{{\varvec{x}}}\) is available, the steady state SINRs are illustrated against \(\theta_{e}\). Over the plotted range of \(\theta_{e}\), the true direction lies within \(\Theta_{d}\), and the proposed method finds \(\theta_{0}\) accurately. The SINRs of the proposed method and EINCM equal the optimum regardless of \(\theta_{e}\). Though the other beamformers also attain the optimum at \(\theta_{e} = 0\), their performance is degraded by the pointing error. Especially, SML and CSB, which lack robustness and thus have the same performance as SMI, suffer severe performance degradation.

Fig. 7 SINR versus \(\theta_{e}\) with SNR = 5 dB and INR = 10 dB when \({\varvec{R}}_{{\varvec{x}}}\) is available

Fig. 8 SINR versus N with SNR = 5 dB and INR = 10 dB when \(\theta_{e} = 2^{{\text{o}}}\)

Figure 8 displays the SINRs as functions of N with \(\theta_{e} = 2^{{\text{o}}}\). As can be seen from Fig. 7, the steady state SINRs of SML and CSB at \(\theta_{e} = 2^{{\text{o}}}\) are less than − 4 dB. The performances of the beamformers converge to the corresponding steady state SINRs in Fig. 7. The proposed beamformer, which has substantially the same performance as EINCM, converges so quickly that its SINR is within 0.5 dB of the optimum for \(N \ge 10\). Comparison of Figs. 5 and 8 shows that the performance curves of the proposed method and EINCM in Fig. 8 are essentially identical to the respective ones in Fig. 5, indicating a negligible effect of the pointing error on these beamformers.

5 Conclusion

A computationally efficient subspace beamforming method has been presented using the rectified matrix \(\overline{\varvec{R}}_{{\varvec{x}}}\) under the Toeplitz covariance. The simple computation results from the fact that \(\overline{\varvec{R}}_{{\varvec{x}}}\) is centro-Hermitian, which allows us to obtain its eigen-decomposition from the real matrix \(\overline{\varvec{R}}_{\varvec{x}}^{\prime }\). The rectified matrix \(\overline{\varvec{R}}_{{\varvec{x}}}\) is, in terms of the \(l_{1}\) norm, more accurate than the matrix \(\tilde{\varvec{R}}_{{\varvec{x}}}\) used by CSB. As a result, the eigen-decomposition of the former has smaller errors than that of the latter, as shown in Fig. 2. Simulation results demonstrate the effectiveness of the proposed method, which converges very quickly to the optimum, outperforming the existing beamformers that exploit known properties without estimating the INCM. Moreover, it shows virtually the same performance as the computationally expensive EINCM.

Availability of data and materials

Data will be available on reasonable request.





Abbreviations

MVDR: Minimum variance distortionless response
ULA: Uniform linear array
SINR: Signal-to-interference-plus-noise ratio
SMI: Sample matrix inversion
INCM: Interference plus noise covariance matrix
EINCM: Extraction of INCM
ESB: Eigenspace based
CSB: Conjugate symmetric beamformer
MUSIC: Multiple signal classification
MDL: Minimum description length
INR: Interference-to-noise ratio
TR: Toeplitz rectified
SNR: Signal-to-noise ratio
SML: Structured maximum likelihood
UCRC: Unit circle roots constrained


References

1. H.L. Van Trees, Detection, Estimation, and Modulation Theory, Part IV, Optimum Array Processing (Wiley, New York, 2002)
2. R. Nitzberg, Application of maximum likelihood estimation of persymmetric covariance matrices to adaptive processing. IEEE Trans. Aerosp. Electron. Syst. 16(1), 124–127 (1980)
3. K.-C. Huarng, C.-C. Yeh, Adaptive beamforming with conjugate symmetric weights. IEEE Trans. Antennas Propag. 39(7), 926–932 (1991)
4. A. De Maio, Maximum likelihood estimation of structured persymmetric covariance matrices. Signal Process. 83(3), 633–640 (2003)
5. D.R. Fuhrmann, Application of Toeplitz covariance estimation to adaptive beamforming and detection. IEEE Trans. Signal Process. 39(10), 2194–2198 (1991)
6. S.-J. Yu, J.-H. Lee, Statistical performance of eigenspace-based adaptive array beamformers. IEEE Trans. Antennas Propag. 44(5), 665–671 (1996)
7. D.D. Feldman, An analysis of the projection method for robust adaptive beamforming. IEEE Trans. Antennas Propag. 44(1), 1023–1029 (1996)
8. A. Steinhardt, J. Guerci, STAP for RADAR: what works, what doesn’t, and what’s in store, in Proceedings of IEEE Radar Conference (2004), pp. 469–473
9. A. Shaw, J. Smith, A. Hassanien, MVDR beamformer design by imposing unit circle roots constraints for uniform linear arrays. IEEE Trans. Signal Process. 69, 6116–6130 (2021)
10. P. Vallet, P. Loubaton, On the performance of MUSIC with Toeplitz rectification in the context of large arrays. IEEE Trans. Signal Process. 65(22), 5848–5859 (2017)
11. S. Liu, Z. Mao, Y.D. Zhang, Y. Huang, Rank minimization-based Toeplitz reconstruction for DoA estimation using coprime array. IEEE Commun. Lett. 25(7), 2265–2269 (2021)
12. M.A. Doron, A.J. Weiss, Performance analysis of direction finding using lag redundancy averaging. IEEE Trans. Signal Process. 41, 1386–1391 (1993)
13. A. Gorokhov, Y. Abramovich, J.F. Bohme, Unified analysis of DOA estimation algorithms for covariance matrix transforms. Signal Process. 55, 107–115 (1996)
14. Y.-H. Choi, Interference subspace approximation based adaptive beamforming in the presence of a desired signal. IEE Proc. Radar Sonar Navig. 152(4), 232–238 (2005)
15. J. Xie, H. Li, Z. He, C. Li, A robust adaptive beamforming method based on the matrix reconstruction against a large DOA mismatch. EURASIP J. Adv. Signal Process. 2014, 1–10 (2014)
16. X. Zhu, Z. Ye, X. Xu, R. Zheng, Covariance matrix reconstruction via residual noise elimination and interference powers estimation for robust adaptive beamforming. IEEE Access 7, 53262–53272 (2019)
17. H. Yang, P. Wang, Z. Ye, Robust adaptive beamforming via covariance matrix reconstruction and interference power estimation. IEEE Commun. Lett. 25(10), 3394–3397 (2021)
18. S. Mohammadzadeh, V. Nascimento, R. Lamare, O. Kukrer, Covariance matrix reconstruction based on power spectral estimation and uncertainty region for robust adaptive beamforming. IEEE Trans. Aerosp. Electron. Syst. 59(4), 3848–3858 (2023)
19. Z. Zheng, Y. Zheng, W.-Q. Wang, H. Zhang, Covariance matrix reconstruction with interference steering vector and power estimation for robust adaptive beamforming. IEEE Trans. Veh. Technol. 67(9), 8495–8503 (2018)
20. Y.-H. Choi, Adaptive beamforming for maximum SINR in the presence of correlated interferences. Digit. Signal Process. 146, 104370 (2024)
21. D.A. Linebarger, R.D. DeGroat, E.M. Dowling, Efficient direction-finding methods employing forward/backward averaging. IEEE Trans. Signal Process. 42(8), 2136–2145 (1994)
22. A. Lee, Centrohermitian and skew-centrohermitian matrices. Linear Algebra Appl. 29, 205–210 (1980)
23. R.O. Schmidt, Multiple emitter location and signal parameter estimation. IEEE Trans. Antennas Propag. 34(3), 276–280 (1986)
24. M. Wax, T. Kailath, Detection of signals by information theoretic criteria. IEEE Trans. Acoust. Speech Signal Process. 33(2), 387–392 (1985)


Author information

The paper has been prepared and written solely by Yang-Ho Choi. Correspondence to Yang-Ho Choi.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The author declares that he has no conflicts of interest.


Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit



Cite this article

Choi, YH. Simple subspace based adaptive beamforming under Toeplitz covariances. EURASIP J. Adv. Signal Process. 2024, 61 (2024).
