### 3.1 INCM reconstruction

Similar to the assumptions adopted in [20,21,22,23,24,25,26,27,28,29], we assume that the SOI region and the locations of the interferences are mutually separate. In this subsection, we utilize the GLQ to compute Eq. (13) efficiently; the GLQ can be expressed in the following form [17]:

$$\int\limits_{ - 1}^{1} {\rho (z)f(z){\text{d}}z} \approx \sum\limits_{j = 1}^{J} {A_{j} f(z_{j} )} ,$$

(14)

where \(\rho (z)\) represents the weight function, which is equal to the constant 1 in GLQ; \(f(z)\) is the integral function; \(A_{j}\) and \(z_{j} ,j = 1,2, \ldots ,J\) are independent of \(f(z)\) and denote the coefficients and nodes of GLQ, respectively. The nodes are generally the roots of the Legendre polynomial, indicated by

$${\text{Leg}}_{n} (z) = \frac{1}{{2^{n} n!}}\frac{{{\text{d}}^{n} }}{{{\text{d}}z^{n} }}\{ (z^{2} - 1)^{n} \} ,$$

(15)

where *n* is the order of the Legendre polynomial. Taking the trade-off between computational efficiency and numerical accuracy into account, we implement INCM reconstruction in terms of the fifth-order Legendre polynomial, which can be written as

$${\text{Leg}}_{5} (z) = \frac{1}{8}(63z^{5} - 70z^{3} + 15z).$$

(16)

Five nodes can be obtained according to the roots of Eq. (16):

$$\{ z_{1} ,z_{2} ,z_{3} ,z_{4} ,z_{5} \} = \left\{ { - \sqrt {\frac{{35 + 2\sqrt {70} }}{63}} , - \sqrt {\frac{{35 - 2\sqrt {70} }}{63}} ,0,\sqrt {\frac{{35 - 2\sqrt {70} }}{63}} ,\sqrt {\frac{{35 + 2\sqrt {70} }}{63}} } \right\}.$$

(17)
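As a quick numerical check (a minimal Python sketch using only NumPy; not part of the original derivation), the closed-form nodes of Eq. (17) can be recovered by finding the roots of the fifth-order Legendre polynomial of Eq. (16):

```python
import numpy as np

# Coefficients of Leg_5(z) = (63 z^5 - 70 z^3 + 15 z) / 8, highest power first
coeffs = np.array([63, 0, -70, 0, 15, 0]) / 8
numeric_roots = np.sort(np.roots(coeffs).real)

# Closed-form nodes from Eq. (17)
r1 = np.sqrt((35 + 2 * np.sqrt(70)) / 63)
r2 = np.sqrt((35 - 2 * np.sqrt(70)) / 63)
closed_form = np.array([-r1, -r2, 0.0, r2, r1])

print(np.allclose(numeric_roots, closed_form))  # True
```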

Substituting Eq. (17) into Eq. (14) yields the following formula:

$$\int\limits_{ - 1}^{1} {f(z){\text{d}}z} \approx \sum\limits_{j = 1}^{5} {A_{j} f(z_{j} )} .$$

(18)

According to the principle of GLQ, Eq. (18) holds strictly when \(f(z)\) is taken in turn as \(1, \, z^{1} , \, z^{2} , \, z^{3} ,\) and \(z^{4}\), which yields

$$\begin{aligned} \int\limits_{ - 1}^{1} { \, 1\;{\text{d}}z} & = z_{1}^{0} A_{1} + z_{2}^{0} A_{2} + z_{3}^{0} A_{3} + z_{4}^{0} A_{4} + z_{5}^{0} A_{5} \\ \int\limits_{ - 1}^{1} { \, z^{1} \;{\text{d}}z} & = z_{1}^{1} A_{1} + z_{2}^{1} A_{2} + z_{3}^{1} A_{3} + z_{4}^{1} A_{4} + z_{5}^{1} A_{5} \\ \int\limits_{ - 1}^{1} {z^{2} \;{\text{d}}z} & = z_{1}^{2} A_{1} + z_{2}^{2} A_{2} + z_{3}^{2} A_{3} + z_{4}^{2} A_{4} + z_{5}^{2} A_{5} \\ \int\limits_{ - 1}^{1} {z^{3} \;{\text{d}}z} & = z_{1}^{3} A_{1} + z_{2}^{3} A_{2} + z_{3}^{3} A_{3} + z_{4}^{3} A_{4} + z_{5}^{3} A_{5} \\ \int\limits_{ - 1}^{1} {z^{4} \;{\text{d}}z} & = z_{1}^{4} A_{1} + z_{2}^{4} A_{2} + z_{3}^{4} A_{3} + z_{4}^{4} A_{4} + z_{5}^{4} A_{5} . \\ \end{aligned}$$

(19)

Equation (19) can be expressed in another form, \({\mathbf{ZA = F}}\), where \({\mathbf{A = }}[A_{1} ,A_{2} ,A_{3} ,A_{4} ,A_{5} ]^{T}\) represents the coefficient vector to be determined, \({\mathbf{F = }}[2,0,{2 \mathord{\left/ {\vphantom {2 3}} \right. \kern-0pt} 3},0,{2 \mathord{\left/ {\vphantom {2 5}} \right. \kern-0pt} 5}]^{T}\), and \({\mathbf{Z}}\) denotes the node matrix.

$${\mathbf{Z}} = \left[ {\begin{array}{*{20}c} 1 & 1 & 1 & 1 & 1 \\ {z_{1}^{1} } & {z_{2}^{1} } & {z_{3}^{1} } & {z_{4}^{1} } & {z_{5}^{1} } \\ {z_{1}^{2} } & {z_{2}^{2} } & {z_{3}^{2} } & {z_{4}^{2} } & {z_{5}^{2} } \\ {z_{1}^{3} } & {z_{2}^{3} } & {z_{3}^{3} } & {z_{4}^{3} } & {z_{5}^{3} } \\ {z_{1}^{4} } & {z_{2}^{4} } & {z_{3}^{4} } & {z_{4}^{4} } & {z_{5}^{4} } \\ \end{array} } \right].$$

(20)

According to Eq. (20), \({\mathbf{Z}}\) is a \(5 \times 5\) Vandermonde matrix. Because the nodes \(\{ z_{1} ,z_{2} , \ldots ,z_{5} \}\) are distinct, \({\mathbf{Z}}\) is invertible. Therefore, the coefficient vector can be determined as \({\mathbf{A = Z}}^{ - 1} {\mathbf{F}}\). The quantities on the right side of Eq. (18) are then fully determined, and the fifth-order GLQ can be realized.
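The Vandermonde solve \({\mathbf{A = Z}}^{ - 1} {\mathbf{F}}\) can be sketched numerically as follows (an illustrative Python snippet using NumPy; the cross-check against NumPy's built-in Gauss-Legendre rule is an addition for verification, not part of the original derivation):

```python
import numpy as np

# Five Gauss-Legendre nodes from Eq. (17)
r1 = np.sqrt((35 + 2 * np.sqrt(70)) / 63)
r2 = np.sqrt((35 - 2 * np.sqrt(70)) / 63)
z = np.array([-r1, -r2, 0.0, r2, r1])

# Node (Vandermonde) matrix Z of Eq. (20) and moment vector F of Eq. (19)
Z = np.vander(z, 5, increasing=True).T   # row k holds z_1^k ... z_5^k
F = np.array([2, 0, 2 / 3, 0, 2 / 5])    # integrals of 1, z, ..., z^4 over [-1, 1]

A = np.linalg.solve(Z, F)                # coefficient vector A = Z^{-1} F

# Cross-check against NumPy's built-in five-point Gauss-Legendre rule
nodes_np, weights_np = np.polynomial.legendre.leggauss(5)
print(np.allclose(A, weights_np), np.isclose(A.sum(), 2.0))
```

The coefficients sum to 2, the length of the integration interval \([-1, 1]\), as expected from the first row of Eq. (19).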

Since the nodes \(\{ z_{1} ,z_{2} , \ldots ,z_{5} \}\) lie in the interval \([ - 1,1]\), whereas INCM reconstruction operates over the angular sectors \(\Theta_{l} = [\theta_{l}^{{{\text{low}}}} ,\theta_{l}^{{{\text{up}}}} ],\quad l = 1,2, \ldots ,L\), these nodes cannot be used directly to reconstruct the INCM. We therefore linearly map the nodes from the interval \([ - 1,1]\) to the angular sectors \(\Theta_{l}\) of the *L* interferences as follows [27]:

$$\theta_{lj} = \frac{{\theta_{l}^{{{\text{up}}}} - \theta_{l}^{{{\text{low}}}} }}{2}z_{j} + \frac{{\theta_{l}^{{{\text{up}}}} + \theta_{l}^{{{\text{low}}}} }}{2},\quad j = 1,2, \ldots ,5,$$

(21)

where \(\theta_{lj}\) denote the angular nodes within each interference region bounded by \([\theta_{l}^{{{\text{low}}}} ,\theta_{l}^{{{\text{up}}}} ]\). Then, by adjusting the integration interval, Eq. (18) is transformed into

$$\int\limits_{{\theta_{l}^{{{\text{low}}}} }}^{{\theta_{l}^{{{\text{up}}}} }} {f(\theta ){\text{d}}\theta } \approx \frac{{\theta_{l}^{{{\text{up}}}} - \theta_{l}^{{{\text{low}}}} }}{2}\sum\limits_{j = 1}^{5} {A_{j} f(\theta_{lj} )} .$$

(22)
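The affine map of Eq. (21) and the scaled rule of Eq. (22) can be exercised on any smooth test function; the sketch below (assuming NumPy; the integrand \(\cos\theta\) and the interval bounds are illustrative choices) shows that five nodes already reach near machine precision:

```python
import numpy as np

nodes, weights = np.polynomial.legendre.leggauss(5)

def glq_interval(f, lo, up):
    """Five-point GLQ on [lo, up] via the affine map of Eq. (21)."""
    theta = 0.5 * (up - lo) * nodes + 0.5 * (up + lo)
    return 0.5 * (up - lo) * np.sum(weights * f(theta))

# Example: the integral of cos over [0.2, 1.0] has closed form sin(1.0) - sin(0.2)
approx = glq_interval(np.cos, 0.2, 1.0)
exact = np.sin(1.0) - np.sin(0.2)
print(abs(approx - exact) < 1e-12)  # True
```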

In Eq. (22), the continuous integral over the interval \(\Theta_{l}\) is replaced with a linear combination of the function values at five nodes. Substituting \({\mathbf{r}}(\theta )\) as the integrand into Eq. (22), the novel INCM can be obtained as follows:

$$\begin{aligned} {\tilde{\mathbf{R}}}_{i + n} & = \frac{1}{2}\int\limits_{{\Theta_{i} }} {{\mathbf{r}}(\theta ){\text{d}}\theta } + \hat{\sigma }_{n}^{2} {\mathbf{I}} = \frac{1}{2}\sum\limits_{l = 1}^{L} {\int\limits_{{\theta_{l}^{{{\text{low}}}} }}^{{\theta_{l}^{{{\text{up}}}} }} {{\mathbf{r}}(\theta ){\text{d}}\theta } } + \hat{\sigma }_{n}^{2} {\mathbf{I}} \\ & \approx \frac{1}{2}\sum\limits_{l = 1}^{L} {\frac{{\theta_{l}^{{{\text{up}}}} - \theta_{l}^{{{\text{low}}}} }}{2}\sum\limits_{j = 1}^{J} {A_{j} {\mathbf{r}}(\theta_{lj} )} } + \hat{\sigma }_{n}^{2} {\mathbf{I}} = \sum\limits_{l = 1}^{L} {\frac{{\theta_{l}^{{{\text{up}}}} - \theta_{l}^{{{\text{low}}}} }}{4}\sum\limits_{j = 1}^{J} {A_{j} {\mathbf{r}}(\theta_{lj} )} } + \hat{\sigma }_{n}^{2} {\mathbf{I}}, \\ \end{aligned}$$

(23)

where \(J = 5\) and \({\mathbf{r}}(\theta_{lj} )\) is numerically computed as

$${\mathbf{r}}(\theta_{lj} ) = \sum\limits_{q = 1}^{Q} {\frac{{{\overline{\mathbf{a}}}_{q} {\overline{\mathbf{a}}}_{q}^{\text{H}} }}{{{\overline{\mathbf{a}}}_{q}^{\text{H}} {\hat{\mathbf{R}}}_{x}^{ - 1} {\overline{\mathbf{a}}}_{q} }}} .$$

(24)

Here, \(Q\) denotes the number of sampling points within the uncertainty set, and \({\overline{\mathbf{a}}}_{q} \in \delta_{{\mathbf{a}}} (\theta_{lj} )\) stands for an SV located on the surface of the sphere around \({\mathbf{a}}(\theta_{lj} )\), because \(\delta_{{\mathbf{a}}} (\theta_{lj} )\) contains collinear SVs [19]. For clarity, the comparison between Eqs. (13) and (23) is shown schematically in Fig. 2. The essence of Eq. (23) is to replace the volume integral over the dashed region with a linear combination of the integrals over *J* spherical uncertainty sets colored in blue. Additionally, owing to the use of GLQ, the number of sampling points within each interference interval in Eq. (23) is significantly smaller than that in Eq. (13). Therefore, the proposed GLQ-based method enhances the computational efficiency of the algorithm stated in [19]. The performance of the reconstructed \({\tilde{\mathbf{R}}}_{i + n}\) is evaluated through the numerical simulations presented in Sect. 4.1.
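The reconstruction of Eqs. (23)-(24) can be sketched as follows. This is a toy illustration under stated assumptions, not the authors' implementation: it assumes a ULA with half-wavelength spacing, illustrative interferer powers and sector bounds, and a simple random sampling of \(Q\) points on the uncertainty sphere of radius \(\varepsilon\):

```python
import numpy as np
rng = np.random.default_rng(0)

M = 10                                   # sensors (assumed ULA, half-wavelength spacing)
def sv(theta):                           # nominal steering vector a(theta)
    return np.exp(1j * np.pi * np.arange(M) * np.sin(theta))

# Toy sample covariance: two strong interferers plus unit-power noise (illustrative)
theta_i = [np.deg2rad(-30), np.deg2rad(40)]
R_hat = sum(100 * np.outer(sv(t), sv(t).conj()) for t in theta_i) + np.eye(M)
R_inv = np.linalg.inv(R_hat)

nodes, weights = np.polynomial.legendre.leggauss(5)   # J = 5 GLQ rule

def r_theta(theta, Q=16, eps=0.1):
    """Eq. (24): Capon-weighted sum over Q SVs on the uncertainty sphere."""
    out = np.zeros((M, M), dtype=complex)
    for _ in range(Q):
        d = rng.standard_normal(M) + 1j * rng.standard_normal(M)
        a_bar = sv(theta) + eps * d / np.linalg.norm(d)   # point on sphere surface
        out += np.outer(a_bar, a_bar.conj()) / (a_bar.conj() @ R_inv @ a_bar).real
    return out

# Eq. (23): GLQ over each interference sector, plus the noise term
sectors = [(np.deg2rad(-35), np.deg2rad(-25)), (np.deg2rad(35), np.deg2rad(45))]
R_in = np.eye(M, dtype=complex)          # noise term with assumed sigma_n^2 = 1
for lo, up in sectors:
    thetas = 0.5 * (up - lo) * nodes + 0.5 * (up + lo)
    R_in += 0.25 * (up - lo) * sum(w * r_theta(t) for w, t in zip(weights, thetas))

print(np.allclose(R_in, R_in.conj().T))  # reconstructed INCM is Hermitian
```

Only \(J \times Q\) steering vectors are evaluated per sector, which is the source of the computational saving over the dense volume-integral sampling of [19].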

### 3.2 SV correction of the SOI

The eigen-decomposition of SCM is as follows:

$${\hat{\mathbf{R}}}_{x} = \sum\limits_{m = 1}^{M} {\alpha_{m} {\mathbf{u}}_{m} {\mathbf{u}}_{m}^{\text{H}} } = {\mathbf{U}}_{s} {{\varvec{\Lambda}}}_{s} {\mathbf{U}}_{s}^{\text{H}} + {\mathbf{U}}_{n} {{\varvec{\Lambda}}}_{n} {\mathbf{U}}_{n}^{\text{H}} .$$

(25)

Here, \(\alpha_{1} \ge \alpha_{2} \ge \cdots \ge \alpha_{M}\) stand for the eigenvalues of \({\hat{\mathbf{R}}}_{x}\) sorted in descending order, \({\mathbf{u}}_{m}\) represents the eigenvector corresponding to \(\alpha_{m}\), \({\mathbf{U}}_{s} = [{\mathbf{u}}_{1} ,{\mathbf{u}}_{2} , \ldots ,{\mathbf{u}}_{L + 1} ]\) spans the signal subspace, and \({\mathbf{U}}_{n} = [{\mathbf{u}}_{L + 2} , \ldots ,{\mathbf{u}}_{M} ]\) spans the noise subspace. Because the number of sources, \(L + 1\), can be obtained by applying the approach developed in [30], it is assumed to be known as prior information [31]. On the basis of the orthogonality between the signal and noise subspaces, and because the actual SV, \({\mathbf{a}}_{0}\), belongs to the former, the following formula can be derived:

$${\mathbf{a}}_{0}^{\text{H}} {\mathbf{U}}_{n} {\mathbf{U}}_{n}^{\text{H}} {\mathbf{a}}_{0} = 0.$$

(26)
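The orthogonality in Eq. (26) can be illustrated numerically (a minimal sketch assuming a ULA, an exact covariance model, and illustrative source powers and angles):

```python
import numpy as np

M, L = 8, 2                                   # sensors; L interferers plus one SOI
def sv(theta):                                # assumed ULA steering vector
    return np.exp(1j * np.pi * np.arange(M) * np.sin(theta))

angles = np.deg2rad([5.0, -30.0, 40.0])       # SOI first, then interferers
powers = [10.0, 100.0, 100.0]                 # illustrative source powers

# Exact covariance: L + 1 uncorrelated sources plus unit-power noise
R = sum(p * np.outer(sv(t), sv(t).conj()) for p, t in zip(powers, angles)) + np.eye(M)

vals, vecs = np.linalg.eigh(R)                # eigenvalues in ascending order
U_n = vecs[:, : M - (L + 1)]                  # noise subspace: M - (L + 1) smallest

a0 = sv(angles[0])
residual = np.linalg.norm(U_n.conj().T @ a0)  # should vanish per Eq. (26)
print(residual < 1e-8)                        # True
```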

Thereafter, according to Eq. (26), the SV estimation problem can be described as [16]

$$\begin{aligned} & \mathop {\min }\limits_{{{\tilde{\mathbf{a}}}_{0} }} \;{\tilde{\mathbf{a}}}_{0}^{\text{H}} {\mathbf{U}}_{n} {\mathbf{U}}_{n}^{\text{H}} {\tilde{\mathbf{a}}}_{0} \\ & {\text{s.t.}}\;\left\| {{\tilde{\mathbf{a}}}_{0} - {\hat{\mathbf{a}}}_{0} } \right\|_{2} \le \varepsilon \\ & \quad \;\left\| {{\tilde{\mathbf{a}}}_{0} } \right\|_{2} = \sqrt M , \\ \end{aligned}$$

(27)

where \({\tilde{\mathbf{a}}}_{0}\) denotes the optimized SV. The first constraint of Eq. (27) represents an uncertainty set restriction on \({\tilde{\mathbf{a}}}_{0}\), which guarantees that the optimal value is searched in the neighborhood of \({\hat{\mathbf{a}}}_{0}\) and prevents its convergence to the interference SV. The second constraint restricts the constant modulus to \({\tilde{\mathbf{a}}}_{0}\).

Similarly, the SOI covariance matrix, \({\tilde{\mathbf{R}}}_{s}\), can be obtained from Eq. (23), and its eigen-decomposition yields

$${\tilde{\mathbf{R}}}_{s} = \frac{{\theta_{0}^{{{\text{up}}}} - \theta_{0}^{{{\text{low}}}} }}{4}\sum\limits_{j = 1}^{J} {A_{j} {\mathbf{r}}(\theta_{0j} )} + \hat{\sigma }_{n}^{2} {\mathbf{I}} = \sum\limits_{m = 1}^{M} {\tau_{m} {\mathbf{v}}_{m} {\mathbf{v}}_{m}^{\text{H}} } .$$

(28)

A column-orthogonal matrix \({\mathbf{V}}_{\eta } = [{\mathbf{v}}_{1} ,{\mathbf{v}}_{2} , \ldots ,{\mathbf{v}}_{\eta } ]\) composed of the principal eigenvectors can then be formed. Because the \(\eta\) largest eigenvalues occupy most of the total eigenvalue energy, \({\tilde{\mathbf{R}}}_{s} \approx {\mathbf{V}}_{\eta } {\mathbf{\Xi V}}_{\eta }^{\text{H}}\) can be obtained, where the diagonal of \({{\varvec{\Xi}}}\) is filled with the \(\eta\) largest eigenvalues of \({\tilde{\mathbf{R}}}_{s}\). Hence, \({\mathbf{V}}_{\eta }\) spans the signal subspace, which suggests that the SV of the SOI can be formulated as a linear combination of the columns of \({\mathbf{V}}_{\eta }\) [32]. Consequently, the SV of the SOI is expressed as

$${\tilde{\mathbf{a}}}_{0} = \sqrt M {\mathbf{V}}_{\eta } {\mathbf{b}},$$

(29)

where \({\mathbf{b}}\) is the rotating vector with \(\left\| {\mathbf{b}} \right\|_{2} = 1\). \({\tilde{\mathbf{R}}}_{s}\) originates from the volume integral over the uncertainty set around \({\hat{\mathbf{a}}}_{0}\), and any SV located within the spherical uncertainty set can be expressed on the basis of \({\mathbf{V}}_{\eta }\); therefore, \({\tilde{\mathbf{a}}}_{0} \in \delta_{{\mathbf{a}}} (\hat{\theta }_{0} )\), i.e., \(\left\| {{\tilde{\mathbf{a}}}_{0} - {\hat{\mathbf{a}}}_{0} } \right\|_{2} \le \varepsilon\) always holds. Thus, Eq. (29) ensures that the optimal SV is searched in the neighborhood of \({\hat{\mathbf{a}}}_{0}\), with no possibility of convergence to the interference SV. Consequently, the first constraint in Eq. (27) can be omitted. Substituting Eq. (29) into Eq. (27), the following formula can be derived:

$$\begin{aligned} & \mathop {\min }\limits_{{\mathbf{b}}} \;M{\mathbf{b}}^{\text{H}} {\mathbf{R}}_{V} {\mathbf{b}} \\ & {\text{s.t.}}\;\left\| {\mathbf{b}} \right\|_{2} = 1, \\ \end{aligned}$$

(30)

where \({\mathbf{R}}_{V} = {\mathbf{V}}_{\eta }^{\text{H}} {\mathbf{U}}_{n} {\mathbf{U}}_{n}^{\text{H}} {\mathbf{V}}_{\eta }\). In Eq. (30), *M* is a constant factor and can be omitted, which yields

$$\begin{aligned} & \mathop {\min }\limits_{{\mathbf{b}}} \;{\mathbf{b}}^{\text{H}} {\mathbf{R}}_{V} {\mathbf{b}} \\ & {\text{s.t.}}\;\left\| {\mathbf{b}} \right\|_{2} = 1. \\ \end{aligned}$$

(31)

On the basis of the Lagrange multiplier method [33], we derive the solution for Eq. (31). The Lagrangian function is constructed as

$$L({\mathbf{R}}_{V} ,{\mathbf{b}}) = {\mathbf{b}}^{\text{H}} {\mathbf{R}}_{V} {\mathbf{b}} + \xi (1 - {\mathbf{b}}^{\text{H}} {\mathbf{b}}),$$

(32)

where \(\xi\) represents the Lagrange multiplier. Thereafter, we calculate the derivative of Eq. (32) and solve for its root as follows:

$$\frac{{\partial L({\mathbf{R}}_{V} ,{\mathbf{b}})}}{{\partial {\mathbf{b}}}} = {\mathbf{R}}_{V} {\mathbf{b}} - \xi {\mathbf{b}} = 0.$$

(33)

According to Eq. (33), we obtain \({\mathbf{R}}_{V} {\mathbf{b}} = \xi {\mathbf{b}}\). Substituting Eq. (33) into Eq. (31), we can conclude that to minimize the objective function, \(\xi\) should be the minimum eigenvalue of \({\mathbf{R}}_{V}\) and \({\mathbf{b}}\) is the eigenvector corresponding to this minimum eigenvalue, denoted as \({\mathbf{b}}_{\eta }\). Therefore, \({\tilde{\mathbf{a}}}_{0} = \sqrt M {\mathbf{V}}_{\eta } {\mathbf{b}}_{\eta }\) can be gained.
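The closed-form solution of Eq. (31) can be sketched as follows (the orthonormal subspaces \({\mathbf{V}}_{\eta }\) and \({\mathbf{U}}_{n}\) are generated at random purely for illustration; in the algorithm they come from Eqs. (25) and (28)):

```python
import numpy as np
rng = np.random.default_rng(2)

M, eta = 8, 3
# Hypothetical orthonormal bases standing in for V_eta and U_n
V, _ = np.linalg.qr(rng.standard_normal((M, eta)) + 1j * rng.standard_normal((M, eta)))
U, _ = np.linalg.qr(rng.standard_normal((M, M - 4)) + 1j * rng.standard_normal((M, M - 4)))

R_V = V.conj().T @ U @ U.conj().T @ V          # R_V of Eq. (30)
vals, vecs = np.linalg.eigh(R_V)               # eigenvalues in ascending order
b_eta = vecs[:, 0]                             # eigenvector of the minimum eigenvalue
a_tilde = np.sqrt(M) * V @ b_eta               # corrected SV per Eq. (29)

# Constraints of Eq. (27): unit rotating vector and constant modulus sqrt(M)
print(np.isclose(np.linalg.norm(b_eta), 1.0),
      np.isclose(np.linalg.norm(a_tilde), np.sqrt(M)))
```

Because \({\mathbf{V}}_{\eta }\) has orthonormal columns, the constant-modulus constraint \(\left\| {\tilde{\mathbf{a}}}_{0} \right\|_{2} = \sqrt{M}\) is satisfied automatically by the construction of Eq. (29).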

The weight vector of the array can be yielded with the resulting \({\tilde{\mathbf{R}}}_{i + n}\) and \({\tilde{\mathbf{a}}}_{0}\):

$${\mathbf{w}} = \frac{{{\tilde{\mathbf{R}}}_{i + n}^{ - 1} {\tilde{\mathbf{a}}}_{0} }}{{{\tilde{\mathbf{a}}}_{0}^{\text{H}} {\tilde{\mathbf{R}}}_{i + n}^{ - 1} {\tilde{\mathbf{a}}}_{0} }}.$$

(34)
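A minimal numerical check of Eq. (34) (toy INCM with a single interferer and an assumed ULA; all values illustrative) confirms the distortionless response toward the SOI and the null on the interferer:

```python
import numpy as np

M = 8
a0 = np.exp(1j * np.pi * np.arange(M) * np.sin(np.deg2rad(5)))   # SOI SV at 5 degrees
# Toy reconstructed INCM: one interferer at 40 degrees plus unit-power noise
ai = np.exp(1j * np.pi * np.arange(M) * np.sin(np.deg2rad(40)))
R_in = 100 * np.outer(ai, ai.conj()) + np.eye(M)

Ra = np.linalg.solve(R_in, a0)            # R^{-1} a0 without forming the inverse
w = Ra / (a0.conj() @ Ra)                 # weight vector of Eq. (34)

# Unit (distortionless) gain toward the SOI; deep suppression of the interferer
print(np.isclose(w.conj() @ a0, 1.0), abs(w.conj() @ ai) < 0.05)
```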

### 3.3 Summary of the proposed algorithm

Unlike previous methods, the proposed approach combines the GLQ with the integral over spherical uncertainty sets to obtain more comprehensive information while reducing computational complexity. The idea of our work is partly inspired by [19]; however, the computational complexity of the algorithm in [19] is high, which is reduced in our study by introducing the GLQ. The proposed method can thus be considered an improved version combining the advantages of [17] and [19]. Consequently, the INCM and SOI covariance matrix can be reconstructed more accurately than with traditional methods. In addition, the nominal SV of the SOI can be adequately corrected through the foregoing SV estimation. A summary of the proposed algorithm is given below:

| Steps | Details |
|---|---|
| (1) | Compute the coefficients \(A_{j}\) and nodes \(z_{j}\) of GLQ by applying Eqs. (16) and (19) |
| (2) | Calculate the angular nodes using Eq. (21), and reconstruct \({\tilde{\mathbf{R}}}_{i + n}\) through Eq. (23) |
| (3) | Obtain the optimized \({\tilde{\mathbf{a}}}_{0}\) via Eqs. (28) and (29), and derive the weight vector using Eq. (34) |

According to these steps, Step (1) can be computed in advance from the order of the GLQ; hence, the complexity of our algorithm is concentrated mainly in Steps (2) and (3) and is roughly \(O\{ \max (JQM^{2} ,M^{3} )\}\). As \(J\) is significantly smaller than \(C\), our approach is preferable to the algorithm stated in [19] (\(O\{ \max (CQM^{2} ,M^{3.5} )\}\)) in terms of computational complexity.