Open Access

Robust group compressive sensing for DOA estimation with partially distorted observations

EURASIP Journal on Advances in Signal Processing 2016, 2016:128

Received: 3 August 2016

Accepted: 18 November 2016

Published: 1 December 2016


In this paper, we propose a robust direction-of-arrival (DOA) estimation algorithm based on group sparse reconstruction utilizing signals observed at multiple frequencies. The group sparse reconstruction scheme for DOA estimation is solved through the complex multitask Bayesian compressive sensing algorithm by exploiting the group sparse property of the received multi-frequency signals. We then propose a robust reconstruction algorithm for the presence of distorted signals. In particular, we consider a problem where the observed data in some frequencies are distorted due to, e.g., interference contamination. In this case, the residual error follows an impulsive Gaussian mixture distribution instead of a Gaussian distribution because some of the estimation errors depart significantly from the mean of the estimation error distribution. Thus, the least-squares constraint used in conventional sparse reconstruction algorithms may lead to a failed reconstruction. By exploiting the maximum correntropy criterion, which is inherently insensitive to impulsive noise, a weighting vector is derived to automatically mitigate the effect of the distorted narrowband signals, and a robust group compressive sensing approach is developed to achieve reliable DOA estimation. The robustness and effectiveness of the proposed algorithm are verified through simulation results.


Compressive sensing; DOA estimation; Group sparsity; Robust group compressive sensing; Maximum correntropy criterion

1 Introduction

Direction-of-arrival (DOA) estimation is an important array signal processing technique that finds broad applications in radar, sonar, navigation, and wireless communications. In the past decades, a number of high-resolution DOA estimation algorithms, such as the Capon estimator [1], multiple signal classification (MUSIC) [2], estimation of signal parameters via rotational invariance techniques (ESPRIT) [3], and the maximum likelihood (ML) estimator [4], have been proposed. Recently, compressive sensing and sparse reconstruction methods have become attractive in DOA estimation by exploiting the spatial sparsity of the signal arrivals (e.g., [5–8]). Various reconstruction algorithms are available in the literature to obtain the desired sparse solutions, including the least absolute shrinkage and selection operator (Lasso) [9], orthogonal matching pursuit (OMP) [10], and Bayesian compressive sensing (BCS) [11].

When multiple observations are available due to the use of, e.g., multiple polarizations and/or multiple frequencies, group sparse reconstruction algorithms exploit these observations to better determine the sparse signal support, thus achieving improved sparse reconstruction performance compared with conventional methods that do not account for the group sparse property. A number of group sparse reconstruction algorithms, such as the group Lasso (gLasso) [12], block orthogonal matching pursuit (BOMP) [13], and multitask Bayesian compressive sensing (MT-BCS) [14] algorithms, have been developed for this purpose. The BCS algorithm based on the relevance vector machine (RVM) [15] has spawned a family of sparse signal recovery algorithms, and it is applied in this paper to solve the sparse reconstruction problem because it achieves superior performance and is less sensitive to the coherence of the dictionary entries. To handle the complex values involved in the underlying DOA estimation problem, a complex value can be decomposed into its real and imaginary components [16]. In this paper, we exploit the complex multitask Bayesian compressive sensing (CMT-BCS) algorithm [17], which achieves improved performance by exploiting the sparsity pattern shared by the real and imaginary components of the complex-valued observations.

In [6], DOA estimation of wideband signals is examined by using a coprime array. In this approach, the wideband signals are divided into multiple subbands. The problem is considered in the context of group sparsity, i.e., the positions of the sparse DOA entries corresponding to the respective DOAs are shared by all the frequency subbands. That is, signal observations obtained from the virtual sensors as a result of difference coarray are fused in the context of group sparse reconstruction.

In this paper, we consider a more challenging situation in which a subset of the subbands is distorted due to, e.g., calibration errors, filter malfunction, and/or narrowband interference contamination. A direction finding method with sensor gain and phase uncertainties is proposed in [18] which can simultaneously estimate the DOAs of the signals and the array parameters. However, this method requires the array perturbations to be small and, due to the joint iteration between the DOAs and the array parameters, it suffers from suboptimal convergence. In [19], a DOA estimation method is developed for a partially calibrated array by minimizing a certain cost function. Both methods in [18, 19] have a high computational complexity due to the required spatial search and iterations. By modeling the imperfections of the partially calibrated array, an ESPRIT-based method is proposed in [20] that avoids the spatial search and iterations, thus achieving a low computational complexity.

When a subset of the subbands is distorted, the manifold matrix is distorted in these subbands. In this case, the abovementioned DOA estimation methods [18–20], as well as methods developed under the conventional group sparse reconstruction framework that exploit data information across all the subbands, will suffer from performance degradation.

Because the manifold matrix is distorted at these subbands, the residual errors follow an impulsive Gaussian mixture distribution, i.e., some of the error points lie far from the mean of the error distribution and can be viewed as outliers. The effect of such outliers is amplified in the DOA estimation process due to the minimum mean square error (MMSE) criterion used in the conventional sparse reconstruction scheme. Similarly, the group sparsity property is distorted and, as a result, the performance of group sparsity-based DOA estimation is degraded. In order to achieve robust reconstruction performance, the outliers caused by the subband distortion must be suppressed. As described in [21, 22], the correntropy can be treated as a generalized correlation function characterized by a kernel bandwidth, which controls the observation window and provides an effective mechanism to eliminate the detrimental effect of outliers. Intrinsically different from threshold-based methods, the correntropy can be applied to nonlinear and non-Gaussian signal processing. Based on the concept of correntropy, a new cost function, referred to as the maximum correntropy criterion (MCC), is proposed in [22] that achieves robust results for observations corrupted by non-Gaussian noise and other types of outliers. The MCC has been successfully applied in various applications, such as radar localization [23] and ellipse fitting [24].

Motivated by this fact, the objective of this paper is to develop a novel robust group compressive sensing and sparse reconstruction algorithm that can be applied for DOA estimation when the amplitude and phase information is destroyed for a subset of the subbands. The proposed approach uses the MCC, instead of the conventionally used least square measure, to fuse the data observed at different subbands. By applying a weighting vector that is automatically optimized in the proposed method, the observations at the distorted subbands are adaptively suppressed, leading to a robust sparse reconstruction and DOA estimation performance in the presence of distorted subband signals.

The remainder of this paper is organized as follows. In Section 2, we present the signal model based on multi-frequency signals. In Section 3, we first introduce a group sparse representation scheme for DOA estimation utilizing the multi-frequency signal model and then briefly review the CMT-BCS algorithm, which is used for group sparsity-based DOA estimation. In Section 4, the effect of narrowband interference on the group sparse reconstruction algorithm is examined, and the concept of the MCC is introduced. The proposed robust group sparse reconstruction algorithm for robust DOA estimation utilizing multi-frequency signals is then provided. The performance of the proposed algorithm is evaluated through computer simulations in Section 5. Finally, conclusions are drawn in Section 6.

Notations: We use lower-case (upper-case) bold characters to denote vectors (matrices). In particular, I N denotes the N×N identity matrix, 1 N denotes an N×1 all-one vector, and 0 m×n denotes an all-zero matrix or, when n=1, an all-zero vector. (·)^* denotes the complex conjugate, and (·)^T and (·)^H respectively denote the transpose and conjugate transpose of a matrix or vector. vec(·) denotes the vectorization operator that turns a matrix into a vector by concatenating all the columns, and diag{x} denotes a diagonal matrix with the elements of x constituting the diagonal entries. ‖·‖_2 denotes the ℓ_2-norm of a vector, whereas ‖·‖_1 denotes the ℓ_1-norm. E[·] denotes the statistical expectation operator. ⊗ denotes the Kronecker product, ∘ denotes the Hadamard product, and ⊙ denotes the Khatri-Rao product. ⌈x⌉ is the smallest integer greater than or equal to x.

2 Signal model

Assume there are K far-field sources impinging on an M-element uniform linear array (ULA) with an interelement spacing of d. The signals are assumed to occupy a frequency band consisting of L consecutive or disconnected subbands centered at frequencies f 1, f 2, ⋯, f L, respectively. In this section, no signal distortions are considered. The effect of signal distortion in a subset of frequency subbands is discussed in Section 4.

At frequency f l , l=1,,L, the array received signal at the sampling time t can be expressed as
$$ \tilde{\boldsymbol{x}}_{[l]}(t) = \exp \left({j2\pi {f_{l}}t} \right)\sum\limits_{k = 1}^{K} {s_{[l]}^{(k)}(t){\boldsymbol{a}_{[l]}}({\theta_{k}})} + {\tilde{\boldsymbol{n}}_{[l]}}(t), $$
where θ k is the DOA of the kth source and \(s_{[l]}^{(k)}(t)\) is the corresponding signal coefficient at frequency f l . Different source signals \(s_{[l]}^{(1)}(t),\cdots,s_{[l]}^{(K)}(t)\) are assumed to be uncorrelated. In addition, for the lth frequency, a [l](θ k ) is the steering vector corresponding to the kth signal, expressed as
$$ {\boldsymbol{a}_{[l]}}({\theta_{k}}) = \left[1,e^{- j{\frac{2\pi d}{\lambda_{l}}\sin {\theta_{k}}}}, \cdots,e^{- j{\frac{2\pi d}{\lambda_{l}}}(M - 1)\sin {\theta_{k}}} \right]^{T}, $$

where λ l =c/f l is the wavelength corresponding to f l and c is the propagation velocity. Furthermore, \({\tilde {\boldsymbol {n}}_{[l]}}(t)\) denotes the additive white Gaussian noise vector.
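As a numerical illustration of Eq. (2), the ULA steering vector can be computed as follows (a minimal Python sketch; the function name and argument conventions are ours, not from the paper):

```python
import numpy as np

def steering_vector(theta_deg, M, d, wavelength):
    """ULA steering vector a_[l](theta) of an M-element array with
    interelement spacing d at wavelength lambda_l = c / f_l (Eq. (2))."""
    theta = np.deg2rad(theta_deg)
    m = np.arange(M)  # sensor indices 0, ..., M-1
    return np.exp(-1j * 2.0 * np.pi * (d / wavelength) * m * np.sin(theta))
```

As Eq. (2) requires, the first element is 1 and every entry has unit modulus; only the phase progression across sensors depends on θ.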

After down converting the array received signal vector \(\tilde {\boldsymbol {x}}_{[l]}(t)\) to baseband, followed by low-pass filtering, the baseband signal corresponding to the lth subband can be represented as
$$ \begin{aligned} {\boldsymbol{x}_{[l]}}(t) &= \sum\limits_{k = 1}^{K} {s_{[l]}^{(k)}(t){\boldsymbol{a}_{[l]}}({\theta_{k}})} + {\boldsymbol{n}_{[l]}}(t)\\ &= {\boldsymbol{A}_{[l]}}{\boldsymbol{s}_{[l]}}(t) + {\boldsymbol{n}_{[l]}}(t), \end{aligned} $$

with l=1,⋯,L, where A [l]=[a [l](θ 1),⋯,a [l](θ K )], \({\boldsymbol {s}_{[l]}}(t) = {\left [s_{[l]}^{(1)}(t), \cdots,s_{[l]}^{(K)}(t)\right ]^{T}}\), and n [l](t) is the zero-mean noise term with covariance matrix \(\sigma _{[l]}^{2}{\boldsymbol {I}_{M}}\).

The covariance matrix of the received signal corresponding to the lth subband is expressed as
$$ {\boldsymbol{R}_{\boldsymbol{x}[l]}} = E\left[ {{\boldsymbol{x}_{[l]}}(t)\boldsymbol{x}_{[l]}^{H}(t)} \right] = \boldsymbol{A}_{[l]}(\theta){\boldsymbol{R}_{\boldsymbol{s}[l]}}{\boldsymbol{A}_{[l]}^{H}}(\theta) + \sigma_{[l]}^{2}{\boldsymbol{I}_{M}}, $$
where \({\boldsymbol {R}_{\boldsymbol {s}[l]}} = E[{\boldsymbol {s}_{[l]}}(t)\boldsymbol {s}_{[l]}^{H}(t)] = \textmd {diag}\{ \sigma _{1[l]}^{2}, \cdots,\sigma _{K[l]}^{2}\} \) is the source covariance matrix with \(\sigma _{k[l]}^{2}\) denoting the kth source power at the lth subband. In practice, the covariance matrix in Eq. (4) can be estimated utilizing the collected T samples, i.e.,
$$ {\hat {\boldsymbol{R}}_{\boldsymbol{x}[l]}} = \frac{1}{T}\sum\limits_{t = 1}^{T} {{\boldsymbol{x}_{[l]}}(t)\boldsymbol{x}_{[l]}^{H}(t)}. $$
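The estimate in Eq. (5) is simply an average of snapshot outer products; a brief sketch (our own helper, assuming the T snapshots are stacked as the columns of an M×T matrix):

```python
import numpy as np

def sample_covariance(X):
    """Sample covariance R_hat_x[l] = (1/T) sum_t x_[l](t) x_[l](t)^H
    (Eq. (5)), where X is M x T with one array snapshot per column."""
    _, T = X.shape
    return (X @ X.conj().T) / T
```

The result is an M×M Hermitian matrix, as required by the covariance model in Eq. (4).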

3 DOA estimation based on group sparse reconstruction

In this section, the group sparse property of the received signals across different subbands is exploited to estimate the DOAs through a group sparsity-based solution. A Bayesian sparse learning method, termed CMT-BCS [17], is utilized to effectively solve the sparse reconstruction problem.

3.1 Group sparse signal representation

In order to implement the CS-based DOA estimation, we vectorize the covariance matrix \({\hat {\boldsymbol {R}}_{\boldsymbol {x}[l]}}\) to an M 2×1 vector, which is expressed as
$$ \boldsymbol{y}_{[l]} = \textmd{vec}\left({\hat{\boldsymbol{R}}_{x[l]}}\right) = {\hat {\boldsymbol{A}}_{[l]}}(\boldsymbol{\theta}){\boldsymbol{b}_{[l]}} + \sigma_{[l]}^{2}{\boldsymbol{i}}, $$

where \({\hat {\boldsymbol {A}}_{[l]}}(\boldsymbol \theta) = \left [{\hat {\boldsymbol {a}}_{[l]}}({\theta _{1}}), \cdots,{\hat {\boldsymbol {a}}_{[l]}}({\theta _{K}})\right ]\), \(\hat {\boldsymbol {a}}_{[l]}({\theta _{k}}) = {\boldsymbol {a}_{[l]}^{*}}({\theta _{k}}) \otimes \boldsymbol {a}_{[l]}({\theta _{k}})\) is the generalized steering vector, \( {\mathbf {b}}_{[l]}=\left [\sigma _{1[l]}^{2}, \cdots,\sigma _{K[l]}^{2}\right ]^{T}\), and i=vec(I M ).

Denote the discretized search grid of the DOA as \(\tilde {\boldsymbol \theta }=[{\tilde \theta _{1}},{\tilde \theta _{2}}, \cdots,{\tilde \theta _{Q}}]\). We assume that the number of the grid points Q is much greater than the number of sources, i.e., QK. Then, the manifold dictionary matrix corresponding to the signal at the lth subband can be expressed as
$$ {\tilde{\boldsymbol{A}}_{[l]}} = \left[ {{{\hat {\boldsymbol{a}}}_{[l]}}\left({{\tilde \theta }_{1}}\right), \cdots,{{\hat {\boldsymbol{a}}}_{[l]}}\left({{\tilde \theta }_{Q}}\right)} \right], $$
where \({{\hat {\boldsymbol {a}}}_{[l]}}({\tilde \theta }_{q})= {\boldsymbol {a}_{[l]}^{*}}({{\tilde \theta }_{q}}) \otimes \boldsymbol {a}_{[l]}({{\tilde \theta }_{q}})\). Then, the sparse representation of y [l] in Eq. (6) can be expressed as
$$ {\boldsymbol{y}_{[l]}} = {\tilde {\boldsymbol{A}}_{[l]}} \tilde {\boldsymbol{b}}_{[l]} + \sigma_{[l]}^{2}{\boldsymbol{i}}, $$

where vector \(\tilde {\boldsymbol {b}}_{[l]}\) defines the spatial power spectrum over different grid points.

For each subband l, l=1,⋯,L, the elements of \(\tilde {\boldsymbol {b}}_{[l]}\) take different values, but all L vectors share a common support corresponding to the K sources, which are sparsely located in the spatial domain. Thus, the underlying problem can be regarded as a group sparse reconstruction problem of locating the non-zero entries of \(\tilde {\boldsymbol {b}}_{[l]},l=1,\cdots,L\).
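The per-subband dictionary of Eq. (7) stacks the generalized steering vectors over the search grid; the following minimal sketch makes this construction concrete (the function name and grid convention are ours):

```python
import numpy as np

def generalized_dictionary(grid_deg, M, d, wavelength):
    """Manifold dictionary A_tilde_[l] of Eq. (7): the q-th column is the
    generalized steering vector a*(theta_q) kron a(theta_q), so the
    resulting dictionary has size M^2 x Q."""
    m = np.arange(M)
    cols = []
    for theta_deg in grid_deg:
        a = np.exp(-1j * 2.0 * np.pi * (d / wavelength)
                   * m * np.sin(np.deg2rad(theta_deg)))
        cols.append(np.kron(a.conj(), a))  # generalized steering vector
    return np.column_stack(cols)
```

With a Q-point grid, the dictionary is M²×Q, matching the dimension of the vectorized covariance in Eq. (6).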

By stacking vectors y [1],,y [L] into a single vector \({\boldsymbol {y}}=[\boldsymbol {y}_{[1]}^{T},\cdots,\boldsymbol {y}_{[L]}^{T}]^{T}\), we have the following sparse representation vector
$$ {\boldsymbol{y}} = {\tilde {\boldsymbol{A}}}\tilde {\boldsymbol{b}} + {{\mathbf{z}}}, $$

where \(\tilde {\boldsymbol {A}}=\textmd {diag}\{{\tilde {\boldsymbol {A}}_{[1]}},\cdots,{\tilde {\boldsymbol {A}}_{[L]}}\}\) is a block diagonal matrix, \(\tilde {\boldsymbol {b}}=\left[\tilde {\boldsymbol {b}}_{[1]}^{T},\cdots,\tilde {\boldsymbol {b}}_{[L]}^{T}\right]^{T}\), and \( {\mathbf {z}} = {[\sigma _{[1]}^{2}{\boldsymbol {i}^{T}}, \cdots,\sigma _{[L]}^{2}{\boldsymbol {i}^{T}}]^{T}}\). Note that \(\tilde {\boldsymbol {b}}\) has only LK non-zero entries corresponding to the elements \(\sigma _{k[l]}^{2}\), where k=1,⋯,K and l=1,⋯,L. The positions of the non-zero entries in \(\tilde {\boldsymbol {b}}\) represent the sources’ DOAs. The elements of \(\tilde {\boldsymbol {b}}\) satisfy the group sparse property, i.e., the non-zero entries corresponding to the different frequencies share the same support.

Thus, the DOA estimation can be achieved by solving the following 1-norm minimization problem
$$ \begin{aligned} \min\limits_{\tilde{\boldsymbol{b}}}&\left\| {\xi (\tilde{\boldsymbol{b}})} \right\|_{1}\\ \text{s.t.}&\left\| {\boldsymbol{y} - \tilde {\boldsymbol{A}}\tilde{\boldsymbol{b}}} \right\|_{2} < \varepsilon, \end{aligned} $$

where ξ(·) is an operator that takes the ℓ_2-norm of the L entries corresponding to each spatial grid point, and ε is a user-specific tolerance parameter. CS-based DOA estimation methods have been successfully used for multi-frequency and wideband DOA estimation problems [6, 25]. As described earlier, a number of group sparse reconstruction algorithms can solve the above ℓ_1-norm minimization problem. In this paper, the CMT-BCS algorithm [17], which effectively handles complex group sparse problems in the context of MT-BCS, is utilized due to its superior performance and robustness to dictionary coherence.
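The objective ‖ξ(b̃)‖₁ in (10) is the familiar ℓ₂,₁ mixed norm: an ℓ₂-norm across the L subband coefficients of each grid point, then an ℓ₁-sum over grid points. A small sketch of this quantity (we assume the coefficients are arranged as a Q×L matrix, one row per grid point):

```python
import numpy as np

def group_l21_norm(B):
    """||xi(b_tilde)||_1 of problem (10): B is Q x L, where row q holds
    the L subband coefficients of grid point q; take the l2-norm across
    subbands for each grid point, then sum over all grid points."""
    return float(np.sum(np.linalg.norm(B, axis=1)))
```

Minimizing this quantity favors solutions in which entire rows (i.e., all L subband coefficients of a grid point) are driven to zero together, which is exactly the shared-support behavior described above.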

3.2 CMT–BCS algorithm

The entries of sparse vector \(\tilde {\boldsymbol {b}}_{[l]}\) in Eq. (8) which characterizes the lth task are assumed to be drawn from the following zero-mean Gaussian distributions:
$$ \tilde{\boldsymbol{b}}_{[l]}^{q} \sim {\mathcal{N}}(\boldsymbol{b}_{[l]}^{q}|{\mathbf{0}},{\alpha_{q}}{\boldsymbol{I}_{2}}),\quad q \in \left[ {1, \cdots,Q} \right] $$

where \(\tilde {\boldsymbol {b}}_{[l]}^{q}\) is a 2×1 vector consisting of the real-part coefficient \(b_{[l],q}^{r}\) and imaginary-part coefficient \(b_{[l],q}^{i}\) of the qth element of \(\tilde {\boldsymbol {b}}_{[l]}\), and α q is the variance of the Gaussian probability density function (pdf). It should be noted that the parameter α=[α 1,⋯,α Q ] T is shared by all L groups. Thus, the qth vector \(\boldsymbol {b}_{[l]}^{q}\) tends to zero with probability 1 across the L groups when α q is set to zero [14].

To deal with the complex-valued variables in the context of MT-BCS, which was originally developed for real-valued problems [14], the real and imaginary components of a complex variable are treated as two separate variables in [16], without utilizing the fact that their non-zero entries usually appear at the same positions. In contrast, the CMT-BCS algorithm treats the real and imaginary components as group sparse, and α q is shared and jointly estimated for both components [17].

To promote the sparsity of \(\tilde {\boldsymbol {b}}_{[l]}\), a Gamma prior is placed on \(\alpha _{q}^{-1}\), i.e.,
$$ \alpha_{q}^{- 1} \sim \text{Gamma}\left(\alpha_{q}^{- 1}|a,b\right), $$

where \(\text {Gamma}(x|a,b)=\Gamma (a)^{-1}b^{a}x^{a-1}e^{-bx}\) and Γ(·) denotes the Gamma function.

It has been demonstrated in [15] that a proper selection of the hyper-parameters a and b encourages a sparse representation of the coefficients in \(\tilde {\boldsymbol {b}}_{[l]}\). We set a=b=0 as a default choice, which avoids a subjective choice of a and b and simplifies the computation. Considering that the covariance matrix is obtained from the received data samples in Eq. (5), a Gaussian prior \(\mathcal {N}( {\mathbf {0}},\beta _{0}\boldsymbol {I}_{2})\) is also placed on the additive noise. Similarly, a Gamma prior with hyper-parameters c and d is placed on \(\beta _{0}^{-1}\), i.e., \(\beta _{0}^{-1}\sim \text {Gamma}(\beta _{0}^{-1}|c,d)\). We also set c=d=0 as a default choice.

Define \(\breve {\boldsymbol {b}}_{[l]}^{{RI}}=\left [\left (\breve {\boldsymbol {b}}_{[l]}^{{R}}\right)^{T}, \left (\breve {\boldsymbol {b}}_{[l]}^{{I}}\right)^{T}\right ]^{T}\) with \(\breve {\boldsymbol {b}}_{[l]}^{{R}} = {\left [b_{{[l],1}}^{{R}}, \ldots,b_{{[l],Q}}^{{R}}\right ]^{T}}\) and \(\breve {\boldsymbol {b}}_{[l]}^{I} = {\left [b_{{[l],1}}^{I}, \ldots,b_{[l],Q}^{I}\right ]^{T}}\) respectively denoting the real and imaginary parts. The posterior probability density function for \(\breve {\boldsymbol {b}}_{[l]}^{{RI}}\) can be evaluated analytically based on the Bayes’ rule as
$$ \Pr \left(\breve{\boldsymbol{b}}_{[l]}^{RI}| \boldsymbol{y}^{RI}_{[l]},{\hat{\boldsymbol{A}}_{[l]}},\boldsymbol\alpha,{\beta_{0}}\right) = \mathcal{N}\left(\breve{\boldsymbol{b}}_{[l]}^{RI}|{\boldsymbol\mu_{[l]}},{\boldsymbol\Sigma_{[l]}}\right), $$
$$ {\boldsymbol{\mu}_{[l]}}=\beta^{-1}_{0}{\boldsymbol\Sigma_{[l]}}{ \boldsymbol{\Psi}_{[l]}^{T}}\boldsymbol{y}_{[l]}^{{RI}}, $$
$$ \boldsymbol{y}_{[l]}^{{RI}}=\left[\text{Re}\left(\boldsymbol{y}_{[l]}^{T}\right),\text{Im}\left(\boldsymbol{y}_{[l]}^{T}\right)\right]^{T}, $$
$$ {\boldsymbol\Sigma_{[l]}}=\left[\beta^{-1}_{0}\boldsymbol{\Psi}_{[l]}^{T}\boldsymbol{\Psi}_{[l]}+\textbf{F}^{-1}\right]^{-1}, $$
$$ \boldsymbol{\Psi}_{[l]}=\left[ \begin{array}{cc} \text{Re}\left(\tilde{\boldsymbol{A}}_{[l]}\right) & -\text{Im}\left(\tilde{\boldsymbol{A}}_{[l]}\right)\\ \text{Im}\left(\tilde{\boldsymbol{A}}_{[l]}\right)&\text{Re}\left(\tilde{\boldsymbol{A}}_{[l]}\right) \end{array} \right], $$
$$ \mathbf{F}=\text{diag}\left\{\alpha_{1},\cdots,\alpha_{Q},\alpha_{1},\cdots,\alpha_{Q}\right\}. $$
Note that the real and imaginary parts share the same α to ensure their group sparsity. Once the parameters α and β 0 are evaluated, the mean and variance of the signal power corresponding to each spatial grid point can be derived. The parameters are obtained by maximizing the logarithm of the marginal likelihood function, i.e.,
$$ \left\{\boldsymbol\alpha,\beta_{0}\right\}=\arg\mathop{\max}\limits_{\boldsymbol\alpha,\beta_{0}}\mathcal{L}\left(\boldsymbol\alpha,\beta_{0}\right), $$
$$ \begin{aligned} \mathcal{L}\left(\boldsymbol\alpha,{\beta_{0}}\right) &= \sum\limits_{l = 1}^{L} \log \Pr\left(\boldsymbol{y}_{[l]}^{RI}|\boldsymbol{\alpha}, {\beta_{0}}\right) \\ &= - \frac{1}{2}\sum\limits_{l = 1}^{L} \left[ 2{M^{2}}\log (2\pi) + \log \det\left(\boldsymbol{C}_{[l]}\right) + {{\left(\boldsymbol{y}_{[l]}^{RI}\right)}^{T}}\boldsymbol{C}_{[l]}^{- 1}\boldsymbol{y}_{[l]}^{RI} \right] \end{aligned} $$
and \(\boldsymbol {C}_{[l]}=\beta _{0}\boldsymbol {I}_{2M^{2}}+\boldsymbol \Psi _{[l]}\textbf {F} \boldsymbol \Psi _{[l]}^{T}\). β 0 and α can be evaluated by maximizing Eq. (20) via the expectation-maximization (EM) algorithm and can be expressed as [17]:
$$ {}\begin{aligned} {\alpha_{q}} = \sqrt {\frac{{{\sum\nolimits}_{l = 1}^{L} {{{\left(\tilde{\boldsymbol{b}}_{[l],q}^{{RI}}\right)}^{T}}{\tilde{\boldsymbol{b}}_{[l],q}^{{RI}}}+ {{\left(\tilde{\boldsymbol{b}}_{[l],Q+q}^{{RI}}\right)}^{T}}{\tilde{\boldsymbol{b}}_{[l],Q+q}^{{RI}}}} }}{{{\sum\nolimits}_{l = 1}^{L} {\text{trace}\left[ {{{\left(\boldsymbol{C}_{[l]}^{*}\right)}^{- 1}}\left(\boldsymbol\Psi_{[l]q}^{T}\boldsymbol\Psi_{[l]q}\right)} \right]} }}}, \end{aligned} $$
$$ {}\begin{aligned} {\beta_{0}} = \frac{{{\sum\nolimits}_{l = 1}^{L} {\left\{ {\text{trace}\left[{\boldsymbol\Sigma_{[l]}}{\boldsymbol\Psi_{[l]}}\boldsymbol\Psi_{[l]}^{T}\right] + \left\| {\boldsymbol{y}_{[l]}^{{RI}} - {\boldsymbol\Psi_{[l]}}{\boldsymbol\mu_{[l]}}} \right\|_{2}^{2}} \right\}}}}{{2QL}}, \end{aligned} $$

where Ψ [l]q denotes the qth column of Ψ [l] and \(\tilde {\boldsymbol {b}}_{[l]q}^{{RI}}=\left [b_{{[l]q}}^{{R}}, b_{{[l]q}}^{{I}}\right ]^{T}\) is a vector consisting of the real and imaginary parts of the qth entry of \(\tilde {\boldsymbol {b}}_{[l]}\).

4 Robust DOA estimation

The CMT-BCS sparse reconstruction algorithm described in Section 3 provides effective DOA estimation only when there is no distortion. When the observed signals at some frequencies are contaminated with distortions that corrupt the amplitude and phase information, such methods yield erroneous DOA estimates. In this section, we propose an MCC-based robust group compressive sensing method for reliable DOA estimation in such scenarios. In the following, we first analyze the effect of narrowband distortions on group sparsity-based DOA estimation and then introduce the concept of the MCC [21, 22]. Finally, the proposed robust group compressive sensing method for DOA estimation is presented.

4.1 Effect of narrowband distortion

As discussed earlier, narrowband distortions may result from calibration errors and filter malfunction. In addition, local interferer scattering that does not correspond to a clear array manifold yields a similar effect (Fig. 1). Let \(\mathbb {L}\) be the subset of frequency subbands that are contaminated by narrowband distortions or interference, and denote its cardinality as \(|\mathbb {L}|= L_{o}\), where L o <L. At subband \(l_{o}\in \mathbb {L}\), the received signal vector becomes
$$ \tilde{\boldsymbol{x}}_{[l_{o}]}(t) = \exp \left({j2\pi {f_{l_{o}}}t}\right)\sum\limits_{k = 1}^{K} {s_{[l_{o}]}^{(k)}(t){\boldsymbol{a}^{o}_{[l_{o}]}}\left({\theta_{k}}\right)}+ {\tilde{\boldsymbol{n}}_{[l_{o}]}}(t), $$
Fig. 1

Array signals with narrowband interference scattering

where \({\boldsymbol {a}^{o}_{[l_{o}]}}({\theta _{k}})\) is the distorted steering vector contaminated by the interference in the l o th subband, which can be expressed as the Hadamard product of the steering vector and an M×1 distortion vector \( {\mathbf {v}}_{[l_{o}]}\) with unknown amplitude and phase, i.e., \({\boldsymbol {a}^{o}_{[l_{o}]}}({\theta _{k}})={\boldsymbol {a}_{[l_{o}]}}({\theta _{k}}) \circ {\mathbf {v}}_{[l_{o}]}\).
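The distortion model above is a simple elementwise product; a brief sketch with synthetic per-sensor gain and phase perturbations (all numeric values below are illustrative, not from the paper):

```python
import numpy as np

# Hadamard-product distortion model: a_o = a ∘ v, where v holds unknown
# per-sensor gain and phase errors (synthetic values for illustration).
M = 8
m = np.arange(M)
a = np.exp(-1j * np.pi * m * np.sin(np.deg2rad(20.0)))  # nominal steering vector
rng = np.random.default_rng(0)
gain = 1.0 + 0.3 * rng.standard_normal(M)               # amplitude distortion
phase = 0.5 * rng.standard_normal(M)                    # phase distortion
v = gain * np.exp(1j * phase)
a_o = a * v  # elementwise (Hadamard) product: distorted steering vector
```

Unlike the nominal steering vector, whose entries all have unit modulus, the distorted vector has per-sensor magnitudes set by the unknown gains, which is what breaks the group sparse structure across subbands.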

After converting \(\tilde {\boldsymbol {x}}_{[l_{o}]}(t)\) to baseband, the corresponding received signal vector can be represented as
$$ \begin{aligned} \boldsymbol{x}_{[l_{o}]}(t) &=\sum\limits_{k = 1}^{K} {s_{[l_{o}]}^{(k)}(t){\boldsymbol{a}^{o}_{[l_{o}]}}({\theta_{k}})}+ {\boldsymbol{n}_{[l_{o}]}}(t)\\ &={\boldsymbol{A}_{[l_{o}]}^{o}}{\boldsymbol{s}_{[l_{o}]}}(t) + {\boldsymbol{n}_{[l_{o}]}}(t), \end{aligned} $$
where \({\boldsymbol {A}_{[l_{o}]}^{o}}=\boldsymbol {A}_{[l_{o}]} \circ \boldsymbol {V}_{[l_{o}]}\) is the distorted manifold matrix and \(\boldsymbol {V}_{[l_{o}]}= {\mathbf {v}}_{[l_{o}]} {\mathbf {1}}^{T}_{K}\) is the distortion matrix. The corresponding vectorized covariance matrix of (24) can be written as
$$ \boldsymbol{y}_{[l_{o}]} = \textmd{vec}\left({\hat{\boldsymbol{R}}_{x[l_{o}]}}\right) = {\hat {\boldsymbol{A}}^{o}_{[l_{o}]}}{{\boldsymbol{b}}^{o}_{[l_{o}]}} + \sigma_{[l_{o}]}^{2}{\boldsymbol{i}}, $$

where \({\hat {\boldsymbol {R}}_{x[l_{o}]}} = \frac{1}{T}\sum _{t = 1}^{T} {{\boldsymbol {x}_{[l_{o}]}}(t)\boldsymbol {x}_{[l_{o}]}^{H}(t)}\) and \({\hat {\boldsymbol {A}}^{o}_{[l_{o}]}}=\left ({\boldsymbol {A}_{[l_{o}]}^{o}}\right)^{*}\odot {\boldsymbol {A}_{[l_{o}]}^{o}}\).

In the presence of the distortion matrix \(\boldsymbol {V}_{[l_{o}]}\), the group sparse property in (9) is destroyed. In addition, under the sparse representation scheme, the distorted manifold matrix causes the residual errors, \( {\mathbf {e}}_{l_{o}}=\boldsymbol {y}_{[l_{o}]}-{\tilde {\boldsymbol {A}}_{[l_{o}]}}\tilde {\boldsymbol {b}}_{[l_{o}]}\), to follow an impulsive Gaussian mixture distribution. Note that both the ξ(·) operation and the constraint in (10) use ℓ_2-norm operations, which generally amplify the effect of outliers, so that the estimated DOAs become biased or erroneous unless the distortions are properly suppressed or compensated.

4.2 Maximum correntropy criterion

The correntropy between two arbitrary random variables X and Y is defined by [22]
$$ V_{\delta}=E\left[\kappa_{\delta}(X-Y)\right], $$
where
$$ \kappa_{\delta}(X-Y)=\exp\left(-\frac{(X-Y)^{2}}{2\delta^{2}}\right) $$
is the Gaussian kernel function with a user-defined kernel size δ that controls the observation window. By adjusting the kernel size, impulsive Gaussian noise can be effectively suppressed [22]. In practice, the correntropy is calculated from a finite number of samples x n and y n , n=1,…,N, i.e.,
$$ \hat{V}_{\delta}=\frac{1}{N}\sum\limits_{n=1}^{N}\kappa_{\delta}(x_{n}-y_{n}). $$
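The sample estimator in Eq. (28) is straightforward to compute; a brief sketch (the function name is ours):

```python
import numpy as np

def sample_correntropy(x, y, delta):
    """Sample correntropy V_hat = (1/N) sum_n exp(-(x_n - y_n)^2 / (2 delta^2))."""
    e = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return float(np.mean(np.exp(-e ** 2 / (2.0 * delta ** 2))))
```

Note that each sample contributes at most 1/N to the estimate no matter how large its error is; this boundedness is what makes correntropy insensitive to outliers.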
In order to derive the MCC, we first give a simple regression model for two random variables y(n) and x(n) as an example:
$$ y(n)=f(x(n),p)+z(n) $$
with n∈[1,⋯,N], where p is a parameter of the function f and z(n) is a noise process. If the probability density function of the noise variable z(n) follows the Gaussian distribution, the optimal solution of the above regression problem in the minimum mean square error (MMSE) sense is found by
$$ \min\limits_{p} \quad J(p)=\sum\limits_{n = 1}^{N} {{{\left[ {f({x(n)},p) - {y(n)}} \right]}^{2}}}. $$
However, when the noise process z(n) follows a non-Gaussian distribution with large outliers, the solution obtained under the above MMSE criterion is no longer optimal. In this case, we consider the optimal solution under the following MCC [22]:
$$ \max\limits_{p} \quad J'(p) = \sum\limits_{n = 1}^{N}{\exp\left(-{\frac{{{{\left[ {f({x(n)},p) - {y(n)}} \right]}^{2}}}}{{2{\delta^{2}}}}} \right)}. $$

It is shown in [22] that the MCC outperforms the MMSE criterion in the impulsive Gaussian mixture noise case since the correntropy is inherently insensitive to outliers, while the MCC attains the same efficiency as the MMSE criterion under a Gaussian noise process.
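To illustrate the difference, the following sketch fits the scalar model y = p·x under both criteria, solving the MCC problem by a fixed-point reweighting scheme in which each iteration solves a weighted least-squares problem with Gaussian-kernel weights (the data, function names, and iteration count are our own illustrative choices):

```python
import numpy as np

def mcc_linear_fit(x, y, delta, iters=30):
    """Fit y ~ p*x under the MCC of Eq. (31): iterate between computing
    kernel weights w_n = exp(-e_n^2 / (2 delta^2)) from the residuals and
    solving the resulting weighted least-squares problem for p."""
    p = np.sum(x * y) / np.sum(x * x)          # MMSE initialization
    for _ in range(iters):
        e = y - p * x
        w = np.exp(-e ** 2 / (2.0 * delta ** 2))  # outliers get weight ~ 0
        p = np.sum(w * x * y) / np.sum(w * x * x)
    return p

# Synthetic line y = 2x with a few gross (impulsive) outliers
rng = np.random.default_rng(1)
x = np.linspace(1.0, 10.0, 50)
y = 2.0 * x + 0.01 * rng.standard_normal(50)
y[:3] += 40.0                                  # impulsive outliers
p_mmse = np.sum(x * y) / np.sum(x * x)         # MMSE solution of Eq. (30)
p_mcc = mcc_linear_fit(x, y, delta=1.0)
```

Here the MMSE estimate is biased away from the true slope by the outliers, whereas the MCC estimate recovers it, since the kernel weights of the corrupted samples shrink toward zero.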

4.3 Proposed robust DOA estimation technique

In this subsection, we apply the MCC to develop a robust group sparse reconstruction method, which provides reliable DOA estimation in the presence of distortion in a subset of the frequency subbands. Consider that the l o th subband, where \(l_{o}\in \mathbb {L}\), is distorted, with the corresponding manifold matrix \({\tilde {\boldsymbol {A}}_{[l_{o}]}}\). Define \(\boldsymbol {y}^{o}=\left [\boldsymbol {y}_{[1]}^{T},\cdots,\boldsymbol {y}^{T}_{[l_{o}]},\cdots, \boldsymbol {y}_{[L]}^{T}\right ]^{T}\). If the block diagonal dictionary matrix designed in Eq. (9) is utilized, the entries of the residual error vector \(\boldsymbol {y}^{o} - \tilde {\boldsymbol {A}}\tilde {\boldsymbol {b}}\) follow an impulsive Gaussian mixture distribution rather than a zero-mean Gaussian distribution. Thus, the solution computed from the ℓ_1-norm minimization problem in (10) is no longer optimal.

In order to solve this problem, the constraint function in (10) needs to be modified. By utilizing the MCC, the constrained 1-norm minimization problem is modified as [23]
$$ \begin{aligned} &\min\limits_{\tilde{\boldsymbol{b}}} {\left\| {\xi (\tilde{\boldsymbol{b}})} \right\|_{1}}\\ &\mathrm{ s.t.} \sum\limits_{m = 1}^{L{M^{2}}} {\exp \left({ - \frac{{{{\left| {{{y_{m}^{o}}} - {{\tilde {\boldsymbol{A}}}_{m}}\tilde{\boldsymbol{b}}} \right|}^{2}}}}{2\delta^{2}}} \right)} > \varepsilon_{1}, \end{aligned} $$

where \({y_{m}^{o}}\) is the mth element of y o and \({\tilde {\boldsymbol {A}}}_{m}\) is the mth row of the dictionary matrix \({\tilde {\boldsymbol {A}}}\). Similar to ε in (10), ε 1 is a user-specific tolerance parameter.

However, the inequality constraint in Eq. (32) is both nonlinear and nonconvex, and it is thus difficult to solve this optimization problem directly. In order to obtain an optimal solution of (32), we introduce the following proposition [26].

Proposition 1

For \(\kappa (x)=\exp \left (-\frac {{{|x|^{2}}}}{{{\delta ^{2}}}}\right)\), there exists a conjugate function φ such that
$$ \kappa(x) = \max\limits_{p} \left({p \frac{{{|x|^{2}}}}{{{\delta^{2}}}} - \varphi (p)} \right), $$

and for a fixed x, the maximum value is reached at p=−κ(x).

By exploiting Proposition 1, we can solve the above nonconvex optimization problem in an iterative method. Using the result presented in Proposition 1, there exists a conjugate function φ(ω) for the Gaussian kernel function \(\kappa \left ({{{y_{m}^{o}}} - {{\tilde {\boldsymbol {A}}}_{m}}\tilde {\boldsymbol {b}}}\right)\); thus, the constraint function in (32) can be modified as
$$ \sum\limits_{m = 1}^{LM^{2}} {\left({{\omega_{m}}{{\left| {{{y_{m}^{o}}} - {{\tilde {\boldsymbol{A}}}_{m}}\tilde{\boldsymbol{b}}} \right|}^{2}} - \varphi ({\omega_{m}})} \right)} > {\varepsilon_{2}}, $$
where \(\boldsymbol {\omega } = {\left [ {{\omega _{1}}, \cdots,{\omega _{LM^{2}}}} \right ]}\) is a weighting row vector to be optimized and ε 2 is a user-specific parameter. Comparing (34) with (33), it is clear that ω m in (34) plays the role of the parameter p in (33). Based on Proposition 1, inequality (34) attains its supremum when ω m is set as
$$ {\omega_{m}} = \exp \left(- \frac{{|{y_{m}^{o}} - {{\tilde{\boldsymbol{A}}}_{m}}\tilde{\boldsymbol{b}}{|^{2}}}}{2\delta_{[l]}^{2}}\right), $$

where \(l=\lceil \frac {m}{M^{2}}\rceil \). Note that the received signal is divided into L groups corresponding to the L subband signals, so the kernel size δ [l] must be adjusted automatically based on each individual subband signal rather than on the whole received signal. According to Proposition 1, inequality (34) and the constraint function in (32) are equivalent when ω is specified as above [26].
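A minimal sketch of the weight computation in (35), assuming the residual entries are stacked subband by subband (M² entries per subband) and that a kernel size δ [l] is available for each subband:

```python
import numpy as np

def subband_weights(y, A, b, deltas, M2):
    """omega_m = exp(-|y_m - A_m b|^2 / (2 delta_[l]^2)), where l is the
    subband index of entry m, as in Eq. (35)."""
    r2 = np.abs(y - A @ b) ** 2          # squared residual magnitudes
    w = np.empty(len(deltas) * M2)
    for l, d in enumerate(deltas):       # one kernel size per subband
        sl = slice(l * M2, (l + 1) * M2)
        w[sl] = np.exp(-r2[sl] / (2.0 * d ** 2))
    return w
```

Entries from a clean subband keep weights close to 1, while entries from a distorted subband are driven toward 0, which matches the desired behavior of ω described in the text.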

In addition, the conjugate function φ(ω m ) depends only on the parameter ω m . Since ω m remains constant within each iteration, φ(ω m ) is also constant within each iteration. Consequently, the sparse vector \(\tilde {\boldsymbol {b}}\) can still be determined by maximizing the constraint function in (34) while ignoring the value of φ(ω m ). Hence, inequality (34) can be reformulated as
$$ \sum\limits_{m = 1}^{LM^{2}} {\left({{\omega_{m}} \left({| {{{y_{m}^{o}}} - {{\tilde {\boldsymbol{A}}}_{m}}\tilde{\boldsymbol{b}}} |}^{2} \right)} \right)} > {\varepsilon_{3}}, $$

where ε 3 is a user-specific parameter, similar to those defined above.

For notational convenience, the vector ω is divided into L groups, i.e., ω=[ω [1],⋯,ω [L]], where \(\boldsymbol {\omega }_{[l]}=\left [\omega _{M^{2}(l-1)+1},\cdots,\omega _{M^{2}l}\right ]\) corresponds to the lth subband. It is desired that, when there is no distortion in the lth subband, the corresponding ω [l] is a 1×M 2 vector whose elements are close to 1. Otherwise, for \(l_{o}\in \mathbb {L}\), \(\boldsymbol {\omega }_{[l_{o}]}\) should be a 1×M 2 vector with small-valued elements. Thus, the weighting vector ω is adaptively optimized to mitigate the effect of the narrowband distortions.

After the weighting vector ω is determined, the optimization problem in (32) can be solved through the following problem
$$\begin{array}{@{}rcl@{}} \begin{aligned} &\min\limits_{\tilde{\boldsymbol{b}}}{\left\|{\xi\left(\tilde{\boldsymbol{b}}\right)}\right\|_{1}}\\ &\text{s.t.} \sum\limits_{m = 1}^{L{M^{2}}} \left({\omega_{m}}{\left| {{y_{m}^{o}} - {{\tilde{\boldsymbol{A}}}_{m}}\tilde{\boldsymbol{b}}} \right|^{2}}\right) < {\varepsilon_{4}}, \end{aligned} \end{array} $$

where ε 4 is a user-specific tolerance parameter. The exploitation of the MCC effectively suppresses the effect of the narrowband interference, leading to robust DOA estimation based on the group sparse reconstruction scheme.
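The weighted constraint in (37) can be absorbed into the data by scaling each row of y and of the dictionary by √ω m , after which any sparse solver applies. The paper uses CMT-BCS in this step; purely as an illustration, the sketch below solves the resulting weighted ℓ1 problem with a simple ISTA loop (a stand-in solver, not the paper's method):

```python
import numpy as np

def weighted_l1_solve(y, A, w, lam=0.1, n_iter=2000):
    """min_b  lam*||b||_1 + sum_m w_m |y_m - A_m b|^2, solved by ISTA
    after absorbing sqrt(w_m) into the rows of y and A."""
    sw = np.sqrt(w)
    Aw = A * sw[:, None]                 # row-weighted dictionary
    yw = y * sw                          # row-weighted observations
    step = 1.0 / (2.0 * np.linalg.norm(Aw, 2) ** 2 + 1e-12)
    b = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = 2.0 * Aw.T @ (Aw @ b - yw)   # gradient of the weighted LS term
        z = b - step * g
        b = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return b
```

With the weight of a distorted row near zero, that observation is effectively removed from the data-fitting term, which is precisely how the MCC weighting suppresses narrowband distortion.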

In summary, the proposed robust DOA estimation algorithm performs the following iterative process:
  1. Initialization: Set the initial kernel size as δ [l]=, and \(\tilde {\boldsymbol {b}}=\textbf {{0}}_{LM^{2}\times 1} \).

  2. Compute the initial values of ω m based on Eq. (35).

  3. Solve the modified 1-norm minimization problem (37) utilizing the group sparse reconstruction algorithm. As shown in Section 3, the CMT-BCS algorithm is applied in this step.

  4. Update the kernel size δ [l] following Silverman’s rule [22, 23]:
    $$ \delta_{[l]} = 1.06 \times \min \{ {\delta_{E}},{R \left/ {1.34}\right.}\} \times {(M^{2})^{- 1/5}}, $$
    where δ E and R stand for the standard deviation and interquartile range of the error \(({{{y_{m}^{o}}} - {{\tilde {\boldsymbol {A}}}_{m}}\tilde {\boldsymbol {b}}})\), respectively, with m∈[(l−1)M 2+1,lM 2] [22].

Repeat steps 2 to 4 until a user-specified number of iterations is reached or convergence is achieved.

4.4 Computational complexity

In this subsection, we analyze the computational complexity of the proposed method in terms of the number of multiplications required in each iteration described in the previous subsection. Calculating the values of ω m in step 2 requires \({\mathcal O}\{ L{M^{2}}(LQ + 1)\}\) multiplications. In step 3, the numbers of multiplications required to compute the parameters α and β 0 are \({\mathcal {O}}\{ {M^{2}}LQ(1 + 2{M^{4}})\} \) and \({\mathcal {O}}\left \{ L(4{Q^{2}}{M^{2}} + 4{Q^{2}}{M^{4}} + {M^{2}} + 2Q{M^{2}})\right \} \), respectively. In addition, \({\mathcal {O}}\{ L{M^{2}}\} \) multiplications are required to update the kernel size.

5 Simulation results

In this section, simulation results are presented to demonstrate the performance of the proposed algorithm. In the following simulations, we consider a 7-element ULA. Three far-field targets (K=3) impinge on the array from directions of −20°, 5°, and 25°, respectively. Each signal occupies five frequency subbands (L=5), and the ratio between the wavelength λ l and the interelement spacing d for the five subbands is 1.2, 1.6, 2.0, 2.4, and 2.8, respectively. The noise power is assumed equal in the five subbands, and the input signal-to-noise ratio (SNR) is 5 dB. The number of snapshots is T=200, and the dictionary matrix consists of steering vectors corresponding to a uniformly sampled grid from −60° to 60° with a step size of 0.1°. We assume that the fifth subband is contaminated by an omni-directional narrowband interference with an input interference-to-noise ratio (INR) of 10 dB. The number of iterations is set to 10.
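For reference, the simulated array geometry can be set up as follows. This sketch (our own helper, not code from the paper) builds a ULA steering dictionary per subband from the stated λ l /d ratios and the 0.1° grid:

```python
import numpy as np

def steering_matrix(n_sensors, lam_over_d, grid_deg):
    """ULA steering vectors a(theta) = exp(j*2*pi*(d/lambda)*n*sin(theta))
    for every grid angle; d/lambda is the inverse of the given ratio."""
    n = np.arange(n_sensors)[:, None]
    theta = np.deg2rad(np.asarray(grid_deg))[None, :]
    return np.exp(2j * np.pi * (1.0 / lam_over_d) * n * np.sin(theta))

grid = np.linspace(-60.0, 60.0, 1201)    # -60 to 60 degrees, 0.1 degree steps
ratios = [1.2, 1.6, 2.0, 2.4, 2.8]       # lambda_l / d for the five subbands
dictionaries = [steering_matrix(7, r, grid) for r in ratios]
```

Each subband dictionary has unit-modulus entries and shares a common angular grid, which is what allows the L subband problems to share one group-sparse support.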

We first provide the sparse spatial spectra obtained from the two Bayesian compressive sensing algorithms when there is no narrowband interference in Fig. 2. The true DOAs are marked with red vertical lines. From Fig. 2 a, b, it is clear that the CMT-BCS algorithm [17] outperforms the MT-BCS algorithm [11]. Although the DOA estimates obtained from the MT-BCS algorithm are approximately accurate, a spurious peak appears close to the true DOA peaks. In contrast, the CMT-BCS algorithm achieves an accurate spatial spectrum without spurious peaks.
Fig. 2

Estimated spatial spectrum using the conventional method implemented by the a MT-BCS and b CMT-BCS methods, where no narrowband distortions are present

In Fig. 3, we provide the estimated spatial spectra using the two BCS algorithms, which are not robust to the presence of narrowband interference. It is easy to see that both algorithms fail to achieve accurate DOA estimation. From Fig. 3 a, which shows the spatial spectrum obtained from the MT-BCS algorithm, we can see that there exist many spurious peaks with higher values than the true DOA peaks. In Fig. 3 b, the spatial spectrum estimated by the CMT-BCS algorithm captures only one signal arrival with a bias and fails to estimate the DOAs of the other two signals; instead, two spurious peaks are generated.
Fig. 3

Estimated spatial spectrum in the presence of narrowband interference using the conventional method implemented by both a MT-BCS and b CMT-BCS algorithms

The estimated spatial spectrum obtained from the proposed algorithm is given in Fig. 4, where the two sparse reconstruction algorithms are respectively utilized in step 3 of the proposed algorithm. From the two plots in Fig. 4, we can see that the narrowband interference is eliminated effectively and accurate DOA estimates are obtained. However, spurious components remain in the estimated spatial spectrum when the MT-BCS algorithm is applied. On the other hand, no such spurious components appear in the spatial spectrum obtained from the CMT-BCS algorithm.
Fig. 4

Estimated spatial spectrum in the presence of narrowband interference using the proposed method implemented by both a MT-BCS and b CMT-BCS algorithms

Define the root mean square error (RMSE) as follows:
$$ \textrm{RMSE}(\theta)=\sqrt{\frac{1}{K}\sum\limits_{k=1}^{K}\left(E\left[\left(\hat{\theta}_{k}-\theta_{k}\right)^{2}\right]\right)}, $$
where \(\hat {\theta }_{k}\) is the estimated DOA of the kth source. The RMSE versus SNR is provided in Fig. 5, where the results are averaged over 100 independent trials using the proposed robust DOA estimation algorithm and the input SNR varies from −5 to 25 dB. It is clearly shown that the proposed method achieves robust estimation, and the reconstruction performance is significantly improved by using the CMT-BCS reconstruction algorithm.
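The RMSE above averages the squared error over trials first and over the K sources second; a direct sketch, with the expectation approximated by the trial average and hypothetical array shapes:

```python
import numpy as np

def rmse(theta_hat, theta_true):
    """RMSE over K sources as in the definition above;
    theta_hat has shape (n_trials, K)."""
    sq_err = (theta_hat - theta_true[None, :]) ** 2
    return np.sqrt(np.mean(np.mean(sq_err, axis=0)))
```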
Fig. 5

Comparison of the RMSE performance versus the input SNR

In the next example, we compare the performance of the proposed method under different proportions of distorted subbands in Fig. 6. We consider three cases in which 1 subband (20%), 2 subbands (40%), and 3 subbands (60%) are randomly distorted. While the RMSE performance generally degrades as the number of distorted subbands increases, the proposed method still provides robust DOA estimation in all three considered cases.
Fig. 6

Comparison of the RMSE performance versus distortion proportion

6 Conclusions

In this paper, a robust group compressive sensing algorithm was developed to achieve reliable DOA estimation in the presence of narrowband signal distortion due to interference and calibration errors. We first constructed the group sparse signal model utilizing the received multi-frequency signals. Then, we reviewed the complex multitask Bayesian compressive sensing (CMT-BCS) technique to solve the complex-valued group sparse reconstruction problem. We considered narrowband signal distortions that affect the manifold matrix in some of the frequency bands. In this case, the residual error between the observations and the estimated signals follows an impulsive Gaussian mixture distribution, in which the distorted entries appear as outliers, leading to performance degradation when conventional group sparse reconstruction methods are used. To achieve robust DOA estimation, a robust group sparse reconstruction method was developed based on the maximum correntropy criterion, in which an adaptive weight is applied to each subband observation to mitigate the outliers caused by the narrowband distortion. The reliable DOA estimation capability was verified in challenging situations where conventional group compressive sensing methods fail.



Acknowledgements

B. Wang was supported by the China Scholarship Council for his stay at Temple University. The work of Y. D. Zhang was supported in part by the National Science Foundation under grant AST-1547420. The work of W. Wang was supported by the National Natural Science Foundation (61571148), China Postdoctoral Special Funding (2015T80328), and China Postdoctoral Science Foundation Grant (2014M550182).

Competing interests

The authors declare that they have no competing interests.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

College of Automation, Harbin Engineering University, Heilongjiang, China
Department of Electrical and Computer Engineering, Temple University, Philadelphia, USA


  1. J Capon, High-resolution frequency-wavenumber spectrum analysis. Proc. IEEE. 57(8), 1408–1418 (1969).
  2. RO Schmidt, Multiple emitter location and signal parameter estimation. IEEE Trans. Antennas Propagat. 34(3), 276–280 (1986).
  3. R Roy, A Paulraj, T Kailath, ESPRIT—a subspace rotation approach to estimation of parameters of cisoids in noise. IEEE Trans. Acoust. Speech Signal Process. 34(5), 1340–1342 (1986).
  4. I Ziskind, M Wax, Maximum likelihood localization of multiple sources by alternating projection. IEEE Trans. Acoust. Speech Signal Process. 36(10), 1553–1560 (1988).
  5. YD Zhang, MG Amin, B Himed, in Proc. IEEE ICASSP. Sparsity-based DOA estimation using co-prime arrays (Vancouver, Canada, 2013).
  6. Q Shen, W Liu, W Cui, S Wu, YD Zhang, MG Amin, Low-complexity wideband direction-of-arrival estimation based on co-prime arrays. IEEE/ACM Trans. Audio Speech Language Process. 23(9), 1445–1456 (2015).
  7. S Qin, YD Zhang, MG Amin, Generalized coprime array configurations for direction-of-arrival estimation. IEEE Trans. Signal Process. 63(6), 1377–1390 (2015).
  8. A Koochakzadeh, P Pal, Sparse source localization using perturbed arrays via bi-affine modeling. Digital Signal Process. doi:10.1016/j.dsp.2016.06.004.
  9. R Tibshirani, Regression shrinkage and selection via the Lasso. J. R. Statist. Soc. Ser. B. 58(1), 267–288 (1996).
  10. JA Tropp, AC Gilbert, Signal recovery from random measurements via orthogonal matching pursuit. IEEE Trans. Inf. Theory. 53(12), 4655–4666 (2007).
  11. S Ji, Y Xue, L Carin, Bayesian compressive sensing. IEEE Trans. Signal Process. 56(6), 2346–2356 (2008).
  12. M Yuan, Y Lin, Model selection and estimation in regression with grouped variables. J. R. Statist. Soc. Ser. B. 68(1), 49–67 (2006).
  13. Y Eldar, P Kuppinger, H Bölcskei, Block-sparse signals: uncertainty relations and efficient recovery. IEEE Trans. Signal Process. 58(6), 3042–3054 (2010).
  14. S Ji, D Dunson, L Carin, Multitask compressive sensing. IEEE Trans. Signal Process. 57(1), 92–106 (2009).
  15. ME Tipping, Sparse Bayesian learning and the relevance vector machine. J. Mach. Learn. Res. 1, 211–244 (2001).
  16. M Carlin, P Rocca, G Oliveri, F Viani, A Massa, Directions-of-arrival estimation through Bayesian compressive sensing strategies. IEEE Trans. Antennas Propagat. 61(7), 3828–3838 (2013).
  17. Q Wu, YD Zhang, MG Amin, B Himed, in Proc. IEEE ICASSP. Complex multitask Bayesian compressive sensing (Florence, Italy, 2014), pp. 3375–3379.
  18. A Weiss, B Friedlander, Eigenstructure methods for direction finding with sensor gain and phase uncertainties. Circuits Syst. Signal Process. 9(3), 271–300 (1990).
  19. A Weiss, B Friedlander, DOA and steering vector estimation using a partially calibrated array. IEEE Trans. Aerospace Electron. Syst. 32(3), 1047–1057 (1996).
  20. B Liao, S Chan, Direction finding with partly calibrated uniform linear arrays. IEEE Trans. Antennas Propagat. 60(2), 922–929 (2012).
  21. I Santamaría, PP Pokharel, JC Principe, Generalized correlation function: definition, properties and application to blind equalization. IEEE Trans. Signal Process. 54(6), 2187–2197 (2006).
  22. W Liu, P Pokharel, JC Principe, Correntropy: properties and applications in non-Gaussian signal processing. IEEE Trans. Signal Process. 55(11), 4634–4643 (2007).
  23. J Liang, D Wang, L Su, B Chen, H Chen, HC So, Robust MIMO radar target localization via nonconvex optimization. Signal Process. 122, 33–38 (2016).
  24. J Liang, Y Wang, X Zeng, Robust ellipse fitting via half-quadratic and semidefinite relaxation optimization. IEEE Trans. Image Process. 24(11), 4276–4286 (2015).
  25. YD Zhang, MG Amin, F Ahmad, B Himed, in Proc. IEEE Int. Workshop on Comput. Adv. Multi-Sens. Adaptive Process. DOA estimation using a sparse uniform linear array with two CW signals of co-prime frequencies (Saint Martin, 2013), pp. 404–407.
  26. X Yuan, B Hu, in Proc. Int. Conf. Machine Learning. Robust feature extraction via information theoretic learning (Montreal, Canada, 2009), pp. 1193–1200.


© The Author(s) 2016