# Subspace weighted *ℓ*_{2,1} minimization for sparse signal recovery

Chundi Zheng^{1}, Gang Li^{1} (Email author), Yimin Liu^{1} and Xiqin Wang^{1}

**2012**:98

https://doi.org/10.1186/1687-6180-2012-98

© Zheng et al; licensee Springer. 2012

**Received: **30 August 2011

**Accepted: **2 May 2012

**Published: **2 May 2012

## Abstract

In this article, we propose a weighted *ℓ*_{2,1} minimization algorithm for the jointly-sparse signal recovery problem. The proposed algorithm exploits the relationship between the noise subspace and the overcomplete basis matrix to design the weights: large weights are assigned to entries whose indices are more likely to be outside of the row support of the jointly sparse signals, so that those indices are expelled from the row support of the solution, and small weights are assigned to entries whose indices correspond to the row support, so that the solution tends to retain those indices. Compared with the regular *ℓ*_{2,1} minimization, the proposed algorithm not only further enhances the sparseness of the solution but also reduces the requirements on both the number of snapshots and the signal-to-noise ratio (SNR) for stable recovery. Both simulations and experiments on real data demonstrate that the proposed algorithm outperforms the *ℓ*_{1}-SVD algorithm, which straightforwardly exploits *ℓ*_{2,1} minimization, for both deterministic and random basis matrices.

### Keywords

sparse signal recovery; weighted *ℓ*_{2,1} minimization; multiple measurement vectors (MMV); direction-of-arrival estimation

## 1 Introduction

Given an overcomplete basis matrix **A** ∈ ℂ^{M×K} (*M* ≪ *K*) and the sparsity prior on the signal **x**, the sparse representation problem of the noiseless measurements with a single measurement vector (SMV), **y** = **Ax**, can be solved by the combinatorial *ℓ*_{0} problem

$$\min_{\mathbf{x}}\ \|\mathbf{x}\|_0\quad \text{s.t.}\quad \mathbf{y}=\mathbf{A}\mathbf{x},\qquad(1)$$

where **x** has only *P* nonzero components and ||**x**||_{0} = *P* represents the number of nonzero components of **x**. Unfortunately, the minimization problem (1) is NP-hard. A practicable way of solving the sparse representation problem is to employ the following convex relaxation:

$$\min_{\mathbf{x}}\ \|\mathbf{x}\|_1\quad \text{s.t.}\quad \mathbf{y}=\mathbf{A}\mathbf{x},$$

where $\|\mathbf{x}\|_1=\sum_{i=1}^{K}|x_i|$. As a surrogate for the *ℓ*_{0} norm, the regular *ℓ*_{1} norm is tractable, but it depends on the signal's coefficient magnitudes and dilutes the literal *ℓ*_{0} sparsity count, which may degrade performance in some situations. To avoid this magnitude dependence of the regular *ℓ*_{1} minimization, Candès et al. designed an iterative reweighted formulation of *ℓ*_{1} minimization to penalize nonzero coefficients more democratically: large weights discourage retaining entries that are more likely to be zero in the recovered signal, whereas small weights encourage retaining larger entries [12]. In other words, the essence of the iterative reweighted *ℓ*_{1} minimization is that large weights are assigned to those elements of **x** whose indices are more likely to be outside of the support [14, 16], which expels their indices from the support of the sparse solution and further consolidates the sparsity-encouraging nature of regular *ℓ*_{1} minimization [12–16]. The support of **x** is defined as Supp(**x**) = {*k* | *x*_{*k*} ≠ 0}. Incidentally, it has been proved that the iterative reweighted *ℓ*_{1} minimization can indeed improve both the recoverable sparsity thresholds and the recovery accuracy over the regular *ℓ*_{1} minimization [13–15].
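As an illustration of this reweighting scheme, the sketch below runs a few rounds of the Candès et al. iteration for the noiseless SMV case, solving each weighted *ℓ*_{1} subproblem as a linear program; the dimensions, the 10⁻³ damping constant, and the number of rounds are illustrative choices, not values from this article.

```python
import numpy as np
from scipy.optimize import linprog

def weighted_l1(A, y, w):
    """Solve min sum_i w_i*|x_i| s.t. A x = y as an LP over (x, t) with |x_i| <= t_i."""
    M, K = A.shape
    c = np.concatenate([np.zeros(K), w])            # objective: sum_i w_i * t_i
    A_ub = np.block([[np.eye(K), -np.eye(K)],       #  x - t <= 0
                     [-np.eye(K), -np.eye(K)]])     # -x - t <= 0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * K),
                  A_eq=np.hstack([A, np.zeros((M, K))]), b_eq=y,
                  bounds=[(None, None)] * K + [(0, None)] * K)
    return res.x[:K]

rng = np.random.default_rng(0)
M, K, P = 10, 30, 3
A = rng.standard_normal((M, K))
x_true = np.zeros(K)
support = rng.choice(K, size=P, replace=False)
x_true[support] = rng.standard_normal(P)
y = A @ x_true

w = np.ones(K)                          # first round = regular l1 minimization
for _ in range(4):
    x = weighted_l1(A, y, w)
    w = 1.0 / (np.abs(x) + 1e-3)        # large weight where |x_i| is small
```

After the final round, the *P* largest-magnitude entries of `x` typically coincide with the true support, which is exactly the democratizing effect described above.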

It is worth noting that the iterative reweighted *ℓ*_{1} minimization was designed for the SMV problem [12]. In fact, the multiple measurement vectors (MMV) problem is encountered in many applications of sparse signal representation, such as array processing [1, 6–11], magnetoencephalography [1], nonparametric spectrum analysis of time series [17], equalization of sparse communication channels [18], and so on. In the MMV case, the *ℓ*_{1}-SVD method [7–9] replaces the *ℓ*_{1} norm minimization with minimization of the mixed *ℓ*_{2,1} norm. Similar to the regular *ℓ*_{1} norm minimization, the *ℓ*_{2,1} minimization also suffers from the dependence on magnitude. Therefore, in the MMV case, how to design an appropriate weighting vector to cope with this magnitude dependence of the regular *ℓ*_{2,1} minimization is an interesting issue. In this article, we focus on the noisy MMV case and propose a jointly-sparse signal recovery algorithm based on the relationship between the noise subspace and the overcomplete basis for weighting the jointly sparse signals, which extends the essence of the iterative reweighted *ℓ*_{1} minimization in [12] from SMV to MMV.

Consider the noisy measurement model with multiple snapshots

$$\mathbf{y}(t)=\mathbf{A}\mathbf{x}(t)+\mathbf{n}(t),\quad t=1,\ldots,T,\qquad(3)$$

where the vector **n**(*t*) denotes an additive noise vector with zero mean and variance *σ*^{2}, **x**(*t*) is the jointly-sparse signal, and the support is independent of the snapshot *t* [1]. Without loss of generality, the additive noise **n**(*t*) is assumed to be uncorrelated with the jointly-sparse signals **x**(*t*). The row support of the jointly sparse signals **X** (**X** denotes the matrix form of **x**(*t*)) plays a key role in sparse signal recovery with MMV, and it is defined as $\mathrm{Supp}_{\mathrm{row}}(\mathbf{X})=\{k \mid \mathbf{x}_k^{(\ell_2)}\neq 0\}\triangleq\Lambda$ [6], where $\mathbf{x}_k^{(\ell_2)}$ denotes the *k*th entry of $\mathbf{x}^{(\ell_2)}$, and $\mathbf{x}^{(\ell_2)}$ is a column vector whose *k*th element is the *ℓ*_{2} norm of the *k*th row of **X**. Obviously, the index set Λ ⊆ {1, ..., *K*} and its cardinality |Λ| = *P*. Considering the relationship between the indices of the columns of **A** and the row support Λ, the overcomplete basis **A** can be divided into two submatrices, i.e., $\mathbf{A}=[\mathbf{A}_{\Lambda}\ \ \mathbf{A}_{\Lambda^{\mathrm{c}}}]$, where the indices of the columns of the submatrix **A**_{Λ} constitute the row support Λ, and the indices of the columns of the submatrix $\mathbf{A}_{\Lambda^{\mathrm{c}}}$ constitute the complement of Λ, i.e., Λ ∪ Λ^{c} = {1, ..., *K*} and Λ ∩ Λ^{c} = ∅. On the other hand, the subspace decomposition of {**y**(*t*), *t* = 1, ..., *T*} provides the signal subspace and the noise subspace. It is noted that the noise subspace is orthogonal to the column space of **A**_{Λ} [19–21] but not to the submatrix $\mathbf{A}_{\Lambda^{\mathrm{c}}}$.
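In code, the row support of **X** is just the set of indices whose rows have nonzero *ℓ*_{2} norm; a minimal sketch follows, with dimensions chosen arbitrarily for illustration.

```python
import numpy as np

def row_support(X, tol=1e-12):
    """Indices k for which the l2 norm of the k-th row of X is nonzero."""
    x_l2 = np.linalg.norm(X, axis=1)        # the vector x^(l2) of row norms
    return set(np.flatnonzero(x_l2 > tol))

rng = np.random.default_rng(0)
K, T, P = 30, 20, 4
X = np.zeros((K, T))
Lam = rng.choice(K, size=P, replace=False)  # true row support Lambda
X[Lam] = rng.standard_normal((P, T))        # P jointly sparse rows
```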

Based on this observation, this article designs a subspace weighted (SW) *ℓ*_{2,1} minimization algorithm for the MMV case, in which small and large weights are generated by using the orthogonality between the noise subspace and **A**_{Λ} and the lack of orthogonality between the noise subspace and $\mathbf{A}_{\Lambda^{\mathrm{c}}}$. We will show that the designed weights force the entries whose indices are more likely to be outside of the row support to be close to zero in the solution, and therefore further promote the sparseness of the solution and improve the recovery accuracy.

Although our proposed algorithm also exploits the singular value decomposition (SVD), the key difference from the *ℓ*_{1}-SVD algorithm [7–9] is that we not only use the SVD to reduce the computational complexity but also employ it to obtain the signal subspace and the noise subspace and, accordingly, to design the weights. Thus, we call the proposed algorithm the SW *ℓ*_{2,1}-SVD algorithm. The experiments show that the SW *ℓ*_{2,1}-SVD algorithm achieves better estimation performance than the *ℓ*_{1}-SVD algorithm, which straightforwardly exploits *ℓ*_{2,1} minimization. In addition, simulations and experiments on real data also demonstrate that the SW *ℓ*_{2,1}-SVD algorithm can be applied to DOA estimation, high resolution radar imaging, and other sparse recovery problems with a random basis matrix.

The remainder of this article is organized as follows. In the following section, we describe the sparse signal representation framework in the MMV case. In Section 3, we formulate the SW *ℓ*_{2,1}-SVD algorithm. In Section 4, the performance of the proposed method is explored with some examples. The summary is given in Section 5.

## 2 The *ℓ*_{1}-SVD algorithm

For recovering the jointly-sparse signals **X**, a feasible approach is to first determine the row support of the jointly sparse signals and then recover the signals by solving a least squares (LS) problem [1]. In addition, in some applications the problem of interest is to determine the row support rather than to recover **X** itself. Therefore, in this article we consider the problem of determining the row support of the jointly sparse signals.

Consider the singular value decomposition (SVD) of the measurement matrix **Y** [7–9]:

$$\mathbf{Y}=\mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^{\mathrm{H}},$$

where the superscript H denotes the conjugate transpose; the non-zero entries of **Σ** are the singular values of **Y**, sorted in descending order on the diagonal; and the columns of **U** and **V** are, respectively, the left and right singular vectors for the corresponding singular values. Let **D**_{P} = [**I**_{P}; **0**], where **I**_{P} is a *P* × *P* identity matrix and **0** is a (*T* − *P*) × *P* matrix of zeros. Moreover, let

$$\mathbf{Y}_{\mathrm{SV}}=\mathbf{Y}\mathbf{V}\mathbf{D}_{P}=\mathbf{A}\mathbf{X}_{\mathrm{SV}}+\mathbf{N}_{\mathrm{SV}},$$

where $\mathbf{X}_{\mathrm{SV}}=\mathbf{X}\mathbf{V}\mathbf{D}_{P}$ and $\mathbf{N}_{\mathrm{SV}}=\mathbf{N}\mathbf{V}\mathbf{D}_{P}$. Obviously, **X**_{SV} and **X** have the same row support.

Then the *ℓ*_{1}-SVD algorithm can be described as [7–9]:

$$\min_{\mathbf{X}_{\mathrm{SV}}}\ \|\mathbf{X}_{\mathrm{SV}}\|_{2,1}\quad \text{s.t.}\quad \|\mathbf{Y}_{\mathrm{SV}}-\mathbf{A}\mathbf{X}_{\mathrm{SV}}\|_{\mathrm{F}}^{2}\le \beta^{2},$$

where $\beta^2\ge\|\mathbf{N}_{\mathrm{SV}}\|_{\mathrm{F}}^2$ is a regularization parameter, the mixed *ℓ*_{2,1} norm is defined as $\|\cdot\|_{2,1}\triangleq\sum_{i}\big(\sum_{j}|[\cdot]_{i,j}|^2\big)^{1/2}$ [22], and ||·||_{F} denotes the Frobenius norm. In practice, we select the set of the indices of the *P* peaks in the solution as the estimate of the row support set $\widehat{\Lambda}$.
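The dimensionality reduction used by *ℓ*_{1}-SVD can be sketched numerically: keeping the first *P* right-singular directions replaces the *M* × *T* measurement matrix with an *M* × *P* one carrying the same row-space information. The matrix sizes below are illustrative only.

```python
import numpy as np

def l21_norm(X):
    """Mixed l2,1 norm: sum over rows of the row-wise l2 norms."""
    return float(np.sum(np.linalg.norm(X, axis=1)))

rng = np.random.default_rng(1)
M, T, P = 8, 100, 3
Y = rng.standard_normal((M, T))          # stand-in for the measurements

U, s, Vh = np.linalg.svd(Y, full_matrices=False)
Y_sv = Y @ Vh.conj().T[:, :P]            # Y V D_P: keep the P dominant columns
```

Since **Y** = **UΣV**^{H}, the product **YVD**_{P} equals the first *P* columns of **UΣ**, so nothing beyond the dominant singular directions is retained.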

## 3 Subspace weighted *ℓ*_{2,1}-SVD algorithm

Here, given the MMV case, we exploit the relationship between the noise subspace and the overcomplete basis to construct a weighting vector that improves the performance of the *ℓ*_{1}-SVD method. Incidentally, we already presented the SW method in [23], where we used an extra eigendecomposition of the sample correlation matrix to obtain the noise subspace. However, this extra eigendecomposition is unnecessary, because the SVD of the measurements **Y** already provides the subspace decomposition [17]. In addition, we address some related issues and extend the application of the SW method.

Partition the matrix of left singular vectors as

$$\mathbf{U}=[\mathbf{U}_s\ \ \mathbf{U}_n],$$

where **U**_{s} = [**u**_{1}, ..., **u**_{P}] and **U**_{n} = [**u**_{P+1}, ..., **u**_{M}] correspond to the signal subspace and the noise subspace, respectively [17].
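The observation driving the weighting can be checked numerically. With a synthetic MMV model (all sizes and the noise level below are arbitrary illustration values), the columns of **A** indexed by Λ are nearly orthogonal to the estimated noise subspace, while the remaining columns are not:

```python
import numpy as np

rng = np.random.default_rng(4)
M, K, P, T = 12, 40, 3, 400
A = rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))
A /= np.linalg.norm(A, axis=0)                 # unit-norm columns
Lam = rng.choice(K, size=P, replace=False)     # true row support
X = rng.standard_normal((P, T)) + 1j * rng.standard_normal((P, T))
noise = 0.05 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))
Y = A[:, Lam] @ X + noise

U, s, Vh = np.linalg.svd(Y, full_matrices=False)
U_s, U_n = U[:, :P], U[:, P:]                  # signal / noise subspaces

# ||a_i^H U_n||_2 is near zero on the support, clearly nonzero off it
proj = np.linalg.norm(A.conj().T @ U_n, axis=1)
```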

Let **B** = [*b*_{i,j}]; then *b*_{i,j} → 0 as the number of snapshots *T* → ∞. As a result, we have

$$\mathbf{W}_{i}^{(\ell_2)}=\|\mathbf{a}_i^{\mathrm{H}}\widehat{\mathbf{U}}_n\|_2,\quad i=1,\ldots,K,$$

where $\mathbf{W}_{\Lambda,i}^{(\ell_2)}\to 0$ and $\mathbf{W}_{\Lambda^{\mathrm{c}},i}^{(\ell_2)}\to C_i^{(\ell_2)}>0$ as *T* → ∞ [19]; here $\mathbf{W}_{\Lambda,i}^{(\ell_2)}$, $\mathbf{W}_{\Lambda^{\mathrm{c}},i}^{(\ell_2)}$, and $C_i^{(\ell_2)}$ denote the *i*th entries of $\mathbf{W}_{\Lambda}^{(\ell_2)}$, $\mathbf{W}_{\Lambda^{\mathrm{c}}}^{(\ell_2)}$, and $\mathbf{C}^{(\ell_2)}$, respectively. This is consistent with the methodology of the iterative reweighted *ℓ*_{1} minimization, i.e., large weights are assigned to the entries whose indices are more likely to be outside of the row support, whereas small weights are assigned to the entries whose indices are inside of the row support [12, 14, 16]. When a limited number of snapshots is used in practice, it is still guaranteed that the entries of $\mathbf{W}_{\Lambda}^{(\ell_2)}$ are much smaller than those of $\mathbf{W}_{\Lambda^{\mathrm{c}}}^{(\ell_2)}$ [19–21].

The SW *ℓ*_{2,1}-SVD algorithm then solves the weighted problem^{a}

$$\min_{\mathbf{X}_{\mathrm{SV}}}\ \|\mathbf{X}_{\mathrm{SV}}\|_{\mathbf{w};2,1}\quad \text{s.t.}\quad \|\mathbf{Y}_{\mathrm{SV}}-\mathbf{A}\mathbf{X}_{\mathrm{SV}}\|_{\mathrm{F}}^{2}\le\beta^{2},\qquad(12)$$

where $\|\cdot\|_{\mathbf{w};2,1}\triangleq\sum_{i}w_i\big(\sum_{j}|[\cdot]_{i,j}|^2\big)^{1/2}$ and *w*_{i} denotes the *i*th entry of **w**.
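The weighted mixed norm itself is straightforward to implement; the toy matrix and weights below are only for illustration:

```python
import numpy as np

def weighted_l21(X, w):
    """||X||_{w;2,1} = sum_i w_i * (sum_j |X_{i,j}|^2)^{1/2}."""
    return float(np.sum(np.asarray(w) * np.linalg.norm(X, axis=1)))

X = np.array([[3.0, 4.0],      # row l2 norm 5
              [0.0, 2.0]])     # row l2 norm 2
plain = weighted_l21(X, [1.0, 1.0])     # w = 1 recovers the plain l2,1 norm
skewed = weighted_l21(X, [0.1, 10.0])   # heavy penalty on the second row
```

In the SW scheme, a row believed to be off the support receives a large *w*_{i}, so any energy the solver places there is penalized heavily.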

Some related issues are discussed as follows.

*Discussion 1*: An interesting issue raised by Equation (12) is how many snapshots are enough for the SW method to work. The weighting process can be seen as a preprocessing step that obtains rough information about the row support and the weight values. The SW method employs the methodology of the MUSIC method [19] to perform this preprocessing. Therefore, the SW method is consistent with the MUSIC method in its requirement on the number of snapshots. The theoretical lower limit on the number of snapshots, *T* ≥ *P*, is shown in [24, 25] for the MUSIC method. In other words, the SW method is able to work with a very small number of snapshots.

*Discussion 2*: The prior information about the number of sources plays a key role in partitioning the noise subspace and the signal subspace. A correct partition of the noise subspace and the signal subspace is needed to obtain the optimal weights for the SW method. In practice, the number of sources can be determined by exploiting information theoretic criteria such as the Akaike information criterion (AIC) [26] and the minimum description length (MDL) criterion [27]. These methods require the eigenvalues of the sample correlation matrix $\widehat{\mathbf{R}}=\frac{1}{T}\mathbf{Y}{\mathbf{Y}}^{\mathrm{H}}$, which is Hermitian. We can use the SVD of **Y** to obtain the eigenvalues because the eigenvalue decomposition (EVD) of a Hermitian matrix is a special case of the SVD of a general matrix [17]. As a result, we have $\boldsymbol{\Sigma}_e=\frac{1}{T}\boldsymbol{\Sigma}\boldsymbol{\Sigma}^{\mathrm{H}}$, where the elements on the diagonal of **Σ**_{e} are the eigenvalues of $\widehat{\mathbf{R}}$. Therefore, the number of sources can be determined by combining the SVD of **Y** with an information theoretic criterion. However, in some situations, for example when the signal-to-noise ratio (SNR) is very low or the number of snapshots is very small, the classical AIC and MDL rules are likely to overestimate or underestimate the number of sources. Thus, another interesting issue is the robustness of the proposed SW *ℓ*_{2,1}-SVD algorithm to the estimate of the number of sources. Here we give a brief explanation of this robustness and leave a detailed discussion to future work. For one thing, both the proposed SW *ℓ*_{2,1}-SVD algorithm and the original *ℓ*_{1}-SVD algorithm [8] use information about the number of sources to reduce the computational complexity, and an incorrect determination of the number of sources does not incur catastrophic consequences there [8]. For another, the weighted *ℓ*_{2,1} minimization processing is not very sensitive to the determination of the number of sources. Consider two cases: the estimate of the number of sources $\widehat{P}=0$ (the extreme underestimation case) and $P<\widehat{P}\le M-1$ (the overestimation case), where the true number of sources is assumed to satisfy 0 < *P* < *M* − 1. In the former case the estimate of the noise subspace $\widehat{\mathbf{U}}_n$ is equal to **U**, and all weights are identical because **U** is a unitary matrix, i.e., $\|\mathbf{a}_i^{\mathrm{H}}\widehat{\mathbf{U}}_n\|_2=\|\mathbf{a}_j^{\mathrm{H}}\widehat{\mathbf{U}}_n\|_2$ for *i* ≠ *j*. As a result, SW *ℓ*_{2,1}-SVD reduces to *ℓ*_{1}-SVD in the extreme underestimation case. In the latter case $\widehat{\mathbf{U}}_n=[\mathbf{u}_{\widehat{P}+1},\ldots,\mathbf{u}_M]$ and the subspace spanned by its columns is a proper subspace of the noise subspace. Therefore, the orthogonality between $\widehat{\mathbf{U}}_n$ and **A**_{Λ} still holds, and the overestimation only incurs a gradual degradation of performance because the shrunken subspace dimension weakens the multiple averaging effect [28]. Obviously, SW *ℓ*_{2,1}-SVD can cope with the overestimation case. We illustrate this conclusion in Section 4.
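The identity $\boldsymbol{\Sigma}_e=\frac{1}{T}\boldsymbol{\Sigma}\boldsymbol{\Sigma}^{\mathrm{H}}$ can be verified directly: the eigenvalues of the sample correlation matrix equal the squared singular values of **Y** divided by *T*. The random data below is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
M, T = 6, 50
Y = rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T))

R_hat = Y @ Y.conj().T / T                      # Hermitian sample correlation
eig = np.sort(np.linalg.eigvalsh(R_hat))[::-1]  # eigenvalues, descending

s = np.linalg.svd(Y, compute_uv=False)          # singular values, descending
```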

## 4 Examples

In this section, we present some examples to demonstrate the performance of the proposed SW *ℓ*_{2,1}-SVD algorithm. We first address source localization with a uniform linear array (ULA) and a nonuniform linear array (NULA). Then we consider the sparse signal recovery problem with a random basis matrix in the presence of noise. Lastly, we employ real data to illustrate the performance of the proposed method. We use the CVX package to solve the convex optimization problems [29].

### 4.1 Source localization with ULA

We consider a ULA composed of *M* = 10 sensors separated by half a wavelength for the source localization problem. The grid is uniformly sampled with 0.1° from −90° to 90° (unless specifically stated). The overcomplete basis matrix **A** = [**a**(*ϕ*_{1}), ..., **a**(*ϕ*_{K})] is a deterministic basis matrix under this condition, where the vector **a**(*ϕ*_{k}) denotes the array steering vector and *ϕ*_{k} is the *k*th sampling grid point.
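For reference, a half-wavelength ULA steering matrix on this grid can be built in a few lines; the phase-sign convention below is one common choice, not necessarily the one used in the article.

```python
import numpy as np

M = 10                                               # sensors, half-wavelength spacing
phi = np.deg2rad(np.linspace(-90.0, 90.0, 1801))     # 0.1-degree sampling grid
m = np.arange(M)[:, None]                            # sensor index
A = np.exp(-1j * np.pi * m * np.sin(phi)[None, :])   # column k is a(phi_k)
```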

#### 4.1.1 Localization accuracy

Three far-field sources are located at *θ*_{1} = 12°, *θ*_{2} = 43°, and *θ*_{3} = 67°. The number of snapshots is *T* = 200. We compare the RMSE of the DOA estimates yielded by SW *ℓ*_{2,1}-SVD with those of *ℓ*_{1}-SVD, Root-MUSIC [30], and the CRB [20]. In Figure 1a, the three sources are assumed to be uncorrelated; in Figure 1b, the sources at *θ*_{1} = 12° and *θ*_{2} = 43° are coherent, whereas the source at *θ*_{3} = 67° is uncorrelated with the first two. The spatial smoothing technique with a 4-element smoothing subarray is employed for Root-MUSIC to decorrelate the coherent signals. As can be seen from Figure 1, the Root-MUSIC algorithm, a typical representative of the subspace-based algorithms, provides good accuracy in the uncorrelated-sources case, whereas it needs the spatial smoothing technique to obtain competitive performance in the coherent-sources case. The *ℓ*_{1}-SVD algorithm, which employs the regular *ℓ*_{2,1} minimization, yields acceptable DOA estimates in the coherent-sources case but does not compete with the subspace-based algorithms in the uncorrelated-sources case. Since the weighted *ℓ*_{2,1} minimization further consolidates the sparsity-encouraging nature of the regular *ℓ*_{2,1} minimization, SW *ℓ*_{2,1}-SVD improves the recovery accuracy. As a result, the presented SW *ℓ*_{2,1}-SVD algorithm gives competitive DOA estimates that are closer to the CRB for both uncorrelated and coherent sources.

#### 4.1.2 DOA tracking for mobile sources

This example illustrates DOA tracking for mobile sources with SW *ℓ*_{2,1}-SVD, in which there are two uncorrelated moving sources in the array's field of view and we need to estimate their DOAs using a few snapshots.^{b} We consider two moving sources in this simulation. One source moves linearly from 30° to 21°; the other moves first from 0° to 3°, then from 3° to 0°, and finally from 0° to 3°. The moving step is assumed to be 0.03° per snapshot over a course of 300 data snapshots. We use the three most recent snapshots to estimate the DOAs at SNR = 12 dB. As can be seen from Figure 2, the Root-MUSIC algorithm has some strong outliers; in particular, some strong outliers confuse the two sources over certain periods. Although the *ℓ*_{1}-SVD algorithm is also demonstrated to work with a few snapshots, it produces some outliers, which degrade the estimated trajectories. The presented SW *ℓ*_{2,1}-SVD has only a few slight outliers, which do not affect DOA tracking for mobile sources. This shows that the weighted *ℓ*_{2,1} minimization outperforms the regular *ℓ*_{2,1} minimization in the sense that only a few snapshots are required to obtain exact sparse recovery.

### 4.2 Source localization with NULA

We consider a NULA composed of *M* = 10 sensors that are randomly selected from a ULA with 20 sensors for the source localization problem. Here we only consider the coherent-sources case. Again, the sources at *θ*_{1} = 12° and *θ*_{2} = 43° are coherent, and the source at *θ*_{3} = 67° is uncorrelated with the other sources. In Figure 3, we show the spatial spectra obtained with MUSIC, *ℓ*_{1}-SVD, and SW *ℓ*_{2,1}-SVD in 100 Monte Carlo runs. In this experiment the SNR is 10 dB and the number of snapshots is 200. The spatial smoothing technique is valid for a ULA, but not for a NULA [17]. Therefore, as shown in Figure 3a, MUSIC has only one significant peak because spatial smoothing does not work for the NULA. However, both *ℓ*_{1}-SVD and SW *ℓ*_{2,1}-SVD still provide good estimates. In addition, it is also noted that the proposed SW *ℓ*_{2,1}-SVD algorithm has smaller variance than the *ℓ*_{1}-SVD algorithm, which is consistent with the conclusion of Section 4.1.1, i.e., the SW *ℓ*_{2,1}-SVD algorithm has better localization accuracy than the *ℓ*_{1}-SVD algorithm.

### 4.3 Sparse recovery for random basis matrix

The proposed SW *ℓ*_{2,1}-SVD can be extended to other instances with a random basis matrix that accord with the model (3) in the sparse representation framework. Here, we give some examples to demonstrate the validity of the extension. We assume an MMV sparse matrix **X** ∈ ℝ^{K×T} with *P* non-zero rows whose indices are chosen randomly and whose amplitudes are drawn from a standard normal distribution. The overcomplete basis matrix **A** ∈ ℂ^{M×K} is a random matrix with i.i.d. Gaussian entries or i.i.d. symmetric Bernoulli ±1 entries, and its columns are normalized. Our objective is to estimate the row support of the sparse signal **X** from the measurements **Y** = **AX** + **N**, where **N** is an additive white Gaussian noise matrix whose variance *σ*^{2} is determined from a specified SNR level as $\sigma^2=\frac{\|\mathbf{X}\|_{\mathrm{F}}^2}{T\times P}\times 10^{-\mathrm{SNR}/10}$. The parameter settings in the experiment are as follows: *M* = 12, *K* = 30, and *P* = 6. The set of the indices corresponding to the *P* peaks in the estimate $\widehat{\mathbf{X}}_{\mathrm{SV}}^{(\ell_2)}$ is regarded as the estimate of the row support of the jointly sparse signals. The estimate of the row support $\widehat{\Lambda}$ is considered correct if and only if it is fully consistent with the true row support Λ. As demonstrated in Figures 4 and 5, the SW *ℓ*_{2,1}-SVD algorithm improves the recovery performance, in particular reducing both the SNR requirement and the required number of snapshots for stable recovery. In addition, to explore robustness to the number of sources, we use the assumed number of sources (ANS) to perturb the SW *ℓ*_{2,1}-SVD algorithm. Figures 4 and 5 show that the SW *ℓ*_{2,1}-SVD algorithm achieves its optimal performance when the ANS equals the true number of sources *P* (i.e., ANS = *P* = 6). Furthermore, they also demonstrate that the SW *ℓ*_{2,1}-SVD algorithm can cope with both the overestimation and underestimation cases, and it still outperforms the *ℓ*_{1}-SVD algorithm in these cases.
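The noise calibration used in this experiment can be sketched as follows; the seed and the 10 dB level are arbitrary illustration values.

```python
import numpy as np

rng = np.random.default_rng(3)
M, K, T, P = 12, 30, 50, 6
X = np.zeros((K, T))
rows = rng.choice(K, size=P, replace=False)         # random row support
X[rows] = rng.standard_normal((P, T))               # standard-normal amplitudes

SNR_dB = 10.0
# sigma^2 = ||X||_F^2 / (T * P) * 10^(-SNR/10)
sigma2 = np.linalg.norm(X, 'fro')**2 / (T * P) * 10 ** (-SNR_dB / 10)
N = np.sqrt(sigma2) * rng.standard_normal((M, T))   # additive Gaussian noise
```

Because the nonzero entries have unit average power, `sigma2` lands near 10^{−SNR/10} ≈ 0.1 here.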

### 4.4 High resolution radar imaging via sparse recovery

Here we attempt to obtain high range resolution in data collected by a real stepped-frequency radar. The radar operates in the Ka band, the frequency step size Δ*f* is 8 MHz, and the pulse repetition interval (PRI) is 0.15 ms. In the observed scene, two corner reflectors separated by 0.4 m are fixed on a straight road and are collinear with the radar. The width of the transmitted pulses is 1 μs, and 64 pulses of data are collected. The data model can be written as $y(n)=\sum_{i=1}^{P}\beta_i r_i(n)+w(n)$, where *β*_{i} is the complex scattering intensity of the *i*th target, $r_i(n)=\exp\!\left[j\frac{4\pi}{v_c}\left(f_0+n\Delta f\right)R_i\right]$, *v*_{c} is the speed of light, *R*_{i} is the distance between the *i*th target and the radar, *f*_{0} is the initial frequency, and *w*(*n*) denotes the noise. Without loss of generality, in the recovery the term $\frac{2}{v_c}\left(f_0+n\Delta f\right)R_i$ is normalized (it is called the normalized frequency in what follows) and the interval [0, 1] is uniformly sampled with 1024 grid points.
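To make the data model concrete, the snippet below synthesizes noise-free stepped-frequency returns from two point scatterers. Only the 8 MHz step and the 64 pulses come from the text; the carrier frequency, ranges, and intensities are hypothetical stand-ins.

```python
import numpy as np

c = 3.0e8                                # propagation speed v_c (m/s)
f0, df, n_pulses = 35.0e9, 8.0e6, 64     # Ka-band carrier (assumed), 8 MHz step
R = np.array([100.0, 100.4])             # two scatterers 0.4 m apart (assumed ranges)
beta = np.array([1.0, 0.8])              # assumed scattering intensities
n = np.arange(n_pulses)

# superpose the returns r_i(n) = exp[j*4*pi/c * (f0 + n*df) * R_i]
y = sum(b * np.exp(1j * 4 * np.pi / c * (f0 + n * df) * r)
        for b, r in zip(beta, R))

# Fourier range resolution c / (2 * N * df) ~ 0.29 m, so the 0.4 m
# separation is just resolvable; sparse recovery aims to do better.
```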

The results obtained with MUSIC, *ℓ*_{1}-SVD, and SW *ℓ*_{2,1}-SVD are displayed, where the data window size is 32, i.e., 33 snapshots can be obtained by sliding the window. In this case all the mentioned algorithms clearly discern the two targets. Compared with the sparse recovery algorithms, however, the peaks obtained from MUSIC are blunt. In addition, *ℓ*_{1}-SVD has a strong spurious peak that may be confused with the true targets. The confidence interval influences how many spurious peaks exist in the solution. Although increasing the confidence interval can suppress spurious peaks to some extent, as shown in Figure 7, the outcome for *ℓ*_{1}-SVD remains unsatisfactory. It is worth noting that SW *ℓ*_{2,1}-SVD has only a very slight spurious peak, which can be ignored, especially for a higher confidence interval. Furthermore, the confidence interval determines the size of the regularization parameter, i.e., the higher the confidence interval, the larger the regularization parameter. Figures 6 and 7 clearly show that SW *ℓ*_{2,1}-SVD performs well even as the size of the regularization parameter varies widely.

We further examine the robustness of the SW *ℓ*_{2,1}-SVD algorithm to the estimate of the number of sources. In Figure 8, we artificially adjust the number of sources so that we can explore to what extent underestimating or overestimating the number of sources affects the performance of the SW *ℓ*_{2,1}-SVD algorithm. The data window size is fixed at 32, and the assumed number of sources (ANS) is used to determine the dimension of the noise subspace and to reduce the computational complexity (unless specially stated). Note that ANS = 0 is used only for determining the dimension of the noise subspace, and **D**_{P} = [**I**_{2}; **0**] is utilized to reduce the computational complexity in Figure 8a. As illustrated in Figure 8, in the extreme underestimation case, i.e., ANS = 0, the results of SW *ℓ*_{2,1}-SVD and *ℓ*_{1}-SVD are identical, which corroborates that *ℓ*_{1}-SVD is a special case of SW *ℓ*_{2,1}-SVD. In addition, we observe that SW *ℓ*_{2,1}-SVD clearly discerns the two targets in both the underestimation and overestimation cases, whereas MUSIC is invalid in the underestimation case. Therefore, this illustration shows that underestimation and overestimation of the number of sources do not incur catastrophic consequences for SW *ℓ*_{2,1}-SVD.

We also compare MUSIC, *ℓ*_{1}-SVD, and SW *ℓ*_{2,1}-SVD as a function of the data window size *M* (the number of snapshots is *T* = 64 − *M* + 1). When the data window size is too small or too large, MUSIC does not provide reliable results. Therefore, MUSIC needs a carefully selected data window size to obtain reliable estimates. Although the performance of the *ℓ*_{1}-SVD algorithm is better than that of MUSIC for large data window sizes (i.e., *M* ≥ 61), it does not give reliable results for small data window sizes (i.e., *M* ≤ 7). In the same experimental context, as expected, SW *ℓ*_{2,1}-SVD yields competitive performance and is robust to the data window size.

## 5 Conclusion

In this article, we proposed an effective weighted *ℓ*_{2,1} minimization algorithm that exploits the relationship between the noise subspace and the overcomplete basis matrix to obtain the weights for the jointly-sparse signal recovery problem. The proposed SW *ℓ*_{2,1}-SVD algorithm assigns large weights to the entries whose indices are more likely to be outside of the support, so that those indices are banished from the support. This further promotes the sparseness of the solution at the right positions. We provided experimental results verifying that, for both deterministic and random basis matrices, the proposed SW *ℓ*_{2,1}-SVD algorithm obtains better performance than the *ℓ*_{1}-SVD algorithm with fewer snapshots and lower SNR.

## Endnotes

The material in this article was presented in part at the 2011 International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2011), May 2011, Prague, Czech Republic. ^{a}In this context, we use the method introduced in [8] to determine the regularization parameter *β*^{2}, and the confidence interval that controls the size of *β*^{2} is set to 99% (unless specially stated). ^{b}Although the indices of the non-zero rows of **x**(*t*) depend on the snapshot *t* for moving sources, **x**(*t*) can still be treated as jointly-sparse signals provided that the number of snapshots *T* is very small.

## Declarations

### Acknowledgements

This work was supported in part by the National Natural Science Foundation of China under Grant 40901157, and in part by the National Basic Research Program of China (973 Program) under Grant 2010CB731901, and in part by the Doctoral Fund of Ministry of Education of China under Grant 200800031050, and in part by Tsinghua National Laboratory for Information Science and Technology (TNList) Cross-discipline Foundation.


## References

1. Cotter S, Rao B, Engan K, Kreutz-Delgado K: Sparse solutions to linear inverse problems with multiple measurement vectors. *IEEE Trans Signal Process* 2005, 53(7):2477-2488.
2. Chen J, Huo X: Theoretical results on sparse representations of multiple-measurement vectors. *IEEE Trans Signal Process* 2006, 54(12):4634-4643.
3. Fletcher A, Rangan S, Goyal V, Ramchandran K: Denoising by sparse approximation: error bounds based on rate-distortion theory. *EURASIP J Appl Signal Process* 2006, 2006:1-19.
4. Mishali M, Eldar Y: Reduce and boost: recovering arbitrary sets of jointly sparse vectors. *IEEE Trans Signal Process* 2008, 56(10):4692-4702.
5. Lv J, Fan Y: A unified approach to model selection and sparse recovery using regularized least squares. *Annals Stat* 2009, 37(6A):3498-3528. 10.1214/09-AOS683
6. Berg E, Friedlander M: Joint-sparse recovery from multiple measurements. *Computing Research Repository (CoRR)* 2009, abs/0904.2, 1-19.
7. Malioutov D, Cetin M, Willsky A: Source localization by enforcing sparsity through a Laplacian prior: an SVD-based approach. In *IEEE Workshop on Statistical Signal Processing*. St. Louis, MO, USA; 2003:573-576.
8. Malioutov D, Cetin M, Willsky A: A sparse signal reconstruction perspective for source localization with sensor arrays. *IEEE Trans Signal Process* 2005, 53(8):3010-3022.
9. Malioutov D: A sparse signal reconstruction perspective for source localization with sensor arrays. *Master's thesis*, Mass. Inst. Technol., Cambridge, MA; 2003. [http://ssg.mit.edu/~dmm/publications/malioutov_MS_thesis.pdf]
10. Zheng J, Kaveh M, Tsuji H: Sparse spectral fitting for direction of arrival and power estimation. In *IEEE/SP 15th Workshop on Statistical Signal Processing*. Cardiff, UK; 2009:429-432.
11. Tang Z, Blacquiere G, Leus G: Aliasing-free wideband beamforming using sparse signal representation. *IEEE Trans Signal Process* 2011, 59(7):3464-3469.
12. Candes E, Wakin M, Boyd S: Enhancing sparsity by reweighted *ℓ*_{1} minimization. *J Fourier Anal Appl* 2008, 14(5):877-905. 10.1007/s00041-008-9045-x
13. Wipf D, Nagarajan S: Iterative reweighted *ℓ*_{1} and *ℓ*_{2} methods for finding sparse solutions. *IEEE J Sel Top Signal Process* 2010, 4(2):317-329.
14. Xu W, Khajehnejad M, Avestimehr A, Hassibi B: Breaking through the thresholds: an analysis for iterative reweighted *ℓ*_{1} minimization via the Grassmann angle framework. In *2010 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)*. Dallas, TX, USA; 2010:5498-5501.
15. Needell D: Noisy signal recovery via iterative reweighted *ℓ*_{1}-minimization. In *Conference Record of the Forty-Third Asilomar Conference on Signals, Systems and Computers*. Pacific Grove, CA, USA; 2009:113-117.
16. Khajehnejad M, Xu W, Avestimehr A, Hassibi B: Analyzing weighted *ℓ*_{1} minimization for sparse recovery with nonuniform sparse models. *IEEE Trans Signal Process* 2011, 59(5):1985-2001.
17. Stoica P, Moses R: *Spectral Analysis of Signals*. Pearson/Prentice Hall, Upper Saddle River, NJ; 2005.
18. Fevrier I, Gelfand S, Fitz M: Reduced complexity decision feedback equalization for multipath channels with large delay spreads. *IEEE Trans Commun* 1999, 47(6):927-937. 10.1109/26.771349
19. Schmidt R: Multiple emitter location and signal parameter estimation. *IEEE Trans Antennas Propag* 1986, 34(3):276-280. 10.1109/TAP.1986.1143830
20. Stoica P, Arye N: MUSIC, maximum likelihood, and Cramer-Rao bound. *IEEE Trans Acoustics Speech Signal Process* 1989, 37(5):720-741. 10.1109/29.17564
21. Kaveh M, Barabell A: The statistical performance of the MUSIC and the minimum-norm algorithms in resolving plane waves in noise. *IEEE Trans Acoustics Speech Signal Process* 1986, 34(2):331-341. 10.1109/TASSP.1986.1164815
22. Kowalski M: Sparse regression using mixed norms.
*Appl Comput Harmonic Anal*2009, 27(3):303-324. 10.1016/j.acha.2009.05.006View ArticleGoogle Scholar - Zheng C, Li G, Zhang H, Wang X: An approach of DOA estimation using noise subspace weighted
*ℓ*_{1}minimization. In*2011 IEEE International Conference on Acoustics Speech and Signal Processing (ICASSP)*. Prague Czech; 2011:2856-2859.View ArticleGoogle Scholar - Paulraj A, Reddy V, Shan T, Kailath T: Performance analysis of the MUSIC algorithm with spatial smoothing in the presence of coherent sources. In
*IEEE Military Communications Conference-Communications-Computers: Teamed for the 90's, 1986 MILCOM*.*Volume 3*. Monterey, Calif, USA; 1986:41.5.1-41.5.5.Google Scholar - Sason E:
*Source localization based on sparse signal representation*. [http://sipl.technion.ac.il/new/Archive/Annual_Proj_Pres/sipl2010/Posters/Source%20localization.pdf] - Akaike H: A new look at the statistical model identification.
*IEEE Trans Automatic Control*1974, 19(6):716-723. 10.1109/TAC.1974.1100705MathSciNetView ArticleGoogle Scholar - Rissanen J: Modeling by shortest data description.
*Automatica*1978, 14(5):465-471. 10.1016/0005-1098(78)90005-5View ArticleGoogle Scholar - Orfanidis SJ:
*Optimum Signal Processing*. 2nd edition. 2007. [http://eceweb1.rutgers.edu/~orfanidi/osp2e/]Google Scholar - Grant M, Boyd S, CVX: Matlab software for disciplined convex programming.
*version 1.21*2010. [http://cvxr.com/cvx/]Google Scholar - Barabell A: Improving the resolution performance of eigenstructure-based direction-finding algorithms. In
*IEEE International Conference on Acoustics, Speech, Signal Processing*.*Volume 8*. Boston, Mass., USA; 1983:336-339.Google Scholar

## Copyright

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.