
# Sparse signal recovery with unknown signal sparsity

*EURASIP Journal on Advances in Signal Processing*
**volume 2014**, Article number: 178 (2014)

## Abstract

In this paper, we propose a detection-based orthogonal matching pursuit (DOMP) algorithm for compressive sensing. Unlike conventional greedy algorithms, our proposed algorithm does not rely on prior knowledge of the signal sparsity, which may not be available in some applications, e.g., sparse multipath channel estimation. DOMP runs a binary hypothesis test on the residual vector of OMP at each iteration and stops iterating when no signal component remains in the residual vector. Numerical experiments show the effectiveness of our proposed algorithm in both signal sparsity estimation and signal recovery.

## 1 Introduction

Compressive sensing (CS) [1, 2], a framework for solving under-determined systems, has drawn great research attention in recent years. The CS problem can be modeled as finding the sparse solution **h** of the equation

\mathit{y}=\mathit{X}\mathit{h}+\mathit{n}, (1)

where the observation **y** ∈ *R*^{m×1} is obtained by using the sensing matrix **X** ∈ *R*^{m×n} to measure the *k*-sparse signal **h** ∈ *R*^{n×1}, and **n** is the noise vector. In the CS framework, the sensing matrix **X** in (1) is a ‘fat’ matrix, i.e., *m* < *n*.

To find the sparse solution of **h**, i.e., to recover the sparse signal, one can adopt either convex relaxation-based methods, e.g., basis pursuit (BP) [3], or greedy algorithms, e.g., orthogonal matching pursuit (OMP) [4], regularized OMP (ROMP) [5], StOMP [6], etc. Greedy algorithms are often used for their low computational complexity and ease of implementation. To implement a greedy algorithm, one needs prior information on the signal's sparsity *k*. For example, in OMP and its variants, e.g., ROMP, the signal sparsity *k* must be specified so that the computation stops after *k* iterations. Other greedy algorithms such as subspace pursuit (SP) [7] also need to know the value of *k* so that exactly *k* candidate atoms can be selected at each iteration.
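As a reference point for the discussion above, here is a minimal OMP sketch (our illustration in Python/NumPy, not the authors' code) with the conventional stopping rule that assumes the sparsity *k* is known in advance:

```python
import numpy as np

def omp(X, y, k):
    """Recover a k-sparse signal from y = X h by orthogonal matching pursuit."""
    m, n = X.shape
    support, r = [], y.copy()
    coef = np.zeros(0)
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        i = int(np.argmax(np.abs(X.T @ r)))
        if i not in support:
            support.append(i)
        # Least-squares fit on the current support, then update the residual.
        Xs = X[:, support]
        coef, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        r = y - Xs @ coef
    h_hat = np.zeros(n)
    h_hat[support] = coef
    return h_hat, sorted(support)

rng = np.random.default_rng(0)
X = rng.standard_normal((64, 128))       # illustrative 64x256-style 'fat' matrix
h_true = np.zeros(128)
h_true[[5, 40, 99]] = 1.0                # a 3-sparse signal
y = X @ h_true                           # noiseless measurements
h_hat, supp = omp(X, y, k=3)
```

Note that the loop runs exactly *k* times; it is precisely this dependence on *k* that DOMP removes.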

Multipath channels, e.g., the underwater acoustic (UWA) channel in sonar systems [8] and the Rayleigh fading channel in wireless communication [9], can be modeled as FIR filters. These channels can be viewed as sparse signals according to experimental data [10, 11]. Thus, CS can be applied to channel estimation. In [12], the authors showed that the CS approach achieves better estimation performance than conventional methods.

In reality, the number of channel taps, i.e., the signal sparsity, is usually unknown. Therefore, the greedy algorithms cannot be applied directly. In [13], the authors proposed the sparsity adaptive matching pursuit (SAMP), which does not need the signal sparsity information; however, SAMP still needs a threshold to stop the iteration, and its performance is sensitive to the threshold selection. In [14], a stopping rule for OMP under noise, i.e., stopping when the residual {\mathit{r}}_{t} at the *t*th iteration satisfies \parallel {\mathit{r}}_{t}{\parallel}_{{\ell}_{2}}<\parallel \mathit{n}{\parallel}_{{\ell}_{2}}, provides a theoretical guarantee for sparse signal recovery.

In this paper, we propose the detection-based orthogonal matching pursuit (DOMP) algorithm, which systematically provides the stopping threshold based on a signal detection criterion. This is a more general threshold-finding approach for stopping OMP than the threshold proposed in [14]. Since the proposed DOMP is able to recover a sparse signal without knowing its sparsity, it can be applied to sparse channel estimation.

The rest of this paper is organized as follows. The analysis of the residual vector of OMP is presented in Section 2. Section 3 discusses the hypothesis test on the residual vector. In Section 4, the threshold determined by a given false alarm probability (*P*_{FA}) is discussed. The efficiency of the proposed stopping criterion is shown by numerical experiments in Section 5.

## 2 Analysis of residual vector in OMP

In this section, we show a property of the residual vector of OMP that motivates us to apply signal detection techniques to determine the stopping criterion of OMP. In this study, we assume the sensing matrix **X** satisfies the RIP condition {\delta}_{k+1}<\frac{1}{\sqrt{k}+1}, which guarantees perfect recovery in the noiseless case [15].

OMP can be viewed as a successive interference cancellation method, i.e., at each iteration the strongest signal component is subtracted from the residual vector. We denote the residual vector at the *t*th iteration by *r*_{t}, the support of the signal at the *t*th iteration by *S*_{t}, the sub-matrix formed by the columns of **X** indexed by *S*_{t} by {\mathit{X}}_{{S}_{t}}, and the remaining columns of **X** by {\mathit{X}}_{\stackrel{\u0304}{{S}_{t}}}.

At the *t*th iteration, the column index *i* of {\mathit{X}}_{\stackrel{\u0304}{{S}_{t}}} with the highest correlation with the residual vector *r*_{t} is added to the support set, i.e., {S}_{t}={S}_{t-1}\bigcup \{i\}. After updating the signal support, the residual vector is updated by projecting **y** onto the orthogonal complement of the range of {\mathit{X}}_{{S}_{t}}, i.e.,

{\mathit{r}}_{t}={\mathit{P}}_{t}^{\perp}\mathit{y}, (2)

where {\mathit{P}}_{t}^{\perp}=\mathit{I}-{\mathit{P}}_{t} is the orthogonal projector onto the orthogonal complement of the range of {\mathit{X}}_{{S}_{t}} and {\mathit{P}}_{t}={\mathit{X}}_{t}{\left({\mathit{X}}_{t}^{T}{\mathit{X}}_{t}\right)}^{-1}{\mathit{X}}_{t}^{T}\in {R}^{m\times m}. Thus, with \mathit{y}=\mathit{X}\mathit{h}+\mathit{n}, the residual vector *r*_{t} after *t* iterations can be expressed as

{\mathit{r}}_{t}={\mathit{P}}_{t}^{\perp}\left(\mathit{X}\mathit{h}+\mathit{n}\right). (3)

We denote the support of the *k*-sparse signal **h** by supp(**h**), i.e., \text{supp}\left(\mathit{h}\right):=\{i\in \{1,2,\dots ,n\}\mid \mathit{h}\left(i\right)\ne 0\}, where *h*(*i*) is the *i*th element of the vector **h**. When the support obtained via the iterations is a super-set of the actual support of the signal, i.e., \text{supp}\left(\mathit{h}\right)\subset {S}_{t}, there is no signal component in the residual vector *r*_{t}.

Thus, we can adopt a signal detection method to test whether a signal component exists in the residual vector after each iteration. Since one entry of **h**, indexed by the column of {\mathit{X}}_{\stackrel{\u0304}{{S}_{t}}} with the largest correlation, is set to zero at the *t*th iteration, we can define the signal component remaining in the residual *r*_{t} after *t* iterations as

{\mathit{h}}_{t}\left(i\right)=\left\{\begin{array}{ll}0,& i\in {S}_{t},\\ \mathit{h}\left(i\right),& \text{otherwise}.\end{array}\right. (4)

Then, since {\mathit{P}}_{t}^{\perp}{\mathit{X}}_{{S}_{t}}=0, (3) is equivalent to

{\mathit{r}}_{t}={\mathit{P}}_{t}^{\perp}\left(\mathit{X}{\mathit{h}}_{t}+\mathit{n}\right). (5)

According to the definition of RIP [16], for a real signal **h**, **X** obeys

\left(1-{\delta}_{k}\right)\parallel \mathit{h}{\parallel}_{{\ell}_{2}}^{2}\le \parallel {\mathit{X}}_{{S}_{t}}\mathit{h}{\parallel}_{{\ell}_{2}}^{2}\le \left(1+{\delta}_{k}\right)\parallel \mathit{h}{\parallel}_{{\ell}_{2}}^{2}

for all subsets {S}_{t} with \parallel {S}_{t}{\parallel}_{{\ell}_{0}}<k. Since we assumed that {\delta}_{k+1}<\frac{1}{\sqrt{k}+1}, the sensing matrix **X** meets the RIP condition with {\delta}_{k}<\frac{1}{\sqrt{k-1}+1}\le 1, i.e., *δ*_{k}<1. Therefore, \parallel {\mathit{X}}_{{S}_{t}}\mathit{h}{\parallel}_{{\ell}_{2}}^{2}\ge (1-{\delta}_{k})\parallel \mathit{h}{\parallel}_{{\ell}_{2}}^{2}>0 for any **h**≠0. In other words, the equation {\mathit{X}}_{{S}_{t}}\mathit{h}=0 has no nonzero solution, i.e., any *t* columns of **X** are linearly independent. We thus have \mathit{\text{rank}}\left({\mathit{P}}_{t}\right)=t. Since \mathit{\text{rank}}\left({\mathit{P}}_{t}\right)+\mathit{\text{rank}}\left({\mathit{P}}_{t}^{\perp}\right)=m, {\mathit{P}}_{t}^{\perp} is not a full-rank matrix, and the vector {\mathit{P}}_{t}^{\perp}\mathit{y} follows a degenerate multivariate normal distribution. To derive the distribution of the residual vector *r*_{t}, the residual is further projected onto a subspace formed by taking any *m*−*t* rows of {\mathit{P}}_{t}^{\perp}. Since any *m*−*t* rows of {\mathit{P}}_{t}^{\perp} are linearly independent, i.e., \mathit{\text{rank}}\left({\mathit{P}}_{t}^{\perp}\right)=m-t, the sub-matrix formed by these rows has full row rank.

We denote the projection matrix by {\mathit{P}}_{m-t}={\mathit{M}}_{m-t}{\mathit{P}}_{t}^{\perp}, where *M*_{m−t} is a selection matrix that takes *m*−*t* rows from another matrix; for example, *M*_{3} can be the matrix formed by the first three rows of the *m*×*m* identity matrix. Here, *m* is the number of measurements, and *t* is the number of iterations. {\mathit{P}}_{m-t}={\mathit{M}}_{m-t}{\mathit{P}}_{t}^{\perp} is the sub-matrix formed by *m*−*t* rows of {\mathit{P}}_{t}^{\perp}. *P*_{m−t} projects *r*_{t} onto a subspace of rank *m*−*t*, that is, {\mathit{z}}_{t}={\mathit{P}}_{m-t}\cdot {\mathit{r}}_{t}. Since any *m*−*t* rows of {\mathit{P}}_{t}^{\perp} are linearly independent and the other *t* rows can be linearly represented by these *m*−*t* rows, any *M*_{m−t} with full row rank projects the residual vector *r*_{t} onto the identical subspace. Thus, we can take any *m*−*t* rows of {\mathit{P}}_{t}^{\perp} for the further projection.
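The rank argument above can be checked numerically. The following sketch (our illustration; the dimensions *m* = 16 and *t* = 3 are arbitrary) builds the projector {\mathit{P}}_{t}^{\perp}=\mathit{I}-{\mathit{P}}_{t} for a random support and verifies that it has rank *m*−*t*, and that keeping *m*−*t* of its rows yields a full-row-rank sub-matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
m, t = 16, 3
Xs = rng.standard_normal((m, t))              # X_{S_t}: t selected columns
P_t = Xs @ np.linalg.inv(Xs.T @ Xs) @ Xs.T    # projector onto span(X_{S_t})
P_perp = np.eye(m) - P_t                      # P_t_perp = I - P_t

rank_perp = int(np.linalg.matrix_rank(P_perp))  # expected: m - t
P_mt = P_perp[: m - t, :]                     # an M_{m-t}: keep the first m-t rows
rank_sub = int(np.linalg.matrix_rank(P_mt))   # expected full row rank: m - t
```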

Define {\mathit{C}}_{m-t}:={\mathit{P}}_{m-t}{\mathit{P}}_{m-t}^{T}. If there is only noise in the residual vector, that is, {\mathit{z}}_{t}={\mathit{P}}_{m-t}\mathit{n}, then the projected residual *z*_{t} follows

{\mathit{z}}_{t}\sim \mathcal{N}\left(0,{\sigma}^{2}{\mathit{C}}_{m-t}\right). (6)

If the residual vector consists of a signal component and noise, i.e., {\mathit{z}}_{t}={\mathit{P}}_{m-t}\left(\mathit{X}{\mathit{h}}_{t}+\mathit{n}\right), the distribution of *z*_{t} is

{\mathit{z}}_{t}\sim \mathcal{N}\left(0,\left({\theta}_{t}+{\sigma}^{2}\right){\mathit{C}}_{m-t}\right), (7)

where {\theta}_{t}=\parallel {\mathit{h}}_{t}{\parallel}_{{\ell}_{2}} is an unknown parameter.

## 3 Hypothesis test on residual vector

With the PDF of the residual vector known, we can form a binary hypothesis test on whether signal components remain in the residual vector after *t* iterations:

{H}_{0}:\ {\mathit{z}}_{t}={\mathit{P}}_{m-t}\mathit{n},\qquad {H}_{1}:\ {\mathit{z}}_{t}={\mathit{P}}_{m-t}\left(\mathit{X}{\mathit{h}}_{t}+\mathit{n}\right). (8)

If *H*_{0} is decided, the iteration stops. Since one entry of the signal *h*_{t} is set to zero at each iteration, \parallel {\mathit{h}}_{t}{\parallel}_{{\ell}_{2}} decreases from iteration to iteration, and it needs to be estimated.

According to (6) and (7), the PDFs of the residual vector under *H*_{0} and *H*_{1} are given respectively by

p\left({\mathit{z}}_{t};{H}_{0}\right)=\frac{1}{{\left(2\pi \right)}^{\frac{m-t}{2}}{\left|{\mathit{C}}_{{H}_{0}}\right|}^{\frac{1}{2}}}exp\left(-\frac{1}{2}{\mathit{z}}_{t}^{T}{\mathit{C}}_{{H}_{0}}^{-1}{\mathit{z}}_{t}\right), (9)

p\left({\mathit{z}}_{t};{\theta}_{t},{H}_{1}\right)=\frac{1}{{\left(2\pi \right)}^{\frac{m-t}{2}}{\left|{\mathit{C}}_{{H}_{1}}\right|}^{\frac{1}{2}}}exp\left(-\frac{1}{2}{\mathit{z}}_{t}^{T}{\mathit{C}}_{{H}_{1}}^{-1}{\mathit{z}}_{t}\right), (10)

where {\mathit{C}}_{{H}_{0}}={\sigma}^{2}{\mathit{C}}_{m-t} and {\mathit{C}}_{{H}_{1}}=\left({\theta}_{t}+{\sigma}^{2}\right){\mathit{C}}_{m-t}. The binary hypothesis test can then be conducted using the generalized likelihood ratio test (GLRT) [17]: *H*_{1} is decided if the following inequality holds,

\frac{p\left({\mathit{z}}_{t};{\widehat{\theta}}_{t},{H}_{1}\right)}{p\left({\mathit{z}}_{t};{H}_{0}\right)}>\gamma, (11)

where {\widehat{\theta}}_{t} is the maximum likelihood estimate (MLE) of *θ*_{t} at each iteration,

{\widehat{\theta}}_{t}=\mathit{\text{max}}\left(\frac{{\mathit{z}}_{t}^{T}{\mathit{C}}_{m-t}^{-1}{\mathit{z}}_{t}}{m-t}-{\sigma}^{2},\ 0\right). (12)

When {\widehat{\theta}}_{t}=0, i.e., no signal component exists, the iteration stops; otherwise, {\widehat{\theta}}_{t} is plugged into (11) for the further test. After plugging in {\widehat{\theta}}_{t} and simplifying, the test becomes

\frac{m-t}{2}\left(\frac{{\mathit{z}}_{t}^{T}{\mathit{C}}_{m-t}^{-1}{\mathit{z}}_{t}}{{\sigma}^{2}(m-t)}-ln\frac{{\mathit{z}}_{t}^{T}{\mathit{C}}_{m-t}^{-1}{\mathit{z}}_{t}}{{\sigma}^{2}(m-t)}-1\right)>ln\gamma. (13)

Since the function g\left(x\right)=x-lnx-1 in (13) is monotonically increasing in *x* for *x*>1, and its inverse function *g*^{−1} exists for *x*>1, (13) can be rewritten as

\frac{{\mathit{z}}_{t}^{T}{\mathit{C}}_{m-t}^{-1}{\mathit{z}}_{t}}{{\sigma}^{2}(m-t)}>{g}^{-1}\left(\frac{2ln\gamma}{m-t}\right). (14)

In (14), since \frac{{\mathit{z}}_{t}^{T}{\mathit{C}}_{m-t}^{-1}{\mathit{z}}_{t}}{(m-t)}-{\sigma}^{2}>0, we have \frac{{\mathit{z}}_{t}^{T}{\mathit{C}}_{m-t}^{-1}{\mathit{z}}_{t}}{{\sigma}^{2}(m-t)}>1. Therefore, (14) simplifies to

{\mathit{z}}_{t}^{T}{\mathit{C}}_{m-t}^{-1}{\mathit{z}}_{t}>{\sigma}^{2}(m-t)\cdot {g}^{-1}\left(\frac{2ln\gamma}{m-t}\right)=:{\gamma}_{t}. (15)

Finally, we obtain the detector T\left({\mathit{z}}_{t}\right)={\mathit{z}}_{t}^{T}{\mathit{C}}_{m-t}^{-1}{\mathit{z}}_{t} and choose *H*_{1} if

T\left({\mathit{z}}_{t}\right)>{\gamma}_{t}. (16)

In other words, when *T*(*z*_{t}) is greater than the threshold *γ*_{t}, a signal component remains in the residual vector, and the iteration should continue.

## 4 Threshold selection

Threshold selection is crucial in the binary hypothesis test. We use the constant false alarm (CFA) criterion to determine the value of the threshold *γ*_{t}. Recall that the detector has the quadratic form *T*=**v**^{T}**B****v**, where **B** is a symmetric *n*×*n* matrix and **v** is an *n*×1 vector following \mathcal{N}(0,\mathit{C}). With **B**=*C*^{−1}, *T* follows the chi-square distribution with *n* degrees of freedom. Thus, we have

\frac{T\left({\mathit{z}}_{t}\right)}{{\sigma}^{2}}\sim {\chi}_{m-t}^{2}\ \ \text{under}\ {H}_{0},\qquad \frac{T\left({\mathit{z}}_{t}\right)}{{\theta}_{t}+{\sigma}^{2}}\sim {\chi}_{m-t}^{2}\ \ \text{under}\ {H}_{1}.

Therefore, the false alarm probability and detection probability are given by

{P}_{\mathit{\text{FA}}}={Q}_{{\chi}_{m-t}^{2}}\left(\frac{{\gamma}_{t}}{{\sigma}^{2}}\right), (17)

{P}_{D}={Q}_{{\chi}_{m-t}^{2}}\left(\frac{{\gamma}_{t}}{{\theta}_{t}+{\sigma}^{2}}\right), (18)

where {Q}_{{\chi}_{v}^{2}}\left(a\right) is the right-tail probability of the chi-square distribution {\chi}_{v}^{2}, given for odd *v* by

{Q}_{{\chi}_{v}^{2}}\left(a\right)=2Q\left(\sqrt{a}\right)+f\left(a\right),

where f\left(a\right)=\frac{exp\left(-\frac{1}{2}a\right)}{\sqrt{\pi}}\sum _{k=1}^{\frac{v-1}{2}}\frac{(k-1)!{\left(2a\right)}^{k-\frac{1}{2}}}{(2k-1)!} and Q\left(\sqrt{a}\right)={\int}_{\sqrt{a}}^{\infty}\frac{1}{\sqrt{2\pi}}exp\left(-\frac{1}{2}{t}^{2}\right)\mathrm{d}t.

The stopping threshold *γ*_{t} in (17) can be calculated using numerical methods [17]. Our proposed DOMP is summarized in Algorithm 1.
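Since *T*(*z*_{t})/σ² is chi-square with *m*−*t* degrees of freedom under *H*_{0}, the CFA threshold solves {P}_{\mathit{\text{FA}}}={Q}_{{\chi}_{m-t}^{2}}({\gamma}_{t}/{\sigma}^{2}). A sketch of this computation (our illustration, using SciPy's chi-square tail inverse rather than the authors' numerical method), with a Monte-Carlo check of the empirical false-alarm rate:

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(3)
m, t, sigma, p_fa = 32, 4, 1.0, 0.05
# gamma_t / sigma^2 is the right-tail inverse of the chi-square CDF at P_FA.
gamma_t = sigma**2 * chi2.isf(p_fa, df=m - t)

Xs = rng.standard_normal((m, t))
P_perp = np.eye(m) - Xs @ np.linalg.inv(Xs.T @ Xs) @ Xs.T
P_mt = P_perp[: m - t, :]
C_inv = np.linalg.inv(P_mt @ P_mt.T)              # C_{m-t}^{-1}

# Monte-Carlo estimate of the false-alarm rate under H_0 (noise-only residuals).
trials, alarms = 20000, 0
for _ in range(trials):
    z = P_mt @ (sigma * rng.standard_normal(m))
    if z @ C_inv @ z > gamma_t:                   # T(z) > gamma_t -> decide H_1
        alarms += 1
emp_pfa = alarms / trials                          # should be close to p_fa
```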

## 5 Numerical results

In this section, we present numerical results for the proposed DOMP algorithm. To evaluate the performance of DOMP, we define the mean square error (MSE) of the estimated vector as

\mathit{\text{MSE}}=\frac{1}{N}\sum _{i=1}^{N}\parallel {\widehat{\mathit{h}}}_{i}-\mathit{h}{\parallel}_{{\ell}_{2}}^{2},

where {\widehat{\mathit{h}}}_{i} is the recovered **h** of the *i*th experiment, and *N* is the number of experiments. *N* is set to 5,000 in all our numerical experiments.

The detector *T*(*z*_{t}) of DOMP checks whether there is signal in the residual at each iteration. First, we show the detection performance of *T*(*z*_{t}) on the residuals at each iteration. In this test, the sensing matrix is a 128×256 matrix whose elements follow the i.i.d. Gaussian distribution \mathcal{N}(0,1). A 3-sparse signal, whose nonzero elements are all ones, is sensed. For each *P*_{FA}, we perform 1,000 trials. The residual at the *i*th iteration is denoted by *r*_{i}, and the curves of logarithmically scaled (dB) *P*_{FA} versus *P*_{D} at each iteration for different SNRs are shown in Figure 1. The detection probabilities of signal components are high for the first two iterations (when signal components exist in the residual) and low after three iterations (when the residual has no signal component) for *P*_{FA} between −30 and −10 dB. In other words, *P*_{FA} between about 0.001 and 0.1 provides a good tradeoff between *P*_{FA} and *P*_{D}.

We then compare the support recovery rate and the MSE of the recovered signal using 1) OMP with the sparsity *k* known; 2) OMP with unknown sparsity and the stopping rule \parallel {\mathit{r}}_{t}{\parallel}_{{\ell}_{2}}<\parallel \mathit{n}{\parallel}_{{\ell}_{2}} proposed in [14]; and 3) DOMP with different false alarm probabilities, i.e., *P*_{FA}=0.05, *P*_{FA}=0.01, and *P*_{FA}=0.001. The sensing matrix is a Gaussian matrix whose elements follow the i.i.d. Gaussian distribution \mathcal{N}(0,1). The nonzero elements of the 256-dimensional signal are set to one. In Figures 2 and 3, the performance of these methods is shown as the number of measurements (the dimension of **y**) increases, with the sparsity of the signal set to 4 and SNR = 5 dB. The results show that OMP with the sparsity *k* known has the best performance, followed by DOMP and then OMP with the stopping rule \parallel {\mathit{r}}_{t}{\parallel}_{{\ell}_{2}}<\parallel \mathit{n}{\parallel}_{{\ell}_{2}}. Among the DOMP variants, the successful support recovery rate increases for lower *P*_{FA} as the number of measurements increases; e.g., DOMP with *P*_{FA}=0.01 outperforms DOMP with *P*_{FA}=0.05 when the number of measurements is greater than 60. Note that in Figure 3, we can observe crossovers between the DOMP curves for different *P*_{FA} as the dimension of **y** increases. This is because the detection probability is an increasing function of both the number of measurements and *P*_{FA}. With a small number of measurements, the effect of having fewer measurements dominates the effect of a lower *P*_{FA}; thus, a higher *P*_{FA} yields better support recovery performance for DOMP when the number of measurements is small. As the number of measurements increases, the effect of having more measurements dominates, and DOMP with a lower *P*_{FA} performs better.

In Figures 4 and 5, we show the performance as the sparsity of the signal increases, with the number of measurements fixed at 128. The figures again show that OMP with known sparsity outperforms the other methods. Our proposed DOMP outperforms OMP with the stopping rule \parallel {\mathit{r}}_{t}{\parallel}_{{\ell}_{2}}<\parallel \mathit{n}{\parallel}_{{\ell}_{2}}. DOMP with a lower *P*_{FA} has a higher support recovery rate and lower MSE.

It is worth noting that, in reality, the sparsity information may not be known a priori, so one may not be able to apply OMP directly. The minimum description length (MDL) criterion is often used in this scenario to estimate the sparsity of the signal [18]: the eigenvalues of the sample covariance matrix *R* of the received signal **y**, denoted by *λ*_{i}, are used to estimate the signal sparsity as

\widehat{k}=arg\underset{k}{min}\ \mathit{\text{MDL}}\left(k\right),

where MDL(*k*) is a function of the eigenvalues *λ*_{i} given in [18].
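The MDL(*k*) expression is not reproduced in this copy; the following sketch implements the standard information-theoretic criterion of [18] as our reconstruction (the exact constants may differ from the authors' formulation): the estimate is the *k* minimizing a goodness-of-fit term on the *p*−*k* smallest eigenvalues plus a model-complexity penalty.

```python
import numpy as np

def mdl_order(eigvals, n_snapshots):
    """Estimate model order k from the eigenvalues of a sample covariance.

    Uses the Wax-Kailath MDL criterion: the fit term compares the geometric
    and arithmetic means of the p - k smallest eigenvalues.
    """
    lam = np.sort(np.asarray(eigvals, dtype=float))[::-1]  # descending
    p = len(lam)
    scores = []
    for k in range(p):                      # candidate orders 0 .. p-1
        tail = lam[k:]                      # the p - k smallest eigenvalues
        geo = np.exp(np.mean(np.log(tail)))
        arith = np.mean(tail)
        fit = -n_snapshots * (p - k) * np.log(geo / arith)
        penalty = 0.5 * k * (2 * p - k) * np.log(n_snapshots)
        scores.append(fit + penalty)
    return int(np.argmin(scores))

# Toy check: 2 dominant eigenvalues over a flat noise floor.
k_hat = mdl_order([10.0, 8.0, 1.0, 1.0, 1.0, 1.0], n_snapshots=200)
```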

We now compare the accuracy of the signal sparsity estimation by DOMP and MDL. In this experiment, the signal dimension is set to *n*=256, and the sensing matrix **X** is a 128×256 Gaussian matrix whose entries are i.i.d. Gaussian with mean zero and variance one. The *k*-sparse signal **h** is generated by randomly setting *k* entries of **h** to one and the other entries to zero. The experiment is conducted at SNR = 5 dB.

The estimated signal sparsity is shown in Table 1 for DOMP and MDL. We observe that our proposed detection method gives accurate sparsity estimates for signals of low sparsity. Note that the estimated sparsity equals the number of iterations of DOMP; therefore, the average number of iterations for DOMP can also be read from Table 1, and it matches the signal's sparsity in the low-sparsity case.

Adopting the scheme shown in Figure 6, we compare the performance of DOMP with other greedy pursuit algorithms, OMP, CoSaMP, ROMP, and SP, whose signal sparsity is estimated using the MDL criterion. As in the previous experiment, we choose the sensing matrix to be a Gaussian matrix whose entries follow the i.i.d. Gaussian distribution \mathcal{N}(0,1). The support *S* of the signal is randomly selected, and the amplitudes of the nonzero elements of the sparse signal **h** are drawn from the standard Gaussian distribution. The noise **n** is zero-mean Gaussian noise. Figure 7 shows the signal recovery MSE for different numbers of measurements (the dimension of **y**). We observe that the estimation error of DOMP is less than that of the other greedy pursuit algorithms with MDL when the number of measurements is less than 40.

One application of DOMP is channel estimation [12], since the number of channel taps is usually unknown. We compare the performance of estimating a Rayleigh fading channel by MDL-based OMP and by DOMP, following the same scheme shown in Figure 6. The Rayleigh fading channel **h** is given by the COST 207 model [19] with the parameters shown in Table 2. Since the sensing matrix **X** in the wireless channel model **y**=**X****h**+**n** is a Toeplitz matrix constructed from the transmitted sequence **x** [20], we construct the sensing matrix by circular shifts of a Gaussian vector whose elements are drawn from \mathcal{N}(0,1), which models the correlator output of a spread spectrum signal. Figures 8 and 9 show the MSE of the estimated BUx6 and RAx4 channels using DOMP and MDL-based OMP, respectively. These two figures show that the estimation error of DOMP is less than that of MDL-based OMP when the number of measurements *m* is less than 80.

## 6 Conclusions

In this paper, we proposed a detection-based OMP algorithm called DOMP. This method forms a GLRT at each iteration to test whether a signal component exists in the residual vector; when no signal component exists, the algorithm stops iterating. Here, we applied this detection-based method to OMP, a classical greedy algorithm. We envision that the detection-based method can be applied to other greedy algorithms as an iteration stopping rule.

The numerical results show that the proposed DOMP, without prior sparsity information, outperforms the alternatives at low SNR or with few measurements. We used COST 207 wireless channel estimation as an example to show the effectiveness of DOMP. DOMP can be readily applied to other sparse recovery problems where the signal sparsity is unknown, e.g., underwater channels in sonar systems and radar.

## References

1. Donoho DL: Compressed sensing. *IEEE Trans. Inform. Theory* 2006, 52(4):1289-1306. doi:10.1109/TIT.2006.871582
2. Candes EJ, Wakin MB: An introduction to compressive sampling. *IEEE Signal Process. Mag.* 2008, 25(2):21-30. doi:10.1109/MSP.2007.914731
3. Chen SS, Donoho DL, Saunders MA: Atomic decomposition by basis pursuit. *SIAM J. Sci. Comput.* 1998, 20(1):33-61. doi:10.1137/S1064827596304010
4. Tropp JA, Gilbert AC: Signal recovery from random measurements via orthogonal matching pursuit. *IEEE Trans. Inform. Theory* 2007, 53(12):4655-4666. doi:10.1109/TIT.2007.909108
5. Needell D, Vershynin R: Signal recovery from incomplete and inaccurate measurements via regularized orthogonal matching pursuit. *IEEE J. Selected Topics Signal Process.* 2010, 4(2):310-316. doi:10.1109/JSTSP.2010.2042412
6. Donoho DL, Tsaig Y, Drori I, Starck J-L: Sparse solution of underdetermined systems of linear equations by stagewise orthogonal matching pursuit. *IEEE Trans. Inform. Theory* 2012, 58(2):1094-1121. doi:10.1109/TIT.2011.2173241
7. Dai W, Milenkovic O: Subspace pursuit for compressive sensing signal reconstruction. *IEEE Trans. Inform. Theory* 2009, 55(5):2230-2249. doi:10.1109/TIT.2009.2016006
8. Knight WC, Pridham RG, Kay SM: Digital signal processing for sonar. *Proc. IEEE* 1981, 69(11):1451-1506. doi:10.1109/PROC.1981.12186
9. Tse D, Viswanath P: *Fundamentals of Wireless Communication*. Cambridge University Press; 2005.
10. Berger CR, Zhou S, Preisig JC, Willett P: Sparse channel estimation for multicarrier underwater acoustic communication: from subspace methods to compressed sensing. *IEEE Trans. Signal Process.* 2010, 58(3):1708-1721. doi:10.1109/TSP.2009.2038424
11. Saleh AAM, Valenzuela RA: A statistical model for indoor multipath propagation. *IEEE J. Selected Areas Commun.* 1987, 5(2):128-137. doi:10.1109/JSAC.1987.1146527
12. Bajwa WU, Haupt J, Sayeed AM, Nowak R: Compressed channel sensing: a new approach to estimating sparse multipath channels. *Proc. IEEE* 2010, 98(6):1058-1076. doi:10.1109/JPROC.2010.2042415
13. Do TT, Gan L, Nguyen N, Tran TD: Sparsity adaptive matching pursuit algorithm for practical compressed sensing. In *Signals, Systems and Computers, 2008 42nd Asilomar Conference on*; 2008:581-587.
14. Cai TT, Wang L: Orthogonal matching pursuit for sparse signal recovery with noise. *IEEE Trans. Inform. Theory* 2011, 57(7):4680-4688. doi:10.1109/TIT.2011.2146090
15. Wang J, Shim B: On the recovery limit of sparse signals using orthogonal matching pursuit. *IEEE Trans. Signal Process.* 2012, 60(9):4973-4976. doi:10.1109/TSP.2012.2203124
16. Candes EJ, Tao T: Decoding by linear programming. *IEEE Trans. Inform. Theory* 2005, 51(12):4203-4215. doi:10.1109/TIT.2005.858979
17. Kay SM: *Fundamentals of Statistical Signal Processing, Volume 2: Detection Theory*. Prentice Hall PTR; 1993.
18. Wax M, Kailath T: Detection of signals by information theoretic criteria. *IEEE Trans. Acoust. Speech Signal Process.* 1985, 33(2):387-392. doi:10.1109/TASSP.1985.1164557
19. Failli M: Digital land mobile radio communications COST 207. EC; 1989.
20. Haupt J, Bajwa WU, Raz G, Nowak R: Toeplitz compressed sensing matrices with applications to sparse channel estimation. *IEEE Trans. Inform. Theory* 2010, 56(11):5862-5875. doi:10.1109/TIT.2010.2070191

## Acknowledgements

This work was supported by the National Natural Science Foundation of China (Grant: 61101093, 61101090) and Fundamental Research Funds for the Central Universities (ZYGX2013J113).

## Author information

### Authors and Affiliations

### Corresponding author

## Additional information

### Competing interests

The authors declare that they have no competing interests.

Wenhui Xiong, Jin Cao contributed equally to this work.


## Rights and permissions

**Open Access** This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

## About this article

### Cite this article

Xiong, W., Cao, J. & Li, S. Sparse signal recovery with unknown signal sparsity.
*EURASIP J. Adv. Signal Process.* **2014**, 178 (2014). https://doi.org/10.1186/1687-6180-2014-178


DOI: https://doi.org/10.1186/1687-6180-2014-178