# Improved analysis of SP and CoSaMP under total perturbations

- Haifeng Li

**2016**:112

https://doi.org/10.1186/s13634-016-0412-5

© The Author(s) 2016

**Received: **27 April 2016

**Accepted: **21 October 2016

**Published: **7 November 2016

## Abstract

Practically, in the underdetermined model **y**=**Ax**, where **x** is a *K*-sparse vector (i.e., it has no more than *K* nonzero entries), both **y** and **A** could be totally perturbed. From a theoretical viewpoint, a more relaxed condition means that fewer measurements are needed to ensure sparse recovery. In this paper, based on the restricted isometry property (RIP), two relaxed sufficient conditions are presented for subspace pursuit (SP) and compressive sampling matching pursuit (CoSaMP) under total perturbations to guarantee that the sparse vector **x** is recovered. Taking a random matrix as the measurement matrix, we also discuss the advantage of our conditions. Numerical experiments validate that SP and CoSaMP can provide oracle-order recovery performance.


## 1 Introduction

Compressed sensing [1] has attracted more and more attention since it was proposed. According to compressed sensing, sparse signals can be accurately reconstructed from far fewer samples than required by the classical Shannon-Nyquist theorem.

The underdetermined linear system

\(\mathbf {y}=\mathbf {A}\mathbf {x} \qquad (1)\)

is to be solved, where the measurement matrix \(\mathbf {A}\in \mathbb {R}^{m\times N}\) with *m*<*N*. There exists a unique solution to (1) when **x** is assumed to be *K*-sparse, i.e., **x** has at most *K* nonzero entries.

To get the sparsest solution of Eq. (1), minimizing \(\|\mathbf {x}\|_{\ell _{0}}\) (the *ℓ*_{0}-“norm” counts the number of nonzero entries in **x**) is an intuitive idea. However, this is an NP-hard problem [2]. Many suboptimal methods have been presented to overcome this difficulty.

The greedy algorithms have received considerable attention due to their low complexity and simple geometric interpretation. They mainly include orthogonal matching pursuit (OMP) [3], subspace pursuit (SP) [4], compressive sampling matching pursuit (CoSaMP) [5], analysis SP (ASP), and analysis CoSaMP (ACoSaMP) [6]. The basic idea behind this kind of algorithm is to find the support of the unknown signal sequentially. Recently, using a new method, two relaxed sufficient conditions were presented for SP and CoSaMP by Song et al. [7, 8]. In this paper, we mainly discuss SP and CoSaMP, which are efficient algorithms.
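To make the "find the support sequentially" idea concrete, the core loop of CoSaMP [5] can be sketched in a few lines of NumPy. This is a simplified illustration only (fixed iteration count, no halting criterion; the function name and parameters are ours), not the exact implementation analyzed in this paper.

```python
import numpy as np

def cosamp(A, y, K, n_iter=20):
    """Simplified CoSaMP sketch: correlate the residual with the columns
    of A, merge the 2K strongest candidates with the current support,
    solve least squares on the merged support, and prune back to K."""
    N = A.shape[1]
    x = np.zeros(N)
    support = np.array([], dtype=int)
    for _ in range(n_iter):
        proxy = A.T @ (y - A @ x)                    # signal proxy from residual
        candidates = np.argsort(np.abs(proxy))[-2 * K:]
        merged = np.union1d(support, candidates)     # merged support set
        b, *_ = np.linalg.lstsq(A[:, merged], y, rcond=None)
        keep = np.argsort(np.abs(b))[-K:]            # K largest LS coefficients
        support = merged[keep]
        x = np.zeros(N)
        x[support] = b[keep]
    return x
```

SP follows the same identify-estimate-prune pattern but keeps a support of fixed size *K* in each iteration.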

In practice, both **y** and **A** are often perturbed in model (1). It is important to consider these perturbations since they can account for precision errors when applications call for physically implementing the matrix **A** in a sensor [9]. Such a case can be found in source separation [10].

In this case, one solves the recovery problem with the perturbed data

\(\hat {\mathbf {y}}=\mathbf {y}+\mathbf {e},\qquad \hat {\mathbf {A}}=\mathbf {A}+\mathbf {E}, \qquad (2)\)

i.e., with inputs \(\hat {\mathbf {A}}\in \mathbb {R}^{m\times N}\) and \(\hat {\mathbf {y}}\). Here, **e** and **E** can be regarded as additive noise and multiplicative noise, respectively. Such cases can be found in remote sensing [11], radar [12], and so on. Based on the restricted isometry property (RIP) [2], model (2) was discussed by Ding et al. [13] using OMP. Under total perturbations, the works [14, 15] discussed the performance of SP and CoSaMP and showed that oracle-order recovery performance of SP and CoSaMP is guaranteed. In addition, there are many previous works in the context of near-oracle performance [16–20].

Using the results in [7, 8], in this paper, we give improved conditions for SP and CoSaMP under total perturbations. For numerical experiments, figures validate that SP and CoSaMP can provide oracle-order recovery performance.

Now, we give some notations that will be used in this paper. Scalars are written as lowercase letters, e.g., *d*. We denote vectors by boldface lowercase letters, e.g., **x**, and matrices by boldface uppercase letters, e.g., **D**. The *i*th element of **x** is denoted by *x*_{i}. **D**^{′} denotes the transpose of **D**. The cardinality of a finite set *Γ* is denoted by |*Γ*|. \(\|\mathbf {D}\|^{(K)}_{2}\) denotes the largest spectral norm taken over all *K*-column submatrices of **D**. We write **D**_{Γ} for the column submatrix of **D** whose indices are listed in the set *Γ*.
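For small dimensions, \(\|\mathbf {D}\|^{(K)}_{2}\) can be evaluated directly from its definition. The brute-force helper below (the function name is ours) enumerates all *K*-column submatrices and is meant only to make the notation concrete; the enumeration is exponential in *N*.

```python
import numpy as np
from itertools import combinations

def spectral_norm_K(D, K):
    """||D||_2^(K): the largest spectral norm taken over all K-column
    submatrices of D (brute force; feasible only for small N)."""
    N = D.shape[1]
    return max(np.linalg.norm(D[:, list(cols)], 2)
               for cols in combinations(range(N), K))
```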

## 2 Problem formulation

In practice, the signal of interest may not be an exactly *K*-sparse vector. The vector **x** is assumed to be approximately sparse, and we can use a *K*-sparse vector **x**_{K} to approximate it when the energy of \(\mathbf {x}_{K}^{c}=\mathbf {x}-\mathbf {x}_{K}\) is very small, where **x**_{K} is the best *K*-term approximation of **x**, i.e., the nonzero entries in **x**_{K} correspond to the *K* largest (in magnitude) entries in **x**. The approximation error can be quantified by the relative bounds *r*_{K} and *s*_{K} on \(\mathbf {x}_{K}^{c}\).
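The best *K*-term approximation is easy to form explicitly; a minimal sketch (function name ours):

```python
import numpy as np

def best_k_term(x, K):
    """Best K-term approximation x_K: keep the K largest-magnitude
    entries of x and zero out the rest."""
    xK = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-K:]   # indices of the K largest |x_i|
    xK[idx] = x[idx]
    return xK

x = np.array([0.1, -3.0, 0.02, 2.0, -0.5])
xK = best_k_term(x, 2)    # keeps -3.0 and 2.0
tail = x - xK             # x_K^c, the approximation error term
```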

Here, the available information for recovering **x** is \(\hat {\mathbf {y}}\) and \(\hat {\mathbf {A}}=\mathbf {A}+\mathbf {E}\).

In general, one cannot know the exact perturbations **E** and **e** and is forced to estimate their relative upper bounds instead. The perturbations **E** and **e** are quantified with the following relative bounds:

\(\frac {\|\mathbf {E}\|_{2}^{(K)}}{\|\mathbf {A}\|_{2}^{(K)}}\leq \varepsilon _{\mathbf {A}},\qquad \frac {\|\mathbf {e}\|_{2}}{\|\mathbf {A}\mathbf {x}\|_{2}}\leq \varepsilon _{\mathbf {y}}, \qquad (5)\)

where \(\|\mathbf {A}\|_{2}^{(K)}\) and ∥**Ax**∥_{2} are nonzero. Now, according to \(\mathbf {A}=\hat {\mathbf {A}}-\mathbf {E}\), we give the upper bound of \(\|\mathbf {E}\|_{2}^{(K)}\).

In this paper, we are only interested in the case where \(\varepsilon _{\mathbf {A}}\) and \(\varepsilon _{\mathbf {y}}\) are far less than 1.
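The relative perturbation levels \(\varepsilon _{\mathbf {A}}\) and \(\varepsilon _{\mathbf {y}}\) can be checked numerically. In the sketch below (dimensions and noise scales chosen arbitrarily by us), the full spectral norm stands in for the *K*-restricted norm \(\|\cdot \|_{2}^{(K)}\) purely for simplicity.

```python
import numpy as np

rng = np.random.default_rng(1)
m, N, K = 50, 128, 5
A = rng.normal(0, 1 / np.sqrt(m), (m, N))          # measurement matrix
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.choice([-1.0, 1.0], K)

E = 0.01 * rng.normal(0, 1 / np.sqrt(m), (m, N))   # multiplicative noise
e = 0.01 * rng.normal(size=m)                      # additive noise

# Relative perturbation levels; the full spectral norm stands in for
# the K-restricted norm used in the text.
eps_A = np.linalg.norm(E, 2) / np.linalg.norm(A, 2)
eps_y = np.linalg.norm(e) / np.linalg.norm(A @ x)
```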

## 3 RIP-based recovery condition

### **Definition 1**

A matrix **A** satisfies the RIP of order *K* if there exists a constant *δ*∈(0,1) such that

\((1-\delta)\|\mathbf {h}\|_{2}^{2}\leq \|\mathbf {A}\mathbf {h}\|_{2}^{2}\leq (1+\delta)\|\mathbf {h}\|_{2}^{2} \qquad (8)\)

for all *K*-sparse vectors **h**. In particular, the minimum of all constants *δ* satisfying (8) is called the restricted isometry constant (RIC) *δ*_{K}.

### **Theorem 1**

Suppose that **y**=**Ax**_{K}+**e**. If **A** satisfies *δ*_{aK}≤*c*, then the sequence **x**^{n} defined by SP and CoSaMP satisfies

\(\|\mathbf {x}_{K}-\mathbf {x}^{n}\|_{2}\leq \rho ^{n}\|\mathbf {x}_{K}\|_{2}+\tau \|\mathbf {e}\|_{2},\)

where the constants *a*, *c*, *ρ*, and *τ* are listed in Table 1.

**Table 1** The value of constants

| Constants | SP | CoSaMP |
|---|---|---|
| *a* | 3 | 4 |
| *c* | 0.4859 | 0.5 |
| *ρ* | \(\frac {\sqrt {2\delta _{aK}^{2}(1+\delta _{aK}^{2})}}{1-\delta _{aK}^{2}}\) | \(\sqrt {\frac {2\delta _{aK}^{2}(1+2\delta _{aK}^{2})}{1-\delta _{aK}^{2}}}\) |
| *τ* | \(\frac {(\sqrt {2}+2)\delta _{aK}}{\sqrt {1-\delta _{aK}^{2}}(1-\delta _{aK})(1-\rho)}+\frac {2\sqrt {2}+1}{(1-\delta _{aK})(1-\rho)}\) | \(\frac {(\sqrt {2}+1)^{2}\delta _{aK}+(1-\delta _{aK})(2\sqrt {2}+1)\sqrt {1+\delta _{aK}}}{(1-\delta _{aK})(1-\rho)}\) |

### **Theorem 2**

Suppose that, in the totally perturbed model (2), the matrix \(\hat {\mathbf {A}}\) satisfies RIP with \(\hat {\delta }_{aK}\leq c\). Then the sequence **x**^{n} of SP and CoSaMP satisfies the error bound (13), where *a*, *c*, \(\hat {\rho }\), and \(\hat {\tau }\) are given in Tables 1 and 2. In addition, the error bound is attained after at most finitely many iterations.

**Table 2** The value of constants

| Constants | SP | CoSaMP |
|---|---|---|
| \(\hat {\rho }\) | \(\frac {\sqrt {2\hat {\delta }_{aK}^{2}(1+\hat {\delta }_{aK}^{2})}}{1-\hat {\delta }_{aK}^{2}}\) | \(\sqrt {\frac {2\hat {\delta }_{aK}^{2}(1+2\hat {\delta }_{aK}^{2})}{1-\hat {\delta }_{aK}^{2}}}\) |
| \(\hat {\tau }\) | \(\frac {(\sqrt {2}+2)\hat {\delta }_{aK}}{\sqrt {1-\hat {\delta }_{aK}^{2}}(1-\hat {\delta }_{aK})(1-\hat {\rho })}+\frac {2\sqrt {2}+1}{(1-\hat {\delta }_{aK})(1-\hat {\rho })}\) | \(\frac {(\sqrt {2}+1)^{2}\hat {\delta }_{aK}+(1-\hat {\delta }_{aK})(2\sqrt {2}+1)\sqrt {1+\hat {\delta }_{aK}}}{(1-\hat {\delta }_{aK})(1-\hat {\rho })}\) |

### *Proof*

where (15) follows from (7) and (3).

Then the sequence **x**^{n} defined by SP and CoSaMP satisfies

where \(\hat {\rho }<1\) and \(\hat {\tau }\) are constants specified in Table 2.

### **Remark 1**

The weaker the RIC condition is, the fewer measurements are required; thus, improved RIC results can be used in many CS-based applications [7]. It is clear that when \(\hat {\rho }=\frac {1}{2}\), for SP, Theorem 2 presents \(\hat {\delta }_{3K}=0.3063\) and \(\hat {\tau }=13.1303\), while Theorem 2 in [15] gives \(\hat {\delta }_{3K}=0.1397\) and \(\hat {\tau }=15.6476\) (\(\tilde {C}\) and \(\tilde {D}\) in [15]). For CoSaMP, Theorem 2 presents \(\hat {\delta }_{4K}=0.3083\) and \(\hat {\tau }=13.9536\), while Theorem 2 in [15] gives \(\hat {\delta }_{4K}=0.101\) and \(\hat {\tau }=15.3485\) (\(\tilde {C}\) and \(\tilde {D}\) in [15]). So, the proposed results improve the theoretical guarantees for SP and CoSaMP relative to [15].

For an *m*×*N* random matrix \(\hat {\mathbf {A}}\) whose entries are independent and identically distributed Gaussian random variables \(\mathcal {N}(0,\frac {1}{m})\), \(\hat {\mathbf {A}}\) satisfies the RIP condition (\(\hat {\delta }_{K}\leq \varepsilon \)) with overwhelming probability provided that [22]

\(m\geq \frac {bK}{\varepsilon ^{2}}\log \left (\frac {N}{K}\right), \qquad (21)\)

where *b* is a constant. Consider SP: by Lemma 4.1 in [23], \(\hat {\delta }_{3K}<0.4859\) can be converted to the order-*K* condition \(\hat {\delta }_{K}<0.097\), while \(\hat {\delta }_{3K}<0.206\) ([15]) can be converted to \(\hat {\delta }_{K}<0.041\). Hence, according to (21), the number of measurements *m* ensuring reconstruction for Theorem 2 is \(m\geq 106.2812\,bK\log (\frac {N}{K})\), while that for Theorem 2 in [15] is \(m\geq 594.8840\,bK\log (\frac {N}{K})\).
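The two measurement factors in this comparison follow directly from substituting \(\varepsilon =0.097\) and \(\varepsilon =0.041\) into (21):

```python
# Substituting the two RIP levels into m >= (b K / eps^2) log(N/K)
# reproduces the constants quoted above: 1/eps^2 is the factor in front
# of b K log(N/K).
factor_ours = 1 / 0.097 ** 2   # Theorem 2 here
factor_15 = 1 / 0.041 ** 2     # Theorem 2 in [15]
print(round(factor_ours, 4), round(factor_15, 4))
```

The ratio of the two factors (about 5.6) is the measurement saving implied by the relaxed condition.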

### **Remark 2**

The error bound in Theorem 2 depends on *r*_{K}+*s*_{K}, ∥**e**∥_{2}, and \(\|\mathbf {E}\|_{2}^{(K)}\). In general, no recovery can do better than the oracle least squares (LS) method. The authors in [15] presented an upper bound for oracle recovery (Part IV.B in [15]):

where *κ*_{Ψ}=∥*Ψ*∥_{2}∥*Ψ*^{−1}∥_{2}=1 (*Ψ* is the identity matrix in our paper) and \(\hat {D}=\frac {1}{\sqrt {1-\hat {\delta }_{K}}}\). When \(\hat {\mathbf {A}}\) is fixed, \(\hat {D}\) and \(\hat {\tau }\) are constants. So, comparing (13) with (22), the error bound of SP (or CoSaMP) and the error bound of oracle recovery only differ in coefficients.

When **x** is *K*-sparse, it can be derived that *r*_{K}=*s*_{K}=0. The relative error of the solution is stated as Corollary 1.

### **Corollary 1**

Suppose that **x** is *K*-sparse in model (4). If the perturbed matrix \(\hat {\mathbf {A}}\) satisfies RIP with \(\hat {\delta }_{aK}\leq c\), then the relative error of the sequence **x**^{n} of SP and CoSaMP is bounded with the constants of Tables 1 and 2.

## 4 Numerical experiments

In this section, we perform numerical experiments in MATLAB R2013a and study the performance of SP and CoSaMP under total perturbations. These algorithms are tested with two random matrix ensembles:

- \(\mathcal {N}\): Gaussian matrices with entries drawn i.i.d. from \(\mathcal {N}\left (0,\frac {1}{m}\right)\);

- \(\mathcal {S}_{7}\): sparse matrices with seven nonzero entries per column, drawn with equal probability from \(\left \{-\frac {1}{\sqrt {7}},\frac {1}{\sqrt {7}}\right \}\), with locations in each column chosen uniformly.

As noted in [24], the above two random matrix ensembles are representative of the random matrices frequently encountered in compressed sensing.
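Although the experiments in this paper were run in MATLAB, the \(\mathcal {S}_{7}\) ensemble is easy to generate in any environment; a NumPy sketch (helper name and seeding ours):

```python
import numpy as np

def sparse_ensemble(m, N, d=7, seed=0):
    """Sparse ensemble S_d: d nonzeros per column with values
    +-1/sqrt(d) chosen with equal probability, row locations
    in each column chosen uniformly without replacement."""
    rng = np.random.default_rng(seed)
    A = np.zeros((m, N))
    for j in range(N):
        rows = rng.choice(m, d, replace=False)
        A[rows, j] = rng.choice([-1.0, 1.0], d) / np.sqrt(d)
    return A
```

By construction every column has exactly *d* nonzeros and unit Euclidean norm.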

The sparse vector **x** (of length *N*=1024) is a random binary vector, formed by uniformly selecting *K* locations for nonzero entries with values {−1,1} chosen with equal probability.

The additive noise **e** and the multiplicative noise **E** are drawn as random Gaussian perturbations. In each trial, according to (5), the relative perturbations \(\varepsilon _{\mathbf {A}}\) and \(\varepsilon _{\mathbf {y}}\) are fixed, where **A** is the measurement matrix. Then, \(\hat {\mathbf {y}}\) and \(\hat {\mathbf {A}}\) are generated by (2). The relative approximation error is defined by

\(\frac {\|\mathbf {x}-\mathbf {x}^{*}\|_{2}}{\|\mathbf {x}\|_{2}},\)

where **x**^{∗} is the approximate solution. The simulation is conducted 500 times to obtain the average relative error.

### 4.1 Different sparsity level for SP and CoSaMP

For SP and CoSaMP, the sparsity level *K* needs to be known a priori. The first experiment demonstrates the performance degradation of SP and CoSaMP when *K* is misestimated. Here, the true sparsity level is *K*=20. The number of measurements *m* varies from 100 to 550 with step size 50. The results are shown in Figs. 1 and 2.

Figures 1 and 2 show the curves of the relative error vs. the estimate of the sparsity. It is easy to see that the error decreases as *m* increases. In addition, one can see clearly that the relative errors of SP and CoSaMP increase when the estimated sparsity *K* is far from the truth. So, as future work, we will propose an algorithm that recovers unknown signals without knowledge of the sparsity level.

### 4.2 Observed noise stability for SP and CoSaMP

The number of measurements *m* varies from 50 to 500 with step size 50. The results are shown in Figs. 3 and 4.

As can be seen from Figs. 3 and 4, the error decreases as *m* increases. In addition, the curves of SP, CoSaMP, and the oracle LS method almost coincide when *m* is at least 150. So, SP and CoSaMP can provide oracle-order recovery performance.

## 5 Conclusions

For SP and CoSaMP, improved sufficient conditions are presented in this paper under total perturbations to guarantee that the sparse vector **x** can be recovered. Compared with the condition in [15], when a random matrix is taken as the measurement matrix, our condition decreases the required number of measurements. Through numerical experiments, we show that the SP and CoSaMP algorithms can obtain oracle-order recovery performance under total perturbations. Furthermore, proposing an algorithm that does not need the sparsity level *K* is our future work.

## Declarations

### Acknowledgements

This work was supported by the Scientific Research Foundation for Ph.D. of Henan Normal University (no. qd14142), the Key Scientific Research Project of Colleges and Universities in Henan Province (no. 15B120004) and National Natural Science Foundation of China (no. 11526081 and 11601134).

### Authors’ contributions

- Two relaxed sufficient conditions are presented under total perturbations for SP and CoSaMP.

- The advantage of our condition is discussed.

- Numerical experiments validate that SP and CoSaMP can provide oracle-order recovery performance.

### Competing interests

The author declares that he has no competing interests.

**Open Access** This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


## References

- E Candès, J Romberg, T Tao, Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory **52**(2), 489–509 (2006)
- E Candès, T Tao, Decoding by linear programming. IEEE Trans. Inf. Theory **51**(12), 4203–4215 (2005)
- J Tropp, Greed is good: algorithmic results for sparse approximation. IEEE Trans. Inf. Theory **50**(10), 2231–2242 (2004)
- W Dai, O Milenkovic, Subspace pursuit for compressive sensing signal reconstruction. IEEE Trans. Inf. Theory **55**(5), 2230–2249 (2009)
- D Needell, J Tropp, CoSaMP: iterative signal recovery from incomplete and inaccurate samples. Appl. Comput. Harmon. Anal. **26**(3), 301–321 (2009)
- R Giryes, S Nam, M Elad, R Gribonval, M Davies, Greedy-like algorithms for the cosparse analysis model. Linear Algebra Appl. **441**, 22–60 (2014)
- C Song, S Xia, X Liu, Improved analysis for subspace pursuit algorithm in terms of restricted isometry constant. IEEE Signal Process. Lett. **21**(11), 1365–1369 (2014)
- C Song, S Xia, X Liu, Improved analysis for SP and CoSaMP algorithms in terms of restricted isometry constants. http://arxiv.org/pdf/1309.6073.pdf
- M Herman, T Strohmer, General deviants: an analysis of perturbations in compressed sensing. IEEE J. Sel. Top. Signal Process. **4**(2), 342–349 (2010)
- T Blumensath, M Davies, Compressed sensing and source separation, in Int. Conf. Ind. Comp. Anal. Source Sep. (2007), pp. 341–348
- A Fannjiang, P Yan, T Strohmer, Compressed remote sensing of sparse objects. SIAM J. Imag. Sci. **3**(3), 596–618 (2010)
- M Herman, T Strohmer, High-resolution radar via compressed sensing. IEEE Trans. Signal Process. **57**(6), 2275–2284 (2009)
- J Ding, L Chen, Y Gu, Perturbation analysis of orthogonal matching pursuit. IEEE Trans. Signal Process. **61**(2), 398–410 (2013)
- M Herman, D Needell, Mixed operators in compressed sensing, in *Information Sciences and Systems (CISS), 2010 44th Annual Conference on* (IEEE, Princeton, 2010), pp. 1–6
- L Chen, Y Gu, Oracle-order recovery performance of greedy pursuits with replacement against general perturbations. IEEE Trans. Signal Process. **61**(18), 4625–4636 (2013)
- E Candès, T Tao, The Dantzig selector: statistical estimation when p is much larger than n. Ann. Stat. **35**, 2313–2351 (2007)
- Z Ben-Haim, YC Eldar, M Elad, Coherence-based performance guarantees for estimating a sparse vector under random noise. IEEE Trans. Signal Process. **58**(10), 5030–5043 (2010)
- P Bickel, Y Ritov, AB Tsybakov, Simultaneous analysis of Lasso and Dantzig selector. Ann. Stat. **37**(4), 1705–1732 (2009)
- R Giryes, M Elad, RIP-based near-oracle performance guarantees for SP, CoSaMP, and IHT. IEEE Trans. Signal Process. **60**(3), 1465–1468 (2012)
- T Cai, L Wang, G Xu, Stable recovery of sparse signals and an oracle inequality. IEEE Trans. Inf. Theory **56**(7), 3516–3522 (2010)
- T Blumensath, M Davies, Iterative hard thresholding for compressed sensing. Appl. Comput. Harmon. Anal. **27**(3), 265–274 (2009)
- R Baraniuk, M Davenport, R DeVore, M Wakin, A simple proof of the restricted isometry property for random matrices. Constr. Approx. **28**(3), 253–263 (2008)
- T Cai, A Zhang, Sharp RIP bound for sparse signal and low-rank matrix recovery. Appl. Comput. Harmon. Anal. **35**(1), 74–93 (2013)
- J Blanchard, J Tanner, K Wei, Conjugate gradient iterative hard thresholding: observed noise stability for compressed sensing. IEEE Trans. Signal Process. **63**(2), 528–537 (2015)