 Research
 Open Access
Near optimal bound of orthogonal matching pursuit using restricted isometric constant
 Jian Wang^{1},
 Seokbeop Kwon^{1} and
 Byonghyo Shim^{1}
https://doi.org/10.1186/1687-6180-2012-8
© Wang et al; licensee Springer. 2012
Received: 15 July 2011
Accepted: 13 January 2012
Published: 13 January 2012
Abstract
As a paradigm for reconstructing sparse signals from a set of undersampled measurements, compressed sensing has received much attention in recent years. In identifying the sufficient condition under which the perfect recovery of sparse signals is ensured, a property of the sensing matrix referred to as the restricted isometry property (RIP) is popularly employed. In this article, we propose an RIP-based bound of the orthogonal matching pursuit (OMP) algorithm guaranteeing the exact reconstruction of sparse signals. Our proof is built on the observation that the general step of the OMP process is in essence the same as the initial step, in the sense that the residual can be considered as a new measurement preserving the sparsity level of the input vector. Our main conclusion is that if the restricted isometry constant δ_{ K } of the sensing matrix satisfies
$${\delta}_{K}<\frac{\sqrt{K-1}}{\sqrt{K-1}+K}$$
then the OMP algorithm can perfectly recover K (> 1)-sparse signals from the measurements. We show that our bound is sharp and indeed close to the limit conjectured by Dai and Milenkovic.
Keywords
 compressed sensing
 sparse signal
 support
 orthogonal matching pursuit
 restricted isometric property
1 Introduction
As a paradigm for acquiring sparse signals at a rate significantly below the Nyquist rate, compressive sensing has received much attention in recent years [1–17]. The goal of compressive sensing is to recover a sparse vector from a small number of linearly transformed measurements. The process of acquiring the compressed measurements is referred to as sensing, while that of recovering the original sparse signals from the compressed measurements is called reconstruction.
A sensing matrix Φ is said to satisfy the restricted isometry property (RIP) of order K with restricted isometry constant δ_{ K } ∈ (0, 1) if
$$\left(1-{\delta}_{K}\right){\parallel x\parallel}_{2}^{2}\le {\parallel \Phi x\parallel}_{2}^{2}\le \left(1+{\delta}_{K}\right){\parallel x\parallel}_{2}^{2}$$
for all K-sparse vectors x. It has been shown that if ${\delta}_{2K}<\sqrt{2}-1$ [13], ℓ_{1}-minimization is guaranteed to recover K-sparse signals exactly.
The second class is greedy search algorithms, which identify the support (positions of nonzero elements) of the sparse signal sequentially. In each iteration of these algorithms, the correlations between each column of Φ and the modified measurement (residual) are compared, and the index (or indices) of one or multiple columns most strongly correlated with the residual is added to the support estimate. In general, the computational complexity of greedy algorithms is much smaller than that of the LP-based techniques, in particular for highly sparse signals (signals with small K). Algorithms in this category include orthogonal matching pursuit (OMP) [1], regularized OMP (ROMP) [18], stagewise OMP (DL Donoho, I Drori, Y Tsaig, JL Starck: Sparse solution of underdetermined linear equations by stagewise orthogonal matching pursuit, submitted), and compressive sampling matching pursuit (CoSaMP) [16].
Recently, improved conditions for the OMP have been reported, including ${\delta}_{K+1}<1/\sqrt{2K}$ [21] and ${\delta}_{K+1}<1/\left(\sqrt{K}+1\right)$ (J Wang, B Shim: On recovery limit of orthogonal matching pursuit using restricted isometric property, submitted).
The primary goal of this article is to provide an improved condition ensuring the exact recovery of K-sparse signals by the OMP algorithm. While previously proposed recovery conditions are expressed in terms of δ_{K+1} [20, 21], our result, formally described in Theorem 1.1, is expressed in terms of the restricted isometry constant δ_{ K } of order K, which is perhaps the most natural and simple to interpret. For instance, our result together with the Johnson-Lindenstrauss lemma [22] can be used to estimate the compression ratio (i.e., the minimal number of measurements m ensuring perfect recovery) when the elements of Φ are chosen randomly [17]. Besides, we show that our result is sharp in the sense that the condition is close to the limit of the OMP algorithm conjectured by Dai and Milenkovic [19], in particular when K is large. Our result is formally described in the following theorem.
As mentioned, another interesting result we can deduce from Theorem 1.1 is an estimate of the maximal compression ratio when a Gaussian random matrix is employed in the sensing process. Note that direct investigation of the condition δ_{ K } < ϵ for a given sensing matrix Φ is undesirable, especially when n is large and K is nontrivial, since the extremal singular values of $\binom{n}{K}$ submatrices need to be tested.
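To get a sense of why direct verification of δ_{ K } < ϵ is impractical, one can simply count the $\binom{n}{K}$ column submatrices whose extremal singular values would have to be computed. Even for modest sizes (the values of n and K below are our own illustrative choices, not from the paper), the count is astronomical:

```python
from math import comb

# Number of K-column submatrices of an m x n sensing matrix that would
# need their extremal singular values checked to certify delta_K < eps.
n, K = 1000, 10  # modest illustrative sizes (our choice)
count = comb(n, K)
print(count)     # more than 10^23 submatrices: exhaustive checking is hopeless
```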
This result (m on the order of $O\left({K}^{2}\text{log}\frac{n}{K}\right)$) is desirable, since the number of measurements m grows moderately with the sparsity level K.
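Taking the $O\left({K}^{2}\text{log}\frac{n}{K}\right)$ scaling at face value (and ignoring the unspecified constant factor), a quick tabulation illustrates the moderate growth in K; the value n = 10,000 is an arbitrary choice of ours:

```python
from math import log

n = 10_000  # illustrative signal dimension (our choice, not from the paper)
for K in (5, 10, 20, 40):
    # required measurements scale like K^2 * log(n/K), up to a constant factor
    print(f"K={K:3d}  K^2*log(n/K) ~ {K**2 * log(n / K):.0f}")
```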
2 Proof of theorem 1.1
2.1 Notations
We now provide a brief summary of the notations used throughout the article.

T = supp(x) = { i : x_{ i }≠ 0} is the set of nonzero positions in x.

|D| is the cardinality of D.

T\D is the set of all elements contained in T but not in D.

Φ_{ D }∈ ℝ^{m×|D|} is a submatrix of Φ that contains only the columns indexed by D.

x_{ D }∈ ℝ^{|D|} is the restriction of the vector x to the elements with indices in D.

span(Φ_{ D }) is the span (range) of columns in Φ_{ D }.

${\Phi \prime}_{D}$ denotes the transpose of the matrix Φ_{ D }.

${\Phi}_{D}^{\dagger}={\left({\Phi \prime}_{D}{\Phi}_{D}\right)}^{-1}{\Phi \prime}_{D}$ is the pseudoinverse of Φ_{ D }.

${P}_{D}={\Phi}_{D}{\Phi}_{D}^{\dagger}$ denotes the orthogonal projection onto span(Φ_{ D }).

${P}_{D}^{\perp}=I-{P}_{D}$ is the orthogonal projection onto the orthogonal complement of span(Φ_{ D }).
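As a sanity check (not part of the paper), the defining properties of these projections are easy to verify numerically; the matrix sizes, seed, and index set below are arbitrary choices of ours:

```python
import numpy as np

rng = np.random.default_rng(0)
Phi = rng.standard_normal((20, 50))       # illustrative m x n matrix (our choice)
D = [3, 7, 12]                            # an arbitrary index set
Phi_D = Phi[:, D]

# pinv(Phi_D) equals (Phi_D' Phi_D)^{-1} Phi_D' when Phi_D has full column rank
P_D = Phi_D @ np.linalg.pinv(Phi_D)       # orthogonal projection onto span(Phi_D)
P_perp = np.eye(20) - P_D                 # projection onto the complement

assert np.allclose(P_D @ P_D, P_D)        # idempotent, as a projection must be
assert np.allclose(P_D @ Phi_D, Phi_D)    # fixes the columns of Phi_D
assert np.allclose(P_perp @ Phi_D, 0)     # annihilates the columns of Phi_D
```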
2.2 Preliminaries: definitions and lemmas
In this subsection, we provide useful definitions and lemmas used in the proof of Theorem 1.1.
$\left|\langle \Phi x,\Phi x\prime \rangle \right|\le {\delta}_{K+K\prime}{\parallel x\parallel}_{2}{\parallel x\prime \parallel}_{2}$ for all x and x' such that x and x' are K-sparse and K'-sparse, respectively, and have disjoint supports.
for i ∈ T^{ k }.
${\delta}_{{K}_{1}}\le {\delta}_{{K}_{2}}$ for any K_{1} ≤ K_{2}. This property is referred to as the monotonicity of the restricted isometric constant.
and this completes the proof of the lemma.
2.3 Proof of theorem 1.1
Now we turn to the proof of our main theorem. Our proof is in essence based on mathematical induction: first, we show that the index t^{1} found at the first iteration is correct (t^{1} ∈ T) under (4), and then we show that t^{k+1} is also correct (more precisely, if T^{ k } = {t^{1}, t^{ 2 }, ..., t^{ k }} ⊂ T, then t^{ k+1 } ∈ T\T^{ k }) under (4).
where (21) follows from Lemma 2.3.
OMP algorithm
Input: measurements y, sensing matrix Φ, sparsity K.
Initialize: iteration count k = 0, residual vector r^{0} = y, support set estimate T^{0} = ∅.
While k < K:
  k = k + 1.
  (Identify) ${t}^{k}=\text{arg}\underset{j}{\text{max}}\left|\langle {r}^{k-1},{\phi}_{j}\rangle \right|$.
  (Augment) T^{ k } = T^{k-1} ∪ {t^{ k }}.
  (Estimate) ${\widehat{x}}_{{T}^{k}}=\text{arg}\underset{x}{\text{min}}\parallel y-{\Phi}_{{T}^{k}}x{\parallel}_{2}$.
  (Update) ${r}^{k}=y-{\Phi}_{{T}^{k}}{\widehat{x}}_{{T}^{k}}$.
End
Output: $\widehat{x}=\text{arg}\underset{x:\mathsf{\text{supp}}\left(x\right)={T}^{K}}{\text{min}}\parallel y-\Phi x{\parallel}_{2}$.
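The loop above translates almost line for line into numpy. The following is a minimal sketch (the function name, problem sizes, seed, and test instance are our own illustrations, not from the paper); for a well-conditioned instance like this, OMP should recover the sparse vector exactly:

```python
import numpy as np

def omp(y, Phi, K):
    """Orthogonal matching pursuit: recover a K-sparse x from y = Phi x."""
    n = Phi.shape[1]
    support = []                                  # support estimate T^k
    r = y.copy()                                  # r^0 = y
    for _ in range(K):
        # (Identify) the column most correlated with the residual
        t = int(np.argmax(np.abs(Phi.T @ r)))
        # (Augment) the support estimate
        support.append(t)
        # (Estimate) least-squares coefficients on the current support
        x_T, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        # (Update) the residual
        r = y - Phi[:, support] @ x_T
    x_hat = np.zeros(n)
    x_hat[support] = x_T
    return x_hat, sorted(support)

# A small, well-conditioned test instance (sizes and seed are our choice):
rng = np.random.default_rng(1)
m, n, K = 80, 160, 3
Phi = rng.standard_normal((m, n)) / np.sqrt(m)    # roughly unit-norm columns
x = np.zeros(n)
x[[5, 17, 42]] = [2.5, -3.0, 1.8]
x_hat, support = omp(Phi @ x, Phi, K)
assert support == [5, 17, 42] and np.allclose(x_hat, x)
```

Note that the least-squares step makes each residual orthogonal to all previously selected columns, which is why OMP never selects the same index twice.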
Since y = Φ_{ T }x_{ T } and ${\Phi}_{{T}^{k}}$ is a submatrix of Φ_{ T }, we have r^{ k } ∈ span(Φ_{ T }), and hence r^{ k } can be expressed as a linear combination of the |T| (= K) columns of Φ_{ T }. Accordingly, we can express r^{ k } as r^{ k } = Φx^{k}, where the support (set of indices of nonzero elements) of x^{ k } is contained in the support of x. That is, r^{ k } is a measurement of the K-sparse signal x^{k} obtained via the sensing matrix Φ.
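This observation can be checked numerically: when the selected indices are correct, the residual is orthogonal to the chosen columns yet remains in span(Φ_{ T }), so it is itself a measurement of a vector supported on T. The instance below (sizes, seed, supports) is our own illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 30, 80
Phi = rng.standard_normal((m, n))
T = [4, 11, 23]                                   # true support (arbitrary)
y = Phi[:, T] @ np.array([1.0, -0.5, 2.0])        # y = Phi_T x_T

Tk = [4, 11]                                      # correct indices after k = 2 steps
x_Tk, *_ = np.linalg.lstsq(Phi[:, Tk], y, rcond=None)
r_k = y - Phi[:, Tk] @ x_Tk                       # residual r^k

# r^k is orthogonal to every selected column ...
assert np.allclose(Phi[:, Tk].T @ r_k, 0)
# ... yet still lies in span(Phi_T), so r^k = Phi x^k with supp(x^k) inside T
P_T = Phi[:, T] @ np.linalg.pinv(Phi[:, T])       # projection onto span(Phi_T)
assert np.allclose(P_T @ r_k, r_k)
```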
which completes the proof.
3 Discussions
In [19], Dai and Milenkovic conjectured that the sufficient condition of the OMP algorithm guaranteeing exact recovery of K-sparse vectors cannot be relaxed to ${\delta}_{K+1}=1/\sqrt{K}$. This conjecture says that if the RIP condition is given by δ_{K+1} < ϵ, then ϵ should be strictly smaller than $1/\sqrt{K}$. In [20], this conjecture was confirmed via experiments for K = 2.
We now show that our result in Theorem 1.1 agrees with the conjecture, leaving only a marginal gap from the limit. Since we cannot directly compare Dai and Milenkovic's conjecture (expressed in terms of δ_{K+1}) with our condition (expressed in terms of δ_{ K }), we need to modify our result. The following proposition provides a slightly looser bound (sufficient condition) of our result expressed in the form ${\delta}_{K+1}<1/\left(\sqrt{K}+\theta \right)$.
Proposition 3.1. If ${\delta}_{K+1}<1/\left(\sqrt{K}+3-\sqrt{2}\right)$, then ${\delta}_{K}<\sqrt{K-1}/\left(\sqrt{K-1}+K\right)$.
holds true for any integer K > 1 (see Appendix), if ${\delta}_{K+1}<1/\left(\sqrt{K}+3-\sqrt{2}\right)$ then ${\delta}_{K+1}<\sqrt{K-1}/\left(\sqrt{K-1}+K\right)$. Also, from the monotonicity of the RIP constant (δ_{ K } ≤ δ_{K+1}), if ${\delta}_{K+1}<\sqrt{K-1}/\left(\sqrt{K-1}+K\right)$ then ${\delta}_{K}<\sqrt{K-1}/\left(\sqrt{K-1}+K\right)$. Combining these two implications yields the desired result.
One can clearly observe that ${\delta}_{K+1}<1/\left(\sqrt{K}+3-\sqrt{2}\right)\approx 1/\left(\sqrt{K}+1.5858\right)$ is better than the condition ${\delta}_{K+1}<1/\left(3\sqrt{K}\right)$ [20], similar to the result of Wang and Shim, and also close to the achievable limit $\left({\delta}_{K+1}<1/\sqrt{K}\right)$, in particular for large K. Considering that the derived condition ${\delta}_{K+1}<1/\left(\sqrt{K}+3-\sqrt{2}\right)$ is slightly weaker than our original condition ${\delta}_{K}<\sqrt{K-1}/\left(\sqrt{K-1}+K\right)$, we may conclude that our result is fairly close to optimal.
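This comparison is easy to check numerically. The snippet below evaluates the three bounds, confirming that 1/(√K + 3 − √2) sits strictly between 1/(3√K) of [20] and the conjectured limit 1/√K, and approaches the limit as K grows:

```python
from math import sqrt

for K in (2, 10, 100, 1000):
    davenport_wakin = 1 / (3 * sqrt(K))          # condition of [20]
    proposed = 1 / (sqrt(K) + 3 - sqrt(2))       # bound of Proposition 3.1
    limit = 1 / sqrt(K)                          # conjectured limit of [19]
    assert davenport_wakin < proposed < limit    # strictly between, for every K
    print(K, round(proposed / limit, 3))         # ratio approaches 1 as K grows
```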
4 Conclusion
then the OMP algorithm can perfectly recover K-sparse signals from the measurements. Our result directly indicates that the set of sensing matrices for which exact recovery of sparse signals via the OMP algorithm is guaranteed is wider than what has been proved thus far. Another interesting point we can draw from our result is that the number of measurements (the size of the compressed signal) required for the reconstruction of a sparse signal grows moderately with the sparsity level.
Appendix: proof of (36)
for K ≥ 2, f'(K) < 0, which completes the proof of (36).
Endnote
^{a}In Section 3, we provide more rigorous discussions on this issue.
Declarations
Acknowledgements
This study was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MEST) (No. 20100012525) and the research grant from the second BK21 project.
Authors’ Affiliations
References
 Tropp JA, Gilbert AC: Signal recovery from random measurements via orthogonal matching pursuit. IEEE Trans Inf Theory 2007, 53(12):4655-4666.
 Tropp JA: Greed is good: algorithmic results for sparse approximation. IEEE Trans Inf Theory 2004, 50(10):2231-2242. 10.1109/TIT.2004.834793
 Donoho DL, Elad M: Optimally sparse representation in general (nonorthogonal) dictionaries via ℓ^{1} minimization. Proc Natl Acad Sci 2003, 100(5):2197. 10.1073/pnas.0437847100
 Donoho DL, Stark PB: Uncertainty principles and signal recovery. SIAM J Appl Math 1989, 49(3):906-931. 10.1137/0149053
 Giryes R, Elad M: RIP-based near-oracle performance guarantees for subspace pursuit, CoSaMP, and iterative hard-thresholding. 2010.
 Qian S, Chen D: Signal representation using adaptive normalized Gaussian functions. Signal Process 1994, 36:1-11. 10.1016/0165-1684(94)90174-0
 Cevher V, Indyk P, Hegde C, Baraniuk RG: Recovery of clustered sparse signals from compressive measurements. In Sampling Theory and Applications (SAMPTA). Marseille, France; 2009:18-22.
 Malioutov D, Cetin M, Willsky AS: A sparse signal reconstruction perspective for source localization with sensor arrays. IEEE Trans Signal Process 2005, 53(8):3010-3022.
 Elad M, Bruckstein AM: A generalized uncertainty principle and sparse representation in pairs of bases. IEEE Trans Inf Theory 2002, 48(9):2558-2567. 10.1109/TIT.2002.801410
 Donoho DL: Compressed sensing. IEEE Trans Inf Theory 2006, 52(4):1289-1306.
 Candès EJ, Romberg J, Tao T: Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans Inf Theory 2006, 52(2):489-509.
 Friedman JH, Stuetzle W: Projection pursuit regression. J Am Stat Assoc 1981, 76(376):817-823. 10.2307/2287576
 Candès EJ: The restricted isometry property and its implications for compressed sensing. Comptes Rendus Mathematique 2008, 346(9-10):589-592. 10.1016/j.crma.2008.03.014
 Cai TT, Xu G, Zhang J: On recovery of sparse signals via ℓ_{1} minimization. IEEE Trans Inf Theory 2009, 55(7):3388-3397.
 Cai TT, Wang L, Xu G: New bounds for restricted isometry constants. IEEE Trans Inf Theory 2010, 56(9):4388-4394.
 Needell D, Tropp JA: CoSaMP: iterative signal recovery from incomplete and inaccurate samples. Appl Comput Harm Anal 2009, 26(3):301-321. 10.1016/j.acha.2008.07.002
 Baraniuk RG, Davenport MA, DeVore R, Wakin MB: A simple proof of the restricted isometry property for random matrices. Const Approx 2008, 28(3):253-263. 10.1007/s00365-007-9003-x
 Needell D, Vershynin R: Signal recovery from incomplete and inaccurate measurements via regularized orthogonal matching pursuit. IEEE J Sel Top Signal Process 2010, 4(2):310-316.
 Dai W, Milenkovic O: Subspace pursuit for compressive sensing signal reconstruction. IEEE Trans Inf Theory 2009, 55(5):2230-2249.
 Davenport MA, Wakin MB: Analysis of orthogonal matching pursuit using the restricted isometry property. IEEE Trans Inf Theory 2010, 56(9):4395-4401.
 Liu E, Temlyakov VN: Orthogonal super greedy algorithm and applications in compressed sensing. IEEE Trans Inf Theory 2011.
 Johnson WB, Lindenstrauss J: Extensions of Lipschitz mappings into a Hilbert space. Contemp Math 1984, 26:189-206.
 Cai TT, Wang L, Xu G: Shifting inequality and recovery of sparse signals. IEEE Trans Inf Theory 2010, 56(3):1300-1308.
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.