Preconditioned generalized orthogonal matching pursuit

Abstract

Recently, compressed sensing (CS) has attracted much attention because sparse signals can be recovered from a small set of linear samples. Algorithms for CS reconstruction can be roughly classified into two categories: (1) optimization-based algorithms and (2) greedy search ones. In this paper, we propose an algorithm called the preconditioned generalized orthogonal matching pursuit (Pre-gOMP) to promote the recovery performance. We provide a sufficient condition for exact recovery via the Pre-gOMP algorithm, which states that if the mutual coherence of the preconditioned sampling matrix Φ satisfies \( \mu ({\Phi }) < \frac {1}{SK -S + 1}, \) then the Pre-gOMP algorithm exactly recovers any K-sparse signal from the compressed samples, where S (>1) is the number of indices selected in each iteration of Pre-gOMP. We also apply the Pre-gOMP algorithm to ghost imaging. Our experimental results demonstrate that Pre-gOMP can largely improve the imaging quality of ghost imaging, while boosting the imaging speed.

1 Introduction

Recently, compressed sensing (CS) has gained a lot of interest and promoted applications in many fields, such as imaging, signal processing, applied mathematics, and statistics [1–5]. The main goal of CS is to estimate a high dimensional K-sparse signal vector \(\mathbf {x}\in \mathcal {R}^{n}\) (\(\|\mathbf{x}\|_{0} = K \ll n\)) from a small number of linear samples:

$$ \mathbf{y}_{0} = \boldsymbol{\Psi}\mathbf{x}, $$
(1)

where \(\boldsymbol {\Psi }\in \mathcal {R}^{m\times n}\) is often called the sampling matrix. Although Eq. (1) is underdetermined, owing to the sparsity prior, x can be accurately recovered from its samples y0 by solving the \(\ell_{0}\)-minimization problem:

$$ \mathop{\min}\limits_{\mathbf{x}} {\left\| \mathbf{x} \right\|_{0}}~~ \mathrm{subject~ to}~~~~ \mathbf{y}_{0} = \boldsymbol{\Psi}\mathbf{x}. $$
(2)

There has been much effort in solving (2), and the resulting algorithms can be roughly classified into two categories: (i) those relying on optimization and (ii) those using greedy search. The optimization-based approaches relax the \(\ell_{0}\)-norm to the \(\ell_{1}\)-norm and solve the convex optimization problem:

$$ \mathop{\min}\limits_{\mathbf{x}} {\left\| \mathbf{x} \right\|_{1}}~~ \mathrm{subject~ to}~~~~ \mathbf{y}_{0} = \boldsymbol{\Psi} \mathbf{x}. $$
(3)

A well-known algorithm solving (3) is basis pursuit (BP) [2], which can reliably recover sparse signals under appropriate constraints on the sampling matrix [2]. On the other hand, greedy search algorithms have received considerable attention due to their computational simplicity. Examples include matching pursuit (MP) [6], orthogonal matching pursuit (OMP) [7], and orthogonal least squares (OLS) [8]. To improve the computational efficiency and recovery performance, there have also been many studies on modifications of OMP. As a representative variant, generalized OMP (gOMP) [9] chooses S columns of Ψ that are maximally correlated with the residual vector at each iteration, which exhibits computational advantages over the conventional OMP algorithm.

As mentioned, the properties of the sampling matrix have a great influence on the recovery performance. To evaluate the quality of Ψ, the mutual coherence has been widely used [10], which is defined as

$$ \mu ({\boldsymbol{\Psi }}) = \mathop {\max }\limits_{1 \leq i \ne j \leq n} \frac{|\left\langle {\boldsymbol{\Psi }_{i},\boldsymbol{\Psi }_{j}} \right\rangle |}{\left\| \boldsymbol{\Psi}_{i} \right\|_{2} {\left\| {{\boldsymbol{\Psi }}_{j}} \right\|}_{2}}. $$
(4)
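
For concreteness, the mutual coherence in (4) can be computed directly from the column-normalized Gram matrix. The following Python snippet is a minimal sketch of ours, not part of the original paper:

```python
import numpy as np

def mutual_coherence(Psi):
    """Mutual coherence of Eq. (4): the largest absolute normalized inner
    product between two distinct columns of Psi."""
    # Normalize every column to unit l2 norm.
    G = Psi / np.linalg.norm(Psi, axis=0, keepdims=True)
    gram = np.abs(G.T @ G)        # |<Psi_i, Psi_j>| / (||Psi_i||_2 ||Psi_j||_2)
    np.fill_diagonal(gram, 0.0)   # exclude the diagonal (i = j) entries
    return float(gram.max())
```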

Generally speaking, a smaller μ contributes to better signal recovery performance. One way to reduce the mutual coherence is to multiply both sides of the constraint in (2) by a matrix P, i.e.,

$$ \mathop {\min }\limits_{\mathbf{x}} {\left\| \mathbf{x} \right\|_{0}}~~ \mathrm{subject~ to}~~~~ \mathbf{P} \mathbf{y}_{0} = \mathbf{P} \boldsymbol{\Psi} \mathbf{x}. $$
(5)

In doing so, we wish μ(PΨ) to be smaller than that of the original matrix. This operation is commonly referred to as preconditioning in numerical linear algebra, where the matrix P is called the preconditioner [11]. There has been much evidence that preconditioning is useful for promoting the recovery quality of sparse signals.

In this paper, we propose a preconditioned gOMP (Pre-gOMP) algorithm for the recovery of sparse signals. As shown in Algorithm 1, the Pre-gOMP algorithm consists of (i) a preconditioning step and (ii) a conventional signal reconstruction step. The primary contributions of this paper are summarized as follows:

  1.

    Based on the mutual coherence framework, we develop a sufficient condition for the Pre-gOMP algorithm. Specifically, we show that

    $$\mu(\mathbf{P} \boldsymbol{\Psi}) < \frac{1}{SK- S + 1}$$

    is sufficient for Pre-gOMP to exactly recover any K-sparse vector in K iterations.

  2.

    To evaluate the recovery performance of the Pre-gOMP algorithm, we apply it to imaging objects in the application of ghost imaging (GI). Our experimental results reveal that the Pre-gOMP algorithm can largely improve the imaging quality compared to existing methods.

The rest of this paper is organized as follows. In Section 2, we introduce the Pre-gOMP algorithm and analyze it under the mutual coherence framework. Section 3 describes the simulation and the setup of GI. Section 4 presents the simulated and experimental results for the proposed algorithm. We conclude our work in Section 5.

2 Method

2.1 Notations

Let Ω = {1,…,n} and let T = supp(x) = {i | i ∈ Ω, xi ≠ 0} denote the support set of x. \(\mathcal {S} \subseteq \Omega \) is the set of indices selected in each iteration and \(|\mathcal {S}|\) is the cardinality of \(\mathcal {S}\). \(T \backslash \mathcal {S} = \{i \,|\, i\in T, i \notin \mathcal {S}\}\). \(\Lambda _{k} = \Lambda _{k-1} \cup \mathcal {S}\) is the estimated support set at the kth iteration of Pre-gOMP. \(\mathbf {x}_{\mathcal {S}}\in \mathcal {R}^{|\mathcal {S}|}\) is the subvector of x indexed by \(\mathcal {S}\). Similarly, \(\boldsymbol {\Psi }_{\mathcal {S}}\in \mathcal {R}^{m\times |\mathcal {S}|}\) is the submatrix of Ψ containing the columns of Ψ indexed by \(\mathcal {S}\). If \(\boldsymbol {\Psi }_{\mathcal {S}}\) has full column rank, then \(\boldsymbol {\Psi }_{\mathcal {S}}^{\dagger } = (\boldsymbol {\Psi }^{\mathrm {T}}_{\mathcal {S}}\boldsymbol {\Psi }_{\mathcal {S}})^{-1}\boldsymbol {\Psi }^{\mathrm {T}}_{\mathcal {S}}\) is the pseudoinverse of \(\boldsymbol {\Psi }_{\mathcal {S}}\). \(\text {span}(\boldsymbol {\Psi }_{\mathcal {S}})\) is the span of the columns of \(\boldsymbol {\Psi }_{\mathcal {S}}\). \(\mathcal {P}_{\mathcal {S}} = \boldsymbol {\Psi }_{\mathcal {S}}\boldsymbol {\Psi }_{\mathcal {S}}^{\dagger }\) is the projection matrix onto \(\text {span}(\boldsymbol {\Psi }_{\mathcal {S}})\), and \(\mathcal {P}_{\mathcal {S}}^{\perp } = \mathbf {I}-\mathcal {P}_{\mathcal {S}}\) is the projection matrix onto its orthogonal complement, where I is the identity matrix.

2.2 The pre-gOMP algorithm

As mentioned, the Pre-gOMP algorithm consists of two parts: (i) the preconditioning operation and (ii) the signal reconstruction step. The preconditioning operation aims to reduce the mutual coherence of the sampling matrix. In this paper, we adopt the operation in [12], in which the preconditioner P is given in closed-form as

$$ \mathbf{P} = \boldsymbol{\Psi}^{\mathrm{T}}\big(\boldsymbol{\Psi}\boldsymbol{\Psi}^{\mathrm{T}}\big)^{-1}, $$
(6)

which has been shown to be very effective in reducing the mutual coherence. Interested readers are referred to [12] for a detailed description and theoretical analysis of the preconditioner. A similar treatment has also been proposed in [13].
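
As a minimal illustration of (6), the preconditioner can be formed with a linear solve rather than an explicit inverse. The sketch below is ours and assumes that Ψ has full row rank, so that ΨΨ^T is invertible:

```python
import numpy as np

def precondition(Psi):
    """Closed-form preconditioner of Eq. (6): P = Psi^T (Psi Psi^T)^{-1}.
    Returns P and the preconditioned sampling matrix P @ Psi.
    Assumes Psi (m x n) has full row rank."""
    # Solve (Psi Psi^T) Z = Psi, so that Z = (Psi Psi^T)^{-1} Psi and P = Z^T.
    Z = np.linalg.solve(Psi @ Psi.T, Psi)
    P = Z.T
    return P, P @ Psi
```

For a full-row-rank Ψ, this P coincides with the Moore-Penrose pseudoinverse of Ψ, so PΨ is the orthogonal projection onto the row space of Ψ and its non-zero singular values are all equal to one, which is consistent with the Parseval tight frame property stated in Lemma 1 below.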

In the signal reconstruction step, the gOMP algorithm is used, where the preconditioned samples y=Py0 and the preconditioned sampling matrix PΨ are the inputs. We would like to mention two advantages of the Pre-gOMP algorithm. First, the preconditioning operation reduces the mutual coherence, which is useful for promoting the recovery accuracy. Second, the signal reconstruction step can be very efficient because the gOMP algorithm essentially carries out parallel processing to identify the support indices of x, as pointed out in [9]. This computational benefit is no doubt helpful in the application of GI.
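
To make the two-stage structure concrete, the following is a self-contained sketch of Pre-gOMP (our illustration, not the authors' reference implementation): preconditioning of (y0, Ψ) followed by gOMP, which at each iteration selects the S columns most correlated with the residual and re-estimates the coefficients by least squares.

```python
import numpy as np

def pre_gomp(Psi, y0, K, S=3, tol=1e-6):
    """Pre-gOMP sketch: precondition (y0, Psi), then run gOMP for at most
    K iterations, adding S new indices per iteration."""
    # Preconditioning step, Eq. (6): P = Psi^T (Psi Psi^T)^{-1}.
    P = np.linalg.solve(Psi @ Psi.T, Psi).T
    Phi, y = P @ Psi, P @ y0

    n = Phi.shape[1]
    support, x_hat = [], np.zeros(n)
    residual = y.copy()
    for _ in range(K):
        corr = np.abs(Phi.T @ residual)
        corr[support] = -np.inf                 # do not re-select old indices
        new = np.argsort(corr)[-S:]             # S largest correlations
        support = sorted(set(support) | set(new.tolist()))
        # Least-squares re-estimation on the enlarged support.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        x_hat = np.zeros(n)
        x_hat[support] = coef
        residual = y - Phi[:, support] @ coef
        if np.linalg.norm(residual) < tol:
            break
    return x_hat
```

A typical call is x_hat = pre_gomp(Psi, y0, K=10, S=3); with S = 1 the loop reduces to plain OMP applied to the preconditioned pair.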

The analysis of the Pre-gOMP algorithm consists of two parts: (i) the reduction of the mutual coherence after preconditioning and (ii) a sufficient condition for Pre-gOMP in terms of the mutual coherence.

2.3 The reduction of μ after preconditioning

Lemma 1

(Preconditioning [12]) Given a sampling matrix \(\boldsymbol {\Psi } \in \mathcal {R}^{m \times n}\) with m < n, the preconditioned matrix PΨ with P=ΨT(ΨΨT)−1 is a Parseval tight frame.

One can interpret from Lemma 1 that the preconditioned sampling matrix PΨ has identical non-zero singular values. As stated in [14], the larger the smallest non-zero singular value of a matrix is, the smaller the mutual coherence of the matrix tends to be. Therefore, the preconditioner P can be useful in reducing the mutual coherence.

To test the effectiveness of the preconditioning method, we perform simulations. In the simulation, random negative exponential sampling matrices, which are commonly used in GI [15], are considered. The entries of the random negative exponential sampling matrix Ψ are drawn independently from the negative exponential distribution \(p(\mathrm {x}) \sim \frac {1}{{\mathrm {\overline x }}}\exp \left ({ - \mathrm {\frac {x}{{\overline x }}}} \right)\). The size of the sampling matrix is m×n with n=256 fixed and m ranging from 10 to 256. For each sampling number, 500 independent trials are performed and the mean mutual coherence is calculated. In Fig. 1, we plot the mutual coherence as a function of the sampling rate r=m/n: the blue pentagram line shows μ(Ψ), while the red circle line shows μ(PΨ) (denoted as the optimized matrix in Fig. 1). It is observed that μ decreases as the sampling rate r increases. In particular, μ(PΨ) is uniformly smaller than μ(Ψ) over the whole range of sampling rates, which clearly validates the effectiveness of the preconditioning method.
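
The comparison just described can be sketched as follows, reusing the mutual_coherence and precondition helpers above (our illustration; the paper's grid of sampling numbers and its 500 trials are coarsened here for brevity):

```python
import numpy as np

def coherence_vs_sampling_rate(n=256, rates=(0.1, 0.25, 0.5, 0.75, 1.0),
                               trials=100, xbar=1.0, seed=0):
    """Mean mutual coherence of Psi and of P @ Psi versus r = m/n for
    random negative exponential sampling matrices."""
    rng = np.random.default_rng(seed)
    results = []
    for r in rates:
        m = max(2, int(round(r * n)))
        mu_raw, mu_pre = [], []
        for _ in range(trials):
            Psi = rng.exponential(scale=xbar, size=(m, n))   # mean-xbar entries
            _, Phi = precondition(Psi)
            mu_raw.append(mutual_coherence(Psi))
            mu_pre.append(mutual_coherence(Phi))
        results.append((r, np.mean(mu_raw), np.mean(mu_pre)))
    return results
```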

Fig. 1. μ comparison: mutual coherence as a function of the sampling rate.

2.4 Sufficient condition for pre-gOMP based on μ

Theorem 1

Let \(\boldsymbol {\Psi } \in {\mathcal {R}^{m \times n }}\) be the preconditioned sampling matrix. Then, Pre-gOMP exactly recovers any K-sparse signal \(\mathbf {x} \in {\mathcal {R}^{n }}\) from its preconditioned samples y=Ψx under

$$ \mu<\left\{ {\begin{array}{*{20}{c}} {\frac{1}{2K-1}, ~~~~~~when ~S=1,}\\ {\frac{{\mathrm{1}}}{{\left({K - 1} \right)S + 1}}, ~when ~S\geq 2, } \end{array}} \right. $$
(7)

where S (≥1) is the number of indices selected in each iteration of the Pre-gOMP algorithm.

Remark 1

When S=1, the sufficient condition for the Pre-gOMP algorithm is the same as that for the OMP algorithm [16]. When S≥2, the bound in (7) decreases monotonically in S. Namely, the larger S is, the more restrictive the requirement on the preconditioned sampling matrix Ψ becomes. Nevertheless, a larger S can largely reduce the number of iterations, which is useful for reducing the computational complexity of the algorithm.

2.5 Proof of Theorem 1

Before proving Theorem 1, we give some lemmas that are useful in the proof.

Lemma 2

(norm inequality [17]) For matrices A, \(\mathbf {B}\in \mathcal {R}^{m\times n}\), and \(\mathbf {u}\in \mathcal {R}^{m}\), the following inequalities hold:

$$\begin{array}{*{20}l} &|\|\mathbf{A}\|_{2}-\|\mathbf{B}\|_{2}|\leq \|\mathbf{A}+\mathbf{B}\|_{2} \leq \|\mathbf{A}\|_{2}+\|\mathbf{B}\|_{2}, \\ &\hspace{15mm}\frac{\|\mathbf{u}\|_{1}}{\sqrt{m}}\leq \|\mathbf{u}\|_{2} \leq \sqrt{m}\|\mathbf{u}\|_{\infty}. \end{array} $$
(8a) (8b)

Lemma 3

(Lemma 6 in [12]) For two disjoint sets I1, I2 ⊆ {1,2,…,n}, let \(\boldsymbol {\Psi }_{I_{1}}, \boldsymbol {\Psi }_{I_{2}}\) be the corresponding column submatrices of Ψ. Then

$$ \|\boldsymbol{\Psi}_{I_{1}}^{\mathrm{T}}\boldsymbol{\Psi}_{I_{2}}\|_{2} \leq \mu \sqrt{|I_{1}||I_{2}|}, $$
(9)

where μ is the mutual coherence of Ψ.

Lemma 4

(Consequences of RIP [18,19]): Let \(\mathcal {S}\subseteq \Omega \), if \(\delta _{|\mathcal {S}|}\in [0,1)\), then for any vector \(\mathbf {u}\in \mathcal {R}^{|\mathcal {S}|}\),

$$\begin{array}{*{20}l} & \big(1-\delta_{|\mathcal{S}|}\big)\|\mathbf{u}\|_{2} \leq \|\boldsymbol{\Psi}_{\mathcal{S}}^{\mathrm{T}}\boldsymbol{\Psi_{\mathcal{S}}} \mathbf{u}\|_{2}\leq \big(1+\delta_{|\mathcal{S}|} \big)\|\mathbf{u}\|_{2} \\ &\hspace{2mm} \frac{\|\mathbf{u}\|_{2}}{ 1 {+} \delta_{|\mathcal{S}|}} \leq \|(\boldsymbol{\Psi}_{\mathcal{S}}^{\mathrm{T}}\boldsymbol{\Psi_{\mathcal{S}}})^{-1} \mathbf{u}\|_{2} \leq \frac{\|\mathbf{u}\|_{2}}{1{-}\delta_{|\mathcal{S}|}} \\ &\hspace{20mm}\delta_{|\mathcal{S}|} \leq (|\mathcal{S}|-1)\mu \end{array} $$
(10a) (10b) (10c)

Now, we proceed to prove Theorem 1 via mathematical induction, following a strategy similar to that in [9,20,21] but extended to the mutual coherence framework. Suppose that the Pre-gOMP algorithm has performed k iterations successfully, i.e., Λk contains at least k correct indices. Then, at the kth iteration, the residual is

$$\mathbf{r}_{k}= \mathcal{P}_{\Lambda_{k}}^{\bot} \boldsymbol{\Psi }_{T \backslash \Lambda_{k}} \mathbf{x}_{T \backslash \Lambda_{k}}. $$

Let β1 be the maximal absolute inner product between the residual rk and the correct atoms Ψi, i∈T. Let αi, i=1,2,…,S, be the S largest absolute inner products between the residual rk and the incorrect atoms Ψi, i∈Tc, arranged in descending order of magnitude (α1≥α2≥⋯≥αS). Following the strategy in [12], we build the sufficient condition by showing that

$$ \beta_{1} > \alpha_{S}, $$
(11)

which guarantees at least one correct atom selected at the (k+1)-th iteration of Pre-gOMP.

In the (k+1)-th (0≤k≤K−1) iteration, one has

$$ \begin{array}{ll} \beta_{1} & = \| \boldsymbol{\Psi}^{\mathrm{T}}_{T{\backslash}\Lambda_{k}} \mathbf{r}_{k} \|_{\infty} \\ & \overset{(\text{8b})}{\geq} \frac{ \| \boldsymbol{\Psi}_{T \backslash \Lambda_{k}}^{\mathrm{T}} \mathbf{r}_{k} \|_{2}} {\sqrt{\left| T\backslash \Lambda_{k} \right|}} \\ & = \frac{ \| \boldsymbol{\Psi}_{T \backslash \Lambda_{k}}^{\mathrm{T}} \mathcal{P}_{\Lambda_{k}}^{\bot} \boldsymbol{\Psi }_{T \backslash \Lambda_{k}} \mathbf{x}_{T \backslash \Lambda_{k}} \|_{2} }{\sqrt{\left| T\backslash \Lambda_{k} \right|}} \\ & \overset{(\text{8a})}{\geq} \frac{\| \boldsymbol{\Psi}_{T \backslash \Lambda_{k}}^{\mathrm{T}} \boldsymbol{\Psi}_{T \backslash \Lambda_{k}} \mathbf{x}_{T\backslash\Lambda_{k}} \|_{2} - \| \boldsymbol{\Psi}_{T\backslash \Lambda_{k}}^{\mathrm{T}} \mathcal{P}_{\Lambda_{k}} \boldsymbol{\Psi}_{T \backslash \Lambda_{k}} \mathbf{x}_{T \backslash \Lambda_{k}} \|_{2}}{\sqrt{ | T\backslash \Lambda_{k} |}},\\ \end{array} $$
(12)

where

$$\begin{array}{*{20}l} & \left\| \boldsymbol{\Psi}_{T\backslash {\Lambda_{k}}}^{\mathrm{T}}\boldsymbol{\Psi}_{T\backslash {\Lambda_{k}}}x_{T\backslash {\Lambda_{k}}} \right\|_{2}\\ & \overset{(10a),(10c)}{\geq} {\left({1 - \left({\left| {T\backslash {\Lambda_{k}}} \right| - 1} \right)\mu} \right)} \left\| {{\mathbf{x}_{T\backslash {\Lambda_{k}}}}} \right\|_{2}, \end{array} $$
(13)

and

$$\begin{array}{*{20}l} & \left\| \boldsymbol{\Psi}_{T\backslash {\Lambda_{k}}}^{\mathrm{T}} \mathcal{P}_{{\Lambda_{k}}}\boldsymbol{\Psi}_{T\backslash {\Lambda_{k}}}\mathbf{x}_{T\backslash {\Lambda_{k}}} \right\|_{2} \\ & = \left\| \boldsymbol{\Psi}_{T\backslash {\Lambda_{k}}}^{\mathrm{T}}{\boldsymbol{\Psi}_{{\Lambda_{k}}}}{\left(\boldsymbol{\Psi}_{{\Lambda_{k}}}^{\mathrm{T}}{\boldsymbol{\Psi}_{{\Lambda_{k}}}} \right)^{\mathrm{ - 1}}}\boldsymbol{\Psi}_{\Lambda_{k}}^{\mathrm{T}} \boldsymbol{\Psi}_{T\backslash {\Lambda_{k}}} \mathbf{x}_{T\backslash {\Lambda_{k}}} \right\|_{2} \\ & \overset{(9)}{\le} \mu \sqrt {\left| {T\backslash {\Lambda_{k}}} \right|\left| {{\Lambda_{k}}} \right|} \left\| {{\left(\boldsymbol{\Psi}_{{\Lambda_{k}}}^{\mathrm{T}} \boldsymbol{\Psi}_{{\Lambda_{k}}} \right)}^{\mathrm{-1}}} \boldsymbol{\Psi}_{{\Lambda_{k}}}^{\mathrm{T}} \boldsymbol{\Psi}_{T\backslash {\Lambda_{k}}} \mathbf{x}_{T\backslash \Lambda_{k}} \right\|_{2} \\ & \overset{(10b),(10c)}{\leq} \frac{\mu \sqrt {\left| {T\backslash {\Lambda_{k}}} \right|\left| {{\Lambda_{k}}} \right|}}{{1 - \left({\left| {{\Lambda_{k}}} \right| - 1} \right)\mu }}{\left\| {\boldsymbol{\Psi}_{{\Lambda_{k}}}^{\mathrm{T}}{\boldsymbol{\Psi}_{T\backslash {\Lambda_{k}}}}{\mathbf{x}_{T\backslash {\Lambda_{k}}}}} \right\|_{2}} \\ & \overset{(9)}{\leq} \frac{\mu \sqrt {\left| {T\backslash {\Lambda_{k}}} \right|\left| {{\Lambda_{k}}} \right|}}{{1 - \left({\left| {{\Lambda_{k}}} \right| - 1} \right)\mu }}\sqrt {\left| {T\backslash {\Lambda_{k}}} \right|\left| {{\Lambda_{k}}} \right|} {\left\| {{\mathbf{x}_{T\backslash {\Lambda_{k}}}}} \right\|_{2}} \\ & = \frac{{\left| {T\backslash {\Lambda_{k}}} \right|\left| {{\Lambda_{k}}} \right|\mu }}{{1 - \left({\left| {{\Lambda_{k}}} \right| - 1} \right)\mu }}{\left\| {{\mathbf{x}_{T\backslash {\Lambda_{k}}}}} \right\|_{2}}. \end{array} $$
(14)

Combining (13) with (14), we can get

$$\begin{array}{*{20}l} & {\beta_{1}} \geq {\frac{1}{\sqrt {\left| {T\backslash {\Lambda_{k}}} \right|}} \Big(1 - \left({\left| {T\backslash {\Lambda_{k}}} \right| - 1} \right)\mu - \frac{{ \left| {T\backslash {\Lambda_{k}}} \right| \left| {{\Lambda_{k}}} \right|\mu }}{{1 - \left({\left| {{\Lambda_{k}}} \right| - 1} \right)\mu }} \Big)} \\ & \cdot \left\|\mathbf{x}_{T\backslash {\Lambda_{k}}} \right\|_{2}. \end{array} $$
(15)

Our next task is to upper bound αS. Before doing so, we observe that

$$\begin{array}{*{20}l} & \left\| {\boldsymbol{\Psi}_{{U_{k}}}^{\mathrm{T}}{\mathbf{r}_{k}}} \right\|_{2} \\ & = {\left\| {\boldsymbol{\Psi}_{{U_{k}}}^{\mathrm{T}}\left({\mathbf{I} - {\mathcal{P}_{{\Lambda_{k}}}}} \right){\boldsymbol{\Psi}_{T\backslash {\Lambda_{k}}}}{\mathbf{x}_{T\backslash {\Lambda_{k}}}}} \right\|_{2}} \\ &\leq \left\| \boldsymbol{\Psi}_{{U_{k}}}^{\mathrm{T}} \boldsymbol{\Psi}_{T\backslash {\Lambda_{k}}} \mathbf{x}_{ T\backslash {\Lambda_{k}}} \right\|_{2} + \left\| \boldsymbol{\Psi}_{ {U_{k}}}^{\mathrm{T}} \mathcal{P}_{{\Lambda_{k}}} \boldsymbol{\Psi}_{ T\backslash {\Lambda_{k}}} \mathbf{x}_{T\backslash {\Lambda_{k}}} \right\|_{2}, \end{array} $$
(16)

where \( {U_{k}}: = \mathop {\arg \max }\limits _{s \subset { \Omega \backslash (T \cup \Lambda _{k})},\left | s \right | = S} {\left \| {\boldsymbol {\Psi }_{s}^{\mathrm {T}}{\mathbf {r}_{k}}} \right \|_{1}}.\) The first and second terms on the right-hand side of (16) can be bounded as

$$\begin{array}{*{20}l} & \left\| {\boldsymbol{\Psi}_{{U_{k}}}^{\mathrm{T}}{\boldsymbol{\Psi}_{T\backslash {\Lambda_{k}}}}{\mathbf{x}_{T\backslash {\Lambda_{k}}}}} \right\|_{2} \overset{(9)}{\leq} \mu \sqrt {\left| {{U_{k}}} \right|\left| {T\backslash {\Lambda_{k}}} \right|} {\left\| {{\mathbf{x}_{T\backslash {\Lambda_{k}}}}} \right\|_{2}} \\ & \hspace{28mm} = \mu \sqrt {S\left| {T\backslash {\Lambda_{k}}} \right|} {\left\| {{\mathbf{x}_{T\backslash {\Lambda_{k}}}}} \right\|_{2}}, \end{array} $$
(17)

and

$$\begin{array}{*{20}l} & \left\| {\boldsymbol{\Psi}_{{U_{k}}}^{\mathrm{T}}{\mathcal{P}_{{\Lambda_{k}}}}{\boldsymbol{\Psi}_{T\backslash {\Lambda_{k}}}}{\mathbf{x}_{T\backslash {\Lambda_{k}}}}} \right\|_{2} \\ & = {\left\| {\boldsymbol{\Psi}_{{U_{k}}}^{\mathrm{T}}{\boldsymbol{\Psi}_{{\Lambda_{k}}}}{{\left({\boldsymbol{\Psi}_{{\Lambda_{k}}}^{T}{\boldsymbol{\Psi}_{{\Lambda_{k}}}}} \right)}^{{\mathrm{ - 1}}}}\boldsymbol{\Psi}_{{\Lambda_{k}}}^{\mathrm{T}}{\boldsymbol{\Psi}_{T\backslash {\Lambda_{k}}}}{\mathbf{x}_{T\backslash {\Lambda_{k}}}}} \right\|_{2}} \\ & \overset{(9)}{\leq} \mu \sqrt {\left| {{U_{k}}} \right|\left| {{\Lambda_{k}}} \right|} {\left\| {{{\left({\boldsymbol{\Psi}_{{\Lambda_{k}}}^{\mathrm{T}}{\boldsymbol{\Psi}_{{\Lambda_{k}}}}} \right)}^{{\mathrm{ - 1}}}}\boldsymbol{\Psi}_{{\Lambda_{k}}}^{\mathrm{T}}{\boldsymbol{\Psi}_{T\backslash {\Lambda_{k}}}}{\mathbf{x}_{T\backslash {\Lambda_{k}}}}} \right\|_{2}} \\ &\overset{\text{(10b),(10c)}}{\leq} \frac{\mu \sqrt {\left| {{U_{k}}} \right|\left| {{\Lambda_{k}}} \right|}}{{1 - \left({\left| {{\Lambda_{k}}} \right| - 1} \right)\mu }}{\left\| {\boldsymbol{\Psi}_{{\Lambda_{k}}}^{\mathrm{T}}{\boldsymbol{\Psi}_{T\backslash {\Lambda_{k}}}}{\mathbf{x}_{T\backslash {\Lambda_{k}}}}} \right\|_{2}} \\ &\overset{(9)}{\leq} \frac{\mu \sqrt {\left| {{U_{k}}} \right|\left| {{\Lambda_{k}}} \right|}}{{1 - \left({\left| {{\Lambda_{k}}} \right| - 1} \right)\mu }}\mu \sqrt {\left| {T\backslash {\Lambda_{k}}} \right|\left| {{\Lambda_{k}}} \right|} {\left\| {{\mathbf{x}_{T\backslash {\Lambda_{k}}}}} \right\|_{2}} \\ & = \frac{{\left| {{\Lambda_{k}}} \right|\sqrt {\left| {{U_{k}}} \right|\left| {T\backslash {\Lambda_{k}}} \right|} {\mu^{2}}}}{{1 - \left({\left| {{\Lambda_{k}}} \right| - 1} \right)\mu }}{\left\| {{\mathbf{x}_{T\backslash {\Lambda_{k}}}}} \right\|_{2}}, \end{array} $$
(18)

respectively. Combining (17) with (18), one gets

$$\begin{array}{*{20}l} & \left\| {\boldsymbol{\Psi}_{{U_{k}}}^{\mathrm{T}}{\mathbf{r}_{k}}} \right\|_{2} \leq \Bigg({\mu \sqrt {S\left| {T\backslash {\Lambda_{k}}} \right|} + \frac{{\left| {\Lambda_{k}} \right|\sqrt {S\left| {T\backslash {\Lambda_{k}}} \right|} {\mu^{2}}}}{{1 - \left({\left| {{\Lambda_{k}}} \right| - 1} \right)\mu }}} \Bigg) \\ & \hspace{26mm} \cdot \left\| {{\mathbf{x}_{T\backslash {\Lambda_{k}}}}} \right\|_{2}. \end{array} $$
(19)

On the other hand,

$$\begin{array}{*{20}l} &{\left\| {\boldsymbol{\Psi}_{{U_{k}}}^{\mathrm{T}}{\mathbf{r}_{k}}} \right\|_{2}} \geq \frac{1}{{\sqrt S }}{\left\| {\boldsymbol{\Psi}_{{U_{k}}}^{\mathrm{T}}{\mathbf{r}_{k}}} \right\|_{1}} \\ &~~~~~~~~~~~~~\geq \frac{1}{{\sqrt S }}S{\alpha_{S}} \\ &~~~~~~~~~~~~~\geq \sqrt S {\alpha_{S}}. \end{array} $$
(20)

From (19) and (20), one further has

$$\begin{array}{*{20}l} &{\alpha_{S}} \leq \Bigg({\mu \sqrt {\left| {T\backslash {\Lambda_{k}}} \right|} + \frac{{\left| {{\Lambda_{k}}} \right|\sqrt {\left| {T\backslash {\Lambda_{k}}} \right|} {\mu^{2}}}}{{1 - \left({\left| {{\Lambda_{k}}} \right| - 1} \right)\mu }}} \Bigg)\\ &\hspace{8mm}\cdot{\left\| {{\mathbf{x}_{T\backslash {\Lambda_{k}}}}} \right\|_{2}}. \end{array} $$
(21)

By combining (15) with (21), we obtain the sufficient condition of (11) as

$$\begin{array}{*{20}l} & \frac{1}{{\sqrt {\left| {T\backslash {\Lambda_{k}}} \right|} }}\left({1 - \left({\left| {T\backslash {\Lambda_{k}}} \right| - 1} \right)\mu - \frac{{\left| {T\backslash {\Lambda_{k}}} \right|\left| {{\Lambda_{k}}} \right|{\mu^{2}}}}{{1 - \left({\left| {{\Lambda_{k}}} \right| - 1} \right)\mu }}} \right) \\ &~~~~~~~~> {\mu \sqrt {\left| {T\backslash {\Lambda_{k}}} \right|} + \frac{{\left| {{\Lambda_{k}}} \right|\sqrt {\left| {T\backslash {\Lambda_{k}}} \right|} {\mu^{2}}}}{{1 - \left({\left| {{\Lambda_{k}}} \right| - 1} \right)\mu }}}, \end{array} $$
(22)

which can be simplified as

$$ \left| {T\backslash {\Lambda_{k}}} \right| < \frac{1}{2}\left(1 + \frac{1}{\mu} - \left| {{\Lambda_{k}}} \right| \right). $$
(23)

Since |T∖Λk|≤K−k and |Λk|=Sk, to guarantee that (23) holds, it suffices to have

$$ K < \frac{1}{2}\left({1 + \frac{1}{\mu }} \right) - \left({\frac{S}{2} - 1} \right)k. $$
(24)

When S=1, the condition in (24) becomes

$$ K < \frac{1}{2}\left({1 + \frac{1}{\mu }} \right) + \frac{1}{2}k. $$
(25)

Since 0≤k≤K−1, (25) is guaranteed by

$$ K < \frac{1}{2}\left({1 + \frac{1}{\mu }} \right), $$
(26)

or equivalently,

$$ \mu < \frac{1}{2K-1}. $$
(27)

When S≥2, the right-hand side of (24) decreases with k, so taking the worst case k=K−1 gives the sufficient condition

$$ K < \frac{1}{2}\left({1 + \frac{1}{\mu }} \right) - \left({\frac{S}{2} - 1} \right)\left(K-1 \right), $$
(28)

that is,

$$ \mu < \frac{1}{S(K-1)+1}. $$
(29)

Finally, by combining (27) with (29), the sufficient condition for Pre-gOMP can be given by

$$ \mu<\left\{ {\begin{array}{*{20}{c}} {\frac{1}{2K-1}, ~~~~~~ ~S=1,}\\ {\frac{{\mathrm{1}}}{{\left({K - 1} \right)S + 1}}, ~ ~S\geq 2. } \end{array}} \right. $$
(30)

3 Experiments

In this section, we carry out simulations and experiments to test the performance of Pre-gOMP. We also apply the Pre-gOMP algorithm to recover image signals in the application of GI.

3.1 Simulation experiments

In the simulation, we use the testing strategy in [22,23], which measures the effectiveness of recovery algorithms by checking the empirical frequency of exact reconstruction in the noiseless case. For comparison, we adopt the OMP, iterative hard thresholding (IHT) [24], iterative soft thresholding (IST) [25], BP, gOMP, and Pre-gOMP algorithms to recover signals. In each trial, we construct an m×n (m=128 and n=256) random negative exponential sampling matrix Ψ with entries drawn independently from the negative exponential distribution exp(1). Moreover, we generate a K-sparse vector x whose support is chosen at random. Three types of sparse signals are taken into account: (i) sparse Gaussian signals, (ii) sparse pulse amplitude modulation (PAM) signals, and (iii) sparse two-valued signals, whose non-zero elements are selected from \(\mathcal {N}(0,1)\), {±1,±3}, and {0,255}, respectively.
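
For reference, the three families of test signals can be generated as in the sketch below (ours; the exact simulation code of the paper is not available to us). In each trial, the sampling matrix can then be drawn as rng.exponential(scale=1.0, size=(128, 256)), and a trial counts as an exact recovery when the reconstruction matches x up to a small tolerance.

```python
import numpy as np

def make_sparse_signal(n, K, kind="gaussian", rng=None):
    """K-sparse test vector with a uniformly random support (Section 3.1):
    'gaussian'   -> non-zero entries drawn from N(0, 1)
    'pam'        -> non-zero entries drawn from {+-1, +-3}
    'two_valued' -> non-zero entries drawn from {0, 255}"""
    rng = np.random.default_rng() if rng is None else rng
    x = np.zeros(n)
    support = rng.choice(n, size=K, replace=False)
    if kind == "gaussian":
        x[support] = rng.standard_normal(K)
    elif kind == "pam":
        x[support] = rng.choice([-3.0, -1.0, 1.0, 3.0], size=K)
    elif kind == "two_valued":
        x[support] = rng.choice([0.0, 255.0], size=K)
    else:
        raise ValueError(f"unknown signal type: {kind}")
    return x
```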

3.2 Setup of GI

Figure 2 presents a typical schematic of computational ghost imaging [26]. A light-emitting diode (LED) with wavelength λ = 532 nm is used as the light source. The light beam is uniformly projected onto the digital micromirror device (DMD) by means of Köhler illumination through the illumination lens. A series of desired random coded patterns is preloaded onto the DMD, whose micromirrors control the direction of the light; the gray level of each pattern is set by controlling the integration time of the DMD. The coded patterns are then projected onto the object through the emission lens, and the photons transmitted through the object are collected by a bucket detector through a conventional imaging lens.

Fig. 2. The experimental schematic of ghost imaging via DMD.

In the GI system, the DMD consists of 1024×786 pixels, each of size 13 μm, and only the 252×252 pixels at the center of the DMD are used to generate the coded patterns. The size of the object is 9 mm × 9 mm. The object's image OGI(xr,yr) can be retrieved via the conventional GI algorithm and via the CS algorithm, respectively. In the GI algorithm, OGI(xr,yr) is retrieved from the correlation between the test-arm light intensity Bs and the reference-arm light intensity \(I_{r}^{s}(x_{r},y_{r})\) [27]:

$$ O_{GI}(x_{r},y_{r}) = \frac{1}{m}\sum\limits_{s=1}^{m} \left({I_{r}^{s}(x_{r},y_{r})-\left\langle {I_{r}^{s}(x_{r},y_{r})} \right\rangle} \right) B^{s}, $$
(31)

where s denotes the sth sampling, m is the total sampling number, and \(\left \langle {I_{r}^{s}(x_{r},y_{r})} \right \rangle = \sum \limits _{s=1}^{m} {I_{r}^{s}(x_{r},y_{r})} /m\) represents the ensemble average of \(I_{r}^{s}(x_{r},y_{r})\).
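
Assuming the coded patterns are stored as an array I_r of shape (m, H, W) and the bucket values as a length-m vector B (these names and shapes are our assumptions for illustration), Eq. (31) can be evaluated as in the following sketch:

```python
import numpy as np

def gi_correlation(I_r, B):
    """Conventional GI reconstruction, Eq. (31):
    O_GI = (1/m) * sum_s (I_r^s - <I_r>) * B^s."""
    m = I_r.shape[0]
    I_mean = I_r.mean(axis=0)      # ensemble average <I_r(x_r, y_r)>
    fluctuation = I_r - I_mean     # pattern fluctuations
    return np.tensordot(B, fluctuation, axes=(0, 0)) / m
```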

In the CS algorithm, OGI(xr,yr) is recovered by solving the problem,

$$ \mathop {\min }\limits_{\mathbf{O}} {\left\| \mathbf{O} \right\|_{0}}~~ \mathrm{subject~ to}~~~~ \mathbf{y} = \boldsymbol{\Psi} \mathbf{O}, $$
(32)

where \(\mathbf{y} = [B^{1}, \cdots, B^{s}, \cdots, B^{m}]^{\mathrm{T}}\in \mathcal{R}^{m}\) denotes the sampling vector, the object O is reshaped into a column vector, and Ψ is constructed from the coded patterns \(I_{r}^{s}(x_{r},y_{r}), s=1,\cdots,m\).
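
In this formulation, each coded pattern supplies one row of Ψ and the bucket values are stacked into y. Below is a minimal sketch of ours; the names I_r and B, the sparsity budget K, and the 252 x 252 image size in the commented example are assumptions for illustration, and the pre_gomp sketch from Section 2.2 is reused.

```python
import numpy as np

def gi_to_cs(I_r, B):
    """Arrange GI data into the CS model y = Psi @ O of Eq. (32):
    row s of Psi is the s-th coded pattern flattened into a vector."""
    m = I_r.shape[0]
    Psi = I_r.reshape(m, -1)        # (m, H*W) sampling matrix
    y = np.asarray(B, dtype=float)  # bucket-detector samples
    return Psi, y

# Example (hypothetical sparsity budget K):
#   Psi, y = gi_to_cs(I_r, B)
#   O_hat = pre_gomp(Psi, y, K=2000, S=2).reshape(252, 252)
```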

4 Results and discussion

In Fig. 3, we perform 500 independent trials for each sparsity level and plot the empirical frequency of exact reconstruction as a function of the sparsity level. For the gOMP-type algorithms, we choose S=3,5 in our simulation. Figure 3a, b, and c show the recovery performance for sparse Gaussian signals, sparse PAM signals, and sparse two-valued signals, respectively. The results reveal that the critical sparsity of the Pre-gOMP algorithm is larger than that of the gOMP algorithm, which implies that the preconditioning method indeed promotes the recovery performance. It can also be observed that the Pre-gOMP algorithm outperforms the OMP and the thresholding algorithms. Even when compared with BP, Pre-gOMP still shows quite competitive recovery performance. Overall, Pre-gOMP is effective for recovering all three types of sparse signals.

Fig. 3. Frequency of exact recovery of sparse signals as a function of K for the random negative exponential sampling matrix.

In the GI imaging experiment, we adopt the GI, differential GI (DGI) [27], pseudo-inverse GI (PGI) [28], gOMP, gradient projection for sparse reconstruction (GPSR) [29], BP, and Pre-gOMP algorithms to recover the image of the object and compare their performances. The reconstruction results of different objects via the different algorithms are shown in Fig. 4, where results at selected sampling rates are presented: the sampling rate is labeled in the left column, the original objects are shown in the right column, and the reconstructions via the different algorithms are compared in the columns in between. It is observed that the reconstruction results of GI are significantly improved by the Pre-gOMP algorithm.

Fig. 4. Experimental results (middle columns) of the GI, PGI, DGI, gOMP, GPSR, BP, and Pre-gOMP reconstruction algorithms. The original objects are shown in the right column and the sampling rates used for reconstruction in the left column; different objects are compared across rows.

Figure 5 shows the recovery performance with respect to different sampling rates. To quantitatively measure the recovery quality, the peak signal-to-noise ratio (PSNR) is adopted, which is defined as \( \text {PSNR} = 10\log \left ({\frac {{\mathrm {MAX_{_{I}}}^{2}}}{\mathrm {{MSE}}}} \right), \) where \(\mathrm {MAX_{_{I}}}\) is the maximum value in the reconstructed image and MSE is the mean square error between the reconstructed image and the original object. A larger PSNR generally implies better recovery quality. As observed in Fig. 5, the PSNR increases with the sampling rate. The Pre-gOMP algorithm indeed improves the recovery quality of GI compared with the other algorithms. In particular, the recovery quality via Pre-gOMP is slightly better than that via BP over a range of sampling rates, and is improved by over 2 dB compared with that via gOMP.
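
The PSNR used above can be computed as in the following sketch (ours), with MAX_I taken as the maximum value of the reconstructed image as stated in the text:

```python
import numpy as np

def psnr(reconstruction, reference):
    """PSNR = 10 * log10(MAX_I^2 / MSE), where MSE is the mean square error
    between the reconstructed image and the original object."""
    rec = np.asarray(reconstruction, dtype=float)
    ref = np.asarray(reference, dtype=float)
    mse = np.mean((rec - ref) ** 2)
    max_i = rec.max()
    return 10.0 * np.log10(max_i ** 2 / mse)
```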

Fig. 5. Reconstruction performance as a function of the sampling rate for different reconstruction algorithms.

Figure 6 shows the running time of the different recovery algorithms. The running time is measured using MATLAB on a quad-core 64-bit processor under Windows 10. In Fig. 6, the result of the GI algorithm is not included because its computational complexity is similar to that of DGI. Overall, it is observed that the running time of DGI and PGI is smaller than that of the CS algorithms, and that the Pre-gOMP (S=2) algorithm is faster than the BP, gOMP (S=3), and GPSR algorithms. Both simulation and experimental results demonstrate that the Pre-gOMP algorithm exhibits competitive performance in signal reconstruction while maintaining a fast running time.

Fig. 6. Running time as a function of the sampling rate for different reconstruction algorithms.

5 Conclusion

In this paper, we have proposed the Pre-gOMP algorithm for the recovery of sparse signals. Using the mutual coherence framework, we have developed a sufficient condition for Pre-gOMP to exactly reconstruct any K-sparse signal. It is shown that if the mutual coherence μ of the preconditioned sampling matrix satisfies μ<1/(KS−S+1) (S>1), then the Pre-gOMP algorithm perfectly recovers any K-sparse signal from its preconditioned samples. Furthermore, we have applied the Pre-gOMP algorithm to recover image signals in the application of GI. Our experimental results demonstrate that Pre-gOMP can largely improve the imaging quality of GI, while boosting the recovery speed.

Availability of data and materials

Please contact the author for data requests.

References

  1. B. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, M. J. Padgett, 3D computational imaging with single-pixel detectors. Science. 340(6134), 844–847 (2013).

  2. S. S. Chen, D. L. Donoho, M. A. Saunders, Atomic decomposition by basis pursuit. SIAM Rev. 43(1), 129–159 (2001).

  3. D. L. Donoho, Compressed sensing. IEEE Trans. Inf. Theory. 52(4), 1289–1306 (2006).

  4. E. J. Candes, J. Romberg, T. Tao, Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory. 52(2), 489–509 (2006).

  5. E. Candes, T. Tao, The Dantzig selector: Statistical estimation when p is much larger than n. Ann. Stat.35(6), 2313–2351 (2007).

  6. S. G. Mallat, Z. Zhang, Matching pursuits with time-frequency dictionaries. IEEE Trans. Sig. Process. 41(12), 3397–3415 (1993).

  7. Y. C. Pati, R. Rezaiifar, P. S. Krishnaprasad, in Proceedings of 27th Asilomar Conference on Signals, Systems and Computers. Orthogonal matching pursuit: recursive function approximation with applications to wavelet decomposition (IEEE, 1993). https://doi.org/10.1109/acssc.1993.342465.

  8. S. Chen, S. A. Billings, W. Luo, Orthogonal least squares methods and their application to non-linear system identification. Int. J. Control. 50(5), 1873–1896 (1989).

  9. J. Wang, S. Kwon, B. Shim, Generalized orthogonal matching pursuit. IEEE Trans. Sig. Process.60(12), 6202–6216 (2012).

  10. E. J. Candès, The restricted isometry property and its implications for compressed sensing. C. R. Math.346(9-10), 589–592 (2008).

  11. M. Benzi, Preconditioning techniques for large linear systems: A Survey. J. Comput. Phys.182(2), 418–477 (2002).

  12. Z. Tong, J. Wang, S. Han, Preconditioned multiple orthogonal least squares and applications in ghost imaging via sparsity constraint. arXiv:1910.04926 (2019).

  13. X. Liao, H. Li, L. Carin, Generalized alternating projection for weighted-ℓ2,1 minimization with applications to model-based compressive sensing. SIAM J. Imaging Sci.7(2), 797–823 (2014).

  14. S. Ubaru, A. K. Seghouane, Y. Saad, Improving the incoherence of a learned dictionary via rank shrinkage. Neural Comput.29(1), 263–285 (2017).

  15. F. Ferri, D. Magatti, A. Gatti, M. Bache, E. Brambilla, L. A. Lugiato, High-resolution ghost image and ghost diffraction experiments with thermal light. Phys. Rev. Lett.94(184), 1836024 (2005).

  16. J. A. Tropp, A. C. Gilbert, Signal recovery from random measurements via orthogonal matching pursuit. IEEE Trans. Inf. Theory. 53(12), 4655–4666 (2007).

  17. G. H. Golub, C. F. Van Loan, Matrix computations (Johns Hopkins University Press, Baltimore, 1983).

  18. S. Kwon, J. Wang, B. Shim, Multipath matching pursuit. IEEE Trans. Inform. Theory. 60(5), 2986–3001 (2014).

  19. H. Li, J. Wen, A new analysis for support recovery with block orthogonal matching pursuit. IEEE Sig. Process. Lett.26(2), 247–251 (2018).

  20. H. Li, L. Wang, X. Zhan, D. K. Jian, On the fundamental limit of orthogonal matching pursuit for multiple measurement vector. IEEE Access. 7:, 48860–48866 (2019).

  21. H. Li, J. Wen, Generalized covariance-assisted matching pursuit. Sig. Process.163:, 232–237 (2019).

  22. W. Dai, O. Milenkovic, Subspace pursuit for compressive sensing signal reconstruction. IEEE Trans. Inform. Theory. 55(5), 2230–2249 (2009).

  23. E. Candes, M. Rudelson, T. Tao, R. Vershynin, in IEEE 46th Annual IEEE Symposium on Foundations of Computer Science (FOCS’05). Error correction via linear programming (IEEE, 2005). https://doi.org/10.1109/sfcs.2005.5464411.

  24. B. Thomas, D. E. Mike, Iterative hard thresholding for compressed sensing. Appl. Comput. Harmon. Anal.27(3), 265–274 (2009).

  25. K. Bredies, A. D. Lorenz, Linear convergence of iterative soft-thresholding. J. Fourier Anal. Appl.14(5), 813–837 (2008).

  26. J. H. Shapiro, Computational ghost imaging. Phys. Rev. A. 78(6), 061802–061809 (2008).

  27. F. Ferri, D. Magatti, L. A. Lugiato, A. Gatti, Differential ghost imaging. Phys. Rev. Lett.104(25), 253603 (2010).

  28. C. Zhang, S. Guo, J. Cao, J. Guan, F. Gao, Object reconstitution using pseudo-inverse for ghost imaging. Opt. Express.22(24), 30063–30073 (2014).

  29. M. A. Figueiredo, R. D. Nowak, S. J. Wright, Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems.IEEE J. Sel. Topics Sig. Process.1(4), 586–597 (2007).

Acknowledgements

The authors would like to thank the National Natural Science Foundation of China and the Youth Innovation Promotion Association of the Chinese Academy of Sciences for their support, as well as everyone who helped this paper to be published.

Funding

This work is supported in part by the National Natural Science Foundation of China (61971146, U1509217) and the Shanghai Municipal Science and Technology Major Project (2018SHZDZX01), and in part by the Youth Innovation Promotion Association of the Chinese Academy of Sciences (2017-2013162).

Author information

Contributions

Tong, Hu, Han, and Wang conceived of the algorithm and designed the experiments. Wang and Wang revised the manuscript. All authors read and approved the final manuscript. Zhishen Tong and Feng Wang contributed equally to this work.

Authors’ information

Not applicable.

Corresponding author

Correspondence to Jian Wang.

Ethics declarations

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Cite this article

Tong, Z., Wang, F., Hu, C. et al. Preconditioned generalized orthogonal matching pursuit. EURASIP J. Adv. Signal Process. 2020, 21 (2020). https://doi.org/10.1186/s13634-020-00680-9
