Preconditioned generalized orthogonal matching pursuit

Recently, compressed sensing (CS) has attracted much attention because sparse signals can be retrieved from a small set of linear samples. Algorithms for CS reconstruction can be roughly classified into two categories: (1) optimization-based algorithms and (2) greedy search algorithms. In this paper, we propose an algorithm called preconditioned generalized orthogonal matching pursuit (Pre-gOMP) to improve the recovery performance. We provide a sufficient condition for exact recovery via the Pre-gOMP algorithm, which states that if the mutual coherence of the preconditioned sampling matrix Φ satisfies μ(Φ) < 1/(SK − S + 1), then the Pre-gOMP algorithm exactly recovers any K-sparse signal from the compressed samples, where S (> 1) is the number of indices selected in each iteration of Pre-gOMP. We also apply the Pre-gOMP algorithm to ghost imaging. Our experimental results demonstrate that Pre-gOMP largely improves the imaging quality of ghost imaging, while boosting the imaging speed.

One class of optimization-based approaches relaxes the ℓ0-norm to the ℓ1-norm and solves the convex optimization problem:

min_x ||x||_1 subject to y_0 = Ax.    (3)
A well-known algorithm for solving (3) is basis pursuit (BP) [2], which can reliably recover sparse signals under appropriate conditions on the sampling matrix [2]. On the other hand, greedy search algorithms have received considerable attention due to their computational simplicity. Examples include matching pursuit (MP) [6], orthogonal matching pursuit (OMP) [7], and orthogonal least squares (OLS) [8]. To improve the computational efficiency and recovery performance, there have also been many studies on modifications of OMP. As a representative variant, generalized OMP (gOMP) [9] chooses the S columns of A that are maximally correlated with the residual vector at each iteration, which exhibits computational advantages over the conventional OMP algorithm. As mentioned, the properties of the sampling matrix have a great influence on the recovery performance. To evaluate the quality of A, the mutual coherence has been widely used [10], which is defined as

μ(A) = max_{i ≠ j} |⟨a_i, a_j⟩| / (||a_i||_2 ||a_j||_2),

where a_i denotes the ith column of A. Generally speaking, a smaller μ contributes to better performance on signal recovery. One way to reduce the mutual coherence is to multiply both sides of (2) by a matrix P, i.e.,

min_x ||x||_0 subject to Py_0 = PAx.
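The mutual coherence defined above can be computed directly from the normalized Gram matrix. The following NumPy sketch is ours (not from the paper) and is only meant to make the definition concrete:

```python
import numpy as np

def mutual_coherence(A):
    """Largest absolute inner product between distinct l2-normalized columns of A."""
    G = A / np.linalg.norm(A, axis=0, keepdims=True)  # normalize each column
    gram = np.abs(G.T @ G)                            # |<a_i, a_j>| for all pairs
    np.fill_diagonal(gram, 0.0)                       # exclude the i == j terms
    return gram.max()

# Orthonormal columns give zero coherence; a repeated column gives coherence 1.
I = np.eye(4)
print(mutual_coherence(I))                            # 0.0
print(mutual_coherence(np.hstack([I, I[:, :1]])))     # 1.0
```

The two sanity checks bracket the possible values: μ ∈ [0, 1], with μ = 0 for an orthonormal basis and μ = 1 when two columns are parallel.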
In doing so, we wish μ(PA) to be smaller than that of the original matrix. This operation is commonly referred to as preconditioning in numerical linear algebra, where the matrix P is called the preconditioner [11]. There is much evidence that preconditioning is useful for improving the recovery quality of sparse signals.
In this paper, we propose a preconditioned gOMP (Pre-gOMP) algorithm for the recovery of sparse signals. As shown in Algorithm 1, the Pre-gOMP algorithm consists of (i) a preconditioning step and (ii) a conventional signal reconstruction step. The primary contributions of this paper are summarized as follows: 1. Based on the mutual coherence framework, we develop a sufficient condition for the Pre-gOMP algorithm. Specifically, we show that μ(Φ) < 1/(SK − S + 1) (S > 1) is sufficient for Pre-gOMP to exactly recover any K-sparse vector in K iterations. 2. To evaluate the recovery performance of the Pre-gOMP algorithm, we apply it to imaging objects in the application of ghost imaging (GI). Our experimental results reveal that the Pre-gOMP algorithm largely improves the imaging quality compared to existing methods.
The rest of this paper is organized as follows. In Section 2, we introduce the Pre-gOMP algorithm and analyze it under the mutual coherence framework. Section 3 describes the simulation settings and the setup of GI. Section 4 presents simulated and experimental results for the proposed algorithm. We conclude our work in Section 5.

Notations
Let Ω = {1, · · · , n}. T = supp(x) = {i ∈ Ω | x_i ≠ 0} is the support set of x. S ⊆ Ω is the set of indices selected in each iteration and |S| is the cardinality of S. T\S = {i | i ∈ T, i ∉ S}. Λ^k = Λ^{k−1} ∪ S is the estimated support set at the kth iteration of Pre-gOMP.
x_S ∈ R^{|S|} is the subvector of x indexed by S. Similarly, Φ_S ∈ R^{m×|S|} is the submatrix of Φ that contains the columns of Φ indexed by S. If Φ_S has full column rank, then P_S = Φ_S (Φ_S^T Φ_S)^{−1} Φ_S^T is the projection matrix onto span(Φ_S), and P⊥_S = I − P_S is the projection matrix onto the orthogonal complement of span(Φ_S), where I is the identity matrix.

The Pre-gOMP algorithm
As mentioned, the Pre-gOMP algorithm consists of two parts: (i) the preconditioning operation and (ii) the signal reconstruction step. The preconditioning operation aims to reduce the mutual coherence of the sampling matrix. In this paper, we adopt the operation in [12], in which the preconditioner P is given in closed form as

P = (AA^T)^{−1/2},

which has been shown to be very effective in improving the mutual coherence. Interested readers are referred to [12] for a detailed description and theoretical analysis of the preconditioner. A similar treatment has also been proposed in [13].
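The closed form above can be evaluated with an eigendecomposition of AAᵀ. A minimal NumPy sketch (ours, assuming A has full row rank), which also checks the tight-frame property discussed in the next subsection:

```python
import numpy as np

def preconditioner(A):
    """P = (A A^T)^(-1/2) via eigendecomposition; assumes A has full row rank."""
    w, V = np.linalg.eigh(A @ A.T)        # A A^T is symmetric positive definite
    return V @ np.diag(w ** -0.5) @ V.T   # inverse matrix square root

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 8))
Phi = preconditioner(A) @ A
# The preconditioned matrix is a Parseval tight frame: Phi Phi^T = I.
print(np.allclose(Phi @ Phi.T, np.eye(4)))  # True
```

The eigendecomposition route avoids forming an explicit matrix square root library call and makes the assumption (positive eigenvalues of AAᵀ) visible.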
In the signal reconstruction step, the gOMP algorithm is used, with the preconditioned samples y = Py_0 and the preconditioned sampling matrix Φ = PA as inputs. We would like to mention two advantages of the Pre-gOMP algorithm. Firstly, the preconditioning operation leads to a reduction of the mutual coherence, which is useful for promoting the recovery accuracy. Secondly, the signal reconstruction step can be very efficient because the gOMP algorithm essentially identifies support indices of x in parallel, as pointed out in [9]. This computational benefit is no doubt helpful in the application of GI.
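The two-step structure can be sketched as follows. This is our own minimal illustration (function name, stopping rule, and the closed-form preconditioner are our assumptions; the paper's Algorithm 1 may differ in details): precondition once, then repeat the gOMP selection of the S most correlated columns, a least-squares fit on the enlarged support, and a residual update.

```python
import numpy as np

def pre_gomp(A, y0, K, S=2, tol=1e-10):
    """Sketch of Pre-gOMP: precondition, then run gOMP for at most K iterations."""
    w, V = np.linalg.eigh(A @ A.T)
    P = V @ np.diag(w ** -0.5) @ V.T           # preconditioner (A A^T)^(-1/2)
    Phi, y = P @ A, P @ y0                     # preconditioned matrix and samples
    support = np.array([], dtype=int)
    r = y
    for _ in range(K):
        # Identify the S columns most correlated with the current residual.
        picks = np.argsort(np.abs(Phi.T @ r))[-S:]
        support = np.union1d(support, picks)
        # Least squares on the enlarged support, then residual update.
        xs, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        r = y - Phi[:, support] @ xs
        if np.linalg.norm(r) < tol:
            break
    x = np.zeros(A.shape[1])
    x[support] = xs
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((64, 128))
x_true = np.zeros(128)
x_true[[3, 40, 99]] = [1.0, -2.0, 0.5]
x_hat = pre_gomp(A, A @ x_true, K=3, S=2)
print(np.linalg.norm(x_hat - x_true))
```

With S indices added per iteration, at most SK columns are examined by least squares, which is where the parallel-identification speedup over OMP comes from.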
The analysis of Pre-gOMP algorithm consists of two parts: (i) the reduction on the mutual coherence after preconditioning and (ii) the sufficient condition analysis for Pre-gOMP in terms of mutual coherence.

The reduction of μ after preconditioning
Lemma 1 (Preconditioning [12]) Given a sampling matrix A ∈ R^{m×n} with m ≤ n and full row rank, the preconditioned matrix Φ = PA with P = (AA^T)^{−1/2} is a Parseval tight frame.
One can interpret from Lemma 1 that the preconditioned sampling matrix PA has identical non-zero singular values. As stated in [14], the larger the smallest non-zero singular value of a matrix is, the smaller its mutual coherence tends to be. Therefore, the preconditioner P can be useful in improving the mutual coherence.
To test the effectiveness of the preconditioning method, we perform a simulation and an experiment. In our simulation, random negative exponential sampling matrices, which are commonly used in GI [15], are considered. The entries of the random negative exponential sampling matrix are drawn independently from the negative exponential distribution p(I) = e^{−I}, I ≥ 0. The size of the sampling matrix is m × n with fixed n = 256, and m ranges from 10 to 256. For each sampling number, 500 independent trials are performed, and the mean mutual coherence of the matrix is calculated. In Fig. 1, we plot the mutual coherence as a function of the sampling rate r = m/n: the blue pentagram line describes μ(A) as a function of r, while the red circle line represents μ(PA) (denoted as the optimized matrix in Fig. 1). It is observed that μ decreases as the sampling rate r increases. In particular, μ(PA) is uniformly smaller than μ(A) over the whole range of sampling rates, which clearly validates the effectiveness of the preconditioning method.
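A single trial of this comparison can be reproduced in a few lines (a sketch under our naming; the paper averages 500 trials per sampling rate, here we show one matrix at m = 128, n = 256):

```python
import numpy as np

def mutual_coherence(A):
    """Largest absolute inner product between distinct l2-normalized columns."""
    G = A / np.linalg.norm(A, axis=0, keepdims=True)
    gram = np.abs(G.T @ G)
    np.fill_diagonal(gram, 0.0)
    return gram.max()

rng = np.random.default_rng(0)
m, n = 128, 256
A = rng.exponential(scale=1.0, size=(m, n))   # negative exponential entries, exp(1)
w, V = np.linalg.eigh(A @ A.T)
Phi = (V @ np.diag(w ** -0.5) @ V.T) @ A      # preconditioned (optimized) matrix
print(mutual_coherence(A), mutual_coherence(Phi))
```

Because the exponential entries are all positive, the raw columns are strongly correlated and μ(A) is close to 1; the preconditioned matrix has a markedly smaller coherence, mirroring the gap between the two curves in Fig. 1.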

Theorem 1 Let Φ = PA ∈ R^{m×n} be the preconditioned sampling matrix. Then, Pre-gOMP exactly recovers any K-sparse signal x ∈ R^n from its preconditioned samples y = Φx under

μ(Φ) < 1/(2K − 1) for S = 1, and μ(Φ) < 1/(SK − S + 1) for S ≥ 2,    (7)

where S (≥ 1) is the number of indices selected in each iteration of the Pre-gOMP algorithm.

Remark 1 When S = 1, the sufficient condition for the Pre-gOMP algorithm is the same as that for the OMP algorithm [16]. When S ≥ 2, the bound in (7) decreases monotonically in S. Namely, the larger S is, the more restrictive the requirement on the preconditioned sampling matrix becomes. Nevertheless, a large S can largely reduce the number of iterations, which is useful for reducing the computational cost of the algorithm.

Proof of Theorem 1
Before proving Theorem 1, we give some lemmas that are useful in the proof.
Now, we proceed to prove Theorem 1 via mathematical induction, following a strategy similar to [9, 20, 21] but extended to the mutual coherence framework. Suppose that the Pre-gOMP algorithm has performed k iterations successfully, i.e., Λ^k contains at least k correct indices. Then, after the kth iteration, the residual is

r^k = P⊥_{Λ^k} y = P⊥_{Λ^k} Φ_{T\Λ^k} x_{T\Λ^k}.

Let β_1 be the maximal absolute inner product between the residual r^k and the correct atoms φ_i, i ∈ T. Let α_i, i = 1, 2, · · · , S, be the S largest absolute inner products between the residual r^k and the incorrect atoms φ_i, i ∈ T^c, arranged in descending order of magnitude (α_1 ≥ α_2 ≥ · · · ≥ α_S). Following the strategy in [12], we build the sufficient condition by showing that

β_1 > α_S,    (11)

which guarantees that at least one correct atom is selected at the (k + 1)-th iteration of Pre-gOMP.
In the (k + 1)-th iteration (0 ≤ k ≤ K − 1), we first lower bound β_1. Since r^k is orthogonal to span(Φ_{Λ^k}), we have ⟨Φ_{T\Λ^k} x_{T\Λ^k}, r^k⟩ = ⟨y, r^k⟩ = ||r^k||²_2, and hence

β_1 = max_{i ∈ T} |⟨φ_i, r^k⟩| ≥ ||r^k||²_2 / ||x_{T\Λ^k}||_1.    (15)

Our next job is to upper bound α_S. Before doing so, we observe that, since α_1, · · · , α_S are the S largest absolute correlations between r^k and the incorrect atoms,

α_S ≤ (1/S) Σ_{i=1}^S α_i = (1/S) ||Φ_{U^k}^T r^k||_1,    (16)

where U^k := arg max_{s ⊂ Ω\(T∪Λ^k), |s|=S} ||Φ_s^T r^k||_1. Expanding r^k = Φ_{T\Λ^k} x_{T\Λ^k} − P_{Λ^k} Φ_{T\Λ^k} x_{T\Λ^k} in (16) and bounding each inner product by the mutual coherence μ(Φ) yields an upper bound on α_S. Combining this upper bound with (15), the condition (11) reduces to a requirement on μ(Φ) that depends on |T\Λ^k| and |Λ^k|. Since |T\Λ^k| ≤ K − k and |Λ^k| = Sk, the requirement is satisfied for all 0 ≤ k ≤ K − 1 provided that

μ(Φ) < 1/(2K − 1)    (27)

when S = 1, and

μ(Φ) < 1/(SK − S + 1)    (29)

when S ≥ 2. Finally, combining (27) with (29) gives the sufficient condition for Pre-gOMP stated in Theorem 1.

Experiments
In this section, we carry out simulations and experiments to test the performance of Pre-gOMP. We also apply the Pre-gOMP algorithm to recover image signals in the application of GI.

Simulation experiments
In the simulation, we use the testing strategy in [22, 23], which measures the effectiveness of recovery algorithms by checking the empirical frequency of exact reconstruction in the noiseless case. For comparison, we adopt the OMP, iterative hard thresholding (IHT) [24], iterative soft thresholding (IST) [25], BP, gOMP, and Pre-gOMP algorithms to recover signals. In each trial, we construct an m × n (m = 128 and n = 256) random negative exponential sampling matrix with entries drawn independently from the negative exponential distribution exp(1). Moreover, we generate a K-sparse vector x whose support is chosen at random. Three types of sparse signals are taken into account: (i) sparse Gaussian signals, (ii) sparse pulse amplitude modulation (PAM) signals, and (iii) sparse two-valued signals, whose non-zero elements are selected from N(0, 1), {±1, ±3}, and {0, 255}, respectively.

Figure 2 presents a typical schematic of computational ghost imaging [26]. A light-emitting diode (LED) with wavelength λ = 532 nm is used as the light source. The light beam is uniformly projected onto the digital micromirror device (DMD) by means of Köhler illumination through the Köhler illumination lens. Here, a series of desired random coded patterns is prebuilt on the DMD, which controls the direction of light with its micromirrors; the gray value of a light pattern is achieved by controlling the integration time of the DMD. Then, the coded patterns are projected onto the object through the emission lens, and photons transmitted through the object are gathered by a bucket detector through a conventional imaging lens. In the GI system, the DMD consists of 1024 × 768 pixels, each of size 13 μm, and only the 252 × 252 pixels at the center of the DMD are used to generate the coded patterns. The size of the object is 9 mm × 9 mm. The object's image O_GI(x_r, y_r) can be retrieved via the conventional GI algorithm and the CS algorithm, respectively.
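The three types of sparse test signals used in the simulation can be generated as follows (a sketch under our own naming; for the two-valued case we draw support values from {0, 255} as stated, so the effective sparsity can fall below K when 0 is drawn):

```python
import numpy as np

def sparse_signal(n, K, kind, rng):
    """K-sparse test vector with uniformly random support (our sketch)."""
    x = np.zeros(n)
    support = rng.choice(n, size=K, replace=False)
    if kind == "gaussian":                           # non-zeros from N(0, 1)
        x[support] = rng.standard_normal(K)
    elif kind == "pam":                              # non-zeros from {-3, -1, +1, +3}
        x[support] = rng.choice([-3, -1, 1, 3], size=K)
    elif kind == "two-valued":                       # support values from {0, 255}
        x[support] = rng.choice([0, 255], size=K)
    return x

rng = np.random.default_rng(0)
for kind in ("gaussian", "pam", "two-valued"):
    x = sparse_signal(256, 10, kind, rng)
    print(kind, np.count_nonzero(x))
```

Feeding all three signal families to the same set of solvers is what produces the per-type frequency-of-exact-reconstruction curves in Fig. 3.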
Setup of GI

In the GI algorithm, O_GI(x_r, y_r) is retrieved from the correlation between the test-arm light intensity B_s and the reference-arm light intensity I_r^s(x_r, y_r) [27]:

O_GI(x_r, y_r) = ⟨B_s I_r^s(x_r, y_r)⟩ − ⟨B_s⟩⟨I_r^s(x_r, y_r)⟩,

where s denotes the sth sampling, m is the total sampling number, ⟨·⟩ denotes the ensemble average over the m samplings, and ⟨I_r^s(x_r, y_r)⟩ = Σ_{s=1}^m I_r^s(x_r, y_r)/m represents the ensemble average of I_r^s(x_r, y_r). In the CS algorithm, O_GI(x_r, y_r) is recovered by solving the problem

min_O ||O||_1 subject to y = AO,

where y = [B_1, · · · , B_s, · · · , B_m]^T ∈ R^m denotes the sampling vector, the object O is reshaped into a column vector, and the sampling matrix A is formed from the coded patterns I_r^s(x_r, y_r), s = 1, · · · , m, each reshaped into one row of A.
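The second-order correlation above can be illustrated on synthetic data. The following sketch (ours; object, pattern statistics, and sample count are arbitrary choices) simulates speckle-like patterns, the bucket values they would produce, and the GI correlation estimate:

```python
import numpy as np

def gi_reconstruct(B, patterns):
    """Conventional GI estimate  <B_s I_s> - <B_s><I_s>  over m samplings."""
    B = np.asarray(B, dtype=float)
    corr = np.tensordot(B, patterns, axes=1) / len(B)  # <B_s I_s(x, y)>
    return corr - B.mean() * patterns.mean(axis=0)     # subtract <B_s><I_s>

rng = np.random.default_rng(0)
obj = np.zeros((8, 8))
obj[2:6, 2:6] = 1.0                              # toy transmissive object
patterns = rng.exponential(size=(4000, 8, 8))    # speckle-like illumination
B = (patterns * obj).sum(axis=(1, 2))            # bucket detector values
img = gi_reconstruct(B, patterns)
# Pixels on the object correlate with B; background pixels do not.
print(img[3, 3] > img[0, 0])  # True
```

The same `B` and `patterns` arrays are exactly the `y` and (row-reshaped) sampling matrix of the CS formulation, which is where Pre-gOMP plugs in.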

Results and discussion
In Fig. 3, we perform 500 independent trials for each sparsity level and plot the empirical frequency of exact reconstruction as a function of the sparsity level. In the gOMP algorithm, we choose S = 3, 5 in our simulation. Figure 3a, b, and c show the recovery performance for sparse Gaussian signals, sparse PAM signals, and sparse two-valued signals, respectively. The results reveal that the critical sparsity of the Pre-gOMP algorithm is larger than that of the gOMP algorithm, which implies that the preconditioning method indeed promotes the recovery performance. It can also be observed that the Pre-gOMP algorithm outperforms the OMP and the thresholding algorithms. Even when compared with BP, Pre-gOMP still shows quite competitive recovery performance. Overall, we observe that Pre-gOMP is effective for recovering all three types of sparse signals.

In the GI imaging experiment, we adopt the GI, differential GI (DGI) [27], pseudo-inverse GI (PGI) [28], gOMP, gradient projection for sparse reconstruction (GPSR) [29], BP, and Pre-gOMP algorithms to recover the image of the object and compare their performances. The reconstruction results of different objects via the different algorithms are shown in Fig. 4, where results at selected sampling rates are presented: the sampling rate is labeled in the left column, the original objects are shown in the right column, and the reconstruction results of the different algorithms are compared in the intermediate columns. It is observed that the reconstruction quality of GI is significantly improved by the Pre-gOMP algorithm. Figure 5 shows the recovery performance with respect to different sampling rates.
To quantitatively measure the recovery quality, the peak signal-to-noise ratio (PSNR) is adopted, which is defined as

PSNR = 10 log_10(MAX_I² / MSE),

where MAX_I is the maximum value in the reconstructed image and MSE is the mean square error between the reconstructed image and the original object. A larger PSNR generally implies better recovery quality. As observed in Fig. 5, the PSNR increases with the sampling rate, and the Pre-gOMP algorithm indeed improves the recovery quality of GI compared with the other algorithms. In particular, the recovery quality via Pre-gOMP is slightly better than that via BP within a range of sampling rates, and is improved by over 2 dB compared with gOMP. Figure 6 reports the running times of the different recovery algorithms, measured with a MATLAB program on a quad-core 64-bit processor under Windows 10. In Fig. 6, the result of the GI algorithm is not included because the computational complexity of GI is similar to that of DGI. Overall, it is observed that the running times of DGI and PGI are smaller than those of the CS algorithms, and that the Pre-gOMP (S = 2) algorithm is faster than the BP, gOMP (S = 3), and GPSR algorithms. Both simulation and experimental results demonstrate that the Pre-gOMP algorithm exhibits competitive reconstruction performance while maintaining a fast running time.
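The PSNR metric as defined here (note that MAX_I is taken from the reconstructed image, as stated above) is straightforward to compute; a short sketch with a hand-checkable example:

```python
import numpy as np

def psnr(recon, original):
    """PSNR = 10 log10(MAX_I^2 / MSE), with MAX_I from the reconstructed image."""
    r = np.asarray(recon, dtype=float)
    o = np.asarray(original, dtype=float)
    mse = np.mean((r - o) ** 2)                 # mean square error
    return 10 * np.log10(r.max() ** 2 / mse)

a = np.full((4, 4), 255.0)     # stand-in for the original object
b = a.copy()
b[0, 0] = 245.0                # one pixel off by 10 -> MSE = 100/16 = 6.25
print(round(psnr(b, a), 2))    # 40.17
```

Since PSNR is logarithmic, the reported >2 dB gain of Pre-gOMP over gOMP corresponds to roughly a 37% reduction in MSE.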

Conclusion
In this paper, we have proposed an algorithm called Pre-gOMP for the recovery of sparse signals. Using the mutual coherence framework, we have developed a sufficient condition for Pre-gOMP to exactly reconstruct any K-sparse signal. It is shown that if the mutual coherence μ of the preconditioned sampling matrix satisfies μ < 1/(KS − S + 1) (S > 1), then the Pre-gOMP algorithm perfectly recovers any K-sparse signal from its preconditioned samples. Furthermore, we have applied the Pre-gOMP algorithm to recover image signals in the application of GI. Our experimental results demonstrate that Pre-gOMP can largely improve the imaging quality of GI, while boosting the recovery speed.