Cosparsity-based Stagewise Matching Pursuit algorithm for reconstruction of the cosparse signals
EURASIP Journal on Advances in Signal Processing volume 2015, Article number: 101 (2015)
Abstract
The cosparse analysis model has been introduced as an interesting alternative to the standard sparse synthesis model. Given a set of corrupted measurements, finding a signal belonging to this model is known as analysis pursuit, which is an important problem in analysis-model-based sparse representation. Several pursuit methods have already been proposed, such as the methods based on l _{1}-relaxation and greedy approaches based on the cosparsity of the signal. This paper presents a novel greedy-like algorithm, called Cosparsity-based Stagewise Matching Pursuit (CSMP), where the cosparsity of the target signal is estimated adaptively with a stagewise approach composed of forward and backward processes. In the forward process, the cosparsity is estimated and the signal is approximated, followed by the refinement of the cosparsity and the signal in the backward process. As a result, the target signal can be reconstructed without prior information on the cosparsity level. Experiments show that the performance of the proposed algorithm is comparable to those of l _{1}-relaxation and Analysis Subspace Pursuit (ASP)/Analysis Compressive Sampling Matching Pursuit (ACoSaMP) in the noiseless case and better than that of Greedy Analysis Pursuit (GAP) in the noisy case.
Introduction
Many high-dimensional signals, such as natural images, can be represented in a low-dimensional space. A variety of methods have been proposed in the past for low-dimensional representations of high-dimensional signals. Sparse representation is one such technique that has been studied extensively. The problem can be described as recovering an unknown high-dimensional signal x from a low-dimensional signal y having a limited set of measurements:

\( \boldsymbol{y}=\boldsymbol{M}\boldsymbol{x}+\boldsymbol{e} \kern2em (1) \)
where M ∈ ℜ ^{m × d} is a known linear operator and e ∈ ℜ ^{m} is additive noise bounded by \( \parallel \boldsymbol{e}{\parallel}_2^2\le {\varepsilon}^2 \). In the noiseless case, e is set to the zero vector. One such example is the problem of compressed sensing, where M is a measurement matrix. For m < d, this is an ill-posed underdetermined problem and thus has an infinite number of solutions; extra priors or constraints need to be imposed on the model in order to limit the range of possible solutions for x.
Sparsity is usually considered an effective prior in both the synthesis and analysis models [1]. The synthesis model assumes that x can be represented as x = Dz, where D ∈ ℜ ^{d × n} is an overcomplete dictionary with n > d and z is the sparse representation coefficient vector. The original signal x can be recovered by solving the following optimization problem:

\( \underset{\boldsymbol{z}}{ \min }{\left\Vert \boldsymbol{z}\right\Vert}_0\kern1em \mathrm{s}.\mathrm{t}.\kern1em {\left\Vert \boldsymbol{y}-\boldsymbol{M}\boldsymbol{D}\boldsymbol{z}\right\Vert}_2\le \varepsilon \kern2em (2) \)
where ‖ ⋅ ‖_{0} is the l _{0} pseudo-norm, counting the number of nonzero elements in its argument vector.
Since solving (2) is a nondeterministic polynomial time (NP)-hard problem [2], many approximation techniques have been proposed to recover x. Basis pursuit (BP) [3], which is based on l _{1}-minimization using linear programming (LP), is a well-known reconstruction algorithm. Another option for approximating (2) is to use a family of greedy-like algorithms, such as Orthogonal Matching Pursuit [4] or the thresholding techniques [5–10].
In the analysis model, recovering x from the incomplete measurements is achieved by solving the following minimization problem [11]:

\( \underset{\boldsymbol{x}}{ \min }{\left\Vert \boldsymbol{\Omega} \boldsymbol{x}\right\Vert}_0\kern1em \mathrm{s}.\mathrm{t}.\kern1em {\left\Vert \boldsymbol{y}-\boldsymbol{M}\boldsymbol{x}\right\Vert}_2\le \varepsilon \kern2em (3) \)
where Ω ∈ ℜ ^{p × d} is a fixed analysis operator, which is also referred to as the analysis dictionary. Typically, the dimensions are m ≤ d ≤ p, n. In the analysis model, the cosparsity l (the term is used to distinguish it from "sparsity" in the synthesis model) is defined as

\( l=p-{\left\Vert \boldsymbol{\Omega} \boldsymbol{x}\right\Vert}_0 \kern2em (4) \)
The role of cosparsity in the analysis model is similar to the role of sparsity in the synthesis model. The level of sparsity in the synthesis model indicates the number of nonzeros in the representation vector z in (2), while in the analysis model, the cosparsity l is used to indicate the number of zeros in the vector Ω x, as defined in (4). In other words, the quantity l denotes the number of rows of Ω that are orthogonal to the signal.
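As a concrete numerical illustration of definition (4), the cosparsity and cosupport of a signal can be read off directly from Ω x. The following NumPy sketch is our own illustration (a numerical tolerance stands in for exact zero tests), not code from the paper:

```python
import numpy as np

def cosparsity(Omega, x, tol=1e-10):
    """l = p - ||Omega x||_0: number of rows of Omega orthogonal to x."""
    return int(np.sum(np.abs(Omega @ x) < tol))

def cosupport(Omega, x, tol=1e-10):
    """Index set {i : Omega_i x = 0} (up to the tolerance)."""
    return np.flatnonzero(np.abs(Omega @ x) < tol)

# A constant signal is annihilated by finite-difference rows.
Omega = np.array([[1.0, -1.0, 0.0],   # row 0: x1 - x2
                  [0.0, 1.0, -1.0],   # row 1: x2 - x3
                  [1.0, 0.0, 1.0]])   # row 2: x1 + x3
x = np.ones(3)
l = cosparsity(Omega, x)              # rows 0 and 1 vanish on x, so l = 2
```

Here rows 0 and 1 form the cosupport of x, while row 2 does not annihilate it.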
Solving problem (3) is NP-complete, just as in the synthesis case; thus, approximation methods are required for reconstructing x. As before, l _{1}-relaxation [13, 14] can be used to replace the l _{0} pseudo-norm-based optimization, and the problem can then be solved by linear programming. With l _{1}-relaxation, only a small number of measurements are required to achieve a high reconstruction rate. However, the computational complexity of this method may limit its practical use in large-scale applications. The restricted isometry property (RIP) is commonly used by both the synthesis- and analysis-based algorithms to govern the recovery condition of the sparse or cosparse signals. The measurement matrix M has the Ω-RIP property with a constant δ _{ l } if δ _{ l } is the smallest constant that satisfies

\( \left(1-{\delta}_l\right){\left\Vert \boldsymbol{v}\right\Vert}_2^2\le {\left\Vert \boldsymbol{M}\boldsymbol{v}\right\Vert}_2^2\le \left(1+{\delta}_l\right){\left\Vert \boldsymbol{v}\right\Vert}_2^2 \kern2em (5) \)
whenever Ωv has at least l zeros [12].
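The exact Ω-RIP constant is intractable to compute, but the quantity it bounds can be probed empirically. The sketch below is our own Monte-Carlo illustration (the sampled value is only a lower estimate of the true δ _{ l }): it draws random l-cosparse vectors from null spaces of row submatrices of Ω and records how far ‖Mv‖²/‖v‖² deviates from 1:

```python
import numpy as np

rng = np.random.default_rng(1)
m, d, p, l = 15, 20, 24, 16
M = rng.standard_normal((m, d)) / np.sqrt(m)   # scaled Gaussian matrix
Omega = rng.standard_normal((p, d))

ratios = []
for _ in range(200):
    # Random cosupport of size l, and a random vector in its null space.
    Lam = rng.choice(p, size=l, replace=False)
    _, _, Vt = np.linalg.svd(Omega[Lam])
    null_basis = Vt[l:].T                      # here: d - l = 4 directions
    v = null_basis @ rng.standard_normal(null_basis.shape[1])
    ratios.append(float(v @ (M.T @ (M @ v)) / (v @ v)))

# Empirical lower bound on delta_l over the sampled cosparse vectors.
delta_lower = max(1.0 - min(ratios), max(ratios) - 1.0)
```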
Another popular class of cosparse reconstruction algorithms is based on the idea of iterative greedy pursuit, such as Greedy Analysis Pursuit (GAP) [11, 15, 16]. Compared with l _{1}-relaxation, GAP has better reconstruction performance and, to some degree, a lower computational complexity. Analysis Iterative Hard Thresholding (AIHT) and Analysis Hard Thresholding Pursuit (AHTP) [12, 17, 18] have been proposed by incorporating the idea of backtracking, which enables the wrong cosupports obtained in the previous iteration to be pruned in the current iteration and offers strong theoretical guarantees. Experiments show that both of them recover the signal faster than the GAP algorithm. Nevertheless, they require a relatively large number of measurements for exact reconstruction.
Recently, more sophisticated greedy algorithms have been developed, such as Analysis Subspace Pursuit (ASP) and Analysis Compressive Sampling Matching Pursuit (ACoSaMP) [12, 19]. They employ the backtracking strategy and offer strong theoretical guarantees. ASP and ACoSaMP with a candidate set size of 2l − p have good performance on reconstructing the signal when l is close to d, but they require more measurements for an exact reconstruction with an increasing level of cosparsity. ASP and ACoSaMP with a candidate set size of l provide a comparable reconstruction quality to that of the l _{1}-relaxation methods with a lower reconstruction complexity. Other recent methods include a Bayesian method [20] where the model parameters are estimated by a Bayesian algorithm for the reconstruction of the signal under consideration.
Although all these greedy pursuit methods achieve signal reconstruction with a high accuracy, they require the cosparsity l to be known a priori for signal recovery. However, l may not be available in many practical applications. For example, most natural image signals are only cosparse when represented by an analysis operator such as a two-dimensional Fourier transform. It is difficult to define a cosparsity that exactly matches the signal under consideration. An inaccurate cosparsity may degrade the performance of the signal reconstruction algorithm, as demonstrated in the next section.
In this paper, a new greedy algorithm named Cosparsity-based Stagewise Matching Pursuit (CSMP) is proposed for the case where l is unknown. By analyzing the projection from the signal under consideration to the analysis operator, CSMP estimates the cosparsity with a preset step size stage by stage, without knowledge of the cosparsity. The cosupport and measurement residual are estimated alternately in the forward stage and fine-tuned in the backward stage, and the signal approximation is obtained at the end of the procedure. Our experiments show that the proposed algorithm has a reconstruction performance comparable to ASP and ACoSaMP, but without the knowledge of the cosparsity.
This paper is organized as follows. In Section 2, we present the motivation of this work. The CSMP algorithm is detailed in Section 3 and a theoretical analysis to the algorithm is also provided. The simulation results are given in Section 4, followed by concluding remarks and future work in Section 5.
Motivation
In analysis cosparse representation, it is important to establish the cosupport accurately for an exact signal reconstruction. A common approach is to adopt the "correlation" term defined as follows:

\( {\alpha}_i=\left|{\boldsymbol{\Omega}}_i{\boldsymbol{M}}^T\boldsymbol{y}\right| \kern2em (6) \)
where M ^{T} y resembles the original signal, Ω _{ i } denotes the ith row of Ω, the superscript T denotes matrix transpose, and |⋅| takes the absolute value of its argument. Obviously, a growing α _{ i } implies an increase in the correlation between the signal and Ω _{ i }. When the signal is orthogonal to Ω _{ i } and M = I, we get α _{ i } = 0, where I is an identity matrix. The existing recovery methods based on the analysis model can be categorized into bottom-up and top-down methods. The bottom-up methods, such as GAP, prune one or more rows of the analysis operator which correspond to the entries of the largest correlation in each iteration. This could lead to unreliable reconstruction if cosupports have been removed incorrectly. In contrast, the top-down methods, such as ACoSaMP and ASP [12], employ the backtracking technique to establish the cosupports. Although such a cosupport refinement technique improves the performance of ACoSaMP and ASP significantly, it requires the cosparsity l a priori for an exact recovery of the target signal. In practical applications, such information is often unavailable and must be preset in advance. If l is set to an inappropriate value, the performance of the algorithm for signal recovery, in terms of both accuracy and robustness, can be degraded significantly.
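To illustrate (6) in the M = I case mentioned above, the following sketch (our own example, not from the paper) builds a signal orthogonal to the first four rows of a random Ω and confirms that the correlation vanishes exactly on those rows:

```python
import numpy as np

rng = np.random.default_rng(0)
d, p = 10, 12
Omega = rng.standard_normal((p, d))

# Construct x orthogonal to rows 0..3 of Omega via the null space of
# that submatrix (right singular vectors beyond the rank).
_, _, Vt = np.linalg.svd(Omega[:4])
x = Vt[4:].T @ rng.standard_normal(d - 4)

M = np.eye(d)                      # identity measurement, as in the text
y = M @ x
alpha = np.abs(Omega @ (M.T @ y))  # correlation alpha_i = |Omega_i M^T y|
# alpha[:4] is (numerically) zero; those rows belong to the cosupport.
```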
To see this, we first perform an experiment with the ASP algorithm when l is not given accurately. Here, the analysis operator is a two-dimensional finite difference operator Ω ∈ ℜ^{p × d}, where p = 144 and d = 120; x is a Gaussian random signal of length d = 120 and a cosparsity of l = 90.^{Footnote 1} In this experiment, the sampling rate δ, defined as δ = m / d [12], is chosen from the set {0.50, 0.54, 0.58, 0.62, 0.66, 0.70}. The cosparsity l _{est}, which denotes the estimated cosparsity of the signal, is chosen from the set {110, 100, 90, 80, 70}. We draw a phase transition diagram [12] for this algorithm.
The vertical and horizontal axes of the diagram are l _{est} and δ, respectively. For each pair of l _{est} and δ, we repeat the experiment 50 times. In each experiment, we check whether we have an exact reconstruction in the sense that the energy of the difference between the reconstruction and the original signal is smaller than 10^{−6} [12]. Note that the choice of this threshold value follows the existing algorithms in the literature, such as GAP, ASP, ACoSaMP, AIHT, and AHTP, for fair comparisons to be demonstrated in our experiments later. White cells in the diagram denote a completely exact reconstruction, and black cells denote a total failure in the exact reconstruction. Figure 1 shows how the probability of exact reconstruction changes with respect to the sampling rate δ and the cosparsity l _{est}. We can see that the performance of ASP drops significantly if an inaccurate cosparsity l is used.
Cosparsity-based Stagewise Matching Pursuit
Algorithm description
To address the above issue, we propose a novel greedy algorithm for blind cosparse reconstruction, where the cosupports are refined iteratively and the information on cosparsity is extracted automatically. The proposed CSMP algorithm, as shown in Algorithm 1, is composed of two processes, namely, the forward and backward processes. The forward process estimates the cosparsity, constructs the cosupport starting from a cosupport with all rows in Ω, and updates the measurement residual simultaneously. The procedure ends with a backward process that tries to add the rows in Ω with a smaller correlation until the terminating condition is reached. The terminating condition for CSMP is controlled by a threshold, which ensures that the estimated cosparsity is fairly close to the actual one and the target signal has been well reconstructed. The main steps of CSMP are summarized in Algorithm 1.
In Algorithm 1, cosupp(x) = {i: Ω _{ i } x = 0}; cosupp(x, l _{est}) returns the index set of the l _{est} smallest (in absolute value) elements of Ω x; 2l _{est} − p is the size of Γ; for a vector α, the function Min(α, 2l _{est} − p) returns the 2l _{est} − p indices corresponding to the 2l _{est} − p smallest values of α; index(Ω x, q) returns the q indices from the (l _{est} + 1)th to the (l _{est} + q)th smallest (in absolute value) entries of Ω x; Γ, Λ ^{Δ}, \( {\tilde{\varLambda}}^k \), and Λ ^{k} are subsets of {1, 2, ⋯, p}; and ⌈s/q⌉ is the smallest integer not smaller than s / q.
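The set-valued primitives of Algorithm 1 can be written out explicitly. The following sketch reflects our own reading of the notation (a numerical tolerance replaces exact zero tests, and the paper's implementation may differ in tie-breaking):

```python
import numpy as np

def cosupp(Omega, x, tol=1e-10):
    """cosupp(x) = {i : Omega_i x = 0}."""
    return set(np.flatnonzero(np.abs(Omega @ x) < tol).tolist())

def cosupp_l(Omega, x, l_est):
    """cosupp(x, l_est): indices of the l_est smallest |(Omega x)_i|."""
    return set(np.argsort(np.abs(Omega @ x))[:l_est].tolist())

def min_set(alpha, k):
    """Min(alpha, k): indices of the k smallest entries of alpha."""
    return set(np.argsort(alpha)[:k].tolist())

def index_set(Omega, x, l_est, q):
    """index(Omega x, q): (l_est+1)-th .. (l_est+q)-th smallest rows."""
    order = np.argsort(np.abs(Omega @ x))
    return set(order[l_est:l_est + q].tolist())

# Tiny sanity check with Omega = I, so that Omega x = x:
x = np.array([0.0, 3.0, 1.0, 2.0])
I4 = np.eye(4)
# cosupp -> {0}; the two smallest magnitudes -> {0, 2};
# index_set with l_est = 1, q = 2 skips the smallest and returns {2, 3}.
```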
The CSMP algorithm adopts a stagewise approach [6] to estimate the real cosparsity in each stage of the forward process, which only requires the step size s to be set at initialization. Here, l is defined as the real cosparsity of the original signal, and s should normally not be larger than d − l [11]. The initial cosupport Λ ^{0} has the maximum number of rows in Ω that enables CSMP to recover the signal. To avoid underestimation of the cosparsity, a safe choice is s = 1 if l is unknown. Nevertheless, there is a trade-off between s and the recovery speed, as a smaller s requires more stages. We can see that GAP and ASP/ACoSaMP are special cases of the proposed algorithm when CSMP has a step size of s = 1 and s = d − l, respectively.
Suppose CSMP has a cosparsity of l _{est} in the forward process. With this cosparsity, the candidate set is constructed by selecting some rows in Ω with smaller correlations. Here, we should explain the relation between greedy synthesis algorithms and their analysis counterparts. Let v _{1}, v _{2} ∈ ℜ ^{d} be two vectors such that Λ _{1} = cosupp(Ω v _{1}) and Λ _{2} = cosupp(Ω v _{2}). Assuming that ‖Λ _{1}‖_{0} ≥ (l _{est})_{1} and ‖Λ _{2}‖_{0} ≥ (l _{est})_{2}, it holds that ‖Λ _{1} ∩ Λ _{2}‖_{0} ≥ (l _{est})_{1} + (l _{est})_{2} − p. For the case ‖Λ _{1}‖_{0} = ‖Λ _{2}‖_{0} = l _{est}, we have 2l _{est} − p ≤ ‖Λ _{1} ∩ Λ _{2}‖_{0} ≤ l _{est}. So 2l _{est} − p is a reasonable size of the candidate set for CSMP, which corresponds to the candidate set size of 2k in CoSaMP for the synthesis model. Denoting T _{1} = supp(Ω v _{1}) and T _{2} = supp(Ω v _{2}), it is clear that supp(Ω(v _{1} + v _{2})) ⊆ T _{1} ∪ T _{2}. Noticing that supp(⋅) = cosupp(⋅)^{C}, we get cosupp(Ω(v _{1} + v _{2})) ⊇ (T _{1} ∪ T _{2})^{C} = T _{1} ^{C} ∩ T _{2} ^{C} = Λ _{1} ∩ Λ _{2}, where the superscript C denotes the complementary set. This implies that the union of the supports in the synthesis case parallels the intersection of the cosupports in the analysis case.
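The cardinality bound ‖Λ _{1} ∩ Λ _{2}‖_{0} ≥ (l _{est})_{1} + (l _{est})_{2} − p is a direct consequence of inclusion-exclusion; a quick numerical check (our own illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
p, l1, l2 = 12, 9, 8
lower = l1 + l2 - p                 # = 5 here

# Smallest intersection size observed over many random cosupport pairs.
worst = p
for _ in range(200):
    Lam1 = set(rng.choice(p, size=l1, replace=False).tolist())
    Lam2 = set(rng.choice(p, size=l2, replace=False).tolist())
    worst = min(worst, len(Lam1 & Lam2))
# worst never drops below l1 + l2 - p, matching the bound in the text.
```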
Now, with the candidate set, we begin to construct the cosupport and update the measurement residual. The rows in the analysis operator which correspond to the smallest l _{est} components of the temporarily estimated signal (calculated in Step 4 in Algorithm 1) are used to form the cosupport. Then, we approximate the signal and update the measurement residual of the current iteration with this cosupport. The backtracking strategy [12, 19], which not only selects rows from the candidate set that better match the current residual signal in each iteration but also excludes the other rows from the cosupport, provides the basis for constructing a more accurate cosupport and obtaining a smaller measurement residual. Here, \( \mathrm{index}\left(\boldsymbol{\Omega} {\widehat{\boldsymbol{x}}}_{\mathrm{temp}},q\right) \) is reserved for sharing in the backward process. An efficient mechanism is required for switching stages when each stage finishes, until l _{est} < l; the switch can be triggered when the current measurement residual energy no longer decreases compared with that in the last iteration. From Step 9 in Algorithm 1, we can see that the algorithm runs for a number of iterations with the same cosparsity l _{est} until it reaches the stage switching condition, and then performs with a cosparsity of l _{est} − s in the next stage. The forward process does not stop until the measurement residual reaches a preset threshold, such as ‖y _{ r }‖_{2}/‖y‖_{2} ≤ 10^{−6} for the noiseless case or ‖y _{ r }‖_{2} ≤ ε for the noisy case.
In the backward process, the algorithm tries to further increase the cosparsity by adding the less used rows into the cosupport. These q rows have been chosen in the last iteration of the forward process. The value of q can be chosen in accordance with the value of s. As a rule of thumb, a small q is chosen when s is relatively small, and likewise, a larger q should be chosen if s is large. Typically, in our experiments, we choose q = 1 when s = 10, but we select q = 50 when s is in the order of thousands. With this strategy, we can get a more accurate cosupport for signal approximation. The iterations stop as soon as the measurement residual reaches the terminating threshold used in the forward process. The backward process needs to repeat fewer than ⌈s/q⌉ times, since that is enough for the cosparsity to change from l _{est} to l _{est} + s in this process.
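The forward flow described above can be condensed into a skeleton. The following is our own simplified reading of Algorithm 1, not the paper's implementation: it substitutes the relaxed projection of Section 3.2 for the constrained one, switches stages whenever the residual stalls, and omits the candidate-set backtracking and the backward refinement (`csmp_skeleton` is a hypothetical name):

```python
import numpy as np

def csmp_skeleton(M, y, Omega, s=5, lam=0.001, tol=1e-6, max_iter=100):
    p, d = Omega.shape

    def project(Lam):
        # Relaxed projection: min ||y - Mx||^2 + lam * ||Omega_Lam x||^2.
        rows = sorted(Lam)
        A = np.vstack([M, np.sqrt(lam) * Omega[rows]])
        b = np.concatenate([y, np.zeros(len(rows))])
        return np.linalg.lstsq(A, b, rcond=None)[0]

    l_est = p                               # start with all rows of Omega
    x_hat = project(set(range(p)))
    res = np.linalg.norm(y - M @ x_hat)

    for _ in range(max_iter):               # forward process
        if res <= tol * np.linalg.norm(y):
            break
        Lam = set(np.argsort(np.abs(Omega @ x_hat))[:l_est].tolist())
        x_new = project(Lam)
        res_new = np.linalg.norm(y - M @ x_new)
        if res_new >= res:                  # residual stalled: switch stage
            l_est = max(l_est - s, 1)
        else:
            x_hat, res = x_new, res_new
    # (the backward process would now try to re-grow l_est by q at a time)
    return x_hat

# Demo on exact measurements of a generic (not specially cosparse) signal;
# the skeleton should still drive the residual down via the relaxed fits.
rng = np.random.default_rng(3)
M = rng.standard_normal((40, 8))
Omega = rng.standard_normal((12, 8))
x_true = rng.standard_normal(8)
x_hat = csmp_skeleton(M, M @ x_true, Omega)
```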
Relaxed versions for highdimensional problems
In CSMP, the constrained minimization problem

\( \underset{\boldsymbol{x}}{ \min }{\left\Vert \boldsymbol{y}-\boldsymbol{M}\boldsymbol{x}\right\Vert}_2^2\kern1em \mathrm{s}.\mathrm{t}.\kern1em {\boldsymbol{\Omega}}_{\varLambda}\boldsymbol{x}=\mathbf{0} \kern2em (7) \)
is hard to solve for high-dimensional signals, and we propose to replace it with the minimization of the following cost function in Steps 4, 6, 12, and 14 in Algorithm 1:

\( f\left(\boldsymbol{x}\right)={\left\Vert \boldsymbol{y}-\boldsymbol{M}\boldsymbol{x}\right\Vert}_2^2+\lambda {\left\Vert {\boldsymbol{\Omega}}_{\varLambda}\boldsymbol{x}\right\Vert}_2^2 \kern2em (8) \)
where λ is a relaxation constant, and we choose λ = 0.001 as in [12] in our experiments. This results in a relaxed version of the algorithm, which we refer to hereafter as relaxed CSMP (RCSMP).
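The relaxed cost is a standard Tikhonov-type least-squares problem; for moderate sizes it can be solved by stacking the data-fit and penalty terms into one system (the paper uses conjugate gradients for large images; the function and variable names below are ours):

```python
import numpy as np

def relaxed_projection(M, y, Omega_Lam, lam=0.001):
    """argmin_x ||y - M x||_2^2 + lam * ||Omega_Lam x||_2^2, solved as
    one least-squares system [M; sqrt(lam) * Omega_Lam] x = [y; 0]."""
    A = np.vstack([M, np.sqrt(lam) * Omega_Lam])
    b = np.concatenate([y, np.zeros(Omega_Lam.shape[0])])
    x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x_hat

# The minimizer satisfies the normal equations
# (M^T M + lam * Omega_Lam^T Omega_Lam) x = M^T y.
rng = np.random.default_rng(0)
M = rng.standard_normal((6, 10))
Omega_Lam = rng.standard_normal((7, 10))
y = rng.standard_normal(6)
x_hat = relaxed_projection(M, y, Omega_Lam)
```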
Theoretical performance analysis
This section presents our theoretical analysis of the behavior of CSMP for the cosparse model in both the noiseless and noisy cases. Because the proposed algorithm uses a backtracking strategy similar to that of ASP, the proofs are mainly based on the proof framework of ASP/ACoSaMP. The following theorems parallel those in [12], except for the unknown cosparsity and the different initial cosupport and measurement vectors.
To show the ability of exact and stable recovery of cosparse signals by the CSMP algorithm, we define all variables involved in the process of signal reconstruction as follows:
Definition 3.1 [12] Let \( {\boldsymbol{Q}}_{\varLambda }=\boldsymbol{I}-{\boldsymbol{\Omega}}_{\varLambda}^{+}{\boldsymbol{\Omega}}_{\varLambda } \) be the orthogonal projection onto the null space of Ω _{Λ}, where Ω _{Λ} is a submatrix of the analysis operator Ω ∈ ℜ^{p × d} and Λ is a subset of {1, 2, ⋯, p}. A cosupport Ŝ _{ l } which implies a near-optimal projection with constant C _{ l } is defined by

\( {\left\Vert \boldsymbol{v}-{\boldsymbol{Q}}_{{\widehat{S}}_l}\boldsymbol{v}\right\Vert}_2^2\le {C}_l\cdot \underset{\varLambda \in {\mathrm{L}}_l}{ \min }{\left\Vert \boldsymbol{v}-{\boldsymbol{Q}}_{\varLambda}\boldsymbol{v}\right\Vert}_2^2 \kern2em (9) \)
where v ∈ ℜ ^{d} and \( {\mathrm{L}}_l=\left\{\varLambda \subseteq \left\{1,2,\cdots, p\right\}:\left|\varLambda \right|\ge l\right\} \) is the set of l-cosparse cosupports.
Definition 3.2 (Problem p) [12] Consider a measurement vector y ∈ ℜ^{m} such that y = Mx + e, where x ∈ ℜ^{d} is l-cosparse, M ∈ ℜ^{m × d} is a measurement matrix, and e ∈ ℜ^{m} is a bounded additive noise. The largest singular value of M is σ _{ M }, and its Ω-RIP constant is δ _{ l }. The analysis operator Ω ∈ ℜ^{p × d} is given and fixed. Define C _{ l } to be the ratio between the largest and the smallest nonzero eigenvalues of the submatrix composed of l rows of Ω. Assume that Ŝ _{ l } = cosupp(Ω x, l) implies a near-optimal projection in the sense of Definition 3.1. Our task is to recover x from y. The recovery result is denoted by \( \widehat{\boldsymbol{x}} \).
Now, we give the guarantees on exact recovery and stability of CSMP for recovering the cosparse signals.
Theorem 3.3 (Convergence for cosparse recovery) Consider the problem p, and suppose that there exists γ > 0 such that
then there exists \( \delta \left({C}_{2{l}_q-p},{\sigma}_{\boldsymbol{M}}^2,\gamma \right)>0 \) such that, whenever \( {\delta}_{4{l}_q-3p}\le \delta \left({C}_{2{l}_q-p},{\sigma}_{\boldsymbol{M}}^2,\gamma \right) \), the kth iteration of the algorithm satisfies
where
and
Moreover, when \( {\rho}_1^2{\rho}_2^2<1, \) the iteration converges. The constant γ gives a tradeoff between satisfying the theorem conditions and the noise level, and the conditions for the noiseless case are achieved when γ tends to zero.
Theorem 3.4 (Exact recovery for cosparse signals) Consider the problem p when ‖e‖_{2} = 0. Let l _{ s } = d − s⌈(d − l)/s⌉ and l _{ q } = l _{ s } + q⌊(l − l _{ s })/q⌋. If the measurement matrix M satisfies the Ω-RIP with parameter \( {\delta}_{4{l}_q-3p}\le \delta \left({C}_{2{l}_q-p},{\sigma}_{\boldsymbol{M}}^2,\gamma \right) \), where \( {C}_{2{l}_q-p} \) and γ are as in Theorem 3.3 and \( \delta \left({C}_{2{l}_q-p},{\sigma}_{\boldsymbol{M}}^2,\gamma \right) \) is a constant guaranteed to be greater than zero whenever (10) is satisfied, then the CSMP algorithm guarantees an exact recovery of x from y via a finite number of iterations.
The proof is mainly based on the following lemma:
Lemma 3.5 If the measurement matrix M satisfies the Ω-RIP with the same conditions as in Theorem 3.4, then
 1.
the (⌈(d − l)/s⌉ + ⌊(l − l _{ s })/q⌋)th stage of the algorithm is equivalent to the ASP algorithm with estimated cosparsity l _{ q }, except that they have different initial cosupports and initial measurement vectors;
 2.
CSMP recovers the target signal exactly after completing the (⌈(d − l)/s⌉ + ⌊(l − l _{ s })/q⌋)th stage.
In the Appendix, Lemma 3.5 is proved in detail.
Lemma 3.5 shows that CSMP has a signal reconstruction process equivalent to that of ASP and can complete the exact recovery of the cosparse signals in a finite number of stages. To complete the proof, it is sufficient to show that the CSMP algorithm never gets stuck at any iteration of any stage, i.e., it takes a finite number of iterations over at most ⌈(d − l)/s⌉ + ⌊(l − l _{ s })/q⌋ stages. At each stage, the cosupport (whose size is assumed to be l _{est}) adds and discards some rows of Ω, and the number of rows is fixed and finite. Hence, there is a finite number of combinations, at most \( \left(\begin{array}{c}\hfill d\hfill \\ {}\hfill {l}_{\mathrm{est}}\hfill \end{array}\right) \), where d is the length of the signal. Thus, if CSMP took an infinite number of iterations in a stage, the construction of the cosupport would repeat after at most \( \left(\begin{array}{c}\hfill d\hfill \\ {}\hfill {l}_{\mathrm{est}}\hfill \end{array}\right) \) iterations. Hence, Theorem 3.4 follows.
Lemma 3.6 (Stability for cosparse recovery) Consider the problem p. If (10) holds and \( {\delta}_{4{l}_q-3p}\le \delta \left({C}_{2{l}_q-p},{\sigma}_{\boldsymbol{M}}^2,\gamma \right) \), where γ is as in Theorem 3.3 and \( \delta \left({C}_{2{l}_q-p},{\sigma}_{\boldsymbol{M}}^2,\gamma \right) \) is a constant guaranteed to be greater than zero whenever \( \frac{\left({C}_{2{l}_q-p}^2-1\right){\sigma}_{\boldsymbol{M}}^2}{C_{2{l}_q-p}^2}<1 \) is satisfied, then for any
we have
implying that CSMP leads to a stable recovery. The constants η _{1} , η _{2} , ρ _{1} , and ρ _{2} are the same as in Theorem 3.3.
Similarly, the proof of Lemma 3.6 is based on Lemma 3.5 and the corresponding theorems of ASP algorithm in [12], and we omit the detailed proof here.
The above theorems give sufficient conditions for exact recovery and stability of CSMP. They are slightly more restrictive than the corresponding results for the ASP algorithm because the true cosparsity level l is always larger than or equal to the estimated one, l _{ q }. This may be regarded as an additional cost of not having precise information on the cosparsity. On the other hand, the proofs also show that these sufficient conditions may not be optimal or tight, because they only consider the final stage and ignore the influence of the previous stages on the performance of the algorithm.
Experiments
In this section, we evaluate the performance of the proposed algorithm, as compared with several baseline algorithms. To this end, we repeat some of the experiments performed in [12] for the noiseless case (e = 0) and noisy case.
Phase transition diagrams for synthetic signals in the noiseless case
We show the performance of the proposed algorithm as compared with six baseline methods, namely, AIHT, AHTP, ASP, ACoSaMP, l _{1}-relaxation, and GAP, using the same experiments as performed in [12] for the noiseless case. We begin with synthetic signals and test the performance of CSMP with s = 1, s = 5, and s = 10, respectively. The results of the proposed algorithm are compared with those of AIHT and AHTP with an adaptively changing step size, ASP and ACoSaMP with a = 1 and \( a=\frac{2l-p}{l} \), l _{1}-relaxation, and GAP. We use a random matrix M, where each entry is drawn independently from a Gaussian distribution, and a random tight frame Ω of size d = 120 and p = 144.
We draw a phase transition diagram [12] for each of the algorithms. In each phase transition diagram, 20 different possible values of m and 20 different values of l are tested. For each pair of m and l _{est}, we repeat the experiment 50 times, with the value of l _{est} selected according to the formula in [12]:
where ρ is the ratio between the cosparsity of the signal and the number of measurements, shown as the vertical axis of the phase diagram. The sampling rate δ is defined as δ = m / d and shown as the horizontal axis. There are 400 cells in each phase transition diagram, and the gray level of the cell shows the exact reconstruction rate of its recovery algorithm. White cells in the diagram denote a completely exact reconstruction, and black cells denote a total failure in the exact reconstruction.
The reconstruction results of the proposed algorithm and the baseline algorithms are shown in Fig. 2.
In Fig. 2, experiments with s = 1, s = 5, and s = 10 are performed for CSMP, respectively. It can be seen that CSMP achieves better results than AIHT and AHTP with an adaptively changing step size and than ASP and ACoSaMP with \( a=\frac{2l-p}{l} \) when l _{est} is far from d. In addition, the proposed algorithm with s = 5 and s = 10 provides performance comparable to that of ASP and ACoSaMP with a = 1 and the l _{1}-relaxation, even without knowledge of the cosparsity. Although the exact recovery rates of GAP across all pairs of δ and ρ are higher than those of CSMP, the number of white cells is 59 in Fig. 2k, which is less than the 67 in Fig. 2b and the 63 in Fig. 2c.
Reconstruction of highdimensional images in the noiseless and noisy cases
We now test the methods on high-dimensional signals. We use RCSMP (the relaxed version of CSMP defined in Section 3.2) for the reconstruction of the Shepp-Logan phantom from a small number of measurements. RCSMP is computationally more efficient than CSMP for high-dimensional signals and is thus chosen in this experiment. The sampling operator M is a two-dimensional Fourier transform that measures only a certain number of radial lines from the Fourier transform. The cosparse operator is a two-dimensional finite difference operator Ω _{2D-DIF}, with p = 130,560 rows, and the real cosparsity of the signal under this operator is l = 128,014. The original phantom image is presented in Fig. 3a. Using RCSMP with s = 4000 and q = 50, we get a perfect reconstruction using only 12 radial lines, just as RASP (relaxed Analysis Subspace Pursuit) does, i.e., only m = 3032 measurements out of d = 65,536, which is less than 4.63 % of the data in the original image. The algorithm requires fewer than 15 iterations, which is fewer than required by RASP to achieve this recovery percentage. The reconstruction results of RCSMP are shown in Fig. 3c.
We now turn to the noisy case. We perform reconstruction using RCSMP with s = 4000 and q = 50 from noisy measurements of the phantom with 22 radial lines (Fig. 4b) and a signal-to-noise ratio (SNR) of 20 dB. Figure 4c presents the noisy image, which is the result of applying the inverse Fourier transform to the measurements. The proposed method achieves a reconstruction with a peak SNR (PSNR) of 37.11 dB (Fig. 4e). The image recovered using GAP is shown in Fig. 4f, with a slightly worse PSNR of 34.34 dB. Note that for the minimization process, we use conjugate gradients in Steps 4, 6, 12, and 14 in each iteration of Algorithm 1, take only the real part of the result, and crop the values of the resulting image to the range [0, 1] [12].
Conclusions
We have presented CSMP, a novel greedy pursuit algorithm for the cosparse analysis model. With the proposed algorithm, prior information on the cosparsity of the target signal is not required. This addresses a common limitation of existing greedy pursuit algorithms. The underlying intuition of CSMP is to obtain the cosparsity estimate and the signal approximation in the forward process and to refine them in the backward process. Borrowing the idea from ASP, a theoretical study of the proposed algorithm has been performed to provide guarantees for stable recovery under the assumption of the Ω-RIP and the existence of an optimal or a near-optimal projection. Experiments have confirmed that the proposed algorithm gives competitive results for signal recovery compared with l _{1}-relaxation and ACoSaMP/ASP in the noiseless case and better results than GAP in the noisy case.
Notes
 1.
To form a Gaussian random signal x of length d = 120 and cosparsity l = 90, we choose l rows from Ω randomly to form Ω _{Λ} ∈ ℜ ^{l × d}, where Λ is composed of the indices of the chosen rows. We apply the singular value decomposition (SVD) Ω _{Λ} = U ⋅ D ⋅ V ^{−1}, where D ∈ ℜ ^{l × d} and D(:, l + 1 : d) = 0. Define Nullspace = V(:, l + 1 : d), and let r ∈ ℜ ^{d − l} be a vector of i.i.d. Gaussian random variables of identical variance. Further define x = Nullspace ⋅ r. Since V ^{−1} ⋅ V(:, l + 1 : d) consists of the last d − l columns of the identity matrix, we have Ω _{Λ} ⋅ x = U ⋅ D ⋅ V ^{−1} ⋅ V(:, l + 1 : d) ⋅ r = U ⋅ D(:, l + 1 : d) ⋅ r = U ⋅ 0 ⋅ r = 0, and hence ‖Ω ⋅ x‖_{0} ≤ p − l. Then, we have generated a signal x of length d = 120 and cosparsity l = 90.
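The construction in this note translates directly into NumPy (our own sketch; note that `numpy.linalg.svd` returns V ^{T}, so the null-space basis is the last d − l rows of V ^{T}, transposed):

```python
import numpy as np

def cosparse_gaussian_signal(Omega, l, rng):
    """Draw x orthogonal to l randomly chosen rows of Omega,
    following the SVD null-space construction of Note 1."""
    p, d = Omega.shape
    Lam = rng.choice(p, size=l, replace=False)     # random cosupport
    _, _, Vt = np.linalg.svd(Omega[Lam])           # Omega_Lam = U D V^T
    null_basis = Vt[l:].T                          # d x (d - l) basis of null(Omega_Lam)
    r = rng.standard_normal(d - l)                 # i.i.d. Gaussian coefficients
    return null_basis @ r, Lam

rng = np.random.default_rng(42)
Omega = rng.standard_normal((144, 120))            # p = 144, d = 120, as in the text
x, Lam = cosparse_gaussian_signal(Omega, 90, rng)  # cosparsity l = 90
# Omega_Lam @ x vanishes, so ||Omega x||_0 <= p - l
```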
References
 1.
M Elad, P Milanfar, R Rubinstein, Analysis versus synthesis in signal priors. Inverse Probl. 23(3), 947–968 (2007)
 2.
M Bruckstein, DL Donoho, M Elad, From sparse solutions of systems of equations to sparse modeling of signals and images. SIAM Rev. 51(1), 34–81 (2009)
 3.
SS Chen, DL Donoho, MA Saunders, Atomic decomposition by basis pursuit. SIAM J. Sci. Comput. 20(1), 33–61 (1998)
 4.
Y Pati, R Rezaiifar, P Krishnaprasad, Orthogonal matching pursuit: recursive function approximation with applications to wavelet decomposition, in Proc. of the 27th Annual Asilomar Conf. on Signals, Systems and Computers, 1993, pp. 40–44
 5.
D Needell, J Tropp, CoSaMP: iterative signal recovery from incomplete and inaccurate samples. Appl. Comput. Harmon. Anal. 26(3), 301–320 (2009)
 6.
TT Do, L Gan, N Nguyen, TD Tran, Sparsity adaptive matching pursuit algorithm for practical compressed sensing, in Proc. of the 42nd Asilomar Conf. on Signals, Systems and Computers, IEEE, 2008, pp. 581–587
 7.
T Blumensath, M Davies, Iterative hard thresholding for compressed sensing. Appl. Comput. Harmon. Anal. 27(3), 265–274 (2009)
 8.
S Foucart, Hard thresholding pursuit: an algorithm for compressive sensing. SIAM J. Numer. Anal. 49(6), 2543–2563 (2011)
 9.
H Wu, S Wang, Adaptive sparsity matching pursuit algorithm for sparse reconstruction. IEEE Signal Proc Lett. 19(8), 471–474 (2012)
 10.
D Wu, K Wang, Y Zhao, W Wang, L Chen, Stagewise regularized orthogonal matching pursuit algorithm. Opt. Precis. Eng. 22(5), 1395–1402 (2014)
 11.
S Nam, M Davies, M Elad, R Gribonval, The cosparse analysis model and algorithms. Appl. Comput. Harmon. Anal. 34(1), 30–56 (2013)
 12.
R Giryes, S Nam, M Elad, R Gribonval, ME Davies, Greedy-like algorithms for the cosparse analysis model. Linear Algebra Appl. 441, 22–60 (2014)
 13.
EJ Candes, YC Eldar, D Needell, P Randall, Compressed sensing with coherent and redundant dictionaries. Appl. Comput. Harmon. Anal. 31(1), 59–73 (2011)
 14.
S Vaiter, G Peyré, C Dossal, J Fadili, Robust sparse analysis regularization. IEEE Trans. Inf. Theory 59(4), 2001–2016 (2013)
 15.
S Nam, M Davies, M Elad, R Gribonval, Cosparse analysis modeling, in Workshop on Signal Processing with Adaptive Sparse Structured Representations, 2011
 16.
S Nam, M Davies, M Elad, R Gribonval, Recovery of cosparse signals with greedy analysis pursuit in the presence of noise, in Proc. 4th IEEE Int. Workshop on CAMSAP, 2011, pp. 361–364.
 17.
R Giryes, S Nam, R Gribonval, ME Davies, Iterative cosparse projection algorithms for the recovery of cosparse vectors, in Proc. of the 19th European Signal Processing Conference (EUSIPCO), IEEE, 2011, pp. 1460–1464
 18.
R Giryes, Greedy algorithm for the analysis transform domain. arXiv preprint arXiv:1309.7298 (2013)
 19.
R Giryes, M Elad, CoSaMP and SP for the cosparse analysis model, in Proc. of the 20th European Signal Processing Conference (EUSIPCO), IEEE, 2012, pp. 964–968
 20.
J Turek, I Yavneh, M Elad, On MAP and MMSE estimators for the cosparse analysis model. Digit. Signal Process. 28, 57–74 (2014)
Acknowledgements
This work was supported by the National Natural Science Foundation of China under Grant 51379049 and the Engineering and Physical Sciences Research Council (EPSRC) of the UK under Grant EP/K014307/1. The authors wish to thank the Associate Editor and the anonymous reviewers for their contributions to improving this paper.
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
The work presented in this paper was carried out in collaboration between all authors. YZ and YH conceived the research theme. YZ and DW designed the algorithm, analyzed the algorithm, performed the experiments, and discussed the results. The theorem proofs were given by DW. WW contributed to the revision of the overall work and manuscript and provided insightful comments and suggestions.
Appendix
Proof of Lemma 3.5
Proof Assuming that l _{ s } = d − s⌈(d − l)/s⌉ and l _{ q } = l _{ s } + q⌊(l − l _{ s })/q⌋, we first discuss the relation between l, s, and q, which comprises the following two cases.
Case 1 When (d − l)/s is an integer, we have ⌊(l − l _{ s })/q⌋ = 0 and l _{ q } = l _{ s } = l. This means that after (d − l)/s stages of the forward process, the estimated cosparsity l _{ s } obtained by the proposed algorithm equals the real cosparsity l; the backward process is not performed in this situation.
Case 2 If ⌈(d − l)/s⌉ > (d − l)/s, we have l _{ s } < l. After ⌈(d − l)/s⌉ stages of the forward process, the estimated cosparsity l _{ s } obtained by the proposed algorithm is smaller than the real cosparsity l. This is followed by the backward process. If (l − l _{ s })/q is an integer, we have l _{ q } = l: at the (l − l _{ s })/q-th stage of the backward process, the cosparsity l _{ q } increases to l, and the CSMP algorithm performs as ASP with cosparsity l _{ q }. If ⌊(l − l _{ s })/q⌋ < (l − l _{ s })/q, we have l _{ s } ≤ l _{ q } < l.
For both cases, we summarize that CSMP will recover a signal exactly with an estimated cosparsity of l _{ q } (l _{ s } ≤ l _{ q }) after ⌈(d − l)/s⌉ + ⌊(l − l _{ s })/q⌋ stages.
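The stage bookkeeping in the two cases above can be checked with a short script; the helper function and the example parameter values (d = 120, l = 90 with step sizes s and q) are ours, chosen only to illustrate the two cases:

```python
from math import ceil, floor

def stage_counts(d, l, s, q):
    """Stage arithmetic from the proof sketch (illustrative helper).

    Forward process: the cosparsity estimate decreases from d by s per
    stage, reaching l_s = d - s*ceil((d - l)/s) <= l after ceil((d - l)/s)
    stages. Backward process: it increases by q per stage, reaching
    l_q = l_s + q*floor((l - l_s)/q) after floor((l - l_s)/q) stages.
    """
    forward = ceil((d - l) / s)
    l_s = d - s * forward
    backward = floor((l - l_s) / q)
    l_q = l_s + q * backward
    return forward, backward, l_s, l_q

# Case 1: (d - l)/s is an integer, so l_s = l and no backward stages run
print(stage_counts(120, 90, 10, 1))   # (3, 0, 90, 90)

# Case 2: ceil((d - l)/s) > (d - l)/s, so l_s < l and the backward
# process refines the estimate toward l
print(stage_counts(120, 90, 7, 2))    # (5, 2, 85, 89)
```

In both cases, the total number of stages is ⌈(d − l)/s⌉ + ⌊(l − l _{ s })/q⌋, matching the summary above.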
The analysis operator has the property that for \( \tilde{l}\le l \), it holds that \( {\delta}_l\le {\delta}_{\tilde{l}} \) [12]. Via (4) and this property, the inequality \( \left(1-{\delta}_{4{l}_s-3p}\right)\parallel \boldsymbol{v}{\parallel}_2^2\le \left(1-{\delta}_{4{l}_q-3p}\right)\parallel \boldsymbol{v}{\parallel}_2^2\le \parallel \boldsymbol{M}\boldsymbol{v}{\parallel}_2^2\le \left(1+{\delta}_{4{l}_q-3p}\right)\parallel \boldsymbol{v}{\parallel}_2^2\le \left(1+{\delta}_{4{l}_s-3p}\right)\parallel \boldsymbol{v}{\parallel}_2^2 \) holds; hence, we choose the parameter \( {\delta}_{4{l}_q-3p} \) to satisfy the Ω-RIP instead of \( {\delta}_{4{l}_s-3p} \).
At the last stage, the construction of the candidate set and cosupport in CSMP is the same as that of ASP with the corresponding cosparsity value. The only difference is that, while the ASP algorithm is initialized with the full cosupport and the measurement data, CSMP is initialized with the cosupport and measurement residual of the last iteration. This establishes the first part of Lemma 3.5.
The second part of Lemma 3.5 follows from the fact that the convergence condition of the ASP algorithm in [12] does not depend on these initial values but on the construction of the candidate set and cosupport. In particular, it is based on the following two observations:
 1.
The energy of the part of the signal x not captured by the current cosupport is smaller, by a constant factor, than that of the part not captured by the cosupport in the previous iteration.
 2.
The energy of the measurement residual in the current iteration is smaller, by a constant factor, than that in the previous iteration.
When the Ω-RIP condition is satisfied, both constants above are smaller than one, which results in exact recovery after a finite number of iterations. This is the main result of Theorem 6.13 in [12]. Because the final stage of the proposed algorithm is equivalent to ASP with the estimated cosparsity level l _{ q }, the target signal will be exactly recovered after this stage, provided the Ω-RIP condition with parameter l _{ q } is satisfied.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Wu, D., Zhao, Y., Wang, W. et al. Cosparsity-based Stagewise Matching Pursuit algorithm for reconstruction of the cosparse signals. EURASIP J. Adv. Signal Process. 2015, 101 (2015). https://doi.org/10.1186/s13634-015-0281-3
Keywords
 Cosparse analysis model
 Greedy pursuit
 Adaptive cosparsity
 Stagewise strategy