- Research
- Open Access
Cosparsity-based Stagewise Matching Pursuit algorithm for reconstruction of the cosparse signals
- Di Wu1,
- Yuxin Zhao1,
- Wenwu Wang2 and
- Yanling Hao1
https://doi.org/10.1186/s13634-015-0281-3
© Wu et al. 2015
- Received: 9 June 2015
- Accepted: 6 November 2015
- Published: 1 December 2015
Abstract
The cosparse analysis model has been introduced as an interesting alternative to the standard sparse synthesis model. Given a set of corrupted measurements, finding a signal belonging to this model is known as analysis pursuit, which is an important problem in analysis model based sparse representation. Several pursuit methods have already been proposed, such as the methods based on l1-relaxation and greedy approaches based on the cosparsity of the signal. This paper presents a novel greedy-like algorithm, called Cosparsity-based Stagewise Matching Pursuit (CSMP), where the cosparsity of the target signal is estimated adaptively with a stagewise approach composed of forward and backward processes. In the forward process, the cosparsity is estimated and the signal is approximated, followed by the refinement of the cosparsity and the signal in the backward process. As a result, the target signal can be reconstructed without prior information on the cosparsity level. Experiments show that the performance of the proposed algorithm is comparable to those of l1-relaxation and Analysis Subspace Pursuit (ASP)/Analysis Compressive Sampling Matching Pursuit (ACoSaMP) in the noiseless case and better than that of Greedy Analysis Pursuit (GAP) in the noisy case.
Keywords
- Cosparse analysis model
- Greedy pursuit
- Adaptive cosparsity
- Stagewise strategy
1 Introduction
Since solving (2) is a non-deterministic polynomial (NP)-hard problem [2], many approximation techniques have been proposed to recover x. Basis pursuit (BP) [3], which is based on l1-minimization using linear programming (LP), is a well-known reconstruction algorithm. Another option for approximating (2) is to use a family of greedy-like algorithms, such as Orthogonal Matching Pursuit [4] or thresholding techniques [5–10].
The role of cosparsity in the analysis model is similar to the role of sparsity in the synthesis model. The level of sparsity in the synthesis model indicates the number of non-zeros in the representation vector z in (2), while in the analysis model, the cosparsity l is used to indicate the number of zeros in the vector Ω x, as defined in (4). In other words, the quantity l denotes the number of rows of Ω that are orthogonal to the signal.
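As a concrete illustration, the short NumPy sketch below (a toy finite-difference operator and piecewise-constant signal of our own choosing, not an example from the paper) counts the zeros of Ω x to obtain the cosupport and the cosparsity:

```python
import numpy as np

# Toy 1-D finite-difference analysis operator: row i computes x[i] - x[i+1].
d = 8
Omega = np.eye(d - 1, d) - np.eye(d - 1, d, k=1)

# A piecewise-constant signal, so most consecutive differences are zero.
x = np.array([3.0, 3.0, 3.0, 1.0, 1.0, 1.0, 1.0, 5.0])

analysis_coeffs = Omega @ x
cosupport = np.where(np.abs(analysis_coeffs) < 1e-12)[0]  # rows orthogonal to x
cosparsity = len(cosupport)                               # number of zeros in Omega @ x

print("Omega @ x   :", analysis_coeffs)   # [0. 0. 2. 0. 0. 0. -4.]
print("cosupport   :", cosupport)         # [0 1 3 4 5]
print("cosparsity l:", cosparsity)        # 5 of the p = 7 rows
```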
Another popular class of cosparse reconstruction algorithms is based on the idea of iterative greedy pursuit, such as Greedy Analysis Pursuit (GAP) [11, 15, 16]. Compared with l1-relaxation, GAP has better reconstruction performance and, to some degree, lower computational complexity. Analysis Iterative Hard Thresholding (AIHT) and Analysis Hard Thresholding Pursuit (AHTP) [12, 17, 18] have been proposed by incorporating the idea of backtracking, which enables the wrong cosupport elements obtained in the previous iteration to be pruned in the current iteration, and they offer strong theoretical guarantees. Experiments show that both of them recover the signal faster than the GAP algorithm. Nevertheless, they require a relatively large number of measurements for exact reconstruction.
Recently, more sophisticated greedy algorithms have been developed, such as Analysis Subspace Pursuit (ASP) and Analysis Compressive Sampling Matching Pursuit (ACoSaMP) [12, 19]. They employ a backtracking strategy and offer strong theoretical guarantees. ASP and ACoSaMP with a candidate set size of 2l − p perform well in reconstructing the signal when l is close to d, but they require more measurements for exact reconstruction as the cosparsity level increases. ASP and ACoSaMP with a candidate set size of l provide reconstruction quality comparable to that of the l1-relaxation methods with lower reconstruction complexity. Other recent methods include a Bayesian method [20], where the model parameters are estimated by a Bayesian algorithm for the reconstruction of the signal under consideration.
Although all these greedy pursuit methods achieve signal reconstruction with a high accuracy, they require the cosparsity l to be known a priori for signal recovery. However, l may not be available in many practical applications. For example, most natural image signals are only cosparse when represented by an analysis operator such as a two-dimensional Fourier transform. It is difficult to define a cosparsity that exactly matches the signal under consideration. The inaccurate cosparsity may degrade the performance of the signal reconstruction algorithm, as demonstrated in the next section.
In this paper, a new greedy algorithm named Cosparsity-based Stagewise Matching Pursuit (CSMP) is proposed for the case where l is unknown. By analyzing the projection of the signal under consideration onto the analysis operator, CSMP estimates the cosparsity stage by stage with a pre-set step size, without prior knowledge of the cosparsity. The cosupport and measurement residual are estimated alternately in the forward stage and fine-tuned in the backward stage, and the signal approximation is obtained at the end of the procedure. Our experiments show that the proposed algorithm has a reconstruction performance comparable to that of ASP and ACoSaMP, but without requiring knowledge of the cosparsity.
This paper is organized as follows. Section 2 presents the motivation of this work. The CSMP algorithm is detailed in Section 3, where a theoretical analysis of the algorithm is also provided. The simulation results are given in Section 4, followed by concluding remarks and future work in Section 5.
2 Motivation
To see this, we first perform an experiment for the ASP algorithm when l is not given accurately. Here, the analysis operator is a two-dimensional finite difference operator Ω ∈ ℜ p × d , where p = 144 and d = 120; x is a Gaussian random signal of length d = 120 with a cosparsity of l = 90 (see the Endnote for how such a signal is generated). In this experiment, the sampling rate δ, defined as δ = m/d [12], is chosen from the set {0.50, 0.54, 0.58, 0.62, 0.66, 0.70}. The cosparsity l est, which denotes the estimated cosparsity of the signal, is chosen from the set {110, 100, 90, 80, 70}. We draw a phase transition diagram [12] for this algorithm (Fig. 1).
Fig. 1 The probability of exact reconstructions versus the cosparsity in ASP
3 Cosparsity-based Stagewise Matching Pursuit
3.1 Algorithm description
To address the above issue, we propose a novel greedy algorithm for blind cosparse reconstruction, where the cosupports are refined iteratively and the information on the cosparsity is extracted automatically. The proposed CSMP algorithm, as shown in Algorithm 1, is composed of two processes, namely, the forward and backward processes. The forward process estimates the cosparsity, constructs the cosupport starting from one containing all rows of Ω, and updates the measurement residual simultaneously. The procedure ends with a backward process that tries to add rows of Ω with smaller correlations to the cosupport until the terminating condition is reached. The terminating condition for CSMP is controlled by a threshold, which ensures that the estimated cosparsity is fairly close to the actual one and that the target signal has been well reconstructed. The main steps of CSMP are summarized in Algorithm 1.
In Algorithm 1, cosupp(x) = {i : Ω i x = 0}; cosupp(x, l est) returns the index set of the l est smallest (in absolute value) elements of Ω x; 2l est − p is the size |Γ| of the candidate set Γ, and for a vector α, the function Min(α, 2l est − p) returns the 2l est − p indices corresponding to the 2l est − p smallest values of α; index(Ω x, q) returns the q elements from the (l est + 1)-th to the (l est + q)-th smallest (in absolute value) rows of Ω x; Γ, Λ Δ , Λ̃ k, and Λ k are subsets of {1, 2, ⋯, p}; and ⌈s/q⌉ denotes the smallest integer not less than s/q.
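These selection operators are simple index selections on Ω x. The following NumPy sketch (function names are ours and mirror the notation above; this is not the authors' code) gives one possible reading of them:

```python
import numpy as np

def cosupp(Omega, x, l_est=None, tol=1e-12):
    """cosupp(x): indices i with Omega_i x = 0 (up to a numerical tolerance).
    cosupp(x, l_est): indices of the l_est smallest entries of |Omega x|."""
    corr = np.abs(Omega @ x)
    if l_est is None:
        return np.where(corr < tol)[0]
    return np.argsort(corr)[:l_est]

def min_indices(alpha, k):
    """Min(alpha, k): indices of the k smallest values of alpha."""
    return np.argsort(alpha)[:k]

def index_next(Omega, x, l_est, q):
    """index(Omega x, q): the (l_est+1)-th to (l_est+q)-th smallest rows of |Omega x|."""
    order = np.argsort(np.abs(Omega @ x))
    return order[l_est:l_est + q]
```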
The CSMP algorithm adopts a stagewise approach [6] to estimate the real cosparsity in each stage of the forward process, which only requires the step size s to be set at initialization. Here, l is defined as the real cosparsity of the original signal, and s should normally not be larger than d − l [11]. The initial cosupport Λ 0 has the maximum number of rows in Ω that enables CSMP to recover the signal. To avoid under-estimation of the cosparsity, a safe choice is s = 1 if l is unknown. Nevertheless, there is a tradeoff between s and the recovery speed, as a smaller s requires more stages. We can see that GAP and ASP/ACoSaMP are special cases of the proposed algorithm, corresponding to step sizes of s = 1 and s = d − l, respectively.
Suppose CSMP uses a cosparsity of l est in the forward process. With this cosparsity, the candidate set is constructed by selecting the rows of Ω with smaller correlations. Here, we should explain the relation between greedy synthesis algorithms and their analysis counterparts. Given two vectors v 1, v 2 ∈ ℜ d , let Λ 1 = cosupp(Ω v 1) and Λ 2 = cosupp(Ω v 2). Assuming that |Λ 1| ≥ (l est)1 and |Λ 2| ≥ (l est)2, it holds that |Λ 1 ∩ Λ 2| ≥ (l est)1 + (l est)2 − p. For the case |Λ 1| = |Λ 2| = l est, we have 2l est − p ≤ |Λ 1 ∩ Λ 2| ≤ l est. So 2l est − p is a reasonable size of the candidate set for CSMP, which corresponds to the candidate set size of 2k in CoSaMP for the synthesis model. Denoting T 1 = supp(Ω v 1) and T 2 = supp(Ω v 2), it is clear that supp(Ω(v 1 + v 2)) ⊆ T 1 ∪ T 2. Noticing that supp(⋅) = cosupp(⋅) C , we get cosupp(Ω(v 1 + v 2)) ⊇ (T 1 ∪ T 2) C = T 1 C ∩ T 2 C = Λ 1 ∩ Λ 2, where the superscript C denotes the complementary set. This implies that the union of the supports in the synthesis case is parallel to the intersection of the cosupports in the analysis case.
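This counting can be written out explicitly (our restatement, using |Λ 1| = |Λ 2| = l est and Λ 1 ∪ Λ 2 ⊆ {1, …, p}):

```latex
\[
|\Lambda_1 \cap \Lambda_2|
 = |\Lambda_1| + |\Lambda_2| - |\Lambda_1 \cup \Lambda_2|
 \;\ge\; 2\,l_{\mathrm{est}} - p,
\qquad
|\Lambda_1 \cap \Lambda_2| \;\le\; \min\big(|\Lambda_1|,\,|\Lambda_2|\big) = l_{\mathrm{est}} .
\]
```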
Now, with the candidate set, we begin to construct the cosupport and update the measurement residual. The rows of the analysis operator that correspond to the smallest l est components (in absolute value) of Ω x̂ temp, where x̂ temp is the temporarily estimated signal calculated in Step 4 of Algorithm 1, are used to form the cosupport. Then, we approximate the signal with this cosupport and update the measurement residual of the current iteration. The backtracking strategy [12, 19], which in each iteration not only selects from the candidate set the rows that better match the current residual signal but also excludes the other rows from the cosupport, provides the basis for constructing a more accurate cosupport and obtaining a smaller measurement residual. Here, index(Ω x̂ temp, q) is reserved for use in the backward process. An efficient mechanism is required for switching between stages once the current stage finishes, which continues until l est falls below the true cosparsity l. A stage is considered finished when the current measurement residual energy no longer decreases compared with that of the last iteration. From Step 9 of Algorithm 1, we can see that the algorithm runs for several iterations with the same cosparsity l est until it reaches the stage switching condition, and then continues with a cosparsity of l est − s in the next stage. The forward process does not stop until the measurement residual reaches a pre-set threshold, such as ‖y r ‖2/‖y‖2 ≤ 10⁻⁶ for the noiseless case or ‖y r ‖2 ≤ ε for the noisy case.
In the backward process, the algorithm tries to further increase the cosparsity by adding rows with smaller correlations into the cosupport. These q rows have already been selected in the last iteration of the forward process. The value of q can be chosen according to the value of s: as a rule of thumb, a small q is chosen when s is relatively small, and a larger q should be chosen if s is large. Typically, in our experiments, we choose q = 1 when s = 10, but we select q = 50 when s is of the order of thousands. With this strategy, we can obtain a more accurate cosupport for signal approximation. The iterations stop as soon as the measurement residual reaches the terminating threshold used in the forward process. The backward process needs to repeat at most ⌈s/q⌉ times, since that is enough for the cosparsity to change from l est to l est + s.
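Since Algorithm 1 itself is not reproduced in this text, the following NumPy skeleton is only our schematic reading of the forward and backward processes described above. In particular, the signal estimate uses a relaxed least-squares projection (a stand-in for the authors' exact projection step, in the spirit of the relaxed versions discussed in Section 3.2), so this is a sketch under stated assumptions rather than the published algorithm:

```python
import numpy as np

def project_cosupport(M, y, Omega, Lambda, lam=1e3):
    """Relaxed signal estimate: argmin_x ||y - M x||^2 + lam^2 ||Omega_Lambda x||^2.
    A least-squares relaxation of the constraint Omega_Lambda x = 0 (our assumption,
    not necessarily the authors' exact projection step)."""
    A = np.vstack([M, lam * Omega[np.asarray(Lambda, dtype=int), :]])
    b = np.concatenate([y, np.zeros(len(Lambda))])
    return np.linalg.lstsq(A, b, rcond=None)[0]

def csmp_sketch(M, y, Omega, s=10, q=1, tol=1e-6, max_iter=500):
    p, d = Omega.shape
    Lambda = np.arange(p)              # initial cosupport: all rows of Omega
    l_est = p - s                      # first cosparsity estimate below the full row set
    x_hat = project_cosupport(M, y, Omega, Lambda)
    y_r = y - M @ x_hat
    prev_res = np.linalg.norm(y_r)

    # ---- forward process: estimate the cosparsity stage by stage ----
    for _ in range(max_iter):
        if np.linalg.norm(y_r) / np.linalg.norm(y) <= tol or l_est < 1:
            break
        # candidate set: the 2*l_est - p rows least correlated with a residual-based estimate
        z = project_cosupport(M, y_r, Omega, Lambda)
        cand = np.argsort(np.abs(Omega @ z))[:max(2 * l_est - p, 1)]
        Gamma = np.intersect1d(Lambda, cand)
        # temporary estimate over Gamma, then keep the l_est least-correlated rows
        x_temp = project_cosupport(M, y, Omega, Gamma)
        Lambda = np.argsort(np.abs(Omega @ x_temp))[:l_est]
        x_hat = project_cosupport(M, y, Omega, Lambda)
        y_r = y - M @ x_hat
        res = np.linalg.norm(y_r)
        if res >= prev_res:            # residual stopped decreasing: switch to the next stage
            l_est -= s
            prev_res = np.inf          # give the new stage a fresh comparison point
        else:
            prev_res = res

    # ---- backward process: try to add up to s more rows, q at a time ----
    for _ in range(int(np.ceil(s / q))):
        order = np.argsort(np.abs(Omega @ x_hat))
        extra = order[l_est:l_est + q]                 # index(Omega x_hat, q)
        trial = np.union1d(Lambda, extra)
        x_trial = project_cosupport(M, y, Omega, trial)
        if np.linalg.norm(y - M @ x_trial) / np.linalg.norm(y) > tol:
            break                                      # adding these rows spoils the fit; stop
        Lambda, x_hat, l_est = trial, x_trial, l_est + q

    return x_hat, Lambda
```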
3.2 Relaxed versions for high-dimensional problems
3.3 Theoretical performance analysis
This section presents our theoretical analysis of the behavior of CSMP for the cosparse model in both the noiseless and noisy cases. Because the proposed algorithm adopts a backtracking strategy similar to that used in ASP, the proofs mainly follow the proof framework of ASP/ACoSaMP. The following theorems parallel those in [12], except for the unknown cosparsity and the different initial cosupport and measurement vectors.
To show the ability of exact and stable recovery of cosparse signals by the CSMP algorithm, we define all variables involved in the process of signal reconstruction as follows:
Definition 3.2 (Problem p) [12] Consider a measurement vector y ∈ ℜ m such that y = Mx + e, where x ∈ ℜ d is l-cosparse, M ∈ ℜ m × d is a measurement matrix, and e ∈ ℜ m is bounded additive noise. The largest singular value of M is σ M and its Ω-RIP constant is δ l . The analysis operator Ω ∈ ℜ p × d is given and fixed. Define C l to be the ratio of the largest to the smallest (non-zero) eigenvalues of the sub-matrix composed of l rows from Ω. Assume that Ŝ l = cosupp(Ω x, l). According to Definition 3.1, the cosupport of x is a near-optimal projection. Our task is to recover x from y. The recovery result is denoted by x̂.
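For reference, the Ω-RIP constant δ l used here is the one of [12]: it is the smallest constant such that, for every l-cosparse vector v,

```latex
\[
(1-\delta_l)\,\|v\|_2^2 \;\le\; \|\mathbf{M}v\|_2^2 \;\le\; (1+\delta_l)\,\|v\|_2^2 ,
\qquad \text{for all } v \text{ with } \|\boldsymbol{\Omega} v\|_0 \le p - l .
\]
```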
Now, we give the guarantees on exact recovery and stability of CSMP for recovering the cosparse signals.
Moreover, when ρ₁²ρ₂² < 1, the iteration converges. The constant γ gives a tradeoff between satisfying the theorem conditions and the noise level, and the conditions for the noiseless case are achieved when γ tends to zero.
Theorem 3.4 (Exact recovery for cosparse signals) Consider the problem p when ‖e‖2 = 0. Let l s = d − s⌈(d − l)/s⌉ and l q = l s + q⌊(l − l s )/q⌋. If the measurement matrix M satisfies the Ω-RIP with parameter \( \delta_{4l_q-3p} \le \delta(C_{2l_q-p}, \sigma_M^2, \gamma) \), where \( C_{2l_q-p} \) and γ are as in Theorem 3.3 and \( \delta(C_{2l_q-p}, \sigma_M^2, \gamma) \) is a constant guaranteed to be greater than zero whenever (10) is satisfied, then the CSMP algorithm guarantees an exact recovery of x from y in a finite number of iterations.
The proof is mainly based on the following lemma (Lemma 3.5):
- The (⌈(d − l)/s⌉ + ⌊(l − l s )/q⌋)-th stage of the algorithm is equivalent to the ASP algorithm with estimated cosparsity l q , except that they have different initial cosupports and initial measurement vectors.
- CSMP recovers the target signal exactly after completing the (⌈(d − l)/s⌉ + ⌊(l − l s )/q⌋)-th stage.
In the Appendix, Lemma 3.5 is proved in detail.
Lemma 3.5 states that CSMP has a signal reconstruction process equivalent to that of ASP and can complete the exact recovery of the cosparse signals in a finite number of stages. To complete the proof, it is sufficient to show that the CSMP algorithm never gets stuck at any iteration of any stage, i.e., it takes a finite number of iterations over at most ⌈(d − l)/s⌉ + ⌊(l − l s )/q⌋ stages. At each stage, the cosupport (whose size is assumed to be l est) adds and discards some rows of Ω, and the number of rows is fixed and finite. Hence, there are a finite number of possible cosupports, at most \( \binom{d}{l_{\mathrm{est}}} \), where d is the length of the signal. Thus, if CSMP took an infinite number of iterations in a stage, the construction of the cosupport would have to repeat after at most \( \binom{d}{l_{\mathrm{est}}} \) iterations. Hence, Theorem 3.4 follows.
Similarly, the proof of Theorem 3.6 is based on Lemma 3.5 and the corresponding theorems of the ASP algorithm in [12], and we omit the detailed proof here.
The above theorems are sufficient conditions of CSMP for exact recovery and stability. They are slightly more restrictive than the corresponding results of ASP algorithms because the true cosparsity level l is always larger than or equal to the estimated one l q . This may be regarded as an additional cost for not having precise information of cosparsity. On the other hand, the proofs also show that these sufficient conditions may not be optimal or tight enough because they only consider the final stage and ignore the influence of previous stages on the performance of the algorithm.
4 Experiments
In this section, we evaluate the performance of the proposed algorithm, as compared with several baseline algorithms. To this end, we repeat some of the experiments performed in [12] for the noiseless case (e = 0) and noisy case.
4.1 Phase transition diagrams for synthetic signals in the noiseless case
We show the performance of the proposed algorithm as compared with six baseline methods, namely, AIHT, AHTP, ASP, ACoSaMP, l1-relaxation, and GAP, using the same experiments as performed in [12] for the noiseless case. We begin with synthetic signals and test the performance of CSMP with s = 1, s = 5, and s = 10, respectively. The results of the proposed algorithm are compared with those of AIHT and AHTP with an adaptively changing step size, ASP and ACoSaMP with a = 1 and a = (2l − p)/l, l1-relaxation, and GAP. We use a random matrix M, where each entry is drawn independently from a Gaussian distribution, and a random tight frame Ω ∈ ℜ p × d with d = 120 and p = 144.
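The paper does not spell out how the random tight frame is constructed; one common choice (our assumption) is to orthonormalize the columns of a Gaussian matrix, giving Ω with ΩᵀΩ = I. A minimal setup sketch along these lines:

```python
import numpy as np

rng = np.random.default_rng(0)
d, p = 120, 144

# Random tight frame: QR on a Gaussian p x d matrix yields orthonormal columns,
# so Omega.T @ Omega = I_d (a standard construction; the paper's exact recipe is not stated).
Omega, _ = np.linalg.qr(rng.standard_normal((p, d)))

# Gaussian measurement matrix for a sampling rate delta = m / d.
delta = 0.6
m = int(round(delta * d))
M = rng.standard_normal((m, d)) / np.sqrt(m)   # the normalizing factor is our choice
```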
Fig. 2 The phase transition diagrams for a CSMP with s = 1, b CSMP with s = 5, c CSMP with s = 10, d AIHT with an adaptively changing step size, e AHTP with an adaptively changing step size, f ASP with a = 1, g ASP with a = (2l − p)/l, h ACoSaMP with a = 1, i ACoSaMP with a = (2l − p)/l, j l1-relaxation, and k GAP
In Fig. 2, experiments with s = 1, s = 5, and s = 10 are performed for CSMP. From Fig. 2, it can be seen that CSMP gives better results than AIHT and AHTP with an adaptively changing step size, and than ASP and ACoSaMP with a = (2l − p)/l, when l est is far from d. In addition, the proposed algorithm with s = 5 and s = 10 provides performance comparable to ASP and ACoSaMP with a = 1 and to l1-relaxation, even though the cosparsity is unknown. Although the exact recovery rates of GAP are higher than those of CSMP for some pairs of δ and ρ, the number of white cells is 59 in Fig. 2k, which is less than the 67 in Fig. 2b and the 63 in Fig. 2c.
4.2 Reconstruction of high-dimensional images in the noiseless and noisy cases
Fig. 3 a Shepp-Logan Phantom image. b 12 sampled radial lines. c CSMP with s = 4000 and q = 50 using 12 radial lines
Fig. 4 a Shepp-Logan Phantom image. b 22 sampled radial lines. c Noisy image with an SNR of 20 dB. d Location of non-zero elements in the difference map. e Image recovered by CSMP with s = 4000 and q = 50 using only 22 radial lines. f Image recovered by GAP using only 22 radial lines
5 Conclusions
We have presented CSMP, a novel greedy pursuit algorithm for the cosparse analysis model. With the proposed algorithm, the cosparsity of the target signal is not required a priori, which addresses a common limitation of existing greedy pursuit algorithms. The underlying intuition of CSMP is to obtain the cosparsity estimate and the signal approximation in the forward process and to refine them in the backward process. Borrowing ideas from ASP, a theoretical study of the proposed algorithm has been carried out to give guarantees for stable recovery under the assumption of the Ω-RIP and the existence of an optimal or a near-optimal projection. Experiments have confirmed that the proposed algorithm gives competitive results for signal recovery compared with l1-relaxation and ACoSaMP/ASP in the noiseless case and better results than GAP in the noisy case.
Endnote
1. To form a Gaussian random signal x of length d = 120 with a cosparsity of l = 90, we choose l rows from Ω randomly to form Ω Λ ∈ ℜ l × d , where Λ is composed of the indices of the chosen rows. We apply the singular value decomposition (SVD) Ω Λ = U ⋅ D ⋅ V − 1, where D ∈ ℜ l × d and D(:, l + 1 : d) = 0. Define Nullspace = V(:, l + 1 : d), and let r ∈ ℜ d − l be a vector of i.i.d. Gaussian random variables of identical variance. Further define x = Nullspace ⋅ r. We then have Ω Λ ⋅ x = U ⋅ D ⋅ V − 1 ⋅ V(:, l + 1 : d) ⋅ r, and since V − 1 ⋅ V(:, l + 1 : d) selects the last d − l columns of the identity matrix, Ω Λ ⋅ x = U ⋅ D(:, l + 1 : d) ⋅ r = U ⋅ 0 ⋅ r = 0, so that ‖Ω ⋅ x‖0 ≤ p − l. Thus, we have generated a signal x of length d = 120 with a cosparsity of l = 90.
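A direct NumPy transcription of this construction might look as follows (variable names are ours; the random tight frame used as Ω here is an illustrative assumption, whereas the experiment in Section 2 uses the finite difference operator described there):

```python
import numpy as np

rng = np.random.default_rng(0)
d, p, l = 120, 144, 90

# Any fixed analysis operator can be used; a random tight frame is taken here for illustration.
Omega, _ = np.linalg.qr(rng.standard_normal((p, d)))

rows = rng.choice(p, size=l, replace=False)      # Lambda: l randomly chosen rows
Omega_L = Omega[rows, :]

# SVD of Omega_Lambda; the last d - l right singular vectors span its null space.
_, _, Vt = np.linalg.svd(Omega_L, full_matrices=True)
nullspace = Vt[l:, :].T                          # d x (d - l), i.e. V(:, l+1:d)

r = rng.standard_normal(d - l)                   # i.i.d. Gaussian coefficients
x = nullspace @ r                                # Omega_Lambda @ x = 0 by construction

assert np.allclose(Omega_L @ x, 0.0, atol=1e-10)
print("zeros in Omega @ x:", int(np.sum(np.abs(Omega @ x) < 1e-10)))  # at least l = 90
```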
Declarations
Acknowledgements
This work was supported by the National Natural Science Foundation of China under Grant 51379049 and the Engineering and Physical Sciences Research Council (EPSRC) of the UK under Grant EP/K014307/1. The authors wish to thank the Associate Editor and the anonymous reviewers for their contributions to improving this paper.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
References
1. M Elad, P Milanfar, R Rubinstein, Analysis versus synthesis in signal priors. Inverse Probl. 23(3), 947–968 (2007)
2. M Bruckstein, DL Donoho, M Elad, From sparse solutions of systems of equations to sparse modeling of signals and images. SIAM Rev. 51(1), 34–81 (2009)
3. SS Chen, DL Donoho, MA Saunders, Atomic decomposition by basis pursuit. SIAM J. Sci. Comput. 20(1), 33–61 (1998)
4. Y Pati, R Rezaiifar, P Krishnaprasad, Orthogonal matching pursuit: recursive function approximation with applications to wavelet decomposition, in Proc. of the 27th Annual Asilomar Conf. on Signals, Systems and Computers, 1993, pp. 40–44
5. D Needell, J Tropp, CoSaMP: iterative signal recovery from incomplete and inaccurate samples. Appl. Comput. Harmon. Anal. 26(3), 301–320 (2009)
6. TT Do, L Gan, N Nguyen, TD Tran, Sparsity adaptive matching pursuit algorithm for practical compressed sensing, in Proc. of the 42nd Asilomar Conf. on Signals, Systems and Computers, 2008, pp. 581–587
7. T Blumensath, M Davies, Iterative hard thresholding for compressed sensing. Appl. Comput. Harmon. Anal. 27(3), 265–274 (2009)
8. S Foucart, Hard thresholding pursuit: an algorithm for compressive sensing. SIAM J. Numer. Anal. 49(6), 2543–2563 (2011)
9. H Wu, S Wang, Adaptive sparsity matching pursuit algorithm for sparse reconstruction. IEEE Signal Process. Lett. 19(8), 471–474 (2012)
10. D Wu, K Wang, Y Zhao, W Wang, L Chen, Stagewise regularized orthogonal matching pursuit algorithm. Opt. Precis. Eng. 22(5), 1395–1402 (2014)
11. S Nam, M Davies, M Elad, R Gribonval, The cosparse analysis model and algorithms. Appl. Comput. Harmon. Anal. 34(1), 30–56 (2013)
12. R Giryes, S Nam, M Elad, R Gribonval, ME Davies, Greedy-like algorithms for the cosparse analysis model. Linear Algebra Appl. 441, 22–60 (2014)
13. EJ Candes, YC Eldar, D Needell, P Randall, Compressed sensing with coherent and redundant dictionaries. Appl. Comput. Harmon. Anal. 31(1), 59–73 (2011)
14. S Vaiter, G Peyré, C Dossal, J Fadili, Robust sparse analysis regularization. IEEE Trans. Inf. Theory 59(4), 2001–2016 (2013)
15. S Nam, M Davies, M Elad, R Gribonval, Cosparse analysis modeling, in Workshop on Signal Processing with Adaptive Sparse Structured Representations, 2011
16. S Nam, M Davies, M Elad, R Gribonval, Recovery of cosparse signals with greedy analysis pursuit in the presence of noise, in Proc. 4th IEEE Int. Workshop on CAMSAP, 2011, pp. 361–364
17. R Giryes, S Nam, R Gribonval, ME Davies, Iterative cosparse projection algorithms for the recovery of cosparse vectors, in Proc. 19th European Signal Processing Conference (EUSIPCO), 2011, pp. 1460–1464
18. R Giryes, Greedy algorithm for the analysis transform domain. arXiv preprint arXiv:1309.7298 (2013)
19. R Giryes, M Elad, CoSaMP and SP for the cosparse analysis model, in Proc. 20th European Signal Processing Conference (EUSIPCO), 2012, pp. 964–968
20. J Turek, I Yavneh, M Elad, On MAP and MMSE estimators for the co-sparse analysis model. Digit. Signal Process. 28, 57–74 (2014)