Non-convex block-sparse compressed sensing with coherent tight frames
EURASIP Journal on Advances in Signal Processing volume 2020, Article number: 2 (2020)
Abstract
In this paper, we present a non-convex ℓ2/ℓq(0<q<1)-analysis method to recover a general signal that can be expressed as a block-sparse coefficient vector in a coherent tight frame, and a sufficient condition is simultaneously established to guarantee the validity of the proposed method. In addition, we also derive an efficient iterative re-weighted least square (IRLS) algorithm to solve the induced non-convex optimization problem. The proposed IRLS algorithm is tested and compared with the ℓ2/ℓ1-analysis and the ℓq(0<q≤1)-analysis methods in some experiments. All the comparisons demonstrate the superior performance of the ℓ2/ℓq-analysis method with 0<q<1.
1 Introduction
Data compression and data recovery (possibly from compressed observations) are two crucial problems in many real-world applications, including information processing [1], machine learning [2], statistical inference [3], swarm intelligence [4, 5], and compressed sensing (CS) [6, 7]. Among these applications, CS is particularly attractive since it provides insights into signal processing with significantly fewer samples than classical signal processing approaches based on the Nyquist-Shannon sampling theorem.
CS was pioneered by Donoho [6] and Candès et al. [7] around 2006, and it has since attracted considerable attention from researchers in a growing number of fields, including signal processing, machine learning, and mathematical statistics. A crucial concern in CS is to recover an unknown signal \(\boldsymbol {f}\in \mathbb {R}^{\widetilde {n}}\) from a small set of linear measurements
\(\boldsymbol {y}=\Phi \boldsymbol {f},\)
where \(\boldsymbol {y}\in \mathbb {R}^{\widetilde {m}}\) is an observed signal vector and \(\Phi \in \mathbb {R}^{\widetilde {m}\times \widetilde {n}}\) is a given measurement matrix with \(\widetilde {m}\ll \widetilde {n}\).
Conventional CS heavily relies on techniques that express signals as a linear combination of a few base vectors from an orthogonal basis. However, in a large number of practical applications, the signals are sparse not in terms of an orthogonal basis, but in terms of an overcomplete and tight frame [8, 9]. In such a scenario, one natural way to express f is to write f=Ψx, where \(\Psi \in \mathbb {R}^{\widetilde {n}\times n}\) with \(\widetilde {n}\leq n\) is a matrix whose n columns form a tight frame, and \(\boldsymbol {x}\in \mathbb {R}^{n}\) is sparse (or nearly sparse). In order to recover f, a popular approach is the ℓ1-synthesis method [10, 11], which first solves the following problem:
\(\boldsymbol {x}^{\sharp }=\arg \min _{\boldsymbol {x}\in \mathbb {R}^{n}}\|\boldsymbol {x}\|_{1}\quad \text {subject to}\quad \|\boldsymbol {y}-\Phi \Psi \boldsymbol {x}\|_{2}\leq \varepsilon \)
to get the transform-based sparse coefficient vector x♯, and then reconstructs the original signal f♯ by applying the synthesis operator Ψ to x♯, i.e., f♯=Ψx♯. Since the entries in ΦΨ are correlated when Ψ is highly coherent, ΦΨ may no longer satisfy the required assumptions, such as the restricted isometry property (RIP) and the mutual incoherence property (MIP), which have been widely used in conventional CS. Therefore, it is not easy to study the theoretical performance of the ℓ1-synthesis method.
Fortunately, there exists an alternative to the ℓ1-synthesis method called the ℓ1-analysis method [10, 12], which directly finds an estimator f♯ by solving the following ℓ1-analysis problem:
\(\boldsymbol {f}^{\sharp }=\arg \min _{\boldsymbol {f}\in \mathbb {R}^{\widetilde {n}}}\|\Psi ^{T}\boldsymbol {f}\|_{1}\quad \text {subject to}\quad \|\boldsymbol {y}-\Phi \boldsymbol {f}\|_{2}\leq \varepsilon.\)
The ℓ1-analysis method has its roots in the analysis-style sparse representation x=ΨTf, and is different from the above-mentioned synthesis method, which is based on the synthesis-style sparse representation, i.e., f=Ψx. The existing literature has shown that there is a remarkable difference between the two methods despite their apparent similarity. For example, these two methods have totally different recovery conditions to guarantee robust recovery of any signal, and their ways of utilizing the sparsity prior are also totally different. Please see [10, 13] and references therein for more details. To investigate the theoretical performance of the ℓ1-analysis method, Candès et al. [12] introduced the definition of Ψ-RIP: a measurement matrix Φ is said to satisfy the RIP adapted to Ψ (Ψ-RIP) with constant δk if
\((1-\delta _{k})\|\Psi \boldsymbol {x}\|_{2}^{2}\leq \|\Phi \Psi \boldsymbol {x}\|_{2}^{2}\leq (1+\delta _{k})\|\Psi \boldsymbol {x}\|_{2}^{2}\)
holds for every vector \(\boldsymbol {x}\in \mathbb {R}^{n}\) that is k-sparse, and established a sufficient condition related to the Ψ-RIP for recovering general signals. In addition, they also demonstrated the efficiency of the ℓ1-analysis strategy through a large number of experiments on real signals.
Different from the general case in CS where the transform-based coefficient vector x is sparse, some signals in the real world may exhibit additional sparse structure in terms of a fixed transform basis Ψ. Take for example the block-sparse structure, i.e., the non-zero elements of x are assembled in a few fixed blocks, which is also our main concern in this paper. Such structured signals naturally arise in various applications; prominent examples include DNA microarrays [14], color imaging [15], and motion segmentation [16]. Without loss of generality, we assume that there are m blocks of size d=n/m in x. Then, one can write any block-sparse vector \(\boldsymbol {x}\in \mathbb {R}^{n}\) as
\(\boldsymbol {x}=(\underbrace {x_{1},\cdots,x_{d}}_{\boldsymbol {x}[1]},\underbrace {x_{d+1},\cdots,x_{2d}}_{\boldsymbol {x}[2]},\cdots,\underbrace {x_{n-d+1},\cdots,x_{n}}_{\boldsymbol {x}[m]})^{T},\)
where x[i] denotes the ith block of x. If x has at most k non-zero blocks, i.e., ∥x∥2,0≤k, we refer to such a vector x as a block k-sparse signal. Accordingly, we can also write \(\Psi \in \mathbb {R}^{\widetilde {n}\times n}\) as
\(\Psi =(\underbrace {\Psi _{1},\cdots,\Psi _{d}}_{\Psi [1]},\underbrace {\Psi _{d+1},\cdots,\Psi _{2d}}_{\Psi [2]},\cdots,\underbrace {\Psi _{n-d+1},\cdots,\Psi _{n}}_{\Psi [m]}),\)
where Ψi with i=1,2,⋯,n denotes the ith column of Ψ, and Ψ[j] with j=1,2,⋯,m denotes the jth sub-block matrix of Ψ. Most current papers focus on the conventional sparse or nearly sparse case in terms of Ψ. As one of the few exceptions, Wang et al. [17] proposed an ℓ2/ℓ1-analysis method to investigate the recovery of block-sparse signals in terms of Ψ. Basing their theoretical analysis on the block Ψ-RIP, a block version of the Ψ-RIP that we will define in the next section, Wang et al. [17] also developed several sufficient conditions to guarantee robust recovery of general signals. For completeness, we present the ℓ2/ℓ1-analysis problem as follows:
\(\boldsymbol {f}^{\sharp }=\arg \min _{\boldsymbol {f}\in \mathbb {R}^{\widetilde {n}}}\|\Psi ^{T}\boldsymbol {f}\|_{2,1}\quad \text {subject to}\quad \|\boldsymbol {y}-\Phi \boldsymbol {f}\|_{2}\leq \varepsilon.\)
Obviously, when d=1, the ℓ2/ℓ1-analysis method will degenerate to the ℓ1-analysis method mentioned above.
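To make the block notation concrete, the following short NumPy sketch (our illustration, not code from the paper) computes the mixed ∥·∥2,q norm over consecutive blocks of size d; the case q=0 counts non-zero blocks, so ∥x∥2,0≤k is exactly block k-sparsity.

```python
import numpy as np

def mixed_norm(x, d, q):
    """Mixed l2/lq norm ||x||_{2,q} = (sum_i ||x[i]||_2^q)^(1/q) over
    consecutive blocks of size d; q=0 counts the non-zero blocks."""
    blocks = x.reshape(-1, d)              # m blocks of size d
    norms = np.linalg.norm(blocks, axis=1) # per-block l2 norms
    if q == 0:
        return np.count_nonzero(norms)     # ||x||_{2,0}: number of non-zero blocks
    return np.sum(norms**q) ** (1.0 / q)

# a block 2-sparse vector with m = 4 blocks of size d = 2
x = np.array([3.0, 4.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0])
print(mixed_norm(x, d=2, q=0))   # → 2
print(mixed_norm(x, d=2, q=1))   # → 6.0  (block norms 5 and 1)
```

Note that for 0<q<1 this quantity is only a quasi-norm, matching the footnote in the appendix.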
Recently, the work of Chartrand et al. [18–20] has shown that the non-convex ℓq(0<q<1) method allows the exact recovery of sparse signals from a smaller set of linear measurements than the ℓ1 method, providing a new paradigm for studying CS problems. In this paper, in line with previous works on the non-convex ℓq(0<q<1) strategy, we first propose an ℓ2/ℓq-analysis method with 0<q≤1 to recover general signals that can be expressed as block-sparse signals in terms of Ψ. Our method is different from conventional CS methods, which only concern cases where the signals per se are sparse or block-sparse [18, 21–23], and also different from previous analysis methods [24–26], which only focus on the recovery of general signals that are expressed as non-block structured signals in terms of Ψ. Specifically, the proposed method can be described as:
\(\min _{\boldsymbol {f}\in \mathbb {R}^{\widetilde {n}}}\|\Psi ^{T}\boldsymbol {f}\|_{2,q}^{q}\quad \text {subject to}\quad \boldsymbol {y}=\Phi \boldsymbol {f}.\)
In many application problems, the observed signal y may be polluted by a bounded noise e, i.e., y=Φf+e. So for the general situation, we have the model
\(\min _{\boldsymbol {f}\in \mathbb {R}^{\widetilde {n}}}\|\Psi ^{T}\boldsymbol {f}\|_{2,q}^{q}\quad \text {subject to}\quad \|\boldsymbol {y}-\Phi \boldsymbol {f}\|_{2}\leq \varepsilon, \qquad (1)\)
where ε is the noise level. Secondly, for (1), we also establish a sufficient condition for robust recovery of general signals. The obtained results associate the two constants, the block Ψ-RIC and the block Ψ-ROC, with different q∈(0,1], and provide a series of selectable conditions for robust recovery via the ℓ2/ℓq-analysis method. Finally, inspired by the ideas of [21, 27], we derive an iterative re-weighted least square (IRLS) algorithm to solve our ℓ2/ℓq-analysis problem. Some experiments are also conducted later that further demonstrate the efficiency of our ℓ2/ℓq-analysis method with 0<q≤1.
The rest of the paper is organized as follows. In Section 2, we first state three key definitions and then present our main theoretical results. In Section 3, we propose an IRLS algorithm to solve the ℓ2/ℓq-analysis problem, and conduct some experiments to support the validity of our ℓ2/ℓq-analysis method. Finally, the conclusion is addressed in Section 4.
2 Robust recovery for ℓ2/ℓq-analysis problem
In this section, we mainly establish a sufficient condition to robustly recover general signals that can be expressed as block-sparse vectors in terms of Ψ. Before presenting our main results, we first introduce several definitions that will be used later. We start with the introduction of two important definitions, which can also be found in many references such as [17].
Definition 1
Let \(\Psi \in \mathbb {R}^{\widetilde {n}\times n}\) with \(\widetilde {n}\leq n\) be a matrix whose n columns form a tight frame. A measurement matrix \(\Phi \in \mathbb {R}^{\widetilde {m}\times \widetilde {n}}\) is said to satisfy the block Ψ-RIP condition with constant δk|d (block Ψ-RIC) if
\((1-\delta _{k|d})\|\Psi \boldsymbol {x}\|_{2}^{2}\leq \|\Phi \Psi \boldsymbol {x}\|_{2}^{2}\leq (1+\delta _{k|d})\|\Psi \boldsymbol {x}\|_{2}^{2}\)
holds for every vector \(\boldsymbol {x}\in \mathbb {R}^{n}\) that is block k-sparse.
Definition 2
The block Ψ-restricted orthogonality constant (block Ψ-ROC), denoted by \(\theta _{(k_{1},k_{2})|d}\), is the smallest positive number that satisfies
for every x1 and x2 such that x1 and x2 are block k1-sparse and block k2-sparse, respectively.
It is easy to see that if one sets Ψ to be the identity matrix of size \(\widetilde {n}\times \widetilde {n}\), then the above definitions reduce to the well-known block-RIC and block-ROC definitions. Furthermore, if one additionally sets the block size d=1, then we recover the classical RIC and ROC definitions. The block-RIC and block-ROC, together with the classical RIC and ROC, are thus just special cases of Definitions 1 and 2. In addition, we also need the following definition, which plays a key role in our theorem.
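As a quick numerical illustration (our own sketch, assuming the D-RIP-style inequality form \((1-\delta)\|\Psi \boldsymbol{x}\|_{2}^{2}\leq \|\Phi \Psi \boldsymbol{x}\|_{2}^{2}\leq (1+\delta)\|\Psi \boldsymbol{x}\|_{2}^{2}\) for Definition 1), one can empirically lower-bound the block Ψ-RIC of a Gaussian Φ by sampling random block k-sparse vectors; the matrix sizes here are small and arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Empirically lower-bound the block Psi-RIC delta_{k|d} by sampling random
# block k-sparse vectors x and measuring how far ||Phi Psi x||_2^2 strays
# from the reference value ||Psi x||_2^2.
m_t, n_t, n, d, k = 64, 128, 256, 4, 3
Phi = rng.standard_normal((m_t, n_t)) / np.sqrt(m_t)   # Gaussian measurements
# tight frame: the first n_t rows of a random orthogonal n x n matrix
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
Psi = Q[:n_t, :]                                       # Psi @ Psi.T = I

m_blocks = n // d
delta_lb = 0.0
for _ in range(500):
    x = np.zeros(n)
    S = rng.choice(m_blocks, size=k, replace=False)    # random block support
    for i in S:
        x[i*d:(i+1)*d] = rng.standard_normal(d)
    ref = np.linalg.norm(Psi @ x) ** 2
    val = np.linalg.norm(Phi @ (Psi @ x)) ** 2
    delta_lb = max(delta_lb, abs(val - ref) / ref)
print(f"empirical lower bound on delta: {delta_lb:.3f}")
```

Sampling only lower-bounds the constant; computing the exact block Ψ-RIC requires a maximization over all block supports and is combinatorial.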
Definition 3
Given \(\boldsymbol {x}\in \mathbb {R}^{n}\), the best k-block approximation of x is the vector obtained by retaining the k blocks of x with the largest ℓ2 norms and setting all remaining blocks to zero.
For convenience, in the remainder of this paper, we use δk and \(\theta _{k_{1},k_{2}}\) instead of δk|d and \(\theta _{(k_{1},k_{2})|d}\) to represent the block Ψ-RIC and block Ψ-ROC, respectively, whenever no confusion arises.
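For illustration, the best k-block approximation of Definition 3 can be computed in a few lines of NumPy (a sketch assuming consecutive blocks of equal size d):

```python
import numpy as np

def best_k_block_approx(x, d, k):
    """Keep the k blocks of x with the largest l2 norms and zero out the
    rest (the best k-block approximation of Definition 3)."""
    blocks = x.reshape(-1, d).copy()
    norms = np.linalg.norm(blocks, axis=1)
    keep = np.argsort(norms)[-k:]          # indices of the k largest blocks
    mask = np.zeros(len(norms), dtype=bool)
    mask[keep] = True
    blocks[~mask] = 0.0                    # discard all other blocks
    return blocks.reshape(-1)

x = np.array([1.0, 1.0, 5.0, 0.0, 0.1, 0.2, 3.0, 3.0])   # m=4 blocks, d=2
print(best_k_block_approx(x, d=2, k=2))
# → [0. 0. 5. 0. 0. 0. 3. 3.]  (blocks [5,0] and [3,3] survive)
```

If x is block k-sparse, its best k-block approximation is x itself, so the approximation error in Theorem 1 vanishes.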
Next, we present our main results, which are included in the following theorem.
Theorem 1
Let k1 and k2 be two positive integers such that 0≤8(k1−k)≤k2, and denote \(t=2^{1/q-1}(k_{1}/k_{2})^{1/q-1/2}+\sqrt {k_{2}/k_{1}}[(q/2)^{q/(2-q)}-(q/2)^{2/(2-q)}]-(1+2^{1/q-1})\sqrt {k_{2}/k_{1}}\left [(k_{1}-k)/k_{2}\right ]^{1/q}\) with 0<q≤1. If the matrix Φ satisfies
where δk and θk,k are defined in Definitions 1 and 2, then the solution f♯ to problem (1) obeys
where
In what follows, we present two remarks on the established results; the proof of the theorem is given in the Appendix.
Remark 1
Theorem 1 presents a sufficient condition to robustly recover general signals via the ℓ2/ℓq-analysis method with 0<q≤1. The obtained condition associates the block Ψ-RIC and block Ψ-ROC with different q∈(0,1], and provides a series of selectable conditions for robust recovery of general signals that can be expressed as block-sparse vectors. Since condition (2) is related to t, which is a complex combination of k1, k2, k, and q, it is difficult to analyze the obtained condition intuitively. So by choosing some representative values, we deduce a series of new conditions, as detailed in Table 1.
Remark 2
Inequality (3) indicates that the reconstruction error ∥f−f♯∥2 can be bounded by the best k-block approximation error and the noise level ε. As a special case, when ε=0 and the original signal f can be expressed as a block k-sparse vector with a fixed Ψ, i.e., ∥ΨTf∥2,0≤k, if the matrix Φ satisfies (2), then solving problem (1) leads to exact recovery of the original signal f.
3 Numerical experiments and results
In this section, we conduct some numerical experiments to evaluate the performance of our ℓ2/ℓq(0<q<1)-analysis method. An IRLS algorithm is first proposed to solve the induced ℓ2/ℓq(0<q<1)-analysis problem. We then compare our ℓ2/ℓq(0<q<1)-analysis method with other analysis-style methods, including ℓ2/ℓ1-analysis [17] and ℓq(0<q≤1)-analysis [26].
3.1 An IRLS algorithm for ℓ2/ℓq-analysis
In order to solve the ℓ2/ℓq-analysis problem (1) with 0<q≤1, we derive an efficient analysis-style IRLS algorithm. The proposed algorithm can be seen as a natural extension of the traditional IRLS algorithm [21, 27] for sparse problems. We first rewrite problem (1) as
\(\min _{\boldsymbol {f}\in \mathbb {R}^{\widetilde {n}}}\|\boldsymbol {y}-\Phi \boldsymbol {f}\|_{2}^{2}+\lambda \|\Psi ^{T}\boldsymbol {f}\|_{2,q}^{\epsilon }, \qquad (4)\)
where λ is a regularization parameter and \(\|\Psi ^{T}\boldsymbol {f}\|_{2,q}^{\epsilon }=\sum \limits _{i=1}^{m}\left (\epsilon ^{2}+\|\Psi [i]^{T}\boldsymbol {f}\|_{2}^{2}\right)^{\frac {q}{2}}\) is an ε-smoothed version of \(\|\Psi ^{T}\boldsymbol {f}\|_{2,q}^{q}\).
Applying the first-order optimality condition to (4), we have
\(\Phi ^{T}(\Phi \widetilde {\boldsymbol {f}}-\boldsymbol {y})+\frac {\lambda q}{2}\sum \limits _{i=1}^{m}\left (\epsilon ^{2}+\|\Psi [i]^{T}\widetilde {\boldsymbol {f}}\|_{2}^{2}\right)^{\frac {q}{2}-1}\Psi [i]\Psi [i]^{T}\widetilde {\boldsymbol {f}}=\boldsymbol {0}, \qquad (5)\)
where \(\widetilde {\boldsymbol {f}}\) denotes a critical point of (4). Due to the non-linearity of this equation, there is no straightforward way to obtain an accurate solution of (5); however, one can approximate such a solution well with numerical techniques. Following the ideas in [21, 27], we present a similar iterative procedure:
\(\boldsymbol {f}^{(t+1)}=\left (\Phi ^{T}\Phi +\frac {\lambda q}{2}\Psi W^{(t)}\Psi ^{T}\right)^{-1}\Phi ^{T}\boldsymbol {y},\quad W^{(t)}=\operatorname {diag}\left (w^{(t)}_{1}I_{d},\cdots,w^{(t)}_{m}I_{d}\right),\quad w^{(t)}_{i}=\left (\epsilon ^{2}+\|\Psi [i]^{T}\boldsymbol {f}^{(t)}\|_{2}^{2}\right)^{\frac {q}{2}-1}, \qquad (6)\)
which is implemented by Algorithm 1.
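The iteration can be sketched in NumPy as follows. This is our illustrative reading of the procedure, not a verbatim transcription of Algorithm 1: the initialization, the stopping rule, and the ε-halving schedule are assumptions.

```python
import numpy as np

def irls_l2lq_analysis(y, Phi, Psi, d, q=0.5, lam=1e-3, n_iter=100, eps0=1.0):
    """IRLS sketch for min_f ||y - Phi f||_2^2 + lam * ||Psi^T f||_{2,q}^eps.
    Each step freezes the block weights and solves the resulting linear
    system; the eps-halving schedule and stopping rule are illustrative."""
    m = Psi.shape[1] // d                  # number of coefficient blocks
    PtP, Pty = Phi.T @ Phi, Phi.T @ y
    f = Phi.T @ y                          # crude initial guess
    eps = eps0
    for _ in range(n_iter):
        c = (Psi.T @ f).reshape(m, d)      # analysis coefficients, one row per block
        w = (eps**2 + np.sum(c**2, axis=1)) ** (q / 2 - 1)   # per-block weights
        W = np.repeat(w, d)                # expand block weights to coefficient level
        # frozen-weight condition: (Phi^T Phi + (lam q / 2) Psi W Psi^T) f = Phi^T y
        f_new = np.linalg.solve(PtP + 0.5 * lam * q * (Psi * W) @ Psi.T, Pty)
        if np.linalg.norm(f_new - f) <= 1e-8 * max(np.linalg.norm(f), 1.0):
            f = f_new
            break
        f, eps = f_new, max(eps / 2.0, 1e-8)
    return f
```

With a tight frame Ψ (full row rank) and strictly positive weights, ΨWΨᵀ is positive definite, so the linear system in each step is always solvable.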
3.2 Experimental settings
Throughout the experiments, the measurement matrix Φ is generated as an \(\widetilde {m}\times \widetilde {n}\) Gaussian matrix with \(\widetilde {m}=64\) and \(\widetilde {n}=256\), and the overcomplete and tight frame Ψ is generated by taking the first \(\widetilde {n}\) rows of an n×n Hadamard matrix with n=512. The original signal f is synthesized as f=Ψx, where x is a block k-sparse signal with block size d=4. The entries of the noise vector e are drawn from a Gaussian distribution with mean 0 and standard deviation 0.05. We consider four different values q=0.1,0.5,0.7,1 for both the ℓ2/ℓq-analysis and ℓq-analysis methods. The relative error between the reconstructed signal f♯ and the original signal f is calculated as ∥f−f♯∥2/∥f∥2.
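This setup can be reproduced with the sketch below. The Sylvester construction of the Hadamard matrix and the row normalization 1/√n, which makes the rows of Ψ orthonormal so that ΨΨᵀ = I, are our choices; the paper does not state its normalization.

```python
import numpy as np

rng = np.random.default_rng(42)
m_t, n_t, n, d, k = 64, 256, 512, 4, 5       # m~, n~, n, block size, block sparsity

# Gaussian measurement matrix Phi (the 1/sqrt(m~) scaling is our choice)
Phi = rng.standard_normal((m_t, n_t)) / np.sqrt(m_t)

# tight frame Psi: first n_t rows of an n x n Hadamard matrix (Sylvester construction)
H = np.array([[1.0]])
while H.shape[0] < n:
    H = np.kron(H, np.array([[1.0, 1.0], [1.0, -1.0]]))
Psi = H[:n_t, :] / np.sqrt(n)                # rows orthonormal: Psi @ Psi.T = I

# block k-sparse coefficient vector x and synthesized signal f = Psi x
x = np.zeros(n)
support = rng.choice(n // d, size=k, replace=False)
for i in support:
    x[i*d:(i+1)*d] = rng.standard_normal(d)
f = Psi @ x

# noisy measurements and the relative-error criterion
y = Phi @ f + rng.normal(0.0, 0.05, size=m_t)

def relative_error(f_hat, f_ref):
    return np.linalg.norm(f_ref - f_hat) / np.linalg.norm(f_ref)
```

Any of the compared solvers can then be run on (y, Φ, Ψ) and scored with `relative_error` against f.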
3.3 Experimental results
In order to find the value of λ that minimizes the relative error, we conduct two sets of trials. Figure 1a depicts the relative error versus λ for recovering the signals f, which can be expressed as block 5-sparse signals in terms of Ψ. It is easy to see that choosing λ less than 1×10−2 is appropriate. Similar results can also be found in Fig. 1b. Without loss of generality, we take λ=1×10−3 as the best regularization parameter value.
Next, we compare our ℓ2/ℓq(0<q<1)-analysis method with the ℓ2/ℓ1-analysis and ℓq(0<q≤1)-analysis methods. The results are depicted in Fig. 2.
It is easy to see that the ℓ2/ℓq(0<q<1)-analysis method is far superior to the ℓ2/ℓ1-analysis method. Take for example the ℓ2/ℓ0.5-analysis method when k=8: its relative error is 0.016, about 16 times smaller than that of the ℓ2/ℓ1-analysis method (0.257). Additionally, in terms of the non-convex strategy, a proper value of q contributes to better performance of both the ℓ2/ℓq(0<q≤1)-analysis and ℓq(0<q≤1)-analysis methods. However, with increasing block-sparsity k, the three methods tend to perform similarly. Moreover, when it comes to recovering signals that can be expressed as block-sparse coefficient vectors based on Ψ, our method performs better than the other two methods. An instance is presented in Fig. 3, which displays the recovery of a signal f synthesized from a block 7-sparse vector based on Ψ via the ℓ2/ℓq(0<q≤1)-analysis and ℓq(0<q≤1)-analysis methods, respectively.
4 Conclusion and discussion
This paper mainly investigates an ℓ2/ℓq(0<q≤1)-analysis method to recover a general signal that can be expressed as a block-sparse vector in terms of an overcomplete and tight frame. To the best of our knowledge, this is the first theoretical characterization of the proposed non-convex ℓ2/ℓq-analysis method with 0<q<1. Specifically, the obtained results contribute to CS in the following three aspects:
- We proposed an ℓ2/ℓq-analysis method to recover a general signal that can be expressed as a block-sparse vector in a certain frame, generalizing both the traditional CS methods for recovering sparse signals and the recent analysis methods for recovering general signals.
- Basing our theoretical approach on the proposed ℓ2/ℓq-analysis method, we established a sufficient condition for robust recovery of general signals that can be expressed as block-sparse signals, which associates the block Ψ-RIC and block Ψ-ROC with different values of q∈(0,1], providing a series of selectable conditions related to q.
- We derived an analysis-style IRLS algorithm to solve the proposed problem and compared our method with other representative methods, obtaining some convincing results.
There are still some issues left for future work. For example, one could consider establishing sharp recovery conditions of our ℓ2/ℓq(0<q<1)-analysis method, and one could also consider replacing our ℓ2/ℓq(0<q<1)-analysis method with other more general non-convex methods.
5 Appendix
The proof of Theorem 1 proceeds as follows.
Let f♯=f+h be a solution of (1), where f is the original signal. Write ΨTh=(c[1],c[2],⋯,c[m])T and rearrange the block indices such that ∥c[1]∥2≥∥c[2]∥2≥⋯≥∥c[m]∥2. Let \({\widetilde {\Omega }}=\{1,2,\cdots,k\}\), and let Ω be the index set of the k blocks of ΨTf with the largest ℓ2 norms. We denote by Ωc the complement of Ω in {1,2,⋯,m}. For convenience, we use \(\Psi ^{T}_{{\widetilde {\Omega }}}\) to denote \((\Psi _{{\widetilde {\Omega }}})^{T}\), where \(\Psi _{{\widetilde {\Omega }}}\) is the matrix Ψ restricted to the column-blocks indexed by \({\widetilde {\Omega }}\). We then partition {1,2,⋯,m} into the following sets
Since f♯ is a minimizer of (1), we have
This implies
Note that \(\left \|\Psi ^{T}_{{\widetilde {\Omega }}}\boldsymbol {h}\right \|_{2,q}\geq \left \|\Psi ^{T}_{\Omega }\boldsymbol {h}\right \|_{2,q}\) and \(\left \|\Psi ^{T}_{{\widetilde {\Omega }}^{\mathrm {c}}}\boldsymbol {h}\right \|_{2,q}\leq \left \|\Psi ^{T}_{\Omega ^{\mathrm {c}}}\boldsymbol {h}\right \|_{2,q}\), and thus it follows from (6) that
Further, we have
Using an inequality relating the ℓ2 and ℓq(0<q≤1) norms (see [28], Lemma 3), it is easy to obtain that
holds for any j≥1, where Pq=(q/2)q/(2−q)−(q/2)2/(2−q). Thus, summing these terms yields
This, along with (7), thus gives
Since \(\|\Psi _{{\Omega _{0}}}^{T}\boldsymbol {h}\|_{2,q}\leq (k_{1})^{1/q-1/2}\|\Psi _{{\Omega _{0}}}^{T}\boldsymbol {h}\|_{2}\) and \(\|\boldsymbol {c}[k_{1}+1]\|_{2}^{2}\leq \|\Psi _{{\Omega _{0}}}^{T}\boldsymbol {h}\|_{2}^{2}/k_{1}\), we can get
where \(t=2^{1/q-1}(k_{1}/k_{2})^{1/q-1/2}+P_{q}\sqrt {k_{2}/k_{1}}-(1+2^{1/q-1})\sqrt {k_{2}/k_{1}}\left [(k_{1}-k)/k_{2}\right ]^{1/q}\).
In fact, to make (8) work, one way is to set
which is equivalent to
To this end, we have to estimate the minimal value of
A direct calculation shows that f(q) attains its minimum value 0.125 at q=1. An auxiliary result is depicted in Fig. 4.
A direct result is that if the condition \(0\leq \frac {k_{1}-k}{k_{2}}\leq 0.125\) holds, one can easily get (8) for all 0<q≤1.
Similar to the consequence of Ψ-RIP, we have
By applying the equality
to the above inequality, we get
By the feasibility of f♯, we have
Thus,
It then follows from (10) and (11) that
By (7), it is easy to see that
Consequently, we have
Plugging (12) into the previously mentioned inequality and by a direct calculation, we get
which yields (3).
Availability of data and materials
Please contact any of the authors for data and materials.
Notes
In fact, when 0<q<1, the ℓq "norm" is only a quasi-norm. For consistency, we nevertheless refer to it as a norm.
Abbreviations
- Ψ-RIC: Ψ-restricted isometry constant
- Ψ-RIP: Ψ-restricted isometry property
- Ψ-ROC: Ψ-restricted orthogonality constant
- Ψ-ROP: Ψ-restricted orthogonality property
- CS: compressed sensing
- DNA: deoxyribonucleic acid
- IRLS: iterative re-weighted least square
References
D. Salomon, A concise introduction to data compression (2008).
D. Sculley, C. E. Brodley, in Proc. of the 2006 Data Compression Conference (DCC'2006). Compression and machine learning: a new perspective on feature space vectors (IEEE Computer Society, Snowbird, 2006), pp. 332–332. https://doi.org/10.1109/DCC.2006.13.
L. Martino, V. Elvira, Compressed Monte Carlo for distributed Bayesian inference. viXra:1811.0505 (2018). https://www.rxiv.org/pdf/1811.0505v1.pdf.
Y. Zheng, J. Ma, L. Wang, Consensus of hybrid multi-agent systems. IEEE Trans. Neural Netw. Learn. Syst. 29(4), 1359–1365 (2018).
J. Ma, M. Ye, Y. Zheng, Y. Zhu, Consensus analysis of hybrid multi-agent systems: a game-theoretic approach. Int. J. Robust Nonlinear Control 29(6), 1840–1853 (2019).
D. L. Donoho, Compressed sensing. IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006).
E. J. Candès, J. Romberg, T. Tao, Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 52(2), 489–509 (2006).
S. S. Chen, D. L. Donoho, M. A. Saunders, Atomic decomposition by basis pursuit. SIAM J. Sci. Comput. 20(1), 33–61 (1998).
A. M. Bruckstein, D. L. Donoho, M. Elad, From sparse solutions of systems of equations to sparse modeling of signals and images. SIAM Rev. 51(1), 34–81 (2009).
M. Elad, P. Milanfar, R. Rubinstein, Analysis versus synthesis in signal priors. Inverse Probl. 23(3), 947–968 (2007).
H. Rauhut, K. Schnass, P. Vandergheynst, Compressed sensing and redundant dictionaries. IEEE Trans. Inf. Theory 54(5), 2210–2219 (2008).
E. J. Candès, Y. C. Eldar, D. Needell, Compressed sensing with coherent and redundant dictionaries. Appl. Comput. Harmon. Anal. 31(1), 59–73 (2011).
I. Selesnick, M. Figueiredo, Signal restoration with overcomplete wavelet transforms: comparison of analysis and synthesis priors. Proc. SPIE 7446 (2009).
F. Parvaresh, H. Vikalo, S. Misra, B. Hassibi, Recovering sparse signals using sparse measurement matrices in compressed DNA microarrays. IEEE J. Sel. Top. Sig. Process. 2(3), 275–285 (2008).
A. Majumdar, R. K. Ward, Compressed sensing of color images. Sig. Process. 90(12), 3122–3127 (2010).
R. Vidal, Y. Ma, A unified algebraic approach to 2-D and 3-D motion segmentation and estimation. J. Math. Imaging Vision 25(3), 403–421 (2006).
Y. Wang, J. J. Wang, Z. B. Xu, A note on block-sparse signal recovery with coherent tight frames. Discret. Dyn. Nat. Soc. 2013(1), 1–8 (2013).
R. Chartrand, Exact reconstruction of sparse signals via nonconvex minimization. IEEE Sig. Process. Lett. 14(10), 707–710 (2007).
R. Chartrand, V. Staneva, Restricted isometry properties and nonconvex compressive sensing. Inverse Probl. 24(3), 20–35 (2008).
Z. B. Xu, H. L. Guo, Y. Wang, H. Zhang, Representative of L1/2 regularization among Lq (0<q≤1) regularizations: an experimental study based on phase diagram. Acta Autom. Sin. 38(7), 1225–1228 (2012).
Y. Wang, J. J. Wang, Z. B. Xu, On recovery of block-sparse signals via mixed ℓ2/ℓq (0<q≤1) norm minimization. EURASIP J. Adv. Sig. Process. 2013(1), 1–17 (2013).
Y. Wang, J. J. Wang, Z. B. Xu, Restricted p-isometry properties of nonconvex block-sparse compressed sensing. Sig. Process. 104, 188–196 (2014).
H. T. Yin, S. T. Li, L. Y. Fang, Block-sparse compressed sensing: non-convex model and iterative re-weighted algorithm. Inverse Probl. Sci. Eng. 21(1), 141–154 (2013).
J. H. Lin, S. Li, Y. Shen, New bounds for restricted isometry constants with coherent tight frames. IEEE Trans. Sig. Process. 61(3), 611–621 (2013).
J. H. Lin, S. Li, Y. Shen, Compressed data separation with redundant dictionaries. IEEE Trans. Inf. Theory 59(7), 4309–4315 (2013).
S. Li, J. H. Lin, Compressed sensing with coherent tight frame via ℓq minimization. Inverse Probl. Imaging 8, 761–777 (2014).
M. J. Lai, Y. Y. Xu, W. T. Yin, Improved iteratively reweighted least squares for unconstrained smoothed ℓq minimization. SIAM J. Numer. Anal. 51(2), 927–957 (2013).
Y. Hsia, R. L. Sheu, arXiv:1312.3379 (2014). http://arxiv.org/abs/1312.3379.
Acknowledgments
This work was supported by the project of the Key Laboratory of Intelligent Information and Big Data Processing of NingXia Province, North Minzu University (no. NXKLIIBDP2019).
Author information
Contributions
The authors read and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Luo, X., Yang, W., Ha, J. et al. Non-convex block-sparse compressed sensing with coherent tight frames. EURASIP J. Adv. Signal Process. 2020, 2 (2020). https://doi.org/10.1186/s13634-019-0659-8