Nonconvex block-sparse compressed sensing with coherent tight frames
EURASIP Journal on Advances in Signal Processing volume 2020, Article number: 2 (2020)
Abstract
In this paper, we present a nonconvex ℓ_{2}/ℓ_{q} (0<q<1) analysis method to recover a general signal that can be expressed as a block-sparse coefficient vector in a coherent tight frame, and we simultaneously establish a sufficient condition that guarantees the validity of the proposed method. In addition, we derive an efficient iterative reweighted least squares (IRLS) algorithm to solve the induced nonconvex optimization problem. The proposed IRLS algorithm is tested and compared with the ℓ_{2}/ℓ_{1}-analysis and the ℓ_{q} (0<q≤1) analysis methods in several experiments. All the comparisons demonstrate the superior performance of the ℓ_{2}/ℓ_{q}-analysis method with 0<q<1.
1 Introduction
Data compression and data recovery (possibly from compressed observations) are two crucial problems in many real-world applications, including information processing [1], machine learning [2], statistical inference [3], swarm intelligence [4, 5], and compressed sensing (CS) [6, 7]. Among these applications, CS is particularly attractive since it provides insights into signal processing with significantly fewer samples than classical approaches based on the Nyquist–Shannon sampling theorem.
CS was pioneered by Donoho [6] and Candès et al. [7] around 2006, and it has since captured considerable attention from researchers in a growing number of fields, including signal processing, machine learning, and mathematical statistics. A crucial concern in CS is to recover an unknown signal \(\boldsymbol {f}\in \mathbb {R}^{\widetilde {n}}\) from a small set of linear measurements
\[\boldsymbol{y}=\Phi\boldsymbol{f},\]
where \(\boldsymbol {y}\in \mathbb {R}^{\widetilde {m}}\) is an observed signal vector and \(\Phi \in \mathbb {R}^{\widetilde {m}\times \widetilde {n}}\) is a given measurement matrix with \(\widetilde {m}\ll \widetilde {n}\).
Conventional CS relies heavily on techniques that express signals as linear combinations of a few vectors from an orthogonal basis. However, in a large number of practical applications, signals are not sparse in terms of an orthogonal basis, but rather in terms of an overcomplete and tight frame [8, 9]. In such a scenario, one natural way to express f is to write f=Ψx, where \(\Psi \in \mathbb {R}^{\widetilde {n}\times n}\) is a matrix with \(\widetilde {n}\leq n\) whose n columns form a tight frame, and \(\boldsymbol {x}\in \mathbb {R}^{n}\) is sparse (or nearly sparse). In order to recover f, a popular approach is the ℓ_{1}-synthesis method [10, 11], which first solves the following problem:
\[\min_{\boldsymbol{x}\in\mathbb{R}^{n}}\ \|\boldsymbol{x}\|_{1}\quad \text{subject to}\quad \boldsymbol{y}=\Phi\Psi\boldsymbol{x}\]
to get a transform-based sparse coefficient vector x^{♯}, and then reconstructs the original signal f^{♯} by applying the synthesis operator Ψ to x^{♯}, i.e., f^{♯}=Ψx^{♯}. Since the entries of ΦΨ are correlated when Ψ is highly coherent, ΦΨ may no longer satisfy assumptions such as the restricted isometry property (RIP) and the mutual incoherence property (MIP), which have been widely used in conventional CS. Therefore, it is not easy to study the theoretical performance of the ℓ_{1}-synthesis method.
Fortunately, there exists an alternative to the ℓ_{1}-synthesis method called the ℓ_{1}-analysis method [10, 12], which directly finds an estimator f^{♯} by solving the following ℓ_{1}-analysis problem:
\[\min_{\boldsymbol{f}\in\mathbb{R}^{\widetilde{n}}}\ \|\Psi^{T}\boldsymbol{f}\|_{1}\quad \text{subject to}\quad \boldsymbol{y}=\Phi\boldsymbol{f}.\]
The ℓ_{1}-analysis method has its roots in the analysis-style sparse representation x=Ψ^{T}f, and is different from the above-mentioned synthesis method, which is based on the synthesis-style sparse representation f=Ψx. The existing literature has shown that there is a remarkable difference between the two methods despite their apparent similarity. For example, the two methods require totally different recovery conditions to guarantee robust recovery, and they utilize the sparsity prior in totally different ways; see [10, 13] and the references therein for more details. To investigate the theoretical performance of the ℓ_{1}-analysis method, Candès et al. [12] introduced the Ψ-RIP: a measurement matrix Φ is said to satisfy the RIP adapted to Ψ (Ψ-RIP) with constant δ_{k} if
\[(1-\delta_{k})\|\Psi\boldsymbol{x}\|_{2}^{2}\leq\|\Phi\Psi\boldsymbol{x}\|_{2}^{2}\leq(1+\delta_{k})\|\Psi\boldsymbol{x}\|_{2}^{2}\]
holds for every vector \(\boldsymbol {x}\in \mathbb {R}^{n}\) that is k-sparse, and they established a sufficient condition related to the Ψ-RIP for recovering general signals. In addition, they demonstrated the efficiency of the ℓ_{1}-analysis strategy with a large number of experiments based on real signals.
Different from the general setting in CS, where the transform-based coefficient vector x is merely sparse, some real-world signals exhibit additional sparse structure in terms of a fixed transform basis Ψ. Take for example the block-sparse structure, i.e., the nonzero elements of x are assembled in a few fixed blocks, which is our main concern in this paper. Such structured signals arise naturally in various applications; prominent examples include DNA microarrays [14], color imaging [15], and motion segmentation [16]. Without loss of generality, we assume that there are m blocks of size d=n/m in x. Then, one can write any block-sparse vector \(\boldsymbol {x}\in \mathbb {R}^{n}\) as
\[\boldsymbol{x}=(\underbrace{x_{1},\cdots,x_{d}}_{\boldsymbol{x}[1]},\underbrace{x_{d+1},\cdots,x_{2d}}_{\boldsymbol{x}[2]},\cdots,\underbrace{x_{n-d+1},\cdots,x_{n}}_{\boldsymbol{x}[m]})^{T},\]
where x[i] denotes the ith block of x. If x has at most k nonzero blocks, i.e., ∥x∥_{2,0}≤k, where ∥x∥_{2,0} counts the number of nonzero blocks of x, we refer to x as a block k-sparse signal. Accordingly, we can also write \(\Psi \in \mathbb {R}^{\widetilde {n}\times n}\) as
\[\Psi=(\underbrace{\Psi_{1},\cdots,\Psi_{d}}_{\Psi[1]},\underbrace{\Psi_{d+1},\cdots,\Psi_{2d}}_{\Psi[2]},\cdots,\underbrace{\Psi_{n-d+1},\cdots,\Psi_{n}}_{\Psi[m]}),\]
where Ψ_{i} with i=1,2,⋯,n and Ψ[j] with j=1,2,⋯,m denote the ith column vector and the jth sub-block matrix of Ψ, respectively. Most current papers focus on the conventional sparse or nearly sparse case in terms of Ψ. As one of the exceptions, Wang et al. [17] proposed an ℓ_{2}/ℓ_{1}-analysis method to investigate the recovery of block-sparse signals in terms of Ψ. Basing their theoretical analysis on the block Ψ-RIP, a block version of the Ψ-RIP that we will define in the next section, Wang et al. [17] also developed several sufficient conditions to guarantee robust recovery of general signals. For completeness, we present the ℓ_{2}/ℓ_{1}-analysis problem as follows:
\[\min_{\boldsymbol{f}\in\mathbb{R}^{\widetilde{n}}}\ \|\Psi^{T}\boldsymbol{f}\|_{2,1}\quad \text{subject to}\quad \|\Phi\boldsymbol{f}-\boldsymbol{y}\|_{2}\leq\varepsilon.\]
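To make the block notation concrete, here is a small Python sketch (the helper names are ours, not the paper's) that counts nonzero blocks, i.e., the quantity ∥x∥_{2,0}, and forms a best k-block approximation by keeping the k blocks with largest ℓ_{2} norm.

```python
import numpy as np

def block_l20(x, d):
    """Number of nonzero blocks of x when split into blocks of size d."""
    blocks = x.reshape(-1, d)                 # m x d array; block i is row i
    return int(np.count_nonzero(np.linalg.norm(blocks, axis=1) > 0))

def best_k_block_approx(x, d, k):
    """Keep the k blocks with largest l2 norm, zero out the rest."""
    blocks = x.reshape(-1, d).copy()
    norms = np.linalg.norm(blocks, axis=1)
    keep = np.argsort(norms)[::-1][:k]        # indices of the k largest blocks
    mask = np.zeros(len(norms), dtype=bool)
    mask[keep] = True
    blocks[~mask] = 0.0
    return blocks.reshape(-1)

x = np.zeros(12)                  # n = 12, block size d = 4, so m = 3 blocks
x[0:4] = [1.0, -2.0, 0.5, 0.0]    # block 1 nonzero
x[8:12] = [0.0, 0.0, 3.0, 0.0]    # block 3 nonzero
print(block_l20(x, 4))            # 2, so x is block 2-sparse
```

Note that a vector with scattered nonzeros would have a much larger ∥x∥_{2,0} than a vector with the same number of nonzeros grouped into blocks.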
Obviously, when d=1, the ℓ_{2}/ℓ_{1}-analysis method degenerates to the ℓ_{1}-analysis method mentioned above.
Recently, the work of Chartrand et al. [18–20] has shown that the nonconvex ℓ_{q} (0<q<1) method allows exact recovery of sparse signals from a smaller set of linear measurements than the ℓ_{1} method, providing a new paradigm for studying CS problems. In this paper, following previous works on the nonconvex ℓ_{q} (0<q<1) strategy, we first propose an ℓ_{2}/ℓ_{q}-analysis method with 0<q≤1 to recover general signals that can be expressed as block-sparse signals in terms of Ψ. Our method differs from conventional CS methods, which only consider cases where the signals themselves are sparse or block-sparse [18, 21–23], and also from previous analysis methods [24–26], which only focus on recovering general signals that are expressed as non-block-structured signals in terms of Ψ. Specifically, the proposed method can be described as:
\[\min_{\boldsymbol{f}\in\mathbb{R}^{\widetilde{n}}}\ \|\Psi^{T}\boldsymbol{f}\|_{2,q}^{q}\quad \text{subject to}\quad \boldsymbol{y}=\Phi\boldsymbol{f}.\]
In many application problems, the observed signal y may be polluted by a bounded noise e, i.e., y=Φf+e. So for the general situation, we have the model:
\[\min_{\boldsymbol{f}\in\mathbb{R}^{\widetilde{n}}}\ \|\Psi^{T}\boldsymbol{f}\|_{2,q}^{q}\quad \text{subject to}\quad \|\Phi\boldsymbol{f}-\boldsymbol{y}\|_{2}\leq\varepsilon, \qquad (1)\]
where ε is the noise level. Secondly, for problem (1), we establish a sufficient condition for robust recovery of general signals. The obtained results associate the two constants, the block Ψ-RIC and the block Ψ-ROC, for different q∈(0,1], and provide a series of selectable conditions for robust recovery via the ℓ_{2}/ℓ_{q}-analysis method. Finally, inspired by the ideas of [21, 27], we derive an iterative reweighted least squares (IRLS) algorithm to solve our ℓ_{2}/ℓ_{q}-analysis problem. Some experiments are conducted later that further demonstrate the efficiency of our ℓ_{2}/ℓ_{q}-analysis method with 0<q≤1.
The rest of the paper is organized as follows. In Section 2, we first state three key definitions and then present our main theoretical results. In Section 3, we propose an IRLS algorithm to solve the ℓ_{2}/ℓ_{q}-analysis problem and conduct some experiments to support the validity of our ℓ_{2}/ℓ_{q}-analysis method. Finally, conclusions are given in Section 4.
2 Robust recovery for the ℓ_{2}/ℓ_{q}-analysis problem
In this section, we establish a sufficient condition for robustly recovering general signals that can be expressed as block-sparse vectors in terms of Ψ. Before presenting our main results, we introduce several definitions that will be used later, starting with two important definitions that can also be found in many references such as [17].
Definition 1
Let \(\Psi \in \mathbb {R}^{\widetilde {n}\times n}\) with \(\widetilde {n}\leq n\) be a matrix whose n columns form a tight frame. A measurement matrix \(\Phi \in \mathbb {R}^{\widetilde {m}\times \widetilde {n}}\) is said to satisfy the block Ψ-RIP condition with constant δ_{kd} (block Ψ-RIC) if
\[(1-\delta_{kd})\|\Psi\boldsymbol{x}\|_{2}^{2}\leq\|\Phi\Psi\boldsymbol{x}\|_{2}^{2}\leq(1+\delta_{kd})\|\Psi\boldsymbol{x}\|_{2}^{2}\]
holds for every vector \(\boldsymbol {x}\in \mathbb {R}^{n}\) that is block k-sparse.
Definition 2
The block Ψ-restricted orthogonality constant (block Ψ-ROC), denoted by \(\theta _{(k_{1},k_{2})d}\), is the smallest positive number such that
\[\left|\left\langle\Phi\Psi\boldsymbol{x}_{1},\Phi\Psi\boldsymbol{x}_{2}\right\rangle\right|\leq\theta_{(k_{1},k_{2})d}\|\Psi\boldsymbol{x}_{1}\|_{2}\|\Psi\boldsymbol{x}_{2}\|_{2}\]
holds for every pair x_{1}, x_{2} that are block k_{1}-sparse and block k_{2}-sparse with disjoint block supports, respectively.
It is easy to see that if one takes Ψ to be the identity matrix of size \(\widetilde {n}\times \widetilde {n}\), the above definitions reduce to the well-known block-RIC and block-ROC definitions. Furthermore, if one also sets the block size d=1, one recovers the classical RIC and ROC definitions. Hence the block-RIC and block-ROC, as well as the RIC and ROC, are special cases of Definitions 1 and 2. In addition, we need the following definition, which also plays a key role in our theorem.
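Verifying the block Ψ-RIP for a given Φ is intractable in general, but one can probe it empirically. The following Python sketch (our own illustration, with arbitrary toy dimensions) samples random block k-sparse vectors x and tracks the extreme values of the ratio ∥ΦΨx∥_{2}²/∥Ψx∥_{2}² from Definition 1; this yields an empirical lower bound on δ_{kd}, not a certificate.

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_block_psi_rip(Phi, Psi, d, k, trials=2000, rng=rng):
    """Empirical lower bound on the block Psi-RIP constant delta_{kd}:
    extreme ratios ||Phi Psi x||^2 / ||Psi x||^2 over random block k-sparse x."""
    n = Psi.shape[1]
    m = n // d
    lo, hi = np.inf, -np.inf
    for _ in range(trials):
        x = np.zeros(n)
        for i in rng.choice(m, size=k, replace=False):
            x[i*d:(i+1)*d] = rng.standard_normal(d)
        v = Psi @ x
        r = np.linalg.norm(Phi @ v)**2 / np.linalg.norm(v)**2
        lo, hi = min(lo, r), max(hi, r)
    return max(1.0 - lo, hi - 1.0)   # empirical delta (a lower bound only)

# toy sizes; a Gaussian Phi scaled by 1/sqrt(m_t) is approximately norm-preserving
m_t, n_t, n = 32, 64, 128
Phi = rng.standard_normal((m_t, n_t)) / np.sqrt(m_t)
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
Psi = Q[:n_t, :]          # rows of an orthogonal matrix form a Parseval tight frame
print(empirical_block_psi_rip(Phi, Psi, d=4, k=2))
```

A small empirical value only suggests that the sampled ratios stay near 1; certifying δ_{kd} would require checking all block supports.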
Definition 3
Given \(\boldsymbol {x}\in \mathbb {R}^{n}\), we denote by x_{k} the best k-block approximation of x, i.e., the vector obtained by retaining the k blocks of x with the largest ℓ_{2} norms and setting all other blocks to zero.
For convenience, in the remainder of this paper we write δ_{k} and \(\theta _{k_{1},k_{2}}\) instead of δ_{kd} and \(\theta _{(k_{1},k_{2})d}\) for the block Ψ-RIC and block Ψ-ROC, respectively, whenever no confusion arises.
Next, we present our main results, stated in the following theorem.
Theorem 1
Let k_{1} and k_{2} be two positive integers such that 0≤8(k_{1}−k)≤k_{2}, and denote
\[t=2^{1/q-1}(k_{1}/k_{2})^{1/q-1/2}+\sqrt{k_{2}/k_{1}}\left[(q/2)^{q/(2-q)}-(q/2)^{2/(2-q)}\right]-\left(1+2^{1/q-1}\right)\sqrt{k_{2}/k_{1}}\left[(k_{1}-k)/k_{2}\right]^{1/q}\]
with 0<q≤1. If the matrix Φ satisfies
where δ_{k} and θ_{k,k} are defined in Definitions 1 and 2, then the solution f^{♯} to problem (1) obeys
where
In what follows, we present two remarks on the established results; the proof of the theorem is given in the Appendix.
Remark 1
Theorem 1 presents a sufficient condition for robustly recovering general signals via the ℓ_{2}/ℓ_{q}-analysis method with 0<q≤1. The obtained sufficient condition associates the block Ψ-RIC and block Ψ-ROC for different q∈(0,1], and provides a series of selectable conditions for robust recovery of general signals that can be expressed as block-sparse vectors. Since condition (2) involves t, which is a complex combination of k_{1}, k_{2}, k, and q, it is difficult to analyze the obtained condition intuitively. By choosing some representative values, we therefore induce a series of new conditions, as detailed in Table 1.
Remark 2
Inequality (3) indicates that the reconstruction error ∥f−f^{♯}∥_{2} can be bounded by the best k-block approximation error and the noise level ε. In the special case where ε=0 and the original signal f can be expressed as a block k-sparse vector with respect to a fixed Ψ, i.e., ∥Ψ^{T}f∥_{2,0}≤k, if the matrix Φ satisfies (2), then solving problem (1) leads to exact recovery of the original signal f.
3 Numerical experiments and results
In this section, we conduct numerical experiments to evaluate the performance of our ℓ_{2}/ℓ_{q} (0<q<1) analysis method. An IRLS algorithm is first proposed to solve the induced ℓ_{2}/ℓ_{q}-analysis problem. We then compare our method with other analysis-style methods, including ℓ_{2}/ℓ_{1}-analysis [17] and ℓ_{q} (0<q≤1) analysis [26].
3.1 An IRLS algorithm for ℓ_{2}/ℓ_{q}-analysis
In order to solve the ℓ_{2}/ℓ_{q}-analysis problem (1) with 0<q≤1, we derive an efficient analysis-style IRLS algorithm, which can be seen as a natural extension of the traditional IRLS algorithm [21, 27] for sparse problems. We first rewrite problem (1) in the regularized form
\[\min_{\boldsymbol{f}\in\mathbb{R}^{\widetilde{n}}}\ \|\Phi\boldsymbol{f}-\boldsymbol{y}\|_{2}^{2}+\lambda\|\Psi^{T}\boldsymbol{f}\|_{2,q}^{\epsilon}, \qquad (4)\]
where λ is a regularization parameter and \(\|\Psi^{T}\boldsymbol{f}\|_{2,q}^{\epsilon}=\sum_{i=1}^{m}\left(\epsilon^{2}+\|\Psi[i]^{T}\boldsymbol{f}\|_{2}^{2}\right)^{\frac{q}{2}}\).
Applying the first-order optimality condition to (4), we have
\[2\Phi^{T}(\Phi\widetilde{\boldsymbol{f}}-\boldsymbol{y})+\lambda q\,\Psi D(\widetilde{\boldsymbol{f}})\Psi^{T}\widetilde{\boldsymbol{f}}=\boldsymbol{0}, \qquad (5)\]
where \(\widetilde {\boldsymbol {f}}\) denotes a critical point of (4) and \(D(\boldsymbol{f})\) is the diagonal weight matrix whose d entries on the ith block all equal \(\left(\epsilon ^{2}+\|\Psi [i]^{T}\boldsymbol{f}\|_{2}^{2}\right)^{q/2-1}\). Due to the nonlinearity of (5), there is no straightforward way to obtain an exact solution. However, using standard numerical techniques, one can approximate a solution of (5) well. Following the ideas in [21, 27], we use the iterative procedure
\[\boldsymbol{f}^{(l+1)}=\left(2\Phi^{T}\Phi+\lambda q\,\Psi D\left(\boldsymbol{f}^{(l)}\right)\Psi^{T}\right)^{-1}2\Phi^{T}\boldsymbol{y},\]
which is implemented by Algorithm 1.
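As a concrete illustration, the iterative procedure can be sketched in Python. This is our reconstruction rather than the paper's Algorithm 1: we assume the regularized least-squares form of the objective with the smoothed penalty defined above, and the stopping rule and the schedule for shrinking the smoothing parameter ε are common IRLS heuristics, not taken from the paper.

```python
import numpy as np

def irls_l2lq_analysis(Phi, Psi, y, d, q=0.5, lam=1e-3, eps=1.0,
                       max_iter=100, tol=1e-6):
    """IRLS sketch for min_f ||Phi f - y||_2^2 + lam * ||Psi^T f||_{2,q}^eps.

    Each step freezes the block weights at the previous iterate and solves
    the resulting weighted least-squares (normal-equation) system."""
    n = Psi.shape[1]
    m = n // d
    f = np.linalg.lstsq(Phi, y, rcond=None)[0]     # least-squares initialization
    PtP, Pty = Phi.T @ Phi, Phi.T @ y
    for _ in range(max_iter):
        c = (Psi.T @ f).reshape(m, d)              # analysis coefficients, one row per block
        w = (eps**2 + np.sum(c**2, axis=1)) ** (q / 2 - 1)   # block weights
        W = np.repeat(w, d)                        # one weight per coefficient
        A = 2 * PtP + lam * q * (Psi * W) @ Psi.T  # 2 Phi^T Phi + lam q Psi D Psi^T
        f_new = np.linalg.solve(A, 2 * Pty)
        if np.linalg.norm(f_new - f) < tol * max(1.0, np.linalg.norm(f)):
            f = f_new
            break
        f = f_new
        eps = max(eps / 10.0, 1e-8)                # gradually shrink the smoothing
    return f
```

As ε shrinks, blocks with small ℓ_{2} norm receive very large weights and are driven toward zero, which is what produces the block-sparse solution.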
3.2 Experimental settings
Throughout the experiments, the measurement matrix Φ is generated as an \(\widetilde {m}\times \widetilde {n}\) Gaussian matrix with \(\widetilde {m}=64\) and \(\widetilde {n}=256\), and the overcomplete tight frame Ψ is generated by taking the first \(\widetilde {n}\) rows of an n×n Hadamard matrix with n=512. The original signal f is synthesized as f=Ψx, where x is a block k-sparse signal with block size d=4. The noise vector e follows a Gaussian distribution with mean 0 and standard deviation 0.05. We consider four values q=0.1,0.5,0.7,1 for both the ℓ_{2}/ℓ_{q}-analysis and ℓ_{q}-analysis methods. The relative error between the reconstructed signal f^{♯} and the original signal f is computed as ∥f−f^{♯}∥_{2}/∥f∥_{2}.
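The setup described above can be reproduced along the following lines. The normalizations of Φ and Ψ are our assumptions, since the paper does not state them; the scaling of Ψ is chosen so that ΨΨ^{T}=I, i.e., the rows form a Parseval tight frame.

```python
import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(1)
m_t, n_t, n, d, k = 64, 256, 512, 4, 5   # sizes from the experimental settings

Phi = rng.standard_normal((m_t, n_t)) / np.sqrt(m_t)  # Gaussian measurement matrix
Psi = hadamard(n)[:n_t, :] / np.sqrt(n)               # first 256 rows of a 512x512
                                                      # Hadamard matrix; Psi Psi^T = I
x = np.zeros(n)                                       # block k-sparse coefficients
for i in rng.choice(n // d, size=k, replace=False):
    x[i*d:(i+1)*d] = rng.standard_normal(d)

f = Psi @ x                               # original signal
e = rng.normal(0.0, 0.05, size=m_t)       # Gaussian noise, mean 0, std 0.05
y = Phi @ f + e                           # observed measurements

rel_err = lambda f_hat: np.linalg.norm(f - f_hat) / np.linalg.norm(f)
```

Any of the compared solvers can then be run on (Phi, Psi, y) and scored with `rel_err`.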
3.3 Experimental results
In order to find the value of λ that minimizes the relative error, we conduct two sets of trials. Figure 1a depicts the relative error versus λ when recovering signals f that can be expressed as block 5-sparse signals in terms of Ψ. It is easy to see that choosing λ smaller than 1×10^{−2} is appropriate. Similar results can be found in Fig. 1b. Without loss of generality, we take λ=1×10^{−3} as the regularization parameter.
Next, we compare our ℓ_{2}/ℓ_{q} (0<q<1) analysis method with the ℓ_{2}/ℓ_{1}-analysis and ℓ_{q} (0<q≤1) analysis methods. The results are depicted in Fig. 2.
It is easy to see that the ℓ_{2}/ℓ_{q} (0<q<1) analysis method is far superior to the ℓ_{2}/ℓ_{1}-analysis method. Take for example the ℓ_{2}/ℓ_{0.5}-analysis method when k=8: its relative error is 0.016, about 16 times smaller than that of the ℓ_{2}/ℓ_{1}-analysis method (0.257). Additionally, within the nonconvex strategy, a proper value of q contributes to better performance of both the ℓ_{2}/ℓ_{q} (0<q≤1) analysis and ℓ_{q} (0<q≤1) analysis methods. However, as the block sparsity k increases, the three methods tend to perform similarly. Moreover, when recovering signals that can be expressed as block-sparse coefficient vectors based on Ψ, our method performs better than the other two. An instance is presented in Fig. 3, which displays the recovery of a signal f synthesized from a block 7-sparse vector based on Ψ via the ℓ_{2}/ℓ_{q} (0<q≤1) analysis and ℓ_{q} (0<q≤1) analysis methods, respectively.
4 Conclusion and discussion
This paper investigates an ℓ_{2}/ℓ_{q} (0<q≤1) analysis method to recover a general signal that can be expressed as a block-sparse vector in terms of an overcomplete and tight frame. To the best of our knowledge, this is the first theoretical characterization of the proposed nonconvex ℓ_{2}/ℓ_{q}-analysis method with 0<q<1. Specifically, the obtained results contribute to CS in the following three aspects:
We proposed an ℓ_{2}/ℓ_{q}-analysis method to recover a general signal that can be expressed as a block-sparse vector in a certain frame, generalizing both the traditional CS methods for recovering sparse signals and the recent analysis methods for recovering general signals.
Based on the proposed ℓ_{2}/ℓ_{q}-analysis method, we established a sufficient condition for robust recovery of general signals that can be expressed as block-sparse signals; it associates the block Ψ-RIC and block Ψ-ROC for different values of q∈(0,1], providing a series of selectable conditions related to q.
We derived an analysis-style IRLS algorithm to solve the proposed problem and compared our method with other representative methods, obtaining convincing results.
There are still some issues left for future work. For example, one could establish sharp recovery conditions for our ℓ_{2}/ℓ_{q} (0<q<1) analysis method, or replace it with other, more general nonconvex methods.
5 Appendix
Theorem 1 is proved as follows.
Let f^{♯}=f+h be a solution of (1), where f is the original signal. Write Ψ^{T}h=(c[1],c[2],⋯,c[m])^{T} and rearrange the block indices such that ∥c[1]∥_{2}≥∥c[2]∥_{2}≥⋯≥∥c[m]∥_{2}. Let \({\widetilde {\Omega }}=\{1,2,\cdots,k\}\) and let Ω be the index set of the k blocks of Ψ^{T}f with largest ℓ_{2} norm. We denote by Ω^{c} the complement of Ω in {1,2,⋯,m}. For convenience, we use \(\Psi ^{T}_{{\widetilde {\Omega }}}\) to denote \((\Psi _{{\widetilde {\Omega }}})^{T}\), where \(\Psi _{{\widetilde {\Omega }}}\) is the matrix Ψ restricted to the column blocks indexed by \({\widetilde {\Omega }}\), and then partition {1,2,⋯,m} into the following sets
Since f^{♯} is a minimizer of (1), we have
This implies
Note that \(\left \|\Psi ^{T}_{{\widetilde {\Omega }}}\boldsymbol {h}\right \|_{2,q}\geq \left \|\Psi ^{T}_{\Omega }\boldsymbol {h}\right \|_{2,q}\) and \(\left \|\Psi ^{T}_{{\widetilde {\Omega }}^{\mathrm {c}}}\boldsymbol {h}\right \|_{2,q}\leq \left \|\Psi ^{T}_{\Omega ^{\mathrm {c}}}\boldsymbol {h}\right \|_{2,q}\); thus it follows from (6) that
Further, we have
Using the inequality relating the ℓ_{2} and ℓ_{q} (0<q≤1) norms\(^{1}\) (see [28], Lemma 3), it is easy to obtain that
holds for any j≥1, where P_{q}=(q/2)^{q/(2−q)}−(q/2)^{2/(2−q)}. Thus, summing these terms yields
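As a quick numerical sanity check (our own illustration), the constant P_q can be evaluated directly; for instance, at q=1 it equals (1/2)^{1}−(1/2)^{2}=0.25.

```python
# Evaluate P_q = (q/2)^(q/(2-q)) - (q/2)^(2/(2-q)) for the q values used in the paper
def P(q):
    return (q / 2) ** (q / (2 - q)) - (q / 2) ** (2 / (2 - q))

for q in (0.1, 0.5, 0.7, 1.0):
    print(f"P({q}) = {P(q):.4f}")
# P(1.0) = 0.2500
```

The constant stays strictly positive on (0,1], which is what the summation argument above relies on.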
This, along with (7), thus gives
Since \(\|\Psi _{{\Omega _{0}}}^{T}\boldsymbol {h}\|_{2,q}\leq k_{1}^{1/q-1/2}\|\Psi _{{\Omega _{0}}}^{T}\boldsymbol {h}\|_{2}\) and \(\|\boldsymbol {c}[k_{1}+1]\|_{2}^{2}\leq \|\Psi _{{\Omega _{0}}}^{T}\boldsymbol {h}\|_{2}^{2}/k_{1}\), we can get
where \(t=2^{1/q-1}(k_{1}/k_{2})^{1/q-1/2}+P_{q}\sqrt {k_{2}/k_{1}}-\left(1+2^{1/q-1}\right)\sqrt {k_{2}/k_{1}}\left [(k_{1}-k)/k_{2}\right ]^{1/q}\).
In fact, to make (8) work, one way is to set
\[P_{q}-\left(1+2^{1/q-1}\right)\left[(k_{1}-k)/k_{2}\right]^{1/q}\geq 0,\]
which is equivalent to
\[\frac{k_{1}-k}{k_{2}}\leq\left[\frac{P_{q}}{1+2^{1/q-1}}\right]^{q}.\]
To this end, we have to estimate the minimal value of
\[f(q)=\left[\frac{P_{q}}{1+2^{1/q-1}}\right]^{q}.\]
By a direct calculation, one can deduce that f(q) attains its minimum value 0.125 at q=1. An auxiliary illustration is depicted in Fig. 4.
A direct consequence is that if the condition \(0\leq \frac {k_{1}-k}{k_{2}}\leq 0.125\) holds, then (8) holds for all 0<q≤1.
Similarly to the case of the Ψ-RIP, we have
By applying the equality
to the above inequality, we get
By the feasibility of f^{♯}, we have
Thus,
It then follows from (10) and (11) that
By (7), it is easy to see that
Consequently, we have
Plugging (12) into the previously mentioned inequality and by a direct calculation, we get
which yields (3).
Availability of data and materials
Please contact any of the authors for data and materials.
Notes
In fact, when 0<q<1, the ℓ_{q} norm is only a quasi-norm. For consistency, we nevertheless refer to it as a norm.
Abbreviations
Ψ-RIC: Ψ-restricted isometry constant
Ψ-RIP: Ψ-restricted isometry property
Ψ-ROC: Ψ-restricted orthogonality constant
Ψ-ROP: Ψ-restricted orthogonality property
CS: Compressed sensing
DNA: Deoxyribonucleic acid
IRLS: Iterative reweighted least squares
References
D. Salomon, A concise introduction to data compression (2008).
D. Sculley, C. E. Brodley, in Proc. of the 2006 Data Compression Conference (DCC’2006). Compression and machine learning: A new perspective on feature space vectors (IEEE Computer SocietySnowbird, 2006), pp. 332–332. https://doi.org/10.1109/DCC.2006.13.
L. Martino, V. Elvira, Compressed Monte Carlo for distributed Bayesian inference. viXra:1811.0505 (2018). https://www.rxiv.org/pdf/1811.0505v1.pdf.
Y. Zheng, J. Ma, L. Wang, Consensus of hybrid multiagent systems. IEEE Transactions on Neural Networks and Learning Systems. 29(4), 1359–1365 (2018).
J. Ma, M. Ye, Y. Zheng, Y. Zhu, Consensus analysis of hybrid multiagent systems: a gametheoretic approach. Int. J. Robust Nonlinear Control. 29(6), 1840–1853 (2019).
D. L. Donoho, Compressed sensing. IEEE Trans. Inf. Theory. 52(4), 1289–1306 (2006).
E. J. Candès, J. Romberg, T. Tao, Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory. 52(2), 489–509 (2006).
S. S. Chen, D. L. Donoho, M. A. Saunders, Atomic decomposition by basis pursuit. SIAM J. Sci. Comput. 20(1), 33–61 (1998).
A. M. Bruckstein, D. L. Donoho, M. Elad, From sparse solutions of systems of equations to sparse modeling of signals and images. SIAM Rev. 51(1), 34–81 (2009).
M. Elad, P. Milanfar, R. Rubinstein, Analysis versus synthesis in signal priors. Inverse Probl. 23(3), 947–968 (2007).
H. Rauhut, K. Schnass, P. Vandergheynst, Compressed sensing and redundant dictionaries. IEEE Trans. Inf. Theory. 54(5), 2210–2219 (2008).
E. J. Candès, Y. C. Eldar, D. Needell, Compressed sensing with coherent and redundant dictionaries. Appl. Comput. Harmon. Anal. 31(1), 59–73 (2011).
I. Selesnick, M. Figueiredo, Signal restoration with overcomplete wavelet transforms: comparison of analysis and synthesis priors. Proc. SPIE 7446 (2009).
F. Parvaresh, H. Vikalo, S. Misra, B. Hassibi, Recovering sparse signals using sparse measurement matrices in compressed DNA microarrays. IEEE J. Sel. Top. Sig. Process. 2(3), 275–285 (2008).
A. Majumdar, R. K. Ward, Compressed sensing of color images. Sig. Process. 90(12), 3122–3127 (2010).
R. Vidal, Y. Ma, A unified algebraic approach to 2D and 3D motion segmentation and estimation. J. Math. Imaging Vision. 25(3), 403–421 (2006).
Y. Wang, J. J. Wang, Z. B. Xu, A note on block-sparse signal recovery with coherent tight frames. Discret. Dyn. Nat. Soc. 2013(1), 1–8 (2013).
R. Chartrand, Exact reconstruction of sparse signals via nonconvex minimization. IEEE Sig. Process. Lett. 14(10), 707–710 (2007).
R. Chartrand, V. Staneva, Restricted isometry properties and nonconvex compressive sensing. Inverse Probl. 24(3), 20–35 (2008).
Z. B. Xu, H. L. Guo, Y. Wang, H. Zhang, Representative of L_{1/2} regularization among L_{q} (0<q≤1) regularizations: an experimental study based on phase diagram. Acta Autom. Sin. 38(7), 1225–1228 (2012).
Y. Wang, J. J. Wang, Z. B. Xu, On recovery of block-sparse signals via mixed ℓ_{2}/ℓ_{q} (0<q≤1) norm minimization. EURASIP J. Adv. Sig. Process. 2013(1), 1–17 (2013).
Y. Wang, J. J. Wang, Z. B. Xu, Restricted p-isometry properties of nonconvex block-sparse compressed sensing. Sig. Process. 104, 188–196 (2014).
H. T. Yin, S. T. Li, L. Y. Fang, Block-sparse compressed sensing: nonconvex model and iterative reweighted algorithm. Inverse Probl. Sci. Eng. 21(1), 141–154 (2013).
J. H. Lin, S. Li, Y. Shen, New bounds for restricted isometry constants with coherent tight frames. IEEE Trans. Sig. Process. 61(3), 611–621 (2013).
J. H. Lin, S. Li, Y. Shen, Compressed data separation with redundant dictionaries. IEEE Trans. Inf. Theory. 59(7), 4309–4315 (2013).
S. Li, J. H. Lin, Compressed sensing with coherent tight frame via ℓ_{q} minimization. Inverse Probl. Imaging 8, 761–777 (2014).
M. J. Lai, Y. Y. Xu, W. T. Yin, Improved iteratively reweighted least squares for unconstrained smoothed ℓ_{q} minimization. SIAM J. Numer. Anal. 51(2), 927–957 (2013).
Y. Hsia, R. L. Sheu, arXiv:1312.3379 (2014). http://arxiv.org/abs/1312.3379.
Acknowledgments
This work was supported by the project of the Key Laboratory of Intelligent Information and Big Data Processing of Ningxia Province, North Minzu University (no. NXKLIIBDP2019).
Contributions
The authors read and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Luo, X., Yang, W., Ha, J. et al. Nonconvex block-sparse compressed sensing with coherent tight frames. EURASIP J. Adv. Signal Process. 2020, 2 (2020). https://doi.org/10.1186/s13634-019-0659-8
Keywords
 Block-sparse compressed sensing
 Restricted isometry property
 Restricted orthogonality property
 Tight frames
 ℓ_{2}/ℓ_{q}-analysis method