
Non-convex block-sparse compressed sensing with coherent tight frames


In this paper, we present a non-convex ℓ2/ℓq (0<q<1)-analysis method to recover a general signal that can be expressed as a block-sparse coefficient vector in a coherent tight frame, and we establish a sufficient condition that guarantees the validity of the proposed method. In addition, we derive an efficient iterative re-weighted least squares (IRLS) algorithm to solve the induced non-convex optimization problem. The proposed IRLS algorithm is tested and compared with the ℓ2/ℓ1-analysis and ℓq (0<q≤1)-analysis methods in several experiments. All the comparisons demonstrate the superior performance of the ℓ2/ℓq-analysis method with 0<q<1.


Data compression and data recovery (possibly from compressed observations) are two crucial problems in many real-world applications, including information processing [1], machine learning [2], statistical inference [3], swarm intelligence [4, 5], and compressed sensing (CS) [6, 7]. Among these applications, CS is particularly attractive since it provides insights into signal processing with significantly fewer samples than classical approaches based on the Nyquist-Shannon sampling theorem.

CS was pioneered by Donoho [6] and Candès et al. [7] around 2006, and it has since attracted considerable attention from researchers in a growing number of fields, including signal processing, machine learning, and mathematical statistics. A crucial concern in CS is to recover an unknown signal \(\boldsymbol {f}\in \mathbb {R}^{\widetilde {n}}\) from its small set of linear measurements

$$\begin{array}{*{20}l} \boldsymbol{y}=\Phi\boldsymbol{f}, \end{array} $$

where \(\boldsymbol {y}\in \mathbb {R}^{\widetilde {m}}\) is an observed signal vector and \(\Phi \in \mathbb {R}^{\widetilde {m}\times \widetilde {n}}\) is a given measurement matrix with \(\widetilde {m}\ll \widetilde {n}\).

Conventional CS heavily relies on techniques that can express a signal as a linear combination of a few basis vectors from an orthogonal basis. However, in a large number of practical applications, the signals are not sparse in terms of an orthogonal basis, but in terms of an overcomplete and tight frame [8, 9]. In such a scenario, one natural way to express f is to write f=Ψx, where \(\Psi \in \mathbb {R}^{\widetilde {n}\times n}\) is a matrix with \(\widetilde {n}\leq n\) whose n columns form a tight frame, and \(\boldsymbol {x}\in \mathbb {R}^{n}\) is sparse (or nearly sparse). In order to recover f, a popular approach is the ℓ1-synthesis method [10, 11], which first solves the following problem:

$$\begin{array}{*{20}l} \min_{\boldsymbol{x}\in\mathbb{R}^{n}}\|\boldsymbol{x}\|_{1}~~~\text{subject~~to}~~~\boldsymbol{y}=\Phi\Psi\boldsymbol{x} \end{array} $$

to obtain the transform-based sparse coefficient vector x, and then reconstructs the original signal f by applying the synthesis operator Ψ to x, i.e., f=Ψx. Since the entries of ΦΨ are correlated when Ψ is highly coherent, ΦΨ may no longer satisfy the required assumptions, such as the restricted isometry property (RIP) and the mutual incoherence property (MIP), which have been widely used in conventional CS. Therefore, it is not easy to study the theoretical performance of the ℓ1-synthesis method.
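For concreteness, the ℓ1-synthesis step can be cast as a linear program via the standard lift x = u − v with u, v ≥ 0. The following sketch (function name `l1_synthesis` is ours, not from the paper) illustrates this reformulation:

```python
import numpy as np
from scipy.optimize import linprog

def l1_synthesis(A, y):
    """Solve min ||x||_1 s.t. A x = y (basis pursuit, A = Phi @ Psi)
    via the LP lift x = u - v with u, v >= 0."""
    m, n = A.shape
    c = np.ones(2 * n)              # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([A, -A])       # equality constraint A (u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    u, v = res.x[:n], res.x[n:]
    return u - v

# tiny demo: a 1-sparse vector observed through 3 random measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 6))
x_true = np.zeros(6); x_true[2] = 1.5
x_hat = l1_synthesis(A, A @ x_true)
```

The recovered x_hat satisfies the measurement constraint and has ℓ1 norm no larger than that of the true vector, since the true vector is itself feasible.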

Fortunately, there exists an alternative to the ℓ1-synthesis method called the ℓ1-analysis method [10, 12], which directly finds an estimator f by solving the following ℓ1-analysis problem:

$$\begin{array}{*{20}l} \min_{\boldsymbol{f}\in\mathbb{R}^{N}}\left\|\Psi^{T}\boldsymbol{f}\right\|_{1}~~~\text{subject~~to}~~~\boldsymbol{y}=\Phi\boldsymbol{f}. \end{array} $$

The ℓ1-analysis method has its roots in the analysis-style sparse representation x=ΨTf, and is different from the above-mentioned synthesis method, which is based on the synthesis-style sparse representation f=Ψx. The existing literature has shown that there is a remarkable difference between the two methods despite their apparent similarity. For example, the two methods have totally different recovery conditions guaranteeing robust recovery of any signal, and the ways they utilize the sparsity prior are also totally different. Please see [10, 13] and the references therein for more details. To investigate the theoretical performance of the ℓ1-analysis method, Candès et al. [12] introduced the definition of the Ψ-RIP: a measurement matrix Φ is said to satisfy the RIP adapted to Ψ (Ψ-RIP) with constant δk if

$$(1-\delta_{k})\|\Psi\boldsymbol{x}\|_{2}^{2}\leq\|\Phi\Psi\boldsymbol{x}\|_{2}^{2}\leq(1+\delta_{k})\|\Psi\boldsymbol{x}\|_{2}^{2} $$

holds for every vector \(\boldsymbol {x}\in \mathbb {R}^{n}\) that is k-sparse, and they established a sufficient condition related to the Ψ-RIP for recovering general signals. In addition, they demonstrated the efficiency of the ℓ1-analysis strategy with a large number of experiments on real signals.

Different from the general case in CS that the transform-based coefficient vector x is sparse, some signals in the real world may exhibit additional sparse structures in terms of a fixed transform basis Ψ. Take for example the block-sparse structure, i.e., the non-zero elements of x are assembled in a few fixed blocks, which is also our main concern in this paper. Such structured signals naturally arise in various applications. Prominent examples include DNA microarrays [14], color imaging [15], and motion segmentation [16]. Without loss of generality, we assume that there are m blocks of size d=n/m in x. Then, one can write any block-sparse vector \(\boldsymbol {x}\in \mathbb {R}^{n}\) as

$$\begin{array}{*{20}l} {}\boldsymbol{x}=[\underbrace{x_{1},\cdots,x_{d}}_{\boldsymbol{x}[1]},\underbrace{x_{d+1},\cdots,x_{2d}}_{\boldsymbol{x}[2]},\cdots,\underbrace{x_{n-d+1},\cdots,x_{n}}_{\boldsymbol{x}[m]}]^{T}, \end{array} $$

where x[i] denotes the ith block of x. If x has at most k non-zero blocks, i.e., \(\|\boldsymbol{x}\|_{2,0}\leq k\), we refer to such a vector x as a block k-sparse signal. Accordingly, we can also write \(\Psi \in \mathbb {R}^{\widetilde {n}\times n}\) as

$$\begin{array}{*{20}l} {}\Psi=[\underbrace{\Psi_{1},\cdots,\Psi_{d}}_{\Psi[1]},\underbrace{\Psi_{d+1},\cdots,\Psi_{2d}}_{\Psi[2]},\cdots,\underbrace{\Psi_{n-d+1},\cdots,\Psi_{n}}_{\Psi[m]}], \end{array} $$

where Ψi with i=1,2,⋯,n and Ψ[j] with j=1,2,⋯,m denote the ith column vector and the jth sub-block matrix of Ψ, respectively. Most current papers focus on the conventional sparse or nearly sparse case in terms of Ψ. As one of the few exceptions, Wang et al. [17] proposed an ℓ2/ℓ1-analysis method to investigate the recovery of block-sparse signals in terms of Ψ. Basing their theoretical analysis on the block Ψ-RIP, a block version of the Ψ-RIP that we will define in the next section, Wang et al. [17] also developed several sufficient conditions to guarantee robust recovery of general signals. For completeness, we present the ℓ2/ℓ1-analysis problem as follows:

$$\begin{array}{*{20}l} {}\min_{\boldsymbol{f}\in\mathbb{R}^{N}}\left\|\Psi^{T}\boldsymbol{f}\right\|_{2,1}{:=\sum\limits_{i=1}^{m}\left\|\Psi[i]^{T}\boldsymbol{f}\right\|_{2}}~~~\text{subject~~to}~~~\boldsymbol{y}=\Phi\boldsymbol{f}. \end{array} $$

Obviously, when d=1, the ℓ2/ℓ1-analysis method degenerates to the ℓ1-analysis method mentioned above.
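The mixed norms used above are straightforward to compute. A small sketch (helper names `block_view` and `mixed_norm` are ours) covering \(\|\cdot\|_{2,0}\), \(\|\cdot\|_{2,1}\), and the general \(\|\cdot\|_{2,q}\):

```python
import numpy as np

def block_view(x, d):
    """Reshape a length-n vector into its m = n // d blocks x[1..m]."""
    return np.asarray(x, dtype=float).reshape(-1, d)

def mixed_norm(x, d, q):
    """||x||_{2,q} = (sum_i ||x[i]||_2^q)^(1/q); q = 0 counts non-zero blocks."""
    block_l2 = np.linalg.norm(block_view(x, d), axis=1)
    if q == 0:
        return int(np.count_nonzero(block_l2))
    return float(np.sum(block_l2 ** q) ** (1.0 / q))

# a block 2-sparse vector with m = 4 blocks of size d = 2
x = np.array([3.0, 4.0, 0.0, 0.0, 0.0, 5.0, 0.0, 0.0])
print(mixed_norm(x, 2, 0))   # number of non-zero blocks: 2
print(mixed_norm(x, 2, 1))   # ||x||_{2,1} = 5 + 5 = 10
```

Note how, for 0<q<1, the mixed norm penalizes many small blocks more heavily than a few large ones, which is what drives the block-sparsity-promoting behavior.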

Recently, the work of Chartrand et al. [18–20] has shown that the non-convex ℓq (0<q<1) method allows the exact recovery of sparse signals from a smaller set of linear measurements than the ℓ1 method, providing a new paradigm for studying CS problems. In this paper, following previous work on the non-convex ℓq (0<q<1) strategy, we first propose an ℓ2/ℓq-analysis method with 0<q≤1 to recover general signals that can be expressed as block-sparse signals in terms of Ψ. Our method is different from conventional CS methods, which only concern cases where the signals per se are sparse or block-sparse [18, 21–23], and also different from previous analysis methods [24–26], which only focus on the recovery of general signals that are expressed as non-block-structured signals in terms of Ψ. Specifically, the proposed method can be described as:

$$\begin{array}{*{20}l} {}\min_{\boldsymbol{f}\in\mathbb{R}^{N}}\left\|\Psi^{T}\boldsymbol{f}\right\|_{2,q}{\!:=\!\left\{\!\sum\limits_{i=1}^{m}\left\|\Psi[i]^{T}\boldsymbol{f}\right\|_{2}^{q}\!\right\}^{1/q}} \text{subject~~to}~~~\boldsymbol{y}=\Phi\boldsymbol{f}. \end{array} $$

In many applications, the observed signal y may be polluted by bounded noise e, i.e., y=Φf+e. For this general situation, we thus consider the model:

$$\begin{array}{*{20}l} \min_{\boldsymbol{f}\in\mathbb{R}^{N}}\left\|\Psi^{T}\boldsymbol{f}\right\|_{2,q}~~~\text{subject~~to}~~~\|\boldsymbol{y}-\Phi\boldsymbol{f}\|_{2}\leq\epsilon, \end{array} $$

where ε is the noise level. Secondly, for (1), we establish a sufficient condition for robust recovery of general signals. The obtained results associate the two constants, the block Ψ-RIC and the block Ψ-ROC, with different \(q\in(0,1]\), and provide a series of selectable conditions for robust recovery via the ℓ2/ℓq-analysis method. Finally, inspired by the ideas of [21, 27], we derive an iterative re-weighted least squares (IRLS) algorithm to solve our ℓ2/ℓq-analysis problem. Some experiments conducted later further demonstrate the efficiency of our ℓ2/ℓq-analysis method with 0<q≤1.

The rest of the paper is organized as follows. In Section 2, we first state three key definitions and then present our main theoretical results. In Section 3, we propose an IRLS algorithm to solve the ℓ2/ℓq-analysis problem, and conduct some experiments to support the validity of our ℓ2/ℓq-analysis method. Finally, the conclusion is given in Section 4.

Robust recovery for the ℓ2/ℓq-analysis problem

In this section, we mainly establish a sufficient condition to robustly recover general signals that can be expressed as block-sparse vectors in terms of Ψ. Before presenting our main results, we first introduce several definitions that will be used later. We start with the introduction of two important definitions, which can also be found in many references such as [17].

Definition 1

Let \(\Psi \in \mathbb {R}^{\widetilde {n}\times n}\) with \(\widetilde {n}\leq n\) be a matrix whose n columns form a tight frame. A measurement matrix \(\Phi \in \mathbb {R}^{\widetilde {m}\times \widetilde {n}}\) is said to satisfy the block Ψ-RIP with constant \(\delta_{k|d}\) (block Ψ-RIC) if

$$\begin{array}{*{20}l} (1-\delta_{k|d})\|\Psi\boldsymbol{x}\|_{2}^{2}\leq\|\Phi\Psi\boldsymbol{x}\|_{2}^{2}\leq(1+\delta_{k|d})\|\Psi\boldsymbol{x}\|_{2}^{2} \end{array} $$

holds for every vector \(\boldsymbol {x}\in \mathbb {R}^{n}\) that is block k-sparse.

Definition 2

The block Ψ-restricted orthogonality constant (block Ψ-ROC), denoted by \(\theta _{(k_{1},k_{2})|d}\), is the smallest positive number that satisfies

$$\begin{array}{*{20}l} |\langle \Phi\Psi\boldsymbol{x}_{1}, \Phi\Psi\boldsymbol{x}_{2}\rangle-\langle \Psi\boldsymbol{x}_{1},\Psi\boldsymbol{x}_{2}\rangle|\leq\theta_{(k_{1},k_{2})|d}\|\boldsymbol{x}_{1}\|_{2}\|\boldsymbol{x}_{2}\|_{2} \end{array} $$

for every x1 and x2 such that x1 and x2 are block k1-sparse and block k2-sparse, respectively.

It is easy to see that if one sets Ψ to be the identity matrix of size \(\widetilde {n}\times \widetilde {n}\), then the above definitions reduce to the well-known block-RIC and block-ROC definitions. Furthermore, if one additionally sets the block size d=1, then we recover the classical RIC and ROC definitions. Thus the block-RIC/block-ROC and the classical RIC/ROC definitions are just special cases of Definitions 1 and 2, respectively. In addition, we also need the following definition, which plays a key role in our theorem.
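Definition 1 can be probed empirically. The sketch below (our helper, not from the paper) estimates a Monte Carlo lower bound on \(\delta_{k|d}\): since it only samples finitely many random block k-sparse vectors, it lower-bounds the true constant, which is a supremum over all of them.

```python
import numpy as np

def estimate_block_psi_ric(Phi, Psi, k, d, trials=2000, seed=0):
    """Monte Carlo lower bound on the block Psi-RIC delta_{k|d}:
    max over sampled block k-sparse x of | ||Phi Psi x||^2 / ||Psi x||^2 - 1 |."""
    rng = np.random.default_rng(seed)
    m_blocks = Psi.shape[1] // d
    delta = 0.0
    for _ in range(trials):
        x = np.zeros(Psi.shape[1])
        support = rng.choice(m_blocks, size=k, replace=False)
        for i in support:
            x[i * d:(i + 1) * d] = rng.standard_normal(d)
        f = Psi @ x
        ratio = np.linalg.norm(Phi @ f) ** 2 / np.linalg.norm(f) ** 2
        delta = max(delta, abs(ratio - 1.0))
    return delta

# an orthonormal Phi preserves all norms, so the estimate is exactly zero
delta0 = estimate_block_psi_ric(np.eye(8), np.eye(8), k=2, d=2, trials=100)
print(delta0)  # 0.0
```

For a Gaussian Φ with normalized columns, the estimate grows with k, mirroring how the Ψ-RIC constrains how large a block sparsity the theory can tolerate.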

Definition 3

Given \(\boldsymbol {x}\in \mathbb {R}^{n}\), we denote the best k-block approximation of x as

$$\begin{array}{*{20}l} \boldsymbol{x}_{[k]}=\arg\min_{\|\boldsymbol{u}\|_{2,0}\leq k}\|\boldsymbol{x}-\boldsymbol{u}\|_{2,1},~~\boldsymbol{u}\in\mathbb{R}^{n}. \end{array} $$

For convenience, in the remainder of this paper, we use δk and \(\theta _{k_{1},k_{2}}\), instead of δk|d and \(\theta _{(k_{1},k_{2})|d}\), to represent the block Ψ-RIC and block Ψ-ROC, respectively, whenever confusion is not caused.
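Since the \(\|\cdot\|_{2,1}\)-minimizer in Definition 3 is obtained by keeping the k blocks of largest ℓ2 norm and zeroing the rest, x[k] can be computed directly; a sketch (function name `best_k_block_approx` is ours):

```python
import numpy as np

def best_k_block_approx(x, d, k):
    """Best k-block approximation x_[k] (Definition 3): keep the k blocks
    with largest l2 norm, zero out the others."""
    blocks = np.asarray(x, dtype=float).reshape(-1, d)
    l2 = np.linalg.norm(blocks, axis=1)
    keep = np.argsort(l2)[-k:]          # indices of the k largest blocks
    out = np.zeros_like(blocks)
    out[keep] = blocks[keep]
    return out.ravel()

# blocks of size 2 with l2 norms 1, 5, ~0.14, 6 -> keep blocks 2 and 4
x = np.array([1.0, 0.0, 3.0, 4.0, 0.1, 0.1, 0.0, 6.0])
print(best_k_block_approx(x, 2, 2))
```

The quantity \(\|\Psi^{T}\boldsymbol{f}-(\Psi^{T}\boldsymbol{f})_{[k]}\|_{2,q}\) appearing in the error bound is then just the mixed norm of the discarded blocks.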

Next, we present our main results, which are included in the following theorem.

Theorem 1

Let k1 and k2 be two positive integers such that \(0\leq 8(k_{1}-k)\leq k_{2}\), and denote \(t=2^{1/q-1}(k_{1}/k_{2})^{1/q-1/2}+\sqrt {k_{2}/k_{1}}[(q/2)^{q/(2-q)}-(q/2)^{2/(2-q)}]-(1+2^{1/q-1})\sqrt {k_{2}/k_{1}}\left [(k_{1}-k)/k_{2}\right ]^{1/q}\) with 0<q≤1. If the matrix Φ satisfies

$$\begin{array}{*{20}l} \delta_{k_{1}}+ t\theta_{k_{1},k_{2}}<1, \end{array} $$

where \(\delta_{k_{1}}\) and \(\theta_{k_{1},k_{2}}\) are defined in Definitions 1 and 2, then the solution f♯ to problem (1) obeys

$$\begin{array}{*{20}l} \left\|\boldsymbol{f}-\boldsymbol{f}^{\sharp}\right\|_{2}\leq C_{1}\epsilon+C_{2}\left\|\Psi^{T}\boldsymbol{f}-\left(\Psi^{T}\boldsymbol{f}\right)_{[k]}\right\|_{2,q}, \end{array} $$


where

$$\begin{array}{*{20}l} C_{1}&=\frac{2\sqrt{(1+\delta_{k_{1}})\left((2k_{1})^{1/q-1}+1\right)}}{1-\delta_{k_{1}}-t\theta_{k_{1},k_{2}}},\\ C_{2}&= \frac{2^{2/q-1}(k_{2})^{1/2-1/q}\theta_{k_{1},k_{2}}\sqrt{(2k_{1})^{1/q-1}+1}}{1-\delta_{k_{1}}-t\theta_{k_{1},k_{2}}}\\ &\quad+\frac{2^{2/q-2}}{\sqrt{k_{1}\left((2k_{1})^{1/q-1}+1\right)}}. \end{array} $$

In what follows, we present two remarks on the established results; the proof of the theorem is given in the appendix.

Remark 1

Theorem 1 presents a sufficient condition for robustly recovering general signals via the ℓ2/ℓq-analysis method with 0<q≤1. The obtained sufficient condition associates the block Ψ-RIC and block Ψ-ROC with different \(q\in(0,1]\), and provides a series of selectable conditions for robust recovery of general signals that can be expressed as block-sparse vectors. Since condition (2) is related to t, which is a complex combination of k1, k2, k, and q, it is difficult to analyze the obtained condition intuitively. By choosing some representative values, we therefore derive a series of new conditions, as detailed in Table 1.

Table 1 Different sufficient conditions related to k1, k2, and q

Remark 2

Inequality (3) indicates that the reconstruction error \(\|\boldsymbol{f}-\boldsymbol{f}^{\sharp}\|_{2}\) can be bounded by the best k-block approximation error and the noise level ε. In the special case where ε=0 and the original signal f can be expressed as a block k-sparse vector with a fixed Ψ, i.e., \(\|\Psi^{T}\boldsymbol{f}\|_{2,0}\leq k\), if the matrix Φ satisfies (2), then solving problem (1) leads to exact recovery of the original signal f.

Numerical experiments and results

In this section, we conduct some numerical experiments to evaluate the performance of our ℓ2/ℓq (0<q<1)-analysis method. An IRLS algorithm is first proposed to solve the induced ℓ2/ℓq (0<q<1)-analysis problem. We then compare our ℓ2/ℓq (0<q<1)-analysis method with other analysis-style methods, including ℓ2/ℓ1-analysis [17] and ℓq (0<q≤1)-analysis [26].

An IRLS algorithm for the ℓ2/ℓq-analysis problem

In order to solve the ℓ2/ℓq-analysis problem (1) with 0<q≤1, we derive an efficient analysis-style IRLS algorithm. The proposed algorithm can be seen as a natural extension of the traditional IRLS algorithm [21, 27] for sparse problems. We first rewrite problem (1) as

$$\begin{array}{*{20}l} \min_{\boldsymbol{f}\in\mathbb{R}^{N}}\left\|\Psi^{T}\boldsymbol{f}\right\|_{2,q}^{\epsilon}+\frac{1}{2\lambda}\|\boldsymbol{y}-\Phi\boldsymbol{f}\|_{2}^{2}, \end{array} $$

where λ is a regularization parameter and \(\|\Psi ^{T}\boldsymbol {f}\|_{2,q}^{\epsilon }=\sum \limits _{i=1}^{m}\left (\epsilon ^{2}+\|{\Psi [i]^{T}}\boldsymbol{f}\|_{2}^{2}\right)^{\frac {q}{2}}\).

Using the first-order optimality condition on (4), we have

$$\begin{array}{*{20}l} {}\sum_{i=1}^{m}\frac{q\Psi[i]{\Psi[i]^{T}}}{\left(\epsilon^{2}+\|{\Psi[i]^{T}}\widetilde{\boldsymbol{f}}\|_{2}^{2}\right)^{1-q/2}}\widetilde{\boldsymbol{f}} +\frac{1}{\lambda}\left(\Phi^{T}\Phi\widetilde{\boldsymbol{f}}-\Phi^{T}\boldsymbol{y}\right)=0, \end{array} $$

where \(\widetilde {\boldsymbol {f}}\) denotes a critical point of (4). Due to the non-linearity of the above equation, there is no straightforward way to obtain an exact solution of (5). However, using numerical techniques, one can approximate a solution of (5) well. Following the ideas in [21, 27], we adopt the iterative procedure

$$\begin{array}{*{20}l} {}\left\{\!\sum_{i=1}^{m}\frac{q\lambda\Psi[i]{\Psi[i]^{T}}}{\left[\left(\epsilon^{(t)}\right)^{2}+\left\|{\Psi[i]^{T}}\boldsymbol{f}^{(t)}\right\|_{2}^{2}\right]^{1-q/2}}\,+\,\Phi^{T}\Phi\!\right\}\boldsymbol{f}^{(t+1)}\,=\,\Phi^{T}\boldsymbol{y}, \end{array} $$

which is implemented by Algorithm 1.
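A minimal NumPy sketch of iteration (6) is given below, with Ψ treated as a generic column-block matrix. The halving ε schedule, the initial guess Φᵀy, and the fixed iteration count are illustrative choices of ours; Algorithm 1's exact settings are not reproduced here.

```python
import numpy as np

def irls_l2lq_analysis(Phi, Psi, y, d, q=0.5, lam=1e-3, iters=50, eps0=1.0):
    """Sketch of the analysis IRLS iteration (6):
    { sum_i q*lam*Psi[i] Psi[i]^T / (eps^2 + ||Psi[i]^T f||^2)^(1-q/2)
      + Phi^T Phi } f^(t+1) = Phi^T y."""
    n_t = Psi.shape[0]
    m_blocks = Psi.shape[1] // d
    f = Phi.T @ y                       # simple initial guess (our choice)
    PtP, Pty = Phi.T @ Phi, Phi.T @ y
    eps = eps0
    for _ in range(iters):
        W = np.zeros((n_t, n_t))
        for i in range(m_blocks):
            Bi = Psi[:, i * d:(i + 1) * d]       # i-th column-block Psi[i]
            ri = eps ** 2 + np.linalg.norm(Bi.T @ f) ** 2
            W += (q * lam / ri ** (1 - q / 2)) * (Bi @ Bi.T)
        f = np.linalg.solve(W + PtP, Pty)        # weighted least squares step
        eps = max(eps / 2, 1e-8)                 # shrink the smoothing parameter
    return f
```

Each pass re-weights the quadratic surrogate using the current block norms, so small blocks are penalized increasingly heavily while blocks carrying signal keep a negligible weight.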

Experimental settings

Throughout the experiments, the measurement matrix Φ is generated by creating an \(\widetilde {m}\times \widetilde {n}\) Gaussian matrix with \(\widetilde {m}=64\) and \(\widetilde {n}=256\), and the overcomplete tight frame Ψ is generated by taking the first \(\widetilde {n}\) rows from an n×n Hadamard matrix with n=512. The original signal f is synthesized as f=Ψx, where x is a block k-sparse signal with block size d=4. The entries of the noise vector e are drawn from a Gaussian distribution with mean 0 and standard deviation 0.05. We consider four different values q=0.1, 0.5, 0.7, 1 for both the ℓ2/ℓq-analysis and ℓq-analysis methods. The relative error between the reconstructed signal \(\boldsymbol{f}^{\sharp}\) and the original signal f is calculated as \(\|\boldsymbol{f}-\boldsymbol{f}^{\sharp}\|_{2}/\|\boldsymbol{f}\|_{2}\).
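The setup above can be reproduced as follows. The 1/√m̃ scaling of Φ, the 1/√n rescaling that makes Ψ a Parseval frame, and the random seed are our choices; the dimensions come from the text.

```python
import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(1)
m_t, n_t, n, d, k = 64, 256, 512, 4, 5       # dimensions from the text

# Gaussian measurement matrix Phi
Phi = rng.standard_normal((m_t, n_t)) / np.sqrt(m_t)

# tight frame Psi: first n_t rows of an n x n Hadamard matrix, rescaled
# so that Psi @ Psi.T = I (rows of a Hadamard matrix are orthogonal)
Psi = hadamard(n)[:n_t, :] / np.sqrt(n)

# block k-sparse coefficient vector x with block size d
x = np.zeros(n)
support = rng.choice(n // d, size=k, replace=False)
for i in support:
    x[i * d:(i + 1) * d] = rng.standard_normal(d)

f = Psi @ x                                  # synthesized signal
e = rng.normal(0.0, 0.05, size=m_t)          # Gaussian noise, std 0.05
y = Phi @ f + e                              # noisy measurements
```

The tight-frame identity Ψ Ψᵀ = I holds exactly here because the 256 selected Hadamard rows are mutually orthogonal with norm √512.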

Experimental results

In order to find the value of λ that minimizes the relative error, we conduct two sets of trials. Figure 1a depicts the relative error versus λ for recovering the signals f, which can be expressed as block 5-sparse signals in terms of Ψ. It is easy to see that choosing λ less than 1×10−2 is appropriate. Similar results can also be found in Fig. 1b. Without loss of generality, we take λ=1×10−3 as the best regularization parameter value.

Fig. 1

Selection of the regularization parameter λ over the range from 1×10−6 to 1. a k=5. b q=0.5

Next, we compare our ℓ2/ℓq (0<q<1)-analysis method with the ℓ2/ℓ1-analysis and ℓq (0<q≤1)-analysis methods. The results are depicted in Fig. 2.

Fig. 2

Comparison of recovery performance among the non-convex ℓ2/ℓq (0<q<1)-analysis, ℓ2/ℓ1-analysis, and ℓq (0<q≤1)-analysis methods

It is easy to see that the ℓ2/ℓq (0<q<1)-analysis method is far superior to the ℓ2/ℓ1-analysis method. Take for example the ℓ2/ℓ0.5-analysis method when k=8. The relative error from the ℓ2/ℓ0.5-analysis method is 0.016, which is about 16 times smaller than that of the ℓ2/ℓ1-analysis method (0.257). Additionally, in terms of the non-convex strategy, a proper value of q contributes to better performance of both the ℓ2/ℓq (0<q≤1)-analysis and ℓq (0<q≤1)-analysis methods. However, with increasing block sparsity k, the three methods tend to perform similarly. Moreover, when it comes to recovering signals that can be expressed as block-sparse coefficient vectors based on Ψ, our method performs better than the other two methods. An instance is also presented in Fig. 3, which displays the recovery of the signal f synthesized from a block 7-sparse vector based on Ψ via the ℓ2/ℓq (0<q≤1)-analysis and ℓq (0<q≤1)-analysis methods, respectively.

Fig. 3

An instance of recovering a signal, which can be expressed as a block 5-sparse vector in terms of Ψ, for different recovery methods. The solid line with pentagram markers represents the original signal, and the dotted line represents the recovered signal. a ℓ2/ℓ0.5-analysis method. b ℓ2/ℓ1-analysis method. c ℓ0.5-analysis method. d ℓ1-analysis method

Conclusion and discussion

This paper mainly investigates an ℓ2/ℓq (0<q≤1)-analysis method to recover a general signal that can be expressed as a block-sparse vector in terms of an overcomplete and tight frame. To the best of our knowledge, this is the first theoretical characterization of the proposed non-convex ℓ2/ℓq-analysis method with 0<q<1. Specifically, the obtained results contribute to CS in the following three aspects:

  • We proposed an ℓ2/ℓq-analysis method to recover a general signal that can be expressed as a block-sparse vector in a certain frame, generalizing both the traditional CS methods for recovering sparse signals and the recent analysis methods for recovering general signals.

  • Based on the proposed ℓ2/ℓq-analysis method, we established a sufficient condition for robust recovery of general signals that can be expressed as block-sparse signals, which associates the block Ψ-RIC and block Ψ-ROC with different values of \(q\in(0,1]\), providing a series of selectable conditions related to q.

  • We derived an analysis-style IRLS algorithm to solve the proposed problem and compared our method with other representative methods, obtaining convincing results.

There are still some issues left for future work. For example, one could consider establishing sharp recovery conditions for our ℓ2/ℓq (0<q<1)-analysis method, and one could also consider replacing our ℓ2/ℓq (0<q<1)-analysis method with other, more general non-convex methods.


Appendix

Theorem 1 is proved as follows.

Let f♯=f+h be a solution of (1), where f is the original signal. Write \(\Psi^{T}\boldsymbol{h}=(\boldsymbol{c}[1],\boldsymbol{c}[2],\cdots,\boldsymbol{c}[m])^{T}\) and rearrange the block indices such that \(\|\boldsymbol{c}[1]\|_{2}\geq\|\boldsymbol{c}[2]\|_{2}\geq\cdots\geq\|\boldsymbol{c}[m]\|_{2}\). Let \({\widetilde {\Omega }}=\{1,2,\cdots,k\}\) and let Ω be the index set of the k blocks of \(\Psi^{T}\boldsymbol{f}\) with largest ℓ2 norm. We denote by Ωc the complement of Ω in {1,2,⋯,m}. For convenience, we use \(\Psi ^{T}_{{\widetilde {\Omega }}}\) to denote \((\Psi _{{\widetilde {\Omega }}})^{T}\), where \(\Psi_{{\widetilde {\Omega }}}\) is the matrix Ψ restricted to the column-blocks indexed by \({\widetilde {\Omega }}\). We then partition {1,2,⋯,m} into the following sets

$$\begin{array}{*{20}l} {\Omega_{0}}=&\{1,2,\cdots,k_{1}\},\\ {\Omega_{1}}=&\{k_{1}+1,k_{1}+2,\cdots,k_{1}+k_{2}\},\\ {\Omega_{2}}=&\{k_{1}+k_{2}+1,k_{1}+k_{2}+2,\cdots,k_{1}+2k_{2}\},\\ \cdots \end{array} $$

Since f is a minimizer of (1), we have

$$\begin{array}{*{20}l} {}\left\|\Psi^{T}_{\Omega}\boldsymbol{f}\right\|_{2,q}^{q}&+\left\|\Psi^{T}_{\Omega^{\mathrm{c}}}\boldsymbol{f}\right\|_{2,q}^{q}\\ &=\!\left\|\Psi^{T}\boldsymbol{f}\right\|_{2,q}^{q}\!\geq\! \left\|\Psi^{T}_{\Omega}(\boldsymbol{f}\,+\,\boldsymbol{h})\right\|_{2,q}^{q}\,+\,\left\|\Psi^{T}_{\Omega^{\mathrm{c}}}(\boldsymbol{f}+\boldsymbol{h})\right\|_{2,q}^{q}\\ &\geq\!\left\|\Psi^{T}_{\Omega}\boldsymbol{f}\right\|_{2,q}^{q}\,-\,\left\|\!\Psi^{T}_{\Omega}\boldsymbol{h}\!\right\|_{2,q}^{q}\,+\,\left\|\!\Psi^{T}_{\Omega^{\mathrm{c}}}\boldsymbol{h}\!\right\|_{2,q}^{q} \,-\,\left\|\!\Psi^{T}_{\Omega^{\mathrm{c}}}\boldsymbol{f}\!\right\|_{2,q}^{q}. \end{array} $$

This implies

$$\begin{array}{*{20}l} \left\|\Psi^{T}_{\Omega^{\mathrm{c}}}\boldsymbol{h}\right\|_{2,q}^{q}\leq2\left\|\Psi^{T}_{\Omega^{\mathrm{c}}}\boldsymbol{f}\right\|_{2,q}^{q}+\left\|\Psi^{T}_{\Omega}\boldsymbol{h}\right\|_{2,q}^{q}. \end{array} $$

Note that \(\left \|\Psi ^{T}_{{\widetilde {\Omega }}}\boldsymbol {h}\right \|_{2,q}\geq \left \|\Psi ^{T}_{\Omega }\boldsymbol {h}\right \|_{2,q}\) and \(\left \|\Psi ^{T}_{{\widetilde {\Omega }}^{\mathrm {c}}}\boldsymbol {h}\right \|_{2,q}\leq \left \|\Psi ^{T}_{\Omega ^{\mathrm {c}}}\boldsymbol {h}\right \|_{2,q}\), and thus it follows from (6) that

$$\begin{array}{*{20}l} \left\|\Psi_{{\widetilde{\Omega}}^{\mathrm{c}}}^{T}\boldsymbol{h}\right\|_{2,q}^{q}\leq\left\|\Psi_{{\widetilde{\Omega}}}^{T}\boldsymbol{h}\right\|_{2,q}^{q}+2\left\|\Psi_{\Omega^{\mathrm{c}}}^{T}\boldsymbol{f}\right\|_{2,q}^{q}. \end{array} $$

Further, we have

$$\begin{array}{*{20}l} \left\|\Psi_{{\widetilde{\Omega}}^{\mathrm{c}}}^{T}\boldsymbol{h}\right\|_{2,q}\leq2^{1/q-1}\left\|\Psi_{{\widetilde{\Omega}}}^{T}\boldsymbol{h}\right\|_{2,q}+2^{2/q-1}\left\|\Psi_{\Omega^{\mathrm{c}}}^{T}\boldsymbol{f}\right\|_{2,q}. \end{array} $$
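The step leading to (7) can be written out explicitly: since \(s\mapsto s^{1/q}\) is convex for 0<q≤1, we have \((a+b)^{1/q}\leq2^{1/q-1}(a^{1/q}+b^{1/q})\) for a,b≥0, and hence

```latex
% From the previous display to (7), using (a+b)^{1/q} <= 2^{1/q-1}(a^{1/q}+b^{1/q}):
\begin{aligned}
\left\|\Psi^{T}_{\widetilde{\Omega}^{\mathrm{c}}}\boldsymbol{h}\right\|_{2,q}
&=\Big(\left\|\Psi^{T}_{\widetilde{\Omega}^{\mathrm{c}}}\boldsymbol{h}\right\|_{2,q}^{q}\Big)^{1/q}
\leq\Big(\left\|\Psi^{T}_{\widetilde{\Omega}}\boldsymbol{h}\right\|_{2,q}^{q}
      +2\left\|\Psi^{T}_{\Omega^{\mathrm{c}}}\boldsymbol{f}\right\|_{2,q}^{q}\Big)^{1/q}\\
&\leq 2^{1/q-1}\Big(\left\|\Psi^{T}_{\widetilde{\Omega}}\boldsymbol{h}\right\|_{2,q}
      +2^{1/q}\left\|\Psi^{T}_{\Omega^{\mathrm{c}}}\boldsymbol{f}\right\|_{2,q}\Big)
 =2^{1/q-1}\left\|\Psi^{T}_{\widetilde{\Omega}}\boldsymbol{h}\right\|_{2,q}
      +2^{2/q-1}\left\|\Psi^{T}_{\Omega^{\mathrm{c}}}\boldsymbol{f}\right\|_{2,q}.
\end{aligned}
```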

Using the inequality involving the ℓ2 and ℓq (0<q≤1) norms (see [28], Lemma 3, and Footnote 1), it is easy to obtain that

$$\begin{array}{*{20}l} {}\left\|\Psi_{{\Omega_{j}}}^{T}\boldsymbol{h}\right\|_{2} & \leq(k_{2})^{1/2-1/q}\left\|\Psi_{{\Omega_{j}}}^{T}\boldsymbol{h}\right\|_{2,q}\\ &+\!P_{q}\sqrt{k_{2}}\!\left\{\!\|\boldsymbol{c}[k_{1}\,+\,(j\,-\,1)k_{2}\,+\,1]\!\|_{2}\,-\,\|\boldsymbol{c}[k_{1}\!+j k_{2}]\|_{2}\right\} \end{array} $$

holds for any j≥1, where \(P_{q}=(q/2)^{q/(2-q)}-(q/2)^{2/(2-q)}\). Thus, summing these terms yields

$$\begin{aligned} {}\sum_{j\geq1}\left\|\Psi_{{\Omega_{j}}}^{T}\boldsymbol{h}\right\|_{2}&\leq(k_{2})^{1/2-1/q}\sum_{j\geq1}\left\|\Psi_{{\Omega_{j}}}^{T}\boldsymbol{h}\right\|_{2,q}\\ &\quad+P_{q}\sqrt{k_{2}}\|\boldsymbol{c}[k_{1}+1]\|_{2}\\ &\leq(k_{2})^{1/2-1/q}\left\{\left\|\Psi_{{\widetilde{\Omega}}^{\mathrm{c}}}^{T}\boldsymbol{h}\right\|_{2,q}-(k_{1}-k)^{1/q}\|\boldsymbol{c}[k_{1}+1]\|_{2}\right\}\\ &\quad+P_{q}\sqrt{k_{2}}\|\boldsymbol{c}[k_{1}+1]\|_{2}. \end{aligned} $$

This, along with (7), thus gives

$${}\begin{aligned} \sum_{j\geq1}\left\|\Psi_{{\Omega_{j}}}^{T}\boldsymbol{h}\right\|_{2}&\leq(k_{2})^{1/2-1/q}\left\|\Psi_{{\widetilde{\Omega}}^{\mathrm{c}}}^{T}\boldsymbol{h}\right\|_{2,q} \\ &\quad+\left[P_{q}\sqrt{k_{2}}-(k_{2})^{1/2-1/q}(k_{1}-k)^{1/q}\right]\|\boldsymbol{c}[k_{1}+1]\|_{2}\\ &\leq2^{1/q-1}(k_{2})^{1/2-1/q}\left\|\Psi_{{\widetilde{\Omega}}}^{T}\boldsymbol{h}\right\|_{2,q}\\ &\quad+2^{2/q-1}(k_{2})^{1/2-1/q}\left\|\Psi_{\Omega^{\mathrm{c}}}^{T}\boldsymbol{f}\right\|_{2,q}\\ &\quad+\left[P_{q}\sqrt{k_{2}}-(k_{2})^{1/2-1/q}(k_{1}-k)^{1/q}\right]\|\boldsymbol{c}[k_{1}+1]\|_{2}\\ &\leq2^{1/q-1}(k_{2})^{1/2-1/q}\left\{\left\|\Psi_{{\Omega_{0}}}^{T}\boldsymbol{h}\right\|_{2,q}-(k_{1}-k)^{1/q}\|\boldsymbol{c}[k_{1}+1]\|_{2}\right\}\\ &\quad+2^{2/q-1}(k_{2})^{1/2-1/q}\left\|\Psi_{\Omega^{\mathrm{c}}}^{T}\boldsymbol{f}\right\|_{2,q}\\ &\quad+\left[P_{q}\sqrt{k_{2}}-(k_{2})^{1/2-1/q}(k_{1}-k)^{1/q}\right]\|\boldsymbol{c}[k_{1}+1]\|_{2}\\ &=2^{1/q-1}(k_{2})^{1/2-1/q}\left\|\Psi_{{\Omega_{0}}}^{T}\boldsymbol{h}\right\|_{2,q}\\ &\quad+2^{2/q-1}(k_{2})^{1/2-1/q}\left\|\Psi_{\Omega^{\mathrm{c}}}^{T}\boldsymbol{f}\right\|_{2,q}\\ &\quad+\left[P_{q}\sqrt{k_{2}}-(k_{2})^{1/2-1/q}(k_{1}-k)^{1/q}\right.\\ &\quad\left.-2^{1/q-1}(k_{2})^{1/2-1/q}(k_{1}-k)^{1/q}\right]\|\boldsymbol{c}[k_{1}+1]\|_{2}. \end{aligned} $$

Since \(\|\Psi _{{\Omega _{0}}}^{T}\boldsymbol {h}\|_{2,q}\leq (k_{1})^{1/q-1/2}\|\Psi _{{\Omega _{0}}}^{T}\boldsymbol {h}\|_{2}\) and \(\|\boldsymbol {c}[k_{1}+1]\|_{2}^{2}\leq \|\Psi _{{\Omega _{0}}}^{T}\boldsymbol {h}\|_{2}^{2}/k_{1}\), we can get

$$\begin{array}{*{20}l} {}\sum_{j\geq1}\|\Psi_{{\Omega_{j}}}^{T}\boldsymbol{h}\|_{2}\leq t \|\Psi_{{\Omega_{0}}}^{T}\boldsymbol{h}\|_{2}+2^{2/q-1}(k_{2})^{1/2-1/q}\|\Psi_{\Omega^{\mathrm{c}}}^{T}\boldsymbol{f}\|_{2,q}, \end{array} $$

where \(t=2^{1/q-1}(k_{1}/k_{2})^{1/q-1/2}+P_{q}\sqrt {k_{2}/k_{1}}-(1+2^{1/q-1})\sqrt {k_{2}/k_{1}}\left [(k_{1}-k)/k_{2}\right ]^{1/q}\).

In fact, to make (8) work, one way is to set

$$\begin{array}{*{20}l} {}P_{q}\sqrt{k_{2}/k_{1}}-\left(1+2^{1/q-1}\right)\sqrt{k_{2}/k_{1}}\left[(k_{1}-k)/k_{2}\right]^{1/q}\geq0,\\ \text{~for all~} 0< q\leq1, \end{array} $$

which is equivalent to

$$\begin{array}{*{20}l} 0\leq\frac{k_{1}-k}{k_{2}}\leq\left[\frac{(q/2)^{q/(2-q)}-(q/2)^{2/(2-q)}}{2^{1/q-1}+1}\right]^{q}. \end{array} $$

To this end, we have to estimate the minimal value of

$$\begin{array}{*{20}l} {}f(q)\!=\left[\frac{(q/2)^{q/(2-q)}-(q/2)^{2/(2-q)}}{2^{1/q-1}+1}\right]^{q},\ \text{where~}0< q\leq1. \end{array} $$

By a direct calculation, we can deduce that f(q) attains its minimum value 0.125 at q=1. An auxiliary result is depicted in Fig. 4.

A direct consequence is that if the condition \(0\leq \frac {k_{1}-k}{k_{2}}\leq 0.125\) holds, then (8) follows easily for all 0<q≤1.
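This claim is easy to check numerically. The sketch below (function name `f_of_q` is ours) evaluates f(q) from (9) on the same grid used in Fig. 4:

```python
import numpy as np

def f_of_q(q):
    """f(q) = [((q/2)^(q/(2-q)) - (q/2)^(2/(2-q))) / (2^(1/q-1) + 1)]^q, eq. (9)."""
    q = np.asarray(q, dtype=float)
    num = (q / 2) ** (q / (2 - q)) - (q / 2) ** (2 / (2 - q))
    den = 2 ** (1 / q - 1) + 1
    return (num / den) ** q

# evaluate on the grid from Fig. 4: q = 0.001, 0.002, ..., 1
qs = np.arange(0.001, 1.0005, 0.001)
vals = f_of_q(qs)
print(f_of_q(1.0))            # 0.125, the minimum over (0, 1]
print(qs[np.argmax(vals)])    # location of the maximum, about q = 0.16
```

At q=1 the numerator is 1/2 − 1/4 = 1/4 and the denominator is 2, giving exactly 0.125, which matches the threshold used above.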

As a consequence of Definitions 1 and 2, we have

$$\begin{array}{*{20}l} {}\left\langle\Phi\boldsymbol{h},\Phi\Psi\Psi_{{\Omega_{0}}}^{T}\boldsymbol{h}\right\rangle&=\left\|\Phi\Psi\Psi_{{\Omega_{0}}}^{T}\boldsymbol{h}\right\|_{2}^{2}\\ &\quad+\sum_{j\geq1}\left\langle\Phi \Psi\Psi_{{\Omega_{j}}}^{T}\boldsymbol{h},\Phi\Psi\Psi_{{\Omega_{0}}}^{T}\boldsymbol{h}\right\rangle\\ &\geq\left(1-\delta_{k_{1}}\right)\left\|\Psi\Psi_{{\Omega_{0}}}^{T}\boldsymbol{h}\right\|_{2}^{2}\\ &\quad+\sum_{j\geq1}\left\langle\Psi\Psi_{{\Omega_{j}}}^{T}\boldsymbol{h}, \Psi\Psi_{{\Omega_{0}}}^{T}\boldsymbol{h}\right\rangle\\ &\quad-\theta_{k_{1},k_{2}}\left\|\Psi_{{\Omega_{0}}}^{T}\boldsymbol{h}\right\|_{2}\sum_{j\geq1}\left\|\Psi_{{\Omega_{j}}}^{T}\boldsymbol{h}\right\|_{2}. \end{array} $$

By applying the equality

$$\begin{array}{*{20}l} {}\sum_{j\geq1}\left\langle\Psi\Psi_{{\Omega_{j}}}^{T}\boldsymbol{h}, \Psi\Psi_{{\Omega_{0}}}^{T}\boldsymbol{h}\right\rangle&=\left\langle \boldsymbol{h}-\Psi\Psi_{{\Omega_{0}}}^{T}\boldsymbol{h}, \Psi\Psi_{{\Omega_{0}}}^{T}\boldsymbol{h}\right\rangle\\ &=\left\|\Psi_{{\Omega_{0}}}^{T}\boldsymbol{h}\right\|_{2}^{2}-\left\|\Psi\Psi_{{\Omega_{0}}}^{T}\boldsymbol{h}\right\|_{2}^{2}, \end{array} $$

to the above inequality, we get

$$ {}\begin{aligned} &\left\langle\Phi\boldsymbol{h},\Phi\Psi\Psi_{{\Omega_{0}}}^{T}\boldsymbol{h}\right\rangle\\ &\quad\geq\left\|\Psi_{{\Omega_{0}}}^{T}\boldsymbol{h}\right\|_{2}^{2} -\delta_{k_{1}}\left\|\Psi\Psi_{{\Omega_{0}}}^{T}\boldsymbol{h}\right\|_{2}^{2}\\ &\quad\quad-\theta_{k_{1},k_{2}}\left\|\Psi_{{\Omega_{0}}}^{T}\boldsymbol{h}\right\|_{2}\sum_{j\geq1}\left\|\Psi_{{\Omega_{j}}}^{T}\boldsymbol{h}\right\|_{2}\\ &\quad\geq\left(1-\delta_{k_{1}}-t \theta_{k_{1},k_{2}}\right)\left\|\Psi_{{\Omega_{0}}}^{T}\boldsymbol{h}\right\|_{2}^{2}\\ &\quad\quad-2^{2/q-1}(k_{2})^{1/2-1/q}\theta_{k_{1},k_{2}}\left\|\Psi_{{\Omega_{0}}}^{T}\boldsymbol{h}\right\|_{2}\left\|\Psi_{\Omega^{\mathrm{c}}}^{T}\boldsymbol{f}\right\|_{2,q}. \end{aligned} $$
Fig. 4

Plot of f(·) (given in (9)) with respect to q from 0.001 to 1 with step size 0.001. One can easily find that the function f(q) attains its maximum (around 0.5300) at about q=0.16 and its minimum value (around 0.125) at q=1. Besides, when q=0.5 the value of f(q) is about 0.3969. In other words, f(q)≥f(1) for any \(q\in(0,1]\)

By the feasibility of f, we have

$$\begin{array}{*{20}l} {}\|\Phi\boldsymbol{h}\|_{2}=\|\Phi(\boldsymbol{f}^{\sharp}-\boldsymbol{f})\|_{2}\leq\|\Phi\boldsymbol{f}^{\sharp}-\boldsymbol{y}\|_{2}+\|\Phi\boldsymbol{f}-\boldsymbol{y}\|_{2}\leq2\epsilon. \end{array} $$


Combining this with the Cauchy–Schwarz inequality and the block Ψ-RIP gives

$$\begin{array}{*{20}l} {}\langle\Phi\boldsymbol{h},\Phi\Psi\Psi_{{\Omega_{0}}}^{T}\boldsymbol{h}\rangle&\leq\|\Phi\boldsymbol{h}\|_{2}\|\Phi\Psi\Psi_{{\Omega_{0}}}^{T}\boldsymbol{h}\|_{2}\\ &\leq2\epsilon\sqrt{1+\delta_{k_{1}}}\|\Psi_{{\Omega_{0}}}^{T}\boldsymbol{h}\|_{2}. \end{array} $$

It then follows from (10) and (11) that

$$ {}\begin{aligned} &\left\|\Psi_{{\Omega_{0}}}^{T}\boldsymbol{h}\right\|_{2}\leq\frac{1}{1-\delta_{k_{1}}-t \theta_{k_{1},k_{2}}}\\ &\quad\left[2\epsilon\sqrt{1+\delta_{k_{1}}}+2^{2/q-1}(k_{2})^{1/2-1/q}\theta_{k_{1},k_{2}}\|\Psi_{\Omega^{\mathrm{c}}}^{T}\boldsymbol{f}\|_{2,q}\right]. \end{aligned} $$

By (7), it is easy to see that

$${}\begin{aligned} \left\|\Psi_{{\Omega_{0}}^{\mathrm{c}}}^{T}\boldsymbol{h}\right\|_{2}^{2}&\leq\left\|\Psi_{{\Omega_{0}}^{\mathrm{c}}}^{T}\boldsymbol{h}\right\|_{2,q}\frac{\left\|\Psi_{{\Omega_{0}}}^{T}\boldsymbol{h}\right\|_{2,1}}{k_{1}}\\ &\leq\left\|\Psi_{{\widetilde{\Omega}}^{\mathrm{c}}}^{T}\boldsymbol{h}\right\|_{2,q}\frac{\left\|\Psi_{{\Omega_{0}}}^{T}\boldsymbol{h}\right\|_{2,1}}{k_{1}}\\ &\leq\left(2^{1/q-1}\left\|\Psi_{{\Omega_{0}}}^{T}\boldsymbol{h}\right\|_{2,q}+2^{2/q-1}\left\|\Psi_{\Omega^{\mathrm{c}}}^{T}\boldsymbol{f}\right\|_{2,q}\right)\frac{\left\|\Psi_{{\Omega_{0}}}^{T}\boldsymbol{h}\right\|_{2,1}}{k_{1}}\\ &\leq(2k_{1})^{1/q-1}\left\|\Psi_{{\Omega_{0}}}^{T}\boldsymbol{h}\right\|_{2}^{2}+\frac{2^{2/q-1}}{\sqrt{k_{1}}}\left\|\Psi_{{\Omega_{0}}}^{T}\boldsymbol{h}\right\|_{2}\left\|\Psi_{\Omega^{\mathrm{c}}}^{T}\boldsymbol{f}\right\|_{2,q}. \end{aligned} $$

Consequently, we have

$$ {}\begin{aligned} \|\boldsymbol{h}\|_{2}^{2}&=\left\|\Psi^{T}\boldsymbol{h}\right\|_{2}^{2}=\left\|\Psi_{{\Omega_{0}}}^{T}\boldsymbol{h}\right\|_{2}^{2}+\left\|\Psi_{{\Omega_{0}}^{\mathrm{c}}}^{T}\boldsymbol{h}\right\|_{2}^{2}\\ &\leq\left[(2k_{1})^{1/q-1}+1\right]\left\|\Psi_{{\Omega_{0}}}^{T}\boldsymbol{h}\right\|_{2}^{2}+\frac{2^{2/q-1}}{\sqrt{k_{1}}}\left\|\Psi_{{\Omega_{0}}}^{T}\boldsymbol{h}\right\|_{2}\left\|\Psi_{\Omega^{\mathrm{c}}}^{T}\boldsymbol{f}\right\|_{2,q}\\ &\leq\left\{\sqrt{(2k_{1})^{1/q-1}+1}\left\|\Psi_{{\Omega_{0}}}^{T}\boldsymbol{h}\right\|_{2}+\frac{2^{2/q-2}}{\sqrt{k_{1}\left[(2k_{1})^{1/q-1}+1\right]}}\left\|\Psi_{\Omega^{\mathrm{c}}}^{T}\boldsymbol{f}\right\|_{2,q}\right\}^{2}. \end{aligned} $$
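The final step completes the square: for a,b>0 and x≥0 one has ax²+bx ≤ (√a·x + b/(2√a))², since the right side exceeds the left by exactly b²/(4a) ≥ 0. A quick numerical sanity check over random values (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
# For positive a, b and x >= 0: a*x**2 + b*x <= (sqrt(a)*x + b/(2*sqrt(a)))**2,
# because the difference equals b**2 / (4*a) >= 0.
for _ in range(1000):
    a, b, x = rng.uniform(0.1, 10.0, size=3)
    lhs = a * x**2 + b * x
    rhs = (np.sqrt(a) * x + b / (2 * np.sqrt(a))) ** 2
    assert lhs <= rhs + 1e-12
    # The gap is exactly b^2/(4a):
    assert abs((rhs - lhs) - b**2 / (4 * a)) < 1e-9
```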

Plugging (12) into the previously mentioned inequality and by a direct calculation, we get

$$ {}\begin{aligned} \|\boldsymbol{h}\|_{2}\leq&\left\{\frac{2^{2/q-1}(k_{2})^{1/2-1/q}\theta_{k_{1},k_{2}}\sqrt{(2k_{1})^{1/q-1}+1}}{1-\delta_{k_{1}}-t\theta_{k_{1},k_{2}}}\right.\\ &\left.+\frac{2^{2/q-2}}{\sqrt{k_{1}\left[(2k_{1})^{1/q-1}+1\right]}}\right\}\left\|\Psi_{\Omega^{\mathrm{c}}}^{T}\boldsymbol{f}\right\|_{2,q}\\ &+\frac{2\sqrt{(1+\delta_{k_{1}})\left[(2k_{1})^{1/q-1}+1\right]}}{1-\delta_{k_{1}}-t\theta_{k_{1},k_{2}}}\epsilon, \end{aligned} $$
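This substitution is purely linear, so the grouping of coefficients can be verified symbolically. The sketch below uses generic stand-in symbols (sa for the square-root factor, sqd for √(1+δ), C, D, B for the remaining constants in (12) and the quadratic bound) and is only an illustration of the bookkeeping:

```python
import sympy as sp

# Generic positive symbols standing in for the constants in the proof:
# sa  ~ sqrt((2 k1)^(1/q-1) + 1)
# sqd ~ sqrt(1 + delta_{k1})
# C   ~ 2^(2/q-1) k2^(1/2-1/q) theta_{k1,k2}
# D   ~ 1 - delta_{k1} - t theta_{k1,k2}
# B   ~ 2^(2/q-2) / sqrt(k1 [(2 k1)^(1/q-1) + 1])
# g   ~ ||Psi_{Omega^c}^T f||_{2,q}
sa, eps, sqd, C, D, B, g = sp.symbols('sa eps sqd C D B g', positive=True)

# Bound (12): ||Psi_{Omega_0}^T h||_2 <= x_bound
x_bound = (2 * eps * sqd + C * g) / D

# Quadratic bound: ||h||_2 <= sa * x + B * g, evaluated at x = x_bound
h_bound = sa * x_bound + B * g

# Claimed final bound: coefficients regrouped by g and eps
claimed = (sa * C / D + B) * g + 2 * sqd * sa / D * eps

assert sp.simplify(h_bound - claimed) == 0
```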

which yields (3).

Availability of data and materials

Please contact any of the authors for data and materials.


  1. In fact, when 0<q<1, the ℓq norm is only a quasi-norm. For consistency, we nevertheless refer to it as a norm.



Abbreviations

Ψ-RIC: Ψ-restricted isometry constant

Ψ-RIP: Ψ-restricted isometry property

Ψ-ROC: Ψ-restricted orthogonality constant

Ψ-ROP: Ψ-restricted orthogonality property

CS: Compressed sensing

DNA: Deoxyribonucleic acid

IRLS: Iterative re-weighted least square


References

  1. D. Salomon, A Concise Introduction to Data Compression (Springer, 2008).

  2. D. Sculley, C. E. Brodley, Compression and machine learning: a new perspective on feature space vectors, in Proc. of the 2006 Data Compression Conference (DCC 2006) (IEEE Computer Society, Snowbird, 2006), pp. 332–332.

  3. L. Martino, V. Elvira, Compressed Monte Carlo for distributed Bayesian inference. viXra:1811.0505 (2018).

  4. Y. Zheng, J. Ma, L. Wang, Consensus of hybrid multi-agent systems. IEEE Trans. Neural Netw. Learn. Syst. 29(4), 1359–1365 (2018).

  5. J. Ma, M. Ye, Y. Zheng, Y. Zhu, Consensus analysis of hybrid multi-agent systems: a game-theoretic approach. Int. J. Robust Nonlinear Control 29(6), 1840–1853 (2019).

  6. D. L. Donoho, Compressed sensing. IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006).

  7. E. J. Candès, J. Romberg, T. Tao, Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 52(2), 489–509 (2006).

  8. S. S. Chen, D. L. Donoho, M. A. Saunders, Atomic decomposition by basis pursuit. SIAM J. Sci. Comput. 20(1), 33–61 (1998).

  9. A. M. Bruckstein, D. L. Donoho, M. Elad, From sparse solutions of systems of equations to sparse modeling of signals and images. SIAM Rev. 51(1), 34–81 (2009).

  10. M. Elad, P. Milanfar, R. Rubinstein, Analysis versus synthesis in signal priors. Inverse Probl. 23(3), 947–968 (2007).

  11. H. Rauhut, K. Schnass, P. Vandergheynst, Compressed sensing and redundant dictionaries. IEEE Trans. Inf. Theory 54(5), 2210–2219 (2008).

  12. E. J. Candès, Y. C. Eldar, D. Needell, Compressed sensing with coherent and redundant dictionaries. Appl. Comput. Harmon. Anal. 31(1), 59–73 (2011).

  13. I. Selesnick, M. Figueiredo, Signal restoration with overcomplete wavelet transforms: comparison of analysis and synthesis priors. Proc. SPIE 7446 (2009).

  14. F. Parvaresh, H. Vikalo, S. Misra, B. Hassibi, Recovering sparse signals using sparse measurement matrices in compressed DNA microarrays. IEEE J. Sel. Top. Signal Process. 2(3), 275–285 (2008).

  15. A. Majumdar, R. K. Ward, Compressed sensing of color images. Signal Process. 90(12), 3122–3127 (2010).

  16. R. Vidal, Y. Ma, A unified algebraic approach to 2-D and 3-D motion segmentation and estimation. J. Math. Imaging Vision 25(3), 403–421 (2006).

  17. Y. Wang, J. J. Wang, Z. B. Xu, A note on block-sparse signal recovery with coherent tight frames. Discret. Dyn. Nat. Soc. 2013(1), 1–8 (2013).

  18. R. Chartrand, Exact reconstruction of sparse signals via nonconvex minimization. IEEE Signal Process. Lett. 14(10), 707–710 (2007).

  19. R. Chartrand, V. Staneva, Restricted isometry properties and nonconvex compressive sensing. Inverse Probl. 24(3), 20–35 (2008).

  20. Z. B. Xu, H. L. Guo, Y. Wang, H. Zhang, Representative of L1/2 regularization among Lq (0<q≤1) regularizations: an experimental study based on phase diagram. Acta Autom. Sin. 38(7), 1225–1228 (2012).

  21. Y. Wang, J. J. Wang, Z. B. Xu, On recovery of block-sparse signals via mixed ℓ2/ℓq (0<q≤1) norm minimization. EURASIP J. Adv. Signal Process. 2013(1), 1–17 (2013).

  22. Y. Wang, J. J. Wang, Z. B. Xu, Restricted p-isometry properties of nonconvex block-sparse compressed sensing. Signal Process. 104, 188–196 (2014).

  23. H. T. Yin, S. T. Li, L. Y. Fang, Block-sparse compressed sensing: non-convex model and iterative re-weighted algorithm. Inverse Probl. Sci. Eng. 21(1), 141–154 (2013).

  24. J. H. Lin, S. Li, Y. Shen, New bounds for restricted isometry constants with coherent tight frames. IEEE Trans. Signal Process. 61(3), 611–621 (2013).

  25. J. H. Lin, S. Li, Y. Shen, Compressed data separation with redundant dictionaries. IEEE Trans. Inf. Theory 59(7), 4309–4315 (2013).

  26. S. Li, J. H. Lin, Compressed sensing with coherent tight frame via ℓq minimization. Inverse Probl. Imaging 8, 761–777 (2014).

  27. M. J. Lai, Y. Y. Xu, W. T. Yin, Improved iteratively reweighted least squares for unconstrained smoothed ℓq minimization. SIAM J. Numer. Anal. 51(2), 927–957 (2013).

  28. Y. Hsia, R. L. Sheu, arXiv:1312.3379 (2014).



Funding

This work was supported by the project of the Key Laboratory of Intelligent Information and Big Data Processing of Ningxia Province, North Minzu University (no. NXKLIIBDP2019).

Author information




The authors read and approved the final manuscript.

Corresponding author

Correspondence to Xishan Tian.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Reprints and Permissions

About this article


Cite this article

Luo, X., Yang, W., Ha, J. et al. Non-convex block-sparse compressed sensing with coherent tight frames. EURASIP J. Adv. Signal Process. 2020, 2 (2020).



Keywords

  • Block-sparse compressed sensing
  • Restricted isometry property
  • Restricted orthogonality property
  • Tight frames
  • ℓ2/ℓq-analysis method