Frame-based block sparse compressed sensing via \(l_2/l_1\)-synthesis
EURASIP Journal on Advances in Signal Processing volume 2024, Article number: 76 (2024)
Abstract
In this paper, we consider frame-based block sparse signal recovery via an \(l_2/l_1\)-synthesis method. A new kind of null space property based on a given dictionary D (block D-NSP) is proposed. It is proved that a sensing matrix satisfying the block D-NSP is not just a sufficient and necessary condition for the \(l_2/l_1\)-synthesis method to exactly recover signals that are block sparse in the frame D, but also a sufficient and necessary condition for \(l_2/l_1\)-synthesis to stably recover signals which are block-compressible in the frame D. To the best of our knowledge, this new property is the first sufficient and necessary condition for successful signal recovery via \(l_2/l_1\)-synthesis. In addition, we characterize the theoretical performance of recovering signals via \(l_2/l_1\)-synthesis in the case where the measurements are disturbed.
1 Introduction
Compressed sensing is a revolutionary innovation, pioneered by Donoho [13] and Candès et al. [8, 9] around 2006. It is receiving increasing attention from fields such as signal processing, sparse modeling, machine learning, and color imaging, among others (see [12, 19, 27, 29]). The sparsity assumption plays an important role in signal reconstruction. A vector is called k-sparse if the number of its nonzero entries is no more than k. The fundamental idea of compressed sensing is to recover a sparse signal \(x\in {\mathbb {R}}^d\) from its undersampled linear measurements \(y=Ax+e\), where \(y\in {\mathbb {R}}^m\), \(A\in {\mathbb {R}}^{m\times d}\) \((m\ll d)\), and \(e\in {\mathbb {R}}^m\) is a vector of measurement errors with \(\Vert e\Vert _2\le \epsilon\). The classical compressed sensing theory points out that a sparse or compressible (nearly sparse) signal \(x_0\) can be successfully reconstructed through the following \(l_1\)-minimization model under certain conditions on the measurement matrix A:

\(\min _{x\in {\mathbb {R}}^d}\Vert x\Vert _1\quad \text {s.t.}\quad \Vert Ax-y\Vert _2\le \epsilon ,\qquad (1.1)\)
where \(\Vert \cdot \Vert _2\) is the Euclidean norm of vectors and \(\Vert x\Vert _1=\underset{i=1}{\overset{d}{\sum }}|x_i|\) denotes the \(l_1\)-norm. When \(\epsilon =0\), we call it the noiseless case, and when \(\epsilon >0\), the noisy case.
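To make the model concrete, the combinatorial problem that (1.1) relaxes can be simulated directly. The sketch below is our illustration, not part of the paper: it recovers a sparse vector from \(m\ll d\) noiseless measurements by exhaustive search over supports; the helper name `l0_recover` and the tiny Gaussian example are assumptions made for the demonstration.

```python
import itertools
import numpy as np

def l0_recover(A, y, k, tol=1e-8):
    """Brute-force l0 recovery: try every support of size <= k and solve
    the restricted least-squares problem A[:, S] x_S = y.  Exponential
    cost -- only feasible for tiny dimensions, shown for intuition."""
    m, d = A.shape
    if np.linalg.norm(y) < tol:          # the zero vector already fits
        return np.zeros(d)
    for size in range(1, k + 1):
        for S in itertools.combinations(range(d), size):
            cols = A[:, list(S)]
            xS, *_ = np.linalg.lstsq(cols, y, rcond=None)
            if np.linalg.norm(cols @ xS - y) < tol:
                x = np.zeros(d)
                x[list(S)] = xS
                return x
    return None                          # no k-sparse solution found

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 8))          # m = 4 measurements, d = 8
x_true = np.zeros(8)
x_true[2] = 1.5                          # a 1-sparse signal
y = A @ x_true
x_hat = l0_recover(A, y, k=1)
```

The \(l_1\) program (1.1) replaces this exponential search with a convex problem; for a generic Gaussian matrix the 1-sparse signal above is identified by its unique consistent support.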
One of the key research topics in compressed sensing is designing an appropriate sensing matrix to ensure good reconstruction performance of the minimization problem (1.1). The restricted isometry property (RIP) introduced by Candès and Tao in [8] is shown to provide stable recovery of nearly sparse signals via (1.1). Various sufficient conditions based on the RIP for sparse signal recovery, exact or stable, can be found in [3,4,5,6,7,8,9, 16, 28]. The null space property (NSP) is another well-known property used to characterize the sensing matrix. A matrix A satisfies the NSP of order k if, for any \(v\in \ker A\setminus \{0\}\) and any index set T with \(|T|\le k\), it holds that

\(\Vert v_T\Vert _1<\Vert v_{T^c}\Vert _1.\)
The NSP is a necessary and sufficient condition which guarantees exact reconstruction of sparse signals via the \(l_1\)-minimization model (1.1). Many works are based on the NSP (see [14, 17, 18, 20, 21, 38]); in particular, [17] proposed the stable NSP and the robust NSP and used them to characterize the solutions of (1.1). Moreover, it was shown that NSP matrices can reach a similar stability result as RIP matrices, except that the constants may be larger [1, 33].
However, in many practical applications, the signal of interest is not sparse in an orthogonal basis. More often than not, sparsity is expressed in terms of an overcomplete dictionary D. This kind of signal is called a dictionary-sparse or frame-sparse signal, and a D-sparse signal when the dictionary D is given, while signals which are nearly sparse in D will be called D-compressible. The signal \(x_0\in {\mathbb {R}}^d\) is expressed as \(x_0=Dz_0\), where \(D\in {\mathbb {R}}^{d\times n}\) with \(d\ll n\) is some overcomplete dictionary of \({\mathbb {R}}^d\) and the coefficient vector \(z_0\in {\mathbb {R}}^n\) is sparse or compressible. The linear measurement is \(y=Ax_0\). We refer to [2, 10, 23, 24, 31, 32, 34, 37] and the references therein for details.
A natural idea for recovering \(x_0\) from the measurement \(y=Ax_0\) is to solve the minimization problem

\(\min _{z\in {\mathbb {R}}^n}\Vert z\Vert _1\quad \text {s.t.}\quad ADz=y\qquad (1.2)\)
for the sparse coefficient \({\hat{z}}\) first, and then synthesize it to get \({\hat{x}}=D{\hat{z}}\). This method is called \(l_1\)-synthesis. For the noisy case, one naturally solves the following:

\(\min _{z\in {\mathbb {R}}^n}\Vert z\Vert _1\quad \text {s.t.}\quad \Vert ADz-y\Vert _2\le \epsilon .\qquad (1.3)\)
In [31], reconstruction conditions were established by requiring AD to satisfy the RIP. As pointed out in [10, 22], under such a strong condition, the exact reconstruction of not only \(x_0\) but also \(z_0\) is ultimately obtained. Accurate reconstruction of \(z_0\) is unnecessary, for we only care about the estimation of the original signal \(x_0\). In particular, when the frame D is completely correlated (with two identical columns), (1.2) can have infinitely many minimizers, and all of them lead to the same true signal \(x_0\).
For this frame-sparse recovery problem, Candès, Eldar, et al. propose the following \(l_1\)-analysis method in [10]:

\(\min _{x\in {\mathbb {R}}^d}\Vert D^*x\Vert _1\quad \text {s.t.}\quad \Vert Ax-y\Vert _2\le \epsilon ,\qquad (1.4)\)
where \(D^*\) is the transpose of D. By assuming \(D^*x_0\) to be sparse, [10] proves that \(l_1\)-analysis can stably reconstruct \(x_0\) if A satisfies a kind of RIP condition related to D (D-RIP). This is the first result in frame-sparse compressed sensing that does not require the frame to be highly incoherent. However, the assumption that \(D^*x_0\) is sparse does not seem very realistic, because even when \(x_0\) is sparse in terms of D, \(D^*x_0\) is not necessarily sparse. The work in [25] proposes an optimal-dual-based \(l_1\)-analysis method along with an efficient algorithm. In [22], it is shown that the optimal-dual-based \(l_1\)-analysis is equivalent to \(l_1\)-synthesis; thus, \(l_1\)-analysis appears to be a subproblem of \(l_1\)-synthesis. Numerical experiments in [15, 22] also suggest that \(l_1\)-synthesis is more accurate and thorough. In [11], the authors establish the first necessary and sufficient condition for reconstructing D-sparse signals via the \(l_1\)-synthesis method by using a dictionary-based NSP (D-NSP). This kind of D-NSP does not make any assumption about the incoherence of the frame D.
In this paper, we consider signals with additional sparse structure under an overcomplete dictionary, i.e., the nonzero coefficients appear in a few fixed blocks with respect to an overcomplete dictionary. We call such a signal a frame-block-sparse signal (or a D-block-sparse signal when D is given). Such structured signals arise in various applications, such as DNA microarrays [29], color imaging [27], and motion segmentation [35]. Assume \(x\in {\mathbb {R}}^d\) has block index set \({\mathfrak {T}}=\{d_1,d_2,\ldots ,d_M\}\), where \(0<d_i\le d,i=1,2,\ldots ,M\), and \(\underset{i=1}{\overset{M}{\sum }}d_i=d\); then the signal x can be written as \(x=[\underset{x_{[1]}}{\underbrace{x_1,x_2,\ldots ,x_{d_1}}},\underset{x_{[2]}}{\underbrace{x_{d_1+1},\ldots ,x_{d_1+d_2}}},\ldots ,\underset{x_{[M]}}{\underbrace{x_{d-d_M+1},\ldots ,x_{d}}}]^*\), where \(x_{[i]}\) denotes the ith block of x. When x has at most k nonzero blocks, i.e., \(\Vert x\Vert _{2\cdot 0}=\underset{i=1}{\overset{M}{\sum }}{I(\Vert x_{[i]}\Vert _2>0)}\le k\), we call x a block k-sparse signal. Recently, many papers have focused on signals which are block sparse in terms of an overcomplete dictionary D, i.e., \(x=Dz\), where the coefficient vector z is block sparse [26, 36, 39]. Most of the existing literature adopts \(l_1\)-analysis methods and establishes sufficient conditions for stable recovery using the block RIP of the frame D. As mentioned earlier, the \(l_1\)-synthesis method is more thorough and accurate for frame-sparse signal recovery problems, and the D-NSP allows the frame D to be highly correlated [11]. Based on these theoretical and practical conclusions, we introduce a mixed \(l_2/l_1\)-norm null space property of the dictionary D to characterize the reconstruction performance of the \(l_2/l_1\)-synthesis method.
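The block structure above is easy to make concrete in code. The following sketch is our illustration, not the paper's (the helper names are hypothetical); it computes the mixed \(l_2/l_0\) and \(l_2/l_1\) norms of a vector for a given block index set:

```python
import numpy as np

def blocks(x, sizes):
    """Split x into consecutive blocks x_[1], ..., x_[M], where
    sizes = [d_1, ..., d_M] is the block index set, sum(sizes) == len(x)."""
    idx = np.cumsum([0] + list(sizes))
    return [x[idx[i]:idx[i + 1]] for i in range(len(sizes))]

def block_l20(x, sizes):
    """Mixed l2/l0 norm: the number of blocks with nonzero l2 norm."""
    return sum(1 for b in blocks(x, sizes) if np.linalg.norm(b) > 0)

def block_l21(x, sizes):
    """Mixed l2/l1 norm: the sum of the blocks' l2 norms."""
    return sum(np.linalg.norm(b) for b in blocks(x, sizes))

sizes = [2, 3, 2]                       # block index set {d_1, d_2, d_3}, d = 7
x = np.array([0., 0., 3., 4., 0., 0., 0.])
k0 = block_l20(x, sizes)                # only the second block is nonzero
k1 = block_l21(x, sizes)                # = ||(3, 4, 0)||_2 = 5
```

Note that x above is block 1-sparse even though it has two nonzero entries: block sparsity counts active blocks, not active coordinates.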
For D-block-sparse signal recovery from the linear measurement \(y=Ax_0\) in the noiseless case, we consider the following \(l_2/l_1\)-synthesis method:

\(\min _{z\in {\mathbb {R}}^n}\Vert z\Vert _{2\cdot 1}\quad \text {s.t.}\quad ADz=y,\qquad (1.5)\)

and take \({\hat{x}}=D{\hat{z}}\) for any minimizer \({\hat{z}}\).
For the recovery of D-block-compressible signals \(x_0\) in the case where the linear measurement y is perturbed, we naturally consider the following method:

\(\min _{z\in {\mathbb {R}}^n}\Vert z\Vert _{2\cdot 1}\quad \text {s.t.}\quad \Vert ADz-y\Vert _2\le \epsilon .\qquad (1.6)\)
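The paper states the noisy program in constrained form; in practice such programs are often attacked through an unconstrained Lagrangian surrogate \(\min _z \frac{1}{2}\Vert ADz-y\Vert _2^2+\lambda \Vert z\Vert _{2\cdot 1}\), whose proximal step is block soft-thresholding. The sketch below is our illustration under that assumption; the solver choice, the penalty `lam`, and the step size are not from the paper.

```python
import numpy as np

def block_soft_threshold(z, sizes, tau):
    """Prox of tau*||.||_{2,1}: shrink each block's l2 norm by tau,
    zeroing blocks whose norm falls below tau."""
    out = np.zeros_like(z)
    start = 0
    for d_i in sizes:
        b = z[start:start + d_i]
        nb = np.linalg.norm(b)
        if nb > tau:
            out[start:start + d_i] = (1.0 - tau / nb) * b
        start += d_i
    return out

def block_ista(A, D, y, sizes, lam=0.05, n_iter=500):
    """Proximal gradient (ISTA) on the unconstrained surrogate
        min_z 0.5*||A D z - y||_2^2 + lam*||z||_{2,1}."""
    B = A @ D
    step = 1.0 / np.linalg.norm(B, 2) ** 2      # 1/L with L = ||AD||_2^2
    z = np.zeros(B.shape[1])
    for _ in range(n_iter):
        grad = B.T @ (B @ z - y)
        z = block_soft_threshold(z - step * grad, sizes, step * lam)
    return z

# the prox on a toy vector: the first block (3, 4) has norm 5 and is
# shrunk by the factor (1 - 1/5); the second block (0.5) has norm < 1
# and is zeroed entirely
p = block_soft_threshold(np.array([3., 4., 0.5]), [2, 1], 1.0)
```

Trading the constraint \(\Vert ADz-y\Vert _2\le \epsilon\) for a penalty replaces \(\epsilon\) with a regularization weight \(\lambda\); for a suitable data-dependent \(\lambda\) the two formulations share their solutions.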
We generalize the D-NSP proposed in [11] to the block D-NSP and show that the block D-NSP is a sufficient and necessary condition for the \(l_2/l_1\)-synthesis method to exactly recover all block D-sparse signals of order k. Moreover, when the measurements are perturbed and the signals are D-block-compressible, we prove that the block D-NSP is still a sufficient and necessary condition for stable recovery.
The remainder of this paper is organized as follows. Notations, definitions, and some useful lemmas are introduced in Section 2. In Section 3, we present the main theorems for recovering D-block-sparse signals in the noiseless case and D-block-compressible signals in the noisy case. Finally, a conclusion is drawn in Section 4.
2 Preliminaries
We briefly set up the notation of this paper as follows. For a vector \(z=(z_1,z_2,\ldots ,z_n)^*\in {\mathbb {R}}^n\) and a positive integer \(M<n\), let \(\text{ supp }(z)=\{i:\Vert z_{[i]}\Vert _2\ne 0,i=1,2,\ldots ,M\}\) denote the block support of z. The \(l_2/l_0\)-norm of z is defined as \(\Vert z\Vert _{2\cdot 0}=\underset{i=1}{\overset{M}{\sum }}{I(\Vert z_{[i]}\Vert _2>0)}\), and z is called block k-sparse when \(\Vert z\Vert _{2\cdot 0}\le k\). \(\Vert z\Vert _{2\cdot 1}=\underset{i=1}{\overset{M}{\sum }}\Vert z_{[i]}\Vert _2\) is the mixed \(l_2/l_1\)-norm of z. \(T\subseteq \{1,2,\ldots ,M\}\) is an index set, and \(T^c\) is the complement of T in \(\{1,2,\ldots ,M\}\). In what follows, \({\mathfrak {T}}=\{d_1,d_2,\ldots ,d_M\}\) always denotes the block index set, with \(\underset{i=1}{\overset{M}{\sum }}d_i=d\), \(0<d_i<d,i=1,2,\ldots ,M\). Denote by \(z_{T}\in {\mathbb {R}}^n\) the vector z with all blocks whose index is not in T set to zero, so that \(z_{T^c}=z-z_T\). For a given frame \(D\in {\mathbb {R}}^{d\times n}\), we define the set of block k-sparse signals in D as

\(D\Sigma _k=\{x\in {\mathbb {R}}^d:x=Dz,\Vert z\Vert _{2\cdot 0}\le k\}.\)
\(D^*\) is the transpose of D, and \(D^{-1}(E)\) denotes the preimage of the set E under the operator D. Denote by \(\sigma _k(z_0)=\underset{\Vert z\Vert _{2\cdot 0}\le k}{\inf }\Vert z-z_0\Vert _{2\cdot 1}\) the best block k-term approximation error of \(z_0\) in the mixed \(l_2/l_1\)-norm.
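Since the blocks are disjoint, the infimum defining \(\sigma _k(z_0)\) is attained by keeping the k largest blocks of \(z_0\) in \(l_2\) norm and discarding the rest. A small computational sketch (our illustration, not from the paper):

```python
import numpy as np

def sigma_k(z, sizes, k):
    """Best block k-term approximation error of z in the mixed l2/l1 norm:
    keep the k largest blocks (in l2 norm), measure what is thrown away."""
    idx = np.cumsum([0] + list(sizes))
    norms = np.array([np.linalg.norm(z[idx[i]:idx[i + 1]])
                      for i in range(len(sizes))])
    keep = np.argsort(norms)[::-1][:k]   # indices of the k largest blocks
    return float(norms.sum() - norms[keep].sum())

sizes = [2, 2, 1]
z = np.array([1., 0., 3., 4., 0.5])      # block norms: 1, 5, 0.5
err1 = sigma_k(z, sizes, 1)              # drop blocks with norms 1 and 0.5
err2 = sigma_k(z, sizes, 2)              # drop only the block with norm 0.5
```

In particular \(\sigma _k(z_0)=0\) exactly when \(z_0\) is block k-sparse, which is why \(\sigma _k(z_0)\) measures block compressibility in the error bounds below.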
The following two new null space properties are central to characterizing the reconstruction performance of the \(l_2/l_1\)-synthesis methods (1.5) and (1.6).
Definition 2.1
(k-order block sparse NSP of a frame D (block k-D-NSP)). Given a frame \(D\in {\mathbb {R}}^{d\times n}\) and the block index set \({\mathfrak {T}}=\{d_1,d_2,\ldots ,d_M\}\). A matrix \(A\in {\mathbb {R}}^{m\times d}\) satisfies the block D-NSP of order k over \({\mathfrak {T}}\) if, for any index set \(T\subseteq \{1,2,\ldots ,M\}\) with \(|T|\le k\) and any \(v\in D^{-1}(\ker A\setminus \{0\})\), there exists \(u\in \ker D\) such that

\(\Vert v_T+u\Vert _{2\cdot 1}<\Vert v_{T^c}\Vert _{2\cdot 1}.\qquad (2.1)\)
Definition 2.2
(k-order strong block sparse NSP of a frame D (block k-D-SNSP)). Given a dictionary \(D\in {\mathbb {R}}^{d\times n}\) and the block index set \({\mathfrak {T}}=\{d_1,d_2,\ldots ,d_M\}\). A matrix A satisfies the strong block sparse null space property with respect to D of order k over \({\mathfrak {T}}\) if there is a positive constant c such that, for any index set \(T\subseteq \{1,2,\ldots ,M\}\) with \(|T|\le k\) and any \(v\in \ker (AD)\), there exists \(u\in \ker D\) with

\(\Vert v_{T^c}\Vert _{2\cdot 1}-\Vert v_T+u\Vert _{2\cdot 1}\ge c\Vert Dv\Vert _2.\qquad (2.2)\)
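Definitions like these can be probed numerically in the special case \(D=I\), where \(\ker D=\{0\}\) and the block D-NSP reduces to the classical block NSP. The following sketch is our illustration, not from the paper: it samples vectors in \(\ker A\) and tests the null space inequality; random sampling can certify failure but never proves the property.

```python
import itertools
import numpy as np

def block_nsp_holds(A, sizes, k, n_samples=200, seed=0):
    """Randomized check of the classical block NSP of order k (the D = I
    case): for sampled v in ker A \\ {0} and every |T| = k, test
    ||v_T||_{2,1} < ||v_{T^c}||_{2,1}.  Checking |T| = k suffices, since
    enlarging T only strengthens the left-hand side."""
    _, _, Vt = np.linalg.svd(A)
    null = Vt[np.linalg.matrix_rank(A):].T      # orthonormal basis of ker A
    rng = np.random.default_rng(seed)
    idx = np.cumsum([0] + list(sizes))
    M = len(sizes)
    for _ in range(n_samples):
        v = null @ rng.standard_normal(null.shape[1])
        norms = [np.linalg.norm(v[idx[i]:idx[i + 1]]) for i in range(M)]
        total = sum(norms)
        for T in itertools.combinations(range(M), k):
            if sum(norms[i] for i in T) >= total - sum(norms[i] for i in T):
                return False
    return True

# A projector whose kernel is spanned by a vector with block norms 3,4,1,1:
# the largest block (4) stays below the rest (5), so the NSP holds for k = 1,
# while the two largest blocks (7) dominate the rest (2), so it fails for k = 2.
v0 = np.array([3., 0., 4., 0., 1., 0., 1., 0.])
A = np.eye(8) - np.outer(v0, v0) / (v0 @ v0)
ok_k1 = block_nsp_holds(A, [2, 2, 2, 2], k=1)
ok_k2 = block_nsp_holds(A, [2, 2, 2, 2], k=2)
```

For a general frame D the analogous check would also require minimizing \(\Vert v_T+u\Vert _{2\cdot 1}\) over \(u\in \ker D\), a small convex program that we omit here.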
The following lemmas will be useful in the next part of the paper.
Lemma 2.3
For any \(a,b\in {\mathbb {R}}^n\), the following inequality holds:

\(\Vert a+b\Vert _{2\cdot 1}\le \Vert a\Vert _{2\cdot 1}+\Vert b\Vert _{2\cdot 1}.\)
This triangle inequality in \(l_2/l_1\)norm can be easily obtained by definition, so we will not prove it here.
Given an index set \(T\subseteq \{1,2,\ldots ,M\}\) and a vector \(v\in D^{-1}(\ker A{\setminus }\{0\})\), for any \(u\in \ker D\) and \(t>0\), we define the real functions
and
Lemma 2.4
Suppose that A satisfies the block k-D-NSP over the block index set \({\mathfrak {T}}=\{d_1,d_2,\ldots ,d_M\}\). Then for any \(v\in D^{-1}(\ker A{\setminus }\{0\})\), the function defined in (2.3) satisfies

\(\underset{u\in \ker D,t>0}{\inf }f_v(u,t)>0.\)
Proof
Since A satisfies the block k-D-NSP over \({\mathfrak {T}}\), it is easy to see that \(f_v(u,t)>0\) for any \(v\in D^{-1}(\ker A{\setminus }\{0\})\), and it is sufficient to show that there is no \(v_0\in D^{-1}(\ker A\setminus \{0\})\) such that \(\underset{u\in \ker D,t>0}{\inf }f_{v_0}(u,t)=0.\) If this were not true, then for any \(\eta >0\) there would be \(u_0\in \ker D\) and \(t_0>0\) such that \(f_{v_0}(u_0,t_0)<\eta\). By the definition of \(f_{v_0}(u_0,t_0)\), that is,
This leads to \(\Vert (t_0v_0+u_0)_{T^c}\Vert _{2\cdot 1}-\Vert (t_0v_0+u_0)_T+{\tilde{u}}\Vert _{2\cdot 1}\le 0\) for any \({\tilde{u}}\in \ker D\), which contradicts the assumption that A satisfies the block k-D-NSP. \(\square\)
3 Main results
Theorem 3.1
The block k-D-NSP is a necessary and sufficient condition for the \(l_2/l_1\)-synthesis method (1.5) to successfully recover all signals in the set \(D\Sigma _k\).
Proof
(Sufficiency) Suppose that the sensing matrix A satisfies the block D-NSP of order k. We claim that the \(l_2/l_1\)-synthesis method (1.5) successfully recovers every block D-sparse signal \(x\in D\Sigma _k\) from the measurements \(y=Ax\). Otherwise, there is a vector \(x_0\in D\Sigma _k\) whose reconstruction is \({{\hat{x}}}=D{{\hat{z}}}\ne x_0\). Write \(x_0=Dz_0\), where \(\Vert z_0\Vert _{2\cdot 0}\le k\). Let \(v=z_0-{{\hat{z}}}\); since \(Dv\ne 0\) and \(AD{\hat{z}}=ADz_0=y\), it is easy to check that \(v\in D^{-1}(\ker A{\setminus }\{0\})\). Let \(T\subseteq \{1,2,\ldots ,M\}\) with \(|T|\le k\) be the block support of \(z_0\). By the definition of the block k-D-NSP, there must exist \(u\in \ker D\) such that \(\Vert v_T+u\Vert _{2\cdot 1}<\Vert v_{T^c}\Vert _{2\cdot 1}\), which implies \(\Vert (z_0-{{\hat{z}}})_T+u\Vert _{2\cdot 1}=\Vert z_0-{{\hat{z}}}_T+u\Vert _{2\cdot 1}< \Vert {{\hat{z}}}_{T^c}\Vert _{2\cdot 1}\), and hence

\(\Vert z_0+u\Vert _{2\cdot 1}\le \Vert z_0-{{\hat{z}}}_T+u\Vert _{2\cdot 1}+\Vert {{\hat{z}}}_T\Vert _{2\cdot 1}<\Vert {{\hat{z}}}_{T^c}\Vert _{2\cdot 1}+\Vert {{\hat{z}}}_T\Vert _{2\cdot 1}=\Vert {{\hat{z}}}\Vert _{2\cdot 1}.\)

Since \(AD(z_0+u)=ADz_0=y\), the vector \(z_0+u\) is feasible for (1.5), which contradicts the assumption that \({{\hat{z}}}\) is a minimizer of problem (1.5).
(Necessity) Assuming the \(l_2/l_1\)-synthesis method (1.5) successfully recovers all signals in \(D\Sigma _k\), we need to show that the sensing matrix A satisfies the block D-NSP of order k. For any \(v\in D^{-1}(\ker A\setminus \{0\})\) and any \(T\subseteq \{1,2,\ldots ,M\}\) with \(|T|\le k\) over the block index set \({\mathfrak {T}}=\{d_1,d_2,\ldots ,d_M\}\), set \(x_0=Dv_T\); then \(x_0\in D\Sigma _k\), and let \(y=Ax_0\) be the measurement. Let \({\hat{z}}\) be the solution of (1.5) and \({\hat{x}}=D{\hat{z}}\) the reconstructed signal. By assumption, \({\hat{x}}=D{\hat{z}}=x_0\), and there is a \(u\in \ker D\) such that \({\hat{z}}=v_T+u\). Since \(AD(v_T-v)=y\) and \(D(v_T-v)\ne x_0\), the vector \(v_T-v\) cannot be a minimizer of (1.5). Therefore, we get \(\Vert v_T+u\Vert _{2\cdot 1}<\Vert v_T-v\Vert _{2\cdot 1}=\Vert v_{T^c}\Vert _{2\cdot 1}\), which shows that A satisfies the block k-D-NSP. \(\square\)
In classical compressed sensing theory, it is well known that the null space property is a sufficient and necessary condition not just for sparse signal recovery in the noiseless case, but also for compressible signal recovery with measurement errors [1, 33]. We will show that this result can be generalized to the block D-NSP when the reconstruction is carried out for a signal which is block sparse or block-compressible in a given frame.
The block D-SNSP defined in Definition 2.2 looks stronger than the block D-NSP. We now show that, with this stronger property, D-block-compressible signals can be stably recovered via (1.6), as follows.
Theorem 3.2
If the sensing matrix \(A\in {\mathbb {R}}^{m\times d}\) satisfies the block k-D-SNSP, then any solution \({\hat{z}}\) of the \(l_2/l_1\)-synthesis method (1.6) satisfies

\(\Vert D{\hat{z}}-x_0\Vert _2\le C_1\sigma _k(z_0)+C_2,\qquad (3.1)\)

where \(z_0\) is any representation of \(x_0\) in D, \(\sigma _k(z_0)=\underset{\Vert z\Vert _{2\cdot 0}\le k}{\inf }\Vert z-z_0\Vert _{2\cdot 1}\), and \(C_1,C_2\) are constants.
Proof
Denote by \(x_0=Dz_0\) the unknown signal we want to recover, let \({\mathfrak {T}}=\{d_1,d_2,\ldots ,d_M\}\) be the block index set, and let \(T\subseteq \{1,2,\ldots ,M\}\) be the index set of the k largest blocks of \(z_0\) (in \(l_2\) norm). Denote \(h=D({\hat{z}}-z_0)={\hat{x}}-x_0\), and decompose it as \(h=t+\eta\), where \(t\in \ker A\) and \(\eta \in (\ker A)^{\bot }\). Let \(w=D^*(DD^*)^{-1}t\); then \(h=Dw+\eta\) with \(Dw\in \ker A\). It is not difficult to see that

\(\Vert \eta \Vert _2\le \frac{2\epsilon }{{\mathcal {V}}_A},\qquad (3.2)\)
where \({\mathcal {V}}_A\) is the smallest positive singular value of A. Let \(\xi =D^*(DD^*)^{-1}\eta\); then \(\eta =D\xi\), and it is easy to show that

\(\Vert \xi \Vert _{2\cdot 1}\le \sqrt{M}\Vert \xi \Vert _2\le \frac{\sqrt{M}}{{\mathcal {V}}_D}\Vert \eta \Vert _2,\qquad (3.3)\)

where \({\mathcal {V}}_D\) is the smallest positive singular value of D.
Since \(h=D(w+\xi )\) and \(h=D({\hat{z}}-z_0)\), we have \({\hat{z}}-z_0=w+\xi +u_1\) for some \(u_1\in \ker D\).
Let \(v=w+u_1\); then \({\hat{z}}-z_0=v+\xi\) and \(v\in \ker (AD)\). By assumption, A satisfies the block k-D-SNSP; hence, there is a \(u\in \ker D\) such that

\(\Vert v_{T^c}\Vert _{2\cdot 1}-\Vert v_T+u\Vert _{2\cdot 1}\ge c\Vert Dv\Vert _2.\qquad (3.4)\)

Therefore,
On the other side, since \({\hat{z}}\) is a minimizer of (1.6) and \(z_0\) is feasible, we have

\(\Vert {\hat{z}}\Vert _{2\cdot 1}\le \Vert z_0\Vert _{2\cdot 1}.\)
Rearranging the above inequality, we obtain
Combining (3.4) with (3.5), we get
Applying the Hölder inequality to \(\Vert \xi \Vert _{2\cdot 1}\), the above inequality becomes
That is,
By using (3.2) and (3.3), the above inequality can be rewritten as the bound in the theorem,
where \(C_1=\frac{2}{c}\) and \(C_2=\frac{2}{{\mathcal {V}}_A}\left( \frac{\sqrt{M}}{c{\mathcal {V}}_D}+1\right) \epsilon\). \(\square\)
Remark 3.3

(a) When \(d_1=d_2=\cdots =d_M=1\) and \(M=n\), the block D-SNSP becomes the D-SNSP, and our result is consistent with Theorem 5.2 in [11].

(b) When \(z_0\) is block k-sparse and \(\epsilon =0\), we have \(\sigma _k(z_0)=0\), which means that block D-sparse signals can be exactly recovered by (1.5).
By Definition 2.2, it is obvious that the block D-SNSP is not weaker than the block D-NSP. It is natural to ask how much stronger it is. The following theorem shows that the two conditions are actually equivalent.
Theorem 3.4
Let \(A\in {\mathbb {R}}^{m\times d}\) and \(D\in {\mathbb {R}}^{d\times n}\). The matrix A satisfies the block D-NSP if and only if A satisfies the block D-SNSP of the same order.
Proof
Suppose A satisfies the block k-D-NSP over the block index set \({\mathfrak {T}}=\{d_1,d_2,\ldots ,d_M\}\). For any \(w\in \ker (AD)\) with \(Dw=0\), take \(u=0\) when \(w=0\) and \(u=-w\) when \(w\ne 0\); then \(\Vert w_{T^c}\Vert _{2\cdot 1}-\Vert w_T+u\Vert _{2\cdot 1}=0\), and (2.2) holds for any positive constant c. To complete the proof, we just need to show that the function has a positive lower bound on \(D^{-1}(\ker A\setminus \{0\})\) for every \(|T|\le k\).
Decompose w into two parts as \(w=tv+u\), where \(u=\text{ P}_{\ker D}w\) and \(tv=\text{ P}_{(\ker D)^{\bot }}w\), with \(\Vert v\Vert _2=1\) and \(t>0\). By the definition of the infimum, we have
By Lemma 2.4, \(\underset{u\in \ker D,t>0}{\inf }f_v(u,t)\) is always positive. Since \((\ker D)^{\bot }\cap {\mathbb {S}}^{n-1}\) is a compact set, it is sufficient to prove that \(\underset{u\in \ker D,t>0}{\inf }f_v(u,t)\) is lower semicontinuous with respect to v.
For any \(v\in D^{-1}(\ker A\setminus \{0\})\) and any \(\eta >0\), there is a \(\delta =\frac{\eta }{\sqrt{M}}>0\) such that, for any \(\Vert e\Vert _2<\delta\),
Taking the infimum over u in \(\ker D\) and \(t>0\) of both sides, we get
which shows that the function is lower semicontinuous, and the proof is complete. \(\square\)
4 Conclusion
In this paper, we generalized the D-NSP proposed in [11] to the block D-NSP. We proved in Theorem 3.1 that this new property is equivalent to exact recovery of D-block-sparse signals via the \(l_2/l_1\)-synthesis method. In addition, a stable reconstruction result for D-block-compressible signals via \(l_2/l_1\)-synthesis in the noisy case was given in Theorem 3.2. To the best of our knowledge, these results provide the first characterization of block sparse signal recovery with dictionaries via the \(l_2/l_1\)-synthesis approach.
In Theorem 3.4, we proved that A satisfying the block D-SNSP is equivalent to A satisfying the block D-NSP of the same order. Combined with Theorems 3.1 and 3.2, it is clear that the block D-NSP is not only a sufficient and necessary condition for the success of \(l_2/l_1\)-synthesis without measurement errors, but also a sufficient and necessary condition for the stability of \(l_2/l_1\)-synthesis in the noisy case.
As is well known, the better the sparse representation of a signal x, the easier the reconstruction problem becomes. The importance of the block D-NSP lies in the fact that it does not require D to be incoherent, which expands the range of admissible frames D. These results help characterize the reconstruction performance of the \(l_2/l_1\)-synthesis approach and are of great significance for the study and design of the measurement matrix A.
Availability of data and materials
Not applicable.
References
A. Aldroubi, X. Chen, A.M. Powell, Perturbations of measurement matrices and dictionaries in compressed sensing. Appl. Comput. Harmonic Anal. 33, 282–291 (2012)
W. Bajwa, R. Calderbank, S. Jafarpour, Why Gabor frames? Two fundamental measures of coherence and their geometric significance. J. Commun. Netw. 12, 289–307 (2010)
T. Cai, A. Zhang, Compressed sensing and affine rank minimization under restricted isometry. IEEE Trans. Signal Process. 61, 3279–3290 (2013)
T. Cai, A. Zhang, Sharp RIP bound for sparse signal and low-rank matrix recovery. Appl. Comput. Harmonic Anal. 35, 74–93 (2013)
T. Cai, A. Zhang, Sparse representation of a polytope and recovery of sparse signals and low-rank matrices. IEEE Trans. Inf. Theory 60, 122–132 (2014)
T. Cai, L. Wang, G. Xu, New bounds for restricted isometry constants. IEEE Trans. Inf. Theory 56, 4388–4394 (2010)
T. Cai, L. Wang, G. Xu, Shifting inequality and recovery of sparse signals. IEEE Trans. Signal Process. 58, 1300–1308 (2010)
E.J. Candès, T. Tao, Decoding by linear programming. IEEE Trans. Inf. Theory 51, 4203–4215 (2005)
E.J. Candès, J.K. Romberg, T. Tao, Stable signal recovery from incomplete and inaccurate measurements. Commun. Pure Appl. Math. 59, 1207–1223 (2006)
E.J. Candès, Y.C. Eldar, D. Needell, P. Randall, Compressed sensing with coherent and redundant dictionaries. Appl. Comput. Harmonic Anal. 31, 59–73 (2011)
X. Chen, H. Wang, R. Wang, A null space analysis of the l_{1}-synthesis method in dictionary-based compressed sensing. Appl. Comput. Harmonic Anal. 37, 492–515 (2014)
S. Cotter, B. Rao, Sparse channel estimation via matching pursuit with application to equalization. IEEE Trans. Commun. 50, 374–377 (2002)
D. Donoho, Compressed sensing. IEEE Trans. Inf. Theory 52, 1289–1306 (2006)
M. Elad, A.M. Bruckstein, A generalized uncertainty principle and sparse representation in pairs of bases. IEEE Trans. Inf. Theory 48, 2558–2567 (2002)
M. Elad, P. Milanfar, R. Rubinstein, Analysis versus synthesis in signal priors. Inverse Probl. 23, 947–968 (2007)
S. Foucart, M.J. Lai, Sparsest solutions of underdetermined linear systems via l_{q}-minimization for 0 < q ≤ 1. Appl. Comput. Harmonic Anal. 26, 395–407 (2009)
S. Foucart, H. Rauhut, A Mathematical Introduction to Compressive Sensing (Springer, New York, 2013)
R. Gribonval, M. Nielsen, Sparse decompositions in unions of bases. IEEE Trans. Inf. Theory 49(12), 3320–3325 (2003)
J. Huang, X. Huang, D. Metaxas, Learning with dynamic group sparsity, in IEEE 12th International Conference on Computer Vision, pp. 64–71 (2009)
B.S. Kashin, V.N. Temlyakov, A remark on compressed sensing. Math. Notes 82(5–6), 748–755 (2007)
M.J. Lai, Y. Liu, The null space property for sparse recovery from multiple measurement vectors. Appl. Comput. Harmonic Anal. 30(3), 402–406 (2011)
S. Li, T. Mi, Y. Liu, Performance analysis of l_{1}-synthesis with coherent frames. arXiv:1202.2223 (2012)
J. Lin, S. Li, Sparse recovery with coherent tight frames via analysis Dantzig selector and analysis LASSO. Appl. Comput. Harmonic Anal. 37, 126–139 (2014)
J. Lin, S. Li, Y. Shen, New bounds for restricted isometry constants with coherent tight frames. IEEE Trans. Signal Process. 61, 611–621 (2013)
Y. Liu, T. Mi, S. Li, Compressed sensing with general frames via optimal-dual-based l_{1}-analysis. IEEE Trans. Inf. Theory 58, 4201–4214 (2012)
X. Luo, W. Y, J. Ha, Nonconvex block-sparse compressed sensing with coherent tight frames. EURASIP J. Adv. Signal Process. 2020, 1–9 (2020)
A. Majumdar, R.K. Ward, Compressed sensing of color images. Signal Process. 90(12), 3122–3127 (2010)
Q. Mo, S. Li, New bounds on the restricted isometry constant δ_{2k}. Appl. Comput. Harmonic Anal. 31, 460–468 (2011)
F. Parvaresh, H. Vikalo, H. Misra et al., Recovering sparse signals using sparse measurement matrices in compressed DNA microarrays. IEEE J. Sel. Top. Signal Process. 2(3), 275–285 (2008)
J. Peng, S. Yue, H. Li, NP/CLP equivalence: a phenomenon hidden among sparsity models for information processing. arXiv:1501.02018
H. Rauhut, K. Schnass, P. Vandergheynst, Compressed sensing and redundant dictionaries. IEEE Trans. Inf. Theory 54, 2210–2219 (2008)
L. Song, J. Lin, Compressed sensing with coherent tight frames via l_{q}-minimization for 0 < q ≤ 1. Inverse Probl. Imaging 8(3), 761–777 (2017)
Q. Sun, Sparse approximation property and stable recovery of sparse signals from noisy measurements. IEEE Trans. Signal Process. 59(10), 5086–5090 (2011)
J.A. Tropp, Greed is good: algorithmic results for sparse approximation. IEEE Trans. Inf. Theory 50, 2231–2242 (2004)
R. Vidal, Y. Ma, A unified algebraic approach to 2D and 3D motion segmentation and estimation. J. Math. Imaging Vis. 25(3), 403–421 (2006)
Y. Wang, J. Wang, Z. Xu, A note on block-sparse signal recovery with coherent tight frames. Discrete Dyn. Nat. Soc. 3, 1–6 (2013)
F.G. Wu, D.H. Li, The restricted isometry property for signal recovery with coherent tight frames. Bull. Aust. Math. Soc. 92(3), 496–507 (2015)
Z.Q. Xu, Compressed sensing. Sci. China Math. 42(9), 865–877 (2012)
F. Zhang, J. Wang, Block-sparse compressed sensing with redundant tight frames via \(l_2/l_1\)-minimization. Pure Appl. Math. (2019)
Acknowledgements
All authors express their sincere gratitude to the reviewers for their careful review and excellent suggestions, which have made the article more comprehensive.
Funding
This work is generously supported by the Featured Innovation Projects of the General University of Guangdong Province (Grant No. 2023KTSCX096) and partially funded by the Characteristic Innovation Project of Universities in Guangdong Province (Natural Science), China (Grant No. 2021KTSCX085).
Contributions
This article was mainly written by the first author Fengong Wu, the second author Penghong Zhong participated in the discussion, and the third author Huasong Xiao and the fourth author Chunmei Miao collected relevant literature and proofread the initial draft.
Ethics declarations
Ethics approval and consent to participate
Not applicable.
Consent for publication
All authors have seen and approved the final version of the submitted manuscript.
Competing interests
All authors declare no conflict of interest to this work. We declare that we do not have any commercial or associative interest that represents a conflict of interest in connection with the work submitted. We confirm that all organizations that funded our research in my submission have been mentioned, including grant numbers.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Wu, F., Zhong, P., Xiao, H. et al. Frame-based block sparse compressed sensing via \(l_2/l_1\)-synthesis. EURASIP J. Adv. Signal Process. 2024, 76 (2024). https://doi.org/10.1186/s13634-024-01175-7
DOI: https://doi.org/10.1186/s13634-024-01175-7