Frame-based block sparse compressed sensing via \(l_2/l_1\)-synthesis

Abstract

In this paper, we consider frame-based block sparse signal recovery via an \(l_2/l_1\)-synthesis method. A new kind of null space property based on the given dictionary D (block D-NSP) is proposed. It is proved that a sensing matrix satisfying the block D-NSP is not just a sufficient and necessary condition for the \(l_2/l_1\)-synthesis method to exactly recover signals that are block sparse in the frame D, but also a sufficient and necessary condition for the \(l_2/l_1\)-synthesis to stably recover signals which are block-compressible in the frame D. To the best of our knowledge, this new property is the first sufficient and necessary condition for successful signal recovery via the \(l_2/l_1\)-synthesis. In addition, we also characterize the theoretical performance of recovering signals via the \(l_2/l_1\)-synthesis in the case where the measurements are perturbed.

1 Introduction

Compressed sensing is a revolutionary innovation, pioneered by Donoho [13] and Candès et al. [8, 9] around 2006. It has received increasing attention from fields such as signal processing, sparse modeling, machine learning, and color imaging, among others (see [12, 19, 27, 29]). The sparsity assumption plays an important role in signal reconstruction. A vector is called k-sparse if the number of its nonzero entries is no more than k. The fundamental idea of compressed sensing is to recover a sparse signal \(x\in {\mathbb {R}}^d\) from its undersampled linear measurements \(y=Ax+e\), where \(y\in {\mathbb {R}}^m\), \(A\in {\mathbb {R}}^{m\times d}\ (m\ll d)\), and \(e\in {\mathbb {R}}^m\) is a vector of measurement errors with \(\Vert e\Vert _2\le \epsilon\). The classical compressed sensing theory shows that a sparse or compressible (nearly sparse) signal \(x_0\) can be successfully reconstructed through the following \(l_1\)-minimization model under certain conditions on the measurement matrix A.

$$\begin{aligned} {\hat{x}}=\arg \underset{x\in {\mathbb {R}}^d}{\min }\Vert x\Vert _1,\ \ \ \text{ subject } \text{ to }\ \ \Vert Ax-y\Vert _2\le \epsilon , \end{aligned}$$
(1.1)

where \(\Vert \cdot \Vert _2\) is the Euclidean norm of vectors and \(\Vert x\Vert _1=\underset{i=1}{\overset{d}{\sum }}|x_i|\) denotes the \(l_1\)-norm. When \(\epsilon =0\), we call it the noiseless case, and when \(\epsilon >0\), the noisy case.
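Concretely, in the noiseless case (\(\epsilon =0\)) the \(l_1\)-minimization (1.1) can be recast as a linear program by splitting x into its positive and negative parts. The following sketch (not part of the original paper; it assumes NumPy and SciPy are available, and the helper name `l1_min_noiseless` is ours) illustrates this on a toy instance:

```python
import numpy as np
from scipy.optimize import linprog

def l1_min_noiseless(A, y):
    """Solve min ||x||_1 s.t. Ax = y via the standard LP reformulation.

    Split x = xp - xn with xp, xn >= 0, so that ||x||_1 = sum(xp + xn).
    """
    m, d = A.shape
    c = np.ones(2 * d)                      # objective: sum of xp and xn entries
    A_eq = np.hstack([A, -A])               # A xp - A xn = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * d))
    assert res.success
    return res.x[:d] - res.x[d:]

# Toy instance: a 2-sparse signal measured by a random Gaussian matrix.
rng = np.random.default_rng(0)
m, d = 8, 16
A = rng.standard_normal((m, d))
x0 = np.zeros(d)
x0[3], x0[11] = 1.5, -2.0
y = A @ x0
x_hat = l1_min_noiseless(A, y)
```

For generic Gaussian matrices of these dimensions, the LP solution typically coincides with the true 2-sparse signal; in any case, \(\hat{x}\) is guaranteed to be feasible with \(\Vert \hat{x}\Vert _1\le \Vert x_0\Vert _1\).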

One of the key research topics in compressed sensing is designing an appropriate sensing matrix to ensure good reconstruction performance of the minimization problem (1.1). The restricted isometry property (RIP) introduced by Candès and Tao in [8] is shown to provide stable recovery of nearly sparse signals via (1.1). Various sufficient conditions based on the RIP for sparse signal recovery, exact or stable, can be found in [3,4,5,6,7,8,9, 16, 28]. The null space property (NSP) is another well-known property used to characterize the sensing matrix. A matrix A satisfies the NSP of order k if, for any \(v\in \ker A\setminus \{0\}\) and any index set T with \(|T|\le k\), it holds that

$$\begin{aligned} \Vert v_T\Vert _1<\Vert v_{T^c}\Vert _1. \end{aligned}$$

The NSP is a necessary and sufficient condition which guarantees the exact reconstruction of sparse signals using the \(l_1\)-minimization model (1.1). Many works are based on the NSP (see [14, 17, 18, 20, 21, 38]); in particular, [17] proposed the stable NSP and the robust NSP and used them to characterize the solutions of (1.1). Moreover, it was shown that NSP matrices can reach a similar stability result as RIP matrices, except that the constants may be larger [1, 33].

However, in many practical applications, the signal of interest is not sparse in an orthogonal basis. More often than not, sparsity is expressed in terms of an overcomplete dictionary D. This kind of signal is called a dictionary-sparse or frame-sparse signal, and a D-sparse signal when the dictionary D is given, while signals which are nearly sparse in D will be called D-compressible. The signal \(x_0\in {\mathbb {R}}^d\) is expressed as \(x_0=Dz_0\), where \(D\in {\mathbb {R}}^{d\times n}\ (d\ll n)\) is some overcomplete dictionary of \({\mathbb {R}}^d\) and the coefficient vector \(z_0\in {\mathbb {R}}^n\) is sparse or compressible. The linear measurement is \(y=Ax_0\). We refer to [2, 10, 23, 24, 31, 32, 34, 37] and the references therein for details.

A natural idea of recovering \(x_0\) from the measurement \(y=Ax_0\) is to solve the minimization problem:

$$\begin{aligned} {\hat{z}}=\arg \underset{z\in {\mathbb {R}}^n}{\min }\Vert z\Vert _1,\ \ \ \text{ subject } \text{ to }\ \ y=ADz, \end{aligned}$$
(1.2)

for the sparse coefficient \({\hat{z}}\) first, and then synthesizing it to get \({\hat{x}}=D{\hat{z}}\). This method is called \(l_1\)-synthesis. For the noisy case, one naturally solves the following:

$$\begin{aligned} {\hat{z}}=\arg \min \Vert z\Vert _1 \ \ \ \text{ subject } \text{ to } \ \ \Vert y-ADz\Vert _2\le \epsilon . \end{aligned}$$
(1.3)

In [31], reconstruction conditions were established by requiring AD to satisfy the RIP. As pointed out in [10, 22], under such a strong condition, exact reconstruction not only of \(x_0\) but also of \(z_0\) is ultimately obtained. Accurate reconstruction of \(z_0\) is unnecessary, for we only care about the estimation of the original signal \(x_0\). Especially when the frame D is completely correlated (with two identical columns), (1.2) could have infinitely many minimizers, all of which lead to the same true signal \(x_0\).
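This degenerate case is easy to see numerically. In the sketch below (illustrative numbers only, not from the paper), a frame with two identical columns admits distinct coefficient vectors of equal \(l_1\)-norm that synthesize exactly the same signal, so the minimizer of (1.2) need not be unique even though \(D\hat{z}\) is:

```python
import numpy as np

# A frame D whose first two columns are identical ("completely correlated").
D = np.array([[1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

# Two different coefficient vectors with equal l1-norm...
z1 = np.array([0.7, 0.0, 0.0])
z2 = np.array([0.0, 0.7, 0.0])
# (any convex combination t*z1 + (1-t)*z2 has the same l1-norm as well)

# ...that synthesize exactly the same signal x = Dz.
assert np.allclose(D @ z1, D @ z2)
assert np.isclose(np.abs(z1).sum(), np.abs(z2).sum())
```

All such minimizers lead to the same synthesized signal, which is why only the estimate of \(x_0\), not of \(z_0\), matters.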

For this frame-sparse recovery problem, Candès, Eldar et al. proposed the following \(l_1\)-analysis method in [10]:

$$\begin{aligned} {\hat{x}}=\arg \underset{x\in {\mathbb {R}}^d}{\min }\Vert D^*x\Vert _1,\ \ \ \text{ subject } \text{ to }\ \ \Vert y-Ax\Vert _2\le \epsilon , \end{aligned}$$
(1.4)

where \(D^*\) is the transpose of D. By assuming \(D^*x_0\) to be sparse, [10] proves that \(l_1\)-analysis can stably reconstruct \(x_0\) if A satisfies a kind of RIP condition related to D (D-RIP). This is the first result of frame-sparse compressed sensing that does not require the frame to be highly incoherent. The assumption that \(D^*x_0\) should be sparse does not seem very realistic, because even when \(x_0\) is sparse in terms of D, it does not necessarily mean that \(D^*x_0\) is sparse. The work in [25] proposes an optimal-dual-based \(l_1\)-analysis method along with an efficient algorithm. In [22], it is shown that the optimal-dual-based \(l_1\)-analysis is equivalent to \(l_1\)-synthesis; thus, \(l_1\)-analysis appears to be a subproblem of \(l_1\)-synthesis. Numerical experiments in [15, 22] also suggest that \(l_1\)-synthesis is more accurate and thorough. In [11], the authors establish the first necessary and sufficient condition for reconstructing D-sparse signals via the \(l_1\)-synthesis method by using a dictionary-based NSP (D-NSP). This kind of D-NSP does not make any assumption about the incoherence of the frame D.

In this paper, we consider signals that have additional sparse structure under an overcomplete dictionary, i.e., the nonzero coefficients appear in a few fixed blocks with respect to an overcomplete dictionary. We call such a signal a frame-block-sparse signal (or a D-block-sparse signal when D is given). Such structured signals arise in various applications, such as DNA microarrays [29], color imaging [27], and motion segmentation [35]. Assume \(x\in {\mathbb {R}}^d\) with block index set \({\mathfrak {T}}=\{d_1,d_2,\ldots ,d_M\}\), where \(0<d_i\le d,i=1,2,\ldots ,M\), and \(\underset{i=1}{\overset{M}{\sum }}d_i=d\); then the signal x can be described as \(x=[\underset{x_{[1]}}{\underbrace{x_1,x_2,\ldots ,x_{d_1}}},\underset{x_{[2]}}{\underbrace{x_{d_1+1},\ldots ,x_{d_1+d_2}}},\ldots ,\underset{x_{[M]}}{\underbrace{x_{d_{M-1}+1},\ldots ,x_{d}}}]^*\), where \(x_{[i]}\) denotes the ith block of x. When x has at most k nonzero blocks, i.e., \(\Vert x\Vert _{2\cdot 0}=\underset{i=1}{\overset{M}{\sum }}{I(\Vert x_{[i]}\Vert _2>0)}\le k\), we call x a block k-sparse signal. Recently, many papers focus on signals which are block sparse in terms of an overcomplete dictionary D, i.e., \(x=Dz\), where the coefficient vector z is block sparse [26, 36, 39]. Most of the existing literature adopts \(l_1\)-analysis methods and establishes sufficient conditions to guarantee stable recovery using the block-RIP of the frame D. As mentioned earlier, the \(l_1\)-synthesis method is more thorough and accurate in solving frame-sparse signal recovery problems, and the D-NSP allows the frame D to be highly correlated [11]. Based on these theoretical and practical conclusions, we introduce a mixed \(l_2/l_1\)-norm null space property of the dictionary D to characterize the reconstruction performance of the \(l_2/l_1\)-synthesis method.
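The block structure and the mixed norms above can be made concrete with a short sketch (the helper names `blocks`, `block_l20`, and `block_l21` are ours, not from the paper; NumPy is assumed):

```python
import numpy as np

def blocks(x, sizes):
    """Split x into consecutive blocks x_[1], ..., x_[M] of the given sizes."""
    idx = np.cumsum(sizes)[:-1]
    return np.split(np.asarray(x, dtype=float), idx)

def block_l20(x, sizes, tol=1e-12):
    """Mixed l2/l0-'norm': number of blocks with nonzero l2-norm."""
    return sum(np.linalg.norm(b) > tol for b in blocks(x, sizes))

def block_l21(x, sizes):
    """Mixed l2/l1-norm: sum of the blockwise l2-norms."""
    return sum(np.linalg.norm(b) for b in blocks(x, sizes))

# d = 7 split into M = 3 blocks of sizes 2, 3, 2; only blocks 1 and 3 are
# nonzero, so x is block 2-sparse.
sizes = [2, 3, 2]
x = np.array([3.0, 4.0, 0.0, 0.0, 0.0, 1.0, 0.0])
print(block_l20(x, sizes))   # 2 nonzero blocks
print(block_l21(x, sizes))   # ||(3,4)||_2 + 0 + ||(1,0)||_2 = 5 + 0 + 1 = 6
```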

For D-block-sparse signal recovery from the linear measurement \(y=Ax_0\) in the noiseless case, we consider the following \(l_2/l_1\)-synthesis method:

$$\begin{aligned} {\hat{z}}=\text{ argmin }\Vert z\Vert _{2\cdot 1}\ \ \ \ \ \text{ subject } \text{ to }\ \ \ \ y=ADz. \end{aligned}$$
(1.5)

For the recovery of D-block-compressible signals \(x_0\) in the case where the linear measurement y is perturbed, we naturally consider the following method:

$$\begin{aligned} {\hat{z}}=\text{ argmin }\Vert z\Vert _{2\cdot 1}\ \ \ \ \ \text{ subject } \text{ to }\ \ \ \ \Vert ADz-y\Vert _2\le \epsilon . \end{aligned}$$
(1.6)

We generalize the D-NSP proposed in [11] to the block D-NSP, and show that the block D-NSP is a sufficient and necessary condition for the \(l_2/l_1\)-synthesis (1.5) to exactly recover all block D-sparse signals of order k. Moreover, when the measurements are perturbed and the signals are D-block-compressible, we prove that the block D-NSP is still a sufficient and necessary condition for stable recovery.

The remainder of this paper is organized as follows. Notation, definitions, and some useful lemmas are introduced in Section 2. In Section 3, we present the main theorems for recovering D-block-sparse signals in the noiseless case and D-block-compressible signals in the noisy case. Finally, a conclusion is drawn in Section 4.

2 Preliminaries

We summarize the notation of this paper as follows. For a vector \(z=(z_1,z_2,\ldots ,z_n)^*\in {\mathbb {R}}^n\) and a positive integer \(M<n\), let \(\text{ supp }(z)=\{i\,|\,\Vert z_{[i]}\Vert _2\ne 0,i=1,2,\ldots ,M\}\) denote the block support of z. The \(l_2/l_0\)-norm of z is defined as \(\Vert z\Vert _{2\cdot 0}=\underset{i=1}{\overset{M}{\sum }}{I(\Vert z_{[i]}\Vert _2>0)}\), and z is called block k-sparse when \(\Vert z\Vert _{2\cdot 0}\le k\). \(\Vert z\Vert _{2\cdot 1}=\underset{i=1}{\overset{M}{\sum }}\Vert z_{[i]}\Vert _2\) is the mixed \(l_2/l_1\)-norm of z. \(T\subseteq \{1,2,\ldots ,M\}\) is an index set, and \(T^c\) is its complement in \(\{1,2,\ldots ,M\}\). In what follows, \({\mathfrak {T}}=\{d_1,d_2,\ldots ,d_M\}\) always denotes the block index set, with \(\underset{i=1}{\overset{M}{\sum }}d_i=d\) and \(0<d_i<d,i=1,2,\ldots ,M\). Denote by \(z_{T}\in {\mathbb {R}}^n\) the vector obtained from z by setting to zero all blocks whose index is not in T; therefore, \(z_{T^c}=z-z_T\). For a given frame \(D\in {\mathbb {R}}^{d\times n}\), we define

$$\begin{aligned} D\Sigma _k=\{x\in {\mathbb {R}}^d\,|\,\text{ there } \text{ exists }\ z\in {\mathbb {R}}^n \text{ with } \Vert z\Vert _{2\cdot 0}\le k\ \text{ such } \text{ that }\ x=Dz\}. \end{aligned}$$

\(D^*\) is the transpose of D, and \(D^{-1}(E)\) denotes the preimage of the set E under the operator D. Denote by \(\sigma _k(z_0)=\underset{\Vert z\Vert _{2\cdot 0}\le k}{\inf }\Vert z-z_0\Vert _{2\cdot 1}\) the best k-term block approximation error of \(z_0\) in the mixed \(l_2/l_1\)-norm.
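The quantity \(\sigma _k(\cdot )\) can be computed by zeroing all but the k blocks of largest \(l_2\)-norm; a minimal sketch (the helper name `sigma_k` and the numbers are ours, for illustration only):

```python
import numpy as np

def sigma_k(z, sizes, k):
    """Best block k-term approximation error in the mixed l2/l1-norm:
    sigma_k(z) = inf over block k-sparse w of ||z - w||_{2,1},
    attained by zeroing all but the k blocks of largest l2-norm."""
    idx = np.cumsum(sizes)[:-1]
    norms = np.array([np.linalg.norm(b)
                      for b in np.split(np.asarray(z, dtype=float), idx)])
    # sum of the M - k smallest blockwise l2-norms
    return np.sort(norms)[:-k].sum() if k > 0 else norms.sum()

sizes = [2, 2, 2]
z = np.array([3.0, 4.0, 0.3, 0.4, 0.0, 0.1])   # block norms: 5, 0.5, 0.1
print(sigma_k(z, sizes, 1))   # ~ 0.6 (= 0.5 + 0.1)
```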

The following two new null space properties are central to characterizing the reconstruction performance of the \(l_2/l_1\)-synthesis methods (1.5) and (1.6).

Definition 2.1

(k-order block sparse NSP of a frame D (block k-D-NSP)) Given a frame \(D\in {\mathbb {R}}^{d\times n}\) and the block index set \({\mathfrak {T}}=\{d_1,d_2,\ldots ,d_M\}\). A matrix \(A\in {\mathbb {R}}^{m\times d}\) satisfies the block D-NSP of order k over \({\mathfrak {T}}\) if, for any index set \(T\subseteq \{1,2,\ldots ,M\}\) with \(|T|\le k\) and any \(v\in D^{-1}(\ker A\setminus \{0\})\), there exists \(u\in \ker D\) such that

$$\begin{aligned} \Vert v_T+u\Vert _{2\cdot 1}<\Vert v_{T^c}\Vert _{2\cdot 1}. \end{aligned}$$
(2.1)

Definition 2.2

(k-order strong block sparse NSP of a frame D (block k-D-SNSP)) Given a dictionary \(D\in {\mathbb {R}}^{d\times n}\) and the block index set \({\mathfrak {T}}=\{d_1,d_2,\ldots ,d_M\}\). A matrix A satisfies the strong block sparse null space property with respect to D of order k over \({\mathfrak {T}}\) if there is a positive constant c such that, for any index set \(T\subseteq \{1,2,\ldots ,M\}\) with \(|T|\le k\) and any \(v\in \ker (AD)\), there exists \(u\in \ker D\) such that

$$\begin{aligned} \Vert v_{T^c}\Vert _{2\cdot 1}-\Vert v_T+u\Vert _{2\cdot 1}\ge c\Vert Dv\Vert _2. \end{aligned}$$
(2.2)
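Verifying the block D-NSP in full requires a search over \(D^{-1}(\ker A\setminus \{0\})\) and \(\ker D\). As a purely illustrative aid, the sketch below only evaluates the inequality (2.1) for one concrete triple (v, T, u); all helper names and numbers are ours and are not tied to any specific A or D:

```python
import numpy as np

def block_l21(x, sizes):
    """Mixed l2/l1-norm: sum of the blockwise l2-norms."""
    idx = np.cumsum(sizes)[:-1]
    return sum(np.linalg.norm(b) for b in np.split(np.asarray(x, float), idx))

def restrict(z, sizes, T):
    """z_T: keep the blocks indexed by T, zero out the rest."""
    out = np.zeros(len(z))
    start = 0
    for i, s in enumerate(sizes):
        if i in T:
            out[start:start + s] = z[start:start + s]
        start += s
    return out

def check_ineq_21(v, u, sizes, T):
    """Check ||v_T + u||_{2,1} < ||v_{T^c}||_{2,1} for one candidate witness u."""
    vT = restrict(v, sizes, T)
    return block_l21(vT + u, sizes) < block_l21(v - vT, sizes)

sizes = [2, 2]
v = np.array([0.1, 0.1, 2.0, 2.0])
u = np.zeros(4)                           # u = 0 always lies in ker D
print(check_ineq_21(v, u, sizes, T={0}))  # small block on T vs. large block off T
```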

The following lemmas will be useful in the next part of the paper.

Lemma 2.3

For any \(a,b\in {\mathbb {R}}^n\), the following inequality holds.

$$\begin{aligned} \Vert a+b\Vert _{2\cdot 1}\le \Vert a\Vert _{2\cdot 1}+\Vert b\Vert _{2\cdot 1}. \end{aligned}$$

This triangle inequality for the \(l_2/l_1\)-norm follows directly from the definition, so we omit the proof.

Given an index set \(T\subseteq \{1,2,\ldots ,M\}\) and a vector \(v\in D^{-1}(\ker A{\setminus }\{0\})\), for any \(u\in \ker D\) and \(t>0\), we define the real functions

$$\begin{aligned} \phi _v(u,t)=\underset{{\tilde{u}}\in \ker D}{\sup }(\Vert (tv+u)_{T^c}\Vert _{2\cdot 1}-\Vert (tv+u)_T+{\tilde{u}}\Vert _{2\cdot 1}) \end{aligned}$$

and

$$\begin{aligned} f_v(u,t)=\frac{\phi _v(u,t)}{t}. \end{aligned}$$
(2.3)

Lemma 2.4

Suppose that A satisfies the block k-D-NSP over the block index set \({\mathfrak {T}}=\{d_1,d_2,\ldots ,d_M\}\). Then for any \(v\in D^{-1}(\ker A{\setminus }\{0\})\), the function defined in (2.3) satisfies

$$\begin{aligned} \underset{u\in \ker D,t>0}{\inf }f_v(u,t)>0. \end{aligned}$$

Proof

Since A satisfies the block k-D-NSP over \({\mathfrak {T}}\), it is easy to see that \(f_v(u,t)>0\) for any \(v\in D^{-1}(\ker A{\setminus }\{0\})\), so it is sufficient to show that there is no \(v_0\in D^{-1}(\ker A\setminus \{0\})\) such that \(\underset{u\in \ker D,t>0}{\inf }f_{v_0}(u,t)=0.\) If this were not true, then for any \(\eta >0\), there would be \(u_0\in \ker D\) and \(t_0>0\) such that \(f_{v_0}(u_0,t_0)<\eta\). By the definition of \(f_{v_0}(u_0,t_0)\), that is,

$$\begin{aligned} \frac{1}{t_0}\underset{{\tilde{u}}\in \ker D}{\sup }(\Vert (t_0v_0+u_0)_{T^c}\Vert _{2\cdot 1}-\Vert (t_0v_0+u_0)_T+{\tilde{u}}\Vert _{2\cdot 1})<\eta . \end{aligned}$$

This leads to \(\Vert (t_0v_0+u_0)_{T^c}\Vert _{2\cdot 1}-\Vert (t_0v_0+u_0)_T+{\tilde{u}}\Vert _{2\cdot 1}\le 0\) for any \({\tilde{u}}\in \ker D\), which contradicts the assumption that A satisfies the block k-D-NSP. \(\square\)

3 Main results

Theorem 3.1

Block k-D-NSP is a necessary and sufficient condition for \(l_2/l_1\)-synthesis (1.5) to successfully recover all signals in the set \(D\Sigma _k\).

Proof

(Sufficiency) Suppose that the sensing matrix A satisfies the block D-NSP of order k. We show that the \(l_2/l_1\)-synthesis method (1.5) successfully recovers every block D-sparse signal \(x\in D\Sigma _k\) from the measurements \(y=Ax\). Otherwise, there is a vector \(x_0\in D\Sigma _k\) whose reconstruction \({{\hat{x}}}=D{{\hat{z}}}\ne x_0\). Write \(x_0=Dz_0\), where \(\Vert z_0\Vert _{2\cdot 0}\le k\). Let \(v=z_0-{{\hat{z}}}\); since \(Dv\ne 0\) and \(A{\hat{x}}=Ax_0=y\), it is easy to check that \(v\in D^{-1}(\ker A{\setminus }\{0\})\). Let \(T\subseteq \{1,2,\ldots ,M\}\) with \(|T|\le k\) be the block support set of \(z_0\). By the definition of block k-D-NSP, there must exist a \(u\in \ker D\) such that \(\Vert v_T+u\Vert _{2\cdot 1}<\Vert v_{T^c}\Vert _{2\cdot 1}\), which implies \(\Vert (z_0-{{\hat{z}}})_T+u\Vert _{2\cdot 1}=\Vert z_0-{{\hat{z}}}_T+u\Vert _{2\cdot 1}< \Vert {{\hat{z}}}_{T^c}\Vert _{2\cdot 1}\), and

$$\begin{aligned} \Vert z_0+u\Vert _{2\cdot 1}\le & {} \Vert z_0-{{\hat{z}}}_T+u\Vert _{2\cdot 1}+\Vert {{\hat{z}}}_T\Vert _{2\cdot 1}\\< & {} \Vert {{\hat{z}}}_{T^c}\Vert _{2\cdot 1}+\Vert {{\hat{z}}}_T\Vert _{2\cdot 1}\\= & {} \Vert {{\hat{z}}}\Vert _{2\cdot 1}. \end{aligned}$$

Since \(AD(z_0+u)=Ax_0=y\), the vector \(z_0+u\) is feasible for (1.5) with a strictly smaller \(l_2/l_1\)-norm, which contradicts the assumption that \({{\hat{z}}}\) is a minimizer of problem (1.5).

(Necessity) Assume the \(l_2/l_1\)-synthesis method (1.5) can successfully recover all signals in \(D\Sigma _k\); we need to show that the sensing matrix A satisfies the block D-NSP of order k. For any \(v\in D^{-1}(\ker A\setminus \{0\})\) and any \(T\subseteq \{1,2,\ldots ,M\}\) with \(|T|\le k\) over the block index set \({\mathfrak {T}}=\{d_1,d_2,\ldots ,d_M\}\), denote \(x_0=Dv_T\); then \(x_0\in D\Sigma _k\), and let \(y=Ax_0\) be the measurement. Let \({\hat{z}}\) be the solution of (1.5) and \({\hat{x}}=D{\hat{z}}\) the reconstructed signal. By assumption, \({\hat{x}}=D{\hat{z}}=x_0\), so there is a \(u\in \ker D\) such that \({\hat{z}}=v_T+u\). Since \(AD(v_T-v)=y\) and \(D(v_T-v)\ne x_0\), the vector \(v_T-v\) is feasible but cannot be a minimizer of (1.5). Therefore, \(\Vert v_T+u\Vert _{2\cdot 1}<\Vert v_T-v\Vert _{2\cdot 1}=\Vert v_{T^c}\Vert _{2\cdot 1}\), which implies that A satisfies the block k-D-NSP. \(\square\)

In classical compressed sensing theory, it is well known that the null space property is a necessary and sufficient condition not only for sparse signal recovery in the noiseless case, but also for compressible signal recovery with measurement errors [1, 33]. We will show that this result generalizes to the block D-NSP when the reconstruction is carried out on signals which are block sparse or block-compressible in a given frame.

The block D-SNSP defined in Definition 2.2 looks stronger than the block D-NSP. We now show that, with this stronger property, D-block-compressible signals can be stably recovered via (1.6), as follows.

Theorem 3.2

If the sensing matrix \(A\in {\mathbb {R}}^{m\times d}\) satisfies block k-D-SNSP, then any solution \({\hat{z}}\) of \(l_2/l_1\)-synthesis method (1.6) satisfies

$$\begin{aligned} \Vert D{\hat{z}}-x_0\Vert _2\le C_1\sigma _k(z_0)+C_2\epsilon , \end{aligned}$$
(3.1)

where \(z_0\) is any representation of \(x_0\) in D, \(\sigma _k(z_0)=\underset{\Vert z\Vert _{2\cdot 0}\le k}{\inf }\Vert z-z_0\Vert _{2\cdot 1}\), and \(C_1,C_2\) are constants.

Proof

Denote by \(x_0=Dz_0\) the unknown signal we want to recover, by \({\mathfrak {T}}=\{d_1,d_2,\ldots ,d_M\}\) the block index set, and by \(T\subseteq \{1,2,\ldots ,M\}\) the index set of the k largest blocks of \(z_0\) (in \(l_2\)-norm). Denote \(h=D({\hat{z}}-z_0)={\hat{x}}-x_0\), and decompose it as \(h=t+\eta\), where \(t\in \ker A\) and \(\eta \in (\ker A)^\bot\). Let \(w=D^*(DD^*)^{-1}t\); then \(h=Dw+\eta\) with \(Dw\in \ker A\). It is not difficult to see that

$$\begin{aligned} \Vert \eta \Vert _2\le \frac{1}{{\mathcal {V}}_A}\Vert Ah\Vert _2\le \frac{2\epsilon }{{\mathcal {V}}_A}, \end{aligned}$$
(3.2)

where \({\mathcal {V}}_A\) is the smallest positive singular value of A, and \({\mathcal {V}}_D\) below denotes the smallest positive singular value of D. Let \(\xi =D^*(DD^*)^{-1}\eta\); then \(\eta =D\xi\), and it is easy to show that

$$\begin{aligned} \Vert \xi \Vert _2\le \frac{1}{{\mathcal {V}}_D}\Vert \eta \Vert _2\le \frac{2}{{\mathcal {V}}_A{\mathcal {V}}_D}\epsilon . \end{aligned}$$
(3.3)

Since \(h=D(w+\xi )\) and \(h=D({\hat{z}}-z_0)\), we have \({\hat{z}}-z_0=w+\xi +u_1\) with \(u_1\in \ker D\).

Let \(v=w+u_1\); then \({\hat{z}}-z_0=v+\xi\) and \(v\in \ker (AD)\). By assumption, A satisfies the block k-D-SNSP; hence, there is a \(u\in \ker D\) such that

$$\begin{aligned} \Vert v_{T^c}\Vert _{2\cdot 1}-\Vert v_T+u\Vert _{2\cdot 1}\ge c\Vert Dv\Vert _2. \end{aligned}$$

Therefore,

$$\begin{aligned}{} & {} \Vert v+z_{0,T}\Vert _{2\cdot 1}-\Vert -u+z_{0,T}\Vert _{2\cdot 1}\nonumber \\{} & {} \quad =\Vert v_{T^c}+v_T+z_{0,T}\Vert _{2\cdot 1}-\Vert -u_T+z_{0,T}-u_{T^c}\Vert _{2\cdot 1}\nonumber \\{} & {} \quad =\Vert v_{T^c}\Vert _{2\cdot 1}+\Vert v_T+z_{0,T}\Vert _{2\cdot 1}-\Vert -u_T+z_{0,T}\Vert _{2\cdot 1}-\Vert u_{T^c}\Vert _{2\cdot 1}\nonumber \\{} & {} \quad =\Vert v_{T^c}\Vert _{2\cdot 1}-(\Vert u_T-z_{0,T}\Vert _{2\cdot 1}-\Vert v_T+z_{0,T}\Vert _{2\cdot 1})-\Vert u_{T^c}\Vert _{2\cdot 1}\nonumber \\{} & {} \quad \ge \Vert v_{T^c}\Vert _{2\cdot 1}-\Vert u_T+v_T\Vert _{2\cdot 1}-\Vert u_{T^c}\Vert _{2\cdot 1}\nonumber \\{} & {} \quad =\Vert v_{T^c}\Vert _{2\cdot 1}-\Vert u+v_T\Vert _{2\cdot 1}\ge c\Vert Dv\Vert _2. \end{aligned}$$
(3.4)

On the other hand, since \({\hat{z}}\) is a minimizer, we have

$$\begin{aligned}{} & {} \Vert -u+z_{0,T}\Vert _{2\cdot 1}+\Vert z_{0,T^c}\Vert _{2\cdot 1}\ge \Vert -u+z_0\Vert _{2\cdot 1}\ge \Vert {\hat{z}}\Vert _{2\cdot 1}\\{} & {} \quad =\Vert z_0+v+\xi \Vert _{2\cdot 1}\ge \Vert v+z_0\Vert _{2\cdot 1}-\Vert \xi \Vert _{2\cdot 1}\\{} & {} \quad \ge \Vert v+z_{0,T}\Vert _{2\cdot 1}-\Vert z_{0,T^c}\Vert _{2\cdot 1}-\Vert \xi \Vert _{2\cdot 1}. \end{aligned}$$

Rearranging the above inequality, we obtain

$$\begin{aligned} \Vert v+z_{0,T}\Vert _{2\cdot 1}-\Vert -u+z_{0,T}\Vert _{2\cdot 1}\le 2\Vert z_{0,T^c}\Vert _{2\cdot 1}+\Vert \xi \Vert _{2\cdot 1}. \end{aligned}$$
(3.5)

Combining (3.4) with (3.5), we get

$$\begin{aligned} c\Vert Dv\Vert _2\le 2\Vert z_{0,T^c}\Vert _{2\cdot 1}+\Vert \xi \Vert _{2\cdot 1}. \end{aligned}$$

Using the Hölder inequality \(\Vert \xi \Vert _{2\cdot 1}\le \sqrt{M}\Vert \xi \Vert _2\), the above inequality becomes

$$\begin{aligned} \Vert Dv\Vert _2\le \frac{2}{c}\Vert z_{0,T^c}\Vert _{2\cdot 1}+\frac{\sqrt{M}}{c}\Vert \xi \Vert _2. \end{aligned}$$
(3.6)

Finally, using (3.6),

$$\begin{aligned} \Vert h\Vert _2= & {} \Vert Dv+D\xi \Vert _2=\Vert Dv+\eta \Vert _2\le \Vert Dv\Vert _2+\Vert \eta \Vert _2,\\\le & {} \frac{2}{c}\Vert z_{0,T^c}\Vert _{2\cdot 1}+\frac{\sqrt{M}}{c}\Vert \xi \Vert _2+\Vert \eta \Vert _2. \end{aligned}$$

That is,

$$\begin{aligned} \Vert {\hat{x}}-x_0\Vert _2\le \frac{2}{c}\Vert z_{0,T^c}\Vert _{2\cdot 1}+\frac{\sqrt{M}}{c}\Vert \xi \Vert _2+\Vert \eta \Vert _2. \end{aligned}$$

By using (3.2) and (3.3), the above inequality can be rewritten as

$$\begin{aligned} \Vert {\hat{x}}-x_0\Vert _2\le C_1\Vert z_{0,T^c}\Vert _{2\cdot 1}+C_2\epsilon =C_1\sigma _k(z_0)+C_2\epsilon , \end{aligned}$$
(3.7)

where \(C_1=\frac{2}{c}\) and \(C_2=\frac{2}{{\mathcal {V}}_A}\left( \frac{\sqrt{M}}{c{\mathcal {V}}_D}+1\right)\); note that, by the choice of T, \(\Vert z_{0,T^c}\Vert _{2\cdot 1}=\sigma _k(z_0)\). \(\square\)

Remark 3.3

  (a)

    When \(d_1=d_2=\cdots =d_M=1\) and \(M=n\), the block D-SNSP becomes the D-SNSP, and our result is consistent with Theorem 5.2 in [11].

  (b)

    When \(z_0\) is block k-sparse and \(\epsilon =0\), the bound (3.1) shows that, under the block D-SNSP, block sparse signals can be exactly recovered by (1.5).

By Definition 2.2, it is obvious that the block D-SNSP is not weaker than the block D-NSP. It is natural to ask how much stronger it is. The following theorem shows that the two conditions are actually the same.

Theorem 3.4

Let \(A\in {\mathbb {R}}^{m\times d}\) and \(D\in {\mathbb {R}}^{d\times n}\). The matrix A satisfies the block D-NSP if and only if A satisfies the block D-SNSP of the same order.

Proof

Suppose A satisfies the block k-D-NSP over the block index set \({\mathfrak {T}}=\{d_1,d_2,\ldots ,d_M\}\). For any \(w\in \ker (AD)\) with \(Dw=0\), take \(u=-w\) (and \(u=0\) when \(w=0\)); then \(\Vert w_{T^c}\Vert _{2\cdot 1}-\Vert w_T+u\Vert _{2\cdot 1}=0\), and (2.2) holds for any positive constant c. To complete the proof, we just need to show that the function

$$\begin{aligned} F(w)=\underset{{\tilde{u}}\in \ker D}{\sup }\frac{\Vert w_{T^c}\Vert _{2\cdot 1}-\Vert w_T+{\tilde{u}}\Vert _{2\cdot 1}}{\Vert Dw\Vert _2} \end{aligned}$$

has a positive lower bound on \(D^{-1}(\ker A\setminus \{0\})\) for every \(|T|\le k\).

Decompose w into two parts as \(w=tv+u\), where \(u=\text{ P}_{\ker D}w\) and \(tv=\text{ P}_{(\ker D)^{\bot }}w\), with \(\Vert v\Vert _2=1\) and \(t>0\). By the definition of infimum, we have

$$\begin{aligned} \underset{w\in D^{-1}(\ker A\setminus \{0\})}{\inf }F(w)=\underset{v\in (\ker D)^\bot ,\Vert v\Vert _2=1}{\inf }\ \underset{u\in \ker D,t>0}{\inf }f_v(u,t)/\Vert Dv\Vert _2. \end{aligned}$$

By Lemma 2.4, the function \(\underset{u\in \ker D,t>0}{\inf }f_v(u,t)\) is always positive. Since \((\ker D)^{\bot }\cap {\mathbb {S}}^{n-1}\) is a compact set, it is sufficient to prove that the function \(\underset{u\in \ker D,t>0}{\inf }f_v(u,t)\) is lower semicontinuous with respect to v.

For any \(v\in D^{-1}(\ker A\setminus \{0\})\) and any \(\eta >0\), take \(\delta =\frac{\eta }{\sqrt{M}}>0\); then for any \(\Vert e\Vert _2<\delta\),

$$\begin{aligned} f_{v+e}(u,t)= & {} \underset{{\tilde{u}}\in \ker D}{\sup }\frac{\Vert (tv+te+u)_{T^c}\Vert _{2\cdot 1}-\Vert (tv+te+u)_T+{\tilde{u}}\Vert _{2\cdot 1}}{t}\\\ge & {} \underset{{\tilde{u}}\in \ker D}{\sup }\frac{\Vert (tv+u)_{T^c}\Vert _{2\cdot 1}-\Vert (tv+u)_T+{\tilde{u}}\Vert _{2\cdot 1}}{t}-\Vert e\Vert _{2\cdot 1}\\\ge & {} \underset{{\tilde{u}}\in \ker D}{\sup }\frac{\Vert (tv+u)_{T^c}\Vert _{2\cdot 1}-\Vert (tv+u)_T+{\tilde{u}}\Vert _{2\cdot 1}}{t}-\sqrt{M}\Vert e\Vert _2. \end{aligned}$$

Taking the infimum over u in \(\ker D\) and \(t>0\) of both sides, we get

$$\begin{aligned} \underset{u\in \ker D,t>0}{\inf }f_{v+e}(u,t)\ge \underset{u\in \ker D,t>0}{\inf }f_v(u,t)-\eta , \end{aligned}$$

which shows that the function is lower semicontinuous, and the proof is complete. \(\square\)

4 Conclusion

In this paper, we generalized the D-NSP proposed in [11] to the block D-NSP. We proved in Theorem 3.1 that this new property is equivalent to the exact recovery of D-block-sparse signals via the \(l_2/l_1\)-synthesis method. In addition, a stable reconstruction result for D-block-compressible signals via \(l_2/l_1\)-synthesis in the noisy case was given in Theorem 3.2. To the best of our knowledge, these studies provide the first characterization of block sparse signal recovery with dictionaries via the \(l_2/l_1\)-synthesis approach.

In Theorem 3.4, we proved that A satisfies the block D-SNSP if and only if A satisfies the block D-NSP of the same order. Combined with Theorems 3.1 and 3.2, it is clear that the block D-NSP is not only a sufficient and necessary condition for the success of \(l_2/l_1\)-synthesis without measurement errors, but also a sufficient and necessary condition for the stability of \(l_2/l_1\)-synthesis in the noisy case.

As is well known, the better the sparse representation of a signal x, the easier the reconstruction problem becomes. The importance of the block D-NSP lies in that it does not require D to be incoherent, which expands the selection range of the frame D. These results help characterize the reconstruction performance of the \(l_2/l_1\)-synthesis approach and are of great significance for the study and design of the measurement matrix A.

Availability of data and materials

Not applicable.

References

  1. A. Aldroubi, X. Chen, A.M. Powell, Perturbations of measurement matrices and dictionaries in compressed sensing. Appl. Comput. Harmonic Anal. 33, 282–291 (2012)


  2. W. Bajwa, R. Calderbank, S. Jafarpour, Why Gabor frames Two fundamental measures of coherence and their geometric significance. J. Commun. Netw. 12, 289–307 (2010)


  3. T. Cai, A. Zhang, Compressed sensing and affine rank minimization under restricted isometry. IEEE Trans. Signal. Process. 61, 3279–3290 (2013)


  4. T. Cai, A. Zhang, Sharp RIP bound for sparse signal and low-rank matrix recovery. Appl. Comput. Harmonic Anal. 35, 74–93 (2013)


  5. T. Cai, A. Zhang, Sparse representation of a polytope and recovery of sparse signals and low-rank matrices. IEEE Trans. Inf. Theory 60, 122–132 (2014)


  6. T. Cai, L. Wang, G. Xu, New bounds for restricted isometry constants. IEEE Trans. Inf. Theory 56, 4388–4394 (2010)


  7. T. Cai, L. Wang, G. Xu, Shifting inequality and recovery of sparse signals. IEEE Trans. Signal. Process. 58, 1300–1308 (2010)


  8. E.J. Candès, T. Tao, Decoding by linear programming. IEEE Trans. Inf. Theory 51, 4203–4215 (2005)


  9. E.J. Candès, J.K. Romberg, T. Tao, Stable signal recovery from incomplete and inaccurate measurements. Commun. Pure Appl. Math. 59, 1207–1223 (2006)


  10. E.J. Candès, Y.C. Eldar, D. Needell, P. Randall, Compressed sensing with coherent and redundant dictionaries. Appl. Comput. Harmonic Anal. 31, 59–73 (2011)


  11. X. Chen, H. Wang, R. Wang, A null space analysis of the l1-synthesis method in dictionary-based compressed sensing. Appl. Comput. Harmonic Anal. 37, 492–515 (2014)


  12. S. Cotter, B. Rao, Sparse channel estimation via matching pursuit with application to equalization. IEEE Trans. Commun. 50, 374–377 (2002)


  13. D. Donoho, Compressed sensing. IEEE Trans. Inf. Theory 52, 1289–1306 (2006)


  14. M. Elad, A.M. Bruckstein, A generalized uncertainty principle and sparse representation in pairs of bases. IEEE Trans. Inf. Theory 48, 2558–2567 (2002)


  15. M. Elad, P. Milanfar, R. Rubinstein, Analysis versus synthesis in signal priors. Inverse. Probl. 23, 947–968 (2007)


  16. S. Foucart, M.J. Lai, Sparsest solutions of underdetermined linear systems via lq-minimization for 0 < q ≤ 1. Appl. Comput. Harmonic Anal. 26, 395–407 (2009)


  17. S. Foucart, H. Rauhut, A Mathematical Introduction to Compressive Sensing (Springer, New York, 2013)


  18. R. Gribonval, M. Nielsen, Sparse decompositions in unions of bases. IEEE Trans. Inf. Theory 49(12), 3320–3325 (2003)


  19. J. Huang , X. Huang, D. Metaxas, Learning with Dynamic Group Sparsity, in IEEE 12th International Conference on Computer Vision, pp. 64–71 (2009)

  20. B.S. Kashin, V.N. Temlyakov, A remark on compressed sensing. Math. Notes 82(5–6), 748–755 (2007)


  21. M.J. Lai, Y. Liu, The null space property for sparse recovery from multiple measurement vectors. Appl. Comput. Harmonic Anal. 30(3), 402–406 (2011)


  22. S. Li, T. Mi, Y. Liu, Performance analysis of l1-synthesis with coherent frames. arXiv:1202.2223 (2012)

  23. J. Lin, S. Li, Sparse recovery with coherent tight frames via analysis Dantzig selector and analysis LASSO. Appl. Comput. Harmonic Anal. 37, 126–139 (2014)


  24. J. Lin, S. Li, Y. Shen, New bounds for restricted isometry constants with coherent tight frames. IEEE Trans. Signal. Process. 61, 611–621 (2013)


  25. Y. Liu, T. Mi, S. Li, Compressed sensing with general frames via optimal-dual-based l1-analysis. IEEE Trans. Inf. Theory 58, 4201–4214 (2012)


  26. X. Luo, W. Y, J. Ha, Non-convex block-sparse compressed sensing with coherent tight frames. EURASIP J. Adv. Signal Process., 2020, 1–9 (2020)

  27. A. Majumdar, R.K. Ward, Compressed sensing of color images. Signal Process. 90(12), 3122–3127 (2010)


  28. Q. Mo, S. Li, New bounds on the restricted isometry constant δ2k. Appl. Comput. Harmonic Anal. 31, 460–468 (2011)


  29. F. Parvaresh, H. Vikalo, H. Misra et al., Recovering sparse signals using sparse measurement matrices in compressed DNA microarrays. IEEE J. Sel. Top. Signal Process. 2(3), 275–285 (2008)

  30. J. Peng, S. Yue, H. Li, NP/CLP equivalence: a phenomenon hidden among sparsity models for information processing. arXiv:1501.02018 (2015)

  31. H. Rauhut, K. Schnass, P. Vandergheynst, Compressed sensing and redundant dictionaries. IEEE Trans. Inform. Theory 54, 2210–2219 (2008)

  32. L. Song, J. Lin, Compressed sensing with coherent tight frames via \(l_q\)-minimization for \(0 < q \le 1\). Inverse Probl. Imaging 8(3), 761–777 (2017)

  33. Q. Sun, Sparse approximation property and stable recovery of sparse signals from noisy measurements. IEEE Trans. Signal Process. 59(10), 5086–5090 (2011)

  34. J.A. Tropp, Greed is good: algorithmic results for sparse approximation. IEEE Trans. Inform. Theory 50, 2231–2242 (2004)

  35. R. Vidal, Y. Ma, A unified algebraic approach to 2-D and 3-D motion segmentation and estimation. J. Math. Imaging Vis. 25(3), 403–421 (2006)

  36. Y. Wang, J. Wang, Z. Xu, A note on block-sparse signal recovery with coherent tight frames. Discrete Dyn. Nat. Soc. 3, 1–6 (2013)

  37. F.G. Wu, D.H. Li, The restricted isometry property for signal recovery with coherent tight frames. Bull. Aust. Math. Soc. 92(3), 496–507 (2015)

  38. Z.Q. Xu, Compressed sensing. Sci. China Math. 42(9), 865–877 (2012)

  39. F. Zhang, J. Wang, Block-sparse compressed sensing with redundant tight frames via \(l_2/l_1\)-minimization. Pure Appl. Math. (2019)

Acknowledgements

All authors express their sincere gratitude to the reviewers for their careful reading and excellent suggestions, which have made the article more comprehensive.

Funding

This work was supported by the Featured Innovation Projects of the General University of Guangdong Province (Grant No. 2023KTSCX096) and partially funded by the Characteristic Innovation Project of Universities in Guangdong Province (Natural Science), China (Grant No. 2021KTSCX085).

Author information

Contributions

This article was mainly written by the first author, Fengong Wu; the second author, Penghong Zhong, participated in the discussion; and the third author, Huasong Xiao, and the fourth author, Chunmei Miao, collected relevant literature and proofread the initial draft.

Corresponding author

Correspondence to Fengong Wu.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

All authors have seen and approved the final version of the submitted manuscript.

Competing interests

All authors declare no conflict of interest in connection with this work. We do not have any commercial or associative interest that represents a conflict of interest in connection with the submitted work. We confirm that all organizations that funded our research have been mentioned in our submission, including grant numbers.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Wu, F., Zhong, P., Xiao, H. et al. Frame-based block sparse compressed sensing via \(l_2/l_1\)-synthesis. EURASIP J. Adv. Signal Process. 2024, 76 (2024). https://doi.org/10.1186/s13634-024-01175-7
