- Research
- Open Access

# SBL-based multi-task algorithms for recovering block-sparse signals with unknown partitions

*EURASIP Journal on Advances in Signal Processing*
**volume 2014**, Article number: 14 (2014)

## Abstract

We consider in this paper the problem of reconstructing block-sparse signals with unknown block partitions. In the first part of this work, we extend block-sparse Bayesian learning (BSBL), originally developed for recovering a single block-sparse signal in a single compressive sensing (CS) task, to the case of multiple CS tasks. A new multi-task signal recovery algorithm, called extended multi-task block-sparse Bayesian learning (EMBSBL), is proposed. EMBSBL exploits the statistical correlation among multiple signals as well as the intra-block correlation within individual signals to improve performance, and it does not need *a priori* information on the block partition. In the second part of this paper, we develop the EMBSBL-based synthesized multi-task signal recovery algorithm, namely SEMBSBL, to make EMBSBL applicable to the single CS task case. The idea is to synthesize new CS tasks from the single CS task via circular-shifting operations and to utilize the minimum description length (MDL) principle to determine the proper set of synthesized CS tasks for signal reconstruction. SEMBSBL achieves better signal reconstruction performance than other algorithms that recover block-sparse signals individually. Simulations corroborate the theoretical developments.

## 1 Introduction

Compressive sensing (CS) enables reconstructing a signal that is sparse in a certain domain from measurements obtained at a rate significantly lower than the Nyquist rate [1]. If, in addition to being sparse, the signal representation is also structured in the form of clustered non-zeros, the signal is referred to as block-sparse. In practice, block-sparsity can be found in multi-band signals [2] or in measurements of gene expression levels [3]. It has been shown that exploiting block-sparsity enables robust signal recovery from fewer compressive measurements [4]. We shall consider in this paper the efficient recovery of block-sparse signals.

Several block-sparse signal reconstruction algorithms have been developed in the literature. Based on compressive sampling matching pursuit (CoSaMP) [5], block compressive sampling matching pursuit (BCoSaMP) was proposed in [4]. It utilizes knowledge of the number of non-zero blocks to achieve signal recovery. On the basis of orthogonal matching pursuit (OMP) [6], block orthogonal matching pursuit (BOMP) was developed in [7]. Zou et al. proposed a block fixed-point continuation algorithm in [8] for block-sparse signal recovery. Elhamifar and Vidal approached the problem via convex relaxation and convex optimization [9]. The two methods developed in [8] and [9] require the availability of information on the block size. In [10], dictionary optimization for block-sparse signal representation was studied, and the work assumed that the maximum block length was known. More recently, CluSS-MCMC [11] and BM-MAP-OMP [12] have been proposed, which require little *a priori* knowledge of the block partition. On the basis of Bayesian sparse learning for temporally correlated signals [13], the work in [14] proposed two block-sparse signal recovery algorithms: block-sparse Bayesian learning (BSBL) and its extended version, the EBSBL algorithm. BSBL utilizes the block partition information, while EBSBL handles signals with unknown block partitions. Most techniques reviewed above fall under the category of single-task CS, where the focus is on recovering a block-sparse signal from its compressive measurements.

The contribution of this paper is twofold. We shall first consider block-sparse signal reconstruction in a multi-task scenario, where the signals in different CS tasks are statistically correlated. Multi-task compressive sensing (MCS) was originally developed in [15]. Mathematically, we have *L* CS tasks

$$ \boldsymbol{y}_i = \boldsymbol{\Phi}_i \boldsymbol{\theta}_i + \boldsymbol{n}_i, \quad i = 1, \dots, L, \qquad (1) $$

where *y*_{i} is the compressive measurement vector of the *i*th task and **Φ**_{i} is the *M*_{i} × *N* measurement matrix (*M*_{i} ≪ *N*). *θ*_{i} is the original signal in the *i*th task to be recovered, and *n*_{i} represents the measurement noise. In MCS, the correlation among the *θ*_{i} is exploited so that the *θ*_{i} are reconstructed jointly. MCS outperforms single-task CS algorithms in terms of the reduced number of compressive measurements needed for efficient signal recovery. However, existing MCS techniques do not take into account structural information in the signals, such as block-sparsity. We shall therefore propose in this paper an extended version of the EBSBL algorithm from [14]. The original EBSBL method does not assume knowledge of the block partition information, and it exhibits better block-sparse signal reconstruction performance than methods such as CluSS-MCMC and BM-MAP-OMP. We shall generalize EBSBL to the MCS scenario and obtain a new technique, referred to as extended multi-task block-sparse Bayesian learning (EMBSBL). Besides using the statistical correlation among the *θ*_{i} as in MCS, EMBSBL also utilizes the intra-block correlation within each signal to improve performance. Simulations show that the block-sparse signal recovery performance of EMBSBL is superior to that of the benchmark algorithms.
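As a small concrete illustration of the multi-task model above, the following sketch generates *L* correlated block-sparse signals and their compressive measurements. All sizes, the noise level, and the shared support are illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N, L = 100, 3            # signal length and number of CS tasks (illustrative)
M = [40, 45, 50]         # per-task measurement counts, M_i << N

# A block-sparse support shared by all tasks: two non-zero clusters.
# Sharing the support makes the signals theta_i correlated across tasks.
support = np.r_[10:18, 60:66]

tasks = []
for i in range(L):
    theta = np.zeros(N)
    theta[support] = rng.standard_normal(support.size)  # block-sparse signal
    Phi = rng.standard_normal((M[i], N))                # M_i x N measurement matrix
    noise = 0.01 * rng.standard_normal(M[i])            # measurement noise n_i
    y = Phi @ theta + noise                             # y_i = Phi_i theta_i + n_i
    tasks.append((y, Phi, theta))
```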

When there is only one CS task, the proposed EMBSBL algorithm becomes inapplicable. To address this problem, in the second part of this work, we shall augment EMBSBL with the concept of synthesized multi-task-based signal recovery. The new algorithm is referred to as SEMBSBL in the rest of the paper. SEMBSBL first synthesizes multiple CS tasks from the single CS task and then applies EMBSBL to recover the block-sparse signal. The multiple CS tasks are produced by simply circular-shifting the columns of the measurement matrix of the original CS model, which corresponds to circular-shifting the elements of the original signal vector and creates signals that have overlapping clusters, or equivalently, correlated signals. The number of synthesized tasks is determined by the minimum description length (MDL) principle. At the cost of increased computational complexity, the newly proposed SEMBSBL technique outperforms previously developed block-sparse signal recovery methods in terms of significantly reduced reconstruction errors and the removal of the need for detailed information on the sparsity structure. Computer simulations are provided to demonstrate the good performance of the proposed SEMBSBL method.

The remainder of this paper is organized as follows. Section 2 presents the new EMBSBL algorithm for recovering multiple correlated block-sparse signals jointly. Section 3 illustrates the idea of synthesizing multiple CS tasks from a single one and presents the proposed SEMBSBL algorithm. Simulation results are given in Section 4 and Section 5 concludes the paper.

## 2 EMBSBL algorithm

The development of EMBSBL starts with extending BSBL-BO in [14] to the case of multiple CS tasks. The resulting algorithm, called MBSBL, can jointly recover block-sparse signals when their non-zero blocks all have the same size. We next generalize MBSBL to obtain EMBSBL, which does not need information on the signal sparsity structure.

### 2.1 MBSBL

Let *S* be the block size and *K* the number of blocks in every signal to be recovered. If the measurement noise *n*_{i} in (1) follows an i.i.d. Gaussian distribution with zero mean and covariance matrix *β*^{−1}**I**, the conditional likelihood function of *y*_{i} is

$$ p\left(\boldsymbol{y}_i \mid \boldsymbol{\theta}_i, \beta\right) = \mathcal{N}\left(\boldsymbol{y}_i \mid \boldsymbol{\Phi}_i \boldsymbol{\theta}_i, \beta^{-1}\mathbf{I}\right), \qquad (2) $$

where $\mathcal{N}\left(\boldsymbol{y}_i \mid \boldsymbol{\Phi}_i \boldsymbol{\theta}_i, \beta^{-1}\mathbf{I}\right)$ represents a Gaussian distribution with mean **Φ**_{i}*θ*_{i} and covariance matrix *β*^{−1}**I**. In BSBL, each block $\boldsymbol{\theta}_{i,j} \in \mathcal{R}^{S \times 1}$ is assumed to satisfy a zero-mean multivariate Gaussian distribution

$$ p\left(\boldsymbol{\theta}_{i,j} \mid \gamma_j, \mathbf{B}_j\right) = \mathcal{N}\left(\boldsymbol{\theta}_{i,j} \mid \mathbf{0}, \gamma_j \mathbf{B}_j\right). \qquad (3) $$

If we further assume that the blocks are mutually uncorrelated, the prior for *θ*_{i} is given by

$$ p\left(\boldsymbol{\theta}_i \mid \boldsymbol{\gamma}, \mathbf{B}_0\right) = \mathcal{N}\left(\boldsymbol{\theta}_i \mid \mathbf{0}, \boldsymbol{\Sigma}_0\right), \qquad (4) $$

where **γ** = {*γ*_{j}}_{j=1,…,K}, **B**_{0} = {**B**_{j}}_{j=1,…,K}, and $\boldsymbol{\Sigma}_0 = \text{diag}\left(\gamma_1 \mathbf{B}_1, \dots, \gamma_K \mathbf{B}_K\right)$. Here, **B**_{j} is a positive definite matrix capturing the correlation structure within the *j*th block, and *γ*_{j} is a nonnegative parameter controlling the block-sparsity of *θ*_{i,j}. When *γ*_{j} = 0, the *j*th block becomes zero. During the learning process, most *γ*_{j} tend to zero, owing to the mechanism of automatic relevance determination [13].
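The block prior above can be sketched numerically. In the sketch below, the intra-block matrix **B** is given a Toeplitz correlation structure (anticipating the constraint used in Section 2.1.2); the sizes and the γ values are illustrative assumptions:

```python
import numpy as np

S, K = 4, 5                      # block size and number of blocks (illustrative)
r = 0.6                          # intra-block correlation coefficient (assumed)
B = r ** np.abs(np.subtract.outer(np.arange(S), np.arange(S)))  # Toeplitz correlation
gamma = np.array([1.0, 0.0, 0.5, 0.0, 2.0])   # most gamma_j zero -> block-sparsity

# Sigma_0 = diag(gamma_1 B, ..., gamma_K B): blocks with gamma_j = 0 vanish
Sigma0 = np.kron(np.diag(gamma), B)

# Sample theta ~ N(0, Sigma0) via a matrix square root (Sigma0 is singular here,
# so we build the root block-wise from sqrt(gamma_j) and chol(B))
rng = np.random.default_rng(1)
root = np.kron(np.diag(np.sqrt(gamma)), np.linalg.cholesky(B))
theta = root @ rng.standard_normal(S * K)

print(np.abs(theta).reshape(K, S).sum(axis=1))  # exactly zero where gamma_j = 0
```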

To avoid overfitting, we set **B**_{j} = **B**, *j* = 1,…,*K*. Thus, **Σ**_{0} = **Γ** ⊗ **B**, where $\mathbf{\Gamma} \overset{\Delta}{=} \text{diag}\left(\gamma_1, \dots, \gamma_K\right)$ and ⊗ denotes the Kronecker product. The posterior distribution of *θ*_{i} is then given by

$$ p\left(\boldsymbol{\theta}_i \mid \boldsymbol{y}_i, \beta, \boldsymbol{\gamma}, \mathbf{B}\right) = \mathcal{N}\left(\boldsymbol{\theta}_i \mid \boldsymbol{\mu}_{\boldsymbol{\theta}_i}, \boldsymbol{\Sigma}_{\boldsymbol{\theta}_i}\right), \qquad (5) $$

where

$$ \boldsymbol{\mu}_{\boldsymbol{\theta}_i} = \beta\, \boldsymbol{\Sigma}_{\boldsymbol{\theta}_i} \boldsymbol{\Phi}_i^T \boldsymbol{y}_i, \qquad (6) $$

$$ \boldsymbol{\Sigma}_{\boldsymbol{\theta}_i} = \left(\boldsymbol{\Sigma}_0^{-1} + \beta\, \boldsymbol{\Phi}_i^T \boldsymbol{\Phi}_i\right)^{-1}. \qquad (7) $$
From (5), we note that *β*, **γ**, and **B** are the parameters shared by all CS tasks. To estimate them, let **Y** = {*y*_{1},…,*y*_{L}} be the measurement set of the *L* CS tasks. We have

$$ p\left(\mathbf{Y} \mid \beta, \boldsymbol{\gamma}, \mathbf{B}\right) = \prod_{i=1}^{L} \mathcal{N}\left(\boldsymbol{y}_i \mid \mathbf{0}, \mathbf{C}_i\right). \qquad (8) $$

The logarithm of *p*(**Y** | *β*, **γ**, **B**) is

$$ L\left(\beta, \boldsymbol{\gamma}, \mathbf{B}\right) = -\frac{1}{2} \sum_{i=1}^{L} \left[ \log\left(\det\left(\mathbf{C}_i\right)\right) + \boldsymbol{y}_i^T \mathbf{C}_i^{-1} \boldsymbol{y}_i \right] + \text{const}, \qquad (9) $$

where $\mathbf{C}_i = \beta^{-1}\mathbf{I} + \boldsymbol{\Phi}_i \boldsymbol{\Sigma}_0 \boldsymbol{\Phi}_i^T$. Maximizing *L*(*β*, **γ**, **B**) would yield the estimates of the shared parameters *β*, **γ**, and **B**. We shall adopt the approach used in [14]: **γ** is identified via the bound-optimization method, while *β* and **B** are found via expectation maximization (EM).
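Before the shared parameters are estimated, the per-task posterior moments in (6) and (7) are needed; they are standard Gaussian identities and can be computed directly. A minimal numpy sketch (illustrative sizes; the identity matrix stands in for Σ₀ = Γ ⊗ B), which also cross-checks the mean against its equivalent "push-through" form:

```python
import numpy as np

rng = np.random.default_rng(2)
M, N, beta = 30, 40, 100.0          # illustrative sizes; beta = noise precision
Phi = rng.standard_normal((M, N))
Sigma0 = np.eye(N)                  # stand-in for Gamma ⊗ B
theta_true = rng.standard_normal(N)
y = Phi @ theta_true + rng.standard_normal(M) / np.sqrt(beta)

# (7): Sigma_theta = (Sigma0^{-1} + beta * Phi^T Phi)^{-1}
Sigma_theta = np.linalg.inv(np.linalg.inv(Sigma0) + beta * Phi.T @ Phi)
# (6): mu_theta = beta * Sigma_theta @ Phi^T y
mu_theta = beta * Sigma_theta @ (Phi.T @ y)

# Equivalent form via C = beta^{-1} I + Phi Sigma0 Phi^T (matrix inversion lemma)
C = np.eye(M) / beta + Phi @ Sigma0 @ Phi.T
mu_alt = Sigma0 @ Phi.T @ np.linalg.solve(C, y)
```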

#### 2.1.1 Estimating *γ*

Maximizing (9) is equivalent to minimizing $\sum_{i=1}^{L}\left[\log\left(\det\left(\mathbf{C}_i\right)\right) + \boldsymbol{y}_i^T \mathbf{C}_i^{-1} \boldsymbol{y}_i\right]$. For this purpose, we replace the term log(det(**C**_{i})) with an upper bound, apply a surrogate function for the term $\boldsymbol{y}_i^T \mathbf{C}_i^{-1} \boldsymbol{y}_i$, and then minimize their sum.

The upper bound of log(det(**C**_{i})) is its supporting hyperplane. Let **γ**^{∗} be a given point in the **γ**-space; we then have

$$ \log\left(\det\left(\mathbf{C}_i\right)\right) \leq \log\left(\det\left(\boldsymbol{\Sigma}_{\boldsymbol{y}_i}^{\ast}\right)\right) + \sum_{j=1}^{K} \text{tr}\left[\left(\boldsymbol{\Sigma}_{\boldsymbol{y}_i}^{\ast}\right)^{-1} \boldsymbol{\Phi}_i^j \mathbf{B} \left(\boldsymbol{\Phi}_i^j\right)^T\right] \left(\gamma_j - \gamma_j^{\ast}\right), \qquad (10) $$

where $\boldsymbol{\Sigma}_{\boldsymbol{y}_i}^{\ast} = \beta^{-1}\mathbf{I} + \boldsymbol{\Phi}_i \boldsymbol{\Sigma}_0^{\ast} \boldsymbol{\Phi}_i^T$ and $\boldsymbol{\Sigma}_0^{\ast} \overset{\Delta}{=} \boldsymbol{\Sigma}_0 \big|_{\boldsymbol{\gamma} = \boldsymbol{\gamma}^{\ast}}$. $\boldsymbol{\Phi}_i^j \in \mathcal{R}^{M_i \times S}$ is the submatrix of $\boldsymbol{\Phi}_i = \left[\boldsymbol{\Phi}_i^1, \boldsymbol{\Phi}_i^2, \dots, \boldsymbol{\Phi}_i^K\right]$ that corresponds to the *j*th block of *θ*_{i}. We next introduce the surrogate function for the term $\boldsymbol{y}_i^T \mathbf{C}_i^{-1} \boldsymbol{y}_i$. The purpose is to facilitate evaluating the partial derivatives of $\boldsymbol{y}_i^T \mathbf{C}_i^{-1} \boldsymbol{y}_i$ with respect to the shared parameters *β* and **γ**, which originally appear inside the inverse of the matrix **C**_{i} (see the definition of **C**_{i} under (9)). The surrogate function is

$$ \boldsymbol{y}_i^T \mathbf{C}_i^{-1} \boldsymbol{y}_i = \min_{\boldsymbol{\theta}_i} \left[ \beta \left\| \boldsymbol{y}_i - \boldsymbol{\Phi}_i \boldsymbol{\theta}_i \right\|_2^2 + \boldsymbol{\theta}_i^T \boldsymbol{\Sigma}_0^{-1} \boldsymbol{\theta}_i \right]. \qquad (11) $$

It can be easily verified that the cost function minimized on the right-hand side of (11) is, up to scaling and additive constants, the negative logarithm of the numerator of (5), $p\left(\boldsymbol{y}_i \mid \boldsymbol{\theta}_i, \beta\right) p\left(\boldsymbol{\theta}_i \mid \boldsymbol{\gamma}, \mathbf{B}\right)$, and the solution to the minimization problem is $\boldsymbol{\mu}_{\boldsymbol{\theta}_i}$ defined in (6).

Putting (10) and (11) into (9), we obtain an upper bound on the cost function $\sum_{i=1}^{L}\left[\log\left(\det\left(\mathbf{C}_i\right)\right) + \boldsymbol{y}_i^T \mathbf{C}_i^{-1} \boldsymbol{y}_i\right]$. Let **Θ** = {*θ*_{1},…,*θ*_{L}} be the set of original signals from the *L* CS tasks. Collecting the terms that depend on **γ** and **Θ**, the upper bound can be expressed as

$$ G\left(\boldsymbol{\gamma}, \boldsymbol{\Theta}\right) = \sum_{i=1}^{L} \left\{ \sum_{j=1}^{K} \gamma_j\, \text{tr}\left[\left(\boldsymbol{\Sigma}_{\boldsymbol{y}_i}^{\ast}\right)^{-1} \boldsymbol{\Phi}_i^j \mathbf{B} \left(\boldsymbol{\Phi}_i^j\right)^T\right] + \beta \left\| \boldsymbol{y}_i - \boldsymbol{\Phi}_i \boldsymbol{\theta}_i \right\|_2^2 + \boldsymbol{\theta}_i^T \boldsymbol{\Sigma}_0^{-1} \boldsymbol{\theta}_i \right\} + \text{const}. \qquad (13) $$

Taking the partial derivative of *G*(**γ**, **Θ**) with respect to *γ*_{j} and setting the result to zero yields the desired estimate of *γ*_{j}, the *j*th element of **γ**, which is given by

$$ \gamma_j = \sqrt{ \frac{ \sum_{i=1}^{L} \boldsymbol{\theta}_{i,j}^T \mathbf{B}^{-1} \boldsymbol{\theta}_{i,j} }{ \sum_{i=1}^{L} \text{tr}\left[\left(\boldsymbol{\Sigma}_{\boldsymbol{y}_i}^{\ast}\right)^{-1} \boldsymbol{\Phi}_i^j \mathbf{B} \left(\boldsymbol{\Phi}_i^j\right)^T\right] } }. \qquad (14) $$

#### 2.1.2 Estimating **B** and *β*

The EM technique is used to find **Ω** = {**B**, *β*}. We proceed by treating the *θ*_{i} as hidden variables and maximizing the expected complete-data log-likelihood

$$ W\left(\beta, \mathbf{B}\right) = E_{\boldsymbol{\Theta} \mid \mathbf{Y}, \boldsymbol{\Omega}^{(\text{old})}} \left[ \log p\left(\mathbf{Y}, \boldsymbol{\Theta} \mid \beta, \boldsymbol{\gamma}, \mathbf{B}\right) \right], \qquad (15) $$

where **Ω**^{(old)} denotes the parameters evaluated in the previous iteration. Here, we only consider the terms relating to **B** in *W*(*β*, **B**) and use the notation

$$ W_1\left(\mathbf{B}\right) = -\frac{\mathit{LK}}{2} \log\left(\det\left(\mathbf{B}\right)\right) - \frac{1}{2} \sum_{i=1}^{L} \sum_{j=1}^{K} \frac{1}{\gamma_j}\, \text{tr}\left[ \mathbf{B}^{-1} \left( \boldsymbol{\Sigma}_{\boldsymbol{\theta}_i}^{j} + \boldsymbol{\mu}_{\boldsymbol{\theta}_i}^{j} \left(\boldsymbol{\mu}_{\boldsymbol{\theta}_i}^{j}\right)^T \right) \right]. \qquad (17) $$

The partial derivative of (17) with respect to **B**, which is symmetric and positive definite because it characterizes the covariance matrix of every signal block (see the definition given above (5)), is

$$ \frac{\partial W_1\left(\mathbf{B}\right)}{\partial \mathbf{B}} = -\frac{\mathit{LK}}{2} \mathbf{B}^{-1} + \frac{1}{2} \sum_{i=1}^{L} \sum_{j=1}^{K} \frac{1}{\gamma_j} \mathbf{B}^{-1} \left( \boldsymbol{\Sigma}_{\boldsymbol{\theta}_i}^{j} + \boldsymbol{\mu}_{\boldsymbol{\theta}_i}^{j} \left(\boldsymbol{\mu}_{\boldsymbol{\theta}_i}^{j}\right)^T \right) \mathbf{B}^{-1}, \qquad (18) $$

where $\boldsymbol{\mu}_{\boldsymbol{\theta}_i}^{j} \overset{\Delta}{=} \boldsymbol{\mu}_{\boldsymbol{\theta}_i}\left(\left(j-1\right)S+1 : jS\right)$ and $\boldsymbol{\Sigma}_{\boldsymbol{\theta}_i}^{j} \overset{\Delta}{=} \boldsymbol{\Sigma}_{\boldsymbol{\theta}_i}\left(\left(j-1\right)S+1 : jS,\, \left(j-1\right)S+1 : jS\right)$. Setting (18) to zero yields

$$ \mathbf{B} = \frac{1}{\mathit{LK}} \sum_{i=1}^{L} \sum_{j=1}^{K} \frac{1}{\gamma_j} \left( \boldsymbol{\Sigma}_{\boldsymbol{\theta}_i}^{j} + \boldsymbol{\mu}_{\boldsymbol{\theta}_i}^{j} \left(\boldsymbol{\mu}_{\boldsymbol{\theta}_i}^{j}\right)^T \right). \qquad (19) $$

Similar to [14], we improve the performance of the algorithm by constraining the matrix **B**. Specifically, we find a positive definite and symmetric matrix $\widehat{\mathbf{B}}$ to approximate **B**. Mathematically, we set $\widehat{\mathbf{B}}$ to be the Toeplitz matrix

$$ \widehat{\mathbf{B}} = \text{Toeplitz}\left(\left[1, r, \dots, r^{S-1}\right]\right), \qquad (20) $$

where $r = \frac{m_1}{m_0}$, and *m*_{0} and *m*_{1} are obtained by averaging the elements along the main diagonal and the main sub-diagonal of **B** in (19), respectively. As a result, the approximated version of **B** is fully characterized by *r*. This method can also be applied, with some modifications, to the case where the signal blocks have different sizes. In particular, we first compute $\bar{r} = \frac{\bar{m}_1}{\bar{m}_0}$, where $\bar{m}_0 = \sum_{j=1}^{K} m_0^j$ and $\bar{m}_1 = \sum_{j=1}^{K} m_1^j$. Here, $m_0^j$ and $m_1^j$ are obtained by averaging the elements along the main diagonal and the main sub-diagonal of **B**_{j}, where it can be shown that $\mathbf{B}_j = \frac{1}{L\gamma_j} \sum_{i=1}^{L} \left( \boldsymbol{\Sigma}_{\boldsymbol{\theta}_i}^{j} + \boldsymbol{\mu}_{\boldsymbol{\theta}_i}^{j} \left(\boldsymbol{\mu}_{\boldsymbol{\theta}_i}^{j}\right)^T \right)$. The **B**_{j} are then approximated with $\widehat{\mathbf{B}}_j = \text{Toeplitz}\left(\left[1, \bar{r}, \dots, \bar{r}^{S_j - 1}\right]\right)$ so that, again, the $\widehat{\mathbf{B}}_j$ depend only on the value of $\bar{r}$. Here, *S*_{j} is the size of block *j*.
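The Toeplitz approximation step can be sketched as follows; the matrix **B** below is a hypothetical learned covariance used only for illustration:

```python
import numpy as np

def toeplitz_approx(B):
    """Approximate a learned covariance B by Toeplitz([1, r, ..., r^{S-1}]),
    with r = m1/m0 from the averaged main diagonal and first sub-diagonal."""
    S = B.shape[0]
    m0 = np.mean(np.diag(B))         # average of the main diagonal
    m1 = np.mean(np.diag(B, k=-1))   # average of the main sub-diagonal
    r = m1 / m0
    lags = np.abs(np.subtract.outer(np.arange(S), np.arange(S)))
    return r ** lags                 # Toeplitz([1, r, r^2, ..., r^{S-1}])

B = np.array([[2.0, 1.0, 0.4],       # hypothetical estimate from (19)
              [1.0, 2.2, 1.1],
              [0.4, 1.1, 1.8]])
B_hat = toeplitz_approx(B)           # fully characterized by the single scalar r
```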

We next evaluate *β*. Considering the terms relating to *β* in *W*(*β*, **B**), we use the notation

$$ W_2\left(\beta\right) = \frac{\sum_{i=1}^{L} M_i}{2} \log \beta - \frac{\beta}{2} \sum_{i=1}^{L} E\left[ \left\| \boldsymbol{y}_i - \boldsymbol{\Phi}_i \boldsymbol{\theta}_i \right\|_2^2 \right]. \qquad (21) $$

Differentiating *W*_{2}(*β*) with respect to *β* and setting the result to zero, we obtain, after some manipulations,

$$ \beta = \frac{ \sum_{i=1}^{L} M_i }{ \sum_{i=1}^{L} \left[ \left\| \boldsymbol{y}_i - \boldsymbol{\Phi}_i \boldsymbol{\mu}_{\boldsymbol{\theta}_i} \right\|_2^2 + \text{tr}\left( \boldsymbol{\Phi}_i \boldsymbol{\Sigma}_{\boldsymbol{\theta}_i} \boldsymbol{\Phi}_i^T \right) \right] }. \qquad (22) $$

The iterative process for estimating *β*, **γ**, and **B** starts with initial guesses of $\boldsymbol{\mu}_{\boldsymbol{\theta}_i}$, $\boldsymbol{\Sigma}_{\boldsymbol{\theta}_i}$, *γ*_{j}, **B**, and *β*. We then evaluate (6), (7), and (14) sequentially to find *γ*_{j}, and proceed to find the updated estimates of **B** and *β* using (19) and (22). With the obtained estimates of the shared parameters, the original signals *θ*_{i} of the *L* CS tasks can be reconstructed by following the MCS technique [15]. This completes the development of MBSBL.
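The iteration just described can be sketched end to end. The code below is schematic and illustrative, not the authors' implementation: it follows the posterior updates (cf. (6)-(7)), the **B** update (cf. (19)), and the β update (cf. (22)), but substitutes the simpler EM update for **γ** in place of the paper's bound-optimization rule and omits the Toeplitz constraint on **B**:

```python
import numpy as np

def mbsbl_sketch(ys, Phis, S, n_iter=30):
    """Schematic multi-task BSBL loop with EM-style updates (illustrative)."""
    L, N = len(ys), Phis[0].shape[1]
    K = N // S
    gamma, B, beta = np.ones(K), np.eye(S), 1.0
    mus, Sigmas = [np.zeros(N)] * L, [np.eye(N)] * L
    for _ in range(n_iter):
        Sigma0 = np.kron(np.diag(gamma), B) + 1e-10 * np.eye(N)
        for i in range(L):
            # posterior moments of theta_i, cf. (6)-(7)
            Sigmas[i] = np.linalg.inv(np.linalg.inv(Sigma0)
                                      + beta * Phis[i].T @ Phis[i])
            mus[i] = beta * Sigmas[i] @ (Phis[i].T @ ys[i])
        # per-block second moments E[theta_{i,j} theta_{i,j}^T]
        mom = [[Sigmas[i][j*S:(j+1)*S, j*S:(j+1)*S]
                + np.outer(mus[i][j*S:(j+1)*S], mus[i][j*S:(j+1)*S])
                for j in range(K)] for i in range(L)]
        Binv = np.linalg.inv(B)
        # EM update of gamma_j (substituted for the bound-optimization rule)
        gamma = np.array([sum(np.trace(Binv @ mom[i][j]) for i in range(L))
                          / (L * S) for j in range(K)])
        # shared intra-block matrix, cf. (19); jitter keeps B invertible
        B = sum(mom[i][j] / max(gamma[j], 1e-10)
                for i in range(L) for j in range(K)) / (L * K) + 1e-8 * np.eye(S)
        # noise precision from the expected residual power, cf. (22)
        res = sum(np.sum((ys[i] - Phis[i] @ mus[i]) ** 2)
                  + np.trace(Phis[i] @ Sigmas[i] @ Phis[i].T) for i in range(L))
        beta = sum(P.shape[0] for P in Phis) / res
    return mus

# Demo: two tasks observing the same signal with one active block (index 1)
rng = np.random.default_rng(3)
N, S, L = 16, 4, 2
theta = np.zeros(N)
theta[4:8] = 2.0 + rng.standard_normal(4)
Phis = [rng.standard_normal((12, N)) for _ in range(L)]
ys = [P @ theta + 0.01 * rng.standard_normal(12) for P in Phis]
mus = mbsbl_sketch(ys, Phis, S)
```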

### 2.2 EMBSBL

We shall now present the new EMBSBL algorithm, which is based on the MBSBL technique developed above. Similar to [14], we first assume that all the blocks are of equal size *h* and that the non-zero blocks are arbitrarily located. We will show via simulations that EMBSBL is not sensitive to the choice of *h*. There are $p \overset{\Delta}{=} N - h + 1$ possible blocks in every signal *θ*_{i}. The *j*th block starts at the *j*th element of *θ*_{i} and continues until the (*j* + *h* − 1)th element. All the non-zero elements of *θ*_{i} lie within a subset of these blocks. From the analysis above, we have the decomposition of *θ*_{i}

$$ \boldsymbol{\theta}_i = \sum_{j=1}^{p} \mathbf{E}_j \boldsymbol{z}_{i,j}, \qquad (23) $$

where $\boldsymbol{z}_{i,j} \in \mathcal{R}^{h \times 1}$ with $E\left(\boldsymbol{z}_{i,j} \boldsymbol{z}_{i,k}^T\right) = \delta_{j,k} \gamma_j \mathbf{B}$ (*δ*_{j,k} = 1 if *j* = *k*; otherwise, *δ*_{j,k} = 0); $\boldsymbol{z}_i = \left[\boldsymbol{z}_{i,1}^T, \dots, \boldsymbol{z}_{i,p}^T\right]^T \sim \mathcal{N}\left(\mathbf{0}, \tilde{\boldsymbol{\Sigma}}_0\right)$ with $\tilde{\boldsymbol{\Sigma}}_0 = \text{diag}\left(\gamma_1 \mathbf{B}, \dots, \gamma_p \mathbf{B}\right) \in \mathcal{R}^{ph \times ph}$; and $\mathbf{E}_j \in \mathcal{R}^{N \times h}$ is a zero matrix except that the submatrix composed of its *j*th to (*j* + *h* − 1)th rows is the identity matrix **I**. **E**_{j} is the same for every *θ*_{i}. The CS model (1) can then be re-expressed as

$$ \boldsymbol{y}_i = \mathbf{A}_i \boldsymbol{z}_i + \boldsymbol{n}_i, \qquad (24) $$

where

$$ \mathbf{A}_i = \left[ \boldsymbol{\Phi}_i \mathbf{E}_1, \boldsymbol{\Phi}_i \mathbf{E}_2, \dots, \boldsymbol{\Phi}_i \mathbf{E}_p \right]. \qquad (25) $$

In the new CS model (24), the block-sparsity of the signals and the intra-block correlation are explicit. *z*_{i} can be recovered using MBSBL and, by utilizing (23), the original signals *θ*_{i} of the CS tasks can then be found. This completes the development of EMBSBL for recovering block-sparse signals under the MCS framework.
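The expansion underlying EMBSBL can be formed explicitly. A small sketch (illustrative sizes) that builds the matrices **E**_j, assembles the expanded dictionary, and verifies the decomposition (23):

```python
import numpy as np

N, h = 10, 3                 # signal length and assumed block size (illustrative)
p = N - h + 1                # number of candidate (overlapping) blocks

def E(j, N=N, h=h):
    """E_j: N x h zero matrix whose rows j..j+h-1 hold the identity."""
    Ej = np.zeros((N, h))
    Ej[j:j + h, :] = np.eye(h)
    return Ej

rng = np.random.default_rng(4)
Phi = rng.standard_normal((6, N))
A = np.hstack([Phi @ E(j) for j in range(p)])   # A = [Phi E_1, ..., Phi E_p]

# theta = sum_j E_j z_j, so y = Phi theta equals A z with z stacked
z = [rng.standard_normal(h) for _ in range(p)]
theta = sum(E(j) @ z[j] for j in range(p))
assert np.allclose(A @ np.concatenate(z), Phi @ theta)
```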

## 3 SEMBSBL algorithm

EMBSBL cannot be directly applied to recover a single block-sparse signal in a single CS task scenario, since it is by nature an MCS technique. We shall augment it with the idea of synthesized MCS to address this difficulty. CS task synthesis via circular-shifting operations is developed below. This section ends with the improved EMBSBL algorithm, namely synthesized EMBSBL (SEMBSBL), which utilizes the MDL principle to determine the proper number of synthesized CS tasks to achieve satisfactory signal recovery performance.

### 3.1 Synthesis of multiple CS tasks

Figure 1 illustrates synthesizing multiple CS tasks from a single one. The absence of measurement noise is assumed here for clarity. The original CS task is *y*_{1} = **Φ**_{1}*θ*_{1}, where *θ*_{1} is the block-sparse signal to be recovered; it has two non-zero clusters (shaded). The columns of the measurement matrix **Φ**_{1} corresponding to the non-zero elements in *θ*_{1} are also shaded for illustration. Figure 1 indicates that a new CS task can be synthesized from the original one by circularly shifting the columns of **Φ**_{1} to the right by one column. In this way, the new CS task has a new measurement matrix **Φ**_{2} and a new signal *θ*_{2} whose elements are generated by circularly shifting *θ*_{1} downward by one sample. The new CS task has the same compressive measurements as the original one. We assume that this observation also holds when measurement noise is present. Comparing *θ*_{1} with *θ*_{2} reveals that the locations of their non-zero elements overlap. This implies that *θ*_{1} and *θ*_{2} are correlated, which forms the basis for utilizing EMBSBL in block-sparse signal recovery. Additional CS tasks can be synthesized by following a similar approach but with different directions and amounts of circular shifting.
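The column circular-shift and its effect on the signal can be verified numerically (a noiseless sketch with illustrative sizes, mirroring Figure 1):

```python
import numpy as np

rng = np.random.default_rng(5)
M, N = 8, 12
Phi1 = rng.standard_normal((M, N))
theta1 = np.zeros(N)
theta1[2:5] = rng.standard_normal(3)   # one non-zero cluster

# Shifting the columns of Phi right by one is equivalent to shifting the
# signal elements down by one: the compressive measurements are unchanged.
Phi2 = np.roll(Phi1, 1, axis=1)
theta2 = np.roll(theta1, 1)
assert np.allclose(Phi1 @ theta1, Phi2 @ theta2)
```

The supports of `theta1` (indices 2-4) and `theta2` (indices 3-5) overlap, which is exactly the correlation the synthesized tasks rely on.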

It can be expected that, due to the block-sparsity of the signal to be recovered, the signals of some synthesized CS tasks may not be well correlated with the others; in other words, they have only a few overlapping non-zero elements. Utilizing these CS tasks in recovering the original signal via EMBSBL would lead to poor reconstruction performance. To address this problem, we propose to utilize the MDL principle to determine the number of synthesized CS tasks for block-sparse signal reconstruction, as detailed in the following subsection.

### 3.2 SEMBSBL

This section presents the proposed SEMBSBL algorithm. We shall first provide a method for evaluating the signal recovery quality of EMBSBL for a given set of synthesized CS tasks. This is essential for selecting the optimal set of synthesized CS tasks for block-sparse signal recovery. For this purpose, we apply the MDL principle. Basically, it states that among a set of competing statistical models, the best model is the one having the minimum code length for the given data [16, 17]. This is mathematically equivalent to solving $\widehat{Q} = \arg\min_{Q \in \mathfrak{M}} \mathit{CL}\left(\boldsymbol{y}, Q\right)$, where $\mathfrak{M}$ denotes the set of possible models and *CL*(**y**, *Q*) is the code length function. We set *CL*(**y**, *Q*) to be the Shannon code length [18], i.e., *CL*(**y**, *Q*) = −log_{2} *p*(**y**, *Q*), where *p*(**y**, *Q*) is the probability density function of **y** under the model *Q*.

For the problem of applying EMBSBL to recover the block-sparse signal in a single CS task scenario (without loss of generality, we assume the task is *y*_{1} = **Φ**_{1}*θ*_{1} + *n*_{1}), we denote the estimates of the shared parameters *β*, **γ**, **B** as $\hat{\beta}, \hat{\boldsymbol{\gamma}}, \hat{\mathbf{B}}$. They are output by the EMBSBL algorithm for a given set of synthesized CS tasks. Using (9) and setting *L* = 1, the description length for *y*_{1} can then be expressed as

$$ \mathit{CL}\left(\boldsymbol{y}_1\right) = \mathit{CL}\left(\boldsymbol{y}_1 \mid \hat{\beta}, \hat{\boldsymbol{\gamma}}, \hat{\mathbf{B}}\right) + \mathit{CL}\left(\hat{\beta}, \hat{\boldsymbol{\gamma}}, \hat{\mathbf{B}}\right), \qquad (27) $$

where $\mathit{CL}\left(\boldsymbol{y}_1 \mid \hat{\beta}, \hat{\boldsymbol{\gamma}}, \hat{\mathbf{B}}\right) = -\log_2 p\left(\boldsymbol{y}_1 \mid \hat{\beta}, \hat{\boldsymbol{\gamma}}, \hat{\mathbf{B}}\right)$ measures the goodness of fit between the data and the current model, $\mathit{CL}\left(\hat{\beta}, \hat{\boldsymbol{\gamma}}, \hat{\mathbf{B}}\right) = -\log_2 p\left(\hat{\beta}\right) - \log_2 p\left(\hat{\boldsymbol{\gamma}}\right) - \log_2 p\left(\hat{\mathbf{B}}\right)$ represents the model complexity, and $p\left(\hat{\beta}\right)$, $p\left(\hat{\boldsymbol{\gamma}}\right)$, and $p\left(\hat{\mathbf{B}}\right)$ denote the prior distributions of $\hat{\beta}, \hat{\boldsymbol{\gamma}}, \hat{\mathbf{B}}$. Because we do not impose any specific distributions on $\hat{\beta}, \hat{\boldsymbol{\gamma}}, \hat{\mathbf{B}}$, their prior probability distributions are set to be uniform; in other words, $-\log_2 p\left(\hat{\beta}\right) - \log_2 p\left(\hat{\boldsymbol{\gamma}}\right) - \log_2 p\left(\hat{\mathbf{B}}\right)$ is a constant. Here, $\mathbf{C}_1 = \hat{\beta}^{-1}\mathbf{I} + \mathbf{A}_1 \hat{\boldsymbol{\Sigma}}_0 \mathbf{A}_1^T$ with $\hat{\boldsymbol{\Sigma}}_0 = \text{diag}\left(\hat{\gamma}_1 \hat{\mathbf{B}}, \dots, \hat{\gamma}_K \hat{\mathbf{B}}\right)$.
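The goodness-of-fit term of the description length is the negative base-2 logarithm of a zero-mean Gaussian density. A minimal sketch, with an illustrative identity covariance standing in for **C**₁:

```python
import numpy as np

def shannon_code_length(y, C):
    """CL(y) = -log2 N(y | 0, C): Shannon code length of y under a zero-mean
    Gaussian with covariance C (the goodness-of-fit term in the MDL criterion)."""
    M = y.size
    sign, logdet = np.linalg.slogdet(C)
    quad = y @ np.linalg.solve(C, y)
    log_p = -0.5 * (M * np.log(2 * np.pi) + logdet + quad)  # natural log
    return -log_p / np.log(2)                               # convert to bits

rng = np.random.default_rng(6)
M = 20
C = np.eye(M)                    # stand-in for beta^{-1} I + A Sigma0 A^T
y = rng.standard_normal(M)
cl = shannon_code_length(y, C)   # code length in bits
```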

We are now ready to present the proposed SEMBSBL algorithm. It is an iterative method that improves the signal recovery quality gradually. In each iteration, a new CS task is synthesized using the circular-shifting operation illustrated in Figure 1. The newly produced task is used, together with the previously synthesized CS tasks and the original CS task, in EMBSBL to jointly reconstruct the block-sparse signal. The process continues until the number of synthesized CS tasks reaches a pre-specified value or until including the newly synthesized CS task no longer improves the signal reconstruction quality (or equivalently, no longer reduces the code length for describing the data, given in (27)).

The algorithm is summarized in Algorithm 1. *l*_{max} is the user-specified maximum number of synthesized CS tasks. EMBSBL^{l}(**Y**, **A**) represents the application of EMBSBL for signal reconstruction in the *l*th iteration using *l* CS tasks, where **Y** and **A** collect the compressive measurements and the associated measurement matrices of the *l* CS tasks. The output of EMBSBL^{l}(**Y**, **A**) is $\hat{\beta}^{l}, \hat{\boldsymbol{\gamma}}^{l}, \hat{\mathbf{B}}^{l}, \hat{\boldsymbol{\theta}}_1^{l}$, which are the estimates of the shared parameters *β*, **γ**, **B** and the original signal *θ*_{1}. The operators Left(**A**_{1}, *l*) and Right(**A**_{1}, *l*) denote circularly shifting the columns of **A**_{1} to the left and to the right by *l* columns, respectively.

**Algorithm 1** SEMBSBL
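The algorithm body appears as a figure in the original; its control flow can be sketched as follows. This is schematic: `embsbl` and `code_length` are hypothetical stand-ins for the EMBSBL solver and the MDL criterion (27), and the alternating left/right shift scheduling is an assumption about how the Left and Right operators are interleaved:

```python
import numpy as np

def sembsbl(y1, A1, embsbl, code_length, l_max=6):
    """Schematic SEMBSBL loop: grow the set of synthesized CS tasks by
    circularly shifting the columns of A1, keep the estimate whose MDL
    code length is smallest, and stop when adding a task stops helping."""
    Ys, As = [y1], [A1]
    best_theta, best_cl = None, np.inf
    for l in range(1, l_max + 1):
        beta, gamma, B, theta1 = embsbl(Ys, As)   # joint recovery on l tasks
        cl = code_length(y1, beta, gamma, B)      # description length, cf. (27)
        if cl >= best_cl:                         # no improvement: stop early
            break
        best_theta, best_cl = theta1, cl
        direction = 1 if l % 2 else -1            # assumed right/left alternation
        As.append(np.roll(A1, direction * ((l + 1) // 2), axis=1))
        Ys.append(y1)                             # same measurements, new task
    return best_theta

# Toy stand-ins exercising the control flow only (not real solvers)
calls = []
def embsbl_stub(Ys, As):
    calls.append(len(Ys))
    return None, None, None, len(Ys)        # "theta" = number of tasks used
def cl_stub(y, beta, gamma, B):
    return [5.0, 3.0, 4.0][len(calls) - 1]  # improves once, then worsens
theta = sembsbl(np.zeros(4), np.eye(4), embsbl_stub, cl_stub, l_max=3)
```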

## 4 Simulations

We shall provide simulation results to demonstrate the performance of the EMBSBL algorithm proposed in Section 2 and the SEMBSBL algorithm developed in Section 3. The signal reconstruction error is quantified as $\|\boldsymbol{\theta}_i - \widehat{\boldsymbol{\theta}}_i\|_2 / \|\boldsymbol{\theta}_i\|_2$, where *θ*_{i} and $\widehat{\boldsymbol{\theta}}_i$ are the true and the estimated signals. The elements of the measurement matrix **Φ**_{i} are initially drawn from the standard normal distribution $\mathcal{N}(0, 1)$, and each row of **Φ**_{i} is then normalized to have unit norm.
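The error metric and the measurement matrix construction above can be written as (a small sketch):

```python
import numpy as np

def recon_error(theta, theta_hat):
    """Normalized reconstruction error ||theta - theta_hat||_2 / ||theta||_2."""
    return np.linalg.norm(theta - theta_hat) / np.linalg.norm(theta)

def measurement_matrix(M, N, rng):
    """Standard normal matrix with each row normalized to unit l2 norm."""
    Phi = rng.standard_normal((M, N))
    return Phi / np.linalg.norm(Phi, axis=1, keepdims=True)

rng = np.random.default_rng(7)
Phi = measurement_matrix(5, 20, rng)
```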

In the first experiment, we simulate a two CS task scenario (*L* = 2) where the original signals both have a length of *N* = 500 and each contains 50 spikes with different amplitudes at random locations. The two signals also have six non-zero blocks with random sizes and they are at non-overlapping random locations. We consider two cases where 80% and 100% of the spikes of the two signals are at the same positions.

The signal-to-noise ratio (SNR) in log scale is defined as $\text{SNR} \overset{\Delta}{=} 20\log_{10}\left(\sqrt{\beta}\, \|\boldsymbol{\Phi}_i \boldsymbol{\theta}_i\|_2\right)$, and zero-mean Gaussian noise is added to every CS measurement vector. We set SNR = 15 dB and use MCS, BSBL-BO, and EBSBL-BO as benchmark techniques for comparison. When implementing BSBL-BO, EBSBL-BO, and EMBSBL, we set the block size parameter *h* to *h* = 4 and *h* = 8 to illustrate the impact of different choices of *h* on their performance. The signal reconstruction error results shown are obtained by averaging over 50 ensemble runs.

Figure 2 compares MCS, BSBL-BO, EBSBL-BO, and EMBSBL in terms of their signal reconstruction errors as a function of the number of compressive measurements. In this simulation, the intra-block correlation coefficient for each block is uniformly distributed between 0 and 0.1. The results indicate that for BSBL-BO and EBSBL-BO, the performance curves when 80% and 100% of the spikes of the two original signals are at the same locations are very similar. Therefore, to improve the clarity of the figures, we provide in Figure 2 and in the following Figures 3 and 4 only the results for the case where 80% of the spikes share the same locations.

We can see from Figure 2a that the proposed EMBSBL has the smallest signal recovery error, and its performance improves as more spikes of the original signals share the same locations. More importantly, EMBSBL is less sensitive to the choice of the block size parameter *h* than BSBL-BO and EBSBL-BO. This is because the new EMBSBL technique is an MCS algorithm that recovers the original signals of multiple CS tasks jointly. Compared with BSBL-BO and EBSBL-BO, which are single-task CS algorithms, EMBSBL exploits the inter-correlation among the original signals, besides the intra-block correlation, to improve performance. The use of this additional information improves the robustness of EMBSBL to deviations of the presumed block size from the true value. Finally, as shown in Figure 2b, EMBSBL has complexity comparable to that of EBSBL-BO, despite being an MCS technique.

We repeat the simulation that produced Figure 2, but this time we allow the intra-block correlation coefficient for each block to be uniformly distributed between 0.4 and 0.5. The obtained results are summarized in Figure 3. It can be seen that the proposed EMBSBL continues to offer the best signal recovery performance. Besides, the increase in the intra-block correlation improves the performance of EBSBL-BO.

In producing Figure 4, we further increase the intra-block correlation coefficient for each block to be uniformly distributed between 0.8 and 0.9. The performance curve of MCS is excluded from the figure for the sake of clarity, because its signal reconstruction error is large in this case. We can see from Figure 4 that, under significant intra-block correlation, the proposed EMBSBL still has the best signal recovery performance. Moreover, comparing Figure 4 with Figures 2 and 3 reveals that higher intra-block correlation leads to improved performance of the proposed EMBSBL method. This is because it explicitly utilizes the intra-block correlation for better signal recovery (see Section 2).

We next study the impact of different levels of signal inter-correlation on the recovery performance of EMBSBL. The simulation setup is the same as that leading to Figure 2, except that the percentage of overlapping non-zeros of the two original signals is set to 40%, 60%, 80%, and 100%. Three sets of simulation results are produced, corresponding to the intra-block correlation coefficient being uniformly distributed within [0, 0.1], [0.4, 0.5], and [0.8, 0.9]. The obtained results are summarized in Figure 5. We find that EMBSBL recovers the original signals with reduced reconstruction error as the inter-correlation among the original signals increases. This observation holds for different intra-block correlation coefficients. The performance improvement is expected, because the EMBSBL algorithm is an MCS technique whose signal recovery performance benefits from increased inter-correlation among the original signals.

To validate the development of SEMBSBL (see Section 3), we consider a single CS task scenario. Again, the original signal has a length of *N* = 500 and contains 50 spikes at random locations. Besides, the signal includes six non-zero blocks with random sizes and random but non-overlapping locations. We set SNR = 15 dB in the simulation and use BSBL-BO and EBSBL-BO as benchmark techniques. For BSBL-BO, EBSBL-BO, and the proposed SEMBSBL, we generate signal recovery error performance curves with the block size parameter set to *h* = 4 and *h* = 8. The results are averaged over 50 runs. For the proposed SEMBSBL, we set the pre-specified maximum number of synthesized CS tasks *l*_{max} to six. We consider three cases, where the intra-block correlation coefficient for each block is uniformly distributed within [0, 0.1], [0.4, 0.5], and [0.8, 0.9]. The obtained simulation results are shown in Figure 6. It can be observed that the proposed SEMBSBL outperforms the benchmark algorithms and is less sensitive to the choice of the parameter *h* than EBSBL-BO. However, the running time of SEMBSBL is high, since it executes the EMBSBL algorithm multiple times before finding an optimal set of synthesized CS tasks for signal recovery.

## 5 Conclusion

In this paper, a novel algorithm for jointly recovering multiple block-sparse signals from their compressive measurements, termed the EMBSBL algorithm, was developed. EMBSBL exploits both the statistical correlation among signals and the signals' intra-block correlation to achieve superior signal recovery performance. Moreover, the new algorithm eliminates the requirement for *a priori* information on the sparsity structure of the original signal. We also developed SEMBSBL, which applies EMBSBL to the single CS task case. It synthesizes new CS tasks from the single CS task via simple circular-shifting operations to make EMBSBL applicable. The MDL principle was adopted to determine the proper set of synthesized CS tasks for reconstructing the block-sparse signal. Computer simulations revealed that the proposed EMBSBL and SEMBSBL outperform existing techniques, providing greatly enhanced block-sparse signal reconstruction performance at the cost of increased computational complexity.

## References

1. DL Donoho: Compressed sensing. *IEEE Trans. Inf. Theory* 2006, 52(4):1289-1306.
2. M Mishali, YC Eldar: Blind multi-band signal reconstruction: compressed sensing for analog signals. *IEEE Trans. Signal Process.* 2009, 57(3):993-1009.
3. F Parvaresh, H Vikalo, S Misra, B Hassibi: Recovering sparse signals using sparse measurement matrices in compressed DNA microarrays. *IEEE J. Sel. Topics Signal Process.* 2008, 2(3):275-285.
4. RG Baraniuk, V Cevher, MF Duarte, C Hegde: Model-based compressive sensing. *IEEE Trans. Inf. Theory* 2010, 56(4):1982-2001.
5. D Needell, JA Tropp: CoSaMP: iterative signal recovery from incomplete and inaccurate samples. *Appl. Comput. Harmon. Anal.* 2009, 26:301-321. 10.1016/j.acha.2008.07.002
6. YC Pati, R Rezaiifar, PS Krishnaprasad: Orthogonal matching pursuit: recursive function approximation with applications to wavelet decomposition. In *Proceedings of the 27th Ann. Asilomar Conf. Signals, Systems, and Computers*. (Pacific Grove, CA, 1–3 Nov 1993)
7. YC Eldar, P Kuppinger, H Bolcskei: Block-sparse signals: uncertainty relations and efficient recovery. *IEEE Trans. Signal Process.* 2010, 58(6):3042-3054.
8. J Zou, Y Fu, S Xie: A block fixed point continuation algorithm for block-sparse reconstruction. *IEEE Signal Process. Lett.* 2012, 19(6):364-367.
9. E Elhamifar, R Vidal: Block-sparse recovery via convex optimization. *IEEE Trans. Signal Process.* 2012, 60(8):4094-4107.
10. L Zelnik-Manor, K Rosenblum, YC Eldar: Dictionary optimization for block-sparse representations. *IEEE Trans. Signal Process.* 2012, 60(5):2386-2395.
11. L Yu, H Sun, JP Barbot, G Zheng: Bayesian compressive sensing for cluster structured sparse signals. *Signal Process.* 2012, 92(1):259-269. 10.1016/j.sigpro.2011.07.015
12. T Peleg, Y Eldar, M Elad: Exploiting statistical dependencies in sparse representations for signal recovery. *IEEE Trans. Signal Process.* 2012, 60(5):2286-2303.
13. Z Zhang, BD Rao: Sparse signal recovery with temporally correlated source vectors using sparse Bayesian learning. *IEEE J. Sel. Topics Signal Process.* 2011, 5(5):912-926.
14. Z Zhang, BD Rao: Extension of SBL algorithms for the recovery of block sparse signals with intra-block correlation. *IEEE Trans. Signal Process.* 2013, 61(8):2009-2015.
15. S Ji, D Dunson, L Carin: Multi-task compressive sensing. *IEEE Trans. Signal Process.* 2009, 57(1):92-106.
16. A Barron, J Rissanen, B Yu: The minimum description length principle in coding and modeling. *IEEE Trans. Inf. Theory* 1998, 44(6):2743-2760. 10.1109/18.720554
17. I Ramirez, G Sapiro: An MDL framework for sparse coding and dictionary learning. *IEEE Trans. Signal Process.* 2012, 60(6):2913-2927.
18. T Cover, J Thomas: *Elements of Information Theory, 2nd ed*. (Wiley, New York, 2006)

## Acknowledgements

The authors wish to thank the associate editor and the anonymous reviewers for their constructive suggestions. The authors thank Zhilin Zhang, Bhaskar D. Rao, Shihao Ji, and David Dunson for sharing codes of their algorithms. This work was supported in part by Hunan Provincial Innovation Foundation for Postgraduates under Grant CX2012B019, Fund of Innovation, Graduate School of National University of Defense Technology under grant B120404, and National Natural Science Foundation of China (no. 61304264).

## Author information

### Corresponding author

## Additional information

### Competing interests

The authors declare that they have no competing interests.

## Rights and permissions

**Open Access** This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

## About this article

### Cite this article

Wang, YG., Yang, L., Liu, Z. *et al.* SBL-based multi-task algorithms for recovering block-sparse signals with unknown partitions.
*EURASIP J. Adv. Signal Process.* **2014**, 14 (2014). https://doi.org/10.1186/1687-6180-2014-14

### Keywords

- Sparse Bayesian learning
- Block-sparse
- Multi-task
- Circular shifting
- Minimum description length