
SBL-based multi-task algorithms for recovering block-sparse signals with unknown partitions

Abstract

We consider in this paper the problem of reconstructing block-sparse signals with unknown block partitions. In the first part of this work, we extend the block-sparse Bayesian learning (BSBL) framework, originally developed for recovering a single block-sparse signal in a single compressive sensing (CS) task, to the case of multiple CS tasks. A new multi-task signal recovery algorithm, called the extended multi-task block-sparse Bayesian learning (EMBSBL) algorithm, is proposed. EMBSBL exploits the statistical correlation among multiple signals, as well as the intra-block correlation within individual signals, to improve performance, and it does not need a priori information on the block partition. In the second part of this paper, we develop the EMBSBL-based synthesized multi-task signal recovery algorithm, SEMBSBL, which makes EMBSBL applicable to the single CS task case. The idea is to synthesize new CS tasks from the single CS task via circular-shifting operations and to utilize the minimum description length (MDL) principle to determine the proper set of synthesized CS tasks for signal reconstruction. SEMBSBL achieves better signal reconstruction performance than other algorithms that recover block-sparse signals individually. Simulations corroborate the theoretical developments.

1 Introduction

Compressive sensing (CS) enables reconstructing a signal that is sparse in a certain domain from measurements obtained at a rate significantly lower than the Nyquist rate [1]. If, in addition to being sparse, the signal representation is also structured in the form of clustered non-zeros, the signal is referred to as block-sparse. In practice, block-sparsity can be found in multi-band signals [2] or in measurements of gene expression levels [3]. It has been shown that exploiting block-sparsity enables robust signal recovery from fewer compressive measurements [4]. We shall consider in this paper the efficient recovery of block-sparse signals.

Several block-sparse signal reconstruction algorithms have been developed in the literature. Based on compressive sampling matching pursuit (CoSaMP) [5], the block compressive sampling matching pursuit (BCoSaMP) was proposed in [4]; it utilizes knowledge of the number of non-zero blocks to achieve signal recovery. On the basis of orthogonal matching pursuit (OMP) [6], the block orthogonal matching pursuit (BOMP) was developed in [7]. Zou et al. proposed a block fixed-point continuation algorithm in [8] for block-sparse signal recovery. Elhamifar and Vidal approached the problem via convex relaxation and convex optimization [9]. The methods developed in [8] and [9] require information on the block size. In [10], dictionary optimization for block-sparse signal representation was studied under the assumption that the maximum block length is known. More recently, CluSS-MCMC [11] and BM-MAP-OMP [12] have been proposed, which require little a priori knowledge of the block partition. On the basis of Bayesian sparse learning for temporally correlated signals [13], two block-sparse signal recovery algorithms were proposed in [14]: block-sparse Bayesian learning (BSBL) and its extended version, the EBSBL algorithm. The BSBL algorithm utilizes the block partition information, while EBSBL handles signals with unknown block partitions. Most techniques reviewed above fall under the category of single-task CS, where the focus is on recovering one block-sparse signal from its compressive measurements.

The contribution of this paper is twofold. We shall first consider block-sparse signal reconstruction in a multi-task scenario, where the signals in different CS tasks are statistically correlated. Multi-task compressive sensing (MCS) was originally developed in [15]. Mathematically, we have $L$ CS tasks

$$\mathbf{y}_i = \boldsymbol{\Phi}_i \boldsymbol{\theta}_i + \mathbf{n}_i, \quad i = 1, \ldots, L,$$
(1)

where $\mathbf{y}_i$ is the compressive measurement vector of the $i$th task and $\boldsymbol{\Phi}_i$ is the $M_i \times N$ measurement matrix ($M_i \ll N$). $\boldsymbol{\theta}_i$ is the original signal of the $i$th task to be recovered, and $\mathbf{n}_i$ represents the measurement noise. In MCS, the correlation among the $\boldsymbol{\theta}_i$ is exploited so that the $\boldsymbol{\theta}_i$ are reconstructed jointly. MCS outperforms single-task CS algorithms in terms of the number of compressive measurements needed for efficient signal recovery. However, existing MCS techniques do not take into account structural information in the signals, such as block-sparsity. We shall therefore propose in this paper an extended version of the EBSBL algorithm from [14]. The original EBSBL method does not assume knowledge of the block partition, and it exhibits better block-sparse signal reconstruction performance than methods such as CluSS-MCMC and BM-MAP-OMP. We shall generalize EBSBL to the MCS scenario and obtain a new technique, referred to as extended multi-task block-sparse Bayesian learning (EMBSBL). Besides using the statistical correlation among the $\boldsymbol{\theta}_i$ as in MCS, EMBSBL also utilizes the intra-block correlation within each signal to improve performance. Simulations show that the block-sparse signal recovery performance of EMBSBL is superior to that of the benchmark algorithms.

When there is only one CS task, the proposed EMBSBL algorithm becomes inapplicable. To address this problem, in the second part of this work, we augment EMBSBL with the concept of synthesized multi-task signal recovery. The new algorithm is referred to as SEMBSBL in the rest of the paper. SEMBSBL first synthesizes multiple CS tasks from the single CS task and then applies EMBSBL to recover the block-sparse signal. The multiple CS tasks are produced simply by circularly shifting the columns of the measurement matrix of the original CS model, which corresponds to circularly shifting the elements of the original signal vector and creates signals with overlapping clusters, i.e., correlated signals. The number of synthesized tasks is determined by the minimum description length (MDL) principle. At the cost of increased computational complexity, the newly proposed SEMBSBL technique outperforms previously developed block-sparse signal recovery methods, offering significantly reduced reconstruction errors and removing the need for detailed information on the sparsity structure. Computer simulations are provided to demonstrate the good performance of the proposed SEMBSBL method.

The remainder of this paper is organized as follows. Section 2 presents the new EMBSBL algorithm for recovering multiple correlated block-sparse signals jointly. Section 3 illustrates the idea of synthesizing multiple CS tasks from a single one and presents the proposed SEMBSBL algorithm. Simulation results are given in Section 4 and Section 5 concludes the paper.

2 EMBSBL algorithm

The development of EMBSBL starts with extending BSBL-BO in [14] to the case of multiple CS tasks. The resulting algorithm, called MBSBL, can jointly recover block-sparse signals when their non-zero blocks all have the same size. We next generalize MBSBL to obtain EMBSBL, which does not need information on the signal sparsity structure.

2.1 MBSBL

Let $S$ be the block size and $K$ the number of blocks in every signal to be recovered. If the measurement noise $\mathbf{n}_i$ in (1) follows an i.i.d. Gaussian distribution with zero mean and covariance matrix $\beta^{-1}\mathbf{I}$, the conditional likelihood function of $\mathbf{y}_i$ is

$$p\left(\mathbf{y}_i \mid \boldsymbol{\theta}_i, \beta\right) = \mathcal{N}\left(\mathbf{y}_i \mid \boldsymbol{\Phi}_i \boldsymbol{\theta}_i, \beta^{-1}\mathbf{I}\right),$$
(2)

where $\mathcal{N}(\mathbf{y}_i \mid \boldsymbol{\Phi}_i\boldsymbol{\theta}_i, \beta^{-1}\mathbf{I})$ represents a Gaussian distribution with mean $\boldsymbol{\Phi}_i\boldsymbol{\theta}_i$ and covariance matrix $\beta^{-1}\mathbf{I}$. In BSBL, each block $\boldsymbol{\theta}_{i,j} \in \mathbb{R}^{S\times 1}$ is assumed to follow a zero-mean multivariate Gaussian distribution

$$p\left(\boldsymbol{\theta}_{i,j} \mid \gamma_j, \mathbf{B}_j\right) = \mathcal{N}\left(\boldsymbol{\theta}_{i,j} \mid \mathbf{0}, \gamma_j \mathbf{B}_j\right), \quad j = 1, \ldots, K.$$
(3)

If we further assume that blocks are mutually uncorrelated, the prior for $\boldsymbol{\theta}_i$ is given by $p(\boldsymbol{\theta}_i \mid \boldsymbol{\gamma}, \mathbf{B}_0) = \mathcal{N}(\boldsymbol{\theta}_i \mid \mathbf{0}, \boldsymbol{\Sigma}_0)$, where $\boldsymbol{\gamma} = \{\gamma_j\}_{j=1,\ldots,K}$, $\mathbf{B}_0 = \{\mathbf{B}_j\}_{j=1,\ldots,K}$, and

$$\boldsymbol{\Sigma}_0 = \begin{bmatrix} \gamma_1 \mathbf{B}_1 & & \\ & \ddots & \\ & & \gamma_K \mathbf{B}_K \end{bmatrix}.$$
(4)

Here, $\mathbf{B}_j$ is a positive definite matrix capturing the correlation structure within the $j$th block, and $\gamma_j$ is a nonnegative parameter controlling the block-sparsity of $\boldsymbol{\theta}_{i,j}$. When $\gamma_j = 0$, the $j$th block becomes zero. During the learning process, most $\gamma_j$ tend to zero due to the mechanism of automatic relevance determination [13].

To avoid overfitting, we set $\mathbf{B}_j = \mathbf{B}$, $j = 1, \ldots, K$. Thus, $\boldsymbol{\Sigma}_0 = \boldsymbol{\Gamma} \otimes \mathbf{B}$, where $\boldsymbol{\Gamma} \triangleq \mathrm{diag}(\gamma_1, \ldots, \gamma_K)$ and $\otimes$ denotes the Kronecker product. The posterior distribution of $\boldsymbol{\theta}_i$ is then given by

$$p\left(\boldsymbol{\theta}_i \mid \mathbf{y}_i, \beta, \boldsymbol{\gamma}, \mathbf{B}\right) = \frac{p\left(\mathbf{y}_i \mid \boldsymbol{\theta}_i, \beta\right) p\left(\boldsymbol{\theta}_i \mid \boldsymbol{\gamma}, \mathbf{B}\right)}{\int p\left(\mathbf{y}_i \mid \boldsymbol{\theta}_i, \beta\right) p\left(\boldsymbol{\theta}_i \mid \boldsymbol{\gamma}, \mathbf{B}\right) d\boldsymbol{\theta}_i} = \mathcal{N}\left(\boldsymbol{\theta}_i \mid \boldsymbol{\mu}_{\theta_i}, \boldsymbol{\Sigma}_{\theta_i}\right),$$
(5)

where

$$\boldsymbol{\mu}_{\theta_i} = \boldsymbol{\Sigma}_0 \boldsymbol{\Phi}_i^T \left(\beta^{-1}\mathbf{I} + \boldsymbol{\Phi}_i \boldsymbol{\Sigma}_0 \boldsymbol{\Phi}_i^T\right)^{-1} \mathbf{y}_i,$$
(6)
$$\boldsymbol{\Sigma}_{\theta_i} = \left(\boldsymbol{\Sigma}_0^{-1} + \beta \boldsymbol{\Phi}_i^T \boldsymbol{\Phi}_i\right)^{-1}.$$
(7)

From (5), we note that $\beta$, $\boldsymbol{\gamma}$, and $\mathbf{B}$ are the sharing parameters of all CS tasks. To estimate them, let $\mathbf{Y} = \{\mathbf{y}_1, \ldots, \mathbf{y}_L\}$ be the measurement set of the $L$ CS tasks. We have

$$p\left(\mathbf{Y} \mid \beta, \boldsymbol{\gamma}, \mathbf{B}\right) = \prod_{i=1}^{L} p\left(\mathbf{y}_i \mid \beta, \boldsymbol{\gamma}, \mathbf{B}\right).$$
(8)

The logarithm of $p(\mathbf{Y} \mid \beta, \boldsymbol{\gamma}, \mathbf{B})$ is

$$\mathcal{L}\left(\beta, \boldsymbol{\gamma}, \mathbf{B}\right) \triangleq \sum_{i=1}^{L} \log p\left(\mathbf{y}_i \mid \beta, \boldsymbol{\gamma}, \mathbf{B}\right) = \sum_{i=1}^{L} \log \int p\left(\mathbf{y}_i \mid \boldsymbol{\theta}_i, \beta\right) p\left(\boldsymbol{\theta}_i \mid \boldsymbol{\gamma}, \mathbf{B}\right) d\boldsymbol{\theta}_i = -\frac{1}{2} \sum_{i=1}^{L} \left[ M_i \log 2\pi + \log \det \mathbf{C}_i + \mathbf{y}_i^T \mathbf{C}_i^{-1} \mathbf{y}_i \right],$$
(9)

where $\mathbf{C}_i = \beta^{-1}\mathbf{I} + \boldsymbol{\Phi}_i \boldsymbol{\Sigma}_0 \boldsymbol{\Phi}_i^T$. Maximizing $\mathcal{L}(\beta, \boldsymbol{\gamma}, \mathbf{B})$ yields the estimates of the sharing parameters $\beta$, $\boldsymbol{\gamma}$, and $\mathbf{B}$. We shall adopt the approach used in [14]: $\boldsymbol{\gamma}$ is identified via the bound-optimization method, and $\beta$ and $\mathbf{B}$ are found via expectation maximization (EM).

2.1.1 Estimating γ

Maximizing (9) is equivalent to minimizing $\sum_{i=1}^{L}\left[\log\det\mathbf{C}_i + \mathbf{y}_i^T\mathbf{C}_i^{-1}\mathbf{y}_i\right]$. For this purpose, we replace the term $\log\det\mathbf{C}_i$ with an upper bound, apply a surrogate function for the term $\mathbf{y}_i^T\mathbf{C}_i^{-1}\mathbf{y}_i$, and then minimize their sum.

The upper bound of $\log\det\mathbf{C}_i$ is obtained from its supporting hyperplane. Let $\boldsymbol{\gamma}^*$ be a given point in the $\boldsymbol{\gamma}$-space; we then have

$$\log \det \mathbf{C}_i = \log \det\left(\beta^{-1}\mathbf{I} + \boldsymbol{\Phi}_i \boldsymbol{\Sigma}_0 \boldsymbol{\Phi}_i^T\right) \le \log \det\left(\beta^{-1}\mathbf{I} + \boldsymbol{\Phi}_i \boldsymbol{\Sigma}_0^* \boldsymbol{\Phi}_i^T\right) + \sum_{j=1}^{K} \mathrm{Tr}\left(\boldsymbol{\Sigma}_{y_i}^{*-1} \boldsymbol{\Phi}_i^{(j)} \mathbf{B} \boldsymbol{\Phi}_i^{(j)T}\right)\left(\gamma_j - \gamma_j^*\right) = \log \det \boldsymbol{\Sigma}_{y_i}^* + \sum_{j=1}^{K} \mathrm{Tr}\left(\boldsymbol{\Sigma}_{y_i}^{*-1} \boldsymbol{\Phi}_i^{(j)} \mathbf{B} \boldsymbol{\Phi}_i^{(j)T}\right)\left(\gamma_j - \gamma_j^*\right),$$
(10)

where $\boldsymbol{\Sigma}_{y_i}^* = \beta^{-1}\mathbf{I} + \boldsymbol{\Phi}_i \boldsymbol{\Sigma}_0^* \boldsymbol{\Phi}_i^T$ and $\boldsymbol{\Sigma}_0^* \triangleq \boldsymbol{\Sigma}_0\big|_{\boldsymbol{\gamma}=\boldsymbol{\gamma}^*}$. $\boldsymbol{\Phi}_i^{(j)} \in \mathbb{R}^{M_i \times S}$ is the submatrix of $\boldsymbol{\Phi}_i = \left[\boldsymbol{\Phi}_i^{(1)}, \boldsymbol{\Phi}_i^{(2)}, \ldots, \boldsymbol{\Phi}_i^{(K)}\right]$ that corresponds to the $j$th block of $\boldsymbol{\theta}_i$. We next introduce the surrogate function for the term $\mathbf{y}_i^T\mathbf{C}_i^{-1}\mathbf{y}_i$. The purpose is to facilitate evaluating the partial derivatives of $\mathbf{y}_i^T\mathbf{C}_i^{-1}\mathbf{y}_i$ with respect to the sharing parameters $\beta$ and $\boldsymbol{\gamma}$, which originally appear inside the matrix inverse $\mathbf{C}_i^{-1}$ (see the definition of $\mathbf{C}_i$ under (9)). The surrogate function is

$$\mathbf{y}_i^T \mathbf{C}_i^{-1} \mathbf{y}_i = \mathbf{y}_i^T \left(\beta^{-1}\mathbf{I} + \boldsymbol{\Phi}_i \boldsymbol{\Sigma}_0 \boldsymbol{\Phi}_i^T\right)^{-1} \mathbf{y}_i = \min_{\boldsymbol{\theta}_i} \left[\beta \left\|\mathbf{y}_i - \boldsymbol{\Phi}_i \boldsymbol{\theta}_i\right\|_2^2 + \boldsymbol{\theta}_i^T \boldsymbol{\Sigma}_0^{-1} \boldsymbol{\theta}_i\right].$$
(11)

It can easily be verified that the cost function minimized on the right-hand side of (11) is, up to scaling and an additive constant, the negative logarithm of the numerator of (5), $p(\mathbf{y}_i \mid \boldsymbol{\theta}_i, \beta)\, p(\boldsymbol{\theta}_i \mid \boldsymbol{\gamma}, \mathbf{B})$, and that the solution to the minimization problem is $\boldsymbol{\mu}_{\theta_i}$ defined in (6).

Putting (10) and (11) into (9), we have

$$\sum_{i=1}^{L}\left[\log\det\mathbf{C}_i + \mathbf{y}_i^T\mathbf{C}_i^{-1}\mathbf{y}_i\right] \le \sum_{i=1}^{L}\left[\log\det\boldsymbol{\Sigma}_{y_i}^* + \sum_{j=1}^{K}\mathrm{Tr}\left(\boldsymbol{\Sigma}_{y_i}^{*-1}\boldsymbol{\Phi}_i^{(j)}\mathbf{B}\boldsymbol{\Phi}_i^{(j)T}\right)\left(\gamma_j-\gamma_j^*\right)\right] + \sum_{i=1}^{L}\min_{\boldsymbol{\theta}_i}\left[\beta\left\|\mathbf{y}_i-\boldsymbol{\Phi}_i\boldsymbol{\theta}_i\right\|_2^2 + \boldsymbol{\theta}_i^T\boldsymbol{\Sigma}_0^{-1}\boldsymbol{\theta}_i\right].$$
(12)

Let $\boldsymbol{\Theta} = \{\boldsymbol{\theta}_1, \ldots, \boldsymbol{\theta}_L\}$ be the set of original signals of the $L$ CS tasks. We can express the upper bound of $\sum_{i=1}^{L}\left[\log\det\mathbf{C}_i + \mathbf{y}_i^T\mathbf{C}_i^{-1}\mathbf{y}_i\right]$ as

$$G\left(\boldsymbol{\gamma}, \boldsymbol{\Theta}\right) \triangleq \sum_{i=1}^{L}\left[\log\det\boldsymbol{\Sigma}_{y_i}^* + \sum_{j=1}^{K}\mathrm{Tr}\left(\boldsymbol{\Sigma}_{y_i}^{*-1}\boldsymbol{\Phi}_i^{(j)}\mathbf{B}\boldsymbol{\Phi}_i^{(j)T}\right)\left(\gamma_j-\gamma_j^*\right)\right] + \sum_{i=1}^{L}\left[\beta\left\|\mathbf{y}_i-\boldsymbol{\Phi}_i\boldsymbol{\theta}_i\right\|_2^2 + \boldsymbol{\theta}_i^T\boldsymbol{\Sigma}_0^{-1}\boldsymbol{\theta}_i\right].$$
(13)

Taking the partial derivative of $G(\boldsymbol{\gamma}, \boldsymbol{\Theta})$ with respect to $\gamma_j$ and setting the result to zero yields the estimate of $\gamma_j$, the $j$th element of $\boldsymbol{\gamma}$:

$$\gamma_j = \sqrt{\frac{\sum_{i=1}^{L} \boldsymbol{\theta}_i^{(j)T} \mathbf{B}^{-1} \boldsymbol{\theta}_i^{(j)}}{\sum_{i=1}^{L} \mathrm{Tr}\left(\boldsymbol{\Phi}_i^{(j)T} \boldsymbol{\Sigma}_{y_i}^{*-1} \boldsymbol{\Phi}_i^{(j)} \mathbf{B}\right)}}, \quad j = 1, \ldots, K,$$
(14)

where $\boldsymbol{\theta}_i^{(j)}$ denotes the $j$th block of $\boldsymbol{\theta}_i$.
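In implementation terms, the update (14) sums the quadratic terms and the traces over all $L$ tasks before forming the ratio. A hedged sketch (names and structure are ours, not the paper's) follows:

```python
import numpy as np

def update_gamma(mus, Sigmas_y, Phis, B, S, K):
    """Bound-optimization update of gamma per (14).

    mus[i] is mu_{theta_i} from (6), evaluated at the current parameters;
    Sigmas_y[i] is Sigma*_{y_i} = beta^{-1} I + Phi_i Sigma_0* Phi_i^T.
    """
    Binv = np.linalg.inv(B)
    gamma = np.zeros(K)
    for j in range(K):
        blk = slice(j * S, (j + 1) * S)                     # indices of block j
        num = sum(mu[blk] @ Binv @ mu[blk] for mu in mus)
        den = sum(np.trace(Phi[:, blk].T @ np.linalg.solve(Sy, Phi[:, blk]) @ B)
                  for Phi, Sy in zip(Phis, Sigmas_y))
        gamma[j] = np.sqrt(num / den)
    return gamma
```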

2.1.2 Estimating B and β

The EM technique is used to find $\Omega = \{\mathbf{B}, \beta\}$. We proceed by treating the $\boldsymbol{\theta}_i$ as hidden variables and maximizing

$$W\left(\beta, \mathbf{B}\right) = E_{\boldsymbol{\Theta}\mid\mathbf{Y},\Omega^{(\mathrm{old})}}\left[\log p\left(\mathbf{Y}, \boldsymbol{\Theta} \mid \Omega\right)\right] = E_{\boldsymbol{\Theta}\mid\mathbf{Y},\Omega^{(\mathrm{old})}}\left[\log p\left(\mathbf{Y} \mid \boldsymbol{\Theta}, \beta\right)\right] + E_{\boldsymbol{\Theta}\mid\mathbf{Y},\Omega^{(\mathrm{old})}}\left[\log p\left(\boldsymbol{\Theta} \mid \mathbf{B}\right)\right],$$
(15)

where $\Omega^{(\mathrm{old})}$ denotes the parameter estimates from the previous iteration, and

$$\log p\left(\boldsymbol{\Theta} \mid \mathbf{B}\right) = \sum_{i=1}^{L} \log p\left(\boldsymbol{\theta}_i \mid \mathbf{B}\right) = -\frac{1}{2}\sum_{i=1}^{L}\left[N \log 2\pi + \log\det\boldsymbol{\Sigma}_0 + \boldsymbol{\theta}_i^T\boldsymbol{\Sigma}_0^{-1}\boldsymbol{\theta}_i\right].$$
(16)

Here, we only consider the terms related to $\mathbf{B}$ in $W(\beta, \mathbf{B})$ and use the notation

$$W_1\left(\mathbf{B}\right) \triangleq E_{\boldsymbol{\Theta}\mid\mathbf{Y},\Omega^{(\mathrm{old})}}\left[\log p\left(\boldsymbol{\Theta} \mid \mathbf{B}\right)\right] = -\frac{1}{2}\sum_{i=1}^{L}\left[N\log 2\pi + \log\left(\det\left(\boldsymbol{\Gamma}\right)^S \det\left(\mathbf{B}\right)^K\right) + E\left[\boldsymbol{\theta}_i^T\left(\boldsymbol{\Gamma}^{-1}\otimes\mathbf{B}^{-1}\right)\boldsymbol{\theta}_i\right]\right] = -\frac{1}{2}\sum_{i=1}^{L}\left[N\log 2\pi + S\log\det\boldsymbol{\Gamma} + K\log\det\mathbf{B} + \mathrm{Tr}\left(\left(\boldsymbol{\Gamma}^{-1}\otimes\mathbf{B}^{-1}\right)\left(\boldsymbol{\Sigma}_{\theta_i} + \boldsymbol{\mu}_{\theta_i}\boldsymbol{\mu}_{\theta_i}^T\right)\right)\right].$$
(17)

The partial derivative of (17) with respect to $\mathbf{B}$, which is symmetric and positive definite because it characterizes the covariance matrix of every signal block (see the definition above (5)), is

$$\frac{\partial W_1\left(\mathbf{B}\right)}{\partial \mathbf{B}} = -\frac{1}{2}\sum_{i=1}^{L}\left[K\mathbf{B}^{-1} - \sum_{j=1}^{K}\frac{1}{\gamma_j}\mathbf{B}^{-1}\left(\boldsymbol{\Sigma}_{\theta_i}^{(j)} + \boldsymbol{\mu}_{\theta_i}^{(j)}\boldsymbol{\mu}_{\theta_i}^{(j)T}\right)\mathbf{B}^{-1}\right] = -\frac{1}{2}\left[KL\mathbf{B}^{-1} - \mathbf{B}^{-1}\left(\sum_{i=1}^{L}\sum_{j=1}^{K}\frac{1}{\gamma_j}\left(\boldsymbol{\Sigma}_{\theta_i}^{(j)} + \boldsymbol{\mu}_{\theta_i}^{(j)}\boldsymbol{\mu}_{\theta_i}^{(j)T}\right)\right)\mathbf{B}^{-1}\right],$$
(18)

where $\boldsymbol{\mu}_{\theta_i}^{(j)} \triangleq \boldsymbol{\mu}_{\theta_i}\left((j-1)S+1 : jS\right)$ and $\boldsymbol{\Sigma}_{\theta_i}^{(j)} \triangleq \boldsymbol{\Sigma}_{\theta_i}\left((j-1)S+1 : jS,\, (j-1)S+1 : jS\right)$. Setting (18) to zero yields

$$\mathbf{B} = \frac{1}{KL}\sum_{i=1}^{L}\sum_{j=1}^{K}\frac{1}{\gamma_j}\left(\boldsymbol{\Sigma}_{\theta_i}^{(j)} + \boldsymbol{\mu}_{\theta_i}^{(j)}\boldsymbol{\mu}_{\theta_i}^{(j)T}\right).$$
(19)

Similar to [14], we improve the performance of the algorithm by constraining the matrix $\mathbf{B}$. Specifically, we seek a positive definite and symmetric matrix $\hat{\mathbf{B}}$ that approximates $\mathbf{B}$. Mathematically, we set $\hat{\mathbf{B}}$ to be the Toeplitz matrix

$$\hat{\mathbf{B}} = \mathrm{Toeplitz}\left(\left[1, r, \ldots, r^{S-1}\right]\right) = \begin{bmatrix} 1 & r & \cdots & r^{S-1} \\ \vdots & \ddots & & \vdots \\ r^{S-1} & r^{S-2} & \cdots & 1 \end{bmatrix},$$
where $r = m_1/m_0$, and $m_0$ and $m_1$ are obtained by averaging the elements along the main diagonal and the main sub-diagonal of $\mathbf{B}$ in (19), respectively. As a result, the approximated version of $\mathbf{B}$ is fully characterized by $r$. This method can also be applied, with some modifications, to the case where the signal blocks have different sizes. In particular, in this case, we first compute $\bar{r} = \bar{m}_1/\bar{m}_0$, where $\bar{m}_0 = \sum_{j=1}^{K} m_0^{(j)}$ and $\bar{m}_1 = \sum_{j=1}^{K} m_1^{(j)}$. Here, $m_0^{(j)}$ and $m_1^{(j)}$ are obtained by averaging the elements along the main diagonal and the main sub-diagonal of $\mathbf{B}_j$, where it can be shown that $\mathbf{B}_j = \frac{1}{L\gamma_j}\sum_{i=1}^{L}\left(\boldsymbol{\Sigma}_{\theta_i}^{(j)} + \boldsymbol{\mu}_{\theta_i}^{(j)}\boldsymbol{\mu}_{\theta_i}^{(j)T}\right)$. Each $\mathbf{B}_j$ is approximated with $\hat{\mathbf{B}}_j = \mathrm{Toeplitz}\left(\left[1, \bar{r}, \ldots, \bar{r}^{S_j-1}\right]\right)$ so that, again, the $\hat{\mathbf{B}}_j$ depend on the value of $\bar{r}$ only. Here, $S_j$ is the size of block $j$.
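The constraint step reduces to averaging two diagonals and rebuilding a Toeplitz matrix. A sketch follows; the clipping of $r$ is our own safeguard to keep $\hat{\mathbf{B}}$ positive definite and is not prescribed by the paper:

```python
import numpy as np
from scipy.linalg import toeplitz

def constrain_B(B):
    """Toeplitz approximation of the unconstrained estimate (19)."""
    S = B.shape[0]
    m0 = np.mean(np.diag(B))            # average of the main diagonal
    m1 = np.mean(np.diag(B, k=-1))      # average of the main sub-diagonal
    r = np.clip(m1 / m0, -0.99, 0.99)   # our safeguard, not from the paper
    return toeplitz(r ** np.arange(S))  # Toeplitz([1, r, ..., r^{S-1}])
```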

We next evaluate $\beta$. Considering the terms related to $\beta$ in $W(\beta, \mathbf{B})$, we use the notation

$$W_2\left(\beta\right) \triangleq E_{\boldsymbol{\Theta}\mid\mathbf{Y},\Omega^{(\mathrm{old})}}\left[\log p\left(\mathbf{Y}\mid\boldsymbol{\Theta},\beta\right)\right] = -\frac{1}{2}\sum_{i=1}^{L}\left[M_i\log 2\pi - M_i\log\beta + \beta\, E_{\boldsymbol{\theta}_i\mid\mathbf{y}_i,\Omega^{(\mathrm{old})}}\left[\left\|\mathbf{y}_i - \boldsymbol{\Phi}_i\boldsymbol{\theta}_i\right\|_2^2\right]\right] = -\frac{1}{2}\sum_{i=1}^{L}\left[M_i\log 2\pi - M_i\log\beta + \beta\left(\left\|\mathbf{y}_i - \boldsymbol{\Phi}_i\boldsymbol{\mu}_{\theta_i}\right\|_2^2 + E\left[\left\|\boldsymbol{\Phi}_i\left(\boldsymbol{\theta}_i - \boldsymbol{\mu}_{\theta_i}\right)\right\|_2^2\right]\right)\right] = -\frac{1}{2}\sum_{i=1}^{L}\left[M_i\log 2\pi - M_i\log\beta + \beta\left(\left\|\mathbf{y}_i - \boldsymbol{\Phi}_i\boldsymbol{\mu}_{\theta_i}\right\|_2^2 + \mathrm{Tr}\left(\boldsymbol{\Sigma}_{\theta_i}\boldsymbol{\Phi}_i^T\boldsymbol{\Phi}_i\right)\right)\right].$$
(20)

Differentiating $W_2(\beta)$ with respect to $\beta$ and setting the result to zero, we obtain

$$\frac{\partial W_2\left(\beta\right)}{\partial \beta} = -\frac{1}{2}\left[-\frac{1}{\beta}\sum_{i=1}^{L}M_i + \sum_{i=1}^{L}\left(\left\|\mathbf{y}_i - \boldsymbol{\Phi}_i\boldsymbol{\mu}_{\theta_i}\right\|_2^2 + \mathrm{Tr}\left(\boldsymbol{\Sigma}_{\theta_i}\boldsymbol{\Phi}_i^T\boldsymbol{\Phi}_i\right)\right)\right] = 0.$$
(21)

We have, after some manipulations,

$$\beta = \frac{\sum_{i=1}^{L} M_i}{\sum_{i=1}^{L}\left(\left\|\mathbf{y}_i - \boldsymbol{\Phi}_i\boldsymbol{\mu}_{\theta_i}\right\|_2^2 + \mathrm{Tr}\left(\boldsymbol{\Sigma}_{\theta_i}\boldsymbol{\Phi}_i^T\boldsymbol{\Phi}_i\right)\right)}.$$
(22)

The iterative process for estimating $\beta$, $\boldsymbol{\gamma}$, and $\mathbf{B}$ starts with initial guesses of $\boldsymbol{\mu}_{\theta_i}$, $\boldsymbol{\Sigma}_{\theta_i}$, $\gamma_j$, $\mathbf{B}$, and $\beta$. We then evaluate (6), (7), and (14) sequentially to find $\gamma_j$ and proceed to find the updated estimates of $\mathbf{B}$ and $\beta$ using (19) and (22). With the obtained estimates of the sharing parameters, the original signals $\boldsymbol{\theta}_i$ of the $L$ CS tasks can be reconstructed by following the MCS technique [15]. This completes the development of MBSBL; a compact sketch of one possible implementation of this loop is given below.
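The following sketch puts the pieces together. It is one possible reading of the iteration described above (the initialization choices and convergence test are ours), reusing the helper functions build_sigma0, posterior_moments, update_gamma, and constrain_B sketched earlier:

```python
import numpy as np

def mbsbl(ys, Phis, S, K, n_iter=200, tol=1e-6):
    """Sketch of the MBSBL iteration: cycle through (6)-(7), (14), (19), (22)."""
    L = len(ys)
    gamma, B, beta = np.ones(K), np.eye(S), 1.0
    for _ in range(n_iter):
        Sigma0 = build_sigma0(gamma, B)
        moments = [posterior_moments(y, Phi, Sigma0, beta)
                   for y, Phi in zip(ys, Phis)]
        mus, Sigmas = zip(*moments)
        Sys = [np.eye(Phi.shape[0]) / beta + Phi @ Sigma0 @ Phi.T for Phi in Phis]
        gamma_new = update_gamma(mus, Sys, Phis, B, S, K)
        # EM update of B per (19), followed by the Toeplitz constraint
        B_acc = np.zeros((S, S))
        for mu, Sig in zip(mus, Sigmas):
            for j in range(K):
                blk = slice(j * S, (j + 1) * S)
                B_acc += (Sig[blk, blk] + np.outer(mu[blk], mu[blk])) \
                         / max(gamma_new[j], 1e-10)
        B = constrain_B(B_acc / (K * L))
        # EM update of beta per (22)
        den = sum(np.linalg.norm(y - Phi @ mu) ** 2 + np.trace(Sig @ Phi.T @ Phi)
                  for y, Phi, mu, Sig in zip(ys, Phis, mus, Sigmas))
        beta = sum(Phi.shape[0] for Phi in Phis) / den
        if np.max(np.abs(gamma_new - gamma)) < tol:
            gamma = gamma_new
            break
        gamma = gamma_new
    return list(mus), gamma, B, beta
```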

2.2 EMBSBL

We now present the EMBSBL algorithm, which builds on the MBSBL technique developed above. Similar to [14], we first assume that all blocks are of equal size $h$ and that the non-zero blocks are arbitrarily located. We will show via simulations that EMBSBL is not sensitive to the choice of $h$. There are $p \triangleq N - h + 1$ possible blocks in every signal $\boldsymbol{\theta}_i$; the $j$th block starts at the $j$th element of $\boldsymbol{\theta}_i$ and extends to the $(j+h-1)$th element. All the non-zero elements of $\boldsymbol{\theta}_i$ lie within a subset of these blocks. From the analysis above, we have the decomposition of $\boldsymbol{\theta}_i$

$$\boldsymbol{\theta}_i = \sum_{j=1}^{p} \mathbf{E}_j \mathbf{z}_{i,j},$$
(23)

where $\mathbf{z}_{i,j} \in \mathbb{R}^{h\times 1}$, $E\left[\mathbf{z}_{i,j}\mathbf{z}_{i,k}^T\right] = \delta_{j,k}\gamma_j\mathbf{B}$ ($\delta_{j,k} = 1$ if $j = k$ and $\delta_{j,k} = 0$ otherwise), and $\mathbf{z}_i = \left[\mathbf{z}_{i,1}^T, \ldots, \mathbf{z}_{i,p}^T\right]^T \sim \mathcal{N}\left(\mathbf{z}_i \mid \mathbf{0}, \tilde{\boldsymbol{\Sigma}}_0\right)$ with $\tilde{\boldsymbol{\Sigma}}_0 = \mathrm{diag}\left(\gamma_1\mathbf{B}, \ldots, \gamma_p\mathbf{B}\right) \in \mathbb{R}^{ph\times ph}$. $\mathbf{E}_j \in \mathbb{R}^{N\times h}$ is a zero matrix except that the submatrix formed by its $j$th to $(j+h-1)$th rows is the identity matrix $\mathbf{I}$; $\mathbf{E}_j$ is the same for every $\boldsymbol{\theta}_i$. The CS model (1) can then be re-expressed as

$$\mathbf{y}_i = \sum_{j=1}^{p} \boldsymbol{\Phi}_i \mathbf{E}_j \mathbf{z}_{i,j} + \mathbf{n}_i = \mathbf{A}_i \mathbf{z}_i + \mathbf{n}_i, \quad i = 1, \ldots, L,$$
(24)

where

$$\mathbf{A}_i = \left[\mathbf{A}_{i,1}, \ldots, \mathbf{A}_{i,p}\right]$$
(25)

and

$$\mathbf{A}_{i,j} = \boldsymbol{\Phi}_i \mathbf{E}_j.$$
(26)

The signals of the new CS model (24) have the block-sparsity property, and the intra-block correlation is explicit. $\mathbf{z}_i$ can be recovered using MBSBL, and by utilizing (23), the original signals $\boldsymbol{\theta}_i$ of the CS tasks can then be found. This completes the development of EMBSBL for recovering block-sparse signals under the MCS framework; a sketch of the dictionary expansion and the signal reassembly is given below.
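Because $\mathbf{E}_j$ merely selects rows $j$ through $j+h-1$, the expanded dictionary and the reassembly in (23) are simple to realize; the sketch below (our illustration) makes this explicit:

```python
import numpy as np

def expand_dictionary(Phi, h):
    """A_i = [Phi E_1, ..., Phi E_p] of (24)-(26); Phi E_j = Phi[:, j:j+h]."""
    _, N = Phi.shape
    p = N - h + 1
    return np.hstack([Phi[:, j:j + h] for j in range(p)])

def assemble_theta(z, N, h):
    """Recover theta_i from z_i via (23); overlapping blocks are summed."""
    p = N - h + 1
    theta = np.zeros(N)
    for j in range(p):
        theta[j:j + h] += z[j * h:(j + 1) * h]
    return theta
```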

3 SEMBSBL algorithm

EMBSBL cannot be directly applied to recover a single block-sparse signal in the single CS task scenario, since it is by nature an MCS technique. We shall augment it with the idea of synthesized MCS to address this difficulty. CS task synthesis via circular-shifting operations is developed below. The section ends with the improved EMBSBL algorithm, namely synthesized EMBSBL (SEMBSBL), which utilizes the MDL principle to determine the proper number of synthesized CS tasks to achieve satisfactory signal recovery performance.

3.1 Synthesis of multiple CS tasks

Figure 1 illustrates synthesizing multiple CS tasks from a single one. Measurement noise is omitted here for clarity. The original CS task is $\mathbf{y}_1 = \boldsymbol{\Phi}_1\boldsymbol{\theta}_1$, where $\boldsymbol{\theta}_1$ is the block-sparse signal to be recovered; it has two non-zero clusters (shaded). The columns of the measurement matrix $\boldsymbol{\Phi}_1$ corresponding to the non-zero elements of $\boldsymbol{\theta}_1$ are also shaded for illustration. Figure 1 indicates that a new CS task can be synthesized from the original one by circularly shifting the columns of $\boldsymbol{\Phi}_1$ to the right by one column. In this way, the new CS task has a new measurement matrix $\boldsymbol{\Phi}_2$ and a new signal $\boldsymbol{\theta}_2$ whose elements are generated by circularly shifting $\boldsymbol{\theta}_1$ downward by one sample. The new CS task has the same compressive measurements as the original one; we assume that this observation also holds when measurement noise is present. Comparing $\boldsymbol{\theta}_1$ with $\boldsymbol{\theta}_2$ reveals that the locations of their non-zero elements overlap, which implies that $\boldsymbol{\theta}_1$ and $\boldsymbol{\theta}_2$ are correlated. This forms the basis for utilizing EMBSBL in block-sparse signal recovery. Additional CS tasks can be synthesized by following a similar approach with different directions and amounts for the circular-shifting operations; a numerical check of this construction is given after Figure 1.

Figure 1. Synthesis of a new CS task via circular shifting.
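The invariance that underpins the synthesis is easy to verify numerically. The snippet below (a sanity check of our own, noise-free as in the figure) confirms that the shifted pair reproduces the same measurements:

```python
import numpy as np

def synthesize_task(Phi1, shift):
    """Circularly shift the columns of Phi1; a positive shift moves columns
    to the right, and the implied signal is theta1 rolled down by `shift`."""
    return np.roll(Phi1, shift, axis=1)

rng = np.random.default_rng(0)
Phi1 = rng.standard_normal((20, 50))
theta1 = np.zeros(50)
theta1[10:15] = 1.0                                # one non-zero cluster
Phi2 = synthesize_task(Phi1, 1)
theta2 = np.roll(theta1, 1)
assert np.allclose(Phi1 @ theta1, Phi2 @ theta2)   # same measurements
```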

It can be expected that, due to the block-sparsity of the signal to be recovered, the signals of some synthesized CS tasks may not be well correlated with the others; that is, they may have only a few overlapping non-zero elements. Utilizing such CS tasks in recovering the original signal via EMBSBL would lead to poor signal reconstruction performance. To address this problem, we propose to utilize the MDL principle to determine the number of synthesized CS tasks used for the block-sparse signal reconstruction, as detailed in the following subsection.

3.2 SEMBSBL

This section presents the proposed SEMBSBL algorithm. We shall first provide a method for evaluating the signal recovery quality of EMBSBL for a given set of synthesized CS tasks. This is essential for selecting the optimal set of synthesized CS tasks for block-sparse signal recovery. For this purpose, we apply the MDL principle. Basically, it states that among a set of competing statistical models, the best model is the one having the minimum code length for the given data [16, 17]. This is mathematically equivalent to solving $\hat{Q} = \arg\min_{Q\in\mathcal{M}} CL(\mathbf{y}, Q)$, where $\mathcal{M}$ denotes the set of possible models and $CL(\mathbf{y}, Q)$ is the code length function. We set $CL(\mathbf{y}, Q)$ to be the Shannon code length [18], i.e., $CL(\mathbf{y}, Q) = -\log_2 p(\mathbf{y}, Q)$, where $p(\mathbf{y}, Q)$ is the probability density function of $\mathbf{y}$ under the model $Q$.

For the problem of applying EMBSBL to recover the block-sparse signal in a single CS task scenario (without loss of generality, we assume the task is $\mathbf{y}_1 = \boldsymbol{\Phi}_1\boldsymbol{\theta}_1 + \mathbf{n}_1$), we denote the estimates of the sharing parameters $\beta$, $\boldsymbol{\gamma}$, $\mathbf{B}$ by $\hat\beta$, $\hat{\boldsymbol{\gamma}}$, $\hat{\mathbf{B}}$. They are output by the EMBSBL algorithm for a given set of synthesized CS tasks. Using (9) with $L = 1$, the description length for $\mathbf{y}_1$, $CL(\mathbf{y}_1)$, can then be expressed as

$$CL\left(\mathbf{y}_1\right) = CL\left(\mathbf{y}_1 \mid \hat\beta, \hat{\boldsymbol{\gamma}}, \hat{\mathbf{B}}\right) + CL\left(\hat\beta, \hat{\boldsymbol{\gamma}}, \hat{\mathbf{B}}\right) = -\log_2 p\left(\mathbf{y}_1 \mid \hat\beta, \hat{\boldsymbol{\gamma}}, \hat{\mathbf{B}}\right) - \log_2 p\left(\hat\beta\right) - \log_2 p\left(\hat{\boldsymbol{\gamma}}\right) - \log_2 p\left(\hat{\mathbf{B}}\right) = \frac{1}{2}\left[M_1\log_2 2\pi + \log_2\det\mathbf{C}_1 + \mathbf{y}_1^T\mathbf{C}_1^{-1}\mathbf{y}_1 \log_2 e\right] + \mathrm{const},$$
(27)

where $CL(\mathbf{y}_1 \mid \hat\beta, \hat{\boldsymbol{\gamma}}, \hat{\mathbf{B}}) = -\log_2 p(\mathbf{y}_1 \mid \hat\beta, \hat{\boldsymbol{\gamma}}, \hat{\mathbf{B}})$ measures the goodness of fit between the data and the current model, $CL(\hat\beta, \hat{\boldsymbol{\gamma}}, \hat{\mathbf{B}}) = -\log_2 p(\hat\beta) - \log_2 p(\hat{\boldsymbol{\gamma}}) - \log_2 p(\hat{\mathbf{B}})$ represents the model complexity, $p(\hat\beta)$, $p(\hat{\boldsymbol{\gamma}})$, and $p(\hat{\mathbf{B}})$ denote the prior distributions of $\hat\beta$, $\hat{\boldsymbol{\gamma}}$, $\hat{\mathbf{B}}$, and $e$ is the base of the natural logarithm. Because we do not impose any specific distributions on $\hat\beta$, $\hat{\boldsymbol{\gamma}}$, $\hat{\mathbf{B}}$, their prior distributions are set to be uniform; in other words, $-\log_2 p(\hat\beta) - \log_2 p(\hat{\boldsymbol{\gamma}}) - \log_2 p(\hat{\mathbf{B}})$ is a constant. Here, $\mathbf{C}_1 = \hat\beta^{-1}\mathbf{I} + \mathbf{A}_1\hat{\boldsymbol{\Sigma}}_0\mathbf{A}_1^T$ and $\hat{\boldsymbol{\Sigma}}_0 = \mathrm{diag}\left(\hat\gamma_1\hat{\mathbf{B}}, \ldots, \hat\gamma_K\hat{\mathbf{B}}\right)$.
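Evaluating (27) therefore requires only the estimated sharing parameters, and the constant model-complexity term can be dropped when comparing candidate task sets. A sketch (the use of slogdet is our choice for numerical stability):

```python
import numpy as np

def code_length(y1, A1, beta_hat, Sigma0_hat):
    """Shannon code length of (27), up to the constant complexity term."""
    M1 = y1.size
    C1 = np.eye(M1) / beta_hat + A1 @ Sigma0_hat @ A1.T
    _, logdet = np.linalg.slogdet(C1)                  # ln det C1
    quad = y1 @ np.linalg.solve(C1, y1)                # y1^T C1^{-1} y1
    return 0.5 * (M1 * np.log2(2 * np.pi)
                  + logdet / np.log(2)                 # log2 det C1
                  + quad * np.log2(np.e))
```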

We are now ready to present the proposed SEMBSBL algorithm. It is an iterative method that improves the signal recovery quality gradually. In each iteration, a new CS task is synthesized using the circular-shifting operation illustrated in Figure 1. The newly produced task is applied, together with the previously synthesized CS tasks and the original CS task, in EMBSBL for jointly reconstructing the block-sparse signal. The above process continues until the number of synthesized CS tasks reaches a pre-specified value or until including the newly synthesized CS task no longer improves the signal reconstruction quality (or equivalently, no longer reduces the code length for describing the data, given in (27)).

The algorithm is summarized in Algorithm 1, where $l_{\max}$ is the user-specified maximum number of synthesized CS tasks. $\mathrm{EMBSBL}_l(\mathbf{Y}, \mathbf{A})$ represents the application of EMBSBL for signal reconstruction in the $l$th iteration using $l$ CS tasks; $\mathbf{Y}$ and $\mathbf{A}$ collect the compressive measurements and the associated measurement matrices of the $l$ CS tasks. The output of $\mathrm{EMBSBL}_l(\mathbf{Y}, \mathbf{A})$ is $\left(\hat\beta_l, \hat{\boldsymbol{\gamma}}_l, \hat{\mathbf{B}}_l, \hat{\boldsymbol{\theta}}_{1,l}\right)$, i.e., the estimates of the sharing parameters $\beta$, $\boldsymbol{\gamma}$, $\mathbf{B}$ and of the original signal $\boldsymbol{\theta}_1$. The operators $\mathrm{Left}(\mathbf{A}_1, l)$ and $\mathrm{Right}(\mathbf{A}_1, l)$ denote circularly shifting the columns of $\mathbf{A}_1$ to the left and to the right by $l$ columns, respectively.

Algorithm 1 SEMBSBL
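Since the published pseudocode itself is not reproduced here, the following is only our reading of the procedure described above, reusing the earlier sketches (expand_dictionary, assemble_theta, mbsbl, code_length); in particular, the alternating Right/Left shifting schedule is an assumption on our part:

```python
import numpy as np

def sembsbl(y1, Phi1, h, l_max):
    """Sketch of Algorithm 1: grow the synthesized task set until the MDL
    code length (27) stops decreasing or l_max tasks have been added."""
    _, N = Phi1.shape
    p = N - h + 1
    A1 = expand_dictionary(Phi1, h)
    Ys, As = [y1], [A1]
    best_cl, best_theta = np.inf, None
    for l in range(1, l_max + 1):
        amount, direction = (l + 1) // 2, (1 if l % 2 else -1)
        Ys.append(y1)                                        # measurements reused
        As.append(np.roll(A1, direction * amount, axis=1))   # Right/Left(A1, .)
        mus, gamma, B, beta = mbsbl(Ys, As, S=h, K=p)        # EMBSBL on the task set
        cl = code_length(y1, A1, beta, np.kron(np.diag(gamma), B))
        if cl >= best_cl:                                    # MDL stopping rule
            break
        best_cl = cl
        best_theta = assemble_theta(mus[0], N, h)            # eq. (23), original task
    return best_theta
```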

4 Simulations

We shall provide simulation results to demonstrate the performance of the EMBSBL algorithm proposed in Section 2 and the SEMBSBL algorithm developed in Section 3. The signal reconstruction error is quantified as $\left\|\boldsymbol{\theta}_i - \hat{\boldsymbol{\theta}}_i\right\|_2 / \left\|\boldsymbol{\theta}_i\right\|_2$, where $\boldsymbol{\theta}_i$ and $\hat{\boldsymbol{\theta}}_i$ are the true and the estimated signals. The elements of the measurement matrix $\boldsymbol{\Phi}_i$ are initially drawn from the standard normal distribution $\mathcal{N}(0, 1)$, and each row of $\boldsymbol{\Phi}_i$ is then normalized to unit norm.
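In code, this setup reads as follows (our transcription of the description above):

```python
import numpy as np

def recon_error(theta, theta_hat):
    """Normalized reconstruction error ||theta - theta_hat|| / ||theta||."""
    return np.linalg.norm(theta - theta_hat) / np.linalg.norm(theta)

def make_Phi(M, N, seed=None):
    """Gaussian measurement matrix with unit-norm rows."""
    rng = np.random.default_rng(seed)
    Phi = rng.standard_normal((M, N))
    return Phi / np.linalg.norm(Phi, axis=1, keepdims=True)
```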

In the first experiment, we simulate a two-CS-task scenario ($L = 2$) where the original signals both have length $N = 500$ and each contains 50 spikes with different amplitudes at random locations. In each signal, the spikes are clustered into six non-zero blocks with random sizes at non-overlapping random locations. We consider two cases, in which 80% and 100% of the spikes of the two signals are at the same positions.

The signal-to-noise ratio (SNR) in log scale is defined as $\mathrm{SNR} \triangleq 20\log_{10}\left(\beta\left\|\boldsymbol{\Phi}_i\boldsymbol{\theta}_i\right\|_2\right)$, and zero-mean Gaussian noise is added to every CS measurement vector. We set SNR = 15 dB and use MCS, BSBL-BO, and EBSBL-BO as benchmark techniques for comparison. When implementing BSBL-BO, EBSBL-BO, and EMBSBL, we set the block size parameter $h$ to $h = 4$ and $h = 8$ to illustrate the impact of different choices of $h$ on their performance. The signal reconstruction error results shown are obtained by averaging over 50 ensemble runs.

Figure 2 compares MCS, BSBL-BO, EBSBL-BO, and EMBSBL in terms of their signal reconstruction errors as a function of the number of compressive measurements. In this simulation, the intra-block correlation coefficient for each block is uniformly distributed between 0 and 0.1. The simulation results indicate that for BSBL-BO and EBSBL-BO, the performance curves when 80% and 100% of the spikes of the two original signals are at the same locations are very similar to each other. Therefore, to improve the clarity of the figures, we only provide in Figure 2 and in the following Figures 3 and 4 the results for the case where 80% of the spikes have the same locations.

Figure 2. Comparison of MCS, BSBL-BO, EBSBL-BO, and EMBSBL as a function of the number of compressive measurements, when the intra-block correlation coefficient for each block is uniformly distributed between 0 and 0.1. (a) Signal reconstruction error performance; (b) algorithm running time.

Figure 3. Comparison of MCS, BSBL-BO, EBSBL-BO, and EMBSBL as a function of the number of compressive measurements, when the intra-block correlation coefficient for each block is uniformly distributed between 0.4 and 0.5. (a) Signal reconstruction error performance; (b) algorithm running time.

Figure 4. Comparison of MCS, BSBL-BO, EBSBL-BO, and EMBSBL as a function of the number of compressive measurements, when the intra-block correlation coefficient for each block is uniformly distributed between 0.8 and 0.9. (a) Signal reconstruction error performance; (b) algorithm running time.

We can see from Figure 2a that the proposed EMBSBL has the smallest signal recovery error, and its performance improves as more spikes of the original signals share the same locations. More importantly, EMBSBL is less sensitive to the choice of the block size parameter $h$ than BSBL-BO and EBSBL-BO. This is because the new EMBSBL technique is an MCS algorithm that recovers the original signals of multiple CS tasks jointly. Compared with BSBL-BO and EBSBL-BO, which are single-task CS algorithms, EMBSBL exploits the inter-correlation among the original signals, in addition to the intra-block correlation, to improve performance. The use of this additional information improves the robustness of EMBSBL against deviations of the presumed block size from the true value. Finally, as shown in Figure 2b, EMBSBL has complexity comparable to that of EBSBL-BO, despite being an MCS technique.

We repeat the simulation that produced Figure 2, but this time we allow the intra-block correlation coefficient for each block to be uniformly distributed between 0.4 and 0.5. The obtained results are summarized in Figure 3. It can be seen that the proposed EMBSBL continues to offer the best signal recovery performance. Besides, the increase in the intra-block correlation improves the performance of EBSBL-BO.

In producing Figure 4, we further increase the intra-block correlation coefficient for each block to be uniformly distributed from 0.8 to 0.9. The performance curve of MCS is excluded from the figure for the sake of clarity, because its signal reconstruction error is large in this case. We can see from Figure 4 that under significant intra-block correlation, the proposed EMBSBL still has the best signal recovery performance. Moreover, comparing Figure 4 with Figures 2 and 3 reveals that higher intra-block correlation leads to improved performance of the proposed EMBSBL method. This is because it explicitly utilizes the intra-block correlation for better signal recovery (see Section 2).

We next study the impact of different signal inter-correlation levels on the signal recovery performance of EMBSBL. The simulation setup is the same as that leading to Figure 2, except that the percentage of overlapping non-zeros of the two original signals is set to 40%, 60%, 80%, and 100%. Three sets of simulation results are produced, corresponding to the intra-block correlation coefficient being uniformly distributed within [0, 0.1], [0.4, 0.5], and [0.8, 0.9]. The obtained simulation results are summarized in Figure 5. We find that EMBSBL recovers the original signals with reduced signal reconstruction error as the inter-correlation among the original signals increases. This observation holds for different intra-block correlation coefficients. The performance improvement is expected, because the EMBSBL algorithm is an MCS technique and its signal recovery performance benefits from increased inter-correlation among the original signals.

Figure 5. Signal reconstruction performance of EBSBL-BO and EMBSBL as a function of the number of compressive measurements, under different signal inter-correlation levels. Intra-block correlation coefficient uniformly distributed in (a) [0, 0.1], (b) [0.4, 0.5], (c) [0.8, 0.9].

To validate the development of SEMBSBL (see Section 3), we consider a single CS task scenario. Again, the original signal has length $N = 500$ and contains 50 spikes at random locations. The signal includes six non-zero blocks with random sizes and random but non-overlapping locations. We set SNR = 15 dB in the simulation and use BSBL-BO and EBSBL-BO as benchmark techniques. For BSBL-BO, EBSBL-BO, and the proposed SEMBSBL, we generate signal recovery error performance curves with the block size parameter set to $h = 4$ and $h = 8$. The results are averaged over 50 runs. For the proposed SEMBSBL, we set the pre-specified maximum number of synthesized CS tasks $l_{\max}$ to six. We consider three cases, in which the intra-block correlation coefficient for each block is uniformly distributed within [0, 0.1], [0.4, 0.5], and [0.8, 0.9]. The obtained simulation results are shown in Figure 6. It can be observed that the proposed SEMBSBL outperforms the benchmark algorithms and is less sensitive to the choice of the parameter $h$ than EBSBL-BO. However, the running time of SEMBSBL is high, since it executes the EMBSBL algorithm multiple times before finding an optimal set of synthesized CS tasks for signal recovery.

Figure 6. Comparison of the reconstruction performance of BSBL-BO, EBSBL-BO, and SEMBSBL in the original CS task, when the intra-block correlation value for each block is uniformly distributed (a) from 0 to 0.1, (b) from 0.4 to 0.5, and (c) from 0.8 to 0.9.

5 Conclusion

In this paper, a novel algorithm for jointly recovering multiple block-sparse signals from their compressive measurements, termed the EMBSBL algorithm, was developed. EMBSBL exploits both the statistical correlation among signals and the signals' intra-block correlation to achieve superior signal recovery performance. Moreover, the new algorithm eliminates the requirement for a priori information on the sparsity structure of the original signal. We also developed SEMBSBL, which applies EMBSBL to the single CS task case. It synthesizes new CS tasks from the single CS task via simple circular-shifting operations to make EMBSBL applicable. The MDL principle was adopted to determine the proper set of synthesized CS tasks for reconstructing the block-sparse signal. Computer simulations revealed that the proposed EMBSBL and SEMBSBL outperform existing techniques, providing greatly enhanced block-sparse signal reconstruction performance at the cost of increased computational complexity.

References

1. D.L. Donoho, Compressed sensing. IEEE Trans. Inf. Theory 52(4), 1289-1306 (2006)
2. M. Mishali, Y.C. Eldar, Blind multi-band signal reconstruction: compressed sensing for analog signals. IEEE Trans. Signal Process. 57(3), 993-1009 (2009)
3. F. Parvaresh, H. Vikalo, S. Misra, B. Hassibi, Recovering sparse signals using sparse measurement matrices in compressed DNA microarrays. IEEE J. Sel. Topics Signal Process. 2(3), 275-285 (2008)
4. R.G. Baraniuk, V. Cevher, M.F. Duarte, C. Hegde, Model-based compressive sensing. IEEE Trans. Inf. Theory 56(4), 1982-2001 (2010)
5. D. Needell, J.A. Tropp, CoSaMP: iterative signal recovery from incomplete and inaccurate samples. Appl. Comput. Harmon. Anal. 26, 301-321 (2009)
6. Y.C. Pati, R. Rezaiifar, P.S. Krishnaprasad, Orthogonal matching pursuit: recursive function approximation with applications to wavelet decomposition, in Proceedings of the 27th Ann. Asilomar Conf. on Signals, Systems, and Computers (Pacific Grove, CA, 1-3 Nov 1993)
7. Y.C. Eldar, P. Kuppinger, H. Bolcskei, Block-sparse signals: uncertainty relations and efficient recovery. IEEE Trans. Signal Process. 58(6), 3042-3054 (2010)
8. J. Zou, Y. Fu, S. Xie, A block fixed point continuation algorithm for block-sparse reconstruction. IEEE Signal Process. Lett. 19(6), 364-367 (2012)
9. E. Elhamifar, R. Vidal, Block-sparse recovery via convex optimization. IEEE Trans. Signal Process. 60(8), 4094-4107 (2012)
10. L. Zelnik-Manor, K. Rosenblum, Y.C. Eldar, Dictionary optimization for block-sparse representations. IEEE Trans. Signal Process. 60(5), 2386-2395 (2012)
11. L. Yu, H. Sun, J.P. Barbot, G. Zheng, Bayesian compressive sensing for cluster structured sparse signals. Signal Process. 92(1), 259-269 (2012)
12. T. Peleg, Y. Eldar, M. Elad, Exploiting statistical dependencies in sparse representations for signal recovery. IEEE Trans. Signal Process. 60(5), 2286-2303 (2012)
13. Z. Zhang, B.D. Rao, Sparse signal recovery with temporally correlated source vectors using sparse Bayesian learning. IEEE J. Sel. Topics Signal Process. 5(5), 912-926 (2011)
14. Z. Zhang, B.D. Rao, Extension of SBL algorithms for the recovery of block sparse signals with intra-block correlation. IEEE Trans. Signal Process. 61(8), 2009-2015 (2013)
15. S. Ji, D. Dunson, L. Carin, Multi-task compressive sensing. IEEE Trans. Signal Process. 57(1), 92-106 (2009)
16. A. Barron, J. Rissanen, B. Yu, The minimum description length principle in coding and modeling. IEEE Trans. Inf. Theory 44(6), 2743-2760 (1998)
17. I. Ramirez, G. Sapiro, An MDL framework for sparse coding and dictionary learning. IEEE Trans. Signal Process. 60(6), 2913-2927 (2012)
18. T. Cover, J. Thomas, Elements of Information Theory, 2nd edn. (Wiley, New York, 2006)


Acknowledgements

The authors wish to thank the associate editor and the anonymous reviewers for their constructive suggestions. The authors thank Zhilin Zhang, Bhaskar D. Rao, Shihao Ji, and David Dunson for sharing codes of their algorithms. This work was supported in part by Hunan Provincial Innovation Foundation for Postgraduates under Grant CX2012B019, Fund of Innovation, Graduate School of National University of Defense Technology under grant B120404, and National Natural Science Foundation of China (no. 61304264).

Correspondence to Ying-Gui Wang.


Competing interests

The authors declare that they have no competing interests.
