
  • Review
  • Open Access

Bayesian approach with prior models which enforce sparsity in signal and image processing

EURASIP Journal on Advances in Signal Processing 2012, 2012:52

https://doi.org/10.1186/1687-6180-2012-52

Received: 18 January 2012

Accepted: 1 March 2012

Published: 1 March 2012

Abstract

In this review article, we propose using the Bayesian inference approach for inverse problems in signal and image processing in which we want to infer sparse signals or images. The sparsity may hold directly in the original space or in a transformed space; here, we consider it directly in the original space (impulsive signals). To enforce sparsity, we consider probabilistic prior models, try to give a reasonably exhaustive list of such models, and classify them. These models are either heavy tailed (generalized Gaussian, symmetric Weibull, Student-t or Cauchy, elastic net, generalized hyperbolic and Dirichlet) or mixture models (mixture of Gaussians, Bernoulli-Gaussian, Bernoulli-Gamma, mixture of translated Gaussians, mixture of multinomial, etc.). Depending on the prior model selected, the Bayesian computations (optimization for the joint maximum a posteriori (MAP) estimate, or MCMC or variational Bayes approximations (VBA) for posterior means (PM) or complete density estimation) may become more or less complex. We present these models, discuss the different possible Bayesian estimators, derive the corresponding appropriate algorithms, and discuss their relative complexities and performances.

Keywords

  • sparsity
  • Bayesian approach
  • sparse priors
  • inverse problems

1 Introduction

In many generic inverse problems in signal and image processing, we want to infer an unknown signal f(t) or an unknown image f(r), with r = (x, y), from an observed signal g(s) or an observed image g(s) related to it through an operator, such as a convolution g = h * f or some other linear or nonlinear transformation g = H(f). When this relation is linear and the problem has been discretized, we arrive at the relation:
g = H f + ε ,
(1)
where f = [f_1, ..., f_n]' represents the unknowns, g = [g_1, ..., g_m]' the observed data, ε = [ε_1, ..., ε_m]' the errors of modeling and measurement, and H the matrix of the system response. We may note that, even if the noise could be neglected (ε = 0) and the matrix H were invertible (m = n), in general the solution f̂ = H^{-1} g is not necessarily a good solution, because it may be too sensitive to small changes in the data due to the ill-conditioning of this matrix. For the general case of m ≠ n, one tries to obtain a regularized solution, for example by defining it as the optimizer of a two-part criterion
f̂ = arg min_f { J(f) = ||g - H f||^2 + λ ||f||^2 }
(2)

which is given by f̂ = [H'H + λI]^{-1} H'g. When the regularization parameter λ = 0, one gets the generalized inverse solution f̂ = [H'H]^{-1} H'g, and when H is invertible, one gets the normal inverse solution f̂ = H^{-1} g. Regularization theory has been developed since the pioneering work of Tikhonov [1] and Tikhonov and Arsénine [2], who introduced a quadratic regularization term to account for some prior property of the solution (smoothness). Since then, many different regularization terms have been proposed. In particular, in place of the L2 norm L2(f) = ||f||_2^2 = Σ_j f_j^2, it has been proposed to use the L0 pseudo-norm L0(f) = ||f||_0 (the number of non-zero components f_j) or the L1 norm L1(f) = ||f||_1 = Σ_j |f_j| to enforce the sparsity of the solution [3-11]. Then, due to the fact that L0(f) is not convex and L1(f) is convex but not differentiable at the origin, the optimization of a criterion with these expressions becomes more difficult than in the L2 norm case. For this reason, a great number of works have specialized in proposing algorithms for the optimization of such criteria.
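As a small numerical illustration of the regularized solution above (a sketch only: the forward matrix, its singular-value decay and the value of λ are arbitrary choices, not taken from the article):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30

# Build an ill-conditioned H with singular values decaying from 1 to 1e-6
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
H = U @ np.diag(np.logspace(0, -6, n)) @ V.T

# Sparse ground truth and noisy data g = H f + eps
f_true = np.zeros(n)
f_true[[5, 12, 20]] = [1.0, -0.7, 0.5]
g = H @ f_true + 1e-3 * rng.standard_normal(n)

def regularized(H, g, lam):
    """f_hat = [H'H + lam I]^{-1} H' g, the optimizer of the two-part criterion."""
    return np.linalg.solve(H.T @ H + lam * np.eye(H.shape[1]), H.T @ g)

f_naive = np.linalg.solve(H, g)       # f_hat = H^{-1} g: noise is amplified
f_reg = regularized(H, g, lam=1e-4)   # regularized: stable
```

The naive inverse amplifies the noise components by the inverses of the small singular values, while the regularized solution stays much closer to f_true.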

Interestingly, defining the solution of problem (1) as the optimizer of a two-part criterion can be assimilated to a maximum a posteriori (MAP) solution in a Bayesian approach, where the first term of the criterion (2) is related to the likelihood and the second term to a prior model, as we will see in the following. The main objective is to show how the Bayesian approach can go farther than regularization in at least the following aspects:

  • A better account for the noise term characteristics;

  • A better and easier way for translating the prior knowledge and in particular the sparsity;

  • New tools for assessing the regularization parameter, a great subject of discussion for all those who work with regularization theory;

  • New solutions and new tools for doing computations (optimizations and integrations).

1.1 The Bayesian approach

The Bayesian inference approach is based on the posterior law:
p(f | g, θ1, θ2) = p(g | f, θ1) p(f | θ2) / p(g | θ1, θ2) ∝ p(g | f, θ1) p(f | θ2)
(3)

where the sign ∝ stands for "proportional to", p(g|f, θ1) is the likelihood, p(f|θ2) the prior model, θ = (θ1, θ2) are their corresponding parameters (often called the hyper parameters of the problem) and p(g|θ1, θ2) is called the evidence of the model.

This general Bayesian approach is illustrated as follows:

In this approach, the likelihood p(g|f, θ1) summarizes our knowledge about the noise and the model linking the observed data g to the unknowns f and the prior term p(f|θ2) summarizes our incomplete prior knowledge about the unknowns and the posterior law p(f|g, θ) combines these two terms and contains all our state of knowledge about the unknowns f after accounting for the prior and the observed data.

As a very simple example, when the noise is assumed to be Gaussian, the MAP solution f̂ = arg max_f { p(f | g, θ) } is obtained as the optimizer of the criterion J(f) = ||g - Hf||^2 + λ Ω(f), where the expression of Ω(f) depends on the prior law. When the prior knowledge is translated as a Gaussian probability law, then Ω(f) = ||f||_2^2, and when it is translated as a Laplace probability law, then Ω(f) = ||f||_1 [12-14].
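To make the two cases concrete, the sketch below (an illustrative NumPy example with arbitrary problem sizes and λ; the ISTA-type proximal-gradient iteration is one standard way, among many, to minimize the L1-penalized criterion) computes both MAP estimates:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, lam = 60, 40, 1.0
H = rng.standard_normal((m, n))
f_true = np.zeros(n)
f_true[[3, 17, 30]] = [2.0, -1.5, 1.0]
g = H @ f_true + 0.05 * rng.standard_normal(m)

# Gaussian prior: Omega(f) = ||f||_2^2, closed-form MAP (Tikhonov/ridge)
f_l2 = np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ g)

def ista(H, g, lam, n_iter=2000):
    """MAP with a Laplace prior: minimize 0.5 ||g - Hf||^2 + lam ||f||_1
    by proximal-gradient (ISTA) iterations with soft thresholding."""
    L = np.linalg.norm(H, 2) ** 2          # Lipschitz constant of the gradient
    f = np.zeros(H.shape[1])
    for _ in range(n_iter):
        u = f - H.T @ (H @ f - g) / L      # gradient step on the data term
        f = np.sign(u) * np.maximum(np.abs(u) - lam / L, 0.0)  # soft threshold
    return f

f_l1 = ista(H, g, lam)
```

The Laplace-prior (L1) solution contains exact zeros, while the Gaussian-prior (L2) solution only shrinks the components without zeroing them.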

A first interest of the Bayesian approach, compared with the regularization approach, is to have new tools for handling the hyper parameters [15].

1.2 Full Bayesian approach

When the parameters θ have to be estimated too, we can assign them a prior p(θ|θ0) with fixed values for θ0 (often called hyper-hyper-parameters) and express the joint posterior
p(f, θ | g, θ0) = p(g | f, θ1) p(f | θ2) p(θ | θ0) / p(g | θ0)
(4)
and then try to estimate them jointly, for example joint MAP [16]:
(f̂, θ̂) = arg max_{(f, θ)} { p(f, θ | g, θ0) }
(5)
This Full Bayesian approach is illustrated as follows:
One may also first integrate out one of them, for example f to obtain
p(θ | g, θ0) = ∫ p(f, θ | g, θ0) df,
(6)
estimate θ, for example by
θ̂ = arg max_θ { p(θ | g, θ0) }
(7)

and then use it for the estimation of the other one using p(f | g, θ̂).

This approach (called sometimes type II maximum likelihood) is illustrated as follows:
However, very often this marginalization cannot be done analytically, and so the optimization for the estimation of θ cannot be achieved directly. In such cases, the expectation-maximization (EM) algorithm can be helpful [17]. Considering g as the incomplete data, f as the hidden variable, (g, f) as the complete data, and noting ln p(g|θ) the incomplete data log-likelihood and ln p(g, f|θ) the complete data log-likelihood, the classical EM algorithm writes:
E-step: Q(θ, θ̂^(k)) = E_{p(f | g, θ̂^(k))} { ln p(g, f | θ) }
M-step: θ̂^(k+1) = arg max_θ Q(θ, θ̂^(k))
(8)
The Bayesian version (Bayesian EM) is not very far and differs only by the introduction of p(θ):
E-step: Q(θ, θ̂^(k)) = E_{p(f | g, θ̂^(k))} { ln p(g, f | θ) + ln p(θ) }
M-step: θ̂^(k+1) = arg max_θ Q(θ, θ̂^(k))
(9)
This is illustrated as follows:

As we mentioned before, one of the main steps in the Bayesian approach is the prior modeling, which has the role of translating our prior knowledge about the unknown signal or image into a probability law. Sparsity is one such piece of prior knowledge. The main objective of this article is to survey the different possibilities for translating it.

1.3 Prior modeling

In this article, we propose different prior models for signals and images which can be used in a Bayesian inference approach for many inverse problems in signal and image processing where we want to infer sparse signals or images. The sparsity may be directly in the original space or in a transformed space (see Figures 1, 2, 3, and 4). In this article, we consider the sparsity directly in the original domain.
Figure 1

Sparsity: explicit sparse signals. The signal on the right is sparse, but its derivative (the signal on the left) is even sparser.

Figure 2

Sparsity: sparse signals in a transformed domain (Fourier or wavelet). First row: signals; second row: Fourier or wavelet transforms.

Figure 3

Sparsity: explicit sparse images. The images at the top are sparse. The images at the bottom are not sparse, but their Laplacians (the images at the top) are.

Figure 4

Sparsity: sparse images in a transformed domain (Fourier or wavelet). First row: images; second row: Fourier or wavelet transforms.

The prior models discussed are the following:
  • generalized Gaussian (GG) with Gaussian (G) and Laplace or double exponential (DE) as particular cases;

  • symmetric Weibull (W) with symmetric Rayleigh (R) and again the DE as particular cases;

  • Student-t (St) with Cauchy (C) as particular case;

  • Elastic net prior model;

  • generalized hyperbolic model;

  • Dirichlet and symmetric Dirichlet;

  • Mixture of two centered Gaussians (MoG2), one with a very small variance and one with a large variance;

  • Bernoulli-Gaussian (BG), also called Spike and slab;

  • Mixture of two Gammas (MoGamm);

  • Bernoulli-Gamma (BGamma);

  • Mixture of three Gaussians (MoG3), one centered at zero with a very small variance and two centered symmetrically on the positive and negative axes with large variances;

  • Mixture of one Gaussian and two Gammas (MoGGammas), and, more briefly, the case of

  • Bernoulli-Multinomial (BMult) or mixture of Dirichlet (MoD).

Some of these models are well known [12-14, 18-26], some others less so. In general, we can classify them into two categories: (i) simple non-Gaussian models with heavy tails and (ii) mixture models with hidden variables, which result in hierarchical models.

In Section 2, we give more details about sparsity and all these prior models which enforce it.

1.4 Bayesian computation

The second main step in the Bayesian approach is to do the computations. Depending on the prior model selected, the Bayesian computations needed are:

  • For simple prior models:
    • Simple optimization of p(f|θ, g) for the MAP,
    • Joint optimization of p(f, θ|g) for the joint MAP,
    • Generation of samples from the conditionals p(f|θ, g) and p(θ|f, g) for the MCMC Gibbs sampling methods,
    • Variational approximation (VA) of the joint p(f, θ|g) by a separable
      q ( f , θ g ) = q 1 ( f θ ̃ , g ) q 2 ( θ f ̃ , g )
      and then using them for estimation
  • For hierarchical prior models with hidden variables z:
    • Joint optimization p(f, z, θ|g) for joint MAP,
    • Generation of samples from the conditionals p(f|z, θ, g), p(θ|z, f, g) and p(z|f, θ, g) for the MCMC Gibbs sampling methods:
    • Variational approximation (VA) of the joint p(f, z, θ|g) by a separable
      q ( f , z , θ g ) = q 1 ( f z ̃ , θ ̃ , g ) q 2 ( z f ̃ , θ ̃ , g ) q 3 ( θ z ̃ , f ̃ , g )
      and then using them for estimation

The second main objective of this article is to discuss the relative complexities and performances of the algorithms obtained with the proposed prior laws.

The rest of the article is organized as follows:

In Section 2, we present in detail the proposed prior models and discuss their properties. For example, we will see that the Student-t model can be interpreted as an infinite mixture with a variance hidden variable, or that the BG model can be considered as the degenerate case of a MoG2 where one of the variances goes to zero. Also, we will examine the lesser-known models of MoG3 and MoGGammas, where the heavy tails are obtained by combining a centered Gaussian with two large-variance non-centered Gaussians or Gammas.

In Section 3, we examine the expressions of the posterior laws obtained with these priors and then discuss the complexity of the corresponding Bayesian computations. In particular, for the mixture models, we give details of the joint estimation of the signal and the hidden variables as well as the hyper parameters (the parameters of the mixtures and of the noise) for unsupervised cases.

In Section 4, we give more details on the variational Bayesian approximation method, first for the general case and then for the case of mixture laws and more specifically the case of the Student-t considered as a continuous mixture.

Finally, we present the main conclusions of this article in Section 5.

2 Prior models enforcing sparsity

First, as we mentioned, sparsity is a property which can be described either directly for the signal itself or after some transformation, for example on the derivative of the signal or, more generally, on the coefficients of the projection of the signal on any basis or any set of functions.

Different prior models have been used to enforce sparsity.

2.1 Generalized Gaussian (GG), Gaussian (G) and double exponentials (DE) models

This is the simplest and the most used model (see for example, [27]). Its expression is:
p(f | γ, β) = ∏_j GG(f_j | γ, β) ∝ exp{ -γ Σ_j |f_j|^β }
(10)
where
GG(f_j | γ, β) = ( β γ^{1/β} / (2 Γ(1/β)) ) exp{ -γ |f_j|^β }.
(11)

Two particular cases are of importance:

  • β = 2 (Gaussian):
    p(f | γ) = ∏_j N(f_j | 0, 1/(2γ)) ∝ exp{ -γ Σ_j f_j^2 } = exp{ -γ ||f||_2^2 }
    (12)
  • β = 1 (double exponential or Laplace):
    p(f | γ) = ∏_j DE(f_j | γ) ∝ exp{ -γ Σ_j |f_j| } = exp{ -γ ||f||_1 }
    (13)
The general shape of these priors is shown in Figure 5, where the cases β = 1 and 0 < β < 1, which are of great interest for sparsity enforcing, are compared to the Gaussian case β = 2.
Figure 5

Generalized Gaussian family. The probability density function p(x) is shown on the left and -ln p(x) on the right.

2.2 Symmetric Weibull (W) and symmetric Rayleigh (R) models

The second model we consider is the symmetric Weibull probability density function (pdf):
p(f | γ, β) = ∏_j W(f_j | γ, β) ∝ exp{ -Σ_j [ γ |f_j|^β - (β - 1) log |f_j| ] }
(14)
where
W(f_j | γ, β) = c |f_j|^{β - 1} exp{ -γ |f_j|^β }
(15)
where γ > 0 and β > 0. The particular case β = 1 gives the double exponential, and β = 2 gives the symmetric Rayleigh distribution:
p(f | γ) = ∏_j R(f_j | γ) ∝ exp{ -Σ_j [ γ f_j^2 - log |f_j| ] }
(16)
The cases where 0 < β < 1 are of great interest for sparsity enforcing. This family of models is illustrated in Figure 6.
Figure 6

Symmetric Weibull family. The probability density function p(x) is shown on the left and -ln p(x) on the right.

2.3 Student-t (St) and Cauchy (C) models

The next simple model we consider is the Student-t model:
p(f | ν) = ∏_j St(f_j | ν) ∝ exp{ -((ν + 1)/2) Σ_j log(1 + f_j^2/ν) }
(17)
where
St(f_j | ν) = (1/√(πν)) ( Γ((ν + 1)/2) / Γ(ν/2) ) (1 + f_j^2/ν)^{-(ν + 1)/2}
(18)
Knowing that
St(f_j | ν) = ∫_0^∞ N(f_j | 0, 1/τ_j) G(τ_j | ν/2, ν/2) dτ_j
(19)
we can write this model via the positive hidden variables τ_j:
p(f | τ) = ∏_j p(f_j | τ_j) = ∏_j N(f_j | 0, 1/τ_j) ∝ exp{ -(1/2) Σ_j τ_j f_j^2 }
p(τ_j | a, b) = G(τ_j | a, b) ∝ τ_j^{a - 1} exp{ -b τ_j }, with a = b = ν/2
(20)
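This scale-mixture representation is easy to check by simulation. The following sketch (plain NumPy; the value ν = 5 and the sample size are arbitrary illustrative choices) draws f_j hierarchically and compares the empirical variance with the Student-t variance ν/(ν - 2):

```python
import numpy as np

rng = np.random.default_rng(2)
nu, n = 5.0, 200_000

# tau_j ~ Gamma(shape = nu/2, rate = nu/2); NumPy uses scale = 1/rate
tau = rng.gamma(shape=nu / 2, scale=2.0 / nu, size=n)
# f_j | tau_j ~ N(0, 1/tau_j)
f = rng.normal(0.0, 1.0 / np.sqrt(tau))

# Marginally, f_j follows a Student-t with nu degrees of freedom,
# whose variance is nu / (nu - 2) for nu > 2
print(f.var(), nu / (nu - 2))
```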
The Cauchy model is obtained when ν = 1:
p(f) = ∏_j C(f_j) ∝ exp{ -Σ_j log(1 + f_j^2) }
(21)
This family of models is illustrated in Figure 7.
Figure 7

Student-t and Cauchy family. The probability density function p(x) is shown on the left and -ln p(x) on the right.

2.4 Elastic Net (EN) prior model

A prior model inspired by the elastic net regression literature [28] is:
p(f | γ1, γ2) = ∏_j EN(f_j | γ1, γ2) ∝ exp{ -Σ_j ( γ1 |f_j| + γ2 f_j^2 ) }
(22)
where
EN(f_j | γ1, γ2) ∝ exp{ -( γ1 |f_j| + γ2 f_j^2 ) }
(23)
which is the product of a Gaussian and a double exponential pdf. This family of models is illustrated in Figure 8.
Figure 8

Elastic net family. The probability density function p(x) is shown on the left and -ln p(x) on the right.

2.5 Generalized hyperbolic (GH) prior model

Another general prior model which can be used is:
p(f | δ, ν, α, β) ∝ ∏_j (δ^2 + f_j^2)^{(ν - 1/2)/2} exp{ β f_j } K_{ν - 1/2}( α √(δ^2 + f_j^2) )
(24)
where K_{ν - 1/2} is the modified Bessel function of the second kind of order ν - 1/2. This family of models is illustrated in Figure 9.
Figure 9

Generalized hyperbolic family. The probability density function p(x) is shown on the left and -ln p(x) on the right.

2.6 Dirichlet (D) and symmetric Dirichlet (SD) models

When the f_j are positive and sum to one, we can use the Dirichlet model
D(f | α) ∝ ∏_j f_j^{α_j - 1}, with f_j > 0, Σ_j f_j = 1
(25)
where α= {α1, ..., α N } with α j > 0. The proportionality constant is
B(α) = ∏_j Γ(α_j) / Γ( Σ_j α_j )
(26)

It is noted that the support of this distribution is [0, 1]^N and that ||f||_1 = Σ_j f_j = 1.

It is also interesting to note that any point in the support of the Dirichlet distribution is itself a probability distribution, specifically an N-dimensional discrete distribution, and that the support of an N-dimensional Dirichlet distribution is the open standard (N - 1)-simplex, a generalization of a triangle embedded in the next-higher dimension.

A very common special case is the symmetric Dirichlet (SD) distribution, where all the elements of the parameter vector α have the same value α, called the concentration parameter:
D(f | α) ∝ ∏_j f_j^{α - 1}, with f_j > 0, Σ_j f_j = 1
(27)
When α = 1, the symmetric Dirichlet distribution is equivalent to a uniform distribution over the open standard (N - 1)-simplex, i.e., it is uniform over all points in its support. Values α > 1 prefer dense, evenly distributed variants, i.e., all the probabilities f_j returned are similar to each other. Values α < 1 prefer sparse distributions, i.e., most of the probabilities f_j returned will be close to 0, and the vast majority of the mass will be concentrated in a few of them. This is the case in which we are interested. This family of models is illustrated in Figure 10.
Figure 10

Dirichlet family. The probability density function p(x) is shown on the left and -ln p(x) on the right.
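The effect of the concentration parameter can be seen directly by sampling. In this sketch (NumPy; the dimension N = 50 and the α values are arbitrary illustrative choices), the α < 1 draw concentrates almost all of its unit mass on a few components, while the α > 1 draw spreads it almost evenly:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 50

f_sparse = rng.dirichlet([0.1] * N)   # alpha < 1: sparse-looking draw
f_dense = rng.dirichlet([10.0] * N)   # alpha > 1: dense, evenly distributed draw

def top5_mass(f):
    """Fraction of the total (unit) mass carried by the 5 largest components."""
    return np.sort(f)[-5:].sum()

print(top5_mass(f_sparse), top5_mass(f_dense))
```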

2.7 Mixture of two Gaussians (MoG2) model

The mixture models are also very commonly used as prior models. In particular the mixture of two Gaussians (MoG2) model:
p(f | λ, v1, v0) = ∏_j ( λ N(f_j | 0, v1) + (1 - λ) N(f_j | 0, v0) )
(28)
which can also be expressed through the binary valued hidden variables z_j ∈ {0, 1}:
p(f | z) = ∏_j p(f_j | z_j) = ∏_j N(f_j | 0, v_{z_j}) ∝ exp{ -(1/2) Σ_j f_j^2 / v_{z_j} }
P(z_j = 1) = λ, P(z_j = 0) = 1 - λ
(29)
In general v1 >> v0, and λ measures the sparsity (0 < λ << 1). This family of models is illustrated in Figure 11.
Figure 11

Mixture of two Gaussians family. The probability density function p(x) is shown on the left and -ln p(x) on the right.
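Sampling from this prior through the hidden variables z_j makes the sparsity mechanism explicit; a minimal sketch (NumPy, with arbitrary illustrative values of λ, v1 and v0):

```python
import numpy as np

rng = np.random.default_rng(4)
n, lam, v1, v0 = 5000, 0.05, 10.0, 1e-4

# z_j ~ Bernoulli(lam), then f_j | z_j ~ N(0, v_{z_j})
z = rng.random(n) < lam
f = np.where(z, rng.normal(0.0, np.sqrt(v1), n), rng.normal(0.0, np.sqrt(v0), n))

# Only a fraction of about lam of the samples is significantly non-zero
print(np.mean(np.abs(f) > 0.1))
```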

2.8 Bernoulli-Gaussian (BG) model

The Bernoulli-Gaussian model can be considered as the degenerate particular case of the MoG2 model with v0 = 0:
p(f | λ, v) = ∏_j p(f_j) = ∏_j ( λ N(f_j | 0, v) + (1 - λ) δ(f_j) )
(30)
which can also be written as
p(f | z) = ∏_j p(f_j | z_j) = ∏_j [ N(f_j | 0, v) ]^{z_j} [ δ(f_j) ]^{1 - z_j}
P(z_j = 1) = λ, P(z_j = 0) = 1 - λ
(31)
This model has also been called spike and slab. This family of models is illustrated in Figure 12.
Figure 12

Bernoulli-Gaussian family. The probability density function p(x) is shown on the left and -ln p(x) on the right.

2.9 Mixture of three Gaussians (MoG3) model

Another mixture model is the mixture of three Gaussians, one centered at zero and two placed symmetrically:
p(f | λ, v0, v+1, v-1, β) = ∏_j [ (1 - λ) N(f_j | 0, v0) + (λ/2) N(f_j | +β, v+1) + (λ/2) N(f_j | -β, v-1) ]
(32)
which can also be expressed through the ternary valued hidden variables z_j ∈ {-1, 0, +1}:
p(f | z) = ∏_j p(f_j | z_j) = ∏_j N(f_j | z_j β, v_{z_j})
P(z_j = +1) = λ/2, P(z_j = -1) = λ/2, P(z_j = 0) = 1 - λ.
(33)
In general v+1 = v-1 = v >> v0, and λ measures the sparsity (0 < λ << 1). This family of models is illustrated in Figure 13.
Figure 13

Mixture of three Gaussians family. The probability density function p(x) is shown on the left and -ln p(x) on the right.

2.10 Mixture of one Gaussian and two Gammas (MoGGammas) model

Another mixture model is the mixture of one central Gaussian and two symmetric Gammas:
p(f | λ, v0, α, β) = ∏_j [ (1 - λ) N(f_j | 0, v0) + (λ/2) G(f_j | α, β) + (λ/2) G(-f_j | α, β) ]
(34)
which can also be expressed through the ternary valued hidden variables z_j ∈ {-1, 0, +1}:
p(f | z) = ∏_j p(f_j | z_j) = ∏_j [ N(f_j | 0, v0) ]^{δ(z_j)} [ G(f_j | α, β) ]^{δ(z_j - 1)} [ G(-f_j | α, β) ]^{δ(z_j + 1)}
P(z_j = +1) = λ/2, P(z_j = -1) = λ/2, P(z_j = 0) = 1 - λ.
(35)
This family of models is illustrated in Figure 14.
Figure 14

Mixture of one Gaussian and two Gammas family. The probability density function p(x) is shown on the left and -ln p(x) on the right.

2.11 Bernoulli-Gamma (BGamma) model

As in the BG model, when we want to enforce both sparsity and positivity, we can use the BGamma model:
p(f | λ, α, β) = ∏_j [ λ G(f_j | α, β) + (1 - λ) δ(f_j) ]
(36)
or
p(f | z) = ∏_j p(f_j | z_j) = ∏_j [ G(f_j | α, β) ]^{z_j} [ δ(f_j) ]^{1 - z_j}
P(z_j = 1) = λ, P(z_j = 0) = 1 - λ
(37)
A particular case of this model is the Bernoulli-exponential (BExponential) model, which is obtained when α = 1. These families of models are illustrated in Figures 15 and 16.
Figure 15

Bernoulli-Gamma family. The probability density function p(x) is shown on the left and -ln p(x) on the right.

Figure 16

Mixture of two Gammas family. The probability density function p(x) is shown on the left and -ln p(x) on the right.

2.12 Mixture of Dirichlet (MoD) model

The mixture of Dirichlet model is
    p(f | λ, α1, α2) = λ D(f | α1) + (1 - λ) D(f | α2)
    (38)
where
D(f | α) ∝ ∏_j f_j^{α - 1}, with f_j > 0, Σ_j f_j = 1
(39)

is the symmetric Dirichlet distribution. We need to choose α1 > 1 for the dense part and 0 < α2 < 1 for the sparse part.

2.13 Bernoulli-multinomial (BMultinomial) model

As in the BG or BGamma models, when we know that the signal is sparse and can only take one of the K discrete values {a_1, ..., a_K}, we can use the BMultinomial model:
p(f | λ, a, α) = ∏_j [ λ Mult(f_j | a, α) + (1 - λ) δ(f_j) ]
(40)
where a = {a_1, ..., a_K} and α = {α_1, ..., α_K} with Σ_k α_k = 1, and Mult(f_j | a, α) denotes the discrete distribution which assigns probability α_k to the value a_k:
P(f_j = a_k) = α_k, k = 1, ..., K,
or
p(f | z) = ∏_j p(f_j | z_j) = ∏_j [ Mult(f_j | a, α) ]^{z_j} [ δ(f_j) ]^{1 - z_j}
P(z_j = 1) = λ, P(z_j = 0) = 1 - λ
(41)

3 Bayesian inference with sparsity enforcing priors

The priors proposed can be used in a Bayesian approach to infer f given the observed data g through the posterior law given in Equation (3). First, let us assume the error ε to be centered, Gaussian and white: ε ~ N(ε | 0, v_ε I). Then, using the forward model (1), we have
p(g | f) = N(g | Hf, v_ε I) ∝ exp{ -(1/(2 v_ε)) ||g - Hf||^2 }
(42)

Now, we consider different priors.

3.1 Simple prior models

Given p(g|f) and any simple prior law p(f), the posterior law is written:
p(f | g) ∝ p(g | f) p(f) ∝ exp{ -J(f) }
(43)
with
J(f) = (1/(2 v_ε)) ||g - Hf||^2 + Ω(f)
(44)
where Ω(f) = -ln p(f), so the maximum a posteriori (MAP) solution is expressed as the minimizer of this criterion, which has two parts: the first part is due to the likelihood and the second part is due to the prior.
Thus, depending on the choice of the prior, we obtain different expressions for Ω(f). For example, for the GG model of (10) we get
Ω(f) = γ Σ_j |f_j|^β.
(45)
For the symmetric Weibull model (14) we get
Ω(f) = Σ_j [ γ |f_j|^β - (β - 1) log |f_j| ].
(46)
For the Student-t model (17) we get
Ω(f) = ((ν + 1)/2) Σ_j log(1 + f_j^2/ν).
(47)
For the elastic net model we get
Ω(f) = Σ_j ( γ1 |f_j| + γ2 f_j^2 )
(48)
and for the Dirichlet model we get
Ω(f) = -(α - 1) Σ_j log f_j, with f_j > 0, Σ_j f_j = 1.
(49)
For each of these cases, we may discuss the unimodality and convexity of the criterion J(f), which depend mainly on its Hessian
∇^2 J(f) = [ ∂^2 J(f) / ∂f_i ∂f_j ] = (1/v_ε) H'H + diag[ ∂^2 Ω(f) / ∂f_j^2 ]
(50)

We may look at each case to examine the range of the parameters for which this Hessian matrix is positive definite.

The optimization is done iteratively, with updates of the form f^(k+1) = f^(k) + α^(k) δf^(k).
The update operation can be additive, multiplicative or more complex. The steps α^(k) can be fixed or computed adaptively at each iteration (steepest descent, for example), and δf^(k) can be, for example, proportional to the gradient of J(f).
We may also consider estimating some of these parameters by assigning them appropriate priors, expressing the joint posterior p(f, θ|g, θ0) as given in Equation (4), and then trying to estimate them jointly, for example by joint MAP or by alternate optimization.
We may also want to explore this joint posterior by generating samples from it, for example through a Gibbs sampling scheme alternating draws from the conditionals p(f|θ, g) and p(θ|f, g).

When a great number of samples are thus generated, we may compute their means, variances or any other statistics about them.

Finally, we may try to approximate this joint posterior by a simpler one, for example by a separable q(f, θ) = q1(f) q2(θ), using the variational approximation (VA). The main idea and the basic steps to achieve this are detailed in the following section.
To illustrate the differences, we may consider the simple case of a linear forward model and Gaussian priors:
p(g | f, v_ε) = N(g | Hf, v_ε I), p(f | v_f) = N(f | 0, v_f I)
(51)
In this case, if we know θ= (v ϵ , v f ), then
p(f | g, v_ε, v_f) = N(f | μ̂, Σ̂) with: μ̂ = [H'H + λI]^{-1} H'g, Σ̂ = v_ε [H'H + λI]^{-1}
(52)
with λ = v_ε / v_f. So we have f̂ = μ̂, which can be computed by optimizing J(f) = ||g - Hf||^2 + λ ||f||^2, for example with a gradient-based algorithm.
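Such a gradient-based iteration can be sketched as follows (NumPy; the problem sizes, the fixed step chosen from the Lipschitz constant of the gradient, and the iteration count are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(5)
m, n, lam = 30, 20, 0.5
H = rng.standard_normal((m, n))
g = rng.standard_normal(m)

# Closed-form minimizer of J(f) = ||g - Hf||^2 + lam ||f||^2, for reference
f_star = np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ g)

def grad_J(f):
    """Gradient of J(f) = ||g - Hf||^2 + lam ||f||^2."""
    return 2.0 * (H.T @ (H @ f - g) + lam * f)

# Gradient descent: f^(k+1) = f^(k) - alpha * grad J(f^(k))
alpha = 1.0 / (2.0 * (np.linalg.norm(H, 2) ** 2 + lam))  # safe fixed step
f = np.zeros(n)
for _ in range(5000):
    f = f - alpha * grad_J(f)
```

After enough iterations, the iterate matches the closed-form solution.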
Now, putting inverse Gamma priors on v_ε and v_f, or equivalently Gamma priors on τ_ε = 1/v_ε and τ_f = 1/v_f:
p(τ_ε | α_ε0, β_ε0) = G(τ_ε | α_ε0, β_ε0), p(τ_f | α_f0, β_f0) = G(τ_f | α_f0, β_f0)
(53)
we have
p(τ_ε | f, g, α_ε0, β_ε0) = G(τ_ε | α̂_ε, β̂_ε), p(τ_f | f, α_f0, β_f0) = G(τ_f | α̂_f, β̂_f)
(54)
with
α̂_ε = α_ε0 + 1/2, β̂_ε = β_ε0 + ||g - Hf||^2 / 2, τ̂_ε = α̂_ε / β̂_ε = (2 α_ε0 + 1) / (2 β_ε0 + ||g - Hf||^2)
α̂_f = α_f0 + 1/2, β̂_f = β_f0 + ||f||^2 / 2, τ̂_f = α̂_f / β̂_f = (2 α_f0 + 1) / (2 β_f0 + ||f||^2)
(55)
and λ̂ = τ̂_f / τ̂_ε. The JMAP estimate is then obtained by alternating the update of f in (52) with the updates of τ̂_ε and τ̂_f in (55).
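This alternating algorithm can be sketched numerically as follows (NumPy; the simulated forward model and the hyper-hyper-parameter values α_ε0, β_ε0, α_f0, β_f0 are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(6)
m, n = 40, 30
H = rng.standard_normal((m, n))
f_true = rng.normal(0.0, 1.0, n)
g = H @ f_true + 0.1 * rng.standard_normal(m)

# hyper-hyper-parameters of the Gamma priors on tau_eps and tau_f
a_e0, b_e0, a_f0, b_f0 = 1.0, 1e-3, 1.0, 1e-3

tau_e, tau_f = 1.0, 1.0
for _ in range(50):
    lam = tau_f / tau_e
    # update of f, as in Equation (52)
    f = np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ g)
    # updates of the precisions, as in Equation (55)
    tau_e = (2 * a_e0 + 1) / (2 * b_e0 + np.sum((g - H @ f) ** 2))
    tau_f = (2 * a_f0 + 1) / (2 * b_f0 + np.sum(f ** 2))
```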
The Gibbs sampling algorithm alternates draws of f, τ_ε and τ_f from their conditional laws.
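A sketch of this Gibbs sampler (NumPy; here we use the standard conjugate Gamma conditionals, whose shape parameters are augmented by m/2 and n/2 for data of dimension m and unknowns of dimension n; all numerical values are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(7)
m, n = 40, 30
H = rng.standard_normal((m, n))
f_true = rng.normal(0.0, 1.0, n)
g = H @ f_true + 0.1 * rng.standard_normal(m)

a_e0, b_e0, a_f0, b_f0 = 1.0, 1e-3, 1.0, 1e-3
tau_e, tau_f = 1.0, 1.0
samples = []
for it in range(600):
    # f | tau_e, tau_f, g ~ N(mu, Sigma)
    Sigma = np.linalg.inv(tau_e * H.T @ H + tau_f * np.eye(n))
    mu = tau_e * Sigma @ H.T @ g
    f = rng.multivariate_normal(mu, Sigma)
    # tau_e | f, g and tau_f | f are Gamma (NumPy uses scale = 1/rate)
    tau_e = rng.gamma(a_e0 + m / 2, 1.0 / (b_e0 + 0.5 * np.sum((g - H @ f) ** 2)))
    tau_f = rng.gamma(a_f0 + n / 2, 1.0 / (b_f0 + 0.5 * np.sum(f ** 2)))
    if it >= 100:                      # discard burn-in samples
        samples.append(f)

f_pm = np.mean(samples, axis=0)        # posterior mean (PM) estimate
```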
The VBA algorithm becomes
q(f) = N(f | μ̃, Σ̃), Σ̃ = [H'H + λ̃ I]^{-1}, μ̃ = Σ̃ H'g
q(τ_ε) = G(τ_ε | α̃_ε, β̃_ε), α̃_ε = α_ε0 + 1/2, β̃_ε = β_ε0 + ||g - H<f>||^2 / 2
q(τ_f) = G(τ_f | α̃_f, β̃_f), α̃_f = α_f0 + 1/2, β̃_f = β_f0 + (1/2) <||f||^2> = β_f0 + (1/2) Σ_j <f_j^2>
<f> = μ̃, <f_j^2> = μ̃_j^2 + [Σ̃]_jj
(56)
with λ̃ = τ̃_f / τ̃_ε, where τ̃_ε = α̃_ε / β̃_ε and τ̃_f = α̃_f / β̃_f.

We have recently implemented these algorithms for different applications, such as synthetic aperture radar (SAR) imaging [29], ...

3.2 Mixture models

For the mixture models, and in general for the models which can be expressed via hidden variables, we want to estimate jointly the original unknowns f and the hidden variables: τ in the Student-t or Cauchy model, z in the MoG2, BG or BGamma models, and z in the MoG3 or MoGGammas models. Let us examine these in a little more detail.

3.3 Student-t and Cauchy models

In this case the joint prior law can be written as:
p(f, τ) = ∏_j p(f_j | τ_j) p(τ_j) = ∏_j N(f_j | 0, 1/τ_j) G(τ_j | a, b) ∝ exp{ -Σ_j [ (1/2) τ_j f_j^2 - a ln τ_j + b τ_j ] }, with a = b = ν/2
(57)
such that
p(f, τ | g) ∝ p(g | f) p(f, τ) ∝ exp{ -J(f, τ) }
(58)
where
J(f, τ) = (1/(2 v_ε)) ||g - Hf||^2 + Σ_j [ (1/2) τ_j f_j^2 - a ln τ_j + b τ_j ]
(59)
Joint optimization of this criterion, alternately with respect to f (with fixed τ)
f̂ = arg min_f { J(f, τ) } = arg min_f { (1/(2 v_ε)) ||g - Hf||^2 + (1/2) Σ_j τ_j f_j^2 }
(60)
and with respect to τ (with fixed f)
τ̂ = arg min_τ { J(f, τ) } = arg min_τ { Σ_j [ (1/2) τ_j f_j^2 - a ln τ_j + b τ_j ] }
(61)
results in the following iterative algorithm:
f̂ = [H'H + v_ε D(τ̂)]^{-1} H'g, τ̂_j = φ(f̂_j) = a / (b + f̂_j^2 / 2), D(τ̂) = diag[ τ̂_j, j = 1, ..., n ]
(62)

Note that τ_j is the inverse of a variance, with 1/τ̂_j = (b + f̂_j^2/2) / a. We can interpret this as an iterative quadratic regularization inversion followed by the estimation of the variances 1/τ̂_j, which are used in the next iteration to define the matrix D(τ̂).
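A numerical sketch of this iterative algorithm for the Cauchy case ν = 1 (NumPy; the simulated problem and all parameter values are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(8)
m, n = 100, 50
H = rng.standard_normal((m, n))
f_true = np.zeros(n)
f_true[[5, 20, 40]] = [3.0, -2.0, 1.5]
v_eps = 0.01
g = H @ f_true + np.sqrt(v_eps) * rng.standard_normal(m)

nu = 1.0                 # Cauchy case
a = b = nu / 2
tau = np.ones(n)
for _ in range(100):
    # quadratic inversion step with the current precisions tau_j
    f = np.linalg.solve(H.T @ H + v_eps * np.diag(tau), H.T @ g)
    # variance estimation step: tau_j = a / (b + f_j^2 / 2)
    tau = a / (b + f ** 2 / 2)

# at convergence, f satisfies the fixed-point equation of the algorithm
f_check = np.linalg.solve(H.T @ H + v_eps * np.diag(tau), H.T @ g)
```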

Here too, we may study the conditions under which the joint criterion is unimodal and its alternate optimization converges to its unique solution.

We may also consider a Gibbs sampling scheme
f ~ p(f | τ, g) ∝ p(g | f) p(f | τ) = N(f | f̂, Σ̂)
τ ~ p(τ | f, g) ∝ p(f | τ) p(τ) = ∏_j G(τ_j | α̂_j, β̂_j)
(63)
where
Σ̂ = [H'H + v_ε D(τ)]^{-1}, f̂ = Σ̂ H'g
(64)
and
α̂_j = a + 1/2 = (ν + 1)/2, β̂_j = b + f̂_j^2 / 2 = (ν + f̂_j^2) / 2
(65)
For the VBA, we have
p(g | f, v_ε) = N(g | Hf, v_ε I), τ_ε = 1/v_ε, p(τ_ε) = G(τ_ε | α_ε0, β_ε0)
p(f | v) = ∏_j p(f_j | v_j) = ∏_j N(f_j | 0, v_j) = N(f | 0, V), V = diag[v], τ_j = 1/v_j
p(τ) = ∏_j G(τ_j | α_0, β_0)
(66)
q̃(f) = N(f | μ̃, Σ̃), μ̃ = τ̃_ε Σ̃ H'g, Σ̃ = ( τ̃_ε H'H + diag[τ̃] )^{-1}
(67)
q̃(τ_ε) = G(τ_ε | α̃_ε, β̃_ε), α̃_ε = α_ε0 + (n + 1)/2, β̃_ε = β_ε0 + <||g - Hf||^2>/2, τ̃_ε = α̃_ε / β̃_ε
q̃(τ_j) = G(τ_j | α̃_j, β̃_j), α̃_j = α_0 + 1/2, β̃_j = β_0 + <f_j^2>/2, ṽ_j = β̃_j / α̃_j
(68)

3.4 Mixture of two Gaussians (MoG2) model

In this case, following the same arguments, we obtain:
p(f, z | g) ∝ p(g | f) p(f, z) ∝ exp{ -J(f, z) }
(69)
where
J(f, z) = (1/(2 v_ε)) ||g - Hf||^2 + Σ_j f_j^2 / (2 v_{z_j}) - Σ_j [ z_j ln λ + (1 - z_j) ln(1 - λ) ]
(70)
Again, in this case, the optimization of this criterion, alternately with respect to f and z, results in the following iterative algorithm:
f̂ = [H'H + v_ε D(ẑ)]^{-1} H'g
ẑ_j = φ(f̂_j) = 1 if f̂_j^2 ≥ (v1 - v0) ln((1 - λ)/λ), and 0 otherwise
D(ẑ) = diag[ 1/v_{ẑ_j}, j = 1, ..., n ]
(71)
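A numerical sketch of this alternating algorithm (NumPy; the simulated problem and the values of λ, v1, v0 are arbitrary illustrative choices, and the classification threshold used in the code is the one obtained by directly comparing the two Gaussian log-densities, a slightly more detailed form than the simplified threshold displayed above):

```python
import numpy as np

rng = np.random.default_rng(9)
m, n = 100, 50
H = rng.standard_normal((m, n))
f_true = np.zeros(n)
f_true[[7, 21, 40]] = [2.0, -1.5, 1.0]
v_eps = 1e-4
g = H @ f_true + np.sqrt(v_eps) * rng.standard_normal(m)

lam, v1, v0 = 0.1, 10.0, 1e-4
# MAP classification threshold on f_j^2, from comparing the two Gaussian
# log-densities lam*N(f_j|0,v1) and (1-lam)*N(f_j|0,v0)
thresh = (np.log((1 - lam) / lam) + 0.5 * np.log(v1 / v0)) / (0.5 * (1 / v0 - 1 / v1))

z = np.ones(n, dtype=int)              # start with all components "active"
for _ in range(30):
    v_z = np.where(z == 1, v1, v0)
    # quadratic inversion step with the current variances v_{z_j}
    f = np.linalg.solve(H.T @ H + v_eps * np.diag(1.0 / v_z), H.T @ g)
    # classification step: z_j = 1 when f_j^2 exceeds the threshold
    z = (f ** 2 >= thresh).astype(int)

print(np.flatnonzero(z))
```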
Here too, we may also consider a Gibbs sampling scheme
f ~ p(f | z, g) ∝ p(g | f) p(f | z) = N(f | f̂, Σ̂)
z ~ p(z | f, g) ∝ p(f | z) p(z) = ∏_j P(z_j | f_j)
(72)
where
Σ̂ = [H'H + v_ε D(z)]^{-1}, f̂ = Σ̂ H'g
(73)
and
P(z_j = 1 | f_j) = 1 if f_j^2 ≥ (v1 - v0) ln((1 - λ)/λ)
P(z_j = 0 | f_j) = 1 if f_j^2 < (v1 - v0) ln((1 - λ)/λ)
(74)

3.5 BG model

For the case of BG we have to be more careful, because the joint probability laws are degenerate. Two approaches are then possible:

i) Considering them as the particular case of the MoG models where the variance v0 is fixed to a small value or reduced gradually during the iterations.

ii) Trying first to integrate out f from the expression of p(f, z|g) to obtain p(z|g), optimizing it with respect to z (detection step), and then using the result for the estimation step.

To go further into the details of the second approach, we may remark that, for a given z, the expression of p(f, z|g) as a function of f is Gaussian, so f can easily be integrated out and we obtain:
p(z | g) ∝ p(g | z) p(z) = N(g | 0, H (v diag[z_j, j = 1, ..., n]) H' + v_ε I) λ^{Σ_j z_j} (1 - λ)^{Σ_j (1 - z_j)}
(75)
Now, writing the expression of L(z) = -ln p(z|g) and keeping only the terms depending on z, we obtain:
L(z) = (1/2) g' B^{-1}(z) g + (1/2) ln det B(z) - ( Σ_j z_j ) ln( λ / (1 - λ) )
(76)

where B(z) = H (v diag[z_j, j = 1, ..., n]) H′ + v_ε I. We see the complexity of this expression, which requires the inversion of the matrix B; its optimization is a combinatorial problem which, done exhaustively, requires evaluating this expression 2^n times.

However, we may also remark that once ẑ is obtained, the estimation of f is easy. We have:
f̂ = v diag[ẑ] H′ B⁻¹(ẑ) g
(77)

which again requires the inversion of the matrix B.

Since the exact computation of ẑ and f̂ is often too costly, one may try to obtain approximate solutions. Many approximations have been proposed. A good overview of these methods can be found in [30, Chap. 5] and also in [31, 32].
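For very small n, the exact detection step can be written down directly. The sketch below, with our own naming and structure, makes the 2^n cost of the exhaustive search over supports explicit:

```python
import itertools
import numpy as np

def bg_exhaustive(g, H, v, v_eps, lam):
    """Brute-force detection/estimation sketch for the BG model
    (Equations (75)-(77)); names and structure are ours.

    Evaluates -ln p(z | g) for all 2**n supports, which is exactly the
    combinatorial cost mentioned in the text, so only tiny n are feasible.
    """
    m, n = H.shape
    best_z, best_J = None, np.inf
    for z_tuple in itertools.product([0, 1], repeat=n):
        z = np.array(z_tuple, dtype=float)
        B = H @ (v * np.diag(z)) @ H.T + v_eps * np.eye(m)
        J = (0.5 * g @ np.linalg.solve(B, g)
             + 0.5 * np.linalg.slogdet(B)[1]
             - z.sum() * np.log(lam / (1 - lam)))
        if J < best_J:
            best_J, best_z = J, z
    # estimation step: posterior mean of f given the detected support
    B = H @ (v * np.diag(best_z)) @ H.T + v_eps * np.eye(m)
    f_hat = v * np.diag(best_z) @ H.T @ np.linalg.solve(B, g)
    return best_z, f_hat
```

Doubling n doubles nothing but squares the work, which is why the approximate methods cited above are needed in practice.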

3.6 BGamma and MoGGammas model

In these cases, it is no longer possible to integrate out f analytically as it was with the Gaussians. One strategy here is to use MCMC methods to generate samples from the joint posterior. The second approach is to approximate the joint posterior by a simpler one, for example by a distribution separable in f and the hidden variables z in the BGamma or MoGGammas cases. Very often the computations can then be done analytically. However, it may happen that, even after these separable approximations, we still need to use MCMC methods for some of the variables. A detailed explanation of these general methods is beyond the scope of this article; see [30, 33, 34]. Here, we just give the details for the case of the Gaussian mixtures (MoG2 or MoG3).

4 Variational Bayesian approximation for the case of mixture laws

To start, and to be complete in proposing an unsupervised method, we also include the estimation of the parameters θ and write the joint posterior law of all the unknowns:
p ( f , z , θ g ) p ( g f , θ ) p ( f z , θ ) p ( z θ ) p ( θ )
(78)
which can also be written as
p(f, z, θ | g) = p(f | z, θ; g) p(z | θ; g) p(θ | g)
(79)
where
p ( f z , θ ; g ) = p ( g f , θ ) p ( f z , θ ) / p ( g z , θ )
(80)
with
p ( g z , θ ) = p ( g f , θ ) p ( f z , θ ) d f
and
p ( z θ ; g ) = p ( g z , θ ) p ( z θ ) / p ( g θ )
(81)
with
p ( g θ ) = p ( g z , θ ) p ( z θ ) d z
or
p ( g θ ) = z p ( g z , θ ) p ( z θ )
when z are discrete valued, and finally
p ( θ g ) = p ( g θ ) p ( θ ) / p ( g )
(82)
with
p ( g ) = p ( g θ ) p ( θ ) d θ
One can also write:
p ( z θ , g ) = p ( f , z θ , g ) d f
(83)
and
p(θ | g) = ∬ p(f, z, θ | g) df dz
(84)
or
p(θ | g) = Σ_z ∫ p(f, z, θ | g) df
(85)

when z are discrete valued.

We see that the first term
p ( f z , θ , g ) p ( g f , θ ) p ( f z , θ )
(86)

will be easy to handle because it is the product of two Gaussians, and is thus itself a multivariate Gaussian. But the other two are not.

The main idea behind the VBA is to approximate the joint posterior p(f, z, θ|g) by a separable one, for example
q ( f , z , θ g ) = q 1 ( f g ) q 2 ( z g ) q 3 ( θ g )
(87)
where the expression of q(f, z, θ | g) is obtained by minimizing the Kullback-Leibler divergence
KL(q : p) = ∫ q ln(q/p) = ⟨ln(q/p)⟩_q
(88)
It is then easy to show that
KL(q : p) = ln p(g) − F(q)
(89)
where p(g) is the marginal likelihood (evidence) of the model
p ( g ) = p ( f , z , θ , g ) d f d z d θ
(90)
with
p ( f , z , θ , g ) = p ( g f , θ ) p ( f z , θ ) p ( z θ ) p ( θ )
(91)
and F(q) is the free energy associated with q, defined as
F(q) = ⟨ln [ p(f, z, θ, g) / q(f, z, θ) ]⟩_q
(92)

So, for a given model, minimizing KL(q : p) is equivalent to maximizing F(q), and, when optimized, F(q*) gives a lower bound for ln p(g).
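This identity is easy to verify numerically on a toy discrete model; the joint probabilities below are arbitrary illustrative numbers, not taken from the article:

```python
import numpy as np

# Discrete toy check of Equation (89): KL(q : p) = ln p(g) - F(q), hence
# F(q) <= ln p(g), with equality when q is the exact posterior.
p_joint = np.array([0.30, 0.10, 0.20, 0.15])   # p(x, g) for 4 hidden states, g fixed
p_g = p_joint.sum()                            # evidence p(g)
posterior = p_joint / p_g                      # exact p(x | g)

def free_energy(q):
    # F(q) = <ln p(x, g) - ln q(x)>_q
    return float(np.sum(q * np.log(p_joint / q)))

q_uniform = np.full(4, 0.25)
assert free_energy(q_uniform) <= np.log(p_g)              # strict lower bound here
assert abs(free_energy(posterior) - np.log(p_g)) < 1e-12  # tight at the posterior
```

The gap between ln p(g) and F(q_uniform) is exactly KL(q_uniform : posterior), which is what the VBA iterations drive toward zero within the separable family.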

Without any constraint other than the normalization of q, an alternate optimization of F(q) with respect to q1, q2, and q3 results in
q1(f) ∝ exp{ ⟨ln p(f, z, θ, g)⟩_{q2(z) q3(θ)} }
q2(z) ∝ exp{ ⟨ln p(f, z, θ, g)⟩_{q1(f) q3(θ)} }
q3(θ) ∝ exp{ ⟨ln p(f, z, θ, g)⟩_{q1(f) q2(z)} }
Note that these relations represent an implicit solution for q1(f), q2(z), and q3(θ), which requires, at each iteration, the expressions of the expectations in the right-hand side of the exponentials. If p(g|f, z, θ1) is a member of an exponential family and if all the priors p(f|z, θ2), p(z|θ3), p(θ1), p(θ2), and p(θ3) are conjugate priors, then it is easy to see that these expressions lead to standard distributions for which the required expectations are easily evaluated. In that case, we may write
q ( f , z , θ g ) = q 1 ( f z ̃ , θ ̃ ; g ) q 2 ( z f ̃ , θ ̃ ; g ) q 3 ( θ f ̃ , z ̃ ; g )
(93)
where the tilded quantities z̃, f̃ and θ̃ are, respectively, functions of (f̃, θ̃), (z̃, θ̃) and (f̃, z̃), and where the alternate optimization results in alternately updating the parameters (z̃, θ̃) of q1, the parameters (f̃, θ̃) of q2, and the parameters (f̃, z̃) of q3.
Finally, we may note that, to monitor the convergence of the algorithm, we may evaluate the free energy
F(q) = ⟨ln [ p(f, z, θ, g) / q(f, z, θ) ]⟩_q = ⟨ln p(g | f, z, θ)⟩_q + ⟨ln p(f | z, θ)⟩_q + ⟨ln p(z | θ)⟩_q + ⟨ln p(θ)⟩_q − ⟨ln q(f)⟩_q − ⟨ln q(z)⟩_q − ⟨ln q(θ)⟩_q
(94)

where all the expectations are with respect to q.

Other decompositions are also possible:
q(f, z, θ | g) = q1(f | z̃, θ̃; g) Π_j q2j(z_j | f̃, z̃^(−j), θ̃; g) Π_l q3l(θ_l | f̃, z̃, θ̃^(−l); g)
(95)
or even by:
q(f, z, θ | g) = Π_j q1j(f_j | f̃^(−j), z̃, θ̃; g) Π_j q2j(z_j | f̃, z̃^(−j), θ̃; g) Π_l q3l(θ_l | f̃, z̃, θ̃^(−l); g)
(96)
Here, we consider the second case (Equation (95)) and give some more details on it. First to simplify the notations, we write it as:
q ( f , z , θ ) = q 1 ( f ) j q 2 j ( z j ) l q 3 l ( θ l )
(97)
where it can be shown that:
q1(f) ∝ exp{ ⟨ln p(f, z, θ, g)⟩_{q2(z) q3(θ)} }
q2j(z_j) ∝ exp{ ⟨ln p(f, z, θ, g)⟩_{q1(f) q2(z^(−j)) q3(θ)} }
q3l(θ_l) ∝ exp{ ⟨ln p(f, z, θ, g)⟩_{q1(f) q2(z) q3(θ^(−l))} }

where p(f, z, θ, g) = p(g | f, θ) p(f | z, θ) p(z | θ) p(θ), q2(z) = Π_j q2j(z_j), q3(θ) = Π_l q3l(θ_l), q2(z^(−j)) = Π_{i≠j} q2i(z_i), and ⟨·⟩_q denotes the expected value with respect to q.

In that case, with appropriate models for the priors (exponential families) and hyperparameters (conjugate priors), we see that q(f) is a multivariate Gaussian q(f) = N(f | μ̃, Σ̃), the q(θ_l) are either Gaussian (for the means) or Inverse Gamma (for the variances), and the q(z_j) are discrete distributions whose expressions can be written easily.

To illustrate this in more detail, we consider the case of the Student-t model.

4.1 Student-t model

In this case, we have the following relations for the forward model and the prior laws:
p(g | f, v_ε) = N(g | Hf, v_ε I), with τ = 1/v_ε
p(f | z) = Π_j p(f_j | z_j) = Π_j N(f_j | 0, z_j) = N(f | 0, Z), with Z = diag[z], a_j = 1/z_j, A = diag[a] = Z⁻¹
p(a) = Π_j G(a_j | α_0, β_0)
p(τ) = G(τ | α_τ0, β_τ0)
(98)
Then, we obtain the following expressions for the VBA:
q̃(f) = N(f | μ̃, Σ̃), with μ̃ = ⟨τ⟩ Σ̃ H′g and Σ̃ = (⟨τ⟩ H′H + Ã)⁻¹, where Ã = Z̃⁻¹ = diag[ã]
(99)
q̃(τ) = G(τ | α̃_τ, β̃_τ), with α̃_τ = α_τ0 + (n + 1)/2 and β̃_τ = β_τ0 + (1/2)[ ‖g‖² − 2⟨f⟩′H′g + Tr{H′H ⟨ff′⟩} ]
q̃(a_j) = G(a_j | α̃_j, β̃_j), with α̃_j = α_0 + 1/2 and β̃_j = β_0 + ⟨f_j²⟩/2
(100)
where the expressions of the expectations needed are:
⟨f⟩ = μ̃, ⟨ff′⟩ = Σ̃ + μ̃μ̃′, ⟨f_j²⟩ = [Σ̃]_jj + μ̃_j², ⟨τ⟩ = τ̃ = α̃_τ/β̃_τ, ⟨a_j⟩ = ã_j = α̃_j/β̃_j
(101)
We can also express the free energy expression:
F(q) = ⟨ln [ p(f, a, τ, g) / q(f, a, τ) ]⟩_q = ⟨ln p(g | f, τ)⟩ + ⟨ln p(f | a)⟩ + ⟨ln p(a)⟩ + ⟨ln p(τ)⟩ − ⟨ln q(f)⟩ − ⟨ln q(a)⟩ − ⟨ln q(τ)⟩
(102)
where
⟨ln p(g | f, τ)⟩ = (n/2)(⟨ln τ⟩ − ln(2π)) − (1/2)⟨τ⟩[ ‖g‖² − 2⟨f⟩′H′g + Tr{H′H ⟨ff′⟩} ]
⟨ln p(f | a)⟩ = −((n + 1)/2) ln(2π) + (1/2) Σ_j [ ⟨ln a_j⟩ − ⟨a_j⟩ ⟨f_j²⟩ ]
⟨ln p(a)⟩ = (n + 1)[ α_0 ln β_0 − ln Γ(α_0) ] + Σ_j [ (α_0 − 1)⟨ln a_j⟩ − β_0 ⟨a_j⟩ ]
⟨ln p(τ)⟩ = α_τ0 ln β_τ0 + (α_τ0 − 1)⟨ln τ⟩ − β_τ0 ⟨τ⟩ − ln Γ(α_τ0)
and
⟨−ln q(f)⟩ = ((n + 1)/2)(1 + ln(2π)) + (1/2) ln |Σ̃|
⟨−ln q(a)⟩ = −Σ_j [ α̃_j ln β̃_j + (α̃_j − 1)⟨ln a_j⟩ − β̃_j ⟨a_j⟩ − ln Γ(α̃_j) ]
⟨−ln q(τ)⟩ = −[ α̃_τ ln β̃_τ + (α̃_τ − 1)⟨ln τ⟩ − β̃_τ ⟨τ⟩ − ln Γ(α̃_τ) ]
In these equations,
⟨ln a_j⟩ = ψ(α̃_j) − ln β̃_j, ⟨ln τ⟩ = ψ(α̃_τ) − ln β̃_τ, where ψ(a) = ∂ ln Γ(a)/∂a is the digamma function
(103)
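These digamma expressions are easy to check by Monte Carlo; the sketch below uses arbitrary illustrative Gamma parameters (ours, not from the article) and evaluates ψ by a central difference on ln Γ, directly following the definition ψ(a) = ∂ ln Γ(a)/∂a:

```python
import numpy as np
from math import lgamma, log

# Monte Carlo check: if a ~ G(a | alpha, beta) with shape alpha and
# rate beta, then E[ln a] = psi(alpha) - ln(beta), as in Equation (103).

def psi(x, h=1e-6):
    # digamma via central difference on ln Gamma
    return (lgamma(x + h) - lgamma(x - h)) / (2 * h)

rng = np.random.default_rng(1)
alpha, beta = 2.5, 4.0
samples = rng.gamma(shape=alpha, scale=1.0 / beta, size=1_000_000)
mc_mean = float(np.log(samples).mean())
# mc_mean should match psi(alpha) - log(beta) to Monte Carlo accuracy
```

Note the parameterization: NumPy's gamma sampler takes a scale, so a rate-β Gamma uses scale = 1/β.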
The resulting algorithm alternates the updates (99), (100) and (101), monitoring convergence with the free energy (102).
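A minimal NumPy sketch of these alternating updates could look as follows. The hyperparameter defaults, variable names, and iteration count are our own illustrative choices; also, in the τ update we write m/2 for m data points, whereas the article writes (n + 1)/2 for its (n + 1)-point signal:

```python
import numpy as np

def student_t_vba(g, H, alpha0=1.0, beta0=1.0,
                  alpha_tau0=1.0, beta_tau0=1.0, n_iter=50):
    """VBA sketch for the Student-t model, Equations (98)-(101).

    Illustrative, not the authors' implementation.
    """
    m, n = H.shape
    tau = 1.0                                  # <tau>, noise precision
    a = np.ones(n)                             # <a_j>, coefficient precisions
    HtH, Htg = H.T @ H, H.T @ g
    for _ in range(n_iter):
        # q(f) = N(mu, Sigma), Equation (99)
        Sigma = np.linalg.inv(tau * HtH + np.diag(a))
        mu = tau * Sigma @ Htg
        ffT = Sigma + np.outer(mu, mu)         # <f f'>, Equation (101)
        # q(tau), Equation (100)
        alpha_tau = alpha_tau0 + m / 2
        beta_tau = beta_tau0 + 0.5 * (g @ g - 2 * mu @ Htg
                                      + np.trace(HtH @ ffT))
        tau = alpha_tau / beta_tau             # <tau>
        # q(a_j), Equation (100)
        fj2 = np.diag(Sigma) + mu**2           # <f_j^2>
        a = (alpha0 + 0.5) / (beta0 + 0.5 * fj2)
    return mu, Sigma, tau, a
```

In practice one would also evaluate the free energy (102) at each iteration and stop when it stabilizes, rather than running a fixed number of iterations.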

5 Conclusion

Sparsity is a required property in many signal and image processing applications. In this article, we first reviewed the main steps of the Bayesian approach for inverse problems in signal and image processing. Then we presented, in a synthetic way, the different prior models which can be used to enforce sparsity. These models were presented in two categories: simple and hierarchical with hidden variables. For each of these prior models, we discussed their properties and the way to use them in a Bayesian approach, resulting in many different inversion algorithms.

We have applied these Bayesian algorithms in many different applications such as X-ray computed tomography [35, 36], optical diffraction tomography [37-39], positron emission tomography [40], microwave imaging [41, 42], source separation [43-46], spectrometry [47, 48], hyperspectral imaging [49], super-resolution [50-52], image fusion [53], image segmentation [54], and synthetic aperture radar (SAR) imaging [29]. For the sake of brevity, we did not include any simulation results or application results here; these can be found in the articles just referenced.

Declarations

Acknowledgements

This study was partially funded by the C5Sys project (Circadian and Cell cycle Clock systems in Cancer) of ERASYSBIO+. http://www.erasysbio.net/index.php?index=272

Authors’ Affiliations

(1)
Laboratoire des signaux et systèmes (L2S), UMR 8506 CNRS-SUPELEC-UNIV PARIS SUD, SUPELEC, Plateau de Moulon, Gif-sur-Yvette, France

References