
Unsupervised joint deconvolution and segmentation method for textured images: a Bayesian approach and an advanced sampling algorithm

Abstract

The paper tackles the problem of joint deconvolution and segmentation of textured images. The images are composed of regions containing a patch of texture that belongs to a set of K possible classes. Each class is described by a Gaussian random field with parametric power spectral density whose parameters are unknown. The class labels are modelled by a Potts field driven by a granularity coefficient that is also unknown. The method relies on a hierarchical model and a Bayesian strategy to jointly estimate the labels, the K textured images and the hyperparameters: the signal and the noise levels as well as the texture parameters and the granularity coefficient. The capability to estimate the latter is an important feature of the paper. The estimators are designed in an optimal manner, as risk minimizers, which yields the marginal posterior maximizer for the labels and the posterior mean for the rest of the unknowns. They are computed based on a convergent procedure from samples of the posterior obtained through an advanced MCMC algorithm: a Perturbation-Optimization step and a Fisher Metropolis-Hastings step within a Gibbs loop. Various numerical evaluations provide encouraging results despite the strong difficulty of the problem.

1 Introduction: motivation and state of the art

This paper addresses the complex problem of textured image segmentation, a subject of importance in various applications [1, 2] (see also [3, 4]). In practice, observations are often affected by blur (due to the finite resolution of observation systems) and by noise (due to various sources of error). However, existing approaches do not account for these issues and focus only on segmentation. On the contrary, this paper addresses the problem of textured image segmentation from indirect (blurred and noisy) observations. It tackles the problem of joint deconvolution-segmentation of textured images and, to the best of our knowledge, no previous work has addressed this problem.

Image segmentation is a computer vision/image processing problem [5] consisting in partitioning an image into groups of adjacent pixels that have a certain homogeneity property (grey level, colour or texture) or that compose an object of interest. Since the problem has been of great interest for decades, the literature is extensive.

The most straightforward segmentation method is thresholding; however, it is seldom applicable, since it is only suited to piecewise constant images that are unaffected by blur or noise. In the class of region growing methods, Zhang et al. [6] present a seeded image segmentation based on a heat diffusion process, Ugarriza et al. [7] describe an unsupervised region growing and multiresolution merging algorithm and Alpert et al. [8] present a bottom-up aggregation approach. Partial differential equation-based techniques have also been employed. For instance, Chan and Mulet [9] introduce an active contour without edges method for object segmentation, based on level sets, curve evolution, and the Mumford-Shah model. As for the watershed approach, Malik et al. [10] present a normalized cut approach relying on a local measure of similarity of the textural features in a neighbourhood of the pixel, while Grady [11] uses a small number of predefined labels and computes a probability for each unlabelled pixel; the final label is the one maximizing this probability. Sinop and Grady [12] unify the graph cuts and the random walker methods in a common framework, based on the minimization of Lq norms for seeded image segmentation.

One of the first approaches for textured image segmentation [13] is based on using as texture features the moments of the image, computed on small windows. The more recent method in [14] consists in computing features based on the Discrete Wavelet Transform of blocks of the image, evaluating the difference between these features on adjacent blocks and processing them to obtain one-pixel-thick contours. This method does not provide a label field; thus, it gives no information about which texture belongs to which region. Another method providing texture edges [15] uses active contours and the patch-based approach for texture analysis. Textured image segmentation is also achieved in [16], based on features extracted from the Fourier transform of the learning textures. A significant method for image segmentation based on both grey level (intervening contour framework) and texture (textons) is presented in [10]. The segmented image is obtained using a normalized cut approach. Mobahi et al. [17] model a homogeneous textured region of an image by a Gaussian density and the region boundaries by adaptive chain codes. The segmentation is obtained using a clustering process. Another approach devoted to strongly resembling textures is given in [18]. The goal is to accurately characterize the textures, and this is achieved by combining a collection of statistics and filter responses. This local information is then used in an aggregation process to determine the segmentation. A three-stage segmentation method is presented in [19] and relies on characterizing both textured and non-textured regions using local spectral histograms. Texel-based image segmentation is achieved in [20] by identifying the modes in the probability density function of region properties.

A very significant class of segmentation methods relies on a probabilistic model-based formulation. Geman et al. [21] present an approach for image partitioning into homogeneous regions and for locating edges based on disparity measures. In [22], an image segmentation method is developed based on Markov chain Monte Carlo (MCMC) and the K adventurers algorithm by integrating clustering and edge detection in the proposal probabilities. Deng and Clausi [23] introduce a weighted Markov random field model that estimates the model parameters and thus performs unsupervised image segmentation. Among the probabilistic methods, the graph partitioning approach is very popular. Felzenszwalb and Huttenlocher [24] use a graph-based image model and measure the evidence for a boundary between two regions, while Boykov and Funka-Lea [25] describe the basic framework for efficient object extraction from multi-dimensional image data using graph cuts. One of the most commonly used models for the labels in the probabilistic approaches is the Potts model, which favours homogeneous regions. The pixels that belong to different regions are considered independent of each other (given the labels). Within a region, the pixels are either independent or in a Markovian dependency, most often Gaussian or conditionally Gaussian. This type of image model is mostly used for piecewise constant or piecewise smooth images. It is explored by [26] (see also [27]) for image segmentation by introducing a site-dependent external field. Barbu and Zhu [28] present a method based on a generalized Swendsen-Wang form: it builds an adjacency graph, computes a probability for each edge, and performs graph clustering and graph flipping (instead of single-pixel flipping as in the case of the Gibbs sampler). Pereyra et al. [29] propose a method for jointly estimating the Potts parameter using a likelihood-free Metropolis-Hastings algorithm.

However, none of the aforementioned segmentation approaches is formulated in the context of indirect observations. Interesting works [29–37] are Bayesian methods for image segmentation from indirect data (inversion-segmentation), also based on a Potts model for the labels. These developments have been an important source of inspiration, but they are devoted to piecewise constant or piecewise smooth images and are not adapted to textures. On the contrary, the present work tackles the question of textured image segmentation from indirect (blurred and noisy) data. It proposes a solution for joint deconvolution-segmentation of textured images and, to the best of our knowledge, it is a first attempt to solve the problem. In addition, the approach also includes the estimation of the hyperparameters: the signal and the noise levels as well as the texture parameters and the Potts coefficient. The capability to estimate the latter is an important feature of the paper. The solution is designed by means of a Bayesian strategy, in an optimal scheme: it yields the decision/estimation as the posterior maximizer or mean, depending on the type of variable. They are computed based on a convergent procedure from samples of the posterior obtained through an MCMC algorithm. These two properties, optimality and convergence, are also crucial features of the proposed method.

2 Method: probabilistic modelling

In this work, y represents the blurred and noisy observation of the original image z and ℓ represents the hidden label field. y, z and ℓ are column vectors of size P (the total number of pixels). The unobserved image z is composed of a small number of regions, each of these regions consisting of a patch of texture. The texture patches belong to one of K given texture classes.

Remark 1

There is no constraint specifying that all the texture classes must be represented in the image. Consequently, K only represents an upper bound of the number of classes that will be present in the estimation.

2.1 Label model

The label set ℓ=[ℓp, p=1,…P] naturally takes its values in {1,…K}P and is considered to follow a Potts model [38, 39] in order to favour compact regions. It is driven by the granularity coefficient \(\beta \in {\mathbb {R}}_{+}\) that tunes the mean size of these regions. For a configuration ℓ, the probability reads:

$$ \text{Pr}[{\, {\boldsymbol{\ell}}|\beta}] = {\mathcal{Z}}(\beta)^{-1} \, \exp\left[\beta \sum_{r\sim s} \delta(\ell_{r},\ell_{s})\right] \, {,} $$
(1)

where \({\mathcal {Z}}\) is the normalization constant (partition function), r∼s stands for the neighbour relationship in a 4-connectivity system and δ is the Kronecker function: δ(k,l) is 1 if k=l and 0 otherwise.

Remark 2

Let us note \(\sigma ({\boldsymbol {\ell }}) = \sum _{p \sim q} \delta (\ell _{p}, \ell _{q})\). It is the number of pairs of neighbour pixels with identical labels. The total number of pairs of neighbour pixels minus σ(ℓ) is hence the number of “active contours”, and thus the length of the contours of the label image. It is also the zero-norm of a “gradient” of the label image.
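To make this concrete, σ(ℓ) can be evaluated with simple array comparisons. The sketch below is in Python/NumPy (the paper's implementation is in Matlab, so names and conventions are ours) and counts the equal neighbour pairs in a 4-connectivity system.

```python
import numpy as np

def sigma(labels):
    """Number of 4-connected neighbour pairs with identical labels.
    `labels` is a 2-D integer array; this sketches sigma(l) of Remark 2."""
    horiz = np.sum(labels[:, :-1] == labels[:, 1:])  # left-right pairs
    vert = np.sum(labels[:-1, :] == labels[1:, :])   # top-bottom pairs
    return int(horiz + vert)
```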

An important feature of the proposed method is the capability to estimate the parameter β. To this end, the partition function \({\mathcal {Z}}\) is a crucial function since it is involved in the likelihood of β attached to any configuration. Its analytical expression is unknown (Footnote 1) and it is a huge summation over the KP possible configurations. However, based on stochastic simulations, we have precomputed it for several numbers of pixels P and classes K (see Appendix A and our previous papers [40, 41]). The reader is invited to consult papers such as [29, 42] for alternatives. See also [43–46] for complementary results.

2.2 Texture model

The textured images \({\boldsymbol {x}}_{k}\in \mathbbm{C}^{P}\), for k=1,…K are modelled as zero-mean stationary Gaussian random fields with covariance Rk:

$$f({\boldsymbol{x}}_{k}|{\mathbf{R}}_{k})=(2\pi)^{-P} \det({\mathbf{R}}_{k})^{-1}\exp\left(-\|{\boldsymbol{x}}_{k}\|^{2}_{{\mathbf{R}}_{k}^{-1}}\right) \,{.} $$

Remark 3

We address the case of textured images whose grey levels have the same mean and similar variances, since it is particularly challenging. However, the method is also suited to textured images with different grey-level means and variances.

For notational convenience, Rk is defined through a scale parameter γk and a structure matrix Λk, that is to say \({\mathbf {R}}_{k}^{-1}=\gamma _{k}{\boldsymbol {\Lambda }}_{k}\). Since xk is a stationary field, Rk is a Toeplitz-block-Toeplitz (TbT) matrix and by Whittle approximation, it becomes Circulant-block-Circulant (CbC), meaning that the previous expression becomes separable in the Fourier domain:

$$ f({\boldsymbol{x}}_{k}|{\mathbf{R}}_{k})=\prod_{p=1}^{P} (2\pi)^{-1} \gamma_{k} \lambda_{k,p}\exp\left[ - \gamma_{k} \lambda_{k,p}|\overset{{\circ}}{x}_{k,p}|^{2}\right] $$
(2)

where the \(\overset {{\circ }}{x}_{k,p}\) for p=1,…P are the Fourier coefficients of the image xk and the λk,p for p=1,…P are the eigenvalues of Λk. Thus, as a physical interpretation, \(\lambda _{k}^{-1}\) describes the Power Spectral Density (PSD) of xk in discrete form. More specifically, γkλk,p is the inverse variance of \(\overset {{\circ }}{x}_{k,p}\).

We have chosen a parametric model for the PSD, of Lorentzian form:

$$ \lambda^{-1}(\nu_{x},\nu_{y},{\boldsymbol{\theta}}) = \pi^{2} \, u_{x} \, u_{y} \, \left[1 + S_{x}^{2} \right] \, \left[1 + S_{y}^{2} \right] $$
(3)

with

$$S_{x}=\left(\nu_{x}-\nu_{x}^{0}\right)/u_{x} ~\text{and\:}~ S_{y}=\left(\nu_{y}-\nu_{y}^{0}\right)/u_{y} $$

where νx / νy are the horizontal/vertical frequency and \({\boldsymbol {\theta }}=\left [\nu _{x}^{0}, \nu _{y}^{0}, u_{x}, u_{y} \right ]\) is the shape parameter. The parameters \(\nu _{x}^{0},\nu _{y}^{0}\) are the central frequencies and ux,uy are the PSD widths. Nevertheless, any other parametric form can be used for the PSD, e.g., Gaussian and Laplacian.

Remark 4

The variables (νx,νy)∈[−0.5,0.5]2 are the continuous reduced frequencies, while (νm,νn) are the discretized reduced frequencies. We associate the frequency pair (νm,νn) with index p. Then, λp(θ)=λ(νm,νn,θ).
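As an illustration, the Lorentzian PSD of Eq. (3) can be tabulated on the discrete reduced-frequency grid in a few lines. The following NumPy sketch is ours (the function name and the use of np.fft.fftfreq for the grid are assumptions, not the paper's code).

```python
import numpy as np

def lorentzian_psd(shape, theta):
    """Tabulate lambda(nu_m, nu_n, theta) of Eq. (3) on the discrete grid.
    theta = (nu_x0, nu_y0, u_x, u_y); returns the eigenvalues lambda_p."""
    nu_x0, nu_y0, u_x, u_y = theta
    n_rows, n_cols = shape
    nu_y = np.fft.fftfreq(n_rows)[:, None]   # vertical reduced frequencies
    nu_x = np.fft.fftfreq(n_cols)[None, :]   # horizontal reduced frequencies
    s_x = (nu_x - nu_x0) / u_x
    s_y = (nu_y - nu_y0) / u_y
    lam_inv = np.pi**2 * u_x * u_y * (1 + s_x**2) * (1 + s_y**2)
    return 1.0 / lam_inv

# Example: lam = lorentzian_psd((256, 256), theta=(0.2, 0.1, 0.005, 0.005))
```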

2.3 Image model

The process of obtaining the image z containing the textured patches, starting from the full textured images xk, k=1,…K and the labels ℓ, can be visualized by the schematic in Fig. 1. This image forming process is mathematically formalized as:

$$ {\boldsymbol{z}} = \sum\limits_{k=1}^{K} {\mathbf{S}}_{k}({\boldsymbol{\ell}}) \, {\boldsymbol{x}}_{k} $$
(4)
Fig. 1

Image forming process (see Eq. (4)) in a case with K=3 texture classes. Left: true label ℓ⋆. Central panel: three images \({\boldsymbol {x}}_{1}^{\star }\), \({\boldsymbol {x}}_{2}^{\star }\) and \({\boldsymbol {x}}_{3}^{\star }\) (top) and extracted parts \({\mathbf {S}}_{1}({\boldsymbol {\ell }}^{\star }){\boldsymbol {x}}_{1}^{\star }\), \({\mathbf {S}}_{2}({\boldsymbol {\ell }}^{\star }){\boldsymbol {x}}_{2}^{\star }\) and \({\mathbf {S}}_{3}({\boldsymbol {\ell }}^{\star }){\boldsymbol {x}}_{3}^{\star }\) (bottom). Right: true image z

where Sk(ℓ) are P×P diagonal binary matrices obtained from the labels ℓ. These matrices extract from the textured image xk the pixels with label k and replace the other pixels with 0. They are zero-forcing matrices defined by:

$${\mathbf{S}}_{k}({\boldsymbol{\ell}})={\text{diag}}\left\{\delta(\ell_{p},k),\, p=1,\dots P\right\} $$

with entries 1 at pixel p when ℓp=k and 0 elsewhere.

Remark 5

Let us consider \({\mathcal {I}}_{k}=\left \{p\,|\,\ell _{p}=k\right \}\) the set of sites having label k. Then, these sets for k=1,…K form a partition of the set of pixel indices, thus having the properties:

  • Are disjoint, i.e., \({\mathcal {I}}_{k} \cap {\mathcal {I}}_{l} = \varnothing \), for lk

  • Cover the entire lattice, i.e., \(\cup _{k} {\mathcal {I}}_{k} = \left \{1,\dots P\right \}\)

  • May be empty

In terms of the extraction matrices, these properties are summarized by \(\sum _{k=1}^{K} {\mathbf {S}}_{k} = {\mathbf {I}}_{P}\), the identity matrix of size P.
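In practice, since the Sk(ℓ) are diagonal and binary, Eq. (4) reduces to masking. A minimal sketch (labels coded 1,…K, the textures given as a list of K images; names are ours):

```python
import numpy as np

def compose_image(labels, textures):
    """Eq. (4): z = sum_k S_k(l) x_k. With diagonal binary S_k, the matrix
    products reduce to masking. `textures` is a list of K images."""
    z = np.zeros_like(textures[0])
    for k, x_k in enumerate(textures, start=1):
        z += (labels == k) * x_k   # S_k(l) x_k as a pixel mask
    return z
```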

2.4 Observation system model

Now we turn to the observation system that is modelled as a linear and invariant transform. It is accounted for through a P×P convolution matrix with a TbT structure denoted by H. It becomes CbC by circulant approximation, and its eigenvalues are defined by the Fourier transfer function \(\overset {{\circ }} h_{p}\). Any function could be introduced (Gaussian, Lorentzian, Airy,…), and the considered one is a Laplacian:

$$\overset{{\circ}} h_{nm} = \exp\left[- w^{-1}\left(|\nu_{n}| + |\nu_{m}|\right) / 2 \right] $$

centred in the (0,0) frequency with width w. This is only one of the countless models that can be used.

2.5 Noise model

The noise is considered to be additive, zero-mean, white, and Gaussian with inverse variance γn. Based on this model, the density of the data given the image z and the noise parameter γn reads:

$$ f({\boldsymbol{y}}|{\boldsymbol{z}},\gamma_{\mathrm{n}}) = (2\pi)^{-P} \, \gamma_{\mathrm{n}}^{P} \, \exp\left[-\gamma_{\mathrm{n}}\|{\boldsymbol{y}}-{\mathbf{H}}{\boldsymbol{z}}\|^{2}\right] $$
(5)

that is to say the likelihood.
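For illustration, the whole direct model of Sections 2.4 and 2.5 can be simulated in a few lines. The sketch below assumes a real-valued image, applies the circulant approximation of H by FFT with the Laplacian transfer function, and adds white Gaussian noise of inverse variance γn; all names are ours.

```python
import numpy as np

rng = np.random.default_rng()

def simulate_data(z, w, gamma_n):
    """Blur z with the Laplacian transfer function (circulant approximation,
    applied by FFT) and add white Gaussian noise of inverse variance gamma_n."""
    n_rows, n_cols = z.shape
    nu_n = np.abs(np.fft.fftfreq(n_rows))[:, None]
    nu_m = np.abs(np.fft.fftfreq(n_cols))[None, :]
    h = np.exp(-(nu_n + nu_m) / (2 * w))            # transfer function
    blurred = np.real(np.fft.ifft2(h * np.fft.fft2(z)))
    noise = rng.normal(scale=gamma_n ** -0.5, size=z.shape)
    return blurred + noise
```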

2.6 Hierarchical model

Based on the variables above, the hierarchical model for the segmentation problem from blurred and noisy data can be established; it is graphically represented in Fig. 2. Based on the variable dependencies encoded in this figure, the joint distribution can be expressed:

$$ \begin{aligned} f({\boldsymbol{y}},{\boldsymbol{\ell}},{\boldsymbol{x}}_{1\dots K}, &\gamma_{\mathrm{n}},\gamma_{1\dots K},{\boldsymbol{\theta}}_{1\dots K})=f({\boldsymbol{y}}|\gamma_{\mathrm{n}},{\boldsymbol{\ell}},{\boldsymbol{x}}_{1\dots K}) \\ & \text{Pr}\left[{\, {\boldsymbol{\ell}}|\beta}\right] ~ \prod_{k=1}^{K} f({\boldsymbol{x}}_{k}|{\boldsymbol{\theta}}_{k},\gamma_{k})\\ & f(\beta) ~ f(\gamma_{\mathrm{n}}) ~ \prod_{k=1}^{K} f({\boldsymbol{\theta}}_{k}) ~ \prod_{k=1}^{K} f(\gamma_{k}) \,{.} \end{aligned} $$
(6)
Fig. 2

Hierarchical model: the round/square nodes show the estimated/given variables. The xk are the textured images (Gaussian density) and the θk,γk are the texture parameters. ℓ is the label set (Potts field) and β is the label parameter (granularity coefficient). The observed image is y. See also the notation table (end of the paper)

In order to complete the probabilistic description, the next section introduces the hyperparameter densities.

2.7 Hyperparameter models

Regarding the precision parameters γk,k=1,…K and γn, it can be noticed that in the model for the textured images xk (Eq. (2)) and for the observation y (Eq. (5)), they intervene as precision parameters in Gaussian conditional densities; hence, the Gamma densities \({\mathcal {G}}\left (\gamma ; \alpha ^{0}, \beta ^{0}\right)\) are conjugate forms. Furthermore, little prior information is available on these parameters, so uninformative Jeffreys priors are used, by setting (α0,β0)→(0,0).

In contrast, the dependency of the likelihood w.r.t. the parameter θk is very complicated, meaning that there is no conjugate form. Moreover, the lack of prior information suggests the use of the uniform density between a minimum and a maximum value: \(f({\boldsymbol {\theta }}_{k})={\mathcal {U}}_{\left [{\boldsymbol {\theta }}_{k}^{\mathrm {m}},{\boldsymbol {\theta }}_{k}^{\mathrm {M}}\right ]}({\boldsymbol {\theta }}_{k})\).

When it comes to β, a conjugate prior is not available, given the expression of the partition function \({\mathcal {Z}}(\beta)\). A uniform prior on an interval [0,B] is considered as a reasonable choice: \(f(\beta) = \mathcal {U}_{[0, B]}(\beta)\) where B is defined as the maximum possible value of β.

3 Method: Bayesian formulation

3.1 Estimation

The Bayesian strategy designs an estimator based on a loss function that quantifies the discrepancy between the true value of a parameter and an estimated one. It then relies on a risk that is the mean value of the loss function, the mean being considered under the joint distribution (6) that is to say the distribution of the unknown parameters and the data. The optimal estimator is defined as the function of the data that minimizes the risk. It is naturally different for the various types of parameters and choices of loss function.

  • Regarding the labels (discrete parameters), we resort to a binary loss function and the estimates are the marginal posterior maximizers.

  • Regarding the continuous parameters β, γn, the γk, the θk and the xk, we resort to the quadratic loss function and the estimates are the posterior means.

Remark 6

A specificity of the chosen loss functions is separability, resulting in marginal estimates. It allows for relatively fast computations but with possible limitations regarding image quality. Alternatives could rely on non-separable loss functions and non-marginal estimates, for instance the (joint) posterior maximizer. Numerical implementation could then rely on non-guaranteed optimization algorithms (e.g. block iterative conditional mode) or on computationally intensive algorithms (e.g. simulated annealing).

An estimate \(\widehat {\boldsymbol {z}}\) of the image z can be obtained from the estimated labels \(\widehat {\boldsymbol {\ell }}\) and textured images \(\widehat {\boldsymbol {x}}_{k}\) through Eq. (4), as follows:

$$ \widehat{\boldsymbol{z}}=\sum_{k} {\mathbf{S}}_{k}(\widehat{\boldsymbol{\ell}}) \, \widehat{\boldsymbol{x}}_{k} $$
(7)

each extraction matrix being based on the label estimate \(\widehat {\boldsymbol {\ell }}\).

3.2 Posterior

The posterior is proportional to the joint distribution (6) and is fully specified based on the formation model (4) for the image z, the model (2) for the textured images xk, the Potts model (1) for the labels , the model (5) for the observation y and the priors above for β, for γn, for the γk and θk.

$$ \begin{aligned} f({\boldsymbol{\ell}},&{\boldsymbol{x}}_{1\dots K}, \gamma_{\mathrm{n}},\gamma_{1\dots K},{\boldsymbol{\theta}}_{1\dots K},\beta|{\boldsymbol{y}}) \propto \\ & \exp\left[- \gamma_{\mathrm{n}}\| {\boldsymbol{y}}-{\mathbf{H}}\sum_{k}{\mathbf{S}}_{k}({\boldsymbol{\ell}}){\boldsymbol{x}}_{k}\|^{2}\right]\\ & {\mathcal{Z}}(\beta)^{-1}\, \exp\left[\beta \sum_{r\sim s} \delta(\ell_{r},\ell_{s})\right]\\ & \prod_{k} \left[\det({\boldsymbol{\Lambda}}_{k}({\boldsymbol{\theta}}_{k})) \exp\left(-\gamma_{k}\|{\boldsymbol{x}}_{k}\|^{2}_{{\boldsymbol{\Lambda}}_{k}({\boldsymbol{\theta}}_{k})}\right)\right] \\ & \prod_{k}\left[\gamma_{k}^{\alpha_{k}+P-1} ~ \exp\left(-\gamma_{k}\beta_{k}\right)\right]\\ & \gamma_{\mathrm{n}}^{\alpha_{\mathrm{n}}+ P -1} ~ \exp\left(-\gamma_{\mathrm{n}}\beta_{\mathrm{n}}\right) ~ {\mathcal{U}}_{\left[{\boldsymbol{\theta}}_{k}^{\mathrm{m}},{\boldsymbol{\theta}}_{k}^{\mathrm{M}}\right]}\left({\boldsymbol{\theta}}_{k}\right) ~ \mathcal{U}_{[0, B]}(\beta) \,{.} \end{aligned} $$
(8)

This distribution summarizes all the information about the unknowns contained by the data and the prior models.

3.3 Computing—posterior conditionals

Due to the sophisticated form of the posterior, the estimates (marginal posterior maximizers or means) cannot be calculated analytically; consequently, they will be numerically extracted. Stochastic samplers are adequate and the literature on the subject is abundant and varied [47–50]. More specifically, a (block) Gibbs loop is particularly appealing since it splits the global sophisticated problem into several far simpler sub-problems. It requires sequentially sampling each variable under its conditional posterior. These distributions are described in the next section.

4 Algorithm: sampling aspects

This section describes the conditional posterior for each unknown parameter in order to implement a Gibbs sampler. In particular, it details the cumbersome task of sampling the full textured images (Section 4.4) and the labels (Section 4.5). These two sampling processes represent the major algorithmic challenges of our approach.

Each conditional posterior can be deduced from the joint posterior (8) by picking the factors that are function of the considered parameter.

4.1 Precision parameters

Regarding the noise parameter γn and the texture scale parameters γk, from (8), we have:

$$\begin{array}{@{}rcl@{}} \gamma_{\mathrm{n}} &\sim& \gamma_{\mathrm{n}}^{\,\alpha_{\mathrm{n}}^{0}\, +P-1} \, \exp - \gamma_{\mathrm{n}} \left[\beta_{\mathrm{n}}^{0} + \| {\boldsymbol{y}}-{\mathbf{H}} {\boldsymbol{z}} \|^{2}\right] \,{,}\\ \gamma_{k} &\sim& \gamma_{k}^{\,\alpha_{k}^{0}\, +P-1} \, \exp-\gamma_{k}\left[\beta_{k}^{0} + \|{\boldsymbol{x}}_{k}\|^{2}_{{\boldsymbol{\Lambda}}_{k}({\boldsymbol{\theta}}_{k})}\right] \,{.} \end{array} $$

They must be sampled under Gamma densities \({\mathcal {G}}(\gamma ; \alpha, \beta)\) with respective parameters:

$$\alpha = \alpha_{\mathrm{n}}^{0}+P ~~\text{and\:}~~\beta=\beta_{\mathrm{n}}^{0} + \| {\boldsymbol{y}}-{\mathbf{H}}{\boldsymbol{z}} \|^{2} $$

for the noise parameter γn and

$$\alpha = \alpha_{k}^{0}+ P ~~\text{and\:}~~\beta=\beta_{k}^{0} + \|{\boldsymbol{x}}_{k}\|^{2}_{{\boldsymbol{\Lambda}}_{k}({\boldsymbol{\theta}}_{k})} $$

for the texture parameters γk. As Gamma variables, they can be straightforwardly sampled. In addition, given the hierarchical structure (see Fig. 2), they are mutually independent a posteriori.
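For instance, the conditional draw of γn is a one-line Gamma sample. The sketch below (ours, not the paper's Matlab code) uses NumPy's shape/scale parametrization, so the rate parameter β becomes the scale 1/β.

```python
import numpy as np

rng = np.random.default_rng()

def sample_gamma_n(y, Hz, alpha0=0.0, beta0=0.0):
    """Draw gamma_n ~ Gamma(alpha0 + P, rate = beta0 + ||y - Hz||^2).
    With (alpha0, beta0) -> (0, 0), this is the Jeffreys-prior update."""
    alpha = alpha0 + y.size
    rate = beta0 + np.sum(np.abs(y - Hz) ** 2)
    return rng.gamma(shape=alpha, scale=1.0 / rate)
```

The draw of each γk is identical, with \(\|{\boldsymbol{x}}_{k}\|^{2}_{{\boldsymbol{\Lambda}}_{k}({\boldsymbol{\theta}}_{k})}\) in place of the residual norm.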

4.2 Shape texture parameters

Regarding the shape parameters θk of the textured image PSD, the problem is made far more complicated by the intricate relation between the density, the PSD and the parameter θk; see Eqs. (2) and (3). As a consequence, the conditional posterior has a non-standard form:

$${\boldsymbol{\theta}}_{k} \sim {\mathcal{U}}_{[{\boldsymbol{\theta}}_{k}^{\mathrm{m}},{\boldsymbol{\theta}}_{k}^{\mathrm{M}}]}({\boldsymbol{\theta}}_{k}) \prod_{p} \lambda_{p}({\boldsymbol{\theta}}_{k}) \, \exp-\gamma_{k}\lambda_{p}({\boldsymbol{\theta}}_{k}) |\overset{{\circ}} x_{k,p}|^{2} $$

nevertheless, it can be sampled using a Metropolis-Hastings (MH) step (Footnote 2). Basically, it consists in drawing a proposal based on a proposition law, evaluating an acceptance probability and then, at random according to this probability, setting the new value to the proposal (acceptance) or to the current value (duplication). There are numerous options in order to formulate a proposition law, and both the convergence rate and the mixing properties are influenced by its adequacy to the (conditional) posterior. Thus, designing a proposition law that embeds information about the posterior will significantly enhance the performance. In this context, the directional Random Walk MH (RWMH) algorithm taking advantage of first- or second-order derivatives of the posterior seems relevant. A standard case is the Metropolis-adjusted Langevin algorithm (MALA) [51], which takes advantage of the posterior derivative. The preconditioned MALA [52] and the quasi-Newton proposals [53] exploit the posterior curvature. More advanced versions rely on the Fisher matrix (instead of the Hessian) and lead to an efficient sampler called Fisher-RWMH: [54] proposes a general statement and our previous paper [55] (see also [56]) focuses on texture parameters.

Explicitly, from the current value θc, the algorithm formulates the proposal θp:

$$ {\boldsymbol{\theta}}_{\mathrm{p}} = {\boldsymbol{\theta}}_{\mathrm{c}} + \frac{1}{2} \varepsilon^{2} \, {\mathcal{I}}^{-1}({\boldsymbol{\theta}}_{c}) \, {\mathcal{L}}^{\prime}({\boldsymbol{\theta}}_{c}) + \varepsilon \, {\mathcal{I}}({\boldsymbol{\theta}}_{c})^{-1/2} \, {\boldsymbol{u}} $$

where \({\mathcal {I}}({\boldsymbol {\theta }})\) is the Fisher matrix, \({\mathcal {L}}({\boldsymbol {\theta }})\) is the log of the conditional posterior and \({\mathcal {L}}^{\prime }({\boldsymbol {\theta }})\) its gradient, ε is a tuning parameter and \({\boldsymbol {u}}\sim {\mathcal {N}}(0,{\mathbf {I}})\) a standard Gaussian sample.
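A generic sketch of one such update follows; the callables log_post, grad_log_post and fisher are placeholders for \({\mathcal{L}}(\cdot)\), \({\mathcal{L}}^{\prime}(\cdot)\) and \({\mathcal{I}}(\cdot)\), which depend on the texture model. Since the Gaussian proposal is state-dependent, its density does not cancel in the acceptance ratio.

```python
import numpy as np

rng = np.random.default_rng()

def fisher_rwmh_step(theta_c, log_post, grad_log_post, fisher, eps=0.1):
    """One Fisher-RWMH update (sketch). Proposal mean and covariance follow
    the displayed equation: drift along I^{-1} L', covariance eps^2 I^{-1}."""
    def moments(theta):
        I_inv = np.linalg.inv(fisher(theta))
        drift = theta + 0.5 * eps**2 * I_inv @ grad_log_post(theta)
        return drift, eps**2 * I_inv

    def log_q(a, drift, cov):  # log Gaussian proposal density (unnormalized)
        diff = a - drift
        _, logdet = np.linalg.slogdet(cov)
        return -0.5 * (diff @ np.linalg.solve(cov, diff) + logdet)

    drift_c, cov_c = moments(theta_c)
    theta_p = rng.multivariate_normal(drift_c, cov_c)
    drift_p, cov_p = moments(theta_p)
    log_alpha = (log_post(theta_p) - log_post(theta_c)
                 + log_q(theta_c, drift_p, cov_p)
                 - log_q(theta_p, drift_c, cov_c))
    if np.log(rng.uniform()) < log_alpha:
        return theta_p   # acceptance
    return theta_c       # duplication of the current value
```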

4.3 Potts parameter

The granularity coefficient β conditionally follows an intricate density also deduced from (8):

$$\beta \sim {\mathcal{Z}}(\beta)^{-1} \, \exp\left[{ \beta {\sum}_{p \sim q} \delta(\ell_{p}; \ell_{q})}\right] ~ \mathcal{U}_{[0,B]}(\beta) \,{.} $$

The sampling is a very difficult task first of all because the density does not have a standard form. Moreover, the major problem is that \({\mathcal {Z}}(\beta)\) is intractable, so the density cannot even be evaluated for a given value of β.

To overcome the obstacle, the partition function \({\mathcal {Z}}(\beta)\) has been precomputed on a fine grid of values for β, ranging from β=0 to β=B=3, with a step of 0.01, for several numbers of pixels P and classes K. Details are given in Appendix A. It is therefore easy to compute the cumulative distribution function F(β) by standard numerical integration/interpolation. Then, it suffices to sample a uniform variable u on [0,1] and to compute β=F−1(u) to obtain the desired sample. This step is thus inexpensive (since the table of values of \({\mathcal {Z}}(\beta)\) is precomputed).

Remark 7

Although it allows for very efficient computations, this approach has a limitation: \({\mathcal {Z}}\) must be precomputed for the considered numbers of pixels P and classes K.

The procedure is identical to the one presented in our previous papers [40, 41, 57]. The reader is invited to consult [29, 4246] for alternatives and complementary results.
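Given the tabulated log-partition, the inverse-CDF draw of β reduces to a cumulative sum and an interpolation. A minimal sketch (the grid and table names are ours; log_Z would come from the precomputation of Appendix A):

```python
import numpy as np

rng = np.random.default_rng()

def sample_beta(betas, log_Z, sigma_l):
    """Inverse-CDF sampling of beta. `betas` is the grid (from 0 to B),
    `log_Z` the tabulated log-partition, `sigma_l` the current sigma(l)."""
    log_pdf = betas * sigma_l - log_Z        # unnormalized log density
    pdf = np.exp(log_pdf - log_pdf.max())    # rescaled for stability
    cdf = np.cumsum(pdf)
    cdf /= cdf[-1]
    u = rng.uniform()
    return np.interp(u, cdf, betas)          # beta = F^{-1}(u)
```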

4.4 Textured image

Remark 8

To improve the readability, in the following, we will use the simplified notation Λk=Λk(θk).

The textured image xk has a Gaussian density, deduced from (8):

$$ {\boldsymbol{x}}_{k} \sim \exp - \left[ \gamma_{\mathrm{n}} \| {\boldsymbol{y}}-{\mathbf{H}}\sum_{l}{\mathbf{S}}_{l}{\boldsymbol{x}}_{l} \|^{2} \!\!+ \gamma_{k}\|{\boldsymbol{x}}_{k}\|^{2}_{{\boldsymbol{\Lambda}}_{k} }\right] $$
(9)

and it is easy to show that the mean μk and the covariance Σk write:

$$\begin{array}{@{}rcl@{}} {\boldsymbol{\Sigma}}_{k}^{-1} &=& \gamma_{\mathrm{n}} {\mathbf{S}}_{k}^{\dag} {\mathbf{H}}^{\dag}\mathbf{H}\mathbf{S}_{k} + \gamma_{k}{\boldsymbol{\Lambda}}_{k}\\ {\boldsymbol{\mu}}_{k} &=& \gamma_{\mathrm{n}} {\boldsymbol{\Sigma}}_{k} {\mathbf{S}}_{k}^{\dag}{\mathbf{H}}^{\dag}\bar {\boldsymbol{y}}_{k} \end{array} $$

where \(\bar {\boldsymbol {y}}_{k}= {\boldsymbol {y}}-{\mathbf {H}}\sum _{l\neq k}{\mathbf {S}}_{l}{\boldsymbol {x}}_{l}\). This quantity isolates the contribution of the image xk in the data. More specifically, \(\bar {\boldsymbol {y}}_{k}\) is obtained by subtracting from the observations y the convolution of all the parts of the image z that are not labelled k.

However, the practical sampling of this Gaussian density is a thorny issue due to the high dimension of the variable. Usually, sampling a Gaussian density requires handling the covariance or the precision matrix, for instance through factorization (e.g., Cholesky), diagonalization, or inversion, which is impossible here. It would be possible for special structures, e.g., sparse or circulant. Here, Λk, H and by extension H†H are CbC; however, the presence of the Sk breaks the circularity: Σk is not diagonalizable by FFT and, consequently, the sampling of xk cannot be performed efficiently in the Fourier domain.

Nevertheless, the literature provides alternatives based on the strong links between matrix factorization, diagonalization, inversion, linear systems and optimization of quadratic criteria [58–62]. We resort here to our previous work [61] (see also [63]) based on a perturbation-optimization (PO) principle: adequate stochastic perturbation of a quadratic criterion and optimization of the perturbed criterion. It is shown that the criterion optimizer is a sample of the target density. It is applicable if the precision matrix and the mean can be written as sums of the form:

$${\boldsymbol{\Sigma}}_{k}^{-1} = \sum_{n=1}^{N} {\mathbf{M}}_{n}^{\mathrm{t}} {\mathbf{C}}_{n}^{-1} {\mathbf{M}}_{n} ~~\text{and\:}~~ {\boldsymbol{\mu}}_{k} = {\boldsymbol{\Sigma}}_{k} \sum_{n=1}^{N} {\mathbf{M}}_{n}^{\mathrm{t}} {\mathbf{C}}_{n}^{-1} {\boldsymbol{m}}_{n} $$

By identification, with N=2:

$$\begin{aligned} \left\{\begin{array}{ll} {\mathbf{M}}_{1} & =~ {\mathbf{H}}{\mathbf{S}}_{k} \\ {\mathbf{C}}_{1} & =~ \gamma_{\mathrm{n}}^{-1} {\mathbf{I}}_{P}\\ {\boldsymbol{m}}_{1} & =~ \bar{\boldsymbol{y}}_{k} \end{array}\right. \hspace{1cm} \left\{\begin{array}{ll} {\mathbf{M}}_{2}& =~ {\mathbf{I}}_{P}\\ {\mathbf{C}}_{2}& =~ \gamma_{k}^{-1} {\boldsymbol{\Lambda}}_{k}^{-1}\\ {\boldsymbol{m}}_{2}& =~ {\mathbf{O}}_{P} \end{array}\right. \end{aligned} $$

4.4.1 Perturbation

The perturbation phase of this algorithm consists in drawing the following Gaussian samples:

$${\boldsymbol{\xi}}_{1} \sim {\mathcal{N}}\left({\boldsymbol{m}}_{1},{\mathbf{C}}_{1}\right) ~~\text{and\:}~~ {\boldsymbol{\xi}}_{2} \sim {\mathcal{N}}\left({\boldsymbol{m}}_{2},{\mathbf{C}}_{2}\right) $$

The cost of these sampling steps is not prohibitive: ξ1 is a realization of a white noise and ξ2 is a realization of the prior model for xk, computed by FFT.

4.4.2 Optimization

In order to obtain a sample of the image xk, the following criterion must be optimized w.r.t. x:

$$J_{k}({\boldsymbol{x}}) = \gamma_{\mathrm{n}} \left\|{\boldsymbol{\xi}}_{1}-{\mathbf{H}}{\mathbf{S}}_{k}{\boldsymbol{x}}\right\|^{2}+\gamma_{k}\left\|{\boldsymbol{\xi}}_{2}-{\boldsymbol{x}}\right\|^{2}_{{\boldsymbol{\Lambda}}_{k}} \,{.} $$

For notational convenience, let us rewrite:

$$\begin{array}{@{}rcl@{}} J_{k}({\boldsymbol{x}}) & = & {\boldsymbol{x}}^{\dag} {\mathbf{Q}}_{k} {\boldsymbol{x}} - 2 {\boldsymbol{x}}^{\dag} {\boldsymbol{q}}_{k} + J_{k}(0) \end{array} $$

where the matrix \({\mathbf {Q}}_{k}=\gamma _{\mathrm {n}} {\mathbf {S}}_{k}^{\dag } {\mathbf {H}}^{\dag }{\mathbf {H}}{\mathbf {S}}_{k}+\gamma _{k}{\boldsymbol {\Lambda }}_{k}\) is half the Hessian (and the precision matrix) and the vector \({\boldsymbol {q}}_{k}=\gamma _{\mathrm {n}} {\mathbf {S}}_{k}^{\dag }{\mathbf {H}}^{\dag } {\boldsymbol {\xi }}_{1} + \gamma _{k} {\boldsymbol {\Lambda }}_{k}{\boldsymbol {\xi }}_{2}\) is the opposite of half the gradient at the origin. The gradient at x itself is: gk=2(Qkx−qk).

Theoretically, there is no constraint on the optimization technique to be used and the literature on the subject is abundant [6466]. We have only considered algorithms that are guaranteed to converge (to the unique minimizer) and among them the basic directions:

  • Gradient descent,

  • Conjugate gradient descent.

We first used the conjugate gradient direction, since it is more efficient, especially for a high-dimension problem and a quadratic criterion. However, we experienced convergence difficulties, making the overall algorithm very slow. In practice, the step length at each iteration was extremely small, probably due to conditioning issues. Consequently, the differences between the iterates were almost insignificant. The solution relies on a preconditioner. It has been defined as a CbC approximation of the inverse Hessian of Jk:

$$ {\boldsymbol{\Pi}}_{k}= \left(\gamma_{\mathrm{n}} {\mathbf{H}}^{\dag}{\mathbf{H}} + \gamma_{k}{\boldsymbol{\Lambda}}_{k}\right)^{-1}/2 $$
(10)

obtained by eliminating the Sk matrix from Qk and chosen for computational efficiency. It is used for both of the aforementioned directions:

  • Preconditioned gradient descent,

  • Preconditioned conjugate gradient descent.

In this context, the two methods have yielded similar results, and finally, we have focused on the preconditioned gradient.

The second ingredient that is necessary is the step length s in the considered direction, at each iteration. Here again, a variety of strategies is available. We have used an optimal step that is explicitly given:

$$s = \frac{{{\boldsymbol{g}}_{k}}^{\dag} {\boldsymbol{\Pi}}_{k}^{\dag} {\boldsymbol{g}}_{k}}{ {{\boldsymbol{g}}_{k}}^{\dag} {\boldsymbol{\Pi}}_{k}^{\dag} {\mathbf{Q}}_{k} {\boldsymbol{\Pi}}_{k} {\boldsymbol{g}}_{k}} $$

and efficiently computable.

4.4.3 Practical implementation

The algorithm requires at each iteration the computation of the preconditioned gradient and the step length. Finally, the required computations for performing the optimization are the vector qk and the products of a vector by the matrices Πk and Qk.

  • The vector qk writes:

    $$ {\boldsymbol{q}}_{k} = \gamma_{\mathrm{n}} \underbrace{{\mathbf{S}}_{k}^{\dag}\underbrace{{\mathbf{H}}^{\dag} {\boldsymbol{\xi}}_{1}}_{\text{FFT}}}_{\text{ZF}}+ \gamma_{k} \underbrace{{\boldsymbol{\Lambda}}_{k}{\boldsymbol{\xi}}_{2}}_{\text{FFT}} $$
    (11)

    and thus efficiently computed through a FFT and zero-forcing (ZF).

  • The product Qkx writes:

    $${\mathbf{Q}}_{k} {\boldsymbol{x}} = \gamma_{\mathrm{n}} \underbrace{{\mathbf{S}}_{k}^{\dag} \underbrace{\underbrace{{\mathbf{H}}^{\dag}{\mathbf{H}}}_{\text{FFT}}\underbrace{{\mathbf{S}}_{k} {\boldsymbol{x}}}_{\text{ZF}}}_{\text{FFT}} }_{\text{ZF}} + \gamma_{k} \underbrace{{\boldsymbol{\Lambda}}_{k}{\boldsymbol{x}}}_{\text{FFT}} $$

    and thus also efficiently computed through a series of FFT and ZF.

  • Regarding Πkgk, since the matrix Πk is CbC, the product can also be efficiently computed by FFT.

The zero-forcing process is achieved in the spatial domain (it amounts to setting some pixels of the image to zero), while the costly matrix products are performed in the Fourier domain (all of them by FFT).
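Putting Sections 4.4.1 to 4.4.3 together, a full PO draw of xk can be sketched as follows. This is a simplified illustration, assuming real-valued fields (the paper allows complex-valued textures) with DFT normalization constants absorbed into the parameters; the mask plays the role of Sk and all names are ours. The line search uses the exact optimal step for the quadratic criterion.

```python
import numpy as np

rng = np.random.default_rng()

def sample_texture_po(mask, h, lam, y_bar, gamma_n, gamma_k, n_iter=50):
    """Perturbation-Optimization draw of x_k (real-field sketch).
    mask: S_k as a boolean image; h: transfer function on the 2-D grid;
    lam: eigenvalues of Lambda_k; y_bar: y - H sum_{l!=k} S_l x_l."""
    fft2, ifft2 = np.fft.fft2, np.fft.ifft2

    # Perturbation: xi1 ~ N(y_bar, gamma_n^{-1} I) and
    # xi2 ~ N(0, (gamma_k Lambda_k)^{-1}), the latter by FFT filtering
    xi1 = y_bar + rng.normal(scale=gamma_n ** -0.5, size=mask.shape)
    xi2 = np.real(ifft2(fft2(rng.normal(size=mask.shape))
                        / np.sqrt(gamma_k * lam)))

    # Fixed vector q_k = gamma_n S† H† xi1 + gamma_k Lambda_k xi2 (FFT + ZF)
    q = (gamma_n * mask * np.real(ifft2(np.conj(h) * fft2(xi1)))
         + gamma_k * np.real(ifft2(lam * fft2(xi2))))

    # CbC preconditioner of Eq. (10), applied in the Fourier domain
    pi = 1.0 / (2.0 * (gamma_n * np.abs(h) ** 2 + gamma_k * lam))

    def apply_Q(x):  # Q_k x = gamma_n S† H†H S x + gamma_k Lambda_k x
        hhs = np.real(ifft2(np.abs(h) ** 2 * fft2(mask * x)))
        return gamma_n * mask * hhs + gamma_k * np.real(ifft2(lam * fft2(x)))

    # Preconditioned gradient descent with exact line search
    x = np.zeros_like(y_bar)
    for _ in range(n_iter):
        r = q - apply_Q(x)                          # negative half-gradient
        d = np.real(ifft2(pi * fft2(r)))            # preconditioned direction
        s = np.sum(r * d) / np.sum(d * apply_Q(d))  # optimal step length
        x = x + s * d
    return x
```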

4.5 Labels

The label set ℓ has a multidimensional categorical distribution:

$${\boldsymbol{\ell}} \sim \exp\left[ \beta \sum_{r\sim s} \, \delta(\ell_{r},\ell_{s}) - \gamma_{\mathrm{n}} \| {\boldsymbol{y}}-{\mathbf{H}}\sum_{k}{\mathbf{S}}_{k}({\boldsymbol{\ell}}){\boldsymbol{x}}_{k}\|^{2} \right] $$

and it is a non-separable and non-standard form, so its sampling is not an easy task. A solution is to sample the ℓp one by one, conditionally on the others and on the rest of the variables, in a Gibbs scheme.

To this end, let us introduce the notation \({\boldsymbol {z}}_{k}^{p}\) for the image identical to z at all pixels except pixel p, where it takes the value of pixel p of xk. Let us note \({\mathcal {E}}_{p,k}=\left \|{\boldsymbol {y}}-{\mathbf {H}}{\boldsymbol {z}}_{k}^{p}\right \|^{2}\). This error quantifies the discrepancy between the data and class k at pixel p.

Sampling a label \(\ell _{p_{0}}\) requires its conditional probability. A precise analysis of the conditional distribution for \(\ell _{p_{0}}\) yields:

$$\text{Pr}(\ell_{p_{0}}=k|\star) \propto \exp\left[\beta \sum_{r;r\sim p_{0}} \delta(\ell_{r},k)-\gamma_{\mathrm{n}} {\mathcal{E}}_{p_{0},k}\right] $$

for k=1,…K. This computation is performed up to a multiplicative constant, which can be determined knowing that the probabilities sum to 1.

To compute these probabilities, we must evaluate the two terms of the argument of the exponential function, at pixel p0. The first term is the contribution of the prior and it can be easily computed for each k by counting the neighbours of pixel p0 having the label k. Let us now focus on the second term, \({\mathcal {E}}_{p,k}\). To write this term in a more convenient form, we introduce:

  • A vector \({\mathbbm{1}}_{p}\in {\mathbb {R}}^{P}\): its p-th entry is 1 and the others are 0.

  • A quantity \(\Delta _{p,k}\in {\mathbb {R}}\) that records the difference between the p-th pixel of the image z and the one of image xk: \(\Delta _{p,k} = {\mathbbm{1}}_{p}^{\dag } ({\boldsymbol {z}}-{\boldsymbol {x}}_{k})\).

We then have \({\boldsymbol {z}}_{k}^{p}={\boldsymbol {z}}-\Delta _{p,k} {\mathbbm{1}}_{p}\), so \({\mathcal {E}}_{p,k}\) writes:

$$ \begin{aligned} {\mathcal{E}}_{p,k} &=\left\|{\boldsymbol{y}}-{\mathbf{H}}\left({\boldsymbol{z}}-\Delta_{p,k}{\mathbbm{1}}_{p}\right)\right\|^{2}\\ &=\left\|({\boldsymbol{y}}-{\mathbf{H}}{\boldsymbol{z}}) - \Delta_{p,k}{\mathbf{H}}{\mathbbm{1}}_{p} \right\|^{2}\\ &=\bar{\boldsymbol{y}}^{\dag} \bar{\boldsymbol{y}} + \Delta_{p,k}^{2} {\mathbbm{1}}_{p}^{\dag}{\mathbf{H}}^{\dag}{\mathbf{H}} {\mathbbm{1}}_{p} - 2 \Delta_{p,k} {\mathbbm{1}}_{p}^{\dag} {\mathbf{H}}^{\dag} \bar{\boldsymbol{y}} \end{aligned} $$
(12)

where \(\bar {\boldsymbol {y}} = {\boldsymbol {y}}-{\mathbf {H}}{\boldsymbol {z}}\). Then, to complete the description, let us analyse each term.

  1. The first term \(\bar {\boldsymbol {y}}^{\dag } \bar {\boldsymbol {y}}\) does not depend on p or k. Consequently, its value is not required in the sampling process and it can be included in a multiplicative factor.

  2. The term \({\mathbbm{1}}_{p}^{\dag }{\mathbf {H}}^{\dag }{\mathbf {H}} {\mathbbm{1}}_{p}=\| {\mathbf {H}} {\mathbbm{1}}_{p}\|^{2}\) does not depend on p due to the CbC form of the H matrix. Moreover, this norm only needs to be computed once and for all, since the H matrix does not change throughout the iterations. In fact, this norm amounts to the sum \({\sum _{q}}{|\overset {{\circ }} h_{q}}|^{2}\).

  3. Finally, in the third term \({\mathbbm{1}}_{p}^{\dag } {\mathbf {H}}^{\dag } \bar {\boldsymbol {y}}\), the product \({\mathbf {H}}^{\dag } \bar {\boldsymbol {y}}\) is a convolution efficiently computable by FFT and the product with \({\mathbbm{1}}_{p}^{\dag }\) selects pixel p. In this form, the computation would not be efficient since \({\mathbf {H}}^{\dag } \bar {\boldsymbol {y}}\) would have to be recomputed at each iteration. A far better alternative is to update \({\mathbf {H}}^{\dag } \bar {\boldsymbol {y}}\) after each label update, as sketched below.
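The resulting site-by-site scan can be sketched as follows. In this illustration (real-valued fields, names ours), Hty_bar stores \({\mathbf{H}}^{\dag}({\boldsymbol{y}}-{\mathbf{H}}{\boldsymbol{z}})\) and is updated after each flip, using the fact that a single-pixel change adds a shifted column of H†H, obtained by rolling the autocorrelation image ifft2(|h°|²).

```python
import numpy as np

rng = np.random.default_rng()

def gibbs_label_sweep(labels, textures, z, Hty_bar, hh_col, beta, gamma_n, K):
    """One Gibbs sweep over the labels (sketch). hh_col = ifft2(|h|^2).real
    is the first column of H†H as an image; its (0,0) entry is ||H 1_p||^2."""
    n_rows, n_cols = labels.shape
    hh_norm = hh_col[0, 0]
    for i in range(n_rows):
        for j in range(n_cols):
            z_p = z[i, j]
            log_p = np.empty(K)
            for k in range(K):
                delta = z_p - textures[k][i, j]           # Delta_{p,k}
                # Eq. (12) without the constant ||y - Hz||^2 term
                err = delta**2 * hh_norm - 2 * delta * Hty_bar[i, j]
                neigh = sum(labels[r, c] == k + 1         # prior term
                            for r, c in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                            if 0 <= r < n_rows and 0 <= c < n_cols)
                log_p[k] = beta * neigh - gamma_n * err
            prob = np.exp(log_p - log_p.max())
            prob /= prob.sum()                            # probabilities sum to 1
            k_new = int(rng.choice(K, p=prob)) + 1
            if k_new != labels[i, j]:
                labels[i, j] = k_new
                diff = textures[k_new - 1][i, j] - z_p
                z[i, j] += diff
                # a flip at p changes H†(y - Hz) by -diff * (column p of H†H)
                Hty_bar -= diff * np.roll(hh_col, (i, j), axis=(0, 1))
    return labels
```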

5 Results and discussion

The problem of texture segmentation has a considerable degree of difficulty, especially in the present case, where (i) the data are affected by blur and noise, (ii) the texture parameters are unknown and (iii) the granularity coefficient, the signal and the noise levels are also unknown. The previous sections provide a detailed description of our method, and this section presents numerical results, as follows.

  1. First, implementation and practical considerations are described.

  2. A study is then given for different image topologies in various combinations of blur and noise, to assess the method's versatility and identify its limitations.

  3. Moreover, a posterior statistics analysis is given in order to evaluate the associated uncertainty.

5.1 Implementation and practical considerations

The method has been implemented (Footnote 3) as shown in Algorithm 1. Under different scenarios, the algorithm has been run several times from identical and different initializations, and it has shown consistent qualitative and quantitative behaviours. This has led us to a series of practical considerations.

  • The label set is initialized by a realization of a white noise with uniform probability in {1,…K}. Our tests have shown a faster convergence compared to other initializations (e.g. a constant label field).

  • An important practical point is the initialization of the texture parameters θk. Each frequency is set to the maximizer of the periodogram of the observed image y over its prior interval.

  • The preconditioned gradient and the preconditioned conjugate gradient directions have similar performances. In contrast, the non-preconditioned versions are very slow.

  • Stopping rule: the algorithm stops when the difference between successive updates of the image z (see Eq. (7) and the last line of Algorithm 1) becomes smaller than a given threshold s. Practically, we set s=10−3; the algorithm usually iterates about two hundred times and takes about 4 min for a 256×256 image.

5.2 Evaluation of the method

The first example is given in Fig. 4. It consists of a simple image topology containing K=3 classes of texture. The true values of the frequency parameters of the textured images are given in Table 1. The value of the spectral width is ux=uy=0.005 for all the textures (and it is assumed to be known). These values produce two oriented textures and a low-frequency noise shown in Fig. 4 (and already given in Fig. 1). The observation scenario uses w=1/2 (full width at half maximum of about 0.5) and γn=10 (signal-to-noise ratio of about 5 dB).

Table 1 The horizontal and vertical frequencies \(\nu _{x}^{0}\) and \(\nu _{y}^{0}\) are respectively given in the top and the bottom part of the table. For each parameter, it gives the true value, the prior interval and the estimated value. The estimates are clearly very close to the true values

For an illustrative plot, the algorithm has been run for an arbitrary 100 iterations and Fig. 3 shows the simulated chains for the granularity coefficient β and for the noise parameter γn. It shows that the distributions are stable after about T=50 iterations (burn-in period). The first T samples are then discarded. From the remaining samples, the decisions for the labels are computed as the empirical marginal posterior maximizers and the estimations for the other parameters are computed as empirical posterior averages.

Fig. 3

One hundred samples of the simulated chains: granularity coefficient β (top) and noise parameter γn (bottom)

The algorithm produces a label configuration (Fig. 4d) very similar to the true one (Fig. 4a), with only 0.90% of mislabelled pixels, despite the degradation of the image.

Fig. 4

Segmentation and reconstructed images (Example 1). a True labels ℓ. b True image z. c Data y. d Estimated labels \(\hat {\boldsymbol {\ell }}\). e Estimated image \(\hat {\boldsymbol {z}}\)

Remark 9

The method is region-based, meaning that it provides closed contours, unlike a part of the existing works in texture segmentation.

Moreover, the texture parameter estimation error is small, less than 10−2, as shown in Table 1. The full textured images xk are also accurately estimated, having the same characteristics as the original textured images. The blur and the noise are reduced in the resulting image (Fig. 4e) with respect to the data (Fig. 4c), and it strongly resembles the original image (Fig. 4b).

5.2.1 Label analysis: error and probability of error

One of the main advantages of probabilistic approaches is that they not only provide estimates for the unknowns, but also coherent uncertainties associated with these estimates. Figure 5 illustrates our analysis of the label estimates and their probabilities.

Fig. 5

Link between the probability of the selected label and the labelling error. a From left to right: probability for each pixel of having label 1, 2 or 3, respectively (black is zero and white is one). b Probability of the selected label. c Mislabelled pixels

Figure 5a gives the empirical marginal probabilities of the three label values ℓp=1, ℓp=2 and ℓp=3 for each pixel p=1,…P. Figure 5b gives the probabilities of the selected labels (the ones with the maximum probability). This maximum probability can have various values: a small value indicates a less reliable decision for the label. These probabilities are small (black or grey) at certain locations in Fig. 5b, and it is safe to assume that at these locations there is a smaller chance of selecting the correct label.

This analysis can naturally be done even without the knowledge of the true labels. In order to verify if indeed we are more prone to error in the areas with small posterior probability, we have compared the selected label configuration \(\widehat {\boldsymbol {\ell }}\) to the true one ℓ. We can immediately notice in Fig. 5c that all of the mislabelled pixels are in fact positioned in the areas of weaker probability, shown in Fig. 5b. This reinforces our statement concerning the utility of the probabilistic approach, due to its ability to anticipate errors.

5.2.2 Other image topologies, blur and noise

In the case of the second image topology, given in Fig. 6, although the number of textures is reduced (K=2), the task is more difficult due to the shape of the regions: the presence of a relatively thin, continuous structure makes the label decision hard. In addition, only a small patch of the texture associated with the “white” class is present, which complicates the texture parameter estimation. However, the results shown in Fig. 6 are remarkably accurate, for both labels and image, with only 0.64% of mislabelled pixels.

Fig. 6

Segmentation and reconstructed images (Example 2). a True labels ℓ. b True image z. c Data y. d Estimated labels \(\hat {\boldsymbol {\ell }}\). e Estimated image \(\hat {\boldsymbol {z}}\)

Our third example is given in Fig. 7. Here again, the shapes of some regions are relatively thin, making the label decision hard, and only small patches of the second texture class are observed, making the texture parameter estimation difficult. Figure 7 illustrates the method's performance in a weaker convolution case w=2 and at a higher noise level γn=5. The method performs very well in this case, the estimated label field being very close to the true labels (only 0.70% of mislabelled pixels).

Fig. 7

Segmentation and reconstructed images (Example 3). a True labels ℓ. b True image z. c Data y. d Estimated labels \(\hat {\boldsymbol {\ell }}\). e Estimated image \(\hat {\boldsymbol {z}}\)

6 Conclusion and perspectives

The paper presents our method for joint deconvolution and segmentation, dedicated to textured images, with an emphasis on oriented structures. This is a very difficult task due to the large number of unknowns and their complicated dependencies. The formulation of the problem itself has demanded careful consideration in order to design the best manner to accurately account for the hierarchical dependencies. In this context, the most adapted choice was to model K full images xk corresponding to each class, rather than directly modelling the compound image z itself. This has allowed us to obtain an expression for the joint probability distribution in a relatively convenient form.

The proposed solution follows a Bayesian strategy that yields optimal functions in the sense of minimum risk for the decisions (labels) and for the estimations (continuous parameters). Both are founded on the posterior (marginal maximizer and mean). The intricate nature of the posterior distribution does not allow for an analytical expression for either the decisions or the estimates. A numerical approach is then used to explore the posterior, and the samples are subsequently used in computing them. The numerical scheme is guaranteed to converge: samples are asymptotically drawn under the posterior and empirical approximation converges towards the optimal decisions and estimates.

Nevertheless, the sampling process for the full set of variables has also proved to be challenging and has required advanced sampling approaches to overcome the impasses. We resort to a Gibbs sampler in order to split the original problem for the full set of variables into several smaller problems for subsets of variables.

  (i) One of the steps requires the sampling of a Gaussian density in large dimension, for which we resort to recent developments on Perturbation-Optimization.

  (ii) The method includes the sampling of the granularity coefficient: it is itself a thorny question, hardly ever tackled. The proposed approach relies on the inverse cumulative distribution function and takes advantage of our precomputation of the partition function.

  (iii) Regarding the texture parameters, the algorithm resorts to a recent efficient directional Metropolis-Hastings step within the Gibbs loop.

The proposed methodological aspects are original and have contributed to developing an approach that is both theoretically sound and practically efficient for the problem.

The previous section has presented the results of a series of numerical assessments performed on various convolution and noise conditions, for different image topologies. These results have shown that the method is able to accurately segment the image, provide a good estimation for the texture parameters as well as the hyperparameters and thus accurately restore the original image.

From a theoretical and modelling standpoint, the study leads us to several future developments.

  • A future contribution is the use of a non-Gaussian model for the constituent textures, possibly based on latent variables and conditional Gaussian models [67]. This would add an extra layer of complexity to the model and to the sampling stage.

  • The second future development aims at performing a myopic deconvolution [56, 68], i.e., considering that w, the width of the convolution filter, is unknown and estimating it along with the rest of the parameters.

  • Thirdly, the problem of missing data (inpainting) will also be addressed. An extension of the present work to solve this problem would require to include a truncation matrix, say T, and substitute H by TH in (5).

  • The fourth future contribution will deal with the problem of model selection, especially to choose the number of classes [69] (see also our previous works [7072]). The difficulty would regard the computation of the evidences of the models.

The study also opens up new perspectives from a numerical standpoint, notably in order to reduce computation time.

  • A future contribution will resort to the Swendsen-Wang algorithm in order to improve the sampling step of the label field [73] (see also [74]).

  • The second future development in order to reduce computation time could rely on variational Bayes approaches [30, 32, 75] (see also [76, 77]).

  • Thirdly, the problem of fast sampling will also be addressed through the abundant literature as already mentioned [47–54] and more recently [78].

As it can be seen from this brief listing of the perspectives, the work on this topic is far from being over. Nevertheless, even in its current form, the method presented in this paper addresses a problem that had not been tackled so far (deconvolution-segmentation of textured images including hyperparameter and texture parameter estimation), while achieving very satisfactory results.

7 Appendix A: Potts partition

To keep the paper self-contained, we describe here the pre-computation of the partition function already given in our previous paper [40]. It is based on known properties [38, 39] of the partition function of exponential-family distributions.

Let us note \(\sigma ({\boldsymbol {\ell }}) = \sum _{p \sim q} \delta (\ell _{p}, \ell _{q})\) the number of pairs of adjacent pixels with identical labels. The partition \({\mathcal {Z}}(\beta)\) normalizes the probability distribution (1), so it writes:

$${\mathcal{Z}}(\beta) = {\sum}_{\boldsymbol{\ell}} \exp\left[{\beta \sigma({\boldsymbol{\ell}})}\right] $$

where the summation runs over all the configurations of the field in {1,…K}P. Numerically, it is a colossal summation over the KP possible configurations and the exhaustive exploration of these configurations is impossible (except for minuscule images). Differentiating w.r.t. β straightforwardly yields:

$${\mathcal{Z}}^{\prime}(\beta) = {\sum}_{\boldsymbol{\ell}} \sigma({\boldsymbol{\ell}}) \exp\left[{\beta \sigma({\boldsymbol{\ell}})}\right] $$

then dividing by \({\mathcal {Z}}(\beta)\) we have

$$\frac{{\mathcal{Z}}^{\prime}(\beta)}{{\mathcal{Z}}(\beta)} = {\sum}_{\boldsymbol{\ell}} \sigma({\boldsymbol{\ell}}) \, {\mathcal{Z}}(\beta)^{-1} \exp\left[{\beta \sigma({\boldsymbol{\ell}})}\right] \,{.} $$

The left-hand side reads as the derivative of the log-partition \(\bar {\mathcal {Z}}(\beta)=\log {\mathcal {Z}}(\beta)\) and the right-hand side reads as an expectation:

$$\bar {\mathcal{Z}}^{\prime}(\beta) = {\sum}_{\boldsymbol{\ell}} \sigma({\boldsymbol{\ell}}) \, \text{Pr}\left[\!{\, {\boldsymbol{\ell}}|\beta}\right] = {\mathrm{E}}\left[{\sigma({\boldsymbol{L}})}\right] \,{.} $$

Consequently, the derivative of the log-partition is an expectation. It can be approximated by an empirical average:

$$\bar {\mathcal{Z}}^{\prime}(\beta) \simeq \frac{1}{N} \sum_{n} \sigma({\boldsymbol{\ell}}_{n}) $$

where the ℓn, for n=1,…,N, are N realizations of the field (given β). It remains a huge task but it is attainable: it required several weeks of intensive computation (on a standard PC), but it is done once and for all. Results are given in Fig. 8 and this is the keystone for the estimation of β in this paper.
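A hedged sketch of this precomputation follows: a single-site Gibbs sampler draws Potts configurations at each β of the grid, the average of σ(ℓ) estimates the derivative of the log-partition, and a trapezoidal integration anchored at \(\bar{\mathcal{Z}}(0)=P\log K\) recovers the table. The sample sizes below are illustrative only; the actual precomputation used far longer runs.

```python
import numpy as np

rng = np.random.default_rng()

def sigma(labels):
    """Number of 4-connected neighbour pairs with identical labels."""
    return int(np.sum(labels[:, :-1] == labels[:, 1:])
               + np.sum(labels[:-1, :] == labels[1:, :]))

def potts_gibbs_sweep(labels, beta, K):
    """One single-site Gibbs sweep over a 4-connected Potts field."""
    n_rows, n_cols = labels.shape
    for i in range(n_rows):
        for j in range(n_cols):
            counts = np.zeros(K)
            for r, c in ((i-1, j), (i+1, j), (i, j-1), (i, j+1)):
                if 0 <= r < n_rows and 0 <= c < n_cols:
                    counts[labels[r, c] - 1] += 1
            prob = np.exp(beta * (counts - counts.max()))
            labels[i, j] = int(rng.choice(K, p=prob / prob.sum())) + 1
    return labels

def log_partition_table(n_side, K, betas, n_burn=50, n_samples=100):
    """Tabulate log Z(beta): the average of sigma(l) estimates d(log Z)/d(beta),
    then integrate over the beta grid (which must start at beta = 0)."""
    d_log_Z = []
    for beta in betas:
        labels = rng.integers(1, K + 1, size=(n_side, n_side))
        for _ in range(n_burn):                      # burn-in
            potts_gibbs_sweep(labels, beta, K)
        d_log_Z.append(np.mean([sigma(potts_gibbs_sweep(labels, beta, K))
                                for _ in range(n_samples)]))
    d = np.asarray(d_log_Z)
    steps = 0.5 * (d[1:] + d[:-1]) * np.diff(betas)  # trapezoidal rule
    return n_side**2 * np.log(K) + np.concatenate(([0.0], np.cumsum(steps)))
```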

Fig. 8

From top to bottom: log-partition \(\bar {\mathcal {Z}}(\beta)\), its first and second derivatives as a function of β (from β=0 to β=3), for various sizes of image (P=64,128,256,512) and numbers of classes (K=2,3,4,5)

Notes

  1. Except for the Ising field (K=2), see [79], also [80, 81].

  2. A single MH step is used at each Gibbs iteration, which is enough to design a valid algorithm.

  3. The algorithm is implemented within the computing environment Matlab on a standard PC with a 3 GHz CPU and 64 GB of RAM.

Abbreviations

∼ :

Neighbour relation between pixels

β :

Granularity coefficient (Potts field)

K :

Number of texture classes

ℓ :

Unobserved (hidden) labels

N :

{0,1,…,N−1}

θ k :

Texture parameters

P :

Number of pixels

R k, Λ k :

Texture covariance and precision structure

x k :

Unobserved (hidden) textured images

y :

Observed image

γ k, γ n :

Texture and noise levels

z :

Unobserved (hidden) image

\({\mathcal {Z}}\) :

Partition function (Potts field)

δ(·,·):

Kronecker function

CbC:

Circulant-block-Circulant

MALA:

Metropolis adjusted Langevin algorithm

MCMC:

Markov chain Monte Carlo

MH:

Metropolis-hastings

PSD:

Power spectral density

RWMH:

Random walk metropolis-hastings

TbT:

Toeplitz-block-toeplitz

w.r.t.:

With respect to

ZF:

Zero-forcing

References

  1. M. Petrou, P. Garcia-Sevilla, Dealing with Texture (Wiley, Chichester, England, 2006).

  2. G. L. Gimel’farb, Image Textures and Gibbs Random Fields (Kluwer Academic Publishers, 1999).

  3. J. P. Da Costa, F. Michelet, C. Germain, O. Lavialle, G. Grenier, Delineation of vine parcels by segmentation of high resolution remote sensed images. Precision Agric. 8, 95–110 (2007).

  4. J. P. Da Costa, F. Galland, A. Roueff, C. Germain, Unsupervised segmentation based on Von Mises circular distributions for orientation estimation in textured images. J. Electron. Imaging 21(2) (2012).

  5. J. C. Russ, The Image Processing Handbook, 7th ed. (CRC Press, 2015).

  6. J. Zhang, J. Zheng, J. Cai, in IEEE Conference on Computer Vision and Pattern Recognition. A diffusion approach to seeded image segmentation (2010), pp. 2125–2132.

  7. L. Garcia Ugarriza, E. Saber, S. R. Vantaram, V. Amuso, M. Shaw, R. Bhaskar, Automatic Image Segmentation by Dynamic Region Growth and Multiresolution Merging. IEEE Trans. Image Process. 18(10), 2275–2288 (2009).

  8. S. Alpert, M. Galun, A. Brandt, R. Basri, Image Segmentation by Probabilistic Bottom-Up Aggregation and Cue Integration. IEEE Trans. Pattern Anal. Mach. Intell. 34(2), 315–327 (2012).

  9. T. F. Chan, P. Mulet, On the convergence of the lagged diffusivity fixed point method in total variation image restoration. SIAM J. Numer. Anal. 36(2), 354–367 (1999).

  10. J. Malik, S. Belongie, T. Leung, J. Shi, Contour and Texture Analysis for Image Segmentation. Int. J. Comput. Vis. 43, 7–27 (2001).

  11. L. Grady, Random Walks for Image Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 28(11), 1768–1783 (2006).

  12. A. K. Sinop, L. Grady, in IEEE International Conference on Computer Vision. A Seeded Image Segmentation Framework Unifying Graph Cuts and Random Walker Which Yields a New Algorithm (2007), pp. 1–8.

  13. M. Tuceryan, Moment-based texture segmentation. Pattern Recogn. Lett. 15(7), 659–668 (1994).

  14. S. Arivazhagan, L. Ganesan, Texture segmentation using wavelet transform. Pattern Recogn. Lett. 24, 3197–3203 (2003).

  15. L. Wolf, X. Huang, I. Martin, D. Metaxas, in European Conference on Computer Vision. Patch-based texture edges and segmentation (2006).

  16. A. Lillo, G. Motta, J. A. Storer, in Pattern Recognition and Image Analysis, vol. 4477 of Lecture Notes in Computer Science, ed. by J. Martí, J. M. Benedí, A. M. Mendonça, and J. Serrat. Supervised Segmentation Based on Texture Signatures Extracted in the Frequency Domain (Springer Berlin Heidelberg, 2007), pp. 89–96.

  17. H. Mobahi, S. Rao, A. Y. Yang, S. S. Sastry, Y. Ma, Segmentation of Natural Images by Texture and Boundary Compression. Int. J. Comput. Vis. 95(1), 86–98 (2011).

  18. M. Galun, E. Sharon, R. Basri, A. Brandt, in IEEE International Conference on Computer Vision, vol. 1. Texture segmentation by multiscale aggregation of filter responses and shape elements (2003), pp. 716–723.

  19. X. Liu, D. Wang, Image and Texture Segmentation Using Local Spectral Histograms. IEEE Trans. Image Process. 15(10), 3066–3077 (2006).

  20. S. Todorovic, N. Ahuja, in IEEE International Conference on Computer Vision. Texel-based texture segmentation (2009), pp. 841–848.

  21. D. Geman, S. Geman, C. Graffigne, P. Dong, Boundary Detection by Constrained Optimization. IEEE Trans. Pattern Anal. Mach. Intell. 12(7), 609–628 (1990).

  22. Z. Tu, S. C. Zhu, H. Y. Shum, in IEEE International Conference on Computer Vision, vol. 2. Image segmentation by data driven Markov chain Monte Carlo (2001), pp. 131–138.

  23. H. Deng, D. A. Clausi, Unsupervised image segmentation using a simple MRF model with a new implementation scheme. Pattern Recognit. 37(12), 2323–2335 (2004).

  24. P. F. Felzenszwalb, D. P. Huttenlocher, Efficient Graph-Based Image Segmentation. Int. J. Comput. Vis. 59(2), 167–181 (2004).

  25. Y. Boykov, G. Funka-Lea, Graph cuts and efficient N-D image segmentation. Int. J. Comput. Vis. 70(2), 109–131 (2006).

  26. G. Celeux, F. Forbes, N. Peyrard, EM-based image segmentation using Potts models with external field. Research report (INRIA, 2002).

  27. R. Morris, X. Descombes, J. Zerubia, Fully Bayesian image segmentation - an engineering perspective. Research report 3017 (INRIA, Sophia Antipolis, France, 1996).

  28. A. Barbu, S. C. Zhu, Generalizing Swendsen-Wang to sampling arbitrary posterior probabilities. IEEE Trans. Pattern Anal. Mach. Intell. 27(8), 1239–1253 (2005).

  29. M. Pereyra, N. Dobigeon, H. Batatia, J. Y. Tourneret, Estimating the Granularity Coefficient of a Potts-Markov Random Field within a Markov Chain Monte Carlo Algorithm. IEEE Trans. Image Process. 22(6), 2385–2397 (2013).

  30. O. Féron, B. Duchêne, A. Mohammad-Djafari, Microwave imaging of inhomogeneous objects made of a finite number of dielectric and conductive materials from experimental data. Inverse Problems 21(6), 95–115 (2005).

  31. M. Mignotte, A Segmentation-Based Regularization Term for Image Deconvolution. IEEE Trans. Image Process. 15(7), 1973–1984 (2006).

  32. H. Ayasso, A. Mohammad-Djafari, Joint NDT Image Restoration and Segmentation Using Gauss-Markov-Potts Prior Models and Variational Bayesian Computation. IEEE Trans. Image Process. 19(9), 2265–2277 (2010).

  33. O. Eches, N. Dobigeon, J. Y. Tourneret, Enhancing hyperspectral image unmixing with spatial correlations. IEEE Trans. Geosci. Remote Sens. 49(11), 4239–4247 (2011).

  34. M. Pereyra, N. Dobigeon, H. Batatia, J. Y. Tourneret, Segmentation of skin lesions in 2D and 3D ultrasound images using a spatially coherent generalized Rayleigh mixture model. IEEE Trans. Med. Imaging 31(8), 1509–1520 (2012).

  35. O. Eches, J. A. Benediktsson, N. Dobigeon, J. Y. Tourneret, Adaptive Markov random fields for joint unmixing and segmentation of hyperspectral image. IEEE Trans. Image Process. 22(1), 5–16 (2013).

  36. Y. Altmann, N. Dobigeon, S. McLaughlin, J. Y. Tourneret, Residual component analysis of hyperspectral images - Application to joint nonlinear unmixing and nonlinearity detection. IEEE Trans. Image Process. 23(5), 2148–2158 (2014).

  37. M. Storath, A. Weinmann, J. Frikel, M. Unser, Joint image reconstruction and segmentation using the Potts model. Inverse Probl. 31 (2015).

  38. G. Winkler, Image Analysis, Random Fields and Markov Chain Monte Carlo Methods (Springer Verlag, Berlin, Germany, 2003).

  39. D. MacKay, Information Theory, Inference, and Learning Algorithms (Cambridge University Press, 2008).

  40. R. Rosu, J. F. Giovannelli, A. Giremus, C. Vacar, in Proceedings of the International Conference on Acoustic, Speech and Signal Processing. Potts model parameter estimation in Bayesian segmentation of piecewise constant images (Brisbane, Australia, 2015).

  41. J. F. Giovannelli, A. Barbos, in Proceedings of the International Conference on Statistical Signal Processing. Unsupervised segmentation of piecewise constant images from incomplete, distorted and noisy data (Palma de Mallorca, Spain, 2016).

  42. L. Risser, T. Vincent, P. Ciuciu, J. Idier, in Medical Image Computing and Computer-Assisted Intervention (MICCAI). Robust extrapolation scheme for fast estimation of 3D Ising field partition functions: application to within-subject fMRI data analysis (London, England, 2009).

  43. N. Friel, A. N. Pettitt, R. Reeves, E. Wit, Bayesian inference in hidden Markov random fields for binary data defined on large lattices. J. Comput. Graph. Stat. 18, 243–261 (2009).

  44. J. Moller, A. N. Pettitt, R. Reeves, K. K. Berthelsen, An efficient Markov chain Monte Carlo method for distributions with intractable normalising constants. Biometrika 93(2), 451–458 (2006).

  45. A. N. Pettitt, N. Friel, R. Reeves, Efficient calculation of the normalizing constant of the autologistic and related models on the cylinder and lattice. J. Royal Stat. Soc. B 65(1), 235–246 (2003).

  46. R. Reeves, A. N. Pettitt, Efficient recursions for general factorisable models. Biometrika 91(3), 751–757 (2004).

  47. J. M. Marin, C. P. Robert, Bayesian Core: A Practical Approach to Computational Bayesian Statistics. Texts in Statistics (Springer, Paris, France, 2007).

  48. J. Albert, Bayesian Computation With R (Springer-Verlag New York Inc., New York, 2009).

  49. C. P. Robert, G. Casella, Monte-Carlo Statistical Methods. Springer Texts in Statistics (Springer, New York, 2004).

  50. D. Gamerman, H. F. Lopes, Markov Chain Monte Carlo: Stochastic Simulation for Bayesian Inference, 2nd ed. (Chapman & Hall/CRC, Boca Raton, USA, 2006).

  51. G. O. Roberts, R. L. Tweedie, Exponential Convergence of Langevin Distributions and Their Discrete Approximations. Bernoulli 2(4), 341–363 (1996).

  52. G. Roberts, O. Stramer, Langevin Diffusions and Metropolis-Hastings Algorithms. Methodol. Comput. Appl. Probab. 4, 337–358 (2003).

  53. Y. Qi, T. P. Minka, in First Cape Cod Workshop on Monte Carlo Methods. Hessian-based Markov Chain Monte-Carlo Algorithms (Cape Cod, Massachusetts, USA, 2002).

  54. M. Girolami, B. Calderhead, Riemannian manifold Hamiltonian Monte Carlo (with discussion). J. Royal Stat. Soc. B 73, 123–214 (2011).

  55. C. Vacar, J. F. Giovannelli, Y. Berthoumieu, in Proceedings of the International Conference on Acoustic, Speech and Signal Processing. Langevin and Hessian with Fisher approximation stochastic sampling for parameter estimation of structured covariance (Prague, Czech Republic, 2011), pp. 3964–3967.

  56. C. Vacar, J. F. Giovannelli, Y. Berthoumieu, Bayesian texture and instrument parameter estimation from blurred and noisy images using MCMC. IEEE Signal Process. Lett. 21(6), 707–711 (2014).

  57. J. F. Giovannelli, C. Vacar, in EUSIPCO. Deconvolution-Segmentation for Textured Images (Kos, Greece, 2017).

  58. C. Fox, A Conjugate Direction Sampler for Normal Distributions with a Few Computed Examples. Electronics Technical Report No. 2008-1, University of Otago, Dunedin, New Zealand (2008). Internal report.

  59. G. Papandreou, A. Yuille, in Proc. Int. Conf. on Neural Information Processing Systems (NIPS). Gaussian Sampling by Local Perturbations (Vancouver, Canada, 2010), pp. 1858–1866.

  60. A. Parker, C. Fox, Sampling Gaussian Distributions in Krylov Spaces with Conjugate Gradients. SIAM J. Sci. Comput. 34(3) (2012).

  61. F. Orieux, O. Féron, J. F. Giovannelli, Sampling high-dimensional Gaussian fields for general linear inverse problem. IEEE Signal Process. Lett. 19(5), 251–254 (2012).

  62. A. Barbos, F. Caron, J. F. Giovannelli, A. Doucet, in NIPS 2017. Clone MCMC: Parallel High-Dimensional Gaussian Gibbs Sampling (Long Beach, USA, 2017).

  63. C. Gilavert, S. Moussaoui, J. Idier, Efficient Gaussian sampling for solving large-scale inverse problems using MCMC. IEEE Trans. Signal Process. 63(1), 70–80 (2015).

  64. D. P. Bertsekas, Nonlinear Programming, 2nd ed. (Athena Scientific, Belmont, MA, USA, 1999).

  65. J. Nocedal, S. J. Wright, Numerical Optimization. Series in Operations Research (Springer Verlag, New York, 2008).

  66. S. Boyd, N. Parikh, E. Chu, B. Peleato, J. Eckstein, Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers. Vol. 3 of Foundations and Trends in Machine Learning (Now Publishers Inc., Hanover, MA, USA, 2011).

  67. C. Vacar, J. F. Giovannelli, A. M. Roman, in Proceedings of the International Conference on Image Processing, vol. 19. Bayesian texture model selection by harmonic mean (Orlando, 2012), p. 5.

  68. F. Orieux, J. F. Giovannelli, T. Rodet, Bayesian estimation of regularization and point spread function parameters for Wiener–Hunt deconvolution. J. Opt. Soc. Am. 27(7), 1593–1607 (2010).

  69. T. Ando, Bayesian Model Selection and Statistical Modeling (Chapman & Hall/CRC, Boca Raton, USA, 2010).

  70. J. F. Giovannelli, A. Giremus, in Proceedings of the International Conference on Statistical Signal Processing (special session). Bayesian noise model selection and system identification based on approximation of the evidence (Gold Coast, Australia, 2014).

  71. A. Barbos, A. Giremus, J. F. Giovannelli, in Proceedings of the 25th GRETSI Colloquium. Bayesian noise model selection and system identification using Chib approximation based on the Metropolis-Hastings sampler (Lyon, France, 2015).

  72. C. Vacar, J. F. Giovannelli, Y. Berthoumieu, Bayesian Texture Classification From Indirect Observations Using Fast Sampling. IEEE Trans. Signal Process. 64(1), 146–159 (2016).

  73. D. M. Higdon, Auxiliary Variable Methods for Markov Chain Monte Carlo with Applications. J. Am. Stat. Assoc. 93(442), 585–595 (1998).

  74. J. Sodjo, A. Giremus, N. Dobigeon, J. F. Giovannelli, in Proceedings of the International Conference on Acoustic, Speech and Signal Processing. A generalized Swendsen-Wang algorithm for Bayesian nonparametric joint segmentation of multiple images (New Orleans, USA, 2017).

  75. V. Smidl, A. Quinn, The Variational Bayes Method in Signal Processing (Springer, 2006).

  76. W. Fan, N. Bouguila, D. Ziou, Variational learning for finite Dirichlet mixture models and applications. IEEE Trans. Neural Netw. Learn. Syst. 23(5), 762–774 (2012).

  77. B. Ait-El-Fquih, J. F. Giovannelli, N. Paul, A. Girard, I. Hoteit, in Proceedings of the International Conference on Statistical Signal Processing. A variational Bayesian estimation scheme for parametric point-like pollution source of groundwater layers (Freiburg, Germany, 2018).

  78. L. Martino, J. Read, D. Luengo, Independent Doubly Adaptive Rejection Metropolis Sampling Within Gibbs Sampling. IEEE Trans. Signal Process. 63(12), 3123–3138 (2015).

  79. L. Onsager, A Two-Dimensional Model with an Order-Disorder Transition. Phys. Rev. 65(3 & 4), 117–149 (1944).

  80. J. F. Giovannelli, in Proceedings of the International Conference on Image Processing. Estimation of the Ising field parameter thanks to the exact partition function (Hong Kong, 2010), pp. 1441–1444.

  81. J. F. Giovannelli, in Proceedings of the International Conference on Image Processing. Estimation of the Ising field parameter from incomplete and noisy data (Brussels, Belgium, 2011), pp. 1893–1896.


Acknowledgements

Not applicable.

Funding

Not applicable.

Availability of data and materials

Please contact the author for data requests.

Author information


Contributions

Both authors jointly developed the work presented in this paper. Both authors read and approved the final manuscript.

Corresponding author

Correspondence to Jean-François Giovannelli.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License(http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Vacar, C., Giovannelli, JF. Unsupervised joint deconvolution and segmentation method for textured images: a Bayesian approach and an advanced sampling algorithm. EURASIP J. Adv. Signal Process. 2019, 17 (2019). https://doi.org/10.1186/s13634-018-0597-x
