Regularized supervised Bayesian approach for image deconvolution with regularization parameter estimation
EURASIP Journal on Advances in Signal Processing volume 2020, Article number: 15 (2020)
Abstract
Image deconvolution consists in restoring a blurred and noisy image knowing its point spread function (PSF). This inverse problem is ill-posed and needs prior information to obtain a satisfactory solution. The Bayesian inference approach with an appropriate prior on the image, in particular a Gaussian prior, has been used successfully. The supervised Bayesian approach with maximum a posteriori (MAP) estimation, a method that has been considered recently, is unstable and suffers from serious ringing artifacts in many applications. To overcome these drawbacks, we propose a regularized version where we minimize an energy functional combining the mean square error with an H^{1} regularization term, and we consider the generalized cross validation (GCV) method, a widely used and very successful predictive approach, for choosing the smoothing parameter. Theoretically, we study the convergence behavior of the method, and we give numerical tests to show its effectiveness.
1 Introduction
Images are indispensable in science and everyday life. Mirroring the capabilities of our human visual system, it is natural to display observations of the world in graphical form. Images are obtained in areas ranging from everyday photography to astronomy, medical imaging, remote sensing, and microscopy. In each case, there is an underlying object we wish to observe, which is the original or true image. Indeed, this true image is the ideal representation of the observed scene.
Yet, the observation process is never ideal: there is uncertainty in the measurements, such as blur, noise, and other types of degradation. Image restoration aims to recover an estimate of the true image from the degraded observations. The key to being able to solve this problem is proper incorporation of prior knowledge about the original image into the restoration process.
Classical image restoration seeks an estimate of the true image f(x,y) when the point spread function (PSF) h(x,y) is known a priori, so it is assumed that the observed image g(x,y) is the output of a linear spatially invariant system

g(x,y) = (h∗f)(x,y) + ε(x,y),  (1)

where ∗ represents the convolution operation and ε(x,y) the errors. Therefore, it is a non-blind deconvolution problem.
The discretized version of model (1) is given by

g = Hf + ε,  (2)

where H represents the 2D convolution matrix obtained from the PSF of the imaging system. For more details on this discretization, refer to the paper [1].
Image deconvolution is an ill-posed problem: the solution may not depend continuously on the data, may not be unique, or may not even exist. This means that, in practice, the data g alone are not sufficient to define a unique and satisfactory solution. Practical approaches such as regularization theory and Bayesian inversion have been successful for this task [2–8].
The earliest classical methods for non-blind image deconvolution include the Wiener filter [9, 10] and the Richardson-Lucy algorithm [11, 12]. The Wiener filter works in the frequency domain, attempting to minimize the impact of deconvolved noise at frequencies with a poor signal-to-noise ratio. It is in widespread use, as the frequency spectrum of most visual images is fairly well behaved and may be estimated easily. It is simple and effective for some special images. However, it is not stable, has serious ringing artifacts in many applications, and its restoration quality is still limited (results are often too blurred in the case of Gaussian-blurred images, in which the kernel is dense).
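For concreteness, the Wiener deconvolution step can be sketched in a few lines. The Python/NumPy fragment below is an illustrative reimplementation, not the code used in this paper; it assumes periodic boundary conditions and replaces the full spectral noise-to-signal ratio by a scalar constant `nsr`:

```python
import numpy as np

def wiener_deconv(g, psf, nsr):
    """Frequency-domain Wiener deconvolution (illustrative sketch).

    g   : blurred, noisy image (2-D array)
    psf : point spread function, same shape as g, centered at index [0, 0]
    nsr : scalar noise-to-signal power ratio (a simplifying assumption here)
    """
    G = np.fft.fft2(g)
    H = np.fft.fft2(psf)
    # Wiener filter: conj(H) / (|H|^2 + NSR), applied frequency by frequency
    F_hat = np.conj(H) * G / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(F_hat))
```

As `nsr` tends to 0, this reduces to the (unstable) inverse filter; larger `nsr` trades resolution for noise suppression.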
The Richardson-Lucy algorithm maximizes the likelihood that the resulting image, when convolved with the PSF, is an instance of the blurred image, assuming Poisson noise statistics. A key limitation of this iterative method is the considerable computation it takes to obtain a (visually) stable solution, due to the low convergence speed of this type of algorithm.
Moreover, increasing the number of iterations not only slows down the computational process but also magnifies noise and introduces oscillations near sharp edges.
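For reference, the Richardson-Lucy iteration can be written compactly with FFT-based convolutions. This is a minimal sketch under the same periodic-boundary assumption as above, not the implementation benchmarked later in the paper:

```python
import numpy as np

def richardson_lucy(g, psf, n_iter=30, eps=1e-12):
    """Richardson-Lucy iteration via FFT convolutions (circular boundaries).

    Multiplicative update derived from a Poisson likelihood model:
        f <- f * correlate(psf, g / convolve(psf, f))
    eps guards against division by zero.
    """
    H = np.fft.fft2(psf)
    conv = lambda x, K: np.real(np.fft.ifft2(np.fft.fft2(x) * K))
    f = np.full_like(g, g.mean())        # flat, positive initial estimate
    for _ in range(n_iter):
        ratio = g / (conv(f, H) + eps)   # observed data / current blurred estimate
        f = f * conv(ratio, np.conj(H))  # conj(H) implements correlation (adjoint)
        f = np.maximum(f, 0.0)           # keep the estimate nonnegative
    return f
```

A well-known property visible in this form: when the PSF sums to 1, each iteration conserves the total flux of the observed image.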
Another popular regularization method for denoising and deblurring is the total variation (TV) method. It was introduced for image denoising by Rudin, Osher, and Fatemi [13] and then applied to deconvolution by Rudin and Osher [14]. It preserves edges well but sometimes has an undesirable staircase effect, namely the transformation of smooth regions into piecewise constant regions (stairs), which implies that finer details in the original image may not be recovered satisfactorily.
Based on the spirit of TV regularization and bilateral filtering, Elad et al. [15, 16] proposed bilateral TV (BTV) as a new regularization term that is computationally cheap to implement and preserves edges. This term is more robust to noise and can preserve more details, but it tends to remove texture, create flat intensity regions, and introduce new contours, leading to cartoon-like images.
Alongside regularization theory, the supervised Bayesian approach is a method that has been used successfully [17–22]. In order to increase stability and overcome some limitations of this method (the ringing effect), we propose a novel regularized version for image deconvolution. We consider an H^{1} regularization term incorporating a nonnegativity constraint on the restored image, together with the well-known generalized cross validation (GCV) method for choosing suitable regularization parameters. Experiments show that the proposed approach is superior, in terms of restoration quality and stability, to the Wiener filter, the Richardson-Lucy algorithm, the TV method, and the BTV approach, all widely used in the literature.
The remainder of this article is organized as follows. In Section 2, we review the supervised Bayesian approach and then propose our deconvolution variational model. We give the experimental tests and discuss the obtained results in Section 3. Section 4 concludes the paper.
2 Methods
2.1 Bayesian approach
From this point, the main objective is to infer f given the forward model (2). In classical reconstruction problems with a known blur kernel h, we use the Bayes rule

p(f∣g) = p(g∣f) p(f) / p(g)

to obtain what is called the posterior law p(f∣g) from the likelihood p(g∣f) and the prior p(f), and we can infer f using this law.
2.1.1 Bayesian estimation with simple prior
The Bayesian inference approach is based on the posterior law:

p(f∣g,θ) = p(g∣f,θ_{1}) p(f∣θ_{2}) / p(g∣θ_{1},θ_{2}),

where the term p(g∣f,θ_{1}) is the likelihood, p(f∣θ_{2}) is the prior model, θ=(θ_{1},θ_{2}) are their corresponding parameters (often called the hyperparameters of the problem), and p(g∣θ_{1},θ_{2}) is called the evidence of the model. This relation is shown in the following scheme:
2.1.2 Assigning the likelihood p(g∣f) and the prior p(f)
We consider the forward model (2) and suppose that we have some prior knowledge about the error term ε. In fact, if we can assign a probability law p(ε), then we can deduce the likelihood term p(g∣f).
To account for possible nonstationarity of the noise, we propose to use
where \(v_{\epsilon }=[v_{\epsilon _{1}},\cdots,v_{\epsilon _{M}}]^{T}\phantom {\dot {i}\!}\) contains the unknown variances of the nonstationary noise and Dg is a diagonal matrix whose entries are the M elements of vector v_{ε}.
From this, we can define the expression of the likelihood:
To be able to estimate the unknown variances, we assign an Inverse Gamma conjugate prior on \(v_{\epsilon _{i}}\):
where \(\alpha _{\epsilon _{i}}\) and \(\beta _{\epsilon _{i}}\) are two positive parameters. The shape parameter \(\alpha _{\epsilon _{i}}\) controls the height of the probability density function of the Inverse Gamma distribution, and the scale parameter \(\beta _{\epsilon _{i}}\) controls the spread [23].
The next step is to assign a prior to the unknown f. Here too, different approaches can be used; we may have some prior information such as the mean and the variance of the unknown quantity. The objective is to assign a prior law p(f∣θ_{2}) in such a way as to translate our incomplete prior knowledge of f.
We propose
We also assign an Inverse Gamma conjugate prior on \(v_{f_{j}}\):
2.1.3 Supervised Bayesian approach
We consider ε, v_{ε} and v_{f} are known. We can define the expression of the likelihood as
We propose the prior for f:
With these assumptions on the parameters and hyperparameters, the joint posterior of all the unknowns becomes available in closed form, and we can obtain the MAP estimate of f as

\(\hat {f}=\arg \max _{f} p(f\mid g)=\arg \min _{f} J(f).\)

So the criterion to be optimized is a quadratic one:

\(J(f)=\parallel g-Hf\parallel _{2}^{2}+\lambda _{f}\parallel f\parallel _{2}^{2},\)

where \(\lambda _{f}=\frac {v_{\epsilon }}{v_{f}}\). If we work it out directly, this criterion has the closed-form solution

\(\hat {f}=\left (H^{T}H+\lambda _{f} I\right)^{-1}H^{T}g.\)  (15)
Using the singular value decomposition (SVD) of H, and assuming \(\mathcal {H}\), the Fourier transform of H, to be circular bloc-circulant (CBC), it can be shown that \(\hat {f}\) in (15) can be computed using the Fourier transform. The result is comparable to the Wiener filter:

\(\mathcal {F}=\frac {\overline {\mathcal {H}}\,\mathcal {G}}{|\mathcal {H}|^{2}+\lambda _{f}},\)  (16)

where \(\mathcal {F}\) and \(\mathcal {G}\) denote the Fourier transforms of f and g, respectively.
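In practice, this Fourier-domain MAP solution amounts to a Tikhonov-type filter. A possible NumPy sketch (an illustration under the periodic-boundary assumption, with `lam_f` standing for λ_f = v_ε/v_f):

```python
import numpy as np

def map_gaussian_deconv(g, psf, lam_f):
    """MAP estimate under a Gaussian likelihood and Gaussian prior.

    Closed form in Fourier space, assuming a block-circulant blur:
        F_hat = conj(H) G / (|H|^2 + lam_f)
    lam_f = v_eps / v_f plays the role of a Tikhonov parameter.
    """
    G = np.fft.fft2(g)
    H = np.fft.fft2(psf)
    F_hat = np.conj(H) * G / (np.abs(H) ** 2 + lam_f)
    return np.real(np.fft.ifft2(F_hat))
```

Increasing `lam_f` shrinks every Fourier coefficient of the estimate, which is the source of the stability (and, for large values, the over-smoothing) of this estimator.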
2.2 The proposed method
In the situation where ε, v_{ε} and v_{f} are known, we can use the MAP estimation. As mentioned, this estimation is unstable and suffers from serious ringing artifacts. To overcome these drawbacks, we propose a regularized MAP method where we minimize an energy functional combined by the mean square error with H^{1} regularization term:
where ∇=(∇_{h},∇_{v}) is the gradient operator combined by difference operators along horizontal and vertical directions. The last two terms are regularization terms which ask that f should be smooth in H^{1} sense.
The staircase effect is partly due to the fact that the norms used (e.g., the TV norm) are biased neither against discontinuous nor against continuous functions. The term \(\parallel \nabla f \parallel _{2}^{2}\) has a strong bias against discontinuous functions, so this model substantially reduces the staircase effect and recovers the values of smooth regions in the image.
We use the Neumann boundary condition, a natural choice in image processing [24], to discretize the gradient by a finite difference scheme. This type of boundary condition requires λ_{f}≠0 in order to prove the coercivity of the proposed functional in the space H^{1}(Ω), where Ω is a bounded Lipschitz domain in \(\mathbb {R}^{2}\), and hence the existence of a solution of the minimization problem (17). Numerically, rather than fixing λ_{f} at 0, we use an empirical reduction rule to set it from large to small; this choice markedly avoids the appearance of a blurring effect in the resulting images. We give more details on this in the next section.
Solving the problem using Fourier properties results in a simple algorithm that is easy to implement and converges in a very short time.
2.2.1 The convergence behavior
Let E be a Banach space, \(F:E \longrightarrow \mathbb {R}\), and consider the minimization problem

\(\inf \limits _{u\in E} F(u).\)  (18)
Theorem 1
Let E be a reflexive Banach space and \(F: E \longrightarrow \mathbb {R}\) a sequentially weakly lower semicontinuous and coercive function, then the problem (18) has a solution. Furthermore, if F is strictly convex, then this solution is unique.
Proof
Proving the existence of a solution of problem (18) is usually achieved by the following steps, which constitute the direct method of the calculus of variations:
Step 1: One constructs a minimizing sequence u_{n}∈E, i.e., a sequence satisfying \(\lim \limits _{n\longrightarrow +\infty } F(u_{n})=\underset {u\in E} \inf {F(u)}\).
Step 2: F is coercive \(\left (\lim \limits _{\parallel u \parallel _{E}\longrightarrow +\infty } F(u)=+\infty \right)\), one can obtain a uniform bound ∥u_{n}∥_{E}≤C, C>0. E is reflexive, then by the theorem of weak sequential compactness [25] one deduces the existence of u_{0}∈E and of a subsequence denoted also as (u_{n})_{n} such that \(u_{n} \underset {E}{\rightharpoonup } u_{0}\).
Step 3: F is sequentially weakly lower semicontinuous (for all sequence \(x_{n} \underset {E}{\rightharpoonup } x\) we have \(\underset {x_{n} \rightharpoonup x} \varliminf {F(x_{n}}) \geq F(x)\)), so \(\underset {u_{n} \rightharpoonup u_{0}} \varliminf {F(u_{n})} \geq F(u_{0})\), which obviously implies that \(F(u_{0})=\underset {u\in E} \inf {F(u)}\).
F is strictly convex (F(λx+(1−λ)y)<λF(x)+(1−λ)F(y), for all x≠y∈E and λ∈]0;1[); therefore, the minimum is unique. □
So, we have
Existence:
∙ H^{1}(Ω) is a reflexive Banach space (indeed, a Hilbert space).
∙ F is coercive:

\(F(f)\geq \lambda _{f}\parallel f\parallel _{2}^{2}+\lambda \parallel \nabla f\parallel _{2}^{2}\geq \alpha \parallel f\parallel _{H^{1}}^{2},\)

where α = min(λ_{f},λ)>0.
∙ F is sequentially weakly lower semicontinuous.
Let f_{n}→f in H^{1}, then \(\parallel f_{n}\parallel _{2}^{2}\longrightarrow \parallel f{\parallel }_{2}^{2}\) and \(\parallel \nabla f_{n}{\parallel }_{2}^{2}\longrightarrow \parallel \nabla f{\parallel }_{2}^{2}\).
Furthermore, g−Hf_{n}=g−H(f_{n}−f)−Hf, as f_{n}→f in H^{1}, f_{n}−f→0 in H^{1}⇒g−Hf_{n}→g−Hf in H^{1}, finally \(\parallel g {Hf}_{n}{\parallel }_{2}^{2}\longrightarrow \parallel g Hf{\parallel }_{2}^{2}\).
So, the problem admits a solution.
Uniqueness: The function F is strictly convex, then the solution is unique.
In the Fourier domain, our model (17) is equivalent to
where D=(D_{h},D_{v}) denotes the Fourier transform of ∇=(∇_{h},∇_{v}), and
By taking the Wirtinger derivative of the functional in (21) with respect to \(\mathcal {F}\) and setting the result to 0, we get the optimality condition

\(\left (|\mathcal {H}|^{2}+\lambda _{f}+\lambda D^{2}\right)\mathcal {F}=\overline {\mathcal {H}}\,\mathcal {G},\)

where D^{2}=D_{h}^{2}+D_{v}^{2}. This equality gives the solution

\(\mathcal {F}=\frac {\overline {\mathcal {H}}\,\mathcal {G}}{|\mathcal {H}|^{2}+\lambda _{f}+\lambda D^{2}}.\)
We use the inverse Fourier transform to get the estimation.
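Putting the pieces together, one possible NumPy sketch of this Fourier-domain solve is given below. It is an illustration, not the authors' code: it assumes periodic boundaries (whereas the paper uses Neumann conditions), and the final clipping stands in for the projection onto the nonnegativity constraint:

```python
import numpy as np

def h1_map_deconv(g, psf, lam_f, lam):
    """One Fourier-domain solve of the H^1-regularized MAP criterion:
        F_hat = conj(H) G / (|H|^2 + lam_f + lam * |D|^2),
    where |D|^2 is the Fourier symbol of the discrete gradient's
    squared magnitude (periodic forward differences assumed).
    """
    m, n = g.shape
    G = np.fft.fft2(g)
    H = np.fft.fft2(psf)
    # |D_h|^2 + |D_v|^2 for periodic forward differences:
    # |exp(-2*pi*i*k/n) - 1|^2 = 2 - 2*cos(2*pi*k/n)
    wy = 2.0 - 2.0 * np.cos(2 * np.pi * np.arange(m) / m)
    wx = 2.0 - 2.0 * np.cos(2 * np.pi * np.arange(n) / n)
    D2 = wy[:, None] + wx[None, :]
    F_hat = np.conj(H) * G / (np.abs(H) ** 2 + lam_f + lam * D2)
    f = np.real(np.fft.ifft2(F_hat))
    return np.clip(f, 0.0, None)  # projection onto the nonnegativity constraint
```

With `lam = 0` this reduces to the supervised MAP filter of (16); the `lam * D2` term penalizes high frequencies more strongly, which is what suppresses ringing.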
2.2.2 A parameter selection method for H^{1} regularization
We now consider a parameter choice method. An appropriate selection of the regularization parameters λ_{f} and λ is important in regularization. Well-known methods for this purpose are the L-curve [26] and the GCV method [27, 28]; here, we consider the GCV one. It is a widely used and very successful predictive method for choosing the smoothing parameter. The basic idea is that, if one datum point is dropped, then a good value of the regularization parameter should predict the missing datum value fairly well. In our case, the regularization parameter λ_{f} is first selected by the empirical reduction rule; then λ is chosen to minimize the GCV function

\(\text {GCV}(\lambda)=\frac {\parallel \left (I-\mathcal {H}H_{\lambda }^{-1}\mathcal {H}^{T}\right)g\parallel _{2}^{2}}{\left [\text {trace}\left (I-\mathcal {H}H_{\lambda }^{-1}\mathcal {H}^{T}\right)\right ]^{2}},\)

where \(H_{\lambda }=\mathcal {H}^{T}\mathcal {H}+\lambda _{f} I+\lambda D^{T}D\). This function can be simplified using the Generalized Singular Value Decompositions (GSVD) [29] of the pair \((\mathcal {H},D)\). Thus, there exist orthonormal matrices U,V and an invertible matrix X such that
where N=mn. Therefore, the GCV function when used with this regularization can be simplified to
with \(\tilde {g}=U^{T}g\). For the particular case where the matrix D reduces to the identity I, the GSVD of the pair \((\mathcal {H},I)\) reduces to the SVD of the matrix \(\mathcal {H}\) and the expression of GCV is given by the following formula
where σ_{i} is the ith singular value of the matrix \(\mathcal {H}\).
GCV(λ) in this case is a continuous function, so we use the MATLAB function fminbnd, which is based on a combination of golden section search and quadratic interpolation search, to find the value of λ at which GCV(λ) is minimized.
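As an illustration of the D=I case, the GCV curve can be evaluated via the SVD of the blur matrix. The sketch below uses the standard Tikhonov form of GCV and a simple grid search standing in for the golden-section search of fminbnd; it is not the paper's implementation, and `lam` here plays the role of the combined regularization weight:

```python
import numpy as np

def gcv_tikhonov(H, g, lams):
    """Standard GCV curve for Tikhonov regularization (D = I case):
        GCV(lam) = ||(I - A_lam) g||^2 / [trace(I - A_lam)]^2
    with filter factors sigma_i^2 / (sigma_i^2 + lam), computed via the SVD.
    """
    U, s, _ = np.linalg.svd(H)
    g_t = U.T @ g                     # rotated data, g~ = U^T g
    vals = []
    for lam in lams:
        r = lam / (s ** 2 + lam)      # 1 - filter factor, per singular value
        num = np.sum((r * g_t) ** 2)  # residual norm squared
        den = np.sum(r) ** 2          # squared effective residual dimension
        vals.append(num / den)
    return np.array(vals)

# choose the lambda minimizing GCV on the grid:
# lam_best = lams[np.argmin(gcv_tikhonov(H, g, lams))]
```

A logarithmic grid (e.g., `np.logspace(-6, 2, 40)`) is a common choice, since the GCV minimum typically spans several orders of magnitude.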
Beginning with an initial image, the following sequence defines our algorithm:
3 Results and discussion
This section presents the numerical tests we performed with the proposed approach for solving the image deconvolution problem, and compares it with the Wiener filter, the Richardson-Lucy deconvolution [30], the TV approach, and the BTV method. We consider the problem (2); the blurring matrix H is determined from the PSF h [31] of the imaging system, which defines how each pixel is blurred. We use three different types of blur kernel: a binary blur kernel of size 21×21 with elements normalized to sum to 1 (Fig. 1), a Gaussian blur kernel of size 20×20 with standard deviation 3, and a motion blur kernel of length 15, the latter two generated by the MATLAB routines fspecial(‘gaussian’, 20, 3) and fspecial(‘motion’, 15), respectively. This choice shows the effectiveness of our approach against different types of degradation.
For comparison, it is hard to determine whether one method is better than the others just by looking at the images; therefore, it is necessary to compute the peak signal-to-noise ratio (PSNR), which is defined as

\(\text {PSNR}=10\log _{10}\frac {255^{2}\,st}{\parallel f-\hat {f}\parallel _{2}^{2}},\)

where s and t are the numbers of rows and columns of the image. Note that PSNR is a standard image quality measure that is widely used in comparing image restoration results. We also use the Structural Similarity Index Measure (SSIM), an image quality metric that assesses the visual impact of three characteristics of an image: luminance, contrast, and structure [32].
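A straightforward implementation of this measure (assuming images on a [0, 255] scale):

```python
import numpy as np

def psnr(f_true, f_est, peak=255.0):
    """Peak signal-to-noise ratio in dB for images on a [0, peak] scale."""
    mse = np.mean((f_true.astype(float) - f_est.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

Dividing by the pixel count inside `mse` is equivalent to the s·t factor in the displayed formula.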
A crucial issue in solving the problem is the determination of the regularization parameters λ_{f} and λ. A good selection of the parameters results in a promising deblurring result, whereas a bad choice may lead to slow convergence as well as severe artifacts in the results. Larger values lead to smoother deblurred images and more stability of the algorithm; however, overly large λ_{f} and λ will over-smooth the image details and decrease the restoration quality. Generally, when the degradation in the blurry image is significant, the values need to be set large, to reduce the blur as much as possible. As the iterations proceed, however, the blurry effect decreases gradually; small values are then required, since large values would damage the fine details in the image. By considering these effects, a direct implementation is to set λ_{f} from large to small according to an empirical reduction rule [33–35]

\(\lambda _{f}^{k+1}=\max \left (r\,\lambda _{f}^{k},\ \lambda _{f}^{\min }\right),\)

which depends on the initial value \(\lambda _{f}^{0}\), the minimal value \(\lambda _{f}^{\min }\), and the reduction factor r∈(0,1). We choose r=0.5. This setting improves the convergence speed of the algorithm. We compute the other regularization parameter λ using the estimated value of λ_{f} and the GCV function (28).
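The large-to-small reduction rule can be sketched as follows (the helper name `lambda_schedule` is ours, not the paper's):

```python
def lambda_schedule(lam0, lam_min, r=0.5, n_iter=10):
    """Empirical reduction rule: lam^{k+1} = max(r * lam^k, lam_min).

    Returns the list of lam_f values used over n_iter iterations,
    decreasing geometrically until it hits the floor lam_min.
    """
    lams, lam = [], lam0
    for _ in range(n_iter):
        lams.append(lam)
        lam = max(r * lam, lam_min)
    return lams
```

For example, `lambda_schedule(1.0, 0.1, r=0.5, n_iter=5)` produces the decreasing sequence 1.0, 0.5, 0.25, 0.125, 0.1.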
A projection onto the convex set C of nonnegative images is required to impose the constraint f ≥ 0 on the restored image.
For fair comparison, in the Wiener filter [36], \(< \hat {b}^{2} > \) denotes the power spectrum of the noise \(\hat {b}\), which can be estimated from the noise level as \(< \hat b^{2} >=\frac {\alpha }{\mbox{{variance}}(g)}\); we look for the optimal α that gives the best result in each case. Furthermore, we use the number of iterations as the stopping criterion in the Richardson-Lucy algorithm.
The output image in both methods exhibits ringing introduced by the discrete Fourier transforms used; to reduce this effect, we apply the MATLAB function edgetaper before processing.
In TV and BTVbased image restoration methods, the computations are too intensive to run until convergence (even with a large tolerance). Instead, we run until a specified reasonable maximum iteration.
We try to get the best result with each method and then compare it with our approach.
We use seven images in our experiments, which are standards for image processing (Fig. 2) and have different gray-level histograms. The blurry versions are obtained by convolving the original images with the three PSFs defined previously. We show the deconvolution results in Figs. 3, 4, 5, 6, 7, 8, and 9, and the PSNR and SSIM values in Tables 1, 2, and 3.
By carefully observing these results, we find that in each case the edges and details are better recovered, and the ringing artifacts better suppressed, with our approach than with the Wiener filter, the Richardson-Lucy algorithm, the TV method, or the BTV approach (slow convergence methods). Also, the more important the structure and details of an image, the worse the degradation and the lower the restoration quality. In terms of image quality measures, the proposed MAP-H^{1} method has the highest PSNR and SSIM values of all; the BTV method is in general the second best, followed by the TV approach, the Wiener filter, and finally the Richardson-Lucy algorithm.
Depending on the size of the image, the execution of the main proposed algorithm requires an average of 2 to 20 s on an Intel(R) Celeron(R) CPU N2815 1.86 GHz computer, making it faster than the other methods.
4 Conclusions
In this work, we have extended the supervised Bayesian approach by adding an H^{1} regularization term in an energy formulation and by proposing a method for choosing the regularization parameter. Numerical results show that, for different types of blur kernel, the proposed algorithm is stable and suppresses ringing artifacts more successfully than the Wiener filter, the Richardson-Lucy algorithm, the TV method, and the BTV approach, robust methods used in the literature. Future work will focus on other blur kernels and on tests on real images.
Availability of data and materials
The set of images used to demonstrate the effectiveness of the proposed approach is composed of seven standard images used for image processing. These images are frequently found in the literature and are available at the following site: www.imageprocessingplace.com/root_files_V3/image_databases.htm.
Abbreviations
BTV: Bilateral total variation
CBC: Circular bloc-circulant
GCV: Generalized cross validation
GSVD: Generalized singular value decompositions
MAP: Maximum a posteriori
PSF: Point spread function
PSNR: Peak signal-to-noise ratio
RL: Richardson-Lucy
SSIM: Structural Similarity Index Measure
SVD: Singular value decomposition
TV: Total variation
References
A. MohammadDjafari, Inverse Problems in imaging systems and the general Bayesian inversion framework. J. Iran. Assoc. Electr. Electron. Eng.3(2), 3–21 (2006).
T. F. Chan, C. K. Wong, Total variation blind deconvolution. IEEE Trans. Image Process.7(3), 370–375 (1998).
T. J. Holmes, Blind deconvolution of quantumlimited incoherent imagery: maximumlikelihood approach. J. Opt. Soc. Am. A.9(7), 1052–1061 (1992).
S. U. Pillai, B. Liang, Blind image deconvolution using a robust gcd approach. IEEE Trans. Image Process.8(2), 295–301 (1999).
A. Levin, Y. Weiss, F. Durand, W. T Freeman, Understanding blind deconvolution algorithms. IEEE Trans. Pattern. Anal. Mach. Intell.33(12), 2354–2367 (2011).
H. Liao, M. K. Ng, Blind deconvolution using generalized cross-validation approach to regularization parameter estimation. IEEE Trans. Image Process. 20(3), 670–680 (2011).
M. Rostami, O. Michailovich, Z. Wang, Image deblurring using derivative compressed sensing for optical imaging application. IEEE Trans. Image Process.21(7), 3139–3149 (2012).
F. Sroubek, P. Milanfar, Robust multichannel blind deconvolution via fast alternating minimization. IEEE Trans. Image Process.21(4), 1687–1700 (2012).
N. Wiener, Extrapolation, interpolation, and smoothing of stationary time series: with engineering applications (MIT press, Cambridge, MA, 1950).
R. C. Gonzalez, R. E. Woods, Digital image processing (Prentice hall, Upper Saddle River, 2002).
W. H. Richardson, Bayesianbased iterative method of image restoration. JOSA. 62(1), 55–59 (1972).
L. B. Lucy, An iterative technique for the rectification of observed distributions. Astron. J.79(6), 745–754 (1974).
L. I. Rudin, S. Osher, E. Fatemi, Nonlinear total variation based noise removal algorithms. Phys. D Nonlinear Phenom. 60(1–4), 259–268 (1992).
L. I. Rudin, S. Osher, Total variation based image restoration with free local constraints. Proc. 1st Int. Conf. Image Process. (IEEE). 1, 31–35 (1994).
M. Elad, On the origin of the bilateral filter and ways to improve it. IEEE Trans. Image Process.11(10), 1141–1151 (2002).
S. Farsiu, M. D. Robinson, M. Elad, P. Milanfar, Fast and robust multiframe super resolution. IEEE Trans. Image Process.13(10), 1327–1344 (2004).
R. Molina, J. Mateos, A. K. Katsaggelos, Blind deconvolution using a variational approach to parameter, image, and blur estimation. IEEE Trans. Image Process.15(12), 3715–3727 (2006).
S. U. Park, N. Dobigeon, A. O. Hero, Semiblind sparse image reconstruction with application to MRFM. IEEE Trans. Image Process.21(9), 3838–3849 (2012).
Z. Xu, E. Y. Lam, Maximum a posteriori blind image deconvolution with Huber–Markov randomfield regularization. Opt. Lett.34(9), 1453–1455 (2009).
S. D. Babacan, J. Wang, R. Molina, A. K. Katsaggelos, Bayesian blind deconvolution from differently exposed image pairs. IEEE Trans. Image Process.19(11), 2874–2888 (2010).
L. Blanco, L. M. Mugnier, Marginal blind deconvolution of adaptive optics retinal images. Opt. Express. 19(23), 23227–23239 (2011).
S. Yousefi, N. Kehtarnavaz, Y. Cao, Computationally tractable stochastic image modeling based on symmetric markov mesh random fields. IEEE Trans. Image Process.22(6), 2192–2206 (2013).
A. G. Glen, On the inverse gamma as a survival distribution. J. Qual. Technol.43(2), 158–166 (2011).
M. K. Ng, R. H. Chan, W. C. Tang, A fast algorithm for deblurring models with Neumann boundary conditions. SIAM J. Sci. Comput.21(3), 851–866 (1999).
G. Aubert, P. Kornprobst, Mathematical problems in image processing: partial differential equations and the calculus of variations (Springer Science & Business Media, New York, 2006).
P. C. Hansen, D. P. O’Leary, The use of the Lcurve in the regularization of discrete illposed problems. SIAM J. Sci. Comput.14(6), 1487–1503 (1993).
G. H. Golub, M. Heath, G. Wahba, Generalized crossvalidation as a method for choosing a good ridge parameter. Technometrics. 21(2), 215–223 (1979).
A. H. Bentbib, M. El Guide, K Jbilou, Matrix Krylov subspace methods for image restoration. New Trends Math. Sci.3(3), 136–148 (2015).
G. H. Golub, C. van Loan, Matrix computation, the third edition (The Johns Hopkins University Press, Baltimore, 1996).
M. Bergounioux, Introduction au traitement mathématique des imagesméthodes déterministes (Springer, Berlin, 2015).
P. C. Hansen, Regularization tools: a Matlab package for analysis and solution of discrete illposed problems. Numer. Algoritm.6(1), 1–35 (1994).
Z. Wang, A. C. Bovik, H. R. Sheikh, E. P. Simoncelli, Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Process.13(4), 600–612 (2004).
Y.-W. Tai, P. Tan, M. S. Brown, Richardson-Lucy deblurring for scenes under a projective motion path. IEEE Trans. Pattern. Anal. Mach. Intell. 33(8), 1603–1618 (2010).
M. S. Almeida, L. B. Almeida, Blind and semi-blind deblurring of natural images. IEEE Trans. Image Process. 19(1), 36–52 (2009).
E. Faramarzi, D. Rajan, M. P. Christensen, Unified blind method for multiimage superresolution and single/multiimage blur deconvolution. IEEE Trans. Image Process.22(6), 2101–2114 (2013).
M. Bergounioux, Approche fréquentielle : Filtre de Wiener. Introduction au traitement mathématique des imagesméthodes déterministes vol. 76 (Springer, Berlin, 2015).
Acknowledgements
Not applicable.
Funding
Not applicable.
Author information
Authors and Affiliations
Author's contributions
The three authors contributed equally to the realization of this work. The author(s) read and approved the final manuscript.
Authors’ information
Bouchra Laaziri, first author, PhD student at the Laboratory of Applied Mathematics and Computer Science, Faculty of Science and Techniques, Cadi Ayyad University, Marrakesh, Morocco. Said Raghay, Professor at the Department of Mathematics, Laboratory of Applied Mathematics and Computer Science, Faculty of Science and Techniques, Cadi Ayyad University, Marrakesh, Morocco. Abdelilah Hakim, Professor at the Department of Mathematics, Laboratory of Applied Mathematics and Computer Science, Faculty of Science and Techniques, Cadi Ayyad University, Marrakesh, Morocco.
Corresponding author
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Laaziri, B., Raghay, S. & Hakim, A. Regularized supervised Bayesian approach for image deconvolution with regularization parameter estimation. EURASIP J. Adv. Signal Process. 2020, 15 (2020). https://doi.org/10.1186/s1363402000671w
Keywords
 Image deconvolution
 Supervised Bayesian approach
 MAP estimation
 Regularization
 GCV method