Boundary reconstruction process of a TV-based neural net without prior conditions
- Miguel A Santiago^{1}Email author,
- Guillermo Cisneros^{1} and
- Emiliano Bernués^{2}
https://doi.org/10.1186/1687-6180-2011-115
© Santiago et al; licensee Springer. 2011
- Received: 30 April 2011
- Accepted: 23 November 2011
- Published: 23 November 2011
Abstract
Image restoration aims to recover an image within a given domain from a blurred and noisy acquisition. However, the convolution operator which models the degradation is truncated in a real observation, causing significant artifacts in the restored results. Typically, some assumptions are made about the boundary conditions (BCs) outside the field of view to reduce the ringing. We propose instead a restoration method without prior conditions which reconstructs the boundary region as well as making the ringing artifact negligible. The algorithm of this article is based on a multilayer perceptron (MLP) which minimizes a truncated version of the total variation regularizer using a back-propagation strategy. Various experiments demonstrate the novelty of the MLP in the boundary restoration process, with neither image information nor any prior assumption on the BCs.
Keywords
- image restoration
- neural nets
- multilayer perceptron (MLP)
- boundary conditions (BCs)
- image boundary restoration
- degradation models
- total variation (TV)
1. Introduction
Restoration of blurred and noisy images is a classical problem arising in many applications, including astronomy, biomedical imaging, and computerized tomography [1]. This problem aims to invert the degradation introduced by a capture device, but the underlying process is mathematically ill-posed and leads to a highly noise-sensitive solution. A large number of techniques have been developed to cope with this issue, most of them under the regularization or the Bayesian frameworks (a complete review can be found in [2–4]).
The degraded image is generally modeled as a convolution of the unknown true image with a linear point spread function (PSF), along with the effects of an additive noise. The non-local property of the convolution implies that part of the blurred image near the boundary integrates information of the original scenery outside the field of view. However, this information is not available in the deconvolution process and may cause strong ringing artifacts on the restored image, i.e., the well-known boundary problem [5]. A typical way to counteract the boundary effect is to make assumptions about the behavior of the original image outside the field of view, such as Dirichlet, Neumann, periodic, or other recent conditions in [6–8]. The result of restoration with these methods is an image defined in the field-of-view (FOV) domain, but it lacks the boundary area which is actually present in the true image.
In this article we present a restoration method which deals with a blurred image defined in the FOV, but with neither image information nor any prior assumption on the boundary conditions (BCs). Furthermore, the objective is not only to reduce the ringing artifacts on the whole image, but also to reconstruct the missing boundaries of the original image.
1.1. Contribution
In recent studies [9, 10], we have developed an algorithm using a multilayer perceptron (MLP) to restore a real image without relying on the typical BCs of the literature. The main idea is to model the blurred image as a truncation of the convolution operator, whose boundaries have been removed and are not further used in the algorithm.
A first step of our neural net was given in a previous study [9] using the standard l_{2} norm in the energy function, as done in other regularization algorithms [11–15]. However, the success of the total variation (TV) in deconvolution [16–20] motivated its incorporation in the MLP. By means of matrix algebra and the approximation of the TV operator with the majorization-minimization (MM) algorithm of [19], we presented a newer version of the MLP [10] for both l_{1} and l_{2} regularizers, mainly devoted to comparing the truncation model with the traditional BCs.
Now we will analyze the TV-based MLP, focusing on the boundary restoration process. The neural network is well suited to learning the degradation model and can then restore the borders without the values of the blurred data therein. Besides, the algorithm adapts the energy optimization to the whole image and makes the ringing artifact negligible.
Finally, let us recall that our MLP builds on the same algorithmic base presented by the authors for the desensitization problem [21]. In fact, our MLP simulates at every iteration an approach to both the degradation (backward) and the restoration (forward) processes, thus extending the same iterative concept to a nonlinear problem.
1.2 Paper organization
This article is structured as follows. In the next section, we provide a detailed formulation of the problem, establishing naming conventions, and the energy function to be minimized. In Section 3, we present the architecture of the neural net under analysis. Section 4 describes the adjustment of its synaptic weights in every layer and outlines the reconstruction of boundaries. We present some experimental results in Section 5 and, finally, concluding remarks are given in Section 6.
2. Problem formulation
where $M=\left[{M}_{1}\times {M}_{2}\right]\subset {\Re}^{2}$ and $L=\left[{L}_{1}\times {L}_{2}\right]\subset {\Re}^{2}$ are the supports which define the PSF and the original image, respectively. Let B_{1} and B_{2} be the horizontal and vertical bandwidths of the PSF mask, then we can rewrite the support M as $\left[\left(2{B}_{1}+1\right)\times \left(2{B}_{2}+1\right)\right]$.
where H is the blurring matrix corresponding to the filter mask h of (1), y is the observed image (blurred and noisy image) and n is a sample of a zero mean white Gaussian additive noise of variance σ^{2}.
where T has a Toeplitz structure and B, which is defined by the BCs, is often structured, sparse and low-rank. BCs make assumptions about how the observed image behaves outside the FOV and are often chosen for algebraic and computational convenience. The following cases are commonly referenced in the literature:
Zero BCs [22], aka Dirichlet, impose a black boundary so that the matrix B is all zeros and, therefore, H is block Toeplitz with Toeplitz blocks (BTTB). This implies an artificial discontinuity at the borders which can lead to serious ringing effects.
Periodic BCs [22] assume that the scene can be represented as an infinite mosaic of the observed image, repeated periodically in all directions. The resulting matrix H is block circulant with circulant blocks (BCCB), which can be diagonalized by the unitary discrete Fourier transform and leads to a restoration problem implemented by FFTs. Although computationally convenient, this model cannot actually represent a physical observed image and still produces ringing artifacts.
Reflective BCs [23] reflect the image like a mirror with respect to the boundaries. In this case, the matrix H has a Toeplitz-plus-Hankel structure which can be diagonalized by the orthonormal discrete cosine transform if the PSF is symmetric. As these conditions maintain the continuity of the gray level of the image, the ringing effects are reduced in the restoration process.
Anti-reflective BCs [7] similarly reflect the image with respect to the boundaries, but using a central symmetry instead of the axial symmetry of the reflective BCs. The continuity of the image and of the normal derivative are both preserved at the boundary, leading to an important reduction of ringing. The structure of H is Toeplitz-plus-Hankel plus a structured rank-2 matrix, which can also be efficiently implemented if the PSF satisfies a strong symmetry condition.
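As an illustration, the four BCs above can be viewed as padding modes on a 2-D array. This is a toy sketch (the function name `pad_with_bc` is ours, not from the article): the first three map directly onto numpy padding modes, and the anti-reflective case uses numpy's odd reflection, which realizes the central symmetry x(-i) = 2x(0) - x(i) described above.

```python
import numpy as np

def pad_with_bc(x, b, bc="zero"):
    """Pad a 2-D image by b pixels on each side under a given boundary
    condition.  'zero', 'periodic' and 'reflective' correspond to the
    numpy modes 'constant', 'wrap' and 'symmetric'; 'antireflective'
    extends by central symmetry, 2*edge - mirrored value, preserving
    both the gray level and the normal derivative at the border."""
    if bc == "zero":
        return np.pad(x, b, mode="constant")
    if bc == "periodic":
        return np.pad(x, b, mode="wrap")
    if bc == "reflective":
        return np.pad(x, b, mode="symmetric")
    if bc == "antireflective":
        return np.pad(x, b, mode="reflect", reflect_type="odd")
    raise ValueError("unknown boundary condition: %s" % bc)
```

For instance, the row [0, 1, 2] extends on the left to 0 (zero), 2 (periodic), 0 (reflective), and -1 = 2*0 - 1 (anti-reflective).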
for linear convolution (aperiodic model).
and whose area is calculated by B = (L_{1}-B_{1}) × 4B_{1}, if we consider square dimensions such that B_{1} = B_{2} and L_{1} = L_{2}.
where ${\u2225\mathit{z}\u2225}_{2}^{2}=\sum _{i}{z}_{i}^{2}$ denotes the ${\ell}_{2}$ norm, $\widehat{\mathit{x}}$ is the restored image, and D is the regularization operator, built on the basis of a high-pass filter mask d of support $N=\left[{N}_{1}\times {N}_{2}\right]\subset {\Re}^{2}$ and using the same BCs described previously. The first term in (9) is the ${\ell}_{2}$ residual norm appearing in the least-squares approach and ensures fidelity to the data. The second term is the so-called "regularizer" or "side constraint" and captures prior knowledge about the expected behavior of x through an additional ${\ell}_{2}$ penalty term involving just the image. The hyper-parameter (or regularization parameter) λ is a critical value which measures the trade-off between a good fit and a regularized solution.
built on the basis of the respective masks d^{ ξ } and d^{ μ } of support $N=\left[{N}_{1}\times {N}_{2}\right]\subset {\Re}^{2}$, which yield the horizontal and vertical first-order differences of the image. Compared to expression (9), the TV regularization provides an ${\ell}_{1}$ penalty term which can be thought of as a measure of signal variability. Once again, λ is the critical regularization parameter controlling the weight we assign to the regularizer relative to the data misfit term.
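The two objectives discussed around (9) and (10) can be sketched numerically. This is only an illustrative sketch (helper names are ours): the convolutions use zero BCs for simplicity, and the TV term uses an ε-smoothed form of the kind introduced later in Section 4.

```python
import numpy as np

def conv2_zero(x, h):
    """'same'-size 2-D convolution of image x with mask h, zero BCs."""
    hf = h[::-1, ::-1]                      # flip -> true convolution
    L1, L2 = x.shape
    M1, M2 = h.shape
    b1, b2 = M1 // 2, M2 // 2
    xp = np.pad(x, ((b1, b1), (b2, b2)))
    y = np.zeros_like(x, dtype=float)
    for i in range(L1):
        for j in range(L2):
            y[i, j] = np.sum(xp[i:i + M1, j:j + M2] * hf)
    return y

def energy_l2(x, y, h, d, lam):
    """Tikhonov-style objective of (9): ||y - Hx||^2 + lam*||Dx||^2."""
    r = y - conv2_zero(x, h)
    dx = conv2_zero(x, d)
    return np.sum(r**2) + lam * np.sum(dx**2)

def energy_tv(x, y, h, dxi, dmu, lam, eps=1e-8):
    """TV objective of (10) with eps-smoothing to avoid singularities:
    ||y - Hx||^2 + lam * sum sqrt((D_xi x)^2 + (D_mu x)^2 + eps)."""
    r = y - conv2_zero(x, h)
    gx = conv2_zero(x, dxi)
    gy = conv2_zero(x, dmu)
    return np.sum(r**2) + lam * np.sum(np.sqrt(gx**2 + gy**2 + eps))
```

Note how the TV penalty sums the magnitude of the local gradient (an ℓ1-type measure of variability), whereas (9) sums its square.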
Size of the variables involved in the definition of the MLP, both in the degradation and the restoration processes
Degradation

| size{x} | size{h} | size{H_a} | size{H_a x} | size{y_tru} |
|---|---|---|---|---|
| L × 1 | M × 1 | L̃ × L | L̃ × 1 | L̃ × 1 |
| L = [L_1 × L_2] | M = [(2B_1+1) × (2B_2+1)] | L̃ = [(L_1+2B_1) × (L_2+2B_2)] | | The truncated image y_tru is defined in the support FOV = [(L_1−2B_1) × (L_2−2B_2)]; the rest are zeros up to the size L̃ |

Restoration

| size{d^ξ}, size{d^μ} | size{D_a^ξ}, size{D_a^μ} | size{D_a^ξ x}, size{D_a^μ x} | size{trunc{D_a^ξ x}}, size{trunc{D_a^μ x}} |
|---|---|---|---|
| N × 1 | U × L | U × 1 | U × 1 |
| N = [N_1 × N_2] | U = [(L_1+N_1−1) × (L_2+N_2−1)] | | The truncated images D_a^ξ x and D_a^μ x are defined in the support [(L_1−N_1+1) × (L_2−N_2+1)]; the rest are zeros up to the size U |
To address this problem, neural networks are particularly well suited owing to their nonlinear mapping ability and self-adaptiveness. In fact, the Hopfield network has been used in the literature to solve the optimization problem (9), and recent studies provide neural network solutions to the TV regularization (10), as in [16, 17]. In this article, we present a simple solution to the TV-based problem by means of an MLP with back-propagation. Previous research by the authors [10] showed that the MLP can also operate with the ${\ell}_{2}$ term of (9).
3. Definition of the MLP approach
At every iteration, the neural net works by simulating both an approach to the degradation process (backward) and an approach to the restoration solution (forward), while refining the results according to an optimization criterion. However, the input to the net is always the image y_tru, as no net training is required. Let us remark that we use the "backward" and "forward" concepts in the opposite sense to a standard image restoration problem, due to the specific architecture of the net.
During the back-propagation process, the network iteratively minimizes a regularized error function, which we will set to expression (12) in the following sections. Since the trunc{·} operator is involved in those expressions, the boundaries are truncated at every iteration but also reconstructed, as deduced from the $\stackrel{\u0303}{L}$ size at the input (though it is really defined in the FOV, since the remaining pixels are zeros) and the L size at the output. What deserves attention is that no a priori knowledge, assumption, or estimation concerning the unknown borders is needed to perform this regeneration. This can be explained by the behavior of the neural net, which is able to learn the degradation model. A restored image is therefore obtained in real conditions on the basis of a global energy minimization strategy, with reconstructed borders, while the center of the image is adapted to the optimum solution, thus making the ringing artifact negligible.
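The trunc{·} operator described above admits a compact sketch (sizes as in Table 1; the function name is ours): everything outside the FOV of the full-convolution result is fixed to zero.

```python
import numpy as np

def trunc(z_full, B1, B2):
    """Truncation operator: given a full ('aperiodic') convolution result
    of size (L1+2*B1) x (L2+2*B2), keep only the central field of view
    FOV = (L1-2*B1) x (L2-2*B2) and zero the surrounding frame of width
    2*B1 (resp. 2*B2), matching the y_tru definition of Table 1.
    Assumes B1, B2 >= 1."""
    out = np.zeros_like(z_full)
    out[2*B1:-2*B1, 2*B2:-2*B2] = z_full[2*B1:-2*B1, 2*B2:-2*B2]
    return out
```

The boundary pixels of the degraded image are thus simply discarded, which is why the algorithm must regenerate them instead of assuming any BC.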
which is defined in the domain 0 ≤ φ{·} ≤ 1.
where i and i+1 are superscripts denoting two consecutive layers of the net. Although this layer superscript should be appended to all variables, for notational simplicity we shall omit it from the formulae of the manuscript when it is clear from context.
4. Adjustment of the neural net
In this section, our purpose is to show the procedure for adjusting the interconnection weights as the MLP iterates. A variant of the well-known back-propagation algorithm is applied to solve the optimization problem in (12).
where E(m) stands for the restoration error after m iterations at the output of the net and the constant η indicates the learning speed. Let us compute now the so-called gradient matrix $\frac{\partial E\left(m\right)}{\partial {\mathit{W}}^{i}\left(m\right)}$ in the different layers of the MLP.
4.1 Output layer
where TV stands for the well-known TV regularizer and ε > 0 is a constant to avoid singularities when minimizing. Both products ${\mathit{D}}_{\mathit{a}}^{\mathit{\xi}}\widehat{\mathit{x}}\left(m\right)$ and ${\mathit{D}}_{\mathit{a}}^{\mathit{\mu}}\widehat{\mathit{x}}\left(m\right)$ are subscripted by k, meaning the k th element of the respective U × 1 sized vector (see Table 1). It should be mentioned that the ${\ell}_{1}$ norm and TV regularizations are quite often used interchangeably in the literature. However, the distinction between these two regularizers should be kept in mind since, at least in deconvolution problems, TV leads to significantly better results, as illustrated in [18].
such that the operator trunc{·} is applied individually to ${\mathit{D}}_{\mathit{a}}^{\mathit{\xi}}$ and ${\mathit{D}}_{\mathit{a}}^{\mathit{\mu}}$ (see Table 1) and merged later as indicated in the definition of (26).
where ○ denotes the Hadamard (elementwise) product.
Summary of dimensions for the output layer
Output layer | |
---|---|
size{p(m)} | p(m) = z^{i-1}(m)⇒ size{p(m)} = S^{i-1}× 1 |
size{W(m)} | L × S^{i-1} |
size{v(m)} | L × 1 |
size{z(m)} | $\mathit{z}\left(m\right)=\widehat{\mathit{x}}\left(m\right)\Rightarrow \mathsf{\text{size}}\left\{\mathit{z}\left(m\right)\right\}=L\times 1$ |
size{e(m)} | $\stackrel{\u0303}{L}\times 1$ |
size{r(m)} | size{D_{ a }} = 2U × L⇒size{r(m)} = 2U × 1 and size{Ω} = 2U × 2U |
size{δ(m)} | L × 1 |
4.2 Any i hidden layer
which is mainly based on the local gradient δ^{i+1}(m) of the following connected layer i+1.
4.3 Algorithm
As described in Section 3, our MLP neural net performs a pair of forward and backward passes at every iteration m. First, the whole set of connected layers propagates the degraded image y from the input to the output layer by means of Equation 14. Afterwards, the new synaptic weight matrices W^{ i }(m+1) are recalculated from right to left according to the expressions of ΔW^{ i }(m+1) for every layer.
Algorithm: MLP with TV regularizer
Initialization: p^{1} = y ∀m and W^{ i }(0) = 0, 1 ≤ i ≤ J
1: m: = 0
2: while StopRule not satisfied do
3: for i: = 1 to J do /* Forward */
4: v^{ i }: = W^{ i }p^{ i }
5: z^{ i }: = φ{v^{ i }}
6: end for /* $\widehat{\mathit{x}}\left(m\right):={\mathit{z}}^{J}$ */
7: for i: = J to 1 do /* Backward */
8: if i = J then /* Output layer */
9: Compute δ^{ J }(m) from (31)
10: Compute E(m) from (29)
11: else
12: δ^{ i }(m): = φ'{v^{ i }(m)}○((W^{i+1}(m))^{ T }δ^{i+1}(m))
13: end if
14: ΔW^{ i }(m+1): = -η δ^{ i }(m)(z^{i-1}(m))^{ T }
15: W^{ i }(m+1): = W^{ i }(m)+ΔW^{ i }(m+1)
16: end for
17: m: = m+1
18: end while /* $\widehat{\mathit{x}}:=\widehat{\mathit{x}}\left({m}_{total}\right)$ */
The previous pseudo-code summarizes our proposed algorithm in an MLP of J layers. StopRule denotes a condition such that either the number of iterations exceeds a maximum; or the error E(m) converges and, thus, the error change ΔE(m) is less than a threshold; or this error E(m) starts to increase. If one of these conditions holds, the algorithm concludes and the final outgoing image is the restored image $\widehat{\mathsf{\text{x}}}:=\widehat{\mathsf{\text{x}}}\left({m}_{\mathsf{\text{total}}}\right)$.
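As a rough illustration, the pseudo-code above can be transcribed as follows. This is a toy sketch with J = 2 layers and a log-sigmoid φ; since expression (31) is not reproduced here, a plain data-fit residual stands in for the TV-based output-layer gradient, and all names are ours.

```python
import numpy as np

def logsig(v):
    """phi: log-sigmoid activation, 0 <= phi <= 1."""
    return 1.0 / (1.0 + np.exp(-v))

def mlp_restore(y, L, S1=2, eta=2.0, iters=50, tol=1e-3):
    """Skeleton of the Algorithm: zero-initialized weights, the same
    input y at every iteration, forward pass then backward weight
    update DW = -eta * delta * z^T, and a StopRule on the error change.
    The output-layer gradient below is a placeholder for (31)."""
    W1 = np.zeros((S1, y.size))          # hidden layer weights
    W2 = np.zeros((L, S1))               # output layer weights
    E_prev = np.inf
    x_hat = None
    for m in range(iters):
        # forward pass: v^i = W^i p^i, z^i = phi{v^i}
        v1 = W1 @ y
        z1 = logsig(v1)
        v2 = W2 @ z1
        x_hat = logsig(v2)
        # backward pass: stand-in residual and error (placeholder for (29)/(31))
        e = x_hat - y[:L]
        E = float(np.sum(e**2))
        d2 = (x_hat * (1.0 - x_hat)) * e                 # phi'{v} o e
        d1 = (z1 * (1.0 - z1)) * (W2.T @ d2)             # hidden-layer delta
        W2 -= eta * np.outer(d2, z1)                     # W(m+1) = W(m) + DW
        W1 -= eta * np.outer(d1, y)
        if abs(E_prev - E) < tol:                        # StopRule
            break
        E_prev = E
    return x_hat
```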
4.4. Reconstruction of boundaries
4.5 Adjustment of λ and η
In the image restoration field, it is well known how important the parameter λ becomes. In fact, too small values of λ yield overly oscillatory estimates owing to either noise or discontinuities; too large values of λ yield over smoothed estimates.
For that reason, the literature has given significant attention to it, with popular approaches such as the unbiased predictive risk estimator (UPRE), the generalized cross validation (GCV), or the L-curve method; see [27] for an overview and references. Most of them were particularized for a Tikhonov regularizer, but recent research aims to provide solutions for TV regularization. Specifically, the Bayesian framework leads to successful approaches in this field.
In our previous article [10], we adjusted λ with solutions coming from the Bayesian state of the art. However, we still need to investigate a particular algorithm for the MLP, since those Bayesian approaches work only for circulant degradation models, not for the truncated image of this article. For now, we shall use a hand-tuned λ which optimizes the results.
Regarding the learning speed, it was already demonstrated that η shows lower sensitivity compared to λ. In fact, its main purpose is to speed up or slow down the convergence of the algorithm. For the sake of simplicity, we shall assume η = 2 for images of size 256 × 256.
5. Experimental results
Our previous article [10] showed a wide set of results which mainly demonstrated the good performance of the MLP in terms of image restoration. We shall focus now on its ability to reconstruct the boundaries using standard 256 × 256 sized images such as Lena or Barbara and common PSFs, some of which are presented here (diagonal motion, uniform, or Gaussian blur).
In the light of the expression (18), we define the gradient filters d^{ ξ } and d^{ μ } as the respective horizontal and vertical Sobel masks [1]
$\frac{1}{4}\left[\begin{array}{ccc}\hfill -1\hfill & \hfill -2\hfill & \hfill -1\hfill \\ \hfill 0\hfill & \hfill 0\hfill & \hfill 0\hfill \\ \hfill 1\hfill & \hfill 2\hfill & \hfill 1\hfill \end{array}\right]$ and $\frac{1}{4}\left[\begin{array}{ccc}\hfill -1\hfill & \hfill 0\hfill & \hfill 1\hfill \\ \hfill -2\hfill & \hfill 0\hfill & \hfill 2\hfill \\ \hfill -1\hfill & \hfill 0\hfill & \hfill 1\hfill \end{array}\right]$
consequently N = 3 × 3.
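Applied as a full ("aperiodic") convolution, these masks produce the U-sized gradient images of Table 1. A small numpy sketch (names ours):

```python
import numpy as np

# Sobel masks d_xi (vertical-direction differences) and d_mu
# (horizontal-direction differences), as given in the text
D_XI = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]]) / 4.0
D_MU = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]) / 4.0

def sobel_gradients(x):
    """Full (zero-padded, 'aperiodic') convolution of x with both masks;
    the output size is (L1+2) x (L2+2), matching U in Table 1 for N = 3x3."""
    L1, L2 = x.shape
    xp = np.pad(x, 2)                       # room for the full convolution
    gx = np.zeros((L1 + 2, L2 + 2))
    gy = np.zeros_like(gx)
    fxi = D_XI[::-1, ::-1]                  # flip -> true convolution
    fmu = D_MU[::-1, ::-1]
    for i in range(L1 + 2):
        for j in range(L2 + 2):
            win = xp[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(win * fxi)
            gy[i, j] = np.sum(win * fmu)
    return gx, gy
```

On a constant image both gradients vanish in the interior, while a linear ramp gives a constant response, as expected from first-order difference operators.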
As observed in Figure 5, the neural net under analysis consists of two layers (J = 2), where the bias vectors are ignored and the same log-sigmoid function is applied in both layers. Besides, seeking a tradeoff between good quality results and computational complexity, only two neurons take part in the hidden layer, i.e., S^1 = 2.
In terms of parameters, we previously mentioned that the learning speed of the net is set to η = 2 and the regularization parameter λ is hand-tuned. Regarding the interconnection weights, no network training is required, so the weight matrices are all initialized to zero. Finally, we set the stopping criterion in the Algorithm as a maximum of 500 iterations (though never reached) or the moment when the relative difference of the restoration error E(m) falls below a threshold of 10^{-3} in a temporal window of 10 iterations.
The Gaussian noise level is established according to a BSNR (signal-to-noise ratio of the blurred image) of 20 dB, so that the regularization term of (19) becomes relevant in the restoration result, i.e., sufficiently high values of the parameter λ are required.
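One common way to realize this setting, assuming the usual definition BSNR = 10 log10(var{blurred}/σ²), is to derive the noise standard deviation from the target BSNR (the function name is ours):

```python
import numpy as np

def noise_sigma_for_bsnr(blurred, bsnr_db):
    """Noise std giving a target BSNR under the common definition
    BSNR = 10*log10( var(blurred image) / sigma^2 )."""
    return np.sqrt(np.var(blurred) / 10.0**(bsnr_db / 10.0))

rng = np.random.default_rng(0)
g = rng.normal(size=(64, 64))                # stand-in for a blurred image
sigma = noise_sigma_for_bsnr(g, 20.0)        # sigma for BSNR = 20 dB
noisy = g + rng.normal(0.0, sigma, g.shape)  # degraded observation
```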
considering an 8-bit gray-scaled image.
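The exact definitions of σ_e and BPSNR appear in equations omitted from this excerpt. Under our reading, with σ_e the RMSE of the restoration error and BPSNR a PSNR restricted to the boundary frame of an 8-bit image (peak 255), they can be sketched as follows; both function names and the frame-width parameter are ours:

```python
import numpy as np

def sigma_e(x_true, x_hat):
    """Restoration error: RMSE between true and restored images
    (our reading of the omitted definition of sigma_e)."""
    return np.sqrt(np.mean((x_true.astype(float) - x_hat) ** 2))

def bpsnr(x_true, x_hat, b):
    """PSNR with 255 peak (8-bit gray-scale) computed only on the
    boundary frame of width b (our reading of BPSNR)."""
    mask = np.ones(x_true.shape, dtype=bool)
    mask[b:-b, b:-b] = False                 # keep only the frame
    mse = np.mean((x_true.astype(float) - x_hat)[mask] ** 2)
    return 10.0 * np.log10(255.0**2 / mse)
```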
Our proposed MLP scheme was fully implemented in Matlab, which is well suited since all formulae of this article have been presented in matrix form. The complexity of the net can be analyzed in the two stages which describe the algorithm: the forward pass (FP) and the backward pass (BP). The computation of the gradient δ(m) in the output layer makes the BP more time-consuming, as shown in (31). In those equations, the product $\text{trunc}\left\{{\mathit{H}}_{\mathit{a}}\widehat{\mathit{x}}\left(m\right)\right\}$ is the most critical term, as it requires numerical computations of O(L^2), although the operator trunc{·} discards (zero-fixes) L_1 × 8B_1 operations (assuming B_1 = B_2 and L_1 = L_2). However, this high computational cost is significantly reduced thanks to the sparsity of H_{ a }, so the effective cost depends only on the number of non-zero elements. Regarding the FP, the two neurons of the hidden layer lead to faster matrix operations of O(2L).
In regard to convergence, our MLP is based on the simple steepest descent algorithm as defined in (16). Consequently, convergence is usually slow and controlled by the parameter η. We are aware that other variations on back-propagation may be applied to our MLP, such as the conjugate gradient algorithm, which performs significantly better [28]. Finally, we mention that the experiments were run on a 2.4-GHz Intel Core2Duo with 2 GB of RAM. For a detailed timing analysis, we refer the reader to the previous article [10].
5.1. Experiment 1
In a first experiment, we aim to obtain numerical results of the boundary reconstruction process for different sizes of degradation. Let us take the original images Lena and Barbara degraded by diagonal motion and uniform blurs. The motion blur is set to 45° of inclination and the length in shifted pixels is varied between 5 and 15. We have used Matlab's approximation to construct the motion filter (http://www.mathworks.com/help/toolbox/images/ref/fspecial.html), which leads to masks between 5 × 5 and 11 × 11 in size. Analogously, the uniform blur is defined with odd sizes between 5 × 5 and 11 × 11. Let us recall that Gaussian noise is added to the blurred image such that BSNR = 20 dB.
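The blur masks of this experiment can be approximated as below; note that Matlab's fspecial('motion', ...) additionally applies subpixel anti-aliasing, so the 45° kernel here is only an idealized stand-in, and both function names are ours.

```python
import numpy as np

def uniform_kernel(n):
    """n x n uniform (boxcar) blur mask, n odd, normalized to sum 1."""
    return np.full((n, n), 1.0 / (n * n))

def motion45_kernel(length):
    """Idealized 45-degree motion blur of a given length: equal weights
    along the anti-diagonal, normalized.  Matlab's fspecial('motion')
    adds subpixel anti-aliasing on top of this simple model."""
    k = np.eye(length)[::-1]
    return k / k.sum()
```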
Numerical values of σ_e and of the boundary parameters Bσ_e and BPSNR for different sizes of degradation
| Length | Size | σ_e (Lena) | Bσ_e (Lena) | BPSNR (Lena, dB) | σ_e (Barbara) | Bσ_e (Barbara) | BPSNR (Barbara, dB) |
|---|---|---|---|---|---|---|---|
| Diagonal motion blur | | | | | | | |
| 5 | 5 × 5 | 8.70 | 24.59 | 20.29 | 11.49 | 27.17 | 19.43 |
| 6 | 5 × 5 | 8.70 | 20.58 | 21.84 | 11.53 | 22.76 | 20.97 |
| 7 | 7 × 7 | 10.35 | 27.23 | 19.42 | 12.92 | 30.36 | 18.44 |
| 8 | 7 × 7 | 10.25 | 24.05 | 20.50 | 13.18 | 27.18 | 19.39 |
| 9 | 7 × 7 | 10.26 | 20.96 | 21.70 | 13.32 | 24.30 | 20.36 |
| 10 | 9 × 9 | 11.62 | 26.04 | 19.81 | 14.64 | 29.81 | 18.57 |
| 11 | 9 × 9 | 11.50 | 23.36 | 20.76 | 14.80 | 27.17 | 19.36 |
| 12 | 9 × 9 | 11.51 | 20.85 | 21.74 | 14.89 | 24.90 | 20.13 |
| 13 | 11 × 11 | 12.78 | 25.85 | 19.87 | 16.11 | 29.76 | 18.58 |
| 14 | 11 × 11 | 12.61 | 23.15 | 20.83 | 16.15 | 27.33 | 19.34 |
| 15 | 11 × 11 | 12.63 | 21.10 | 21.63 | 16.19 | 25.71 | 19.89 |
| Uniform blur | | | | | | | |
| | 5 × 5 | 8.90 | 17.29 | 23.36 | 12.20 | 19.59 | 22.26 |
| | 7 × 7 | 11.32 | 19.64 | 22.27 | 14.13 | 22.08 | 21.16 |
| | 9 × 9 | 13.20 | 20.64 | 21.83 | 15.80 | 23.17 | 20.74 |
| | 11 × 11 | 14.69 | 22.27 | 21.17 | 17.25 | 25.22 | 20.06 |
Comparing the blurs in both images, we want to highlight the better boundary reconstruction results for the uniform blur despite the worse values of σ_e. It is therefore plausible that the MLP handles the restoration of the image center somewhat differently from the boundary restoration. In fact, the restoration of the center is a linear process defined by the regularization expression (29), but the boundary reconstruction comes from a nonlinear truncation which requires different behavior.
Finally, let us comment on the improved regeneration of borders in the motion blur for a specific mask size when the length in pixels increases. Although we know it is a consequence of how the motion blur is modeled, we can deduce the dependency of the MLP on the structure of the PSF when reconstructing the boundaries.
5.2. Experiment 2
In each of the figures, we have included a gray-scale image which represents the evolution of the restoration error in square blocks. Specifically, it corresponds to the parameter σ_e, where the brighter regions are the lower values of σ_e, that is to say, the pixels with a better quality of restoration. We want to highlight the smooth transition of the restoration error in the boundary area due to the regeneration of borders. On the other hand, the center of the image shows the lowest restoration errors, as expected from the global energy minimization of the MLP.
5.3. Experiment 3
This experiment aims to compare the performance of the MLP with other restoration algorithms which need BCs to deal with a realistic capture model: zero, periodic, reflective, and anti-reflective, as described in Section 2. We have used the well-known RestoreTools library (http://www.mathcs.emory.edu/~nagy/RestoreTools) patched with the anti-reflective modification (http://scienze-como.uninsubria.it/mdonatelli/), which implements the matrix-vector operations for every boundary condition. In particular, we have selected a modified version of the Tikhonov regularization (9), named hybrid bidiagonalization regularization (HyBR) in the library.
The restored image of our MLP algorithm is shown in Figure 10f and demonstrates the good performance of the neural net. First, the boundary ringing is negligible without any prior assumption on the boundary condition. Moreover, the visual aspect is better compared to the others, which supports the good properties of the TV regularizer. To numerically contrast the results, the parameter σ_e of the MLP is measured only in the FOV region. It leads to σ_e = 12.47, which is notably lower than the values of the HyBR algorithm (e.g., σ_e = 12.99 for the reflective BCs). Finally, the MLP is able to reconstruct the 253 × 12 sized boundary region, recovering the original image size of 256 × 256.
5.4. Experiment 4
where x^{+} is the extended image of length $\stackrel{\u0303}{L}$ and x_1 is the restricted image defined in the FOV of (5). It can be deduced that H_{ 1 } and H_{ 2 } are matrices of size FOV × FOV and $\mathsf{\text{FOV}}\times \left(\stackrel{\u0303}{L}-L\right)$, respectively, and x_{ 2 } is the image vector in the boundary frame of length $\stackrel{\u0303}{L}-L$. The extrapolation approach of these methods establishes an adequate prior p(x^{+}) which models the entire image, and the restored distribution p(x^{+}|y_real) is estimated according to the Bayesian framework. We particularly select the region L of the restored image ${\widehat{\mathit{x}}}^{+}$.
For this section we will extract results from the Extrapolation algorithm of Bishop, whom we would like to thank for his close collaboration, using a TV prior for p(x^{+}). It is demonstrated in [29] that the figures obtained for Calvetti's algorithm would be equivalent. On the other hand, we will upgrade our proposed MLP to leverage the concept of extended observation. First, this means that the input of our neural net is actually the real observed image y_real, not the truncated model y_tru, so the input layer consists of FOV neurons. The structure of the MLP remains unaltered in terms of hidden and output layers, yielding a restored image $\widehat{\mathit{x}}$ of size L. Finally, we remove the operator trunc{·} from all formulae of Section 4, assuming an aperiodic (zero-padded) model of the extended image x^{+}. We do not lose generality, as the input is the real image y_real and the MLP likewise has to reconstruct the boundary region B.
Let us take the blurs of the previous experiments: uniform, Gaussian, and motion masks of 7 × 7. Tests are computed with the Barbara image and a noise ratio of BSNR = 20 dB. To maximize the results of the MLP, we choose the parameters λ and η on a hand-tuning basis. Again, the performance of the algorithms is measured with the restoration errors in the whole image σ_e and in the boundary region Bσ_e. In this experiment, we also include the equivalent error in the FOV, denoted as Fσ_e.
Numerical values of σ_e, Bσ_e, and Fσ_e comparing the extrapolation algorithm of Bishop and our MLP with different 7 × 7 sized blurs
Blur | σ_{ e } | B σ_{ e } | F σ_{ e } |
---|---|---|---|
Extrapolation | |||
Uniform | 13.23 | 17.43 | 12.99 |
Gaussian | 12.49 | 17.79 | 12.18 |
Motion | 11.37 | 17.63 | 10.97 |
MLP | |||
Uniform | 13.53 | 15.05 | 13.45 |
Gaussian | 12.33 | 14.13 | 12.24 |
Motion | 11.33 | 12.58 | 11.27 |
Numerical values of Bσ_e comparing the extrapolation algorithm of Bishop and our MLP with different sizes of the Gaussian blur
| Gaussian blur size | Extrapolation Bσ_e | MLP Bσ_e |
|---|---|---|
7 × 7 | 17.79 | 14.13 |
9 × 9 | 18.28 | 14.23 |
11 × 11 | 17.93 | 14.18 |
13 × 13 | 17.67 | 13.86 |
15 × 15 | 17.40 | 13.71 |
17 × 17 | 17.12 | 13.51 |
6. Concluding remarks
In this article, we have presented the implementation of a method which allows restoring the boundary area of a real truncated image without prior conditions. The idea is to apply a TV-based regularization function in an iterative minimization of an MLP neural net. An inherent backpropagation algorithm has been developed in order to regenerate the lost borders, while adapting the center of the image to the optimum linear solution (the ringing artifact thus being negligible).
The proposed restoration scheme has been validated by means of several tests. As a result, we conclude that our neural net is able to reconstruct the boundaries of the image, with BPSNR values that depend on the type of blur.
References
1. Gonzalez RC, Woods RE: Digital Image Processing. 3rd edition. Prentice Hall; 2008.
2. Banham MR, Katsaggelos AK: Digital image restoration. IEEE Signal Process Mag 1997, 14(2):24-41.
3. Bovik AC: Handbook of Image & Video Processing. 2nd edition. Elsevier; 2005.
4. Chan TF, Shen J: Image Processing and Analysis: Variational, PDE, Wavelet, and Stochastic Methods. Frontiers in Applied Mathematics, SIAM; 2005.
5. Woods J, Biemond J, Tekalp A: Boundary value problem in image restoration. IEEE International Conference on Acoustics, Speech and Signal Processing 1985, 10:692-695.
6. Calvetti D, Somersalo E: Statistical elimination of boundary artefacts in image deblurring. Inverse Probl 2005, 21(5):1697-1714.
7. Martinelli A, Donatelli M, Estatico C, Serra-Capizzano S: Improved image deblurring with anti-reflective boundary conditions and re-blurring. Inverse Probl 2006, 22(6):2035-2053.
8. Liu R, Jia J: Reducing boundary artifacts in image deconvolution. International Conference on Image Processing 2008, 505-508.
9. Bernués E, Cisneros G, Capella M: Truncated edges estimation using MLP neural nets applied to regularized image restoration. International Conference on Image Processing 2002, 1:I-341-I-344.
10. Santiago MA, Cisneros G, Bernués E: An MLP neural net with L1 and L2 regularizers for real conditions of deblurring. EURASIP J Adv Signal Process 2010, 2010:18. Article ID 394615.
11. Paik JK, Katsaggelos AK: Image restoration using a modified Hopfield network. IEEE Trans Image Process 1992, 1(1):49-63.
12. Sun Y: Hopfield neural network based algorithms for image restoration and reconstruction--Part I: algorithms and simulations. IEEE Trans Signal Process 2000, 48(7):2119-2131.
13. Han YB, Wu LN: Image restoration using a modified Hopfield neural network of continuous state change. Signal Process 2004, 12(3):431-435.
14. Perry SW, Guan L: Weight assignment for adaptive image restoration by neural networks. IEEE Trans Neural Netw 2000, 11(1):156-170.
15. Wong H, Guan L: A neural learning approach for adaptive image restoration using a fuzzy model-based network architecture. IEEE Trans Neural Netw 2001, 12(3):516-531.
16. Wang J, Liao X, Yi Z: Image restoration using Hopfield neural network based on total variational model. ISNN 2005, LNCS 3497. Springer, Berlin; 2005:735-740.
17. Wu Y-D, Sun Y, Zhang H-Y, Sun S-X: Variational PDE based image restoration using neural network. IET Image Process 2007, 1(1):85-93.
18. Bioucas-Dias J, Figueiredo M, Oliveira JP: Total variation-based image deconvolution: a majorization-minimization approach. IEEE International Conference on Acoustics, Speech and Signal Processing 2006, 2:861-864.
19. Oliveira JP, Bioucas-Dias J, Figueiredo M: Adaptive total variation image deblurring: a majorization-minimization approach. Signal Process 2009, 89(9):2479-2493.
20. Molina R, Mateos J, Katsaggelos AK: Blind deconvolution using a variational approach to parameter, image and blur estimation. IEEE Trans Image Process 2006, 15(12):3715-3727.
21. Santiago MA, Cisneros G, Bernués E: Iterative desensitisation of image restoration filters under wrong PSF and noise estimates. EURASIP J Adv Signal Process 2007, 2007:18. Article ID 72658.
22. Bertero M, Boccacci P: Introduction to Inverse Problems in Imaging. Institute of Physics Publishing; 1998.
23. Ng M, Chan RH, Tang WC: A fast algorithm for deblurring models with Neumann boundary conditions. SIAM J Sci Comput 1999, 21:851-866.
24. Rudin L, Osher S, Fatemi E: Nonlinear total variation based noise removal algorithms. Physica D 1992, 60:259-268.
25. Petersen KB, Pedersen MS: The Matrix Cookbook. Last update 2008. [http://matrixcookbook.com/]
26. Felippa CA: Introduction to Finite Element Methods. [http://www.colorado.edu/engineering/cas/courses.d/IFEM.d/]
27. Vogel CR: Computational Methods for Inverse Problems. Frontiers in Applied Mathematics, SIAM; 2002.
28. Hagan MT, Demuth HB, Beale MH: Neural Network Design. PWS Publishing; 1996.
29. Bishop TE: Blind image deconvolution: nonstationary Bayesian approaches to restoring blurred photos. PhD thesis, University of Edinburgh; 2008.
30. Calvetti D, Somersalo E: Bayesian image deblurring and boundary effects. Proc SPIE 2005, 5910:59100X.
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.