Boundary reconstruction process of a TV-based neural net without prior conditions

Abstract

Image restoration aims to recover an image within a given domain from a blurred and noisy acquisition. However, the convolution operator which models the degradation is truncated in a real observation, causing significant artifacts in the restored results. Typically, some assumptions are made about the boundary conditions (BCs) outside the field of view to reduce the ringing. We propose instead a restoration method without prior conditions which reconstructs the boundary region while making the ringing artifact negligible. The algorithm of this article is based on a multilayer perceptron (MLP) which minimizes a truncated version of the total variation regularizer using a back-propagation strategy. Various experiments demonstrate the novelty of the MLP in the boundary restoration process, with neither image information nor any prior assumption on the BCs.

1. Introduction

Restoration of blurred and noisy images is a classical problem arising in many applications, including astronomy, biomedical imaging, and computerized tomography [1]. The aim is to invert the degradation introduced by the capture device, but the underlying process is mathematically ill posed and leads to a highly noise-sensitive solution. A large number of techniques have been developed to cope with this issue, most of them within the regularization or Bayesian frameworks (a complete review can be found in [2-4]).

The degraded image is generally modeled as a convolution of the unknown true image with a linear point spread function (PSF), along with the effects of additive noise. The non-local property of the convolution implies that part of the blurred image near the boundary integrates information of the original scenery outside the field of view. However, this information is not available in the deconvolution process and may cause strong ringing artifacts in the restored image, i.e., the well-known boundary problem [5]. A typical way to counteract the boundary effect is to make assumptions about the behavior of the original image outside the field of view, such as Dirichlet, Neumann, periodic, or other recent conditions [6-8]. The result of restoration with these methods is an image defined in the field-of-view (FOV) domain, but it lacks the boundary area which is actually present in the true image.

In this article we present a restoration method which deals with a blurred image defined in the FOV, but with neither image information nor any prior assumption on the boundary conditions (BCs). Furthermore, the objective is not only to reduce the ringing artifacts in the whole image, but also to reconstruct the missing boundaries of the original image.

1.1. Contribution

In recent studies [9, 10], we have developed an algorithm using a multilayer perceptron (MLP) to restore a real image without relying on the typical BCs of the literature. The main idea is to model the blurred image as a truncation of the convolution operator, where the boundaries have been removed and are not further used in the algorithm.

A first step of our neural net was given in a previous study [9] using the standard ℓ2 norm in the energy function, as done in other regularization algorithms [11-15]. However, the success of the total variation (TV) in deconvolution [16-20] motivated its incorporation into the MLP. By means of matrix algebra and the approximation of the TV operator with the majorization-minimization (MM) algorithm of [19], we presented a newer version of the MLP [10] for both ℓ1 and ℓ2 regularizers, mainly devoted to comparing the truncation model with the traditional BCs.

Now we analyze the TV-based MLP with the purpose of examining the boundary restoration process in depth. In general, the neural network is very well suited to learning the degradation model and then restoring the borders even though no blurred data are available there. Besides, the algorithm adapts the energy optimization to the whole image and makes the ringing artifact negligible.

Finally, let us recall that our MLP is somewhat based on the same algorithmic basis presented by the authors for the desensitization problem [21]. In fact, our MLP simulates at every iteration an approach to both the degradation (backward) and the restoration (forward) processes, thus extending the same iterative concept but applied to a nonlinear problem.

1.2 Paper organization

This article is structured as follows. In the next section, we provide a detailed formulation of the problem, establishing the naming conventions and the energy function to be minimized. In Section 3, we present the architecture of the neural net under analysis. Section 4 describes the adjustment of its synaptic weights in every layer and outlines the reconstruction of boundaries. We present some experimental results in Section 5 and, finally, concluding remarks are given in Section 6.

2. Problem formulation

Let h(i, j) be any generic two-dimensional degradation filter mask (PSF, usually a shift-invariant low-pass filter) and x(i, j) the unknown original image, which can be lexicographically represented by the vectors h and x

\mathbf{h} = \left[ h_1, h_2, \ldots, h_M \right]^T, \qquad \mathbf{x} = \left[ x_1, x_2, \ldots, x_L \right]^T
(1)

where M = M1 × M2 and L = L1 × L2 are the two-dimensional supports which define the PSF and the original image, respectively. Let B1 and B2 be the horizontal and vertical bandwidths of the PSF mask; then we can rewrite the support M as (2B1 + 1) × (2B2 + 1).

A classical formulation of the degradation model (blur and noise) in an image restoration problem is given by

y = H x + n
(2)

where H is the blurring matrix corresponding to the filter mask h of (1), y is the observed image (blurred and noisy image) and n is a sample of a zero mean white Gaussian additive noise of variance σ2.

The matrix H can generally be expressed as

H = T + B
(3)

where T has a Toeplitz structure and B, which is defined by the BCs, is often structured, sparse and low rank. BCs make assumptions about how the observed image behaves outside the FOV, and they are often chosen for algebraic and computational convenience. The following cases are commonly referenced in the literature (a short padding sketch after these descriptions illustrates how each one extends the image):

Zero BCs [22], also known as Dirichlet BCs, impose a black boundary so that the matrix B is all zeros and, therefore, H has a block Toeplitz with Toeplitz blocks (BTTB) structure. This implies an artificial discontinuity at the borders which can lead to serious ringing effects.

Periodic BCs [22] assume that the scene can be represented as an infinite mosaic of a single image, repeated periodically in all directions. The resulting matrix H is block circulant with circulant blocks (BCCB), which can be diagonalized by the unitary discrete Fourier transform and leads to a restoration problem implemented by FFTs. Although computationally convenient, this cannot actually represent a physical observed image and still produces ringing artifacts.

Reflective BCs [23], also known as Neumann BCs, reflect the image like a mirror with respect to the boundaries. In this case, the matrix H has a Toeplitz-plus-Hankel structure which can be diagonalized by the orthonormal discrete cosine transform if the PSF is symmetric. As these conditions maintain the continuity of the gray level of the image, the ringing effects are reduced in the restoration process.

Anti-reflective BCs [7] similarly reflect the image with respect to the boundaries, but using central symmetry instead of the axial symmetry of the reflective BCs. Both the continuity of the image and of its normal derivative are preserved at the boundary, leading to an important reduction of ringing. The structure of H is Toeplitz-plus-Hankel plus a structured rank-2 matrix, which can also be implemented efficiently if the PSF satisfies a strong symmetry condition.
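
For intuition, the zero, periodic and reflective conditions map directly onto standard padding modes, while the anti-reflective extension has to be built by hand. The following minimal NumPy sketch is an illustration added here (the toy image and pad width are arbitrary), not part of the original algorithm:

```python
import numpy as np

img = np.arange(16, dtype=float).reshape(4, 4)   # toy image
pad = 2                                          # e.g. the PSF bandwidth B1

# Zero (Dirichlet), periodic and reflective BCs as standard padding modes
zero_bc       = np.pad(img, pad, mode="constant", constant_values=0)
periodic_bc   = np.pad(img, pad, mode="wrap")
reflective_bc = np.pad(img, pad, mode="symmetric")   # mirror about the edge

def antireflective_1d(v, pad):
    """Anti-reflective extension of a 1-D signal: central symmetry about the
    boundary sample, v[-j] = 2*v[0] - v[j], which preserves the first derivative."""
    left = 2.0 * v[0] - v[1:pad + 1][::-1]
    right = 2.0 * v[-1] - v[-pad - 1:-1][::-1]
    return np.concatenate([left, v, right])

# Applied separably (first along rows, then along columns)
tmp = np.apply_along_axis(antireflective_1d, 1, img, pad)
antireflective_bc = np.apply_along_axis(antireflective_1d, 0, tmp, pad)
```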

BCs are required to manage the non-local property of the convolution operator, which makes (2) an underdetermined problem in the sense that we have fewer data points than unknowns to explain it. In fact, the matrix product Hx yields a vector y of length L̃, where H is L̃ × L in size and the value of L̃ is greater than the original size L

\tilde{L} = (L_1 + 2B_1) \times (L_2 + 2B_2)
(4)

for linear convolution (aperiodic model).

Then, we obtain a degraded image y of support L̃ with pixels integrated from the BCs; however, these are not actually present in a real observation. Figure 1 illustrates the boundary regions that result after shifting the PSF mask throughout the entire image, and defines the region FOV as

Figure 1. Real observed image, which truncates the borders that appear due to the non-local property of the linear convolution.

\mathrm{FOV} = (L_1 - 2B_1) \times (L_2 - 2B_2) \subset \tilde{L}
(5)

A real observed image yreal is therefore a truncation of the degradation model down to the size of the FOV support. In our algorithm, we define an image ytru which represents this observed image yreal by means of a truncation of the aperiodic model

\mathbf{y}_{\mathrm{tru}} = \mathrm{trunc}\left\{ \mathbf{H}_a \mathbf{x} + \mathbf{n} \right\}
(6)

where Ha is the blurring matrix for the aperiodic model and the operator trunc{·} is responsible for removing (zero-fixing) the borders that appear due to the BCs, that is to say,

\mathbf{y}_{\mathrm{tru}}(i,j) = \mathrm{trunc}\left\{ \mathbf{H}_a\mathbf{x} + \mathbf{n} \right\}\big|_{(i,j)} = \begin{cases} \mathbf{y}_{\mathrm{real}} = \left. \mathbf{H}_a\mathbf{x} + \mathbf{n} \right|_{(i,j)} & (i,j) \in \mathrm{FOV} \\ 0 & \text{otherwise} \end{cases}
(7)

Dealing with a truncated image like (7) in a restoration problem is an evident source of ringing owing to the discontinuity at the boundaries. For that reason, this article aims to provide an image restoration approach that avoids those undesirable ringing artifacts when ytru is the degraded image. Furthermore, it is also intended to regenerate the truncated borders while adapting the center of the image to the optimum linear solution. Figure 2 shows the restored image x̂ with a reconstructed boundary region B defined by
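
As a concrete illustration of the models above (a sketch under the definitions of this section, using generic SciPy routines rather than the authors' Matlab implementation; the image, PSF and noise level are arbitrary stand-ins), the aperiodic blur of size L̃, the FOV observation yreal and the zero-fixed ytru of (6)-(7) can be generated as follows:

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)

L1 = 256                    # square image support, L = L1 x L1
B1 = 5                      # PSF bandwidth, i.e. an 11 x 11 mask
x = rng.random((L1, L1))    # stand-in for the unknown original image
h = np.ones((2*B1 + 1, 2*B1 + 1)) / (2*B1 + 1)**2    # e.g. uniform blur

# Aperiodic (linear) convolution: support L~ = (L1 + 2*B1)^2, eq. (4)
y_full = convolve2d(x, h, mode="full") + rng.normal(0.0, 2.0, (L1 + 2*B1, L1 + 2*B1))

# Real observation y_real: only the central FOV = (L1 - 2*B1)^2 pixels, eq. (5)
y_real = y_full[2*B1:-2*B1, 2*B1:-2*B1]              # 246 x 246

# Truncated model y_tru of eqs. (6)-(7): size L~, zeros outside the FOV
y_tru = np.zeros_like(y_full)
y_tru[2*B1:-2*B1, 2*B1:-2*B1] = y_real
```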

Figure 2. Restored image which indicates the boundary reconstruction area B.

B = L - FOV
(8)

and whose area is calculated by B = (L1-B1) × 4B1, if we consider square dimensions such that B1 = B2 and L1 = L2.
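
For instance, with L1 = L2 = 256 and B1 = B2 = 5 (an 11 × 11 PSF), the reconstructed frame amounts to B = 251 × 20 = 5020 pixels, the boundary region that appears in several experiments of Section 5.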

Restoring an image x is usually an ill-posed or ill-conditioned problem, since the blurring operator H either does not admit an inverse or is nearly singular. Thus, a regularization method should be used in the inversion process to control the high sensitivity to noise. Many examples have been presented in the literature by means of the classical Tikhonov regularization

\hat{\mathbf{x}} = \arg\min_{\mathbf{x}} \left\{ \frac{1}{2}\left\| \mathbf{y} - \mathbf{H}\mathbf{x} \right\|_2^2 + \frac{\lambda}{2}\left\| \mathbf{D}\mathbf{x} \right\|_2^2 \right\}
(9)

where ‖z‖₂² = Σᵢ zᵢ² denotes the squared ℓ2 norm, x̂ is the restored image, and D is the regularization operator, built on the basis of a high-pass filter mask d of support N = N1 × N2 and using the same BCs described previously. The first term in (9) is the ℓ2 residual norm appearing in the least-squares approach and ensures fidelity to the data. The second term is the so-called "regularizer" or "side constraint" and captures prior knowledge about the expected behavior of x through an additional ℓ2 penalty term involving just the image. The hyper-parameter (or regularization parameter) λ is a critical value which measures the trade-off between a good fit and a regularized solution.
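
For reference, when periodic BCs are assumed, (9) admits a familiar closed-form solution in the Fourier domain; the sketch below (illustrative kernels and λ, not the method of this article, which precisely avoids such BC assumptions) shows that baseline:

```python
import numpy as np

def tikhonov_fft(y, h, d, lam):
    """Closed-form Tikhonov restoration under periodic BCs:
    x_hat = F^-1[ conj(H) Y / (|H|^2 + lam |D|^2) ]."""
    H = np.fft.fft2(h, s=y.shape)   # PSF not circularly centered: output is shifted,
    D = np.fft.fft2(d, s=y.shape)   # which is irrelevant for this sketch
    Y = np.fft.fft2(y)
    X = np.conj(H) * Y / (np.abs(H)**2 + lam * np.abs(D)**2)
    return np.real(np.fft.ifft2(X))

# Illustrative usage: 9 x 9 uniform blur and a Laplacian as regularization operator D
y = np.random.rand(256, 256)        # stand-in for the observed image
h = np.ones((9, 9)) / 81.0
d = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], dtype=float)
x_hat = tikhonov_fft(y, h, d, lam=1e-2)
```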

Alternatively, the TV regularization, proposed by Rudin et al. [24], has become very popular in recent research as a result of its ability to preserve the edges of objects in the restoration. A discrete version of the TV deblurring problem is given by

\hat{\mathbf{x}} = \arg\min_{\mathbf{x}} \left\{ \frac{1}{2}\left\| \mathbf{y} - \mathbf{H}\mathbf{x} \right\|_2^2 + \lambda\left\| \nabla\mathbf{x} \right\|_1 \right\}
(10)

where ||z||1 denotes the ℓ1 norm (i.e., the sum of the absolute values of the elements) and ∇ stands for the discrete gradient operator. The operator ∇ is defined by the matrices Dξ and Dμ as

\nabla\mathbf{x} = \mathbf{D}^\xi\mathbf{x} + \mathbf{D}^\mu\mathbf{x}
(11)

built on the basis of the respective masks dξ and dμ of support N = N1 × N2, which yield the horizontal and vertical first-order differences of the image. Compared to the expression (9), the TV regularization provides an ℓ1 penalty term which can be thought of as a measure of signal variability. Once again, λ is the critical regularization parameter that controls the weight we assign to the regularizer relative to the data misfit term.
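
To make the regularizer concrete, the sketch below evaluates the ℓ1 penalty of (10)-(11) with simple first-order difference masks standing in for dξ and dμ (the article itself uses Sobel masks in Section 5):

```python
import numpy as np
from scipy.signal import convolve2d

d_xi = np.array([[1.0, -1.0]])        # horizontal first-order differences
d_mu = np.array([[1.0], [-1.0]])      # vertical first-order differences

def tv_l1_penalty(x):
    """||D_xi x + D_mu x||_1 as defined by eqs. (10)-(11)."""
    gx = convolve2d(x, d_xi, mode="same")
    gy = convolve2d(x, d_mu, mode="same")
    return np.abs(gx + gy).sum()

x = np.random.rand(64, 64)
print(tv_l1_penalty(x))               # larger for noisier / more textured images
```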

A significant amount of work has addressed solving either of the above regularizations, and mainly TV deblurring in recent times. Nonetheless, most of the approaches adopted one of the BCs described at the beginning of this section to cope with the indetermination of the problem. We now intend to study an algorithm able to restore the real truncated image (6), removing the assumptions about the boundaries and using the TV method as the mathematical regularizer. Consequently, the restoration problem (10) can be redefined as

\hat{\mathbf{x}} = \arg\min_{\mathbf{x}} \left\{ \frac{1}{2}\left\| \mathbf{y} - \mathrm{trunc}\{\mathbf{H}_a\mathbf{x}\} \right\|_2^2 + \lambda\left\| \mathrm{trunc}\{\mathbf{D}_a^\xi\mathbf{x} + \mathbf{D}_a^\mu\mathbf{x}\} \right\|_1 \right\}
(12)

where the subscript a denotes the aperiodic formulation of the matrix operator. Table 1 summarizes the dimensions involved in the expression (12) taking into account the definition of the operator trunc{·} in (7).

Table 1 Size of the variables involved in the definition of the MLP, both in the degradation and the restoration processes

To tackle this problem, neural networks are particularly well suited owing to their nonlinear mapping ability and self-adaptiveness. In fact, the Hopfield network has been used in the literature to solve the optimization problem (9), and recent studies provide neural network solutions to the TV regularization (10), as in [16, 17]. In this article, we present a simple solution to the TV-based problem by means of an MLP with back-propagation. Previous research by the authors [10] showed that the MLP can also operate with the ℓ2 regularizer of (9).

3. Definition of the MLP approach

Let us build our neural net according to the MLP architecture illustrated in Figure 3. The input layer of the net consists of L̃ neurons whose inputs y1, y2, ..., yL̃ are, respectively, the L̃ pixels of the truncated image ytru. At any generic iteration m, the output layer is defined by L neurons whose outputs x̂1(m), x̂2(m), ..., x̂L(m) are, respectively, the L pixels of an approximation x̂(m) to the restored image. After mtotal iterations, the neural net outputs the actual restored image x̂ = x̂(mtotal). On the other hand, the hidden layer consists of two neurons, which is enough to achieve good restoration results while keeping the complexity of the network low. In any case, the following analysis will be generalized to any number of hidden layers and any number of neurons per layer.

Figure 3. MLP scheme adopted for image restoration.

At every iteration, the neural net works by simulating both an approach to the degradation process (backward) and to the restoration solution (forward), while refining the results according to an optimization criterion. However, the input to the net is always the image ytru, as no net training is required. Let us remark that we use the "backward" and "forward" concepts in the opposite sense to a standard image restoration problem owing to the specific architecture of the net.

During the back-propagation process, the network must iteratively minimize a regularized error function, which we will set to the expression (12) in the following sections. Since the trunc{·} operator is involved in those expressions, the truncation of the boundaries is performed at every iteration, but so is their reconstruction, as deduced from the L̃ size at the input (though it is really defined on the FOV since the rest of the pixels are zeros) and the L size at the output. What deserves attention is that no a priori knowledge, assumption or estimation concerning the unknown borders is needed to perform the regeneration. In general, this can be explained by the behavior of the neural net, which is able to learn about the degradation model. A restored image is therefore obtained in real conditions on the basis of a global energy minimization strategy, with reconstructed borders, while adapting the center of the image to the optimum solution and thus making the ringing artifact negligible.

Following a similar naming convention to that adopted in Section 2, let us define any generic layer of the net composed by R inputs and S neurons (outputs) as illustrated in Figure 4,

Figure 4. Model of a layer in the MLP.

where p is the R × 1 input vector, W represents the synaptic weight matrix, S × R in size, and z is the S × 1 output vector of the layer. The bias vector b is ignored in our particular implementation. In order to have a differentiable transfer function, a log-sigmoid expression is chosen for φ{·}

\varphi\{v\} = \frac{1}{1 + e^{-v}}
(13)

whose output lies in the range 0 ≤ φ{·} ≤ 1.

Then, a layer in the MLP is characterized by the following equations

\mathbf{z} = \varphi\{\mathbf{v}\}, \qquad \mathbf{v} = \mathbf{W}\mathbf{p} + \mathbf{b} = \mathbf{W}\mathbf{p}
(14)

since b = 0 (a vector of zeros). Furthermore, two consecutive layers are connected to each other such that

\mathbf{z}^i = \mathbf{p}^{i+1} \quad \text{and} \quad S^i = R^{i+1}
(15)

where i and i+1 are superscripts to denote two consecutive layers of the net. Although this superscripting of layers should be appended to all variables, for notational simplicity we shall remove it from all formulae of the manuscript when deduced by the context.

4. Adjustment of the neural net

In this section, our purpose is to show the procedure for adjusting the interconnection weights as the MLP iterates. A variant of the well-known back-propagation algorithm is applied to solve the optimization problem (12).

Let ΔWi(m+1) be the correction applied to the weight matrix Wi of layer i at the (m + 1)th iteration. Then,

\Delta\mathbf{W}^i(m+1) = -\eta\,\frac{\partial E(m)}{\partial \mathbf{W}^i(m)}
(16)

where E(m) stands for the restoration error after m iterations at the output of the net and the constant η indicates the learning speed. Let us compute now the so-called gradient matrix ∂E(m)/∂Wi(m) in the different layers of the MLP.

4.1 Output layer

Defining the vectors e(m) and r(m) for the respective error and regularization terms at the output layer after m iterations

\mathbf{e}(m) = \mathbf{y} - \mathrm{trunc}\left\{ \mathbf{H}_a\hat{\mathbf{x}}(m) \right\}
(17)
\mathbf{r}(m) = \mathrm{trunc}\left\{ \mathbf{D}_a^\xi\hat{\mathbf{x}}(m) + \mathbf{D}_a^\mu\hat{\mathbf{x}}(m) \right\}
(18)

we can rewrite the restoration error from (12) as

E(m) = \frac{1}{2}\left\| \mathbf{e}(m) \right\|_2^2 + \lambda\left\| \mathbf{r}(m) \right\|_1
(19)

Using the matrix chain rule for a composition on a vector [25], the gradient matrix leads to

\frac{\partial E(m)}{\partial \mathbf{W}(m)} = \frac{\partial E(m)}{\partial \mathbf{v}(m)}\,\frac{\partial \mathbf{v}(m)}{\partial \mathbf{W}(m)} = \boldsymbol{\delta}(m)\,\frac{\partial \mathbf{v}(m)}{\partial \mathbf{W}(m)}
(20)

where δ(m) = ∂E(m)/∂v(m) is the so-called local gradient vector, which again can be expanded by the chain rule for vectors [26].

\boldsymbol{\delta}(m) = \frac{\partial \mathbf{z}(m)}{\partial \mathbf{v}(m)}\,\frac{\partial E(m)}{\partial \mathbf{z}(m)}
(21)

Since z and v are elementwise related by the transfer function φ{·}, and thus ∂zi(m)/∂vj(m) = 0 for any i ≠ j, then

\frac{\partial \mathbf{z}(m)}{\partial \mathbf{v}(m)} = \mathrm{diag}\left\{ \varphi'\{\mathbf{v}(m)\} \right\}
(22)

representing a diagonal matrix whose eigenvalues are computed by the function

\varphi'\{v\} = \frac{e^{-v}}{\left( 1 + e^{-v} \right)^2}
(23)
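
Both (13) and (23) translate directly into code; a trivial vectorized sketch:

```python
import numpy as np

def logsig(v):
    """Log-sigmoid transfer function of eq. (13)."""
    return 1.0 / (1.0 + np.exp(-v))

def logsig_prime(v):
    """Derivative of eq. (23); note it equals logsig(v) * (1 - logsig(v))."""
    s = logsig(v)
    return s * (1.0 - s)
```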

We recall that z(m) is actually x̂(m) in the output layer (see Figure 3). If we wanted to compute the gradient matrix ∂E(m)/∂Wi(m) with the formulation (19), we would face a challenging nonlinear optimization problem caused by the nondifferentiability of the ℓ1 norm. One approach to overcome this challenge comes from the approximation

\left\| \mathbf{r}(m) \right\|_1 \approx TV\{\hat{\mathbf{x}}(m)\} = \sum_k \sqrt{ \left[ \mathbf{D}_a^\xi\hat{\mathbf{x}}(m) \right]_k^2 + \left[ \mathbf{D}_a^\mu\hat{\mathbf{x}}(m) \right]_k^2 + \varepsilon }
(24)

where TV stands for the well-known TV regularizer and ε > 0 is a constant to avoid singularities when minimizing. Both products Dξa x̂(m) and Dμa x̂(m) are subscripted by k, meaning the kth element of the respective U × 1 sized vector (see Table 1). It should be mentioned that ℓ1-norm and TV regularizations are quite often treated as the same in the literature. However, the distinction between these two regularizers should be kept in mind since, at least in deconvolution problems, TV leads to significantly better results, as illustrated in [18].

Bioucas-Dias et al. [18, 19] proposed an interesting formulation of the TV problem by applying MM algorithms. It leads to a quadratic bound function for the TV regularizer, which thus results in solving a linear system of equations. Similarly, we adopt that quadratic majorizer in our particular implementation as

TV\{\hat{\mathbf{x}}(m)\} \leq Q_{TV}\{\hat{\mathbf{x}}(m)\} = \hat{\mathbf{x}}^T(m)\,\mathbf{D}_a^T\,\boldsymbol{\Omega}(m)\,\mathbf{r}(m) + K
(25)

where K is an irrelevant constant and the involved matrices are defined as

\mathbf{D}_a = \left[ \left( \mathbf{D}_a^\xi \right)^T \; \left( \mathbf{D}_a^\mu \right)^T \right]^T
(26)
\boldsymbol{\Omega}(m) = \begin{bmatrix} \boldsymbol{\Lambda}(m) & \mathbf{0} \\ \mathbf{0} & \boldsymbol{\Lambda}(m) \end{bmatrix} \quad \text{with} \quad \boldsymbol{\Lambda}(m) = \mathrm{diag}\left\{ \frac{1}{2\sqrt{ \left[ \mathbf{D}_a^\xi\hat{\mathbf{x}}(m) \right]^2 + \left[ \mathbf{D}_a^\mu\hat{\mathbf{x}}(m) \right]^2 + \varepsilon }} \right\}
(27)

and the regularization term r(m) of (18) is reformulated

\mathbf{r}(m) = \mathrm{trunc}\left\{ \mathbf{D}_a\hat{\mathbf{x}}(m) \right\}
(28)

such that the operator trunc{·} is applied individually to Dξa and Dμa (see Table 1) and merged later as indicated in the definition of (26).
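
In code, the diagonal of Λ(m) and the product Ω(m)r(m) reduce to elementwise operations on the two stacked difference images; a sketch (assuming the products Dξa x̂ and Dμa x̂ are available as flat arrays, with ε the constant of (24)):

```python
import numpy as np

def mm_lambda_diag(dxi_x, dmu_x, eps=1e-6):
    """Diagonal of Lambda(m) in eq. (27): 1 / (2*sqrt((D_xi x)^2 + (D_mu x)^2 + eps))."""
    return 1.0 / (2.0 * np.sqrt(dxi_x**2 + dmu_x**2 + eps))

def apply_omega(lam_diag, r):
    """Product Omega(m) r(m), with r the stacked vector trunc{D_a x} of eqs. (26), (28)."""
    half = r.size // 2
    return np.concatenate([lam_diag * r[:half], lam_diag * r[half:]])
```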

Finally, we can rewrite the restoration error E(m) as

E(m) = \frac{1}{2}\left\| \mathbf{e}(m) \right\|_2^2 + \lambda\,Q_{TV}\{\hat{\mathbf{x}}(m)\}
(29)

Taking advantage of the quadratic properties of the expression (25) and applying basic matrix calculus (see a detailed computation in [10]), the derivative ∂E(m)/∂z(m) leads to

\frac{\partial E(m)}{\partial \mathbf{z}(m)} = \frac{\partial E(m)}{\partial \hat{\mathbf{x}}(m)} = -\mathbf{H}_a^T\mathbf{e}(m) + \lambda\,\mathbf{D}_a^T\boldsymbol{\Omega}(m)\mathbf{r}(m)
(30)

According to Table 1, it can be deduced that ∂E(m)/∂z(m) represents a vector of size L × 1. When combined with the diagonal matrix of (22), we can write

\boldsymbol{\delta}(m) = \varphi'\{\mathbf{v}(m)\} \circ \left[ -\mathbf{H}_a^T\mathbf{e}(m) + \lambda\,\mathbf{D}_a^T\boldsymbol{\Omega}(m)\mathbf{r}(m) \right]
(31)

where ∘ denotes the Hadamard (elementwise) product.

To complete the analysis of the gradient matrix, we have to compute the term ∂v(m)/∂W(m). Based on the layer definition in the MLP (14), we obtain

\frac{\partial \mathbf{v}(m)}{\partial \mathbf{W}(m)} = \frac{\partial\,\mathbf{W}(m)\mathbf{p}(m)}{\partial \mathbf{W}(m)} = \mathbf{p}^T(m)
(32)

which in turn corresponds to the output of the previously connected hidden layer, that is to say,

\frac{\partial \mathbf{v}(m)}{\partial \mathbf{W}(m)} = \left[ \mathbf{z}^{i-1}(m) \right]^T
(33)

Putting together all the results into the incremental weight matrix ΔW(m+1), we have

\Delta\mathbf{W}(m+1) = -\eta\,\boldsymbol{\delta}(m)\left[ \mathbf{z}^{i-1}(m) \right]^T = -\eta\left( \varphi'\{\mathbf{v}(m)\} \circ \left[ -\mathbf{H}_a^T\mathbf{e}(m) + \lambda\,\mathbf{D}_a^T\boldsymbol{\Omega}(m)\mathbf{r}(m) \right] \right)\left[ \mathbf{z}^{i-1}(m) \right]^T
(34)

A summary of the dimensions of every variable can be found in Table 2.

Table 2 Summary of dimensions for the output layer

4.2 Any hidden layer i

If we apply the layer superscripting to the gradient matrix (20) for any hidden layer i of the MLP, we obtain

\frac{\partial E(m)}{\partial \mathbf{W}^i(m)} = \frac{\partial E(m)}{\partial \mathbf{v}^i(m)}\,\frac{\partial \mathbf{v}^i(m)}{\partial \mathbf{W}^i(m)} = \boldsymbol{\delta}^i(m)\,\frac{\partial \mathbf{v}^i(m)}{\partial \mathbf{W}^i(m)}
(35)

and taking what was already demonstrated in (33), then

\frac{\partial E(m)}{\partial \mathbf{W}^i(m)} = \boldsymbol{\delta}^i(m)\left[ \mathbf{z}^{i-1}(m) \right]^T
(36)

Let us expand the local gradient δi(m) by means of the chain rule for vectors as follows

\boldsymbol{\delta}^i(m) = \frac{\partial E(m)}{\partial \mathbf{v}^i(m)} = \frac{\partial \mathbf{z}^i(m)}{\partial \mathbf{v}^i(m)}\,\frac{\partial \mathbf{v}^{i+1}(m)}{\partial \mathbf{z}^i(m)}\,\frac{\partial E(m)}{\partial \mathbf{v}^{i+1}(m)}
(37)

where ∂zi(m)/∂vi(m) is the same diagonal matrix (22), whose eigenvalues are represented by φ'{vi(m)}, and ∂E(m)/∂vi+1(m) denotes the local gradient δi+1(m) of the following connected layer. With respect to the term ∂vi+1(m)/∂zi(m), it can be immediately derived from the MLP definition of (14) that

\frac{\partial \mathbf{v}^{i+1}(m)}{\partial \mathbf{z}^i(m)} = \frac{\partial\,\mathbf{W}^{i+1}(m)\mathbf{p}^{i+1}(m)}{\partial \mathbf{z}^i(m)} = \frac{\partial\,\mathbf{W}^{i+1}(m)\mathbf{z}^i(m)}{\partial \mathbf{z}^i(m)} = \left[ \mathbf{W}^{i+1}(m) \right]^T
(38)

Consequently, we come to

\boldsymbol{\delta}^i(m) = \mathrm{diag}\left\{ \varphi'\{\mathbf{v}^i(m)\} \right\}\left[ \mathbf{W}^{i+1}(m) \right]^T\boldsymbol{\delta}^{i+1}(m)
(39)

which can be simplified after verifying that (Wi+1(m))T δi+1(m) stands for a Ri+1× 1 = Si × 1 vector,

\boldsymbol{\delta}^i(m) = \varphi'\{\mathbf{v}^i(m)\} \circ \left[ \mathbf{W}^{i+1}(m) \right]^T\boldsymbol{\delta}^{i+1}(m)
(40)

We finally provide an equation to compute the incremental weight matrix ΔWi(m+1) for any hidden layer i

\Delta\mathbf{W}^i(m+1) = -\eta\,\boldsymbol{\delta}^i(m)\left[ \mathbf{z}^{i-1}(m) \right]^T = -\eta\left( \varphi'\{\mathbf{v}^i(m)\} \circ \left[ \mathbf{W}^{i+1}(m) \right]^T\boldsymbol{\delta}^{i+1}(m) \right)\left[ \mathbf{z}^{i-1}(m) \right]^T
(41)

which is mainly based on the local gradient δi+1(m) of the following connected layer i+1.

4.3 Algorithm

As described in Section 3, our MLP neural net performs a pair of forward and backward passes at every iteration m. First, the whole set of connected layers propagates the degraded image y from the input to the output layer by means of Equation 14. Afterwards, the new synaptic weight matrices Wi(m+1) are recalculated from right to left according to the expressions of ΔWi(m+1) for every layer.

Algorithm: MLP with TV regularizer

Initialization: p1 = y ∀m and Wi(0) = 0, 1 ≤ i ≤ J

1: m: = 0

2: while StopRule not satisfied do

3:   for i: = 1 to J do /* Forward */

4:      vi: = Wipi

5:      zi: = φ{vi}

6:   end for /* x ^ ( m ) : = z J */

7:   for i: = J to 1 do /* Backward */

8:      if i = J then /* Output layer */

9:         Compute δJ(m) from (31)

10:            Compute E(m) from (29)

11:      else

12:         δi(m): = φ'{vi(m)} ∘ ((Wi+1(m))Tδi+1(m))

13:      end if

14:      ΔWi(m+1): = -η δi(m)(zi-1(m))T

15:      Wi(m+1): = Wi(m)+ΔWi(m+1)

16:   end for

17:   m: = m+1

18: end while /* x ^ : = x ^ ( m t o t a l ) */

The previous pseudo-code summarizes our proposed algorithm for an MLP of J layers. StopRule denotes a condition such that either the number of iterations exceeds a maximum; or the error E(m) converges and, thus, the error change ΔE(m) is less than a threshold; or this error E(m) even starts to increase. If one of these conditions holds, the algorithm concludes and the final outgoing image is the restored image x̂ := x̂(mtotal).
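
The pseudo-code maps almost line for line onto matrix operations. Below is a compact NumPy sketch of the two-layer case (J = 2, as in Section 4.4) written under the definitions of Sections 3 and 4; the dense operators Ha and Da, the truncation masks and the parameter values are illustrative stand-ins (the authors' implementation is in Matlab and exploits the sparsity of Ha), and the stopping rule is simplified to a single relative-change test:

```python
import numpy as np

def mlp_tv_restore(y_tru, Ha, Da, fov_mask, grad_mask,
                   lam=0.05, eta=2.0, eps=1e-6, max_iter=500, tol=1e-3):
    """Sketch of the MLP with TV regularizer for J = 2 and two hidden neurons.
    y_tru     : (Lt,)   truncated observation, zeros outside the FOV
    Ha        : (Lt, L) aperiodic blurring matrix
    Da        : (2U, L) stacked difference operators [D_xi; D_mu]
    fov_mask  : (Lt,)   0/1 vector implementing trunc{.} on blurred images
    grad_mask : (2U,)   0/1 vector implementing trunc{.} on gradient images"""
    Lt, L = Ha.shape
    S1 = 2
    W1 = np.zeros((S1, Lt))          # zero initialization, as in the pseudo-code
    W2 = np.zeros((L, S1))
    logsig = lambda v: 1.0 / (1.0 + np.exp(-v))
    E_prev = np.inf
    for m in range(max_iter):
        # Forward pass, eq. (14)
        v1 = W1 @ y_tru;  z1 = logsig(v1)
        v2 = W2 @ z1;     x_hat = logsig(v2)
        # Error and regularization terms, eqs. (17), (27), (28)
        e = y_tru - fov_mask * (Ha @ x_hat)
        g = Da @ x_hat
        r = grad_mask * g
        U = g.size // 2
        lam_diag = 1.0 / (2.0 * np.sqrt(g[:U]**2 + g[U:]**2 + eps))
        omega_r = np.concatenate([lam_diag * r[:U], lam_diag * r[U:]])
        # Restoration error, eq. (29), and a simplified stopping rule
        E = 0.5 * e @ e + lam * (x_hat @ (Da.T @ omega_r))
        if abs(E_prev - E) < tol * max(abs(E), 1.0):
            break
        E_prev = E
        # Backward pass: local gradients, eqs. (31) and (40)
        d2 = x_hat * (1.0 - x_hat) * (-(Ha.T @ e) + lam * (Da.T @ omega_r))
        d1 = z1 * (1.0 - z1) * (W2.T @ d2)
        # Weight updates, eqs. (34) and (41)
        W2 -= eta * np.outer(d2, z1)
        W1 -= eta * np.outer(d1, y_tru)
    return x_hat
```

Here Ha and Da would be built as the aperiodic convolution matrices of Section 2, while fov_mask and grad_mask zero out exactly the positions that trunc{·} discards (Table 1); since the output layer uses the log-sigmoid of (13), the image is assumed to be scaled to [0, 1].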

4.4. Reconstruction of boundaries

If we particularize the algorithm for two layers, J = 2, we come to an MLP scheme such as illustrated in Figure 5. It is worth emphasizing how the boundaries are reconstructed at every iteration of the net, from a real image of support FOV (5) to the restored image of size L = L1 × L2 (recall that the remaining pixels in ytru were zero-fixed). In addition, we shall observe in Section 5 how the boundary artifacts are removed from the restored image thanks to the energy minimization of E(m), whereas they remain critical for other methods in the literature.

Figure 5. MLP algorithm specifically used in the experiments for J = 2.

4.5 Adjustment of λ and η

In the image restoration field, it is well known how important the parameter λ becomes. In fact, too small values of λ yield overly oscillatory estimates owing to either noise or discontinuities; too large values of λ yield over smoothed estimates.

For that reason, the literature has given significant attention to it, with popular approaches such as the unbiased predictive risk estimator (UPRE), the generalized cross validation (GCV), or the L-curve method; see [27] for an overview and references. Most of them were particularized for a Tikhonov regularizer, but recent research aims to provide solutions for TV regularization. Specifically, the Bayesian framework leads to successful approaches in this field.

In our previous article [10], we adjusted λ with solutions coming from the Bayesian state of the art. However, we still need to investigate a particular algorithm for the MLP, since those Bayesian approaches work only for circulant degradation models, but not for the truncated image of this article. For now, we shall therefore use a hand-tuned λ which optimizes the results.

Regarding the learning speed, it was already demonstrated that η shows lower sensitivity compared to λ. In fact, its main purpose is to speed up or slow down the convergence of the algorithm. Then, for the sake of simplicity, we shall assume η = 2 for the images of 256 × 256 in size.

5. Experimental results

Our previous article [10] showed a wide set of results which mainly demonstrated the good performance of the MLP in terms of image restoration. We shall focus now on its ability to reconstruct the boundaries using standard 256 × 256 sized images such as Lena or Barbara and common PSFs, some of which are presented here (diagonal motion, uniform, or Gaussian blur).

Let us see our problem formulation by means of an example. Figure 6 depicts the original Barbara image blurred by a motion blur of 15 pixels with 45° of inclination, which yields a PSF mask of 11 × 11 pixels (B1 = B2 = 5). Specifically, we have represented the truncated image ytru (c), which reflects the zeros at the boundaries and the size L̃ = 266 × 266. A real model would consist of the FOV = 246 × 246 region of this image, which we have named yreal in the article. Most of the recent restoration algorithms deal with the real image yreal making assumptions about the boundaries; however, the restored image is then only 246 × 246 in size. Consequently, the boundaries marked with a white broken line in (b) are never restored and valuable information is lost. In contrast, our MLP uses the ytru version of the real image and outputs a 256 × 256 sized image x̂, thus trying to reconstruct the boundary area B = 251 × 20.

Figure 6. Barbara image 256 × 256 in size: (a) degraded by diagonal motion blur of 15 pixels and (c) truncated to the field of view 246 × 246. A broken white line in (b) identifies the 251 × 20 sized boundary region which has to be reconstructed by the MLP.

In the light of the expression (18), we define the gradient filters dξ and dμ as the respective horizontal and vertical Sobel masks [1]

\mathbf{d}^\xi = \frac{1}{4}\begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix} \quad \text{and} \quad \mathbf{d}^\mu = \frac{1}{4}\begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}

consequently N = 3 × 3.

As observed in Figure 5, the neural net under analysis consists of two layers J = 2, where the bias vectors are ignored and the same log-sigmoid function is applied to both layers. Besides, looking for a tradeoff between good quality results and computational complexity, it is assumed that only two neurons take part in the hidden layer, i.e., S1 = 2.

In terms of parameters, we previously commented that the learning speed of the net is set to η = 2 and the regularization parameter λ relies on hand tuning. Regarding the interconnection weights, they do not require any network training, so the weight matrices are all initialized to zero. Finally, we set the stopping criterion of the Algorithm as a maximum number of 500 iterations (though never reached) or the moment when the relative difference of the restoration error E(m) falls below a threshold of 10^-3 within a temporal window of 10 iterations.

The Gaussian noise level is established according to a BSNR (signal-to-noise ratio of the blurred image) of 20 dB, so that the regularization term of (19) becomes relevant in the restoration result, i.e., it demands sufficiently high values of the parameter λ.
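
Assuming the usual BSNR definition (ratio of the variance of the blurred noiseless image to the noise variance, in dB; the authors do not restate their exact convention here), the noise can be generated as:

```python
import numpy as np

def add_noise_bsnr(blurred, bsnr_db, rng=None):
    """Add white Gaussian noise so that 10*log10(var(blurred) / sigma^2) = bsnr_db."""
    rng = rng or np.random.default_rng()
    sigma2 = np.var(blurred) / (10.0 ** (bsnr_db / 10.0))
    noisy = blurred + rng.normal(0.0, np.sqrt(sigma2), blurred.shape)
    return noisy, sigma2
```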

In order to measure the performance of our algorithm, we compute the standard deviation σe of the error image e = x̂ - x, since it does not depend on the blurred image y as occurs in the ISNR [2]. Moreover, our purpose is to measure the boundary restoration process, so we particularize the standard deviation to the pixels of the boundary region B. Then,

B\sigma_e = \sqrt{ \frac{1}{B-1}\sum_{k=1}^{B}\left( e_k - \frac{1}{B}\sum_{j=1}^{B} e_j \right)^2 }
(42)

where Bσe stands for the boundary standard deviation and B denotes the number of pixels in the boundary region. Alternatively, we have also used the boundary peak signal-to-noise ratio (BPSNR) as defined in [8]:

\mathrm{BPSNR} = 10\log_{10}\left( \frac{255^2}{\frac{1}{B}\sum_{k=1}^{B} e_k^2} \right)\ \mathrm{dB}
(43)

considering an 8-bit gray-scaled image.
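
Both figures of merit restricted to the frame B follow directly from (42) and (43); a short sketch, where the boolean mask marking the boundary region is an assumed input:

```python
import numpy as np

def boundary_metrics(x_hat, x_true, boundary_mask):
    """Boundary standard deviation of eq. (42) and BPSNR of eq. (43), computed
    over the pixels selected by boundary_mask (True on the reconstructed frame)."""
    e = (x_hat - x_true)[boundary_mask]
    b_sigma_e = e.std(ddof=1)                           # eq. (42)
    bpsnr = 10.0 * np.log10(255.0**2 / np.mean(e**2))   # eq. (43), 8-bit gray scale
    return b_sigma_e, bpsnr
```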

Our proposed MLP scheme was fully implemented in Matlab, which is well suited since all formulae of this article have been presented in matrix form. The complexity of the net can be analyzed in the two stages which describe the algorithm: the forward pass (FP) and the backward pass (BP). The computation of the gradient δ(m) in the output layer makes the BP more time-consuming, as shown in (31). In those equations, the product trunc{Ha x̂(m)} is the most critical term as it requires numerical computations of O(L²), although the operator trunc{·} is responsible for discarding (zero-fixing) L1 × 8B1 operations (assuming B1 = B2 and L1 = L2). However, this high computational cost is significantly reduced owing to the sparsity of Ha, which yields a cost related only to the number of non-zero elements. Regarding the FP, the two neurons of the hidden layer lead to faster matrix operations of O(2L).

In regard to convergence, our MLP is based on the simple steepest descent algorithm as defined in (16). Consequently, the convergence is usually slow and is controlled by the parameter η. We are aware that other variations on backpropagation may be applied to our MLP, such as the conjugate gradient algorithm, which performs significantly better [28]. Finally, we mention that the experiments were run on a 2.4-GHz Intel Core2Duo with 2 GB of RAM. For a detailed analysis of timing, we refer the reader to the previous article [10].

5.1. Experiment 1

In a first experiment, we aim to obtain numerical results of the boundary reconstruction process for different sizes of degradation. Let us take the original images Lena and Barbara degraded by diagonal motion and uniform blurs. Regarding the motion blur, it is set to 45° of inclination and the length in pixels is the parameter to vary between 5 and 15. We have used Matlab's approximation to construct the motion filter (http://www.mathworks.com/help/toolbox/images/ref/fspecial.html), which leads to masks between 5 × 5 and 11 × 11 in size. Analogously, the uniform blur is defined with odd sizes between 5 × 5 and 11 × 11. Let us recall that Gaussian noise is added to the blurred image such that BSNR = 20 dB.

The results of the MLP are shown in Table 3. We can observe the expected reduction of quality (both σ e and B σ e are increased, while the BPSNR is lowered) when the size of degradation is bigger. However, it is important to note that the region of boundary reconstruction is expanded accordingly as we will see in the following section.

Table 3 Numerical values of σ e and boundary parameters B σ e and BPSNR for different sizes of degradation

Comparing the blurs in both images, we want to highlight the better boundary reconstruction results for the uniform blur despite the worse values of σe. Therefore, it is presumable that the MLP somehow handles the restoration of the image center differently from the boundary restoration. In fact, the restoration of the center is a linear process defined by the regularization expression (29), whereas the boundary reconstruction stems from a nonlinear truncation which requires a different behavior.

Finally, let us comment on the improvement in border regeneration for the motion blur of a specific mask size when the length in pixels increases. Although we know it is a consequence of how the motion blur is modeled, we can deduce that the ability of the MLP to reconstruct the boundaries depends on the structure of the PSF.

5.2. Experiment 2

To visually assess the performance of the MLP in the boundary reconstruction process, we have devoted an experiment to showing some restored images. In particular, we have selected some of the results indicated in Table 3 with different sizes of blurring. Figure 7 depicts the Lena image restored from a diagonal motion blur of 10 pixels. The restored boundary area, 252 × 16 in size and marked by a white broken line, reveals how the borders are successfully regenerated with neither image information nor any prior assumption on the BCs.

Figure 7. Restored image (a) from the Lena image degraded by diagonal motion blur of 10 pixels and BSNR = 20 dB: σe = 11.62. A broken white line shows the reconstruction of the 252 × 16 boundary region in (b). The image (c) depicts the evolution of the restoration error.

Using a bigger motion blur of 13 pixels, the boundary reconstruction is even more evident, as shown in Figure 8. In spite of the fact that the blurring is more severe and the subjective quality of the Barbara image is therefore lower, the 251 × 20 boundary pixels are regenerated accurately. Looking at the tablecloth or her hair reveals the good performance of the MLP.

Figure 8. Restored image (a) from the Barbara image degraded by diagonal motion blur of 13 pixels and BSNR = 20 dB: σe = 16.11. A broken white line shows the reconstruction of the 251 × 20 boundary region in (b). The image (c) depicts the evolution of the restoration error.

Finally, we use a different type of blurring on the Lena image in Figure 9. In this case, a uniform blur of size 7 × 7 is applied to the original image and the MLP leads to a successfully restored image which recovers the 253 × 12 truncated pixel region of the original image.

Figure 9. Restored image (a) from the Lena image degraded by a 7 × 7 uniform blur and BSNR = 20 dB: σe = 11.32. A broken white line shows the reconstruction of the 253 × 12 boundary region in (b). The image (c) depicts the evolution of the restoration error.

In each of the figures, we have included a gray-scale image which represents the evolution of the restoration error in square blocks. Specifically, it corresponds to the parameter σe, where the brighter regions are the lower values of σe, that is to say, the pixels with a better quality of restoration. We want to highlight the smooth transition of the restoration error in the boundary area due to the regeneration of the borders. On the other hand, the center of the image contains the lowest values of restoration error, as expected from the global energy minimization of the MLP.

5.3. Experiment 3

This experiment aims to compare the performance of the MLP with other restoration algorithms which need BCs to deal with a realistic capture model: zero, periodic, reflective, and anti-reflective, as commented in Section 2. We have used the well-known RestoreTools library (http://www.mathcs.emory.edu/~nagy/RestoreTools) patched with the anti-reflective modification (http://scienze-como.uninsubria.it/mdonatelli/), which implements the matrix-vector operations for every boundary condition. In particular, we have selected a modified version of the Tikhonov regularization (9) named hybrid bidiagonalization regularization (HyBR) in the library.

Let us consider a Barbara image degraded by a 7 × 7 Gaussian blur and the same additive white noise of the previous experiments with BSNR = 20 dB. Figure 10 shows the real acquisition of such a degraded image, where we have removed the boundary pixels and the image is 250 × 250 in size (FOV). From (b) to (e) we have represented the restored images for each boundary condition; all of them are 250 × 250 sized images which miss the information of the boundaries up to 256 × 256. Furthermore, remarkable boundary ringing can be appreciated for the zero and the periodic BCs as a result of the discontinuity of the image at the boundaries. As demonstrated in [6, 7], the reflective (d) and the anti-reflective (e) conditions perform considerably better at removing that boundary effect.

Figure 10. Restoration results from the Barbara image degraded by a 7 × 7 Gaussian blur and BSNR = 20 dB (a). For the HyBR method, the restored images with BCs: (b) zero, σe = 15.25; (c) periodic, σe = 13.81; (d) reflective, σe = 12.99; and (e) anti-reflective, σe = 12.98. The MLP restored image (f) performs considerably better with σe = 12.47 at the original image size.

The restored image of our MLP algorithm is shown in Figure 10f and makes obvious the good performance of the neural net. First, the boundary ringing is negligible without any prior assumption on the boundary condition. Moreover, the visual aspect is better compared to the others, which supports the good properties of the TV regularizer. To numerically contrast the results, the parameter σe of the MLP is measured only in the FOV region. It leads to σe = 12.47, which is notably lower than the values of the HyBR algorithm (e.g., σe = 12.99 for the reflective BCs). Finally, the MLP is able to reconstruct the 253 × 12 sized boundary region and outputs the original image size of 256 × 256.

5.4. Experiment 4

Finally, let us delve into other algorithms of the literature which deal with the boundary problem in a different sense from the typical BCs. Those methods are expected not only to remove the boundary ringing but also to reconstruct the area B bordering the FOV. In recent research, Bishop [29] and Calvetti [30] propose similar methods based on the Bayesian model of the deconvolution problem and treat the truncation effect as a modeling error. They rewrite the observation model (2) to take into account the original image outside the FOV

\mathbf{y}_{\mathrm{real}} = \mathbf{H}^+\mathbf{x}^+ + \mathbf{n}, \qquad \mathbf{x}^+ = \begin{bmatrix} \mathbf{x}_1 \\ \mathbf{x}_2 \end{bmatrix}, \qquad \mathbf{H}^+ = \begin{bmatrix} \mathbf{H}_1 & \mathbf{H}_2 \end{bmatrix}
(44)

where x+ is the extended image of length L̃ and x1 is the restricted image defined on the FOV of (5). It can be deduced that H1 and H2 are matrices of size FOV × FOV and FOV × (L̃ - L), respectively, and x2 is the image vector in the boundary frame of length L̃ - L. The extrapolation approach of these methods establishes an adequate prior p(x+) which models the entire image, and the restored distribution p(x+|yreal) is estimated according to the Bayesian framework. We particularly select the region L of the restored image x̂+.

For this section we will extract results from the Extrapolation algorithm of Bishop, whom we would like to thank for his close collaboration, using a TV prior for p(x+). It is demonstrated in [29] that the figures obtained for Calvetti's algorithm would be equivalent. On the other hand, we will upgrade our proposed MLP to leverage somehow the concept of extended observation. First, it means that the input of our neural net is actually the real observed image yreal, not the truncated model ytru, and then the input layer consists of FOV neurons. The structure of the MLP remains unaltered in terms of hidden and output layers, yielding a restored image x̂ of size L. Finally, we remove the operator trunc{·} from all formulae of Section 4, assuming an aperiodic (zero-padded) model of the extended image x+. We do not lose generality since the input is the real image yreal and the MLP likewise has to reconstruct the boundary region B.

Let us take the blurs of the previous experiments: uniform, Gaussian, and motion masks of 7 × 7. Tests are computed with the Barbara image and a noise ratio of BSNR = 20 dB. To maximize the results of the MLP, we choose the parameters λ and η on a hand-tuning basis. Again, the performance of the algorithms is measured with the restoration errors in the whole image, σe, and in the boundary region, Bσe. In this experiment, we also include the equivalent error in the FOV, which is denoted as Fσe.

Looking into Table 4, we find that the values of σe are quite similar for both methods, with the MLP outperforming in the Gaussian and motion blurs. What really deserves attention, however, are the results in the boundary region B. The MLP is considerably better at reconstructing the missing boundaries, as indicated by the lower values of Bσe. This proves the outstanding properties of the neural net in terms of learning about the unknown image. On the other hand, the extrapolation method restores the FOV slightly better. We can conclude that our MLP is a successful approach to inpainting the boundary frame, in addition to recovering the FOV without any boundary artifact.

Table 4 Numerical values of σ e , B σ e , and F σ e comparing the extrapolation algorithm of Bishop and our MLP with different 7 × 7 sized blurs

Let us visually assess the performance of both methods for the experiments of Table 4. In particular, we have used two 250 × 250 sized images degraded by uniform and Gaussian blurs of 7 × 7 as depicted in Figure 11a, d, respectively. The restored images obtained by the Extrapolation and the MLP algorithms are placed in the corresponding rows of Figure 11. The output images are all 256 × 256 in size, thus reconstructing the boundary area B = 253 × 12. Despite the fact that the value of σe is lower for the Extrapolation method in the uniform blur, we can observe that the subjective quality of the MLP output is better. Regarding the Gaussian blur, the restored images look similar although the value of σe is in favor of the neural net. As for the boundary, let us run some further experiments to actually observe the reconstruction process.

Figure 11. Restoration results from the Barbara image degraded by 7 × 7 uniform (a) and Gaussian (d) blurs and BSNR = 20 dB. For the Extrapolation method, the output images reach restoration errors of (b) σe = 13.23 and (e) σe = 12.49, respectively, while for the MLP we obtain (c) σe = 13.53 and (f) σe = 12.33.

We have selected a Gaussian blur with sizes increasing from 7 × 7 to 17 × 17. In Table 5, we report the boundary error Bσe for every mask. It gives clear evidence of the good performance of the MLP when dealing with the boundary problem. That is also remarkable when we look at the restored images of Figure 12. We have highlighted the upper right corner of the Barbara image for Gaussian masks of 7 × 7 and 17 × 17. We can observe how the MLP manages to reconstruct the boundary frame successfully, whereas the extrapolation algorithm obtains only a rough estimation of the region as the mask gets bigger.

Table 5 Numerical values of B σ e comparing the Extrapolation algorithm of Bishop and our MLP with different sizes of the Gaussian blur

Figure 12. Restoration results from the Barbara image degraded by Gaussian blurs of 7 × 7 and 17 × 17. The MLP (b, d) clearly outperforms the Extrapolation method (a, c) in boundary inpainting.

6. Concluding remarks

In this article, we have presented the implementation of a method which allows restoring the boundary area of a real truncated image without prior conditions. The idea is to apply a TV-based regularization function in an iterative minimization of an MLP neural net. An inherent backpropagation algorithm has been developed in order to regenerate the lost borders, while adapting the center of the image to the optimum linear solution (the ringing artifact thus being negligible).

The proposed restoration scheme has been validated by means of several tests. As a result, we can confirm the ability of our neural net to reconstruct the boundaries of the image, with BPSNR values that depend on the blurring type.

References

  1. González RC, Woods RE: Digital Image Processing. 3rd edition. Prentice Hall; 2008.

  2. Banham MR, Katsaggelos AK: Digital image restoration. IEEE Signal Process Mag 1997, 14(2):24-41.

  3. Bovik AC: Handbook of Image & Video Processing. 2nd edition. Elsevier; 2005.

  4. Chan TF, Shen J: Image Processing and Analysis: Variational, PDE, Wavelet and Stochastic Methods. Frontiers in Applied Mathematics, SIAM; 2005.

  5. Woods J, Biemond J, Tekalp A: Boundary value problem in image restoration. IEEE International Conference on Acoustics, Speech and Signal Processing 1985, 10:692-695.

  6. Calvetti D, Somersalo E: Statistical elimination of boundary artefacts in image deblurring. Inverse Probl 2005, 21(5):1697-1714.

  7. Martinelli A, Donatelli M, Estatico C, Serra-Capizzano S: Improved image deblurring with anti-reflective boundary conditions and re-blurring. Inverse Probl 2006, 22(6):2035-2053.

  8. Liu R, Jia J: Reducing boundary artifacts in image deconvolution. International Conference on Image Processing 2008, 505-508.

  9. Bernues E, Cisneros G, Capella M: Truncated edges estimation using MLP neural nets applied to regularized image restoration. International Conference on Image Processing 2002, 1:I-341-I-344.

  10. Santiago MA, Cisneros G, Bernués E: An MLP neural net with L1 and L2 regularizers for real conditions of deblurring. EURASIP J Adv Signal Process 2010, 2010:18. Article ID 394615.

  11. Paik JK, Katsaggelos AK: Image restoration using a modified Hopfield network. IEEE Trans Image Process 1992, 1(1):49-63.

  12. Sun Y: Hopfield neural network based algorithms for image restoration and reconstruction--Part I: algorithms and simulations. IEEE Trans Signal Process 2000, 48(7):2119-2131.

  13. Han YB, Wu LN: Image restoration using a modified Hopfield neural network of continuous state change. Signal Process 2004, 12(3):431-435.

  14. Perry SW, Guan L: Weight assignment for adaptive image restoration by neural network. IEEE Trans Neural Netw 2000, 11(1):156-170.

  15. Wong H, Guan L: A neural learning approach for adaptive image restoration using a fuzzy model-based network architecture. IEEE Trans Neural Netw 2001, 12(3):516-531.

  16. Wang J, Liao X, Yi Z: Image restoration using Hopfield neural network based on total variational model. ISNN 2005, LNCS 3497. Springer, Berlin; 2005:735-740.

  17. Wu Y-D, Sun Y, Zhang H-Y, Sun S-X: Variational PDE based image restoration using neural network. IET Image Process 2007, 1(1):85-93.

  18. Bioucas-Dias J, Figueiredo M, Oliveira JP: Total variation-based image deconvolution: a majorization-minimization approach. IEEE International Conference on Acoustics, Speech and Signal Processing 2006, 2:861-864.

  19. Oliveira J, Bioucas-Dias J, Figueiredo M: Adaptive total variation image deblurring: a majorization-minimization approach. Signal Process 2009, 89(9):2479-2493.

  20. Molina R, Mateos J, Katsaggelos AK: Blind deconvolution using a variational approach to parameter, image and blur estimation. IEEE Trans Image Process 2006, 15(12):3715-3727.

  21. Santiago MA, Cisneros G, Bernués E: Iterative desensitisation of image restoration filters under wrong PSF and noise estimates. EURASIP J Adv Signal Process 2007, 2007:18. Article ID 72658.

  22. Bertero M, Boccacci P: Introduction to Inverse Problems in Imaging. Institute of Physics Publishing; 1998.

  23. Ng M, Chan RH, Tang WC: A fast algorithm for deblurring models with Neumann boundary conditions. SIAM J Sci Comput 1999, 21:851-866.

  24. Rudin L, Osher S, Fatemi E: Nonlinear total variation based noise removal algorithms. Physica D 1992, 60:259-268.

  25. Petersen KB, Pedersen MS: The Matrix Cookbook. Last update 2008. [http://matrixcookbook.com/]

  26. Felippa CA: Introduction to Finite Element Methods. [http://www.colorado.edu/engineering/cas/courses.d/IFEM.d/]

  27. Vogel CR: Computational Methods for Inverse Problems. Frontiers in Applied Mathematics, SIAM; 2002.

  28. Hagan MT, Demuth HB, Beale MH: Neural Network Design. PWS Publishing; 1996.

  29. Bishop TE: Blind Image Deconvolution: Nonstationary Bayesian Approaches to Restoring Blurred Photos. PhD Thesis, University of Edinburgh; 2008.

  30. Calvetti D, Somersalo E: Bayesian image deblurring and boundary effects. Proceedings of SPIE, the International Society for Optical Engineering 2005, 5910(1):59100X-9.

Author information

Correspondence to Miguel A Santiago.

Additional information

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

Santiago, M.A., Cisneros, G. & Bernués, E. Boundary reconstruction process of a TV-based neural net without prior conditions. EURASIP J. Adv. Signal Process. 2011, 115 (2011). https://doi.org/10.1186/1687-6180-2011-115
