Boundary reconstruction process of a TV-based neural net without prior conditions

Image restoration aims to recover an image within a given domain from a blurred and noisy acquisition. However, the convolution operator which models the degradation is truncated in a real observation, causing significant artifacts in the restored results. Typically, some assumptions are made about the boundary conditions (BCs) outside the field of view to reduce the ringing. We propose instead a restoration method without prior conditions, which reconstructs the boundary region while making the ringing artifact negligible. The algorithm of this article is based on a multilayer perceptron (MLP) which minimizes a truncated version of the total variation regularizer using a back-propagation strategy. Various experiments demonstrate the novelty of the MLP in the boundary restoration process, with neither image information nor prior assumptions on the BCs.


Introduction
Restoration of blurred and noisy images is a classical problem arising in many applications, including astronomy, biomedical imaging, and computerized tomography [1]. This problem aims to invert the degradation caused by a capture device, but the underlying process is mathematically ill-posed and leads to a highly noise-sensitive solution. A large number of techniques have been developed to cope with this issue, most of them under the regularization or the Bayesian frameworks (a complete review can be found in [2][3][4]).
The degraded image is generally modeled as a convolution of the unknown true image with a linear point spread function (PSF), along with the effects of an additive noise. The non-local property of the convolution implies that part of the blurred image near the boundary integrates information of the original scenery outside the field of view. However, this information is not available in the deconvolution process and may cause strong ringing artifacts on the restored image, i.e., the well-known boundary problem [5]. A typical way to counteract the boundary effect is to make assumptions about the behavior of the original image outside the field of view, such as Dirichlet, Neumann, periodic, or other recent conditions in [6][7][8]. The result of restoration with these methods is an image defined in the field-of-view (FOV) domain, but it lacks the boundary area which is actually present in the true image.
In this article we present a restoration method which deals with a blurred image defined in the FOV, but with neither image information nor prior assumptions on the boundary conditions (BCs). Furthermore, the objective is not only to reduce the ringing artifacts on the whole image, but also to reconstruct the missing boundaries of the original image.

Contribution
In recent studies [9,10], we have developed an algorithm using a multilayer perceptron (MLP) to restore a real image without relying on the typical BCs of the literature. The main goal is to model the blurred image as a truncation of the convolution operator, in which the boundaries have been removed and are not used further in the algorithm.
A first step of our neural net was given in a previous study [9] using the standard ℓ2 norm in the energy function, as done in other regularization algorithms [11][12][13][14][15]. However, the success of the total variation (TV) in deconvolution [16][17][18][19][20] motivated its incorporation into the MLP. By means of matrix algebra and the approximation of the TV operator with the majorization-minimization (MM) algorithm of [19], we presented a newer version of the MLP [10] for both ℓ1 and ℓ2 regularizers, mainly devoted to comparing the truncation model with the traditional BCs. Now we analyze the TV-based MLP with the purpose of delving into the boundary restoration process. In general, the neural network is very well suited to learn about the degradation model and then restore the borders without the values of the blurred data therein. Besides, the algorithm adapts the energy optimization to the whole image and makes the ringing artifact negligible.
Finally, let us recall that our MLP builds on the same algorithmic base presented by the authors for the desensitization problem [21]. In fact, our MLP simulates at every iteration an approach to both the degradation (backward) and the restoration (forward) processes, thus extending the same iterative concept but applied to a nonlinear problem.

Paper organization
This article is structured as follows. In the next section, we provide a detailed formulation of the problem, establishing naming conventions and the energy function to be minimized. In Section 3, we present the architecture of the neural net under analysis. Section 4 describes the adjustment of its synaptic weights in every layer and outlines the reconstruction of boundaries. We present some experimental results in Section 5 and, finally, concluding remarks are given in Section 6.

Problem formulation
Let h(i, j) be any generic two-dimensional degradation filter mask (PSF, usually a shift-invariant low-pass filter) and x(i, j) the unknown original image, which can be lexicographically represented by the vectors h and x, respectively (1). Let M = [M1 × M2] ⊂ ℤ² and L = [L1 × L2] ⊂ ℤ² be the supports which define the PSF and the original image, respectively. Let B1 and B2 be the horizontal and vertical bandwidths of the PSF mask; then we can rewrite the support M as (2B1 + 1) × (2B2 + 1).
A classical formulation of the degradation model (blur and noise) in an image restoration problem is given by

y = Hx + n   (2)

where H is the blurring matrix corresponding to the filter mask h of (1), y is the observed image (blurred and noisy), and n is a sample of zero-mean white Gaussian additive noise of variance σ².
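To make the degradation model concrete, the following sketch (our illustration in Python; the article's own implementation is in Matlab) forms an aperiodic blurred observation y = Hx + n by direct 2-D linear convolution plus Gaussian noise. The function name `degrade` and the explicit double loop are ours, chosen for clarity rather than speed.

```python
import numpy as np

def degrade(x, h, sigma, rng=None):
    """Aperiodic (linear) 2-D convolution of image x with PSF h, plus
    white Gaussian noise of standard deviation sigma -- a sketch of
    the model y = Hx + n.  For an L1 x L2 image and an M1 x M2 PSF,
    the output has size (L1 + M1 - 1) x (L2 + M2 - 1)."""
    rng = rng or np.random.default_rng(0)
    L1, L2 = x.shape
    M1, M2 = h.shape
    y = np.zeros((L1 + M1 - 1, L2 + M2 - 1))
    # direct full convolution: y[n, m] = sum_{i,j} h[i, j] * x[n-i, m-j]
    for i in range(M1):
        for j in range(M2):
            y[i:i + L1, j:j + L2] += h[i, j] * x
    return y + sigma * rng.standard_normal(y.shape)

x = np.ones((8, 8))                 # toy constant image
h = np.full((3, 3), 1.0 / 9.0)      # 3x3 uniform blur, B1 = B2 = 1
y = degrade(x, h, sigma=0.0)        # noiseless for illustration
```

Note that the output is larger than the input; the pixels of this enlarged support that depend on scenery outside the capture are precisely the ones missing in a real observation.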
The matrix H can generally be expressed as

H = T + B   (3)

where T has a Toeplitz structure and B, which is defined by the BCs, is often structured, sparse, and low rank. BCs make assumptions about how the observed image behaves outside the FOV and they are often chosen for algebraic and computational convenience. The following cases are commonly referenced in the literature: Zero BCs [22], aka Dirichlet, impose a black boundary so that the matrix B is all zeros and, therefore, H has a block-Toeplitz-Toeplitz-block (BTTB) structure. This implies an artificial discontinuity at the borders which can lead to serious ringing effects.
Periodic BCs [22] assume that the scene can be represented as a mosaic of a single infinite-dimensional image, repeated periodically in all directions. The resulting matrix H is BCCB, which can be diagonalized by the unitary discrete Fourier transform and leads to a restoration problem implemented by FFTs. Although computationally convenient, it cannot actually represent a physical observed image and still produces ringing artifacts.
Reflective BCs [23], aka Neumann, reflect the image like a mirror with respect to the boundaries. In this case, the matrix H has a Toeplitz-plus-Hankel structure which can be diagonalized by the orthonormal discrete cosine transformation if the PSF is symmetric. As these conditions maintain the continuity of the gray level of the image, the ringing effects are reduced in the restoration process.
Anti-reflective BCs [7] similarly reflect the image with respect to the boundaries, but using a central symmetry instead of the axial symmetry of the reflective BCs. The continuity of the image and of the normal derivative are both preserved at the boundary, leading to an important reduction of ringing. The structure of H is Toeplitz-plus-Hankel plus a structured rank-2 matrix, which can also be efficiently implemented if the PSF satisfies a strong symmetry condition.
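The four BCs above can be viewed as different rules for extrapolating the image past its edge. A minimal 1-D sketch (our illustration; the first three map directly onto `np.pad` modes, while the anti-reflective rule x(-i) = 2x(0) - x(i) is written out by hand since NumPy has no built-in mode for it):

```python
import numpy as np

def pad_1d(x, b, bc):
    """Extend a 1-D signal by b samples on each side under a boundary
    condition: 'zero' (Dirichlet), 'periodic', 'reflective', or
    'antireflective' (central symmetry about the edge sample)."""
    if bc == "zero":
        return np.pad(x, b)                     # black border
    if bc == "periodic":
        return np.pad(x, b, mode="wrap")        # circulant, FFT-friendly
    if bc == "reflective":
        return np.pad(x, b, mode="symmetric")   # mirror about the edge
    if bc == "antireflective":
        left = 2 * x[0] - x[b:0:-1]             # 2*x[0] - x[i]
        right = 2 * x[-1] - x[-2:-b - 2:-1]     # 2*x[-1] - x[-1-i]
        return np.concatenate([left, x, right])
    raise ValueError(bc)

x = np.array([1.0, 2.0, 4.0])
```

The anti-reflective extension preserves both the edge value and the first difference across the boundary, which is why it gives the smallest ringing among the four.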
BCs are required to manage the non-local property of the convolution operator, which leads to the underdetermined problem (2), in the sense that we have fewer data points than unknowns to explain it. In fact, the matrix product Hx yields a vector y of length L̄, where H is L̄ × L in size and the value of L̄ is greater than the original size L for linear convolution (aperiodic model). Then, we obtain a degraded image y of support L̄ ⊂ ℤ² with pixels integrated from the BCs; however, they are not actually present in a real observation. Figure 1 illustrates the boundary regions that result after shifting the PSF mask throughout the entire image, and defines the region FOV as

FOV = (L1 − 2B1) × (L2 − 2B2)   (5)

A real observed image y_real is therefore a truncation of the degradation model up to the size of the FOV support. In our algorithm, we define an image y_tru which represents this observed image y_real by means of a truncation on the aperiodic model

y_tru = trunc{H_a x + n}   (6)

where H_a is the blurring matrix for the aperiodic model and the operator trunc{·} is responsible for removing (zero-fixing) the borders that appear due to the BCs, that is to say,

y_tru(i, j) = y_real(i, j) if (i, j) ∈ FOV, and 0 otherwise   (7)

Dealing with a truncated image like (7) in a restoration problem is an evident source of ringing due to the discontinuity at the boundaries. For that reason, this article aims to provide an image restoration approach which avoids those undesirable ringing artifacts when y_tru is the degraded image. Furthermore, it is also intended to regenerate the truncated borders while adapting the rest of the image to the optimum solution. Figure 1 also defines the boundary region B, whose area is calculated by B = (L1 − B1) × 4B1 if we consider square dimensions such that B1 = B2 and L1 = L2.
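The trunc{·} operator can be sketched as follows (our illustration: it zero-fixes the frame of the aperiodic output whose pixels depend on scenery outside the field of view; the frame width 2*B1 per side follows from the support sizes given above):

```python
import numpy as np

def trunc(y, b1, b2):
    """Zero-fix the border of an aperiodic blur output.  For an
    (L1 + 2*b1) x (L2 + 2*b2) aperiodic output, only the central FOV
    pixels survive; a frame 2*b1 (resp. 2*b2) pixels wide on each
    side is set to zero."""
    out = np.zeros_like(y)
    r0, r1 = 2 * b1, y.shape[0] - 2 * b1
    c0, c1 = 2 * b2, y.shape[1] - 2 * b2
    out[r0:r1, c0:c1] = y[r0:r1, c0:c1]
    return out

y6 = np.arange(36.0).reshape(6, 6)   # toy 6x6 aperiodic output, b1 = b2 = 1
t = trunc(y6, 1, 1)
```

Keeping the zeroed frame (rather than cropping it away) preserves the L̄-sized support, which is what lets the net reconstruct values there later.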
Restoring an image x is usually an ill-posed or ill-conditioned problem since the blurring operator H either does not admit an inverse or is nearly singular. Thus, a regularization method should be used in the inversion process. The classical Tikhonov approach solves

x̂ = arg min_x { ||y − Hx||₂² + λ ||Dx||₂² }   (9)

where ||z||₂² = Σᵢ zᵢ² denotes the ℓ2 norm, x̂ is the restored image, and D is the regularization operator, built on the basis of a high-pass filter mask d of support N = [N1 × N2] ⊂ ℤ² and using the same BCs described previously. The first term in (9) is the ℓ2 residual norm appearing in the least-squares approach and ensures fidelity to the data. The second term is the so-called "regularizer" or "side constraint" and captures prior knowledge about the expected behavior of x through an additional ℓ2 penalty term involving just the image. The hyperparameter (or regularization parameter) λ is a critical value which measures the trade-off between a good fit and a regularized solution.
Alternatively, TV regularization, proposed by Rudin et al. [24], has become very popular in recent research as a result of preserving the edges of objects in the restoration. A discrete version of the TV deblurring problem is given by

x̂ = arg min_x { ||y − Hx||₂² + λ ||∇x||₁ }   (10)

where ||z||₁ denotes the ℓ1 norm (i.e., the sum of the absolute values of the elements) and ∇ stands for the discrete gradient operator. The ∇ operator is defined by the matrices D_ξ and D_μ, built on the basis of the respective masks d_ξ and d_μ of support N = [N1 × N2] ⊂ ℤ², which produce the horizontal and vertical first-order differences of the image. Compared to the expression (9), the TV regularization provides an ℓ1 penalty term which can be thought of as a measure of signal variability. Once again, λ is the critical regularization parameter controlling the weight we assign to the regularizer relative to the data misfit term.
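The discrete TV term can be sketched numerically as follows (our illustration, using one-sided first-order differences with replicated last row/column as the masks d_ξ and d_μ; the isotropic gradient-magnitude form is assumed):

```python
import numpy as np

def tv(x, eps=0.0):
    """Discrete (isotropic) total variation of an image: the sum over
    pixels of the gradient magnitude built from horizontal and
    vertical first-order differences.  eps > 0 gives the smoothed
    variant used later for differentiability."""
    dx = np.diff(x, axis=1, append=x[:, -1:])  # horizontal differences
    dy = np.diff(x, axis=0, append=x[-1:, :])  # vertical differences
    return float(np.sum(np.sqrt(dx ** 2 + dy ** 2 + eps)))
```

A constant image has zero TV, while a unit step contributes one unit per edge pixel, which is why TV penalizes oscillation but not sharp edges.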
A significant amount of work has addressed the above regularizations, and mainly TV deblurring in recent times. Nonetheless, most of the approaches adopt one of the BCs described at the beginning of this section to cope with the indetermination of the problem. We now intend to study an algorithm able to restore the real truncated image (6), removing the assumptions about the boundaries and using the TV method as mathematical regularizer. Consequently, the restoration problem (10) can be redefined as

x̂ = arg min_x { ||y_tru − trunc{H_a x}||₂² + λ ||trunc{∇_a x}||₁ }   (12)

where the subscript a denotes the aperiodic formulation of the matrix operator. Table 1 summarizes the dimensions involved in the expression (12), taking into account the definition of the operator trunc{·} in (7).
To tackle this problem, we note that neural networks are particularly well suited owing to their nonlinear mapping ability and self-adaptiveness. In fact, the Hopfield network has been used in the literature to solve the optimization problem (9), and recent studies provide neural network solutions to the TV regularization (10), as in [16,17]. In this article, we present a simple solution to the TV-based problem by means of an MLP with back-propagation. Previous research of the authors [10] showed that the MLP also performs well using the ℓ2 term of (9).

Definition of the MLP approach
Let us build our neural net according to the MLP architecture illustrated in Figure 3. The input layer of the net consists of L̄ neurons with inputs y_1, y_2, ..., y_L̄ being, respectively, the L̄ pixels of the truncated image y_tru. At any generic iteration m, the output layer is defined by L neurons whose outputs x̂_1(m), x̂_2(m), ..., x̂_L(m) are, respectively, the L pixels of an approach x̂(m) to the restored image. After m_total iterations, the neural net outputs the actual restored image x̂ = x̂(m_total). On the other hand, the hidden layer consists of two neurons, this being enough to achieve good restoration results while keeping the complexity of the network low. In any case, the following analysis will be generalized for any number of hidden layers and any number of neurons per layer.
At every iteration, the neural net works by simulating both an approach to the degradation process (backward) and to the restoration solution (forward), while refining the results according to an optimization criterion. However, the input to the net is always the image y_tru, as no net training is required. Let us remark that we use the "backward" and "forward" concepts in the opposite sense to a standard image restoration problem due to the specific architecture of the net.
During the back-propagation process, the network must iteratively minimize a regularized error function which we will set to the expression (12) in the following sections. Since the trunc{·} operator is involved in those expressions, the truncation of the boundaries is performed at every iteration, but so is their reconstruction, as deduced from the L̄ size at the input (though it is really defined in FOV since the rest of the pixels are zeros) and the L size at the output. What deserves attention is that no a priori knowledge, assumption or estimation concerning the unknown borders is needed to perform the regeneration. In general, this can be explained by the neural net behavior, which is able to learn about the degradation model. A restored image is therefore obtained in real conditions on the basis of a global energy minimization strategy, with reconstructed borders, while adapting the center of the image to the optimum solution and thus making the ringing artifact negligible.
Following a similar naming convention to that adopted in Section 2, let us define any generic layer of the net composed of R inputs and S neurons (outputs) as illustrated in Figure 4, where p is the R × 1 input vector, W represents the synaptic weight matrix, S × R in size, and z is the S × 1 output vector of the layer. The bias vector b is ignored in our particular implementation, i.e., b = 0 (vector of zeros). In order to have a differentiable transfer function, a log-sigmoid expression is chosen for f{·}, which is defined in the domain 0 ≤ f{·} ≤ 1. Then, a layer in the MLP is characterized by the following equations:

v = Wp   (13)
z = f{v}   (14)

Furthermore, two layers are connected to each other verifying that

p^(i+1) = z^i   (15)

where i and i+1 are superscripts denoting two consecutive layers of the net. Although this superscripting of layers should be appended to all variables, for notational simplicity we shall remove it from all formulae of the manuscript when deduced by the context. Table 1 shows the size of the variables involved in the definition of the MLP, both in the degradation and the restoration processes.
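A single layer of this kind can be sketched as follows (our illustration; the function names are ours, and the zero bias matches the implementation choice stated above):

```python
import numpy as np

def logsig(v):
    """Log-sigmoid transfer function f{v} = 1 / (1 + exp(-v)):
    differentiable everywhere and bounded in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-v))

def layer_forward(W, p):
    """One MLP layer with zero bias: v = W p, then z = f{v}.
    W is S x R, p is R x 1 (here a length-R vector), z is length S."""
    v = W @ p
    return logsig(v)
```

Chaining two such calls, with the output of one layer fed as the input of the next, reproduces the inter-layer connection p^(i+1) = z^i.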

Adjustment of the neural net
In this section, our purpose is to show the procedure for adjusting the interconnection weights as the MLP iterates. A variant of the well-known back-propagation algorithm is applied to solve the optimization problem in (12). Let ΔW^i(m+1) be the correction applied to the weight matrix W^i of the layer i at the (m+1)th iteration. Then,

ΔW^i(m+1) = −η ∂E(m)/∂W^i(m)   (16)

where E(m) stands for the restoration error after m iterations at the output of the net and the constant η indicates the learning speed. Let us now compute the gradient ∂E(m)/∂W^i(m) in the different layers of the MLP.

Output layer
Defining the vectors e(m) and r(m) for the respective error and regularization terms at the output layer after m iterations,

e(m) = y_tru − trunc{H_a x̂(m)}   (17)
r(m) = trunc{∇_a x̂(m)}   (18)

we can rewrite the restoration error from (12) as

E(m) = ||e(m)||₂² + λ ||r(m)||₁   (19)

Using the matrix chain rule for a composition on a vector [25], the gradient matrix leads to

∂E(m)/∂W(m) = δ(m) p^T(m)   (20)

where δ(m) = ∂E(m)/∂v(m) is the so-called local gradient vector, which again can be expanded by the chain rule for vectors [26] as

δ(m) = (∂z(m)/∂v(m)) (∂E(m)/∂z(m))   (21)
Since z and v are elementwise related by the transfer function, the Jacobian

∂z(m)/∂v(m) = F′(m)   (22)

represents a diagonal matrix whose eigenvalues are computed by the derivative function f′{·}. We recall that z(m) is actually x̂(m) in the output layer (see Figure 3). If we wanted to compute the gradient ∂E(m)/∂z(m) directly from (19), we would find a challenging nonlinear optimization problem caused by the nondifferentiability of the ℓ1 norm. One approach to overcome this challenge comes from the approximation

||r(m)||₁ ≈ TV(x̂(m)) = Σ_k sqrt( (D_ξa x̂(m))_k² + (D_μa x̂(m))_k² + ε )   (24)

where TV stands for the well-known TV regularizer and ε > 0 is a constant to avoid singularities when minimizing. Both products D_ξa x̂(m) and D_μa x̂(m) are subscripted by k, meaning the kth element of the respective U × 1 sized vector (see Table 1). It should be mentioned that ℓ1 norm and TV regularizations are quite often used interchangeably in the literature. However, the distinction between these two regularizers should be kept in mind since, at least in deconvolution problems, TV leads to significantly better results, as illustrated in [18].
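The ε-smoothed magnitude, together with the diagonal weights it induces for a quadratic bound of the TV term, can be sketched as follows (our illustration; the 1/(2·magnitude) weighting follows the usual MM construction for TV and is our assumption here):

```python
import numpy as np

def smoothed_tv_and_weights(dxi, dmu, eps=1e-6):
    """Given the horizontal and vertical difference images (flattened),
    return the epsilon-smoothed TV value and the per-element diagonal
    weights 1 / (2 * sqrt(dxi^2 + dmu^2 + eps)) of a quadratic
    majorizer of the TV term."""
    mag = np.sqrt(dxi ** 2 + dmu ** 2 + eps)
    return float(mag.sum()), 1.0 / (2.0 * mag)
```

With ε > 0 the magnitude never vanishes, so the weights stay finite and the resulting quadratic bound is well defined at every iterate.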
Bioucas-Dias et al. [18,19] proposed an interesting formulation of the TV problem by applying MM algorithms. It leads to a quadratic bound function for the TV regularizer, which thus results in solving a linear system of equations. Similarly, we adopt that quadratic majorizer in our particular implementation as

Q(x̂(m)) = x̂^T(m) D_a^T Λ(m) D_a x̂(m) + K   (25)

where K is an irrelevant constant and the involved matrices are defined as

D_a = [D_ξa ; D_μa]   (26)

Λ(m) = diag{w(m); w(m)},  with w_k(m) = 1 / ( 2 sqrt( (D_ξa x̂(m))_k² + (D_μa x̂(m))_k² + ε ) )   (27)

The regularization term r(m) of (18) is then reformulated as

r(m) = trunc{D_a x̂(m)}   (28)

such that the operator trunc{·} is applied individually to D_ξa and D_μa (see Table 1) and merged later as indicated in the definition of (26).
Finally, we can rewrite the restoration error E(m) as

E(m) = ||e(m)||₂² + λ r^T(m) Λ(m) r(m)   (29)

Taking advantage of the quadratic properties of the expression (25) and applying Matrix Calculus basics (see a detailed computation in [10]), the differentiation ∂E(m)/∂z(m) leads to

∂E(m)/∂z(m) = −2 H_a^T e(m) + 2λ D_a^T Λ(m) r(m)   (30)

According to Table 1, it can be deduced that ∂E(m)/∂z(m) represents a vector of size L × 1. Combining with the diagonal matrix of (22), we can write

δ(m) = f′{v(m)} ○ ∂E(m)/∂z(m)   (31)

where ○ denotes the Hadamard (elementwise) product. To complete the gradient matrix of (20), we need the input vector p(m), which in turn corresponds to the output of the previous connected hidden layer, that is to say,

p(m) = z^(J−1)(m)   (33)

Putting together all the results into the incremental weight matrix ΔW(m+1), we have

ΔW(m+1) = −η δ(m) (z^(J−1)(m))^T   (34)

A summary of the dimensions of every variable can be found in Table 2.

Any i hidden layer
If we apply superscripting to the gradient matrix (20) for any i hidden layer of the MLP, we obtain

∂E(m)/∂W^i(m) = δ^i(m) (∂v^i(m)/∂W^i(m))

and taking what was already demonstrated in (33), then

∂E(m)/∂W^i(m) = δ^i(m) (p^i(m))^T

Let us expand the local gradient δ^i(m) by means of the chain rule for vectors as follows:

δ^i(m) = (∂z^i(m)/∂v^i(m)) (∂E(m)/∂z^i(m))

where ∂z^i(m)/∂v^i(m) = F′^i(m) is the same diagonal matrix (22) particularized for the layer i, and we know from (14) that

v^(i+1)(m) = W^(i+1)(m) z^i(m)

Consequently, we come to

δ^i(m) = F′^i(m) (W^(i+1)(m))^T δ^(i+1)(m)

which can be simplified, after verifying that (W^(i+1)(m))^T δ^(i+1)(m) stands for a R^(i+1) × 1 = S^i × 1 vector, into

δ^i(m) = f′{v^i(m)} ○ ( (W^(i+1)(m))^T δ^(i+1)(m) )

We finally provide an equation to compute the incremental weight matrix for any i hidden layer,

ΔW^i(m+1) = −η δ^i(m) (p^i(m))^T

which is mainly based on the local gradient δ^(i+1)(m) of the following connected layer i+1.

Algorithm
As described in Section 3, our MLP neural net performs a pair of forward and backward processes at every iteration m. First, the whole set of connected layers propagates the degraded image y from the input to the output layer by means of Equation 14. Afterwards, the new synaptic weight matrices W^i(m+1) are recalculated from right to left according to the expressions of ΔW^i(m+1) for every layer.

1: m := 0
2: Initialize W^i(0) for every layer i
3: while not StopRule do
4:   for i := 1 to J do /* Forward pass */
5:     Compute v^i(m) and z^i(m) from (13)-(14)
6:   end for
7:   for i := J downto 1 do /* Backward pass */
8:     if i = J then /* Output layer */
9:       Compute δ^J(m) from (31)
10:      Compute E(m) from (29)
11:     else
12:       Compute δ^i(m) from the local gradient δ^(i+1)(m)
13:     end if
14:     Compute ΔW^i(m+1)
15:     W^i(m+1) := W^i(m) + ΔW^i(m+1)
16:   end for
17:   m := m+1
18: end while /* x̂ := x̂(m_total) */

The previous pseudo-code summarizes our proposed algorithm in an MLP of J layers. StopRule denotes a condition such that either the number of iterations exceeds a maximum; or the error E(m) converges and, thus, the error change ΔE(m) is less than a threshold; or, even, this error E(m) starts to increase. If one of these conditions comes true, the algorithm concludes and the final outgoing image is the restored image x̂ := x̂(m_total).
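The per-iteration structure (forward pass through every layer, backward pass updating every W^i, StopRule on the error change) can be sketched on a toy two-layer net as follows. This is our simplification: a plain ℓ2 error replaces the truncated TV energy, and the sizes, learning rate, and tolerance are illustrative only.

```python
import numpy as np

def logsig(v):
    return 1.0 / (1.0 + np.exp(-v))

def run_mlp(y, target, n_hidden=2, eta=0.5, max_iter=500, tol=1e-5):
    """Skeleton of the iterative scheme: forward pass, backward pass
    with Delta W^i = -eta * delta^i (p^i)^T, and a StopRule that fires
    when the error change drops below tol (or the error increases)."""
    rng = np.random.default_rng(0)
    W1 = 0.1 * rng.standard_normal((n_hidden, y.size))
    W2 = 0.1 * rng.standard_normal((target.size, n_hidden))
    prev_err = np.inf
    for m in range(max_iter):
        # forward pass (one call per layer)
        z1 = logsig(W1 @ y)
        z2 = logsig(W2 @ z1)
        e = target - z2
        err = float(e @ e)
        if prev_err - err < tol:          # StopRule on the error change
            break
        prev_err = err
        # backward pass: local gradient of the output layer, then hidden
        d2 = (z2 * (1 - z2)) * (-2.0 * e)         # f'{v} (Hadamard) dE/dz
        d1 = (z1 * (1 - z1)) * (W2.T @ d2)        # propagated local gradient
        W2 -= eta * np.outer(d2, z1)              # Delta W = -eta delta p^T
        W1 -= eta * np.outer(d1, y)
    return z2, err

z2, err = run_mlp(np.array([0.2, 0.8, 0.5]), np.array([0.3, 0.7]))
```

The layer updates use the local gradient of the following layer, exactly as in the hidden-layer derivation above; only the energy being minimized differs from the article's.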

Reconstruction of boundaries
If we particularize the algorithm for two layers, J = 2, we arrive at an MLP scheme as illustrated in Figure 5. It is worth emphasizing how the boundaries are reconstructed at every iteration of the net, from a real image of support FOV (5) to the restored image of size L = [L1 × L2] (recall that the remainder of the pixels in y_tru was zero-fixed). In addition, we shall observe in Section 5 how the boundary artifacts are removed from the restored image based on the energy minimization of E(m), whereas they are critical for other methods of the literature.

Adjustment of l and h
In the image restoration field, it is well known how important the parameter λ becomes. In fact, too small values of λ yield overly oscillatory estimates owing to either noise or discontinuities; too large values of λ yield over-smoothed estimates.
For that reason, the literature has given significant attention to it, with popular approaches such as the unbiased predictive risk estimator (UPRE), the generalized cross validation (GCV), or the L-curve method; see [27] for an overview and references. Most of them were particularized for a Tikhonov regularizer, but recent research aims to provide solutions for TV regularization. Specifically, the Bayesian framework leads to successful approaches in this field.
In our previous article [10], we adjusted λ with solutions coming from the Bayesian state of the art. However, we still need to investigate a particular algorithm for the MLP since those Bayesian approaches work only for circulant degradation models, but not for the truncated image of this article. Thus, we shall use a hand-tuned λ which optimizes the results.
Regarding the learning speed, it was already demonstrated that η shows lower sensitivity compared to λ. In fact, its main purpose is to speed up or slow down the convergence of the algorithm. Then, for the sake of simplicity, we shall assume η = 2 for the images of 256 × 256 in size.

Experimental results
Our previous article [10] showed a wide set of results which mainly demonstrated the good performance of the MLP in terms of image restoration. We shall focus now on its ability to reconstruct the boundaries using standard 256 × 256 sized images such as Lena or Barbara and common PSFs, some of which are presented here (diagonal motion, uniform, or Gaussian blur).
Let us review our problem formulation by means of the example in Figure 6. As observed in Figure 5, the neural net under analysis consists of two layers, J = 2, where the bias vectors are ignored and the same log-sigmoid function is applied to both layers. Besides, looking for a tradeoff between good quality results and computational complexity, it is assumed that only two neurons take part in the hidden layer, i.e., S¹ = 2.
In terms of parameters, we previously noted that the learning speed of the net is set to η = 2 and the regularization parameter λ relies on hand tuning. Regarding the interconnection weights, they do not require any network training, so the weight matrices are all initialized to zero. Finally, we set the stopping criterion in the Algorithm as a maximum number of 500 iterations (though never reached) or when the relative difference of the restoration error E(m) falls below a threshold of 10⁻³ within a temporal window of 10 iterations.
The Gaussian noise level is established according to a BSNR (signal-to-noise ratio of the blurred image) of 20 dB, and the regularization term of (19) is weighted accordingly. In order to measure the performance of our algorithm, we compute the standard deviation σ_e of the error image e = x̂ − x, since it does not depend on the blurred image y as occurs in the ISNR [2]. Moreover, since our purpose is to measure the boundary restoration process, we particularize the standard deviation to the pixels of the boundary region B; Bσ_e stands for this boundary standard deviation. Alternatively, we have also used the boundary peak signal-to-noise ratio (BPSNR) as defined in [8], considering an 8-bit gray-scaled image. Our proposed MLP scheme was fully implemented in Matlab, being very well suited as all formulae of this article have been presented on a matrix basis. The complexity of the net can be analyzed in the two stages which describe the algorithm: forward pass (FP) and backward pass (BP). The computation of the gradient δ(m) in the output layer makes the BP more time-consuming, as shown in (31). In those equations, the product trunc{H_a x̂(m)} is the most critical term as it requires numerical computations of O(L²), although the operator trunc{·} is responsible for discarding (zero-fixing) L1 × 8B1 operations (assuming B1 = B2 and L1 = L2). However, this high computational cost is significantly reduced thanks to the sparsity of H_a, which yields a performance related only to the number of non-zero elements. Regarding the FP, the two neurons of the hidden layer lead to faster matrix operations of O(2L).
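The evaluation metrics can be sketched as follows (our illustration: the boundary mask geometry is our reading of the region B as a frame of width B1 around the FOV, and the BPSNR expression is the usual peak-signal formula applied over the masked error pixels):

```python
import numpy as np

def sigma_e(err):
    """Standard deviation of the error image e = x_hat - x (or of a
    subset of its pixels, if a masked array is passed)."""
    return float(np.std(err))

def bpsnr(err, peak=255.0):
    """PSNR of the error pixels for an 8-bit gray-scaled image; feed it
    only the boundary-region pixels to obtain the boundary PSNR."""
    mse = float(np.mean(err ** 2))
    return 10.0 * np.log10(peak ** 2 / mse)

def boundary_mask(L1, L2, b1, b2):
    """Boolean mask of the boundary region B: a frame of width b1
    (resp. b2) pixels around the FOV of an L1 x L2 image."""
    m = np.zeros((L1, L2), dtype=bool)
    m[:b1, :] = True
    m[-b1:, :] = True
    m[:, :b2] = True
    m[:, -b2:] = True
    return m
```

For L1 = L2 = 256 and B1 = B2 = 3, the frame contains 256² − 250² = 3036 pixels, matching the area formula B = (L1 − B1) × 4B1 = 253 × 12 given in Section 2.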
In regard to convergence, our MLP is based on the simple steepest descent algorithm as defined in (16). Consequently, convergence is usually slow and controlled by the parameter η. We are aware that other variations on back-propagation may be applied to our MLP, such as the conjugate gradient algorithm, which performs significantly better [28]. Finally, we mention that the experiments were run on a 2.4-GHz Intel Core2Duo with 2 GB of RAM. For a detailed analysis of timing, we refer the reader to the previous article [10].

Experiment 1
In a first experiment, we aim to obtain numerical results of the boundary reconstruction process for different sizes of degradation. Let us take the original images Lena and Barbara degraded by diagonal motion and uniform blurs. Regarding the motion blur, it is set to 45° of inclination and the length of shifted pixels is the parameter to vary between 5 and 15. We have used the approximation of Matlab to construct the motion filter (http://www.mathworks.com/help/toolbox/images/ref/fspecial.html), which leads to masks between 5 × 5 and 11 × 11 in size. Analogously, the uniform blur is defined with odd sizes between 5 × 5 and 11 × 11. Let us recall that Gaussian noise is added to the blurred image such that BSNR = 20 dB.
The results of the MLP are shown in Table 3. We can observe the expected reduction of quality (both σ_e and Bσ_e increase, while the BPSNR is lowered) when the size of degradation grows. However, it is important to note that the region of boundary reconstruction is expanded accordingly, as we will see in the following section.
Comparing the blurs in both images, we want to highlight the better results of boundary reconstruction for the uniform blur despite the worse values of σ_e. Therefore, it is presumable that the MLP carries out the restoration of the center of the image somewhat differently from the boundary restoration. In fact, the restoration of the center is a linear process defined by the regularization expression (29), but the boundary reconstruction comes from a nonlinear truncation which requires different behavior.
Finally, let us comment on the improvement in the regeneration of borders for the motion blur of a specific mask size when the length of shifted pixels increases. Although we know it is a consequence of how the motion blur is modeled, we can deduce the dependency of the MLP on the structure of the PSF in order to reconstruct the boundaries.

Experiment 2
To visually assess the performance of the MLP on the boundary reconstruction process, we have devoted an experiment to showing some restored images. In particular, we have selected some of the results indicated in Table 3 with different sizes of blurring. Figure 7 depicts the restored Lena image for a diagonal motion blur of 10 pixels. The restored boundary area, 252 × 16 in size and marked by a white broken line, reveals how the borders are successfully regenerated with neither image information nor prior assumptions on the BCs.
Using a bigger motion blur of 13 pixels, the boundary reconstruction is even more manifest, as shown in Figure 8. In spite of the fact that the blurring is more severe and thus the subjective quality of the Barbara image is lower, the 251 × 20 boundary pixels are regenerated accurately. Let us look at the tablecloth or her hair to appreciate the good performance of the MLP.
Finally, we use a different type of blurring on the Lena image in Figure 9. In this case, a uniform blur of size 7 × 7 is applied to the original image and the MLP leads to a successfully restored image which recovers the 253 × 12 truncated pixels of the original image.
In each of the figures, we have included a gray-scaled image which represents the evolution of the restoration error in square blocks. Specifically, it corresponds to the parameter σ_e, where the brighter regions are the lower values of σ_e, that is to say, the pixels with a better quality of restoration. We want to highlight the smooth transition of the restoration error in the boundary area due to the regeneration of borders. On the other hand, the center of the image comprises the lowest values of restoration error, as expected from the global energy minimization of the MLP.

Experiment 3
This experiment aims to compare the performance of the MLP with other restoration algorithms which need BCs to deal with a realistic capture model: zero, periodic, reflective, and anti-reflective, as commented in Section 2. We have used the well-known RestoreTools library (http://www.mathcs.emory.edu/~nagy/RestoreTools) patched with the anti-reflective modification (http://scienze-como.uninsubria.it/mdonatelli/), which implements the matrix-vector operations for every boundary condition. In particular, we have selected a modified version of the Tikhonov regularization (9) named hybrid bidiagonalization regularization (HyBR) in the library. Let us consider a Barbara image degraded by a 7 × 7 Gaussian blur and the same additive white noise of the previous experiments with BSNR = 20 dB. Figure 10 shows the real acquisition of such a degraded image, where we have removed the boundary pixels and the image is 250 × 250 in size (FOV). From (b) to (e) we have represented the restored images for each boundary condition; all of them are 250 × 250 sized images which miss the information of the boundaries up to 256 × 256. Furthermore, a remarkable boundary ringing can be appreciated for the zero and the periodic BCs as a result of the discontinuity of the image at the boundaries. As demonstrated in [6,7], the reflective (d) and the anti-reflective (e) conditions perform considerably better, removing that boundary effect.
The restored image of our MLP algorithm is shown in Figure 10f and makes obvious the good performance of the neural net. First, the boundary ringing is negligible without any prior assumption on the boundary condition. Moreover, the visual aspect is better compared to the others, which supports the good properties of the TV regularizer. To numerically contrast the results, the parameter σ_e of the MLP is measured only in the FOV region. It leads to σ_e = 12.47, which is notably lower than the values of the HyBR algorithm (e.g., σ_e = 12.99 for the reflective BCs). Finally, the MLP is able to reconstruct the 253 × 12 sized boundary region and recovers the original image size of 256 × 256.

Experiment 4
Finally, let us delve into other algorithms of the literature which deal with the boundary problem in a different sense from the typical BCs. These methods are expected not only to remove the boundary ringing but also to reconstruct the area B bordering the FOV. In recent research, Bishop [29] and Calvetti [30] propose similar methods based on the Bayesian model of the deconvolution problem and treat the truncation effect as a modeling error. They rewrite the observation model (2) as

y_real = H_1 x_1 + H_2 x_2,

where x_+ is the extended image of length L̄ and x_1 is the restricted image defined in the FOV of (5). It can be deduced that H_1 and H_2 are matrices of size L × L and L × (L̄ − L), respectively, and x_2 is the image vector in the boundary frame, of length L̄ − L. The extrapolation approach of these methods establishes an adequate prior p(x_+) which models the entire image, and the restored distribution p(x_+ | y_real) is estimated according to the Bayesian framework. We particularly select the region of length L of the restored image x̂_+.
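The column split behind this rewritten model can be checked numerically in a 1-D toy setting: the matrix H_+ implementing the "valid" convolution of the extended signal partitions by columns into H_1 (acting on the FOV samples x_1) and H_2 (acting on the boundary frame x_2). Mask values and lengths below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
m = 3                       # 1-D blur length (toy analogue of a 7x7 mask)
Lbar = 12                   # extended scene length L-bar
L = Lbar - (m - 1)          # FOV length: (m-1)/2 samples lost per side
b = (m - 1) // 2            # boundary frame width
h = np.array([0.25, 0.5, 0.25])

# H_+ maps the extended scene x_+ (length L-bar) to the observation
# (length L): row i holds the reversed mask starting at column i.
H_plus = np.zeros((L, Lbar))
for i in range(L):
    H_plus[i, i:i + m] = h[::-1]

x_plus = rng.random(Lbar)                         # extended image x_+
x1 = x_plus[b:Lbar - b]                           # restricted image (FOV)
x2 = np.concatenate([x_plus[:b], x_plus[-b:]])    # boundary frame

H1 = H_plus[:, b:Lbar - b]                                        # L x L
H2 = np.concatenate([H_plus[:, :b], H_plus[:, -b:]], axis=1)      # L x (Lbar-L)

y_real = H_plus @ x_plus
assert np.allclose(y_real, H1 @ x1 + H2 @ x2)     # y_real = H1 x1 + H2 x2
assert np.allclose(y_real, np.convolve(x_plus, h, mode="valid"))
```

The two assertions confirm that the truncated observation is exactly the sum of a FOV term and a boundary term, which is what the extrapolation methods exploit.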
For this section we extract results from the Extrapolation algorithm of Bishop, whom we would like to thank for his close collaboration, using a TV prior for p(x_+). It is demonstrated in [29] that the figures obtained for Calvetti's algorithm would be equivalent. On the other hand, we upgrade our proposed MLP to leverage the concept of the extended observation. First, this means that the input of our neural net is the real observed image y_real rather than the truncated model y_tru, so the input layer consists of FOV neurons. The structure of the MLP remains unaltered in terms of hidden and output layers, yielding a restored image x̂ of size L̄. Finally, we remove the operator trunk{·} from all formulae of Section 4, assuming an aperiodic (zero-padded) model of the extended image x_+. We do not lose generality, as the input is the real image y_real and the MLP still has to reconstruct the boundary region B.
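A minimal sketch of this upgraded input/output arrangement follows. The single hidden layer, its size, and the tanh activation are our illustrative assumptions; the article's actual architecture and TV-based training rule are those of Section 4:

```python
import numpy as np

rng = np.random.default_rng(3)
L_in = 250 * 250            # input layer: one neuron per FOV pixel of y_real
L_out = 256 * 256           # output layer: the extended restored image
hidden = 64                 # toy hidden-layer size (illustrative)

# Randomly initialized weights of a one-hidden-layer perceptron; in the
# article these are adapted by back-propagation of the TV-regularized cost.
W1 = 0.01 * rng.normal(size=(hidden, L_in))
W2 = 0.01 * rng.normal(size=(L_out, hidden))

def forward(y_real_vec):
    a = np.tanh(W1 @ y_real_vec)    # hidden activations
    return W2 @ a                   # restored extended image (vectorized)

x_hat = forward(rng.random(L_in))
print(x_hat.shape)                  # (65536,): a full 256x256 image, frame included
```

The point of the sketch is the shape mismatch it handles: the net consumes the 250 × 250 observation and emits the 256 × 256 estimate, boundary frame B included.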
Let us take the blurs of the previous experiments: uniform, Gaussian, and motion masks of size 7 × 7. Tests are computed with the Barbara image and a noise ratio of BSNR = 20 dB. To maximize the results of the MLP, we choose the parameters λ and η on a hand-tuning basis. Again, the performance of the algorithms is measured with the restoration error in the whole image, σ_e, and in the boundary region B, σ_e^B. In this experiment, we also include the equivalent error in the FOV, denoted as σ_e^F.
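The three region-wise errors can be computed with boolean masks over the image, for instance as follows. Taking σ_e as a root-mean-square error is our assumption for illustration (the article's exact definition is given in an earlier section), and the images are random stand-ins:

```python
import numpy as np

def sigma_e(x_hat, x_true, mask=None):
    """Restoration error over an optional region mask; an RMS error is
    assumed here for illustration."""
    err = x_hat - x_true
    if mask is not None:
        err = err[mask]
    return np.sqrt(np.mean(err ** 2))

N, border = 256, 3                                 # 3-pixel frame for a 7x7 mask
fov_mask = np.zeros((N, N), dtype=bool)
fov_mask[border:-border, border:-border] = True    # F: the 250x250 field of view
boundary_mask = ~fov_mask                          # B: the surrounding frame

rng = np.random.default_rng(2)
x_true = rng.random((N, N))
x_hat = x_true + 0.05 * rng.normal(size=(N, N))    # stand-in restored image

print(sigma_e(x_hat, x_true))                  # sigma_e   (whole image)
print(sigma_e(x_hat, x_true, boundary_mask))   # sigma_e^B (boundary region B)
print(sigma_e(x_hat, x_true, fov_mask))        # sigma_e^F (FOV only)
```

Note that F and B partition the image, so σ_e combines the other two measures weighted by pixel counts.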
Looking into Table 4, we find that the values of σ_e are quite similar for both methods, with the MLP outperforming in the Gaussian and motion blurs. What really deserves attention, however, are the results in the boundary region B. The MLP is considerably better at reconstructing the missing boundaries, as indicated by the lower values of σ_e^B. This proves the outstanding properties of the neural net in terms of learning about the unknown image. On the contrary, the extrapolation method is able to restore the FOV slightly better. We can conclude that our MLP is a successful approach to inpainting the boundary frame, in addition to recovering the FOV without any boundary artifact.
Let us visually assess the performance of both methods for the experiments of Table 4. In particular, we have used two 250 × 250 sized images degraded by uniform and Gaussian blurs of 7 × 7, as depicted in Figure 11a,d, respectively. The restored images obtained by the Extrapolation and the MLP algorithms are placed in a row of Figure 11. All the output images are 256 × 256 in size, thus reconstructing the boundary area B = 253 × 12. Despite the fact that the value of σ_e is lower for the Extrapolation method in the uniform blur, we can observe that the subjective quality of the MLP output is better. Regarding the Gaussian blur, the restored images look similar, although the value of σ_e is in favor of the neural net. As for the boundary, let us compute some further experiments to actually observe the reconstruction process.
We have selected a Gaussian blur with sizes increasing from 7 × 7 to 17 × 17. In Table 5, we reflect the boundary error σ_e^B for every single mask. It gives clear evidence of the good performance of the MLP when dealing with the boundary problem. That is also remarkable when we have a look at the restored images of Figure 12, where we have highlighted the upper right corner of the Barbara image for the Gaussian masks of 7 × 7 and 17 × 17. We can observe how the MLP manages to reconstruct the boundary frame successfully, whereas the extrapolation algorithm obtains only a rough estimation of the region as the mask grows.
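The growing masks, and the correspondingly wider boundary frame B to be reconstructed, can be generated as in the following sketch. Stepping through odd sizes and scaling sigma with the mask size are our assumptions for illustration:

```python
import numpy as np

def gaussian_mask(size, sigma=None):
    """Normalized size x size Gaussian blur mask; sigma scaled with size
    (a common heuristic, assumed here) unless given explicitly."""
    if sigma is None:
        sigma = size / 6.0
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    h = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return h / h.sum()

for size in range(7, 18, 2):            # masks 7x7, 9x9, ..., 17x17
    frame = (size - 1) // 2             # boundary frame width lost per side
    h = gaussian_mask(size)
    print(f"{size}x{size} mask -> {frame}-pixel frame to reconstruct")
```

A 17 × 17 mask truncates an 8-pixel frame per side, so the area B that the algorithms must inpaint grows quickly with the mask, which explains the widening gap in σ_e^B.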

Concluding remarks
In this article, we have presented a method which restores the boundary area of a real truncated image without prior conditions. The idea is to apply a TV-based regularization function in an iterative minimization of an MLP neural net. An inherent back-propagation algorithm has been developed in order to regenerate the lost borders, while adapting the center of the image to the optimum linear solution (the ringing artifact thus being negligible). The proposed restoration scheme has been validated by means of several tests. As a result, we can conclude that our neural net is able to reconstruct the boundaries of the image, with different BPSNR values depending on the blurring type.