Introducing oriented Laplacian diffusion into a variational decomposition model
EURASIP Journal on Advances in Signal Processing volume 2016, Article number: 116 (2016)
Abstract
The decomposition model proposed by Osher, Solé and Vese in 2003 (the OSV model) is known for its good denoising performance. This performance has been found to be due to the higher weighting of lower image frequencies in the \(H^{-1}\) norm modeling the noise component of the model. However, the OSV model also tends to move high-frequency texture into this noise component. In this paper, diffusion with an oriented Laplacian for oriented texture is introduced in lieu of the usual Laplacian operator used to solve the OSV model, thereby significantly reducing the presence of such texture in the noise component. Results obtained from the proposed oriented Laplacian model for test images with oriented texture are given and compared to those from the OSV model as well as the mean curvature motion (MCM) model. In general, the proposed oriented Laplacian model yields higher signal-to-noise ratios and visually superior denoising results compared with either the OSV or the MCM model. We also compare the proposed method to a non-local means model and find that, although the proposed method generally yields slightly lower signal-to-noise ratios, it generally gives results of better perceptual visual quality.
Introduction
Image decomposition
According to the definition of Meyer [1], image decomposition is the process of splitting an image into components of different priorities, usually so that the original image is roughly equal to the sum of these components. Often the original image is denoted by f, which is considered to approximately equal the sum of two components, u and v, i.e. f≈u+v. In Meyer’s 2001 monograph [1], decomposition was deemed to be important for image coding and transmission. In addition to the problem of decomposition being interesting and important in its own right [2], decomposition has also proven to be useful for texture discrimination [3], image denoising [4], image inpainting [5] and image registration [6]. In this paper, the focus is on the application of decomposition to the problem of the denoising of images with oriented texture.
Meyer’s decomposition framework
Meyer’s decomposition framework models the various components of the decomposition as having small norms in different Banach spaces, i.e. complete normed vector spaces. In practical implementations of this framework [3], the energy method is used, in which an energy is defined over the image being decomposed and its components, and is subsequently minimized using the Calculus of Variations.
The first method that could be considered to fall under the image decomposition framework is the total variation model of Rudin, Osher and Fatemi [7]. The energy for the total variation model is defined as
\(E_{TV}(u) = \int_{\Omega}|\nabla u|\,dxdy + \lambda\int_{\Omega}(f-u)^{2}\,dxdy\)    (1)
where the original image is f and f≈u+v. The first term \(\left(\int_{\Omega}|\nabla u|\,dxdy\right)\) is included to produce a bounded variation (piecewise-smooth) image u upon energy minimization, while the second term is a fidelity term, which ensures that the final u component is close to the initial image f. Although the minimization of \(E_{TV}(u)\) preserves sharp edges, it destroys fine structure such as texture. However, Eq. 1 has been used successfully for the denoising and deblurring of images of bounded variation.
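Gradient descent on Eq. 1 gives the well-known flow \(u_{t}=\text{div}(\nabla u/|\nabla u|)+2\lambda(f-u)\). As a concrete illustration (not the implementation used in this paper; the time step, fidelity weight and regularization constant below are illustrative choices), an explicit scheme can be sketched as:

```python
import numpy as np

def tv_denoise(f, lam=0.5, dt=0.02, n_iter=300, eps=1e-2):
    """Explicit gradient descent on the ROF energy of Eq. 1:
    u_t = div(grad u / |grad u|) + 2*lam*(f - u).
    eps regularizes |grad u| near zero; periodic boundaries for brevity."""
    u = f.copy()
    for _ in range(n_iter):
        ux = np.roll(u, -1, axis=1) - u              # forward differences
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps)       # regularized gradient magnitude
        px, py = ux / mag, uy / mag                  # bounded flux field
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u = u + dt * (div + 2.0 * lam * (f - u))     # curvature + fidelity step
    return u

# Illustrative usage on a synthetic noisy square
rng = np.random.default_rng(0)
clean = np.zeros((32, 32)); clean[8:24, 8:24] = 1.0
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
denoised = tv_denoise(noisy)
```

As noted above, such a minimization smooths the noise while leaving sharp edges largely intact, at the cost of destroying fine texture.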
In [1], Meyer proposed modifying the second term in the above energy of Eq. 1 by changing the \(L^{2}\)-norm of the residual v to the ∗-norm of this residual, where ∗ is a norm on a suitably defined Banach space G. The ∗-norm on G is defined by
\(\|v\|_{*} = \inf\left\|\sqrt{g_{1}^{2}+g_{2}^{2}}\right\|_{L^{\infty}}\)    (2)
over all g _{1} and g _{2} such that \(v=\text {div}({\vec {g}})\), where \({\vec {g}}=(g_{1},g_{2})\).
The Osher-Solé-Vese decomposition model
A second attempt at minimizing the ∗-norm of Eq. 2 was made by Osher, Solé and Vese in [4]. They used the Hodge decomposition of \(\vec {g}\), which splits \(\vec {g}\) into the sum of the gradient of a single-valued function and a divergence-free vector field. The energy functional for their model is
The PDE that they obtained as a result of minimization of the above functional is
with adiabatic boundary conditions. This equation was found by approximating \(\vec {g}\) as having no divergence-free part, and approximating the \(L^{\infty}\) part of the ∗-norm in Eq. 2 with the square of the \(L^{2}\)-norm of the same function. The resulting energy functional includes an inverse Laplacian of f−u; this was eliminated in [4] by showing that, under some rather relaxed conditions found in the second Remark of that paper, the gradient descent solution of the Euler-Lagrange equation \(u_{t}=-E'_{OSV}(u)\), where \(E'_{OSV}(u)\) is the first variation of \(E_{OSV}\) with respect to u, converges to the same solution as the equation \(u_{t}=\Delta E'_{OSV}(u)\). Thus no inverse Laplacian appears in Eq. 4. Additionally, the auxiliary functions g _{1} and g _{2} of Eq. 2 are no longer involved in the PDE of Eq. 4, as they disappear in the derivation.
Incorporating oriented diffusion into the OSV model
Motivation and initial derivation
If we denote the curvature of the level lines of the cartoon component u by \(K(u) = \text {div}\left (\frac {\nabla u}{|\nabla u|}\right)\), Eq. 4 becomes
Observe that Eq. 5 consists of two terms, one with the negative Laplacian of the curvature of the level lines of u; the other a fidelity term, which makes sure that the evolved image does not stray too far away from the original image f. Equation 5 has been shown to perform well for denoising [4]. Since the original image f is noisy, and the second term keeps the evolved image close to this noisy image, the denoising itself must be due to the first term; i.e. the negative Laplacian term.
Attention is now restricted to images with oriented texture. With this restricted focus, instead of including the Laplacian of the curvature, Δ K(u), as in Eq. 5, the oriented Laplacian [8] of the curvature can be taken instead. The new equation becomes
Here ζ is the isophote direction of the curvature in the regions of the image where there is a dominant orientation, and η is the gradient direction of the curvature in such regions. An illustration of these directions is shown in Fig. 1.
There are also two conductivity coefficients c _{ ζ } and c _{ η } which determine how much diffusion there is in the isophote and gradient directions respectively.
The values of the second directional derivatives of the curvature in its isophote and gradient directions can be computed without actually calculating the isophote and gradient angles. This is done by using the general formula for the second-order directional derivative of the function K in the direction w=(w _{ x },w _{ y }), i.e.
\(K_{ww} = w_{x}^{2}K_{xx} + 2w_{x}w_{y}K_{xy} + w_{y}^{2}K_{yy},\)
and then substituting the unit gradient and isophote vector directions for w. The unit gradient vector is simply
\(\left(\frac{K_{x}}{\sqrt{K_{x}^{2}+K_{y}^{2}}},\frac{K_{y}}{\sqrt{K_{x}^{2}+K_{y}^{2}}}\right)\)
and the unit isophote vector is
\(\left(\frac{-K_{y}}{\sqrt{K_{x}^{2}+K_{y}^{2}}},\frac{K_{x}}{\sqrt{K_{x}^{2}+K_{y}^{2}}}\right).\)
The formulae for the second directional derivative of the curvature in the isophote direction (ζ) and its gradient direction (η) are thus
\(K_{\zeta\zeta} = \frac{K_{y}^{2}K_{xx} - 2K_{x}K_{y}K_{xy} + K_{x}^{2}K_{yy}}{K_{x}^{2}+K_{y}^{2}}\)    (7)
and
\(K_{\eta\eta} = \frac{K_{x}^{2}K_{xx} + 2K_{x}K_{y}K_{xy} + K_{y}^{2}K_{yy}}{K_{x}^{2}+K_{y}^{2}}.\)    (8)
The expressions for the two directional derivatives in Eqs. 7 and 8 can be substituted into Eq. 6. Recall that the function K represents the curvature of the cartoon component u.
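For concreteness, the angle-free computation just described can be sketched as follows (finite differences via numpy.gradient; the small constant guarding the denominator against division by zero in flat regions is our own choice):

```python
import numpy as np

def directional_second_derivs(K):
    """Second directional derivatives of K along its own gradient (eta) and
    isophote (zeta) directions, as in Eqs. 7 and 8, computed from finite
    differences without ever extracting the isophote or gradient angles."""
    Ky, Kx = np.gradient(K)             # axis 0 = y, axis 1 = x
    Kyy, _ = np.gradient(Ky)
    Kxy, Kxx = np.gradient(Kx)
    denom = Kx ** 2 + Ky ** 2 + 1e-12   # guard against flat regions
    K_zeta = (Ky ** 2 * Kxx - 2 * Kx * Ky * Kxy + Kx ** 2 * Kyy) / denom
    K_eta = (Kx ** 2 * Kxx + 2 * Kx * Ky * Kxy + Ky ** 2 * Kyy) / denom
    return K_zeta, K_eta

# Sanity check on K = x^2: the gradient direction is x, so away from the
# boundary K_eta recovers K_xx = 2 while K_zeta vanishes
x = np.arange(1.0, 21.0)
X, _ = np.meshgrid(x, x)
K_zeta, K_eta = directional_second_derivs(X ** 2)
```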
In our experiments for this paper, better results were obtained in practice by using the normalized isophote orientation estimates of the original image f instead of those of the curvature image K. One explanation for this improvement is the sensitivity to noise of the formulae in Eqs. 7 and 8.
We now explain how estimates of the normalized isophote and gradient directions are extracted from the original noisy image using the linear structure tensor (Section 2.2.1), as well as how this image is separated into oriented and nonoriented regions (Section 2.2.2). This separation by region is important to decide whether anisotropic or isotropic diffusion should be performed at each image pixel, and the directional estimates are important to determine which directional diffusion parameters are used for anisotropic diffusion at each pixel in the oriented region. After this explanation in Section 2.2, we return to the derivation of the proposed model in Section 2.3 using these extracted gradient and isophote orientation estimates.
Orientation and coherence calculation
The proposed algorithm depends on the noise-resistant computation of orientation at each pixel in the image, in addition to the detection of the image regions that are coherent, i.e. roughly consisting of one orientation. We considered two methods by which such calculations could be performed robustly in the presence of noise. Both use the structure tensor, the outer product of the image gradient with itself. The first is called the linear structure tensor, which corresponds to a linear diffusion (Gaussian blurring) of the structure tensor elements; the second is called the nonlinear structure tensor, an extension of the linear structure tensor which corresponds to nonlinear diffusion of the structure tensor entries. Only the linear structure tensor is described here, since it was found to be more computationally efficient than the nonlinear structure tensor while giving results of similar quality. The linear structure tensor is now described, along with methods of determining the orientation coherence [9]. Unless otherwise specified, "orientation" refers to the gradient orientation, i.e. the direction in which the image intensity changes the most.
Linear structure tensor
The structure tensor is defined as the outer product of the image gradient vector with itself. Supposing that the image is f, then the structure tensor J at each pixel is defined as
\(J = \nabla f\,\nabla f^{T} = \begin{pmatrix} f_{x}^{2} & f_{x}f_{y} \\ f_{x}f_{y} & f_{y}^{2} \end{pmatrix}.\)
It was found that better denoising results were obtained for the proposed method by using the following derivative filters (see e.g. [10]) to determine f _{ x } and f _{ y }
and
To further increase robustness to noise, the resulting structure tensor J is blurred element-wise with a Gaussian filter \(G_{\sigma}\) of standard deviation σ, to obtain \(J_{\sigma}=G_{\sigma}\ast J\). Then the gradient direction at each pixel, \(\theta_{i,j}\), is calculated as the angle between the eigenvector \(\overrightarrow {w}_{1}\) of \(J_{\sigma}\) corresponding to its larger eigenvalue \(\lambda_{1}\) and the horizontal axis; or more simply, if we consider \(\overrightarrow {w}_{1}\) of \(J_{\sigma}\) to be a vector in the complex plane,
\(\theta_{i,j} = \arg\left(\overrightarrow{w}_{1}\right).\)
The isophote vector \(\overrightarrow {f_{iso}} = (f_{iso,x},f_{iso,y})\) is computed as the unit vector perpendicular to \(\theta_{i,j}\), thus at an angle \(\frac {\pi }{2}\) radians greater than \(\theta_{i,j}\) from the horizontal axis; i.e. if we consider the isophote vector as a vector in the complex plane,
\(f_{iso,x} + i\,f_{iso,y} = e^{i\left(\theta_{i,j}+\frac{\pi}{2}\right)}.\)
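The structure-tensor pipeline of this subsection can be sketched as follows; the closed-form double-angle expression used below for the dominant eigenvector of the 2×2 symmetric tensor is equivalent to the complex-plane computation described in the text, and plain central differences stand in for the derivative filters of [10]:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def isophote_field(f, sigma=2.0):
    """Noise-resistant isophote directions from the linear structure tensor:
    the entries of J = grad(f) grad(f)^T are blurred element-wise with a
    Gaussian G_sigma, the dominant eigenvector of J_sigma gives the gradient
    orientation theta, and the isophote is the perpendicular unit vector."""
    fy, fx = np.gradient(f)                   # central differences
    Jxx = gaussian_filter(fx * fx, sigma)     # element-wise smoothing
    Jxy = gaussian_filter(fx * fy, sigma)
    Jyy = gaussian_filter(fy * fy, sigma)
    # dominant eigenvector angle of a 2x2 symmetric matrix:
    # theta = 0.5 * atan2(2*Jxy, Jxx - Jyy)
    theta = 0.5 * np.arctan2(2 * Jxy, Jxx - Jyy)
    iso = theta + np.pi / 2                   # isophote is orthogonal to gradient
    return np.cos(iso), np.sin(iso)

# Vertical stripes: the gradient is horizontal, so isophotes point vertically
x = np.arange(64.0)
X, _ = np.meshgrid(x, x)
iso_x, iso_y = isophote_field(np.sin(0.3 * X))
```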
Orientation coherence and oriented region determination
For the proposed algorithm, it is necessary to separate the image into oriented and nonoriented regions, so that different variational models can be applied to each of the two region types. A region is defined to be nonoriented when its orientation coherence is less than a predetermined threshold; this coherence function is a measure of how uniform gradient directions are around a pixel. In [11], the coherence of f is measured directly using a small window W around each pixel by the formula
where θ _{ i,j } is the orientation calculated from the linear structure tensor J at pixel (i,j) (which can be found because the tensor determines the gradient vector at each pixel); generally W is chosen to be 7 × 7 pixels. In this paper, this formula is used to determine the orientation coherence at each pixel, thus allowing oriented and nonoriented regions to be separated.
The threshold used for the coherence of the orientations of f is dependent on the image. Call this parameter coher _{ thresh }; we set coher _{ thresh }=15 for all experiments in this paper.
Now define the oriented region to be Ω _{ O }, the union of pixels in the image where there is a dominant orientation in f. Similarly, we define the nonoriented region to be Ω _{ NO }, the union of pixels in the image where there is no such dominant orientation.
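Since the exact windowed coherence formula of [11] is not reproduced above, the region split can only be sketched; the stand-in coherence measure below (the magnitude of the window sum of the double-angle unit vectors \(e^{2i\theta_{i,j}}\), which is large when the orientations in W are mutually aligned) and its pairing with the threshold are our assumptions:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def oriented_region_mask(theta, window=7, coher_thresh=15.0):
    """Split the image into oriented (Omega_O) and non-oriented (Omega_NO)
    regions by thresholding an orientation-coherence measure over a window W
    (7 x 7, as in the text). Stand-in coherence: |sum over W of exp(2i*theta)|,
    ranging from 0 (uniformly spread angles) to window**2 (all aligned)."""
    c = uniform_filter(np.cos(2 * theta), window) * window ** 2
    s = uniform_filter(np.sin(2 * theta), window) * window ** 2
    coherence = np.hypot(c, s)
    return coherence >= coher_thresh      # True on the oriented region Omega_O

# A constant orientation field is fully coherent; random angles mostly are not
uniform_mask = oriented_region_mask(np.zeros((32, 32)))
rng = np.random.default_rng(1)
random_mask = oriented_region_mask(rng.uniform(0.0, np.pi, (64, 64)))
```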
The oriented Laplacian Osher-Solé-Vese model
After Eqs. 7 and 8 in Section 2.1, we stated that using the normalized isophote orientation estimates of the original image yielded better denoising results in our experiments. In Section 2.2, we then explained how these isophote orientation estimates are computed using the linear structure tensor. Now, using the isophote orientation vectors \((f_{iso,x},f_{iso,y})\) at each pixel, we obtain the following noise-resistant approximation of the second directional derivative of the curvature in the isophote direction:
\(K_{\zeta\zeta} \approx f_{iso,x}^{2}K_{xx} + 2f_{iso,x}f_{iso,y}K_{xy} + f_{iso,y}^{2}K_{yy},\)    (11)
which is subsequently substituted in the oriented Laplacian expression in Eq. 6. Note that the denominators in Eqs. 7 and 8 do not appear since we are dealing with the normalized isophote estimates, as opposed to Eqs. 7 and 8, where the partial derivatives of K are unnormalized.
For the coefficient c _{ η } in Eq. 6, the usual Perona-Malik diffusivity function
could be chosen (with K _{ d } set to 10 after experimentation). However, we found that simply setting c _{ η } to zero gave similar results and was simpler; consequently, the expression in Eq. 12 was not used. With the coefficient c _{ η } set to zero, there is no diffusion, and hence no denoising, in the image gradient direction. To promote smoothing in the image isophote direction, the coefficient c _{ ζ } is set equal to 1. Therefore Eq. 6 becomes
and substituting the approximation of Eq. 11 for K _{ ζ ζ } yields the final equation for the iterative solution of the proposed decomposition model
where (ζ _{ x },ζ _{ y })=(f _{ i s o,x },f _{ i s o,y }) is a unit vector defined for each pixel in the image pointing in the isophote direction.
Within the oriented region Ω _{ O }, Eq. 14 is used, whereas within the non-oriented region Ω _{ NO }, the ordinary evolution equation for Osher-Solé-Vese decomposition, as in Eq. 4, is utilized. Each equation involves a weighting parameter λ, which becomes λ _{ O } in Ω _{ O } and λ _{ NO } in Ω _{ NO }. In practice, each model is evolved over the entire image, and only after convergence is the result restricted to either Ω _{ O } or Ω _{ NO }, as initially defined by the coherence threshold. This avoids noticeable artifacts at the boundaries of the regions, since the image itself effectively pads each region for its model of diffusion along internal interfaces.
We call the decomposition model defined by the PDE of Eq. 14 the Oriented Laplacian Osher-Solé-Vese (OLOSV) decomposition model.
Denoising by mean curvature motion
A denoising model similar to the OLOSV model developed in Eq. 14 above is the mean curvature motion (MCM) denoising model. Both models are based on the idea of denoising along edges and not across them. However, it will be seen that MCM denoising does not lend itself readily to a dual-norm formulation, and thus cannot benefit from the better-quality results obtained from such a formulation.
The equation defining mean curvature motion denoising is
\(u_{t} = u_{\zeta\zeta},\)    (15)
with \(u_{\zeta\zeta}\) the second-order directional derivative of u in the isophote direction.
To make a fair comparison with the OSV and OLOSV models in Eqs. 4 and 14, respectively, a fidelity term is added to the basic MCM of Eq. 15, along with a parameter λ that controls the relative weighting of the diffusion and fidelity terms. Thus the version of MCM implemented in this paper for comparison with OSV and OLOSV is
The implementation of mean curvature motion that we used in our experiments was based on the following expansion:
\(u_{\zeta\zeta} = \zeta_{x}^{2}u_{xx} + 2\zeta_{x}\zeta_{y}u_{xy} + \zeta_{y}^{2}u_{yy},\)
with \((\zeta_{x},\zeta_{y})=(f_{iso,x},f_{iso,y})\), this isophote direction being measured in a noise-resistant manner, as in Section 2.2.1. We found from our experiments that using these image isophotes gave better denoising results.
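For concreteness, MCM with an added fidelity term can be sketched as below; since Eq. 16 itself is not reproduced here, the exact weighting of the fidelity term, the time step and the iteration count are illustrative assumptions:

```python
import numpy as np

def mcm_denoise(f, lam=0.5, dt=0.1, n_iter=100, eps=1e-8):
    """Mean curvature motion with a fidelity term, in the spirit of Eq. 16:
    u_t = u_zz + lam*(f - u), where u_zz is the second directional derivative
    of u along its isophote direction (here taken from the evolving u itself,
    rather than from the pre-computed f_iso of the text)."""
    u = f.copy()
    for _ in range(n_iter):
        uy, ux = np.gradient(u)
        uyy, _ = np.gradient(uy)
        uxy, uxx = np.gradient(ux)
        u_zz = (uy ** 2 * uxx - 2 * ux * uy * uxy + ux ** 2 * uyy) \
            / (ux ** 2 + uy ** 2 + eps)
        u = u + dt * (u_zz + lam * (f - u))
    return u

# Illustrative usage on a synthetic noisy square
rng = np.random.default_rng(0)
clean = np.zeros((32, 32)); clean[8:24, 8:24] = 1.0
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
denoised = mcm_denoise(noisy)
```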
Explicit time-stepping
Equation 14 (for OLOSV) and Eq. 16 (for MCM) are each solved separately via explicit time-stepping (with the time step chosen by experimentation to be Δ t=0.002).
As with other decomposition/denoising schemes in which the texture/noise variance is known, the value of λ is dynamically updated with the iteration number using a method based on gradient projection. Specifically, in oriented regions (Ω _{ O }) one dynamically updated coefficient λ _{ O } is used, while in non-oriented regions a separate dynamically updated coefficient λ _{ NO } is calculated. The final formula obtained for λ _{ O } is
In nonoriented regions, the expression for λ _{ NO } at each iteration is
The above two equations are derived from Eqs. 14 and 4, respectively, by assuming we have reached equilibrium, so that u _{ t }=0, and then multiplying each side by u−f and integrating over the image domain Ω.
For MCM denoising, a formula similar to Eq. 18 is used to determine λ _{ O } at each iteration, also based on gradient projection
Numerical implementation
Two sets of experiments were conducted using the OSV and OLOSV methods. The first involved an explicit time-stepping solution of the two models, along with the mean curvature denoising model. The second set of experiments was based on the dual formulation of total variation minimization of Chambolle [12], which was further generalized in [13] to deal with the OSV model, and to an even more general TV-Hilbert model in [14]
with the initial condition \(p^{0}=0\).
Then, similar to [14], \(f+\frac {1}{\lambda _{NO}}\Delta \text {div}p^{n}\) converges to the u component of the decomposition in Ω _{ NO }, the nonoriented section of the image, which implies that \(\frac {1}{\lambda _{NO}}\Delta \text {div}p^{n}\) converges to the component v=f−u in Ω _{ NO }. Similarly, in the oriented section Ω _{ O } of the image, \(f+\frac {1}{\lambda _{O}}\Delta _{\zeta \zeta }\text {div}p^{n}\) converges to the u component of the decomposition and thus \(\frac {1}{\lambda _{O}}\Delta _{\zeta \zeta }\text {div}p^{n}\) converges to the component v=f−u in Ω _{ O }.
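For reference, Chambolle's fixed-point projection [12] for the plain ROF (\(L^{2}\)) case is sketched below; [13, 14] generalize this iteration to the TV-Hilbert setting by inserting the operator K in front of \(\text{div}\,p\), which is how the \(\Delta\,\text{div}\,p^{n}\) and \(\Delta_{\zeta\zeta}\,\text{div}\,p^{n}\) terms above arise. The parameters here are illustrative:

```python
import numpy as np

def grad(u):
    """Forward differences with Neumann boundary (last difference zero)."""
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:, :-1] = u[:, 1:] - u[:, :-1]
    gy[:-1, :] = u[1:, :] - u[:-1, :]
    return gx, gy

def div(px, py):
    """Discrete divergence, the negative adjoint of grad above."""
    d = np.zeros_like(px)
    d[:, 0] = px[:, 0]
    d[:, 1:-1] = px[:, 1:-1] - px[:, :-2]
    d[:, -1] = -px[:, -2]
    d[0, :] += py[0, :]
    d[1:-1, :] += py[1:-1, :] - py[:-2, :]
    d[-1, :] += -py[-2, :]
    return d

def chambolle_rof(f, lam=0.15, tau=0.125, n_iter=200):
    """Chambolle's projection algorithm for ROF: iterate
    p <- (p + tau*grad(div p - f/lam)) / (1 + tau*|grad(div p - f/lam)|),
    then u = f - lam*div p (and v = f - u)."""
    px = np.zeros_like(f); py = np.zeros_like(f)
    for _ in range(n_iter):
        gx, gy = grad(div(px, py) - f / lam)
        mag = np.sqrt(gx ** 2 + gy ** 2)
        px = (px + tau * gx) / (1 + tau * mag)
        py = (py + tau * gy) / (1 + tau * mag)
    return f - lam * div(px, py)

# Illustrative usage on a synthetic noisy square
rng = np.random.default_rng(0)
clean = np.zeros((32, 32)); clean[8:24, 8:24] = 1.0
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
u = chambolle_rof(noisy)
```

Such implicit (projection-based) schemes converge in far fewer iterations than explicit time-stepping, which is the practical appeal of the dual formulation.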
When used in conjunction with the OLOSV model, such an implicit numerical method gives rise to a new denoising method, which we call the Implicit Oriented Laplacian Osher-Solé-Vese (IOLOSV) method.
TV-Hilbert space formulation
The proposed IOLOSV model presented in this paper differs from the general TV-Hilbert model in [14] and the TV-Gabor model in [15] in that, in the proposed model, the operator K is spatially varying, whereas in the two other models the operator K is constant throughout the entire image and not data-dependent.
For two discrete zero-mean image functions f and g, Aujol and Gilboa [14] defined the inner product \(\langle\cdot,\cdot\rangle\) on a Hilbert space \(\mathcal {H}\) as
\(\langle f,g\rangle_{\mathcal{H}} = \langle f,Kg\rangle_{L^{2}},\)    (22)
where \(\langle f,Kg\rangle_{L^{2}} = \sum _{i,j} f_{i,j}(Kg)_{i,j}\). Then the Sobolev space \(H^{-1}\) used in the model of Osher, Solé and Vese fits into the TV-Hilbert space framework, with the operator \(K=-\Delta^{-1}\), the negative of the inverse Laplacian operator.
The proposed IOLOSV method can also be made to fit into the TV-Hilbert space decomposition framework, with the operator K spatially varying and equal to \(K = -\Delta_{\zeta\zeta}^{-1}\), the negative of the inverse of the spatially varying second-order directional derivative operator (in the direction of the image isophote). We denote the Hilbert space thus obtained by \(H^{-1}_{\zeta}\), in order to emphasize both its link with the Sobolev space \(H^{-1}\) and its dependence on the spatially varying isophote direction ζ(x,y).
A flow chart is included below illustrating the overall procedure for denoising with the OLOSV and OSV models in the oriented and nonoriented parts of the image, respectively (Fig. 2).
Calculation of iterationdependent fidelity parameter
It is shown in [14] that for an M×N pixel image n consisting only of white Gaussian noise of standard deviation σ, it can be assumed that
\(\|n\|_{\mathcal{H}}^{2} \approx MN\sigma^{2}C_{\mathcal{H}},\)    (23)
where \(C_{\mathcal {H}}\) is a constant that only depends on the Hilbert space \(\mathcal {H}\) over which we are taking the norm of the noise image n. Its calculation will be described in the next subsection.
The same procedure found in [14] is used in this paper to dynamically vary the fidelity parameters λ _{ O } and λ _{ NO }. This procedure is based on attempting to solve the problem
In practice, it was found in [14] that the problem posed by Eq. 24 leads to over-smoothing of u; instead, a positive factor \(\alpha _{\mathcal {H}} < 1\) was introduced in [14] on the right-hand side of that equation to form the new problem
In both equations, the noise component v is defined to be equal to f−u, where f is the original noisecontaminated image and u is the obtained cartoon component.
Based on our experimentation with the IOLOSV coefficients, both λ _{ O } and λ _{ NO } are initialized to 0.15, and their values are then updated at each iteration using the following method. At every iteration, \(\|v_{O,n}\|_{H^{-1}_{\zeta}}\) and \(\|v_{NO,n}\|_{H^{-1}}\) are estimated by calculating \(\|v\|_{L^{2}}\), easily done by taking the square root of the sum of squares of the v component of the decomposition, and then multiplying by the factor \(\sqrt{C_{H^{-1}}}\) for the non-oriented section of the image and \(\sqrt{C_{H_{\zeta}^{-1}}}\) for the oriented part. As in [14], given \(\lambda_{n}\) and \(\|v_{n}\|_{\mathcal{H}}\), \(\lambda_{n+1}\) is obtained from these quantities by the formula
In the above equation, \(\|v_{n}\|_{\mathcal{H}}\) is the norm of v _{ n } in the Hilbert space \(\mathcal {H}\); the calculation of this norm is detailed below in Section 5.3. Then v is calculated using this new value of λ for the next iteration. The procedure is implemented separately in Ω _{ O }, where \(\mathcal {H} = H^{-1}_{\zeta}\), and in Ω _{ NO }, where \(\mathcal {H} = H^{-1}\). Moreover, the constant \(\alpha _{\mathcal {H}}\) is chosen separately in each of Ω _{ O } and Ω _{ NO }, with the selected values being 0.6 and 0.98, respectively, based on experimental findings.
For Ω _{ O }, we slightly alter the calculation of the numerator of Eq. 26 to ensure that enough noise is extracted from oriented parts of the image. We calculate the \(L^{2}\)-norm of the noise in the non-oriented region Ω _{ NO } and then compute the expected \(L^{2}\)-norm of the noise in the oriented region Ω _{ O } by scaling this norm according to the relative sizes of the two regions, under the assumption that the noise power is spatially invariant across both regions. Finally, in the denominator of Eq. 26, we use the maximum of the actual measured \(L^{2}\)-norm of the noise in Ω _{ O } and 0.7 times the expected \(L^{2}\)-norm of this noise in this region, based on that in Ω _{ NO }.
Execution of the proposed algorithm is stopped when the numerator and denominator of the multiplicative factor in Eq. 26 become close to one another. More specifically, for the nonoriented region Ω _{ NO }, iterations are stopped once the numerator becomes greater than 0.95 times the denominator, while for the oriented region Ω _{ O } we stop iterating once the numerator becomes greater than 0.99 of the denominator.
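Since Eq. 26 itself is not reproduced above, the dynamic update and stopping test can only be sketched; the multiplicative rule below (rescale λ by the ratio of the measured noise norm to the target norm of Eq. 25) is an assumption consistent with the description of the numerator, denominator and stopping thresholds:

```python
import numpy as np

def update_lambda(lam, v_norm_H, sigma, n_pixels, C_H, alpha_H, stop_ratio=0.95):
    """One dynamic update of a fidelity weight, in the spirit of Eq. 26.
    Target noise norm per Eqs. 23 and 25: ||v||_H ~ sqrt(alpha_H*C_H*MN)*sigma.
    The multiplicative form of the update is an assumption (Eq. 26 is not
    reproduced here); stop_ratio is 0.95 for Omega_NO and 0.99 for Omega_O."""
    target = np.sqrt(alpha_H * C_H * n_pixels) * sigma
    stop = v_norm_H > stop_ratio * target   # numerator close to denominator
    return lam * v_norm_H / target, stop

# If the measured noise norm equals the target, lambda is unchanged and the
# iteration stops; if too little noise has been extracted, lambda shrinks
lam_same, stop_same = update_lambda(0.15, 10.0, 1.0, 100, 1.0, 1.0)
lam_less, stop_less = update_lambda(0.15, 5.0, 1.0, 100, 1.0, 1.0)
```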
Calculation of constant \(C_{\mathcal {H}}\)
In [13] it is shown that when \(\mathcal {H} = H^{-1}\), as is the case for the OSV model,
We first observe that the denominator of the expression for \(C_{H^{-1}}\) corresponds to the discretization of the Laplacian operator in the frequency domain. Therefore, similar to the \(H^{-1}\) case and following the methodology in [13], a similar summation expression is obtained for \(C_{H^{-1}_{\zeta}}\), with the denominator of the summation now corresponding to the discretization of the oriented Laplacian in the frequency domain instead of that of the plain Laplacian. Since this corresponds to only one directional derivative, instead of the two orthogonal directional derivatives of the usual Laplacian operator, the denominator will be half the value of that for the usual Laplacian. Therefore, we will have
\(C_{H^{-1}_{\zeta}} = 2\,C_{H^{-1}}.\)
Calculation of \(\mathcal {H}\)-norms
Recall from elementary Hilbert space theory that the norm of a function on a Hilbert space is defined as the square root of the inner product of the function with itself. From the inner product definition of Eq. 22, the square of the \(\mathcal {H}\)-norm of v, \(\|v\|_{\mathcal{H}}^{2}\), is thus defined by
\(\|v\|_{\mathcal{H}}^{2} = \langle v,Kv\rangle_{L^{2}}.\)
In the case of the OSV model, where \(\mathcal {H}=H^{-1}\) and \(K=-\Delta^{-1}\),
\(\|v\|_{H^{-1}}^{2} = \langle v,-\Delta^{-1}v\rangle_{L^{2}}.\)
This equation was shown in [13] to simplify, based on Parseval’s identity, to
with \(\mathcal {F}(v)\) being the discrete Fourier transform of v, and the image assumed to be M×N pixels in size. From Eqs. 23 and 28 above, we find that
both of these squared-norm values being used in the dynamic update of the fidelity parameter λ for their respective Hilbert spaces.
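One plausible discretization of the Parseval-based norm computation of Eq. 28 is sketched below; since that equation is not reproduced above, the five-point Laplacian symbol and the normalization convention are our assumptions:

```python
import numpy as np

def h_minus1_norm_sq(v):
    """Parseval-style computation of ||v||^2 in H^{-1}: divide the power
    spectrum of v by the magnitude of the frequency-domain symbol of the
    five-point discrete Laplacian. The zero frequency is skipped, which is
    consistent with v being a zero-mean noise component."""
    M, N = v.shape
    p = np.arange(M).reshape(-1, 1)
    q = np.arange(N).reshape(1, -1)
    lap = 4.0 - 2.0 * np.cos(2 * np.pi * p / M) - 2.0 * np.cos(2 * np.pi * q / N)
    lap[0, 0] = np.inf                    # drop the DC term
    return float(np.sum(np.abs(np.fft.fft2(v)) ** 2 / lap) / (M * N))

# A zero-mean test pattern; the squared norm scales quadratically with amplitude
v = np.ones((8, 8)); v[:4] = -1.0
nsq = h_minus1_norm_sq(v)
```

Dividing by the Laplacian symbol weights low frequencies more heavily, which is precisely the property of the \(H^{-1}\) norm credited in the abstract for the OSV model's denoising performance.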
Non-local means
To situate this paper within the forefront of research on image denoising, we also compare the proposed OLOSV algorithm with non-local means, a more recent addition to the field. Non-local means denoising, introduced by Buades et al. in [16], is known to give good denoising results. It does so by comparing image patches from different parts of an image and performing a weighted average over many such patches, with weights based on the similarity of their pixel intensities.
From [16], the non-local (NL) mean of a function \(u:\Omega \rightarrow \mathbb {R}\) is defined as
\(NL(u)(x) = \frac{1}{C(x)}\int_{\Omega}\omega(x,y)\,u(y)\,dy,\)
where ω(x,y) is a weighting function measuring the similarity between the image patches centred at pixels x and y, and \(C(x)=\int_{\Omega}\omega(x,y)\,dy\) is the normalizing factor.
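A direct, unoptimized sketch of this weighted patch average follows; the Gaussian weight on the mean squared patch difference is a simplification of the weight function of [16] (which uses a Gaussian-weighted patch distance), and the filtering parameter h is illustrative:

```python
import numpy as np

def nl_means(f, patch=5, search=11, h=0.2):
    """Non-local means: each pixel becomes a weighted average of the pixels
    in a search window, weighted by the similarity of the surrounding patches
    (5x5 patches and 11x11 search neighborhoods, as in the experiments)."""
    pr, sr = patch // 2, search // 2
    fp = np.pad(f, pr, mode="reflect")
    out = np.zeros_like(f)
    M, N = f.shape
    for i in range(M):
        for j in range(N):
            p0 = fp[i:i + patch, j:j + patch]       # patch around (i, j)
            acc, wsum = 0.0, 0.0
            for a in range(max(0, i - sr), min(M, i + sr + 1)):
                for b in range(max(0, j - sr), min(N, j + sr + 1)):
                    p1 = fp[a:a + patch, b:b + patch]
                    w = np.exp(-np.mean((p0 - p1) ** 2) / h ** 2)
                    acc += w * f[a, b]
                    wsum += w
            out[i, j] = acc / wsum                  # normalized weighted mean
    return out

# Illustrative usage on a small noisy step edge
rng = np.random.default_rng(0)
clean = np.zeros((16, 16)); clean[:, 8:] = 1.0
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
denoised = nl_means(noisy)
```

Production implementations vectorize or restrict the patch comparisons; the quadruple loop above is kept only for clarity and explains the high computational cost noted later in the comparison with IOLOSV.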
Test images
The test images used in this paper include several well-known images with significant oriented texture: barbara (512 × 512 pixels), fingerprint (256 × 240 pixels), and bridges (1024 × 683 pixels), each with a bit depth of 8 bits.
Results obtained for the three test images barbara, bridges and fingerprint are presented in the next section; these results demonstrate that, in general, IOLOSV yields denoised images with higher signal-to-noise ratios than both the MCM and OSV algorithms, as well as visually superior results.
Experimental results
Experimental method
To each of the three original test images, additive white Gaussian noise (AWGN) was added with a standard deviation σ=15.
Then each of the four local iterative denoising algorithms (OSV, OLOSV, IOLOSV and MCM) was run on the noisy test images to obtain a denoised image; the signal-to-noise ratio was then calculated against the noise-free original image. We also ran the non-local means (NLM) algorithm on all three images. For non-local means, 5 × 5 pixel image patches were used to calculate the weight function measuring similarity between patches centred at each pixel in 11 × 11 neighborhoods.
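The SNRs reported below are computed against the noise-free original; the standard decibel definition assumed here (the paper does not restate it) is:

```python
import numpy as np

def snr_db(clean, denoised):
    """Signal-to-noise ratio in dB of a denoised image against the
    noise-free original (standard definition assumed)."""
    clean = np.asarray(clean, dtype=float)
    err = clean - np.asarray(denoised, dtype=float)
    return 10.0 * np.log10(np.sum(clean ** 2) / np.sum(err ** 2))

# Example: a constant image of amplitude 2 with unit error at every pixel
# has a signal/error power ratio of 4, i.e. about 6.02 dB
example = snr_db(2.0 * np.ones((10, 10)), 2.0 * np.ones((10, 10)) + 1.0)
```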
Of the four iterative algorithms, only the proposed OLOSV model with implicit time-stepping (IOLOSV) had a well-defined stopping condition; the other three algorithms were run until a peak in the calculated signal-to-noise ratio (SNR) was detected. This procedure could actually artificially inflate the SNRs of these three algorithms with respect to IOLOSV, since in practice the SNR of an image being denoised cannot be calculated without the noise-free original at our disposal.
As already mentioned in the description of the proposed IOLOSV method in Section 5.1, the values of the fidelity parameter λ for the oriented and nonoriented regions of the image are updated dynamically for each of these regions. We stop the denoising process for the IOLOSV method separately for each of the oriented and nonoriented image regions, based on when the dynamic value of λ for each of these regions stabilizes.
Results and discussion
Table 1 gives the SNRs of the denoised results from Osher-Solé-Vese decomposition (OSV), oriented Laplacian OSV with explicit time-stepping (OLOSV), mean curvature motion (MCM), oriented Laplacian OSV with implicit time-stepping (IOLOSV), and non-local means (NLM). As can be seen from that table, the NLM SNRs are generally the best, with the IOLOSV and MCM SNRs close behind.
Figures 3, 4, 5, 6, 7 and 8 show the cartoon and noise components obtained from the various denoising algorithms on the three test images, along with the original and noisy images used for testing all of these methods. The appearance of the v component is much less ordered in the OLOSV and IOLOSV results than in those from the other algorithms. Despite the good SNR of NLM, it must be remembered that SNR is a global measure, which does not always accurately capture visual image quality. The noise components for all images processed with NLM exhibit very visible structure from the underlying noiseless image; this is not to be expected in typical white noise, indicating that the denoising results from NLM are not as good as the signal-to-noise ratio would suggest, and not as good as the results from the iterative variational methods.
This is further illustrated in Fig. 9, which depicts a zoomed and contrast-enhanced portion of the noise component obtained from the proposed IOLOSV method and the NLM algorithm. The contrast enhancement is done using the imadjust command found in Matlab®'s Image Processing Toolbox. In the left panel, the NLM result contains visible structure from the original image, which has been removed as part of the noise component; in the right panel, corresponding to the same region of the IOLOSV result, such structures are not visible. Additionally, the NLM result takes longer to generate than the IOLOSV one, due to the high computational complexity of the non-local means algorithm.
Conclusions
The OLOSV decomposition scheme, in both its explicit and implicit forms of implementation, has been found to be extremely good at denoising oriented texture, and is a substantial improvement over the model of Osher, Solé and Vese on which it is based. It also has some other good properties.
For example, one good property of OLOSV is that it can be generalized to images where there are up to two dominant isophote orientations at a point. These orientations can be calculated using the multiple orientation framework in [17]. In regions where there are two orientations present, the model’s evolution equation consists of the sum of two oriented Laplacians with isophote directions ζ _{1} and ζ _{2} and gradient directions η _{1} and η _{2}.
Additionally, it was found from the experiments in this paper that cartoon edges were much less visible in the noise components v of the OLOSV and IOLOSV results when compared with the OSV noise components (e.g. compare Fig. 5 d to Fig. 6 b, d). This is because a directional diffusion was performed close to these cartoon edges (when they were computed as lying in the coherent regions of the image), along the direction of the edges and not perpendicular to them. Unfortunately, there tends to be some slight visible smearing of the noise close to the edges; this could be reduced by including in the coherent-orientation region only the pixels on cartoon edges, and not those around the edges. The detection of such cartoon edges could be implemented with a total-variation-like diffusion, similar to [18].
It may be argued that the oriented Laplacian OSV model used for denoising is very similar to ordinary oriented Laplacian diffusion, which, when the diffusion across edges is set to zero, degenerates to the mean curvature motion denoising model [19] tested in this paper. However, the OLOSV model differs from standard oriented Laplacian diffusion in two main ways:

1. Mathematically, OLOSV is a novel nonlinear diffusion flow based on the Laplacian of the level-set curvature of the cartoon component, instead of the Laplacian of the image itself, and

2. OLOSV is based on a model for image decomposition, and thus variations of it could potentially be used for other applications of image decomposition, e.g. inpainting.
In addition to the above two distinguishing properties of the proposed OLOSV model with respect to mean curvature motion, OLOSV also has two distinct advantages:

1.
The proposed OLOSV model fits into the Hilbert space formulation of [14], and therefore admits an implicit timestepping implementation, which we called IOLOSV in this paper, as opposed to mean curvature motion which does not. This means that the proposed method is far more efficient; this is confirmed by our experiments, that required only on the order of ten iterations for convergence, as opposed to several hundred for mean curvature motion.

2.
The proposed IOLOSV method admits an easy-to-define stopping criterion, allowing the algorithm to terminate automatically when the SNR is at or close to its peak value. Such a stopping criterion is not as easily defined for mean curvature motion, which requires human intervention to determine when the image has in fact been denoised.
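The general shape of such an automatic stopping loop can be sketched as follows. This is a hypothetical sketch, not the paper's criterion: the `step` function, the known noise level `sigma`, and the relative-change tolerance `tol` are all assumptions; the actual SNR-based criterion follows the constrained formulation of Aujol and Gilboa [14].

```python
import numpy as np

def run_until_stopped(f, step, sigma=None, tol=1e-3, max_iters=50):
    """Iterate a user-supplied implicit update `step` starting from the
    noisy image f. Stop when the residual v = f - u reaches the known
    noise level sigma (if given), or when u stops changing."""
    u = f.copy()
    for it in range(max_iters):
        u_new = step(u)
        rel = np.linalg.norm(u_new - u) / (np.linalg.norm(u) + 1e-12)
        u = u_new
        if sigma is not None and (f - u).std() >= sigma:
            break  # residual has absorbed the expected noise power
        if rel < tol:
            break  # iterates have converged
    return u, it + 1
```

With an implicit scheme such as IOLOSV, each `step` is unconditionally stable, so the loop typically exits after on the order of ten iterations.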
Finally, the OLOSV decomposition model can be extended quite easily to colour images by using the colour total variation model of Blomgren and Chan [20]. This remains as future work.
References
1. Y Meyer, Oscillating Patterns in Image Processing and Nonlinear Evolution Equations: The Fifteenth Dean Jacqueline B. Lewis Memorial Lectures (Am. Math. Soc., 2001).
2. R Shahidi, C Moloney, Decorrelating the structure and texture components of a variational decomposition model. IEEE Trans. Image Process. 18(2), 299–309 (2009).
3. LA Vese, SJ Osher, Modeling textures with total variation minimization and oscillating patterns in image processing. J. Sci. Comput. 19(1–3) (2003).
4. SJ Osher, A Solé, LA Vese, Image decomposition and restoration using total variation minimization and the H^{−1} norm. Multiscale Model. Simul. SIAM Interdiscip. J. 1(3), 349–370 (2003).
5. M Bertalmio, G Sapiro, V Caselles, C Ballester, in Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques. Image inpainting (ACM Press/Addison-Wesley, New York, 2000), pp. 417–424. doi:10.1145/344779.344972.
6. D Paquin, D Levy, E Schreibmann, L Xing, Multiscale image registration. Math. Biosci. Eng. 3, 389–418 (2006).
7. L Rudin, S Osher, E Fatemi, Nonlinear total variation based noise removal algorithms. Phys. D 60, 259–268 (1992).
8. D Tschumperlé, PDE-based regularization of multivalued images and applications. PhD thesis, Université de Nice-Sophia Antipolis (2002).
9. J Weickert, Coherence-enhancing diffusion filtering. Int. J. Comput. Vis. 31(2–3), 111–127 (1999).
10. L Florack, R Duits, G Jongbloed, MC van Lieshout, L Davies, Mathematical Methods for Signal and Image Analysis and Representation (Springer, 2012). https://books.google.ca/books?id=oby5nrg72iQC. Accessed 22 Aug 2016.
11. R Terebes, O Lavialle, P Baylou, M Borda, Mixed anisotropic diffusion. ICPR 3, 760–763 (2002). doi:10.1109/ICPR.2002.1048102.
12. A Chambolle, An algorithm for total variation minimization and applications. J. Math. Imaging Vis. 20, 89–97 (2004).
13. JF Aujol, A Chambolle, Dual norms and image decomposition models. Int. J. Comput. Vis. 63(1), 85–104 (2005).
14. JF Aujol, G Gilboa, Constrained and SNR-based solutions for TV-Hilbert space image denoising. J. Math. Imaging Vis. 26(1–2), 217–237 (2006). doi:10.1007/s10851-006-7801-6.
15. JF Aujol, G Gilboa, T Chan, S Osher, in Proceedings of Variational, Geometric, and Level Set Methods in Computer Vision 2005, LNCS Vol. 3752. Structure-Texture decomposition by a TV-Gabor model (Springer, Berlin, 2005), pp. 85–96.
16. A Buades, B Coll, JM Morel, in Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), 2. A Non-Local Algorithm for Image Denoising (IEEE Computer Society, Washington, 2005), pp. 60–65. doi:10.1109/CVPR.2005.38.
17. T Aach, C Mota, I Stuke, M Muhlich, E Barth, Analysis of superimposed oriented patterns. IEEE Trans. Image Process. 15(12), 3690–3700 (2006).
18. R Shahidi, C Moloney, in Proceedings of the International Conference on Image Processing. Variational Textured Image Decomposition with Improved Edge Segregation (IEEE, Atlanta, 2006).
19. F Catte, PL Lions, JM Morel, T Coll, Image selective smoothing and edge detection by nonlinear diffusion. SIAM J. Numer. Anal. 29(1), 182–193 (1992).
20. P Blomgren, TF Chan, Color TV: total variation methods for restoration of vector-valued images. IEEE Trans. Image Process. 7(3), 304–309 (1998).
Acknowledgements
The authors would like to acknowledge the partial support of this work by a Natural Sciences and Engineering Research Council (NSERC) of Canada Discovery Grant to Cecilia Moloney.
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Shahidi, R., Moloney, C. Introducing oriented Laplacian diffusion into a variational decomposition model. EURASIP J. Adv. Signal Process. 2016, 116 (2016). https://doi.org/10.1186/s13634-016-0415-2
Keywords
 Image decomposition
 Variational methods
 Image processing
 OsherSoléVese model
 Oriented Laplacian
 Image denoising