
  • Research Article
  • Open Access

Image Variational Denoising Using Gradient Fidelity on Curvelet Shrinkage

EURASIP Journal on Advances in Signal Processing 2010, 2010:398410

https://doi.org/10.1155/2010/398410

Received: 27 December 2009

Accepted: 7 June 2010

Published: 30 June 2010

Abstract

A new variational image model is presented for image restoration using a combination of the curvelet shrinkage method and the total variation (TV) functional. In order to suppress the staircasing effect and curvelet-like artifacts, we use multiscale curvelet shrinkage to compute an initial estimated image, and then we propose a new gradient fidelity term, designed to force the gradients of the desired image to be close to the curvelet approximation gradients. We then introduce the Euler-Lagrange equation and investigate its mathematical properties. To improve the preservation of edge details and texture, the spatially varying parameters are adaptively estimated during the iterations of the gradient descent flow algorithm. Numerical experiments demonstrate that our proposed method performs well in alleviating both the staircasing effect and curvelet-like artifacts, while preserving fine details.

Keywords

  • Wavelet Shrinkage
  • Total Variational Model
  • Fidelity Term
  • Staircase Effect
  • Curvelet Coefficient

1. Introduction

Image denoising is an important preprocessing step in many computer vision tasks. The tools for attacking this problem come from computational harmonic analysis (CHA), variational approaches, and partial differential equations (PDEs) [1]. The major concern in these image denoising models is to preserve important image features, such as edges and texture, while removing noise.

In the direction of multiscale geometrical analysis (MGA), shrinkage algorithms based on CHA tools, such as contourlets [2] and curvelets [3–5], are very important in image denoising because they are simple, computationally efficient, and have promising properties for singularity analysis. The pseudo-Gibbs artifacts caused by shrinkage methods based on the Fourier transform and wavelets can therefore be at least partially overcome by MGA-based methods. However, MGA-based shrinkage methods still produce some curve-like artifacts [6].

Algorithms designed from variational and PDE models are free from the above shortcomings of MGA but incur a heavy computational burden, which makes them unsuitable for time-critical applications. In addition, PDE-based algorithms tend to produce a staircasing effect [7], although they can achieve a good trade-off between noise removal and edge preservation. For instance, the total variation (TV) minimization method [8] suffers from undesirable drawbacks such as the staircasing effect and loss of texture, although it can reduce pseudo-Gibbs oscillations effectively. Similar problems can be found in many other nonlinear diffusion models, such as the Perona-Malik model [9] and the mean curvature motion model [10]. In this paper, we focus on a hybrid variational denoising method. Specifically, we emphasize improving the TV model, and we propose a novel gradient fidelity term based on the curvelet shrinkage algorithm.

1.1. Related Works and Analysis

To begin with, we review some related work on variational methods. To cope with the ill-posed nature of denoising, variational methods often use a regularization technique. Let f denote the observed raw image data and u the original clean image; then regularization-functional-based denoising is given by

(1)  \min_u E(u) = \frac{\lambda}{2}\int_\Omega (u - f)^2\,dx + J(u),

where the first term is the image fidelity term, which penalizes the inconsistency between the estimated recovery image u and the acquired noisy image f, while the second term J(u) is the regularization term, which imposes some a priori constraints on the original image and to a great degree determines the quality of the recovered image; \lambda is the regularization parameter, which balances the trade-off between the image fidelity term and the regularization term J(u).

A classical model is the total variation (TV) minimization functional [8]. The TV model seeks the minimizer of an energy functional comprised of the TV norm of the image u and the fidelity of this image to the noisy image f:

(2)  \min_{u \in BV(\Omega)} E(u) = \int_\Omega |\nabla u|\,dx + \frac{\lambda}{2}\int_\Omega (u - f)^2\,dx.

Here, \Omega denotes the image domain and BV(\Omega) is the space of functions u of bounded variation, that is, such that the TV norm \int_\Omega |\nabla u|\,dx is finite. The gradient descent evolution equation is

(3)  \frac{\partial u}{\partial t} = \mathrm{div}\!\left(\frac{\nabla u}{|\nabla u|}\right) - \lambda (u - f).

In this formulation, \lambda can be considered as a Lagrange multiplier, computed by

(4)  \lambda = \frac{1}{\sigma^2 |\Omega|} \int_\Omega \mathrm{div}\!\left(\frac{\nabla u}{|\nabla u|}\right) (u - f)\,dx.

Although the TV model can reduce oscillations and regularize the geometry of level sets without penalizing discontinuities, it possesses some properties which may be undesirable under some circumstances [11], such as staircasing and loss of texture (see Figures 1(a)–1(c)).
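As a point of reference, the TV gradient descent flow (3) can be sketched in a few lines of numpy. This is a minimal illustration with a fixed fidelity weight and periodic boundary handling, not the exact scheme of [8]:

```python
import numpy as np

def tv_denoise(f, lam=0.2, dt=0.1, n_iter=100, eps=1e-3):
    """Explicit gradient-descent flow for the TV model:
    u_t = div(grad u / |grad u|) - lam * (u - f).
    eps regularizes |grad u| near zero; np.roll gives periodic
    boundaries (a simplification of the usual Neumann conditions)."""
    u = f.astype(float).copy()
    for _ in range(n_iter):
        # forward differences of u
        ux = np.roll(u, -1, axis=1) - u
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps ** 2)
        px, py = ux / mag, uy / mag
        # divergence of (px, py) via backward differences
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u += dt * (div - lam * (u - f))
    return u
```

Here lam is held fixed for brevity; in the formulation above it would be updated as a Lagrange multiplier via (4).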
Figure 1

(a) The noisy "Lena" image with standard deviation 35; (b) the denoised "Lena" image by the TV algorithm; (c) the denoised "Lena" image by the curvelet hard shrinkage algorithm.

Currently, there are three approaches which can partially overcome these drawbacks. One approach to preventing staircasing is to introduce higher-order derivatives into the energy. In [12], an image is decomposed into two parts: u = u_1 + u_2. The component u_1 is measured using the total variation norm, while the second component u_2 is measured using a higher-order norm. More precisely, one solves the following variational problem that now involves two unknowns:

(5)  \inf_{u = u_1 + u_2} \left\{ \int_\Omega |\nabla u_1|\,dx + \alpha \|u_2\|_X + \frac{\lambda}{2}\int_\Omega (u_1 + u_2 - f)^2\,dx \right\}.

Here \|\cdot\|_X could be some higher-order norm, for example, \|u_2\|_X = \int_\Omega |\nabla^2 u_2|\,dx. More complex higher-order norms have been brought to variational methods in order to alleviate the staircasing effect [13].

The second approach to overcoming the staircasing effect is to adopt a new data fidelity term. Gilboa et al. proposed an adaptive fidelity term to better preserve fine-scale features [14]. Zhu and Xia in [7] introduced the gradient fidelity term defined by

(6)  \frac{\beta}{2}\int_\Omega |\nabla u - \nabla (G_\sigma * f)|^2\,dx,

where G_\sigma is the Gaussian kernel with scale \sigma, and the symbol "*" denotes the convolution operator. Their studies show that this gradient fidelity term can alleviate the staircasing effect. However, classical Gaussian filtering smooths uniformly in all directions of the image, so fine details are easily destroyed. Hence, the gradient of the smoothed image is unreliable near edges, and the gradient fidelity term cannot preserve the gradient and thereby the image edges.
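For concreteness, the smoothed gradient entering the gradient fidelity term of [7] can be computed with a separable Gaussian convolution. The numpy sketch below (with wrapped boundaries and forward differences, both our simplifications) also shows why the term is noise-robust: smoothing drastically reduces the gradient magnitude of pure noise.

```python
import numpy as np

def gaussian_kernel1d(sigma, radius=None):
    """Normalized 1-D Gaussian window G_sigma."""
    if radius is None:
        radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    return k / k.sum()

def smoothed_gradient(f, sigma=1.0):
    """Forward-difference gradient of the Gaussian-smoothed image
    G_sigma * f, the quantity the TVGF gradient fidelity term tracks.
    Separable convolution; boundaries wrapped for brevity."""
    k = gaussian_kernel1d(sigma)
    r = len(k) // 2
    conv = lambda v: np.convolve(np.pad(v, r, mode="wrap"), k, "valid")
    g = np.apply_along_axis(conv, 1, f.astype(float))  # smooth rows
    g = np.apply_along_axis(conv, 0, g)                # smooth columns
    gx = np.roll(g, -1, axis=1) - g
    gy = np.roll(g, -1, axis=0) - g
    return gx, gy
```

The same smoothing is what blurs edges: the smoothed gradient underestimates the true gradient near discontinuities, which is the weakness discussed above.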

The third approach is the combination of variational models and MGA tools. In [14], the TV model was combined with wavelets to reduce the pseudo-Gibbs artifacts resulting from wavelet shrinkage [15]. Nonlinear diffusion has been combined with wavelet shrinkage to improve rotation invariance [16]. The author of [17] presented a hybrid denoising method in which complex ridgelet shrinkage was combined with total variation minimization [6]. From these reports, combining MGA and PDE methods can improve the visual quality of the restored image and provides a good way to take full advantage of both methods.

1.2. Main Contribution and Paper's Organization

In this paper, we add a new gradient fidelity term to the TV model, leading to a second-order nonlinear diffusion PDE, in order to avoid the staircasing effect and curvelet-like artifacts. This new gradient fidelity term provides a good mechanism for combining the curvelet shrinkage algorithm with TV regularization.

This paper is organized as follows. In Section 2, we introduce the curvelet transform. In Section 3, we propose a new hybrid model for image smoothing, with two main contributions. First, we propose a new hybrid fidelity term, in which the gradient of the multiscale curvelet shrinkage image is used as a feature fidelity term in order to suppress the staircasing effect and curvelet-like artifacts. Second, we propose an adaptive gradient descent flow algorithm, in which the spatially varying parameters are adaptively estimated to improve the preservation of edge details and texture in the desired image. In Section 4, we give numerical experiments and analysis.

The pipeline of our proposed method is illustrated in Figure 2. There are three core modules in our method. In the first module, we apply the curvelet shrinkage algorithm to obtain a good initial restored image. The second module minimizes the TV functional with the new gradient fidelity; this module is a global optimization process guided by our proposed objective functional. The third is the parameter adjustment module, which provides an adaptive process to compute the values of the system's parameters. The rationale behind the proposed method is that a high-visual-quality image restoration scheme is expected to be a blind process that filters out noise, preserves edges, and alleviates other artifacts.
Figure 2

Illustration of the proposed self-optimizing image denoising approach.

2. Curvelet Transform

Next, we review the basic principles of curvelets, which were originally proposed by Candès et al. [5]. Let W(r) and V(t) be a pair of smooth, nonnegative, real-valued functions; here W is called the "radial window" and V the "angular window". Both of them need to satisfy the admissibility conditions:

(7)  \sum_{j=-\infty}^{\infty} W^2(2^j r) = 1, \quad r > 0; \qquad \sum_{l=-\infty}^{\infty} V^2(t - l) = 1, \quad t \in \mathbb{R}.

Now, for each scale index j, let the window in the Fourier domain be given by

(8)  U_j(r, \theta) = 2^{-3j/4}\, W(2^{-j} r)\, V\!\left(\frac{2^{\lfloor j/2 \rfloor}\,\theta}{2\pi}\right),

where \lfloor j/2 \rfloor is the integer part of j/2 and (r, \theta) denotes polar coordinates; thus the support of U_j is a polar "wedge" determined by the "radial window" and the "angular window". Let \varphi_j be defined by its Fourier transform, \hat{\varphi}_j(\xi) = U_j(\xi). We may think of \varphi_j as a "mother" curvelet in the sense that all curvelets at scale 2^{-j} are obtained by rotations and translations of \varphi_j. Let R_\theta be the rotation matrix by \theta radians and R_\theta^{-1} its inverse; then curvelets are indexed by three parameters: a scale 2^{-j}, an equispaced sequence of orientations \theta_l = 2\pi \cdot 2^{-\lfloor j/2 \rfloor} l, and positions x_k^{(j,l)} = R_{\theta_l}^{-1}(k_1 2^{-j}, k_2 2^{-j/2}). With these parameters, the curvelets are defined by

(9)  \varphi_{j,l,k}(x) = \varphi_j\!\left(R_{\theta_l}(x - x_k^{(j,l)})\right).
A curvelet coefficient is then simply the inner product between an element f \in L^2(\mathbb{R}^2) and a curvelet \varphi_{j,l,k}, that is,

(10)  c(j,l,k) := \langle f, \varphi_{j,l,k} \rangle = \int_{\mathbb{R}^2} f(x)\, \overline{\varphi_{j,l,k}(x)}\,dx.

Let M be the collection of triple indices \mu = (j,l,k). The family of curvelet functions \{\varphi_\mu\}_{\mu \in M} forms a tight frame of L^2(\mathbb{R}^2). That means that each function f \in L^2(\mathbb{R}^2) has a representation:

(11)  f = \sum_{\mu \in M} \langle f, \varphi_\mu \rangle\, \varphi_\mu,

where \langle f, \varphi_\mu \rangle denotes the L^2-scalar product of f and \varphi_\mu. The coefficients \langle f, \varphi_\mu \rangle are called the curvelet coefficients of the function f. In this paper, we apply the second-generation curvelet transform, whose digital implementation can be outlined roughly in three steps [5]: apply the 2D FFT, multiply by the frequency windows, and apply the 2D inverse FFT to each window. The forward and inverse curvelet transforms have the same computational cost of O(n^2 \log n) for n \times n data [11]. More details on curvelets and recent applications can be found in recent review papers [3–6, 18, 19]. Figure 3 shows the elements of curvelets in comparison with wavelets. Note that tensor-product 2D wavelets are not strictly isotropic but have three directions, while curvelets have almost arbitrary directional selectivity.
Figure 3

The elements of wavelets (a) and curvelets on various scales, directions and translations in the spatial domain (b).

3. Combining TV Minimization with Gradient Fidelity on Curvelet Shrinkage

3.1. The Proposed Model

We start from the following assumed additive noise degradation model:

(12)  f = u + n,

where f denotes the observed raw image data, u is the original clean image, and n is additive measurement noise. The goal of image denoising is to recover u from the observed image data f. A shrinkage algorithm on some multiscale frame can be written as follows:

(13)  u_\sigma = \sum_{\mu \in M} S\big((Tf)_\mu\big)\, \tilde{\varphi}_\mu,

where T is the corresponding MGA operator, that is, (Tf)_\mu = \langle f, \varphi_\mu \rangle for all \mu \in M, and M is a set of indices. The rationale is that the transformed noise remains nearly Gaussian. The principles of shrinkage estimators, which estimate the frame coefficients from the observed coefficients, have been discussed in different frameworks such as Bayesian and variational regularization [20, 21].

Although traditional wavelets perform well for representing point singularities, they become computationally inefficient for geometric features with line and surface singularities. To overcome this problem, we choose curvelets as the tool of the shrinkage algorithm. In general, the shrinkage operators are considered to be of the form of a symmetric function S_\theta; thus the coefficients are estimated by

(14)  \tilde{c}_\mu = S_\theta(c_\mu), \qquad c_\mu = \langle f, \varphi_\mu \rangle.

Let \{\tilde{\varphi}_\mu\} denote the dual frame; then a denoised image is generated by the reconstruction algorithm:

(15)  u_\sigma = \sum_{\mu \in M} \tilde{c}_\mu\, \tilde{\varphi}_\mu.
Following the wavelet shrinkage idea proposed by Donoho and Johnstone [22], the curvelet shrinkage operator can be taken as a soft-thresholding function defined by a fixed threshold \theta, that is,

(16)  S_\theta(x) = \operatorname{sgn}(x)\,\max(|x| - \theta,\, 0),

or a hard-shrinkage function

(17)  S_\theta(x) = \begin{cases} x, & |x| > \theta, \\ 0, & |x| \le \theta. \end{cases}
The major problem with wavelet shrinkage methods, as discussed, is that shrinking large coefficients entails an erosion of spiky image features, while shrinking small coefficients toward zero yields Gibbs-like oscillations in the vicinity of edges and loss of texture. As a new MGA tool, curvelet shrinkage can suppress these pseudo-Gibbs oscillations and preserve image edges; however, some curve-like artifacts are generated (see Figure 4).
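The two shrinkage rules in (16)-(17) are applied elementwise and are identical for wavelet and curvelet coefficients; a direct numpy transcription:

```python
import numpy as np

def soft_shrink(c, t):
    """Soft thresholding, eq. (16): shrink magnitudes toward zero by t."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def hard_shrink(c, t):
    """Hard thresholding, eq. (17): keep coefficients with |c| > t, zero the rest."""
    return np.where(np.abs(c) > t, c, 0.0)
```

Soft thresholding biases the retained large coefficients toward zero (the "erosion" of spiky features mentioned above), while hard thresholding leaves them untouched at the cost of a discontinuous shrinkage rule.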
Figure 4

(a) Original "Toys" image. (b) Noisy "Toys" image with additive Gaussian noise. (c)–(f) Denoising of the "Toys" image shown in (b), where the curvelet transform is hard-thresholded according to (17) for different choices of the threshold.

In order to suppress the staircasing effect and curvelet-like artifacts, we propose a new objective functional:

(18)  E(u) = \int_\Omega |\nabla u|\,dx + \frac{\lambda}{2}\int_\Omega (u - f)^2\,dx + \frac{\beta}{2}\int_\Omega |\nabla u - \nabla u_\sigma|^2\,dx.

In the cost functional (18), the term \frac{\beta}{2}\int_\Omega |\nabla u - \nabla u_\sigma|^2\,dx is called the curvelet-shrinkage-based gradient data fidelity term; it is designed to force the gradient of u to be close to the gradient estimate \nabla u_\sigma and to alleviate the staircase effect. The parameters \lambda and \beta control the weights of the terms. For descriptive simplicity, we keep the notation \lambda and \beta for these weights in the following sections.

3.2. Basic Properties of Our Model

Let us denote

(19)  H(u) = \frac{\lambda}{2}\int_\Omega (u - f)^2\,dx + \frac{\beta}{2}\int_\Omega |\nabla u - \nabla u_\sigma|^2\,dx.

Then, the cost functional H(u) is a new hybrid data fidelity term, and its corresponding Euler equation is

(20)  \lambda (u - f) - \beta\, \Delta (u - u_\sigma) = 0.

Proposition 1.

The Euler equation (20) is equivalent to producing a new image u whose Fourier transform is described as follows: if \lambda + \beta |\xi|^2 \ne 0, then

(21)  \hat{u}(\xi) = \frac{\lambda \hat{f}(\xi) + \beta |\xi|^2 \hat{u}_\sigma(\xi)}{\lambda + \beta |\xi|^2}.

Proof.

Applying the Fourier transform to the Euler equation (20), we get

(22)  \lambda (\hat{u} - \hat{f}) - \beta\, \mathcal{F}[\Delta(u - u_\sigma)] = 0.

According to the differential property of the Fourier transform,

(23)  \mathcal{F}[\Delta v](\xi) = -|\xi|^2\, \hat{v}(\xi),

we have

(24)  \lambda (\hat{u} - \hat{f}) + \beta |\xi|^2 (\hat{u} - \hat{u}_\sigma) = 0.

If \lambda + \beta |\xi|^2 \ne 0, then we get

(25)  \hat{u}(\xi) = \frac{\lambda \hat{f}(\xi) + \beta |\xi|^2 \hat{u}_\sigma(\xi)}{\lambda + \beta |\xi|^2},

where \xi = (\xi_1, \xi_2) denotes the variable in the frequency domain.

Proposition 1 tells us that the Euler equation (20) is equivalent to computing a new image whose Fourier frequency spectrum is an interpolation of \hat{f} and \hat{u}_\sigma. The weight coefficients of \hat{f} and \hat{u}_\sigma are \lambda / (\lambda + \beta |\xi|^2) and \beta |\xi|^2 / (\lambda + \beta |\xi|^2), respectively.
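This frequency-domain interpolation can be checked numerically: for the quadratic part H(u) alone, the minimizer is available in closed form by FFT. The sketch below uses the discrete 5-point Laplacian with periodic boundaries (our discretization choice; the analysis above is in the continuous setting), with f the noisy image and g standing in for the curvelet estimate:

```python
import numpy as np

def hybrid_fidelity_solve(f, g, lam=1.0, beta=1.0):
    """Closed-form FFT solution of lam*(u - f) - beta*Lap(u - g) = 0:
    u_hat = (lam*f_hat + beta*|xi|^2*g_hat) / (lam + beta*|xi|^2),
    where |xi|^2 is the symbol of minus the discrete 5-point Laplacian,
    4 sin^2(pi*k) + 4 sin^2(pi*l), under periodic boundary conditions."""
    ny, nx = f.shape
    ky = np.fft.fftfreq(ny)  # frequencies in cycles per sample
    kx = np.fft.fftfreq(nx)
    xi2 = (4 * np.sin(np.pi * ky)[:, None] ** 2
           + 4 * np.sin(np.pi * kx)[None, :] ** 2)
    u_hat = (lam * np.fft.fft2(f) + beta * xi2 * np.fft.fft2(g)) \
            / (lam + beta * xi2)
    return np.fft.ifft2(u_hat).real
```

At low frequencies (xi2 near 0) the solution follows f; at high frequencies it follows g, which is exactly the interpolation behavior stated in Proposition 1.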

Proposition 2.

The energy functional E(u) in (18) is convex.

Proof.

For all u_1, u_2 and for all t \in [0, 1], on one hand we have the following conclusion:

(26)  |\nabla (t u_1 + (1 - t) u_2)| \le t\, |\nabla u_1| + (1 - t)\, |\nabla u_2|.

On the other hand, by the convexity of the square function, we have the following conclusion:

(27)  \big(t (u_1 - f) + (1 - t)(u_2 - f)\big)^2 \le t\, (u_1 - f)^2 + (1 - t)\, (u_2 - f)^2.

Then, similarly, we have

(28)  \big|t (\nabla u_1 - \nabla u_\sigma) + (1 - t)(\nabla u_2 - \nabla u_\sigma)\big|^2 \le t\, |\nabla u_1 - \nabla u_\sigma|^2 + (1 - t)\, |\nabla u_2 - \nabla u_\sigma|^2.

According to (26)–(28), we get E(t u_1 + (1 - t) u_2) \le t\, E(u_1) + (1 - t)\, E(u_2), that is, the convexity of the energy functional E(u).

From Proposition 2, the convexity of the energy functional guarantees global optimization and the existence of a unique solution, while Proposition 1 shows that the solution has a special form in the Fourier domain. Combining Propositions 1 and 2, we can remark that the unique minimizer of (19) is

(29)  u = \mathcal{F}^{-1}\!\left[\frac{\lambda \hat{f} + \beta |\xi|^2 \hat{u}_\sigma}{\lambda + \beta |\xi|^2}\right].

Then, we can prove the following existence and uniqueness theorem.

Theorem 1.

Let f be a positive, bounded function on \Omega; then the minimization problem for the energy functional in (18) admits a unique solution u \in BV(\Omega) satisfying
(30)

Proof.

Using the lower semicontinuity and compactness of the TV seminorm and the convexity of E(u), the proof can be carried out following the same procedure as [23, 24] (for details, see the appendix of [24]).

3.3. Adaptive Parameters Estimation

To solve the minimization of the energy functional E(u), one often transforms the optimization problem into its Euler-Lagrange equation. Using the standard calculus of variations for E(u) with respect to u, we get the Euler-Lagrange equation:

(31)  -\mathrm{div}\!\left(\frac{\nabla u}{|\nabla u|}\right) + \lambda (u - f) - \beta\, \Delta (u - u_\sigma) = 0 \ \text{in}\ \Omega, \qquad \frac{\partial u}{\partial \vec{n}} = 0 \ \text{on}\ \partial\Omega,

where \vec{n} is the outward unit normal vector on the boundary \partial\Omega. For a convenient numerical simulation of (31), we apply the gradient descent flow and get the evolution equation:

(32)  \frac{\partial u}{\partial t} = \mathrm{div}\!\left(\frac{\nabla u}{|\nabla u|}\right) - \lambda (u - f) + \beta\, \Delta (u - u_\sigma), \qquad u(x, 0) = f(x).
There are three parameters involved in the iterative procedure. For the threshold parameter \theta in the curvelet coefficient shrinkage, a common choice is a fixed multiple of \sigma, where \sigma denotes the standard deviation of the Gaussian white noise. Monte-Carlo simulations can calculate an approximate value of the individual coefficient variances. In our experiments, we use the following hard-thresholding rule for estimating the unknown curvelet coefficients:

(33)  \tilde{c}_\mu = \begin{cases} c_\mu, & |c_\mu| \ge k_j \sigma, \\ 0, & |c_\mu| < k_j \sigma. \end{cases}

Here we actually choose a scale-dependent value for the multiplier k_j: a larger value for the first scale (the finest scale) and a smaller one for the others.

The parameters \lambda and \beta are very important for balancing the trade-off between the image fidelity term and the regularization term. An important prior fact is that Gaussian distributed noise satisfies the restriction condition:

(34)  \int_\Omega (u - f)^2\,dx = \sigma^2 |\Omega|.

Therefore, we merely multiply the first equation of (32) by (u - f) and integrate by parts over \Omega; if the steady state has been reached, the left side of the first equation of (32) vanishes; thus we have

(35)  \lambda = \frac{1}{\sigma^2 |\Omega|} \int_\Omega \left[ \mathrm{div}\!\left(\frac{\nabla u}{|\nabla u|}\right) + \beta\, \Delta (u - u_\sigma) \right] (u - f)\,dx.
Obviously, the above equation is not sufficient to estimate the values of \lambda and \beta simultaneously. This implies that we should introduce additional prior knowledge. Borrowing the idea of spatially varying data fidelity from Gilboa et al. [14], we compute the parameter \lambda(x) by the formula:
(36)
where P(x) is the local power of the residue f - u. The local power of the residue is given by
(37)
where G is a normalized, radially symmetric Gaussian window, and E denotes the expected value. After computing the value of \lambda(x), we estimate the value of \beta using (38), that is,
(38)
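Since formula (36) is not reproduced above, the sketch below shows one plausible instantiation of the idea in the spirit of Gilboa et al. [14]: estimate the local residue power P(x) with a Gaussian window, then raise the fidelity weight (i.e., reduce filtering) where P(x) exceeds the noise level. The function names and the exact weighting rule are our assumptions, not the paper's formula:

```python
import numpy as np

def local_residue_power(residual, win_sigma=2.0):
    """Local power P(x) of the residue f - u: Gaussian-windowed mean of
    the squared, locally de-meaned residual (a sketch of eq. (37);
    the window is applied by periodic FFT convolution for brevity)."""
    def gauss_smooth(v):
        ny, nx = v.shape
        ky = np.fft.fftfreq(ny)[:, None]
        kx = np.fft.fftfreq(nx)[None, :]
        h = np.exp(-2 * (np.pi * win_sigma) ** 2 * (ky ** 2 + kx ** 2))
        return np.fft.ifft2(np.fft.fft2(v) * h).real
    m = gauss_smooth(residual)            # local mean (plays the role of E)
    return gauss_smooth((residual - m) ** 2)

def spatial_lambda(f, u, sigma_noise, lam0=1.0, win_sigma=2.0):
    """Hypothetical spatially varying fidelity weight: about lam0 in flat
    regions where P(x) ~ sigma^2, larger in textured regions where the
    local residue power exceeds the noise level, so filtering is reduced
    there."""
    P = local_residue_power(f - u, win_sigma)
    return lam0 * np.maximum(P, sigma_noise ** 2) / sigma_noise ** 2
```

In flat regions the residue is essentially noise, so the weight stays near lam0 and the TV term dominates; in textured regions the residue carries signal power and the weight grows, protecting detail, which is the behavior Section 3.5 argues for.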

3.4. Description of Proposed Algorithm

To discretize equation (32), the finite difference scheme of [8] is used. Denote the space step by h and the time step by \tau. Thus we have
(39)
where |\nabla u|_\varepsilon = \sqrt{|\nabla u|^2 + \varepsilon^2} and \varepsilon is a regularization parameter chosen near 0.

The numerical algorithm for (32) is given in the following (the subscripts are omitted):
(40)
with boundary conditions
(41)

The fixed parameters (the space step h and the time step \tau) are chosen in advance, while the parameters \theta, \lambda, and \beta are computed dynamically during the iterative process according to formulae (33), (36), and (38).

In summary, according to the gradient descent flow and the discussion of parameters choice, we now present a sketch of the proposed algorithm (the pipeline is shown in Figure 2).

Initialization

Set the initial data and the maximal iteration number Iterative-Steps.

Curvelet Shrinkage
  1. Apply the curvelet transform (the FDCT [5]) to the noisy image f and obtain the discrete coefficients c_\mu.
  2. Use the robust method to estimate an approximate noise level, and then use the shrinkage operator in (33) to obtain the estimated coefficients \tilde{c}_\mu.
  3. Apply the inverse curvelet transform and obtain the initial restored image u_\sigma.

Iteration. While the iteration count is below Iterative-Steps, Do
  1. Compute the updated image according to (40).
  2. Update the parameter \lambda according to (36).
  3. Update the parameter \beta according to (38).
End Do

Output: the restored image u.

3.5. Analysis of Staircase and Curve-Like Effect Alleviation

The essential idea of denoising is to obtain the cartoon part of the image, preserve the details of the edge and texture parts, and filter out the noise. In the classical TV algorithm and the curvelet thresholding algorithm, staircase effects and curve-like artifacts, respectively, are often generated in the restored cartoon part. Our model similarly forces the gradient of the image to be close to an approximation of it; however, it provides a better mechanism for alleviating the staircase effects and curve-like artifacts.

Firstly, the "TVGF" model in [7] uses Gaussian filtering for this approximation. However, because the Gaussian filter smooths uniformly in all directions, it smooths the image too much to preserve edges. Consequently, its gradient fidelity term cannot maintain the variation of intensities well. Differing from the TVGF model, our model takes full advantage of the curvelet transform. Curvelets allow an almost optimal sparse representation of objects with curve singularities. For a smooth object with discontinuities along C^2-continuous curves, the best m-term approximation \tilde{f}_m by curvelet thresholding obeys \|f - \tilde{f}_m\|_2^2 \le C\, m^{-2} (\log m)^3, while for wavelets the decay rate is only of order m^{-1}.

Secondly, from regularization theory, the gradient fidelity term works as Tikhonov regularization in the Sobolev space H^1(\Omega). The problem \min_u \int_\Omega |\nabla u - \nabla u_\sigma|^2\,dx admits a unique solution (up to an additive constant) characterized by the Euler-Lagrange equation \Delta u = \Delta u_\sigma. Moreover, a function u is called harmonic (subharmonic, superharmonic) if it satisfies \Delta u = 0 (\Delta u \ge 0, \Delta u \le 0). Using the mean value theorems [25], for any ball B_r(x) \subset \Omega, a harmonic function satisfies

(42)  u(x) = \frac{1}{|B_r(x)|} \int_{B_r(x)} u(y)\,dy.

However, in [7], the gradient fidelity term is chosen as in (6), so that the corresponding Euler-Lagrange equation forces

(43)  \Delta u = \Delta (G_\sigma * f).

Comparing the above two results, we can understand the difference between the two gradient fidelity terms' smoothing mechanisms. We remark that the gradient fidelity term in [7] tends to produce more edge blurring and to remove more texture components as the scale parameter of the Gaussian kernel increases, although it helps to alleviate the staircase effect and produces smoother results. In contrast, our model tends toward the curvelet shrinkage image and retains the curve singularities in images; thus it obtains good edge-preserving performance.

In addition, another rationale behind our proposed model is that the spatially varying fidelity parameters \lambda(x) and \beta are incorporated into it. In our proposed algorithm, as described in (36), we use the measure P(x), the local power of the residue f - u. In flat areas (the basic cartoon model without textures or fine-scale details), the local power of the residue is almost constant, close to the noise level. We get a high-quality denoising process, so that the noise, the staircase effect, and the curve-like artifacts are smoothed. In textured areas, since the noise is uncorrelated with the signal, the total power of the residue can be approximated as the sum of the local powers of the non-cartoon part and the noise. Therefore, textured regions are characterized by a high local power of the residue, and our algorithm reduces the level of filtering there so as to preserve the detailed structure of such regions.

4. Experimental Results and Analysis

In this section, experimental results are presented to demonstrate the capability of our proposed model. The results are compared with those obtained by using the curvelet shrinkage method [26], the "TV" model (2) proposed by Rudin et al. [8], and the "TVGF" model proposed by Zhu and Xia [7].

In the curvelet shrinkage method, denoising is achieved by hard thresholding of the curvelet coefficients. We select the threshold at a fixed multiple of \sigma_{j,l} for all scales but the finest, where a larger multiple is used; here \sigma_{j,l} is the noise level of a coefficient at scale j and angle l. In our experiments, we actually use a robust, median-based estimator of the noise level: \hat{\sigma}_{j,l} = \mathrm{MED}(|c_{j,l}|)/0.6745, where c_{j,l} represents the corresponding curvelet coefficients at scale j and angle l, and MED is the median operator, which calculates the median value of a sequence of coefficients.
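The median-based estimate is the classical MAD rule of Donoho and Johnstone, \hat{\sigma} = MED(|c|)/0.6745, which can be transcribed directly (the paper's exact formula is elided in the text above, so this standard form is our assumption):

```python
import numpy as np

def mad_sigma(coeffs):
    """Robust noise-level estimate from transform coefficients of one
    scale/angle band: sigma_hat = median(|c|) / 0.6745 (the classical
    MAD rule; 0.6745 is the median of |Z| for standard normal Z)."""
    c = np.asarray(coeffs, dtype=float).ravel()
    return np.median(np.abs(c)) / 0.6745
```

The median makes the estimate insensitive to the few large coefficients carrying signal, which is why it is applied band by band rather than globally.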

The solution of the "TV" model in (2) is obtained by using the following explicit iteration:
(44)

Here the time step and the gradient-regularization parameter are set to 0.02 and 0.01, respectively. The fidelity parameter \lambda is dynamically updated to satisfy the noise variance constraint according to (4).

The solution of the "TVGF" model is obtained by using the following explicit iteration:
(45)
Here the size and standard deviation of the Gaussian lowpass filter are set to 7 and 1, respectively. The evolution step length is set to 0.02. The weight \beta is set to 0.5 as suggested in [7], and \lambda is dynamically updated according to
(46)

where the remaining parameter in (46) is set to 0.01, and \sigma^2 is the noise variance.

4.1. Image Quality Assessment

For the following experiments, we compute the quality of restored images by the signal-to-noise ratio (SNR) to compare the performance of the different algorithms. Because SNR is limited in capturing the subjective appearance of the results, the mean structural similarity (MSSIM) index defined in [27] is also used to measure the performance of the different methods. As shown by theoretical and experimental analysis [27], the MSSIM index is intended to measure the perceptual quality of images.

SNR is defined by

(47)  \mathrm{SNR} = 10 \log_{10} \frac{\sum_x (u(x) - \bar{u})^2}{\sum_x (u(x) - \hat{u}(x))^2}.

MSSIM is given by

(48)  \mathrm{MSSIM}(u, \hat{u}) = \frac{1}{M} \sum_{j=1}^{M} \mathrm{SSIM}(x_j, y_j),

where u and \hat{u} are the original and the restored images, respectively; \bar{u} is the mean of the image u; x_j and y_j are the image contents at the jth local window of u and \hat{u}, respectively; M is the number of local windows in the image; and

(49)  \mathrm{SSIM}(x, y) = \frac{(2 \mu_x \mu_y + C_1)(2 \sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)},

where \mu_x is the mean of x; \sigma_x is the standard deviation of x; \sigma_{xy} is the covariance of x and y; and C_1, C_2 are constants.
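Both metrics are easy to compute. The sketch below uses one common SNR convention (signal variance over MSE) and evaluates SSIM over non-overlapping 8x8 windows rather than the 11x11 Gaussian window of [27], so its values will differ slightly from the paper's tables:

```python
import numpy as np

def snr_db(clean, restored):
    """SNR in dB: 10*log10(signal variance / MSE). Conventions vary;
    this is one common choice, not necessarily the paper's."""
    mse = np.mean((clean - restored) ** 2)
    return 10 * np.log10(np.var(clean) / mse)

def mssim(x, y, win=8, c1=0.01 ** 2, c2=0.03 ** 2):
    """Mean SSIM over non-overlapping win x win windows, eq. (48)-(49).
    Images are assumed scaled to [0, 1]."""
    vals = []
    for i in range(0, x.shape[0] - win + 1, win):
        for j in range(0, x.shape[1] - win + 1, win):
            a = x[i:i + win, j:j + win]
            b = y[i:i + win, j:j + win]
            mu_a, mu_b = a.mean(), b.mean()
            va, vb = a.var(), b.var()
            cov = ((a - mu_a) * (b - mu_b)).mean()
            vals.append(((2 * mu_a * mu_b + c1) * (2 * cov + c2))
                        / ((mu_a ** 2 + mu_b ** 2 + c1) * (va + vb + c2)))
    return float(np.mean(vals))
```

As expected from (49), mssim equals 1 only when the two images coincide, and decreases as structural distortion grows.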

In order to evaluate the performance of alleviation of staircase effect and curve-like artifacts, the difference image between the restored image and the original clean image is used to visually judge image quality.

The stopping criterion for the proposed method, the "TV" method, and the "TVGF" method is that the MSSIM reaches its maximum or the total iteration count reaches the maximal iteration number 3000.

4.1.1. Qualitative and Quantitative Results

Firstly, we take the "Toys" image (see Figure 4(a)) as the test image and add Gaussian noise (the noisy image is shown in Figure 4(b)). The SNR value of the noisy "Toys" image is 3.99 dB, while its MSSIM is 0.30. The results of the curvelet shrinkage, the TV algorithm, the TVGF algorithm, and our proposed algorithm are displayed in Figures 5(a)–5(d), respectively. All the algorithms improve the SNR and MSSIM greatly; the SNR obtained by our algorithm reaches 14.37 dB, while those obtained by the TVGF model, the curvelet algorithm, and the TV model are just 13.28 dB, 13.33 dB, and 14.21 dB, respectively. Similarly, the MSSIM improvement obtained by our algorithm is the largest among all the algorithms. For this kind of cartoon image, our proposed algorithm does a good job of restoring faint geometrical structures of the image. In order to judge the visual differences between the restored images of the different algorithms better, we display subimages cropped from Figures 5(a)–5(d). As illustrated in Figures 6(a)–6(f), the image restored by our proposed model (Figure 6(f)) is more natural than the other restored images shown in Figures 6(c), 6(d), and 6(e). Figure 7 shows the difference images between the "Toys" image and the restored "Toys" images in Figure 6. From Figure 7, we can see that our algorithm's restored image has less staircase and curve-like distortion compared with the other algorithms' results.
Figure 5

The restored "Toys" images: (a) by the curvelet shrinkage method (SNR = 13.33, MSSIM = 0.85); (b) by using the "TV" model (SNR = 14.21, MSSIM = 0.85); (c) by using the "TVGF" model (SNR = 13.28, MSSIM = 0.72); (d) by using our proposed model (SNR = 14.37, MSSIM = 0.88).

Figure 6

The subimages of the original, noisy, and restored "Toys" in Figure 5: (a) the original image; (b) the noisy image; (c) the restored image by the curvelet Shrinkage method; (d) the restored image by using the "TV" model; (e) the restored image by using the "TVGF" model; (f) the restored image by using our proposed model.

Figure 7

The difference images between the original "Toys" image and restored "Toys" images in Figure 6: (a) by the curvelet Shrinkage method; (b) by using the "TV" model; (c) by using the "TVGF" model; (d) by using our proposed model.

In the second experiment, we take "Boat" as the test image. The "Boat" image contains many linear edges and curve singularities. The restored results are depicted in Figure 8, and the difference images are shown in Figure 9. As expected, our hybrid method is less prone to the staircase effect and curve-like artifacts, and it benefits from the curvelet transform's ability to capture the geometrical content of the image efficiently.
Figure 8

The original, noisy, and restored "Boat" images: (a) the original image; (b) the noisy image (SNR = 7.36, MSSIM = 0.43); (c) the restored image by the curvelet shrinkage method (SNR = 13.51, MSSIM = 0.76); (d) the restored image by using the "TV" model (SNR = 14.36, MSSIM = 0.78); (e) the restored image by using the "TVGF" model (SNR = 13.25, MSSIM = 0.76); (f) the restored image by using our proposed model (SNR = 14.67, MSSIM = 0.79).

Figure 9

The difference images between the original "Boat" and restored "Boat" images in Figure 8: (a) by the curvelet Shrinkage method; (b) by using the "TV" model; (c) by using the "TVGF" model; (d) by using our proposed model.

In the third experiment, we carry out experiments on "Lena" with different noise levels; the standard deviation of the Gaussian noise is 20, 25, 30, 35, and 40. Figures 10(a)–10(f) display the results for the noisy "Lena" image when the standard deviation of the noise is 40. In this case, the SNR and MSSIM of the noisy image are 1.54 dB and 0.15; the SNR obtained by our algorithm reaches 14.36 dB, while those obtained by the TVGF model, the curvelet algorithm, and the TV model are just 12.01 dB, 13.72 dB, and 12.94 dB, respectively. As is well known, the "Lena" image is an international standard test image which contains not only strong vertical and curved edges but also hair texture. In the result of curvelet shrinkage (Figure 10(c)), the hair texture and the strong vertical and curved edges are restored very well; however, some annoying curve-like artifacts are generated. The TV algorithm can preserve the edges, but it loses the hair texture; moreover, the staircase effect is very obvious (Figure 10(d)). The TVGF algorithm can reduce the staircase effect; however, the edges and textures in the "Lena" image look a bit blurred. Compared with the other algorithms, our hybrid algorithm obtains a sharper image with better hair detail, and the restored image is almost free of artifacts. The MSSIM values of the four algorithms also show that our algorithm has the best perceptual image quality. From Table 1, we can find that our algorithm obtains the largest MSSIM value. We display two groups of results (Figures 10 and 11) for the face part of the "Lena" image for the various algorithms.
Table 1

The SNR and MSSIM of the restored "Lena" images using the four models. The numbers in brackets in the "MSSIM" columns are the total iteration numbers of the algorithms.

Noise standard   Noisy image        Curvelet shrinkage   "TV" model              "TVGF" model            Proposed model
deviation        SNR (dB)  MSSIM    SNR (dB)  MSSIM      SNR (dB)  MSSIM         SNR (dB)  MSSIM         SNR (dB)  MSSIM

20               7.59      0.34     16.62     0.86       16.76     0.85 (1923)   15.68     0.82 (969)    17.26     0.88 (486)
25               5.63      0.27     15.70     0.84       15.68     0.82 (2290)   14.75     0.77 (1115)   16.36     0.86 (615)
30               4.04      0.22     14.95     0.82       14.81     0.80 (2991)   13.79     0.71 (1135)   15.58     0.85 (763)
35               2.70      0.18     14.29     0.81       13.92     0.76 (3000)   12.88     0.67 (1142)   14.92     0.84 (898)
40               1.54      0.15     13.72     0.79       12.94     0.71 (3000)   12.01     0.61 (1151)   14.36     0.82 (1033)

Figure 10

The subimages of the original, noisy, and restored "Lena": (a) the original image; (b) the noisy image (SNR = 1.54, MSSIM = 0.15); (c) the restored image by the curvelet shrinkage method (SNR = 13.72, MSSIM = 0.79); (d) the restored image by using the "TV" model (SNR = 12.94, MSSIM = 0.71); (e) the restored image by using the "TVGF" model (SNR = 12.01, MSSIM = 0.61); (f) the restored image by using the proposed model (SNR = 14.36, MSSIM = 0.82).

Figure 11

The difference images between the original "Lena" and restored "Lena" images in Figure 10: (a) by the curvelet Shrinkage method; (b) by using the "TV" model; (c) by using the "TVGF" model; (d) by using our proposed model.

In the fourth experiment, we take "Barbara" as the test image. The distinguishing characteristic of the "Barbara" image is its abundant texture details. We display two groups of results (Figures 12, 13, 14, and 15) for the face part of the "Barbara" image for the various algorithms, for two different levels of Gaussian noise. From these experiments, the visual quality of the image restored by our algorithm is better than that of the other algorithms. Of course, our hybrid algorithm can only restore the regular texture partially, and some tiny textures cannot be restored, but the whole image looks more natural and harmonious.
Figure 12

The detail of the original, noisy, and restored "Barbara" images: (a) the original image; (b) the noisy image (SNR = 8.73, MSSIM = 0.48); (c) the image restored by the curvelet shrinkage method (SNR = 12.05, MSSIM = 0.77); (d) the image restored by using the "TV" model (SNR = 12.74, MSSIM = 0.77); (e) the image restored by using the "TVGF" model (SNR = 11.39, MSSIM = 0.72); (f) the image restored by using our proposed model (SNR = 13.15, MSSIM = 0.81).

Figure 13

The difference images between the original "Barbara" and the restored "Barbara" images in Figure 12: (a) by the curvelet shrinkage method; (b) by using the "TV" model; (c) by using the "TVGF" model; (d) by using our proposed model.

Figure 14

The subimages of the original, noisy, and restored "Barbara" images: (a) the original image; (b) the noisy image (SNR = 2.71, MSSIM = 0.26); (c) the image restored by the curvelet shrinkage method (SNR = 10.45, MSSIM = 0.68); (d) the image restored by using the "TV" model (SNR = 10.12, MSSIM = 0.61); (e) the image restored by using the "TVGF" model (SNR = 9.93, MSSIM = 0.56); (f) the image restored by using our proposed model (SNR = 10.81, MSSIM = 0.70).

Figure 15

The difference images between the original "Barbara" and the restored "Barbara" images in Figure 14: (a) by the curvelet shrinkage method; (b) by using the "TV" model; (c) by using the "TVGF" model; (d) by using our proposed model.

In addition, we use the SNR and the MSSIM index to evaluate the quality of the restored images produced by the various algorithms under different noise levels. For a fair comparison, all reported results are the images whose MSSIM values reach their maximum during the respective iterative processes. To systematically evaluate the performance of the algorithms across noise levels, the denoising results are tabulated in Tables 1 and 2 for the "Lena" image and the "Barbara" image, respectively. Inspection of these tables shows that both the SNR improvement and the MSSIM improvement achieved by our model are larger than those of the other three models. It is interesting to point out that the TVGF algorithm does not perform well compared with the other three algorithms. This phenomenon has been discussed and analyzed in [28]; the reason is the blurring caused by Gaussian kernel convolution. The TVGF algorithm does, however, reduce the staircase effect to some extent. Unlike Gaussian kernel convolution, our proposed algorithm exploits the anisotropy of the curvelets, so it introduces no edge blurring. The MSSIM improvement shows that our algorithm has better "subjective" or perceptual quality, and this improvement becomes more salient as the noise level increases. Both the SNR and the MSSIM gains of our algorithm are also more pronounced for more complex images with more texture, for instance, the "Barbara" image. Figures 16, 17, 18, and 19 show the evolution of the MSSIM during the iterative process of the TV model, the TVGF model, and our proposed model. Tables 1 and 2 also list the total iteration number of each algorithm.
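The two quality measures can be sketched in a few lines. The snippet below is a simplified stand-in, not the paper's exact formulas: it uses one common SNR convention (signal power over residual power, in dB) and computes the mean SSIM over non-overlapping 8x8 blocks, whereas Wang et al. [27] use an 11x11 Gaussian-weighted sliding window.

```python
import numpy as np

def snr_db(clean, restored):
    """SNR in dB: ratio of signal power to residual power (one common definition)."""
    clean = clean.astype(float)
    noise = clean - restored.astype(float)
    return 10.0 * np.log10(np.sum(clean ** 2) / np.sum(noise ** 2))

def mssim(img1, img2, win=8, L=255.0):
    """Mean SSIM over non-overlapping win x win blocks (simplified; [27] uses
    an 11x11 Gaussian window with per-pixel sliding)."""
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    h, w = img1.shape
    scores = []
    for i in range(0, h - win + 1, win):
        for j in range(0, w - win + 1, win):
            a = img1[i:i + win, j:j + win].astype(float)
            b = img2[i:i + win, j:j + win].astype(float)
            mu1, mu2 = a.mean(), b.mean()
            v1, v2 = a.var(), b.var()
            cov = ((a - mu1) * (b - mu2)).mean()
            s = ((2 * mu1 * mu2 + C1) * (2 * cov + C2)) / \
                ((mu1 ** 2 + mu2 ** 2 + C1) * (v1 + v2 + C2))
            scores.append(s)
    return float(np.mean(scores))
```

By construction, `mssim` equals 1 for identical images and decreases toward 0 as structural similarity degrades, which is what makes it a useful perceptual complement to SNR in the tables above.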
For example, in Table 1, when the noise standard deviation is 40, the "TV" model needs 3000 iteration steps to reach its maximum MSSIM (0.71) and the "TVGF" model needs 1151 iteration steps to reach its maximum MSSIM (0.61), while our model obtains a good-quality restored image with an MSSIM of 0.82 in only 1033 iteration steps. These experiments show that our algorithm reaches the best restored image (maximum MSSIM) more quickly and achieves the best performance compared with the other algorithms.
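The iteration counts reported in the tables correspond to running a gradient descent flow and keeping the iterate at which the MSSIM peaks. The sketch below illustrates this protocol under simplifying assumptions: `tv_gf_step` is a hypothetical explicit update for the flow u_t = div(grad u/|grad u|_eps) - lam*(u - f) + gam*(lap u - lap u_c), with fixed scalar lam and gam and periodic boundaries, whereas the paper estimates spatially varying parameters adaptively; `run_to_best_score` is the generic best-iterate tracker.

```python
import numpy as np

def laplacian(u):
    # 5-point Laplacian, periodic boundaries via np.roll
    return (np.roll(u, 1, 0) + np.roll(u, -1, 0)
            + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)

def tv_gf_step(u, f, uc, lam=0.05, gam=0.2, dt=0.1, eps=1e-3):
    """One explicit step of the TV flow with a gradient fidelity term;
    uc stands in for the curvelet-shrinkage estimate (sketch only)."""
    ux = np.roll(u, -1, 1) - u                      # forward differences
    uy = np.roll(u, -1, 0) - u
    mag = np.sqrt(ux ** 2 + uy ** 2 + eps)          # eps-regularized |grad u|
    px, py = ux / mag, uy / mag
    curv = (px - np.roll(px, 1, 1)) + (py - np.roll(py, 1, 0))  # backward divergence
    return u + dt * (curv - lam * (u - f) + gam * (laplacian(u) - laplacian(uc)))

def run_to_best_score(u0, step_fn, score_fn, max_iters=3000):
    """Iterate a flow and keep the iterate with the highest score (e.g. MSSIM
    against the clean image); returns it with its score and step index."""
    u, best_u, best_s, best_k = u0, u0, score_fn(u0), 0
    for k in range(1, max_iters + 1):
        u = step_fn(u)
        s = score_fn(u)
        if s > best_s:
            best_u, best_s, best_k = u, s, k
    return best_u, best_s, best_k
```

In this setup, a "faster" algorithm in the sense of the tables is simply one whose `best_k` is smaller for a higher `best_s`.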
Table 2

The SNR and MSSIM of the restored "Barbara" images using the four models. The numbers in brackets in the MSSIM columns give the total iteration count of each algorithm.

| Noise standard deviation | Noisy image SNR (dB) / MSSIM | Curvelet shrinkage SNR (dB) / MSSIM | "TV" model SNR (dB) / MSSIM (iterations) | "TVGF" model SNR (dB) / MSSIM (iterations) | Proposed model SNR (dB) / MSSIM (iterations) |
|---|---|---|---|---|---|
| 20 | 8.73 / 0.48 | 12.05 / 0.77 | 12.74 / 0.77 (2032) | 11.39 / 0.72 (724) | 13.15 / 0.81 (427) |
| 25 | 6.79 / 0.40 | 11.42 / 0.74 | 11.83 / 0.72 (2634) | 11.05 / 0.68 (836) | 12.26 / 0.78 (567) |
| 30 | 5.21 / 0.34 | 11.02 / 0.71 | 11.17 / 0.69 (3000) | 10.69 / 0.64 (894) | 11.64 / 0.75 (687) |
| 35 | 3.91 / 0.30 | 10.72 / 0.70 | 10.63 / 0.65 (3000) | 10.32 / 0.60 (936) | 11.19 / 0.73 (783) |
| 40 | 2.71 / 0.26 | 10.45 / 0.68 | 10.12 / 0.61 (3000) | 9.93 / 0.56 (953) | 10.81 / 0.70 (928) |

Figure 16

Comparison of the MSSIM results for the different images corrupted by Gaussian noise of standard deviation 20: (a) "Lena"; (b) "Barbara".

Figure 17

Comparison of the MSSIM results for the different images corrupted by Gaussian noise of standard deviation 25: (a) "Lena"; (b) "Barbara".

Figure 18

Comparison of the MSSIM results for the different images corrupted by Gaussian noise of standard deviation 35: (a) "Lena"; (b) "Barbara".

Figure 19

Comparison of the MSSIM results for the different images corrupted by Gaussian noise of standard deviation 40: (a) "Lena"; (b) "Barbara".

5. Conclusion

In this paper, a curvelet-shrinkage-fidelity-based total variation regularization is proposed for discontinuity-preserving denoising. We propose a new gradient fidelity term, designed to force the gradients of the desired image to be close to the gradients of the curvelet approximation. To improve the preservation of edge and texture details, spatially varying parameters are adaptively estimated during the iterations of the gradient descent flow algorithm. We carried out numerous numerical experiments to compare the performance of the various algorithms. The SNR and MSSIM improvements demonstrate that our proposed method outperforms the TV algorithm, the curvelet shrinkage method, and the TVGF algorithm. Our future work will extend this new gradient fidelity term to other PDE-based methods.

Declarations

Acknowledgments

The authors would like to express their gratitude to the anonymous referees for their helpful and constructive suggestions. The authors also gratefully acknowledge the financial support of the National Natural Science Foundation of China (60802039), the National 863 High Technology Development Project (2007AA12Z142), the Specialized Research Fund for the Doctoral Program of Higher Education (20070288050), and NUST Research Funding under Grant no. 2010ZDJH07.

Authors’ Affiliations

(1)
School of Computer Science and Technology, Nanjing University of Science and Technology, Nanjing, China
(2)
Department of Electrical, Computer, and Systems Engineering, Rensselaer Polytechnic Institute, Troy, USA
(3)
Department of Information and Computing Science, Guangxi University of Technology, Liuzhou, China

References

  1. Buades A, Coll B, Morel JM: A review of image denoising algorithms, with a new one. Multiscale Modeling and Simulation 2005, 4(2):490-530. doi:10.1137/040616024
  2. Do MN, Vetterli M: The contourlet transform: an efficient directional multiresolution image representation. IEEE Transactions on Image Processing 2005, 14(12):2091-2106.
  3. Candès E, Donoho D: Curvelets-a surprisingly effective nonadaptive representation for objects with edges. In Curves and Surface Fitting: Saint-Malo 1999. Edited by: Cohen A, Rabut C, Schumaker L. Vanderbilt University Press, Nashville, Tenn, USA; 2000:105-120.
  4. Candès EJ, Donoho DL: New tight frames of curvelets and optimal representations of objects with piecewise C2 singularities. Communications on Pure and Applied Mathematics 2004, 57(2):219-266. doi:10.1002/cpa.10116
  5. Candès E, Demanet L, Donoho D, Ying L: Fast discrete curvelet transforms. Multiscale Modeling and Simulation 2006, 5(3):861-899. doi:10.1137/05064182X
  6. Ma J, Plonka G: Combined curvelet shrinkage and nonlinear anisotropic diffusion. IEEE Transactions on Image Processing 2007, 16(9):2198-2206.
  7. Zhu L, Xia D: Staircase effect alleviation by coupling gradient fidelity term. Image and Vision Computing 2008, 26(8):1163-1170. doi:10.1016/j.imavis.2008.01.008
  8. Rudin LI, Osher S, Fatemi E: Nonlinear total variation based noise removal algorithms. Physica D 1992, 60(1-4):259-268.
  9. Perona P, Malik J: Scale-space and edge detection using anisotropic diffusion. IEEE Transactions on Pattern Analysis and Machine Intelligence 1990, 12(7):629-639. doi:10.1109/34.56205
  10. Alvarez L, Lions P, Morel J: Image selective smoothing and edge detection by nonlinear diffusion. II. SIAM Journal on Numerical Analysis 1992, 29(3):845-866. doi:10.1137/0729052
  11. Chan T, Esedoglu S, Park F, Yip A: Recent developments in total variation image restoration. In Handbook of Mathematical Models in Computer Vision. Edited by: Paragios N, Chen Y, Faugeras O. Springer, Berlin, Germany; 2004.
  12. Lysaker M, Tai X-C: Iterative image restoration combining total variation minimization and a second-order functional. International Journal of Computer Vision 2006, 66(1):5-18. doi:10.1007/s11263-005-3219-7
  13. Li F, Shen C, Fan J, Shen C: Image restoration combining a total variational filter and a fourth-order filter. Journal of Visual Communication and Image Representation 2007, 18(4):322-330. doi:10.1016/j.jvcir.2007.04.005
  14. Gilboa G, Zeevi YY, Sochen N: Texture preserving variational denoising using an adaptive fidelity term. Proceedings of the 2nd IEEE Workshop on Variational, Geometric and Level Set Methods in Computer Vision (VLSM '03), 2003, Nice, France, 137-144.
  15. Ma J, Fenn M: Combined complex ridgelet shrinkage and total variation minimization. SIAM Journal of Scientific Computing 2006, 28(3):984-1000. doi:10.1137/05062737X
  16. Plonka G, Steidl G: A multiscale wavelet-inspired scheme for nonlinear diffusion. International Journal of Wavelets, Multiresolution and Information Processing 2006, 4(1):1-21. doi:10.1142/S0219691306001063
  17. Ma J, Fenn M: Combined complex ridgelet shrinkage and total variation minimization. SIAM Journal of Scientific Computing 2006, 28(3):984-1000. doi:10.1137/05062737X
  18. Ying L, Demanet L, Candès E: 3D discrete curvelet transform. In Wavelets XI, August 2005, San Diego, Calif, USA, Proceedings of SPIE, 1-11.
  19. Demanet L, Ying L: Curvelets and wave atoms for mirror-extended images. In Wavelets XII, August 2007, San Diego, Calif, USA, Proceedings of SPIE.
  20. Achim A, Tsakalides P, Bezerianos A: SAR image denoising via Bayesian wavelet shrinkage based on heavy-tailed modeling. IEEE Transactions on Geoscience and Remote Sensing 2003, 41(8):1773-1784. doi:10.1109/TGRS.2003.813488
  21. Durand S, Nikolova M: Denoising of frame coefficients using data-fidelity term and edge-preserving regularization. Multiscale Modeling & Simulation 2007, 6(2):547-576. doi:10.1137/06065828X
  22. Donoho DL, Johnstone IM: Ideal spatial adaptation by wavelet shrinkage. Biometrika 1994, 81(3):425-455. doi:10.1093/biomet/81.3.425
  23. Kornprobst P, Deriche R, Aubert G: Image sequence analysis via partial differential equations. Journal of Mathematical Imaging and Vision 1999, 11(1):5-26. doi:10.1023/A:1008318126505
  24. Xiao L, Huang L-L, Wei Z-H: A weberized total variation regularization-based image multiplicative noise removal algorithm. EURASIP Journal on Advances in Signal Processing 2010, 15 pages.
  25. Gilbarg D, Trudinger NS: Elliptic Partial Differential Equations of Second Order. Springer, Berlin, Germany; 2003.
  26. Starck J-L, Candès EJ, Donoho DL: The curvelet transform for image denoising. IEEE Transactions on Image Processing 2002, 11(6):670-684. doi:10.1109/TIP.2002.1014998
  27. Wang Z, Bovik AC, Sheikh HR, Simoncelli EP: Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing 2004, 13(4):600-612. doi:10.1109/TIP.2003.819861
  28. Dong FF, Liu Z: A new gradient fidelity term for avoiding staircasing effect. Journal of Computer Science and Technology 2009, 24(6):1162-1170. doi:10.1007/s11390-009-9289-1

Copyright

© Liang Xiao et al. 2010

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.