# A parameter-adaptive iterative regularization model for image denoising

## Abstract

In this article, an iterative regularization model (IRM) with an adaptive scale parameter is addressed. IRM has attracted considerable attention, but with a constant scale parameter the convergence speed is very sensitive to its choice, so it becomes important to adapt the scale parameter during the iteration. We therefore introduce a novel IRM with a varying scale parameter, motivated by the fact that a smaller scale parameter increases the number of IRM iterations. A method to estimate the scale parameter from the trend of its probe sequence is proposed, and a theoretical justification for this approach is given. Numerical experiments show that the proposed method with varying scale parameter can efficiently remove noise, reduce the number of iterations, and preserve image details well.

## Introduction

During the last decade, despite the sophistication of recently proposed methods, many algorithms have not yet attained a desirable level of applicability for image denoising, which remains a challenge at the intersection of functional analysis and statistics. The relation between variational regularization methods and wavelet shrinkage has become one of the most active areas of research.

In this article, we are motivated by the following classical problem of denoising an image degraded by additive white Gaussian noise. Given a noisy image $f(x,y):\Omega\to\mathbb{R}$, where Ω is a bounded open subset of $\mathbb{R}^2$, we want to obtain the decomposition

$f\left(x,y\right)=g\left(x,y\right)+n\left(x,y\right)$
(1)

where g(x, y) is the true image and n(x, y) is the noise, with (x, y) ∈ Ω and n(x, y) ~ N(0, σ²).

The most classical variational model is

$u=\arg\min_{u\in \mathrm{BV}(\Omega)}\left\{J(u)+\lambda \|f-u\|_{2}^{2}\right\}$
(2)

or its corresponding constrained version

$u=\arg\min_{u\in \mathrm{BV}(\Omega)}J(u)\quad \text{s.t.}\quad \|f-u\|_{2}^{2}={\sigma}^{2}$
(2a)

for some scale parameter λ > 0, where BV(Ω) denotes the space of functions of bounded variation on Ω and $\|\cdot\|_2$ is the L² norm. J(u) is the regularization term and $\|f-u\|_2^2$ is the fitting term. λ is chosen to balance the regularity (first term) against the deviation (second term) from the noisy image f(x, y), and depends on the noise norm σ. Consequently, many researchers have concentrated on the regularization term J(u). The total variation model of Rudin–Osher–Fatemi (ROF) is considered one of the best denoising models, but it raises two serious issues. First, it is complicated to compute the solutions of the optimization problems induced by the variational method. Second, it is difficult to extract textures from images using the ROF model. For the first issue, Goldstein and Osher recently introduced the split Bregman method for L1-regularized problems; Bregman methods give rise to very efficient algorithms for solving the ROF model. Meyer did some very interesting analysis by characterizing textures, which he defined as “highly oscillatory patterns in image processing”, as elements of the dual space of BV(Ω). An iterative regularization model (IRM), which replaces the regularization term by a generalized Bregman distance [14, 15], was proposed. This model is formulated as

$u_{k+1}=\arg\min_{u\in \mathrm{BV}(\Omega)}\left\{J(u)+\frac{\lambda}{2}\|f+v_{k}-u\|_{2}^{2}\right\}$
(3a)
${v}_{k+1}={v}_{k}+f-{u}_{k+1}$
(3b)

A large λ corresponds to very little noise removal: u(x, y) quickly approaches f(x, y) and the denoising is ineffective. A small λ yields an over-smoothed u(x, y) and increases the number of iterations. Despite the sophistication of recently proposed methods, most algorithms have not yet attained a desirable level of applicability.

In this article, we propose a new denoising method with a varying scale parameter, where the regularization term is $J(u)=\iint_{\Omega}\left|\nabla u\right|dxdy$. We deduce a method to obtain the scale parameter from the iterative regularization. Finally, some numerical examples show that our method improves the denoising quality and reduces the optimal number of iterations.

### IRM

IRM makes use of signals in the removed residual part of these denoising algorithms [19, 20]. For p ∈ ∂J(v), we define the non-negative quantity

$D^{p}(u,v)\equiv D_{J}^{p}(u,v)\equiv J(u)-J(v)-\langle p,\,u-v\rangle$
(4)
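As a concrete illustration, the Bregman distance of Equation (4) can be evaluated directly once a (sub)gradient of J is available. The following sketch is our own illustration, not code from the paper; it uses the smooth convex functional J(u) = ½‖u‖², for which the Bregman distance reduces to ½‖u − v‖².

```python
import numpy as np

def bregman_distance(J, grad_J, u, v):
    """Generalized Bregman distance D_J^p(u, v) = J(u) - J(v) - <p, u - v>,
    with p = grad_J(v) a (sub)gradient of J at v, as in Equation (4)."""
    p = grad_J(v)
    return J(u) - J(v) - float(np.vdot(p, u - v))

# Smooth convex example J(u) = 0.5*||u||^2, whose Bregman distance
# reduces to 0.5*||u - v||^2.
J = lambda u: 0.5 * float(np.sum(u ** 2))
grad_J = lambda u: u
u = np.array([1.0, 2.0])
v = np.array([0.0, 1.0])
d = bregman_distance(J, grad_J, u, v)  # 0.5*||u - v||^2 = 1.0
```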

Then, the equivalent representation of Equation (3) is

$u_{k+1}=\arg\min_{u\in \mathrm{BV}(\Omega)}\left\{D^{p_{k}}(u,u_{k})+\frac{\lambda}{2}\|f-u\|_{2}^{2}\right\}$
(5a)
$p_{k+1}=p_{k}+\lambda\left(f-u_{k+1}\right)$
(5b)

where u_0 = 0 and $D_{J}^{p_{k}}(u,u_{k})$ is the Bregman distance between u and u_k. As the number of iterations k increases, u_k approaches the noisy image f. The scale parameter λ tunes the weight between the regularization and fidelity terms. The iterated refinement method yields a well-defined sequence of minimizers {u_k} satisfying $\|u_{k}-f\|_{2}^{2}\le \|u_{k-1}-f\|_{2}^{2}$, and if f ∈ BV(Ω), then $\|u_{k}-f\|_{2}^{2}\le \frac{J(f)}{k}$, i.e., u_k converges monotonically to f in L²(Ω) with rate $\frac{1}{\sqrt{k}}$. For g ∈ BV(Ω) and γ > 1, we have D(g, u_k) ≤ D(g, u_{k−1}) as long as $\|u_{k}-f\|_{2}\ge \gamma \|g-f\|_{2}$.

Thus, the distance between a restored image u_k and a possible exact image g decreases as long as the L²-distance between f and u_k is larger than the L²-distance between f and g. This result can be used to construct a stopping rule for our iterative procedure.
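The iterated refinement (3a)–(3b) together with this stopping rule can be sketched as follows. This is a minimal 1D illustration, not the paper's solver: a simple binomial smoother stands in for the TV-regularized sub-problem (3a), and the discrepancy principle $\|f-u_k\|_2 \le \sigma\sqrt{n}$ plays the role of the stopping rule.

```python
import numpy as np

def smooth(x):
    """Stand-in for the variational sub-problem (3a): a binomial smoother
    replaces the TV-regularized minimization (an illustrative assumption,
    not the paper's solver)."""
    return np.convolve(x, [0.25, 0.5, 0.25], mode="same")

def iterated_refinement(f, sigma, max_iter=50):
    """Bregman-style iterated refinement, Equations (3a)-(3b): the residual
    is fed back into the data term; the discrepancy principle
    ||f - u_k||_2 <= sigma*sqrt(n) serves as the stopping rule."""
    v = np.zeros_like(f)
    for k in range(1, max_iter + 1):
        u = smooth(f + v)      # u_{k+1}: denoise the augmented data
        v = v + f - u          # v_{k+1} = v_k + f - u_{k+1}
        if np.linalg.norm(f - u) <= sigma * np.sqrt(f.size):
            break
    return u, k

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 512)
sigma = 0.1
f = np.sin(2.0 * np.pi * 3.0 * t) + sigma * rng.standard_normal(t.size)
u, k = iterated_refinement(f, sigma)
```

Because the smoother is a contraction on the residual, $\|f-u_k\|_2$ decreases monotonically, mirroring the theorem quoted above.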

It should be stressed that the Bregman-based methodology has developed rapidly in the last few years due to the tireless efforts of Osher and collaborators [18, 21–23]. A key result among these is that, with suitable initialization, the Bregman method is equivalent to the augmented Lagrangian algorithm [7, 22]. Furthermore, many efficient algorithms have been proposed to enable fast implementation [21, 24, 25].

### IRM with varying scale parameter

We know that for IRM, the larger the scale parameter λ, the fewer iterations are needed to reach the stopping criterion; but u then quickly approaches the noisy image f, and the denoising quality is not ideal. When λ is smaller, the number of iterations grows. Therefore, it is important to choose an optimal value of λ.

### Varying scale parameter

To that end, let $J(u)=\iint_{\Omega}\left|\nabla u\right|dxdy$. Setting the first variation of Equation (3a) with respect to u to zero, we have

$\nabla ·\left(\frac{1}{\left|\nabla u\right|}\nabla u\right)+\lambda \left(f+{v}_{k}-u\right)=0$
(6)

Multiplying Equation (6) by $\nabla \cdot\left(\frac{1}{\left|\nabla u\right|}\nabla u\right)$ and integrating over Ω, we get

$\begin{array}{c}\underset{\mathrm{\Omega }}{\iint }\nabla ·\left(\frac{1}{\left|\nabla u\right|}\nabla u\right)\nabla ·\left(\frac{1}{\left|\nabla u\right|}\nabla u\right)dxdy\hfill \\ \phantom{\rule{1em}{0ex}}+\lambda \underset{\mathrm{\Omega }}{\iint }\nabla ·\left(\frac{1}{\left|\nabla u\right|}\nabla u\right)\left(f+{v}_{k}-u\right)dxdy=0\hfill \end{array}$
(7)

Then, we have the following equation

$\lambda =\frac{{\iint }_{\mathrm{\Omega }}\nabla ·\left(\frac{1}{\left|\nabla u\right|}\nabla u\right)\nabla ·\left(\frac{1}{\left|\nabla u\right|}\nabla u\right)dxdy}{{\iint }_{\mathrm{\Omega }}\nabla ·\left(\frac{1}{\left|\nabla u\right|}\nabla u\right)\left(u-f-{v}_{k}\right)dxdy}$
(8)

In the numerical implementation, we let λ_{k+1} denote the λ of Equation (8). Applying the proposed scale parameter to IRM with initial values u_0 = 0, v_0 = 0, we obtain a different scale parameter λ_{k+1} at each iteration. Equation (3) is then rewritten as

${\lambda }_{k+1}=\frac{{\iint }_{\mathrm{\Omega }}\nabla ·\left(\frac{1}{\left|\nabla {u}_{k}\right|}\nabla {u}_{k}\right)\nabla ·\left(\frac{1}{\left|\nabla {u}_{k}\right|}\nabla {u}_{k}\right)dxdy}{{\iint }_{\mathrm{\Omega }}\nabla ·\left(\frac{1}{\left|\nabla {u}_{k}\right|}\nabla {u}_{k}\right)\left({u}_{k}-f-{v}_{k}\right)dxdy}$
(9a)
$u_{k+1}=\arg\min_{u\in \mathrm{BV}(\Omega)}\left\{\iint_{\Omega}\left|\nabla u\right|dxdy+\frac{\lambda_{k+1}}{2}\|f+v_{k}-u\|_{2}^{2}\right\}$
(9b)
${v}_{k+1}={v}_{k}+\left(f-{u}_{k+1}\right)$
(9c)

This gives an adaptive value λ_{k+1}, which appears to converge as k → ∞. The theoretical justification for this approach is given in Appendices 1 and 2.
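A discrete version of the update (9a) can be sketched as follows. The curvature term is computed with forward differences and a backward divergence; the boundary handling and the ε-perturbation (introduced in the “Result and discussion” section) are our own illustrative choices, not the paper's implementation.

```python
import numpy as np

def curvature(u, eps=1e-12):
    """Discrete kappa = div(grad(u)/|grad(u)|): forward differences for the
    gradient, backward differences for the divergence. Boundary handling
    (replicated edges / wrap-around) is simplified for illustration."""
    ux = np.diff(u, axis=1, append=u[:, -1:])
    uy = np.diff(u, axis=0, append=u[-1:, :])
    mag = np.sqrt(ux ** 2 + uy ** 2 + eps ** 2)  # eps avoids division by zero
    px, py = ux / mag, uy / mag
    return (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))

def adaptive_lambda(u_k, f, v_k):
    """Equation (9a): lambda_{k+1} = (∬ kappa^2) / (∬ kappa*(u_k - f - v_k))."""
    kappa = curvature(u_k)
    return np.sum(kappa ** 2) / np.sum(kappa * (u_k - f - v_k))

# Made-up data: a random image f and a neighbor-averaged u_k.
rng = np.random.default_rng(1)
f = rng.standard_normal((32, 32))
u_k = (f + np.roll(f, 1, axis=0) + np.roll(f, -1, axis=0)
         + np.roll(f, 1, axis=1) + np.roll(f, -1, axis=1)) / 5.0
lam = adaptive_lambda(u_k, f, np.zeros_like(f))
```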

### Initial scale parameter

Through numerical experiments, we find that the denoising quality is not ideal with the initial values u_0 = 0, v_0 = 0. In particular, with these initial values Equation (9a) involves a division by zero.

If we arbitrarily choose an initial scale parameter value λ_0, we can calculate ${\stackrel{\sim }{\lambda }}_{k}$ by

${\stackrel{\sim }{\lambda }}_{k}=\frac{{\iint }_{\mathrm{\Omega }}\nabla ·\left(\frac{1}{\left|\nabla {u}_{k}\right|}\nabla {u}_{k}\right)\nabla ·\left(\frac{1}{\left|\nabla {u}_{k}\right|}\nabla {u}_{k}\right)dxdy}{{\iint }_{\mathrm{\Omega }}\nabla ·\left(\frac{1}{\left|\nabla {u}_{k}\right|}\nabla {u}_{k}\right)\left({u}_{k}-f-{v}_{k}\right)dxdy}$
(10)

after the iteration has run a few steps. We obtain the sequence {λ_k} and find that it has one of the following properties:

1. (a)

the sequence vector {λ k } is monotonically decreasing as the number of iteration k increases (see Figure 1c);

2. (b)

as the number of iteration k increases, the sequence vector {λ k } will at first decrease, and then increase closely to λ 0 (see Figure 1b);

3. (c)

the sequence vector {λ k } is monotonically increasing as the number of iteration k increases (see Figure 1a).

Therefore, we can obtain the initial value of varying scale parameter by the trend of the sequence vector {λ k } as follows:

1. (1)

If the sequence {λ_k} is monotonically decreasing at first as the number of iterations k increases, we consider that the randomly selected λ_0 satisfies property (a) or (b). Then the initial scale parameter λ_1 of our proposed method is the mean $\stackrel{-}{{\lambda}_{k}}$. Usually, k is equal to 3.

2. (2)

If the sequence {λ_k} is monotonically increasing as the number of iterations k increases, the randomly selected λ_0 satisfies property (c). Then the initial scale parameter of our proposed method is ${\lambda}_{1}=\stackrel{-}{{\lambda}_{1}}$ or λ_1 = λ_0/p with a constant p > 1. Usually, p = 2.

In Figure 1, using the ‘Barbara’ image as an example, the trends of the sequence {λ_k} are shown for scale parameters λ_0 = 8.33, 4.34, and 0.013, respectively.
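The trend-based rule above can be sketched as a small helper. The probe values below are made up for illustration (the actual probe values of Figure 1 are not reproduced here), and the tie-breaking for a non-monotone probe sequence is our own assumption.

```python
def initial_lambda(probe, lam0, p=2.0):
    """Trend rule of the 'Initial scale parameter' section: if the probe
    sequence {lambda_k} decreases (properties (a)/(b)), take the mean of the
    first values; otherwise (property (c)) shrink lambda_0 by the constant p.
    Tie-breaking for flat sequences is our own assumption."""
    decreasing = all(b < a for a, b in zip(probe, probe[1:]))
    if decreasing:
        return sum(probe) / len(probe)   # lambda_1 = mean of ~3 probe values
    return lam0 / p                      # lambda_1 = lambda_0 / p, usually p = 2

lam_a = initial_lambda([8.0, 6.0, 5.0], lam0=8.33)       # decreasing probe -> mean
lam_c = initial_lambda([0.013, 0.02, 0.03], lam0=0.013)  # increasing probe -> lam0/2
```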

### IRM framework with varying scale parameter

According to the above two sections, our general iterative regularization procedure can be formulated as follows.

$\left\{\begin{array}{l} u_{j+1}=\arg\min_{u\in \mathrm{BV}(\Omega)}\left\{J(u)+\frac{\lambda_{0}}{2}\|f+v_{j}-u\|_{2}^{2}\right\} \\ v_{j+1}=v_{j}+f-u_{j+1} \end{array}\right.$
(11)

Step 1: We randomly select λ0. Let u0 = 0, v0 = 0 and j = 0, 1, 2…

1. (1)

According to Equations (11) and (10), we calculate u_{j+1}, v_{j+1}, and λ_j over a few iterations j. Generally, j = 2.

2. (2)

We observe the trend of the sequence {λ_k}. According to the properties in the “Initial scale parameter” section, we obtain the initial value λ_1 of our proposed method.

Step 2: Let u0 = 0, v0 = 0 and k = 1, 2…

1. (1)

According to Equation (9) and the initial value λ_1, we calculate u_{k+1}, v_{k+1}, and λ_{k+1}.

2. (2)

We obtain the image u_k and stop the iteration when $\|f-u_{k}\|_{2}\le \sigma$ (the stopping criterion).
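The two-step framework can be sketched end to end. To keep the sketch self-contained and runnable, we substitute a 1D quadratic (Tikhonov) smoothing term with a closed-form Fourier solution for the TV sub-problem (9b), and the discrete Laplacian for the curvature term in (9a); the positivity guard on the denominator is our own stabilization, not part of the paper.

```python
import numpy as np

def quad_denoise(g, lam):
    """Closed-form surrogate for sub-problem (9b): Tikhonov smoothing
    argmin_u 0.5*||Du||^2 + (lam/2)*||g - u||^2, solved in Fourier space
    for a 1D periodic forward-difference operator D."""
    n = g.size
    d = 2.0 - 2.0 * np.cos(2.0 * np.pi * np.arange(n) / n)  # eigenvalues of D^T D
    return np.real(np.fft.ifft(lam * np.fft.fft(g) / (lam + d)))

def adaptive_irm(f, sigma, lam1, max_iter=100):
    """Step 2 of the framework: iterate (9b)-(9c), stop by the discrepancy
    rule, and update lambda via a surrogate of (9a) in which the discrete
    Laplacian replaces the curvature term."""
    v = np.zeros_like(f)
    lam = lam1
    for k in range(1, max_iter + 1):
        u = quad_denoise(f + v, lam)                      # (9b) surrogate
        v = v + f - u                                     # (9c)
        if np.linalg.norm(f - u) <= sigma * np.sqrt(f.size):
            break                                         # stopping criterion
        kappa = np.roll(u, -1) + np.roll(u, 1) - 2.0 * u  # Laplacian of u
        num, den = np.dot(kappa, kappa), np.dot(kappa, u - f - v)
        if den > 0.0 and num > 0.0:                       # our stabilization guard
            lam = num / den                               # (9a) surrogate
    return u, k

rng = np.random.default_rng(2)
t = np.arange(256) / 256.0
sigma = 0.2
f = np.sin(2.0 * np.pi * 4.0 * t) + sigma * rng.standard_normal(t.size)
u, k = adaptive_irm(f, sigma, lam1=0.5)
```

Step 1 (probing λ_0 and choosing λ_1 from the trend) would precede this call; here λ_1 is simply supplied as an argument.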

## Result and discussion

All solutions to the variational problem were originally obtained using gradient descent in a standard fashion; here we use Chambolle's algorithm. The only non-trivial difficulty arises when |∇u| ≈ 0. We fix this, as usual, by perturbing $J(u)=\iint_{\Omega}\left|\nabla u\right|dxdy$ to ${J}_{\epsilon}(u)=\iint_{\Omega}\sqrt{{\left|\nabla u\right|}^{2}+{\epsilon}^{2}}dxdy$, where ε is a small positive number. To some extent, this also reduces the ‘stair-casing’ effect of the method. In our calculations, we took ε = 10⁻¹²; the iteration step size τ for Chambolle's algorithm is 0.2. The performance of the denoising algorithms is measured in terms of the peak signal-to-noise ratio (PSNR), defined as follows

$\text{PSNR}=10*{\text{log}}_{10}\left(\frac{{255}^{2}}{\frac{1}{MN}{\sum }_{m=1}^{M}{\sum }_{n=1}^{N}{\left({f}_{\mathit{mn}}-{u}_{\mathit{mn}}\right)}^{2}}\right)$
(12)

where f is the original image and u is the denoised image.
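Equation (12) translates directly into code; the 2×2 image below is a made-up example where the noisy image is a constant offset of the original, so the MSE is exactly 100.

```python
import numpy as np

def psnr(f, u, peak=255.0):
    """Peak signal-to-noise ratio of Equation (12) for 8-bit images."""
    mse = np.mean((np.asarray(f, dtype=float) - np.asarray(u, dtype=float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

original = np.array([[0.0, 255.0], [255.0, 0.0]])
noisy = original + 10.0            # constant offset, so MSE = 100
value = psnr(original, noisy)      # 10*log10(255^2/100) ≈ 28.13 dB
```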

### Convergence analysis

Figures 2 and 3 show the results of IRM with constant and varying scale parameter on the ‘cameraman’ image with added Gaussian noise of σ = 20, for a smaller and a larger λ, respectively. In Figure 2, the first row shows that more iteration steps are required to reach the stopping criterion with the smaller constant scale parameter λ_0 = 0.67; the second row shows that our proposed method requires fewer iterations to obtain the optimal denoising result. We first used the constant scale parameter λ_0 = 0.67 for three iterations and obtained a decreasing sequence {λ_k}, shown in the first image of the third row. According to Equations (11) and (10), we obtained the initial value λ_1 = λ_2 = 5.74. The last two plots (i) and (j) show that ‖f − u_k‖_2 decreases monotonically with the iterations, first dropping below σ at the optimal iterates k = 12 and 2, respectively. This shows that our proposed method converges faster than IRM with a constant scale parameter. In Figure 3, with the large scale parameter λ_0 = 10, the original IRM converges to the noisy image f quickly, and only one iteration is needed to reach the stopping criterion; obviously, the denoising result is not satisfactory. However, a promising result is obtained by our varying scale parameter strategy, with the initial value λ_1 = λ_2 = 5.74 computed according to Equation (10).

### Preserved textures analysis

Figure 4 shows the denoising results for the ‘Barbara’ image with Gaussian white noise of σ = 25.5. The constant scale parameter λ for IRM is 1. Compared with constant-parameter IRM in Figure 4c–f, our proposed method preserves more textures, as shown in Figure 4g–j. The last two plots (k) and (l) show that ‖f − u_k‖_2 decreases monotonically with the iterations, first dropping below σ at the optimal iterates k = 12 and 2, respectively. This shows that our proposed method converges faster than IRM with a constant scale parameter.

### Denoising analysis for MRI coronal brain

The denoising results for an MRI coronal brain image with Gaussian white noise of σ = 53.83 are shown in Figure 5. The constant scale parameter λ for IRM is 1. Compared with constant-parameter IRM in Figure 5c–f, our proposed method preserves more textures, as shown in Figure 5g–j. The last two plots (k) and (l) show that ‖f − u_k‖_2 decreases monotonically with the iterations, first dropping below σ at the optimal iterates k = 13 and 3, respectively. This shows that our proposed method converges faster than IRM with a constant scale parameter and retains more texture details in the denoised image.

### Computational cost analysis

We compared computational times (see Tables 1 and 2) in MATLAB 7.1 on a PC with a 2.31 GHz AMD CPU and 3 GB RAM. It should be noted that the term “fast” is relative: computing (9a) is cheap compared with solving the sub-problem (9b), even though we use the efficient Chambolle algorithm for (9b) with the number of inner iterations set to 40. Taking the denoising of the ‘Barbara’ image (256 × 256) from the “Preserved textures analysis” section as an example, the average time to compute (9b) once is 1.281 s, whereas (9a) takes only 0.026 s. Moreover, since the conventional IRM requires 13 outer iterations while our adaptive IRM requires only 3, the total computational time is 13.708 s for the conventional IRM versus 3.136 s for our method. Our adaptive scheme is therefore substantially faster than the conventional one.

In addition, we compared our method with wavelet + Wiener filtering and with curvelet hard-threshold, soft-threshold, and block-threshold regularization algorithms. The denoising results for the ‘Lena’ image in Table 3 show that our algorithm achieves a higher PSNR than these traditional methods.

## Conclusion

A novel IRM with an adaptive scale parameter is proposed to reduce the sensitivity to a constant scale parameter, to optimize the scale parameter adaptively within IRM, and to attain a desirable level of applicability for image denoising. We adopt the total variation regularization term and deduce an equation for the adaptive scale parameter, motivated by the fact that a smaller scale parameter increases the number of IRM iterations. A rule for choosing the initial scale parameter from the trend of the probe sequence is then established, and a new scale parameter λ is obtained at each iteration. In general, the initial scale parameter can be found with just three probe iterations. Practical examples show that the proposed method reduces the number of iterations; thus a fast and robust method is obtained.

## Appendix 1

A constrained problem is defined as

$\left\{\begin{array}{c}\hfill \text{min}F\left(X\right)\hfill \\ \hfill s.t.{N}^{T}X-b\ge 0\hfill \end{array}$
(13)

We know that the active constraint satisfies N^T X − b = 0. The basic assumption is that X remains in the subspace tangent to the active constraints, i.e., X_{i+1} = X_i + αS, where S is the direction with the most negative directional derivative and α is the iterative step length; both X_i and X_{i+1} satisfy the constraint equation. Therefore, we obtain

${N}^{T}S=0$
(14)

If we want the steepest descent direction to satisfy Equation (14), we can pose the problem as

$\left\{\begin{array}{c}\hfill \text{min}{S}^{T}\nabla F\hfill \\ \hfill s.t.{N}^{T}S=0\phantom{\rule{0.25em}{0ex}}\text{and}\phantom{\rule{0.25em}{0ex}}{S}^{T}S=1\hfill \end{array}$
(15)

The Lagrangian of problem (15) is

$L\left(S,\lambda ,\mu \right)={S}^{T}\nabla F-{S}^{T}N\lambda -\mu \left({S}^{T}S-1\right)$
(16)

The derivative of L with respect to S is

$\frac{\partial L}{\partial S}=\nabla F-N\lambda -2\mu S=0$
(17)

Recall that NTS = 0 in Equation (14) and multiplying Equation (17) by NT, we get

${N}^{T}\nabla F-{N}^{T}N\lambda =0$
(18)

Therefore, we get the value

$\lambda =\frac{{N}^{T}\nabla F}{{N}^{T}N}$
(19)

So, the proposition holds.
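For a single active-constraint direction N, Equation (19) is a scalar least-squares multiplier: subtracting Nλ from the gradient removes its component along N, consistent with Equation (18). A small numerical check, with made-up vectors:

```python
import numpy as np

def constraint_multiplier(grad_F, N):
    """Equation (19): lambda = (N^T grad_F) / (N^T N) for one active
    constraint direction N."""
    return float(np.dot(N, grad_F) / np.dot(N, N))

# Made-up vectors for a quick check.
grad_F = np.array([3.0, 1.0])
N = np.array([1.0, 0.0])
lam = constraint_multiplier(grad_F, N)   # 3.0
projected = grad_F - N * lam             # component along N removed, cf. Eq. (18)
```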

## Appendix 2

$u_{k+1}=\arg\min_{u\in \mathrm{BV}(\Omega)}\left\{\iint_{\Omega}\left|\nabla u\right|dxdy+\frac{\lambda}{2}\|f+v_{k}-u\|_{2}^{2}\right\}$
(20)

It is equivalent to

$u_{k+1}=\arg\min_{u\in \mathrm{BV}(\Omega)}\left\{\lambda_{1}\iint_{\Omega}\left|\nabla u\right|dxdy+\frac{1}{2}\|f+v_{k}-u\|_{2}^{2}\right\}$
(21)

where ${\lambda }_{1}=\frac{1}{\lambda }$. Then, Equation (21) is rewritten as

$\left\{\begin{array}{l} \min_{u\in \mathrm{BV}(\Omega)}\frac{1}{2}\|f+v_{k}-u\|_{2}^{2} \\ \text{s.t.}\;\iint_{\Omega}\left|\nabla u\right|dxdy\ge 0 \end{array}\right.$
(22)

Since $\iint_{\Omega}\left|\nabla u\right|dxdy=\|\nabla u\|_{1}\approx \sqrt{\|\nabla u\|_{1}^{2}+\epsilon}=\left\langle \frac{{\nabla}^{*}\nabla u}{\sqrt{\|\nabla u\|_{1}^{2}+\epsilon}},\,u\right\rangle$, we let $\frac{{\nabla}^{*}\nabla u}{\sqrt{\|\nabla u\|_{1}^{2}+\epsilon}}$ be approximated by $\frac{{\nabla}^{*}\nabla u_{k}}{\sqrt{\|\nabla u_{k}\|_{1}^{2}+\epsilon}}$, and set $X=u$, $b=0$, $N=\frac{{\nabla}^{*}\nabla u_{k}}{\sqrt{\|\nabla u_{k}\|_{1}^{2}+\epsilon}}$ and $F(u)=\frac{1}{2}\|f+v_{k}-u\|_{2}^{2}$. We then have $\iint_{\Omega}\left|\nabla u\right|dxdy=N^{T}u\ge 0$, so according to Appendix 1 we obtain

${\lambda }_{1}=\frac{{\iint }_{\mathrm{\Omega }}\nabla \cdot\left(\frac{1}{\left|\nabla {u}_{k}\right|}\nabla {u}_{k}\right)\left({u}_{k}-f-{v}_{k}\right)dxdy}{{\iint }_{\mathrm{\Omega }}\nabla \cdot\left(\frac{1}{\left|\nabla {u}_{k}\right|}\nabla {u}_{k}\right)\nabla \cdot\left(\frac{1}{\left|\nabla {u}_{k}\right|}\nabla {u}_{k}\right)dxdy}$
(23)

Therefore, we obtain the parameter

$\lambda =\frac{1}{{\lambda }_{1}}=\frac{{\iint }_{\mathrm{\Omega }}\nabla \cdot\left(\frac{1}{\left|\nabla {u}_{k}\right|}\nabla {u}_{k}\right)\nabla \cdot\left(\frac{1}{\left|\nabla {u}_{k}\right|}\nabla {u}_{k}\right)dxdy}{{\iint }_{\mathrm{\Omega }}\nabla \cdot\left(\frac{1}{\left|\nabla {u}_{k}\right|}\nabla {u}_{k}\right)\left({u}_{k}-f-{v}_{k}\right)dxdy}$
(24)

So, the proposition holds.

## References

1. Rudin L, Osher S, Fatemi E: Nonlinear total variation based noise removal algorithms. Physica D 1992, 60: 259-268. 10.1016/0167-2789(92)90242-F

2. Devore R, Jawerth B, Lucier B: Image compression through wavelet transform coding. IEEE Trans. Inf. Theory 1992, 38: 719-746. 10.1109/18.119733

3. Donoho DL: De-noising by soft-threshold. IEEE Trans. Inf. Theory 1995, 41(3):613-627. 10.1109/18.382009

4. Dabov K, Foi A, Katkovnik V, Egiazarian K: Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 2007, 16: 2080-2095.

5. Yan R, Shao L, Cvetkovic SD, Klijn J: Improved nonlocal means based on pre-classification and invariant block matching. IEEE/OSA J. Disp. Technol. 2012, 8(4):212-218.

6. Osher S, Solè A, Vese L: Image decomposition and restoration using total variation minimization and the H-1 norm. SIAM J. Multiscale Model. Simulat. 2003, 1(3):349-370. 10.1137/S1540345902416247

7. Burger M, Gilboa G, Osher S, Xu JJ: Nonlinear inverse scale space methods. Commun. Math. Sci. 2006, 4(1):175-208.

8. Chambolle A, DeVore R, Lee NY, Lucier B: Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 1998, 7(3):319-335.

9. Chambolle A, Lucier BJ: Interpreting translation-invariant wavelet shrinkage as a new image smoothing scale space. IEEE Trans. Image Process. 2001, 10: 993-1000. 10.1109/83.931093

10. Steidl G, Weickert J, Brox T, MraZek P, Welk M: On the equivalence of soft wavelet shrinkage, total variation diffusion, total variation regularization, and SIDEs. SIAM J. Numer. Anal. 2004, 42(2):686-713. 10.1137/S0036142903422429

11. Xu J, Osher S: Iterative regularization and nonlinear inverse scale space applied to wavelet-based denoising. IEEE Trans. Image Process. 2007, 16(2):534-544.

12. Meyer Y: Oscillating Patterns in Image Processing and Nonlinear Evolution Equations. American Mathematical Society, Boston; 2002.

13. Osher S, Burger M, Goldfarb D, Xu J, Yin W: An iterative regularization method for total variation based image restoration. Multiscale Model. Simulat. 2005, 4: 460-489. 10.1137/040605412

14. Bregman L: The relaxation method of finding the common points of convex sets and its application to the solution of problems in convex programming. U.S.S.R. Comput. Math. Math. Phys. 1967, 7: 200-217.

15. Hao BB, Li M, Feng XC: Wavelet iterative regularization for image restoration with varying scale parameter. Signal Process. Image Commun. 2008, 23(6):433-441. 10.1016/j.image.2008.04.006

16. Liu B, King K, Steckner M, Xie J, Sheng J, Ying L: Regularized sensitivity encoding (SENSE) reconstruction using Bregman iterations. Magn. Reson. Med. 2009, 61(1):145-152. 10.1002/mrm.21799

17. Chambolle A: An algorithm for total variation minimization and applications. J. Math. Imag. Vis. 2004, 20: 89-97.

18. Vese L, Osher S: Modeling textures with total variation minimization and oscillating patterns in image processing. J. Sci. Comput. 2003, 19: 553-572. 10.1023/A:1025384832106

19. Weissman T, Ordentlich E, Seroussi G, Verdu S, Weinberger M: Universal discrete denoising: known channel. IEEE Trans. Inf. Theory 2005, 51: 5-28.

20. Perona P, Malik J: Scale space and edge detection using anisotropic diffusion. IEEE Trans. Pattern Anal. Mach. Intell. 1990, 12: 629-639. 10.1109/34.56205

21. Goldstein T, Osher S: The split Bregman method for L1-regularized problems. SIAM J. Imag. Sci. 2009, 2(2):323-343. 10.1137/080725891

22. Yin W, Osher S, Goldfarb D, Darbon J: Bregman iterative algorithms for l1-minimization with applications to compressed sensing. SIAM J. Imag. Sci. 2009, 1: 142-168.

23. He L, Chang TC, Osher S, Fang T, Speier P: MR image reconstruction by using the iterative refinement method and nonlinear inverse scale space methods. 2006. Technical Report CAM Report 06–35, UCLA

24. Esser E: Applications of Lagrangian-based alternating direction methods and connections to split Bregman. 2009. CAM Report 09–31, UCLA

25. Wang Y, Yang J, Yin W, Zhang Y: A new alternating minimization algorithm for total variation image reconstruction. SIAM J. Imag. Sci. 2008, 1(3):248-272. 10.1137/080724265

26. Ma J: Compressed sensing by inverse scale space and curvelet thresholding. Appl. Math. Comput. 2008, 206(2):980-988. 10.1016/j.amc.2008.10.011

27. Ma J, Le DF: Deblurring from highly incomplete measurements for remote sensing. IEEE Trans. Geosci. Remote Sensing (GeoRS) 2009, 3(47):792-802.

28. Gilboa G, Sochen N, Zeevi YY: Texture preserving variational denoising using an adaptive fidelity term. Proc VLSM Nice, France 1 2003, 137-144.

29. Shao L, Zhang H, de Haan G: An overview and performance evaluation of classification-based least squares trained filters. IEEE Trans. Image Process. 2008, 17: 1772-178.

## Acknowledgment

This study was supported by the National Natural Science Foundation of China under the Grant nos. 60702069, 30300443 and 61105035; the Research Project of Department of Education of Zhejiang Province, China under the Grant no. 20060601; The Science Foundation of Zhejiang Sci-Tech University of China under the Grant no. 0604039-Y; the Natural Science Foundation of Zhejiang Province of China under the Grant no. Y1080851 and Y12H290045; the Research Project of 2011 overseas students of Zhejiang Province of China under the Grant no. 1104707-M; Qianjiang talents project of Science and Technology Department of Zhejiang province of China under the grant no. 2012R10054.

## Author information

### Corresponding author

Correspondence to Wenshu Li.

### Competing interests

The authors declare that they have no competing interests.

Li, W., Zhao, C., Liu, Q. et al. A parameter-adaptive iterative regularization model for image denoising. EURASIP J. Adv. Signal Process. 2012, 222 (2012). https://doi.org/10.1186/1687-6180-2012-222


### Keywords

• Iterative regularization
• Total variation
• Variational methods
• Image denoising 