Open Access

A parameter-adaptive iterative regularization model for image denoising

EURASIP Journal on Advances in Signal Processing 2012, 2012:222

https://doi.org/10.1186/1687-6180-2012-222

Received: 23 April 2012

Accepted: 11 September 2012

Published: 16 October 2012

Abstract

In this article, an iterative regularization model (IRM) with an adaptive parameter is addressed. IRM has attracted considerable attention, but with a constant scale parameter the convergence speed is very sensitive to the choice of that parameter, so it becomes important to optimize the scale parameter adaptively. We therefore introduce a novel IRM with a varying scale parameter, motivated by the fact that a smaller scale parameter increases the number of IRM iterations. A method to estimate the scale parameter from the trend of its iterates is proposed, and a theoretical justification for this approach is derived. Numerical experiments show that the proposed method with varying scale parameter efficiently removes noise, reduces the number of iterations, and preserves image details well.

Keywords

Iterative regularization; Total variation; Variational methods; Image denoising

Introduction

During the last decade, in spite of the sophistication of recently proposed methods, many algorithms have not yet attained a desirable level of applicability for image denoising, which remains a challenge at the crossing of functional analysis and statistics. The relations between variational regularization methods and wavelet shrinkage have become one of the most active areas of research [1-5].

In this article, we are motivated by the following classical problem of denoising an image degraded by additive white Gaussian noise. Given a noisy image f(x, y): Ω → ℝ, where Ω is a bounded open subset of ℝ², we want to obtain the decomposition
$$ f(x, y) = g(x, y) + n(x, y) $$
(1)

where g(x, y) is the true image and n(x, y) is the noise, with (x, y) ∈ Ω and n(x, y) ~ N(0, σ²).

The most classical variational model is
$$ u = \arg\min_{u \in BV(\Omega)} \; J(u) + \lambda \|f - u\|_2^2 $$
(2)
or its corresponding constrained version
$$ u = \arg\min_{u \in BV(\Omega)} \; J(u) \quad \text{s.t.} \quad \|f - u\|_2^2 = \sigma^2 $$
(2a)
for some scale parameter λ > 0, where BV(Ω) denotes the space of functions of bounded variation on Ω and $\|\cdot\|_2$ is the L² norm. J(u) is the regularization term and $\|f - u\|_2^2$ is the fidelity term. λ is chosen to balance the regularity (first term) against the deviation (second term) from the noisy image f(x, y), and it depends on the noise norm σ. Consequently, many researchers have concentrated on the regularization term J(u). The total variation model of Rudin-Osher-Fatemi (ROF) is considered one of the better denoising models, but it raises two serious issues [6-11]. First, it is complicated to compute solutions of the optimization problems induced by the variational method. Second, it is difficult to extract textures from images using the ROF model. For the first issue, Goldstein and Osher recently introduced the split Bregman method for L1-regularized problems; Bregman methods give rise to very efficient algorithms for solving the ROF model. Meyer [12] did some very interesting analysis by characterizing textures, which he defines as "highly oscillatory patterns in image processing", as elements of the dual space of BV(Ω). An iterative regularization model (IRM) [13], which replaces the regularization term by a generalized Bregman distance [14, 15], was proposed. This model is formulated as
$$ u^{k+1} = \arg\min_{u \in BV(\Omega)} \; J(u) + \frac{\lambda}{2} \|f + v^k - u\|_2^2 $$
(3a)
$$ v^{k+1} = v^k + f - u^{k+1} $$
(3b)

A large λ corresponds to very little noise removal: u(x, y) quickly approaches f(x, y) and the denoising is ineffective. A small λ yields an over-smoothed u(x, y) and increases the number of iterations. In spite of the sophistication of recently proposed methods, most algorithms have not yet attained a desirable level of applicability [15-18].

In this article, we propose a new denoising method with a varying scale parameter, where the regularization term is $J(u) = \int_\Omega |\nabla u| \, dx \, dy$. We derive a method to obtain the scale parameter from the iterative regularization itself. Finally, numerical examples show that our method improves the quality of the denoising and reduces the optimal number of iterations.

The remainder of this article is organized as follows. In the "IRM" section, we review IRM and some of its properties. The proposed method is introduced in the "IRM with varying scale parameter" section; the experimental results of our method are given in the "Result and discussion" section. The article is summarized in the "Conclusion" section.

IRM

IRM makes use of signals remaining in the residual removed by these denoising algorithms [19, 20]. For p ∈ ∂J(v), we define the non-negative quantity
$$ D^p(u, v) \equiv D_J^p(u, v) \equiv J(u) - J(v) - \langle p, u - v \rangle $$
(4)
Then, the equivalent representation of Equation (3) is
$$ u^{k+1} = \arg\min_{u \in BV(\Omega)} \left\{ D^{p^k}(u, u^k) + \frac{\lambda}{2} \|f - u\|_2^2 \right\} $$
(5a)
$$ p^{k+1} = p^k + f - u^{k+1} $$
(5b)

where u⁰ = 0 and $D_J^{p^k}(u, u^k)$ is the Bregman distance between u and uᵏ. As the iteration number k increases, uᵏ approaches the noisy image f. The scale parameter λ tunes the weight between the regularization and fidelity terms. The iterative refinement method yields a well-defined sequence of minimizers {uᵏ} satisfying $\|u^k - f\|_2^2 \le \|u^{k-1} - f\|_2^2$, and if f ∈ BV(Ω), then $\|u^k - f\|_2^2 \le J(f)/k$, i.e., uᵏ converges monotonically to f in L²(Ω) at a rate of $1/\sqrt{k}$. For g ∈ BV(Ω) and γ > 1, we have $D(g, u^k) \le D(g, u^{k-1})$ as long as $\|u^k - f\|_2 \ge \gamma \|g - f\|_2$.

Thus, the distance between a restored image uᵏ and a possible exact image g decreases as long as the L²-distance between f and uᵏ is larger than the L²-distance between f and g. This result can be used to construct a stopping rule for the iterative procedure [13].
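The Bregman distance of Equation (4) can be illustrated with a small self-contained sketch (a toy example of our own, not taken from the paper): for the smooth functional J(u) = ½‖u‖²₂, the only subgradient at v is p = v, and the distance reduces to ½‖u − v‖²₂.

```python
import numpy as np

def bregman_distance(J, grad_J, u, v):
    """D_J^p(u, v) = J(u) - J(v) - <p, u - v>, with p = grad J(v)  (Eq. 4)."""
    p = grad_J(v)
    return J(u) - J(v) - np.dot(p, u - v)

# Toy functional: J(u) = 0.5 ||u||^2, so p = v and D(u, v) = 0.5 ||u - v||^2.
J = lambda w: 0.5 * np.dot(w, w)
grad_J = lambda w: w

u = np.array([1.0, 2.0, 3.0])
v = np.array([0.5, 1.5, 2.0])
d = bregman_distance(J, grad_J, u, v)
```

For J(u) = TV(u), p is a subgradient rather than a gradient and D is no longer symmetric, but the same three-term formula applies.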

It should be stressed that the Bregman-based methodology has developed rapidly in the last few years through the efforts of Osher and collaborators [18, 21-23]. A key breakthrough is that, with adequate initialization, the Bregman method is equivalent to the augmented Lagrangian algorithm [7, 22]. Furthermore, many efficient algorithms have been proposed to enable fast implementation [21, 24, 25].

IRM with varying scale parameter

For IRM, the larger the scale parameter λ, the fewer iterations are needed to reach the stopping criterion; but u then approaches the noisy image f quickly and the denoising quality is not ideal. When λ is smaller, the number of iterations grows. Therefore, it is important to choose an optimal value of λ.

Varying scale parameter

To that end, let $J(u) = \int_\Omega |\nabla u| \, dx \, dy$. Differentiating the objective of Equation (3a) with respect to u gives the Euler-Lagrange equation
$$ \nabla \cdot \left( \frac{\nabla u}{|\nabla u|} \right) + \lambda (f + v^k - u) = 0 $$
(6)
Multiplying Equation (6) by $\nabla \cdot (\nabla u / |\nabla u|)$ and integrating over x and y, we get
$$ \int_\Omega \left[ \nabla \cdot \left( \frac{\nabla u}{|\nabla u|} \right) \right]^2 dx \, dy + \lambda \int_\Omega \nabla \cdot \left( \frac{\nabla u}{|\nabla u|} \right) (f + v^k - u) \, dx \, dy = 0 $$
(7)
Then, we have the following equation
$$ \lambda = \frac{\int_\Omega \left[ \nabla \cdot (\nabla u / |\nabla u|) \right]^2 dx \, dy}{\int_\Omega \nabla \cdot (\nabla u / |\nabla u|) \, (u - f - v^k) \, dx \, dy} $$
(8)
In the numerical implementation, λᵏ⁺¹ denotes the λ obtained from Equation (8) at iteration k. Applying the proposed scale parameter to IRM with initial values u⁰ = 0, v⁰ = 0, we obtain a different scale parameter λᵏ⁺¹ at each iteration, and Equation (3) becomes
$$ \lambda^{k+1} = \frac{\int_\Omega \left[ \nabla \cdot (\nabla u^k / |\nabla u^k|) \right]^2 dx \, dy}{\int_\Omega \nabla \cdot (\nabla u^k / |\nabla u^k|) \, (u^k - f - v^k) \, dx \, dy} $$
(9a)
$$ u^{k+1} = \arg\min_{u \in BV(\Omega)} \int_\Omega |\nabla u| \, dx \, dy + \frac{\lambda^{k+1}}{2} \|f + v^k - u\|_2^2 $$
(9b)
$$ v^{k+1} = v^k + f - u^{k+1} $$
(9c)

This gives us an adaptive value λk+1, which appears to converge as k → ∞. The theoretical justification for this approach comes from Appendices 1 and 2.
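The update (9a) can be sketched numerically with finite differences. The discretization below (central differences via `np.gradient`, with the ε-perturbation of |∇u| introduced later in the "Result and discussion" section) is our own choice and only one of several reasonable ones; the stand-in arrays are purely illustrative.

```python
import numpy as np

def curvature(u, eps=1e-12):
    """kappa = div( grad(u) / |grad(u)|_eps ), the term appearing in Eqs. (6)-(9a)."""
    gx = np.gradient(u, axis=0)
    gy = np.gradient(u, axis=1)
    mag = np.sqrt(gx ** 2 + gy ** 2 + eps ** 2)   # eps-perturbed gradient magnitude
    return np.gradient(gx / mag, axis=0) + np.gradient(gy / mag, axis=1)

def adaptive_lambda(u, f, v):
    """Scale parameter of Eq. (9a): integral of kappa^2 over integral of kappa*(u - f - v)."""
    kappa = curvature(u)
    return np.sum(kappa ** 2) / np.sum(kappa * (u - f - v))

rng = np.random.default_rng(0)
f = rng.standard_normal((32, 32))   # stand-in noisy image
u = 0.9 * f                         # stand-in current iterate u^k
v = np.zeros_like(f)                # residual accumulator v^k
lam = adaptive_lambda(u, f, v)
```

A sanity check on the discretization: for a linear ramp u(x, y) = x the gradient field is constant, so the curvature term vanishes identically.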

Initial scale parameter

Numerical experiments reveal that the denoising quality is not ideal with the initial values u⁰ = 0, v⁰ = 0. Indeed, with that initialization Equation (9a) involves a division by zero.

If we instead pick an initial scale parameter λ⁰ at random and run a few iterations, we can calculate λₖ by
$$ \lambda_k = \frac{\int_\Omega \left[ \nabla \cdot (\nabla u^k / |\nabla u^k|) \right]^2 dx \, dy}{\int_\Omega \nabla \cdot (\nabla u^k / |\nabla u^k|) \, (u^k - f - v^k) \, dx \, dy} $$
(10)
We then obtain the sequence {λₖ}, which exhibits one of the following behaviors:
(a) the sequence {λₖ} is monotonically decreasing as the iteration number k increases (see Figure 1c);

(b) as k increases, {λₖ} first decreases and then increases back toward λ⁰ (see Figure 1b);

(c) the sequence {λₖ} is monotonically increasing as k increases (see Figure 1a).
Figure 1

Trends of the scale parameter λₖ as the iteration number k increases.

Therefore, we can obtain the initial value of the varying scale parameter from the trend of the sequence {λₖ} as follows:

(1) If {λₖ} is monotonically decreasing at first as k increases, the randomly selected λ⁰ satisfies property (a) or (b). The initial scale parameter λ¹ of our proposed method is then set to λₖ; usually k = 3.

(2) If {λₖ} is monotonically increasing as k increases, the randomly selected λ⁰ satisfies property (c). The initial scale parameter of our proposed method is then λ¹ = λ₁, or λ¹ = λ⁰/p with a constant p > 1; usually p = 2.

In Figure 1, taking the 'Barbara' image as an example, the trends of the sequence {λₖ} are obtained for scale parameters λ⁰ = 8.33, 4.34, and 0.013, respectively.
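The selection rules above transcribe directly into code. The function below is a sketch under the assumption that the first few values of Equation (10) have already been computed while iterating with the random λ⁰; the tie-breaking for a flat sequence is our own choice, not specified in the paper.

```python
def initial_lambda(lam_seq, lam0, p=2.0):
    """Choose the starting value lambda^1 from the early trend of {lambda_k}.

    lam_seq -- the first few values of Eq. (10), computed while iterating
               with the randomly chosen constant lam0 (usually 3 values).
    Rule (1): sequence decreasing at first (cases (a)/(b)) ->
              take the last computed value (k = 3 in the paper).
    Rule (2): sequence increasing (case (c)) ->
              take lam0 / p with p > 1 (p = 2 in the paper).
    """
    if lam_seq[1] <= lam_seq[0]:      # decreasing (or flat) at first
        return lam_seq[-1]
    return lam0 / p                   # increasing trend
```

For instance, a decreasing run such as 6.1, 5.9, 5.74 yields λ¹ = 5.74, matching the k = 3 convention of rule (1).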

IRM framework with varying scale parameter

According to the above two sections, our general iterative regularization procedure can be formulated as follows.
$$ \begin{cases} u^{j+1} = \arg\min_{u \in BV(\Omega)} \; J(u) + \dfrac{\lambda^0}{2} \|f + v^j - u\|_2^2 \\[4pt] v^{j+1} = v^j + f - u^{j+1} \end{cases} $$
(11)
Step 1: Randomly select λ⁰. Let u⁰ = 0, v⁰ = 0 and j = 0, 1, 2, …

(1) Using Equations (11) and (10), compute uʲ⁺¹, vʲ⁺¹, and λⱼ for a few iterations; generally j = 2 suffices.

(2) Observe the trend of the sequence {λₖ} and, following the rules of the "Initial scale parameter" section, obtain the initial value λ¹ of our proposed method.

Step 2: Let u⁰ = 0, v⁰ = 0 and k = 1, 2, …

(1) Using Equation (9) and the initial value λ¹, compute uᵏ⁺¹, vᵏ⁺¹, and λᵏ⁺¹.

(2) Output the image uᵏ and stop the iteration when ‖f − uᵏ‖₂ ≤ σ (the stopping criterion).

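The two-step framework can be sketched end to end. In the sketch below, the TV sub-problem (9b) is solved with Chambolle's projection algorithm (40 inner iterations and τ = 0.2, as in the experiments); the per-pixel RMS reading of the stopping norm, the positivity guard on λ, and all other numerical details are our own assumptions, not the authors' code.

```python
import numpy as np

def grad(u):
    """Forward-difference gradient with Neumann boundary conditions."""
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    """Backward-difference divergence, the negative adjoint of grad."""
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy

def tv_denoise(g, lam, n_iter=40, tau=0.2):
    """Chambolle's projection iteration for min_u TV(u) + (lam/2)||g - u||^2,
    i.e. sub-problem (9b) with g = f + v^k."""
    theta = 1.0 / lam
    px = np.zeros_like(g); py = np.zeros_like(g)
    for _ in range(n_iter):
        gx, gy = grad(div(px, py) - g / theta)
        norm = 1.0 + tau * np.sqrt(gx ** 2 + gy ** 2)
        px = (px + tau * gx) / norm
        py = (py + tau * gy) / norm
    return g - theta * div(px, py)

def adaptive_irm(f, sigma, lam1, max_iter=20, eps=1e-12):
    """Step 2 of the framework: iterate (9a)-(9c) from lam1 until RMS(f - u) <= sigma."""
    u = np.zeros_like(f); v = np.zeros_like(f); lam = lam1
    for _ in range(max_iter):
        u = tv_denoise(f + v, lam)                       # (9b)
        v = v + f - u                                    # (9c)
        # (9a): update lambda from the curvature of the current iterate
        gx = np.gradient(u, axis=0); gy = np.gradient(u, axis=1)
        mag = np.sqrt(gx ** 2 + gy ** 2 + eps ** 2)
        kappa = np.gradient(gx / mag, axis=0) + np.gradient(gy / mag, axis=1)
        den = np.sum(kappa * (u - f - v))
        new_lam = np.sum(kappa ** 2) / den if abs(den) > eps else lam
        if new_lam > 0:                                  # safeguard: keep lambda positive
            lam = new_lam
        if np.sqrt(np.mean((f - u) ** 2)) <= sigma:      # stopping criterion ||f - u|| <= sigma
            break
    return u

rng = np.random.default_rng(1)
noisy = 0.5 + 0.1 * rng.standard_normal((32, 32))
restored = adaptive_irm(noisy, sigma=0.1, lam1=5.0, max_iter=5)
```

The `grad`/`div` pair is built to be exactly adjoint (⟨∇u, p⟩ = −⟨u, div p⟩), which is what makes the projection iteration well behaved.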
Result and discussion

All solutions of the variational problem were obtained using gradient descent in a standard fashion [21-28]; here we use Chambolle's algorithm [8]. The only non-trivial difficulty arises when |∇u| ≈ 0. As is usual, we fix this by perturbing $J(u) = \int_\Omega |\nabla u| \, dx \, dy$ to $J_\varepsilon(u) = \int_\Omega \sqrt{|\nabla u|^2 + \varepsilon^2} \, dx \, dy$, where ε is a small positive number; to some extent, this also reduces the 'stair-casing' effect of the method. In our calculations, we took ε = 10⁻¹²; the iteration step size τ for Chambolle's algorithm is 0.2. Without loss of generality, the performance of the denoising algorithms is measured in terms of peak signal-to-noise ratio (PSNR) [29], defined as
$$ \mathrm{PSNR} = 10 \log_{10} \frac{255^2}{\frac{1}{MN} \sum_{m=1}^{M} \sum_{n=1}^{N} \left( f_{mn} - u_{mn} \right)^2} $$
(12)

where f is the original image and u is the denoised image.
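Equation (12) translates directly into code for 8-bit images (peak value 255):

```python
import numpy as np

def psnr(f, u):
    """Peak signal-to-noise ratio of Eq. (12), in dB, for 8-bit images."""
    diff = f.astype(np.float64) - u.astype(np.float64)
    mse = np.mean(diff ** 2)            # (1/MN) * sum of squared errors
    return 10.0 * np.log10(255.0 ** 2 / mse)
```

For example, a uniform error of 25.5 gray levels gives 10·log₁₀(255²/25.5²) = 20 dB.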

Convergence analysis

Figures 2 and 3 show the results of IRM with constant and with varying scale parameter on the 'cameraman' image with added Gaussian noise of σ = 20, for smaller and larger λ, respectively. In Figure 2, the first row shows that more iteration steps are required to reach the stopping criterion with the smaller scale parameter λ⁰ = 0.67; the second row shows that our proposed method requires fewer iterations to reach the optimal denoising result. We first iterated three times with the constant scale parameter λ⁰ = 0.67 and obtained a decreasing sequence {λₖ}, shown in the first image of the third row. According to Equation (10), we obtained the initial value λ¹ = λ₂ = 5.74. The last two plots (i) and (j) show that ‖f − uᵏ‖₂ decreases monotonically with the iterations, first dropping below σ at the optimal iterate k = 12 and k = 2, respectively. Thus, our proposed method converges faster than IRM with a constant scale parameter. In Figure 3, with the large scale parameter λ⁰ = 10 the original IRM converges to the noisy image f quickly, and only one iteration is needed to reach the stopping criterion; obviously, the denoising result is not satisfactory. In contrast, a promising result is obtained with our varying scale parameter strategy, where the initial value λ¹ = λ₂ = 5.74 according to Equation (10).
Figure 2

Denoised images using a constant scale parameter for IRM and the proposed method when λ is smaller.

Figure 3

Denoised images using a constant scale parameter for IRM and the proposed method when λ is larger.

Preserved textures analysis

Figure 4 shows the denoising results for the 'Barbara' image with Gaussian white noise of σ = 25.5. The constant scale parameter λ for IRM is 1. Compared with constant-parameter IRM in Figure 4c-f, our proposed method preserves more texture in Figure 4g-j. The last two plots (k) and (l) show that ‖f − uᵏ‖₂ decreases monotonically with the iterations, first dropping below σ at the optimal iterate k = 12 and k = 2, respectively. Thus, our proposed method converges faster than IRM with a constant scale parameter.
Figure 4

Denoising results of ‘Barbara’ image.

Denoising analysis for MRI coronal brain

The denoising results for an MRI coronal brain image with Gaussian white noise of σ = 53.83 are shown in Figure 5. The constant scale parameter λ for IRM is 1. Compared with constant-parameter IRM in Figure 5c-f, our proposed method preserves more texture in Figure 5g-j. The last two plots (k) and (l) show that ‖f − uᵏ‖₂ decreases monotonically with the iterations, first dropping below σ at the optimal iterate k = 13 and k = 3, respectively. Thus, our proposed method converges faster than IRM with a constant scale parameter and retains more texture detail in the denoised image.
Figure 5

Denoising results of MRI coronal brain image.

Computational cost analysis

We compared computational times (see Tables 1 and 2) in MATLAB 7.1 on a PC with an AMD 2.31 GHz CPU and 3 GB RAM. It should be noted that "fast" here is relative: computing (9a) is cheap compared with solving the sub-problem (9b), even though we use the efficient Chambolle algorithm for (9b) with the number of inner iterations set to 40. Taking the denoising of the 'Barbara' image (256 × 256) from the "Preserved textures analysis" section as an example, the average time to compute (9b) once is 1.281 s, while that of (9a) is only 0.026 s. Moreover, since the conventional IRM needs 13 outer iterations while our adaptive IRM needs only 3, the total computational time of the conventional IRM is 13.708 s versus 3.136 s for our method. Therefore, our adaptive scheme is indeed faster than the conventional one.
Table 1
Computer time of formula (9)

                     Computing (9a)   Computing the sub-problem (9b)
Computational time   0.026 s          1.281 s

Table 2
Computer time of our method and the conventional IRM

                     Our method   The conventional IRM
Computational time   3.136 s      13.708 s

In addition, we compared our method with wavelet + Wiener filtering and with curvelet denoising using hard, soft, and block thresholding. The denoising results for the Lena image in Table 3 show that our algorithm achieves a higher PSNR than these traditional methods.
Table 3
A variety of image denoising algorithms compared on the Lena image (PSNR in dB)

Gaussian white noise σ         10      15      20      25      30
Wavelet + Wiener               30.65   28.06   26.94   25.48   24.39
Hard threshold of curvelet     31.67   29.58   28.47   26.43   24.97
Soft threshold of curvelet     31.99   30.76   29.78   28.79   26.34
Block threshold of curvelet    32.76   32.33   31.09   29.57   28.08
Our method                     33.97   33.68   32.59   30.14   29.64

Conclusion

A novel IRM with an adaptive scale parameter has been proposed in order to reduce the sensitivity to a constant scale parameter, to optimize the scale parameter adaptively within IRM, and to attain a desirable level of applicability for image denoising. We adopt the total variation regularization term and derive an update equation for the adaptive scale parameter, exploiting the fact that a smaller scale parameter increases the number of IRM iterations. A rule for choosing the initial scale parameter from the trend of the sequence {λₖ} is then obtained; in general, three iterations suffice to determine it. Practical examples show that our proposed method reduces the number of iterations, yielding a fast and robust algorithm.

Appendix 1

A constrained problem is defined as
$$ \min F(X) \quad \text{s.t.} \quad N^T X - b \ge 0 $$
(13)
On the active constraint, N^T X − b = 0. The basic assumption is that X lies in the subspace tangent to the active constraints, i.e., X_{i+1} = X_i + αS, where S is the direction with the most negative directional derivative and α is the iterative step length; both X_i and X_{i+1} satisfy the constraint equations. Therefore, we obtain
$$ N^T S = 0 $$
(14)
If we want the steepest descent direction satisfying Equation (14), we can pose the problem as
$$ \min \; S^T \nabla F \quad \text{s.t.} \quad N^T S = 0 \;\; \text{and} \;\; S^T S = 1 $$
(15)
The Lagrangian of Equation (15) has the formulation
$$ L(S, \lambda, \mu) = S^T \nabla F - S^T N \lambda - \mu \left( S^T S - 1 \right) $$
(16)
The derivative of L with respect to S is
$$ \frac{\partial L}{\partial S} = \nabla F - N \lambda - 2 \mu S = 0 $$
(17)
Recalling that N^T S = 0 from Equation (14) and multiplying Equation (17) by N^T, we get
$$ N^T \nabla F - N^T N \lambda = 0 $$
(18)
Therefore, we get the value
$$ \lambda = \frac{N^T \nabla F}{N^T N} $$
(19)

So, the proposition holds.
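The conclusion of Appendix 1 can also be checked numerically: with λ = Nᵀ∇F / (NᵀN) from Equation (19), the direction S ∝ ∇F − Nλ of Equation (17) is tangent to the constraint, i.e., NᵀS = 0. A toy check with random vectors (our own example):

```python
import numpy as np

rng = np.random.default_rng(0)
N = rng.standard_normal(5)        # constraint normal
grad_F = rng.standard_normal(5)   # objective gradient at the current point

lam = (N @ grad_F) / (N @ N)      # Eq. (19)
S = grad_F - N * lam              # from Eq. (17), up to the 2*mu scaling
```

By construction, N @ S = N·∇F − (N·N)·λ vanishes exactly (up to floating-point error).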

Appendix 2

In this article, we want to solve the problem
$$ u^{k+1} = \arg\min_{u \in BV(\Omega)} \int_\Omega |\nabla u| \, dx \, dy + \frac{\lambda}{2} \|f + v^k - u\|_2^2 $$
(20)
It is equivalent to
$$ u^{k+1} = \arg\min_{u \in BV(\Omega)} \lambda_1 \int_\Omega |\nabla u| \, dx \, dy + \frac{1}{2} \|f + v^k - u\|_2^2 $$
(21)
where λ₁ = 1/λ. Equation (21) can then be rewritten as
$$ \min_{u \in BV(\Omega)} \frac{1}{2} \|f + v^k - u\|_2^2 \quad \text{s.t.} \quad \int_\Omega |\nabla u| \, dx \, dy \ge 0 $$
(22)
Since $\int_\Omega |\nabla u| \, dx \, dy = \left\langle \frac{\nabla u}{\sqrt{|\nabla u|^2 + \varepsilon}}, \nabla u \right\rangle = \left\langle -\nabla \cdot \frac{\nabla u}{\sqrt{|\nabla u|^2 + \varepsilon}}, u \right\rangle$, we approximate $\nabla \cdot \frac{\nabla u}{\sqrt{|\nabla u|^2 + \varepsilon}}$ by $\nabla \cdot \frac{\nabla u^k}{\sqrt{|\nabla u^k|^2 + \varepsilon}}$ and set $X = u$, $b = 0$, $N = -\nabla \cdot \frac{\nabla u^k}{\sqrt{|\nabla u^k|^2 + \varepsilon}}$, and $F(u) = \frac{1}{2} \|f + v^k - u\|_2^2$, so that $\int_\Omega |\nabla u| \, dx \, dy = N^T u \ge 0$. Then, according to Appendix 1, we obtain
$$ \lambda_1 = \frac{\int_\Omega \nabla \cdot (\nabla u^k / |\nabla u^k|) \, (u^k - f - v^k) \, dx \, dy}{\int_\Omega \left[ \nabla \cdot (\nabla u^k / |\nabla u^k|) \right]^2 dx \, dy} $$
(23)
Therefore, we obtain the parameter
$$ \lambda = \frac{1}{\lambda_1} = \frac{\int_\Omega \left[ \nabla \cdot (\nabla u^k / |\nabla u^k|) \right]^2 dx \, dy}{\int_\Omega \nabla \cdot (\nabla u^k / |\nabla u^k|) \, (u^k - f - v^k) \, dx \, dy} $$
(24)

So, the proposition holds.

Declarations

Acknowledgment

This study was supported by the National Natural Science Foundation of China under the Grant nos. 60702069, 30300443 and 61105035; the Research Project of Department of Education of Zhejiang Province, China under the Grant no. 20060601; The Science Foundation of Zhejiang Sci-Tech University of China under the Grant no. 0604039-Y; the Natural Science Foundation of Zhejiang Province of China under the Grant no. Y1080851 and Y12H290045; the Research Project of 2011 overseas students of Zhejiang Province of China under the Grant no. 1104707-M; Qianjiang talents project of Science and Technology Department of Zhejiang province of China under the grant no. 2012R10054.

Authors’ Affiliations

(1)
Lab of Intelligence Detection and System, School of Information Science and Technology, Zhejiang Sci-Tech University
(2)
School of Life Science and Technology, Shanghai Jiao Tong University

References

  1. Rudin L, Osher S, Fatemi E: Nonlinear total variation based noise removal algorithms. Physica D 1992, 60: 259-268. doi:10.1016/0167-2789(92)90242-F
  2. DeVore R, Jawerth B, Lucier B: Image compression through wavelet transform coding. IEEE Trans. Inf. Theory 1992, 38: 719-746. doi:10.1109/18.119733
  3. Donoho DL: De-noising by soft-thresholding. IEEE Trans. Inf. Theory 1995, 41(3): 613-627. doi:10.1109/18.382009
  4. Dabov K, Foi A, Katkovnik V, Egiazarian K: Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 2007, 16: 2080-2095.
  5. Yan R, Shao L, Cvetkovic SD, Klijn J: Improved nonlocal means based on pre-classification and invariant block matching. IEEE/OSA J. Disp. Technol. 2012, 8(4): 212-218.
  6. Osher S, Solé A, Vese L: Image decomposition and restoration using total variation minimization and the H⁻¹ norm. SIAM J. Multiscale Model. Simul. 2003, 1(3): 349-370. doi:10.1137/S1540345902416247
  7. Burger M, Gilboa G, Osher S, Xu JJ: Nonlinear inverse scale space methods. Commun. Math. Sci. 2006, 4(1): 175-208.
  8. Chambolle A, DeVore R, Lee NY, Lucier B: Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage. IEEE Trans. Image Process. 1998, 7(3): 319-335.
  9. Chambolle A, Lucier BJ: Interpreting translation-invariant wavelet shrinkage as a new image smoothing scale space. IEEE Trans. Image Process. 2001, 10: 993-1000. doi:10.1109/83.931093
  10. Steidl G, Weickert J, Brox T, Mrázek P, Welk M: On the equivalence of soft wavelet shrinkage, total variation diffusion, total variation regularization, and SIDEs. SIAM J. Numer. Anal. 2004, 42(2): 686-713. doi:10.1137/S0036142903422429
  11. Xu J, Osher S: Iterative regularization and nonlinear inverse scale space applied to wavelet-based denoising. IEEE Trans. Image Process. 2007, 16(2): 534-544.
  12. Meyer Y: Oscillating Patterns in Image Processing and Nonlinear Evolution Equations. American Mathematical Society, Boston; 2002.
  13. Osher S, Burger M, Goldfarb D, Xu J, Yin W: An iterative regularization method for total variation based image restoration. Multiscale Model. Simul. 2005, 4: 460-489. doi:10.1137/040605412
  14. Bregman L: The relaxation method of finding the common points of convex sets and its application to the solution of problems in convex programming. USSR Comput. Math. Math. Phys. 1967, 7: 200-217.
  15. Hao BB, Li M, Feng XC: Wavelet iterative regularization for image restoration with varying scale parameter. Signal Process. Image Commun. 2008, 23(6): 433-441. doi:10.1016/j.image.2008.04.006
  16. Liu B, King K, Steckner M, Xie J, Sheng J, Ying L: Regularized sensitivity encoding (SENSE) reconstruction using Bregman iterations. Magn. Reson. Med. 2009, 61(1): 145-152. doi:10.1002/mrm.21799
  17. Chambolle A: An algorithm for total variation minimization and applications. J. Math. Imag. Vis. 2004, 20: 89-97.
  18. Vese L, Osher S: Modeling textures with total variation minimization and oscillating patterns in image processing. J. Sci. Comput. 2003, 19: 553-572. doi:10.1023/A:1025384832106
  19. Weissman T, Ordentlich E, Seroussi G, Verdú S, Weinberger M: Universal discrete denoising: known channel. IEEE Trans. Inf. Theory 2005, 51: 5-28.
  20. Perona P, Malik J: Scale-space and edge detection using anisotropic diffusion. IEEE Trans. Pattern Anal. Mach. Intell. 1990, 12: 629-639. doi:10.1109/34.56205
  21. Goldstein T, Osher S: The split Bregman method for L1-regularized problems. SIAM J. Imag. Sci. 2009, 2(2): 323-343. doi:10.1137/080725891
  22. Yin W, Osher S, Goldfarb D, Darbon J: Bregman iterative algorithms for l1-minimization with applications to compressed sensing. SIAM J. Imag. Sci. 2009, 1: 142-168.
  23. He L, Chang TC, Osher S, Fang T, Speier P: MR image reconstruction by using the iterative refinement method and nonlinear inverse scale space methods. CAM Report 06-35, UCLA; 2006.
  24. Esser E: Applications of Lagrangian-based alternating direction methods and connections to split Bregman. CAM Report 09-31, UCLA; 2009.
  25. Wang Y, Yang J, Yin W, Zhang Y: A new alternating minimization algorithm for total variation image reconstruction. SIAM J. Imag. Sci. 2008, 1(3): 248-272. doi:10.1137/080724265
  26. Ma J: Compressed sensing by inverse scale space and curvelet thresholding. Appl. Math. Comput. 2008, 206(2): 980-988. doi:10.1016/j.amc.2008.10.011
  27. Ma J, Le DF: Deblurring from highly incomplete measurements for remote sensing. IEEE Trans. Geosci. Remote Sens. 2009, 3(47): 792-802.
  28. Gilboa G, Sochen N, Zeevi YY: Texture preserving variational denoising using an adaptive fidelity term. Proc. VLSM, Nice, France; 2003: 137-144.
  29. Shao L, Zhang H, de Haan G: An overview and performance evaluation of classification-based least squares trained filters. IEEE Trans. Image Process. 2008, 17: 1772-178.

Copyright

© Li et al.; licensee Springer. 2012

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.