Unsupervised estimation of signal-dependent CCD camera noise
 Bruno Aiazzi^{1},
 Luciano Alparone^{2},
 Stefano Baronti^{1},
 Massimo Selva^{1} and
 Lorenzo Stefani^{1}
https://doi.org/10.1186/1687-6180-2012-231
© Aiazzi et al.; licensee Springer. 2012
Received: 6 May 2011
Accepted: 25 September 2012
Published: 30 October 2012
Abstract
This article presents an original method to estimate the noise introduced by optical imaging systems, such as CCD cameras. The power of the signal-dependent photon noise is decoupled from the power of the signal-independent electronic noise. The method relies on the multivariate regression of sample mean and variance. Statistically similar image pixels, not necessarily connected, produce scatterpoints that are clustered along a straight line, whose slope and intercept measure the signal-dependent and signal-independent components of the noise power, respectively. Experimental results on a simulated noisy image and on true data from a commercial CCD camera highlight the accuracy of the proposed method and its applicability to separate R–G–B components that have been corrected for the nonlinear effects of the camera response function, but not yet interpolated to the full size of the mosaiced R–G–B image.
Introduction
Whenever the assumption of additive white Gaussian noise (AWGN) no longer holds, noise modeling and estimation become a preliminary step of the most advanced image analysis and interpretation systems. Preprocessing of data acquired with certain modalities, like optoelectronic and coherent (either ultrasound or microwave) ones, may benefit from proper parametric modeling of the dependence of the noise on the signal and from accurate measurements of the noise model parameters. Knowledge of the noise model parameters is crucial for the task of denoising: maximum a posteriori probability estimators exhibit a scarce tolerance to mismatches in the parametric noise model [1].
Recent advances in the technology of optoelectronic imaging devices have led to the availability of image data in which the photon noise contribution may no longer be neglected with respect to the electronic component, which is becoming less and less relevant. As a consequence, preprocessing and analysis methods must be revised, or even designed anew, to take into account that the noise is signal-dependent.
To date, the most powerful noise estimation methods are based on the multivariate regression of local statistics [2–5]. However, the solution is complicated by the presence of two parametric noise components, one signal-dependent and one signal-independent.
The original contribution of this article is twofold: on one side, a robust multivariate procedure is proposed to estimate the parameters of the mixed photon + electronic noise from a single image. On the other side, the limits of validity of the optoelectronic noise model are discussed, a topic that has never been clarified by any of the most prominent articles, e.g., [5, 6]. On raw data^{a} such a model does not strictly hold, or rather it holds only for a limited range of values above zero. Actually, raw data are available after a nonlinear mapping performed through the camera response function (CRF) of the device in order to avoid saturation effects. The optoelectronic noise model is correctly estimated on true raw data by other authors, e.g., [5], only if the range of nonlinearity is carefully avoided by the estimation procedure. Conversely, on CRF-corrected data, which are much more available and widespread (they might in principle be obtained by properly decimating the demosaiced R–G–B image), the optoelectronic noise model holds over the whole dynamic range and can be more easily estimated. Other authors develop their analysis in a local mean versus standard deviation space, which makes it hard to devise a specific parametric noise model [6]. Instead, we develop our model in the local mean versus variance space, in which a nearly linear relation can easily be recognized and exploited to obtain the noise parameters.
Signal-dependent noise modeling
The generalized signal-dependent (GSD) noise model is

$g(m,n) = f(m,n) + f^{\gamma}(m,n)\,u(m,n) + w(m,n)$  (1)

where (m,n) is the pixel location, g(m,n) the observed noisy image, f(m,n) the noise-free image, modeled as a nonstationary correlated random process, u(m,n) a stationary, zero-mean uncorrelated random process independent of f(m,n) with variance $\sigma_u^2$, and w(m,n) the electronic noise (zero-mean, white, and Gaussian, with variance $\sigma_w^2$). For a great variety of images, this model has been proven to hold for values of the parameter γ such that γ ≤ 1. The additive term v = f^{γ}·u is the GSD noise. Since f is generally nonstationary, the noise v is nonstationary as well. The term w is the signal-independent noise component and is generally assumed to be Gaussian distributed.
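As an illustration, an image corrupted according to model (1) can be synthesized directly. The following minimal sketch assumes Gaussian-distributed u and w; the function name and all parameter values are our own, purely illustrative choices:

```python
import numpy as np

def add_gsd_noise(f, gamma=0.5, sigma_u=0.05, sigma_w=2.0, seed=0):
    """Return g = f + f**gamma * u + w, with u and w zero-mean white
    Gaussian processes of std sigma_u and sigma_w (model (1))."""
    rng = np.random.default_rng(seed)
    u = rng.normal(0.0, sigma_u, f.shape)
    w = rng.normal(0.0, sigma_w, f.shape)
    return f + np.power(f, gamma) * u + w

# on a flat patch of level 400 the noise variance should be close to
# sigma_u**2 * 400 + sigma_w**2 = 0.0025 * 400 + 4.0 = 5.0  (gamma = 0.5)
flat = np.full((512, 512), 400.0)
g = add_gsd_noise(flat)
print(g.var())
```

On a flat patch the sample variance directly reveals the mixed noise power, which is the observation the estimation procedure of this article builds on.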
A purely multiplicative noise (γ = 1) is typical of coherent imaging systems; the majority of despeckling filters rely on the fully developed multiplicative speckle model [8]. In SAR imagery, the thermal noise contribution w is negligible compared to the speckle term f·u [9].
The signal-dependent noise in Equation (2) is the combination of a purely multiplicative term and of a signal-independent term. The outcome exhibits a dependence on the signal that vanishes as f → 0_{+}. Whenever f·u ≫ w, as happens for SAR speckle, it follows that γ_{eq}(f) → 1_{−}. In practice, the left-hand side of (2), i.e., (1) with γ = 1, is taken as a noise model suitable for ultrasonic images [10].
The model (1) is also suitable for film-grain noise [11], typical of images obtained by scanning a film (transparent support) or a photographic halftone print (reflecting support). In the former case, γ > 0 and values 1/3 ≤ γ ≤ 1/2 are typically encountered; in the latter case, negative values of γ are found [11]. For images obtained from monochrome or color scanners, the electronic noise w may not be neglected. Its variance is easily measured on a dark acquisition, i.e., when f = 0. The unknown exponent γ may be found by drawing the scatterplot of the logarithm of the measured local variance, diminished by the dark-signal variance (an estimate of $\sigma_w^2$), against the logarithm of the local mean [12]. Homogeneous pixels are clustered along a straight line in the log-scatterplot plane. The unknown γ is estimated from the slope of the regression line, $\sigma_u^2$ from the intercept.
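A numerical sketch of this log-scatterplot estimate follows, using flat synthetic patches in place of detected homogeneous pixels. Note that we adopt the convention Var(f^{γ}·u) = σ_u²·f^{2γ}, under which the slope of the log-log fit equals 2γ; formulations that regress the log standard deviation instead read γ off the slope directly. All parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
gamma_true, var_u, var_w = 0.5, 0.01, 4.0

log_mean, log_var = [], []
for f in [50.0, 100.0, 200.0, 400.0, 800.0]:     # flat patches (homogeneous pixels)
    patch = np.full(100_000, f)
    g = (patch + patch ** gamma_true * rng.normal(0.0, np.sqrt(var_u), patch.shape)
               + rng.normal(0.0, np.sqrt(var_w), patch.shape))
    log_mean.append(np.log(g.mean()))
    log_var.append(np.log(g.var() - var_w))      # dark-frame estimate of var_w subtracted

slope, intercept = np.polyfit(log_mean, log_var, 1)
gamma_hat = slope / 2.0            # Var(f**gamma * u) = var_u * f**(2*gamma)
var_u_hat = np.exp(intercept)
print(gamma_hat, var_u_hat)
```

With the dark-frame variance subtracted beforehand, the remaining signal-dependent term is linear in the log-log plane, and both γ and σ_u² are recovered from one ordinary least-squares fit.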
Finally, the model (1) also applies to images produced by optoelectronic devices, such as CCD cameras, multispectral scanners, and imaging spectrometers. In that case, the exponent γ equals 0.5. The term $\sqrt{f}\,u$ stems from the Poisson-distributed number of photons captured at each pixel and is therefore referred to as photon noise [13]. This case is investigated in the remainder of this article.
Optoelectronic noise
$g(m,n) = f(m,n) + \sqrt{f(m,n)}\,u(m,n) + w(m,n)$  (3)

Equation (3) represents the electrical signal resulting from the photon conversion and from the dark current. The mean dark current has preliminarily been subtracted to yield g(m,n). However, its statistical fluctuations around the mean constitute most of the zero-mean electronic noise w(m,n). The term $\sqrt{f(m,n)}\,u(m,n)$ is the photon noise, whose mean is zero and whose variance is proportional to E[f(m,n)]. It represents a statistical fluctuation of the photon signal around its noise-free value f(m,n), due to the granularity of the photons originating the electric charge.
SNR
On statistically homogeneous pixels, the variance of the observed signal g(m,n) in (3) is

$\sigma_g^2(m,n) = \sigma_u^2\,\mu_f(m,n) + \sigma_w^2$  (4)

in which $\mu_f(m,n) \triangleq E[f(m,n)]$ is the nonstationary mean of f. The term $\mu_f(m,n)$ equals $\mu_g(m,n)$, from (3). The pointwise SNR is the ratio of signal power to noise power,

$\mathrm{SNR}(m,n) = \dfrac{\mu_f^2(m,n)}{\sigma_u^2\,\mu_f(m,n) + \sigma_w^2}$

which states that the SNR depends on the square of the mean photon signal. An average SNR over the whole image can be expressed in decibels as

$\mathrm{SNR}_{\mathrm{dB}} \approx 10\log_{10}\!\left(\dfrac{\bar{f}^2}{\sigma_u^2\,\bar{f} + \sigma_w^2}\right)$  (9)

where $\bar{f}$ is obtained by averaging the observed noisy image, the noise being zero-mean, and the average local variance of f is assumed to be negligible, i.e., $\overline{f^2} \approx \left(\bar{f}\right)^2$.
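For instance, the average SNR of a channel can be computed directly from the two estimated noise parameters and the channel's mean level; the function below encodes the average SNR (9) as signal power over total noise power, in dB. The mean level of 1000 and the parameter values are illustrative, not taken from the experiments:

```python
import math

def snr_db(f_bar, var_u, var_w):
    """Average SNR in dB: signal power f_bar**2 over the total
    noise power var_u * f_bar + var_w (optoelectronic model, gamma = 0.5)."""
    return 10.0 * math.log10(f_bar ** 2 / (var_u * f_bar + var_w))

print(snr_db(1000.0, 3.4, 2000.0))   # approx. 22.7 dB
```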
Estimation procedure
Equation (4) represents a straight line in the plane $(x,y)=(\mu_f,\sigma_g^2)=(\mu_g,\sigma_g^2)$, whose slope and intercept are equal to $\sigma_u^2$ and $\sigma_w^2$, respectively. The interpretation of (4) is that, on statistically homogeneous pixels, the theoretical nonstationary ensemble statistics (mean and variance) of the observed noisy image g(m,n) lie upon a straight line. In practice, homogeneous pixels with $\sigma_f^2(m,n)\equiv 0$ may be extremely rare, and theoretical expectations are approximated with local averages. Hence, the most homogeneous pixels in the scene appear in the mean-variance plane clustered along the straight line y = mx + y_0, in which $m=\sigma_u^2$ and $y_0=\sigma_w^2$.
The problem of measuring the two parameters of the optoelectronics noise model (3) has been stated to be equivalent to fitting a regression line to the scatterplot containing homogeneous pixels, or at least the most homogeneous pixels in the scene. The problem is now shifted to detecting the (most) statistically homogeneous pixels in an imaged scene.
One major drawback of the simultaneous estimation of the two parameters of a generic line is that at least two distinct clusters, not necessarily corresponding to two homogeneous image patches, are necessary to yield a steady and balanced fit. The procedures developed by some of the authors for signal-independent noise estimation [4] and SAR speckle estimation [14], once extended to two-parameter noise estimation, have been found to be inadequate for the new task, mainly because the overall noise power, though accurately estimated, was not correctly split into its signal-dependent and signal-independent components. The proposed estimation procedure consists of the following steps:
1. Calculate a global homogeneity threshold θ on the most densely populated bins of the binned scatterplot relative to the whole image, analogously to [14];
2. Set block index k := 1;
3. If k > number of blocks, go to 7; else, within a K × K window (K = 2m + 1) sliding over the k-th block B_{k}, calculate the local statistics of the noisy image:
   - average $\bar{g}(i,j) \equiv \hat{\mu}_g(i,j)$:
     $\bar{g}(i,j) = \dfrac{1}{K^2}\sum_{k=-m}^{m}\sum_{l=-m}^{m} g(i+k,\,j+l)$  (10)
   - mean quadratic deviation from the average, $\hat{\sigma}_g^2(i,j)$:
     $\hat{\sigma}_g^2(i,j) = \dfrac{1}{K^2-1}\sum_{k=-m}^{m}\sum_{l=-m}^{m}\left[g(i+k,\,j+l)-\bar{g}(i,j)\right]^2$  (11)
4. Draw $\mathcal{S}_k$, the scatterplot of $\hat{\sigma}_g^2(i,j)$ versus $\hat{\mu}_g(i,j)$ for B_{k};
5. Calculate the mass m_{k} (number of points) and gravity center g_{k} (center of mass of the set of points) of $\mathcal{S}_k$;
6. Let R be the average quadratic distance of the scatterpoints from their gravity center, measured along the variance (y) axis: if R ≤ θ, save the coordinates of g_{k} and m_{k}, set k := k + 1 and go to 3. Otherwise, split $\mathcal{S}_k$ into four quadrants (bins) $\{\mathcal{S}_k^j,\ j=1,\dots,4\}$, find the most densely populated bin $\mathcal{S}_k^j$, set $\mathcal{S}_k := \mathcal{S}_k^j$ and go to 5;
7. Draw the mean-to-variance scatterplot from the coordinates {g_{k}} and masses {m_{k}}. A two-parameter regression line is fit to the scatterplot; the slope and intercept of this line are the estimates of the two noise model parameters.
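Steps 1-7 above can be condensed into a short Python sketch. It is a simplified reading of the procedure: the global threshold θ of step 1 is taken as a user-supplied input rather than derived from the binned scatterplot of [14], quadrants are split at the medians, and all numeric defaults (block size, window size, θ) are our own illustrative choices:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def local_stats(block, K=7):
    """Sliding-window sample mean and variance, as in Eqs. (10)-(11)."""
    win = sliding_window_view(block, (K, K))
    mu = win.mean(axis=(-2, -1))
    var = win.var(axis=(-2, -1), ddof=1)   # unbiased, 1 / (K**2 - 1)
    return mu.ravel(), var.ravel()

def homogeneous_centroid(x, y, theta, max_depth=12):
    """Step 6: recursively keep the densest quadrant of the scatterplot
    until the spread of variances around the centroid falls below theta."""
    for _ in range(max_depth):
        gx, gy = x.mean(), y.mean()
        if np.mean((y - gy) ** 2) <= theta:
            break
        xm, ym = np.median(x), np.median(y)
        quads = [(x < xm) & (y < ym), (x < xm) & (y >= ym),
                 (x >= xm) & (y < ym), (x >= xm) & (y >= ym)]
        best = max(quads, key=np.count_nonzero)
        x, y = x[best], y[best]
    return x.mean(), y.mean(), x.size      # gravity center and mass

def estimate_noise(img, block=64, K=7, theta=4.0):
    """Fit the mass-weighted mean-variance regression line (step 7);
    returns estimates of (var_u, var_w)."""
    centers = []
    for i in range(0, img.shape[0] - block + 1, block):
        for j in range(0, img.shape[1] - block + 1, block):
            mu, var = local_stats(img[i:i + block, j:j + block], K)
            centers.append(homogeneous_centroid(mu, var, theta))
    gx, gy, mass = map(np.array, zip(*centers))
    slope, intercept = np.polyfit(gx, gy, 1, w=np.sqrt(mass))
    return slope, intercept

# quick check on a synthetic image: four flat vertical bands, noise with
# var_u = 0.01 (photon) and var_w = 4 (electronic)
rng = np.random.default_rng(0)
truth = np.zeros((256, 256))
for c, level in enumerate([100.0, 200.0, 400.0, 800.0]):
    truth[:, c * 64:(c + 1) * 64] = level
noisy = (truth + np.sqrt(truth) * rng.normal(0.0, 0.1, truth.shape)
               + rng.normal(0.0, 2.0, truth.shape))
var_u_hat, var_w_hat = estimate_noise(noisy, theta=20.0)
print(var_u_hat, var_w_hat)
```

On the synthetic check, the recovered slope and intercept approach the true σ_u² = 0.01 and σ_w² = 4, with each flat band contributing one heavily weighted centroid to the fit.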
The main advantage of the above procedure is that a poorly homogeneous image block, i.e., a block containing few statistically similar pixels, not necessarily forming a connected set, yields a gravity center with a low weight, while a block containing many homogeneous points contributes a center with a large weight. The multiplicity of centers ensures that the regression line is not undetermined, as would happen in the case of a unique center originated by an isotropically spread cloud of dense scatterpoints.
Experiments on simulated noisy images
The proposed method has preliminarily been validated on simulated noisy images. Results on the synthetic noise-free test image used in [5] are presented here. The original test image is shown in Figure 2a. A noisy version with average SNR (9) equal to 17 dB, 77% signal-dependent photon noise (γ = 0.5), and 23% signal-independent electronic noise has been generated and is shown in Figure 2b.
The variance-to-mean scatterplots, shown in Figure 2c,d, highlight the noise model. In Figure 2c no noise has been superimposed and nine points can be detected, approximately aligned along the x-axis. The slope of the joining line is equal to zero and the intercept equals the variance of the integer round-off error, i.e., 1/12. Conversely, Figure 2d evidences the presence of nine clusters that are aligned along a straight line having slope and intercept equal to the parameters of the superimposed noise.
Imaging model of CCD cameras
According to a recent article [15] that integrates previous studies [16, 17], CCD imaging can be represented by three subsystems: the CCD sensor array, which converts photons at each pixel into electrons and thus voltage; the camera electronics, which usually forces a nonlinear compression on the voltage values; and an analog-to-digital (A/D) conversion, which generates the digital image values.
The conversion of light, i.e., photons, into electronic charge depends on many factors. The electronic charge consists of electrons excited from the silicon valence band to the conduction band by the incident light. The amount of charge generated for a given source of light is determined by several factors, chiefly the wavelength, i.e., the photon energy, and, to a lesser extent, nonlinearities in the conversion process. As a consequence of the latter, the efficiency of the charge-generation process is degraded and an incomplete conversion of photons into signal electrons occurs. Further nonlinearities are introduced by the electronics of the camera, which is often designed to compress the wide range of irradiance values of the scene into a fixed range of measurable values.
We wish to highlight that the main contribution to the overall CRF is a nonlinearity purposely introduced by the manufacturer to prevent clipping above the maximum value allowed by the ADC. Therefore, clipped upper values are never encountered except in the case of a severe and uncontrolled overexposure. Instead, negative values that are clipped below zero may occur in dark image regions. Incidentally, negative values depend only on the electronic noise, after the average dark signal is subtracted, not on the photon noise, because the overall number of photons received cannot be negative. Now, if the inverse CRF is derived in a laboratory in such a way that the overall response of the instrument is linearized, the correction will also include a partial compensation of the nonlinearity, more exactly of the positive bias in the mean response for very low levels, introduced by negative clipping in the presence of "dark" noise (the noise associated with the dark signal). In other words, negative clipping, whose extent is limited to a few counts on pixels having approximately zero photon signal, determined by the RMS value of the dark noise, is simply approximated as another contribution to the overall CRF, together with the undesired nonlinearity of the optoelectronic chain (imperfect conversion of photons into electrons as the number of photons increases) and, especially, the saturated nonlinear response imposed to prevent overflow in the ADC.
Experiments on a CCD camera
In order to estimate the CCD noise, there are two possibilities. The first is to recover the noise parameters in IS for small values of digital counts, which correspond to a linear mapping from LS, taking into account saturation and/or clipping effects [5]. More exactly, saturation is a reversible nonlinearity purposely introduced by the manufacturer to prevent the values of bright pixels of the mosaiced image from falling outside the dynamic range of the ADC, thereby being clipped above the maximum. Clipping is an irreversible operation and is associated with a partial or total loss of information. Clipping below zero may occur when the dark signal is subtracted (see Figure 4b). Its effect has been carefully analyzed in [5] and found to be beneficial for noise parameter estimation. Clipping above the maximum level allowed by the ADC is always an undesired effect. Its occurrence, usually originated by overexposure, should be avoided.
Whenever ADCs with a high bit depth, e.g., 14 bits, are employed, the nonlinearity of the imaging chain is weak, because there is no longer a need to purposely compress the range of values of the signal. Only the undesired nonlinearity due to the imperfect conversion of photons into electrons survives.
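As an illustration of CRF correction, the sketch below inverts a hypothetical pure power-law CRF to map image-space values back to an approximately linear space. Real CRFs differ per device and must be calibrated (e.g., from exposure stacks), so both the functional form and the exponent here are assumptions of ours, not the camera model of the article:

```python
import numpy as np

def inverse_crf(img_is, g=2.2, vmax=255.0):
    """Map image-space (IS) values back to an approximately linear space (LS),
    assuming a hypothetical power-law CRF: I = vmax * (L / vmax)**(1 / g)."""
    return vmax * np.power(np.clip(img_is, 0.0, vmax) / vmax, g)

# round trip through the assumed forward CRF recovers the linear values
L_lin = np.array([0.0, 10.0, 100.0, 255.0])
I_obs = 255.0 * (L_lin / 255.0) ** (1.0 / 2.2)   # assumed forward CRF
print(inverse_crf(I_obs))
```

Noise estimation would then be run on the output of `inverse_crf`, where the optoelectronic model holds over the whole dynamic range.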
The second possibility, pursued, e.g., by [15], is to estimate the noise after applying an inverse CRF to the image values of IS to return to LS. This is the approach followed in this study. The proposed method might also be applied at the end of the chain, i.e., on the demosaiced R–G–B bands, because estimation methods based on multivariate regressions, like those used in the present context, are insensitive to the spatial correlation of the noise [3] introduced by interpolation. However, noise estimation on interpolated data will depend on the interpolation algorithm, which creates new pixel values where the noise model valid before interpolation may no longer hold. Consider the simple case of a linear interpolation of two pixel values affected by purely photon noise. The new value generated by interpolation is the average of the existing values: the signal component is the average of the two signal components of the interpolating nodes, and the noise component is the average of the two noise components. However, it no longer holds that the noise component exhibits a variance proportional to the mean noise-free signal. In summary, interpolation of signal-independent noise preserves the noise model, i.e., the dependence of the noise on the signal, of the interpolating nodes. Interpolation of signal-dependent noise preserves the noise model only if γ = 1, i.e., for speckle noise. The noise variance is always reduced by the averaging process. The interpolated image is cyclostationary, and the noise model depends on the pixel position within a period equal to the interpolation factor.
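The linear-interpolation argument can be verified numerically. Below, two nodes carrying purely photon noise (γ = 0.5) are averaged; the levels and noise parameter are arbitrary illustrative values. The variance of the interpolated pixel equals the average of the node variances divided by 2, not the value σ_u²·μ that the photon model would predict at the interpolated mean:

```python
import numpy as np

rng = np.random.default_rng(2)
var_u = 0.02                       # photon-noise parameter (illustrative)
f1, f2 = 100.0, 900.0              # noise-free levels at the two nodes
n = 1_000_000

g1 = f1 + np.sqrt(f1) * rng.normal(0.0, np.sqrt(var_u), n)
g2 = f2 + np.sqrt(f2) * rng.normal(0.0, np.sqrt(var_u), n)
gi = 0.5 * (g1 + g2)               # linearly interpolated pixel

predicted = var_u * gi.mean()      # photon model at the mean: 0.02 * 500 = 10
measured = gi.var()                # (var_u*f1 + var_u*f2) / 4 = (2 + 18) / 4 = 5
print(predicted, measured)
```

The measured variance is half of what the photon model predicts at the interpolated mean, confirming that averaging breaks the proportionality between noise variance and signal level.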
With reference to Figure 4b, the experiments are aimed at verifying the noise model on raw split colors (Step 1), on CRF-corrected split colors (Step 2), and on demosaiced CRF-corrected data (Step 3). Step 1 is before CRF correction; Steps 2 and 3 are after CRF correction.
Table 1. Noise model parameters estimated from the whole test picture

          σ_u²     σ_w²      PN%    CD      SNR_dB   SNR′_dB
Step 1
  B       0.173    8.96      49     0.90    24.41    28.76
  G1      0.033    18.77     17     0.08    29.85    32.14
  G2      0.036    17.22     20     0.09    30.09    32.35
  R       0.104    11.71     33     0.56    26.81    30.08
Step 2
  B       3.435    1985.8    58     0.91    24.23    28.65
  G1      2.959    2417.4    72     0.84    29.54    32.53
  G2      3.149    2044.0    76     0.90    29.51    32.54
  R       2.973    2032.7    63     0.94    26.48    30.06
Step 3
  B       1.252    1035.6    49     0.96    27.88    29.19
  G       1.944    1276.1    76     0.83    31.59    34.27
  R       1.124    870.1     61     0.83    30.48    31.76
Table 1 reports the estimated noise model parameters, $\sigma_u^2$ and $\sigma_w^2$, and the coefficient of determination (CD) of the least-squares fit, which ranges in [0,1] and measures the strength of the match (CD = 1 means that all scatterpoints lie on the straight line). Also, the percentage of photon noise over the cumulative noise power (PN%) is provided. The average SNR is reported for each fit; it is computed from the two noise model parameters and from the average signal in the corresponding channel, according to (9). Both raw (Step 1) and corrected (Steps 2 and 3) data have been analyzed.
What stands out from the results in Table 1, especially from the CD, is that the optoelectronic noise model (3) is highlighted in the corrected data (Steps 2 and 3), while it is not evident in the raw data (Step 1), apart from the blue band. Hence, reliable noise values are only those relative to corrected and possibly interpolated data appearing in the middle and lower parts of Table 1. On Step 1 raw data, there is a good fit of the noise model only on the blue channel; both green components fit very poorly; the red channel, being moderately affected by saturation, exhibits intermediate values of CD. On Step 2 corrected data, there is an excellent fit of the model for all color components B, G1, G2, and R. The contribution of photon noise is generally larger than that of electronic noise, especially on the brightest green channels, as evidenced by PN%. The electronic noise, however, is not negligible with respect to the photon noise; hence, methods aimed at converting pure photon noise into signal-independent Gaussian noise, like the Anscombe transform [21], may not in principle be employed. Concerning Step 3 data, interpolation produces cyclostationary as well as spatially correlated noise; average values of the parameters are estimated by the proposed procedure. The discrepancy between the values of the noise model parameters at Steps 2 and 3 is again due to interpolation, which increases the SNR. As an example, a bilinear interpolation increases the average SNR of the B and R bands by 3.59 dB and that of the G band by 1.25 dB. Hence, the values of the measured noise parameters are expected to be lower for Step 3 data than for Step 2 data. Eventually, noise reduction by means of a wavelet-based LMMSE filter [22], tailored to the estimated parameters of the optoelectronic noise model, has been performed. The SNR values after filtering are denoted as SNR′ in the last column of Table 1.
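Although the plain Anscombe transform targets pure Poisson noise, a first-order variance-stabilizing transform can be written for the mixed model Var(g) = σ_u²μ + σ_w², in the spirit of the generalized Anscombe transform. The sketch below is our own delta-method construction, not a procedure from the article, and all numeric values are illustrative:

```python
import numpy as np

def stabilize(g, var_u, var_w):
    """First-order variance-stabilizing transform for noise with
    Var(g) = var_u * mu + var_w: T(g) = (2 / var_u) * sqrt(var_u * g + var_w),
    so that Var(T(g)) is approximately 1 on homogeneous areas."""
    return (2.0 / var_u) * np.sqrt(np.maximum(var_u * g + var_w, 0.0))

# after the transform, a homogeneous area should show variance close to 1
rng = np.random.default_rng(3)
f, n = 500.0, 1_000_000
g = f + np.sqrt(f) * rng.normal(0.0, 0.2, n) + rng.normal(0.0, 3.0, n)
t = stabilize(g, var_u=0.04, var_w=9.0)
print(t.var())
```

After stabilization, a denoiser designed for signal-independent Gaussian noise can be applied, with the transform inverted afterwards; this is one common use of the estimated parameter pair.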
By comparing SNR′ at Steps 1 and 2, we must consider that the inverse CRF (see Figure 6), being a convex function, lowers the SNR of its input. The decrement is approximately 0.2 dB and may be found as the difference between the blue SNR at Step 1 and the blue SNR at Step 2. If such an offset is applied to the SNR′ values of Step 2, we can conclude that the noise is estimated and filtered out better in the Step 2 domain than in the Step 1 domain. Noise filtering at Step 3 is less effective because of interpolation, which causes the noise to become spatially correlated, and hence more difficult to reject, at least with conventional LMMSE estimators. The differences SNR′ − SNR at Steps 3 and 2 evidence this trend, which is otherwise expected from theory.
Conclusions and developments
Modern CCD color cameras produce corrected R–G–B images dominated by optoelectronic noise, a mixture of signaldependent photon noise and signalindependent electronic noise. The parameters of the noise model can be measured on a single image by means of an original unsupervised procedure relying on a bivariate linear regression of local mean and variance. It is noteworthy that such a noise model does not strictly hold for raw data, but only once the CRF has been corrected and the original LS has been restored from nonlinearities introduced by the electronic chain.
The full knowledge of the parametric noise model can be useful not only in applications requiring preliminary denoising, but also in surveillance applications, in which no denoising is performed but automatic detection is ruled by thresholds that are presumably related to the noise model. Restoration will also benefit from the knowledge of a parametric noise model, including its autocorrelation function. Its estimation, however, whenever performed on R–G–B data, is complicated by the demosaicing and interpolation steps, especially because interpolation algorithms, aimed at reducing the impairments originated by Bayer's mosaicing pattern, are generally adaptive, may be nonlinear, and, above all, are not disclosed by manufacturers. Therefore, the most suitable domain for this kind of processing is undoubtedly the one in which the color components have been split, but not yet interpolated.
Endnotes
^{a}The most common meaning of "raw data" is data expressed in digital counts that have not yet been converted to physical units, according to the relationship between what is measured and the outcome of the measurement.
^{b}A MATLAB implementation of the algorithm is available at http://www.cs.tut.fi/~foi/sensornoise.html.
Declarations
Acknowledgements
The authors are indebted to the Guest Editor and to the anonymous reviewers, whose insightful comments have notably improved the organization and presentation of the article.
References
1. Argenti F, Bianchi T, Alparone L: Multiresolution MAP despeckling of SAR images based on locally adaptive generalized Gaussian pdf modeling. IEEE Trans. Image Process. 2006, 15(11):3385–3399.
2. Lee JS, Hoppel K, Mango SA: Unsupervised estimation of speckle noise in radar images. Int. J. Imag. Syst. Technol. 1993, 4:298–305.
3. Aiazzi B, Alparone L, Barducci A, Baronti S, Pippi I: Estimating noise and information of multispectral imagery. J. Opt. Eng. 2002, 41(3):656–668. doi:10.1117/1.1447547
4. Aiazzi B, Alparone L, Barducci A, Baronti S, Marcoionni P, Pippi I, Selva M: Noise modelling and estimation of hyperspectral data from airborne imaging spectrometers. Ann. Geophys. 2006, 49:1–9.
5. Foi A, Trimeche M, Katkovnik V, Egiazarian K: Practical Poissonian-Gaussian noise modeling and fitting for single-image raw data. IEEE Trans. Image Process. 2008, 17(10):1737–1754.
6. Liu C, Szeliski R, Kang SB, Zitnick CL, Freeman WT: Automatic estimation and removal of noise from a single image. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 30(2):299–314.
7. Jain AK: Fundamentals of Digital Image Processing. 1989.
8. Tur M, Chin KC, Goodman JW: When is speckle multiplicative? Appl. Opt. 1982, 21(7):1157–1159. doi:10.1364/AO.21.001157
9. Oliver C, Quegan S: Understanding Synthetic Aperture Radar Images. 1998.
10. Argenti F, Torricelli G: Speckle suppression in ultrasonic images based on undecimated wavelets. EURASIP J. Appl. Signal Process. 2003, 2003(5):470–478. doi:10.1155/S1110865703211136
11. Pratt WK: Digital Image Processing. 1991.
12. Aiazzi B, Baronti S, Casini A, Lotti F, Mattei A, Santurri L: Quality issues for archival of ancient documents. In: Mathematics of Data/Image Coding, Compression, and Encryption III; 2000:115–126.
13. Starck JL, Murtagh F, Bijaoui A: Image Processing and Data Analysis: The Multiscale Approach. 1998.
14. Aiazzi B, Alparone L, Baronti S, Garzelli A: Coherence estimation from incoherent multilook SAR imagery. IEEE Trans. Geosci. Remote Sens. 2003, 41(11):2531–2539. doi:10.1109/TGRS.2003.818813
15. Faraji H, MacLean WJ: CCD noise removal in digital images. IEEE Trans. Image Process. 2006, 15(9):2676–2685.
16. Healey GE, Kondepudy R: Radiometric CCD camera calibration and noise estimation. IEEE Trans. Pattern Anal. Mach. Intell. 1994, 16(3):267–276. doi:10.1109/34.276126
17. Tsin Y, Ramesh V, Kanade T: Statistical calibration of CCD imaging process. In: Proc. IEEE Int. Conf. Computer Vision; 2001:480–487.
18. Mann S: Intelligent Image Processing. 2002.
19. Irie K, McKinnon AE, Unsworth K, Woodhead IM: A model for measurement of noise in CCD digital video cameras. Meas. Sci. Technol. 2008, 19(4):045207. doi:10.1088/0957-0233/19/4/045207
20. Gunturk BK, Glotzbach J, Altunbasak Y, Schafer RW, Mersereau RM: Demosaicking: color filter array interpolation. IEEE Signal Process. Mag. 2005, 22:44–54.
21. Talbot H, Phelippeau H, Akil M, Bara S: Efficient Poisson denoising for photography. In: Proc. IEEE International Conference on Image Processing; 2009:3881–3884.
22. Argenti F, Torricelli G, Alparone L: MMSE filtering of generalised signal-dependent noise in spatial and shift-invariant wavelet domains. Signal Process. 2006, 86(8):2056–2066. doi:10.1016/j.sigpro.2005.10.014
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.