 Research
 Open Access
Unsupervised estimation of signal-dependent CCD camera noise
EURASIP Journal on Advances in Signal Processing volume 2012, Article number: 231 (2012)
Abstract
This article deals with an original method to estimate the noise introduced by optical imaging systems, such as CCD cameras. The power of the signal-dependent photon noise is decoupled from the power of the signal-independent electronic noise. The method relies on the multivariate regression of sample mean and variance. Statistically similar image pixels, not necessarily connected, produce scatterpoints that are clustered along a straight line, whose slope and intercept measure the signal-dependent and signal-independent components of the noise power, respectively. Experimental results carried out on a simulated noisy image and on true data from a commercial CCD camera highlight the accuracy of the proposed method and its applicability to separate R–G–B components that have been corrected for the nonlinear effects of the camera response function, but not yet interpolated to the full size of the mosaiced R–G–B image.
Introduction
Whenever the assumption of additive white Gaussian noise (AWGN) no longer holds, noise modeling and estimation become a preliminary step of the most advanced image analysis and interpretation systems. Preprocessing of data acquired with certain modalities, such as optoelectronic and coherent ones, either ultrasound or microwave, may benefit from a proper parametric modeling of the dependence of the noise on the signal and from accurate measurements of the noise model parameters. The knowledge of the noise model parameters is crucial for the task of denoising: maximum a posteriori probability estimators exhibit little tolerance to mismatches in the parametric noise model [1].
Recent advances in the technology of optoelectronic imaging devices have led to the availability of image data in which the photon noise contribution may no longer be neglected with respect to the electronic component, which is becoming less and less relevant. As a consequence, preprocessing and analysis methods must be revised, or even designed anew, to take into account that the noise is signal dependent.
To date, the most powerful noise estimation methods are based on the multivariate regression of local statistics [2–5]. However, the solution is complicated by the presence of two parametric noise components, one signal-dependent and the other signal-independent.
The original contribution of this article is twofold: on the one hand, a robust multivariate procedure is proposed to estimate the parameters of the mixed photon + electronic noise from a single image. On the other hand, the limits of validity of the optoelectronic noise model are discussed, a topic that has never been clarified by any of the most prominent articles, e.g., [5, 6]. On raw data^{a} such a model does not strictly hold, or rather, it holds only for a limited range of values above zero. Actually, raw data are available after a nonlinear mapping performed through the camera response function (CRF) of the device in order to avoid saturation effects. The optoelectronic noise model is correctly estimated on true raw data by other authors, e.g., [5], only if the range of nonlinearity is carefully avoided by the estimation procedure. Conversely, on CRF-corrected data, which are much more available and widespread (they might in principle be obtained by properly decimating the demosaiced R–G–B image), the optoelectronic noise model holds over the whole dynamic range and can be more easily estimated. Other authors develop their analysis in a local mean versus standard deviation space, which makes it hard to devise a specific parametric noise model [6]. Instead, we develop our model in the local mean versus variance space, in which a nearly linear relation can easily be recognized and exploited to obtain the noise parameters.
Signal-dependent noise modeling
A generalized signal-dependent (GSD) noise model has been proposed to deal with several different acquisition systems. Many types of noise can be described by using the following parametric model [7]
$$g(m,n) = f(m,n) + {\left[f(m,n)\right]}^{\gamma}\cdot u(m,n) + w(m,n)$$(1)
where (m,n) is the pixel location, g(m,n) the observed noisy image, f(m,n) the noise-free image, modeled as a nonstationary correlated random process, u(m,n) a stationary, zero-mean uncorrelated random process independent of f(m,n) with variance $\sigma_u^2$, and w(m,n) the electronic noise (zero-mean, white, and Gaussian, with variance $\sigma_w^2$). For a great variety of images, this model has been proven to hold for values of the parameter γ such that γ ≤ 1. The additive term v = f^{γ}·u is the GSD noise. Since f is generally nonstationary, the noise v will be nonstationary as well. The term w is the signal-independent noise component and is generally assumed to be Gaussian distributed.
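As a concrete illustration, the GSD model can be simulated directly from its definition. The following Python sketch (the function name and parameter values are ours, chosen only for illustration) adds GSD noise with γ = 0.5 to a flat patch and compares the resulting sample variance with the value μ_f σ_u² + σ_w² predicted by the model:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_gsd_noise(f, gamma=0.5, sigma_u=0.1, sigma_w=2.0):
    """Simulate the GSD model g = f + f**gamma * u + w.

    f       : noise-free image (non-negative array)
    gamma   : signal-dependence exponent
    sigma_u : std of the stationary, zero-mean process u
    sigma_w : std of the signal-independent Gaussian noise w
    """
    u = rng.normal(0.0, sigma_u, f.shape)
    w = rng.normal(0.0, sigma_w, f.shape)
    return f + np.power(f, gamma) * u + w

# Flat (homogeneous) patch: for gamma = 0.5 the sample variance should
# approach mu_f * sigma_u**2 + sigma_w**2 = 100*0.01 + 4.0 = 5.0
f = np.full((256, 256), 100.0)
g = add_gsd_noise(f)
print(g.mean(), g.var())
```

On a homogeneous patch like this one, the measured variance matches the linear mean-variance law that the estimation procedure of this article exploits.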
A purely multiplicative noise (γ = 1) is typical of coherent imaging systems; the majority of despeckling filters rely on the multiplicative fully developed speckle model [8]. In SAR imagery, the thermal noise contribution w is negligible compared to the speckle term f·u [9].
A more complex scenario is related to ultrasound image generation. Due to the great variability of scatterer size within each tissue, the electronic noise w cannot be neglected. Although a simplified noise model without the electronic term and with a value of γ in (0,1), e.g., γ = 1/2, is accepted as characteristic of this kind of images, the presence of the additional term w alleviates the need of exactly knowing γ. In fact, if γ is taken to be unity, as for coherent noise, an equivalent signal-dependent exponent γ_{eq} may be defined, such that
$$f(m,n)\cdot u(m,n) + w(m,n) = {\left[f(m,n)\right]}^{\gamma_{eq}(f)}\cdot u(m,n)$$(2)
The signal-dependent noise in Equation (2) is the combination of a purely multiplicative term and a signal-independent term. The outcome exhibits a dependence on the signal that vanishes as f → 0^{+}. Whenever f·u ≫ w, as happens for SAR speckle, it follows that γ_{eq}(f) → 1^{−}. In practice, the left-hand side of (2), i.e., (1) with γ = 1, is taken as a noise model suitable for ultrasonic images [10].
The model (1) is also suitable for film-grain noise [11], typical of images obtained by scanning a film (transparent support) or a photographic halftone print (reflecting support). In the former case, γ > 0 and values 1/3 ≤ γ ≤ 1/2 are typically encountered; in the latter case, negative values of γ are found [11]. For images obtained from monochrome or color scanners, the electronic noise w may not be neglected. Its variance is easily measured on a dark acquisition, i.e., when f = 0. The unknown exponent γ may be found by drawing the scatterplot of the logarithm of the measured local variance, diminished by the dark signal variance (estimate of $\sigma_w^2$), against the logarithm of the local mean [12]. Homogeneous pixels are clustered along a straight line in the log-scatterplot plane. The unknown γ is estimated as the slope of the regression line, $\sigma_u^2$ as the intercept.
Finally, the model (1) applies also to images produced by optoelectronic devices, such as CCD cameras, multispectral scanners, and imaging spectrometers. In that case, the exponent γ is equal to 0.5. The term $\sqrt{f}\,u$ stems from the Poisson-distributed number of photons captured by each pixel and is therefore denoted as photon noise [13]. This case will be investigated in the remainder of this article.
Optoelectronic noise
In this section, the optoelectronic noise model is reviewed in deeper detail. The main contributions of photon noise and electronic noise are derived and physically related to the instrument. The signal-to-noise ratio (SNR) is defined and its relationship to the noise model parameters is addressed. Let us rewrite the model (1) with γ = 0.5:
$$g(m,n) = f(m,n) + \sqrt{f(m,n)}\cdot u(m,n) + w(m,n)$$(3)
Equation (3) represents the electrical signal resulting from the photon conversion and from the dark current. The mean dark current has preliminarily been subtracted to yield g(m,n); however, its statistical fluctuations around the mean constitute most of the zero-mean electronic noise w(m,n). The term $\sqrt{f(m,n)}\cdot u(m,n)$ is the photon noise, whose mean is zero and whose variance is proportional to E[f(m,n)]. It represents the statistical fluctuation of the photon signal around its noise-free value, f(m,n), due to the granularity of the photons originating the electric charge.
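The Poisson origin of the photon noise can be checked numerically: for Poisson-distributed photon counts the variance equals the mean, which is exactly the γ = 0.5 dependence of model (3). A minimal sketch (the signal levels are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# For a Poisson count with mean f, the variance also equals f, so the
# fluctuation around the mean behaves as sqrt(f)*u with unit-variance u.
for f_level in (10.0, 100.0, 1000.0):
    counts = rng.poisson(f_level, 500_000)
    print(f_level, counts.mean(), counts.var())  # variance tracks the mean
```

The printed variance tracks the mean at every level, which is why the noise standard deviation grows as the square root of the signal.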
SNR
If the variance of (3) is calculated on homogeneous pixels, in which $\sigma_f^2(m,n)=0$ by definition, then, thanks to the independence of f, u, and w and to the fact that both u and w have null mean and are stationary, we can write
$$\sigma_g^2(m,n) = \mu_f(m,n)\,\sigma_u^2 + \sigma_w^2$$(4)
in which $\mu_f(m,n) \triangleq E[f(m,n)]$ is the nonstationary mean of f. From (3), $\mu_f(m,n)$ equals $\mu_g(m,n)$.
Let us define the local SNR at pixel position (m,n) as
$$\mathrm{SNR}(m,n) = \frac{\mu_f^2(m,n) + \sigma_f^2(m,n)}{\mu_f(m,n)\,\sigma_u^2 + \sigma_w^2}$$(5)
which on homogeneous pixels (i.e., $\sigma_f^2(m,n)=0$) becomes
$$\mathrm{SNR}(m,n) = \frac{\mu_f^2(m,n)}{\mu_f(m,n)\,\sigma_u^2 + \sigma_w^2}$$(6)
In (6), if $\mu_f(m,n)\,\sigma_u^2 \gg \sigma_w^2$, then
$$\mathrm{SNR}(m,n) \approx \frac{\mu_f(m,n)}{\sigma_u^2}$$(7)
That is, the SNR depends linearly on the mean photon signal. Instead, if $\mu_f(m,n)\,\sigma_u^2 \ll \sigma_w^2$, then
$$\mathrm{SNR}(m,n) \approx \frac{\mu_f^2(m,n)}{\sigma_w^2}$$(8)
which states that the SNR depends on the square of the mean photon signal.
In practical applications, the average SNR is used:
$$\mathrm{SNR} = 10\log_{10}\left[\frac{{\left(\bar{f}\right)}^{2}}{\bar{f}\,\sigma_u^2 + \sigma_w^2}\right]$$(9)
where $\bar{f}$ is obtained by averaging the observed noisy image, the noise being zero-mean, and the average local variance of f is assumed to be negligible, i.e., $\overline{f^2} \approx {\left(\bar{f}\right)}^2$.
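In code, the average SNR follows directly from the two noise parameters and the mean signal. The sketch below assumes the usual decibel definition of the average SNR (the function name and the numeric values are ours, for illustration), and shows the two asymptotic regimes of (7) and (8):

```python
import numpy as np

def average_snr_db(mean_signal, sigma_u2, sigma_w2):
    """Average SNR in dB: mean signal power over total noise power."""
    noise_power = mean_signal * sigma_u2 + sigma_w2
    return 10.0 * np.log10(mean_signal ** 2 / noise_power)

# Photon-dominated regime: SNR grows 10 dB per decade of mean signal
print(average_snr_db(100.0, 0.01, 0.0) - average_snr_db(10.0, 0.01, 0.0))
# Electronics-dominated regime: SNR grows 20 dB per decade of mean signal
print(average_snr_db(100.0, 0.0, 1.0) - average_snr_db(10.0, 0.0, 1.0))
```

The two printed slopes (10 and 20 dB per signal decade) are the linear and quadratic dependences on the mean photon signal described above.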
Estimation procedure
Equation (4) represents a straight line in the plane $(x,y)=(\mu_f,\sigma_g^2)=(\mu_g,\sigma_g^2)$, whose slope and intercept are equal to $\sigma_u^2$ and $\sigma_w^2$, respectively. The interpretation of (4) is that, on statistically homogeneous pixels, the theoretical nonstationary ensemble statistics (mean and variance) of the observed noisy image g(m,n) lie upon a straight line. In practice, homogeneous pixels with $\sigma_f^2(m,n)\equiv 0$ may be extremely rare, and theoretical expectations are approximated with local averages. Hence, the most homogeneous pixels in the scene appear in the mean-variance plane clustered along the straight line y = mx + y_0, in which $m=\sigma_u^2$ and $y_0=\sigma_w^2$.
The problem of measuring the two parameters of the optoelectronic noise model (3) has been shown to be equivalent to fitting a regression line to the scatterplot of homogeneous pixels, or at least of the most homogeneous pixels in the scene. The problem is now shifted to detecting the (most) statistically homogeneous pixels in an imaged scene.
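The linear mean-variance relation (4) can be verified with a few flat synthetic patches: each patch contributes one (sample mean, sample variance) point, and an ordinary least-squares line through those points recovers the two noise parameters. A minimal sketch with illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma_u2_true, sigma_w2_true = 0.01, 4.0

# One homogeneous patch per signal level: each contributes one
# (sample mean, sample variance) point in the mean-variance plane.
levels = [50.0, 100.0, 200.0, 400.0, 800.0]
means, variances = [], []
for mu in levels:
    f = np.full(20000, mu)
    g = (f + np.sqrt(f) * rng.normal(0, np.sqrt(sigma_u2_true), f.shape)
           + rng.normal(0, np.sqrt(sigma_w2_true), f.shape))
    means.append(g.mean())
    variances.append(g.var(ddof=1))

# Slope estimates sigma_u^2, intercept estimates sigma_w^2
slope, intercept = np.polyfit(means, variances, 1)
print(slope, intercept)
```

With perfectly homogeneous patches the fit is trivial; the procedure described next addresses the realistic case in which homogeneous pixels must first be found inside an arbitrary scene.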
One major drawback of the simultaneous estimation of the two parameters of a generic line is that at least two distinct clusters, not necessarily corresponding to two homogeneous image patches, are necessary to yield a steady and balanced line. The procedures developed by some of the authors for signal-independent noise estimation [4] and SAR speckle estimation [14], once extended to two-parameter noise estimation, have been found to be inadequate for the new task, mainly because the overall noise power, though accurately estimated, was not correctly split into its signal-dependent and signal-independent components.
The new procedure for noise estimation consists either of partitioning the image into blocks or of manually selecting only some regions of interest (ROIs). In both cases (unsupervised and semi-supervised), the sequence of blocks/ROIs, "blocks" in the following, is processed in the same way.

1. Calculate the global homogeneity threshold θ on the most densely populated bins of the binned scatterplot of the whole image, analogously to [14].
2. Set the block index k := 1.
3. If k > number of blocks, go to 7; else, within a K × K window (K = 2m + 1) sliding over the k-th block B_k, calculate the local statistics of the noisy image:
   - the average $\bar{g}(i,j)\equiv \hat{\mu}_g(i,j)$
     $$\bar{g}(i,j)=\frac{1}{K^2}\sum_{k=-m}^{m}\sum_{l=-m}^{m} g(i+k,j+l)$$(10)
   - the mean quadratic deviation from the average, $\hat{\sigma}_g^2(i,j)$
     $$\hat{\sigma}_g^2(i,j)=\frac{1}{K^2-1}\sum_{k=-m}^{m}\sum_{l=-m}^{m}{\left[g(i+k,j+l)-\bar{g}(i,j)\right]}^2$$(11)
4. Draw $\mathcal{S}_k$, the scatterplot of $\hat{\sigma}_g^2(i,j)$ versus $\hat{\mu}_g(i,j)$ of B_k.
5. Calculate the mass m_k (number of points) and the gravity center g_k (center of mass of the set of points) of $\mathcal{S}_k$.
6. Let R be the average quadratic distance of the scatterpoints from their gravity center, measured along the variance (y) axis: if R ≤ θ, save the coordinates of g_k and m_k, set k := k + 1, and go to 3. Otherwise, split $\mathcal{S}_k$ into four quadrants (bins), $\{\mathcal{S}_k^j, j = 1,\dots,4\}$, find the most densely populated bin $\mathcal{S}_k^j$, set $\mathcal{S}_k := \mathcal{S}_k^j$, and go to 5.
7. Draw the mean-to-variance scatterplot from the coordinates {g_k} and masses {m_k}. A two-parameter regression line is fitted to the scatterplot; the slope and intercept of such a straight line are the estimates of the two noise model parameters.
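The seven steps above can be sketched end-to-end in Python. The sketch below is a simplification under stated assumptions: all function names are ours, a fixed θ replaces the histogram-based threshold of step 1, and the quadrant-splitting depth is capped.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def local_stats(block, K=7):
    """Sliding-window sample mean and variance (in the spirit of Eqs. (10)-(11))."""
    win = sliding_window_view(block, (K, K))
    mu = win.mean(axis=(-2, -1)).ravel()
    var = win.var(axis=(-2, -1), ddof=1).ravel()
    return mu, var

def gravity_center(mu, var, theta, max_splits=8):
    """Steps 5-6: keep the densest quadrant of the block scatterplot until
    the spread along the variance axis drops below theta; return the
    gravity center coordinates and the mass (point count)."""
    for _ in range(max_splits):
        gx, gy = mu.mean(), var.mean()
        if np.sqrt(np.mean((var - gy) ** 2)) <= theta or mu.size < 8:
            break
        quads = [(mu <= gx) & (var <= gy), (mu <= gx) & (var > gy),
                 (mu > gx) & (var <= gy), (mu > gx) & (var > gy)]
        densest = max(quads, key=np.count_nonzero)
        mu, var = mu[densest], var[densest]
    return mu.mean(), var.mean(), mu.size

def estimate_noise(img, blocks=8, K=7, theta=2.5):
    """Step 7: mass-weighted regression line over the block gravity
    centers; slope ~ sigma_u^2, intercept ~ sigma_w^2."""
    H, W = img.shape[0] // blocks, img.shape[1] // blocks
    xs, ys, ms = [], [], []
    for bi in range(blocks):
        for bj in range(blocks):
            mu, var = local_stats(img[bi*H:(bi+1)*H, bj*W:(bj+1)*W], K)
            x, y, m = gravity_center(mu, var, theta)
            xs.append(x); ys.append(y); ms.append(m)
    # np.polyfit applies the weights to the residuals, so sqrt(mass)
    # yields a mass-weighted least-squares fit
    slope, intercept = np.polyfit(xs, ys, 1, w=np.sqrt(ms))
    return slope, intercept
```

On a synthetic image made of flat tiles with, say, σ_u² = 0.01 and σ_w² = 4, the returned slope and intercept land close to the true values, mirroring the behavior reported for the full procedure.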
Figure 1 shows the last step of the noise estimation procedure. The scatterplot containing the 64 gravity centers of the 64 partition blocks of the noisy image in Figure 2b is displayed together with its regression line. The size of each dot is proportional to its mass, which is taken into account in the calculation of the regression line.
The main advantage of the above procedure is that a scarcely homogeneous image block, i.e., a block containing few statistically similar pixels, not necessarily forming a connected set, yields a gravity center with a low weight, while a block containing many homogeneous points contributes a center with a large weight. The multiplicity of centers ensures that the regression line is not undetermined, as would happen in the case of a unique center originated from an isotropically spread cloud of dense scatterpoints.
Experiments on simulated noisy images
The proposed method has preliminarily been validated on simulated noisy images. Results on the synthetic noise-free test image used in [5] are presented here. The original test image is shown in Figure 2a. A noisy version with average SNR (9) equal to 17 dB, 77% signal-dependent photon noise (γ = 0.5), and 23% signal-independent electronic noise has been generated and is shown in Figure 2b.
The variance-to-mean scatterplots, shown in Figure 2c,d, highlight the noise model. In Figure 2c, no noise has been superimposed and nine points can be detected, approximately aligned along the x-axis. The slope of the joining line is equal to zero and the intercept is equal to the variance of the integer round-off error, i.e., to 1/12. Conversely, Figure 2d evidences the presence of nine clusters that are aligned along a straight line having slope and intercept equal to the parameters of the superimposed noise.
Noisy versions of the test image with 50% photon and 50% electronic noise have been generated with SNR ranging between 15 and 30 dB. The proposed method and the method described in [5],^{b} which conversely exploits a wavelet decomposition in order to find homogeneous regions, have been used to estimate the noise model parameters. In the latter case, the noisy image is clipped below zero, as happens with a real CCD camera. For the proposed method, the results without clipping are almost identical to those with clipping, provided that the gravity centers of clusters originated by dark image blocks are preliminarily discarded by thresholding their mean. Figure 3a,c,e shows the estimated slope and intercept of the noise model in the (μ, σ²) plane, as well as the estimated SNR, varying with the true SNR, for the proposed method; Figure 3b,d,f shows the same for the method in [5]. The accuracy of both is very high, especially on SNR. The proposed method, however, exhibits a slightly better ability in splitting the noise power into its signal-dependent and signal-independent components.
Imaging model of CCD cameras
According to a recent article [15] that integrates previous studies [16, 17], CCD imaging can be represented by three subsystems: the CCD sensor array, which converts the photons at each pixel into electrons and thus into a voltage; the camera electronics, which usually apply a nonlinear compression to the voltage values; and an analog-to-digital (A/D) conversion, which generates the digital image values.
The conversion of light, or photons, to electronic charge depends on many factors. The electronic charge consists of electrons that are excited from the silicon valence band to the conduction band as a result of the interaction between the silicon and the incident light. The amount of charge generated for a given source of light is determined by several factors, the main ones depending on wavelength, i.e., on photon energy, and to a lesser extent on nonlinearities in the conversion process. As a consequence of the latter effects, the efficiency of the charge generation process degrades and the conversion of photons to signal electrons is incomplete. Further nonlinearities are introduced by the electronics of the camera, which is often designed to compress the wide range of irradiance values of the scene into a fixed range of measurable values.
The resulting effect is that the mapping between the incident photons and the camera output is nonlinear and is described by a function denoted as the CRF. The CRF can be assumed to be linear for low intensity values of the incident light and must be considered when modeling CCD noise from the digital counts at the A/D converter (ADC). According to Figure 4, we can assume that the ideal CCD output values, before processing and digitization, belong to a linear space denoted as light space (LS) [18]. Any CCD nonlinearity can be incorporated into the CRF. The output of the imaging device is assumed to belong to a nonlinear image space (IS). If LS values are denoted as q, then IS values are modeled as f(q), where f(·) represents the CRF. Eventually, after inverting the CRF, the image values are restored to the LS, where the dependence of pixel values on the incident light becomes linear again. More complex noise models accounting for a wider range of phenomena can be devised [19], with the drawback that model-based analytical solutions may become intractable [6].
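The LS ↔ IS mapping can be illustrated with a toy CRF: a hypothetical curve, linear at low light and compressive above a knee (the functional form and the knee value are our invention for illustration; real CRFs are measured per device). Applying CRF⁻¹ to the CRF output restores the LS values:

```python
import numpy as np

# Hypothetical CRF: identity below the knee, smoothly compressive above.
def crf(q, knee=0.25):
    q = np.asarray(q, dtype=float)
    return np.where(q <= knee,
                    q,
                    knee + (1 - knee) * np.tanh((q - knee) / (1 - knee)))

def crf_inverse(y, knee=0.25):
    y = np.asarray(y, dtype=float)
    t = np.clip((y - knee) / (1 - knee), 0.0, 1.0 - 1e-12)  # guard arctanh domain
    return np.where(y <= knee, y, knee + (1 - knee) * np.arctanh(t))

q = np.linspace(0.0, 1.0, 11)          # LS values
print(np.allclose(crf_inverse(crf(q)), q))  # round trip restores LS values
```

The round trip is exact up to floating-point precision; with real data, only the nonlinearity of the measured CRF distinguishes the two spaces.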
We wish to highlight that the main contribution to the overall CRF is a nonlinearity purposely introduced by the manufacturer to prevent clipping above the maximum value allowed by the ADC. Therefore, clipped upper values are never encountered, except in the case of a severe and uncontrolled overexposure. Instead, negative values that are clipped below zero may occur in dark image regions. Incidentally, negative values depend on the electronic noise only, after the average dark signal has been subtracted, not on the photon noise, because the overall number of photons received cannot be negative. Now, if the inverse CRF is derived in a laboratory in such a way that the overall response of the instrument is linearized, the correction also includes a partial compensation of the positive bias in the mean response at very low levels, introduced by negative clipping in the presence of "dark" noise (the noise associated with the dark signal). In other words, negative clipping, whose extent is limited to a few counts on pixels whose photon signal is approximately zero and is ruled by the RMS value of the dark noise, is simply treated as another contribution to the overall CRF, together with the undesired nonlinearity of the optoelectronic chain (imperfect conversion of photons to electrons as the number of photons increases) and, especially, the saturating nonlinear response imposed to prevent overflow in the ADC.
Experiments on a CCD camera
A further experiment was made on the data produced by a commercial CCD color camera. The imaging device is a Nikon D70s digital camera equipped with a 3008 × 2000 pixel CCD of 23.7 × 15.6 mm physical dimensions. The radiometric resolution is 12 bit. Acquired images are made available in the NEF (Nikon electronic file) 12-bit lossless compressed mosaiced raw data format. On the decompressed raw images, demosaicing is performed to pass from the Bayer-pattern images made available by the optoelectronic acquisition system to the conventional R–G–B image format [20]. Figure 5 shows that demosaicing is equivalent to splitting the mosaiced image into its polyphase components R–G–G–B and interpolating the latter to yield an R–G–B image of the same size as the mosaiced image. The inverse CRF, shown in Figure 6, can be applied to raw data to pass from the 12-bit IS representation to the LS radiance images an ideal CCD would collect in ideal noise-free conditions. The inverse CRF has the purpose of restoring the linear dependence between light and image values, represented as radiance values. Since the CRF accounts both for the compression of values introduced to avoid overflow in the ADC and for the intrinsic nonlinearities of the photoelectronic instrument, it is experimentally measured in order to produce the inverse function (CRF^{−1}) that brings IS values back to LS values. Split R–G–G–B components, both raw and CRF-corrected, have been analyzed in this study, together with the demosaiced R–G–B image. The CRF-corrected and demosaiced R–G–B format is available at the end of the processing chain in Figure 4b. A 1024 × 768 detail of the test scene is displayed in Figure 7.
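The polyphase split of Figure 5 amounts to simple strided indexing. The sketch below assumes an RGGB Bayer layout (the actual layout is device-dependent, and the function name is ours):

```python
import numpy as np

def split_bayer_rggb(raw):
    """Split a Bayer-mosaiced frame into its four polyphase components,
    assuming an RGGB pattern starting at the top-left pixel."""
    return {"R":  raw[0::2, 0::2],   # even rows, even columns
            "G1": raw[0::2, 1::2],   # even rows, odd columns
            "G2": raw[1::2, 0::2],   # odd rows, even columns
            "B":  raw[1::2, 1::2]}   # odd rows, odd columns

raw = np.arange(16).reshape(4, 4)    # tiny synthetic mosaiced frame
parts = split_bayer_rggb(raw)
print(parts["R"])  # -> [[0 2] [8 10]]
```

Each component is a quarter-size image of a single color channel, which is exactly the un-interpolated domain in which the noise estimation of this article is carried out.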
In order to estimate the CCD noise there are two possibilities. The first is to recover the noise parameters in IS for small values of the digital counts, which correspond to a linear mapping from LS, taking into account saturation and/or clipping effects [5]. More exactly, saturation is a reversible nonlinearity purposely introduced by the manufacturer to prevent the values of bright pixels of the mosaiced image from falling outside the dynamic range of the ADC, thereby being clipped above the maximum. Clipping is an irreversible operation and is associated with a partial or total loss of information. Clipping below zero may occur when the dark signal is subtracted (see Figure 4b); its effect has carefully been analyzed in [5] and found to be beneficial for noise parameter estimation. Clipping above the maximum level allowed by the ADC is always an undesired effect; its occurrence, usually originated by overexposure, should be avoided.
Whenever ADCs with a high bit depth, e.g., 14 bit, are employed, the nonlinearity of the imaging chain is weak, because there is no longer any need to purposely compress the range of values of the signal. Only the undesired nonlinearity effects due to the imperfect conversion of photons into electrons survive.
The second possibility, which is pursued, e.g., by [15], is to estimate the noise after applying the inverse CRF to the IS image values to return to LS. This is the approach followed in this study. The proposed method might be applied also at the end of the chain, i.e., on the demosaiced R–G–B bands, because estimation methods based on multivariate regressions, such as those used in the present context, are insensitive to the spatial correlation of the noise [3] introduced by interpolation. However, noise estimation on interpolated data will depend on the interpolation algorithm, which creates new pixel values where the noise model valid before interpolation may no longer hold. Consider the simple case of a linear interpolation of two pixel values affected by purely photon noise. The new value generated by interpolation is the average of the existing values: the signal component will be the average of the two signal components of the interpolating nodes, and the noise component will be the average of the two noise components. However, it no longer holds that the noise component exhibits a variance proportional to the mean noise-free signal. In summary, interpolation of signal-independent noise preserves the noise model, i.e., the dependence of the noise on the signal, of the interpolating nodes. Interpolation of signal-dependent noise preserves the noise model only if γ = 1, i.e., for speckle noise. The noise variance is always reduced by the averaging process. The interpolated image is cyclostationary and the noise model depends on the pixel position within a period equal to the interpolation factor.
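The effect of interpolation on the γ = 0.5 model can be checked numerically: averaging two independent pixels with purely photon noise halves the noise variance, so the interpolated pixel no longer satisfies variance = mean × σ_u². A small sketch with illustrative values:

```python
import numpy as np

rng = np.random.default_rng(2)

f, sigma_u2 = 100.0, 0.04
n = 200_000
# Two interpolating nodes with the same signal level and independent photon noise
g1 = f + np.sqrt(f) * rng.normal(0, np.sqrt(sigma_u2), n)
g2 = f + np.sqrt(f) * rng.normal(0, np.sqrt(sigma_u2), n)
g_interp = 0.5 * (g1 + g2)           # bilinear interpolation between the nodes

model_var = g_interp.mean() * sigma_u2   # variance Eq. (4) would predict (sigma_w2 = 0)
print(g1.var(), model_var, g_interp.var())
```

The node variance and the model prediction both sit near f·σ_u², while the interpolated pixel's variance is only half of that: the interpolated value violates the pre-interpolation noise model, as argued above.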
With reference to Figure 4b, the experiments are aimed at verifying the noise model on raw split colors (Step 1), on CRF-corrected split colors (Step 2), and on demosaiced CRF-corrected data (Step 3). Step 1 is before CRF correction; Steps 2 and 3 are after CRF correction.
Figure 8a,b refers to the blue band and exhibits pronounced linear trends in the variance-to-mean scatterplots, both before and after CRF correction. Indeed, the dynamic range of the raw blue data does not exceed the linear portion of the inverse CRF (Figure 6). Apart from the different values along the axes, digital counts in the former and radiance values in the latter, the scatterplots before and after correction are very similar and suggest that the optoelectronic noise model (3) holds in both cases.
Conversely, Figure 8c,d, which refer to either of the green components, show that the optoelectronic noise model does not hold for the raw data, whose amplitudes, unlike those of the blue data, exceed the linear part of the forward CRF (see Figure 4a). However, the noise model is well verified once the CRF has been corrected and the original LS has thus been restored (see Figure 4b). The red channel yields trends intermediate between those of the blue and of the green channels; they are therefore not reported as scatterplots, but only among the quantitative results in Table 1. The comparison between the data at Steps 1 and 2 suggests that CRF correction is crucial for the fulfillment of the optoelectronic noise model (3).
Table 1 reports the estimated noise model parameters, $\sigma_u^2$ and $\sigma_w^2$, and the coefficient of determination (CD) of the least squares fit, which ranges in [0,1] and measures the strength of the match (CD = 1 means all scatterpoints lie on the straight line). Also, the percentage of photon noise over the cumulative noise power (PN%) is provided. The average SNR is reported for each fit; it is computed from the two noise model parameters and from the average signal in the corresponding channel, according to (9). Both raw (Step 1) and corrected (Steps 2 and 3) data have been analyzed.
What stands out from the results in Table 1, especially from the CD, is that the optoelectronic noise model (3) is highlighted in the corrected data (Steps 2 and 3), while it is not evident in the raw data (Step 1), apart from the blue band. So, reliable noise values are only those relative to the corrected and possibly interpolated data appearing in the middle and lower parts of Table 1. On the Step 1 raw data, there is a good fit of the noise model only on the blue channel; both green components fit very poorly; the red channel, being moderately affected by saturation, exhibits intermediate values of CD. On the Step 2 corrected data, there is an excellent fit of the model for all color components B, G1, G2, and R. The contribution of photon noise is generally larger than that of electronic noise, especially on the brightest green channels, as evidenced by PN%. The electronic noise, however, is not negligible with respect to the photon noise. Hence, methods aimed at converting purely photonic noise into signal-independent Gaussian noise, like the Anscombe transform [21], may not in principle be employed. Concerning the Step 3 data, interpolation produces cyclostationary as well as spatially correlated noise; average values of the parameters are estimated by the proposed procedure. The discrepancy between the values of the noise model parameters at Steps 2 and 3 is again due to interpolation, which increases the SNR. As an example, a bilinear interpolation increases the average SNR of the B and R bands by 3.59 dB and that of the G band by 1.25 dB. Hence, the values of the measured noise parameters are expected to be lower for the Step 3 data than for the Step 2 data. Eventually, noise reduction by means of a wavelet-based LMMSE filter [22], tailored on the estimated parameters of the optoelectronic noise model, has been performed. The SNR values after filtering are denoted as SNR′ in the last column of Table 1.
When comparing SNR′ at Steps 1 and 2, we must consider that the inverse CRF (see Figure 6), being a convex function, lowers the SNR of its input. The decrement is approximately 0.2 dB and may be found as the difference between the blue SNR at Step 1 and the blue SNR at Step 2. If such an offset is applied to the SNR′ values of Step 2, we can conclude that the noise is estimated and filtered out better in the Step 2 domain than in the Step 1 domain. Noise filtering at Step 3 is less effective because of interpolation, which makes the noise spatially correlated and hence more difficult to reject, at least with conventional LMMSE estimators. The differences SNR′ − SNR at Steps 3 and 2 evidence this trend, which is otherwise expected from theory.
Conclusions and developments
Modern CCD color cameras produce corrected R–G–B images dominated by optoelectronic noise, a mixture of signal-dependent photon noise and signal-independent electronic noise. The parameters of the noise model can be measured on a single image by means of an original unsupervised procedure relying on a bivariate linear regression of local mean and variance. It is noteworthy that such a noise model does not strictly hold for raw data, but only once the CRF has been corrected and the original LS has been restored from the nonlinearities introduced by the electronic chain.
The full knowledge of the parametric noise model can be useful not only in applications requiring preliminary denoising, but also in surveillance applications, in which no denoising is performed but automatic detection is ruled by thresholds that are presumably related to the noise model. Restoration will also benefit from the knowledge of a parametric noise model, including its autocorrelation function. Its estimation, however, whenever performed on R–G–B data, is complicated by the demosaicing and interpolation steps, especially because interpolation algorithms, aimed at reducing the impairments originated by Bayer's mosaicing pattern, are generally adaptive, may be nonlinear, and above all are not disclosed by manufacturers. Therefore, the most suitable domain for this kind of processing is undoubtedly the one where the color components have been split but not yet interpolated.
Endnotes
^{a}The most common meaning of "raw data" is data expressed in digital counts that have not yet been converted to physical units according to the relationship between what is measured and the outcome of the measurement.
^{b}A MATLAB implementation of the algorithm is available at http://www.cs.tut.fi/~foi/sensornoise.html.
References
 1.
Argenti F, Bianchi T, Alparone L: Multiresolution MAP despeckling of SAR images based on locally adaptive generalized Gaussian pdf modeling. IEEE Trans. Image Process. 2006, 15(11):3385–3399.
 2.
Lee JS, Hoppel K, Mango SA: Unsupervised estimation of speckle noise in radar images. Int. J. Imag. Syst. Technol. 1993, 4:298–305.
 3.
Aiazzi B, Alparone L, Barducci A, Baronti S, Pippi I: Estimating noise and information of multispectral imagery. J. Opt. Eng. 2002, 41(3):656–668. 10.1117/1.1447547
 4.
Aiazzi B, Alparone L, Barducci A, Baronti S, Marcoionni P, Pippi I, Selva M: Noise modelling and estimation of hyperspectral data from airborne imaging spectrometers. Ann. Geophys. 2006, 49:1–9.
 5.
Foi A, Trimeche M, Katkovnik V, Egiazarian K: Practical Poissonian-Gaussian noise modeling and fitting for single-image raw data. IEEE Trans. Image Process. 2008, 17(10):1737–1754.
 6.
Liu C, Szeliski R, Kang SB, Zitnick CL, Freeman WT: Automatic estimation and removal of noise from a single image. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 30(2):299–314.
 7.
Jain AK: Fundamentals of Digital Image Processing. 1989.
 8.
Tur M, Chin KC, Goodman JW: When is speckle multiplicative? Appl. Opt. 1982, 21(7):1157–1159. 10.1364/AO.21.001157
 9.
Oliver C, Quegan S: Understanding Synthetic Aperture Radar Images. 1998.
 10.
Argenti F, Torricelli G: Speckle suppression in ultrasonic images based on undecimated wavelets. EURASIP J. Appl. Signal Process. 2003, 2003(5):470–478. 10.1155/S1110865703211136
 11.
Pratt WK: Digital Image Processing. 1991.
 12.
Aiazzi B, Baronti S, Casini A, Lotti F, Mattei A, Santurri L: Quality issues for archival of ancient documents. Mathematics of Data/Image Coding, Compression, and Encryption III 2000, 115–126.
 13.
Starck JL, Murtagh F, Bijaoui A: Image Processing and Data Analysis: The Multiscale Approach. 1998.
 14.
Aiazzi B, Alparone L, Baronti S, Garzelli A: Coherence estimation from incoherent multilook SAR imagery. IEEE Trans. Geosci. Remote Sens. 2003, 41(11):2531–2539. 10.1109/TGRS.2003.818813
 15.
Faraji H, MacLean WJ: CCD noise removal in digital images. IEEE Trans. Image Process. 2006, 15(9):2676–2685.
 16.
Healey GE, Kondepudy R: Radiometric CCD camera calibration and noise estimation. IEEE Trans. Pattern Anal. Mach. Intell. 1994, 16(3):267–276. 10.1109/34.276126
 17.
Tsin Y, Ramesh V, Kanade T: Statistical calibration of CCD imaging process. Proc. IEEE Int. Conf. Computer Vision 2001, 480–487.
 18.
Mann S: Intelligent Image Processing. 2002.
 19.
Irie K, McKinnon AE, Unsworth K, Woodhead IM: A model for measurement of noise in CCD digital video cameras. Measur. Sci. Technol. 2008, 19(4):045207. 10.1088/0957-0233/19/4/045207
 20.
Gunturk BK, Glotzbach J, Altunbasak Y, Schafer RW, Mersereau RM: Demosaicking: color filter array interpolation. IEEE Signal Process. Mag. 2005, 22:44–54.
 21.
Talbot H, Phelippeau H, Akil M, Bara S: Efficient Poisson denoising for photography. Proc. IEEE International Conference on Image Processing 2009, 3881–3884.
 22.
Argenti F, Torricelli G, Alparone L: MMSE filtering of generalised signal-dependent noise in spatial and shift-invariant wavelet domains. Signal Process. 2006, 86(8):2056–2066. 10.1016/j.sigpro.2005.10.014
Acknowledgements
The authors are indebted to the Guest Editor and to the anonymous reviewers, whose insightful comments have notably improved the organization and presentation of the article.
Additional information
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Aiazzi, B., Alparone, L., Baronti, S. et al. Unsupervised estimation of signal-dependent CCD camera noise. EURASIP J. Adv. Signal Process. 2012, 231 (2012). https://doi.org/10.1186/1687-6180-2012-231
Keywords
 Noise Model
 Image Space
 Noisy Image
 Noise Estimation
 Gravity Center