
Unsupervised estimation of signal-dependent CCD camera noise

Abstract

This article presents an original method to estimate the noise introduced by optical imaging systems, such as CCD cameras. The power of the signal-dependent photon noise is decoupled from the power of the signal-independent electronic noise. The method relies on the multivariate regression of sample mean and variance. Statistically similar image pixels, not necessarily connected, produce scatterpoints that are clustered along a straight line, whose slope and intercept measure the signal-dependent and signal-independent components of the noise power, respectively. Experimental results carried out on a simulated noisy image and on true data from a commercial CCD camera highlight the accuracy of the proposed method and its applicability to separate R–G–B components that have been corrected for the nonlinear effects of the camera response function, but not yet interpolated to the full size of the mosaiced R–G–B image.

Introduction

Whenever the assumption of additive white Gaussian noise (AWGN) no longer holds, noise modeling and estimation become a preliminary step of the most advanced image analysis and interpretation systems. Preprocessing of data acquired with certain modalities, such as optoelectronic and coherent (either ultrasound or microwave) imaging, may benefit from proper parametric modeling of the dependence of the noise on the signal and from accurate measurements of the noise model parameters. The knowledge of the noise model parameters is crucial for the task of denoising: maximum a posteriori probability estimators exhibit little tolerance to mismatches in the parametric noise model [1].

Recent advances in the technology of optoelectronic imaging devices have led to the availability of image data in which the photon noise contribution may no longer be neglected with respect to the electronic component, which is becoming less and less relevant. As a consequence, preprocessing and analysis methods must be revised, or even designed anew, to take into account that the noise is signal dependent.

To date, the most powerful noise estimation methods are based on multivariate regressions of local statistics [2–5]. However, the solution is complicated by the presence of two parametric noise components, one signal-dependent and one signal-independent.

The original contribution of this article is twofold: on one side, a robust multivariate procedure is proposed to estimate the parameters of the mixed photon + electronic noise from a single image; on the other side, the limits of validity of the optoelectronic noise model are discussed, a topic that has never been clarified by the most prominent articles, e.g., [5, 6]. On raw data^a such a model does not strictly hold, or rather it holds only for a limited range of values above zero. Actually, raw data are made available after a nonlinear mapping performed through the camera response function (CRF) of the device in order to avoid saturation effects. The optoelectronic noise model is correctly estimated on true raw data by other authors, e.g., [5], only if the range of nonlinearity is carefully avoided by the estimation procedure. Conversely, on CRF-corrected data, which are much more available and widespread (they might in principle be obtained by properly decimating the demosaiced R–G–B image), the optoelectronic noise model holds on the whole dynamic range and can be more easily estimated. Other authors develop their analysis in a local mean versus standard deviation space, which makes it hard to devise a specific parametric noise model [6]. Instead, we develop our model in the local mean versus variance space, in which a nearly linear relation can easily be recognized and exploited to obtain the noise parameters.

Signal-dependent noise modeling

A generalized signal-dependent (GSD) noise model has been proposed to deal with several different acquisition systems. Many types of noise can be described by using the following parametric model[7]

$$ g(m,n) = f(m,n) + f(m,n)^{\gamma}\, u(m,n) + w(m,n) = f(m,n) + v(m,n) + w(m,n) \qquad (1) $$

where (m,n) is the pixel location, g(m,n) the observed noisy image, f(m,n) the noise-free image, modeled as a non-stationary correlated random process, u(m,n) a stationary, zero-mean uncorrelated random process independent of f(m,n) with variance $\sigma_u^2$, and w(m,n) the electronic noise (zero-mean, white, and Gaussian, with variance $\sigma_w^2$). For a great variety of images, this model has been proven to hold for values of the parameter γ such that |γ| ≤ 1. The additive term $v = f^{\gamma} u$ is the GSD noise. Since f is generally non-stationary, the noise v will be non-stationary as well. The term w is the signal-independent noise component and is generally assumed to be Gaussian distributed.
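
A minimal sketch of model (1) in Python follows, assuming NumPy and Gaussian-distributed u and w (the model only requires them to be zero-mean, white, and mutually independent); the flat test patch and all parameter values are arbitrary placeholders:

```python
import numpy as np

def add_gsd_noise(f, gamma, sigma_u, sigma_w, rng=None):
    """Corrupt a noise-free image f with generalized signal-dependent noise,
    g = f + f**gamma * u + w, where u and w are zero-mean white processes
    with standard deviations sigma_u and sigma_w (model (1))."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.normal(0.0, sigma_u, size=f.shape)   # signal-dependent factor noise
    w = rng.normal(0.0, sigma_w, size=f.shape)   # signal-independent electronic noise
    return f + np.power(f, gamma) * u + w

# Example: a flat 64x64 patch of level 100 with photon-like noise (gamma = 0.5)
f = np.full((64, 64), 100.0)
g = add_gsd_noise(f, gamma=0.5, sigma_u=0.5, sigma_w=2.0)
print(g.var())   # close to sigma_u**2 * 100 + sigma_w**2 = 29
```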

A purely multiplicative noise (γ = 1) is typical of coherent imaging systems; the majority of despeckling filters rely on the multiplicative fully developed speckle model [8]. In SAR imagery, the thermal noise contribution w is negligible compared to the speckle term f·u [9].

A more complex scenario is related to ultrasound image generation. Due to the great variability of scatterer sizes within each tissue, the electronic noise w cannot be neglected. Although a simplified noise model without the electronic term and with a value of γ in (0,1), e.g., γ = 1/2, is accepted as characteristic of this kind of images, the presence of the additional term w alleviates the need to know γ exactly. In fact, if γ is taken to be unity, as for coherent noise, an equivalent signal-dependent exponent γ_eq may be defined, such that

$$ f(m,n)\, u(m,n) + w(m,n) \triangleq f(m,n)^{\gamma_{\mathrm{eq}}(f(m,n))}\, u_{\mathrm{eq}}(m,n). \qquad (2) $$

The signal-dependent noise in Equation (2) is the combination of a purely multiplicative term and of a signal-independent term. The outcome exhibits a dependence on the signal that vanishes as f → 0⁺. Whenever f·u ≫ w, as happens for SAR speckle, it follows that γ_eq(f) → 1. In practice, the left-hand side of (2), i.e., (1) with γ = 1, is taken as a noise model suitable for ultrasonic images [10].

The model (1) is also suitable for film-grain noise [11], typical of images obtained by scanning a film (transparent support) or a photographic halftone print (reflecting support). In the former case, γ > 0 and values 1/3 ≤ γ ≤ 1/2 are typically encountered; in the latter case, negative values of γ are found [11]. For images obtained from monochrome or color scanners, the electronic noise w may not be neglected. Its variance is easily measured on a dark acquisition, i.e., when f = 0. The unknown exponent γ may be found by drawing the scatterplot of the logarithm of the measured local variance, diminished by the dark signal variance (an estimate of $\sigma_w^2$), against the logarithm of the local mean [12]. Homogeneous pixels are clustered along a straight line in the log-scatterplot plane. The unknown γ is estimated from the slope of the regression line, $\sigma_u^2$ from the intercept.
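
Under model (1), the local variance on a homogeneous area of level f equals $f^{2\gamma}\sigma_u^2 + \sigma_w^2$, so the log-log scatterplot is linear with slope 2γ and intercept $\log\sigma_u^2$ once $\sigma_w^2$ has been subtracted. A minimal Python sketch of this regression (the function name and the synthetic data are illustrative only):

```python
import numpy as np

def estimate_gamma(local_mean, local_var, sigma_w2):
    """Fit log(var - sigma_w^2) = 2*gamma*log(mean) + log(sigma_u^2) on
    homogeneous pixels (film-grain/scanner case of model (1))."""
    x = np.log(local_mean)
    y = np.log(local_var - sigma_w2)
    slope, intercept = np.polyfit(x, y, 1)
    return slope / 2.0, np.exp(intercept)      # gamma, sigma_u^2

# Synthetic check: gamma = 0.4, sigma_u^2 = 0.09, sigma_w^2 = 2.0
mean = np.linspace(20.0, 200.0, 50)
var = mean**0.8 * 0.09 + 2.0
print(estimate_gamma(mean, var, 2.0))          # ~ (0.4, 0.09)
```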

Eventually, the model (1) applies also to images produced by optoelectronic devices, such as CCD cameras, multispectral scanners, and imaging spectrometers. In that case, the exponent γ is equal to 0.5. The term $\sqrt{f}\, u$ stems from the Poisson-distributed number of photons captured by each pixel and is therefore denoted as photon noise [13]. This case will be investigated in the remainder of this article.

Optoelectronic noise

In this section, the optoelectronic noise model will be reviewed in deeper detail. The main contributions of photon noise and electronic noise will be derived and physically related to the instrument. The signal-to-noise ratio (SNR) will be defined and its relationship to the noise model parameters will be addressed. Let us rewrite the model (1) with γ = 0.5:

$$ g(m,n) = f(m,n) + \sqrt{f(m,n)}\, u(m,n) + w(m,n). \qquad (3) $$

Equation (3) represents the electrical signal resulting from the photon conversion and from the dark current. The mean dark current has preliminarily been subtracted to yield g(m,n). However, its statistical fluctuations around the mean constitute most of the zero-mean electronic noise w(m,n). The term $\sqrt{f(m,n)}\, u(m,n)$ is the photon noise, whose mean is zero and whose variance is proportional to E[f(m,n)]. It represents a statistical fluctuation of the photon signal around its noise-free value f(m,n), due to the granularity of the photons originating the electric charge.

SNR

If the variance of (3) is calculated on homogeneous pixels, in which $\sigma_f^2(m,n) = 0$ by definition, then, thanks to the independence of f, u, and w and to the fact that both u and w have zero mean and are stationary, we can write

$$ \sigma_g^2(m,n) = \sigma_u^2\, \mu_f(m,n) + \sigma_w^2 \qquad (4) $$

in which $\mu_f(m,n) \triangleq E[f(m,n)]$ is the non-stationary mean of f. The term $\mu_f(m,n)$ equals $\mu_g(m,n)$, from (3).
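
A quick numerical check of (4) can be run on synthetic flat patches; the following sketch (NumPy assumed, all values arbitrary) recovers $\sigma_u^2$ as the slope and $\sigma_w^2$ as the intercept of the mean-variance line:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_u2, sigma_w2 = 0.3, 4.0              # true noise parameters (arbitrary)
levels = [20.0, 50.0, 100.0, 200.0]        # noise-free levels of flat patches

means, variances = [], []
for mu_f in levels:
    f = np.full(10000, mu_f)
    g = f + np.sqrt(f) * rng.normal(0.0, np.sqrt(sigma_u2), f.shape) \
          + rng.normal(0.0, np.sqrt(sigma_w2), f.shape)
    means.append(g.mean())
    variances.append(g.var(ddof=1))

# Fit sigma_g^2 = sigma_u^2 * mu_g + sigma_w^2, i.e., Equation (4)
slope, intercept = np.polyfit(means, variances, 1)
print(slope, intercept)                    # close to 0.3 and 4.0
```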

Let us define the local SNR at pixel position (m,n) as

$$ \mathrm{SNR}_{\mathrm{dB}}(m,n) = 10 \log_{10} \frac{E[f^2(m,n)]}{\mu_f(m,n)\, \sigma_u^2 + \sigma_w^2} \qquad (5) $$

which on homogeneous pixels (i.e., $\sigma_f^2(m,n) = 0$) becomes

$$ \mathrm{SNR}_{\mathrm{dB}}(m,n) = 10 \log_{10} \frac{\mu_f(m,n)^2}{\mu_f(m,n)\, \sigma_u^2 + \sigma_w^2}. \qquad (6) $$

In (6), if $\mu_f(m,n)\, \sigma_u^2 \gg \sigma_w^2$, then

$$ \mathrm{SNR}_{\mathrm{dB}}(m,n) \approx 10 \log_{10} \frac{\mu_f(m,n)}{\sigma_u^2}. \qquad (7) $$

That is, the SNR depends on the mean photon signal. Instead, if $\mu_f(m,n)\, \sigma_u^2 \ll \sigma_w^2$, then

$$ \mathrm{SNR}_{\mathrm{dB}}(m,n) \approx 10 \log_{10} \frac{\mu_f(m,n)^2}{\sigma_w^2} \qquad (8) $$

which states that the SNR depends on the square of the mean photon signal.

In practical applications, the average SNR is used:

$$ \mathrm{SNR}_{\mathrm{dB}} = 10 \log_{10} \frac{\bar{f}^{\,2}}{\bar{f}\, \sigma_u^2 + \sigma_w^2} \qquad (9) $$

where $\bar{f}$ is obtained by averaging the observed noisy image (the noise being zero-mean), and the average local variance of f is assumed to be negligible, i.e., $\overline{f^2} \approx (\bar{f})^{2}$.
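
For instance, (9) transcribes directly into code (a minimal sketch; the function name is ours):

```python
import numpy as np

def average_snr_db(g, sigma_u2, sigma_w2):
    """Average SNR of (9): since the noise is zero-mean, the mean of the
    observed noisy image g is used as an estimate of f-bar."""
    f_bar = g.mean()
    return 10.0 * np.log10(f_bar**2 / (f_bar * sigma_u2 + sigma_w2))
```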

Estimation procedure

Equation (4) represents a straight line in the plane $(x, y) = (\mu_f, \sigma_g^2) = (\mu_g, \sigma_g^2)$, whose slope and intercept are equal to $\sigma_u^2$ and $\sigma_w^2$, respectively. The interpretation of (4) is that, on statistically homogeneous pixels, the theoretical non-stationary ensemble statistics (mean and variance) of the observed noisy image g(m,n) lie upon a straight line. In practice, homogeneous pixels with $\sigma_f^2(m,n) \approx 0$ may be extremely rare, and theoretical expectations are approximated with local averages. Hence, the most homogeneous pixels in the scene appear in the mean-variance plane clustered along the straight line $y = m x + y_0$, in which $m = \sigma_u^2$ and $y_0 = \sigma_w^2$.

The problem of measuring the two parameters of the optoelectronic noise model (3) has thus been shown to be equivalent to fitting a regression line to the scatterplot containing homogeneous pixels, or at least the most homogeneous pixels in the scene. The problem is now shifted to detecting the (most) statistically homogeneous pixels in an imaged scene.

One major drawback of the simultaneous estimation of the two parameters of a generic line is that at least two distinct clusters, not necessarily corresponding to two homogeneous image patches, are necessary to yield a steady and balanced line. The procedures developed by some of the authors for signal-independent noise estimation[4] and SAR speckle estimation[14], once they have been extended to two-parameter noise estimation, have been found to be inadequate for the new task, mainly because the overall noise power, though accurately estimated, was not correctly split into its signal-dependent and independent components.

The new procedure for noise estimation consists either of partitioning the image into blocks or of manually selecting only some regions of interest (ROIs). In both cases (unsupervised and semi-supervised), the sequence of blocks/ROIs, “blocks” in the following, is processed in the same way, according to the steps listed below; a code sketch follows the list.

  1. Calculate a global homogeneity threshold θ on the most densely populated bins of the binned scatterplot relative to the whole image, analogously to [14];

  2. Set block index k := 1;

  3. If k > number of blocks, go to 7; else, within a K × K window (K = 2m + 1) sliding over the k-th block B_k, calculate the local statistics of the noisy image:

     • average $\bar{g}(i,j) \triangleq \hat{\mu}_g(i,j)$

       $$ \bar{g}(i,j) = \frac{1}{K^2} \sum_{k=-m}^{m} \sum_{l=-m}^{m} g(i+k, j+l) \qquad (10) $$

     • mean quadratic deviation from the average $\hat{\sigma}_g^2(i,j)$

       $$ \hat{\sigma}_g^2(i,j) = \frac{1}{K^2 - 1} \sum_{k=-m}^{m} \sum_{l=-m}^{m} \left[ g(i+k, j+l) - \bar{g}(i,j) \right]^2 \qquad (11) $$

  4. Draw S_k, the $\hat{\sigma}_g^2(i,j)$ versus $\hat{\mu}_g(i,j)$ scatterplot of B_k;

  5. Calculate the mass m_k (number of points) and the gravity center g_k (center of mass of the set of points) of S_k;

  6. Let R be the average quadratic distance of the scatterpoints from their gravity center, measured along the variance axis (y axis): if R ≤ θ, save the coordinates of g_k and m_k, set k := k + 1 and go to 3; else, split S_k into four quadrants (bins) {S_k^j, j = 1, …, 4}, find the most densely populated bin S_k^{j*}, set S_k := S_k^{j*} and go to 5;

  7. Draw the mean-to-variance scatterplot from the coordinates {g_k} and masses {m_k}. A two-parameter regression line is fit to the scatterplot. The slope and intercept of such a line are estimates of the two noise model parameters.
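
A minimal Python sketch of Steps 2–7 is given below. It assumes NumPy, takes the global threshold θ of Step 1 as an input parameter rather than computing it, and weights the centroids by their masses in the final fit through np.polyfit's w argument; the block grid, window size, and all names are illustrative choices, not the authors' implementation:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def local_stats(block, K):
    """Sliding K x K local mean (10) and unbiased local variance (11);
    window positions falling outside the block are simply discarded."""
    win = sliding_window_view(block, (K, K))
    return win.mean(axis=(-2, -1)).ravel(), win.var(axis=(-2, -1), ddof=1).ravel()

def cluster_center(mean, var, theta, max_splits=12):
    """Steps 4-6: shrink the scatterplot of a block to its most populated
    quadrant until the spread R along the variance axis falls below theta;
    return the gravity center and the mass of the surviving cluster."""
    for _ in range(max_splits):
        mass = mean.size
        gx, gy = mean.mean(), var.mean()
        R = np.mean((var - gy) ** 2)            # spread along the variance axis
        if R <= theta or mass < 4:
            return (gx, gy), mass
        # split the scatter domain into four quadrants around its midpoint
        mx, my = (mean.min() + mean.max()) / 2, (var.min() + var.max()) / 2
        quads = [(mean <= mx) & (var <= my), (mean <= mx) & (var > my),
                 (mean > mx) & (var <= my), (mean > mx) & (var > my)]
        best = max(quads, key=np.count_nonzero)  # most densely populated bin
        mean, var = mean[best], var[best]
    return (mean.mean(), var.mean()), mean.size

def estimate_noise(image, theta, blocks=(8, 8), K=7):
    """Partition the image into blocks, extract one weighted centroid per
    block, then fit the weighted regression line
    sigma_g^2 = sigma_u^2 * mu_g + sigma_w^2 (Step 7)."""
    H, W = image.shape
    bh, bw = H // blocks[0], W // blocks[1]
    centers, masses = [], []
    for r in range(blocks[0]):
        for c in range(blocks[1]):
            blk = image[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            m, v = local_stats(blk, K)
            (gx, gy), mass = cluster_center(m, v, theta)
            centers.append((gx, gy))
            masses.append(mass)
    centers, masses = np.array(centers), np.array(masses, dtype=float)
    slope, intercept = np.polyfit(centers[:, 0], centers[:, 1], 1, w=masses)
    return slope, intercept        # estimates of sigma_u^2 and sigma_w^2
```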

Figure 1 shows the last step of the noise estimation procedure. The scatterplot containing the 64 gravity centers of the 64 partition blocks of the noisy image in Figure 2b is displayed together with its regression line. The size of each dot is proportional to its mass, which is considered in the calculation of the regression line.

Figure 1

Calculation of slope and intercept of mixed photon/electronic noise from centroids of scatterplots calculated from blocks/ROIs of test image: scatterplot of homogeneous areas with regression line superimposed (dots size proportional to mass of clusters).

Figure 2

Original piecewise-smooth test image taken from [5]: (a) noise-free original; (b) corrupted with simulated optoelectronic noise (77% photon, 23% electronic, SNR = 17 dB); (c) variance-to-mean scatterplot of the original; (d) variance-to-mean scatterplot of the noisy version.

The main advantage of the above procedure is that a little homogeneous image block, i.e., a block containing few statistically similar pixels, not necessarily forming a connected set, yields a gravity center with low weight, while a block containing many homogeneous points contributes a center with a large weight. The multiplicity of centers ensures that the regression line is not undetermined, as would happen in the case of a unique center originated from an isotropically spread cloud of dense scatterpoints.
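
The effect of mass weighting can be seen in a toy example (all values are made up): a centroid produced by a block with few homogeneous pixels barely influences the weighted fit.

```python
import numpy as np

# Three heavy centroids lying on the line y = 0.3*x + 4 and one light outlier
mu   = np.array([ 20.0,  80.0, 150.0,  60.0])   # cluster means
var  = np.array([ 10.0,  28.0,  49.0,  90.0])   # cluster variances (last is off the line)
mass = np.array([400.0, 350.0, 500.0,   5.0])   # masses used as regression weights

slope, intercept = np.polyfit(mu, var, 1, w=mass)
print(slope, intercept)   # close to 0.3 and 4.0: the light centroid hardly matters
```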

Experiments on simulated noisy images

The proposed method has preliminarily been validated on simulated noisy images. Results on the synthetic noise-free test image used in [5] are presented here. The original test image is shown in Figure 2a. A noisy version with average SNR (9) equal to 17 dB, with 77% signal-dependent photon noise (γ = 0.5) and 23% signal-independent electronic noise, has been generated and is shown in Figure 2b.

The variance-to-mean scatterplots, shown in Figure 2c,d, highlight the noise model. In Figure 2c, no noise has been superimposed and nine points can be detected, approximately aligned along the x-axis. The slope of the joining line is equal to zero and the intercept is equal to the variance of the integer roundoff error, i.e., to 1/12. Conversely, Figure 2d evidences the presence of nine clusters that are aligned along a straight line having slope and intercept equal to the parameters of the superimposed noise.

Noisy versions of the test image with 50% photon and 50% electronic noise have been generated with SNR ranging between 15 and 30 dB. The proposed method and the method described in [5],^b which conversely exploits a wavelet decomposition in order to find homogeneous regions, have been used to estimate the noise model parameters. In the latter case, the noisy image is clipped below zero, as happens with a real CCD camera. For the proposed method, the results without clipping are almost identical to those with clipping, provided that the gravity centers of clusters originated by dark image blocks are preliminarily discarded by thresholding their mean. Figure 3a,c,e shows the estimated slope and intercept of the noise model in the (μ, σ²) plane, as well as the estimated SNR, varying with the true SNR, for the proposed method; Figure 3b,d,f shows the same quantities for the method in [5]. The accuracy of both is very high, especially on SNR. The proposed method, however, exhibits a slightly better ability in splitting the noise contribution into its signal-dependent and signal-independent components.
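
Such simulated corruptions can be reproduced as follows; this sketch assumes that the photon/electronic percentages refer to the two terms of the average noise power in the denominator of (9), and the function name is ours:

```python
import numpy as np

def simulate_optoelectronic(f, snr_db, photon_fraction, rng=None):
    """Corrupt a noise-free image f with mixed photon + electronic noise (3)
    so that the average SNR of (9) equals snr_db and the photon term carries
    the given fraction of the total noise power f_bar*sigma_u^2 + sigma_w^2."""
    rng = np.random.default_rng() if rng is None else rng
    f_bar = f.mean()
    noise_power = f_bar**2 / 10.0**(snr_db / 10.0)
    sigma_u2 = photon_fraction * noise_power / f_bar
    sigma_w2 = (1.0 - photon_fraction) * noise_power
    g = f + np.sqrt(f) * rng.normal(0.0, np.sqrt(sigma_u2), f.shape) \
          + rng.normal(0.0, np.sqrt(sigma_w2), f.shape)
    return g, sigma_u2, sigma_w2

# e.g., SNR = 17 dB with 77% photon noise, as for the image of Figure 2b
```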

Figure 3

Tests with simulated signal-dependent noise on a piecewise-smooth test image. Estimated (solid) and true (dashed) parameters of the photon (slope of regression line) and electronic (intercept) noise model as a function of the true SNR. (a) Slope of the proposed method; (b) slope of the method in [5]; (c) intercept of the proposed method; (d) intercept of the method in [5]; (e) SNR of the proposed method; (f) SNR of the method in [5].

Imaging model of CCD cameras

According to a recent article [15] that integrates previous studies [16, 17], CCD imaging can be represented by three subsystems: the CCD sensor array, which converts photons at each pixel into electrons and thus voltage; the camera electronics, which usually forces a nonlinear compression on the voltage values; and an analog-to-digital (AD) conversion, which generates the digital image values.

The conversion of light, or photons, to electronic charge depends on many factors. Electronic charge consists of electrons that are excited from the silicon valence band to the conduction band. Electrons are produced by the interaction between the silicon and the incident light. The amount of charge generated for a given source of light is determined by several factors, the main ones depending on wavelength, i.e., on photon energy, and, to a lesser extent, on nonlinearities in the conversion process. As a consequence of the latter effects, there is a degradation in the efficiency of the charge generation process and an incomplete conversion of photons to signal electrons occurs. Other nonlinearities are further introduced by the electronics of the camera, which is often designed to compress the wide range of irradiance values of the scene to a fixed range of measurable values.

The resulting effect is that the mapping between the incident photons and the camera output is nonlinear and is described by a function denoted as the CRF. The CRF can be assumed to be linear for low intensity values of the incident light and must be considered when modeling CCD noise from the digital counts at the AD converter (ADC). According to Figure 4, we can assume that ideal CCD output values, before processing and digitization, belong to a linear space denoted as light space (LS) [18]. Any CCD nonlinearity can be incorporated in the CRF. The output of the imaging device is assumed to belong to a nonlinear image space (IS). If LS values are denoted as q, then IS values are modeled as f(q), where f(·) represents the CRF. Eventually, after inverting the CRF, the image values are restored to the LS, where the dependencies of pixel values on incident light become linear again. More complex noise models accounting for a wider range of phenomena can be devised [19], with the drawback that model-based analytical solutions may become intractable [6].
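
In practice, the inverse CRF is available as a measured curve (see Figure 6), and applying it amounts to a pointwise lookup. A small sketch follows, where the tabulated curve is a made-up placeholder and not the camera's actual CRF⁻¹:

```python
import numpy as np

def apply_inverse_crf(raw, crf_counts, crf_radiance):
    """Map IS digital counts back to LS radiance through a tabulated inverse
    CRF, using piecewise linear interpolation between the tabulated points."""
    return np.interp(raw.astype(float), crf_counts, crf_radiance)

# Example with a made-up convex inverse CRF over a 12-bit range:
counts   = np.arange(0, 4096, 16, dtype=float)
radiance = (counts / 4095.0) ** 2.2          # placeholder curve, not the camera's
linearized = apply_inverse_crf(np.array([[100, 2000], [4000, 512]]), counts, radiance)
```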

Figure 4

Basic flowchart of a CCD R-G-B camera: (a) the incoming photons are transduced into electrons by the photon imaging device and produce an analog image; (b) the analog image is digitized and preprocessed with three steps corresponding to as many intermediate products.

We wish to highlight that the main contribution to the overall CRF is a nonlinearity purposely introduced by the manufacturer to prevent clipping above the maximum value allowed by the ADC. Therefore, clipped upper values are never encountered, except in the case of a severe and uncontrolled overexposure. Instead, negative values that are clipped below zero may occur in dark image regions. Incidentally, negative values depend on the electronic noise only, after the average dark signal is subtracted, not on the photon noise, because the overall number of photons received cannot be negative. Now, if the inverse CRF is derived in a laboratory in such a way that the overall response of the instrument is linearized, the correction will also include a partial compensation of the nonlinearity, more exactly of the positive bias in the mean response for very low levels, introduced by negative clipping in the presence of “dark” noise (the noise associated with the dark signal). In other words, negative clipping, whose extent is limited to a few counts (set by the RMS value of the dark noise) on pixels having approximately zero photon signal, is simply treated as another contribution to the overall CRF, together with the undesired nonlinearity of the optoelectronic chain (imperfect conversion of photons to electrons as the number of photons increases) and especially with the saturated nonlinear response imposed to prevent overflow in the ADC.

Experiments on a CCD camera

A further experiment was made on the data produced by a commercial CCD color camera. The imaging device is a Nikon D70s digital camera equipped with a 3008×2000 pixel CCD of 23.7×15.6 mm physical dimensions. The radiometric resolution is 12 bit. Acquired images are made available in NEF (Nikon electronic file) 12-bit lossless compressed mosaiced raw data format. On the decompressed raw images, demosaicing is performed to pass from the Bayer pattern images made available by the optoelectronic acquisition system to the conventional R–G–B image format [20]. Figure 5 shows that demosaicing is equivalent to splitting the mosaiced image into its polyphase components R–G–G–B and interpolating the latter to yield an R–G–B image of the same size as the mosaiced image. The inverse CRF, shown in Figure 6, can be applied to raw data to pass from the 12-bit IS representation to the LS radiance images an ideal CCD would collect in ideal noise-free conditions. The inverse CRF has the purpose of restoring the linear dependence between light and image values, represented as radiance values. Since the CRF accounts both for the compression of values introduced to avoid overflow in the ADC and for the intrinsic nonlinearities of the photo-electronic instrument, it is obtained experimentally, in order to produce the inverse function (CRF⁻¹) that returns from IS to LS values. Split R–G–G–B components, both raw and CRF-corrected, have been analyzed in this study, together with the demosaiced R–G–B image. The CRF-corrected and demosaiced R–G–B format is available at the end of the processing chain in Figure 4b. A 1024×768 detail of the test scene is displayed in Figure 7.

Figure 5

Work flow among the mosaiced R–G–B image, the split R–G–G–B components, and the demosaiced R–G–B three-band image. Demosaicing is equivalent to splitting the mosaiced image and interpolating the resulting four polyphase components, though it is usually accomplished in a single step.

Figure 6

Inverse CRF of the test color camera (see block CRF⁻¹ of Figure 4b).

Figure 7

Detail of size 1024 × 768 of the full image in CRF-corrected demosaiced R–G–B format.

There are two possibilities to estimate the CCD noise. The first is to recover the noise parameters in IS for small values of the digital counts that correspond to a linear mapping from LS, taking into account saturation and/or clipping effects [5]. More exactly, saturation is a reversible nonlinearity purposely introduced by the manufacturer to prevent the values of bright pixels of the mosaiced image from falling outside the dynamic range of the ADC, thereby being clipped above the maximum. Clipping is an irreversible operation and is associated with a partial or total loss of information. Clipping below zero may occur when the dark signal is subtracted (see Figure 4b). Its effect has been carefully analyzed in [5] and found to be beneficial for noise parameter estimation. Clipping over the maximum level allowed by the ADC is always an undesired effect. Its occurrence, usually originated by overexposure, should be avoided.

Whenever ADCs with high bit-depth, e.g., 14 bit, are employed, the nonlinearity of the imaging chain is weak because there is no longer any need to purposely compress the range of values of the signal. Only undesired nonlinearity effects due to imperfect conversion of photons into electrons survive.

The second possibility, which is pursued, e.g., by [15], is to estimate the noise after applying an inverse CRF to the IS image values so as to return to LS. This is the approach followed in this study. The proposed method might be applied also at the end of the chain, i.e., on the demosaiced R–G–B bands, because estimation methods based on multivariate regressions, as those used in the present context, are insensitive to the spatial correlation of the noise [3] introduced by interpolation. However, noise estimation on interpolated data will depend on the interpolation algorithm, which creates new pixel values where the noise model before interpolation may no longer hold. Consider the simple case of a linear interpolation of two pixel values affected by purely photon noise. The new value generated by interpolation is the average of the existing values. The signal component will be the average of the two signal components of the interpolating nodes, and likewise the noise component will be the average of the two noise components. However, it no longer holds that the noise component exhibits a variance proportional to the mean noise-free signal with the same constant as at the interpolating nodes. In summary, interpolation of signal-independent noise preserves the noise model, i.e., the dependence of the noise on the signal, of the interpolating nodes. Interpolation of signal-dependent noise preserves the noise model only if γ = 1, i.e., for speckle noise. The noise variance is always reduced by the averaging process. The interpolated image is cyclo-stationary and the noise model depends on the pixel position within a period equal to the interpolation factor.
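
As a worked illustration of the last point, let the interpolated pixel be the midpoint average of two nodes $g_i = f_i + \sqrt{f_i}\, u_i$, $i = 1, 2$, with independent $u_i$ of variance $\sigma_u^2$ (electronic noise omitted for brevity). Its noise variance is

$$ \operatorname{var}\!\left[ \tfrac{1}{2}\sqrt{f_1}\, u_1 + \tfrac{1}{2}\sqrt{f_2}\, u_2 \right] = \frac{\sigma_u^2\, (f_1 + f_2)}{4}, $$

i.e., half of the value $\sigma_u^2 (f_1 + f_2)/2$ that (4) associates with a non-interpolated pixel of mean $(f_1 + f_2)/2$; the proportionality constant therefore changes with the position within the interpolation period.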

With reference to Figure 4b, the experiments are aimed at verifying the noise model on raw split colors (Step 1), on CRF-corrected split colors (Step 2), and on demosaiced CRF-corrected data (Step 3). Step 1 is before CRF correction; Steps 2 and 3 are after CRF correction.

Figure 8a,b refers to the blue band and exhibits pronounced linear trends in the variance-to-mean scatterplots, both before and after CRF correction. However, the dynamic range of the raw data does not exceed the linear portion of the inverse CRF function (Figure 6). Apart from the different values along the axes (digital counts in the former, radiance values in the latter), the scatterplots before and after correction are very similar and suggest that the optoelectronic noise model (3) holds in both cases.

Figure 8

Variance-to-mean scatterplots of split color bands. (a) Blue component, raw format (Step 1); (b) blue band, CRF-corrected radiance format (Step 2); (c) either of green components, raw format (Step 1); (d) same green band after CRF correction (Step 2).

Conversely, Figure 8c,d, which is relative to either of the green components, shows that the optoelectronic noise model does not hold for raw data, whose amplitudes exceed the linear part of the forward CRF (see Figure 4a), unlike the blue data. However, the noise model is well verified once the CRF has been corrected and thus the original LS has been restored (see Figure 4b). The red channel yields trends intermediate between those of the blue and of the green channels and is thus not reported as scatterplots, but only among the quantitative results in Table 1. The comparison between the data at Steps 1 and 2 suggests that CRF correction is crucial for the fulfillment of the optoelectronic noise model (3).

Table 1 Noise model parameters estimated from the whole test picture

Table 1 reports the estimated noise model parameters, $\sigma_u^2$ and $\sigma_w^2$, and the coefficient of determination (CD) of the least squares fit, which ranges in [0,1] and measures the strength of the matching (CD = 1 means that all scatterpoints lie on the straight line). Also, the percentage of photon noise over the cumulative noise power (PN%) is provided. The average SNR is reported for each fit. The SNR is computed from the two noise model parameters and from the average signal in the corresponding channel, according to (9). Both raw (Step 1) and corrected (Steps 2 and 3) data have been analyzed.
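
The CD can be computed directly from the fitted line; a minimal sketch, in which the optional mass weighting mirrors the weighted fit and is our assumption:

```python
import numpy as np

def coefficient_of_determination(mu, var, slope, intercept, mass=None):
    """CD of the fit var ~ slope*mu + intercept; CD = 1 means that all
    scatterpoints lie exactly on the regression line."""
    mass = np.ones_like(var) if mass is None else mass
    residual = var - (slope * mu + intercept)
    ss_res = np.sum(mass * residual**2)
    ss_tot = np.sum(mass * (var - np.average(var, weights=mass))**2)
    return 1.0 - ss_res / ss_tot
```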

What stands out from the results in Table 1, especially from the CD, is that the optoelectronic noise model (3) is highlighted in the corrected data (Steps 2 and 3), while it is not evident in the raw data (Step 1), apart from the blue band. So, reliable noise values are only those relative to corrected, and possibly interpolated, data appearing in the middle and lower parts of Table 1. On Step 1 raw data, there is a good fit of the noise model only on the blue channel; both green components fit very poorly; the red channel, being moderately affected by saturation, exhibits intermediate values of CD. On Step 2 corrected data, there is an excellent fit of the model for all color components B, G1, G2, and R. The contribution of photon noise is generally larger than that of electronic noise, especially on the brightest green channels, as evidenced by PN%. The electronic noise, however, is not negligible with respect to the photon noise. Hence, methods aimed at converting pure photon noise into signal-independent Gaussian noise, like the Anscombe transform [21], may not, in principle, be employed. Concerning Step 3 data, interpolation produces cyclo-stationary as well as spatially correlated noise. Average values of the parameters are estimated by the proposed procedure. The discrepancy between the values of the noise model parameters at Steps 2 and 3 is again due to interpolation, which increases the SNR. As an example, a bilinear interpolation increases the average SNR of the B and R bands by 3.59 dB and that of the G band by 1.25 dB. Hence, the values of the measured noise parameters are expected to be lower for Step 3 data than for Step 2 data.

Eventually, noise reduction by means of a wavelet-based LMMSE filter [22], tailored on the estimated parameters of the optoelectronic noise model, has been performed. The SNR values after filtering are denoted as SNR’ in the last column of Table 1. When comparing SNR’ at Steps 1 and 2, we must consider that the inverse CRF (see Figure 6), being a convex function, lowers its input SNR. The decrement is approximately 0.2 dB and may be found as the difference between the blue SNR at Step 1 and the blue SNR at Step 2. If such an offset is applied to the SNR’ values of Step 2, we can conclude that the noise is estimated and filtered out better in the Step 2 domain than in the Step 1 domain. Noise filtering at Step 3 is less effective because of interpolation, which makes the noise spatially correlated, and hence more difficult to reject, at least with conventional LMMSE estimators. The differences SNR’ − SNR at Steps 3 and 2 evidence this trend, which is otherwise expected from theory.

Conclusions and developments

Modern CCD color cameras produce corrected R–G–B images dominated by opto-electronic noise, a mixture of signal-dependent photon noise and signal-independent electronic noise. The parameters of the noise model can be measured on a single image by means of an original unsupervised procedure relying on a bivariate linear regression of local mean and variance. It is noteworthy that such a noise model does not strictly hold for raw data, but only once the CRF has been corrected and the original LS has been restored from nonlinearities introduced by the electronic chain.

The full knowledge of the parametric noise model can be useful not only in applications requiring preliminary denoising, but also in surveillance applications, in which no denoising is performed, but automatic detection is ruled by thresholds that are presumably related to the noise model. Also restoration will benefit from the knowledge of a parametric noise model, including its autocorrelation function. Its estimation, however, whenever performed on R–G–B data, is complicated by the demosaicing and interpolation steps, especially because interpolation algorithms, aimed at reducing the impairments originated by Bayer’s mosaicing pattern, are generally adaptive, possibly nonlinear, and, above all, not disclosed by manufacturers. Therefore, the most suitable domain for this kind of processing is undoubtedly the one where the color components have been split, but have not yet been interpolated.

Endnotes

a. The most usual meaning of “raw data” is data expressed in digital counts that have not yet been converted to physical units, according to the relationship between what is measured and the outcome of the measurement.

b. A MATLAB implementation of the algorithm is available at http://www.cs.tut.fi/~foi/sensornoise.html.

References

  1. Argenti F, Bianchi T, Alparone L: Multiresolution MAP despeckling of SAR images based on locally adaptive generalized Gaussian pdf modeling. IEEE Trans. Image Process. 2006, 15(11):3385-3399.

  2. Lee JS, Hoppel K, Mango SA: Unsupervised estimation of speckle noise in radar images. Int. J. Imag. Syst. Technol. 1993, 4:298-305.

  3. Aiazzi B, Alparone L, Barducci A, Baronti S, Pippi I: Estimating noise and information of multispectral imagery. J. Opt. Eng. 2002, 41(3):656-668. 10.1117/1.1447547

  4. Aiazzi B, Alparone L, Barducci A, Baronti S, Marcoionni P, Pippi I, Selva M: Noise modelling and estimation of hyperspectral data from airborne imaging spectrometers. Ann. Geophys. 2006, 49:1-9.

  5. Foi A, Trimeche M, Katkovnik V, Egiazarian K: Practical Poissonian-Gaussian noise modeling and fitting for single-image raw data. IEEE Trans. Image Process. 2008, 17(10):1737-1754.

  6. Liu C, Szeliski R, Kang SB, Zitnick CL, Freeman WT: Automatic estimation and removal of noise from a single image. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 30(2):299-314.

  7. Jain AK: Fundamentals of Digital Image Processing. 1989.

  8. Tur M, Chin KC, Goodman JW: When is speckle multiplicative? Appl. Opt. 1982, 21(7):1157-1159. 10.1364/AO.21.001157

  9. Oliver C, Quegan S: Understanding Synthetic Aperture Radar Images. 1998.

  10. Argenti F, Torricelli G: Speckle suppression in ultrasonic images based on undecimated wavelets. EURASIP J. Appl. Signal Process. 2003, 2003(5):470-478. 10.1155/S1110865703211136

  11. Pratt WK: Digital Image Processing. 1991.

  12. Aiazzi B, Baronti S, Casini A, Lotti F, Mattei A, Santurri L: Quality issues for archival of ancient documents. Mathematics of Data/Image Coding, Compression, and Encryption III 2000, 115-126.

  13. Starck JL, Murtagh F, Bijaoui A: Image Processing and Data Analysis: The Multiscale Approach. 1998.

  14. Aiazzi B, Alparone L, Baronti S, Garzelli A: Coherence estimation from incoherent multilook SAR imagery. IEEE Trans. Geosci. Remote Sens. 2003, 41(11):2531-2539. 10.1109/TGRS.2003.818813

  15. Faraji H, MacLean WJ: CCD noise removal in digital images. IEEE Trans. Image Process. 2006, 15(9):2676-2685.

  16. Healey GE, Kondepudy R: Radiometric CCD camera calibration and noise estimation. IEEE Trans. Pattern Anal. Mach. Intell. 1994, 16(3):267-276. 10.1109/34.276126

  17. Tsin Y, Ramesh V, Kanade T: Statistical calibration of CCD imaging process. Proc. IEEE Int. Conf. Computer Vision 2001, 480-487.

  18. Mann S: Intelligent Image Processing. 2002.

  19. Irie K, McKinnon AE, Unsworth K, Woodhead IM: A model for measurement of noise in CCD digital video cameras. Measur. Sci. Technol. 2008, 19(4):045207. 10.1088/0957-0233/19/4/045207

  20. Gunturk BK, Glotzbach J, Altunbasak Y, Schafer RW, Mersereau RM: Demosaicking: color filter array interpolation. IEEE Signal Process. Mag. 2005, 22:44-54.

  21. Talbot H, Phelippeau H, Akil M, Bara S: Efficient Poisson denoising for photography. Proc. IEEE International Conference on Image Processing 2009, 3881-3884.

  22. Argenti F, Torricelli G, Alparone L: MMSE filtering of generalised signal-dependent noise in spatial and shift-invariant wavelet domains. Signal Process. 2006, 86(8):2056-2066. 10.1016/j.sigpro.2005.10.014


Acknowledgements

The authors are indebted to the Guest Editor and to the anonymous reviewers, whose insightful comments have notably improved the organization and presentation of the article.

Author information

Correspondence to Stefano Baronti.

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Cite this article

Aiazzi, B., Alparone, L., Baronti, S. et al. Unsupervised estimation of signal-dependent CCD camera noise. EURASIP J. Adv. Signal Process. 2012, 231 (2012). https://doi.org/10.1186/1687-6180-2012-231
