 Research
 Open Access
Realistic camera noise modeling with application to improved HDR synthesis
 Bart Goossens^{1},
 Hiêp Luong^{1},
 Jan Aelterman^{1},
 Aleksandra Pižurica^{1} and
 Wilfried Philips^{1}
https://doi.org/10.1186/1687-6180-2012-171
© Goossens et al.; licensee Springer. 2012
Received: 29 June 2011
Accepted: 14 May 2012
Published: 16 August 2012
Abstract
Due to the ongoing miniaturization of digital camera sensors and the steady increase in the number of megapixels, the individual sensor elements of a camera become more sensitive to noise, which deteriorates the final image quality. To work around this problem, sophisticated processing algorithms in the devices can exploit knowledge of the sensor characteristics (e.g., in terms of noise) and offer a better image reconstruction. Although a lot of research focuses on rather simplistic noise models, such as stationary additive white Gaussian noise, only limited attention has gone to more realistic digital camera noise models. In this article, we first present a digital camera noise model that takes several processing steps in the camera into account, such as sensor signal amplification, clipping, and postprocessing. We then apply this noise model to the reconstruction problem of high dynamic range (HDR) images from a small set of low dynamic range (LDR) exposures of a static scene. In the literature, HDR reconstruction is mostly performed by computing a weighted average, in which the weights are directly related to the observed pixel intensities of the LDR images. In this work, we derive a Bayesian probabilistic formulation of a weighting function that is near-optimal in the MSE (or SNR) sense for the reconstructed HDR image, by assuming exponentially distributed irradiance values. We define the weighting function as the probability that the observed pixel intensity is approximately unbiased. The weighting function can be computed directly from the noise model parameters, which gives rise to different symmetric and asymmetric shapes depending on whether electronic noise or photon noise is dominant. We also explain how to deal with the case in which some of the noise model parameters are unknown, and how the camera response function can be estimated using the presented noise model. Finally, experimental results are provided to support our findings.
Introduction
The modeling of realistic camera noise is a subject that has not been extensively investigated, compared to the overwhelming amount of attention that the problem of stationary additive white Gaussian noise (AWGN) removal from images receives. A white stationary Gaussian noise assumption leads to simple and elegant denoising methods (such as wavelet shrinkage methods [1–5], total variation [6], anisotropic diffusion [7, 8], NL-Means [9–11]). However, when applied to realistic problems (e.g., the suppression of noise from CCD/CMOS measurements taken with mobile phones or other consumer cameras), these techniques often yield poor results [11–13]. The main problem is the noise model mismatch, which causes the techniques to either over- or underestimate the noise level. In realistic circumstances, noise is not an additive Gaussian process, and it is neither white nor stationary. For example, due to quantum-mechanical aspects inherent to the measurement of light, the sensor signals, in the form of analog voltages, are subject to statistical fluctuations with variances proportional to the “ideal”^{a} signal (i.e., the one we would like to measure). This phenomenon is known as photon noise. Subsequently, the measured analog voltages are amplified and converted to digital signals, leading to the introduction of electronic noise and quantization noise. The overall noise is a signal-dependent mix of noise from different sources, in combination with different linear and nonlinear pre-/postprocessing steps performed inside the camera. Consequently, the noise cannot be well described using a stationary AWGN model.
In the literature, it is often advocated to use so-called variance stabilizing transforms [14, 15]: these are transforms that make the noise approximately “independent” and additive with a constant variance. The simple and elegant denoising approaches can then be applied efficiently to the variance-stabilized signal, after which the inverse stabilization transform is applied. A first problem is that a signal model that works fine in the linear domain may function poorly after variance stabilization [13]. A second problem is that an accurate noise model is needed in order to build such a variance stabilization transform.
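As a concrete, purely illustrative example of such a transform (not one used in the cited works), the Anscombe transform approximately stabilizes the variance of Poisson-distributed photon counts; the sketch below assumes pure Poisson noise:

```python
import numpy as np

def anscombe(x):
    """Anscombe transform: Poisson counts -> approximately Gaussian
    data with unit variance (accurate for means above roughly 4)."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    """Simple algebraic inverse (biased for low counts)."""
    return (y / 2.0) ** 2 - 3.0 / 8.0

# Demo: the variance after the transform is ~1 regardless of the mean.
rng = np.random.default_rng(0)
for mean in (10, 100, 1000):
    x = rng.poisson(mean, size=200_000)
    print(mean, round(float(np.var(anscombe(x))), 2))  # ~1.0 each
```

After denoising in the stabilized domain, the second problem mentioned above remains: this transform is only exact for a pure Poisson model, which is precisely why an accurate camera noise model is needed.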
Recently, a number of (simplified) CCD/CMOS noise models and estimation techniques have been presented in [12, 16, 17]. In addition, many researchers address individual aspects of the noise in digital camera images (such as dealing with clipping [18], Poisson/multiplicative noise [19, 20], and signal-dependent/non-stationary noise [21–23]). Compared to these efforts, relatively little attention has gone to the joint modeling of the many factors that contribute to the noise characteristics of a digital camera. Some notable exceptions are [17, 24, 25]: Liu et al. [17] propose Bayesian estimation of the noise level function, directly from the camera-reconstructed image itself. The set of possible noise level functions is thereby derived by considering the processing operations in the camera. In [24], a parametric noise model for RAW sensor data is proposed, which is then used to estimate the optimal exposure times for high dynamic range (HDR) image acquisition. A similar approach is presented in [25] for the photon-noise-limited case.
In our opinion, image reconstruction techniques based on joint noise modeling will (in the long term) significantly improve image quality, especially as the physical limits of image sensors are reached. For example, the minimal sensor element size of a digital camera is limited by signal-to-noise considerations: if one would like to have a higher image resolution, one would also have to deal with a higher level of noise in the image, even up to such a degree that further increasing the image resolution does not bring any gain in image quality. Besides this, the dynamic range of a camera sensor is still many orders of magnitude lower than the dynamic range that the human visual system can deal with, and improving the dynamic range will also bring an additional increase in image quality.
Nowadays, to deal with these problems, camera manufacturers integrate extra postprocessing operations, such as noise removal or HDR reconstruction, into the camera. Because of power consumption, hardware complexity, and cost constraints, their noise removal schemes are often based on simple Gaussian noise models. Building more sophisticated and realistic noise models bridges the gap between increasingly miniaturized camera sensor designs and the increasing image quality expectations of camera end users.
Now that our goals are clear, it is obvious that there are many aspects (e.g., image noise, resolution, dynamic range, processing artifacts, etc.) to cover in order to build such an accurate and realistic noise model. To limit the scope of this article, we will first assume a pointwise relationship between the input and the output, which can be described by a so-called camera (or intensity or brightness) response function (ignoring correlations both spatially and between different color channels). Second, we will limit ourselves to image artifacts that are caused by noise: we will not deal with other artifacts common in digital cameras, such as chromatic aberrations, lens deformations, lens flares, etc. Multivariate extensions of our theory and dealing with such artifacts may be a topic of future research. Also, while building sophisticated models, it is important to keep the models as simple as possible, so that techniques based on these models remain practical.
An important concept in HDR reconstruction is the weighting function (also called certainty function), which is introduced in [27] and subsequently used in [28–30]. The weighting function in [30] is defined as being proportional to the slope of the camera response function (CRF), indicating how quickly the output pixel intensity of a camera varies for a given input intensity. Correspondingly, certainty images are computed by applying certainty functions to digital images, revealing which parts of the image are the most “reliable”. It is found that this is the case for the midtones of the image. One issue, however, is that several choices of weighting functions have been proposed in the literature, and it is not clear which function to choose under which conditions.
In this article, we will formalize the concept of the “certainty function” in terms of a realistic camera model, and we will show that defining the certainty function as the probability that the output pixel intensity is unbiased yields close-to-optimal HDR reconstruction results in the mean square error sense. The characteristics of our certainty function are similar to [30]; in addition, the Bayesian probabilistic formulation of our certainty function permits a more straightforward application to other problems, as we will demonstrate. To arrive at these novel certainty functions, we will develop a realistic camera noise model. Our noise model is based on similar considerations as in [24]; however, the main difference is that we explicitly compute the bias functions based on an exponential prior distribution for the irradiance values, whereas in [24], simple indicator functions are used to predict whether the signal has been clipped.
The remainder of this article is organized as follows: in Section “Reconstruction of HDR images”, we introduce the synthesis problem of HDR images. Our realistic camera noise model is presented in Section “Camera noise modeling”. In Section “Probabilistic formulation of the weighting function”, we use the camera model to derive the weighting function. In Section “Estimation of the CRF”, we use the obtained weighting function in order to estimate the CRF for a set of LDR images. Results and a discussion are given in Section “Results and discussion”. Finally, Section “Conclusion” concludes this article.
Reconstruction of HDR images
Before delving into noise modeling for digital cameras, we first give an overview of the HDR reconstruction process and the different factors (such as the SNR, the CRF, and the choice of the weighting function) that influence the reconstruction quality. First, we will introduce a number of general concepts that will be used throughout the article. Next, we will explain an HDR reconstruction technique that is based on a quite general noise model. Finally, we will investigate the “denoising” performance, in the SNR sense, of such a scheme, which will give some insight into the different factors that play a role in the quality of the final HDR image and will give an indication of the conditions needed to obtain a certain minimal level of image quality, in terms of SNR.
Basic concepts
We have a set of $j=1,\dots ,P$ LDR digital photographs of a static scene at our disposal. The photographs are all taken from a fixed position, for example, using a tripod. We assume that lighting changes can be ignored, such that the incident scene radiance is constant during the exposure. For every photograph we use a different exposure time Δ t_{ j }, which will let us recover the dynamic range of the scene. Let z_{ ij } denote the pixel intensity of photograph j at position i, where we use a onedimensional index to denote the spatial position (as in raster scanning). The goal of HDR reconstruction is to recover the irradiance map of the scene E_{ i }, based on the pixel intensities z_{ ij } of the digital photographs.
In real life, there is not a one-to-one mapping of E_{ i } to z_{ ij }, because E_{ i } is the ideal noise-free image irradiance, while z_{ ij } is subject to noise. In this case, the CRF is the function that maps the measured image irradiance $\mathcal{E}_{i}$ onto the pixel intensity z_{ ij }, or mathematically speaking, ${z}_{\mathit{ij}}=\gamma \left(\Delta {t}_{j}\sqrt{{\alpha}_{j}}\,\mathcal{E}_{i}\right)$.
The concept of the CRF is quite general: techniques based on the CRF can also be used when all postprocessing operations are switched off, for example for processing RAW sensor data. Because most camera manufacturers do not publicly disclose the internal processing steps of their cameras, together with the parameter values used, the CRF is usually estimated from the photographs themselves [28, 30]. Two photographs with different exposure times are sufficient for this task.
Noise model-based HDR reconstruction
In practice, the image irradiance measurements E_{ i } are subject to statistical fluctuations called quantum or photon noise, caused by the uncertainty associated with “counting” light energy quanta. Let $\mathcal{E}_{i}\Delta {t}_{j}$ denote an energy measurement after an exposure of Δ t_{ j } seconds. The final pixel intensity of the LDR image j at position i is then given by ${z}_{\mathit{ij}}=\gamma \left(\mathcal{E}_{i}\sqrt{{\alpha}_{j}}\,\Delta {t}_{j}\right)$. Because of the statistical nature of $\mathcal{E}_{i}$, z_{ ij } will be a random variable as well. The probability density function of z_{ ij } is a complicated function in general, because:

The measurement $\mathcal{E}_{i}$ is affected by several (both additive and multiplicative) sources of noise, as we will discuss in more detail in Section “Camera noise modeling”.

A priori, little is known about the CRF γ(·) (except for the assumption of monotonicity).
The factor 1/σ^{2}(E_{ i },Δ t_{ j }) takes into account that the noise variance (and correspondingly the SNR) is both exposure time and irradiance dependent, and consequently penalizes images with a high noise variance (low SNR) in the HDR reconstruction.
Let us now turn to the choice of the weighting function, which is crucial for the correct operation of the HDR algorithm, since the weighting function determines the tradeoff between dynamic range and SNR. In literature, several functions have been proposed:

Debevec and Malik [28] propose a triangular function to emphasize the middle of the exposure range, mainly for its simplicity:${w}_{\mathit{ij}}\left(z\right)=\left\{\begin{array}{ll}z-{z}_{\text{min}}& z\le {z}_{\text{mid}}\\ {z}_{\text{max}}-z& z>{z}_{\text{mid}}\end{array}\right.$(8)
with ${z}_{\text{mid}}=\frac{1}{2}\left({z}_{\text{min}}+{z}_{\text{max}}\right)$. We will see in Section “Dealing with unknown camera model parameters” that this weighting function is a good choice when little information is known about the camera noise characteristics.

Mann and Picard [27, 30] select a weight related to the slope of the CRF γ, which indicates how quickly the pixel intensity z_{ ij } varies with the input, in order to assign lower weights to more coarsely quantized pixel intensities. It is argued that for noisy input images, the influence of quantization noise is minimal in the middle of the camera’s exposure range. Using our notation, the weighting function is defined as follows ([30], Eq. 11):${w}_{\mathit{ij}}\left(z\right)=\frac{\mathrm{d}z}{\mathrm{d}\,\text{log}\,{\widehat{q}}_{j}\left(z\right)}={\widehat{q}}_{j}\left(z\right)\frac{\mathrm{d}\gamma}{\mathrm{d}z}\left({\widehat{q}}_{j}\left(z\right)\right)$(9)
where ${\widehat{q}}_{j}\left(z\right)={\gamma}^{-1}\left(z\right)/\left(\Delta {t}_{j}\sqrt{{\alpha}_{j}}\right)$ is an estimate of the image irradiance, based on exposure j. For an identity CRF, the weights are approximately proportional to the image irradiance.

Mitsunaga and Nayar [29] relate the weights to the SNR of the LDR image (which is assumed to be linear in the image irradiance). Thereby, the authors assume that the measurement noise is both stationary and independent of the underlying signal. This results in the following equation:${w}_{\mathit{ij}}\left(z\right)={\widehat{q}}_{j}\left(z\right)\Big/\frac{\mathrm{d}{\widehat{q}}_{j}}{\mathrm{d}z}\left(z\right)={\widehat{q}}_{j}\left(z\right)\frac{\mathrm{d}\gamma}{\mathrm{d}z}\left({\widehat{q}}_{j}\left(z\right)\right),$(10)
where the rule for the derivative of the inverse function was applied. The weighting function proposed by Mitsunaga and Nayar is thus the same as the weighting function of Mann and Picard.

In previous work [31], we proposed an exponential power function as a trade-off between noise suppression and clipping:${w}_{\mathit{ij}}\left(z\right)=\exp\left(-\frac{{\left|z-{z}_{\text{mid}}\right|}^{a}}{b\,{\left({z}_{\text{mid}}\right)}^{a}}\right),$(11)
where a and b are constants which were determined experimentally. This function also emphasizes the middle of the exposure range, but in such a way that the low and high intensities are still assigned a large weight, close to 1.

Other weighting functions are proposed by Reinhard et al. [32], Tsin et al. [33], Kirk and Andersen [34].
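To make the shapes of these functions concrete, the following sketch implements the triangular weight of Eq. (8) and an exponential power weight in the spirit of Eq. (11) for an 8-bit range; the constants a and b are placeholders, not the experimentally determined values from [31]:

```python
import numpy as np

Z_MIN, Z_MAX = 0, 255            # 8-bit intensity range (B = 8)
Z_MID = 0.5 * (Z_MIN + Z_MAX)

def w_triangular(z):
    """Hat function of Debevec and Malik, Eq. (8): peaks at z_mid."""
    z = np.asarray(z, dtype=float)
    return np.where(z <= Z_MID, z - Z_MIN, Z_MAX - z)

def w_exp_power(z, a=4.0, b=0.3):
    """Exponential power weight in the spirit of Eq. (11);
    a and b are placeholder constants, not the experimentally
    determined values from [31]."""
    z = np.asarray(z, dtype=float)
    return np.exp(-np.abs(z - Z_MID) ** a / (b * Z_MID ** a))
```

Both functions peak at the midtones, but the exponential power weight stays close to 1 over a much wider intensity range before falling off near the clipping boundaries.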
We remark that in many of the above works, the authors assume AWGN that is independent of the image irradiance and the image index j, which leads to a special case of (6) in which σ^{2} (E_{ i },Δ t_{ j }) is constant (consequently, this factor can be dropped in (7)). In general, because of the dependency of the weighting factor σ^{−2} (E_{ i },Δ t_{ j }) on the unknown image irradiance E_{ i }, $\hat{\text{log}{E}_{i}}$ needs to be estimated as in an iteratively reweighted least squares approach. During this iterative process, in the right-hand side of (7), the image irradiance estimate E_{ i } of the previous iteration is used. The image irradiance is then estimated using the currently best available estimate of the weight w_{ ij }σ^{−2} (E_{ i },Δ t_{ j }). Once a better estimate of E_{ i } becomes available, the combined weights w_{ ij }σ^{−2} (E_{ i },Δ t_{ j }) are updated.
Such an iterative updating scheme can be avoided (which is advantageous from a computational point of view), even without assuming AWGN: if σ^{−2}(E_{ i },Δ t_{ j }) is proportional to (E_{ i })^{ q }, the factors (E_{ i })^{ q }, which are independent of the summing variable j, in the numerator and denominator of (7) cancel each other. In the next section, we will perform a thorough modeling of the camera noise characteristics, to obtain explicit formulas for σ^{−2} (E_{ i },Δ t_{ j }). Surprisingly, our result in Section “Camera noise modeling” indicates that an approximation such as σ^{−2} (E_{ i },Δ t_{ j })∝(Δ t_{ j }E_{ i })^{ q } with q = 1 or 2 is quite adequate for a digital camera noise model.
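The iteratively reweighted estimation described above can be sketched as follows, for the simplified case of an identity CRF and the model σ^{−2}(E_{ i },Δ t_{ j }) ∝ Δ t_{ j } E_{ i } (the q = 1 case); all implementation details are illustrative:

```python
import numpy as np

def merge_hdr_log(z, dt, w, n_iter=3):
    """Iteratively reweighted log-domain HDR merge (sketch).

    z  : (P, N) observed linear intensities (identity CRF assumed)
    dt : (P,)   exposure times
    w  : (P, N) weighting function values w_ij
    Returns the estimated log-irradiance log E_i per pixel.

    Illustrative noise model: sigma^-2(E, dt) is proportional to
    dt * E (the q = 1, photon-noise-limited case)."""
    # Initialize with the unweighted average of the per-exposure estimates.
    log_e = np.mean(np.log(z) - np.log(dt)[:, None], axis=0)
    for _ in range(n_iter):
        e = np.exp(log_e)                   # current irradiance estimate
        inv_var = dt[:, None] * e[None, :]  # sigma^-2(E_i, dt_j), up to a constant
        wc = w * inv_var                    # combined weights w_ij * sigma^-2
        log_e = np.sum(wc * (np.log(z) - np.log(dt)[:, None]), axis=0) \
            / np.sum(wc, axis=0)
    return log_e
```

In the noise-free case each term log z_{ ij } − log Δ t_{ j } equals log E_{ i } exactly, so the merge recovers the irradiance regardless of the weights; the weights only matter once noise and clipping enter the observations.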
Denoising performance of the HDR reconstruction formula
In other words, non-constant weighting functions, such as the weighting functions proposed in the literature, lead to a reduced SNR_{HDR}. Figure 7 also illustrates how the SNR of the reconstructed HDR image is affected by using different weighting functions. We will go deeper into this topic in Section “Probabilistic formulation of the weighting function”.
Camera noise modeling
In Section “Noise model-based HDR reconstruction” we put forward a Gaussian model for the inversely compensated pixel intensity g(z_{ ij }). This Gaussian model comprises, next to the mean log(Δ t_{ j }E_{ i }), a bias function ν(E_{ i },Δ t_{ j }) and a noise variance function σ^{2}(E_{ i },Δ t_{ j }). For accurate HDR reconstruction, expressions for these functions are indispensable. These expressions are obtained through camera noise modeling. Our noise modeling and the way in which we deal with clipping are similar to the Poissonian–Gaussian modeling of [18, 35], with the main difference that our analysis covers the more general case in which the CRF is nonlinear (see Appendix 1). But before building a camera noise model, we first need a better understanding of the processing inside a digital camera.
 1.
Amplification, which introduces electronic noise. This electronic noise is well characterized by a Gaussian distribution $N\left(0,{\sigma}_{\mathrm{e}}^{2}\right)$. We will denote by $\sqrt{{\alpha}_{j}}$ the amplification gain for image j, resulting in a signal with expected value $\sqrt{{\alpha}_{j}}\,\Delta {t}_{j}{E}_{i}$. The amplification gain is usually determined by the ISO sensitivity of the camera.
 2.
The measured signal x _{ ij } is affected by non-uniform heating of the sensor and becomes non-uniform, even when the scene radiance is constant. The resulting fluctuations are called fixed-pattern noise (FPN), also known as dark current non-uniformity. FPN has a variance that is quadratic in the image irradiance (mainly caused by variations in the gain factor α _{ j }, see [36]), but it also contributes a component to the overall noise that is independent of the image irradiance (as an offset term). We will call these two noise components gain FPN and offset FPN, respectively.
 3.
A/D conversion, which involves clipping of the dynamic range to the range 0–2^{ B } − 1 and quantization. The camera sensor can only map irradiance values that are within certain minimum and maximum bounds (these bounds determine the tonal range of the sensor), and values outside this range are clipped. Because the tonal range of the sensor often cannot cover the whole dynamic range of the scene, some regions in the image will be over- or underexposed. In the following, f(x) = max(0,min(2^{ B } − 1,x)) will denote this clipping function.
 4.
For ideal A/D converters, the quantization noise is uniformly distributed in the range [−1/2,1/2]. In this work, we will treat quantization noise similarly to electronic noise and offset FPN. This is possible since quantization is performed directly after amplification. Therefore, the offset noise variance ${\sigma}_{\mathrm{o},j}^{2}$ signifies the summed contributions of the variances of the electronic noise, the offset FPN, and the quantization noise.
 5.
Finally, the quantized signal is subjected to several postprocessing steps in the digital domain. Examples are: gamma correction, brightness and contrast adjustment, color corrections, white balance, compression/expansion etc. As we mentioned before, these operations are modeled using the CRF.
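The processing chain of steps 1–5 can be summarized in a small forward simulation; the parameter values and the gamma-type CRF below are illustrative only and do not correspond to any particular camera:

```python
import numpy as np

rng = np.random.default_rng(42)
B = 8                                  # bit depth -> range 0..2^B - 1

def crf(x):
    """Illustrative CRF: gamma-type curve (real in-camera
    postprocessing is more complex and not publicly known)."""
    return (2**B - 1) * (x / (2**B - 1)) ** (1 / 2.2)

def simulate_pixel(E, dt, gain=4.0, sigma_o=2.0, beta=1e-4):
    """Forward camera model sketch: photon noise, amplification,
    gain FPN, offset noise, clipping, quantization, then the CRF.
    E: irradiance, dt: exposure time; all parameters illustrative."""
    photons = rng.poisson(E * dt)               # photon (shot) noise
    x = gain * photons                          # amplification (gain = sqrt(alpha_j))
    x = x * (1.0 + np.sqrt(beta) * rng.standard_normal(np.shape(x)))  # gain FPN
    x = x + rng.normal(0.0, sigma_o, np.shape(x))  # electronic + offset noise
    y = np.clip(np.rint(x), 0, 2**B - 1)        # quantization and clipping f(x)
    return crf(y)                               # in-camera postprocessing
```

The simulated output variance contains a constant term (offset noise), a term linear in the irradiance (photon noise after amplification), and a quadratic term (gain FPN), mirroring the structure of the noise model derived in this section.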
where $s\left(x\right)={\sigma}_{\mathrm{o},j}^{2}+{\alpha}_{j}x+{\beta}_{\mathit{ij}}{x}^{2}$ and C = (2πs(Δ t_{ j }E_{ i }))^{−1/2}.
Although the functions in (19) can be calculated numerically, it is not directly clear how σ^{2} (E_{ i },Δ t_{ j }) is related to the parameters ${\sigma}_{\mathrm{o},j}^{2}$, α_{ j }, β_{ ij }, and Δ t_{ j }. As a result, it becomes very difficult to, e.g., devise techniques to estimate these parameters from the data z_{ ij }.
Interestingly, the gain FPN appears as an extra bias term in (1). Since FPN does not change from one image to another, an FPN template can be constructed for each ISO setting. Gain FPN removal then simply consists of subtracting the template −β_{ ij }/(2α_{ j }) from logf(y_{ ij }), the logarithm of the clipped digitized signal. This bias subtraction approach is quite common in digital cameras.
Even though complicated processing by several nonlinear functions is involved, we can well describe the first and second statistical moments of the variables g(z_{ ij }) using a convenient formula that holds with great accuracy within certain bounds. This finding will turn out to be very useful for applications that depend on this noise model. In the next sections, we will take these bounds into account in the HDR reconstruction, in particular for designing an appropriate weighting function.
Probabilistic formulation of the weighting function
The camera noise model from previous section can relatively easily be applied in many practical applications. The main ingredients of this model are the approximations of the noise variance function and the bias function, together with the ranges $[{y}_{\text{min}}^{\prime},{y}_{\text{max}}^{\prime}]$ on which these approximations are accurate.
From the explanation in Section “Denoising performance of the HDR reconstruction formula”, the reader may expect that an optimal weighting function is one that is constant everywhere in the clipping-free regions (w_{ ij } = 1 − I(z_{ ij } ≤ 0) − I(z_{ ij } ≥ 2^{ B } − 1), with I(·) the indicator function). However, such a weighting function does not give maximal SNR because of bias effects: for example, an initially very large intensity y_{ ij } may end up below 2^{ B } − 1 with a certain probability, due to the addition of offset noise or FPN, without being affected by the clipping operation. On average, several of these initially large intensities cause a biased HDR image estimate when the weighting function is not properly chosen. In this section, we will show how the camera noise model can be used to compute a near-optimal weighting (or certainty) function in terms of SNR, to be used in combination with the reconstruction formula (7).
MMSEbased estimation of the weighting function
Here we simply computed the mathematical expectation of both sides of (7). Note that a sufficient condition for an unbiased estimate is given by ν(E_{ i },Δ t_{ j }) ≠ 0 ⇒ w_{ ij } = 0. Our approximation theory (Appendix 2) then states that the latter condition holds if ${y}_{\text{min}}^{\prime}\le \Delta {t}_{j}{E}_{i}\sqrt{{\alpha}_{j}}\le {y}_{\text{max}}^{\prime}$.
with $A={\left[\underset{{j}^{\prime}}{max}\;{\sigma}^{2}\left({E}_{i},\Delta {t}_{{j}^{\prime}}\right)/\left({\sigma}^{2}\left({E}_{i},\Delta {t}_{{j}^{\prime}}\right)+P{\nu}^{2}\left({E}_{i},\Delta {t}_{{j}^{\prime}}\right)\right)\right]}^{-1}$ a normalization factor. According to (28), the optimal weights minimizing (27) can be found by computing the proportion of the noise variance σ^{2} (E_{ i },Δ t_{ j }) relative to the whole σ^{2} (E_{ i },Δ t_{ j }) + P ν^{2} (E_{ i },Δ t_{ j }). We remark that when the bias ν(E_{ i },Δ t_{ j }) = 0, the corresponding weight is equal to one. When the bias increases, the weight becomes smaller. Readers familiar with Wiener filters will note that (28) resembles the classical scalar Wiener filter weight formula; however, (28) is not related to the Wiener filter because P ν^{2}(E_{ i },Δ t_{ j }) is not a signal energy measure but a measure of the signal bias.
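In code, this bias-variance weight (proportional to σ²/(σ² + Pν²), normalized so that the largest weight over the exposures equals one) can be sketched as follows; `sigma2` and `nu2` stand for σ²(E_{ i },Δ t_{ j }) and ν²(E_{ i },Δ t_{ j }) evaluated per exposure:

```python
import numpy as np

def mmse_weights(sigma2, nu2, P):
    """Weights per Eq. (28): w_j proportional to
    sigma2_j / (sigma2_j + P * nu2_j), with the normalization A
    chosen so that the largest weight equals one.
    sigma2, nu2: arrays over the exposure index j."""
    sigma2 = np.asarray(sigma2, dtype=float)
    nu2 = np.asarray(nu2, dtype=float)
    raw = sigma2 / (sigma2 + P * nu2)
    return raw / raw.max()        # A = [max_j' raw_j']^{-1}
```

As the text notes, a zero bias gives a weight of one, and the weight shrinks as the squared bias grows relative to the noise variance.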
 1.
The prior PDF of the image irradiance f _{ E }(E _{ i }), for which we use a prior distribution of maximal entropy since little information about the image irradiance is known: the exponential distribution. The parameter of the exponential distribution (i.e., the average image irradiance) is assumed to be prior knowledge. To verify whether the exponential distribution is representative for real world images, we performed a simple experiment using three HDR images (in particular, the images from Figures 1d, 11a, and 12a). The obtained histogram and the fitted exponential distribution are shown in Figure 2b. It can be noted that there is a relatively good correspondence (i.e., the distribution well captures the high positive skew). Alternatively, the gamma distribution or even mixtures of gamma distributions may be used as in [25] (note in this respect that the exponential distribution arises as a special case).
 2.The conditional PDF f _{z|E}(z _{ ij }|E _{ i }), which can be computed exactly from (4) through the change of variables method. More specifically, we have:$g\left({z}_{\mathit{ij}}\right)\,|\,{E}_{i}\sim \mathrm{N}\left(\text{log}\left(\sqrt{{\alpha}_{j}}\,\Delta {t}_{j}{E}_{i}\right)+\nu \left({E}_{i},\Delta {t}_{j}\right),{\sigma}^{2}\left({E}_{i},\Delta {t}_{j}\right)\right),$
with φ(x,μ,σ) the Gaussian PDF.
Remark that the weighting function (29) depends on the parameters Δ t_{ j }, α_{ j }, β_{ ij }, ${\sigma}_{\mathrm{o},j}^{2}$ and the CRF γ(x). We will explain in Section “Dealing with unknown camera model parameters” how to deal with the scenario in which some of these parameters are also unknown.
Approximate direct formula
The optimal weighting function found in the previous section depends on the functions σ^{2}(E_{ i },Δ t_{ j }) and ν^{2}(E_{ i },Δ t_{ j }), and its practical computation requires a (numerical) integration over the image irradiance variable E_{ i }. We therefore investigated whether it is possible to find a good approximation to (29) that is somewhat easier to compute.
which is the probability that the image irradiance is within the bounds [E_{min}(i,j),E_{max}(i,j)], given the observed pixel intensity z_{ ij }. Generally, (32) has a higher cost in terms of MSE (because two terms are disregarded); however, the squared bias term of (24) will be approximately zero. If E_{ i } ∈ [E_{min}(i,j),E_{max}(i,j)], then the bias ν(E_{ i },Δ t_{ j }) ≈ 0, so weighting function (32) can be interpreted as “the probability that the inversely compensated pixel intensity g (z_{ ij }) is approximately bias-free”. It is clear that HDR reconstruction using (32) will also be approximately bias-free: the bias error $\mathrm{E}\left[\hat{\text{log}{E}_{i}}-\text{log}{E}_{i}\right]\approx 0$.
The merit of this approximation is that it is somewhat easier to implement and compute in practice. If the quadratic fixed-pattern noise can be ignored (see Section “Camera noise modeling”), Equation (33) only depends on the variables z and j. This allows offline computation and the use of a lookup table.
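As an illustration of such a lookup table, the sketch below numerically computes, for every code value z, the posterior probability that the irradiance lies in a bias-free interval [E_min, E_max], assuming an identity CRF, an exponential prior on E, and a Gaussian likelihood; all parameter values and the interval bounds are hypothetical:

```python
import numpy as np

def gauss(x, mu, sigma):
    """Gaussian PDF evaluated elementwise."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def weight_lut(dt, alpha=1.0, sigma_o=2.0, mean_irr=50.0, B=8, e_grid=None):
    """Precompute w(z, j) = P(E in [E_min, E_max] | z) for every code
    value z, with an exponential prior on E and the illustrative
    likelihood z | E ~ N(alpha*dt*E, sigma_o^2 + alpha*dt*E).
    The bias-free interval below is hypothetical, not Eq. (33)'s exact one."""
    if e_grid is None:
        e_grid = np.linspace(1e-3, 20 * mean_irr, 4000)
    prior = np.exp(-e_grid / mean_irr)      # exponential prior (unnormalized)
    # Hypothetical bias-free interval: away from zero and from clipping.
    e_min = 5.0 / (alpha * dt)
    e_max = 0.9 * (2**B - 1) / (alpha * dt)
    inside = (e_grid >= e_min) & (e_grid <= e_max)
    lut = np.empty(2**B)
    for z in range(2**B):
        mu = alpha * dt * e_grid
        post = gauss(z, mu, np.sqrt(sigma_o**2 + mu)) * prior  # unnormalized posterior
        lut[z] = post[inside].sum() / max(post.sum(), 1e-300)
    return lut
```

The resulting table assigns weights close to one in the midtones and small weights near the under- and overexposure boundaries, matching the qualitative behavior described in this section.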
Dealing with unknown camera model parameters
In the previous sections, we explained how to determine a parametric weighting function that mediates a tradeoff between the SNR of the reconstructed HDR image on the one hand and the squared bias error on the other hand. The function depends on the parameters Δ t_{ j }, α_{ j }, β_{ ij }, ${\sigma}_{\mathrm{o},j}^{2}$ and the CRF γ(x), hence beforehand, we assumed that the CRF was known.
In practice, the exposure time can always be assumed to be available.^{h} The CRF, however, is not publicly disclosed by camera manufacturers, but can be obtained through calibration procedures (e.g., using a color chart), or estimated from a set of LDR images. We will explain this in more detail in Section “Estimation of the CRF”.
If the CRF is available, the camera noise parameters ${\alpha}_{j},{\beta}_{\mathit{ij}},{\sigma}_{\mathrm{o},j}^{2}$ can be estimated directly using a (robust) linear regression on the inversely compensated pixel intensities γ^{−1}(z_{ ij }), relying on (17).
The main idea of our approach is the following: when little is known about the noise model parameters ${\alpha}_{j},{\beta}_{\mathit{ij}},{\sigma}_{\mathrm{o},j}^{2}$, we can always select reasonable upper bounds for the values of these parameters (or, in the case of α_{ j }, an upper bound for the reciprocal ${\alpha}_{j}^{-1}$). The obtained weighting function will then allow some uncertainty on the noise model parameters. The only restriction is that the upper bounds have to be chosen such that E_{max}(i,j) > E_{min}(i,j), otherwise all weights become zero (the input SNR would likely be too low to allow proper HDR reconstruction anyway). By using such upper bounds, the weighting function will be more conservative in assigning weights, while still keeping the bias error (26) small in magnitude. In this sense, the certainty function acquires a novel meaning by relating to the uncertainty associated with the noise model parameters.
Estimation of the CRF
In this section, we revisit another common problem in HDR reconstruction: the estimation of the CRF. The CRF is often not known in advance, because this requires exact knowledge of the processing steps of digital cameras and their parameter values, information that is only available to camera manufacturers. Therefore, it is useful to estimate the CRF in a camera-independent way. In the literature, several techniques have been proposed to estimate the CRF. Mann and Picard propose a parametric regression method based on the comparagram (which is a joint histogram of z_{ ij } versus ${z}_{i{j}^{\prime}}$ for j ≠ j^{′}) [27, 30]. Mitsunaga and Nayar [29] perform an iterative polynomial regression, where the (assumed to be unknown) exposure time ratio $\Delta {t}_{j}/\Delta {t}_{{j}^{\prime}}$ is refined in each iteration. Related iterative methods have been proposed by Tsin et al. [33]. Lin et al. [39] use a completely different technique that estimates the CRF based on RGB distributions of pixels at color edges, requiring only one exposure (P = 1). Debevec and Malik use a non-parametric method to estimate the inverse CRF g(z) [28], leaving a lot of degrees of freedom to this function. In [31], this method was improved to incorporate all available pixel intensities in the estimation (leading to more accurate estimates), while reducing the computational complexity.
The problems with the existing techniques are that they are either not very robust to high noise levels (for example, the method of Debevec and Malik does not enforce monotonicity as a direct constraint in the CRF estimation problem, which often leads to oscillating CRF estimates in the presence of noise), or not well adapted to the signal-dependent noise characteristics of the individual LDR images (e.g., because stationary noise is assumed).
When the estimation of the CRF is considered as a regression problem (as in [30]), optimal estimation is quite difficult due to the dependency of the variance on the (unknown) image irradiance $E_{i}$. If the conditional variances $\text{Var}\left[z_{ij}\mid E_{i}\right]$ and $\text{Var}\left[z_{ij^{\prime}}\mid E_{i}\right]$ were constant (e.g., $\text{Var}\left[z_{ij}\mid E_{i}\right]\approx \sigma_{\mathrm{o},j}^{2}$), the method of total least squares (TLS) [40] would be well suited to estimate the nonlinear CRF: the TLS method performs a regression in which the orthogonal distance from the fitted function to the observations is minimized, and it is well suited for regression problems in which both the x-variables and the y-variables have a known and constant variance. Unfortunately, this is not the case here. Furthermore, the variances (34) depend on the derivative of the CRF, which is itself the quantity to be estimated!
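The orthogonal-distance idea behind TLS can be illustrated for the simplest case of a straight-line fit (a minimal sketch, not the CRF estimator itself; the helper `tls_line_fit` is hypothetical). The TLS solution follows from the singular vector of the centered data matrix corresponding to the smallest singular value, which gives the normal of the fitted line:

```python
import numpy as np

def tls_line_fit(x, y):
    # Total least squares line fit: minimize the orthogonal distances of
    # the points (x_i, y_i) to the line.  The line normal is the right
    # singular vector of the centered data matrix associated with the
    # smallest singular value (see Golub & Van Loan [40]).
    X = np.column_stack([x - x.mean(), y - y.mean()])
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    nx, ny = Vt[-1]                      # normal vector of the fitted line
    slope = -nx / ny                     # assumes the line is not vertical
    intercept = y.mean() - slope * x.mean()
    return slope, intercept
```

This sketch also makes the limitation mentioned above concrete: the orthogonal distance is only the right error measure when both coordinates carry the same constant variance, which the signal-dependent LDR noise violates.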
where a weighting function was introduced in analogy to Equation (6). Recall that the LDR pixel intensities are discrete ($z_{ij}\in\{0,\dots,2^{B}-1\}$ and $z_{ij^{\prime}}\in\{0,\dots,2^{B}-1\}$). Therefore, (36) can be solved by treating $g(z_{ij})$ and $g(z_{ij^{\prime}})$ as unknown variables. Solving (36) then amounts to solving a linear system of N × P equations with $2^{B}$ unknowns ($g(0),\dots,g(2^{B}-1)$).
with λ a regularization parameter that enforces smoothness of g(z) (see [28, 31]), and with ε a small positive number. In (37), the function $w_{\text{smooth}}(z)$ is the weighting function, averaged over the set of images $j=1,\dots,P$. To compute $d^{2}g/dz^{2}$, the numerical second derivative is used. Optimization problem (37) can be solved efficiently using standard quadratic program (QP) solvers [41]. We note that in (37), the (unknown) image irradiance appears in the form of position-dependent weights. A straightforward solution would be to iteratively update the CRF and the log-irradiance estimate $\log E_{i}$, but to keep the processing technique simple, we propose to simply drop the position-dependent weights $1/E_{i}$. To conclude, the proposed CRF estimation technique differs from the technique in [31] in the following aspects: (1) an image-dependent weight factor $\left(\Delta t_{j}+\Delta t_{j^{\prime}}\right)/\left(\Delta t_{j}\Delta t_{j^{\prime}}\right)$ that penalizes LDR images with low exposure times (because the SNR is generally lower in these images), (2) the use of a weight function $w_{ij}$ that is adapted to the underlying camera noise model, and (3) the monotonicity constraint on g(z). Consequently, the CRF estimation technique is considerably more robust against noise, with only a limited increase in algorithmic complexity (the use of a QP solver instead of a sparse system solver).
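A minimal sketch of the nonparametric least-squares formulation, in the spirit of Debevec and Malik [28], may clarify the structure of the linear system (the function name `gsolve`, the simple hat weighting, and all parameter values are assumptions of this sketch; the proposed method additionally uses the noise-model-adapted weights, the image-dependent factor, and the monotonicity constraint, which would turn this into the QP discussed above):

```python
import numpy as np

def gsolve(Z, log_dt, lam=20.0, B=8):
    # Z: (N, P) integer pixel intensities for N pixels over P exposures.
    # Solves g(z_ij) - log E_i = log(dt_j) in the least-squares sense for
    # the inverse CRF g(0 .. 2^B - 1) and the log-irradiances log E_i,
    # with a lambda-weighted second-derivative smoothness penalty on g.
    n = 2 ** B
    N, P = Z.shape
    w = lambda z: np.minimum(z, n - 1 - z) + 1     # simple hat weighting
    A = np.zeros((N * P + n - 1, n + N))
    b = np.zeros(A.shape[0])
    k = 0
    for i in range(N):
        for j in range(P):
            wij = w(Z[i, j])
            A[k, Z[i, j]] = wij                    # g(z_ij) term
            A[k, n + i] = -wij                     # -log E_i term
            b[k] = wij * log_dt[j]
            k += 1
    A[k, n // 2] = 1.0                             # gauge: fix g at mid-scale
    k += 1
    for z in range(1, n - 1):                      # smoothness: lam * g''(z) = 0
        A[k, z - 1], A[k, z], A[k, z + 1] = lam, -2 * lam, lam
        k += 1
    sol = np.linalg.lstsq(A, b, rcond=None)[0]
    return sol[:n], sol[n:]                        # g table, log-irradiances
```

The gauge row is needed because g and log E_i are only determined up to a common additive constant; enforcing monotonicity of g would replace the unconstrained `lstsq` call with a QP solver, as the paper proposes.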
Results and discussion
Comparison of different weighting functions
Ground truth data with simulated noise
HDR reconstruction of RAW sensor data
Next, we demonstrate our method on real digital camera images. In particular, we use the “desk still life” set of 17 RAW LDR images acquired by Hasinoff [24] using a Canon EOS 1D Mark III camera (10-megapixel images, all with an ISO setting of 100).^{j} The 17 images are used to obtain an estimate of the ground-truth data, which is useful for objective evaluation. The spatial resolution of the images and the exposure time sampling are sufficiently high that the different parameters of the reconstruction methods have only a limited impact on the reconstruction result. Here, we use Debevec’s method [28] for creating the HDR images.
An HDR image is then constructed from the three LDR images, using the proposed approach or, alternatively, existing methods such as [28, 31]. Finally, some common postprocessing operations, such as white balancing, exposure correction, color correction, and HDR tone mapping, need to be applied to the constructed HDR images. For this postprocessing stage, we employ the software program “Raw Therapee V4.0”.^{k} For our purposes, this program allows us to create a reconstruction profile with fixed (non-automatic) parameter settings, which can subsequently be applied to the images reconstructed using the different methods. This greatly eases visual comparison. In Figure 11, the final results are shown, together with the HDR image reconstructed from the densely sampled LDR images. The signal-to-noise ratio is estimated from the log-irradiance values obtained before postprocessing, using this last image as a reference. It can be seen that, by using optimized weighting functions, we obtain a significant increase in visual quality compared to other methods that use more “general-purpose” weighting functions.
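The weighted-average reconstruction that all compared methods share can be sketched as follows (a minimal sketch following Debevec and Malik [28]; the function name `merge_hdr` and the lookup-table interface are assumptions, and the proposed approach differs only in how the weight table `w` is computed from the noise model):

```python
import numpy as np

def merge_hdr(ldr_stack, dts, g, w):
    # ldr_stack: (P, H, W) integer LDR images; dts: exposure times;
    # g: inverse-CRF lookup table in the log domain; w: weight lookup table.
    # Per pixel, average the per-exposure log-irradiance estimates
    # g(z_ij) - log(dt_j), weighted by w(z_ij).
    num = np.zeros(ldr_stack.shape[1:])
    den = np.zeros_like(num)
    for z, dt in zip(ldr_stack, dts):
        wz = w[z]
        num += wz * (g[z] - np.log(dt))
        den += wz
    return np.exp(num / np.maximum(den, 1e-12))    # irradiance map
```

For an idealized linear camera (g[z] = ln z), two consistent exposures of the same irradiance merge back to that irradiance, so the choice of `w` only matters once noise and clipping enter the picture, which is exactly where the optimized weighting functions pay off.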
HDR reconstruction from JPEG compressed LDR digital photographs
Conclusion
In this article, we presented a realistic digital camera noise model that incorporates several noise sources (such as photon noise, electronic noise, and fixed pattern noise), as well as several parts of the camera processing pipeline (such as the amplifier and clipping). For this noise model, we derived both exact and approximate formulas for the bias function and the noise variance function, taking clipping of the dynamic range into account. Next, we investigated novel HDR image reconstruction weighting schemes relying on the realistic noise model. We paid special attention to the different factors that determine the signal-to-noise ratio of the HDR image, and we used the obtained insight to derive weighting functions that are optimized for the camera noise model. This led to our probabilistic weighting function, which is defined as the probability that the considered pixel intensity is (approximately) unbiased. Because clipping induces a strong coupling between the noise variance and bias functions, this weighting function offers a trade-off between maximizing the SNR of the HDR image and limiting bias errors; its performance is close to optimal in the MSE sense. The probabilistic reasoning was then extended to derive a weighting function for the case in which (some of) the noise model parameters are unknown. Experimental results confirmed the expected improvements in reconstruction quality, especially for reconstruction from RAW sensor data. Although HDR image reconstruction is only one example of the application of sophisticated noise modeling techniques, there is a wide range of other application areas to explore in which similar image quality improvements can be obtained by adapting to the noise characteristics of the imaging devices.
Appendix
Appendix 1: approximation of the statistical moments of a nonlinear function of signal and noise
These convenient formulas form the basis of our noise model in Section “Camera noise modeling”.
Appendix 2: determining the clippingsafe region of the dynamic range
An illustration of the bounds as a function of the offset noise level ${\sigma}_{\mathrm{o},j}^{2}$ and the amplification factor $\sqrt{\alpha}$ is given in Figure 15. It can be seen that even at low SNRs, the region $[{y}_{\min}^{\prime},{y}_{\max}^{\prime}]$ still covers a sufficiently large region near the center of the dynamic range $\sqrt{\alpha}\,\Delta t\,E=\left({y}_{\min}+{y}_{\max}\right)/2$.
Appendix 3: optimization problem for determining the weighting function
This solution is used in Section “Probabilistic formulation of the weighting function” for computing optimal weighting functions.
Appendix 4: derivation of a direct formula for the weighting function
Endnotes
^{a} We remark that, due to quantum-mechanical effects, in reality there is no such thing as an “ideal” signal; here, we define the “ideal” signal as the voltage averaged over a “very” long exposure time.
^{b} Note that Bayesian estimators (such as MAP and MMSE) are equally possible, given that prior information on E_{ i } is available.
^{c} Due to Jensen’s inequality.
^{d} This is because the contribution of several noise sources can be modeled using a Gaussian distribution, with signaldependent mean and variance.
^{e} Equation (13) is invariant under scaling of $w_{ij}$: if a certain ${w}_{ij}^{\star}$ minimizes the MSE, then any scaled version $a{w}_{ij}^{\star}$ with a > 0 is also a solution to (24). Adding an extra constraint to the solution resolves this ambiguity.
^{f} More specifically, we use the Cauchy–Schwarz inequality ${\left(\sum_{j}{a}_{j}{b}_{j}\right)}^{2}\le {\sum}_{j}{a}_{j}^{2}{\sum}_{l}{b}_{l}^{2}$ with $a_{j}=w_{ij}\,\sigma^{-2}(E_{i},\Delta t_{j})\,\nu(E_{i},\Delta t_{j})$ and $b_{l}=1$.
^{g} This follows from the fact that E_{ i } > 0, Δ t_{ j } > 0, α_{ j } > 0 and ${\sigma}_{\mathrm{o},j}^{2}>0$.
^{h} The exposure time is stored either in the RAW data files or in the EXIF information of the compressed JPEG files.
^{i} To see this, note that $\sqrt{s\left(\Delta t,\alpha \right)}/\left(\sqrt{\Delta t}\,\alpha \right)=\sqrt{\frac{{\sigma}_{\mathrm{o}}^{2}}{\alpha\,\Delta {t}^{2}}+\Delta t\,E+\frac{\beta}{\alpha}E}$, which increases monotonically in ${\sigma}_{\mathrm{o}}^{2}$ and β and decreases in α.
^{j} Available at http://people.csail.mit.edu/hasinoff/hdrnoise/.
Declarations
Acknowledgements
The authors thank Dr. Filip Rooms and Koen Douterloigne for providing HDR datasets. Bart Goossens acknowledges the financial support from the Flemish Research Foundation (FWO), Belgium.
Authors’ Affiliations
References
 Donoho DL: De-noising by soft-thresholding. IEEE Trans. Inf. Theory 41(3), 613–627 (1995). doi:10.1109/18.382009
 Coifman RR, Donoho DL: Translation-invariant de-noising. In Wavelets and Statistics, ed. by A Antoniadis, G Oppenheim (Springer-Verlag, New York, 1995), pp. 125–150
 Chang S, Yu B, Vetterli M: Spatially adaptive wavelet thresholding with context modeling for image denoising. IEEE Trans. Image Process. 9(9), 1522–1531 (2000). doi:10.1109/83.862630
 Portilla J, Strela V, Wainwright M, Simoncelli E: Image denoising using scale mixtures of Gaussians in the wavelet domain. IEEE Trans. Image Process. 12(11), 1338–1351 (2003). doi:10.1109/TIP.2003.818640
 Pižurica A, Philips W: Estimating the probability of the presence of a signal of interest in multiresolution single- and multiband image denoising. IEEE Trans. Image Process. 15(3), 654–665 (2006)
 Rudin LI, Osher S, Fatemi E: Nonlinear total variation based noise removal algorithms. Physica D 60, 259–268 (1992). doi:10.1016/0167-2789(92)90242-F
 Perona P, Malik J: Scale-space and edge detection using anisotropic diffusion. IEEE Trans. Pattern Anal. Mach. Intell. 12(7), 629–639 (1990). doi:10.1109/34.56205
 Weickert J: Anisotropic Diffusion in Image Processing. ECMI Series (Teubner-Verlag, Stuttgart, 1998)
 Buades A, Coll B, Morel J: A review of image denoising algorithms, with a new one. SIAM Multiscale Model. Simul. 4(2), 490–530 (2005). doi:10.1137/040616024
 Kervrann C, Boulanger J: Optimal spatial adaptation for patch-based image denoising. IEEE Trans. Image Process. 15(10), 2866–2878 (2006)
 Goossens B, Aelterman J, Luong HQ, Pižurica A, Philips W: Efficient design of a low redundant discrete shearlet transform. In 2009 International Workshop on Local and Non-Local Approximation in Image Processing (LNLA2009) (Tuusula, Finland, Aug 2009), pp. 112–124
 Lim S: Characterization of noise in digital photographs for image processing. In Proc. SPIE Digital Photography II, vol. 6069 (San José, CA, USA, 2006), p. 60690O
 Hirakawa K: Color filter array image analysis for joint denoising and demosaicking. In Single-Sensor Imaging: Methods and Applications for Digital Cameras, ed. by R Lukac (CRC Press, Boca Raton, 2008)
 Anscombe FJ: The transformation of Poisson, binomial and negative-binomial data. Biometrika 35, 245–254 (1948)
 Makitalo M, Foi A: Optimal inversion of the Anscombe transformation in low-count Poisson image denoising. IEEE Trans. Image Process. 20(1), 99–109 (2011)
 Faraji H, MacLean WJ: CCD noise removal in digital images. IEEE Trans. Image Process. 15(9), 2676–2685 (2006)
 Liu C, Szeliski R, Kang SB, Zitnick CL, Freeman WT: Automatic estimation and removal of noise from a single image. IEEE Trans. Pattern Anal. Mach. Intell. 30(2), 299–314 (2008)
 Foi A, Trimeche M, Katkovnik V, Egiazarian K: Practical Poissonian–Gaussian noise modeling and fitting for single-image raw-data. IEEE Trans. Image Process. 17(10), 1737–1754 (2008)
 Foi A, Alenius S, Trimeche M, Katkovnik V, Egiazarian K: A spatially adaptive Poissonian image deblurring. In Proc. IEEE International Conference on Image Processing (ICIP 2005), vol. 1 (Sept 2005), pp. I-925–I-928
 Paliy D, Foi A, Bilcu R, Katkovnik V, Egiazarian K: Joint deblurring and demosaicing of Poissonian Bayer-data based on local adaptivity. In Proceedings of the European Signal Processing Conference (Lausanne, 2008), pp. 1569104966/1–5
 Kuan DT, Sawchuk A, Strand TC, Chavel P: Adaptive noise smoothing filter for images with signal-dependent noise. IEEE Trans. Pattern Anal. Mach. Intell. 7(2), 165–177 (1985)
 Argenti F, Torricelli G, Alparone L: Signal-dependent noise removal in the undecimated wavelet domain. In Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2002), vol. 4 (May 2002), pp. IV-3293–IV-3296
 Goossens B, Pižurica A, Philips W: Wavelet domain image denoising for non-stationary and signal-dependent noise. In Proc. IEEE International Conference on Image Processing (ICIP) (Atlanta, 2006), pp. 1425–1428
 Hasinoff SW, Durand F, Freeman WT: Noise-optimal capture for high dynamic range photography. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR) (San Francisco, 2010), pp. 553–560
 Hirakawa K, Wolfe PJ: Optimal exposure control for high dynamic range imaging. In Proc. 17th IEEE Int. Conf. on Image Processing (ICIP) (Hong Kong, 2010), pp. 3137–3140
 Larson G, Rushmeier H, Piatko C: A visibility matching tone reproduction operator for high dynamic range scenes. IEEE Trans. Vis. Comput. Graph. 3, 291–306 (1997). doi:10.1109/2945.646233
 Mann S, Picard R: On being ’undigital’ with digital cameras: extending dynamic range by combining differently exposed pictures. In Proceedings of the IS&T 48th Annual Conference (Cambridge, Massachusetts, May 1995), pp. 422–428
 Debevec P, Malik J: Recovering high dynamic range radiance maps from photographs. In Proceedings of SIGGRAPH 97, Computer Graphics Proceedings (Ft. Collins, 1997), pp. 369–378
 Mitsunaga T, Nayar S: Radiometric self calibration. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), vol. 1 (1999), pp. 374–380
 Mann S: Comparametric equations with practical applications in quantigraphic image processing. IEEE Trans. Image Process. 9(8), 1389–1406 (2000). doi:10.1109/83.855434
 De Neve S, Goossens B, Luong H, Philips W: An improved HDR image synthesis algorithm. In Proc. IEEE Int. Conf. on Image Processing (ICIP 2009) (Cairo, 2009), pp. 1545–1548
 Reinhard E, Ward G, Pattanaik S, Debevec P: High Dynamic Range Imaging: Acquisition, Display and Image-Based Lighting (Morgan Kaufmann Publishers, San Francisco, 2005)
 Tsin Y, Ramesh V, Kanade T: Statistical calibration of the CCD imaging process. In Proc. IEEE Int. Conf. on Computer Vision (Vancouver, 2001), pp. 480–487
 Kirk K, Andersen HJ: Noise characterization of weighting schemes for combination of multiple exposures. In Proc. British Machine Vision Conference (Edinburgh, 2006), pp. 1129–1138
 Foi A: Clipped noisy images: heteroskedastic modeling and practical denoising. Signal Process. 89(12), 2609–2629 (2009). doi:10.1016/j.sigpro.2009.04.035
 Lim S, El Gamal A: Gain fixed pattern noise correction via optical flow. IEEE Trans. Circuits Syst. I 51(4), 779–786 (2004). doi:10.1109/TCSI.2004.823666
 Grossberg MD, Nayar SK: Modeling the space of camera response functions. IEEE Trans. Pattern Anal. Mach. Intell. 26(10), 1272–1282 (2004). doi:10.1109/TPAMI.2004.88
 Robertson M, Borman S, Stevenson R: Estimation-theoretic approach to dynamic range improvement using multiple exposures. J. Electron. Imaging 12(2), 219–228 (2003). doi:10.1117/1.1557695
 Lin S, Gu J, Yamazaki S, Shum H-Y: Radiometric calibration from a single image. In Proc. IEEE Computer Society Conf. on Computer Vision and Pattern Recognition (CVPR 2004), vol. 2 (Washington, 2004), pp. II-938–II-945
 Golub GH, Van Loan CF: Matrix Computations, 3rd edn. (Johns Hopkins University Press, Baltimore, MD, USA, 1996)
 Gill P, Murray W, Wright MH: Practical Optimization (Academic Press, London, UK, 1981)
 Portilla J: Image restoration using Gaussian scale mixtures in overcomplete oriented pyramids (a review). In Proc. SPIE’s 50th Annual Meeting: Wavelets XI, vol. 5914 (San Diego, CA, Aug 2005), pp. 468–482
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.