- Open Access
Bayer patterned high dynamic range image reconstruction using adaptive weighting function
© Kang et al.; licensee Springer. 2014
- Received: 5 February 2014
- Accepted: 5 May 2014
- Published: 22 May 2014
It is not easy to acquire a desired high dynamic range (HDR) image directly from a camera because most image sensors have a limited dynamic range. Therefore, a post-process called HDR image reconstruction is generally used, which reconstructs an HDR image from a set of differently exposed images to overcome this limitation. However, conventional HDR image reconstruction methods suffer from noise factors and ghost artifacts. This is because input images taken with a short exposure time contain considerable noise in dark regions, which increases the noise in the corresponding dark regions of the reconstructed HDR image. Furthermore, since the input images are acquired at different times, they contain different motion information, which results in ghost artifacts. In this paper, we propose an HDR image reconstruction method that reduces the impact of noise factors and prevents ghost artifacts. To reduce the influence of noise, the weighting function, which determines the contribution of each input image to the reconstructed HDR image, is designed to adapt to the exposure time and local motion. Furthermore, the weighting function is designed to exclude ghosting regions by considering the differences of the luminance and chrominance values between the input images. Unlike conventional methods, which generally work on a color image processed by the image processing module (IPM), the proposed method works directly on the Bayer raw image. This allows for a linear camera response function and also improves the efficiency of hardware implementation. Experimental results show that the proposed method can reconstruct high-quality Bayer patterned HDR images while being robust against ghost artifacts and noise factors.
- High dynamic range
- Bayer pattern
- Camera response function
- Ghost artifact
Image capturing devices like digital cameras and camcorders have recently improved remarkably. However, image sensors such as charge-coupled devices (CCD) and complementary metal-oxide semiconductors (CMOS) in these imaging devices can still only capture a limited dynamic range. As a result, when a captured scene contains a dynamic range above the given limitation, a loss of information is inevitable even if the exposure is adjusted according to the brightness of the scene. Thus, many methods based on signal processing have been proposed to reproduce scenes with a high dynamic range (HDR).
To obtain HDR images, many HDR imaging approaches utilize low dynamic range (LDR) images with different exposures [1–11]. Most of these approaches first convert the pixel values of input images into radiance values by using the camera response function (CRF), where the CRF refers to the function that maps the radiance values of a given scene to the pixel values in the captured image, and the radiance refers to the physical quantity of light energy on each element on the sensor array. Next, the radiance values of the input images are combined into a single HDR image using weighting functions based on the reliability of the input data.
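As a rough illustration of the conversion described above, the sketch below maps pixel values back to radiance through an inverse CRF; the gamma-style CRF used here is a placeholder assumption, since a real CRF must be estimated for each camera.

```python
# Minimal sketch of the pixel-value -> radiance conversion.  The
# gamma-style CRF is a hypothetical stand-in, not an estimated one.
def crf(exposure):
    """Maps normalized sensor exposure (radiance * time) to a pixel value."""
    return exposure ** (1 / 2.2)

def inverse_crf(pixel):
    """Recovers normalized exposure from a pixel value."""
    return pixel ** 2.2

def to_radiance(pixel, dt):
    """Radiance estimate E = f^{-1}(I) / dt for one LDR image."""
    return inverse_crf(pixel) / dt
```

Merging then amounts to a weighted average of such radiance estimates across the differently exposed inputs.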
In early studies, conventional approaches were proposed to estimate the CRF from multiple LDR images. These CRF estimation approaches can be categorized as parametric [1, 2] and non-parametric approaches [3, 4]. Among the parametric approaches, Mann and Picard [1] presented a variety of parametric forms for CRF estimation, and Mitsunaga and Nayar [2] used a high-order polynomial function to estimate the CRF. On the other hand, among the non-parametric approaches, Debevec and Malik [3] estimated the CRF using an objective function with a smoothness constraint, and Pal et al. [4] used a Bayesian network consisting of a probabilistic model for an imaging function and a generative model for smooth functions.
In recent years, some techniques have been proposed to prevent artifacts in HDR images caused by moving objects [5–10]. If local motion occurs in a scene while the LDR images are being captured, a ghost artifact appears in the HDR image. Most of the ghost artifact-preventing techniques first detect the local motion by using ghost artifact measurement and then combine the LDR images without the ghost artifact regions.
In this paper, we introduce a new approach which performs HDR image reconstruction on the Bayer raw images before the IPM, as shown in Figure 1b. The proposed method can be widely used in applications such as real-time HDR video cameras because of its reduced hardware complexity. The CRF estimation is simpler and more accurate than in conventional methods because the CRF before the IPM is linear. The proposed method addresses both the noise and the ghost artifact problems. For this purpose, a new weighting function is proposed to combine the Bayer patterned LDR (BP-LDR) images. The proposed weighting function is designed so that each of the BP-LDR images independently covers its corresponding region according to the radiance value in order to reduce the influence of noise. The regions covered by each BP-LDR image are determined by the exposure of that image and the existence of local motion. This avoids using the short-exposure BP-LDR image to reconstruct the dark regions in the Bayer patterned HDR (BP-HDR) image. The weighting function also detects local motion in the Bayer pattern and excludes ghosting regions. To detect the local motion, the luminance and chrominance values are calculated directly in the Bayer pattern, and the differences of these values are utilized. Compared with conventional methods, the detection performance is improved since an accurate CRF is employed.
The rest of this paper is organized as follows: In Section 2, the proposed BP-HDR image reconstruction approach is described in detail. The properties of the CRF are discussed and analyzed in Section 2.1. Section 2.2 describes the design process of the adaptive weighting function for BP-HDR image reconstruction. In Section 3, experimental results of various test images are presented, and the paper is concluded in Section 4.
2.1 Properties of the camera response function
The sensor output can be modeled as I = f(E Δt), where E denotes the scene radiance, f denotes the camera response function, and Δt represents the exposure time.
Although the relationship between the light energy and the image sensor output is linear, the CRF is usually non-linear due to the IPM. In the IPM, the linear response function is intentionally converted into a non-linear function to produce an image attractive to the human visual system. The response function generated by the IPM is often designed to mimic the nonlinearity of film, where the film response function is designed to produce attractive images [19, 20]. Moreover, the response function generated by the IPM also varies for every pixel due to spatially adaptive processing modules for image enhancement. Thus, it is not appropriate to estimate the CRF with images acquired from the IPM’s output. Therefore, we use the CRF before the IPM and apply it to the BP-LDR images to reconstruct the BP-HDR image.
In the BP-LDR images, the CRF is expressed linearly as f(x) = αx + β, as shown in Figure 2. Here, α represents the slope of the CRF corresponding to the sensitivity of the RGB channels; that is, α differs according to the color channel of the Bayer pattern. However, there is no need to estimate α because the auto white balance sub-module of the IPM adjusts the slopes of the RGB channels to be equal. Therefore, f(·) can be approximated as f(x) = x + β, where β represents the black level of the image sensor, which can be simply estimated as the average of the optical black region located on the boundary of the image sensor.
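A minimal sketch of this linearization step, assuming the optical black region occupies the first few columns of the raw frame (the actual location and size are sensor-specific):

```python
import numpy as np

def estimate_black_level(raw, ob_cols=8):
    """Estimate the black level beta as the mean of the optical black
    region; here assumed to be the first `ob_cols` columns."""
    return float(raw[:, :ob_cols].mean())

def linearize(raw, beta):
    """With the pre-IPM CRF approximated as f(x) = x + beta, inversion
    reduces to a black-level subtraction (clipped at zero)."""
    return np.clip(raw.astype(np.float64) - beta, 0.0, None)
```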
2.2 Proposed design method of the weighting function for BP-HDR image reconstruction
where Wn represents the weighting function corresponding to the n-th BP-LDR image, In(i,j) represents the given pixel value at position (i,j) of the n-th BP-LDR image, Δtn represents the exposure time of In, and N is the number of BP-LDR images. For convenience, we arranged I1, I2, …, In, …, IN such that Δt1 > Δt2 > … > Δtn > … > ΔtN. Furthermore, f represents the CRF explained in the previous section, and thus f^{-1}(In)/Δtn represents the radiance value of In. Therefore, (2) can be regarded as an equation which combines the radiance values of the BP-LDR images into the BP-HDR image.
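A sketch of the combination in (2), assuming the linear pre-IPM CRF f(x) = x + β so that f^{-1}(I) = I − β; the function and argument names are illustrative, not the paper's notation:

```python
import numpy as np

def reconstruct_bp_hdr(bp_ldr, dts, beta, weights):
    """Weighted combination of BP-LDR radiance estimates into one
    BP-HDR image, following the form of (2)."""
    num = np.zeros(bp_ldr[0].shape, dtype=np.float64)
    den = np.zeros_like(num)
    for img, dt, w in zip(bp_ldr, dts, weights):
        radiance = (img.astype(np.float64) - beta) / dt  # f^{-1}(I_n)/dt_n
        num += w * radiance
        den += w
    return num / np.maximum(den, 1e-12)  # guard against zero total weight
```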
The details are described in the following sections.
2.2.1 The weight for data reliability
- The two adjacent weighting functions Wn and Wn+1 intersect at the same point as in conventional methods.
- All of the functions have the same slopes at the transition region.
By obeying the first constraint, the most reliable BP-LDR image for a certain radiance range remains consistent with that in conventional methods. However, since the conventional weighting functions are designed for RGB images processed by the image processing module, e.g., after gamma correction, they are not suitable for BP-LDR images. Therefore, we modify the conventional weighting functions to adapt to the BP-LDR images. That is, the position of the maximum value (ρ) in the weighting functions is changed from the center of the pixel value range to a more reliable position, as will be mentioned later in Section 3. By obeying the second constraint, the change of the two weights between different-exposure LDR images becomes close to linear. This can prevent artifacts which could otherwise be generated by a nonlinear change in the weight ratio. To reduce the overlap between adjacent weighting functions while obeying the above-mentioned constraints, the slopes at the transition region have to be increased.
Here, C represents the parameter that controls the slope of the function.
For pixel values below the peak position, the weighting function is a Gaussian function with its mean at the peak position. For pixel values above it, the weight becomes 1 since IN is the most reliable in this range.
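The shape described above can be sketched as a one-sided Gaussian for the shortest-exposure image; the parameter names `rho` (peak position) and `c` (flank steepness) are placeholders for the paper's ρ and C:

```python
import numpy as np

def reliability_weight(pixel, rho, c):
    """One-sided data-reliability weight: a Gaussian flank rising to the
    peak position `rho`, constant 1 beyond it (the range where this
    exposure is most reliable)."""
    pixel = np.asarray(pixel, dtype=np.float64)
    return np.where(pixel < rho,
                    np.exp(-c * (pixel - rho) ** 2),  # Gaussian below the peak
                    1.0)                              # fully reliable above it
```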
The weighting function obtained in this section is further modified to consider local motion by the method described in Section 2.2.3.
2.2.2 The weight for ghost artifact reduction
where n0 denotes the reference image and Mn represents the region in In which satisfies the above-mentioned fundamental assumption. Even without the switching component, the weighting component assigns small weights to these regions. However, the switching component is effective in reducing ghost artifacts because it completely excludes these regions from the reconstruction process.
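The switching behaviour can be sketched as a binary mask that zeroes the weight wherever a motion-difference measure against the reference image exceeds a threshold; the names and the threshold are assumptions, not the paper's formulation.

```python
import numpy as np

def ghost_switch_mask(diff, threshold):
    """Binary switching component: 1 where the difference from the
    reference image is small enough, 0 where local motion is assumed."""
    return (np.asarray(diff) <= threshold).astype(np.float64)

def apply_switch(weight, diff, threshold):
    """Ghost regions are completely excluded, not merely down-weighted."""
    return np.asarray(weight) * ghost_switch_mask(diff, threshold)
```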
Here, the luminance term represents the difference of the luminance values between the reference image In0 and In, and the chrominance term, where X denotes the red (R) or blue (B) channel, represents the difference of the chrominance values between In0 and In. The parameters CY, CR, and CB are chosen to balance the luminance difference and the two chrominance differences, respectively.
Here, Xn represents the pixel value of the X channel in In, which is calculated by bilinear interpolation depending on the pixel position.
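A simplified sketch of filling one colour plane of the Bayer mosaic by bilinear averaging of sampled neighbours (the paper's interpolator may differ); `mask` marks where the channel was actually sampled:

```python
import numpy as np

def interp_channel(bayer, mask):
    """Fill one Bayer colour plane: sampled positions keep their value,
    missing positions get the average of sampled 3x3 neighbours."""
    h, w = bayer.shape
    vals = np.where(mask, bayer, 0.0)
    cnt = mask.astype(np.float64)
    num = np.zeros_like(vals)
    den = np.zeros_like(vals)
    # accumulate the 3x3 neighbourhood (including the centre pixel)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            num[max(0, dy):h + min(0, dy), max(0, dx):w + min(0, dx)] += \
                vals[max(0, -dy):h + min(0, -dy), max(0, -dx):w + min(0, -dx)]
            den[max(0, dy):h + min(0, dy), max(0, dx):w + min(0, dx)] += \
                cnt[max(0, -dy):h + min(0, -dy), max(0, -dx):w + min(0, -dx)]
    return np.where(mask, bayer, num / np.maximum(den, 1.0))
```

The interpolated planes can then be used to form the per-pixel luminance and chrominance differences of the ghost measure.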
2.2.3 The weight for data reliability considering local motion
Here, the weight Ws is not required for the third term on the right-hand side of (26), since the radiance value range captured by I1 corresponds to the low-radiance part of I2.
The fourth term compensates for the case in which local motion is detected in both I1 and I2. If local motion occurs only in I2, the third term in (27) is sufficient, and the case in which local motion occurs only in I1 is already compensated for by the third term in (26).
Finally, the weighting function is used as the data reliability term in (3).
The performance of the proposed algorithm was tested with several BP-LDR images, which were captured with a CMOS sensor at three different exposures (Δ t1=t, Δ t2=t/4, and Δ t3=t/16). The BP-LDR images have a pixel value range of 0≤intensity≤4,095 (12-bit). The 12-bit pixel value range is widely used for digital cameras. Later, the 12-bit range is compressed to 8-bit RGB data by the IPM.
With the proposed method, several parameters were set empirically and tested with various images to obtain the best results. The parameter ρ in (5) was set to . The parameter δ in (6), which determines the degree of overlap between the data reliability weights, was set to 0.25. The parameters CY, CR, and CB in (17) were set to 20, 10, and 10, respectively. The kernel size S and the variance σ2 in (20) and (23) were set to 5×5 and 4, respectively. No pre-processing was performed on the input BP-LDR images, but pre-processing steps such as bad pixel correction [22] and Gr-Gb imbalance correction [23] can improve the HDR result depending on the quality of the imaging sensor. For better visualization, we show the results as RGB images rather than as Bayer patterned images. All the input BP-LDR images were post-processed by edge-preserving color interpolation [24], white balancing, color correction, and gamma correction. For the resulting BP-HDR images, an additional tone-mapping algorithm [25] was used to compress the dynamic range, which visualizes the HDR image information on a low dynamic range display.
We compared the performance of the proposed method with three conventional methods with respect to the influence of noise in dark regions and ghost artifacts. The first conventional method (CM1) uses weighted summation with a Gaussian weighting function without considering ghost artifact reduction. The second (CM2) and third (CM3) methods are widely used commercial software programs [26, 27] that obtain an HDR image with ghost artifact reduction. In CM2 and CM3, the parameters associated with ghost artifact removal were set to the highest level, and the BP-LDR images were preprocessed by the same edge-preserving color interpolation, white balancing, and gamma correction algorithms used for the visualization.
Comparison of experimental results in quantitative terms
In this paper, we have proposed a Bayer patterned high dynamic range (BP-HDR) image reconstruction algorithm that operates on multiple Bayer patterned low dynamic range (BP-LDR) images. Unlike conventional methods, the proposed method works on the Bayer raw image. This allows for a linear CRF and also improves the efficiency of hardware implementation. The proposed method addresses both the noise and the ghost artifact problems. To this end, a new weighting function is designed so that each of the BP-LDR images independently covers its corresponding region according to the radiance value. Furthermore, the weighting function is designed to detect local motion in the Bayer pattern and to exclude ghosting regions. As a result, the proposed method weakens the influence of noise in the short-exposure BP-LDR image and prevents ghost artifacts. Experimental results show that the proposed method produces a high-quality BP-HDR image while being robust against ghost artifacts and noise factors, even when there is excessive local motion.
Procedure for calculating the intersection point γn
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (No. 2012R1A2A4A01003732).
- Mann S, Picard R: Being 'undigital' with digital cameras: extending dynamic range by combining differently exposed pictures. In IS&T 48th Annual Conference. Washington D.C.; 7–11 May 1995:422-428.
- Mitsunaga T, Nayar SK: Radiometric self calibration. In 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Fort Collins; 23–25 June 1999:374-380.
- Debevec PE, Malik J: Recovering high dynamic range radiance maps from photographs. In 24th International Conference on Computer Graphics and Interactive Techniques. Los Angeles; 3–8 Aug 1997:369-378.
- Pal C, Szeliski R, Uyttendaele M, Jojic N: Probability models for high dynamic range imaging. In 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Washington D.C.; 27 June–2 July 2004:173-180.
- Srikantha A, Sidibé D: Ghost detection and removal for high dynamic range images: recent advances. Signal Process. Image Commun 2012, 27(6):650-662. doi:10.1016/j.image.2012.02.001
- Gallo O, Gelfandz N, Chen W-C, Tico M, Pulli K: Artifact-free high dynamic range imaging. In 2009 IEEE International Conference on Computational Photography. San Francisco; 16–17 Apr 2009:1-7.
- Khan EA, Akyuz AO, Reinhard E: Ghost removal in high dynamic range images. In 2006 IEEE International Conference on Image Processing. Atlanta; 8–11 Oct 2006:2005-2008.
- Jacobs K, Loscos C, Ward G: Automatic high-dynamic range image generation for dynamic scenes. IEEE Comput. Graphics Appl 2008, 28(2):84-93.
- Heo YS, Lee KM, Lee SU, Moon Y, Cha J: Ghost-free high dynamic range imaging. In 10th Asian Conference on Computer Vision. Queenstown; 8–12 Nov 2010:486-500.
- An J, Lee SH, Kuk JG, Cho NI: A multi-exposure image fusion algorithm without ghost effect. In 2011 IEEE International Conference on Acoustics, Speech and Signal Processing. Prague; 22–27 May 2011:1565-1568.
- Robertson MA, Borman S, Stevenson RL: Dynamic range improvement through multiple exposures. In 1999 International Conference on Image Processing. Kobe; 24–28 Oct 1999:159-163.
- Shao L, Rehman AU: Image demosaicing using content and colour-correlation analysis. Signal Process. doi:10.1016/j.sigpro.2013.07.017
- Shao L, Zhang H, de Haan G: An overview and performance evaluation of classification-based least squares trained filters. IEEE Trans. Image Process 2008, 17(10):1772-1782.
- Ramanath R, Snyder WE, Yoo Y, Drew MS: Color image processing pipeline. IEEE Signal Process. Mag 2005, 22(1):34-43.
- Shao L, Yan R, Li X, Liu Y: From heuristic optimization to dictionary learning: a review and comprehensive comparison of image denoising algorithms. IEEE Trans. Cybernet. doi:10.1109/TCYB.2013.2278548
- Pham B, Pringle G: Color correction for an image sequence. IEEE Comput. Graphics Appl 1995, 15(3):38-42. doi:10.1109/38.376611
- Bayer BE: Color imaging array. U.S. Patent 3,971,065, July 1976
- Holst GC, Lomheim TS: CMOS/CCD Sensors and Camera Systems. SPIE, Bellingham; 2011.
- Grossberg MD, Nayar SK: Determining the camera response from images: what is knowable? IEEE Trans. Pattern Anal. Mach. Intell 2003, 25(11):1455-1467. doi:10.1109/TPAMI.2003.1240119
- Tsin Y, Ramesh V, Kanade T: Statistical calibration of CCD imaging process. In 8th IEEE International Conference on Computer Vision. Vancouver; 7–14 July 2001:480-487.
- Han YS, Choi E, Kang MG: Smear removal algorithm using the optical black region for CCD imaging sensors. IEEE Trans. Consum. Electron 2009, 55(4):2287-2293.
- Dierickx B, Meynants G: Missing pixel correction algorithm for image sensors. In EUROPTO Conference on Advanced Focal Plane Arrays and Electronic Camera 2. Zurich; 7 Sep 1998:200-203.
- Chino N, Une H: Color imaging by independently controlling gains of each of R, Gr, Gb, and B signals. U.S. Patent 7,009,639, March 2006
- Lu W, Tan Y-P: Color filter array demosaicking: new method and performance measures. IEEE Trans. Image Process 2003, 12(10):1194-1210. doi:10.1109/TIP.2003.816004
- Meylan L, Susstrunk S: High dynamic range image rendering with a retinex-based adaptive filter. IEEE Trans. Image Process 2006, 15(9):2820-2830.
- Photomatix Version 4.0.2, HDRsoft. http://www.hdrsoft.com/. Accessed 31 Jan 2014
- Adobe Photoshop CS5 Version 12.0.4, Adobe Systems Inc. http://www.adobe.com/products/photoshop/. Accessed 31 Jan 2014
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.