
Bayer patterned high dynamic range image reconstruction using adaptive weighting function

Abstract

It is not easy to acquire a desired high dynamic range (HDR) image directly from a camera due to the limited dynamic range of most image sensors. Therefore, generally, a post-process called HDR image reconstruction is used, which reconstructs an HDR image from a set of differently exposed images to overcome the limited dynamic range. However, conventional HDR image reconstruction methods suffer from noise factors and ghost artifacts. This is due to the fact that the input images taken with a short exposure time contain much noise in the dark regions, which contributes to increased noise in the corresponding dark regions of the reconstructed HDR image. Furthermore, since input images are acquired at different times, the images contain different motion information, which results in ghost artifacts. In this paper, we propose an HDR image reconstruction method which reduces the impact of the noise factors and prevents ghost artifacts. To reduce the influence of the noise factors, the weighting function, which determines the contribution of a certain input image to the reconstructed HDR image, is designed to adapt to the exposure time and local motions. Furthermore, the weighting function is designed to exclude ghosting regions by considering the differences of the luminance and the chrominance values between several input images. Unlike conventional methods, which generally work on a color image processed by the image processing module (IPM), the proposed method works directly on the Bayer raw image. This allows for a linear camera response function and also improves the efficiency in hardware implementation. Experimental results show that the proposed method can reconstruct high-quality Bayer patterned HDR images while being robust against ghost artifacts and noise factors.

1 Introduction

Image capturing devices like digital cameras and camcorders have recently improved remarkably. However, image sensors such as charge-coupled devices (CCD) and complementary metal-oxide semiconductors (CMOS) in these imaging devices can still only capture a limited dynamic range. As a result, when a captured scene contains a dynamic range above the given limitation, a loss of information is inevitable even if the exposure is adjusted according to the brightness of the scene. Thus, many methods based on signal processing have been proposed to reproduce scenes with a high dynamic range (HDR).

To obtain HDR images, many HDR imaging approaches utilize low dynamic range (LDR) images with different exposures [1–11]. Most of these approaches first convert the pixel values of input images into radiance values by using the camera response function (CRF), where the CRF refers to the function that maps the radiance values of a given scene to the pixel values in the captured image, and the radiance refers to the physical quantity of light energy on each element of the sensor array. Next, the radiance values of the input images are combined into a single HDR image using weighting functions based on the reliability of the input data.

In early studies, conventional approaches were proposed to estimate the CRF from multiple LDR images. These CRF estimation approaches can be categorized as parametric [1, 2] and non-parametric approaches [3, 4]. In parametric approaches, Mann and Picard [1] presented a variety of parametric forms for CRF estimation. Mitsunaga and Nayar [2] used a high-order polynomial function to estimate the CRF. On the other hand, in terms of non-parametric approaches, Debevec and Malik [3] estimated the CRF using an objective function with a smoothness constraint. Pal et al. [4] used a Bayesian network consisting of a probabilistic model for an imaging function and a generative model for smooth functions.

In recent years, some techniques have been proposed to prevent artifacts in HDR images caused by moving objects [5–10]. If local motion occurs in a scene while the LDR images are being captured, a ghost artifact appears in the HDR image. Most of the ghost artifact-preventing techniques first detect the local motion by using a ghost artifact measurement and then combine the LDR images without the ghost artifact regions.

All these approaches, however, combine LDR images without considering the influence of noise factors. Dark regions in an LDR image taken with a short exposure time are under-exposed and therefore contain relatively strong noise. Thus, the noise level is increased in the corresponding dark regions of the reconstructed HDR image. These approaches also utilize RGB images processed by the image processing module (IPM). The IPM is a processor which converts Bayer raw data into an RGB image suitable to the human visual system [12–14]. Using RGB images processed by the IPM makes the CRF estimation inaccurate since the IPM includes several adaptive nonlinear sub-modules, e.g., dynamic range compression, noise reduction [15], and color correction [16] algorithms. An inaccurate CRF leads to poor HDR imaging performance. Moreover, the use of RGB images increases the hardware complexity since the IPM has to be performed several times before the HDR image reconstruction, as shown in Figure 1a.

Figure 1. Comparison of hardware complexity when using (a) conventional methods and (b) the proposed method.

In this paper, we introduce a new approach which performs HDR image reconstruction on the Bayer raw images before the IPM [17], as shown in Figure 1b. The proposed method can be widely used in applications such as real-time HDR video cameras because of the reduction of hardware complexity. The CRF estimation is simpler and more accurate than conventional methods because the CRF before the IPM is linear. The proposed method considers the noise and the ghost artifact problems. For this purpose, a new weighting function is proposed to combine the Bayer patterned LDR (BP-LDR) images. The proposed weighting function is designed so that each of the BP-LDR images independently covers each corresponding region according to the radiance value in order to reduce the influence of the noise. The regions covered by each BP-LDR image are determined by the exposure of each BP-LDR image and the existence of local motion. This avoids using the short-exposure BP-LDR image to reconstruct the dark regions in the Bayer patterned HDR (BP-HDR) image. The weighting function also detects the local motion in the Bayer pattern and excludes ghosting regions. To detect the local motion, the luminance and the chrominance values are directly calculated in the Bayer pattern, and the differences of these values are utilized. When the proposed method is compared with conventional methods, the detection performance is improved since an accurate CRF is employed.

The rest of this paper is organized as follows: In Section 2, the proposed BP-HDR image reconstruction approach is described in detail. The properties of the CRF are discussed and analyzed in Section 2.1. Section 2.2 describes the design process of the adaptive weighting function for BP-HDR image reconstruction. In Section 3, experimental results of various test images are presented, and the paper is concluded in Section 4.

2 Proposed BP-HDR image reconstruction

2.1 Properties of the camera response function

In general, the radiance value I_R which passes through the lens of the camera is converted into a pixel value I by the image sensor and the IPM. The CRF is the function that relates the radiance value to the pixel value and is expressed as

I = f(I_R \cdot \Delta t)
(1)

where Δt represents the exposure time.

CCD and CMOS image sensors are widely used in image acquisition systems. Both of these sensors utilize the same kind of element called a photodiode [18], which generates a current proportional to the light energy. The photodiode offers a high level of resistance when there is no light falling on it. On the other hand, when light falls on it, the resistance of the photodiode is reduced and the current increases linearly with the light energy. That is, the response function of the photodiode is linear with respect to the radiance value. An example under 6,500 K fluorescent lighting using a CMOS sensor with a Bayer pattern is shown in Figure 2. In the plot of the pixel value versus the intensity of the radiance, each response function appears to be linear.

Figure 2. Camera response function of the RGB channels in the Bayer raw data.

Although the relationship between the light energy and the image sensor output is linear, the CRF is usually non-linear due to the IPM. In the IPM, the linear response function is intentionally converted into a non-linear function to produce an image attractive to the human visual system. The response function generated by the IPM is often designed to mimic the nonlinearity of film, where the film response function is designed to produce attractive images [19, 20]. Moreover, the response function generated by the IPM also varies for every pixel due to spatially adaptive processing modules for image enhancement. Thus, it is not appropriate to estimate the CRF with images acquired from the IPM’s output. Therefore, we use the CRF before the IPM and apply it to the BP-LDR images to reconstruct the BP-HDR image.

In the BP-LDR images, the CRF is expressed linearly as f(x) = αx + β, as shown in Figure 2. Here, α represents the slope of the CRF corresponding to the sensitivity of the RGB channels. That is, α differs according to the color channel of the Bayer pattern. However, there is no need to estimate α because the auto white balance sub-module of the IPM adjusts the slopes of the RGB channels to be equal [14]. Therefore, f(·) can be approximated as f(x) = x + β, where β represents the black level of the image sensor, which can be simply estimated as the average of the optical black region located on the boundary of the image sensor [21].
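Because the CRF before the IPM is linear, radiance recovery reduces to a black-level subtraction. The following is a minimal sketch of this relationship; the function names and the NumPy formulation are ours, not from the paper:

```python
import numpy as np

def estimate_black_level(optical_black):
    # beta: average of the optical black pixels on the sensor boundary [21]
    return float(np.mean(optical_black))

def crf(radiance, dt, beta):
    # f(I_R * dt) = I_R * dt + beta, assuming the per-channel slopes have
    # already been equalized by the auto white balance sub-module
    return radiance * dt + beta

def inverse_crf(pixel, dt, beta):
    # radiance = f^{-1}(I) / dt = (I - beta) / dt
    return (pixel - beta) / dt
```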

2.2 Proposed design method of the weighting function for BP-HDR image reconstruction

The proposed method combines a set of differently exposed Bayer raw images to obtain a single BP-HDR image. The BP-LDR images are combined with different weights determined by the weighting function. The weighting function consists of a data reliability term and a ghost artifact reduction term, and it utilizes the linear property of the CRF. The data reliability term determines how much a certain BP-LDR image should be reflected in the BP-HDR image and is designed to exclude the influence of noise factors. The ghost artifact reduction term excludes ghosting regions from the reconstruction process. The reconstructed BP-HDR image (Ê) is calculated as

\hat{E}(i,j) = \frac{\sum_{n=1}^{N} W_n(i,j) \cdot f^{-1}(I_n(i,j)) / \Delta t_n}{\sum_{n=1}^{N} W_n(i,j)}
(2)

where W_n represents the weighting function corresponding to the n-th BP-LDR image, I_n(i,j) represents the given pixel value at the position (i,j) of the n-th BP-LDR image, Δt_n represents the exposure time of I_n, and N is the number of BP-LDR images. For convenience, we arranged I_1, I_2, …, I_n, …, I_N such that Δt_1 > Δt_2 > … > Δt_n > … > Δt_N. Furthermore, f represents the CRF explained in the previous section, and thus f^{-1}(I_n)/Δt_n represents the radiance value of I_n. Therefore, (2) can be regarded as an equation which combines the radiance values of the BP-LDR images into the BP-HDR image.
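In code, (2) is a per-pixel weighted average in the radiance domain. A minimal NumPy sketch, assuming the weight maps W_n have already been computed (the function and variable names are illustrative):

```python
import numpy as np

def reconstruct_bp_hdr(images, exposures, weights, beta, eps=1e-8):
    """Combine N Bayer raw frames into one BP-HDR radiance map, eq. (2).

    images:    list of N Bayer raw frames (same shape)
    exposures: list of N exposure times, ordered dt_1 > ... > dt_N
    weights:   list of N per-pixel weight maps W_n
    """
    num = np.zeros(images[0].shape, dtype=np.float64)
    den = np.zeros(images[0].shape, dtype=np.float64)
    for I_n, dt_n, W_n in zip(images, exposures, weights):
        num += W_n * (I_n - beta) / dt_n   # W_n * f^{-1}(I_n) / dt_n
        den += W_n
    return num / (den + eps)               # eps guards zero-weight pixels
```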

Figure 3 shows a block diagram of the estimation of the proposed weighting function. In order to estimate the weighting function W_n, first, the weight for the data reliability (W_n^r) and the weight for the ghost artifact reduction (W_n^l) have to be estimated. The data reliability weight W_n^r is further modified to Ŵ_n^r to reflect the change of the data reliability according to the ghosting regions in the other LDR images. Then, finally, Ŵ_n^r and W_n^l are combined to obtain the final weighting function W_n:

W_n(i,j) = \hat{W}_n^r(i,j) \cdot W_n^l(i,j),
(3)
Figure 3. Block diagram for the proposed weighting function.

The details are described in the following sections.

2.2.1 The weight for data reliability

Conventional methods use a symmetric weighting function that decreases with the distance from the center of the pixel value range [3, 11]. This is based on the fact that the pixel values near the center are the most reliable. If only the influence of noise is considered, the higher part of the pixel value range seems reliable, since the shot noise in a typical image sensor follows a Poisson distribution. However, since the higher part is close to the saturation value and since post-processing methods such as gamma correction push the higher part even closer to the saturation value, a smaller weight should be assigned to the higher part to prevent the use of saturated pixel values in the reconstruction process. Figure 4a shows two examples of weighting functions that are generally used. These weighting functions are converted into functions in the radiance domain, as shown in Figure 4b. Using the function values in the radiance domain, the ratio in which the BP-LDR images are combined into the BP-HDR image is calculated. For example, the radiance value L_1 in Figure 4b has a weight of about 0.6 (point a) in the middle exposure BP-LDR image, while the weights in the other BP-LDR images are about 0.2 (point b). This means the weight for the middle exposure is about three times larger than the weights for the other exposures at L_1. In general, it can be observed that a BP-LDR image captured with long exposure has a large weight in the low-radiance region, while a BP-LDR image captured with short exposure has a large weight in the high-radiance region. However, a major problem is that the short-exposure BP-LDR image has a considerable weight in the very low radiance region. Using the short-exposure BP-LDR image in the low-radiance region is improper since it can be under-exposed or noisy in this region.
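For reference, the two shapes in Figure 4a can be sketched as follows. The Gaussian profile follows the spirit of [11], but the exact constants here are illustrative assumptions rather than the values used in the cited works:

```python
import numpy as np

def gaussian_weight(I, I_max=4095.0):
    # symmetric Gaussian centered at the middle of the pixel value range
    mid = I_max / 2.0
    return np.exp(-4.0 * ((I - mid) / mid) ** 2)

def hat_weight_conventional(I, I_max=4095.0):
    # triangular profile: 1 at the center, 0 at both ends (cf. eq. (29))
    mid = I_max / 2.0
    return 1.0 - np.abs(I - mid) / mid
```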

Figure 4. Two examples of conventional weighting functions. (a) In the pixel value domain and (b) in the radiance domain.

To overcome the abovementioned problems, we propose a design method for the reliability function. The design method consists of two steps, as listed in Algorithm 1. The first step is to design a desired weighting function (W̃_n^r) in the radiance domain. The second step is the conversion of the designed weighting function from the radiance domain into the pixel value domain.

In the first step, W̃_n^r is designed so that, for a certain n, it covers only a certain region in the radiance domain, as shown in Figure 5b. That is, we design the weighting function so that the short-exposure BP-LDR image has an effect only in the high-radiance region, while the long-exposure BP-LDR image has an effect only in the low-radiance region. This reduces the amount of noise in the reconstructed BP-HDR image, since the short-exposure BP-LDR image, which normally contains noise in the dark regions, is not used to reconstruct the low-radiance region in the BP-HDR image.

Figure 5. Proposed weighting function for reliability of the data. (a) In the pixel value domain and (b) in the radiance domain.

To make the design simple, the following constraints are used:

  1. The two adjacent functions W̃_n^r and W̃_{n+1}^r intersect at the same point as in conventional methods.

  2. All of the functions W̃_n^r have the same slope at the transition region.

By obeying the first constraint, the most reliable BP-LDR image for a certain radiance range remains consistent with that in conventional methods. However, since the conventional weighting functions are designed to adapt to the RGB images processed by the IPM, e.g., by gamma correction, they are improper for the BP-LDR images. Therefore, we modify the conventional weighting functions to adapt to the BP-LDR images. That is, the maximum value position (ρ) in the weighting functions is changed from the center of the pixel value range to a more reliable position, as mentioned later in Section 3. By obeying the second constraint, the change of the two weights between different exposure LDR images becomes close to linear. This can prevent artifacts which could otherwise be generated by a nonlinear change in the weight ratio. To reduce the overlap between adjacent weighting functions while obeying the abovementioned constraints, the slopes at the transition region have to be increased.

The proposed weighting functions are calculated starting with the one corresponding to the longest exposure image. As shown in Figure 5b, W̃_1^r for the longest exposure BP-LDR image I_1 is asymmetric with respect to the point μ_1^H as

\tilde{W}_1^r(I_1^R) =
\begin{cases}
1, & \text{if } I_1^R < \mu_1^H \\
\exp\left(-C \cdot (I_1^R - \mu_1^H)^2\right), & \text{otherwise,}
\end{cases}
(4)

where

\mu_1^H = \frac{f^{-1}(\rho)}{\Delta t_1}.
(5)

Here, C represents the parameter that controls the slope of the function W̃_1^r.

Obeying the second constraint, C should be the same for all W̃_n^r. For I_1^R < μ_1^H, W̃_1^r has the value 1, since I_1 is the most reliable in this range. For I_1^R ≥ μ_1^H, W̃_1^r is the Gaussian function with mean μ_1^H. C is determined by the function value δ at the intersection point γ_1 of W̃_1^r and W̃_2^r, i.e., δ = W̃_1^r(γ_1) = W̃_2^r(γ_1). It is calculated from (4) as

C = -\frac{\log \delta}{(\gamma_1 - \mu_1^H)^2}.
(6)

As described above, the intersection point γ_n of W̃_n^r and W̃_{n+1}^r is the same as in conventional methods. The procedure for calculating γ_n is described in the Appendix. The result is

\gamma_n = \frac{\rho (I_{\max} - \beta) - \beta (I_{\max} - \rho)}{\Delta t_n \rho + \Delta t_{n+1} (I_{\max} - \rho)},
(7)

where I_max represents the maximum value of the BP-LDR images (4,095 for 12-bit images). W̃_n^r for n = 2, …, N−1 can be calculated by the following equation:

\tilde{W}_n^r(I_n^R) =
\begin{cases}
\exp\left(-C \cdot (I_n^R - \mu_n^L)^2\right), & \text{if } I_n^R < \mu_n^L \\
1, & \text{if } \mu_n^L \le I_n^R < \mu_n^H \\
\exp\left(-C \cdot (I_n^R - \mu_n^H)^2\right), & \text{otherwise,}
\end{cases}
(8)

where μ_n^L and μ_n^H are the mean values of the Gaussian functions applied to the low part and the high part of W̃_n^r, respectively. Since W̃_{n−1}^r and W̃_n^r intersect at the point γ_{n−1} and are symmetric around γ_{n−1}, μ_n^L is obtained from μ_{n−1}^H:

\mu_n^L = 2\gamma_{n-1} - \mu_{n-1}^H.
(9)

Likewise, μ_n^H is calculated from (8) as

\exp\left(-C \cdot (\gamma_n - \mu_n^H)^2\right) = \delta
\;\Rightarrow\; \gamma_n - \mu_n^H = \sqrt{\frac{-\log(\delta)}{C}}
\;\Rightarrow\; \mu_n^H = \gamma_n - \sqrt{\frac{-\log(\delta)}{C}}.
(10)

The various coefficients used to design the proposed weighting function W̃_n^r are marked in Figure 6. The weighting function W̃_N^r corresponding to the shortest exposure image I_N is asymmetric with respect to μ_N^L:

\tilde{W}_N^r(I_N^R) =
\begin{cases}
\exp\left(-C \cdot (I_N^R - \mu_N^L)^2\right), & \text{if } I_N^R < \mu_N^L \\
1, & \text{otherwise,}
\end{cases}
(11)
Figure 6. Illustration of the data reliability weighting function W̃_n^r; the coefficients used to design the function are marked.

For I_N^R < μ_N^L, W̃_N^r is a Gaussian function with mean μ_N^L. For I_N^R ≥ μ_N^L, W̃_N^r becomes 1, since I_N is the most reliable in this range.

After the desired weighting function is designed by the abovementioned step, it has to be converted into a weighting function in the pixel value domain. This is done by step 2 of Algorithm 1, by which W̃_1^r, W̃_n^r, and W̃_N^r in (4), (8), and (11) are converted into W_1^r, W_n^r, and W_N^r, respectively, as

W_1^r(I_1) =
\begin{cases}
1, & \text{if } I_1 < f(\mu_1^H \Delta t_1) \\
\exp\left(-C \cdot \left(\frac{f^{-1}(I_1)}{\Delta t_1} - \mu_1^H\right)^2\right), & \text{otherwise,}
\end{cases}
(12)
W_n^r(I_n) =
\begin{cases}
\exp\left(-C \cdot \left(\frac{f^{-1}(I_n)}{\Delta t_n} - \mu_n^L\right)^2\right), & \text{if } I_n < f(\mu_n^L \Delta t_n) \\
1, & \text{if } f(\mu_n^L \Delta t_n) \le I_n < f(\mu_n^H \Delta t_n) \\
\exp\left(-C \cdot \left(\frac{f^{-1}(I_n)}{\Delta t_n} - \mu_n^H\right)^2\right), & \text{otherwise,}
\end{cases}
(13)
W_N^r(I_N) =
\begin{cases}
\exp\left(-C \cdot \left(\frac{f^{-1}(I_N)}{\Delta t_N} - \mu_N^L\right)^2\right), & \text{if } I_N < f(\mu_N^L \Delta t_N) \\
1, & \text{otherwise.}
\end{cases}
(14)

The weighting function W_n^r obtained in this section is further modified to consider local motion by the method described in Section 2.2.3.
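Putting (5) to (14) together, the pixel-domain reliability weight can be sketched as below. This is our reading of the design procedure (0-based image index n, names are ours), and it assumes the linear CRF f(x) = x + β:

```python
import numpy as np

def reliability_weights(I, n, dts, beta, I_max=4095.0, rho=None, delta=0.25):
    """Pixel-domain data reliability weight W_n^r, eqs. (5) to (14).

    I:   pixel values (ndarray) of the n-th image (n is 0-based here)
    dts: exposure times, ordered dt_1 > ... > dt_N
    """
    if rho is None:
        rho = 2.0 * I_max / 3.0              # value used in Section 3
    N = len(dts)
    radiance = (I - beta) / dts[n]           # f^{-1}(I) / dt, linear CRF

    # intersection points gamma of adjacent weights, eq. (7)
    gammas = [(rho * (I_max - beta) - beta * (I_max - rho))
              / (dts[k] * rho + dts[k + 1] * (I_max - rho))
              for k in range(N - 1)]

    mu_H = [(rho - beta) / dts[0]]           # mu_1^H, eq. (5)
    C = -np.log(delta) / (gammas[0] - mu_H[0]) ** 2       # eq. (6)
    mu_L = [None]                            # I_1 has no low-side mean
    for k in range(1, N):
        mu_L.append(2.0 * gammas[k - 1] - mu_H[k - 1])    # eq. (9)
        mu_H.append(gammas[k] - np.sqrt(-np.log(delta) / C)
                    if k < N - 1 else np.inf)             # eq. (10)

    w = np.ones_like(radiance, dtype=np.float64)
    if n > 0:                                # Gaussian roll-off below mu_n^L
        low = radiance < mu_L[n]
        w[low] = np.exp(-C * (radiance[low] - mu_L[n]) ** 2)
    if n < N - 1:                            # Gaussian roll-off above mu_n^H
        high = radiance >= mu_H[n]
        w[high] = np.exp(-C * (radiance[high] - mu_H[n]) ** 2)
    return w
```

Note that the high-side mean of the last image and the low-side mean of the first image are unused, matching the one-sided forms (12) and (14).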

2.2.2 The weight for ghost artifact reduction

In general, any movement introduces ghost artifacts when input images with different exposures are sequentially acquired. Figure 7 shows the effect of these ghost artifacts. Figure 7a shows three input images with different exposures, and Figure 7b shows the HDR image result using a conventional Gaussian weight [11]. The movement of the people while the input images were being captured caused the ghost artifacts in the HDR image, which are shown in the close-up image (Figure 7c). To prevent the ghost artifacts, the region where local motion occurs has to be excluded from the reconstruction process.

Figure 7. An example of the ghost artifact in an HDR image. (a) Input images, (b) the result containing ghost artifacts, and (c) close-up of the red box in (b).

The proposed method uses the weighting function to exclude the ghost artifacts. Small weights are assigned to the pixels where ghost artifacts occur so that the ghosting regions are excluded from the reconstruction process. Before calculating the weighting function, the image with the fewest saturated and dark pixels is selected as the reference image. From the correspondence between the reference image and the other BP-LDR images, the weighting function W_n^l is decomposed into the switching (s_n) and the weighting (w_n^l) components as

W_n^l(i,j) = s_n(i,j) \cdot w_n^l(i,j).
(15)

The switching component s_n is defined based on the fundamental assumption that the pixel value captured at a long exposure should be larger than that at a short exposure. If the pixel value at a short exposure is larger than that at a long exposure, one of the two pixel values must be wrong and should be excluded from the reconstruction process. Therefore, s_n becomes

s_n(i,j) =
\begin{cases}
1, & \text{if } (i,j) \in M_n \text{ or } n = n_0 \\
0, & \text{otherwise,}
\end{cases}
(16)

where n_0 denotes the index of the reference image and M_n represents the region in I_n which satisfies the abovementioned fundamental assumption. Even when the switching component is not used, the weighting component w_n^l assigns small weights to these regions. However, the switching component is effective in reducing ghost artifacts because it excludes these regions completely from the reconstruction process.
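Equation (16) leaves the membership test for M_n implicit. The sketch below is one plausible reading, in which consistency with the exposure-ordering assumption is checked pixel-wise against the reference image; the comparison rule is our interpretation:

```python
import numpy as np

def switching_component(I_n, I_ref, dt_n, dt_ref):
    # s_n, eq. (16): keep only pixels consistent with the assumption that
    # a longer exposure never yields a smaller pixel value than a shorter one
    if dt_n == dt_ref:                    # n = n_0: reference is always kept
        return np.ones_like(I_n, dtype=np.float64)
    if dt_n > dt_ref:
        in_M_n = I_n >= I_ref             # longer exposure than the reference
    else:
        in_M_n = I_n <= I_ref             # shorter exposure than the reference
    return in_M_n.astype(np.float64)
```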

The weighting component w_n^l is determined by the differences of the luminance and chrominance values between the reference image and the other BP-LDR images. Regions with large differences can be regarded as regions where local motion occurs, and thus small weights have to be assigned to them. On the other hand, large weights have to be assigned to regions with small differences. Applying this to w_n^l, the weight assigned to each pixel is

w_n^l(i,j) = \exp\left(-C_Y \cdot D_n^Y(i,j) - C_R \cdot D_n^R(i,j) - C_B \cdot D_n^B(i,j)\right).
(17)

Here, D_n^Y represents the difference of the luminance values between I_{n_0} and I_n. D_n^X, where X denotes the red (R) or blue (B) channel, represents the difference of the chrominance values between I_{n_0} and I_n. The parameters C_Y, C_R, and C_B are chosen to balance D_n^Y, D_n^R, and D_n^B, respectively.

To obtain D_n^Y and D_n^X, the BP-LDR images are first adjusted to the same exposure setting because the exposures of the BP-LDR images differ. The procedure to adjust the exposure can be expressed by the formula below:

I_{n_1 \rightarrow n_2} = \min\left\{ f\left( f^{-1}(I_{n_1}) \times \frac{\Delta t_{n_2}}{\Delta t_{n_1}} \right),\; I_{\max} \right\}
(18)

where I_{n_1→n_2} represents the image I_{n_1} whose exposure time is adjusted from Δt_{n_1} to Δt_{n_2}, and min{a,b} represents the minimum value between a and b. The exposure adjustment in (18) rescales the values in the radiance domain from Δt_{n_1} to Δt_{n_2} and then converts them back to the pixel value domain.
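A direct NumPy transcription of (18), assuming the linear CRF f(x) = x + β (the function name is ours):

```python
import numpy as np

def adjust_exposure(I, dt_from, dt_to, beta, I_max=4095.0):
    # eq. (18): rescale in the radiance domain, return to pixel values,
    # and clip at I_max since brighter pixels would have saturated anyway
    return np.minimum((I - beta) * (dt_to / dt_from) + beta, I_max)
```

After adjusting the exposures, D_n^Y is calculated as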

D_n^Y(i,j) = \frac{\left| \tilde{Y}_{n_0}(i,j) - \tilde{Y}_n(i,j) \right|}{\max\left\{ \tilde{Y}_{n_0}(i,j),\; \tilde{Y}_n(i,j) \right\}}
(19)

where

\tilde{Y}_n(i,j) = \sum_{(p,q) \in S} G_\sigma(p,q) \cdot Y_n(i+p, j+q).
(20)

Here, max{a,b} represents the maximum value between a and b, Y_n represents the luminance component of I_n, G_σ(·) represents the corresponding truncated Gaussian kernel with variance σ², and S represents the index set of a support, i.e., a rectangular windowed search range. The normalization of Ỹ_{n_0}(i,j) − Ỹ_n(i,j) in D_n^Y(i,j) compensates for the difference according to the level of luminance. The luminance component Y_n is calculated in the Bayer pattern from neighboring pixels as

Y_n(i,j) = \frac{1}{4} I_n(i,j)
+ \frac{1}{8} \left[ I_n(i-1,j) + I_n(i,j-1) + I_n(i+1,j) + I_n(i,j+1) \right]
+ \frac{1}{16} \left[ I_n(i-1,j-1) + I_n(i-1,j+1) + I_n(i+1,j-1) + I_n(i+1,j+1) \right].
(21)

By applying (21), Y_n contains 50% green, 25% red, and 25% blue regardless of the position.
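Equation (21) is a fixed 3×3 binomial filter, so it can be applied to the whole Bayer frame in one call; a sketch using SciPy (the border-handling choice is our assumption):

```python
import numpy as np
from scipy.ndimage import convolve

# eq. (21): on a Bayer mosaic, this kernel mixes 50% green, 25% red,
# and 25% blue at every pixel position
BAYER_LUMA_KERNEL = np.array([[1.0, 2.0, 1.0],
                              [2.0, 4.0, 2.0],
                              [1.0, 2.0, 1.0]]) / 16.0

def bayer_luminance(I_n):
    # 'nearest' replicates border pixels so the output keeps the input size
    return convolve(I_n.astype(np.float64), BAYER_LUMA_KERNEL, mode='nearest')
```

D_n^X is also calculated after adjusting the exposures, as follows: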

D_n^X(i,j) = \frac{\left| \tilde{K}_{n_0}^X(i,j) - \tilde{K}_n^X(i,j) \right|}{\max\left\{ \tilde{Y}_{n_0}(i,j),\; \tilde{Y}_n(i,j) \right\}}
(22)

where K̃_n^X is calculated by

\tilde{K}_n^X(i,j) = \sum_{(p,q) \in S} G_\sigma(p,q) \cdot \left( Y_n(i+p, j+q) - X_n(i+p, j+q) \right).
(23)

Here, X_n represents the pixel value of the X channel in I_n, which is calculated by bilinear interpolation depending on the position.

Figure 8 shows D_n^Y, D_n^R, and D_n^B computed between the reference image (middle exposure) and the other two BP-LDR images. Regions where D_n^Y, D_n^R, and D_n^B are large are regarded as local motion regions. In Figure 8, it can be seen that the local motion is detected effectively. Although it is difficult to detect local motion in the dark regions of the short-exposure BP-LDR image due to noise, this is not a problem since the short-exposure BP-LDR image is rarely used to reconstruct dark regions of the scene.
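The ghost detection path of (17), (19), (20), (22), and (23) can be sketched as below. We substitute SciPy's gaussian_filter for the truncated kernel G_σ, assume absolute differences in the numerators of (19) and (22), and precompute the chrominance maps as K = Y − X; all names are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ghost_weight(Y_ref, Y_n, K_ref, K_n, sigma=2.0,
                 C_Y=20.0, C_R=10.0, C_B=10.0, eps=1e-8):
    """Weighting component w_n^l of eq. (17) for one non-reference image.

    Y_ref, Y_n: Bayer luminance maps, eq. (21), after exposure adjustment
    K_ref, K_n: dicts of chrominance maps {'R': Y - R, 'B': Y - B}, eq. (23)
    """
    Yr = gaussian_filter(Y_ref, sigma)                 # Y-tilde, eq. (20)
    Yn = gaussian_filter(Y_n, sigma)
    norm = np.maximum(Yr, Yn) + eps
    D = C_Y * np.abs(Yr - Yn) / norm                   # D_n^Y, eq. (19)
    for X, C_X in (('R', C_R), ('B', C_B)):
        Kr = gaussian_filter(K_ref[X], sigma)          # K-tilde, eq. (23)
        Kn = gaussian_filter(K_n[X], sigma)
        D += C_X * np.abs(Kr - Kn) / norm              # D_n^X, eq. (22)
    return np.exp(-D)                                  # eq. (17)
```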

Figure 8. Ghost artifact regions calculated from the captured BP-LDR images.

2.2.3 The weight for data reliability considering local motion

If the weighting function W_n^r obtained in the previous section is used directly as Ŵ_n^r in (3), a problem arises: a certain local motion can cause artifacts. For ease of understanding, consider a situation in which two LDR images have different exposures, and suppose that the image with the shorter exposure is the reference image. Figure 9 illustrates this situation. Here, a local motion region appears in the longer exposure image I_1 (marked light gray in Figure 9c), and a black region appears in the shorter exposure image I_2 (marked dark gray in Figure 9c). When W_n for n = 1, 2 is calculated by (3) using W_n^r, the weights become W_1 ≈ 0 and W_2 ≈ 0, respectively. This is because the local motion region in I_1 leads to W_1^l ≈ 0, and the black region in I_2 leads to W_2^r ≈ 0. As a result, an artifact appears since no LDR image is utilized in the intersection of these two regions (marked black in Figure 9c). To avoid this artifact, the weighting function W_n^r has to be modified.

Figure 9. An example of a situation in which the weighting functions W_n for all n become 0. (a) A long-exposure image I_1, (b) a short-exposure image I_2 (reference image), and (c) a weight image (light gray: W_1 ≈ 0, dark gray: W_2 ≈ 0, black: W_1, W_2 ≈ 0).

The modified weighting function Ŵ_n^r should increase its weight if a local motion occurs in the opponent BP-LDR image to compensate for the small weighting value of the local motion region. For example, again consider the case where the image I_1 is obtained with the longest exposure, I_2 with the second longest, I_3 with the third, and so on for n = 4, 5, …, N. In this case, the weighting function Ŵ_1^r for the longest exposure BP-LDR image I_1 is composed of W_1^r and a partial weight of W_2^r to compensate for the local motion in I_2. In detail, when a local motion is detected in a certain region of I_2, a partial weight of W_2^r is added to Ŵ_1^r for this region. The modified weight Ŵ_1^r does not include the partial weights of W_{n=3,…,N}^r, since these are almost 0 in the radiance value range of I_1. Therefore, Ŵ_1^r can be expressed as follows:

\hat{W}_1^r(i,j) = W_1^r(I_1(i,j)) + W_s(I_1(i,j)) \cdot \left(1 - W_2^l(i,j)\right) \cdot W_2^r(I_{1 \rightarrow 2}(i,j)).
(24)

The weight W_2^l is the weight defined in (15) for the image I_2. It has a small value if the likelihood of a local motion in I_2 is large; in that case, the weight W_2^r is strongly reflected in the weight Ŵ_1^r. The image I_{1→2}, calculated from (18), is used to compute W_2^r. Here, W_s(·) represents a simple hat function [7], shown in Figure 10, which prevents the use of saturated values in I_n:

W_s(I_n) = 1 - \left( \frac{2 \cdot I_n}{I_{\max}} - 1 \right)^{12}.
(25)
Figure 10. The hat function W_s used to avoid the saturation level.

Next, we consider the case of Ŵ_2^r. The weight Ŵ_2^r includes the weighting values W_2^r and W_3^r in the same manner as in the abovementioned case. Furthermore, the weight W_1^r is additionally included in Ŵ_2^r. This is because the radiance range of I_2 includes the radiance range of I_1, since I_2 is obtained with a shorter exposure than I_1. Therefore, using I_2 to compensate for the local motion in I_1 and I_3, Ŵ_2^r becomes

\hat{W}_2^r(i,j) = W_2^r(I_2(i,j)) + W_s(I_2(i,j)) \cdot \left(1 - W_3^l(i,j)\right) \cdot W_3^r(I_{2 \rightarrow 3}(i,j)) + \left(1 - W_1^l(i,j)\right) \cdot W_1^r(I_{2 \rightarrow 1}(i,j)).
(26)

Here, the weight W_s is not required for the third term on the right-hand side of (26), since the radiance value range captured by I_1 corresponds to the low-radiance part of I_2.

Similarly, Ŵ_3^r contains the weights W_3^r, W_1^r, W_2^r, and W_4^r:

\hat{W}_3^r(i,j) = W_3^r(I_3(i,j)) + W_s(I_3(i,j)) \cdot \left(1 - W_4^l(i,j)\right) \cdot W_4^r(I_{3 \rightarrow 4}(i,j)) + \left(1 - W_2^l(i,j)\right) \cdot W_2^r(I_{3 \rightarrow 2}(i,j)) + \left(1 - W_2^l(i,j)\right) \cdot \left(1 - W_1^l(i,j)\right) \cdot W_1^r(I_{3 \rightarrow 1}(i,j)).
(27)

The fourth term compensates for the case in which local motion is detected in both I_1 and I_2. If local motion occurs only in I_2, the third term in (27) is sufficient; the case in which local motion occurs only in I_1 is already compensated for by the third term in (26).

Generalizing to an arbitrary n (except n = 1 and n = N), Ŵ_n^r can be represented as

\hat{W}_n^r(i,j) = W_n^r(I_n(i,j)) + W_s(I_n(i,j)) \cdot \left(1 - W_{n+1}^l(i,j)\right) \cdot W_{n+1}^r(I_{n \rightarrow n+1}(i,j)) + \sum_{k=1}^{n-1} \prod_{p=1}^{k} \left(1 - W_{n-p}^l(i,j)\right) \cdot W_{n-k}^r(I_{n \rightarrow n-k}(i,j)).
(28)

Finally, the weighting function Ŵ_n^r is used as the data reliability term in (3).
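A sketch of the recursion in (25) and (28), reusing adjust_exposure from the sketch after (18); W_r stands for a callable evaluating (12) to (14) for a given image index (0-based n here), and W_l is the list of ghost weights from (15). The interface is our own packaging of the paper's quantities:

```python
import numpy as np

def hat_weight(I_n, I_max=4095.0):
    # eq. (25): suppresses saturated pixel values [7]
    return 1.0 - (2.0 * I_n / I_max - 1.0) ** 12

def motion_adjusted_reliability(n, images, dts, W_r, W_l, beta, I_max=4095.0):
    """Modified data reliability weight W-hat_n^r, eq. (28)."""
    N = len(images)
    I_n = images[n]
    w = W_r(I_n, n)                       # first term: W_n^r(I_n)
    if n + 1 < N:                         # motion in the next shorter exposure
        I_s = adjust_exposure(I_n, dts[n], dts[n + 1], beta, I_max)
        w += hat_weight(I_n, I_max) * (1.0 - W_l[n + 1]) * W_r(I_s, n + 1)
    for k in range(1, n + 1):             # motion in all longer exposures
        gate = np.ones_like(I_n, dtype=np.float64)
        for p in range(1, k + 1):
            gate *= 1.0 - W_l[n - p]      # product term of eq. (28)
        I_s = adjust_exposure(I_n, dts[n], dts[n - k], beta, I_max)
        w += gate * W_r(I_s, n - k)
    return w
```

With W_r bound to the reliability_weights sketch above, e.g., W_r = lambda px, k: reliability_weights(px, k, dts, beta), multiplying the result by W_n^l per (3) yields the final weight W_n.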

3 Experimental results

The performance of the proposed algorithm was tested with several BP-LDR images, which were captured with a CMOS sensor at three different exposures (Δt_1 = t, Δt_2 = t/4, and Δt_3 = t/16). The BP-LDR images have a pixel value range of 0 ≤ intensity ≤ 4,095 (12 bits). The 12-bit pixel value range is widely used for digital cameras; later, the 12-bit range is compressed to 8-bit RGB data by the IPM.

With the proposed method, several parameters were set empirically and tested with various images to obtain the best results. The parameter ρ in (5) was set to (2/3)·I_max. The parameter δ in (6), which determines the degree of overlap between the data reliability weights, was set to 0.25. The parameters C_Y, C_R, and C_B in (17) were set to 20, 10, and 10, respectively. The kernel size S and the variance σ² in (20) and (23) were set to 5×5 and 4, respectively. No pre-processing was performed on the input BP-LDR images, but pre-processes such as bad pixel correction [22] and Gr-Gb imbalance correction [23] can improve the HDR result depending on the quality of the imaging sensor. For better visualization, we show the results as RGB images rather than Bayer patterned images. All the input BP-LDR images were post-processed by edge-preserving color interpolation [24], white balancing, color correction, and gamma correction. For the resulting BP-HDR images, an additional tone-mapping algorithm [25] was used to compress the dynamic range, which visualizes the HDR image information on a low dynamic range display.

We compared the performance with respect to the influence of noise in dark regions and ghost artifacts against three conventional methods. The first conventional method (CM1) uses a weighted summation with the Gaussian weighting function of [11] without considering ghost artifact reduction. The second (CM2) and third (CM3) methods are commercial software programs that are widely used to obtain an HDR image with ghost artifact reduction [26, 27]. In CM2 and CM3, the parameters associated with ghost artifact removal were set to the highest level. For CM2 and CM3, the BP-LDR images were preprocessed by the same edge-preserving color interpolation, white balancing, and gamma correction algorithms that are used for the visualization.

First, we performed an experiment when there was no local motion. Figure 11 shows our HDR result applied to a scene containing both indoor and outdoor environments. As can be seen in Figure 11a, the limited dynamic range of the imaging sensor produced saturated and black regions in the LDR images. In the short-exposure LDR image, the pixels in the bright regions avoided saturation, but the details in the dark regions disappeared. On the other hand, in the long-exposure LDR image, the dark regions, e.g., the part under the desk, became visible, but the bright regions became saturated. In comparison, with the proposed method, the fine details became visible in both the bright and the dark regions, as can be seen in Figure 11b. Figure 11c visualizes the values of the weighting functions W_{n=1,2,3} in colors. The R, G, and B channels were assigned to W_1, W_2, and W_3, respectively. For example, a red region indicates that the value of W_1 is dominant in that region. As a result, in the reconstruction of the BP-HDR image, the long-exposure BP-LDR image has a dominant effect on the region under the desk, while the short-exposure BP-LDR image has a dominant effect on the sky region.

Figure 11. Experimental results of the captured scene with window. (a) The three LDR images, (b) the result of the proposed method, and (c) the weight image obtained by W_n.

Figure 12 shows the results of the three CMs and the proposed method applied to a scene containing very dark regions around the test chart. Figure 12a shows the 'chart' images with different exposures. Figure 12b,c,d,e shows the results of the CMs and the proposed method, respectively. To confirm the effect of noise, the lower left corners of the results are magnified in Figure 13. The proposed results were less affected by noise than those of the CMs. In particular, with CM2 and CM3, more noise was observed in the dark regions than with the others. This is because the weight for the long-exposure BP-LDR image, which appears less noisy in the dark regions than the other images, decreased in the process of excluding ghosting regions. The proposed method also provides better performance than CM1, even though our method considers ghost artifacts, unlike CM1. Figure 14a,b shows the weight images of Figure 12b,e, which correspond to the weighting functions used in CM1 and the proposed method, respectively. As can be observed, with CM1, the surrounding dark region revealed similarly small weights for all the BP-LDR images. In comparison, with the proposed method, W_n assigned a dominant weight to the long-exposure BP-LDR image in the dark region, which made the dark region visible.

Figure 12. Experimental results of the captured scene with chart. (a) The three LDR images, (b) CM1, (c) CM2, (d) CM3, and (e) the proposed method.

Figure 13. Close-up comparison of Figure 12. (a) CM1, (b) CM2, (c) CM3, and (d) the proposed method.

Figure 14. The weight images of Figure 12. (a) CM1 and (b) the proposed method.

Figure 15 shows the results for a scene with a desk. The scene contained a very dark region under the desk, as shown in Figure 15a. In Figure 15, the noise under the desk was reduced effectively in the results of CM2 and the proposed method, while CM1 and CM3 did not reduce the influence of noise and even generated some artifacts around the light source. However, with CM2, the details in both the bright and dark regions disappeared when compared with the other results. To evaluate the performance with respect to the influence of noise, the standard deviation (σ) and the coefficient of variation (CV) were used as objective performance criteria. The CV is defined as the ratio of the standard deviation σ to the mean μ, i.e., σ/μ; it is useful for comparison when the means of the result images differ from each other. Table 1 presents the standard deviations and the CVs of the proposed method and the CMs, calculated in the homogeneous dark regions of Figures 13 and 15. As described in Table 1, the proposed method recorded smaller standard deviation and CV values than the CMs, which shows that it outperforms them numerically.

In the second experiment, we performed experiments on captured scenes that included object movements. Figure 16 shows the results for a scene with leaves blowing in the wind. Figure 16b shows ghost artifacts around the leaves since CM1 does not consider local motion. CM2 cannot remove the ghost artifacts effectively, as can be seen in Figure 16c. Figure 16d,e shows the results of CM3 and the proposed method, respectively. Both results show almost no ghost artifacts compared with the results of CM1 and CM2. As a result, CM3 and the proposed method are able to prevent ghost artifacts from small local motion such as blowing leaves.

Figure 17 shows the results for a scene with moving people; the captured input images are shown in Figure 7a. As can be seen, the motion regions are extremely large. Ghost artifacts are observed with all of the conventional methods, whereas the proposed method introduces none. Figure 18 shows a close-up of Figure 17. In Figure 18a, false color artifacts are observed in bright regions. This is because saturated pixels in the longest exposure BP-LDR image are used in the reconstruction process, and the reconstructed radiance regions affected by these saturated pixels become falsely achromatic. Meanwhile, the BP-HDR image undergoes post-processing such as white balancing since the colors in the Bayer raw images may be unbalanced; as a result, false colors may occur in the achromatic regions. Moreover, as can be seen in Figure 18, the conventional methods cannot detect the person wearing black pants in front of the black desk because their brightness values are similar. In comparison, the proposed method prevents ghost artifacts and produces a high-quality BP-HDR image, as shown in Figure 18d.

We present the results of the proposed method together with the input images in Figure 19, which demonstrates that the proposed method provides BP-HDR images without any artifacts.

Figure 15. Experimental results of the captured scene with desk. (a) The three LDR images, (b) CM1, (c) CM2, (d) CM3, and (e) the proposed method.

Table 1. Comparison of experimental results in quantitative terms.
Figure 16. Experimental results of the captured scene with leaves blowing in the wind. (a) The three LDR images, (b) the result of CM1, (c) the result of CM2, (d) the result of CM3, and (e) the result of the proposed method.

Figure 17. Experimental results of the captured scene with moving people. (a) The result of CM1, (b) the result of CM2, (c) the result of CM3, and (d) the result of the proposed method.

Figure 18. Close-up comparison of Figure 17. (a) The result of CM1, (b) the result of CM2, (c) the result of CM3, and (d) the result of the proposed method.

Figure 19. Results of the proposed BP-HDR image reconstruction method with various test images.

4 Conclusion

In this paper, we have proposed a BP-HDR (Bayer patterned high dynamic range) image reconstruction algorithm that works on multiple BP-LDR (Bayer patterned low dynamic range) images. Unlike conventional methods, the proposed method operates directly on the Bayer raw image, which allows for a linear CRF and improves the efficiency of hardware implementation. The proposed method addresses both the noise and the ghost artifact problems. To this end, a new weighting function is designed so that each of the BP-LDR images independently covers its corresponding region according to the radiance value. Furthermore, the weighting function is designed to detect local motion in the Bayer pattern and to exclude ghosting regions. As a result, the proposed method weakens the influence of noise in the short-exposure BP-LDR image and prevents ghost artifacts. Experimental results show that the proposed method produces high-quality BP-HDR images while being robust against ghost artifacts and noise factors, even in the presence of excessive local motion.

Appendix

Procedure for calculating the intersection point γ_n

The intersection point γ_n of W̃_n^r and W̃_{n+1}^r is determined as the same point obtained from the conventional weighting function. We use the simple weighting function to calculate γ_n as

W_n^c(x) =
\begin{cases}
\frac{1}{\rho} \cdot x, & \text{if } x < \rho \\
\frac{-1}{I_{\max} - \rho} (x - \rho) + 1, & \text{otherwise.}
\end{cases}
(29)

The weighting function W_n^c is shown in the top part of Figure 4. The falling part of W_n^c and the rising part of W_{n+1}^c intersect in the radiance domain. The procedure for calculating the intersection point γ_n is shown below:

\begin{aligned}
-\frac{1}{I_{\max} - \rho}\left[f(\gamma_n \Delta t_n) - \rho\right] + 1 &= \frac{1}{\rho} \cdot f(\gamma_n \Delta t_{n+1}) \\
\frac{1}{I_{\max} - \rho} \cdot f(\gamma_n \Delta t_n) + \frac{1}{\rho} \cdot f(\gamma_n \Delta t_{n+1}) &= \frac{\rho}{I_{\max} - \rho} + 1 \\
\frac{1}{I_{\max} - \rho} \cdot (\gamma_n \Delta t_n + \beta) + \frac{1}{\rho} \cdot (\gamma_n \Delta t_{n+1} + \beta) &= \frac{\rho}{I_{\max} - \rho} + 1 \\
\gamma_n \left(\frac{\Delta t_n}{I_{\max} - \rho} + \frac{\Delta t_{n+1}}{\rho}\right) + \frac{\beta}{\rho} + \frac{\beta}{I_{\max} - \rho} &= \frac{\rho}{I_{\max} - \rho} + 1 \\
\gamma_n &= \frac{\frac{\rho - \beta}{I_{\max} - \rho} - \frac{\beta}{\rho} + 1}{\frac{\Delta t_n}{I_{\max} - \rho} + \frac{\Delta t_{n+1}}{\rho}} \\
\gamma_n &= \frac{\rho(I_{\max} - \beta) - \beta(I_{\max} - \rho)}{\Delta t_n \rho + \Delta t_{n+1}(I_{\max} - \rho)}.
\end{aligned}
(30)
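The closed form in the last line can be sanity-checked numerically; the following snippet evaluates both conventional weights of (29) at the claimed intersection radiance (all numeric values are arbitrary test inputs):

```python
import numpy as np

I_max, rho, beta = 4095.0, 2730.0, 64.0          # illustrative values
dt_n, dt_n1 = 1.0 / 4.0, 1.0 / 16.0              # dt_n > dt_{n+1}

# eq. (7): claimed intersection radiance gamma_n
gamma = (rho * (I_max - beta) - beta * (I_max - rho)) \
        / (dt_n * rho + dt_n1 * (I_max - rho))

f = lambda x: x + beta                            # linear CRF
falling = -(f(gamma * dt_n) - rho) / (I_max - rho) + 1.0   # W_n^c
rising = f(gamma * dt_n1) / rho                   # W_{n+1}^c
assert np.isclose(falling, rising)                # both sides of (30) agree
```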

References

  1. Mann S, Picard R: Being 'undigital' with digital cameras: extending dynamic range by combining differently exposed pictures. In IS&T 48th Annual Conference, Washington D.C.; 7–11 May 1995:422-428.

  2. Mitsunaga T, Nayar SK: Radiometric self calibration. In 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Fort Collins; 23–25 June 1999:374-380.

  3. Debevec PE, Malik J: Recovering high dynamic range radiance maps from photographs. In 24th International Conference on Computer Graphics and Interactive Techniques, Los Angeles; 3–8 Aug 1997:369-378.

  4. Pal C, Szeliski R, Uyttendaele M, Jojic N: Probability models for high dynamic range imaging. In 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Washington D.C.; 27 June–2 July 2004:173-180.

  5. Srikantha A, Sidibé D: Ghost detection and removal for high dynamic range images: recent advances. Signal Process. Image Commun. 2012, 27(6):650-662. doi:10.1016/j.image.2012.02.001

  6. Gallo O, Gelfand N, Chen W-C, Tico M, Pulli K: Artifact-free high dynamic range imaging. In 2009 IEEE International Conference on Computational Photography, San Francisco; 16–17 Apr 2009:1-7.

  7. Khan EA, Akyuz AO, Reinhard E: Ghost removal in high dynamic range images. In 2006 IEEE International Conference on Image Processing, Atlanta; 8–11 Oct 2006:2005-2008.

  8. Jacobs K, Loscos C, Ward G: Automatic high-dynamic range image generation for dynamic scenes. IEEE Comput. Graphics Appl. 2008, 28(2):84-93.

  9. Heo YS, Lee KM, Lee SU, Moon Y, Cha J: Ghost-free high dynamic range imaging. In 10th Asian Conference on Computer Vision, Queenstown; 8–12 Nov 2010:486-500.

  10. An J, Lee SH, Kuk JG, Cho NI: A multi-exposure image fusion algorithm without ghost effect. In 2011 IEEE International Conference on Acoustics, Speech and Signal Processing, Prague; 22–27 May 2011:1565-1568.

  11. Robertson MA, Borman S, Stevenson RL: Dynamic range improvement through multiple exposures. In 1999 International Conference on Image Processing, Kobe; 24–28 Oct 1999:159-163.

  12. Shao L, Rehman AU: Image demosaicing using content and colour-correlation analysis. Signal Process. doi:10.1016/j.sigpro.2013.07.017

  13. Shao L, Zhang H, de Haan G: An overview and performance evaluation of classification-based least squares trained filters. IEEE Trans. Image Process. 2008, 17(10):1772-1782.

  14. Ramanath R, Snyder WE, Yoo Y, Drew MS: Color image processing pipeline. IEEE Signal Process. Mag. 2005, 22(1):34-43.

  15. Shao L, Yan R, Li X, Liu Y: From heuristic optimization to dictionary learning: a review and comprehensive comparison of image denoising algorithms. IEEE Trans. Cybern. doi:10.1109/TCYB.2013.2278548

  16. Pham B, Pringle G: Color correction for an image sequence. IEEE Comput. Graphics Appl. 1995, 15(3):38-42. doi:10.1109/38.376611

  17. Bayer BE: Color imaging array. U.S. Patent 3,971,065, July 1976.

  18. Holst GC, Lomheim TS: CMOS/CCD Sensors and Camera Systems. SPIE, Bellingham; 2011.

  19. Grossberg MD, Nayar SK: Determining the camera response from images: what is knowable? IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25(11):1455-1467. doi:10.1109/TPAMI.2003.1240119

  20. Tsin Y, Ramesh V, Kanade T: Statistical calibration of CCD imaging process. In 8th IEEE International Conference on Computer Vision, Vancouver; 7–14 July 2001:480-487.

  21. Han YS, Choi E, Kang MG: Smear removal algorithm using the optical black region for CCD imaging sensors. IEEE Trans. Consum. Electron. 2009, 55(4):2287-2293.

  22. Dierickx B, Meynants G: Missing pixel correction algorithm for image sensors. In EUROPTO Conference on Advanced Focal Plane Arrays and Electronic Cameras II, Zurich; 7 Sep 1998:200-203.

  23. Chino N, Une H: Color imaging by independently controlling gains of each of R, Gr, Gb, and B signals. U.S. Patent 7,009,639, March 2006.

  24. Lu W, Tan Y-P: Color filter array demosaicking: new method and performance measures. IEEE Trans. Image Process. 2003, 12(10):1194-1210. doi:10.1109/TIP.2003.816004

  25. Meylan L, Susstrunk S: High dynamic range image rendering with a retinex-based adaptive filter. IEEE Trans. Image Process. 2006, 15(9):2820-2830.

  26. Photomatix Version 4.0.2. HDRsoft. http://www.hdrsoft.com/. Accessed 31 Jan 2014.

  27. Adobe Photoshop CS5 Version 12.0.4. Adobe Systems Inc. http://www.adobe.com/products/photoshop/. Accessed 31 Jan 2014.


Acknowledgements

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (No. 2012R1A2A4A01003732).

Author information


Corresponding author

Correspondence to Moon Gi Kang.

Additional information

Competing interests

The authors declare that they have no competing interests.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.



Cite this article

Kang, H., Lee, S.H., Song, K.S. et al. Bayer patterned high dynamic range image reconstruction using adaptive weighting function. EURASIP J. Adv. Signal Process. 2014, 76 (2014). https://doi.org/10.1186/1687-6180-2014-76
