
Region Adaptive Color Demosaicing Algorithm Using Color Constancy

Abstract

This paper proposes a novel way of combining color demosaicing and the auto white balance (AWB) method, both important parts of image processing. The performance of the AWB is generally affected by demosaicing results because most AWB algorithms are performed posterior to color demosaicing. In this paper, in order to increase the performance and efficiency of the AWB algorithm, the color constancy problem is examined during the color demosaicing step. Initial estimates of the directional luminance and chrominance values are defined for estimating edge direction and calculating the AWB gain. In order to prevent color failure in conventional edge-based AWB methods, we propose a modified edge-based AWB method that uses a predefined achromatic region. The estimation of edge direction is performed region adaptively by using the local statistics of the initial estimates of the luminance and chrominance information. Simulated and real Bayer color filter array (CFA) data are used to evaluate the performance of the proposed method. When compared to conventional methods, the proposed method shows significant improvements in terms of visual and numerical criteria.

1. Introduction

1.1. Background

Digital cameras have become increasingly popular and have replaced film cameras in many applications. A typical digital camera acquires images using a single-chip image sensor with a color filter array (CFA) in order to reduce cost and size. The Bayer CFA [1], shown in Figure 1, is the most common array; it samples one of the primary colors at each pixel.

Figure 1
figure1

The Bayer pattern.

In order to render images captured with a single-chip image sensor as a viewable image, an image processing pipeline is required. The most important parts of this pipeline are demosaicing and automatic white balance (AWB). Since only one color component is available at each pixel, the other two missing color components have to be estimated from the neighboring pixels. This process is referred to as CFA demosaicing or CFA interpolation. The color constancy property of the human visual system allows the perceived color to remain relatively constant at different color temperatures [2]. This capability is required for cameras to generate natural-looking images that match human perception. The goal of the AWB method is to emulate human color constancy. This is normally achieved by adjusting the image so that it looks as if it were taken under a canonical light (usually daylight).
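To make the Bayer sampling concrete, the snippet below simulates a CFA mosaic from a full-color image. It assumes an RGGB tile layout (the actual arrangement depends on the sensor) and uses numpy purely for illustration; it is not the paper's implementation.

```python
import numpy as np

def bayer_mosaic(rgb):
    """Sample a full-color image on an assumed RGGB Bayer grid.

    rgb: (H, W, 3) float array. Returns a single-channel (H, W) mosaic
    in which each pixel keeps only the color its Bayer site measures.
    """
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R sites on even rows/cols
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G sites in R rows
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G sites in B rows
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B sites on odd rows/cols
    return mosaic
```

Demosaicing is the inverse problem: recovering the two discarded components at every site of this mosaic.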

In recent years, there have been investigations into more sophisticated demosaicing algorithms. Based on the assumption of smooth hue transition, demosaicing can be performed using a ratio model which assumes that the ratio between luminance and chrominance at the same position is constant in the neighborhood [3]. Instead of using color ratios, many methods make use of interchannel color differences, assuming that the difference between luminance and chrominance is smooth in a small region [4–7]. Since the human visual system is sensitive to edges in images, edge-directed demosaicing methods choose the interpolation direction so as to avoid interpolating across edges, instead interpolating along them [8–11]. Instead of choosing a single interpolation direction, edge indicator functions are used in [12–17]: indicator functions in several directions are defined as measures of edge information, and a missing pixel is determined as a weighted sum of its neighbors. Color demosaicing can also be performed by reconstruction approaches [18–21]. A demosaiced image is obtained by deriving a minimum mean square error (MMSE) estimator [18]. Regularization approaches are proposed in [19], and the color channels are reconstructed using the projections onto convex sets (POCS) technique [20]. In [21], the demosaicing problem is formulated as a Bayesian estimation problem. Another recent approach, referred to as decision-based demosaicing, divides the demosaicing procedure into an interpolation stage and a decision stage [22–27]. In the interpolation stage, horizontally and vertically interpolated images are produced. In the decision stage, soft-decision or hard-decision methods are employed to choose the pixels interpolated in the direction with fewer artifacts. The Fisher discriminant is used as the decision criterion [22]. A homogeneity map is used to improve the soundness of the criterion [23]. A second-order Taylor series is used to produce directionally interpolated images in the interpolation stage [24]. Demosaicing error is minimized by a directional linear minimum mean square error estimation technique [25]. In [26], Chung and Chan presented an adaptive demosaicing algorithm that uses the variances of color differences along the horizontal and vertical edge directions. However, in the method proposed by Tsai and Song [27], the decision stage is performed before the interpolation stage.

Various algorithms, such as gray world (GW), perfect reflectors (Max-RGB), gamut mapping, and color by correlation, have been proposed to maintain the color constancy of an image under different light sources. A good comparison of these algorithms can be found in [28, 29]. The gray world algorithm assumes that the average surface reflectance of a typical scene is achromatic [30, 31]. The perfect reflectors algorithm is a simple and fast color constancy algorithm which estimates the light source color from the maximum response of the different color channels [32]. The gamut mapping algorithm is based on the observation that only a limited set of values can be observed under a given illuminant [2, 33]. The basic idea of color by correlation is to precompute a correlation matrix which describes the extent to which proposed illuminants are compatible with the occurrence of image chromaticities [34, 35].
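As a concrete point of reference, the gray world assumption above can be sketched in a few lines. The helper name `gray_world` and the float-RGB-in-[0, 1] convention are illustrative assumptions, not part of any cited implementation.

```python
import numpy as np

def gray_world(img):
    """Gray-world AWB sketch: scale the R and B channels so that every
    channel mean matches the G mean, i.e. the scene average is forced
    to be achromatic.  img is an (H, W, 3) float array in [0, 1]."""
    means = img.reshape(-1, 3).mean(axis=0)
    gains = means[1] / means          # gain_c = mean(G) / mean(c)
    return np.clip(img * gains, 0.0, 1.0)
```

For example, an image whose red channel is systematically half as bright as green and blue comes out neutral after the correction.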

1.2. Motivation

For most AWB algorithms, the R, G, and B color components at each pixel are required; thus AWB algorithms are performed after color demosaicing. Therefore, the performance of the AWB is mainly affected by the demosaicing results. In order to increase the performance and efficiency of AWB algorithms, the color constancy problem can be treated in the color demosaicing step. Color demosaicing that considers color constancy has several advantages. First, computational complexity is reduced. Second, image quality is improved because the AWB method can be performed using the original Bayer data, which has not been degraded by the color demosaicing process. During the color demosaicing process, color information in fine detail regions can be degraded, introducing false color artifacts. These artifacts influence the AWB gain, and color artifacts can also be emphasized by the AWB process. In order to exploit these advantages and avoid such problems, a novel color demosaicing algorithm which uses color constancy is proposed in this paper. For the initial estimates of the proposed algorithm, the G channel is directionally interpolated using a Taylor series approximation, and the chrominance channel is calculated using the concept of spectral and spatial correlation (SSC) [27]. An edge-based AWB algorithm is performed using these initial estimates instead of a full color image. The AWB gain is obtained by using a predefined achromatic region and the initial estimates in the edge region to prevent color failure when more than one uniform object exists in the image. In order to improve the performance and the computational complexity, each pixel is classified into a flat, edge, or pattern region. Based on these preclassified regions, the color demosaicing process with color constancy is performed to reduce the number of interpolation errors using the AWB gain and local statistics of the initial estimates.

1.3. Overview

The rest of this paper is organized as follows. Section 2 provides the motivation for combining color demosaicing and AWB methods by posing the problem. The initial estimates of the luminance and chrominance channels are defined and the AWB gain is calculated. A detailed explanation of the proposed method and its theoretical improvements are presented. Section 3 presents experimental results on simulated and real CFA data and some comparisons with other algorithms. The paper is concluded in Section 4.

2. A Joint Color Demosaicing Method and the AWB Algorithm

The performance of the AWB algorithm is mainly affected by demosaicing results because AWB algorithms are usually performed posterior to the color demosaicing process. In an edge-based AWB method, which is simple but shows good performance, edge extraction is an important part. Figure 2 compares the results of edge extraction on original and demosaiced images. The results of edge extraction differ significantly due to the degraded color demosaicing result of the method proposed in [7]. During the color demosaicing process, edge information can be degraded and false color artifacts can be introduced. These factors can influence the AWB gain. Figure 3 shows the AWB results on the demosaiced image. The AWB result for the image demosaiced by the method discussed in [7] is not a white balanced image because the false color artifacts and Moiré effect influence the calculation of the AWB gain. Also, the color artifacts and Moiré effect are emphasized by the AWB process. In order to avoid these problems, we consider the AWB method during the color demosaicing process. Performing color demosaicing and the AWB method simultaneously produces various advantages. In order to overcome the difficulties described above, a color demosaicing method that uses color constancy is introduced in this paper. The details of the algorithm are presented below.

Figure 2
figure2

An example of the problem of edge detection: (a) original image, (b) color demosaicing image by method [7], (c) edge detection of original image, and (d) edge detection after color demosaicing method [7].

Figure 3
figure3

An example of the effects of demosaicing artifacts on the AWB method: (a) AWB result of color demosaicing method [7] and (b) AWB result of original image.

2.1. Initial Estimate of Color Demosaicing and Color Constancy

In an AWB algorithm, full color information is required at each pixel location. However, in a Bayer CFA, only one color component exists at each pixel location. Initial estimates of the luminance and chrominance information can be used to solve this problem. In order to obtain initial estimates of the G channel (similar to the luminance information), directional estimates of the green channel at every pixel position follow the Taylor series approximation [24] for simplicity. The missing sample at an R or B position is calculated by

(1)

where the sampled value is either R or B, and the superscripts denote the top, bottom, left, and right directions: each superscripted value is an initial estimate of the G value at the R or B position calculated toward the corresponding direction.
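Since equation (1) itself is not reproduced here, the sketch below shows a common second-order directional form from the demosaicing literature (nearest G neighbor plus half the second-order gradient of the same-color channel); it illustrates the idea of four directional estimates, and the paper's exact formula may differ.

```python
import numpy as np

def directional_green_estimates(cfa, i, j):
    """Four directional estimates of the missing G value at an R/B site
    (i, j) of a Bayer mosaic `cfa`.  Each estimate combines the nearest
    G neighbor in that direction with a second-order correction from
    the same-color channel -- an illustrative, not authoritative, form.
    Requires 2 <= i, j < H-2, W-2 so all neighbors exist.
    """
    top    = cfa[i - 1, j] + (cfa[i, j] - cfa[i - 2, j]) / 2.0
    bottom = cfa[i + 1, j] + (cfa[i, j] - cfa[i + 2, j]) / 2.0
    left   = cfa[i, j - 1] + (cfa[i, j] - cfa[i, j - 2]) / 2.0
    right  = cfa[i, j + 1] + (cfa[i, j] - cfa[i, j + 2]) / 2.0
    return top, bottom, left, right
```

On a constant mosaic all four estimates agree, as expected; on an edge, at least one direction avoids interpolating across it.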

The concepts of spectral and spatial correlation (SSC) [27], shown in Figure 4, are used for the chrominance calculation. SSC means that the color difference between green and red or blue is constant over the neighboring pixels and along the edge directions. Also, the rate of change of the neighboring pixel values is constant, and the SSC value is given by

(2)

where the auxiliary term is used only for formula manipulation and the sampled value represents either R or B. When the auxiliary term is zero, the SSC value is equal to the color difference, which represents the chrominance information.

Figure 4
figure4

Spectral and spatial correlation.

The directional SSC value at an R or B position is calculated by

(3)

where the superscripts denote the top, bottom, left, and right directions: each superscripted value is an initial estimate of the SSC value at the R or B position calculated toward the corresponding direction. From (1) and (3), initial estimates of the luminance and chrominance values are obtained. Using these values, color constancy gains are calculated and the optimal demosaicing values are determined. The details of obtaining the color constancy gains are explained in the next section.

2.2. Obtaining Color Constancy Gains

For the AWB algorithm, a modified version of the edge-based method is used in this paper. Although the edge-based method is simple, its color constancy accuracy is reasonable [30]; the average edge difference in the scene is assumed to be achromatic. To prevent color failure when more than one uniform object exists in an image, a predefined achromatic region is used in this paper.
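The gray-edge principle underlying this section can be sketched on a full-color image as follows. The edge mask here is a simple gradient-magnitude threshold and the function name is hypothetical; the paper instead works on Bayer-domain initial estimates and additionally restricts the gain to a predefined achromatic region.

```python
import numpy as np

def gray_edge_gains(img, thresh=0.05):
    """Gray-edge gain sketch: average R, G, B over edge pixels and
    assume that average is achromatic, so gain_R = mean_G / mean_R and
    gain_B = mean_G / mean_B.  img is an (H, W, 3) float array."""
    gray = img.mean(axis=2)
    gy, gx = np.gradient(gray)            # vertical, horizontal slopes
    edges = np.hypot(gx, gy) > thresh     # crude edge-pixel mask
    r, g, b = (img[..., c][edges].mean() for c in range(3))
    return g / r, g / b
```

On a neutral scene with a luminance step, the edge pixels are gray, so both gains come out as 1; a color cast at the edges would tilt them accordingly.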

In the color difference model, the R and B channels are expressed as sums of luminance and chrominance information [7]:

(4)

where the chrominance terms are the color differences between the G channel and the R and B channels, respectively. In order to obtain white balanced R and B values, we calculate them by using the gray-edge hypothesis. In order to apply color constancy gains to color difference model-based color demosaicing, the color constancy gains are subtracted from the chrominance value:

(5)

where the gains are color constancy gains computed from the averages of the edge pixels of the R, G, and B channels. The R and B values of achromatic points are proportional to intensity. In order to increase AWB performance, we use an intensity-dependent color constancy gain instead of a constant color constancy gain. Then (5) becomes

(6)

In this subsection, the color constancy gains are calculated.

For the edge-based AWB method, edge point detection is required before the AWB process. Each region is classified as flat, edge, or pattern in the Bayer CFA for more accurate edge direction decisions and for improving the computational efficiency of the color demosaicing process. In order to perform the proposed method, Bayer CFA data, instead of full color images, is used to classify the type of region at the R and B pixel locations.

For region classification, two directional parameters at the R or B pixels are defined as

(7)

where these two parameters are used to estimate whether there are strong horizontal or vertical edges in the testing window. A region which is not classified as an edge region is considered to be a flat region or a pattern region. A third parameter is also defined as

(8)

where this parameter is used to determine whether the region is flat or a pattern. By using these three values, the region at each pixel location is classified as

(9)

where the thresholds are predefined values used to distinguish the edge, flat, and pattern regions.
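Since equations (7)-(9) are not reproduced here, the sketch below only mirrors the structure of the classification: a dominant-direction test for edges, then an activity test separating flat from pattern windows. The measures `dh`/`dv` (summed directional absolute differences) and `activity` are hypothetical stand-ins for the paper's parameters.

```python
import numpy as np

def classify_region(window, t_edge=10.0, t_flat=10.0):
    """Classify a local CFA window as 'edge', 'flat', or 'pattern'.

    Structural sketch of the paper's three-way decision: a strongly
    dominant horizontal or vertical difference marks an edge; among
    the rest, low overall activity means flat, otherwise pattern.
    """
    dh = np.abs(np.diff(window, axis=1)).sum()  # horizontal differences
    dv = np.abs(np.diff(window, axis=0)).sum()  # vertical differences
    if abs(dh - dv) > t_edge:                   # one dominant direction
        return 'edge'
    activity = dh + dv
    return 'flat' if activity < t_flat else 'pattern'
```

A uniform window is flat, a step is an edge, and a checkerboard (strong but direction-balanced differences) lands in the pattern class, matching the intent described above.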

Although the effects of a dominant color can be alleviated by using the predefined achromatic region, the dominant color problem may still exist, for instance, in texture images, because pattern edge regions with similar colors are regarded as edge points [31]. In such cases, an AWB method based only on edge detection may perform worse than the GW method. Also, color artifacts can be introduced in the pattern region. For these reasons, only normal edge regions are used as edge points for the AWB method in this paper.

After the edge points are detected, an estimation of the luminance and chrominance information is required to calculate the color constancy gain. In the case of a G pixel location, we estimate the missing R and B values:

(10)

In order to estimate the missing chrominance information, the difference of the channel information is used and defined as

(11)

Using (11), an estimated value is obtained as

(12)

where the subscript denotes the pixel location that satisfies the following equation:

(13)

From (12), the directional values are temporarily defined as

(14)

Using the directional values and the assumptions above, the temporarily used chrominance values are expressed as

(15)

From (10) and (15), we obtain the luminance and chrominance information used to calculate the color constancy gain. Figure 5 shows the predefined achromatic region that prevents a color failure situation when more than one uniform object exists in an image. The predefined achromatic region is obtained using a color chart under different lighting conditions. Using the mean information of the Bayer CFA data, information about the dominant color of the image is obtained, and this information restricts the predefined achromatic region for accurate color constancy gain. The average point is defined from the channel means. When the distance between the average point and the predefined achromatic region is smaller than the predefined threshold, the possible achromatic region becomes

(16)

where one coordinate represents the abscissa and the other the ordinate, and the bounds are defined relative to the nearest point in the predefined region from the average point. Otherwise, the possible achromatic region becomes

(17)

where the bound represents a linear function of the predefined region. In this paper, the two line parameters, which are obtained from Figure 5, are −1.1394 and −0.0068, respectively. Figure 6 illustrates (16) and (17).

Figure 5
figure5

Predefined achromatic region ("A" is Incandescent Lamp (CIE-A) and "TL" is Tri-Phosphor Fluorescent (CIE-F11)).

Figure 6
figure6

Restricted achromatic region: (a) average point inside the achromatic region and (b) average point outside the achromatic region.

By using the estimated luminance and chrominance information, the color constancy gains are obtained as

(18)

where the summation runs over the edge pixels defined in (9), normalized by the total number of edge pixels. The obtained color constancy gain is used for the R and B channels during G channel demosaicing at the R and B pixel locations. A detailed explanation is provided in the next section.

2.3. Green Channel Demosaicing

In a Bayer CFA, the green plane is sampled at twice the rate of the red and blue planes. Thus, the amount of aliasing in the green plane tends to be less than that in the red and blue planes. The green plane possesses the most spatial information of the image to be interpolated and has a great influence on the visual quality of the image. In most cases, the AWB method alters only the R and B channels. The G channel is kept unchanged because the wavelength of the green color band is close to the peak of the human luminance frequency response. For these reasons, green plane interpolation at the R and B pixel locations should be performed first.

The estimated value of the missing G sample is expressed as a weighted sum of the initial directional estimates:

(19)

It has been found that, in local regions of a natural image, the color differences between pixels are quite flat [7]. Also, the green channel is flat in homogeneous regions. Accordingly, the variances of the directional G estimates and the directional color differences are used as supplementary information to determine the interpolation direction. In order to minimize the directional interpolation error and accurately estimate the direction of interpolation, the weight function is designed to be inversely proportional to these variances. Regardless of the orientation of an edge, at least one of the four estimates is an accurate estimate of the missing green color [24]. To reflect this, the weight function is designed to pick one of the four estimates. In this paper, the weight function is defined as

(20)

where the two variances are those of the directional G estimates and the directional color differences. These variances are calculated in the masks defined in Figures 10 and 11 based on the region determined in (9). It is difficult to determine edge directions in the pattern edge region; therefore, multiple directional rectangular windows are used for the calculation of local statistics, exploiting the directional information of neighboring pixels. On the other hand, a single directional rectangular window is used for the calculation of local statistics in the flat and edge regions. T_G and T_K are predefined thresholds that control the weights of the two variance terms in the weight function. From (18)-(19), white balance processing is written by

(21)

where the channel is either R or B. From (19) and (21), a fully interpolated G channel and white balanced R and B channels are obtained. Using these values, R and B channel demosaicing is performed.
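The inverse-variance fusion of the four directional estimates can be sketched as below. This omits the paper's SSC variance terms and the thresholds T_G and T_K, so it is only a minimal illustration of the weighting idea in equations (19)-(20).

```python
import numpy as np

def combine_directional_estimates(estimates, variances, eps=1e-6):
    """Fuse four directional G estimates with weights inversely
    proportional to the local variance of the corresponding
    directional color differences; eps guards against division by
    zero.  Returns the weighted-sum estimate of the missing G value."""
    w = 1.0 / (np.asarray(variances, dtype=float) + eps)
    w /= w.sum()                       # normalize weights to sum to 1
    return float(np.dot(w, estimates))
```

With equal variances this reduces to a plain average; when one direction has near-zero variance (a clean edge along it), that direction dominates, which is exactly the "pick the reliable estimate" behavior described above.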

2.4. Red and Blue Plane Interpolation

As explained in the previous section, the problem of recovering the red and blue channels reduces to demosaicing without the color constancy problem. Although the red and blue planes are more sparsely sampled than the green plane, they are easily interpolated by using the fully interpolated green plane and the color difference domains. Similar to green plane interpolation, the missing value at a given location is estimated adaptively by

(22)

In this case, the color difference is calculated directly, using the fully interpolated G channel values, and the weight is the same as in the green plane interpolation, except at a different pixel location. The interpolation of the remaining missing values at the other pixel locations is performed in a similar way.

3. Experimental Results

The performance of the proposed algorithm was tested with the 24 simulated Kodak images, which have been widely used for evaluating demosaicing algorithms with Bayer patterns. These full color images were downsampled into Bayer CFA images. The PSNR and the normalized color difference (NCD) [36], which is an objective measure of the perceptual error between two color images, were used to measure the performance of the proposed algorithm quantitatively. The PSNR is defined in decibels as

(23)

where N represents the total number of pixels in the image, I_o is the original color image, and I_d is the demosaiced color image. The NCD is computed in the CIELAB color space by using the following equation:

(24)

where ΔE is the perceptual color error between two color vectors, defined as the Euclidean distance between them:

$$\Delta E = \sqrt{(\Delta L^{*})^{2} + (\Delta a^{*})^{2} + (\Delta b^{*})^{2}} \tag{25}$$

where ΔL*, Δa*, and Δb* are the differences in the L*, a*, and b* components. The magnitude of the pixel vector of the original image in the CIELAB space is E*, given by

$$E^{*} = \sqrt{(L^{*})^{2} + (a^{*})^{2} + (b^{*})^{2}} \tag{26}$$
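The PSNR metric described above can be computed as follows; the function name and the 255 peak value (8-bit data) are illustrative conventions, and the NCD would additionally require an RGB-to-CIELAB conversion (e.g. from a color-science library), omitted here for brevity.

```python
import numpy as np

def cpsnr(original, result, peak=255.0):
    """Color PSNR in dB over all three channels: 10*log10(peak^2/MSE),
    with the MSE averaged over every pixel and color component."""
    diff = original.astype(float) - result.astype(float)
    mse = np.mean(diff ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

As a sanity check, the worst possible 8-bit reconstruction (every value off by the full 255 range) scores exactly 0 dB.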

Experiments were also performed on 12-bit sensor data taken from a Micron 2-Megapixel image sensor (MT9D111) in an indoor environment with different lighting conditions and in an outdoor environment. The measurement of the performance of the proposed algorithm was divided into two stages. The first stage is the color demosaicing performance and the second stage is the color constancy ability. In the first stage, for comparison, six conventional algorithms, including the methods of Pei and Tam [7], Hamilton and Adams [10], Lu and Tan [15], Wu and Zhang [22], Li and Randhawa [24], and Tsai and Song [27], were implemented. In the second stage, the proposed algorithms were compared with the results of GW, Max-RGB, Shades of Gray, and Gray edge [30].

For the proposed method, an empirical study was carried out to select appropriate values of the thresholds T_flat, T_edge, T_G, and T_K. Figures 7 and 8 illustrate the performance of the proposed algorithm at different settings of the thresholds T_flat and T_edge. From Figure 7, the optimal choice of the threshold T_flat is 10: the PSNR value is the largest there, and the run time does not change much beyond 10. From Figure 8, the optimal choice of the threshold T_edge is also set to 10. In the test image set, one variance term is about 4 times bigger than the other. Figure 9 illustrates the performance of the proposed algorithm at different settings of the thresholds T_G and T_K; it does not show serious numerical differences across these settings. In this paper, T_G and T_K are set to 120 and 30, respectively, to balance the effect of the two variance terms in the weight function. The threshold in (16) and (17) is determined using Figure 5. In the experiments, 0.25 is used as this threshold.

Figure 7
figure7

Performance of the proposed algorithm at different settings of threshold T_flat: (a) PSNR and (b) run time.

Figure 8
figure8

Performance of the proposed algorithm at different settings of threshold T_edge: (a) PSNR and (b) run time.

Figure 9
figure9

Performance of the proposed algorithm at different settings of thresholds T_G and T_K: (a) threshold T_K and (b) threshold T_G.

Figure 10
figure10

Flat and edge region: (a) left and right for the G channel, (b) top and bottom for the G channel, (c) left and right for the KR channel, and (d) top and bottom for the KR channel.

Figure 11
figure11

Pattern region: (a) left and right for the G channel, (b) top and bottom for the G channel, (c) left and right for the KR channel, and (d) top and bottom for the KR channel.

Tables 1 and 2 show numerical comparisons of the color demosaicing methods. In each table, the bold font denotes the largest PSNR and smallest NCD values for each image. From Tables 1 and 2, the proposed algorithm provides improved PSNR and NCD values in most of the test images. In the case of fairly flat images, the proposed method showed PSNR and NCD results similar to those of the methods proposed by Lu and Tan, Wu and Zhang, and Li and Randhawa. However, the superiority of the proposed algorithm was clearly seen when the image contained details, textures, and fine structures (Kodak 1, 6, 13, 19, and 21). When the average PSNR was compared, the proposed algorithm achieved improvements ranging from about 0.67 dB to about 4.77 dB. The proposed method improved the average NCD by amounts ranging from about 0.158 to about 1.5. The lower NCD value indicates that the output from the proposed algorithm produced fewer color artifacts than the other methods. The proposed method consistently produced high PSNR and low NCD results for most of the test images. Although the proposed method did not provide the best PSNR and NCD results for several images, it outperformed the other algorithms overall.

Table 1 PSNR comparison of color demosaicing algorithms.
Table 2 NCD comparison of color demosaicing algorithms.

To evaluate the proposed method, the visual quality of the test images was also compared with that of some conventional methods, in addition to the numerical comparison. Figures 12 and 13 contain eight partially magnified images: the original Kodak images and the resulting images obtained by the methods proposed by Pei and Tam [7], Hamilton and Adams [10], Lu and Tan [15], Wu and Zhang [22], Li and Randhawa [24], and Tsai and Song [27], and the proposed method.

Figure 12
figure12

Partially magnified Kodak 1 image: (a) original image, (b) method discussed in [7], (c) method discussed in [10], (d) method discussed in [15], (e) method discussed in [22], (f) method discussed in [24], (g) method discussed in [27], and (h) proposed method.

Figure 13
figure13

Partially magnified Kodak 19 image: (a) original image, (b) method discussed in [7], (c) method discussed in [10], (d) method discussed in [15], (e) method discussed in [22], (f) method discussed in [24], (g) method discussed in [27], and (h) proposed method.

In Figures 12 and 13, the results for the simulated Kodak 1 and Kodak 19 images (in which detailed and fine-structured regions appear) can be seen. From this visual comparison, it is clear that most conventional methods suffered from zipper effects along the edges. The methods proposed by Wu and Zhang, Li and Randhawa, and Tsai and Song showed good results, but they still produced more color artifacts than the proposed method. These experimental results show that the proposed method performs satisfactorily not only in textured regions but also in normal edge regions.

Table 3 shows the numerical comparisons of conventional AWB methods and the proposed method. The error value was calculated in the achromatic region of the color chart image; the smaller the value, the better the algorithm [31]. As shown in Table 3, the proposed algorithm provided an improved value under all six different light conditions. When the average was compared, the proposed algorithm achieved improvements ranging from about 0.3 to about 1.5.

Table 3 The average of a checker board image under six different color temperature light sources.

For evaluating the results, numerical comparisons were important, and so was the visual quality comparison. Figures 14–16 contain six images each: the original image and the resulting images obtained by the GW, Max-RGB, Shades of Gray, and Gray Edge [30] methods, and the proposed method. The conventional AWB results were obtained by using the proposed color demosaicing method because this was the first attempt to combine the two methods. Figure 14 shows the results for the Kodak 1 test image. Most edge regions existed in the wall, so it proved difficult to get a well-white-balanced image when using the gray edge method. Also, the GW method showed a bluish image due to the dominant color problem. However, the proposed method avoided the dominant color problem because it used a predefined achromatic region, and it produced a white-balanced image. In Figures 15 and 16, the results for real images taken from the Micron 2-megapixel image sensor (MT9D111) (in which the dominant color problem appears) can be seen. There are many objects such as flowers, leaves, grass, and the ground in the image. These objects have uniform colors and are dispersed in the image, so again, it was difficult to get a well-white-balanced image. The proposed method was able to deal with cases where there were many uniform objects in an image.

Figure 14
figure14

Result of the AWB method: (a) Kodak 1 image, (b) Gray world, (c) Max-RGB, (d) Shades of Gray, (e) Gray Edge, and (f) proposed method.

Figure 15
figure15

Result of the AWB method: (a) original image, (b) Gray world, (c) Max-RGB, (d) Shades of Gray, (e) Gray Edge, and (f) proposed method.

Figure 16
figure16

Result of the AWB method: (a) original image, (b) Gray world, (c) Max-RGB, (d) Shades of Gray, (e) Gray Edge, and (f) proposed method.

To show the computational efficiency of the proposed method, the average run times are presented in Table 4. The experiment was performed on a PC equipped with a 3.2 GHz CPU and 4 GB RAM. When computational complexity was compared with that of the cascaded color demosaicing and AWB pipeline, the proposed method improved efficiency by about 10%.

Table 4 The computational complexity comparison.

4. Conclusions

A method of recovering white balanced, full color images from color sampled data was presented in this paper. In order to avoid the problems of treating these methods separately and to increase computational efficiency, a simultaneous color demosaicing and AWB scheme was proposed. Initial estimates were calculated for the AWB weight and color demosaicing by using a second-order Taylor series approximation and an SSC assumption. The gray edge assumption was used to achieve color constancy, and a predefined achromatic region was used to avoid the dominant color problem. Region adaptive color demosaicing was performed to improve performance and reduce computational complexity. The experiments verified that the proposed method effectively suppressed color artifacts while preserving the details, texture, and fine structures in the images, and produced well-white-balanced images. The results indicate that the proposed algorithm outperformed conventional algorithms in both quantitative and qualitative criteria. Moreover, the proposed algorithm was more computationally efficient than treating the AWB method and color demosaicing separately. Future research in this area includes a new approach to combining other color demosaicing and AWB methods and experiments with other sensors to improve the algorithm.

References

  1. Bayer BE: Color imaging array. US patent 3 971 065, July 1976.

  2. Forsyth D: A novel algorithm for color constancy. International Journal of Computer Vision 1990, 5(1):5-36. doi:10.1007/BF00056770

  3. Cok DR: Signal processing method and apparatus for producing interpolated chrominance values in a sampled color image signal. US patent 4 642 678, February 1987.

  4. Adams JE Jr.: Interactions between color plane interpolation and other image processing functions in electronic photography. Cameras and Systems for Electronic Photography and Scientific Imaging, February 1995, San Jose, Calif, USA, Proceedings of SPIE 2416:144-151.

  5. Adams JE, Hamilton JF Jr.: Adaptive color plane interpolation in single color electronic camera. US patent 5 506 619, April 1996.

  6. Adams JE Jr.: Design of practical color filter array interpolation algorithms for digital cameras. Real-Time Imaging II, February 1997, San Jose, Calif, USA, Proceedings of SPIE 3028:117-125.

  7. Pei S-C, Tam I-K: Effective color interpolation in CCD color filter arrays using signal correlation. IEEE Transactions on Circuits and Systems for Video Technology 2003, 13(6):503-513. doi:10.1109/TCSVT.2003.813422

  8. Hibbard RH: Apparatus and method for adaptively interpolating a full color image utilizing luminance gradients. US patent 5 382 976, January 1995.

  9. Laroche CA, Prescott MA: Apparatus and method for adaptively interpolating a full color image utilizing chrominance gradients. US patent 5 373 322, December 1994.

  10. Hamilton JF Jr., Adams JE: Adaptive color plane interpolation in single sensor color electronic camera. US patent 5 629 734, May 1997.

  11. Adams JE Jr., Hamilton JF Jr.: Adaptive color plane interpolation in single sensor color electronic camera. US patent 5 652 621, July 1997.

  12. Kimmel R: Demosaicing: image reconstruction from color CCD samples. IEEE Transactions on Image Processing 1999, 8(9):1221-1228. doi:10.1109/83.784434

  13. Hur BS, Kang MG: Edge-adaptive color interpolation algorithm for progressive scan charge-coupled device image sensors. SPIE Optical Engineering 2001, 40(12):2698-2708.

  14. Park SW, Kang MG: Color interpolation with variable color ratio considering cross-channel correlation. SPIE Optical Engineering 2004, 43(1):34-43.

  15. Lu W, Tan Y: Color filter array demosaicking: new method and performance measures. IEEE Transactions on Image Processing 2003, 12(10):1194-1210. doi:10.1109/TIP.2003.816004

  16. Ramanath R, Snyder WE: Adaptive demosaicking. Journal of Electronic Imaging 2003, 12(4):633-642. doi:10.1117/1.1606459

  17. Kim C, Kang MG: Noise insensitive high resolution color interpolation scheme considering cross-channel correlation. SPIE Optical Engineering 2005, 44(12).

  18. Trussell H, Hartwig R: Mathematics for demosaicking. IEEE Transactions on Image Processing 2002, 11:485-492. doi:10.1109/TIP.2002.999681

  19. Keren D, Osadchy M: Restoring subsampled color images. Machine Vision and Applications 1999, 11(4):197-202. doi:10.1007/s001380050102

  20. Gunturk BK, Altunbasak Y, Mersereau RM: Color plane interpolation using alternating projections. IEEE Transactions on Image Processing 2002, 11(9):997-1013. doi:10.1109/TIP.2002.801121

  21. Mukherjee J, Parthasarathi R, Goyal S: Markov random field processing for color demosaicing. Pattern Recognition Letters 2001, 22(3-4):339-351. doi:10.1016/S0167-8655(00)00129-X

  22. Wu X, Zhang N: Primary-consistent soft-decision color demosaicking for digital cameras (patent pending). IEEE Transactions on Image Processing 2004, 13(9):1263-1274. doi:10.1109/TIP.2004.832920

  23. Hirakawa K, Parks TW: Adaptive homogeneity-directed demosaicing algorithm. IEEE Transactions on Image Processing 2005, 14(3):360-368.

  24. Li JSJ, Randhawa S: High order extrapolation using Taylor series for color filter array demosaicing. Proceedings of the International Conference on Image Analysis and Recognition (ICIAR '05), September 2005, Toronto, Canada, Lecture Notes in Computer Science 3656:703-711.

  25. Zhang L, Wu X: Color demosaicking via directional linear minimum mean square-error estimation. IEEE Transactions on Image Processing 2005, 14(12):2167-2178.

  26. Chung K-H, Chan Y-H: Color demosaicing using variance of color differences. IEEE Transactions on Image Processing 2006, 15(10):2944-2955.

  27. Tsai C-Y, Song K-T: Heterogeneity-projection hard-decision color interpolation using spectral-spatial correlation. IEEE Transactions on Image Processing 2007, 16(1):78-91.

  28. Barnard K, Cardei V, Funt B: A comparison of computational color constancy algorithms—part I: methodology and experiments with synthesized data. IEEE Transactions on Image Processing 2002, 11(9):972-984. doi:10.1109/TIP.2002.802531

  29. Barnard K, Martin L, Coath A, Funt B: A comparison of computational color constancy algorithms—part II: experiments with image data. IEEE Transactions on Image Processing 2002, 11(9):985-996. doi:10.1109/TIP.2002.802529

  30. van de Weijer J, Gevers T, Gijsenij A: Edge-based color constancy. IEEE Transactions on Image Processing 2007, 16(9):2207-2214.

  31. Lin J: An automatic white balance method based on edge detection. Proceedings of the 10th IEEE International Symposium on Consumer Electronics (ISCE '06), 2006, 1-4.

  32. Land E, McCann J: Lightness and retinex theory. Journal of the Optical Society of America 1971, 61(1):1-11. doi:10.1364/JOSA.61.000001

  33. Finlayson GD, Hordley SD, Tastl I: Gamut constrained illuminant estimation. International Journal of Computer Vision 2006, 67(1):93-109. doi:10.1007/s11263-006-4100-z

  34. Finlayson GD, Hubel PH, Hordley S: Color by correlation. Proceedings of the 5th IS&T/SID Color Imaging Conference: Color Science, Systems and Applications, November 1997, Scottsdale, Ariz, USA, 6-11.

  35. Finlayson GD, Hordley SD, Hubel PM: Color by correlation: a simple, unifying framework for color constancy. IEEE Transactions on Pattern Analysis and Machine Intelligence 2001, 23(11):1209-1221. doi:10.1109/34.969113

  36. Khriji L, Cheikh FA, Gabbouj M: High-resolution digital resampling using vector rational filters. SPIE Optical Engineering 1999, 38(5):893-901.

Acknowledgments

This work was supported in part by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MEST) through the Biometrics Engineering Research Center (BERC) at Yonsei University (2009-0062990) and by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2009-0079024).

Author information

Correspondence to Moon Gi Kang.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Cite this article

Kim, C.W., Oh, H.M., Yoo, D.S. et al. Region Adaptive Color Demosaicing Algorithm Using Color Constancy. EURASIP J. Adv. Signal Process. 2010, 271078 (2010). https://doi.org/10.1155/2010/271078


Keywords

  • Color Constancy
  • Color Filter Array
  • Gray Edge
  • Full Color Image
  • Color Artifact