Open Access

Region Adaptive Color Demosaicing Algorithm Using Color Constancy

  • Chang Won Kim1,
  • Hyun Mook Oh1,
  • Du Sic Yoo1 and
  • Moon Gi Kang1
EURASIP Journal on Advances in Signal Processing 2010, 2010:271078

https://doi.org/10.1155/2010/271078

Received: 7 October 2009

Accepted: 2 February 2010

Published: 13 April 2010

Abstract

This paper proposes a novel way of combining color demosaicing and the auto white balance (AWB) method, which are important parts of image processing. The performance of AWB is generally affected by demosaicing results because most AWB algorithms are performed posterior to color demosaicing. In this paper, in order to increase the performance and efficiency of the AWB algorithm, the color constancy problem is examined during the color demosaicing step. Initial estimates of the directional luminance and chrominance values are defined for estimating edge direction and calculating the AWB gain. In order to prevent color failure in conventional edge-based AWB methods, we propose a modified edge-based AWB method that uses a predefined achromatic region. The estimation of edge direction is performed region adaptively by using the local statistics of the initial estimates of the luminance and chrominance information. Simulated and real Bayer color filter array (CFA) data are used to evaluate the performance of the proposed method. Compared to conventional methods, the proposed method shows significant improvements in terms of visual and numerical criteria.

Keywords

Color Constancy, Color Filter Array, Gray Edge, Full Color Image, Color Artifact

1. Introduction

1.1. Backgrounds

Digital cameras have become increasingly popular and have replaced film cameras in many applications. A typical digital camera acquires images using a single-chip image sensor with a color filter array (CFA) in order to reduce cost and size. The Bayer CFA [1], shown in Figure 1, is the most common array; it samples one of the primary colors at each pixel.
Figure 1

The Bayer pattern.

In order to render images captured with a single-chip image sensor as a viewable image, an image processing pipeline is required. The most important parts of this pipeline are demosaicing and the automatic white balance (AWB). Since only one color component is available at each pixel, the other two missing color components have to be estimated from the neighboring pixels; this process is referred to as CFA demosaicing or CFA interpolation. The color constancy property of the human visual system allows the perceived color to remain relatively constant at different color temperatures [2]. This capability is required for cameras to generate natural-looking images that match human perception. The goal of the AWB method is to emulate human color constancy. This is normally achieved by adjusting the image so that it looks as if it were taken under a canonical light (usually daylight).

In recent years, more sophisticated demosaicing algorithms have been investigated. Based on the assumption of smooth hue transition, demosaicing is performed using a ratio model which assumes that the ratio between luminance and chrominance at the same position is constant in the neighborhood [3]. Instead of using color ratios, many methods make use of interchannel color differences, assuming that the difference between luminance and chrominance is smooth in a small region [4-7]. Since the human visual system is sensitive to edges in images, edge-directed demosaicing methods choose the interpolation direction so as to avoid interpolating across edges, interpolating instead along any edges in the image [8-11]. Instead of choosing a single interpolation direction, edge indicator functions are used in [12-17]: indicator functions in several directions are defined as measures of edge information, and a missing pixel is determined as a weighted sum of its neighbors. Color demosaicing is also performed by reconstruction approaches [18-21]. In [18], the demosaiced image is obtained by deriving a minimum mean square error (MMSE) estimator. Regularization approaches are proposed in [19], and the color channels are reconstructed using the projections onto convex sets (POCS) technique in [20]. In [21], the demosaicing problem is formulated as a Bayesian estimation problem. Another recent approach, referred to as decision-based demosaicing, divides the procedure into an interpolation stage and a decision stage [22-27]. In the interpolation stage, horizontally and vertically interpolated images are produced. In the decision stage, soft-decision or hard-decision methods choose the pixels interpolated in the direction with fewer artifacts. The Fisher discriminant is used as the decision criterion in [22], and a homogeneity map is used to improve the reliability of the criterion in [23]. A second-order Taylor series is used to produce the directionally interpolated images in the interpolation stage [24]. The demosaicing error is minimized by a directional linear minimum mean square-error estimation technique [25]. In [26], Chung and Chan presented an adaptive demosaicing algorithm using the variances of color differences along horizontal and vertical edge directions. In the method proposed by Tsai and Song [27], however, the decision stage is performed before the interpolation stage.

Various algorithms, such as gray world (GW), perfect reflectors (Max-RGB), gamut mapping, and color by correlation, have been proposed to maintain the color constancy of an image under different light sources. A good comparison of these algorithms can be found in [28, 29]. The gray world algorithm assumes that the average surface reflectance of a typical scene is achromatic [30, 31]. The perfect reflectors algorithm is a simple and fast color constancy algorithm which estimates the light source color from the maximum response of the different color channels [32]. The gamut mapping algorithm is based on the observation that only a limited set of color values can be observed under a given illuminant [2, 33]. The basic idea of color by correlation is to precompute a correlation matrix which describes the extent to which proposed illuminants are compatible with the occurrence of image chromaticities [34, 35].
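As a concrete illustration of the gray world assumption described above, the sketch below estimates per-channel white-balance gains by forcing the channel averages to be equal. The function name and the normalization to the green channel are our own illustrative choices, not taken from the paper.

```python
import numpy as np

def gray_world_gains(img):
    """Per-channel gains under the gray-world assumption: the scene
    average of each channel should be achromatic (equal). Gains are
    normalized so the green channel is left unchanged.
    `img` is an H x W x 3 float array in R, G, B order."""
    means = img.reshape(-1, 3).mean(axis=0)
    return means[1] / means  # gain_R = mean_G/mean_R, gain_G = 1, ...

# A reddish cast: the red average is twice the green average.
img = np.dstack([np.full((4, 4), 0.8),   # R
                 np.full((4, 4), 0.4),   # G
                 np.full((4, 4), 0.2)])  # B
gains = gray_world_gains(img)
balanced = img * gains  # all channel means are now equal
```

Applying the gains equalizes the channel means, which is exactly the gray world criterion.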

1.2. Motivation

For most AWB algorithms, the R, G, and B color components are required at each pixel; thus AWB algorithms are performed after color demosaicing, and the performance of the AWB is mainly affected by the demosaicing results. In order to increase the performance and efficiency of AWB algorithms, the color constancy problem can instead be treated in the color demosaicing step. Color demosaicing that considers color constancy has several advantages. First, computational complexity is reduced. Second, image quality is improved because the AWB method can be performed using the original Bayer data, which is not degraded by the color demosaicing process. During the color demosaicing process, color information in fine detail regions can be degraded, introducing false color artifacts; these artifacts influence the AWB gain, and the AWB process can in turn emphasize them. In order to exploit these advantages and avoid these problems, a novel color demosaicing algorithm which uses color constancy is proposed in this paper. For the initial estimate of the proposed algorithm, the G channel is directionally interpolated using a Taylor series approximation, and the chrominance channel is calculated using the concept of spectral and spatial correlation (SSC) [27]. An edge-based AWB algorithm is performed using these initial estimates instead of a full color image. The AWB gain is obtained by using a predefined achromatic region and the initial estimates in the edge region to prevent color failure when more than one uniform object exists in the image. In order to improve performance and reduce computational complexity, each pixel is classified into a flat, edge, or pattern region. Based on these preclassified regions, the color demosaicing process with color constancy is performed to reduce interpolation errors using the AWB gain and local statistics of the initial estimates.

1.3. Overview

The rest of this paper is organized as follows. Section 2 provides the motivation for combining color demosaicing and AWB methods by posing the problem; the initial estimates of the green and chrominance channels are defined, the AWB gain is calculated, and a detailed explanation of the proposed method and its theoretical improvements is presented. Section 3 presents experimental results on simulated and real CFA data and some comparisons with other algorithms. The paper is concluded in Section 4.

2. A Joint Color Demosaicing Method and the AWB Algorithm

The performance of the AWB algorithm is mainly affected by demosaicing results because AWB algorithms are usually performed posterior to the color demosaicing process. In an edge-based AWB method, which is simple but shows good performance, edge extraction is an important step. Figure 2 compares the results of edge extraction on an original and a demosaiced image. The results of edge extraction differ significantly due to the degraded demosaicing result of the method proposed in [7]. During the color demosaicing process, edge information can be degraded and false color artifacts can be introduced; these factors influence the AWB gain. Figure 3 shows the AWB results of the demosaiced image. The AWB result of the image demosaiced by the method in [7] is not white balanced because the false color artifacts and Moiré effect influence the calculation of the AWB gain; moreover, these artifacts are emphasized by the AWB process. In order to avoid these problems, we consider the AWB method during the color demosaicing process, since performing color demosaicing and AWB simultaneously offers various advantages. To this end, a color demosaicing method that uses color constancy is introduced in this paper. The details of the algorithm are presented below.
Figure 2

An example of the problem of edge detection: (a) original image, (b) color demosaicing image by method [7], (c) edge detection of original image, and (d) edge detection after color demosaicing method [7].

Figure 3

An example of the effects of demosaicing artifacts on the AWB method: (a) AWB result of color demosaicing method [7] and (b) AWB result of original image.

2.1. Initial Estimate of Color Demosaicing and Color Constancy

In an AWB algorithm, full color information is required at each pixel location. However, in a Bayer CFA, only one color component exists at each pixel location. Initial estimates of luminance and chrominance information can be used to solve this problem. In order to obtain initial estimates of the G channel (similar to the luminance information), directional estimates of the green channel at every pixel position follow the Taylor series approximation [24] for simplicity. The missing green sample at an R or B position is calculated by
(1)

where the superscripts T, B, L, and R denote the top, bottom, left, and right directions. The top estimate is an initial estimate of the G value at the R or B position calculated toward the top direction; the bottom, left, and right estimates are calculated toward the bottom, left, and right directions, respectively.
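Equation (1) itself did not survive extraction. The sketch below illustrates one common second-order (Taylor-series-style) directional green estimate of the kind described above, under the assumption that each estimate combines the nearest green neighbor with a half-difference correction from the co-located R/B channel; this is an assumed form, not the paper's exact formula.

```python
import numpy as np

def directional_green(cfa, i, j):
    """Four directional estimates of the missing green sample at an
    R or B position (i, j) of a Bayer CFA. Each estimate is the nearest
    green neighbor plus a second-order correction from the co-located
    R/B samples (an assumed Hamilton-Adams-style form)."""
    return {
        'T': cfa[i - 1, j] + (cfa[i, j] - cfa[i - 2, j]) / 2,
        'B': cfa[i + 1, j] + (cfa[i, j] - cfa[i + 2, j]) / 2,
        'L': cfa[i, j - 1] + (cfa[i, j] - cfa[i, j - 2]) / 2,
        'R': cfa[i, j + 1] + (cfa[i, j] - cfa[i, j + 2]) / 2,
    }

# On a vertical ramp (sample value = row index) all four estimates
# agree with the true value at the center pixel.
cfa = np.tile(np.arange(8.0)[:, None], (1, 8))
est = directional_green(cfa, 4, 4)
```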

The concept of spectral and spatial correlation (SSC) [27], shown in Figure 4, is used for the chrominance calculation. The SSC means that the color difference between green and red or blue is constant over the neighboring pixels and along the edge directions, and that the rate of change of neighboring pixel values is constant; the SSC value, K, is given by
(2)
where an auxiliary term is used only for formula manipulation and the second channel is either R or B. When this term is zero, K is equal to the color difference, which represents the chrominance information.
Figure 4

Spectral and spatial correlation.

The directional SSC value, K, at the R or B position is calculated by
(3)

where the superscripts T, B, L, and R denote the top, bottom, left, and right directions; the top value is an initial estimate of the K value at the R or B position calculated toward the top direction, and the bottom, left, and right values are calculated toward the bottom, left, and right directions, respectively. From (1) and (3), initial estimates of the luminance and chrominance values are obtained. Using these values, the color constancy gains are calculated and the optimal demosaicing values are determined. The details of obtaining the color constancy gains are explained in the next section.

2.2. Obtaining Color Constancy Gains

For the AWB algorithm, a modified version of the edge-based method is used in this paper. Although the edge-based method is simple, its color constancy accuracy is reasonable [30]; the average edge difference in the scene is assumed to be achromatic. To prevent color failure when more than one uniform object exists in an image, a predefined achromatic region is used in this paper.

In the color difference model, the R and B channels are expressed as the sum of luminance and chrominance information [7]:
(4)
where the chrominance terms are the color differences of the R and B channels. In order to obtain white balanced R and B values, we calculate the white balanced color differences by using the gray-edge hypothesis. In order to apply the color constancy gains to color difference model-based color demosaicing, the gains are subtracted from the chrominance values:
(5)
where the gains are the color constancy gains for the R and B channels, computed from the averages of the R, G, and B edge pixels. The chrominance values of achromatic points are proportional to intensity, so in order to increase AWB performance we use intensity-dependent color constancy gains instead of constant gains. Then (5) becomes
(6)

In this subsection, the color constancy gains for the R and B channels are calculated.

For the edge-based AWB method, edge point detection is required before the AWB process. Each region is classified as flat, edge, or pattern directly in the Bayer CFA for more accurate edge direction decisions and for improving the computational efficiency of the color demosaicing process. In order to perform the proposed method, Bayer CFA data, instead of full color images, is used to classify the type of region at the R and B pixel locations.

For region classification, the horizontal and vertical edge measures at the R or B pixels are defined as
(7)
where these two parameters are used to estimate whether there are strong horizontal or vertical edges in the testing window. A region which is not classified as an edge region is considered to be either a flat region or a pattern region. A third measure is also defined as
(8)
where this parameter is used to determine whether the region is flat or pattern. Using these three values, the region at each pixel location is classified as
(9)

where T_edge and T_flat are predefined threshold values used to separate the edge, flat, and pattern regions.
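The flat/edge/pattern classification of (7)-(9) can be sketched as follows. The exact statistics and thresholds were lost in extraction, so the gradient-sum definitions and default threshold values below are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def classify_region(win, t_edge=10.0, t_flat=10.0):
    """Classify the center pixel of a local Bayer window as 'edge',
    'flat', or 'pattern' from directional activity measures, in the
    spirit of (7)-(9). delta_h/delta_v sum absolute horizontal/vertical
    differences; a large imbalance indicates a directional edge, low
    total activity indicates a flat region, and the rest is pattern.
    (Assumed definitions for illustration.)"""
    delta_h = np.abs(np.diff(win, axis=1)).sum()
    delta_v = np.abs(np.diff(win, axis=0)).sum()
    if abs(delta_h - delta_v) > t_edge:   # one direction dominates
        return 'edge'
    if delta_h + delta_v < t_flat:        # little activity overall
        return 'flat'
    return 'pattern'                      # busy in both directions
```

A uniform window classifies as flat, a step along one axis as edge, and a checkerboard as pattern.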

Although the effects of a dominant color can be alleviated by using predefined achromatic regions, the dominant color problem may still exist, for instance in texture images, because pattern edge regions with similar colors are regarded as edge points [31]. In such cases, an AWB method based only on edge detection may perform worse than the GW method, and color artifacts can be introduced in the pattern region. For these reasons, only normal edge regions are used as edge points for the AWB method in this paper.

After the edge points are detected, an estimation of the luminance and chrominance information is required to calculate the color constancy gain. At each G pixel location, we estimate the missing R and B values:
(10)
In order to estimate the missing chrominance information, a channel difference is used and defined as
(11)
Using (11), an estimated value is obtained as
(12)
where the index denotes the pixel location that satisfies the following equation:
(13)
From (12), the directional values are temporarily defined as
(14)
Using the directional values and the smoothness assumption on the color differences, the temporarily used chrominance values are expressed as
(15)
From (10) and (15), we obtain the luminance and chrominance information used to calculate the color constancy gain. Figure 5 shows a predefined achromatic region that prevents a color failure situation when more than one uniform object exists in an image. The predefined achromatic region is obtained using a color chart under different lighting conditions. Using the mean information of the Bayer CFA data, information about the dominant color of the image is obtained, and this information restricts the predefined achromatic region for an accurate color constancy gain. The average point is defined from the channel means of the Bayer CFA data. When the distance between the average point and the predefined achromatic region is smaller than a predefined threshold, the possible achromatic region becomes
(16)
where the two coordinates represent the abscissa and the ordinate, and the offsets are defined with respect to the nearest point in the predefined region from the average point. Otherwise, the possible achromatic region becomes
(17)
where the bound is a linear function of the predefined region. In this paper, the slope and intercept, obtained from Figure 5, are −1.1394 and −0.0068, respectively. Figure 6 illustrates (16) and (17).
Figure 5

Predefined achromatic region ("A" is Incandescent Lamp (CIE-A) and "TL" is Tri-Phosphor Fluorescent (CIE-F11)).

Figure 6

Restricted achromatic region: (a) average point inside the achromatic region and (b) average point outside the achromatic region.

By using the estimated luminance and chrominance information, the color constancy gains are obtained as
(18)

where the sum is taken over the edge pixels defined in (9) and is normalized by the total number of edge pixels. The obtained color constancy gains are used for the R and B channels during green channel demosaicing at the R and B pixel locations. A detailed explanation is provided in the next section.
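A minimal sketch of the gain computation in (18), assuming the gains simply scale the edge-pixel averages of R and B toward that of G under the gray-edge hypothesis; the paper's intensity-dependent gains and the achromatic-region restriction are omitted here, and the function name is our own.

```python
import numpy as np

def gray_edge_gains(r, g, b, edge_mask):
    """Color-constancy gains from the gray-edge hypothesis: the average
    of R, G, B over the detected edge pixels should be achromatic, so
    R and B are scaled toward the edge-average of G, cf. (18).
    `edge_mask` is a boolean array selecting the edge pixels found by
    the region classification."""
    r_e = r[edge_mask].mean()
    g_e = g[edge_mask].mean()
    b_e = b[edge_mask].mean()
    return g_e / r_e, g_e / b_e  # (gain_R, gain_B)

# Edge pixels with R twice G and B half of G.
mask = np.ones((4, 4), dtype=bool)
gain_r, gain_b = gray_edge_gains(np.full((4, 4), 2.0),
                                 np.ones((4, 4)),
                                 np.full((4, 4), 0.5), mask)
```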

2.3. Green Channel Demosaicing

In a Bayer CFA, the green plane is sampled at twice the rate of the red and blue planes; thus the amount of aliasing in the green plane tends to be less than that in the red and blue planes. The green plane possesses most of the spatial information of the image to be interpolated and has a great influence on the visual quality of the image. In most cases, the AWB method alters only the R and B channels; the G channel is kept unchanged because the wavelength of the green color band is close to the peak of the human luminance frequency response. For these reasons, green plane interpolation at the R and B pixel locations should be performed first.

The estimated green value is expressed as a weighted sum of the directional initial estimates, and the corresponding color difference is expressed using the result of the SSC values. We express them as
(19)
It was found that, in local regions of a natural image, the color differences between pixels are quite flat [7]; the green channel is likewise flat in homogeneous regions. Accordingly, the variances of the green values and of the color differences are used as supplementary information to determine the interpolation direction. In order to minimize the directional interpolation error and accurately estimate the direction of interpolation, the weight function is designed to be inversely proportional to these variances. Regardless of the orientation of an edge, at least one of the four estimates is an accurate estimate of the missing green color [24]. To exploit this property, the weight function is designed to pick one of the four estimates. In this paper, the weight function is defined as
(20)
where the two variance terms are those of the green values and of the color differences. These variances are calculated in the masks defined in Figures 10 and 11, based on the region determined in (9). It is difficult to determine edge directions in the pattern edge region; therefore, multiple directional rectangular windows are used for the calculation of local statistics so as to use the directional information of neighboring pixels. In contrast, a single directional rectangular window is used for the calculation of local statistics in the flat and edge regions. T_G and T_K represent predefined thresholds that control the weights of the green and color-difference variances in the weight function. From (18)-(19), white balance processing is written as
(21)

where the channel is either R or B. From (19) and (21), a fully interpolated G channel and white balanced R and B channels are obtained. Using these values, R and B channel demosaicing is performed.
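The variance-driven fusion of the four directional estimates in (19)-(20) can be sketched as follows. The paper's weight function also mixes green-channel and color-difference variances via the thresholds T_G and T_K; this sketch keeps only the inverse-variance part, as an assumption.

```python
import numpy as np

def combine_directional(estimates, variances, eps=1e-6):
    """Fuse four directional green estimates into one value with
    weights inversely proportional to the local variance of the color
    difference in each direction, cf. (19)-(20). Directions with low
    variance (reliable, along an edge) dominate the result."""
    w = {d: 1.0 / (variances[d] + eps) for d in estimates}
    total = sum(w.values())
    return sum(w[d] * estimates[d] for d in estimates) / total

# The bottom direction crosses an edge (huge variance), so its outlier
# estimate is effectively ignored.
est = {'T': 10.0, 'B': 20.0, 'L': 10.0, 'R': 10.0}
var = {'T': 0.0, 'B': 1e9, 'L': 0.0, 'R': 0.0}
g_hat = combine_directional(est, var)
```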

2.4. Red and Blue Plane Interpolation

As explained in the previous section, once white balance has been handled, the problem of recovering the red and blue channels becomes a demosaicing problem without the color constancy problem. Although the red and blue planes are more sparsely sampled than the green plane, they are easily interpolated by using the fully interpolated green plane and the color difference domains. Similar to green plane interpolation, the missing red or blue value at each location is estimated adaptively by
(22)

where the color difference is calculated directly using the fully interpolated green channel values, and the weight function is the same as that used in the green plane interpolation, except at different pixel locations. The interpolation of the remaining red values and of the blue values is performed in a similar way.
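The color-difference interpolation of Section 2.4 can be illustrated with the common diagonal-neighbor scheme below: the (R − G) difference is averaged over the four diagonal red neighbors of a blue pixel and added back to the interpolated green. The paper's exact weighting in (22) was lost in extraction, so this unweighted average is an assumed sketch.

```python
import numpy as np

def interp_red_at_blue(cfa_r, green, i, j):
    """Estimate the missing red value at a blue pixel (i, j) using the
    color-difference (R - G) plane: average the difference over the
    four diagonal red neighbors, then add back the interpolated green
    (an assumed unweighted variant of (22))."""
    diffs = [cfa_r[i + di, j + dj] - green[i + di, j + dj]
             for di in (-1, 1) for dj in (-1, 1)]
    return green[i, j] + sum(diffs) / 4.0

# Constant planes reproduce the red value exactly.
green = np.full((3, 3), 5.0)
red = np.full((3, 3), 9.0)
r_hat = interp_red_at_blue(red, green, 1, 1)
```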

3. Experimental Results

The performance of the proposed algorithm was tested with 24 simulated Kodak images, which have been widely used for evaluating demosaicing algorithms with Bayer patterns. These full color images were downsampled into Bayer CFA images. The PSNR and the normalized color difference (NCD) [36], which is an objective measure of the perceptual error between two color images, were used to measure the performance of the proposed algorithm quantitatively. The PSNR is defined in decibels as
(23)
where the mean square error is computed over the total number of pixels in the image between the original color image and the demosaiced color image. The NCD is computed in a perceptually uniform color space by using the following equation:
(24)
where the perceptual color error between two color vectors is defined as the Euclidean distance between them, given by
(25)
in terms of the differences in the three color components. The magnitude of the pixel vector of the original image, used for normalization, is given by
(26)
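The PSNR of (23) can be computed as below; the NCD of (24)-(26) additionally requires conversion to a uniform color space and is omitted from this sketch.

```python
import numpy as np

def psnr(original, demosaiced, peak=255.0):
    """PSNR in dB between the original and demosaiced color images:
    10 * log10(peak^2 / MSE), with the mean square error averaged over
    all pixels and channels, as in (23)."""
    mse = np.mean((original.astype(float) - demosaiced.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# A constant error of 16 gray levels gives an MSE of 256.
val = psnr(np.zeros((4, 4, 3)), np.full((4, 4, 3), 16.0))
```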

Experiments were also performed on 12-bit sensor data taken from a Micron 2-megapixel image sensor (MT9D111), in an indoor environment with different lighting conditions and in an outdoor environment. The measurement of the performance of the proposed algorithm was divided into two stages: the first stage is the color demosaicing performance and the second stage is the color constancy ability. In the first stage, six conventional algorithms, the methods of Pei and Tam [7], Hamilton and Adams [10], Lu and Tan [15], Wu and Zhang [22], Li and Randhawa [24], and Tsai and Song [27], were implemented for comparison. In the second stage, the proposed algorithm was compared with the results of GW, Max-RGB, Shades of Gray, and Gray Edge [30].

For the proposed method, an empirical study was carried out to select appropriate values for the thresholds T_flat, T_edge, T_G, and T_K. Figures 7 and 8 illustrate the performance of the proposed algorithm at different settings of the thresholds T_flat and T_edge. From Figure 7, the optimal choice of the threshold T_flat is 10: the PSNR value is the largest there, and the run time changes little beyond 10. From Figure 8, the optimal choice of the threshold T_edge is also 10. In the test image set, the variance of the green values is about 4 times larger than that of the color differences. Figure 9 illustrates the performance of the proposed algorithm at different settings of the thresholds T_G and T_K; it does not show serious numerical differences across these settings. In this paper, T_G and T_K are set to 120 and 30, respectively, to balance the effects of the two variance terms in the weight function. The threshold in (16) and (17) is determined using Figure 5; in the experiments, 0.25 is used.
Figure 7

Performance of the proposed algorithm at different settings of threshold T_flat: (a) PSNR and (b) run time.

Figure 8

Performance of the proposed algorithm at different settings of threshold T_edge: (a) PSNR and (b) run time.

Figure 9

Performance of the proposed algorithm at different settings of thresholds T_G and T_K: (a) threshold T_K and (b) threshold T_G.

Figure 10

Flat and edge region: (a) left and right for the G channel, (b) top and bottom for the G channel, (c) left and right for the K_R channel, and (d) top and bottom for the K_R channel.

Figure 11

Pattern region: (a) left and right for the G channel, (b) top and bottom for the G channel, (c) left and right for the K_R channel, and (d) top and bottom for the K_R channel.

Tables 1 and 2 show numerical comparisons of the color demosaicing methods; for each image, the best result is the largest PSNR and the smallest NCD value across methods. From Tables 1 and 2, the proposed algorithm provides improved PSNR and NCD values for most of the test images. For fairly flat images, the proposed method showed PSNR and NCD results similar to those of the methods proposed by Lu and Tan, Wu and Zhang, and Li and Randhawa. However, the superiority of the proposed algorithm is clearly seen when the image contains details, textures, and fine structures (Kodak 1, 6, 13, 19, and 21). The proposed algorithm improved the average PSNR by about 0.67 dB to 4.77 dB and the average NCD by about 0.158 to 1.5 compared with the conventional methods. The lower NCD value indicates that the output of the proposed algorithm contains fewer color artifacts than that of the other methods. The proposed method consistently produced high PSNR and low NCD results for most of the test images; although it did not provide the best PSNR and NCD results for several individual images, it outperformed the other algorithms on average.
Table 1

PSNR comparison of color demosaicing algorithms.

Image | [7] | [10] | [15] | [22] | [24] | [27] | Proposed method
1 | 29.139 | 26.237 | 31.374 | 31.281 | 30.959 | 30.147 | 32.773
2 | 33.942 | 31.958 | 34.431 | 35.164 | 34.625 | 32.996 | 35.111
3 | 36.174 | 33.735 | 36.169 | 37.172 | 36.740 | 33.789 | 37.533
4 | 35.209 | 32.674 | 35.060 | 35.076 | 34.537 | 32.838 | 35.775
5 | 30.638 | 27.599 | 32.420 | 32.029 | 31.532 | 30.506 | 32.772
6 | 30.162 | 27.736 | 32.124 | 33.303 | 33.604 | 32.014 | 34.702
7 | 35.758 | 33.462 | 36.182 | 36.670 | 36.349 | 32.854 | 36.866
8 | 25.955 | 24.015 | 29.597 | 29.644 | 28.863 | 28.525 | 30.643
9 | 34.863 | 32.778 | 35.917 | 37.301 | 36.767 | 33.109 | 37.550
10 | 35.450 | 32.795 | 36.159 | 36.752 | 36.093 | 32.988 | 37.204
11 | 31.713 | 29.171 | 33.284 | 33.650 | 33.367 | 31.434 | 34.647
12 | 35.046 | 33.0381 | 36.2903 | 37.4469 | 37.5518 | 32.7008 | 37.976
13 | 26.693 | 23.301 | 27.819 | 27.701 | 27.634 | 26.955 | 29.285
14 | 30.959 | 28.983 | 32.244 | 32.065 | 31.426 | 30.725 | 31.887
15 | 33.102 | 30.691 | 33.646 | 34.212 | 33.156 | 32.402 | 34.351
16 | 33.372 | 31.055 | 34.760 | 36.921 | 37.511 | 34.146 | 38.281
17 | 34.689 | 31.582 | 35.320 | 35.622 | 35.301 | 33.022 | 36.581
18 | 30.506 | 27.321 | 30.943 | 31.107 | 30.569 | 29.939 | 31.694
19 | 30.596 | 28.750 | 33.310 | 34.601 | 34.118 | 31.762 | 35.460
20 | 33.982 | 31.155 | 33.660 | 35.107 | 34.540 | 33.681 | 35.652
21 | 31.137 | 28.393 | 31.977 | 32.499 | 32.310 | 30.552 | 33.778
22 | 32.021 | 30.007 | 32.333 | 32.969 | 32.552 | 30.839 | 33.157
23 | 37.170 | 34.873 | 36.444 | 37.873 | 37.225 | 34.0518 | 37.916
24 | 28.879 | 25.819 | 28.550 | 29.437 | 28.574 | 28.794 | 30.205
Average | 32.381 | 29.880 | 33.334 | 33.983 | 31.699 | 33.579 | 34.658

Table 2

NCD comparison of color demosaicing algorithms.

Image | [7] | [10] | [15] | [22] | [24] | [27] | Proposed method
1 | 3.3642 | 4.8614 | 2.4648 | 2.5723 | 2.5855 | 2.6255 | 2.0038
2 | 2.3222 | 2.8759 | 1.9663 | 2.0716 | 2.0967 | 2.0942 | 2.0186
3 | 1.4192 | 1.9295 | 1.2932 | 1.3320 | 1.3249 | 1.3996 | 1.2512
4 | 1.7575 | 2.4308 | 1.6365 | 1.7548 | 1.8524 | 1.8983 | 1.6150
5 | 4.2188 | 5.5268 | 3.0787 | 3.4729 | 3.6039 | 3.6878 | 3.0593
6 | 2.3901 | 3.3519 | 1.8369 | 1.6963 | 1.5771 | 1.731 | 1.4103
7 | 1.6883 | 2.1963 | 1.4123 | 1.5237 | 1.5247 | 1.6259 | 1.4994
8 | 4.0698 | 5.3445 | 2.6195 | 2.6783 | 2.7872 | 2.8475 | 2.2732
9 | 1.3281 | 1.8437 | 1.1006 | 1.0799 | 1.0946 | 1.2112 | 1.0030
10 | 1.2920 | 1.8610 | 1.1320 | 1.1782 | 1.1866 | 1.3199 | 1.0792
11 | 3.0053 | 3.9284 | 2.2764 | 2.3406 | 2.3375 | 2.4373 | 2.0996
12 | 1.0234 | 1.4854 | 0.8695 | 0.8450 | 0.8295 | 0.9139 | 0.7744
13 | 4.7222 | 6.9852 | 3.9882 | 4.2398 | 4.2848 | 4.2964 | 3.2278
14 | 3.2396 | 4.1906 | 2.5953 | 2.7794 | 2.8446 | 2.9388 | 2.6094
15 | 2.2142 | 2.9346 | 2.0149 | 2.0663 | 2.2075 | 2.1600 | 2.0052
16 | 2.1635 | 2.9721 | 1.6778 | 1.4987 | 1.3683 | 1.5698 | 1.2589
17 | 2.6429 | 3.5263 | 2.2175 | 2.3417 | 2.4115 | 2.5657 | 2.1226
18 | 4.1882 | 5.7475 | 3.8332 | 3.8676 | 4.2533 | 4.1030 | 3.7728
19 | 2.4973 | 3.4972 | 1.9400 | 1.8843 | 1.9515 | 2.0023 | 1.6513
20 | 1.4892 | 2.0914 | 1.3428 | 1.3168 | 1.3752 | 1.3550 | 1.1949
21 | 2.3918 | 3.3835 | 1.9527 | 2.0638 | 2.0501 | 2.1609 | 1.6947
22 | 2.1328 | 2.9178 | 1.9437 | 2.0020 | 2.0905 | 2.1126 | 1.9271
23 | 1.2644 | 1.5910 | 1.2928 | 1.2699 | 1.3045 | 1.3572 | 1.2883
24 | 2.5183 | 3.4042 | 2.1787 | 2.2341 | 2.3844 | 2.3264 | 2.0180
Average | 2.4726 | 3.36988 | 2.0277 | 2.0879 | 2.1975 | 2.1386 | 1.8691

To evaluate the proposed method, the visual quality of the test images was compared with that of some conventional methods, in addition to the numerical comparison. Figures 12 and 13 contain eight partially magnified images: the original Kodak images and the resulting images obtained by the methods proposed by Pei and Tam [7], Hamilton and Adams [10], Lu and Tan [15], Wu and Zhang [22], Li and Randhawa [24], and Tsai and Song [27], and the proposed method.
Figure 12

Partially magnified Kodak 1 image: (a) original image, (b) method discussed in [7], (c) method discussed in [10], (d) method discussed in [15], (e) method discussed in [22], (f) method discussed in [24], (g) method discussed in [27], and (h) proposed method.

Figure 13

Partially magnified Kodak 19 image: (a) original image, (b) method discussed in [7], (c) method discussed in [10], (d) method discussed in [15], (e) method discussed in [22], (f) method discussed in [24], (g) method discussed in [27], and (h) proposed method.

Figures 12 and 13 show the results for the simulated Kodak 1 and Kodak 19 images, in which detailed and fine structured regions appear. From this visual comparison, it is clear that most conventional methods suffer from zipper effects along the edges. The methods proposed by Wu and Zhang, Li and Randhawa, and Tsai and Song show good results, but they still produce more color artifacts than the proposed method. These experimental results show that the proposed method performs satisfactorily not only in textured regions but also in normal edge regions.

Table 3 shows the numerical comparisons of conventional AWB methods and the proposed method. The error value was calculated in the achromatic region of the color chart image; the smaller the value, the better the algorithm [31]. As shown in Table 3, the proposed algorithm provided an improved value under all six light conditions and improved the average error by about 0.3 to 1.5 compared with the conventional methods.
Table 3

The average error of a checker board image under six different color temperature light sources.

Light source | Before AWB | GW | Max-RGB | SG | GE | Proposed method
A (2856 K) | 27.2081 | 1.0732 | 3.1478 | 1.0486 | 1.0083 | 0.6997
Coolwhite (4150 K) | 27.4791 | 1.0535 | 1.0803 | 0.9990 | 1.0057 | 0.7731
Daylight (6500 K) | 23.4127 | 0.9412 | 1.0961 | 0.9463 | 1.0466 | 0.7197
Horizon (2300 K) | 31.7124 | 1.2960 | 4.0441 | 1.2662 | 1.4153 | 0.8413
TL (4100 K) | 19.5384 | 0.9778 | 2.4763 | 0.9351 | 0.9490 | 0.7058
Outdoor | 25.5908 | 2.0421 | 1.7281 | 1.0986 | 0.9233 | 0.6822
Average | 25.8236 | 1.2306 | 2.2621 | 1.0490 | 1.0580 | 0.7370

For evaluating the results, numerical comparisons were important, and so was the visual quality comparison. Figures 14-16 contain six images each: the original image and the resulting images obtained by the GW method, Max-RGB, Shades of Gray, Gray Edge [30], and the proposed method. The conventional AWB results were obtained using the proposed color demosaicing method, because this is the first attempt to combine the two methods. Figure 14 shows the results for the Kodak 1 test image. Most edge regions exist in the wall, so it proved difficult to get a well-white-balanced image using the gray edge method; the GW method also produced a bluish image due to the dominant color problem. The proposed method, however, avoided the dominant color problem because it used a predefined achromatic region, and it produced a white balanced image. Figures 15 and 16 show the results for real images taken with the Micron 2-megapixel image sensor (MT9D111), in which the dominant color problem appears. There are many objects, such as flowers, leaves, grass, and the ground, in the images; these objects have uniform colors and are dispersed in the image, so again it was difficult to get a well-white-balanced image. The proposed method was able to deal with cases where there were many uniform objects in an image.
Figure 14

Result of the AWB methods: (a) Kodak 1 image, (b) Gray World, (c) Max-RGB, (d) Shades of Gray, (e) Gray Edge, and (f) the proposed method.

Figure 15

Result of the AWB methods: (a) original image, (b) Gray World, (c) Max-RGB, (d) Shades of Gray, (e) Gray Edge, and (f) the proposed method.

Figure 16

Result of the AWB methods: (a) original image, (b) Gray World, (c) Max-RGB, (d) Shades of Gray, (e) Gray Edge, and (f) the proposed method.

To show the computational efficiency of the proposed method, the average run times are presented in Table 4. The experiment was performed on a PC with a 3.2 GHz CPU and 4 GB of RAM. Compared with the cascade of the color demosaicing method followed by the AWB algorithm, the proposed method improved efficiency by about 10%.
Table 4. The computational complexity comparison.

                             Color demosaicing   Edge-based AWB   Proposed method
Average running time (sec)   3.404               0.44             3.486

4. Conclusions

A method of recovering white-balanced, full-color images from color-sampled data was presented in this paper. In order to avoid the problems of treating these methods separately and to increase computational efficiency, a simultaneous color demosaicing and AWB scheme was proposed. Initial estimates were calculated for the AWB weight and color demosaicing by using a second-order Taylor series approximation and an SSC assumption. The gray edge assumption was used to achieve color constancy, and a predefined achromatic region was used to avoid the dominant color problem. Region-adaptive color demosaicing was performed to improve performance and reduce computational complexity. The experiments verified that the proposed method effectively suppressed color artifacts while preserving the details, texture, and fine structures in the images, and produced well-white-balanced images. The results indicate that the proposed algorithm outperformed conventional algorithms in both quantitative and qualitative criteria. Moreover, the proposed algorithm was more computationally efficient than treating the AWB method and color demosaicing separately. Future research in this area includes new approaches to combining other color demosaicing and AWB methods, as well as experiments with other sensors to improve the algorithm.

Declarations

Acknowledgments

This work was supported in part by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MEST) through the Biometrics Engineering Research Center (BERC) at Yonsei University (2009-0062990) and by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2009-0079024).

Authors’ Affiliations

(1)
TMS Institute of Information Technology, Yonsei University, Seoul, South Korea

References

  1. Bayer BE: Color imaging array. US patent 3 971 065, July 1976.
  2. Forsyth D: A novel algorithm for color constancy. International Journal of Computer Vision 1990, 5(1):5-36. doi:10.1007/BF00056770
  3. Cok DR: Signal processing method and apparatus for producing interpolated chrominance values in a sampled color image signal. US patent 4 642 678, February 1987.
  4. Adams JE Jr.: Interactions between color plane interpolation and other image processing functions in electronic photography. Cameras and Systems for Electronic Photography and Scientific Imaging, February 1995, San Jose, Calif, USA, Proceedings of SPIE 2416:144-151.
  5. Adams JE, Hamilton JF Jr.: Adaptive color plane interpolation in single color electronic camera. US patent 5 506 619, April 1996.
  6. Adams JE Jr.: Design of practical color filter array interpolation algorithms for digital cameras. Real-Time Imaging II, February 1997, San Jose, Calif, USA, Proceedings of SPIE 3028:117-125.
  7. Pei S-C, Tam I-K: Effective color interpolation in CCD color filter arrays using signal correlation. IEEE Transactions on Circuits and Systems for Video Technology 2003, 13(6):503-513. doi:10.1109/TCSVT.2003.813422
  8. Hibbard RH: Apparatus and method for adaptively interpolating a full color image utilizing luminance gradients. US patent 5 382 976, January 1995.
  9. Laroche CA, Prescott MA: Apparatus and method for adaptively interpolating a full color image utilizing chrominance gradients. US patent 5 373 322, December 1994.
  10. Hamilton JF Jr., Adams JE: Adaptive color plane interpolation in single sensor color electronic camera. US patent 5 629 734, May 1997.
  11. Adams JE Jr., Hamilton JF Jr.: Adaptive color plane interpolation in single sensor color electronic camera. US patent 5 652 621, July 1997.
  12. Kimmel R: Demosaicing: image reconstruction from color CCD samples. IEEE Transactions on Image Processing 1999, 8(9):1221-1228. doi:10.1109/83.784434
  13. Hur BS, Kang MG: Edge-adaptive color interpolation algorithm for progressive scan charge-coupled device image sensors. SPIE Optical Engineering 2001, 40(12):2698-2708.
  14. Park SW, Kang MG: Color interpolation with variable color ratio considering cross-channel correlation. SPIE Optical Engineering 2004, 43(1):34-43.
  15. Lu W, Tan Y: Color filter array demosaicking: new method and performance measures. IEEE Transactions on Image Processing 2003, 12(10):1194-1210. doi:10.1109/TIP.2003.816004
  16. Ramanath R, Snyder WE: Adaptive demosaicking. Journal of Electronic Imaging 2003, 12(4):633-642. doi:10.1117/1.1606459
  17. Kim C, Kang MG: Noise insensitive high resolution color interpolation scheme considering cross-channel correlation. SPIE Optical Engineering 2005, 44(12):-15.
  18. Trussel H, Hartwig R: Mathematics for demosaicking. IEEE Transactions on Image Processing 2002, 11:485-492. doi:10.1109/TIP.2002.999681
  19. Keren D, Osadchy M: Restoring subsampled color images. Machine Vision and Applications 1999, 11(4):197-202. doi:10.1007/s001380050102
  20. Gunturk BK, Altunbasak Y, Mersereau RM: Color plane interpolation using alternating projections. IEEE Transactions on Image Processing 2002, 11(9):997-1013. doi:10.1109/TIP.2002.801121
  21. Mukherjee J, Parthasarathi R, Goyal S: Markov random field processing for color demosaicing. Pattern Recognition Letters 2001, 22(3-4):339-351. doi:10.1016/S0167-8655(00)00129-X
  22. Wu XL, Zhang N: Primary-consistent soft-decision color demosaicking for digital cameras (patent pending). IEEE Transactions on Image Processing 2004, 13(9):1263-1274. doi:10.1109/TIP.2004.832920
  23. Hirakawa K, Parks TW: Adaptive homogeneity-directed demosaicing algorithm. IEEE Transactions on Image Processing 2005, 14(3):360-368.
  24. Li JSJ, Randhawa S: High order extrapolation using Taylor series for color filter array demosaicing. Proceedings of the International Conference on Image Analysis and Recognition (ICIAR '05), September 2005, Toronto, Canada, Lecture Notes in Computer Science 3656:703-711.
  25. Zhang L, Wu X: Color demosaicking via directional linear minimum mean square-error estimation. IEEE Transactions on Image Processing 2005, 14(12):2167-2178.
  26. Chung K-H, Chan Y-H: Color demosaicing using variance of color differences. IEEE Transactions on Image Processing 2006, 15(10):2944-2955.
  27. Tsai C-Y, Song K-T: Heterogeneity-projection hard-decision color interpolation using spectral-spatial correlation. IEEE Transactions on Image Processing 2007, 16(1):78-91.
  28. Barnard K, Cardei V, Funt B: A comparison of computational color constancy algorithms—part I: methodology and experiments with synthesized data. IEEE Transactions on Image Processing 2002, 11(9):972-984. doi:10.1109/TIP.2002.802531
  29. Barnard K, Martin L, Coath A, Funt B: A comparison of computational color constancy algorithms—part II: experiments with image data. IEEE Transactions on Image Processing 2002, 11(9):985-996. doi:10.1109/TIP.2002.802529
  30. van de Weijer J, Gevers T, Gijsenij A: Edge-based color constancy. IEEE Transactions on Image Processing 2007, 16(9):2207-2214.
  31. Lin J: An automatic white balance method based on edge detection. Proceedings of the 10th IEEE International Symposium on Consumer Electronics (ISCE '06), 2006, 1-4.
  32. Land E, McCann J: Lightness and retinex theory. Journal of the Optical Society of America 1971, 61(1):1-11. doi:10.1364/JOSA.61.000001
  33. Finlayson GD, Hordley SD, Tastl I: Gamut constrained illuminant estimation. International Journal of Computer Vision 2006, 67(1):93-109. doi:10.1007/s11263-006-4100-z
  34. Finlayson GD, Hubel PH, Hordley S: Color by correlation. Proceedings of the 5th IS&T/SID Color Imaging Conference: Color Science, Systems and Applications, November 1997, Scottsdale, Ariz, USA, 6-11.
  35. Finlayson GD, Hordley SD, Hubel PM: Color by correlation: a simple, unifying framework for color constancy. IEEE Transactions on Pattern Analysis and Machine Intelligence 2001, 23(11):1209-1221. doi:10.1109/34.969113
  36. Khriji L, Cheikh FA, Gabbouj M: High-resolution digital resampling using vector rational filters. SPIE Optical Engineering 1999, 38(5):893-901.

Copyright

© ChangWon Kim et al. 2010

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
