Open Access

Color interpolation algorithm for an RWB color filter array including double-exposed white channel

  • Ki Sun Song1,
  • Chul Hee Park1,
  • Jonghyun Kim1 and
  • Moon Gi Kang1 (corresponding author)
EURASIP Journal on Advances in Signal Processing 2016, 2016:58

https://doi.org/10.1186/s13634-016-0359-6

Received: 12 January 2016

Accepted: 2 May 2016

Published: 13 May 2016

The Erratum to this article has been published in EURASIP Journal on Advances in Signal Processing 2016, 2016:73

Abstract

In this paper, we propose a color interpolation algorithm for a red-white-blue (RWB) color filter array (CFA) that uses a double-exposed white (W) channel instead of a single exposed green (G) channel. The double-exposed RWB CFA pattern, which simultaneously captures two W channels at different exposure times, improves the sensitivity and provides a solution to the rapid saturation problem of the W channel, although its spatial resolution is degraded by the lack of a suitable color interpolation algorithm. The proposed algorithm is designed and optimized for the double-exposed RWB CFA pattern. The two W channels are interpolated by using directional color difference information. The red (R) and blue (B) channels are interpolated by applying a guided filter that uses the interpolated W channel as the guidance value. The proposed method resolves the spatial resolution degradation, particularly in the horizontal direction, which is a challenging problem of the double-exposed RWB CFA pattern. Experimental results demonstrate that the proposed algorithm outperforms other color interpolation methods in terms of both objective and subjective criteria.

Keywords

Color interpolation; Double-exposed white channel; Directional color difference; Guided filter

1 Introduction

Most digital imaging devices use a color filter array (CFA) instead of three sensors and optical beam splitters to reduce the cost and size of equipment. The Bayer CFA, which consists of the primary colors red, green, and blue (R, G, and B), is the most widely used CFA pattern [1]. Recently, methods using new CFAs have been studied to overcome the limited sensitivity of the Bayer CFA under low-light conditions, where the amount of absorbed light decreases due to the RGB color filters. When white (panchromatic) pixels (W) are used, sensors can absorb more light, thereby providing an advantage in terms of sensitivity [2–7]. Despite the sensitivity improvement, various RGBW CFA patterns suffer from spatial resolution degradation because the sensor is composed of more color components than the Bayer CFA pattern. In spite of this drawback, some manufacturers have developed image sensors using the RGBW CFA pattern to improve sensitivity [2, 7], and they have attempted to overcome the degradation of spatial resolution with new color interpolation algorithms designed especially for their RGBW CFA patterns.

In order to overcome the resolution degradation of the RGBW CFA, an RWB CFA pattern that does not capture G was proposed [8, 9]. The spatial resolution of this RWB CFA is similar to that of the Bayer CFA because its pattern is identical to the Bayer pattern. However, although the spatial resolution is improved when the RWB CFA uses W rather than G, color fidelity cannot be guaranteed, because correct color information cannot be produced without G. To maximize the advantage of this CFA without color degradation, two techniques are required. First, an accurate G should be reconstructed based on the correlation among W, R, and B. Second, the image should be fused by a high dynamic range (HDR) reconstruction technique that combines the highly sensitive W information with the RGB information, as shown in Fig. 1. If these techniques are applied to images obtained through the RWB CFA (i.e., ideal HDR and G reconstruction algorithms are applied), images without color degradation can be obtained while the spatial resolution is improved as compared with images obtained through the RGBW CFA. The sensitivity is also significantly improved in comparison to images obtained from the existing Bayer CFA, as shown in Fig. 2.
Fig. 1 Image processing module of the RWB CFA

Fig. 2 Sensitivity comparison when an image is captured by using (a) the Bayer CFA and (b) the RWB CFA

However, there is a problem in obtaining W: the W channel saturates at a lower light level than the R, G, and B channels because W absorbs more light than R, G, and B. In the RGBW CFA, the saturation problem can be handled by the HDR reconstruction scheme that combines W with the luminance of RGB. On the contrary, the rapid saturation of W is an important problem in the RWB CFA. As shown in Fig. 3, rapid saturation of W occurs when W is captured together with R and B as in the existing Bayer CFA. If W is saturated, G cannot be estimated accurately. In order to prevent the saturation of W, the image can be captured with a shorter exposure time; unfortunately, this leads to another problem, namely a reduced signal-to-noise ratio (SNR) for R and B. To solve this issue, a new RWB CFA pattern that obtains two W values at different exposure times has been proposed [10]. The pattern of this CFA is designed as shown in Fig. 4. In spite of the degradation of spatial resolution along the horizontal direction, R and B are placed in odd rows and W is placed in even rows in order to respect the readout method of the complementary metal oxide semiconductor (CMOS) image sensor and to apply a dual sampling approach to the even rows. The dual sampling approach improves the sensitivity by adding a second column signal processing chain circuit and a capacitor bank to a conventional CMOS image sensor architecture without modifying the data readout method [11]. In a CMOS image sensor, the sensor data are acquired by reading each row. Considering the row-readout process and using the dual sampling approach, it is possible to resolve the saturation problem of W and improve the sensitivity by using two W values at the same time. In addition, a high SNR for R and B can be obtained.
However, conventional color interpolation (CI) algorithms cannot be applied since the pattern of this RWB CFA is different from that of the widely used Bayer CFA. Consequently, the spatial resolution is degraded particularly in the horizontal direction as compared to the conventional RWB CFA pattern. In this paper, we propose a CI algorithm that improves the spatial resolution of images for this RWB CFA pattern. In the proposed algorithm, two sampled W channels captured with different exposure times are reconstructed as high resolution W channels using directional color difference information, and the sampled R and B are interpolated using a guided filter.
Fig. 3 Two problems when using the W channel rather than the G channel

Fig. 4 RWB CFA pattern acquiring W at two exposure times [10]

The rest of the paper is organized as follows. Section 2 provides the analysis of various CFA patterns. Section 3 describes the proposed algorithm in detail. Section 4 presents the experimental results, and Section 5 concludes the paper.

2 Analysis of CFA patterns

In 1976, Bayer proposed a CFA pattern to capture an image by using one sensor [1]. The Bayer pattern consists of 2×2 unit blocks that include two G samples placed diagonally, one R sample, and one B sample, as shown in Fig. 5 a. The reason for this arrangement is that the human eye is most sensitive to G, and the luminance signal of the incident image is well represented by G. In order to reconstruct the full color channels from the sampled channels, many color interpolation (demosaicking) methods have been studied [12–18].
Fig. 5 Various CFA patterns proposed by (a) Bayer [1], (b) Yamagami et al. [2], (c) Gindele et al. [3], (d) Komatsu et al. [8, 9], and (e) Park et al. [10]

In order to overcome the limited sensitivity of the Bayer CFA pattern, new CFA patterns using W have been proposed in a number of patents and publications [2–6]. Yamagami et al. were granted a patent for a new CFA pattern comprising RGB and W, as shown in Fig. 5 b [2]. They acknowledged the abovementioned rapid saturation problem of W and dealt with it by using CMY filters rather than RGB filters, or by placing a neutral density filter on the W pixels. The drawback of this CFA is the spatial resolution degradation caused by the small proportions of the R and B channels, each of which occupies one-eighth of the sensor [19].

Gindele et al. were granted a patent for a new CFA pattern using W with RGB in 2×2 unit blocks, as shown in Fig. 5 c [3]. The sampling resolution of every channel in this pattern is equal: each channel occupies a quarter of the sensor. This improves the spatial resolution as compared to the CFA of Yamagami et al. owing to the increased sampling resolution for R and B. At the same time, the improvement of the spatial resolution is limited because the pattern lacks a high-density channel; by comparison, G in the Bayer CFA and W in the CFA proposed by Yamagami et al. are high-density channels that occupy half of the sensor.

A new CFA pattern that consists of R, B, and W without G in 2×2 unit blocks was proposed in [8, 9], as shown in Fig. 5 d. Since this pattern is identical to the Bayer pattern with G replaced by W, its spatial resolution is similar to that of the Bayer CFA. Recently, some manufacturers have investigated this CFA pattern with interest owing to its merits in terms of high resolution and high sensitivity. Further, this CFA pattern can be produced with minimal manufacturing cost because the pattern is similar to the widely used Bayer CFA [20]. In order to obtain an accurate color image, G is estimated by using a color correction matrix. The drawback of this CFA pattern is that it cannot cope with the abovementioned rapid saturation problem of W. If saturation occurs at W, the accuracy of the estimated G decreases considerably because the estimation process is conducted twice: first, since the real W value cannot be obtained, the real value of the saturated W, which is larger than the maximum value, is estimated; then, the color correction matrix is used to estimate G. The results of this two-step estimation have a larger error than those of one-step estimation because the relationship among R, G, B, and W is not accurately modeled.

Park et al. proposed an RWB CFA pattern using R, B, and a double-exposed W instead of a single exposed W to solve the rapid saturation problem [10]. Two W values are obtained at different exposure times and then fused to prevent saturation, an approach similar to HDR reconstruction. The pattern of this RWB CFA is shown in Fig. 5 e. R and B are placed alternately in odd rows, and W is placed in even rows, in order to respect the data readout method of the CMOS image sensor and apply the dual sampling approach to W. Using the dual sampling approach, two values with different exposure times can be obtained without modifying the readout method, in which the sensor data are acquired by scanning each row [11]. By applying the dual sampling approach to the even rows, double-exposed W values are obtained to cope with the rapid saturation problem and improve the sensitivity. The obtained R and B show a high SNR since they are captured with the optimal exposure time. The disadvantage of this CFA is the degradation of the spatial resolution along the horizontal direction compared with the other CFAs. Since there are no W pixels in the odd rows, W values located in the vertical or diagonal directions, rather than the horizontal direction, must unavoidably be referenced when interpolating the missing W.

In this paper, we adopt the double-exposed RWB CFA pattern proposed in [10]. Although this CFA pattern is disadvantageous with respect to the spatial resolution along the horizontal direction, we focus on improving the sensitivity and color fidelity by resolving the rapid saturation problem. In order to overcome the drawback of this CFA pattern, we propose a color interpolation method that reconstructs the four high resolution channels (long exposed W, short exposed W, R, and B) from the sampled low resolution channels. As a result, the images are simultaneously improved in terms of spatial resolution, sensitivity, and color fidelity compared with the results of the other CFAs.

3 Proposed color interpolation algorithm

In this section, we describe the proposed method in detail. First, the missing W values are interpolated by calculating the color difference values for each direction, and then the estimated W values are updated to improve the spatial resolution by using subdivided directional color difference values. Second, the R and B values are estimated by using a guided filter that uses the interpolated W channel as a guided image. Then, the estimated R and B values are modified to obtain accurately estimated values by using an edge directional residual compensation method.

3.1 Two W channels interpolation

The proposed method calculates the directional color difference values for each direction similar to [16] and conducts a weighted summation of the obtained values according to the edge direction in order to interpolate the missing pixels. The existing method considers both vertical and horizontal directions. In the proposed method, the color difference values for the vertical and diagonal directions are calculated as
$$ \begin{aligned} \widetilde\Delta_{V}\left(i,j\right)&= \left\{ \begin{array}{ll} \widetilde W_{V}\left(i,j\right) - R\left(i,j\right), \;\;\;\;\;\; \text{if}\ W \ \text{is interpolated,}\\ W\left(i,j\right) - \widetilde{R}_{V} \left(i,j\right), \;\;\;\;\;\; \text{if} \ R \ \text{is interpolated,} \\ \end{array} \right.\\ \widetilde\Delta_{D1,\,D2}\left(i,j\right)&= \left\{ \begin{array}{ll} \widetilde W_{D1,\,D2}\left(i,j\right) - R\left(i,j\right), \;\;\;\;\;\; \text{if} \ W \ \text{is interpolated,}\\ W\left(i,j\right) - \widetilde{R}_{D1,\,D2} \left(i,j\right), \;\;\;\;\;\; \text{if} \ R \ \text{is interpolated,} \\ \end{array} \right. \end{aligned} $$
(1)
where (i,j) denotes the pixel location. \(\widetilde \Delta _{V}\) and \(\widetilde \Delta _{D1, D2}\) denote the vertical and diagonal color difference values between the white and red channels, respectively. The direction of \(D1\) runs from the upper right to the lower left, and the direction of \(D2\) runs from the upper left to the lower right. \(\widetilde W\) and \(\widetilde {R}\) represent the temporary estimated values of each channel according to the direction. They are calculated as
$${} {\small{\begin{aligned} \widetilde W_{V}\left(i,j\right)&=\frac{1}{2}\left(W\left(i-1,j\right)+W\left(i+1,j\right)\right)\\ &\quad+\frac{1}{2}R\left(i,j\right)-\frac{1}{4}\left(R\left(i-2,j\right)+R\left(i+2,j\right)\right), \\ \widetilde W_{D1}\left(i,j\right)&=\frac{1}{2}\left(W\left(i-1,j+1\right)+W\left(i+1,j-1\right)\right)\\ &\quad+\frac{1}{2}R\left(i,j\right)-\frac{1}{4}\left(R\left(i-2,j+2\right)+R\left(i+2,j-2\right)\right)\!,\\ \widetilde W_{D2}\left(i,j\right)&=\frac{1}{2}\left(W\left(i-1,j-1\right)+W\left(i+1,j+1\right)\right)\\ &\quad+\frac{1}{2}R\left(i,j\right)-\frac{1}{4}\left(R\left(i-2,j-2\right)+R\left(i+2,j+2\right)\right). \end{aligned}}} $$
(2)

The equations are similar for calculating \(\widetilde R_{V}\), \(\widetilde R_{D1}\), and \(\widetilde R_{D2}\). The reason for considering these directions rather than the horizontal direction is that it is impossible to calculate the color difference in the horizontal direction with this CFA pattern, whereas the calculation is possible in the diagonal direction.
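As an illustration, the three directional estimates of Eq. (2) can be sketched in NumPy as follows. This is a minimal per-pixel sketch under our own interface (the function name and arguments are not from the paper); W and R are assumed to be full arrays with valid samples at all referenced offsets.

```python
import numpy as np

def directional_w_estimates(W, R, i, j):
    """Temporary W estimates at an R pixel (i, j), following Eq. (2):
    the average of the two nearest W neighbors along each direction,
    plus a second-order Laplacian-style correction from the R samples."""
    w_v = 0.5 * (W[i - 1, j] + W[i + 1, j]) \
        + 0.5 * R[i, j] - 0.25 * (R[i - 2, j] + R[i + 2, j])
    w_d1 = 0.5 * (W[i - 1, j + 1] + W[i + 1, j - 1]) \
        + 0.5 * R[i, j] - 0.25 * (R[i - 2, j + 2] + R[i + 2, j - 2])
    w_d2 = 0.5 * (W[i - 1, j - 1] + W[i + 1, j + 1]) \
        + 0.5 * R[i, j] - 0.25 * (R[i - 2, j - 2] + R[i + 2, j + 2])
    return w_v, w_d1, w_d2
```

Note that the R correction terms cancel on a constant R channel, so the estimates reduce to the plain average of the W neighbors in flat regions.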

In order to apply the dual sampling approach without modifying the readout method of CMOS image sensor, R and B are placed sequentially in odd rows and W is placed in even rows. However, this CFA pattern results in a zipper artifact at the horizontal edge owing to the energy difference between R and B. To remove this, the color difference value is modified by considering both R and B,
$$ \begin{aligned} {}\widetilde\Delta_{H}\left(i,j\right)= \left\{ \begin{array}{ll} \widetilde W_{V}\left(i,j\right) - R^{hor}\left(i,j\right), \;\;\;\;\;\;\; \text{if} \ W \ \text{is interpolated,}\\ W\left(i,j\right) - \widetilde{R}^{hor}_{V}\left(i,j\right), \;\;\;\;\;\;\;\;\; \text{if} \ R \ \text{is interpolated,} \\ \end{array} \right. \end{aligned} $$
(3)
where
$$ \begin{aligned} &{R}^{hor}\left(i,j\right) = \frac{1}{2}{R} \left(i,j\right)+\frac{1}{4}\left({B}\left(i,j-1\right) + {B}\left(i,j+1\right)\right),\\ &\widetilde{R}_{V}^{hor}\left(i,j\right) = \frac{1}{2}\widetilde{R}_{V} \left(i,j\right)+\frac{1}{4}\left(\widetilde{B}_{V}\left(i,j-1\right) + \widetilde{B}_{V}\left(i,j+1\right)\right), \end{aligned} $$
(4)
where \(\widetilde {B}_{V}\) represents the temporary estimated value of the B channel. After calculating the color difference values for each direction, i.e., \(\widetilde \Delta _{V}\), \(\widetilde \Delta _{D1}\), \(\widetilde \Delta _{D2}\), and \(\widetilde \Delta _{H}\), the initial interpolated result is obtained by combining the color difference values according to the edge direction. In horizontal edge regions, \(R^{hor}\), which accounts for the energy difference between R and B, is used instead of R to prevent the zipper artifact,
$${} \begin{aligned} \widehat W^{ini}\!\left(i,j\right)&=\!\big\{w_{V}\big(R\left(i,j\right)\,+\,\widehat\Delta_{V}\!\left(i,j\right)\big) \,+\, w_{D1}\big(R\left(i,j\right)\,+\,\widehat\Delta_{D1}\left(i,j\right)\!\big)\,+\, w_{D2}\big(R\left(i,j\right)\\ &\quad\,+\,\widehat\Delta_{D2}\left(i,j\right)\big) \,+\, w_{H}\big(R^{hor}\left(i,j\right)\,+\,\widehat\Delta_{H}\left(i,j\right)\big)\big\} / \sum\limits_{\textbf{d}_{4}}\! w_{\textbf{d}_{4}},\\ \end{aligned} $$
(5)
where \(\textbf{d}_{4}=\{V,D1,D2,H\}\) and \(w_{\textbf{d}_{4}}\) represent the weight values for each direction. When calculating the horizontal term of \(\widehat W^{ini}\), \(R^{hor}+\widehat \Delta _{H}\) should be used instead of \(R+\widehat \Delta _{H}\) because \(\widetilde \Delta _{H}\) is calculated using \(R^{hor}\) and \(\widetilde{R}^{hor}_{V}\). Since the scaling of each direction is matched, there are no artifacts in the interpolated image. \(\widehat \Delta _{\textbf {d}_{4}}\) represents the directional averaging values of the directional color difference values,
$${} \begin{aligned} \widehat\Delta_{V}\left(i,j\right)& = \textbf{A}_{V}\otimes\widetilde\Delta_{V}\left(i-1:i+1,j\right),\\ \widehat\Delta_{D1}\left(i,j\right) &= \textbf{A}_{D1}\otimes\widetilde\Delta_{D1}\left(i-1:i+1,j-1:j+1\right),\\ \widehat\Delta_{D2}\left(i,j\right)& = \textbf{A}_{D2}\otimes\widetilde\Delta_{D2}\left(i-1:i+1,j-1:j+1\right),\\ \widehat\Delta_{H}\left(i,j\right)& = \textbf{A}_{H}\otimes\widetilde\Delta_{H}\left(i,j-1:j+1\right), \end{aligned} $$
(6)
where \(\otimes\) denotes element-wise matrix multiplication (often called the Hadamard product) followed by summation of the products. The directional averaging matrices \(\textbf{A}_{\textbf{d}_{4}}\) for each direction are given as
$$\begin{array}{@{}rcl@{}} &\textbf{A}_{V}&=\left[ \begin{array}{ccc} 1 & 2 & 1\end{array} \right]^{T}\cdot \frac{1}{4}, \\ &\textbf{A}_{D1}&=\left[ \begin{array}{ccc} 0 & 0 & 1 \\ 0 & 2 & 0 \\ 1 & 0 & 0 \end{array} \right]\cdot \frac{1}{4}, \\ &\textbf{A}_{D2}&=\left[ \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1 \end{array} \right]\cdot \frac{1}{4}, \\ &\textbf{A}_{H}&=\left[ \begin{array}{ccc} 1 & 2 & 1\end{array} \right]\cdot \frac{1}{4}. \end{array} $$
(7)
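The averaging of Eq. (6) with the kernels of Eq. (7) amounts to an element-wise multiply-and-sum over a local patch of color difference values. A minimal NumPy sketch (the helper name is ours; each kernel's weights sum to one):

```python
import numpy as np

# Directional averaging kernels of Eq. (7).
A_V = np.array([[1], [2], [1]]) / 4.0                    # 3x1 column (vertical)
A_D1 = np.array([[0, 0, 1], [0, 2, 0], [1, 0, 0]]) / 4.0  # anti-diagonal
A_D2 = np.array([[1, 0, 0], [0, 2, 0], [0, 0, 1]]) / 4.0  # main diagonal
A_H = np.array([[1, 2, 1]]) / 4.0                        # 1x3 row (horizontal)

def directional_average(kernel, patch):
    """Eq. (6): element-wise product of the kernel with a same-shaped
    patch of directional color difference values, then summed."""
    return float(np.sum(kernel * patch))
```

Because each kernel sums to one, a constant color-difference patch is returned unchanged, as expected for a smoothing step.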
The directional weight values \(w_{\textbf{d}_{4}}\) in Eq. (5) for each direction are calculated as
$$ \begin{aligned} w_{V}&=\left[\frac{1}{2}\left(\left| I\left(i,j\right) - I\left(i,j-1\right)\right| + \left| I\left(i,j\right) - I\left(i,j+1\right)\right| \right)\right.\\ &\quad+\left.\frac{1}{5} \sum\limits_{k=-2}^{2} \left| I\left(i+k,j-1\right) - I\left(i+k,j+1\right)\right|\right]^{-2}, \\ w_{D1}&=\left[\frac{1}{2}\left(\left| I\left(i,j\right) - I\left(i-1,j-1\right)\right| + \left| I\left(i,j\right) - I\left(i+1,j+1\right)\right|\right)\right.\\ &\quad+\left.\frac{1}{5}\sum\limits_{k=-2}^{2}\left| I\left(i-1,j-1+k\right) - I\left(i+1,j+1+k\right)\right|\right]^{-2}, \\ w_{D2}&=\left[\frac{1}{2}\left(\left| I\left(i,j\right) - I\left(i-1,j+1\right)\right| + \left| I\left(i,j\right) - I\left(i+1,j-1\right)\right| \right)\right.\\ &\quad+\left.\frac{1}{5}\sum\limits_{k=-2}^{2}\left| I\left(i-1,j+1+k\right) - I\left(i+1,j-1+k\right)\right|\right]^{-2}, \\ w_{H}&=\left[\frac{1}{2}\left(\left| I\left(i,j\right) - I\left(i-1,j\right)\right| + \left| I\left(i,j\right) - I\left(i+1,j\right)\right|\right)\right.\\ &\quad+\left.\frac{1}{5}\sum\limits_{k=-2}^{2}\left| I\left(i-1,j+k\right) - I\left(i+1,j+k\right)\right|\right]^{-2}, \end{aligned} $$
(8)

where I represents the input patterned image.
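As an example, the vertical weight \(w_{V}\) of Eq. (8) can be sketched as follows. The function name is ours, and the tiny epsilon guarding against division by zero in perfectly flat regions is our own addition; Eq. (8) itself has no such guard.

```python
import numpy as np

def weight_vertical(I, i, j):
    """Directional weight w_V of Eq. (8) at pixel (i, j) of the input
    patterned image I: the inverse square of the accumulated local
    gradient magnitudes (nearest-neighbor term plus a 5-row average)."""
    g = 0.5 * (abs(I[i, j] - I[i, j - 1]) + abs(I[i, j] - I[i, j + 1]))
    g += 0.2 * sum(abs(I[i + k, j - 1] - I[i + k, j + 1])
                   for k in range(-2, 3))
    return (g + 1e-12) ** -2  # epsilon is our assumption, not in Eq. (8)
```

The other three weights follow the same shape with the differences taken along their respective directions.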

In order to further improve the resolution, the initial result is updated similarly to [16]. After further subdividing the edge direction, the color difference values are calculated in eight directions, i.e., \(\textbf{d}_{8}=\{N, NE, E, SE, S, SW, W, NW\}\). The obtained values are again combined according to the edge direction to obtain the final interpolated result,
$$ \widehat W^{up} \left(i,j\right) = \sum\limits_{\mathbf{d}_{4}}\frac{w_{\textbf{d}_{4}}\widetilde W^{up}_{\mathbf{d}_{4}}\left(i,j\right)}{\sum w_{\mathbf{d}_{4}}}, $$
(9)
where \(\widetilde W^{up}_{\textbf {d}_{4}}\) represents the updated W values for each \(\textbf{d}_{4}\) direction. They are calculated by arbitrating the directional color difference value (\(\widehat \Delta _{\textbf {d}_{4}}\)) with the updated directional color difference value (\(\widehat \Delta ^{up}_{\textbf {d}_{4}}\)) as
$${} {\small{\begin{aligned} &\widetilde W^{up}_{V} \left(i,j\right) = R\left(i,j\right)+ \big\{w_{up}\widehat\Delta^{up}_{V}\left(i,j\right) + \left(1-w_{up}\right)\widehat\Delta_{V}\left(i,j\right)\big\},\\ &\widetilde W^{up}_{D1} \left(i,j\right) = R\left(i,j\right)+ \big\{w_{up}\widehat\Delta^{up}_{D1}\left(i,j\right) + \left(1-w_{up}\right)\widehat\Delta_{D1}\left(i,j\right)\big\},\\ &\widetilde W^{up}_{D2} \left(i,j\right) = R\left(i,j\right)+ \big\{w_{up}\widehat\Delta^{up}_{D2}\left(i,j\right) + \left(1-w_{up}\right)\widehat\Delta_{D2}\left(i,j\right)\big\},\\ &\widetilde W^{up}_{H} \left(i,j\right) = R^{hor}\left(i,j\right)+ \big\{w_{up}\widehat\Delta^{up}_{H}\left(i,j\right) + \left(1-w_{up}\right)\widehat\Delta_{H}\left(i,j\right)\big\},\\ \end{aligned}}} $$
(10)
where \(w_{up}\) is the weight value that determines the ratio of the directional color difference value to the updated directional color difference value. This weight value is set to 0.7 in our experiments. The updated directional color difference values are calculated considering the subdivided directions (\(\textbf{d}_{8}\)) as
$${} {\small{\begin{aligned} &\widehat\Delta^{up}_{V}\left(i,j\right) = w_{N}\widehat\Delta_{V}\left(i-2,j\right)+w_{S}\widehat\Delta_{V}\left(i+2,j\right),\\ &\widehat\Delta^{up}_{D1}\left(i,j\right) = w_{NE}\widehat\Delta_{D1}\left(i-2,j+2\right)+w_{SW}\widehat\Delta_{D1}\left(i+2,j-2\right),\\ &\widehat\Delta^{up}_{D2}\left(i,j\right) = w_{NW}\widehat\Delta_{D2}\left(i-2,j-2\right)+w_{SE}\widehat\Delta_{D2}\left(i+2,j+2\right),\\ &\widehat\Delta^{up}_{H}\left(i,j\right) = w_{E}\widehat\Delta_{H}\left(i,j-2\right)+w_{W}\widehat\Delta_{H}\left(i,j+2\right), \end{aligned}}} $$
(11)
where
$${} \begin{aligned} &w_{N}=\frac{1}{15}\sum\limits_{k=-4}^{0}\sum\limits_{l=-1}^{1}\frac{1}{\left|I(i+k,j+l-1)-I(i+k,j+l+1)\right|^{2}},\\ &w_{NE}=\frac{1}{15}\sum\limits_{k=-4}^{0}\sum\limits_{l=-1}^{1}\frac{1}{\left|I(i+k+l-1,j-k+1)-I(i+k+l+1,j-k-1)\right|^{2}},\\ &w_{E}=\frac{1}{15}\sum\limits_{k=-1}^{1}\sum\limits_{l=0}^{4}\frac{1}{\left|I(i+k-1,j+l)-I(i+k+1,j+l)\right|^{2}},\\ &w_{SE}=\frac{1}{15}\sum\limits_{k=0}^{4}\sum\limits_{l=-1}^{1}\frac{1}{\left|I(i+k+l+1,j+k+1)-I(i+k+l-1,j+k-1)\right|^{2}},\\ &w_{S}=\frac{1}{15}\sum\limits_{k=0}^{4}\sum\limits_{l=-1}^{1}\frac{1}{\left|I(i+k,j+l-1)-I(i+k,j+l+1)\right|^{2}},\\ &w_{SW}=\frac{1}{15}\sum\limits_{k=0}^{4}\sum\limits_{l=-1}^{1}\frac{1}{\left|I(i+k+l+1,j-k-1)-I(i+k+l-1,j-k+1)\right|^{2}},\\ &w_{W}=\frac{1}{15}\sum\limits_{k=-1}^{1}\sum\limits_{l=-4}^{0}\frac{1}{\left|I(i+k-1,j+l)-I(i+k+1,j+l)\right|^{2}},\\ &w_{NW}=\frac{1}{15}\sum\limits_{k=-4}^{0}\sum\limits_{l=-1}^{1}\frac{1}{\left|I(i+k+l-1,j+k-1)-I(i+k+l+1,j+k+1)\right|^{2}}. \end{aligned} $$
(12)
The double-exposed RWB CFA used in this paper obtains two W values that have different exposure times. Although the proposed interpolation algorithm can be applied to the short exposed W, it cannot be applied to the long exposed W because the color difference values cannot be calculated in saturated regions. In the proposed method, the edge directional weighted summation of the temporary estimated W values is used as the final interpolated value for the long exposed W. The temporary W estimates for the vertical and both diagonal directions are obtained by using Eq. (2). In horizontal edge regions, the spatial resolution is improved more by using \(\widetilde W_{V}\) than \(\widetilde W_{D1}\) or \(\widetilde W_{D2}\) because no W values are located in the horizontal direction. However, this results in a zipper artifact due to the energy difference between R and B. To remove this artifact, we propose a method to estimate the temporary W values for the horizontal edge (\(\widetilde W_{H}\)) by modifying \(\widetilde W_{V}\) similar to Eq. (4),
$$ \begin{aligned} {} \widetilde W_{H}\left(i,j\right)=&\frac{1}{2}\big(W\left(i-1,j\right)+W\left(i+1,j\right)\big)\\ &+\frac{1}{4}R\left(i,j\right)-\frac{1}{8}\big(R\left(i-2,j\right)+R\left(i+2,j\right)\big)\\ &+\frac{1}{8}B\left(i,j-1\right)-\frac{1}{16}\big(B\left(i-2,j-1\right)+B\left(i+2,j-1\right)\big)\\ &+\frac{1}{8}B\left(i,j+1\right)-\frac{1}{16}\big(B\left(i-2,j+1\right)+B\left(i+2,j+1\right)\big). \end{aligned} $$
(13)
The weight values used in the edge directional weighted summation are identical to those used for the short exposed W because they are determined over the same area,
$$ \widehat W_{L}\left(i,j\right)=\sum\limits_{\textbf{d}_{4}}\frac{w_{\textbf{d}_{4}}\widetilde W_{\textbf{d}_{4}}\left(i,j\right)}{\sum w_{\textbf{d}_{4}}}. $$
(14)

With the above mentioned interpolation method, the missing W values located at the sampled R pixels are calculated. The remaining missing W values located at the sampled B pixels are estimated in a similar manner.
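The edge-directional weighted summations of Eqs. (9) and (14) share the same blending form: a weight-normalized sum of the four directional estimates. A minimal sketch (the dictionary-based interface is our own):

```python
def combine_directions(estimates, weights):
    """Edge directional weighted summation of Eqs. (9) and (14): blend
    the directional W estimates by their directional weights, normalized
    by the weight sum. Both arguments map direction labels (e.g. 'V',
    'D1', 'D2', 'H') to scalar values at one pixel."""
    total = sum(weights.values())
    return sum(weights[d] * estimates[d] for d in weights) / total
```

When all directional estimates agree, the output equals that common value regardless of the weights, which is the expected behavior in flat regions.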

3.2 R and B channels interpolation

The proposed method interpolates the sampled R and B channels into high resolution channels through a guided filter that uses the interpolated W channel as the guidance value. The guided filter performs image filtering under the guidance of another image, so the filtered output inherits the characteristics of the guidance image. Therefore, the guided filter demonstrates high performance particularly in edge-aware smoothing, and it is utilized in many image processing areas including detail enhancement, HDR compression, image matting/feathering, dehazing, and joint upsampling [21]. The proposed algorithm exploits this property of reflecting the characteristics of the guidance value: since the interpolated W channel is a high resolution image, the sampled R and B channels can be efficiently interpolated by transferring the high resolution characteristics of the W channel,
$$ \widetilde {R}\left(i,j\right)=a\cdot \widehat W^{up}\left(i,j\right)+b, $$
(15)
where
$${} \begin{aligned} a&=\frac{\frac{1}{N_{mask}}\left(\sum\limits_{i,j\in mask}\widehat W^{up}\left(i,j\right)R\left(i,j\right)\right)-\mu_{mask}^{\widehat W^{up}}\cdot\mu_{mask}^{R}}{\left[\sigma_{mask}^{\widehat W^{up}}\right]^{2}+\varepsilon},\\ b&=\mu_{mask}^{R}-a\cdot \mu_{mask}^{\widehat W^{up}}, \end{aligned} $$
(16)
where \(\mu_{mask}\) and \(\sigma_{mask}^{\widehat W^{up}}\) denote the mean and standard deviation values over the mask of the estimated W or sampled R channel. \(N_{mask}\) is the number of pixels in the \(n\times m\) mask. The size of the mask is set to 7×7 in our experiments, and the regularization parameter \(\varepsilon\) is set to 0.001. The real values of R are required for estimating the guided filter coefficients; since the input patterned image contains the sampled real values of R, the coefficient values can be estimated accurately. At the locations of the sampled pixels, there is a difference between the real R values and the initial estimates obtained by substituting the estimated coefficients into Eq. (15),
$$ \lambda\left(i,j\right)=R\left(i,j\right)-\widetilde{R}\left(i,j\right), $$
(17)
where \(\widetilde {R}\) is the initial estimated value of R, and \(\lambda\) represents the residual, i.e., the difference between the real R value and the initial interpolated R value. Such residuals also occur at the locations of the missing pixels. Thus, the initial interpolated R values should be compensated with these residuals to improve the accuracy of the estimated R values [18]. In the proposed method, the final interpolated result of R (\(\widehat {R}\)) is obtained by using an edge directional residual compensation method as
$$ \widehat {R}\left(i,j\right)=\widetilde {R}\left(i,j\right)+\widehat\lambda \left(i,j\right), $$
(18)
where \(\widehat \lambda \) represents the edge directional residual value obtained from the residual values at the surrounding sampled pixels. Because the pixels with real values neighboring a missing pixel are located exclusively in the diagonal directions, the diagonal residual value is calculated first as
$$ {{\begin{aligned} \widehat\lambda \left(i,j\right)&=w_{NE}\lambda \left(i-1,j+1\right)+w_{SE}\lambda \left(i+1,j+1\right)\\ &\quad+w_{SW}\lambda \left(i+1,j-1\right)+w_{NW}\lambda \left(i-1,j-1\right), \end{aligned}}} $$
(19)
where the weight values for each direction are calculated by using Eq. (12). The remaining missing pixels are interpolated by combining the vertical and horizontal residual values,
$$ \begin{aligned} \widehat\lambda \left(i,j\right)&=w_{N}\lambda \left(i-1,j\right)+w_{E}\lambda \left(i,j+1\right)+w_{S}\lambda \left(i+1,j\right)\\ &\quad+w_{W}\lambda \left(i,j-1\right). \end{aligned} $$
(20)

The B channel is interpolated similarly by using the guided filter with the reconstructed W value as the guidance value.
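The two stages of this subsection, guided filter coefficient estimation (Eq. (16)) and edge directional residual compensation (Eqs. (18) and (19)), can be sketched as follows. The function names and the flat-array mask interface are our own, and normalizing the directional weights to sum to one is our assumption, since Eq. (19) uses the weights of Eq. (12) without an explicit normalization.

```python
import numpy as np

def guided_coeffs(W_hat, R_mask, eps=1e-3):
    """Guided filter coefficients a, b of Eq. (16) for one local mask.
    W_hat and R_mask hold the interpolated W (guide) and the sampled R
    values inside the mask, as flat arrays of equal length."""
    mu_w, mu_r = W_hat.mean(), R_mask.mean()
    cov = (W_hat * R_mask).mean() - mu_w * mu_r  # covariance over the mask
    a = cov / (W_hat.var() + eps)                # population variance + eps
    b = mu_r - a * mu_w
    return a, b

def compensate(R_tilde, lam, i, j, weights):
    """Eqs. (18)-(19): correct the initial estimate R_tilde at a missing
    pixel with a weighted sum of the diagonal residuals lam. `weights`
    holds (w_NE, w_SE, w_SW, w_NW); they are normalized here (assumed)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    offs = [(-1, 1), (1, 1), (1, -1), (-1, -1)]  # NE, SE, SW, NW neighbors
    lam_hat = sum(wk * lam[i + di, j + dj] for wk, (di, dj) in zip(w, offs))
    return R_tilde[i, j] + lam_hat
```

If the sampled R inside a mask is an exact affine function of the guide, the recovered slope approaches that affine slope (up to the regularizer \(\varepsilon\)), which is why the interpolated R inherits the edges of the W channel.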

4 Experimental results

First, we tested the validity of adopting the double-exposed CFA pattern. The various CFA patterns described in Section 2 were considered in the experiments and compared in terms of spatial resolution, sensitivity, and color fidelity. For this purpose, we designed experimental equipment for obtaining R, G, B, W, and double-exposed W as high resolution images, and then sampled these using each CFA pattern to obtain the patterned images. All images were captured in our test room, where outside light could be blocked. Since no commercial product supports this pattern, we built an imaging system using a single monochrome sensor and a rotating filter wheel with red, green, blue, and white (luminance) filters. Using this rotating filter wheel, the high resolution image of each channel could be captured. In our experiments, two W channels (\(W_L\) and \(W_s\)) were captured with different exposure times when using the W filter. The exposure time of each channel was determined as follows. First, the proper exposure time of the R and B channels (\(t_{RB}\)) was set to prevent the saturation of those channels. Then, the exposure times of the long exposed W \(\left(t_{W_{L}}\right)\) and short exposed W \(\left(t_{W_{s}}\right)\) were set to \(\frac{5}{6}t_{RB}\) and \(\frac{1}{6}t_{RB}\), respectively. The ratio of exposure times between \(W_L\) and \(W_s\) was determined experimentally by considering the sensitivity difference of each channel. Note that saturation never occurs in the short exposed W (\(W_s\)). In the long exposed W (\(W_L\)), the generation of saturated regions is not a concern; what matters is the brightness of the dark regions, since the brighter the dark regions, the higher the SNR of the final resulting image. If object motion or handshake occurs, the long exposed W and short exposed W could capture different images. This moving-object problem in the double-exposed W also affects the R and B channels: in the images of the R and B channels, a moving object is captured with motion blur. Research on color interpolation for scenes with moving objects is beyond the scope of this paper. In our experiments, we assumed that the object did not move while the images were captured with \(t_{RB}\).

In order to reconstruct full color images, CI algorithms suitable for each CFA pattern were applied. In our experiments, the multiscale gradients (MSG)-based CI algorithm [16] was used for the patterns shown in Fig. 5 a, d. For the patterns shown in Fig. 5 b, c, e, edge directional interpolation was utilized to generate intermediate images with quincuncial patterns [22], and then a full color image was obtained by applying the MSG-based CI algorithm. G was then estimated by using a color correction matrix in the case of the RWB CFAs, and the HDR reconstruction algorithm was applied when the image contained W.

In our experiments, a color correction matrix was used to estimate G from R, B, and W [9]. The estimation is expressed in matrix form as
$$ \textbf{X} = \Phi^{T}\textbf{Y}, $$
(21)

where X = {R, G, B}^T and Y = {R, W, B}^T. Φ is the color correction matrix whose components are determined by considering the correlation among R, G, B, and W. These components change with the color temperature and brightness. In our experiments, we determined them empirically for each experimental condition (brightness and type of illumination). By using these values, the same color can be obtained regardless of the experimental conditions.
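As an illustration, Eq. (21) amounts to one small matrix product per pixel. The sketch below assumes channels stacked along the last axis; the matrix values are hypothetical placeholders, since the actual components of Φ are calibrated per illumination condition as described above.

```python
import numpy as np

def estimate_rgb(rwb, phi_t):
    """Apply Eq. (21), X = Phi^T Y, at every pixel.

    rwb   : (H, W, 3) array holding the interpolated R, W, B channels.
    phi_t : (3, 3) matrix Phi^T mapping (R, W, B)^T to (R, G, B)^T.
    """
    h, w, _ = rwb.shape
    y = rwb.reshape(-1, 3).T        # Y: one (R, W, B)^T column per pixel
    x = phi_t @ y                   # X = Phi^T Y
    return x.T.reshape(h, w, 3)

# Hypothetical Phi^T for a single illuminant; the real components are
# calibrated per color temperature and brightness, as described in the text.
phi_t = np.array([[ 1.0,  0.0,  0.0],
                  [-0.4,  1.0, -0.4],   # G ~ W minus weighted R and B
                  [ 0.0,  0.0,  1.0]])
```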

The exposure fusion method [23] was used for the HDR reconstruction. Weight values that select the best parts of each image were calculated by multiplying a set of quality measures, and the final HDR image (L_hdr) was obtained by fusing those best parts. Three images (the interpolated short-exposed W channel, the interpolated long-exposed W channel, and a luminance image computed from the RGB values) were used to improve the sensitivity:
$$ {\small{\begin{aligned} {}L_{hdr} &= \widehat w^{hdr}_{s}\left(i,j\right)\widehat W^{up}\left(i,j\right) + \widehat w^{hdr}_{L}\left(i,j\right)\widehat W_{L}\left(i,j\right)\\ & \quad+ \widehat w^{hdr}_{lumi}\left(i,j\right)L\left(i,j\right), \end{aligned}}} $$
(22)
where \(\widehat w^{hdr}_{s}\), \(\widehat w^{hdr}_{L}\), and \(\widehat w^{hdr}_{lumi}\) represent the normalized HDR weight values for interpolated short exposed W (\(\widehat W^{up}\)), interpolated long exposed W (\(\widehat W_{L}\)), and luminance (L) value from RGB values, respectively. These values are calculated as
$$ \widehat w^{hdr}_{\textbf{s}}\left(i,j\right) = \left[ \sum\limits_{\textbf{s}}w^{hdr}_{\textbf{s}}\left(i,j\right)\right]^{-1} w^{hdr}_{\textbf{s}}\left(i,j\right), $$
(23)
where \(\textbf {s} = \left \{\widehat W^{up}, \widehat W_{L}, L\right \}\). The HDR weight values for each image \(w^{hdr}_{\textbf {s}}\) are calculated as
$$ w^{hdr}_{\textbf{s}}\left(i,j\right) = C^{\alpha}_{\textbf{s}} \times S^{\beta}_{\textbf{s}} \times E^{\gamma}_{\textbf{s}}, $$
(24)

where C_s, S_s, and E_s represent the quality measures for contrast, saturation, and well-exposedness, respectively, and α, β, and γ are the weights of each measure. A detailed description of these quality measures is presented in [23].
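The weight computation of Eqs. (22)-(24) can be sketched as follows. This is a per-pixel simplification assuming grayscale inputs in [0, 1]: the saturation measure S is only meaningful for color inputs, so it is set to 1 here, and the full method in [23] additionally blends with multiresolution pyramids to avoid seams.

```python
import numpy as np

def laplacian_abs(img):
    """Contrast measure C: |Laplacian| via the 4-neighbour kernel (edge-replicated)."""
    p = np.pad(img, 1, mode="edge")
    lap = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * img
    return np.abs(lap)

def hdr_weight(img, alpha=1.0, beta=1.0, gamma=1.0, sigma=0.2):
    """Eq. (24): w = C^alpha * S^beta * E^gamma for one grayscale input.

    E is well-exposedness (Gaussian around mid-gray 0.5); S is fixed to 1
    for single-channel inputs such as W_s, W_L, and L.
    """
    c = laplacian_abs(img)
    s = np.ones_like(img)
    e = np.exp(-0.5 * ((img - 0.5) / sigma) ** 2)
    return (c ** alpha) * (s ** beta) * (e ** gamma) + 1e-12  # avoid divide-by-zero

def fuse(images):
    """Eqs. (22)-(23): normalize the weights per pixel and blend the inputs."""
    ws = np.stack([hdr_weight(im) for im in images])
    ws /= ws.sum(axis=0, keepdims=True)         # Eq. (23)
    return (ws * np.stack(images)).sum(axis=0)  # Eq. (22)
```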

Figure 6 shows the comparison results for a test image captured using each CFA pattern. The test image simultaneously includes a bright region (left side) and a dark region (right side). The processed results show that the sensitivity is improved when W is obtained. In Fig. 6 b–f, the color checker board located on the right side is clearly visible. The average brightness of Fig. 6 e is lower than that of the other results because of the shorter exposure time used to prevent the saturation of W. If W is saturated, incorrect color information is produced due to the inaccurate estimation of G, as shown in Fig. 6 d.
Fig. 6 Comparison of CI and HDR reconstruction results and the enlarged parts obtained by using (a) Bayer CFA, (b) RGBW CFA [2], (c) RGBW CFA [3], (d) RWB CFA with saturated W [8, 9], (e) RWB CFA with unsaturated W [8, 9], and (f) double-exposed RWB CFA [10]

The enlarged parts, including letters, show the spatial resolution of each CFA pattern. Figure 6 a, e shows higher spatial resolution than the other results because the MSG-based CI algorithm is designed for the Bayer CFA. In Fig. 6 d, saturation occurs at the letters, and thus the spatial resolution is lower than that of Fig. 6 e even though the CFA patterns are similar. Figure 6 b, c shows lower spatial resolution due to the weakness of those CFA patterns. Figure 6 f shows better spatial resolution than Fig. 6 b, c. Although the double-exposed CFA pattern may suffer from low resolution along the horizontal direction due to the weakness of the CFA pattern, its spatial resolution is not the worst, and an effective CI algorithm can improve it further.

In Table 1, the performance of each CFA pattern is compared using objective metrics for the test images captured by our experimental equipment. The color peak signal-to-noise ratio (CPSNR), the brightness of each region (bright and dark), and the angular error are used to measure the spatial resolution, sensitivity, and color fidelity, respectively. In terms of CPSNR, the double-exposed RWB CFA pattern records a larger value than the RGBW CFA patterns; it should be improved further, since the Bayer CFA pattern provides a larger CPSNR value. In terms of the brightness of the bright and dark regions, the CFA patterns with W record larger values than the Bayer CFA pattern. For the RWB CFA pattern with unsaturated W, however, the brightness is smaller than that of the other CFA patterns including W, because of the shorter exposure time used to avoid the saturation of W. The angular error values show that satisfactory color fidelity is obtained with the double-exposed RWB CFA, although the Bayer CFA pattern and RGBW CFA patterns provide smaller angular errors. If an accurate G estimation method is developed, the double-exposed RWB CFA pattern will provide improved color fidelity compared with the results estimated using a color correction matrix.
Table 1 Performance comparison of various CFA patterns

CFA pattern                           Spatial resolution   Sensitivity     Color fidelity
                                      (CPSNR)              (Brightness)    (Angular error)
Bayer CFA [1]                         35.27                62.22, 5.20     1.03
RGBW CFA [2]                          19.93                94.65, 28.29    1.68
RGBW CFA [3]                          20.65                94.67, 28.47    1.54
RWB CFA with saturated W [8, 9]       17.50                80.92, 28.55    8.73
RWB CFA with unsaturated W [8, 9]     27.60                45.38, 8.34     2.55
RWB CFA with double exposed W [10]    23.49                91.67, 27.87    2.42

From the above analysis, we adopted the double-exposed CFA pattern and proposed a CI algorithm to improve the spatial resolution. Despite the spatial resolution degradation, the color fidelity was satisfactory and the sensitivity was greatly improved in comparison with the Bayer CFA pattern. If the spatial resolution is improved, it is possible to capture high-resolution, noise-free images in low-light conditions, because the sensitivity is significantly improved without color degradation.

Next, we tested the performance of the proposed algorithm. For this purpose, we obtained high-resolution R, B, and double-exposed W images using our experimental equipment, and then sampled these images using the CFA pattern proposed in [10] to obtain a patterned image. After applying the proposed method (PM) to the patterned image to reconstruct each color channel at high resolution, we compared the results with the original values. We also compared the results of the PM with those of four conventional methods (CMs): a simple color interpolation algorithm (CM1) described in [10]; the MSG-based CI algorithm with intermediate quincuncial patterns (CM2) [16, 22]; an intra-field deinterlacing algorithm (CM3) [24], which exploits the characteristic of this CFA pattern that W is sampled only in the even rows; and a method (CM4) that modifies an algorithm for the existing Bayer CFA so that it is compatible with the double-exposed RWB CFA pattern [17].

Figure 7 compares the original image with the images reconstructed by the PM and CMs. Figure 7 a shows the enlarged parts of the original image. Figure 7 b shows the results of CM1; it is clear that CM1 does not consider the edge direction at all, and its ability to improve the spatial resolution is very poor. Figure 7 c shows an improvement in spatial resolution over CM1, since edge directional interpolation is used to obtain the quincuncial patterns. Figure 7 d shows the results of CM3, with an obvious improvement over CM1 but results similar to CM2. This is because CM3 interpolates the W channel by estimating the edge direction; when estimating the edge direction, however, CM3 relies exclusively on the pixel information in the even rows. Consequently, the edge direction is falsely estimated in some areas, leading to incorrect interpolation results. With CM4, an interpolation kernel is applied to the missing pixels considering the edge direction; the results are shown in Fig. 7 e. Since the interpolation kernel considers R and B as well as W, the image is interpolated while preserving edges better than the other methods. The biggest disadvantage of this method is its poor resolution and zipper artifacts in horizontal edge regions. Figure 7 f presents the result of the PM: the spatial resolution is further improved compared with the CMs, particularly near horizontal edges.
Fig. 7 Comparison of the various CI algorithms for color images including short exposed W: (a) original image, (b) CM1, (c) CM2, (d) CM3, (e) CM4, and (f) PM

The interpolation results for the long-exposed W are shown in Fig. 8. As in Fig. 7, the PM shows higher resolution than the CMs. In particular, the spatial resolution along the diagonal direction is greatly improved, and the zipper artifacts near horizontal edges are removed compared with the results of the CMs. The reason is that the CMs interpolate the missing pixels without considering the pixels in the diagonal directions or the energy difference between R and B.
Fig. 8 Comparison of the various interpolation algorithms for the long exposed W: (a) original image, (b) CM1, (c) CM2, (d) CM3, (e) CM4, and (f) PM

The performance of the PM and CMs was evaluated on the Kodak test set and five test images captured in our laboratory. Because the Kodak test images lack W channels, both exposures of W (\(W_{s}^{gen}\) and \(W_{L}^{gen}\)) were generated artificially for the test:
$$\begin{array}{@{}rcl@{}} &W_{s}^{gen}&\left(i,j\right)=\frac{R\left(i,j\right) + G\left(i,j\right) + B\left(i,j\right)}{3}\times 0.5, \\ &W_{L}^{gen}&\left(i,j\right)=\frac{R\left(i,j\right) + G\left(i,j\right) + B\left(i,j\right)}{3}\times 1.5. \end{array} $$
(25)
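Eq. (25) amounts to scaling the RGB mean, as sketched below. Values are left unclipped, matching the equation; a real sensor would additionally saturate the long exposure at full well.

```python
import numpy as np

def generate_w(rgb):
    """Eq. (25): synthesize short/long exposed W from an RGB test image."""
    mean = rgb.mean(axis=2)     # (R + G + B) / 3 per pixel
    w_s = mean * 0.5            # short exposed W
    w_l = mean * 1.5            # long exposed W
    return w_s, w_l
```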
The difference between the original and interpolated images was measured using two metrics: the color peak signal-to-noise ratio (CPSNR) and the S-CIELAB color difference (ΔE) [12, 25]. Generally, the CPSNR is calculated over three channels (R, G, and B); in our experiments, the three-channel equation is extended to four channels, since our results have four channels (R, B, long-exposed W, and short-exposed W). The S-CIELAB ΔE was calculated by averaging the ΔE values of the two color images formed with the long-exposed W and the short-exposed W instead of G. Table 2 shows the CPSNR comparison: the PM outperformed the other methods for 24 of the 25 test images. Likewise, the S-CIELAB ΔE results demonstrate that the PM achieved the best performance for all test images. The S-CIELAB ΔE values of each result are compared in Table 3.
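The four-channel CPSNR used here is a direct extension of the usual three-channel definition; a minimal sketch, assuming 8-bit data with the four channels stacked along the last axis:

```python
import numpy as np

def cpsnr4(ref, out, peak=255.0):
    """CPSNR with the MSE averaged over all pixels of the four channels
    (R, B, long exposed W, short exposed W), stacked as (H, W, 4)."""
    diff = ref.astype(np.float64) - out.astype(np.float64)
    mse = np.mean(diff ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```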
Table 2 Comparison of objective evaluation: CPSNR

Image No.          CM1      CM2      CM3      CM4      PM
Captured img 1     33.714   35.178   36.533   38.175   40.375
Captured img 2     32.685   34.580   35.064   37.491   40.175
Captured img 3     35.769   37.226   37.502   39.528   41.016
Captured img 4     29.218   31.417   32.153   37.481   37.414
Captured img 5     33.483   34.825   35.265   37.959   40.684
Kodak img 1        29.936   31.758   32.808   35.700   38.715
Kodak img 2        36.032   36.771   38.724   40.522   41.661
Kodak img 3        36.642   37.908   38.047   40.004   40.191
Kodak img 4        36.779   37.630   39.276   40.312   42.234
Kodak img 5        30.500   33.030   33.986   35.877   38.737
Kodak img 6        31.111   33.064   33.448   35.060   38.433
Kodak img 7        36.151   37.751   39.922   40.267   42.658
Kodak img 8        27.638   29.162   31.956   34.783   36.001
Kodak img 9        35.915   37.036   37.353   39.148   41.917
Kodak img 10       35.887   37.129   39.419   40.565   42.869
Kodak img 11       32.835   34.784   35.867   36.112   39.333
Kodak img 12       35.843   36.909   38.766   41.579   43.724
Kodak img 13       27.956   30.616   29.897   32.328   35.165
Kodak img 14       32.743   34.529   35.092   36.485   39.065
Kodak img 15       34.754   35.374   37.196   39.085   41.519
Kodak img 16       34.109   35.893   35.877   38.302   40.759
Kodak img 17       35.312   37.017   38.819   40.377   43.665
Kodak img 18       32.058   34.472   34.606   37.232   40.675
Kodak img 19       31.790   32.910   34.650   37.244   40.070
Kodak img 20       34.083   36.609   36.251   39.276   42.327
Avg.               33.318   34.943   35.939   38.032   40.335

Boldface indicates the best results in each metric among PM and CMs

Table 3 Comparison of objective evaluation: S-CIELAB Δ E

Image No.          CM1      CM2      CM3      CM4      PM
Captured img 1     13.500   6.854    9.068    6.792    3.946
Captured img 2     16.642   8.882    13.194   9.884    5.903
Captured img 3     9.466    5.324    6.804    4.580    3.251
Captured img 4     18.804   13.493   15.592   13.947   7.839
Captured img 5     14.044   7.825    9.947    7.468    4.553
Kodak img 1        17.080   11.926   13.764   11.543   7.604
Kodak img 2        8.785    6.103    6.895    5.432    3.578
Kodak img 3        2.754    1.508    2.167    1.692    1.054
Kodak img 4        3.285    1.864    2.422    1.822    1.314
Kodak img 5        7.648    3.307    4.977    3.636    1.862
Kodak img 6        7.251    3.802    5.804    4.293    2.644
Kodak img 7        3.184    1.842    2.204    1.819    1.112
Kodak img 8        10.548   6.666    6.587    4.930    2.712
Kodak img 9        3.444    1.976    2.378    1.768    1.105
Kodak img 10       3.390    1.797    2.291    1.674    1.031
Kodak img 11       5.554    3.010    4.165    3.177    1.808
Kodak img 12       3.340    2.053    2.545    2.010    1.228
Kodak img 13       11.001   4.834    8.793    6.670    3.494
Kodak img 14       5.956    2.982    4.560    3.424    2.226
Kodak img 15       3.427    2.240    2.447    1.967    1.334
Kodak img 16       5.026    2.704    4.168    3.120    1.873
Kodak img 17       3.619    1.809    2.541    1.979    1.143
Kodak img 18       6.098    2.956    4.540    3.343    2.193
Kodak img 19       5.648    3.426    3.719    2.810    1.547
Kodak img 20       3.586    1.960    2.774    2.154    1.283
Avg.               5.409    2.921    3.953    2.972    1.762

Boldface indicates the best results in each metric among PM and CMs

5 Conclusions

In this paper, we proposed a color interpolation algorithm for the RWB CFA pattern that acquires W at two different exposure times. The proposed algorithm interpolates the two W channels using directional color difference information and then reconstructs the R and B channels with a guided filter. The proposed method outperformed conventional methods, as confirmed by objective metrics and comparisons using actual images. In future research, we intend to study G channel estimation and image fusion based on HDR reconstruction algorithms, and to apply these methods to the high-resolution R, W, and B channels generated by the proposed algorithm. With these methods, an image with greatly improved sensitivity compared to the existing Bayer patterned image can be obtained.


Acknowledgements

This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT and Future Planning (No. 2015R1A2A1A14000912).

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
School of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea

References

  1. BE Bayer, Color imaging array. U.S. Patent 3,971,065, Jul. 20 (1976).
  2. T Yamagami, T Sasaki, A Suga, Image signal processing apparatus having a color filter with offset luminance filter elements. U.S. Patent 5,323,233, Jun. 21 (1994).
  3. EB Gindele, AC Gallagher, Sparsely sampled image sensing device with color and luminance photosites. U.S. Patent 6,476,865, Nov. 5 (2002).
  4. M Kumar, EO Morales, JE Adams, W Hao, in Image Processing (ICIP), 2009 16th IEEE International Conference On. New digital camera sensor architecture for low light imaging (IEEE, Cairo, Egypt, 2009), pp. 2681–2684.
  5. J Wang, C Zhang, P Hao, in Image Processing (ICIP), 2011 18th IEEE International Conference On. New color filter arrays of high light sensitivity and high demosaicking performance (IEEE, Brussels, Belgium, 2011), pp. 3153–3156.
  6. JT Compton, JF Hamilton, Image sensor with improved light sensitivity. U.S. Patent 8,139,130, Mar. 20 (2012).
  7. ON Semiconductor, TRUESENSE Sparse Color Filter Pattern. Application Note AND9180/D, Sep. (2014). http://www.onsemi.com/pub_link/Collateral/AND9180-D.PDF.
  8. T Komatsu, T Saito, in Proc. SPIE, Sensors and Camera Systems for Scientific, Industrial, and Digital Photography Applications IV, 5017. Color image acquisition method using color filter arrays occupying overlapped color spaces (SPIE, Santa Clara, CA, USA, 2003), pp. 274–285.
  9. M Mlinar, B Keelan, Imaging systems with clear filter pixels. U.S. Patent App. 13/736,768, Sep. 19 (2013).
  10. J Park, K Choe, J Cheon, G Han, A pseudo multiple capture CMOS image sensor with RWB color filter array. J. Semicond. Technol. Sci. 6, 270–274 (2006).
  11. O Yadid-Pecht, ER Fossum, Wide intrascene dynamic range CMOS APS using dual sampling. IEEE Trans. Electron Devices 44(10), 1721–1723 (1997).
  12. W Lu, Y-P Tan, Color filter array demosaicking: new method and performance measures. IEEE Trans. Image Process. 12(10), 1194–1210 (2003).
  13. BK Gunturk, J Glotzbach, Y Altunbasak, RW Schafer, RM Mersereau, Demosaicking: color filter array interpolation. IEEE Signal Process. Mag. 22(1), 44–54 (2005).
  14. X Li, B Gunturk, L Zhang, in Proc. SPIE, Visual Communications and Image Processing 2008, 6822. Image demosaicing: a systematic survey (SPIE, San Jose, CA, USA, 2008), pp. 68221–6822115.
  15. D Menon, G Calvagno, Color image demosaicking: an overview. Signal Process. Image Commun. 26(8-9), 518–533 (2011).
  16. I Pekkucuksen, Y Altunbasak, Multiscale gradients-based color filter array interpolation. IEEE Trans. Image Process. 22(1), 157–165 (2013).
  17. S-L Chen, E-D Ma, VLSI implementation of an adaptive edge-enhanced color interpolation processor for real-time video applications. IEEE Trans. Circuits Syst. Video Technol. 24(11), 1982–1991 (2014).
  18. L Wang, G Jeon, Bayer pattern CFA demosaicking based on multi-directional weighted interpolation and guided filter. IEEE Signal Process. Lett. 22(11), 2083–2087 (2015).
  19. Y Li, P Hao, Z Lin, Color Filter Arrays: A Design Methodology. Research report RR-08-03, Dept. of Computer Science, Queen Mary, University of London (2008). https://core.ac.uk/download/files/145/21174373.pdf.
  20. Image Sensors World, Samsung Announces 8MP RWB ISOCELL Sensor (2015). http://image-sensors-world.blogspot.kr/2015/03/samsung-announces-8mp-rwb-isocell-sensor.html.
  21. K He, J Sun, X Tang, Guided image filtering. IEEE Trans. Pattern Anal. Mach. Intell. 35(6), 1397–1409 (2013).
  22. SW Park, MG Kang, Channel correlated refinement for color interpolation with quincuncial patterns containing the white channel. Digital Signal Process. 23(5), 1363–1389 (2013).
  23. T Mertens, J Kautz, F Van Reeth, in Computer Graphics and Applications, 2007. PG '07. 15th Pacific Conference On. Exposure fusion (IEEE, Maui, Hawaii, USA, 2007), pp. 382–390.
  24. MK Park, MG Kang, K Nam, SG Oh, New edge dependent deinterlacing algorithm based on horizontal edge pattern. IEEE Trans. Consum. Electron. 49(4), 1508–1512 (2003).
  25. RWG Hunt, MR Pointer, Measuring Colour, 4th edn. (John Wiley & Sons, Chichester, UK, 2011).

Copyright

© Song et al. 2016
