## 1. Introduction

Due to hardware limitations, the single-chip CCD or CMOS solid state sensor array in digital cameras does not measure a complete triplet of red, green, and blue color values for each pixel in an image. Instead, it captures a sparsely sampled image of each of the color planes with a sensor whose surface is covered with a color filter array (CFA). To produce a full RGB image from these subsampled color values, CFA demosaicking is then used to reconstruct the original colors.

The Bayer array^{1} shown in Fig. 1 is one of the most common CFA patterns used in digital still cameras. A variety of methods have been proposed for demosaicking such a pattern. The simplest is linear interpolation, which does not preserve edge information well. More advanced methods^{2,3,4} perform CFA interpolation in a manner that preserves edge details.
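To make the CFA sampling concrete, the following sketch subsamples a full RGB image into a single-channel Bayer mosaic. This is an illustrative helper, not the authors' pipeline; the function name is ours, and the GRBG phase (green/red on one row pair, blue/green on the other, matching the row alternation in Fig. 1) is an assumption.

```python
import numpy as np

def bayer_sample(rgb):
    """Subsample an (H, W, 3) RGB image with a Bayer CFA.

    Assumes a GRBG phase: even rows alternate G, R and odd rows
    alternate B, G. Returns a single-channel (H, W) mosaic.
    """
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 1]  # green sites on even rows
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 0]  # red sites on even rows
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 2]  # blue sites on odd rows
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 1]  # green sites on odd rows
    return mosaic
```

Demosaicking then has to recover the two missing channels at every site of this mosaic.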

A property of many local edge regions is the linearity of their color distributions in RGB space,^{5} a property that also holds for homogeneous regions. We capitalize on this linearity of local color distributions to produce a novel demosaicking method that yields fewer demosaicking artifacts while preserving edge details better than many current demosaicking methods.

## 2. Linearity Property of Local Color Distributions

As described in Ref. 5, because of the limited spatial resolution of the image array, the image plane area of an edge pixel will generally image portions of both regions that bound the edge. For an edge pixel that lies between two regions having distinct RGB color vectors I_{1}^{′} and I_{2}^{′}, its measured RGB color vector I_{0} should be a linear combination of the bounding region colors:

## (1)

$${I}_{0}=\alpha {I}_{1}^{\prime}+(1-\alpha){I}_{2}^{\prime},\quad 0\le \alpha \le 1.$$

That is, I_{0} should be located on the line segment between I_{1}^{′} and I_{2}^{′} in the 3-D RGB space. The linearity property also suggests that local changes in the three color components should be consistent with one another, expressed as

## (2)

$$\frac{{r}_{0}-{r}_{1}^{\prime}}{{r}_{2}^{\prime}-{r}_{0}}=\frac{{g}_{0}-{g}_{1}^{\prime}}{{g}_{2}^{\prime}-{g}_{0}}=\frac{{b}_{0}-{b}_{1}^{\prime}}{{b}_{2}^{\prime}-{b}_{0}},$$

where r_{k}^{′}, g_{k}^{′}, b_{k}^{′} (k=1,2) represent respectively the red, green, and blue values of I_{k}^{′}, and r_{0}, g_{0}, b_{0} represent respectively the red, green, and blue values of I_{0}.

In this work, only three consecutive pixels on a line in the CCD array tessellation are regarded as complying with the linearity property. For example, in Fig. 1, I_{21}, I_{22}, and I_{23}
should be linear with regard to 4-connectivity, and I_{11}, I_{22}, and I_{33}
should be linear in the sense of 8-connectivity.
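The equal-ratio property of Eq. (2) can be verified numerically: any pixel formed as a convex combination of two region colors yields the same ratio in all three channels. The two region colors below are hypothetical values chosen for illustration.

```python
import numpy as np

# Hypothetical bounding-region colors I1' and I2' for an edge pixel.
I1 = np.array([200.0, 40.0, 10.0])
I2 = np.array([20.0, 160.0, 90.0])

# Mixed edge-pixel color I0 as a convex combination, per Eq. (1).
t = 0.3
I0 = (1 - t) * I1 + t * I2

# Per-channel ratios of Eq. (2); all three equal t / (1 - t).
ratios = (I0 - I1) / (I2 - I0)
print(ratios)
```

Since I0 − I1 = t(I2 − I1) and I2 − I0 = (1 − t)(I2 − I1), each channel's ratio reduces to t/(1 − t), which is what makes the missing-component reconstruction in Sec. 3 possible.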

## 3. Linearity in Demosaicking

The linearity property shown in Eq. (2) describes expected relationships among the color components of neighboring pixels. Missing components can be determined by incorporating the linearity property into the demosaicking problem.

The green channel is first interpolated. Referring to Fig. 1, we estimate G_{34}
of a red CFA pixel by first computing α_{1}=|G_{35}−G_{33}|, α_{2}=|G_{44}−G_{24}|, β_{1}=|B_{43}−B_{25}|, and β_{2}=|B_{45}−B_{23}|. These quantities are used to determine whether pixel I_{34}
is located on a vertical, horizontal, or diagonal edge. The following estimates are then used for the missing green pixel value:

## (3)

$${G}_{34}=\{\begin{array}{ll}({G}_{33}+{G}_{35})/2& \text{if }{\alpha}_{1}=MP\\ ({G}_{24}+{G}_{44})/2& \text{if }{\alpha}_{2}=MP\\ ({G}_{24}+{G}_{33})/2& \text{if }[({\beta}_{1}=MP)\wedge (|{B}_{avg1}-{B}_{23}|\le |{B}_{avg1}-{B}_{45}|)]\\ ({G}_{35}+{G}_{44})/2& \text{if }[({\beta}_{1}=MP)\wedge (|{B}_{avg1}-{B}_{45}|<|{B}_{avg1}-{B}_{23}|)]\\ ({G}_{24}+{G}_{35})/2& \text{if }[({\beta}_{2}=MP)\wedge (|{B}_{avg2}-{B}_{25}|\le |{B}_{avg2}-{B}_{43}|)]\\ ({G}_{33}+{G}_{44})/2& \text{if }[({\beta}_{2}=MP)\wedge (|{B}_{avg2}-{B}_{43}|<|{B}_{avg2}-{B}_{25}|)]\end{array}$$

where MP=min(α_{1},α_{2},β_{1},β_{2}), B_{avg1}=(B_{25}+B_{43})/2, and B_{avg2}=(B_{23}+B_{45})/2. In Eq. (3), the last four cases correspond to diagonal edges. For example, a diagonal edge from the lower left to the upper right is addressed in the third and fourth cases. For this kind of edge, I_{34} is first grouped with either the upper-left or lower-right triangle formed by the edge in its 8-neighborhood, depending on which triangle has the more similar blue value. G_{34} is then estimated from the known green values in the selected triangle. The green channel value of a blue CFA pixel can be interpolated similarly.
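The case analysis of Eq. (3) can be sketched as a small function. This is our illustrative rendering, not the authors' code; the function name is ours, and the tie-breaking when two gradients share the minimum (axial gradients checked first, ≤ on the triangle comparison) is an assumption.

```python
def interp_green(G33, G35, G24, G44, B23, B25, B43, B45):
    """Sketch of Eq. (3): estimate G34 at a red CFA pixel by choosing
    the direction of minimum gradient (MP) among two axial green
    differences and two diagonal blue differences."""
    a1 = abs(G35 - G33)          # horizontal green gradient
    a2 = abs(G44 - G24)          # vertical green gradient
    b1 = abs(B43 - B25)          # lower-left / upper-right diagonal
    b2 = abs(B45 - B23)          # upper-left / lower-right diagonal
    mp = min(a1, a2, b1, b2)
    if a1 == mp:                 # edge runs horizontally
        return (G33 + G35) / 2
    if a2 == mp:                 # edge runs vertically
        return (G24 + G44) / 2
    if b1 == mp:                 # lower-left-to-upper-right edge
        B_avg1 = (B25 + B43) / 2
        # group I34 with the triangle whose blue value is closer
        if abs(B_avg1 - B23) <= abs(B_avg1 - B45):
            return (G24 + G33) / 2   # upper-left triangle
        return (G35 + G44) / 2       # lower-right triangle
    B_avg2 = (B23 + B45) / 2         # upper-left-to-lower-right edge
    if abs(B_avg2 - B25) <= abs(B_avg2 - B43):
        return (G24 + G35) / 2       # upper-right triangle
    return (G33 + G44) / 2           # lower-left triangle
```

For instance, when the horizontal green gradient is smallest the estimate is simply the mean of the left and right greens, while a diagonal minimum triggers the triangle-grouping test on the blue samples.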

After demosaicking the green color plane, the blue and red values of green CFA pixels are then estimated using the linearity property as follows, using I_{44} as an example:

## (4)

$${B}_{44}=\{\begin{array}{ll}({B}_{45}+TB\cdot {B}_{43})/(1+TB)& (TB\ne -1)\&(TB\ne \text{Inf})\\ ({G}_{44}/{G}_{43}){B}_{43}& (TB=-1)\&({G}_{43}\ne 0)\\ {B}_{43}& (TB=-1)\&({G}_{43}=0)\\ {B}_{43}& TB=\text{Inf}\end{array}$$

where TB=(G_{45}−G_{44})/(G_{44}−G_{43}) is the ratio of the two horizontal green differences given by the linearity property of Eq. (2). R_{44} can be determined similarly to Eq. (4) from the known green and red component values of I_{54}, I_{44}, and I_{34}.
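A minimal sketch of the Eq. (4) case analysis follows. The function name is ours, and the definition TB = (G45 − G44)/(G44 − G43) is our reconstruction from the linearity property of Eq. (2); it is not spelled out explicitly in the text.

```python
def interp_blue_at_green(B43, B45, G43, G44, G45):
    """Sketch of Eq. (4): interpolate B44 at a green CFA pixel from
    its horizontal blue neighbors, using the green-difference ratio
    TB implied by the linearity property."""
    if G44 == G43:                   # TB would be Inf: copy neighbor
        return B43
    TB = (G45 - G44) / (G44 - G43)
    if TB != -1:                     # generic linear-combination case
        return (B45 + TB * B43) / (1 + TB)
    if G43 != 0:                     # TB = -1: fall back to ratio rule
        return (G44 / G43) * B43
    return B43                       # TB = -1 and G43 = 0
```

When the three greens are themselves collinear (e.g., evenly spaced), TB = 1 and the estimate reduces to the simple average of the two blue neighbors, as expected.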

Linearity is also used to estimate the missing red values of blue CFA pixels and the missing blue values of red CFA pixels. Using I_{54} as an example, the blue value of a red CFA pixel is interpolated as

## (5)

$${B}_{54}=({B}_{54}^{H}+{B}_{54}^{V})/2,$$

where B_{54}^{H} is estimated from pixels I_{53}, I_{54}, and I_{55} with the method of Eq. (4), and B_{54}^{V} is determined similarly from pixels I_{44}, I_{54}, and I_{64}.

## 4. Results

In our experiments, all test images are sampled with the Bayer CFA pattern and then reconstructed in RGB color space using the demosaicking methods under comparison.

In Fig. 2, we display the results of the Hamilton method,^{3} the Gunturk method,^{2} bilinear interpolation, and our method on a real color image. For greater clarity, we highlight a patch in the image and zoom in for a closer view. Bilinear interpolation produces many “confetti”-type artifacts. Fringe artifacts, also known as zipper artifacts, are obvious in the results of the Gunturk method. For this image, the Hamilton method performs as well as our method, both producing far fewer artifacts.

More than 50 real images were tested in our experiments, and we found our method to be less susceptible to edge artifacts than these selected state-of-the-art demosaicking methods^{2,3,4} in most cases. At the same time, our method reasonably preserves edge details. Some of the test images and demosaicking results are available on our webpage.^{6}

## REFERENCES

*Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition*, pp. 938–945 (2004).