## 1. Introduction

The resolution of a depth image captured by a three-dimensional (3-D) depth-sensing camera is generally lower than that of its corresponding color image in 3-D video applications. In addition, an advanced 3-D TV system transmits low-resolution depth images to make the best use of the transmission bandwidth. For practical purposes, an efficient depth image upsampling algorithm is therefore necessary to match the resolution of the depth image to that of the color image.

There are a number of depth image upsamplers. Bilinear and bicubic interpolators can be applied directly to depth image upsampling. These interpolators produce good results in smooth regions, but introduce staircase-shaped noise along edges. To remove the staircase noise, depth image upsamplers based on weight functions have been introduced.^{1–3} These methods suppress staircase noise well, but suffer from the texture copy problem caused by distinct color patterns. In this paper, we propose a depth image upsampling algorithm that overcomes both of these problems, namely staircase noise and the texture copy problem. The proposed algorithm consists of an effective weight selection method and a new color weight function.

## 2. Joint Bilateral Filter

To increase the sampling rate of a depth image, Kopf et al.^{1} developed the joint bilateral upsampler (JBU) by extending the idea of the joint bilateral filter based on weight functions. Suppose that there are a low-resolution depth image ${D}^{\mathrm{l}}$ and a high-resolution color image ${I}^{\mathrm{h}}$. Let $({p}_{\downarrow},{q}_{\downarrow})$ and $(p,q)$ denote pixel coordinates in the low-resolution and high-resolution images, respectively, where $p$ is the center of the local neighborhood $N(p)$ and $q$ is a neighboring pixel of $p$.

The depth value ${D}_{p}^{\mathrm{h}}$ at $p$ in an upsampled depth image ${D}^{\mathrm{h}}$ is calculated by a normalized weighted sum, which is expressed as

## (1)

$${D}_{p}^{\mathrm{h}}=\frac{\sum _{q\in N(p)}{f}_{\mathrm{s}}({p}_{\downarrow},{q}_{\downarrow})\cdot {f}_{\mathrm{r}}(p,q)\cdot {D}_{{q}_{\downarrow}}^{\mathrm{l}}}{\sum _{q\in N(p)}{f}_{\mathrm{s}}({p}_{\downarrow},{q}_{\downarrow})\cdot {f}_{\mathrm{r}}(p,q)},$$

## (2)

$${f}_{\mathrm{s}}({p}_{\downarrow},{q}_{\downarrow})=\mathrm{exp}\left(\frac{-{\Vert {p}_{\downarrow}-{q}_{\downarrow}\Vert}^{2}}{2{\delta}_{\mathrm{s}}^{2}}\right),\phantom{\rule{0ex}{0ex}}{f}_{\mathrm{r}}(p,q)=\mathrm{exp}\left(\frac{-{\Vert {I}_{p}^{\mathrm{h}}-{I}_{q}^{\mathrm{h}}\Vert}^{2}}{2{\delta}_{\mathrm{r}}^{2}}\right).$$
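As a concrete illustration, the normalized weighted sum of Eqs. (1) and (2) can be sketched as follows. This is a minimal sketch assuming a grayscale guide image with intensities in $[0,1]$; the window radius and the values of `delta_s` and `delta_r` are illustrative assumptions, not the settings used in Ref. 1.

```python
import numpy as np

def joint_bilateral_upsample(depth_lo, color_hi, factor, delta_s=0.5, delta_r=0.1, radius=1):
    """Sketch of Eqs. (1) and (2): each high-resolution depth value is a
    normalized weighted sum of low-resolution depth samples, weighted by a
    spatial Gaussian f_s on low-resolution coordinates and a range Gaussian
    f_r on high-resolution color differences."""
    h, w = color_hi.shape
    hl, wl = depth_lo.shape
    depth_hi = np.zeros((h, w))
    for py in range(h):
        for px in range(w):
            pd_y, pd_x = py / factor, px / factor  # p_down: low-res position of p
            num = den = 0.0
            # neighborhood N(p): low-res samples around p_down
            for qd_y in range(max(0, int(pd_y) - radius), min(hl, int(pd_y) + radius + 1)):
                for qd_x in range(max(0, int(pd_x) - radius), min(wl, int(pd_x) + radius + 1)):
                    # q: high-res pixel corresponding to the low-res sample q_down
                    qy = min(h - 1, int(round(qd_y * factor)))
                    qx = min(w - 1, int(round(qd_x * factor)))
                    f_s = np.exp(-((pd_y - qd_y) ** 2 + (pd_x - qd_x) ** 2) / (2 * delta_s ** 2))
                    # delta_r assumes color intensities normalized to [0, 1]
                    f_r = np.exp(-(color_hi[py, px] - color_hi[qy, qx]) ** 2 / (2 * delta_r ** 2))
                    num += f_s * f_r * depth_lo[qd_y, qd_x]
                    den += f_s * f_r
            depth_hi[py, px] = num / den
    return depth_hi
```

Because the weights are normalized, smooth regions with uniform color reproduce the low-resolution depth values exactly, while depth discontinuities follow the color edges of the guide image.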

## 3. Proposed Depth Image Upsampler

To upsample a depth image while keeping edges sharp and free of staircase noise, previous works^{1–3} define a color weight function and take advantage of color information under the assumption that depth and color images have similar characteristics. However, the color weight occasionally produces unexpected depth values in smooth areas when the color image contains a distinct pattern. This phenomenon is called the texture copy problem. To reduce it, we propose a weight selection algorithm that chooses between two range weight functions for the cost calculation: a color range weight function and a depth range weight function. In addition, we define the color range weight function based on the Laplacian distribution to utilize color information more effectively.

Figure 1 shows the overall flow of the proposed upsampler. First, a low-resolution depth image ${D}^{\mathrm{l}}$ is upsampled by bicubic interpolation, and the resulting high-resolution depth image is denoted by ${B}^{\mathrm{h}}$. Then, every pixel in the upsampled depth image ${B}^{\mathrm{h}}$ is tested by the weight selection algorithm to determine whether it belongs to a distinct color pattern. Next, the cost values between the center pixel and its surrounding pixels are calculated based on spatial and range weight functions. Finally, when the candidate pixel at ${q}_{x}$ has the minimum cost among the nine candidates, the depth value ${D}_{p}^{\mathrm{h}}$ at $p$ is replaced by the depth value ${B}_{{q}_{x}}^{\mathrm{h}}$ at ${q}_{x}$.

After obtaining the upsampled depth image ${B}^{\mathrm{h}}$ by bicubic interpolation, we perform the weight selection algorithm for all pixels, as shown in Fig. 2. If we detect a distinct color pattern in the color image, we do not use the color information, so as to avoid the texture copy problem. To detect a distinct color pattern, we compare the mean of absolute differences (MAD) of the color image with that of the depth image. The MADs of the color and depth images at $p$ are denoted as ${\mathrm{MAD}}_{p}^{c}$ and ${\mathrm{MAD}}_{p}^{d}$, respectively, and are calculated as

## (3)

$${\mathrm{MAD}}_{p}^{c}=\frac{1}{N}\sum _{q\in N(p)}|{I}_{p}^{\mathrm{h}}-{I}_{q}^{\mathrm{h}}|,\phantom{\rule{0ex}{0ex}}{\mathrm{MAD}}_{p}^{d}=\frac{1}{N}\sum _{q\in N(p)}|{B}_{p}^{\mathrm{h}}-{B}_{q}^{\mathrm{h}}|.$$

Figure 3 shows the pixel positions used for the cost calculation and the weight functions. The cost ${C}_{p,q}$ at $q$ with respect to $p$ is calculated as

## (5)

$${C}_{p,q}=\{\begin{array}{ll}R{W}_{q,\mathrm{r}}\cdot |{B}_{p}^{\mathrm{h}}-{B}_{q}^{\mathrm{h}}|\hspace{1em}& \text{if}\text{\hspace{0.17em}}|{B}_{p}^{\mathrm{h}}-{B}_{q}^{\mathrm{h}}|\ge 1\\ R{W}_{q,\mathrm{r}}& \text{otherwise},\end{array}$$

## (6)

$$R{W}_{q,\mathrm{r}}=\{\begin{array}{ll}\sum _{r\in N(q)}{f}_{\phi}(q,r)\cdot {f}_{\psi}(q,r)& \text{if}\text{\hspace{0.17em}}A{D}_{p}<{T}_{d}\\ \sum _{r\in N(q)}{f}_{\phi}(q,r)\cdot {f}_{\omega}(q,r)& \text{otherwise}.\end{array}$$

## (7)

$${f}_{\phi}(q,r)=\mathrm{exp}\left(\frac{-{\Vert q-r\Vert}^{2}}{{\delta}_{\phi}}\right),\phantom{\rule{0ex}{0ex}}{f}_{\omega}(q,r)=\mathrm{exp}\left(\frac{-{\Vert {B}_{q}^{\mathrm{h}}-{B}_{r}^{\mathrm{h}}\Vert}^{2}}{{\delta}_{\omega}}\right),$$

## (8)

$${f}_{\psi}(q,r)=\frac{1}{2{\delta}_{\psi}}\mathrm{exp}\left(\frac{-\Vert {I}_{q}^{\mathrm{h}}-{I}_{r}^{\mathrm{h}}\Vert}{{\delta}_{\psi}}\right).$$

After calculating the costs of all nine candidates, we find the candidate pixel ${q}_{x}$ with the minimum cost, which is represented as

## (9)

$${q}_{x}=\underset{q\in N(p)}{\mathrm{arg\,min}}\text{\hspace{0.17em}}{C}_{p,q}.$$
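A minimal sketch of this candidate search is given below, assuming grayscale images. Since the definition of $A{D}_{p}$ (Eq. (4)) is not reproduced in this excerpt, the distinct-pattern test below compares the MADs of Eq. (3) directly as an assumed stand-in for the $A{D}_{p}<{T}_{d}$ condition of Eq. (6); the parameter values follow the simulation settings of Sec. 4.

```python
import numpy as np

def mad(img, p, radius=1):
    # Eq. (3): mean absolute difference between pixel p and its window neighbors.
    py, px = p
    h, w = img.shape
    diffs = [abs(img[py, px] - img[ry, rx])
             for ry in range(max(0, py - radius), min(h, py + radius + 1))
             for rx in range(max(0, px - radius), min(w, px + radius + 1))]
    return sum(diffs) / len(diffs)

def refine_pixel(B, I, p, T_d=50.0, d_phi=2.0, d_psi=0.1, d_omega=0.1):
    """Pick, among the nine candidates q in the 3x3 window around p, the one
    minimizing the cost of Eq. (5).  The range weight RW of Eq. (6) uses the
    Laplacian color weight f_psi (Eq. (8)) unless a distinct color pattern is
    suspected, in which case it falls back to the depth weight f_omega (Eq. (7))."""
    h, w = B.shape
    py, px = p
    # Assumed stand-in for the AD_p < T_d test of Eq. (6) (Eq. (4) not shown here).
    use_color = (mad(I, p) - mad(B, p)) < T_d
    best_q, best_cost = p, np.inf
    for qy in range(max(0, py - 1), min(h, py + 2)):
        for qx in range(max(0, px - 1), min(w, px + 2)):
            rw = 0.0  # RW_{q,r}: reliability of candidate q over its window N(q)
            for ry in range(max(0, qy - 1), min(h, qy + 2)):
                for rx in range(max(0, qx - 1), min(w, qx + 2)):
                    f_phi = np.exp(-((qy - ry) ** 2 + (qx - rx) ** 2) / d_phi)
                    if use_color:
                        f_rng = np.exp(-abs(I[qy, qx] - I[ry, rx]) / d_psi) / (2 * d_psi)
                    else:
                        f_rng = np.exp(-(B[qy, qx] - B[ry, rx]) ** 2 / d_omega)
                    rw += f_phi * f_rng
            diff = abs(B[py, px] - B[qy, qx])
            cost = rw * diff if diff >= 1 else rw  # Eq. (5)
            if cost < best_cost:
                best_cost, best_q = cost, (qy, qx)
    return best_q  # D_p^h takes the value B[best_q], per Eq. (9)
```

Note that the switch between `f_psi` and `f_omega` is exactly what prevents texture copy: when the color pattern is distinct, the color term is dropped and only depth similarity contributes to the weight.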

Finally, ${D}_{p}^{\mathrm{h}}$ is replaced with the pixel ${B}_{{q}_{x}}^{\mathrm{h}}$ that generates the minimum cost.

## 4. Simulation Results

To evaluate the performance of the proposed depth image upsampling algorithm, we performed computer simulations on various images with ground truth depth data. To generate 8-bit low-resolution input depth images, each ground truth depth image was downsampled by factors of 2 and 4. The performance of the proposed algorithm was compared with the JBU,^{1} the joint trilateral upsampler (JTU),^{2} and the fast edge-preserving depth upsampler (FEDU)^{3} in terms of peak signal-to-noise ratio (PSNR) and subjective visual quality. In the simulation, the cost function parameters were set to $q\in 3\times 3$, $r\in 3\times 3$, ${T}_{d}=50$, ${\delta}_{\phi}=2$, ${\delta}_{\psi}=0.1$, and ${\delta}_{\omega}=0.1$.
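For reference, the PSNR figure of merit used in the comparisons can be computed as follows, assuming 8-bit depth values (peak of 255):

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between a ground-truth depth image
    and an upsampled one.  Returns infinity for identical images."""
    err = reference.astype(np.float64) - test.astype(np.float64)
    mse = np.mean(err ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```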

Tables 1 and 2 show the PSNR comparison. When the upsampling factor is 2, the proposed upsampler achieves average PSNR gains of 11.11, 11.30, and 2.54 dB over JBU,^{1} JTU,^{2} and FEDU,^{3} respectively. When the upsampling factor is 4, the average PSNR gains of the proposed algorithm are 6.57, 4.21, and 2.65 dB over JBU,^{1} JTU,^{2} and FEDU,^{3} respectively.

## Table 1

PSNR comparison (unit: dB; upsampling factor: 2).

Test image | JBU^{1} | JTU^{2} | FEDU^{3} | Proposed
---|---|---|---|---
Art | 30.37 | 29.05 | 32.43 | 36.37
Book | 29.55 | 29.51 | 35.76 | 36.85
Laundry | 30.32 | 30.27 | 39.10 | 40.57
Lampshade | 29.07 | 29.05 | 37.55 | 38.33
Venus | 30.54 | 30.49 | 45.09 | 46.19
Teddy | 25.15 | 25.13 | 33.06 | 33.22
Sawtooth | 29.20 | 29.16 | 40.45 | 44.58
Plastic | 31.20 | 31.18 | 40.55 | 48.18
Average PSNR | 29.42 | 29.23 | 37.99 | 40.53

## Table 2

PSNR comparison (unit: dB; upsampling factor: 4).

Test image | JBU^{1} | JTU^{2} | FEDU^{3} | Proposed
---|---|---|---|---
Art | 28.87 | 31.47 | 32.41 | 34.09
Book | 26.45 | 26.42 | 32.55 | 32.96
Laundry | 32.91 | 34.88 | 36.32 | 39.53
Lampshade | 31.61 | 33.81 | 34.65 | 34.77
Venus | 38.13 | 41.21 | 41.74 | 43.08
Teddy | 25.63 | 29.55 | 30.33 | 31.18
Sawtooth | 33.33 | 36.90 | 37.11 | 41.58
Plastic | 34.49 | 36.02 | 37.64 | 46.77
Average PSNR | 31.42 | 33.78 | 35.34 | 37.99

As shown in Figs. 4 and 5, the superiority of the proposed algorithm is also demonstrated by a subjective visual quality comparison. The first image is the color image, with the region of interest marked by a red box; (a), (b), and (c) show the results of JBU,^{1} FEDU,^{3} and the proposed algorithm, respectively. Sawtooth and Art are upsampled by factors of 4 and 2, respectively. The result of JBU in Fig. 4(a) exhibits staircase distortion. FEDU^{3} reduces the staircase distortion, but blurring artifacts remain along sharp edges, as shown in Fig. 4(b). In contrast, the proposed algorithm generates sharp edges without blurring artifacts. In addition, the texture copy problem is reduced in Fig. 5(c) compared to the JBU result in Fig. 5(a).

## 5. Conclusion

A depth image upsampler has been proposed to preserve edge information and reduce the texture copy problem. To achieve this goal, we introduced a weight selection algorithm and a color range weight function based on a Laplacian distribution model. Computer simulations were performed on various test images, and the results demonstrate that the proposed scheme outperforms previous methods in terms of both PSNR and subjective visual quality. The improved quality of the upsampled depth images can, in turn, benefit virtual view synthesis and multiview 3-D video.

## Acknowledgments

This research was supported by the Basic Science Research Program through the National Research Foundation of Korea funded by the Ministry of Education (No. 2014R1A1A2057662) and research projects of “The Development of Security and Safety Systems based on Ubiquitous Technology for Shipping and Logistics.”

## References

1. J. Kopf et al., “Joint bilateral upsampling,” ACM Trans. Graphics 26(3), 1–6 (2007). http://dx.doi.org/10.1145/1276377

2. Y. Li et al., “Depth map super-resolution via iterative joint trilateral upsampling,” in IEEE Conf. on Visual Communications and Image Processing, pp. 386–389 (2014). http://dx.doi.org/10.1109/VCIP.2014.7051587

3. S. Y. Kim and Y. S. Ho, “Fast edge-preserving depth image upsampler,” IEEE Trans. Consum. Electron. 58(3), 971–977 (2012). http://dx.doi.org/10.1109/TCE.2012.6311344