## 1. Introduction

Dense stereo matching is one of the most challenging problems in computer vision and an important requirement for many applications, such as three-dimensional (3-D) reconstruction and virtual view synthesis. Generally, the purpose of stereo matching is to find the corresponding pixels between stereo image pairs captured by two or more cameras viewing the same scene, and to obtain the disparity map formed by the coordinate differences of the corresponding pixels in the stereo image pair.

Many algorithms are available to solve the dense stereo problem, and they can be classified as either global or local. Typical global algorithms, such as graph cuts,^{1} belief propagation,^{2}^{,}^{3} and dynamic programming,^{4}^{,}^{5} can generate a dense disparity map precisely based on a global energy function and suitable constraints. However, graph cuts and belief propagation usually consume a great deal of time and memory, and dynamic programming needs specific constraints in different situations. Local matching algorithms are known for their simplicity and efficiency, and they can also achieve accurate disparities. The basic idea of local matching is to estimate the disparity of a pixel in the target image by correlating a support window around the pixel with a similar support window in the reference image. One typical local matching algorithm is adaptive support weight (ASW), proposed by Yoon.^{6} The method in Ref. 6 adopts fixed-size square windows and allocates a support weight to each pixel in the window according to its color and position similarities to the center pixel. The disparity maps generated by Ref. 6 achieve quality similar to that obtained by global algorithms. Gradient information can indicate the variation between neighboring pixels and the structure of the image,^{7} and it also reduces the noise present in the disparity map. A method that uses gradient similarity together with the local ASW to compute the disparity was proposed in Ref. 8. Considering that information is lost when converting stereo images from RGB vector space to the CIELab color space, an ASW approach in RGB vector space was proposed in Ref. 9, where gradient similarity is also used to compute the support weight.
However, stereo matching remains difficult at the boundaries of objects and in fine-textured areas, which are reflected by high-frequency information. In this paper, we propose to utilize the illumination normal similarity of the two-dimensional (2-D) gray image to compute the support weight, building on the ASW in RGB vector space. The experimental results show that the proposed method improves the accuracy of the disparity map.

This paper is organized as follows. Section 2 defines the illumination normal in the image space. Section 3 explains the proposed method in detail. In Sec. 4, experimental results of the proposed method are compared with those of other methods. Conclusions and future work are given in Sec. 5.

## 2. Illumination Normal of Pixels in a 2-D Image Plane

In 3-D space, a normal vector exists at almost every point on an object's surface. Given a 2-D gray image, the gray value of each pixel reflects the illumination of the object. To obtain the illumination normal vector of the pixels in a 2-D image, each pixel of the image is regarded as a point in 3-D space. This can be expressed as $P[x,y,p(x,y)]$, where $x$ and $y$ are the horizontal and vertical coordinates, respectively, and $p(x,y)$ is the pixel value at the position $(x,y)$.

The current point and the points located below and to the right of it are used to compute its normal vector. Figure 1 illustrates how the illumination normal vector is calculated. Point $A$ is the current point, and $B$ and $C$ are the neighboring points used to compute the normal vector of point $A$. With $A=[x,y,p(x,y)]$, $B=[x,y+1,p(x,y+1)]$, and $C=[x+1,y,p(x+1,y)]$, the 3-D vectors from $A$ to $C$ and from $A$ to $B$ are computed as follows:

## (1)

$$vec1=C-A=[1,0,p(x+1,y)-p(x,y)]$$

## (2)

$$vec2=B-A=[0,1,p(x,y+1)-p(x,y)].$$

The illumination normal vector of point $A$ is obtained by the cross-product of $vec1$ and $vec2$:

## (3)

$$vecN(A)=vec1\times vec2=[vec{N}_{x}(A),vec{N}_{y}(A),vec{N}_{z}(A)].$$

Normalize the illumination normal vector of point $A$:

## (4)

$$n(A)=\frac{vecN(A)}{{\Vert vecN(A)\Vert}_{2}}=[{n}_{x}(A),{n}_{y}(A),{n}_{z}(A)],$$

where

## (5)

$${n}_{x}(A)=\frac{vec{N}_{x}(A)}{\sqrt{{[vec{N}_{x}(A)]}^{2}+{[vec{N}_{y}(A)]}^{2}+{[vec{N}_{z}(A)]}^{2}}}$$

## (6)

$${n}_{y}(A)=\frac{vec{N}_{y}(A)}{\sqrt{{[vec{N}_{x}(A)]}^{2}+{[vec{N}_{y}(A)]}^{2}+{[vec{N}_{z}(A)]}^{2}}}$$

## (7)

$${n}_{z}(A)=\frac{vec{N}_{z}(A)}{\sqrt{{[vec{N}_{x}(A)]}^{2}+{[vec{N}_{y}(A)]}^{2}+{[vec{N}_{z}(A)]}^{2}}}.$$

The modulus images of the illumination normal vector of the stereo image pairs, which are used to analyze the illumination normal similarity of the pairs, are shown in Fig. 2. The features visible in Figs. 2(b) and 2(d) reflect the high-frequency information of the gray image pairs. This high-frequency information captures small-scale details of the image, which is useful when searching for matching pixels in the stereo pair. The proposed method exploits this property by combining the illumination normal similarity of the gray image into the ASW method to compute the weights in the support window.
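As a minimal sketch of this construction (NumPy assumed; the edge replication at the last row and column is an implementation choice the text leaves open), the per-pixel illumination normal of a gray image can be computed as:

```python
import numpy as np

def illumination_normals(gray):
    """Unit illumination normal at each pixel of a 2-D gray image.

    Each pixel is treated as a 3-D point P[x, y, p(x, y)]; the normal at
    point A is the cross product of the vectors from A to its right
    neighbor C and from A to its lower neighbor B, following the
    construction above. The last row and column are edge-replicated
    (an assumption, since the border handling is not specified).
    """
    p = np.asarray(gray, dtype=np.float64)
    # Forward differences: vec1 = A->C = (1, 0, dpx), vec2 = A->B = (0, 1, dpy)
    dpx = np.diff(p, axis=1, append=p[:, -1:])  # p(x+1, y) - p(x, y)
    dpy = np.diff(p, axis=0, append=p[-1:, :])  # p(x, y+1) - p(x, y)
    # Cross product (1, 0, dpx) x (0, 1, dpy) = (-dpx, -dpy, 1)
    n = np.dstack([-dpx, -dpy, np.ones_like(p)])
    return n / np.linalg.norm(n, axis=2, keepdims=True)
```

The modulus of the unnormalized vector $vecN(A)$ can likewise be visualized to obtain images such as those in Fig. 2.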

## 3. Proposed Algorithm

To assign the support weight more accurately to each pixel in the support window, several similarity measures are considered. Geng et al.^{9} added the gradient similarity in RGB vector space to the gestalt grouping cues used by Yoon.^{6} Here, we propose to compute the support weight from multiple similarity measures: color similarity, Euclidean distance similarity, gradient similarity, and illumination normal similarity. The support weight of a pixel in a support window can be expressed by

## (8)

$$w(\overrightarrow{p},\overrightarrow{q})=\mathrm{exp}\left(-\frac{{\mathrm{\Delta}c}_{\overrightarrow{p}\overrightarrow{q}}}{{\tau}_{c}}\right)\cdot \mathrm{exp}\left(-\frac{{\mathrm{\Delta}\mathrm{dis}}_{\overrightarrow{p}\overrightarrow{q}}}{{\tau}_{d}}\right)\cdot \mathrm{exp}\left(-\frac{{\mathrm{\Delta}\text{grad}}_{\overrightarrow{p}\overrightarrow{q}}}{{\tau}_{g}}\right)\cdot \mathrm{exp}\left(-\frac{{\mathrm{\Delta}n}_{\overrightarrow{p}\overrightarrow{q}}}{{\tau}_{n}}\right),$$

where

## (9)

$${\mathrm{\Delta}c}_{\overrightarrow{p}\overrightarrow{q}}={\Vert \overrightarrow{p}-\overrightarrow{q}\Vert}_{2}=\sqrt{{({p}_{R}-{q}_{R})}^{2}+{({p}_{G}-{q}_{G})}^{2}+{({p}_{B}-{q}_{B})}^{2}}$$

## (10)

$${\mathrm{\Delta}\mathrm{dis}}_{\overrightarrow{p}\overrightarrow{q}}={\Vert {\mathrm{dis}}_{\overrightarrow{p}}-{\mathrm{dis}}_{\overrightarrow{q}}\Vert}_{2}=\sqrt{{({x}_{\overrightarrow{p}}-{x}_{\overrightarrow{q}})}^{2}+{({y}_{\overrightarrow{p}}-{y}_{\overrightarrow{q}})}^{2}}$$

## (11)

$${\mathrm{\Delta}\text{grad}}_{\overrightarrow{p}\overrightarrow{q}}={\mathrm{\Delta}\text{grad}x}_{\overrightarrow{p}\overrightarrow{q}}+{\mathrm{\Delta}\text{grad}y}_{\overrightarrow{p}\overrightarrow{q}}={\Vert {\text{grad}x}_{\overrightarrow{p}}-{\text{grad}x}_{\overrightarrow{q}}\Vert}_{2}+{\Vert {\text{grad}y}_{\overrightarrow{p}}-{\text{grad}y}_{\overrightarrow{q}}\Vert}_{2}$$

## (12)

$${\mathrm{\Delta}n}_{\overrightarrow{p}\overrightarrow{q}}={\Vert n(\overrightarrow{p})-n(\overrightarrow{q})\Vert}_{2}=\sqrt{{[{n}_{x}(\overrightarrow{p})-{n}_{x}(\overrightarrow{q})]}^{2}+{[{n}_{y}(\overrightarrow{p})-{n}_{y}(\overrightarrow{q})]}^{2}+{[{n}_{z}(\overrightarrow{p})-{n}_{z}(\overrightarrow{q})]}^{2}}.$$

The support weights computed in the windows of the reference image and the target image are combined in the aggregation step. The aggregated matching score $E$ can be expressed by

## (13)

$$E(\overrightarrow{p},{\overrightarrow{p}}_{d})=\frac{1}{N}\sum _{\overrightarrow{q}\in {N}_{\overrightarrow{p}},{\overrightarrow{q}}_{d}\in {N}_{{\overrightarrow{p}}_{d}}}{e}_{\text{matching}}(\overrightarrow{q},{\overrightarrow{q}}_{d})\cdot w(\overrightarrow{p},\overrightarrow{q}),$$

where

## (14)

$${e}_{\text{matching}}(\overrightarrow{q},{\overrightarrow{q}}_{d})={e}_{c}(\overrightarrow{q},{\overrightarrow{q}}_{d})\cdot {e}_{\text{grad}}(\overrightarrow{q},{\overrightarrow{q}}_{d})\cdot {e}_{n}(\overrightarrow{q},{\overrightarrow{q}}_{d}),$$

## (15)

$${e}_{c}(\overrightarrow{q},{\overrightarrow{q}}_{d})=\mathrm{exp}\left(-\frac{{\mathrm{\Delta}c}_{\overrightarrow{q}{\overrightarrow{q}}_{d}}}{{\lambda}_{c}}\right)$$

## (16)

$${e}_{\text{grad}}(\overrightarrow{q},{\overrightarrow{q}}_{d})=\mathrm{exp}\left(-\frac{{\mathrm{\Delta}\text{grad}x}_{\overrightarrow{q}{\overrightarrow{q}}_{d}}}{{\lambda}_{\text{grad}x}}-\frac{{\mathrm{\Delta}\text{grad}y}_{\overrightarrow{q}{\overrightarrow{q}}_{d}}}{{\lambda}_{\text{grad}y}}\right)$$

## (17)

$${e}_{n}(\overrightarrow{q},{\overrightarrow{q}}_{d})=\mathrm{exp}\left(-\frac{{\mathrm{\Delta}n}_{\overrightarrow{q}{\overrightarrow{q}}_{d}}}{{\lambda}_{n}}\right).$$

The best disparity of pixel $\overrightarrow{p}$ is found by maximizing the matching score $E(\overrightarrow{p},{\overrightarrow{p}}_{d})$ over the set $D$ of candidate disparities:

## (18)

$${d}_{\overrightarrow{p}}=\underset{d\in D}{\mathrm{arg\; max}}\text{\hspace{0.17em}}E(\overrightarrow{p},{\overrightarrow{p}}_{d}).$$

In order to refine the disparity, a left-right consistency check is used to detect matching errors, as follows:

## (19)

$${d}_{L}(x,y)={d}_{R}[x-{d}_{L}(x,y),y],$$

where ${d}_{L}(x,y)$ is the disparity computed with the left image as the reference image and ${d}_{R}(x,y)$ is the disparity computed with the right image as the reference image; the two maps are computed separately. Pixels that fail the consistency check are classified as bad pixels. For each bad pixel, the support weight of every neighboring pixel in a fixed-size support window centered on it is recomputed using the proposed method, and the disparity of the neighboring pixel with the largest recomputed support weight is taken as the disparity of the bad pixel.
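The main steps of the algorithm can be sketched as follows (NumPy assumed; the function names, the simplified handling of the per-window difference terms, and the integer check threshold are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

# Parameter values follow Sec. 4 of this paper.
TAU_C, TAU_D, TAU_G, TAU_N = 30.0, 10.0, 30.0, 40.0

def support_weights(dc, dd, dg, dn):
    """Multiplicative support weight of Eq. (8), given the four per-pixel
    difference terms (color, distance, gradient, illumination normal)."""
    return (np.exp(-np.asarray(dc) / TAU_C) * np.exp(-np.asarray(dd) / TAU_D)
            * np.exp(-np.asarray(dg) / TAU_G) * np.exp(-np.asarray(dn) / TAU_N))

def wta_disparity(score_volume):
    """Winner-take-all selection of Eq. (18): per pixel, pick the
    disparity slice with the largest aggregated matching score E."""
    return np.argmax(score_volume, axis=2)

def lr_check(d_left, d_right, thresh=1):
    """Left-right consistency check: pixel (x, y) passes when d_L(x, y)
    agrees with d_R at the matched position (x - d_L(x, y), y)."""
    h, w = d_left.shape
    ys, xs = np.indices((h, w))
    xr = np.clip(xs - d_left.astype(int), 0, w - 1)
    return np.abs(d_left - d_right[ys, xr]) <= thresh
```

Pixels where `lr_check` returns `False` would then be refilled by the bad-pixel recomputation step described above.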

## 4. Experimental Results

### 4.1. Performance Comparison

The stereo image pairs “tsukuba,” “venus,” “teddy,” and “cones,” provided by the Middlebury stereo benchmark, were used in our experiments. The size of the support window was fixed at $35\times 35\text{\hspace{0.17em}}\mathrm{pixels}$, and the constants ${\tau}_{c}=30$, ${\lambda}_{c}=40$, ${\tau}_{d}=10$, ${\lambda}_{\text{grad}x}=20$, ${\lambda}_{\text{grad}y}=10$, ${\tau}_{g}=30$,^{9} ${\tau}_{n}=40$, and ${\lambda}_{n}=1$ were fixed for all test stereo image pairs. To evaluate the proposed algorithm, we obtained the ground truth provided by Scharstein and Szeliski^{10} and the disparity maps of the ASW method by Yoon^{6} from the Middlebury stereo benchmark. The subjective quality comparison of the disparity maps is shown in Fig. 3. Figures 3(a) and 3(b) show the color image and the ground truth, respectively, while Figs. 3(c), 3(e), and 3(g) show the disparity maps, and Figs. 3(d), 3(f), and 3(h) the corresponding bad-pixel images, produced by our algorithm, ASW,^{6} and ASW-RGB,^{9} respectively. The error threshold $Th$ in this experiment was 0.5. In the bad-pixel images, smaller gray and black regions indicate a more accurate disparity map. Figures 3(d), 3(f), and 3(h) show that the disparity map of our algorithm is more accurate than those of ASW^{6} and ASW-RGB.^{9}

To measure the objective quality of a disparity map, the Middlebury stereo benchmark provides quality metrics over three regions: all pixels (“all”), nonoccluded regions (“nonocc”), and pixels near depth discontinuities (“disc”). A generated disparity value is considered correct when its absolute difference from the ground truth is less than the error threshold $Th$. Tables 1 and 2 report the two cases $Th=1$ and $Th=0.5$. For an objective evaluation, the proposed algorithm is compared with other local ASW matching methods.^{6}^{,}^{8}^{,}^{9}^{,}^{11}^{–}^{13} The comparison is shown in Tables 1 and 2; the proposed algorithm (ASW-MS) improves the matching accuracy by varying degrees.
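The bad-pixel percentage behind these metrics can be sketched as follows (NumPy assumed; the boolean `mask` argument standing in for the benchmark's all/nonocc/disc region masks is illustrative):

```python
import numpy as np

def percent_bad_pixels(disp, gt, mask=None, thresh=1.0):
    """Percentage of pixels whose absolute disparity error exceeds the
    error threshold Th; `mask` (a boolean array) selects the evaluated
    region, or None to evaluate every pixel."""
    err = np.abs(np.asarray(disp, dtype=np.float64)
                 - np.asarray(gt, dtype=np.float64))
    bad = err > thresh
    if mask is not None:
        bad = bad[mask]
    return 100.0 * float(np.mean(bad))
```

Lower values indicate a more accurate disparity map, as in Tables 1 and 2.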

## Table 1

Performance comparison of the proposed method with the Middlebury stereo benchmark (error threshold: 1.0).

| Algorithm | Tsukuba nonocc | Tsukuba all | Tsukuba disc | Venus nonocc | Venus all | Venus disc | Teddy nonocc | Teddy all | Teddy disc | Cones nonocc | Cones all | Cones disc | Average percent of bad pixels |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ASW-MS (proposed) | 2.03 | 2.63 | 8.50 | 0.50 | 0.96 | 2.17 | 6.57 | 11.9 | 17.1 | 2.96 | 8.55 | 7.93 | 5.98 |
| AdaptDispCalib^{11} | 1.19 | 1.42 | 6.15 | 0.23 | 0.34 | 2.50 | 7.80 | 13.6 | 17.3 | 3.62 | 9.33 | 9.72 | 6.10 |
| VSW^{12} | 1.62 | 1.88 | 6.98 | 0.47 | 0.81 | 3.40 | 8.67 | 13.3 | 18.0 | 3.37 | 9.87 | 9.77 | 6.29 |
| GradAdaptWgt^{8} | 2.26 | 2.63 | 8.99 | 0.99 | 1.39 | 4.92 | 8.00 | 13.1 | 18.2 | 2.61 | 7.67 | 7.43 | 6.55 |
| Adaptweight^{6} | 1.38 | 1.85 | 6.90 | 0.71 | 1.19 | 6.13 | 7.88 | 13.3 | 18.6 | 3.97 | 9.79 | 8.26 | 6.67 |
| ASW-RGB^{9} | 2.56 | 3.19 | 9.89 | 0.91 | 1.56 | 4.46 | 8.48 | 13.5 | 19.2 | 3.32 | 8.91 | 8.72 | 7.06 |
| BioPsyASW^{13} | 3.62 | 5.52 | 14.6 | 3.15 | 4.20 | 20.4 | 11.5 | 18.2 | 23.2 | 4.93 | 13.0 | 11.7 | 11.2 |

## Table 2

Performance comparison of the proposed method with the Middlebury stereo benchmark (error threshold: 0.5).

| Algorithm | Tsukuba nonocc | Tsukuba all | Tsukuba disc | Venus nonocc | Venus all | Venus disc | Teddy nonocc | Teddy all | Teddy disc | Cones nonocc | Cones all | Cones disc | Average percent of bad pixels |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ASW-MS (proposed) | 9.23 | 9.98 | 15.3 | 6.14 | 6.75 | 11.1 | 12.2 | 18.8 | 27.1 | 7.80 | 13.7 | 15.2 | 12.8 |
| GradAdaptWgt^{8} | 7.67 | 8.25 | 15.0 | 7.51 | 8.05 | 12.3 | 13.5 | 19.6 | 28.5 | 7.34 | 13.0 | 14.8 | 13.0 |
| ASW-RGB^{9} | 9.82 | 10.6 | 16.1 | 7.94 | 8.70 | 13.4 | 14.3 | 20.5 | 29.5 | 8.16 | 14.0 | 15.8 | 14.1 |
| VSW^{12} | 19.2 | 19.5 | 18.5 | 8.17 | 8.65 | 13.2 | 17.4 | 23.2 | 31.4 | 13.1 | 18.3 | 20.4 | 17.6 |
| AdaptDispCalib^{11} | 24.6 | 24.7 | 21.3 | 7.14 | 7.56 | 15.0 | 18.8 | 25.2 | 29.7 | 9.21 | 15.1 | 16.7 | 17.9 |
| Adaptweight^{6} | 18.1 | 18.8 | 18.6 | 7.77 | 8.40 | 15.8 | 17.6 | 23.9 | 34.0 | 14.0 | 19.7 | 20.6 | 18.1 |
| BioPsyASW^{13} | 22.9 | 24.4 | 24.1 | 9.69 | 10.8 | 24.5 | 18.5 | 26.1 | 34.5 | 12.6 | 20.2 | 22.3 | 20.9 |

### 4.2. Influence of the Illumination Normal

To analyze the influence of the illumination normal in the algorithm, experiments with the illumination normal (ASW-MS) and without it (ASW-MS-outN) were conducted. The comparison results are shown in Tables 3 and 4; the data in these tables are the percentages of bad pixels, where a lower percentage indicates a more accurate disparity map. Tables 3 and 4 show that including the illumination normal similarity yields more accurate results than omitting it.

## Table 3

Performance comparison of the proposed method with and without illumination normal in the Middlebury stereo benchmark (error threshold: 1.0).

| Algorithm | Tsukuba nonocc | Tsukuba all | Tsukuba disc | Venus nonocc | Venus all | Venus disc | Teddy nonocc | Teddy all | Teddy disc | Cones nonocc | Cones all | Cones disc | Average percent of bad pixels |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ASW-MS | 2.03 | 2.63 | 8.50 | 0.50 | 0.96 | 2.17 | 6.57 | 11.9 | 17.1 | 2.96 | 8.55 | 7.93 | 5.98 |
| ASW-MS-outN | 2.48 | 3.04 | 9.53 | 1.03 | 1.73 | 4.84 | 8.13 | 13.2 | 19.1 | 3.22 | 8.92 | 8.55 | 6.98 |

## Table 4

Performance comparison of the proposed method with and without illumination normal in the Middlebury stereo benchmark (error threshold: 0.5).

| Algorithm | Tsukuba nonocc | Tsukuba all | Tsukuba disc | Venus nonocc | Venus all | Venus disc | Teddy nonocc | Teddy all | Teddy disc | Cones nonocc | Cones all | Cones disc | Average percent of bad pixels |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ASW-MS | 9.23 | 9.98 | 15.3 | 6.14 | 6.75 | 11.1 | 12.2 | 18.8 | 27.1 | 7.80 | 13.7 | 15.2 | 12.8 |
| ASW-MS-outN | 13.1 | 13.7 | 17.3 | 8.22 | 8.99 | 13.2 | 14.1 | 20.4 | 29.6 | 7.94 | 13.9 | 15.6 | 14.7 |

### 4.3. Performance Analysis of the Proposed Method with Different Sizes of the Support Window

The size ($35\times 35$) of the support window of the proposed method is the same as that used in Refs. 8, 9, and 12. To compare our results with those of Refs. 6 and 11, which used different support window sizes, we tested the proposed method with the same window sizes as those studies, keeping the other constants of the algorithm the same as in the $35\times 35$ case. The support window in our algorithm has an odd size, so its center falls on a pixel, whereas the $48\times 48$ window of Ref. 13 has an even size with no pixel at its center; therefore, a comparison between Ref. 13 and the proposed algorithm is not listed.

Tables 5 and 6 show the comparative results between the proposed method and Refs. 6 and 11 for support windows of size $33\times 33$ and $21\times 21$ with error thresholds 1.0 and 0.5, respectively. The results show that the size of the support window affects the matching precision when the other constants of the algorithm are fixed. For an error threshold of 1.0, the average percentage of bad pixels of the proposed method is lower than that of all compared methods except Ref. 11; for an error threshold of 0.5, however, all results of the proposed method are better than those of Refs. 6 and 11.

## Table 5

Performance comparison of the proposed method with Refs. 6 and 11 in terms of the size (33×33, 21×21) of the support window in the Middlebury stereo benchmark (error threshold: 1.0).

| Algorithm | Tsukuba nonocc | Tsukuba all | Tsukuba disc | Venus nonocc | Venus all | Venus disc | Teddy nonocc | Teddy all | Teddy disc | Cones nonocc | Cones all | Cones disc | Average percent of bad pixels |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Adaptweight^{6} (33×33) | 1.38 | 1.85 | 6.90 | 0.71 | 1.19 | 6.13 | 7.88 | 13.3 | 18.6 | 3.97 | 9.79 | 8.26 | 6.67 |
| ASW-MS (33×33) | 2.04 | 2.65 | 8.47 | 0.54 | 1.00 | 2.16 | 6.55 | 11.9 | 17.0 | 2.91 | 8.54 | 7.82 | 5.96 |
| AdaptDispCalib^{11} (21×21) | 1.19 | 1.42 | 6.15 | 0.23 | 0.34 | 2.50 | 7.80 | 13.6 | 17.3 | 3.62 | 9.33 | 9.72 | 6.10 |
| ASW-MS (21×21) | 2.65 | 3.32 | 7.68 | 0.84 | 1.40 | 2.88 | 6.93 | 12.4 | 16.9 | 2.65 | 8.46 | 7.23 | 6.11 |

## Table 6

Performance comparison of the proposed method with Refs. 6 and 11 in terms of the size (33×33, 21×21) of the support window in the Middlebury stereo benchmark (error threshold: 0.5).

| Algorithm | Tsukuba nonocc | Tsukuba all | Tsukuba disc | Venus nonocc | Venus all | Venus disc | Teddy nonocc | Teddy all | Teddy disc | Cones nonocc | Cones all | Cones disc | Average percent of bad pixels |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Adaptweight^{6} (33×33) | 18.1 | 18.8 | 18.6 | 7.77 | 8.40 | 15.8 | 17.6 | 23.9 | 34.0 | 14.0 | 19.7 | 20.6 | 18.1 |
| ASW-MS (33×33) | 9.46 | 10.2 | 15.4 | 6.20 | 6.81 | 11.0 | 12.1 | 18.7 | 26.8 | 7.69 | 13.6 | 15.0 | 12.7 |
| AdaptDispCalib^{11} (21×21) | 24.6 | 24.7 | 21.3 | 7.14 | 7.56 | 15.0 | 18.8 | 25.2 | 29.7 | 9.21 | 15.1 | 16.7 | 17.9 |
| ASW-MS (21×21) | 12.4 | 13.2 | 16.3 | 6.67 | 7.33 | 11.0 | 11.7 | 18.5 | 25.3 | 6.62 | 12.8 | 13.5 | 12.9 |

## 5. Conclusions and Future Work

In this paper, we presented a new ASW matching algorithm based on multiple similarity measures: color similarity, Euclidean distance similarity, gradient similarity, and illumination normal similarity. The experimental results show that the proposed algorithm improves the matching precision compared with other local ASW matching algorithms. In future research, we plan to investigate other similarity measures to improve our method further.

## Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grant Nos. 61271315 and 61171078, and in part by the Research Fund for the Doctoral Program of Higher Education of China under Grant No. 20110061110084.

## References

## Biography

**Kai Gao** received a BS degree in electronics and information engineering from Changchun University of Science and Technology in 2006 and an MS degree in detection technology and automation devices from Changchun University of Science and Technology in 2009. He is currently pursuing his PhD at the College of Communication Engineering at Jilin University. His research interests include image and video coding, stereo matching, and virtual view synthesis.

**He-xin Chen** received MS and PhD degrees in communication and electronics in 1982 and 1990, respectively, from the Jilin University of Technology. He was a visiting scholar at the University of Alberta from 1987 to 1988. In 1993, he was a visiting professor at the Tampere University of Technology in Finland. He currently is a professor of communication engineering at Jilin University. His research interests include image and video coding, multidimensional signal processing, image and video retrieval, and audio and video synchronization.

**Yan Zhao** received a BS degree in communication engineering in 1993 from Changchun Institute of Posts and Telecommunications, an MS degree in communications and electronics in 1999 from the Jilin University of Technology, and a PhD in communications and information systems in 2003 from Jilin University. She was a postdoctoral researcher at the Digital Media Institute of the Tampere University of Technology in Finland in 2003. In 2008, she was a visiting professor at the Institute of Communications and Radio-Frequency Engineering at the Vienna University of Technology. She is currently an associate professor of communication engineering. Her research interests include image and video coding, multimedia signal processing, and error concealment for audio and video transmitted over unreliable networks. She is a member of IEEE.

**Ying-nan Geng** received BS and MS degrees at the College of Communication Engineering in Jilin University. At present, she is working toward her PhD at the College of Communication Engineering in Jilin University. Her research interests are stereo matching, image and video coding, and virtual view synthesis.

**Gang Wang** received a BS degree in electronics engineering from Changchun University of Technology in 1999, and an MS degree in signal processing from Jilin University in 2005. Now he is pursuing his PhD at the College of Geo-Exploration Science and Technology of Jilin University. His research interests include wireless communication application on geo-exploration and hyperspectral image communication.