## 1.

## Introduction

Texture features have shown significant advantages in fields such as image classification,^{1} image segmentation,^{2} and content-based image retrieval (CBIR).^{3}^{,}^{4} In particular, texture features are low-level features that have been widely used in the CBIR community because they are largely independent of image color and intensity.

Some popular texture descriptors, such as the gray level co-occurrence matrix (GLCM),^{5} Gabor filters,^{6} the wavelet transform,^{7} and the local binary pattern (LBP),^{8} have been used extensively in the CBIR community. Unfortunately, these conventional texture features are extracted directly from grayscale images and ignore the discriminative information carried by the different color channels, which can serve as complementary information for different texture patterns. Many studies have therefore sought to exploit this discriminative information to improve the retrieval of remote sensing images.

Such research can be roughly divided into two categories: (1) combination of color and texture features and (2) texture features integrating opponent process theory. Representative works of the former strategy include the following. Lin et al.^{9} proposed a smart CBIR system based on color and texture features. Chun et al.^{10} presented a CBIR method based on a combination of color and texture features extracted in the multiresolution wavelet domain. Liapis and Tziritas^{11} described an image retrieval mechanism combining texture and color features obtained, respectively, from discrete wavelet frames analysis and one-dimensional histograms of CIELab chromaticity coordinates. This strategy has also been adopted as an important retrieval mechanism in well-known image retrieval systems such as query by image content (QBIC).^{12}^{,}^{13} Other similar works^{14}^{–}^{17} can be found in the literature. Although these works take both discriminative color information and texture features into consideration, issues such as the computational complexity and the definition of weight parameters for the combined features remain open questions.

In the 1950s, Hurvich and Jameson^{18} proposed an opponent process theory of human color vision, and texture features integrating opponent process theory have since drawn substantial attention. Jain and Healey^{19} proposed a multiscale representation based on opponent process theory for texture recognition, and this method was later applied to hyperspectral image texture recognition.^{20} In recent work by Choi et al.,^{21} two features, color local Gabor wavelets and color LBP, were proposed for face recognition; they share similar principles and can be regarded as an extended application of the theory in Ref. 19. The opponent process theory provides complementary information among color channels and yields a simple but effective feature representation.

Motivated by these applications of opponent process theory, we propose a descriptor named color Gabor wavelet texture (CGWT) for remote sensing image retrieval. We also present a color Gabor opponent texture (CGOT) descriptor based on Gabor wavelets to improve the retrieval results for image classes that achieve inferior precision with the CGWT representation.

The rest of this paper is organized as follows. Section 2 presents the framework of remote sensing image retrieval based on the proposed descriptors and details the proposed features, the parameters used, and the similarity measure defined for the CGOT descriptor. In Sec. 3, comparative experimental results and discussions are presented. Conclusions and future work constitute Sec. 4.

## 2.

## Improved Color Texture Descriptors

## 2.1.

### Framework of Remote Sensing Image Retrieval Based on the Proposed Descriptors

Generally, an image retrieval system contains an image database, a feature database, and several important functional modules, such as feature extraction, an indexing mechanism, and a similarity measure. Figure 1 illustrates the framework of the improved color texture descriptors for remote sensing image retrieval in this study, which mainly contains two parts: feature extraction and image retrieval.

The feature extraction part shows the extraction procedure for the proposed features. Given an RGB remote sensing image, the three color channel images R, G, and B are obtained first. Then the unichrome feature corresponding to each color channel is extracted using a Gabor filter with orientation and scale $(u,v)$. Finally, the R, G, and B unichrome features are concatenated to form the unichrome feature. For the opponent feature, two Gabor filters with orientation and scale $(u,v)$ and $(u,{v}^{\prime})$ are applied to two color channel images, respectively. As with the unichrome feature, the RG, RB, and GB opponent features are concatenated to form the opponent feature.

The image retrieval part illustrates a simple remote sensing image retrieval procedure. All images and features are stored in the image database and the feature database, respectively, and images are associated with their corresponding features through an indexing mechanism. Given a query image, the distances between the query image and the database images are calculated using a predefined similarity measure, and then the $k$ most similar images are returned in order of similarity.
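As a concrete illustration of this retrieval loop (not the authors' code), the procedure can be sketched in Python; the function name `retrieve` and the use of the L2 distance here are illustrative assumptions, since the paper defines its own measures later in Sec. 2.3.

```python
import numpy as np

def retrieve(query_feat, db_feats, k=5):
    """Return the indices of the k most similar database images.

    db_feats: (n_images, n_features) array of precomputed features.
    The L2 distance stands in for the paper's similarity measures;
    the smallest distance corresponds to the highest similarity.
    """
    dists = np.linalg.norm(db_feats - query_feat, axis=1)
    return np.argsort(dists)[:k]
```

Returning indices rather than images mirrors the indexing mechanism described above: the index ties each feature vector back to its image in the database.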

Feature extraction is an important and indispensable part of an image retrieval system. Section 2.2 details the extraction of the proposed representations. In addition, the similarity measures used in this study, the most important part of the retrieval procedure, are discussed in Sec. 2.3.

## 2.2.

### Feature Extraction

In our methodology, all images are represented in RGB color space for convenience. Both the CGWT and CGOT features are based on the Gabor filter, defined as follows:

## (1)

$${\psi}_{u,v}(z)=\frac{{\Vert {k}_{u,v}\Vert}^{2}}{{\sigma}^{2}}{\mathrm{e}}^{(-{\Vert {k}_{u,v}\Vert}^{2}{\Vert z\Vert}^{2}/2{\sigma}^{2})}[{\mathrm{e}}^{i{k}_{u,v}z}-{\mathrm{e}}^{-{\sigma}^{2}/2}],$$

where the wave vector ${k}_{u,v}$ is defined by

## (2)

$${k}_{u,v}={k}_{v}{\mathrm{e}}^{i{\phi}_{u}},\quad {k}_{v}=\frac{{k}_{\mathrm{max}}}{{f}^{v}},\quad {\phi}_{u}=\frac{\pi u}{8}\mathrm{.}$$

Note that the Gabor filter has many formulations; the form of Eq. (1) from Ref. 22 is chosen for its conciseness and the convenience of setting parameters, such as orientation and scale, in our algorithm.
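A minimal sketch of the Gabor kernel of Eq. (1) follows; the wave-vector convention ($k_{u,v}=k_{\mathrm{max}}/f^{v}\cdot e^{i\pi u/8}$) and the window size are assumptions based on the parameter values given later in Sec. 2.2.4, not a verbatim reproduction of the authors' implementation.

```python
import numpy as np

def gabor_kernel(u, v, size=31, sigma=2 * np.pi, k_max=np.pi / 2, f=np.sqrt(2)):
    """Gabor kernel of Eq. (1) on a size x size grid.

    The wave vector k_{u,v} = (k_max / f**v) * exp(i * pi * u / 8) is the
    standard convention matching the empirical parameters of Sec. 2.2.4.
    z = x + i*y encodes the 2-D position; Re(k * conj(z)) is the dot
    product k_{u,v} . z appearing in the oscillatory term.
    """
    k = (k_max / f**v) * np.exp(1j * np.pi * u / 8)
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    z = x + 1j * y
    kz = (k * z.conj()).real                       # k_{u,v} . z
    norm2 = np.abs(k) ** 2
    envelope = (norm2 / sigma**2) * np.exp(-norm2 * np.abs(z)**2 / (2 * sigma**2))
    return envelope * (np.exp(1j * kz) - np.exp(-sigma**2 / 2))
```

The subtracted constant term $e^{-\sigma^2/2}$ makes the kernel approximately DC-free, so the filter response is insensitive to uniform illumination.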

## 2.2.1.

#### Extraction of CGWT descriptor

As illustrated in Fig. 1, the CGWT representation consists of two parts: the unichrome feature and the opponent feature. The terms “unichrome feature” and “opponent feature” follow the definitions in Ref. 19, which provides detailed information about both. Let $\text{R}$, $\text{G}$, and $\text{B}$ be the grayscale images of the corresponding color channels of an RGB image. The convolutions of the three grayscale images with the Gabor kernel ${\psi}_{u,v}$ are denoted as follows:

## (3)

$$\{\begin{array}{l}{\text{R}}_{u,v}(z)=\text{R}(z)*{\psi}_{u,v}(z)\\ {\text{G}}_{u,v}(z)=\text{G}(z)*{\psi}_{u,v}(z)\\ {\text{B}}_{u,v}(z)=\text{B}(z)*{\psi}_{u,v}(z)\end{array},$$

The unichrome feature is then defined by

## (4)

$$\mathrm{uni}=[\sqrt{\sum {\text{R}}_{u,v}^{2}(z)},\sqrt{\sum {\text{G}}_{u,v}^{2}(z)},\sqrt{\sum {\text{B}}_{u,v}^{2}(z)}],$$

Then, the difference of the normalized ${\text{R}}_{u,v}(z)$, ${\text{G}}_{u,v}(z)$, and ${\text{B}}_{u,v}(z)$ is defined by

## (5)

$$\{\begin{array}{l}\text{R}{\text{G}}_{u,v,{v}^{\prime}}=\frac{{\text{R}}_{u,v}(z)}{\sqrt{\sum {\text{R}}_{u,v}^{2}(z)}}-\frac{{\text{G}}_{u,{v}^{\prime}}(z)}{\sqrt{\sum {\text{G}}_{u,{v}^{\prime}}^{2}(z)}}\\ \text{R}{\text{B}}_{u,v,{v}^{\prime}}=\frac{{\text{R}}_{u,v}(z)}{\sqrt{\sum {\text{R}}_{u,v}^{2}(z)}}-\frac{{\text{B}}_{u,{v}^{\prime}}(z)}{\sqrt{\sum {\text{B}}_{u,{v}^{\prime}}^{2}(z)}}\\ \text{G}{\text{B}}_{u,v,{v}^{\prime}}=\frac{{\text{G}}_{u,v}(z)}{\sqrt{\sum {\text{G}}_{u,v}^{2}(z)}}-\frac{{\text{B}}_{u,{v}^{\prime}}(z)}{\sqrt{\sum {\text{B}}_{u,{v}^{\prime}}^{2}(z)}}\end{array},$$

The opponent feature is then defined by

## (6)

$$\mathrm{opp}=[\sqrt{\sum \text{R}{\text{G}}_{u,v,{v}^{\prime}}^{2}(z)},\sqrt{\sum \text{R}{\text{B}}_{u,v,{v}^{\prime}}^{2}(z)},\sqrt{\sum \text{G}{\text{B}}_{u,v,{v}^{\prime}}^{2}(z)}],$$

According to Eqs. (5) and (6), the three identities in Eq. (7) hold. Feature dimension and efficiency are two factors that need to be considered during feature extraction, but they were not taken into account in the works by Jain and Healey^{19} and Choi et al.^{21} In our study, we therefore select only three of the six possible opponent combinations, as Eq. (7) shows them to be pairwise equal, so as to decrease the feature dimension and increase efficiency. Finally, the CGWT representation of an image is given by Eq. (8).

## (7)

$$\{\begin{array}{l}\sqrt{\sum \text{R}{\text{G}}_{u,v,{v}^{\prime}}^{2}(z)}=\sqrt{\sum \text{G}{\text{R}}_{u,v,{v}^{\prime}}^{2}(z)}\\ \sqrt{\sum \text{R}{\text{B}}_{u,v,{v}^{\prime}}^{2}(z)}=\sqrt{\sum \text{B}{\text{R}}_{u,v,{v}^{\prime}}^{2}(z)}\\ \sqrt{\sum \text{G}{\text{B}}_{u,v,{v}^{\prime}}^{2}(z)}=\sqrt{\sum \text{B}{\text{G}}_{u,v,{v}^{\prime}}^{2}(z)}\end{array},$$

## (8)

$${f}^{\mathrm{CGWT}}=[\mathrm{uni},\mathrm{opp}]\mathrm{.}$$

## 2.2.2.

#### Extraction of CGOT descriptor

The CGOT representation combines the Gabor texture^{6} and the opponent feature, which substantially decreases the feature dimension compared with the CGWT representation. Given a grayscale image $I$, the convolution of $I$ with the Gabor kernel ${\psi}_{u,v}$ of orientation $u$ and scale $v$ is given by

## (9)

$${g}_{u,v}(x,y)=I(x,y)*{\psi}_{u,v}(x,y)\mathrm{.}$$

The mean ${\mu}_{u,v}$ and standard deviation ${\sigma}_{u,v}$ of the transform coefficients are defined by

## (10)

$$\{\begin{array}{l}{\mu}_{u,v}=\iint |{g}_{u,v}(x,y)|\mathrm{d}x\text{\hspace{0.17em}}\mathrm{d}y\\ {\sigma}_{u,v}=\sqrt{\iint {(|{g}_{u,v}(x,y)|-{\mu}_{u,v})}^{2}\mathrm{d}x\text{\hspace{0.17em}}\mathrm{d}y}\end{array}\mathrm{.}$$

The Gabor texture feature composed of ${\mu}_{u,v}$ and ${\sigma}_{u,v}$ is denoted by $T=\{{\mu}_{u,v},{\sigma}_{u,v}\}$. The CGOT representation of an image is then given by

## (11)

$${f}^{\mathrm{CGOT}}=[T,\mathrm{opp}]\mathrm{.}$$
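A sketch of the Gabor texture statistics follows. Note one assumption: Eq. (10) is written as an integral over the image domain, and the sketch uses the per-pixel mean and standard deviation, i.e., a discretization normalized by the image area, which the paper does not state explicitly.

```python
import numpy as np

def gabor_texture_stats(g_uv):
    """Per-pixel discretization of Eq. (10).

    g_uv: the (possibly complex) Gabor-filtered image g_{u,v}.
    Returns the mean and standard deviation of its magnitude,
    the two components of the Gabor texture feature T.
    """
    mag = np.abs(g_uv)
    return mag.mean(), mag.std()
```

Stacking these two statistics over all 40 $(u,v)$ pairs yields the 80-dimensional Gabor texture feature quoted in Sec. 2.2.4.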

## 2.2.3.

#### Extraction of comparative texture features

Some widely used traditional texture features, namely wavelet texture, LBP, and GLCM, are introduced as comparative methods to provide a quantitative analysis. Before extracting these features, the color images are converted into intensity images using $\text{gray}=0.299*r+0.587*g+0.114*b$, where $r$, $g$, and $b$ denote the red, green, and blue channels, respectively. Details of these comparative methods are given below.
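The grayscale conversion above is a one-liner; this sketch simply applies the stated weights channelwise (the function name `to_gray` is ours).

```python
import numpy as np

def to_gray(rgb):
    """Convert an H x W x 3 RGB array to intensity using the standard
    luma weights gray = 0.299 r + 0.587 g + 0.114 b."""
    return rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114
```

Since the three weights sum to exactly 1.0, a uniform gray input maps to the same gray value.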

The wavelet transform plays an important role in texture analysis. Let $I$ be an original image; the extraction procedure is as follows. First, the “haar” wavelet is used to construct two decomposition filters, one low-pass and one high-pass. Then, a 2-level two-dimensional wavelet decomposition is applied to $I$ with these filters, yielding six subband images. Note that the decomposition level is an important parameter, and the size of the smallest subimage should not be less than $16\times 16$.^{7} Finally, the energy of each subband image is calculated and used as the wavelet texture feature.
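A self-contained sketch of this comparative feature follows, using an explicit 2-D Haar step rather than a wavelet library. Two assumptions are ours: the unnormalized Haar filter scaling, and defining subband energy as the mean squared coefficient (the paper's exact energy formula is not reproduced here).

```python
import numpy as np

def haar_step(img):
    """One level of 2-D Haar decomposition: (LL, LH, HL, HH) subbands,
    each half the size of the input in both dimensions."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 2.0
    lh = (a - b + c - d) / 2.0
    hl = (a + b - c - d) / 2.0
    hh = (a - b - c + d) / 2.0
    return ll, lh, hl, hh

def wavelet_energies(img, levels=2):
    """Energy (here: mean squared coefficient, an assumption) of the six
    detail subbands of a 2-level Haar decomposition."""
    feats = []
    ll = img.astype(float)
    for _ in range(levels):
        ll, lh, hl, hh = haar_step(ll)
        feats += [np.mean(lh**2), np.mean(hl**2), np.mean(hh**2)]
    return np.array(feats)
```

Two levels with three detail subbands each give the six subband images mentioned above; a constant image has zero energy in every detail subband.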

LBP describes the local structure of image texture by calculating the differences between each pixel and its neighboring pixels. Ojala et al.^{8} improved the original LBP operator and developed a generalized grayscale and rotation invariant operator ${\mathrm{LBP}}_{P,R}^{\mathrm{riu}2}$, which detects “uniform” patterns and is defined by

## (13)

$${\mathrm{LBP}}_{P,R}^{\mathrm{riu}2}=\{\begin{array}{ll}{\sum}_{p=0}^{P-1}s({g}_{p}-{g}_{c})& U({\mathrm{LBP}}_{P,R})\le 2\\ P+1& U({\mathrm{LBP}}_{P,R})>2\end{array},$$

In our study, a circular neighborhood of 8 pixels with radius 1, i.e., the ${\mathrm{LBP}}_{\mathrm{8,1}}^{\mathrm{riu}2}$ operator, is used, and a 59-bin grayscale and rotation invariant LBP histogram is adopted.
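A sketch of the ${\mathrm{LBP}}_{8,1}^{\mathrm{riu}2}$ labeling of Eq. (13) follows. It uses the 8-connected pixel grid instead of interpolated circular sampling, a simplifying assumption; $s(\cdot)$ is the unit step and $U$ counts 0/1 transitions around the circle.

```python
import numpy as np

def lbp_riu2(img):
    """Rotation-invariant uniform LBP (Eq. 13) with P=8, R=1 on the
    interior pixels of img, without subpixel interpolation."""
    c = img[1:-1, 1:-1]
    # 8 neighbours in circular order around each interior pixel
    n = [img[0:-2, 1:-1], img[0:-2, 2:], img[1:-1, 2:], img[2:, 2:],
         img[2:, 1:-1], img[2:, 0:-2], img[1:-1, 0:-2], img[0:-2, 0:-2]]
    s = [(x >= c).astype(int) for x in n]          # s(g_p - g_c)
    # uniformity U: number of 0/1 transitions around the circular pattern
    U = sum(np.abs(s[i] - s[(i + 1) % 8]) for i in range(8))
    codes = sum(s)                                 # sum of s() for uniform patterns
    return np.where(U <= 2, codes, 9)              # P + 1 = 9 labels non-uniform
```

Uniform patterns thus receive labels 0 through 8 (the count of neighbors at or above the center), and all non-uniform patterns collapse into the single label $P+1$.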

GLCM is a widely used texture analysis method that considers the spatial dependencies of gray levels from a statistical perspective. In the work by Haralick et al.,^{5} 14 statistical measures extracted from the GLCM are introduced. Nevertheless, many of them are strongly correlated with each other, and there is no definitive conclusion about which features are more important and discriminative than others; choosing appropriate features among the 14 measures remains an open research question. Haralick et al. selected four features, energy, entropy, correlation, and contrast, as texture features and obtained good results in classification experiments on a satellite imagery data set.^{5} Considering the good performance of these four features on remote sensing images, energy, entropy, correlation, and contrast are used in our study. They are defined by

## (15)

$$\{\begin{array}{l}{f}_{1}=\sum _{i}\sum _{j}{p}_{d,\theta}^{2}(i,j)\\ {f}_{2}=-\sum _{i}\sum _{j}{p}_{d,\theta}(i,j)\mathrm{log}\text{\hspace{0.17em}}{p}_{d,\theta}(i,j)\\ {f}_{3}=\frac{\sum _{i}\sum _{j}ij{p}_{d,\theta}(i,j)-{\mu}_{x}{\mu}_{y}}{{\sigma}_{x}{\sigma}_{y}}\\ {f}_{4}=\sum _{i}\sum _{j}{(i-j)}^{2}{p}_{d,\theta}(i,j)\end{array},$$

where ${p}_{d,\theta}(i,j)$ is the normalized co-occurrence matrix for displacement $d$ and direction $\theta$.

## 2.2.4.

#### Parameters setting

The choice of optimal parameters for Gabor wavelets remains an open question, because different parameters may produce different experimental results even for the same task. In this study, we adopt the default parameters of Ref. 22, as follows. Gabor wavelets with five scales $v\in \{0,1,2,3,4\}$ and eight orientations $u\in \{0,1,2,3,4,5,6,7\}$, the most common configuration, are used because they extract texture features from more scales and orientations. For the remaining parameters, the empirical values $\sigma =2\pi $, ${k}_{\mathrm{max}}=\pi /2$, and $f=\sqrt{2}$ are adopted. In addition, the size of the Gabor window, another important parameter, is set to $128\times 128$ in this study. A total of 80 Gabor texture features is thus obtained.

According to Eq. (5) and the restriction $|v-{v}^{\prime}|\le 1$, we obtain 13 scale groups $(v,{v}^{\prime})\in \{(0,0),(1,1),(2,2),(3,3),(4,4),(0,1),(1,0),(1,2),(2,1),(2,3),(3,2),(3,4),(4,3)\}$ and eight orientations of $u$. The CGWT and CGOT representations therefore have dimensions of 432 ($120+312$) and 392 ($80+312$), respectively.
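The dimension bookkeeping above can be verified with a few lines of arithmetic; the variable names are ours.

```python
# Scale pairs (v, v') with |v - v'| <= 1 and v, v' in {0, ..., 4}
pairs = [(v, vp) for v in range(5) for vp in range(5) if abs(v - vp) <= 1]

n_orient = 8
uni_dim = 3 * 5 * n_orient            # 3 channels x 5 scales x 8 orientations = 120
opp_dim = 3 * len(pairs) * n_orient   # 3 channel pairs x 13 scale groups x 8 = 312
gabor_dim = 2 * 5 * n_orient          # mean + std per (u, v) = 80

cgwt_dim = uni_dim + opp_dim          # 432
cgot_dim = gabor_dim + opp_dim        # 392
```

The 13 scale groups arise as 5 equal pairs plus 8 adjacent pairs, matching the set listed in the text.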

## 2.3.

### Similarity Measure

Similarity measure is an indispensable and important step in image retrieval systems, and different measures may produce very different results even for identical query images. Widely used similarity measures, such as the Minkowski distance, histogram intersection, K-L divergence, and Jeffrey divergence, each tend to have their own scope of application. Accordingly, specific similarity measures are defined for particular features in this study.

Given two images ${I}_{i}$ and ${I}_{j}$ with corresponding CGWT representations ${f}_{i}^{\mathrm{CGWT}}$ and ${f}_{j}^{\mathrm{CGWT}}$, the distance measure for CGWT is defined as in Ref. 19

## (16)

$${d}_{ij}^{\mathrm{CGWT}}=\sum {\left(\frac{{f}_{i}^{\mathrm{CGWT}}-{f}_{j}^{\mathrm{CGWT}}}{{\sigma}^{\mathrm{CGWT}}}\right)}^{2},$$

For the CGOT representation, considering that it is the combination of the Gabor texture feature and the opponent feature, we integrate the distance measure of Eq. (16) with the distance measure for Gabor texture features in Ref. 6 and define a simpler distance measure by

## (17)

$${d}_{ij}^{\mathrm{CGOT}}=\sum \left|\frac{{f}_{i}^{\mathrm{CGOT}}-{f}_{j}^{\mathrm{CGOT}}}{{\sigma}^{\mathrm{CGOT}}}\right|,$$

Note that the similarity measure of Eq. (17) has a similar form to Eq. (16) but a different meaning. Since the Gabor texture and the opponent feature together constitute the CGOT representation, a distance measure that takes both into consideration is appropriate. In this similarity measure, the CGOT representation is treated as a unitary feature, so it is unnecessary to consider each component separately when calculating the standard deviation ${\sigma}^{\mathrm{CGOT}}$.
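The two distance measures can be sketched directly from Eqs. (16) and (17); how $\sigma$ is estimated (e.g., over the feature database) is left to the caller, and the scalar-vs-componentwise distinction follows the text above.

```python
import numpy as np

def d_cgwt(fi, fj, sigma):
    """Eq. (16): sum of squared, sigma-normalized component differences.
    sigma may be a per-component vector, as in Ref. 19."""
    return np.sum(((fi - fj) / sigma) ** 2)

def d_cgot(fi, fj, sigma):
    """Eq. (17): sum of absolute sigma-normalized differences.
    Per the text, sigma can be a single scalar standard deviation
    computed over the whole unitary CGOT feature."""
    return np.sum(np.abs((fi - fj) / sigma))
```

Both measures are zero for identical features and grow with the normalized disagreement; Eq. (17) simply replaces the squares of Eq. (16) with absolute values.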

## 3.

## Experiments and Discussions

## 3.1.

### Data Set

To evaluate the performance of the proposed descriptors, eight land-use/land-cover (LULC) classes from the UC Merced LULC data set are chosen as the retrieval image database. The original LULC data set was manually constructed and consists of 21 image classes; the 100 images in each class are $256\times 256$ tiles cropped from large aerial images of various US regions with a spatial resolution of 30 cm.^{23} The LULC data set has been used in many similar studies^{24}^{,}^{25} and is publicly available to other researchers. Some image patches of the eight LULC classes used in our experiments are shown in Fig. 2. From left to right, they are agricultural, airplane, beach, buildings, chaparral, residential, forest, and harbor.

## 3.2.

### Performance of Proposed Descriptors

Accurate and objective evaluation criteria have also been a hot topic in the CBIR community. Precision, recall, precision-recall curves, and the average normalized modified retrieval rank (ANMRR) are commonly accepted evaluation criteria. However, owing to the semantic gap, evaluating CBIR is not effortless; moreover, different evaluation methods may yield different performance figures even on the same data set.^{26} To avoid such problems, precision and precision–recall curves are chosen as the evaluation methods in this study, because they can be treated as similar evaluations from different perspectives. Precision is the fraction of retrieved images that are correct, and recall is the fraction of ground truth items retrieved for a given result set.^{23}
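The two criteria just defined amount to simple set arithmetic; this sketch (function name ours) computes both for one query's result set.

```python
def precision_recall(retrieved, relevant):
    """Precision: fraction of retrieved items that are relevant.
    Recall: fraction of relevant (ground truth) items retrieved.

    retrieved: iterable of returned image ids for one query.
    relevant:  iterable of ground truth ids for that query's class.
    """
    hits = len(set(retrieved) & set(relevant))
    return hits / len(retrieved), hits / len(relevant)
```

Sweeping the number of returned images and recording both values at each cutoff traces out exactly the precision–recall curves used in Fig. 4.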

Figure 3 shows the performance of the proposed features and the conventional texture features. The last bin of the histogram, labeled “average,” gives the average precision of the corresponding features. The chart indicates that, compared with wavelet texture, the CGOT and CGWT representations perform better on six classes, i.e., airplane, beach, chaparral, residential, forest, and harbor, and worse on the other two classes, i.e., agricultural and buildings. Nevertheless, the two proposed features achieve the highest average precision over all image classes. Meanwhile, CGOT further increases the precision of agricultural, airplane, beach, buildings, residential, and harbor over CGWT, which is particularly obvious for agricultural and harbor owing to the abundant texture information in these image classes.

To demonstrate the superiority of the proposed representations, precision–recall curves for the different features, obtained by varying the number of returned images, are presented in Fig. 4. As the number of returned images increases, the precision of the conventional texture features decreases rapidly, particularly for GLCM and LBP. Among the remaining three features, CGOT clearly achieves the best performance. For the CGWT representation and wavelet texture, a recall of 0.5 can be treated as a marginal value: when recall is less than 0.5, the CGWT representation performs better, and the two perform comparably when recall exceeds 0.5. The experimental results here are consistent with those in Fig. 3, and both validate the effectiveness and good performance of the proposed color texture descriptors.

## 3.3.

### Comparisons of Used Similarity Measures

As mentioned above, an appropriate similarity measure is necessary in CBIR. For the conventional texture features, i.e., GLCM, LBP, and wavelet texture, the ${L}_{2}$ distance is chosen as the similarity measure. For the CGWT representation, the distance measure presented in Ref. 19 is used. For the CGOT representation, the characteristics of the existing distance measures for Gabor texture, unichrome, and opponent features are considered, and a simpler distance measure is defined. Table 1 compares the performance of the CGOT representation using the proposed similarity measure of Eq. (17) with other similarity measures: the ${L}_{1}$ distance, the ${L}_{2}$ distance, Jeffrey divergence,^{27} and the distance measure of Ref. 19.

## Table 1

Comparisons of CGOT using different distance measures. The columns give the precision for 10 to 100 returned images, followed by the average.

| Distance measure | 10 | 20 | 30 | 40 | 50 | 60 | 70 | 80 | 90 | 100 | Average |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Proposed | 0.80 | 0.70 | 0.65 | 0.62 | 0.59 | 0.56 | 0.54 | 0.52 | 0.49 | 0.47 | 0.60 |
| $L_2$ distance | 0.76 | 0.66 | 0.61 | 0.57 | 0.54 | 0.51 | 0.48 | 0.46 | 0.44 | 0.41 | 0.54 |
| $L_1$ distance | 0.79 | 0.69 | 0.64 | 0.60 | 0.57 | 0.54 | 0.52 | 0.50 | 0.47 | 0.45 | 0.58 |
| Jeffrey divergence | 0.80 | 0.70 | 0.63 | 0.59 | 0.56 | 0.53 | 0.50 | 0.48 | 0.46 | 0.43 | 0.57 |
| Distance in Ref. 19 | 0.78 | 0.68 | 0.65 | 0.59 | 0.56 | 0.53 | 0.51 | 0.49 | 0.46 | 0.44 | 0.57 |

For each number of returned images, the proposed similarity measure achieves the highest precision, and its average performance is also the best. Table 1 thus demonstrates that the proposed distance measure is an appropriate and effective similarity measure.

## 3.4.

### Examples of Remote Sensing Image Retrieval

Figure 5 shows a remote sensing image retrieval example using the two proposed descriptors. Figure 5(a) is the query image from the agricultural class, and Figs. 5(b) and 5(c) show the first 30 retrieved images for CGOT and CGWT, respectively. Note that these images are returned in descending order of similarity, so higher-ranked images are more similar to the query image.

According to the retrieval results of the two descriptors, CGOT retrieves more similar images than CGWT. In addition, among the first 12 retrieved images, CGOT returns two irrelevant images, whereas CGWT returns five, which also indicates the better performance of the CGOT descriptor.

## 3.5.

### Discussion

From the preceding remote sensing image retrieval experiments, several conclusions can be drawn.

1. The proposed color texture descriptors, CGWT and CGOT, describe the content of remote sensing images well and perform better than wavelet texture, LBP texture, and GLCM texture, because they take the discriminative information among color bands into consideration.

2. As shown in Fig. 3, CGOT improves on CGWT and achieves the highest average precision over the entire image database, and similar behavior is observed in Fig. 4. These results indicate that Gabor texture has better descriptive power for image texture than the unichrome feature.

3. The similarity measure defined for CGOT is appropriate. This shows that the characteristics of a feature should be taken into consideration when defining its similarity measure, because the measure plays an important role in the performance of the proposed representations.

In this study, all experiments are conducted on aerial remote sensing images from one public image database. However, not all of the selected images have regular texture structure, which affects the performance of the proposed descriptors. In addition, the proposed descriptors are likely to be suitable for hyperspectral image retrieval, because hyperspectral images have high spectral resolution and more discriminative information can be extracted from their bands.

## 4.

## Conclusion

With the rapid development of remote sensing technology, the amount of accessible remote sensing data has been increasing at an incredible rate, which not only provides researchers more choices for various applications, but also brings more challenges. Under these circumstances, CBIR is an effective means of organizing and managing massive remote sensing data.

Traditionally, low-level features, particularly texture features, are widely used in the CBIR community for their special characteristics. Nevertheless, conventional texture features tend to be extracted directly from grayscale images and ignore the important complementary information between color bands.

To exploit this complementary information for remote sensing image retrieval, the CGWT and CGOT representations have been proposed based on the Gabor filter and opponent process theory. Filtered images are first obtained with Gabor filters at five scales and eight orientations, and then the unichrome, opponent, and Gabor texture features are extracted. Finally, the CGWT and CGOT representations are constructed and used for remote sensing image retrieval.

Considering the semantic gap and other difficulties, two related evaluations, precision and precision-recall curves, are chosen to evaluate the performance of all texture features. The results demonstrate that CGWT and CGOT perform better than GLCM, LBP, and wavelet texture, and that CGOT not only improves the performance of some image classes relative to CGWT but also increases the overall precision across all queried remote sensing images. In addition, a similarity measure for CGOT based on two existing distance measures has been defined; compared with some widely used distance measures, the proposed similarity measure shows better performance.

In the future, we will consider the fusion mechanisms of unichrome and opponent features and of Gabor texture and opponent features, as well as the influence of the color space on the proposed descriptors.

## Acknowledgments

The author would like to thank Shawn Newsam for his LULC data set and the anonymous reviewers for their comments and corrections. This work was supported in part by National Science and Technology Specific Projects under Grant No. 2012YQ16018505 and National Natural Science Foundation of China under Grant No. 61172174.

## References

## Biography

**Zhenfeng Shao** received his PhD degree from Wuhan University, China, in 2004. He is now a professor of the State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, China. His research interests are image retrieval, image fusion, and urban remote sensing application.

**Weixun Zhou** received his BS degree from Anhui University of Science and Technology, Anhui, China, in 2012. He is now working toward his master’s degree at the State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, China. His research interests are remote sensing image retrieval and image processing.

**Lei Zhang** received her BS degree from Xinyang Normal University, Henan, China, in 2011. She is currently working toward the PhD degree in State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, China. Her research interests include dimensionality reduction, hyperspectral classification, sparse representation, and pattern recognition in remote sensing images.

**Jihu Hou** received his BS degree from Hubei University, China, in 2012. He is now working toward his master’s degree at the State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, China. His research interests are remote sensing image retrieval, image processing, and GIS applications.