Highly discriminative feature representation is key to remote sensing scene classification. Existing mid-level feature methods perform poorly on this task for two reasons. First, the discriminative power of the features generated by feature coding methods is limited. Second, the semantic information hidden in scene images is not exploited. Together, these shortcomings prevent such methods from achieving better performance. To address these issues, we propose a hierarchical feature coding model with two stacked feature encoding layers. In the first coding layer, semantic information from the convolutional layers of deep models, together with complementary structural and spectral features, is extracted and encoded into bag-of-visual-words (BOVW) histogram features. In the second layer, a Dirichlet-based Gaussian mixture model Fisher kernel transforms the BOVW histograms into more discriminative and effective feature vectors. By feeding the output of the first layer into the second, the feature representation is progressively refined. Finally, the concatenated feature vectors are fed into a support vector machine classifier. Experiments on two public high-resolution remote sensing scene datasets demonstrate that the performance of our hierarchical coding method is comparable to previous state-of-the-art methods, including most multifeature fusion methods and convolutional neural network-based methods.
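The two-layer coding idea can be sketched in a few lines of Python. The snippet below is illustrative only, not the authors' implementation: a k-means codebook produces BOVW histograms (layer 1), a standard GMM Fisher vector stands in for the Dirichlet-based Gaussian mixture Fisher kernel (layer 2), and a linear SVM performs classification. The random descriptors, dimensions, and cluster counts are all placeholders.

# Minimal sketch of a two-layer feature coding pipeline (illustrative only).
# Layer 1: BOVW histograms from local descriptors; Layer 2: a standard GMM
# Fisher vector substituted for the Dirichlet-based variant in the abstract;
# classification with a linear SVM. All sizes and inputs are placeholders.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture
from sklearn.svm import LinearSVC

def bovw_histogram(descriptors, codebook):
    # Layer 1: quantize local descriptors against a visual codebook.
    words = codebook.predict(descriptors)
    hist, _ = np.histogram(words, bins=np.arange(codebook.n_clusters + 1))
    return hist / max(hist.sum(), 1)

def fisher_vector(x, gmm):
    # Layer 2: first-order Fisher vector of a single histogram under a diagonal GMM.
    x = x.reshape(1, -1)
    q = gmm.predict_proba(x)                                   # posterior responsibilities
    diff = (x[:, None, :] - gmm.means_) / np.sqrt(gmm.covariances_)
    fv = (q[..., None] * diff).sum(axis=0) / np.sqrt(gmm.weights_)[:, None]
    return fv.ravel()

# Hypothetical per-image local descriptors and labels (random stand-ins).
rng = np.random.default_rng(0)
train_desc = [rng.normal(size=(200, 64)) for _ in range(20)]
labels = rng.integers(0, 2, size=20)

codebook = KMeans(n_clusters=32, n_init=5, random_state=0).fit(np.vstack(train_desc))
hists = np.array([bovw_histogram(d, codebook) for d in train_desc])

gmm = GaussianMixture(n_components=8, covariance_type="diag", random_state=0).fit(hists)
features = np.array([fisher_vector(h, gmm) for h in hists])

clf = LinearSVC().fit(features, labels)
print("train accuracy:", clf.score(features, labels))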
In this paper, a novel approach to change detection in synthetic aperture radar (SAR) images based on structural similarity (SSIM) and parametric kernel graph cuts is presented. First, SSIM is introduced into change detection and an SSIM-based method for constructing the difference image is proposed. Then, changed and unchanged pixels are identified in the difference image by the parametric kernel graph cuts algorithm. Experimental results on real SAR images demonstrate the effectiveness of the proposed method.
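As a rough illustration of the SSIM-based difference image, the sketch below inverts a local SSIM map computed between two co-registered acquisitions; Otsu thresholding stands in for the parametric kernel graph-cuts segmentation, and the input images are synthetic placeholders rather than real SAR data.

# Minimal sketch of an SSIM-based difference image for change detection
# (illustrative only). Low local similarity is treated as change evidence;
# Otsu thresholding replaces the kernel graph-cuts step of the abstract.
import numpy as np
from skimage.metrics import structural_similarity
from skimage.filters import threshold_otsu

def ssim_difference_image(img_t1, img_t2, win_size=7):
    # Build a difference image from the local SSIM map of two co-registered images.
    img_t1 = img_t1.astype(np.float64)
    img_t2 = img_t2.astype(np.float64)
    data_range = max(img_t1.max(), img_t2.max()) - min(img_t1.min(), img_t2.min())
    _, ssim_map = structural_similarity(img_t1, img_t2, win_size=win_size,
                                        data_range=data_range, full=True)
    return 1.0 - ssim_map                       # high values indicate likely change

# Hypothetical co-registered acquisitions (random stand-ins for real SAR data).
rng = np.random.default_rng(0)
t1 = rng.gamma(2.0, 1.0, size=(128, 128))
t2 = t1.copy()
t2[40:80, 40:80] *= 3.0                          # simulated changed region

di = ssim_difference_image(t1, t2)
change_mask = di > threshold_otsu(di)            # stand-in for kernel graph cuts
print("changed pixels:", int(change_mask.sum()))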
To address the problem of SAR image registration, an image registration method based on the Scale Invariant Feature Transform (SIFT) and Multi-Scale Autoconvolution (MSA) is proposed. After extracting SIFT descriptors and the MSA affine-invariant moments of the region around each keypoint, a feature fusion method based on canonical correlation analysis (CCA) combines them into a new descriptor. Once the control points are coarsely matched, the distance and gray-level correlation around the coarsely matched points are combined to build a similarity matrix, and singular value decomposition (SVD) is employed to achieve precise matching. Finally, the affine transformation parameters are estimated and the images are registered. Experimental results show that the proposed method outperforms the SIFT method and achieves sub-pixel accuracy.
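The descriptor-fusion step can be sketched as follows. In this illustrative snippet, Hu moments of a local patch stand in for the MSA affine-invariant moments (which have no off-the-shelf implementation), scikit-learn's CCA projects the two descriptor views into a common space, and the coarse/precise matching and SVD stages are omitted; nothing here should be read as the authors' actual method.

# Minimal sketch of descriptor fusion around SIFT keypoints (illustrative only).
# Patch Hu moments replace the MSA moments; CCA fuses the two views.
import cv2
import numpy as np
from sklearn.cross_decomposition import CCA

def regional_moments(img, kp, half=16):
    # Stand-in regional descriptor: log-scaled Hu moments of the patch around a keypoint.
    x, y = int(round(kp.pt[0])), int(round(kp.pt[1]))
    patch = img[max(y - half, 0):y + half, max(x - half, 0):x + half]
    hu = cv2.HuMoments(cv2.moments(patch.astype(np.float32))).ravel()
    return np.sign(hu) * np.log1p(np.abs(hu))

def fused_descriptors(img, n_components=4):
    sift = cv2.SIFT_create()
    kps, sift_desc = sift.detectAndCompute(img, None)
    if sift_desc is None or len(kps) <= n_components:
        return None
    moment_desc = np.array([regional_moments(img, kp) for kp in kps])
    cca = CCA(n_components=n_components).fit(sift_desc, moment_desc)
    u, v = cca.transform(sift_desc, moment_desc)
    return np.hstack([u, v])                     # fused descriptor per keypoint

# Hypothetical single-band image standing in for a SAR acquisition.
img = (np.random.default_rng(0).random((256, 256)) * 255).astype(np.uint8)
desc = fused_descriptors(img)
print(None if desc is None else desc.shape)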
Multiresolution-based image fusion has attracted considerable research attention in recent years, with a number of algorithms proposed. In most of these algorithms, however, the parameter configuration is based on experience. This paper proposes an adaptive image fusion algorithm based on the nonsubsampled contourlet transform (NSCT), which adjusts parameters automatically and removes the adverse effects of manual tuning. The algorithm incorporates the structural similarity (SSIM) quality metric into the NSCT fusion framework: the SSIM value is computed to assess the quality of the fused image and is then fed back to the fusion algorithm, directing the adjustment of parameters (decomposition level and decomposition direction flag) toward a better fusion. Based on cross entropy, a local cross entropy (LCE) measure is constructed and used to select the optimal information source for the fused coefficients at each scale and direction. Experimental results show that the proposed method achieves the best fusion among the three compared methods, judged by both objective metrics and visual inspection, and exhibits robustness against varying noise.
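A minimal sketch of the SSIM-feedback loop is given below. Since no standard Python NSCT implementation is assumed, a PyWavelets decomposition stands in for the NSCT, and a max-absolute rule replaces the local-cross-entropy coefficient selection; the sketch only illustrates how the SSIM score can drive the choice of decomposition level, not the paper's full algorithm.

# Minimal sketch of SSIM-driven parameter selection (illustrative only).
# A wavelet decomposition substitutes for the NSCT; the decomposition level
# is chosen by feeding the SSIM of each candidate fusion back into the loop.
import numpy as np
import pywt
from skimage.metrics import structural_similarity

def fuse_at_level(img_a, img_b, level, wavelet="db2"):
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]                       # average the approximations
    for da, db in zip(ca[1:], cb[1:]):                    # per-level detail tuples
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))      # max-abs detail rule
    out = pywt.waverec2(fused, wavelet)
    return out[:img_a.shape[0], :img_a.shape[1]]

def adaptive_fuse(img_a, img_b, max_level=4):
    best, best_score = None, -np.inf
    rng_val = max(img_a.max(), img_b.max()) - min(img_a.min(), img_b.min())
    for level in range(1, max_level + 1):
        fused = fuse_at_level(img_a, img_b, level)
        score = 0.5 * (structural_similarity(fused, img_a, data_range=rng_val) +
                       structural_similarity(fused, img_b, data_range=rng_val))
        if score > best_score:                             # SSIM feedback picks the level
            best, best_score = fused, score
    return best, best_score

# Hypothetical source images (e.g., two noisy views of the same scene).
rng = np.random.default_rng(0)
base = rng.random((128, 128))
a = base + 0.05 * rng.normal(size=base.shape)
b = base + 0.05 * rng.normal(size=base.shape)
fused, score = adaptive_fuse(a, b)
print("chosen fusion SSIM score:", round(float(score), 3))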