In this study, a technique for multifractal classification (MC) of synthetic aperture radar (SAR) images of ice-covered sea areas is proposed. This technique is based on the use of SAR image local Hölder exponents (LHEs) and coarse Hölder exponents (CHEs) calculated for sliding windows of different sizes. Hölder exponents are effective and easily computable image texture descriptors that characterize the degree of local irregularity of image functions. The main steps of the presented SAR image classification technique are as follows: Sentinel-1 SAR image preprocessing (application of an orbit file, radiometric calibration, speckle filtering, terrain correction and conversion to dB); extraction of local and coarse Hölder exponents from HH- and HV-polarized Sentinel-1 SAR images; stacking of the local and coarse Hölder exponents into high-dimensional feature vectors; and classification of the formed feature vectors by a suitable classifier. Experimental testing of the proposed technique was carried out on several regions of Sentinel-1B SAR images showing ice-covered areas of the Kara Sea. The first step of the technique was implemented with the SNAP toolbox, and the next three steps were implemented using our own MATLAB application (https://github.com/UchaevD/GMAToolbox). SAR image classification results were compared with ice charts of the U.S. National Ice Center (NIC), which contain weekly information on sea ice concentration and ice thickness. This comparison established that Kara Sea areas containing widespread types of floating ice can be successfully separated by MC of Sentinel-1B SAR images, with overall and average classification accuracies of not less than 75%. The results of the study suggest that MC of SAR images of ice-covered sea areas can be used to automate the generation of daily ice charts for various ice-covered sea areas in the Arctic and Antarctic.
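The local Hölder exponent at a pixel can be estimated as the slope of a log-log regression of a window-based measure against window size. The following Python sketch illustrates the idea under simple assumptions (window sums as the underlying measure, a per-pixel least-squares slope); the `radii` parameter and implementation details are illustrative and may differ from the authors' GMAToolbox MATLAB code.

```python
import numpy as np

def local_holder_exponents(img, radii=(1, 2, 3, 4)):
    """Estimate per-pixel local Hoelder exponents as the least-squares
    slope of log(window-sum measure) versus log(window size)."""
    img = np.asarray(img, dtype=float) + 1e-12   # avoid log(0)
    h, w = img.shape
    log_r = np.log(2.0 * np.asarray(radii) + 1.0)
    log_mu = np.empty((len(radii), h, w))
    for k, r in enumerate(radii):
        padded = np.pad(img, r, mode="reflect")
        # integral image gives every (2r+1)x(2r+1) window sum in O(1)
        ii = np.pad(padded, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
        n = 2 * r + 1                            # window side length
        s = ii[n:, n:] - ii[:-n, n:] - ii[n:, :-n] + ii[:-n, :-n]
        log_mu[k] = np.log(s)
    # least-squares slope of log(mu) against log(window size), per pixel
    lr = log_r - log_r.mean()
    slope = np.tensordot(lr, log_mu - log_mu.mean(axis=0), axes=1)
    return slope / (lr ** 2).sum()
```

For a smooth, nearly constant region the window sum scales with the window area, so the estimated exponent approaches 2; textured (irregular) regions deviate from this value, which is what makes the exponent a texture descriptor.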
In this study, we present spatial feature profiles that can be used in addition to spectral profiles for semisupervised small-sample-size (SSS) classification of hyperspectral images (HSIs). These profiles are obtained by combining extended multi-attribute profiles (EMAPs) with the recently proposed Chebyshev moment multifractal profiles (CMMPs). To demonstrate the SSS classification capabilities of the introduced feature profiles, several experiments were performed on two test HSIs. In these experiments, we used a graph-based ensemble learning method for semisupervised HSI classification and a small number of labeled samples for training. The experiments demonstrate that the proposed feature profiles provide good classification performance in terms of overall accuracy (OA), average accuracy (AA) and the Kappa statistic. We also compared the classification results obtained using EMAPs and CMMPs together with those obtained using EMAPs alone, CMMPs alone, and other spatial feature sets. It was established that classification based on CMMPs and EMAPs shows clear improvements, especially when the number of labeled training samples is extremely small. In the final part of the study, the HSI classification results obtained using the proposed feature profiles were compared with classification maps produced by deep learning methods adapted for small training samples. This comparison showed that semisupervised classification with EMAPs and CMMPs achieves higher values of the OA, AA and Kappa coefficients. Moreover, in contrast to the deep learning methods, the classification procedure based on the proposed feature profiles and graph-based ensemble learning is not time-consuming.
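As a minimal illustration of the semisupervised setting with very few labeled samples, the sketch below uses scikit-learn's `LabelSpreading` as a generic stand-in for the paper's graph-based ensemble learning method, and synthetic two-cluster data in place of stacked EMAP+CMMP feature vectors; all names and parameters here are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.semi_supervised import LabelSpreading

# Synthetic stand-in for stacked spatial-spectral feature vectors.
X, y = make_blobs(n_samples=200, centers=2, cluster_std=0.5, random_state=0)

# Keep only three labeled samples per class; -1 marks unlabeled data.
y_train = np.full_like(y, -1)
for c in (0, 1):
    y_train[np.where(y == c)[0][:3]] = c

# Propagate the few labels over a k-nearest-neighbor graph of the samples.
model = LabelSpreading(kernel="knn", n_neighbors=7)
model.fit(X, y_train)
oa = (model.transduction_ == y).mean()   # overall accuracy on all samples
```

The graph kernel, not the handful of labels, carries most of the work here, which is why such methods degrade gracefully as the labeled set shrinks.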
Classification of hyperspectral images is an important step of hyperspectral image interpretation. Different studies demonstrate that spatial features can provide complementary information that increases the accuracy of hyperspectral image classification. In this study, we propose a method of spectral-spatial classification of hyperspectral images that uses specific multifractal features as the spatial features. The proposed method consists of the following steps. First, informative multifractal features are extracted from the first few principal components of the spectral features. To construct the multifractal features, various 1D and 2D multifractal characteristics, including our earlier-introduced 2D multifractal characteristics of global scaling exponents, are calculated in windows centered on each element of the principal-component images using a generalized local-global multifractal image analysis. After that, the obtained multifractal features are stacked with the spectral features into high-dimensional feature vectors. Finally, the resulting high-dimensional vectors of spectral and multifractal features are classified by a support vector machine classifier. The multifractal characteristics used to construct the multifractal features have many advantages: they provide good textural separability of image objects, are invariant to image scaling and rotation, and are insensitive to image noise. The experiments performed on several widely known test hyperspectral images demonstrate that the proposed method outperforms competitive methods of spectral-spatial classification of hyperspectral images in terms of overall accuracy and the kappa statistic.
In addition, it is shown that the introduced classification method can outperform some deep learning methods of hyperspectral image classification, which have attracted great interest in recent years. In particular, it was established that the proposed method can achieve better classification results than deep learning methods when small training samples are used. In the future, we will focus on developing multifractal-feature-based methods for object-oriented classification of hyperspectral images. The study has been supported by the Ministry of Education and Science of the Russian Federation (Project No. МК-3477.2019.5) and by the Russian Foundation for Basic Research (Project No. 19-05-00330 А).
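The pipeline described above (principal components, window-based spatial features centered on each element, stacking with spectral features, SVM classification) can be sketched in Python as follows. A local-variance texture measure stands in for the paper's multifractal characteristics, and all function names and parameters are illustrative.

```python
import numpy as np
from scipy.ndimage import generic_filter
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def spectral_spatial_features(hsi, n_pc=3, win=7):
    """Stack the spectral bands with spatial features computed in windows
    centered on each element of the first principal-component images.
    Local variance is a stand-in for the multifractal characteristics."""
    h, w, b = hsi.shape
    pcs = PCA(n_components=n_pc).fit_transform(hsi.reshape(-1, b))
    pcs = pcs.reshape(h, w, n_pc)
    spatial = np.stack(
        [generic_filter(pcs[..., i], np.var, size=win) for i in range(n_pc)],
        axis=-1)
    return np.concatenate([hsi, spatial], axis=-1).reshape(-1, b + n_pc)

def classify(features, labels, train_mask):
    """Train an SVM on the labeled subset and predict all pixels."""
    svm = SVC(kernel="rbf", gamma="scale")
    svm.fit(features[train_mask], labels[train_mask])
    return svm.predict(features)
```

The stacking step is deliberately naive (plain concatenation); in practice the spectral and spatial blocks are often rescaled before being fed to the SVM.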
Chebyshev multifractal signatures for characterizing the multifractal nature of image textures of natural objects are proposed. These signatures are obtained by a generalized multifractal formalism (GMF) with Chebyshev polynomial (CP) kernels and can be considered alternatives to traditional multifractal spectra. The paper also presents properties of the introduced multifractal signatures: in particular, it is shown that Chebyshev multifractal signatures, like traditional multifractal spectra, are invariant to image scaling. To illustrate the recognition capabilities of the multifractal signatures, their application to multifractal interpretation of synthetic-aperture radar (SAR) images of ice-covered sea areas is shown. It is established that, using parameters of multifractal signature approximations calculated for Sentinel-1 SAR image regions, we can separate sea areas with very close ice, close ice and very open ice. The obtained results suggest that the introduced multifractal signatures can be used at the preliminary stage of object-oriented classification of SAR or other images to assess the textural separability of image objects.
This paper introduces a specific type of aerospace image interpretation (AII), called multifractal interpretation (MI), which provides the identification and description of natural objects on aerospace images (AIs) through their multifractal analysis (MA). The paper also presents a generalization of the standard (moment-based) multifractal formalism (SMF), which can be considered a theoretical basis of MI. This generalized multifractal formalism (GMF) is based on kernels constructed from discrete orthonormal polynomials (OPs). It is shown that the proposed GMF, in contrast to the SMF, can be used to obtain one-dimensional (1D) spectra of global scaling exponents, spectra of local scaling exponents and newly introduced two-dimensional (2D) spectra of global scaling exponents. The last part of the paper is devoted to the proposed MI methodology, which includes MA based on the GMF as its main block.
We present a technique for automated restoration of digital images obtained from faded photographic prints. The proposed defading technique uses our earlier-proposed image contrast enhancement algorithm based on a contrast measure of images in the Chebyshev moment transform domain. The obtained experimental results demonstrate some advantages of the technique compared to other widely used image enhancement methods.
A new algorithm for image contrast enhancement in the Chebyshev moment transform (CMT) domain is introduced. This algorithm is based on a contrast measure defined as the ratio of high-frequency to zero-frequency content in the bands of the CMT matrix. The algorithm makes it possible to enhance a large number of high-spatial-frequency coefficients, which are responsible for image details, without severely degrading low-frequency contributions. To enhance high-frequency Chebyshev coefficients, we use a multifractal spectrum of scaling exponents (SEs) for Chebyshev wavelet moment (CWM) magnitudes, where CWMs are a multiscale realization of Chebyshev moments (CMs). This multifractal spectrum is well suited to extracting meaningful structures from images of natural scenes, because such images have a multifractal character. Experiments with test images show some advantages of the proposed algorithm compared to other widely used image enhancement algorithms; in particular, the algorithm highlights image details very well during contrast enhancement.
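A minimal reading of such a contrast measure can be sketched in Python: build an orthonormal discrete Chebyshev (Tchebichef) basis, take the 2D moment transform of the image, and compute, for a given band (here taken as an anti-diagonal of the moment matrix), the ratio of the band's energy to the zero-frequency coefficient. The QR construction of the basis and the exact band and contrast definitions below are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def tchebichef_basis(N):
    """Orthonormal discrete Chebyshev polynomial values on {0,...,N-1},
    built by QR-orthonormalizing the monomial Vandermonde matrix (for
    moderate N this matches the standard recurrence up to sign)."""
    V = np.vander(np.arange(N, dtype=float), N, increasing=True)
    Q, _ = np.linalg.qr(V)
    return Q.T                      # row n holds the degree-n polynomial

def cmt(img):
    """2D Chebyshev moment transform T = B_rows @ img @ B_cols^T."""
    B = tchebichef_basis(img.shape[0])
    C = tchebichef_basis(img.shape[1])
    return B @ img @ C.T

def band_contrast(T, k):
    """Ratio of the energy in band k (the k-th anti-diagonal of the
    moment matrix) to the magnitude of the zero-frequency coefficient."""
    band = np.array([T[i, k - i] for i in range(k + 1)])
    return np.sqrt((band ** 2).sum()) / abs(T[0, 0])
```

Because the basis is orthonormal, the transform preserves image energy, and a flat image concentrates all of it in the zero-frequency coefficient, giving near-zero band contrasts.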
This paper introduces a new family of moments, namely orthogonal wavelet moments (OWMs), which are an orthogonal realization of wavelet moments (WMs). In contrast to WMs with a nonorthogonal kernel function, these moments can be used for multiresolution image representation and image reconstruction. The paper also introduces multifractal invariants (MIs) of OWMs, which can be used instead of the OWMs themselves. Reconstruction tests performed with noise-free and noisy images demonstrate that MIs of OWMs can also be used for image smoothing, sharpening and denoising. It is established that the reconstruction quality for MIs of OWMs can be better than that for the corresponding orthogonal moments (OMs), and reduces to the reconstruction quality of the OMs if the zero scale level is used.