KEYWORDS: Lithium, Image segmentation, Signal to noise ratio, Optimization (mathematics), Data modeling, Hyperspectral imaging, Thermography, Infrared radiation, Associative arrays, Chemical elements
A method called spatial subspace clustering (SpatSC) is proposed for the hyperspectral data segmentation problem, focusing on hyperspectral data taken from a drill hole, which can be seen as one-dimensional image data compared with hyperspectral/multispectral image data. Addressing this problem has several practical uses, such as improving the interpretability of the data and, especially, obtaining a better understanding of the mineralogy. SpatSC is a combination of subspace learning and the fused least absolute shrinkage and selection operator (LASSO). As a result, it is able to produce spatially smooth clusters. From this point of view, it can be simply interpreted as a spatial-information-guided subspace learning algorithm. SpatSC has a flexible structure that accommodates the cases with and without a library of pure spectra. It can be further extended, for example, by using different error structures or by including a rank operator. We test this method on both simulated data and real-world hyperspectral data. SpatSC produces stable and continuous segments, which are more interpretable than those obtained from other state-of-the-art subspace learning algorithms.
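A minimal sketch of the core idea follows, assuming a self-expressive subspace model with an l1 penalty plus a fused-lasso penalty that ties neighbouring samples along the hole together. This is not the authors' SpatSC formulation or solver; the penalty weights lam1/lam2, the CVXPY-based solution, and the toy data are all illustrative assumptions.

```python
# Simplified sketch of spatial subspace clustering (SpatSC-style); this is not
# the authors' exact formulation or solver.  Each column of X is one spectrum,
# with columns ordered by depth down the drill hole.
import numpy as np
import cvxpy as cp
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)
n_bands, n_samples, n_clusters = 30, 60, 3

# Toy data: three low-dimensional subspaces occupying contiguous depth ranges.
X = np.hstack([rng.normal(size=(n_bands, 2)) @ rng.normal(size=(2, n_samples // 3))
               for _ in range(n_clusters)])

lam1, lam2 = 0.1, 0.5                       # sparsity / smoothness weights (assumed)
Z = cp.Variable((n_samples, n_samples))
fit = 0.5 * cp.sum_squares(X - X @ Z)       # self-expressive data term
sparsity = lam1 * cp.sum(cp.abs(Z))         # l1 sparsity on the coefficients
fused = lam2 * cp.sum(cp.abs(Z[:, 1:] - Z[:, :-1]))  # fused-lasso term: adjacent
                                                     # samples get similar codes
cp.Problem(cp.Minimize(fit + sparsity + fused), [cp.diag(Z) == 0]).solve()

# Symmetric affinity from the learned coefficients, then spectral clustering;
# spatially contiguous, smooth segments are expected along the depth axis.
W = np.abs(Z.value) + np.abs(Z.value).T
labels = SpectralClustering(n_clusters=n_clusters,
                            affinity="precomputed").fit_predict(W)
print(labels)
```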
This paper presents a novel algorithm for selecting random features via compressed sensing to improve the performance of Normalized Cuts in image segmentation. Normalized Cuts is a clustering algorithm that has been widely applied to segmenting images, using features such as brightness, intervening contours and Gabor filter responses. Some drawbacks of Normalized Cuts are that computation times and memory usage can be excessive, and the obtained segmentations are often poor. This paper addresses the need to improve the processing time of Normalized Cuts while improving the segmentations. A significant proportion of the time in calculating Normalized Cuts is spent computing an affinity matrix. A new algorithm has been developed that selects random features using compressed sensing techniques to reduce the computation needed for the affinity matrix. The new algorithm, when compared to the standard implementation of Normalized Cuts for segmenting images from the BSDS500, produces better segmentations in significantly less time.
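The abstract does not spell out the compressed-sensing selection scheme, so the sketch below stands in a Gaussian random projection (a standard compressed-sensing measurement matrix) for it: per-pixel features are projected to a much lower dimension before the affinity matrix is built, and a normalized-cuts-style spectral clustering is then run on the cheaper affinity. The toy image, feature choice, and rbf bandwidth are assumptions.

```python
# Sketch: compress per-pixel features with a random (compressed-sensing style)
# Gaussian projection before building the Normalized-Cuts affinity matrix.
# This illustrates the general idea, not the paper's exact algorithm.
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view
from sklearn.random_projection import GaussianRandomProjection
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(1)
h, w = 32, 32
# Toy image: two regions of different brightness plus noise.
img = np.zeros((h, w))
img[:, w // 2:] = 1.0
img += 0.2 * rng.normal(size=img.shape)

# Per-pixel features: 5x5 brightness patches (25 dims); a real system would
# stack Gabor responses, intervening-contour cues, etc.
pad = np.pad(img, 2, mode="reflect")
patches = sliding_window_view(pad, (5, 5)).reshape(h * w, 25)

# Random projection of the feature vectors (25 -> 5 dims) so the pairwise
# affinity computation works on far fewer numbers per pixel.
reduced = GaussianRandomProjection(n_components=5,
                                   random_state=1).fit_transform(patches)

# Affinity on the reduced features, then a normalized-cuts-style spectral
# clustering on the precomputed affinity matrix.
W = rbf_kernel(reduced, gamma=1.0)
labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=1).fit_predict(W)
print(labels.reshape(h, w))
```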
While developing high-resolution payloads, it is also necessary to make full use of existing spaceborne/airborne payload resources through super resolution (SR). SR is a technique for restoring a high-spatial-resolution image from a series of low-resolution images of the same scene captured at different times over a short period. Common SR methods, however, may fail to overcome the irregular local warps and transformations in low-resolution remote sensing images caused by platform vibration and air turbulence. It is also difficult to choose a generalized prior for remote sensing images in Maximum a Posteriori based SR methods. In this paper, irregular local warps and transformations within low-resolution remote sensing images are corrected by incorporating an elastic registration method. Moreover, a combined sparse representation is proposed for the remote sensing SR problem. Experimental results show that the new method constructs a much better high-resolution image than other common methods. This method is promising for real applications that restore high-resolution images from current low-resolution on-orbit payloads.
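As a rough illustration of the sparse-representation half of such a pipeline (the elastic registration stage and the authors' combined formulation are not shown), the sketch below performs patch-based SR with a coupled pair of low/high-resolution dictionaries: each LR patch is sparsely coded over the LR dictionary, and the same code reconstructs the HR patch from the HR dictionary. The dictionaries here are random placeholders; in practice they would be learned jointly from registered training pairs.

```python
# Sketch of patch-based, sparse-representation super resolution with coupled
# LR/HR dictionaries; the elastic-registration stage and the paper's combined
# formulation are not shown.  Dictionaries are random placeholders purely to
# make the example runnable.
import numpy as np
from sklearn.linear_model import orthogonal_mp

rng = np.random.default_rng(2)
lr_patch, hr_patch, n_atoms = 3, 6, 128      # 3x3 LR patches -> 6x6 HR patches

# Coupled dictionaries: column k of D_lr and D_hr describes the same atom at
# the two resolutions.  In a real system they are learned jointly from
# registered LR/HR training pairs.
D_lr = rng.normal(size=(lr_patch ** 2, n_atoms))
D_lr /= np.linalg.norm(D_lr, axis=0)
D_hr = rng.normal(size=(hr_patch ** 2, n_atoms))

def super_resolve_patch(y_lr, n_nonzero=5):
    """Sparse-code an LR patch over D_lr, reuse the same code with D_hr."""
    alpha = orthogonal_mp(D_lr, y_lr, n_nonzero_coefs=n_nonzero)
    return D_hr @ alpha

# Example: one observed LR patch (a flattened 3x3 block of the LR input image).
y_lr = rng.normal(size=lr_patch ** 2)
x_hr = super_resolve_patch(y_lr).reshape(hr_patch, hr_patch)
print(x_hr.shape)        # (6, 6) high-resolution patch estimate
```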
The so-called robust L1 PCA was introduced in our recent work [1] based on the L1 noise assumption. Due to the heavy-tailed characteristics of the L1 (Laplacian) noise distribution, the proposed model has been shown to be much more robust to data outliers. In this paper, we further demonstrate how the learned robust L1 PCA model can be used to denoise image data.
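The model of [1] is not restated in this abstract, so the following sketch only illustrates the general idea with a generic alternating least-absolute-deviations factorization: fit a rank-r factorization X ≈ UV under an L1 reconstruction loss, which is robust to outlier pixels, and take UV as the denoised data. The rank, iteration count, CVXPY-based subproblem solves, and toy data are assumptions.

```python
# Generic L1-loss (least-absolute-deviations) low-rank factorization as a
# stand-in for the robust L1 PCA of [1]; each alternating subproblem is convex
# and solved with CVXPY.  Rank, iterations and the toy data are assumptions.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(3)
m, n, rank = 40, 40, 3

# Toy "image" data: low-rank structure plus sparse, large outliers.
clean = rng.normal(size=(m, rank)) @ rng.normal(size=(rank, n))
noisy = clean + 10.0 * (rng.random((m, n)) < 0.05) * rng.normal(size=(m, n))

U = rng.normal(size=(m, rank))
V = None
for _ in range(5):                               # a few alternating passes
    Vv = cp.Variable((rank, n))
    cp.Problem(cp.Minimize(cp.sum(cp.abs(noisy - U @ Vv)))).solve()
    V = Vv.value
    Uv = cp.Variable((m, rank))
    cp.Problem(cp.Minimize(cp.sum(cp.abs(noisy - Uv @ V)))).solve()
    U = Uv.value

denoised = U @ V                                 # L1-robust low-rank estimate
print(np.linalg.norm(denoised - clean) / np.linalg.norm(clean))
```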
KEYWORDS: Data modeling, Pattern recognition, Image classification, Data conversion, 3D modeling, Modeling, Performance modeling, Optimization (mathematics), Wavelets, Intelligence systems
Kernel-based methods and Support Vector Machines (SVMs) [Vapnik 1998; Smola 1998] in particular are a class of learning methods that can be used for non-linear regression estimation. They have often achieved state-of-the-art performance in many areas where they have been applied. The class of functions they choose from is determined by a kernel function, and the form of this function is of central importance to kernel-based methods. In this topic, I will give a simple description of the core concepts of kernel-based methods and SVMs, along with some fresh ideas for creating new kernels with multiscale and interpretability characteristics.
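As one concrete illustration of the multiscale-kernel idea (the specific constructions discussed in this topic are not reproduced here), a valid kernel can be formed as a sum of RBF kernels at several length scales, since a sum of positive-semidefinite kernels remains positive semidefinite; scikit-learn's SVC accepts such a kernel as a callable. The scales and the toy data set below are assumptions.

```python
# Illustrative multiscale kernel: a sum of RBF kernels at several bandwidths
# is still a valid (positive-semidefinite) kernel, so it can be passed to an
# SVM directly.  The scales and the toy data set are assumptions.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def multiscale_rbf(X, Y, gammas=(0.5, 2.0, 8.0)):
    """Sum of RBF kernels over several length scales (one per gamma)."""
    return sum(rbf_kernel(X, Y, gamma=g) for g in gammas)

X, y = make_moons(n_samples=400, noise=0.2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = SVC(kernel=multiscale_rbf, C=1.0).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```

Each scale contributes its own notion of similarity, which is one simple way to read the "multiscale" characterization mentioned above.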