Data reduction via segmentation for hyperspectral imagery
27 April 2009
Abstract
A single hyperspectral image can easily be hundreds of megabytes or even several gigabytes in size. For spectral processing this is not an issue, as each pixel is processed independently. Additionally, many standard image processing algorithms can be readily adapted to process a few lines at a time (possibly with multiple passes over a file). Thus, clustering algorithms like k-means and dimension reduction methods such as PCA have become staples of hyperspectral processing. More recently, however, newer algorithms such as locally linear embedding (LLE) and non-local means have been shown to offer substantial performance increases, at least in theory. Incremental processing is not feasible for many of these newer algorithms, so large amounts of processing power and memory are required to process large images. For this reason, mechanisms that efficiently reduce the size of large hyperspectral images while preserving the important information are desired. In this paper, we investigate a segmentation algorithm as a means to this end. We show that the amount of data that needs to be processed can be reduced by over an order of magnitude while maintaining the spectral purity of the data. This is illustrated by comparing classification accuracy before and after segmentation.
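The idea in the abstract can be sketched as follows: cluster the pixel spectra into segments and keep only one mean spectrum per segment, so downstream algorithms operate on the segment spectra rather than on every pixel. This is a minimal illustration only; the abstract does not specify the paper's segmentation algorithm, so plain k-means (Lloyd's algorithm) stands in here, and the cube, band count, and segment count are all made-up values.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, B = 32, 32, 50            # rows, cols, spectral bands (illustrative)
cube = rng.random((H, W, B))    # synthetic stand-in for a hyperspectral image

pixels = cube.reshape(-1, B)    # one spectrum per pixel
k = 32                          # number of segments to retain (illustrative)

# Minimal k-means on the pixel spectra as a stand-in segmentation.
centers = pixels[rng.choice(len(pixels), k, replace=False)].copy()
for _ in range(10):
    # assign each pixel to its nearest segment center
    d = ((pixels[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    labels = d.argmin(axis=1)
    # recompute each center as the mean spectrum of its segment
    for j in range(k):
        members = pixels[labels == j]
        if len(members):
            centers[j] = members.mean(axis=0)

# Reduced representation: one mean spectrum per segment plus a label map.
reduced = centers
print(f"{len(pixels)} spectra -> {len(reduced)} "
      f"(x{len(pixels) / len(reduced):.0f} reduction)")
```

With 1,024 pixels mapped to 32 segment spectra, the reduction is 32x, consistent with the "over an order of magnitude" claim; spectral algorithms that are quadratic or worse in the number of samples then become tractable on the reduced set.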
© (2009) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
Thomas R. Braun, Alexey Castrodad, "Data reduction via segmentation for hyperspectral imagery", Proc. SPIE 7334, Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XV, 73340Y (27 April 2009); https://doi.org/10.1117/12.809522
Proceedings paper, 12 pages.

