A single hyperspectral image can easily be hundreds of megabytes or even several gigabytes in size. For spectral
processing, this is not an issue, since each pixel is processed independently. Additionally, many standard image
processing algorithms can be readily adapted to process a few lines at a time (possibly with multiple passes
over a file).
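As a concrete illustration of this line-at-a-time pattern, the sketch below accumulates a mean spectrum over a
cube stored on disk while holding only a small block of lines in memory. The file name, cube dimensions, dtype,
and band-interleaved-by-pixel layout are assumptions made for illustration, not details of any particular dataset.

    import numpy as np

    # Hypothetical cube: shape (lines, samples, bands), BIP interleave.
    LINES, SAMPLES, BANDS = 2048, 1024, 224
    cube = np.memmap("scene.raw", dtype=np.uint16, mode="r",
                     shape=(LINES, SAMPLES, BANDS))

    CHUNK = 16                      # image lines held in memory at once
    sum_spectrum = np.zeros(BANDS, dtype=np.float64)

    for start in range(0, LINES, CHUNK):
        block = np.asarray(cube[start:start + CHUNK], dtype=np.float64)
        # Per-pixel (spectral) operations touch only this block, so memory
        # use is bounded by CHUNK * SAMPLES * BANDS values.
        sum_spectrum += block.reshape(-1, BANDS).sum(axis=0)

    mean_spectrum = sum_spectrum / (LINES * SAMPLES)
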
Thus, clustering algorithms like k-means and dimension reduction methods such as PCA have
also become staples of hyperspectral processing. More recently, new algorithms such as locally linear
embedding (LLE) and non-local means have been shown to offer substantial performance gains, at least in
theory. Many of these newer algorithms, however, do not lend themselves to incremental processing, and they
require large amounts of processing power and memory when applied to large images. For this reason, mechanisms to efficiently
reduce the size of large hyperspectral images while maintaining important information are desired. In this paper,
we investigate a segmentation algorithm as a means to this end. We show that the amount of data that needs
to be processed can be reduced by more than an order of magnitude while maintaining the spectral purity of the
data, and we illustrate this by comparing classification accuracy before and after segmentation.
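To make the data-reduction idea concrete, the sketch below replaces every segment by its mean spectrum, so
that an expensive method (LLE, non-local means, or a classifier) can be run on segment means rather than on
every pixel. This is a minimal sketch of the general idea only, not the segmentation algorithm studied in this
paper; the labels array is assumed to come from some prior segmentation step.

    import numpy as np

    def segment_means(cube, labels):
        # cube:   (lines, samples, bands) hyperspectral array
        # labels: (lines, samples) integer segment ids in 0..n_seg-1,
        #         assumed produced by a prior segmentation step
        pixels = cube.reshape(-1, cube.shape[-1]).astype(np.float64)
        flat = labels.ravel()
        n_seg = int(flat.max()) + 1
        sums = np.zeros((n_seg, pixels.shape[1]))
        np.add.at(sums, flat, pixels)          # unbuffered per-segment sums
        counts = np.bincount(flat, minlength=n_seg).astype(np.float64)
        return sums / counts[:, None]          # (n_seg, bands) mean spectra

    # e.g. reduced = segment_means(cube, labels); with a few thousand segments
    # standing in for millions of pixels, the reduction easily exceeds 10x.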