Hyperspectral imaging involves the sensing of a large amount of spatial information across a large number of adjacent wavelengths. Typically, hyperspectral images can be represented as a three-dimensional data cube. The collected data cube is too large to be transmitted in full from the satellite/airborne platform to the ground station. Compressive sensing (CS) is an emerging technique that directly acquires the compressed signal instead of the full data set. This reduces the amount of data that needs to be measured, transmitted, and stored in the first place. In this paper, a comparison of a CS method implemented on ARM processors and on a GPU is conducted. This study takes into account the accuracy, the performance, and the power consumption of both implementations. The 256-core GPU of a Jetson TX2 board, the dual-core ARM Cortex-A9 of a Zynq-7000 SoC FPGA, and the quad-core ARM Cortex-A53 of a Zynq UltraScale SoC FPGA are the target platforms used for experimental validation. The obtained results indicate that the embedded GPU is faster but consumes more power. Therefore, the most appropriate platform depends on the performance and power constraints of the project.
Hyperspectral imaging instruments allow data collection in hundreds of spectral bands for the same area on the surface of the Earth. The resulting multidimensional data cube typically comprises several GBs per flight. Due to the extremely large volumes of data collected by imaging spectrometers, hyperspectral data compression, dimensionality reduction, and Compressive Sensing (CS) techniques have received considerable interest in recent years. These data are usually acquired by a satellite or an airborne instrument and sent to a ground station on Earth for subsequent processing. Usually the bandwidth of the connection between the satellite/airborne platform and the ground station is reduced, which limits the amount of data that can be transmitted. As a result, there is a clear need for (either lossless or lossy) hyperspectral data compression techniques that can be applied on board the imaging instrument.

This paper presents a study of the power consumption, processing time, and accuracy of a parallel implementation of a spectral compressive acquisition method on a Jetson TX2 platform, which is well suited to perform vector operations such as dot products. This implementation exploits the architecture at a low level, using shared memory and coalesced memory accesses. The conducted experiments demonstrate the applicability, in terms of accuracy, processing time, and power consumption, of these methods for onboard processing. The results show that by using this low-power GPU it is possible to obtain real-time performance with a very limited power requirement.
Hyperspectral imaging instruments measure hundreds of spectral bands (at different wavelength channels) for the same area of the surface of the Earth. Typically the data cube collected by these sensors comprises several GBs per flight, which has attracted attention to on-board compression techniques. These compression techniques are typically expensive from the computational point of view. Due to this fact, a number of Compressive Sensing and Random Projection techniques have arisen as an alternative to reduce the signal size on board the sensor. The measurement process of these techniques usually consists of performing dot products between the signal and random vectors. The Compressive Sensing process is usually performed directly in the optical system; in this paper, however, we propose to perform the random projection measurement process on a low-power Graphics Processing Unit. The experiments are conducted on a Jetson TX1 board, which is well suited to perform vector operations such as dot products. These experiments demonstrate the applicability, in terms of accuracy and processing time, of these methods for onboard processing. The results show that by using this low-power GPU it is possible to obtain real-time performance with a very limited power requirement.
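The measurement process described above, dot products between the signal and random vectors, can be sketched in a few lines. The matrix sizes and the Gaussian measurement ensemble below are illustrative assumptions, not the parameters of any actual instrument:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: n_bands spectral channels, n_pixels spectral vectors,
# m compressive measurements per pixel (m << n_bands).
n_bands, n_pixels, m = 224, 1000, 40

X = rng.random((n_bands, n_pixels))      # hyperspectral data (bands x pixels)
Phi = rng.standard_normal((m, n_bands))  # random measurement matrix

# Each measurement is a dot product between a random vector and the signal:
Y = Phi @ X                              # compressed data (m x n_pixels)

print(Y.shape)  # (40, 1000)
```

The per-pixel workload is m independent dot products, which is why platforms optimized for vector operations (such as the embedded GPUs above) map well to this step.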
Spaceborne sensor systems are characterized by scarce onboard computing and storage resources and by communication links with reduced bandwidth. Random projection techniques have been demonstrated to be an effective and very lightweight way to reduce the number of measurements in hyperspectral data, thus reducing the data to be transmitted to the Earth station. However, the reconstruction of the original data from the random projections may be computationally expensive. SpeCA is a blind hyperspectral reconstruction technique that exploits the fact that hyperspectral vectors often belong to a low-dimensional subspace. SpeCA has shown promising results in the task of recovering hyperspectral data from a reduced number of random measurements. In this manuscript we focus on the implementation of the SpeCA algorithm for graphics processing units (GPU) using the compute unified device architecture (CUDA).

Experimental results conducted using synthetic and real hyperspectral datasets on an NVIDIA GeForce GTX 980 GPU reveal that the use of GPUs can provide real-time reconstruction. The achieved speedup is up to 22 times when compared with the processing time of SpeCA running on one core of an Intel i7-4790K CPU (3.4 GHz) with 32 GB of memory.
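SpeCA itself is blind (it also has to learn the subspace) and is not reproduced here; the sketch below only illustrates the low-dimensional-subspace principle it exploits: a spectral vector lying in a known k-dimensional subspace can be recovered exactly from m >= k random projections. All sizes and the known subspace are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)
n_bands, k, m = 224, 8, 20  # ambient dim, subspace dim, measurements (k <= m << n_bands)

# Signal living in a known k-dimensional subspace spanned by the columns of E
E = np.linalg.qr(rng.standard_normal((n_bands, k)))[0]
x = E @ rng.standard_normal(k)

# Random-projection measurement
Phi = rng.standard_normal((m, n_bands))
y = Phi @ x

# Reconstruction: solve for the subspace coordinates, then lift back
z = np.linalg.lstsq(Phi @ E, y, rcond=None)[0]
x_hat = E @ z

print(np.allclose(x_hat, x))  # True: m >= k measurements suffice
```

This is why far fewer than n_bands measurements can be enough: the unknowns are the k subspace coordinates, not the full spectral vector.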
KEYWORDS: Field programmable gate arrays, Image processing, Hyperspectral imaging, Signal to noise ratio, Sensors, Satellites, System on a chip, Digital signal processing, Data processing, Embedded systems
Hyperspectral instruments have been incorporated in satellite missions, providing large amounts of data of high spectral resolution of the Earth surface. These data can be used in remote sensing applications that often require a real-time or near-real-time response. To avoid delays between hyperspectral image acquisition and its interpretation, the latter usually performed at a ground station, onboard systems have emerged to process data, reducing the volume of information to transfer from the satellite to the ground station. For this purpose, compact reconfigurable hardware modules, such as field-programmable gate arrays (FPGAs), are widely used. This paper proposes an FPGA-based architecture for hyperspectral unmixing. The method is based on vertex component analysis (VCA) and works without a dimensionality reduction preprocessing step. The architecture has been designed for a low-cost Xilinx Zynq board with a Zynq-7020 system-on-chip, whose programmable logic is based on the Artix-7 FPGA, and tested using real hyperspectral data. Experimental results indicate that the proposed implementation can achieve real-time processing while maintaining the method's accuracy, which indicates the potential of the proposed platform to implement high-performance, low-cost embedded systems, opening perspectives for onboard hyperspectral image processing.
Hyperspectral imaging can be used for object detection and for discriminating between different objects based on their spectral characteristics. One of the main problems of hyperspectral data analysis is the presence of mixed pixels, due to the low spatial resolution of such images. This means that several spectrally pure signatures (endmembers) are combined into the same mixed pixel. Linear spectral unmixing follows an unsupervised approach which aims at inferring pure spectral signatures and their material fractions at each pixel of the scene. The huge data volumes acquired by such sensors put stringent requirements on processing and unmixing methods.

This paper proposes an efficient implementation of an unsupervised linear unmixing method on GPUs using CUDA. The method finds the smallest simplex by solving a sequence of nonsmooth convex subproblems, using variable splitting to obtain a constrained formulation and then applying an augmented Lagrangian technique. The parallel implementation of SISAL presented in this work exploits the GPU architecture at a low level, using shared memory and coalesced memory accesses. The results herein presented indicate that the GPU implementation can significantly accelerate the method's execution on large datasets while maintaining the method's accuracy.
Remote hyperspectral sensors collect large amounts of data per flight, usually with low spatial resolution. It is known that the bandwidth of the connection between the satellite/airborne platform and the ground station is reduced, thus an onboard compression method is desirable to reduce the amount of data to be transmitted. This paper presents a parallel implementation of a compressive sensing method, called parallel hyperspectral coded aperture (P-HYCA), for graphics processing units (GPU) using the compute unified device architecture (CUDA). This method takes into account two main properties of hyperspectral datasets, namely the high correlation existing among the spectral bands and the generally low number of endmembers needed to explain the data, which largely reduces the number of measurements necessary to correctly reconstruct the original data. Experimental results conducted using synthetic and real hyperspectral datasets on two different NVIDIA GPU architectures (GeForce GTX 590 and GeForce GTX TITAN) reveal that the use of GPUs can provide real-time compressive sensing performance. The achieved speedup is up to 20 times when compared with the processing time of HYCA running on one core of an Intel i7-2600 CPU (3.4 GHz) with 16 GB of memory.
Hyperspectral instruments have been incorporated in satellite missions, providing data of high spectral resolution of the Earth. These data can be used in remote sensing applications such as target detection, hazard prevention, and monitoring of oil spills, among others. In most of these applications, one requirement of paramount importance is the ability to give a real-time or near-real-time response. Recently, onboard processing systems have emerged in order to cope with the huge amount of data to transfer from the satellite to the ground station, thus avoiding delays between hyperspectral image acquisition and its interpretation. For this purpose, compact reconfigurable hardware modules, such as field-programmable gate arrays (FPGAs), are widely used. This paper proposes a parallel FPGA-based architecture for endmember signature extraction. The method, based on the Vertex Component Analysis (VCA), has several advantages: it is unsupervised, fully automatic, and works without a dimensionality reduction (DR) pre-processing step. The architecture has been designed for a low-cost Xilinx Zynq board with a Zynq-7020 SoC FPGA based on the Artix-7 FPGA programmable logic and tested using real hyperspectral data sets collected by NASA's Airborne Visible Infra-Red Imaging Spectrometer (AVIRIS) over the Cuprite mining district in Nevada. Experimental results indicate that the proposed implementation can achieve real-time processing while maintaining the method's accuracy, which indicates the potential of the proposed platform to implement high-performance, low-cost embedded systems, opening new perspectives for onboard hyperspectral image processing.
The parallel hyperspectral unmixing problem is considered in this paper. A semisupervised approach is developed under the linear mixture model, where the physical constraints on the abundances are taken into account. The proposed approach relies on the increasing availability of spectral libraries of materials measured on the ground, instead of resorting to endmember extraction methods.

Since libraries are potentially very large and hyperspectral datasets are of high dimensionality, a parallel implementation in a pixel-by-pixel fashion is derived to properly exploit the graphics processing unit (GPU) architecture at a low level, thus taking full advantage of the computational power of GPUs. Experimental results obtained for real hyperspectral datasets reveal significant speedup factors, up to 164 times, with respect to an optimized serial implementation.
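A minimal per-pixel sketch of abundance estimation under the physical constraints (nonnegativity and sum-to-one) follows. The random library, the projected-gradient solver, and the simplex projection are illustrative stand-ins, not the paper's actual algorithm:

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex (Duchi et al. scheme)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, v.size + 1)
    rho = np.nonzero(u * idx > css - 1)[0][-1]
    theta = (css[rho] - 1) / (rho + 1)
    return np.maximum(v - theta, 0.0)

rng = np.random.default_rng(1)
n_bands, n_materials = 50, 6
A = rng.random((n_bands, n_materials))        # spectral library (illustrative)

a_true = rng.dirichlet(np.ones(n_materials))  # nonnegative, sums to one
x = A @ a_true                                # noise-free observed pixel

# Projected gradient for min ||A a - x||^2 subject to a on the simplex:
step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1/L with L the Lipschitz constant
a_hat = np.full(n_materials, 1.0 / n_materials)
for _ in range(2000):
    a_hat = project_simplex(a_hat - step * A.T @ (A @ a_hat - x))

print(a_hat.sum())  # 1.0 (enforced exactly by the projection, up to fp)
```

Because every pixel is solved independently against the same library, this loop is embarrassingly parallel across pixels, which is what the pixel-by-pixel GPU mapping exploits.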
This paper addresses the unmixing of hyperspectral images when intimate mixtures are present. In these scenarios the light suffers multiple interactions among distinct endmembers, which are not accounted for by the linear mixing model.
A two-step method to unmix hyperspectral intimate mixtures is proposed: first, based on the Hapke intimate mixture model, the reflectance is converted into the average single-scattering albedo. Second, the mass fractions of the endmembers are estimated by a recently proposed method termed simplex identification via split augmented Lagrangian (SISAL). The proposed method is evaluated on a well-known intimate mixture data set.
This paper addresses the problem of unmixing hyperspectral images when the light suffers multiple interactions among distinct endmembers. In these scenarios, linear unmixing has poor accuracy, since the multiple light scattering effects are not accounted for by the linear mixture model.
Herein, a nonlinear scenario composed of a single layer of vegetation above the soil is considered. For this class of scene, the adopted mixing model takes into account the second-order scattering interactions; higher-order interactions are assumed negligible. A semi-supervised unmixing method is proposed and evaluated with
simulated and real hyperspectral data sets.
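The layered vegetation-soil model adopted in the paper is not reproduced here; as a generic illustration of second-order scattering, the widely used bilinear family adds pairwise (Hadamard) products of endmember signatures to the linear term. All sizes and the interaction coefficients below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n_bands, p = 100, 3
M = rng.random((n_bands, p))      # endmember signatures (illustrative)
a = rng.dirichlet(np.ones(p))     # abundances: nonnegative, sum to one

# Linear part of the mixture
x_lin = M @ a

# Second-order part: pairwise (Hadamard) products of endmember signatures,
# weighted by illustrative interaction coefficients gamma_ij.
x_bil = np.zeros(n_bands)
for i in range(p):
    for j in range(i + 1, p):
        gamma = a[i] * a[j]       # one common choice of interaction coefficient
        x_bil += gamma * M[:, i] * M[:, j]

x = x_lin + x_bil                 # observed nonlinear (second-order) mixture
```

Fitting only the linear term to such data leaves the bilinear residual unexplained, which is the accuracy loss the abstract refers to.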
Hyperspectral sensors are being developed for remote sensing applications. These sensors produce huge data volumes which require faster processing and analysis tools. Vertex component analysis (VCA) has become a very useful tool to unmix hyperspectral data. It has been successfully used to determine endmembers and unmix large hyperspectral data sets without any a priori knowledge of the constituent spectra. Compared with other geometry-based approaches, VCA is an efficient method from the computational point of view.
In this paper we introduce new developments for VCA: 1) a new signal subspace identification method (HySime) is applied to infer the signal subspace where the data set lives; this step also infers the number of endmembers present in the data set; 2) after the projection of the data set onto the signal subspace, the algorithm iteratively projects the data set onto several directions orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of these projections. The capability of VCA to unmix large hyperspectral scenes (real or simulated) with low computational complexity is also illustrated.
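Step 2 above can be sketched as follows; the simulated data, the initialization, and the random orthogonal direction are illustrative simplifications of the actual VCA algorithm:

```python
import numpy as np

rng = np.random.default_rng(3)
n_bands, n_pixels, p = 10, 500, 4

# Simulated data: pixels are convex combinations of p endmembers
M = rng.random((n_bands, p))
A = rng.dirichlet(np.ones(p), size=n_pixels).T
A[:, :p] = np.eye(p)                    # plant pure pixels so extremes are endmembers
X = M @ A

E = np.zeros((n_bands, p))              # endmember estimates
E[:, 0] = X[:, rng.integers(n_pixels)]  # simplified initialization

for k in range(1, p):
    # Projector onto the orthogonal complement of span{endmembers found so far}
    Ek = E[:, :k]
    P = np.eye(n_bands) - Ek @ np.linalg.pinv(Ek)
    f = P @ rng.standard_normal(n_bands)  # random direction in that complement
    idx = np.argmax(np.abs(f @ X))        # extreme of the projected data
    E[:, k] = X[:, idx]                   # that pixel is the new endmember estimate
```

Each iteration costs one projector update and one matrix-vector product over the data, which is the source of VCA's low computational complexity.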
Hyperspectral unmixing methods aim at the decomposition of a hyperspectral image into a collection of endmember signatures, i.e., the radiance or reflectance of the materials present in the scene, and the corresponding abundance fractions at each pixel in the image.
This paper introduces a new unmixing method termed <i>dependent component analysis</i> (DECA). This method is blind and fully automatic, and it overcomes the limitations of unmixing methods based on Independent Component Analysis (ICA) and on geometry-based approaches.
DECA is based on the linear mixture model, i.e., each pixel is a linear mixture of the endmember signatures weighted by the corresponding abundance fractions. These abundances are modeled as mixtures of Dirichlet densities, thus enforcing the non-negativity and constant-sum constraints imposed by the acquisition process. The endmember signatures are inferred by a generalized expectation-maximization (GEM) type algorithm. The paper illustrates the effectiveness of DECA on synthetic and real hyperspectral images.
Dimensionality reduction plays a crucial role in many hyperspectral data processing and analysis algorithms. This paper proposes a new mean squared error based approach to determine the signal subspace in hyperspectral imagery. The method first estimates the signal and noise correlation matrices, and then selects the subset of eigenvalues that best represents the signal subspace in the least-squares sense. The effectiveness of the proposed method is illustrated using simulated and real hyperspectral images.
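A simplified sketch of the idea (not the paper's exact least-squares estimator): given estimates of the signal and noise correlation matrices, eigendirections of the signal correlation are kept when the signal power along them exceeds the noise power. The synthetic data and the known noise level are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)
n_bands, k_true, n_pixels = 30, 5, 5000

# Synthetic signal living in a k_true-dimensional subspace, plus white noise
B = np.linalg.qr(rng.standard_normal((n_bands, k_true)))[0]
S = B @ rng.standard_normal((k_true, n_pixels))
sigma = 0.1
X = S + sigma * rng.standard_normal((n_bands, n_pixels))

# Correlation matrix estimates (noise statistics assumed known, for illustration)
R_x = X @ X.T / n_pixels
R_n = sigma**2 * np.eye(n_bands)
R_s = R_x - R_n                   # signal correlation estimate

# Keep eigendirections where estimated signal power exceeds the noise power
vals, vecs = np.linalg.eigh(R_s)
p_noise = np.array([v @ R_n @ v for v in vecs.T])
k_hat = int(np.sum(vals > p_noise))
print(k_hat)  # 5 for this synthetic example
```

The same comparison, done with a proper mean-squared-error criterion, is what yields both the subspace and the number of endmembers in the abstract above.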
Linear unmixing decomposes a hyperspectral image into a collection of reflectance spectra, called endmember signatures, and a set of corresponding abundance fractions for the respective spatial coverage. This paper introduces <i>vertex component analysis</i> (VCA), an unsupervised algorithm to unmix linear mixtures of hyperspectral data. VCA exploits the fact that endmembers occupy the vertices of a simplex, and assumes the presence of pure pixels in the data. VCA performance is illustrated using simulated and real data. VCA competes with state-of-the-art methods while having much lower computational complexity.
Proc. SPIE 5238, Image and Signal Processing for Remote Sensing IX
KEYWORDS: Signal to noise ratio, Hyperspectral imaging, Independent component analysis, Statistical analysis, Solar radiation models, Data modeling, Sensors, Computer simulations, Monte Carlo methods, Hyperspectral simulation
One of the most challenging tasks underlying many hyperspectral imagery applications is spectral unmixing, which decomposes a mixed pixel into a collection of reflectance spectra, called endmember signatures, and their corresponding fractional abundances. Independent Component Analysis (ICA) has recently been proposed as a tool to unmix hyperspectral data. The basic goal of ICA is to find a linear transformation to recover independent sources (abundance fractions) given only sensor observations that are unknown linear mixtures of the unobserved independent sources.
In hyperspectral imagery the sum of the abundance fractions associated with each pixel is constant due to physical constraints in the data acquisition process; thus, the sources cannot be independent. This paper addresses hyperspectral data source dependence and its impact on ICA performance. The study considers simulated and real data. In simulated scenarios, hyperspectral observations are described by a generative model that takes into account the degradation mechanisms normally found in hyperspectral applications. We conclude that ICA does not unmix all sources correctly. This conclusion is based on a study of the mutual information. Nevertheless, some sources might be well separated, mainly if the number of sources is large and the signal-to-noise ratio (<i>SNR</i>) is high.
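The source-dependence argument can be checked numerically: abundance fractions that sum to one at every pixel are necessarily anticorrelated, so they cannot be independent sources. The Dirichlet model below is an illustrative choice of abundance distribution:

```python
import numpy as np

rng = np.random.default_rng(6)
p, n_pixels = 3, 100_000

# Abundance fractions: nonnegative and summing to one at every pixel
A = rng.dirichlet(np.ones(p), size=n_pixels)

# The constant-sum constraint forces negative correlation between sources,
# so they cannot be statistically independent.
C = np.corrcoef(A.T)
print(C[0, 1] < 0)  # True: any pair of abundance fractions is anticorrelated
```

For a symmetric Dirichlet the pairwise correlation is -1/(p-1), so the dependence is far from negligible, consistent with the conclusion drawn from the mutual information study.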