Proceedings Volume Algorithms, Technologies, and Applications for Multispectral and Hyperspectral Imagery XXVI, 1139201 (2020) https://doi.org/10.1117/12.2572760
This PDF file contains the front matter associated with SPIE Proceedings Volume 11392, including the Title Page, Copyright information, and Table of Contents.
Proceedings Volume Algorithms, Technologies, and Applications for Multispectral and Hyperspectral Imagery XXVI, 1139202 (2020) https://doi.org/10.1117/12.2571187
The three imaging modalities of the scanner include reflectance (400–2500 nm, 2.5 nm sampling), molecular fluorescence (400–1000 nm), and X-ray fluorescence. The first two modalities provide molecular information and the third provides elemental information about artists' materials (pigments and paint binders). The resulting material maps reveal insights into how artworks are constructed and modified. The type of information that can be obtained from this scanner will be presented with case studies such as Leonardo da Vinci's Ginevra de' Benci and Pablo Picasso's Le Gourmet, both in the collection of the National Gallery of Art, Washington, DC.
Proceedings Volume Algorithms, Technologies, and Applications for Multispectral and Hyperspectral Imagery XXVI, 1139203 (2020) https://doi.org/10.1117/12.2558741
Pan-sharpening, fusing the spatial and spectral information between panchromatic (PAN) and multispectral (MSI) or hyperspectral (HSI) imagery of a common scene, is an active topic in remote sensing due to a wide range of applications such as target detection, vegetation monitoring, and subsurface detection (e.g., landmines), among others. However, panchromatic sharpening generally focuses on the visual quality of the resulting image and on image-wide summary spectral accuracy metrics. Here we are interested in radiometrically accurate panchromatic sharpening of hyperspectral imagery, with particular emphasis on spectral algorithm performance. Four pan-sharpening algorithms are applied to hyperspectral imagery and evaluated for spectral/radiometric fidelity. Two datasets from SHARE2012 were used: one featuring rural scene elements and one featuring an urban scene. Target detection was also performed to evaluate sharpening performance. We find that although the four algorithms perform roughly similarly in visual terms, they differ in spectral/radiometric fidelity as well as in ACE target detection performance.
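Since ACE target detection is the evaluation criterion here, a minimal numpy sketch of the adaptive coherence estimator may help readers unfamiliar with it. The function below is illustrative only; the interface and the small ridge term for numerical stability are our own assumptions, not from the paper.

```python
import numpy as np

def ace(cube, target):
    """Adaptive Coherence Estimator scores for each pixel spectrum.

    cube:   (n_pixels, n_bands) array of spectra
    target: (n_bands,) target signature
    Returns (n_pixels,) ACE scores in [0, 1].
    """
    mu = cube.mean(axis=0)
    X = cube - mu                            # mean-subtracted background
    s = target - mu                          # mean-subtracted target
    # Background covariance with a small ridge for numerical stability
    cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])
    cov_inv = np.linalg.inv(cov)
    st_ci = s @ cov_inv                      # s^T Sigma^-1
    num = (X @ st_ci) ** 2                   # (s^T Sigma^-1 x)^2
    den = (st_ci @ s) * np.einsum('ij,jk,ik->i', X, cov_inv, X)
    return num / den
```

A pixel matching the target signature scores near 1, while background clutter scores low.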
Proceedings Volume Algorithms, Technologies, and Applications for Multispectral and Hyperspectral Imagery XXVI, 1139204 (2020) https://doi.org/10.1117/12.2557816
The Mastcam multispectral imagers onboard the Mars rover Curiosity have been collecting data since 2012. There are two imagers onboard the rover: the left imager has a wide field of view but three times lower resolution than the right imager. Left and right images can be combined to generate stereo and disparity images. However, stereo images generated in the conventional way are limited to the resolution of the left imager. Ideally, it would be more interesting to science fans and rover operators if stereo images could be generated at the resolution of the right imager, which is three times better. Recently, we have developed algorithms that can fuse left and right images to create left images at the resolution of the right imager. Consequently, high-resolution stereo images can be generated, as can high-resolution disparity images. In this document, we summarize the development of new JMARS layers that display the enhanced left images produced using pansharpening and deep learning algorithms, the high-resolution stereo images, and the high-resolution disparity maps. The details of the workflow are described, and some demonstration examples are given.
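The abstract does not specify which pansharpening algorithm is used; as generic background, a classic Brovey-ratio sharpening step can be sketched as below. The nearest-neighbour upsampling and the equal-weight intensity are illustrative simplifications, not the authors' method.

```python
import numpy as np

def brovey_sharpen(ms_lowres, pan, scale):
    """Classic Brovey-ratio pan-sharpening sketch.

    ms_lowres: (h, w, bands) low-resolution multispectral image
    pan:       (h*scale, w*scale) high-resolution panchromatic image
    Returns a (h*scale, w*scale, bands) sharpened image.
    """
    # Nearest-neighbour upsample of the MS bands onto the PAN grid
    up = ms_lowres.repeat(scale, axis=0).repeat(scale, axis=1).astype(float)
    intensity = up.mean(axis=2)              # synthetic intensity from MS bands
    ratio = pan / np.maximum(intensity, 1e-9)
    return up * ratio[..., None]             # rescale each band by the ratio
```

The ratio injects the high-frequency spatial detail of the panchromatic band into every spectral band while preserving band-to-band ratios.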
Proceedings Volume Algorithms, Technologies, and Applications for Multispectral and Hyperspectral Imagery XXVI, 1139205 (2020) https://doi.org/10.1117/12.2560106
Recently, multispectral and hyperspectral data fusion models based on deep learning have been proposed to generate images with high spatial and spectral resolution. The general objective is to obtain images that improve spatial resolution while preserving high spectral content. In this work, two deep learning data fusion techniques are characterized in terms of classification accuracy. These methods fuse a high spatial resolution multispectral image with a lower spatial resolution hyperspectral image to generate a hyperspectral image with high spatial and spectral resolution. The first model is based on a multi-scale long short-term memory (LSTM) network. The LSTM approach performs the fusion using a multi-step process that transitions from low to high spatial resolution through an intermediate step capable of reducing spatial information loss while preserving spectral content. The second fusion model is based on a convolutional neural network (CNN) data fusion approach. We present fused images using four multi-source datasets with different spatial and spectral resolutions. Both models provide fused images with spatial resolution increased from 8 m to 1 m. The fused images obtained with the two models are evaluated in terms of classification accuracy using several classifiers: Minimum Distance, Support Vector Machines, Class-Dependent Sparse Representation, and a CNN classifier. The classification results show better performance, in both overall and average accuracy, for the images generated with the multi-scale LSTM fusion than for the CNN fusion.
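The overall and average accuracy metrics used to compare the two fusion models can be computed from predicted class labels as follows; this is a minimal sketch independent of the authors' pipeline.

```python
import numpy as np

def overall_and_average_accuracy(y_true, y_pred, n_classes):
    """Overall accuracy (fraction of correctly labelled pixels) and
    average accuracy (mean of the per-class recalls)."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    overall = float(np.mean(y_true == y_pred))
    per_class = [np.mean(y_pred[y_true == c] == c)
                 for c in range(n_classes) if np.any(y_true == c)]
    return overall, float(np.mean(per_class))
```

Average accuracy weights every class equally, so it exposes poor performance on small classes that overall accuracy can hide.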
Proceedings Volume Algorithms, Technologies, and Applications for Multispectral and Hyperspectral Imagery XXVI, 1139206 (2020) https://doi.org/10.1117/12.2560107
Recent advances in data fusion provide the capability to obtain enhanced hyperspectral data with high spatial and spectral information content, allowing for improved classification accuracy. Although hyperspectral image classification is a highly investigated topic in remote sensing, each classification technique presents different advantages and disadvantages. For example, methods based on morphological filtering are particularly good at classifying human-made structures with basic geometrical spatial shapes, like houses and buildings. On the other hand, methods based on spectral information tend to perform better in natural scenery with more shape diversity, such as vegetation and soil areas. Moreover, classes with mixed pixels, limited training data, or objects with similar reflectance values present a greater challenge for obtaining high classification accuracy. Therefore, it is difficult to find a single technique that provides the highest classification accuracy for every class present in an image. This work proposes a decision fusion approach aiming to increase the classification accuracy of enhanced hyperspectral images by integrating the results of multiple classifiers. Our approach is performed in two steps: 1) machine learning algorithms such as Support Vector Machines (SVM), Deep Neural Networks (DNN), and Class-dependent Sparse Representation generate initial classification data; then 2) a decision fusion scheme based on a Convolutional Neural Network (CNN) integrates all the classification results into a unified classification rule. In particular, the CNN receives as input the per-pixel class probabilities from each implemented classifier and, using a softmax activation function, estimates the final decision. We present results showing the performance of our method on different hyperspectral image datasets.
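As an illustration of the final fusion stage described above, the sketch below stacks per-classifier probability maps and applies a softmax before the decision. The weighted linear combination stands in for the trained CNN, which is an assumption of this sketch, not the paper's architecture.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)   # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def fuse_decisions(prob_maps, weights=None):
    """Fuse per-classifier class-probability maps into one decision.

    prob_maps: (n_classifiers, n_pixels, n_classes) probabilities
    Returns (n_pixels,) fused class labels.
    """
    prob_maps = np.asarray(prob_maps, dtype=float)
    n_clf = prob_maps.shape[0]
    w = np.ones(n_clf) / n_clf if weights is None else np.asarray(weights)
    logits = np.tensordot(w, prob_maps, axes=1)   # weighted combination
    return softmax(logits).argmax(axis=-1)
```

In the paper, the weights would be replaced by learned convolutional layers; the input/output shapes are the same idea.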
Proceedings Volume Algorithms, Technologies, and Applications for Multispectral and Hyperspectral Imagery XXVI, 1139207 (2020) https://doi.org/10.1117/12.2557865
There are two multispectral Mastcam imagers in the Mars Science Laboratory (MSL) onboard the Mars rover Curiosity. The left imager has low resolution, whereas the right imager has high resolution. Most of the time, the two imagers work independently. It is interesting to explore the possibility of fusing the left and right images to produce stereo images. However, the extremely short baseline between the imagers makes stereo 3D reconstruction challenging. In this paper, we tested the feasibility of using Mastcam images for stereo 3D reconstruction. We used a five-point algorithm to perform epipolar rectification and then applied a census-based semi-global matching algorithm to the rectified stereo pairs to produce disparity maps. Preliminary tests using Mastcam images of two scenes were performed to assess the robustness of the processing pipeline.
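As background for the census-based matching step, the census transform and its Hamming-distance matching cost can be sketched in a few lines. This is a simplified, unoptimized illustration, not the authors' implementation.

```python
import numpy as np

def census_transform(img, window=3):
    """Census transform: encode each pixel as a bit string of
    neighbour-vs-centre comparisons, which is robust to illumination
    differences between the two imagers."""
    r = window // 2
    h, w = img.shape
    codes = np.zeros((h - 2 * r, w - 2 * r), dtype=np.uint64)
    centre = img[r:h - r, r:w - r]
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = img[r + dy:h - r + dy, r + dx:w - r + dx]
            codes = (codes << np.uint64(1)) | (shifted < centre).astype(np.uint64)
    return codes

def hamming_cost(c1, c2):
    """Matching cost between census codes: number of differing bits."""
    x = np.bitwise_xor(c1, c2)
    return np.array([bin(int(v)).count('1') for v in x.ravel()]).reshape(x.shape)
```

In a full pipeline, these costs would be aggregated along scanlines by semi-global matching before the winning disparity is selected per pixel.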
Proceedings Volume Algorithms, Technologies, and Applications for Multispectral and Hyperspectral Imagery XXVI, 1139208 (2020) https://doi.org/10.1117/12.2558967
This paper investigates an efficient framework for the fusion of hyperspectral imagery and a LiDAR-derived digital surface model to improve classification performance; a collaborative representation based classifier is chosen due to its high computational efficiency and analytical solution. Local binary pattern (LBP) and extinction profile (EP) features, which capture different spatial attributes, are extracted from both sources. The derived spatial features are then fed to a collaborative representation-based classifier with Tikhonov regularization (CRT) to produce representation residuals. Weighted residuals are calculated, and each class label is assigned according to the minimal-residual class, ultimately generating the classification map. To improve classification accuracy, the kernel CRT (KCRT) is used and residual fusion (RF) is conducted on the representation residuals from different sources and features. In this paper, spatial filtering for KCRT-RF is investigated. Experimental results demonstrate that a guided filter can help improve the fusion performance of KCRT-RF without significantly increasing computing cost.
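The core of collaborative representation with Tikhonov regularization has a closed-form solution, which is what makes the classifier fast. The sketch below illustrates it for a single test spectrum; the distance-based Tikhonov weighting and the interface are assumptions of this simplified variant, not the paper's exact formulation.

```python
import numpy as np

def crt_classify(X_train, y_train, x, lam=1e-2):
    """Collaborative-representation classification sketch with
    Tikhonov regularization biased toward training samples near x.

    X_train: (n_samples, n_bands) training spectra, y_train: labels.
    Returns the label of the class with the smallest representation residual.
    """
    D = X_train.T                                   # dictionary, columns = samples
    # Tikhonov weights: distance of each training sample to the test pixel
    gamma = np.diag(np.linalg.norm(X_train - x, axis=1))
    # Closed-form coefficients: (D^T D + lam * Gamma^T Gamma)^-1 D^T x
    alpha = np.linalg.solve(D.T @ D + lam * gamma.T @ gamma, D.T @ x)
    residuals = {}
    for c in np.unique(y_train):
        mask = (y_train == c)
        residuals[c] = np.linalg.norm(x - D[:, mask] @ alpha[mask])
    return min(residuals, key=residuals.get)
```

The kernel variant (KCRT) replaces the inner products in this solve with kernel evaluations; residual fusion then sums per-class residuals across sources before taking the minimum.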
Proceedings Volume Algorithms, Technologies, and Applications for Multispectral and Hyperspectral Imagery XXVI, 1139209 (2020) https://doi.org/10.1117/12.2558201
Forest destruction is a main contributor to carbon emissions and loss of biodiversity, making it a matter of global importance. Due to the large global footprint and often inaccessibility of forested areas, remote sensing is one of the most valuable techniques for monitoring deforestation. Spectral imaging is typically favored for material classification of forested areas and identification of broad swaths of deforestation. However, spectral data can fall short in detecting more subtle destruction beneath the forest canopy. Radar remote sensing can help fill this gap, as it has the ability to penetrate through tree canopies such that pixels capture backscatter information from both the canopy and material beneath it. Synthetic aperture radar in particular can capture this information at fine spatial resolution, and techniques such as polarimetry and interferometry can be used to measure biomass and detect deforestation. In this study, we compare synthetic aperture radar data with multispectral data to improve characterization and identification of source signatures captured within a pixel, with specific consideration to detecting areas where thinning is happening beneath the forest canopy. We focus on identifying different types of forest thinning in the Valles Caldera, located in the Jemez Mountains of northern New Mexico. We apply anomalous change detection to a combination of data products derived from multispectral imagery and synthetic aperture radar to determine which combinations are most effective at identifying anomalous features of interest in thinning regions. We find that comparing phase change measured by synthetic aperture radar interferometry to differenced vegetation indices highlights anomalous relationships in the thinning region. When comparing multispectral reflectance to backscatter intensity measured by synthetic aperture radar, the most successful temporal comparisons contained synthetic aperture radar data during the thinning period. 
This suggests that synthetic aperture radar enhances detection of thinning practices via remote sensing, especially with regard to changes taking place beneath the tree canopy. These results were improved further by segmenting the images according to vegetation coverage prior to applying anomalous change detection techniques.
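As a minimal illustration of the differenced vegetation index comparison, the sketch below flags pixels whose NDVI change deviates from the scene-wide trend. The simple k-sigma rule stands in for the anomalous change detection methods actually used in the study.

```python
import numpy as np

def ndvi(nir, red):
    """Normalised Difference Vegetation Index."""
    return (nir - red) / np.maximum(nir + red, 1e-9)

def ndvi_change_anomalies(nir_t0, red_t0, nir_t1, red_t1, k=3.0):
    """Flag pixels whose NDVI change deviates from the scene-wide
    mean change by more than k standard deviations."""
    d = ndvi(nir_t1, red_t1) - ndvi(nir_t0, red_t0)
    return np.abs(d - d.mean()) > k * d.std()
```

In the study, such differenced indices are compared against interferometric phase change from synthetic aperture radar rather than thresholded in isolation.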
Proceedings Volume Algorithms, Technologies, and Applications for Multispectral and Hyperspectral Imagery XXVI, 113920A (2020) https://doi.org/10.1117/12.2560149
Hyperspectral imaging is an innovative and exciting technology that holds incredible diagnostic, scientific, and categorization power. However, fundamental instrument performance is not consistently well characterized, well understood, or well represented to suit distinct application endeavors or commercial market expectations. Establishing a common language, technical specifications, testing criteria, task-specific recommendations, and common data formats is essential to allowing this technology to achieve its true altruistic and economic market potential. In 2018 the IEEE P4001 working group was formed to facilitate consistent use of terminology, characterization methods, and data structures. This talk is a progress report to inform the hyperspectral community of the status of the work to date and its interconnection with other standards, and to outline the road map for future work until publication of the standards in 2022.
Proceedings Volume Algorithms, Technologies, and Applications for Multispectral and Hyperspectral Imagery XXVI, 113920B (2020) https://doi.org/10.1117/12.2559670
The radiometric performance of cameras is customarily characterized and specified in terms of component properties such as F-number and readout noise. This paper considers the camera as an integrated unit and derives characteristics suitable for specifying noise, throughput, and saturation as determined from the overall input-to-output performance. A conventional model of signal and noise is reformulated into a simpler "equivalent camera" model with the same radiometric performance, constrained to have a lossless lens with a detector-side pupil subtending 1 steradian, and a detector with a peak quantum efficiency (QE) of 1. The small parameter set of this model can then be determined with the camera treated as a "black box", relevant for verification of camera specifications. The net light collection of the real camera is expressed by the detector area of the equivalent camera, denoted A*, as well as the wavelength dependence of its QE, denoted η*(λ). The noise floor due to readout noise can be expressed for a particular camera as a noise equivalent spectral radiance (NESR). For comparison of cameras with different bandwidths, it is shown that a comparative figure of merit, which is also independent of integration time, is the "noise equivalent radiance dose" (NERD). For a hyperspectral camera, the model parameters can be determined with a simple broadband source, while cameras with broad spectral response require measurements with tunable monochromatic light. The treatment also applies to spectrometers. Reference is made to D*, a well-established figure of merit for detectors, and it is argued that A*, η*(λ), and NERD are analogous figures of merit for camera properties.
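Under the simplifying assumption of a linear camera response (output = gain × radiance × integration time, plus readout noise), the relationship between the two noise figures can be sketched as below. The parameter names are ours, not the paper's, and this omits the equivalent-camera derivation entirely.

```python
def nesr_and_nerd(read_noise_dn, gain_dn_per_radiance_s, integration_time_s):
    """Sketch of the two noise-floor figures, assuming a linear camera.

    read_noise_dn:          readout noise in digital numbers (DN)
    gain_dn_per_radiance_s: DN per unit spectral radiance per second
    Returns (NESR, NERD): the spectral radiance equal to the read noise
    at this integration time, and the integration-time-independent
    radiance "dose" (radiance x time) equivalent.
    """
    nesr = read_noise_dn / (gain_dn_per_radiance_s * integration_time_s)
    nerd = read_noise_dn / gain_dn_per_radiance_s   # = nesr * t
    return nesr, nerd
```

The sketch makes the abstract's point concrete: NESR scales inversely with integration time, while NERD cancels the time dependence and so is comparable across cameras.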
Proceedings Volume Algorithms, Technologies, and Applications for Multispectral and Hyperspectral Imagery XXVI, 113920D (2020) https://doi.org/10.1117/12.2556180
We have developed a multi-spectral SWIR lidar system capable of measuring simultaneous spatial-spectral information for imaging and spectral discrimination through partial obscurations. Our system utilizes a supercontinuum laser source and eight narrowband spectral channels in the 1000 nm to 1600 nm region. The system employs a steering mirror, which enables us to scan the region of interest and collect spectral and spatial data as a point-by-point scan. The system is designed to detect weak signal returns in the few-photon regime. The technique promises more capable classification and target detection of spectrally diverse targets in obscured environments, with potential applications for mapping of ground type through forest canopy, pollution monitoring of waterways, and intelligence, surveillance, reconnaissance, and target detection (ISRT). Custom targets designed to provide distinct spectral responses are employed to ascertain the system's response. The lidar system is calibrated by measuring the return signal from a highly reflective flat Spectralon target; this enables us to determine the reflectivity of the objects of interest. The spectral responses of the targets are analyzed and their estimated reflectivities are reported. The same targets are studied in the presence of two partial obscurants. The objects are easily identified even though the return signal is attenuated by a factor of seven. The general spectral shape of the targets is preserved in the presence of the obscurants. More challenging objects and environments, and various methods to recover the spectral response of the objects, are currently being pursued.
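The Spectralon-based calibration reduces to a per-channel ratio against a reference of known reflectivity; a minimal sketch follows. The 0.99 default is a typical Spectralon reflectivity and is our assumption, as is the assumption that target and reference are measured at the same range.

```python
import numpy as np

def estimate_reflectivity(target_signal, reference_signal,
                          reference_reflectivity=0.99):
    """Estimate per-channel target reflectivity from lidar returns by
    ratioing against a calibrated Spectralon reference measured under
    the same geometry."""
    target_signal = np.asarray(target_signal, dtype=float)
    reference_signal = np.asarray(reference_signal, dtype=float)
    return reference_reflectivity * target_signal / reference_signal
```

Because the ratio is taken channel by channel, range- and channel-dependent system factors cancel to first order.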
Spectral Sensing and Imaging of Cultural Artifacts
Proceedings Volume Algorithms, Technologies, and Applications for Multispectral and Hyperspectral Imagery XXVI, 113920I (2020) https://doi.org/10.1117/12.2559383
The US Army NVESD has been developing high performance hyperspectral sensors for Army missions for decades. During that time, a partnership was established with the National Gallery of Art to leverage these high value HSI instruments for the characterization and conservation of Masterworks. Works by Picasso, Rembrandt, Van Gogh, Matisse, and many others have been imaged and the resulting data has assisted in their understanding and continued conservation for future generations to enjoy. In this talk, a series of examples from this collaboration are presented, along with some of the lessons learned when using remote sensing equipment for close range, static, indoor imaging.
Proceedings Volume Algorithms, Technologies, and Applications for Multispectral and Hyperspectral Imagery XXVI, 113920J (2020) https://doi.org/10.1117/12.2557890
Identification of the materials used in the creation of paintings and illuminated manuscripts can provide conservators with knowledge of how to better preserve objects, and supply scholars with information about the provenance of the objects. Significant headway has been made with non-invasive imaging techniques in the past decade. One example is hyperspectral imaging systems, which were initially developed for the remote sensing community and are now more widely applied to assist with art conservation. Although hardware such as hyperspectral systems is becoming more widely used, the software to analyze the acquired data is not yet well developed for pigment analysis of artworks. Pigment-specific chemical information is derived from hyperspectral data, leveraging spectral signatures to separate out the various artists' materials used in the creation of the artwork. In general, methods of pigment analysis and mapping of paintings, illuminated manuscripts, and other artworks involve various algorithms and processing steps to create spatial distribution maps of the spectral signatures. When analyzing a particular work, the goals of the analysis are threefold: determine the number of unique pigments (not visual colors) used in the creation of the object; identify the unique pigment reflectance signatures from the data directly or from previously measured library spectra; and map the location and possible distribution of each pigment across the painting. This information is then passed on to conservators and art historians to inform their efforts. Current endmember selection methods involve significant user input, are best done by an expert user, and are time-consuming. This research examines the accuracy of creating abundance maps (pigment distribution maps) from automatically derived endmembers (using the Maximum Distance algorithm) to create useful information maps to inform conservators.
The non-negative least squares (NNLS) method is used to unmix the data and create the abundance maps. Algorithms were evaluated with hyperspectral data of Cosmè Tura's Saint Francis Receiving the Stigmata, in the collection of the National Gallery of Art, Washington, DC. Results compare well with what is known of the pigments used to create this painting.
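A dependency-free sketch of NNLS unmixing by projected gradient descent, standing in for a library NNLS solver, may clarify the abundance-map step. The endmember matrix, step size, and iteration budget here are illustrative assumptions.

```python
import numpy as np

def nnls_unmix(endmembers, pixel, n_iter=5000):
    """Non-negative least-squares unmixing of one pixel spectrum by
    projected gradient descent.

    endmembers: (n_bands, n_endmembers) spectral signatures as columns
    pixel:      (n_bands,) observed spectrum
    Returns (n_endmembers,) non-negative abundances.
    """
    E = np.asarray(endmembers, dtype=float)
    y = np.asarray(pixel, dtype=float)
    step = 1.0 / np.linalg.norm(E.T @ E, 2)   # 1/Lipschitz constant
    a = np.zeros(E.shape[1])
    for _ in range(n_iter):
        grad = E.T @ (E @ a - y)              # gradient of 0.5*||Ea - y||^2
        a = np.maximum(a - step * grad, 0.0)  # project onto a >= 0
    return a
```

Applying this per pixel, with one column per automatically derived endmember, yields the abundance (pigment distribution) maps described above.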
Proceedings Volume Algorithms, Technologies, and Applications for Multispectral and Hyperspectral Imagery XXVI, 113920K (2020) https://doi.org/10.1117/12.2558218
The Gough Map is a late medieval map of the island of Great Britain that has received considerable research attention, with many studies conducted to analyze it. With a hyperspectral image (HSI) of the Gough Map collected, we can analyze the map from a new angle. Our goal is to semi-automatically identify and separate the iron-gall and carbon black inks written on the Gough Map. Unlike traditional target detection problems, our data have unique characteristics. First, the targets are not sparsely distributed, and their statistical contributions to the background estimation cannot be ignored. Second, the spectral differences between the targets are subtle. Last, the variances of the background data are so low that distinguishing the background from noise is challenging. To address these issues, we made the following modifications to the traditional adaptive coherence estimator (ACE). First, we manually select the background data instead of using the whole dataset to estimate the background. Moreover, pixels with strong spectral features of other pigments are removed, and only the black-ink candidates are fed into the detector. Third, the number of eigenvectors is limited in the calculation of the whitening operator so that the impact of noise can be controlled. The trailing eigenvectors are usually dominated by noise, so the majority of the information in the background data can be effectively compressed using only the top-ranked eigenvectors. This is critical because, given that our background has very low variances, whitening the noise amplifies its variance. The whitening matrix must also be applied to target candidates that have subtle spectral differences, and whitening the noise would cause those spectral features to disappear in the whitened subspace.
In conclusion, the whitening operator used in ACE should whiten the useful information in the background data and reduce the effect of noise as much as possible, so that the subtle spectral differences of the targets are not buried in the whitened subspace. Our results show that the modified target detection algorithm can separate the targets both from the background and from each other.
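The truncated whitening step described above can be sketched as follows: the operator is built from only the top-k eigenvectors of the background covariance, so noise-dominated trailing directions never enter the whitened subspace. This is a minimal illustration of the idea, not the authors' code.

```python
import numpy as np

def truncated_whitener(background, k):
    """Whitening operator from the top-k eigenvectors of the background
    covariance; trailing (noise-dominated) eigenvectors are discarded
    rather than amplified."""
    cov = np.cov(background, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)           # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:k]           # keep the k largest
    return vecs[:, idx] / np.sqrt(vals[idx])   # (n_bands, k) operator

def whiten(W, X, mean):
    """Project mean-subtracted spectra into the truncated whitened subspace."""
    return (X - mean) @ W
```

In the whitened k-dimensional space, the retained background directions have unit variance, and an ACE-style detector can then compare target candidates without the noise-driven variance inflation discussed above.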
Proceedings Volume Algorithms, Technologies, and Applications for Multispectral and Hyperspectral Imagery XXVI, 113920L (2020) https://doi.org/10.1117/12.2559023
Current commercially fielded approaches for detection of trace materials (explosives, narcotics, low volatility chemical warfare agents and other hazardous chemicals) on surfaces require physical collection and analysis of the trace material. Although these techniques are very sensitive, they require unseen particles to be collected from the surface of the substrate being screened and be transferred to the analysis system. This collection process requires human input (and close-proximity exposure to the screened article, which could pose a safety threat), introduces a fundamental limit on the screening speed, and limits the types of surfaces that can be successfully tested for trace materials to those with good “wipe-ability”. The Intelligence Advanced Research Projects Activity (IARPA) SILMARILS program has demonstrated the capability to detect trace explosives at levels comparable with Explosive Trace Detection (ETD) systems, and narcotics at similar levels. This paper details the algorithms and sensor advances that have enabled these results. A number of relevant experimental test campaigns are discussed including: detection of target chemicals on harvested “real world” substrates such as wood, auto tires, smooth and rough metal, plastics, vinyl, leather, various fabrics, and pig skin (comparable to human skin with respect to fat, water, and hemoglobin content); detection of traces of explosives on portable electronic devices (including after cleaning with organic solvents); and field testing at the 2019 Indianapolis 500 race, where incoming spectator vehicles were screened for explosives simulants (with blind positive controls provided by test and evaluation team rental cars).
Proceedings Volume Algorithms, Technologies, and Applications for Multispectral and Hyperspectral Imagery XXVI, 113920M (2020) https://doi.org/10.1117/12.2558220
We are developing a cart-based mobile system for the detection of trace explosives on surfaces by active infrared (IR) backscatter hyperspectral imaging (HSI). We refer to this technology as Infrared Backscatter Imaging Spectroscopy (IBIS). A wavelength-tunable multi-chip infrared quantum cascade laser (QCL) is used to interrogate a surface while an MCT focal plane array (FPA) collects backscattered images. The QCL tunes across the full 6–11 μm wavelength range. Full 128 × 128 pixel frames from the FPA are collected at up to 1610 frames per second and comprise a hyperspectral image (HSI) cube. The HSI cube is processed and the extracted spectral information is fed into an algorithm to detect and identify traces of explosives. The algorithm utilizes a convolutional neural network (CNN) and has been pre-trained on synthetic diffuse reflectance spectra. In this manuscript, we present backscatter data and hyperspectral image mapping from a car panel substrate deposited with traces of the explosive RDX. We used a mask to restrict the RDX analyte deposition to small 4 mm diameter areas. The results presented here were measured at 1 m standoff.
Proceedings Volume Algorithms, Technologies, and Applications for Multispectral and Hyperspectral Imagery XXVI, 113920N (2020) https://doi.org/10.1117/12.2557030
Compressive sensing (CS) is a method of sampling that permits some classes of signals to be reconstructed with high accuracy even when they have been undersampled. In this paper we explore a phenomenon in which bandwise CS sampling of a hyperspectral data cube followed by reconstruction can actually amplify chemical signals contained in the cube. Perhaps most surprisingly, the amplification generally seems to increase as the level of sampling decreases. In some examples, the chemical signal is significantly stronger in a data cube reconstructed from 10% CS sampling than it is in the raw, 100% sampled data cube. We explore this phenomenon in two real-world datasets: the Physical Sciences Inc. Fabry-Pérot interferometer sensor multispectral dataset and the Johns Hopkins Applied Physics Lab FTIR-based longwave infrared sensor hyperspectral dataset. Each dataset contains the release of a chemical simulant (glacial acetic acid, triethyl phosphate, or sulfur hexafluoride), and in all cases we use the adaptive coherence estimator (ACE) to detect a target signal in the hyperspectral data cube. We end the paper by suggesting some theoretical justifications for why chemical signals would be amplified in CS sampled and reconstructed hyperspectral data cubes and discuss some practical implications.
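The adaptive coherence estimator used for all the detection experiments above has a compact closed form. A minimal NumPy sketch follows; the array layout and the global background estimate are assumptions for illustration, not details from the paper:

```python
import numpy as np

def ace_detector(cube, target):
    """Adaptive coherence estimator (ACE) scores for every pixel.

    cube   : (n_pixels, n_bands) array of spectra
    target : (n_bands,) target signature
    Returns (n_pixels,) scores in [0, 1]; larger means more target-like.
    """
    mu = cube.mean(axis=0)
    X = cube - mu                       # centred pixels
    s = target - mu                     # centred target signature
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    num = (X @ cov_inv @ s) ** 2
    den = (s @ cov_inv @ s) * np.einsum('ij,jk,ik->i', X, cov_inv, X)
    return num / den
```

Because the score is a squared cosine in whitened coordinates, it is insensitive to the overall brightness of each pixel, one reason ACE is a common choice for chemical-release detection.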
Proceedings Volume Algorithms, Technologies, and Applications for Multispectral and Hyperspectral Imagery XXVI, 113920O (2020) https://doi.org/10.1117/12.2556903
The smuggling of drugs into correctional facilities through the mail is a major concern. ChemImage has developed the VeroVisionTM mail screener system, which highlights drugs against the background using score imagery computed from wavelengths selected on the basis of the drugs' chemical signatures. More recently, sophisticated concealment techniques, such as dissolving drugs into paper, have come into use. We introduce a heterogeneous anomaly detector combined with a deep learning classifier. Anomaly detection first extracts suspect stain patterns; a You Only Look Once (YOLO) based classifier then classifies the anomalies as drug or non-drug stain patterns. We report a first successful detection on a limited set of methamphetamine samples, with an 87.4% probability of detection (PD) and a 7.0% probability of false alarm (PFA). The results show that wide-field, multispectral short-wave infrared (SWIR) imaging can enable screening of mail for dissolved, concealed drugs, with benefits for mail inspection efficiency and accuracy.
Proceedings Volume Algorithms, Technologies, and Applications for Multispectral and Hyperspectral Imagery XXVI, 113920T (2020) https://doi.org/10.1117/12.2560716
In this work we demonstrate that generative adversarial networks (GANs) can be used to generate realistic pervasive changes in RGB remote sensing imagery, even in an unpaired training setting. We investigate some transformation quality metrics based on deep embedding of the generated and real images which enable visualization and understanding of the training dynamics of the GAN, and provide a useful measure in terms of quantifying how distinguishable the generated images are from real images. We also identify some artifacts introduced by the GAN in the generated images, which are likely to contribute to the differences seen between the real and generated samples in the deep embedding feature space even in cases where the real and generated samples appear perceptually similar.
Proceedings Volume Algorithms, Technologies, and Applications for Multispectral and Hyperspectral Imagery XXVI, 113920U https://doi.org/10.1117/12.2559859
Multispectral (MSI) and hyperspectral (HSI) imaging spectrometers have become essential tools for applications ranging from agriculture to defense. It is imperative to assess multispectral sensor performance in relevant environmental locations. However, such field testing can be time-consuming and expensive, and it is often difficult to assess spectral properties across all relevant environments and terrains. Therefore, an alternative test methodology is explored: the use of laboratory analysis for predictive purposes. The method will not replace the need for actual field testing, but it can reduce the amount of such testing required. The methodology uses a fully spectrally characterized environment and terrain, which is compared to laboratory measurements across the full spectrum. The correlated results are described and the effectiveness of the methodology is assessed.
Proceedings Volume Algorithms, Technologies, and Applications for Multispectral and Hyperspectral Imagery XXVI, 113920W (2020) https://doi.org/10.1117/12.2556105
The general form of admissible solutions to classification problems depends on parameters π_i whose values determine performance. However, translating performance requirements into parameter choices requires a difficult evaluation of interdependent probabilities. In this report we build optimal classifiers by combining composite hypothesis tests. The process relates the parameters π_i to detection thresholds λ_jk, which are more directly predictive of detection and false alarm probabilities. It is found that the constituent composite hypothesis tests cannot themselves be optimal, but instead must be constructed via clairvoyant fusion principles.
Proceedings Volume Algorithms, Technologies, and Applications for Multispectral and Hyperspectral Imagery XXVI, 113920X (2020) https://doi.org/10.1117/12.2557894
The Reed-Xiaoli (RX) detector is used for identifying spatial anomalies in multispectral imagery, which are pixels whose spectra are anomalous relative to other pixels in a scene. The distribution of the spectra in an image is used to represent the background, and the anomalies are the pixels whose spectra deviate statistically from this distribution. While RX is used to identify spatial anomalies, in this research we have instead developed a method to capture temporal anomalies, or fleeting changes, such as a music festival in the desert. Using the annual Burning Man festival as a test case, we use a time series of multispectral images and iterate through each pixel, drawing the "background" distribution from a particular pixel location over time. Temporal RX (TRX) thus compares a pixel against itself through time, which enables us to capture normal seasonal trends and identify fleeting changes. We also describe a local window variant called Local Temporal RX (LTRX). Using k-means clustering and a new approach termed Meta-RX, we investigate the nature of the temporal anomalies detected by TRX and LTRX to infer types and causes of change.
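The per-pixel-location background estimate that distinguishes TRX from spatial RX can be sketched in a few lines of NumPy. This is a minimal illustration; the stack layout and the small diagonal loading are assumptions, not details from the paper:

```python
import numpy as np

def temporal_rx(stack):
    """Temporal RX (TRX) scores.

    stack : (n_times, n_pixels, n_bands) image time series.
    For each pixel location the background mean and covariance are drawn
    from that location's own history, and the Mahalanobis distance of each
    date's spectrum from that background is returned as (n_times, n_pixels).
    """
    n_t, n_p, n_b = stack.shape
    scores = np.empty((n_t, n_p))
    for p in range(n_p):
        ts = stack[:, p, :]                   # one pixel location through time
        d = ts - ts.mean(axis=0)
        cov = np.cov(ts, rowvar=False) + 1e-6 * np.eye(n_b)  # diagonal loading
        scores[:, p] = np.einsum('ij,jk,ik->i', d, np.linalg.inv(cov), d)
    return scores
```

A fleeting event shows up as a large score at one date for the affected pixels, while ordinary variation at that location is absorbed into the per-location covariance.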
Proceedings Volume Algorithms, Technologies, and Applications for Multispectral and Hyperspectral Imagery XXVI, 113920Y (2020) https://doi.org/10.1117/12.2558128
Outbreaks of West Nile Virus (WNV) and St. Louis Encephalitis (SLE) are projected to increase in frequency and intensity with climate change, underlining the need to develop better mosquito-borne disease (MBD) forecasting systems. Spread by Culex mosquitoes, WNV and SLE have seemingly random spatial and temporal outbreaks, making such outbreaks difficult to predict. However, recent studies have found that mosquito abundance and WNV/SLE transmission are strongly correlated, providing researchers with a foundation for the development of MBD forecasting systems. Mosquito populations are impacted by several environmental variables, such as humidity, temperature, vegetation, and available breeding habitat. Mosquito-population forecasting models are beginning to incorporate spectral data, such as the normalized difference vegetation index (NDVI). Vegetation is a crucial habitat for some mosquito species, and spectral data offer the best estimate of this habitat virtually anywhere on Earth. Additionally, vegetation offers a proxy for understanding how water flows across a landscape, a crucial consideration in urban areas with high landscape heterogeneity. This research explores how the spatial scale (extent) of the multispectral imagery used in mosquito population prediction models influences mosquito population forecasts, specifically in the Greater Toronto Area. We derive three monthly time series of standard spectral indices from multispectral imagery over the Greater Toronto Area from 2004 to 2015; each time series is derived from images taken over the same locations, but with different spatial footprints. We then explore how spectral indices across the three spatial scales perform as predictors for combined Cx. restuans and Cx. pipiens populations.
Proceedings Volume Algorithms, Technologies, and Applications for Multispectral and Hyperspectral Imagery XXVI, 113920Z (2020) https://doi.org/10.1117/12.2551920
Point target detection in hyperspectral data is often plagued by the inability to distinguish between the target and a (relatively) few false alarms. Even when the overall signal-to-noise ratio (SNR) of the data is good, the false alarms render many detection algorithms problematic. To solve this problem, we propose a two-step process for analyzing the data. We start by running the standard matched filter (MF) algorithm. While the original covariance matrix is based on all the pixels in the hyperspectral cube, a second covariance matrix is constructed from the highest detections. Running the algorithm a second time on the original data with this new covariance matrix, we distinguish between the targets and these background false detections. The new method was tested on real-world data and compared to results from the traditional matched filter. In all cases, the new method showed a significant decrease in false alarms; other benchmark metrics also show the efficacy of this method.
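A minimal sketch of the two-pass scheme described above follows; the top fraction used to rebuild the covariance and the diagonal loading are assumed parameters, not values from the paper:

```python
import numpy as np

def matched_filter(cube, target, cov=None):
    """Spectral matched filter scores; cube is (n_pixels, n_bands)."""
    mu = cube.mean(axis=0)
    X, s = cube - mu, target - mu
    if cov is None:
        cov = np.cov(X, rowvar=False)
    w = np.linalg.solve(cov, s)            # whitened target direction
    return (X @ w) / (s @ w)

def two_pass_mf(cube, target, top_fraction=0.05):
    """Re-score the original data with a covariance rebuilt from the
    highest first-pass detections (a sketch in the spirit of the
    abstract's scheme; implementation details are assumed)."""
    first = matched_filter(cube, target)
    k = max(cube.shape[1] + 1, int(top_fraction * len(cube)))
    top = cube[np.argsort(first)[-k:]]
    cov2 = np.cov(top - top.mean(axis=0), rowvar=False)
    cov2 += 1e-6 * np.eye(cube.shape[1])   # keep the matrix invertible
    return matched_filter(cube, target, cov=cov2)
```

The second covariance is dominated by the pixels that triggered the first pass, so pixels that scored highly for background reasons are down-weighted on the second pass.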
Proceedings Volume Algorithms, Technologies, and Applications for Multispectral and Hyperspectral Imagery XXVI, 1139210 (2020) https://doi.org/10.1117/12.2553374
A common anomaly detection algorithm for hyperspectral imagery is the RX algorithm, based on the Mahalanobis distance of each pixel from the image mean. This benchmark algorithm can be applied either directly to a hyperspectral image or to a dimensionality-reduced one. Recent work on Non-Negative Matrix Factorization (NNMF) provides a fast iterative algorithm for decomposing a hyperspectral cube and achieving dimensionality reduction. In this paper, we study the implementation of the NNMF algorithm on a hyperspectral data cube and propose two new anomaly detection algorithms that combine the NNMF and RX algorithms. In the first version, we apply the NNMF algorithm to a hyperspectral image to reduce the dimensionality and then apply the RX algorithm. In the second version, we segment and cluster the dataset after applying the NNMF algorithm; anomaly detection is then performed on this dataset. Either algorithm overcomes a weakness of the RX algorithm in handling background clusters that are close to each other. The algorithm was tested on the RIT blind test dataset. From our results, we conclude that the two versions of the algorithm are sensitive to different types of anomalies; a two-dimensional scatterplot comparing the RX values to either of the NNMF-based values enables us to distinguish between the anomaly types. The ground truth shows that we achieved high accuracy with few false alarms.
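The first variant (NNMF for dimensionality reduction, then RX on the reduced representation) can be sketched as follows. The multiplicative-update NNMF and the rank are generic choices for illustration, not the paper's implementation:

```python
import numpy as np

def rx(X):
    """Global RX: squared Mahalanobis distance of each row from the mean."""
    d = X - X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(d, rowvar=False))
    return np.einsum('ij,jk,ik->i', d, cov_inv, d)

def nnmf(V, r, n_iter=300, seed=0):
    """Lee-Seung multiplicative updates: non-negative V (n, m) ≈ W @ H."""
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], r)) + 1e-3
    H = rng.random((r, V.shape[1])) + 1e-3
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H

def nnmf_rx(cube, r=3):
    """Reduce the cube (pixels as rows) with NNMF, then run RX on W."""
    W, _ = nnmf(cube, r)
    return rx(W)
```

Running RX on the low-dimensional abundance matrix W rather than the full cube is what gives the combined detector its speed advantage.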
Proceedings Volume Algorithms, Technologies, and Applications for Multispectral and Hyperspectral Imagery XXVI, 1139211 (2020) https://doi.org/10.1117/12.2564106
The "Viareggio 2013 Trial" is a hyperspectral dataset obtained from multiple overflights of the Italian city of Viareggio. Careful management of panels and vehicles in the scene enabled the development of valuable ground truth information. One pair of overflights occurred at different times on the same day, and another pair took place over different days. These data were used to compare and evaluate a variety of automated approaches for discovering anomalous changes. Co-registration of the images is acknowledged to be imprecise, so part of the challenge is to identify anomalous changes in a way that is robust to this misregistration. In particular, we employed a local co-registration adjustment (LCRA) algorithm to ameliorate the effects of misregistration; we employed non-maximal suppression (NMS) to take advantage of the discrete nature of the changes; and we used canonical correlation analysis (CCA) to reduce the dimension of our data. We found that, taken together, these improved the performance of the detectors in the low false alarm rate regime of operation.
Proceedings Volume Algorithms, Technologies, and Applications for Multispectral and Hyperspectral Imagery XXVI, 1139214 (2020) https://doi.org/10.1117/12.2556568
Kernel methods have proved to be a useful tool in the classification and analysis of the large data sets arising in hyperspectral imaging (HSI). Among them, kernel PCA is of particular interest as a robust tool that combines non-linearity with the advantages provided by principal components. Unfortunately, one drawback of kernel PCA is its high computational complexity, with run times on the order of O(N³), where N is the number of points in the data set.
To resolve this problem, we propose to take advantage of the fast approximate factor analysis approach proposed by M. V. Wickerhauser in the context of traditional image processing. We adapt and implement this concept in the kernel PCA setting, yielding an approximation to a full kernel PCA decomposition of the data with run times asymptotically on the order of O(N² log N). Furthermore, we test this approximation on several standard HSI data sets to demonstrate that it does not impact classification accuracy while providing significant reductions in computation time.
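For reference, the exact computation whose O(N³) eigendecomposition the approximation avoids looks like this; the RBF kernel and parameter choices are illustrative, not the paper's configuration:

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    """Exact RBF kernel PCA: O(N^2) kernel matrix, O(N^3) eigensolve."""
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))
    N = len(X)
    J = np.eye(N) - np.ones((N, N)) / N
    Kc = J @ K @ J                         # double-centre the kernel matrix
    vals, vecs = np.linalg.eigh(Kc)        # eigh returns ascending eigenvalues
    order = np.argsort(vals)[::-1][:n_components]
    return vecs[:, order] * np.sqrt(np.clip(vals[order], 0.0, None))
```

The N × N eigendecomposition dominates the cost for HSI-sized N, which is what motivates the O(N² log N) approximate factorization studied in the paper.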
Proceedings Volume Algorithms, Technologies, and Applications for Multispectral and Hyperspectral Imagery XXVI, 1139215 (2020) https://doi.org/10.1117/12.2559910
This paper explores the use of column subset selection (CSS) methods for endmember extraction in hyperspectral unmixing. CSS algorithms look for the subset of columns of a matrix that spans the simplex of maximum volume, an objective similar to that of many endmember extraction algorithms such as N-FINDR. It is therefore of interest to explore the use of CSS algorithms to solve the endmember extraction problem in hyperspectral unmixing. Many deterministic and randomized algorithms have been proposed in the literature for CSS. In this paper, we present an experimental comparison between CSS algorithms and traditional geometric endmember extraction algorithms such as N-FINDR, PPI and VCA. Experiments are conducted using the HYDICE Urban image. The volume of the resulting simplex and the classification accuracies of maps derived from the estimated abundances are used to evaluate the quality of the extracted endmembers and the unmixing results. The SVD-based CSS algorithm (SVDSS) has the overall best performance in both metrics.
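The shared objective can be made concrete with a toy brute-force subset search that maximizes simplex volume. Practical CSS algorithms avoid this combinatorial search; the 2-D data and function names here are purely illustrative:

```python
import numpy as np
from itertools import combinations
from math import factorial

def simplex_volume(E):
    """Volume of the simplex spanned by the p+1 rows of E (in p dims)."""
    return abs(np.linalg.det(E[1:] - E[0])) / factorial(len(E) - 1)

def max_volume_subset(X, p):
    """Brute force: the p+1 rows of X spanning the largest simplex."""
    best, best_vol = None, -1.0
    for idx in combinations(range(len(X)), p + 1):
        vol = simplex_volume(X[list(idx)])
        if vol > best_vol:
            best, best_vol = idx, vol
    return best, best_vol
```

On the corners of the unit square plus its centre, every corner triple attains the maximal area 1/2, so the interior point is never selected, mirroring how endmember extraction rejects mixed pixels inside the data simplex.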
Proceedings Volume Algorithms, Technologies, and Applications for Multispectral and Hyperspectral Imagery XXVI, 1139217 (2020) https://doi.org/10.1117/12.2554404
This study describes parametric modeling of diffuse IR reflectance for sparsely surface-distributed particles of the explosive RDX. Diffuse IR spectra are modeled using a formulation that considers spectral features due to target-material reflectance, i.e., RDX, substrate reflectance and resonance scattering resulting from finite sizes of surface-distributed particles. The results of this study demonstrate an approach for parametric modeling of diffuse IR reflectance for sparsely surface-distributed particles. The mathematical formulation of this approach is that of a phenomenological scattering-matrix representation.
Proceedings Volume Algorithms, Technologies, and Applications for Multispectral and Hyperspectral Imagery XXVI, 1139218 (2020) https://doi.org/10.1117/12.2557587
Anomaly detection plays a significant role in hyperspectral imagery. Traditional methods mainly focus on the spectral discrimination between the background and the test object by means of the Mahalanobis distance, as in the benchmark Reed-Xiaoli (RX) detector. In this paper, we propose a novel hyperspectral anomaly detection method based on low-rank representation. Since the observed hyperspectral data can be decomposed into a background part with the low-rank property and a sparse anomaly part, we exploit the local outlier factor (LOF) to construct a potential background dictionary. The dictionary attempts to cover as many categories of potential background objects as possible and can effectively exclude anomalous objects by calculating the local density and outlier degree. To take advantage of the full hyperspectral data cube, we integrate the spectral and spatial information with the outlier degree as a constraint component to optimize the low-rank representation model, which takes the implicit structure of the whole hyperspectral image into consideration. Experiments conducted on both synthetic and real hyperspectral datasets indicate that the proposed method achieves better performance than other state-of-the-art methods.
Proceedings Volume Algorithms, Technologies, and Applications for Multispectral and Hyperspectral Imagery XXVI, 113921A (2020) https://doi.org/10.1117/12.2558052
The spatial and spectral resolutions of remote sensing images are mutually constrained by the limitations of sensor technology. A multispectral (MS) image has high spectral resolution but low spatial resolution, while a panchromatic (PAN) image provides high spatial resolution. Fusion of MS and PAN images, which aims to obtain an MS image with high spatial resolution, is an active research topic in remote sensing image processing. In this paper, a fusion algorithm for MS and PAN images is presented based on the non-subsampled contourlet transform (NSCT) and the Gram-Schmidt (GS) transform. First, a low-resolution PAN image is synthesized by weighting each band of the MS image, with the weight coefficients obtained by least-squares estimation. The MS image is decomposed by the GS transform, with the synthetic low-resolution PAN image as the first GS component. Second, one-level and three-level NSCT decompositions are performed on the synthetic low-resolution PAN image and the PAN image, respectively. The low-frequency coefficients of the low-resolution PAN image are taken as those of the generated PAN image. The first-level high-frequency coefficients of the low-resolution PAN image and the PAN image are fused according to region energy, and the remaining high-frequency coefficients of the PAN image are taken as those of the generated PAN image. Third, the generated PAN image is reconstructed by the inverse NSCT from these coefficients. Finally, the inverse GS transform is applied to obtain the improved MS image by replacing the first GS component with the generated PAN image. Experiments conducted on QuickBird satellite images show that the proposed method is superior to other typical methods, improving spatial resolution with smaller spectral distortion.
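The core idea of component-substitution pan-sharpening, of which the GS step is an instance, can be sketched generically. This is not the paper's NSCT pipeline; the weighting scheme and covariance-based injection gains are generic illustrative choices:

```python
import numpy as np

def cs_pansharpen(ms, pan, weights):
    """Generic component-substitution fusion.

    ms      : (n_bands, H, W) MS image upsampled to the PAN grid
    pan     : (H, W) panchromatic image
    weights : (n_bands,) band weights for the synthetic intensity
    """
    intensity = np.tensordot(weights, ms, axes=1)        # synthetic low-res PAN
    var_i = float(np.cov(intensity.ravel()))
    # per-band injection gains: cov(band, intensity) / var(intensity)
    gains = np.array([np.cov(b.ravel(), intensity.ravel())[0, 1]
                      for b in ms]) / var_i
    detail = pan - intensity                             # high-res spatial detail
    return ms + gains[:, None, None] * detail
```

When the PAN image equals the synthetic intensity exactly, the detail term vanishes and the MS image is returned unchanged, which is the sanity check for any component-substitution scheme.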
Proceedings Volume Algorithms, Technologies, and Applications for Multispectral and Hyperspectral Imagery XXVI, 113921E (2020) https://doi.org/10.1117/12.2558299
The potential applications of cesium lead halide perovskites are widespread and are based on their unique dielectric response properties. It follows that for modeling the dielectric response of these materials, construction of a dielectric response function that is formulated for quantitative representation of underlying physical processes is required for the simulation of their performance as detectors, emitters or photocathodes. The present study examines physical characteristics of the dielectric response of cesium lead halide perovskites that provide a foundation for formulating quantitative dielectric response functions.
Proceedings Volume Algorithms, Technologies, and Applications for Multispectral and Hyperspectral Imagery XXVI, 113921F (2020) https://doi.org/10.1117/12.2560127
This paper presents a simple approach to simulating the temporal variations of the spectral signature of a non-resolved object in hyperspectral remote sensing for space situational awareness. The simulation consists of an object with a simple geometry, made of multiple materials, rotating at a constant angular speed. The non-resolved object produces a time-varying spectral signature that is measured by the imaging spectrometer. Signatures from materials representative of spacecraft are used in the simulation. The simulation results are used as input to a hyperspectral unmixing approach to evaluate its capacity to extract spacecraft spectral endmember signatures from the spectro-temporal signature. The expectation is that the results from unmixing can be used to characterize, catalogue or identify the space object. Extracted spectral endmember signatures can also be used to augment existing spectral libraries.
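A toy version of such a simulation can be written directly: each material facet contributes its spectrum scaled by a projected area that varies periodically with the rotation. The sinusoidal-facet geometry and all parameter values here are assumptions for illustration, not the paper's model:

```python
import numpy as np

def rotating_signature(S, phases, omega, t):
    """Spectro-temporal signature of a spinning multi-material object.

    S      : (n_materials, n_bands) endmember spectra
    phases : (n_materials,) facet orientation phases [rad]
    omega  : angular speed [rad/s];  t : (n_times,) sample times [s]
    Each facet's projected area varies as max(0, cos(omega*t + phase)),
    so the measured signature is a time-varying mixture of the spectra.
    """
    proj = np.clip(np.cos(omega * t[:, None] + phases[None, :]), 0.0, None)
    return proj @ S                      # (n_times, n_bands)
```

Because the mixture coefficients are periodic in the rotation, the resulting signature repeats with period 2π/ω, which is the structure an unmixing algorithm can exploit to recover the endmembers.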
Proceedings Volume Algorithms, Technologies, and Applications for Multispectral and Hyperspectral Imagery XXVI, 113921G (2020) https://doi.org/10.1117/12.2563206
The Simple Linear Iterative Clustering (SLIC) algorithm is widely used for superpixel segmentation in hyperspectral image processing. In this paper, we study the effect of band-subset selection as a dimensionality-reduction pre-processing step for SLIC superpixel segmentation. Band-subset selection methods based on column subset selection are studied. The quality of the resulting SLIC segmentation is measured by the homogeneity of the resulting superpixels: a superpixel is considered homogeneous if the matrix resulting from unfolding the spectral signatures in the superpixel is nearly rank one. The homogeneity ratio (the number of homogeneous superpixels over the total number of superpixels in the image) is used as a performance metric to compare different SLIC segmentation results. Experiments using the HYDICE Urban hyperspectral image are presented. Results show a slight increase in the homogeneity ratio for small numbers of bands (3-6) over SLIC using all bands.
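The nearly-rank-one test and the homogeneity ratio can be written compactly; the energy threshold below is an assumed value, since the abstract does not state one:

```python
import numpy as np

def is_homogeneous(S, tol=0.99):
    """S is (n_pixels, n_bands) for one superpixel: homogeneous when the
    leading singular value carries at least `tol` of the spectral energy
    (the threshold is an assumption for illustration)."""
    sv = np.linalg.svd(S, compute_uv=False)   # singular values, descending
    return sv[0] ** 2 / np.sum(sv ** 2) >= tol

def homogeneity_ratio(superpixels, tol=0.99):
    """Fraction of homogeneous superpixels in a segmentation."""
    return sum(is_homogeneous(S, tol) for S in superpixels) / len(superpixels)
```

A superpixel built from a single spectrum passes the test, while a mixture of unrelated spectra spreads its energy over several singular values and fails.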
Proceedings Volume Algorithms, Technologies, and Applications for Multispectral and Hyperspectral Imagery XXVI, 113921I https://doi.org/10.1117/12.2564690
The combination of pixel-assigned spectral filter matrices and standardized CMOS sensors enables the production and application of miniaturized spatially and spectrally resolving sensors which, when used in snapshot-mosaic cameras, represent an innovative alternative to whiskbroom, staring or pushbroom cameras. These cameras are characterized by CMOS sensors on which spectral filters are applied in a matrix that is replicated in the x- and y-directions over the entire sensor surface. Current multispectral filter-on-chip snapshot-mosaic cameras available on the market use 1.3-, 2.0- or 4.0-megapixel CMOS sensors equipped with 4, 9, 16 or 25 different spectrally selective filters in the visible (VIS) or near-infrared (NIR) spectral range. The combination of pixel-assigned spectral filter matrices on CMOS sensors increases the integration density and system complexity of multispectral snapshot-mosaic cameras many times over compared to established cameras with monochromatic or RGB Bayer-pattern image sensors. For an objective comparison of multispectral snapshot-mosaic cameras, it is necessary to describe their pixel-related, wavelength-dependent image acquisition channels by suitable parameters. In particular, the method for determining spectral sensitivity curves in accordance with the EMVA 1288 standard is shown and explained. This method is applied to different kinds of snapshot-mosaic cameras, referred to as monolithic and hybrid, and is extended by multiple measurements, comparisons and evaluations of spectral sensitivity curves from different areas of the sensor. The paper provides a systematic presentation of how to measure the spectral sensitivity curves of different multispectral cameras, how to compare the measured results, and how to evaluate them in order to choose the more appropriate camera for a desired application.
The EMVA 1288 standard, developed by camera manufacturers and research institutes, distinguishes itself from other standards by modeling the camera as a linear system. The camera is treated as a black box of which only the pixel size and exposure time must be known. The recording of standardized test images is also omitted, allowing the camera to be described without optics. The only input variable of the EMVA 1288 linear camera model is the number of photons that hit a pixel of the image sensor during the exposure time. The correct determination of the photon count is therefore essential for calculating important camera parameters from the linear model, such as quantum efficiency or signal-to-noise ratio. To determine the number of photons, the irradiance of the radiation incident on the image sensor must be measured, usually with a radiometer rather than the camera itself. The number of photons per pixel during the exposure time can then be calculated from the irradiance, taking into account the wavelength of the incident radiation, the area of the pixel and the exposure time of the camera.
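The photon-count computation described above reduces to one line. The example sensor values below (550 nm light at 1 W/m², a 5.5 µm pixel, 1 ms exposure) are assumptions for illustration, not values from the standard:

```python
# mu_p = E * A * t_exp / (h * c / wavelength)  -- the EMVA 1288 model input
H_PLANCK = 6.62607015e-34        # Planck constant, J*s
C_LIGHT = 2.99792458e8           # speed of light, m/s

def photons_per_pixel(irradiance, pixel_area, t_exp, wavelength):
    """irradiance [W/m^2], pixel_area [m^2], t_exp [s], wavelength [m]."""
    photon_energy = H_PLANCK * C_LIGHT / wavelength   # energy per photon, J
    return irradiance * pixel_area * t_exp / photon_energy

# Example: 1 W/m^2 of 550 nm light on a 5.5 um pixel for 1 ms
n = photons_per_pixel(1.0, (5.5e-6) ** 2, 1e-3, 550e-9)
```

With these example values the pixel collects on the order of 10^4 to 10^5 photons per exposure, the quantity from which quantum efficiency and SNR are then derived in the linear model.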