Spectral Fingerprint Identification (SFI) incorporates feature finding, text matching, and data fusion techniques for fast whole-cube material identification. In operation, the SFI algorithm translates spectral data into a feature space in which fast text matching between every pixel in a data cube and a preprocessed SFI spectral library can be performed. Data fusion of the resulting feature matches produces a list of materials likely contained in the data cube at both the whole-pixel and subpixel level. The SFI methodology was implemented as a prototype Opticks plug-in capable of both standalone and Windows-based cluster processing.
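The sketch below illustrates the general idea of translating a spectrum into a text-like feature representation so that pixel-versus-library comparison reduces to fast token matching. The binning scheme, token format, and overlap score are assumptions for illustration only, not the SFI implementation.

```python
# Illustrative sketch only: discretize each spectrum into a token set so that
# pixel-vs-library comparison becomes fast text/set matching. The quantization
# and scoring rules here are assumptions, not the SFI algorithm itself.
import numpy as np

def spectrum_to_tokens(spectrum, n_levels=8):
    """Quantize a spectrum into per-band tokens such as 'b012:5'."""
    s = np.asarray(spectrum, dtype=float)
    s = (s - s.min()) / (s.max() - s.min() + 1e-12)        # normalize to [0, 1]
    levels = np.minimum((s * n_levels).astype(int), n_levels - 1)
    return {f"b{i:03d}:{lvl}" for i, lvl in enumerate(levels)}

def match_library(pixel_spectrum, library):
    """Score each library entry by token overlap with the pixel fingerprint."""
    pixel_tokens = spectrum_to_tokens(pixel_spectrum)
    scores = {}
    for name, ref_spectrum in library.items():
        ref_tokens = spectrum_to_tokens(ref_spectrum)
        scores[name] = len(pixel_tokens & ref_tokens) / len(ref_tokens)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```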
Earlier, we reported on predictive anomaly detection (PAD) for nominating targets within data streams generated by
persistent sensing and surveillance. This technique is purely temporal and does not directly depend on the physics
attendant on the sensed environment. Since PAD adapts to evolving data streams, there are no determinacy assumptions.
We showed PAD to be general across sensor types, demonstrating it using synthetic chaotic data and in audio, visual,
and infrared applications. Defense-oriented demonstrations included explosions, muzzle flashes, and missile and aircraft
detection. Experiments were ground-based and air-to-air.
As new sensors come online, PAD offers immediate data filtering and target nomination. Its results can be taken
individually, pixel by pixel, for spectral analysis and material detection/identification. They can also be grouped for
shape analysis, target identification, and track development. PAD analyses reduce data volume by around 95%,
depending on target number and size, while still retaining all target indicators.
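As a rough illustration of this downstream grouping and data reduction, the following sketch takes a PAD-style detection mask, groups nominated pixels into connected components for shape or track analysis, and reports the achieved volume reduction. The mask source and the use of scipy connected-component labeling are assumptions, not the PAD pipeline.

```python
# Hypothetical post-processing of a detection mask: keep only nominated pixels
# and group them into connected components for shape analysis. Only the
# data-reduction idea comes from the text; the rest is illustrative.
import numpy as np
from scipy import ndimage

def nominate_targets(detection_mask):
    """Return per-component pixel coordinate lists plus the data-volume reduction."""
    labels, n_components = ndimage.label(detection_mask)
    groups = [np.argwhere(labels == k) for k in range(1, n_components + 1)]
    reduction = 1.0 - detection_mask.sum() / detection_mask.size
    return groups, reduction            # e.g. reduction ~ 0.95 for sparse targets
```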
While PAD's code is simple compared to physics codes, PAD tends to build a very large model. A PAD model for 512
x 640 frames may contain 19,660,800 Gaussian basis functions. (PAD models grow linearly with the number of pixels
and with the frequency content, in the FFT sense, of the sensed scenario's background data.) PAD's computational and
data intensity exemplifies what one sees in new algorithms now in the R&D pipeline, especially as DoD seeks capability
that runs fully automatically, with little to no human interaction.
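The model size quoted above implies a per-pixel budget that can be checked with simple arithmetic, sketched below. The total pixel and basis-function counts come from the text; the assumed storage cost per basis function (three 64-bit values) is an illustrative assumption only.

```python
# Back-of-the-envelope sizing for a PAD model on 512 x 640 frames. The totals are
# from the text; the bytes-per-basis-function storage cost is an assumption.
pixels = 512 * 640                    # 327,680 pixels per frame
total_basis = 19_660_800              # Gaussian basis functions (from the text)
per_pixel = total_basis // pixels     # -> 60 basis functions per pixel

bytes_per_basis = 3 * 8               # assume mean, width, weight as 64-bit floats
model_bytes = total_basis * bytes_per_basis
print(per_pixel, model_bytes / 2**20, "MiB")    # 60, ~450 MiB under this assumption
```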
Work is needed to improve algorithms' throughput while employing existing infrastructure, yet allowing for growth in
the types of hardware employed. In the present paper, we discuss a generic cluster interface for legacy codes that can be
partitioned at the data level. The discussion's foundation is the growth of PAD models to accommodate a particular
scenario and the need to reduce false alarms while preserving all targets. The discussion closes with a view of future
software and hardware opportunities.
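A minimal sketch of data-level partitioning for a legacy code appears below: the data cube is split into row blocks, an unchanged per-pixel kernel runs on each block in parallel, and the results are reassembled. The local process pool stands in for a cluster scheduler, and the kernel and function names are assumptions rather than the plug-in's actual interface.

```python
# Sketch of data-level partitioning for a legacy per-pixel code, assuming the
# kernel is embarrassingly parallel. A cluster scheduler would replace the
# local process pool used here for illustration.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def legacy_kernel(block):
    """Stand-in for an existing per-pixel algorithm (hypothetical)."""
    return block.mean(axis=-1)          # e.g. collapse the spectral axis

def run_partitioned(cube, n_workers=4):
    blocks = np.array_split(cube, n_workers, axis=0)    # partition along rows
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        results = list(pool.map(legacy_kernel, blocks))
    return np.concatenate(results, axis=0)
```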
This paper discusses a method for searching a database of known material signatures to find the closest match with an unknown signature. This database search method combines fuzzy logic and voting methods to achieve a high level of classification accuracy with the signatures and data cubes tested. The method is described in detail, including background and test results, with references to public literature concerning components used by the method but developed elsewhere.
The paper results from a project whose main objective is to produce an easily integrated software tool that makes an accurate best guess as to the material(s) indicated by the signature of a pixel found to be interesting by some analysis method, such as anomaly detection or scene characterization. Anomaly detection examines a spectral cube and determines which pixels are unusual relative to the majority background. Scene characterization finds pixels whose signatures are representative of the unique pixel groups.
The current project fully automates the process of determining pixels of interest, taking the signatures from the flagged pixels, searching a database of known signatures, and making a best guess as to the material(s) represented by each pixel's signature. The method ranks the candidate materials in order of likelihood so that multiple materials existing in the same pixel can be accounted for; when more than one material matches within some threshold, multiple reports are delivered. This facilitates human analysis and decision-making for production purposes. The implementation supports rapid response to interactive analysis needs in support of strategic and tactical operational requirements in both the civil and defense sectors.
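To make the fuzzy-logic-plus-voting combination concrete, the sketch below scores each band of an unknown signature against each library entry with a fuzzy membership function, lets bands "vote" for a material, ranks materials by vote fraction, and reports every material within a threshold of the best match. The Gaussian membership function, voting rule, and threshold values are illustrative assumptions, not the parameters of the method described in the paper.

```python
# Illustrative fuzzy-matching-plus-voting search over a signature database.
# Membership function, vote rule, and thresholds are assumptions for the sketch.
import numpy as np

def fuzzy_band_scores(unknown, reference, tolerance=0.05):
    """Per-band fuzzy membership: 1.0 at a perfect match, falling off with error."""
    err = np.abs(np.asarray(unknown, float) - np.asarray(reference, float))
    return np.exp(-(err / tolerance) ** 2)

def rank_materials(unknown, database, vote_level=0.5, report_threshold=0.8):
    """Each band votes for a material when its fuzzy score exceeds vote_level;
    materials are ranked by vote fraction and all near-top matches are reported."""
    votes = {}
    for name, reference in database.items():
        scores = fuzzy_band_scores(unknown, reference)
        votes[name] = float((scores > vote_level).mean())
    ranked = sorted(votes.items(), key=lambda kv: kv[1], reverse=True)
    best = ranked[0][1]
    reports = [(n, v) for n, v in ranked if v >= report_threshold * best]
    return ranked, reports    # full ranking plus multi-material report list
```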
This paper describes a means of achieving fault tolerance and architecture extensibility for parallel/distributed systems that support spectral analysis. These attributes are essential to critical 24/7/366 operations, and they improve upon systems that only enhance throughput. They also address the single-point-of-failure issues attendant upon architectures that commit critical operations to single machines. Graceful throughput degradation is achieved to mitigate all-or-nothing approaches.
Parallel/distributed processing has three important goals. The first is for the subject application to provide faster throughput than it would while running on a single CPU or computer. The second is to make the best use of existing capital equipment. For critical systems, the third goal is fault tolerance via redundancy. This project addresses the third goal. It seeks to demonstrate a means of making parallel/distributed processing systems fault tolerant so that crashes of individual machines in a cluster do not bring the entire system down. In spite of individual machine failures, it also seeks to ensure the completion of all tasks so that system throughput degrades gracefully. These goals can be met by a system composed of a generic TCP/IP LAN connecting some number of ordinary office computers and laboratory workstations that are heterogeneous and of unknown reliability.
Described here are concept formulation and design. Other projects in this arena, which provide essential technology to the present effort, are referenced. Particular application is made to detecting unspecified anomalies in unspecified data streams drawn from staring continuous-dwell sensors. This application enables the reliable non-stop detection of unexpected events, with the results immediately made available to human analysts or to additional automated processing.
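A conceptual sketch of graceful degradation follows: a coordinator hands data partitions to whichever workers remain alive and requeues any partition whose worker fails, so every task eventually completes as long as at least one machine survives. Worker discovery, heartbeats, and the TCP/IP transport the paper assumes are omitted; the names and failure signaling here are illustrative only.

```python
# Sketch of task reassignment for graceful degradation: requeue tasks whose
# workers fail so that all tasks complete despite individual machine crashes.
from collections import deque

def run_with_failover(tasks, workers, process):
    """tasks: partition ids; workers: worker handles (hypothetical);
    process(worker, task) -> result, raising ConnectionError on worker failure."""
    pending = deque(tasks)
    alive = list(workers)
    results = {}
    while pending:
        if not alive:
            raise RuntimeError("all workers failed; remaining tasks cannot complete")
        worker = alive[0]
        task = pending.popleft()
        try:
            results[task] = process(worker, task)
            alive.append(alive.pop(0))        # round-robin among surviving workers
        except ConnectionError:
            alive.pop(0)                      # drop the failed worker...
            pending.append(task)              # ...and requeue its task elsewhere
    return results
```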