We describe algorithm development for a trigger system for bio-aerosol detection using bulk collection of aerosols. Two key problems inherent to any system that collects or probes a volume of air are presented: the "mixture" problem and the "spike" problem. We describe a background suppression and detection algorithm and show why knowledge of background endmembers is important. We present an endmember selection algorithm and show examples. Integrating these two algorithms solves both the mixture and spike problems and has applications both to bio-aerosol point detectors, which collect samples from a volume of air, and to bio-aerosol stand-off detectors, which probe a column of air.
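The mixture problem can be illustrated with a minimal linear-unmixing sketch (a generic illustration, not the authors' algorithm): an observed spectrum is modeled as a combination of known background endmembers, and a large fit residual flags a potential non-background "spike". All names and data below are hypothetical.

```python
import numpy as np

def unmix(spectrum, endmembers):
    """Least-squares abundances of background endmembers plus the
    residual norm; a large residual suggests a non-background spike.

    spectrum:   (n,) observed spectral vector
    endmembers: (k, n) matrix, one background endmember per row
    """
    coeffs, *_ = np.linalg.lstsq(endmembers.T, spectrum, rcond=None)
    residual = spectrum - endmembers.T @ coeffs
    return coeffs, np.linalg.norm(residual)
```

A pure background mixture reproduces the spectrum with near-zero residual, while any component outside the span of the background endmembers shows up directly in the residual norm.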
We present a novel approach to a biological point detector system: extracting maximal information from fluorescence by using as much of the full excitation-emission-lifetime (XML) fluorescence space as can be conveniently gathered. Our paper has two parts. In the first part, we present initial XML spectral data gathered under Phase I of the DARPA Spectral-Sensing of Bio-Aerosols (SSBA) program using a commercial laboratory spectrofluorometer and illustrate its analysis in a multi-dimensional Principal Components Analysis (PCA) data space. We demonstrate classification using the spectral angle (SA) methodology developed for hyperspectral imaging in this PCA hyperspace. In the second part, we present a design for a custom trigger sensor developed in Phase II of the DARPA program. This Phase II sensor was motivated by the Phase I results and is intended to exploit them by gathering XML data at a rate consistent with near-real-time triggering.
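The spectral angle (SA) classification step can be sketched as follows. This is a generic illustration of the SA metric applied to feature vectors in a reduced (e.g. PCA) space, with a hypothetical class library, not the program's actual code.

```python
import numpy as np

def spectral_angle(a, b):
    """Angle (radians) between two feature vectors; small angle
    means spectrally similar, independent of overall intensity."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def classify(sample, library):
    """Assign the sample to the library class whose reference
    vector makes the smallest spectral angle with it."""
    return min(library, key=lambda k: spectral_angle(sample, library[k]))
```

Because the angle is invariant to scaling of either vector, the classifier responds to spectral shape rather than signal strength, which is why the method transfers naturally from hyperspectral imaging.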
Air refractivity changes, which include pressure, temperature, and composition effects, affect the performance of the Helium-Neon (HeNe) interferometer used to control the wafer and reticle stages of a step-and-scan lithography system. nanAlign is an auxiliary interferometer system designed to compensate for errors induced in a HeNe interferometer by refractivity changes. We conducted wafer exposure tests of nanAlign with 116 total wafers; 60 wafers with the same field order for each pass are discussed in this paper. We found that nanAlign measurements made on the x-axis could be used to improve the overlay in the y-axis. Over the entire ensemble of 60 wafers, the improvement of the x-axis was 0.6 nm, and the improvement of the y-axis was 0.4 nm. Over the entire ensemble the worst wafers showed the most improvement, and there was some improvement on almost all wafers under a wide variety of conditions.
Air turbulence affects the performance of the Helium-Neon interferometer used to control the wafer stage of stepper or step-and-scan lithography systems. In this paper, we describe characterization and reduction of the major error sources in an Air Turbulence Compensated Interferometer designed to address this problem.
Air turbulence affects the performance of the helium-neon interferometer used to control the wafer stage of stepper or step-and-scan lithography systems. In this paper, we describe the principles of operation and in-stepper performance of an air turbulence compensated interferometer designed to address this problem. Collinear combination of a two-wavelength compensation system using second harmonic interferometry, with the existing HeNe interferometer used for length measurement, provides a highly accurate system with real-time compensation for air turbulence. This paper reports on the hardware configuration and preliminary performance evaluation of an ATCI system which has been installed on a semiconductor wafer stepper. A brief overview of the signal processing algorithms is provided, showing the automatic compensation features and noise insensitivity of the algorithm.
Unconventional imaging techniques obtain high resolution images of objects at very long ranges without the use of large diameter primary optical elements. Cost and weight constraints lead us to consider methods for using sparse arrays of subapertures. In this paper, we present a genetic algorithm method for designing sparse arrays of subapertures for an unconventional imaging technique known as correlography. We have compared the solutions found using genetic algorithms to other techniques for generating arrays with filled autocorrelations. The results of this comparison are presented in this paper.
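The aperture-design search can be sketched under simplifying assumptions: a 1-D grid of candidate subaperture positions, a fixed subaperture count, mutation-only evolution (no crossover), and autocorrelation coverage as the fitness. The grid size, population size, and operators here are illustrative, not the paper's actual encoding.

```python
import numpy as np

rng = np.random.default_rng(0)  # seeded for reproducibility
N = 16   # candidate aperture positions on a 1-D grid (illustrative)
K = 5    # number of subapertures to place

def fitness(mask):
    # Fraction of autocorrelation lags covered: a "filled"
    # autocorrelation maximizes this coverage.
    ac = np.correlate(mask, mask, mode="full")
    return np.count_nonzero(ac) / ac.size

def random_mask():
    m = np.zeros(N)
    m[rng.choice(N, K, replace=False)] = 1.0
    return m

def mutate(mask):
    # Move one subaperture to an empty site, keeping K fixed.
    child = mask.copy()
    child[rng.choice(np.flatnonzero(child == 1.0))] = 0.0
    child[rng.choice(np.flatnonzero(child == 0.0))] = 1.0
    return child

# Simple elitist loop: keep the best half, refill with mutants.
pop = [random_mask() for _ in range(20)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    pop = pop[:10] + [mutate(p) for p in pop[:10]]
best = max(pop, key=fitness)
```

A full genetic algorithm would add crossover between parent masks; the fitness function is the essential part, since correlography requires the array's autocorrelation support to cover the spatial-frequency plane.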
The authors have developed a prototype model of optical detection systems based on a set of primitive mathematical operations that are characteristic of elements in a detection system. The model can cascade these operations arbitrarily to simulate very complex detection systems without requiring cumbersome amounts of input for simple detection systems. Each execution of the model cascades an independent single instance of the noise associated with each operation drawn from the mathematically correct distributions in the same manner as an actual detection system. Thus, ensembles of images from the simulation exhibit the same statistical properties in each pixel as an ensemble of images obtained from a corresponding optical sensor. The resulting images are suitable for development and evaluation of image processors and machine vision systems.
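The cascaded-noise idea can be sketched as a chain of stages, each drawing a fresh realization from the mathematically appropriate distribution on every call, so that repeated runs build a statistically correct ensemble. The stage names and parameter values below are illustrative, not the model's actual primitives.

```python
import numpy as np

rng = np.random.default_rng(0)  # seeded here only for reproducibility

def photon_noise(img):
    # Shot noise: each pixel's detected count is Poisson-distributed
    # about the mean photon flux.
    return rng.poisson(img).astype(float)

def gain(img, g=2.0):
    # Deterministic amplifier stage (no noise of its own here).
    return g * img

def read_noise(img, sigma=3.0):
    # Additive Gaussian electronics noise.
    return img + rng.normal(0.0, sigma, img.shape)

def detect(img, stages):
    """Cascade the stage operations in order, as the model
    cascades its primitive operations."""
    for stage in stages:
        img = stage(img)
    return img

scene = np.full((64, 64), 100.0)  # mean photon flux per pixel
frame = detect(scene, [photon_noise, gain, read_noise])
```

Each execution of `detect` produces one independent instance of every random event, so an ensemble of frames has the same per-pixel statistics a real sensor would produce.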
Proc. SPIE. 2065, Optics, Illumination, and Image Sensing for Machine Vision VIII
KEYWORDS: Mathematical modeling, Human-machine interfaces, Visual process modeling, Visualization, Sensors, Computing systems, Computer programming, Signal processing, Signal detection, Systems modeling
The concept of visual programming is a powerful, new way to approach the simulation of optical detection systems. The visual programming interface described here is being designed to allow the user to create and manipulate block diagrams to describe the system of interest; the computer will automatically generate the script which performs the simulation of this system. In this paper, we consider several key issues associated with the development of a visual programming interface for modeling physical systems. First, the `syntax' of a visual programming interface for modeling of physical systems is defined. Second, we describe ways to keep the interface as flexible as possible -- not limiting the operations which can be performed through arbitrary restrictions. Finally, we describe how to implement error checking to prevent the user from creating simulation models which are physically incorrect.
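The error-checking idea, refusing connections that are physically meaningless before a simulation is ever generated, can be sketched with typed block ports. The class and type names here are hypothetical, not the interface's actual design.

```python
class Block:
    """A diagram block with typed input and output ports."""
    def __init__(self, name, in_type, out_type):
        self.name, self.in_type, self.out_type = name, in_type, out_type

def connect(src, dst):
    """Accept a connection only if the source's output signal type
    matches the destination's expected input type."""
    if src.out_type != dst.in_type:
        raise TypeError(f"cannot connect {src.name} ({src.out_type}) "
                        f"to {dst.name} ({dst.in_type})")
    return (src, dst)
```

For example, a laser source (optical output) can feed a detector (optical input), but connecting it directly to an electronic amplifier would be rejected at edit time rather than producing a physically incorrect simulation script.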
This paper addresses the issues associated with simultaneous achievement of high capacity, high data rate, and a short access time in volume holographic memories. We show that fundamental limits impose performance tradeoffs in any volume holographic memory system. When combined with the state of the art in compact lasers, spatial light modulators, and detector arrays, the overall performance of these memories can be bounded. Achieving greater performance will require either significant improvements in these components, or memory architectures which permit parallel storage systems to be used to increase the capacity or data rate. Conversely, component performance requirements should be evaluated within the context of an entire memory system.
This paper presents the results of a two-year Phase II SBIR program investigating a number of the key aspects of the use of Spectral Hole Burning media in a high capacity holographic optical digital computer memory. Factors which were experimentally examined include data longevity and unintentional erasure, and fundamental capacity issues relating to data densities and crosstalk. An experimental memory system was constructed and tested which had all the key elements of a digital memory system. Our experimental results confirm our previous analyses which indicate useful storage densities of 10^12 bytes/cm^3.
This paper describes the construction and operation of a 4D neural network computer. This demonstration system uses holographic interconnects recorded in a volume spectral hole burning medium. The paper provides an overview of the demonstration system and includes experimental details of components: the tunable laser, the detector arrays, the spatial light modulators, and preparation and cooling of the spectral hole burning medium. Experimental results showing association of image patterns and a bidirectional associative memory experiment are presented and discussed.
This paper describes a new hardware architecture for searching and accessing data. This Content Addressable Memory (CAM) can be implemented using holographic storage in spectral hole burning media. The use of laser wavelength as a fourth dimension for volume holographic recording provides an additional addressing variable which can be used to advantage in a CAM architecture. This paper consists of three parts: definition of a CAM, presentation of two CAM concepts for digital data string and analog function search, and a discussion of architecture issues.
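The digital data-string search concept can be sketched in software as a ternary-CAM-style match: a query pattern with don't-care positions is compared against every stored word in parallel, and matching addresses are returned. This is a generic illustration of CAM semantics, not the holographic implementation.

```python
def cam_search(memory, pattern):
    """Return the addresses whose stored word matches the query
    pattern; '?' marks a don't-care position, as in a ternary CAM.

    memory:  dict mapping address -> stored bit string
    pattern: query bit string of the same length, possibly with '?'
    """
    def matches(word):
        return all(p in ("?", w) for p, w in zip(pattern, word))
    return [addr for addr, word in memory.items() if matches(word)]
```

Where a conventional RAM maps address to data, a CAM inverts the lookup: data (or a partial pattern) maps to the set of addresses holding it, which is the search primitive the holographic architecture implements in parallel.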
This paper describes recent results obtained during the experimental development of a holographic optical neural network based upon the spectrally selective recording properties of spectral hole burning materials. This general architecture has been initially tested as a bi-directional associative memory system (a subclass of neural networks). The results obtained clearly demonstrate the fundamental ability to fully connect two 2D planes of digital information. Expectations are that this architecture can be extended to capacities of 10^12 interconnects or greater in a modest form factor system.
This paper describes a performance metric for the evaluation of active coherent imaging systems. This metric can be determined for any system using analytical considerations or measured using standard targets. It has implications for comparison of different imaging systems, optimization of imaging systems, and identification of areas in which significant improvements in particular systems can be realized. The imaging system performance metric described here is suitable for analysis of unconventional imaging systems in which the image is sensed by an array of discrete detectors or in which the image is produced by manipulation of arrays of data. The implications of this formula for determination of photon-efficient and optimized systems are discussed. An important result of this paper will be to show that the efficient use of photons is only part of the story. An efficient system must still be optimized to make best use of the imaging hardware.
Sensor simulation codes such as SPARTA's optical sensor simulation (SENSORSIM) and the Defense Laser-Target Signatures (DELTAS) code require a high-fidelity computer model capable of simulating any type of detection system that might be employed in an optical sensor system. Detection system models for these codes must additionally satisfy the sometimes conflicting needs of a diverse user community. Although current detection models in these codes are accurate and easy to use, they are limited in the types of detection systems that they can simulate and lack the flexibility to incorporate new detection schemes that are still under development or that may be developed in the future. The authors are developing a comprehensive model of optical detection systems that can be integrated into signature simulation codes such as SENSORSIM and DELTAS. This model uses a radically different approach to simulate the performance of all existing and future detection systems. The model features a hierarchical structure that directly corresponds to the design of a detection system. The first (top) level of this hierarchy represents the overall detection system as an assembly of individual components; the components themselves are represented by the second level. Elements of some components, such as image intensifier tubes, may be represented by a third level. At the lowest level of the hierarchy, which may vary from component to component, a sequence of mathematical operations describes the behavior of each component or element. The model simulates a single instance of each random event using a cascaded noise model that is consistent with the philosophy of the SENSORSIM and DELTAS codes. Each detection system is defined by standard text files that follow an intuitive and efficient syntax corresponding to the hierarchy of the detection system model.
This input structure allows components to be defined once and incorporated in many detection systems, minimizing the library maintenance burden, and it also facilitates validation of detection system specifications. The specification files can also be transferred between computer systems electronically, without the special protocols or special conversion of embedded numerical data that would be required for binary files.
Many impressive developments in image simulation technology have led to extensive use of synthetic images in the motion picture industry for special effects and animation, and also in applications such as aircraft flight simulators. Although these images appear correct to the human eye, they generally are not suitable for development of image processing and machine vision applications because the logarithmic response of the human eye does not match the linear response of most electronic detectors. To be useful for development and analysis of image processing and machine vision systems, synthetic images must accurately represent the effects present in detected images, whether produced by the source(s) of illumination, the scene itself, the medium through which the sensor views the scene, the sensor system, or the electronic circuits between the detector array and the processing system. Recent developments have led to the use of laser sensors for various machine vision applications, including collision avoidance, wire detection and avoidance, intrusion detection, and underwater imaging systems. With recent developments in low-cost laser systems, the use of these sensors for numerous machine vision applications is likely to continue to expand for the foreseeable future. SPARTA's work in the area of image synthesis began with the development of a coherent laser radar simulation running on IBM and compatible personal computers, and has since branched into modeling of incoherent active and passive systems as well. SPARTA's current optical imaging sensor simulation, SENSORSIM, is written in ANSI standard FORTRAN 77 to ensure portability.
Active optical systems have potential for both long range discrimination and pointing and tracking missions. The narrow beamwidth and high angular resolution of optics provides advantages which can make optics the sensor of choice for these missions. The large aperture required to achieve high angular resolution presents several problems for conventional optical systems. For imaging with sub-meter resolution at a range of several thousand kilometers, apertures greater than one meter in diameter are required. Apertures of this size are difficult to steer rapidly to image many targets per second. In addition, fabrication of large primary mirrors is more difficult for large aperture sizes. Finally, the weight of large mirrors must scale approximately as D^3 to maintain the mirror figure without active correction; weight equals cost for a space-based system. Active correction requires complex control systems and a beacon or other means for determining correct actuator position.
Retrieval of the phase of a complex analytic function when only the amplitude is known has application to many real-world problems. For example, phase retrieval has been used in astronomical speckle imaging and is being considered for laser speckle imaging systems.[2,3] In the latter case, the computational requirements may limit the number of images which can be obtained in a given time. This paper describes a modification to the iterative Fourier transform algorithm (IFTA) as described by Fienup, which may reduce the computation required to reconstruct an image to a given quality by as much as a factor of three.
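For reference, the baseline error-reduction variant of the iterative Fourier transform algorithm can be sketched as below: a generic Fienup-style loop alternating between the measured Fourier magnitude and object-domain support/positivity constraints. This is the standard unmodified algorithm, not the modification the paper describes; parameter names are illustrative.

```python
import numpy as np

def error_reduction(mag, support, n_iter=200, seed=0):
    """Recover an image from its Fourier magnitude plus a known
    support constraint via Fienup-style error reduction.

    mag:     measured Fourier magnitude array
    support: boolean array, True where the object may be nonzero
    """
    rng = np.random.default_rng(seed)
    # Start from the measured magnitude with a random phase guess.
    phase = rng.uniform(0.0, 2.0 * np.pi, mag.shape)
    g = np.fft.ifft2(mag * np.exp(1j * phase))
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        G = mag * np.exp(1j * np.angle(G))   # enforce measured magnitude
        g = np.fft.ifft2(G)
        # Object-domain constraints: real, nonnegative, inside support.
        g = np.where(support, np.real(g).clip(min=0.0), 0.0)
    return g
```

Each iteration requires one forward and one inverse FFT, which is why the per-iteration cost dominates the total computation the paper's modification aims to reduce.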