The US Army Night Vision and Electronic Sensors Directorate (NVESD) has developed the Night Vision Integrated Performance Model (NV-IPM) for conducting system trade studies and performance evaluations of EOIR imaging systems. Many programs of record carry range performance requirements that utilize the targeting task performance (TTP) metric together with system-level objective measurements of an imager. The imaging system measurements of signal intensity transfer function (SITF), 3-dimensional noise (3D noise), instantaneous field of view (IFOV), and modulation transfer function (MTF) are combined within a measured system component that can be directly implemented in NV-IPM for performance and specification evaluation. IRWindows 4™ (IRWindows) is a software package produced by Santa Barbara Infrared, Inc. (SBIR) for testing electro-optical systems in a variety of laboratory, production, and field environments. In this correspondence, we detail how IRWindows performs the required measurements for TTP evaluation and generates a measured system component for use in NV-IPM to predict the range performance of an IR imaging system. Further, we demonstrate that range performance derived from system measurements agrees between different EOIR laboratories.
The spectral response of cameras is often reported as the product of the individual responses of the elements in the optical path: the lens element coatings, filter (if present), FPA window and coating, and detector elements. This data is often incomplete or inaccurate, as vendors typically provide limited spectral data, or data measured under conditions different from those in the camera system (e.g., a normal-incidence assumption). We have designed and built an instrument for measuring the normalized spectral response of camera systems in the thermal bands (MWIR, 3-6 µm, and LWIR, 8-12 µm). The design utilizes a series of narrowband filters, a cavity blackbody, and other components for conditioning the stimuli presented to the camera. The normalized camera spectral response is obtained by comparing the camera response through each narrowband filter against a reference measurement. In this paper we discuss the modeling and analysis in support of the design, show the final design, and present some preliminary measurements.
Typically, a system-level characterization of a thermal imaging device includes characterizing the objective optics, detector, and readout electronics. Ultimately, the thermal imagery is converted to an 8-bit signal and presented on a display for human visual consumption. In some situations, direct characterization of the pre-sample imaging system is not possible, and measurements must instead be performed by analyzing the output of the display. Additionally, the display and display optics are significant contributors to the performance of the imaging system, yet both are often assumed to be ideal. In this paper, we describe how the underlying imaging system non-uniformity relates to the additional display contributions in the total system non-uniformity. The paper is divided into three parts: the technique and considerations needed to properly measure a system through its display, how this information can be used in the NV-IPM performance model, and a comparison of performance derived from measurements at the pre-sample readout versus measurements made only at the display.
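The non-uniformity discussion above builds on the standard NVESD 3D-noise decomposition of a frame cube. As a rough illustration only (a two-component reduction with synthetic data, not the full directional-average model), the temporal and fixed-pattern components can be separated as:

```python
import numpy as np

def three_d_noise(cube):
    """Split a (frames, rows, cols) cube into temporal and fixed-pattern
    noise estimates. A simplified sketch of the NVESD 3D-noise method,
    which further separates row and column components."""
    sigma_t = cube.std(axis=0, ddof=1).mean()   # mean per-pixel temporal std
    sigma_s = cube.mean(axis=0).std(ddof=1)     # spatial std of the mean frame
    return sigma_t, sigma_s

# synthetic cube: fixed-pattern sigma = 2.0, temporal sigma = 1.0 (arbitrary units)
rng = np.random.default_rng(0)
fpn = rng.normal(0.0, 2.0, size=(64, 64))
cube = fpn + rng.normal(0.0, 1.0, size=(100, 64, 64))
sigma_t, sigma_s = three_d_noise(cube)
```

Note that the spatial term retains a small residual of temporal noise that averaging over frames does not fully remove; display contributions would add further spatial terms on top of this.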
Sensor fusion and novel “multi-image” systems spanning several different spectral ranges are proliferating in tactical and commercial applications. Calibrating these devices requires a variety of sources, from quartz-tungsten halogen lamps to blackbodies to more selectable-band sources such as LEDs. Usually these sources are used independently in discrete spectral regions, but real reflective and emissive targets often have signatures that make combining these sources necessary if one is to emulate real spectra for testing in either image (collimator) or flood (sphere) configurations. A novel approach to combining LED and broadband emitters has been developed to produce stable, calibrated, traceable sources that can match real target spectral signatures.
The modulation transfer function (MTF) describes how an imaging system modifies the spatial frequency content of a scene. Many performance metrics are strongly dependent on the MTF, as it provides information on the limiting resolution. To measure the MTF, an image of a known or assumed scene is analyzed. In this correspondence we detail potential issues that can introduce uncertainty or bias into the calculation of the MTF when using a tilted edge to super-resolve the system MTF of a sampled imaging system. Sources of difference in measuring the system MTF fall into four categories: data corruption, equipment, operator selection, and system-under-test effects. For each category we provide notional examples to demonstrate the severity of the measurement uncertainty, as well as best practices to avoid or reduce their influence. We provide the full 2D derivation of the tilted-edge technique, highlighting the impact of non-uniformity. We discuss the influence of finite regions of interest (ROI) and stray light, defective pixels, non-uniform illumination, and non-square pixels. Additionally, we show how confidence intervals from sensor noise can be estimated and how they are related to frame averaging and ROI size. In support of the reproducible research effort, the Matlab functions associated with this work can be found on the Mathworks File Exchange.
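As a concrete reference for the tilted-edge technique under discussion, a minimal slanted-edge computation is sketched below (synthetic data; the edge angle is assumed known and the edge assumed to pass through the image center, whereas production code estimates both):

```python
import numpy as np

def slanted_edge_mtf(img, angle_deg, oversample=4):
    """Super-resolved MTF from a tilted-edge image: project pixels onto the
    edge normal, bin the edge-spread function (ESF), differentiate to the
    line-spread function (LSF), and take the Fourier magnitude."""
    rows, cols = img.shape
    y, x = np.mgrid[0:rows, 0:cols]
    theta = np.deg2rad(angle_deg)
    # signed distance of each pixel from the edge line (assumed through center)
    d = (x - cols / 2) * np.cos(theta) - (y - rows / 2) * np.sin(theta)
    keep = np.abs(d) <= cols / 4                 # finite ROI about the edge
    bins = np.round(d[keep] * oversample).astype(int)
    bins -= bins.min()
    counts = np.bincount(bins)
    sums = np.bincount(bins, weights=img[keep])
    esf = np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)
    gaps = np.isnan(esf)                         # interpolate any empty bins
    if gaps.any():
        esf[gaps] = np.interp(np.flatnonzero(gaps), np.flatnonzero(~gaps), esf[~gaps])
    lsf = np.diff(esf) * np.hanning(esf.size - 1)  # window limits noise leakage
    mtf = np.abs(np.fft.rfft(lsf))
    freq = np.fft.rfftfreq(lsf.size, d=1.0 / oversample)  # cycles/pixel
    return freq, mtf / mtf[0]

# synthetic 5-degree tilted edge blurred by a smooth (tanh) profile
rows, cols = 64, 64
y, x = np.mgrid[0:rows, 0:cols]
d = (x - cols / 2) * np.cos(np.deg2rad(5.0)) - (y - rows / 2) * np.sin(np.deg2rad(5.0))
img = 0.5 * (1.0 + np.tanh(d / 1.5))
freq, mtf = slanted_edge_mtf(img, 5.0)
```

The ROI restriction, empty-bin handling, and windowing correspond directly to the finite-ROI, defective-pixel, and noise-leakage issues the measurement categories above describe.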
Mass markets, including mobile phones and automotive sensors, drive rapid development of imaging technologies toward high-performance, low-cost sensors, even in the thermal infrared. Good infrared calibration blackbody sources, however, have remained relatively costly. Here we demonstrate how to make low-cost reference sources, making quantitative infrared radiometry accessible to a wider community. Our approach uses ordinary construction materials combined with low-cost microcontrollers, digital temperature sensors, and foil heater elements from mass-market 3D printers. Blackbodies are constructed from a foil heater of some chosen size and shape, attached to the back of a similarly shaped aluminum plate coated with commercial black paint, which normally exhibits high emissivity. The emissivity can be readily checked by using a thermal imager to view the reflection of a hot object. A digital temperature sensor is attached to the back of the plate. Thermal isolation of the backside minimizes temperature gradients through the plate, ensuring correct readings of the front-surface temperature. The isolation also minimizes convection gradients and keeps power consumption low, which is useful for battery-powered operation in the field. We demonstrate surface blackbodies (200 x 200 mm²) with surface homogeneities as low as 0.1°C at 100°C. Homogeneous heating and low thermal mass provide fast settling and short setup/pack-down times. The approach is scalable to larger sizes by tiling, enabling portable and foldable square-meter-size or larger devices.
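The microcontroller side of such a source can be very simple. As a toy illustration only (a lumped thermal model with invented heater power, heat capacity, and loss coefficient), a bang-bang thermostat of the kind a low-cost controller might run is:

```python
def step_plate(temp, heater_on, dt=0.1, ambient=20.0):
    """One time step of a crude lumped model of the heated plate.
    All values (50 W heater, 150 J/K plate, 0.4 W/K loss) are invented."""
    power = 50.0 if heater_on else 0.0
    heat_capacity, loss = 150.0, 0.4
    return temp + dt * (power - loss * (temp - ambient)) / heat_capacity

# bang-bang control to a 100 degC setpoint from the back-side sensor reading
setpoint, temp, history = 100.0, 20.0, []
for _ in range(20000):                 # 2000 s of simulated time
    temp = step_plate(temp, heater_on=(temp < setpoint))
    history.append(temp)
```

Real firmware would add hysteresis or PID control and sensor filtering; the sketch shows only the control structure and why low thermal mass yields a short settling time.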
Type-II InAs/GaSb superlattice (T2SL) detectors have recently matured into a commercially available technology addressing both the MWIR and LWIR spectral domains. As prerequisites such as quantum efficiency (QE) and dark current have been met, more advanced figures of merit related to the electro-optic (EO) system as a whole can now be studied in order to position this technology. In this paper, we focus on modulation transfer function (MTF) measurements. Knowing the MTF of a detector is of primary importance to EO system designers, since spatial filtering affects the system range. We performed MTF measurements on a 320x256 MWIR T2SL FPA provided by IRnova, using a Continuously Self-Imaging Grating (CSIG). The advantage of this experimental configuration is that no high-performance projection optics are required: the CSIG exploits the self-imaging property (known as the Talbot effect) to project a pattern with known spatial frequencies onto the photodetector. Such MTF measurements had never been done in an Integrated Detector Dewar Cooler Assembly (IDDCA) configuration, so we had to study the effect of the vibrations induced by the cryocooler, which affect the MTF measurement in the same way electrical diffusion does. Using three accelerometers, we optimized our experimental setup and extracted MTF measurements with reduced vibrations. The pixel size is 26 µm for a pitch of 30 µm.
This paper discusses a new capability developed for, and results from, a field-portable test set for Gen 2 and Gen 3 Image Intensifier (I²) tube-based Night Vision Goggles (NVG). A previous paper described the test set and the automated and semi-automated tests supported for NVGs, including a knife-edge MTF test to replace the operator's interpretation of the USAF 1951 resolution chart. The major improvement and innovation detailed in this paper is the use of image analysis algorithms to automate the characterization of spot defects of I² tubes with the same test set hardware previously presented. The original and still common Spot Defect Test requires the operator to look through the NVGs at a target of concentric rings, compare the size of the defects to a chart, and manually enter the results into a table based on the size and location of each defect; this is tedious and subjective. The prior semi-automated improvement captures and displays an image of the defects and the rings, allowing the operator to determine the defects with less eyestrain while electronically storing the image and the resulting table. The advanced Automated Spot Defect Test utilizes machine vision algorithms to determine the size and location of the defects, generates the result table automatically, and then records the image and the results in a computer-generated report easily usable for verification. This is an inherently more repeatable process that ensures consistent spot detection independent of the operator. Results across several NVGs will be presented.
The quality of an imaging system can be assessed through controlled laboratory objective measurements. Currently, all imaging measurements require some form of digitization in order to evaluate a metric. Depending on the device, the number of bits available, relative to a fixed dynamic range, determines the severity of quantization artifacts. From a measurement standpoint, measurements should be performed at the highest bit depth available. In this correspondence, we describe the relationship between higher and lower bit-depth measurements and present the limits to which quantization alters the observed measurements. Specifically, we address dynamic range, MTF, SiTF, and noise. Our results provide guidelines for how systems of lower bit depth should be characterized and the corresponding experimental methods.
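The additive quantization-noise model underlying such guidelines is easy to demonstrate. In this sketch (synthetic flat-field data with invented noise levels), dropping least-significant bits with quantization step q inflates the measured temporal noise by approximately q²/12:

```python
import numpy as np

rng = np.random.default_rng(1)
# hypothetical flat-field data: mean 1000 DN, temporal noise sigma = 3 DN
frames = 1000.0 + rng.normal(0.0, 3.0, size=200_000)

def requantize(x, dropped_bits):
    """Emulate a lower bit-depth output by dropping least-significant bits."""
    q = 2 ** dropped_bits
    return np.round(x / q) * q

measured = {b: requantize(frames, b).std(ddof=1) for b in (0, 1, 2)}
# additive model: sigma_out^2 ~= sigma_in^2 + q^2 / 12, valid while sigma >~ q/2
predicted = {b: (3.0 ** 2 + (2 ** b) ** 2 / 12.0) ** 0.5 for b in (0, 1, 2)}
```

Once the noise falls well below the quantization step the additive model breaks down, which is one reason low-bit-depth systems need the modified characterization methods discussed above.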
The Modulation Transfer Function (MTF) of an imaging device is a strong indicator of its resolution-limited performance. The MTF at the system level is commonly treated as separable, with the optical MTF multiplying the post-optic (detector) MTF to give the system MTF. As new detector materials and methods have become available, and as the manufacturing of detectors has been separated from that of the optical system, independently measuring the MTF of the detector is of great interest. In this correspondence, a procedure for measuring the post-optic MTF of a mid-wave (3-5 micron) sampled imager is described. This is accomplished through a careful measurement of a reference optic that is later installed to allow a final system MTF measurement. The key finding is that matching the chromatic shape of the illumination between the optic and system MTF measurements is critical, as in both measurements the effective MTF is scaled by the source and detector spectral shapes. This is most easily accomplished through the use of narrow bandpass filters. Our results are consistent across bandpass filter cut-on wavelengths and F/numbers.
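Under the separability assumption, the procedure reduces to a division of measured curves. A schematic numeric example (with an invented Gaussian stand-in for the measured reference-optic MTF and an ideal square-pixel detector) is:

```python
import numpy as np

f = np.linspace(0.0, 0.5, 51)                # spatial frequency, cycles/pixel
mtf_detector_true = np.abs(np.sinc(f))       # ideal square pixel (width = pitch)
mtf_optic = np.exp(-(f / 0.6) ** 2)          # stand-in for the reference optic
mtf_system = mtf_optic * mtf_detector_true   # separable-system assumption

# post-optic MTF recovered by dividing out the reference-optic measurement;
# guard against noise blow-up where the optic MTF approaches zero
mtf_detector = np.where(mtf_optic > 1e-3, mtf_system / mtf_optic, 0.0)
```

Because both measured curves are spectrally weighted by the source and detector, the division only cancels the optic correctly when the illumination spectra match, which is the paper's key point about using narrow bandpass filters.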
The infrared signature of a defence platform is strongly influenced by environmental conditions. This paper will outline two alternative methods of selecting a subset of climatic data to represent a large dataset as an input for infrared signature modelling. A binning and ranking algorithm first presented by Vaitekunas and Kim (2013) will be compared with a genetic algorithm. The results for five different geographic locations will be assessed by comparing the cumulative distribution functions of the subset and the original dataset. Quantile-Quantile plots and the Kolmogorov-Smirnov statistic will be used to assess the solutions from the two algorithms.
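The subset-quality measure used above is concrete: the two-sample Kolmogorov-Smirnov statistic is the maximum separation between empirical CDFs. A minimal sketch with invented temperature data shows why a stratified subset scores well and a biased one does not:

```python
import numpy as np

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: max |ECDF_a - ECDF_b|."""
    grid = np.sort(np.concatenate([a, b]))
    cdf_a = np.searchsorted(np.sort(a), grid, side="right") / a.size
    cdf_b = np.searchsorted(np.sort(b), grid, side="right") / b.size
    return np.abs(cdf_a - cdf_b).max()

rng = np.random.default_rng(4)
climate = rng.normal(15.0, 8.0, size=10_000)     # hypothetical air temperatures
stratified = np.quantile(climate, np.linspace(0.005, 0.995, 100))
hottest_only = np.sort(climate)[-100:]           # a deliberately poor subset

ks_good = ks_statistic(climate, stratified)
ks_bad = ks_statistic(climate, hottest_only)
```

A selection algorithm, whether binning-and-ranking or genetic, can use this statistic (per climate variable) directly as its fitness function.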
The effects of Hull Film Cooling (HFC) water-spray systems on infrared detection have already been presented by Vaitekunas and Kim. This paper further assesses the impact of zonal Active Hull Cooling (AHC) water-spray systems on the infrared detection of naval ships to ensure their proper usage against a range of infrared-guided anti-ship missiles. The transient performance of such systems is also assessed using a fully transient version of ShipIR/NTCS, which includes a laminar-flow water-film convection model to simulate the effects of an activated water spray. The model can be used to analyse the performance of a proposed design and to adjust the nozzle count to meet a specific requirement. Recent efforts to validate the new model against existing experiments are also described.
Asynchronous Detectors (ADs) are an emerging technology with the potential to improve the dynamic range, latency, bandwidth needs, and power requirements of military imaging systems. These capabilities are especially important to the effectiveness of digital imagers for Soldier mobility and autonomous systems. Similar to the human visual system, ADs ignore the redundancy of static portions of the scene, recording only temporal changes. When coupled with motion, both spatial and temporal variations of a scene can be observed. Using simulated data, the performance of AD-based imaging systems is compared to that of traditional synchronous, full-frame imaging systems. This work outlines a methodology for determining whether this emerging technology has significant potential for Soldier mobility and autonomous applications.
The authors reflect on their three decades in the field of infrared (IR) projection development and test. This merry group of youngsters had no idea this niche technology would consume them, pit them against each other, and ultimately unite them. Over the years we fought to make our technologies a viable contributor to the development of the next generation of “smart” weapons. Starting with Liquid Crystal Light Valves (LCLV) and Bly-Cells, we sought solutions that ultimately became critical components in many of today’s weapon systems’ ground test activities. Today’s resistor emitter arrays have been the standard on which the HWIL community has relied for nearly twenty years, but as Focal Plane Arrays (FPA) advance in size, speed, and sensitivity, and as the Department of Defense advances its mission with these new capabilities, we are challenged to seek new technology solutions. These technologies include advances in novel resistor emitters, IR Light Emitting Diodes (IRLED), Carbon Nano-Tube (CNT) materials, and Digital Micromirror Devices.
Non-uniformity correction (NUC) is a standard procedure for infrared (IR) cameras. The effect of lens temperature, however, is often ignored during the implementation of a NUC. Ignoring it is acceptable if the lens temperature is much lower than the ambient temperature, so that its irradiance onto the focal plane array (FPA) is much less than that of the scene, or if the lens temperature during the NUC calibration is the same as that during the scene collection. A change in lens temperature between the NUC calibration and the scene collection, however, degrades performance. We present this degradation in image quality using frames taken by a mid-wave infrared (MWIR) camera. An empirical law is established to mitigate the effect of lens temperature, which offers various options for NUC. As an example, we propose a four-point NUC that mitigates the effect of the lens temperature, and we demonstrate its usefulness by applying it to frames taken at various lens temperatures. The results are satisfactory.
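For reference, the standard two-point NUC that the proposed four-point scheme extends can be sketched as follows (synthetic gain and offset maps, noise-free, and no lens-temperature term):

```python
import numpy as np

rng = np.random.default_rng(2)
gain_true = rng.normal(1.0, 0.05, size=(32, 32))    # per-pixel gain map
offset_true = rng.normal(0.0, 10.0, size=(32, 32))  # per-pixel offset map

def fpa(flux):
    """Hypothetical FPA response to a uniform blackbody (noise-free)."""
    return gain_true * flux + offset_true

# two-point NUC: image two uniform blackbody levels, solve per-pixel gain/offset
t_low, t_high = 100.0, 400.0
low, high = fpa(t_low), fpa(t_high)
gain = (high - low) / (t_high - t_low)
offset = low - gain * t_low

corrected = (fpa(250.0) - offset) / gain   # non-uniformity removed
```

Because the lens adds its own temperature-dependent irradiance term that this linear model does not capture, the correction drifts when the lens temperature changes; the four-point variant above adds calibration points to absorb that dependence.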
The Spectrally Agile Frequency-Incrementing Reconfigurable (SAFIRE) mobile radar system was developed and exercised at an arid U.S. test site. The system can detect hidden targets using synthetic aperture radar (SAR), a global positioning system (GPS), dual stereo color cameras, and dual stereo thermal cameras. An Augmented Reality (AR) software interface allows the user to see a single fused video stream containing the SAR, color, and thermal imagery. The stereo sensors allow the AR system to display both fused 2D imagery and 3D metric reconstructions, where the user can "fly" around the 3D model and switch between the modalities.
A modeling scheme for active imaging through atmospheric turbulence is presented. The model consists of two parts: in the first, the illumination laser beam is propagated to a target, described by its reflectance properties, using the well-known split-step Fourier method for wave propagation. In the second, the reflected intensity distribution imaged onto a camera is computed using an empirical model developed for passive imaging through atmospheric turbulence. The split-step Fourier method requires carefully chosen simulation parameters. These requirements, together with the need to produce dynamic scenes with a large number of frames, led us to implement the model on a GPU. Validation of this implementation is shown for two different metrics. The model is well suited to gated-viewing applications, and examples of imaging simulation results are presented.
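The first (physical-optics) part has a compact core. A minimal split-step sketch with uncorrelated white-noise phase screens is shown below; a real turbulence simulation would instead use Kolmogorov-spectrum screens and enforce the sampling constraints noted above, and all grid and screen values here are invented:

```python
import numpy as np

def fresnel_step(field, dx, wavelength, dz):
    """Propagate a complex field by dz with the angular-spectrum (Fresnel) method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    fsq = fx[:, None] ** 2 + fx[None, :] ** 2
    transfer = np.exp(-1j * np.pi * wavelength * dz * fsq)
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

rng = np.random.default_rng(3)
n, dx, wavelength = 128, 1e-3, 1e-6               # 1 mm grid, 1 um light
field = np.ones((n, n), dtype=complex)            # collimated illumination beam
for _ in range(5):                                # alternate screen and propagation
    screen = rng.normal(0.0, 0.3, size=(n, n))    # toy phase screen, radians
    field = fresnel_step(field * np.exp(1j * screen), dx, wavelength, 100.0)
intensity = np.abs(field) ** 2
```

Each step is a pair of FFTs over the full grid, which is why the method maps so naturally onto a GPU when many frames are needed.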
The MITA (Motion Imagery Task Analyzer) project was conceived by CBP OA (Customs and Border Protection - Office of Acquisition) and executed by JHU/APL (Johns Hopkins University/Applied Physics Laboratory) and CERDEC NVESD MSD (Communications and Electronics Research Development Engineering Command Night Vision and Electronic Sensors Directorate Modeling and Simulation Division). The intent was to develop an efficient methodology whereby imaging system performance could be quickly and objectively characterized in a field setting. The initial design, development, and testing spanned approximately 18 months, with the initial project concluding after testing of the MITA system in June 2017 with a fielded CBP system. The NVESD contribution to MITA was thermally heated target resolution boards deployed at a range close to the sensor and, when possible, at range with the targets of interest. JHU/APL developed a laser DIMM (Differential Image Motion Monitor) system designed to measure the optical turbulence present along the line of sight of the imaging system during image collection. The imagery collected of the target board was processed to calculate the in situ system resolution. This in situ imaging system resolution and the time-correlated turbulence measured by the DIMM system were used in NV-IPM (Night Vision Integrated Performance Model) to calculate the theoretical imaging system performance. Overall, this demonstrates that the MITA concept is feasible. However, MITA is still in the initial phases of development and requires further verification and validation to ensure accuracy and reliability of both the instrument and the imaging system performance predictions.
At NVESD, the targeting task performance (TTP) metric applies a weighting of different system specifications, determined from the scene geometry, to calculate a probability of task performance. In this correspondence we detail how to utilize an imaging system specification document to obtain a baseline performance estimate using the Night Vision Integrated Performance Model (NV-IPM), along with the corresponding requirements and potential assumptions. We then discuss how measurements can be performed to update the model to give a more accurate prediction of performance, detailing the procedure used at the NVESD Advanced Sensor Evaluation Facility (ASEF) lab utilizing the Night Vision Laboratory Capture (NVLabCap) software. Finally, we show how the outputs of the measurement can be compared to those of the initial specification-sheet-based model and evaluated against a requirements document. The modeling components and data sets produced for this work are available upon request and will serve as a performance benchmark for both modeling and measurement methods.
The U.S. Army RDECOM CERDEC NVESD MSD’s target acquisition models have been used for many years by the military analysis community for sensor design, trade studies, and field performance prediction. This paper analyzes the results of perception tests performed to compare modeled range performance predictions to the results of a perception test using real imagery with synthetically added vibration profiles. The purpose of the experiment is to validate model performance predictions in the presence of vibration and to continue the development of a robust methodology and data set for use in the virtual prototyping of infrared sensors. This methodology will continue to provide a strong foundation relating model predictions, field DRI results, and simulated imagery.
ECOMOS is a multinational effort within the framework of an EDA Project Arrangement. Its aim is to provide a generally accepted and harmonized European computer model for computing nominal Target Acquisition (TA) ranges of optronic imagers operating in the visible or thermal infrared (IR). The project involves close cooperation of defense and security industry and public research institutes from France, Germany, Italy, the Netherlands, and Sweden. ECOMOS uses two approaches to calculate TA ranges: the analytical TRM4 model and the image-based Triangle Orientation Discrimination (TOD) model.
In this paper the IR imager simulation tool Optronic System Imaging Simulator (OSIS) is presented. It produces the virtual camera imagery required by the TOD approach. Pristine imagery is degraded by various effects caused by atmospheric attenuation, optics, detector footprint, sampling, fixed pattern noise, temporal noise, and digital signal processing. The resulting images may be presented to observers or further processed for automatic image quality calculations.
For convenience, OSIS incorporates camera descriptions and intermediate results provided by TRM4. As input, OSIS uses pristine imagery tagged with metadata about scene content, physical dimensions, and gray-level interpretation. These images represent planar targets placed at specified distances from the imager.
Furthermore, OSIS is extended by a plugin functionality that enables integration of advanced digital signal processing techniques into ECOMOS, such as compression, local contrast enhancement, and digital turbulence mitigation, to name but a few. By means of this image-based approach, image degradations and image enhancements can be investigated, which goes beyond the scope of the analytical TRM4 model.
We present Minimum Resolvable Temperature Difference (MRTD) curves obtained by letting an ensemble of observers judge how many of the six four-bar patterns they can “see” in a set of images taken with different bar-to-background contrasts. The same images are analyzed using elemental signal analysis algorithms, and machine-analysis-based MRTD curves are obtained. We show that, by adjusting the minimum required signal-to-noise ratio, the machine-based MRTDs are very similar to those obtained with the help of the human observers.
A core component of modeling visible and infrared sensor responses is the ability to faithfully recreate background noise and clutter in a synthetic image. Most tracking and detection algorithms use a combination of signal-to-noise or clutter-to-noise ratios to determine whether a signature is of interest. A primary source of clutter is the background that defines the environment in which a target is placed. Over the past few years, the Electro-Optical Systems Laboratory (EOSL) at the Georgia Tech Research Institute has made significant improvements to its in-house simulation framework, GTSIMS. First, we have expanded our terrain models to include the effects of terrain orientation on emission and reflection. Second, we have included the ability to model dynamic reflections with full BRDF support. Third, we have added the ability to render physically accurate cirrus clouds. Finally, we have updated the overall rendering procedure to reduce the time necessary to generate a single frame by taking advantage of hardware acceleration. Here, we present the updates to GTSIMS to better predict clutter and noise due to non-uniform backgrounds. Specifically, we show how the addition of clouds, terrain, and improved non-uniform sky rendering improves our ability to represent clutter during scene generation.
The SWIR waveband between 0.8 µm and 1.8 µm is increasingly exploited by imaging systems in a variety of applications, including persistent imaging for security and surveillance of high-value assets, handheld tactical imagers, range-gated imaging systems, and imaging LADAR for driverless vehicles. The vast majority of these applications use lattice-matched InGaAs detectors in their imaging sensors, and these sensors are rapidly falling in price, leading to their widening adoption. As these sensors are used in novel applications and locations, it is important that ambient SWIR backgrounds be understood and characterized for a variety of field conditions, primarily for the purposes of system performance modeling of SNR and range metrics. SWIR irradiance backgrounds do not consistently track visible-light illumination, and there is currently little such information in the open literature, particularly measurements of SWIR backgrounds in urban areas, natural areas, or indoors. This paper presents field measurements made with an InGaAs detector calibrated in the swux, a unit of InGaAs-band-specific irradiance proposed by two of the authors in 2017. Simultaneous measurements of illuminance levels (in lux) at these sites are presented, as well as visible and InGaAs camera images of the scenery at some of the measurement sites. The swux and lux measurement hardware is described, along with the methods used to calibrate it. Finally, the swux levels during the partial and total phases of the total solar eclipse of 2017 are presented, along with curves fitted to the data from a theoretical model based on obscuration of the sun by the moon. The apparent differences between photometric and swux measurements are discussed.
Simulation-based training for target acquisition algorithms is an important goal for reducing the cost and risk associated with live data collections. To this end, the US Army Night Vision and Electronic Sensors Directorate (NVESD) has developed high-fidelity virtual scenes of terrains and targets using DIRSIG in pursuit of a virtual DRI (detect, recognize, identify) capability. In this study, NVESD has developed a neural network (NN) algorithm that can be trained on simulated data to classify targets of interest when presented with real data. This paper discusses the classification performance of the NN algorithm and the potential impact that training with simulated data has on algorithm performance.
The problem of predicting real-world operator performance with digital imaging devices is of great interest within the military and commercial domains. There are several approaches to this problem, including field trials with imaging devices, laboratory experiments using imagery captured from these devices, and models that predict human performance based on imaging device parameters. The modeling approach is desirable, as both field trials and laboratory experiments are costly and time-consuming; however, data from these experiments are required for model validation. Here we considered this problem in the context of dismounted soldiering, for which detection and identification of human targets are essential tasks. Human performance data were obtained for two-alternative detection and identification decisions in a laboratory experiment in which photographs of human targets were presented on a computer monitor and digitally magnified to simulate range-to-target. We then compared the predictions of different performance models within the NV-IPM software package: the Targeting Task Performance (TTP) metric model and the Johnson model. We also introduced a modification to the TTP metric computation that incorporates an additional correction for target angular size. We examined model predictions using NV-IPM default values for a critical model constant, V50, and also considered predictions when this value was optimized to fit the behavioral data. When using default values, certain model versions produced a reasonably close fit to the human performance data in the detection task, while for the identification task all models substantially overestimated performance. When using fitted V50 values the models produced improved predictions, though the slopes of the performance functions were still shallow compared to the behavioral data. These findings are discussed in relation to the models’ designs and parameters, and the characteristics of the behavioral paradigm.
The U.S. Army’s RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD) Perception Lab is tasked with supporting the development of sensor systems for the U.S. Army by evaluating human performance of emerging technologies. Typical research questions involve detection, recognition and identification as a function of range, blur, noise, spectral band, image processing techniques, image characteristics, and human factors. NVESD’s Perception Lab provides an essential bridge between the physics of the imaging systems and the performance of the human operator. In addition to quantifying sensor performance, perception test results can also be used to generate models of human performance and to drive future sensor requirements. The Perception Lab seeks to develop and employ scientifically valid and efficient perception testing procedures within the practical constraints of Army research, including rapid development timelines for critical technologies, unique guidelines for ethical testing of Army personnel, and limited resources. The purpose of this paper is to describe NVESD Perception Lab capabilities, recent methodological improvements designed to align our methodology more closely with scientific best practice, and to discuss goals for future improvements and expanded capabilities. Specifically, we discuss modifying our methodology to improve training, to account for human fatigue, to improve assessments of human performance, and to increase experimental design consultation provided by research psychologists. Ultimately, this paper outlines a template for assessing human perception and overall system performance related to EO/IR imaging systems.
Target acquisition range predictions are based upon system MTF analysis. Johnson linked cycles on target (sounds like an MTF), N50, to detection, recognition, and identification (DRI). Over time, the models changed, with V50 replacing N50. Not surprisingly, the DRI V50 values changed as the models evolved, and current DRI V50 values appear to be target specific. What is changing is the detail size needed for DRI, not V50. Rather than have a V50 for each task, we propose using V50 = 2 for detecting specific target features. As the feature size decreases, target detail becomes more prominent, leading to recognition and identification.
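The relation being reworked here is the classic one. A sketch using the commonly quoted target transfer probability function, with a hypothetical sensor resolution, target size, and task criterion, is:

```python
def ttpf(n, n50):
    """Empirical target transfer probability function used with Johnson-style
    cycle criteria: P = (N/N50)^E / (1 + (N/N50)^E), E = 1.51 + 0.24*(N/N50)."""
    ratio = n / n50
    e = 1.51 + 0.24 * ratio
    return ratio ** e / (1.0 + ratio ** e)

def cycles_on_target(critical_dim_m, range_m, resolvable_cyc_per_mrad):
    """Resolvable cycles subtended by the target critical dimension."""
    return 1000.0 * critical_dim_m / range_m * resolvable_cyc_per_mrad

# invented example: 4 cyc/mrad sensor, 2.3 m target, criterion N50 = 4
p_2km = ttpf(cycles_on_target(2.3, 2000.0, 4.0), 4.0)
p_6km = ttpf(cycles_on_target(2.3, 6000.0, 4.0), 4.0)
```

The proposal above keeps this functional form but replaces task-specific criteria with a fixed V50 = 2 applied to task-specific feature sizes.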
Well-known detection metrics based on the Johnson criteria or the Target Task Performance (TTP) model were developed for land-based targets [1,2]. In this paper we investigate whether these metrics can be applied to the recognition and identification of ships at sea. Large sea targets distinguish themselves from land-based targets by their large aspect ratio, when seen broadside, and by their relatively large and hot plume. We shall only address the second of these two issues here. First, however, we investigate how the simple Johnson approach to recognition and identification stacks up against the TTP approach. The Johnson approach has clear and simple criteria for measuring target task performance. To apply the TTP model, N50 (V50) values need to be found through observer trials. We avoid these trials here and instead estimate the criteria from a comparison of the two models. From analysis of LWIR and MWIR recordings of a multipurpose ship running outbound and inbound tracks, we find little difference between the two metrics. As mentioned, we study the effect of the plume on task-performance ranges by considering two different estimates of the target contrast: the average contrast, and the root sum of squares of this average contrast and the standard deviation of the contrast. We argue that the plume skews the recognition and identification ranges toward far too optimistic values when the standard deviation is included. In other words, although the plume helps to detect the target, it does not help the recognition or identification task. A more careful definition of the temperature contrast needs to be applied when these models are used.
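The two contrast estimates compared above can be written compactly. A sketch follows, with hypothetical pixel data; the function names and the scalar background mean are illustrative, not from the paper:

```python
import numpy as np

def contrast_estimates(target_pixels, background_mean):
    """Return the two target-contrast estimates discussed above:
    (1) the mean contrast over the target, and
    (2) the root sum of squares (RSS) of that mean contrast and the
        standard deviation of the contrast over the target.
    """
    dt = np.asarray(target_pixels, dtype=float) - background_mean
    mean_c = dt.mean()
    rss_c = np.hypot(mean_c, dt.std())   # sqrt(mean^2 + std^2)
    return mean_c, rss_c
```

A hot plume inflates the standard-deviation term, so the RSS estimate always exceeds the mean estimate for a non-uniform target; this is the mechanism behind the overly optimistic recognition and identification ranges noted above.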
Triangle orientation discrimination (TOD) is an observer task useful for characterizing the practical performance of electro-optical imaging systems. We generated simulated imagery of TOD targets viewed at various ranges through a High Definition Long-wave (HDLW) imaging system. The simulated imagery was presented to human subjects in a TOD experiment, and the probability of identification (PID) was determined as a function of range. We present the PID curves and show that TOD is suitable for field testing of ID range.
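TOD trials are typically scored as a four-alternative forced choice (the equilateral triangle points up, down, left, or right), so a measured fraction correct is usually chance-corrected before being reported as a probability of identification. A minimal sketch of that standard correction (the function name is illustrative):

```python
def tod_corrected_pid(p_measured):
    """Chance-correct a measured fraction correct from a 4-alternative
    forced-choice TOD trial (chance level 0.25) to a 0-1 PID scale."""
    return max(0.0, (p_measured - 0.25) / 0.75)
```

Guessing performance (25% correct) maps to PID = 0, and perfect performance maps to PID = 1.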
Unmanned aerial vehicles (UAVs) have become far more readily available in the past five years and are proliferating rapidly. New aviation regulations are accelerating the use of UAVs in many applications. As a result, there are increasing concerns about potential air threats in settings such as commercial airport security and drug trafficking. In this study, radiometric signatures of commercially available miniature UAVs are determined for long-wave infrared (LWIR) bands under both clear-sky and partly cloudy conditions. Results are presented that compare LWIR performance estimates for the detection of commercial UAVs via infrared search and track (IRST) systems with two candidate sensors.
Panoramic imaging is inherently wide field of view, while high-sensitivity uncooled long-wave infrared (LWIR) imaging requires low F-number optics. These two requirements result in short back-working-distance designs that, in addition to being costly, are challenging to integrate with commercially available uncooled LWIR cameras and cores. Common challenges include relocation of the shutter flag, custom calibration of the camera dynamic range and NUC tables, focusing, and athermalization. Solutions to these challenges add to system cost and make panoramic uncooled LWIR cameras commercially unattractive. In this paper, we present the design of Panoramic Imaging Relay Optics (PIRO) and show imagery and test results from one of the first prototypes. PIRO designs use several reflective surfaces (generally two) to relay a panoramic scene onto a real, donut-shaped image. The PIRO donut is imaged onto the focal plane of the camera using a commercial off-the-shelf (COTS) low F-number lens. This approach results in low component cost and straightforward integration with pre-calibrated, commercially available cameras and lenses.
In this article, a method for applying matched filters to a 3-dimensional hyperspectral data cube is discussed. In many applications, color visible cameras or hyperspectral cameras are used for target detection when the color or spectral optical properties of the imaged materials are partially known in advance. The use of matched filtering with spectral data along with shape data is therefore an effective method for detecting certain targets. Since many methods for 2D image filtering have already been researched, we propose a multi-layer filter in which ordinary spatially matched filters are applied before the spectral filters. We discuss a way to layer the spectral filters for a 3D hyperspectral data cube, accompanied by a detectability metric for calculating the SNR of the filter. The method is applicable to both visible color cameras and hyperspectral cameras. We also present an analysis using the Night Vision Integrated Performance Model (NV-IPM) and a Monte Carlo simulation to confirm that the filtering provides a higher output SNR and a lower false-alarm rate.
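As background for the spectral stage of such a filter, a per-pixel whitened spectral matched filter over a hyperspectral cube can be sketched as follows. This is a generic textbook formulation under assumed names and regularization, not the specific multi-layer filter proposed in the paper:

```python
import numpy as np

def spectral_matched_filter(cube, target_spectrum):
    """Apply a whitened spectral matched filter to each pixel of a
    hyperspectral cube of shape (rows, cols, bands).  Returns a score
    map; higher scores indicate a closer match to target_spectrum
    relative to the estimated background statistics."""
    h, w, b = cube.shape
    x = cube.reshape(-1, b)
    mu = x.mean(axis=0)                                # background mean spectrum
    cov = np.cov(x, rowvar=False) + 1e-6 * np.eye(b)   # regularized covariance
    s = target_spectrum - mu
    q = np.linalg.solve(cov, s)                        # whitened filter C^-1 s
    scores = (x - mu) @ q / np.sqrt(s @ q)             # normalized output
    return scores.reshape(h, w)
```

Normalizing by sqrt(s^T C^-1 s) makes the score directly interpretable as an SNR-like detectability measure in the spirit of the metric discussed above.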
Terahertz (THz) and millimeter-wave sensors are becoming increasingly important in industrial, security, medical, and defense applications. A major problem in these sensing areas is the resolution, sensitivity, and visual acuity of the imaging systems. Several fundamental design parameters have significant effects on imaging performance. The performance of THz systems can be discussed in terms of two characteristics: sensitivity and spatial resolution. New approaches to the design and manufacture of THz imagers are a vital basis for developing future applications. Photonics solutions have been at the technological forefront in THz-band applications. A single scanned antenna does not provide adequate resolution, sensitivity, or speed. An effective approach to imaging is to place high-performance antennas in a two-dimensional array to achieve higher radiation efficiency and higher resolution. Here, we present the performance modeling of a pupil-plane imaging system to determine its resolution and sensitivity.
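The spatial-resolution side of this trade can be bounded with a simple diffraction argument. A sketch of the Rayleigh criterion for a filled circular aperture follows; the wavelength and aperture values in the usage note are illustrative only and are not tied to any specific design in the paper:

```python
def rayleigh_angular_res_mrad(wavelength_m, aperture_m):
    """Diffraction-limited angular resolution (Rayleigh criterion,
    1.22 * lambda / D) for a filled circular aperture, in mrad."""
    return 1.22 * wavelength_m / aperture_m * 1e3
```

For example, at a 1 mm wavelength (0.3 THz) with a 0.3 m aperture, the limit is roughly 4 mrad, which illustrates why long THz wavelengths make fine spatial resolution difficult without large apertures or arrays.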
Range performance is a key factor for an infrared search and track system whose purpose is detection, recognition, and identification. The prediction of the expected range performance is therefore of utmost importance. The range prediction involves many variables that affect the outcome. Wavelength is one of the most important parameters because it has an enormous effect on range, but detector technology is also directly related to range performance. In this study, MWIR and LWIR imaging systems in certain configurations are modelled and analyzed in terms of range. The imaging system is modelled taking into account the properties of the detector and the optics, while the atmospheric conditions are modelled using MODTRAN. Analytical expressions for detection, recognition, and identification ranges with respect to the Johnson criteria are derived for different target types. The effects of the given parameters on range performance are examined and a comparison between the different wavelengths is discussed.
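The geometric core of a Johnson-criteria range expression can be sketched in a few lines. This is only the resolution-limited (sampling) bound; a full prediction like the one in the paper also folds in MTF, noise, and MODTRAN atmospheric transmission, and the cycle criteria below are classical assumed values:

```python
def johnson_range_km(critical_dim_m, ifov_mrad, n_req):
    """Resolution-limited range (km) at which n_req Johnson cycles
    span a target of the given critical dimension (m), for a sensor
    with the given IFOV (mrad).  One cycle spans two IFOVs.  Ignores
    MTF, noise, and atmospheric effects, so this is an upper bound
    on the DRI range, not a complete prediction."""
    cycle_angle_rad = 2.0 * n_req * ifov_mrad * 1e-3
    return critical_dim_m / cycle_angle_rad / 1000.0
```

With classical cycle criteria (roughly 1 cycle for detection, 4 for recognition, 8 for identification), a 2.3 m target viewed by a 0.1 mrad IFOV sensor gives a recognition bound near 2.9 km.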