Advances in training and simulation technologies, particularly in the arena of augmented reality systems, not only enable a more immersive training experience for users, but also provide more opportunities to execute training both within and outside of operational environments. The Counter-Mine Augmented Reality Training System (CMARTS) effort investigated the fusion of augmented reality and embedded training capabilities into fielded hand-held mine detectors to support performance training at home stations as well as in operational environments. The resulting system, which includes both metal detection and ground penetrating radar (GPR) sensors, provides: (1) real-time operator feedback in the form of augmented visualizations and (2) embedded “anywhere-anytime” mine detection training using simulated targets. The real-time feedback consists of head-mounted display and tablet visualizations indicating which areas of ground have already been scanned, markers for devices that have already been located, problems with user swing, detector height and swing speed, and device power status. The embedded training capability enables operators to practice mine detection in both indoor and outdoor environments with synthetic mines, including actual device responses to simulated detections with real-time feedback. CMARTS will provide the basis for further investigation of augmented reality applications to support both real-time operations and training, not only for mine detection but for other sensing modalities and target types as well. This paper begins with a discussion of mine detection training challenges, followed by the design and development of CMARTS, and concludes with possible future efforts.
The U.S. Army RDECOM CERDEC NVESD MSD’s target acquisition models have been used for many years by the military analysis community for sensor design, trade studies, and field performance prediction. This paper analyzes the results of perception tests conducted to compare a field DRI (Detection, Recognition, and Identification) test performed in 2009 with current Soldier performance viewing the same imagery in a laboratory environment, as well as simulated imagery of the same data set. The purpose of the experiment is to build a robust data set for use in the virtual prototyping of infrared sensors. This data set will provide a strong foundation relating model predictions, field DRI results, and simulated imagery.
There is a ubiquitous and never-ending need in the U.S. armed forces for training materials that provide the warfighter with the skills needed to differentiate between friendly and enemy forces on the battlefield. The current state of the art in battlefield identification training is the Recognition of Combat Vehicles (ROC-V) tool created and maintained by the Communications-Electronics Research, Development and Engineering Center Night Vision and Electronic Sensors Directorate (CERDEC NVESD). The ROC-V training package utilizes measured visual and thermal imagery to train soldiers on the critical visual and thermal cues needed to accurately identify modern military vehicles and combatants. This paper presents an approach developed to augment the existing ROC-V imagery database with synthetically generated multi-spectral imagery, allowing NVESD to provide improved training imagery at significantly lower cost.
Major decisions regarding life and death are routinely made on the modern battlefield, where the visual function of the individual soldier can be of critical importance in the decision-making process. Glasses in the combat environment have considerable disadvantages: short-term visual performance can degrade as dust and sweat accumulate on lenses during a mission or patrol; long-term visual performance can diminish as lenses become increasingly scratched and pitted; and during periods of intense physical trauma, glasses can be knocked off the soldier’s face and lost or broken. Although refractive surgery offers certain benefits on the battlefield when compared to wearing glasses, it is not without potential disadvantages. As a byproduct of refractive surgery, elevated optical aberrations can be induced, causing decreases in contrast sensitivity and increases in the symptoms of glare, halos, and starbursts. Typically, these symptoms occur under low-light-level conditions, the same conditions under which most military operations are initiated. With the advent of wavefront aberrometry, we are now seeing correction not only of myopia and astigmatism but also of other, smaller optical aberrations that can cause the above symptoms. In collaboration with the Warfighter Refractive Eye Surgery Program and Research Center (WRESP-RC) at Fort Belvoir and Walter Reed National Military Medical Center (WRNMMC), the overall objective of this study is to determine the impact of wavefront-guided (WFG) versus wavefront-optimized (WFO) photorefractive keratectomy (PRK) on military task visual performance. Psychophysical perception testing was conducted before and after surgery to measure each participant’s performance in target detection and identification using thermal imagery. The results are presented here.
Search and detection are everyday tasks for many biological systems, performed almost innately, whether consciously or subconsciously, and necessary for survival. Search and target detection, in particular, are the first stages in visual observation tasks associated with military target acquisition, industrial inspection, traffic control, and many other applications. These tasks are now often performed with the aid of an electro-optic viewing system. In this context, search is the process by which an observer surveys his surroundings, and detection is the process of successfully declaring a desired target as such; more precise interpretations of these terms are given in what follows and in the included papers.
For more than 50 years, the U.S. Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD) has been studying the science behind the human processes of searching and detecting, and using that knowledge to develop and refine its models for military imaging systems. Modeling how human observers perform military tasks while using imaging systems in the field, and linking that model with the physics of the systems, has resulted in the comprehensive sensor models we have today. These models are used by the government, military, industry, and academia for sensor development, sensor system acquisition, military tactics development, and war-gaming. From the original hypothesis put forth by John Johnson in 1958, to modeling time-limited search, to modeling the impact of motion on target detection, to modeling target acquisition performance in different spectral bands, the concept of search has a wide-ranging history. Our purpose is to present a snapshot of that history; as such, this paper begins with a description of the search-modeling task, followed by a summary of highlights from the early years, and concludes with a discussion of search and detection modeling today and the changing battlefield. Topics discussed include classic search, clutter, computational vision models, and the ACQUIRE model with its variants. We do not claim to present a complete history here, but rather a look at some of the work that has been done; this is meant to be an introduction to an extensive body of work on a complex topic. That said, it is hoped that this overview of the history of search and detection modeling of military imaging systems, pursued by NVESD directly or in association with other government agencies and contractors, will provide both the novice and the experienced search modeler with a useful historical summary and an introduction to current issues and future challenges.
The Federal Aviation Administration (FAA) is presently engaged in research to quantify the visibility of aircraft under two important scenarios: aircraft observed directly by human operators in air traffic control towers (ATCTs), and aircraft observed by human operators through unmanned aerial vehicle (UAV) sensors viewed through ground-based display systems. Previously, an ATCT visibility analysis software tool (FAA Vis) was developed by the U.S. Army Research Laboratory (ARL) in collaboration with the U.S. Army's Night Vision and Electronic Sensors Directorate (NVESD) and the FAA. This tool predicts the probability of detection, recognition, and identification of various aircraft by human observers as a function of range and ATCT height. More recently, a baseline version of a UAV See-And-Avoid visibility analysis software tool was also developed by ARL, again in collaboration with NVESD and the FAA. Important to the calibration of these tools is the empirical determination of target discrimination difficulty criteria. Consequently, a set of human perception experiments was designed and conducted to empirically determine the target recognition and identification discrimination difficulty criteria for a representative set of aircraft. This paper reports the results and analyses of those experiments.
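For context, discrimination difficulty criteria of this kind are typically expressed as N50 values in a target transfer probability function. One common form used with the ACQUIRE family of models is sketched below; the abstract does not state the exact functional form used in FAA Vis, so this should be read as illustrative only:

    % Illustrative target transfer probability function (TTPF);
    % N50 is the empirically determined cycle criterion for 50% task performance.
    \[
      P(N) = \frac{(N/N_{50})^{E}}{1 + (N/N_{50})^{E}},
      \qquad
      E = 2.7 + 0.7\,\frac{N}{N_{50}},
    \]

where N is the number of resolvable cycles across the target; perception experiments such as those described above supply the N50 values for the recognition and identification tasks.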
To investigate the benefits of multiband infrared sensor design for target detection, search and detection experiments were conducted in the midwave infrared (MWIR) and longwave infrared (LWIR) wavebands in both rural and urban battlefield environments. In each environment, real imagery was collected in both bands by a single sensor using the same optics, resulting in perfect co-registration of the imagery. To isolate the performance impact of spectral content from diffraction and other sensor-specific differences, the images were processed so that band-to-band resolution differences due to diffraction were mitigated. The results of perception experiments, including detection probabilities, search times, and false alarm data, were compared between the wavebands.
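As a hedged illustration of one way such mitigation can be done (the abstract does not specify the authors' processing chain; the F-number, pixel pitch, and band-center wavelengths below are assumptions), the sharper MWIR imagery can be blurred so that its effective diffraction blur matches that of the LWIR imagery, using a Gaussian approximation to the diffraction point spread function:

    # Illustrative sketch only, not the paper's processing. Blurs the MWIR
    # image so its total diffraction blur matches the LWIR band, assuming
    # common optics and a Gaussian approximation to the Airy pattern.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    F_NUMBER = 4.0          # assumed common optics for both bands
    PIXEL_PITCH_UM = 20.0   # assumed detector pitch, microns

    def diffraction_sigma_px(wavelength_um: float) -> float:
        """Gaussian-equivalent diffraction blur (1-sigma) in pixels,
        approximating the Airy core with sigma ~ 0.42 * lambda * F#."""
        return 0.42 * wavelength_um * F_NUMBER / PIXEL_PITCH_UM

    def match_lwir_blur(mwir_img: np.ndarray,
                        lam_mwir_um: float = 4.0,
                        lam_lwir_um: float = 10.0) -> np.ndarray:
        """Convolve the MWIR image with the Gaussian that equalizes blur."""
        s_mwir = diffraction_sigma_px(lam_mwir_um)
        s_lwir = diffraction_sigma_px(lam_lwir_um)
        # Gaussian blurs add in quadrature, so only the deficit is applied.
        extra = np.sqrt(max(s_lwir**2 - s_mwir**2, 0.0))
        return gaussian_filter(mwir_img, sigma=extra)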
The U.S. Army's Night Vision and Electronic Sensors Directorate (NVESD) Modeling and Simulation Division is responsible for developing and enhancing electro-optic/infrared sensor performance models that are used in wargames and for sensor trade studies. Predicting how well a sensor performs a military task depends on both the physics of the sensor and how well observers perform specific tasks while using that sensor. An example of such a task could be to search for and detect targets of military interest. Another could be to identify a target as a threat or non-threat. A typical sensor development program involves analyses and trade-offs among a number of variables such as field of view, resolution, range, compression techniques, etc. Observer performance results, obtained in the NVESD perception lab, provide essential information to bridge the gap between the physics of a system and the humans using that system. This information is then used to develop and validate models, to conduct design trade-off studies, and to generate insights into the development of new systems for soldiers in surveillance, urban combat, and all types of military activities. Computer scientists and engineers in the perception lab design tests and process both real and simulated imagery in order to isolate the effect or design being studied. Then, in accordance with an approved protocol for human subjects research, experiments are administered to the desired number of observers. Results are tabulated and analyzed. The primary focus of this paper is to describe current capabilities of the NVESD perception lab regarding computer-based observer performance testing of sensor imagery, what types of experiments have been completed, and plans for the future.
Atmospheric radiance occurs in both the MWIR and LWIR primarily as a consequence of thermal emission by the gases and aerosols in the atmosphere. If this radiation originates between a scene and a thermal imaging sensor, it is called path radiance. In thermal IR imagery, path radiance reduces scene radiation contrast at the entrance pupil. For ground-based sensors, this effect is most significant in search systems with wide fields of view (WFOV) that image a large range depth of field. In WFOV search systems, the sensor display gain and level are typically adjusted to optimize the contrast of targets and backgrounds at the closer ranges. Without compensation in WFOV imagery, high path radiance can mask distant targets in the detection process. However, in narrow fields of view (NFOV), path radiance can have less of an impact, since targets and backgrounds will be at about the same range and thus have the same path radiance. As long as the NFOV radiation contrast exceeds the system noise, sensor display gain and level adjustments, or image processing if available, can be used to boost the contrast at the display. However, some imaging conditions are beyond compensation by display contrast adjustments or image processing. Using MODTRAN, this paper examines the potential impacts of path radiance, from the phenomenological point of view, on target-to-background contrast and signatures (ΔT) for dual-band thermal imaging systems.
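A minimal phenomenological sketch of the effect (notation introduced here, not taken from the paper): the at-aperture radiance in each band is the transmission-attenuated scene radiance plus path radiance, so the differential target-to-background signal falls with range while path radiance raises the common level, reducing the modulation contrast available at the entrance pupil:

    % Sketch under simple assumptions: tau(R) is band-averaged path
    % transmission, L_t and L_b are target and background radiances,
    % L_path(R) is the path radiance; MODTRAN supplies tau and L_path.
    \[
      L_{\mathrm{ap}}(R) = \tau(R)\,L_{\mathrm{scene}} + L_{\mathrm{path}}(R),
      \qquad
      C(R) = \frac{\tau(R)\,\lvert L_t - L_b\rvert}
                  {\tau(R)\,(L_t + L_b) + 2\,L_{\mathrm{path}}(R)}.
    \]

Because L_path appears only in the denominator, large path radiance suppresses C(R) at long range even when the radiance difference itself remains above the noise, consistent with the masking of distant targets described above.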
When modeling the search and target acquisition process, probability of detection as a function of time is important to war games and physical entity simulations. Recent US Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate modeling of search and detection has focused on time-limited search. This paper explores developing the relationship between detection probability and search time as a differential equation. One of the parameters in the current formula for probability of detection in time-limited search corresponds to the mean time to detect in time-unlimited search. However, the mean time to detect in time-limited search is shorter than the mean time to detect in time-unlimited search; a simple mathematical relationship between these two mean times is derived.
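To illustrate the kind of relationship involved (a sketch under the common assumption of an exponential detection-time density; the symbols here are illustrative rather than quoted from the paper), let τ be the mean time to detect in time-unlimited search, T the search time limit, and P∞ the asymptotic detection probability:

    % Detection probability versus time, and the mean detection time
    % conditioned on detecting within the limit T (exponential assumption).
    \[
      P(t) = P_\infty\left(1 - e^{-t/\tau}\right),
      \qquad
      \langle t \rangle_T
        = \frac{\int_0^T t\,\tau^{-1}e^{-t/\tau}\,dt}
               {\int_0^T \tau^{-1}e^{-t/\tau}\,dt}
        = \tau - \frac{T\,e^{-T/\tau}}{1 - e^{-T/\tau}}.
    \]

Since the subtracted term is positive for any finite T, the time-limited mean is strictly shorter than τ, and it approaches τ as T grows without bound, matching the qualitative relationship stated above.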
Perception experiments were conducted at the Night Vision and Electronic Sensors Directorate (NVESD) to investigate the effect of targets in defilade on the search task. Vehicles were placed in a simulated terrain and were either fully exposed, partially exposed, or placed in hull defilade. These images, along with a number of no-target images, were presented in a time-limited search perception experiment using military observers. The results were analyzed and compared with ACQUIRE predictions to determine whether factors other than size affect the search task when targets are in defilade.
Recent work by the US Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD) has led to the Time-Limited Search (TLS) model, which provides new formulations for field of view (FOV) search times. The next step in the evaluation of the overall search model (ACQUIRE) is to apply these parameters to the field of regard (FOR) model. Human perception experiments were conducted using synthetic imagery developed at NVESD. The experiments were competitive player-on-player search tests intended to impose realistic time constraints on the observers. FOR detection probabilities, search times, and false alarm data are analyzed and compared to predictions from both the TLS model and ACQUIRE.
This paper provides an overview of research in search and detection modeling of military imaging systems. For more than forty-five years, the US Army Night Vision and Electronic Sensors Directorate (NVESD) and others have been working to model the performance of infrared imagers in an effort to link imaging system design parameters to observer-sensor performance in the field. The widely used ACQUIRE model accomplishes this by linking the minimum resolvable contrast of the sensor to field performance. From the original hypothesis put forth by John Johnson in 1958, to modeling time-limited search, to modeling the impact of motion on target detection, to modeling target acquisition performance in different spectral bands, search has a wide and varied history. This paper first describes the search-modeling task and then surveys various topics in search and detection over the years, including classic search, clutter, computational vision models, and the ACQUIRE model with its variants. It is hoped that this overview will provide novice and experienced search modelers alike with a useful summary and a glance at current issues and future challenges.
Laser Range-Gated (LRG) imagers provide high-contrast images of targets at extended ranges because the laser light scattered by the intervening atmosphere is gated out; atmospheric backscatter of the laser light therefore does not degrade the contrast of LRG imagery. Natural illumination helps range performance by increasing overall target illumination. However, natural illumination of the intervening path degrades target-to-background contrast, and this hurts range performance. This paper provides a model for predicting the influence of natural illumination on LRG performance.
Recent technology advances have made low-cost, eye-safe, high-performance laser-range-gated (LRG) imagers a reality. These advances include the Electron Bombarded CCD (EBCCD) sensor and the 1.5 micron monoblock laser. LRG imagers use a laser beam to illuminate targets at extended ranges; the targets are then identified with the EBCCD sensor. Several features of LRG imagers make predicting range performance different from that of passive imagers. LRG imagers are described, and the features that make active imager performance different from passive imager performance are discussed. Features unique to active imagers include laser speckle in the image, the narrow illumination beam and its interaction with the atmosphere, the highly directional “spot light” illumination of the target, and the range gating of the receiver. This paper discusses the unique modeling requirements for LRG imagers.
This paper describes a spectral night illumination model. The model provides spectral irradiance on a horizontal surface for wavelengths between 0.4 and 2.0 microns, encompassing the visible, near infrared, and short wave infrared (SWIR) spectral bands. The primary significance of this model is that consistent estimates of spectral irradiance are now provided for the visible through SWIR bands. The primary sources of night illumination are described. The paper also describes how the new model was derived from spectroscopic data gathered by astronomers. Model predictions are compared to standard references commonly used to predict night illumination.
The Windows version of the Night Vision Thermal Imaging System Performance Model, NVTherm, was released in March 2001. NVTherm provides accurate predictions of sensor performance for both well-sampled and undersampled thermal imagers. Since its initial fielding in March 2001, a number of improvements have been implemented. The most significant are: (1) the addition of atmospheric turbulence blurring effects, (2) National Imagery Interpretability Rating System (NIIRS) estimates, and (3) the option for slant-path MODTRAN transmission. This paper presents these modifications, as well as a brief description of some of the minor changes and improvements completed over the past year. These significant changes were released in January 2002.
Display artifacts such as raster and square pixelization associated with flat panel displays can be characterized using sampled imaging systems analysis. When the output raster/pixelization signal is large compared to the image modulation, the overall system performance is degraded. In this research, we investigated display methods for sampled imaging systems, including pixel replication, bilinear interpolation, and a higher-order interpolation. Problems with the performance modeling of these processes are discussed and a perception test is implemented for comparison.
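As an illustrative sketch of two of the display methods named above (not the code used in this research; the helper names and array sizes are assumptions), pixel replication and bilinear interpolation can be implemented directly on a grayscale image array:

    # Illustrative sketch: two display interpolation methods applied to a
    # small grayscale image held in a NumPy array.
    import numpy as np

    def pixel_replication(img: np.ndarray, factor: int) -> np.ndarray:
        """Upsample by repeating each pixel factor x factor times."""
        return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

    def bilinear(img: np.ndarray, factor: int) -> np.ndarray:
        """Upsample by interpolating between the four nearest samples."""
        h, w = img.shape
        ys = np.linspace(0, h - 1, h * factor)
        xs = np.linspace(0, w - 1, w * factor)
        y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
        x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
        fy = (ys - y0)[:, None]; fx = (xs - x0)[None, :]
        top = img[np.ix_(y0, x0)] * (1 - fx) + img[np.ix_(y0, x1)] * fx
        bot = img[np.ix_(y1, x0)] * (1 - fx) + img[np.ix_(y1, x1)] * fx
        return top * (1 - fy) + bot * fy

    img = np.arange(16, dtype=float).reshape(4, 4)
    print(pixel_replication(img, 2).shape)  # (8, 8): blocky; preserves pixelization
    print(bilinear(img, 2).shape)           # (8, 8): smoother; suppresses raster artifacts

Pixel replication preserves the square pixelization signal that the abstract identifies as a display artifact, while bilinear and higher-order interpolation trade that artifact for added blur, which is why the modeling and perception comparison is of interest.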