With the advent of new technology in wide-area motion imagery (WAMI) and full-motion video (FMV), there is a
capability to exploit the imagery in conjunction with other information sources for improving confidence in detection,
tracking, and identification (DTI) of dismounts. Image exploitation, along with other radar and intelligence information,
can aid decision support and situational awareness. Many advantages and limitations exist in dismount tracking analysis
using WAMI/FMV; however, through layered management of sensing resources, there are future capabilities to explore
that would increase dismount DTI accuracy, confidence, and timeliness. A layered sensing approach enables command-level
strategic, operational, and tactical analysis of dismounts to combine multiple sensors and databases, to validate DTI
information, as well as to enhance reporting results. In this paper, we discuss WAMI/FMV, compile a list of issues and
challenges of exploiting the data for WAMI, and provide examples from recently reported results. Our aim is to provide a
discussion to ensure that nominated combatants are detected, the sensed information is validated across multiple
perspectives, the reported confidence values achieve positive combatant versus non-combatant detection, and the related
situational awareness attributes including behavior analysis, spatial-temporal relations, and cueing are provided in a timely
and reliable manner to stakeholders.
Compact and robust high-power eye-safe laser sources are required for rapidly deployable free-space optical (FSO) communication networks. Such systems have been demonstrated using essentially telecom-based lasers in a relatively narrow bandwidth window around 1.5 μm. Here we discuss additional wavelength transmission bands within the mid-IR. Using advanced laser sources to provide illumination across wide wavelength ranges, particularly within the 2-5 μm band, it may be possible to overcome transmission limitations associated with adverse weather and atmospheric conditions.
This paper lays the groundwork for modeling the quantification of sensor coverage for swarms of sensor-laden Unmanned Aircraft (UA). The concept of information expectation is defined, elaborated, and illustrated. Areas of interest (AOIs) are analyzed from a swarm standpoint to determine the quantity of coverage afforded by a swarm of multiple sensor-laden UAs. This work also investigates the coverage of AOIs as determined by the mission duration, the area of the region, and time-variable swarm geometry. Through simulation experiments, we gain insight into the quantifiable influence of varying swarm sizes and configurations on area coverage. This in turn allows validation of formulae and algorithms for computing approximations of expected opportunities for relevant information collection.
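The kind of coverage quantity described above can be sketched with a simple Monte Carlo estimate. The snippet below is illustrative only and does not reproduce the paper's formulae: it assumes a square AOI, circular sensor footprints, and a single snapshot of randomly placed UAs, all of which are hypothetical parameters chosen for the example.

```python
import math
import random


def estimate_coverage(num_uas, footprint_radius, aoi_side,
                      num_samples=20000, seed=0):
    """Monte Carlo estimate of the fraction of a square AOI covered by
    a swarm of UAs with circular sensor footprints at one time step.
    All geometry here (square AOI, random placement, circular
    footprints) is an illustrative assumption, not the paper's model."""
    rng = random.Random(seed)
    # Place each UA's sensor footprint center uniformly inside the AOI.
    sensors = [(rng.uniform(0, aoi_side), rng.uniform(0, aoi_side))
               for _ in range(num_uas)]
    covered = 0
    for _ in range(num_samples):
        # Sample a random point in the AOI and test whether any
        # footprint covers it.
        x, y = rng.uniform(0, aoi_side), rng.uniform(0, aoi_side)
        if any(math.hypot(x - sx, y - sy) <= footprint_radius
               for sx, sy in sensors):
            covered += 1
    return covered / num_samples
```

Extending the snapshot to a full mission would mean moving the footprint centers along each UA's trajectory and accumulating the covered sample points over time, which is the sense in which mission duration and swarm geometry enter the coverage estimate.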
The Department of Defense uses modeling and simulation systems in many roles, from research and training to modeling the likely outcomes of command decisions. Simulation systems have been increasing in complexity with the increased capability of low-cost computer systems to support these DOD requirements. The demand for scenarios is also increasing, but the complexity of the simulation systems has caused a bottleneck in scenario development due to the limited number of individuals with knowledge of the arcane simulator languages in which these scenarios are written. This research combines the results of previous efforts from the Air Force Institute of Technology in visual modeling languages to create a language that unifies the description of entities within a scenario with their behavior, using a visual tool that was developed in the course of this research. The resulting language has a grammar and syntax that can be parsed from the visual representation of the scenario. The language is designed so that scenarios can be described in a generic manner, not tied to a specific simulation system, allowing the future development of modules to translate the generic scenario into simulation-system-specific scenarios.