The Advanced Linked Extended Reconnaissance & Targeting (ALERT) Technology Demonstration (TD)
project is addressing many operational needs of the future Canadian Army's Surveillance and
Reconnaissance forces. Using the surveillance system of the Coyote reconnaissance vehicle as an
experimental platform, the ALERT TD project aims to significantly enhance situational awareness by fusing
multi-sensor and tactical data, developing automated processes, and integrating beyond line-of-sight sensing.
The project is exploiting important advances made in computer processing capability, displays technology,
digital communications, and sensor technology since the design of the original surveillance system.
In the project's major research area, concepts are discussed for displaying and fusing multi-sensor
and tactical data within an Enhanced Operator Control Station (EOCS). The sensor data can originate from
the Coyote's own visible-band and IR cameras, laser rangefinder, and ground-surveillance radar, as well as
from beyond line-of-sight systems such as mini-UAVs and unattended ground sensors.
Video-rate image processing has been developed to assist the operator in detecting poorly visible targets. As a
second major area of research, automatic target cueing capabilities have been added to the system. These
include scene change detection, automatic target detection and aided target recognition algorithms
processing both IR and visible-band images to draw the operator's attention to possible targets. The merits of
incorporating scene change detection algorithms are also discussed. In the area of multi-sensor data fusion,
processing up to Joint Directors of Laboratories (JDL) level 2 has been demonstrated. The human factors engineering aspects of the user
interface in this complex environment are presented, drawing upon multiple user group sessions with military
surveillance system operators. The paper concludes with Lessons Learned from the project.
The ALERT system has been used in a number of C4ISR field trials, most recently at Exercise Empire
Challenge in China Lake, CA, and at Trial Quest in Norway. Those exercises provided further opportunities to
investigate operator interactions. The paper also offers recommendations for future work in operator interaction.
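The automatic target cueing described above rests in part on scene change detection. As an illustration only, not the project's actual algorithms, a minimal frame-differencing change detector might look like the sketch below; the `threshold` and `min_pixels` parameters are assumed values chosen for the example:

```python
import numpy as np

def detect_changes(prev_frame, curr_frame, threshold=25, min_pixels=50):
    """Flag a scene change by thresholding the absolute frame difference.

    prev_frame, curr_frame: 2-D uint8 greyscale images of equal shape.
    threshold: per-pixel intensity difference treated as change (assumed).
    min_pixels: changed-pixel count needed to raise a cue (assumed).
    Returns (changed, mask), where mask marks the changed pixels.
    """
    # Widen to int16 so the subtraction cannot wrap around in uint8.
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    mask = diff > threshold
    return bool(mask.sum() >= min_pixels), mask

# Example: a static background with a small bright "target" appearing.
prev = np.zeros((64, 64), dtype=np.uint8)
curr = prev.copy()
curr[30:40, 30:40] = 200              # a 10x10 patch changes intensity
changed, mask = detect_changes(prev, curr)
print(changed, int(mask.sum()))       # prints: True 100
```

A fielded system would add registration to compensate for sensor motion and morphological filtering to suppress noise before cueing the operator, but the core test is this per-pixel difference against a threshold.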
The Advanced Linked Extended Reconnaissance & Targeting (ALERT) Technology Demonstration (TD) project is addressing key operational needs of the future Canadian Army's Surveillance and Reconnaissance forces by fusing multi-sensor and tactical data, developing automated processes, and integrating beyond line-of-sight sensing. We discuss concepts for displaying and fusing multi-sensor and tactical data within an Enhanced Operator Control Station (EOCS). The sensor data can originate from the Coyote's own visible-band and IR cameras, laser rangefinder, and ground-surveillance radar, as well as from beyond line-of-sight systems such as a mini-UAV and unattended ground sensors. The authors address technical issues associated with the use of fully digital IR and day video cameras and discuss video-rate image processing developed to assist the operator in recognizing poorly visible targets. Automatic target detection and recognition algorithms processing both IR and visible-band images have been investigated to draw the operator's attention to possible targets. The machine-generated information display requirements are presented together with the human factors engineering aspects of the user interface in this complex environment, with a view to establishing user trust in the automation. The paper concludes with a summary of achievements to date and steps to project completion.
This paper evaluates the performance of a holographic neural network against a conventional feedforward backpropagation neural network for the classification of landmine targets in ground-penetrating radar images. The data used in the study were acquired from four different test sites using the landmine detection system developed by General Dynamics Canada Ltd. in collaboration with Defence Research and Development Canada, Suffield. A set of seven features extracted for each detected alarm is used as stimulus inputs for the networks. The recall responses of the networks are then evaluated against the ground truth to declare true or false detections. The area under the receiver operating characteristic (ROC) curve is used for comparison. With a large dataset comprising data from multiple sites, the holographic and conventional networks showed comparable trends in recall accuracy, with area values of 0.88 and 0.87, respectively. On independent validation datasets, the holographic network generalized better (mean area = 0.86) than the conventional network (mean area = 0.82). Despite the widely publicized theoretical advantages of holographic technology, using more than the required number of cortical memory elements caused the holographic network to over-fit.
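The comparison above hinges on the area under the ROC curve. As a hedged sketch using made-up scores rather than the study's landmine data, the AUC of a detector's alarm scores can be computed directly from the Mann–Whitney statistic, i.e. the probability that a randomly chosen true detection outscores a randomly chosen false alarm:

```python
import numpy as np

def roc_auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of (true detection, false alarm) pairs in which the
    true detection scores higher, with ties counted as half a win."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos = scores[labels]           # scores of true detections
    neg = scores[~labels]          # scores of false alarms
    # Broadcast to compare every positive against every negative.
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Illustrative alarm scores (hypothetical, not from the study's dataset):
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3]
labels = [1,   1,   0,   1,   0,   0]
print(roc_auc(scores, labels))     # → 0.888... (8 of 9 pairs correctly ordered)
```

An AUC of 1.0 would mean every true detection outscores every false alarm, while 0.5 is chance-level ranking; the 0.82–0.88 areas reported above sit between these extremes.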