Airborne surveillance and reconnaissance are essential for many military missions. These capabilities are critical for troop protection, situational awareness, mission planning, and post-operation tasks such as analysis and damage assessment.
Motion imagery gathered from both manned and unmanned platforms provides surveillance and reconnaissance
information that can be used for pre- and post-operation analysis, but these sensors generate large volumes of video
data. Analysing hours of collected footage without the aid of automated tools is extremely labour-intensive for operators.
At MDA Systems Ltd. (MDA), we have previously developed a suite of automated video exploitation tools that can
process airborne video, including mosaicking, change detection and 3D reconstruction, within a GIS framework. The
mosaicking tool produces a geo-referenced 2D map from the sequence of video frames. The change detection tool
identifies differences between two repeat-pass videos taken of the same terrain. The 3D reconstruction tool creates
calibrated geo-referenced photo-realistic 3D models.
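The mosaicking step described above depends on relating successive video frames geometrically. As an illustration only (the source does not specify MDA's implementation), the core bookkeeping can be sketched as chaining frame-to-frame homographies so that every frame maps into the first frame's coordinate system; the 3x3 matrices here are hypothetical stand-ins for homographies that would in practice be estimated from feature matches:

```python
# Illustrative sketch: compose pairwise frame-to-frame homographies
# (hypothetical 3x3 matrices) so every frame maps into frame 0's
# coordinate system -- the bookkeeping behind building a 2D mosaic.

def mat_mul(A, B):
    """Multiply two 3x3 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply_h(H, x, y):
    """Map pixel (x, y) through homography H (homogeneous coordinates)."""
    u = H[0][0] * x + H[0][1] * y + H[0][2]
    v = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return u / w, v / w

def chain_homographies(pairwise):
    """Given H_i mapping frame i+1 -> frame i, return cumulative
    homographies mapping every frame into frame 0."""
    identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
    cumulative = [identity]
    for H in pairwise:
        cumulative.append(mat_mul(cumulative[-1], H))
    return cumulative

# Toy case: each frame is shifted 10 px right relative to the previous one.
shift = [[1, 0, 10], [0, 1, 0], [0, 0, 1]]
H_all = chain_homographies([shift, shift])
print(apply_h(H_all[2], 0.0, 0.0))  # pixel (0,0) of frame 2 -> (20.0, 0.0)
```

A real pipeline would estimate each pairwise homography robustly (e.g. with RANSAC over feature matches) and warp pixels into a geo-referenced grid; the composition step is the same.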
The key objectives of the ongoing project are to improve the robustness, accuracy and speed of these tools, and to make
them more user-friendly for operational users. Robustness and accuracy are essential for providing actionable intelligence,
surveillance and reconnaissance information. Speed is important for reducing the time operators spend on data analysis. We are
porting some processor-intensive algorithms to run on a Graphics Processing Unit (GPU) in order to improve
throughput. Many aspects of video processing are highly parallel and well suited to optimization on GPUs, which are
now commonly available on standard computers.
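The per-pixel operations typical of these tools illustrate why GPUs help. As a hedged sketch (not MDA's actual change-detection algorithm), consider thresholded absolute differencing between two co-registered frames: each output pixel depends only on the corresponding input pixels, so the loop body maps naturally onto one GPU thread per pixel:

```python
# Sketch of a data-parallel per-pixel operation: thresholded absolute
# differencing between two co-registered frames. Each output pixel is
# independent of the others, which is what makes this kind of kernel
# a good fit for GPU execution.

def change_mask(frame_a, frame_b, threshold):
    """Return a binary mask marking pixels whose intensity change
    exceeds `threshold`. Frames are equal-sized 2D lists of ints."""
    return [[1 if abs(a - b) > threshold else 0
             for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(frame_a, frame_b)]

before = [[10, 10, 10],
          [10, 10, 10]]
after  = [[10, 90, 10],
          [10, 10, 80]]
print(change_mask(before, after, threshold=30))
# -> [[0, 1, 0], [0, 0, 1]]
```

On a GPU the same logic would be expressed as a kernel launched with one thread per pixel, eliminating the Python-level loops entirely.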
Moreover, we are extending the tools to handle video data from various airborne platforms and developing an interface
to the Coalition Shared Database (CSD). The CSD server enables the storage and dissemination of data from different
sensors among NATO countries, and the CSD interface allows operational users to search for and retrieve relevant video data.

A face recognition module has been developed for an intelligent multi-camera video surveillance system. The module
can recognize a pedestrian's face in terms of six basic emotions and the neutral state. Face and facial-feature detection
(eyes, nasal root, nose and mouth) is performed first using cascades of boosted classifiers. These features are used to
normalize the pose and dimensions of the face image. Gabor filters are then sampled on a regular grid covering the face
image to build a facial feature vector that feeds a nearest-neighbor classifier with a cosine-distance similarity measure
for facial expression interpretation and face model construction. A graphical user interface allows the user to adjust the

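The classification step described above can be sketched in a few lines. This is a minimal illustration, not the module's implementation: the toy 4-D vectors stand in for real grid-sampled Gabor feature vectors, and the gallery labels are assumptions:

```python
# Minimal sketch of nearest-neighbor classification with a cosine
# similarity measure, as used for facial expression interpretation.
# The feature vectors here are toy stand-ins for Gabor responses
# sampled on a regular grid over the face image.
import math

def cosine_similarity(u, v):
    """Cosine of the angle between vectors u and v."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def classify(query, gallery):
    """Return the label of the gallery prototype most similar to `query`.
    `gallery` maps expression labels to prototype feature vectors."""
    return max(gallery, key=lambda label: cosine_similarity(query, gallery[label]))

# Hypothetical prototypes for three of the seven expression classes.
gallery = {
    "neutral":   [1.0, 1.0, 1.0, 1.0],
    "happiness": [2.0, 0.5, 1.5, 0.2],
    "anger":     [0.1, 2.0, 0.1, 2.0],
}
print(classify([1.9, 0.6, 1.4, 0.3], gallery))  # -> happiness
```

Cosine similarity compares only the direction of the feature vectors, which makes the match insensitive to a uniform scaling of filter responses.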
We explore the feasibility of reconstructing three-dimensional (3D) surface information of the human fundus from a sequence of fluorescein angiograms. The angiograms are taken during the same examination with an uncalibrated camera. The camera is stationary, and we assume that natural head/eye micro-movements are large enough to create the view change needed for a stereo effect. We test different approaches to computing the fundamental matrix and the disparity map. A careful medical analysis of the reconstructed 3D information indicates that it represents the 3D distribution of the fluorescein within the eye fundus rather than the 3D retinal surface itself, because the latter is mainly a translucent medium. A qualitative evaluation is presented and compared with the 3D information perceived through a stereoscope. This preliminary study indicates that our approach could provide a simple way to extract 3D fluorescein information without a complex stereo image acquisition setup.
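The fundamental-matrix step above rests on the epipolar constraint x'ᵀFx = 0 between corresponding image points. As a hedged illustration (the paper's estimation methods are not specified here), the sketch below evaluates that constraint for the known F of a pure horizontal camera translation, a toy stand-in for the small head/eye movement between angiograms:

```python
# Sketch of the epipolar constraint x'^T F x = 0 that an estimated
# fundamental matrix must satisfy for corresponding points. For a
# pure horizontal translation t = (1, 0, 0), F is the skew-symmetric
# matrix [t]_x, and corresponding points must lie on the same row.

F = [[0.0, 0.0,  0.0],
     [0.0, 0.0, -1.0],
     [0.0, 1.0,  0.0]]

def epipolar_residual(F, x, x_prime):
    """Evaluate x'^T F x for homogeneous points x, x' (lists of 3)."""
    Fx = [sum(F[i][j] * x[j] for j in range(3)) for i in range(3)]
    return sum(x_prime[i] * Fx[i] for i in range(3))

# Same row (y' == y): constraint satisfied, residual is zero.
print(epipolar_residual(F, [120.0, 40.0, 1.0], [95.0, 40.0, 1.0]))  # -> 0.0
# Different row: nonzero residual, the points cannot correspond.
print(epipolar_residual(F, [120.0, 40.0, 1.0], [95.0, 43.0, 1.0]))
```

In practice F is estimated from many noisy point matches (e.g. the normalized eight-point algorithm), and residuals like the one above drive the robust fitting; the disparity map is then computed along the epipolar lines F defines.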