Experiences from recent conflicts show a strong need for smart sensor suites comprising different multi-spectral imaging sensors as core elements, as well as additional non-imaging sensors. Smart sensor suites should be part of a smart sensor network – a network of sensors, databases, evaluation stations and user terminals. Its goal is to optimize the use of various information sources for military tasks such as situation assessment, intelligence, surveillance, reconnaissance, target recognition and tracking. Such a smart sensor network will enable commanders to achieve higher levels of situational awareness.
In the study at hand, an open system architecture was developed to increase the efficiency of sensor suites. The open system architecture for smart sensor suites, based on a system-of-systems approach, enables combining different sensors in multiple physical configurations, such as distributed sensors, co-located sensors combined in a single package, tower-mounted sensors, sensors integrated in a mobile platform, and trigger sensors. The architecture was derived from a set of system requirements and relevant scenarios. Its mode of operation is adaptable to a range of scenarios with respect to relevant objects of interest, activities to be observed, available transmission bandwidth, etc.
The presented open architecture is designed in accordance with the NATO Architecture Framework (NAF). The architecture allows smart sensor suites to be part of a surveillance network, linked e.g. to a sensor planning system and a C4ISR center, and to be used in combination with future RPAS (Remotely Piloted Aircraft Systems) for supporting a more flexible dynamic configuration of RPAS payloads.
We propose a novel deep learning approach using autoencoders to map spectral bands to a space of lower dimensionality while preserving the information needed to discriminate between different materials. Deep learning is a relatively new pattern recognition approach that has given promising results in many applications. In deep learning, a hierarchical representation of the features is learned, with increasing levels of abstraction. The autoencoder is an important unsupervised technique frequently used in deep learning for extracting salient properties of the data. The learned latent representation is a non-linear mapping of the original data that potentially preserves the discrimination capacity.
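As a minimal illustration of the idea, the sketch below trains a one-hidden-layer autoencoder with tied weights on synthetic two-class "spectra" and reads out the low-dimensional codes. All data, sizes, and hyperparameters here are illustrative assumptions, not the configuration used in the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "hyperspectral" pixels: 60 bands, two material classes.
# (Invented data for illustration; real spectra would come from a sensor cube.)
n, bands, latent = 400, 60, 8
wl = np.linspace(0, 3, bands)
X = np.vstack([np.sin(wl) + 0.05 * rng.standard_normal((n // 2, bands)),
               np.cos(wl) + 0.05 * rng.standard_normal((n // 2, bands))])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One-hidden-layer autoencoder with tied weights:
#   codes = sigmoid(X W + b),  reconstruction = codes W^T + c,
# trained by plain gradient descent on the mean squared reconstruction error.
W = 0.1 * rng.standard_normal((bands, latent))
b, c = np.zeros(latent), np.zeros(bands)
mse0 = float(np.mean((sigmoid(X @ W + b) @ W.T + c - X) ** 2))  # error before training
lr = 0.1
for _ in range(1000):
    H = sigmoid(X @ W + b)               # latent codes
    E = H @ W.T + c - X                  # reconstruction error
    dH = (E @ W) * H * (1 - H)           # backprop through the encoder nonlinearity
    W -= lr * (X.T @ dH + E.T @ H) / n   # tied weights: encoder + decoder gradients
    b -= lr * dH.sum(0) / n
    c -= lr * E.sum(0) / n

codes = sigmoid(X @ W + b)               # n x latent low-dimensional representation
mse = float(np.mean((codes @ W.T + c - X) ** 2))
```

The `codes` array is the non-linear latent mapping; if it reconstructs the input well, the class-separating structure of the spectra is necessarily retained in far fewer dimensions.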
The use of Improvised Explosive Devices (IEDs) has increased significantly and is now a globally widespread phenomenon. Although measures can be taken to anticipate and counter an opponent's ability to deploy IEDs, detection of IEDs will always be a central activity. A wide range of sensors is useful for this task, but even simple means, such as a pair of binoculars, can be crucial for detecting IEDs in time.
Disturbed earth (disturbed soil), such as freshly dug areas, dumps of clay on top of smooth sand or depressions in the ground, could be an indication of a buried IED. This paper briefly describes how a field trial was set up to provide a realistic data set on a road section containing areas with disturbed soil due to buried IEDs. The road section was imaged using a forward-looking land-based sensor platform consisting of visual imaging sensors together with long-, mid-, and shortwave infrared imaging sensors.
The paper investigates the presence of discriminatory information in surface texture by comparing areas of disturbed soil with undisturbed soil. The investigation is conducted for each of the available wavelength bands. To extract features that describe texture, image processing tools such as 'Histogram of Oriented Gradients', 'Local Binary Patterns', 'Lacunarity', 'Gabor Filtering' and 'Co-Occurrence' are used. It is found that texture as characterized here may provide discriminatory information for detecting disturbed soil, but the signatures we found are weak and cannot be used alone in, e.g., a detector system.
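As a minimal illustration of one of these descriptors, the sketch below computes a basic 8-neighbour Local Binary Pattern histogram in plain NumPy and compares a smooth patch with a noisy one via an L1 distance. The patches and the distance measure are illustrative assumptions; a real pipeline would use image crops and the full multi-scale, rotation-invariant LBP variants:

```python
import numpy as np

def lbp_histogram(img, bins=256):
    """8-neighbour Local Binary Pattern histogram (minimal sketch: no
    interpolation or rotation invariance, unlike full LBP variants)."""
    c = img[1:-1, 1:-1]                       # interior pixels as centres
    # Offsets of the 8 neighbours, clockwise from top-left.
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit   # one bit per neighbour
    hist, _ = np.histogram(code, bins=bins, range=(0, bins))
    return hist / hist.sum()                  # normalized texture signature

rng = np.random.default_rng(1)
# Illustrative patches: a smooth intensity gradient ("undisturbed") versus
# decorrelated noise ("disturbed"); real patches would be road-image crops.
smooth = np.add.outer(np.arange(64.0), np.arange(64.0)) \
         + 0.1 * rng.standard_normal((64, 64))
rough = rng.standard_normal((64, 64))
h_s, h_r = lbp_histogram(smooth), lbp_histogram(rough)
l1 = float(np.abs(h_s - h_r).sum())           # simple distance between signatures
```

A detector would compare such signatures between a candidate patch and reference patches; as the abstract notes, in practice the measured differences are weak and would need fusion with other cues.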
This paper briefly describes a field trial designed to provide a realistic data set on a road section containing areas with disturbed soil due to buried IEDs. Over a time span of a couple of weeks, the road was repeatedly imaged using a multi-band sensor system with spectral coverage from the visual to the LWIR. The field trial was conducted to support a long-term research initiative aimed at using EO sensors and sensor fusion to detect areas of disturbed soil.
Samples from the collected data set are presented in the paper together with an investigation of basic statistical properties of the data. We conclude that, upon visual inspection, it is fully possible to discover disturbed areas using visual and/or IR sensors. Reviewing the statistical analysis, we also conclude that samples taken from both disturbed and undisturbed soil have well-defined statistical distributions for all spectral bands. We explore statistical tests to discriminate between different samples, with positive indications that discriminating between disturbed and undisturbed soil is potentially possible using statistical methods.
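One standard test of this kind is the two-sample Kolmogorov-Smirnov test; the sketch below computes its statistic over synthetic pixel-intensity samples. The distributions and parameters are invented for illustration and do not reflect the measured data:

```python
import numpy as np

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap between the
    empirical CDFs of samples a and b, evaluated at all sample points."""
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side="right") / a.size
    cdf_b = np.searchsorted(b, grid, side="right") / b.size
    return float(np.max(np.abs(cdf_a - cdf_b)))

rng = np.random.default_rng(2)
# Stand-in pixel intensities; the shift in mean and spread for "disturbed"
# soil is an assumption made purely for this example.
undisturbed = rng.normal(100.0, 5.0, 500)
disturbed = rng.normal(104.0, 8.0, 500)
same = rng.normal(100.0, 5.0, 500)     # second draw from the same population

d_diff = ks_statistic(undisturbed, disturbed)   # large: distributions differ
d_same = ks_statistic(undisturbed, same)        # small: sampling noise only
```

Comparing the statistic against a critical value (or using a library routine such as `scipy.stats.ks_2samp` for a p-value) turns this into a decision rule for flagging candidate patches.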
Within the European Commission's Seventh Framework Programme (FP7), the CONSORTIS project will design and fabricate a stand-off system for the detection of objects concealed on people. The system, operating in the sub-millimetre wave part of the spectrum, will scan people as they walk past the sensor. The aim of the project is to produce a system that has a high probability of detection and a low false alarm rate, is non-invasive and respects privacy. The top-level system design for CONSORTIS brings together both passive and active sensors, and the simulation tools developed to evaluate the design are described. The passive system will operate in two or three bands between 100 and 600 GHz and will be based on a cryogen-free cooled focal plane array sensor, whilst the active system will be a solid-state 340 GHz radar. Combining the two will maximize the probability of detection and reduce false alarms. A 'systems engineering' approach was adopted, with performance modeling being used to develop the system specifications. A modified version of OpenFx is used to model the passive system and SE-RAY-EM the active system. Both of these tools are capable of rendering imagery that is electromagnetically correct and accounts for the properties of the sensor. Furthermore, this imagery can be animated to give moving images similar to those observed in the real system. Targets can be embedded under clothing so that performance can be estimated, and false alarms can be introduced in a similar way to understand whether their signatures can be rejected.