KEYWORDS: Data fusion, Video, Video surveillance, Sensors, Information fusion, Computer security, Analytics, Network security, Data modeling, Sensor fusion
This paper presents a novel application of Evidential Reasoning to Threat Assessment for critical infrastructure
protection. A fusion algorithm based on the PCR5 Dezert-Smarandache fusion rule is proposed which fuses alerts
generated by a vision-based behaviour analysis algorithm and a-priori watch-list intelligence data. The fusion algorithm
produces a prioritised event list according to a user-defined set of event-type severity or priority weightings. Results
generated from application of the algorithm to real data and Behaviour Analysis alerts captured at London's Heathrow
Airport under the EU FP7 SAMURAI programme are presented. A web-based demonstrator system is also described
which implements the fusion process in real-time. It is shown that this system significantly reduces the data deluge
problem, and directs the user's attention to the most pertinent alerts, enhancing their Situational Awareness (SA). The
end-user is also able to alter the perceived importance of different event types in real-time, allowing the system to adapt
rapidly to changes in priorities as the situation evolves. One of the key challenges associated with fusing information
deriving from intelligence data is the issue of Data Incest. Techniques for handling Data Incest within Evidential
Reasoning frameworks are proposed, and comparisons are drawn with respect to Data Incest management techniques
that are commonly employed within Bayesian fusion frameworks (e.g. Covariance Intersection). The challenges associated with simultaneously dealing with conflicting information and Data Incest in Evidential Reasoning frameworks are also discussed.
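To illustrate the combination step that underlies this kind of fusion, the sketch below applies the PCR5 rule to two basic belief assignments over a minimal two-hypothesis frame. The frame labels ('A', 'B', and the union 'AB' representing ignorance) and the mass values are illustrative assumptions only; the paper's actual frame of discernment and alert model are not reproduced here.

```python
def pcr5(m1, m2):
    """Fuse two basic belief assignments over the frame {A, B} with
    focal elements 'A', 'B' and 'AB' (the union) using the PCR5 rule:
    conjunctive combination, with each partial conflict m1(A)m2(B) and
    m2(A)m1(B) redistributed back onto A and B in proportion to the
    masses that generated it."""
    m = {}
    # Conjunctive combination (intersections of focal elements).
    m['A'] = m1['A']*m2['A'] + m1['A']*m2['AB'] + m1['AB']*m2['A']
    m['B'] = m1['B']*m2['B'] + m1['B']*m2['AB'] + m1['AB']*m2['B']
    m['AB'] = m1['AB']*m2['AB']
    # Partial conflict m1(A)*m2(B), redistributed proportionally.
    if m1['A']*m2['B'] > 0:
        m['A'] += m1['A']**2 * m2['B'] / (m1['A'] + m2['B'])
        m['B'] += m2['B']**2 * m1['A'] / (m1['A'] + m2['B'])
    # Partial conflict m2(A)*m1(B), redistributed proportionally.
    if m2['A']*m1['B'] > 0:
        m['A'] += m2['A']**2 * m1['B'] / (m2['A'] + m1['B'])
        m['B'] += m1['B']**2 * m2['A'] / (m2['A'] + m1['B'])
    return m
```

Note that, unlike Dempster's rule, PCR5 keeps the redistributed conflict local to the hypotheses involved, so the fused masses still sum to one without global renormalisation.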
The ability to passively reconstruct a scene in 3D provides significant benefit to Situational Awareness systems
employed in security and surveillance applications. Traditionally, passive 3D scene modelling techniques, such as Shape
from Silhouette, require images from multiple sensor viewpoints, acquired either through the motion of a single sensor or
from multiple sensors. As a result, the application of these techniques often attracts high costs, and presents numerous
practical challenges. This paper presents a 3D scene reconstruction approach based on exploiting scene shadows, which
only requires information from a single static sensor. This paper demonstrates that a large amount of 3D information
about a scene can be inferred from shadows; shadows reveal the shape of objects as viewed from a solar perspective
and additional perspectives are gained as the sun arcs across the sky. The approach has been tested on synthetic and real
data and is shown to be capable of reconstructing 3D scene objects where traditional 3D imaging methods fail. Providing
the shadows within a scene are discernible, the proposed technique is able to reconstruct 3D objects that are
camouflaged, obscured or even outside of the sensor's Field of View. The proposed approach can be applied in a range
of applications, for example urban surveillance, checkpoint and border control, critical infrastructure protection and for
identifying concealed or suspicious objects or persons which would normally be hidden from the sensor viewpoint.
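A toy two-dimensional version of this shadow-based carving idea can be sketched as follows. The grid geometry, integer sun directions and the simple carving rule are simplifying assumptions for illustration; the paper's method operates on real imagery and continuous solar geometry.

```python
def carve_from_shadows(width, height, observations):
    """Carve a 2D occupancy grid (a vertical slice of the scene) from
    ground shadow masks. observations is a list of (sun_dx, shadow)
    pairs: the sun ray travels sun_dx cells horizontally per cell of
    height, and shadow[x] is True where ground cell x is shadowed.
    A cell (x, z) with z >= 1 casts its shadow onto ground cell
    x + sun_dx*z; if that ground cell is lit, the cell cannot be
    occupied and is carved away. Intersecting over several sun
    positions progressively refines the reconstruction."""
    occupied = [[True]*width for _ in range(height)]  # occupied[z][x]
    for sun_dx, shadow in observations:
        for z in range(1, height):       # z = 0 is the ground itself
            for x in range(width):
                gx = x + sun_dx*z
                if 0 <= gx < width and not shadow[gx]:
                    occupied[z][x] = False
    return occupied
```

With two sun positions (morning and afternoon, say), a single column of height two is recovered exactly, while cells whose shadows fall outside the observed ground remain unresolved, mirroring the "discernible shadows" condition noted above.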
Techniques such as SIFT and SURF facilitate efficient and robust image processing operations through the use of sparse
and compact spatial feature descriptors and show much potential for defence and security applications. This paper
considers the extension of such techniques to include information from the temporal domain, to improve utility in
applications involving moving imagery within video data. In particular, the paper demonstrates how spatio-temporal
descriptors can be used very effectively as the basis of a target tracking system and as target discriminators which can
distinguish between bipeds and quadrupeds. Results using sequences of video imagery of walking humans and dogs are
presented, and the relative merits of the approach are discussed.
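As a toy illustration of extending a spatial descriptor into the temporal domain, the sketch below concatenates per-frame gradient-orientation histograms (loosely SIFT-like, but far simpler) across a temporal window and matches descriptors by Euclidean distance. The patch format, bin count and matching metric are illustrative assumptions, not the descriptors used in the paper.

```python
import numpy as np

def spatio_temporal_descriptor(patches):
    """Build a toy spatio-temporal descriptor from a stack of image
    patches (one per frame): an 8-bin, magnitude-weighted
    gradient-orientation histogram per frame, L2-normalised and
    concatenated over the temporal window. Periodic limb motion then
    appears as structure along the temporal axis of the descriptor."""
    desc = []
    for p in patches:
        gy, gx = np.gradient(p.astype(float))
        mag = np.hypot(gx, gy)
        ang = np.arctan2(gy, gx)                       # range -pi..pi
        bins = ((ang + np.pi) / (2*np.pi) * 8).astype(int) % 8
        hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=8)
        desc.append(hist / (np.linalg.norm(hist) + 1e-12))
    return np.concatenate(desc)

def match(d1, d2):
    """Euclidean matching distance between two descriptors."""
    return np.linalg.norm(d1 - d2)
```

Tracking then reduces to nearest-descriptor matching between frames, and class discrimination (e.g. biped versus quadruped) to comparing the temporal structure of the descriptors.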
Traditional sharpening filters often enhance the noise content in imagery in addition to the edge definition. In order to
ensure that only pertinent features are enhanced and that the noise content of the imagery is not exaggerated, an adaptive
filter is typically required. This paper discusses a novel image sharpening strategy proposed by Waterfall Solutions Ltd.
(WS) that is based upon the use of adaptive image filter kernels. The scale of the filter is steered locally by a saliency
measure proposed by WS, allowing the filter to sharpen pertinent features while suppressing local noise and helping to
ensure that only pertinent edges are enhanced. The technique has been applied to a series of test images. Results have shown the potential of this technique
for distinguishing salient information from noise content and for sharpening pertinent edges. By increasing the size of the
filter in noisy regions the filter is able to enhance larger-scale edge gradients whilst suppressing local noise. It is
demonstrated that the proposed approach provides superior edge enhancement capabilities over conventional filtering
approaches according to performance measures, such as edge strength and Signal-to-Noise-Ratio (SNR).
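The WS filter itself is not published in detail, but the general idea of saliency-steered sharpening can be sketched as below, using local contrast relative to an assumed noise level as a stand-in saliency measure. The window size, saliency definition and gain law are illustrative assumptions only.

```python
import numpy as np

def box_blur(img, r):
    """Simple box blur of radius r via a padded sliding sum."""
    pad = np.pad(img, r, mode='edge')
    out = np.zeros(img.shape, dtype=float)
    for dy in range(-r, r+1):
        for dx in range(-r, r+1):
            out += pad[r+dy:r+dy+img.shape[0], r+dx:r+dx+img.shape[1]]
    return out / (2*r + 1)**2

def adaptive_sharpen(img, strength=1.0, noise_sigma=2.0):
    """Unsharp masking whose gain is steered by a local saliency
    measure: the local standard deviation (3x3 window) relative to an
    assumed noise level. High-saliency pixels (edges) are sharpened;
    low-saliency, noise-dominated pixels are left untouched."""
    img = img.astype(float)
    mean = box_blur(img, 1)
    var = np.maximum(box_blur(img**2, 1) - mean**2, 0.0)
    saliency = np.clip(np.sqrt(var) / noise_sigma - 1.0, 0.0, 1.0)
    detail = img - mean                    # high-pass component
    return img + strength * saliency * detail
```

On a step edge this overshoots either side of the edge (classic sharpening), while perfectly flat regions pass through unchanged because their saliency is zero.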
KEYWORDS: Commercial off the shelf technology, Image processing, Algorithm development, Defense and security, Imaging systems, Field programmable gate arrays, Sensors, Information security, Surveillance, Detection and tracking algorithms
To address the emergent needs of military and security users, a new design approach has been developed to enable the
rapid development of high performance and low cost imaging and processing systems. In this paper, information about
the "Bespoke COTS" design approach is presented and is illustrated using examples of systems that have been built and
delivered. This approach facilitates the integration of standardised COTS components into a customised yet flexible
systems architecture to realise user requirements within stringent project timescales and budgets. The paper also
discusses the important area of the design trade-off space (performance, flexibility, quality, and cost) and compares the
results of the Bespoke COTS approach to design solutions derived from more conventional design processes.
This paper describes the ongoing development of the TERRA(TM) (Timeline Editing for Real-time Review and Analysis)
application by Waterfall Solutions Ltd. (WS), which is a high-throughput video analytics tool designed to be highly
flexible to user requirements. One of the known pitfalls associated with video analytics is the lack of sufficient user
interaction within existing systems, often leading to system unreliability due to an unacceptably high level of false
alarms. Therefore, instead of aiming to produce a fully automated system, TERRA(TM) emphasises the importance of
having a human user in the loop, and consequently concentrates on providing information in the most intuitive and
efficient manner possible.
KEYWORDS: 3D modeling, Cameras, Sensors, 3D displays, Image processing, Signal processing, 3D image processing, Situational awareness sensors, MATLAB, Solid modeling
This paper describes a novel real-time image and signal processing network, RONIN(TM), which facilitates the rapid
design and deployment of systems providing advanced geospatial surveillance and situational awareness capability.
RONIN(TM) is a distributed software architecture consisting of multiple agents or nodes, which can be configured to
implement a variety of state-of-the-art computer vision and signal processing algorithms. The nodes operate in an
asynchronous fashion and can run on a variety of hardware platforms, thus providing a great deal of scalability and
flexibility. Complex algorithmic configuration chains can be assembled using an intuitive graphical interface in a plug-and-
play manner. RONIN(TM) has been successfully exploited for a number of applications, ranging from remote event
detection to complex multiple-camera real-time 3D object reconstruction. This paper describes the motivation behind the
creation of the network, the core design features, and presents details of an example application. Finally, the on-going
development of the network is discussed, which is focussed on dynamic network reconfiguration. This allows the
network to adapt itself automatically to node or communications failures by intelligently re-routing network
communications and through adaptive resource management.
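The asynchronous, plug-and-play node idea can be illustrated with a minimal pipeline of threaded processing nodes connected by queues. This is an illustrative sketch of the pattern, not the RONIN(TM) API: the class name, poison-pill shutdown convention and queue wiring are all assumptions.

```python
import queue
import threading

class Node(threading.Thread):
    """Minimal asynchronous processing node: pulls items from its
    input queue, applies a user-supplied function, and pushes results
    downstream. Chains of such nodes approximate the plug-and-play
    configuration idea; a None item acts as a shutdown signal that is
    propagated down the chain."""
    def __init__(self, func, inbox, outbox):
        super().__init__(daemon=True)
        self.func, self.inbox, self.outbox = func, inbox, outbox

    def run(self):
        while True:
            item = self.inbox.get()
            if item is None:          # poison pill: shut down, pass on
                self.outbox.put(None)
                return
            self.outbox.put(self.func(item))
```

Because each node runs in its own thread and communicates only via queues, stages can be added, removed or rehosted without changing the others, which is the essence of the scalability argument above.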
This paper discusses a novel image noise reduction strategy based on the use of adaptive image filter kernels. Three
adaptive filtering techniques are discussed and a case study based on a novel Adaptive Gaussian Filter is presented. The
proposed filter allows the noise content of the imagery to be reduced whilst preserving edge definition around important
salient image features. Conventional adaptive filtering approaches are typically based on the adaptation of one or two
basic filter kernel properties and use a single image content measure. In contrast, the technique presented in this paper is
able to adapt multiple aspects of the kernel size and shape automatically according to multiple local image content
measures which identify pertinent features across the scene. Example results which demonstrate the potential of the
technique for improving image quality are presented. It is demonstrated that the proposed approach provides superior
noise reduction capabilities over conventional filtering approaches on a local and global scale according to performance
measures such as Root Mean Square Error, Mutual Information and Structural Similarity. The proposed technique has
also been implemented on a Commercial Off-the-Shelf Graphical Processing Unit platform and demonstrates excellent
performance in terms of image quality and speed, with real-time frame rates exceeding 100Hz. A novel method which is
employed to help leverage the gains of the processing architecture without compromising performance is discussed.
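As a simplified, single-measure stand-in for the multi-measure kernel adaptation described above, the sketch below blends between a wide and a narrow Gaussian kernel per pixel according to local gradient magnitude: strong smoothing in homogeneous regions, edge-preserving behaviour near salient features. The sigma values, threshold and blending rule are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2*sigma**2))
    return k / k.sum()

def adaptive_gaussian_filter(img, sigma_flat=2.0, sigma_edge=0.5,
                             grad_thresh=10.0, radius=3):
    """Per-pixel Gaussian smoothing whose effective width shrinks near
    edges: local gradient magnitude selects between a wide kernel
    (sigma_flat, strong noise reduction) and a narrow kernel
    (sigma_edge, edge preservation)."""
    img = img.astype(float)
    gy, gx = np.gradient(img)
    t = np.clip(np.hypot(gx, gy) / grad_thresh, 0.0, 1.0)  # 0=flat, 1=edge
    pad = np.pad(img, radius, mode='edge')
    k_flat = gaussian_kernel(sigma_flat, radius)
    k_edge = gaussian_kernel(sigma_edge, radius)
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            win = pad[y:y+2*radius+1, x:x+2*radius+1]
            k = (1 - t[y, x])*k_flat + t[y, x]*k_edge
            out[y, x] = (win * k).sum()
    return out
```

The per-pixel loop here is deliberately naive; as noted above, this class of filter maps naturally onto a GPU, where every output pixel is computed independently.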
KEYWORDS: Sensors, Energy harvesting, Solar energy, Clouds, Energy efficiency, Wind energy, Detection and tracking algorithms, Fusion energy, Vegetation, Surveillance
This paper considers the exploitation of energy harvesting technologies for teams of Autonomous Vehicles (AVs).
Traditionally, the optimisation of information gathering tasks such as searching for and tracking new objects, and
platform level power management, are only integrated at a mission-management level. In order to truly exploit new
energy harvesting technologies which are emerging in both the commercial and military domains (for example the
'EATR' robot and next-generation solar panels), the sensor management and power management processes must be
directly coupled. This paper presents a novel non-myopic sensor management framework which addresses this issue
through the use of a predictive platform energy model. Energy harvesting opportunities are modelled using a dynamic
spatial-temporal energy map and sensor and platform actions are optimised according to global team utility. The
framework allows the assessment of a variety of different energy harvesting technologies and perceptive tasks. In this
paper, two representative scenarios are used to parameterise the model with specific efficiency and energy abundance
figures. Simulation results indicate that the integration of intelligent power management with traditional sensor
management processes can significantly increase operational endurance and, in some cases, simultaneously improve
surveillance or tracking performance. Furthermore, the framework is used to assess the potential impact of energy
harvesting technologies at various efficiency levels. This provides important insight into the potential benefits that
intelligent power management can offer in relation to improving system performance and reducing the dependency on
fossil fuels and logistical support.
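The coupling of power management and sensing can be illustrated with a deliberately crude simulation: a platform alternates between sensing (which gains information but drains the battery) and harvesting from a time-varying energy source. All numbers, the day/night-style cycle and both policies are illustrative assumptions, not the paper's model or planner.

```python
def simulate(policy, horizon=50, battery=20.0):
    """Toy platform energy model: each step the platform either
    'sense' (gain 1 unit of information, cost 2.0 energy) or
    'harvest' (loiter at cost 0.5 energy while collecting solar
    energy from a time-varying source). The periodic solar term is a
    crude stand-in for a spatio-temporal energy map; the mission ends
    when the battery is exhausted."""
    info, t = 0, 0
    while t < horizon and battery > 0:
        solar = 3.0 if (t % 10) < 5 else 0.0   # crude day/night cycle
        if policy(battery, solar) == 'sense':
            battery -= 2.0
            info += 1
        else:
            battery += solar - 0.5
        t += 1
    return info, t

# Power-unaware policy: sense every step until the battery dies.
greedy = lambda batt, solar: 'sense'
# Power-aware policy: pause to harvest whenever the battery runs low.
aware = lambda batt, solar: 'harvest' if batt < 6.0 else 'sense'
```

Even this toy shows the qualitative result reported above: the power-aware policy survives the full horizon and gathers more information overall, despite spending steps not sensing.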
Hand-held thermal imaging systems are an important tool for fire and rescue services conducting search and rescue tasks.
However, in order to achieve wide-spread deployment the cost of such systems must be minimised, and this generally
leads to reduced image quality. Within this paper the use of advanced image processing functions to increase the imaging
system performance is discussed. Of particular note is the use and benefits of noise reduction and contrast enhancement.
Results from a developed camera system are presented, and the performance gains are illustrated and discussed.
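As a basic illustration of the contrast enhancement mentioned above, the sketch below implements global histogram equalisation, which remaps intensities through the cumulative histogram so that the output occupies the available dynamic range more evenly. Practical thermal pipelines typically use local or adaptive variants (e.g. CLAHE); this global form is an assumption made for simplicity.

```python
import numpy as np

def equalise_contrast(img, levels=256):
    """Global histogram equalisation for an integer-valued image with
    values in [0, levels): build the cumulative histogram, then remap
    each intensity via a lookup table so the output spreads across
    the full [0, levels-1] range."""
    img = img.astype(int)
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]            # CDF at the lowest occupied bin
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * (levels - 1))
    return lut[img].astype(np.uint8)
```

For a low-cost sensor whose raw output is compressed into a narrow band of grey levels, this kind of remapping is what makes faint scene structure visible to the operator.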
This paper discusses the integration of a number of advanced image and data processing technologies in support of the
development of next-generation Situational Awareness systems for counter-terrorism and crime fighting applications. In
particular, the paper discusses the European Union Framework 7 'SAMURAI' project, which is investigating novel
approaches to interactive Situational Awareness using cooperative networks of heterogeneous imaging sensors. Specific
focus is given to novel Data Fusion aspects of the research which aim to improve system performance through
intelligently fusing both image data and non image data sources, resolving human-machine conflicts, and refining the
Situational Awareness picture. In addition, the paper highlights some recent advances in supporting image processing
technologies. Finally, future trends in image-based Situational Awareness are identified, such as Post-Event Analysis
(also known as 'Back-Tracking'), and the associated technical challenges are discussed.
Conventional air-to-ground target acquisition processes treat the image stream in isolation from external data sources.
This ignores information that may be available through modern mission management systems which could be fused into
the detection process in order to provide enhanced performance. By way of an example relating to target detection, this
paper explores the use of a-priori knowledge and other sensor information in an adaptive architecture with the aim of
enhancing performance in decision making. The approach taken here is to use knowledge of target size, terrain elevation,
sensor geometry, solar geometry and atmospheric conditions to characterise the expected spatial and radiometric
characteristics of a target in terms of probability density functions. An important consideration in the construction of the
target probability density functions is the treatment of known errors in the a-priori knowledge. Potential targets are identified in the
imagery and their spatial and expected radiometric characteristics are used to compute the target likelihood. The adaptive
architecture is evaluated alongside a conventional non-adaptive algorithm using synthetic imagery representative of an
air-to-ground target acquisition scenario. Lastly, future enhancements to the adaptive scheme are discussed as well as
strategies for managing poor quality or absent a-priori information.
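The likelihood computation described above can be sketched as a product of independent Gaussian densities over expected target size and radiometric intensity, with known a-priori errors folded in by inflating each density's variance. The independence assumption, Gaussian form and all parameter names are illustrative choices, not the paper's exact formulation.

```python
import math

def gaussian_pdf(x, mean, sigma):
    return math.exp(-0.5*((x - mean)/sigma)**2) / (sigma*math.sqrt(2*math.pi))

def target_likelihood(obs_size_px, obs_intensity,
                      exp_size_px, size_sigma,
                      exp_intensity, intensity_sigma,
                      prior_size_err=0.0, prior_intensity_err=0.0):
    """Likelihood that a candidate detection is the target: the product
    of independent Gaussian pdfs over expected spatial size (pixels)
    and radiometric intensity. Known errors in the a-priori knowledge
    are folded in by adding their variance to each pdf's variance."""
    s_sigma = math.hypot(size_sigma, prior_size_err)
    i_sigma = math.hypot(intensity_sigma, prior_intensity_err)
    return (gaussian_pdf(obs_size_px, exp_size_px, s_sigma) *
            gaussian_pdf(obs_intensity, exp_intensity, i_sigma))
```

Note the behaviour this captures: a candidate matching both expectations scores highest, and as the a-priori error grows the densities flatten, so the prior knowledge discriminates less strongly, which is exactly the graceful degradation one wants when the a-priori data is poor.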
The benefits of image fusion for man-in-the-loop Detection, Recognition, and Identification (DRI) tasks are well known.
However, the performance of conventional image fusion systems is typically sub-optimal, as they fail to capitalise on
high-level information which can be abstracted from the imagery. As part of a larger study into an Intelligent Image
Fusion (I2F) framework, this paper presents a novel approach which exploits high-level cues to adaptively enhance the
fused image via feedback to the pixel-level processing. Two scenarios are chosen for illustrative application of the
approach, Situational Awareness and Anomalous Object Detection (AOD). In the Situational Awareness scenario,
motion and other cues are used to enhance areas of the image according to predefined tasks, such as the detection of
moving targets of a certain size. This yields a large increase in Local Signal-to-Clutter Ratio (LSCR) when compared to
a baseline, non-adaptive approach. In the AOD scenario, spatial and spectral information is used to direct a foveal-patch
image fusion algorithm. This demonstrates a significant increase in the Probability of Detection on test imagery whilst
simultaneously reducing the mean number of false alarms when compared to a baseline, non-foveal approach. This paper
presents the rationale for the I2F approach and details two specific examples of how it can be applied to address very
different applications. Design details and quantitative performance analysis results are reported.
The performance of a multi-sensor data fusion system is inherently
constrained by the configuration of the given sensor suite.
Intelligent or adaptive control of sensor resources has been shown
to offer improved fusion performance in many applications. Common
approaches to sensor management select sensor observation tasks
that are optimal in terms of a measure of information. However,
optimising for information alone is inherently sub-optimal as it
does not take account of any other system requirements such as
stealth or sensor power conservation. We discuss the issues
relating to developing a suite of performance metrics for
optimising multi-sensor systems and propose some candidate
metrics. In addition, it may not always be necessary to maximise
information gain; in some cases, small increases in information
gain may come at the cost of large sensor resource
requirements. Additionally, the problems of sensor tasking and
placement are usually treated separately, leading to a lack of
coherency between sensor management frameworks. We propose a novel
approach based on a high level decentralized information-theoretic
sensor management architecture that unifies the processes of
sensor tasking and sensor placement into a single framework.
Sensors are controlled using a minimax multiple objective
optimisation approach in order to address probability of target
detection, sensor power consumption, and sensor survivability
whilst maintaining a target estimation covariance threshold. We
demonstrate the potential of the approach through simulation of a
multi-sensor, target tracking scenario and compare the results
with a single objective information based approach.
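The selection step of such a minimax multiple-objective scheme can be sketched as follows: each candidate sensor action carries normalised costs for several objectives plus a predicted track covariance, infeasible actions are discarded against the covariance threshold, and the survivor with the smallest worst-case cost is chosen. The objective names and cost normalisation are illustrative assumptions.

```python
def minimax_select(actions, cov_threshold):
    """Each action is a dict with normalised costs in [0, 1] for
    'miss' (1 - detection probability), 'power' and 'exposure'
    (survivability risk), plus the predicted track 'covariance' it
    yields. Actions violating the covariance threshold are discarded;
    among the rest, choose the action whose worst single objective
    is smallest (the minimax criterion)."""
    feasible = [a for a in actions if a['covariance'] <= cov_threshold]
    return min(feasible,
               key=lambda a: max(a['miss'], a['power'], a['exposure']))
```

Unlike a weighted sum, the minimax criterion refuses to trade a catastrophic value in one objective for excellence in another, which is the balance the multi-objective discussion above is after.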