A fusion approach in a query-based information system is presented. The system is designed for querying multimedia databases, and is here applied to target recognition using heterogeneous data sources. The recognition process is coarse-to-fine, with an initial attribute estimation step followed by a matching step. Several sensor types and algorithms are involved in each of these two steps. The matching results are observed to be independent of the origin of the estimation results. This allows data to be distributed between algorithms in an intermediate fusion step without risk of data incest, which increases the overall chance of recognising the target. An implementation of the system is described.
We present an approach to a general decision support system. The aim is to cover the complete process for automatic
target recognition, from sensor data to the user interface. The approach is based on a query-based information
system and includes tasks such as feature extraction from sensor data, data association, data fusion and situation
analysis. Currently, we are working with data from laser radar, infrared cameras and visual cameras, studying target
recognition with cooperating sensors on one or several platforms. The sensors are typically airborne and operate at low altitude.
The processing of sensor data is performed in two steps. First, several attributes are estimated for the (unknown
but detected) target. The attributes include orientation, size, speed, temperature, etc. These estimates are
used to select the models of interest in the matching step, where the target is matched against a number of target models,
returning a likelihood value for each model. Several methods and sensor data types are used in both steps.
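The two-step, coarse-to-fine process described above can be sketched as follows. This is a minimal illustration, not the system's actual interfaces: the types, the model library and the similarity measure are all hypothetical, and a real matcher would use sensor-specific algorithms rather than a toy size comparison.

```python
from dataclasses import dataclass

@dataclass
class Attributes:
    """Coarse attribute estimates for a detected but unknown target."""
    length_m: float       # estimated size
    speed_mps: float      # estimated speed
    temperature_c: float  # estimated surface temperature

@dataclass
class TargetModel:
    name: str
    length_m: float
    max_speed_mps: float

# Illustrative model library (invented values).
MODEL_LIBRARY = [
    TargetModel("tank", 7.0, 20.0),
    TargetModel("truck", 8.0, 30.0),
    TargetModel("car", 4.5, 50.0),
]

def select_candidates(attrs, models, length_tol=1.5):
    """Step 1: use the attribute estimates to prune the model set."""
    return [m for m in models
            if abs(m.length_m - attrs.length_m) <= length_tol
            and attrs.speed_mps <= m.max_speed_mps]

def match(attrs, models):
    """Step 2: match the target against each remaining model,
    returning a likelihood value per model (here a toy similarity)."""
    return {m.name: 1.0 / (1.0 + abs(m.length_m - attrs.length_m))
            for m in models}

est = Attributes(length_m=7.4, speed_mps=18.0, temperature_c=35.0)
candidates = select_candidates(est, MODEL_LIBRARY)
likelihoods = match(est, candidates)
```

The point of the first step is efficiency: only models consistent with the coarse estimates are passed to the (more expensive) matching step.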
The user communicates with the system via a visual user interface where, for instance, the user can mark an
area on a map and ask for hostile vehicles in the chosen area. The user input is converted to a query in ΣQL, a query
language developed for this type of application, and an ontological system decides which algorithms should be
invoked and which sensor data should be used. The output from the sensors is fused by a fusion module, and the answers
are returned to the user. The user does not need any detailed technical knowledge about the sensors
(or which sensors are available), and new sensors and algorithms can easily be plugged into the system.
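One way the fusion module could combine the per-sensor matching results is sketched below. This is an illustration only, assuming conditionally independent sources so that per-model likelihoods can be multiplied and renormalised; the function name and the example numbers are invented, not taken from the system.

```python
def fuse(likelihood_sets):
    """Combine several {model: likelihood} dicts into one,
    assuming the sources are conditionally independent."""
    # Only fuse models that every source has scored.
    models = set.intersection(*(set(d) for d in likelihood_sets))
    fused = {m: 1.0 for m in models}
    for d in likelihood_sets:
        for m in models:
            fused[m] *= d[m]
    total = sum(fused.values())
    # Renormalise so the fused values sum to one.
    return {m: v / total for m, v in fused.items()} if total else fused

# Toy likelihoods from two sensor types (invented values).
laser = {"tank": 0.7, "truck": 0.4}
infrared = {"tank": 0.6, "truck": 0.5}
combined = fuse([laser, infrared])
```

Because the matching results are independent of which algorithm produced the attribute estimates, estimates can be redistributed between algorithms before this step without the same evidence being counted twice (data incest).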