Privacy protection can be defined as replacing the original content of an image region with less intrusive content whose target appearance information is modified so that the target is less recognizable. The development of privacy protection techniques must be complemented by an established objective evaluation method to facilitate their assessment and comparison. Existing evaluation methods generally rely on subjective judgments, or assume a specific target type in the image data and use target detection and recognition accuracies to assess privacy protection. We propose an annotation-free evaluation method that is neither subjective nor assumes a specific target type. It assesses two key aspects of privacy protection: “protection” and “utility.” Protection is quantified as an appearance similarity, and utility is measured as a structural similarity between the original and privacy-protected image regions. We performed extensive experiments on six challenging datasets (comprising 12 video sequences), including a new dataset (six sequences) that contains visible and thermal imagery. The new dataset is made available online for the community. We demonstrate the effectiveness of the proposed method by evaluating six image-based privacy protection techniques, and compare the proposed method with existing methods.
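The two aspects can be sketched numerically. A minimal illustration, assuming utility is measured as a single-window (global) SSIM and protection as one minus a normalized histogram intersection; both are simplifications, not the paper's exact measures:

```python
import numpy as np

def global_ssim(x, y, L=255.0):
    """Global (single-window) SSIM between two grayscale regions,
    used here as a proxy for the 'utility' of a protected region."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cxy + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def protection_score(x, y, bins=32):
    """'Protection' as appearance dissimilarity: one minus the
    intersection of normalized intensity histograms (a hypothetical
    stand-in for the paper's appearance-similarity measure)."""
    hx, _ = np.histogram(x, bins=bins, range=(0, 255))
    hy, _ = np.histogram(y, bins=bins, range=(0, 255))
    hx = hx / hx.sum()
    hy = hy / hy.sum()
    return 1.0 - np.minimum(hx, hy).sum()
```

An identical pair of regions scores utility 1 and protection 0; a heavily obfuscated region scores low utility and high protection.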
Hyperspectral remote sensing from unmanned aerial vehicles is a field of increasing importance. However, the combined functionality of simultaneous hyperspectral and geometric modeling is less well developed. We have developed a configuration that enables reconstruction of a hyperspectral three-dimensional (3D) environment. The hyperspectral camera is based on a linear variable filter and a high-frame-rate, high-resolution camera, enabling point-to-point matching and 3D reconstruction. This allows the information to be combined into a single, complete 3D hyperspectral model. In this paper, we describe the camera and illustrate its capabilities and difficulties through real-world experiments.
Object detection and material classification are two central tasks in electro-optical remote sensing and hyperspectral
imaging applications. These are challenging problems as the measured spectra in hyperspectral images
from satellite or airborne platforms vary significantly depending on the light conditions at the imaged surface,
e.g., shadow versus non-shadow. In this work, a Digital Surface Model (DSM) is used to estimate different
components of the incident light. These light components are subsequently used to predict what a measured
spectrum would look like under different light conditions. The derived method is evaluated using an urban
hyperspectral data set with 24 bands in the wavelength range 381.9 nm to 1040.4 nm and a DSM created from
LIDAR 3D data acquired simultaneously with the hyperspectral data.
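The prediction step can be sketched with a hypothetical linear two-component light model in which the DSM supplies, per pixel, a sun-visibility factor (from shadow casting) and a sky-view factor (from hemispherical occlusion). All names and the model itself are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def relight(radiance, e_sun, e_sky, src, dst):
    """Predict what a measured spectrum would look like under different
    light conditions. e_sun/e_sky: per-band direct and diffuse irradiance;
    src/dst: (sun_visibility, sky_view_factor) pairs estimated from a DSM.
    Assumes a Lambertian surface and a linear two-component light model."""
    e_src = src[0] * e_sun + src[1] * e_sky  # irradiance when measured
    e_dst = dst[0] * e_sun + dst[1] * e_sky  # irradiance to simulate
    return radiance * e_dst / e_src
```

With this model, a spectrum measured in full shadow (sun visibility 0) can be mapped to its predicted sunlit appearance and back, which is the basis for shadow-invariant comparison of spectra.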
The European Defence Agency (EDA) launched the Active Imaging (ACTIM) study to investigate the potential of active
imaging, especially that of spectral laser imaging. The work included a literature survey, the identification of promising
military applications, system analyses, a roadmap and recommendations.
Passive multi- and hyperspectral imaging allows discrimination between materials, but the radiance measured at the sensor is difficult to relate to spectral reflectance because of its dependence on, e.g., solar angle, clouds, and shadows. In contrast, active spectral imaging offers complete control of the illumination, eliminating these effects. In addition, it allows observing details at long range, seeing through degraded atmospheric conditions, penetrating obscurants (foliage, camouflage, etc.), and retrieving polarization information. When 3D capable, it is well suited to producing digital terrain models and to performing geometry-based identification. Hence, fusing the strengths of ladar and passive spectral imaging will result in new capabilities.
We have identified three main application areas for active imaging, and for spectral active imaging in particular: (1) long-range observation for identification, (2) mid-range mapping for reconnaissance, and (3) shorter-range perception for threat detection. We present the system analyses performed to confirm the interest, limitations, and requirements of spectral active imaging in these three prioritized applications.
We have performed a field trial with an airborne push-broom hyperspectral sensor, making several flights over the
same area and with known changes (e.g., moved vehicles) between the flights. Each flight results in a sequence
of scan lines forming an image strip, and in order to detect changes between two flights, the two resulting image
strips must be geometrically aligned and radiometrically corrected. The focus of this paper is the geometrical
alignment, and we propose an image- and gyro-based method for geometric co-alignment (registration) of two
image strips. The method is particularly useful when the sensor is not stabilized, thus reducing the need for
expensive mechanical stabilization. The method works in several steps, including gyro-based rectification, global
alignment using SIFT matching, and a local alignment using KLT tracking. Experimental results are shown but
not quantified, as ground truth is, by the nature of the trial, lacking.
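Once SIFT correspondences between the two strips are found, the global alignment step reduces to fitting a geometric transform to the matched coordinates. A minimal least-squares sketch for a 2D affine fit (the feature matching, outlier rejection, and KLT refinement are omitted; this is an illustration, not the paper's implementation):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine transform mapping src points to dst points.
    src, dst: (N, 2) arrays of matched keypoint coordinates, N >= 3."""
    n = src.shape[0]
    A = np.zeros((2 * n, 6))
    A[0::2, 0:2] = src   # x-rows: [x y 1 0 0 0]
    A[0::2, 2] = 1.0
    A[1::2, 3:5] = src   # y-rows: [0 0 0 x y 1]
    A[1::2, 5] = 1.0
    b = dst.reshape(-1)
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p.reshape(2, 3)  # [[a, b, tx], [c, d, ty]]

def apply_affine(M, pts):
    """Apply a 2x3 affine transform to (N, 2) points."""
    return pts @ M[:, :2].T + M[:, 2]
```

In practice the fit would be wrapped in a RANSAC loop to reject SIFT mismatches before the local KLT refinement.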
This paper describes ongoing work from an EDA-initiated study on active imaging, with emphasis on the use of multi- or broadband spectral lasers and receivers. Present laser-based imaging and mapping systems are mostly based on fixed-frequency lasers. On the other hand, great progress has recently been made in passive multi- and hyperspectral imaging, with applications ranging from environmental monitoring and geology to mapping, military surveillance, and reconnaissance. Databases of spectral signatures make it possible to discriminate between different materials in the scene. Present multi- and hyperspectral sensors mainly operate in the visible and short-wavelength region (0.4-2.5 μm) and rely on solar radiation, leading to shortcomings due to shadows, clouds, illumination angles, and the lack of night operation. Active spectral imaging will largely overcome these difficulties through complete control of the illumination. Active illumination enables spectral night and low-light operation, besides offering a robust way of obtaining polarization and high-resolution 2D/3D information.
Recent development of broadband lasers and advanced imaging 3D focal plane arrays has led to new opportunities for advanced spectral and polarization imaging with high range resolution. Fusing the strengths of ladar and passive spectral imaging will result in new capabilities in the field of EO sensing, as shown in the study. We present an overview of technology, systems, and applications for active spectral imaging, and propose future activities in connection with some prioritized applications.
We present a new hyperspectral data set that FOI will keep publicly available. The hyperspectral data set was collected in
an airborne measurement over the countryside. The spectral resolution was about 10 nm which allowed registrations in 60
spectral bands in the visual and near infrared range (390-960 nm). Objects with various signature properties were placed
in three areas: the edge of a wood, an open field and a rough open terrain. Several overflights were performed over the
areas. Between the overflights some of the objects were moved, representing different scenarios. Our interest is primarily
in anomaly detection of man-made objects placed in nature where no such objects are expected. The objects in the trial
were military and civilian vehicles, boards of different sizes, and a camouflage net. The boards range from multi-pixel
to sub-pixel size. Due to wind and cloud conditions, the stability and flight height of the airplane vary between
the overflights, which makes the analysis particularly challenging.
We address the problem of estimating atmosphere parameters (temperature, water vapour content) from data
captured by an airborne thermal hyperspectral imager, and propose a method based on direct optimization. The
method also involves the estimation of object parameters (temperature and emissivity) under the restriction that
the emissivity is constant for all wavelengths. Certain sensor parameters can be estimated as well in the same
process. The method is analyzed with respect to sensitivity to noise and number of spectral bands. Simulations
with synthetic signatures are performed to validate the analysis, showing that estimation can be performed with
as few as 10-20 spectral bands at moderate noise levels. Using more than 20 bands does not improve the estimates. The
proposed method is also extended to incorporate additional knowledge, for example measurements of atmospheric
parameters and sensor noise.
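As an illustration of the direct-optimization idea, consider a toy two-term forward model with a brute-force search over object temperature and a wavelength-constant emissivity. The model, parameter grids, and search strategy are simplifications; the paper additionally estimates atmospheric and sensor parameters:

```python
import numpy as np

H, C, K = 6.626e-34, 2.998e8, 1.381e-23  # Planck, light speed, Boltzmann

def planck(lam, T):
    """Blackbody spectral radiance (W / m^2 / sr / m) at wavelength lam (m)."""
    return 2 * H * C ** 2 / lam ** 5 / np.expm1(H * C / (lam * K * T))

def fit_object(radiance, lam, tau, T_atm, T_grid, eps_grid):
    """Direct search for object temperature and wavelength-constant
    emissivity under a toy two-term radiative transfer model:
        L = tau * eps * B(lam, T) + (1 - tau) * B(lam, T_atm)
    (a simplified stand-in for the paper's forward model)."""
    path = (1 - tau) * planck(lam, T_atm)       # atmospheric path radiance
    best, best_err = None, np.inf
    for T in T_grid:
        up = tau * planck(lam, T)               # attenuated object term
        for eps in eps_grid:
            err = np.sum((radiance - (eps * up + path)) ** 2)
            if err < best_err:
                best, best_err = (T, eps), err
    return best
```

A gradient-based optimizer would replace the grid search in any realistic setting; the sketch only shows that the restriction to constant emissivity makes the inverse problem well posed.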
The ROC curve is the most frequently used performance measure for detection methods and the underlying
sensor configuration. Common problems are that the ROC curve does not present a single number that can be
compared to other systems, and that no discrimination is made between sensor performance and algorithm
performance. To address the first problem, a number of measures are used in practice, such as the detection rate
at a specific false alarm rate or the area under the curve. For the second problem, we proposed in a previous
paper [1] an information-theoretic method for measuring sensor performance. We now relate the method to the ROC curve, show that it
is equivalent to selecting a certain point on the ROC curve, and that this point is easily determined. Our scope
is hyperspectral data, studying discrimination between single pixels.
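For reference, the area under the curve mentioned above equals the Mann-Whitney statistic: the probability that a randomly chosen target pixel scores higher than a randomly chosen background pixel. A minimal sketch:

```python
import numpy as np

def roc_auc(scores, labels):
    """Area under the ROC curve via the rank (Mann-Whitney) statistic:
    the probability that a random target pixel (label 1) scores above
    a random background pixel (label 0); ties count half."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

Perfect separation yields 1.0, an uninformative detector 0.5, which is why AUC is a popular single-number summary despite discarding the curve's shape.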
When we digitize data from a hyperspectral imager, we do so in three dimensions: the radiometric dimension, the spectral dimension, and the spatial dimension(s). The output can be regarded as a random variable taking values from a discrete alphabet, thus allowing simple estimation of the variable's entropy, i.e., its information content. By modeling the target/background state as a binary random variable and the corresponding measured spectra as a function thereof, we can compute the information capacity of a certain sensor or sensor configuration. This can be used as a measure of the separability of the two classes, and also gives a bound on the sensor's performance. Changing the parameters of the digitizing process, basically how many bits and bands to spend, will affect the information capacity, and we can thus try to find parameters where as few bits/bands as possible give as good class separability as possible. The parameters to be optimized in this way (and with respect to the chosen target and background) are the spatial, radiometric, and spectral resolution, i.e., which spectral bands to use and how to quantize them. In this paper, we focus on the band selection problem, describe an initial approach, and show early results of target/background separation.
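The band-scoring idea can be sketched as the mutual information between the binary target/background variable and a single quantized band, which measures the class separability obtainable from that band at that quantization. The uniform quantizer and the scoring here are illustrative simplifications of the capacity computation:

```python
import numpy as np

def mutual_information(band, labels, bits=4):
    """Mutual information (bits) between the binary target/background
    label and one spectral band quantized to 2**bits uniform levels."""
    edges = np.linspace(band.min(), band.max(), 2 ** bits)[1:]
    q = np.digitize(band, edges)                 # quantized band values
    joint = np.zeros((2, 2 ** bits))
    for c in (0, 1):
        vals, cnt = np.unique(q[labels == c], return_counts=True)
        joint[c, vals] = cnt
    joint /= joint.sum()                         # joint p(label, value)
    px = joint.sum(1, keepdims=True)             # marginal over labels
    py = joint.sum(0, keepdims=True)             # marginal over values
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())
```

A greedy selection would then repeatedly pick the band (and bit depth) with the highest score, subject to a total bit budget.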
This paper presents components of a sensor management architecture for autonomous UAV systems equipped with IR and video sensors, focusing on two main areas. Firstly, a framework inspired by optimal control and information theory is presented for concurrent path and sensor planning. Secondly, a method for visual landmark selection and recognition is presented. The latter is intended to be used within a SLAM (Simultaneous Localization and Mapping) architecture for visual navigation. Results are presented on both simulated and real sensor data, the latter from the MASP system (Modular Airborne Sensor Platform), an in-house developed UAV surrogate system containing a gimballed IR camera, a video sensor, and an integrated high performance navigation system.
We present an approach to a general decision support system. The aim is to cover the complete process for automatic
target recognition, from sensor data to the user interface. The approach is based on a query-based information
system, and includes tasks such as feature extraction from sensor data, data association, data fusion, and situation
analysis. Currently, we are working with data from laser radar, infrared cameras, and visual cameras, studying target
recognition from cooperating sensors on one or several platforms. The sensors are typically airborne and at low altitude.
The processing of sensor data is performed in two steps. First, several attributes are estimated from the (unknown
but detected) target. The attributes include orientation, size, speed, temperature, etc. These estimates are
used to select the models of interest in the matching step, where the target is matched with a number of target models,
returning a likelihood value for each model. Several methods and sensor data types are used in both steps.
The user communicates with the system via a visual user interface, where, for instance, the user can mark an
area on a map and ask for hostile vehicles in the chosen area. The user input is converted to a query in ΣQL, a query
language developed for this type of application, and an ontological system decides which algorithms should be
invoked and which sensor data should be used. The output from the sensors is fused by a fusion module and answers
are given back to the user. The user does not need to have any detailed technical knowledge about the sensors
(or which sensors are available), and new sensors and algorithms can easily be plugged into the system.
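The matching step can be sketched as a likelihood computation over a model library. A toy illustration assuming independent Gaussian attribute models; all attribute names and values are hypothetical:

```python
import math

def match_models(estimates, models):
    """estimates: attribute values estimated from sensor data, e.g.
    {'length_m': 6.9, 'speed_mps': 14.0}. models: per-model (mean, std)
    for each attribute. Returns an unnormalized likelihood per model,
    assuming independent Gaussian attribute distributions."""
    scores = {}
    for name, params in models.items():
        log_l = 0.0
        for attr, (mu, sigma) in params.items():
            z = (estimates[attr] - mu) / sigma
            log_l += -0.5 * z * z - math.log(sigma * math.sqrt(2 * math.pi))
        scores[name] = math.exp(log_l)
    return scores
```

In the system described above, the attribute estimates would first prune the model library before such likelihood values are returned for fusion.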