Long range imaging with visible or infrared observation systems is typically hampered by atmospheric turbulence. Fluctuations in the refractive index of the air produce random shifts and blurs in the recorded imagery that vary across the field of view and over time. This severely limits the utility of such imagery for visual detection, recognition and identification at large distances. Software-based turbulence mitigation methods aim to restore such recorded image sequences based on the image data only and thereby enable visual identification at larger distances. Although successful restoration has been achieved on static scenes in the past, a significant challenge remains in accounting for moving objects such that they remain visible as moving objects in the output. Under moderate turbulence conditions, the turbulence-induced shifts may be several pixels in magnitude and occur on the same length scale as moving objects, which severely complicates the segmentation between these objects and the background. Here we investigate how turbulence mitigation may be accomplished on the background as well as on large moving objects for both land-based and sea-based imaging under moderate turbulence conditions. We apply optical flow estimation methods to determine both the turbulence-induced shifts in image sequences and the motion of large moving objects. These motion estimates are used with our TNO turbulence mitigation software to reduce the effects of turbulence and to stabilize the output to a dynamic reference. We apply this approach to both land and sea scenarios. We investigate how different regularization methods for the optical flow affect the accuracy of the segmentation between moving-object motion and background motion. Moreover, we qualitatively assess the quality improvement of the resulting imagery in sequences of output images, and show a substantial gain in their apparent sharpness and stability on both the background and the moving objects.
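As an illustration of the optical-flow-based segmentation idea described above, the following minimal Python sketch (not the TNO implementation; the window size and threshold are assumptions) estimates a dense flow field and flags pixels whose motion deviates from a locally smooth background field:

```python
import cv2
import numpy as np
from scipy.ndimage import median_filter

def segment_motion(prev_gray, curr_gray, win=31, thresh=1.5):
    """Flag pixels whose motion deviates from the locally smooth
    (turbulence + background) motion field. Inputs: uint8 grayscale frames."""
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, curr_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    # Turbulence-induced shifts are locally smooth; approximate the
    # background motion field with a large median filter per flow component.
    bg_u = median_filter(flow[..., 0], size=win)
    bg_v = median_filter(flow[..., 1], size=win)
    residual = np.hypot(flow[..., 0] - bg_u, flow[..., 1] - bg_v)
    return residual > thresh  # True where an independently moving object is likely
```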
Airborne platforms, such as UAVs, with Wide Area Motion Imagery (WAMI) sensors can cover multiple square kilometers and produce large amounts of video data. Analyzing all of this data to satisfy information needs becomes increasingly labor-intensive for an image analyst. Furthermore, the capacity of the datalink in operational areas may be inadequate to transfer all data to the ground station. Automatic detection and tracking of people and vehicles makes it possible to send only the most relevant footage to the ground station and assists the image analysts in effective data searches. In this paper, we propose a method for detecting and tracking vehicles in high-resolution WAMI images from a moving airborne platform. For the vehicle detection we use a cascaded set of classifiers, trained with an AdaBoost algorithm on Haar features. This detector works on individual images and therefore does not depend on image motion stabilization. For the vehicle tracking we use a local template matching algorithm. This approach has two advantages. First, it does not depend on image motion stabilization and it counters the inaccuracy of the GPS data that is embedded in the video. Second, it can find matches when the vehicle detector misses a detection. This results in long tracks even when the imagery has a low frame rate. In order to minimize false detections, we also integrate height information from a 3D reconstruction that is created from the same images. By using the locations of buildings and roads, we are able to filter out false detections and increase the performance of the tracker. In this paper we show that the vehicle tracks can also be used to detect more complex events, such as traffic jams and fast moving vehicles. This enables the image analyst to do a faster and more effective search of the data.
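A hedged sketch of the detect-then-track combination described above; the cascade file name is a placeholder for a detector trained with OpenCV's Haar/AdaBoost tooling, and the search margin is illustrative:

```python
import cv2

cascade = cv2.CascadeClassifier('vehicle_cascade.xml')  # hypothetical trained model

def detect_vehicles(gray):
    """Per-frame detection; needs no motion stabilization."""
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4)

def track_by_template(prev_gray, curr_gray, box, search_margin=20):
    """Re-locate a vehicle by matching its previous appearance in a small
    search window, so tracks survive missed detections and GPS jitter."""
    x, y, w, h = box
    template = prev_gray[y:y + h, x:x + w]
    x0, y0 = max(0, x - search_margin), max(0, y - search_margin)
    window = curr_gray[y0:y0 + h + 2 * search_margin,
                       x0:x0 + w + 2 * search_margin]
    res = cv2.matchTemplate(window, template, cv2.TM_CCOEFF_NORMED)
    _, score, _, loc = cv2.minMaxLoc(res)
    return (x0 + loc[0], y0 + loc[1], w, h), score
```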
This paper presents a system to extract metadata about human activities from full-motion video recorded from a UAV.
The pipeline consists of these components: tracking, motion features, representation of the tracks in terms of their motion
features, and classification of each track as one of the human activities of interest. We consider these activities: walk,
run, throw, dig, wave. Our contribution is that we show how a robust system can be constructed for human activity
recognition from UAVs, and that focus-of-attention is needed. We find that tracking and human detection are essential
for robust human activity recognition from UAVs. Without tracking, the human activity recognition deteriorates. The
combination of tracking and human detection is needed to focus the attention on the relevant tracks. The best performing
system includes tracking, human detection and a per-track analysis of the five human activities. This system achieves an
average accuracy of 93%. A graphical user interface is proposed to aid the operator or analyst during the task of
retrieving the relevant parts of video that contain particular human activities. Our demo is available on YouTube.
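To make the per-track stage concrete, here is an illustrative Python sketch; the feature set is a stand-in chosen for brevity, not the paper's actual motion features:

```python
import numpy as np
from sklearn.svm import SVC

ACTIVITIES = ['walk', 'run', 'throw', 'dig', 'wave']

def track_features(track_xy, fps=25.0):
    """track_xy: (N, 2) array of per-frame positions of one tracked person."""
    v = np.diff(track_xy, axis=0) * fps           # per-frame velocity (px/s)
    speed = np.hypot(v[:, 0], v[:, 1])
    return np.array([speed.mean(), speed.std(), speed.max(),
                     np.abs(np.diff(speed)).mean()])  # crude acceleration cue

# Training and per-track classification (labels y are indices into ACTIVITIES):
#   X = np.stack([track_features(t) for t in tracks])
#   clf = SVC(kernel='rbf'); clf.fit(X, y)
#   label = ACTIVITIES[clf.predict([track_features(new_track)])[0]]
```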
Airborne platforms record large amounts of video data. Extracting the events that need to be seen is a time-consuming task for analysts, because the sensors record hours of video in which only a fraction of the footage contains events of interest. For the analyst, it is hard to retrieve such events from these large amounts of video data by hand. One way to extract information from the data more automatically is to detect all humans within the scene. This can be done in a real-time scenario (both on board and at the ground station) for strategic and tactical purposes, and in an offline scenario where the information is analyzed after recording to acquire intelligence (e.g., daily life patterns).
In this paper, we evaluate three different methods for object detection from a moving airborne platform. The first is a static person detection algorithm. The main advantage of this method is that it can be used on single frames and therefore does not depend on the stabilization of the platform. The main disadvantage is that the number of pixels needed for a detection is relatively large. The second method is based on detection of motion-in-motion. Here the background is stabilized, and clusters of pixels that move with respect to this stabilized background are detected as moving objects. The main advantage is that all moving objects are detected; the main disadvantage is that it heavily depends on the quality of the stabilization. The third method combines both previous detection methods.
The detections are tracked using a histogram-based tracker, so that missed detections can be filled in and a trajectory of all objects can be determined. We demonstrate the tracking performance using the three different detection methods on the publicly available UCF-ARG aerial dataset. The performance is evaluated for two human actions (running and digging) and varying object sizes. It is shown that a combined detection approach (static person detection plus motion-in-motion detection) gives better tracking results for both human actions than using either detector alone. Furthermore, it can be concluded that humans must be at least 20 pixels high to guarantee good tracking performance.
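The motion-in-motion detector described above can be sketched as follows; the feature count, RANSAC threshold and difference threshold are illustrative, and the actual system likely differs in detail:

```python
import cv2
import numpy as np

orb = cv2.ORB_create(2000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def moving_object_mask(prev_gray, curr_gray, diff_thresh=25):
    """Stabilize the background with a feature-based homography, then flag
    pixel clusters that still move relative to the stabilized background."""
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    matches = matcher.match(des1, des2)
    src = np.float32([kp1[m.queryIdx].pt for m in matches])
    dst = np.float32([kp2[m.trainIdx].pt for m in matches])
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    h, w = curr_gray.shape
    stabilized = cv2.warpPerspective(prev_gray, H, (w, h))
    diff = cv2.absdiff(stabilized, curr_gray)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    # Morphological opening cleans up noise and groups pixels into blobs.
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
```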
In general, long range detection, recognition and identification in visual and infrared imagery are hampered by
turbulence caused by atmospheric conditions. The amount of turbulence is often indicated by the refractive-index
structure parameter Cn2. The value of this parameter and its variation are determined by the turbulence effects over the
optical path. Especially along horizontal optical paths near the surface (land-to-land scenarios), large values and
fluctuations of Cn2 occur, resulting in an extremely blurred and shaky image sequence. Another important parameter is
the isoplanatic angle, θ0, which is the angle over which the turbulence is approximately constant. Over long horizontal paths
the values of θ0 are typically very small, much smaller than the field-of-view of the camera.
Typical image artefacts caused by turbulence are blur, tilt and scintillation. These artefacts often occur locally in
an image. Therefore, turbulence corrections are required in each image patch of the size of the isoplanatic angle. Much
research has been devoted to the field of turbulence mitigation. One of the main advantages of turbulence mitigation is
that it enables visual recognition over larger distances by reducing the blur and motion in imagery. In many (military)
scenarios this is of crucial importance. In this paper we give a brief overview of two software approaches to mitigate the
visual artifacts caused by turbulence. These approaches differ strongly in complexity. It is shown that a more complex
turbulence mitigation approach is needed to improve imagery containing medium turbulence; the basic turbulence
mitigation method is only capable of mitigating low turbulence.
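For reference, the standard plane-wave expressions from the turbulence literature relating Cn2 to the atmospheric coherence length r0 and the isoplanatic angle θ0 (these formulas are general results, not taken from this paper) are:

```latex
% k = 2\pi/\lambda; z is the distance along the path of length L,
% measured from the aperture; C_n^2(z) is the refractive-index
% structure parameter along the path.
r_0 = \left[ 0.423\, k^2 \int_0^L C_n^2(z)\, dz \right]^{-3/5},
\qquad
\theta_0 = \left[ 2.914\, k^2 \int_0^L C_n^2(z)\, z^{5/3}\, dz \right]^{-3/5}
```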
Atmospheric turbulence is a well-known phenomenon that diminishes the recognition range in visual and infrared image sequences. There exist many different methods to compensate for the effects of turbulence. This paper focuses on the performance of two software-based methods to mitigate the effects of low- and medium-turbulence conditions. Both methods are capable of processing static and dynamic scenes. The first method consists of local registration, frame selection, blur estimation and deconvolution. The second method consists of local motion compensation, fore-/background segmentation and weighted iterative blind deconvolution. A comparative evaluation using quantitative measures is performed on representative sequences captured during a NATO SET 165 trial in Dayton. The amounts of blurring and tilt in the imagery appear to be relevant measures for such an evaluation. It is shown that both methods improve the imagery by reducing the blurring and tilt and therefore enlarge the recognition range. Furthermore, results of a recognition experiment using simulated data are presented, which show that turbulence mitigation using the first method improves the recognition range by up to 25% for an operational optical system.
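One plausible way to quantify the blur and tilt measures mentioned above (illustrative definitions, not necessarily the paper's exact metrics) is sketched below: blur via mean gradient magnitude, tilt via the mean optical-flow displacement of each frame relative to the temporal average:

```python
import cv2
import numpy as np

def blur_score(gray):
    """Mean gradient magnitude; higher values indicate a sharper image."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    return float(np.hypot(gx, gy).mean())

def tilt_score(frames):
    """frames: list of registered uint8 grayscale frames of the same scene."""
    ref = np.mean(np.stack(frames), axis=0).astype(np.uint8)  # turbulence-averaged reference
    shifts = []
    for f in frames:
        flow = cv2.calcOpticalFlowFarneback(ref, f, None, 0.5, 3, 15, 3, 5, 1.2, 0)
        shifts.append(np.hypot(flow[..., 0], flow[..., 1]).mean())
    return float(np.mean(shifts))  # mean per-pixel displacement (pixels)
```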
Surveillance and reconnaissance tasks are currently often performed using an airborne platform such as a UAV. The airborne platform can carry different sensors; EO/IR cameras can be used to view a certain area from above. To support the task of the sensor analyst, different image processing techniques can be applied to the data, either in real time or afterwards for forensic applications. These algorithms aim to improve the acquired data so that objects or events can be detected and the detections can be interpreted. There is a wide range of techniques that tackle these challenges, and we group them into classes according to the goal they pursue (image enhancement, modeling world and object information, situation assessment). An overview of these different techniques and different concepts of operations for these techniques is presented in this paper.
A well-known phenomenon that diminishes the recognition range in infrared imagery is atmospheric turbulence. In the literature, many methods are described that try to compensate for the distortions caused by atmospheric turbulence. Most of these methods use a global processing approach in which they assume a global shift and a uniform blurring in all frames. Because the effects of atmospheric turbulence often vary spatially and temporally, we presented last year a turbulence compensation method that performs local processing, leading to excellent results. In this paper an improvement of this method is presented, which uses a temporal moving reference frame in order to be capable of processing imagery containing moving objects, as well as blur estimation to obtain adaptive deconvolution. Furthermore, our method is evaluated in a quantitative way, which gives good insight into which components of our method contribute to the obtained visual improvements.
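A minimal sketch of a temporal moving reference frame as described above: an exponentially weighted running average that follows slow scene changes and moving objects while averaging out the zero-mean turbulence tilts (the decay factor is an assumption):

```python
import numpy as np

class MovingReference:
    """Exponentially weighted running-average reference frame."""
    def __init__(self, alpha=0.05):
        self.alpha = alpha  # higher alpha = reference follows changes faster
        self.ref = None

    def update(self, frame):
        f = frame.astype(np.float32)
        if self.ref is None:
            self.ref = f
        else:
            self.ref = (1 - self.alpha) * self.ref + self.alpha * f
        return self.ref  # stabilization target for the current frame
```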
We present a robust method for landing zone selection using obstacle detection to be used for UAV emergency landings. The method is simple enough to allow real-time implementation on a UAV system. The method is able to detect objects in the presence of camera movement and motion parallax. Using the detected obstacles we select a safe landing zone for the UAV. The motion and structure detection uses background estimation of stabilized video. The background variation is measured and used to enhance the moving objects if necessary. In the motion and structure map a distance transform is calculated to find a suitable location for landing.
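The distance-transform step for landing zone selection can be sketched as follows; the obstacle mask is assumed to come from the motion-and-structure detection described above:

```python
import cv2

def select_landing_zone(obstacle_mask):
    """obstacle_mask: uint8 image, 255 where motion/structure was detected.
    Returns the free pixel farthest from any obstacle and its clearance."""
    free = cv2.bitwise_not(obstacle_mask)
    # Each free pixel gets its Euclidean distance to the nearest obstacle.
    dist = cv2.distanceTransform(free, cv2.DIST_L2, 5)
    _, max_dist, _, loc = cv2.minMaxLoc(dist)
    return loc, max_dist  # (x, y) of safest point, clearance in pixels
```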
For many military operations, situational awareness is of great importance. During night conditions, this situational awareness can be improved using both analog and digital image-intensified cameras. The quality of image intensifiers is a topic of interest. One of the differences between a digital and analog system is noise behavior. For digital image intensifiers, the noise behavior is not as good as for analog image intensifiers, but it can be improved using noise-reduction techniques. In this paper, the improvement using temporal noise reduction and local adaptive contrast enhancement is shown and quantitatively evaluated by subjective measurement of the conspicuity and triangle orientation discrimination (TOD). The results of the conspicuity and TOD experiments are consistent with each other. The highest improvement is found for a low-clutter environment; for medium- and high-clutter environments, the improvement is less. This can be explained by the fact that image enhancement increases contrast of all image details, irrespective of whether they are targets or clutter. For low-clutter image enhancement, target conspicuity and target detection improvement will be largest, since there are not many distracting elements.
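To illustrate the two enhancement steps named above, here is a minimal sketch using temporal averaging for noise reduction and OpenCV's CLAHE as a stand-in for the paper's local adaptive contrast enhancement (parameters are illustrative):

```python
import cv2
import numpy as np

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))

def enhance(frames):
    """frames: list of registered uint8 image-intensified frames of a
    static scene. Averaging suppresses temporal noise; CLAHE then boosts
    local contrast."""
    denoised = np.mean(np.stack(frames), axis=0).astype(np.uint8)
    return clahe.apply(denoised)
```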
In general, long range visual detection, recognition and identification are hampered by turbulence caused by atmospheric conditions. Much research has been devoted to the field of turbulence compensation. One of the main advantages of turbulence compensation is that it enables visual identification over larger distances. In many (military) scenarios this is of crucial importance. In this paper we give an overview of several software and hardware approaches to compensate for the visual artifacts caused by turbulence. These approaches are very diverse and range from the use of dedicated hardware, such as adaptive optics, to the use of software methods, such as deconvolution and lucky imaging. For each approach the pros and cons are given and it is indicated for which type of scenario this approach is useful. In more detail we describe the turbulence compensation methods TNO has developed in the last years and place them in the context of the different turbulence compensation approaches and TNO’s turbulence compensation roadmap. Furthermore we look forward and indicate the upcoming challenges in the field of turbulence compensation.
High resolution sensors are required for recognition purposes. Low resolution sensors, however, are still widely
used. Software can be used to increase the effective resolution of such sensors. One way of increasing the resolution of
the produced images is to use multi-frame super-resolution algorithms. A limitation of these methods is that the
reconstruction only works if multiple frames are available; furthermore, these algorithms decrease the temporal
resolution. In this paper we use a sparse representation over an overcomplete dictionary to significantly increase
the resolution of a single low-resolution image. This allows for a higher resolution gain and no loss in temporal
resolution. We demonstrate this technique by improving the resolution of number-plate images obtained from a
near infrared roadside camera.
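A condensed sketch of sparse-representation super-resolution in the spirit described above; the coupled low-/high-resolution dictionaries are assumed to be trained offline on paired patches, with ℓ2-normalized atoms:

```python
import numpy as np
from sklearn.decomposition import SparseCoder

def sr_patches(lr_patches, D_l, D_h, n_nonzero=5):
    """lr_patches: (N, p) rows of vectorized low-res patches.
    D_l: (K, p) low-res dictionary atoms (assumed unit-norm rows).
    D_h: (K, q) coupled high-res atoms trained on paired patches."""
    coder = SparseCoder(dictionary=D_l, transform_algorithm='omp',
                        transform_n_nonzero_coefs=n_nonzero)
    codes = coder.transform(lr_patches)  # (N, K) sparse coefficients per patch
    # The same sparse code reconstructs the high-res patch from D_h.
    return codes @ D_h                   # (N, q) high-res patch estimates
```

The reconstructed patches would then be placed back on the high-resolution grid with overlap averaging; that bookkeeping is omitted here for brevity.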
KEYWORDS: Super resolution, Reconstruction algorithms, Image resolution, Modulation transfer functions, Signal to noise ratio, Image enhancement, Minimum resolvable temperature difference, Image processing, Cameras
For many military operations, situational awareness is of great importance. This situational awareness and related
tasks such as Target Acquisition can be obtained using cameras, of which the resolution is an important characteristic.
Super-resolution reconstruction algorithms can be used to improve the effective sensor resolution. In order
to judge these algorithms and the conditions under which they operate best, performance evaluation methods
are necessary. This evaluation, however, is not straightforward, for several reasons. First, frequency-based
evaluation techniques alone will not provide a correct answer, because they are unable to discriminate
between structure-related and noise-related effects. Second, most super-resolution packages perform additional
image enhancement techniques such as noise reduction and edge enhancement; as these algorithms improve
the results, they cannot be evaluated separately. Third, a single high-resolution ground truth is rarely available,
so evaluating the differences between the estimated high-resolution image and its ground truth is not
straightforward. Fourth, different artifacts can occur due to super-resolution reconstruction which are not
known beforehand and hence are difficult to evaluate.
In this paper we present a set of new evaluation techniques to assess super-resolution reconstruction algorithms.
Some of these evaluation techniques are derived from processing on dedicated (synthetic) imagery.
Other evaluation techniques can be evaluated on both synthetic and natural images (real camera data). The
result is a balanced set of evaluation algorithms that can be used to assess the performance of super-resolution
reconstruction algorithms.
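As a hedged illustration of why structure and noise must be scored separately (this is not one of the paper's actual evaluation techniques), a synthetic edge target allows sharpness to be measured from the edge profile and noise from a flat region:

```python
import numpy as np

def evaluate_sr_output(img, edge_col=128, flat=(slice(0, 50), slice(0, 50))):
    """img: test image with a vertical edge near column edge_col and a
    structure-free region at 'flat' (placeholder coordinates for a
    dedicated synthetic test image)."""
    profile = img[:, edge_col - 8:edge_col + 8].mean(axis=0)  # edge-spread function
    lo = profile.min() + 0.1 * np.ptp(profile)
    hi = profile.min() + 0.9 * np.ptp(profile)
    esf_width = np.count_nonzero((profile > lo) & (profile < hi))  # ~10-90% width (px)
    noise = img[flat].std()  # noise measured where there is no structure
    return esf_width, noise  # sharpness and noise scored separately
```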
Infrared imagery over long ranges is hampered by atmospheric turbulence effects, leading to spatial resolutions worse
than expected from a diffraction-limited sensor system. This diminishes the recognition range, and it is therefore important
to compensate for the visual degradation due to atmospheric turbulence. The amount of turbulence varies spatially under
anisoplanatic conditions, while the isoplanatic angle itself varies with atmospheric conditions; moreover, the amount of
turbulence also varies significantly in time. In this paper a method is proposed that performs turbulence compensation using a
patch-based approach. In each patch the turbulence is considered to be approximately spatially and temporally constant.
Our method utilizes multi-frame super-resolution, which incorporates local registration, fusion and deconvolution of the
data and can also increase the resolution. This makes our method especially suited for use under anisoplanatic conditions.
In our paper we show that our method is capable of compensating for the effects of mild to strong turbulence conditions.
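The patch-based registration-and-fusion idea can be sketched as follows; this minimal version only registers and averages one patch stack, whereas the method described above additionally deconvolves and increases resolution:

```python
import cv2
import numpy as np

def fuse_patch_stack(patches):
    """patches: list of float32 arrays, the same patch (roughly isoplanatic
    in size) cut from successive frames. Registers each patch to the first
    one and averages, which suppresses local tilt and temporal noise."""
    ref = patches[0]
    acc = np.zeros_like(ref)
    for p in patches:
        (dx, dy), _ = cv2.phaseCorrelate(ref, p)     # local tilt estimate
        M = np.float32([[1, 0, -dx], [0, 1, -dy]])   # shift patch back onto ref
        acc += cv2.warpAffine(p, M, (p.shape[1], p.shape[0]))
    return acc / len(patches)
```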
Compared to open surgery, minimal invasive surgery offers reduced trauma and faster recovery. However, lack of direct
view limits space perception. Stereo-endoscopy improves depth perception, but is still restricted to the direct endoscopic
field-of-view. We describe a novel technology that reconstructs 3D-panoramas from endoscopic video streams providing
a much wider cumulative overview. The method is compatible with any endoscope. We demonstrate that it is possible to
generate photorealistic 3D-environments from mono- and stereoscopic endoscopy. The resulting 3D-reconstructions can
be directly applied in simulators and e-learning. Extended to real-time processing, the method looks promising for
telesurgery or other remote vision-guided tasks.
KEYWORDS: Stochastic processes, Super resolution, Image restoration, Turbulence, Error analysis, Image analysis, Point spread functions, Cameras, Video, Signal to noise ratio
Blur estimation is an important technique for super resolution, image restoration, turbulence mitigation, deblurring and
autofocus. Low-cost methods have been proposed for blur estimation; however, they can have large stochastic errors
when computed close to the edge location and biased estimates at other locations. In this paper, we define an efficient,
accurate and precise estimator that can be computed at the edge location, based on the first-order derivative. Our method is
compared and benchmarked against the previous state of the art. The results show that the proposed method is fast and
unbiased, with low stochastic error.
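As a worked example of the first-order-derivative principle (illustrating the idea, not the paper's exact estimator): a step edge of amplitude A blurred by a Gaussian of width sigma has a gradient peak of A / (sigma * sqrt(2*pi)), so sigma follows directly from the measured peak:

```python
import numpy as np

def sigma_from_edge(profile):
    """profile: 1-D intensity samples across a single blurred step edge.
    For A * Phi(x / sigma), the derivative peaks at A / (sigma * sqrt(2*pi))."""
    A = profile.max() - profile.min()        # edge amplitude
    g_peak = np.abs(np.diff(profile)).max()  # first-order derivative peak
    return A / (np.sqrt(2 * np.pi) * g_peak)

# Example with a synthetic edge of sigma = 2:
#   x = np.arange(-20, 21)
#   profile = 100 * scipy.stats.norm.cdf(x / 2.0)
#   sigma_from_edge(profile)  -> approximately 2.0
```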
Long range object identification requires visual identification over large distances. However, atmospheric turbulence
hinders long range imaging. Therefore it is crucial to compensate for the visual artifacts due to atmospheric
turbulence. In this paper we propose a new method to compensate for these turbulence effects, thus enabling
identification at larger distances. Our method is based on applying phase diversity imaging with a wavefront
modulator in free-running mode. As we have no feedback loop, we can simultaneously compensate turbulence
for multiple isoplanatic angles. The wavefront modulator generates several images with known additional wavefront
aberrations. This extra information allows us to locally estimate the optimal wavefront aberration, from which
the optimal, turbulence-free image can be derived. This paper provides results on simulated data showing
that this method performs well under realistic turbulence and noise conditions. Furthermore, the robustness of
the proposed method is shown for varying algorithmic settings.
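The underlying phase-diversity model can be summarized as follows (a standard formulation from the phase-diversity literature, not a formula quoted from this paper; h(·) denotes the PSF produced by a pupil phase and * denotes convolution):

```latex
% i_k: recorded images, o: unknown object, \phi: unknown turbulence phase,
% \phi_k: known modulator-induced diversity phase, n_k: noise.
i_k = o * h(\phi + \phi_k) + n_k,
\qquad
\hat{\phi} = \arg\min_{\phi,\, o} \sum_k
  \left\| i_k - o * h(\phi + \phi_k) \right\|^2
```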
Recently, new techniques for night-vision cameras have been developed. Digital image intensifiers are becoming available
on the market. Also, so-called EMCCD (electron-multiplying CCD) cameras have been developed, which can record
imagery in dim conditions. In this paper we present data recorded with both types of cameras (image intensifiers
and EMCCD cameras) in dim light conditions, and present the results of image enhancement on this data. The
image enhancement techniques applied are noise reduction, super-resolution reconstruction and local adaptive
contrast enhancement. Comparing the results from both cameras indicates that the image intensifier performs
better in dim conditions and the EMCCD camera performs somewhat better in brighter conditions.
When bright moving objects are viewed with an electro-optical system at very long range, they appear as
small, slightly blurred moving points in the recorded image sequence. Detection of such point targets is seriously
hampered by structure in the background, temporal noise, and aliasing artifacts due to undersampling by the
infrared (IR) sensor.
Usually, the first step of point target detection is to suppress the clutter of the stationary background in the
image. This clutter suppression step should remove the information of the static background while preserving
the target signal energy. Recently we proposed to use super-resolution reconstruction (SR) in the background
suppression step. This has three advantages: a better prediction of the aliasing contribution allows a better
clutter reduction, the resulting temporal noise is lower and the point target energy is better preserved.
In this paper the performance of point target detection based on super-resolution reconstruction is evaluated.
We compare the use of robust versus non-robust SR reconstruction and evaluate the effect of
regularization. Both of these effects are influenced by the number of frames used for the SR reconstruction and
the apparent motion of the point target. We found that SR improves the detection efficiency, that robust SR
outperforms non-robust SR, and that regularization decreases the detection performance. Therefore, for point
target detection one can best use a robust SR algorithm with little or no regularization.
When bright moving objects are viewed with an electro-optical system at long range, they appear as small, slightly blurred moving points in the recorded image sequence. Typically, such point targets need to be detected in an early stage. However, in some scenarios the background of a scene may contain much structure, which makes it difficult to detect a point target. The novelty of this work is that superresolution reconstruction is used for suppression of the background. With superresolution reconstruction a high-resolution estimate of the background, without aliasing artifacts due to undersampling, is obtained. After applying a camera model and subtraction, this will result in difference images containing only the point target and temporal noise. In our experiments, based on realistic scenarios, the detection performance, after background suppression using superresolution reconstruction, is compared with the detection performance of a common background suppression method. It is shown that using the proposed method, for an equal detection-to-false-alarm ratio, the signal strength of a point target can be up to 4 times smaller. This implies that a point target can be detected at a longer range.
Surveillance applications are primarily concerned with detection of targets. In electro-optical surveillance
systems, missiles or other weapons coming towards the observer are seen as moving points. Typically, such moving
targets need to be detected in a very short time. One of the problems is that the targets have a low
signal-to-noise ratio with respect to the background, and that the background can be severely cluttered, as
in an air-to-ground scenario.
The first step in detection of point targets is to suppress the background. The novelty of this work is
that a super-resolution reconstruction algorithm is used in the background suppression step. It is well-known
that super-resolution reconstruction reduces the aliasing in the image. This anti-aliasing is used to model
the specific aliasing contribution in the camera image, which results in a better estimate of the clutter in
the background. Using super-resolution reconstruction also reduces the temporal noise, thus providing a
better signal-to-noise ratio than the camera images. After the background suppression step common detection
algorithms such as thresholding or track-before-detect can be used.
Experimental results are given which show that the use of super-resolution reconstruction significantly
increases the sensitivity of the point target detection. The detection sensitivity is increased by the
noise-reduction property of the super-resolution reconstruction algorithm, while the background suppression is
improved by the anti-aliasing.
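A simplified end-to-end sketch of the background-suppression chain described above, using registered shift-and-add as a basic stand-in for a full super-resolution reconstruction and a Gaussian blur plus downsampling as the camera model (all parameters are assumptions):

```python
import cv2
import numpy as np

def sr_background(frames, zoom=2):
    """frames: list of float32 images of a (sub-pixel) shifting static scene.
    Registered shift-and-add on a finer grid yields an aliasing-reduced,
    noise-reduced background estimate."""
    ref = frames[0]
    acc = np.zeros((ref.shape[0] * zoom, ref.shape[1] * zoom), np.float32)
    for f in frames:
        (dx, dy), _ = cv2.phaseCorrelate(ref, f)   # sub-pixel global shift
        M = np.float32([[zoom, 0, -dx * zoom],
                        [0, zoom, -dy * zoom]])     # align to ref, scale up
        acc += cv2.warpAffine(f, M, (acc.shape[1], acc.shape[0]))
    return acc / len(frames)

def residual(frame, hi_bg, zoom=2, psf_sigma=1.0):
    """Predict the camera frame from the high-res background (blur +
    downsample) and subtract; what remains is the point target plus
    temporal noise, ready for thresholding or track-before-detect."""
    predicted = cv2.GaussianBlur(hi_bg, (0, 0), psf_sigma * zoom)
    predicted = cv2.resize(predicted, frame.shape[::-1],
                           interpolation=cv2.INTER_AREA)
    return frame - predicted
```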