We investigate the extension of the recently proposed weighted Fourier burst accumulation (FBA) method into the wavelet domain. The purpose of FBA is to reconstruct a clean and sharp image from a sequence of blurred frames. The core idea lies in the construction of weights that amplify the dominant frequencies in the Fourier spectrum of each frame. The reconstructed image is then obtained by taking the inverse Fourier transform of the average of all processed spectra. We first suggest replacing the rigid registration step used in the original algorithm with a nonrigid registration in order to process sequences acquired through atmospheric turbulence. Second, we propose to work in a wavelet domain instead of the Fourier one, which leads us to the construction of two types of algorithms. Finally, we propose an alternative to the weighting idea that instead promotes sparsity in the chosen transform domain. Several experiments are provided to illustrate the efficiency of the proposed methods.
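For concreteness, the weighting step can be written compactly. The snippet below is a minimal sketch of Fourier burst accumulation for an already-registered grayscale burst; the weight exponent `p` is a free parameter (p = 11 is just a plausible choice), and the spectral smoothing a practical implementation would apply to the magnitudes is omitted:

```python
# Minimal sketch of Fourier burst accumulation (FBA), assuming the frames
# are registered grayscale numpy arrays of identical shape.
import numpy as np

def fba(frames, p=11.0, eps=1e-8):
    """Fuse registered frames by weighting dominant Fourier magnitudes."""
    spectra = [np.fft.fft2(f) for f in frames]
    mags = [np.abs(s) ** p for s in spectra]
    total = np.sum(mags, axis=0) + eps           # per-frequency normalization
    acc = np.zeros_like(spectra[0])
    for s, m in zip(spectra, mags):
        acc += (m / total) * s                   # amplify dominant frequencies
    return np.real(np.fft.ifft2(acc))
```

Note that p = 0 reduces to a plain average of the frames, while letting p grow pushes the fusion toward a per-frequency selection of the strongest spectrum.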
Automated detection of chemical plumes presents a difficult segmentation problem, owing to the diffusive nature of the cloud. The advantage of hyperspectral imagery over conventional RGB imagery for gas plume detection is the presence of non-visual data, allowing for a richer representation of the information. In this paper we present an effective method of visualizing hyperspectral video sequences containing chemical plumes and investigate the effectiveness of segmentation techniques on these post-processed videos. Our approach uses a combination of dimension reduction and histogram equalization to prepare the hyperspectral videos for segmentation. First, Principal Components Analysis (PCA) is used to reduce the dimension of the entire video sequence. This is done by projecting each pixel onto the first few principal components, resulting in a type of spectral filter. Next, a midway histogram equalization method is used; it redistributes the intensity values in order to reduce flicker between frames. This properly prepares these high-dimensional video sequences for more traditional segmentation techniques. We compare the ability of various clustering techniques to properly segment the chemical plume, including K-means, spectral clustering, and the Ginzburg-Landau functional.
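As a rough sketch of this preprocessing stage, assuming the video is stored as a (frames, height, width, bands) numpy array; the array layout and the pairwise form of the midway step are our assumptions, not the paper's:

```python
# PCA spectral filtering followed by a pairwise midway equalization.
import numpy as np

def pca_project(video, k=3):
    """Project each pixel's spectrum onto the first k principal components."""
    T, H, W, B = video.shape
    X = video.reshape(-1, B).astype(np.float64)
    X -= X.mean(axis=0)
    cov = X.T @ X / (X.shape[0] - 1)             # band covariance matrix
    vals, vecs = np.linalg.eigh(cov)             # ascending eigenvalues
    comps = vecs[:, ::-1][:, :k]                 # top-k principal directions
    return (X @ comps).reshape(T, H, W, k)

def midway_pair(a, b):
    """Midway equalization: both frames inherit the average sorted values."""
    ra = a.ravel().argsort().argsort()           # rank of each pixel in a
    rb = b.ravel().argsort().argsort()
    mid = (np.sort(a.ravel()) + np.sort(b.ravel())) / 2.0
    return mid[ra].reshape(a.shape), mid[rb].reshape(b.shape)
```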
We propose a new way to correct for the non-uniformity (NU) and the noise in uncooled infrared-type images. This method works on static images and needs no registration, no camera motion, and no model for the non-uniformity. The proposed method uses a hybrid scheme combining an automatic locally adaptive contrast adjustment with a state-of-the-art image denoising method. It corrects a fully non-linear NU and the noise efficiently using only one image. We compare it with total variation on real raw and simulated NU infrared images. The strength of this approach lies in its simplicity and low computational cost. It needs no test pattern or calibration and produces no "ghost artifacts".
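The exact pipeline is not spelled out in this abstract; the sketch below is only a hypothetical illustration of the two ingredients, using a sliding-window histogram match across columns as the locally adaptive contrast adjustment and a median filter standing in for the state-of-the-art denoiser:

```python
# Hypothetical two-stage single-image NU correction (not the authors' exact
# pipeline): local histogram matching, then a placeholder denoiser.
import numpy as np
from scipy.ndimage import median_filter

def column_equalize(img, half_width=8):
    """Match each column's histogram to its local neighborhood."""
    H, W = img.shape
    out = np.empty_like(img, dtype=np.float64)
    for j in range(W):
        lo, hi = max(0, j - half_width), min(W, j + half_width + 1)
        ref = np.sort(img[:, lo:hi].ravel())     # local reference distribution
        ranks = img[:, j].argsort().argsort()    # rank of each pixel in column
        idx = (ranks * (len(ref) - 1) / (H - 1)).astype(int)
        out[:, j] = ref[idx]                     # map ranks to local quantiles
    return out

def correct(img):
    # A real system would use a state-of-the-art denoiser (e.g. NL-means);
    # the median filter is only a stand-in.
    return median_filter(column_equalize(img), size=3)
```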
In this paper we present a new approach to deblurring the effects of atmospheric turbulence in the case of long-range imaging. Our method is based on an analytical formulation of the atmospheric modulation transfer function (MTF), the Fried kernel, together with a framelet-based deconvolution algorithm. An important parameter is the refractive index structure constant, which normally requires dedicated measurements to be known. We therefore propose a method that provides a good estimate of this parameter from the input blurred image alone. The final algorithms are very easy to implement and show very good results on both simulated blur and real images.
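For reference, the usual plane-wave formulas give the long-exposure atmospheric MTF as MTF(f) = exp(-3.44 (λf/r0)^{5/3}) with Fried parameter r0 = (0.423 k² C_n² L)^{-3/5}, k = 2π/λ. The sketch below builds this MTF and inverts it with a plain Wiener filter; the Wiener step is only a stand-in for the framelet-based deconvolution, and `freq_scale` (the conversion from normalized to angular spatial frequency) is a placeholder for the optics and sampling details:

```python
# Long-exposure Fried/atmospheric MTF plus a simple Wiener inversion.
import numpy as np

def fried_mtf(shape, wavelength, cn2, path_length, freq_scale):
    """Fried MTF on an FFT frequency grid; freq_scale is a placeholder."""
    k = 2 * np.pi / wavelength
    r0 = (0.423 * k**2 * cn2 * path_length) ** (-3.0 / 5.0)  # Fried parameter
    fy = np.fft.fftfreq(shape[0]) * freq_scale
    fx = np.fft.fftfreq(shape[1]) * freq_scale
    f = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
    return np.exp(-3.44 * (wavelength * f / r0) ** (5.0 / 3.0))

def wiener_deblur(img, mtf, snr=100.0):
    """Stand-in deconvolution; the paper uses a framelet-based algorithm."""
    G = np.fft.fft2(img)
    W = mtf / (mtf**2 + 1.0 / snr)               # Wiener inverse filter
    return np.real(np.fft.ifft2(W * G))
```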
We recently developed a new approach to obtain a stabilized image from a sequence of frames acquired through atmospheric turbulence. The goal of this algorithm is to remove the geometric distortions caused by atmospheric motion. The method is based on a variational formulation and is solved efficiently using Bregman iterations and an operator splitting method. In this paper we study the influence of the choice of the regularizing term in the model. We then experiment with some of the most widely used regularization constraints available in the literature.
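As an illustration of the kind of solver involved, here is a compact split-Bregman total variation denoiser (our own minimal example, not the stabilization model itself); the regularizer enters only through the shrinkage step, which is why swapping regularizing terms is natural in this framework:

```python
# Split-Bregman solver for min_u lam/2 ||u - f||^2 + |grad u|_1,
# with periodic boundaries so the u-update is a single FFT solve.
import numpy as np

def grad(u):
    return np.roll(u, -1, 0) - u, np.roll(u, -1, 1) - u

def div(px, py):
    return px - np.roll(px, 1, 0) + py - np.roll(py, 1, 1)

def tv_split_bregman(f, lam=10.0, mu=5.0, iters=50):
    f = np.asarray(f, dtype=np.float64)
    H, W = f.shape
    yy, xx = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    # eigenvalues of the negative periodic Laplacian
    L = 4 - 2 * np.cos(2 * np.pi * yy / H) - 2 * np.cos(2 * np.pi * xx / W)
    u = f.copy()
    dx, dy, bx, by = (np.zeros_like(f) for _ in range(4))
    for _ in range(iters):
        rhs = lam * f + mu * div(dx - bx, dy - by)
        u = np.real(np.fft.ifft2(np.fft.fft2(rhs) / (lam + mu * L)))
        ux, uy = grad(u)
        # anisotropic shrinkage: this is where other regularizers would differ
        dx = np.sign(ux + bx) * np.maximum(np.abs(ux + bx) - 1.0 / mu, 0)
        dy = np.sign(uy + by) * np.maximum(np.abs(uy + by) - 1.0 / mu, 0)
        bx, by = bx + ux - dx, by + uy - dy
    return u
```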
This paper introduces a new way to correct the non-uniformity (NU) in uncooled infrared-type images. The main defect of these uncooled images is the lack of a column (resp. line) time-dependent cross-calibration, resulting in strong column (resp. line) and time-dependent noise. This problem can be viewed as a 1D flicker of the columns inside each frame, so classic movie deflickering algorithms can be adapted to equalize the columns (resp. the lines). The proposed method therefore applies a movie deflickering algorithm to the series formed by the columns of an infrared image. The resulting single-image method works on static images, and therefore requires no registration, no camera motion compensation, and no closed-aperture sensor equalization. The method has only one camera-dependent parameter and is landscape independent. This simple method is compared to a state-of-the-art total variation single-image correction on real raw and simulated images. The method is real time, requiring only two operations per pixel. It involves no test-pattern calibration and produces no "ghost artifacts".
This paper deals with two topics related to active imaging systems. First, we explore image processing algorithms to correct artifacts such as speckle, scintillation, and image dancing caused by atmospheric turbulence. Second, we examine how to evaluate the performance of this kind of system. To this end, we propose a modified version of the German TRM3 metric that yields MTF-like measurements. We use the database acquired during the NATO-TG40 field trials for our tests.
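The modified TRM3 metric itself is not reproduced here; as a generic stand-in for an "MTF-like" measurement, the sketch below estimates an MTF from a vertical edge target via the edge-spread and line-spread functions:

```python
# Generic edge-based MTF estimate (a stand-in, not the modified TRM3 metric).
import numpy as np

def edge_mtf(edge_img):
    """Estimate the MTF from an image of a vertical edge target."""
    esf = edge_img.mean(axis=0)                  # edge-spread function (ESF)
    lsf = np.diff(esf)                           # line-spread function (LSF)
    lsf = lsf * np.hanning(len(lsf))             # window to limit noise leakage
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / (mtf[0] + 1e-12)                # normalize so MTF(0) = 1
```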