Flash Imaging LiDARs (Light Detection and Ranging) are active systems that resolve depth in a scene through time-of-flight measurements and are regarded as a competitive technological alternative offering great advantages in space scenarios. The system records full 3D images with a single laser pulse, thus eliminating the need for a scanning device.
A first step to improve the overall quality of the measurements is the calibration of the camera. There is a vast literature and there are well-established techniques concerning radiometric calibration (2D information). However, for time-of-flight (3D information), the subject remains open to improvement and dependent upon the specific characteristics of the detector. In this article we propose a calibration scheme for CEA-LETI’s LiDAR that combines intensity, range-accuracy, and range-precision calibration, and we present the first enhanced results based on data acquired under laboratory conditions.
This project is a partnership between ESA and CEA-LETI aiming at the design of a LiDAR system based on a custom MCTAPD FPA (Avalanche PhotoDiode Focal Plane Array) detector developed by CEA-LETI and at the formulation of a set of image processing algorithms. The target is to demonstrate the potential of such detector technology and to evaluate the performance of the full system chain in the frame of the targeted application. In the future, a validation campaign on real terrain, at ESA’s campsite, will be performed to demonstrate the system in a close-to-real configuration.
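As background for the range-calibration terms above, a minimal sketch of the basic time-of-flight range equation that such a calibration refines. The function and parameter names are illustrative; `offset_ns` stands in for the kind of per-pixel timing offset a range-accuracy calibration would estimate.

```python
import numpy as np

C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_to_range(t_ns, offset_ns=0.0):
    """Convert a round-trip time of flight (in ns) into a one-way range (m).

    `offset_ns` is a hypothetical per-pixel calibration offset; real
    calibrations estimate one per detector pixel.
    """
    t = (np.asarray(t_ns, dtype=float) - offset_ns) * 1e-9  # to seconds
    return C * t / 2.0  # halved: the pulse travels to the target and back

# A round trip of about 66.7 ns corresponds to roughly 10 m of range.
r = tof_to_range(66.7)
```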
Segmentation and classification are prolific research topics in the image processing community. These topics have been increasingly used in the context of analysis of cementitious materials on images acquired with a scanning electron microscope. Indeed, there is a need to be able to detect and to quantify the materials present in a cement paste in order to follow the chemical reactions occurring in the material even days after the solidification. We propose a new approach for segmentation and classification of cementitious materials based on the denoising of the data with a block-matching three-dimensional (3-D) algorithm, binary partition tree (BPT) segmentation, support vector machines (SVM) classification, and interactivity with the user. The BPT provides a hierarchical representation of the spatial regions of the data, allowing a segmentation to be selected among the admissible partitions of the image. SVMs are used to obtain a classification map of the image. This approach combines state-of-the-art image processing tools with user interactivity to allow a better segmentation to be performed, or to help the classifier discriminate the classes better. We show that the proposed approach outperforms a previous method when applied to synthetic data and several real datasets coming from cement samples, both qualitatively with visual examination and quantitatively with the comparison of experimental results with theoretical ones.
A new interactive approach for segmentation and classification of cementitious materials using Scanning Electron Microscope images is presented in this paper. It is based on the denoising of the data with the Block-Matching 3D (BM3D) algorithm, Binary Partition Tree (BPT) segmentation, and Support Vector Machines (SVM) classification. The latter two operations are both performed interactively. The BPT provides a hierarchical representation of the spatial regions of the data and, after an appropriate pruning, it yields a segmentation map which can be improved by the user. SVMs are used to obtain a classification map of the image with which the user can interact to get better results. The interactivity is twofold: it allows the user to get a better segmentation by exploring the BPT structure, and to help the classifier better discriminate the classes. This is performed by improving the representativeness of the training set, adding new pixels from the segmented regions to the training samples. This approach performs similarly to, or better than, methods currently used in an industrial environment. The validation is performed on several cement samples, both qualitatively by visual examination and quantitatively by comparison of the experimental results with theoretical values.
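To illustrate the BPT principle used above, here is a deliberately minimal sketch on a 1-D signal: start from one leaf region per sample and repeatedly merge the pair of adjacent regions with the most similar mean. The actual system builds the tree over 2-D SEM image regions with richer region models; this toy version only shows the bottom-up merging idea.

```python
import numpy as np

def binary_partition_tree_1d(signal):
    """Toy Binary Partition Tree on a 1-D signal.

    Repeatedly merges the pair of adjacent regions with the most similar
    mean value and records the merge sequence; cutting the sequence early
    yields a segmentation (a pruning of the tree).
    """
    regions = [[i] for i in range(len(signal))]   # leaf regions
    means = [float(v) for v in signal]
    merges = []
    while len(regions) > 1:
        # index of the adjacent pair with minimal mean difference
        k = min(range(len(regions) - 1),
                key=lambda i: abs(means[i] - means[i + 1]))
        merged = regions[k] + regions[k + 1]
        merges.append((regions[k], regions[k + 1]))
        means[k:k + 2] = [sum(signal[j] for j in merged) / len(merged)]
        regions[k:k + 2] = [merged]
    return merges

sig = np.array([1.0, 1.1, 5.0, 5.2, 9.0])
merges = binary_partition_tree_1d(sig)
# the first merges pair the most similar neighbours: (1.0, 1.1), then (5.0, 5.2)
```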
X-ray exposure during image-guided interventions can be significant for the patient as well as for the medical staff.
Therefore, dose reduction is a major concern. Nevertheless, decreasing the dose per image significantly degrades the
image quality: it tends to increase the noise and reduce the contrast. Hence, we propose a new and efficient method to
reduce the noise in low-dose fluoroscopic sequences. Many methods in this domain implement either multi-scale
approaches, using the wavelet transform and its derivatives, or filters in the spatial domain. Our work is based on a
spatio-temporal denoising filter using the curvelet transform. Indeed, this sparse transform represents smooth images
with edges well and can be applied to fluoroscopic images to achieve robust denoising performance. We therefore
propose to combine a temporal recursive filter with a spatial curvelet filter. Our work focuses on the use of the
statistical dependencies between the curvelet coefficients in order to optimize the threshold function. Determining the
correlation among coefficients makes it possible to detect which coefficients represent the relevant signal. Our method
can thus attenuate or even remove curvelet-like artefacts. The performance and robustness of the proposed method are
assessed on both synthetic and real low-dose sequences (i.e., 20 nGy/frame).
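The temporal half of such a spatio-temporal scheme can be sketched as a simple first-order recursive (IIR) filter. This is only an illustrative approximation: the curvelet-domain spatial filtering and the correlation-based threshold optimization described above are not reproduced here, and `alpha` is an assumed blending weight.

```python
import numpy as np

def temporal_recursive_filter(frames, alpha=0.25):
    """First-order recursive temporal filter over a frame sequence.

    `alpha` weights the current noisy frame against the running estimate;
    smaller values average more frames and suppress more noise, at the cost
    of motion lag (which the full method mitigates with spatial filtering).
    """
    est = frames[0].astype(float)
    out = []
    for f in frames:
        est = alpha * f + (1.0 - alpha) * est   # recursive update
        out.append(est.copy())
    return out

# on a static scene, temporal recursion shrinks the noise standard deviation
rng = np.random.default_rng(0)
seq = [rng.normal(0.0, 1.0, (8, 8)) for _ in range(50)]
filtered = temporal_recursive_filter(seq)
```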
The detection of hedges is a very important task for monitoring a rural environment and aiding the management
of the related natural resources. Hedges are narrow vegetated areas composed of shrubs and/or trees that are
usually present at the boundaries of adjacent agricultural fields. In this paper, a technique for detecting hedges
is presented. It exploits the spectral and spatial characteristics of hedges. In detail, spatial features are
extracted with attribute filters, which are connected operators defined in the mathematical morphology framework.
Attribute filters are flexible operators that can perform a simplification of a grayscale image driven by an
arbitrary measure. Such a measure can be related to characteristics of regions in the scene, such as scale, shape,
and contrast. Attribute filters can be computed on tree representations of an image (such as the component tree)
which represent either bright or dark regions (with respect to the gray levels of their surroundings). In this
work, it is proposed to compute attribute filters on the inclusion tree, which is a hierarchical dual
representation of an image in which the nodes of the tree correspond to both bright and dark regions. Specifically,
attribute filters are employed to aid the detection of woody elements in the image, which is a step in the process
aimed at detecting hedges. In order to characterize the spatial information of the hedges in the image, different
attributes have been considered in the analysis. The final decision is obtained by fusing the results of different
detectors applied to the filtered image.
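As a minimal illustration of the attribute-filter idea, the sketch below keeps only the connected components of a binary image that satisfy an area criterion. This is a simplification of the method above, which applies grayscale attribute filters on the inclusion tree with several attributes (scale, shape, contrast); only the "filter regions by an arbitrary measure" principle is shown.

```python
import numpy as np
from scipy import ndimage

def area_attribute_filter(binary_img, min_area):
    """Binary area opening: keep only the connected components whose area
    attribute reaches `min_area`. A minimal stand-in for the grayscale
    attribute filters on trees used in the actual method.
    """
    labels, n = ndimage.label(binary_img)
    areas = ndimage.sum(binary_img, labels, index=np.arange(1, n + 1))
    keep = np.zeros(n + 1, dtype=bool)   # keep[0] is the background
    keep[1:] = areas >= min_area
    return keep[labels]

img = np.zeros((6, 6), dtype=bool)
img[0:3, 0:3] = True       # 9-pixel component: kept
img[5, 5] = True           # 1-pixel component: filtered out
out = area_attribute_filter(img, min_area=4)
```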
Watershed is one of the most widely used algorithms for segmenting remote sensing images. This segmentation
technique can be thought of as a flooding performed on a topographic relief, in which the water catchment basins,
separated by watershed lines, are the regions in the resulting segmentation. A popular technique for performing
a watershed relies on the flooding of the gradient image, in which high values correspond to watershed lines
and regional minima to the bottoms of the catchment basins. Here we refer to a decomposition of the segmentation
map respecting the nesting property from a finer to a coarser scale as a hierarchical segmentation, i.e.,
the set of partition lines at a coarser scale should be included in that of the finer scale. Starting from the
watershed lines, or partition lines, of the gradient image, we propose to perform a simplification using novel
operators of mathematical morphology for the filtering of thin and oriented features. By lowering the smallest
edges, one can reach a coarser partition of the image. Then, by applying a sequence of progressively more
aggressive filters, it is possible to generate a hierarchy of segmentations.
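The flooding and nesting ideas can be sketched on a 1-D toy profile: each sample drains by steepest descent to a regional minimum (its catchment basin), and "lowering" the smallest saddles merges adjacent basins into a coarser partition nested in the finer one. This is an illustration of the principle only, not the paper's operators for thin and oriented features.

```python
import numpy as np

def basins_1d(signal, saddle_thresh=-np.inf):
    """Toy 1-D watershed with hierarchical merging.

    Each sample is assigned to the regional minimum it reaches by steepest
    descent. Local maxima not higher than `saddle_thresh` are lowered: the
    two basins they separate are merged, yielding a coarser partition.
    """
    s = np.asarray(signal, dtype=float)
    lab = np.empty(len(s), dtype=int)
    for start in range(len(s)):
        i = start
        while True:                              # steepest descent
            left = s[i - 1] if i > 0 else np.inf
            right = s[i + 1] if i < len(s) - 1 else np.inf
            if left < s[i] and left <= right:
                i -= 1
            elif right < s[i]:
                i += 1
            else:
                break
        lab[start] = i                           # basin = index of its minimum
    for i in range(1, len(s) - 1):               # lower the smallest saddles
        if s[i] > s[i - 1] and s[i] > s[i + 1] and s[i] <= saddle_thresh:
            lab[lab == lab[i + 1]] = lab[i - 1]
    return lab

sig = [3, 1, 2, 0, 5, 2, 4]
fine = basins_1d(sig)                      # three basins (minima 1, 0 and 2)
coarse = basins_1d(sig, saddle_thresh=2)   # lowering the saddle of height 2 merges two basins
```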
Hyperspectral imaging is a continuously growing area of remote sensing. Hyperspectral data provide a wide
spectral range, coupled with a very high spectral resolution, and are suitable for detection and classification of
surfaces and chemical elements in the observed image. The main problem with hyperspectral data for these
applications is the (relatively) low spatial resolution, which can vary from a few to tens of meters. In the
case of classification purposes, the major problem caused by low spatial resolution is related to mixed pixels,
i.e., pixels in the image where more than one land cover class is within the same pixel. In such a case, the
pixel cannot be considered as belonging to just one class, and the assignment of the pixel to a single class
will inevitably lead to a loss of information, no matter what class is chosen. In this paper, a new supervised
technique exploiting the advantages of both probabilistic classifiers and spectral unmixing algorithms is proposed,
in order to produce land cover maps of improved spatial resolution. The method consists of three steps. In the first
step, a coarse classification is performed, based on the probabilistic output of a Support Vector Machine (SVM).
Each pixel is either assigned to a class, if the probability value obtained in the classification process is greater
than a chosen threshold, or left unclassified. In the proposed approach, it is assumed that pixels with a low
probabilistic output are mixed pixels, and their classification is therefore addressed in a second step. In the second
step, spectral unmixing is performed on the mixed pixels by considering the preliminary results of the coarse
classification step and applying a Fully Constrained Least Squares (FCLS) method to every unlabeled pixel, in
order to obtain the abundance fractions of each land cover type. Finally, in a third step, spatial regularization
by Simulated Annealing is performed to obtain the resolution improvement. Experiments were carried out on
a real hyperspectral data set. The results are good both visually and numerically and show that the proposed
method clearly outperforms common hard classification methods when the data contain mixed pixels.
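A common way to implement the FCLS step is to fold the sum-to-one constraint into a non-negative least-squares solve by appending a heavily weighted constraint row to the system. The sketch below uses that standard trick; it is an illustrative implementation with made-up two-endmember data, not the paper's code.

```python
import numpy as np
from scipy.optimize import nnls

def fcls(endmembers, pixel, delta=1e3):
    """Fully Constrained Least Squares unmixing (sketch).

    Solves min ||E a - x|| subject to a >= 0 and sum(a) = 1 by appending a
    sum-to-one row weighted by `delta` and solving with non-negative least
    squares.
    """
    E = np.vstack([endmembers, delta * np.ones(endmembers.shape[1])])
    x = np.append(pixel, delta)
    abundances, _ = nnls(E, x)
    return abundances

# two illustrative endmember spectra (3 bands) and a 50/50 mixed pixel
E = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, 0.5]])
x = E @ np.array([0.5, 0.5])
a = fcls(E, x)        # abundances close to [0.5, 0.5], summing to one
```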
In this paper, we cover a decade of research in the field of spectral-spatial classification in hyperspectral remote
sensing. While the very rich spectral information is usually used through pixel-wise classification in order to
recognize the physical properties of the sensed material, the spatial information, with a constantly increasing
resolution, provides insightful features to analyze the geometrical structures present in the picture. This is
especially important for the analysis of urban areas, while it also helps to reduce the classification noise in other
cases. The very high dimension of hyperspectral data is a very challenging issue when it comes to classification.
Support Vector Machines are nowadays widely acknowledged as a first-choice solution. In parallel, capturing the
spatial information is also very challenging. Mathematical morphology provides adequate tools: granulometries
(the morphological profile) for feature extraction, advanced filters for the definition of adaptive neighborhoods,
the following natural step being an actual segmentation of the data. In order to merge spectral and spatial
information, different strategies can be designed: data fusion at the feature level or decision fusion combining
the results of a segmentation on the one hand and the result of a pixel-wise classification on the other.
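The decision-fusion strategy mentioned above is often realized as a majority vote: each region of the segmentation is assigned the majority class of its pixel-wise labels. The sketch below shows that scheme on a tiny made-up example; region and class ids are illustrative.

```python
import numpy as np

def majority_vote_fusion(segmentation, pixel_labels):
    """Decision fusion of a segmentation map and a pixel-wise classification:
    each segmented region receives the majority class among its pixels,
    which regularizes isolated misclassifications inside homogeneous regions.
    """
    fused = np.empty_like(pixel_labels)
    for region in np.unique(segmentation):
        mask = segmentation == region
        classes, counts = np.unique(pixel_labels[mask], return_counts=True)
        fused[mask] = classes[np.argmax(counts)]
    return fused

seg = np.array([[0, 0, 1],
                [0, 0, 1]])
cls = np.array([[2, 2, 7],
                [2, 9, 7]])
fused = majority_vote_fusion(seg, cls)   # the isolated 9 is relabelled to 2
```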
Defect detection in images is a common task in quality control and is often integrated into partially or fully automated systems. Assessing the performance of defect detection algorithms is thus of great interest. However, because this is application- and context-dependent, it remains a difficult task. We describe a methodology to measure the performance of such algorithms on large images in a semi-automated defect inspection situation. Considering standard problems occurring in real cases, we compare typical performance evaluation methods. This analysis leads to the construction of a simple and practical receiver operating characteristic (ROC) based method. This method extends the pixel-level ROC analysis to an object-based approach by dilating the ground truth and the set of detected pixels before calculating the true-positive and false-positive rates. These dilations are computed thanks to the a priori knowledge of a human-defined ground truth and give more consistent values to the true-positive and false-positive rates in the semi-automated inspection context. Moreover, the dilation process is designed to adapt automatically to the object's shape so that it can be applied to all types of defects without any parameter to be tuned.
Defect detection in images is a common task in quality control and is often integrated into partially or fully
automated systems. Assessing the performance of defect detection algorithms is thus of great interest. However,
being application- and context-dependent, it remains a difficult task. This paper describes a methodology to
measure the performance of such algorithms on large images in a semi-automated defect inspection situation.
Considering standard problems occurring in real cases, a comparison of typical performance evaluation methods
is made. This analysis leads to the construction of a simple and practical ROC-based method. This algorithm
extends the pixel-level ROC analysis to an object-based approach by dilating the ground truth and the set of
detected pixels before calculating true-positive and false-positive rates. These dilations are computed thanks
to the a priori knowledge of a human-defined ground truth and give more consistent values to the true-positive
and false-positive rates in the semi-automated inspection context. Moreover, the dilation process is designed
to adapt automatically to the objects' shape so that it can be applied to all types of defects.
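The dilation-based counting can be sketched as follows. This is a simplified version with a fixed number of dilation iterations, whereas the method above adapts the dilation to each object's shape; the example data are made up to show that a detection one pixel off the defect still counts as a hit.

```python
import numpy as np
from scipy import ndimage

def dilated_roc_point(detected, ground_truth, iterations=1):
    """Object-oriented TP/FP rates: dilate both the ground truth and the
    detection mask before counting, so a detection a few pixels away from
    a defect is still scored as a true positive.
    """
    gt_d = ndimage.binary_dilation(ground_truth, iterations=iterations)
    det_d = ndimage.binary_dilation(detected, iterations=iterations)
    tp = np.logical_and(detected, gt_d).sum()
    fp = np.logical_and(detected, ~gt_d).sum()
    fn = np.logical_and(ground_truth, ~det_d).sum()
    tn = np.logical_and(~detected, ~gt_d).sum()
    tpr = tp / (tp + fn) if tp + fn else 0.0
    fpr = fp / (fp + tn) if fp + tn else 0.0
    return tpr, fpr

gt = np.zeros((5, 5), dtype=bool)
gt[2, 2] = True                    # a one-pixel defect
det = np.zeros((5, 5), dtype=bool)
det[2, 3] = True                   # detection one pixel off target
tpr, fpr = dilated_roc_point(det, gt)
# a strict pixel-level ROC would score this detection as a complete miss
```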
Classification of high resolution remote sensing data from urban areas is investigated. The main challenge with the classification of high resolution remote sensing image data is that spatial information is extremely important in the classification. Therefore, classification methods for such data need to take it into account. Here, a method based on mathematical morphology is used to preprocess the image data. The approach is based on building a morphological profile by a composition of geodesic opening and closing operations of different sizes. In the paper, the classification is performed on two data sets from urban areas: one panchromatic and one hyperspectral. These data sets have different characteristics and need different treatments by the morphological
approach. The approach can be directly applied to the panchromatic data. However, some feature extraction needs to be done on the hyperspectral data before the approach is applied. Both principal and independent components are considered here for that purpose. A neural network approach is used for the classification of the morphological profiles, and its performance in terms of accuracy is compared to that of a fuzzy possibilistic approach in the case of the panchromatic data and of the conventional maximum likelihood method
based on the Gaussian assumption in the case of the hyperspectral data. Different types of feature extraction methods are considered in the classification process.
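The morphological-profile construction can be sketched as a stack of openings and closings of increasing structuring-element size around the original image. Note the simplification: the method above uses geodesic (reconstruction-based) openings and closings, which preserve object shapes better than the plain grey operators used here to keep the sketch self-contained.

```python
import numpy as np
from scipy import ndimage

def morphological_profile(img, sizes=(3, 5, 7)):
    """Stack plain grey openings and closings of increasing structuring
    element size around the original image; each layer becomes a feature
    band fed to the classifier.
    """
    layers = [ndimage.grey_opening(img, size=s) for s in reversed(sizes)]
    layers.append(img)                     # the original image in the middle
    layers += [ndimage.grey_closing(img, size=s) for s in sizes]
    return np.stack(layers)                # shape (2 * len(sizes) + 1, H, W)

img = np.random.default_rng(1).random((16, 16))
profile = morphological_profile(img)
# openings are anti-extensive (<= img), closings extensive (>= img)
```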
The presented study, based on the continuous wavelet transform and time-frequency representations, introduces new algorithms which perform different kinds of separation processing depending on the nature of the seismic data. When dealing with a one-dimensional recorded signal (one sensor), we propose a segmentation of its time-scale representation. This leads to the automatic detection and separation of the different waves. This algorithm can be applied to a whole seismic profile containing several sensors by tracking the segmentation features in the time-scale image sequence. The resulting separation algorithm is efficient as long as the patterns of the different waves do not overlap in the time-scale plane. Afterwards, the purpose is to take the redundancy of information in higher-dimensional data into account to increase the separation possibilities in the presence of interference. In the case of vectorial sensors, we use the polarization information to separate the different waves using phase shifts, rotations, and amplifications. Finally, in the case of linear array data, we use the propagation-velocity information to separate dispersive waves with overlapping patterns. For this purpose, we propose a new time-scale representation which enables the estimation of the wave dispersion function from a small array of sensors.
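The time-scale representation that the segmentation above operates on can be sketched as a continuous wavelet transform with a Morlet mother wavelet, computed by direct convolution. This is a minimal illustration with assumed parameters (`w0 = 6`, a hand-picked scale grid), not the study's representation; it shows how a pure tone concentrates its energy at the matching scale.

```python
import numpy as np

def cwt_morlet(signal, scales, w0=6.0):
    """Continuous wavelet transform with a Morlet mother wavelet.

    For each scale `a`, correlates the signal with a dilated, normalized
    Morlet wavelet; the magnitude of the result forms the time-scale image.
    """
    sig = np.asarray(signal, dtype=float)
    out = np.empty((len(scales), len(sig)), dtype=complex)
    for k, a in enumerate(scales):
        t = np.arange(-4 * a, 4 * a + 1)           # wavelet support
        wavelet = np.exp(1j * w0 * t / a - (t / a) ** 2 / 2) / np.sqrt(a)
        out[k] = np.convolve(sig, np.conj(wavelet[::-1]), mode="same")
    return out

# a pure 5 Hz tone concentrates its energy near scale w0*fs/(2*pi*f) ~= 19
fs, f = 100.0, 5.0
tt = np.arange(0, 4, 1 / fs)                       # 400 samples
x = np.sin(2 * np.pi * f * tt)
scales = np.array([2.0, 4.0, 19.0, 40.0])
coefs = cwt_morlet(x, scales)
energy = np.abs(coefs).mean(axis=1)
```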