Modern atmospheric gas monitoring applications demand progressively better performance in terms of spatial, spectral and temporal resolution. In this context, great potential is shown by a newly developed family of snapshot imaging spectrometers based on Fabry-Perot interferometry, whose conceptual design was patented under the name ImSPOC. Three sensor prototypes based on the ImSPOC concept are under development: 1) in the near-infrared wavelength range for CH4 or H2S detection, 2) in the ultraviolet and visible range for NO2, O4, O3, and O2 characterisation, and 3) specifically for CO2 monitoring. With the realisation of these prototypes, the need arose to provide intelligible and well-calibrated acquisitions to the final users. This study presents the ImSPOC concept from the signal processing point of view, framing the optical transformations performed by the instruments within an appropriate mathematical model. Additionally, preliminary developments are presented to address the first step of the signal processing pipeline for this instrument: the estimation of the thickness of each interferometer. This is a fundamental step for obtaining calibrated acquisitions that can then be used for gas monitoring.
In remote sensing, a common scenario involves the simultaneous acquisition of a panchromatic (PAN) image, a broad-band image with high spatial resolution, and a multispectral (MS) image, which is composed of several spectral bands but has lower spatial resolution. This pair of sensors, mounted on the same platform, can be found on several very high spatial resolution optical remote sensing satellites for Earth observation (e.g., QuickBird, WorldView and SPOT).
In this work we investigate an alternative acquisition strategy, which combines the information from both images into a single-band image with the same number of pixels as the PAN. This operation significantly reduces the burden of data downlink by achieving a fixed compression ratio of 1/(1 + b/p^2) compared to the conventional acquisition modes. Here, b and p denote the number of distinct bands in the MS image and the scale ratio between the PAN and MS, respectively (e.g., b = p = 4 as in many commercial high spatial resolution satellites). Many strategies can be conceived to generate such a compressed image from a given set of PAN and MS sources. A simple option, which is presented here, is based on an application of color filter array (CFA) theory. Specifically, the value of each pixel in the spatial support of the synthetic image is taken from the corresponding sample either in the PAN or in a given band of the MS up-sampled to the size of the PAN. The choice is deterministic and made according to a custom mask. Several works in the literature propose methods to construct masks that preserve as much spectral content as possible for conventional RGB images. However, those results are not directly applicable to the case at hand, since it deals with i) images with different spatial resolutions, ii) potentially more than three spectral bands and, iii) in general, different radiometric dynamics across bands. A tentative approach to address these issues is presented in this work. The compressed image resulting from the proposed acquisition strategy is then processed to generate an image featuring both the spatial resolution of the PAN and the spectral bands of the MS.
This final product allows a direct comparison with the result of any standard pan-sharpening algorithm; the latter refers to a specific instance of data fusion (i.e., the fusion of a PAN and an MS image), which differs from our scenario since both sources are separately taken as input. In our setting, the fusion step performed at the ground segment jointly involves a fusion and a reconstruction problem (the latter known as demosaicing in the CFA literature). We propose to address this problem with a variational approach. We present preliminary results of the proposed scheme on real remotely sensed images, tested over two different datasets acquired by the QuickBird and GeoEye-1 platforms, which show superior performance compared to applying a basic radiometric compression algorithm to both sources before performing a pan-sharpening protocol. The validation of the final products in both scenarios allows the tradeoff between the compression of the source data and the quality loss suffered to be appreciated both visually and numerically.
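As a rough sketch of the two ingredients above, the compression ratio and the mask-driven pixel selection can be written in a few lines of Python. The function names, the mask convention (0 = take the PAN sample, k >= 1 = take band k-1 of the upsampled MS) and the example values are our own illustration, not the paper's implementation:

```python
import numpy as np

def compression_ratio(b: int, p: int) -> float:
    """Size of the single-band compressed image relative to the
    conventional PAN + MS acquisition: 1 / (1 + b / p**2)."""
    return 1.0 / (1.0 + b / p**2)

def cfa_compress(pan: np.ndarray, ms_up: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Build the single-band compressed image on the PAN grid.

    Where mask == 0 the PAN sample is kept; where mask == k (k >= 1)
    the pixel is taken from band k-1 of the MS up-sampled to PAN size.
    """
    out = pan.copy()
    for k in range(ms_up.shape[0]):
        out[mask == k + 1] = ms_up[k][mask == k + 1]
    return out

# With b = p = 4, as on many commercial VHR satellites, the compressed
# acquisition is 80% of the conventional data volume.
ratio = compression_ratio(b=4, p=4)  # -> 0.8
```

The deterministic mask plays the same role as the CFA pattern of a digital camera sensor, which is what makes demosaicing-style reconstruction applicable at the ground segment.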
Segmentation and classification are prolific research topics in the image processing community, and they have been increasingly applied to the analysis of cementitious materials in images acquired with a scanning electron microscope. Indeed, there is a need to detect and quantify the materials present in a cement paste in order to follow the chemical reactions occurring in the material even days after solidification. We propose a new approach for the segmentation and classification of cementitious materials based on denoising of the data with a block-matching three-dimensional (3-D) algorithm, binary partition tree (BPT) segmentation, support vector machine (SVM) classification, and interaction with the user. The BPT provides a hierarchical representation of the spatial regions of the data, allowing a segmentation to be selected among the admissible partitions of the image. SVMs are used to obtain a classification map of the image. This approach combines state-of-the-art image processing tools with user interactivity to allow a better segmentation to be performed, or to help the classifier better discriminate the classes. We show that the proposed approach outperforms a previous method when applied to synthetic data and to several real datasets coming from cement samples, both qualitatively through visual examination and quantitatively through the comparison of experimental results with theoretical ones.
A new interactive approach for the segmentation and classification of cementitious materials using Scanning Electron Microscope images is presented in this paper. It is based on denoising of the data with the Block-Matching 3D (BM3D) algorithm, Binary Partition Tree (BPT) segmentation and Support Vector Machine (SVM) classification. The latter two operations are both performed interactively. The BPT provides a hierarchical representation of the spatial regions of the data and, after an appropriate pruning, it yields a segmentation map which can be improved by the user. SVMs are used to obtain a classification map of the image, with which the user can interact to get better results. The interactivity is twofold: it allows the user to obtain a better segmentation by exploring the BPT structure, and to help the classifier better discriminate the classes. The latter is achieved by improving the representativeness of the training set, adding new pixels from the segmented regions to the training samples. This approach performs similarly to or better than methods currently used in an industrial environment. The validation is performed on several cement samples, both qualitatively by visual examination and quantitatively by comparison of experimental results with theoretical values.
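The retraining step described above, where user-selected pixels are added to the training set, can be sketched with scikit-learn as a stand-in for whatever SVM implementation the authors use. The two-dimensional "pixel features" and cluster positions below are synthetic placeholders:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic two-class pixel features (e.g., gray level + a texture measure),
# one well-separated cluster per material class.
class0 = rng.normal(loc=[0.2, 0.2], scale=0.05, size=(50, 2))
class1 = rng.normal(loc=[0.8, 0.8], scale=0.05, size=(50, 2))
X = np.vstack([class0, class1])
y = np.array([0] * 50 + [1] * 50)

clf = SVC(kernel="rbf").fit(X, y)

# Interactive step: the user adds pixels from a segmented region to the
# training set, and the classifier is retrained on the enlarged set.
new_X = rng.normal(loc=[0.8, 0.8], scale=0.05, size=(10, 2))
clf = SVC(kernel="rbf").fit(np.vstack([X, new_X]),
                            np.concatenate([y, np.ones(10, dtype=int)]))

pred = clf.predict([[0.75, 0.85]])  # a pixel near the class-1 cluster
```

Adding region pixels rather than isolated samples is what improves the representativeness of the training set: a whole segmented region contributes the intra-class variability that a few hand-clicked pixels would miss.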
Thanks to recent technological advances, a large variety of image data with variable geometric, radiometric and temporal resolution is at our disposal. In many applications the processing of such images requires high-performance computing techniques in order to deliver timely responses, e.g. for rapid decisions or real-time actions. Thus, parallel or distributed computing methods, Digital Signal Processor (DSP) architectures, Graphical Processing Unit (GPU) programming and Field-Programmable Gate Array (FPGA) devices have become essential tools for the challenging task of processing large amounts of geo-data. This article focuses on the processing and registration of large datasets of terrestrial and aerial images for 3D reconstruction, diagnostic purposes and environmental monitoring. For the image alignment procedure, sets of corresponding feature points need to be extracted automatically in order to subsequently compute the geometric transformation that aligns the data. Feature extraction and matching are among the most computationally demanding operations in the processing chain; thus, a great degree of automation and speed is mandatory. The details of the implemented operations (named LARES), which exploit parallel architectures and the GPU, are presented. The innovative aspects of the implementation are (i) its effectiveness on a large variety of unorganized and complex datasets, (ii) its capability to work with high-resolution images and (iii) the speed of the computations. Examples and comparisons with standard CPU processing are also reported and commented upon.
The detection of hedges is a very important task for monitoring a rural environment and aiding the management of its natural resources. Hedges are narrow vegetated areas composed of shrubs and/or trees that are usually present at the boundaries of adjacent agricultural fields. In this paper, a technique for detecting hedges is presented, which exploits the spectral and spatial characteristics of hedges. In detail, spatial features are extracted with attribute filters, which are connected operators defined in the mathematical morphology framework. Attribute filters are flexible operators that can perform a simplification of a grayscale image driven by an arbitrary measure. Such a measure can be related to characteristics of the regions in the scene, such as their scale, shape, contrast, etc. Attribute filters can be computed on tree representations of an image (such as the component tree), which represent either bright or dark regions (with respect to their surrounding gray levels). In this work, it is proposed to compute attribute filters on the inclusion tree, which is a hierarchical self-dual representation of an image in which the nodes of the tree correspond to both bright and dark regions. Specifically, attribute filters are employed to aid the detection of woody elements in the image, which is a step in the process of detecting hedges. In order to characterize the spatial information of the hedges in the image, different attributes have been considered in the analysis. The final decision is obtained by fusing the results of different detectors applied to the filtered image.
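For intuition, an area attribute filter computed on the max-tree (bright regions) and its dual on the min-tree (dark regions) can be reproduced with scikit-image. Note that this is a sketch of the component-tree case only, composed sequentially; it approximates but does not equal the self-dual inclusion-tree operator the paper proposes:

```python
import numpy as np
from skimage.morphology import area_closing, area_opening

img = np.full((32, 32), 100, dtype=np.uint8)
img[2:4, 2:4] = 200      # small bright structure (area 4 px)
img[10:20, 10:20] = 200  # large bright structure (area 100 px)
img[25:27, 25:27] = 10   # small dark structure (area 4 px)

# Area opening removes bright connected components smaller than the
# threshold; area closing does the same for dark components. Regions
# that survive keep their exact contours (connected filtering).
simplified = area_closing(area_opening(img, area_threshold=50),
                          area_threshold=50)
```

Both small structures are flattened to the background level while the large bright region is preserved with its shape intact, which is the key property that makes these filters attractive for extracting narrow woody elements without distorting their boundaries.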
In this paper, a technique for the integration of images and point clouds for the classification of urban areas is presented. A set of overlapping aerial RGB images is used as input. A photogrammetric Digital Surface Model (DSM) is first generated using advanced matching techniques. Subsequently, a thematic classification of the surveyed areas is performed by considering simultaneously the surface reflectance in the visible spectrum of the image sequence, the altitude information (provided by the generated DSM) and additional spatial features (Attribute Profiles). Exploiting the geometrical constraints provided by the collinearity condition and the epipolar geometry between the images, the thematic classification of the land cover can be improved by jointly considering the height information and the reflectance values of the DSM. Examples and comments on the proposed classification algorithm are given using a set of aerial images over a dense urban area.
The watershed is one of the most widely used algorithms for segmenting remote sensing images. This segmentation technique can be thought of as a flooding performed on a topographic relief, in which the water catchment basins, separated by watershed lines, are the regions of the resulting segmentation. A popular technique for performing a watershed relies on the flooding of the gradient image, in which high values correspond to watershed lines and regional minima to the bottoms of the catchment basins. Here we refer to a hierarchical segmentation as a decomposition of the segmentation map respecting the nesting property from a finer to a coarser scale, i.e., the set of partition lines at a coarser scale should be included in that of the finer scale. Starting from the watershed lines, or partition lines, of the gradient image, we propose to perform a simplification using novel operators of mathematical morphology for the filtering of thin and oriented features. By lowering the weakest edges, one can reach a coarser partition of the image. Then, by applying a sequence of progressively more aggressive filters, it is possible to generate a hierarchy of segmentations.
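The flooding-of-the-gradient scheme can be reproduced in a few lines with scikit-image; this uses a generic watershed implementation and a toy image, not the paper's thin-and-oriented-feature operators:

```python
import numpy as np
from skimage.filters import sobel
from skimage.segmentation import watershed

# Synthetic image: two flat zones separated by an intensity step.
img = np.zeros((40, 40))
img[:, 20:] = 1.0

gradient = sobel(img)         # high values along the step: the watershed line
labels = watershed(gradient)  # flood from the regional minima of the gradient
# Each flat zone becomes one catchment basin, so labels has two regions.
```

Filtering the gradient before flooding lowers the weakest edges first, merging their adjacent basins; repeating this with increasingly aggressive filters is precisely what produces a nested hierarchy of partitions.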
In this paper we investigate the application of Morphological Attribute Profiles to both hyperspectral and LiDAR data in order to fuse spectral, spatial and elevation information for classification purposes. While hyperspectral data provide a wealth of spectral information, multi-return LiDAR data provide geometrical information on the elevation and structure of the objects on the ground, as well as a measure of their laser cross-section. Therefore, hyperspectral and LiDAR data are complementary information sources, and their joint analysis can potentially improve classification accuracies. Morphological Profiles (MPs) and Morphological Attribute Profiles (MAPs) have been successfully used as tools to combine spectral and spatial information for the classification of remote sensing data. MPs and MAPs can also be applied to LiDAR data to reduce the irregularities in the LiDAR measurements which are inherent to the sampling strategy used in the acquisition process. Experiments carried out on hyperspectral and LiDAR data acquired over an urban area of the city of Trento (Italy) demonstrate the effectiveness of MAPs for the classification process.
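As a rough illustration of what an attribute profile looks like as a feature stack, the sketch below uses the area attribute via scikit-image's component-tree operators. The attribute, thresholds and random input band are illustrative choices, not the configuration used in the paper:

```python
import numpy as np
from skimage.morphology import area_closing, area_opening

def area_attribute_profile(band: np.ndarray, thresholds) -> np.ndarray:
    """Stack [closings at decreasing threshold, original band,
    openings at increasing threshold] into one feature cube, so that
    every pixel gets a multilevel spatial description for classification."""
    closings = [area_closing(band, t) for t in reversed(thresholds)]
    openings = [area_opening(band, t) for t in thresholds]
    return np.stack(closings + [band] + openings)

band = np.random.default_rng(0).integers(0, 255, (64, 64), dtype=np.uint8)
profile = area_attribute_profile(band, thresholds=[25, 100, 400])
# profile.shape == (7, 64, 64): seven features per pixel
```

For the fusion scenario above, such a profile would be computed both on selected hyperspectral components and on the LiDAR-derived elevation raster, and the stacks concatenated pixel-wise before classification.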
The analysis of changes that have occurred in multi-temporal images acquired by the same sensor over the same geographical area at different dates is usually performed by comparing the two images after co-registration. When one considers very high resolution (VHR) remote sensing images, the spatial information of the pixels becomes very important and should be included in the analysis. However, taking spatial features into account for change detection in VHR images is far from straightforward, due to effects such as seasonal variations, differences in illumination conditions, residual mis-registration, different acquisition angles, etc., which make the comparison of the structures in the scene complex from a spatial perspective. In this paper we propose a change detection technique based on morphological Attribute Profiles (APs) suitable for the analysis of VHR images. In greater detail, this work aims at detecting the changes that occurred on the ground between the two acquisitions by comparing the APs computed on the image of each date. The experimental analysis has been carried out on two VHR multi-temporal images acquired by the QuickBird sensor over the city of Bam, Iran, before and after the earthquake of Dec. 26, 2003. The experiments confirm that the APs computed at the two dates show different behaviors for changed and unchanged areas. The change detection maps obtained by the proposed technique are able to detect changes in the morphology of the corresponding regions at different dates, regardless of their spectral variations.
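A minimal sketch of the comparison idea follows. It is our own simplification: an area-attribute profile per date and a per-pixel Euclidean distance between the two profiles as the change score, with a fixed threshold. The paper's actual comparison strategy and attributes may differ:

```python
import numpy as np
from skimage.morphology import area_closing, area_opening

def area_profile(band: np.ndarray, thresholds) -> np.ndarray:
    """Attribute profile with the area attribute: closings, original
    band, openings, stacked along a new first axis."""
    cl = [area_closing(band, t) for t in reversed(thresholds)]
    op = [area_opening(band, t) for t in thresholds]
    return np.stack(cl + [band] + op).astype(float)

def change_map(img_t1, img_t2, thresholds=(25, 100), tau=30.0):
    """Per-pixel distance between the profiles of the two dates,
    thresholded at tau to produce a binary change map."""
    p1 = area_profile(img_t1, thresholds)
    p2 = area_profile(img_t2, thresholds)
    score = np.linalg.norm(p1 - p2, axis=0)
    return score > tau
```

Because the profiles encode the morphology of the regions around each pixel, two dates can differ radiometrically yet produce similar profiles, which is what makes the comparison robust to pure spectral variations.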
In this paper we propose Alternating Sequential Attribute Filters, which are Alternating Sequential Filters (ASFs) computed with attribute filters. ASFs are obtained by the iterative application of morphological opening and closing transformations, and they process an image by filtering both bright and dark structures. ASFs are widely used for achieving a simplification of a scene and for the removal of noisy structures. However, ASFs are not suitable for the analysis of very high geometrical resolution remote sensing images, since they do not preserve the geometrical characteristics of the objects in the image. For this reason, instead of the conventional morphological operators, we propose to use attribute filters, which are morphological connected filters and process an image only by merging flat regions. Thus, they are suitable for the analysis of very high resolution images. Since the attribute selected for the analysis largely determines the effect of the morphological filter, applying attribute filters in an alternating composition (as in the ASF) makes it possible to obtain different image simplifications according to the attribute considered. For example, if one considers the area as the attribute, an input image will be processed by progressively removing dark and bright regions of increasing size. When using an attribute that measures the homogeneity of the regions (e.g., the standard deviation of the pixel values), the scene can be simplified by progressively merging homogeneous zones. Moreover, the computation of the ASF with attribute filters can be performed with a reduced computational load by taking advantage of the efficient representation of the image as min- and max-trees. The proposed Alternating Sequential Attribute Filters are qualitatively evaluated on a panchromatic GeoEye-1 image.
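An area-attribute ASF of the kind described above can be sketched as follows, using scikit-image's area operators (which are themselves computed on component trees); the helper function and threshold sequence are illustrative, and the paper's efficient min/max-tree implementation is not reproduced here:

```python
import numpy as np
from skimage.morphology import area_closing, area_opening

def attribute_asf(img: np.ndarray, area_thresholds) -> np.ndarray:
    """Alternating sequential filter built from area opening/closing
    pairs, applied with progressively larger (more aggressive) thresholds."""
    out = img
    for t in sorted(area_thresholds):
        out = area_closing(area_opening(out, t), t)
    return out

img = np.full((40, 40), 128, dtype=np.uint8)
img[2:4, 2:4] = 255      # small bright detail (area 4 px)
img[30:32, 30:32] = 0    # small dark detail (area 4 px)
img[10:25, 10:25] = 255  # large bright object (area 225 px)

out = attribute_asf(img, [16, 64])
# Small bright and dark details are flattened to the background,
# while the large object's contours are preserved exactly.
```

The shape preservation visible in the output (surviving regions keep their exact boundaries) is the property conventional structuring-element ASFs lack, and it is why the connected-filter variant suits very high resolution imagery.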
In this paper we propose to model the structural information in very high geometrical resolution optical images with morphological attribute filters. In particular, we propose to perform a multilevel analysis based on different features of the image, in contrast to the use of conventional morphological profiles. We show that morphological attribute filters are conceptually and experimentally more capable of describing the characteristics of buildings than morphological filters by reconstruction. Furthermore, we address the issue of selecting the most suitable parameters of the filters by proposing an architecture which embeds in the filtering procedure an optimization step based on genetic algorithms. The effectiveness of the proposed technique is demonstrated by experiments carried out on a panchromatic image acquired by the QuickBird satellite.