The use of LIDAR (Light Detection and Ranging) data for detailed terrain mapping and object recognition is becoming increasingly common. While the rendering of LIDAR imagery is expressive, there is a need for a comprehensive performance metric that quantifies the quality of a LIDAR image. A metric or scale for quantifying the interpretability of LIDAR point clouds would be extremely valuable for image chain optimization, sensor design, tasking and collection management, and other operational needs. For many imaging modalities, including visible electro-optical (EO) imagery, thermal infrared, and synthetic aperture radar, the National Imagery Interpretability Rating Scale (NIIRS) has been a useful standard. In this paper, we explore methods for developing a comparable metric for LIDAR. The approach leverages the General Image Quality Equation (GIQE) and constructs a LIDAR quality metric based on the empirical properties of the point cloud data. We present the rationale and the construction of the metric, illustrating its properties with both measured and synthetic data.
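For readers unfamiliar with the General Image Quality Equation that the abstract builds on, a minimal sketch of the published GIQE 4 form for EO imagery is below. This is purely illustrative of the kind of regression the LIDAR metric is modeled after; it is not the authors' LIDAR formulation, and the coefficient values are the commonly published GIQE 4 ones.

```python
import math

def giqe4_niirs(gsd_in, rer, h, g, snr):
    """Predicted NIIRS level from the General Image Quality Equation (GIQE 4).

    gsd_in : ground sample distance in inches (geometric mean)
    rer    : relative edge response (geometric mean)
    h      : edge overshoot due to MTF compensation
    g      : noise gain from MTF compensation
    snr    : signal-to-noise ratio
    """
    # The published GIQE 4 switches coefficient pairs at RER = 0.9.
    a, b = (3.32, 1.559) if rer >= 0.9 else (3.16, 2.817)
    return (10.251 - a * math.log10(gsd_in) + b * math.log10(rer)
            - 0.656 * h - 0.344 * (g / snr))
```

As expected of an interpretability scale, the predicted NIIRS drops as the ground sample distance coarsens and rises with sharper edge response.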
The detection of farmland in synthetic aperture radar (SAR) images is useful for computing the distribution of agriculture in mountainous regions, and SAR technology helps government agencies compile the information needed for agricultural assessment. We propose a texture signature to detect farmland in SAR images. The texture signature is extracted from the texture pixels of the SAR image by fuzzy c-means clustering, where each texture pixel is a vector whose elements are the responses of normalized Gaussian-derivative filters convolved with the SAR image at that spatial position. We then use the texture signatures to detect farmland in SAR images via the earth mover's distance. Finally, we propose a new approach to computing both the true positive rate and the false positive rate of the receiver operating characteristic (ROC) curve, and we use the area under the ROC curve to select the sample and threshold that yield the best detection. Experimental results confirm the effectiveness of the detection method.
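The fuzzy c-means step used to group texture pixels can be sketched in a few lines of numpy. This is a generic textbook FCM update, not the authors' implementation; it assumes the Gaussian-derivative filter bank has already been applied, so each row of `x` is one texture pixel (a vector of filter responses).

```python
import numpy as np

def fuzzy_c_means(x, c, m=2.0, iters=100, seed=0):
    """Plain fuzzy c-means on feature vectors x of shape (n_samples, n_features).

    Returns (centers, memberships); memberships has shape (n_samples, c)
    and each row sums to 1.
    """
    rng = np.random.default_rng(seed)
    u = rng.random((x.shape[0], c))
    u /= u.sum(axis=1, keepdims=True)          # random fuzzy partition
    for _ in range(iters):
        um = u ** m                            # fuzzified memberships
        centers = um.T @ x / um.sum(axis=0)[:, None]
        # squared distances to each center, floored to avoid division by zero
        d = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        d = np.maximum(d, 1e-12)
        inv = d ** (-1.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)
    return centers, u
```

The cluster centers (in filter-response space) play the role of the texture signature that is then compared with the earth mover's distance.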
Large LiDAR (Light Detection and Ranging) data sets are used to create depth maps of objects and geographic areas. We explored, analyzed, and optimized the suitability of image compression methods for these large LiDAR data sets. Our research interprets LiDAR data as intensity-based "depth images" and uses k-means clustering, re-indexing, and JPEG2000 to compress the data. The first step in our method applies the k-means clustering algorithm to an intensity image, creating a small index table, an index map, and a residual image. Next, we use methods from previous research to re-index the index map to optimize compression under JPEG2000. Finally, we compress both the re-indexed map and the residual image using JPEG2000, exploring both lossless and lossy compression. Experimental results show that, in general, we can compress the data losslessly to 23% of its original size, and further still when small amounts of loss are allowed.
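The clustering front-end of such a pipeline can be sketched as follows. This is a hypothetical illustration of the first step only (the re-indexing and JPEG2000 stages are out of scope here); the function name and k-means details are assumptions, not the authors' code. The key property is that the index table, index map, and residual together reconstruct the depth image exactly, so lossless compression of the two output planes is lossless overall.

```python
import numpy as np

def quantize_depth_image(img, k=8, iters=25, seed=0):
    """k-means over pixel intensities of a depth image.

    Returns (index_table, index_map, residual); a JPEG2000 stage would
    then compress index_map and residual separately. Reconstruction is
    exact: img == index_table[index_map] + residual.
    """
    rng = np.random.default_rng(seed)
    vals = img.astype(np.float64).ravel()
    centers = rng.choice(vals, size=k, replace=False)   # seed from data
    for _ in range(iters):
        idx = np.abs(vals[:, None] - centers[None, :]).argmin(axis=1)
        for j in range(k):
            sel = vals[idx == j]
            if sel.size:                                # skip empty clusters
                centers[j] = sel.mean()
    idx = np.abs(vals[:, None] - centers[None, :]).argmin(axis=1)
    index_map = idx.reshape(img.shape)
    residual = img.astype(np.float64) - centers[index_map]
    return centers, index_map, residual
```

The small index map is highly compressible, while the residual carries only the within-cluster variation.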
In this project, we propose to develop a prototype system that can automatically reconstruct 3D scenes of the interior of a building, cave, or other structure using ground-based LIDAR scanning technology. We develop a user-friendly real-time visualization software package that allows users to interactively visualize, navigate, and walk through the room from different view angles, and to zoom in and out.
This paper aims to analyze gender differences in the 3D shapes of the lateral ventricles, providing a reference for the analysis of brain abnormalities related to neurological disorders. Previous studies have mostly focused on volume analysis; the main challenge in shape analysis is the required step of establishing shape correspondence among individual shapes. We developed a simple and efficient method based on anatomical landmarks. Fourteen females and ten males with matched ages participated in this study. 3D ventricle models were segmented from MR images by a semiautomatic method. Six anatomically meaningful landmarks were identified by detecting the maximum-curvature point in a small neighborhood of a manually clicked point on the 3D model. A thin-plate spline was used to transform a randomly selected template shape to each of the remaining shape instances, and point correspondence was established according to Euclidean distance and surface normal. All shapes were spatially aligned by Generalized Procrustes Analysis. The Hotelling T<sup>2</sup> two-sample metric was used to compare ventricle shapes between males and females, and False Discovery Rate estimation was used to correct for multiple comparisons. The results revealed significant differences in the anterior horn of the right ventricle.
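Generalized Procrustes Analysis iterates a pairwise alignment of every shape against an evolving mean shape. The pairwise (ordinary) Procrustes step, which removes translation, uniform scale, and rotation between two corresponded point sets, can be sketched as below; this is the standard SVD solution, not the authors' specific code.

```python
import numpy as np

def procrustes_align(src, dst):
    """Align point set src (n, 3) onto dst (n, 3) by a similarity transform.

    Solves the orthogonal Procrustes problem: find the translation,
    uniform scale s, and rotation R minimizing ||s*R@src_i + t - dst_i||.
    """
    mu_s, mu_d = src.mean(0), dst.mean(0)
    a, b = src - mu_s, dst - mu_d               # center both shapes
    u, sig, vt = np.linalg.svd(b.T @ a)         # cross-covariance SVD
    d = np.ones(a.shape[1])
    d[-1] = np.sign(np.linalg.det(u @ vt))      # guard against reflections
    r = (u * d) @ vt
    s = (sig * d).sum() / (a ** 2).sum()        # optimal uniform scale
    return s * a @ r.T + mu_d
```

In full GPA, each shape is aligned to the current mean, the mean is recomputed, and the two steps repeat until the mean stabilizes.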
Statistical shape analysis of brain structures has gained increasing interest from the neuroimaging community because it can precisely locate shape differences between healthy and pathological structures. The most difficult and crucial problem is establishing shape correspondence among individual 3D shapes. This paper proposes a new algorithm for 3D shape correspondence. A set of landmarks is sampled on a template shape, and an initial correspondence is established between the template and the target shape based on the similarity of locations and normal directions. The landmarks on the target are then refined by an iterative thin-plate spline procedure. The algorithm is simple and fast, and no spherical mapping is needed. We apply our method to the statistical shape analysis of the corpus callosum (CC) in phenylketonuria (PKU), and significant local shape differences between the patients and the controls are found in the most anterior and posterior aspects of the corpus callosum.
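The initial location-plus-normal matching step might look like the following sketch. The cost form and the weight `w` are illustrative assumptions (the abstract does not state an exact weighting); each template landmark is matched to the target point that is both nearby and similarly oriented.

```python
import numpy as np

def initial_correspondence(tpl_pts, tpl_nrm, tgt_pts, tgt_nrm, w=0.5):
    """Match each template landmark to a target point by combined cost.

    tpl_pts, tgt_pts : (n, 3) and (m, 3) point coordinates
    tpl_nrm, tgt_nrm : unit surface normals at those points
    Returns the index on the target of each template landmark's match.
    """
    # pairwise Euclidean distances, shape (n, m)
    d = np.linalg.norm(tpl_pts[:, None, :] - tgt_pts[None, :, :], axis=-1)
    # normal dissimilarity: 1 - cos(angle) between unit normals, in [0, 2]
    n = 1.0 - tpl_nrm @ tgt_nrm.T
    return (d + w * n).argmin(axis=1)
```

The thin-plate spline refinement would then warp the template toward these matches and re-run the matching until it converges.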
A number of studies have documented that autism has a neurobiological basis, but the anatomical extent of these neurobiological abnormalities is largely unknown. In this study, we aimed to analyze highly localized shape abnormalities of the corpus callosum in a homogeneous group of children with autism. Thirty patients with essential autism and twenty-four controls participated in this study. 2D contours of the corpus callosum were extracted from MR images by a semiautomatic segmentation method, and a 3D model was constructed by stacking the contours. Because the resulting 3D model had two openings at its ends, a new conformal parameterization for high-genus surfaces was applied in our shape analysis, mapping each surface onto a planar domain. Surface matching among the individual meshes was achieved by re-triangulating each mesh according to a template surface. Statistical shape analysis was used to compare the 3D shapes point by point between patients with autism and controls. The results revealed significant abnormalities in the anteriormost region and the anterior body of the corpus callosum in the essential autism group.
Recent advances in imaging technologies such as Magnetic Resonance Imaging (MRI), Positron Emission Tomography (PET), and Diffusion Tensor Imaging (DTI) have accelerated brain research in many respects. To better understand the synergy of the many processes involved in normal brain function, integrated modeling and analysis of MRI, PET, and DTI data is highly desirable. Unfortunately, current state-of-the-art computational tools fall short of offering a comprehensive computational framework that is accurate and mathematically rigorous. In this paper we present a framework based on conformal parameterization of the brain, from high-resolution structural MRI data, to a canonical spherical domain. This model allows natural integration of information from co-registered PET and DTI data and lays the foundation for a quantitative analysis of the relationships among these diverse data sets. Consequently, the system can provide a software environment that facilitates statistical detection of abnormal functional brain patterns in patients with a wide range of neurological disorders.
We propose a new implicit-surface polygonization algorithm based on front propagation. The algorithm starts from a simple seed (e.g., a triangle) that can be initialized automatically and repeatedly propagates its boundary contour outward along tangent directions suggested by the underlying volume data. Our algorithm can perform mesh optimization and Laplacian smoothing on the fly and generates meshes of much higher quality than the Marching Cubes algorithm. Experimental results on both real and synthetic volumetric datasets demonstrate the robustness and effectiveness of the new algorithm.