Hyperspectral imaging is a powerful technology that is plagued by high dimensionality. Our study explores a way to combat that hindrance via noncontiguous and contiguous (simpler-to-realize sensor) band grouping for dimensionality reduction. Our approach differs in that it is flexible and follows a well-studied process of visual clustering in high-dimensional spaces. Specifically, we extend the improved visual assessment of cluster tendency and clustering in ordered dissimilarity data unsupervised clustering algorithms to supervised hyperspectral learning. In addition, we propose a way to extract diverse features via the use of different proximity metrics (ways to measure the similarity between bands) and kernel functions. The discovered features are fused with ℓ∞-norm multiple kernel learning. Experiments are conducted on two benchmark data sets, and our results are compared to related work. These experiments indicate that whether band grouping should be contiguous is application specific, but heterogeneous features and kernels usually lead to a performance gain.
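The pipeline described above (contiguous band grouping followed by per-group kernels that are then fused) can be illustrated with a minimal sketch. Here the group boundaries, the RBF kernel choice, and the uniform fusion weights are all illustrative assumptions; the abstract's actual method learns groupings via cluster-tendency analysis and fuses kernels with ℓ∞-norm multiple kernel learning, which a simple convex combination only stands in for.

```python
import numpy as np

def group_bands(cube, group_bounds):
    """Reduce dimensionality by averaging contiguous spectral bands.
    cube: (pixels, bands); group_bounds: list of (start, end) band-index pairs."""
    return np.stack([cube[:, s:e].mean(axis=1) for s, e in group_bounds], axis=1)

def rbf_kernel(X, gamma=1.0):
    """Gram matrix of the Gaussian (RBF) kernel over the rows of X."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq)

# Toy cube: 6 pixels, 8 bands (random data, illustration only).
rng = np.random.default_rng(0)
cube = rng.random((6, 8))

# Two hypothetical contiguous groups: bands 0-3 and 4-7.
grouped = group_bands(cube, [(0, 4), (4, 8)])

# One kernel per grouped feature; a convex combination stands in for MKL fusion.
kernels = [rbf_kernel(grouped[:, [g]]) for g in range(grouped.shape[1])]
weights = np.full(len(kernels), 1.0 / len(kernels))
K_fused = sum(w * K for w, K in zip(weights, kernels))
```

The fused Gram matrix `K_fused` would then feed a kernel classifier (e.g., an SVM); in the actual method the weights are optimized rather than fixed.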
For autonomous vehicles, 3D rotating LiDAR sensors are often critical to the vehicle's ability to sense its environment. Generally, these sensors scan their environment using multiple laser beams to gather information about the range and the intensity of the reflection from an object. LiDAR capabilities have evolved such that some autonomous systems employ multiple rotating LiDARs to gather greater amounts of data about the vehicle's surroundings. For these multi-LiDAR systems, the placement of the sensors determines the density of the combined point cloud. We perform preliminary research on the optimal LiDAR placement strategy for an off-road autonomous vehicle known as the Halo project. We use the Mississippi State University Autonomous Vehicle Simulator (MAVS) to generate large amounts of labeled LiDAR data that can be used to train and evaluate a neural network used to process LiDAR data on the vehicle. The trained networks are evaluated, and their performance metrics are then used to generalize the performance of each sensor pose. Data generation, training, and evaluation were performed iteratively to conduct a parametric analysis of the effectiveness of various LiDAR poses in the multi-LiDAR system. We also describe and evaluate intrinsic and extrinsic calibration methods applied in the multi-LiDAR system. In conclusion, we found that our simulations are an effective way to evaluate the efficacy of various LiDAR placements based on the performance of the neural network used to process the data and the density of the point cloud in areas of interest.
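One of the evaluation criteria above, point-cloud density in areas of interest, can be sketched with a simple axis-aligned region-of-interest count. The box extents, the synthetic clouds, and the function name `roi_density` are illustrative assumptions, not the paper's actual metric or data; in practice the clouds would come from MAVS for each candidate sensor pose.

```python
import numpy as np

def roi_density(points, roi_min, roi_max):
    """Points per unit volume inside an axis-aligned region of interest.
    points: (N, 3) array of x, y, z coordinates."""
    roi_min = np.asarray(roi_min)
    roi_max = np.asarray(roi_max)
    inside = np.all((points >= roi_min) & (points <= roi_max), axis=1)
    volume = np.prod(roi_max - roi_min)
    return inside.sum() / volume

# Toy clouds standing in for two hypothetical sensor placements.
rng = np.random.default_rng(1)
cloud_a = rng.uniform(-10, 10, (5000, 3))  # returns spread over a wide area
cloud_b = rng.uniform(-5, 5, (5000, 3))    # same count, concentrated near the vehicle

# A region of interest just ahead of the vehicle (illustrative extents, meters).
roi = (np.array([-2.0, -2.0, -1.0]), np.array([2.0, 2.0, 1.0]))
density_a = roi_density(cloud_a, *roi)
density_b = roi_density(cloud_b, *roi)
```

Comparing such densities across candidate poses, alongside the neural network's performance metrics, is the kind of parametric analysis the abstract describes.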
The Sensor Analysis and Intelligence Laboratory (SAIL) at Mississippi State University's (MSU's) Center for Advanced Vehicular Systems (CAVS) and the Social, Therapeutic and Robotic Systems Lab (STaRS) in MSU's Computer Science and Engineering department have designed and implemented a modular platform for automated sensor data collection and processing, named the Hydra. The Hydra is an open-source system (all artifacts and code are published to the research community), and it consists of a modular rigid mounting platform (sensors, processors, power supply and conditioning) that utilizes the Picatinny rail (a standardized mounting system originally developed for firearms), a software platform utilizing the Robot Operating System (ROS) for data collection, and design packages (schematics, CAD drawings, etc.). The Hydra system streamlines the assembly of a configurable multi-sensor system. It is motivated by the goal of enabling researchers to quickly select sensors, assemble them as an integrated system, and collect data without having to recreate the Hydra's hardware and software. Prototype results are presented from a recent data collection on a small robot during a SWAT-robot training.