Proc. SPIE 11415, Autonomous Systems: Sensors, Processing, and Security for Vehicles and Infrastructure 2020, 1141501 (2 June 2020); doi: 10.1117/12.2572759
This PDF file contains the front matter associated with SPIE Proceedings Volume 11415, including the Title Page, Copyright Information, and Table of Contents.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks.
You are receiving this notice because your organization may not have SPIE eBooks access.*
*Shibboleth/OpenAthens users: please sign in to access your institution's subscriptions.
To obtain this item, you may purchase the complete book in print format on SPIE.org.
Cybersecurity and Communication Issues with Networked Autonomous Systems I
Proc. SPIE 11415, Autonomous Systems: Sensors, Processing, and Security for Vehicles and Infrastructure 2020, 1141503 (23 April 2020); doi: 10.1117/12.2558717
The sensing range of Brillouin optical time-domain analysis (BOTDA) is typically restricted to tens of kilometers by fiber attenuation, pump depletion, and unwanted nonlinear effects. This limits the use of BOTDA in applications such as oil and gas pipeline monitoring, which require sensing ranges of up to hundreds of kilometers. In this work, a Raman amplification technique and a differential pulse-width pair (DPP) technique are employed to achieve high spatial resolution over a long measurement distance. The Raman amplification technique involves three pump configurations: forward, backward, and bi-directional, relative to the Brillouin pump pulses. Variations in pump and probe power, Raman propagation direction, and injection location are explored to allow full control over signal amplification in any particular section of the sensing fiber. The signal-to-noise ratio (SNR) at a given location along the fiber can thus be enhanced to provide more useful localized information. In addition, a novel fitting algorithm based on artificial neural networks (ANNs) is proposed to estimate the Brillouin frequency shift from the Brillouin scattering spectrum with high accuracy. A sensing range of 100 km with a spatial resolution of 1 m is demonstrated experimentally using the ANN-based fitting algorithm.
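The paper's ANN fit replaces classical curve-fitting of the Brillouin gain spectrum. As a point of reference, the sketch below (not the authors' method) generates a synthetic Lorentzian-shaped spectrum and estimates the Brillouin frequency shift by parabolic interpolation around the peak sample; the 10.85 GHz shift and 30 MHz linewidth are illustrative values only.

```python
import numpy as np

def lorentzian(f, f_b, gamma, g0):
    """Brillouin gain spectrum model: Lorentzian centered at the
    Brillouin frequency shift f_b with FWHM gamma and peak gain g0."""
    return g0 / (1.0 + ((f - f_b) / (gamma / 2.0)) ** 2)

def estimate_bfs(freqs, gain):
    """Estimate the Brillouin frequency shift by parabolic interpolation
    around the spectrum's peak sample (a classical baseline for the
    ANN-based fit described in the abstract)."""
    i = int(np.argmax(gain))
    i = min(max(i, 1), len(gain) - 2)          # keep a 3-point stencil
    y0, y1, y2 = gain[i - 1], gain[i], gain[i + 1]
    denom = y0 - 2.0 * y1 + y2
    offset = 0.0 if denom == 0 else 0.5 * (y0 - y2) / denom
    df = freqs[1] - freqs[0]
    return freqs[i] + offset * df

# Synthetic BGS: 10.85 GHz shift, 30 MHz linewidth, mild noise.
rng = np.random.default_rng(0)
freqs = np.arange(10.70, 11.00, 0.001)         # GHz grid, 1 MHz step
gain = lorentzian(freqs, 10.85, 0.030, 1.0)
gain = gain + rng.normal(0.0, 0.01, gain.size)
bfs = estimate_bfs(freqs, gain)
print(round(bfs, 3))
```

An ANN fit aims to beat this kind of estimator at low SNR, where the peak sample becomes noise-dominated.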
Proc. SPIE 11415, Autonomous Systems: Sensors, Processing, and Security for Vehicles and Infrastructure 2020, 1141504 (18 May 2020); doi: 10.1117/12.2558830
Conflict analysis using surrogate safety measures (SSMs) has become an efficient way to investigate safety issues involving autonomous vehicles. Previous studies largely rely on video taken from high elevations; however, this involves substantial effort, high maintenance costs, and even security restrictions. This study applies a simulation-based model for surrogate safety analysis of pedestrian-vehicle conflicts on urban roads. We show how an automated vehicle system that uses a radar and a camera as inputs to a Pedestrian Protection System (PPS) can support surrogate safety analysis under uncertain weather conditions. Different scenarios for surrogate safety measures were built and analyzed, and the detection and tracking systems for vehicle and pedestrian trajectories are modeled. Three SSMs, namely Pedestrian Classification Time to Collision (PCT), Total Braking Time to Collision (TBT), and Total Minimum Time to Collision (TMT), are employed to represent how spatially and temporally close a pedestrian-vehicle conflict is to a collision. The simulation is built in PreScan, which reproduces the test scenarios accurately and incorporates vehicle control and logic. The results highlight the exposure of pedestrians to traffic conflicts both inside and outside crosswalks. The findings demonstrate that simulation-based models can support safety analysis of autonomous vehicles on urban roads in an accurate yet cost-effective way.
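The PCT, TBT, and TMT measures in the abstract are PreScan-specific variants of the basic time-to-collision idea. A minimal sketch of that underlying quantity (the paper's exact definitions are not reproduced here):

```python
def time_to_collision(gap_m, vehicle_speed_mps, pedestrian_closing_mps=0.0):
    """1-D time-to-collision: remaining gap divided by closing speed.
    Returns float('inf') when the gap is not closing, i.e. no conflict."""
    closing = vehicle_speed_mps + pedestrian_closing_mps
    if closing <= 0:
        return float('inf')
    return gap_m / closing

# Vehicle 20 m from a crossing pedestrian, closing at 10 m/s -> 2 s TTC.
print(time_to_collision(20.0, 10.0))  # 2.0
```

Lower TTC values indicate conflicts that are spatially and temporally closer to a collision, which is what the three SSMs quantify at different stages of the PPS pipeline.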
Proc. SPIE 11415, Autonomous Systems: Sensors, Processing, and Security for Vehicles and Infrastructure 2020, 1141505 (19 May 2020); doi: 10.1117/12.2558943
We propose a modification to the popular pure pursuit algorithm for path following on car-like platforms in which multiple look-ahead points along the target path are aggregated to form a spatially filtered steering command. Our approach enables complex following behaviors while avoiding problems such as the oscillatory behavior observed in approaches with a single look-ahead point.
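A sketch of the aggregation idea, assuming the classic pure pursuit curvature command kappa = 2*sin(alpha)/Ld at each look-ahead point and a simple weighted blend (the authors' exact filter is not specified in the abstract):

```python
import numpy as np

def pure_pursuit_multi(path_xy, lookaheads, weights=None):
    """Spatially filtered pure pursuit: evaluate the single-point pure
    pursuit curvature at several look-ahead distances along the path
    (vehicle frame, x forward) and blend them into one command."""
    path_xy = np.asarray(path_xy, dtype=float)
    if weights is None:
        weights = np.ones(len(lookaheads))
    weights = np.asarray(weights, dtype=float)
    dists = np.linalg.norm(path_xy, axis=1)    # range to each path point
    kappas = []
    for ld in lookaheads:
        i = int(np.argmin(np.abs(dists - ld))) # nearest path point at range ld
        x, y = path_xy[i]
        d = np.hypot(x, y)
        kappas.append(2.0 * y / (d * d))       # 2*sin(alpha)/Ld, sin(alpha)=y/d
    return float(np.dot(weights, kappas) / weights.sum())

# A straight path dead ahead yields zero commanded curvature.
straight = [(s, 0.0) for s in np.linspace(0.5, 10.0, 20)]
print(pure_pursuit_multi(straight, lookaheads=[2.0, 4.0, 6.0]))
```

Averaging over several look-ahead distances damps the abrupt curvature changes that a single near look-ahead point can produce, which is the oscillation the abstract refers to.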
Proc. SPIE 11415, Autonomous Systems: Sensors, Processing, and Security for Vehicles and Infrastructure 2020, 1141506 (23 April 2020); doi: 10.1117/12.2559489
Intelligent transportation systems (ITS) are being deployed globally to support vehicular safety innovation using wireless communication. The ITS paradigm of vehicle-to-everything (V2X) communication enables vehicles equipped with an on-board unit (OBU) to communicate with a roadside unit (RSU) infrastructure to provide safe intersection movements between similarly equipped vehicles, pedestrians, and animal life. However, the current paradigm relies heavily on the availability of a Global Positioning System (GPS) for ensuring road user safety. In a doomsday scenario where GPS becomes unavailable, collision avoidance services provided through V2X may be rendered unavailable as well. The proposed solution is a secure and reliable method for V2X nodes to request positioning information from RSUs. The advantages are two-fold: V2X nodes continue to receive positioning information for safety applications when GPS is unavailable, and positioning information could be improved during normal GPS operation.
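One plausible way an OBU could derive a position from RSU responses is range-based trilateration against RSUs at surveyed coordinates. The sketch below is an illustrative fallback computation under that assumption, not the paper's V2X protocol: it linearizes the range equations by subtracting the first one and solves the resulting system by least squares.

```python
import numpy as np

def trilaterate(rsu_xy, ranges):
    """Estimate a 2-D position from ranges to RSUs at known coordinates.
    (x-xi)^2 + (y-yi)^2 = ri^2; subtracting the i=0 equation from the
    rest gives the linear system A v = b in the unknown position v."""
    p = np.asarray(rsu_xy, dtype=float)
    r = np.asarray(ranges, dtype=float)
    A = 2.0 * (p[1:] - p[0])
    b = (r[0] ** 2 - r[1:] ** 2) + np.sum(p[1:] ** 2 - p[0] ** 2, axis=1)
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v

# Three RSUs at surveyed corners; the true OBU position is (30, 40).
rsus = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
truth = np.array([30.0, 40.0])
meas = [float(np.hypot(*(truth - np.array(s)))) for s in rsus]
print(trilaterate(rsus, meas))
```

With three or more non-collinear RSUs the system is well-posed; extra RSUs over-determine it and average out ranging noise, which is how such a scheme could also refine positions during normal GPS operation.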
Applications for Autonomous Systems in National Security and Emergency Response
Proc. SPIE 11415, Autonomous Systems: Sensors, Processing, and Security for Vehicles and Infrastructure 2020, 1141507 (19 May 2020); doi: 10.1117/12.2558989
We have collected an extensive winter autonomous driving data set consisting of over 4 TB of data collected between November 2019 and March 2020. Our base configuration features two 16-channel LiDARs, a forward-facing color camera, a wide field-of-view NIR camera, and an ADAS LWIR camera. RTK-corrected GNSS positioning and IMU data are also available. Portions include data from four different HD LiDARs operating in a variety of winter driving conditions. The set highlights some of the unique aspects of operating in northern climates, including landmarks that change with snow accumulation, wildlife, and snowmobiles operating on local roadways.
Proc. SPIE 11415, Autonomous Systems: Sensors, Processing, and Security for Vehicles and Infrastructure 2020, 1141508 (23 April 2020); doi: 10.1117/12.2560798
Leading dangerous or rescue missions in places where an earthquake or tsunami has destroyed buildings, roads, and network infrastructure remains a pressing problem, especially given the need to save people in the shortest possible time and the fact that actions must often be performed in a distributed way across several regions affected by the disaster. To this end, robots and networked robot systems are a possible way to support human action, speeding up the rescue process and reducing the risk to human operators. This work considers a networking protocol to support the coordination of mobile robots with different capabilities, aiming to reduce the time needed to search the space, find the people to rescue, and help them move out of the destroyed region. The protocol supports multi-task allocation, taking into account the distribution of tasks and robots in space and seeking to reduce the overall cost in terms of rescue time and the control overhead of team coordination.
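As a point of comparison for the allocation problem the protocol addresses, here is a minimal greedy baseline (not the paper's protocol): repeatedly assign the globally closest free robot/task pair, which is a common low-overhead heuristic for reducing total travel time.

```python
import numpy as np

def greedy_allocate(robot_xy, task_xy):
    """Greedy multi-robot task allocation: assign the closest remaining
    robot/task pair until every task is taken (or robots run out).
    Returns a dict mapping task index -> robot index."""
    robots = np.asarray(robot_xy, dtype=float)
    tasks = np.asarray(task_xy, dtype=float)
    free_r = set(range(len(robots)))
    free_t = set(range(len(tasks)))
    assignment = {}
    while free_t and free_r:
        best = None
        for ri in free_r:
            for ti in free_t:
                d = np.linalg.norm(robots[ri] - tasks[ti])
                if best is None or d < best[0]:
                    best = (d, ri, ti)
        _, ri, ti = best
        assignment[ti] = ri
        free_r.remove(ri)
        free_t.remove(ti)
    return assignment

print(greedy_allocate([(0, 0), (10, 0)], [(1, 0), (9, 0)]))
```

A distributed protocol improves on this centralized loop by letting robots negotiate assignments over the network, trading some allocation optimality for lower control overhead, which is the cost balance the abstract describes.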
Proc. SPIE 11415, Autonomous Systems: Sensors, Processing, and Security for Vehicles and Infrastructure 2020, 1141509 (23 April 2020); doi: 10.1117/12.2561083
Autonomous vehicles are complex robotic and artificial intelligence systems that work together to achieve safe operation in unstructured environments. The objective of this work is to provide a foundation for developing more advanced algorithms for off-road autonomy. The project explores point cloud data captured by lidar sensors and the processing needed to restore some of the geometric information lost during sensor sampling. Because ground truth values are needed for quantitative comparison, MAVS was leveraged to generate a large off-road dataset across a variety of ecosystems. The results demonstrate data capture from the sensor suite and successful reconstruction of the selected geometric information. Using this geometric information, the point cloud data is segmented more accurately by the SqueezeSeg network.
Joint Session with Conferences 11415 and 11425: Autonomous Ground Vehicles: Sensing, Processing, and Safety
Proc. SPIE 11415, Autonomous Systems: Sensors, Processing, and Security for Vehicles and Infrastructure 2020, 114150B (23 April 2020); doi: 10.1117/12.2566765
Face recognition of vehicle occupants through windshields in unconstrained environments poses a number of unique challenges, including glare, poor illumination, driver pose, and motion blur. In this paper, we further develop the hardware and software components of a custom vehicle imaging system to better overcome these challenges. After building out a physical prototype system that performs High Dynamic Range (HDR) imaging, we collect a small dataset of through-windshield image captures of known drivers. We then reformulate the classical Mertens-Kautz-Van Reeth HDR fusion algorithm as a pre-initialized neural network, which we name the Mertens Unrolled Network (MU-Net), for the purpose of fine-tuning the HDR output of through-windshield images. Reconstructed faces from this novel HDR method are then evaluated and compared against other traditional and experimental HDR methods in a pre-trained state-of-the-art (SOTA) facial recognition pipeline, verifying the efficacy of our approach.
Proc. SPIE 11415, Autonomous Systems: Sensors, Processing, and Security for Vehicles and Infrastructure 2020, 114150C (23 April 2020); doi: 10.1117/12.2558478
Unsupervised depth estimation methods that allow training on monocular image data have emerged as a promising tool for monocular vision-based vehicles that are not equipped with a stereo camera or a LIDAR. Predicted depths from single images could be used, for example, to avoid obstacles in autonomous navigation, or to improve in-vehicle change detection. We employ a self-supervised depth estimation network to predict depth in monocular image sequences acquired by a military vehicle and a UGV. We trained the models on the KITTI dataset and fine-tuned them on monocular image data from each vehicle. The results illustrate that the estimated depths are visually plausible for on-road as well as off-road environments. We also provide an example application by using the predicted depths to compute stixels, a medium-level representation of traffic scenes for self-driving vehicles.
Proc. SPIE 11415, Autonomous Systems: Sensors, Processing, and Security for Vehicles and Infrastructure 2020, 114150D (23 April 2020); doi: 10.1117/12.2558590
Lidars are gaining popularity in the automotive domain, especially in autonomous driving systems. The development of multi-channel, monolithic Lidars can reduce system cost and size for better integration into automotive and robotic components. Since Lidars work on the time-of-flight principle, an important function of the Lidar system is to precisely detect the time of arrival of the return pulse. This paper discusses a method to detect the peak location in the Lidar return signal using differentiation. As a single-chip solution, the proposed method can potentially reduce the footprint of the analog signal chain by 30%, with a performance trade-off for signals with low SNR. The distance measurement accuracy of this method is compared against standard methods that use high-resolution, high-speed ADCs.
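The paper evaluates an analog differentiator, but the principle is easy to show digitally: the pulse peak sits where the signal's derivative crosses zero from positive to negative. A minimal sketch with an assumed Gaussian return pulse and an illustrative threshold:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def peak_range(samples, fs_hz, threshold=0.5):
    """Locate the Lidar return pulse via differentiation: the peak is
    where the first difference crosses zero from + to - while the signal
    exceeds a fraction of its maximum. Range = c * t / 2 (round trip)."""
    x = np.asarray(samples, dtype=float)
    d = np.diff(x)
    for i in range(1, len(d)):
        if d[i - 1] > 0 >= d[i] and x[i] >= threshold * x.max():
            t = i / fs_hz
            return C * t / 2.0
    return None  # no pulse found

# Gaussian return pulse centered at 200 ns, sampled at 1 GS/s.
fs = 1e9
t = np.arange(0, 400e-9, 1 / fs)
pulse = np.exp(-0.5 * ((t - 200e-9) / 10e-9) ** 2)
print(peak_range(pulse, fs))  # about 29.98 m
```

The low-SNR trade-off in the abstract shows up here too: differentiation amplifies high-frequency noise, producing spurious zero crossings, which is why the threshold gate is needed.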
Proc. SPIE 11415, Autonomous Systems: Sensors, Processing, and Security for Vehicles and Infrastructure 2020, 114150E (19 May 2020); doi: 10.1117/12.2558855
We present an open-source 3-dimensional, 6 DOF (degrees of freedom) point cloud based mapping package and online map manager for ROS (Robot Operating System), built with PCL (Point Cloud Library), for autonomous navigation and localization in any environment. This system will be of interest to the robotics community, as many mapping systems are either not sufficiently configured for 3D mapping in 6 DOF or are proprietary. The goal is to produce maps that contain enough detail to be used for localization as well as path planning, without sacrificing memory or speed. The mapping method consists of several key features described in detail: a novel method of initial localization and odometry estimation, a mapping registration method, a map storage method, and a map manager that handles map recall based on robot pose. Using a 3D LiDAR scanner and an IMU, we are able to map structured environments; with the addition of another odometry source, we can also map unstructured environments with fewer features. The result of this work is an open-source, extendable, environment-independent mapping scheme, well defined for all 6 DOF. The system has been verified in a simulation environment as well as in real-world experiments.
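Pose-based map recall of the kind the abstract describes is commonly implemented by tiling the map and loading only tiles near the robot. The class below is a hypothetical sketch of that pattern (the package's actual API and storage format are not reproduced here):

```python
from collections import defaultdict

class TileMapManager:
    """Store map points in square tiles keyed by grid cell; recall only
    the tiles near the current robot pose, bounding memory use."""

    def __init__(self, tile_size=10.0):
        self.tile_size = tile_size
        self.tiles = defaultdict(list)

    def _key(self, x, y):
        s = self.tile_size
        return (int(x // s), int(y // s))

    def insert(self, points):
        """Add (x, y, z) points, bucketing each into its tile."""
        for x, y, z in points:
            self.tiles[self._key(x, y)].append((x, y, z))

    def recall(self, pose_xy, radius_tiles=1):
        """Return all points in tiles within radius_tiles of the pose."""
        cx, cy = self._key(*pose_xy)
        out = []
        for dx in range(-radius_tiles, radius_tiles + 1):
            for dy in range(-radius_tiles, radius_tiles + 1):
                out.extend(self.tiles.get((cx + dx, cy + dy), []))
        return out

m = TileMapManager(tile_size=10.0)
m.insert([(1.0, 1.0, 0.0), (55.0, 55.0, 0.0)])
print(len(m.recall((2.0, 2.0))))  # 1: only the nearby tile is returned
```

Keeping only nearby tiles in memory is what lets a map serve both localization and path planning without sacrificing memory or speed, as the abstract's goal states.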
Proc. SPIE 11415, Autonomous Systems: Sensors, Processing, and Security for Vehicles and Infrastructure 2020, 114150F (19 May 2020); doi: 10.1117/12.2559006
We describe contributions made to the Unreal game engine community supporting our development of algorithms that enable navigation in off-road and unstructured environments. Simulating autonomous ground vehicles traversing complex 3D terrain requires accurate modeling of vehicle-terrain interaction. Widely used community game engines model this interaction well but do not include the sensor models and other features found in open-source tools. We have extended the Unreal engine with models for LiDAR, IMU, and GNSS systems that are compatible with the Robot Operating System middleware. We have also added features that allow automated testing of off-road navigation algorithms.
Proc. SPIE 11415, Autonomous Systems: Sensors, Processing, and Security for Vehicles and Infrastructure 2020, 114150G (13 May 2020); doi: 10.1117/12.2558960
Terrain sensing is an important aspect of navigation for autonomous ground vehicles (AGVs) in off-road conditions. Modern AGVs carry several sensors that can be used to detect terrain. In this paper, we implement terrain classification using a fusion of visual data from a camera and vibration data from an inertial measurement unit (IMU). The popular supervised learning technique of support vector machines (SVMs) is used due to its high accuracy and relatively short execution time. An image is first captured, and the robot then traverses the region covered by the image to record vibration data. Linear acceleration vectors perpendicular to the terrain are extracted from the IMU, and statistical features are calculated from them to form the vibration data. The images are manually labelled and aligned with the vibration data to create a fused feature vector and train the SVM. Our method has been tested on previously unseen field data, achieving an average accuracy of 90%.
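The fused feature vector can be sketched as follows; the particular statistics and the image descriptor (a histogram here) are assumptions for illustration, since the abstract does not enumerate the paper's exact feature set.

```python
import numpy as np

def vibration_features(accel_z):
    """Statistical features from terrain-normal acceleration: mean, std,
    RMS, peak-to-peak, and mean absolute first difference (roughness)."""
    a = np.asarray(accel_z, dtype=float)
    return np.array([
        a.mean(),
        a.std(),
        np.sqrt(np.mean(a ** 2)),
        a.max() - a.min(),
        np.abs(np.diff(a)).mean(),
    ])

def fused_vector(image_descriptor, accel_z):
    """Concatenate an image descriptor (e.g. a color histogram) with the
    vibration statistics to form the SVM input vector."""
    return np.concatenate([np.asarray(image_descriptor, dtype=float),
                           vibration_features(accel_z)])

# Rough terrain produces larger vibration statistics than smooth terrain.
smooth = np.zeros(100)
rough = np.sin(np.linspace(0, 40, 100))
print(vibration_features(rough)[1] > vibration_features(smooth)[1])  # True
```

Each labelled image/traversal pair yields one such vector, and the collection of vectors trains the SVM classifier described in the abstract.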
Proc. SPIE 11415, Autonomous Systems: Sensors, Processing, and Security for Vehicles and Infrastructure 2020, 114150H (18 May 2020); doi: 10.1117/12.2558966
We compare two algorithms, Histogram of Oriented Gradients (HOG) with a linear Support Vector Machine (SVM) and You Only Look Once (YOLO), on the task of sign detection and classification using imagery from the LISA dataset. Comparisons are made in terms of execution time, accuracy, and readiness for acceleration on GPU or FPGA hardware. We find that neural network-based approaches like YOLO have superior accuracy but run slower on general-purpose CPUs without acceleration; the SVM-based approach, while less accurate, is faster without acceleration.