14 May 2018

Toward a large-scale multimodal event-based dataset for neuromorphic deep learning applications
Abstract
We discuss our efforts with event-based vision and describe our large-scale, heterogeneous robotic dataset, which will add to the growing number of publicly available event-based datasets. Our dataset comprises over 10 hours of runtime from a mobile robot, equipped with two DAVIS240C event cameras and an Astra depth camera, randomly wandering in an indoor environment while two other independently moving robots wander in the same scene. Vicon ground-truth poses are provided for all three robots. To our knowledge, this is the largest event-based dataset with ground-truthed independently moving entities.
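The DAVIS240C cameras in the dataset asynchronously report per-pixel brightness changes as events. As a minimal sketch of how such an event stream might be represented and windowed in time, here is a hypothetical example; the field names, timestamp units, and layout are assumptions for illustration, not the dataset's actual file format.

```python
# Hypothetical representation of DAVIS240C event data.
# Field names and units are assumptions, not the dataset's actual format.
import numpy as np

# Each event is a tuple (timestamp, x, y, polarity).
event_dtype = np.dtype([
    ("t", np.uint64),   # timestamp (e.g., microseconds)
    ("x", np.uint16),   # pixel column, 0..239 for the 240x180 sensor
    ("y", np.uint16),   # pixel row, 0..179
    ("p", np.int8),     # polarity: +1 brightness increase, -1 decrease
])

def events_in_window(events, t_start, t_end):
    """Return the slice of events with t_start <= t < t_end."""
    mask = (events["t"] >= t_start) & (events["t"] < t_end)
    return events[mask]

# Three synthetic events for demonstration:
events = np.array(
    [(10, 5, 7, 1), (20, 6, 7, -1), (35, 5, 8, 1)],
    dtype=event_dtype,
)
window = events_in_window(events, 10, 30)
print(len(window))  # 2 events fall in [10, 30)
```

Slicing events into fixed time windows like this is one common way to form frame-like inputs for deep networks from an asynchronous event stream.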
© (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Sarah Leung, E. Jared Shamwell, Christopher Maxey, and William D. Nothwang, "Toward a large-scale multimodal event-based dataset for neuromorphic deep learning applications", Proc. SPIE 10639, Micro- and Nanotechnology Sensors, Systems, and Applications X, 106391T (14 May 2018); https://doi.org/10.1117/12.2305504
Proceedings paper, 10 pages + presentation
