Toward a large-scale multimodal event-based dataset for neuromorphic deep learning applications
14 May 2018
Sarah Leung, E. Jared Shamwell, Christopher Maxey, William D. Nothwang
Abstract
We discuss our efforts with event-based vision and describe our large-scale, heterogeneous robotic dataset, which will add to the growing number of publicly available event-based datasets. The dataset comprises over 10 hours of runtime from a mobile robot, equipped with two DAVIS240C event cameras and an Astra depth camera, wandering randomly in an indoor environment while two other independently moving robots wander randomly in the same scene. Vicon ground-truth pose is provided for all three robots. To our knowledge, this is the largest event-based dataset with ground-truthed, independently moving entities.
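For context on the data modality: unlike a conventional camera, the DAVIS240C reports asynchronous per-pixel brightness-change events rather than full frames. The sketch below is not the authors' released tooling; the Event layout and the accumulate_frame helper are illustrative assumptions showing how such an event stream is commonly binned into frame-like tensors for deep learning.

```python
# Hypothetical sketch of an event-camera data representation.
# The field names and binning scheme are assumptions, not the
# dataset's actual format.
from dataclasses import dataclass
from typing import Iterable
import numpy as np

WIDTH, HEIGHT = 240, 180  # DAVIS240C sensor resolution

@dataclass
class Event:
    x: int          # pixel column, 0..WIDTH-1
    y: int          # pixel row, 0..HEIGHT-1
    t: float        # timestamp in seconds (microsecond precision on-sensor)
    polarity: int   # +1 for a brightness increase, -1 for a decrease

def accumulate_frame(events: Iterable[Event], t0: float, t1: float) -> np.ndarray:
    """Collapse events with timestamps in [t0, t1) into a signed 2-D
    histogram, one common way to feed event data to conventional
    deep networks."""
    frame = np.zeros((HEIGHT, WIDTH), dtype=np.int32)
    for e in events:
        if t0 <= e.t < t1:
            frame[e.y, e.x] += e.polarity
    return frame
```

Binning into fixed time windows is only one of several common representations; alternatives such as per-pixel event counts, time surfaces, or voxel grids trade temporal resolution against compatibility with frame-based architectures.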
© (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Sarah Leung, E. Jared Shamwell, Christopher Maxey, and William D. Nothwang "Toward a large-scale multimodal event-based dataset for neuromorphic deep learning applications", Proc. SPIE 10639, Micro- and Nanotechnology Sensors, Systems, and Applications X, 106391T (14 May 2018); https://doi.org/10.1117/12.2305504
CITATIONS
Cited by 1 scholarly publication and 2 patents.
KEYWORDS
Robots
Cameras
Sensors
Calibration
Visualization
Image resolution
RGB color model