We discuss our efforts with event-based vision and describe our large-scale, heterogeneous robotic dataset, which adds to the growing number of publicly available event-based datasets. Our dataset comprises over 10 hours of runtime from a mobile robot equipped with two DAVIS240C event cameras and an Astra depth camera wandering randomly through an indoor environment while two other independently moving robots wander randomly through the same scene. Vicon ground-truth pose is provided for all three robots. To our knowledge, this is the largest event-based dataset with ground-truthed, independently moving entities.
Sarah Leung, E. Jared Shamwell, Christopher Maxey, and William D. Nothwang, "Toward a large-scale multimodal event-based dataset for neuromorphic deep learning applications," Proc. SPIE 10639, Micro- and Nanotechnology Sensors, Systems, and Applications X, 106391T (Presented at SPIE Defense + Security: April 18, 2018; Published: 14 May 2018); https://doi.org/10.1117/12.2305504.