A sparse coding multi-scale precise-timing machine learning algorithm for neuromorphic event-based sensors (14 May 2018)
This paper introduces an unsupervised compact architecture that extracts features and classifies the contents of dynamic scenes from the temporal output of a neuromorphic asynchronous event-based camera. Event-based cameras are clock-less sensors in which each pixel asynchronously reports intensity changes, encoded in time with microsecond precision. While this technology is gaining attention, there is still a lack of methodology for, and understanding of, its temporal properties. We introduce an unsupervised, time-oriented, event-based machine learning algorithm that builds on a hierarchy of temporal descriptors called time surfaces. We show that sparse coding enables very compact yet efficient time-based machine learning, lowering both the computational cost and the memory footprint. By storing only the most representative time surfaces, obtained with clustering techniques, we can represent the temporal dynamics of a visual scene with a finite set of elementary time surfaces while achieving recognition rates similar to those of the uncompressed version. Experiments illustrate the main optimizations and trade-offs to consider when implementing the method for online continuous versus offline learning. We report results on a previously published 36-class character recognition task and a 4-class canonical dynamic card pip task, achieving 100% accuracy on each.
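The core descriptor mentioned above, the time surface, is an exponentially decayed map of the most recent event timestamps in a spatial neighborhood of each incoming event. The sketch below is an illustrative reconstruction, not the authors' implementation; the window radius, the decay constant `tau`, the sensor size, and the toy event stream are all assumptions made for the example:

```python
import numpy as np

def time_surface(last_times, x, y, t, radius=2, tau=50e3):
    """Exponentially decayed time surface around event (x, y) at time t.

    last_times: 2D array holding the most recent event timestamp per
    pixel (microseconds; -inf where no event has occurred yet).
    """
    patch = last_times[y - radius:y + radius + 1,
                       x - radius:x + radius + 1]
    # Recent events decay toward 1, old (or absent) events toward 0.
    return np.exp(-(t - patch) / tau)

# Toy event stream: (x, y, t) tuples with microsecond timestamps.
events = [(4, 4, 0), (5, 4, 1000), (4, 5, 2000), (5, 5, 3000)]

last = np.full((10, 10), -np.inf)  # assumed 10x10 sensor
surfaces = []
for x, y, t in events:
    last[y, x] = t                 # update timestamp memory first
    surfaces.append(time_surface(last, x, y, t).ravel())

surfaces = np.array(surfaces)      # one 5x5 surface per event, flattened
```

A compact dictionary of elementary time surfaces, as described in the abstract, could then be obtained by clustering the rows of `surfaces` (e.g. with k-means) and keeping only the cluster centers as prototypes.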
© (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Germain Haessig, Ryad Benosman, "A sparse coding multi-scale precise-timing machine learning algorithm for neuromorphic event-based sensors", Proc. SPIE 10639, Micro- and Nanotechnology Sensors, Systems, and Applications X, 106391U (14 May 2018); https://doi.org/10.1117/12.2305933
