Neuromorphic architectures enable machine learning on faster timescales than conventional processors and require encoding of spike trains from images for computer vision applications. We report low-exposure image representation algorithms that generate multiple short-exposure frames from a given long-exposure image. The frame deconvolution is non-linear in the sense that the difference between adjacent short-exposure frames changes with exposure time; however, the frames retain a structural representation of the original image, such that the image reconstructed from these frames has a Peak Signal-to-Noise Ratio (PSNR) of over 300 and a Structural Similarity Index Metric (SSIM) close to unity. We show that the low-exposure frames generated by our algorithms enable feature extraction for machine learning or deep learning, e.g., classification using convolutional neural networks. The validation accuracy for classification depends on the range of the random subtraction parameter a used in our algorithms to simulate low-exposure frames. When the maximum of a equals the largest allowed change in pixel intensity per time step, the validation accuracy for classifying digits in the Digits dataset is 90±3% based on the first 1 ms frame. The accuracy increases to 97% with only 40% of the 1 ms frames generated for a given exposure time. These results show that machine learning can be extended to low-exposure images.
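The abstract does not give the algorithm in detail; the following is a minimal sketch of one plausible random-subtraction decomposition, assuming that at each time step a per-pixel amount drawn uniformly from [0, a_max] is subtracted from the remaining intensity, the subtracted amounts form the short-exposure frames, and reconstruction is the pixel-wise sum of frames. The function names `decompose` and `psnr`, and the parameter `a_max` standing in for the maximum of a, are illustrative choices, not the paper's code.

```python
import numpy as np

def decompose(image, n_frames, a_max, rng=None):
    """Split a long-exposure image into short-exposure frames.

    Hypothetical variant of the random-subtraction scheme: each step
    subtracts a random per-pixel amount a in [0, a_max] from the
    remaining intensity; the subtracted amounts are the frames.
    """
    rng = np.random.default_rng(rng)
    remaining = image.astype(np.float64).copy()
    frames = []
    for _ in range(n_frames):
        a = rng.uniform(0.0, a_max, size=image.shape)
        frame = np.minimum(remaining, a)  # intensity cannot go below zero
        remaining -= frame
        frames.append(frame)
    frames[-1] += remaining  # fold any leftover intensity into the last frame
    return frames

def psnr(original, reconstructed, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB."""
    mse = np.mean((original - reconstructed) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Toy 8x8 "image": frames sum back to the original by construction,
# so the reconstruction PSNR is very high.
img = np.arange(64, dtype=np.float64).reshape(8, 8)
frames = decompose(img, n_frames=10, a_max=16.0, rng=0)
recon = np.sum(frames, axis=0)
print(psnr(img, recon))
```

Because the leftover intensity is folded into the final frame, this sketch reconstructs the image essentially exactly; the non-linearity the abstract mentions shows up in how the per-frame content depends on how much intensity remains at each step.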