Digital implementation of a neural network for imaging
24 October 2012
Proceedings Volume 8412, Photonics North 2012; 84121H (2012) https://doi.org/10.1117/12.2000727
Event: Photonics North 2012, 2012, Montréal, Canada
Abstract
This paper outlines the design and testing of a digital imaging system that uses an artificial neural network with unsupervised and supervised learning to convert streaming (real-time) image space into parameter space. The primary objective of this work is to investigate the effectiveness of using a neural network to significantly reduce the information density of streaming images, so that objects can be identified by a limited set of primary parameters and the system can act as an enhanced human-machine interface (HMI). Many applications are envisioned, including biomedical imaging, anomaly detection, and assistive devices for the visually impaired. A digital circuit was designed and tested using a Field Programmable Gate Array (FPGA) and an off-the-shelf digital camera. Our results indicate that the networks can be readily trained on limited sets of objects, such as the letters of the alphabet, and can separate such object sets with rotational and positional invariance. The results also show that limited visual fields form with only local connectivity.
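The abstract describes a two-stage pipeline: an unsupervised stage that compresses the image space into a small parameter space, followed by a supervised stage that identifies objects from those parameters. The following is a minimal toy sketch of that general idea (our own construction for illustration, not the authors' FPGA circuit): competitive winner-take-all learning forms prototype features without labels, and a least-squares readout is then trained to classify the resulting low-dimensional codes.

```python
# Hedged sketch of unsupervised-then-supervised dimensionality reduction.
# All function names, sizes, and data here are illustrative assumptions,
# not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

def competitive_learning(X, n_units, epochs=20, lr=0.1):
    """Unsupervised winner-take-all learning: each unit's weight vector
    drifts toward the inputs it wins, forming prototype features."""
    W = rng.normal(size=(n_units, X.shape[1]))
    W /= np.linalg.norm(W, axis=1, keepdims=True)
    for _ in range(epochs):
        for x in X:
            winner = np.argmax(W @ x)          # best-matching unit
            W[winner] += lr * (x - W[winner])  # move it toward the input
            W[winner] /= np.linalg.norm(W[winner])
    return W

def encode(X, W):
    """Reduce each 'image' to a one-hot parameter vector (winning unit)."""
    codes = np.zeros((len(X), len(W)))
    codes[np.arange(len(X)), np.argmax(X @ W.T, axis=1)] = 1.0
    return codes

# Two synthetic 16-pixel "object" classes with small additive noise.
proto = rng.normal(size=(2, 16))
X = np.vstack([p + 0.05 * rng.normal(size=(50, 16)) for p in proto])
y = np.repeat([0, 1], 50)

W = competitive_learning(X, n_units=4)
codes = encode(X, W)

# Supervised stage: least-squares readout from codes to class labels.
readout, *_ = np.linalg.lstsq(codes, np.eye(2)[y], rcond=None)
pred = np.argmax(codes @ readout, axis=1)
accuracy = np.mean(pred == y)
print("training accuracy:", accuracy)
```

Note the information-density reduction the abstract emphasizes: each 16-value input collapses to a single winning-unit index before any supervised decision is made, which is what keeps the readout stage small.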
© (2012) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Richard Wood, Alex McGlashan, Jay Yatulis, Peter Mascher, Ian Bruce, "Digital implementation of a neural network for imaging", Proc. SPIE 8412, Photonics North 2012, 84121H (24 October 2012); doi: 10.1117/12.2000727; https://doi.org/10.1117/12.2000727
Proceedings paper, 8 pages.

