An FPGA-based approach is proposed to build an augmented reality system that aids people affected by the visual disorder known as tunnel vision. The aim is to increase users' awareness of their environment by superimposing on their own view useful information obtained through image processing. Two alternatives have been explored to perform the required image processing: a special-purpose edge-detection algorithm, and a cellular neural network with a suitable template. Their implementations in reconfigurable hardware seek to exploit the performance and flexibility offered by modern FPGAs. This paper describes the hardware implementation of both the Canny algorithm and the cellular neural network, together with the overall system architecture. Implementation results and examples of the system's functionality are presented.
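To make the edge-extraction stage concrete, the gradient step at the heart of the Canny pipeline can be sketched in software. The sketch below is a simplified illustration in Python/NumPy, not the hardware implementation described in the paper: it keeps only Sobel gradients and a magnitude threshold, omitting Canny's Gaussian smoothing, non-maximum suppression, and hysteresis stages. The function name and threshold parameter are illustrative assumptions.

```python
import numpy as np

def sobel_edges(img, threshold=0.25):
    """Simplified edge extraction: Sobel gradients + magnitude threshold.

    A full Canny pipeline adds Gaussian smoothing, non-maximum
    suppression and hysteresis; this sketch keeps only the gradient
    stage to show the per-pixel arithmetic a hardware pipeline streams.
    """
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)  # horizontal Sobel kernel
    ky = kx.T                                  # vertical Sobel kernel
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    # Slide a 3x3 window over the interior pixels, as a line-buffered
    # FPGA convolver would do one pixel per clock.
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            win = img[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = np.sum(win * kx)
            gy[i, j] = np.sum(win * ky)
    mag = np.hypot(gx, gy)                     # gradient magnitude
    return (mag > threshold * mag.max()).astype(np.uint8)
```

For example, an image that is dark on the left half and bright on the right produces a band of edge pixels along the vertical boundary.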
This paper explores different alternatives for implementing a digital CNN model on FPGAs. It presents the development of four discrete-time CNN (DT-CNN) models, each obtained from a different transformation of the original continuous CNN model. Each discrete approach is then simulated and compared with the other approaches and with the continuous model. The objective of this study is to find the approach that best emulates the continuous neuron model at minimum computational cost. Simulations and temporal analyses of the discrete models have been carried out in both feedback and open-loop configurations to verify their functionality. Finally, the architecture of the best model is implemented on an FPGA, with very interesting results.
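As a point of reference for the discretizations discussed, the simplest DT-CNN model, a forward-Euler discretization of the Chua-Yang state equation x' = -x + A*y + B*u + z with output y = 0.5(|x+1| - |x-1|), can be sketched in software as follows. This is a minimal illustration under stated assumptions, not the paper's architecture: the function names, step size, and zero-padded boundary are illustrative, and the template shown is the standard EDGE template from the CNN template library.

```python
import numpy as np

def dtcnn_step(x, u, A, B, z, h=0.1):
    """One forward-Euler step of the Chua-Yang CNN state equation:
       x' = -x + A*y + B*u + z,  y = 0.5*(|x+1| - |x-1|)."""
    y = 0.5 * (np.abs(x + 1) - np.abs(x - 1))  # piecewise-linear output

    def conv3(img, k):
        # 3x3 neighbourhood weighting with zero (fixed) boundary cells
        p = np.pad(img, 1)
        out = np.zeros_like(img)
        H, W = img.shape
        for i in range(H):
            for j in range(W):
                out[i, j] = np.sum(p[i:i + 3, j:j + 3] * k)
        return out

    dx = -x + conv3(y, A) + conv3(u, B) + z
    return x + h * dx                           # explicit Euler update

def dtcnn_run(u, A, B, z, steps=200, h=0.1):
    """Iterate the discrete model from state x(0) = u to (near) settling."""
    x = u.copy()
    for _ in range(steps):
        x = dtcnn_step(x, u, A, B, z, h)
    return 0.5 * (np.abs(x + 1) - np.abs(x - 1))

# Standard EDGE template: no feedback (A = 0), Laplacian-like control term
A_EDGE = np.zeros((3, 3))
B_EDGE = np.array([[-1., -1., -1.],
                   [-1.,  8., -1.],
                   [-1., -1., -1.]])
Z_EDGE = -1.0
```

With this template, a binary input (+1 black, -1 white) settles so that only pixels on the black/white boundary remain black, which is the edge-detection behaviour the system exploits.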