Multilayer perceptron (MLP) networks with random hidden layers are efficient at automatic feature extraction and offer significant speed-ups in training. Because they essentially employ a large collection of fixed, random features, they are well suited to form-factor-constrained embedded platforms. In this work, a reconfigurable and scalable architecture is proposed for MLPs with random hidden layers, built around a customized processing block based on the CORDIC algorithm. The proposed architecture also exploits fixed-point arithmetic for area efficiency. The design is validated for classification on two different datasets: an accuracy of ≈90% was observed on MNIST and 75% for gender classification on LFW. The hardware achieves a 299× speed-up over the corresponding software realization.
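The abstract does not detail the CORDIC building block itself. As a rough illustration of the underlying idea, here is a minimal fixed-point CORDIC sketch in Python (the actual design would be RTL): circular-mode rotation computes sine and cosine using only shifts and adds, which is what makes CORDIC attractive for area-constrained hardware. The bit widths and iteration count below are illustrative assumptions, not values from the paper.

```python
import math

FRAC_BITS = 16              # assumed fixed-point format: Q-style, 16 fractional bits
ONE = 1 << FRAC_BITS
N_ITERS = 16                # assumed iteration count; precision ~ 2^-N_ITERS

# Precomputed arctangent table, atan(2^-i), stored in fixed point
ATAN_TABLE = [round(math.atan(2.0 ** -i) * ONE) for i in range(N_ITERS)]

# CORDIC gain K = prod sqrt(1 + 2^-2i); fold 1/K into the initial x
_K = 1.0
for _i in range(N_ITERS):
    _K *= math.sqrt(1.0 + 2.0 ** (-2 * _i))
INV_K = round(ONE / _K)

def cordic_sin_cos(theta):
    """Return (cos theta, sin theta) for |theta| < pi/2 using
    shift-and-add fixed-point CORDIC rotations (no multiplier)."""
    x, y = INV_K, 0
    z = round(theta * ONE)
    for i in range(N_ITERS):
        if z >= 0:
            x, y, z = x - (y >> i), y + (x >> i), z - ATAN_TABLE[i]
        else:
            x, y, z = x + (y >> i), y - (x >> i), z + ATAN_TABLE[i]
    return x / ONE, y / ONE
```

In hardware each iteration is a barrel shift and an add/subtract, so the same datapath can be time-multiplexed or unrolled; hyperbolic-mode variants of the same loop can evaluate sigmoid-like activation functions, which is presumably where a CORDIC block earns its keep in an MLP.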
The advent of nanoscale metal-insulator-metal (MIM) structures with memristive properties has given birth to a new generation
of hardware neural networks based on CMOS/memristor integration (CMHNNs). The advantage of the CMHNN
paradigm compared to a pure CMOS approach lies in the multi-faceted functionality of memristive devices: They can
efficiently store neural network configurations (weights and activation function parameters) via non-volatile, quasi-analog
resistance states. They also provide high-density interconnects between neurons when integrated into 2-D and 3-D crossbar
architectures. In this work, we explore the combination of CMHNN classifiers with manifold learning to reduce the
dimensionality of CMHNN inputs. This allows the size of the CMHNN to be reduced significantly (by ≈ 97%). We tested
the proposed system using the Caltech101 database and were able to achieve classification accuracies within ≈ 1.5% of
those produced by a traditional support vector machine.
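The abstract does not say which manifold-learning method feeds the CMHNN, so as a sketch of the general pipeline, the following pure-NumPy miniature Isomap (k-NN graph, geodesic distances via Floyd–Warshall, then classical MDS) shows how a high-dimensional input can be embedded into a few coordinates before classification. The function name, neighbor count, and toy data are illustrative assumptions.

```python
import numpy as np

def isomap(X, n_components=2, n_neighbors=6):
    """Minimal Isomap sketch: k-NN graph -> geodesic distances -> classical MDS.
    Assumes the neighbor graph is connected."""
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    # Keep edges only to each point's k nearest neighbors
    G = np.full((n, n), np.inf)
    for i in range(n):
        idx = np.argsort(D[i])[1:n_neighbors + 1]
        G[i, idx] = D[i, idx]
    G = np.minimum(G, G.T)                      # symmetrize the graph
    np.fill_diagonal(G, 0.0)
    # Floyd-Warshall: shortest-path (geodesic) distances through the graph
    for k in range(n):
        G = np.minimum(G, G[:, k:k + 1] + G[k:k + 1, :])
    # Classical MDS on the squared geodesic distances
    J = np.eye(n) - 1.0 / n                     # double-centering matrix
    B = -0.5 * J @ (G ** 2) @ J
    w, V = np.linalg.eigh(B)
    top = np.argsort(w)[::-1][:n_components]
    return V[:, top] * np.sqrt(np.maximum(w[top], 0.0))
```

A classifier trained on the embedded coordinates then needs far fewer inputs than one trained on the raw features, which is the source of the ≈97% size reduction the paper reports for the CMHNN.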