The ability to rapidly identify symmetry and anti-symmetry is an essential attribute of intelligence. Symmetry perception is a central process in human vision and may be key to human 3D visualization. While previous work on neural symmetry perception has concentrated on the neuron as an integrator, here we show how the coincidence-detecting property of the spiking neuron can be used to reveal symmetry density in spatial data. We develop a method for synchronizing symmetry-identifying spiking artificial neural networks to enable layering and feedback in the network. We show a method for building a network capable of identifying symmetry density between sets of data and present a digital logic implementation demonstrating an 8x8 leaky-integrate-and-fire (LIF) symmetry detector in a field-programmable gate array. Our results show that the efficiencies of spiking neural networks can be harnessed to rapidly identify symmetry in spatial data, with applications in image processing, 3D computer vision, and robotics.

In conclusion, we have presented a novel algorithm for finding a scalar field representing the symmetry of points in a multi-dimensional space. We have shown how time synchronization of the input values of spiking neural networks, with an appropriate choice of threshold and spike period, results in the identification of output neurons at points of high symmetry density relative to the network inputs. We have demonstrated an implementation of the symmetry-selective LIF neural network in common hardware, with high-speed, 2.8 MHz identification of symmetry points in an 8x8 Manhattan metric space. Our results show that utilizing only the delay and coincidence-detecting properties of a single layer of neurons in a spiking neural network naturally leads to effective symmetry identification.
A greater understanding of symmetry perception in artificial intelligences will lead to systems with more effective pattern visualization, compression, and goal setting processes.
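The delay-and-coincidence mechanism described above can be illustrated with a minimal software sketch. This is not the paper's 8x8 FPGA design: it is a 1D toy in which each input point sends a unit spike to a candidate-center neuron with a delay equal to its distance from that center, so mirror-symmetric point pairs arrive coincidently and sum in the LIF membrane potential. The function name `lif_symmetry_scores` and the parameters `tau`, `dt`, and `t_max` are illustrative assumptions, and the sketch reports peak membrane potential rather than thresholded spike times for clarity.

```python
import numpy as np

def lif_symmetry_scores(points, centers, tau=2.0, dt=0.1, t_max=20.0):
    """Score each candidate symmetry center by the peak membrane
    potential of a LIF neuron receiving distance-delayed unit spikes
    from every input point. Point pairs mirrored about the center
    arrive at the same time step and sum, so centers of high symmetry
    density reach the highest potential."""
    points = np.asarray(points, dtype=float)
    n_steps = int(round(t_max / dt))
    decay = np.exp(-dt / tau)          # per-step leak factor
    scores = []
    for c in centers:
        # arrival time step of each input spike = distance / dt
        steps = np.round(np.abs(points - c) / dt).astype(int)
        arrivals = np.bincount(steps[steps < n_steps], minlength=n_steps)
        v, peak = 0.0, 0.0
        for n in range(n_steps):
            v = v * decay + arrivals[n]  # leak, then add coincident spikes
            peak = max(peak, v)
        scores.append(peak)
    return scores
```

For the point set {-3, -1, 1, 3}, the candidate center at 0 receives two coincident spike pairs and scores highest, while off-center candidates see staggered arrivals that partially leak away between pairs.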
Convolutional neural networks have become an essential element of spatial deep learning systems. In the prevailing architecture, the convolution operation is performed with Fast Fourier Transforms (FFTs) electronically in GPUs. The parallelism of GPUs provides an efficiency advantage over CPUs; however, both approaches, being electronic, are bound by the speed and power limits of the interconnect delay inside the circuits. Here we present a silicon photonics based architecture for convolutional neural networks that harnesses the phase property of light to perform FFTs efficiently. Our all-optical FFT is based on nested Mach-Zehnder interferometers, directional couplers, and phase shifters, with backend electro-optic modulators for sampling. The FFT delay depends only on the propagation delay of the optical signal through the silicon photonics structures. Designing and analyzing the performance of a convolutional neural network deployed with our on-chip optical FFT, we find dramatic improvements of up to 10^2 when compared to state-of-the-art GPUs when exploring a compounded figure-of-merit given by power per convolution over area. At a high level, this performance is enabled by mapping the desired mathematical function, an FFT, synergistically onto hardware, in this case optical delay interferometers.
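The mathematical identity the photonic chip exploits is the convolution theorem: convolution becomes a pointwise product between two Fourier transforms. A minimal numpy sketch of the same computation (with both transforms done electronically, of course, rather than in optical interferometers) looks like this; the function name `fft_conv2d` is illustrative.

```python
import numpy as np

def fft_conv2d(image, kernel):
    """Circular 2-D convolution via the convolution theorem:
    forward FFT of each operand, pointwise product, inverse FFT.
    In the photonic architecture the FFT stages are performed
    optically; here they are ordinary numpy FFTs."""
    K = np.fft.fft2(kernel, s=image.shape)  # zero-pad kernel to image size
    return np.real(np.fft.ifft2(np.fft.fft2(image) * K))
```

For an N x N image this replaces the O(N^2 k^2) sliding-window sum with two O(N^2 log N) transforms and an O(N^2) product, which is why accelerating the FFT stage pays off for convolution-heavy workloads.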
In the search for low-cost, wide-spectrum imagers it may become necessary to forgo the expense of the focal plane array and revert to a scanning methodology. In many cases the sensor may be too unwieldy to scan physically, and mirrors may have adverse effects on particular frequency bands. In these cases, photonic masks can be devised to modulate the incoming light field with a code over time. This is, in essence, code-division multiplexing of the light field into a lower-dimensional channel. In this paper, a simple method for modulating the light field with masks based on the Archimedes' spiral is presented, and a mathematical model of the two-dimensional mask set is developed.
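As a rough illustration of what one such mask might look like, the sketch below rasterizes a binary transmission mask along the Archimedean spiral r = a*theta. This is not the paper's mask set or its modulation code; the function name `spiral_mask` and the parameters `a` (spiral pitch), `turns`, and `width` (open-slit half-width, in pixels) are assumptions made for the example.

```python
import numpy as np

def spiral_mask(n=64, a=1.5, turns=4, width=1.0):
    """Binary transmission mask on an n x n grid: pixels within
    `width` of the Archimedean spiral r = a * theta are open (1),
    all others are opaque (0)."""
    y, x = np.mgrid[:n, :n] - (n - 1) / 2.0   # pixel offsets from center
    r = np.hypot(x, y)
    theta = np.arctan2(y, x) % (2 * np.pi)    # angle in [0, 2*pi)
    # index of the spiral turn nearest each pixel along its angle ray
    k = np.round((r / a - theta) / (2 * np.pi))
    k = np.clip(k, 0, turns - 1)
    # radial distance from the pixel to that spiral turn
    d = np.abs(r - a * (theta + 2 * np.pi * k))
    return (d <= width).astype(np.uint8)
```

Rotating or stepping such a mask over time would then impose the time-varying code that multiplexes the light field onto a single scanned sensor, in the spirit of the code-division scheme described above.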