Magnetic domain-wall devices, driven by spin-transfer torque or spin-orbit torque, can implement logical operations in a manner that is inherently compact and cascadable. Using circuit simulations with micromagnetics-validated compact models, we evaluate the device requirements for domain-wall logic that achieves low latency, outperforms scaled CMOS logic in energy efficiency, and remains robust to process variations. We further show how the inherent non-volatility of these devices can be leveraged to construct stateful logic circuits that save energy and area relative to their CMOS counterparts, and we propose novel logic architectures that exploit the unique advantages of domain-wall devices.
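The cascadable logic style described above can be illustrated with a minimal behavioral sketch: a current pulse displaces the domain wall along a track, an MTJ over one end reads the stored state, and summing input pulses on one track yields a majority gate. All class names, parameters, and numbers below are hypothetical illustrations, not the micromagnetics-validated compact model the abstract refers to.

```python
# Illustrative behavioral model of a domain-wall (DW) logic device.
# Hypothetical parameters; not the authors' compact model.

class DWLogicDevice:
    def __init__(self, track_length_nm=100.0, dw_velocity_nm_per_ns=50.0):
        self.track_length = track_length_nm
        self.velocity = dw_velocity_nm_per_ns
        self.position = track_length_nm / 2.0  # wall starts mid-track

    def apply_pulse(self, polarity, duration_ns=0.4):
        """A spin-torque current pulse moves the wall; polarity sets direction."""
        self.position += polarity * self.velocity * duration_ns
        # The wall cannot leave the track:
        self.position = max(0.0, min(self.track_length, self.position))

    def read(self):
        """MTJ over the far end of the track: wall past midpoint -> logic 1.
        The state persists with no power applied (non-volatility)."""
        return 1 if self.position > self.track_length / 2 else 0


def majority3(a, b, c):
    """Cascadable 3-input majority gate: sum input pulses on one track."""
    dev = DWLogicDevice()
    for bit in (a, b, c):
        dev.apply_pulse(+1 if bit else -1)
    return dev.read()
```

Because the output is again a stored magnetic state read by an MTJ, a gate's result can drive the next stage's input pulses, which is the cascadability the abstract highlights.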
Neuromorphic computing captures the quintessential neural behaviors of the brain and is a promising candidate for beyond-von-Neumann computer architectures, featuring low power consumption and high parallelism. The neuronal lateral inhibition feature, closely associated with the biological receptive field, is crucial to neuronal competition in the nervous system as well as in its neuromorphic hardware counterparts. The domain-wall magnetic tunnel junction (DW-MTJ) neuron is an emerging spintronic artificial neuron device exhibiting intrinsic lateral inhibition. This work discusses the lateral inhibition mechanism of the DW-MTJ neuron and shows by micromagnetic simulation that lateral inhibition is efficiently enhanced by the Dzyaloshinskii-Moriya interaction (DMI).
Advances in machine intelligence have sparked interest in hardware accelerators to implement these algorithms, yet embedded electronics have stringent power, area, and speed requirements that may limit non-volatile memory (NVM) integration. In this context, the development of fast nanomagnetic neural networks using minimal training data is attractive. Here, we extend an inference-only proposal, using the intrinsic physics of domain-wall MTJ (DW-MTJ) neurons for online learning, to implement fully unsupervised pattern recognition with winner-take-all networks that contain either random or plastic synapses (weights). Meanwhile, a read-out layer is trained in a supervised fashion. We find that our proposed design approaches state-of-the-art success on the task relative to competing memristive neural-network proposals, while eliminating much of the area and energy overhead that would typically be required to build the neuronal layers with CMOS devices.
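The winner-take-all layer mentioned above can be sketched as an integrate-and-fire race: each DW-MTJ neuron integrates a weighted input current as domain-wall displacement, and the first wall to reach the end of its track fires and suppresses the rest. The function name, parameters, and random-weight setup below are illustrative assumptions, not the paper's design.

```python
import numpy as np

# Hypothetical sketch of a winner-take-all (WTA) layer of DW-MTJ neurons.
# Parameters and names are illustrative, not from the paper.

rng = np.random.default_rng(0)

def wta_step(x, weights, threshold=1.0, dt=0.01, t_max=5.0):
    """Integrate-and-fire race: return the index of the winning neuron.

    Each neuron's drive current is a weighted sum of the inputs; the
    domain wall advances in proportion to that drive, and the first
    wall to cross the threshold wins (ties broken by largest drive).
    """
    drive = weights @ x                 # per-neuron DW drive current
    position = np.zeros(len(drive))     # wall displacement per neuron
    t = 0.0
    while t < t_max:
        position += drive * dt          # displacement grows with drive
        crossed = np.nonzero(position >= threshold)[0]
        if crossed.size:
            return int(crossed[np.argmax(drive[crossed])])
        t += dt
    return int(np.argmax(position))     # fallback: furthest wall wins

# Fixed random synapses, as in the random-weight variant of the network;
# magnitudes are used so all drives are non-negative.
weights = np.abs(rng.standard_normal((4, 8)))
x = rng.random(8)
winner = wta_step(x, weights)
```

In the plastic-synapse variant the weights would additionally be updated online; here they stay fixed, and a separate read-out layer (not shown) would be trained in a supervised fashion as the abstract states.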
The development of an efficient artificial neuron is impeded by the use of external CMOS circuits to perform leaking and lateral inhibition. The proposed leaky integrate-and-fire neuron based on the three-terminal magnetic tunnel junction (3T-MTJ) performs integration by pushing its domain wall (DW) with spin-transfer or spin-orbit torque. The leaking capability is achieved by pushing the neurons' DWs in the direction opposite to integration, using either a stray field from a hard ferromagnet or a non-uniform energy landscape resulting from shape or anisotropy variation. Firing is performed by the MTJ stack. Finally, analog lateral inhibition is achieved by repulsive dipolar-field coupling between neighboring neurons: an integrating neuron pushes the DWs of slower neighboring neurons in the direction opposite to integration. Applying this lateral inhibition to a ten-neuron output layer within a neuromorphic crossbar structure enables the identification of handwritten digits with 94% accuracy.
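The three mechanisms above (integration, leaking, and dipolar lateral inhibition) can be combined in one minimal behavioral sketch of a layer of such neurons. The function name and all coefficients below are hypothetical illustrations, not values from the work.

```python
import numpy as np

# Hypothetical behavioral sketch of a layer of 3T-MTJ leaky
# integrate-and-fire (LIF) neurons. Coefficients are illustrative.

def lif_layer_step(positions, inputs, leak=0.05, inhibit=0.02, threshold=1.0):
    """Advance a layer of DW-LIF neurons by one timestep.

    positions : DW position per neuron (0 = reset end of track)
    inputs    : spin-torque drive per neuron (integration)
    leak      : constant backward push (stray field / energy gradient)
    inhibit   : strength of repulsive dipolar coupling between neurons
    """
    positions = np.array(positions, dtype=float)
    # Integration minus a constant leak toward the reset end of the track:
    positions += np.asarray(inputs, dtype=float) - leak
    # Dipolar lateral inhibition: each neuron is pushed backward in
    # proportion to the mean position of the other neurons' walls.
    n = len(positions)
    others_mean = (positions.sum() - positions) / max(n - 1, 1)
    positions -= inhibit * others_mean
    positions = np.clip(positions, 0.0, None)  # wall stays on the track
    fired = positions >= threshold             # MTJ stack detects firing
    positions[fired] = 0.0                     # firing resets the wall
    return positions, fired
```

Driving one neuron harder than its neighbors lets it integrate to threshold first while its dipolar field holds the slower neurons back, which is the competitive behavior the abstract exploits in the ten-neuron output layer.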