Recently, there has been impressive progress in the field of artificial intelligence. A striking example is AlphaGo, an algorithm developed by Google DeepMind that defeated the world champion Lee Sedol at the game of Go. In terms of power consumption, however, the brain remains the absolute winner, by four orders of magnitude. Indeed, today's brain-inspired algorithms run on sequential computers whose architecture is very different from that of the brain. If we want to build smart chips capable of cognitive tasks at low power consumption, we need to fabricate huge parallel networks of artificial synapses and neurons on silicon, bringing memory close to processing. The aim of the presented work is to deliver a new breed of bio-inspired magnetic devices for pattern recognition. Their functionality is based on the magnetic reversal properties of an artificial spin ice in a Kagome geometry, in which magnetic switching occurs by avalanches.
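The avalanche mechanism can be illustrated with a deliberately simplified sketch: a 1-D chain of nanomagnets in which reversing one element lowers its neighbours' switching barriers, so a single weak nucleation site triggers a cascade. This is a toy model, not the Kagome spin-ice system itself, and all parameter values below are assumed for illustration.

```python
import random

# Toy 1-D sketch of avalanche-mediated magnetic reversal (illustrative only,
# not the Kagome spin-ice model; all parameters are assumed).
random.seed(0)
N = 50
# Disordered switching fields; element 0 is a deliberately weak nucleation site.
switch_field = [1.1 + random.uniform(0.0, 0.1) for _ in range(N)]
switch_field[0] = 0.9
spins = [1] * N          # all elements initially magnetised "up"
H = 1.05                 # magnitude of the applied reversal field (assumed)
COUPLING = 0.3           # barrier reduction from a reversed neighbour (assumed)

changed = True
while changed:           # sweep until no further element can reverse
    changed = False
    for i in range(N):
        if spins[i] == 1:
            helped = any(spins[j] == -1 for j in (i - 1, i + 1) if 0 <= j < N)
            effective = switch_field[i] - (COUPLING if helped else 0.0)
            if H >= effective:
                spins[i] = -1
                changed = True

avalanche_size = spins.count(-1)  # the single weak site triggers a full cascade
```

Without the weak site, no element would switch at this field; with it, the whole chain reverses in one avalanche, which is the qualitative behaviour exploited for pattern recognition.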
Spin-torque magnetic memory (ST-MRAM) is currently under intense academic and industrial development, as it features non-volatility, high write and read speeds, and high endurance. However, one of its great challenges is the probabilistic nature of programming magnetic tunnel junctions, which imposes significant circuit or energy overhead in conventional ST-MRAM applications. In this work, we show that in unconventional computing applications, this drawback can actually be turned into an advantage. First, we show that conventional magnetic tunnel junctions can be reinterpreted as stochastic “synapses” that can serve as the basic element of low-energy learning systems. System-level simulations on a vehicle-counting task highlight the potential of the technology for learning systems, and we investigate in detail the impact of magnetic tunnel junction imperfections. Second, we introduce how intentionally superparamagnetic tunnel junctions can form the basis of low-energy, fundamentally stochastic computing schemes that draw part of their energy from thermal noise. We give two examples built around the concepts of synchronization and Bayesian inference. These results suggest that the stochastic effects of spintronic devices, traditionally interpreted by electrical engineers as a drawback, can be reinvented as an opportunity for low-energy circuit design.
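The notion of a stochastic “synapse” can be sketched in a few lines: each programming pulse moves the junction to its target state only with some probability, so repeated cheap pulses replace a single guaranteed (and energy-costly) write. This is a minimal behavioural sketch with an assumed switching probability, not the device parameters from the work.

```python
import random

def stochastic_program(state, target, p_switch=0.1):
    """One programming pulse: the junction switches toward the target
    state only with probability p_switch (assumed illustrative value)."""
    if state != target and random.random() < p_switch:
        return target
    return state

random.seed(0)
state, pulses = 0, 0
while state == 0:        # pulse until the junction finally switches
    state = stochastic_program(state, 1)
    pulses += 1
# On average ~1/p_switch = 10 pulses are needed; because each pulse can be
# made very cheap, the expected programming energy can still be low.
```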
The brain displays many features typical of non-linear dynamical networks, such as synchronization or chaotic behaviour. These observations have inspired a whole class of models that harness the power of complex non-linear dynamical networks for computing. In this framework, neurons are modeled as non-linear oscillators, and synapses as the coupling between oscillators. These abstract models are very good at processing waveforms for pattern recognition or at generating the precise time sequences useful for robotic motion. However, there are very few hardware implementations of these systems, because they require large numbers of interacting non-linear oscillators, which are difficult to realize with conventional technology. In this talk, I will show that coupled spin-torque nano-oscillators are very promising for realizing cognitive computing at the nanometer and nanosecond scale, and will present our first results in this direction.
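The core primitive of oscillator-based computing, synchronization of coupled non-linear oscillators, can be sketched with a generic Kuramoto model. This is not a physical model of a spin-torque nano-oscillator; the coupling strength, time step, and frequency spread below are all assumed.

```python
import math
import random

def kuramoto_step(phases, omegas, K, dt=0.01):
    """Advance all oscillator phases by one Euler step of the Kuramoto model."""
    n = len(phases)
    out = []
    for i, th in enumerate(phases):
        coupling = (K / n) * sum(math.sin(tj - th) for tj in phases)
        out.append(th + (omegas[i] + coupling) * dt)
    return out

def order_parameter(phases):
    """Magnitude of the mean phase vector: 1.0 means perfect synchrony."""
    n = len(phases)
    re = sum(math.cos(t) for t in phases) / n
    im = sum(math.sin(t) for t in phases) / n
    return math.hypot(re, im)

random.seed(1)
n = 20
phases = [random.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
omegas = [1.0 + random.gauss(0.0, 0.05) for _ in range(n)]  # similar frequencies
r_before = order_parameter(phases)
for _ in range(5000):
    phases = kuramoto_step(phases, omegas, K=2.0)
r_after = order_parameter(phases)  # strong coupling pulls the phases together
```

Starting from random phases, the ensemble locks into near-perfect synchrony; computing schemes then encode information in which oscillators synchronize with which.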
The cascading of logic gates is one of the primary challenges for spintronic computing, as there is a need to dynamically
create magnetic fields. Spin-diode logic provides this essential cascading, as the current through each spin-diode is
modulated by a magnetic field created by the current through other spin-diodes. This logic family can potentially be
applied to any device exhibiting strong positive or negative magnetoresistance, and allows for the creation of circuits
with exceptionally high performance. These novel circuit structures provide an opportunity for spintronics to replace
CMOS in general-purpose computing.
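The cascading principle described above can be captured in a small behavioural sketch: the field generated by input currents switches a spin-diode between a high- and a low-resistance state, so its output current computes a logic function of its inputs and can in turn drive the next stage. All device parameters below are assumed illustrative values, not measured ones.

```python
# Behavioural sketch of spin-diode logic (illustrative, assumed parameters).
V = 1.0                     # supply voltage (arbitrary units)
R_HIGH, R_LOW = 10.0, 1.0   # assumed magnetoresistive states
ALPHA = 1.0                 # field generated per unit of input current (assumed)
H_TH = 1.5                  # assumed field threshold for switching

def spin_diode_current(input_currents):
    """Output current of one spin-diode given the currents of its neighbours.
    The net field from those currents selects the resistance state."""
    field = ALPHA * sum(input_currents)
    resistance = R_LOW if field > H_TH else R_HIGH
    return V / resistance

def logic_and(a, b):
    # Logic 1 is encoded as a 1.0-unit drive current; the output current of
    # one diode can then generate the field for the next stage (cascading).
    out = spin_diode_current([1.0 * a, 1.0 * b])
    return 1 if out > 0.5 else 0
```

With this threshold, only two active inputs exceed H_TH, giving AND behaviour; lowering the assumed threshold below the single-input field (e.g. to 0.5) turns the same structure into an OR gate, which illustrates how one device family can realize different gates.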