Advances in lithography and nanofabrication have long been driven by the requirements of the classical von Neumann architecture. As Moore's Law approaches its end and market demand shifts toward AI, entirely new, neuromorphic architectures are coming into vogue. The distinct requirements of neuromorphic computers, including an enormous demand for memory and a high tolerance for defects, call for a reassessment of what is and is not feasible in IC manufacturing. Alternative approaches to lithography and fabrication, such as directed self-assembly, which suffer from defect rates incompatible with conventional architectures, may be necessary to build the most complex and dense architectures yet conceived.
We will review the requirements of a range of future architectures based on novel nanotechnology and some of their use cases in neuromorphic computing. We will show that while some approaches, such as GPU replacements that accelerate vector-matrix multiplication, share many of the manufacturing requirements of conventional computers, other, memory-intensive architectures have defect tolerances that depend on both the nature of the defects and the use case. In crossbar topologies, different strategies can be used to tolerate row and column defects as well as point defects in individual devices. The use cases of inference, ex-situ training, in-situ training, and supervised and unsupervised learning each occupy distinct regimes of defect tolerance, with ex-situ-trained systems being the least tolerant and unsupervised, in-situ-trained systems the most tolerant. Because directed self-assembly also spans a wide range of defect densities (0.1%-10%), we use defect-tolerance estimates from the established literature to infer approximate regimes of usefulness for different technologies.
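The effect of these defect classes on a crossbar's output can be illustrated with a minimal simulation. The sketch below is not a model from the paper; it assumes a hypothetical memristive crossbar performing a vector-matrix product, with point defects modeled as devices stuck at zero conductance and row/column defects as whole lines lost. All sizes, conductance ranges, and defect rates are illustrative assumptions chosen to span the 0.1%-10% range mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)
rows, cols = 64, 64

# Hypothetical conductance matrix for a memristive crossbar (arbitrary units).
G = rng.uniform(0.1, 1.0, size=(rows, cols))
x = rng.uniform(0.0, 1.0, size=rows)
y_ideal = x @ G  # ideal vector-matrix product read out on the columns

def apply_defects(G, point_rate=0.01, dead_rows=0, dead_cols=0, rng=rng):
    """Return a copy of G with stuck-open point defects and dead lines.

    point_rate: fraction of devices stuck at zero conductance (point defects).
    dead_rows/dead_cols: whole word/bit lines lost to opens or shorts.
    """
    Gd = G.copy()
    Gd[rng.random(G.shape) < point_rate] = 0.0                      # point defects
    Gd[rng.choice(G.shape[0], dead_rows, replace=False), :] = 0.0   # row defects
    Gd[:, rng.choice(G.shape[1], dead_cols, replace=False)] = 0.0   # column defects
    return Gd

# Sweep point-defect density across the 0.1%-10% range and measure output error.
for rate in (0.001, 0.01, 0.1):
    y = x @ apply_defects(G, point_rate=rate)
    err = np.linalg.norm(y - y_ideal) / np.linalg.norm(y_ideal)
    print(f"point defect rate {rate:>5.1%}: relative output error {err:.3f}")
```

In a sketch like this, a dead column corrupts one output element completely while point defects spread small errors across many outputs, which is one reason the two defect classes call for different mitigation strategies (e.g., spare lines versus training around faulty devices).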
Understanding the limits of defect tolerance in these systems is especially important given the increasing memory-density demands of neuromorphic architectures, which require monolithic 3D integration of logic and memory. Manufacturing multiple stacked layers at the smallest feature size can become cost-prohibitive owing to the large number of critical mask steps. With a neuromorphic architecture operating in its most defect-tolerant use case, however, it may become possible to exploit advances in 3D assembly to realize the largest and most complex computing architectures, based on unsupervised learning of unstructured data.
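The mask-step cost argument can be made concrete with a back-of-the-envelope sketch. The numbers below are purely illustrative assumptions (not industry data): if each stacked memory layer repeats its own set of critical lithography steps, relative wafer cost grows linearly with layer count, dominated by the critical masks.

```python
# Back-of-the-envelope mask-cost sketch. All quantities are illustrative
# assumptions, not measured industry figures.
base_masks = 10               # assumed non-critical mask steps, shared once
critical_masks_per_layer = 4  # assumed critical (smallest-feature) steps per layer
cost_per_base_mask = 1.0      # relative cost unit
cost_per_critical_mask = 3.0  # critical steps assumed 3x the cost of base steps

def wafer_cost(layers):
    """Relative wafer cost when every stacked layer repeats its critical masks."""
    return (base_masks * cost_per_base_mask
            + layers * critical_masks_per_layer * cost_per_critical_mask)

for n in (1, 4, 16, 64):
    print(f"{n:>3} layers: relative wafer cost {wafer_cost(n):.0f}")
```

Under these assumptions each added layer contributes a fixed increment of critical-mask cost, which is why high layer counts at the smallest feature size become prohibitive for conventional yields, and why high defect tolerance (or cheaper, higher-defectivity patterning such as directed self-assembly) changes the economics.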