Publications

Results 1–25 of 66

Proton Tunable Analog Transistor for Low Power Computing

Robinson, Donald A.; Foster, Michael R.; Bennett, Christopher H.; Bhandarkar, Austin B.; Fuller, Elliot J.; Stavila, Vitalie S.; Spataru, Dan C.; Krishnakumar, Raga K.; Cole-Filipiak, Neil C.; Schrader, Paul E.; Ramasesha, Krupa R.; Allendorf, Mark D.; Talin, A.A.

This project was broadly motivated by the need for new hardware that can process information such as images and sounds right at the point where the information is sensed (i.e., edge computing). The project was further motivated by recent discoveries by our group demonstrating that while certain organic polymer blends can be used to fabricate elements of such hardware, the need to mix ionic and electronic conducting phases imposed limits on performance, dimensional scalability, and the degree of fundamental understanding of how such devices operate. As an alternative to blended polymers containing distinct ionic and electronic conducting phases, in this LDRD project we discovered that a family of mixed-valence coordination compounds called Prussian blue analogues (PBAs), with an open framework structure and the ability to conduct both ionic and electronic charge, can be used for inkjet-printed flexible artificial synapses that reversibly switch conductance by more than four orders of magnitude based on an electrochemically tunable oxidation state. Retention of programmed states is improved by nearly two orders of magnitude compared to the extensively studied organic polymers, thus enabling in-memory compute and avoiding energy-costly off-chip access during training. We demonstrate dopamine detection using PBA synapses and biocompatibility with living neurons, evoking prospective applications for brain-computer interfacing. By applying electron transfer theory to in situ spectroscopic probing of intervalence charge transfer, we elucidate a switching mechanism whereby the degree of mixed valency between N-coordinated Ru sites controls the carrier concentration and mobility, as supported by density functional theory (DFT).

More Details

Probabilistic Nanomagnetic Memories for Uncertain and Robust Machine Learning

Bennett, Christopher H.; Xiao, Tianyao X.; Liu, Samuel L.; Humphrey, Leonard H.; Incorvia, Jean A.; Debusschere, Bert D.; Ries, Daniel R.; Agarwal, Sapan A.

This project evaluated the use of emerging spintronic memory devices for robust and efficient variational inference schemes. Variational inference (VI) schemes, which constrain the distribution of each weight to be a Gaussian with a mean and standard deviation, are a tractable method for calculating posterior distributions of weights in a Bayesian neural network, such that the network can still be trained using the powerful backpropagation algorithm. Our project focused on domain-wall magnetic tunnel junctions (DW-MTJs), a powerful multi-functional spintronic synapse design that can achieve low-power switching while also opening a pathway toward repeatable, analog operation using fabricated notches. Our initial efforts to employ DW-MTJs as an all-in-one stochastic synapse encoding both a mean and a standard deviation did not meet the quality metrics for hardware-friendly VI; new device stacks and methods for expressive anisotropy modification may yet make this idea possible. However, as a fallback that immediately satisfies our requirements, we invented and detailed how the combination of a DW-MTJ synapse encoding the mean and a probabilistic Bayes-MTJ device, programmed via a ferroelectric or ionically modifiable layer, can robustly and expressively implement VI. This design includes a physics-informed compact circuit model that was scaled up to demonstrate rigorous uncertainty quantification applications, up to and including small convolutional networks on a grayscale image classification task and larger (residual) networks implementing multi-channel image classification. Lastly, because these results all depend on an inference application in which weights (spintronic memory states) remain non-volatile, the retention of these synapses in the notched case was further interrogated. These investigations revealed the importance of both notch geometry and anisotropy modification for further enhancing the endurance of written spintronic states. In the near future, these results will be mapped to effective predictions for DW-MTJ memory retention at room and elevated temperatures, and experimentally verified when devices become available.
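
As a minimal, illustrative sketch of the VI weight model described above (not the project's code): each weight is a Gaussian with a trainable mean and standard deviation, sampled with the standard reparameterization trick so backpropagation still applies. All names and values here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_weights(mu, sigma):
    """Draw one weight realization: w = mu + sigma * eps, with eps ~ N(0, 1)."""
    eps = rng.standard_normal(mu.shape)
    return mu + sigma * eps

# A 4x3 layer: per the division of labor in the abstract, mu could live in
# DW-MTJ synapses and sigma in the probabilistic Bayes-MTJ devices.
mu = rng.normal(0.0, 0.1, size=(4, 3))
sigma = np.full((4, 3), 0.05)

x = rng.normal(size=3)
y = sample_weights(mu, sigma) @ x   # one stochastic forward pass
```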

More Details

CrossSim Inference Manual v2.0

Xiao, Tianyao X.; Bennett, Christopher H.; Feinberg, Benjamin F.; Marinella, Matthew J.; Agarwal, Sapan A.

Neural networks are largely based on matrix computations. During forward inference, the most heavily used compute kernel is the matrix-vector multiplication (MVM): $W \vec{x}$. Inference is a first frontier for the deployment of next-generation hardware for neural network applications, as it is more readily deployed in edge devices, such as mobile devices or embedded processors with size, weight, and power constraints. Inference is also easier to implement in analog systems than training, which places more stringent requirements on the devices.
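
For concreteness, the kernel named above is simply the following (a NumPy illustration, not CrossSim's own interface):

```python
import numpy as np

W = np.array([[0.2, -0.5, 0.1],
              [0.7,  0.3, -0.2]])   # weight matrix (2 outputs, 3 inputs)
x = np.array([1.0, 0.5, -1.0])      # input activation vector

y = W @ x                           # the MVM: y_i = sum_j W_ij * x_j
```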

More Details

An Accurate, Error-Tolerant, and Energy-Efficient Neural Network Inference Engine Based on SONOS Analog Memory

IEEE Transactions on Circuits and Systems I: Regular Papers

Xiao, T.P.; Feinberg, Benjamin F.; Bennett, Christopher H.; Agrawal, Vineet; Saxena, Prashant; Prabhakar, Venkatraman; Ramkumar, Krishnaswamy; Medu, Harsha; Raghavan, Vijay; Chettuvetty, Ramesh; Agarwal, Sapan A.; Marinella, Matthew J.

We demonstrate SONOS (silicon-oxide-nitride-oxide-silicon) analog memory arrays that are optimized for neural network inference. The devices are fabricated in a 40 nm process and operated in the subthreshold regime for in-memory matrix multiplication. Subthreshold operation enables low conductances to be implemented with low error, matching the typical weight distribution of neural networks, which is heavily skewed toward near-zero values. This leads to high accuracy in the presence of programming errors and process variations. We simulate the end-to-end neural network inference accuracy, accounting for the measured programming error, read noise, and retention loss in a fabricated SONOS array. Evaluated on the ImageNet dataset using ResNet50, the accuracy of the SONOS system is within 2.16% of floating-point accuracy without any retraining. The unique error properties and high on/off ratio of the SONOS device allow scaling to large arrays without bit slicing, and enable an inference architecture that achieves 20 TOPS/W on ResNet50, a >10× gain in energy efficiency over state-of-the-art digital and analog inference accelerators.
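
A minimal sketch of the kind of error-injection simulation described above, assuming a simple Gaussian programming-error model; the magnitudes here are placeholders, not the paper's measured statistics:

```python
import numpy as np

rng = np.random.default_rng(1)

def program_weights(W_ideal, sigma_err=0.01):
    """Model analog programming as ideal weights plus Gaussian write error."""
    return W_ideal + rng.normal(0.0, sigma_err, size=W_ideal.shape)

W = rng.normal(0.0, 0.1, size=(64, 64))    # weights concentrated near zero
W_prog = program_weights(W)

x = rng.normal(size=64)
print(np.linalg.norm(W @ x - W_prog @ x))  # error induced in one MVM
```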

More Details

Vector-Matrix Multiplication Engine for Neuromorphic Computation with a CBRAM Crossbar Array [Slides]

Tolleson, Blayne T.; Marinella, Matthew J.; Bennett, Christopher H.; Barnaby, Hugh J.; Wilson, Donald W.; Short, Jesse C.

The core function of many neural network algorithms is the dot product, or vector-matrix multiply (VMM), operation. Crossbar arrays utilizing resistive memory elements can reduce computational energy in neural algorithms by up to five orders of magnitude compared to conventional CPUs. Moving data between a processor, SRAM, and DRAM dominates energy consumption. By utilizing analog operations to reduce data movement, resistive memory crossbars can enable processing of large amounts of data at lower energy than conventional memory architectures.
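
As a hedged illustration of how a resistive crossbar performs the VMM: input voltages drive the rows, Ohm's law gives each cell's current (I = G·V), and Kirchhoff summation along each column accumulates the result. Values below are invented for illustration.

```python
import numpy as np

G = np.array([[1.0, 0.2],
              [0.4, 0.9],
              [0.1, 0.6]]) * 1e-6   # cell conductances in siemens (3 rows, 2 columns)
V = np.array([0.1, 0.2, -0.1])      # row input voltages in volts

I = G.T @ V   # column currents: I_j = sum_i G_ij * V_i (the analog VMM)
```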

More Details

Purely Spintronic Leaky Integrate-and-Fire Neurons

Proceedings - IEEE International Symposium on Circuits and Systems

Brigner, Wesley H.; Hassan, Naimul; Hu, Xuan; Bennett, Christopher H.; Garcia-Sanchez, Felipe; Marinella, Matthew J.; Incorvia, Jean A.; Friedman, Joseph S.

Neuromorphic computing promises revolutionary improvements over conventional systems for applications that process unstructured information. To fully realize this potential, neuromorphic systems should exploit the biomimetic behavior of emerging nanodevices. In particular, exceptional opportunities are provided by the non-volatility and analog capabilities of spintronic devices. While spintronic devices that emulate neurons have been previously proposed, they require complementary metal-oxide semiconductor (CMOS) technology to function. In turn, this significantly increases the power consumption, fabrication complexity, and device area of a single neuron. This work reviews three previously proposed CMOS-free spintronic neurons designed to resolve this issue.

More Details

Analysis and mitigation of parasitic resistance effects for analog in-memory neural network acceleration

Semiconductor Science and Technology

Xiao, T.P.; Feinberg, Benjamin F.; Rohan, Jacob N.; Bennett, Christopher H.; Agarwal, Sapan A.; Marinella, Matthew J.

To support the increasing demands for efficient deep neural network processing, accelerators based on analog in-memory computation of matrix multiplication have recently gained significant attention for reducing the energy of neural network inference. However, analog processing within memory arrays must contend with the issue of parasitic voltage drops across the metal interconnects, which distort the results of the computation and limit the array size. This work analyzes how parasitic resistance affects the end-to-end inference accuracy of state-of-the-art convolutional neural networks, and comprehensively studies how various design decisions at the device, circuit, architecture, and algorithm levels affect the system's sensitivity to parasitic resistance effects. A set of guidelines is provided for how to design analog accelerator hardware that is intrinsically robust to parasitic resistance, without any explicit compensation or re-training of the network parameters.
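
The following is a deliberately simplified, single-column, first-order sketch of the IR-drop effect in question (the paper's analysis covers full arrays and end-to-end accuracy); conductance and wire-resistance values are invented for illustration:

```python
import numpy as np

G = np.array([1.0, 0.8, 0.6, 0.4]) * 1e-3   # cell conductances (S), one column
V = np.full(4, 0.2)                         # intended cell bias voltages (V)
r_wire = 2.0                                # wire resistance per segment (ohms)

I_ideal = G * V
# First-order correction: the wire segment between node k-1 and node k carries
# the current of every cell at or beyond node k, and its IR drop shifts the
# potential of all of those cells away from the intended bias.
drop = np.zeros_like(V)
for k in range(1, len(V)):
    drop[k:] += r_wire * I_ideal[k:].sum()

I_actual = G * (V - drop)
print(I_ideal.sum(), I_actual.sum())        # ideal vs. distorted column output
```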

More Details

A domain wall-magnetic tunnel junction artificial synapse with notched geometry for accurate and efficient training of deep neural networks

Applied Physics Letters

Liu, Samuel; Xiao, T.P.; Cui, Can; Incorvia, Jean A.; Bennett, Christopher H.; Marinella, Matthew J.

Inspired by the parallelism and efficiency of the brain, several candidates for artificial synapse devices have been developed for neuromorphic computing, yet a nonlinear and asymmetric synaptic response curve precludes their use for backpropagation, the foundation of modern supervised learning. Spintronic devices, which benefit from high endurance, low power consumption, low latency, and CMOS compatibility, are a promising technology for memory, and domain-wall magnetic tunnel junction (DW-MTJ) devices have been shown to implement synaptic functions such as long-term potentiation and spike-timing dependent plasticity. In this work, we propose a notched DW-MTJ synapse as a candidate for supervised learning. Using micromagnetic simulations at room temperature, we show that notched synapses ensure the non-volatility of the synaptic weight and allow for highly linear, symmetric, and reproducible weight updates using either spin transfer torque (STT) or spin-orbit torque (SOT) mechanisms of DW propagation. We use lookup tables constructed from micromagnetic simulations to model the training of neural networks built with DW-MTJ synapses on both the MNIST and Fashion-MNIST image classification tasks. Accounting for thermal noise and realistic process variations, the DW-MTJ devices achieve classification accuracy close to that of ideal floating-point updates using both STT and SOT devices at room temperature and at 400 K. Our work establishes the basis for a magnetic artificial synapse that can eventually lead to hardware neural networks with fully spintronic matrix operations implementing machine learning.
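
A minimal sketch of lookup-table-driven weight updates of the kind described above: the step per programming pulse depends on the device's current state, so training reads the step size from a table. The table below is a synthetic placeholder, not the micromagnetics-derived one used in the paper.

```python
import numpy as np

n_states = 32
states = np.linspace(0.0, 1.0, n_states)
# Synthetic LUT: the step per pulse shrinks as the device approaches the top
# (or bottom) of its range, mimicking a mildly nonlinear response.
lut_up = 0.08 * (1.0 - states)    # potentiation step at each state
lut_down = 0.08 * states          # depression step at each state

def apply_pulse(w, potentiate):
    """Apply one programming pulse, with the step size read from the LUT."""
    idx = int(np.abs(states - w).argmin())       # nearest tabulated state
    step = lut_up[idx] if potentiate else -lut_down[idx]
    return float(np.clip(w + step, 0.0, 1.0))

w = 0.5
for _ in range(5):
    w = apply_pulse(w, potentiate=True)
print(w)
```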

More Details

Ionizing Radiation Effects in SONOS-Based Neuromorphic Inference Accelerators

IEEE Transactions on Nuclear Science

Xiao, T.P.; Bennett, Christopher H.; Agarwal, Sapan A.; Hughart, David R.; Barnaby, Hugh J.; Puchner, Helmut; Prabhakar, Venkatraman; Talin, A.A.; Marinella, Matthew J.

We evaluate the sensitivity of neuromorphic inference accelerators based on silicon-oxide-nitride-oxide-silicon (SONOS) charge trap memory arrays to total ionizing dose (TID) effects. Data retention statistics were collected for 16 Mbit of 40-nm SONOS digital memory exposed to ionizing radiation from a Co-60 source, showing good retention of the bits up to the maximum dose of 500 krad(Si). Using this data, we formulate a rate-equation-based model for the TID response of trapped charge carriers in the ONO stack and predict the effect of TID on intermediate device states between 'program' and 'erase.' This model is then used to simulate arrays of low-power, analog SONOS devices that store 8-bit neural network weights and support in situ matrix-vector multiplication. We evaluate the accuracy of the irradiated SONOS-based inference accelerator on two image recognition tasks - CIFAR-10 and the challenging ImageNet data set - using state-of-the-art convolutional neural networks, such as ResNet-50. We find that across the data sets and neural networks evaluated, the accelerator tolerates a maximum TID between 10 and 100 krad(Si), with deeper networks being more susceptible to accuracy losses due to TID.
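
A hedged sketch of a first-order rate-equation model of the sort described above: trapped charge in the ONO stack is detrapped at a rate proportional to dose, shifting each analog state toward 'erase'. The functional form and rate constant here are assumptions for illustration, not the paper's fitted model.

```python
import numpy as np

def trapped_charge(n0, dose_krad, k=0.002):
    """First-order detrapping: dn/dD = -k * n, so n(D) = n0 * exp(-k * D)."""
    return n0 * np.exp(-k * dose_krad)

levels = np.linspace(0.0, 1.0, 256)                # 8-bit analog weight states
for dose in (0, 10, 100, 500):
    shifted = trapped_charge(levels, dose)
    print(dose, np.max(np.abs(shifted - levels)))  # worst-case state shift
```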

More Details

Heavy-Ion-Induced Displacement Damage Effects in Magnetic Tunnel Junctions with Perpendicular Anisotropy

IEEE Transactions on Nuclear Science

Xiao, T.P.; Bennett, Christopher H.; Mancoff, Frederick B.; Manuel, Jack E.; Hughart, David R.; Jacobs-Gedrim, Robin B.; Bielejec, Edward S.; Vizkelethy, Gyorgy V.; Sun, Jijun; Aggarwal, Sanjeev; Arghavani, Reza A.; Marinella, Matthew J.

We evaluate the resilience of CoFeB/MgO/CoFeB magnetic tunnel junctions (MTJs) with perpendicular magnetic anisotropy (PMA) to displacement damage induced by heavy-ion irradiation. MTJs were exposed to 3-MeV Ta$^{2+}$ ions at different levels of ion beam fluence spanning five orders of magnitude. The devices remained insensitive to beam fluences up to $10^{11}$ ions/cm$^2$, beyond which a gradual degradation in the device magnetoresistance, coercive magnetic field, and spin-transfer-torque (STT) switching voltage was observed, ending with a complete loss of magnetoresistance at very high levels of displacement damage (>0.035 displacements per atom). The loss of magnetoresistance is attributed to structural damage at the MgO interfaces, which allows electrons to scatter among the propagating modes within the tunnel barrier and reduces the net spin polarization. Ion-induced damage to the interface also reduces the PMA. This study clarifies the displacement damage thresholds that lead to significant irreversible changes in the characteristics of STT magnetic random access memory (STT-MRAM) and elucidates the physical mechanisms underlying the deterioration in device properties.

More Details

In situ Parallel Training of Analog Neural Network Using Electrochemical Random-Access Memory

Frontiers in Neuroscience

Li, Yiyang; Xiao, T.P.; Bennett, Christopher H.; Isele, Erik; Melianas, Armantas; Tao, Hanbo; Marinella, Matthew J.; Salleo, Alberto; Fuller, Elliot J.; Talin, A.A.

In-memory computing based on non-volatile resistive memory can significantly improve the energy efficiency of artificial neural networks. However, accurate in situ training has been challenging due to the nonlinear and stochastic switching of the resistive memory elements. One promising analog memory is the electrochemical random-access memory (ECRAM), also known as the redox transistor. Its low write currents and linear switching properties across hundreds of analog states enable accurate and massively parallel updates of a full crossbar array, which yield rapid and energy-efficient training. While simulations predict that ECRAM-based neural networks achieve high training accuracy at significantly higher energy efficiency than digital implementations, these predictions have not been experimentally achieved. In this work, we train a 3 × 3 array of ECRAM devices that learns to discriminate several elementary logic gates (AND, OR, NAND). We record the evolution of the network's synaptic weights during parallel in situ (online) training with outer-product updates. Due to the linear and reproducible device switching characteristics, our crossbar simulations not only accurately predict the epochs to convergence, but also quantitatively capture the evolution of weights in individual devices. The first implementation of in situ parallel training, together with the strong agreement with simulation results, provides a significant advance toward developing ECRAM into larger crossbar arrays for artificial neural network accelerators, which could enable orders-of-magnitude improvements in the energy efficiency of deep neural networks.
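
A minimal sketch of the outer-product update rule referenced above (illustrative NumPy, not the experimental control code): the weight change for the whole array is the outer product of the input vector and the backpropagated error, applied in a single parallel step.

```python
import numpy as np

eta = 0.1                           # learning rate
x = np.array([1.0, 0.0, 1.0])       # input vector applied to the rows
delta = np.array([0.5, -0.2, 0.1])  # error signal at the outputs

dW = eta * np.outer(delta, x)       # rank-1 update: dW_ij = eta * delta_i * x_j
```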

More Details

Controllable Reset Behavior in Domain Wall-Magnetic Tunnel Junction Artificial Neurons for Task-Adaptable Computation

IEEE Magnetics Letters

Liu, Samuel; Bennett, Christopher H.; Friedman, Joseph; Marinella, Matthew J.; Paydarfar, David; Incorvia, Jean A.

Neuromorphic computing with spintronic devices has been of interest due to the limitations of CMOS-driven von Neumann computing. Domain wall-magnetic tunnel junction (DW-MTJ) devices have been shown to be able to intrinsically capture biological neuron behavior. Edgy-relaxed behavior, where a frequently firing neuron experiences a lower action potential threshold, may provide additional artificial neuronal functionality when executing repeated tasks. In this letter, we demonstrate that this behavior can be implemented in DW-MTJ artificial neurons via three alternative mechanisms: shape anisotropy, magnetic field, and current-driven soft reset. Using micromagnetics and analytical device modeling to classify the Optdigits handwritten digit dataset, we show that edgy-relaxed behavior improves both classification accuracy and classification rate for ordered datasets while sacrificing little to no accuracy for a randomized dataset. This letter establishes methods by which artificial spintronic neurons can be flexibly adapted to datasets.
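
A minimal behavioral sketch of the edgy-relaxed mechanism described above, written as an adaptive-threshold leaky integrate-and-fire model; the parameters are invented for illustration and are not the letter's micromagnetic device model.

```python
import numpy as np

def edgy_lif(inputs, leak=0.9, v_th=1.0, relax=0.2, recovery=0.05):
    """LIF neuron whose threshold drops after firing and relaxes back over time."""
    v, th, spikes = 0.0, v_th, []
    for i in inputs:
        v = leak * v + i                        # leaky integration
        if v >= th:
            spikes.append(1)
            v = 0.0                             # reset after firing
            th = max(v_th - relax, th - relax)  # recent firing lowers threshold
        else:
            spikes.append(0)
            th = min(v_th, th + recovery)       # threshold relaxes back up
    return spikes

print(edgy_lif(np.full(20, 0.25)))
```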

More Details

Filament-Free Bulk Resistive Memory Enables Deterministic Analogue Switching

Advanced Materials

Li, Yiyang; Fuller, Elliot J.; Sugar, Joshua D.; Yoo, Sangmin; Ashby, David; Bennett, Christopher H.; Horton, Robert D.; Bartsch, Michael B.; Marinella, Matthew J.; Lu, Wei D.; Talin, A.A.

Digital computing is nearing its physical limits as computing needs and energy consumption rapidly increase. Analogue-memory-based neuromorphic computing can be orders of magnitude more energy efficient at data-intensive tasks like deep neural networks, but has been limited by the inaccurate and unpredictable switching of analogue resistive memory. Filamentary resistive random access memory (RRAM) suffers from stochastic switching due to the random kinetic motion of discrete defects in the nanometer-sized filament. In this work, this stochasticity is overcome by incorporating a solid electrolyte interlayer, in this case, yttria-stabilized zirconia (YSZ), toward eliminating filaments. Filament-free, bulk-RRAM cells instead store analogue states using the bulk point defect concentration, yielding predictable switching because the statistical ensemble behavior of oxygen vacancy defects is deterministic even when individual defects are stochastic. Both experiments and modeling show bulk-RRAM devices using TiO$_{2-x}$ switching layers and YSZ electrolytes yield deterministic and linear analogue switching for efficient inference and training. Bulk-RRAM solves many outstanding issues with memristor unpredictability that have inhibited commercialization, and can, therefore, enable unprecedented new applications for energy-efficient neuromorphic computing. Beyond RRAM, this work shows how harnessing bulk point defects in ionic materials can be used to engineer deterministic nanoelectronic materials and devices.

More Details