Publications

Results 26–50 of 161

Impact of Linearity and Write Noise of Analog Resistive Memory Devices in a Neural Algorithm Accelerator

Conference Proceedings - IEEE International Conference on Rebooting Computing (ICRC)

Jacobs-Gedrim, Robin B.; Agarwal, Sapan A.; Knisely, Kathrine E.; Stevens, Jim E.; Van Heukelom, Michael V.; Hughart, David R.; James, Conrad D.; Marinella, Matthew J.

Resistive memory (ReRAM) shows promise for use as an analog synapse element in energy-efficient neural network algorithm accelerators. A particularly important application is the training of neural networks, as this is the most computationally-intensive procedure in using a neural algorithm. However, training a network with analog ReRAM synapses can significantly reduce the accuracy at the algorithm level. In order to assess this degradation, analog properties of ReRAM devices were measured and hand-written digit recognition accuracy was modeled for the training using backpropagation. Bipolar filamentary devices utilizing three material systems were measured and compared: one oxygen vacancy system, Ta-TaOx, and two conducting metallization systems, Cu-SiO2, and Ag/chalcogenide. Analog properties and conductance ranges of the devices are optimized by measuring the response to varying voltage pulse characteristics. Key analog device properties which degrade the accuracy are update linearity and write noise. Write noise may improve as a function of device manufacturing maturity, but write nonlinearity appears relatively consistent among the different device material systems and is found to be the most significant factor affecting accuracy. This suggests that new materials and/or fundamentally different resistive switching mechanisms may be required to improve device linearity and achieve higher algorithm training accuracy.
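
The abstract does not give the exact update model used in the accuracy simulations; purely as an illustration of the two effects it identifies, the sketch below assumes an exponential-saturation (nonlinear) conductance update with additive Gaussian write noise. All parameter values (bounds, nonlinearity factor, noise level) are hypothetical, not measured device data.

```python
import numpy as np

def noisy_nonlinear_update(g, target_dg, g_min=0.0, g_max=1.0,
                           nonlinearity=3.0, write_noise=0.02,
                           rng=np.random.default_rng(0)):
    """Apply one analog write to a conductance-coded weight.

    The ideal update target_dg is distorted in the two ways the paper
    identifies as accuracy limiters:
      * update nonlinearity: the realized step shrinks as the device
        approaches its conductance bound (exponential saturation), and
      * write noise: a random Gaussian term perturbs every write.
    All parameter values here are illustrative, not measured.
    """
    if target_dg >= 0:                      # potentiation toward g_max
        headroom = (g_max - g) / (g_max - g_min)
    else:                                   # depression toward g_min
        headroom = (g - g_min) / (g_max - g_min)
    realized = target_dg * (1 - np.exp(-nonlinearity * headroom))
    realized += write_noise * abs(target_dg) * rng.standard_normal()
    return np.clip(g + realized, g_min, g_max)

# Repeated identical update requests give diminishing, noisy steps.
g = 0.1
for _ in range(5):
    g = noisy_nonlinear_update(g, 0.2)
    print(round(g, 3))
```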

More Details

Piecewise empirical model (PEM) of resistive memory for pulsed analog and neuromorphic applications

Journal of Computational Electronics

Niroula, John N.; Agarwal, Sapan A.; Jacobs-Gedrim, Robin B.; Schiek, Richard L.; Hughart, David R.; Hsia, Alexander W.; James, Conrad D.; Marinella, Matthew J.

With the end of Dennard scaling and the ever-increasing need for more efficient, faster computation, resistive switching devices (ReRAM), often referred to as memristors, are a promising candidate for next generation computer hardware. These devices show particular promise for use in an analog neuromorphic computing accelerator as they can be tuned to multiple states and be updated like the weights in neuromorphic algorithms. Modeling a ReRAM-based neuromorphic computing accelerator requires a compact model capable of correctly simulating the small weight update behavior associated with neuromorphic training. These small updates have a nonlinear dependence on the initial state, which has a significant impact on neural network training. Consequently, we propose the piecewise empirical model (PEM), an empirically derived general purpose compact model that can accurately capture the nonlinearity of an arbitrary two-terminal device to match pulse measurements important for neuromorphic computing applications. By defining the state of the device to be proportional to its current, the model parameters can be extracted from a series of voltage pulses that mimic the behavior of a device in an analog neuromorphic computing accelerator. This allows for a general, accurate, and intuitive compact circuit model that is applicable to different resistance-switching device technologies. In this work, we explain the details of the model, implement the model in the circuit simulator Xyce, and give an example of its usage to model a specific Ta/TaOx device.
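
The core idea of a piecewise empirical fit can be illustrated in a few lines. The sketch below is not the Xyce implementation; it simply interpolates a hypothetical table of measured per-pulse conductance changes as a piecewise-linear function of the current state and integrates it pulse by pulse (all numbers invented for illustration).

```python
import numpy as np

# Hypothetical pulse-response data: for a fixed write pulse, the measured
# change in conductance dG depends on the current state G (values invented;
# the real PEM is fit to Ta/TaOx pulse measurements).
g_grid  = np.array([10e-6, 30e-6, 50e-6, 70e-6, 90e-6])   # siemens
dg_meas = np.array([4e-6, 3e-6, 2e-6, 1e-6, 0.2e-6])      # per pulse

def pem_step(g):
    """Piecewise-linear empirical update: interpolate dG(G) between the
    measured break points and return the state after one write pulse."""
    dg = np.interp(g, g_grid, dg_meas)
    return g + dg

g = 15e-6
trace = [g]
for _ in range(20):          # apply 20 identical potentiation pulses
    g = pem_step(g)
    trace.append(g)
print([round(x * 1e6, 2) for x in trace])   # conductance in microsiemens
```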

More Details

Impact of linearity and write noise of analog resistive memory devices in a neural algorithm accelerator

2017 IEEE International Conference on Rebooting Computing, ICRC 2017 - Proceedings

Jacobs-Gedrim, Robin B.; Agarwal, Sapan A.; Knisely, Kathrine E.; Stevens, Jim E.; Van Heukelom, Michael V.; Hughart, David R.; Niroula, John; James, Conrad D.; Marinella, Matthew J.

Resistive memory (ReRAM) shows promise for use as an analog synapse element in energy-efficient neural network algorithm accelerators. A particularly important application is the training of neural networks, as this is the most computationally-intensive procedure in using a neural algorithm. However, training a network with analog ReRAM synapses can significantly reduce the accuracy at the algorithm level. In order to assess this degradation, analog properties of ReRAM devices were measured and hand-written digit recognition accuracy was modeled for the training using backpropagation. Bipolar filamentary devices utilizing three material systems were measured and compared: one oxygen vacancy system, Ta-TaOx, and two conducting metallization systems, Cu-SiO2, and Ag/chalcogenide. Analog properties and conductance ranges of the devices are optimized by measuring the response to varying voltage pulse characteristics. Key analog device properties which degrade the accuracy are update linearity and write noise. Write noise may improve as a function of device manufacturing maturity, but write nonlinearity appears relatively consistent among the different device material systems and is found to be the most significant factor affecting accuracy. This suggests that new materials and/or fundamentally different resistive switching mechanisms may be required to improve device linearity and achieve higher algorithm training accuracy.

More Details

A spike-timing neuromorphic architecture

2017 IEEE International Conference on Rebooting Computing, ICRC 2017 - Proceedings

Hill, Aaron J.; Donaldson, Jonathon W.; Rothganger, Fredrick R.; Vineyard, Craig M.; Follett, David R.; Follett, Pamela L.; Smith, Michael R.; Verzi, Stephen J.; Severa, William M.; Wang, Felix W.; Aimone, James B.; Naegle, John H.; James, Conrad D.

Unlike general purpose computer architectures that are comprised of complex processor cores and sequential computation, the brain is innately parallel and contains highly complex connections between computational units (neurons). Key to the architecture of the brain is a functionality enabled by the combined effect of spiking communication and sparse connectivity with unique variable efficacies and temporal latencies. Utilizing these neuroscience principles, we have developed the Spiking Temporal Processing Unit (STPU) architecture which is well-suited for areas such as pattern recognition and natural language processing. In this paper, we formally describe the STPU, implement the STPU on a field programmable gate array, and show measured performance data.

More Details

Hardware Acceleration of Adaptive Neural Algorithms

James, Conrad D.

As traditional numerical computing has faced challenges, researchers have turned towards alternative computing approaches to reduce power-per-computation metrics and improve algorithm performance. Here, we describe an approach towards non-conventional computing that strengthens the connection between machine learning and neuroscience concepts. The Hardware Acceleration of Adaptive Neural Algorithms (HAANA) project has developed neural machine learning algorithms and hardware for applications in image processing and cybersecurity. While machine learning methods are effective at extracting relevant features from many types of data, the effectiveness of these algorithms degrades when subjected to real-world conditions. Our team has generated novel neural-inspired approaches to improve the resiliency and adaptability of machine learning algorithms. In addition, we have designed and fabricated hardware architectures and microelectronic devices specifically tuned towards the training and inference operations of neural-inspired algorithms. Finally, our multi-scale simulation framework allows us to assess the impact of microelectronic device properties on algorithm performance.

More Details

Achieving ideal accuracies in analog neuromorphic computing using periodic carry

Digest of Technical Papers - Symposium on VLSI Technology

Agarwal, Sapan A.; Jacobs-Gedrim, Robin B.; Hsia, Alexander W.; Hughart, David R.; Fuller, Elliot J.; Talin, A.A.; James, Conrad D.; Plimpton, Steven J.; Marinella, Matthew J.

Analog resistive memories promise to reduce the energy of neural networks by orders of magnitude. However, the write variability and write nonlinearity of current devices prevent neural networks from training to high accuracy. We present a novel periodic carry method that uses a positional number system to overcome this while maintaining the benefit of parallel analog matrix operations. We demonstrate how noisy, nonlinear TaOx devices that could only train to 80% accuracy on MNIST, can now reach 97% accuracy, only 1% away from an ideal numeric accuracy of 98%. On a file type dataset, the TaOx devices achieve ideal numeric accuracy. In addition, low noise, linear Li1-xCoO2 devices train to ideal numeric accuracies using periodic carry on both datasets.
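
The abstract describes representing each weight across several devices in a positional number system and periodically carrying between digits. The sketch below is a simplified, hypothetical illustration of that bookkeeping only; the radix, digit count, and carry policy are assumptions, not the paper's values.

```python
BASE, DIGITS = 4, 3          # assumed: 3 devices per weight, radix 4

def to_digits(w):
    """Encode an integer weight as little-endian digits, one per device."""
    digits = []
    for _ in range(DIGITS):
        digits.append(w % BASE)
        w //= BASE
    return digits

def periodic_carry(digits):
    """Propagate overflow from low-order devices into higher-order ones,
    restoring every digit to the small range an analog device can hold."""
    carry = 0
    for i in range(DIGITS):
        total = digits[i] + carry
        digits[i] = total % BASE
        carry = total // BASE
    return digits

# Small training updates accumulate on the least-significant device, so its
# analog write noise and nonlinearity only perturb the lowest-value digit.
digits = to_digits(9)        # 9 = [1, 2, 0] in base 4
digits[0] += 7               # many small training updates land here
digits = periodic_carry(digits)
value = sum(d * BASE**i for i, d in enumerate(digits))
print(digits, value)         # -> [0, 0, 1] 16
```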

More Details

Rapid nucleic acid extraction and purification using a miniature ultrasonic technique

Micromachines

Branch, Darren W.; Vreeland, Erika C.; McClain, Jaime L.; Murton, Jaclyn K.; James, Conrad D.; Achyuthan, Komandoor A.

Miniature ultrasonic lysis for biological sample preparation is a promising technique for efficient and rapid extraction of nucleic acids and proteins from a wide variety of biological sources. Acoustic methods achieve rapid, unbiased, and efficacious disruption of cellular membranes while avoiding the use of harsh chemicals and enzymes, which interfere with detection assays. In this work, a miniature acoustic nucleic acid extraction system is presented. Using a miniature bulk acoustic wave (BAW) transducer array based on 36° Y-cut lithium niobate, acoustic waves were coupled into disposable laminate-based microfluidic cartridges. To verify the lysing effectiveness, the amount of liberated ATP and the cell viability were measured and compared to untreated samples. The relationships between input power, energy dose, flow rate, and lysing efficiency were determined. DNA was purified on-chip using three approaches implemented in the cartridges: a silica-based sol-gel silica-bead-filled microchannel, nucleic acid binding magnetic beads, and Nafion-coated electrodes. Using E. coli, the lysing dose defined as ATP released per joule was 2.2× greater, releasing 6.1× more ATP for the miniature BAW array compared to a bench-top acoustic lysis system. An electric field-based nucleic acid purification approach using Nafion films yielded an extraction efficiency of 69.2% in 10 min for 50 μL samples.

More Details

Neuromorphic data microscope

ACM International Conference Proceeding Series

Follett, David R.; Karpman, Gabe D.; Townsend, Duncan; Naegle, John H.; Follett, Pamela L.; Suppona, Roger A.; Aimone, James B.; James, Conrad D.

In 2016, Lewis Rhodes Labs (LRL) shipped the first commercially viable Neuromorphic Processing Unit (NPU), branded as a Neuromorphic Data Microscope (NDM). This product leverages architectural mechanisms derived from the sensory cortex of the human brain to efficiently implement pattern matching. LRL and Sandia National Labs have optimized this product for streaming analytics, and demonstrated a 1,000x power per operation reduction in an FPGA format. When reduced to an ASIC, the efficiency will improve to 1,000,000x. Additionally, the neuromorphic nature of the device gives it powerful computational attributes that are counterintuitive to those schooled in traditional von Neumann architectures. The Neuromorphic Data Microscope is the first of a broad class of brain-inspired, time domain processors that will profoundly alter the functionality and economics of data processing.

More Details

Neurogenesis deep learning: Extending deep networks to accommodate new classes

Proceedings of the International Joint Conference on Neural Networks

Draelos, Timothy J.; Miner, Nadine E.; Lamb, Christopher L.; Cox, Jonathan A.; Vineyard, Craig M.; Carlson, Kristofor D.; Severa, William M.; James, Conrad D.; Aimone, James B.

Neural machine learning methods, such as deep neural networks (DNN), have achieved remarkable success in a number of complex data processing tasks. These methods have arguably had their strongest impact on tasks such as image and audio processing - data processing domains in which humans have long held clear advantages over conventional algorithms. In contrast to biological neural systems, which are capable of learning continuously, deep artificial networks have a limited ability for incorporating new information in an already trained network. As a result, methods for continuous learning are potentially highly impactful in enabling the application of deep networks to dynamic data sets. Here, inspired by the process of adult neurogenesis in the hippocampus, we explore the potential for adding new neurons to deep layers of artificial neural networks in order to facilitate their acquisition of novel information while preserving previously trained data representations. Our results on the MNIST handwritten digit dataset and the NIST SD 19 dataset, which includes lower and upper case letters and digits, demonstrate that neurogenesis is well suited for addressing the stability-plasticity dilemma that has long challenged adaptive machine learning algorithms.
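
The mechanics of "adding new neurons to deep layers" can be made concrete with a small sketch. The following is a minimal, hypothetical illustration of the bookkeeping only (not the paper's training or replay procedure): a hidden layer's weight matrices are grown with freshly initialized rows and columns while the existing weights are left untouched.

```python
import numpy as np

def add_neurons(w_in, w_out, n_new, rng=np.random.default_rng(0)):
    """Grow a hidden layer by n_new neurons.

    w_in  : (hidden, inputs)  weights into the layer
    w_out : (outputs, hidden) weights out of the layer
    Existing rows/columns are kept as-is (in the paper's setting they would
    be preserved or lightly fine-tuned); only the new neurons receive fresh,
    trainable weights for the novel classes.
    """
    new_in = 0.01 * rng.standard_normal((n_new, w_in.shape[1]))
    new_out = 0.01 * rng.standard_normal((w_out.shape[0], n_new))
    return np.vstack([w_in, new_in]), np.hstack([w_out, new_out])

w_in = np.zeros((4, 8))      # hypothetical trained layer: 4 neurons, 8 inputs
w_out = np.zeros((3, 4))     # 3 outputs
w_in, w_out = add_neurons(w_in, w_out, n_new=2)
print(w_in.shape, w_out.shape)   # (6, 8) (3, 6)
```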

More Details

A novel digital neuromorphic architecture efficiently facilitating complex synaptic response functions applied to liquid state machines

Proceedings of the International Joint Conference on Neural Networks

Smith, Michael R.; Hill, Aaron J.; Carlson, Kristofor D.; Vineyard, Craig M.; Donaldson, Jonathon W.; Follett, David R.; Follett, Pamela L.; Naegle, John H.; James, Conrad D.; Aimone, James B.

Information in neural networks is represented as weighted connections, or synapses, between neurons. This poses a problem as the primary computational bottleneck for neural networks is the vector-matrix multiply when inputs are multiplied by the neural network weights. Conventional processing architectures are not well suited for simulating neural networks, often requiring large amounts of energy and time. Additionally, synapses in biological neural networks are not binary connections, but exhibit a nonlinear response function as neurotransmitters are emitted and diffuse between neurons. Inspired by neuroscience principles, we present a digital neuromorphic architecture, the Spiking Temporal Processing Unit (STPU), capable of modeling arbitrary complex synaptic response functions without requiring additional hardware components. We consider the paradigm of spiking neurons with temporally coded information as opposed to non-spiking rate coded neurons used in most neural networks. In this paradigm we examine liquid state machines applied to speech recognition and show how a liquid state machine with temporal dynamics maps onto the STPU - demonstrating the flexibility and efficiency of the STPU for instantiating neural algorithms.
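
To illustrate what a temporally extended synaptic response function looks like (as opposed to a binary connection), the sketch below convolves a spike train with a double-exponential kernel. This is only an example of the kind of response the STPU is built to evaluate; the kernel shape and time constants are assumptions, not the architecture's specification.

```python
import numpy as np

def synaptic_response(spikes, tau_rise=2.0, tau_decay=10.0, dt=1.0):
    """Convolve a binary spike train with a double-exponential synaptic
    kernel, giving a temporally extended post-synaptic current that a
    simple binary weight cannot express. Constants are illustrative."""
    t = np.arange(0, 5 * tau_decay, dt)
    kernel = np.exp(-t / tau_decay) - np.exp(-t / tau_rise)
    kernel /= kernel.max()
    return np.convolve(spikes, kernel)[: len(spikes)]

spikes = np.zeros(100)
spikes[[10, 12, 40]] = 1.0              # three input spikes
psc = synaptic_response(spikes)
print(round(float(psc[15]), 3), round(float(psc[60]), 3))
```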

More Details

Optimization-based computation with spiking neurons

Proceedings of the International Joint Conference on Neural Networks

Verzi, Stephen J.; Vineyard, Craig M.; Vugrin, Eric D.; Galiardi, Meghan; James, Conrad D.; Aimone, James B.

Considerable effort is currently being spent designing neuromorphic hardware for addressing challenging problems in a variety of pattern-matching applications. These neuromorphic systems offer low power architectures with intrinsically parallel and simple spiking neuron processing elements. Unfortunately, these new hardware architectures have been largely developed without a clear justification for using spiking neurons to compute quantities for problems of interest. Specifically, the use of spiking for encoding information in time has not been explored theoretically with complexity analysis to examine the operating conditions under which neuromorphic computing provides a computational advantage (time, space, power, etc.). In this paper, we present and formally analyze the use of temporal coding in a neural-inspired algorithm for optimization-based computation in neural spiking architectures.

More Details

Designing an analog crossbar based neuromorphic accelerator

2017 5th Berkeley Symposium on Energy Efficient Electronic Systems, E3S 2017 - Proceedings

Agarwal, Sapan A.; Hsia, Alexander W.; Jacobs-Gedrim, Robin B.; Hughart, David R.; Plimpton, Steven J.; James, Conrad D.; Marinella, Matthew J.

Resistive memory crossbars can dramatically reduce the energy required to perform computations in neural algorithms by three orders of magnitude when compared to an optimized digital ASIC [1]. For data-intensive applications, the computational energy is dominated by moving data between the processor, SRAM, and DRAM. Analog crossbars overcome this by allowing data to be processed directly at each memory element. Analog crossbars accelerate three key operations that are the bulk of the computation in a neural network as illustrated in Fig. 1: vector-matrix multiplies (VMM), matrix-vector multiplies (MVM), and outer product rank-1 updates (OPU) [2]. For an N×N crossbar the energy for each operation scales as the number of memory elements, O(N²) [2]. This is because the crossbar performs its entire computation in one step, charging all the capacitances only once. Thus the CV² energy of the array scales as the array size. This is fundamentally better than trying to read or write a digital memory. Each row of any N×N digital memory must be accessed one at a time, resulting in N columns of length O(N) being charged N times, requiring O(N³) energy to read a digital memory. Thus an analog crossbar has a fundamental O(N) energy scaling advantage over a digital system. Furthermore, if the read operation is done at low voltage and is therefore noise limited, the read energy can even be independent of the crossbar size, O(1) [2].
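
The scaling argument in the abstract can be tabulated with a toy energy count: the crossbar charges all N² cell capacitances once per vector-matrix multiply, while a digital memory of the same size is read row by row, charging N bit lines of length O(N) for each of N rows. The capacitance and voltage values below are placeholders chosen only to show the trend, not device numbers from the paper.

```python
# Toy comparison of the O(N^2)-vs-O(N^3) CV^2 energy argument.
C_CELL = 1e-15     # farads per cell / bit-line segment (assumed)
V = 0.5            # volts (assumed)

def analog_crossbar_energy(n):
    # every cell capacitance is charged once for the whole VMM
    return n * n * C_CELL * V**2

def digital_read_energy(n):
    # N rows are read one at a time; each read charges N columns of length ~N
    return n * (n * n * C_CELL) * V**2

for n in (64, 256, 1024):
    ratio = digital_read_energy(n) / analog_crossbar_energy(n)
    print(f"N={n:5d}  analog={analog_crossbar_energy(n):.2e} J  "
          f"digital={digital_read_energy(n):.2e} J  advantage={ratio:.0f}x")
```

The printed advantage grows linearly with N, which is the O(N) scaling benefit the abstract claims.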

More Details