For decades, neural networks have shown promise for next-generation computing, and recent breakthroughs in machine learning, such as deep neural networks, have provided state-of-the-art solutions for inference problems. However, these networks require thousands of training iterations and are poorly suited to the precise computations required in scientific and similar arenas. The emergence of dedicated spiking neuromorphic hardware creates a powerful computational paradigm that can be leveraged toward these exact scientific or otherwise objective computing tasks. We forgo any learning process and instead construct the network graph by hand. In turn, the networks are guaranteed to succeed, often with easily computable complexity. We demonstrate a number of algorithms exemplifying concepts central to spiking networks, including spike timing and synaptic delay. We also discuss the application of cross-correlation particle image velocimetry and provide two spiking algorithms: one uses time-division multiplexing, and the other runs in constant time.
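As a concrete illustration of the spike-timing and synaptic-delay concepts above (a minimal sketch of our own, not one of the paper's algorithms; the neuron model, delay values, and neuron names are assumptions), the following fragment hand-constructs a tiny network in which a value is encoded as a spike time and a synaptic delay performs an exact addition.

```python
# Minimal sketch (not from the paper): a hand-constructed spiking network in which
# values are encoded as spike times and synaptic delays implement exact addition.
# The threshold model, delay of 5, and neuron names are illustrative assumptions.

def simulate(connections, input_spikes, t_max):
    """Discrete-time event propagation. Each connection is (pre, post, delay, weight)
    with delay >= 1 time step; a neuron spikes when its summed input at a time
    step reaches the threshold of 1.0."""
    incoming = {}                          # time step -> {neuron: summed input}
    for neuron, t in input_spikes:
        incoming.setdefault(t, {})
        incoming[t][neuron] = incoming[t].get(neuron, 0.0) + 1.0
    fired = []
    for t in range(t_max):
        for neuron, drive in incoming.get(t, {}).items():
            if drive >= 1.0:               # threshold crossing -> spike
                fired.append((neuron, t))
                for pre, post, delay, weight in connections:
                    if pre == neuron:
                        slot = incoming.setdefault(t + delay, {})
                        slot[post] = slot.get(post, 0.0) + weight
    return fired

# Encode x = 3 as a spike at t = 3; a synapse with delay 5 makes the output neuron
# fire at t = 8, i.e., the network computes x + 5 with guaranteed success.
print(simulate([("in", "out", 5, 1.0)], [("in", 3)], 20))
```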
The effort to develop larger-scale computing systems introduces a set of related challenges: Large machines are more difficult to synchronize. The sheer quantity of hardware introduces more opportunities for errors. New approaches to hardware, such as low-energy or neuromorphic devices, are not directly programmable by traditional methods. These three challenges may be addressed, at least for a subset of interesting problems, by a dynamical systems approach. The initial state of the system represents the problem, and the final state of the system represents the solution. By carefully controlling the system's basin of attraction, we can move it between these two points while tolerating errors, which appear as perturbations. Here we describe both conventional and neural computers as dynamical systems, and show how to construct algorithms with resilience to noise, using traditional numerical problems as a special case. This suggests a reduction from numerical problems to spiking neural hardware such as IBM's TrueNorth.
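The following sketch illustrates the dynamical-systems view in the numerical special case (our illustration, not the paper's construction): solving Ax = b is posed as a flow whose attracting fixed point is the solution, and injected perturbations, standing in for hardware errors, are absorbed by the basin of attraction.

```python
# Hedged sketch of the dynamical-systems view described above (our illustration):
# the numerical problem Ax = b is encoded as a flow whose attracting fixed point
# is the solution; periodic perturbations model hardware errors.
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([9.0, 8.0])

x = np.zeros(2)                            # initial state encodes the problem
dt = 0.05
for step in range(2000):
    x = x - dt * A.T @ (A @ x - b)         # gradient flow toward the fixed point
    if step % 100 == 0:
        x = x + rng.normal(scale=0.05, size=2)   # transient "hardware" error

print(x, np.linalg.solve(A, b))            # the flow settles near the true solution
```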
Bouchard, Kristofer E.; Aimone, James B.; Chun, Miyoung; Dean, Thomas; Denker, Michael; Diesmann, Markus; Donofrio, David D.; Frank, Loren M.; Kasthuri, Narayanan; Koch, Christof; Ruebel, Oliver; Simon, Horst D.; Sommer, Friedrich T.; Prabhat
Opportunities offered by new neuro-technologies are threatened by a lack of coherent plans to analyze, manage, and understand the data. High-performance computing will allow exploratory analysis of massive datasets stored in standardized formats, hosted in open repositories, and integrated with simulations.
Through various means of structural and synaptic plasticity enabling online learning, neural networks are constantly reconfiguring their computational functionality. Neural information content is embodied within the configurations, representations, and computations of neural networks. To explore neural information content, we have developed metrics and computational paradigms to quantify it. We have observed that conventional compression methods may help overcome some of the limiting factors of standard information-theoretic techniques employed in neuroscience and allow us to approximate the information in neural data. To do so, we have used compressibility as a measure of complexity, estimating entropy to quantitatively assess the information content of neural ensembles. Using Lempel-Ziv compression, we are able to assess the rate of generation of new patterns in a neural ensemble's firing activity over time and thereby approximate the information content encoded by a neural circuit. As a specific case study, we have been investigating the effect of mixed neural coding schemes due to hippocampal adult neurogenesis.
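A minimal sketch of the compressibility-as-entropy idea above (assuming a zlib/DEFLATE compressor from the Lempel-Ziv family as a stand-in for the authors' exact pipeline): binarized ensemble activity that keeps repeating patterns compresses far better than activity that keeps generating new ones.

```python
# Minimal sketch of compressibility as a complexity measure for ensemble spiking.
# The binarization, zlib compressor, and example sizes are assumptions, not the
# authors' exact pipeline.
import zlib
import numpy as np

def compression_ratio(binary_matrix):
    """Ratio of compressed to raw size for a (neurons x time bins) 0/1 array;
    lower values mean more redundant (less informative) ensemble activity."""
    raw = np.packbits(binary_matrix.astype(np.uint8)).tobytes()
    return len(zlib.compress(raw, 9)) / max(len(raw), 1)

rng = np.random.default_rng(1)
regular = np.tile(rng.integers(0, 2, (50, 10)), (1, 100))   # repeating patterns
random_fire = rng.integers(0, 2, (50, 1000))                # constantly novel patterns
print(compression_ratio(regular), compression_ratio(random_fire))
```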
Proceedings of the National Academy of Sciences of the United States of America
Du, Huiyun; Deng, Wei; Aimone, James B.; Ge, Minyan; Parylak, Sarah; Walch, Keenan; Zhang, Wei; Cook, Jonathan; Song, Huina; Wang, Liping; Gage, Fred H.; Mu, Yangling
Rewarding experiences are often well remembered, and such memory formation is known to be dependent on dopamine modulation of the neural substrates engaged in learning and memory; however, it is unknown how and where in the brain dopamine signals bias episodic memory toward preceding rather than subsequent events. Here we found that photostimulation of channelrhodopsin-2-expressing dopaminergic fibers in the dentate gyrus induced a long-term depression of cortical inputs, diminished theta oscillations, and impaired subsequent contextual learning. Computational modeling based on this dopamine modulation indicated an asymmetric association of events occurring before and after reward in memory tasks. In subsequent behavioral experiments, preexposure to a natural reward suppressed hippocampus-dependent memory formation, with an effective time window consistent with the duration of dopamine-induced changes of dentate activity. Overall, our results suggest a mechanism by which dopamine enables the hippocampus to encode memory with reduced interference from subsequent experience.
Dieni, Cristina V.; Panichi, Roberto; Aimone, James B.; Kuo, Chay T.; Wadiche, Jacques I.; Overstreet-Wadiche, Linda
Persistent neurogenesis in the dentate gyrus produces immature neurons with high intrinsic excitability and low levels of inhibition that are predicted to be more broadly responsive to afferent activity than mature neurons. Mounting evidence suggests that these immature neurons are necessary for generating distinct neural representations of similar contexts, but it is unclear how broadly responsive neurons help distinguish between similar patterns of afferent activity. Here we show that stimulation of the entorhinal cortex in mouse brain slices paradoxically generates spiking of mature neurons in the absence of immature neuron spiking. Immature neurons with high intrinsic excitability fail to spike due to insufficient excitatory drive that results from low innervation rather than silent synapses or low release probability. Our results suggest that low synaptic connectivity prevents immature neurons from responding broadly to cortical activity, potentially enabling excitable immature neurons to contribute to sparse and orthogonal dentate representations.
The restriction of adult neurogenesis to only a handful of regions of the brain is suggestive of some shared requirement for this dramatic form of structural plasticity. However, a common driver across neurogenic regions has not yet been identified. Computational studies have been invaluable in providing insight into the functional role of new neurons; however, researchers have typically focused on specific scales ranging from abstract neural networks to specific neural systems, most commonly the dentate gyrus area of the hippocampus. These studies have yielded a number of diverse potential functions for new neurons, ranging from an impact on pattern separation to the incorporation of time into episodic memories to enabling the forgetting of old information. This review will summarize these past computational efforts and discuss whether these proposed theoretical functions can be unified into a common rationale for why neurogenesis is required in these unique neural circuits.
The exponential increase in data over the last decade presents a significant challenge to analytics efforts that seek to process and interpret such data for various applications. Neural-inspired computing approaches are being developed in order to leverage the computational properties of the analog, low-power data processing observed in biological systems. Analog resistive memory crossbars can perform a parallel read or a vector-matrix multiplication as well as a parallel write or a rank-1 update with high computational efficiency. For an N × N crossbar, these two kernels can be O(N) more energy efficient than a conventional digital memory-based architecture. If the read operation is noise limited, the energy to read a column can be independent of the crossbar size (O(1)). These two kernels form the basis of many neuromorphic algorithms such as image, text, and speech recognition. For instance, these kernels can be applied to a neural sparse coding algorithm to give an O(N) reduction in energy for the entire algorithm when run with finite precision. Sparse coding is a rich problem with a host of applications including computer vision, object tracking, and more generally unsupervised learning.
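A hedged sketch of the two crossbar kernels described above (our own illustration; a dense NumPy array stands in for the N x N array of analog conductances, and the parameters are assumptions): a parallel read is a vector-matrix multiply, and a parallel write is a rank-1 outer-product update.

```python
# Sketch of the two crossbar kernels: parallel read (vector-matrix multiply) and
# parallel write (rank-1 update). The conductance range, learning rate, and
# clipping are illustrative assumptions, not device measurements.
import numpy as np

N = 4
G = np.random.default_rng(2).uniform(0.0, 1.0, (N, N))     # conductance matrix

def crossbar_read(G, v):
    """Apply voltages v to the rows; the column currents are G^T v (one parallel read)."""
    return G.T @ v

def crossbar_rank1_update(G, row_v, col_v, lr=0.1):
    """Parallel write: every cell (i, j) changes by lr * row_v[i] * col_v[j]."""
    return np.clip(G + lr * np.outer(row_v, col_v), 0.0, 1.0)   # conductances stay bounded

v = np.ones(N)
print(crossbar_read(G, v))
G = crossbar_rank1_update(G, v, crossbar_read(G, v))
```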
Final report for Cognitive Computing for Security LDRD 165613. It reports on the development of a hybrid general-purpose/neuromorphic computer architecture, with an emphasis on potential implementation with memristors.
As transistors start to approach fundamental limits and Moore's law slows down, new devices and architectures are needed to enable continued performance gains. New approaches based on RRAM (resistive random access memory) or memristor crossbars can enable the processing of large amounts of data [1, 2]. One of the most promising applications for RRAM crossbars is brain-inspired or neuromorphic computing [3, 4].
The field of machine learning strives to develop algorithms that, through learning, lead to generalization; that is, the ability of a machine to perform a task that it was not explicitly trained for. An added challenge arises when the problem domain is dynamic or non-stationary, with the data distributions or categorizations changing over time; this phenomenon is known as concept drift. Game-theoretic algorithms are often iterative by nature, consisting of repeated game play rather than a single interaction. Effectively, rather than requiring extensive retraining to update a learning model, a game-theoretic approach can adjust its strategies with each round of play, offering a novel way to handle concept drift. In this paper we present a variant of our Support Vector Machine (SVM) Game classifier that may be used adaptively, with repeated play, to address concept drift, and we show results of applying this algorithm to synthetic as well as real data.
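The SVM Game classifier itself is not reproduced here; as a hedged stand-in, the sketch below shows the repeated-play idea on synthetic drifting data using an incrementally updated linear SVM (scikit-learn's SGDClassifier with hinge loss), which adjusts after each round rather than being retrained from scratch.

```python
# Stand-in sketch for repeated-play adaptation to concept drift. The drifting
# synthetic data and the use of an incremental linear SVM are our assumptions,
# not the paper's SVM Game formulation.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(3)
clf = SGDClassifier(loss="hinge")
classes = np.array([0, 1])

for round_idx in range(20):
    shift = 0.2 * round_idx                      # the class boundary drifts over time
    X = rng.normal(size=(100, 2))
    y = (X[:, 0] + X[:, 1] > shift).astype(int)
    if round_idx > 0:
        print(round_idx, clf.score(X, y))        # evaluate before adapting
    clf.partial_fit(X, y, classes=classes)       # one "play" of the repeated game
```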
Some next-generation computing devices may consist of resistive memory arranged as a crossbar. Currently, the dominant approach is to use crossbars as the weight matrix of a neural network and to use learning algorithms that require small incremental weight updates, such as gradient descent (for example, backpropagation). Using real-world measurements, we demonstrate that resistive memory devices are unlikely to support such learning methods. As an alternative, we offer a random search algorithm tailored to the measured characteristics of our devices.
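As a hedged sketch of the alternative (our illustration; the measured device characteristics and the paper's exact procedure are not reproduced here), the fragment below runs a random search over crossbar weights in which every proposed update passes through a coarse, noisy write model and is kept only if the task error improves.

```python
# Sketch of random search over crossbar weights with an imprecise write model.
# The write resolution, noise scale, and toy classification task are assumptions,
# not the measured device characteristics from the paper.
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 8))
y = (X @ rng.normal(size=8) > 0).astype(float)

def loss(W):
    pred = (X @ W > 0).astype(float)
    return np.mean(pred != y)                    # classification error rate

def noisy_write(W):
    """Model an imprecise resistive write: coarse levels plus random variation."""
    levels = np.round(W * 8) / 8                 # limited write resolution (assumed)
    return levels + rng.normal(scale=0.02, size=W.shape)

W = noisy_write(rng.normal(size=8))
best = loss(W)
for _ in range(500):
    candidate = noisy_write(W + rng.normal(scale=0.2, size=8))
    if loss(candidate) <= best:                  # keep the perturbation only if it helps
        W, best = candidate, loss(candidate)
print("final error rate:", best)
```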
Using High Performance Computing to Examine the Processes of Neurogenesis Underlying Pattern Separation/Completion of Episodic Information - Sandia researchers developed novel methods and metrics for studying the computational function of neurogenesis, thus generating substantial impact on the neuroscience and neural computing communities. This work could benefit applications in machine learning and other analysis activities.