Neuromorphic computing, which aims to replicate the computational structure and architecture of the brain in synthetic hardware, has typically focused on artificial intelligence applications. What is less explored is whether such brain-inspired hardware can provide value beyond cognitive tasks. Here we show that the high degree of parallelism and configurability of spiking neuromorphic architectures makes them well suited to implement random walks via discrete-time Markov chains. These random walks are useful in Monte Carlo methods, which represent a fundamental computational tool for solving a wide range of numerical computing tasks. Using IBM’s TrueNorth and Intel’s Loihi neuromorphic computing platforms, we show that our neuromorphic computing algorithm for generating random walk approximations of diffusion offers advantages in energy-efficient computation compared with conventional approaches. We also show that our neuromorphic computing algorithm can be extended to more sophisticated jump-diffusion processes that are useful in a range of applications, including financial economics, particle physics and machine learning.
Graph algorithms enable myriad large-scale applications including cybersecurity, social network analysis, resource allocation, and routing. The scalability of current graph algorithm implementations on conventional computing architectures is hampered by the demise of Moore’s law. We present a theoretical framework for designing and assessing the performance of graph algorithms executing in networks of spiking artificial neurons. Although spiking neural networks (SNNs) are capable of general-purpose computation, few algorithmic results with rigorous asymptotic performance analysis are known. SNNs are exceptionally well-motivated practically, as neuromorphic computing systems with 100 million spiking neurons are available, and systems with a billion neurons are anticipated in the next few years. Beyond massive parallelism and scalability, neuromorphic computing systems offer energy consumption orders of magnitude lower than conventional high-performance computing systems. We employ our framework to design and analyze new spiking algorithms for shortest path and dynamic programming problems. Our neuromorphic algorithms are message-passing algorithms relying critically on data movement for computation. For fair and rigorous comparison with conventional algorithms and architectures, which is challenging but paramount, we develop new models of data movement in conventional computing architectures. This allows us to prove polynomial-factor advantages, even when we assume an SNN consisting of a simple grid-like network of neurons. To the best of our knowledge, this is one of the first examples of a rigorous asymptotic computational advantage for neuromorphic computing.
Probabilistic and Bayesian neural networks have long been proposed as a method to incorporate uncertainty about the world (both in training data and operation) into artificial intelligence applications. One approach to making a neural network probabilistic is to leverage a Monte Carlo sampling approach that samples a trained network while incorporating noise. Such sampling approaches for neural networks have not been extensively studied due to the prohibitive requirement of many computationally expensive samples. While the development of future microelectronics platforms that make this sampling more efficient is an attractive option, it has not been immediately clear how to sample a neural network and what the quality of random number generation should be. This research aimed to start addressing these two fundamental questions by examining how basic “off-the-shelf” neural networks can be sampled through a few different mechanisms (including synapse “dropout” and neuron “dropout”) and how these sampling approaches can be evaluated, both in terms of algorithm effectiveness and the required quality of random numbers.
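A minimal illustration of this kind of sampling, assuming a toy two-layer perceptron with stand-in weights (all names and parameters below are hypothetical, not the networks studied in the work): random masks implement synapse “dropout” and neuron “dropout” at inference time, and repeated stochastic forward passes yield a predictive distribution whose spread reflects uncertainty.

```python
# Hypothetical sketch: Monte Carlo sampling of a small "trained" MLP by
# randomly dropping synapses or neurons at inference and aggregating outputs.
import numpy as np

rng = np.random.default_rng(0)

# Tiny stand-in for a pre-trained 2-layer network (random weights for illustration).
W1, b1 = rng.normal(size=(16, 8)), np.zeros(16)
W2, b2 = rng.normal(size=(3, 16)), np.zeros(3)

def forward(x, synapse_keep=1.0, neuron_keep=1.0):
    """One stochastic forward pass with synapse and/or neuron dropout."""
    m1 = rng.random(W1.shape) < synapse_keep          # synapse "dropout" mask
    h = np.maximum(0.0, (W1 * m1) @ x + b1)           # ReLU hidden layer
    h = h * (rng.random(h.shape) < neuron_keep)       # neuron "dropout" mask
    z = W2 @ h + b2
    return np.exp(z) / np.exp(z).sum()                # softmax class probabilities

x = rng.normal(size=8)
samples = np.stack([forward(x, synapse_keep=0.9, neuron_keep=0.9)
                    for _ in range(1000)])
print("mean prediction:", samples.mean(axis=0))
print("per-class std (uncertainty):", samples.std(axis=0))
```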
We present a theoretical framework for designing and assessing the performance of algorithms executing in networks consisting of spiking artificial neurons. Although spiking neural networks (SNNs) are capable of general-purpose computation, few algorithmic results with rigorous asymptotic performance analysis are known. SNNs are exceptionally well-motivated practically, as neuromorphic computing systems with 100 million spiking neurons are available, and systems with a billion neurons are anticipated in the next few years. Beyond massive parallelism and scalability, neuromorphic computing systems offer energy consumption orders of magnitude lower than conventional high-performance computing systems. We employ our framework to design and analyze neuromorphic graph algorithms, focusing on shortest path problems. Our neuromorphic algorithms are message-passing algorithms relying critically on data movement for computation, and we develop data-movement lower bounds for conventional algorithms. A fair and rigorous comparison with conventional algorithms and architectures is challenging but paramount. We prove a polynomial-factor advantage even when we assume an SNN consisting of a simple grid-like network of neurons. To the best of our knowledge, this is one of the first examples of a provable asymptotic computational advantage for neuromorphic computing.
In this presentation we will discuss recent results on using the SpiNNaker neuromorphic platform (48-chip model) for deep learning neural network inference. We use the Sandia Labs-developed Whetstone spiking deep learning library to train deep multi-layer perceptrons and convolutional neural networks suitable for the spiking substrate of the neural hardware architecture. By using the massively parallel nature of SpiNNaker, we are able to achieve, under certain network topologies, substantial network tiling and consequently impressive inference throughput. Such high-throughput systems may have eventual application in remote sensing, where large images need to be chipped, scanned, and processed quickly. Additionally, we explore complex topologies that push the limits of the SpiNNaker routing hardware and investigate how that impacts mapping software-implemented networks to on-hardware instantiations.
Recent advances in neuromorphic algorithm development have shown that neural-inspired architectures can efficiently solve scientific computing problems, including graph decision problems and partial integro-differential equations (PIDEs). The latter requires the generation of a large number of samples from a stochastic process. While the Monte Carlo approximation of the solution of the PIDEs converges with an increasing number of sampled neuromorphic trajectories, the fidelity of samples from a given stochastic process using neuromorphic hardware requires verification. Such an exercise increases our trust in this emerging hardware and works toward unlocking its energy and scaling efficiency for scientific purposes such as synthetic data generation and stochastic simulation. In this paper, we focus our verification efforts on a one-dimensional Ornstein-Uhlenbeck stochastic differential equation. Using a discrete-time Markov chain approximation, we sample trajectories of the stochastic process across a variety of parameters on an eight-chip Intel Loihi Nahuku neuromorphic platform. Using relative entropy as a verification measure, we demonstrate that the random samples generated on Loihi are, in an average sense, acceptable. Finally, we demonstrate how Loihi's fidelity to the distribution changes as a function of the parameters of the Ornstein-Uhlenbeck equation, highlighting a trade-off between the lower-precision random number generation of the neuromorphic platform and our algorithm's ability to represent a discrete-time Markov chain.
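A minimal sketch of the underlying construction, assuming a simple birth-death lattice chain with spacing dx and diffusive time step dt = dx²/σ² (parameter values are illustrative, and this runs on a CPU rather than Loihi): walkers step left or right with drift-dependent probabilities, and the empirical distribution at time T is compared with the analytic Ornstein-Uhlenbeck law via relative entropy.

```python
# Illustrative DTMC approximation of dX = theta*(mu - X)*dt + sigma*dW,
# verified against the analytic Gaussian marginal via KL divergence.
import numpy as np

rng = np.random.default_rng(1)
theta, mu, sigma = 1.0, 0.0, 0.5
dx = 0.05
dt = dx**2 / sigma**2              # diffusive scaling of the chain
T, x0, n_walkers = 1.0, 1.0, 20000
n_steps = int(T / dt)

x = np.full(n_walkers, x0)
for _ in range(n_steps):
    drift = theta * (mu - x)
    p_right = np.clip(0.5 * (1.0 + drift * dx / sigma**2), 0.0, 1.0)
    step = np.where(rng.random(n_walkers) < p_right, dx, -dx)
    x = x + step

# Analytic OU marginal at time T for comparison.
m = mu + (x0 - mu) * np.exp(-theta * T)
v = sigma**2 / (2 * theta) * (1 - np.exp(-2 * theta * T))

edges = np.linspace(m - 5 * np.sqrt(v), m + 5 * np.sqrt(v), 60)
p_emp, _ = np.histogram(x, bins=edges)
p_emp = p_emp / p_emp.sum()
centers = 0.5 * (edges[:-1] + edges[1:])
q = np.exp(-(centers - m) ** 2 / (2 * v))
q = q / q.sum()

mask = p_emp > 0
kl = np.sum(p_emp[mask] * np.log(p_emp[mask] / q[mask]))   # relative entropy
print(f"KL(empirical || analytic) ~ {kl:.4f}")
```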
Boolean functions and binary arithmetic operations are central to standard computing paradigms. Accordingly, many advances in computing have focused upon how to make these operations more efficient as well as exploring what they can compute. To best leverage the advantages of novel computing paradigms it is important to consider what unique computing approaches they offer. However, for any special-purpose co-processor, Boolean functions and binary arithmetic operations are useful for, among other things, avoiding unnecessary I/O on and off the co-processor by pre- and post-processing data on-device. This is especially true for spiking neuromorphic architectures, where these basic operations are not fundamental low-level operations and instead require specific implementation. Here we discuss the implications of an advantageous streaming binary encoding method as well as a handful of circuits designed to exactly compute elementary Boolean and binary operations.
Neuromorphic computing is a critical future technology for the computing industry, but it has yet to achieve its promise and has struggled to establish a cohesive research community. A large part of the challenge is that full realization of the potential of brain inspiration requires advances in device hardware, computing architectures, and algorithms alike. This simultaneous development across technology scales is unprecedented in the computing field. This article presents a strategy, framed by market and policy pressures, for moving past these technological and cultural hurdles to realize the technology's full impact. Achieving the full potential of brain-derived algorithms as well as post-complementary metal-oxide-semiconductor (CMOS) scaling neuromorphic hardware requires appropriately balancing the near-term opportunities of deep learning applications with the long-term potential of less understood opportunities in neural computing.
The highly parallel, spiking neural networks of neuromorphic processors can enable computationally powerful formulations. While recent interest has focused primarily on machine learning tasks, the space of appropriate applications is wide and continually expanding. Here, we leverage this parallel and event-driven structure to solve a steady-state heat equation using a random walk method. The random walk can be executed fully within a spiking neural network using stochastic neuron behavior, and we provide results from both IBM TrueNorth and Intel Loihi implementations. Additionally, we position this algorithm as a potential scalable benchmark for neuromorphic systems.
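A minimal CPU sketch of the random walk method for the steady-state heat (Laplace) equation, with an assumed Dirichlet boundary condition (hot top edge, cold elsewhere); the neuromorphic implementations replace the host-side walk loop with stochastic spiking neurons, but the estimator is the same: average the boundary temperature at each walk's exit point.

```python
# Illustrative walk-on-grid estimator for a 2D Laplace (steady-state heat) problem.
import numpy as np

rng = np.random.default_rng(2)
N = 20                                   # interior grid is (1..N-1) x (1..N-1)

def boundary_temp(i, j):
    # Example Dirichlet condition: hot top edge, cold elsewhere (assumed for illustration).
    return 1.0 if i == 0 else 0.0

def estimate_temperature(i0, j0, n_walks=5000):
    total = 0.0
    moves = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])
    for _ in range(n_walks):
        i, j = i0, j0
        while 0 < i < N and 0 < j < N:   # walk until the boundary is hit
            di, dj = moves[rng.integers(4)]
            i, j = i + di, j + dj
        total += boundary_temp(i, j)     # boundary value where the walk exits
    return total / n_walks

print("T at grid center ~", estimate_temperature(N // 2, N // 2))
```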
Historically, neuroscience principles have heavily influenced artificial intelligence (AI), for example the influence of the perceptron model, essentially a simple model of a biological neuron, on artificial neural networks. More recently, notable AI advances, such as the growing popularity of reinforcement learning, often appear more aligned with cognitive neuroscience or psychology, focusing on function at a relatively abstract level. At the same time, neuroscience stands poised to enter a new era of large-scale high-resolution data and appears more focused on underlying neural mechanisms or architectures that can, at times, seem rather removed from functional descriptions. While this might seem to foretell a new generation of AI approaches arising from a deeper exploration of neuroscience specifically for AI, the most direct path for achieving this is unclear. Here we discuss cultural differences between the two fields, including divergent priorities that should be considered when leveraging modern-day neuroscience for AI. For example, the two fields feed two very different applications that at times require potentially conflicting perspectives. We highlight small but significant cultural shifts that we feel would greatly facilitate increased synergy between the two fields.
Neuromorphic hardware architectures represent a growing family of potential post-Moore's Law Era platforms. Largely due to event-driven processing inspired by the human brain, these computer platforms can offer significant energy benefits compared to traditional von Neumann processors. Unfortunately, there remains considerable difficulty in successfully programming, configuring, and deploying neuromorphic systems. We present the Fugu framework as an answer to this need. Rather than requiring a developer to attain intricate knowledge of how to program and exploit spiking neural dynamics to utilize the potential benefits of neuromorphic computing, Fugu is designed to provide a higher-level abstraction: a hardware-independent mechanism for linking scalable spiking neural algorithms from a variety of sources. Individual kernels linked together provide sophisticated processing through compositionality. Fugu is intended to be suitable for a wide range of neuromorphic applications, including machine learning, scientific computing, and more brain-inspired neural algorithms. Ultimately, we hope the community adopts this and other open standardization attempts, allowing for the free exchange and easy implementation of the ever-growing list of spiking neural algorithms.
With the advent of large-scale neuromorphic platforms, we seek to better understand the applications of neuromorphic computing to more general-purpose computing domains. Graph analysis problems have grown increasingly relevant in the wake of readily available massive data. We demonstrate that a broad class of combinatorial and graph problems known as dynamic programs enjoy simple and efficient neuromorphic implementations, by developing a general technique to convert dynamic programs to spiking neuromorphic algorithms. Dynamic programs have been studied for over 50 years and have dozens of applications across many fields.
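As an illustration of the message-passing flavor of such algorithms, here is a small sketch of a spiking-style single-source shortest path computation (a classic dynamic program), under the assumed construction that each vertex is a neuron, each edge a synapse whose delay equals the edge weight, and a neuron's first spike time equals its shortest-path distance; the discrete-event loop below stands in for the hardware's spike propagation.

```python
# Illustrative spiking-style shortest path: spike arrival times play the role
# of the dynamic-programming value table.
import heapq

def spiking_shortest_paths(edges, n_vertices, source):
    """edges: list of (u, v, integer_delay). Discrete-event simulation of
    spike propagation; each neuron fires once, on its earliest incoming spike."""
    adj = [[] for _ in range(n_vertices)]
    for u, v, d in edges:
        adj[u].append((v, d))
    first_spike = [None] * n_vertices
    events = [(0, source)]                    # (spike arrival time, neuron)
    while events:
        t, v = heapq.heappop(events)
        if first_spike[v] is not None:
            continue                          # refractory: a neuron fires only once
        first_spike[v] = t
        for w, delay in adj[v]:               # spike travels along outgoing synapses
            heapq.heappush(events, (t + delay, w))
    return first_spike

edges = [(0, 1, 2), (0, 2, 5), (1, 2, 1), (2, 3, 2), (1, 3, 7)]
print(spiking_shortest_paths(edges, 4, source=0))   # [0, 2, 3, 5]
```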
With the successes deep neural networks have achieved across a range of applications, researchers have been exploring computational architectures to more efficiently execute their operation. In addition to the prevalent role of graphics processing units (GPUs), many accelerator architectures have emerged. Neuromorphic computing is one such approach, which takes inspiration from the brain to guide the computational principles of the architecture, including varying levels of biological realism. In this paper we present results on using the SpiNNaker neuromorphic platform (48-chip model) for deep learning neural network inference. We use the Sandia National Laboratories-developed Whetstone spiking deep learning library to train deep multi-layer perceptrons and convolutional neural networks suitable for the spiking substrate of the neural hardware architecture. By using the massively parallel nature of SpiNNaker, we are able to achieve, under certain network topologies, substantial network tiling and consequently impressive inference throughput. Such high-throughput systems may have eventual application in remote sensing, where large images need to be chipped, scanned, and processed quickly. Additionally, we explore complex topologies that push the limits of the SpiNNaker routing hardware and investigate how that impacts mapping software-implemented networks to on-hardware instantiations.
A hybrid analogue–digital computing system based on memristive devices is capable of solving classic control problems with potentially a lower energy consumption and higher speed than fully digital systems.
Emerging memory devices, such as resistive crossbars, have the capacity to store large amounts of data in a single array. Acquiring the data stored in large-capacity crossbars in a sequential fashion can become a bottleneck. We present practical methods, based on sparse sampling, to quickly acquire sparse data stored on emerging memory devices that support the basic summation kernel, reducing the acquisition time from linear to sub-linear. The experimental results show that at least an order of magnitude improvement in acquisition time can be achieved when the data are sparse. In addition, we show that the energy cost associated with our approach is competitive to that of the sequential method.
A forensics investigation after a breach often uncovers network and host indicators of compromise (IOCs) that can be deployed to sensors to allow early detection of the adversary in the future. Over time, the adversary will change tactics, techniques, and procedures (TTPs), which will also change the data generated. If the IOCs are not kept up-to-date with the adversary's new TTPs, the adversary will no longer be detected once all of the IOCs become invalid. Tracking the Known (TTK) is the problem of keeping IOCs, in this case regular expressions (regexes), up-to-date with a dynamic adversary. Our framework solves the TTK problem in an automated, cyclic fashion to bracket a previously discovered adversary. This tracking is accomplished through a data-driven approach of self-adapting a given model based on its own detection capabilities. In our initial experiments, we found that the true positive rate (TPR) of the adaptive solution degrades much less significantly over time than that of the naïve solution, suggesting that self-updating the model allows the continued detection of positives (i.e., adversaries). The cost for this performance is in the false positive rate (FPR), which increases over time for the adaptive solution but remains constant for the naïve solution. However, the difference in overall detection performance, as measured by the area under the curve (AUC), between the two methods is negligible. This result suggests that self-updating the model over time should be done in practice to continue to detect known, evolving adversaries.
The random walk is a fundamental stochastic process that underlies many numerical tasks in scientific computing applications. We consider here two neural algorithms that can be used to efficiently implement random walks on spiking neuromorphic hardware. The first method tracks the positions of individual walkers independently by using a modular code inspired by the grid cell spatial representation in the brain. The second method tracks the densities of random walkers at each spatial location directly. We analyze the scaling complexity of each of these methods and illustrate their ability to model random walkers under different probabilistic conditions.
Neural-inspired spike-based computing machines often claim to achieve considerable advantages in terms of energy and time efficiency by using spikes for computation and communication. However, fundamental questions about spike-based computation remain unanswered. For instance, how much advantage do spike-based approaches have over conventional methods, and under what circumstances does spike-based computing provide a comparative advantage? Simply implementing existing algorithms using spikes as the medium of computation and communication is not guaranteed to yield an advantage. Here, we demonstrate that spike-based communication and computation within algorithms can increase throughput, and they can decrease energy cost in some cases. We present several spiking algorithms, including sorting a set of numbers in ascending/descending order, as well as finding the maximum, minimum, or median of a set of numbers. We also provide an example application: a spiking median-filtering approach for image processing providing a low-energy, parallel implementation. The algorithms and analyses presented here demonstrate that spiking algorithms can provide performance advantages and offer efficient computation of fundamental operations useful in more complex algorithms.
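A toy sketch of the time-to-first-spike idea behind the min/max algorithms (assuming non-negative integer inputs and unit time steps, which is an illustrative simplification): each input neuron fires after a delay equal to its value, so the earliest spike identifies the minimum and the latest first spike identifies the maximum.

```python
# Illustrative time-to-first-spike min/max: neuron i spikes at time values[i].
def spiking_min_max(values):
    t = 0
    fired = set()
    first = last = None
    while len(fired) < len(values):          # advance the global clock
        for i, v in enumerate(values):
            if i not in fired and v == t:    # neuron i spikes at time v
                fired.add(i)
                if first is None:
                    first = (i, v)           # earliest spike -> minimum
                last = (i, v)                # latest first spike -> maximum
        t += 1
    return first, last

print(spiking_min_max([7, 3, 9, 3, 5]))      # ((index, min), (index, max))
```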
File fragment classification is an important step in the task of file carving in digital forensics. In file carving, files must be reconstructed based on their content as a result of their fragmented storage on disk or in memory. Existing methods for classification of file fragments typically use hand-engineered features, such as byte histograms or entropy measures. In this paper, we propose an approach using sparse coding that enables automated feature extraction. Sparse coding, or sparse dictionary learning, is an unsupervised learning algorithm, and is capable of extracting features based simply on how well those features can be used to reconstruct the original data. With respect to file fragments, we learn sparse dictionaries for n-grams, contiguous sequences of bytes, of different sizes. These dictionaries may then be used to estimate n-gram frequencies for a given file fragment, but for significantly larger n-gram sizes than are typically found in existing methods, which suffer from combinatorial explosion. To demonstrate the capability of our sparse coding approach, we used the resulting features to train standard classifiers, such as support vector machines, over multiple file types. Experimentally, we achieved significantly better classification results with respect to existing methods, especially when the features were used to supplement existing hand-engineered features.
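A heavily simplified, hypothetical sketch of the pipeline (the windowing, dictionary size, and pooling choices below are illustrative, not the paper's): learn a sparse dictionary over contiguous byte windows and pool the resulting sparse codes into a fixed-length feature vector for a downstream classifier such as an SVM.

```python
# Illustrative sparse coding of byte n-grams; fragment data here is random stand-in.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

def byte_ngrams(fragment: bytes, n=8, stride=4):
    arr = np.frombuffer(fragment, dtype=np.uint8).astype(np.float64) / 255.0
    windows = [arr[i:i + n] for i in range(0, len(arr) - n + 1, stride)]
    return np.stack(windows)

rng = np.random.default_rng(3)
fragment = rng.integers(0, 256, size=4096, dtype=np.uint8).tobytes()  # stand-in fragment
X = byte_ngrams(fragment)

dico = MiniBatchDictionaryLearning(n_components=32, alpha=0.5, random_state=0)
codes = dico.fit(X).transform(X)             # sparse codes for each n-gram window

# Pool codes over the fragment into one fixed-length feature vector,
# e.g. as input to a support vector machine.
feature = np.abs(codes).mean(axis=0)
print(feature.shape)                          # (32,)
```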
The rise of low-power neuromorphic hardware has the potential to change high-performance computing; however much of the focus on brain-inspired hardware has been on machine learning applications. A low-power solution for solving partial differential equations could radically change how we approach large-scale computing in the future. The random walk is a fundamental stochastic process that underlies many numerical tasks in scientific computing applications. We consider here two neural algorithms that can be used to efficiently implement random walks on spiking neuromorphic hardware. The first method tracks the positions of individual walkers independently by using a modular code inspired by grid cells in the brain. The second method tracks the densities of random walkers at each spatial location directly. We present the scaling complexity of each of these methods and illustrate their ability to model random walkers under different probabilistic conditions. Finally, we present implementations of these algorithms on neuromorphic hardware.
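A minimal sketch of the density-tracking method on a one-dimensional grid (illustrative parameters, CPU only): each site holds a walker count that is redistributed binomially to its neighbors at every step, so the full ensemble advances in parallel without tracking individual walkers.

```python
# Illustrative density-tracking random walk on a 1D lattice with reflecting boundaries.
import numpy as np

rng = np.random.default_rng(4)
n_sites, n_steps = 101, 200
p_right = 0.5                              # unbiased diffusion

density = np.zeros(n_sites, dtype=np.int64)
density[n_sites // 2] = 10000              # all walkers start at the center

for _ in range(n_steps):
    # Each site splits its walkers binomially between left and right moves.
    to_right = rng.binomial(density, p_right)
    to_left = density - to_right
    new = np.zeros_like(density)
    new[1:] += to_right[:-1]               # walkers moving right
    new[:-1] += to_left[1:]                # walkers moving left
    new[0] += to_left[0]                   # reflecting boundaries
    new[-1] += to_right[-1]
    density = new

print("total walkers conserved:", density.sum())
print("empirical variance:",
      np.average((np.arange(n_sites) - n_sites // 2) ** 2, weights=density))
```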
Boolean circuits of McCulloch-Pitts threshold gates are a classic model of neural computation studied heavily in the late 20th century as a model of general computation. Recent advances in large-scale neural computing hardware have made their practical implementation a near-term possibility. We describe a theoretical approach for multiplying two N by N matrices that integrates threshold gate logic with conventional fast matrix multiplication algorithms, which perform O(N^ω) arithmetic operations for a positive constant ω < 3. Our approach converts such a fast matrix multiplication algorithm into a constant-depth threshold circuit with approximately O(N^ω) gates. Prior to our work, it was not known whether the Θ(N^3)-gate barrier for matrix multiplication was surmountable by constant-depth threshold circuits. Dense matrix multiplication is a core operation in convolutional neural network training. Performing this work on a neural architecture instead of off-loading it to a GPU may be an appealing option.
Anomaly detection is an important problem in various fields of complex systems research including image processing, data analysis, physical security and cybersecurity. In image processing, it is used for removing noise while preserving image quality, and in data analysis, physical security and cybersecurity, it is used to find interesting data points, objects or events in a vast sea of information. Anomaly detection will continue to be an important problem in domains intersecting with “Big Data”. In this paper we provide a novel algorithm for anomaly detection that uses phase-coded spiking neurons as basic computational elements.
Unlike general-purpose computer architectures that are composed of complex processor cores and sequential computation, the brain is innately parallel and contains highly complex connections between computational units (neurons). Key to the architecture of the brain is a functionality enabled by the combined effect of spiking communication and sparse connectivity with unique variable efficacies and temporal latencies. Utilizing these neuroscience principles, we have developed the Spiking Temporal Processing Unit (STPU) architecture, which is well-suited for areas such as pattern recognition and natural language processing. In this paper, we formally describe the STPU, implement the STPU on a field programmable gate array, and show measured performance data.
Neural computing has been identified as a computing alternative in the post-Moore's Law era, however much of its attention has been directed at specialized applications such as machine learning. For scientific computing applications, particularly those that often depend on supercomputing, it is not clear that neural machine learning is the exclusive contribution to be made by neuromorphic platforms. In our presentation, we will discuss ways that looking to the brain as a whole and neurons specifically can provide new sources for inspiration for computing beyond current machine learning applications. Particularly for scientific computing, where approximate methods for computation introduce additional challenges, the development of non-approximate methods for neural computation is potentially quite valuable. In addition, the brain's dramatic ability to utilize context at many different scales and incorporate information from many different modalities is a capability currently poorly realized by neural machine learning approaches yet offers considerable potential impact on scientific applications.
In 2016, Lewis Rhodes Labs (LRL) shipped the first commercially viable Neuromorphic Processing Unit (NPU), branded as a Neuromorphic Data Microscope (NDM). This product leverages architectural mechanisms derived from the sensory cortex of the human brain to efficiently implement pattern matching. LRL and Sandia National Labs have optimized this product for streaming analytics and demonstrated a 1,000x reduction in power per operation in an FPGA format. When reduced to an ASIC, the efficiency will improve to 1,000,000x. Additionally, the neuromorphic nature of the device gives it powerful computational attributes that are counterintuitive to those schooled in traditional von Neumann architectures. The Neuromorphic Data Microscope is the first of a broad class of brain-inspired, time-domain processors that will profoundly alter the functionality and economics of data processing.
Neural machine learning methods, such as deep neural networks (DNN), have achieved remarkable success in a number of complex data processing tasks. These methods have arguably had their strongest impact on tasks such as image and audio processing - data processing domains in which humans have long held clear advantages over conventional algorithms. In contrast to biological neural systems, which are capable of learning continuously, deep artificial networks have a limited ability for incorporating new information in an already trained network. As a result, methods for continuous learning are potentially highly impactful in enabling the application of deep networks to dynamic data sets. Here, inspired by the process of adult neurogenesis in the hippocampus, we explore the potential for adding new neurons to deep layers of artificial neural networks in order to facilitate their acquisition of novel information while preserving previously trained data representations. Our results on the MNIST handwritten digit dataset and the NIST SD 19 dataset, which includes lower and upper case letters and digits, demonstrate that neurogenesis is well suited for addressing the stability-plasticity dilemma that has long challenged adaptive machine learning algorithms.
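A minimal, hypothetical sketch of the neurogenesis idea for artificial networks (not the paper's training procedure): grow a hidden layer by appending new neurons with small random weights while leaving the previously trained weights in place, optionally restricting subsequent training to the new units.

```python
# Illustrative "neurogenesis" for an artificial hidden layer: append new-born
# neurons with small random weights and keep the mature weights untouched.
import numpy as np

rng = np.random.default_rng(5)

def add_neurons(W_in, W_out, n_new, scale=0.01):
    """W_in: (hidden, inputs), W_out: (outputs, hidden). Returns grown copies."""
    new_in = rng.normal(scale=scale, size=(n_new, W_in.shape[1]))
    new_out = rng.normal(scale=scale, size=(W_out.shape[0], n_new))
    W_in_grown = np.vstack([W_in, new_in])           # new hidden rows
    W_out_grown = np.hstack([W_out, new_out])        # matching output columns
    # A trainability mask could freeze mature weights during later fine-tuning.
    trainable = np.zeros(W_in_grown.shape[0], dtype=bool)
    trainable[-n_new:] = True
    return W_in_grown, W_out_grown, trainable

W_in, W_out = rng.normal(size=(32, 10)), rng.normal(size=(5, 32))
W_in2, W_out2, mask = add_neurons(W_in, W_out, n_new=8)
print(W_in2.shape, W_out2.shape, mask.sum())          # (40, 10) (5, 40) 8
```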
Information in neural networks is represented as weighted connections, or synapses, between neurons. This poses a problem as the primary computational bottleneck for neural networks is the vector-matrix multiply when inputs are multiplied by the neural network weights. Conventional processing architectures are not well suited for simulating neural networks, often requiring large amounts of energy and time. Additionally, synapses in biological neural networks are not binary connections, but exhibit a nonlinear response function as neurotransmitters are emitted and diffuse between neurons. Inspired by neuroscience principles, we present a digital neuromorphic architecture, the Spiking Temporal Processing Unit (STPU), capable of modeling arbitrary complex synaptic response functions without requiring additional hardware components. We consider the paradigm of spiking neurons with temporally coded information as opposed to non-spiking rate coded neurons used in most neural networks. In this paradigm we examine liquid state machines applied to speech recognition and show how a liquid state machine with temporal dynamics maps onto the STPU - demonstrating the flexibility and efficiency of the STPU for instantiating neural algorithms.
Considerable effort is currently being spent designing neuromorphic hardware for addressing challenging problems in a variety of pattern-matching applications. These neuromorphic systems offer low-power architectures with intrinsically parallel and simple spiking neuron processing elements. Unfortunately, these new hardware architectures have been largely developed without a clear justification for using spiking neurons to compute quantities for problems of interest. Specifically, the use of spiking for encoding information in time has not been explored theoretically with complexity analysis to examine the operating conditions under which neuromorphic computing provides a computational advantage (time, space, power, etc.). In this paper, we present and formally analyze the use of temporal coding in a neural-inspired algorithm for optimization-based computation in neural spiking architectures.
The dentate gyrus forms a critical link between the entorhinal cortex and CA3 by providing a sparse version of the signal. Concurrent with this increase in sparsity, a widely accepted theory suggests the dentate gyrus performs pattern separation: similar inputs yield decorrelated outputs. Although an active region of study and theory, few logically rigorous arguments detail the dentate gyrus's (DG) coding. We suggest a theoretically tractable, combinatorial model for this action. The model provides formal methods for a highly redundant, arbitrarily sparse, and decorrelated output signal. To explore the value of this model framework, we assess how suitable it is for two notable aspects of DG coding: how it can handle the highly structured grid cell representation in the input entorhinal cortex region and the presence of adult neurogenesis, which has been proposed to produce a heterogeneous code in the DG. We find that tailoring the model to grid cell input yields expansion parameters consistent with the literature. In addition, the heterogeneous coding reflects activity gradation observed experimentally. Finally, we connect this approach with more conventional binary threshold neural circuit models via a formal embedding.
The continuous integration of young neurons into the adult brain represents a novel form of structural plasticity and has inspired the creation of numerous computational models to understand the functional role of adult neurogenesis. These computational models consist of abstract models that focus on the utility of new neurons in simple neural networks and biologically based models constrained by anatomical data that explore the role of new neurons in specific neural circuits such as the hippocampus. Simulation results from both classes of models have suggested a number of theoretical roles for neurogenesis such as increasing the capacity to learn novel information, promoting temporal context encoding, and influencing pattern separation. In this review, we discuss strategies and findings of past computational modeling efforts, current challenges and limitations, and new computational approaches pertinent to modeling adult neurogenesis.
Biological neural networks continue to inspire new developments in algorithms and microelectronic hardware to solve challenging data processing and classification problems. Here, we survey the history of neural-inspired and neuromorphic computing in order to examine the complex and intertwined trajectories of the mathematical theory and hardware developed in this field. Early research focused on adapting existing hardware to emulate the pattern recognition capabilities of living organisms. Contributions from psychologists, mathematicians, engineers, neuroscientists, and other professions were crucial to maturing the field from narrowly-tailored demonstrations to more generalizable systems capable of addressing difficult problem classes such as object detection and speech recognition. Algorithms that leverage fundamental principles found in neuroscience such as hierarchical structure, temporal integration, and robustness to error have been developed, and some of these approaches are achieving world-leading performance on particular data classification tasks. In addition, novel microelectronic hardware is being developed to perform logic and to serve as memory in neuromorphic computing systems with optimized system integration and improved energy efficiency. Key to such advancements was the incorporation of new discoveries in neuroscience research, the transition away from strict structural replication and towards the functional replication of neural systems, and the use of mathematical theory frameworks to guide algorithm and hardware developments.
For decades, neural networks have shown promise for next-generation computing, and recent breakthroughs in machine learning techniques, such as deep neural networks, have provided state-of-the-art solutions for inference problems. However, these networks require thousands of training processes and are poorly suited for the precise computations required in scientific or similar arenas. The emergence of dedicated spiking neuromorphic hardware creates a powerful computational paradigm which can be leveraged towards these exact scientific or otherwise objective computing tasks. We forego any learning process and instead construct the network graph by hand. In turn, the networks produce guaranteed success often with easily computable complexity. We demonstrate a number of algorithms exemplifying concepts central to spiking networks including spike timing and synaptic delay. We also discuss the application of cross-correlation particle image velocimetry and provide two spiking algorithms; one uses time-division multiplexing, and the other runs in constant time.
The effort to develop larger-scale computing systems introduces a set of related challenges: Large machines are more difficult to synchronize. The sheer quantity of hardware introduces more opportunities for errors. New approaches to hardware, such as low-energy or neuromorphic devices, are not directly programmable by traditional methods. These three challenges may be addressed, at least for a subset of interesting problems, by a dynamical systems approach. The initial state of the system represents the problem, and the final state of the system represents the solution. By carefully controlling the attractive basin of the system, we can move it between these two points while tolerating errors, which appear as perturbations. Here we describe both conventional and neural computers as dynamical systems, and show how to construct algorithms with resilience to noise, using traditional numerical problems as a special case. This suggests a reduction from numerical problems to spiking neural hardware such as IBM's TrueNorth.
Bouchard, Kristofer E.; Aimone, James B.; Chun, Miyoung; Dean, Thomas; Denker, Michael; Diesmann, Markus; Donofrio, David D.; Frank, Loren M.; Kasthuri, Narayanan; Koch, Christof; Ruebel, Oliver; Simon, Horst D.; Sommer, Friedrich T.; Prabhat
Opportunities offered by new neuro-technologies are threatened by lack of coherent plans to analyze, manage, and understand the data. High-performance computing will allow exploratory analysis of massive datasets stored in standardized formats, hosted in open repositories, and integrated with simulations.
Through various means of structural and synaptic plasticity enabling online learning, neural networks are constantly reconfiguring their computational functionality. Neural information content is embodied within the configurations, representations, and computations of neural networks. To explore and quantify this information content, we have developed new metrics and computational paradigms. We have observed that conventional compression methods may help overcome some of the limiting factors of standard information-theoretic techniques employed in neuroscience and allow us to approximate information in neural data. To do so, we have used compressibility as a measure of complexity in order to estimate entropy and quantitatively assess the information content of neural ensembles. Using Lempel-Ziv compression, we are able to assess the rate of generation of new patterns across a neural ensemble's firing activity over time to approximate the information content encoded by a neural circuit. As a specific case study, we have been investigating the effect of neural mixed coding schemes due to hippocampal adult neurogenesis.
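A minimal sketch of the compressibility-based estimate, assuming a binarized spike raster and using zlib's LZ77-style compressor as a stand-in for Lempel-Ziv complexity: the compressed size per raster bit serves as a crude proxy for the ensemble's entropy rate, distinguishing correlated from independent firing.

```python
# Illustrative compressibility-based entropy estimate for spike rasters.
import zlib
import numpy as np

rng = np.random.default_rng(6)

def compression_entropy_estimate(raster: np.ndarray) -> float:
    """raster: (neurons, time_bins) array of 0/1 spikes.
    Returns compressed bits per raster bit as a crude entropy-rate proxy."""
    bits = np.packbits(raster.astype(np.uint8).ravel())
    compressed = zlib.compress(bits.tobytes(), level=9)
    return 8.0 * len(compressed) / raster.size

structured = np.tile(rng.random(200) < 0.1, (50, 1)).astype(int)   # correlated ensemble
independent = (rng.random((50, 200)) < 0.1).astype(int)            # independent ensemble
print("structured raster:", compression_entropy_estimate(structured))
print("independent raster:", compression_entropy_estimate(independent))
```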
Du, Huiyun; Deng, Wei; Aimone, James B.; Ge, Minyan; Parylak, Sarah; Walch, Keenan; Zhang, Wei; Cook, Jonathan; Song, Huina; Wang, Liping; Gage, Fred H.; Mu, Yangling
Rewarding experiences are often well remembered, and such memory formation is known to be dependent on dopamine modulation of the neural substrates engaged in learning and memory; however, it is unknown how and where in the brain dopamine signals bias episodic memory toward preceding rather than subsequent events. Here we found that photostimulation of channelrhodopsin-2-expressing dopaminergic fibers in the dentate gyrus induced a long-term depression of cortical inputs, diminished theta oscillations, and impaired subsequent contextual learning. Computational modeling based on this dopamine modulation indicated an asymmetric association of events occurring before and after reward in memory tasks. In subsequent behavioral experiments, preexposure to a natural reward suppressed hippocampus-dependent memory formation, with an effective time window consistent with the duration of dopamine-induced changes of dentate activity. Overall, our results suggest a mechanism by which dopamine enables the hippocampus to encode memory with reduced interference from subsequent experience.
Dieni, Cristina V.; Panichi, Roberto; Aimone, James B.; Kuo, Chay T.; Wadiche, Jacques I.; Overstreet-Wadiche, Linda
Persistent neurogenesis in the dentate gyrus produces immature neurons with high intrinsic excitability and low levels of inhibition that are predicted to be more broadly responsive to afferent activity than mature neurons. Mounting evidence suggests that these immature neurons are necessary for generating distinct neural representations of similar contexts, but it is unclear how broadly responsive neurons help distinguish between similar patterns of afferent activity. Here we show that stimulation of the entorhinal cortex in mouse brain slices paradoxically generates spiking of mature neurons in the absence of immature neuron spiking. Immature neurons with high intrinsic excitability fail to spike due to insufficient excitatory drive that results from low innervation rather than silent synapses or low release probability. Our results suggest that low synaptic connectivity prevents immature neurons from responding broadly to cortical activity, potentially enabling excitable immature neurons to contribute to sparse and orthogonal dentate representations.
The restriction of adult neurogenesis to only a handful of regions of the brain is suggestive of some shared requirement for this dramatic form of structural plasticity. However, a common driver across neurogenic regions has not yet been identified. Computational studies have been invaluable in providing insight into the functional role of new neurons; however, researchers have typically focused on specific scales ranging from abstract neural networks to specific neural systems, most commonly the dentate gyrus area of the hippocampus. These studies have yielded a number of diverse potential functions for new neurons, ranging from an impact on pattern separation to the incorporation of time into episodic memories to enabling the forgetting of old information. This review will summarize these past computational efforts and discuss whether these proposed theoretical functions can be unified into a common rationale for why neurogenesis is required in these unique neural circuits.
The exponential increase in data over the last decade presents a significant challenge to analytics efforts that seek to process and interpret such data for various applications. Neural-inspired computing approaches are being developed in order to leverage the computational properties of the analog, low-power data processing observed in biological systems. Analog resistive memory crossbars can perform a parallel read or a vector-matrix multiplication as well as a parallel write or a rank-1 update with high computational efficiency. For an N × N crossbar, these two kernels can be O(N) more energy efficient than a conventional digital memory-based architecture. If the read operation is noise limited, the energy to read a column can be independent of the crossbar size (O(1)). These two kernels form the basis of many neuromorphic algorithms such as image, text, and speech recognition. For instance, these kernels can be applied to a neural sparse coding algorithm to give an O(N) reduction in energy for the entire algorithm when run with finite precision. Sparse coding is a rich problem with a host of applications including computer vision, object tracking, and more generally unsupervised learning.
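A plain NumPy sketch of the two kernels described above: a parallel read is mathematically a vector-matrix multiply and a parallel write is a rank-1 update; on an analog crossbar each would be a single low-energy array operation rather than an O(N²) loop.

```python
# Illustrative crossbar kernels in NumPy (the crossbar executes each in one parallel step).
import numpy as np

rng = np.random.default_rng(7)
N = 256
G = rng.random((N, N))              # conductance matrix stored in the crossbar

x = rng.random(N)
y = G @ x                           # "parallel read": vector-matrix multiply

eta = 0.01
a, b = rng.random(N), rng.random(N)
G += eta * np.outer(a, b)           # "parallel write": rank-1 update

# Sparse-coding-style training loops are dominated by exactly these two kernels,
# which is why mapping them onto the crossbar reduces energy for the whole algorithm.
print(y.shape, G.shape)
```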
Final report for Cognitive Computing for Security LDRD 165613. It reports on the development of a hybrid general-purpose/neuromorphic computer architecture, with an emphasis on potential implementation with memristors.
As transistors start to approach fundamental limits and Moore's law slows down, new devices and architectures are needed to enable continued performance gains. New approaches based on RRAM (resistive random access memory) or memristor crossbars can enable the processing of large amounts of data [1, 2]. One of the most promising applications for RRAM crossbars is brain-inspired or neuromorphic computing [3, 4].
The field of machine learning strives to develop algorithms that, through learning, lead to generalization; that is, the ability of a machine to perform a task that it was not explicitly trained for. An added challenge arises when the problem domain is dynamic or non-stationary, with the data distributions or categorizations changing over time. This phenomenon is known as concept drift. Game-theoretic algorithms are often iterative by nature, consisting of repeated game play rather than a single interaction. Effectively, rather than requiring extensive retraining to update a learning model, a game-theoretic approach can adjust strategies through repeated play, offering a novel way to address concept drift. In this paper we present a variant of our Support Vector Machine (SVM) Game classifier which may be used in an adaptive manner with repeated play to address concept drift, and show results of applying this algorithm to synthetic as well as real data.
Some next generation computing devices may consist of resistive memory arranged as a crossbar. Currently, the dominant approach is to use crossbars as the weight matrix of a neural network, and to use learning algorithms that require small incremental weight updates, such as gradient descent (for example Backpropagation). Using real-world measurements, we demonstrate that resistive memory devices are unlikely to support such learning methods. As an alternative, we offer a random search algorithm tailored to the measured characteristics of our devices.
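A minimal sketch of the random-search alternative (illustrative, not the measured-device algorithm): propose a coarse random perturbation of one weight at a time and keep it only if the loss improves, so the devices are never asked to make small, precise incremental updates.

```python
# Illustrative random search for crossbar-hosted weights on a toy regression task.
import numpy as np

rng = np.random.default_rng(8)
X = rng.normal(size=(200, 8))
true_w = rng.normal(size=8)
y = X @ true_w

def loss(w):
    return np.mean((X @ w - y) ** 2)

w = rng.normal(size=8)              # weights held in the resistive array
best = loss(w)
for _ in range(5000):
    candidate = w.copy()
    idx = rng.integers(len(w))
    candidate[idx] += rng.normal(scale=0.1)   # coarse, device-friendly change
    new = loss(candidate)
    if new < best:                            # accept only improvements
        w, best = candidate, new
print(f"final loss: {best:.4f}")
```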
Using High Performance Computing to Examine the Processes of Neurogenesis Underlying Pattern Separation/Completion of Episodic Information - Sandia researchers developed novel methods and metrics for studying the computational function of neurogenesis, thus generating substantial impact to the neuroscience and neural computing communities. This work could benefit applications in machine learning and other analysis activities.
Recent cyber security events have demonstrated the need for algorithms that adapt to the rapidly evolving threat landscape of complex network systems. In particular, human analysts often fail to identify data exfiltration when it is encrypted or disguised as innocuous data. Signature-based approaches for identifying data types are easily fooled and analysts can only investigate a small fraction of network events. However, neural networks can learn to identify subtle patterns in a suitably chosen input space. To this end, we have developed a signal processing approach for classifying data files which readily adapts to new data formats. We evaluate the performance for three input spaces consisting of the power spectral density, byte probability distribution and sliding-window entropy of the byte sequence in a file. By combining all three, we trained a deep neural network to discriminate amongst nine common data types found on the Internet with 97.4% accuracy.
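A minimal sketch of the three input spaces, with feature definitions assumed from the description above (window size and FFT length are illustrative choices): the power spectral density of the byte sequence, the byte probability distribution, and sliding-window entropy, which would be concatenated and fed to a deep classifier.

```python
# Illustrative feature extraction for file-type classification from raw bytes.
import numpy as np

def file_features(data: bytes, window=256):
    x = np.frombuffer(data, dtype=np.uint8).astype(np.float64)

    # 1. Power spectral density of the byte sequence (log-scaled, fixed length).
    psd = np.log1p(np.abs(np.fft.rfft(x - x.mean(), n=4096)) ** 2)

    # 2. Byte probability distribution (256-bin histogram).
    hist = np.bincount(x.astype(np.uint8), minlength=256) / len(x)

    # 3. Sliding-window entropy of the byte sequence.
    entropies = []
    for i in range(0, len(x) - window + 1, window):
        p = np.bincount(x[i:i + window].astype(np.uint8), minlength=256) / window
        p = p[p > 0]
        entropies.append(-(p * np.log2(p)).sum())
    return psd, hist, np.array(entropies)

rng = np.random.default_rng(9)
psd, hist, ent = file_features(rng.integers(0, 256, 4096, dtype=np.uint8).tobytes())
print(psd.shape, hist.shape, ent.shape)
```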
Adult neurogenesis in the hippocampus region of the brain is a neurobiological process that is believed to contribute to the brain's advanced abilities in complex pattern recognition and cognition. Here, we describe how realistic scale simulations of the neurogenesis process can offer both a unique perspective on the biological relevance of this process and confer computational insights that are suggestive of novel machine learning techniques. First, supercomputer based scaling studies of the neurogenesis process demonstrate how a small fraction of adult-born neurons have a uniquely larger impact in biologically realistic scaled networks. Second, we describe a novel technical approach by which the information content of ensembles of neurons can be estimated. Finally, we illustrate several examples of broader algorithmic impact of neurogenesis, including both extending existing machine learning approaches and novel approaches for intelligent sensing.
Adult neurogenesis in the hippocampus is a notable process due not only to its uniqueness and potential impact on cognition but also to its localized vertical integration of different scales of neuroscience, ranging from molecular and cellular biology to behavior. Our review summarizes the recent research regarding the process of adult neurogenesis from these different perspectives, with particular emphasis on the differentiation and development of new neurons, the regulation of the process by extrinsic and intrinsic factors, and their ultimate function in the hippocampus circuit. Arising from a local neural stem cell population, new neurons progress through several stages of maturation, ultimately integrating into the adult dentate gyrus network. Furthermore, the increased appreciation of the full neurogenesis process, from genes and cells to behavior and cognition, makes neurogenesis both a unique case study for how scales in neuroscience can link together and suggests neurogenesis as a potential target for therapeutic intervention for a number of disorders.
Over the last three years the Neurons to Algorithms (N2A) LDRD project team has built infrastructure to discover computational structures in the brain. This consists of a modeling language, a tool that enables model development and simulation in that language, and initial connections with the Neuroinformatics community, a group working toward similar goals. The approach of N2A is to express large complex systems like the brain as populations of discrete part types that have specific structural relationships with each other, along with internal and structural dynamics. Such an evolving mathematical system may be able to capture the essence of neural processing, and ultimately of thought itself. This final report is a cover for the actual products of the project: the N2A Language Specification, the N2A Application, and a journal paper summarizing our methods.
Mathematical modeling of anatomically constrained neural networks has provided significant insights regarding the response of networks to neurological disorders or injury. A logical extension of these models is to incorporate treatment regimens to investigate network responses to intervention. The addition of nascent neurons from stem cell precursors into damaged or diseased tissue has been used as a successful therapeutic tool in recent decades. Interestingly, models have been developed to examine the incorporation of new neurons into intact adult structures, particularly the dentate granule neurons of the hippocampus. These studies suggest that the unique properties of maturing neurons can impact circuit behavior in unanticipated ways. In this perspective, we review the current status of models used to examine damaged CNS structures, with particular focus on cortical damage due to stroke. Secondly, we suggest that computational modeling of cell replacement therapies can be made feasible by implementing approaches taken by current models of adult neurogenesis. The development of these models is critical for generating hypotheses regarding transplant therapies and improving outcomes by tailoring transplants to desired effects.
Sandia's parallel circuit simulator, Xyce, can address large-scale neuron simulations in a new way, extending the range within which one can perform high-fidelity, multi-compartment neuron simulations. This report documents the implementation of neuron devices in Xyce, their use in simulation, and the analysis of neuron systems.