Publications

Results 51–98 of 98

Spiking network algorithms for scientific computing

2016 IEEE International Conference on Rebooting Computing, ICRC 2016 - Conference Proceedings

Severa, William M.; Parekh, Ojas D.; Carlson, Kristofor D.; James, Conrad D.; Aimone, James B.

For decades, neural networks have shown promise for next-generation computing, and recent breakthroughs in machine learning techniques, such as deep neural networks, have provided state-of-the-art solutions for inference problems. However, these networks require thousands of training iterations and are poorly suited for the precise computations required in scientific or similar arenas. The emergence of dedicated spiking neuromorphic hardware creates a powerful computational paradigm that can be leveraged toward exact scientific or otherwise objective computing tasks. We forgo any learning process and instead construct the network graph by hand. In turn, the networks offer guaranteed correctness, often with easily computable complexity. We demonstrate a number of algorithms exemplifying concepts central to spiking networks, including spike timing and synaptic delay. We also discuss the application to cross-correlation particle image velocimetry and provide two spiking algorithms: one uses time-division multiplexing, and the other runs in constant time.
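As an illustrative aside (not taken from the paper; every name and parameter below is an assumption), the following minimal Python sketch shows two of the concepts the abstract highlights, spike timing and synaptic delay, in a hand-constructed integrate-and-fire network: a source neuron's spike reaches a target neuron after a fixed synaptic delay, so the target's spike time carries information.

# Minimal sketch, assuming a discrete-time integrate-and-fire model;
# not the paper's algorithms. All parameters are illustrative.

THRESHOLD = 1.0

class Neuron:
    def __init__(self):
        self.potential = 0.0

    def receive(self, weight):
        self.potential += weight

    def step(self):
        # Fire and reset when the threshold is crossed.
        if self.potential >= THRESHOLD:
            self.potential = 0.0
            return True
        return False

def simulate(delay=3, weight=1.0, steps=10):
    target = Neuron()
    # The source neuron spikes at t = 0; its spike arrives `delay` steps later.
    arrivals = {delay: weight}
    spike_times = []
    for t in range(steps):
        if t in arrivals:
            target.receive(arrivals[t])
        if target.step():
            spike_times.append(t)
    return spike_times

print(simulate())  # [3]: the target's spike time encodes the synaptic delay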


Energy scaling advantages of resistive memory crossbar based computation and its application to sparse coding

Frontiers in Neuroscience

Agarwal, Sapan A.; Quach, Tu-Thach Q.; Parekh, Ojas D.; Hsia, Alexander H.; DeBenedictis, Erik; James, Conrad D.; Marinella, Matthew J.; Aimone, James B.

The exponential increase in data over the last decade presents a significant challenge to analytics efforts that seek to process and interpret such data for various applications. Neural-inspired computing approaches are being developed in order to leverage the computational properties of the analog, low-power data processing observed in biological systems. Analog resistive memory crossbars can perform a parallel read (a vector-matrix multiplication) as well as a parallel write (a rank-1 update) with high computational efficiency. For an N × N crossbar, these two kernels can be O(N) more energy efficient than a conventional digital memory-based architecture. If the read operation is noise limited, the energy to read a column can be independent of the crossbar size (O(1)). These two kernels form the basis of many neuromorphic algorithms such as image, text, and speech recognition. For instance, these kernels can be applied to a neural sparse coding algorithm to give an O(N) reduction in energy for the entire algorithm when run with finite precision. Sparse coding is a rich problem with a host of applications including computer vision, object tracking, and, more generally, unsupervised learning.
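As a hedged illustration (not from the paper), the two crossbar kernels named above correspond to familiar dense linear algebra; a crossbar computes them in the analog domain in a single step, but a numpy sketch conveys their mathematical content. Shapes and variable names are assumptions.

import numpy as np

N = 4
G = np.random.rand(N, N)  # conductance matrix stored in the N x N crossbar
v = np.random.rand(N)     # input voltages applied to the rows

# Parallel read: one vector-matrix multiplication across the whole array,
# i.e., O(N^2) multiply-accumulates performed in a single analog operation.
read_out = v @ G

# Parallel write: a rank-1 outer-product update of every conductance at once,
# e.g., a Hebbian- or gradient-style weight update.
x = np.random.rand(N)
y = np.random.rand(N)
G += np.outer(x, y)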


Quantum Graph Analysis

Maunz, Peter L.; Sterk, Jonathan D.; Lobser, Daniel L.; Parekh, Ojas D.; Ryan-Anderson, Ciaran R.

In recent years, advanced network analytics have become increasingly important to national security, with applications ranging from cyber security to detection and disruption of terrorist networks. While classical computing solutions have received considerable investment, the development of quantum algorithms to address problems such as data mining of attributed relational graphs is a largely unexplored space. Recent theoretical work has shown that quantum algorithms for graph analysis can be more efficient than their classical counterparts. Here, we have implemented a trapped-ion-based two-qubit quantum information processor to address these goals. Building on Sandia's microfabricated silicon surface ion traps, we have designed, realized, and characterized a quantum information processor using the hyperfine qubits encoded in two ¹⁷¹Yb⁺ ions. We have implemented single-qubit gates using resonant microwave radiation and have employed gate set tomography (GST) to characterize the quantum process. For the first time, we were able to prove that the quantum process surpasses the fault-tolerance thresholds of some quantum codes by demonstrating a diamond norm distance of less than 1.9 × 10⁻⁴. We used Raman transitions in order to manipulate the trapped ions' motion and realize two-qubit gates. We characterized the implemented motion-sensitive and motion-insensitive single-qubit processes and achieved a maximal process infidelity of 6.5 × 10⁻⁵. We implemented the two-qubit gate proposed by Mølmer and Sørensen and achieved a fidelity of more than 97.7%.
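For reference (a standard definition, not quoted from the report), the diamond norm distance between an implemented process $\mathcal{E}$ and the ideal gate $\mathcal{U}$ is

\[
d_\diamond(\mathcal{E},\mathcal{U}) \;=\; \tfrac{1}{2}\,\bigl\|\mathcal{E}-\mathcal{U}\bigr\|_\diamond
\;=\; \tfrac{1}{2}\,\sup_{\rho}\,\bigl\|(\mathcal{E}\otimes\mathrm{id})(\rho)-(\mathcal{U}\otimes\mathrm{id})(\rho)\bigr\|_1,
\]

where the supremum runs over density operators on the system together with an equal-dimension ancilla; a value below a code's fault-tolerance threshold indicates the gate is accurate enough for fault-tolerant use.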


The energy scaling advantages of RRAM crossbars

2015 4th Berkeley Symposium on Energy Efficient Electronic Systems, E3S 2015 - Proceedings

Agarwal, Sapan A.; Parekh, Ojas D.; Quach, Tu-Thach Q.; James, Conrad D.; Aimone, James B.; Marinella, Matthew J.

As transistors start to approach fundamental limits and Moore's law slows down, new devices and architectures are needed to enable continued performance gains. New approaches based on RRAM (resistive random access memory) or memristor crossbars can enable the processing of large amounts of data [1, 2]. One of the most promising applications for RRAM crossbars is brain-inspired, or neuromorphic, computing [3, 4].


Benchmarking Adiabatic Quantum Optimization for Complex Network Analysis

Parekh, Ojas D.; Wendt, Jeremy D.; Shulenburger, Luke N.; Landahl, Andrew J.; Moussa, Jonathan E.; Aidun, John B.

We lay the foundation for a benchmarking methodology for assessing current and future quantum computers. We pose and begin addressing fundamental questions about how to fairly compare computational devices at vastly different stages of technological maturity. We critically evaluate and offer our own contributions to current quantum benchmarking efforts, in particular those involving adiabatic quantum computation and the Adiabatic Quantum Optimizers produced by D-Wave Systems, Inc. We find that the performance of D-Wave's Adiabatic Quantum Optimizers scales roughly on par with classical approaches for some hard combinatorial optimization problems; however, architectural limitations of D-Wave devices present a significant hurdle in evaluating real-world applications. In addition to identifying and isolating such limitations, we develop algorithmic tools for circumventing these limitations on future D-Wave devices, assuming they continue to grow and mature at an exponential rate for the next several years.


On Bipartite Graphs, Trees, and Their Partial Vertex Covers

Caskurlu, Bugra C.; Mkrtchyan, Vahan M.; Parekh, Ojas D.; Subramani, K.S.

Graphs can be used to model risk management in various systems. In particular, Caskurlu et al. [7] have considered a system with threats, vulnerabilities, and assets, which essentially forms a tripartite graph. The goal in this model is to reduce the risk in the system below a predefined risk threshold level. One can either restrict the permissions of the users or encapsulate the system assets. These two strategies correspond to deleting a minimum number of elements corresponding to vulnerabilities and assets, such that the flow between threats and assets is reduced below the predefined threshold level. It can be shown that the main goal in this risk management system can be formulated as a Partial Vertex Cover problem on bipartite graphs. It is well known that the Vertex Cover problem is in P on bipartite graphs; however, the computational complexity of the Partial Vertex Cover problem on bipartite graphs had remained open. In this paper, we establish that the Partial Vertex Cover problem is NP-hard on bipartite graphs, a result also recently obtained independently [N. Apollonio and B. Simeone, Discrete Appl. Math., 165 (2014), pp. 37–48; G. Joret and A. Vetta, preprint, arXiv:1211.4853v1 [cs.DS], 2012]. We then identify interesting special cases of bipartite graphs for which the Partial Vertex Cover problem, the closely related Budgeted Maximum Coverage problem, and their weighted extensions can be solved in polynomial time. We also present an 8/9-approximation algorithm for the Budgeted Maximum Coverage problem on bipartite graphs. This matches and resolves the integrality gap of the natural LP relaxation of the problem and improves upon a recent 4/5-approximation.
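As a worked aside (the standard formulation, not quoted from the paper), Partial Vertex Cover on a graph $G=(V,E)$ with vertex weights $w_v$ asks for a minimum-weight vertex set covering at least $k$ edges, and its natural LP relaxation is

\[
\begin{aligned}
\text{minimize}\quad & \sum_{v\in V} w_v\,x_v \\
\text{subject to}\quad & x_u + x_v \ \ge\ y_e && e=\{u,v\}\in E,\\
& \sum_{e\in E} y_e \ \ge\ k,\\
& 0 \le x_v,\,y_e \le 1.
\end{aligned}
\]

The NP-hardness result above says that, in contrast to full Vertex Cover, optimizing this objective exactly remains hard even when $G$ is bipartite.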


Generalized hypergraph matching via iterated packing and local ratio

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Parekh, Ojas D.; Pritchard, David

In k-hypergraph matching, we are given a collection of sets of size at most k, each with an associated weight, and we seek a maximum-weight subcollection whose sets are pairwise disjoint. More generally, in k-hypergraph b-matching, instead of disjointness we require that every element appears in at most b sets of the subcollection. Our main result is a linear-programming-based (k - 1 + 1/k)-approximation algorithm for k-hypergraph b-matching. This settles the integrality gap when k is one more than a prime power, since it matches a previously known lower bound. When the hypergraph is bipartite, we are able to improve the approximation ratio to k - 1, which is also best possible relative to the natural LP. These results are obtained using a more careful application of the iterated packing method. Using the bipartite algorithmic integrality gap upper bound, we show that for the family of combinatorial auctions in which anyone can win at most t items, there is a truthful-in-expectation polynomial-time auction that t-approximately maximizes social welfare. We also show that our results directly imply new approximations for a generalization of the recently introduced bounded-color matching problem. We also consider the generalization of b-matching to demand matching, where edges have nonuniform demand values. The best known approximation algorithm for this problem has ratio 2k on k-hypergraphs. We give a new algorithm, based on local ratio, that obtains the same approximation ratio in a much simpler way.
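As a hedged illustration (the standard formulation, not quoted from the paper), the natural LP relaxation of k-hypergraph b-matching over a weighted set system $\mathcal{S}$ is

\[
\begin{aligned}
\text{maximize}\quad & \sum_{S\in\mathcal{S}} w_S\,x_S \\
\text{subject to}\quad & \sum_{S\ni e} x_S \ \le\ b && \text{for every element } e,\\
& 0 \le x_S \le 1,
\end{aligned}
\]

and the $(k-1+1/k)$-approximation bounds the gap between this LP's optimum and the best integral solution.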


Geometric hitting set for segments of few orientations

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Fekete, Sándor P.; Huang, Kan; Mitchell, Joseph S.B.; Parekh, Ojas D.; Phillips, Cynthia A.

We study several natural instances of the geometric hitting set problem for input consisting of sets of line segments (and rays, lines) having a small number of distinct slopes. These problems model path monitoring (e.g., on road networks) using the fewest sensors (the “hitting points”). We give approximation algorithms for cases including (i) lines of 3 slopes in the plane, (ii) vertical lines and horizontal segments, and (iii) pairs of horizontal/vertical segments. We give hardness and hardness-of-approximation results for these problems. We prove that the hitting set problem for vertical lines and horizontal rays is polynomially solvable.
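For context (a standard formulation, not from the paper), given input segments $\mathcal{S}$ and a finite set $P$ of candidate hitting points, the hitting set problem is the integer program

\[
\begin{aligned}
\text{minimize}\quad & \sum_{p\in P} x_p \\
\text{subject to}\quad & \sum_{p\,\in\, s} x_p \ \ge\ 1 && s\in\mathcal{S},\\
& x_p \in \{0,1\},
\end{aligned}
\]

where $p\in s$ means point $p$ lies on segment $s$; restricting the segments to a few distinct orientations is what makes the special cases above tractable or well-approximable.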


A computational framework for ontologically storing and analyzing very large overhead image sets

Proceedings of the 3rd ACM SIGSPATIAL International Workshop on Analytics for Big Geospatial Data, BigSpatial 2014

Brost, Randolph B.; Rintoul, Mark D.; McLendon, William C.; Strip, David R.; Parekh, Ojas D.; Woodbridge, Diane W.

We describe a computational approach to remote sensing image analysis that addresses many of the classic problems associated with storage, search, and query. This process starts by automatically annotating the fundamental objects in the image data set that will be used as a basis for an ontology, including both the objects (such as building, road, water, etc.) and their spatial and temporal relationships (is within 100 m of, is surrounded by, has changed in the past year, etc.). Data sets that can include multiple time slices of the same area are then processed using automated tools that reduce the images to the objects and relationships defined in an ontology based on the primitive objects, and this representation is stored in a geospatial-temporal semantic graph. Image searches are then defined in terms of the ontology (e.g., find a building greater than 10³ m² that borders a body of water), and the graph is searched for such relationships. This approach also enables the incorporation of non-image data that is related to the ontology. We demonstrate through an initial implementation of the entire system on large data sets (10⁹–10¹¹ pixels) that this system is robust against variations in different image collection parameters, provides a way for analysts to query data sets in a more natural way, and can greatly reduce the memory footprint of the search.
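As an illustrative sketch (not the paper's actual system; the graph schema, labels, and attribute names below are invented), a query like "find a building greater than 10³ m² that borders a body of water" over a toy semantic graph might look like this:

import networkx as nx

# Toy geospatial semantic graph: nodes are annotated image objects,
# edges are ontological relationships extracted from imagery.
g = nx.Graph()
g.add_node("b1", kind="building", area_m2=1500)
g.add_node("b2", kind="building", area_m2=400)
g.add_node("w1", kind="water")
g.add_edge("b1", "w1", rel="borders")

# Query: buildings with area > 1000 m^2 that border a body of water.
matches = [
    n for n, d in g.nodes(data=True)
    if d.get("kind") == "building" and d.get("area_m2", 0) > 1000
    and any(
        g.nodes[m].get("kind") == "water" and g.edges[n, m].get("rel") == "borders"
        for m in g.neighbors(n)
    )
]
print(matches)  # ['b1']

The point of the graph representation is exactly this: the search touches only objects and relationships, never the original pixels.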


Encoding and analyzing aerial imagery using geospatial semantic graphs

Rintoul, Mark D.; Watson, Jean-Paul W.; McLendon, William C.; Parekh, Ojas D.

While collection capabilities have yielded an ever-increasing volume of aerial imagery, analytic techniques for identifying patterns in and extracting relevant information from this data have seriously lagged. The vast majority of imagery is never examined, due to a combination of the limited bandwidth of human analysts and the limitations of existing analysis tools. In this report, we describe an alternative, novel approach to both encoding and analyzing aerial imagery, using the concept of a geospatial semantic graph. The advantages of our approach are twofold. First, intuitive templates can be easily specified in terms of the domain language in which an analyst converses. These templates can be used to automatically and efficiently search large graph databases for specific patterns of interest. Second, unsupervised machine learning techniques can be applied to automatically identify patterns in the graph databases, exposing recurring motifs in imagery. We illustrate our approach using real-world data for Anne Arundel County, Maryland, and compare the performance of our approach to that of an expert human analyst.


Evaluating Near-Term Adiabatic Quantum Computing

Parekh, Ojas D.; Aidun, John B.; Dubicka, Irene D.; Landahl, Andrew J.; Shulenburger, Luke N.; Tigges, Chris P.; Wendt, Jeremy D.

This report summarizes the first year’s effort on the Enceladus project, under which Sandia was asked to evaluate the potential advantages of adiabatic quantum computing for analyzing large data sets in the near future, 5 to 10 years from now. We were not specifically evaluating the machine being sold by D-Wave Systems, Inc.; we were asked to anticipate what future adiabatic quantum computers might be able to achieve. While realizing that the greatest potential anticipated from quantum computation is still far into the future, a special-purpose quantum computing capability, Adiabatic Quantum Optimization (AQO), is under active development and is maturing relatively rapidly; indeed, D-Wave Systems, Inc. already offers an AQO device based on superconducting flux qubits. The AQO architecture solves a particular class of problem, namely unconstrained quadratic Boolean optimization, and this class includes many interesting and important instances. Because of this, further investigation is warranted into the applicability of this problem class to the challenges of analyzing big data sets and into the effectiveness of AQO devices at performing specific analyses on big data. It is also of interest to consider the potential effectiveness of anticipated special-purpose adiabatic quantum computers (AQCs), in general, for accelerating the analysis of big data sets. The objective of the present investigation is an evaluation of the potential of AQC to benefit analysis of big data problems in the next five to ten years, with our main focus on AQO because of its relative maturity. We are not specifically assessing the efficacy of the D-Wave computing systems, though we do hope to perform some experimental calculations on that device in the sequel to this project, at least to provide some data to compare with our theoretical estimates.
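For concreteness (a standard definition, not quoted from the report), the unconstrained quadratic Boolean optimization problem that the AQO architecture solves is

\[
\min_{x\in\{0,1\}^n} \; x^{\mathsf T} Q\, x \;=\; \min_{x\in\{0,1\}^n} \sum_{i\le j} Q_{ij}\, x_i x_j
\]

for a real matrix $Q$; many graph problems (max-cut, maximum independent set, partitioning) can be cast in this form, which is why the class includes so many interesting and important instances.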


Optimization of large-scale heterogeneous system-of-systems models

Gray, Genetha A.; Hart, William E.; Hough, Patricia D.; Parekh, Ojas D.; Phillips, Cynthia A.; Siirola, John D.; Swiler, Laura P.; Watson, Jean-Paul W.

Decision makers increasingly rely on large-scale computational models to simulate and analyze complex man-made systems. For example, computational models of national infrastructures are being used to inform government policy, assess economic and national security risks, evaluate infrastructure interdependencies, and plan for the growth and evolution of infrastructure capabilities. A major challenge for decision makers is the analysis of national-scale models that are composed of interacting systems: effective integration of system models is difficult, there are many parameters to analyze in these systems, and fundamental modeling uncertainties complicate analysis. This project is developing optimization methods to effectively represent and analyze large-scale heterogeneous system-of-systems (HSoS) models, which have emerged as a promising approach for describing such complex man-made systems. These optimization methods enable decision makers to predict future system behavior, manage system risk, assess tradeoffs between system criteria, and identify critical modeling uncertainties.


Iterative packing for demand matching and sparse packing

Parekh, Ojas D.

The main result we will present is a 2k-approximation algorithm for the following 'k-hypergraph demand matching' problem: given a set system with sets of size at most k, where sets have profits and demands and vertices have capacities, find a max-profit subsystem whose demands do not exceed the capacities. The main tool is an iterative way to explicitly build a decomposition of the fractional optimum as 2k times a convex combination of integral solutions. If time permits, we'll also show how the approach can be extended to a 3-approximation for 2-column sparse packing. The second result is tight w.r.t. the integrality gap, and the first is near-tight, as a gap lower bound of 2(k - 1 + 1/k) is known.
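As a hedged sketch (the standard formulation, not quoted from the abstract), k-hypergraph demand matching can be written as the packing program

\[
\begin{aligned}
\text{maximize}\quad & \sum_{S} p_S\,x_S \\
\text{subject to}\quad & \sum_{S\ni v} d_S\,x_S \ \le\ c_v && \text{for every vertex } v,\\
& x_S \in \{0,1\},
\end{aligned}
\]

where $p_S$ and $d_S$ are the profit and demand of set $S$ and $c_v$ is the capacity of vertex $v$; iterative packing builds the fractional optimum as 2k times a convex combination of integral solutions, which yields the 2k-approximation.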
