Publications

Results 51–100 of 101

Rebooting Computing and Low-Power Image Recognition Challenge

2015 IEEE/ACM International Conference on Computer-Aided Design, ICCAD 2015

Lu, Yung H.; Kadin, Alan M.; Berg, Alexander C.; Conte, Thomas M.; DeBenedictis, Erik; Garg, Rachit; Gingade, Ganesh; Hoang, Bichlien; Huang, Yongzhen; Li, Boxun; Liu, Jingyu; Liu, Wei; Mao, Huizi; Peng, Junran; Tang, Tianqi; Track, Elie K.; Wang, Jingqiu; Wang, Tao; Wang, Yu; Yao, Jun

Rebooting Computing (RC) is an effort within the IEEE to rethink future computers. RC was started in 2012 by its co-chairs, Elie Track (IEEE Council on Superconductivity) and Tom Conte (Computer Society). RC takes a holistic approach, considering revolutionary as well as evolutionary solutions needed to advance computer technologies. Three summits were held in 2013 and 2014, discussing technologies ranging from emerging devices to user interfaces, from security to energy efficiency, and from neuromorphic to reversible computing. The first part of this paper introduces RC to the design automation community and solicits revolutionary ideas from the community on the directions of future computer research. Energy efficiency is identified as one of the most important challenges for future computer technologies; its importance spans from miniature embedded sensors and wearable computers to individual desktops and data centers. To gauge the state of the art, the RC Committee organized the first Low Power Image Recognition Challenge (LPIRC). Each image contains one or more objects drawn from 200 categories, and a contestant has to provide a working system that recognizes the objects and reports their bounding boxes. The second part of this paper explains LPIRC and the solutions of the top two winners.
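
For concreteness, the sketch below shows the kind of per-object record such a contestant system would report for each image; the field names and scoring remark are illustrative assumptions, not the official LPIRC submission format.

    from dataclasses import dataclass

    @dataclass
    class Detection:
        """One detected object: a category index (1-200) plus its bounding box."""
        image_id: int
        category: int            # one of the 200 object categories
        x_min: float             # bounding box corners, in pixels
        y_min: float
        x_max: float
        y_max: float
        confidence: float        # detector score, used for ranking

    # A contestant system returns a list of Detection records per image; the
    # challenge then weighs recognition accuracy against the energy the system
    # consumed while producing them.
    example = Detection(image_id=42, category=17,
                        x_min=30.0, y_min=52.5, x_max=140.0, y_max=210.0,
                        confidence=0.87)
    print(example)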

More Details

Energy scaling advantages of resistive memory crossbar based computation and its application to sparse coding

Frontiers in Neuroscience

Agarwal, Sapan A.; Quach, Tu-Thach Q.; Parekh, Ojas D.; Hsia, Alexander H.; DeBenedictis, Erik; James, Conrad D.; Marinella, Matthew J.; Aimone, James B.

The exponential increase in data over the last decade presents a significant challenge to analytics efforts that seek to process and interpret such data for various applications. Neural-inspired computing approaches are being developed in order to leverage the computational properties of the analog, low-power data processing observed in biological systems. Analog resistive memory crossbars can perform a parallel read or a vector-matrix multiplication as well as a parallel write or a rank-1 update with high computational efficiency. For an N × N crossbar, these two kernels can be O(N) more energy efficient than a conventional digital memory-based architecture. If the read operation is noise limited, the energy to read a column can be independent of the crossbar size (O(1)). These two kernels form the basis of many neuromorphic algorithms such as image, text, and speech recognition. For instance, these kernels can be applied to a neural sparse coding algorithm to give an O(N) reduction in energy for the entire algorithm when run with finite precision. Sparse coding is a rich problem with a host of applications including computer vision, object tracking, and more generally unsupervised learning.
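
As a rough illustration of the two kernels the abstract names, the sketch below emulates an N × N crossbar in NumPy: a parallel read is a vector-matrix multiplication against the conductance matrix, and a parallel write is a rank-1 (outer-product) update. The energy bookkeeping at the end is purely illustrative; only the O(N) versus O(N²) scaling argument comes from the paper.

    import numpy as np

    N = 256
    G = np.random.rand(N, N)        # conductances stored at the crosspoints

    def parallel_read(G, v):
        """Vector-matrix multiply: drive row voltages v, read column currents."""
        return v @ G                # N^2 multiply-accumulates in one analog step

    def rank1_update(G, u, w, lr=0.01):
        """Rank-1 update: all N^2 weights adjusted by one outer product."""
        G += lr * np.outer(u, w)
        return G

    v = np.random.rand(N)
    currents = parallel_read(G, v)
    G = rank1_update(G, v, currents / np.linalg.norm(currents))

    # Illustrative energy accounting: a digital architecture moves each of the
    # N^2 weights across a memory bus, while the crossbar activates N row
    # drivers and N column sense circuits in place -- the O(N) advantage.
    digital_energy = N * N          # proportional to weights moved
    crossbar_energy = 2 * N         # proportional to peripheral circuits used
    print(f"energy ratio ~ {digital_energy / crossbar_energy:.0f}x for N={N}")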

More Details

Millivolt switches will support better energy-reliability tradeoffs

2015 4th Berkeley Symposium on Energy Efficient Electronic Systems, E3S 2015 - Proceedings

DeBenedictis, Erik; Zima, Hans

Millivolt switches will not only improve energy efficiency but will also enable a new capability to manage the energy-reliability tradeoff. By effectively exploiting this system-level capability, it may be possible to obtain one or two additional generations of scaling beyond current projections. Millivolt switches will enable further energy scaling, a process that is expected to continue until the technology encounters thermal noise errors [Theis 10]. If thermal noise errors can be accommodated at higher levels through a new form of error correction, it may be possible to scale system energy about 3× lower than is currently projected. A general solution to errors would also address long-standing problems with cosmic ray strikes, weak and aging parts, some cyber security vulnerabilities, and so on.
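
The thermal-noise argument can be made concrete with a standard Boltzmann-type error model (an illustrative sketch, not the paper's analysis): if the probability that a switch is upset by thermal noise is roughly exp(-E_signal / kT), then lowering signal energy per operation raises the raw error rate sharply, and a system-level error-correction layer is what would allow signal energy to drop by a further factor of about 3.

    import math

    kT = 4.14e-21        # thermal energy at 300 K, in joules

    def upset_probability(signal_energy_joules):
        """Boltzmann-style estimate of a thermal-noise upset per switching event."""
        return math.exp(-signal_energy_joules / kT)

    # Operating points (illustrative): a conservative margin vs. a ~3x lower one.
    for multiple in (100, 33):
        e = multiple * kT
        print(f"E = {multiple:>3} kT  ->  upset probability ~ {upset_probability(e):.1e}")

    # At 100 kT the raw error rate is negligible; at ~33 kT (about 3x less
    # energy) upsets become frequent enough across billions of fast devices
    # that they must be caught and corrected at a higher level -- the
    # energy-reliability tradeoff the paper argues for.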

More Details

Training neural hardware with noisy components

Proceedings of the International Joint Conference on Neural Networks

Rothganger, Fredrick R.; Evans, Brian R.; Aimone, James B.; DeBenedictis, Erik

Some next generation computing devices may consist of resistive memory arranged as a crossbar. Currently, the dominant approach is to use crossbars as the weight matrix of a neural network, and to use learning algorithms that require small incremental weight updates, such as gradient descent (for example Backpropagation). Using real-world measurements, we demonstrate that resistive memory devices are unlikely to support such learning methods. As an alternative, we offer a random search algorithm tailored to the measured characteristics of our devices.
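
A minimal sketch of the contrast the abstract draws, with a toy model standing in for measured device data: weight updates land on coarse, noisy conductance levels, so small gradient steps are largely lost, while a random search only needs to compare the loss before and after a trial perturbation and keep it if it helps. The noise model and network here are illustrative assumptions, not the paper's measured devices or algorithm details.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 8))
    y = (X @ rng.standard_normal(8) > 0).astype(float)   # toy binary task

    def loss(W):
        p = 1.0 / (1.0 + np.exp(-(X @ W)))
        return np.mean((p - y) ** 2)

    def noisy_write(W, target):
        """Device model: writes land on coarse levels with write-to-write noise."""
        levels = np.round(target / 0.05) * 0.05           # coarse quantization
        return levels + rng.normal(0.0, 0.02, size=W.shape)

    W = np.zeros(8)
    for step in range(500):
        trial = noisy_write(W, W + rng.normal(0.0, 0.1, size=8))  # random probe
        if loss(trial) < loss(W):                                  # keep if better
            W = trial
    print(f"random-search loss after 500 trials: {loss(W):.3f}")
    # A gradient step of size ~1e-3 would be swallowed by the 0.05 write
    # granularity above, which is the kind of failure mode the abstract describes.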

More Details

Optimal adiabatic scaling and the processor-in-memory-and-storage architecture (OAS+PIMS)

Proceedings of the 2015 IEEE/ACM International Symposium on Nanoscale Architectures, NANOARCH 2015

DeBenedictis, Erik; Cook, Jeanine C.; Hoemmen, Mark F.; Metodi, Tzvetan S.

We discuss a new approach to computing that retains the possibility of exponential growth while making substantial use of the existing technology. The exponential improvement path of Moore's Law has been the driver behind the computing approach of Turing, von Neumann, and FORTRAN-like languages. Performance growth is slowing at the system level, even though further exponential growth should be possible. We propose two technology shifts as a remedy, the first being the formulation of a scaling rule for scaling into the third dimension. This involves circuit-level energy-efficiency gains, using adiabatic circuits to avoid overheating. However, this scaling rule is incompatible with the von Neumann architecture. The second technology shift is a computer architecture and programming change to an extremely aggressive form of Processor-In-Memory (PIM) architecture, which we call Processor-In-Memory-and-Storage (PIMS). Theoretical analysis shows that the PIMS architecture is compatible with the 3D scaling rule, suggesting both immediate benefit and a long-term improvement path.
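
One way to see the flavor of such a scaling rule (an illustrative back-of-envelope model, not the paper's derivation): adiabatic circuits trade speed for energy, with energy per operation roughly proportional to clock rate, so stacking L layers and slowing each by √L keeps the power flowing through the 2D footprint constant while total throughput still grows as √L. All constants below are placeholders.

    # Illustrative constants (not from the paper).
    f0 = 1e9        # baseline clock, Hz
    e0 = 1e-15      # baseline energy per operation at f0, joules
    ops_per_layer_per_cycle = 1e6

    def stack(layers):
        f = f0 / layers**0.5                  # slow each layer by sqrt(L)
        energy_per_op = e0 * (f / f0)         # adiabatic: E roughly proportional to f
        throughput = layers * ops_per_layer_per_cycle * f
        power = throughput * energy_per_op    # heat through the 2D footprint
        return throughput, power

    for L in (1, 4, 16, 64):
        t, p = stack(L)
        print(f"{L:>2} layers: throughput {t:.2e} ops/s, footprint power {p:.2e} W")
    # Throughput grows as sqrt(L) while power stays flat -- a scaling path that
    # conventional (non-adiabatic) logic, with fixed energy per op, cannot follow.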

More Details

Scaling to Nanotechnology Limits with the PIMS Computer Architecture and a new Scaling Rule

DeBenedictis, Erik

We describe a new approach to computing that moves towards the limits of nanotechnology using a newly formulated scaling rule. This is in contrast to the current computer industry scaling away from von Neumann's original computer at the rate of Moore's Law. We extend Moore's Law to 3D, which leads generally to architectures that integrate logic and memory. To keep power dissipation constant through a 2D surface of the 3D structure requires using adiabatic principles. We call our newly proposed architecture Processor In Memory and Storage (PIMS). We propose a new computational model that integrates processing and memory into "tiles" that comprise logic, memory/storage, and communications functions. Since the programming model will be relatively stable as a system scales, programs represented by tiles could be executed in a PIMS system built with today's technology or could become the "schematic diagram" for implementation in an ultimate 3D nanotechnology of the future. We build a systems software approach that offers advantages over and above the technological and architectural advantages. First, the algorithms may be more efficient in the conventional sense of having fewer steps. Second, the algorithms may run with higher power efficiency per operation by being a better match for the adiabatic scaling rule. The performance analysis based on demonstrated ideas in physical science suggests 80,000× improvement in cost per operation for the (arguably) general purpose function of emulating neurons in Deep Learning.
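
A minimal sketch of the tile abstraction described above, assuming nothing beyond what the abstract states (the names, sizes, and toy neuron operation are illustrative): each tile couples local memory with local logic and exchanges data only with its neighbors, so the same tiled program could run on today's hardware or serve as the layout for a future 3D part.

    from dataclasses import dataclass, field

    @dataclass
    class Tile:
        """One PIMS-style tile: local memory/storage plus local logic."""
        weights: list = field(default_factory=lambda: [0.1, 0.2, 0.3])
        accumulator: float = 0.0

        def compute(self, inputs):
            """Local logic: here, a toy neuron-style weighted sum."""
            self.accumulator = sum(w * x for w, x in zip(self.weights, inputs))
            return self.accumulator

        def communicate(self, neighbor):
            """Communication function: pass this tile's result to a neighbor."""
            return neighbor.compute([self.accumulator] * len(neighbor.weights))

    # A 1-D chain of tiles standing in for a tiled program.
    tiles = [Tile() for _ in range(4)]
    value = tiles[0].compute([1.0, 1.0, 1.0])
    for left, right in zip(tiles, tiles[1:]):
        value = left.communicate(right)
    print(f"value after flowing through the chain: {value:.4f}")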

More Details

Development, characterization, and modeling of a TaOx ReRAM for a neuromorphic accelerator

Marinella, Matthew J.; Mickel, Patrick R.; Lohn, Andrew L.; Hughart, David R.; Bondi, Robert J.; Mamaluy, Denis M.; Hjalmarson, Harold P.; Stevens, James E.; Decker, Seth D.; Apodaca, Roger A.; Evans, Brian R.; Aimone, James B.; Rothganger, Fredrick R.; James, Conrad D.; DeBenedictis, Erik

This report discusses aspects of neuromorphic computing and how it is used to model microsystems.

More Details

On the design of reversible QDCA systems

Murphy, Sarah M.; DeBenedictis, Erik

This work is the first to describe how to go about designing a reversible QDCA system. The design space is substantial, and there are many questions that a designer needs to answer before beginning to design. This document begins to explicate the tradeoffs and assumptions that need to be made and offers a range of approaches as starting points and examples. This design guide is an effective tool for aiding designers in creating the best quality QDCA implementation for a system.

More Details

Architectural specification for massively parallel computers: An experience and measurement-based approach

Concurrency and Computation: Practice and Experience

Brightwell, Ronald B.; Camp, William; Cole, Benjamin; DeBenedictis, Erik; Leland, Robert; Tomkins, James; Maccabe, Arthur B.

In this paper, we describe the hardware and software architecture of the Red Storm system developed at Sandia National Laboratories. We discuss the evolution of this architecture and provide reasons for the different choices that have been made. We contrast our approach of leveraging high-volume, mass-market commodity processors to that taken for the Earth Simulator. We present a comparison of benchmarks and application performance that support our approach. We also project the performance of Red Storm and the Earth Simulator. This projection indicates that the Red Storm architecture is a much more cost-effective approach to massively parallel computing. Published in 2005 by John Wiley & Sons, Ltd.

More Details

Reversible logic for supercomputing

DeBenedictis, Erik

This paper is about making reversible logic a reality for supercomputing. Reversible logic offers a way to exceed certain basic limits on the performance of computers, yet a powerful case will have to be made to justify its substantial development expense. This paper explores the limits of current, irreversible logic for supercomputers, thus forming a threshold above which reversible logic is the only solution. Problems above this threshold are discussed, with the science and mitigation of global warming being discussed in detail. To further develop the idea of using reversible logic in supercomputing, a design for a 1 Zettaflops supercomputer as required for addressing global climate warming is presented. However, to create such a design requires deviations from the mainstream of both the software for climate simulation and research directions of reversible logic. These deviations provide direction on how to make reversible logic practical.
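
The "threshold" argument lends itself to a back-of-envelope power estimate (the per-operation figures below are assumptions for illustration, not the paper's detailed analysis): even if irreversible logic reached within a couple of orders of magnitude of the kT·ln2 Landauer cost per erased bit, a 1 Zettaflops machine would still dissipate power at a scale that motivates reversible logic, which is not bound by that erasure cost.

    import math

    kT = 4.14e-21                    # joules at 300 K
    landauer = kT * math.log(2)      # ~2.9e-21 J per irreversibly erased bit

    zettaflops = 1e21                # target sustained rate, FLOP/s
    bit_ops_per_flop = 1e5           # assumed logic operations per 64-bit FLOP
    energy_per_bit_op = 100 * kT     # assumed practical irreversible operating point

    power_practical = zettaflops * bit_ops_per_flop * energy_per_bit_op
    power_landauer  = zettaflops * bit_ops_per_flop * landauer

    print(f"1 Zettaflops at 100 kT per bit op : {power_practical/1e6:.0f} MW")
    print(f"1 Zettaflops at the Landauer limit: {power_landauer/1e3:.0f} kW")
    # Even the hard floor for irreversible logic is hundreds of kilowatts of
    # pure bit-erasure heat, and a 100 kT practical margin already reaches tens
    # of megawatts; reversible logic is the only route below that floor.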

More Details

Will Moore's Law be sufficient?

DeBenedictis, Erik

It seems well understood that supercomputer simulation is an enabler for scientific discoveries, weapons, and other activities of value to society. It also seems widely believed that Moore's Law will make progressively more powerful supercomputers over time and thus enable more of these contributions. This paper seeks to add detail to these arguments, revealing them to be generally correct but not a smooth and effortless progression. This paper will review some key problems that can be solved with supercomputer simulation, showing that more powerful supercomputers will be useful up to a very high yet finite limit of around 10^21 FLOPS (1 Zettaflops). The review will also show the basic nature of these extreme problems. This paper will review work by others showing that the theoretical maximum supercomputer power is very high indeed, but will explain how a straightforward extrapolation of Moore's Law will lead to technological maturity in a few decades. The power of a supercomputer at the maturity of Moore's Law will be very high by today's standards, at 10^16-10^19 FLOPS (100 Petaflops to 10 Exaflops) depending on architecture, but distinctly below the level required for the most ambitious applications. Having established that Moore's Law will not be the last word in supercomputing, this paper will explore the nearer-term issue of what a supercomputer will look like at the maturity of Moore's Law. Our approach will quantify the maximum performance permitted by the laws of physics for extensions of current technology and then find a design that approaches this limit closely. We study a 'multi-architecture' for supercomputers that combines a microprocessor with other 'advanced' concepts and find it can reach the limits as well. This approach should be quite viable in the future because the microprocessor would provide compatibility with existing codes and programming styles while the 'advanced' features would provide a boost to the limits of performance.
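
The gap the paper identifies is a simple ratio, using the abstract's own round numbers:

    need = 1e21                       # ~1 Zettaflops for the most ambitious problems
    have = {"conventional architecture": 1e16, "advanced architecture": 1e19}

    for name, flops in have.items():
        print(f"{name}: {flops:.0e} FLOPS at Moore's Law maturity, "
              f"shortfall ~{need / flops:,.0f}x")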

More Details

Taking ASCI supercomputing to the end game

DeBenedictis, Erik

The ASCI supercomputing program is broadly defined as running physics simulations on progressively more powerful digital computers. What happens if we extrapolate the computer technology to its end? We have developed a model for key ASCI computations running on a hypothetical computer whose technology is parameterized in ways that account for advancing technology. This model includes technology information such as Moore's Law for transistor scaling and developments in cooling technology. The model also includes limits imposed by laws of physics, such as thermodynamic limits on power dissipation, limits on cooling, and the limitation of signal propagation velocity to the speed of light. We apply this model and show that ASCI computations will advance smoothly for another 10-20 years to an 'end game' defined by thermodynamic limits and the speed of light. Performance levels at the end game will vary greatly by specific problem, but will be in the Exaflops to Zettaflops range for currently anticipated problems. We have also found an architecture that would be within a constant factor of giving optimal performance at the end game. This architecture is an evolutionary derivative of the mesh-connected microprocessor (such as ASCI Red Storm or IBM Blue Gene/L). We provide designs for the necessary enhancements to microprocessor functionality and to the power efficiency of both the processor and the memory system. The technology we develop in the foregoing provides a 'perfect' computer model with which we can rate the quality of realizable computer designs, both in this writing and as a way of designing future computers. This report focuses on classical computers based on irreversible digital logic, and more specifically on algorithms that simulate space; reversible logic, analog computers, and other ways to address stockpile stewardship are outside the scope of this report.
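
A toy version of the kind of parameterized model the abstract describes (all constants are placeholders, not the report's calibrated values): runtime on a problem is bounded below both by arithmetic throughput, which is limited by the power budget and energy per operation, and by the light-speed latency across the machine.

    def end_game_runtime(ops, machine_diameter_m, power_w, joules_per_op,
                         latency_bound_steps):
        """Lower-bound runtime from power/thermodynamics and the speed of light."""
        c = 3.0e8                                     # m/s
        compute_s = ops * joules_per_op / power_w     # throughput limited by cooling
        light_s = latency_bound_steps * machine_diameter_m / c
        return max(compute_s, light_s)

    # Placeholder scenario: 1e24 operations, 20 MW budget, 10 m machine,
    # 1e6 globally synchronizing steps.
    for joules_per_op in (1e-15, 1e-18, 1e-21):
        t = end_game_runtime(1e24, 10.0, 2e7, joules_per_op, 1e6)
        print(f"{joules_per_op:.0e} J/op -> runtime lower bound {t:.2e} s")
    # Past some point, improving energy per operation no longer helps: the
    # light-speed term dominates, which is the 'end game' the report maps out.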

More Details

The Sandia petaflops planner

DeBenedictis, Erik

The Sandia Petaflops Planner is a tool for projecting the design and performance of parallel supercomputers into the future. The mathematical basis of these projections is the International Technology Roadmap for Semiconductors (ITRS, or a detailed version of Moore's Law) and DOE balance factors for supercomputer procurements. The planner is capable of various forms of scenario analysis, cost estimation, and technology analysis. The tool is described along with technology conclusions regarding PFLOPS-level supercomputers in the upcoming decade.
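
A hedged sketch of what such a planner computes, assuming illustrative Moore's-Law-style trends and balance factors (the real tool used the published ITRS roadmap tables and DOE procurement ratios, which are not reproduced here):

    def plan(year, target_pflops):
        """Toy planner: size and cost a machine for a target peak in a given year."""
        # Illustrative trends, anchored loosely at year 2000.
        gflops_per_socket = 1.0 * 2 ** ((year - 2000) / 1.5)
        dollars_per_socket = 2000.0
        bytes_per_flops = 0.3            # illustrative memory balance factor
        dollars_per_gb = 100.0 * 0.5 ** ((year - 2000) / 2.0)

        sockets = target_pflops * 1e6 / gflops_per_socket
        memory_gb = target_pflops * 1e15 * bytes_per_flops / 1e9
        cost = sockets * dollars_per_socket + memory_gb * dollars_per_gb
        return sockets, cost

    for year in (2004, 2008, 2012):
        sockets, cost = plan(year, target_pflops=1.0)
        print(f"{year}: ~{sockets:,.0f} sockets, ~${cost/1e6:,.0f}M for 1 PFLOPS")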

More Details

A network architecture for Petaflops supercomputers

DeBenedictis, Erik

If we are to build a supercomputer with a speed of 10^15 floating-point operations per second (1 PetaFLOPS), interconnect technology will need to be improved considerably over what it is today. In this report, we explore one possible interconnect design for such a network. The guiding principle in this design is the optimization of all components for the finiteness of the speed of light. To achieve a linear speedup in time over well-tested supercomputers of today's designs will require scaling up of processor power and bandwidth and scaling down of latency. Latency scaling is the most challenging: it requires a 100 ns user-to-user latency for messages traveling the full diameter of the machine. To meet this constraint requires simultaneously minimizing wire length through 3D packaging, new low-latency electrical signaling mechanisms, extremely fast routers, and new network interfaces. In this report, we outline approaches and implementations that will meet the requirements when implemented as a system. No technology breakthroughs are required.
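
The 100 ns constraint can be checked directly against the speed of light, which is the design's stated guiding principle: even at full vacuum light speed the end-to-end path can be at most about 30 m, and realistic signaling speeds plus switching delays cut the budget much further, hence the emphasis on 3D packaging and fast routers. The per-hop and interface delays below are assumptions for illustration.

    c = 3.0e8                 # m/s, vacuum light speed
    v_signal = 0.6 * c        # assumed propagation speed on real interconnect
    budget_s = 100e-9         # user-to-user latency target across the diameter

    print(f"absolute ceiling on path length: {c * budget_s:.0f} m")
    print(f"at 0.6c with no switching delay: {v_signal * budget_s:.0f} m")

    # Subtract assumed per-hop costs (router + interface), illustrative values:
    hops, per_hop_s, interface_s = 6, 5e-9, 15e-9
    remaining = budget_s - hops * per_hop_s - 2 * interface_s
    print(f"path budget after 6 hops and two interfaces: "
          f"{max(remaining, 0) * v_signal:.0f} m")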

More Details

Radiation transport algorithms on trans-petaflops supercomputers of different architectures

DeBenedictis, Erik; Christopher, Thomas W.

We seek to understand which supercomputer architecture will be best for supercomputers at the Petaflops scale and beyond. The process we use is to predict the cost and performance of several leading architectures at various years in the future. The basis for predicting the future is an expanded version of Moore's Law called the International Technology Roadmap for Semiconductors (ITRS). We abstract leading supercomputer architectures into chips connected by wires, where the chips and wires have electrical parameters predicted by the ITRS. We then compute the cost of a supercomputer system and the run time on a key problem of interest to the DOE (radiation transport). These calculations are parameterized by the time into the future and the technology expected to be available at that point. We find the new advanced architectures have substantial performance advantages but conventional designs are likely to be less expensive (due to economies of scale). We do not find a universal "winner", but instead the right architectural choice is likely to involve non-technical factors such as the availability of capital and how long people are willing to wait for results.
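
A toy version of the methodology (placeholder numbers, not the paper's ITRS-derived parameters): each candidate architecture is reduced to a system cost and a predicted run time on the radiation-transport kernel, and the comparison is repeated for different technology years.

    def evaluate(name, chips, dollars_per_chip, flops_per_chip, efficiency,
                 problem_flops=1e21):
        cost = chips * dollars_per_chip
        runtime = problem_flops / (chips * flops_per_chip * efficiency)
        return name, cost, runtime

    candidates = [
        # name, chips, $/chip, peak FLOPS/chip, sustained fraction on transport
        evaluate("commodity mesh", 40_000, 2_500.0, 5e10, 0.05),
        evaluate("advanced PIM",    8_000, 20_000.0, 5e11, 0.20),
    ]
    for name, cost, runtime in candidates:
        print(f"{name:>15}: ${cost/1e6:6.1f}M, {runtime/3600:8.1f} h on the kernel")
    # Neither column decides the question by itself -- the cheaper machine is
    # slower -- which matches the paper's conclusion that there is no universal
    # winner without bringing in non-technical factors.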

More Details