Publications

Results 1–25 of 27
A Novel Approach to Exponential Speedup of Simulation Events in Wireless Networks

2018 International Conference on Computing, Networking and Communications, ICNC 2018

Ganti, Anand G.; Onunkwo, Uzoma O.; Van Leeuwen, Brian P.; Scoggin, Michael P.; Schroeppel, Richard C.

We demonstrate a new approach that yields exponential speedup of communication events in discrete event simulations (DES) of mobile wireless networks. With the explosive growth of wireless devices, including Internet of Things devices, it is important to be able to simulate wireless networks accurately at large scale. Unfortunately, current simulation techniques scale poorly and are inadequate for large-scale network simulation, especially at high rates of total transmission events. This has limited many studies to much smaller sizes than desired. We propose a method for attaining high-fidelity DES of mobile wireless networks that leverages (i) path loss of transmissions and (ii) the relatively slower topology changes in a high-transmission-rate environment. Our approach averages a runtime of k(r)·O(log N) per event, where N is the number of simulated wireless nodes, r is a factor of the maximum transmission range, and k(r) is typically a constant obeying k(r) ≪ N.
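The spatial-indexing idea behind this kind of speedup can be illustrated with a minimal sketch (not the authors' implementation; function names and parameters here are illustrative): node positions are hashed into a uniform grid, so a transmission event examines only the grid cells overlapping its range disc rather than all N nodes.

```python
import math
from collections import defaultdict

def build_grid(positions, cell):
    """Hash node positions into a uniform grid of cell side `cell`."""
    grid = defaultdict(list)
    for i, (x, y) in enumerate(positions):
        grid[(int(x // cell), int(y // cell))].append(i)
    return grid

def nodes_in_range(grid, positions, tx, r, cell):
    """Return nodes within distance r of transmitter tx, checking only
    the grid cells that overlap the range disc (not all N nodes)."""
    x, y = positions[tx]
    cx, cy = int(x // cell), int(y // cell)
    span = int(math.ceil(r / cell))
    hits = []
    for gx in range(cx - span, cx + span + 1):
        for gy in range(cy - span, cy + span + 1):
            for j in grid.get((gx, gy), []):
                if j != tx and math.dist(positions[tx], positions[j]) <= r:
                    hits.append(j)
    return hits
```

For a bounded transmission range, the number of cells visited per event is independent of N, which is what lets per-event cost stay far below a full O(N) scan.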

High Fidelity Simulations of Large-scale Wireless Networks (Part II - FY2017)

Onunkwo, Uzoma O.; Ganti, Anand G.; Mitchell, John A.; Scoggin, Michael P.; Schroeppel, Richard C.; Van Leeuwen, Brian P.; Wolf, Michael W.

The ability to simulate wireless networks at large scale for a meaningful amount of time is considerably lacking in today's network simulators. For this reason, much of the published work in this area limits simulation studies to fewer than 1,000 nodes and either over-simplifies channel characteristics or covers time scales much shorter than a day. In this report, we show that one can overcome these limitations and study problems of high practical consequence. This work presents two key contributions to high-fidelity simulation of large-scale wireless networks: (a) wireless simulations can be sped up by more than 100X in runtime using ideas from spatial indexing algorithms and clipping of negligible signals, and (b) a clustering and task-oriented programming paradigm can reduce inter-process communication in a parallel discrete event simulation, resulting in better scaling efficiency.
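The "clipping of negligible signals" idea can be sketched with a standard log-distance path-loss model (an assumption for illustration; the report's channel model may differ, and all names and defaults below are hypothetical): receptions beyond the distance at which received power falls below a floor are simply never scheduled.

```python
def clip_range(p_tx_dbm, floor_dbm, pl0_db=40.0, d0=1.0, n=3.0):
    """Maximum distance at which a transmission is still above `floor_dbm`
    under a log-distance path-loss model:
        PL(d) = pl0_db + 10 * n * log10(d / d0).
    Receptions beyond this distance can be clipped from the simulation."""
    budget_db = p_tx_dbm - floor_dbm - pl0_db
    if budget_db <= 0:
        return 0.0
    return d0 * 10 ** (budget_db / (10.0 * n))
```

Combined with a spatial index, this turns each transmission event into a bounded-radius neighbor query instead of an all-pairs reception test.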

High Fidelity Simulations of Large-Scale Wireless Networks (Part I)

Onunkwo, Uzoma O.; Cole, Robert G.; Ganti, Anand G.; Schroeppel, Richard C.; Scoggin, Michael P.; Van Leeuwen, Brian P.

Wireless systems and networks have experienced rapid growth over the last decade with the advent of smart devices for everyday use. These systems, which include smartphones, vehicular gadgets, and internet-of-things devices, are becoming ubiquitous and ever more important. They pose interesting research challenges for the design and analysis of new network protocols due to their large scale and complexity. In this work, we focus on the challenging aspect of simulating the inter-connectivity of many of these devices in wireless networks. The quantitative study of large-scale wireless networks, with counts of wireless devices in the thousands, is a very difficult problem with no known acceptable solution. By necessity, simulations at this scale have to approximate reality, but the algorithms employed in most modern-day network simulators can be improved for wireless network simulations. In this report, we present the advances we have made and propositions for continued progress towards a framework for high-fidelity simulations of wireless networks. This work is not complete, in that a final simulation framework tool is yet to be produced. However, we highlight the major bottlenecks and address them individually, with initial results that show promise.

Report on a Zero-Knowledge Protocol Tabletop Exercise

Marleau, Peter M.; Brubaker, Erik B.; Deland, Sharon M.; Hilton, Nathan R.; McDaniel, Michael M.; Schroeppel, Richard C.; Seager, Kevin D.; Stoddard, Mary C.; MacArthur, Duncan M.

This report summarizes the discussion and conclusions reached during a tabletop exercise held at Sandia National Laboratories, Albuquerque, on September 3, 2014, regarding a recently described approach for nuclear warhead verification based on the cryptographic concept of a zero-knowledge protocol (ZKP), presented in a paper authored by Glaser, Barak, and Goldston. A panel of Sandia National Laboratories researchers, whose expertise includes radiation instrumentation design and development, cryptography, and arms control verification implementation, jointly reviewed the paper and identified specific challenges to implementing the approach, as well as some opportunities. It was noted that ZKP as used in cryptography is a useful model for the arms control verification problem, but the direct analogy to arms control breaks down quickly. The ZKP methodology for warhead verification fits within the general class of template-based verification techniques, where a reference measurement is used to confirm that a given object is like another object that has already been accepted as a warhead by some other means. This can be a powerful verification approach, but it requires independent means to trust the authenticity of the reference warhead - a standard that may be difficult to achieve and one the ZKP authors do not directly address. Despite some technical challenges, the concept of last-minute selection of the pre-loads and equipment could be a valuable component of a verification regime.

Parallelism of the SANDstorm hash algorithm

Schroeppel, Richard C.; Torgerson, Mark D.

Mainstream cryptographic hashing algorithms are not parallelizable. This limits their speed and prevents them from taking advantage of the current trend toward multi-core platforms; the speed limitation in turn restricts their usefulness as an authentication mechanism in secure communications. Sandia researchers have created a new cryptographic hashing algorithm, SANDstorm, which was specifically designed to take advantage of multi-core processing and to be parallelizable on a wide range of platforms. This report describes a late-start LDRD effort to verify the parallelizability claims of the SANDstorm designers. We have shown, with operating code and bench testing, that the SANDstorm algorithm may be trivially parallelized on a wide range of hardware platforms. Implementations using OpenMP demonstrate a linear speedup with multiple cores. We have also shown significant performance gains with optimized C code and the use of assembly instructions to exploit particular platform capabilities.
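Why a hash can parallelize at all can be shown with a generic two-level tree hash (a sketch of the general technique only, not SANDstorm's actual compression structure): fixed-size chunks are hashed independently, in parallel, and a final hash over the chunk digests binds them together.

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

def tree_hash(data, chunk=1 << 20, workers=4):
    """Two-level tree hash: hash fixed-size chunks independently (here in
    a thread pool), then hash the concatenation of the chunk digests.
    Unlike a plain chained hash, the chunk level spreads across cores."""
    chunks = [data[i:i + chunk] for i in range(0, len(data), chunk)] or [b""]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        leaves = list(pool.map(lambda c: hashlib.sha256(c).digest(), chunks))
    return hashlib.sha256(b"".join(leaves)).hexdigest()
```

The result is deterministic regardless of how many workers run, since only the chunk boundaries, not the scheduling, affect the digest.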

Small circuits for cryptography

Anderson, William E.; Draelos, Timothy J.; Schroeppel, Richard C.; Torgerson, Mark D.; Miller, Russell D.

This report examines a number of hardware circuit design issues associated with implementing certain functions in FPGA and ASIC technologies. Here we show circuit designs for AES and SHA-1 that have an extremely small hardware footprint yet show reasonably good performance characteristics compared to the state-of-the-art designs found in the literature. Our AES performance numbers are fueled by an optimized composite-field S-box design for the Stratix chipset. Our SHA-1 designs use the register packing and feedback functionalities of the Stratix LE, which reduce logic element usage by as much as 72% compared to other SHA-1 designs.
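The math a composite-field S-box circuit implements is the standard AES S-box: a multiplicative inverse in GF(2^8) followed by an affine transform. A software sketch of that same computation (working directly in GF(2^8); the small-circuit designs instead map it into the composite field GF((2^4)^2)):

```python
def gf_mul(a, b, poly=0x11B):
    """Multiply in GF(2^8) modulo the AES polynomial x^8+x^4+x^3+x+1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly
        b >>= 1
    return r

def gf_inv(a):
    """Multiplicative inverse via a^254 = a^-1 in GF(2^8); inv(0) = 0."""
    r = 1
    for _ in range(254):
        r = gf_mul(r, a)
    return r

def aes_sbox(x):
    """AES S-box: GF(2^8) inverse followed by the affine transform."""
    b = gf_inv(x)
    rot = lambda v, k: ((v << k) | (v >> (8 - k))) & 0xFF
    return b ^ rot(b, 1) ^ rot(b, 2) ^ rot(b, 3) ^ rot(b, 4) ^ 0x63
```

In hardware, the payoff of the composite-field decomposition is that the GF(2^8) inverse reduces to a handful of GF(2^4) operations, which is what shrinks the S-box footprint.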

Manticore and CS mode: parallelizable encryption with joint cipher-state authentication

Anderson, William E.; Beaver, Cheryl L.; Draelos, Timothy J.; Schroeppel, Richard C.; Torgerson, Mark D.; Miller, Russell D.

We describe a new mode of encryption with inexpensive authentication, which uses information from the internal state of the cipher to provide the authentication. Our algorithms have a number of benefits: (1) the encryption has properties similar to CBC mode, yet the encipherment and authentication can be parallelized and/or pipelined, (2) the authentication overhead is minimal, and (3) the authentication process remains resistant against some IV reuse. We offer a Manticore class of authenticated encryption algorithms based on cryptographic hash functions, which support variable block sizes up to twice the hash output length and variable key lengths. A proof of security is presented for the MTC4 and Pepper algorithms. We then generalize the construction to create the Cipher-State (CS) mode of encryption that uses the internal state of any round-based block cipher as an authenticator. We provide hardware and software performance estimates for all of our constructions and give a concrete example of the CS mode of encryption that uses AES as the encryption primitive and adds a small speed overhead (10-15%) compared to AES alone.
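The core idea, reusing internal cipher state as the authenticator, can be illustrated with a deliberately simplified hash-based toy (this is not MTC4 or CS mode, and it is sequential where the real constructions parallelize; every name below is hypothetical): a running state absorbs each plaintext block, so the tag falls out of state the cipher maintains anyway.

```python
import hashlib

def _h(*parts):
    return hashlib.sha256(b"".join(parts)).digest()

def toy_encrypt(key, iv, blocks):
    """Toy encryption with joint-state authentication: the same running
    state drives the keystream and, at the end, yields the tag."""
    state = _h(key, iv)
    out = []
    for i, pt in enumerate(blocks):
        ks = _h(state, i.to_bytes(4, "big"))           # per-block keystream
        out.append(bytes(a ^ b for a, b in zip(pt, ks)))
        state = _h(state, pt)                           # state absorbs plaintext
    return out, _h(state, key)                          # tag "for free"

def toy_decrypt(key, iv, blocks, tag):
    state = _h(key, iv)
    out = []
    for i, ct in enumerate(blocks):
        ks = _h(state, i.to_bytes(4, "big"))
        pt = bytes(a ^ b for a, b in zip(ct, ks))
        out.append(pt)
        state = _h(state, pt)
    if _h(state, key) != tag:
        raise ValueError("authentication failed")
    return out
```

The point of the sketch is only the cost structure: no second pass and no separate MAC computation is needed, because authentication piggybacks on state already being updated.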

Securing mobile code

Beaver, Cheryl L.; Neumann, William D.; Link, Hamilton E.; Schroeppel, Richard C.; Campbell, Philip L.; Pierson, Lyndon G.; Anderson, William E.

If software is designed so that it can issue functions that move it from one computing platform to another, the software is said to be 'mobile'. There are two general areas of security problems associated with mobile code. The 'secure host' problem involves protecting the host from malicious mobile code. The 'secure mobile code' problem, on the other hand, involves protecting the code from malicious hosts. This report focuses on the latter problem. We have found three distinct camps of opinion regarding how to secure mobile code: those who believe special distributed hardware is necessary, those who believe special distributed software is necessary, and those who believe neither is necessary. We examine all three camps, with a focus on the third. In the distributed software camp we examine some commonly proposed techniques, including Java, D'Agents, and Flask. For the specialized hardware camp, we propose a cryptographic technique for 'tamper-proofing' code over a large portion of the software/hardware life cycle by careful modification of current architectures. This method culminates in decrypting/authenticating each instruction within a physically protected CPU, thereby protecting against subversion by malicious code. Our main focus is on the camp that believes neither specialized software nor hardware is necessary. We concentrate on methods of code obfuscation that render an entire program, or a data segment on which a program depends, incomprehensible. The hope is to prevent, or at least slow down, reverse engineering efforts and to prevent goal-oriented attacks on the software and its execution. The field of obfuscation is still in a state of development, with the central problem being the lack of a basis for evaluating the protection schemes. We give a brief introduction to some of the main ideas in the field, followed by an in-depth analysis of a technique called 'white-boxing'.
We put forth some new attacks and improvements on this method as well as demonstrating its implementation for various algorithms. We also examine cryptographic techniques to achieve obfuscation including encrypted functions and offer a new application to digital signature algorithms. To better understand the lack of security proofs for obfuscation techniques, we examine in detail general theoretical models of obfuscation. We explain the need for formal models in order to obtain provable security and the progress made in this direction thus far. Finally we tackle the problem of verifying remote execution. We introduce some methods of verifying remote exponentiation computations and some insight into generic computation checking.
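The basic move behind white-boxing, folding secret key material into precomputed lookup tables so the deployed code never handles the key directly, can be sketched in a few lines (the S-box here is an arbitrary stand-in, not AES, and the names are hypothetical):

```python
SBOX = [(x * 7 + 3) % 256 for x in range(256)]   # stand-in bijective S-box

def make_whitebox_table(key_byte):
    """Fold a secret key byte into a lookup table T[x] = SBOX[x ^ k].
    The deployed code ships T and evaluates T[x], never touching k."""
    return [SBOX[x ^ key_byte] for x in range(256)]
```

A single unencoded table like this leaks the key trivially (T and a known SBOX together reveal k), which is why practical white-box designs compose many randomly encoded tables; the lack of a basis for evaluating how well those encodings resist extraction is exactly the open problem described above.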

Photonic encryption: modeling and functional analysis of all optical logic

Tang, Jason D.; Robertson, Perry J.; Schroeppel, Richard C.

With the build-out of large transport networks utilizing optical technologies, more and more capacity is being made available. Innovations in Dense Wave Division Multiplexing (DWDM) and the elimination of optical-electrical-optical conversions have brought on advances in communication speeds as we move into 10 Gigabit Ethernet and above. Of course, there is a need to encrypt data on these optical links as the data traverses public and private network backbones. Unfortunately, as the communications infrastructure becomes increasingly optical, advances in encryption (done electronically) have failed to keep up. This project examines the use of optical logic for implementing encryption in the photonic domain to achieve the requisite encryption rates. This paper documents the innovations and advances of work first detailed in 'Photonic Encryption using All Optical Logic,' [1]. A discussion of underlying concepts can be found in SAND2003-4474. In order to realize photonic encryption designs, technology developed for electrical logic circuits must be translated to the photonic regime. This paper examines S-SEED devices and how discrete logic elements can be interconnected and cascaded to form an optical circuit. Because there is no known software that can model these devices at a circuit level, the functionality of S-SEED devices in an optical circuit was modeled in PSpice. PSpice allows modeling of the macro characteristics of the devices in context of a logic element as opposed to device level computational modeling. By representing light intensity as voltage, 'black box' models are generated that accurately represent the intensity response and logic levels in both technologies. By modeling the behavior at the systems level, one can incorporate systems design tools and a simulation environment to aid in the overall functional design. 
Each black box model takes certain parameters (reflectance, intensity, input response), and models the optical ripple and time delay characteristics. These 'black box' models are interconnected and cascaded in an encrypting/scrambling algorithm based on a study of candidate encryption algorithms. Demonstration circuits show how these logic elements can be used to form NAND, NOR, and XOR functions. This paper also presents functional analysis of a serial, low gate count demonstration algorithm suitable for scrambling/encryption using S-SEED devices.
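How discrete thresholded logic elements cascade into NAND, NOR, and XOR can be shown with a behavioral toy in Python (logic levels only; the threshold value and gate rule here are illustrative stand-ins, not the S-SEED intensity response modeled in PSpice):

```python
HIGH, LOW, THRESH = 1.0, 0.0, 0.5   # light intensity treated as logic level

def nor(a, b):
    """Toy threshold gate: output high only while total input intensity
    stays below threshold (behavioral sketch, not device physics)."""
    return HIGH if (a + b) < THRESH else LOW

def inv(a):
    return nor(a, LOW)

def nand(a, b):
    return inv(nor(inv(a), inv(b)))     # NAND from cascaded NORs

def xor(a, b):
    c = nand(a, b)
    return nand(nand(a, c), nand(b, c))  # XOR from four NANDs
```

The cascading pattern, every compound function built by feeding one gate's output intensity into the next, mirrors how the demonstration circuits above compose a scrambling algorithm from a low gate count.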

ManTiCore: Encryption with joint cipher-state authentication

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Anderson, Erik; Beaver, Cheryl L.; Draelos, Timothy J.; Schroeppel, Richard C.; Torgerson, Mark D.

We describe a new mode of encryption with inexpensive authentication, which uses information from the internal state of the cipher to provide the authentication. Our algorithms have a number of benefits: The encryption has properties similar to CBC mode, yet the encipherment and authentication can be parallelized and/or pipelined; The authentication overhead is minimal; The authentication process remains resistant against some IV reuse. Our first construction is the MTC4 encryption algorithm based on cryptographic hash functions which supports variable block sizes up to twice the hash output length, and variable key lengths. A proof of security is presented for MTC4. We then generalize the construction to create the Cipher-State (CS) mode of encryption that uses the internal state of any round-based block cipher as an authenticator. We give a concrete example using AES as the encryption primitive. We provide performance measurements for all constructions. © Springer-Verlag Berlin Heidelberg 2004.

Photonic encryption using all optical logic

Tang, Jason D.; Tarman, Thomas D.; Pierson, Lyndon G.; Blansett, Ethan B.; Vawter, Gregory A.; Robertson, Perry J.; Schroeppel, Richard C.

With the build-out of large transport networks utilizing optical technologies, more and more capacity is being made available. Innovations in Dense Wave Division Multiplexing (DWDM) and the elimination of optical-electrical-optical conversions have brought on advances in communication speeds as we move into 10 Gigabit Ethernet and above. Of course, there is a need to encrypt data on these optical links as the data traverses public and private network backbones. Unfortunately, as the communications infrastructure becomes increasingly optical, advances in encryption (done electronically) have failed to keep up. This project examines the use of optical logic for implementing encryption in the photonic domain to achieve the requisite encryption rates. In order to realize photonic encryption designs, technology developed for electrical logic circuits must be translated to the photonic regime. This paper examines two classes of all optical logic (SEED, gain competition) and how each discrete logic element can be interconnected and cascaded to form an optical circuit. Because there is no known software that can model these devices at a circuit level, the functionality of the SEED and gain competition devices in an optical circuit was modeled in PSpice. PSpice allows modeling of the macro characteristics of the devices in context of a logic element as opposed to device level computational modeling. By representing light intensity as voltage, 'black box' models are generated that accurately represent the intensity response and logic levels in both technologies. By modeling the behavior at the systems level, one can incorporate systems design tools and a simulation environment to aid in the overall functional design. Each black box model of the SEED or gain competition device takes certain parameters (reflectance, intensity, input response), and models the optical ripple and time delay characteristics. 
These 'black box' models are interconnected and cascaded in an encrypting/scrambling algorithm based on a study of candidate encryption algorithms. We found that a low gate count, cascadable encryption algorithm is most feasible given device and processing constraints. The modeling and simulation of optical designs using these components is proceeding in parallel with efforts to perfect the physical devices and their interconnect. We have applied these techniques to the development of a 'toy' algorithm that may pave the way for more robust optical algorithms. These design/modeling/simulation techniques are now ready to be applied to larger optical designs in advance of our ability to implement such systems in hardware.

Quantum computing accelerator I/O: LDRD 52750 final report

Tigges, Chris P.; Modine, N.A.; Pierson, Lyndon G.; Ganti, Anand G.; Schroeppel, Richard C.

In a superposition of quantum states, a bit can be in both the states '0' and '1' at the same time. This feature of the quantum bit, or qubit, has no parallel in classical systems. Currently, quantum computers consisting of 4 to 7 qubits in a 'quantum computing register' have been built. Innovative algorithms suited to quantum computing are now beginning to emerge, applicable to sorting, cryptanalysis, and other applications. A framework, called quantum error correction, is emerging for overcoming slightly inaccurate quantum gate interactions and for enabling quantum states to survive interactions with the surrounding environment. Thus there is the potential for rapid advances in this field. Although quantum information processing can be applied to secure communication links (quantum cryptography) and to crack conventional cryptosystems, the first few computing applications will likely involve a 'quantum computing accelerator', similar to a 'floating point arithmetic accelerator', interfaced to a conventional von Neumann computer architecture. The goal of this research is to develop a roadmap for applying Sandia's capabilities to the solution of some of the problems associated with maintaining quantum information and with getting data into and out of such a 'quantum computing accelerator'. We propose to focus this work on 'quantum I/O technologies' by applying quantum optics on semiconductor nanostructures, leveraging Sandia's expertise in semiconductor microelectronic/photonic fabrication techniques as well as its expertise in information theory, processing, and algorithms. The work will be guided by an understanding of the practical requirements of computing and communication architectures. This effort will incorporate ongoing collaboration between 9000, 6000, and 1000 and between junior and senior personnel. Follow-on work to fabricate and evaluate appropriate experimental nano/microstructures will be proposed as a result of this work.
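The superposition property described above can be made concrete with a minimal state-vector simulation (purely illustrative, unrelated to the proposed hardware): a Hadamard gate puts a basis state into an equal superposition, and applying one to every qubit of a register spreads the amplitude over all basis states.

```python
import math

def hadamard(state, qubit):
    """Apply a Hadamard gate to one qubit of a state vector:
    |0> -> (|0>+|1>)/sqrt(2),  |1> -> (|0>-|1>)/sqrt(2)."""
    s = 1 / math.sqrt(2)
    out = [0.0] * len(state)
    for i, amp in enumerate(state):
        if not amp:
            continue
        bit = (i >> qubit) & 1
        out[i] += s * amp * (-1 if bit else 1)   # same-basis component
        out[i ^ (1 << qubit)] += s * amp          # flipped-bit component
    return out
```

Starting a 2-qubit register in |00> and applying a Hadamard to each qubit leaves every one of the four basis states with amplitude 1/2, i.e. the register is 'in all four values at once'.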

Algorithms for improved performance in cryptographic protocols

Beaver, Cheryl L.; Schroeppel, Richard C.

Public-key cryptographic algorithms provide data authentication and non-repudiation for electronic transmissions. The mathematical nature of the algorithms, however, means they require a significant amount of computation, and encrypted messages and digital signatures consume considerable bandwidth. Accordingly, there are many environments (e.g., wireless, ad-hoc, and remote sensing networks) where these requirements are prohibitive and public-key cryptography cannot be used. The use of elliptic curves in public-key computations provides a means by which computation and bandwidth can be somewhat reduced. We report here on research conducted in an LDRD aimed at finding even more efficient algorithms and at making public-key cryptography available to a wider range of computing environments. We improved upon several algorithms, including one for which a patent application has been filed. Further, we discovered some new problems and relations on which future cryptographic algorithms may be based.
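The group operation behind elliptic-curve savings can be sketched over a toy curve (the curve y² = x³ + 2x + 2 over GF(17) below is a textbook illustration; real deployments use curves over fields of roughly 256 bits, which is where the key-size and bandwidth reduction comes from):

```python
def ec_add(P, Q, a, p):
    """Add points on y^2 = x^3 + a*x + b over GF(p) (short Weierstrass
    form); None represents the point at infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                      # P + (-P) = infinity
    if P == Q:
        s = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
    x3 = (s * s - x1 - x2) % p
    return (x3, (s * (x1 - x3) - y1) % p)
```

Repeating this addition (scalar multiplication) is the one-way operation elliptic-curve systems rest on, and the hardness of inverting it at small field sizes is what allows shorter keys and signatures than RSA-style schemes. (The three-argument `pow` modular inverse requires Python 3.8+.)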
