Publications

43 Results

Science and Engineering of Cybersecurity by Uncertainty quantification and Rigorous Experimentation (SECURE) (Final Report)

Pinar, Ali P.; Tarman, Thomas D.; Swiler, Laura P.; Gearhart, Jared L.; Hart, Derek H.; Vugrin, Eric D.; Cruz, Gerardo C.; Arguello, Bryan A.; Geraci, Gianluca G.; Debusschere, Bert D.; Hanson, Seth T.; Outkin, Alexander V.; Thorpe, Jamie T.; Hart, William E.; Sahakian, Meghan A.; Gabert, Kasimir G.; Glatter, Casey J.; Johnson, Emma S.; Punla-Green, She'ifa P.

This report summarizes the activities performed as part of the Science and Engineering of Cybersecurity by Uncertainty quantification and Rigorous Experimentation (SECURE) Grand Challenge LDRD project. We provide an overview of the research done in this project, including work on cyber emulation, uncertainty quantification, and optimization. We present examples of integrated analyses performed on two case studies: a network scanning/detection study and a malware command and control study. We highlight the importance of experimental workflows and list references of papers and presentations developed under this project. We outline lessons learned and suggestions for future work.

More Details

Science & Engineering of Cyber Security by Uncertainty Quantification and Rigorous Experimentation (SECURE) HANDBOOK

Pinar, Ali P.; Tarman, Thomas D.; Swiler, Laura P.; Gearhart, Jared L.; Hart, Derek H.; Vugrin, Eric D.; Cruz, Gerardo C.; Arguello, Bryan A.; Geraci, Gianluca G.; Debusschere, Bert D.; Hanson, Seth T.; Outkin, Alexander V.; Thorpe, Jamie T.; Hart, William E.; Sahakian, Meghan A.; Gabert, Kasimir G.; Glatter, Casey J.; Johnson, Emma S.; Punla-Green, She'ifa P.

Abstract not provided.

Foundations of Rigorous Cyber Experimentation

Stickland, Michael S.; Li, Justin D.; Swiler, Laura P.; Tarman, Thomas D.

This report presents the results of the “Foundations of Rigorous Cyber Experimentation” (FORCE) Laboratory Directed Research and Development (LDRD) project. This project is a companion to the “Science and Engineering of Cyber security through Uncertainty quantification and Rigorous Experimentation” (SECURE) Grand Challenge LDRD project. It leverages the offline, controlled nature of cyber experimentation technologies in general, and emulation testbeds in particular, to assess how uncertainties in network conditions affect uncertainties in key metrics. We conduct extensive experimentation using a Firewheel emulation-based cyber testbed model of Invisible Internet Project (I2P) networks to understand a de-anonymization attack previously presented in the literature. Our goals in this analysis are to see whether we can leverage emulation testbeds to produce reliably repeatable experimental networks at scale, identify significant parameters influencing experimental results, replicate the previous results, quantify uncertainty associated with the predictions, and apply multi-fidelity techniques to forecast results at real-world network scales. The I2P networks we study are up to three orders of magnitude larger than the networks studied in SECURE and present additional challenges in identifying significant parameters. The key contributions of this project are the application of SECURE techniques, such as uncertainty quantification (UQ), to a scenario of interest and the scaling of those techniques to larger network sizes. This report describes the experimental methods and results of these studies in more detail. In addition, the process of constructing these large-scale experiments tested the limits of the Firewheel emulation-based technologies. Another contribution of this work, therefore, is that it informed the Firewheel developers of scaling limitations, which were subsequently corrected.
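
The multi-fidelity forecasting mentioned above can be illustrated with a control-variate style estimator: a few expensive high-fidelity runs are corrected by many cheap low-fidelity runs. The sketch below is illustrative only; the `high_fidelity` and `low_fidelity` functions are hypothetical stand-ins for an emulation metric and a correlated surrogate, not the project's actual workflow.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for an expensive high-fidelity (emulation) metric
# and a cheap, correlated low-fidelity (simulation) surrogate.
def high_fidelity(x):
    return np.sin(x) + 0.1 * rng.normal()

def low_fidelity(x):
    return np.sin(x) + 0.3

n_hf, n_lf = 20, 2000                       # few HF runs, many LF runs
xs_hf = rng.uniform(0, np.pi, n_hf)         # shared inputs for paired runs
hf = np.array([high_fidelity(x) for x in xs_hf])
lf = np.array([low_fidelity(x) for x in xs_hf])

# Control-variate coefficient from the paired HF/LF samples.
alpha = np.cov(hf, lf)[0, 1] / np.var(lf, ddof=1)

# Many extra cheap LF samples sharpen the LF mean estimate.
lf_big = np.array([low_fidelity(x) for x in rng.uniform(0, np.pi, n_lf)])

estimate = hf.mean() + alpha * (lf_big.mean() - lf.mean())
print(f"multi-fidelity estimate: {estimate:.3f}")
```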

More Details

Comparing reproduced cyber experimentation studies across different emulation testbeds

ACM International Conference Proceeding Series

Tarman, Thomas D.; Rollins, Trevor; Swiler, Laura P.; Cruz, Gerardo C.; Vugrin, Eric D.; Huang, Hao; Sahu, Abhijeet; Wlazlo, Patrick; Goulart, Ana; Davis, Kate

Cyber testbeds provide an important mechanism for experimentally evaluating cyber security performance. However, as an experimental discipline, reproducible cyber experimentation is essential to assure valid, unbiased results. Even minor differences in setup, configuration, and testbed components can have an impact on the experiments, and thus, reproducibility of results. This paper documents a case study in reproducing an earlier emulation study, with the reproduced emulation experiment conducted by a different research group on a different testbed. We describe lessons learned as a result of this process, both in terms of the reproducibility of the original study and in terms of the different testbed technologies used by both groups. This paper also addresses the question of how to compare results between two groups' experiments, identifying candidate metrics for comparison and quantifying the results in this reproduction study.
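
One way to quantify agreement between the original and reproduced experiments, in the spirit of the candidate metrics the paper discusses, is a distributional comparison. A minimal sketch follows; the metric samples are made-up numbers for illustration, not data from the study.

```python
import numpy as np
from scipy import stats

# Hypothetical metric samples (e.g., detection latency in seconds) from the
# original testbed and from the reproducing group's testbed.
original = np.array([1.02, 0.97, 1.10, 1.05, 0.99, 1.08, 1.01, 0.95])
reproduced = np.array([1.00, 1.04, 0.98, 1.12, 1.03, 0.96, 1.07, 1.02])

# Two-sample Kolmogorov-Smirnov test: are the two result distributions
# statistically consistent with each other?
ks_stat, p_value = stats.ks_2samp(original, reproduced)

# Relative difference of means as a simple point-estimate comparison.
rel_diff = abs(original.mean() - reproduced.mean()) / original.mean()

print(f"KS stat={ks_stat:.3f}, p={p_value:.3f}, mean diff={rel_diff:.1%}")
```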

More Details

Defender Policy Evaluation and Resource Allocation against MITRE ATT&CK Data and Evaluations

Outkin, Alexander V.; Schulz, Patricia V.; Schulz, Timothy S.; Tarman, Thomas D.; Pinar, Ali P.

Protecting against multi-step attacks of uncertain duration and timing forces defenders into an indefinite, ongoing, resource-intensive response. To allocate resources effectively, a defender must be able to analyze multi-step attacks under the assumption of constantly allocating resources against an uncertain stream of potentially undetected attacks. To achieve this goal, we present a novel methodology that applies a game-theoretic approach to the attack, attacker, and defender data derived from MITRE's ATT&CK® Framework. The time to complete attack steps is drawn from a probability distribution determined by attacker and defender strategies and capabilities. This constrains attack success parameters and enables comparison of different defender resource allocation strategies. By approximating attacker-defender games as Markov processes, we represent the attacker-defender interaction, estimate the attack success parameters, determine the effects of attacker and defender strategies, and maximize opportunities for defender strategy improvements against an uncertain stream of attacks. This novel representation and analysis of multi-step attacks enables defender policy optimization and resource allocation, which we illustrate using the data from MITRE's APT3 ATT&CK® Framework.
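
The Markov-process approximation described above can be sketched in miniature: if each attack step is a race between the attacker completing the step and the defender detecting the activity, the chain's absorption probabilities yield the attack success parameters. The per-step probabilities below are illustrative assumptions, not values from MITRE's data.

```python
import numpy as np

# Hypothetical per-step parameters: probability the attacker completes the
# step in a given time unit, and probability the defender detects it.
p_advance = np.array([0.30, 0.20, 0.25, 0.15])   # one entry per attack step
p_detect  = np.array([0.05, 0.10, 0.08, 0.12])   # depends on defender strategy

# Each step is a race between advancing and being detected; the chance the
# attacker wins the race at step i is p_adv / (p_adv + p_det).
step_success = p_advance / (p_advance + p_detect)

attack_success = step_success.prod()

# Expected dwell time in each step (geometric with rate p_adv + p_det).
expected_time = (1.0 / (p_advance + p_detect)).sum()

print(f"P(attack completes undetected) = {attack_success:.3f}")
print(f"Expected total engagement time = {expected_time:.1f} time units")
```

Comparing `attack_success` under different `p_detect` vectors is a toy version of evaluating defender resource allocations against the same attack chain.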

More Details

SECURE: An Evidence-based Approach to Cyber Experimentation

Proceedings - 2019 Resilience Week, RWS 2019

Pinar, Ali P.; Benz, Zachary O.; Castillo, Anya; Hart, Bill; Swiler, Laura P.; Tarman, Thomas D.

Securing cyber systems is of paramount importance, but rigorous, evidence-based techniques to support decision makers for high-consequence decisions have been missing. The need for bringing rigor into cybersecurity is well-recognized, but little progress has been made over the last decades. We introduce a new project, SECURE, that aims to bring more rigor into cyber experimentation. The core idea is to follow the footsteps of computational science and engineering and expand similar capabilities to support rigorous cyber experimentation. In this paper, we review the cyber experimentation process, present the research areas that underlie our effort, discuss the underlying research challenges, and report on our progress to date. This paper is based on work in progress, and we expect to have more complete results for the conference.

More Details

Research Directions for Cyber Experimentation: Workshop Discussion Analysis

DeWaard, Elizabeth D.; Deccio, Casey D.; Fritz, David J.; Tarman, Thomas D.

Sandia National Laboratories hosted a workshop on August 11, 2017, entitled "Research Directions for Cyber Experimentation," which focused on identifying and addressing research gaps within the field of cyber experimentation, particularly emulation testbeds. This report mainly documents the discussion toward the end of the workshop, which covered research gaps such as developing a sustainable research infrastructure, expanding cyber experimentation, and making the field more accessible to subject matter experts who may not have a background in computer science. Other gaps include methodologies for rigorous experimentation, validation, and uncertainty quantification, which, if addressed, also have the potential to bridge the gap between cyber experimentation and cyber engineering. Workshop attendees presented various ways to overcome these research gaps; however, the main conclusion was that better communication is needed through more workshops, conferences, email lists, and Slack channels, among other opportunities.

More Details

Quantum information processing: science & technology

Carroll, Malcolm; Tarman, Thomas D.

Qubits have been demonstrated using GaAs double quantum dots (DQDs), with the (1) singlet and (2) triplet stationary states as the qubit basis states. Long spin decoherence times in silicon spur translation of the GaAs qubit into silicon. In the near term the goals are: (1) develop surface-gate enhancement-mode double quantum dots (MOS and strained-Si/SiGe) to demonstrate few electrons and spin read-out, and examine impurity-doped quantum dots as an alternative architecture; (2) use mobility, C-V, ESR, quantum dot performance, and modeling to feed back into and improve processing, including development of atomic precision fabrication at SNL; (3) examine integrated electronics approaches to the RF-SET; (4) use combinations of numerical packages for multi-scale simulation of quantum dot systems (NEMO3D, EMT, TCAD, SPICE); and (5) continue micro-architecture evaluation for different device and transport architectures.

More Details

Simulated, Emulated, and Physical Investigative Analysis (SEPIA) of networked systems

Burton, David; Tarman, Thomas D.; Van Leeuwen, Brian P.; Onunkwo, Uzoma O.; Urias, Vincent U.; McDonald, Michael J.

This report describes recent progress made in developing and utilizing hybrid Simulated, Emulated, and Physical Investigative Analysis (SEPIA) environments. Many organizations require advanced tools to analyze their information systems' security, reliability, and resilience against cyber attack. Today's security analyses utilize real systems (computers, network routers, and other network equipment), computer emulations (e.g., virtual machines), and simulation models separately to analyze the interplay between threats and safeguards. In contrast, this work developed new methods to combine these three approaches into integrated hybrid SEPIA environments. Our SEPIA environments enable an analyst to rapidly configure hybrid environments that pass network traffic and perform, from the outside, like real networks. This provides higher-fidelity representations of key network nodes while still leveraging the scalability and cost advantages of simulation tools. The result is the ability to rapidly produce large yet relatively low-cost, multi-fidelity SEPIA networks of computers and routers that let analysts quickly investigate threats and test protection approaches.

More Details

Photonic encryption using all optical logic

Tang, Jason D.; Tarman, Thomas D.; Pierson, Lyndon G.; Blansett, Ethan B.; Vawter, Gregory A.; Robertson, Perry J.; Schroeppel, Richard C.

With the build-out of large transport networks utilizing optical technologies, more and more capacity is being made available. Innovations in Dense Wave Division Multiplexing (DWDM) and the elimination of optical-electrical-optical conversions have brought on advances in communication speeds as we move into 10 Gigabit Ethernet and above. Of course, there is a need to encrypt data on these optical links as the data traverses public and private network backbones. Unfortunately, as the communications infrastructure becomes increasingly optical, advances in encryption (done electronically) have failed to keep up. This project examines the use of optical logic for implementing encryption in the photonic domain to achieve the requisite encryption rates. In order to realize photonic encryption designs, technology developed for electrical logic circuits must be translated to the photonic regime. This paper examines two classes of all optical logic (SEED, gain competition) and how each discrete logic element can be interconnected and cascaded to form an optical circuit. Because there is no known software that can model these devices at a circuit level, the functionality of the SEED and gain competition devices in an optical circuit were modeled in PSpice. PSpice allows modeling of the macro characteristics of the devices in context of a logic element as opposed to device level computational modeling. By representing light intensity as voltage, 'black box' models are generated that accurately represent the intensity response and logic levels in both technologies. By modeling the behavior at the systems level, one can incorporate systems design tools and a simulation environment to aid in the overall functional design. Each black box model of the SEED or gain competition device takes certain parameters (reflectance, intensity, input response), and models the optical ripple and time delay characteristics. These 'black box' models are interconnected and cascaded in an encrypting/scrambling algorithm based on a study of candidate encryption algorithms. We found that a low gate count, cascadable encryption algorithm is most feasible given device and processing constraints. The modeling and simulation of optical designs using these components is proceeding in parallel with efforts to perfect the physical devices and their interconnect. We have applied these techniques to the development of a 'toy' algorithm that may pave the way for more robust optical algorithms. These design/modeling/simulation techniques are now ready to be applied to larger optical designs in advance of our ability to implement such systems in hardware.
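
The 'black box' behavioral modeling described above, with light intensity standing in for voltage and a thresholded intensity response plus time delay, can also be sketched outside PSpice. The Python sketch below is a loose analogue under assumed parameters (threshold, gain, delay are invented), not a model of the actual SEED or gain-competition devices.

```python
import numpy as np

class OpticalGateModel:
    """Behavioral 'black box' model of an all-optical logic element:
    light intensity stands in for voltage, with a sigmoid switching
    curve and a fixed propagation delay. Parameters are illustrative
    assumptions, not measured device characteristics."""

    def __init__(self, threshold=0.5, gain=10.0, delay_steps=2):
        self.threshold = threshold
        self.gain = gain
        self.delay = [0.0] * delay_steps   # FIFO modeling time delay

    def step(self, intensity_in):
        # Sigmoid intensity response approximates a switching curve;
        # ripple and logic-level details would be fit to measured data.
        out = 1.0 / (1.0 + np.exp(-self.gain * (intensity_in - self.threshold)))
        self.delay.append(out)
        return self.delay.pop(0)           # delayed output sample

# Cascade two gates, as the circuit-level modeling above suggests.
g1, g2 = OpticalGateModel(), OpticalGateModel()
signal = [0.0, 0.0, 1.0, 1.0, 1.0, 0.2, 0.2, 0.9]
for t, s in enumerate(signal):
    print(t, round(g2.step(g1.step(s)), 3))
```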

More Details

Prototyping Faithful Execution in a Java virtual machine

Campbell, Philip L.; Pierson, Lyndon G.; Tarman, Thomas D.

This report presents the implementation of a stateless scheme for Faithful Execution, the design for which is presented in a companion report, "Principles of Faithful Execution in the Implementation of Trusted Objects" (SAND 2003-2328). We added a simple cryptographic capability to an already simplified class loader and its associated Java Virtual Machine (JVM) to provide a byte-level implementation of Faithful Execution. We refer to the extended class loader and JVM collectively as the Sandia Faithfully Executing Java architecture (or JavaFE for short). This prototype is intended to enable exploration of more sophisticated techniques that we intend to implement in hardware.
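
A rough analogue of the decrypt-at-load idea, sketched in Python rather than the report's modified JVM: code is stored encrypted and authenticated, and tampering is detected at load time rather than silently executed. The use of Fernet here is an assumption for illustration, not the cryptographic capability JavaFE actually used.

```python
from cryptography.fernet import Fernet

# Hypothetical Python analogue of a class loader with a cryptographic
# capability: the program is stored encrypted, and plaintext exists only
# inside the "trusted volume" of the running process.
key = Fernet.generate_key()
f = Fernet(key)

source = b"def greet():\n    return 'loaded faithfully'\n"
encrypted_module = f.encrypt(source)        # what would live on disk

def faithful_load(blob, key):
    """Decrypt-then-load: Fernet authenticates the ciphertext, so any
    bit flip makes decryption raise rather than execute altered code."""
    code = Fernet(key).decrypt(blob)        # raises InvalidToken on tamper
    namespace = {}
    exec(compile(code, "<encrypted>", "exec"), namespace)
    return namespace

ns = faithful_load(encrypted_module, key)
print(ns["greet"]())
```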

More Details

Principles of Faithful Execution in the implementation of trusted objects

Campbell, Philip L.; Pierson, Lyndon G.; Tarman, Thomas D.

We begin with the following definitions:

Definition: A trusted volume is the computing machinery (including communication lines) within which data is assumed to be physically protected from an adversary. A trusted volume provides both integrity and privacy.

Definition: Program integrity consists of the protection necessary to enable the detection of changes in the bits comprising a program as specified by the developer, for the entire time that the program is outside a trusted volume. For ease of discussion we consider program integrity to be the aggregation of two elements: instruction integrity (detection of changes in the bits within an instruction or block of instructions), and sequence integrity (detection of changes in the locations of instructions within a program).

Definition: Faithful Execution (FE) is a type of software protection that begins when the software leaves the control of the developer and ends within the trusted volume of a target processor. That is, FE provides program integrity, even while the program is in execution. (As we will show below, FE schemes are a function of trusted volume size.)

FE is a necessary quality for computing. Without it we cannot trust computations. In the early days of computing, FE came for free since the software never left a trusted volume. At that time the execution environment was the same as the development environment. In some circles that environment was referred to as a "closed shop": all of the software that was used there was developed there. When an organization bought a large computer from a vendor, the organization would run its own operating system on that computer, use only its own editors, only its own compilers, only its own debuggers, and so on. However, with the continuing maturity of computing technology, FE becomes increasingly difficult to achieve.
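
To make the two elements of program integrity concrete, here is a sketch of one possible mechanism (an assumption for illustration, not the report's scheme): tag each instruction block together with its position, so flipping bits violates instruction integrity and reordering blocks violates sequence integrity.

```python
import hmac, hashlib

KEY = b"demo-key"   # illustrative; a real FE scheme would hold keys in hardware

def protect(blocks):
    # Tag each instruction block together with its position, binding both
    # the bits (instruction integrity) and the location (sequence integrity).
    return [(b, hmac.new(KEY, i.to_bytes(4, "big") + b, hashlib.sha256).digest())
            for i, b in enumerate(blocks)]

def verify(tagged):
    for pos, (b, tag) in enumerate(tagged):
        expect = hmac.new(KEY, pos.to_bytes(4, "big") + b, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expect):
            raise ValueError(f"integrity failure at block {pos}")

program = [b"\x01\x02", b"\x03\x04", b"\x05\x06"]
tagged = protect(program)
verify(tagged)                                  # passes untouched
tagged[0], tagged[1] = tagged[1], tagged[0]     # reorder two blocks
# verify(tagged) now raises: a sequence-integrity violation is detected
```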

More Details

Final report for the Multiprotocol Label Switching (MPLS) control plane security LDRD project

Tarman, Thomas D.; Pierson, Lyndon G.; Michalski, John T.; Black, Stephen P.; Torgerson, Mark D.

As rapid Internet growth continues, global communications become more dependent on Internet availability for information transfer. Recently, the Internet Engineering Task Force (IETF) introduced a new protocol, Multiprotocol Label Switching (MPLS), to provide high-performance data flows within the Internet. MPLS emulates two major aspects of Asynchronous Transfer Mode (ATM) technology. First, each initial IP packet is 'routed' to its destination based on previously known delay and congestion avoidance mechanisms. This allows for effective distribution of network resources and reduces the probability of congestion. Second, after route selection each subsequent packet is assigned a label at each hop, which determines the output port for the packet to reach its final destination. These labels guide the forwarding of each packet at routing nodes more efficiently and with more control than traditional IP forwarding (based on complete address information in each packet) for high-performance data flows. Label assignment is critical to the prompt and accurate delivery of user data. However, the protocols for label distribution were not adequately secured. Thus, if an adversary compromises a node by intercepting and modifying labels, or more simply by injecting false labels into the packet-forwarding engine, the propagation of improperly labeled data flows could create instability in the entire network. In addition, some Virtual Private Network (VPN) solutions take advantage of this 'virtual channel' configuration to eliminate the need for user data encryption to provide privacy. VPNs relying on MPLS require accurate label assignment to maintain user data protection. This research developed a working distributive trust model that demonstrated how to deploy confidentiality, authentication, and non-repudiation in the global network label-switching control plane. Simulation models and laboratory testbed implementations that demonstrated this concept were developed, and results from this research were transferred to industry via standards in the Optical Internetworking Forum (OIF).
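
As a hedged illustration of authenticated label distribution (not the project's actual protocol or the OIF standard), a router could sign its label-binding advertisements so peers reject injected or modified labels. The message format below is invented for the example.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Hypothetical sketch: a label-switching router signs its label-binding
# advertisements so peers can authenticate them before installing labels.
router_key = Ed25519PrivateKey.generate()
router_pub = router_key.public_key()

def advertise(label: int, fec_prefix: str):
    msg = f"BIND label={label} fec={fec_prefix}".encode()
    return msg, router_key.sign(msg)    # signature also gives non-repudiation

def accept(msg: bytes, sig: bytes) -> bool:
    try:
        router_pub.verify(sig, msg)     # raises on forged or altered bindings
        return True
    except InvalidSignature:
        return False

msg, sig = advertise(1001, "192.0.2.0/24")
print(accept(msg, sig))                             # True: authentic binding
print(accept(msg.replace(b"1001", b"9999"), sig))   # False: tampered label
```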

More Details

Final Report for the Quality of Service for Networks Laboratory Directed Research and Development Project

Eldridge, John M.; Tarman, Thomas D.; Brenkosh, Joseph P.; Dillinger, John D.; Michalski, John T.

The recent unprecedented growth of global network (Internet) usage has created an ever-increasing amount of congestion. Telecommunication companies (telcos) and Internet Service Providers (ISPs), which provide access and distribution through the network, are increasingly aware of the need to manage this growth. Congestion, if left unmanaged, will result in degradation of the overall network. These access and distribution networks currently lack formal mechanisms to select Quality of Service (QoS) attributes for data transport. Network services with a requirement for expediency or consistent amounts of bandwidth cannot function properly in a communication environment without the implementation of a QoS structure. This report describes and implements such a structure, which results in the ability to identify, prioritize, and police critical application flows.
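
A token bucket is a standard way to police a flow against a QoS contract; the sketch below is a generic illustration with made-up parameters, not the report's implementation.

```python
import time

class TokenBucketPolicer:
    """Minimal token-bucket policer of the kind used to enforce a QoS
    contract: a flow may burst up to `burst` bytes but is limited to
    `rate` bytes/second on average. Parameters are illustrative."""

    def __init__(self, rate: float, burst: float):
        self.rate, self.burst = rate, burst
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self, packet_bytes: int) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at the burst.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True          # conforming: forward at priority
        return False             # non-conforming: drop or remark

policer = TokenBucketPolicer(rate=125_000, burst=10_000)  # ~1 Mbit/s average
print(policer.allow(1500))       # early packets conform within the burst
```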

More Details

Final Report for the 10 to 100 Gigabit/Second Networking Laboratory Directed Research and Development Project

Witzke, Edward L.; Pierson, Lyndon G.; Tarman, Thomas D.; Dean, Leslie B.; Robertson, Perry J.; Campbell, Philip L.

The next major performance plateau for high-speed, long-haul networks is at 10 Gbps. Data visualization, high-performance network storage, and Massively Parallel Processing (MPP) demand these (and higher) communication rates. MPP-to-MPP distributed processing applications and MPP-to-Network File Store applications already require single-conversation communication rates in the range of 10 to 100 Gbps. MPP-to-Visualization Station applications can already utilize communication rates in the 1 to 10 Gbps range. This LDRD project examined some of the building blocks necessary for developing a 10 to 100 Gbps computer network architecture. These included technology areas such as OS bypass, Dense Wavelength Division Multiplexing (DWDM), IP switching and routing, optical amplifiers, inverse multiplexing of ATM, data encryption, and data compression; standards-body activities in the ATM Forum and the Optical Internetworking Forum (OIF); and proof-of-principle laboratory prototypes. This work has not only advanced the body of knowledge in the aforementioned areas, but has also facilitated the rapid maturation of high-speed networking and communication technology by (1) participating in the development of pertinent standards and (2) promoting informal (and formal) collaboration with industrial developers of high-speed communication equipment.

More Details

The ASCI Network for SC '99: A Step on the Path to a 100 Gigabit Per Second Supercomputing Network

Pratt, Thomas J.; Tarman, Thomas D.; Miller, Marc M.; Adams, Roger L.; Chen, Helen Y.; Brandt, James M.; Wyckoff, Peter S.

This document highlights the DisCom² (Distance Computing and Communication) team's activities at the 1999 Supercomputing conference (SC '99) in Portland, Oregon. This conference is sponsored by the IEEE and ACM. Sandia, Lawrence Livermore, and Los Alamos National Laboratories have participated in this conference for eleven years. For the last four years the three laboratories have come together at the conference under the DOE's Accelerated Strategic Computing Initiative (ASCI) rubric. Communication support for the ASCI exhibit is provided by the ASCI DisCom² project. The DisCom² communication team uses this forum to demonstrate and focus communication and networking developments within the community. At SC '99, DisCom² built a prototype of the next-generation ASCI network, demonstrated remote clustering techniques, demonstrated the capabilities of emerging terabit router products, demonstrated the latest technologies for delivering visualization data to scientific users, and demonstrated the latest in encryption methods, including IP VPN technologies and ATM encryption research. The authors also coordinated the other production networking activities within the booth and between their demonstration partners on the exhibit floor. This paper documents those accomplishments, discusses the details of their implementation, and describes how these demonstrations support Sandia's overall strategies in ASCI networking.

More Details

Proposed foreword to the ATM Security Specification Version 1.1

Witzke, Edward L.; Tarman, Thomas D.

A number of substantive modifications were made from Version 1.0 to Version 1.1 of the ATM Security Specification. To assist implementers in identifying these modifications, the authors propose to include a foreword to the Security 1.1 specification that lists them. Typically, a revised specification provides some mechanism for implementers to determine the modifications that were made from previous versions. Since the Security 1.1 specification does not include change bars or other mechanisms that specifically direct the reader to these modifications, the authors propose a modification table in a foreword to the document. This modification table should also be updated to include substantive modifications made at the San Francisco meeting.

More Details

Sandia's straw ballot comments on the Security Version 1.1 specification

Tarman, Thomas D.

This contribution provides Sandia's straw ballot comments on the Security Version 1.1 specification, STR-SEC-02.01. Two major comments are addressed here: one pertains to potential problems with the use of the Security Association Section digital signature, and the other to potential inconsistencies in the allocation of relative identifiers in the initiating security agent.

More Details

Security services negotiation through OAM cells

Tarman, Thomas D.

As described in contribution AF99-0335, it is desirable to allow new security services and mechanisms to be negotiated while a connection is in progress. To do that, new "negotiation OAM cells" dedicated to security should be defined, as well as acknowledgment cells that allow negotiation OAM cells to be exchanged reliably. Remarks made at the New Orleans meeting regarding those cell formats are taken into account. This contribution presents baseline text describing the format of the negotiation and acknowledgment cells and the use of those cells. All modifications to the specification are reversible using Word's revision tools.
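
To make the cell exchange concrete, here is a hypothetical sketch of a negotiation/acknowledgment cell layout and the retransmit-until-acknowledged rule. The field layout is invented for illustration and is not the actual format defined in the contribution's baseline text; only the 48-byte ATM cell payload size is taken as given.

```python
import struct

# Illustrative layout only: an ATM cell carries a 48-byte payload; here we
# spend 4 of those bytes on negotiation framing and leave 44 for the body.
NEG_CELL = 0x01   # negotiation OAM cell
ACK_CELL = 0x02   # acknowledgment cell

def make_cell(cell_type: int, seq: int, body: bytes) -> bytes:
    assert len(body) <= 44
    # type (1 byte) | reserved (1) | sequence number (2) | body (44, padded)
    return struct.pack("!BBH44s", cell_type, 0, seq, body.ljust(44, b"\x00"))

def parse_cell(cell: bytes):
    cell_type, _, seq, body = struct.unpack("!BBH44s", cell)
    return cell_type, seq, body.rstrip(b"\x00")

# Reliable exchange: the sender retransmits a negotiation cell until it
# sees an acknowledgment cell carrying the same sequence number.
offer = make_cell(NEG_CELL, seq=7, body=b"ALGO=AES-CBC")
ack = make_cell(ACK_CELL, seq=7, body=b"")
assert parse_cell(ack)[:2] == (ACK_CELL, 7)   # matching ack ends retransmission
```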

More Details