Publications

49 Results

Science and Engineering of Cybersecurity by Uncertainty Quantification and Rigorous Experimentation (SECURE) (Final Report)

Pinar, Ali P.; Tarman, Thomas D.; Swiler, Laura P.; Gearhart, Jared L.; Hart, Derek H.; Vugrin, Eric D.; Cruz, Gerardo C.; Arguello, Bryan A.; Geraci, Gianluca G.; Debusschere, Bert D.; Hanson, Seth T.; Outkin, Alexander V.; Thorpe, Jamie T.; Hart, William E.; Sahakian, Meghan A.; Gabert, Kasimir G.; Glatter, Casey J.; Johnson, Emma S.; Punla-Green, She'ifa P.

This report summarizes the activities performed as part of the Science and Engineering of Cybersecurity by Uncertainty quantification and Rigorous Experimentation (SECURE) Grand Challenge LDRD project. We provide an overview of the research done in this project, including work on cyber emulation, uncertainty quantification, and optimization. We present examples of integrated analyses performed on two case studies: a network scanning/detection study and a malware command and control study. We highlight the importance of experimental workflows and list references of papers and presentations developed under this project. We outline lessons learned and suggestions for future work.


Science & Engineering of Cyber Security by Uncertainty Quantification and Rigorous Experimentation (SECURE) HANDBOOK

Pinar, Ali P.; Tarman, Thomas D.; Swiler, Laura P.; Gearhart, Jared L.; Hart, Derek H.; Vugrin, Eric D.; Cruz, Gerardo C.; Arguello, Bryan A.; Geraci, Gianluca G.; Debusschere, Bert D.; Hanson, Seth T.; Outkin, Alexander V.; Thorpe, Jamie T.; Hart, William E.; Sahakian, Meghan A.; Gabert, Kasimir G.; Glatter, Casey J.; Johnson, Emma S.; Punla-Green, She'ifa P.

Abstract not provided.

Defender Policy Evaluation and Resource Allocation against MITRE ATT&CK Data and Evaluations

Outkin, Alexander V.; Schulz, Patricia V.; Schulz, Timothy S.; Tarman, Thomas D.; Pinar, Ali P.

Protecting against multi-step attacks of uncertain duration and timing forces defenders into an indefinite, always ongoing, resource-intensive response. To effectively allocate resources, a defender must be able to analyze multi-step attacks under the assumption of constantly allocating resources against an uncertain stream of potentially undetected attacks. To achieve this goal, we present a novel methodology that applies a game-theoretic approach to the attack, attacker, and defender data derived from MITRE's ATT&CK® Framework. Time to complete attack steps is drawn from a probability distribution determined by attacker and defender strategies and capabilities. This constrains attack success parameters and enables comparing different defender resource allocation strategies. By approximating attacker-defender games as Markov processes, we represent the attacker-defender interaction, estimate the attack success parameters, determine the effects of attacker and defender strategies, and maximize opportunities for defender strategy improvements against an uncertain stream of attacks. This novel representation and analysis of multi-step attacks enables defender policy optimization and resource allocation, which we illustrate using the data from MITRE's APT3 ATT&CK® Framework.
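
As a rough illustration of the Markov-process approximation described above (not the authors' implementation), the sketch below models a sequential multi-step attack as a discrete-time Markov chain with absorbing success and eviction states; the step-completion and detection probabilities are assumed values chosen for illustration, not data from the ATT&CK Framework.

    # Hypothetical sketch: a multi-step attack approximated as a discrete-time
    # Markov chain. Step-completion and detection probabilities are illustrative,
    # not values taken from MITRE ATT&CK or the paper.
    import numpy as np

    def attack_success_probability(n_steps=4, p_step=0.3, p_detect=0.05, horizon=50):
        """Probability the attacker completes all n_steps within `horizon` ticks.

        States 0..n_steps-1 track attack progress; state n_steps is absorbing
        success, state n_steps+1 is absorbing eviction (defender detects).
        """
        n = n_steps + 2
        success, evicted = n_steps, n_steps + 1
        P = np.zeros((n, n))
        for s in range(n_steps):
            P[s, evicted] = p_detect                   # defender catches the attack
            P[s, s + 1] = (1 - p_detect) * p_step      # attacker finishes this step
            P[s, s] = 1 - P[s, evicted] - P[s, s + 1]  # attack stalls this tick
        P[success, success] = 1.0
        P[evicted, evicted] = 1.0

        dist = np.zeros(n)
        dist[0] = 1.0                                  # attack starts at step 0
        for _ in range(horizon):
            dist = dist @ P
        return dist[success]

    # Comparing defender postures: higher per-tick detection lowers success probability.
    for p in (0.0, 0.05, 0.10):
        print(p, round(attack_success_probability(p_detect=p), 3))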


Techno-Economic Analysis: Best Practices and Assessment Tools

Kobos, Peter H.; Drennen, Thomas E.; Outkin, Alexander V.; Webb, Erik K.; Paap, Scott M.; Wiryadinata, Steven W.

A team at Sandia National Laboratories (SNL) recognized the growing need to maintain and organize the internal community of Techno-Economic Assessment (TEA) analysts at the lab. To meet this need, an internal core team identified a working group of experienced, new, and future analysts to: 1) document TEA best practices; 2) identify existing resources at Sandia and elsewhere; and 3) identify gaps in our existing capabilities. Sandia has a long history of using techno-economic analyses to evaluate various technologies, including consideration of system resilience. Expanding our TEA capabilities will provide a rigorous basis for evaluating science, engineering, and technology-oriented projects, allowing Sandia programs to quantify the impact of targeted research and development (R&D) and improving Sandia's competitiveness for external funding options. Developing this working group reaffirms the successful use of TEA and related techniques when evaluating the impact of R&D investments, proposed work, and internal approaches to leverage deep technical and robust, business-oriented insights. The main findings of this effort demonstrate the high impact TEA has on forecasting future cost, adoption, and impact metrics, as shown by key past applications across a broad technology space. Recommendations from this effort include maintaining and growing best-practice approaches when applying TEA, appreciating the tools (and their limits) from other national laboratories and the academic community, and recognizing that more proposals and R&D investment decisions, both locally at Sandia and more broadly among research funding agencies, require TEA approaches to justify and support well-thought-out project planning.


GPLADD: Quantifying Trust in Government and Commercial Systems: A Game-Theoretic Approach

ACM Transactions on Privacy and Security

Outkin, Alexander V.; Eames, Brandon K.; Galiardi, Meghan A.; Walsh, Sarah; Vugrin, Eric D.; Heersink, Byron; Hobbs, Jacob A.; Wyss, Gregory D.

Trust in a microelectronics-based system can be characterized as the level of confidence that a system is free of subversive alterations made during system development, or that the development process of a system has not been manipulated by a malicious adversary. Trust in systems has become an increasing concern over the past decade. This article presents a novel game-theoretic framework, called GPLADD (Graph-based Probabilistic Learning Attacker and Dynamic Defender), for analyzing and quantifying system trustworthiness at the end of the development process, through the analysis of risk of development-time system manipulation. GPLADD represents attacks and attacker-defender contests over time. It treats time as an explicit constraint and allows incorporating the informational asymmetries between the attacker and defender into analysis. GPLADD includes an explicit representation of attack steps via multi-step attack graphs, attacker and defender strategies, and player actions at different times. GPLADD allows quantifying the attack success probability over time and the attacker and defender costs based on their capabilities and strategies. This ability to quantify different attacks provides an input for evaluation of trust in the development process. We demonstrate GPLADD on an example attack and its variants. We develop a method for representing success probability for arbitrary attacks and derive an explicit analytic characterization of success probability for a specific attack. We present a numeric Monte Carlo study of a small set of attacks, quantify attack success probabilities, attacker and defender costs, and illustrate the options the defender has for limiting the attack success and improving trust in the development process.
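
To make the flavor of the numeric Monte Carlo study concrete, here is a minimal sketch of sampling the success probability of a sequential multi-step attack against a periodically inspecting defender; the step durations, inspection period, and catch probability are hypothetical parameters, not values from the GPLADD study.

    # Illustrative Monte Carlo sketch in the spirit of GPLADD's numeric study.
    # Step durations, inspection period, and catch probability are hypothetical.
    import random

    def attack_succeeds(step_means=(5.0, 8.0, 3.0), inspect_period=10.0,
                        p_catch=0.5, deadline=40.0):
        """One trial: does a sequential multi-step attack finish before `deadline`
        without being caught at a periodic defender inspection?"""
        t = 0.0
        next_inspection = inspect_period
        for mean in step_means:
            t += random.expovariate(1.0 / mean)        # sample this step's duration
            while next_inspection <= t:                # inspections during the step
                if random.random() < p_catch:
                    return False                       # attack detected and removed
                next_inspection += inspect_period
            if t > deadline:
                return False                           # attacker ran out of time
        return True

    def success_probability(n_trials=20000, **kwargs):
        return sum(attack_succeeds(**kwargs) for _ in range(n_trials)) / n_trials

    print("baseline defender:  ", success_probability())
    print("faster inspections: ", success_probability(inspect_period=5.0))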


Analysis of Microgrid Locations Benefitting Community Resilience for Puerto Rico

Jeffers, Robert F.; Staid, Andrea S.; Baca, Michael J.; Currie, Frank M.; Fogleman, William; DeRosa, Sean D.; Wachtel, Amanda; Outkin, Alexander V.

An analysis of microgrids to increase resilience was conducted for the island of Puerto Rico. Critical infrastructure throughout the island was mapped to the key services provided by those sectors to help inform primary and secondary service sources during a major disruption to the electrical grid. Additionally, a resilience metric of burden was developed to quantify community resilience, and a related baseline resilience figure was calculated for the area. To improve resilience, Sandia performed an analysis of where clusters of critical infrastructure are located and used these suggested resilience node locations to create a portfolio of 159 microgrid options throughout Puerto Rico. The team then calculated the impact of these microgrids on the region's ability to provide critical services during an outage, and compared this impact to high-level estimates of cost for each microgrid to generate a set of efficient microgrid portfolios costing in the range of $218M-$917M. This analysis is a refinement of the analysis delivered on June 01, 2018.


Teaching Game Theory to Kids and Limits of Prediction

Outkin, Alexander V.

I was once asked to give a lecture on game theory to a group of 6th graders. After agreeing, I realized that explaining the basics of game theory to 6th graders might be difficult, given that terms such as Nash equilibrium, minimax, maximin, and optimization may not resonate in a 6th grade classroom. Instead, I introduced game theory using the rock-paper-scissors (RPS) game. It turns out kids are excellent game theoreticians. In RPS, they understood both the benefits of randomizing their own strategy and of predicting their opponent's moves. They offered a number of heuristics for both prediction and the opening move. These heuristics included optimizing against past opponent moves, such as not playing rock if the opponent just played scissors, and playing a specific opening hand, such as "paper". Visualizing the effects of such strategic choices on the fly would be interesting and educational. This brief essay attempts to demonstrate and visualize the value of a few different strategic options in RPS. Specifically, we would like to illustrate the following: 1) what is the value of being unpredictable?; and 2) what is the value of being able to predict your opponent? With regard to predicting human players, question 2) has been addressed in Jon McLoone's entry in the Wolfram Blog from January 20, 2014 [1]. McLoone created a predictive algorithm for playing against human opponents that learns to beat them reliably after approximately 30-40 games. I use McLoone's implementation to represent predictive and random strategies. The rest of this document 1) investigates the performance of this predictive strategy against a random strategy (which is optimal in RPS) and 2) attempts to turn this predictive power against the predictive strategy by allowing the opponent full knowledge of the predictor's strategy (but not the choices made using it). This exposes a weakness in predictions made without taking risks into account, by illustrating that a predictive strategy may make the predictor predictable as well.
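
A minimal stand-in for the experiment described above: the "predictor" below simply counters the opponent's most frequent past move (a much cruder heuristic than McLoone's algorithm, which it does not reproduce), and playing it against a uniformly random opponent illustrates point 1), that randomization neutralizes prediction.

    # Toy rock-paper-scissors experiment; the "predictor" is a simple
    # frequency-counting stand-in for McLoone's algorithm, not his implementation.
    import random
    from collections import Counter

    MOVES = ("rock", "paper", "scissors")
    BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}
    COUNTER_MOVE = {loser: winner for winner, loser in BEATS.items()}  # what beats each move

    def random_player(_opponent_history):
        return random.choice(MOVES)

    def frequency_predictor(opponent_history):
        """Predict the opponent repeats their most common past move, then counter it."""
        if not opponent_history:
            return random.choice(MOVES)
        predicted = Counter(opponent_history).most_common(1)[0][0]
        return COUNTER_MOVE[predicted]

    def play(player_a, player_b, rounds=10000):
        hist_a, hist_b, score = [], [], 0              # score > 0 favors player_a
        for _ in range(rounds):
            a, b = player_a(hist_b), player_b(hist_a)
            if BEATS[a] == b:
                score += 1
            elif BEATS[b] == a:
                score -= 1
            hist_a.append(a)
            hist_b.append(b)
        return score / rounds

    # Against a uniformly random opponent the predictor gains nothing on average.
    print("predictor vs random:", round(play(frequency_predictor, random_player), 3))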


Evaluating Moving Target Defense with PLADD

Jones, Stephen T.; Outkin, Alexander V.; Gearhart, Jared L.; Hobbs, Jacob A.; Siirola, John D.; Phillips, Cynthia A.; Verzi, Stephen J.; Tauritz, Daniel T.; Mulder, Samuel A.; Naugle, Asmeret B.

This project evaluates the effectiveness of moving target defense (MTD) techniques using a new game we have designed, called PLADD, inspired by the game FlipIt [28]. PLADD extends FlipIt by incorporating what we believe are key MTD concepts. We have analyzed PLADD and proven the existence of a defender strategy that pushes a rational attacker out of the game, demonstrated how limited the strategies available to an attacker are in PLADD, and derived analytic expressions for the expected utility of the game’s players in multiple game variants. We have created an algorithm for finding a defender’s optimal PLADD strategy. We show that in the special case of achieving deterrence in PLADD, MTD is not always cost effective and that its optimal deployment may shift abruptly from not using MTD at all to using it as aggressively as possible. We believe our effort provides basic, fundamental insights into the use of MTD, but conclude that a truly practical analysis requires model selection and calibration based on real scenarios and empirical data. We propose several avenues for further inquiry, including (1) agents with adaptive capabilities more reflective of real world adversaries, (2) the presence of multiple, heterogeneous adversaries, (3) computational game theory-based approaches such as coevolution to allow scaling to the real world beyond the limitations of analytical analysis and classical game theory, (4) mapping the game to real-world scenarios, (5) taking player risk into account when designing a strategy (in addition to expected payoff), (6) improving our understanding of the dynamic nature of MTD-inspired games by using a martingale representation, defensive forecasting, and techniques from signal processing, and (7) using adversarial games to develop inherently resilient cyber systems.
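
For readers unfamiliar with FlipIt-style games, the following simplified simulation (not the PLADD model itself, which adds further MTD concepts) shows the basic trade-off the abstract analyzes: a defender who retakes control more frequently reduces the attacker's control time but pays a higher move cost. All rates and costs are illustrative assumptions.

    # Simplified FlipIt-style simulation (not the PLADD model): the defender
    # retakes the resource every `defender_period`; each attack succeeds after an
    # exponentially distributed delay. Rates and costs are illustrative.
    import random

    def simulate(defender_period=5.0, attack_rate=0.2, defender_move_cost=1.0,
                 horizon=10000.0):
        """Return (attacker control fraction, defender cost per unit time)."""
        t, attacker_time, defender_moves = 0.0, 0.0, 0
        while t < horizon:
            time_to_compromise = random.expovariate(attack_rate)
            if time_to_compromise < defender_period:
                # Attacker holds the resource from compromise until the next defender move.
                attacker_time += defender_period - time_to_compromise
            t += defender_period
            defender_moves += 1
        return attacker_time / t, defender_moves * defender_move_cost / t

    # Moving more aggressively shrinks the attacker's control time but costs more.
    for period in (2.0, 5.0, 10.0):
        control, cost_rate = simulate(defender_period=period)
        print(f"period {period}: attacker control {control:.2f}, defender cost rate {cost_rate:.2f}")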


Natural Gas Value-Chain and Network Assessments

Kobos, Peter H.; Outkin, Alexander V.; Beyeler, Walter E.; Jenkins, La T.; Malczynski, Leonard A.; Myerly, Melissa M.; Vargas, Vanessa N.; Tenney, Craig M.; Borns, David J.

The current expansion of natural gas (NG) development in the United States requires an understanding of how this change will affect the natural gas industry, downstream consumers, and economic growth in order to promote effective planning and policy development. The impact of this expansion may propagate through the NG system and US economy via changes in manufacturing, electric power generation, transportation, commerce, and increased exports of liquefied natural gas. We conceptualize this problem as supply shock propagation that pushes the NG system and the economy away from its current state of infrastructure development and level of natural gas use. To illustrate this, the project developed two core modeling approaches. The first is an Agent-Based Modeling (ABM) approach which addresses shock propagation throughout the existing natural gas distribution system. The second approach uses a System Dynamics-based model to illustrate the feedback mechanisms related to finding new supplies of natural gas - notably shale gas - and how those mechanisms affect exploration investments in the natural gas market with respect to proven reserves. The ABM illustrates several stylized scenarios of large liquefied natural gas (LNG) exports from the U.S. The ABM's preliminary results demonstrate that such a scenario is likely to have substantial effects on NG prices and on pipeline capacity utilization. Our preliminary results indicate that the price of natural gas in the U.S. may rise by about 50% when LNG exports represent 15% of system-wide demand. The main findings of the System Dynamics model indicate that proven reserves for coalbed methane, conventional gas, and now shale gas can be adequately modeled based on a combination of geologic, economic, and technology-based variables. A base case scenario matches historical proven reserves data for these three types of natural gas. An environmental scenario, based on implementing a $50/tonne CO2 tax, results in fewer proven reserves being developed in the coming years, while demand may decrease in the absence of acceptable substitutes, incentives, or changes in consumer behavior. An increase in demand of 25% increases proven reserves developed by a very small amount by the end of the forecast period of 2025.


Input-output model for MACCS nuclear accident impacts estimation

Outkin, Alexander V.; Bixler, Nathan E.; Vargas, Vanessa N.

Since the original economic model for MACCS was developed, better quality economic data (as well as the tools to gather and process it) and better computational capabilities have become available. The update of the economic impacts component of the MACCS legacy model will provide improved estimates of business disruptions through the use of Input-Output based economic impact estimation. This paper presents an updated MACCS model, based on Input-Output methodology, in which economic impacts are calculated using the Regional Economic Accounting analysis tool (REAcct) created at Sandia National Laboratories. This new GDP-based model allows quick and consistent estimation of gross domestic product (GDP) losses due to nuclear power plant accidents. This paper outlines the steps taken to combine the REAcct Input-Output-based model with the MACCS code, describes the GDP loss calculation, and discusses the parameters and modeling assumptions necessary for the estimation of long-term effects of nuclear power plant accidents.
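
The general shape of such an Input-Output calculation can be sketched with a standard Leontief model, x = (I - A)^-1 d: reduce final demand in the disrupted region and compare total output before and after. The sector coefficients and disruption shares below are made up for illustration and are not REAcct or MACCS data.

    # Generic Leontief input-output sketch of a GDP-loss style calculation.
    # The technical coefficients, sectors, and disruption shares are made up for
    # illustration; they are not REAcct or MACCS data.
    import numpy as np

    # Hypothetical 3-sector technical coefficient matrix A (inter-industry inputs
    # per dollar of output) and baseline final demand d (in $M per day).
    A = np.array([[0.10, 0.20, 0.05],
                  [0.15, 0.10, 0.10],
                  [0.05, 0.05, 0.10]])
    d_baseline = np.array([100.0, 80.0, 50.0])

    def total_output(final_demand):
        """Leontief solution x = (I - A)^-1 d."""
        return np.linalg.solve(np.eye(len(final_demand)) - A, final_demand)

    def output_loss(disruption_share, outage_days):
        """Output lost when final demand falls by `disruption_share` per sector
        for `outage_days` days (e.g. an evacuated region around a plant)."""
        x_base = total_output(d_baseline)
        x_disrupted = total_output(d_baseline * (1.0 - disruption_share))
        return float((x_base - x_disrupted).sum()) * outage_days

    # 40%, 25%, and 10% demand losses in the three sectors for a 30-day outage.
    print("estimated output loss ($M):",
          round(output_loss(np.array([0.40, 0.25, 0.10]), 30), 1))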


Resilience of Adapting Networks: Results from a Stylized Infrastructure Model

Beyeler, Walter E.; Vugrin, Eric D.; Forden, Geoffrey E.; Aamir, Munaf S.; Verzi, Stephen J.; Outkin, Alexander V.

Adaptation is believed to be a source of resilience in systems. It has been difficult to measure the contribution of adaptation to resilience, unlike other resilience mechanisms such as restoration and recovery. One difficulty comes from treating adaptation as a deus ex machina that is interjected after a disruption. This provides no basis for bounding possible adaptive responses. We can bracket the possible effects of adaptation when we recognize that it occurs continuously, and is in part responsible for the current system’s properties. In this way the dynamics of the system’s pre-disruption structure provides information about post-disruption adaptive reaction. Seen as an ongoing process, adaptation has been argued to produce “robust-yet-fragile” systems. Such systems perform well under historical stresses but become committed to specific features of those stresses in a way that makes them vulnerable to system-level collapse when those features change. In effect, adaptation lessens the cost of disruptions within a certain historical range, at the expense of increased cost from disruptions outside that range. Historical adaptive responses leave a signature in the structure of the system. Studies of ecological networks have suggested structural metrics that pick out systemic resilience in the underlying ecosystems. If these metrics are generally reliable indicators of resilience, they provide another strategy for gauging adaptive resilience. To progress in understanding how the process of adaptation and the property of resilience interrelate in infrastructure systems, we pose some specific questions: Does adaptation confer resilience? Does it confer resilience to novel shocks as well, or does it tune the system to fragility? Can structural features predict resilience to novel shocks? Are there policies or constraints on the adaptive process that improve resilience?


Creating interaction environments: Defining a two-sided market model of the development and dominance of platforms

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Beyeler, Walter E.; Kelic, Andjelka; Finley, Patrick D.; Aamir, Munaf S.; Outkin, Alexander V.; Conrad, Stephen H.; Mitchell, Michael D.; Vargas, Vanessa N.

Interactions between individuals, both economic and social, are increasingly mediated by technological systems. Such platforms facilitate interactions by controlling and regularizing access, while extracting rent from users. The relatively recent idea of two-sided markets has given insights into the distinctive economic features of such arrangements, arising from network effects and the power of the platform operator. Simplifications required to obtain analytical results, while leading to basic understanding, prevent us from posing many important questions. For example, we would like to understand how platforms can be secured when the costs and benefits of security differ greatly across users and operators, and when the vulnerabilities of particular designs may only be revealed after they are in wide use. We define an agent-based model that removes many constraints limiting existing analyses (such as uniformity of users, and free and perfect information), allowing insights into a much larger class of real systems.
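
A minimal sketch of the cross-side network effect such a model captures (illustrative only; the paper's agent-based model is considerably richer): agents on each side join the platform when the other side's participation makes membership worth more than the platform's fee, so adoption and platform revenue depend jointly on the fee level and on cross-side feedback.

    # Minimal agent-based sketch of cross-side network effects on a platform.
    # Agent valuations, the adoption rule, and the fee levels are illustrative
    # assumptions, not the model defined in the paper.
    import random

    def simulate(n_buyers=500, n_sellers=500, fee=0.3, steps=50, noise=0.2, seed=1):
        rng = random.Random(seed)
        # Heterogeneous per-counterparty value for each agent on each side.
        buyer_value = [rng.uniform(0, 1) for _ in range(n_buyers)]
        seller_value = [rng.uniform(0, 1) for _ in range(n_sellers)]
        buyers_in, sellers_in = 0.05, 0.05             # small seeded adoption
        for _ in range(steps):
            # An agent joins when value * (other side's participation) + noise > fee.
            buyers_in = sum(v * sellers_in + rng.uniform(-noise, noise) > fee
                            for v in buyer_value) / n_buyers
            sellers_in = sum(v * buyers_in + rng.uniform(-noise, noise) > fee
                             for v in seller_value) / n_sellers
        revenue = fee * (buyers_in * n_buyers + sellers_in * n_sellers)
        return buyers_in, sellers_in, revenue

    # Raising the platform's rent can tip adoption from widespread to collapsed.
    for fee in (0.1, 0.3, 0.5):
        b, s, r = simulate(fee=fee)
        print(f"fee {fee}: buyers {b:.2f}, sellers {s:.2f}, revenue {r:.0f}")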
