Publications

84 Results

GPLADD: Quantifying trust in government and commercial systems: a game-theoretic approach

ACM Transactions on Privacy and Security

Outkin, Alexander V.; Eames, Brandon K.; Galiardi, Meghan A.; Walsh, Sarah; Vugrin, Eric D.; Heersink, Byron; Hobbs, Jacob A.; Wyss, Gregory D.

Trust in a microelectronics-based system can be characterized as the level of confidence that a system is free of subversive alterations made during system development, or that the development process of a system has not been manipulated by a malicious adversary. Trust in systems has become an increasing concern over the past decade. This article presents a novel game-theoretic framework, called GPLADD (Graph-based Probabilistic Learning Attacker and Dynamic Defender), for analyzing and quantifying system trustworthiness at the end of the development process through analysis of the risk of development-time system manipulation. GPLADD represents attacks and attacker-defender contests over time. It treats time as an explicit constraint and allows incorporating the informational asymmetries between the attacker and defender into the analysis. GPLADD includes an explicit representation of attack steps via multi-step attack graphs, attacker and defender strategies, and player actions at different times. GPLADD allows quantifying the attack success probability over time and the attacker and defender costs based on their capabilities and strategies. This ability to quantify different attacks provides an input for evaluating trust in the development process. We demonstrate GPLADD on an example attack and its variants. We develop a method for representing success probability for arbitrary attacks and derive an explicit analytic characterization of success probability for a specific attack. We present a numeric Monte Carlo study of a small set of attacks, quantify attack success probabilities and attacker and defender costs, and illustrate the options the defender has for limiting attack success and improving trust in the development process.
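
The GPLADD implementation itself is not public code; the sketch below is a minimal Monte Carlo illustration, with invented step probabilities and detection rates, of the kind of quantity the abstract describes: the probability that a multi-step attack completes before the defender detects it or time runs out.

```python
import random

# Hypothetical three-step attack chain. The per-attempt success
# probabilities, per-step detection probability, and time horizon are
# illustrative assumptions, not values from the paper.
ATTACK_STEPS = [0.6, 0.4, 0.7]   # P(step succeeds per attempt)
P_DETECT = 0.05                  # P(defender detects activity per time step)
HORIZON = 50                     # time steps available to the attacker

def simulate_attack(rng: random.Random) -> bool:
    """Return True if all attack steps finish before detection or deadline."""
    step = 0
    for _t in range(HORIZON):
        if rng.random() < P_DETECT:      # defender catches the attacker
            return False
        if rng.random() < ATTACK_STEPS[step]:
            step += 1                    # advance along the attack graph
            if step == len(ATTACK_STEPS):
                return True              # final step done: attack succeeds
    return False                         # ran out of time

def success_probability(trials: int = 100_000, seed: int = 1) -> float:
    rng = random.Random(seed)
    return sum(simulate_attack(rng) for _ in range(trials)) / trials

if __name__ == "__main__":
    print(f"Estimated attack success probability: {success_probability():.3f}")
```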

Development of a statistically based access delay timeline methodology

Rivera, Wayne G.; Robinson, David G.; Wyss, Gregory D.; Hendrickson, Stacey M.

The purpose of adversarial delay is to hinder access to critical resources through the use of physical systems that increase an adversary's task time. The traditional method for characterizing access delay has been a simple model focused on accumulating the times required to complete each task, with little regard for uncertainty, complexity, or the decreased efficiency associated with multiple sequential tasks or stress. The delay associated with any given barrier or path is further discounted to worst-case, and often unrealistic, times based on a high-level adversary, resulting in a highly conservative calculation of total delay. This leads to delay systems that require significant funding and personnel resources to defend against the assumed threat, which for many sites and applications becomes cost prohibitive. A new methodology has been developed that considers the uncertainties inherent in the problem to develop a realistic timeline distribution for a given adversary path. This new methodology incorporates advanced Bayesian statistical theory and methods, taking into account small sample sizes, expert judgment, human factors, and threat uncertainty. The result is an algorithm that can calculate a probability distribution function of delay times directly related to system risk. Through further analysis, the access delay analyst or end user can use the results to make informed decisions while weighing benefits against risks, ultimately resulting in greater system effectiveness at lower cost.
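
A minimal sketch of the end product the abstract describes: a distribution of total path delay rather than a single worst-case number. The lognormal task-time parameters are placeholders; the real methodology elicits these distributions via Bayesian updating of sparse test data and expert judgment.

```python
import random

# Illustrative per-task delay models (lognormal mu and sigma, in minutes).
# These parameters are invented for demonstration only.
TASKS = [(1.0, 0.4), (0.5, 0.6), (1.5, 0.3)]

def sample_path_delay(rng: random.Random) -> float:
    """Total delay for one Monte Carlo traversal of the adversary path."""
    return sum(rng.lognormvariate(mu, sigma) for mu, sigma in TASKS)

def delay_percentiles(trials: int = 50_000, seed: int = 7) -> tuple:
    rng = random.Random(seed)
    delays = sorted(sample_path_delay(rng) for _ in range(trials))
    return tuple(delays[int(q * trials)] for q in (0.10, 0.50, 0.90))

if __name__ == "__main__":
    p10, p50, p90 = delay_percentiles()
    print(f"Delay percentiles (min): 10th={p10:.1f}, 50th={p50:.1f}, 90th={p90:.1f}")
```

A decision-maker can then weigh, for example, the 10th-percentile delay against response-force timelines instead of designing to a single pessimistic point estimate.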

What then do we do about computer security?

Berg, Michael J.; Davis, Christopher E.; Mayo, Jackson M.; Suppona, Roger A.; Wyss, Gregory D.

This report presents the answers that an informal and unfunded group at SNL developed for questions concerning computer security posed by Jim Gosler, Sandia Fellow (00002). The primary purpose of this report is to record our current answers; hopefully those answers will turn out to be answers indeed. The group was formed in November 2010, when Jim Gosler asked several of us several pointed questions about computer security metrics. Never mind that some of the best minds in the field have been trying to crack this nut without success for decades. Jim asked Campbell to lead an informal and unfunded group to answer the questions, and with time invited several more Sandians to join in. We met a number of times, both with Jim and without him. At Jim's direction we contacted a number of people outside Sandia who Jim thought could help. For example, we interacted with IBM's T.J. Watson Research Center and held a one-day videoconference workshop with them on the questions.

Risk-based cost-benefit analysis for security assessment problems

Vulnerability, Uncertainty, and Risk: Analysis, Modeling, and Management - Proceedings of the ICVRAM 2011 and ISUMA 2011 Conferences

Wyss, Gregory D.; Hinton, John P.; Dunphy-Guzman, Katherine; Clem, John; Darby, John; Silva, Consuelo; Mitchiner, Kim

Decision-makers want to perform risk-based cost-benefit prioritization of security investments. However, strong nonlinearities in the most common physical security performance metric make it difficult to use for cost-benefit analysis. This paper extends the definition of risk for security applications and embodies this definition in a new but related security risk metric based on the degree of difficulty an adversary will encounter to successfully execute the most advantageous attack scenario. This metric is compatible with traditional cost-benefit optimization algorithms, and can lead to an objective risk-based cost-benefit method for security investment option prioritization. It also enables decision-makers to more effectively communicate the justification for their investment decisions with stakeholders and funding authorities. Copyright © ASCE 2011.
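
The paper's point is that a difficulty-based risk metric, unlike the usual nonlinear effectiveness metric, plugs directly into standard benefit/cost ranking. The sketch below shows only that compatibility: investment options are ranked by difficulty gained per dollar. The option names, difficulty deltas, costs, and the additive scoring are invented assumptions, not the paper's metric.

```python
# Hypothetical security investment options: each raises the difficulty an
# adversary faces on the most advantageous attack scenario. All numbers
# are illustrative.
OPTIONS = {
    "harden perimeter": {"delta_difficulty": 3.0, "cost": 1.2e6},
    "add cyber monitoring": {"delta_difficulty": 2.0, "cost": 0.4e6},
    "extra response team": {"delta_difficulty": 1.5, "cost": 0.9e6},
}

def prioritize(options: dict) -> list:
    """Rank options by difficulty gained per unit cost (benefit/cost ratio)."""
    ranked = [(name, o["delta_difficulty"] / o["cost"]) for name, o in options.items()]
    return sorted(ranked, key=lambda pair: pair[1], reverse=True)

if __name__ == "__main__":
    for name, ratio in prioritize(OPTIONS):
        print(f"{name}: {ratio:.2e} difficulty per dollar")
```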

Risk-based cost-benefit analysis for security assessment problems

Proceedings - International Carnahan Conference on Security Technology

Wyss, Gregory D.; Clem, John F.; Darby, John L.; Guzman, Katherine D.; Hinton, John P.; Mitchiner, K.W.

Decision-makers want to perform risk-based cost-benefit prioritization of security investments. However, strong nonlinearities in the most common physical security performance metric make it difficult to use for cost-benefit analysis. This paper extends the definition of risk for security applications and embodies this definition in a new but related security risk metric based on the degree of difficulty an adversary will encounter to successfully execute the most advantageous attack scenario. This metric is compatible with traditional cost-benefit optimization algorithms, and can lead to an objective risk-based cost-benefit method for security investment option prioritization. It also enables decision-makers to more effectively communicate the justification for their investment decisions with stakeholders and funding authorities. ©2010 IEEE.

Human reliability-based MC&A models for detecting insider theft

Duran, Felicia A.; Wyss, Gregory D.

Material control and accounting (MC&A) safeguards operations that track and account for critical assets at nuclear facilities provide a key protection approach for defeating insider adversaries. These activities, however, have been difficult to characterize in ways that are compatible with the probabilistic path analysis methods that are used to systematically evaluate the effectiveness of a site's physical protection (security) system (PPS). MC&A activities have many similar characteristics to operator procedures performed in a nuclear power plant (NPP) to check for anomalous conditions. This work applies human reliability analysis (HRA) methods and models for human performance of NPP operations to develop detection probabilities for MC&A activities. This has enabled the development of an extended probabilistic path analysis methodology in which MC&A protections can be combined with traditional sensor data in the calculation of PPS effectiveness. The extended path analysis methodology provides an integrated evaluation of a safeguards and security system that addresses its effectiveness for attacks by both outside and inside adversaries.
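
A minimal sketch, assuming independence and invented probabilities, of the core move the abstract describes: HRA-derived detection probabilities for MC&A activities enter the path calculation in the same way as traditional sensor detection probabilities.

```python
import math

# Per-element detection probabilities along one adversary path. The sensor
# values mimic traditional PPS inputs; the MC&A value stands in for an
# HRA-derived probability that an accounting activity catches the theft.
# All numbers are illustrative.
SENSOR_P_D = [0.3, 0.5]   # e.g., portal monitor, motion sensor
MCA_P_D = [0.4]           # e.g., a material balance check

def path_detection_probability(p_list: list) -> float:
    """P(at least one detection) = 1 - prod(1 - p_i), assuming independence."""
    return 1.0 - math.prod(1.0 - p for p in p_list)

if __name__ == "__main__":
    sensors_only = path_detection_probability(SENSOR_P_D)
    combined = path_detection_probability(SENSOR_P_D + MCA_P_D)
    print(f"Sensors only: {sensors_only:.2f}; with MC&A: {combined:.2f}")
```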

A total risk assessment methodology for security assessment

Wyss, Gregory D.; Otero, Consuelo J.; Pless, Daniel J.; Rhea, Ronald E.; Kaplan, Paul G.

Sandia National Laboratories performed a two-year Laboratory Directed Research and Development project to develop a new collaborative risk assessment method to enable decision makers to fully consider the interrelationships between threat, vulnerability, and consequence. A five-step Total Risk Assessment Methodology was developed to enable interdisciplinary collaborative risk assessment by experts from these disciplines. The objective of this process is to promote effective risk management by enabling analysts to identify scenarios that are simultaneously achievable by an adversary, desirable to the adversary, and of concern to the system owner or to society. The basic steps are risk identification, collaborative scenario refinement and evaluation, scenario cohort identification and risk ranking, threat chain mitigation analysis, and residual risk assessment. The method is highly iterative, especially with regard to scenario refinement and evaluation. The Total Risk Assessment Methodology includes objective consideration of relative attack likelihood instead of subjective expert judgment. The 'probability of attack' is not computed, but the relative likelihood for each scenario is assessed by identifying and analyzing scenario cohort groups, which are groups of scenarios with comparable qualities to the scenario being analyzed at both this and other targets. Scenarios for the target under consideration and other targets are placed into cohort groups under an established ranking process that reflects the following three factors: known targeting, achievable consequences, and the resources required for an adversary to have a high likelihood of success. The development of these target cohort groups implements, mathematically, the idea that adversaries are actively choosing among possible attack scenarios and avoiding scenarios that would be significantly suboptimal to their objectives. An adversary who can choose among only a few comparable targets and scenarios (a small comparable target cohort group) is more likely to choose to attack the specific target under analysis because he perceives it to be a relatively unique attack opportunity. The opposite is also true. Thus, total risk is related to the number of targets that exist in each scenario cohort group.
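
One plausible reading of the cohort idea in miniature: if relative attack likelihood falls as the number of comparable targets grows, then a simple 1/|cohort| weighting (an assumption on our part, not the paper's actual ranking process) already reproduces the qualitative behavior described above. Scenario names and cohort sizes are invented.

```python
# Scenario cohort sketch: a scenario shared by many comparable targets is
# less likely to be aimed at *this* target than a relatively unique one.
SCENARIOS = {
    "vehicle bomb at gate": 40,    # many comparable targets elsewhere
    "insider diversion": 5,        # few comparable opportunities
    "cyber-enabled intrusion": 12,
}

def relative_likelihood(cohort_sizes: dict) -> dict:
    """Weight each scenario by 1/|cohort|, normalized to sum to 1."""
    weights = {name: 1.0 / n for name, n in cohort_sizes.items()}
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

if __name__ == "__main__":
    ranked = sorted(relative_likelihood(SCENARIOS).items(), key=lambda kv: -kv[1])
    for name, w in ranked:
        print(f"{name}: {w:.2f}")
```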

Critical infrastructure systems of systems assessment methodology

Depoy, Jennifer M.; Phelan, James M.; Sholander, Peter E.; Varnado, G.B.; Wyss, Gregory D.; Darby, John; Walter, Andrew W.

Assessing the risk of malevolent attacks against large-scale critical infrastructures requires modifications to existing methodologies that separately consider physical security and cyber security. This research has developed a risk assessment methodology that explicitly accounts for both physical and cyber security, while preserving the traditional security paradigm of detect, delay, and respond. This methodology also accounts for the condition that a facility may be able to recover from or mitigate the impact of a successful attack before serious consequences occur. The methodology uses evidence-based techniques (which are a generalization of probability theory) to evaluate the security posture of the cyber protection systems. Cyber threats are compared against cyber security posture using a category-based approach nested within a path-based analysis to determine the most vulnerable cyber attack path. The methodology summarizes the impact of a blended cyber/physical adversary attack in a conditional risk estimate where the consequence term is scaled by a "willingness to pay" avoidance approach.
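
The paper's evidence-theoretic machinery is not reproduced here; the sketch below illustrates only the nested path-based step, finding the most vulnerable cyber attack path, as a lowest-cost-path search over invented protection-strength scores. The network topology, scores, and additive scoring rule are all assumptions for illustration.

```python
import heapq

# Hypothetical cyber attack graph: edges carry a "protection strength"
# score; the end-to-end path with the lowest total strength is the most
# vulnerable one.
GRAPH = {
    "internet": [("firewall", 3), ("vpn", 5)],
    "firewall": [("hmi", 4)],
    "vpn": [("hmi", 1)],
    "hmi": [("plc", 2)],
    "plc": [],
}

def weakest_path(graph: dict, src: str, dst: str):
    """Dijkstra over protection strength: lowest cost = most vulnerable path."""
    queue = [(0, src, [src])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, strength in graph[node]:
            heapq.heappush(queue, (cost + strength, nxt, path + [nxt]))
    return None

if __name__ == "__main__":
    print(weakest_path(GRAPH, "internet", "plc"))
```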

Comparison of two methods to quantify cyber and physical security effectiveness

Wyss, Gregory D.

With the increasing reliance on cyber technology to operate and control physical security system components, there is a need for methods to assess and model the interactions between the cyber system and the physical security system in order to understand the effects of cyber technology on overall security system effectiveness. This paper evaluates two methodologies for their applicability to the combined cyber and physical security problem. The comparison metrics include the probabilities of detection (P_D), interruption (P_I), and neutralization (P_N), which contribute to calculating the probability of system effectiveness (P_E), the probability that the system can thwart an adversary attack. P_E is well understood in practical applications of physical security, but when the cyber security component is added, system behavior becomes more complex and difficult to model. This paper examines two approaches, the Bounding Analysis Approach (BAA) and the Expected Value Approach (EVA), to determine their applicability to the combined physical and cyber security problem. These methods were assessed for a variety of security system characteristics to determine whether reasonable security decisions could be made based on their results. The assessments provided insight into an adversary's behavior depending on which part of the physical security system is cyber-controlled. The analysis showed that the BAA is better suited to facility analyses than the EVA because it can identify and model an adversary's most desirable attack path.
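
For readers unfamiliar with the metrics named above, the standard roll-up (with invented numbers) looks like the sketch below; the paper's contribution is how cyber control of components perturbs these terms, which the sketch does not attempt to model.

```python
# Physical-security effectiveness roll-up: interruption requires detection
# and a timely response, and effectiveness requires interruption followed
# by neutralization. All probabilities are illustrative.
P_D = 0.80         # detection at the critical detection point
P_RESPONSE = 0.90  # response arrives before task completion, given detection
P_N = 0.75         # response force defeats the adversary, given interruption

P_I = P_D * P_RESPONSE   # probability of interruption
P_E = P_I * P_N          # probability of system effectiveness

print(f"P_I = {P_I:.2f}, P_E = {P_E:.2f}")
```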

Risk assessment for physical and cyber attacks on critical infrastructures

Depoy, Jennifer M.; Phelan, James M.; Sholander, Peter E.; Smith, Bryan J.; Varnado, G.B.; Wyss, Gregory D.

Assessing the risk of malevolent attacks against large-scale critical infrastructures requires modifications to existing methodologies. Existing risk assessment methodologies consider physical security and cyber security separately. As such, they do not accurately model attacks that involve defeating both physical protection and cyber protection elements (e.g., hackers turning off alarm systems prior to forced entry). This paper presents a risk assessment methodology that accounts for both physical and cyber security. It also preserves the traditional security paradigm of detect, delay and respond, while accounting for the possibility that a facility may be able to recover from or mitigate the results of a successful attack before serious consequences occur. The methodology provides a means for ranking those assets most at risk from malevolent attacks. Because the methodology is automated, the analyst can also play 'what if' with mitigation measures to gain a better understanding of how to best expend resources towards securing the facilities. It is simple enough to be applied to large infrastructure facilities without developing highly complicated models. Finally, it is applicable to facilities with extensive security as well as those that are less well-protected.

A user's guide to Sandia's Latin hypercube sampling software: LHS UNIX library/standalone version

Swiler, Laura P.; Wyss, Gregory D.

This document is a reference guide for the UNIX Library/Standalone version of the Latin Hypercube Sampling Software. This software has been developed to generate Latin hypercube multivariate samples. This version runs on Linux or UNIX platforms. This manual covers the use of the LHS code in a UNIX environment, run either as a standalone program or as a callable library. The underlying code in the UNIX Library/Standalone version of LHS is almost identical to the updated Windows version of LHS released in 1998 (SAND98-0210). However, some modifications were made to customize it for a UNIX environment and as a library that is called from the DAKOTA environment. This manual covers the use of the LHS code as a library and in the standalone mode under UNIX.
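
The LHS code documented by the manual is Fortran and is not reproduced here; the following is a minimal Python sketch of the Latin hypercube scheme itself: each variable's range is split into n equal-probability strata, one draw is taken from each stratum, and the strata are permuted independently per variable.

```python
import random

def latin_hypercube(n_samples: int, n_vars: int, seed: int = 0) -> list:
    """Minimal Latin hypercube on the unit hypercube: one sample per stratum
    per variable, with strata randomly paired across variables."""
    rng = random.Random(seed)
    columns = []
    for _ in range(n_vars):
        # One uniform draw inside each of the n equal-probability strata.
        col = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(col)  # decouple the stratum pairing across variables
        columns.append(col)
    return [list(row) for row in zip(*columns)]

if __name__ == "__main__":
    for row in latin_hypercube(5, 2):
        print([f"{x:.2f}" for x in row])
```

Samples on the unit hypercube would then be mapped through the inverse CDFs of the target distributions, which is the role the LHS library plays for callers such as DAKOTA.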

An Object-Oriented Approach to Risk and Reliability Analysis: Methodology and Aviation Safety Applications

SIMULATION

Wyss, Gregory D.; Duran, Felicia A.; Dandini, Vincent J.

This article describes how features of event tree analysis and Monte Carlo–based discrete event simulation can be combined with concepts from object-oriented analysis to develop a new risk assessment methodology, with some of the best features of each. The resultant object-based event scenario tree (OBEST) methodology enables an analyst to rapidly construct realistic models for scenarios for which an a priori discovery of event ordering is either cumbersome or impossible. Each scenario produced by OBEST is automatically associated with a likelihood estimate because probabilistic branching is integral to the object model definition. The OBEST methodology is then applied to an aviation safety problem that considers mechanisms by which an aircraft might become involved in a runway incursion incident. The resulting OBEST model demonstrates how a close link between human reliability analysis and probabilistic risk assessment methods can provide important insights into aviation safety phenomenology. © 2004, SAGE Publications. All rights reserved.

OBEST: The Object-Based Event Scenario Tree Methodology

Wyss, Gregory D.; Duran, Felicia A.

Event tree analysis and Monte Carlo-based discrete event simulation have been used in risk assessment studies for many years. This report details how features of these two methods can be combined with concepts from object-oriented analysis to develop a new risk assessment methodology with some of the best features of each. The resultant Object-Based Event Scenario Tree (OBEST) methodology enables an analyst to rapidly construct realistic models for scenarios for which an a priori discovery of event ordering is either cumbersome or impossible (especially those that exhibit inconsistent or variable event ordering, which are difficult to represent in an event tree analysis). Each scenario produced by OBEST is automatically associated with a likelihood estimate because probabilistic branching is integral to the object model definition. The OBEST method uses a recursive algorithm to solve the object model and identify all possible scenarios and their associated probabilities. Since scenario likelihoods are developed directly by the solution algorithm, they need not be computed by statistical inference based on Monte Carlo observations (as required by some discrete event simulation methods). Thus, OBEST is not only much more computationally efficient than these simulation methods, but it also discovers scenarios that have extremely low probabilities as a natural analytical result: scenarios that would likely be missed by a Monte Carlo-based method. This report documents the OBEST methodology, the demonstration software that implements it, and provides example OBEST models for several different application domains, including interactions among failing interdependent infrastructure systems, circuit analysis for fire risk evaluation in nuclear power plants, and aviation safety studies.
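
A toy illustration of the solution strategy described above: exhaustive recursive expansion of probabilistic branches, with each complete scenario carrying an exact probability, so even very rare scenarios are enumerated rather than sampled. The tiny state model and its numbers are invented and stand in for OBEST's much richer object models.

```python
# Hypothetical branching model: each state has probabilistic transitions;
# None marks the end of a scenario. Probabilities are illustrative.
MODEL = {
    "start": [("alarm", 0.99, "respond"), ("no_alarm", 0.01, "breach")],
    "respond": [("contained", 0.95, None), ("escalates", 0.05, "breach")],
    "breach": [("loss", 1.0, None)],
}

def enumerate_scenarios(state, prob=1.0, path=()):
    """Yield (event path, probability) for every complete scenario."""
    if state is None:
        yield path, prob
        return
    for event, p, next_state in MODEL[state]:
        yield from enumerate_scenarios(next_state, prob * p, path + (event,))

if __name__ == "__main__":
    for path, p in enumerate_scenarios("start"):
        print(f"{' -> '.join(path)}: {p:.5f}")
```

Because the probabilities multiply along each branch, the rare no-alarm scenario (probability 0.01) appears explicitly in the output, exactly the kind of low-probability scenario a sampling-based method might miss.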

Cassini Spacecraft Uncertainty Analysis Data and Methodology Review and Update/Volume 1: Updated Parameter Uncertainty Models for the Consequence Analysis

Wheeler, Timothy A.; Wyss, Gregory D.; Harper, Frederick T.

Uncertainty distributions for specific parameters of the Cassini General Purpose Heat Source Radioisotope Thermoelectric Generator (GPHS-RTG) Final Safety Analysis Report consequence risk analysis were revised and updated. The revisions and updates were done for all consequence parameters for which relevant information exists from the joint project on Probabilistic Accident Consequence Uncertainty Analysis by the United States Nuclear Regulatory Commission and the Commission of European Communities.
