Publications

47 Results

Applications of evidence theory to issues with nuclear weapons

PSA 2019 - International Topical Meeting on Probabilistic Safety Assessment and Analysis

Darby, John

Over the last 13 years, at Sandia National Laboratories we have applied the belief/plausibility measure from evidence theory to estimate the uncertainty for numerous safety and security issues for nuclear weapons. For such issues we have significant epistemic uncertainty and are unable to assign probability distributions. We have developed and applied custom software to implement the belief/plausibility measure of uncertainty. For safety issues we perform a quantitative evaluation, and for security issues (e.g., terrorist acts) we use linguistic variables (fuzzy sets) combined with approximate reasoning. We perform the following steps: (1) train Subject Matter Experts (SMEs) on the assignment of evidence; (2) work with the SMEs to identify the concern(s), i.e., the top-level variable(s); and (3) work with the SMEs to identify lower-level variables and their functional relationship(s) to the top-level variable(s). The SMEs then gather their State of Knowledge (SOK) and assign evidence to the lower-level variables. Using this information, we evaluate the variables with custom software and produce an estimate for the top-level variable(s), including uncertainty. We have extended the Kaplan-Garrick risk triplet approach to use the belief/plausibility measure of uncertainty.
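The belief/plausibility evaluation described above can be sketched with a small Dempster-Shafer computation. This is an illustrative toy, not the custom Sandia software; the frame of discernment, the "safety margin" variable, and the evidence masses are hypothetical.

```python
def belief(query, masses):
    """Bel(A): total mass of focal elements wholly contained in A."""
    return sum(m for focal, m in masses.items() if focal <= query)

def plausibility(query, masses):
    """Pl(A): total mass of focal elements that intersect A."""
    return sum(m for focal, m in masses.items() if focal & query)

# Hypothetical frame: a safety margin judged "low", "medium", or "high".
frame = frozenset({"low", "medium", "high"})

# Hypothetical SME evidence (a basic mass assignment summing to 1).
masses = {
    frozenset({"low"}): 0.2,             # evidence pointing to "low"
    frozenset({"medium", "high"}): 0.5,  # evidence the SME cannot split
    frame: 0.3,                          # residual ignorance
}

acceptable = frozenset({"medium", "high"})
print(belief(acceptable, masses))        # lower bound on support
print(plausibility(acceptable, masses))  # upper bound on support
```

The gap between belief and plausibility is what a single probability number cannot express: here the evidence supports "acceptable" to degree 0.5 at minimum, while up to 0.8 of the mass is consistent with it.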


Process for estimating likelihood and confidence in post detonation nuclear forensics

Craft, Charles M.; Darby, John

Technical nuclear forensics (TNF) must provide answers to questions of concern to the broader community, including an estimate of uncertainty. There is significant uncertainty associated with post-detonation TNF. The uncertainty consists of a great deal of epistemic (state of knowledge) as well as aleatory (random) uncertainty, and many of the variables of interest are linguistic (words) and not numeric. We provide a process by which TNF experts can structure their process for answering questions and provide an estimate of uncertainty. The process uses belief and plausibility, fuzzy sets, and approximate reasoning.


Risk-based cost-benefit analysis for security assessment problems

Vulnerability, Uncertainty, and Risk: Analysis, Modeling, and Management - Proceedings of the ICVRAM 2011 and ISUMA 2011 Conferences

Wyss, Gregory D.; Hinton, John P.; Dunphy-Guzman, Katherine; Clem, John; Darby, John; Silva, Consuelo; Mitchiner, Kim

Decision-makers want to perform risk-based cost-benefit prioritization of security investments. However, strong nonlinearities in the most common physical security performance metric make it difficult to use for cost-benefit analysis. This paper extends the definition of risk for security applications and embodies this definition in a new but related security risk metric based on the degree of difficulty an adversary will encounter to successfully execute the most advantageous attack scenario. This metric is compatible with traditional cost-benefit optimization algorithms, and can lead to an objective risk-based cost-benefit method for security investment option prioritization. It also enables decision-makers to more effectively communicate the justification for their investment decisions with stakeholders and funding authorities. Copyright © ASCE 2011.


Techniques to evaluate the importance of common cause degradation on reliability and safety of nuclear weapons

Darby, John

As the nuclear weapon stockpile ages, there is increased concern about common degradation ultimately leading to common cause failure of multiple weapons, which could significantly impact reliability or safety. Current acceptable limits for the reliability and safety of a weapon are based on upper limits on the probability of failure of an individual item, assuming that failures among items are independent. We expanded the current acceptable limits to apply to situations with common cause failure. Then, we developed a simple screening process to quickly assess the importance of observed common degradation for both reliability and safety, to determine if further action is necessary. The screening process conservatively assumes that common degradation is common cause failure. For a population of between 100 and 5000 items, we applied the screening process and concluded the following. In general, for a reliability requirement specified in the Military Characteristics (MCs) for a specific weapon system, common degradation is of concern if more than 100(1-x)% of the weapons are susceptible to common degradation, where x is the required reliability expressed as a fraction. Common degradation is of concern for the safety of a weapon subsystem if more than 0.1% of the population is susceptible to common degradation. Common degradation is of concern for the safety of a weapon component or overall weapon system if two or more components/weapons in the population are susceptible to degradation. Finally, we developed a technique for detailed evaluation of common degradation leading to common cause failure for situations determined to be of concern by the screening process. The detailed evaluation requires that best estimates of common cause and independent failure probabilities be produced. Using these techniques, observed common degradation can be evaluated for effects on reliability and safety.
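The screening thresholds stated above can be expressed in a few lines of code. This is a minimal sketch of the stated rules only; the example population size and reliability requirement are hypothetical.

```python
def reliability_concern(n_susceptible, population, required_reliability):
    """Concern if more than 100*(1-x)% of the weapons are susceptible,
    where x is the required reliability expressed as a fraction."""
    return n_susceptible / population > (1.0 - required_reliability)

def subsystem_safety_concern(n_susceptible, population):
    """Concern if more than 0.1% of the population is susceptible."""
    return n_susceptible / population > 0.001

def component_safety_concern(n_susceptible):
    """Concern if two or more components/weapons are susceptible."""
    return n_susceptible >= 2

# Hypothetical example: population of 1000, required reliability 0.95.
print(reliability_concern(60, 1000, 0.95))   # 6% susceptible > 5% -> True
print(subsystem_safety_concern(1, 1000))     # 0.1% is not > 0.1% -> False
print(component_safety_concern(2))           # two susceptible -> True
```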


Sample sizes for confidence limits for reliability

Darby, John

We recently performed an evaluation of the implications of a reduced stockpile of nuclear weapons for the surveillance needed to support estimates of reliability. We found that one technique developed at Sandia National Laboratories (SNL) underestimates the required sample size for systems-level testing. For a large population the discrepancy is not important, but for a small population it is. We found that another technique used by SNL provides the correct required sample size. For systems-level testing of nuclear weapons, samples are selected without replacement, and the hypergeometric probability distribution applies. Both of the SNL techniques focus on samples without defects from sampling without replacement. We generalized the second SNL technique to cases with defects in the sample. We created a computer program in Mathematica to automate the calculation of confidence for reliability. We also evaluated sampling with replacement, where the binomial probability distribution applies.
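The sampling-without-replacement case described above can be sketched with the hypergeometric distribution for a defect-free sample. This is an illustrative calculation, not the SNL Mathematica program; the population size, defect count, and confidence level below are hypothetical.

```python
from math import comb  # exact binomial coefficients

def p_zero_defects(population, defects, sample):
    """Hypergeometric P(no defects in the sample | `defects` exist),
    for sampling without replacement."""
    if sample > population - defects:
        return 0.0
    return comb(population - defects, sample) / comb(population, sample)

def required_sample_size(population, defects, confidence):
    """Smallest defect-free sample size giving the stated confidence
    that fewer than `defects` defective items exist in the population."""
    for n in range(1, population + 1):
        if 1.0 - p_zero_defects(population, defects, n) >= confidence:
            return n
    return population

# Hypothetical example: population of 100; demonstrate with 90%
# confidence that fewer than 5 items are defective, given a
# defect-free sample.
print(required_sample_size(100, 5, 0.90))
```

For a small population the finite-population correction built into the hypergeometric distribution matters; a binomial (with-replacement) approximation overstates the sample size needed.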


Evaluation of containment failure and cleanup time for Pu shots on the Z machine

Darby, John

Between November 30 and December 11, 2009, an evaluation was performed of the probability of containment failure, and of the time to clean up contamination of the Z machine given failure, for plutonium (Pu) experiments on the Z machine at Sandia National Laboratories (SNL). Due to the unique nature of the problem, there is little quantitative information available for the likelihood of failure of containment components or for the cleanup time. Information for the evaluation was obtained from Subject Matter Experts (SMEs) at the Z machine facility. The SMEs provided the State of Knowledge (SOK) for the evaluation. There is significant epistemic (state of knowledge) uncertainty associated with the events that comprise both failure of containment and cleanup. To capture epistemic uncertainty and to allow the SMEs to reason at the fidelity of the SOK, we used the belief/plausibility measure of uncertainty for this evaluation. We quantified two variables: the probability that the Pu containment system fails given a shot on the Z machine, and the time to clean up Pu contamination in the Z machine given failure of containment. We identified dominant contributors for both the cleanup time and the probability of containment failure. These results will be used by SNL management to decide the course of action for conducting the Pu experiments on the Z machine.


Capturing the uncertainty in adversary attack simulations

Darby, John; Berry, Robert B.; Brooks, Traci B.

This work provides a comprehensive technique to evaluate uncertainty in PI (the probability of interruption), resulting in a more realistic evaluation of PI, thereby requiring fewer resources to address scenarios and allowing resources to be used across more scenarios. For a given set of adversary resources, two types of uncertainty are associated with PI for a scenario: (1) aleatory (random) uncertainty for detection probabilities and time delays and (2) epistemic (state of knowledge) uncertainty for the adversary resources applied during an attack. Adversary resources consist of attributes (such as equipment and training) and knowledge about the security system; to date, most evaluations have assumed an adversary with very high resources, adding to the conservatism in the evaluation of PI. The aleatory uncertainty in PI is addressed by assigning probability distributions to detection probabilities and time delays. A numerical sampling technique is used to evaluate PI, addressing the repeated-variable dependence in the equation for PI.
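The numerical sampling treatment of aleatory uncertainty described above can be sketched as a small Monte Carlo loop. This is an illustrative model, not the actual simulation code; the layer structure, the choice of distributions, and all parameter values are assumptions made for the example.

```python
import random

# Hypothetical protection layers: (mean detection probability, mean delay, s).
layers = [(0.5, 60.0), (0.7, 120.0), (0.9, 30.0)]
response_time = 150.0  # assumed guard-force response time, seconds
trials = 100_000

random.seed(1)
interrupted = 0
for _ in range(trials):
    # Sample the uncertain detection probabilities and delays each trial.
    p_det = [min(1.0, max(0.0, random.gauss(p, 0.05))) for p, _ in layers]
    delays = [random.expovariate(1.0 / d) for _, d in layers]
    remaining = sum(delays)  # delay still ahead of the adversary
    for p, d in zip(p_det, delays):
        # Interruption: detection at this layer with enough delay
        # remaining for the response force to arrive in time.
        if random.random() < p and remaining >= response_time:
            interrupted += 1
            break
        remaining -= d

print(f"Estimated P_I = {interrupted / trials:.3f}")
```

Sampling the whole scenario per trial, rather than multiplying per-layer terms, is what handles the repeated-variable dependence: the same sampled delay appears in both the detection timing and the remaining-delay comparison.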


Qualitative evaluation of the accuracy of maps for release of hazardous materials

Darby, John

The LinguisticBelief© software tool developed by Sandia National Laboratories was applied to provide a qualitative evaluation of the accuracy of various maps that provide information on releases of hazardous material, especially radionuclides. The methodology, "Uncertainty for Qualitative Assessments," includes uncertainty in the evaluation. The software tool uses the mathematics of fuzzy sets, approximate reasoning, and the belief/plausibility measure of uncertainty. SNL worked cooperatively with the Remote Sensing Laboratory (RSL) and the National Atmospheric Release Advisory Center (NARAC) at Lawrence Livermore National Laboratory (LLNL) to develop models for three types of maps for use in this study. SNL and RSL developed the maps for "Accuracy Plot for Area" and "Aerial Monitoring System (AMS) Product Confidence". SNL and LLNL developed the "LLNL Model". For each of the three maps, experts from RSL and LLNL created a model in the LinguisticBelief software. This report documents the three models and provides evaluations of maps associated with the models, using example data. Future applications will involve applying the models to actual maps to provide a qualitative evaluation of their accuracy, including uncertainty, for use by decision makers. A "Quality Thermometer" technique was developed to rank-order the quality of a set of maps of a given type. A technique for pooling opinions from different experts was provided using the PoolEvidence© software.


Framework for Integrating Safety, Operations, Security, and Safeguards in the Design and Operation of Nuclear Facilities

Darby, John; Horak, Karl E.; Tolk, Keith M.; Whitehead, Donnie W.; LaChance, Jeffrey L.

The US is currently on the brink of a nuclear renaissance that will result in near-term construction of new nuclear power plants. In addition, the Department of Energy’s (DOE) ambitious new Global Nuclear Energy Partnership (GNEP) program includes facilities for reprocessing spent nuclear fuel and reactors for transmuting safeguards material. The use of nuclear power and material has inherent safety, security, and safeguards (SSS) concerns that can impact the operation of the facilities. Recent concern over terrorist attacks and nuclear proliferation led to an increased emphasis on security and safeguard issues as well as the more traditional safety emphasis. To meet both domestic and international requirements, nuclear facilities include specific SSS measures that are identified and evaluated through the use of detailed analysis techniques. In the past, these individual assessments have not been integrated, which led to inefficient and costly design and operational requirements. This report provides a framework for a new paradigm where safety, operations, security, and safeguards (SOSS) are integrated into the design and operation of a new facility to decrease cost and increase effectiveness. Although the focus of this framework is on new nuclear facilities, most of the concepts could be applied to any new, high-risk facility.


LinguisticBelief: a Java application for linguistic evaluation using belief, fuzzy sets, and approximate reasoning

Darby, John

LinguisticBelief is a Java computer code that evaluates combinations of linguistic variables using an approximate reasoning rule base. Each variable comprises fuzzy sets, and a rule base describes the reasoning on combinations of the variables' fuzzy sets. Uncertainty is considered and propagated through the rule base using the belief/plausibility measure. The mathematics of fuzzy sets, approximate reasoning, and belief/plausibility are complex. Without an automated tool, this complexity precludes their application to all but the simplest of problems. LinguisticBelief automates the use of these techniques, allowing complex problems to be evaluated easily. LinguisticBelief can be used free of charge on any Windows XP machine. This report documents the use and structure of the LinguisticBelief code and the deployment package for installation on client machines.
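The kind of rule-base evaluation LinguisticBelief automates can be sketched (without the belief/plausibility propagation) as a minimal Mamdani-style calculation. The variables, fuzzy-set membership degrees, and rules below are hypothetical examples, not taken from the tool.

```python
# Membership of each input variable's value in its fuzzy sets ([0, 1]).
threat = {"low": 0.2, "high": 0.8}
vulnerability = {"low": 0.6, "high": 0.4}

# Approximate-reasoning rule base:
# (threat set, vulnerability set) -> risk set.
rules = {
    ("low", "low"): "low",
    ("low", "high"): "medium",
    ("high", "low"): "medium",
    ("high", "high"): "high",
}

# Min for rule firing strength, max to combine rules that share an
# output set (Mamdani-style inference).
risk = {}
for (t_set, v_set), r_set in rules.items():
    strength = min(threat[t_set], vulnerability[v_set])
    risk[r_set] = max(risk.get(r_set, 0.0), strength)

print(risk)  # membership of "risk" in each of its fuzzy sets
```

The full tool additionally attaches evidence (belief/plausibility) to the fuzzy-set assignments and propagates it through the rule base, which is the part that is impractical to do by hand.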


Linguistic evaluation of terrorist scenarios: example application

Darby, John

In 2005, a group of international decision makers developed a manual process for evaluating terrorist scenarios. That process has been implemented in the approximate reasoning Java software tool LinguisticBelief, released in FY2007. One purpose of this report is to show the flexibility of the LinguisticBelief tool in automating a custom model developed by others. LinguisticBelief evaluates combinations of linguistic variables using an approximate reasoning rule base. Each variable comprises fuzzy sets, and a rule base describes the reasoning on combinations of the variables' fuzzy sets. Uncertainty is considered and propagated through the rule base using the belief/plausibility measure. This report documents the evaluation and rank-ordering of several example terrorist scenarios for the existing process implemented in our software. LinguisticBelief captures and propagates uncertainty and allows easy development of an expanded, more detailed evaluation, neither of which is feasible using a manual evaluation process. In conclusion, the LinguisticBelief tool is able to (1) automate an expert-generated reasoning process for the evaluation of the risk of terrorist scenarios, including uncertainty, and (2) quickly evaluate and rank-order scenarios of concern using that process.


Evaluation of terrorist risk using belief and plausibility

Proceedings of the 8th International Conference on Probabilistic Safety Assessment and Management, PSAM 2006

Darby, John

The risk for a particular threat scenario can be evaluated as Risk = fA * (1 - PE) * C, where fA is the frequency of the attack, PE is the probability that the security system detects and neutralizes the attack, and C is the consequence if the attack is not neutralized. Risk has units of consequence per unit time. Most evaluations of the effectiveness of a security system assume that the threat scenario is implemented and evaluate the conditional risk, given the attack. As the Design Basis Threat (DBT) has increased, traditional physical security starting at the facility boundary is hard pressed to counter the increased resources available to the adversary. Other aspects of security need to be considered, including the use of intelligence to detect the threat during its formulation stage. Most of the evaluations to date have used a probabilistic approach, but for an overall evaluation of Risk the fidelity of the information available is insufficient to support an entirely probabilistic approach. For example, it is difficult to assign a probability measure to the frequency of an attack, fA; a possibility measure is more appropriate. Both probability and possibility are special cases of belief, and using a belief measure for the three factors of Risk allows the risk to be evaluated, including uncertainty, consistent with the fidelity of the information available. If all terms in the risk equation are modeled with probability, the convolution process using belief is equivalent to standard convolution of probability distributions; if all terms are modeled with possibility, it is equivalent to convolution of possibility distributions. A computer program named BeliefRisk has been written in Mathematica to implement the evaluation of Risk. Each factor in the Risk equation is modeled as a discrete set of values, and a distribution reflecting uncertainty is assigned to each set of values. Probability, possibility, or belief can be used as the metric for uncertainty for each factor. Risk is calculated by convolving the uncertainty distributions for each constituent factor of risk using the mathematics of belief. A belief/plausibility distribution and an expected value interval are calculated for Risk, as are belief and plausibility exceedance values. Belief can also be calculated for an infinite set on the reals given evidence on a finite number of intervals in the set. A computer program named BeliefConvolution has been written in Java to evaluate belief and plausibility for an algebraic combination of variables, each with evidence assigned to intervals of real numbers. For evaluation of Risk, BeliefConvolution provides results identical to BeliefRisk. BeliefConvolution can aggregate evidence into either linear or logarithmic bins and can calculate belief and plausibility for both crisp and fuzzy sets. © 2006 by ASME.
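When every factor carries an ordinary probability distribution, the belief convolution described above reduces to standard convolution of discrete distributions through Risk = fA * (1 - PE) * C. The following sketch (not BeliefRisk or BeliefConvolution) illustrates just that probabilistic special case; all distribution values are hypothetical.

```python
from itertools import product
from collections import defaultdict

# Discrete uncertainty distributions: {value: probability}.
f_attack = {0.01: 0.7, 0.1: 0.3}       # fA: attacks per year
p_effective = {0.5: 0.4, 0.9: 0.6}     # PE: P(system neutralizes attack)
consequence = {10.0: 0.5, 100.0: 0.5}  # C: consequence units

# Convolve the three factors through Risk = fA * (1 - PE) * C.
risk_dist = defaultdict(float)
for (fa, p1), (pe, p2), (c, p3) in product(
        f_attack.items(), p_effective.items(), consequence.items()):
    risk_dist[fa * (1.0 - pe) * c] += p1 * p2 * p3

expected_risk = sum(v * p for v, p in risk_dist.items())
print(dict(risk_dist))
print(expected_risk)
```

With belief in place of probability, evidence is assigned to sets or intervals of values rather than to point values, and the same combination step yields belief/plausibility bounds instead of a single distribution.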


Critical infrastructure systems of systems assessment methodology

Depoy, Jennifer M.; Phelan, James M.; Sholander, Peter E.; Varnado, G.B.; Wyss, Gregory D.; Darby, John; Walter, Andrew W.

Assessing the risk of malevolent attacks against large-scale critical infrastructures requires modifications to existing methodologies that separately consider physical security and cyber security. This research has developed a risk assessment methodology that explicitly accounts for both physical and cyber security, while preserving the traditional security paradigm of detect, delay, and respond. This methodology also accounts for the condition that a facility may be able to recover from or mitigate the impact of a successful attack before serious consequences occur. The methodology uses evidence-based techniques (which are a generalization of probability theory) to evaluate the security posture of the cyber protection systems. Cyber threats are compared against cyber security posture using a category-based approach nested within a path-based analysis to determine the most vulnerable cyber attack path. The methodology summarizes the impact of a blended cyber/physical adversary attack in a conditional risk estimate where the consequence term is scaled by a ''willingness to pay'' avoidance approach.


Evaluation of risk from acts of terrorism: the adversary/defender model using belief and fuzzy sets

Darby, John

Risk from an act of terrorism is a combination of the likelihood of an attack, the likelihood of success of the attack, and the consequences of the attack. The considerable epistemic uncertainty in each of these three factors can be addressed using the belief/plausibility measure of uncertainty from the Dempster/Shafer theory of evidence. The adversary determines the likelihood of the attack. The success of the attack and the consequences of the attack are determined by the security system and mitigation measures put in place by the defender. This report documents a process for evaluating risk of terrorist acts using an adversary/defender model with belief/plausibility as the measure of uncertainty. Also, the adversary model is a linguistic model that applies belief/plausibility to fuzzy sets used in an approximate reasoning rule base.
