Publications

43 Results


A causal perspective on reliability assessment

Reliability Engineering and System Safety

Hund, Lauren; Schroeder, Benjamin B.

Causality in an engineered system pertains to how a system output changes due to a controlled change or intervention on the system or system environment. Engineered system designs reflect a causal theory regarding how a system will work, and predicting the reliability of such systems typically requires knowledge of this underlying causal structure. The aim of this work is to introduce causal modeling tools that inform reliability predictions based on biased data sources. We present a novel application of the popular structural causal modeling (SCM) framework to reliability estimation in an engineering application, illustrating how this framework can inform whether reliability is estimable and how to estimate reliability given a set of data and assumptions about the subject matter and data-generating mechanism. When data are insufficient for estimation, sensitivity studies based on problem-specific knowledge can indicate how much reliability estimates can change due to biases in the data and what should be collected next to provide the most additional information. We apply the approach to a pedagogical example related to a real, but proprietary, engineering application, considering how two types of biases in data can influence a reliability calculation.
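To make the bias-adjustment idea concrete, here is a minimal sketch (not the paper's code) of correcting a reliability estimate for a biased data source by reweighting stratified test outcomes toward the fielded environment mix; the environments, pass probabilities, and mixing weights are all hypothetical.

```python
# Minimal sketch: adjusting a reliability estimate for selection bias by
# weighting test outcomes by the fielded (target) environment distribution.
# All distributions and names here are hypothetical illustrations.
import numpy as np

rng = np.random.default_rng(0)

# Assumed causal mechanism: pass probability depends on environment E.
p_pass = {"nominal": 0.99, "harsh": 0.90}

# The test campaign oversamples the harsh environment (biased data source).
test_env = rng.choice(["nominal", "harsh"], size=500, p=[0.3, 0.7])
test_pass = rng.random(500) < np.vectorize(p_pass.get)(test_env)

naive = test_pass.mean()  # ignores the sampling bias

# Fielded use is mostly nominal; reweight strata to the target mix.
field_mix = {"nominal": 0.9, "harsh": 0.1}
adjusted = sum(field_mix[e] * test_pass[test_env == e].mean()
               for e in field_mix)

print(f"naive reliability:    {naive:.3f}")
print(f"adjusted reliability: {adjusted:.3f}")  # near 0.9*0.99 + 0.1*0.90
```

The naive pass rate reflects the oversampled harsh environment, while the reweighted estimate recovers the reliability implied by the target use distribution.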


Statistically Rigorous Uncertainty Quantification for Physical Parameter Model Calibration with Functional Output

Hund, Lauren; Brown, Justin L.

In experiments conducted on the Z-machine at Sandia National Laboratories, dynamic material properties cannot be analyzed using traditional analytic methods, necessitating solving an inverse problem. Bayesian model calibration is a statistical framework for solving an inverse problem to estimate parameters input into a computational model in the presence of multiple uncertainties. Disentangling input parameter uncertainty and model misspecification is often a poorly identified problem. When using computational models for physical parameter estimation, the issue of parameter identifiability must be carefully considered to obtain accurate and precise estimates of physical parameters. Additionally, in dynamic material properties applications, the experimental output is a function: velocity over time. While we can sample an arbitrarily large number of points from the measured velocity, these curves only contain a finite amount of information about the calibration parameters. In this report, we propose modifications to the Bayesian model calibration framework to simplify and improve the estimation of physical parameters with functional outputs. Specifically, we propose scaling the likelihood function by an effective sample size rather than modeling the discrepancy function, and modularizing nuisance input parameters that are weakly identified. We evaluate the performance of these proposed methods using a statistical simulation study and then apply them to estimate parameters of the tantalum equation of state. We conclude that these proposed methods provide simple, fast, and statistically valid alternatives to the full Bayesian model calibration procedure, and that they can be used to estimate equation-of-state parameters for tantalum.
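As a rough illustration of the proposed likelihood scaling (a sketch under assumptions, not the report's implementation), the snippet below tempers a Gaussian log-likelihood by n_eff/n inside a random-walk Metropolis sampler; the simulator, noise level, and effective sample size are hypothetical stand-ins.

```python
# Minimal sketch: a Metropolis sampler whose Gaussian log-likelihood is
# scaled by an effective sample size ratio (n_eff / n), down-weighting the
# many correlated points on a velocity curve. All values are hypothetical.
import numpy as np

rng = np.random.default_rng(1)

def simulator(theta, t):
    return theta * np.exp(-t)  # stand-in for the physics model

t = np.linspace(0, 5, 400)
y = simulator(2.0, t) + rng.normal(0, 0.05, t.size)  # synthetic "data"

n, n_eff, sigma = t.size, 25, 0.05  # assume ~25 effective observations

def log_post(theta):
    if not (0 < theta < 10):  # flat prior on (0, 10)
        return -np.inf
    resid = y - simulator(theta, t)
    loglik = -0.5 * np.sum(resid**2) / sigma**2
    return (n_eff / n) * loglik  # ESS-scaled likelihood

theta, chain = 1.0, []
for _ in range(5000):
    prop = theta + rng.normal(0, 0.1)
    if np.log(rng.random()) < log_post(prop) - log_post(theta):
        theta = prop
    chain.append(theta)

print(np.mean(chain[1000:]), np.std(chain[1000:]))
```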


Bayesian Model Calibration for Extrapolative Prediction via Gibbs Posteriors

Woody, Spencer; Ghaffari, Novin; Hund, Lauren

The current standard Bayesian approach to model calibration, which assigns a Gaussian process prior to the discrepancy term, often suffers from unidentifiability, computational complexity, and instability. When the goal is to quantify uncertainty in physical parameters for extrapolative prediction, there is no need to perform inference on the discrepancy term. With this in mind, we introduce Gibbs posteriors as an alternative Bayesian method for model calibration, which updates the prior with a loss function connecting the data to the parameter. The target of inference is the physical parameter value that minimizes the expected loss. We propose to tune the loss scale of the Gibbs posterior to maintain nominal frequentist coverage under assumptions on the form of model discrepancy, and we present a bootstrap implementation for approximating coverage rates. Our approach is highly modular, allowing an analyst to easily encode a wide variety of such assumptions. Furthermore, we provide a principled method of combining posteriors calculated from data subsets. We apply our methods to data from an experiment measuring the material properties of tantalum.
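A minimal sketch of the Gibbs-posterior update, assuming a squared-error loss and a fixed loss scale w (the paper tunes the scale, via bootstrap, for frequentist coverage); the data and parameter grid here are hypothetical.

```python
# Minimal sketch of a Gibbs posterior: the prior is updated through
# exp(-w * loss) rather than a full likelihood. Here w is simply fixed
# and the data and loss function are hypothetical illustrations.
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(3.0, 1.0, size=50)  # synthetic observations

theta = np.linspace(0, 6, 601)  # grid over the parameter
prior = np.ones_like(theta)     # flat prior

loss = np.array([np.mean((x - th) ** 2) for th in theta])  # squared-error loss
w = 10.0  # loss scale (tuned for coverage in practice)

post = prior * np.exp(-w * (loss - loss.min()))
post /= np.trapz(post, theta)  # normalize on the grid

mean = np.trapz(theta * post, theta)
print(f"Gibbs posterior mean: {mean:.3f}")  # near the loss minimizer (~3)
```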


Estimating material properties under extreme conditions by using Bayesian model calibration with functional outputs

Journal of the Royal Statistical Society, Series C: Applied Statistics

Brown, Justin L.; Hund, Lauren

Dynamic material properties experiments provide access to the most extreme temperatures and pressures attainable in a laboratory setting; the data from these experiments are often used to improve our understanding of material models at these extreme conditions. We apply Bayesian model calibration to dynamic material property applications where the experimental output is a function: velocity over time. This framework can accommodate more uncertainties and facilitate analysis of new types of experiments relative to techniques traditionally used to analyse dynamic material experiments. However, implementation of Bayesian model calibration requires more sophisticated statistical techniques, because of the functional nature of the output as well as parameter and model discrepancy identifiability. We propose a novel Bayesian model calibration process to simplify and improve the estimation of the material property calibration parameters. Specifically, we propose scaling the likelihood function by an effective sample size rather than modelling the auto-correlation function to accommodate the functional output. Additionally, we propose sensitivity analyses by using the notion of 'modularization' to assess the effect of experiment-specific nuisance input parameters on estimates of the physical parameters. Furthermore, the Bayesian model calibration framework proposed is applied to dynamic compression of tantalum to extreme pressures, and we conclude that the procedure results in simple, fast and valid inferences on the material properties for tantalum.
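One common way to obtain such an effective sample size, shown here as an illustrative sketch rather than the paper's exact procedure, is the autocorrelation-based formula n_eff = n / (1 + 2 * sum(rho_k)) applied to the functional residuals; the AR(1) residual process below is a hypothetical stand-in for a velocity-curve misfit.

```python
# Minimal sketch: estimating an effective sample size from residual
# autocorrelation, then using n_eff / n to scale the likelihood instead
# of modelling the autocorrelation directly. Residuals are synthetic.
import numpy as np

rng = np.random.default_rng(3)

# Synthetic AR(1) residuals standing in for a velocity-curve misfit.
n, phi = 1000, 0.9
resid = np.zeros(n)
for i in range(1, n):
    resid[i] = phi * resid[i - 1] + rng.normal()

def ess(x, max_lag=100):
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[x.size - 1:]
    acf = acf[:max_lag] / acf[0]
    if (acf < 0).any():                      # truncate at first negative lag
        acf = acf[: np.argmax(acf < 0)]
    return x.size / (1.0 + 2.0 * acf[1:].sum())

n_eff = ess(resid)
print(f"n = {n}, estimated n_eff ~ {n_eff:.0f}")  # likelihood scale = n_eff/n
```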


Applying Image Clutter Metrics to Domain-Specific Expert Visual Search

Speed, Ann E.; Stracuzzi, David J.; Lee, Jina; Hund, Lauren

Visual clutter metrics play an important role in both the design of information visualizations and in the continued theoretical development of visual search models. In visualization design, clutter metrics provide a mathematical prediction of the complexity of the display and the difficulty associated with locating and identifying key pieces of information. In visual search models, they offer a proxy to set size, which represents the number of objects in the search scene, but is difficult to estimate in real-world imagery. In this article, we first briefly review the literature on clutter metrics and then contribute our own results drawn from studies in two security-oriented visual search domains: airport X-ray imagery and radar imagery. We analyze our results with an eye toward bridging the gap between the scene features evaluated by current clutter metrics and the features that are relevant to our security tasks. The article concludes with a brief discussion of possible research steps to close this gap.
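For concreteness, here is one simple clutter proxy, edge density, as an illustrative sketch; it is not one of the specific metrics evaluated in the article, and the two synthetic scenes are hypothetical.

```python
# Minimal sketch of a simple clutter proxy: edge density, the fraction of
# pixels whose gradient magnitude exceeds a relative threshold.
import numpy as np

rng = np.random.default_rng(4)

def edge_density(img, thresh=0.2):
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    return (mag > thresh * mag.max()).mean()

sparse = np.zeros((128, 128)); sparse[60:70, 60:70] = 1.0  # one object
cluttered = rng.random((128, 128))                         # dense noise

print(f"sparse scene:    {edge_density(sparse):.3f}")
print(f"cluttered scene: {edge_density(cluttered):.3f}")
```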


Robust approaches to quantification of margin and uncertainty for sparse data

Hund, Lauren; Schroeder, Benjamin B.; Rumsey, Kelin; Murchison, Nicole

Characterizing the tails of probability distributions plays a key role in quantification of margins and uncertainties (QMU), where the goal is characterization of low probability, high consequence events based on continuous measures of performance. When data are collected using physical experimentation, probability distributions are typically fit using statistical methods based on the collected data, and these parametric distributional assumptions are often used to extrapolate about the extreme tail behavior of the underlying probability distribution. In this project, we characterize the risk associated with such tail extrapolation. Specifically, we conducted a scaling study to demonstrate the large magnitude of the risk; then, we developed new methods for communicating risk associated with tail extrapolation from unvalidated statistical models; lastly, we proposed a Bayesian data-integration framework to mitigate tail extrapolation risk through integrating additional information. We conclude that decision-making using QMU is a complex process that cannot be achieved using statistical analyses alone.
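A small simulation makes the extrapolation risk tangible: fitting a normal distribution to modest data drawn from a heavier-tailed source can understate a far-tail failure probability by orders of magnitude. The distributions and threshold below are hypothetical illustrations, not the project's models.

```python
# Minimal sketch of tail-extrapolation risk: a normal fit to 50 points
# from a t(4) source badly understates a far-tail probability.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

data = stats.t.rvs(df=4, size=50, random_state=rng)  # true source: t(4)
mu, sigma = data.mean(), data.std(ddof=1)            # normal fit

threshold = 6.0                                      # far-tail requirement
p_fit = stats.norm.sf(threshold, mu, sigma)          # extrapolated estimate
p_true = stats.t.sf(threshold, df=4)                 # actual tail mass

print(f"fitted normal tail prob: {p_fit:.2e}")
print(f"true t(4) tail prob:     {p_true:.2e}")      # far larger
```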


xLPR Post-Processing Documentation

Martin, Nevin S.; Lewis, John R.; Hund, Lauren

The outputs available in the xLPR Version 2.0 code can be analyzed using statistical techniques that have been developed to compare sampling scheme selection, identify inputs for importance sampling, and assess result convergence and uncertainty. These techniques were developed and piloted for both the xLPR Scenario Analysis (SA) Report and the xLPR Sensitivity Analysis Template. This document provides a walk-through of the post-processing R code that was used to generate the results and figures presented in these documents.
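The report's post-processing is implemented in R; as a hedged illustration only, the Python analogue below shows one such convergence diagnostic, a running Monte Carlo estimate of a rare-event probability with a normal-approximation confidence band. The event probability and sample counts are hypothetical.

```python
# Minimal sketch of a convergence diagnostic: running estimate of a
# rare-event probability with a 95% normal-approximation band.
import numpy as np

rng = np.random.default_rng(6)

p_true = 0.02
hits = rng.random(20000) < p_true  # per-realization event flags

n = np.arange(1, hits.size + 1)
p_hat = np.cumsum(hits) / n        # running estimate
se = np.sqrt(np.clip(p_hat * (1 - p_hat), 1e-12, None) / n)

for k in (100, 1000, 10000, 20000):
    print(f"n={k:6d}  p_hat={p_hat[k-1]:.4f}  +/- {1.96*se[k-1]:.4f}")
```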


Statistical guidance for setting product specification limits

Proceedings - Annual Reliability and Maintainability Symposium

Hund, Lauren; Campbell, Daniel L.; Newcomer, Justin T.

This document outlines a data-driven probabilistic approach to setting product acceptance testing limits. Product Specification (PS) limits are testing requirements for assuring that the product meets the product requirements. After identifying key manufacturing and performance parameters for acceptance testing, PS limits should be specified for these parameters, with the limits selected to assure that the unit will have a very high likelihood of meeting product requirements (barring any quality defects that would not be detected in acceptance testing). Because the settings in which the product requirements must be met are typically broader than the production acceptance testing space, PS limits should account for the difference between the acceptance testing setting and the worst-case setting. We propose an approach to setting PS limits that is based on demonstrating margin to the product requirement in the worst-case setting in which the requirement must be met. PS limits are then determined by considering the overall margin and uncertainty associated with a component requirement and then balancing this margin and uncertainty between the designer and producer. Specifically, after identifying parameters critical to component performance, we propose setting PS limits using a three-step procedure:

1. Specify the acceptance testing and worst-case use-settings, the performance characteristic distributions in these two settings, and the mapping between these distributions.

2. Determine the PS limit in the worst-case use-setting by considering margin to the requirement and additional (epistemic) uncertainties. This step controls designer risk, namely the risk of producing product that violates requirements.

3. Define the PS limit for product acceptance testing by transforming the PS limit from the worst-case setting to the acceptance testing setting using the mapping between these distributions. Following this step, the producer risk is quantified by estimating the product scrap rate based on the projected acceptance testing distribution.

The approach proposed here provides a framework for documenting the procedure and assumptions used to determine PS limits. This transparency will help inform what actions should occur when a unit violates a PS limit and how limits should change over time.
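The following sketch walks the three steps numerically under stated assumptions; the requirement value, margin factor, linear mapping between settings, and acceptance-testing distribution are all hypothetical.

```python
# Minimal numerical sketch of the three-step PS-limit procedure above.
# Every number here is a hypothetical illustration.
import numpy as np
from scipy import stats

requirement = 100.0      # parameter must stay below this in the worst case
k, epistemic = 2.0, 3.0  # margin factor and epistemic uncertainty

# Step 2: PS limit in the worst-case use-setting (controls designer risk).
ps_worst = requirement - k * epistemic  # 94.0

# Step 3: map to the acceptance-testing setting; assume the worst-case
# value runs 10 units hotter than acceptance testing (hypothetical map).
ps_accept = ps_worst - 10.0  # 84.0

# Producer risk: scrap rate under the projected acceptance distribution.
mu, sigma = 80.0, 2.5
scrap = stats.norm.sf(ps_accept, mu, sigma)
print(f"acceptance PS limit: {ps_accept:.1f}, projected scrap rate: {scrap:.3%}")
```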


xLPR Scenario Analysis Report

Eckert, Aubrey; Lewis, John R.; Brooks, Dusty M.; Martin, Nevin S.; Hund, Lauren; Clark, Andrew J.; Mariner, Paul

This report describes the methods, results, and conclusions of the analysis of 11 scenarios defined to exercise various options available in the xLPR (Extremely Low Probability of Rupture) Version 2.0 code. The scope of the scenario analysis is three-fold: (i) exercise the various options and components comprising xLPR v2.0 and defining each scenario; (ii) develop and exercise methods for analyzing and interpreting xLPR v2.0 outputs; and (iii) exercise the various sampling options available in xLPR v2.0. The simulation workflow template developed during the course of this effort helps to form a basis for the application of the xLPR code to problems with similar inputs and probabilistic requirements, and to address in a systematic manner the three points covered by the scope.
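As a loose illustration of point (iii), the sketch below (not xLPR itself) contrasts simple random sampling with Latin hypercube sampling by the spread of repeated Monte Carlo estimates of a toy output mean.

```python
# Minimal sketch: comparing two sampling options, simple random sampling
# (SRS) vs. Latin hypercube sampling (LHS), on a hypothetical toy model.
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(7)
model = lambda u: np.sin(2 * np.pi * u[:, 0]) + u[:, 1] ** 2  # toy output

def estimate(sampler, n=64):
    return model(sampler(n)).mean()

srs = [estimate(lambda n: rng.random((n, 2))) for _ in range(200)]
lhs_engine = qmc.LatinHypercube(d=2, seed=8)
lhs = [estimate(lambda n: lhs_engine.random(n)) for _ in range(200)]

print(f"SRS std of estimates: {np.std(srs):.4f}")
print(f"LHS std of estimates: {np.std(lhs):.4f}")  # typically smaller
```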
