Publications

A causal perspective on reliability assessment

Reliability Engineering and System Safety

Hund, Lauren H.; Schroeder, Benjamin B.

Causality in an engineered system pertains to how a system output changes due to a controlled change or intervention on the system or the system environment. Engineered system designs reflect a causal theory regarding how a system will work, and predicting the reliability of such systems typically requires knowledge of this underlying causal structure. The aim of this work is to introduce causal modeling tools that inform reliability predictions based on biased data sources. We present a novel application of the popular structural causal modeling (SCM) framework to reliability estimation in an engineering application, illustrating how this framework can inform whether reliability is estimable, and how to estimate it, given a set of data and assumptions about the subject matter and the data-generating mechanism. When data are insufficient for estimation, sensitivity studies based on problem-specific knowledge can indicate how much reliability estimates could change due to biases in the data and which data should be collected next to be most informative. We apply the approach to a pedagogical example related to a real, but proprietary, engineering application, considering how two types of bias in the data can influence a reliability calculation.
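
Below is a minimal sketch of the kind of causal adjustment the abstract describes, assuming one specific bias type (test stresses sampled from a more benign distribution than fielded use). The logistic model, all distributions, and all numbers are illustrative choices, not the paper's actual model.

```python
# A minimal sketch, not the paper's model: standardization (g-formula)
# adjustment of a reliability estimate when test data were collected
# under a biased stress distribution. All names, distributions, and
# numbers here are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# True (unknown) causal mechanism: pass probability falls with stress s.
def p_pass(s):
    return 1.0 / (1.0 + np.exp(2.0 * (s - 1.5)))

# Biased test campaign: stresses sampled mostly at benign levels.
s_test = rng.normal(0.5, 0.3, size=500)
y_test = rng.random(500) < p_pass(s_test)

# Naive estimate ignores that fielded stresses are harsher.
naive = y_test.mean()

# Model P(pass | s) from the test data, then average the fitted model
# over the *use* stress distribution (assumed known); this is the
# back-door / standardization adjustment.
model = LogisticRegression().fit(s_test.reshape(-1, 1), y_test)
s_use = rng.normal(1.2, 0.4, size=10_000)
adjusted = model.predict_proba(s_use.reshape(-1, 1))[:, 1].mean()

print(f"naive reliability:    {naive:.3f}")  # optimistic
print(f"adjusted reliability: {adjusted:.3f}")
```

The naive pass rate is optimistic because it averages over the benign test stresses; standardizing the fitted model over the use-environment stress distribution targets the fielded population instead.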

The need for credibility guidance for analyses quantifying margin and uncertainty

Conference Proceedings of the Society for Experimental Mechanics Series

Schroeder, Benjamin B.; Hund, Lauren H.; Kittinger, Robert

Current quantification of margin and uncertainty (QMU) guidance lacks a consistent framework for communicating the credibility of analysis results. Recent efforts at providing QMU guidance have pushed for broadening the analyses supporting QMU results beyond extrapolative statistical models to include a more holistic picture of risk, including information garnered from both experimental campaigns and computational simulations. Credibility guidance would assist in the consideration of belief-based aspects of an analysis. Such guidance exists for presenting computational simulation-based analyses and is under development for the integration of experimental data into computational simulations (calibration or validation), but is absent for the ultimate QMU product resulting from experimental or computational analyses. A QMU credibility assessment framework comprising five elements is proposed: requirement definitions and quantity-of-interest selection, data quality, model uncertainty, calibration/parameter estimation, and validation. By considering and reporting on these elements during a QMU analysis, the decision-maker receives a more complete description of the analysis and is better positioned to understand the risks involved in using the analysis to support a decision. A molten salt battery application is used to demonstrate the proposed QMU credibility framework.
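
As a rough illustration, the five reporting elements could be carried alongside a QMU result as a simple record that flags what has not yet been addressed; this sketch is hypothetical and not part of the proposed framework's tooling.

```python
# A minimal sketch: carry the five proposed credibility elements with a
# QMU analysis for reporting. The element names come from the abstract;
# the record structure and example entries are hypothetical.
from dataclasses import dataclass, field

ELEMENTS = (
    "requirement definitions and quantity-of-interest selection",
    "data quality",
    "model uncertainty",
    "calibration/parameter estimation",
    "validation",
)

@dataclass
class QMUCredibilityRecord:
    analysis: str
    # Free-text evidence statement per element, filled in by the analyst.
    evidence: dict = field(default_factory=lambda: {e: "" for e in ELEMENTS})

    def unaddressed(self):
        """Return the elements still lacking an evidence statement."""
        return [e for e, text in self.evidence.items() if not text.strip()]

record = QMUCredibilityRecord("molten salt battery margin analysis")
record.evidence["data quality"] = "12 cells across two production lots"
print(record.unaddressed())  # four elements remain to be documented
```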

Robust approaches to quantification of margin and uncertainty for sparse data

Hund, Lauren H.; Schroeder, Benjamin B.; Rumsey, Kelin R.; Murchison, Nicole M.

Characterizing the tails of probability distributions plays a key role in quantification of margins and uncertainties (QMU), where the goal is characterization of low-probability, high-consequence events based on continuous measures of performance. When data are collected using physical experimentation, probability distributions are typically fit using statistical methods based on the collected data, and these parametric distributional assumptions are often used to extrapolate about the extreme tail behavior of the underlying probability distribution. In this project, we characterize the risk associated with such tail extrapolation. Specifically, we conducted a scaling study to demonstrate the large magnitude of the risk; then, we developed new methods for communicating risk associated with tail extrapolation from unvalidated statistical models; lastly, we proposed a Bayesian data-integration framework to mitigate tail extrapolation risk through integrating additional information. We conclude that decision-making using QMU is a complex process that cannot be achieved using statistical analyses alone.
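
A minimal sketch of the tail-extrapolation risk the abstract describes, assuming a heavier-tailed truth than the fitted model; the sample size, distributions, and probability level are illustrative choices, not the project's actual study design.

```python
# Fit a normal distribution to a modest sample drawn from a
# heavier-tailed population and compare extreme-quantile estimates.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = stats.t.rvs(df=4, size=50, random_state=rng)  # heavy-tailed truth

# Parametric fit an analyst might use for extrapolation.
mu, sigma = np.mean(data), np.std(data, ddof=1)

p = 1e-4  # low-probability, high-consequence tail event
q_fit = stats.norm.ppf(1 - p, loc=mu, scale=sigma)  # extrapolated quantile
q_true = stats.t.ppf(1 - p, df=4)                   # actual quantile

print(f"normal-fit 1-in-10,000 quantile: {q_fit:.2f}")
print(f"true       1-in-10,000 quantile: {q_true:.2f}")
# The fitted model can drastically understate how far out the tail
# reaches, and 50 observations cannot validate either tail claim.
```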

Statistical guidance for setting product specification limits

Proceedings - Annual Reliability and Maintainability Symposium

Hund, Lauren H.; Campbell, Daniel L.; Newcomer, Justin T.

This document outlines a data-driven probabilistic approach to setting product acceptance testing limits. Product Specification (PS) limits are testing requirements for assuring that the product meets the product requirements. After identifying key manufacturing and performance parameters for acceptance testing, PS limits should be specified for these parameters, with the limits selected to assure that the unit will have a very high likelihood of meeting product requirements (barring any quality defects that would not be detected in acceptance testing). Because the range of settings in which the product requirements must be met is typically broader than the production acceptance testing space, PS limits should account for the difference between the acceptance testing setting and the worst-case setting. We propose an approach to setting PS limits that is based on demonstrating margin to the product requirement in the worst-case setting in which the requirement must be met. PS limits are then determined by considering the overall margin and uncertainty associated with a component requirement and balancing this margin and uncertainty between the designer and producer. Specifically, after identifying parameters critical to component performance, we propose setting PS limits using a three-step procedure:

1. Specify the acceptance testing and worst-case use settings, the performance characteristic distributions in these two settings, and the mapping between these distributions.

2. Determine the PS limit in the worst-case use setting by considering margin to the requirement and additional (epistemic) uncertainties. This step controls designer risk, namely the risk of producing product that violates requirements.

3. Define the PS limit for product acceptance testing by transforming the PS limit from the worst-case setting to the acceptance testing setting using the mapping between these distributions. Following this step, the producer risk is quantified by estimating the product scrap rate based on the projected acceptance testing distribution.

The approach proposed here provides a framework for documenting the procedure and assumptions used to determine PS limits. This transparency will help inform what actions should occur when a unit violates a PS limit and how limits should change over time.
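
A minimal numeric sketch of the three-step procedure above, under strong simplifying assumptions: normal performance distributions, a known additive shift between the acceptance-testing and worst-case use settings, and illustrative numbers throughout. None of these values come from the paper.

```python
# Hypothetical PS-limit calculation following the three-step outline.
from scipy import stats

# Step 1: settings and distributions (all values illustrative).
req = 10.0                      # requirement: performance must exceed 10
mu_test, sd_test = 14.0, 0.8    # performance at acceptance testing
shift = -2.0                    # worst-case use shifts performance down
mu_use, sd_use = mu_test + shift, 0.8

# Step 2: PS limit in the worst-case setting: demonstrate margin to the
# requirement after an epistemic-uncertainty allowance (designer risk).
epistemic = 0.5                 # illustrative additive allowance
ps_use = req + epistemic        # unit must exceed this in the use setting

# Step 3: map the limit back to the acceptance-testing setting by
# inverting the (assumed additive) mapping between the two settings.
ps_test = ps_use - shift

# Producer risk: projected scrap rate at acceptance testing, i.e., the
# probability a unit falls below the transformed limit.
scrap = stats.norm.cdf(ps_test, loc=mu_test, scale=sd_test)
print(f"PS limit (acceptance testing): {ps_test:.1f}")
print(f"projected scrap rate:          {scrap:.1%}")
```

Tightening the epistemic allowance lowers producer risk (less scrap) at the cost of designer risk, which is the margin-balancing trade the abstract describes.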

xLPR Scenario Analysis Report

Eckert, Aubrey C.; Lewis, John R.; Brooks, Dusty M.; Martin, Nevin S.; Hund, Lauren H.; Clark, Andrew; Mariner, Paul M.

This report describes the methods, results, and conclusions of the analysis of 11 scenarios defined to exercise various options available in the xLPR (Extremely Low Probability of Rupture) Version 2.0 code. The scope of the scenario analysis is three-fold: (i) exercise the various options and components of xLPR v2.0 that define each scenario; (ii) develop and exercise methods for analyzing and interpreting xLPR v2.0 outputs; and (iii) exercise the various sampling options available in xLPR v2.0. The simulation workflow template developed during this effort forms a basis for applying the xLPR code to problems with similar inputs and probabilistic requirements, and addresses these three points in a systematic manner.
