Publications
A case study for integrating comp/sim credibility and convolved UQ and evidence theory results to support risk informed decision making
Orient, G.; Babuska, Vit B.; Lo, D.; Mersch, J.; Wapman, Walter P.
A case study highlighting the computational steps to establish credibility of a solid mechanics model and to use the compiled evidence to support quantitative program decisions is presented. An integrated modeling and testing strategy at the commencement of the CompSim (Computational Simulation) activity establishes the intended use of the model and documents the modeling and test integration plan. A PIRT (Phenomena Identification and Ranking Table) is used to identify and prioritize physical phenomena and to perform gap analysis in terms of the necessary capabilities and production-level code feature implementations required to construct the model. At significant stages of the project, a PCMM (Predictive Capability Maturity Model) assessment, a qualitative, expert-elicitation-based process, is performed to establish the rigor of the CompSim modeling effort. These activities are necessary conditions for establishing model credibility, but they are not sufficient because they provide no quantifiable guidance or insight about how to use and interpret the modeling results for decision making. This case study describes a project to determine the critical impact velocity beyond which a device is no longer guaranteed to function. Acceleration-, weld-failure-, and deformation-based system integrity metrics of an internal structure are defined as QoIs (Quantities of Interest). A particularly challenging aspect of the case study is that the model's predictiveness is expected to vary across the different QoIs. A solid mechanics model is constructed observing program resource limitations and analysis governance principles. An inventory of aleatory, computational, and model-form uncertainties is assembled, and strategies for their characterization are established. Formal UQ (Uncertainty Quantification) over the aleatory random variables is performed. Validation metrics are used to evaluate discrepancies between model and test data.
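As a rough illustration of the aleatory UQ and validation steps described above, the sketch below draws Latin hypercube samples over two hypothetical aleatory inputs, pushes them through a stand-in response function, and scores model-versus-test agreement with an area validation metric (the integral of the gap between the two empirical CDFs). Every variable name, distribution, and the response function itself are placeholders, not the paper's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

def latin_hypercube(n, d, rng):
    """Stratified uniform samples in [0, 1)^d: one sample per stratum per dimension."""
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n
    for j in range(d):
        rng.shuffle(u[:, j])  # decorrelate strata across dimensions
    return u

# Hypothetical aleatory inputs: yield strength (MPa) and weld toughness (kJ/m^2).
n = 200
u = latin_hypercube(n, 2, rng)
yield_strength = 300.0 + 40.0 * u[:, 0]   # uniform on [300, 340]
weld_toughness = 15.0 + 5.0 * u[:, 1]     # uniform on [15, 20]

def peak_acceleration(ys, wt, v_impact):
    """Stand-in for the solid mechanics model: one QoI at a given impact velocity."""
    return v_impact**2 / ys * (1.0 + 2.0 / wt)

qoi_model = peak_acceleration(yield_strength, weld_toughness, v_impact=120.0)

# Placeholder "test" data, perturbed from the model purely for illustration.
qoi_test = qoi_model * rng.normal(1.05, 0.03, size=n)

def area_metric(a, b):
    """Area between two empirical CDFs, exact for step functions."""
    grid = np.sort(np.concatenate([a, b]))
    Fa = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    Fb = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return np.sum(np.abs(Fa[:-1] - Fb[:-1]) * np.diff(grid))

d_av = area_metric(qoi_model, qoi_test)  # larger value = larger model/test discrepancy
```

The area metric has the same units as the QoI, which makes it convenient to compare against requirement tolerances, but any validation metric from the V&V literature could be substituted.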
At this point, the customers and the CompSim team agree that the model is useful for qualitative decisions such as design trades, but its utility for quantitative conclusions, including demonstration of compliance with requirements, is not established. Expert judgment from CompSim SMEs is elicited to bound the effects of known uncertainties not currently modeled, such as the effect of tolerances, as well as to anticipate unknown uncertainties. The SME judgment also considers the expected accuracy variation of the different QoIs as recorded by previous organizational history with similar hardware, gaps identified by the PIRT, and completeness of PCMM evidence. Elicitation of the integrated team, consisting of system engineering and CompSim practitioners, results in quantified requirements expressed as ranges on acceptance threshold levels of the QoIs. Evidence theory is applied to convolve quantitative and qualitative uncertainties (aleatory UQ, numerical and model-form uncertainties, and SME judgment), resulting in belief and plausibility cumulative distributions at several impact velocities. The process outlined in this work illustrates a structured, transparent, and quantitative approach to establishing model credibility and supporting decisions by an integrated multi-disciplinary project team.
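The evidence-theory step can be pictured with a small Dempster-Shafer sketch: SME-bounded intervals on a QoI each carry a basic probability assignment, and sweeping a threshold yields the belief and plausibility cumulative curves the abstract refers to. The focal elements and masses below are invented for illustration; they are not the paper's data:

```python
import numpy as np

# Hypothetical focal elements for peak weld stress (MPa) at one impact velocity.
# Each interval carries a basic probability assignment (BPA); masses sum to 1.
focal = [((420.0, 480.0), 0.5),
         ((400.0, 520.0), 0.3),
         ((380.0, 560.0), 0.2)]

def belief_plausibility(focal, t):
    """Bel/Pl that the QoI does not exceed threshold t.

    Belief counts intervals that lie entirely below t (evidence that
    certainly supports compliance); plausibility counts intervals that
    merely intersect (-inf, t] (evidence that does not rule it out).
    """
    bel = sum(m for (lo, hi), m in focal if hi <= t)
    pl = sum(m for (lo, hi), m in focal if lo <= t)
    return bel, pl

# Sweep acceptance thresholds to trace the belief/plausibility "CDFs".
thresholds = np.linspace(350.0, 600.0, 251)
bel_cdf, pl_cdf = np.array(
    [belief_plausibility(focal, t) for t in thresholds]).T
```

The gap between the two curves is the epistemic component of the uncertainty: a requirement threshold falling inside the gap is neither demonstrably met nor demonstrably violated, which is precisely the region where the elicited SME judgment matters for the decision.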