Extracting Low-Dimensional Features From Field Data for Calibration
International Journal for Uncertainty Quantification
This paper addresses two challenges in Bayesian calibration: (1) the computational expense of existing sampling algorithms and (2) calibration with spatiotemporal responses. The commonly used Markov chain Monte Carlo (MCMC) approaches require many sequential model evaluations, making the computational expense prohibitive. This paper proposes an efficient sampling algorithm: iterative importance sampling with a genetic algorithm (IISGA). While iterative importance sampling provides computational efficiency, the genetic algorithm provides robustness by preventing sample degeneration and avoiding entrapment in multimodal search spaces. An inflated likelihood further improves robustness in high-dimensional parameter spaces by enlarging the target distribution. Spatiotemporal data complicate both surrogate modeling, which is necessary for expensive computational models, and likelihood estimation. In this work, singular value decomposition is investigated for reducing the high-dimensional field data to a lower-dimensional space prior to Bayesian calibration. The likelihood is then formulated and Bayesian inference performed in the lower-dimensional latent space. An illustrative example demonstrates IISGA relative to existing sampling methods, and IISGA is then employed to calibrate a thermal battery model with 26 uncertain calibration parameters and spatiotemporal response data.
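A minimal Python sketch of the dimension-reduction idea described above is given below; it is an illustration under stated assumptions, not the paper's implementation. Simulated field responses are stacked into a snapshot matrix, a truncated singular value decomposition supplies a low-dimensional basis, and a Gaussian likelihood is then evaluated on the latent coefficients. The snapshot matrix, truncation rank, and noise level are hypothetical placeholders.

```python
# Sketch: truncated SVD of field data plus a latent-space Gaussian likelihood.
# All sizes, data, and the noise level sigma are assumed for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical snapshot matrix: each column is one simulated field response
# (space-time points flattened), one column per training run.
n_field, n_runs = 500, 40
snapshots = rng.standard_normal((n_field, n_runs))

# Truncated SVD: keep the first k left singular vectors as a reduced basis.
k = 5
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
basis = U[:, :k]                      # (n_field, k) orthonormal basis

def to_latent(field):
    """Project a full field response onto the k-dimensional basis."""
    return basis.T @ field            # latent coefficients, shape (k,)

def log_likelihood(sim_field, obs_field, sigma=0.1):
    """Gaussian log-likelihood formulated in the latent space
    (sigma is an assumed latent-space noise standard deviation)."""
    r = to_latent(sim_field) - to_latent(obs_field)
    return -0.5 * np.sum((r / sigma) ** 2) - k * np.log(sigma * np.sqrt(2 * np.pi))

# Example: score one simulated response against a synthetic observation.
obs = snapshots[:, 0] + 0.05 * rng.standard_normal(n_field)
print(log_likelihood(snapshots[:, 0], obs))
```

In an actual calibration, this latent-space log-likelihood would be the quantity evaluated by the sampler (IISGA, MCMC, or otherwise) for each candidate parameter set.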
Journal of Verification, Validation and Uncertainty Quantification
Economic factors and experimental limitations often lead to sparse and/or imprecise data being used for the calibration and validation of computational models. This paper addresses resource allocation for calibration and validation experiments, in order to maximize their effectiveness within given resource constraints. When observation data are used for model calibration, the quality of the inferred parameter descriptions is directly affected by the quality and quantity of the data. This paper characterizes parameter uncertainty within a probabilistic framework, which enables the uncertainty to be systematically reduced with additional data. The validation assessment is also uncertain in the presence of sparse and imprecise data; therefore, this paper proposes an approach for quantifying the resulting validation uncertainty. Since calibration and validation uncertainty affect the prediction of interest, the proposed framework weighs the cost of data against its importance in terms of the impact on prediction uncertainty. Because calibration and validation tests may often be performed at different input scenarios, this paper also shows how the calibration and validation results from different conditions may be integrated into the prediction. A constrained discrete optimization formulation is then proposed that selects the number of tests of each type (calibration or validation at given input conditions). Finally, the proposed test selection methodology is demonstrated on a microelectromechanical system (MEMS) example.
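To make the test-selection idea concrete, the following is a minimal, hypothetical Python sketch (not the paper's formulation or code) of a constrained discrete search: it enumerates how many calibration and validation tests to run at each input condition, discards allocations that exceed a budget, and keeps the allocation that minimizes a stand-in prediction-uncertainty measure. The costs, conditions, caps, and uncertainty model are all assumed placeholders.

```python
# Sketch: budget-constrained discrete selection of calibration/validation tests.
# COST, BUDGET, CONDITIONS, MAX_TESTS, and prediction_uncertainty are assumptions.
from itertools import product

COST = {"cal": 3.0, "val": 2.0}           # assumed cost per test type
BUDGET = 20.0                              # assumed total budget
CONDITIONS = ["low_T", "high_T"]           # hypothetical input conditions
MAX_TESTS = 5                              # cap on tests per (type, condition)

def prediction_uncertainty(alloc):
    """Hypothetical stand-in for the prediction-uncertainty estimate:
    uncertainty shrinks with diminishing returns as tests are added."""
    return sum(1.0 / (1 + n) for n in alloc.values())

def total_cost(alloc):
    return sum(COST[test_type] * n for (test_type, _), n in alloc.items())

keys = list(product(COST, CONDITIONS))     # (test type, condition) pairs
best_alloc, best_u = None, float("inf")
for counts in product(range(MAX_TESTS + 1), repeat=len(keys)):
    alloc = dict(zip(keys, counts))
    if total_cost(alloc) > BUDGET:
        continue                           # infeasible under the budget
    u = prediction_uncertainty(alloc)
    if u < best_u:
        best_alloc, best_u = alloc, u

print(best_alloc, round(best_u, 3))
```

In practice the exhaustive enumeration shown here would be replaced by a proper discrete optimizer, and the uncertainty function by the calibrated/validated prediction-uncertainty estimate the paper describes.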
Journal of Verification, Validation and Uncertainty Quantification
A discussion of the five responses to the 2014 Sandia Verification and Validation (V&V) Challenge Problem, presented within this special issue, is provided hereafter. Overviews of the challenge problem workshop, the workshop participants, and the problem statement are also included, along with brief summations of the teams' responses to the challenge problem. The main focus of this paper is the issues arising throughout the responses that are deemed applicable to the general verification, validation, and uncertainty quantification (VVUQ) community. The discussion is organized around a big-picture comparison of data and model usage, the VVUQ activities, and the differentiating conceptual themes behind the teams' VVUQ strategies. Significant differences are noted in the teams' approaches toward all VVUQ activities, and those deemed most relevant are discussed. Beyond the specific details of the VVUQ implementations, thematic concepts are found to create differences among the approaches, and some of the major themes are discussed. Lastly, an encapsulation of the key contributions, the lessons learned, and advice for the future is presented.
Conference Proceedings of the Society for Experimental Mechanics Series
As more and more high-consequence applications such as aerospace systems leverage computational models to support decisions, assessing the credibility of these models becomes a high priority. Two elements of the credibility assessment are verification and validation. The former focuses on convergence of the solution (i.e., solution verification) and the “pedigree” of the codes used to evaluate the model. The latter assesses the agreement of the model prediction with real data. The outcome of these elements should map to a statement of credibility on the predictions, and this credibility should be integrated into the decision-making process. In this paper, we present a perspective on how to integrate these elements into a decision-making process. The key challenge is to span the gap between physics-based codes, quantitative capability assessments (V&V/UQ), and qualitative risk-mitigation concepts.