Publications

Economic analysis of model validation for a challenge problem

Journal of Verification, Validation and Uncertainty Quantification

Hu, Kenneth H.; Paez, Thomas L.; Paez, Paul P.; Hasselman, Tim H.

It is now commonplace for engineers to build mathematical models of the systems they are designing, building, or testing, and it is nearly universally accepted that phenomenological models of physical systems must be validated prior to use for prediction in consequential scenarios. Yet there are situations in which testing alone, or neither testing nor modeling, may be an economically viable alternative to modeling and its associated testing. This paper develops an economic framework within which benefit–cost can be evaluated for modeling and model validation relative to other options. The development is presented in terms of a challenge problem, and we provide a numerical example that quantifies when modeling, calibration, and validation yield higher benefit–cost than the testing-only or no-modeling, no-testing options.
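
As a rough illustration of the kind of comparison the paper formalizes, the sketch below scores three hypothetical development strategies by expected net benefit. All costs, probabilities, and payoffs are invented placeholders, not values from the challenge problem.

```python
# Hypothetical benefit-cost comparison of three development strategies.
# Every number below is an illustrative placeholder, not a value from
# the paper's challenge problem.
options = {
    #                  cost  P(success)  benefit if successful
    "model+validate": (1.2,  0.95,       10.0),
    "test only":      (2.0,  0.90,       10.0),
    "neither":        (0.0,  0.50,       10.0),
}

for name, (cost, p_success, benefit) in options.items():
    expected_net = p_success * benefit - cost
    print(f"{name:15s} expected net benefit = {expected_net:5.2f}")
```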

Why do verification and validation?

Journal of Verification, Validation and Uncertainty Quantification

Hu, Kenneth H.; Paez, Thomas L.

In this discussion paper, we explore different ways to assess the value of verification and validation (V&V) of engineering models. We first present a literature review on the value of V&V and then use value chains and decision trees to show how value can be assessed from a decision maker's perspective. In this context, the value is what the decision maker is willing to pay for V&V analysis, with the understanding that the V&V results are uncertain. The 2014 Sandia V&V Challenge Workshop is used to illustrate these ideas.
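
A minimal decision-tree sketch of the value-of-information idea described above: the value of V&V is the increase in expected payoff it buys the decision maker. The probabilities and payoffs are hypothetical, and the sketch assumes perfect (error-free) V&V for simplicity.

```python
# Toy decision tree: what is V&V worth to a decision maker?
# All numbers are hypothetical; the idea mirrors the value-of-information
# framing (willingness to pay = gain in expected value).
p_good = 0.7                 # prior probability the model is adequate
payoff_deploy_good = 5.0
payoff_deploy_bad = -20.0
payoff_walk_away = 0.0

# Without V&V: decide on the prior alone.
ev_deploy = p_good * payoff_deploy_good + (1 - p_good) * payoff_deploy_bad
ev_without = max(ev_deploy, payoff_walk_away)

# With perfect V&V: learn the true state, then choose the best branch.
ev_with = (p_good * max(payoff_deploy_good, payoff_walk_away)
           + (1 - p_good) * max(payoff_deploy_bad, payoff_walk_away))

print(f"value of (perfect) V&V = {ev_with - ev_without:.2f}")
```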

Laser tracker TSPI uncertainty quantification via centrifuge trajectory

Proceedings of SPIE - The International Society for Optical Engineering

Romero, Edward; Paez, Thomas L.; Brown, Timothy L.; Miller, Timothy

Sandia National Laboratories currently utilizes two laser tracking systems to provide time-space-position information (TSPI) and high-speed digital imaging of test units in flight. These laser trackers have been operated for decades under the premise of theoretical accuracies based on system design and operator estimates. Advances in optical imaging and atmospheric tracking technology have created opportunities to obtain more precise six-degree-of-freedom measurements from these trackers, but exploiting them requires a quantified understanding of the trackers' current errors and uncertainty. It was well understood that an assortment of variables contributed to laser tracker uncertainty, but the magnitudes of these contributions had not been quantified and documented. A series of experiments was therefore performed at Sandia National Laboratories' large centrifuge complex to quantify the TSPI uncertainties of Sandia's laser tracker III. The centrifuge provided repeatable and economical test-unit trajectories for TSPI comparison and uncertainty analysis: on a centrifuge, a test unit traverses a known trajectory continuously at a known angular velocity, so each revolution can be treated as an independent test and repeated many times to produce data volumes practical for statistical analysis. Such tests were previously performed at Sandia's rocket sled track facility but were found to be costly and to pose challenges in measuring ground-truth TSPI. The centrifuge, along with on-board measurement equipment, provided known ground-truth positions of the test units. This paper discusses the experimental design and the techniques used to arrive at measures of laser tracker error and uncertainty.
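
A minimal sketch of the centrifuge idea: compare tracker measurements against the ground-truth circular trajectory implied by a known radius and angular velocity, and summarize the error revolution by revolution. The radius, rate, and noise level are invented; real tracker data would replace the synthetic `measured` array.

```python
import numpy as np

# Sketch: per-revolution tracker error statistics against a known
# circular centrifuge trajectory. Radius, rate, and noise are invented.
radius = 8.8                  # m, arm length (illustrative)
omega = 2 * np.pi             # rad/s, one revolution per second (illustrative)
fs = 1000.0                   # Hz, tracker sample rate
t = np.arange(0, 10.0, 1 / fs)          # ten revolutions

# Ground-truth position from the known angular velocity.
truth = radius * np.column_stack([np.cos(omega * t), np.sin(omega * t)])

# Stand-in for tracker measurements: truth plus measurement noise.
rng = np.random.default_rng(0)
measured = truth + rng.normal(scale=0.002, size=truth.shape)    # ~2 mm noise

# Treat each revolution as an independent test and summarize the error.
err = np.linalg.norm(measured - truth, axis=1)
per_rev = err.reshape(10, -1)
print("mean error per rev (m):", per_rev.mean(axis=1))
print("overall RMS error (m): ", np.sqrt((err ** 2).mean()))
```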

Validation of a viscoelastic model for foam encapsulated component response over a wide temperature range

Conference Proceedings of the Society for Experimental Mechanics Series

Hinnerichs, Terry; Urbina, Angel U.; Paez, Thomas L.; O'Gorman, Christian C.; Hunter, Patrick H.

Accurate material models are fundamental to predictive structural finite element models. Because potting foams are routinely used to mitigate shock and vibration of encapsulated components in electromechanical systems, accurate material models of foams are needed. A linear-viscoelastic foam constitutive model has been developed to represent the foam's stiffness and damping throughout an application space defined by temperature, strain rate or frequency, and strain level. Validation of this linear-viscoelastic model, which is integrated into the Salinas structural dynamics code, is being achieved by modeling and testing a series of structural geometries of increasing complexity that have been designed to ensure sensitivity to material parameters. Both experimental and analytical uncertainties are being quantified to ensure a fair assessment of model validity, and quantitative model validation metrics are being developed to provide a means of comparing analytical model predictions to observations made in the experiments. This paper is one of several recent papers documenting the validation process for simple to complex structures with foam-encapsulated components; it focuses specifically on model validation over a wide temperature range, using a simple dumbbell structure for modal testing and simulation. Material variations of density and modulus are included, and a double-blind validation process is described that brings together test data with model predictions.
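
As a sketch of the linear-viscoelastic model form (not the calibrated Salinas foam model), the snippet below evaluates a generalized-Maxwell (Prony-series) complex modulus with a WLF temperature shift, giving stiffness (storage modulus) and damping (loss factor) across temperature. All coefficients are invented.

```python
import numpy as np

# Generic linear-viscoelastic sketch: a Prony-series (generalized Maxwell)
# complex modulus with a WLF temperature shift. Coefficients are invented;
# this is the model form only, not the calibrated foam model.
E_inf = 5.0e6                          # Pa, long-term modulus
E_k = np.array([2.0e7, 1.0e7, 5.0e6])  # Pa, Prony stiffnesses
tau_k = np.array([1e-4, 1e-2, 1.0])    # s, relaxation times
C1, C2, T_ref = 17.4, 51.6, 20.0       # WLF constants (illustrative)

def complex_modulus(freq_hz, temp_c):
    """E*(omega, T) via time-temperature superposition."""
    log_aT = -C1 * (temp_c - T_ref) / (C2 + temp_c - T_ref)
    omega = 2 * np.pi * freq_hz * 10.0 ** log_aT     # reduced frequency
    iwt = 1j * omega * tau_k
    return E_inf + np.sum(E_k * iwt / (1.0 + iwt))

for T in (-20.0, 20.0, 60.0):
    E = complex_modulus(100.0, T)
    print(f"T={T:6.1f} C  stiffness={E.real:.3e} Pa  "
          f"loss factor={E.imag / E.real:.3f}")
```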

Model validation of experimental hardware, design versus reality

Conference Proceedings of the Society for Experimental Mechanics Series

O'Gorman, Christian C.; Hunter, Patrick H.; Stasiunas, Eric C.; Hinnerichs, Terry D.; Paez, Thomas L.; Urbina, Angel U.

A detailed model validation study has been initiated to assess model predictions of foam-encapsulated components. A bottom-up experimental approach was used to first characterize the foam material and then characterize foam/component interaction within increasingly complex systems. This paper presents a summary of the model validation approach at the component and benchmark levels and details specific issues identified at the subsystem validation level. In particular, manufacturing-process issues were identified in the hardware that precluded continued validation. A summary of the modal data is given, and the issues relating to the manufacturing process are discussed.

Top-down vs. bottom-up uncertainty quantification for validation of a mechanical joint model

Conference Proceedings of the Society for Experimental Mechanics Series

Hasselman, Timothy; Wathugala, G.W.; Urbina, Angel; Paez, Thomas L.

Mechanical systems behave randomly, and it is desirable to capture this feature when making response predictions. Currently, there is an effort to develop predictive mathematical models and to test their validity by assessing their predictive accuracy relative to experimental results. The traditional approach to quantifying modeling uncertainty is to examine the uncertainty associated with each of the critical model parameters and to propagate it through the model to obtain an estimate of uncertainty in model predictions; this is referred to as the "bottom-up" approach. However, parametric uncertainty does not account for all sources of the differences between model predictions and experimental observations, such as model-form uncertainty and experimental uncertainty due to variability in test conditions, measurements, and data processing. Uncertainty quantification (UQ) based directly on the differences between model predictions and experimental data is referred to as the "top-down" approach. This paper discusses both approaches and uses the respective stochastic models to assess the validity of a joint model with respect to experimental data not used to calibrate the model, i.e., random vibration versus sine test data. Practical examples based on joint modeling and testing performed by Sandia are presented, and conclusions are drawn as to the pros and cons of each approach.
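
A toy contrast of the two approaches, with an invented one-line "joint model" and synthetic test data standing in for the paper's hardware: bottom-up propagates parameter uncertainty through the model by Monte Carlo, while top-down characterizes the model-test discrepancy directly.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in "model": peak response as a function of two uncertain
# parameters (stiffness-like k, damping-like c). Entirely illustrative;
# the paper's joint model is far more detailed.
def model(k, c):
    return 1.0 / np.sqrt((k - 1.0) ** 2 + (2.0 * c) ** 2)

# Bottom-up: propagate parameter uncertainty through the model.
k = rng.normal(1.05, 0.02, 10_000)
c = rng.lognormal(np.log(0.05), 0.2, 10_000)
bottom_up = model(k, c)

# Top-down: characterize the model-test discrepancy directly from
# (synthetic) experimental observations of the same response quantity.
experiments = rng.normal(8.0, 1.5, 25)           # fake test data
discrepancy = experiments - bottom_up.mean()

print(f"bottom-up: mean={bottom_up.mean():.2f}, std={bottom_up.std():.2f}")
print(f"top-down discrepancy: mean={discrepancy.mean():.2f}, "
      f"std={discrepancy.std(ddof=1):.2f}")
```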

Inductive model development for lithium-ion batteries to predict life and performance

Proposed for publication in the Electrochemical Society Symposium Publication.

Paez, Thomas L.; Jungst, Rudolph G.; Doughty, Daniel H.

Sandia National Laboratories has been conducting studies on the performance of laboratory and commercial lithium-ion and other types of electrochemical cells using inductive models [1]. The objectives of these investigations are: (1) to develop procedures and techniques to rapidly determine performance degradation rates while these cells undergo life tests; (2) to model cell voltage and capacity in order to simulate cell performance characteristics under variable load and temperature conditions; and (3) to model rechargeable battery degradation under charge/discharge cycles and many other conditions. The inductive model and methodology are particularly useful when complicated cell performance behaviors are involved, which are often difficult to interpret using simple empirical approaches. We find that the inductive model can be used effectively to enable efficient predictions of battery life and to characterize system behavior. Inductive models provide convenient tools to characterize system behavior using experimentally or analytically derived data in an efficient and robust framework, and the approach does not require detailed phenomenological development. Among the advantages unique to this approach is the ability to avoid measuring hard-to-determine physical parameters or having to understand cell processes well enough to write mathematical functions describing their behavior. We used artificial neural networks for inductive modeling, along with ancillary mathematical tools to improve their accuracy. This paper summarizes efforts to use inductive tools for cell and battery modeling, with examples of numerical results. One example concerns high-power lithium-ion batteries tested under the U.S. Department of Energy Advanced Technology Development Program for hybrid vehicle applications; Sandia National Laboratories is involved in the development of accelerated life testing and thermal abuse tests to enhance the understanding of power and capacity fade and to predict the life of the battery under nominal use conditions. The paper uses the power and capacity fade behavior of a Ni-oxide-based lithium-ion battery system to illustrate how effectively the inductive model can interpret cell behavior and provide predictions of life, discussing the analysis of fading behavior and explaining how the model can predict cell performance.
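
A minimal sketch of the inductive approach, assuming scikit-learn is available: fit a small neural network to synthetic capacity-fade data and query it at an unseen condition. The fade law, cell data, and network size are invented stand-ins for real aging-test measurements.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)

# Synthetic aging data: capacity fades faster at higher temperature.
# The fade law and all numbers here are invented placeholders.
cycles = rng.uniform(0, 800, 300)
temp = rng.choice([25.0, 45.0, 60.0], size=300)
rate = 1e-4 * np.exp(0.03 * (temp - 25.0))        # faster fade when hot
capacity = 1.0 - rate * cycles + rng.normal(0, 0.004, 300)

# Inductive model: a small neural network fit to the data.
X = np.column_stack([cycles, temp])
net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000,
                   random_state=0).fit(X, capacity)

# Predict capacity at an unseen condition.
print("predicted capacity at 1000 cycles, 45 C:",
      net.predict([[1000.0, 45.0]])[0])
```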

Statistical analysis of the Karhunen-Loeve random process model

Proceedings of the Tenth International Congress on Sound and Vibration

Paez, Thomas L.; Morrison, Dennis

Practical structural dynamic phenomena involve excitations and responses that are random processes. Nonstationary random processes are most frequently encountered, and an accurate and efficient model for them is the Karhunen-Loeve (KL) expansion. The KL expansion can be obtained using experimentally measured random process realizations, but if it is, there is some level of statistical error associated with the identified parameters of the model. This paper shows how bootstrap techniques can be used to perform confidence analysis on the parameters of a KL model. Laboratory-measured data are used to demonstrate use of the model and statistical analysis of the model parameters.
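
A minimal sketch of the procedure described above: estimate KL eigenvalues from an ensemble of (here synthetic) nonstationary realizations, then bootstrap whole realizations to put confidence intervals on those eigenvalues. The modulated-noise process is an invented stand-in for laboratory data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic nonstationary realizations standing in for measured data.
n_real, n_time = 60, 200
t = np.linspace(0, 1, n_time)
envelope = t * np.exp(-4 * t)                 # nonstationary modulation
data = envelope * rng.standard_normal((n_real, n_time))

def kl_eigvals(x, n_modes=3):
    cov = np.cov(x, rowvar=False)             # sample covariance over time
    vals = np.linalg.eigvalsh(cov)[::-1]      # descending eigenvalues
    return vals[:n_modes]

estimate = kl_eigvals(data)

# Bootstrap: resample whole realizations with replacement.
boot = np.array([kl_eigvals(data[rng.integers(0, n_real, n_real)])
                 for _ in range(500)])
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)

for m in range(3):
    print(f"lambda_{m + 1}: {estimate[m]:.4g}  "
          f"95% CI [{lo[m]:.4g}, {hi[m]:.4g}]")
```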

Status and Integrated Road-Map for Joints Modeling Research

Segalman, Daniel J.; Smallwood, David O.; Sumali, Hartono S.; Paez, Thomas L.; Urbina, Angel U.

The constitutive behavior of mechanical joints is largely responsible for the energy dissipation and vibration damping in weapons systems. For reasons arising from the dramatically different length scales associated with those dissipative mechanisms and the length scales characteristic of the overall structure, this physics cannot be captured adequately through direct simulation of the contact mechanics within a structural dynamics analysis. The only practical method for accommodating the nonlinear nature of joint mechanisms within structural dynamic analysis is through constitutive models employing degrees of freedom natural to the scale of structural dynamics. This document discusses a road-map for developing such constitutive models.
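
The road-map concerns families of such constitutive models; as a concrete illustration of the simplest member of that family, the sketch below implements a single Jenkins (spring-plus-Coulomb-slider) element with invented parameters and computes the energy it dissipates over one displacement cycle.

```python
import numpy as np

# Illustration only: a single Jenkins (spring + Coulomb slider) element,
# among the simplest joint constitutive forms of the kind the road-map
# discusses. Stiffness and slip force are invented.
k, f_slip = 1.0e6, 50.0                  # N/m, N

def jenkins_force(displacement):
    """Quasi-static hysteretic force for a displacement history."""
    force, x_prev, out = 0.0, 0.0, []
    for x in displacement:
        force += k * (x - x_prev)                  # elastic trial update
        force = min(max(force, -f_slip), f_slip)   # slider caps the force
        x_prev = x
        out.append(force)
    return np.array(out)

# One imposed displacement cycle; the hysteresis loop encloses the
# energy dissipated by slip in the joint.
x = 1e-4 * np.sin(np.linspace(0.0, 2.0 * np.pi, 400))
f = jenkins_force(x)
energy = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))   # loop area, J
print(f"energy dissipated per cycle ~ {energy:.4f} J")
```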

Advanced Signal Processing for Thermal Flaw Detection

Valley, Michael T.; Hansche, Bruce D.; Paez, Thomas L.; Urbina, Angel U.; Ashbaugh, Dennis M.

Dynamic thermography is a promising technology for inspecting metallic and composite structures used in high-consequence industries. However, the reliability and inspection sensitivity of this technology have historically been limited by the need for extensive operator experience and the use of human judgment and visual acuity to detect flaws in the large volume of infrared image data collected. To overcome these limitations, new automated data analysis algorithms and software are needed. The primary objectives of this research effort were to develop a data processing methodology that is tied to the underlying physics, that reduces or removes the data interpretation requirements, and that eliminates the need to examine large numbers of data frames to determine whether a flaw is present. Considering the strengths and weaknesses of previous research efforts, this effort elected to couple the temporal and spatial attributes of the surface temperature. Of the algorithms investigated, the best performing was a radiance-weighted root-mean-square Laplacian metric that included a multiplicative surface-effect correction factor and a novel spatio-temporal parametric model for data smoothing. This metric demonstrated the potential for detecting flaws smaller than 0.075 inch in inspection areas on the order of one square foot. Included in this report are the development of a thermal imaging model, a weighted least squares thermal data smoothing algorithm, simulation and experimental flaw detection results, and an overview of the ATAC (Automated Thermal Analysis Code) software that was developed to analyze thermal inspection data.
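
A simplified sketch of a radiance-weighted RMS-Laplacian style metric on a synthetic frame stack with one injected hot spot. The weighting here is a crude stand-in; the report's actual metric includes a multiplicative surface-effect correction and spatio-temporal smoothing not reproduced below.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic thermal frames (50 frames of a 64 x 64 surface) with a
# small injected "flaw" hot spot appearing partway through the record.
frames = rng.normal(300.0, 0.05, (50, 64, 64))
frames[10:, 30:34, 30:34] += 0.5

def laplacian(img):
    """5-point finite-difference Laplacian with edge padding."""
    p = np.pad(img, 1, mode="edge")
    return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
            - 4.0 * p[1:-1, 1:-1])

lap = np.array([laplacian(f) for f in frames])
weights = frames / frames.mean()                  # crude radiance weighting
metric = np.sqrt(np.mean((weights * lap) ** 2, axis=0))   # RMS over time

print("peak metric at pixel:",
      np.unravel_index(metric.argmax(), metric.shape))
```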

Description of the Sandia Validation Metrics Project

Trucano, Timothy G.; Easterling, Robert G.; Dowding, Kevin J.; Paez, Thomas L.; Urbina, Angel U.; Romero, Vicente J.; Rutherford, Brian M.; Hills, Richard G.

This report describes the underlying principles and goals of the Sandia ASCI Verification and Validation Program Validation Metrics Project. It also gives a technical description of two case studies, one in structural dynamics and the other in thermomechanics, that serve to focus the technical work of the project in Fiscal Year 2001.

Representation of Random Shock via the Karhunen-Loeve Expansion

Paez, Thomas L.

Shock excitations are normally random process realizations, and most of our efforts to represent them either directly or indirectly reflect this fact. The most common indirect representation of shock sources is the shock response spectrum. It seeks to establish the damage-causing potential of random shocks in terms of responses excited in linear, single-degree-of-freedom systems. This paper shows that shock sources can be represented directly by developing the probabilistic and statistical structure that underlies the random shock source. Confidence bounds on process statistics and probabilities of specific excitation levels can be established from the model. Some numerical examples are presented.
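
A minimal sketch of the direct representation: given KL eigenvalues and eigenfunctions (invented placeholders below), new shock realizations are sums of eigenfunctions weighted by independent standard normal coefficients, from which excitation-level statistics follow.

```python
import numpy as np

rng = np.random.default_rng(5)

# Invented "eigenpairs" standing in for a KL model identified from data.
t = np.linspace(0, 0.1, 500)                        # s
phi = np.array([np.exp(-40 * t) * np.sin(2 * np.pi * f * t)
                for f in (100.0, 250.0, 600.0)])    # eigenfunctions
lam = np.array([1.0, 0.4, 0.1])                     # eigenvalues

# x(t) = sum_k sqrt(lambda_k) * z_k * phi_k(t),  z_k ~ N(0, 1)
z = rng.standard_normal((1000, 3))
shocks = (z * np.sqrt(lam)) @ phi                   # 1000 synthetic shocks

# Statistics of specific excitation levels follow from the ensemble.
peak = np.abs(shocks).max(axis=1)
print(f"median peak = {np.median(peak):.3f}, "
      f"95th-percentile peak = {np.percentile(peak, 95):.3f}")
```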

Utilizing Computational Probabilistic Methods to Derive Shock Specifications in a Nondeterministic Environment

Field, Richard V.; Red-Horse, John R.; Paez, Thomas L.

One of the key elements of the Stochastic Finite Element Method, namely the polynomial chaos expansion, has been utilized in a nonlinear shock and vibration application. The computed response was expressed as a random process that approximates the true solution process and can be thought of as a generalization of solutions given only as statistics. This approximation to the response process was then used to derive an analytically based design specification for component shock response that guarantees a balanced level of marginal reliability. Hence, this analytically based reference SRS may improve on the somewhat ad hoc test-based reference in the sense that it will neither exhibit regions of conservativeness nor lead to overtesting of the design.
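
A minimal 1-D sketch of the polynomial chaos machinery, assuming a standard normal germ: project a toy response onto probabilists' Hermite polynomials and recover mean and variance from the coefficients. The exponential "response" is a stand-in, not a shock/SRS model.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy uncertain response u(xi), xi ~ N(0, 1).
def response(xi):
    return np.exp(0.3 * xi)

# Probabilists' Hermite polynomials He_0..He_3 and their norms k!.
He = [lambda x: np.ones_like(x), lambda x: x,
      lambda x: x**2 - 1.0, lambda x: x**3 - 3.0 * x]
norms = [1.0, 1.0, 2.0, 6.0]

# Coefficients by Monte Carlo projection: c_k = E[u He_k] / k!.
xi = rng.standard_normal(200_000)
u = response(xi)
coef = [np.mean(u * H(xi)) / n for H, n in zip(He, norms)]

# Mean and variance follow directly from the PCE coefficients.
var = sum(c**2 * n for c, n in zip(coef[1:], norms[1:]))
print(f"PCE mean={coef[0]:.4f} (exact {np.exp(0.045):.4f}), "
      f"variance={var:.4f}")
```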

First passage failure: Analysis alternatives

Paez, Thomas L.

Most mechanical and structural failures can be formulated as first passage problems. The traditional approach to first passage analysis models barrier crossings as Poisson events. The crossing rate is established and used in the Poisson framework to approximate the no-crossing probability. While this approach is accurate in a number of situations, it is desirable to develop analysis alternatives for those situations where traditional analysis is less accurate and situations where it is difficult to estimate parameters of the traditional approach. This paper develops an efficient simulation approach to first passage failure analysis. It is based on simulation of segments of complex random processes with the Karhunen-Loeve expansion, use of these simulations to estimate the parameters of a Markov chain, and use of the Markov chain to estimate the probability of first passage failure. Some numerical examples are presented.
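
A minimal sketch of the Markov-chain route to first passage, with an AR(1) process standing in for simulated response segments: discretize the response, estimate a one-step transition matrix from the data, make the barrier state absorbing, and iterate to get the crossing probability over a horizon.

```python
import numpy as np

rng = np.random.default_rng(7)

# Stand-in response process (AR(1)); barrier and sizes are illustrative.
n_steps, n_bins, barrier = 200_000, 40, 3.0
x = np.empty(n_steps)
x[0] = 0.0
for i in range(1, n_steps):
    x[i] = 0.95 * x[i - 1] + rng.normal(scale=0.3)

# Discretize; the last bin means "barrier exceeded".
edges = np.linspace(-barrier, barrier, n_bins - 1)
states = np.digitize(x, edges)

# Estimate the one-step transition matrix from the simulated segments.
P = np.zeros((n_bins, n_bins))
for a, b in zip(states[:-1], states[1:]):
    P[a, b] += 1
P /= np.maximum(P.sum(axis=1, keepdims=True), 1)
P[-1] = 0.0
P[-1, -1] = 1.0                      # absorbing barrier state

# Iterate the chain to get the first-passage probability over a horizon.
p = np.zeros(n_bins)
p[n_bins // 2] = 1.0                 # start near the mean
for _ in range(500):
    p = p @ P
print(f"P(first passage within 500 steps) ~ {p[-1]:.4f}")
```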

A nondeterministic shock and vibration application using polynomial chaos expansions

Field, Richard V.; Red-Horse, John R.; Paez, Thomas L.

In the current study, the generality of the key underpinnings of the Stochastic Finite Element (SFEM) method is exploited in a nonlinear shock and vibration application where parametric uncertainty enters through random variables with probabilistic descriptions assumed to be known. The system output is represented as a vector containing Shock Response Spectrum (SRS) data at a predetermined number of frequency points. In contrast to many reliability-based methods, the goal of the current approach is to provide a means to address more general (vector) output entities, to provide this output as a random process, and to assess characteristics of the response which allow one to avoid issues of statistical dependence among its vector components.

Nonlinear system modeling based on experimental data

Paez, Thomas L.

The canonical variate analysis technique is used in this investigation, along with a data transformation algorithm, to identify a system in a transform space. The transformation algorithm preprocesses measured excitation/response data with a zero-memory nonlinear transform, specifically the Rosenblatt transform, which approximately maps the measured excitation and response data from their own spaces into the space of uncorrelated, standard normal random variates. Following this transform, it is appropriate to model the excitation/response relation as linear, since Gaussian inputs excite Gaussian responses in linear structures. The linear model is identified in the transform space using the canonical variate analysis approach, and system responses in the original space are predicted using the inverse Rosenblatt transformation. An example is presented.
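
A sketch of the zero-memory marginal transform step, assuming SciPy is available: map non-Gaussian data to approximately standard normal variates through the empirical CDF, and map back through quantiles. The full Rosenblatt transform also conditions across variables, and the canonical variate identification step is not reproduced here.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(8)

x = rng.gamma(2.0, 1.5, 5000)               # stand-in non-Gaussian data

def to_gaussian(data):
    """Empirical-CDF probit transform to approximately N(0, 1)."""
    ranks = data.argsort().argsort() + 1
    u = ranks / (len(data) + 1.0)            # avoid 0 and 1 exactly
    return norm.ppf(u)

def from_gaussian(z, reference):
    """Inverse map: back to the reference data's marginal distribution."""
    return np.quantile(reference, norm.cdf(z))

g = to_gaussian(x)
print(f"transformed: mean={g.mean():.3f}, std={g.std():.3f}")   # ~0, ~1
x_back = from_gaussian(g, x)
print(f"mean round-trip error: {np.abs(x_back - x).mean():.3e}")
```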
