Publications

114 Results

A Decision Theoretic Approach To Optimizing Machine Learning Decisions with Prediction Uncertainty

Field, Richard V.; Darling, Michael C.

While the use of machine learning (ML) classifiers is widespread, their output is often not part of any follow-on decision-making process. To illustrate, consider the scenario where we have developed and trained an ML classifier to find malicious URL links. In this scenario, network administrators must decide whether to allow a computer user to visit a particular website, or to instead block access because the site is deemed malicious. It would be very beneficial if decisions such as these could be made automatically using a trained ML classifier. Unfortunately, due to a variety of reasons discussed herein, the output from these classifiers can be uncertain, rendering downstream decisions difficult. Herein, we provide a framework for: (1) quantifying and propagating uncertainty in ML classifiers; (2) formally linking ML outputs with the decision-making process; and (3) making optimal decisions for classification under uncertainty with single or multiple objectives.
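
The decision step described above can be illustrated with a minimal sketch (an illustrative Python example, not the authors' implementation; the loss matrix and the Beta-distributed probability samples are assumptions made here for demonstration): given samples of the classifier's predicted probability that a URL is malicious, representing prediction uncertainty, the action is the one that minimizes expected loss.

import numpy as np

# Hypothetical loss matrix: rows are actions (allow, block),
# columns are true states (benign, malicious).
LOSS = np.array([[0.0, 10.0],   # allow: costly if the site is malicious
                 [1.0,  0.0]])  # block: small cost if the site is benign

def optimal_action(p_malicious_samples):
    """Pick the action minimizing expected loss.

    p_malicious_samples: samples of P(malicious) representing
    uncertainty in the classifier output.
    """
    p = np.asarray(p_malicious_samples)
    # Expected loss of each action, averaged over the class probability
    # and over its uncertainty.
    loss_allow = np.mean((1 - p) * LOSS[0, 0] + p * LOSS[0, 1])
    loss_block = np.mean((1 - p) * LOSS[1, 0] + p * LOSS[1, 1])
    if loss_allow <= loss_block:
        return "allow", loss_allow
    return "block", loss_block

# Example: an uncertain prediction centered near 0.15.
print(optimal_action(np.random.beta(3, 17, size=1000)))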

SAGE Intrusion Detection System: Sensitivity Analysis Guided Explainability for Machine Learning

Smith, Michael R.; Acquesta, Erin A.; Ames, Arlo L.; Carey, Alycia N.; Cueller, Christopher R.; Field, Richard V.; Maxfield, Trevor M.; Mitchell, Scott A.; Morris, Elizabeth S.; Moss, Blake C.; Nyre-Yu, Megan N.; Rushdi, Ahmad R.; Stites, Mallory C.; Smutz, Charles S.; Zhou, Xin Z.

This report details the results of a three-fold investigation of sensitivity analysis (SA) for machine learning (ML) explainability (MLE): (1) the mathematical assessment of the fidelity of an explanation with respect to a learned ML model, (2) quantifying the trustworthiness of a prediction, and (3) the impact of MLE on the efficiency of end-users through multiple user studies. We focused on the cybersecurity domain because the data is inherently non-intuitive. As ML is being used in an increasing number of domains, including domains where being wrong can elicit high consequences, MLE has been proposed as a means of generating trust in learned ML models by end users. However, little analysis has been performed to determine whether the explanations accurately represent the target model and whether they themselves should be trusted beyond subjective inspection. Current state-of-the-art MLE techniques only provide a list of important features based on heuristic measures and/or make certain assumptions about the data and the model that are not representative of real-world data and models. Further, most are designed without considering their usefulness to an end-user in a broader context. To address these issues, we present a notion of explanation fidelity based on Shapley values from cooperative game theory. We find that all of the investigated MLE methods produce explanations that are incongruent with the ML model being explained, because they make critical assumptions about feature independence and linear feature interactions for computational reasons. We also find that, in deployed settings, explanations are rarely used for a variety of reasons, including that several other tools are trusted more than the explanations and that there is little incentive to use them. In the cases where explanations are used, there is a danger that they persuade end users to wrongly accept false positives and false negatives. However, ML model developers and maintainers find the explanations more useful for helping to ensure that the ML model does not have obvious biases. In light of these findings, we suggest a number of future directions, including developing MLE methods that directly model non-linear model interactions and adopting design principles that take into account the usefulness of explanations to the end user. We also augment explanations with a set of trustworthiness measures that assess geometric aspects of the data to determine whether the model output should be trusted.

Compression Analytics for Classification and Anomaly Detection Within Network Communication

IEEE Transactions on Information Forensics and Security

Ting, Christina T.; Field, Richard V.; Fisher, Andrew N.; Bauer, Travis L.

The flexibility of network communication within Internet protocols is fundamental to network function, yet this same flexibility permits the possibility of malicious use. In particular, malicious behavior can masquerade as benign traffic, thus evading systems designed to catch misuse of network resources. However, perfect imitation of benign traffic is difficult, meaning that small unintentional deviations from normal can occur. Identifying these deviations requires that the defenders know what features reveal malicious behavior. Herein, we present an application of compression-based analytics to network communication that can reduce the need for defenders to know a priori what features they need to examine. Motivating the approach is the idea that compression relies on the ability to discover and make use of predictable elements in information, thereby highlighting any deviations between expected and received content. We introduce a so-called 'slice compression' score to identify malicious or anomalous communication in two ways. First, we apply normalized compression distances to classification problems and discuss methods for reducing the noise by excising application content (as opposed to protocol features) using slice compression. Second, we present a new technique for anomaly detection, referred to as slice compression for anomaly detection. A diverse collection of datasets is analyzed to illustrate the efficacy of the proposed approaches. While our focus is network communication, other types of data are also considered to illustrate the generality of the method.
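
The slice-compression scores themselves are defined in the full paper; the sketch below shows only the standard normalized compression distance (NCD) that the approach builds on, with zlib standing in as the compressor and a simple nearest-class rule for classification (illustrative assumptions, not the paper's pipeline).

import zlib

def compressed_len(b: bytes) -> int:
    """Compressed length using zlib as a stand-in compressor."""
    return len(zlib.compress(b, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance between two byte strings."""
    cx, cy, cxy = compressed_len(x), compressed_len(y), compressed_len(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

def classify(sample: bytes, references: dict) -> str:
    """Assign a traffic sample to the class whose reference data it is
    closest to in NCD (references: label -> representative byte string)."""
    return min(references, key=lambda label: ncd(sample, references[label]))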

Generalized Boundary Detection Using Compression-based Analytics

ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings

Ting, Christina T.; Field, Richard V.; Quach, Tu-Thach Q.; Bauer, Travis L.

We present a new method for boundary detection within sequential data using compression-based analytics. Our approach is to approximate the information distance between two adjacent sliding windows within the sequence. Large values in the distance metric are indicative of boundary locations. A new algorithm is developed, referred to as sliding information distance (SLID), that provides a fast, accurate, and robust approximation to the normalized information distance. A modified smoothed z-score algorithm is used to locate peaks in the distance metric, indicating boundary locations. A variety of data sources are considered, including text and audio, to demonstrate the efficacy of our approach.
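
A simplified sketch of this idea follows (illustrative only: it approximates the information distance with a zlib-based NCD over adjacent windows and flags peaks with a plain z-score threshold, whereas the paper develops SLID and a modified smoothed z-score algorithm):

import zlib
import numpy as np

def ncd(x: bytes, y: bytes) -> float:
    c = lambda b: len(zlib.compress(b, 9))
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

def boundary_scores(data: bytes, window: int = 512, step: int = 64):
    """Distance between adjacent sliding windows; large values suggest boundaries."""
    positions, scores = [], []
    for i in range(window, len(data) - window, step):
        left, right = data[i - window:i], data[i:i + window]
        positions.append(i)
        scores.append(ncd(left, right))
    return np.array(positions), np.array(scores)

def peak_positions(positions, scores, z_thresh=2.5):
    """Flag scores more than z_thresh standard deviations above the mean."""
    z = (scores - scores.mean()) / (scores.std() + 1e-12)
    return positions[z > z_thresh]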

Efficient transfer learning for neural network language models

Proceedings of the 2018 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, ASONAM 2018

Skryzalin, Jacek S.; Link, Hamilton E.; Wendt, Jeremy D.; Field, Richard V.; Richter, Samuel N.

We apply transfer learning techniques to create topically and/or stylistically biased natural language models from small data samples, given generic long short-term memory (LSTM) language models trained on larger data sets. Although LSTM language models are powerful tools with wide-ranging applications, they require enormous amounts of data and time to train. Thus, we build general purpose language models that take advantage of large standing corpora and computational resources proactively, allowing us to build more specialized analytical tools from smaller data sets on demand. We show that it is possible to construct a language model from a small, focused corpus by first training an LSTM language model on a large corpus (e.g., the text from English Wikipedia) and then retraining only the internal transition model parameters on the smaller corpus. We also show that a single general language model can be reused through transfer learning to create many distinct special purpose language models quickly with modest amounts of data.
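
The retraining step can be sketched as follows (a minimal PyTorch illustration with assumed layer names and sizes, not the authors' code): after pretraining on the large corpus, the embedding and output projection are frozen and only the LSTM's internal transition parameters are updated on the small, focused corpus.

import torch
import torch.nn as nn

class LSTMLanguageModel(nn.Module):
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.decoder = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens):
        h, _ = self.lstm(self.embed(tokens))
        return self.decoder(h)

model = LSTMLanguageModel(vocab_size=50000)
# ... pretrain on the large generic corpus here ...

# Freeze embeddings and decoder; fine-tune only the LSTM transition weights.
for p in model.embed.parameters():
    p.requires_grad = False
for p in model.decoder.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def finetune_step(batch_tokens, batch_targets):
    """One update on the small, focused corpus."""
    optimizer.zero_grad()
    logits = model(batch_tokens)                       # (batch, time, vocab)
    loss = loss_fn(logits.reshape(-1, logits.size(-1)),
                   batch_targets.reshape(-1))
    loss.backward()
    optimizer.step()
    return loss.item()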

A dynamic model for social networks

Field, Richard V.; Link, Hamilton E.; Skryzalin, Jacek S.; Wendt, Jeremy D.

Social network graph models are data structures representing entities (often people, corporations, or accounts) as "vertices" and their interactions as "edges" between pairs of vertices. These graphs are most often total-graph models -- the overall structure of edges and vertices in a bidirectional or directional graph is described in global terms and the network is generated algorithmically. We are interested in "egocentric" or "agent-based" models of social networks, where the behavior of the individual participants is described and the graph itself is an emergent phenomenon. Our hope is that such graph models will allow us to ultimately reason from observations back to estimated properties of the individuals and populations, and result in not only more accurate algorithms for link prediction and friend recommendation, but also a more intuitive understanding of human behavior in such systems than is revealed by previous approaches. This report documents our preliminary work in this area; we describe several past graph models, two egocentric models of our own design, and our thoughts about the future direction of this research.
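
The distinction between total-graph and agent-based generation can be illustrated with a toy sketch (purely illustrative; the report's own egocentric models are more detailed): each agent has its own activity rate and decides whether to revisit a past contact or reach out to someone new, and the graph is whatever edge set emerges.

import random
import collections

def agent_based_graph(n_agents=100, steps=1000, p_new_contact=0.2):
    """Toy egocentric generator: one agent acts per step."""
    edges = collections.Counter()
    contacts = {a: [] for a in range(n_agents)}
    activity = [random.random() for _ in range(n_agents)]   # per-agent rate
    for _ in range(steps):
        a = random.choices(range(n_agents), weights=activity)[0]
        if contacts[a] and random.random() > p_new_contact:
            b = random.choice(contacts[a])        # revisit an existing tie
        else:
            b = random.randrange(n_agents)        # form a new tie
            if b == a:
                continue
            contacts[a].append(b)
            contacts[b].append(a)
        edges[(min(a, b), max(a, b))] += 1        # weighted, undirected edge
    return edges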

Developing a novel hierarchical approach for multiscale structural reliability predictions for ultra-high consequence applications

Emery, John M.; Coffin, Peter C.; Robbins, Brian A.; Carroll, Jay D.; Field, Richard V.; Jeremy Yoo, Yung S.; Kacher, Josh K.

Microstructural variabilities are among the predominant sources of uncertainty in structural performance and reliability. We seek to develop efficient algorithms for multiscale calculations for polycrystalline alloys such as aluminum alloy 6061-T6 in environments where ductile fracture is the dominant failure mode. Our approach employs concurrent multiscale methods, but does not focus on their development. They are a necessary but not sufficient ingredient to multiscale reliability predictions. We have focused on how to efficiently use concurrent models for forward propagation because practical applications cannot include fine-scale details throughout the problem domain due to exorbitant computational demand. Our approach begins with a low-fidelity prediction at the engineering scale that is subsequently refined with multiscale simulation. The results presented in this report focus on plasticity and damage at the meso-scale, efforts to expedite Monte Carlo simulation with microstructural considerations, modeling aspects regarding geometric representation of grains and second-phase particles, and contrasting algorithms for scale coupling.

Temporal anomaly detection in social media

Proceedings of the 2017 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, ASONAM 2017

Skryzalin, Jacek S.; Field, Richard V.; Fisher, Andrew N.; Bauer, Travis L.

In this work, we approach topic tracking and meme trending in social media with a temporal focus; rather than analyzing topics, we aim to identify time periods whose content differs significantly from normal. We detail two approaches. The first is an information-theoretic analysis of the distributions of terms emitted during each time period. In the second, we cluster the documents from each time period and analyze the tightness of each clustering. We also discuss a method of combining the scores created by each technique, and we provide ample empirical analysis of our methodology on various Twitter datasets.
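
The first approach can be sketched as follows (an illustration only; the specific divergence and smoothing used in the paper may differ): compare the term-frequency distribution of each time window against a background distribution with a symmetric divergence, and treat high-divergence windows as anomalous.

import math
from collections import Counter

def _prob(counts, vocab):
    total = sum(counts.values())
    return {w: counts.get(w, 0) / total for w in vocab}

def js_divergence(counts_p: Counter, counts_q: Counter) -> float:
    """Jensen-Shannon divergence between two term-frequency distributions."""
    vocab = set(counts_p) | set(counts_q)
    p, q = _prob(counts_p, vocab), _prob(counts_q, vocab)
    m = {w: 0.5 * (p[w] + q[w]) for w in vocab}
    def kl(a, b):
        return sum(a[w] * math.log(a[w] / b[w]) for w in vocab if a[w] > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def anomaly_scores(window_term_counts, background_term_counts):
    """Score each time window's terms against the background distribution."""
    return [js_divergence(Counter(w), Counter(background_term_counts))
            for w in window_term_counts]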

Estimating users’ mode transition functions and activity levels from social media

Proceedings of the 2017 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, ASONAM 2017

Link, Hamilton E.; Wendt, Jeremy D.; Field, Richard V.; Marthe, Jocelyn

We present a temporal model of individual-scale social media user behavior, comprising modal activity levels and mode switching patterns. We show that this model can be effectively and easily learned from available social media data, and that our model is sufficiently flexible to capture diverse users’ daily activity patterns. In applications such as electric power load prediction, computer network traffic analysis, disease spread modeling, and disease outbreak forecasting, it is useful to have a model of individual-scale patterns of human behavior. Our user model is intended to be suitable for integration into such population models, for future applications of prediction, change detection, or agent-based simulation.
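
One simple reading of such a model (an illustrative sketch with assumed values, not the paper's calibrated model) is a user who switches between a small set of activity modes according to a Markov transition matrix and posts at a mode-dependent Poisson rate; both the transition matrix and the rates would be estimated from the user's timestamped posts.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-mode model: "quiet" and "active".
rates = np.array([0.5, 8.0])            # expected posts per hour in each mode
transition = np.array([[0.95, 0.05],    # P(next mode | current mode), per hour
                       [0.20, 0.80]])

def simulate_hours(n_hours, mode=0):
    """Simulate hourly post counts for one user."""
    counts = []
    for _ in range(n_hours):
        counts.append(rng.poisson(rates[mode]))
        mode = rng.choice(2, p=transition[mode])
    return np.array(counts)

def estimate_rates(counts, mode_labels):
    """Given inferred mode labels per hour, re-estimate each mode's rate."""
    counts, mode_labels = np.asarray(counts), np.asarray(mode_labels)
    return np.array([counts[mode_labels == m].mean() for m in (0, 1)])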

Data Inferencing on Semantic Graphs (DISeG) Final Report

Wendt, Jeremy D.; Quach, Tu-Thach Q.; Zage, David J.; Field, Richard V.; Wells, Randall W.; Soundarajan, Sucheta S.; Cruz, Gerardo C.

The Data Inferencing on Semantic Graphs (DISeG) project was a two-year investigation of applying inferencing techniques (focusing on belief propagation) to social graphs, with an emphasis on semantic graphs (also called multi-layer graphs). While working this problem, we developed a new directed version of inferencing we call Directed Propagation (Chapters 2 and 4) and identified new semantic graph sampling problems (Chapter 3).

Bayesian methods for characterizing unknown parameters of material models

Applied Mathematical Modelling

Emery, John M.; Grigoriu, M.D.; Field, Richard V.

A Bayesian framework is developed for characterizing the unknown parameters of probabilistic models for material properties. In this framework, the unknown parameters are viewed as random and described by their posterior distributions obtained from prior information and measurements of quantities of interest that are observable and depend on the unknown parameters. The proposed Bayesian method is applied to characterize an unknown spatial correlation of the conductivity field in the definition of a stochastic transport equation and to solve this equation by Monte Carlo simulation and stochastic reduced order models (SROMs). The Bayesian method is also employed to characterize unknown parameters of material properties for laser welds from measurements of peak forces sustained by these welds.
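
The calibration step can be sketched generically (a minimal random-walk Metropolis example for a single scalar parameter with an assumed lognormal prior and Gaussian measurement error; the paper's conductivity-field and laser-weld applications are more involved):

import numpy as np

rng = np.random.default_rng(1)

def log_prior(theta):
    # Hypothetical prior: parameter is lognormal with unit log-scale.
    return -0.5 * np.log(theta) ** 2 - np.log(theta) if theta > 0 else -np.inf

def log_likelihood(theta, data, forward_model, sigma=0.1):
    # Gaussian measurement error around the model prediction.
    return -0.5 * np.sum((data - forward_model(theta)) ** 2) / sigma ** 2

def metropolis(data, forward_model, n_steps=5000, step=0.1, theta0=1.0):
    """Random-walk Metropolis sampler for the posterior of a scalar parameter."""
    samples, theta = [], theta0
    lp = log_prior(theta) + log_likelihood(theta, data, forward_model)
    for _ in range(n_steps):
        prop = theta + step * rng.normal()
        lp_prop = log_prior(prop) + (log_likelihood(prop, data, forward_model)
                                     if prop > 0 else -np.inf)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        samples.append(theta)
    return np.array(samples)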

On the efficacy of stochastic collocation, stochastic Galerkin, and stochastic reduced order models for solving stochastic problems

Probabilistic Engineering Mechanics

Field, Richard V.; Emery, John M.

The stochastic collocation (SC) and stochastic Galerkin (SG) methods are two well-established and successful approaches for solving general stochastic problems. A recently developed method based on stochastic reduced order models (SROMs) can also be used. Herein we provide a comparison of the three methods for some numerical examples; our evaluation only holds for the examples considered in the paper. The purpose of the comparisons is not to criticize the SC or SG methods, which have proven very useful for a broad range of applications, nor is it to provide overall ratings of these methods as compared to the SROM method. Rather, our objectives are to present the SROM method as an alternative approach to solving stochastic problems and provide information on the computational effort required by the implementation of each method, while simultaneously assessing their performance for a collection of specific problems.

Predicting laser weld reliability with stochastic reduced-order models. Predicting laser weld reliability

International Journal for Numerical Methods in Engineering

Field, Richard V.; Foulk, James W.; Karlson, Kyle N.

Laser welds are prevalent in complex engineering systems and frequently govern failure. The weld process often results in partial penetration of the base metals, leaving sharp crack-like features with a high degree of variability in the geometry and material properties of the welded structure. Furthermore, accurate finite element predictions of the structural reliability of components containing laser welds require the analysis of a large number of finite element meshes with very fine spatial resolution, where each mesh has different geometry and/or material properties in the welded region to address variability. We found that traditional modeling approaches could not be efficiently employed. Consequently, a method is presented for constructing a surrogate model, based on stochastic reduced-order models, to represent the laser welds within the component. Here, the uncertainty in weld microstructure and geometry is captured by calibrating plasticity parameters to experimental observations of necking because, owing to the ductility of the welds, necking – and thus peak load – plays the pivotal role in structural failure. The proposed method is exercised for a simplified verification problem and compared with traditional Monte Carlo simulation, with remarkable results.
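
For orientation, a stochastic reduced-order model represents a random quantity by a small set of samples carrying unequal probabilities, chosen so that the reduced model matches statistics of the target. The sketch below is a deliberately simplified one-dimensional version (fixed sample locations at quantiles, probabilities calibrated to the empirical CDF); the paper's construction for weld plasticity parameters and geometry is richer.

import numpy as np
from scipy.optimize import minimize

def build_srom(target_samples, m=10):
    """Fit probabilities of an m-point SROM to a large set of target samples."""
    target_samples = np.asarray(target_samples)
    x = np.quantile(target_samples, (np.arange(m) + 0.5) / m)  # fixed locations
    grid = np.sort(target_samples)
    target_cdf = (np.arange(grid.size) + 1) / grid.size

    def objective(p):
        srom_cdf = np.array([p[x <= g].sum() for g in grid])
        return np.mean((srom_cdf - target_cdf) ** 2)

    cons = {"type": "eq", "fun": lambda p: p.sum() - 1.0}
    res = minimize(objective, np.full(m, 1.0 / m), bounds=[(0, 1)] * m,
                   constraints=[cons], method="SLSQP")
    return x, res.x   # sample locations and calibrated probabilities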

Probability distribution of von Mises stress in the presence of pre-load

Segalman, Daniel J.; Field, Richard V.; Reese, Garth M.

Random vibration under preload is important in multiple endeavors, including those involving launch and re-entry. There are some methods in the literature that begin to address this problem, but nothing that accommodates the existence of preloads and the necessity of making probabilistic statements about the stress levels likely to be encountered. An approach to achieve this goal is presented, along with several simple illustrations.

Statistical surrogate models for prediction of high-consequence climate change

Field, Richard V.; Constantine, Paul C.; Boslough, Mark B.

In safety engineering, performance metrics are defined using probabilistic risk assessments focused on the low-probability, high-consequence tail of the distribution of possible events, as opposed to best estimates based on central tendencies. We frame the climate change problem and its associated risks in a similar manner. To properly explore the tails of the distribution requires extensive sampling, which is not possible with existing coupled atmospheric models due to the high computational cost of each simulation. We therefore propose the use of specialized statistical surrogate models (SSMs) for the purpose of exploring the probability law of various climate variables of interest. An SSM differs from a deterministic surrogate model in that it represents each climate variable of interest as a space/time random field. The SSM can be calibrated to available spatial and temporal data from existing climate databases, e.g., the Program for Climate Model Diagnosis and Intercomparison (PCMDI), or to a collection of outputs from a General Circulation Model (GCM), e.g., the Community Earth System Model (CESM) and its predecessors. Because of its reduced size and complexity, the realization of a large number of independent model outputs from an SSM becomes computationally straightforward, so that quantifying the risk associated with low-probability, high-consequence climate events becomes feasible. A Bayesian framework is developed to provide quantitative measures of confidence, via Bayesian credible intervals, in the use of the proposed approach to assess these risks.

Computational thermal, chemical, fluid, and solid mechanics for geosystems management

Martinez, Mario J.; Red-Horse, John R.; Carnes, Brian C.; Mesh, Mikhail M.; Field, Richard V.; Davison, Scott M.; Yoon, Hongkyu Y.; Bishop, Joseph E.; Newell, Pania N.; Notz, Patrick N.; Turner, Daniel Z.; Subia, Samuel R.; Hopkins, Polly L.; Moffat, Harry K.; Jove Colon, Carlos F.; Dewers, Thomas D.; Klise, Katherine A.

This document summarizes research performed under the SNL LDRD entitled - Computational Mechanics for Geosystems Management to Support the Energy and Natural Resources Mission. The main accomplishment was development of a foundational SNL capability for computational thermal, chemical, fluid, and solid mechanics analysis of geosystems. The code was developed within the SNL Sierra software system. This report summarizes the capabilities of the simulation code and the supporting research and development conducted under this LDRD. The main goal of this project was the development of a foundational capability for coupled thermal, hydrological, mechanical, chemical (THMC) simulation of heterogeneous geosystems utilizing massively parallel processing. To solve these complex issues, this project integrated research in numerical mathematics and algorithms for chemically reactive multiphase systems with computer science research in adaptive coupled solution control and framework architecture. This report summarizes and demonstrates the capabilities that were developed together with the supporting research underlying the models. Key accomplishments are: (1) General capability for modeling nonisothermal, multiphase, multicomponent flow in heterogeneous porous geologic materials; (2) General capability to model multiphase reactive transport of species in heterogeneous porous media; (3) Constitutive models for describing real, general geomaterials under multiphase conditions utilizing laboratory data; (4) General capability to couple nonisothermal reactive flow with geomechanics (THMC); (5) Phase behavior thermodynamics for the CO2-H2O-NaCl system. General implementation enables modeling of other fluid mixtures. Adaptive look-up tables enable thermodynamic capability to other simulators; (6) Capability for statistical modeling of heterogeneity in geologic materials; and (7) Simulator utilizes unstructured grids on parallel processing computers.

Predicting fracture in micron-scale polycrystalline silicon MEMS structures

Boyce, Brad B.; Foulk, James W.; Field, Richard V.; Ohlhausen, J.A.

Designing reliable MEMS structures presents numerous challenges. Polycrystalline silicon fractures in a brittle manner with considerable variability in measured strength. Furthermore, it is not clear how to use a measured tensile strength distribution to predict the strength of a complex MEMS structure. To address such issues, two recently developed high throughput MEMS tensile test techniques have been used to measure strength distribution tails. The measured tensile strength distributions enable the definition of a threshold strength as well as an inferred maximum flaw size. The nature of strength-controlling flaws has been identified and sources of the observed variation in strength investigated. A double edge-notched specimen geometry was also tested to study the effect of a severe, micron-scale stress concentration on the measured strength distribution. Strength-based, Weibull-based, and fracture mechanics-based failure analyses were performed and compared with the experimental results.
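
For background on the Weibull-based failure analysis mentioned above (a standard relation, not a result specific to this work), the two-parameter Weibull model gives the probability of failure at applied stress \(\sigma\) as

\[ P_f(\sigma) \;=\; 1 - \exp\!\left[-\left(\frac{\sigma}{\sigma_0}\right)^{m}\right], \]

where \(\sigma_0\) is a reference strength and the Weibull modulus \(m\) quantifies scatter (larger \(m\) means less variability); volume or area scaling factors enter when comparing specimens of different size or stress concentration.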

Model selection for a class of stochastic processes or random fields with bounded range

Probabilistic Engineering Mechanics

Field, Richard V.; Grigoriu, M.

Methods are developed for finding an optimal model for a non-Gaussian stationary stochastic process or homogeneous random field under limited information. The available information consists of: (i) one or more finite length samples of the process or field; and (ii) knowledge that the process or field takes values in a bounded interval of the real line whose ends may or may not be known. The methods are developed and applied to the special case of non-Gaussian processes or fields belonging to the class of beta translation processes. Beta translation processes provide a flexible model for representing physical phenomena taking values in a bounded range, and are therefore useful for many applications. Numerical examples are presented to illustrate the utility of beta translation processes and the proposed methods for model selection.
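
A translation process in this sense is a memoryless transform of a stationary Gaussian process. The sketch below generates samples of a beta translation process under assumed covariance and beta parameters (an illustration of the model class only; the paper's contribution is selecting these quantities from limited data).

import numpy as np
from scipy.stats import norm, beta

rng = np.random.default_rng(2)

def beta_translation_sample(t, corr_length=1.0, a=2.0, b=5.0, lo=0.0, hi=1.0):
    """Sample X(t) = F_beta^{-1}(Phi(G(t))) on the time grid t, where G is a
    zero-mean, unit-variance Gaussian process with exponential covariance."""
    t = np.asarray(t, dtype=float)
    cov = np.exp(-np.abs(t[:, None] - t[None, :]) / corr_length)
    chol = np.linalg.cholesky(cov + 1e-10 * np.eye(t.size))
    g = chol @ rng.standard_normal(t.size)
    u = norm.cdf(g)                              # uniform marginals
    return lo + (hi - lo) * beta.ppf(u, a, b)    # bounded, beta marginals

x = beta_translation_sample(np.linspace(0.0, 10.0, 200))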

Stochastic models: theory and simulation

Field, Richard V.

Many problems in applied science and engineering involve physical phenomena that behave randomly in time and/or space. Examples are diverse and include turbulent flow over an aircraft wing, Earth climatology, material microstructure, and the financial markets. Mathematical models for these random phenomena are referred to as stochastic processes and/or random fields, and Monte Carlo simulation is the only general-purpose tool for solving problems of this type. The use of Monte Carlo simulation requires methods and algorithms to generate samples of the appropriate stochastic model; these samples then become inputs and/or boundary conditions to established deterministic simulation codes. While numerous algorithms and tools currently exist to generate samples of simple random variables and vectors, no cohesive simulation tool yet exists for generating samples of stochastic processes and/or random fields. There are two objectives of this report. First, we provide some theoretical background on stochastic processes and random fields that can be used to model phenomena that are random in space and/or time. Second, we provide simple algorithms that can be used to generate independent samples of general stochastic models. The theory and simulation of random variables and vectors is also reviewed for completeness.
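
One classical algorithm of the kind discussed here is the spectral representation method for a stationary Gaussian process (a generic sketch under an assumed one-sided spectral density; the report covers a broader family of models and algorithms):

import numpy as np

rng = np.random.default_rng(3)

def spectral_sample(t, s_onesided, w_max=10.0, n_freq=512):
    """One sample of a zero-mean stationary Gaussian process with one-sided
    spectral density s_onesided(w), via
    X(t) = sqrt(2) * sum_k sqrt(S(w_k) dw) * cos(w_k t + phi_k)."""
    t = np.asarray(t, dtype=float)
    dw = w_max / n_freq
    w = (np.arange(n_freq) + 0.5) * dw
    amp = np.sqrt(s_onesided(w) * dw)
    phi = rng.uniform(0.0, 2.0 * np.pi, size=n_freq)
    return np.sqrt(2.0) * np.sum(
        amp[:, None] * np.cos(np.outer(w, t) + phi[:, None]), axis=0)

# Example with an assumed spectral density shape.
t = np.linspace(0.0, 20.0, 500)
x = spectral_sample(t, lambda w: 1.0 / (1.0 + w ** 2))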

A solution to the static frame validation challenge problem using Bayesian model selection

Computer Methods in Applied Mechanics and Engineering

Field, Richard V.

Within this paper, we provide a solution to the static frame validation challenge problem (see this issue) in a manner that is consistent with the guidelines provided by the Validation Challenge Workshop tasking document. The static frame problem is constructed such that variability in material properties is known to be the only source of uncertainty in the system description, but there is ignorance on the type of model that best describes this variability. Hence both types of uncertainty, aleatoric and epistemic, are present and must be addressed. Our approach is to consider a collection of competing probabilistic models for the material properties, and calibrate these models to the information provided; models of different levels of complexity and numerical efficiency are included in the analysis. A Bayesian formulation is used to select the optimal model from the collection, which is then used for the regulatory assessment. Lastly, Bayesian credible intervals are used to provide a measure of confidence to our regulatory assessment.
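
The model-selection step follows the standard Bayesian recipe (shown here for orientation; the calibration details and utility considerations are in the full paper). Each candidate model \(M_k\) for the material variability is scored by its posterior probability given the supplied data \(D\),

\[ P(M_k \mid D) \;=\; \frac{P(D \mid M_k)\,P(M_k)}{\sum_j P(D \mid M_j)\,P(M_j)}, \qquad P(D \mid M_k) \;=\; \int P(D \mid \theta_k, M_k)\, p(\theta_k \mid M_k)\, d\theta_k, \]

where \(\theta_k\) are the parameters of model \(M_k\); the model with the highest posterior probability is carried into the regulatory assessment.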

Model selection in applied science and engineering: A decision-theoretic approach

Journal of Engineering Mechanics

Field, Richard V.; Grigoriu, M.

Mathematical models are developed and used to study the properties of complex systems in just about every area of applied science and engineering. Information on the system being modeled is, in general, incomplete, so that there may be two or more models consistent with the available information. The collection of these models is called the class of candidate models. A decision-theoretic method is developed for selecting the optimal member from the collection. The optimal model depends on the available information, the class of candidate models, and the model use. The candidate models may be deterministic or random. Classical methods for model selection, including the method of maximum likelihood and Bayesian methods, are briefly reviewed. These methods ignore model use and require data to be available. In addition, examples are used to show that classical methods for model selection can be unreliable in the sense that they can deliver unsatisfactory models when data is limited. The proposed decision-theoretic method for model selection does not have these limitations. The method accounts for model use via a utility function. This feature is especially important when modeling high-risk systems where the consequences of using an inappropriate model for the system can be disastrous. © 2007 ASCE.

Convergence properties of polynomial chaos approximations for L2 random variables

Field, Richard V.

Polynomial chaos (PC) representations for non-Gaussian random variables are infinite series of Hermite polynomials of standard Gaussian random variables with deterministic coefficients. For calculations, the PC representations are truncated, creating what are herein referred to as PC approximations. We study some convergence properties of PC approximations for L2 random variables. The well-known property of mean-square convergence is reviewed. Mathematical proof is then provided to show that higher-order moments (i.e., greater than two) of PC approximations may or may not converge as the number of terms retained in the series, denoted by n, grows large. In particular, it is shown that the third absolute moment of the PC approximation for a lognormal random variable does converge, while moments of order four and higher of PC approximations for uniform random variables do not converge. It has been previously demonstrated through numerical study that this lack of convergence in the higher-order moments can have a profound effect on the rate of convergence of the tails of the distribution of the PC approximation. As a result, reliability estimates based on PC approximations can exhibit large errors, even when n is large. The purpose of this report is not to criticize the use of polynomial chaos for probabilistic analysis but, rather, to motivate the need for further study of the efficacy of the method.
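
As a concrete illustration of the objects under study (standard background, not a result unique to this report), the Hermite polynomial chaos representation of a lognormal random variable \(Y = e^{\mu + \sigma Z}\), with \(Z\) standard Gaussian and \(He_k\) the probabilists' Hermite polynomials, is

\[ Y \;=\; e^{\mu + \sigma^2/2} \sum_{k=0}^{\infty} \frac{\sigma^{k}}{k!}\, He_k(Z), \qquad Y_n \;=\; e^{\mu + \sigma^2/2} \sum_{k=0}^{n} \frac{\sigma^{k}}{k!}\, He_k(Z), \]

where the truncated series \(Y_n\) is the PC approximation whose higher-order moments are examined in the report.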

Reliability of dynamic systems under limited information

Field, Richard V.

A method is developed for reliability analysis of dynamic systems under limited information. The available information includes one or more samples of the system output; any known information on features of the output can be used if available. The method is based on the theory of non-Gaussian translation processes and is shown to be particularly suitable for problems of practical interest. For illustration, we apply the proposed method to a series of simple example problems and compare with results given by traditional statistical estimators in order to establish the accuracy of the method. It is demonstrated that the method delivers accurate results for the case of linear and nonlinear dynamic systems, and can be applied to analyze experimental data and/or mathematical model outputs. Two complex applications of direct interest to Sandia are also considered. First, we apply the proposed method to assess design reliability of a MEMS inertial switch. Second, we consider re-entry body (RB) component vibration response during normal re-entry, where the objective is to estimate the time-dependent probability of component failure. This last application is directly relevant to re-entry random vibration analysis at Sandia, and may provide insights on test-based and/or model-based qualification of weapon components for random vibration environments.

Modeling and input optimization under uncertainty for a collection of RF MEMS devices

American Society of Mechanical Engineers, Micro-Electro Mechanical Systems Division, (Publications) MEMS

Allen, M.S.; Massed, J.E.; Field, Richard V.

The dynamic response of an RF MEMS device to a time-varying electrostatic force is optimized to enhance robustness to variations in material properties and geometry. The device functions as an electrical switch, where an applied voltage is used to close a circuit. The objective is to minimize the severity of the mechanical impact that occurs each time the switch closes, because severe impacts have been found to significantly decrease the design life of these switches. The switch is modeled as a classical vibro-impact system: a single degree-of-freedom oscillator subject to mechanical impact with a single rigid barrier. Certain model parameters are described as random variables to represent the significant unit-to-unit variability observed during fabrication and testing of the collection of nominally-identical switches; these models for unit-to-unit variability are calibrated to available experimental data. Our objective is to design the shape and duration of the voltage waveform so that impact velocity at switch closure for the collection of nominally-identical switches is minimized subject to design constraints. The methodology is also applied to search for design changes that reduce the impact velocity and to predict the effect of fabrication process improvements. Copyright © 2006 by ASME.
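
The vibro-impact model can be illustrated with a toy sketch (nondimensional, assumed parameter values; the electrostatic force is simplified to be proportional to the square of the voltage, and impact is detected only at first closure): the quantity to be minimized over the shape and duration of the voltage waveform is the velocity at the instant the switch closes.

def impact_velocity(voltage, m=1.0, c=0.2, k=1.0, gap=1.0, alpha=2.0,
                    dt=1e-3, t_end=50.0):
    """Integrate a single-DOF vibro-impact oscillator driven by an
    electrostatic-like force alpha*voltage(t)**2 and return the velocity
    at first contact with the rigid barrier at x = gap (switch closure)."""
    x, v = 0.0, 0.0
    for step in range(int(t_end / dt)):
        t = step * dt
        force = alpha * voltage(t) ** 2     # gap dependence neglected for simplicity
        a = (force - c * v - k * x) / m
        v += a * dt                         # semi-implicit Euler step
        x += v * dt
        if x >= gap:
            return v                        # impact (closure) velocity to minimize
    return None                             # switch did not close for this waveform

# Hypothetical waveforms: an abrupt step versus a slower ramp to full voltage.
step_wave = lambda t: 1.0
ramp_wave = lambda t: min(t / 10.0, 1.0)
print(impact_velocity(step_wave), impact_velocity(ramp_wave))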

A decision-theoretic method for surrogate model selection

Field, Richard V.

The use of surrogate models to approximate computationally expensive simulation models, e.g., large comprehensive finite element models, is widespread. Applications include surrogate models for design, sensitivity analysis, and/or uncertainty quantification. Typically, a surrogate model is defined by a postulated functional form; values for the surrogate model parameters are estimated using results from a limited number of solutions to the comprehensive model. In general, there may be multiple surrogate models, each defined by possibly a different functional form, consistent with the limited data from the comprehensive model. We refer to each as a candidate surrogate model. Methods are developed and applied to select the optimal surrogate model from the collection of candidate surrogate models. The classical approach is to select the surrogate model that best fits the data provided by the comprehensive model; this technique is independent of the model use and, therefore, may be inappropriate for some applications. The proposed approach applies techniques from decision theory, where postulated utility functions are used to quantify the model use. Two applications are presented to illustrate the methods. These include surrogate model selection for the purpose of: (1) estimating the minimum of a deterministic function, and (2) the design under uncertainty of a physical system.

Methods for model selection in applied science and engineering

Field, Richard V.

Mathematical models are developed and used to study the properties of complex systems and/or modify these systems to satisfy some performance requirements in just about every area of applied science and engineering. A particular reason for developing a model, e.g., performance assessment or design, is referred to as the model use. Our objective is the development of a methodology for selecting a model that is sufficiently accurate for an intended use. Information on the system being modeled is, in general, incomplete, so that there may be two or more models consistent with the available information. The collection of these models is called the class of candidate models. Methods are developed for selecting the optimal member from a class of candidate models for the system. The optimal model depends on the available information, the selected class of candidate models, and the model use. Classical methods for model selection, including the method of maximum likelihood and Bayesian methods, as well as a method employing a decision-theoretic approach, are formulated to select the optimal model for numerous applications. There is no requirement that the candidate models be random. Classical methods for model selection ignore model use and require data to be available. Examples are used to show that these methods can be unreliable when data is limited. The decision-theoretic approach to model selection does not have these limitations, and model use is included through an appropriate utility function. This is especially important when modeling high risk systems, where the consequences of using an inappropriate model for the system can be disastrous. The decision-theoretic method for model selection is developed and applied for a series of complex and diverse applications. These include the selection of the: (1) optimal order of the polynomial chaos approximation for non-Gaussian random variables and stationary stochastic processes, (2) optimal pressure load model to be applied to a spacecraft during atmospheric re-entry, and (3) optimal design of a distributed sensor network for the purpose of vehicle tracking and identification.

Utilizing Computational Probabilistic Methods to Derive Shock Specifications in a Nondeterministic Environment

Field, Richard V.; Red-Horse, John R.; Paez, Thomas L.

One of the key elements of the Stochastic Finite Element Method, namely the polynomial chaos expansion, has been utilized in a nonlinear shock and vibration application. As a result, the computed response was expressed as a random process, which is an approximation to the true solution process and can be thought of as a generalization of solutions given as statistics only. This approximation to the response process was then used to derive an analytically-based design specification for component shock response that guarantees a balanced level of marginal reliability. Hence, this analytically-based reference SRS might lead to an improvement over the somewhat ad hoc test-based reference in the sense that it will not exhibit regions of conservativeness, nor lead to overtesting of the design.

A nondeterministic shock and vibration application using polynomial chaos expansions

Field, Richard V.; Red-Horse, John R.; Paez, Thomas L.

In the current study, the generality of the key underpinnings of the Stochastic Finite Element (SFEM) method is exploited in a nonlinear shock and vibration application where parametric uncertainty enters through random variables with probabilistic descriptions assumed to be known. The system output is represented as a vector containing Shock Response Spectrum (SRS) data at a predetermined number of frequency points. In contrast to many reliability-based methods, the goal of the current approach is to provide a means to address more general (vector) output entities, to provide this output as a random process, and to assess characteristics of the response which allow one to avoid issues of statistical dependence among its vector components.

A tutorial on design analysis for random vibration

The Shock and Vibration Digest

Reese, Garth M.; Field, Richard V.; Segalman, Daniel J.

The von Mises stress is often used as the metric for evaluating design margins, particularly for structures made of ductile materials. While computing the von Mises stress distribution in a structural system due to a deterministic load condition may be straightforward, difficulties arise when considering random vibration environments. As a result, alternate methods are used in practice. One such method involves resolving the random vibration environment to an equivalent static load. This technique, however, is only appropriate for a very small class of problems and can easily be used incorrectly. Monte Carlo sampling of numerical realizations that reproduce the second order statistics of the input is another method used to address this problem. This technique proves computationally inefficient and provides no insight as to the character of the distribution of von Mises stress. This tutorial describes a new methodology to investigate the design reliability of structural systems in a random vibration environment. The method provides analytic expressions for root mean square (RMS) von Mises stress and for the probability distributions of von Mises stress which can be evaluated efficiently and with good numerical precision. Further, this new approach has the important advantage of providing the asymptotic properties of the probability distribution. A brief overview of the theoretical development of the methodology is presented, followed by detailed instructions on how to implement the technique on engineering applications. As an example, the method is applied to a complex finite element model of a Global Positioning Satellite (GPS) system. This tutorial presents an efficient and accurate methodology for correctly applying the von Mises stress criterion to complex computational models. The von Mises criterion is the traditional method for determination of structural reliability issues in industry.
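
For reference (the standard definition, not a result of the tutorial), the von Mises stress is the scalar

\[ \sigma_{vM} \;=\; \sqrt{\tfrac{1}{2}\left[(\sigma_{11}-\sigma_{22})^2 + (\sigma_{22}-\sigma_{33})^2 + (\sigma_{33}-\sigma_{11})^2\right] + 3\left(\sigma_{12}^2 + \sigma_{23}^2 + \sigma_{31}^2\right)}. \]

Under a random vibration environment the stress components are random processes, so \(\sigma_{vM}\) is itself random, which is why probabilistic statements about it require the kind of analysis described above.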
