It is now commonplace for engineers to build mathematical models of the systems they are designing, building, or testing. And, it is nearly universally accepted that phenomenological models of physical systems must be validated prior to use for prediction in consequential scenarios. Yet there are situations in which testing alone, or neither testing nor modeling, may be economically viable alternatives to modeling and its associated testing. This paper develops an economic framework within which benefit–cost can be evaluated for modeling and model validation relative to other options. The development is presented in terms of a challenge problem. We provide a numerical example that quantifies when modeling, calibration, and validation yield higher benefit–cost than a testing-only or no-modeling-and-no-testing option.
In this discussion paper, we explore different ways to assess the value of verification and validation (V&V) of engineering models. We first present a literature review on the value of V&V and then use value chains and decision trees to show how value can be assessed from a decision maker’s perspective. In this context, the value is what the decision maker is willing to pay for V&V analysis with the understanding that the V&V results are uncertain. The 2014 Sandia V&V Challenge Workshop is used to illustrate these ideas.
Advancements in our capabilities to accurately model physical systems using high resolution finite element models have led to increasing use of models for prediction of physical system responses. Yet models are typically not used without first demonstrating their accuracy or, at least, adequacy. In high consequence applications where model predictions are used to make decisions or control operations involving human life or critical systems, a movement toward accreditation of mathematical model predictions via validation is taking hold. Model validation is the activity wherein the predictions of mathematical models are demonstrated to be accurate or adequate for use within a particular regime. Though many types of predictions can be made with mathematical models, not all predictions have the same impact on the usefulness of a model. For example, predictions where the response of a system is greatest may be most critical to the adequacy of a model. Therefore, a model that makes accurate predictions in some environments and poor predictions in other environments may be perfectly adequate for certain uses. The current investigation develops a general technique for validating mathematical models where the measures of response are weighted in some logical manner. A combined experimental and numerical example that demonstrates the validation of a system using both weighted and non-weighted response measures is presented.
Accurate material models are fundamental to predictive structural finite element models. Because potting foams are routinely used to mitigate shock and vibration of encapsulated components in electro/mechanical systems, accurate material models of foams are needed. A linear-viscoelastic foam constitutive model has been developed to represent the foam's stiffness and damping throughout an application space defined by temperature, strain rate or frequency and strain level. Validation of this linear-viscoelastic model, which is integrated into the Salinas structural dynamics code, is being achieved by modeling and testing a series of structural geometries of increasing complexity that have been designed to ensure sensitivity to material parameters. Both experimental and analytical uncertainties are being quantified to ensure the fair assessment of model validity. Quantitative model validation metrics are being developed to provide a means of comparison for analytical model predictions to observations made in the experiments. This paper is one of several recent papers documenting the validation process for simple to complex structures with foam encapsulated components. This paper specifically focuses on model validation over a wide temperature range and using a simple dumbbell structure for modal testing and simulation. Material variations of density and modulus have been included. A double blind validation process is described that brings together test data with model predictions.
Conference Proceedings of the Society for Experimental Mechanics Series
Hasselman, Timothy; Wathugala, G.W.; Urbina, Angel; Paez, Thomas L.
Mechanical systems behave randomly and it is desirable to capture this feature when making response predictions. Currently, there is an effort to develop predictive mathematical models and test their validity through the assessment of their predictive accuracy relative to experimental results. Traditionally, the approach to quantify modeling uncertainty is to examine the uncertainty associated with each of the critical model parameters and to propagate this through the model to obtain an estimate of uncertainty in model predictions. This approach is referred to as the "bottom-up" approach. However, parametric uncertainty does not account for all sources of the differences between model predictions and experimental observations, such as model form uncertainty and experimental uncertainty due to the variability of test conditions, measurements and data processing. Uncertainty quantification (UQ) based directly on the differences between model predictions and experimental data is referred to as the "top-down" approach. This paper discusses both the top-down and bottom-up approaches and uses the respective stochastic models to assess the validity of a joint model with respect to experimental data not used to calibrate the model, i.e. random vibration versus sine test data. Practical examples based on joint modeling and testing performed by Sandia are presented and conclusions are drawn as to the pros and cons of each approach.
Sandia National Laboratories has been conducting studies on performance of laboratory and commercial lithium-ion and other types of electrochemical cells using inductive models [1]. The objectives of these investigations are: (1) To develop procedures and techniques to rapidly determine performance degradation rates while these cells undergo life tests; (2) To model cell voltage and capacity in order to simulate cell performance characteristics under variable load and temperature conditions; (3) To model rechargeable battery degradation under charge/discharge cycles and many other conditions. The inductive model and methodology are particularly useful when complicated cell performance behaviors are involved, which are often difficult to interpret with simple empirical approaches. We find that the inductive model can be used effectively: (1) To enable efficient predictions of battery life; (2) To characterize system behavior. Inductive models provide convenient tools to characterize system behavior using experimentally or analytically derived data in an efficient and robust framework. The approach does not require detailed phenomenological development. There are certain advantages unique to this approach. Among these advantages is the ability to avoid making measurements of hard-to-determine physical parameters or having to understand cell processes sufficiently to write mathematical functions describing their behavior. We used artificial neural networks for inductive modeling, along with ancillary mathematical tools to improve their accuracy. This paper summarizes efforts to use inductive tools for cell and battery modeling. Examples of numerical results will be presented. One of them is related to high power lithium-ion batteries tested under the U.S. Department of Energy Advanced Technology Development Program for hybrid vehicle applications.
Sandia National Laboratories is involved in the development of accelerated life testing and thermal abuse tests to enhance the understanding of power and capacity fade issues and predict life of the battery under a nominal use condition. This paper will use power and capacity fade behaviors of a Ni-oxide-based lithium-ion battery system to illustrate how effectively the inductive model can interpret cell behavior and predict life. We will discuss the analysis of the fading behavior associated with the cell performance and explain how the model can predict cell performance.
The constitutive behavior of mechanical joints is largely responsible for the energy dissipation and vibration damping in weapons systems. For reasons arising from the dramatically different length scales associated with those dissipative mechanisms and the length scales characteristic of the overall structure, this physics cannot be captured adequately through direct simulation of the contact mechanics within a structural dynamics analysis. The only practical method for accommodating the nonlinear nature of joint mechanisms within structural dynamic analysis is through constitutive models employing degrees of freedom natural to the scale of structural dynamics. This document discusses a road-map for developing such constitutive models.
Dynamic thermography is a promising technology for inspecting metallic and composite structures used in high-consequence industries. However, the reliability and inspection sensitivity of this technology have historically been limited by the need for extensive operator experience and the use of human judgment and visual acuity to detect flaws in the large volume of infrared image data collected. To overcome these limitations, new automated data analysis algorithms and software are needed. The primary objectives of this research effort were to develop a data processing methodology that is tied to the underlying physics, which reduces or removes the data interpretation requirements, and which eliminates the need to look at significant numbers of data frames to determine if a flaw is present. Considering the strengths and weaknesses of previous research efforts, this research elected to couple both the temporal and spatial attributes of the surface temperature. Of the possible algorithms investigated, the best performing was a radiance weighted root mean square Laplacian metric that included a multiplicative surface effect correction factor and a novel spatio-temporal parametric model for data smoothing. This metric demonstrated the potential for detecting flaws smaller than 0.075 inch in inspection areas on the order of one square foot. Included in this report is the development of a thermal imaging model, a weighted least squares thermal data smoothing algorithm, simulation and experimental flaw detection results, and an overview of the ATAC (Automated Thermal Analysis Code) software that was developed to analyze thermal inspection data.
This report describes the underlying principles and goals of the Sandia ASCI Verification and Validation Program Validation Metrics Project. It also gives a technical description of two case studies, one in structural dynamics and the other in thermomechanics, that serve to focus the technical work of the project in Fiscal Year 2001.
Shock excitations are normally random process realizations, and most of our efforts to represent them either directly or indirectly reflect this fact. The most common indirect representation of shock sources is the shock response spectrum. It seeks to establish the damage-causing potential of random shocks in terms of responses excited in linear, single-degree-of-freedom systems. This paper shows that shock sources can be represented directly by developing the probabilistic and statistical structure that underlies the random shock source. Confidence bounds on process statistics and probabilities of specific excitation levels can be established from the model. Some numerical examples are presented.
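The shock response spectrum named above as the common indirect representation can be sketched directly: for each candidate natural frequency, integrate a damped single-degree-of-freedom oscillator driven by the base acceleration and record the peak absolute acceleration. The following pure-Python sketch (our own illustration with a simple semi-implicit integrator and 5% damping, not the paper's method; all names are ours) shows the idea:

```python
import math

def srs(accel, dt, freqs, damping=0.05):
    """Maximax shock response spectrum of a base-acceleration record.

    For each natural frequency fn, integrate the damped SDOF equation
        z'' + 2*zeta*wn*z' + wn^2*z = -a(t)
    (z = relative displacement) with a semi-implicit Euler scheme and
    record the peak absolute acceleration |2*zeta*wn*z' + wn^2*z|.
    dt must be small relative to the shortest period of interest.
    """
    out = []
    for fn in freqs:
        wn = 2.0 * math.pi * fn
        z, v, peak = 0.0, 0.0, 0.0
        for a in accel:
            acc_rel = -a - 2.0 * damping * wn * v - wn * wn * z
            v += acc_rel * dt          # update velocity first ...
            z += v * dt                # ... then displacement (symplectic)
            peak = max(peak, abs(2.0 * damping * wn * v + wn * wn * z))
        out.append(peak)
    return out

# Example: unit half-sine base pulse, 10 ms duration, plus free decay
dt = 1.0e-5
pulse = [math.sin(math.pi * i * dt / 0.01) for i in range(int(0.01 / dt))]
record = pulse + [0.0] * int(0.04 / dt)
spectrum = srs(record, dt, [50.0, 100.0, 500.0])
```

At frequencies well above the pulse bandwidth the SRS approaches the peak input acceleration, which gives a quick sanity check on the integrator.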
One of the key elements of the Stochastic Finite Element Method, namely the polynomial chaos expansion, has been utilized in a nonlinear shock and vibration application. As a result, the computed response was expressed as a random process, which is an approximation to the true solution process, and can be thought of as a generalization to solutions given as statistics only. This approximation to the response process was then used to derive an analytically-based design specification for component shock response that guarantees a balanced level of marginal reliability. Hence, this analytically-based reference SRS might lead to an improvement over the somewhat ad hoc test-based reference in the sense that it will not exhibit regions of conservativeness, nor lead to overtesting of the design.
Most mechanical and structural failures can be formulated as first passage problems. The traditional approach to first passage analysis models barrier crossings as Poisson events. The crossing rate is established and used in the Poisson framework to approximate the no-crossing probability. While this approach is accurate in a number of situations, it is desirable to develop analysis alternatives for those situations where traditional analysis is less accurate and situations where it is difficult to estimate parameters of the traditional approach. This paper develops an efficient simulation approach to first passage failure analysis. It is based on simulation of segments of complex random processes with the Karhunen-Loeve expansion, use of these simulations to estimate the parameters of a Markov chain, and use of the Markov chain to estimate the probability of first passage failure. Some numerical examples are presented.
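The simulation side of the approach above can be illustrated with a heavily simplified sketch: a stationary Gaussian process is built from a truncated spectral expansion (standing in for the Karhunen-Loeve simulation), and the first-passage probability is estimated by direct Monte Carlo counting rather than the paper's Markov-chain estimator. The frequencies, variances, and barrier levels below are our own illustrative choices:

```python
import math
import random

def simulate_segment(omegas, variances, times, rng):
    """One realization of a zero-mean stationary Gaussian process from a
    truncated spectral (Karhunen-Loeve-type) expansion:
        X(t) = sum_k sqrt(v_k) * (A_k cos(w_k t) + B_k sin(w_k t)),
    with A_k, B_k independent standard normal coefficients."""
    coeffs = [(rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)) for _ in omegas]
    return [sum(math.sqrt(v) * (a * math.cos(w * t) + b * math.sin(w * t))
                for (a, b), w, v in zip(coeffs, omegas, variances))
            for t in times]

def first_passage_prob(barrier, duration, n_trials=500, seed=0):
    """Monte Carlo estimate of P(|X(t)| reaches the barrier within duration)."""
    rng = random.Random(seed)
    omegas = [2.0 * math.pi * f for f in (1.0, 1.5, 2.0)]
    variances = [1.0 / 3.0] * 3          # unit total process variance
    times = [0.02 * i for i in range(int(duration / 0.02))]
    hits = sum(
        1 for _ in range(n_trials)
        if any(abs(x) >= barrier
               for x in simulate_segment(omegas, variances, times, rng))
    )
    return hits / n_trials
```

As expected, the estimated failure probability falls sharply as the barrier rises relative to the process standard deviation.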
In the current study, the generality of the key underpinnings of the Stochastic Finite Element (SFEM) method is exploited in a nonlinear shock and vibration application where parametric uncertainty enters through random variables with probabilistic descriptions assumed to be known. The system output is represented as a vector containing Shock Response Spectrum (SRS) data at a predetermined number of frequency points. In contrast to many reliability-based methods, the goal of the current approach is to provide a means to address more general (vector) output entities, to provide this output as a random process, and to assess characteristics of the response which allow one to avoid issues of statistical dependence among its vector components.
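The polynomial chaos machinery underlying the two abstracts above can be sketched for a single standard normal input: the response is expanded in probabilists' Hermite polynomials, and coefficients are obtained by non-intrusive projection. The sketch below uses crude Monte Carlo projection (our simplification; a production SFEM code would use quadrature or Galerkin projection), with exp as a stand-in response function:

```python
import math
import random

def hermite(k, x):
    """Probabilists' Hermite polynomial He_k(x) via the recurrence
    He_{n+1}(x) = x*He_n(x) - n*He_{n-1}(x)."""
    h0, h1 = 1.0, x
    if k == 0:
        return h0
    for n in range(1, k):
        h0, h1 = h1, x * h1 - n * h0
    return h1

def pce_coefficients(g, order, n_samples=100_000, seed=3):
    """Non-intrusive projection for xi ~ N(0,1):
        c_k = E[g(xi) * He_k(xi)] / k!
    using E[He_k(xi)^2] = k! for orthogonality normalization."""
    rng = random.Random(seed)
    sums = [0.0] * (order + 1)
    for _ in range(n_samples):
        xi = rng.gauss(0.0, 1.0)
        y = g(xi)
        for k in range(order + 1):
            sums[k] += y * hermite(k, xi)
    return [s / (n_samples * math.factorial(k)) for k, s in enumerate(sums)]
```

For g = exp, every exact coefficient equals sqrt(e)/k!, so the leading coefficients give a convenient accuracy check on the projection.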
The canonical variate analysis technique is used in this investigation, along with a data transformation algorithm, to identify a system in a transform space. The transformation algorithm involves the preprocessing of measured excitation/response data with a zero-memory-nonlinear transform, specifically, the Rosenblatt transform. This transform approximately maps the measured excitation and response data from its own space into the space of uncorrelated, standard normal random variates. Following this transform, it is appropriate to model the excitation/response relation as linear since Gaussian inputs excite Gaussian responses in linear structures. The linear model is identified in the transform space using the canonical variate analysis approach, and system responses in the original space are predicted using inverse Rosenblatt transformation. An example is presented.
Device penetration into media such as metal and soil is an application of some engineering interest. Often, these devices contain internal components and it is of paramount importance that all significant components survive the severe environment that accompanies the penetration event. In addition, the system must be robust to perturbations in its operating environment, some of which exhibit behavior which can only be quantified to within some level of uncertainty. In the analysis discussed herein, methods to address the reliability of internal components for a specific application system are discussed. The shock response spectrum (SRS) is utilized in conjunction with the Advanced Mean Value (AMV) and Response Surface methods to make probabilistic statements regarding the predicted reliability of internal components. Monte Carlo simulation methods are also explored.
Random vibration is the phenomenon wherein random excitation applied to a mechanical system induces random response. We summarize the state of the art in random vibration analysis and testing, commenting on history, linear and nonlinear analysis, the analysis of large-scale systems, and probabilistic structural testing.
We investigate the reliability of a rechargeable battery acting as the energy storage component in a photovoltaic power supply system. A model system was constructed for this that includes the solar resource, the photovoltaic power supply system, the rechargeable battery, and a load. The solar resource and the system load are modeled as stochastic processes. The photovoltaic system and the rechargeable battery are modeled deterministically, and an artificial neural network is incorporated into the model of the rechargeable battery to simulate damage that occurs during deep discharge cycles. The equations governing system behavior are solved simultaneously in the Monte Carlo framework and a first passage problem is solved to assess system reliability.
We developed a model for the probabilistic behavior of a rechargeable battery acting as the energy storage component in a photovoltaic power supply system. Stochastic and deterministic models are created to simulate the behavior of the system components. The components are the solar resource, the photovoltaic power supply system, the rechargeable battery, and a load. Artificial neural networks are incorporated into the model of the rechargeable battery to simulate damage that occurs during deep discharge cycles. The equations governing system behavior are combined into one set and solved simultaneously in the Monte Carlo framework to evaluate the probabilistic character of measures of battery behavior.
When dealing with measured data from dynamic systems we often make the tacit assumption that the data are generated by linear dynamics. While some systematic tests for linearity and determinism are available - for example the coherence function, the probability density function, and the bispectrum - further tests that quantify the existence and the degree of nonlinearity are clearly needed. In this paper we demonstrate a statistical test for the nonlinearity exhibited by a dynamic system excited by Gaussian random noise. We perform the usual division of the input and response time series data into blocks as required by the Welch method of spectrum estimation and search for significant relationships between a given input frequency and response at harmonics of the selected input frequency. We argue that systematic tests based on the recently developed statistical method of surrogate data readily detect significant nonlinear relationships. The paper elucidates the method of surrogate data. Typical results are illustrated for a linear single degree-of-freedom system and for a system with polynomial stiffness nonlinearity.
Many dynamical systems tested in the field and the laboratory display significant nonlinear behavior. Accurate characterization of such systems requires modeling in a nonlinear framework. One construct forming a basis for nonlinear modeling is that of the artificial neural network (ANN). However, when system behavior is complex, the amount of data required to perform training can become unreasonable. The authors reduce the complexity of information present in system response measurements using decomposition via canonical variate analysis. They describe a method for decomposing system responses, then modeling the components with ANNs. A numerical example is presented, along with conclusions and recommendations.
Efforts to optimize the design of mechanical systems for preestablished use environments and to extend the durations of use cycles establish a need for in-service health monitoring. Numerous studies have proposed measures of structural response for the identification of structural damage, but few have suggested systematic techniques to guide the decision as to whether or not damage has occurred based on real data. Such techniques are necessary because in field applications the environments in which systems operate and the measurements that characterize system behavior are random. This paper investigates the use of artificial neural networks (ANNs) to identify damage in mechanical systems. Two probabilistic neural networks (PNNs) are developed and used to judge whether or not damage has occurred in a specific mechanical system, based on experimental measurements. The first PNN is a classical type that casts Bayesian decision analysis into an ANN framework; it uses exemplars measured from the undamaged and damaged system to establish whether system response measurements of unknown origin come from the former class (undamaged) or the latter class (damaged). The second PNN establishes the character of the undamaged system in terms of a kernel density estimator of measures of system response; when presented with system response measures of unknown origin, it makes a probabilistic judgment whether or not the data come from the undamaged population. The physical system used to carry out the experiments is an aerospace system component, and the environment used to excite the system is a stationary random vibration. The results of damage identification experiments are presented along with conclusions rating the effectiveness of the approaches.
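The second PNN described above rests on kernel density estimation of class-conditional densities; the decision rule itself can be sketched in a few lines. The one-dimensional Parzen-window classifier below is our own minimal illustration (scalar response feature, Gaussian kernels, equal priors, made-up bandwidth), not the experimental system's network:

```python
import math

def parzen_density(x, exemplars, bandwidth):
    """Kernel density estimate at x from exemplar responses, using
    Gaussian kernels of the given bandwidth."""
    n = len(exemplars)
    norm = n * bandwidth * math.sqrt(2.0 * math.pi)
    return sum(math.exp(-0.5 * ((x - e) / bandwidth) ** 2)
               for e in exemplars) / norm

def classify(x, undamaged, damaged, bandwidth=0.5):
    """Bayes decision with equal priors: pick the class whose estimated
    density at the measured feature x is larger."""
    p_u = parzen_density(x, undamaged, bandwidth)
    p_d = parzen_density(x, damaged, bandwidth)
    return "undamaged" if p_u >= p_d else "damaged"

# Hypothetical exemplar features from the two populations
undamaged = [-0.2, 0.0, 0.1, 0.3]
damaged = [2.8, 3.0, 3.1, 3.3]
```

A measurement near either exemplar cluster is assigned to that cluster, which is exactly the Bayesian decision analysis the classical PNN casts into network form.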
The simulation of mechanical system random vibrations is important in structural dynamics, but it is particularly difficult when the system under consideration is nonlinear. Artificial neural networks provide a useful tool for the modeling of nonlinear systems; however, such modeling may be inefficient or insufficiently accurate when the system under consideration is complex. This paper shows that there are several transformations that can be used to uncouple and simplify the components of motion of a complex nonlinear system, thereby making its modeling and random vibration simulation, via component modeling with artificial neural networks, a much simpler problem. A numerical example is presented.
It is common practice in system analysis to develop mathematical models for system behavior. Frequently, the actual system being modeled is also available for testing and observation, and sometimes the test data are used to help identify the parameters of the mathematical model. However, no general-purpose technique exists for formally, statistically judging the quality of a model. This paper suggests a formal statistical procedure for the validation of mathematical models of systems when data taken during operation of the system are available. The statistical validation procedure is based on the bootstrap, and it seeks to build a framework where a statistical test of hypothesis can be run to determine whether or not a mathematical model is an acceptable model of a system with regard to user-specified measures of system behavior. The approach to model validation developed in this study uses experimental data to estimate the marginal and joint confidence intervals of statistics of interest of the system. These same measures of behavior are estimated for the mathematical model. The statistics of interest from the mathematical model are located relative to the confidence intervals for the statistics obtained from the experimental data. These relative locations are used to judge the accuracy of the mathematical model. An extension of the technique is also suggested, wherein randomness may be included in the mathematical model through the introduction of random variable and random process terms. These terms cause random system behavior that can be compared to the randomness in the bootstrap evaluation of experimental system behavior. In this framework, the stochastic mathematical model can be evaluated. A numerical example is presented to demonstrate the application of the technique.
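The core of the bootstrap validation framework above - resample the experimental data, estimate a confidence interval for a statistic of interest, and locate the model's statistic relative to that interval - can be sketched compactly. The percentile-interval sketch below is our own simplification (scalar statistic, marginal interval only, no joint intervals), with illustrative names throughout:

```python
import random
import statistics

def bootstrap_ci(data, statistic, n_boot=2000, alpha=0.05, seed=1):
    """Percentile bootstrap confidence interval for a statistic of the data."""
    rng = random.Random(seed)
    reps = sorted(
        statistic([rng.choice(data) for _ in data]) for _ in range(n_boot)
    )
    lo = reps[int((alpha / 2) * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

def model_accepted(model_value, data, statistic, **kw):
    """Accept the model if its predicted statistic falls inside the
    bootstrap confidence interval estimated from experimental data."""
    lo, hi = bootstrap_ci(data, statistic, **kw)
    return lo <= model_value <= hi
```

A model prediction far outside the experimentally supported interval is rejected; one inside is accepted for that measure of behavior, which is the hypothesis-test framing the abstract describes.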
It is common practice in structural dynamics to develop mathematical models for system behavior, and the authors are now capable of developing stochastic models, i.e., models whose parameters are random variables. Such models have random characteristics that are meant to simulate the randomness in characteristics of experimentally observed systems. This paper suggests a formal statistical procedure for the validation of mathematical models of stochastic systems when data taken during operation of the stochastic system are available. The statistical characteristics of the experimental system are obtained using the bootstrap, a technique for the statistical analysis of non-Gaussian data. The authors propose a procedure to determine whether or not a mathematical model is an acceptable model of a stochastic system with regard to user-specified measures of system behavior. A numerical example is presented to demonstrate the application of the technique.
Mathematical models of physical systems are used, among other purposes, to improve our understanding of the behavior of physical systems, predict physical system response, and control the responses of systems. Phenomenological models are frequently used to simulate system behavior, but an alternative is available - the artificial neural network (ANN). The ANN is an inductive, or data-based model for the simulation of input/output mappings. The ANN can be used in numerous frameworks to simulate physical system behavior. ANNs require training data to learn patterns of input/output behavior, and once trained, they can be used to simulate system behavior within the space where they were trained. They do this by interpolating specified inputs among the training inputs to yield outputs that are interpolations of training outputs. The reason for using ANNs for the simulation of system response is that they provide accurate approximations of system behavior and are typically much more efficient than phenomenological models. This efficiency is very important in situations where multiple response computations are required, as in, for example, Monte Carlo analysis of probabilistic system response. This paper describes two frameworks in which we have used ANNs to good advantage in the approximate simulation of the behavior of physical system response. These frameworks are the non-recurrent and recurrent frameworks. It is assumed in these applications that physical experiments have been performed to obtain data characterizing the behavior of a system, or that an accurate finite element model has been run to establish system response. The paper provides brief discussions on the operation of ANNs, the operation of two different types of mechanical systems, and approaches to the solution of some special problems that occur in connection with ANN simulation of physical system response. Numerical examples are presented to demonstrate system simulation with ANNs.
Structural dynamic testing is concerned with the estimation of system properties, including frequency response functions and modal characteristics. These properties are derived from tests on the structure of interest, during which excitations and responses are measured and Fourier techniques are used to reduce the data. The inputs used in a test are frequently random, and they excite random responses in the structure of interest. When these random inputs and responses are analyzed they yield estimates of system properties that are random variable and random process realizations. Of course, such estimates of system properties vary randomly from one test to another, but even when deterministic inputs are used to excite a structure, the estimated properties vary from test to test. When test excitations and responses are normally distributed, classical techniques permit us to statistically analyze inputs, responses, and some system parameters. However, when the input excitations are non-normal, the system is nonlinear, and/or the property of interest is anything but the simplest, the classical analyses break down. The bootstrap is a technique for the statistical analysis of data that are not necessarily normally distributed. It can be used to statistically analyze any measure of input excitation or response, or any system property, when data are available to make an estimate. It is designed to estimate the standard error, bias, and confidence intervals of parameter estimates. This paper shows how the bootstrap can be applied to the statistical analysis of modal parameters.
It is common practice in applied mechanics to develop mathematical models for mechanical system behavior. Frequently, the actual physical system being modeled is also available for testing, and sometimes the test data are used to help identify the parameters of the mathematical model. However, no general-purpose technique exists for formally, statistically judging the quality of a model. This paper suggests a formal statistical procedure for the validation of mathematical models of physical systems when data taken during operation of the physical system are available. The statistical validation procedure is based on the bootstrap, and it seeks to build a framework where a statistical test of hypothesis can be run to determine whether or not a mathematical model is an acceptable model of a physical system with regard to user-specified measures of system behavior. The approach to model validation developed in this study uses experimental data to estimate the marginal and joint confidence intervals of statistics of interest of the physical system. These same measures of behavior are estimated for the mathematical model. The statistics of interest from the mathematical model are located relative to the confidence intervals for the statistics obtained from the experimental data. These relative locations are used to judge the accuracy of the mathematical model. A numerical example is presented to demonstrate the application of the technique.
Structural system simulation is important in analysis, design, testing, control, and other areas, but it is particularly difficult when the system under consideration is nonlinear. Artificial neural networks offer a useful tool for the modeling of nonlinear systems; however, such modeling may be inefficient or insufficiently accurate when the system under consideration is complex. This paper shows that there are several transformations that can be used to uncouple and simplify the components of motion of a complex nonlinear system, thereby making its modeling and simulation a much simpler problem. A numerical example is also presented.
Structural dynamic testing is concerned with estimation of system properties, including frequency response functions and modal characteristics. These properties are derived from tests on the structure of interest, during which excitations and responses are measured and Fourier techniques are used to reduce the data. The inputs used in a test are frequently random and excite random responses in the structure of interest. When these random inputs and responses are analyzed they yield estimates of system properties that are realizations of random variables and random processes. Of course, such estimates of system properties vary randomly from one test to another, but even when deterministic inputs are used to excite a structure, the estimated properties vary from test to test. When test excitations and responses are normally distributed, classical techniques permit us to statistically analyze inputs, responses, and system parameters. However, when the input excitations are non-normal, the system is nonlinear, and/or the property of interest is anything but the simplest, the classical analyses break down. The bootstrap is a technique for the statistical analysis of data that are not necessarily normally distributed. It can be used to statistically analyze any measure of input excitation or response, or any system property, when data are available to make an estimate. It is designed to estimate the standard error, bias, and confidence intervals of parameter estimates. This paper shows how the bootstrap can be applied to the statistical analysis of modal parameters.
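The standard-error and bias estimates mentioned above follow the usual bootstrap recipe: resample the data with replacement, recompute the estimator on each resample, and summarize the spread of the replicates. A minimal sketch with synthetic data (the natural-frequency values and the choice of the median as estimator are illustrative assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical modal test: natural-frequency estimates (Hz) from 15 runs.
fn_estimates = 10.0 + 0.2 * rng.standard_normal(15)

def bootstrap_se_bias(data, statistic, n_boot=2000, rng=rng):
    """Bootstrap standard error and bias of a parameter estimate."""
    theta_hat = statistic(data)
    reps = np.array([statistic(rng.choice(data, size=data.size, replace=True))
                     for _ in range(n_boot)])
    # SE = spread of the replicates; bias = replicate mean minus the estimate.
    return reps.std(ddof=1), reps.mean() - theta_hat

se, bias = bootstrap_se_bias(fn_estimates, np.median)
print(f"median fn: {np.median(fn_estimates):.3f} Hz, "
      f"bootstrap SE: {se:.3f} Hz, bias: {bias:+.4f} Hz")
```

Nothing in the recipe assumes normality, which is the point of applying it to nonlinear systems and non-normal excitations.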
Artificial neural networks (ANNs) have been shown capable of simulating the behavior of complex, nonlinear systems, including structural systems. Under certain circumstances, it is desirable to simulate structures that are analyzed with the finite element method. For example, when we perform a probabilistic analysis with the Monte Carlo method, we usually perform numerous (hundreds or thousands of) repetitions of a response simulation with different input and system parameters to estimate the chance of specific response behaviors. In such applications, efficiency in computation of response is critical, and response simulation with ANNs can be valuable. However, finite element analyses of complex systems involve the use of models with tens or hundreds of thousands of degrees of freedom, and ANNs are practically limited to simulations that involve far fewer variables. This paper develops a technique for reducing the amount of information required to characterize the response of a general structure. We show how the reduced information can be used to train a recurrent ANN. Then the trained ANN can be used to simulate the reduced behavior of the original system, and the reduction transformation can be inverted to provide a simulation of the original system. A numerical example is presented.
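The reduce-train-invert idea can be illustrated with a singular-value-decomposition (snapshot) reduction. This is an assumed stand-in for the paper's reduction transformation, on synthetic data: response histories at many degrees of freedom are projected onto a few dominant spatial patterns, an ANN would be trained in those few reduced coordinates, and the projection is inverted to recover the full-order response.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical snapshot matrix: response histories at 1000 DOFs over 200 steps,
# built from 3 underlying spatial patterns so a low-rank reduction exists.
n_dof, n_steps, n_modes = 1000, 200, 3
patterns = rng.standard_normal((n_dof, n_modes))
histories = rng.standard_normal((n_modes, n_steps))
X = patterns @ histories + 0.01 * rng.standard_normal((n_dof, n_steps))

# Reduction transformation: dominant left singular vectors of the snapshots.
U, s, _ = np.linalg.svd(X, full_matrices=False)
Phi = U[:, :n_modes]            # reduction basis (n_dof x n_modes)

q = Phi.T @ X                   # reduced coordinates an ANN could be trained on
X_rec = Phi @ q                 # invert the reduction to recover full response

rel_err = np.linalg.norm(X - X_rec) / np.linalg.norm(X)
print(f"relative reconstruction error: {rel_err:.4f}")
```

The 1000-DOF histories collapse to 3 coordinates, which is the regime where recurrent ANN simulation becomes practical.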
System behaviors can be accurately simulated using artificial neural networks (ANNs), and one that performs well in simulation of structural response is the radial basis function network. A specific implementation of this is the connectionist normalized linear spline (CNLS) network, investigated in this study. A useful framework for ANN simulation of structural response is the recurrent network. This framework simulates the response of a structure one step at a time. It requires as inputs some measures of the excitation, and the response at previous times. On output, the recurrent ANN yields the response at some time in the future. This framework is practical to implement because every ANN requires training, and this is executed by showing the ANN examples of correct input/output behavior (exemplars) and requiring the ANN to simulate this behavior. In practical applications, hundreds or perhaps thousands of exemplars are required for ANN training. The usual laboratory and non-neural numerical applications to be simulated by ANNs produce these amounts of information. Once the recurrent ANN is trained, it can be provided with excitation information and used to propagate structural response, simulating the response it was trained to approximate. The structural characteristics, parameters in the CNLS network, and degree of training influence the accuracy of approximation. This investigation studies the accuracy of structural response simulation for a single-degree-of-freedom (SDF), nonlinear system excited by random vibration loading. The ANN used to simulate structural response is a recurrent CNLS network. We investigate the error in structural system simulation.
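The recurrent framework above can be sketched end to end: build one-step exemplars (past responses and excitation in, next response out), fit a radial basis network, then feed the network's own predictions back to propagate the response. Everything here is an assumed illustration rather than the paper's CNLS implementation: the SDF Duffing oscillator, its parameters, the explicit time step, and a Gaussian RBF regression with a linear tail standing in for the CNLS network.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical SDF Duffing oscillator, integrated with a small explicit step.
dt, zeta, omega, eps = 0.01, 0.05, 2 * np.pi, 10.0

def step(x, v, f):
    a = f - 2 * zeta * omega * v - omega**2 * x - eps * x**3
    return x + dt * v, v + dt * a

def simulate(f_hist):
    x, v = np.zeros(f_hist.size + 1), 0.0
    for k, f in enumerate(f_hist):
        x[k + 1], v = step(x[k], v, f)
    return x

# Training exemplars: (x[k], x[k-1], f[k-1]) -> x[k+1]. (For this integration
# scheme the two response lags carry the velocity information.)
f_train = rng.standard_normal(2000)
x_train = simulate(f_train)
inp = np.column_stack([x_train[1:-1], x_train[:-2], f_train[:-1]])
out = x_train[2:]

# RBF network: Gaussian features plus a linear tail, fit by ridge regression.
centers = inp[rng.choice(inp.shape[0], 80, replace=False)]
width = np.median(np.linalg.norm(inp[:, None] - centers[None], axis=2))

def features(z):
    r = np.linalg.norm(z[:, None] - centers[None], axis=2)
    return np.hstack([np.exp(-(r / width) ** 2), z, np.ones((z.shape[0], 1))])

Phi = features(inp)
w = np.linalg.solve(Phi.T @ Phi + 1e-6 * np.eye(Phi.shape[1]), Phi.T @ out)

# Recurrent simulation on a new excitation: feed predictions back as inputs.
f_new = rng.standard_normal(500)
x_true = simulate(f_new)
x_sim = np.zeros_like(x_true)
for k in range(1, f_new.size):
    z = np.array([[x_sim[k], x_sim[k - 1], f_new[k - 1]]])
    x_sim[k + 1] = (features(z) @ w).item()

rms_err = np.sqrt(np.mean((x_sim - x_true) ** 2))
print(f"RMS simulation error: {rms_err:.2e}")
```

As the abstract notes, the network parameters (here the center count and width) and the amount of training data govern how well the recurrent rollout tracks the true response.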
This paper proposes a framework for the comprehensive analysis of complex problems in probabilistic structural mechanics. Tools that can be used to accurately estimate the probabilistic behavior of mechanical systems are discussed, and some of the techniques proposed in the paper are developed and used in the solution of a problem in nonlinear structural dynamics.
System dynamicists frequently encounter signals they interpret as realizations of normal random processes. To simulate these analytically and in the laboratory they use methods that yield approximately normal random signals. The traditional digital methods for generating such signals have been developed during the past 25 years. During the same period of time much development has been done in the theory of chaotic processes. The conditions under which chaos occurs have been studied, and several measures of the nature of chaotic processes have been developed. Some of the measures used to characterize the nature of dynamic system motions are common to the study of both random vibrations and chaotic processes. This paper considers chaotic processes and random vibrations. It shows contrasts between the two and situations where they are indistinguishable. The applicability of the Central Limit Theorem to chaotic processes is demonstrated.
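The Central Limit Theorem behavior referred to above is easy to observe numerically: block sums of a fully chaotic, entirely deterministic orbit look increasingly normal. A small sketch using the logistic map at its fully chaotic parameter (the map choice and block sizes are illustrative, not taken from the paper):

```python
import numpy as np

# Fully chaotic logistic map, x_{n+1} = 4 x_n (1 - x_n): deterministic, yet
# standardized block sums of its orbit tend toward a normal distribution.
n_blocks, block = 1000, 500
x = 0.123
orbit = np.empty(n_blocks * block)
for i in range(orbit.size):
    x = 4.0 * x * (1.0 - x)
    orbit[i] = x

sums = orbit.reshape(n_blocks, block).sum(axis=1)
z = (sums - sums.mean()) / sums.std()       # standardized block sums

# For a normal distribution, skewness -> 0 and excess kurtosis -> 0.
skew = np.mean(z ** 3)
excess_kurt = np.mean(z ** 4) - 3.0
print(f"skewness: {skew:.3f}, excess kurtosis: {excess_kurt:.3f}")
```

The individual iterates are far from normal (they follow an arcsine-shaped invariant density on [0, 1]), which makes the near-normality of the block sums the interesting contrast.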
This paper and a companion paper show that the traditional limits on amplitude and frequency that can be generated in a laboratory test on a vibration exciter can be substantially extended. This is accomplished by attaching a device to the shaker that permits controlled metal-to-metal impacts that generate a high acceleration, high frequency environment on a test surface. A companion paper derives some of the mechanical relations for the system. This paper shows that a sinusoidal shaker input can be used to excite deterministic chaotic dynamics of the system, yielding a random vibration environment on the test surface, or a random motion of the shaker can be used to generate a random vibration environment on the test surface. Numerical examples are presented to show the kind of environments that can be generated in this system.
This paper presents some simple concepts for fixtures that can be used in two- and three-axis vibration testing. Two two-axis fixtures were built and tested in the laboratory. Test results are shown, and serve to confirm the validity of the concept. Simple methods for extending the concepts to three-axis testing are discussed.
This paper presents a combined analytical and experimental method for establishing a set of equations to evaluate the equivalent forces acting on a structure. The method requires that a finite element model of the structure be established. It further requires that the acceleration responses to the external forces be measured at a number of points on the structure. The equivalent forces established in the analysis are a representation of the actual forces. The equivalent forces concentrate the effects of the external forces at the degrees of freedom where the acceleration responses are measured.
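The core algebra of an equivalent-force evaluation can be sketched for the steady-state harmonic case: with a model's mass, damping, and stiffness matrices reduced to the measured degrees of freedom, the measured accelerations determine the displacements, and the dynamic stiffness maps those back to forces. This is a hypothetical 2-DOF illustration of the idea, not the paper's method; the matrices, damping model, and excitation are assumed.

```python
import numpy as np

# Hypothetical 2-DOF model (reduced to the measured DOFs), harmonic excitation.
M = np.diag([1.0, 2.0])
K = np.array([[300.0, -100.0], [-100.0, 200.0]])
C = 0.01 * K                                     # stiffness-proportional damping
w = 7.0                                          # excitation frequency (rad/s)
F = np.array([5.0, 0.0])                         # true force amplitudes

# Steady-state displacements, and the accelerations a test would measure.
Z = K - w**2 * M + 1j * w * C                    # dynamic stiffness matrix
X = np.linalg.solve(Z, F)
A = -w**2 * X                                    # measured acceleration amplitudes

# Equivalent forces at the measured DOFs: recover X from A, then apply Z.
F_eq = Z @ (A / (-w**2))
print(np.round(np.abs(F_eq), 6))
```

Because the forces here act exactly at the measured degrees of freedom, the equivalent forces reproduce the true ones; with forces applied elsewhere, the equivalent forces would concentrate their effects at the measured DOFs, as the abstract describes.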