EVENT DETECTION IN MULTI-VARIATE SCIENTIFIC SIMULATIONS USING FEATURE ANOMALY METRICS
Journal of Turbomachinery
In film cooling flows, it is important to know the temperature distribution resulting from the interaction between a hot main flow and a cooler jet. However, current Reynolds-averaged Navier-Stokes (RANS) models yield poor temperature predictions. A novel approach for RANS modeling of the turbulent heat flux is proposed, in which the simple gradient diffusion hypothesis (GDH) is assumed and a machine learning (ML) algorithm is used to infer an improved turbulent diffusivity field. This approach is implemented using three distinct data sets: two are used to train the model and the third is used for validation. The results show that the proposed method produces significant improvement compared to the common RANS closure, especially in the prediction of film cooling effectiveness.
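A minimal sketch of this type of workflow, assuming generic sklearn tooling: local mean-flow features at each grid point are regressed against a high-fidelity turbulent diffusivity, and the learned diffusivity then closes the GDH scalar-flux model. The feature choices, the random forest, and the synthetic data below are illustrative placeholders, not the paper's actual algorithm.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    # Hypothetical training set: rows are grid points from the two training
    # flows, columns are local mean-flow features (illustrative choices).
    rng = np.random.default_rng(0)
    X_train = rng.standard_normal((5000, 4))
    alpha_t_train = np.abs(rng.standard_normal(5000))  # diffusivity extracted from high-fidelity data

    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X_train, alpha_t_train)

    # The inferred diffusivity closes the simple gradient diffusion
    # hypothesis, u_i'T' = -alpha_t * dT/dx_i, on the validation flow.
    X_val = rng.standard_normal((10, 4))
    alpha_t = model.predict(X_val)
    grad_T = rng.standard_normal((10, 3))
    scalar_flux = -alpha_t[:, None] * grad_T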
AIAA Journal
The k-ε turbulence model has been described as perhaps “the most widely used complete turbulence model.” This family of heuristic Reynolds-Averaged Navier-Stokes (RANS) turbulence closures is supported by a suite of model parameters that have been estimated by requiring agreement with well-established canonical flows such as homogeneous shear flow, log-law behavior, etc. While this procedure does yield a set of so-called nominal parameters, it is abundantly clear that they do not provide a universally satisfactory turbulence model that is capable of simulating complex flows. Recent work on the Bayesian calibration of the k-ε model using jet-in-crossflow wind tunnel data has yielded parameter estimates that are far more predictive than nominal parameter values. In this paper, we develop a self-similar asymptotic solution for axisymmetric jet-in-crossflow interactions and derive analytical estimates of the parameters that were inferred using Bayesian calibration. The self-similar method utilizes a near-field approach to estimate the turbulence model parameters while retaining the classical far-field scaling to model flow field quantities. Our parameter values are seen to be far more predictive than the nominal values, as checked using RANS simulations and experimental measurements. They are also closer to the Bayesian estimates than the nominal parameters. A traditional simplified jet trajectory model is explicitly related to the turbulence model parameters and is shown to yield good agreement with measurement when utilizing the analytically derived turbulence model coefficients. Finally, the close agreement between the turbulence model coefficients obtained via Bayesian calibration and the analytically estimated coefficients derived in this paper is consistent with the contention that the Bayesian calibration approach is firmly rooted in the underlying physical description.
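For reference, the coefficients in question enter the standard k-ε model in its textbook form (the calibrated and analytically estimated values themselves are reported in the paper):

    \nu_t = C_\mu \frac{k^2}{\varepsilon}, \qquad
    \frac{Dk}{Dt} = P_k - \varepsilon
      + \frac{\partial}{\partial x_j}\!\left[\left(\nu + \frac{\nu_t}{\sigma_k}\right)\frac{\partial k}{\partial x_j}\right], \qquad
    \frac{D\varepsilon}{Dt} = \frac{\varepsilon}{k}\left(C_{\varepsilon 1} P_k - C_{\varepsilon 2}\,\varepsilon\right)
      + \frac{\partial}{\partial x_j}\!\left[\left(\nu + \frac{\nu_t}{\sigma_\varepsilon}\right)\frac{\partial \varepsilon}{\partial x_j}\right]

with nominal values C_\mu = 0.09, C_{\varepsilon 1} = 1.44, C_{\varepsilon 2} = 1.92, \sigma_k = 1.0, and \sigma_\varepsilon = 1.3.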
Proceedings of the ASME Turbo Expo
Classical RANS turbulence models have known deficiencies when applied to jets in crossflow. Identifying the linear Boussinesq stress-strain hypothesis as a major contributor to erroneous predictions, we consider and contrast two machine learning frameworks for turbulence model development. Gene Expression Programming, an evolutionary algorithm that employs a survival-of-the-fittest analogy, and a Deep Neural Network, based on neurological processing, each add non-linear terms to the stress-strain relationship. The results are Explicit Algebraic Stress Model-like closures. High-fidelity data from an inline jet-in-crossflow study are used to regress new closures. These models are then tested on a skewed jet to ascertain their predictive efficacy. For both methodologies, a vast improvement over the linear relationship is observed.
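Both learned closures can be read as instances of Pope's general effective-viscosity expansion, in which the anisotropy tensor is a linear combination of an integrity basis of tensors with coefficients that depend on scalar invariants:

    b_{ij} = \sum_{n} g_n(\lambda_1, \ldots, \lambda_5)\, T^{(n)}_{ij}, \qquad
    T^{(1)} = S, \quad T^{(2)} = SR - RS, \quad T^{(3)} = S^2 - \tfrac{1}{3}\,\mathrm{tr}(S^2)\, I, \;\ldots

where S and R are the normalized mean strain-rate and rotation-rate tensors. The two methods differ primarily in how they represent the coefficient functions g_n.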
AIAA SciTech Forum - 55th AIAA Aerospace Sciences Meeting
In many aerospace applications, it is critical to be able to model fluid-structure interactions. In particular, correctly predicting the power spectral density of pressure fluctuations at surfaces can be important for assessing potential resonances and failure modes. Current turbulence modeling methods, such as wall-modeled Large Eddy Simulation and Detached Eddy Simulation, cannot reliably predict these pressure fluctuations for many applications of interest. The focus of this paper is on efforts to use data-driven machine learning methods to learn correction terms for the wall pressure fluctuation spectrum. In particular, the non-locality of the wall pressure fluctuations in a compressible boundary layer is investigated using random forests and neural networks trained and evaluated on Direct Numerical Simulation data.
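As a rough sketch of the learning setup, assuming (purely for illustration) that quantities sampled along a wall-normal profile are stacked into a feature vector so the regressor can exploit non-local information; the features, target, and data below are placeholders, not the paper's actual construction:

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    # Illustrative non-local features: for each wall point, stack flow
    # quantities at several wall-normal sampling heights (placeholder data).
    rng = np.random.default_rng(1)
    n_points, n_heights, n_vars = 2000, 8, 3
    profiles = rng.standard_normal((n_points, n_heights, n_vars))
    X = profiles.reshape(n_points, n_heights * n_vars)

    # Synthetic stand-in target: a correction to a baseline wall pressure
    # spectrum model at one frequency band, as would be extracted from DNS.
    y = rng.standard_normal(n_points)

    rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
    correction = rf.predict(X[:5])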
Journal of Fluid Mechanics
There exists significant demand for improved Reynolds-Averaged Navier-Stokes (RANS) turbulence models that are informed by and can represent a richer set of turbulence physics. This paper presents a method of using deep neural networks to learn a model for the Reynolds stress anisotropy tensor from high-fidelity simulation data. A novel neural network architecture is proposed which uses a multiplicative layer with an invariant tensor basis to embed Galilean invariance into the predicted anisotropy tensor. It is demonstrated that this neural network architecture provides improved prediction accuracy compared with a generic neural network architecture that does not embed this invariance property. The Reynolds stress anisotropy predictions of this invariant neural network are propagated through to the velocity field for two test cases. For both test cases, significant improvement versus baseline RANS linear eddy viscosity and nonlinear eddy viscosity models is demonstrated.
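A minimal numpy sketch of the multiplicative output layer described above, assuming the basis is a subset of Pope's integrity basis; the layer sizes, invariants, and random weights are illustrative, not the paper's trained network.

    import numpy as np

    def tensor_basis(S, R):
        # First four of Pope's basis tensors, built from the normalized
        # strain-rate tensor S and rotation-rate tensor R (both 3x3).
        I = np.eye(3)
        T1 = S
        T2 = S @ R - R @ S
        T3 = S @ S - np.trace(S @ S) / 3.0 * I
        T4 = R @ R - np.trace(R @ R) / 3.0 * I
        return np.stack([T1, T2, T3, T4])            # shape (4, 3, 3)

    def invariants(S, R):
        # Scalar invariants fed to the network (an illustrative subset).
        return np.array([np.trace(S @ S), np.trace(R @ R),
                         np.trace(S @ S @ S), np.trace(R @ R @ S)])

    def tbnn_forward(S, R, W1, W2):
        # A tiny MLP maps invariants to basis coefficients g_n; the final
        # multiplicative layer forms b = sum_n g_n T^(n), which is
        # Galilean invariant by construction.
        g = W2 @ np.tanh(W1 @ invariants(S, R))
        return np.einsum('n,nij->ij', g, tensor_basis(S, R))

    rng = np.random.default_rng(0)
    A = rng.standard_normal((3, 3))
    S = 0.5 * (A + A.T) - np.trace(A) / 3.0 * np.eye(3)   # symmetric, traceless
    R = 0.5 * (A - A.T)                                    # antisymmetric
    W1, W2 = rng.standard_normal((8, 4)), rng.standard_normal((4, 8))
    b = tbnn_forward(S, R, W1, W2)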
Journal of Computational Physics
In many scientific fields, empirical models are employed to facilitate computational simulations of engineering systems. For example, in fluid mechanics, empirical Reynolds stress closures enable computationally efficient Reynolds-Averaged Navier-Stokes simulations. Likewise, in solid mechanics, constitutive relations between the stress and strain in a material are required in deformation analysis. Traditional methods for developing and tuning empirical models usually combine physical intuition with simple regression techniques on limited data sets. The rise of high performance computing has led to a growing availability of high-fidelity simulation data. These data open up the possibility of using machine learning algorithms, such as random forests or neural networks, to develop more accurate and general empirical models. A key question when using data-driven algorithms to develop these empirical models is how domain knowledge should be incorporated into the machine learning process. This paper specifically addresses physical systems that possess symmetry or invariance properties. Two different methods for teaching a machine learning model an invariance property are compared. In the first method, a basis of invariant inputs is constructed, and the machine learning model is trained on this basis, thereby embedding the invariance into the model. In the second method, the algorithm is trained on multiple transformations of the raw input data until the model learns invariance to that transformation. Results are discussed for two case studies: one in turbulence modeling and one in crystal elasticity. It is shown that in both cases embedding the invariance property into the input features yields higher performance at significantly reduced computational training costs.
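The two strategies can be sketched for a toy 3x3 tensor input (the task and names below are placeholders):

    import numpy as np

    def random_rotation(rng):
        # Random rotation matrix via QR decomposition of a Gaussian matrix.
        Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
        return Q * np.sign(np.linalg.det(Q))

    rng = np.random.default_rng(0)
    A = rng.standard_normal((3, 3))   # raw tensor input at one training point

    # Method 1: embed the invariance by training on an invariant input
    # basis, e.g. rotation-invariant traces of powers of the tensor.
    invariant_features = np.array([np.trace(A),
                                   np.trace(A @ A),
                                   np.trace(A @ A @ A)])

    # Method 2: teach the invariance by augmenting the raw input with many
    # rotated copies Q A Q^T that all carry the same training label.
    augmented_inputs = [Q @ A @ Q.T
                        for Q in (random_rotation(rng) for _ in range(100))]

The reported cost difference follows directly: the first method trains on one sample per point, while the second multiplies the training set size by the number of transformations.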
Proceedings - 2015 IEEE 14th International Conference on Machine Learning and Applications, ICMLA 2015
The question of how to accurately model turbulent flows is one of the most long-standing open problems in physics. Advances in high performance computing have enabled direct numerical simulations of increasingly complex flows. Nevertheless, for most flows of engineering relevance, the computational cost of these direct simulations is prohibitive, necessitating empirical model closures for the turbulent transport. These empirical models are prone to "model form uncertainty" when their underlying assumptions are violated. Understanding, quantifying, and mitigating this model form uncertainty has become a critical challenge in the turbulence modeling community. This paper will discuss strategies for using machine learning to understand the root causes of the model form error and to develop model corrections to mitigate this error. Rule extraction techniques are used to derive simple rules for when a critical model assumption is violated. The physical intuition gained from these simple rules is then used to construct a linear correction term for the turbulence model which shows improvement over naive linear fits.
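A small sketch of the rule extraction step, assuming sklearn's decision tree tooling; the features and the synthetic "assumption violated" labels are placeholders:

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Fit a shallow tree to predict where a model assumption breaks down,
    # then read off human-interpretable threshold rules from the tree.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((1000, 3))
    y = (X[:, 0] + 0.5 * X[:, 1] > 1.0).astype(int)   # stand-in violation label

    tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
    print(export_text(tree, feature_names=["strain", "rotation", "wall_dist"]))

Rules of this form ("if strain > a and rotation > b, the assumption is violated") supply the physical intuition used to construct the correction term.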
The objective of this work is to investigate the efficacy of using calibration strategies from Uncertainty Quantification (UQ) to determine model coefficients for large eddy simulation (LES). As the target methods are for engineering LES, uncertainty from numerical aspects of the model must also be quantified. The ultimate goal of this research thread is to generate a cost-versus-accuracy curve for LES such that the cost could be minimized given an accuracy prescribed by an engineering need. Realization of this goal would enable LES to serve as a predictive simulation tool within the engineering design process.
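As a toy illustration of the calibration machinery (not the LES workflow itself), a random-walk Metropolis sampler can calibrate a single model coefficient against observed data; model(), the data, and the prior below are placeholders:

    import numpy as np

    rng = np.random.default_rng(0)

    def model(C, x):
        return C * x   # stand-in for an LES quantity of interest

    x = np.linspace(0.0, 1.0, 20)
    data = model(0.17, x) + 0.01 * rng.standard_normal(x.size)  # synthetic observations
    sigma = 0.01                                                # observation noise

    def log_post(C):
        if not 0.0 < C < 1.0:              # uniform prior on (0, 1)
            return -np.inf
        resid = data - model(C, x)
        return -0.5 * np.sum((resid / sigma) ** 2)

    C, chain = 0.5, []
    for _ in range(5000):
        C_new = C + 0.02 * rng.standard_normal()
        if np.log(rng.uniform()) < log_post(C_new) - log_post(C):
            C = C_new
        chain.append(C)

    posterior = np.array(chain[1000:])     # discard burn-in
    print(posterior.mean(), posterior.std())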
Physics of Fluids
Reynolds-Averaged Navier-Stokes (RANS) models are widely used in industry to predict fluid flows, despite their acknowledged deficiencies. Not only do RANS models often produce inaccurate flow predictions, but there are very limited diagnostics available to assess RANS accuracy for a given flow configuration. If experimental or higher fidelity simulation results are not available for RANS validation, there is no reliable method to evaluate RANS accuracy. This paper explores the potential of utilizing machine learning algorithms to identify regions of high RANS uncertainty. Three different machine learning algorithms were evaluated: support vector machines, AdaBoost decision trees, and random forests. The algorithms were trained on a database of canonical flow configurations for which validated direct numerical simulation or large eddy simulation results were available, and were used to classify RANS results on a point-by-point basis as having either high or low uncertainty, based on the breakdown of specific RANS modeling assumptions. Classifiers were developed for three different basic RANS eddy viscosity model assumptions: the isotropy of the eddy viscosity, the linearity of the Boussinesq hypothesis, and the non-negativity of the eddy viscosity. It is shown that these classifiers are able to generalize to flows substantially different from those on which they were trained. Feature selection techniques, model evaluation, and extrapolation detection are discussed in the context of turbulence modeling applications.
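A compact sketch of the point-wise classification task, with placeholder features and synthetic labels standing in for the marker of a violated assumption:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Local RANS features in, a binary high/low-uncertainty marker out.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((10000, 5))                 # illustrative features
    y = (rng.uniform(size=10000) < 0.2).astype(int)     # 1 = assumption violated

    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    p_violation = clf.predict_proba(X[:100])[:, 1]      # per-point probability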
Proceedings of the ASME Turbo Expo
Algebraic closures for the turbulent scalar fluxes were evaluated for a discrete hole film cooling geometry using the results from the high-fidelity Large Eddy Simulation (LES) of Bodart et al. [1]. Several models for the turbulent scalar fluxes exist, including the widely used Gradient Diffusion Hypothesis, the Generalized Gradient Diffusion Hypothesis [2], and the Higher Order Generalized Gradient Diffusion Hypothesis [3]. By analyzing the results from the LES, it was possible to isolate the error due to these turbulent mixing models. Distributions of the turbulent diffusivity, turbulent viscosity, and turbulent Prandtl number were extracted from the LES results. It was shown that the turbulent Prandtl number varies significantly spatially, undermining the applicability of the Reynolds analogy for this flow. The LES velocity field and Reynolds stresses were fed into a RANS solver to calculate the fluid temperature distribution. This analysis revealed in which regions of the flow various modeling assumptions were invalid and what effect those assumptions had on the predicted temperature distribution.
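For reference, the first two cited closures take the following standard forms:

    \text{GDH:} \quad \overline{u_i' \theta'} = -\frac{\nu_t}{Pr_t} \frac{\partial \bar{T}}{\partial x_i}, \qquad
    \text{GGDH:} \quad \overline{u_i' \theta'} = -C_\theta \frac{k}{\varepsilon}\, \overline{u_i' u_j'}\, \frac{\partial \bar{T}}{\partial x_j}

The GDH inherits the eddy viscosity through a constant turbulent Prandtl number, while the GGDH replaces the scalar diffusivity with a tensor built from the Reynolds stresses; the higher-order variant additionally involves products of Reynolds stress components. The observed spatial variation of Pr_t is precisely what the GDH's Reynolds-analogy assumption cannot capture.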