Publications

Results 26–50 of 197

Rigorous Data Fusion for Computationally Expensive Simulations

Winovich, Nickolas W.; Rushdi, Ahmad R.; Phipps, Eric T.; Ray, Jaideep R.; Lin, Guang L.; Ebeida, Mohamed S.

This manuscript comprises the final report for the 1-year, FY19 LDRD project "Rigorous Data Fusion for Computationally Expensive Simulations," wherein an alternative approach to Bayesian calibration was developed, based on a new sampling technique called VoroSpokes. VoroSpokes is a novel quadrature and sampling framework, developed within this project, defined with respect to Voronoi tessellations of bounded domains in R^d. In this work, we first establish local quadrature and sampling results on convex polytopes using randomly directed rays, or spokes, to approximate the quantities of interest for a specified target function. A theoretical justification for both procedures is provided, along with empirical results demonstrating the unbiased convergence of the resulting estimates/samples. The local quadrature and sampling procedures are then extended to global procedures defined on more general domains by applying the local results to the cells of a Voronoi tessellation covering the domain under consideration. We then demonstrate how the proposed global sampling procedure can be used to define a natural framework for adaptively constructing Voronoi Piecewise Surrogate (VPS) approximations based on local error estimates. Finally, we show that the adaptive VPS procedure can be used to form a surrogate-model approximation to a specified, potentially unnormalized, density function, and that the global sampling procedure can be used to efficiently draw independent samples from the surrogate density in parallel. The performance of the resulting VoroSpokes sampling framework is assessed on a collection of Bayesian inference problems and is shown to provide highly accurate posterior predictions that align with the results obtained using traditional methods such as Gibbs sampling and random-walk Markov chain Monte Carlo (MCMC). Importantly, the proposed framework provides a foundation for performing Bayesian inference tasks that is entirely independent of the theory of Markov chains.
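As a rough illustration of the spoke idea described in this abstract, the following is a minimal 2-D sketch of an unbiased ray ("spoke") quadrature rule on a convex polygon. It is our own toy construction, not the project's VoroSpokes code; the function name, the half-plane representation of the polygon, and all defaults are assumptions for illustration.

```python
import numpy as np

def spoke_quadrature(f, center, halfplanes, n_spokes=4000, n_radial=32, seed=0):
    """Estimate the integral of f over a convex polygon {x : a.x <= b}
    using randomly directed rays ("spokes") from an interior point.
    A 2-D toy version of the idea, not the VoroSpokes implementation."""
    rng = np.random.default_rng(seed)
    center = np.asarray(center, dtype=float)
    total = 0.0
    for _ in range(n_spokes):
        theta = rng.uniform(0.0, 2.0 * np.pi)
        u = np.array([np.cos(theta), np.sin(theta)])
        # distance from center to the polygon boundary along direction u
        R = min((b - a @ center) / (a @ u) for a, b in halfplanes if a @ u > 0)
        # midpoint rule for the 1-D radial integral of f(c + r u) * r
        r = (np.arange(n_radial) + 0.5) * (R / n_radial)
        vals = np.array([f(center + ri * u) for ri in r]) * r
        total += vals.sum() * (R / n_radial)
    # average over directions times the measure (2*pi) of the unit circle
    return 2.0 * np.pi * total / n_spokes
```

Applied to each cell of a Voronoi tessellation (with the cell seed as the interior point), this local estimator extends naturally to a global one, which is the direction the report takes.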

Estimation of inflow uncertainties in laminar hypersonic double-cone experiments

AIAA Scitech Forum

Ray, Jaideep R.; Kieweg, Sarah K.; Dinzl, Derek J.; Carnes, Brian C.; Weirs, Vincent G.; Freno, Brian A.; Howard, Micah A.; Smith, Thomas M.

We propose herein a probabilistic framework for assessing the consistency of an experimental dataset, i.e., whether the stated experimental conditions are consistent with the measurements provided. If the dataset is inconsistent, our framework allows one to hypothesize and test sources of the inconsistency. This is crucial in model validation efforts. The framework relies on Bayesian inference to estimate experimental settings deemed uncertain from measurements deemed accurate. The quality of the inferred variables is gauged by their ability to reproduce held-out experimental measurements. We test the correctness of the framework on three double-cone experiments conducted in CUBRC Inc.'s LENS-I shock tunnel, which have also been numerically simulated successfully. Thereafter, we use the framework to investigate two double-cone experiments (executed in the LENS-XX shock tunnel) which have encountered difficulties when used in model validation exercises. We detect an inconsistency with one of the LENS-XX experiments. In addition, we hypothesize two causes for our inability to simulate LENS-XX experiments accurately and test them using our framework. We find that there is no single cause that explains all the discrepancies between model predictions and experimental data; rather, different causes explain different discrepancies, to a larger or smaller extent. We end by proposing that uncertainty quantification methods be used more widely to understand experiments and characterize facilities, and we cite three different methods to do so, the third of which we present in this paper.
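The infer-then-predict-held-out loop described in this abstract can be sketched in a few lines. The following toy uses a grid posterior over one uncertain experimental setting and flags held-out measurements via a posterior-predictive z-score; the function name, the Gaussian noise model, and the grid prior are our assumptions, not the paper's method in detail.

```python
import numpy as np

def predictive_zscores(forward, obs_fit, obs_holdout, theta_grid, sigma):
    """Grid-based Bayesian consistency check (illustrative sketch).
    forward(theta) returns predictions for all measurements, fit first,
    held-out last; sigma is the assumed Gaussian measurement noise."""
    n_fit = len(obs_fit)
    # log-posterior over the uncertain setting theta (flat prior on the grid)
    log_post = np.array([
        -0.5 * np.sum((forward(t)[:n_fit] - obs_fit) ** 2) / sigma ** 2
        for t in theta_grid
    ])
    w = np.exp(log_post - log_post.max())
    w /= w.sum()
    # posterior-predictive mean/variance for the held-out measurements
    preds = np.array([forward(t)[n_fit:] for t in theta_grid])
    mean = w @ preds
    var = w @ (preds - mean) ** 2 + sigma ** 2
    # |z| well beyond ~2 flags the held-out data as inconsistent
    return (obs_holdout - mean) / np.sqrt(var)
```

In the paper's setting the "forward model" is a hypersonic flow simulation and the inference uses MCMC rather than a grid, but the consistency logic is the same: inferred settings that cannot reproduce held-out measurements signal a problem with the dataset or a hypothesized cause.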

Conditioning multi-model ensembles for disease forecasting

Ray, Jaideep R.; Cauthen, Katherine R.; Lefantzi, Sophia L.; Burks, Lynne B.

In this study we investigate how an ensemble of disease models can be conditioned on observational data, in a bid to improve its predictive skill. We use the ensemble of influenza forecasting models gathered by the US Centers for Disease Control and Prevention (CDC) as the exemplar. This ensemble is used every year to forecast the annual influenza outbreak in the United States. The models constituting this ensemble draw on very different modeling assumptions and approximations and are a diverse collection of methods to approximate epidemiological dynamics. Currently, each model's predictions are accorded the same importance, or weight, when compiling the ensemble's forecast. We consider this equally-weighted ensemble as the baseline case which has to be improved upon. In this study, we explore whether an ensemble forecast can be improved by "conditioning" the ensemble on whatever observational data is available from the ongoing outbreak. "Conditioning" can imply according the ensemble's members different weights which evolve over time, or simply performing the forecast using the top k (equally-weighted) models. In the latter case, the composition of the "top-k" set of models evolves over time. This is called "model averaging" in statistics. We explore four methods to perform model averaging, three of which are new. We find that the CDC ensemble responds best to the "top-k-models" approach to model averaging. All the new model-averaging (MA) methods perform better than the baseline equally-weighted ensemble. The four model-averaging methods treat the models as black boxes and simply use their forecasts as inputs, i.e., one does not need access to the models at all, but rather only to their forecasts. The model-averaging approaches reviewed in this report thus form a general framework for averaging any model ensemble.
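The "top-k" conditioning favored in this study is easy to state concretely: score each model on the outbreak observed so far, keep the k best, and average their current forecasts with equal weights. The sketch below is our own minimal rendering of that idea (the function name, mean-squared-error scoring, and array layout are assumptions), not the report's code.

```python
import numpy as np

def top_k_ensemble(past_forecasts, past_obs, current_forecasts, k=3):
    """Condition an equally-weighted ensemble on the ongoing outbreak:
    past_forecasts is (n_models, n_past), past_obs is (n_past,), and
    current_forecasts is (n_models,). Models are treated as black boxes;
    only their forecasts are used."""
    mse = np.mean((past_forecasts - past_obs) ** 2, axis=1)
    top = np.argsort(mse)[:k]                     # k best-scoring models
    return current_forecasts[top].mean(), top
```

Because the selection is re-run as new observations arrive, the composition of the top-k set evolves over the course of the outbreak, exactly as described above.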

Robust Bayesian calibration of a k-ϵ model for compressible jet-in-crossflow simulations

AIAA Journal

Ray, Jaideep R.; DeChant, Lawrence J.; Lefantzi, Sophia L.; Ling, Julia; Arunajatesan, Srinivasan A.

Compressible jet-in-crossflow interactions are difficult to simulate accurately using Reynolds-averaged Navier-Stokes (RANS) models. This could be due to simplifications inherent in RANS or the use of inappropriate RANS constants estimated by fitting to experiments of simple or canonical flows. Our previous work on Bayesian calibration of a k-ϵ model to experimental data had led to a weak hypothesis that inaccurate simulations could be due to inappropriate constants more than to model-form inadequacies of RANS. In this work, Bayesian calibration of k-ϵ constants to a set of experiments that span a range of Mach numbers and jet strengths has been performed. The variation of the calibrated constants has been checked to assess the degree to which parametric estimates compensate for RANS's model-form errors. An analytical model of jet-in-crossflow interactions has also been developed, and estimates of k-ϵ constants that are free of any conflation of parametric and RANS's model-form uncertainties have been obtained. It has been found that the analytical k-ϵ constants provide mean-flow predictions similar to those provided by the calibrated constants. Further, both provide predictions far closer to experimental measurements than those computed using "nominal" values of these constants simply obtained from the literature. It can be concluded that the lack of predictive skill of RANS jet-in-crossflow simulations is mostly due to parametric inadequacies, and our analytical estimates may provide a simple way of obtaining predictive compressible jet-in-crossflow simulations.

Biologically inspired approaches for biosurveillance anomaly detection and data fusion

Finley, Patrick D.; Levin, Drew L.; Flanagan, Tatiana P.; Beyeler, Walter E.; Mitchell, Michael D.; Ray, Jaideep R.; Moses, Melanie M.; Forrest, Stephanie F.

This study developed and tested biologically inspired computational methods to detect anomalous signals in data streams that could indicate a pending outbreak or bio-weapon attack. Current large-scale biosurveillance systems are plagued by two principal deficiencies: (1) timely detection of disease-indicating signals in noisy data and (2) anomaly detection across multiple channels. Anomaly detectors and data fusion components modeled after human immune system processes were tested against a variety of natural and synthetic surveillance datasets. A pilot-scale immune-system-based biosurveillance system performed at least as well as traditional statistical anomaly detection and data fusion approaches. Machine learning approaches leveraging deep-learning recurrent neural networks were developed and applied to challenging unstructured and multimodal health surveillance data. Within the limits imposed by data availability, both immune-system and deep-learning methods were found to improve anomaly detection and data fusion performance for particularly challenging data subsets.

ACKNOWLEDGEMENTS: The authors acknowledge the close collaboration of Scott Lee, Jason Thomas, and Chad Heilig from the US Centers for Disease Control (CDC) in this effort. De-identified biosurveillance data provided by Ken Jeter of the New Mexico Department of Health proved to be an important contribution to our work. Discussions with members of the International Society of Disease Surveillance helped the researchers focus on questions relevant to practicing public health professionals. Funding for this work was provided by Sandia National Laboratories' Laboratory Directed Research and Development program.

Final Documentation: Incident Management And Probabilities Courses of action Tool (IMPACT)

Edwards, Donna M.; Ray, Jaideep R.; Tucker, Mark D.; Whetzel, Jonathan H.; Cauthen, Katherine R.

This report pulls together the documentation produced for the IMPACT tool, a software-based decision support tool that provides situational awareness, incident characterization, and guidance on public health and environmental response strategies for an unfolding bio-terrorism incident.

Learning an eddy viscosity model using shrinkage and Bayesian calibration: A jet-in-crossflow case study

ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems, Part B: Mechanical Engineering

Ray, Jaideep R.; Lefantzi, Sophia L.; Arunajatesan, Srinivasan A.; DeChant, Lawrence J.

We demonstrate a statistical procedure for learning a high-order eddy viscosity model (EVM) from experimental data and using it to improve the predictive skill of a Reynolds-averaged Navier-Stokes (RANS) simulator. The method is tested in a three-dimensional (3D), transonic jet-in-crossflow (JIC) configuration. The process starts with a cubic eddy viscosity model (CEVM) developed for incompressible flows. It is fitted to limited experimental JIC data using shrinkage regression. The shrinkage process removes all the terms from the model except an intercept, a linear term, and a quadratic one involving the square of the vorticity. The shrunk eddy viscosity model is implemented in a RANS simulator and calibrated, using vorticity measurements, to infer three parameters. The calibration is Bayesian and is solved using a Markov chain Monte Carlo (MCMC) method. A 3D probability density distribution for the inferred parameters is constructed, thus quantifying the uncertainty in the estimate. The prohibitive cost of using a 3D flow simulator inside an MCMC loop is mitigated by using surrogate models ("curve-fits"). A support vector machine classifier (SVMC) is used to impose our prior belief regarding parameter values, specifically to exclude nonphysical parameter combinations. The calibrated model is compared, in terms of its predictive skill, to simulations using uncalibrated linear and cubic EVMs. We find that the calibrated model, with one quadratic term, is more accurate than the uncalibrated simulator. The model is also checked at a flow condition at which it was not calibrated.
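For readers unfamiliar with the MCMC step in such calibrations, a generic random-walk Metropolis sampler looks like the following. This is a textbook sketch under our own naming and defaults, not the paper's code; in the paper the log-posterior would be evaluated through the surrogate models rather than the RANS simulator itself.

```python
import numpy as np

def random_walk_metropolis(log_post, x0, n_steps=20000, step=0.5, seed=0):
    """Generic random-walk MCMC: propose a Gaussian jump, accept with
    probability min(1, post(prop)/post(x)). Returns the full chain."""
    rng = np.random.default_rng(seed)
    x = np.atleast_1d(np.asarray(x0, dtype=float)).copy()
    lp = log_post(x)
    chain = np.empty((n_steps, x.size))
    for i in range(n_steps):
        prop = x + step * rng.standard_normal(x.size)
        lp_prop = log_post(prop)
        # Metropolis acceptance test in log space
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = prop.copy(), lp_prop
        chain[i] = x
    return chain
```

The chain, after discarding a burn-in prefix, is a sample from the posterior; histogramming it gives the kind of 3D probability density over the inferred parameters described above.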

Bayesian inversion of seismic and electromagnetic data for marine gas reservoir characterization using multi-chain Markov chain Monte Carlo sampling

Journal of Applied Geophysics

Ren, Huiying; Ray, Jaideep R.; Hou, Zhangshuan; Huang, Maoyi; Bao, Jie; Swiler, Laura P.

In this study we developed an efficient Bayesian inversion framework for interpreting marine seismic Amplitude Versus Angle (AVA) and Controlled-Source Electromagnetic (CSEM) data for marine reservoir characterization. The framework uses a multi-chain Markov chain Monte Carlo (MCMC) sampler, which is a hybrid of the DiffeRential Evolution Adaptive Metropolis and Adaptive Metropolis samplers. The inversion framework is tested by estimating reservoir-fluid saturations and porosity from marine seismic and CSEM data. The multi-chain MCMC sampler is scalable in terms of the number of chains, and is useful for computationally demanding Bayesian model calibration in scientific and engineering problems. As a demonstration, the approach is used to efficiently and accurately estimate the porosity and saturations in a representative layered synthetic reservoir. The results indicate that joint inversion of the seismic AVA and CSEM data provides better estimation of reservoir saturations than inversion of the seismic AVA data alone, especially for the parameters in deep layers. The performance of the inversion approach was evaluated for various levels of noise in the observational data; reasonable estimates can be obtained with noise levels up to 25%. Sampling efficiency due to the use of multiple chains was also checked and was found to have almost linear scalability.
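The core multi-chain idea can be illustrated with a plain Differential-Evolution MCMC sweep: each chain proposes a jump along the difference of two other randomly chosen chains, so the proposal scale adapts automatically to the posterior's shape. This is a simplified ter Braak-style relative of the DREAM/Adaptive Metropolis hybrid used in the paper; the function name and all defaults are our own.

```python
import numpy as np

def de_mc(log_post, chains, n_sweeps=1500, gamma=None, eps=1e-6, seed=0):
    """One DE-MC run: chains is (n_chains, dim). Each sweep updates every
    chain using the difference of two other chains as the proposal step.
    Returns the (n_sweeps, n_chains, dim) history of chain states."""
    rng = np.random.default_rng(seed)
    chains = np.array(chains, dtype=float)
    n, d = chains.shape
    g = 2.38 / np.sqrt(2 * d) if gamma is None else gamma  # standard scaling
    lps = np.array([log_post(c) for c in chains])
    history = []
    for _ in range(n_sweeps):
        for i in range(n):
            others = [m for m in range(n) if m != i]
            j, k = rng.choice(others, size=2, replace=False)
            prop = chains[i] + g * (chains[j] - chains[k]) \
                   + eps * rng.standard_normal(d)
            lp = log_post(prop)
            if np.log(rng.uniform()) < lp - lps[i]:   # Metropolis test
                chains[i], lps[i] = prop, lp
        history.append(chains.copy())
    return np.array(history)
```

Because each chain's update is an independent likelihood evaluation, the chains can be propagated in parallel, which is the source of the near-linear scalability reported above.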
