Publications


Train Like a (Var)Pro: Efficient Training of Neural Networks with Variable Projection

SIAM Journal on Mathematics of Data Science

Newman, Elizabeth N.; Ruthotto, Lars R.; Hart, Joseph L.; van Bloemen Waanders, Bart G.

Deep neural networks (DNNs) have achieved state-of-the-art performance across a variety of traditional machine learning tasks, e.g., speech recognition, image classification, and segmentation. The ability of DNNs to efficiently approximate high-dimensional functions has also motivated their use in scientific applications, e.g., to solve partial differential equations and to generate surrogate models. In this paper, we consider the supervised training of DNNs, which arises in many of the above applications. We focus on the central problem of optimizing the weights of the given DNN such that it accurately approximates the relation between observed input and target data. Devising effective solvers for this optimization problem is notoriously challenging due to the large number of weights, nonconvexity, data sparsity, and nontrivial choice of hyperparameters. To solve the optimization problem more efficiently, we propose the use of variable projection (VarPro), a method originally designed for separable nonlinear least-squares problems. Our main contribution is the Gauss--Newton VarPro method (GNvpro) that extends the reach of the VarPro idea to nonquadratic objective functions, most notably cross-entropy loss functions arising in classification. These extensions make GNvpro applicable to all training problems that involve a DNN whose last layer is an affine mapping, which is common in many state-of-the-art architectures. In our four numerical experiments from surrogate modeling, segmentation, and classification, GNvpro solves the optimization problem more efficiently than commonly used stochastic gradient descent (SGD) schemes. Finally, GNvpro finds solutions that generalize well, and in all but one example better than well-tuned SGD methods, to unseen data points.
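
The variable-projection idea itself can be illustrated on a toy separable least-squares problem (this is a minimal sketch of classical VarPro, not of the paper's GNvpro method; the exponential model and the grid search over the nonlinear rates are invented for illustration):

```python
import numpy as np

# Toy separable model: y = w1*exp(-t1*x) + w2*exp(-t2*x).
# The weights w enter linearly (like a DNN's affine last layer);
# the rates t are the nonlinear parameters.
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * np.exp(-1.0 * x) + 0.5 * np.exp(-3.0 * x)

def reduced_objective(t):
    """Project out the linear weights, then evaluate the residual."""
    Phi = np.exp(-np.outer(x, t))                # basis matrix Phi(t)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # inner linear solve
    r = y - Phi @ w
    return 0.5 * (r @ r), w

# Crude grid search over the two nonlinear rates only; VarPro's point is
# that the search space shrinks to the nonlinear parameters.
best = (np.inf, None, None)
for t1 in np.linspace(0.5, 2.0, 16):
    for t2 in np.linspace(2.0, 4.0, 21):
        fval, w = reduced_objective(np.array([t1, t2]))
        if fval < best[0]:
            best = (fval, w, (t1, t2))
print(best[2])  # rates recovered near (1.0, 3.0), weights near (2.0, 0.5)
```

The linear weights never appear as optimization variables; they are recomputed by a cheap least-squares solve inside each objective evaluation.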

Randomized algorithms for generalized singular value decomposition with application to sensitivity analysis

Numerical Linear Algebra with Applications

Saibaba, Arvind K.; Hart, Joseph L.; van Bloemen Waanders, Bart G.

The generalized singular value decomposition (GSVD) is a valuable tool that has many applications in computational science. However, computing the GSVD for large-scale problems is challenging. Motivated by applications in hyper-differential sensitivity analysis (HDSA), we propose new randomized algorithms for computing the GSVD which use randomized subspace iteration and weighted QR factorization. Detailed error analysis is given which provides insight into the accuracy of the algorithms and the choice of the algorithmic parameters. We demonstrate the performance of our algorithms on test matrices and a large-scale model problem where HDSA is used to study subsurface flow.
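
For a flavor of the randomized-subspace-iteration ingredient, here is a sketch of a plain randomized SVD in the style of Halko et al. (the paper's algorithms target the *generalized* decomposition with weighted QR factorizations, which this deliberately omits; matrix sizes, ranks, and the test spectrum are made up):

```python
import numpy as np

rng = np.random.default_rng(1)

def randomized_svd(A, rank, n_oversample=5, n_iter=2):
    """Randomized subspace iteration: sketch the range, then solve small."""
    m, n = A.shape
    Omega = rng.standard_normal((n, rank + n_oversample))  # random test matrix
    Q, _ = np.linalg.qr(A @ Omega)
    for _ in range(n_iter):               # power iterations sharpen the subspace
        Q, _ = np.linalg.qr(A.T @ Q)
        Q, _ = np.linalg.qr(A @ Q)
    B = Q.T @ A                           # small projected matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ Ub[:, :rank], s[:rank], Vt[:rank]

# Synthetic test matrix with a rapidly decaying spectrum.
U, _ = np.linalg.qr(rng.standard_normal((200, 20)))
V, _ = np.linalg.qr(rng.standard_normal((100, 20)))
s_true = 2.0 ** -np.arange(20)
A = U * s_true @ V.T
_, s_est, _ = randomized_svd(A, rank=10)
print(np.max(np.abs(s_est - s_true[:10])))  # small: leading spectrum captured
```

All large-matrix work reduces to a handful of matrix-matrix products, which is what makes the randomized approach attractive at scale.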

Parallel Solver Framework for Mixed-Integer PDE-Constrained Optimization

Phillips, Cynthia A.; Chatter, Michelle A.; Eckstein, Jonathan E.; Erturk, Alper E.; El-Kady, I.; Gerbe, Romain G.; Kouri, Drew P.; Loughlin, William L.; Reinke, Charles M.; Rokkam, Rohith R.; Ruzzene, Massimo R.; Sugino, Chris S.; Swanson, Calvin S.; van Bloemen Waanders, Bart G.

ROL-PEBBL is a C++, MPI-based parallel code for mixed-integer PDE-constrained optimization (MIPDECO). In these problems we wish to optimize (control, design, etc.) physical systems, which must obey the laws of physics, when some of the decision variables must take integer values. ROL-PEBBL combines a code to efficiently search over integer choices (PEBBL = Parallel Enumeration Branch-and-Bound Library) and a code for efficient nonlinear optimization, including PDE-constrained optimization (ROL = Rapid Optimization Library). In this report, we summarize the design of ROL-PEBBL and initial applications/results. For an artificial source-inversion problem, finding sources of pollution on a grid from sparse samples, ROL-PEBBL's solution for the finest grid gave the best optimization guarantee of any general solver that gives both a solution and a quality guarantee.

A fast solver for the fractional Helmholtz equation

SIAM Journal on Scientific Computing

Glusa, Christian A.; Antil, Harbir; D'Elia, Marta D.; van Bloemen Waanders, Bart G.; Weiss, Chester J.

The purpose of this paper is to study a Helmholtz problem with a spectral fractional Laplacian, instead of the standard Laplacian. Recently, it has been established that such a fractional Helmholtz problem better captures the underlying behavior in geophysical electromagnetics. We establish the well-posedness and regularity of this problem. We introduce a hybrid spectral-finite element approach to discretize it and show well-posedness of the discrete system. In addition, we derive a priori discretization error estimates. Finally, we introduce an efficient solver that scales as well as the best possible solver for the classical integer-order Helmholtz equation. We conclude with several illustrative examples that confirm our theoretical findings.
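
For intuition, the spectral fractional Laplacian is diagonal in the Laplacian's eigenbasis, which makes a small 1-D analogue easy to write down (a sketch only; the paper treats the problem with a hybrid spectral-finite element method and a scalable solver, and the order s and shift k² below are arbitrary illustrative values):

```python
import numpy as np

n, s, k2 = 64, 0.75, 1.0     # fractional order s and shift k^2 are illustrative
h = 1.0 / (n + 1)
i = np.arange(1, n + 1)
# Known eigenpairs of the 1-D second-difference Dirichlet Laplacian:
lam = (4.0 / h**2) * np.sin(np.pi * i * h / 2.0) ** 2
Phi = np.sqrt(2.0 / (n + 1)) * np.sin(np.pi * np.outer(i, i) * h)

def frac_apply(u, power):
    """Apply (-Laplacian)^power spectrally: diagonal in the eigenbasis."""
    return Phi @ (lam ** power * (Phi.T @ u))

# Solve the fractional Helmholtz problem ((-Lap)^s - k^2) u = f.
f = np.sin(3.0 * np.pi * i * h)               # a smooth right-hand side
u = Phi @ ((Phi.T @ f) / (lam ** s - k2))
residual = frac_apply(u, s) - k2 * u - f
print(np.max(np.abs(residual)))               # near machine precision
```

With power = 1 this reduces to the ordinary discrete Laplacian, which is a convenient sanity check on the eigenpairs.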

Stochastic Deep Model Reference Adaptive Control

Proceedings of the IEEE Conference on Decision and Control

Joshi, Girish; Chowdhary, Girish; van Bloemen Waanders, Bart G.

In this paper, we present a Stochastic Deep Neural Network-based Model Reference Adaptive Control. Building on our work "Deep Model Reference Adaptive Control," we extend the controller's capability by using Bayesian deep neural networks (DNNs) to represent uncertainties and model nonlinearities. Stochastic Deep Model Reference Adaptive Control uses a Lyapunov-based method to adapt the output-layer weights of the DNN model in real time, while a data-driven supervised learning algorithm is used to update the inner-layer parameters. This asynchronous network update ensures boundedness and guaranteed tracking performance with a learning-based real-time feedback controller. A Bayesian approach to DNN learning helps avoid over-fitting the data and provides confidence intervals over the predictions. The controller's stochastic nature also ensures "Induced Persistency of excitation," leading to convergence of the overall system signal.

Extreme Scale Infrasound Inversion and Prediction for Weather Characterization and Acute Event Detection

van Bloemen Waanders, Bart G.; Ober, Curtis C.

Accurate and timely weather predictions are critical to many aspects of society, with a profound impact on our economy, general well-being, and national security. In particular, our ability to forecast severe weather systems is necessary to avoid injuries and fatalities, but also important to minimize infrastructure damage and maximize mitigation strategies. The weather community has developed a range of sophisticated numerical models that are executed at various spatial and temporal scales in an attempt to issue global, regional, and local forecasts in pseudo real time. The accuracy, however, depends on the time period of the forecast, the nonlinearities of the dynamics, and the target spatial resolution. Significant uncertainties plague these predictions, including errors in initial conditions, material properties, data, and model approximations. To address these shortcomings, data are collected continuously, at an effort level even larger than that of the modeling process. It has been demonstrated that the accuracy of the predictions depends on the quality of the data and is, to a certain extent, independent of the sophistication of the numerical models. Data assimilation has become one of the more critical steps in the overall weather prediction enterprise, and consequently substantial improvements in the quality of the data would have transformational benefits. This paper describes the use of infrasound inversion technology, enabled through exascale computing, that could potentially achieve orders-of-magnitude improvement in data quality and therefore transform weather predictions, with significant impact on many aspects of our society.

Simultaneous inversion of shear modulus and traction boundary conditions in biomechanical imaging

Inverse Problems in Science and Engineering

Seidl, D.T.; van Bloemen Waanders, Bart G.; Wildey, T.M.

We present a formulation to simultaneously invert for a heterogeneous shear modulus field and traction boundary conditions in an incompressible linear elastic plane stress model. Our approach utilizes scalable deterministic methods, including adjoint-based sensitivities and quasi-Newton optimization, to reduce the computational requirements for large-scale inversion with partial differential equation (PDE) constraints. We address the use of regularization for such formulations and explore the use of different types of regularization for the shear modulus and boundary traction. We apply this PDE-constrained optimization algorithm to a synthetic dataset to verify the accuracy in the reconstructed parameters, and to experimental data from a tissue-mimicking ultrasound phantom. In all of these examples, we compare inversion results from full-field and sparse data measurements.

Hyperdifferential sensitivity analysis of uncertain parameters in PDE-constrained optimization

International Journal for Uncertainty Quantification

Hart, Joseph; van Bloemen Waanders, Bart G.; Herzog, Roland

Many problems in engineering and science require the solution of large-scale optimization problems constrained by partial differential equations (PDEs). Though PDE-constrained optimization is itself challenging, most applications pose additional complexity, namely, uncertain parameters in the PDEs. Uncertainty quantification (UQ) is necessary to characterize, prioritize, and study the influence of these uncertain parameters. Sensitivity analysis, a classical tool in UQ, is frequently used to study the sensitivity of a model to uncertain parameters. In this article, we introduce "hyperdifferential sensitivity analysis," which considers the sensitivity of the solution of a PDE-constrained optimization problem to uncertain parameters. Our approach is a goal-oriented analysis which may be viewed as a tool to complement other UQ methods in the service of decision making and robust design. We formally define hyperdifferential sensitivity indices and highlight their relationship to the existing optimization and sensitivity analysis literature. Assuming the presence of low-rank structure in the parameter space, computational efficiency is achieved by leveraging a generalized singular value decomposition in conjunction with a randomized solver which converts the computational bottleneck of the algorithm into an embarrassingly parallel loop. Two multiphysics examples, consisting of nonlinear steady state control and transient linear inversion, demonstrate efficient identification of the uncertain parameters which have the greatest influence on the optimal solution.

Using additive manufacturing as a pathway to change the qualification paradigm

Solid Freeform Fabrication 2018: Proceedings of the 29th Annual International Solid Freeform Fabrication Symposium - An Additive Manufacturing Conference, SFF 2018

Roach, R.A.; Bishop, Joseph E.; Johnson, Kyle J.; Rodgers, Theron R.; Boyce, B.L.; Swiler, L.; van Bloemen Waanders, Bart G.; Chandross, M.; Kammler, Daniel K.; Balch, Dorian K.; Jared, B.; Martinez, Mario J.; Leathe, Nicholas L.; Ford, K.

Additive Manufacturing (AM) offers the opportunity to transform design, manufacturing, and qualification with its unique capabilities. AM is a disruptive technology, allowing the capability to simultaneously create part and material while tightly controlling and monitoring the manufacturing process at the voxel level, with the inherent flexibility and agility in printing layer-by-layer. AM enables the possibility of measuring critical material and part parameters during manufacturing, thus changing the way we collect data, assess performance, and accept or qualify parts. It provides an opportunity to shift from the current iterative design-build-test qualification paradigm using traditional manufacturing processes to design-by-predictivity where requirements are addressed concurrently and rapidly. The new qualification paradigm driven by AM provides the opportunity to predict performance probabilistically, to optimally control the manufacturing process, and to implement accelerated cycles of learning. Exploiting these capabilities to realize a new uncertainty quantification-driven qualification that is rapid, flexible, and practical is the focus of this paper.

Prediction and Inference of Multi-scale Electrical Properties of Geomaterials

Weiss, Chester J.; Beskardes, G.D.; van Bloemen Waanders, Bart G.

Motivated by the need for improved forward modeling and inversion capabilities of geophysical response in geologic settings whose fine-scale features demand accountability, this project describes two novel approaches which advance the current state of the art. First is a hierarchical material properties representation for finite element analysis whereby material properties can be prescribed on volumetric elements, in addition to their facets and edges. Hence, thin or fine-scaled features can be economically represented by small numbers of connected edges or facets, rather than tens of millions of very small volumetric elements. Examples of this approach are drawn from oilfield and near-surface geophysics where, for example, the electrostatic response of metallic infrastructure or fracture swarms is easily calculable on a laptop computer, with an estimated reduction in resource allocation of four orders of magnitude over traditional methods. Second is a first-ever solution method for the space-fractional Helmholtz equation in geophysical electromagnetics, accompanied by newly found magnetotelluric evidence supporting a fractional calculus representation of multi-scale geomaterials. Whereas these two achievements are significant in themselves, a clear understanding of the intermediate length scale where these two endmember viewpoints must converge remains unresolved and is a natural direction for future research. Additionally, an explicit mapping from a known multi-scale geomaterial model to its equivalent fractional calculus representation proved beyond the scope of the present research and, similarly, remains fertile ground for future exploration.

Adaptive wavelet compression of large additive manufacturing experimental and simulation datasets

Computational Mechanics

Salloum, Maher S.; Johnson, Kyle J.; Bishop, Joseph E.; Aytac, Jon M.; Dagel, Daryl D.; van Bloemen Waanders, Bart G.

New manufacturing technologies such as additive manufacturing require research and development to minimize the uncertainties in the produced parts. The research involves experimental measurements and large simulations, which result in huge quantities of data to store and analyze. We address this challenge by alleviating the data storage requirements using lossy data compression. We select wavelet bases as the mathematical tool for compression. Unlike images, additive manufacturing data is often represented on irregular geometries and unstructured meshes. Thus, we use Alpert tree-wavelets as bases for our data compression method. We first analyze different basis functions for the wavelets and find the one that results in maximal compression and minimal error in the reconstructed data. We then devise a new adaptive thresholding method that is data-agnostic and allows a priori estimation of the reconstruction error. Finally, we propose metrics to quantify the global and local errors in the reconstructed data. One of the error metrics addresses the preservation of physical constraints in reconstructed data fields, such as a divergence-free stress field in structural simulations. While our compression and decompression method is general, we apply it to both experimental and computational data obtained from measurements and thermal/structural modeling of the sintering of a hollow cylinder from metal powders using a Laser Engineered Net Shape process. The results show that monomials achieve optimal compression performance when used as wavelet bases. The new thresholding method results in compression ratios that are two to seven times larger than those obtained with commonly used thresholds. Overall, adaptive Alpert tree-wavelets can achieve compression ratios between one and three orders of magnitude, depending on which features of the data must be preserved. These results show that Alpert tree-wavelet compression is a viable and promising technique for reducing the size of large data structures found in both experiments and simulations.
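
The thresholding idea generalizes beyond Alpert tree-wavelets; as a self-contained illustration, here is ordinary Haar-wavelet compression of a 1-D signal with a relative threshold (the signal, the threshold, and the error metric are invented stand-ins; the paper's adaptive thresholding and unstructured-mesh bases are more sophisticated):

```python
import numpy as np

def haar(v):
    """Multilevel orthonormal Haar transform (length must be a power of two)."""
    v = v.astype(float).copy()
    n = len(v)
    while n > 1:
        a = (v[0:n:2] + v[1:n:2]) / np.sqrt(2.0)   # local averages
        d = (v[0:n:2] - v[1:n:2]) / np.sqrt(2.0)   # local details
        v[: n // 2], v[n // 2 : n] = a, d
        n //= 2
    return v

def ihaar(w):
    """Inverse of haar()."""
    w = w.astype(float).copy()
    n, N = 1, len(w)
    while n < N:
        a, d = w[:n].copy(), w[n : 2 * n].copy()
        w[0 : 2 * n : 2] = (a + d) / np.sqrt(2.0)
        w[1 : 2 * n : 2] = (a - d) / np.sqrt(2.0)
        n *= 2
    return w

x = np.linspace(0.0, 1.0, 1024)
signal = np.exp(-40.0 * (x - 0.3) ** 2) + 0.5 * (x > 0.7)  # smooth bump + jump
w = haar(signal)
tau = 1e-3 * np.max(np.abs(w))            # hypothetical relative threshold
w_c = np.where(np.abs(w) > tau, w, 0.0)   # drop small coefficients
ratio = w.size / np.count_nonzero(w_c)    # compression ratio
err = np.linalg.norm(ihaar(w_c) - signal) / np.linalg.norm(signal)
print(ratio, err)
```

Because the transform is orthonormal, the discarded-coefficient energy bounds the reconstruction error a priori, which is the property the paper's adaptive threshold exploits.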

On the convergence of the Neumann series for electrostatic fracture response

Geophysics

Weiss, Chester J.; van Bloemen Waanders, Bart G.

The feasibility of Neumann-series expansion of Maxwell's equations in the electrostatic limit is investigated for potentially rapid and approximate subsurface imaging of geologic features proximal to metallic infrastructure in an oilfield environment. Although generally useful for efficient modeling of mild conductivity perturbations in uncluttered settings, we have raised the question of its suitability for situations such as oilfields, in which metallic artifacts are pervasive and, in some cases, in direct electrical contact with the conductivity perturbation on which the Neumann series is computed. Convergence of the Neumann series and its residual error are computed using the hierarchical finite-element framework for a canonical oilfield model consisting of an L-shaped, steel-cased well, energized by a steady-state electrode, and penetrating a small set of mildly conducting fractures near the heel of the well. For a given node spacing h in the finite-element mesh, we find that the Neumann series is ultimately convergent if the conductivity is small enough - a result consistent with previous presumptions on the necessity of small conductivity perturbations. However, we also determine that the spectral radius of the Neumann series operator grows as approximately 1/h, thus suggesting that in the limit of the continuous problem h→0, the Neumann series is intrinsically divergent for all conductivity perturbations, regardless of their smallness. The hierarchical finite-element methodology itself is critically analyzed and shown to possess the h2 error convergence of traditional linear finite elements, thereby supporting the conclusion of an inescapably divergent Neumann series for this benchmark example. Application of the Neumann series to oilfield problems with metallic clutter should therefore be done with careful consideration to the coupling between infrastructure and geology. The methods used here are demonstrably useful in such circumstances.

Wireless Temperature Sensing Using Permanent Magnets for Nonlinear Feedback Control of Exothermic Polymers

IEEE Sensors Journal

Mazumdar, Anirban; Chen, Yi; van Bloemen Waanders, Bart G.; Brooks, Carlton F.; Kuehl, Michael K.; Nemer, Martin N.

Epoxies and resins can require careful temperature sensing and control in order to monitor and prevent degradation. To sense the temperature inside a mold, it is desirable to utilize a small, wireless sensing element. In this paper, we describe a new architecture for wireless temperature sensing and closed-loop temperature control of exothermic polymers. This architecture is the first to utilize magnetic field estimates of the temperature of permanent magnets within a temperature feedback control loop. We further improve performance and applicability by demonstrating sensing performance at relevant temperatures, incorporating a cure estimator, and implementing a nonlinear temperature controller. This novel architecture enables unique experimental results featuring closed-loop control of an exothermic resin without any physical connection to the inside of the mold. In this paper, we describe each of the unique features of this approach, including magnetic field-based temperature sensing, extended Kalman filtering for cure state estimation, and nonlinear feedback control over time-varying temperature trajectories. We use experimental results to demonstrate how low-cost permanent magnets can provide wireless temperature sensing up to ∼90 °C. In addition, we use a polymer cure-control testbed to illustrate how internal temperature sensing can provide improved temperature control over both short and long time-scales. This wireless temperature sensing and control architecture holds value for a range of manufacturing applications.

Data Analysis for the Born Qualified Grand LDRD Project

Swiler, Laura P.; van Bloemen Waanders, Bart G.; Jared, Bradley H.; Koepke, Joshua R.; Whetten, Shaun R.; Madison, Jonathan D.; Ivanoff, Thomas I.; Jackson, Olivia D.; Cook, Adam W.; Brown-Shaklee, Harlan J.; Kammler, Daniel K.; Johnson, Kyle J.; Ford, Kurtis R.; Bishop, Joseph E.; Roach, R.A.

This report summarizes the data analysis activities that were performed under the Born Qualified Grand Challenge Project from 2016–2018. It is meant to document the characterization of additively manufactured parts and processes for this project, as well as to demonstrate and identify further analyses and data science that could be done relating material processes to microstructure to properties to performance.

Remote Distributed Vibration Sensing Through Opaque Media Using Permanent Magnets

IEEE Transactions on Magnetics

Chen, Yi; Mazumdar, Anirban; Brooks, Carlton F.; van Bloemen Waanders, Bart G.; Bond, Stephen D.; Nemer, Martin N.

Vibration sensing is critical for a variety of applications from structural fatigue monitoring to understanding the modes of airplane wings. In particular, remote sensing techniques are needed for measuring the vibrations of multiple points simultaneously, assessing vibrations inside opaque metal vessels, and sensing through smoke clouds and other optically challenging environments. In this paper, we propose a method which measures high-frequency displacements remotely using changes in the magnetic field generated by permanent magnets. We leverage the unique nature of vibration tracking and use a calibrated local model technique developed specifically to improve the frequency-domain estimation accuracy. The results show that two-dimensional local models surpass the dipole model in tracking high-frequency motions. A theoretical basis for understanding the effects of electronic noise and error due to correlated variables is generated in order to predict the performance of experiments prior to implementation. Simultaneous measurements of up to three independent vibrating components are shown. The relative accuracy of the magnet-based displacement tracking with respect to the video tracking ranges from 40 to 190 μm when the maximum displacements approach ±5 mm and when sensor-to-magnet distances vary from 25 to 36 mm. Last, vibration sensing inside an opaque metal vessel and mode shape changes due to damage on an aluminum beam are also studied using the wireless permanent-magnet vibration sensing scheme.

Changing the Engineering Design & Qualification Paradigm in Component Design & Manufacturing (Born Qualified)

Roach, R.A.; Bishop, Joseph E.; Jared, Bradley H.; Keicher, David M.; Cook, Adam W.; Whetten, Shaun R.; Forrest, Eric C.; Stanford, Joshua S.; Boyce, Brad B.; Johnson, Kyle J.; Rodgers, Theron R.; Ford, Kurtis R.; Martinez, Mario J.; Moser, Daniel M.; van Bloemen Waanders, Bart G.; Chandross, M.; Abdeljawad, Fadi F.; Allen, Kyle M.; Stender, Michael S.; Beghini, Lauren L.; Swiler, Laura P.; Lester, Brian T.; Argibay, Nicolas A.; Brown-Shaklee, Harlan J.; Kustas, Andrew K.; Sugar, Joshua D.; Kammler, Daniel K.; Wilson, Mark A.

Abstract not provided.

Remote Temperature Distribution Sensing Using Permanent Magnets

IEEE Transactions on Magnetics

Chen, Yi; Guba, Oksana G.; Brooks, Carlton F.; Roberts, Christine C.; van Bloemen Waanders, Bart G.; Nemer, Martin N.

Remote temperature sensing is essential for applications in enclosed vessels, where feedthroughs or optical access points are not possible. A unique sensing method for measuring the temperature of multiple closely spaced points is proposed using permanent magnets and several three-axis magnetic field sensors. The magnetic field theory for multiple magnets is discussed and a solution technique is presented. Experimental calibration procedures, solution inversion considerations, and methods for optimizing the magnet orientations are described in order to obtain low-noise temperature estimates. The experimental setup and the properties of permanent magnets are shown. Finally, experiments were conducted to determine the temperature of nine magnets in different configurations over a temperature range of 5 °C to 60 °C and for a sensor-to-magnet distance of up to 35 mm. To show the possible applications of this sensing system for measuring temperatures through metal walls, additional experiments were conducted inside an opaque 304 stainless steel cylinder.

Visco-TTI-elastic FWI using discontinuous Galerkin

SEG Technical Program Expanded Abstracts

Ober, Curtis C.; Smith, Thomas M.; Overfelt, James R.; Collis, Samuel S.; von Winckel, Gregory J.; van Bloemen Waanders, Bart G.; Downey, Nathan J.; Mitchell, Scott A.; Bond, Stephen D.; Aldridge, David F.; Krebs, Jerome R.

The need to better represent the material properties within the earth's interior has driven the development of higher-fidelity physics, e.g., visco-tilted-transversely-isotropic (visco-TTI) elastic media and material interfaces, such as the ocean bottom and salt boundaries. This is especially true for full waveform inversion (FWI), where one would like to reproduce the real-world effects and invert on unprocessed raw data. Here we present a numerical formulation using a Discontinuous Galerkin (DG) finite-element (FE) method, which incorporates the desired high-fidelity physics and material interfaces. To offset the additional costs of this material representation, we include a variety of techniques (e.g., non-conformal meshing and local polynomial refinement), which reduce the overall costs with little effect on the solution accuracy.

Wireless temperature sensing using permanent magnets for multiple points undergoing repeatable motions

ASME 2016 Dynamic Systems and Control Conference, DSCC 2016

Chen, Yi; Guba, Oksana G.; Brooks, Carlton F.; Roberts, Christine C.; van Bloemen Waanders, Bart G.; Nemer, Martin N.

Temperature monitoring is essential in automation, mechatronics, robotics and other dynamic systems. Wireless methods which can sense multiple temperatures at the same time without the use of cables or slip-rings can enable many new applications. A novel method utilizing small permanent magnets is presented for wirelessly measuring the temperature of multiple points moving in repeatable motions. The technique utilizes linear least squares inversion to separate the magnetic field contributions of each magnet as it changes temperature. The experimental setup and calibration methods are discussed. Initial experiments show that temperatures from 5 to 50 °C can be accurately tracked for three neodymium iron boron magnets in a stationary configuration and while traversing in arbitrary, repeatable trajectories. This work presents a new sensing capability that can be extended to tracking multiple temperatures inside opaque vessels, on rotating bearings, within batteries, or at the tip of complex end-effectors.
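
A toy version of the least-squares separation step might look as follows (the gain matrix, noise level, and NdFeB temperature coefficient are illustrative stand-ins, not the paper's calibrated values):

```python
import numpy as np

# Hypothetical setup: each sensor reads a fixed geometric gain times each
# magnet's moment; the moments shrink roughly linearly as the magnets heat up.
rng = np.random.default_rng(5)
n_sensors, n_magnets = 9, 3
G = rng.standard_normal((n_sensors, n_magnets))  # calibrated gain matrix (made up)
m_true = np.array([0.99, 0.97, 1.01])            # per-magnet relative moments
b = G @ m_true + 1e-5 * rng.standard_normal(n_sensors)  # noisy field readings

# Linear least squares separates the contribution of each magnet:
m_est, *_ = np.linalg.lstsq(G, b, rcond=None)

# Map moment back to temperature; alpha is a typical reversible NdFeB
# coefficient (~ -0.12 %/degC), used here purely for illustration.
alpha, T_ref = -0.0012, 25.0
T_est = T_ref + (m_est - 1.0) / alpha
print(T_est)  # one temperature estimate per magnet
```

The repeatable-motion assumption in the paper is what makes the gain matrix knowable from calibration; this sketch simply takes it as given.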

Decreasing the temporal complexity for nonlinear, implicit reduced-order models by forecasting

Computer Methods in Applied Mechanics and Engineering

Carlberg, Kevin T.; Ray, Jaideep R.; van Bloemen Waanders, Bart G.

Implicit numerical integration of nonlinear ODEs requires solving a system of nonlinear algebraic equations at each time step. Each of these systems is often solved by a Newton-like method, which incurs a sequence of linear-system solves. Most model-reduction techniques for nonlinear ODEs exploit knowledge of a system's spatial behavior to reduce the computational complexity of each linear-system solve. However, the number of linear-system solves for the reduced-order simulation often remains roughly the same as that for the full-order simulation. We propose exploiting knowledge of the model's temporal behavior to (1) forecast the unknown variable of the reduced-order system of nonlinear equations at future time steps, and (2) use this forecast as an initial guess for the Newton-like solver during the reduced-order-model simulation. To compute the forecast, we propose using the Gappy POD technique. The goal is to generate an accurate initial guess so that the Newton solver requires many fewer iterations to converge, thereby decreasing the number of linear-system solves in the reduced-order-model simulation.
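
The mechanism can be seen in a scalar caricature: extrapolating the state history gives the Newton solver a better starting point at each implicit time step (plain linear extrapolation stands in for the paper's Gappy POD forecast; the ODE and step size are invented):

```python
def newton(f, df, u0, tol=1e-10, max_it=50):
    """Scalar Newton iteration that also reports its iteration count."""
    u, it = u0, 0
    while abs(f(u)) > tol and it < max_it:
        u -= f(u) / df(u)
        it += 1
    return u, it

# Implicit Euler for u' = -u^3: each step solves v - u_n + dt*v^3 = 0.
dt = 0.1
hist = [1.0]
iters_plain = iters_forecast = 0
for _ in range(20):
    un = hist[-1]
    f = lambda v, un=un: v - un + dt * v**3
    df = lambda v: 1.0 + 3.0 * dt * v**2
    _, it_plain = newton(f, df, un)                     # guess: previous state
    guess = 2.0 * hist[-1] - hist[-2] if len(hist) > 1 else un
    u_new, it_fc = newton(f, df, guess)                 # guess: extrapolation
    iters_plain += it_plain
    iters_forecast += it_fc
    hist.append(u_new)
print(iters_plain, iters_forecast)  # the forecast needs no more iterations
```

Each saved Newton iteration is a saved linear solve in the full setting, which is exactly the cost the paper targets.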

A sparse reconstruction method for the estimation of multi-resolution emission fields via atmospheric inversion

Geoscientific Model Development

Ray, J.; Lee, Jina L.; Yadav, V.; Lefantzi, Sophia L.; Michalak, A.M.; van Bloemen Waanders, Bart G.

Atmospheric inversions are frequently used to estimate fluxes of atmospheric greenhouse gases (e.g., biospheric CO2 flux fields) at Earth's surface. These inversions typically assume that flux departures from a prior model are spatially smoothly varying, which are then modeled using a multi-variate Gaussian. When the field being estimated is spatially rough, multi-variate Gaussian models are difficult to construct and a wavelet-based field model may be more suitable. Unfortunately, such models are very high dimensional and are most conveniently used when the estimation method can simultaneously perform data-driven model simplification (removal of model parameters that cannot be reliably estimated) and fitting. Such sparse reconstruction methods are typically not used in atmospheric inversions. In this work, we devise a sparse reconstruction method, and illustrate it in an idealized atmospheric inversion problem for the estimation of fossil fuel CO2 (ffCO2) emissions in the lower 48 states of the USA. Our new method is based on stagewise orthogonal matching pursuit (StOMP), a method used to reconstruct compressively sensed images. Our adaptations bestow three properties to the sparse reconstruction procedure which are useful in atmospheric inversions. We have modified StOMP to incorporate prior information on the emission field being estimated and to enforce non-negativity on the estimated field. Finally, though based on wavelets, our method allows for the estimation of fields in non-rectangular geometries, e.g., emission fields inside geographical and political boundaries. Our idealized inversions use a recently developed multi-resolution (i.e., wavelet-based) random field model developed for ffCO2 emissions and synthetic observations of ffCO2 concentrations from a limited set of measurement sites. We find that our method for limiting the estimated field within an irregularly shaped region is about a factor of 10 faster than conventional approaches. It also reduces the overall computational cost by a factor of 2. Further, the sparse reconstruction scheme imposes non-negativity without introducing strong nonlinearities, such as those introduced by employing log-transformed fields, and thus reaps the benefits of simplicity and computational speed that are characteristic of linear inverse problems.
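
The flavor of such a method can be captured with ordinary orthogonal matching pursuit plus a non-negativity clip (a deliberately simplified stand-in for the paper's StOMP variant; the random operator and synthetic sparse field below are invented):

```python
import numpy as np

def omp_nonneg(A, y, n_iter=10, tol=1e-8):
    """Greedy matching pursuit with a non-negativity clip on the coefficients."""
    r, support = y.copy(), []
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        c = A.T @ r                        # correlations with the residual
        j = int(np.argmax(c))              # most positively correlated column
        if c[j] <= tol:
            break
        if j not in support:
            support.append(j)
        sub, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        sub = np.clip(sub, 0.0, None)      # enforce non-negative "emissions"
        x[:] = 0.0
        x[support] = sub
        r = y - A @ x
    return x

rng = np.random.default_rng(4)
A = rng.standard_normal((80, 200))
A /= np.linalg.norm(A, axis=0)             # unit-norm columns
x_true = np.zeros(200)
x_true[[5, 50, 120]] = [3.0, 2.0, 1.0]     # sparse, non-negative field
x_hat = omp_nonneg(A, A @ x_true)
print(np.linalg.norm(x_hat - x_true))      # recovers the sparse field
```

Model simplification and fitting happen together: only the few columns the greedy loop selects ever receive nonzero coefficients.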

Arctic Climate Systems Analysis

Ivey, Mark D.; Robinson, David G.; Boslough, Mark B.; Backus, George A.; Peterson, Kara J.; van Bloemen Waanders, Bart G.; Swiler, Laura P.; Desilets, Darin M.; Reinert, Rhonda K.

This study began with a challenge from program area managers at Sandia National Laboratories to technical staff in the energy, climate, and infrastructure security areas: apply a systems-level perspective to existing science and technology program areas in order to determine technology gaps, identify new technical capabilities at Sandia that could be applied to these areas, and identify opportunities for innovation. The Arctic was selected as one of these areas for systems-level analyses, and this report documents the results. In this study, an emphasis was placed on the Arctic atmosphere since Sandia has been active in atmospheric research in the Arctic since 1997. This study begins with a discussion of the challenges and benefits of analyzing the Arctic as a system. It goes on to discuss current and future needs of the defense, scientific, energy, and intelligence communities for more comprehensive data products related to the Arctic; assess the current state of atmospheric measurement resources available for the Arctic; and explain how the capabilities at Sandia National Laboratories can be used to address the identified technological, data, and modeling needs of the defense, scientific, energy, and intelligence communities for Arctic support.

More Details

Inverse problems in heterogeneous and fractured media using peridynamics

Journal of Mechanics of Materials and Structures

Turner, Daniel Z.; van Bloemen Waanders, Bart G.; Parks, Michael L.

The following work presents an adjoint-based methodology for solving inverse problems in heterogeneous and fractured media using state-based peridynamics. We show that the inner product involving the peridynamic operators is self-adjoint. The proposed method is illustrated for several numerical examples with constant and spatially varying material parameters as well as in the context of fractures. We also present a framework for obtaining material parameters by integrating digital image correlation (DIC) with inverse analysis. This framework is demonstrated by evaluating the bulk and shear moduli for a sample of nuclear graphite using digital photographs taken during the experiment. The resulting measured values correspond well with other results reported in the literature. Lastly, we show that this framework can be used to determine the load state given observed measurements of a crack opening. This type of analysis has many applications in characterizing subsurface stress-state conditions given fracture patterns in cores of geologic material.

More Details

Construction of energy-stable projection-based reduced order models

Applied Mathematics and Computation

Kalashnikova, Irina; Barone, Matthew F.; Arunajatesan, Srinivasan A.; van Bloemen Waanders, Bart G.

An approach for building energy-stable Galerkin reduced order models (ROMs) for linear hyperbolic or incompletely parabolic systems of partial differential equations (PDEs) using continuous projection is developed. This method is an extension of earlier work by the authors specific to the equations of linearized compressible inviscid flow. The key idea is to apply to the PDEs a transformation induced by the Lyapunov function for the system, and to build the ROM in the transformed variables. For linear problems, the desired transformation is induced by a special inner product, termed the "symmetry inner product", which is derived herein for several systems of physical interest. Connections are established between the proposed approach and other stability-preserving model reduction methods, giving the paper a review flavor. More specifically, it is shown that a discrete counterpart of this inner product is a weighted L2 inner product obtained by solving a Lyapunov equation, first proposed by Rowley et al. and termed herein the "Lyapunov inner product". Comparisons between the symmetry inner product and the Lyapunov inner product are made, and the performance of ROMs constructed using these inner products is evaluated on several benchmark test cases.
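
The discrete counterpart discussed above, the "Lyapunov inner product", can be sketched concretely. This is a minimal stand-in, not the paper's code; the stabilized test operator and random basis are arbitrary illustrations:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def lyapunov_inner_product_rom(A, V, Q=None):
    """Galerkin ROM of dx/dt = A x built in the 'Lyapunov inner product'.

    For a stable LTI system, the solution P of  A^T P + P A = -Q  (with Q
    symmetric positive definite) is symmetric positive definite and
    defines a weighted L2 inner product <u, v>_P = u^T P v.  Galerkin
    projection in this inner product is energy-stable for any basis V.
    """
    if Q is None:
        Q = np.eye(A.shape[0])
    P = solve_continuous_lyapunov(A.T, -Q)  # A^T P + P A = -Q
    M = V.T @ P @ V                         # reduced Gram matrix (SPD)
    A_r = np.linalg.solve(M, V.T @ P @ A @ V)
    return A_r, P

# Example: an arbitrary stable full-order operator and a random basis.
rng = np.random.default_rng(0)
n, k = 20, 4
A = rng.standard_normal((n, n))
A -= (np.abs(np.linalg.eigvals(A).real).max() + 1.0) * np.eye(n)  # force stability
V = np.linalg.qr(rng.standard_normal((n, k)))[0]
A_r, P = lyapunov_inner_product_rom(A, V)
# Every eigenvalue of A_r has negative real part, regardless of the choice of V.
```

The stability guarantee follows because Re <w, A_r w>_M = -0.5 w* V^T Q V w < 0 for any reduced eigenvector w, so no choice of basis can produce a growing ROM mode.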

More Details

Viscoelastic material inversion using Sierra-SD and ROL

Walsh, Timothy W.; Aquino, Wilkins A.; Ridzal, Denis R.; Kouri, Drew P.; van Bloemen Waanders, Bart G.; Urbina, Angel U.

In this report we derive frequency-domain methods for inverse characterization of the constitutive parameters of viscoelastic materials. The inverse problem is cast in a PDE-constrained optimization framework with efficient computation of gradients and Hessian vector products through matrix free operations. The abstract optimization operators for first and second derivatives are derived from first principles. Various methods from the Rapid Optimization Library (ROL) are tested on the viscoelastic inversion problem. The methods described herein are applied to compute the viscoelastic bulk and shear moduli of a foam block model, which was recently used in experimental testing for viscoelastic property characterization.
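
The adjoint pattern behind the efficient gradient computation can be illustrated on a toy single-parameter problem. The parameterization K(m) = K0 + m*K1 and all names here are hypothetical stand-ins, not the Sierra-SD/ROL formulation:

```python
import numpy as np

def objective_and_gradient(m, K0, K1, f, d):
    """Misfit and adjoint-based gradient for a toy PDE-constrained inversion.

    State equation: K(m) u = f with K(m) = K0 + m * K1 (a hypothetical
    one-parameter material model).  Objective: J = 0.5 * ||u - d||^2.
    A single extra (adjoint) solve gives the exact derivative dJ/dm.
    """
    K = K0 + m * K1
    u = np.linalg.solve(K, f)              # forward (state) solve
    lam = np.linalg.solve(K.T, -(u - d))   # adjoint solve
    J = 0.5 * np.dot(u - d, u - d)
    dJdm = lam @ (K1 @ u)                  # gradient: lam^T (dK/dm) u
    return J, dJdm
```

The appeal of this pattern is that the cost is one forward plus one adjoint solve; with many material parameters, the same adjoint variable is reused for every gradient component, which is what makes the matrix-free approach scale.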

More Details

Breaking Computational Barriers: Real-time Analysis and Optimization with Large-scale Nonlinear Models via Model Reduction

Drohmann, M.D.; Tuminaro, Raymond S.; Boggs, Paul T.; Ray, Jaideep R.; van Bloemen Waanders, Bart G.; Carlberg, Kevin T.

Model reduction for dynamical systems is a promising approach for reducing the computational cost of large-scale physics-based simulations to enable high-fidelity models to be used in many-query (e.g., Bayesian inference) and near-real-time (e.g., fast-turnaround simulation) contexts. While model reduction works well for specialized problems such as linear time-invariant systems, it is much more difficult to obtain accurate, stable, and efficient reduced-order models (ROMs) for systems with general nonlinearities. This report describes several advances that enable nonlinear reduced-order models (ROMs) to be deployed in a variety of time-critical settings. First, we present an error bound for the Gauss-Newton with Approximated Tensors (GNAT) nonlinear model reduction technique. This bound allows the state-space error for the GNAT method to be quantified when applied with the backward Euler time-integration scheme. Second, we present a methodology for preserving classical Lagrangian structure in nonlinear model reduction. This technique guarantees that important properties--such as energy conservation and symplectic time-evolution maps--are preserved when performing model reduction for models described by a Lagrangian formalism (e.g., molecular dynamics, structural dynamics). Third, we present a novel technique for decreasing the temporal complexity--defined as the number of Newton-like iterations performed over the course of the simulation--by exploiting time-domain data. Fourth, we describe a novel method for refining projection-based reduced-order models a posteriori using a goal-oriented framework similar to mesh-adaptive h-refinement in finite elements. The technique allows the ROM to generate arbitrarily accurate solutions, thereby providing the ROM with a 'failsafe' mechanism in the event of insufficient training data. Finally, we present the reduced-order model error surrogate (ROMES) method for statistically quantifying reduced-order-model errors. 
This enables ROMs to be rigorously incorporated in uncertainty-quantification settings, as the error model can be treated as a source of epistemic uncertainty. This work was completed as part of a Truman Fellowship appointment. We note that much additional work was performed as part of the Fellowship. One salient project is the development of the Trilinos-based model-reduction software module Razor, which is currently bundled with the Albany PDE code and allows nonlinear reduced-order models to be constructed for any application supported in Albany. Other important projects include the following: 1. ROMES-equipped ROMs for Bayesian inference: K. Carlberg, M. Drohmann, F. Lu (Lawrence Berkeley National Laboratory), M. Morzfeld (Lawrence Berkeley National Laboratory). 2. ROM-enabled Krylov-subspace recycling: K. Carlberg, V. Forstall (University of Maryland), P. Tsuji, R. Tuminaro. 3. A pseudo balanced POD method using only dual snapshots: K. Carlberg, M. Sarovar. 4. An analysis of discrete v. continuous optimality in nonlinear model reduction: K. Carlberg, M. Barone, H. Antil (George Mason University). Journal articles for these projects are in progress at the time of this writing.

More Details

Greenhouse Gas Source Attribution: Measurements Modeling and Uncertainty Quantification

Liu, Zhen L.; Safta, Cosmin S.; Sargsyan, Khachik S.; Najm, H.N.; van Bloemen Waanders, Bart G.; LaFranchi, Brian L.; Ivey, Mark D.; Schrader, Paul E.; Michelsen, Hope A.; Bambha, Ray B.

In this project we have developed atmospheric measurement capabilities and a suite of atmospheric modeling and analysis tools that are well suited for verifying emissions of greenhouse gases (GHGs) on an urban-through-regional scale. We have for the first time applied the Community Multiscale Air Quality (CMAQ) model to simulate atmospheric CO2. This will allow for the examination of regional-scale transport and distribution of CO2 along with air pollutants traditionally studied using CMAQ at relatively high spatial and temporal resolution with the goal of leveraging emissions verification efforts for both air quality and climate. We have developed a bias-enhanced Bayesian inference approach that can remedy the well-known problem of transport model errors in atmospheric CO2 inversions. We have tested the approach using data and model outputs from the TransCom3 global CO2 inversion comparison project. We have also performed two prototyping studies on inversion approaches in the generalized convection-diffusion context. One of these studies employed Polynomial Chaos Expansion to accelerate the evaluation of a regional transport model and enable efficient Markov Chain Monte Carlo sampling of the posterior for Bayesian inference. The other approach uses deterministic inversion of a convection-diffusion-reaction system in the presence of uncertainty. These approaches should, in principle, be applicable to realistic atmospheric problems with moderate adaptation. We outline a regional greenhouse gas source inference system that integrates (1) two approaches of atmospheric dispersion simulation and (2) a class of Bayesian inference and uncertainty quantification algorithms. We use two different and complementary approaches to simulate atmospheric dispersion. Specifically, we use an Eulerian chemical transport model, CMAQ, and a Lagrangian Particle Dispersion Model, FLEXPART-WRF. 
These two models share the same WRF assimilated meteorology fields, making it possible to perform a hybrid simulation, in which the Eulerian model (CMAQ) can be used to compute the initial condition needed by the Lagrangian model, while the source-receptor relationships for a large state vector can be efficiently computed using the Lagrangian model in its backward mode. In addition, CMAQ has a complete treatment of atmospheric chemistry of a suite of traditional air pollutants, many of which could help attribute GHGs from different sources. The inference of emissions sources using atmospheric observations is cast as a Bayesian model calibration problem, which is solved using a variety of Bayesian techniques, such as the bias-enhanced Bayesian inference algorithm, which accounts for the intrinsic model deficiency, Polynomial Chaos Expansion to accelerate model evaluation and Markov Chain Monte Carlo sampling, and Karhunen-Loève (KL) Expansion to reduce the dimensionality of the state space. We have established an atmospheric measurement site in Livermore, CA and are collecting continuous measurements of CO2, CH4 and other species that are typically co-emitted with these GHGs. Measurements of co-emitted species can assist in attributing the GHGs to different emissions sectors. Automatic calibrations using traceable standards are performed routinely for the gas-phase measurements. We are also collecting standard meteorological data at the Livermore site as well as planetary boundary height measurements using a ceilometer. The location of the measurement site is well suited to sample air transported between the San Francisco Bay area and the California Central Valley.

More Details

Reduced Order Modeling for Prediction and Control of Large-Scale Systems

Kalashnikova, Irina; Arunajatesan, Srinivasan A.; Barone, Matthew F.; van Bloemen Waanders, Bart G.; Fike, Jeffrey A.

This report describes work performed from June 2012 through May 2014 as a part of a Sandia Early Career Laboratory Directed Research and Development (LDRD) project led by the first author. The objective of the project is to investigate methods for building stable and efficient proper orthogonal decomposition (POD)/Galerkin reduced order models (ROMs): models derived from a sequence of high-fidelity simulations but having a much lower computational cost. Since they are, by construction, small and fast, ROMs can enable real-time simulations of complex systems for on-the-spot analysis, control and decision-making in the presence of uncertainty. Of particular interest to Sandia is the use of ROMs for the quantification of the compressible captive-carry environment, simulated for the design and qualification of nuclear weapons systems. It is an unfortunate reality that many ROM techniques are computationally intractable or lack an a priori stability guarantee for compressible flows. For this reason, this LDRD project focuses on the development of techniques for building provably stable projection-based ROMs. Model reduction approaches based on continuous as well as discrete projection are considered. In the first part of this report, an approach for building energy-stable Galerkin ROMs for linear hyperbolic or incompletely parabolic systems of partial differential equations (PDEs) using continuous projection is developed. The key idea is to apply a transformation induced by the Lyapunov function for the system, and to build the ROM in the transformed variables. It is shown that, for many PDE systems including the linearized compressible Euler and linearized compressible Navier-Stokes equations, the desired transformation is induced by a special inner product, termed the “symmetry inner product”. Attention is then turned to nonlinear conservation laws. 
A new transformation and corresponding energy-based inner product for the full nonlinear compressible Navier-Stokes equations is derived, and it is demonstrated that if a Galerkin ROM is constructed in this inner product, the ROM system energy will be bounded in a way that is consistent with the behavior of the exact solution to these PDEs, i.e., the ROM will be energy-stable. The viability of the linear as well as nonlinear continuous projection model reduction approaches developed as a part of this project is evaluated on several test cases, including the cavity configuration of interest in the targeted application area. In the second part of this report, some POD/Galerkin approaches for building stable ROMs using discrete projection are explored. It is shown that, for generic linear time-invariant (LTI) systems, a discrete counterpart of the continuous symmetry inner product is a weighted L2 inner product obtained by solving a Lyapunov equation. This inner product was first proposed by Rowley et al., and is termed herein the “Lyapunov inner product”. Comparisons between the symmetry inner product and the Lyapunov inner product are made, and the performance of ROMs constructed using these inner products is evaluated on several benchmark test cases. Also in the second part of this report, a new ROM stabilization approach, termed “ROM stabilization via optimization-based eigenvalue reassignment”, is developed for generic LTI systems. At the heart of this method is a constrained nonlinear least-squares optimization problem that is formulated and solved numerically to ensure accuracy of the stabilized ROM. Numerical studies reveal that the optimization problem is computationally inexpensive to solve, and that the new stabilization approach delivers ROMs that are stable as well as accurate. Summaries of “lessons learned” and perspectives for future work motivated by this LDRD project are provided at the end of each of the two main chapters.

More Details

Kalman-filtered compressive sensing for high resolution estimation of anthropogenic greenhouse gas emissions from sparse measurements

Ray, Jaideep R.; Lee, Jina L.; Lefantzi, Sophia L.; van Bloemen Waanders, Bart G.

The estimation of fossil-fuel CO2 emissions (ffCO2) from limited ground-based and satellite measurements of CO2 concentrations will form a key component of the monitoring of treaties aimed at the abatement of greenhouse gas emissions. The limited nature of the measured data leads to a severely-underdetermined estimation problem. If the estimation is performed at fine spatial resolutions, it can also be computationally expensive. In order to enable such estimations, advances are needed in the spatial representation of ffCO2 emissions, scalable inversion algorithms and the identification of observables to measure. To that end, we investigate parsimonious spatial parameterizations of ffCO2 emissions which can be used in atmospheric inversions. We devise and test three random field models, based on wavelets, Gaussian kernels and covariance structures derived from easily-observed proxies of human activity. In doing so, we constructed a novel inversion algorithm, based on compressive sensing and sparse reconstruction, to perform the estimation. We also address scalable ensemble Kalman filters as an inversion mechanism and quantify the impact of Gaussian assumptions inherent in them. We find that the assumption does not impact the estimates of mean ffCO2 source strengths appreciably, but a comparison with Markov chain Monte Carlo estimates shows significant differences in the variance of the source strengths. Finally, we study whether the very different spatial natures of biogenic and ffCO2 emissions can be used to estimate them, in a disaggregated fashion, solely from CO2 concentration measurements, without extra information from products of incomplete combustion, e.g., CO. We find that this is possible during the winter months, though the errors can be as large as 50%.
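
The ensemble Kalman update, whose built-in Gaussian assumption the abstract quantifies, can be sketched for a single analysis step. This is an illustrative stochastic EnKF with perturbed observations, not the authors' scalable implementation:

```python
import numpy as np

def enkf_update(ensemble, H, y, R, rng):
    """One stochastic ensemble Kalman filter (EnKF) analysis step.

    ensemble : (n_state, n_members) array of prior samples
    H        : linear observation operator, (n_obs, n_state)
    y        : observed values, (n_obs,)
    R        : observation-error covariance, (n_obs, n_obs)
    """
    n_mem = ensemble.shape[1]
    X = ensemble - ensemble.mean(axis=1, keepdims=True)
    HX = H @ ensemble
    HXp = HX - HX.mean(axis=1, keepdims=True)
    Pxy = X @ HXp.T / (n_mem - 1)          # state-observation covariance
    Pyy = HXp @ HXp.T / (n_mem - 1) + R    # innovation covariance
    K = Pxy @ np.linalg.inv(Pyy)           # Kalman gain
    # Perturbed observations keep the posterior ensemble spread consistent.
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_mem).T
    return ensemble + K @ (Y - HX)
```

Because the gain is built only from sample means and covariances, the update is exact for Gaussian statistics; the variance discrepancies against MCMC noted above arise precisely when the posterior departs from Gaussianity.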

More Details

Construction of energy-stable Galerkin reduced order models

Barone, Matthew F.; Arunajatesan, Srinivasan A.; van Bloemen Waanders, Bart G.; Kalashnikova, Irina

This report aims to unify several approaches for building stable projection-based reduced order models (ROMs). Attention is focused on linear time-invariant (LTI) systems. The model reduction procedure consists of two steps: the computation of a reduced basis, and the projection of the governing partial differential equations (PDEs) onto this reduced basis. Two kinds of reduced bases are considered: the proper orthogonal decomposition (POD) basis and the balanced truncation basis. The projection step of the model reduction can be done in two ways: via continuous projection or via discrete projection. First, an approach for building energy-stable Galerkin ROMs for linear hyperbolic or incompletely parabolic systems of PDEs using continuous projection is proposed. The idea is to apply to the set of PDEs a transformation induced by the Lyapunov function for the system, and to build the ROM in the transformed variables. The resulting ROM will be energy-stable for any choice of reduced basis. It is shown that, for many PDE systems, the desired transformation is induced by a special weighted L2 inner product, termed the %E2%80%9Csymmetry inner product%E2%80%9D. Attention is then turned to building energy-stable ROMs via discrete projection. A discrete counterpart of the continuous symmetry inner product, a weighted L2 inner product termed the %E2%80%9CLyapunov inner product%E2%80%9D, is derived. The weighting matrix that defines the Lyapunov inner product can be computed in a black-box fashion for a stable LTI system arising from the discretization of a system of PDEs in space. It is shown that a ROM constructed via discrete projection using the Lyapunov inner product will be energy-stable for any choice of reduced basis. Connections between the Lyapunov inner product and the inner product induced by the balanced truncation algorithm are made. Comparisons are also made between the symmetry inner product and the Lyapunov inner product. 
The performance of ROMs constructed using these inner products is evaluated on several benchmark test cases.

More Details

A multiresolution spatial parametrization for the estimation of fossil-fuel carbon dioxide emissions via atmospheric inversions

Ray, Jaideep R.; Lee, Jina L.; Lefantzi, Sophia L.; van Bloemen Waanders, Bart G.

The estimation of fossil-fuel CO2 emissions (ffCO2) from limited ground-based and satellite measurements of CO2 concentrations will form a key component of the monitoring of treaties aimed at the abatement of greenhouse gas emissions. To that end, we construct a multiresolution spatial parametrization for fossil-fuel CO2 emissions (ffCO2), to be used in atmospheric inversions. Such a parametrization does not currently exist. The parametrization uses wavelets to accurately capture the multiscale, nonstationary nature of ffCO2 emissions and employs proxies of human habitation, e.g., images of lights at night and maps of built-up areas to reduce the dimensionality of the multiresolution parametrization. The parametrization is used in a synthetic data inversion to test its suitability for use in atmospheric inverse problems. This linear inverse problem is predicated on observations of ffCO2 concentrations collected at measurement towers. We adapt a convex optimization technique, commonly used in the reconstruction of compressively sensed images, to perform sparse reconstruction of the time-variant ffCO2 emission field. We also borrow concepts from compressive sensing to impose boundary conditions, i.e., to limit ffCO2 emissions within an irregularly shaped region (the United States, in our case). We find that the optimization algorithm performs a data-driven sparsification of the spatial parametrization and retains only those wavelets whose weights can be estimated from the observations. Further, our method for the imposition of boundary conditions leads to a factor of 10 computational saving over conventional means of doing so. We conclude with a discussion of the accuracy of the estimated emissions and the suitability of the spatial parametrization for use in inverse problems with a significant degree of regularization.

More Details

Bayesian data assimilation for stochastic multiscale models of transport in porous media

Lefantzi, Sophia L.; Klise, Katherine A.; Salazar, Luke S.; Mckenna, Sean A.; van Bloemen Waanders, Bart G.; Ray, Jaideep R.

We investigate Bayesian techniques that can be used to reconstruct field variables from partial observations. In particular, we target fields that exhibit spatial structures with a large spectrum of lengthscales. Contemporary methods typically describe the field on a grid and estimate structures which can be resolved by it. In contrast, we address the reconstruction of grid-resolved structures as well as estimation of statistical summaries of subgrid structures, which are smaller than the grid resolution. We perform this in two different ways (a) via a physical (phenomenological), parameterized subgrid model that summarizes the impact of the unresolved scales at the coarse level and (b) via multiscale finite elements, where specially designed prolongation and restriction operators establish the interscale link between the same problem defined on a coarse and fine mesh. The estimation problem is posed as a Bayesian inverse problem. Dimensionality reduction is performed by projecting the field to be inferred on a suitable orthogonal basis set, viz. the Karhunen-Loeve expansion of a multiGaussian. We first demonstrate our techniques on the reconstruction of a binary medium consisting of a matrix with embedded inclusions, which are too small to be grid-resolved. The reconstruction is performed using an adaptive Markov chain Monte Carlo method. We find that the posterior distributions of the inferred parameters are approximately Gaussian. We exploit this finding to reconstruct a permeability field with long, but narrow embedded fractures (which are too fine to be grid-resolved) using scalable ensemble Kalman filters; this also allows us to address larger grids. Ensemble Kalman filtering is then used to estimate the values of hydraulic conductivity and specific yield in a model of the High Plains Aquifer in Kansas. 
Strong conditioning of the spatial structure of the parameters and the non-linear aspects of the water table aquifer create difficulty for the ensemble Kalman filter. We conclude with a demonstration of the use of multiscale stochastic finite elements to reconstruct permeability fields. This method, though computationally intensive, is general and can be used for multiscale inference in cases where a subgrid model cannot be constructed.
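
The Karhunen-Loeve projection used above for dimensionality reduction can be sketched for a 1-D field. The squared-exponential covariance, grid, and parameter values are assumptions for illustration only:

```python
import numpy as np

def kl_expansion(x, corr_len, n_modes, rng):
    """Truncated Karhunen-Loeve expansion of a 1-D Gaussian random field.

    Builds a squared-exponential covariance on the grid x, keeps the
    n_modes leading eigenpairs, and synthesizes one realization from
    n_modes i.i.d. standard-normal KL weights.
    """
    C = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / corr_len ** 2)
    vals, vecs = np.linalg.eigh(C)
    order = np.argsort(vals)[::-1][:n_modes]       # leading modes first
    vals, vecs = vals[order], vecs[:, order]
    xi = rng.standard_normal(n_modes)              # the KL weights
    field = vecs @ (np.sqrt(np.maximum(vals, 0.0)) * xi)
    return field, vals

# The smoother the covariance, the faster the spectrum decays, so a
# handful of weights represents the whole grid-resolved field.
x = np.linspace(0.0, 1.0, 200)
field, vals = kl_expansion(x, corr_len=0.2, n_modes=10,
                           rng=np.random.default_rng(1))
```

The inverse problem is then posed on the few KL weights xi rather than on the full grid, which is what makes the MCMC and ensemble Kalman approaches above tractable.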

More Details

Truncated multiGaussian fields and effective conductance of binary media

Advances in Water Resources

Mckenna, Sean A.; Ray, Jaideep R.; Marzouk, Youssef; van Bloemen Waanders, Bart G.

Truncated Gaussian fields provide a flexible model for defining binary media with dispersed (as opposed to layered) inclusions. General properties of excursion sets on these truncated fields are coupled with a distance-based upscaling algorithm and approximations of point process theory to develop an estimation approach for effective conductivity in two-dimensions. Estimation of effective conductivity is derived directly from knowledge of the kernel size used to create the multiGaussian field, defined as the full-width at half maximum (FWHM), the truncation threshold and conductance values of the two modes. Therefore, instantiation of the multiGaussian field is not necessary for estimation of the effective conductance. The critical component of the effective medium approximation developed here is the mean distance between high conductivity inclusions. This mean distance is characterized as a function of the FWHM, the truncation threshold and the ratio of the two modal conductivities. Sensitivity of the resulting effective conductivity to this mean distance is examined for two levels of contrast in the modal conductances and different FWHM sizes. Results demonstrate that the FWHM is a robust measure of mean travel distance in the background medium. The resulting effective conductivities are accurate when compared to numerical results and results obtained from effective media theory, distance-based upscaling and numerical simulation. © 2011 Elsevier Ltd.
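
The construction of a truncated multiGaussian binary medium, white noise convolved with a Gaussian kernel of a given FWHM and then thresholded, can be sketched as follows; the parameter values are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def binary_medium(shape, fwhm, threshold, rng):
    """Binary inclusion field via a truncated multiGaussian.

    Uncorrelated Gaussian noise is convolved with a Gaussian kernel of
    the given full-width at half maximum (FWHM), renormalized to zero
    mean and unit variance, and truncated: cells above `threshold`
    form the high-conductivity inclusions.
    """
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # FWHM -> std dev
    field = gaussian_filter(rng.standard_normal(shape), sigma)
    field = (field - field.mean()) / field.std()
    return field > threshold

# A threshold of 1.0 gives an inclusion proportion of roughly 1 - Phi(1).
medium = binary_medium((128, 128), fwhm=8.0, threshold=1.0,
                       rng=np.random.default_rng(2))
```

As the abstract notes, the effective-conductance estimate needs only the FWHM, threshold, and modal conductances, so in the paper's approach this instantiation step can be skipped entirely.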

More Details

The effect of error models in the multiscale inversion of binary permeability fields

Ray, Jaideep R.; van Bloemen Waanders, Bart G.; Mckenna, Sean A.

We present results from a recently developed multiscale inversion technique for binary media, with emphasis on the effect of subgrid model errors on the inversion. Binary media are a useful fine-scale representation of heterogeneous porous media. Averaged properties of the binary field representations can be used to characterize flow through the porous medium at the macroscale. Both direct measurements of the averaged properties and upscaling are complicated and may not provide accurate results. However, it may be possible to infer upscaled properties of the binary medium from indirect measurements at the coarse scale. Multiscale inversion, performed with a subgrid model to connect disparate scales together, can also yield information on the fine-scale properties. We model the binary medium using truncated Gaussian fields, and develop a subgrid model for the upscaled permeability based on excursion sets of those fields. The subgrid model requires an estimate of the proportion of inclusions at the block scale as well as some geometrical parameters of the inclusions as inputs, and predicts the effective permeability. The inclusion proportion is assumed to be spatially varying, modeled using Gaussian processes and represented using a truncated Karhunen-Loève (KL) expansion. This expansion is used, along with the subgrid model, to pose a Bayesian inverse problem for the KL weights and the geometrical parameters of the inclusions. The model error is represented in two different ways: (1) as a homoscedastic error and (2) as a heteroscedastic error, dependent on inclusion proportionality and geometry. The error models impact the form of the likelihood function in the expression for the posterior density of the objects of inference. The problem is solved using an adaptive Markov Chain Monte Carlo method, and joint posterior distributions are developed for the KL weights and inclusion geometry. 
Effective permeabilities and tracer breakthrough times at a few 'sensor' locations (obtained by simulating a pump test) form the observables used in the inversion. The inferred quantities can be used to generate an ensemble of permeability fields, both upscaled and fine-scale, which are consistent with the observations. We compare the inferences developed using the two error models, in terms of the KL weights and fine-scale realizations that could be supported by the coarse-scale inferences. Permeability differences are observed mainly in regions where the inclusion proportion is near the percolation threshold, and the subgrid model incurs its largest approximation error. These differences are also reflected in the tracer breakthrough times and the geometry of flow streamlines, as obtained from a permeameter simulation. The uncertainty due to subgrid model error is also compared to the uncertainty in the inversion due to incomplete data.
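
For reference, the accept/reject core of a random-walk Metropolis sampler is shown below. The paper uses an adaptive variant, which additionally tunes the proposal covariance from the chain history; that adaptation is omitted here:

```python
import numpy as np

def metropolis(log_post, x0, n_steps, prop_std, rng):
    """Plain random-walk Metropolis sampler.

    Draws samples from the density proportional to exp(log_post(x)).
    An adaptive variant would tune prop_std (or a full proposal
    covariance) from the history of the chain.
    """
    x = np.array(x0, dtype=float)
    lp = log_post(x)
    chain = np.empty((n_steps, x.size))
    n_accept = 0
    for i in range(n_steps):
        prop = x + prop_std * rng.standard_normal(x.size)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept test
            x, lp = prop, lp_prop
            n_accept += 1
        chain[i] = x
    return chain, n_accept / n_steps
```

Because only ratios of the posterior appear, the (intractable) normalizing constant of the Bayesian posterior never has to be computed; the two error models above enter solely through `log_post`.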

More Details

Posterior predictive modeling using multi-scale stochastic inverse parameter estimates

Mckenna, Sean A.; Ray, Jaideep R.; van Bloemen Waanders, Bart G.

Multi-scale binary permeability field estimation from static and dynamic data is completed using Markov Chain Monte Carlo (MCMC) sampling. The binary permeability field is defined as high permeability inclusions within a lower permeability matrix. Static data are obtained as measurements of permeability with support consistent to the coarse scale discretization. Dynamic data are advective travel times along streamlines calculated through a fine-scale field and averaged for each observation point at the coarse scale. Parameters estimated at the coarse scale (30 x 20 grid) are the spatially varying proportion of the high permeability phase and the inclusion length and aspect ratio of the high permeability inclusions. From the non-parametric, posterior distributions estimated for these parameters, a recently developed sub-grid algorithm is employed to create an ensemble of realizations representing the fine-scale (3000 x 2000), binary permeability field. Each fine-scale ensemble member is instantiated by convolution of an uncorrelated multiGaussian random field with a Gaussian kernel defined by the estimated inclusion length and aspect ratio. Since the multiGaussian random field is itself a realization of a stochastic process, the procedure for generating fine-scale binary permeability field realizations is also stochastic. Two different methods are hypothesized to perform posterior predictive tests. Different mechanisms for combining multiGaussian random fields with kernels defined from the MCMC sampling are examined. Posterior predictive accuracy of the estimated parameters is assessed against a simulated ground truth for predictions at both the coarse scale (effective permeabilities) and at the fine scale (advective travel time distributions). The two techniques for conducting posterior predictive tests are compared by their ability to recover the static and dynamic data. 
The skill of the inference and the method for generating fine-scale binary permeability fields are evaluated through flow calculations on the resulting fields using fine-scale realizations and comparing them against results obtained with the ground truth fine-scale and coarse-scale permeability fields.
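The kernel-convolution step described above lends itself to a compact sketch. The following is an illustrative stand-in for the sub-grid algorithm, not the authors' code: the grid size, inclusion length, aspect ratio, and high-permeability proportion are arbitrary example values, and the threshold is chosen so the binary field preserves the target proportion.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def binary_permeability_field(shape, length, aspect, proportion, seed=None):
    """Convolve an uncorrelated Gaussian field with an anisotropic Gaussian
    kernel, then threshold so the high-permeability phase has the target
    proportion. Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(shape)              # uncorrelated multiGaussian field
    # Anisotropic kernel widths set by inclusion length and aspect ratio.
    smooth = gaussian_filter(white, sigma=(length, length * aspect))
    cutoff = np.quantile(smooth, 1.0 - proportion)  # threshold preserving proportion
    return smooth > cutoff                          # True = high-permeability inclusion

field = binary_permeability_field((300, 200), length=6.0, aspect=0.5,
                                  proportion=0.2, seed=1)
print(field.mean())  # close to 0.2 by construction
```

Thresholding a smoothed Gaussian field is a standard way to obtain spatially correlated binary inclusions; the smoothing lengths play the role of the estimated inclusion geometry.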


Unstructured discontinuous Galerkin for seismic inversion

Collis, Samuel S.; Ober, Curtis C.; van Bloemen Waanders, Bart G.

This abstract explores the potential advantages of discontinuous Galerkin (DG) methods for time-domain inversion of media parameters within the earth's interior. In particular, DG methods enable local polynomial refinement to better capture localized geological features within an area of interest, while also allowing the use of unstructured meshes that can accurately capture discontinuous material interfaces. We describe our initial findings using DG methods combined with Runge-Kutta time integration and adjoint-based optimization algorithms for full-waveform inversion. Our initial results suggest that DG methods allow great flexibility in matching media characteristics (faults, ocean bottom, and salt structures) while also providing higher-fidelity representations in target regions. These approaches provide the ability to surgically refine representations in order to improve predicted models for specific geological features. Our future work will entail automated extensions to directly incorporate local refinement and adaptive unstructured meshes within the inversion process.


Analysis of real-time reservoir monitoring: reservoirs, strategies, & modeling

Cooper, Scott P.; Elbring, Gregory J.; Jakaboski, Blake E.; Lorenz, John C.; Mani, Seethambal S.; Normann, Randy A.; Rightley, Michael J.; van Bloemen Waanders, Bart G.; Weiss, Chester J.

The project objective was to detail better ways to assess and exploit intelligent oil and gas field information through improved modeling, sensor technology, and process control to increase ultimate recovery of domestic hydrocarbons. To meet this objective, we investigated the use of permanent downhole sensor systems (Smart Wells) whose data are fed in real time into computational reservoir models integrated with optimized production control systems. The project utilized a three-pronged approach: (1) a value-of-information analysis to address the economic advantages, (2) reservoir simulation modeling and control optimization to prove the capability, and (3) evaluation of new-generation sensor packaging to survive the borehole environment for long periods of time. The value of information (VOI) decision tree method was developed and used to assess the economic advantage of the proposed technology; the VOI analysis quantified the value of the increased subsurface resolution provided by additional sensor data. Our findings show that VOI studies are a practical means of ascertaining the value associated with a technology, in this case the application of sensors to production. The procedure acknowledges the uncertainty in predictions but nevertheless assigns monetary value to them. The best aspect of the procedure is that it builds consensus within interdisciplinary teams. The reservoir simulation and modeling aspect of the project was developed to show the capability of exploiting sensor information both for reservoir characterization and to optimize control of the production system. Our findings indicate that history matching improves as more information is added to the objective function, clearly indicating that sensor information can help reduce the uncertainty associated with reservoir characterization. Additional findings and approaches are described in detail within the report. 
The next-generation sensors aspect of the project evaluated sensor and packaging survivability issues. Our findings indicate that packaging represents the most significant technical challenge associated with deploying sensors in the downhole environment for long periods of time (5+ years). These issues are described in detail within the report. The impact of successful reservoir monitoring programs and the coincident improved reservoir management is measured by the production of additional oil and gas volumes from existing reservoirs, revitalization of nearly depleted reservoirs, possible re-establishment of already-abandoned reservoirs, and improved economics in all cases. Smart Well monitoring provides the means to understand how a reservoir process is developing and to provide active reservoir management; at the same time, it also provides data for developing high-fidelity simulation models. This work has been a joint effort between Sandia National Laboratories and UT-Austin's Bureau of Economic Geology, Department of Petroleum and Geosystems Engineering, and Institute of Computational and Engineering Mathematics.
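The value-of-information logic can be made concrete with a toy decision-tree calculation. All probabilities, payoffs, and the sensor cost below are hypothetical numbers chosen for illustration; they are not taken from the report.

```python
def expected_value(outcomes):
    """Expected monetary value of a chance node: sum of probability * payoff."""
    return sum(p * v for p, v in outcomes)

# Hypothetical two-outcome reservoir ($M payoffs): operate on the prior alone...
ev_without = expected_value([(0.5, 100.0), (0.5, 20.0)])
# ...or pay 5 for sensors whose data let operations adapt to each outcome.
ev_with = expected_value([(0.5, 110.0), (0.5, 35.0)]) - 5.0
value_of_information = ev_with - ev_without
print(value_of_information)  # 7.5: the monetary value assigned to the extra data
```

The same arithmetic, applied over a full decision tree of drilling and control choices, is what lets a VOI study attach a dollar figure to sensor deployment despite uncertain predictions.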


Algorithm and simulation development in support of response strategies for contamination events in air and water systems

van Bloemen Waanders, Bart G.

Chemical/Biological/Radiological (CBR) contamination events pose a considerable threat to our nation's infrastructure, especially in large internal facilities, external flows, and water distribution systems. Because physical security can only be enforced to a limited degree, deployment of early warning systems is being considered. However, to achieve reliable and efficient functionality, several complex questions must be answered: (1) where should sensors be placed, (2) how can sparse sensor information be used efficiently to determine the location of the original intrusion, (3) what are the model and data uncertainties, (4) how should these uncertainties be handled, and (5) how can our algorithms and forward simulations be improved enough to achieve real-time performance? This report presents the results of a three-year algorithm and application development effort to support the identification, mitigation, and risk assessment of CBR contamination events. The main thrust of this investigation was to develop (1) computationally efficient algorithms for strategically placing sensors, (2) methods for identifying contamination events from sparse observations, (3) characterization of uncertainty through accurate demand forecasts and through investigation of uncertain simulation-model parameters, (4) risk assessment capabilities, and (5) reduced-order modeling methods. The development effort focused on water distribution systems, large internal facilities, and outdoor areas.
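One standard way to pose the sensor-placement question in (1) is as a coverage problem: each candidate node detects some set of contamination scenarios, and a fixed budget of sensors should cover as many scenarios as possible. The following greedy sketch on a hypothetical four-node network is a common baseline for such problems, not a reproduction of the report's algorithms.

```python
def greedy_sensor_placement(coverage, budget):
    """Repeatedly pick the candidate node that detects the most
    not-yet-covered scenarios (greedy maximum coverage)."""
    chosen, covered = [], set()
    for _ in range(budget):
        best = max(coverage, key=lambda n: len(coverage[n] - covered))
        if not coverage[best] - covered:
            break                      # no remaining node adds coverage
        chosen.append(best)
        covered |= coverage[best]
    return chosen, covered

# Hypothetical toy data: scenarios each candidate node would detect.
cov = {"A": {1, 2}, "B": {2, 3, 4}, "C": {4, 5}, "D": {1, 5}}
print(greedy_sensor_placement(cov, budget=2))  # picks B, then D
```

Greedy maximum coverage carries a (1 - 1/e) approximation guarantee, one reason it is computationally attractive for placement problems of this kind.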


Physical Modeling of Scaled Water Distribution System Networks

O'Hern, Timothy J.; Hammond, Glenn E.; Orear, Leslie O.; van Bloemen Waanders, Bart G.

Threats to water distribution systems include release of contaminants and denial-of-service (DoS) attacks. A better understanding of the flow in water distribution systems, together with validated computational models, would enable determination of sensor placement in real water distribution networks, allow source identification, and guide mitigation/minimization efforts. Validation data are needed to evaluate numerical models of network operations. Some data can be acquired in real-world tests, but these are limited by (1) unknown demand, (2) lack of repeatability, (3) too many sources of uncertainty (demand, friction factors, etc.), and (4) expense. In addition, real-world tests offer limited numbers of network access points. A scale-model water distribution system was therefore fabricated, and validation data were acquired over a range of flow (demand) conditions. Standard operating variables included system layout, demand at various nodes in the system, and pressure drop across various pipe sections. In addition, the location of contaminant (salt or dye) introduction was varied. Measurements of pressure, flow rate, and concentration at a large number of points were completed, along with overall visualization of dye transport through the flow network. Scale-up issues that were incorporated in the experiment design include Reynolds number, pressure drop across nodes, and pipe friction and roughness. The scale was chosen to be 20:1, so the 10 inch main was modeled with a 0.5 inch pipe in the physical model. Controlled tracer tests were run to provide validation data for flow and transport models, especially of the degree of mixing at pipe junctions. Results of the pipe mixing experiments showed large deviations from predicted behavior, and these have a large impact on standard network operations models.
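As a back-of-the-envelope view of the Reynolds-number scale-up issue mentioned above: for the same working fluid, Re = VD/ν is preserved only if velocity rises in proportion to the geometric scale factor, which is part of why a 20:1 design must also weigh pressure drop and pipe roughness rather than enforce exact dynamic similarity. A minimal sketch, with illustrative numbers only:

```python
def matched_model_velocity(v_full, scale_factor):
    """Velocity needed in the scale model to hold Re = V * D / nu fixed
    when the same fluid is used and the diameter shrinks by scale_factor."""
    return v_full * scale_factor

# 20:1 scale: the 10 in. main becomes a 0.5 in. pipe, so holding Re
# constant would require 20x the full-scale velocity.
print(matched_model_velocity(1.0, 20))  # 20.0
```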


Nonlinear programming strategies for source detection of municipal water networks

van Bloemen Waanders, Bart G.; Bartlett, Roscoe B.

Increasing concerns for the security of the national infrastructure have led to a growing need for improved management and control of municipal water networks. To deal with this issue, optimization offers a general and extremely effective method to identify (possibly harmful) disturbances, assess the current state of the network, and determine operating decisions that meet network requirements and lead to optimal performance. This paper details an optimization strategy for the identification of source disturbances in the network. Here we consider the source inversion problem modeled as a nonlinear programming problem. Dynamic behavior of municipal water networks is simulated using EPANET, which provides a widely accepted, general-purpose interface. For the source inversion problem, flows and concentrations of the network are reconciled and unknown sources are determined at network nodes. Moreover, intrusive optimization and sensitivity analysis techniques are identified to assess the influence of various parameters and models in the network in a computationally efficient manner. A number of numerical comparisons are made to demonstrate the effectiveness of various optimization approaches.


Large Scale Non-Linear Programming for PDE Constrained Optimization

van Bloemen Waanders, Bart G.; Bartlett, Roscoe B.; Long, Kevin R.; Boggs, Paul T.; Salinger, Andrew G.

Three years of large-scale PDE-constrained optimization research and development are summarized in this report. We have developed an optimization framework for three levels of simultaneous analysis and design (SAND) optimization and a powerful PDE prototyping tool. The optimization algorithms have been interfaced and tested on chemical vapor deposition (CVD) problems using a chemically reacting fluid flow simulator, resulting in an order-of-magnitude reduction in compute time over a black-box method. Sandia's simulation environment is reviewed by characterizing each discipline and identifying a possible target level of optimization. Because SAND algorithms are difficult to test on actual production codes, a symbolic simulator (Sundance) was developed and interfaced with a reduced-space sequential quadratic programming framework (rSQP++) to provide a PDE prototyping environment. The power of Sundance/rSQP++ is demonstrated by applying optimization to a series of different PDE-based problems. In addition, we show the merits of SAND methods by comparing seven levels of optimization for a source-inversion problem using Sundance and rSQP++. Algorithmic results are discussed for hierarchical control methods. The design of an interior-point quadratic programming solver is presented.


DAKOTA, A Multilevel Parallel Object-Oriented Framework for Design Optimization, Parameter Estimation, Uncertainty Quantification, and Sensitivity Analysis Version 3.0 Developers Manual (title change from electronic posting)

Eldred, Michael S.; Giunta, Anthony A.; van Bloemen Waanders, Bart G.; Wojtkiewicz, Steven F.; Hart, William E.

The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, analytic reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a developers manual for the DAKOTA software and describes the DAKOTA class hierarchies and their interrelationships. It derives directly from annotation of the actual source code and provides detailed class documentation, including all member functions and attributes.


DAKOTA, A Multilevel Parallel Object-Oriented Framework for Design Optimization, Parameter Estimation, Uncertainty Quantification, and Sensitivity Analysis Version 3.0

Eldred, Michael S.; Giunta, Anthony A.; van Bloemen Waanders, Bart G.; Wojtkiewicz, Steven F.; Hart, William E.

The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, analytic reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the DAKOTA software and provides capability overviews and procedures for software execution, as well as a variety of example studies.


DAKOTA, A Multilevel Parallel Object-Oriented Framework for Design Optimization, Parameter Estimation, Uncertainty Quantification, and Sensitivity Analysis Version 3.0 Reference Manual

Eldred, Michael S.; Giunta, Anthony A.; van Bloemen Waanders, Bart G.; Wojtkiewicz, Steven F.; Hart, William E.

The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, analytic reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a reference manual for the commands specification for the DAKOTA software, providing input overviews, option descriptions, and example specifications.
