Publications

58 Results

Alpert multi-wavelets for functional inverse problems: direct optimization and deep learning

International Journal for Computational Methods in Engineering Science and Mechanics

Salloum, Maher S.; Bon, Bradley L.

Computational engineering models often contain unknown entities (e.g. parameters, initial and boundary conditions) that must be estimated from other measured observable data. Estimating such unknown entities is challenging when they involve spatio-temporal fields, because such functional variables often require an infinite-dimensional representation. We address this problem by transforming an unknown functional field using Alpert wavelet bases and truncating the resulting spectrum. The problem thus reduces to the estimation of a few coefficients, which can be performed using common optimization methods. We apply this method to a one-dimensional heat transfer problem in which we estimate a heat source field varying in both time and space. The observable data comprise temperatures measured at several thermocouples in the domain, which is made of either copper or stainless steel. Our wavelet-based optimization estimates the heat source with an error between 5% and 7%. We analyze the effect of the domain material and the number of thermocouples, as well as the sensitivity to the initial guess of the heat source. Finally, we estimate the unknown heat source using a different approach based on deep learning, in which the input and output of a multi-layer perceptron are expressed in wavelet form. We find that this deep learning approach is more accurate than the optimization approach, with errors below 4%.
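The core reduction in this paper — representing an unknown functional field by a few wavelet coefficients — can be sketched as follows. This is a minimal illustration using an orthonormal Haar transform as a simple stand-in for the Alpert multi-wavelet bases; the sample field and the number of retained coefficients are hypothetical.

```python
import numpy as np

def haar_forward(x):
    # Orthonormal Haar wavelet transform (signal length must be a power of 2).
    c = np.asarray(x, dtype=float).copy()
    m = c.size
    while m > 1:
        evens, odds = c[0:m:2].copy(), c[1:m:2].copy()
        c[:m // 2] = (evens + odds) / np.sqrt(2.0)   # scaling coefficients
        c[m // 2:m] = (evens - odds) / np.sqrt(2.0)  # detail coefficients
        m //= 2
    return c

def haar_inverse(c):
    x = np.asarray(c, dtype=float).copy()
    m = 1
    while m < x.size:
        a, d = x[:m].copy(), x[m:2 * m].copy()
        x[0:2 * m:2] = (a + d) / np.sqrt(2.0)
        x[1:2 * m:2] = (a - d) / np.sqrt(2.0)
        m *= 2
    return x

# A hypothetical smooth source field sampled on 64 points.
t = np.linspace(0.0, 1.0, 64, endpoint=False)
field = np.sin(2.0 * np.pi * t) * np.exp(-2.0 * t)

coeffs = haar_forward(field)
truncated = np.zeros_like(coeffs)
keep = np.argsort(np.abs(coeffs))[-16:]   # truncate: keep 16 largest coefficients
truncated[keep] = coeffs[keep]

# An optimizer (or network) would now operate on the few retained coefficients only.
rel_err = np.linalg.norm(haar_inverse(truncated) - field) / np.linalg.norm(field)
```

The truncation replaces a 64-dimensional field with 16 degrees of freedom at a small reconstruction error, which is what makes ordinary optimization over the coefficients tractable.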


Comparing field data using Alpert multi-wavelets

Computational Mechanics

Salloum, Maher S.; Karlson, Kyle N.; Jin, Helena; Brown, Judith A.; Bolintineanu, Dan S.; Long, Kevin N.

In this paper we introduce a method to compare sets of full-field data using Alpert tree-wavelet transforms. The Alpert tree-wavelet methods transform the data into a spectral space allowing the comparison of all points in the fields by comparing spectral amplitudes. The methods are insensitive to translation, scale and discretization and can be applied to arbitrary geometries. This makes them especially well suited for comparison of field data sets coming from two different sources such as when comparing simulation field data to experimental field data. We have developed both global and local error metrics to quantify the error between two fields. We verify the methods on two-dimensional and three-dimensional discretizations of analytical functions. We then deploy the methods to compare full-field strain data from a simulation of elastomeric syntactic foam.
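The idea of comparing fields through spectral amplitudes rather than pointwise values can be illustrated with a simpler spectral transform. The sketch below uses Fourier amplitudes as a stand-in for the Alpert tree-wavelet spectrum (the fields and the shift are hypothetical); it shows why an amplitude-based metric is insensitive to translation while a pointwise metric is not.

```python
import numpy as np

def amplitude_spectrum(f):
    # Magnitudes of spectral coefficients: a translation-insensitive summary.
    return np.abs(np.fft.rfft(f)) / f.size

def global_error(f_a, f_b):
    # Global error metric: relative distance between amplitude spectra.
    a, b = amplitude_spectrum(f_a), amplitude_spectrum(f_b)
    return np.linalg.norm(a - b) / np.linalg.norm(b)

x = np.linspace(0.0, 1.0, 128, endpoint=False)
field = np.sin(2.0 * np.pi * x) + 0.3 * np.sin(6.0 * np.pi * x)
shifted = np.roll(field, 11)   # the same field, merely translated

pointwise = np.linalg.norm(shifted - field) / np.linalg.norm(field)
spectral = global_error(shifted, field)
```

The pointwise comparison reports a large discrepancy for the translated copy, while the spectral-amplitude metric correctly reports (near) zero — the property that makes such comparisons robust when simulation and experimental fields are not co-registered.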


Physics-Based Checksums for Silent-Error Detection in PDE Solvers

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Salloum, Maher S.; Mayo, Jackson M.; Armstrong, Robert C.

We discuss techniques for efficient local detection of silent data corruption in parallel scientific computations, leveraging physical quantities such as momentum and energy that may be conserved by discretized PDEs. The conserved quantities are analogous to “algorithm-based fault tolerance” checksums for linear algebra but, due to their physical foundation, are applicable to both linear and nonlinear equations and have efficient local updates based on fluxes between subdomains. These physics-based checksums enable precise intermittent detection of errors and recovery by rollback to a checkpoint, with very low overhead when errors are rare. We present applications to both explicit hyperbolic and iterative elliptic (unstructured finite-element) solvers with injected memory bit flips.
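A minimal sketch of the idea, assuming a 1D upwind advection solver with periodic boundaries (so total mass is exactly conserved) and an injected perturbation standing in for a bit flip; the grid size, step count, and tolerance are illustrative, not the paper's.

```python
import numpy as np

def step(u, cfl=0.5):
    # First-order upwind advection with periodic BCs; total mass is conserved
    # because the inter-cell fluxes cancel in the global sum.
    return u - cfl * (u - np.roll(u, 1))

rng = np.random.default_rng(0)
u = rng.random(256)
checksum = u.sum()            # physics-based checksum: conserved total mass

detected = None
for n in range(50):
    u = step(u)
    if n == 20:
        u[37] += 1e-3         # injected silent data corruption (simulated bit flip)
    if abs(u.sum() - checksum) > 1e-8 * abs(checksum):
        detected = n          # checksum mismatch: roll back to a checkpoint here
        break
```

Because the checksum is a conserved physical quantity, it needs no redundant recomputation; the check is a cheap reduction, and in a parallel setting it can be updated locally from subdomain boundary fluxes.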


Adaptive wavelet compression of large additive manufacturing experimental and simulation datasets

Computational Mechanics

Salloum, Maher S.; Johnson, Kyle J.; Bishop, Joseph E.; Aytac, Jon M.; Dagel, Daryl D.; van Bloemen Waanders, Bart G.

New manufacturing technologies such as additive manufacturing require research and development to minimize the uncertainties in the produced parts. The research involves experimental measurements and large simulations, which produce huge quantities of data to store and analyze. We address this challenge by alleviating the data storage requirements using lossy data compression. We select wavelet bases as the mathematical tool for compression. Unlike images, additive manufacturing data are often represented on irregular geometries and unstructured meshes, so we use Alpert tree-wavelets as bases for our data compression method. We first analyze different basis functions for the wavelets and find the one that yields maximal compression and minimal error in the reconstructed data. We then devise a new adaptive thresholding method that is data-agnostic and allows a priori estimation of the reconstruction error. Finally, we propose metrics to quantify the global and local errors in the reconstructed data. One of the error metrics addresses the preservation of physical constraints in reconstructed data fields, such as a divergence-free stress field in structural simulations. While our compression and decompression method is general, we apply it to both experimental and computational data obtained from measurements and thermal/structural modeling of the sintering of a hollow cylinder from metal powders using a Laser Engineered Net Shape process. The results show that monomials achieve optimal compression performance when used as wavelet bases. The new thresholding method yields compression ratios two to seven times larger than those obtained with commonly used thresholds. Overall, adaptive Alpert tree-wavelets can achieve compression ratios between one and three orders of magnitude, depending on which features in the data must be preserved. These results show that Alpert tree-wavelet compression is a viable and promising technique for reducing the size of large data structures found in both experiments and simulations.
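The thresholding idea — discard the coefficients whose combined energy stays below a prescribed fraction of the signal norm, so the L2 reconstruction error is known a priori — can be sketched with any orthonormal transform. The example below uses a unitary FFT as a stand-in for Alpert tree-wavelets; the test signal and tolerance are hypothetical.

```python
import numpy as np

def compress(x, rel_tol):
    # Orthonormal spectral transform (unitary FFT standing in for Alpert
    # tree-wavelets); Parseval's identity makes the error budget exact.
    c = np.fft.fft(x, norm="ortho")
    order = np.argsort(np.abs(c))                 # smallest magnitudes first
    energy = np.cumsum(np.abs(c[order]) ** 2)
    budget = (rel_tol * np.linalg.norm(x)) ** 2
    drop = order[energy <= budget]                # drop the prefix whose total
    c_thr = c.copy()                              # energy fits the budget
    c_thr[drop] = 0.0
    predicted = np.sqrt(energy[len(drop) - 1]) if len(drop) else 0.0
    return c_thr, predicted                       # a priori L2 error estimate

x = np.cos(2.0 * np.pi * np.linspace(0.0, 1.0, 256, endpoint=False)) ** 3
c_thr, predicted = compress(x, rel_tol=1e-6)
x_rec = np.fft.ifft(c_thr, norm="ortho").real
actual = np.linalg.norm(x_rec - x)                # matches the prediction
ratio = x.size / np.count_nonzero(c_thr)          # compression ratio
```

Because the transform is orthonormal, the predicted error equals the norm of the discarded coefficients exactly, so the reconstruction error is known before decompression and regardless of the data.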


A numerical model of exchange chromatography through 3-D lattice structures

AIChE Journal

Salloum, Maher S.; Robinson, David R.

Rapid progress in the development of additive manufacturing technologies is opening new opportunities to fabricate structures that control mass transport in three dimensions across a broad range of length scales. We describe a structure that can be fabricated by newly available commercial 3-D printers. It contains an array of regular three-dimensional flow paths that are in intimate contact with a solid phase, and thoroughly shuffle material among the paths. We implement a chemically reacting flow model to study its behavior as an exchange chromatography column, and compare it to an array of 1-D flow paths that resemble more traditional honeycomb monoliths. A reaction front moves through the columns and then elutes. The front is sharper at all flow rates for the structure with three-dimensional flow paths, and this structure is more robust to channel width defects than the 1-D array. © 2018 American Institute of Chemical Engineers AIChE J, 64: 1874–1884, 2018.


Optimal Compressed Sensing and Reconstruction of Unstructured Mesh Datasets

Data Science and Engineering

Salloum, Maher S.; Fabian, Nathan D.; Hensinger, David M.; Lee, Jina L.; Allendorf, Elizabeth M.; Bhagatwala, Ankit; Blaylock, Myra L.; Chen, Jacqueline H.; Templeton, Jeremy A.; Tezaur, Irina

Exascale computing promises quantities of data too large to store efficiently or transfer across networks for analysis and visualization. We investigate compressed sensing (CS) as an in situ method to reduce the size of the data as they are generated during a large-scale simulation. CS works by sampling the data on the computational cluster within an alternative function space, such as wavelet bases, and then reconstructing back to the original space on visualization platforms. While much work has gone into exploring CS on structured datasets such as image data, we investigate its usefulness for point clouds such as the unstructured mesh datasets often found in finite element simulations. We sample using a technique that exhibits low coherence with tree wavelets found to be suitable for point clouds. We reconstruct using the stagewise orthogonal matching pursuit algorithm, which we improved to facilitate automated use in batch jobs. We analyze the achievable compression ratios and the quality and accuracy of the reconstructed results at each compression ratio. In the considered case studies, we achieve compression ratios of up to two orders of magnitude with reasonable reconstruction accuracy and minimal visual deterioration in the data. Our results suggest that, compared to other compression techniques, CS is attractive in cases where the compression overhead must be minimized and the reconstruction cost is not a significant concern.
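A minimal sketch of the reconstruction step, using plain orthogonal matching pursuit on a synthetic sparse vector rather than the stagewise (StOMP) variant and tree-wavelet bases used in the paper; the problem sizes and sampling matrix are illustrative.

```python
import numpy as np

def omp(A, y, sparsity):
    # Orthogonal matching pursuit: greedily add the column most correlated
    # with the residual, then re-solve least squares on the selected support.
    residual, support, coef = y.copy(), [], None
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(1)
n, m, k = 128, 80, 4                           # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # incoherent sampling matrix
y = A @ x_true                                 # compressed in situ samples
x_rec = omp(A, y, k)
err = np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)
```

The simulation side only computes and ships the short vector y; the expensive greedy reconstruction runs later on the visualization platform, which is the cost asymmetry the abstract highlights.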


In-situ mitigation of silent data corruption in PDE solvers

FTXS 2016 - Proceedings of the ACM Workshop on Fault-Tolerance for HPC at Extreme Scale

Salloum, Maher S.; Mayo, Jackson M.; Armstrong, Robert C.

We present algorithmic techniques for parallel PDE solvers that leverage numerical smoothness properties of physics simulation to detect and correct silent data corruption within local computations. We initially model such silent hardware errors (which are of concern for extreme scale) via injected DRAM bit flips. Our mitigation approach generalizes previously developed "robust stencils" and uses modified linear algebra operations that spatially interpolate to replace large outlier values. Prototype implementations for 1D hyperbolic and 3D elliptic solvers, tested on up to 2048 cores, show that this error mitigation enables tolerating orders of magnitude higher bit-flip rates. The runtime overhead of the approach generally decreases with greater solver scale and complexity, becoming no more than a few percent in some cases. A key advantage is that silent data corruption can be handled transparently with data in cache, reducing the cost of false-positive detections compared to rollback approaches.
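A simplified sketch of the outlier-replacement idea: a value far from the median of its neighborhood violates the smoothness the numerics guarantee, so it is treated as silent corruption and replaced by spatial interpolation of its neighbors. The 1D state, injected outlier, and tolerance below are hypothetical.

```python
import numpy as np

def repair_outliers(u, tol):
    # Detect a point far from its 3-point neighborhood median and replace it
    # with the average of its neighbors (a simple spatial interpolation).
    v, flagged = u.copy(), []
    for i in range(1, len(u) - 1):
        m = np.median(u[i - 1:i + 2])
        if abs(u[i] - m) > tol:                  # non-smooth: suspect corruption
            v[i] = 0.5 * (u[i - 1] + u[i + 1])
            flagged.append(i)
    return v, flagged

x = np.linspace(0.0, 1.0, 101)
u = np.sin(np.pi * x)      # smooth solver state
u[40] += 7.0               # injected bit-flip-style outlier
fixed, flagged = repair_outliers(u, tol=0.5)
```

Because the repair happens in place with data in cache, a false positive costs only one interpolated value rather than a rollback, which is the advantage the abstract emphasizes over checkpoint-based recovery.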


Final Report: Sublinear Algorithms for In-situ and In-transit Data Analysis at Exascale

Bennett, Janine C.; Pinar, Ali P.; Seshadhri, C.S.; Thompson, David T.; Salloum, Maher S.; Bhagatwala, Ankit B.; Chen, Jacqueline H.

Post-Moore's-law scaling is creating a disruptive shift in simulation workflows, as saving the entirety of raw data to persistent storage becomes expensive. We are moving away from a post-process-centric data analysis paradigm towards a concurrent analysis framework, in which raw simulation data are processed as they are computed. Algorithms must adapt to machines with extreme concurrency, low communication bandwidth, and high memory latency, while operating within the time constraints prescribed by the simulation. Furthermore, input parameters are often data dependent and cannot always be prescribed. The study of sublinear algorithms is a recent development in theoretical computer science and discrete mathematics with significant potential to provide solutions for these challenges. Sublinear algorithms address the fundamental mathematical problem of understanding global features of a data set using limited resources. These theoretical ideas align with the practical challenges of in-situ and in-transit computation, where vast amounts of data must be processed under severe communication and memory constraints. This report details key advancements made over the course of a three-year LDRD in applying sublinear algorithms in situ to identify features of interest and to enable adaptive workflows. Prior to this LDRD, there was no precedent for applying sublinear techniques to large-scale, physics-based simulations. This project has definitively demonstrated their efficacy at mitigating high performance computing challenges and highlighted the rich potential for follow-on research opportunities in this space.


Empirical and physics based mathematical models of uranium hydride decomposition kinetics with quantified uncertainties

Salloum, Maher S.; Gharagozloo, Patricia E.

Metal particle beds have recently become a major technique for hydrogen storage. In order to extract hydrogen from such beds, it is crucial to understand the decomposition kinetics of the metal hydride. We are interested in obtaining a better understanding of the uranium hydride (UH3) decomposition kinetics. We first developed an empirical model by fitting data compiled from different experimental studies in the literature and quantified the uncertainty resulting from the scattered data. We found that the decomposition time range predicted by the obtained kinetics was in good agreement with published experimental results. Second, we developed a physics-based mathematical model to simulate the rate of hydrogen diffusion in a hydride particle during decomposition. We used this model to simulate the decomposition of the particles for temperatures ranging from 300 K to 1000 K while propagating parametric uncertainty, and evaluated the kinetics from the results. We compared the kinetics parameters derived from the empirical and physics-based models and found that the uncertainty in the kinetics predicted by the physics-based model covers the scattered experimental data. Finally, we used the physics-based kinetics parameters to simulate the effects of boundary resistances and powder morphological changes during decomposition in a continuum-level model. We found that the species change within the bed during decomposition accelerates the hydrogen flow by increasing the bed permeability, while the pressure buildup and the thermal barrier forming at the wall significantly impede hydrogen extraction.
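The empirical-fit step can be sketched as an Arrhenius regression over scattered rate data. All numbers below (temperatures, rate parameters, noise level) are hypothetical illustrations, not the paper's UH3 values.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

# Hypothetical scattered rate measurements k(T); true parameters are illustrative.
rng = np.random.default_rng(2)
T = np.linspace(450.0, 700.0, 12)                 # temperatures, K
A_true, E_true = 1.0e7, 9.0e4                     # pre-exponential (1/s), activation energy (J/mol)
k_obs = A_true * np.exp(-E_true / (R * T)) * np.exp(0.05 * rng.standard_normal(T.size))

# Linearize ln k = ln A - (E/R)(1/T) and fit by ordinary least squares; the
# residual scatter is what an uncertainty quantification would propagate.
X = np.column_stack([np.ones_like(T), 1.0 / T])
beta, *_ = np.linalg.lstsq(X, np.log(k_obs), rcond=None)
A_fit, E_fit = np.exp(beta[0]), -beta[1] * R
```

The spread of the residuals around the fitted line is the raw material for the uncertainty quantification: propagating it through the kinetics yields the predicted decomposition-time range the abstract compares against experiments.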


A coupled transport and solid mechanics formulation with improved reaction kinetics parameters for modeling oxidation and decomposition in a uranium hydride bed

Salloum, Maher S.; Shugard, Andrew D.; Gharagozloo, Patricia E.

Modeling of reacting flows in porous media has become particularly important with the increased interest in hydrogen solid-storage beds. An advanced type of storage bed has been proposed that utilizes oxidation of uranium hydride to heat and decompose the hydride, releasing the hydrogen. To reduce the cost and time required to develop these systems experimentally, a valid computational model is required that simulates the reaction of uranium hydride and oxygen gas in a hydrogen storage bed using multiphysics finite element modeling. This SAND report discusses the advancements made in FY12 (since our last SAND report SAND2011-6939) to the model developed as a part of an ASC-P&EM project to address the shortcomings of the previous model. The model considers chemical reactions, heat transport, and mass transport within a hydride bed. Previously, the time-varying permeability and porosity were considered uniform. This led to discrepancies between the simulated results and experimental measurements. In this work, the effects of non-uniform changes in permeability and porosity due to phase and thermal expansion are accounted for. These expansions result in mechanical stresses that lead to bed deformation. To describe this, a simplified solid mechanics model for the local variation of permeability and porosity as a function of the local bed deformation is developed. By using this solid mechanics model, the agreement between our reacting bed model and the experimental data is improved. Additionally, more accurate uranium hydride oxidation kinetics parameters are obtained by fitting the experimental results from a pure uranium hydride oxidation measurement to the ones obtained from the coupled transport-solid mechanics model. Finally, the coupled transport-solid mechanics model governing equations and boundary conditions are summarized and recommendations are made for further development of ARIA and other Sandia codes in order for them to sufficiently implement the model.
