Computational engineering models often contain unknown entities (e.g., parameters, initial and boundary conditions) that must be estimated from other measured observable data. Estimating such unknowns is challenging when they are spatio-temporal fields, because such functional variables generally require an infinite-dimensional representation. We address this problem by transforming the unknown field using Alpert wavelet bases and truncating the resulting spectrum. The problem thus reduces to estimating a few coefficients, which can be done with common optimization methods. We apply this method to a one-dimensional heat transfer problem in which we estimate a heat source field varying in both time and space. The observable data comprise temperatures measured at several thermocouples in the domain, which is made of either copper or stainless steel. The wavelet-based optimization estimates the heat source with an error between 5% and 7%. We analyze the effects of the domain material and the number of thermocouples, as well as the sensitivity to the initial guess of the heat source. Finally, we estimate the unknown heat source using a different approach based on deep learning, in which the input and output of a multi-layer perceptron are expressed in wavelet form. We find that this deep learning approach is more accurate than the optimization approach, with errors below 4%.
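To make the coefficient-estimation idea concrete, the sketch below fits a small number of wavelet coefficients of an unknown source field to synthetic sensor readings. It is a minimal illustration, assuming a Haar basis (via PyWavelets) as a stand-in for the Alpert wavelets and a toy smoothing convolution in place of the heat transfer forward solve; all names and sizes are hypothetical.

```python
# Minimal sketch, assuming a Haar basis (PyWavelets) as a stand-in for
# Alpert wavelets and a toy smoothing convolution in place of the heat
# transfer forward solve; names and sizes are illustrative only.
import numpy as np
import pywt
from scipy.optimize import least_squares

n = 64
x = np.linspace(0.0, 1.0, n)
true_source = np.exp(-50.0 * (x - 0.4) ** 2)   # hypothetical source field

def forward(source):
    # Toy forward model: smoothing mimics diffusion from the source
    # field to the "thermocouple" temperatures.
    kernel = np.exp(-np.arange(-8, 9) ** 2 / 8.0)
    return np.convolve(source, kernel / kernel.sum(), mode="same")

obs = forward(true_source) + 1e-3 * np.random.default_rng(0).normal(size=n)

flat0, slices = pywt.coeffs_to_array(pywt.wavedec(np.zeros(n), "haar"))
keep = 12   # truncated spectrum: keep only the coarsest coefficients

def source_from(ck):
    flat = np.zeros_like(flat0)
    flat[:keep] = ck
    return pywt.waverec(
        pywt.array_to_coeffs(flat, slices, output_format="wavedec"), "haar")

def residual(ck):
    return forward(source_from(ck)) - obs

# Estimate the few coefficients with an off-the-shelf optimizer.
fit = least_squares(residual, np.zeros(keep))
est = source_from(fit.x)
print("relative error:",
      np.linalg.norm(est - true_source) / np.linalg.norm(true_source))
```

The key point is that the optimizer searches over only `keep` coefficients rather than the full discretized field, which is what makes the infinite-dimensional estimation problem tractable.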
In this paper we introduce a method to compare sets of full-field data using Alpert tree-wavelet transforms. The transform maps the data into a spectral space, allowing all points in the fields to be compared through their spectral amplitudes. The method is insensitive to translation, scale, and discretization, and can be applied to arbitrary geometries. This makes it especially well suited to comparing field data sets from two different sources, such as simulation field data against experimental field data. We develop both global and local error metrics to quantify the error between two fields. We verify the method on two-dimensional and three-dimensional discretizations of analytical functions, and then deploy it to compare full-field strain data from a simulation of an elastomeric syntactic foam.
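As a rough illustration of comparing fields through spectral amplitudes, the sketch below transforms two one-dimensional fields and evaluates one global and one local error measure. It assumes a Haar transform (PyWavelets) in place of the Alpert tree-wavelet, and the metric definitions shown are illustrative forms, not the paper's exact ones.

```python
# Minimal sketch: compare two fields via their wavelet spectra, using
# Haar (PyWavelets) in place of the Alpert tree-wavelet transform.
import numpy as np
import pywt

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 128)
field_a = np.sin(2 * np.pi * x)                      # "simulation" field
field_b = field_a + 0.05 * rng.normal(size=x.size)   # "experimental" field

# Transform both fields to spectral space.
spec_a, _ = pywt.coeffs_to_array(pywt.wavedec(field_a, "haar"))
spec_b, _ = pywt.coeffs_to_array(pywt.wavedec(field_b, "haar"))

# Global metric: relative l2 distance between spectral amplitudes.
global_err = np.linalg.norm(spec_a - spec_b) / np.linalg.norm(spec_a)

# Local metric: per-coefficient amplitude difference, which maps back to
# regions of the domain through the supports of the wavelets.
local_err = np.abs(spec_a - spec_b)

print(f"global error: {global_err:.3f}, worst local: {local_err.max():.3f}")
```

Because the comparison happens coefficient by coefficient in spectral space, the two fields need not share a common discretization, only a common transform.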
We discuss techniques for efficient local detection of silent data corruption in parallel scientific computations, leveraging physical quantities such as momentum and energy that may be conserved by discretized PDEs. The conserved quantities are analogous to “algorithm-based fault tolerance” checksums for linear algebra but, due to their physical foundation, are applicable to both linear and nonlinear equations and have efficient local updates based on fluxes between subdomains. These physics-based checksums enable precise intermittent detection of errors and recovery by rollback to a checkpoint, with very low overhead when errors are rare. We present applications to both explicit hyperbolic and iterative elliptic (unstructured finite-element) solvers with injected memory bit flips.
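A minimal sketch of the checksum idea follows, assuming a first-order upwind solver for 1-D linear advection: the conserved total is updated locally from boundary fluxes and checked against the field's actual sum after an injected bit flip. The solver, tolerance, and injection point are illustrative choices, not the paper's setup.

```python
# Minimal sketch, assuming a first-order upwind scheme for 1-D linear
# advection; the tolerance and injection point are illustrative choices.
import numpy as np

n, steps, lam = 256, 400, 0.5              # cells, steps, c*dt/dx (CFL)
u = np.exp(-100.0 * (np.linspace(0.0, 1.0, n) - 0.2) ** 2)
inflow = 0.0
checksum = u.sum()                          # physics-based checksum

for step in range(steps):
    flux_out = u[-1]                        # outflow boundary flux
    u[1:] -= lam * (u[1:] - u[:-1])         # upwind update (interior)
    u[0] -= lam * (u[0] - inflow)           # upwind update (inflow cell)
    checksum += lam * (inflow - flux_out)   # local flux-based update

    if step == 150:                         # inject a silent memory bit flip
        u.view(np.uint64)[n // 2] ^= np.uint64(1) << np.uint64(52)

    # Intermittent check: the field's actual total must match the checksum.
    if abs(u.sum() - checksum) > 1e-10 * abs(checksum):
        print(f"corruption detected at step {step}; roll back to checkpoint")
        break
```

The checksum update costs only the boundary-flux terms, which is the local, low-overhead property that makes the technique attractive on subdomains of a parallel computation.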
New manufacturing technologies such as additive manufacturing require research and development to minimize the uncertainties in the produced parts. This research involves experimental measurements and large simulations, which produce huge quantities of data to store and analyze. We address this challenge by reducing the data storage requirements with lossy data compression. We select wavelet bases as the mathematical tool for compression. Unlike images, additive manufacturing data are often represented on irregular geometries and unstructured meshes, so we use Alpert tree-wavelets as the bases for our data compression method. We first analyze different basis functions for the wavelets and identify the one that yields maximal compression and minimal error in the reconstructed data. We then devise a new adaptive thresholding method that is data-agnostic and allows a priori estimation of the reconstruction error. Finally, we propose metrics to quantify the global and local errors in the reconstructed data. One of the error metrics addresses the preservation of physical constraints in reconstructed data fields, such as a divergence-free stress field in structural simulations. While our compression and decompression method is general, we apply it to both experimental and computational data obtained from measurements and thermal/structural modeling of the sintering of a hollow cylinder from metal powders using a Laser Engineered Net Shape process. The results show that monomials achieve optimal compression performance when used as wavelet bases. The new thresholding method yields compression ratios two to seven times larger than those obtained with commonly used thresholds. Overall, adaptive Alpert tree-wavelets achieve compression ratios of one to three orders of magnitude, depending on which features of the data must be preserved. These results show that Alpert tree-wavelet compression is a viable and promising technique for reducing the size of the large data structures found in both experiments and simulations.
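The sketch below illustrates one way such error-controlled thresholding can work, assuming an orthonormal Haar transform (PyWavelets) in place of Alpert tree-wavelets: Parseval's identity turns a budget on discarded coefficient energy into an a priori bound on the relative reconstruction error. The adaptive rule shown is an illustrative form, not the paper's exact method.

```python
# Minimal sketch of error-controlled thresholding, assuming an orthonormal
# Haar transform (PyWavelets) in place of Alpert tree-wavelets; the rule
# below is an illustrative form, not the paper's exact method.
import numpy as np
import pywt

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 1024)
data = np.sin(8 * np.pi * t) + 0.01 * rng.normal(size=t.size)

flat, slices = pywt.coeffs_to_array(pywt.wavedec(data, "haar"))
target = 0.01                                  # target relative l2 error

# Drop the smallest coefficients while the discarded energy stays below
# (target * signal energy)^; orthonormality makes this an a priori bound.
order = np.argsort(np.abs(flat))               # ascending magnitude
cum_energy = np.cumsum(flat[order] ** 2)
budget = (target * np.linalg.norm(flat)) ** 2
n_drop = np.searchsorted(cum_energy, budget)
flat_c = flat.copy()
flat_c[order[:n_drop]] = 0.0

recon = pywt.waverec(
    pywt.array_to_coeffs(flat_c, slices, output_format="wavedec"), "haar")
err = np.linalg.norm(recon - data) / np.linalg.norm(data)
ratio = flat.size / np.count_nonzero(flat_c)
print(f"compression ratio: {ratio:.1f}x, relative error: {err:.4f} <= {target}")
```

The threshold adapts to the coefficient distribution of each data set rather than being a fixed cutoff, which is what makes the error estimate available before reconstruction.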
Exascale computing promises quantities of data too large to store efficiently or to transfer across networks for analysis and visualization. We investigate compressed sensing (CS) as an in situ method to reduce the size of the data as they are generated during a large-scale simulation. CS works by sampling the data on the computational cluster in an alternative function space, such as a wavelet basis, and then reconstructing in the original space on visualization platforms. While much work has explored CS on structured datasets such as image data, we investigate its usefulness for point clouds such as the unstructured-mesh datasets often found in finite element simulations. We sample using a technique that exhibits low coherence with tree wavelets, which have been found suitable for point clouds. We reconstruct using the stagewise orthogonal matching pursuit algorithm, which we improved to facilitate automated use in batch jobs. We analyze the achievable compression ratios and the quality and accuracy of the reconstructed results at each compression ratio. In the case studies considered, we achieve compression ratios of up to two orders of magnitude with reasonable reconstruction accuracy and minimal visual deterioration in the data. Our results suggest that, compared to other compression techniques, CS is attractive when the compression overhead must be minimized and the reconstruction cost is not a significant concern.
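A minimal sketch of the CS pipeline follows: random sampling of a signal that is sparse in a wavelet basis, then greedy reconstruction of the coefficients. scikit-learn's OrthogonalMatchingPursuit stands in here for the paper's stagewise OMP variant, and a Haar basis (PyWavelets) stands in for the tree wavelets.

```python
# Minimal sketch of the CS pipeline; sklearn's OMP stands in for the
# stagewise OMP variant, and Haar stands in for tree wavelets.
import numpy as np
import pywt
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(3)
n, m, k = 256, 64, 10            # signal length, samples, sparsity

# Build a signal with a k-sparse Haar spectrum.
_, slices = pywt.coeffs_to_array(pywt.wavedec(np.zeros(n), "haar"))
coeffs = np.zeros(n)
coeffs[rng.choice(n, k, replace=False)] = rng.normal(size=k)
signal = pywt.waverec(pywt.array_to_coeffs(coeffs, slices, "wavedec"), "haar")

# In situ sampling: y = Phi @ signal, with a Gaussian sampling matrix
# (low coherence with the wavelet basis).
phi = rng.normal(size=(m, n)) / np.sqrt(m)
y = phi @ signal

# Sensing matrix in the sparse domain: Phi applied to each wavelet atom.
basis = np.array([
    pywt.waverec(pywt.array_to_coeffs(e, slices, "wavedec"), "haar")
    for e in np.eye(n)])
A = phi @ basis.T

# Offline reconstruction on the visualization side.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
omp.fit(A, y)
recon = pywt.waverec(
    pywt.array_to_coeffs(omp.coef_, slices, "wavedec"), "haar")
print("relative error:",
      np.linalg.norm(recon - signal) / np.linalg.norm(signal))
```

Note the asymmetry the abstract alludes to: the sampling step is a single cheap matrix-vector product done in situ, while the greedy solve happens later, off the simulation's critical path.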